Close Menu
    DevStackTipsDevStackTips
    • Home
    • News & Updates
      1. Tech & Work
      2. View All

      CodeSOD: A Unique Way to Primary Key

      July 22, 2025

      BrowserStack launches Figma plugin for detecting accessibility issues in design phase

      July 22, 2025

      Parasoft brings agentic AI to service virtualization in latest release

      July 22, 2025

      Node.js vs. Python for Backend: 7 Reasons C-Level Leaders Choose Node.js Talent

      July 21, 2025

      The best CRM software with email marketing in 2025: Expert tested and reviewed

      July 22, 2025

      This multi-port car charger can power 4 gadgets at once – and it’s surprisingly cheap

      July 22, 2025

      I’m a wearables editor and here are the 7 Pixel Watch 4 rumors I’m most curious about

      July 22, 2025

      8 ways I quickly leveled up my Linux skills – and you can too

      July 22, 2025
    • Development
      1. Algorithms & Data Structures
      2. Artificial Intelligence
      3. Back-End Development
      4. Databases
      5. Front-End Development
      6. Libraries & Frameworks
      7. Machine Learning
      8. Security
      9. Software Engineering
      10. Tools & IDEs
      11. Web Design
      12. Web Development
      13. Web Security
      14. Programming Languages
        • PHP
        • JavaScript
      Featured

      The Intersection of Agile and Accessibility – A Series on Designing for Everyone

      July 22, 2025
      Recent

      The Intersection of Agile and Accessibility – A Series on Designing for Everyone

      July 22, 2025

      Zero Trust & Cybersecurity Mesh: Your Org’s Survival Guide

      July 22, 2025

      Execute Ping Commands and Get Back Structured Data in PHP

      July 22, 2025
    • Operating Systems
      1. Windows
      2. Linux
      3. macOS
      Featured

      A Tomb Raider composer has been jailed — His legacy overshadowed by $75k+ in loan fraud

      July 22, 2025
      Recent

      A Tomb Raider composer has been jailed — His legacy overshadowed by $75k+ in loan fraud

      July 22, 2025

      “I don’t think I changed his mind” — NVIDIA CEO comments on H20 AI GPU sales resuming in China following a meeting with President Trump

      July 22, 2025

      Galaxy Z Fold 7 review: Six years later — Samsung finally cracks the foldable code

      July 22, 2025
    • Learning Resources
      • Books
      • Cheatsheets
      • Tutorials & Guides
    Home»Tech & Work»Managing the growing risk profile of agentic AI and MCP in the enterprise

    Managing the growing risk profile of agentic AI and MCP in the enterprise

    June 17, 2025

    Advancements in artificial intelligence continue to give developers an edge in efficiently producing code, but developers and companies can’t forget that it’s an edge that can always cut both ways.

    The latest innovation is the advent of agentic AI, which brings automation and decision-making to complex development tasks. Agentic AI can be coupled with the recently open-sourced Model Context Protocol (MCP), a protocol released by Anthropic, providing an open standard for orchestrating connections between AI assistants and data sources, streamlining the work of development and security teams, which can turbocharge productivity that AI has already accelerated. 

    Anthropic’s competitors have different “MCP-like” protocols making their way into the space, and as it stands, the internet at large has yet to determine a “winner” of this software race. MCP is Anthropic for AI-to-tool connections. A2A is Google, and also facilitates AI-to-AI comms. Cisco and Microsoft will both come out with their own protocol, as well. 

    But, as we’ve seen with generative AI, this new approach to speeding up software production comes with caveats. If not carefully controlled, it can introduce new vulnerabilities and amplify existing ones, such as vulnerability to prompt injection attacks, the generation of insecure code, exposure to unauthorized access and data leakage. The interconnected nature of these tools inevitably expands the attack surface.

    Security leaders need to take a hard look at how these risks affect their business, being sure they understand the potential vulnerabilities that result from using agentic AI and MCP, and take the necessary steps to minimize those risks.

    How Agentic AI Works With MCP

    After generative AI took the world by storm starting in November 2022 with the release of ChatGPT, agentic AI can seem like the next step in AI’s evolution, but they are two different forms of AI.

    GenAI creates content, using advanced machine learning to draw on existing data to create text, images, videos, music and code. 

    Agentic AI is about solving problems and getting things done, using tools such as machine learning, natural language processing and automation technologies to make decisions and take action. Agentic AI can be used, for example, in self-driving cars (responding to circumstances on the road), cybersecurity (initiating a response to a cyberattack) or customer service (proactively offering help to customers). In software development, agentic AI can be used to write large sections of code, optimize code and troubleshoot problems.

    Meanwhile, MCP, developed by Anthropic and introduced in November 2024, accelerates the work of agentic AI and other coding assistants by providing an open, universal standard for connecting large language models (LLMs) with data sources and tools, enabling teams to apply AI capabilities throughout their environment without having to write separate code for each tool. By essentially providing a common language for LLMs such as ChatGPT, Gemini, DALL•E, DeepSeek and many others to communicate, it greatly increases interoperability among LLMs.

    MCP is even touted as a way to improve security, by providing a standard way to integrate AI capabilities and automate security operations across an organization’s toolchain. Although it was treated as a general-purpose tool, MCP can be used by security teams to increase efficiency by centralizing access, adding interoperability with security tools and applications, and giving teams flexible control over which LLMs are used for specific tasks.

    But as with any powerful new tool, organizations should not just blindly jump into this new model of development without taking a careful look at what could go wrong. There is a significant profile of increased security risks associated with agentic AI coding tools within enterprise environments, specifically focusing on MCP. 

    Productivity Is Great, but MCP Also Creates Risks

    Invariant Labs recently discovered a critical vulnerability in MCP that could allow for data exfiltration via indirect prompt injections, a high-risk issue that Invariant has dubbed “tool poisoning” attacks. Such an attack embeds malicious code instructing an AI model to perform unauthorized actions, such as accessing sensitive files and transmitting data without the user being aware. Invariant said many providers and systems like OpenAI, Anthropic, Cursor and Zapier are vulnerable to this type of attack. 

    In addition to tool poisoning, such as indirect prompt injection, MCP can introduce other potential vulnerabilities related to authentication and authorization, including excessive permissions. MCP can also lack robust logging and monitoring, which are essential to maintaining the security and performance of systems and applications. 

    The vulnerability concerns are valid, though they are unlikely to stem the tide moving toward the use of agentic AI and MCP. The benefits in productivity are too great to ignore. After all, concerns about secure code have always revolved around GenAI coding tools, which can introduce flaws into the software ecosystem if the GenAI models were initially trained on buggy software. However, developers have been happy to make use of GenAI assistants anyway. In a recent survey by Stack Overflow, 76% of developers said they were using or planned to use AI tools. That’s an increase from 70% in 2023, despite the fact that during the same time period, those developers’ view of AI tools as favorable or very favorable dropped from 77% to 72%.

    The good news for organizations is that, as with GenAI coding assistants, agentic AI tools and MCP functions can be safely leveraged, as long as security-skilled developers handle them. The key emergent risk factor here is that skilled human oversight is not scaling at anywhere near the rate of agentic AI tool adoption, and this trend must course-correct, pronto.

    Developer Education and Risk Management Is the Key

    Regardless of the technologies and tools in play, the key to security in a highly connected digital environment (which is pretty much every environment these days) is the Software Development Lifecycle (SDLC). Flaws at the code level are a top target of cyberattackers, and eliminating those flaws depends on ensuring that secure coding practices are de rigueur in the SDLC, which are applied from the beginning of the development cycle. 

    With AI assistance, it’s a real possibility that we will finally see the eradication of long-standing vulnerabilities like SQL injection and cross-site scripting (XSS) after decades of them haunting every pentest report. However, most other categories of vulnerabilities will remain, especially those relating to design flaws, and we will inevitably see new groups of AI-borne vulnerabilities as the technology progresses. Navigating these issues depends on developers being security-aware with the skills to ensure, as much as possible, that both the code they create and code generated by AI is secure from the get-go. 

    Organizations need to implement ongoing education and upskilling programs that give developers the skills and tools they need to work with security teams to mitigate flaws in software before they can be released into the ecosystem. A program should make use of benchmarks to establish the baseline skills developers need and measure their progress. It should be framework and language-specific, allowing developers to work in real-world scenarios with the programming language they use on the job. Interactive sessions work best, within a curriculum that is flexible enough to adjust to changes in circumstances.

    And organizations need to confirm that the lessons from upskilling programs have hit home, with developers putting secure best practices to use on a routine basis. A tool that makes use of benchmarking metrics to track the progress of individuals, teams and the organization overall, assessing the effectiveness of a learning program against both internal and industry standards, would provide the granular insights needed to truly move the needle is the most beneficial. Enterprise security leaders ultimately need a fine-grained view of developers’ specific skills for every code commit while showing how well developers apply their new skills to the job.

    Developer upskilling has proved to be effective in improving software security, with our research showing that companies that implemented developer education saw 22% to 84% fewer software vulnerabilities, depending on factors such as the size of the companies and whether the training focused on specific problems. Security-skilled developers are in the best position to ensure that AI-generated code is secure, whether it comes from GenAI coding assistants or the more proactive agentic AI tools.

    The drawcard of agentic models is their ability to work autonomously and make decisions independently, and these being embedded into enterprise environments at scale without appropriate human governance will inevitably introduce security issues that are not particularly visible or easy to stop. Skilled developers using AI securely will see immense productivity gains, whereas unskilled developers will simply generate security chaos at breakneck speed.

    CISOs must reduce developer risk, and provide continuous learning and skills verification within their security programs to safely implement the help of agentic AI agents.

    The post Managing the growing risk profile of agentic AI and MCP in the enterprise appeared first on SD Times.

    Source: Read More 

    news
    Facebook Twitter Reddit Email Copy Link
    Previous ArticleAnubis Ransomware Encrypts and Wipes Files, Making Recovery Impossible Even After Payment
    Next Article SD Times 100

    Related Posts

    Tech & Work

    CodeSOD: A Unique Way to Primary Key

    July 22, 2025
    Tech & Work

    BrowserStack launches Figma plugin for detecting accessibility issues in design phase

    July 22, 2025
    Leave A Reply Cancel Reply

    For security, use of Google's reCAPTCHA service is required which is subject to the Google Privacy Policy and Terms of Use.

    Continue Reading

    Ubuntu 25.10 Snapshot 2 is Now Available to Download

    Linux

    Ready to ditch Windows? ‘End of 10’ makes converting your PC to Linux easier than ever

    News & Updates

    14 Best Free and Open Source Linux GUI Flashcard Software

    Linux

    I love Elden Ring Nightreign’s weirdest boss — he bargains with you, heals you, and throws tantrums if you ruin his meditation

    News & Updates

    Highlights

    News & Updates

    It only took Call of Duty: Warzone going back in time to make the game so much better — to the point even I enjoy it now

    April 7, 2025

    Warzone’s first map returning is a great step forward, with the old favorite giving a…

    Veo3.bot: Free Access to Google Veo 3 AI Video Generation with Native Audio

    July 15, 2025

    CVE-2025-26074 – Orkes Conductor Java Deserialization Vulnerability

    June 30, 2025

    North Korean Hackers Target Web3 with Nim Malware and Use ClickFix in BabyShark Campaign

    July 2, 2025
    © DevStackTips 2025. All rights reserved.
    • Contact
    • Privacy Policy

    Type above and press Enter to search. Press Esc to cancel.