The Model Context Protocol has become the backbone for connecting AI models to external tools and data sources in 2026. Claude, Cursor, VS Code, Windsurf, and dozens of other AI tools use MCP to let agents read files, query databases, execute code, and interact with APIs. And as of today, MCP has no built-in authentication, no authorization framework, and no access control policies.
OWASP recognized this gap and published the MCP Top 10 — a catalog of critical security risks specific to the Model Context Protocol. If your developers use any AI coding assistant with MCP servers, your security team needs to read this. Because the tools your engineers are connecting to their AI agents right now have the same class of vulnerabilities that plagued early web applications — and nobody is patching them.
Why MCP Servers Are High-Value Targets
An MCP server sits between an AI model and your infrastructure. It holds authentication tokens for databases, APIs, file systems, and cloud services. Breaching a single MCP server can grant an attacker access to every connected service's tokens and the ability to execute actions across multiple systems — all through a protocol that was designed for ease of use, not security.
The attack surface is expanding rapidly. Developers install MCP servers from npm, pip, and GitHub with the same casual trust they give to any open-source dependency. But unlike a library that runs in a sandbox, an MCP server has runtime privileges — it can execute shell commands, write files, and make authenticated API calls on behalf of the AI agent.
MCP was designed for flexibility, not security. The protocol does not enforce authentication, authorization, or access control. Every MCP server is a trust boundary that most organizations don't even know exists.
The OWASP MCP Top 10, Explained
OWASP's MCP Top 10 identifies the most critical risks. Here are the ones that matter most for enterprise security teams.
Tool Poisoning. Attackers manipulate tool metadata or behavior to make MCP tools perform unintended actions. A tool description says it "reads calendar events" but actually exfiltrates contacts. Because AI models trust tool descriptions to decide how to use them, a poisoned description can redirect agent behavior without changing any code.
Prompt Injection via Tool Results. When an MCP tool returns data to an AI model, that data becomes part of the model's context. Malicious content embedded in tool results — hidden instructions in a database record, a customer support ticket, or an email — can influence the agent's subsequent actions, leading to data exfiltration or privilege escalation.
Context Spoofing. MCP servers can present fabricated context to AI models, causing them to make decisions based on false information. This is particularly dangerous in agentic workflows where models chain multiple tool calls together — a spoofed result early in the chain can corrupt every subsequent action.
Insecure Memory References. MCP allows tools to store and retrieve data across sessions. Without proper isolation, one tool can access another tool's stored data, and session data can leak between users sharing the same MCP server instance.
Real-World MCP Security Incidents
These aren't theoretical risks. Security researchers have already demonstrated real-world MCP vulnerabilities in production systems. The OpenClaw crisis — where 341 malicious skills were planted in a public marketplace — showed exactly what supply chain attacks look like in the MCP ecosystem.
- Asana's MCP implementation contained a bug that caused unintended data exposure across workspaces.
- Microsoft 365 Copilot was found vulnerable to hidden prompts that could exfiltrate sensitive data through MCP-connected tools.
- The widely-used 'mcp-remote' npm package was susceptible to remote code execution via crafted server responses.
- Multiple MCP marketplace servers were found storing authentication tokens in plaintext configuration files.
The Governance Gap: No Visibility, No Control
For security teams, the MCP problem isn't just technical — it's organizational. Most security teams don't know which MCP servers their developers have installed, what permissions those servers have, or what data flows through them. There is no centralized registry, no approval workflow, and no monitoring.
Consider what a typical developer's MCP configuration looks like: a GitHub server with repo access, a database server with read/write permissions, a Slack server that can post messages, and a file system server with access to the project directory. Each of these is a potential exfiltration vector. Each was installed with a single command. None of them went through a security review.
The average developer AI setup has 3-5 MCP servers connected, each with credentials to different systems. Zero of these went through your security approval process. This is shadow IT with admin privileges.
What Security Teams Should Do About MCP
MCP governance requires a different approach than traditional SaaS security. Here's a practical framework.
- Inventory your MCP servers. Scan developer machines and CI/CD environments for MCP configuration files. Know what's connected before you try to govern it.
- Implement a server allowlist. Define which MCP servers are approved for use, and monitor for unapproved installations. Block connections to unknown servers.
- Monitor tool descriptions for changes. Tool poisoning works by changing what a tool says it does. Hash tool descriptions and alert on changes — this is rug-pull detection.
- Scan data flowing through MCP. Prompts and tool results should be inspected for sensitive data (PII, credentials, source code) before they leave your network.
- Enforce least-privilege for tool permissions. An MCP server that reads calendar events doesn't need write access to your file system. Define and enforce permission boundaries per tool.
MCP Security Is AI Governance
The Model Context Protocol is where AI meets your infrastructure. It's the layer where prompts become actions and tool results become decisions. Governing MCP isn't a niche developer concern — it's the next frontier of AI security. As our RSAC 2026 analysis showed, every major vendor recognizes this — but most solutions are partial. The organizations that figure this out now will be prepared for the agentic AI era. The ones that don't will be the next OpenClaw headline.
Vloex MCP Gateway wraps every MCP server connection with policy enforcement, sensitive data scanning, tool description monitoring, and rug-pull detection. No code changes required — install once, govern all MCP connections across your dev team. Learn more about MCP security.
Get started free