An audit of 50 open-source Model Context Protocol servers found something alarming. 61% lack input validation. 43% have command injection vulnerabilities. 31% have path traversal risks. These tools run with filesystem and environment access in your coding session - and almost nobody's checking the security.
The audit results reveal a systematic problem: developers are building MCP servers like internal tools, then sharing them as if they're production-ready. The threat model doesn't match the deployment reality.
What MCP Servers Actually Do
Model Context Protocol servers connect AI assistants to external systems. They let your coding assistant read files, run terminal commands, query databases, or integrate with APIs. When Claude or another LLM needs to interact with your local environment, an MCP server handles that bridge.
The protocol is barely six months old, but adoption is accelerating. Developers love the convenience - write a server once and it works across any MCP-compatible AI tool. The open-source ecosystem has exploded with community-built servers for everything from Git operations to cloud service management.
Here's the problem: these servers run with the same permissions as your coding session. Filesystem access. Environment variables. Terminal execution. An MCP server can do anything you can do - which means a compromised server can do anything you can do.
The Vulnerability Breakdown
The audit scanned 50 popular MCP servers from GitHub and community repositories. The results cluster around three failure modes.
Input validation was missing in 61% of servers. They accept parameters from the LLM and pass them directly to system calls, database queries, or file operations. No sanitisation. No allowlisting. No bounds checking. If the LLM sends a malicious payload - either from a prompt injection attack or a compromised model - there's nothing stopping it.
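What that validation looks like in practice is ordinary allowlisting and bounds checking at the tool boundary. A minimal sketch, assuming a hypothetical Git-style MCP tool whose handler receives an `operation` name and a `limit` from the model (the tool shape, parameter names, and limits are illustrative, not from the audit):

```python
# Illustrative parameter validation for a hypothetical MCP tool handler.
# Reject anything outside the expected shape before it reaches a system call.

ALLOWED_OPERATIONS = {"status", "log", "diff"}  # allowlist, never a blocklist
MAX_LIMIT = 500

def validate_git_tool_args(args: dict) -> dict:
    operation = args.get("operation")
    if operation not in ALLOWED_OPERATIONS:
        raise ValueError(f"operation must be one of {sorted(ALLOWED_OPERATIONS)}")

    limit = args.get("limit", 50)
    if not isinstance(limit, int) or not (1 <= limit <= MAX_LIMIT):
        raise ValueError(f"limit must be an integer between 1 and {MAX_LIMIT}")

    # Return only the validated fields; drop anything else the model sent.
    return {"operation": operation, "limit": limit}
```

The key design choice is the allowlist: the handler enumerates what it accepts rather than trying to guess what an attacker might send.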
Command injection vulnerabilities showed up in 43% of servers. These servers take input from the LLM and construct shell commands through string concatenation - a classic vulnerability, well understood in web security, yet largely overlooked in MCP development. An attacker who can manipulate the LLM's tool calls can execute arbitrary commands on your machine.
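The difference between the vulnerable pattern and the fix is small. A sketch using a hypothetical search tool (the `grep` wrapper is illustrative, not taken from any audited server):

```python
import subprocess

# Vulnerable pattern found in the audit: building a shell string from model
# input. A pattern like "x; rm -rf ~" would be interpreted by the shell.
def run_grep_unsafe(pattern: str, path: str) -> str:
    return subprocess.run(f"grep {pattern} {path}", shell=True,
                          capture_output=True, text=True).stdout

# Safer pattern: pass an argument list with no shell, so the input stays
# data instead of becoming code. "--" stops grep treating input as flags.
def run_grep_safe(pattern: str, path: str) -> str:
    return subprocess.run(["grep", "--", pattern, path],
                          capture_output=True, text=True).stdout
```

With the argument-list form, shell metacharacters in `pattern` are just bytes to search for; there is no shell to interpret them.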
Path traversal risks affected 31% of servers. File operations accept paths without validation, letting an attacker read or write outside intended directories. Your credentials directory, SSH keys, environment secrets - all accessible if the path isn't properly constrained.
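Constraining a path is a few lines of canonicalisation plus a containment check. A minimal sketch, assuming a hypothetical sandbox directory the server is meant to stay inside:

```python
import os

SANDBOX_ROOT = "/tmp/mcp-sandbox"  # illustrative base directory

def resolve_safe_path(requested: str) -> str:
    """Resolve a client-supplied path and refuse anything outside the sandbox."""
    # realpath collapses ".." segments and follows symlinks, so the
    # containment check runs against the path that would actually be opened.
    candidate = os.path.realpath(os.path.join(SANDBOX_ROOT, requested))
    root = os.path.realpath(SANDBOX_ROOT)
    if os.path.commonpath([candidate, root]) != root:
        raise PermissionError(f"path escapes sandbox: {requested}")
    return candidate
```

The check must happen after canonicalisation - comparing raw strings lets `../` sequences and symlinks slip through.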
Why This Matters Now
The attack surface is larger than it first appears. You're not just trusting the MCP server code - you're trusting every input source that feeds the LLM. A compromised website in your context window. A malicious file attachment. A prompt injection in a Slack message the AI reads. Anything that can influence the model's tool selection can potentially exploit server vulnerabilities.
The ecosystem is moving fast. New MCP servers publish daily. Developers integrate them based on functionality, not security audits. Most assume the servers are vetted because they're popular or come from known developers. The audit shows that assumption is wrong.
What Should Change
For MCP server developers, the fix isn't complicated - it's standard defensive programming. Validate all inputs. Use parameterised commands instead of string concatenation. Constrain file operations to specific directories. Apply the principle of least privilege to every operation.
The challenge is cultural, not technical. Developers are building these servers as if they're trusted internal tools, but distributing them as public packages. That mismatch creates systematic vulnerabilities. The fix requires treating MCP servers with the same security rigour as any other code that handles external input and system access.
For users, the immediate advice is straightforward: audit before you install. Check the source code of any MCP server before giving it filesystem or command execution access. Look for input validation, parameterised commands, and constrained file operations. If those aren't present, the server isn't production-ready regardless of how useful it seems.
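That audit doesn't have to be exhaustive to be useful. A rough triage sketch that flags patterns worth reading closely - the pattern list is illustrative, and a hit means "read this code", not "this is a vulnerability":

```python
import re
from pathlib import Path

# Patterns that correlate with the audit's failure modes. Deliberately noisy:
# the goal is to direct manual review, not to replace it.
RISKY_PATTERNS = {
    "shell command construction": re.compile(r"shell\s*=\s*True|os\.system|\beval\("),
    "file path handling": re.compile(r"os\.path\.join|\bopen\("),
}

def scan_source(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, label) for each risky-looking line."""
    hits = []
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            for label, pattern in RISKY_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, label))
    return hits
```

Absence of hits proves nothing; but a server full of `shell=True` calls and raw path joins, with no validation in sight, has told you what you need to know.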
The Broader Pattern
This isn't unique to MCP. It's what happens whenever a new integration layer gets adopted faster than security practices catch up. Browser extensions went through this. Docker containers went through this. OAuth implementations went through this. The pattern is familiar: convenient new capability, rapid ecosystem growth, systematic security gaps, eventual correction.
MCP is at the "systematic security gaps" stage. The good news is the problems are well-understood and fixable. The concerning bit is how many servers are in active use with these vulnerabilities present. Every developer running an unvalidated MCP server with filesystem access is one prompt injection away from a bad day.
The audit results should be a wake-up call. Not because MCP is fundamentally insecure, but because the ecosystem is treating security as optional. For a protocol that bridges AI assistants to system-level operations, that's not viable. The convenience of MCP is real. But convenience without security just means you're vulnerable faster.