NHI Forum
Read full article here: https://astrix.security/learn/blog/state-of-mcp-server-security-2025/?utm_source=nhimg
When Anthropic introduced the Model Context Protocol (MCP), it enabled AI assistants to securely interact with APIs and tools — effectively transforming them into full-fledged AI agents. But in the rush to innovate, many developers overlooked the basics of identity security.
The Astrix team’s study shows how quickly insecure practices spread:
- Over 20,000 MCP server repositories now exist on GitHub.
- 88% of servers require credentials to operate.
- Over half (53%) rely on static API keys or Personal Access Tokens (PATs) — long-lived, rarely rotated, and prone to leakage.
- Only 8.5% adopt OAuth, the modern gold standard for secure delegated access.
- 79% of servers store credentials in plain environment variables, exposing them to theft or misconfiguration.
The conclusion is clear: while MCP has fueled innovation, its identity model has yet to catch up to enterprise-grade security standards.
Static Secrets: The Hidden Weak Link
The study found that the vast majority of servers still depend on static, hardcoded credentials — the same security patterns that have plagued developers for decades. Many MCP servers embed secrets in configuration files or environment variables, which can easily end up in public code repositories or logs.
This problem mirrors early DevOps challenges, where static credentials caused major breaches in CI/CD pipelines and cloud automation systems. In the MCP landscape, such exposure could enable attackers to hijack agent actions, exfiltrate data, or impersonate enterprise AI systems.
Despite years of best practices urging the use of ephemeral tokens, short-lived credentials, and secret vaulting, adoption among AI agents remains dangerously low.
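For illustration, this is roughly what the static-credential anti-pattern looks like in practice. The variable names and token value below are placeholders, not drawn from any specific MCP server, and the snippet assumes the python-dotenv package.

```python
# Illustrative anti-pattern only: a static PAT hardcoded as a fallback or
# loaded from a .env file committed alongside the code, where it can leak
# through the repository, logs, or backups.
import os
from dotenv import load_dotenv  # requires python-dotenv; reads secrets from a file on disk

load_dotenv()  # pulls key=value pairs from a local .env file into the environment
GITHUB_PAT = os.environ.get("GITHUB_PAT", "ghp_example_static_token")  # placeholder value
# Every downstream request now carries a long-lived credential that is rarely rotated.
```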
OAuth Adoption: Too Little, Too Slow
Only 8.5% of analyzed MCP servers used OAuth 2.0, the preferred standard for secure delegated access. OAuth’s complexity and the lack of standardized libraries for MCP likely explain this slow adoption.
The challenge is that AI agents operate in multi-user, multi-tenant environments, where they need to request scoped permissions on behalf of users. Implementing OAuth correctly requires identity maturity and infrastructure readiness, something many open-source developers lack.
As a result, the ecosystem prioritizes developer convenience over security, favoring quick setup over long-term resilience.
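As a rough sketch of the alternative, the snippet below exchanges client credentials for a short-lived, scoped access token using the requests library. The authorization server URL, client identifiers, and scope are hypothetical placeholders, and a production MCP deployment acting on behalf of users would typically use a user-delegated authorization flow rather than this simplified machine-to-machine grant.

```python
# Minimal sketch: obtain a short-lived, scoped OAuth 2.0 access token
# (client credentials grant) instead of shipping a static PAT.
import requests

def fetch_scoped_token(token_url: str, client_id: str, client_secret: str, scope: str) -> str:
    """Request a token limited to the given scope; it expires and must be refreshed."""
    resp = requests.post(
        token_url,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# Hypothetical values for illustration only.
token = fetch_scoped_token(
    "https://auth.example.com/oauth/token",
    "mcp-server-client",
    "client-secret-placeholder",
    "repo:read",
)
```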
Where Secrets Live: A Dangerous Pattern
A deeper look into credential handling reveals that environment variables are the default for nearly four out of five MCP servers. While convenient, this approach provides no real isolation: anyone with local or container access can dump the variables and retrieve secrets in plain text.
Furthermore:
- Few servers use vault-based retrieval or runtime injection.
- Most rely on static .env files committed alongside code.
- Audit trails or rotation mechanisms are nearly nonexistent.
This practice means that a single leaked .env file or backup snapshot can expose entire clusters of MCP agents, creating an easily exploitable chain of trust.
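A minimal sketch of the vault-based alternative, assuming boto3 and an AWS Secrets Manager entry named mcp/github-token (a hypothetical name): the credential is resolved at startup and held only in memory, so there is no .env file to leak.

```python
# Minimal sketch: fetch a credential from AWS Secrets Manager at process
# start instead of reading it from a committed .env file.
import boto3

def get_runtime_secret(secret_name: str, region: str = "us-east-1") -> str:
    """Retrieve a secret value at startup; nothing is written to disk."""
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    return response["SecretString"]

# Hypothetical secret name; pass the value to the MCP server's tool client in memory only.
token = get_runtime_secret("mcp/github-token")
```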
Sensitive Tool Usage: The Next Security Blind Spot
The research also found that distinguishing sensitive MCP tools (those capable of modifying or exfiltrating data) is not straightforward. Many MCP servers don’t clearly separate read-only from write capabilities, making it impossible to assign different privilege levels to AI agents.
This is a major governance challenge: an agent with access to both read and write tools may unintentionally perform destructive actions. Astrix recommends further code-level research to classify tool sensitivity, as documentation-based analysis alone cannot capture real-world risks.
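One lightweight way to approximate such a classification, shown here as an illustrative sketch with hypothetical tool names, is a coarse allow-list that separates read-only tools from write-capable ones so an agent can be granted only the former.

```python
# Illustrative sketch: tag each MCP tool with a sensitivity level and
# derive the set of tools an agent may call under a privilege ceiling.
from enum import Enum

class Sensitivity(Enum):
    READ_ONLY = "read_only"
    WRITE = "write"

# Hypothetical tool names for illustration.
TOOL_SENSITIVITY = {
    "search_issues": Sensitivity.READ_ONLY,
    "get_file_contents": Sensitivity.READ_ONLY,
    "create_pull_request": Sensitivity.WRITE,
    "delete_branch": Sensitivity.WRITE,
}

def allowed_tools(max_level: Sensitivity) -> list[str]:
    """Return the tools an agent may call under the given privilege ceiling."""
    if max_level is Sensitivity.READ_ONLY:
        return [name for name, s in TOOL_SENSITIVITY.items() if s is Sensitivity.READ_ONLY]
    return list(TOOL_SENSITIVITY)
```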
Astrix’s Solution: The MCP Secret Wrapper
To combat these systemic weaknesses, Astrix has released an open-source tool: MCP Secret Wrapper.
This lightweight solution wraps around any MCP server and dynamically retrieves credentials from secure vaults (currently supporting AWS Secrets Manager). Instead of embedding secrets in configuration files, the wrapper:
- Pulls secrets at runtime, right before the MCP server starts.
- Injects them into the process environment temporarily.
- Prevents secrets from being written to disk or committed to source code.
This eliminates exposed static credentials while maintaining compatibility with existing MCP server architectures. The wrapper also supports secret rotation policies, allowing organizations to integrate vault-managed credentials without rewriting existing codebases.
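The sketch below illustrates the general wrapper pattern described above; it is not Astrix's implementation. It assumes boto3, AWS Secrets Manager, and a hypothetical secret ID mcp/slack-token.

```python
# Illustrative wrapper pattern: resolve secrets from AWS Secrets Manager
# just before launching the MCP server, and inject them only into that
# child process's environment.
import os
import subprocess
import boto3

def run_wrapped(server_cmd: list[str], secret_map: dict[str, str]) -> int:
    """secret_map maps environment variable names to Secrets Manager secret IDs."""
    client = boto3.client("secretsmanager")
    child_env = os.environ.copy()
    for env_var, secret_id in secret_map.items():
        child_env[env_var] = client.get_secret_value(SecretId=secret_id)["SecretString"]
    # Secrets exist only in the child process environment; nothing touches disk.
    return subprocess.run(server_cmd, env=child_env).returncode

# Hypothetical usage: wrap a stdio MCP server that expects SLACK_TOKEN.
run_wrapped(["python", "slack_mcp_server.py"], {"SLACK_TOKEN": "mcp/slack-token"})
```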
Beyond Wrapping: Toward Secure AI Agent Identity
The MCP Secret Wrapper addresses one major problem (secret exposure), but Astrix acknowledges that long-term security will require a fundamental shift in how AI agent identities are issued and governed.
Their Agent Control Plane (ACP) aims to operationalize this vision by issuing ephemeral, scoped credentials to each AI agent, aligning with least privilege principles and zero-standing access. With ACP, every AI agent becomes auditable, time-bound, and risk-aware, closing the loop between innovation and compliance.
Key Takeaways
| Metric | Finding | Risk Level |
| --- | --- | --- |
| Total Servers Analyzed | 5,205 unique servers | - |
| Credentials Required | 88% | High |
| Static Secrets (API keys/PATs) | 53% | Critical |
| OAuth Adoption | 8.5% | Low |
| Stored in Environment Variables | 79% | Critical |
The Road Ahead
The State of MCP Server Security 2025 reveals a clear pattern: the AI ecosystem is innovating faster than its security models can evolve. Without secure-by-design identity governance, MCP servers risk becoming the next major attack vector for the AI era.
Astrix’s open-source MCP Secret Wrapper provides an immediate mitigation step, while the Agent Control Plane represents the next generation of AI identity and access governance — bridging the gap between innovation and protection.
The message is clear: AI needs identity security now, not later.