NHI Forum
Read full article here: https://blog.gitguardian.com/a-look-into-the-secrets-of-mcp/?source=nhimg
MCP rapidly enhances AI capabilities but introduces security challenges through its distributed architecture. In particular, MCP's distributed design depends on a large number of non-human identities (NHIs) and the secrets that authenticate them. Our research shows that MCP is a new source of leaks that is already disclosing real-world secrets.
MCP and the Hidden Risk of Secrets Leaks in Agentic AI
As AI evolves, so do the risks — and the Model Context Protocol (MCP) is no exception.
MCP is the newest innovation powering advanced, agentic AI systems like Claude. It allows AI assistants to connect to external tools, APIs, and local data, enabling far more powerful capabilities. But beneath that power lies a quiet but growing risk: the unchecked spread of secrets and credentials.
What is MCP?
Developed by Anthropic, MCP enables AI agents to dynamically call external tools and services — both local and remote — to complete tasks. These tools are hosted via MCP servers, which communicate with an MCP host and a client LLM. This architecture transforms AI from a passive tool into an active digital operator.
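MCP messages between client and server are JSON-RPC 2.0. As a rough sketch of what one hop in that architecture looks like on the wire, here is a client request invoking a tool on an MCP server; the tool name `search_files` and its arguments are made up for illustration:

```python
import json

# Minimal JSON-RPC 2.0 request a client might send to an MCP server
# to invoke a tool. "search_files" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_files",
        "arguments": {"query": "quarterly report"},
    },
}

wire_message = json.dumps(request)
print(wire_message)
```

Every such hop between host, client, and server is also a point where authentication material has to travel, which is exactly where the secrets risk discussed below comes in.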
But this distributed structure introduces hundreds or even thousands of non-human identities (NHIs), including:
- API keys for accessing LLMs
- OAuth tokens for remote services
- Server-to-server authentication secrets
- TLS certificates for secure connections
Each of these credentials poses a potential secrets leakage risk — and our research confirms that this isn’t theoretical.
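Treating each of these credentials as a first-class identity starts with knowing they exist. A minimal sketch of what an NHI inventory could look like; the fields, names, and 90-day rotation threshold are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class NonHumanIdentity:
    """One machine credential tracked in an inventory (illustrative schema)."""
    name: str                # which service or server uses it
    kind: str                # "api_key", "oauth_token", "s2s_secret", "tls_cert"
    owner: str               # team accountable for rotation
    last_rotated_days: int   # days since last rotation

# Hypothetical inventory for a small MCP deployment
inventory = [
    NonHumanIdentity("llm-gateway", "api_key", "platform", 12),
    NonHumanIdentity("calendar-connector", "oauth_token", "integrations", 95),
    NonHumanIdentity("mcp-host-to-server", "s2s_secret", "platform", 200),
    NonHumanIdentity("server-tls", "tls_cert", "security", 30),
]

# Flag anything not rotated within the (assumed) 90-day policy window
stale = [nhi.name for nhi in inventory if nhi.last_rotated_days > 90]
print(stale)  # ['calendar-connector', 'mcp-host-to-server']
```

Even a simple listing like this makes stale or orphaned credentials visible, which is the precondition for rotating or revoking them.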
Real-World Findings: Secrets Are Already Leaking
A recent analysis by GitGuardian of 3,829 public MCP server GitHub repositories found that:
- 5.2% of them leaked at least one secret, a rate higher than the general GitHub average.
- Leaked credentials include Bearer tokens, X-API-Keys, and OAuth tokens.
- Both local and remote MCP servers leak secrets at roughly the same rate.
This shows that MCP-based systems are already a live source of exposed secrets, making them a new surface area of concern for AI security teams, DevOps, and CISOs.
What’s at Risk?
Because MCP bridges the AI layer with critical systems and data, leaked credentials could result in:
- Unauthorized access to sensitive files or databases
- Prompt injection via impersonated servers
- Privilege escalation via misconfigured local MCPs
- Massive downstream compromise due to the AI's orchestration of multiple systems
And remember — many MCP servers are already being shared through registries like Smithery.ai, just like Docker images. That means reused or poorly secured servers may expose secrets from multiple organizations.
How to Mitigate the Risks
For AI and platform engineers:
- Never hardcode credentials into MCP server code.
- Avoid storing or logging secrets in shared infrastructure.
- Ensure TLS is enforced and user inputs are strictly validated.
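The first rule can be as simple as reading credentials from the environment and failing fast when they are missing. A minimal sketch; the variable name `MCP_SERVER_API_KEY` is an assumption for illustration, not a convention defined by MCP:

```python
import os

def load_api_key() -> str:
    """Read the server's API key from the environment, never from source code.

    The variable name is illustrative; use whatever your deployment defines.
    """
    key = os.environ.get("MCP_SERVER_API_KEY")
    if not key:
        # Fail fast at startup rather than running unauthenticated
        raise RuntimeError("MCP_SERVER_API_KEY is not set")
    return key
```

In practice the environment would be populated by a secrets manager at deploy time, so the credential never appears in the repository at all.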
For security teams and AI adopters:
- Harden local MCP environments: use containers and restrict commands.
- Manually review sensitive actions initiated by LLMs.
- Integrate secret detection tools into your CI pipeline (e.g., GitGuardian).
- Treat MCP as a new IAM surface: inventory all credentials and govern access.
Final Thoughts
MCP is a powerful step forward in the evolution of agentic AI. But it also introduces a new and underestimated threat vector: distributed, unmanaged secrets sprawl.
As adoption grows, organizations must act fast to secure this emerging ecosystem. Because the next AI-driven breakthrough shouldn't come at the cost of your API keys.