The Ultimate Guide to Non-Human Identities Report

Securing Model Context Protocol (MCP) with Teleport and AWS

Enterprise AI adoption is accelerating rapidly, with organizations deploying large language models and AI agents to access sensitive corporate data and automate critical business processes. 

The emergence of agentic AI is driving this transformation. These are intelligent systems that can reason, plan, and execute complex multi-step workflows autonomously. Unlike traditional AI that simply responds to queries, agentic systems like Amazon Bedrock Agents (supported by open-source frameworks like Strands Agents SDK) can orchestrate sophisticated business processes, maintain context across sessions, and dynamically adapt their approach based on changing conditions from other applications, agents, or data sources. These agents represent a fundamental shift from AI as a tool to AI as an autonomous business partner, making the security implications even more critical.
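
As a minimal sketch of what such an agent looks like in code, the example below uses the open-source Strands Agents SDK; the tool, data, and prompt are purely illustrative, and the snippet assumes the SDK's published Agent class and @tool decorator.

```python
# Minimal agentic workflow sketch using the Strands Agents SDK. The tool and
# prompt are illustrative only; they are not part of this article's scenario.
from strands import Agent, tool

@tool
def get_open_tickets(customer_id: str) -> list[str]:
    """Return open support ticket IDs for a customer (stubbed for illustration)."""
    return ["TICKET-1042", "TICKET-1077"]

# The agent decides which tools to call, and in what order, based on the request.
agent = Agent(tools=[get_open_tickets])
agent("Summarize the open tickets for customer C-123 and suggest next steps.")
```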

However, this transformation introduces security vulnerabilities that traditional approaches cannot address. Anthropic’s Model Context Protocol (MCP) and agentic AI are reshaping how we think about security: a recent SS&C Blue Prism survey found that 70% of surveyed leaders are highly confident that AI-based automation will take over from traditional, rule-based robotic process automation (RPA) within the next three years, enabling unprecedented levels of intelligent automation while shifting how security teams must approach securing their applications.

MCP has emerged as a robust universal connector, enabling AI systems to interact with enterprise data sources in real-time, as Anthropic points out – think of MCP as a USB-C port for AI applications. As LLMs and agents increasingly act on behalf of users through MCP-based systems, the security paradigm is shifting toward treating AI agents like human users, requiring the same principles of least privilege, credential management, and behavioral visibility that organizations have long applied to protect against insider threats.

Teleport’s Infrastructure Identity Platform directly addresses these gaps by extending infrastructure identity governance to AI systems. This approach transforms MCP deployments into controlled, auditable, and compliant AI infrastructure components.

The Security Reality of Modern AI Infrastructure

The enterprise AI landscape increasingly relies on sophisticated integration patterns. AI agents and large language models need access to databases, APIs, file systems, and cloud services to deliver business value.

MCP serves as a middleware layer, providing standardized connectivity between AI systems and enterprise resources. MCP leverages OAuth 2.1 for authorization, offering a standardized approach to access control, though organizations implementing MCP should view it as one component within their broader security architecture.

For enterprise deployments, complementary security measures, monitoring capabilities, and governance frameworks tailored to specific AI infrastructure needs can help organizations maximize the value of MCP while maintaining an appropriate security posture.

MCP’s Inherent Security Limitations

MCP servers typically authenticate using static credentials or API keys, creating persistent attack surfaces. Once an MCP connection is established, the protocol provides no granular access controls or session management capabilities. AI agents can request broad, unrestricted access to data sources with minimal oversight or revocation mechanisms in place.
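
To make that pattern concrete, here is a minimal MCP server sketch built with the MCP Python SDK's FastMCP helper; the environment variable, database, and tool are hypothetical, and the point to notice is the long-lived password read once at startup.

```python
# Minimal MCP server sketch (MCP Python SDK, FastMCP). The env var, database,
# and tool are hypothetical; the long-lived password read at startup is the
# static-credential pattern described above.
import os
from mcp.server.fastmcp import FastMCP

DB_PASSWORD = os.environ["REPORTING_DB_PASSWORD"]  # static secret, lives as long as the server

mcp = FastMCP("reporting-db")

@mcp.tool()
def run_query(sql: str) -> str:
    """Run a read-only SQL query against the reporting database (stubbed)."""
    # A real server would open a connection with DB_PASSWORD and execute the
    # query; note there is no per-request scoping or revocation mechanism.
    return f"executed: {sql}"

if __name__ == "__main__":
    mcp.run()
```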

The protocol does not inherently include comprehensive logging or audit trails, so secure implementations must add them. Organizations that do not implement this tracking lack visibility into which AI systems are accessing which data, when that access occurs, and what actions are performed. This opacity creates significant compliance risk and makes incident response nearly impossible.

Furthermore, AI agents should be granted access according to the principle of least privilege, with permissions scoped to specific use cases and business requirements.

Common Security Gaps within LLM Applications

The OWASP Top 10 for LLMs provides a framework for understanding security vulnerabilities specific to large language model applications.

As organizations deploy LLMs that interact with enterprise data through protocols like MCP, they can use the OWASP Top 10 for LLMs as a framework to identify common risk areas such as over-privileged access patterns, credential management failures, and audit blindness.

Over-Privileged Access Patterns

AI systems often operate with excessive permissions that exceed their operational requirements. An AI assistant designed to answer HR questions may receive full database access, including sensitive employee records, financial data, and strategic information.

This pattern violates fundamental least-privilege principles, creating unnecessary risk exposure. The OWASP Top 10 for LLMs specifically identifies this issue under “LLM08: Excessive Agency,” which occurs when AI systems are granted over-functionality, excessive permissions, or too much autonomy. According to OWASP, this vulnerability can lead to unintended consequences when LLMs access unnecessary functions or possess unneeded permissions on downstream systems, highlighting the importance of implementing strict access controls and permission boundaries for AI applications.
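
One hedged way to picture the remediation: the integration layer validates every requested resource against an allowlist derived from the assistant's function, rather than passing model-generated queries straight through on broad credentials. The table names and stubbed executor below are hypothetical.

```python
# Illustrative guard: an HR-question assistant is confined to an explicit
# allowlist of tables, regardless of what the model asks for. Table names and
# the read-only executor are hypothetical stand-ins.
ALLOWED_TABLES = {"hr_policies", "benefits_faq", "holiday_calendar"}

def execute_readonly_query(table: str, query: str) -> str:
    """Stub standing in for a read-only database call."""
    return f"rows from {table}"

def guarded_query(table: str, query: str) -> str:
    """Deny by default: any table outside the assistant's declared scope is unreachable."""
    if table not in ALLOWED_TABLES:
        raise PermissionError(f"table '{table}' is outside this assistant's scope")
    return execute_readonly_query(table, query)
```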

Credential Management Failures

Most MCP implementations rely on static API keys, database passwords, or service account credentials. These secrets often live in configuration files, environment variables, or code repositories where they can be discovered and misused.

Traditional secret rotation practices are frequently inadequate for dynamic AI workloads that can scale rapidly or operate across multiple environments. OWASP’s “LLM07: Insecure Plugin Design” highlights how plugins and integrations can be prone to exploitation due to insufficient access controls and improper credential management. OWASP recommends enforcing appropriate authentication identities and API keys for authorization, as well as implementing secure access control guidelines to prevent credential exposure that could lead to data exfiltration or privilege escalation when LLMs interact with external systems and resources.

Audit and Compliance Blindness

Enterprise compliance frameworks can require detailed access logging, attribution, and policy enforcement.

OWASP’s “LLM06: Sensitive Information Disclosure” vulnerability warns that LLM applications can inadvertently disclose confidential data if proper tracking mechanisms are not in place. OWASP emphasizes the importance of implementing robust monitoring and logging to detect when sensitive information might be exposed through model interactions.

Additionally, “LLM02: Insecure Output Handling” notes that without proper audit trails, organizations cannot verify whether LLM outputs were appropriately validated before being processed by downstream systems, thereby creating compliance risks and complicating incident response efforts.

Teleport’s Identity-First Solution for AI Infrastructure

Teleport addresses these fundamental security gaps by extending its proven infrastructure identity platform to encompass AI systems, MCP servers, and cloud AI services, such as Amazon Bedrock. This approach treats AI components as first-class infrastructure resources subject to the same security controls as traditional systems.

By applying Infrastructure Identity principles to AI workloads, Teleport enables organizations to bring MCP implementations under consistent governance while addressing multiple vulnerabilities identified in the OWASP Top 10 for LLMs. Rather than creating parallel security architectures for AI systems, Teleport extends existing security frameworks to include these new components, ensuring comprehensive protection without fragmentation or policy drift.

Unified Identity Framework

Teleport creates a single identity model that encompasses humans, machines, workloads, and AI systems. Each MCP server receives a unique, cryptographically backed identity that can be authenticated, authorized, and audited using consistent policy frameworks. This eliminates the need for static credentials while providing granular control over the capabilities of AI systems.

Teleport addresses OWASP’s “LLM07: Insecure Plugin Design” by implementing proper authentication identities for MCP servers and enforcing strict access control guidelines. By treating AI systems as first-class identities with mutual TLS authentication, Teleport prevents unauthorized access and reduces privilege escalation risk.

This unified identity approach helps mitigate “LLM05: Supply Chain Vulnerabilities” by providing visibility into which AI components are accessing enterprise resources and enforcing consistent security controls across the AI supply chain.

Zero Trust Architecture for AI

Every AI interaction requires explicit authentication and authorization through Teleport’s access controls. AI agents cannot access resources without valid roles and approved tasks.

This enforcement happens at the protocol level, ensuring that even sophisticated prompt injection attacks cannot bypass security boundaries. It defends against “LLM01: Prompt Injection”, which covers attackers manipulating LLMs through crafted inputs to execute unauthorized actions. By enforcing trust boundaries between the LLM, external sources, and extensible functionality, Teleport prevents compromised models from accessing unauthorized resources even if the prompt layer is breached.

The Zero Trust approach also mitigates “LLM08: Excessive Agency” by implementing strict permission boundaries that limit what actions AI systems can perform, regardless of the instructions they receive, ensuring that AI workloads operate only within their intended scope and preventing potential abuse of their capabilities.

Dynamic Credential Management

Teleport eliminates static secrets by providing ephemeral, just-in-time credentials for AI systems. MCP servers receive temporary access tokens scoped to specific resources and time windows.

This approach reduces credential exposure while supporting dynamic AI workloads. “LLM07: Insecure Plugin Design” highlights how static credentials and insufficient access controls create significant security risks when LLMs interact with enterprise systems. By implementing short-lived credentials that automatically expire, Teleport prevents the credential sprawl that can occur in MCP implementations, reducing the attack surface for potential data exfiltration or privilege escalation.
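
Teleport issues its own short-lived certificates and tokens; as a rough AWS-native analogue of the same just-in-time idea, an MCP server can exchange its identity for time-boxed STS credentials rather than holding a static key. The role ARN and session name below are hypothetical.

```python
# Ephemeral-credential sketch using AWS STS (an AWS-native analogue of the
# just-in-time pattern; Teleport uses its own certificate issuance). The role
# ARN and session name are hypothetical.
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/mcp-reporting-readonly",  # hypothetical, narrowly scoped role
    RoleSessionName="mcp-server-query",
    DurationSeconds=900,  # credentials expire after 15 minutes
)
creds = resp["Credentials"]

# A session built from these credentials stops working once they expire, so a
# leaked key has a short, bounded window of usefulness.
scoped_session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```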

Additionally, the “LLM10: Model Theft” risk is reduced by ensuring that even if credentials are compromised, they have limited utility due to their short lifespan and narrow scope, preventing attackers from gaining persistent access to sensitive AI infrastructure or the data it processes.

Comprehensive Audit and Attribution

All AI system interactions flow through Teleport’s audit framework, creating detailed logs that include AI agent identity, accessed resources, actions performed, and business context.

This visibility enables compliance reporting, incident investigation, and behavioral analysis of AI systems. “LLM06: Sensitive Information Disclosure” warns about LLMs inadvertently exposing confidential data without proper tracking mechanisms. Teleport’s detailed audit trails provide the visibility needed to detect potential data leakage and trace exactly which information was accessed by AI systems.

The audit framework also helps mitigate “LLM02: Insecure Output Handling” risks by creating accountability for how LLM outputs are processed and used within the organization. Security teams can monitor AI behavior patterns in real-time, detect anomalous access attempts that may indicate prompt injection or other attacks, and maintain comprehensive audit records required for regulatory compliance in industries with strict data governance requirements.

Real-World Implementation: Securing AI-Driven Database Access

Consider an AI-powered business intelligence system that uses MCP to query multiple enterprise databases containing customer data, financial records, and operational metrics. In a traditional implementation, this system would require persistent database credentials and broad query permissions.

Traditional Approach Risks

The AI system typically receives database usernames and passwords stored in configuration files or environment variables. These credentials grant extensive access to multiple databases, which often include sensitive tables containing personally identifiable information, financial data, and strategic business information.

When business users interact with the AI system, their queries may inadvertently trigger access to unauthorized data. The system cannot distinguish between legitimate business questions and potential data mining attempts. No audit trail exists to track which specific data was accessed or how it was used.

If the AI system experiences a prompt injection attack, attackers could potentially access any data reachable through the stored credentials. The organization would have no way to detect unauthorized access or understand the scope of potential data exposure.

Teleport-Secured Implementation

[Figure: Teleport and AWS MCP architecture diagram]

With Teleport’s Infrastructure Identity platform, the MCP server receives a unique identity with scoped permissions aligned to specific business functions. Rather than persistent database credentials, the system requests just-in-time access tokens for specific queries or data sets.

Business users authenticate through Teleport’s access controls, and their identity context flows through to database interactions. This enables the attribution of AI queries to specific individuals and business purposes. Access policies can restrict which databases, tables, or even particular rows the AI system can access based on user context and business requirements.

All interactions generate detailed audit logs that include the user’s identity, the AI system’s identity, the specific data accessed, and the business justification for the interaction. Security teams can monitor AI behavior in real-time and detect anomalous access patterns that may indicate prompt injection or other attacks.
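
As a hedged illustration of what such an attributed record might contain (the field names below are our shorthand, not Teleport's actual audit event schema):

```python
# Illustrative structure for an attributed AI access event. Field names are
# our own shorthand, not Teleport's audit event schema.
import json
from datetime import datetime, timezone

audit_event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "human_identity": "jane.doe@example.com",        # user who asked the question
    "ai_identity": "mcp-bi-assistant",                # MCP server / agent identity
    "resource": "postgres://reporting/customer_orders",
    "action": "SELECT",
    "business_context": "Q3 revenue summary request",
}
print(json.dumps(audit_event, indent=2))
```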

Emergency response procedures can instantly revoke or restrict access to AI system resources without disrupting other business operations. This capability proves essential when security incidents require immediate containment and response.

Strategic Implementation Roadmap

Before implementing MCP-based AI systems, organizations should establish and regularly review core security fundamentals that mirror the protection models applied to human users.

Identity and Access Management (IAM) forms the cornerstone. Every AI agent requires a distinct identity with clearly defined permissions, just as human users do. Least-privilege principles become even more critical with AI agents that can operate in one-to-many scenarios, making fine-grained permissions through narrowly scoped AWS IAM roles and organizational policies essential so that agents only access the resources necessary for their specific functions. Ensure that these agents are also constrained by the permissions of the individual user who invokes or triggers them.
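
As a sketch of that kind of fine-grained scoping, the hypothetical policy below grants a single agent role read-only access to one DynamoDB table and nothing else; the account ID, table, and policy name are illustrative.

```python
# Hypothetical least-privilege policy for one agent: read-only access to a
# single DynamoDB table. Account ID, table, and policy name are illustrative.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/support-tickets",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="support-agent-readonly",
    PolicyDocument=json.dumps(policy_document),
)
```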

Data boundaries must be explicitly defined based on functional tenancy. For example, a support chatbot should only access customer service data and the relevant support-document knowledge bases, excluding internal financial records or HR information; this can be enforced through targeted Amazon S3 bucket policies and resource-based policies. Fundamental system logging with AWS CloudTrail provides comprehensive audit logging of AI agent activities within the AWS environment. We strongly suggest reviewing the Security Reference Architecture for Generative AI for further reading.
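
For the support-chatbot example, a targeted bucket policy along these lines (bucket name and role ARN hypothetical) keeps the agent's execution role confined to the support knowledge base:

```python
# Hypothetical S3 bucket policy: only the support chatbot's execution role may
# read the support knowledge base; bucket and role names are illustrative.
import json

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSupportChatbotReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/support-chatbot-agent"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::support-kb-docs/*",
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
# The policy could then be applied with, e.g.:
# boto3.client("s3").put_bucket_policy(Bucket="support-kb-docs", Policy=json.dumps(bucket_policy))
```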

Establish AI Identity Governance

Organizations should extend existing identity and access management frameworks to include AI systems. This involves cataloging AI applications, MCP servers, and data connections to understand the current security posture. Identity governance policies should explicitly address AI systems as infrastructure components requiring authentication, authorization, and audit controls.

Implement Security-First AI Development

Security controls should be integrated into AI development workflows from the outset rather than being added as an afterthought. Development teams should utilize Teleport’s identity platform during AI pilot projects to establish secure patterns and prevent the accumulation of security debt. This approach ensures that security scales with AI adoption rather than becoming a limiting factor.

Deploy Comprehensive Monitoring

AI systems require specialized monitoring that goes beyond traditional infrastructure metrics. Organizations should implement monitoring that tracks AI behavior, access patterns, and policy compliance to ensure effective governance and oversight. This monitoring should integrate with existing security operations to provide unified visibility across traditional and AI infrastructure.
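
As one concrete, hedged building block for that integration, CloudTrail events attributed to an agent's role can be pulled into existing monitoring pipelines; the role name below is hypothetical.

```python
# Pull recent CloudTrail events attributed to a (hypothetical) agent role so
# they can feed existing security monitoring and anomaly detection.
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")
now = datetime.now(timezone.utc)

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "support-chatbot-agent"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    MaxResults=50,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Resources", []))
```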

Establish Incident Response Procedures

AI-specific incident response procedures should address prompt injection attacks, unauthorized data access, and compromise of AI systems. These procedures should leverage Teleport’s emergency access revocation capabilities to contain incidents quickly while preserving audit evidence for investigation.

Business Impact and Strategic Value

Implementing Teleport’s Infrastructure Identity platform for AI infrastructure delivers measurable business value beyond security improvements. Organizations can accelerate AI adoption by providing secure, compliant access to enterprise data. Development teams can focus on AI functionality rather than security implementation, reducing time-to-market for AI initiatives.

Compliance and audit capabilities enable the deployment of AI in regulated industries where data governance requirements previously hindered AI adoption. Organizations can demonstrate regulatory compliance through comprehensive audit trails and policy enforcement mechanisms.

The unified identity approach transforms AI infrastructure management by solving the fundamental M×N integration problem. Instead of building custom security for every AI system connecting to every data source, organizations implement M+N standardized identity controls: ten AI systems and twenty data sources mean roughly thirty identity integrations rather than two hundred bespoke ones. This architectural shift enables enterprise AI to become truly scalable.

Conclusion

The enterprise AI revolution requires a fundamental shift in the security approach. Traditional security models are unable to address the unique risks posed by AI systems that operate with broad data access and dynamic behavior patterns. MCP’s powerful connectivity capabilities become dangerous vulnerabilities without proper security controls.

Teleport’s Infrastructure Identity platform provides the missing security foundation for enterprise AI infrastructure, enabling sophisticated AI orchestration through Amazon Bedrock Agents. By treating AI systems as first-class infrastructure components subject to rigorous identity governance, organizations can deploy intelligent agents that securely orchestrate complex workflows from accessing S3 data lakes and Amazon DynamoDB tables to integrating with SaaS applications on Amazon Elastic Kubernetes Service (EKS) and external APIs through MCP servers. These agents can handle multi-step planning, maintain memory across sessions, and collaborate with other agents while ensuring security and compliance requirements are met across their entire cloud ecosystem, thereby transforming simple AI interactions into comprehensive business solutions.

The future of enterprise AI depends on establishing security frameworks that scale with the adoption of AI. Organizations that implement identity-based security for AI infrastructure today will be positioned to lead in the AI-driven economy. In contrast, those who defer security considerations will face increasing risks and limitations.

Success in enterprise AI requires treating security as an enabler rather than a constraint. Teleport’s approach demonstrates that comprehensive security controls can accelerate rather than hinder AI adoption, providing the foundation for responsible AI innovation at the enterprise scale.

About AWS

Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud, offering over 200 fully featured services from its global data centers. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

About Teleport

Teleport is the Infrastructure Identity Company, modernizing identity, access, and policy for infrastructure, improving engineering velocity and resiliency of critical infrastructure against human factors and/or compromise.

Learn More