The Ultimate Guide to Non-Human Identities Report
NHI Forum


Securing AI Agents and Non-Human Identities in the Age of MCP and GenAI


(@oasis-security)
Eminent Member
Joined: 2 weeks ago
Posts: 12
Topic starter  

Read full article here: https://www.oasis.security/blog/ai-nhi-security-challenge/?source=nhimg

Large Language Models (LLMs) like ChatGPT, Claude, and Llama have already transformed how organizations innovate, automate, and scale. With the release of the Model Context Protocol (MCP) by Anthropic, the AI landscape is evolving even faster, unlocking seamless integration between agents, applications, and enterprise environments.

Adoption has been swift and widespread: over 90% of Fortune 500 companies are now using LLMs in some capacity, thousands of MCP servers have been deployed online, and the MCP GitHub repository has been forked more than 4,000 times. Ready or not, AI agents are becoming a core part of enterprise ecosystems—connecting to internal systems, triggering automations, and accessing critical data sources.

But this new wave of innovation comes with a hidden cost.

AI agents, powered by LLMs and MCP, rely on Non-Human Identities (NHIs) such as API tokens, service accounts, and credentials to function. These NHIs are being provisioned rapidly, often by developers or end-users, with limited oversight. This opens the door to new risks around identity sprawl, privilege escalation, shadow access, and data exposure.
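To make the plaintext-secret risk concrete, here is a minimal sketch (hypothetical patterns, not an Oasis product or any standard tool) that scans an MCP-style JSON config for credential values stored in the clear:

```python
import json
import re

# Hypothetical patterns for a few common credential formats; real secret
# scanners use far broader, regularly updated rule sets.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
]

def find_plaintext_secrets(config_text: str) -> list[str]:
    """Return any string values in a JSON config that look like secrets."""
    hits = []

    def walk(node):
        if isinstance(node, dict):
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)
        elif isinstance(node, str):
            for pattern in SECRET_PATTERNS:
                if pattern.search(node):
                    hits.append(node)

    walk(json.loads(config_text))
    return hits

# Example: a GitHub-style token hard-coded into an MCP server config (value fabricated)
config = '{"mcpServers": {"github": {"env": {"TOKEN": "ghp_' + "a" * 36 + '"}}}}'
print(find_plaintext_secrets(config))  # flags the hard-coded token
```

Running a check like this on developer workstations and repositories is one cheap way to surface the shadow NHIs described above before they leak.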

Security leaders must shift from asking “Should we allow this?” to “How do we secure this safely?”

In this article, we’ll unpack the rising security risks posed by LLMs, MCP servers, and autonomous AI agents—and provide a clear framework for enabling safe innovation without compromising control.

 

Key Points:

  • AI agents are widely deployed, with over 4,000 forks of the MCP repository and thousands of servers published publicly

  • AI agents are autonomous actors—they don’t just analyze, they execute (e.g., scale infra, open tickets, modify configs)

  • Developers and non-developers alike are introducing GenAI tooling with poor practices such as over-permissioned identities, plaintext secrets, and personal OAuth tokens

  • Identity sprawl is accelerating, often outside traditional IAM visibility and security controls

  • Misconfigurations are common, with real-world examples like Claude Desktop configurations storing API keys in plaintext and being uploaded to GitHub

  • Security blind spots are expected, not rare, in the GenAI adoption rush
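
The Claude Desktop misconfiguration called out above typically looks like the sketch below, which follows the common `mcpServers` config convention (server name, package, and token value are fabricated for illustration). The token sits in plaintext in the config file, so committing or sharing the file leaks a long-lived credential; referencing a secret manager or OS keychain instead keeps the NHI out of the repository:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_exampleExampleExampleExample1234"
      }
    }
  }
}
```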

 

 

This topic was modified 2 days ago by Abdelrahman
