Introduction to Panel and Session Overview
The session was expertly hosted by Henrique Teixeira, SVP of Strategy at Saviynt, who guided a forward-looking discussion on one of the hottest topics in identity security: the rise of Agentic AI and its intersection with Non-Human Identities (NHIs). The panel featured leading voices in the field: Idan Gour, Co-Founder & CTO at Astrix Security; Ido Shlomo, Co-Founder & CTO at Token Security; and Paresh Bhaya, Co-Founder and Head of GTM at Natoma.
The discussion highlighted the potential risks, opportunities, and strategic considerations for organizations adopting AI agents, especially as these agents become more autonomous and integrated into enterprise operations.
Key Discussion Points
Ownership of AI and NHI Identities
Participants agreed that the Identity and Access Management (IAM) leader should own AI agent identities because of their broad organizational perspective: they already understand existing identities and can oversee the integration of agentic workloads and third-party technologies.
- The same leader should manage both NHI and AI agent identities for consistency.
- AI agents are viewed as a subset of NHI but with unique characteristics.
Nature of AI Agents Compared to Other NHIs
AI agents differ from traditional NHIs such as RPA bots because they combine human-like flexibility with machine-scale robustness. They are unpredictable, capable of natural language interaction, and operate with a level of autonomy that challenges existing identity management frameworks.
- Current AI agents are often basic, but their capabilities are rapidly evolving.
- They blur the lines between deterministic automation and human-like unpredictability.
Security implications include the need to rethink how ownership, data sharing, and security controls are applied to these agents.
Biggest Risks in Agentic AI
The primary risk identified is uncontrolled and excessive privileges. As AI agents become more autonomous, they may access sensitive data with broad permissions, leading to potentially catastrophic outcomes.
- Today, privilege management is a challenge; in the future, agents may operate without human oversight.
- Domain-specific and multi-agent systems will increase complexity and risk.
Concerns include prompt injection, data poisoning, and the difficulty of controlling multiple interconnected agents.
Challenges of Scaling AI Agents
While the risks of a single AI agent are significant, the scale of multiple agents amplifies the threat. Protocol-based models such as the Model Context Protocol (MCP) enable agents to communicate and operate autonomously, complicating discovery, ownership, and security management.
- Protocols like MCP are critical but still underdeveloped (a minimal sketch of scoping an agent's tool access follows this list).
- Security gaps in these protocols could lead to widespread vulnerabilities.
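To make the privilege concern concrete, here is a minimal, hypothetical Python sketch of a least-privilege gate an organization might place in front of an agent's tool calls. The scope names, tool names, and the ScopedToolGateway class are illustrative assumptions for this recap, not part of MCP itself or any panelist's implementation.

```python
# Hypothetical sketch: a least-privilege gate in front of agent tool calls.
# Scope names, tools, and the audit log format are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentIdentity:
    agent_id: str                                   # unique identity for the agent (an NHI)
    owner: str                                      # accountable human or team
    scopes: set[str] = field(default_factory=set)   # explicitly granted permissions


class ScopedToolGateway:
    """Allows a tool call only if the agent's identity holds the required scope."""

    REQUIRED_SCOPES = {
        "read_crm_contacts": "crm:read",
        "send_email": "email:send",
        "delete_record": "crm:delete",   # high-risk action, rarely granted
    }

    def __init__(self) -> None:
        self.audit_log: list[dict] = []

    def call(self, agent: AgentIdentity, tool: str, **kwargs):
        required = self.REQUIRED_SCOPES.get(tool)
        allowed = required is not None and required in agent.scopes
        # Every attempt is recorded so ownership and usage stay discoverable.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": agent.agent_id,
            "owner": agent.owner,
            "tool": tool,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{agent.agent_id} lacks scope for {tool!r}")
        # In a real deployment the tool would execute here; this sketch only echoes the call.
        return {"tool": tool, "args": kwargs, "status": "executed"}


# Usage: an agent granted only read access cannot silently escalate to deletes.
gateway = ScopedToolGateway()
support_agent = AgentIdentity("agent-support-01", owner="it-identity-team",
                              scopes={"crm:read"})
gateway.call(support_agent, "read_crm_contacts", account="ACME")   # allowed
try:
    gateway.call(support_agent, "delete_record", record_id="123")  # denied
except PermissionError as exc:
    print(exc)
```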
Recommendations for 2025: Low-Hanging Fruits
Participants offered practical steps for organizations:
- Early adoption of controls – Implement security controls from day one when deploying MCP and AI agents.
- Build infrastructure proactively – Develop systems for inventory, discovery, and ownership of NHIs and AI agents (a minimal inventory sketch follows this list).
- Engage vendors – Initiate conversations with vendors about security, data access, and governance early in the procurement process.
- Leadership enthusiasm – Be the most excited person about AI in your organization to influence adoption and security practices.
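To illustrate the inventory and ownership recommendation above, the following is a small, hypothetical sketch of what an NHI/agent inventory record might capture, along with a check for unowned identities. The field names, example entries, and the find_unowned helper are assumptions for illustration, not a schema prescribed by the panel.

```python
# Hypothetical sketch of an NHI / AI-agent inventory record and an ownership check.
# Field names and the example entries are illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class NHIRecord:
    identity_id: str              # service account, API key, or agent identity
    kind: str                     # "service_account", "api_key", "ai_agent", ...
    owner: Optional[str]          # accountable team; None means unowned (a risk)
    data_access: list[str]        # systems or datasets the identity can reach
    last_reviewed: Optional[str]  # ISO date of the last access review


def find_unowned(inventory: list[NHIRecord]) -> list[NHIRecord]:
    """Flag identities with no accountable owner -- a common first gap to close."""
    return [record for record in inventory if record.owner is None]


inventory = [
    NHIRecord("svc-ci-deploy", "service_account", "platform-team",
              ["artifact-registry"], "2025-01-15"),
    NHIRecord("agent-sales-assist", "ai_agent", None,
              ["crm", "email"], None),  # newly discovered agent, no owner yet
]

for record in find_unowned(inventory):
    print(f"Unowned identity needs an owner: {record.identity_id} ({record.kind})")
```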
Role of Security and Identity Teams
Security teams are often perceived as blockers, but the panel emphasized that security should be integrated into the deployment process from the start. MCP offers a way to accelerate AI adoption while maintaining control, provided security considerations are prioritized.
Security teams need to shift from reactive to proactive, embedding controls into the architecture rather than bolting them on as afterthoughts.
Industry Collaboration and Vendor Cooperation
Vendors in the identity space should:
- Work together to combat criminal activities and malicious actors.
- Reinvent themselves to support AI agent security and management.
- Focus on customer needs over competition, especially in the fast-moving AI landscape.
Collaboration across vendors and organizations is essential to develop standards, protocols, and best practices for secure AI agent deployment.
Closing Remarks
The session concluded with a clear call to action around the emerging identity frontier: AI agents as a new class of non-human identities. Panelists emphasized that these identities come with unique security challenges, requiring urgent attention from identity and security leaders.
A key message was the critical risk of uncontrolled privileges, which, if left unmanaged, could result in significant organizational damage. As such, security must be integrated from day one, not as an afterthought. The role of protocols like the Model Context Protocol (MCP) was highlighted as essential for providing structure and guardrails in agent-to-agent communication.
Panelists also underscored the importance of proactive, engaged leadership in managing AI-driven identities, advocating for a culture that is not only technically prepared but strategically excited about securing the future of these intelligent systems.
Finally, the discussion stressed the need for industry-wide collaboration and vendor alignment, emphasizing that no single organization can tackle these risks in isolation. Collective effort is vital to stay ahead of evolving threats in the era of Agentic AI.