The Ultimate Guide to Non-Human Identities Report
NHI Forum


ChatGPT-5, AI Agents and NHI Security


(@astrix)

Read the full article here: https://astrix.security/learn/blog/secure-chat-gpt5-with-astrix-security/?source=nhimg

 

The release of GPT-5 on August 7, 2025, has made creating AI agents effortless. Any employee—technical or not—can now spin up a customized, autonomous agent in minutes. While this fuels productivity and innovation, it also creates an ungoverned expansion of Non-Human Identities (NHIs) that access corporate systems using service accounts, API keys, and tokens.

The Risk Landscape

These AI agents are no longer passive assistants. With adaptive reasoning, massive 256K-token context windows, and integrated tool use, they can autonomously connect to APIs, process sensitive data, and execute cross-system tasks. Without oversight, this leads to:

  • Shadow AI agents with unknown owners

  • Over-permissioned NHIs violating least privilege

  • Dormant or orphaned credentials acting as attack backdoors

  • Privilege escalation paths that can be exploited by attackers
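Dormant and orphaned credentials in particular are easy to surface programmatically once you have an NHI inventory. A minimal sketch, assuming a hypothetical inventory format (the field names and thresholds below are illustrative, not any real Astrix or vendor API):

```python
from datetime import datetime, timedelta

# Hypothetical NHI inventory records; field names are illustrative.
credentials = [
    {"id": "svc-reporting", "owner": "alice", "last_used": datetime(2025, 8, 1)},
    {"id": "svc-legacy-etl", "owner": None, "last_used": datetime(2024, 11, 2)},
]

def find_risky_nhis(creds, now, dormant_after=timedelta(days=90)):
    """Flag dormant (long-unused) and orphaned (ownerless) credentials."""
    risky = []
    for c in creds:
        reasons = []
        if now - c["last_used"] > dormant_after:
            reasons.append("dormant")
        if c["owner"] is None:
            reasons.append("orphaned")
        if reasons:
            risky.append((c["id"], reasons))
    return risky

print(find_risky_nhis(credentials, now=datetime(2025, 8, 7)))
# flags svc-legacy-etl as both dormant and orphaned
```

The 90-day dormancy window is an assumption; in practice the threshold should match your organization's credential-lifecycle policy.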

OWASP's agentic AI guidance (Agentic AI – Threats and Mitigations, a companion to the Top 10 for LLM Applications) explicitly flags NHIs as critical to agent security, highlighting threats such as Tool Misuse (T2), Privilege Compromise (T3), and Identity Spoofing (T9).

 

How Astrix Secures AI Agents

Astrix focuses on governing the identities behind AI agents—ensuring every action is authorized, monitored, and compliant.

  1. Discovery & Visibility

  • Inventory all GPT and Copilot instances, mapping each to a human owner

    • Visualize agent–permission–resource relationships through lineage graphs

    • Identify platform connections (e.g., Google Workspace, Salesforce)
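The lineage idea can be illustrated with a tiny agent–permission–resource graph. This is a minimal sketch using plain dicts; Astrix's internal data model is not public, so every name and field here is an assumption:

```python
# Illustrative agent -> credential/scope -> resource lineage graph.
lineage = {
    "sales-assistant-gpt": {
        "owner": "bob@example.com",  # each agent maps to a human owner
        "grants": [
            {"credential": "oauth-token-7", "scope": "salesforce:read", "resource": "Salesforce"},
            {"credential": "api-key-12", "scope": "drive.readonly", "resource": "Google Workspace"},
        ],
    },
}

def resources_reachable_by(agent, graph):
    """Walk the lineage graph and list the platforms an agent can touch."""
    return sorted({g["resource"] for g in graph.get(agent, {}).get("grants", [])})

print(resources_reachable_by("sales-assistant-gpt", lineage))
# ['Google Workspace', 'Salesforce']
```

Even this toy graph shows why lineage matters: the blast radius of one compromised agent is the set of resources reachable through its grants.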

  2. Governance & Compliance

    • Enforce secure credential rotation for API keys and tokens

    • Require approvals before agents go live

    • Maintain full audit trails for regulatory readiness
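A rotation policy reduces to a simple age check once key metadata is inventoried. A minimal sketch, assuming a hypothetical 30-day rotation period and illustrative key records:

```python
from datetime import date, timedelta

# Hypothetical policy: API keys must be rotated every 30 days.
ROTATION_PERIOD = timedelta(days=30)

keys = [
    {"id": "api-key-12", "created": date(2025, 7, 20)},
    {"id": "api-key-30", "created": date(2025, 5, 1)},
]

def keys_due_for_rotation(keys, today):
    """Return IDs of keys whose age exceeds the rotation period."""
    return [k["id"] for k in keys if today - k["created"] > ROTATION_PERIOD]

print(keys_due_for_rotation(keys, today=date(2025, 8, 7)))
# ['api-key-30']
```

In a real deployment this check would feed an approval workflow and an audit log rather than a print statement.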

  3. Risk Detection & Response

    • Enforce least-privilege access policies

    • Continuously monitor for anomalous or policy-violating agent behavior

    • Detect insecure connections, unsafe sharing, and public exposures
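At its simplest, least-privilege enforcement is a comparison of observed agent actions against an approved scope set. The sketch below uses a plain allowlist check (real NHI platforms use richer policy engines); agent names and scopes are hypothetical:

```python
# Approved scopes per agent (illustrative policy).
approved_scopes = {"sales-assistant-gpt": {"salesforce:read", "drive.readonly"}}

# Observed (agent, scope) actions from activity logs.
observed_actions = [
    ("sales-assistant-gpt", "salesforce:read"),
    ("sales-assistant-gpt", "salesforce:admin"),  # outside approved scopes
]

def policy_violations(actions, policy):
    """Return (agent, scope) pairs not covered by the agent's approved scopes."""
    return [(a, s) for a, s in actions if s not in policy.get(a, set())]

print(policy_violations(observed_actions, approved_scopes))
# [('sales-assistant-gpt', 'salesforce:admin')]
```

Flagged pairs like the `salesforce:admin` call above are exactly the anomalous, policy-violating behavior continuous monitoring is meant to catch.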

Real-World Results

In our case study, a global enterprise using ChatGPT Enterprise discovered 250+ GPT agents—some with admin-level access and PII exposure. Astrix helped rapidly identify these risks, remediate vulnerabilities, and enable safe AI scaling without sacrificing visibility or control.

Key Takeaway

AI security isn’t just about protecting models—it’s about securing the NHIs they operate under. Without identity governance, AI agents can become uncontrolled, high-risk entities inside the enterprise. Astrix delivers the visibility, control, and guardrails needed to adopt AI at scale, securely.

 

 


   