NHI Forum
Read the full article here: https://astrix.security/learn/blog/case-study-how-a-major-brand-scaled-ai-agent-governance-with-astrix-nhi-security/?source=nhimg
A globally recognized brand, already using Astrix to secure Non-Human Identities (NHIs) such as API keys and service accounts, faced a new challenge: governing AI agents developed internally and through ChatGPT.
As AI agent use surged, the company's CISO grew concerned about the lack of visibility and policy enforcement, so the company extended its existing Astrix partnership to govern GPTs as part of its NHI strategy.
The Challenge: Accelerated AI Agent Growth
The organization piloted three Agentic AI platforms—Vertex AI, ChatGPT Enterprise, and Glean—issuing enterprise licenses to 200 developers.
Within just 1.5 months, developers had created 400 GPTs in ChatGPT Enterprise:
- 240 published GPTs
- 160 drafts
The critical question arose: “What do these GPTs have access to?”
At this stage, leaders realized they were already losing visibility into their AI environment.
The Solution: Astrix AI Agent Governance
Astrix deployed governance and monitoring capabilities to provide full oversight, including:
- Comprehensive GPT inventory mapping with linked NHIs, secrets, and permissions
- Detection of shared files and connected external systems
- Real-time alerts on access and usage changes
- Policy recommendations (e.g., eliminating public link-based sharing); a sketch of this kind of check follows the list
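Astrix's detection logic is not public, so the following is only a minimal sketch of the shape of the checks listed above: a hypothetical inventory record per GPT (sharing mode, granted scopes, connected systems) and a policy function that flags public link-based sharing, admin-level scopes, and direct connections to sensitive systems. The GPTRecord structure, field names, and keyword lists are assumptions for illustration, not Astrix's actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record for one GPT; a real inventory would link
# each agent to its NHIs, secrets, and granted permissions.
@dataclass
class GPTRecord:
    name: str
    sharing: str                                        # "private", "workspace", or "public_link"
    scopes: list[str] = field(default_factory=list)     # OAuth/API scopes granted to the agent
    connected_systems: list[str] = field(default_factory=list)
    shared_files: list[str] = field(default_factory=list)

# Illustrative policy: no public link sharing, no admin-level scopes,
# and flag any agent wired directly into sensitive external systems.
ADMIN_SCOPE_HINTS = ("admin", "manage", "write:all")
SENSITIVE_SYSTEMS = {"bigquery", "jira", "atlassian"}

def flag_policy_violations(record: GPTRecord) -> list[str]:
    findings = []
    if record.sharing == "public_link":
        findings.append("public link-based sharing enabled")
    for scope in record.scopes:
        if any(hint in scope.lower() for hint in ADMIN_SCOPE_HINTS):
            findings.append(f"admin-level scope granted: {scope}")
    for system in record.connected_systems:
        if system.lower() in SENSITIVE_SYSTEMS:
            findings.append(f"direct access to external system: {system}")
    return findings

if __name__ == "__main__":
    gpt = GPTRecord(
        name="sales-insights-gpt",
        sharing="public_link",
        scopes=["jira:admin"],
        connected_systems=["BigQuery"],
    )
    for finding in flag_policy_violations(gpt):
        print(f"[ALERT] {gpt.name}: {finding}")
```

In practice, flags like these would feed the real-time alerting described above rather than being printed to stdout.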
Findings in the First Week
- 250+ GPTs identified in the workspace
- 25% of agents were accessible by all users, some with direct access to production BigQuery tables
- Sensitive files used as training data by public GPTs
- GPTs with admin-level scopes linked to external platforms such as Jira
- 10% with direct API access to external systems (e.g., BigQuery, Atlassian)
- One high-severity case: a GPT connected to BigQuery exposed PII through a developer-issued credential, which the CISO immediately flagged as a nightmare scenario (see the verification sketch below)
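The BigQuery exposure above is the kind of finding a team can independently verify with Google's own tooling. The sketch below uses the public google-cloud-bigquery client to list which principals hold roles on a production table and to mark obviously broad members; it is not Astrix's detection method, and the project, dataset, and table names are placeholders. Running it requires GCP credentials with permission to read the table's IAM policy.

```python
# Manual spot check: list who can access a production BigQuery table
# via its IAM policy, marking overly broad members.
from google.cloud import bigquery

OVERLY_BROAD_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def audit_table_readers(table_id: str) -> None:
    client = bigquery.Client()
    policy = client.get_iam_policy(table_id)
    for binding in policy.bindings:
        print(f"{binding['role']}:")
        for member in binding.get("members", []):
            marker = "  [BROAD]" if member in OVERLY_BROAD_MEMBERS else ""
            print(f"  {member}{marker}")

if __name__ == "__main__":
    # Placeholder table reference; substitute a real project.dataset.table.
    audit_table_readers("my-project.customer_data.pii_table")
```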
Impact
- High-risk GPTs identified and remediated
- Public and unrestricted access to sensitive data eliminated
- Governance controls implemented to align with compliance requirements
- Security posture strengthened, enabling safe, scalable AI adoption without loss of control
The case reinforced a clear lesson: AI agents must be governed from day one. Without proactive oversight, rapid adoption can introduce significant compliance, privacy, and security risks, especially at enterprise scale.