NHI Forum
Read full article here: https://aembit.io/blog/crewai-github-token-exposure-highlights-the-growing-risk-of-static-credentials-in-ai-systems/?utm_source=nhimg
As AI platforms grow increasingly complex and interconnected, small failures can cascade into high-impact security incidents. The recent CrewAI GitHub token exposure exemplifies how routine development activity can suddenly put critical systems at risk.
What Happened in CrewAI
When a provisioning request failed, a flaw in CrewAI's error-handling logic caused the exception response (the message a service returns when it encounters an unhandled error) to include an internal GitHub token. That token carried administrative access to all private repositories inside the organization, creating a security exposure far beyond the original malfunction.
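To make the bug class concrete, here is a minimal, hypothetical sketch of the pattern (not CrewAI's actual code): an error handler that echoes raw exception text, credential included, back to the caller, next to a safer variant that redacts known GitHub token shapes and returns only an opaque incident ID. The names `provision_repo`, `GITHUB_TOKEN`, and the handlers are illustrative placeholders.

```python
# Hypothetical illustration of this bug class, not CrewAI's actual code.

import logging
import os
import re
import uuid

GITHUB_TOKEN = os.environ.get("GITHUB_TOKEN", "ghp_example000")  # static admin token

def provision_repo(name: str) -> dict:
    # Simulate a provisioning failure whose message captures the credential.
    raise RuntimeError(f"clone failed for {name} using token {GITHUB_TOKEN}")

# Vulnerable pattern: raw exception text goes straight back to the client.
def handle_request_unsafe(name: str) -> dict:
    try:
        return provision_repo(name)
    except Exception as exc:
        return {"error": str(exc)}  # leaks whatever the message contains

# Safer pattern: redact token-shaped strings, log internally, return an opaque ID.
TOKEN_PATTERN = re.compile(r"\b(?:gh[pousr]_|github_pat_)\w+")  # GitHub token prefixes

def handle_request_safe(name: str) -> dict:
    try:
        return provision_repo(name)
    except Exception as exc:
        incident = uuid.uuid4().hex[:8]
        logging.error("provisioning failed [%s]: %s",
                      incident, TOKEN_PATTERN.sub("[REDACTED]", str(exc)))
        return {"error": "internal provisioning error", "incident_id": incident}
```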
Noma Security identified and responsibly disclosed the vulnerability. Analysis shows the incident is not unique: it reflects a systemic pattern in which long-lived machine credentials, service accounts, and automation tokens end up in places that were never designed to hold sensitive information.
A token with administrative scope acts as a privileged non-human identity, capable of accessing source code, modifying repositories, interfering with workflows, and extracting additional secrets.
Why This Is a Systemic Risk
- AI platforms now rely heavily on automation and agentic workflows, each powered by service accounts and tokens.
- Static credentials are often long-lived, shared across CI/CD pipelines, and embedded in environments that developers and tools can inadvertently expose.
- Small oversights, such as exception handlers, log statements, or debugging messages, can leak high-privilege tokens.
Research from Wiz demonstrates how pervasive this problem is: 65% of leading AI companies have unintentionally leaked secrets, including model access tokens, workflow keys, and API tokens. Many leaks originate from ordinary development activities rather than malicious actions.
Practical Safeguards Against Token Exposure
Organizations can dramatically reduce the risk of incidents like CrewAI's by adopting dynamic, short-lived identities and strict governance practices:
- Limit exception output – Ensure internal details, repository paths, and tokens are never returned in standard error responses (the redaction sketch above shows one way to enforce this).
- Replace static tokens with short-lived credentials – Issue permissions dynamically at runtime with strict scope and duration policies (see the token-minting sketch after this list).
- Enforce standards on vendors – Require third-party providers to adopt short-lived token strategies and factor token risk into security reviews.
- Rotate remaining credentials – Schedule regular rotations and restrict each token to the minimum set of tasks it needs.
- Monitor service accounts – Apply the same scrutiny to non-human identities as to human users.
- Scan historical repositories – Sweep old branches, deleted forks, and developer-owned repos for residual secrets (a minimal history-scan sketch follows below).
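One concrete way to act on the short-lived-credential bullet is GitHub App installation tokens, which GitHub expires on its own (roughly an hour) and which can be scoped down per request. A minimal sketch, assuming PyJWT and requests are installed; `APP_ID`, `INSTALLATION_ID`, and the key path are placeholders for your own GitHub App, not anything from the incident.

```python
# Sketch: mint a short-lived, scoped GitHub App installation token at runtime
# instead of storing a static PAT. Placeholder IDs; requires PyJWT + requests.

import time
import jwt        # PyJWT
import requests

APP_ID = "123456"                 # placeholder GitHub App ID
INSTALLATION_ID = "7890123"       # placeholder installation ID
PRIVATE_KEY = open("app-private-key.pem").read()

def mint_installation_token() -> str:
    now = int(time.time())
    # Short-lived app JWT (GitHub caps these at 10 minutes).
    app_jwt = jwt.encode(
        {"iat": now - 60, "exp": now + 540, "iss": APP_ID},
        PRIVATE_KEY,
        algorithm="RS256",
    )
    resp = requests.post(
        f"https://api.github.com/app/installations/{INSTALLATION_ID}/access_tokens",
        headers={
            "Authorization": f"Bearer {app_jwt}",
            "Accept": "application/vnd.github+json",
        },
        # Scope the token down to only what this job needs.
        json={"permissions": {"contents": "read"}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["token"]   # expires on its own, no rotation job required
```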
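For the history-scanning bullet, a dedicated scanner such as gitleaks or TruffleHog is the right production tool; the rough sketch below only shows the underlying idea. It enumerates every object reachable in a local clone's history, including old branches that a scan of HEAD would miss, and greps for GitHub token prefixes.

```python
# Rough sketch: sweep all reachable history in a local clone for
# GitHub-style token prefixes. Use a dedicated scanner in production.

import re
import subprocess

TOKEN_PATTERN = re.compile(rb"\b(?:gh[pousr]_|github_pat_)\w{20,}")

def scan_history() -> None:
    # List every object in the repo, including history on old branches
    # that a scan of the current working tree would miss.
    objects = subprocess.run(
        ["git", "rev-list", "--objects", "--all"],
        capture_output=True, check=True,
    ).stdout.splitlines()

    for line in objects:
        sha = line.split(b" ", 1)[0]
        blob = subprocess.run(
            ["git", "cat-file", "-p", sha],
            capture_output=True,
        ).stdout
        for match in TOKEN_PATTERN.finditer(blob):
            # Print only a prefix of the match to avoid re-leaking it.
            print(f"possible token in object {sha.decode()}: "
                  f"{match.group()[:12].decode()}...")

if __name__ == "__main__":
    scan_history()
```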
The Stakes of Identity in Agentic AI Systems
The CrewAI incident illustrates a key lesson: non-human identities are now a critical security surface. Agentic AI systems operate across code, data, and deployment pipelines. When these identities rely on long-lived tokens:
- The risk of large-scale exposure increases.
- Tokens become a single point of failure in otherwise routine processes.
- Autonomy amplifies the potential blast radius of any compromise.
The solution is a dynamic identity model: short-lived, scoped, policy-driven credentials for non-human identities. This approach reduces risk, limits exposure, and strengthens resilience as AI systems grow in scale and autonomy.