
CrewAI GitHub Token Leak Exposes Sensitive Source Code

In September 2025, researchers from Noma Labs discovered a critical security flaw in CrewAI’s platform: an internal GitHub token, with admin-level access to CrewAI’s entire GitHub infrastructure, had been accidentally exposed to users via improper exception handling.

The vulnerability, dubbed “Uncrew,” was assigned a critical severity score (CVSS 9.2), reflecting the serious risk of full repository compromise, code theft, and downstream supply-chain exposure.

CrewAI responded quickly, issuing a security patch within hours of disclosure to revoke the exposed token and fix the flawed exception-handling code.

What Happened

During a standard machine-provisioning operation within CrewAI, a backend error occurred. Instead of safely handling the exception, the platform returned a JSON error response containing a field, repo_clone_url, which embedded a full internal GitHub access token used by CrewAI for repository operations.
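CrewAI’s actual code has not been published, but the anti-pattern is easy to illustrate. The following is a hypothetical Python sketch, with a placeholder token, repository, and function name: a provisioning failure is caught, and the raw error payload, credentialed clone URL included, is handed straight back to the caller.

```python
import json

# Hypothetical reconstruction of the anti-pattern; CrewAI's actual code has
# not been published. A simulated provisioning failure surfaces the raw clone
# URL, which embeds the platform's GitHub token.
GITHUB_TOKEN = "ghp_placeholder_not_a_real_token"

def provision_machine(repo: str) -> dict:
    try:
        # Stand-in for the real provisioning call that failed in production.
        raise RuntimeError("provisioning backend unavailable")
    except RuntimeError as exc:
        # BUG: the error payload leaks the credentialed clone URL to the caller.
        return {
            "error": str(exc),
            "repo_clone_url": f"https://x-access-token:{GITHUB_TOKEN}@github.com/{repo}.git",
        }

print(json.dumps(provision_machine("example-org/internal-repo"), indent=2))
```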

Because this token had unrestricted permissions (read/write/admin) on CrewAI’s private repositories, any user who encountered that error, or intercepted the response, suddenly had full access: the ability to clone, read, modify, or exfiltrate source code and proprietary assets.

In other words: a simple provisioning failure, combined with poor error handling, exposed a high-privilege credential, and with it the entire internal codebase of CrewAI.

How It Happened

  • Exception mishandled by design – Instead of sanitizing errors, CrewAI’s code returned raw JSON including sensitive credential data on provisioning failure (a hardened handler is sketched after this list).
  • Long-lived static token with broad permissions – The leaked credential was a long-lived GitHub token, not a short-lived or scoped credential, meaning it granted full access persistently until explicitly revoked.
  • No built-in secret redaction or vault isolation – The system lacked safeguards to prevent internal tokens from being exposed through user-facing responses, logs, or error outputs.
  • Supply-chain scale amplification – Because CrewAI is used to orchestrate AI agents, automations, and development workflows, one compromised token could allow attackers to tamper with any or all downstream automation, integrations, or deployments relying on those repositories.
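A minimal sketch of the first two safeguards, assuming Python: user-facing errors stay generic, and anything resembling a secret is masked before it leaves the service. The ghp_ and github_pat_ prefixes are GitHub’s documented personal-access-token formats; the redaction rule here is illustrative, not exhaustive.

```python
import re

# Sketch of a sanitized error path. ghp_ / github_pat_ are GitHub's documented
# token prefixes; the redaction rule is illustrative only.
TOKEN_PATTERN = re.compile(r"(ghp_|github_pat_)[A-Za-z0-9_]+")

def redact(text: str) -> str:
    """Mask anything that looks like a GitHub token before it leaves the service."""
    return TOKEN_PATTERN.sub("[REDACTED]", text)

def provision_machine_safe(repo: str) -> dict:
    try:
        raise RuntimeError("provisioning backend unavailable")  # simulated failure
    except RuntimeError as exc:
        # Log full details server-side; return only a generic, redacted message.
        return {
            "error": "Provisioning failed; please retry or contact support.",
            "detail": redact(str(exc)),
        }

print(provision_machine_safe("example-org/internal-repo"))
```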

What Was at Risk

  • Full source-code exposure – Private repositories, including proprietary logic, infrastructure-as-code, and AI-agent orchestration code, could be cloned, inspected, or stolen.
  • Supply-chain compromise – Attackers gaining write access could inject backdoors, malicious dependencies, or trojan code into builds or shared libraries, affecting all users downstream.
  • Credential harvesting & escalation – Once inside, adversaries could locate further secrets (API keys, cloud credentials, database credentials) stored in the codebase or config files and use them to breach other systems (a simple scan of the kind attackers run is sketched after this list).
  • Loss of trust, IP theft, and business disruption – For a company building agentic AI systems, exposing internal code and automation pipelines risks intellectual property, corporate reputation, and operational integrity.
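To make the harvesting risk concrete, here is an illustrative Python sketch of how trivially secrets can be pulled from a cloned repository. The patterns cover GitHub’s documented token prefixes and the classic AWS access-key-ID shape; real attackers (and defenders) use tools such as trufflehog or gitleaks with far broader rule sets.

```python
import pathlib
import re

# Illustrative only: how trivially secrets can be harvested from a cloned repo.
SECRET_PATTERNS = [
    re.compile(r"(ghp_|github_pat_)[A-Za-z0-9_]{20,}"),  # GitHub tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key IDs
]

def scan_repo(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pattern in SECRET_PATTERNS:
            for match in pattern.finditer(text):
                print(f"{path}: possible secret starting {match.group()[:8]}...")

scan_repo(".")
```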

Response & Resolution

Immediately after disclosure by Noma Labs, CrewAI revoked the exposed GitHub token and patched their exception-handling logic so that sensitive data would no longer be exposed in error messages.

No public reports have surfaced indicating that the token was exploited in the wild, but the incident serves as a stark warning about the fragility of static machine identities and the risks of improper error handling.

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

What This Incident Teaches Us

The CrewAI “Uncrew” leak illustrates a fundamental risk that many organizations now face: as AI platforms, automation tools, and machine-driven workflows proliferate, non-human identities like service accounts, tokens, and automation credentials become critical attack surfaces.

Any of the following can trigger a severe breach:

  • Long-lived static tokens with broad privileges (a short-lived alternative is sketched after this list)
  • Insecure error handling or logging that leaks sensitive data
  • Mixing automation credentials with user-facing systems or APIs
  • Lack of secret management and token lifecycle governance
  • Inadequate supply-chain hygiene and code-audit practices
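As a contrast to the static token at the heart of this incident, the sketch below shows one common alternative: minting a short-lived, narrowly scoped installation token through a GitHub App. It assumes Python with the PyJWT and requests libraries; the app ID, installation ID, and key path are placeholders. Installation tokens expire after one hour, so a leaked one has a bounded blast radius.

```python
import time

import jwt        # PyJWT, installed as: pip install "PyJWT[crypto]"
import requests

# Placeholders: substitute your own GitHub App's values.
APP_ID = "123456"
INSTALLATION_ID = "7890123"
PRIVATE_KEY = open("app-private-key.pem").read()

def installation_token() -> str:
    now = int(time.time())
    # App JWTs may live at most 10 minutes; iat is backdated to absorb clock skew.
    app_jwt = jwt.encode(
        {"iat": now - 60, "exp": now + 540, "iss": APP_ID},
        PRIVATE_KEY,
        algorithm="RS256",
    )
    resp = requests.post(
        f"https://api.github.com/app/installations/{INSTALLATION_ID}/access_tokens",
        headers={
            "Authorization": f"Bearer {app_jwt}",
            "Accept": "application/vnd.github+json",
        },
        # Scope the token down further than the app's defaults if desired.
        json={"permissions": {"contents": "read"}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["token"]  # expires automatically after one hour
```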

In modern DevSecOps and AI-driven development environments, treating machine identities with the same rigor as human credentials is no longer optional; it’s essential.