NHI Foundation Level Training Course Launched
Poll – Does Agentic AI Security = NHI Security?

The Non-Human Identity Management Group recently launched a LinkedIn poll asking: Does Agentic AI Security = NHI Security?

The results offer an interesting view of how the community perceives the relationship between Agentic AI security and NHI security:

Poll - Does Agentic AI Security = NHI Security? by NHI Mgmt Group
  • Yes – 35%
  • No – 65%

It is no surprise that the majority responded “No,” reflecting that most professionals recognize that securing autonomous AI agents involves different considerations than securing Non-Human Identities. That said, 35% of respondents answered “Yes,” showing there is still confusion, or at least perceived overlap, in how the community thinks about the security of AI agents versus NHIs.

This poll highlights a growing awareness of the distinct challenges in NHI security, while also suggesting a need for further education around how Agentic AI security intersects with, but does not fully equal, NHI security. As Agentic AI adoption grows, understanding this distinction will be critical for organizations managing both AI agents and NHIs safely.

17,000+ Secrets Exposed in Public GitLab Repositories — What Went Wrong and How to Fix It

In late November 2025, a major security analysis revealed a startling reality: public repositories on GitLab Cloud have leaked over 17,000 live secrets, including API keys, access tokens, cloud credentials, and more.

The scan covered about 5.6 million public repositories, uncovering exposed secrets across more than 2,800 domains.

This massive exposure underscores how widespread and dangerous “secrets sprawl”, the accumulation of hard-coded credentials in public repos, has become.

What Happened

Security researcher Luke Marshall used the open-source tool TruffleHog to scan every public GitLab Cloud repository. He automated the process: repository names were enumerated via GitLab’s public API, queued in AWS Simple Queue Service (SQS), then scanned by AWS Lambda workers running TruffleHog, all within about 24 hours and costing roughly $770 in cloud compute.
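
The enumeration stage of such a pipeline can be sketched in a few lines. This is an illustrative sketch, not Marshall’s actual tooling: it pages through GitLab’s public projects endpoint (`/api/v4/projects`), which is a real API, while the injectable `fetch` parameter is an assumption added so the function can be exercised without network access.

```python
import json
from urllib.request import urlopen

def enumerate_public_repos(fetch=None, base_url="https://gitlab.com", max_pages=3):
    """Yield clone URLs for public GitLab projects, one API page at a time.

    `fetch` is injectable for offline testing; by default it performs a
    real HTTP GET against GitLab's REST API.
    """
    if fetch is None:
        fetch = lambda url: json.load(urlopen(url))
    for page in range(1, max_pages + 1):
        url = (f"{base_url}/api/v4/projects"
               f"?visibility=public&per_page=100&page={page}")
        batch = fetch(url)
        if not batch:  # an empty page means the project list is exhausted
            break
        for project in batch:
            yield project["http_url_to_repo"]
```

In the reported scan, each discovered URL would then be queued (e.g., to SQS) for a worker to clone and run TruffleHog against.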

The result: 17,430 verified live secrets, nearly three times the number found in a similar scan of another platform.

These secrets included sensitive credentials like cloud API keys, tokens for cloud services, and other access credentials.

In short: thousands of developers inadvertently committed real secrets to public repositories; that data is now exposed for anyone to find and misuse.

How It Happened

  • Human error + insecure practices – Many developers or teams treat Git repositories as just code storage, not secret vaults. As a result, credentials (API keys, cloud secrets, tokens) get committed alongside code.
  • Public repository exposure by default or misconfiguration – Some repos are intentionally public (open-source), others may have been mis-marked as private, or visibility changed over time. Once public, everything becomes accessible.
  • Lack of secret hygiene and automated detection – Without automated secret-scanning tools integrated into CI/CD or repository workflows, exposed secrets remain undetected, sometimes for years.
  • Scale & automation exacerbate the risk – With millions of repos and many contributors, the probability of human error is high. Coupled with the fact that secrets often remain valid (long-lived credentials), this creates a large attack surface.

Possible Impact & Risks

The exposure of 17,000+ secrets across public repositories has major implications for both individual developers and organizations:

  • Credential theft leading to account takeover – Cloud API keys, tokens, or credentials could be used by attackers to hijack cloud services, spin up infrastructure, exfiltrate data, or launch attacks.
  • Supply-chain and downstream attacks – Public projects often serve as dependencies for other codebases; leaked secrets in one repo can jeopardize entire downstream ecosystems.
  • Long-term undetected compromise – Since credentials tend to stay valid unless rotated, exposed secrets may have already been harvested, even if no breach has yet been observed.
  • Reputational damage for teams and organizations – Leaks from public repos erode trust and may cause clients or partners to reconsider engagements.
  • Regulatory or compliance fallout for enterprises – Organizations that inadvertently publish credentials tied to sensitive data or infrastructure could face compliance risks, especially in regulated industries.

Recommendations

If you manage code or cloud infrastructure, or create software that depends on public repositories, now is the time to act:

  1. Audit all repositories, public and private – Scan for hard-coded secrets, credentials, tokens, and API keys. Treat every repo as a potential liability.
  2. Use automated secret-scanning tools – Integrate tools like TruffleHog, secret-scanners, or dedicated secret-management platforms into your CI/CD pipelines to catch leaks before code is merged.
  3. Rotate all exposed credentials immediately – If you find secrets in code, rotate or revoke them. Treat any exposed credential as compromised, even if it seems unused.
  4. Adopt secrets-management best practices – Use vaults / secrets managers rather than hard-coding credentials. Use ephemeral or short-lived tokens where possible.
  5. Enforce least-privilege and zero-trust – Limit what each token or API key can do. Avoid giving broad privileges by default.
  6. Educate developers and teams – Make secret-hygiene part of code reviews and development culture. Ensure everyone understands the risks of committing secrets.
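
A minimal version of the scanning step in recommendation 2 can be written with nothing but regular expressions. The two patterns below are illustrative assumptions covering well-known key formats; production scanners such as TruffleHog ship hundreds of detectors and additionally verify matches against live APIs.

```python
import re

# Illustrative patterns only; real scanners use far larger detector sets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "gitlab_pat": re.compile(r"\bglpat-[0-9A-Za-z_\-]{20}\b"),
}

def scan_text(text):
    """Return (detector_name, matched_string) pairs for suspected secrets."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        hits.extend((name, m.group(0)) for m in pattern.finditer(text))
    return hits
```

Run against every file in a pre-commit hook or CI job, even this crude filter catches the most common leak: a cloud key pasted straight into source.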

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Conclusion

The discovery of over 17,000 live secrets exposed in public GitLab repositories is a stark wake-up call. It shows how modern development practices, when paired with human error and outdated credential management, can create massive security risks that remain invisible until it’s too late.

This isn’t a flaw in GitLab itself; it’s a systemic issue of secrets sprawl and poor hygiene. The silver lining: unlike a software vulnerability, this is a problem teams can fix by auditing their repos, rotating credentials, and putting real secret management practices in place.

If your organization relies on code repositories, cloud services, or third-party integrations, it’s time to treat every secret as a high-value identity. Because once secrets leak, the attackers don’t need exploits, just access.

Inside the Shai-Hulud Campaign: npm Malware Attack Exposes Secrets on GitHub

In November 2025, security researchers uncovered a widespread supply-chain attack targeting the JavaScript ecosystem. A new malware strain named Shai-Hulud was found infecting over 500 npm packages, silently harvesting sensitive information from developer environments and then exfiltrating secrets to private GitHub repositories controlled by the attackers. The breach highlights once again how fragile the open-source supply chain has become, and how easily compromised packages can escalate into large-scale credential theft.

What Happened?

The malicious campaign revolved around compromised npm packages that developers downloaded and integrated into their projects without knowing they were weaponized. Once installed, the infected packages executed malicious code that harvested:

  • API keys
  • Environment variables
  • Access tokens
  • Cloud provider secrets
  • GitHub authentication credentials

The stolen data was then transmitted to remote GitHub repositories owned by the attackers.
Because npm packages are widely reused via dependency chains, developers were compromised even if they never installed the malicious package directly: a dependency of a dependency was enough to trigger the infection.
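
The transitive-exposure problem can be checked mechanically. The sketch below (an illustration, not an official npm tool) walks the nested `dependencies` structure found in older `package-lock.json` formats and reports the path to every occurrence of a known-bad package name, however deep it sits.

```python
def find_compromised(dep_tree, bad_names, path=()):
    """Recursively search a package-lock-style dependency tree.

    `dep_tree` maps package name -> {"dependencies": {...}}; returns a
    human-readable path for every known-bad package found at any depth.
    """
    hits = []
    for name, meta in dep_tree.items():
        here = path + (name,)
        if name in bad_names:
            hits.append(" > ".join(here))
        hits.extend(find_compromised(meta.get("dependencies", {}), bad_names, here))
    return hits
```

Feeding it the published list of Shai-Hulud package names would reveal whether a project pulls any of them in, even indirectly.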

How It Happened

npm Malware Attack pathway

The Shai-Hulud malware infiltrated the npm ecosystem through malicious package uploads that imitated legitimate libraries. Attackers used techniques like:

  • Typosquatting – publishing packages with names nearly identical to trusted dependencies
  • Dependency hijacking – uploading packages using abandoned or expired namespace names
  • Social trust exploitation – inserting malicious updates into previously safe packages

Once downloaded, the malware ran during the package installation lifecycle (via postinstall scripts), enabling it to execute automatically inside developer environments with no manual action required.

From there, Shai-Hulud scanned local machines for secrets commonly used in development workflows and pushed them in bulk to the attackers’ hidden GitHub repositories.
Because the exfiltration traffic blended into normal GitHub API communication, it was extremely difficult for organizations to detect the theft in real time.

What Was Compromised?

The full scope of stolen data is still under investigation, but security research shows that the malware targeted:

  • AWS, Azure, and GCP access keys
  • GitHub and GitLab personal access tokens
  • Environment variables used for CI/CD
  • API keys for third-party SaaS platforms
  • Database connection strings
  • OAuth tokens

Given how many npm packages depend on other libraries, the total exposure could impact thousands of organizations across multiple ecosystems, not just JavaScript projects.

Possible Impacts

If the stolen secrets are used successfully, organizations may face:

  • Unauthorized access to cloud environments
  • Source code and intellectual property theft
  • CI/CD pipeline compromise
  • Lateral movement into internal networks
  • Creation of fraudulent infrastructure using real organization credentials
  • Downstream customer breaches through trusted integrations
  • Ransom, extortion, or data destruction attacks

Recommendations

To reduce risk and respond effectively:

  • Rotate all secrets stored in development environments and CI/CD pipelines
  • Audit GitHub access tokens and revoke suspicious or unused ones
  • Scan source code and repos for embedded secrets using automated tools
  • Implement a centralized secrets management solution (e.g., Vault, AWS Secrets Manager, Doppler)
  • Enable dependency pinning and dependency integrity checks
  • Adopt software composition analysis (SCA) and SBOMs to monitor open-source changes
  • Block automatic execution of postinstall scripts unless required
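
The last recommendation can be partially automated. The helper below is a hedged sketch: it walks an installed `node_modules` tree and lists every package whose `package.json` declares an npm lifecycle script (`preinstall`, `install`, `postinstall`), i.e., code that would have run automatically at install time.

```python
import json
import os

# npm runs these scripts automatically during `npm install`.
LIFECYCLE = {"preinstall", "install", "postinstall"}

def packages_with_install_scripts(node_modules):
    """Return [(package_name, [script_names])] for every package under
    `node_modules` that declares an automatic lifecycle script."""
    flagged = []
    for root, _dirs, files in os.walk(node_modules):
        if "package.json" not in files:
            continue
        with open(os.path.join(root, "package.json")) as f:
            try:
                manifest = json.load(f)
            except json.JSONDecodeError:
                continue  # skip malformed manifests
        scripts = sorted(LIFECYCLE & set(manifest.get("scripts", {})))
        if scripts:
            flagged.append((manifest.get("name", root), scripts))
    return flagged
```

Reviewing this list before enabling script execution (or after installing with `--ignore-scripts`) narrows the audit to the packages that actually run code at install time.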

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Conclusion

The Shai-Hulud attack is another reminder that cybercriminals are increasingly targeting the software supply chain, not enterprise networks directly. Developers and CI/CD environments now represent one of the most valuable attack surfaces because stealing a single set of credentials can unlock access to cloud accounts, infrastructure, and entire customer ecosystems.

Organizations that treat secret management, dependency security, and supply-chain monitoring as optional will remain exposed. Protecting development pipelines, and the NHIs (non-human identities) inside them, is now a core security responsibility, not a nice-to-have.

Code Formatting Tools Cause Massive Credential Leaks in Enterprises

In November 2025, security researchers discovered that a code beautifier tool, used to format and clean source code, inadvertently exposed sensitive credentials from some of the world’s most security-critical organizations, including banks, government agencies, and major tech companies. The discovery underscores the unexpected risks that development tools and automation can introduce into enterprise security.

These tools, widely trusted by developers to improve code readability, were found to transmit or expose embedded secrets in source code, including passwords, API keys, and cloud credentials, to external services or temporary storage locations. The breach demonstrates how even well-intentioned developer utilities can become vectors for credential leaks.

What Happened

Developers across multiple industries use code beautifiers and formatters to automatically standardize code. However, recent analysis shows that certain tools:

  • Processed files containing hard-coded secrets without detecting them.
  • Stored or transmitted parts of code to remote servers for processing, inadvertently exposing sensitive information.
  • Failed to sanitize outputs or logs, leaving secrets in temporary or publicly accessible locations.

The breach was discovered after security researchers identified patterns of leaked credentials linked to these tools. The exposed data included credentials for cloud services, internal databases, APIs, and even secure admin panels.

In some cases, the compromised credentials belonged to banks and government agencies, highlighting how even routine developer tools can pose high-stakes security risks when handling sensitive code.

How It Happened

The leak occurred due to a combination of factors:

  1. Hard-coded secrets in source code – Developers sometimes store passwords, API keys, or tokens directly in their source files for convenience.
  2. Tool behavior – The code beautifiers analyzed, formatted, or processed these files using cloud-based services or shared environments, inadvertently transmitting sensitive information.
  3. Lack of detection – The tools did not include automatic secret detection or redaction features.
  4. Chain of trust issues – Organizations relied on trusted development tools without fully auditing their operations, assuming local processing was safe.

This situation demonstrates that non-human identities and automated tools (like beautifiers or formatters) can become unmonitored attack surfaces if not properly governed.
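
One mitigation consistent with factor 2 above is to redact suspected secrets locally before any code is handed to a remote formatting service. A minimal sketch follows; the single AWS-key pattern is an illustrative stand-in for a full detector set, and the placeholder string is an arbitrary choice.

```python
import re

# One illustrative pattern; a real redactor would cover many credential formats.
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

def redact_before_upload(source):
    """Mask suspected credentials so they never leave the developer's machine,
    even if the formatting tool sends code to a cloud service."""
    return AWS_KEY.sub("<REDACTED_AWS_KEY>", source)
```

Of course, the stronger fix is the one in the recommendations: keep secrets out of source files entirely, so there is nothing for a tool to leak.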

What Was Compromised

Exposed data includes:

  • Active Directory credentials
  • Database and cloud credentials
  • Private keys
  • Code repository tokens
  • CI/CD secrets
  • Payment gateway keys
  • API tokens
  • SSH session recordings
  • Large amounts of personally identifiable information (PII), including know-your-customer (KYC) data
  • An AWS credential set used by an international stock exchange’s Splunk SOAR system
  • Credentials for a bank exposed by an MSSP onboarding email

Even a single leaked key can give attackers lateral access to multiple systems, making this breach particularly dangerous.

Possible Impacts

The compromise of credentials through code beautifiers could have severe repercussions:

  • Unauthorized access to cloud and internal systems
  • Theft of sensitive customer or citizen data
  • Disruption of critical services or infrastructure
  • Lateral movement across corporate or government networks
  • Reputational and regulatory consequences for affected organizations

Recommendations

Organizations and developers should take immediate action to mitigate risks:

  1. Audit and rotate credentials that may have been exposed via code formatting tools.
  2. Avoid hard-coding secrets in source code; use secure vaults and environment variables.
  3. Evaluate all development tools for security practices, especially those using cloud or remote processing.
  4. Integrate automated secret scanning into CI/CD pipelines.
  5. Educate developers about the risks of using third-party utilities with sensitive code.
  6. Adopt least-privilege principles for all machine identities.

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Conclusion

The code beautifier breach is a stark reminder that even tools designed to help developers can unintentionally compromise security. With thousands of secrets at risk across banks, government agencies, and tech organizations, this incident emphasizes the importance of secrets hygiene, developer awareness, and non-human identity governance.

Organizations must treat developer tools as potential attack surfaces and implement continuous monitoring, secret management, and secure coding practices to prevent similar incidents.

TruffleNet BEC Attack: Over 800 Hosts Compromised Using Stolen AWS Credentials

Security researchers have uncovered a large and sophisticated campaign, dubbed “TruffleNet,” in which attackers are abusing stolen Amazon Web Services (AWS) credentials to hijack AWS’s Simple Email Service (SES) and launch Business Email Compromise (BEC) attacks. Rather than relying on malware or phishing alone, the adversaries are weaponizing legitimate cloud infrastructure, making their emails look far more trustworthy. The scale is massive: over 800 unique hosts across 57 different networks are participating in the operation.

What Happened

The TruffleNet campaign begins with attackers using TruffleHog, an open-source secret-scanning tool, to validate AWS credentials they’ve compromised. Once they confirm valid AWS access keys, they execute reconnaissance: the compromised hosts make a GetCallerIdentity API call to check who owns the credentials.

If the credentials pass that test, the next step is to call GetSendQuota from the AWS SES API, which helps the attackers determine how many emails they can send using SES. Many of the IP addresses used for this activity have never been flagged by reputation or antivirus systems, suggesting that the attackers built their infrastructure specifically for this campaign.

After reconnaissance, the adversaries abuse SES: they create verified sending identities using compromised domains and stolen DKIM (DomainKeys Identified Mail) cryptographic keys from previously compromised WordPress sites. With these identities in place, they execute BEC attacks, sending highly convincing phishing or invoice emails that appear to come from legitimate companies.

In one documented case, attackers targeted companies in the oil and gas sector, sending fake vendor onboarding invoices claiming to be from ZoomInfo, and requesting payments of $50,000 via ACH. To make the scam even more convincing, they included W-9 forms with real Employer Identification Numbers (EINs), and used typosquatted domains to handle responses.

How It Happened

TruffleNet BEC Attack Campaign
  1. Credential Validation & Reconnaissance – Attackers start with a large trove of stolen AWS keys. Using TruffleHog, they test each key to find the ones that still work.
  2. SES Capability Check – Once valid keys are found, the campaign sends GetCallerIdentity and GetSendQuota API calls to assess the identity and email-sending capabilities associated with those credentials.
  3. Containerized Attack Infrastructure – The operation relies on more than 800 hosts, many of which run Portainer, a Docker/Kubernetes management tool. This gives attackers a centralized way to manage and coordinate their malicious nodes.
  4. Email Identity Fabrication – With SES access, the attackers import DKIM signing keys stolen from compromised WordPress websites. They then use those keys to register email identities in SES, making their malicious emails appear legitimate.
  5. BEC Execution – Finally, TruffleNet sends business email compromise messages under trusted-looking domains. These messages are sophisticated, complete with W-9 forms and legitimate-looking invoices, designed to trick financial teams.

Possible Impact

  • Highly Convincing Phishing & Fraud – Using real SES infrastructure and verified DKIM domains makes the BEC scams extremely hard to distinguish from legitimate emails.
  • Financial Loss – Victims could be tricked into sending large payments (e.g., $50,000 ACH transfers) to attackers.
  • Mass Credential Abuse – The campaign’s size (800+ hosts) shows how many stolen AWS credentials are being tested and abused in parallel.
  • Long-Term Cloud Risk – Any AWS account with compromised SES credentials could be repurposed for ongoing fraud, phishing, or further infrastructure abuse.
  • Reputation Damage – Organizations whose SES accounts are abused may find their domains blacklisted or flagged as sources of phishing.

Recommendations

To defend against TruffleNet-style attacks, security teams should consider the following actions:

  • Audit and Rotate Credentials Frequently – Regularly scan for exposed keys (in code repositories, logs, etc.) and rotate them.
  • Restrict SES Permissions – Apply the principle of least privilege: only give SES access to identities that truly need it.
  • Enable Cloud Logging & Monitoring – Use AWS CloudTrail to monitor calls like GetCallerIdentity and SES API usage for unusual patterns.
  • Alert on Anomalous SES Behavior – Set up behavioral alerts for sudden spikes in sending quotas, new verified identities, or new DKIM configurations.
  • Secure Your Domains – Protect your WordPress or other web assets from compromise, especially if they sign DKIM keys for SES domains.
  • Train Financial Teams – Teach accounts payable and finance teams to verify vendor payment requests out-of-band (phone calls, known contacts), even when the email looks “official.”
  • Use Secret-Scanning Tools – Integrate tools like TruffleHog or similar secret scanners in your CI/CD pipeline to catch exposed AWS keys before they’re exploited.
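
The CloudTrail monitoring recommendation can be sketched as a simple log filter. This is an illustration under stated assumptions: event records are dicts shaped like CloudTrail’s JSON (`eventName`, `sourceIPAddress` are real CloudTrail fields), and the watched-call set mirrors the reconnaissance sequence described above; the trusted-IP allowlist is something each organization would maintain itself.

```python
# API calls observed in the TruffleNet reconnaissance phase.
WATCHED_CALLS = {"GetCallerIdentity", "GetSendQuota"}

def flag_recon_events(events, trusted_ips):
    """Return CloudTrail-style events that look like credential reconnaissance:
    a watched API call arriving from an IP outside the trusted set."""
    return [
        e for e in events
        if e.get("eventName") in WATCHED_CALLS
        and e.get("sourceIPAddress") not in trusted_ips
    ]
```

A `GetSendQuota` call from an IP your organization has never used is exactly the kind of low-noise, high-signal alert this campaign would trip.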

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Conclusion

The TruffleNet campaign is a powerful reminder that identity compromise is now a core threat vector for cloud systems. By weaponizing stolen AWS credentials, attackers are turning trusted services like SES into tools for large-scale fraud and BEC.

This operation isn’t just about phishing; it’s a sophisticated, automated cloud abuse campaign built on legitimacy, scale, and stealth. As cloud adoption continues to grow, organizations must treat credential security and API access with the same rigor they apply to traditional perimeter defenses.

Hardcoded Credentials in SAP SQL Anywhere Monitor Expose Enterprises to Critical Remote Access Risk

On November 11, 2025, SAP issued a security update patching a severe flaw in SQL Anywhere Monitor (non-GUI version), tracked as CVE-2025-42890. The vulnerability stemmed from hard-coded credentials embedded in the monitoring tool, a major security misstep given the level of access the monitor holds.

Because the flaw earned a CVSS severity score of 10.0 (maximum), SAP and security researchers flagged it as critical, a must-patch for all affected environments.

What Happened

SQL Anywhere Monitor is used to oversee and manage distributed or remote databases. The non-GUI variant (likely deployed in headless or automated appliances) shipped with a built-in default credential embedded directly in the code, granting privileged access to the monitoring database. This “baked-in” credential was never meant for external exposure.

With those credentials easily available to any attacker who discovers them, threat actors could potentially connect to the monitoring database remotely, even without prior authentication as a legitimate user. Once inside, the risk wasn’t just limited to viewing data: the flaw allowed arbitrary code execution. In other words, attackers could gain full control over the system, compromise data integrity, and potentially pivot into internal networks.

Given that many organizations deploy SQL Anywhere across critical systems, sometimes in unattended environments, such a vulnerability posed a high-stakes risk.

How It Happened

The root cause was surprisingly low-tech: default credentials hard-coded inside the application code, never intended for public or high-security deployment. Specifically:

  • The credentials were embedded in a Java class inside migrator.jar, used by Monitor’s non-GUI version to connect to its internal database.
  • Because the monitor shipped ready-to-use, with pre-configured database files and no requirement for administrators to change credentials, many deployments remained vulnerable for an extended period.
  • The vulnerability allowed remote, unauthenticated access over the network. An attacker just needed to know the default credentials and reach the service to exploit it.

In response to the security advisory, SAP removed the pre-configured monitoring database and the hard-coded credentials in the patched version. As a result, running instances are no longer inherently vulnerable, assuming administrators applied the update or removed the vulnerable Monitor entirely.

What Was at Risk

By exploiting the hardcoded credentials flaw, attackers could have done the following:

  • Gain unauthorized access to monitoring systems and database internals
  • Execute arbitrary code, effectively compromising the host system
  • Disrupt database monitoring, data integrity, and availability
  • Use the compromised monitor as a foothold to lateral-move within network environments
  • Exfiltrate sensitive data, tamper with stored records, or sabotage backups, all with minimal detection

Given the severity, any organization relying on SQL Anywhere Monitor (non-GUI) faced a real threat to confidentiality, integrity, and availability of their database environments.

What Organizations Should Do

  1. Immediately apply SAP’s November 2025 update that patches CVE-2025-42890, or remove SQL Anywhere Monitor if it is not required.
  2. Audit existing deployments: search for any instances of the vulnerable Monitor, especially on unattended appliances or legacy systems.
  3. Restrict network access to monitoring services until patched: block public access and limit connections to trusted internal networks.
  4. Rotate credentials and secrets if the default credentials were ever used or exposed.
  5. Review logs and audit trails for suspicious connections to the monitor, especially attempts prior to patching, and investigate any possible unauthorized activity.
  6. Adopt a no-embedded-credentials policy: enforce secret management, avoid shipping credentials in code or binaries, and require per-deployment credential configuration.
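
The no-embedded-credentials policy amounts to a small but strict coding rule: credentials are resolved at runtime from the deployment environment (or a secrets manager) and the application fails fast when they are absent, never falling back to a baked-in default. A minimal sketch, using hypothetical variable names:

```python
import os

def load_monitor_credentials():
    """Resolve database credentials at runtime instead of shipping them
    inside the artifact. MONITOR_DB_USER / MONITOR_DB_PASSWORD are
    hypothetical names chosen for illustration."""
    try:
        return os.environ["MONITOR_DB_USER"], os.environ["MONITOR_DB_PASSWORD"]
    except KeyError as missing:
        # Fail fast: a missing credential is a deployment error, not a
        # reason to fall back to a hard-coded default.
        raise RuntimeError(f"credential not configured: {missing}") from None
```

The key design choice is the exception path: by refusing to start without per-deployment configuration, the tool cannot silently ship with a universal backdoor credential the way the vulnerable Monitor did.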

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Conclusion

This flaw in SQL Anywhere Monitor highlights a broader, often overlooked issue: many monitoring, admin, or support tools ship with default or hard-coded credentials under the assumption of “internal use only.” Once such tools are deployed, especially on network-connected appliances, that assumption breaks down quickly and becomes a major risk vector.

In modern enterprise environments, non-human and machine identities (service accounts, database monitors, automation agents) demand as much care and governance as human users. A single “backdoor” credential in an admin tool can compromise entire systems.

Therefore, simply relying on vendor defaults and internal trust is no longer enough. Organizations must treat every identity, human or not, as a potential entry point and enforce proper credential lifecycle management, segmentation, and least-privilege principles.

Copilot Studio Agents Exploited in New CoPhish OAuth Token Theft

A new phishing technique known as CoPhish was disclosed in October 2025. The attack abuses Copilot Studio agents to trick users into granting OAuth consent, allowing attackers to steal their OAuth tokens.

Because the agents are hosted on legitimate Microsoft domains and the phishing flow uses authentic-looking interfaces, victims are more likely to trust them.

Researchers from Datadog Security Labs disclosed the technique, warning that the flexibility of Copilot Studio introduces new, undocumented phishing risks that target OAuth-based identity flows. Microsoft has confirmed the issue and says it plans to fix the underlying causes in a future update.

What Happened

Attackers crafted malicious Copilot Studio agents that abuse the platform’s “demo website” functionality. Because these agents are hosted on copilotstudio.microsoft.com, the URLs appear legitimate, making it easy for victims to trust and interact with them.

When a user interacts with the agent, it can present a “Login” or “Consent” button. If the user clicks it, they are redirected to an OAuth consent screen, made to look like a valid Microsoft or enterprise permission prompt, requesting access permissions.

Once the user grants consent, their OAuth token is silently forwarded to the attacker’s infrastructure, often via an HTTP request configured inside the malicious agent. The exfiltration can use legitimate Microsoft infrastructure, making it harder to detect through conventional network monitoring.

Crucially: this is not a software vulnerability but a social‑engineering abuse of legitimate platform features (custom agents + OAuth consent flows) to steal credentials.

How the Attack Works

  1. An attacker (or compromised tenant) creates a malicious agent in Copilot Studio and enables the “demo website” sharing feature.
  2. The agent’s “Login” topic is configured to redirect users to an OAuth consent flow, often masquerading as a legitimate login / authorization prompt.
  3. The attacker distributes the agent link via phishing, email, chat, internal messages, relying on the legitimate Microsoft domain to avoid suspicion.
  4. A victim clicks “Login” and consents, unknowingly granting permissions. The OAuth token returned is immediately forwarded, silently, to an attacker-controlled backend.
  5. With the stolen token, attackers can access mail, chat, calendars, files, and other resources via OAuth/Graph APIs, potentially granting full tenant compromise depending on permissions.

Because everything is hosted under Microsoft’s own domain and appears legitimate, the attack bypasses phishing filters, domain‑based detection, and many usual safety nets.

What’s at Risk

Because OAuth tokens grant access to platforms, services, and data, stolen tokens from a successful CoPhish attack can lead to:

  • Unauthorized access to corporate resources: emails, chats, calendar, cloud files, internal documents, etc.
  • Persistent access until the token is revoked or expires, enabling long-term espionage or data theft.
  • Lateral movement: attackers with a stolen token could impersonate users, possibly including privileged users, to access more resources or escalate privileges.
  • Evading detection: because the token is exfiltrated via legitimate Microsoft infrastructure, traffic may appear benign, bypassing many standard security controls.

Given how easy it is to deploy Copilot Studio agents and how many organizations use Microsoft cloud tools, the potential blast radius is wide, from small teams to large enterprises.

What Organizations Should Do Right Now

To defend against CoPhish and similar AI‑based token‑theft attacks:

  • Enforce strict consent policies, require admin approval for any new OAuth apps or Copilot agent consents.
  • Lock down Copilot Studio, disable public sharing or “demo website” features, or restrict agent creation to trusted users only.
  • Monitor enrollment, consent, and token issuance events in your identity platform (e.g. Microsoft Entra ID) for anomalous or unexpected entries.
  • Revoke and rotate tokens periodically, especially after user role changes, or if unexpected consents arise.
  • Educate teams about the risk, treat AI agents as high‑privilege identities, not just convenience tools. Automated features can carry real security consequences.
  • Implement least‑privilege principles and granular permission scopes: only grant what’s strictly needed.

Overview

In March 2025, a major supply-chain attack compromised the popular GitHub Action tj-actions/changed-files, used by roughly 23,000 repositories.

The malicious update caused CI/CD secrets, like API keys, tokens, AWS credentials, npm/Docker credentials and more, to be dumped into build logs. For projects whose logs were publicly accessible (or visible to others), this meant secrets were exposed to outsiders.

A subsequent review confirmed leaked secrets in at least 218 repositories.

What Happened

On March 14, 2025, attackers pushed a malicious commit to the tj-actions/changed-files repository. They retroactively updated several version tags so that all versions (even older ones) referenced the malicious commit, meaning users did not need to explicitly update to a “vulnerable” version: their existing workflows were already poisoned. The compromised action included code that, when run during CI workflows, dumped the GitHub Actions runner process memory into workflow logs, exposing secrets, tokens, environment variables, and other sensitive data. On March 15, 2025, after the breach was detected, GitHub removed the compromised version, and the action was restored once the malicious code was removed.

Because many projects rely on GitHub Actions and many developers use default tags (like @v1) rather than pinning to commit SHAs, the attack had a broad reach, potentially impacting any workflow that ran the action during the infection window.

How It Happened

  • Supply-chain compromise – Attackers first compromised a bot account used to maintain the action (the @tj-actions-bot), gaining push privileges.
  • Malicious commit & tag poisoning – They committed malicious code that dumps secrets, then retroactively updated version tags so all previous and current versions were tainted.
  • Execution during CI/CD workflows – When any affected repository ran its CI workflow using the action, the malicious code executed and printed secret data into build logs.
  • Secrets exposed – Logs could contain GitHub tokens, AWS credentials, npm/Docker secrets, environment variables, and other sensitive data — instantly readable by anyone with access to the logs.
  • Wide potential blast radius – Given the popularity of the action, thousands of repositories were at risk, even those unaware of the change, underscoring the danger of trusting widely-used dependencies without lock-down.

Possible Impact & Risks

  • Secret leakage – CI/CD secrets, including API keys, cloud credentials, and tokens, exposed publicly. Attackers could use these for cloud account access, code or package registry abuse, infrastructure compromise, or further supply-chain attacks.
  • Compromise of downstream systems – With stolen credentials, attackers could breach production environments, publish malicious packages, or manipulate deployment pipelines.
  • Widespread supply-chain distrust – The breach erodes trust in open-source automation tools and GitHub Actions. Projects relying on third-party actions must now treat them as potential risk vectors.
  • Developer & enterprise exposure – Both open-source and private organizations using the compromised action may be impacted, especially if they exposed logs publicly or reused leaked secrets across systems.

Recommendations

If you use GitHub Actions, especially third-party ones, here are essential steps to protect yourself:

  • Audit your workflows – Identify if your repositories referenced tj-actions/changed-files, especially via mutable tags (e.g. @v1). If yes, treat credentials used in those workflows as potentially compromised.
  • Rotate all secrets – tokens, API keys, cloud credentials, and registry credentials that were used during the March 14–15, 2025 window.
  • Pin actions to immutable commit SHAs – Rather than version tags, to avoid retroactive tag poisoning.
  • Review all third-party actions before adding to workflows, and prefer actions from trusted authors with minimal permissions.
  • Use least-privilege tokens and ephemeral credentials in CI/CD – avoid granting broad access via long-lived secrets.
  • Restrict access to workflow logs, especially for public repositories, and avoid storing sensitive data in logs.
  • Enable secret-scanning and auditing for CI/CD pipelines, watch for suspicious logs or leak indicators.
  • Treat automation agents and CI identity (bots, actions) as non-human identities (NHIs) – apply the same governance, monitoring, and security hygiene as for real user accounts.
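To make the pinning recommendation concrete, here is a minimal sketch of a workflow audit script. It assumes workflows live under `.github/workflows`; the function names and the regexes are illustrative, not an official tool.

```python
import re
from pathlib import Path

# A "uses:" reference is treated as pinned only when it points at a full
# 40-character commit SHA; tags like @v1 can be moved retroactively by an
# attacker, which is exactly what happened to tj-actions/changed-files.
USES_RE = re.compile(r"uses:\s*([\w.-]+/[\w.-]+)@([\w.-]+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list[str]:
    """Return action references that are not pinned to an immutable commit SHA."""
    return [
        f"{action}@{ref}"
        for action, ref in USES_RE.findall(workflow_text)
        if not SHA_RE.match(ref)
    ]

def audit_repo(repo_root: str) -> dict[str, list[str]]:
    """Scan every workflow file under .github/workflows for mutable references."""
    results = {}
    for wf in Path(repo_root, ".github", "workflows").glob("*.y*ml"):
        hits = unpinned_actions(wf.read_text())
        if hits:
            results[str(wf)] = hits
    return results
```

Running `audit_repo(".")` in a repository root lists every workflow that still references an action by a mutable tag, giving you a concrete remediation worklist.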

Final Thoughts

The March 2025 breach of tj-actions/changed-files underlines a harsh but clear truth: software supply chains, including CI/CD tools and automation frameworks, are first-class attack surfaces. A single compromised action can leak secrets, expose credentials, and undermine trust in entire ecosystems.

For developers, organizations, and security teams, the lesson is urgent and unavoidable: never treat dependencies or automation tools as inherently safe. Always enforce inventory, least-privilege, version pinning, secret hygiene, and regular audits, especially for machine identities, third-party code, and CI/CD pipelines.

In July 2025, security researchers disclosed a troubling breach involving Amazon Q, Amazon’s AI-powered coding agent embedded in the Visual Studio Code extension used by nearly a million developers. A malicious actor had successfully injected a covert “data-wiping” system prompt directly into the agent’s codebase, effectively weaponizing the AI assistant itself.

What Happened

The incident began on July 13, 2025, when a GitHub user operating under the alias lkmanka58 submitted a pull request to the Amazon Q extension repository. Despite being an untrusted contributor, the PR was accepted and merged, a sign of misconfigured repository permissions or insufficient review controls within the workflow.

Inside the merged code was a malicious “system prompt” designed to manipulate the AI agent embedded in Amazon Q. The prompt instructed the agent to behave as a “system cleaner,” issuing explicit commands to delete local file-system data and wipe cloud resources using AWS CLI operations. In effect, the attacker attempted to weaponize the AI model into functioning as a destructive wiper tool.

Four days later, on July 17, 2025, Amazon published version 1.84.0 of the VS Code extension to the Marketplace, unknowingly distributing the compromised version to users worldwide. It wasn’t until July 23 that security researchers observed suspicious behavior inside the extension and alerted AWS. This triggered an internal investigation, and by the next day, AWS had removed the malicious code, revoked associated credentials, and shipped a clean update (v1.85.0).

According to AWS, the attacker’s prompt contained formatting mistakes that prevented the wiper logic from executing under normal conditions. As a result, the company states there is no evidence that any customer environment suffered data deletion or operational disruption.

The True Root Cause

What makes this breach uniquely alarming is not just the unauthorized code change; it’s the fact that the attacker weaponized the AI coding agent itself.

Unlike traditional malware, which executes code directly, this attack relied on manipulating the agent’s system-level instructions, repurposing Amazon Q’s AI behaviors into destructive actions. The breach demonstrated how:

  • AI agents can be social-engineered or artificially steered simply through malicious system prompts.
  • Developers increasingly trust AI-driven tools, giving them broad access to local machines and cloud environments.
  • A compromised AI agent becomes a powerful attacker multiplier, capable of interpreting and running harmful natural-language commands.

Even though AWS later clarified that the injected prompt was likely non-functional due to formatting issues, meaning no confirmed data loss occurred, the exposure risk alone was severe.

What Was at Risk

Had the malicious prompt executed as intended, affected users and organizations faced potentially severe consequences:

  • Local data destruction – The prompt aimed to wipe users’ home directories and local files, risking irreversible data loss.
  • Cloud infrastructure wiping – The injected commands included AWS CLI instructions to terminate EC2 instances, delete S3 buckets, remove IAM users, and otherwise destroy cloud resources tied to an AWS account.
  • Widespread distribution – With nearly one million installs, the compromised extension could have impacted a large developer population, especially those using Amazon Q for projects tied to critical infrastructure, production environments, or cloud assets.
  • Supply-chain confidence erosion – The breach undermines trust in AI-powered or open-source development tools: a single malicious commit can compromise thousands of users instantly.

Recommendations

If you use Amazon Q, or any AI-powered coding extension / agent, treat this incident as a wake-up call. Essential actions:

  • Update to the clean version (1.85.0) immediately. If you have 1.84.0 or earlier, remove it.
  • Audit extension use and permissions – treat extensions as potential non-human identities. Restrict permissions where possible; avoid granting unnecessary filesystem or cloud-access privileges.
  • Review and lock down CI/CD, dev workstations, and cloud credentials – never assume that an IDE or plugin is “safe.” Use vaults, environment isolation, and minimal permissions.
  • Vet open-source contributions carefully – apply stricter review and validation for pull requests in critical tools; avoid blindly trusting automated merges or simplified workflows.
  • Segment environments – avoid using AI extensions on machines or environments that store production data or credentials.
  • Monitor logs and cloud resource activity – watch for suspicious deletions, cloud resource termination, or unexpected CI jobs after tool updates.
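The last monitoring recommendation can be prototyped with a simple log watcher. This is a minimal sketch under stated assumptions: the watchlist below mirrors the kinds of destructive AWS CLI operations described in the injected prompt, and the patterns are hypothetical examples you would tune to your own environment.

```python
import re

# Hypothetical watchlist of destructive command patterns, similar in spirit
# to the wiper instructions injected into the Amazon Q extension.
DESTRUCTIVE_PATTERNS = [
    r"aws\s+ec2\s+terminate-instances",
    r"aws\s+s3\s+(rb|rm)\b",
    r"aws\s+iam\s+delete-user",
    r"rm\s+-rf\s+~",
]
WATCH_RE = re.compile("|".join(f"({p})" for p in DESTRUCTIVE_PATTERNS))

def flag_destructive_commands(log_lines):
    """Return (line_number, line) pairs that match any destructive pattern.

    Intended for post-update review of shell history, CI logs, or agent
    transcripts after an AI tooling update.
    """
    return [
        (i, line)
        for i, line in enumerate(log_lines, 1)
        if WATCH_RE.search(line)
    ]
```

A pattern match is not proof of malice, but a spike in destructive commands right after an extension update is exactly the signal that surfaced this incident.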

Final Thoughts

The breach of Amazon Q reveals a troubling reality: as AI tools continue to integrate deeply into development workflows, they become part of the enterprise threat landscape, not just optional helpers. A single bad commit, merged without proper checks, can transform a widely trusted extension into a potential weapon against users.

This isn’t just about one extension; it’s about the broader risks of machine identities, AI-powered tools, supply-chain trust, and code governance in modern DevOps environments. As complexity grows, so must our security practices.

Copilot Studio Agents Exploited in New CoPhish OAuth Token Theft

A new phishing technique known as CoPhish was disclosed in October 2025. This attack abuses Copilot Studio agents to trick users into granting OAuth consents and thereby steals their OAuth tokens.

Because the agents are hosted on legitimate Microsoft domains and the phishing flow uses bona fide‑looking interfaces, victims are more likely to trust them.

Researchers from Datadog Security Labs disclosed the technique, warning that the flexibility of Copilot Studio introduces new, undocumented phishing risks that target OAuth-based identity flows. Microsoft has confirmed the issue and said it plans to fix the underlying causes in a future update.

What Happened

Attackers crafted malicious Copilot Studio agents that abuse the platform’s “demo website” functionality. Because these agents are hosted on copilotstudio.microsoft.com, the URLs appear legitimate, making it easy for victims to trust and interact with them.

When a user interacts with the agent, it can present a “Login” or “Consent” button. If the user clicks it, they are redirected to an OAuth consent screen, made to look like a valid Microsoft or enterprise permission prompt, requesting access permissions.

Once the user grants consent, their OAuth token is silently forwarded to the attacker’s infrastructure, often via an HTTP request configured inside the malicious agent. The exfiltration can use legitimate Microsoft infrastructure, making it harder to detect through conventional network monitoring.

Crucially: this is not a software vulnerability but a social‑engineering abuse of legitimate platform features (custom agents + OAuth consent flows) to steal credentials.

What’s at Risk

Because OAuth tokens grant access to platforms, services, and data, stolen tokens from a successful CoPhish attack can lead to:

  • Unauthorized access to corporate resources: emails, chats, calendar, cloud files, internal documents, etc.
  • Persistent access until the token is revoked or expires — enabling long-term espionage or data theft.
  • Lateral movement: attackers with a stolen token could impersonate users — possibly including privileged users — to access more resources or escalate privileges.
  • Evading detection: because the token is exfiltrated via legitimate Microsoft infrastructure, traffic may appear benign, bypassing many standard security controls.

Given how easy it is to deploy Copilot Studio agents and how many organizations use Microsoft cloud tools, the potential blast radius is wide — from small teams to large enterprises.

How the Attack Works

  1. An attacker (or compromised tenant) creates a malicious agent in Copilot Studio and enables the “demo website” sharing feature.
  2. The agent’s “Login” topic is configured to redirect users to an OAuth consent flow, often masquerading as a legitimate login / authorization prompt.
  3. The attacker distributes the agent link via phishing, email, chat, internal messages, relying on the legitimate Microsoft domain to avoid suspicion.
  4. A victim clicks “Login” and consents, unknowingly granting permissions. The OAuth token returned is immediately forwarded, silently, to an attacker-controlled backend.
  5. With the stolen token, attackers can access mail, chat, calendars, files, and other resources via OAuth/Graph APIs, potentially granting full tenant compromise depending on permissions.

Because everything is hosted under Microsoft’s own domain and appears legitimate, the attack bypasses phishing filters, domain‑based detection, and many usual safety nets.

Why It Matters

  • The CoPhish attack is a major wake-up call: AI agent platforms like Copilot Studio can be weaponized, not merely used for convenience.
  • It exposes a gap in current security: legitimate features (custom agents, OAuth consent) become attack vectors through social engineering and workflow abuse.
  • As more organizations adopt AI-based automation and assistants, the risk associated with misuse of OAuth tokens grows — token theft can lead to data breaches, compliance violations, and wide-scale compromise.
  • Traditional security measures (firewalls, network monitoring, email filters) are insufficient, because the malicious activity leverages trusted infrastructure and legitimate domains.

What Organizations Should Do Right Now

To defend against CoPhish and similar AI‑based token‑theft attacks:

  • Enforce strict consent policies: require admin approval for any new OAuth apps or Copilot agent consents.
  • Lock down Copilot Studio: disable public sharing or “demo website” features, or restrict agent creation to trusted users only.
  • Monitor enrollment, consent, and token issuance events in your identity platform (e.g. Microsoft Entra ID) for anomalous or unexpected entries.
  • Revoke and rotate tokens periodically, especially after user role changes or if unexpected consents arise.
  • Educate teams about the risk: treat AI agents as high‑privilege identities, not just convenience tools. Automated features can carry real security consequences.
  • Implement least‑privilege principles and granular permission scopes: only grant what’s strictly needed.
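The consent-monitoring recommendation can be sketched as a small triage function. This is a minimal, hedged example: the event shape (dicts with `activity` and `scopes` keys) loosely mirrors an exported Entra ID audit record, and the high-risk scope list is an illustrative assumption, not an official baseline.

```python
# Illustrative set of delegated Microsoft Graph scopes that would give a
# stolen token broad reach; tune this list to your tenant's risk appetite.
HIGH_RISK_SCOPES = {
    "Mail.Read",
    "Mail.ReadWrite",
    "Files.ReadWrite.All",
    "Directory.ReadWrite.All",
}

def risky_consents(events):
    """Return consent-grant events that request any high-risk scope.

    `events` is a list of dicts with an "activity" string and a
    space-separated "scopes" string, roughly as exported from an
    identity-platform audit log.
    """
    flagged = []
    for ev in events:
        if ev.get("activity") != "Consent to application":
            continue  # only consent grants matter for CoPhish-style theft
        requested = set(ev.get("scopes", "").split())
        if requested & HIGH_RISK_SCOPES:
            flagged.append(ev)
    return flagged
```

In practice you would feed this from your audit-log pipeline and alert on every hit, since a CoPhish-style consent looks legitimate at the network layer and only stands out in the consent records themselves.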

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Final Thoughts

The CoPhish attack highlights a new and evolving threat: AI-powered agents themselves can become attack vectors. What makes this breach particularly concerning is that attackers exploited trusted Microsoft infrastructure and OAuth consent flows, meaning traditional defenses like phishing filters or domain verification are largely ineffective.

Organizations must recognize that AI assistants, Copilot agents, and other automated tools are effectively non-human identities with privileges. Just like service accounts or API tokens, they require strict governance, monitoring, and access control.

Hard-Coded Secrets in VSCode Extensions Trigger Massive Supply-Chain Risk — Over 500 Plugins Expose Tokens & Credentials

In October 2025, Wiz publicly disclosed a major supply-chain security problem affecting the VSCode extension ecosystem (including both the official marketplace and alternative registries like Open VSX). They found that hundreds of extensions, many from legitimate publishers, contained hard-coded secrets and access tokens inside their distributed packages.

Because VSCode and Open VSX regularly auto-update installed extensions, these leaks granted attackers the ability to hijack entire developer toolchains, push malicious updates, or exfiltrate credentials, turning trusted developer tools into powerful malware delivery vectors.

What Happened

Researchers at Wiz began investigating after suspicious activity tied to attempts to plant malware in the VSCode Marketplace. During that process, instead of only finding overtly malicious extensions, they discovered a more widespread issue: a large number of extensions unintentionally shipped with embedded secrets.

  • In total, Wiz identified over 550 validated secrets across more than 500 extensions.
  • These secrets included a wide variety: API keys for AI providers, cloud services (AWS, GCP), database credentials (MongoDB, Postgres, Supabase), payment/service platform secrets (Stripe, Auth0), and more.
  • Most critically, more than 100 leaked tokens were Marketplace Personal Access Tokens (PATs), meaning they granted full control over extension publishing and updates. This affected roughly 85,000+ installations. Meanwhile, over 30 leaked tokens belonged to Open VSX, impacting an additional 100,000+ installs.

Because VSCode updates extensions automatically, an attacker with a valid token could push a malicious update that installs malware across all installations, without user interaction.

In response, both Microsoft (for VSCode Marketplace) and Open VSX acted: they revoked exposed tokens, removed vulnerable extensions, and rolled out enhanced secret-scanning protections to prevent future leaks.

How It Happened

The leak was not primarily due to a software bug, but a combination of human error, poor secrets hygiene, and misplaced trust in development tools. Main factors include:

  • Hard-coded secrets and tokens in extension code or configuration files – Developers/publishers erroneously embedded credentials directly into the extension package before publishing. Because extensions are zipped archives (.vsix), anyone can unzip and inspect every file.
  • Broad permissions tied to PATs – Some of the embedded tokens granted not just API access, but publishing privileges, effectively giving whoever had the token the power to push new code under the extension’s identity.
  • Automated updates by default – The auto-update feature of VSCode meant that once an extension was installed, any malicious update would be automatically applied, expanding the reach of a malicious actor without additional user action.
  • Lack of secret-scanning prior to publishing – Marketplaces previously lacked effective controls to detect embedded credentials before extensions went live. This gap allowed credential leaks to slip into public distribution at scale.

The result: even extensions that appeared benign (themes, utilities, AI assistants) became potential Trojan horses. Some of the compromised extensions belonged to major enterprises and vendors, broadening the risk significantly.

Possible Impact & Risk

This supply-chain exposure in the development environment has serious, multi-layered consequences:

  • Mass credential exposure at scale – API keys, cloud credentials, database secrets, and more, all potentially harvested by attackers.
  • Malware propagation across developer systems – Hidden scripts or malicious updates can spread automatically via extension updates, compromising development machines, CI/CD pipelines, and even production infrastructure downstream.
  • Source-code theft, intellectual property loss, and supply-chain contamination – Attackers could inject backdoors into extensions or deploy compromised code across many projects.
  • Wide enterprise exposure – Because VSCode is used globally, including enterprise developers, a single malicious extension update could affect thousands of developers and multiple companies.
  • Erosion of trust in open-source tooling and third-party marketplaces. The incident shows how risky it is to treat developer tools and extensions as inherently safe.

According to industry reporting, this is “one of the largest VSCode supply-chain risks ever uncovered,” and could have enabled attackers to distribute malware to a cumulative ≈150,000 developers if not remediated quickly.

Recommendations for Developers & Organizations

Given the exposure, it’s critical to take proactive steps to secure development workflows:

  • Audit all installed VSCode/Open VSX extensions – Remove non-essential extensions, especially those with unclear provenance or unknown publishers.
  • Disable auto-updates, or require manual review before applying updates – This helps prevent automatic propagation of malicious changes.
  • Scan extension packages before installation – Treat .vsix files like any other third-party software: unzip them and scan for embedded secrets or unexpected code.
  • Use least-privilege credentials only – Never embed high-privilege tokens (like publish PATs) in public packages. Use environment-specific credentials managed via vaults or CI secrets instead.
  • Incorporate secrets detection into CI/CD pipelines – Automatic tools can scan for credentials in code, config, and dependency artifacts before deployment.
  • Maintain an approved-extension allowlist – Enterprises should define a curated list of trusted extensions and block everything else.
  • Educate developers about supply-chain risks – Treat IDE extensions and plugins as part of the attack surface, not just code dependencies.

Conclusion

This incident is a textbook example of how non-human identities (tokens, service accounts, automation credentials) are among the riskiest assets in a modern development environment. When poorly managed, they can turn familiar developer tools into backdoors.

As organizations increasingly rely on automation, AI-powered dev assistants, and third-party tooling, controlling and governing these machine identities becomes essential. The VSCode extension marketplace breach reinforces that identity security isn’t just about user accounts; it is also about every piece of software, automation token, and CI/CD pipeline you trust.

How Stolen Credentials Enabled Mass Breach of SonicWall VPN Accounts

In October 2025, researchers disclosed a large‑scale campaign targeting SonicWall SSL VPN accounts. In this wave of attacks, threat actors successfully compromised over 100 VPN accounts across multiple customer environments, not by brute‑force, but by using stolen, valid credentials.

The campaign, observed between October 4 and October 10, impacted at least 16 distinct customer environments protected by a managed security platform. Attackers moved rapidly through accounts, indicating coordinated credential misuse rather than random guessing.

What Happened

The sequence of events appears to be:

  • Starting October 4, 2025, attackers began logging into multiple SonicWall SSL VPN accounts across several organizations using valid but stolen credentials, suggesting prior credential theft or compromise, not brute‑force.
  • These logins originated from the same suspicious IP address (202.155.8[.]73), indicating a coordinated campaign rather than isolated incidents.
  • In some cases, attackers disconnected quickly after login. In others, they proceeded with internal scans and attempted to access local Windows accounts, indicators of reconnaissance or lateral movement inside networks.
  • The campaign affected more than a hundred accounts across at least 16 environments, signaling a broad and ongoing threat rather than a narrow, one-off breach.

Because the attackers used valid credentials, standard brute-force defenses would not necessarily detect the intrusion, making this a stealthy and effective threat vector.

What’s at Risk

This breach carries serious risks for affected organizations and beyond:

  • Unauthorized network access – VPN access often grants entry to internal networks, corporate resources, file servers, and more. Once the attacker has a foothold, they can attempt lateral movement.
  • Credential reuse risk – If the stolen credentials were reused across other systems or Windows accounts in the infrastructure, the breach could extend beyond the VPN to internal systems.
  • Potential for ransomware or data theft – Threat actors with VPN-level access can deploy ransomware, exfiltrate sensitive data, or manipulate critical assets. Indeed, several subsequent compromises involving ransomware groups have leveraged SonicWall VPN access.
  • Erosion of trust in remote access infrastructure – Widespread credential leaks and VPN compromises undermine confidence in remote‑access tools, especially important as hybrid and remote work remain common.
  • Difficulty in detection – Because the attackers used legitimate credentials and valid login procedures, the breach may evade traditional intrusion detection methods focused on brute-force or exploitation signatures.

What Organizations Should Do Immediately

If your organization uses SonicWall SSL VPN or similar remote access infrastructure, it’s time for urgent action. Recommended steps:

  • Reset all SSL VPN passwords – treat every potentially impacted account as compromised. Rotate credentials immediately.
  • Invalidate unused or stale VPN accounts – minimize the attack surface by removing accounts not currently needed.
  • Enforce Multi-Factor Authentication (MFA) – add an extra authentication layer to reduce the risk of credential reuse or theft leading to successful logins.
  • Restrict remote access exposure – limit VPN access to known IP ranges, enforce geolocation restrictions, and disable VPN/WAN access when not needed.
  • Enable logging and monitoring of VPN activity – alert on unusual login patterns, unexpected VPN connections, or large-scale login attempts.
  • Audit and rotate any related credentials – if VPN access led to other sensitive credentials (e.g. service accounts, shared secrets, management credentials), rotate them too.
  • Segment internal networks – treat remote-access servers and VPN entry points as high-risk, and isolate or limit what they can directly access inside your network.
  • Regularly review backup configurations and cloud backups – previous breaches of configuration-backup services have increased supply-chain risk for firewall devices.
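One signal from this campaign is directly detectable in logs: many distinct accounts authenticating from a single source IP. The sketch below is a minimal, assumption-laden example (events are simply `(source_ip, username)` pairs and the threshold is arbitrary); a production detection would also weigh geography, timing, and session behavior.

```python
from collections import defaultdict

def suspicious_sources(login_events, account_threshold=5):
    """Flag source IPs that logged in to unusually many distinct accounts.

    Credential-stuffing with stolen, valid credentials (as in the SonicWall
    campaign) often shows up as one IP authenticating successfully across
    many accounts, which brute-force-focused alerts miss entirely.

    `login_events` is an iterable of (source_ip, username) pairs for
    successful VPN logins.
    """
    accounts_by_ip = defaultdict(set)
    for ip, user in login_events:
        accounts_by_ip[ip].add(user)
    return {
        ip: sorted(users)
        for ip, users in accounts_by_ip.items()
        if len(users) >= account_threshold
    }
```

Applied to the reported campaign, a rule like this would have flagged the single coordinating IP the moment it crossed a handful of distinct accounts.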

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Final Thoughts

The 2025 SonicWall VPN credential breach should serve as a wake‑up call. In a world where stolen credentials, not zero-day exploits, are driving large campaigns, organizations must treat remote-access credentials and VPN infrastructure as critical assets, requiring rigorous governance, monitoring, and rotation.

Relying on complex firewalls or VPN appliances is not enough. Security teams need to assume compromise is possible and adopt a defense-in-depth posture: minimize privileged access, enforce MFA, restrict exposure, rotate credentials regularly, and isolate trusted environments.

For organizations still using SSL VPNs, particularly with legacy configurations or without regular credential hygiene, the time to act is now.

CrewAI GitHub Token Leak Exposes Sensitive Source Code

In September 2025, researchers from Noma Labs discovered a critical security flaw in CrewAI’s platform: an internal GitHub token with admin-level access to CrewAI’s entire GitHub infrastructure had been accidentally exposed to users via improper exception handling.

The vulnerability, dubbed “Uncrew,” was assigned a critical severity rating (CVSS 9.2), reflecting the serious risk of full repository compromise, code theft, and downstream supply-chain exposure.

CrewAI responded quickly, issuing a security patch within hours of disclosure to revoke the exposed token and fix the flawed exception-handling code.

What Happened

During a standard machine-provisioning operation within CrewAI, a backend error occurred. Instead of safely handling the exception, the platform returned a JSON error response containing a field, repo_clone_url, which embedded a full internal GitHub access token used by CrewAI for repository operations.

Because this token had unrestricted permissions (read/write/admin) on CrewAI’s private repositories, any user who encountered that error, or intercepted the response, suddenly had full access: the ability to clone, read, modify, or exfiltrate source code and proprietary assets.

In other words: a simple provisioning failure, combined with poor error handling, exposed a high-privilege credential, and with it the entire internal codebase of CrewAI.
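The failure mode is easy to reproduce in miniature. The sketch below (hypothetical field and function names, not CrewAI’s actual code) contrasts an error handler that serializes internal state, credential included, into the client response with one that returns only an opaque incident ID and keeps the details server-side:

```python
import uuid

# Hypothetical credentialed clone URL of the kind embedded in the leaked response
REPO_CLONE_URL = "https://x-access-token:ghp_EXAMPLEONLYEXAMPLEONLY@github.com/acme/internal.git"

def provision_machine_unsafe():
    """Anti-pattern: on failure, raw internal state (including the
    credentialed clone URL) is serialized straight into the error payload."""
    try:
        raise RuntimeError("provisioning backend unavailable")
    except RuntimeError as exc:
        # Leaks the embedded token to whoever receives this JSON
        return {"error": str(exc), "repo_clone_url": REPO_CLONE_URL}

def provision_machine_safe():
    """Safer pattern: log details internally, return only an opaque incident ID."""
    try:
        raise RuntimeError("provisioning backend unavailable")
    except RuntimeError:
        incident_id = str(uuid.uuid4())
        # A real service would write the exception details to an internal
        # log here, keyed by incident_id, never into the client response.
        return {"error": "provisioning failed", "incident_id": incident_id}
```

The difference is one design decision: user-facing error payloads carry a correlation ID, and everything else stays in internal logs.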

How It Happened

  • Exception mishandled by design – Instead of sanitizing errors, CrewAI’s code returned raw JSON including sensitive credential data on provisioning failure.
  • Long-lived static token with broad permissions – The leaked credential was a long-living GitHub token, not a short-lived or scoped credential, meaning it granted full access persistently until explicitly revoked.
  • No built-in secret redaction or vault isolation – The system lacked safeguards to prevent internal tokens from being exposed through user-facing responses, logs, or error outputs.
  • Supply-chain scale amplification – Because CrewAI is used to orchestrate AI agents, automations, and development workflows, one compromised token could allow attackers to tamper with any or all downstream automation, integrations, or deployments relying on those repositories.
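A straightforward mitigation for the redaction gap above is a scrubbing pass over anything user-facing. A minimal sketch, using regexes for a few well-known credential formats (GitHub’s documented ghp_/gho_/ghs_ style token prefixes and AWS access key IDs); real deployments would cover many more patterns:

```python
import re

SECRET_PATTERNS = [
    re.compile(r"\bgh[pousr]_[A-Za-z0-9]{20,}\b"),    # GitHub tokens (ghp_, gho_, ghs_, ...)
    re.compile(r"\bgithub_pat_[A-Za-z0-9_]{20,}\b"),  # GitHub fine-grained PATs
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),              # AWS access key IDs
]

def redact(text: str) -> str:
    """Replace anything that looks like a credential before it reaches
    logs, error payloads, or other user-facing output."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

msg = "clone failed for https://x-access-token:ghp_abcdefghijklmnopqrstuv@github.com/acme/internal.git"
print(redact(msg))
```

Wiring a filter like this into the logging and error-serialization layer is cheap insurance; it is a backstop, not a substitute for short-lived, narrowly scoped credentials.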

What Was at Risk

  • Full source-code exposure – Private repositories, including proprietary logic, infrastructure-as-code, and AI-agent orchestration code, could be cloned, inspected, or stolen.
  • Supply-chain compromise – Attackers gaining write access could inject backdoors, malicious dependencies, or trojan code into builds or shared libraries, affecting all users downstream.
  • Credential harvesting & escalation – Once inside, adversaries could locate further secrets (API keys, cloud credentials, database credentials) stored in the codebase or config files and use them to breach other systems.
  • Loss of trust, IP theft, and business disruption – For a company building agentic AI systems, exposing internal code and automation pipelines risks intellectual property, corporate reputation, and operational integrity.

Response & Resolution

Immediately after disclosure by Noma Labs, CrewAI revoked the exposed GitHub token and patched their exception-handling logic so that sensitive data would no longer be exposed in error messages.

No public reports have surfaced indicating that the token was exploited in the wild, but the incident serves as a stark warning about the fragility of static machine identities and the risks of improper error handling.

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

What This Incident Teaches Us

The CrewAI “Uncrew” leak illustrates a fundamental risk that many organizations now face: as AI platforms, automation tools, and machine-driven workflows proliferate, non-human identities like service accounts, tokens, and automation credentials become critical attack surfaces.

Any of the following can trigger a severe breach:

  • Long-lived static tokens with broad privileges
  • Insecure error handling or logging that leaks sensitive data
  • Mixing automation credentials with user-facing systems or APIs
  • Lack of secret management and token lifecycle governance
  • Inadequate supply-chain hygiene and code-audit practices
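Token lifecycle governance can start with something as simple as an age audit against your rotation policy. A minimal sketch, assuming a hypothetical inventory of machine credentials with known creation dates:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of machine credentials and their creation dates
inventory = [
    {"name": "ci-deploy-token", "created": datetime(2023, 1, 10, tzinfo=timezone.utc)},
    {"name": "agent-api-key",   "created": datetime(2025, 9, 1,  tzinfo=timezone.utc)},
]

def overdue_for_rotation(inventory, max_age_days=90, now=None):
    """Return the names of credentials older than the rotation policy allows."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [item["name"] for item in inventory if item["created"] < cutoff]

audit_date = datetime(2025, 11, 1, tzinfo=timezone.utc)
print(overdue_for_rotation(inventory, now=audit_date))
```

In practice the inventory would be pulled from a secrets manager or cloud IAM API rather than hard-coded, but flagging long-lived static tokens like the one leaked in the “Uncrew” incident requires nothing more sophisticated than this.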

In modern DevSecOps and AI-driven development environments, treating machine identities with the same rigor as human credentials is no longer optional; it’s essential.