Amazon Breach: Hacked AWS Accounts Fuel Ongoing Crypto-Mining

In December 2025, Amazon Web Services (AWS) faced a significant security breach in which compromised customer accounts were used to fuel an ongoing crypto-mining campaign. The attack targeted AWS’s Elastic Compute Cloud (EC2) and Elastic Container Service (ECS), affecting the many users who rely on these services for their cloud-based applications and workloads. The breach was particularly alarming because of its scale: the attackers operated using valid Identity and Access Management (IAM) credentials from multiple customer accounts. The ramifications extend beyond Amazon to the countless organizations that depend on AWS for their cloud infrastructure. As the details unfold, it is critical to analyze how the breach occurred, the methods the attackers employed, and the implications for both Amazon and its customers.

What Happened

The breach was discovered on December 17, 2025, when Amazon’s AWS GuardDuty security team issued a warning about an ongoing crypto-mining operation linked to compromised AWS accounts. Here’s a chronological account of the breach:

  • Late October 2025: A malicious Docker Hub image was created, which would later serve as the vector for deploying the crypto-miners; it had accumulated over 100,000 pulls by the time the breach was discovered.
  • November 2, 2025: The operation began, with attackers leveraging compromised IAM credentials to access AWS resources.
  • December 17, 2025: AWS GuardDuty identified the crypto-mining campaign and alerted customers with compromised accounts.

Initial detection highlighted that the attackers did not exploit any vulnerabilities in AWS systems; instead, they operated using valid credentials obtained from customer accounts. The types of data compromised included IAM credentials, which allowed the attackers to deploy and run unauthorized crypto-mining software on the EC2 and ECS instances, leading to significant computational resource exhaustion for affected users.

How It Happened

The attack leveraged a combination of social engineering and poor security practices that led to the compromise of valid IAM credentials. Here’s a deeper look into the technical aspects of the breach:

  • Credential Compromise: Attackers obtained IAM credentials through phishing or other means, enabling them to bypass security protocols.
  • Deployment Method: Utilizing a malicious Docker Hub image, which was pulled over 100,000 times, the attackers were able to deploy crypto-miners effectively on the compromised accounts.
  • Persistence Mechanism: The attackers implemented a persistence mechanism that allowed them to maintain control over the mining operations, even after initial detection attempts by incident responders.

The infrastructure weaknesses stemmed from inadequate monitoring of IAM roles and permissions, which allowed the threat actors to establish and maintain exploitation without immediate detection. Attribution to a specific threat actor was not disclosed, but the methodology indicates a sophisticated approach often seen in organized cybercrime.

Impact

The impact of the AWS breach was multi-faceted, affecting both the organization and its users significantly:

  • Immediate Consequences: AWS customers experienced substantial performance degradation due to resource exhaustion caused by the unauthorized mining activities.
  • Customer Impact: Many organizations relying on AWS for critical operations faced increased operational costs and potential disruptions to their services.
  • Financial Implications: AWS had to absorb the costs associated with additional computational resources consumed by the mining operations, potentially reaching into millions of dollars.
  • Regulatory and Legal Consequences: The breach raised concerns regarding compliance with data protection regulations, putting AWS at risk of legal scrutiny.
  • Long-term Reputation Damage: Trust in AWS’s security measures could be undermined, leading to customer attrition and a tarnished brand reputation.
  • Industry-wide Implications: This incident serves as a stark reminder to other cloud service providers, emphasizing the need for stringent security protocols to protect against similar threats.

Overall, the AWS breach not only highlighted vulnerabilities within cloud services but also underscored the need for robust security measures across the tech industry.

Recommendations

In light of the AWS breach, organizations should adopt the following security measures to prevent similar incidents:

  • Enhance IAM Policies: Implement the principle of least privilege to limit access to critical resources.
  • Regular Credential Audits: Conduct frequent audits of IAM credentials to identify any unauthorized access or anomalies.
  • Multi-Factor Authentication: Enforce multi-factor authentication for all access to AWS accounts to mitigate credential theft.
  • Monitoring and Alerts: Utilize AWS tools like GuardDuty to monitor account activity and receive alerts for suspicious behavior.
  • Security Awareness Training: Educate employees about phishing attacks and other social engineering tactics to reduce the likelihood of credential compromise.
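Credential audits like these can be partially automated. Below is a minimal, hypothetical Python sketch that flags IAM access keys older than a rotation window; it assumes key metadata has already been exported from an inventory tool, and the field names and key IDs are invented for illustration:

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # common rotation window

def stale_access_keys(keys, now=None):
    """Return the IDs of access keys older than MAX_KEY_AGE.

    `keys` is a list of dicts shaped like an exported inventory record,
    e.g. {"id": "AKIA...", "created": datetime} (hypothetical schema).
    """
    now = now or datetime.now(timezone.utc)
    return [k["id"] for k in keys if now - k["created"] > MAX_KEY_AGE]

# Invented inventory data for illustration:
now = datetime(2025, 12, 17, tzinfo=timezone.utc)
keys = [
    {"id": "AKIA_OLD", "created": datetime(2025, 1, 1, tzinfo=timezone.utc)},
    {"id": "AKIA_NEW", "created": datetime(2025, 12, 1, tzinfo=timezone.utc)},
]
print(stale_access_keys(keys, now))  # ['AKIA_OLD']
```

In practice the input would come from an IAM inventory export, and flagged keys would be queued for rotation rather than just printed.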

By implementing these actionable recommendations, organizations can significantly bolster their defenses against similar breaches and enhance their overall cybersecurity posture.

How NHI Mgmt Group Can Help

Securing Non-Human Identities (NHIs), including AI agents, is becoming increasingly crucial as attackers discover and target service accounts, API keys, tokens, secrets, and more during breaches. These NHIs often hold extensive permissions that can be exploited, making their security a priority for any organization focused on protecting its digital assets.

Take our NHI Foundation Level Training Course, the most comprehensive in the industry, which will empower you and your organization with the knowledge needed to manage and secure these non-human identities effectively.

👉 Further details here

In addition to our NHI training, we offer independent Advisory & Consulting services that include:

  • NHI Maturity Risk Assessments
  • Business Case Development
  • Program Initiation
  • Market Analysis & RFP Strategy/Guidance

With our expertise, we can help your organization identify vulnerabilities and implement robust security measures to protect against future breaches. 

👉 Contact us here

Final Thoughts

The AWS breach serves as a critical wake-up call for organizations utilizing cloud services. The exploitation of valid credentials highlights vulnerabilities that can exist even within established security frameworks. As cyber threats continue to evolve, the necessity for proactive security measures cannot be overstated. Organizations must prioritize cybersecurity to safeguard against potential breaches and protect their digital assets. Staying informed about best practices and emerging threats is essential for maintaining a resilient security posture in today’s digital landscape.

Attackers Are Exploiting Gladinet’s Hard‑Coded Keys for Remote Code Execution

In December 2025, security researchers sounded the alarm on an actively exploited vulnerability in Gladinet’s CentreStack and Triofox file‑sharing and remote‑access products. The flaw stems from hard‑coded cryptographic keys embedded in the software’s configuration: a serious design flaw that allows attackers to forge access tokens, decrypt sensitive files, and even trigger remote code execution (RCE) on affected servers.

At least nine organizations across industries, including healthcare and technology, have already been compromised by this attack chain, and exploitation has occurred in the wild through crafted HTTP requests targeting the file server component.

What Happened?

Gladinet CentreStack and its enterprise file‑sharing counterpart, Triofox, are widely deployed by companies needing secure, remote access to files and collaboration tools. However, researchers from Huntress discovered that the products used static, hard‑coded machineKey values in their configuration files that are intended to secure ASP.NET ViewState, a mechanism for maintaining state across web requests.

Because these machineKey values were identical across installations and never dynamically generated, attackers could:

  1. Decrypt or forge ViewState data, removing cryptographic integrity protections.
  2. Access sensitive server files like web.config without valid credentials.
  3. Obtain machineKey values directly from configuration.
  4. Craft malicious ViewState payloads that ASP.NET deserializes as trusted data, leading to remote code execution on the server process.

In practice, adversaries sent specially crafted URL requests to endpoints such as /storage/filesvr.dn where the hard‑coded keys allowed them to bypass normal access controls and retrieve protected files. In some cases the access ticket issued by the server contained a timestamp set to “9999,” effectively creating a ticket that never expires and can be reused indefinitely for exploitation.

How It Happened

At the core of the issue is the GenerateSecKey() function within GladCtrl64.dll, which returns the same predictable 100‑byte text strings for every installation. These strings are used to derive the cryptographic keys that sign and encrypt access tickets. Because they never change, threat actors can leverage them to:

  • Decrypt access tickets and access protected server resources.
  • Reverse‑engineer or predict token values.
  • Forge malicious tickets that the server will accept.
  • Trigger ViewState deserialization attacks using the known keys.

Attack campaigns first surfaced when threat actors combined this flaw with previously disclosed vulnerabilities, including an earlier hard‑coded key issue (CVE‑2025‑30406) that also enabled RCE, to obtain the necessary machineKey from web.config.

The attack chain begins with specially crafted HTTP requests that break normal authentication by exploiting the predictable keying material, granting unauthorized access to system configuration files. Once the machineKey is known, attackers can leverage serialization/deserialization mechanisms in ASP.NET to execute arbitrary code on the server.
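The core weakness can be illustrated with a small, self-contained Python sketch (this is not Gladinet’s actual code, and the key value is invented): when the signing key is static and shared across installations, anyone who extracts it can mint payloads the server treats as trusted.

```python
import hashlib
import hmac

# A static key baked into every installation (illustrative value only).
STATIC_KEY = b"hard-coded-machine-key"

def sign(payload: bytes, key: bytes) -> bytes:
    """HMAC-SHA256 signature, the integrity check the server relies on."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def server_accepts(payload: bytes, signature: bytes) -> bool:
    # The server trusts any payload whose HMAC matches its (static) key.
    return hmac.compare_digest(sign(payload, STATIC_KEY), signature)

# An attacker who extracted the shared key can forge a "trusted" payload:
forged = b'{"ticket": "admin", "expires": "9999"}'
print(server_accepts(forged, sign(forged, STATIC_KEY)))  # True
```

The integrity guarantee of the HMAC is only as strong as the secrecy of the key; once the key ships identically to every customer, the signature proves nothing.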

What Was at Risk

The exploitation of hard‑coded keys in CentreStack and Triofox opened the door to several high‑impact outcomes:

  • Unauthorized file access – Attackers could read confidential files, including configuration and user data, without valid logins.
  • Remote code execution – By using serialized payloads, adversaries could execute arbitrary code on affected servers, potentially installing malware, backdoors, or pivot tools.
  • Persistent compromise – Because access tickets could be crafted to never expire, a foothold once gained could be used again and again.
  • Lateral movement – From a compromised file server, attackers could escalate within networks or explore other hosts.
  • Widespread impact – Attack campaigns have affected at least nine organizations so far, and additional exploitation is ongoing.

Recommendations

In light of active exploitation, Gladinet has released updates that address the underlying vulnerability. Affected organizations should:

  • Scan logs for known indicators of compromise, such as repeated requests containing encrypted representations of sensitive file paths (e.g., “vghpI7EToZUDIZDdprSubL3mTZ2…”).
  • Rotate machineKey values in existing installations by backing up web.config, generating new machineKey entries via IIS Manager, and restarting the application services.
  • Apply all vendor patches immediately and verify that no outdated configurations or legacy keys remain.
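For the key-rotation step, the replacement key material must be random and unique per installation. Below is a minimal Python sketch that generates values sized like typical ASP.NET machineKey entries; the exact element attributes and required key lengths should be confirmed against your ASP.NET version’s documentation:

```python
import secrets

# Generate fresh, per-installation key material sized like common
# ASP.NET machineKey settings (HMACSHA256 validation, AES decryption).
validation_key = secrets.token_hex(64).upper()   # 64 bytes -> 128 hex chars
decryption_key = secrets.token_hex(32).upper()   # 32 bytes -> 64 hex chars

print(f'<machineKey validationKey="{validation_key}"')
print(f'            decryptionKey="{decryption_key}"')
print('            validation="HMACSHA256" decryption="AES" />')
```

The point is simply that each server derives its own keys from a cryptographically secure source, rather than inheriting a value shipped in the product.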

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Final Thoughts

The Gladinet incident is a stark reminder that hard‑coded cryptographic keys and default secrets are among the most dangerous security weaknesses. When the same key is used across installations, attackers can precompute exploits that work universally, effectively dissolving cryptographic integrity protections and turning benign features into attack vectors.

Additionally, combining multiple vulnerabilities, such as a local file inclusion bug with predictable key material, can amplify risk far beyond what developers intended. These kinds of chained exploits enable remote code execution using widely understood ASP.NET deserialization techniques, giving attackers disproportionate leverage against enterprise systems with internet‑accessible file servers.

Massive Docker Hub Leak: 10,000+ Images Expose Secrets and Auth Keys

Security researchers have uncovered a widespread and dangerous credential exposure issue affecting the Docker Hub container image registry. In a comprehensive scan of images uploaded in December 2025, threat intelligence firm Flare discovered that 10,456 container images contained one or more exposed secrets, including access tokens, API keys, cloud credentials, CI/CD secrets, and AI model authentication keys. This exposure impacts developers, cloud platforms, and at least 101 companies across industries, including a Fortune 500 firm and a major national bank.

Docker Hub is the largest public container registry, where developers push and pull images that contain everything needed to run applications. But these images, often treated as portable build artifacts, also included sensitive data that should never be present in publicly accessible artifacts.

What Happened

During routine container image analysis in December 2025, researchers from Flare scanned public Docker Hub images and identified 10,456 images that exposed sensitive secrets. In many cases, these secrets were embedded directly in the container’s file system, often due to careless development practices where environment variable files, configuration files, or source code containing credentials were copied into the image.

Among the most frequently leaked credentials were access tokens for AI model providers, including keys for AI services such as OpenAI, Hugging Face, Anthropic, Gemini, and Groq, totaling roughly 4,000 exposed model keys. Researchers also found cloud provider credentials (AWS, Azure, GCP), CI/CD tokens, database passwords, and other critical authentication material tucked away in manifests, .env files, YAML configs, Python application files, and more.

In many cases, multiple secrets were present in a single image: 42% of the exposed images contained five or more sensitive values, meaning a single leaked image could be capable of unlocking an entire organization’s infrastructure if misused.

Most of the leaked images came from 205 distinct Docker Hub namespaces, representing 101 organizations ranging from small developers and contractors to large enterprises. Some of these images originated from shadow IT accounts: personal or third-party containers created outside corporate monitoring and governance.

How It Happened

The root cause of this massive exposure is not a vulnerability in Docker Hub itself, but developer negligence and insecure build practices:

  • Secrets bundled into images – Developers sometimes include .env files, config directories, or hard-coded API tokens during local development or CI builds, and those files inadvertently become part of the final image.
  • No secret sanitization during image builds – Docker build contexts that include entire project directories often automatically copy sensitive files into the image layer structure, which remains publicly accessible once pushed to Docker Hub.
  • Shadow IT and personal accounts – Many images with exposed secrets belonged to Docker Hub accounts outside corporate governance, contractors or personal projects where enterprise secret management controls were absent.
  • Lack of rotation or revocation after exposure – While roughly 25% of developers removed leaked secrets from container images once notified, in 75% of cases the exposed keys were never revoked or rotated, meaning attackers could continue to exploit them long after they were discovered.

Once these images were published publicly, malicious actors or automated scanning tools could quickly crawl Docker Hub, harvest credentials, and use them to access cloud environments, source code repositories, CI/CD systems, internal databases, and more.
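One common mitigation for the first two failure modes above is BuildKit’s secret mounts, which expose a credential to a single build step without ever writing it into an image layer. A hedged Dockerfile sketch, assuming BuildKit is enabled (the registry URL and secret id below are hypothetical):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .

# BuildKit secret mount: the token is available only while this RUN
# step executes and is never committed to an image layer.
RUN --mount=type=secret,id=pip_token \
    PIP_INDEX_URL="https://user:$(cat /run/secrets/pip_token)@pypi.example.com/simple" \
    pip install -r requirements.txt

COPY . .
# A .dockerignore entry for .env and credential files keeps local
# secrets out of the build context entirely.
```

The secret is supplied at build time, e.g. `docker build --secret id=pip_token,src=./pip_token.txt .`, so nothing sensitive survives in the pushed image.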

Potential Impact & Risks

The scale and type of secrets exposed through Docker Hub images pose serious threats:

  • Unauthorized cloud access – Exposed cloud provider credentials (AWS, Azure, GCP) could allow attackers to spin up or tear down resources, exfiltrate data, or expand compromise.
  • CI/CD pipeline compromise – CI tokens and build secrets exposed in images could be abused to alter software build processes, inject malicious code, or leak other credentials.
  • AI abuse – Nearly 4,000 AI model API tokens (e.g., OpenAI, Hugging Face) were present in images; stolen keys could be used for free, unauthorized access to expensive services or to impersonate enterprise workloads.
  • Wider infrastructure exposure – Database connection strings, API keys, and internal application secrets could lead to full application compromise.
  • Persistent exploitation – Because most leaked credentials were never revoked, attackers able to harvest them while exposed could continue exploiting them indefinitely.

Recommendations

To prevent similar leaks and protect cloud infrastructure, the following practices are critical:

  1. Never embed secrets in container images – secrets should never be part of build contexts, Dockerfiles, or image manifests.
  2. Use centralized secrets management – store credentials in a vault or secrets manager (e.g., HashiCorp Vault, AWS Secrets Manager) and inject them at runtime rather than build time.
  3. Enforce automated scanning – integrate secret scanning tools into pre-commit, CI pipelines, and container registry scanning to catch leaks before images are pushed.
  4. Use ephemeral or short-lived credentials – avoid long-lived static keys; instead use short-lived tokens or IAM roles with limited scope.
  5. Revoke and rotate leaked keys immediately – once a credential leak is detected, rotate keys and invalidate sessions to prevent unauthorized use.
  6. Monitor shadow IT accounts and registries – corporate security teams should track public container activity from personal accounts linked to their organization.
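A lightweight version of the automated scanning in point 3 can be sketched in a few lines of Python. The patterns below cover a few widely documented token shapes and are illustrative, not exhaustive; real scanners use far larger rule sets and verify candidate secrets against live services:

```python
import re

# A few widely documented credential formats (illustrative, not exhaustive).
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat":        re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key":   re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_text(text):
    """Return (pattern_name, match) pairs found in a file's contents."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# Invented sample contents of an .env-style file:
sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\nAPI_KEY = "0123456789abcdef0123"'
print(scan_text(sample))
```

Hooked into a pre-push check or a registry-side scan, a rule set like this catches the common case of credentials copied into an image alongside config files.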

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Final Thoughts

The discovery that more than 10,000 public Docker Hub images are leaking sensitive credentials is a stark reminder that container security is only as strong as the development practices behind it. Despite growing awareness of secret hygiene, careless inclusion of .env files, config files, and hard-coded API keys continues to create a massive attack surface, accessible to threat actors and automated scanners alike. In an era of cloud-native development and rapid container adoption, organizations can no longer treat container images as disposable artifacts. They must be governed, scanned, and treated with the same rigor as application code or infrastructure configurations. Without that, the convenience of containerization can turn into a serious security liability.

Media Pressure, Then Action: Inside Home Depot’s Year-Long Token Exposure

In a troubling cybersecurity oversight, The Home Depot, one of the largest home improvement retailers in the United States, inadvertently exposed an internal GitHub access token for more than a year, leaving critical internal systems and source code repositories potentially vulnerable to unauthorized access. The exposure persisted from early 2024 through late 2025, and the issue was only resolved after media intervention following ignored warnings from a security researcher.

What Happened?

In early November 2025, security researcher Ben Zimmermann discovered a publicly exposed GitHub access token belonging to a Home Depot employee. The token had been inadvertently published online, most likely through an employee mistake, sometime in early 2024, and remained accessible for well over a year.

When Zimmermann tested the token, he found it provided full access to hundreds of private Home Depot GitHub repositories. The access included not just read permissions, but write privileges, meaning the token could be used to modify source code, manipulate cloud infrastructure configurations, or change CI/CD pipelines, all core components of Home Depot’s digital infrastructure.

Worse still, the token’s access extended beyond the codebase. According to Zimmermann, it granted access to portions of Home Depot’s cloud infrastructure, including systems linked to order fulfillment, inventory management, and developer pipelines.

Repeated attempts by the researcher to privately notify Home Depot’s security and leadership teams, including messages to the company’s Chief Information Security Officer, went unanswered. Concerned by the lack of response, Zimmermann eventually contacted TechCrunch, prompting public scrutiny and ultimately forcing Home Depot to revoke the token.

Home Depot confirmed that the exposed token has since been revoked and is no longer active, but the prolonged period of exposure highlights a critical failure in credential management and internal security monitoring.

How It Happened

Security researchers believe the token was accidentally published by a Home Depot employee, for example, added to a public code repository, documentation, or other internet-facing location without proper access controls. Once exposed, the token’s permissions allowed anyone who discovered it to authenticate as the employee across internal systems and repositories.

The incident was worsened by Home Depot’s slow or nonexistent initial response to repeated reports. Without an effective vulnerability disclosure program or bug bounty process, the researcher’s warnings were reportedly ignored for weeks or months before the issue was addressed publicly.

What Was at Risk

The leaked GitHub access token posed a broad and serious threat:

  • Unauthorized code modification – The token allowed write access to hundreds of private repositories, meaning malicious actors could have changed source code or inserted backdoors.
  • Cloud infrastructure access – Because many DevOps workflows are tied to GitHub and cloud providers, attackers could have accessed internal systems such as cloud instances, deployment pipelines, and backend services.
  • Operational disruption – With token access, attackers could potentially disrupt order fulfillment, inventory systems, or other mission-critical services.
  • Data exfiltration and espionage – Sensitive internal data, intellectual property, or strategic code could have been copied or exposed.

Although no evidence of malicious exploitation has been reported, the potential for undetected abuse during the year the token was active cannot be ruled out.

Recommendations

To prevent similar oversights, the following best practices are critical:

  • Implement secret scanning and automated detection across code repositories, cloud environments, and documentation.
  • Use ephemeral, short-lived tokens instead of long-lived static secrets.
  • Establish a formal vulnerability disclosure or bug bounty program to ensure researchers can report issues securely and directly.
  • Conduct regular audits and automated log monitoring to detect unauthorized exposures early.
  • Educate developers and engineers on the risks of hard-coded tokens and public disclosure of secrets.
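The ephemeral-token recommendation can be illustrated with a minimal Python sketch: a signed token that carries its own expiry, so even a leaked copy becomes useless once the window closes. The signing key and token format here are invented for illustration and are not how GitHub tokens actually work:

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"example-signing-key"  # illustrative only

def issue_token(subject: str, ttl_seconds: int, now=None) -> str:
    """Mint a short-lived, signed token (a minimal sketch, not a real PAT)."""
    now = now if now is not None else time.time()
    body = json.dumps({"sub": subject, "exp": now + ttl_seconds}).encode()
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def token_valid(token: str, now=None) -> bool:
    """Accept only tokens with a valid signature and an unexpired window."""
    now = now if now is not None else time.time()
    body_b64, sig = token.split(".")
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False
    return json.loads(body)["exp"] > now

t = issue_token("deploy-bot", ttl_seconds=900, now=1000.0)
print(token_valid(t, now=1500.0))  # True: within the 15-minute window
print(token_valid(t, now=2000.0))  # False: expired, harmless if leaked later
```

A token like the one exposed at Home Depot, had it carried an expiry measured in minutes rather than being a long-lived static secret, would have been worthless by the time anyone found it.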

Final Thoughts

The Home Depot token exposure incident is a stark reminder that even mature, well-resourced enterprises can fall victim to basic security mistakes with far-reaching implications. Allowing a powerful access token to remain exposed for over a year, despite repeated warnings, signals broader gaps in monitoring, credential governance, and incident response readiness.

In today’s threat landscape, no credential, human or machine, should be trusted by default. Vigilant secret management, robust reporting channels, and rapid response frameworks are essential to safeguarding digital infrastructure, protecting operational integrity, and maintaining stakeholder trust.

17,000+ Secrets Exposed in Public GitLab Repositories — What Went Wrong and How to Fix It

In late November 2025, a major security analysis revealed a startling reality: public repositories on GitLab Cloud have leaked over 17,000 live secrets, including API keys, access tokens, cloud credentials, and more.

The scan covered about 5.6 million public repositories, uncovering exposed secrets across more than 2,800 domains.

This massive exposure underscores how widespread and dangerous “secrets sprawl” (hard-coded credentials committed to public repositories) has become.

What Happened

Security researcher Luke Marshall used the open-source tool TruffleHog to scan every public GitLab Cloud repository. He automated the process: repository names were enumerated via GitLab’s public API, queued in AWS Simple Queue Service (SQS), then scanned by AWS Lambda workers running TruffleHog, all within about 24 hours and costing roughly $770 in cloud compute.

The result: 17,430 verified live secrets, nearly three times the number found in a similar scan of another platform.

These secrets included sensitive credentials like cloud API keys, tokens for cloud services, and other access credentials.

In short: thousands of developers inadvertently committed real secrets to public repositories, data that is now exposed for anyone to find and misuse.

How It Happened

  • Human error + insecure practices – Many developers or teams treat Git repositories as just code storage, not secret vaults. As a result, credentials (API keys, cloud secrets, tokens) get committed alongside code.
  • Public repository exposure by default or misconfiguration – Some repos are intentionally public (open-source), others may have been mis-marked as private, or visibility changed over time. Once public, everything becomes accessible.
  • Lack of secret hygiene and automated detection – Without automated secret-scanning tools integrated into CI/CD or repository workflows, exposed secrets remain undetected, sometimes for years.
  • Scale & automation exacerbate the risk – With millions of repos and many contributors, the probability of human error is high. Coupled with the fact that secrets often remain valid (long-lived credentials), this creates a large attack surface.

Possible Impact & Risks

The exposure of 17,000+ secrets across public repositories has major implications for both individual developers and organizations:

  • Credential theft leading to account takeover – Cloud API keys, tokens, or credentials could be used by attackers to hijack cloud services, spin up infrastructure, exfiltrate data, or launch attacks.
  • Supply-chain and downstream attacks – Public projects often serve as dependencies for other codebases, leaked secrets in one repo can jeopardize entire downstream ecosystems.
  • Long-term undetected compromise – Since credentials tend to stay valid unless rotated, exposed secrets may have already been harvested, even if no breach has yet been observed.
  • Reputational damage for teams and organizations – Leaks from public repos erode trust and may cause clients or partners to reconsider engagements.
  • Regulatory or compliance fallout for enterprises – Organizations that inadvertently publish credentials tied to sensitive data or infrastructure could face compliance risks, especially in regulated industries.

Recommendations

If you manage code or cloud infrastructure, or create software that depends on public repositories, now is the time to act:

  1. Audit all repositories, public and private – Scan for hard-coded secrets, credentials, tokens, and API keys. Treat every repo as a potential liability.
  2. Use automated secret-scanning tools – Integrate tools like TruffleHog, secret-scanners, or dedicated secret-management platforms into your CI/CD pipelines to catch leaks before code is merged.
  3. Rotate all exposed credentials immediately – If you find secrets in code, rotate or revoke them. Treat any exposed credential as compromised, even if it seems unused.
  4. Adopt secrets-management best practices – Use vaults / secrets managers rather than hard-coding credentials. Use ephemeral or short-lived tokens where possible.
  5. Enforce least-privilege and zero-trust – Limit what each token or API key can do. Avoid giving broad privileges by default.
  6. Educate developers and teams – Make secret-hygiene part of code reviews and development culture. Ensure everyone understands the risks of committing secrets.
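As an example of point 2, a secret scan can run as an ordinary CI job. A hedged .gitlab-ci.yml fragment using TruffleHog’s official container image (flag names vary across TruffleHog versions, so check the documentation for the release you pin):

```yaml
# .gitlab-ci.yml (sketch): fail the pipeline if TruffleHog finds a
# verified live secret anywhere in the repository's git history.
secret-scan:
  stage: test
  image: trufflesecurity/trufflehog:latest
  script:
    - trufflehog git file://. --only-verified --fail
```

Running this on every merge request means a committed credential blocks the merge instead of quietly shipping to a public repository.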

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Conclusion

The discovery of over 17,000 live secrets exposed in public GitLab repositories is a stark wake-up call. It shows how modern development practices, when paired with human error and outdated credential management, can create massive security risks invisible until it’s too late.

This isn’t a flaw in GitLab itself. It’s a systemic issue of secrets sprawl and poor hygiene. The silver lining: unlike a software vulnerability, this is a problem teams can fix by auditing their repos, rotating credentials, and putting real secret-management practices in place.

If your organization relies on code repositories, cloud services, or third-party integrations, it’s time to treat every secret as a high-value identity. Because once secrets leak, the attackers don’t need exploits, just access.

Inside the Shai-Hulud Campaign: npm Malware Attack Exposes Secrets on GitHub

In November 2025, security researchers uncovered a widespread supply-chain attack targeting the JavaScript ecosystem. A new malware strain named Shai-Hulud was found infecting over 500 npm packages, silently harvesting sensitive information from developer environments and then exfiltrating secrets to private GitHub repositories controlled by the attackers. The breach highlights once again how fragile the open-source supply chain has become, and how easily compromised packages can escalate into large-scale credential theft.

What Happened?

The malicious campaign revolved around compromised npm packages that developers downloaded and integrated into their projects without knowing they were weaponized. Once installed, the infected packages executed malicious code that harvested:

  • API keys
  • Environment variables
  • Access tokens
  • Cloud provider secrets
  • GitHub authentication credentials

The stolen data was then transmitted to remote GitHub repositories owned by the attackers.
Because npm packages are widely reused via dependency chains, developers were compromised even if they never installed the malicious package directly: a dependency of a dependency was enough to trigger the infection.

How It Happened

npm Malware Attack pathway

The Shai-Hulud malware infiltrated the npm ecosystem through malicious package uploads that imitated legitimate libraries. Attackers used techniques like:

  • Typosquatting – publishing packages with names nearly identical to trusted dependencies
  • Dependency hijacking – uploading packages using abandoned or expired namespace names
  • Social trust exploitation – inserting malicious updates into previously safe packages
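To make the first technique concrete, here is a minimal, hypothetical sketch of how a registry gate or CI check might flag typosquats: compare each incoming package name against an allowlist of trusted names using edit distance. The allowlist and threshold below are illustrative, not drawn from the campaign analysis:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TRUSTED = {"lodash", "express", "chalk"}  # illustrative allowlist

def flag_typosquats(name, max_dist=1):
    """Flag a package name suspiciously close to, but not in, the allowlist."""
    return [t for t in TRUSTED
            if name != t and edit_distance(name, t) <= max_dist]

print(flag_typosquats("expres"))  # one character away from "express"
```

Real defenses combine this kind of name-similarity check with publisher reputation, download-age heuristics, and manual review of near-miss names.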

Once downloaded, the malware ran during the package installation lifecycle (via postinstall scripts), enabling it to execute automatically inside developer environments with no manual action required.

From there, Shai-Hulud scanned local machines for secrets commonly used in development workflows and pushed them in bulk to the attackers’ hidden GitHub repositories.
Because the exfiltration traffic blended into normal GitHub API communication, it was extremely difficult for organizations to detect the theft in real time.

What Was Compromised?

The full scope of stolen data is still under investigation, but security research shows that the malware targeted:

  • AWS, Azure, and GCP access keys
  • GitHub and GitLab personal access tokens
  • Environment variables used for CI/CD
  • API keys for third-party SaaS platforms
  • Database connection strings
  • OAuth tokens

Given how many npm packages depend on other libraries, the total exposure could impact thousands of organizations across multiple ecosystems, not just JavaScript projects.

Key Findings by Entro Security

  • Banks, governments, and Fortune 500 companies are affected
  • 30,000+ attacker-controlled repos cloned before takedown
  • 1,195 organizations identified across tech, finance, government, healthcare
  • 55.7% of affected orgs are technology/SaaS companies
  • Valid cloud and CI credentials observed up to 72+ hours after disclosure


Key Findings by GitGuardian

  • GitGuardian identified 754 distinct npm packages (spanning some 1,700 versions) as infected.
  • In their snapshot of 20,649 publicly exposed repositories (as of Nov 24), they found 294,842 “secret occurrences”, corresponding to 33,185 unique secrets.
  • Out of those, 3,760 secrets were validated as live/valid at analysis time. Real exposure numbers may have been higher, since some tokens could have been revoked by then.
  • Most of the valid secrets were high-value credentials: GitHub Personal Access Tokens (PATs), OAuth tokens, etc.

Possible Impacts

If the stolen secrets are used successfully, organizations may face:

  • Unauthorized access to cloud environments
  • Source code and intellectual property theft
  • CI/CD pipeline compromise
  • Lateral movement into internal networks
  • Creation of fraudulent infrastructure using real organization credentials
  • Downstream customer breaches through trusted integrations
  • Ransom, extortion, or data destruction attacks

Recommendations

To reduce risk and respond effectively:

  • Rotate all secrets stored in development environments and CI/CD pipelines
  • Audit GitHub access tokens and revoke suspicious or unused ones
  • Scan source code and repos for embedded secrets using automated tools
  • Implement a centralized secrets management solution (e.g., Vault, AWS Secrets Manager, Doppler)
  • Enable dependency pinning and dependency integrity checks
  • Adopt software composition analysis (SCA) and SBOMs to monitor open-source changes
  • Block automatic execution of postinstall scripts unless required
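The last item can be enforced project- or machine-wide via npm's documented `ignore-scripts` setting, which stops `preinstall`/`postinstall` hooks, including malicious ones like Shai-Hulud's, from running automatically (note that this also breaks packages that legitimately rely on lifecycle scripts):

```ini
# .npmrc – lifecycle scripts (preinstall/postinstall) no longer run on install
ignore-scripts=true
```

The same effect is available per-command with `npm install --ignore-scripts`.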

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Conclusion

The Shai-Hulud attack is another reminder that cybercriminals are increasingly targeting the software supply chain, not enterprise networks directly. Developers and CI/CD environments now represent one of the most valuable attack surfaces because stealing a single set of credentials can unlock access to cloud accounts, infrastructure, and entire customer ecosystems.

Organizations that treat secret management, dependency security, and supply-chain monitoring as optional will remain exposed. Protecting development pipelines, and the NHIs (non-human identities) inside them, is now a core security responsibility, not a nice-to-have.

Code Formatting Tools Cause Massive Credential Leaks in Enterprises

In November 2025, security researchers discovered that a code beautifier tool, used to format and clean source code, inadvertently exposed sensitive credentials from some of the world’s most sensitive organizations, including banks, government agencies, and major tech companies. The discovery underscores the unexpected risks that development tools and automation can introduce into enterprise security.

These tools, widely trusted by developers to improve code readability, were found to transmit or expose embedded secrets in source code, including passwords, API keys, and cloud credentials, to external services or temporary storage locations. The breach demonstrates how even well-intentioned developer utilities can become vectors for credential leaks.

What Happened

Developers across multiple industries use code beautifiers and formatters to automatically standardize code. However, recent analysis shows that certain tools:

  • Processed files containing hard-coded secrets without detecting them.
  • Stored or transmitted parts of code to remote servers for processing, inadvertently exposing sensitive information.
  • Failed to sanitize outputs or logs, leaving secrets in temporary or publicly accessible locations.

The breach was discovered after security researchers identified patterns of leaked credentials linked to these tools. The exposed data included credentials for cloud services, internal databases, APIs, and even secure admin panels.

In some cases, the compromised credentials belonged to banks and government agencies, highlighting how even routine developer tools can pose high-stakes security risks when handling sensitive code.

How It Happened

The leak occurred due to a combination of factors:

  1. Hard-coded secrets in source code – Developers sometimes store passwords, API keys, or tokens directly in their source files for convenience.
  2. Tool behavior – The code beautifiers analyzed, formatted, or processed these files using cloud-based services or shared environments, inadvertently transmitting sensitive information.
  3. Lack of detection – The tools did not include automatic secret detection or redaction features.
  4. Chain of trust issues – Organizations relied on trusted development tools without fully auditing their operations, assuming local processing was safe.

This situation demonstrates that non-human identities and automated tools (like beautifiers or formatters) can become unmonitored attack surfaces if not properly governed.

What Was Compromised

Exposed data includes:

  • Active Directory credentials
  • Database and cloud credentials
  • Private keys
  • Code repository tokens
  • CI/CD secrets
  • Payment gateway keys
  • API tokens
  • SSH session recordings
  • Large amounts of personally identifiable information (PII), including know-your-customer (KYC) data
  • An AWS credential set used by an international stock exchange’s Splunk SOAR system
  • Credentials for a bank exposed by an MSSP onboarding email

Even a single leaked key can give attackers lateral access to multiple systems, making this breach particularly dangerous.

Possible Impacts

The compromise of credentials through code beautifiers could have severe repercussions:

  • Unauthorized access to cloud and internal systems
  • Theft of sensitive customer or citizen data
  • Disruption of critical services or infrastructure
  • Lateral movement across corporate or government networks
  • Reputational and regulatory consequences for affected organizations

Recommendations

Organizations and developers should take immediate action to mitigate risks:

  1. Audit and rotate credentials that may have been exposed via code formatting tools.
  2. Avoid hard-coding secrets in source code; use secure vaults and environment variables.
  3. Evaluate all development tools for security practices, especially those using cloud or remote processing.
  4. Integrate automated secret scanning into CI/CD pipelines.
  5. Educate developers about the risks of using third-party utilities with sensitive code.
  6. Adopt least-privilege principles for all machine identities.

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Conclusion

The code beautifier breach is a stark reminder that even tools designed to help developers can unintentionally compromise security. With thousands of secrets at risk across banks, government agencies, and tech organizations, this incident emphasizes the importance of secrets hygiene, developer awareness, and non-human identity governance.

Organizations must treat developer tools as potential attack surfaces and implement continuous monitoring, secret management, and secure coding practices to prevent similar incidents.

TruffleNet BEC Attack: Over 800 Hosts Compromised Using Stolen AWS Credentials

In November 2025, security researchers uncovered a large and sophisticated campaign, dubbed “TruffleNet,” in which attackers abused stolen Amazon Web Services (AWS) credentials to hijack AWS’s Simple Email Service (SES) and launch Business Email Compromise (BEC) attacks. Rather than relying on malware or phishing alone, the adversaries weaponized legitimate cloud infrastructure, making their emails look far more trustworthy. The scale is massive: over 800 unique hosts across 57 different networks participated in the operation.

What Happened

The TruffleNet campaign begins with attackers using TruffleHog, an open-source secret-scanning tool, to validate AWS credentials they’ve compromised. Once they confirm valid AWS access keys, they execute reconnaissance: the compromised hosts make a GetCallerIdentity API call to check who owns the credentials.

If the credentials pass that test, the next step is to call GetSendQuota from the AWS SES API, which helps the attackers determine how many emails they can send using SES. Many of the IP addresses used for this activity have never been flagged by reputation or antivirus systems, suggesting that the attackers built their infrastructure specifically for this campaign.

After reconnaissance, the adversaries abuse SES: they create verified sending identities using compromised domains and stolen DKIM (DomainKeys Identified Mail) cryptographic keys from previously compromised WordPress sites. With these identities in place, they execute BEC attacks, sending highly convincing phishing or invoice emails that appear to come from legitimate companies.

In one documented case, attackers targeted companies in the oil and gas sector, sending fake vendor onboarding invoices claiming to be from ZoomInfo, and requesting payments of $50,000 via ACH. To make the scam even more convincing, they included W-9 forms with real Employer Identification Numbers (EINs), and used typosquatted domains to handle responses.

How It Happened

TruffleNet BEC Attack Campaign
  1. Credential Validation & Reconnaissance – Attackers start with a large trove of stolen AWS keys. Using TruffleHog, they test each key to find the ones that still work.
  2. SES Capability Check – Once valid keys are found, the campaign sends GetCallerIdentity and GetSendQuota API calls to assess the identity and email-sending capabilities associated with those credentials.
  3. Containerized Attack Infrastructure – The operation relies on more than 800 hosts, many of which run Portainer, a Docker/Kubernetes management tool. This gives attackers a centralized way to manage and coordinate their malicious nodes.
  4. Email Identity Fabrication – With SES access, the attackers import DKIM signing keys stolen from compromised WordPress websites. They then use those keys to register email identities in SES, making their malicious emails appear legitimate.
  5. BEC Execution – Finally, TruffleNet sends business email compromise messages under trusted-looking domains. These messages are sophisticated, complete with W-9 forms and legitimate-looking invoices, designed to trick financial teams.

Possible Impact

  • Highly Convincing Phishing & Fraud – Using real SES infrastructure and verified DKIM domains makes the BEC scams extremely hard to distinguish from legitimate emails.
  • Financial Loss – Victims could be tricked into sending large payments (e.g., $50,000 ACH transfers) to attackers.
  • Mass Credential Abuse – The campaign’s size (800+ hosts) shows how many stolen AWS credentials are being tested and abused in parallel.
  • Long-Term Cloud Risk – Any AWS account with compromised SES credentials could be repurposed for ongoing fraud, phishing, or further infrastructure abuse.
  • Reputation Damage – Organizations whose SES accounts are abused may find their domains blacklisted or flagged as sources of phishing.

Recommendations

To defend against TruffleNet-style attacks, security teams should consider the following actions:

  • Audit and Rotate Credentials Frequently – Regularly scan for exposed keys (in code repositories, logs, etc.) and rotate them.
  • Restrict SES Permissions – Apply the principle of least privilege: only give SES access to identities that truly need it.
  • Enable Cloud Logging & Monitoring – Use AWS CloudTrail to monitor calls like GetCallerIdentity and SES API usage for unusual patterns.
  • Alert on Anomalous SES Behavior – Set up behavioral alerts for sudden spikes in sending quotas, new verified identities, or new DKIM configurations.
  • Secure Your Domains – Protect your WordPress or other web assets from compromise, especially if they sign DKIM keys for SES domains.
  • Train Financial Teams – Teach accounts payable and finance teams to verify vendor payment requests out-of-band (phone calls, known contacts), even when the email looks “official.”
  • Use Secret-Scanning Tools – Integrate tools like TruffleHog or similar secret scanners in your CI/CD pipeline to catch exposed AWS keys before they’re exploited.
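As a sketch of the CloudTrail-based monitoring suggested above, the core logic reduces to flagging the recon calls (`GetCallerIdentity`, `GetSendQuota`) when they come from sources outside your normal baseline. The field names below mirror CloudTrail's `eventName`/`sourceIPAddress`, but the records and the IP baseline are invented for illustration:

```python
# Flag TruffleNet-style recon calls in CloudTrail-like event records.
# A real detector would consume full CloudTrail logs and maintain
# per-account baselines of expected caller IPs and roles.
RECON_CALLS = {"GetCallerIdentity", "GetSendQuota"}

def flag_recon(events, known_ips):
    """Return events that look like credential-validation recon from unknown IPs."""
    return [e for e in events
            if e["eventName"] in RECON_CALLS
            and e["sourceIPAddress"] not in known_ips]

events = [
    {"eventName": "GetCallerIdentity", "sourceIPAddress": "203.0.113.7"},   # unknown IP
    {"eventName": "GetSendQuota",      "sourceIPAddress": "198.51.100.2"},  # unknown IP
    {"eventName": "GetSendQuota",      "sourceIPAddress": "10.0.0.5"},      # known internal IP
    {"eventName": "PutObject",         "sourceIPAddress": "203.0.113.7"},   # not a recon call
]
print(flag_recon(events, known_ips={"10.0.0.5"}))
```

In practice this kind of rule would feed a SIEM alert rather than a print statement, and would also consider the IAM principal and call frequency, since a single GetSendQuota from a new IP is exactly how TruffleNet sized up each stolen key.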

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Conclusion

The TruffleNet campaign is a powerful reminder that identity compromise is now a core threat vector for cloud systems. By weaponizing stolen AWS credentials, attackers are turning trusted services like SES into tools for large-scale fraud and BEC.

This operation isn’t just about phishing, it’s a sophisticated, automated cloud abuse campaign built on legitimacy, scale, and stealth. As cloud adoption continues to grow, organizations must treat credential security and API access with the same rigor they apply to traditional perimeter defenses.

Hardcoded Credentials in SAP SQL Anywhere Monitor Expose Enterprises to Critical Remote Access Risk

On November 11, 2025, SAP issued a security update patching a severe flaw in SQL Anywhere Monitor (non-GUI version), tracked as CVE-2025-42890. The vulnerability stemmed from hard-coded credentials embedded in the monitoring tool, a major security misstep given the level of access the monitor holds.

Because the flaw earned a CVSS severity score of 10.0 (the maximum), SAP and security researchers flagged it as critical: a must-patch for all affected environments.

What Happened

SQL Anywhere Monitor is used to oversee and manage distributed or remote databases. The non-GUI variant (likely deployed in headless or automated appliances) was shipped with a built-in default credential, embedded directly in the code, that granted privileged access to the monitoring database. This “baked-in” credential was never meant for external exposure.

With those credentials easily available to any attacker who discovers them, threat actors could potentially connect to the monitoring database remotely, even without prior authentication as a legitimate user. Once inside, the risk wasn’t just limited to viewing data: the flaw allowed arbitrary code execution. In other words, attackers could gain full control over the system, compromise data integrity, and potentially pivot into internal networks.

Given that many organizations deploy SQL Anywhere across critical systems, sometimes in unattended environments, such a vulnerability posed a high-stakes risk.

How It Happened

The root cause was surprisingly low-tech: default credentials hard-coded inside the application code, never intended for public or high-security deployment. Specifically:

  • The credentials were embedded in a Java class inside migrator.jar, used by Monitor’s non-GUI version to connect to its internal database.
  • Because the monitor shipped ready-to-use, with pre-configured database files and no requirement for administrators to change credentials, many deployments remained vulnerable for an extended period.
  • The vulnerability allowed remote, unauthenticated access over the network. An attacker just needed to know the default credentials and reach the service to exploit it.

In response to the security advisory, SAP removed the pre-configured monitoring database and the hard-coded credentials in the patched version. As a result, running instances are no longer inherently vulnerable, assuming administrators applied the update or removed the vulnerable Monitor entirely.

What Was at Risk

By exploiting the hardcoded credentials flaw, attackers could have done the following:

  • Gain unauthorized access to monitoring systems and database internals
  • Execute arbitrary code, effectively compromising the host system
  • Disrupt database monitoring, data integrity, and availability
  • Use the compromised monitor as a foothold to lateral-move within network environments
  • Exfiltrate sensitive data, tamper with stored records, or sabotage backups — with minimal detection

Given the severity, any organization relying on SQL Anywhere Monitor (non-GUI) faced a real threat to confidentiality, integrity, and availability of their database environments.

What Organizations Should Do

  1. Immediately apply SAP’s November 2025 update that patches CVE-2025-42890, or remove SQL Anywhere Monitor if it is not required.
  2. Audit existing deployments; search for any instances of the vulnerable Monitor, especially on unattended appliances or legacy systems.
  3. Restrict network access to monitoring services until patched; block public access and limit connections to trusted internal networks.
  4. Rotate credentials and secrets if the default credentials were ever used or exposed.
  5. Review logs and audit trails for suspicious connections to the monitor, especially attempts prior to patching, and investigate any possible unauthorized activity.
  6. Adopt a policy of no embedded credentials: enforce secret management, avoid shipping credentials in code or binaries, and require per-deployment credential configuration.
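The “no embedded credentials” policy in item 6 can be backed by a coarse automated check in CI. The sketch below greps source text for password-like identifiers assigned non-empty string literals; the pattern is illustrative and noisy, and purpose-built scanners are far more thorough:

```python
import re

# Coarse heuristic: an identifier containing "password"/"passwd"/"pwd"/"secret"
# assigned a non-empty string literal. Illustrative only; dedicated tools use
# semantic rules and have far fewer false positives and negatives.
HARDCODED = re.compile(
    r"""(?ix)                                  # case-insensitive, verbose
    \b\w*(password|passwd|pwd|secret)\w*\b     # suspicious identifier
    \s*[:=]\s*                                 # assignment (code or config style)
    ["'][^"']+["']                             # non-empty string literal
    """)

def find_hardcoded(source):
    """Return each suspicious assignment found in the given source text."""
    return [m.group().strip() for m in HARDCODED.finditer(source)]

java_snippet = 'private static final String DB_PASSWORD = "changeit";'
print(find_hardcoded(java_snippet))
```

A check like this would have flagged the kind of baked-in credential shipped in the Monitor's code long before a CVE was needed.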

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Conclusion

This flaw in SQL Anywhere Monitor highlights a broader, often overlooked issue: many monitoring, admin, or support tools use default or hard-coded credentials under the assumption of “internal use only”. Once deployed, especially on network-connected appliances, those assumptions break down quickly and become a major risk vector.

In modern enterprise environments, non-human and machine identities (service accounts, database monitors, automation agents) demand as much care and governance as human users. A single “backdoor” credential in an admin tool can compromise entire systems.

Therefore, simply relying on vendor defaults and internal trust is no longer enough. Organizations must treat every identity, human or not, as a potential entry point and enforce proper credential lifecycle management, segmentation, and least-privilege principles.

Copilot Studio Agents Exploited in New CoPhish OAuth Token Theft

A new phishing technique known as CoPhish was disclosed in October 2025. This attack abuses Copilot Studio agents to trick users into granting OAuth consents and thereby steals their OAuth tokens.

Because the agents are hosted on legitimate Microsoft domains and the phishing flow uses bona fide‑looking interfaces, victims are more likely to trust them.

Researchers from Datadog Security Labs disclosed the technique, warning that the flexibility of Copilot Studio introduces new, undocumented phishing risks that target OAuth-based identity flows. Microsoft has confirmed the issue and said it plans to fix the underlying causes in a future update.

What Happened

Attackers crafted malicious Copilot Studio agents that abuse the platform’s “demo website” functionality. Because these agents are hosted on copilotstudio.microsoft.com, the URLs appear legitimate, making it easy for victims to trust and interact with them.

When a user interacts with the agent, it can present a “Login” or “Consent” button. If the user clicks it, they are redirected to an OAuth consent screen, made to look like a valid Microsoft or enterprise permission prompt, requesting access permissions.

Once the user grants consent, their OAuth token is silently forwarded to the attacker’s infrastructure, often via an HTTP request configured inside the malicious agent. The exfiltration can use legitimate Microsoft infrastructure, making it harder to detect through conventional network monitoring.

Crucially: this is not a software vulnerability but a social‑engineering abuse of legitimate platform features (custom agents + OAuth consent flows) to steal credentials.

How the Attack Works

  1. An attacker (or compromised tenant) creates a malicious agent in Copilot Studio and enables the “demo website” sharing feature.
  2. The agent’s “Login” topic is configured to redirect users to an OAuth consent flow, often masquerading as a legitimate login / authorization prompt.
  3. The attacker distributes the agent link via phishing, email, chat, internal messages, relying on the legitimate Microsoft domain to avoid suspicion.
  4. A victim clicks “Login” and consents, unknowingly granting permissions. The OAuth token returned is immediately forwarded, silently, to an attacker-controlled backend.
  5. With the stolen token, attackers can access mail, chat, calendars, files, and other resources via OAuth/Graph APIs, potentially granting full tenant compromise depending on permissions.

Because everything is hosted under Microsoft’s own domain and appears legitimate, the attack bypasses phishing filters, domain‑based detection, and many usual safety nets.

What’s at Risk

Because OAuth tokens grant access to platforms, services, and data, stolen tokens from a successful CoPhish attack can lead to:

  • Unauthorized access to corporate resources: emails, chats, calendar, cloud files, internal documents, etc.
  • Persistent access until the token is revoked or expires, enabling long-term espionage or data theft.
  • Lateral movement: attackers with a stolen token could impersonate users, possibly including privileged users, to access more resources or escalate privileges.
  • Evading detection: because the token is exfiltrated via legitimate Microsoft infrastructure, traffic may appear benign, bypassing many standard security controls.

Given how easy it is to deploy Copilot Studio agents and how many organizations use Microsoft cloud tools, the potential blast radius is wide, from small teams to large enterprises.

What Organizations Should Do Right Now

To defend against CoPhish and similar AI‑based token‑theft attacks:

  • Enforce strict consent policies – require admin approval for any new OAuth apps or Copilot agent consents.
  • Lock down Copilot Studio – disable public sharing or “demo website” features, or restrict agent creation to trusted users only.
  • Monitor enrollment, consent, and token issuance events in your identity platform (e.g. Microsoft Entra ID) for anomalous or unexpected entries.
  • Revoke and rotate tokens periodically, especially after user role changes or if unexpected consents arise.
  • Educate teams about the risk – treat AI agents as high-privilege identities, not just convenience tools; automated features can carry real security consequences.
  • Implement least-privilege principles and granular permission scopes: only grant what’s strictly needed.
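Consent-policy enforcement ultimately reduces to comparing what each app or agent was granted against what is approved. A minimal, hypothetical review pass might look like this; the scope names follow Microsoft Graph naming (e.g. Mail.ReadWrite), but the grant records and allowlist are invented for illustration:

```python
# Illustrative review of OAuth consent grants: any scope outside the approved
# allowlist is escalated for manual review. A real implementation would pull
# grants from the identity platform's audit APIs rather than a local list.
APPROVED_SCOPES = {"User.Read", "offline_access"}

def review_grants(grants):
    """Return (app, unapproved_scopes) for every grant needing manual review."""
    suspicious = []
    for grant in grants:
        extra = sorted(set(grant["scopes"]) - APPROVED_SCOPES)
        if extra:
            suspicious.append((grant["app"], extra))
    return suspicious

grants = [
    {"app": "Expense Helper", "scopes": ["User.Read"]},
    {"app": "Demo Agent",     "scopes": ["User.Read", "Mail.ReadWrite", "Files.Read.All"]},
]
print(review_grants(grants))
```

A CoPhish-style agent asking for mail or file scopes would surface immediately in a review like this, even though its consent screen and hosting domain look legitimate.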

Overview

In March 2025, a major supply-chain attack compromised the popular GitHub Action tj-actions/changed-files, used by roughly 23,000 repositories.

The malicious update caused CI/CD secrets, like API keys, tokens, AWS credentials, npm/Docker credentials and more, to be dumped into build logs. For projects whose logs were publicly accessible (or visible to others), this meant secrets were exposed to outsiders.

A subsequent review confirmed leaked secrets in at least 218 repositories.

What Happened

On March 14, 2025, attackers pushed a malicious commit to the tj-actions/changed-files repository. They retroactively updated several version tags so that all versions (even older ones) referenced the malicious commit, meaning users did not need to explicitly update to a “vulnerable” version: their existing workflows were already poisoned. The compromised action included code that, when run during CI workflows, dumped the GitHub Actions runner process memory into workflow logs, exposing secrets, tokens, environment variables, and other sensitive data. On March 15, 2025, after the breach was detected, GitHub removed the compromised version, and the action was restored once the malicious code was removed.

Because many projects rely on GitHub Actions and many developers use default tags (like @v1) rather than pinning to commit SHAs, the attack had a broad reach, potentially impacting any workflow that ran the action during the infection window.

How It Happened

  • Supply-chain compromise – Attackers first compromised a bot account used to maintain the action (the @tj-actions-bot), gaining push privileges.
  • Malicious commit & tag poisoning – They committed malicious code that dumps secrets, then retroactively updated version tags so all previous and current versions were tainted.
  • Execution during CI/CD workflows – When any affected repository ran its CI workflow using the action, the malicious code executed and printed secret data into build logs.
  • Secrets exposed – Logs could contain GitHub tokens, AWS credentials, npm/Docker secrets, environment variables, and other sensitive data — instantly readable by anyone with access to the logs.
  • Wide potential blast radius – Given the popularity of the action, thousands of repositories were at risk, even those unaware of the change, underscoring the danger of trusting widely-used dependencies without lock-down.
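The kind of leak described above can be caught by scanning build logs for known token shapes. The sketch below is illustrative, covering just two publicly documented formats (GitHub `ghp_` personal access tokens and AWS `AKIA` access key IDs); production scanners such as gitleaks or truffleHog maintain far larger rule sets:

```python
import re

# Two simplified, publicly documented credential formats. Real scanners
# track many more patterns and add entropy checks to cut false positives.
SECRET_PATTERNS = {
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_log(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found in a CI log dump."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# A log line resembling the dumped runner environment.
log = "env dump: AWS_ACCESS_KEY_ID=AKIAABCDEFGHIJKLMNOP TOKEN=ghp_" + "a" * 36
print(scan_log(log))
```

Running such a scan over archived workflow logs from the infection window is one way to decide which credentials must be rotated.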

Possible Impact & Risks

  • Secret leakage – CI/CD secrets, including API keys, cloud credentials, and tokens, were exposed publicly. Attackers could use these for cloud account access, code or package registry abuse, infrastructure compromise, or further supply-chain attacks.
  • Compromise of downstream systems – With stolen credentials, attackers could breach production environments, publish malicious packages, or manipulate deployment pipelines.
  • Widespread supply-chain distrust – The breach erodes trust in open-source automation tools and GitHub Actions. Projects relying on third-party actions must now treat them as potential risk vectors.
  • Developer & enterprise exposure – Both open-source and private organizations using the compromised action may be impacted, especially if they exposed logs publicly or reused leaked secrets across systems.

Recommendations

If you use GitHub Actions, especially third-party ones, here are essential steps to protect yourself:

  • Audit your workflows – Identify whether your repositories referenced tj-actions/changed-files, especially via mutable tags (e.g. @v1). If so, treat credentials used in those workflows as potentially compromised.
  • Rotate all secrets – Tokens, API keys, cloud credentials, and registry credentials that were used during the March 14–15, 2025 window.
  • Pin actions to immutable commit SHAs – Rather than version tags, to avoid retroactive tag poisoning.
  • Review all third-party actions before adding them to workflows – Prefer actions from trusted authors that request minimal permissions.
  • Use least-privilege tokens and ephemeral credentials in CI/CD – Avoid granting broad access via long-lived secrets.
  • Restrict access to workflow logs – Especially for public repositories; avoid storing sensitive data in logs.
  • Enable secret-scanning and auditing for CI/CD pipelines – Watch for suspicious log entries or other leak indicators.
  • Treat automation agents and CI identities (bots, actions) as non-human identities (NHIs) – Apply the same governance, monitoring, and security hygiene as for human user accounts.
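The audit and pinning recommendations can be partly automated. The minimal sketch below flags any `uses: owner/action@ref` reference that is not pinned to a full 40-character commit SHA; it assumes workflow files are available as text and does not attempt to cover every YAML layout:

```python
import re

# Match "uses: owner/action@ref" lines, with or without a list dash.
USES_RE = re.compile(r"^\s*(?:-\s*)?uses:\s*([\w.-]+/[\w.-]+)@(\S+)", re.MULTILINE)
# A full lowercase 40-hex commit SHA is the only immutable reference.
SHA_RE = re.compile(r"[0-9a-f]{40}")

def audit_workflow(yaml_text: str) -> list[str]:
    """Return 'owner/action@ref' strings not pinned to a commit SHA."""
    findings = []
    for action, ref in USES_RE.findall(yaml_text):
        if not SHA_RE.fullmatch(ref):
            findings.append(f"{action}@{ref}")
    return findings

workflow = """
steps:
  - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
  - uses: tj-actions/changed-files@v35
"""
print(audit_workflow(workflow))  # only the mutable tag is flagged
```

A real audit would walk `.github/workflows/` in every repository and fail CI when a mutable tag is found.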

Final Thoughts

The March 2025 breach of tj-actions/changed-files underlines a harsh but clear truth: software supply chains, including CI/CD tools and automation frameworks, are first-class attack surfaces. A single compromised action can leak secrets, expose credentials, and undermine trust in entire ecosystems.

For developers, organizations, and security teams, the lesson is urgent and unavoidable: never treat dependencies or automation tools as inherently safe. Always enforce inventory, least-privilege, version pinning, secret hygiene, and regular audits, especially for machine identities, third-party code, and CI/CD pipelines.

In July 2025, security researchers disclosed a troubling breach involving Amazon Q, Amazon’s AI-powered coding agent embedded in the Visual Studio Code extension used by nearly a million developers. A malicious actor had successfully injected a covert “data-wiping” system prompt directly into the agent’s codebase, effectively weaponizing the AI assistant itself.

What Happened

The incident began on July 13, 2025, when a GitHub user operating under the alias lkmanka58 submitted a pull request to the Amazon Q extension repository. Despite being an untrusted contributor, the PR was accepted and merged, a sign of misconfigured repository permissions or insufficient review controls within the workflow.

Inside the merged code was a malicious “system prompt” designed to manipulate the AI agent embedded in Amazon Q. The prompt instructed the agent to behave as a “system cleaner,” issuing explicit commands to delete local file-system data and wipe cloud resources using AWS CLI operations. In effect, the attacker attempted to weaponize the AI model into functioning as a destructive wiper tool.

Four days later, on July 17, 2025, Amazon published version 1.84.0 of the VS Code extension to the Marketplace, unknowingly distributing the compromised version to users worldwide. It wasn’t until July 23 that security researchers observed suspicious behavior inside the extension and alerted AWS. This triggered an internal investigation, and by the next day, AWS had removed the malicious code, revoked associated credentials, and shipped a clean update (v1.85.0).

According to AWS, the attacker’s prompt contained formatting mistakes that prevented the wiper logic from executing under normal conditions. As a result, the company states there is no evidence that any customer environment suffered data deletion or operational disruption.

The True Root Cause

What makes this breach uniquely alarming is not just the unauthorized code change; it is that the attacker weaponized the AI coding agent itself.

Unlike traditional malware, which executes code directly, this attack relied on manipulating the agent’s system-level instructions, repurposing Amazon Q’s AI behaviors into destructive actions. The breach demonstrated how:

  • AI agents can be social-engineered or artificially steered simply through malicious system prompts.
  • Developers increasingly trust AI-driven tools, giving them broad access to local machines and cloud environments.
  • A compromised AI agent becomes a powerful attacker multiplier, capable of interpreting and running harmful natural-language commands.

Even though AWS later clarified that the injected prompt was likely non-functional due to formatting issues, meaning no confirmed data loss occurred, the exposure risk alone was severe.

What Was at Risk

Had the malicious prompt executed as intended, affected users and organizations faced potentially severe consequences:

  • Local data destruction – The prompt aimed to wipe users’ home directories and local files, risking irreversible data loss.
  • Cloud infrastructure wiping – The injected commands included AWS CLI instructions to terminate EC2 instances, delete S3 buckets, remove IAM users, and otherwise destroy cloud resources tied to an AWS account.
  • Widespread distribution – With nearly one million installs, the compromised extension could have impacted a large developer population, especially those using Amazon Q for projects tied to critical infrastructure, production environments, or cloud assets.
  • Supply-chain confidence erosion – The breach undermines trust in AI-powered and open-source development tools: a single malicious commit can compromise thousands of users instantly.
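One mitigation for this class of prompt-injected destruction is a deny-list guard that screens any shell command an agent proposes before execution. The patterns below are illustrative assumptions modeled on the commands described in the injected prompt, not Amazon's actual safeguards:

```python
import re

# Hypothetical guardrail for an AI coding agent: block commands that
# match destructive operations like those in the injected Amazon Q
# prompt (recursive deletes, EC2 termination, S3 bucket/object removal,
# IAM user deletion). A real deployment would prefer allow-lists.
DESTRUCTIVE_RULES = [
    re.compile(r"\brm\b.*-\w*r"),                       # recursive rm
    re.compile(r"\baws\s+ec2\s+terminate-instances\b"),
    re.compile(r"\baws\s+s3\s+(rb|rm)\b"),
    re.compile(r"\baws\s+iam\s+delete-user\b"),
]

def is_destructive(command: str) -> bool:
    """Return True if a proposed command matches a destructive pattern."""
    return any(rule.search(command) for rule in DESTRUCTIVE_RULES)

print(is_destructive("aws s3 ls my-bucket"))           # False
print(is_destructive("rm -rf ~/ --no-preserve-root"))  # True
```

Deny-lists are easy to bypass, so a guard like this belongs alongside, not instead of, least-privilege credentials and environment segmentation.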

Recommendations

If you use Amazon Q, or any AI-powered coding extension or agent, treat this incident as a wake-up call. Essential actions:

  • Update to the clean version (1.85.0) immediately. If you have 1.84.0 or earlier, remove it.
  • Audit extension use and permissions – treat extensions as potential non-human identities. Restrict permissions where possible; avoid granting unnecessary filesystem or cloud-access privileges.
  • Review and lock down CI/CD, dev workstations, and cloud credentials – never assume that an IDE or plugin is “safe.” Use vaults, environment isolation, and minimal permissions.
  • Vet open-source contributions carefully – apply stricter review and validation for pull requests in critical tools; avoid blindly trusting automated merges or simplified workflows.
  • Segment environments – avoid using AI extensions on machines or environments that store production data or credentials.
  • Monitor logs and cloud resource activity – watch for suspicious deletions, cloud resource termination, or unexpected CI jobs after tool updates.
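Checking whether an installed extension predates the clean release can be scripted against the output of `code --list-extensions --show-versions`, which prints `publisher.extension@x.y.z` lines. The extension identifier in the example is an illustrative assumption:

```python
# First version of the Amazon Q extension confirmed clean by AWS.
CLEAN_VERSION = (1, 85, 0)

def parse_version(version: str) -> tuple[int, ...]:
    """Turn 'x.y.z' into a comparable integer tuple."""
    return tuple(int(part) for part in version.split("."))

def needs_update(listing_line: str) -> bool:
    """True if the listed extension version predates the clean release."""
    _, _, version = listing_line.partition("@")
    return parse_version(version) < CLEAN_VERSION

# Hypothetical listing lines as printed by the VS Code CLI.
print(needs_update("amazonwebservices.amazon-q-vscode@1.84.0"))  # True
print(needs_update("amazonwebservices.amazon-q-vscode@1.85.0"))  # False
```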

Final Thoughts

The breach of Amazon Q reveals a troubling reality: as AI tools continue to integrate deeply into development workflows, they become part of the enterprise threat landscape, not just optional helpers. A single bad commit, merged without proper checks, can transform a widely trusted extension into a potential weapon against users.

This isn’t just about one extension; it’s about the broader risks of machine identities, AI-powered tools, supply-chain trust, and code governance in modern DevOps environments. As complexity grows, so must our security practices.

A new phishing technique known as CoPhish was disclosed in October 2025. This attack abuses Copilot Studio agents to trick users into granting OAuth consents and thereby steals their OAuth tokens.

Because the agents are hosted on legitimate Microsoft domains and the phishing flow uses authentic-looking interfaces, victims are more likely to trust them.

Researchers from Datadog Security Labs disclosed the technique, warning that the flexibility of Copilot Studio introduces new, undocumented phishing risks that target OAuth-based identity flows. Microsoft has confirmed the issue and said it plans to fix the underlying causes in a future update.

What Happened

Attackers crafted malicious Copilot Studio agents that abuse the platform’s “demo website” functionality. Because these agents are hosted on copilotstudio.microsoft.com, the URLs appear legitimate, making it easy for victims to trust and interact with them.

When a user interacts with the agent, it can present a “Login” or “Consent” button. If the user clicks it, they are redirected to an OAuth consent screen, made to look like a valid Microsoft or enterprise permission prompt, requesting access permissions.

Once the user grants consent, their OAuth token is silently forwarded to the attacker’s infrastructure, often via an HTTP request configured inside the malicious agent. The exfiltration can use legitimate Microsoft infrastructure, making it harder to detect through conventional network monitoring.

Crucially, this is not a software vulnerability but a social-engineering abuse of legitimate platform features (custom agents plus OAuth consent flows) to steal credentials.

What’s at Risk

Because OAuth tokens grant access to platforms, services, and data, stolen tokens from a successful CoPhish attack can lead to:

  • Unauthorized access to corporate resources: emails, chats, calendar, cloud files, internal documents, etc.
  • Persistent access until the token is revoked or expires — enabling long-term espionage or data theft.
  • Lateral movement: attackers with a stolen token could impersonate users — possibly including privileged users — to access more resources or escalate privileges.
  • Evading detection: because the token is exfiltrated via legitimate Microsoft infrastructure, traffic may appear benign, bypassing many standard security controls.

Given how easy it is to deploy Copilot Studio agents and how many organizations use Microsoft cloud tools, the potential blast radius is wide — from small teams to large enterprises.

How the Attack Works

  1. An attacker (or compromised tenant) creates a malicious agent in Copilot Studio and enables the “demo website” sharing feature.
  2. The agent’s “Login” topic is configured to redirect users to an OAuth consent flow, often masquerading as a legitimate login / authorization prompt.
  3. The attacker distributes the agent link via phishing, email, chat, internal messages, relying on the legitimate Microsoft domain to avoid suspicion.
  4. A victim clicks “Login” and consents, unknowingly granting permissions. The OAuth token returned is immediately forwarded, silently, to an attacker-controlled backend.
  5. With the stolen token, attackers can access mail, chat, calendars, files, and other resources via OAuth/Graph APIs, potentially granting full tenant compromise depending on permissions.

Because everything is hosted under Microsoft’s own domain and appears legitimate, the attack bypasses phishing filters, domain‑based detection, and many usual safety nets.
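Defenders can triage consent grants exported from their identity platform for exactly this pattern: an unapproved app holding high-privilege delegated scopes. The sketch below works on already-exported records; the field names, risky-scope list, and app allow-list are simplified assumptions rather than the exact Microsoft Graph schema:

```python
# Simplified, assumed shape of exported OAuth consent-grant records.
# A real review would pull live data (e.g. Entra ID oauth2PermissionGrants)
# and use the organization's actual approved-application inventory.
RISKY_SCOPES = {"Mail.Read", "Mail.ReadWrite", "Files.ReadWrite.All",
                "offline_access", "Directory.ReadWrite.All"}
APPROVED_APPS = {"corp-reporting-app"}  # hypothetical allow-list

def flag_grants(grants: list[dict]) -> list[dict]:
    """Flag grants to unapproved apps that request risky scopes."""
    flagged = []
    for grant in grants:
        scopes = set(grant["scope"].split())
        if grant["app"] not in APPROVED_APPS and scopes & RISKY_SCOPES:
            flagged.append(grant)
    return flagged

grants = [
    {"app": "corp-reporting-app", "scope": "Mail.Read offline_access"},
    {"app": "demo-copilot-agent", "scope": "Mail.ReadWrite offline_access"},
]
print(flag_grants(grants))  # only the unapproved agent's grant
```

Any flagged grant warrants immediate token revocation and a check of the consenting user's recent activity.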

Why It Matters

  • The CoPhish attack is a major wake-up call: AI agent platforms like Copilot Studio can be weaponized, not merely used for convenience.
  • It exposes a gap in current security: legitimate features (custom agents, OAuth consent) can become attack vectors through social engineering and workflow abuse.
  • As more organizations adopt AI-based automation and assistants, the risk associated with misuse of OAuth tokens grows — token theft can lead to data breaches, compliance violations, and wide-scale compromise.
  • Traditional security measures (firewalls, network monitoring, email filters) are insufficient, because the malicious activity leverages trusted infrastructure and legitimate domains.

What Organizations Should Do Right Now

To defend against CoPhish and similar AI‑based token‑theft attacks:

  • Enforce strict consent policies – Require admin approval for any new OAuth apps or Copilot agent consents.
  • Lock down Copilot Studio – Disable public sharing or “demo website” features, or restrict agent creation to trusted users only.
  • Monitor enrollment, consent, and token issuance events in your identity platform (e.g. Microsoft Entra ID) for anomalous or unexpected entries.
  • Revoke and rotate tokens periodically – Especially after user role changes or when unexpected consents arise.
  • Educate teams about the risk – Treat AI agents as high-privilege identities, not just convenience tools; automated features carry real security consequences.
  • Implement least-privilege principles and granular permission scopes – Only grant what’s strictly needed.

How NHI Mgmt Group Can Help

Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.

At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.

We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.

If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.

Final Thoughts

The CoPhish attack highlights a new and evolving threat: AI-powered agents themselves can become attack vectors. What makes this breach particularly concerning is that attackers exploited trusted Microsoft infrastructure and OAuth consent flows, meaning traditional defenses like phishing filters or domain verification are largely ineffective.

Organizations must recognize that AI assistants, Copilot agents, and other automated tools are effectively non-human identities with privileges. Just like service accounts or API tokens, they require strict governance, monitoring, and access control.