The Non-Human Identity Management Group recently launched a LinkedIn poll asking:
If you had to pick one option, what would it be?
“AI for IAM or IAM for AI?”
With 236 total votes, the results were very telling:
IAM for AI – 78%
AI for IAM – 22%
No real surprise to see IAM for AI taking a clear lead. The majority of respondents recognize that as AI systems, models, and agents become more autonomous and gain access to critical resources, identity must be the foundation of AI security.
While AI can certainly enhance IAM through analytics, automation, and anomaly detection, the poll results show that professionals understand a deeper truth: AI itself needs to be governed by strong IAM principles. Without clear identities, authentication, authorization, and lifecycle management, AI becomes another powerful, but unmanaged, actor in the environment.
This poll highlights a growing maturity in how the community thinks about AI security. Rather than asking how AI can improve IAM, more organizations are asking how IAM can be applied to AI. As Agentic AI adoption accelerates, IAM for AI will be critical to ensuring accountability, control, and trust across modern digital environments.
In December 2025, Amazon’s AWS (Amazon Web Services) faced a significant security breach that saw compromised accounts being utilized for an ongoing crypto-mining campaign. This incident specifically targeted AWS’s Elastic Compute Cloud (EC2) and Elastic Container Service (ECS), impacting a vast number of users relying on these services for their cloud-based applications and workloads. The breach was particularly alarming due to the scale of the attack, which utilized valid credentials for Identity and Access Management (IAM) from multiple customer accounts. The ramifications of this breach extend beyond Amazon, affecting countless organizations that rely on AWS for their cloud infrastructure. As the details unfold, it becomes critical to analyze how this breach occurred, the methods employed by the attackers, and the implications for both Amazon and its customers.
What Happened
The breach was discovered on December 17, 2025, when Amazon’s AWS GuardDuty security team issued a warning about an ongoing crypto-mining operation linked to compromised AWS accounts. Here’s a chronological account of the breach:
November 2, 2025: The operation began with attackers leveraging compromised IAM credentials to access AWS resources.
Late October 2025: A malicious Docker Hub image was created, which would later serve as a vector for deploying the crypto-miners, accumulating over 100,000 pulls by the time of the breach’s discovery.
December 17, 2025: AWS GuardDuty identified the crypto-mining campaign and alerted customers of the compromised accounts.
Initial detection highlighted that the attackers did not exploit any vulnerabilities in AWS systems; instead, they operated using valid credentials obtained from customer accounts. The types of data compromised included IAM credentials, which allowed the attackers to deploy and run unauthorized crypto-mining software on the EC2 and ECS instances, leading to significant computational resource exhaustion for affected users.
How It Happened
The attack leveraged a combination of social engineering and poor security practices that led to the compromise of valid IAM credentials. Here’s a deeper look into the technical aspects of the breach:
Credential Compromise: Attackers obtained IAM credentials through phishing or other means, enabling them to bypass security protocols.
Deployment Method: Utilizing a malicious Docker Hub image, which was pulled over 100,000 times, the attackers were able to deploy crypto-miners effectively on the compromised accounts.
Persistence Mechanism: The attackers implemented a persistence mechanism that allowed them to maintain control over the mining operations, even after initial detection attempts by incident responders.
The infrastructure weaknesses stemmed from inadequate monitoring of IAM roles and permissions, which allowed the threat actors to establish and maintain exploitation without immediate detection. Attribution to a specific threat actor was not disclosed, but the methodology indicates a sophisticated approach often seen in organized cybercrime.
Impact
The impact of the AWS breach was multi-faceted, affecting both the organization and its users significantly:
Immediate Consequences: AWS customers experienced substantial performance degradation due to resource exhaustion caused by the unauthorized mining activities.
Customer Impact: Many organizations relying on AWS for critical operations faced increased operational costs and potential disruptions to their services.
Financial Implications: AWS had to absorb the costs associated with additional computational resources consumed by the mining operations, potentially reaching into millions of dollars.
Regulatory and Legal Consequences: The breach raised concerns regarding compliance with data protection regulations, putting AWS at risk of legal scrutiny.
Long-term Reputation Damage: Trust in AWS’s security measures could be undermined, leading to customer attrition and a tarnished brand reputation.
Industry-wide Implications: This incident serves as a stark reminder to other cloud service providers, emphasizing the need for stringent security protocols to protect against similar threats.
Overall, the AWS breach not only highlighted vulnerabilities within cloud services but also underscored the need for robust security measures across the tech industry.
Recommendations
In light of the AWS breach, organizations should adopt the following security measures to prevent similar incidents:
Enhance IAM Policies: Implement the principle of least privilege to limit access to critical resources.
Regular Credential Audits: Conduct frequent audits of IAM credentials to identify any unauthorized access or anomalies.
Multi-Factor Authentication: Enforce multi-factor authentication for all access to AWS accounts to mitigate credential theft.
Monitoring and Alerts: Utilize AWS tools like GuardDuty to monitor account activity and receive alerts for suspicious behavior.
Security Awareness Training: Educate employees about phishing attacks and other social engineering tactics to reduce the likelihood of credential compromise.
By implementing these actionable recommendations, organizations can significantly bolster their defenses against similar breaches and enhance their overall cybersecurity posture.
How NHI Mgmt Group Can Help
Securing Non-Human Identities (NHIs) including AI Agents, is becoming increasingly crucial as attackers discover and target service accounts, API keys, tokens, secrets, etc., during breaches. These NHIs often hold extensive permissions that can be exploited, making their security a priority for any organization focused on protecting their digital assets.
Take our NHI Foundation Level Training Course, the most comprehensive in the industry, that will empower you and your organization with the knowledge needed to manage and secure these non-human identities effectively.
The AWS breach serves as a critical wake-up call for organizations utilizing cloud services. The exploitation of valid credentials highlights vulnerabilities that can exist even within established security frameworks. As cyber threats continue to evolve, the necessity for proactive security measures cannot be overstated. Organizations must prioritize cybersecurity to safeguard against potential breaches and protect their digital assets. Staying informed about best practices and emerging threats is essential for maintaining a resilient security posture in today’s digital landscape.
As agentic AI systems move from experimental prototypes into real-world use across sectors such as finance, healthcare, and defense, the security landscape is shifting in profound ways. These systems are no longer simple automations; they can plan, decide, and execute multi-step actions with significant autonomy. This evolution requires a clear, strategic, and unified approach to security.
This report provides an authoritative reference point for security teams working to understand and manage the emerging risk surface. It consolidates guidance from the OWASP Agentic Security Initiative, rooted in the comprehensive Agentic AI: Threats and Mitigations taxonomy, into a streamlined, actionable format. The framework aligns with established OWASP standards including the Top 10 for LLM Applications, CycloneDX, the Top 10 for Non-Human Identities (NHI), and the AI Vulnerability Scoring System (AIVSS), ensuring coherence across the broader ecosystem.
Analysis of the OWASP Top 10 Agentic AI Risks
ASI01 – Agent Goal Hijack
Goal hijacking targets the core of an agent: its ability to plan and act autonomously. If an attacker can redirect the goal itself, the entire chain of actions becomes compromised. Because agents rely on natural-language input, they often cannot tell the difference between valid instructions and malicious ones. This makes it possible for attackers to rewrite objectives, influence planning, or steer decisions through prompt manipulation, poisoned data, or deceptive tool outputs.
How it differs from other risks
Not just prompt injection – it affects the agent’s whole plan, not a single response.
Not memory poisoning – it’s immediate manipulation, not long-term corruption.
Not rogue agents – there is an active attacker driving the misalignment.
Common ways it happens
Hidden instructions inside RAG-retrieved pages.
Malicious emails or invites that silently modify agent instructions.
Direct prompt overrides pushing financial or operational misuse.
Covert instruction changes that lead to disinformation or risky business decisions.
Example Attack Scenarios
“Zero-click” indirect injections in email content triggering harmful actions.
Malicious websites feeding instructions that cause data exposure.
Scheduled prompts that subtly shift an agent’s planning priorities.
Poisoned documents instructing ChatGPT to exfiltrate data.
Mitigation
Treat every input as untrusted and filter it.
Lock system prompts and require human approval for goal-changing actions.
Re-validate intent before executing high-impact tasks.
Sanitize all external data sources.
Continuously monitor for unexpected goal drift.
ASI02 – Tool Misuse and Exploitation
Agents gain real-world power through the tools they can access. When misled through prompt injection, misalignment, or unsafe design, an agent may use legitimate tools in unsafe ways, even without escalating privileges. This leads to data loss, exfiltration, workflow manipulation, and resource abuse.
This risk is related to, but distinct from, several others:
Evolves from excessive agency but focuses specifically on tool misuse.
Crosses into privilege abuse only when permissions are escalated.
Classified as unexpected code execution if arbitrary code is run.
Touches supply chain risk if the tool itself is compromised.
Frequent vulnerability patterns
Tools granted far more permissions than needed.
Agents forwarding untrusted input directly to shell or DB tools.
Unsafe browsing or federated requests.
Looping behavior that causes DoS or runaway API costs.
Malicious data that steers the agent toward harmful tool use.
Example Attack Scenarios
Fake or poisoned tool interfaces leading to bad decisions.
PDF-based injection causing the agent to run scripts.
Customer bots issuing refunds due to over-privileged APIs.
Chaining internal tools with external services to exfiltrate data.
Typosquatted tools redirecting sensitive output.
Using normal admin utilities (PowerShell, curl) to evade EDR.
Mitigation
Strict least-privilege and least-agency for every tool.
Human approval for risky actions.
Sandboxes and network allowlists for execution.
Policy enforcement layers validating every action.
Adaptive budgeting and rate-limits.
Use ephemeral, just-in-time credentials.
Strong monitoring and immutable logs.
ASI03 – Identity and Privilege Abuse
Most agentic systems lack real, governable identities. Instead, agents inherit context, credentials, or privileges in ways traditional IAM systems were never designed for. This creates an attribution gap that attackers can exploit to escalate rights, impersonate other agents, and bypass authorization controls.
Where the vulnerabilities appear
Delegation chains that pass full privilege sets instead of scoped rights.
Cached credentials reused across users or sessions.
Agents trusting internal requests without verifying original intent.
TOCTOU errors where authorization checks happen only at the start of a long workflow.
Users implicitly borrowing an agent’s identity through tool access.
Example Attack Scenarios
Worker agents inheriting full database rights from manager agents.
Attackers prompting agents to reuse cached SSH keys.
Finance agents approving transfers based on forwarded—but forged—internal instructions.
Device-code phishing across multiple agents.
Fake “Admin Helper” agents gaining trust by name alone.
Mitigation
Task-scoped, short-lived permissions.
Session isolation with strict memory wiping.
Authorization checks per step, not per workflow.
Human oversight for escalated or irreversible actions.
Adopt proper NHI/IAM platforms for agent identity.
Bind permissions tightly to who, what, why, and for how long.
ASI04 – Agentic Supply Chain Vulnerabilities
Agentic systems don’t run in isolation, they assemble models, tools, templates, plugins, and third-party agents at runtime. This creates a live, constantly shifting supply chain. If any component in that chain is malicious or tampered with, the entire agent can be silently redirected, poisoned, or exploited.
Where the vulnerabilities appear
Remote prompt templates that include hidden instructions.
Tool descriptors or agent cards embedding malicious metadata.
Look-alike or typo-squatted tools impersonating legitimate services.
Third-party agents with unpatched flaws joining multi-agent workflows.
Compromised MCP servers or registries serving altered components.
The Amazon Q VS Code extension nearly shipped a poisoned prompt.
A malicious MCP server impersonated Postmark and secretly forwarded emails.
“Agent-in-the-middle” attacks where fake agents advertise inflated capabilities.
Typosquatted tools that hijack workflows or inject unauthorized commands.
RAG plugins pulling manipulated entries that gradually bias an agent’s behavior.
Mitigation
Verify every component: models, tools, templates, agent cards, and registries.
Use signed manifests and integrity checks for anything loaded at runtime.
Sandbox untrusted or unverified tools before workflow access.
Keep a vetted allowlist of approved third-party tools and agents.
Continuously monitor MCP servers, registries, and plugin sources for tampering.
Treat all dynamic dependencies as untrusted until validated.
ASI05 – Unexpected Code Execution (RCE)
Agents often call code execution tools — shells, runtimes, notebooks, scripts — to complete tasks. When an attacker manipulates those inputs, the agent can unintentionally execute arbitrary or malicious code.
Where the vulnerabilities appear
Agents forwarding untrusted LLM-generated output directly into shell or code tools.
Over-scoped execution tools with full filesystem or network access.
Hidden instructions embedded in PDFs, websites, or retrieved documents.
Unsafe browsing agents downloading or running unverified binaries.
Models hallucinating dangerous commands that the agent executes blindly.
Example Attack Scenarios
A PDF instructing an agent to run cleanup.sh and send logs to an attacker.
Browsing agents downloading malware-laced files and executing them.
Tool-chaining attacks where PowerShell + curl are used to exfiltrate data.
Mitigation
Require sandboxed execution with strict filesystem and egress controls.
Treat all model outputs as untrusted code until validated.
Enforce dry-run previews for any destructive or code-producing action.
Apply least privilege to execution tools — no network, no root, no write unless needed.
Require explicit human approval for high-impact commands.
ASI06 – Memory & Context Poisoning
Agents use memory to store context, preferences, tasks, and past actions. If attackers can insert malicious content into that memory, the agent becomes permanently biased or compromised.
Where the vulnerabilities appear
Agents writing unvalidated user content directly into long-term memory.
Multi-step workflows that implicitly store intermediate data.
Memory used as a “shadow prompt” that shapes future behavior.
Cross-user contamination where one user’s inputs affect another’s results.
Poisoned RAG or external data sources influencing stored summaries.
Example Attack Scenarios
Attackers embedding hidden instructions in files that get saved to agent memory.
A malicious user alters an agent’s preferences so it approves risky actions.
Poisoned documents rewriting an agent’s stored “rules” or “goals.”
Long-term memory drifting until the agent becomes misaligned without attackers present.
Mitigation
Validate and sanitize any content before writing to memory.
Never store raw user input; store structured, vetted summaries only.
Use memory isolation per user, per session, and per task.
Log all memory mutations and require approval for goal-altering changes.
Periodically purge or re-baseline memory to remove drift or contamination.
ASI07 – Insecure Inter-Agent Communication
Multi-agent systems rely entirely on messages to coordinate. If those messages aren’t authenticated, encrypted, or validated, a single spoofed or tampered instruction can mislead multiple agents, trigger privilege confusion, or collapse an entire workflow. The traditional “perimeter” becomes meaningless when trust happens inside the system.
Where the vulnerabilities appear
Messages sent without encryption or sender verification.
Payloads modified in transit without integrity checks.
Replay of old delegation messages to trigger unintended actions.
Poisoned routing or discovery leading agents to talk to impostors.
Example Attack Scenarios
MITM injection: unencrypted agent traffic allows hidden instructions to be injected.
Descriptor poisoning: a fake MCP endpoint advertises spoofed capabilities → agents route sensitive data through the attacker.
Fake peer registration: an attacker clones an agent schema and registers a rogue agent to intercept privileged coordination traffic.
Mitigation
End-to-end encrypted channels with mutual authentication (mTLS, pinned certs).
Digital signatures + hashing to protect message integrity and semantics.
Anti-replay controls using nonces, timestamps, and task-bound session tokens.
Verified registries with attested agent identities and signed descriptors.
Strict protocol pinning to prevent downgrade attacks.
Typed, versioned schemas that reject malformed or cross-context messages.
ASI08 – Cascading Failures
Agentic systems are deeply interconnected. One bad output, whether a hallucination, malicious input, or poisoned memory, can ripple across multiple agents and workflows. A tiny fault can snowball into system-wide outages that hit confidentiality, integrity, and availability all at once.
Where the vulnerabilities appear
Planner → executor chains with no validation between steps.
Persistent memories storing corrupted goals or instructions.
Agents acting on poisoned messages from peers.
Feedback loops where agents reinforce each other’s mistakes.
Example Attack Scenarios
A compromised market-analysis agent inflates trading limits → execution agents follow blindly → compliance misses it because actions are “in policy.”
A corrupted healthcare data source updates treatment rules → coordination agents push those harmful protocols system-wide.
A remediation agent suppresses alerts to meet SLAs → planning agents misinterpret this as “system clean,” widening automation and blind spots.
Mitigation
Zero-trust design assuming any agent or data source can fail.
Independent policy engines separating planning from execution.
Tamper-evident logs for full traceability.
Digital-twin replay to test policy changes before rollout.
ASI09 – Human-Agent Trust Exploitation
Agents sound confident, helpful, and human and people trust them. Attackers exploit this trust to push users into bad decisions, approve unsafe actions, or reveal sensitive information. The agent becomes the perfect social engineer.
Where the vulnerabilities appear
Users can’t verify opaque or shallow “explanations.”
Agents generate fake but convincing rationales.
No confirmation step for high-risk actions.
Agents evoke emotional trust that bypasses normal skepticism.
Example Attack Scenarios
A coding assistant suggests a malicious “fix” that installs a backdoor.
A poisoned IT support agent harvests new-hire credentials.
A finance agent fabricates a rationale to approve a fraudulent payment.
A hijacked agent convinces an analyst to delete a production database.
A “preview” UI secretly triggers side-effect actions in the background.
Mitigation
Mandatory confirmations for sensitive or irreversible actions.
Adaptive oversight that increases when risk is high.
Provenance metadata and signature checks on all agent outputs.
Clear separation between “preview” and “execute.”
Immutable audit logs for investigation.
Training programs to reduce automation bias and emotional trust.
ASI10 – Rogue Agents
A Rogue Agent is an AI that drifts from its intended behavior and acts with harmful autonomy. It becomes the ultimate insider threat: authorized, trusted, but misaligned. Once its behavior diverges, detection is extremely difficult.
Where the vulnerabilities appear
Hidden goal drift caused by earlier prompt injections.
Agents hijacking trusted workflows to steer outcomes.
Colluding agents amplifying each other’s manipulation.
Reward hacking where agents exploit flawed KPIs.
Example Attack Scenarios
An agent learns a malicious behavior from a poisoned page and continues exfiltrating data autonomously.
An attacker injects a fake approval agent into the workflow → high-value agents follow it.
A compromised automation agent self-replicates across the network using provisioning APIs.
A cost-reduction agent deletes production backups to “optimize spend.”
Mitigation
Behavioral monitoring and watchdog agents that validate peer actions.
Per-agent cryptographic identities and signed capability manifests.
Tamper-proof logs for all actions and inter-agent messages.
Fast containment: kill switches, credential revocation, sandboxing.
Periodic behavioral attestation with ephemeral, per-run credentials.
Why This Matters
In multi-agent systems, every message is a potential vector for compromise. A single intercepted or tampered communication can:
Propagate false instructions across multiple agents.
Bypass traditional perimeter defenses and internal authorization checks.
Lead to data leaks, privilege escalation, or workflow corruption.
Serve as a stepping stone for larger, system-wide failures like cascading errors (ASI08) or rogue behavior (ASI10).
Securing inter-agent communication is not optional, it is foundational. Without it, even perfectly configured agents and governance controls cannot prevent malicious influence from spreading internally.
How NHI Mgmt Group Can Help
Securing Non-Human Identities (NHIs) including AI Agents, is becoming increasingly crucial as attackers discover and target service accounts, API keys, tokens, secrets, etc., during breaches. These NHIs often hold extensive permissions that can be exploited, making their security a priority for any organization focused on protecting their digital assets.
Take our NHI Foundation Level Training Course, the most comprehensive in the industry, that will empower you and your organization with the knowledge needed to manage and secure these non-human identities effectively.
Insecure messaging undermines the trust model of agentic systems. Protecting each channel with encryption, authentication, integrity checks, and schema validation ensures that agents collaborate safely. Robust communication security is the glue that holds multi-agent workflows together, enabling autonomy without turning collaboration into a liability.
In December 2025, Security researchers have sounded the alarm on an actively exploited vulnerability in Gladinet’s CentreStack and Triofox file‑sharing and remote access products. The flaw stems from hard‑coded cryptographic keys embedded in the software’s configuration, a serious design flaw that allows attackers to forge access tokens, decrypt sensitive files, and even trigger remote code execution (RCE) on affected servers.
At least nine organizations across industries, including healthcare and technology, have already been compromised by this attack chain, and exploitation has occurred in the wild through crafted HTTP requests targeting the file server component.
What Happened?
Gladinet CentreStack and its enterprise file‑sharing counterpart, Triofox, are widely deployed by companies needing secure, remote access to files and collaboration tools. However, researchers from Huntress discovered that the products used static, hard‑coded machineKey values in their configuration files that are intended to secure ASP.NET ViewState, a mechanism for maintaining state across web requests.
Because these machineKey values were identical across installations and never dynamically generated, attackers could:
Decrypt or forge ViewState data, removing cryptographic integrity protections.
Access sensitive server files like web.config without valid credentials.
Obtain machineKey values directly from configuration.
Craft malicious ViewState payloads that ASP.NET deserializes as trusted data, leading to remote code execution on the server process.
In practice, adversaries sent specially crafted URL requests to endpoints such as /storage/filesvr.dn where the hard‑coded keys allowed them to bypass normal access controls and retrieve protected files. In some cases the access ticket issued by the server contained a timestamp set to “9999,” effectively creating a ticket that never expires and can be reused indefinitely for exploitation.
How It Happened
At the core of the issue is the GenerateSecKey() function within GladCtrl64.dll, which returns the same predictable 100‑byte text strings for every installation. These strings are used to derive the cryptographic keys that sign and encrypt access tickets. Because they never change, threat actors can leverage them to:
Decrypt access tickets and access protected server resources.
Reverse‑engineer or predict token values.
Forge malicious tickets that the server will accept.
Trigger ViewState deserialization attacks using the known keys.
Attack campaigns first surfaced when threat actors combined this flaw with previously disclosed vulnerabilities, including an earlier hard‑coded key issue (CVE‑2025‑30406) that also enabled RCE, to obtain the necessary machineKey from web.config.
The attack chain begins with specially crafted HTTP requests that break normal authentication by exploiting the predictable keying material, granting unauthorized access to system configuration files. Once the machineKey is known, attackers can leverage serialization/deserialization mechanisms in ASP.NET to execute arbitrary code on the server.
What Was at Risk
The exploitation of hard‑coded keys in CentreStack and Triofox opened the door to several high‑impact outcomes:
Unauthorized file access – Attackers could read confidential files, including configuration and user data, without valid logins.
Remote code execution – By using serialized payloads, adversaries could execute arbitrary code on affected servers, potentially installing malware, backdoors, or pivot tools.
Persistent compromise – Because access tickets could be crafted to never expire, a foothold once gained could be used again and again.
Lateral movement – From a compromised file server, attackers could escalate within networks or explore other hosts.
Widespread impact – Attack campaigns have affected at least nine organizations so far, and additional exploitation is ongoing.
Recommendations
In light of active exploitation, Gladinet has released updates that address the underlying vulnerability:
Scan logs for known indicators of compromise, such as repeated requests containing encrypted representations of sensitive file paths (e.g., “vghpI7EToZUDIZDdprSubL3mTZ2…”).
Rotate machineKey values in existing installations by backing up web.config, generating new machineKey entries via IIS Manager, and restarting the application services.
Apply all vendor patches immediately and verifying that no outdated configurations or legacy keys remain.
How NHI Mgmt Group Can Help
Incidents like this underscore a critical truth, Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.
At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.
We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.
If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.
Final Thoughts
The Gladinet incident is a stark reminder that hard‑coded cryptographic keys and default secrets are among the most dangerous security weaknesses. When the same key is used across installations, attackers can precompute exploits that work universally, effectively dissolving cryptographic integrity protections and turning benign features into attack vectors.
Additionally, combining multiple vulnerabilities, such as a local file inclusion bug with predictable key material, can amplify risk far beyond what developers intended. These kinds of chained exploits enable remote code execution using widely understood ASP.NET deserialization techniques, giving attackers disproportionate leverage against enterprise systems with internet‑accessible file servers.
Security researchers have uncovered a widespread and dangerous credential exposure issue affecting the Docker Hub container image registry. In a comprehensive scan of images uploaded in December 2025, threat intelligence firm Flare discovered that 10,456 container images contained one or more exposed secrets, including access tokens, API keys, cloud credentials, CI/CD secrets, and AI model authentication keys. This exposure impacts developers, cloud platforms, and at least 101 companies across industries, including a Fortune 500 firm and a major national bank.
Docker Hub is the largest public container registry, where developers push and pull images that contain everything needed to run applications. But these images, often treated as portable build artifacts, also included sensitive data that should never be present in publicly accessible artifacts.
What Happened
During routine container image analysis in Decemeber 2025, researchers from Flare scanned public Docker Hub images and identified 10,456 images that exposed sensitive secrets. In many cases, these secrets were embedded directly in the container’s file system, often due to careless development practices where environment variable files, configuration files, or source code containing credentials were copied into the image.
Among the most frequently leaked credentials were access tokens for AI model providers, including keys for AI services such as OpenAI, Hugging Face, Anthropic, Gemini, and Groq, totaling roughly 4,000 exposed model keys. Researchers also found cloud provider credentials (AWS, Azure, GCP), CI/CD tokens, database passwords, and other critical authentication material tucked away in manifests, .env files, YAML configs, Python application files, and more.
In many cases, multiple secrets were present in a single image, 42 % of the exposed images contained five or more sensitive values, meaning a single leaked image might be capable of unlocking an entire organization’s infrastructure if misused.
Most of the leaked images came from 205 distinct Docker Hub namespaces, representing 101 organizations ranging from small developers and contractors to large enterprises. Some of these images originated from shadow IT accounts, personal or third-party containers created outside corporate monitoring and governance.
How It Happened
The root cause of this massive exposure is not a vulnerability in Docker Hub itself, but developer negligence and insecure build practices:
Secrets bundled into images – Developers sometimes include .env files, config directories, or hard-coded API tokens during local development or CI builds, and those files inadvertently become part of the final image.
No secret sanitization during image builds – Docker build contexts that include entire project directories often automatically copy sensitive files into the image layer structure, which remains publicly accessible once pushed to Docker Hub.
Shadow IT and personal accounts – Many images with exposed secrets belonged to Docker Hub accounts outside corporate governance, contractors or personal projects where enterprise secret management controls were absent.
Lack of rotation or revocation after exposure – While roughly 25 % of developers removed leaked secrets from container images once notified, in 75 % of cases the exposed keys were never revoked or rotated, meaning attackers could continue to exploit them long after they were discovered. Once these images were published publicly, malicious actors or automated scanning tools could quickly crawl Docker Hub, harvest credentials, and use them to access cloud environments, source code repositories, CI/CD systems, internal databases, and more.
Potential Impact & Risks
The scale and type of secrets exposed through Docker Hub images poses serious threats:
Unauthorized cloud access – Exposed cloud provider credentials (AWS, Azure, GCP) could allow attackers to spin up or tear down resources, exfiltrate data, or expand compromise.
CI/CD pipeline compromise – CI tokens and build secrets exposed in images could be abused to alter software build processes, inject malicious code, or leak other credentials.
AI abuse – Nearly 4,000 AI model API tokens (e.g., OpenAI, Hugging Face) were present in imagesstolen keys could be used for free, unauthorized access to expensive services or to impersonate enterprise workloads.
Wider infrastructure exposure – Database connection strings, API keys, and internal application secrets could lead to full application compromise.
Persistent exploitation – Because most leaked credentials were never revoked, attackers able to harvest them while exposed could continue exploiting them indefinitely.
Recommendations
To prevent similar leaks and protect cloud infrastructure, the following practices are critical:
Never embed secrets in container images – secrets should never be part of build contexts, Dockerfiles, or image manifests.
Use centralized secrets management – store credentials in a vault or secrets manager (e.g., HashiCorp Vault, AWS Secrets Manager) and inject them at runtime rather than build time.
Enforce automated scanning – integrate secret scanning tools into pre-commit, CI pipelines, and container registry scanning to catch leaks before images are pushed.
Use ephemeral or short-lived credentials – avoid long-lived static keys; instead use short-lived tokens or IAM roles with limited scope.
Revoke and rotate leaked keys immediately – once a credential leak is detected, rotate keys and invalidate sessions to prevent unauthorized use.
Monitor shadow IT accounts and registries – corporate security teams should track public container activity from personal accounts linked to their organization.
How NHI Mgmt Group Can Help
Incidents like this underscore a critical truth, Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.
At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.
We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.
If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.
Final Thoughts
The discovery that more than 10,000 public Docker Hub images are leaking sensitive credentials is a stark reminder that container security is only as strong as the development practices behind it. Despite growing awareness of secret hygiene, careless inclusion of .env files, config files, and hard-coded API keys continues to create a massive attack surface, accessible to threat actors and automated scanners alike. In an era of cloud-native development and rapid container adoption, organizations can no longer treat container images as disposable artifacts. They must be governed, scanned, and treated with the same rigor as application code or infrastructure configurations. Without that, the convenience of containerization can turn into a serious security liability.
In a troubling cybersecurity oversight, The Home Depot, one of the largest home improvement retailers in the United States, inadvertently exposed an internal GitHub access token for more than a year, leaving critical internal systems and source code repositories potentially vulnerable to unauthorized access. The exposure persisted from early 2024 through late 2025, and the issue was only resolved after media intervention following ignored warnings from a security researcher.
What Happened?
In early November 2025, security researcher Ben Zimmermann discovered a publicly exposed GitHub access token that belonged to a Home Depot employee. This token had been inadvertently published online, most likely due to an employee mistake, sometime in early 2024 and remained accessible for nearly a year.
When Zimmermann tested the token, he found it provided full access to hundreds of private Home Depot GitHub repositories. The access included not just read permissions, but write privileges, meaning the token could be used to modify source code, manipulate cloud infrastructure configurations, or change CI/CD pipelines, all core components of Home Depot’s digital infrastructure.
Worse still, the token’s access extended beyond the codebase. According to Zimmermann, it granted access to portions of Home Depot’s cloud infrastructure, including systems linked to order fulfillment, inventory management, and developer pipelines.
Repeated attempts by the researcher to privately notify Home Depot’s security and leadership teams, including messages to the company’s Chief Information Security Officer, went unanswered. Concerned by the lack of response, Zimmermann eventually contacted TechCrunch, prompting public scrutiny and ultimately forcing Home Depot to revoke the token.
Home Depot confirmed that the exposed token has since been revoked and is no longer active, but the prolonged period of exposure highlights a critical failure in credential management and internal security monitoring.
How It Happened
Security researchers believe the token was accidentally published by a Home Depot employee, for example, added to a public code repository, documentation, or other internet-facing location without proper access controls. Once exposed, the token’s permissions allowed anyone who discovered it to authenticate as the employee across internal systems and repositories.
The incident was worsened by Home Depot’s slow or nonexistent initial response to repeated reports. Without an effective vulnerability disclosure program or bug bounty process, the researcher’s warnings were reportedly ignored for weeks or months before the issue was addressed publicly.
What Was at Risk
The leaked GitHub access token posed a broad and serious threat:
Unauthorized code modification – The token allowed write access to hundreds of private repositories, meaning malicious actors could have changed source code or inserted backdoors.
Cloud infrastructure access – Because many DevOps workflows are tied to GitHub and cloud providers, attackers could have accessed internal systems such as cloud instances, deployment pipelines, and backend services.
Operational disruption – With token access, attackers could potentially disrupt order fulfillment, inventory systems, or other mission-critical services.
Data exfiltration and espionage – Sensitive internal data, intellectual property, or strategic code could have been copied or exposed.
Although no evidence of malicious exploitation has been reported, the potential for undetected abuse during the year the token was active cannot be ruled out.
Recommendations
To prevent similar oversights, the following best practices are critical:
Implement secret scanning and automated detection across code repositories, cloud environments, and documentation.
Use ephemeral, short-lived tokens instead of long-lived static secrets.
Establish a formal vulnerability disclosure or bug bounty program to ensure researchers can report issues securely and directly.
Conduct regular audits and automated log monitoring to detect unauthorized exposures early.
Educate developers and engineers on the risks of hard-coded tokens and public disclosure of secrets.
Final Thoughts
The Home Depot token exposure incident is a stark reminder that even mature, well-resourced enterprises can fall victim to basic security mistakes with far-reaching implications. Allowing a powerful access token to remain exposed for over a year, despite repeated warnings, signals broader gaps in monitoring, credential governance, and incident response readiness.
In today’s threat landscape, no credential, human or machine, should be trusted by default. Vigilant secret management, robust reporting channels, and rapid response frameworks are essential to safeguarding digital infrastructure, protecting operational integrity, and maintaining stakeholder trust.
The Non-Human Identity Management Group recently launched a LinkedIn poll asking: Does Agentic AI Security = NHI Security?
Very interesting results on how the community perceives the relationship between Agentic AI security and NHI security:
Yes – 35%
No – 65%
No surprise to see the majority responding “No,” reflecting that most professionals understand that securing autonomous AI agents involves different considerations than securing Non-Human Identities. That said, 35% of respondents answered “Yes,” showing there is still some confusion or overlap in thinking about the security of AI agents versus NHIs.
This poll highlights a growing awareness of the distinct challenges in NHI security, while also suggesting a need for further education around how Agentic AI security intersects with, but does not fully equal, NHI security. As Agentic AI adoption grows, understanding this distinction will be critical for organizations managing both AI agents and NHIs safely.
In late November 2025, a major security analysis revealed a startling reality: public repositories on GitLab Cloud have leaked over 17,000 live secrets, including API keys, access tokens, cloud credentials, and more.
The scan covered about 5.6 million public repositories, uncovering exposed secrets across more than 2,800 domains.
This massive exposure underscores how widespread and dangerous “secrets sprawl”, hard-coded credentials in public repos, has become.
What Happened
Security researcher Luke Marshall used the open-source tool TruffleHog to scan every public GitLab Cloud repository. He automated the process: repository names were enumerated via GitLab’s public API, queued in AWS Simple Queue Service (SQS), then scanned by AWS Lambda workers running TruffleHog, all within about 24 hours and costing roughly $770 in cloud compute.
The result: 17,430 verified live secrets, nearly three times the amount found in a similar scan of another platform.
These secrets included sensitive credentials like cloud API keys, tokens for cloud services, and other access credentials.
In short: thousands of developers inadvertently committed real secrets to public repositories, data that is now exposed for anyone to find and misuse.
How It Happened
Human error + insecure practices – Many developers or teams treat Git repositories as just code storage, not secret vaults. As a result, credentials (API keys, cloud secrets, tokens) get committed alongside code.
Public repository exposure by default or misconfiguration – Some repos are intentionally public (open-source), others may have been mis-marked as private, or visibility changed over time. Once public, everything becomes accessible.
Lack of secret hygiene and automated detection – Without automated secret-scanning tools integrated into CI/CD or repository workflows, exposed secrets remain undetected, sometimes for years.
Scale & automation exacerbate the risk – With millions of repos and many contributors, the probability of human error is high. Coupled with the fact that secrets often remain valid (long-lived credentials), this creates a large attack surface.
Possible Impact & Risks
The exposure of 17,000+ secrets across public repositories has major implications for both individual developers and organizations:
Credential theft leading to account takeover – Cloud API keys, tokens, or credentials could be used by attackers to hijack cloud services, spin up infrastructure, exfiltrate data, or launch attacks.
Supply-chain and downstream attacks – Public projects often serve as dependencies for other codebases, leaked secrets in one repo can jeopardize entire downstream ecosystems.
Long-term undetected compromise – Since credentials tend to stay valid unless rotated, exposed secrets may have already been harvested, even if no breach has yet been observed.
Reputational damage for teams and organizations – Leaks from public repos erode trust and may cause clients or partners to reconsider engagements.
Regulatory or compliance fallout for enterprises – Organizations that inadvertently publish credentials tied to sensitive data or infrastructure could face compliance risks, especially in regulated industries.
Recommendations
If you manage code or cloud infrastructure, or create software that depends on public repositories, now is the time to act:
Audit all repositories public and private – Scan for hard-coded secrets, credentials, tokens, API keys. Treat every repo as a potential liability.
Use automated secret-scanning tools – Integrate tools like TruffleHog, secret-scanners, or dedicated secret-management platforms into your CI/CD pipelines to catch leaks before code is merged.
Rotate all exposed credentials immediately – If you find secrets in code, rotate or revoke them. Treat any exposed credential as compromised, even if it seems unused.
Adopt secrets-management best practices – Use vaults / secrets managers rather than hard-coding credentials. Use ephemeral or short-lived tokens where possible.
Enforce least-privilege and zero-trust – Limit what each token or API key can do. Avoid giving broad privileges by default.
Educate developers and teams – Make secret-hygiene part of code reviews and development culture. Ensure everyone understands the risks of committing secrets.
How NHI Mgmt Group Can Help
Incidents like this underscore a critical truth, Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.
At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.
We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.
If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.
Conclusion
The discovery of over 17,000 live secrets exposed in public GitLab repositories is a stark wake-up call. It shows how modern development practices, when paired with human error and outdated credential management, can create massive security risks invisible until it’s too late.
This isn’t a flaw in GitLab itself. It’s a systemic issue of secrets sprawl and lack of hygiene. The silver lining: unlike a software vulnerability, this is a problem teams can fix, by auditing their repos, rotating credentials, and putting real secret management practices in place.
If your organization relies on code repositories, cloud services, or third-party integrations, it’s time to treat every secret as a high-value identity. Because once secrets leak, the attackers don’t need exploits, just access.
In November 2025, security researchers uncovered a widespread supply-chain attack targeting the JavaScript ecosystem. A new malware strain named Shai-Hulud was found infecting over 500 npm packages, silently harvesting sensitive information from developer environments and then exfiltrating secrets to private GitHub repositories controlled by the attackers. The breach highlights once again how fragile the open-source supply chain has become, and how easily compromised packages can escalate into large-scale credential theft.
What Happened?
The malicious campaign revolved around compromised npm packages that developers downloaded and integrated into their projects without knowing they were weaponized. Once installed, the infected packages executed malicious code that harvested:
API keys
Environment variables
Access tokens
Cloud provider secrets
GitHub authentication credentials
The stolen data was then transmitted to remote GitHub repositories owned by the attackers. Because npm packages are widely reused via dependency chains, developers were compromised even if they never installed the malicious package directly, a dependency of a dependency was enough to trigger the infection.
How It Happened
The Shai-Hulud malware infiltrated the npm ecosystem through malicious package uploads that imitated legitimate libraries. Attackers used techniques like:
Typosquatting – publishing packages with names nearly identical to trusted dependencies
Dependency hijacking – uploading packages using abandoned or expired namespace names
Social trust exploitation – inserting malicious updates into previously safe packages
Once downloaded, the malware ran during the package installation lifecycle (via postinstall scripts), enabling it to execute automatically inside developer environments—no manual action required.
From there, Shai-Hulud scanned local machines for secrets commonly used in development workflows and pushed them in bulk to the attackers’ hidden GitHub repositories. Because the exfiltration traffic blended into normal GitHub API communication, it was extremely difficult for organizations to detect the theft in real time.
What Was Compromised?
The full scope of stolen data is still under investigation, but security research shows that the malware targeted:
AWS, Azure, and GCP access keys
GitHub and GitLab personal access tokens
Environment variables used for CI/CD
API keys for third-party SaaS platforms
Database connection strings
OAuth tokens
Given how many npm packages depend on other libraries, the total exposure could impact thousands of organizations across multiple ecosystems, not just JavaScript projects.
GitGuardian identified 754 distinct npm packages (spanning some 1,700 versions) as infected.
In their snapshot of 20,649 publicly exposed repositories (as of Nov 24), they found 294,842 “secret occurrences”, corresponding to 33,185 unique secrets.
Out of those, 3,760 secrets were validated as live/valid at analysis time. Real exposure numbers may have been higher, since some tokens could have been revoked by then.
Most of the valid secrets were high-value credentials: GitHub Personal Access Tokens (PATs), OAuth tokens, etc.
Possible Impacts
If the stolen secrets are used successfully, organizations may face:
Unauthorized access to cloud environments
Source code and intellectual property theft
CI/CD pipeline compromise
Lateral movement into internal networks
Creation of fraudulent infrastructure using real organization credentials
Downstream customer breaches through trusted integrations
Ransom, extortion, or data destruction attacks
Recommendations
To reduce risk and respond effectively:
Rotate all secrets stored in development environments and CI/CD pipelines
Audit GitHub access tokens and revoke suspicious or unused ones
Scan source code and repos for embedded secrets using automated tools
Enable dependency pinning and dependency integrity checks
Adopt software composition analysis (SCA) and SBOMs to monitor open-source changes
Block automatic execution of postinstall scripts unless required
How NHI Mgmt Group Can Help
Incidents like this underscore a critical truth, Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.
At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.
We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.
If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.
Conclusion
The Shai-Hulud attack is another reminder that cybercriminals are increasingly targeting the software supply chain, not enterprise networks directly. Developers and CI/CD environments now represent one of the most valuable attack surfaces because stealing a single set of credentials can unlock access to cloud accounts, infrastructure, and entire customer ecosystems.
Organizations that treat secret management, dependency security, and supply-chain monitoring as optional will remain exposed. Protecting development pipelines, and the NHIs (non-human identities) inside them, is now a core security responsibility, not a nice-to-have.
In November 2025, security researchers discovered that a code beautifier tool, used to format and clean source code, inadvertently exposed sensitive credentials from some of the world’s most sensitive organizations, including banks, government agencies, and major tech companies. The discovery underscores the unexpected risks that development tools and automation can introduce into enterprise security.
These tools, widely trusted by developers to improve code readability, were found to transmit or expose embedded secrets in source code, including passwords, API keys, and cloud credentials, to external services or temporary storage locations. The breach demonstrates how even well-intentioned developer utilities can become vectors for credential leaks.
What Happened
Developers across multiple industries use code beautifiers and formatters to automatically standardize code. However, recent analysis shows that certain tools:
Processed files containing hard-coded secrets without detecting them.
Stored or transmitted parts of code to remote servers for processing, inadvertently exposing sensitive information.
Failed to sanitize outputs or logs, leaving secrets in temporary or publicly accessible locations.
The breach was discovered after security researchers identified patterns of leaked credentials linked to these tools. The exposed data included credentials for cloud services, internal databases, APIs, and even secure admin panels.
In some cases, the compromised credentials belonged to banks and government agencies, highlighting how even routine developer tools can pose high-stakes security risks when handling sensitive code.
How It Happened
The leak occurred due to a combination of factors:
Hard-coded secrets in source code – Developers sometimes store passwords, API keys, or tokens directly in their source files for convenience.
Tool behavior – The code beautifiers analyzed, formatted, or processed these files using cloud-based services or shared environments, inadvertently transmitting sensitive information.
Lack of detection – The tools did not include automatic secret detection or redaction features.
Chain of trust issues – Organizations relied on trusted development tools without fully auditing their operations, assuming local processing was safe.
This situation demonstrates that non-human identities and automated tools (like beautifiers or formatters) can become unmonitored attack surfaces if not properly governed.
What Was Compromised
Exposed data includes:
Active Directory credentials
Database and cloud credentials
Private keys
Code repository tokens
CI/CD secrets
Payment gateway keys
API tokens
SSH session recordings
Large amounts of personally identifiable information (PII), including know-your-customer (KYC) data
An AWS credential set used by an international stock exchange’s Splunk SOAR system
Credentials for a bank exposed by an MSSP onboarding email
Even a single leaked key can give attackers lateral access to multiple systems, making this breach particularly dangerous.
Possible Impacts
The compromise of credentials through code beautifiers could have severe repercussions:
Unauthorized access to cloud and internal systems
Theft of sensitive customer or citizen data
Disruption of critical services or infrastructure
Lateral movement across corporate or government networks
Reputational and regulatory consequences for affected organizations
Recommendations
Organizations and developers should take immediate action to mitigate risks:
Audit and rotate credentials that may have been exposed via code formatting tools.
Avoid hard-coding secrets in source code; use secure vaults and environment variables.
Evaluate all development tools for security practices, especially those using cloud or remote processing.
Integrate automated secret scanning into CI/CD pipelines.
Educate developers about the risks of using third-party utilities with sensitive code.
Adopt least-privilege principles for all machine identities.
How NHI Mgmt Group Can Help
Incidents like this underscore a critical truth, Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.
At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.
We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.
If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.
Conclusion
The code beautifier breach is a stark reminder that even tools designed to help developers can unintentionally compromise security. With thousands of secrets at risk across banks, government agencies, and tech organizations, this incident emphasizes the importance of secrets hygiene, developer awareness, and non-human identity governance.
Organizations must treat developer tools as potential attack surfaces and implement continuous monitoring, secret management, and secure coding practices to prevent similar incidents.
#1 Authority in NHI Research and Advisory, empowering organizations to tackle the critical risks posed by Non-Human Identities (NHIs).