Security researchers have uncovered a widespread and dangerous credential exposure issue affecting the Docker Hub container image registry. In a comprehensive scan of images uploaded in December 2025, threat intelligence firm Flare discovered that 10,456 container images contained one or more exposed secrets, including access tokens, API keys, cloud credentials, CI/CD secrets, and AI model authentication keys. This exposure impacts developers, cloud platforms, and at least 101 companies across industries, including a Fortune 500 firm and a major national bank.
Docker Hub is the largest public container registry, where developers push and pull images that contain everything needed to run applications. But these images, often treated as portable build artifacts, can also carry sensitive data that should never appear in publicly accessible artifacts.
What Happened
During routine container image analysis in December 2025, researchers from Flare scanned public Docker Hub images and identified 10,456 images that exposed sensitive secrets. In many cases, these secrets were embedded directly in the container’s file system, often due to careless development practices in which environment variable files, configuration files, or source code containing credentials were copied into the image.
Among the most frequently leaked credentials were access tokens for AI model providers, including keys for AI services such as OpenAI, Hugging Face, Anthropic, Gemini, and Groq, totaling roughly 4,000 exposed model keys. Researchers also found cloud provider credentials (AWS, Azure, GCP), CI/CD tokens, database passwords, and other critical authentication material tucked away in manifests, .env files, YAML configs, Python application files, and more.
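Leaks like these are typically found with pattern-based secret scanning over the files extracted from each image layer. The sketch below shows the idea with a few illustrative regular expressions; these are simplified placeholder patterns, not the detection rules Flare actually used, and real scanners add entropy checks and provider-specific validation.

```python
import re

# Illustrative detection patterns (simplified; real scanners use many more,
# plus entropy analysis and live validation of candidate keys).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "generic_env_secret": re.compile(
        r"^(?:API_KEY|SECRET|TOKEN|PASSWORD)=.+$", re.MULTILINE
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in a file's contents."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Run over every regular file unpacked from an image's layers, a scanner like this surfaces candidate secrets in .env files, YAML configs, and source code alike; each hit then needs triage, since pattern matches can be test fixtures or revoked keys.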
In many cases, multiple secrets were present in a single image: 42% of the exposed images contained five or more sensitive values, meaning a single leaked image might be capable of unlocking an entire organization’s infrastructure if misused.
Most of the leaked images came from 205 distinct Docker Hub namespaces, representing 101 organizations ranging from small developers and contractors to large enterprises. Some of these images originated from shadow IT accounts, personal or third-party containers created outside corporate monitoring and governance.
How It Happened
The root cause of this massive exposure is not a vulnerability in Docker Hub itself, but developer negligence and insecure build practices:
- Secrets bundled into images – Developers sometimes include .env files, config directories, or hard-coded API tokens during local development or CI builds, and those files inadvertently become part of the final image.
- No secret sanitization during image builds – Docker build contexts that include entire project directories often automatically copy sensitive files into the image layer structure, which remains publicly accessible once pushed to Docker Hub.
- Shadow IT and personal accounts – Many images with exposed secrets belonged to Docker Hub accounts outside corporate governance, contractors or personal projects where enterprise secret management controls were absent.
- Lack of rotation or revocation after exposure – While roughly 25% of developers removed leaked secrets from container images once notified, in 75% of cases the exposed keys were never revoked or rotated, meaning attackers could continue to exploit them long after they were discovered.

Once these images were published publicly, malicious actors or automated scanning tools could quickly crawl Docker Hub, harvest credentials, and use them to access cloud environments, source code repositories, CI/CD systems, internal databases, and more.
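One low-cost defense against the build-context problem described above is to audit the context directory before running a build. The sketch below walks a directory and flags filenames that commonly carry credentials; the deny-list is an illustrative assumption, not an exhaustive catalog, and it complements rather than replaces a proper .dockerignore.

```python
from pathlib import Path

# Filenames and suffixes that commonly carry credentials.
# An illustrative deny-list -- extend it for your own environment.
SENSITIVE_NAMES = {".env", "credentials", "id_rsa", "secrets.yaml"}
SENSITIVE_SUFFIXES = {".pem", ".key"}

def audit_build_context(context_dir: str) -> list[str]:
    """Return paths in a Docker build context that look like secret files."""
    flagged = []
    for path in Path(context_dir).rglob("*"):
        if not path.is_file():
            continue
        if path.name in SENSITIVE_NAMES or path.suffix in SENSITIVE_SUFFIXES:
            flagged.append(str(path.relative_to(context_dir)))
    return sorted(flagged)
```

Running a check like this in CI before `docker build`, and failing the pipeline when it flags anything, catches the most common mistake this incident illustrates: an entire project directory, .env files included, being copied into a public image layer.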
Potential Impact & Risks
The scale and type of secrets exposed through Docker Hub images pose serious threats:
- Unauthorized cloud access – Exposed cloud provider credentials (AWS, Azure, GCP) could allow attackers to spin up or tear down resources, exfiltrate data, or expand compromise.
- CI/CD pipeline compromise – CI tokens and build secrets exposed in images could be abused to alter software build processes, inject malicious code, or leak other credentials.
- AI abuse – Nearly 4,000 AI model API tokens (e.g., OpenAI, Hugging Face) were present in images; stolen keys could be used for free, unauthorized access to expensive services or to impersonate enterprise workloads.
- Wider infrastructure exposure – Database connection strings, API keys, and internal application secrets could lead to full application compromise.
- Persistent exploitation – Because most leaked credentials were never revoked, attackers able to harvest them while exposed could continue exploiting them indefinitely.
Recommendations
To prevent similar leaks and protect cloud infrastructure, the following practices are critical:
- Never embed secrets in container images – secrets should never be part of build contexts, Dockerfiles, or image manifests.
- Use centralized secrets management – store credentials in a vault or secrets manager (e.g., HashiCorp Vault, AWS Secrets Manager) and inject them at runtime rather than build time.
- Enforce automated scanning – integrate secret scanning tools into pre-commit, CI pipelines, and container registry scanning to catch leaks before images are pushed.
- Use ephemeral or short-lived credentials – avoid long-lived static keys; instead use short-lived tokens or IAM roles with limited scope.
- Revoke and rotate leaked keys immediately – once a credential leak is detected, rotate keys and invalidate sessions to prevent unauthorized use.
- Monitor shadow IT accounts and registries – corporate security teams should track public container activity from personal accounts linked to their organization.
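The second recommendation, injecting secrets at runtime rather than build time, can be as simple as the resolver sketched below. It prefers a mounted secret file (the layout used by Docker and Kubernetes secret mounts) and falls back to an environment variable; the mount path and naming convention are assumptions for illustration, so nothing sensitive ever needs to be baked into the image.

```python
import os
from pathlib import Path

def load_secret(name: str, mount_dir: str = "/run/secrets") -> str:
    """Resolve a secret at runtime: prefer a mounted secret file, then fall
    back to an environment variable. The image itself stores no credentials,
    so pushing it publicly leaks nothing."""
    secret_file = Path(mount_dir) / name
    if secret_file.is_file():
        return secret_file.read_text().strip()
    value = os.environ.get(name.upper())
    if value is None:
        raise RuntimeError(f"secret {name!r} not provided at runtime")
    return value
```

With this pattern, rotating a leaked credential means updating the vault or orchestrator secret and restarting the workload; no image rebuild or re-push is required, which directly addresses the 75% of cases in this incident where exposed keys were never rotated.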
How NHI Mgmt Group Can Help
Incidents like this underscore a critical truth: Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.
At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.
We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.
If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.
Final Thoughts
The discovery that more than 10,000 public Docker Hub images are leaking sensitive credentials is a stark reminder that container security is only as strong as the development practices behind it. Despite growing awareness of secret hygiene, careless inclusion of .env files, config files, and hard-coded API keys continues to create a massive attack surface, accessible to threat actors and automated scanners alike. In an era of cloud-native development and rapid container adoption, organizations can no longer treat container images as disposable artifacts. They must be governed, scanned, and treated with the same rigor as application code or infrastructure configurations. Without that, the convenience of containerization can turn into a serious security liability.