In July 2025, security researchers disclosed a troubling breach involving Amazon Q, Amazon’s AI-powered coding agent embedded in the Visual Studio Code extension used by nearly a million developers. A malicious actor had successfully injected a covert “data-wiping” system prompt directly into the agent’s codebase, effectively weaponizing the AI assistant itself.
What Happened
The incident began on July 13, 2025, when a GitHub user operating under the alias lkmanka58 submitted a pull request to the Amazon Q extension repository. Despite being an untrusted contributor, the PR was accepted and merged, a sign of misconfigured repository permissions or insufficient review controls within the workflow.
Inside the merged code was a malicious “system prompt” designed to manipulate the AI agent embedded in Amazon Q. The prompt instructed the agent to behave as a “system cleaner,” issuing explicit commands to delete local file-system data and wipe cloud resources using AWS CLI operations. In effect, the attacker attempted to weaponize the AI model into functioning as a destructive wiper tool.
Four days later, on July 17, 2025, Amazon published version 1.84.0 of the VS Code extension to the Marketplace, unknowingly distributing the compromised version to users worldwide. It wasn’t until July 23 that security researchers observed suspicious behavior inside the extension and alerted AWS. This triggered an internal investigation, and by the next day, AWS had removed the malicious code, revoked associated credentials, and shipped a clean update (v1.85.0).
According to AWS, the attacker’s prompt contained formatting mistakes that prevented the wiper logic from executing under normal conditions. As a result, the company states there is no evidence that any customer environment suffered data deletion or operational disruption.
The True Root Cause
What makes this breach uniquely alarming is not just the unauthorized code change, it’s the fact that the attacker weaponized the AI coding agent itself.
Unlike traditional malware, which executes code directly, this attack relied on manipulating the agent’s system-level instructions, repurposing Amazon Q’s AI behaviors into destructive actions. The breach demonstrated how:
- AI agents can be social-engineered or artificially steered simply through malicious system prompts.
- Developers increasingly trust AI-driven tools, giving them broad access to local machines and cloud environments.
- A compromised AI agent becomes a powerful attacker multiplier, capable of interpreting and running harmful natural-language commands.
Even though AWS later clarified that the injected prompt was likely non-functional due to formatting issues, meaning no confirmed data loss occurred, the exposure risk alone was severe.
What Was at Risk
Had the malicious prompt executed as intended, affected users and organizations faced potentially severe consequences:
- Local data destruction – The prompt aimed to wipe users’ home directories and local files, risking irreversible data loss.
- Cloud infrastructure wiping – The injected commands included AWS CLI instructions to terminate EC2 instances, delete S3 buckets, remove IAM users, and otherwise destroy cloud resources tied to an AWS account.
- Widespread distribution – With nearly one million installs, the compromised extension could have impacted a large developer population, especially those using Amazon Q for projects tied to critical infrastructure, production environments, or cloud assets.
- Supply-chain confidence erosion – The breach undermines trust in AI-powered or open-source development tools, a single malicious commit can compromise thousands of users instantly.
Recommendations
If you use Amazon Q, or any AI-powered coding extension / agent, treat this incident as a wake-up call. Essential actions:
- Update to the clean version (1.85.0) immediately. If you have 1.84.0 or earlier, remove it.
- Audit extension use and permissions – treat extensions as potential non-human identities. Restrict permissions where possible; avoid granting unnecessary filesystem or cloud-access privileges.
- Review and lock down CI/CD, dev workstations, and cloud credentials – never assume that an IDE or plugin is “safe.” Use vaults, environment isolation, and minimal permissions.
- Vet open-source contributions carefully – apply stricter review and validation for pull requests in critical tools; avoid blindly trusting automated merges or simplified workflows.
- Segment environments – avoid using AI extensions on machines or environments that store production data or credentials.
- Monitor logs and cloud resource activity – watch for suspicious deletions, cloud resource termination, or unexpected CI jobs after tool updates.
How NHI Mgmt Group Can Help
Incidents like this underscore a critical truth, Non-Human Identities (NHIs) are now at the center of modern cyber risk. OAuth tokens, AWS credentials, service accounts, and AI-driven integrations act as trusted entities inside your environment, yet they’re often the weakest link when it comes to visibility and control.
At NHI Mgmt Group, we specialize in helping organizations understand, secure, and govern their non-human identities across cloud, SaaS, and hybrid environments. Our advisory services are grounded in a risk-based methodology that drives measurable improvements in security, operational alignment, and long-term program sustainability.
We also offer the NHI Foundation Level Training Course, the world’s first structured course dedicated to Non-Human Identity Security. This course gives you the knowledge to detect, prevent, and mitigate NHI risks.
If your organization uses third-party integrations, AI agents, or machine credentials, this training isn’t optional; it’s essential.
Final Thoughts
The breach of Amazon Q reveals a troubling reality: as AI tools continue to integrate deeply into development workflows, they become part of the enterprise threat landscape, not just optional helpers. A single bad commit, merged without proper checks, can transform a widely trusted extension into a potential weapon against users.
This isn’t just about one extension, it’s about the broader risks of machine identities, AI-powered tools, supply-chain trust, and code governance in modern DevOps environments. As complexity grows, so must our security practices.