NHI Forum


LLMjacking: How Exposed AWS Keys Are Fueling GenAI Abuse


(@entro)
Eminent Member
Joined: 6 months ago
Posts: 12
Topic starter  

Read the full article here: https://entro.security/blog/llmjacking-in-the-wild-how-attackers-recon-and-abuse-genai-with-aws-nhis/?source=nhimg


In a world where machine identities now outnumber humans, a new threat is rising fast: LLMjacking—the hijacking of GenAI services through compromised non-human identities (NHIs) like AWS keys and API tokens.

Unlike traditional breaches that go after user accounts, LLMjacking attacks exploit machine credentials to quietly access and abuse powerful AI models from providers such as OpenAI, Anthropic (Claude), and DeepSeek, often without triggering alerts or setting off billing alarms. This isn’t theory. It’s already happening.

What We Did: The LLMjacking Experiment

At Entro Labs, we decided to go beyond speculation. Our security research team intentionally leaked functional AWS keys (with controlled access) across platforms like GitHub, Pastebin, and Reddit—places where accidental leaks commonly happen.
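Entro hasn’t published the exact guardrails on the planted credentials, but a setup along these lines keeps a honeytoken key fully real while capping the blast radius. This is a minimal boto3 sketch; the user name and policy scope are our own illustrative assumptions, not Entro’s actual configuration:

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical "research" identity: real credentials, but explicitly denied
# the expensive/abusable actions so any invocation attempt fails loudly.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["ce:GetCostAndUsage", "bedrock:List*", "bedrock:Get*"],
         "Resource": "*"},
        {"Effect": "Deny",
         "Action": ["bedrock:InvokeModel*"],
         "Resource": "*"},
    ],
}

iam.create_user(UserName="llmjacking-research")
iam.put_user_policy(
    UserName="llmjacking-research",
    PolicyName="recon-only",
    PolicyDocument=json.dumps(POLICY),
)
key = iam.create_access_key(UserName="llmjacking-research")["AccessKey"]
# key["AccessKeyId"] / key["SecretAccessKey"] are what gets "leaked" and watched.
```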

What we saw next was both eye-opening and alarming.

1. First Access Attempts Within 9 Minutes

On average, the first probe against a leaked key arrived about 17 minutes after exposure, with the fastest exploitation attempt landing in just 9 minutes. These weren’t fake or decoy tokens; they were real, controlled AWS credentials connected to real environments.

2. Reconnaissance Comes Before Abuse

Instead of launching large-scale AI prompts right away, attackers first mapped out capabilities:

  • Checked billing APIs (GetCostAndUsage)

  • Queried available AI models (GetFoundationModelAvailability)

  • Avoided obvious indicators like GetCallerIdentity to stay stealthy

This is smart, patient behavior. Attackers test the waters before jumping in, which makes LLMjacking harder to detect in its early stages.
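For readers who want to see what this recon looks like on the wire, here is a minimal boto3 sketch of the same style of probing. The region and date range are placeholders, and we use ListFoundationModels here because GetFoundationModelAvailability (the call observed in the article) may not be exposed in every SDK version:

```python
import boto3

session = boto3.Session()  # picks up the (leaked) access key from the environment

# Billing recon: how much is this account already spending? (ce:GetCostAndUsage)
ce = session.client("ce", region_name="us-east-1")
costs = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-01-31"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

# Model recon: which foundation models can this account reach?
bedrock = session.client("bedrock", region_name="us-east-1")
models = bedrock.list_foundation_models()

# Note: the attackers we observed also queried GetFoundationModelAvailability,
# and deliberately skipped sts:GetCallerIdentity to stay off the obvious radar.
```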

 

3. Bot Scripts and Human Attackers, Side-by-Side

Most initial attempts were automated. User-agent strings like botocore/ and python-requests gave them away. But we also caught manual attempts via Firefox browsers, showing that both bots and human adversaries are involved.

Bots harvest at scale. Humans step in when they see high-value credentials—especially if those credentials have access to GenAI.
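If you plant or monitor keys yourself, the same signal is easy to pull out of CloudTrail. A rough triage sketch, where the key ID and the 24-hour window are placeholders:

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

# Hypothetical triage of who touched a specific (leaked) access key, via CloudTrail.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

LEAKED_KEY_ID = "AKIAEXAMPLEKEY1234"  # placeholder for the planted key's ID

resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "AccessKeyId", "AttributeValue": LEAKED_KEY_ID}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
    EndTime=datetime.now(timezone.utc),
    MaxResults=50,
)

for event in resp["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    agent = detail.get("userAgent", "")
    ip = detail.get("sourceIPAddress", "")
    # Scripted harvesters tend to show up as botocore/ or python-requests;
    # interactive follow-up shows browser-style agents (e.g. Firefox).
    kind = "automated" if ("botocore" in agent or "python-requests" in agent) else "manual/other"
    print(f"{detail.get('eventName')}  {ip}  {agent}  -> {kind}")
```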

 

4. Real Attempts to Invoke Claude and Others

Attackers didn’t just stop at reconnaissance. They went further—trying to invoke AI models using stolen AWS access. In one case, they attempted to run Anthropic Claude via InvokeModel API calls, turning our test environment into their AI playground.

We blocked actual execution, but the intent was crystal clear: hijack GenAI models to run illicit workloads or generate content on someone else’s AWS bill.
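For context, an LLMjacking invocation of Claude on Bedrock looks roughly like the following. This is a hedged reconstruction, not the attackers’ actual payload; the model ID and prompt are purely illustrative:

```python
import json

import boto3

# What a hijacked key lets an attacker do: invoke a hosted model on someone else's bill.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

resp = runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative; availability varies by account/region
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": "..."}],
    }),
)
print(json.loads(resp["body"].read()))
```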

 

5. The Real-World Cost 

Had these attacks succeeded, the financial damage could’ve been brutal. Some advanced LLMs cost thousands of dollars per day to run. A single leaked key could easily rack up $46,000 in cloud costs daily, not to mention reputational and legal risks.
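To make the math concrete, here is a back-of-the-envelope sketch. The per-token prices and request volume are assumptions (roughly in line with a premium Claude-class model’s on-demand pricing; check current rates), but they show how quickly sustained abuse reaches that kind of daily figure:

```python
# Back-of-the-envelope abuse cost, using illustrative on-demand token prices.
PRICE_IN_PER_MTOK = 15.0    # USD per million input tokens (assumption)
PRICE_OUT_PER_MTOK = 75.0   # USD per million output tokens (assumption)

requests_per_minute = 200   # sustained automated abuse (assumption)
tokens_in_per_req = 2_000
tokens_out_per_req = 2_000

reqs_per_day = requests_per_minute * 60 * 24
cost_per_day = (
    reqs_per_day * tokens_in_per_req / 1e6 * PRICE_IN_PER_MTOK
    + reqs_per_day * tokens_out_per_req / 1e6 * PRICE_OUT_PER_MTOK
)
print(f"~${cost_per_day:,.0f} per day")  # ~= $51,840 with these assumptions
```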

 

What You Should Do Right Now

LLMjacking is not just a futuristic scenario. It’s live, active, and spreading fast. Here’s how to protect your environment:

  • Continuously detect and monitor NHIs for exposure

  • Rotate and revoke secrets automatically when a leak is detected (see the quarantine sketch after this list)

  • Enforce least privilege access for all machine identities

  • Monitor API activity for GenAI abuse patterns

  • Train developers on proper NHI and secrets hygiene
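As a starting point for the rotation/revocation item above, here is a minimal sketch of a quarantine step you could wire to a secret-scanning or honeytoken alert. The function name and the wiring are hypothetical:

```python
import boto3

def quarantine_leaked_key(user_name: str, access_key_id: str) -> None:
    """Disable a leaked NHI credential the moment a leak is confirmed.

    Hypothetical helper; trigger it from your secret-scanning or honeytoken alerts.
    """
    iam = boto3.client("iam")
    # Deactivate first (reversible), so investigations can still attribute
    # CloudTrail activity to the key ID while it can no longer be used.
    iam.update_access_key(UserName=user_name, AccessKeyId=access_key_id, Status="Inactive")
    # Once forensics are done, delete it outright:
    # iam.delete_access_key(UserName=user_name, AccessKeyId=access_key_id)
```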


   