In February 2025, a significant data breach involving OmniGPT, a widely used AI-powered chatbot platform, was reported. A threat actor known as “Gloomer” claimed responsibility, alleging the exposure of sensitive user information, including email addresses, phone numbers, API keys, and extensive chat logs. This report details the breach, examines the vulnerabilities that may have been exploited, assesses the impact, and offers recommendations to prevent similar incidents in the future.
Background on OmniGPT
OmniGPT serves as an AI aggregator, providing users with access to multiple AI models such as ChatGPT-4, Claude 3.5, Gemini, and DeepSeek. The platform integrates seamlessly with various workplace applications, including WhatsApp, Slack, Google Workspace, and Notion, enhancing productivity and collaboration. Its widespread adoption has made it a repository of vast amounts of user-generated content and sensitive data.

Details of the Breach
The breach was first reported on February 11, 2025, when “Gloomer” posted an offer of the stolen data on a notorious forum for illicit data trading. The hacker claimed to have accessed and leaked:
- Personally Identifiable Information (PII) – Approximately 30,000 user email addresses and phone numbers.
- Chat Logs – Over 34 million lines of user conversations with various AI models.
- API Keys and Credentials – Sensitive information potentially allowing unauthorized access to linked services.
- File Links – URLs to files uploaded by users on the platform.
Samples of the leaked data were made available publicly, with the full dataset offered for sale on the dark web. The exposure of such a vast amount of data raises significant concerns about user privacy and platform security.
Cause of the Breach
While the exact method of intrusion remains under investigation, several potential vulnerabilities could have been exploited:
- Inadequate API Security – Weak authentication mechanisms or lack of proper input validation may have allowed unauthorized access to backend systems (a hardened-check sketch follows this list).
- Insufficient Data Encryption – Storing sensitive data without robust encryption could have made it easier for attackers to extract information.
- Vulnerable Third-Party Integrations – Compromises in integrated services such as Slack or Google Workspace might have served as entry points.
- Weak Access Controls – Failure to implement the principle of least privilege could have enabled attackers to escalate privileges and access sensitive data.
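To make the first of these bullets concrete, the sketch below shows what a hardened API-key check can look like. It is purely illustrative and assumes nothing about OmniGPT's actual stack or the real intrusion path, neither of which is public: keys are stored only as digests and compared in constant time.

```python
# Hypothetical sketch: OmniGPT's real stack and intrusion path are not public.
import hashlib
import hmac

# Hypothetical server-side store holding SHA-256 digests of issued keys,
# never the keys themselves.
VALID_KEY_HASHES: set[str] = {
    hashlib.sha256(b"demo-key-issued-earlier").hexdigest(),
}

def is_authorized(presented_key: str) -> bool:
    """Hash the presented key and compare digests in constant time."""
    digest = hashlib.sha256(presented_key.encode()).hexdigest()
    # hmac.compare_digest avoids the timing side channel that a plain
    # `==` comparison can leak to an attacker probing key validity.
    return any(hmac.compare_digest(digest, h) for h in VALID_KEY_HASHES)

assert is_authorized("demo-key-issued-earlier")
assert not is_authorized("guessed-key")
```

A plain equality check against plaintext keys fails on both counts: a leaked table exposes usable keys, and the comparison's timing can leak information.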
Impact Assessment
- User Privacy Compromise – Exposure of PII and personal conversations can lead to identity theft, phishing attacks, and other malicious activities.
- Credential Leakage – Access to API keys and other credentials may allow attackers to infiltrate connected services, posing further security risks.
- Reputation Damage – Users may lose confidence in AI platforms’ ability to safeguard their data, potentially leading to decreased adoption and engagement.
- Regulatory Consequences – The breach could draw the attention of data protection authorities, potentially resulting in fines, legal actions, and stricter compliance requirements for OmniGPT.
Recommendations
- API Key Management – Enforce strict controls over API key generation, storage, and usage (see the key-issuance sketch after this list).
- Input Validation – Implement strict input validation to prevent injection attacks, ensuring all user inputs are sanitized and validated before processing (see the validation sketch after this list).
- Regular Security Assessments – Conduct periodic penetration testing and vulnerability assessments to identify and address potential security gaps.
- Robust Data Encryption – Ensure all sensitive data, both at rest and in transit, is encrypted using industry-standard protocols (see the encryption sketch after this list).
- Use Token-Based Authentication – Implement OAuth 2.0, OpenID Connect (OIDC), and JWTs for secure, short-lived API access tokens (see the token sketch after this list).
- Database Access Restrictions – Enforce strict role-based access control (RBAC) and least-privilege principles (see the RBAC sketch after this list).
- Monitor Model Behavior for Anomalies – Deploy monitoring systems that detect unusual AI interactions or unauthorized API queries (see the rate-check sketch after this list).
- Vendor Security Assessments – Regularly audit third-party integrations (e.g., Slack, Google Workspace, Notion) for vulnerabilities.
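On API key management, a minimal sketch using only Python's standard library (a real deployment would add scoping, rotation, and revocation) is to issue high-entropy keys and persist only their digests:

```python
import hashlib
import secrets

def issue_api_key() -> tuple[str, str]:
    """Return (key shown to the user exactly once, digest persisted server-side)."""
    key = secrets.token_urlsafe(32)  # ~256 bits of entropy
    key_hash = hashlib.sha256(key.encode()).hexdigest()
    return key, key_hash

user_key, stored_hash = issue_api_key()
# Persist only stored_hash; a database leak then exposes no usable credentials.
```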
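On input validation, a sketch using Pydantic (v2-style `Field` constraints; the field names and limits are illustrative, not OmniGPT's schema) declares the rules once and rejects malformed requests at the boundary:

```python
from pydantic import BaseModel, Field, ValidationError

class ChatRequest(BaseModel):
    user_email: str = Field(pattern=r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
    prompt: str = Field(min_length=1, max_length=8_000)
    target_model: str = Field(pattern=r"^[a-z0-9.-]+$")  # allow-list style id

try:
    ChatRequest(user_email="not-an-email", prompt="", target_model="gpt-4")
except ValidationError as exc:
    print(exc)  # two fields fail; the request never reaches backend systems
```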
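On encryption at rest, a sketch using the `cryptography` package's Fernet recipe, which pairs AES with HMAC authentication (TLS covers the in-transit half of the recommendation):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production, load from a KMS or secret manager
cipher = Fernet(key)

token = cipher.encrypt(b"user: summarize our Q3 revenue projections")
assert cipher.decrypt(token) == b"user: summarize our Q3 revenue projections"
```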
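On token-based authentication, a sketch using the PyJWT library; HS256 and the 15-minute lifetime are illustrative choices, and in practice an OAuth 2.0 / OIDC provider would mint these tokens:

```python
import datetime
import jwt  # PyJWT

SECRET = "load-from-a-secret-manager"  # hypothetical; never hard-code in production

def mint_access_token(user_id: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": user_id,
        "iat": now,
        "exp": now + datetime.timedelta(minutes=15),  # short-lived by design
        "scope": "chat:read chat:write",
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_access_token(token: str) -> dict:
    # Pinning the algorithm list blocks "alg" confusion/downgrade tricks;
    # jwt.decode also rejects expired tokens automatically.
    return jwt.decode(token, SECRET, algorithms=["HS256"])
```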
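On access control, a minimal RBAC sketch (the roles and permission strings are hypothetical) that makes least privilege explicit at each operation:

```python
from functools import wraps

ROLE_PERMISSIONS = {
    "viewer": {"chat:read"},
    "member": {"chat:read", "chat:write"},
    "admin":  {"chat:read", "chat:write", "users:manage"},
}

def require(permission: str):
    """Decorator enforcing least privilege per operation."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(role: str, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(role, set()):
                raise PermissionError(f"role {role!r} lacks {permission!r}")
            return fn(role, *args, **kwargs)
        return wrapper
    return decorator

@require("users:manage")
def delete_user(role: str, user_id: str) -> None:
    print(f"deleted {user_id}")

delete_user("admin", "u-123")      # permitted
# delete_user("member", "u-123")   # raises PermissionError
```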
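Finally, on anomaly monitoring, even a simple sliding-window rate check per API key (the thresholds here are placeholders to be tuned against real baselines) would flag the kind of bulk chat-log extraction alleged in this breach:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100  # placeholder budget; tune against observed per-key baselines

_recent: dict[str, deque] = defaultdict(deque)

def looks_anomalous(api_key_id: str) -> bool:
    """Record one request and flag keys exceeding the per-window budget."""
    now = time.monotonic()
    window = _recent[api_key_id]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS  # e.g., bulk scraping of chat logs
```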
Conclusion
The OmniGPT breach serves as a clear example of how poor management of non-human identities (NHIs) such as API keys, tokens, and service credentials can lead to devastating consequences. As AI platforms continue to expand and integrate with numerous third-party services, the associated security risks will grow accordingly. Securing NHIs, enforcing stronger access controls, and enhancing monitoring mechanisms are vital to protecting AI systems from similar breaches in the future.