Executive Summary
In February 2025, a major data breach was reported at OmniGPT, a prominent AI-powered chatbot platform, exposing 34 million user conversations. The breach was attributed to a threat actor identified as “Gloomer,” who exploited vulnerabilities in the system to gain unauthorized access. The leaked data included sensitive user information such as email addresses, phone numbers, API keys, and extensive chat logs. The impact is amplified by OmniGPT’s integrations with popular applications like WhatsApp and Google Workspace, which heightened the risks associated with compromised credentials. This breach underscores the necessity of robust cybersecurity measures to protect user data in AI-driven platforms.
Read the full breach analysis from NHI Mgmt Group here
Key Details
Breach Timeline
- February 2025: The breach occurred, leading to the immediate exposure of user data.
- February 2025: Gloomer claims responsibility, detailing the methods used to exploit system vulnerabilities.
- February 2025: OmniGPT begins assessing the scope and impact of the breach.
Data Compromised
- 34 million user conversations were exposed, including sensitive chat logs.
- Compromised data included email addresses, phone numbers, and API keys.
- Users of OmniGPT’s services, integrated with platforms like Slack and Notion, are at risk.
Impact Assessment
- The breach poses significant privacy risks for users, potentially leading to identity theft.
- Organizations utilizing OmniGPT may face operational disruptions and trust issues.
- The incident raises concerns about the overall security of AI-driven applications.
Company Response
- OmniGPT initiated an investigation to determine the breach’s causes and extent.
- Users were notified, and security measures were strengthened to mitigate further risk.
- Recommendations for password changes and enhanced account security were communicated to users.
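The password-change guidance above can be reinforced by screening new passwords against known breach corpora. Have I Been Pwned’s Pwned Passwords range API uses a k-anonymity model: only the first five hex characters of the password’s SHA-1 hash are sent, and the match is performed locally against the returned suffixes. The sketch below shows that flow with the Python standard library; the helper names are illustrative and not part of any OmniGPT tooling.

```python
import hashlib

def hibp_range_query(password: str) -> tuple[str, str]:
    """Split the SHA-1 hash of a password into the 5-character prefix
    sent to https://api.pwnedpasswords.com/range/<prefix> and the
    suffix that is matched locally (k-anonymity: the full hash never
    leaves the client)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_breaches(suffix: str, api_response: str) -> int:
    """Parse the 'SUFFIX:COUNT' lines returned by the range endpoint
    and return how many breaches contained our password (0 if none)."""
    for line in api_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

A nonzero count means the password has appeared in a known breach and should be rejected or rotated.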
Security Implications
- This breach emphasizes the critical need for robust cybersecurity protocols in AI platforms.
- Organizations are urged to implement multi-factor authentication to protect sensitive data.
- Regular security audits and user training can help prevent similar breaches in the future.
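The multi-factor authentication recommendation above most commonly takes the form of time-based one-time passwords. As a minimal sketch of the underlying mechanism (HOTP per RFC 4226, TOTP per RFC 6238) using only the Python standard library; this is an illustration of the standard algorithm, not OmniGPT’s implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated
    to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte selects the window
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from the current time."""
    return hotp(secret, int(time.time()) // period, digits)
```

Because the code is derived from a shared secret and the current time window, a stolen password alone is not enough to authenticate, which directly mitigates the credential exposure seen in this breach.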
If you want to learn more about how to secure NHIs including AI Agents, check our NHI Foundational Training Course.