Stolen ChatGPT Credentials Found for Sale on Dark Web
Over 200,000 compromised ChatGPT credentials were found on dark web marketplaces between 2022 and 2023. Learn about the details of the threat and the implications of the incident.

- Cybersecurity researchers have identified compromised credentials of ChatGPT accounts that were being traded on dark web marketplaces between 2022 and 2023.
- The credentials were accessed through stealer-infected devices, with most of the leaked data being extracted from the Asia Pacific region.
Over 200,000 sets of OpenAI ChatGPT credentials were found up for sale on dark web marketplaces in 2023, according to a threat intelligence report by researchers at the cybersecurity firm Group-IB. The accounts were reportedly compromised by information stealer malware such as RedLine and LummaC2 running on infected devices.
According to the report, leaked ChatGPT credentials increased by 36% between June and October 2023 compared with the period from January to May 2023. LummaC2 was the stealer most commonly behind these leaks. Targets in the Asia Pacific region were the most common, followed by Europe and the Middle East.
In addition, malicious ChatGPT-like tools emerged during this period, including WormGPT, WolfGPT, DarkBARD, and FraudGPT, which were used to craft social engineering and phishing lures that helped spread infostealer malware.
The development highlights the risks AI tools pose to sensitive organizational data. Employees often enter confidential information or proprietary code into chatbots for work purposes, and compromised accounts can give bad actors access to that material.
Reports have suggested that over 4% of employees enter sensitive data into AI chatbots, including business information, personally identifiable information, and source code. This underscores the need for organizations to enforce best practices, including controls on copy-paste actions and group-level security controls for AI interactions; a minimal sketch of such a check follows.
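As an illustration only, the sketch below shows one way such a guardrail might look: a hypothetical pre-submission check (not described in the Group-IB report) that scans an employee's prompt for obvious indicators of sensitive data before it is forwarded to a chatbot. The pattern names and regular expressions are assumptions for demonstration; a production data-loss-prevention policy would be far more extensive.

```python
import re

# Illustrative patterns only; real DLP rules would cover many more data types.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key_like": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    # Hypothetical employee prompt containing an email address and a key-like string.
    prompt = "Summarize this: contact jane.doe@example.com, key AKIAIOSFODNN7EXAMPLE"
    findings = flag_sensitive(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
    else:
        print("Prompt passed the pre-submission check")
```

In practice, a check like this would sit in a browser extension, proxy, or API gateway so that prompts are screened before they ever leave the corporate network.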
What do you think about data security concerns with AI tech? Let us know your thoughts on LinkedIn, X, or Facebook. We’d love to hear from you!