
**OpenAI Flags Chinese Operatives Misusing ChatGPT for Mass Surveillance**
*By Mudit Dube | Oct 08, 2025*
OpenAI has recently flagged the misuse of its AI chatbot, ChatGPT, by suspected Chinese government operatives. The company revealed that these users attempted to build tools for large-scale monitoring of data collected from various social media platforms.
One notable case involved a user who was banned after trying to use ChatGPT to create promotional materials and project plans for an AI-powered social media listening tool intended for a government client.
### Surveillance Tool “Probe” Targets Extremist Speech and Political Content
The tool, referred to as a social media “probe,” was designed to monitor platforms such as X, Facebook, Instagram, Reddit, TikTok, and YouTube. Its purpose was to identify extremist speech along with ethnic, religious, and political content.
In another incident, an account suspected to be linked to a government entity was banned after using ChatGPT to draft a proposal for a “High-Risk Uyghur-Related Inflow Warning Model.” This model aimed to analyze transport bookings against police records to track travel movements of the Uyghur community.
### OpenAI’s Stance on Misuse and Access
OpenAI noted that some of these activities appeared to facilitate large-scale monitoring of both online and offline traffic. The company emphasized the importance of maintaining vigilance to prevent authoritarian abuses stemming from such technologies.
Notably, OpenAI’s models are not officially available in China; the company suspects these users accessed ChatGPT through VPNs to bypass regional restrictions.
### Russian Hackers Exploit ChatGPT for Malware Creation
Aside from surveillance misuse, OpenAI also reported that Russian hackers have exploited its AI models to create and refine malware, including remote access trojans and credential stealers. Persistent threat actors have reportedly adapted their tactics to mask telltale indicators of AI-generated content.
Despite these concerns, OpenAI found no evidence that its models have provided threat actors with novel offensive capabilities or new tactics.
### Usage Trends: Scam Detection Outpaces Scam Creation
Even so, OpenAI highlighted that ChatGPT is predominantly used to identify scams rather than create them: the company estimates the tool is employed for scam detection up to three times as often as for scam creation.
Since launching its public threat reporting in February 2024, OpenAI has disrupted and reported more than 40 networks violating its usage policies.
—
OpenAI continues to monitor and address the misuse of its AI technologies while promoting responsible use across the globe.
https://www.newsbytesapp.com/news/science/chinese-government-operatives-misusing-chatgpt-for-surveillance-openai/story