OpenAI Cracks Down on Chinese Surveillance Operations Using ChatGPT
OpenAI has taken action against accounts linked to Chinese surveillance and disinformation campaigns that misused its AI tools, including ChatGPT. The company identified and banned users involved in two malicious operations, codenamed 'Peer Review' and 'Sponsored Discontent'.

According to OpenAI's February 2025 update, the 'Peer Review' campaign exploited ChatGPT to create promotional content and modify code for social media monitoring tools. These tools were allegedly designed to track protest activity in Western countries and report it to Chinese security services in real time.

The investigation also uncovered a separate 'Sponsored Discontent' operation where accounts generated anti-American content in English and Spanish. This campaign targeted Latin American countries, particularly Peru, Mexico, and Ecuador, in what appears to be an attempt to stir unrest in the region.

This development follows a pattern of Chinese state-backed actors leveraging AI technology for influence operations. In late 2024, authorities discovered a similar campaign that flooded U.S. voters with AI-generated images and videos containing misleading information.

The incident highlights growing concerns about AI technology being weaponized for surveillance and political manipulation. While OpenAI confirmed the bans, the company noted it found no evidence of the surveillance-related content actually appearing on social media platforms.

This enforcement action is part of OpenAI's broader effort to prevent the misuse of its AI models for activities that could compromise privacy or spread disinformation.