OpenAI has revealed a surge in clandestine use of its tools, particularly by groups linked to China that are exploiting its ChatGPT platform for malicious purposes.
Since ChatGPT launched to immediate success in November 2022, its ability to generate human-like text, audio, images, and video has raised concerns about potential misuse. OpenAI has regularly published reports on malicious activity on its platforms, such as fake content generated for websites.
OpenAI dismantles 10 separate campaigns abusing its tools
In a report published on Thursday, the San Francisco-based firm revealed that it recently dismantled ten separate campaigns abusing its AI tools. Of these, four were orchestrated by groups linked to China, while smaller-scale schemes were traced to other countries.
According to Ben Nimmo, principal investigator on OpenAI’s investigations team, these China-linked operations deployed a wide range of tactics across multiple online venues.
“We’re observing an expanding toolkit,” he told reporters.
“Some campaigns blended influence operations, social engineering and surveillance, and they spanned platforms from TikTok and X to Reddit and Facebook.”
One key example, dubbed “Sneer Review,” used ChatGPT to churn out short posts and comments in English, Chinese, and Urdu. These covered subjects such as the dissolution of the US Agency for International Development, alternately applauding and criticizing the decision, and a Taiwanese strategy video game, which the operation cast as an attack on China’s ruling party.
According to OpenAI’s report, in most cases the operation created not only original posts but also replies to them, manufacturing the appearance of genuine debate. The same network also produced a long-form article claiming widespread public backlash against the game.
The “Sneer Review” operation also used ChatGPT to draft internal documents and performance reviews detailing every step of the campaign’s execution. OpenAI’s analysts found that the network’s actual social media behavior closely matched those self-assessments, underlining how AI can streamline both front-line influence work and back-office management.
The report also described another China-linked network focused on intelligence gathering, whose operators posed as journalists and geopolitical commentators. They used the chatbot to write account biographies on X, translate communications between Chinese and English, and even craft messages directed at a US senator concerning a federal nomination.
Groups from other countries also misused OpenAI’s tools
In addition to Chinese operations, the report notes that Russian and Iranian actors also tried to use ChatGPT for election-related influence, echoing concerns about generative AI’s role in shaping public opinion.
In the Philippines, a commercial marketing outfit was linked to a spam campaign, while a recruitment scam with ties to Cambodia also surfaced. The company also flagged an employment scheme serving North Korean interests.
These findings follow a February report in which OpenAI uncovered a China-affiliated surveillance operation. That campaign reportedly monitored Western protests in real time and supplied summaries to Chinese security agencies, with ChatGPT facilitating everything from debugging code to drafting sales pitches for the monitoring software.
Nimmo noted that most of these operations were detected and stopped early, even though they appeared to use sophisticated tools.
“Advanced AI doesn’t necessarily translate to more effective outcomes,” he observed. Indeed, OpenAI’s regular threat reports suggest that while generative models can accelerate content creation, they do not guarantee genuine influence or widespread traction.
In response to these findings, OpenAI has continued to refine its monitoring and enforcement mechanisms. The firm bans accounts linked to the detected operations and removes those involved in creating malware, automated political messaging, or misleading content.
In its latest round of takedowns, OpenAI removed accounts that posted about geopolitical controversies involving China, including false allegations against activists in Pakistan and commentary on Trump’s tariff policies.
Despite these successes, the company acknowledges the double-edged nature of generative AI: a tool for innovation that can equally serve disinformation.