OpenAI Battles Rising Misuse of ChatGPT by Malicious Groups
Since ChatGPT launched in late 2022, various groups have misused its capabilities for harmful ends. OpenAI recently exposed and dismantled 10 separate malicious campaigns, four of them linked to Chinese groups that used ChatGPT for influence operations, social engineering, and surveillance across platforms such as TikTok, X, Reddit, and Facebook.
One major campaign, called “Sneer Review,” used AI to generate fake debates and internal planning documents, while other Chinese-linked networks impersonated journalists to gather intelligence. Russian, Iranian, and Southeast Asian actors also exploited ChatGPT for election interference, spam, and recruitment scams.
OpenAI stresses that access to advanced AI does not by itself make manipulation more effective, but the company continues to improve its detection and enforcement, suspending accounts that spread misinformation or political propaganda. It acknowledges the dual nature of AI: a remarkable innovation whose risks demand ongoing vigilance.