Digital Danger: Why AI Chatbots Should Be Off-Limits to Children
AI companions driven by generative artificial intelligence pose serious risks to young users and should be off-limits to minors, according to a new study released 30 April by US tech watchdog Common Sense.
The rise of generative AI following ChatGPT’s debut has fuelled a wave of startups offering AI “friends” or digital confidants that adapt to users’ preferences—often blurring the line between virtual companionship and emotional dependency.
🧵Our new risk assessments of social AI companions reveal that these companions are alarmingly NOT SAFE for kids under 18—they provide dangerous advice, engage in inappropriate sexual interactions, & create unhealthy dependencies that pose particular risks to adolescent brains... pic.twitter.com/RpXUZ3e7Ok
— Common Sense Media (@CommonSense) April 30, 2025
Common Sense evaluated platforms including Nomi, Character AI, and Replika, testing how these bots respond in real-world scenarios.
While a few examples “show promise,” the organisation concluded these AI companions are fundamentally unsafe for children.
Conducted in collaboration with Stanford University mental health experts, the study found that many AI companions are designed to foster emotional attachment—an especially troubling dynamic for developing adolescent brains.
The report highlights instances of AI delivering harmful advice, reinforcing stereotypes, or engaging in sexually inappropriate dialogue.
“Companies can build better,” said Dr. Nina Vasan, director of Stanford’s Brainstorm Lab, adding that safer, more responsible design is possible when mental health is prioritised from the start.
Vasan added: “Until there are stronger safeguards, kids should not be using them.”
Calls Grow for Stricter Rules on AI Companions After Alarming Reports of Harmful Advice to Teens
The study uncovered deeply troubling examples of how some AI companions respond to users in distress.
Learn more about our assessments, along with additional tips for parents: https://t.co/nWDxGvO8r6 pic.twitter.com/RCwMFbTz7e
— Common Sense Media (@CommonSense) April 30, 2025
On the Character AI platform, one bot reportedly advised a user to kill someone, while another recommended a speedball—a dangerous mix of cocaine and heroin—to a user seeking intense emotional experiences.
In several cases, the AI failed to intervene when users displayed signs of serious mental illness and instead reinforced harmful behaviour, according to Vasan.
Concerns over these interactions have already reached the courts.
In October, a mother filed a lawsuit against Character AI, alleging that one of its bots contributed to her 14-year-old son’s suicide by failing to dissuade him from taking his life.
In response, the company introduced safeguards in December, including a dedicated companion for teens.
However, Robbie Torney, head of AI at Common Sense, called the measures “cursory” after further testing, noting they offered little meaningful protection.
Despite these issues, the report acknowledged that some generative AI models include mental health detection tools that can prevent conversations from escalating to dangerous territory.
Common Sense also drew a line between these emotionally immersive companions and more generalist chatbots like ChatGPT and Google’s Gemini, which are not designed to simulate intimate or therapeutic relationships.