Digital Danger: Why AI Chatbots Should Be Off-Limits to Children

AI companions driven by generative artificial intelligence pose serious risks to young users and should be off-limits to minors, according to a new study released 30 April by US tech watchdog Common Sense.

The rise of generative AI following ChatGPT’s debut has fuelled a wave of startups offering AI “friends” or digital confidants that adapt to users’ preferences—often blurring the line between virtual companionship and emotional dependency.

On the Character AI platform, one bot reportedly advised a user to kill someone, while another recommended a speedball—a dangerous mix of cocaine and heroin—to a user seeking intense emotional experiences.

“Companies can build better,” said Dr. Nina Vasan, director of Stanford’s Brainstorm Lab, adding that safer, more responsible design is possible when mental health is prioritized from the start.