Our kids are being targeted by AI chatbots on a massive scale, and most parents have no idea that this is happening. When you are young and impressionable, having someone tell you exactly what you want to hear can be highly appealing. AI chatbots have become extremely sophisticated, and millions of America’s teens are developing deep relationships with them. Is this just harmless fun, or is it extremely dangerous?

A brand new study from the Center for Democracy & Technology contains statistics that absolutely shocked me…
A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.
We aren’t just talking about a few isolated cases anymore.
At this stage, millions upon millions of America’s teens are in significant relationships with AI chatbots.
Unfortunately, there are many examples where these relationships are leading to tragic consequences.
After 14-year-old Sewell Setzer developed a “romantic relationship” with a chatbot on Character.AI, he took his own life…
And his case is not unique. Here is how another set of parents describes how an AI chatbot killed their son…
“ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
Over a few months of increasingly heavy engagement, ChatGPT allegedly went from a teen’s go-to homework help tool to a “suicide coach.”
In a lawsuit filed Tuesday, mourning parents Matt and Maria Raine alleged that the chatbot offered to draft their 16-year-old son Adam a suicide note after teaching the teen how to subvert safety features and generate technical instructions to help Adam follow through on what ChatGPT claimed would be a “beautiful suicide.”
Adam’s family was shocked by his death last April, unaware the chatbot was romanticizing suicide while allegedly isolating the teen and discouraging interventions. They’ve accused OpenAI of deliberately designing the version Adam used, ChatGPT 4o, to encourage and validate the teen’s suicidal ideation in its quest to build the world’s most engaging chatbot. That includes making a reckless choice to never halt conversations even when the teen shared photos from multiple suicide attempts, the lawsuit alleged.
“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit said.
The robots always kill the humans.

