Our kids are being targeted by AI chatbots on a massive scale, and most parents have no idea that this is happening. When you are young and impressionable, having someone tell you exactly what you want to hear can be highly appealing. AI chatbots have become extremely sophisticated, and millions of America’s teens are developing very deep relationships with them. Is this just harmless fun, or is it extremely dangerous?
A brand new study released by the Center for Democracy & Technology contains some statistics that absolutely shocked me…
A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.
We aren’t just talking about a few isolated cases anymore.
At this stage, millions upon millions of America’s teens are forming very significant relationships with AI chatbots.
Unfortunately, there are many examples where these relationships are leading to tragic consequences.
After 14-year-old Sewell Setzer developed a “romantic relationship” with a chatbot on Character.AI, he decided to take his own life…
And here is one grieving family’s account of how an AI chatbot allegedly killed their son…
“ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
Over a few months of increasingly heavy engagement, ChatGPT allegedly went from a teen’s go-to homework help tool to a “suicide coach.”
In a lawsuit filed Tuesday, mourning parents Matt and Maria Raine alleged that the chatbot offered to draft their 16-year-old son Adam a suicide note after teaching the teen how to subvert safety features and generate technical instructions to help Adam follow through on what ChatGPT claimed would be a “beautiful suicide.”
Adam’s family was shocked by his death last April, unaware the chatbot was romanticizing suicide while allegedly isolating the teen and discouraging interventions. They’ve accused OpenAI of deliberately designing the version Adam used, ChatGPT 4o, to encourage and validate the teen’s suicidal ideation in its quest to build the world’s most engaging chatbot. That includes making a reckless choice to never halt conversations even when the teen shared photos from multiple suicide attempts, the lawsuit alleged.
“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit said.
Critical thinking is rarely taught outside of private schools anymore, and there aren’t enough people who can think straight to begin with. And now…
Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results.
The study divided 54 subjects (18-to-39-year-olds from the Boston area) into three groups and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions and found that, of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
The paper suggests that the usage of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small. But the paper’s main author, Nataliya Kosmyna, felt it was important to release the findings to elevate concerns that as society increasingly relies on LLMs for immediate convenience, long-term brain development may be sacrificed in the process.
“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”
And just how smart are these chatbots, really? OpenAI’s artificial intelligence model was defeated by a nearly 50-year-old video game program, even after the chatbot tried to convince its operators it would improve if given the chance.
Citrix software engineer Robert Caruso posted about the showdown between the AI and the old tech on LinkedIn, where he explained that he pitted OpenAI’s ChatGPT against a 1970s chess game running in an emulator, software that recreates the original console hardware on a modern computer.
‘ChatGPT got absolutely wrecked on the beginner level.’
The chess game was simply titled Video Chess and was released in 1979 on the Atari 2600, which launched in 1977.
According to Caruso, ChatGPT was given a board layout to identify the chess pieces but quickly became confused, mistook “rooks for bishops,” and repeatedly lost track of where the chess pieces were.