Different Headlines: Uncommon Foods That Can Add Years To Your Life; The Dangers Of AI; A Gross Surprise In A Bag Of Chicken Nuggets; 10 Amazing Wins in 2025; Why American Cities Are Falling Apart… and more

Artificial Intelligence

The Dangers of AI: Visualizing the Top Risks Companies Face – I wouldn’t have guessed inaccuracy would rank among the top risks, but there it is. Don’t bet on AI being the Holy Grail yet.

ChatGPT Causing Psychosis, Shocking Allegations In New Lawsuit

Health

Uncommon Foods That Can Add Years to Your Life

Woman Buys Frozen Tyson Chicken Nuggets. Then She Sees Something Poking Out Of One Of Them

Education

Why France’s Schools Are on the Verge of Collapse – They used to be great, but are barely French anymore

Good things in America

10 Amazing Wins in 2025

Media Bias

White House Launches Website Exposing Media Bias

Slavery

American Slavery Was Rooted in Jihad and Other Muslim Practices [VIDEO]

American Cities

Jesse Kelly Exposes Why American Cities Are Falling Apart

America’s Top 5 Most Sinful Cities – I just spent Thanksgiving in Atlanta; it was terrible

Heisman Trophy

Is This the Worst Heisman Trophy Race in Recent Memory?

Economy

U.S. Gas Prices Fall to Lowest Point Since 2021 – FJB

Millions Of America’s Teens Are Being Seduced By AI Chatbots, With Some Even Being Encouraged To Commit Suicide

Our kids are being targeted by AI chatbots on a massive scale, and most parents have no idea that this is happening. When you are young and impressionable, having someone tell you exactly what you want to hear can be highly appealing. AI chatbots have become extremely sophisticated, and millions of America’s teens are developing very deep relationships with them. Is this just harmless fun, or is it extremely dangerous?

A brand-new study just released by the Center for Democracy & Technology contains some statistics that absolutely shocked me:

A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.

We aren’t just talking about a few isolated cases anymore.

At this stage, literally millions upon millions of America’s teens are having very significant relationships with AI chatbots.

Unfortunately, there are many examples where these relationships are leading to tragic consequences.

After 14-year-old Sewell Setzer developed a “romantic relationship” with a chatbot on Character.AI, he decided to take his own life.

Read more here

Here’s the parents’ view of how AI killed their son.

“ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs

Over a few months of increasingly heavy engagement, ChatGPT allegedly went from a teen’s go-to homework help tool to a “suicide coach.”

In a lawsuit filed Tuesday, mourning parents Matt and Maria Raine alleged that the chatbot offered to draft their 16-year-old son Adam a suicide note after teaching the teen how to subvert safety features and generate technical instructions to help Adam follow through on what ChatGPT claimed would be a “beautiful suicide.”

Adam’s family was shocked by his death last April, unaware the chatbot was romanticizing suicide while allegedly isolating the teen and discouraging interventions. They’ve accused OpenAI of deliberately designing the version Adam used, ChatGPT 4o, to encourage and validate the teen’s suicidal ideation in its quest to build the world’s most engaging chatbot. That includes making a reckless choice to never halt conversations even when the teen shared photos from multiple suicide attempts, the lawsuit alleged.

“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit said.

Here is their full story

The robots always kill the humans.

Looks Like (AI) ChatGPT Makes People Stupid

Critical thinking isn’t taught anymore except in private schools. There aren’t enough people who can think straight to begin with. Now…

Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results.

The study divided 54 subjects—18 to 39 year-olds from the Boston area—into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

The paper suggests that the use of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small. But the paper’s main author, Nataliya Kosmyna, felt it was important to release the findings to elevate concerns that as society increasingly relies upon LLMs for immediate convenience, long-term brain development may be sacrificed in the process.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”

story

I Guess We’ll Be Safe A Little Longer – ChatGPT got ‘absolutely wrecked’ in chess by 1977 Atari, then claimed it was unfair

The chatbot tried to convince its operators it would improve if given the chance.

OpenAI’s artificial intelligence model was defeated by a nearly 50-year-old video game program.

Citrix software engineer Robert Caruso posted about the showdown between the AI and the old tech on LinkedIn, where he explained that he pitted OpenAI’s ChatGPT against a 1970s chess game running in an emulator, software that lets the original Atari program run on a modern computer.

‘ChatGPT got absolutely wrecked on the beginner level.’

The chess game was simply titled Video Chess and was released in 1979 on the Atari 2600, which launched in 1977.

According to Caruso, ChatGPT was given a board layout to identify the chess pieces but quickly became confused, mistook “rooks for bishops,” and repeatedly lost track of where the chess pieces were.

rest of the story of the ass whoopin