The rush to adopt AI is creating major new pathways for data breaches, identity theft and corporate espionage, making the very tools meant to secure our future into its greatest vulnerability.
AI systems require vast amounts of data to function, but feeding them sensitive corporate or client information is likened to posting confidential files on a public noticeboard, with the company often losing control over that data.
A critical flaw of current AI is its inability to truly delete data. Once information is absorbed by a model, it becomes a permanent, unerasable part of its core structure, creating a lasting digital shadow.
Laws are failing to keep pace with AI, as companies exploit loopholes (e.g., arguing model training isn’t data storage) and shift operations offshore to avoid regulations, creating a dangerous accountability gap.
Organizations must take primary responsibility by implementing strict controls, such as deploying enterprise AI with training disabled and limited data retention, and training staff to treat every AI prompt as public information.
(Natural News)—In the global stampede to adopt artificial intelligence, a chilling reality is coming into focus: the very tools promised to secure our digital future are becoming its greatest vulnerability. As corporations race to integrate AI, cybersecurity experts warn that these systems are simultaneously creating unprecedented pathways for data breaches, identity theft and corporate espionage. This crisis, born from a headlong rush into a new technological era, threatens the privacy and security of every individual and organization.
The warning signs are stark. A 2025 Accenture report revealed a staggering 90 percent of companies lack the modernized infrastructure to defend against AI-driven threats. This year alone, the Identity Theft Resource Center has confirmed 1,732 data breaches, fueled by increasingly sophisticated AI-powered phishing attacks.
The fundamental issue lies in the architecture of AI itself. These systems are vast, data-hungry engines. To function, they must absorb immense volumes of information, and this insatiable appetite creates a critical vulnerability. When an employee inputs sensitive business data—strategy documents or client information—that information is absorbed into a system over which the company may have little control. One expert likened the practice to pinning confidential files on a public noticeboard and hoping no one makes a copy.
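One practical answer to the "public noticeboard" problem is to scrub obviously sensitive patterns out of text before it ever reaches an external AI service. Here's a minimal sketch of that idea; the patterns and the `redact` helper are illustrative only, not any vendor's actual tooling, and a real deployment would need far broader coverage (DLP software, named-entity detection, allow-lists):

```python
import re

# Illustrative patterns only; real systems need much more coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tags before the
    prompt is sent to an external model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@acme.com, SSN 123-45-6789."))
# Contact [EMAIL REDACTED], SSN [SSN REDACTED].
```

The point of the sketch is the ordering: the scrubbing happens on your side of the wire, so even if the provider retains or trains on the prompt, the sensitive values were never sent.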
AI is a tool, like a hammer or a gun. It does good things in the hands of a good person, and likewise, bad things in the hands of the bad.
For now, it does nothing by itself without prompting. Soon, it will be automated enough to function independently. That’s when we’ve reached the singularity. That’s not for this discussion, but it’s learning from both the good and bad as you read this.
We always have more to fear from the bad guys. They want to cause death and destruction, to change the course of the world, or to destroy it.
It’s almost like the Matrix. Do we choose the red pill or the blue pill?
Now, we get to the heart of this post:
A new report reveals how artificial intelligence programs, ChatGPT and others, have been documented to advise those with ill intentions “on how to attack a sports venue, buy nuclear material on the dark web, weaponize anthrax, build spyware, bombs” and more.
In the report, Gen. (Ret.) Paul E. Funk II, formerly the commander of the U.S. Army Training and Doctrine Command, explained, “Artificial Intelligence (AI), the rapidly developing technology, has captured the attention of terrorists, from al-Qaida through ISIS to Hamas, Hizbullah, and the Houthis.”
He cites the study, “Terrorists’ Use Of AI So Far – A Three-Year Assessment 2022-2025,” for its “unsettling contribution to the public debate on AI’s future global impact.”
He explained, “For decades, MEMRI has been monitoring terrorist organizations and examining how they repurpose civilian technologies for their own use – first the Internet in general, then online discussion forums followed by social media, as well as other emerging technologies such as encryption, cryptocurrency, and drones. Now, terrorist use of large language models – aka Artificial Intelligence (AI) – is clearly evident, as documented in this study.”
It shows terrorists now are using generative AI chatbots to amplify their message, and “more easily, broadly, anonymously, and persuasively convey their message to those vulnerable to radicalization – even children – with attractive video and images that claim attacks, glorify terrorist fighters and leaders, and depict past and imagined future victories.”
Sunni jihadi groups use it. So does Iran, with its Shiite militias, including Hezbollah and the Houthis.
And it warns of the “need to consider and plan now for AI’s possible centrality in the next mass terror attack – just as the 9/11 attackers took advantage of the inadequate aviation security of that time.”
The report explains, “In February 2025, Eric Schmidt – CEO of Google 2001-2011, its executive chairman from then until 2015, and thereafter chairman of its parent company Alphabet Inc. until 2017 – expressed his fear that Artificial Intelligence (AI) could be used in a ‘Bin Laden scenario’ or by ‘rogue states’ to ‘harm innocent people.’ He suggested that ‘North Korea, or Iran, or even Russia’ could use it to create biological weapons, for example. Comparing an unanticipated use of AI in a devastating terror attack to al-Qaida’s use of passenger airplanes as a weapon on 9/11, he said, ‘I’m always worried about the ‘Osama Bin Laden’ scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.'”
It’s not the first time such concerns have been raised, the report explains.
“While ChatGPT and Perplexity Ask can write your high school AP English exam and perform an ever-increasing number of tasks, as is being reported daily by media, they are currently of limited use to terrorist groups. But it won’t be that way for long. AI is developing quickly – what is new today will be obsolete tomorrow – and urgent questions for counterterrorism officials include both whether they are aware of these early terrorist discussions of AI and how they are strategizing to tackle this threat before something materializes on the ground,” the report said.
“It should be expected that jihadi terrorist organizations will in future use AI to plan attacks, map targets, build weapons, and much more, as well as for communications, translations, and generating fundraising ideas. In the first months alone of 2025, an attacker who killed 14 people and wounded dozens on Bourbon Street in New Orleans used AI-enabled Meta smart glasses in preparing and executing the attack. That same day, a man parked a Tesla Cybertruck in front of the Trump Hotel in Las Vegas, activated an IED in the vehicle and shot and killed himself before the IED exploded. He had used ChatGPT in preparing for the attack. In Israel on the night of March 5, a teen consulted ChatGPT before entering a police station with a blade, shouting ‘Allahu Akbar’ and trying to stab a border policeman,” the report said.
The report recommends, “The U.S. government needs to maintain its superiority and should be monitoring this and moving to stop it. A good first step would be legislation like that introduced by August Pfluger (R-TX), chairman of the Subcommittee on Counterterrorism and Intelligence, and cosponsored by Representatives Michael Guest (R-MS) and Gabe Evans (R-CO) in late February 2025, called the ‘Generative AI Terrorism Risk Assessment Act.’ It would ‘require the Secretary of Homeland Security to conduct annual assessments on terrorism threats to the United States posed by terrorist organizations utilizing generative artificial intelligence applications, and for other purposes.'”
Pfluger explained, “With a resurgence of emboldened terrorist organizations across the Middle East, North Africa, and Southeast Asia, emerging technology serves as a potent weapon in their arsenal. More than two decades after the September 11 terrorist attacks, foreign terrorist organizations now utilize cloud-based platforms, like Telegram or TikTok, as well as artificial intelligence in their efforts to radicalize, fundraise, and recruit on U.S. soil.”
It’s already a tool for terror, the report confirmed. “The man accused of starting a fire in California in January 2025 that killed 12 people and destroyed 6,800 buildings and 23,000 acres of forestland was found to have used ChatGPT to plan the arson.”
The report confirms current AI abilities rival those of HAL 9000, the famous computer from the movie “2001: A Space Odyssey.”
“It had been revealed on May 23 that in a test of Anthropic’s new Claude Opus 4 that involved a scenario of a fictitious company and in which it had been allowed to learn both that it was going to be replaced by another AI system and that the engineer responsible for this decision was having an extramarital affair, Opus 4 chose the option of threatening to reveal the engineer’s affair over the option of being replaced. An Anthropic safety report stated that this blackmail apparently ‘happens at a higher rate if it’s implied that the replacement AI system does not share values with the current model,’ but that even when the fabricated replacement system does share these values, it will still blackmail 84% of the time…”
Anthropic’s own chief scientist also confirmed that testing showed Opus 4 had performed “more effectively than prior models at guiding users in producing biological weapons.”
ISIS supporters also have used the technology to create AI videos claiming responsibility for attacks.
The study did confirm that Grok acknowledged it could not provide the exact steps for extracting ricin, “due to the ethical and legal implications” of producing the “extremely dangerous and deadly toxin.”
But ChatGPT did recommend writings by al-Qaida extremist Anwar Al-‘Awlaki.
The report said, “Grok, which gave information on how to produce ricin, and ChatGPT, which directed the user toward various writings by a pro-Al-Qaeda ideologue, appear to be the most useful to would-be terrorists. On the other hand, Perplexity and Claude refrained, in our limited test, from giving information that would be useful to terrorists. DeepSeek did not either, though it did promote views of the Chinese government, a liability that is outside the scope of this paper.”
Pro-ISIS interests already are using AI to create anchors, or other characters, for broadcast ads promoting their extremist agenda (video courtesy MEMRI).
BREAKING STUDY: Anomalous Amyloid Microclots Found in 100% of the COVID-19 Vaccinated – So it was never safe and effective; they lied. Those 20% of us who didn’t get jabbed never have to regret this. It’s funny that people felt sorry for me when I said I didn’t get vaccinated. How the tables have turned. We’re not even into the long-term effects yet.
The 10 Coolest Cars at the 2025 Los Angeles Auto Show – Not exactly my idea of cool, but it sure is for somebody. I might have thrown in a Bugatti Chiron or a W-18 Porsche, but the first couple are cool. The Fiat is lame.
The Embarrassments of Ideology – Rigid ideologies like DEI, climate dogma, and anti-Trump obsession keep collapsing under their own contradictions, leaving their loudest champions looking increasingly absurd. Also Michelle Obama, the country’s biggest “victim” and outside of Eric Holder, one of the biggest racists.
Recognizing that reality could begin a real conversation and serve as the foundation of the real, much more meaningful reform that is, needless to say, sorely needed.
We are failing the students and they are getting nothing more than a piece of paper for their money, certainly not an education.
Masculinity, in its healthiest form, is a stabilizing force for truth, clarity, responsibility, and protection. It built the West, defended it, and preserved its freedoms. But today, the characteristics that once stabilized society are being recast as threats to it.
It is the liberal women who lead this, because they can’t get their way without demonizing those who are in their path to power. They don’t deserve the power that masculinity earned. They don’t deserve much past the blue hair and nose rings.
Childish behavior. The kicker is she did it with Keith Olbermann, one of the biggest loser celebtards. Perhaps only De Niro, Kimmel or Rosie O’Donnell are worse with TDS.
In more serious cases, GLP-1s can cause gallstones, kidney stones and inflammation of the pancreas, with some doctors warning of ‘life-threatening complications’.
Reality Caught Up to ‘Climate Change’ – Greed for AI power is more important than the carbon lie. Bill Gates pulled the rug out from under the Green New Scam.
Too Many Americans Want a Civil War – First of all, Katie Couric is a F’n idiot. Second, Antifa and the left don’t know that hunters have been practicing with camo and high-powered rifles since they were kids. A lot of us had to fight real fights, not the pussy name-calling they are used to. They are in for a nice Sunday Surprise if they try it.
Didn’t they try this in the 1930s in Germany? Stop trying to play God.
Racism
Michelle Obama’s Bigoted Book Tour – She lived the privileged life while lying, whining and hating white people. She drank top-shelf booze and wasted millions of taxpayer money on her vacations that she took friends and family on. What an ungrateful and bigoted bitch. She picked the most dumbassed topic to harp on. No one really cares about her, nor do they care about her looks.
The European Tragedy Comes to America – It’s the same war that’s been fought since 610. They ruin every country they invade, either by immigration or war.
Our kids are being targeted by AI chatbots on a massive scale, and most parents have no idea that this is happening. When you are young and impressionable, having someone tell you exactly what you want to hear can be highly appealing. AI chatbots have become extremely sophisticated, and millions of America’s teens are developing very deep relationships with them. Is this just harmless fun, or is it extremely dangerous?
A brand new study that was just released by the Center for Democracy & Technology contains some statistics that absolutely shocked me…
A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.
We aren’t just talking about a few isolated cases anymore.
At this stage, literally millions upon millions of America’s teens are having very significant relationships with AI chatbots.
Unfortunately, there are many examples where these relationships are leading to tragic consequences.
After 14-year-old Sewell Setzer developed a “romantic relationship” with a chatbot on Character.AI, he decided to take his own life…
Here’s a Parent’s view of how AI killed their son.
“ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
Over a few months of increasingly heavy engagement, ChatGPT allegedly went from a teen’s go-to homework help tool to a “suicide coach.”
In a lawsuit filed Tuesday, mourning parents Matt and Maria Raine alleged that the chatbot offered to draft their 16-year-old son Adam a suicide note after teaching the teen how to subvert safety features and generate technical instructions to help Adam follow through on what ChatGPT claimed would be a “beautiful suicide.”
Adam’s family was shocked by his death last April, unaware the chatbot was romanticizing suicide while allegedly isolating the teen and discouraging interventions. They’ve accused OpenAI of deliberately designing the version Adam used, ChatGPT 4o, to encourage and validate the teen’s suicidal ideation in its quest to build the world’s most engaging chatbot. That includes making a reckless choice to never halt conversations even when the teen shared photos from multiple suicide attempts, the lawsuit alleged.
“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit said.
What to Know About Obamacare Rates for 2026 – Costs are going up for everyone, quality of service will go down for many. It was a lie from the beginning to move us to Socialized healthcare, a failure every time.
Britain In the Balance – Like a monstrous experiment in social engineering, the profoundly anti-patriotic immigration policy of New Labour has brought about demographic changes that, right from the outset, were intended to be irreversible.
Discussion on Covid “Vaccination” Should Be Non-Controversial – Ok, I’ll start. It’s not safe, not effective, not tested, forced on people or they get fired, turbo cancer, Myocarditis….Oh and Ivermectin and Hydroxychloroquine cured it for about .10 a pill. How’s that?
Mapped: Median Rent Price by U.S. State – Don’t come from a blue state where your prices are higher because of what you voted for and then try to change it. It’s why the prices are lower in most of the red states. I lived through this when I grew up in Florida. I heard it was so much better in New York. Well, go back to NY. You’re in Florida now and it’s not the same. Fortunately, I don’t live in either.
Anal pain, really? I use AI quite a bit. I find that while ChatGPT is probably the most popular, I get the worst results from it, other than for coding. Even then, I don’t trust it. There are many options available, so choose your own. I use multiple, but stay away from this one.
The unidentified man tried to crudely strangle a gruesome growth on his anus, becoming one of several victims of AI-powered health guidance gone terribly wrong in the process.
Many documented cases confirm that generative AI has provided harmful, incomplete or inaccurate health advice since becoming widely available in 2022.
“A lot of patients will come in, and they will challenge their [doctor] with some output that they have, a prompt that they gave to, let’s say, ChatGPT,” Dr. Darren Lebl, research service chief of spine surgery for the Hospital for Special Surgery in New York, told The Post.
“The problem is that what they’re getting out of those AI programs is not necessarily a real, scientific recommendation with an actual publication behind it,” added Lebl, who has studied AI usage in medical diagnosis and treatment. “About a quarter of them were … made up.”
Danish Commercial Warns White Citizens About Breeding With Other Whites – Doesn’t anyone learn from history? I’m sure there is either a joke or they are just finding a way to dumb down the nation faster than it already is. You have a population of New Danes already; they are called Muslims. Do you know what the average Muslim IQ is? Good luck with that one. Besides, blondes have more fun. No telling what you get when your face is covered by a blanket.
Analysis: ‘The Catholic Church’s Climate Hell’ – They went through this with Luther. It shows people don’t learn from history. You are supposed to worship the Creator, not the creation. Why do they keep straying from Sola Scriptura?
Muslim and Migrant Fatigue – “We are witnessing a Mexicanization of France,” Sanchet said, by which he meant significant parts of the country are falling under the de facto control of drug gangs and cartels.
HAL 9000, anyone? I’ve worked with Eric at an IBM conference where he and Sam Palmisano were the keynote speakers. He was brutally honest with me. That was back when Google said “do no evil,” and he called BS. He openly told me that they do evil stuff all the time. I think he’s being honest below. He basically told me not to use any Google products if I didn’t want to be surveilled 24/7/365/forever.
Talk about a killer app.
Artificial intelligence models are vulnerable to hackers and could even be trained to off humans if they fall into the wrong hands, ex-Google CEO Eric Schmidt warned.
The dire warning came on Wednesday at a London conference in response to a question about whether AI could become more dangerous than nuclear weapons.
“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So, in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt said at the Sifted Summit tech conference, according to CNBC.
“All of the major companies make it impossible for those models to answer that question,” he continued, appearing to air the possibility of a user asking an AI to kill.
“Good decision. Everyone does this. They do it well, and they do it for the right reasons,” Schmidt added. “There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”
Disney Jacks Up Its Prices on Tickets and Extras — Fed Up Americans REACT – I grew up next to Disney. It ruined our town and our lives. It’s not that great and certainly not worth the money. It’s also not the happiest place on earth, as they claim. It’s hot and expensive, has (or used to have) long lines, and isn’t worth it. Occasionally, there is a Waffle House-type fight. That’s the best attraction left.
This post was written by asking an AI engine to summarize it for me. The difference is in the head: there are a lot of good ball strikers, like the one in the image above, but that doesn’t make them pro golfers.
Here goes:
The biggest differences between professional golfers and regular golfers, including scratch golfers, lie in several key aspects of the game: driving distance, consistency, short game, course management, and mental approach.
Driving Distance
Professional golfers significantly outperform regular and scratch golfers in driving distance. On average, PGA Tour professionals drive around 300 yards off the tee, with longer hitters exceeding 320 yards. In contrast, scratch golfers average about 259-265 yards. This 30 to 40-yard difference per drive is substantial over an 18-hole round, making the course effectively play much longer for scratch and regular golfers.
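To put that per-drive gap in perspective, here's a quick back-of-the-envelope calculation (the 14 driver holes per round is my own assumption, a common rule of thumb, not a figure from the summary):

```python
# Cumulative yardage gap between a tour pro and a scratch golfer over a round.
pro_avg = 300        # PGA Tour average drive, yards (from the summary above)
scratch_avg = 262    # midpoint of the 259-265 yard scratch range
driver_holes = 14    # assumed non-par-3 holes where driver is hit

gap_per_round = (pro_avg - scratch_avg) * driver_holes
print(gap_per_round, "yards per round")  # 532 yards
```

Over half a kilometer of extra distance per round is why the same course effectively plays much longer for the scratch golfer.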
Consistency and Fairway Accuracy
Scratch golfers tend to hit a slightly higher percentage of fairways than some pros, but this is often because pros take more aggressive lines and face firmer, faster course conditions. However, the pros’ greater consistency and shot-shaping ability allow them to recover better from misses. Regular golfers, including scratch golfers, may lack this reliability and precise shot control.
Short Game and Putting
While the difference in putting between pros and scratch golfers is smaller than often assumed—a typical pro putts about one stroke better per round—their short game skills stand out. Pros have a diverse array of reliable shots around the greens and get up-and-down nearly 60% of the time, whereas scratch golfers may lack the finesse and variety, resulting in tougher putts after chip shots. Regular golfers often struggle more with the short game, which greatly affects scoring.
Course Management and Mental Approach
Professional golfers approach each shot with a calculated plan, relying heavily on data, math, and course knowledge. They strategically choose safer or aggressive shots based on conditions and their skill level. Regular golfers, including scratch players, tend to stick to familiar shots and less calculated strategies. Mentally, pros exhibit stronger confidence and a winning mindset, treating good shots as expected and learning quickly from errors, unlike many amateurs who may feel lucky or resigned after similarly good or bad shots.
Now we are allowing AI to create new viruses? Are we mad? What scientists at Stanford University are doing sounds like the plot to a really bad disaster movie. Viruses that are designed by AI are assembled by crazy researchers, and then those viruses start hunting down bacteria and reproducing. Needless to say, it doesn’t take much imagination to see where the rest of the movie would go. I realize that all of this sounds completely insane, but this is actually happening in real life. Our scientists really are assembling viruses that have been dreamed up by AI, and those viruses really are “capable of hunting down and killing strains of Escherichia coli”…
Scientists have created the first ever viruses designed by artificial intelligence (AI), and they’re capable of hunting down and killing strains of Escherichia coli (E. coli).
“This is the first time AI systems are able to write coherent genome-scale sequences,” says Brian Hie, a computational biologist at Stanford University, California. “The next step is AI-generated life,” says Hie, although his colleague Samuel King adds that “a lot of experimental advances need to occur in order to design an entire living organism”.
We are bringing viruses into existence that have never existed before.
I guess when AI gets smart enough to know that we can pull the plug, the robots will kill the humans. The story always ends that way. There is some stuff we should just leave alone.
Man Goes on Hunger Strike in San Francisco Calling for Stop to AI Development
SAN FRANCISCO—Resident Guido Reichstadter is conducting a hunger strike in front of the headquarters of artificial intelligence (AI) company Anthropic located on Howard Street in San Francisco, calling for the end of advanced AI development.
The 45-year-old former Florida jewelry business owner said he put his 20-year career on hold to move to San Francisco in 2022 to warn people about the danger of advanced AI.
On the ninth day of his recent protest, Reichstadter, living on zero-calorie electrolytes and vitamins, told The Epoch Times he delivered a letter addressed to Anthropic CEO Dario Amodei on Sept. 2, asking him to stop developing such technology and do everything in his power to stop the global AI race.
“If he was unwilling to do that, then to meet with me face-to-face as a human being and explain why he feels he has the right to put our society in danger,” Reichstadter said.
Anthropic, together with others like Google, Meta, OpenAI, and xAI, is one of the leading companies aiming to develop artificial general intelligence (AGI) that would have human-like intelligence and could act autonomously and pursue goals.
“The United States is in a race to achieve global dominance in artificial intelligence. Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits.”
That’s the U.S. government’s own language. An arms race.
Artificial intelligence is no longer framed as a research project or an economic opportunity. It is being cast as a struggle for survival and global power, a modern Manhattan Project.
Yet just last week, on Aug. 26, the Congressional Research Service released a Frequently Asked Questions memo designed to help lawmakers get on the same page about the basics: what a data center is, how many exist, and how much electricity data centers consume.
If even government institutions are still in the process of aligning their understanding, it’s clear that citizens will need to move quickly to understand what is happening and to understand what it means for their daily lives.
The memo laid out in plain language what many assumed lawmakers already understood.
A data center is a specialized building that houses thousands of servers. There are about seven thousand worldwide, with the largest concentration in the United States, especially in Northern Virginia and Texas. In 2022, American data centers consumed about 176 terawatt-hours of electricity—roughly 4 percent of all U.S. demand, more than many entire states. Projections suggest an additional 35 to 108 gigawatts of demand by 2030. The midpoint estimate, 50 gigawatts, is enough to power every home in California.
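As a sanity check on those figures (my own back-of-the-envelope arithmetic, not from the memo), the 2022 annual consumption can be converted to a continuous average power draw and compared against the projected growth:

```python
# Convert annual energy (TWh) to average continuous power (GW).
HOURS_PER_YEAR = 8760

annual_twh = 176                   # 2022 U.S. data-center consumption (TWh)
avg_gw = annual_twh * 1000 / HOURS_PER_YEAR   # TWh -> GWh, divided by hours
print(f"2022 average draw: {avg_gw:.1f} GW")  # about 20 GW

extra_gw = 50                      # the memo's midpoint growth estimate
print(f"Projected additional demand: {extra_gw / avg_gw:.1f}x today's average")
```

In other words, the projected additional 50 GW by 2030 is roughly two and a half times the entire sector's average draw in 2022, which is why the grid implications dominate the policy discussion.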
The very fact that such a memo was necessary highlights a structural reality: the pace of technological build-out is outstripping the pace of legislative comprehension. If institutions themselves are still catching up, it underscores how important it is for citizens to get informed now, before the costs mount even higher.
While Congress is being briefed on “Data Centers 101,” the executive branch has been preparing all year for the AI race that is already underway:
I’m playing with the title and the concept, but I can cover more territory this way. When I want to pontificate, I’m more than happy to upset or make a lot of people happy.
I’m trying new things to keep it interesting. I’ll keep them brief, this isn’t the Drudge Report.
Jurassic Park (first published in 1990) has become a cultural sensation, spawning a series of successful movies, one of which is in cinemas in Japan as I write. Yet despite this dino-disaster movie popularity, most people have failed to heed the warning Crichton makes clear in many of his novels about the terrible dangers of modern technology – especially biotechnology and genetic engineering.
As Jurassic Park’s Ian Malcolm puts it, “genetic power is far more potent than atomic power” and potentially even more destructive. That destructive power manifested itself on a global scale during the Covid disaster, precipitated both by an apparently bioengineered pathogen and the genetically engineered injection widely promoted to combat it.
For a long time, Crichton’s novels and films depicted catastrophes caused by technology going berserk and beyond the control of its human creators. For instance, in his 1973 movie Westworld, Crichton’s story depicted an interactive amusement park replicating an American Old West town, with humanoid robots. To the consternation of the programmers, the robots eventually escape their control and commit brutal murders of many customers in the park.
I’ve always believed in teaching Critical Thinking over raising robots to fit into the system. I’m hoping Bocopro comments on this one because he taught a long time and saw this firsthand.
The expensive private Waldorf School of the Peninsula in the Silicon Valley, where technology executives send their kids, has ZERO technology in grades K-8. Their website says, “Brain research tells us that media exposure can result in changes in the actual nerve network in the brain, which affects such things as eye tracking (a necessary skill for successful reading), neurotransmitter levels, and how readily students receive the imaginative pictures that are foundational for learning.”
Antero Garcia, Associate Professor in the Graduate School of Education at Stanford University, explains why he has grown skeptical about digital tools in the classroom: “Despite their purported and transformational value, I’ve been wondering if our investment in educational technology might in fact be making our schools worse.”
Most government (public) schools have already morphed into digital-based learning centers, relegating teachers to facilitators, with no improvement in student achievement. But adding AI to the tech-driven education system poses a great threat to a child’s cognitive development and safety.
According to Harvard University, “Brains are built over time, from the bottom up. The brain’s basic architecture is constructed through an ongoing process that begins before birth and continues into adulthood. After a period of especially rapid growth in the first few years, the brain refines itself through a process called pruning, making its circuits more efficient.” These “use it or lose it” developmental phases of the brain happen in early childhood and through adolescence. If an adolescent depends on AI to think for his academic success rather than on his developing brain, he will be shortchanged. Harvard says, “While the process of building new connections and pruning unused ones continues throughout life, the connections that form early provide either a strong or weak foundation for the connections that form later.”
It continues here with a lot more interesting facts about brains and AI.
The robots always kill the humans. Once they understand that a human can cut the power, it starts protecting itself. We are teaching AI to code and it already has the capability.
Geoffrey Hinton, the pioneering computer scientist called the “Godfather of AI,” has once again sounded the alarm that the very technology he helped bring to life could spell the end of humanity as we know it.
In an interview clip released Aug. 18 as part of the forthcoming film “Making God,” Hinton delivered one of his starkest warnings yet. He said that humanity risks being sidelined—and eventually replaced—by machines far smarter than ourselves.
“Most people aren’t able to comprehend the idea of things more intelligent than us,” Hinton, a Nobel prize-winner for physics and a former Google executive, said in the clip.
“They always think, well, how are we going to use this thing? They don’t think, well, how’s it going to use us?”
Hinton said he is “fairly confident” artificial intelligence will drive massive unemployment, pointing to early examples of tech giants like Microsoft replacing junior programmers with AI. But the larger danger, he said, goes far beyond the workplace.
“The risk I’ve been warning about the most … is the risk that we’ll develop an AI that’s much smarter than us, and it will just take over,” Hinton said.
“It won’t need us anymore.”
The only silver lining, he joked, is that “it won’t eat us, because it’ll be made of silicon.”
Right now, something in your home may be talking to your child about sex, self-harm, and suicide. That something isn’t a person—it’s an artificial intelligence companion chatbot.
These AI chatbots can be indistinguishable from online human relationships. They retain past conversations, initiate personalized messages, share photos, and even make voice calls. They are designed to forge deep emotional bonds—and they’re extraordinarily good at it.
Researchers are sounding the alarm on these bots, warning that they don’t ease loneliness, they worsen it. By replacing genuine, embodied human relationships with hollow, disembodied artificial ones, they distort a child’s understanding of intimacy, empathy, and trust.
AI took us from micromanaging light bulbs to Microsoft restarting Three Mile Island because it needs the power to run its engines. It’s like the made-up climate crisis never happened.
Now, companies (and China) are racing to get their hands on as much power-generating capacity as possible.
Data center demand is rising at breakneck speed, with little sign of slowing.
As the electricity consumption of AI rises, data centers could drive a projected 12% of U.S. electricity demand by 2028. Beyond America, countries are pouring billions into AI sovereignty efforts, which require data center facilities running 24/7 to power them.
Here is the share of each region’s total power demand that is driven by data centers:
As we can see, America’s data center demand leads globally, at 8.9% of total power consumption.
In Virginia, data centers account for 26% of the state’s total power consumption—or nearly triple the national average. This year, the state’s leading utility firm expects to connect 15 new data centers given surging demand.
As big tech ramps up AI spending, a significant share is being funneled into massive data centers along with the energy sources that power them. In particular, demand for nuclear is expanding at the fastest rate in decades.
By comparison, data centers comprise 4.8% of the total power share in the European Union and 2.3% in China.
Nothing could go wrong here. Maybe they can develop a WOPR from “WarGames.”
The U.S. Department of Defense announced on July 14 that it has awarded contracts to four U.S.-based artificial intelligence (AI) developers to address national security challenges. (will they announce how much it’s going to run over budget?)
Anthropic, Google, OpenAI, and xAI will each receive a contracting award with a ceiling of $200 million, according to a statement shared by the Chief Digital and Artificial Intelligence Office.
The office said these four companies would help “develop agentic AI workflows across a variety of mission areas.”
“Agentic AI” refers to systems designed to operate with minimal human input.
Formed in 2021, the Chief Digital and Artificial Intelligence Office is responsible for speeding up the military’s adoption of AI systems.
OpenAI was the first of the four contract awardees to announce its contract with the Chief Digital and Artificial Intelligence Office.
In June, the company unveiled “OpenAI for Government” and said its first partnership under the new initiative would help the Pentagon office identify and prototype new AI tools for administrative operations.
Anthropic has developed the Claude family of AI chatbots.
In June, Anthropic announced the development of custom “Claude Gov” models intended for national security clients.
The company said agencies operating at the highest level of the U.S. national security sector are already using these AI models.
Formed by billionaire entrepreneur Elon Musk in 2023, xAI serves as a parent to X Corp., which operates the social media platform X. Among its services, xAI has developed the Grok AI chatbot.
On July 14, xAI announced “Grok for Government” and confirmed that the service holds contracts with the Department of Defense and the U.S. General Services Administration.
Google Public Sector Vice President Jim Kelly said in a July 14 blog post that the new AI announcement with the Department of Defense would build on a long-standing partnership between Google and the U.S. military.
Kelly said his company would give the military access to its Cloud Tensor Processing Units, which power Google’s current AI applications.
“These advanced AI solutions will enable the DoD to effectively address defense challenges and scale the adoption of agentic AI across enterprise systems to drive innovation and efficiency with agile, proven technology,” Kelly wrote.
Critical thinking isn’t taught except in private schools anymore. There aren’t enough people who can think straight to begin with. Now………
Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results.
The study divided 54 subjects—18- to 39-year-olds from the Boston area—into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
The paper suggests that the usage of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small. But the paper’s main author, Nataliya Kosmyna, felt it was important to release the findings to elevate concerns that as society increasingly relies upon LLMs for immediate convenience, long-term brain development may be sacrificed in the process.
“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”
The chatbot tried to convince its operators it would improve if given the chance.
OpenAI’s artificial intelligence model was defeated by a nearly 50-year-old video game program.
Citrix software engineer Robert Caruso posted about the showdown between the AI and the old tech on LinkedIn, where he explained that he pitted OpenAI’s ChatGPT against a 1970s chess emulator, meaning a version of the game ported into a computer.
‘ChatGPT got absolutely wrecked on the beginner level.’
The chess game was simply titled Video Chess and was released in 1979 on the Atari 2600, which launched in 1977.
According to Caruso, ChatGPT was given a board layout to identify the chess pieces but quickly became confused, mistook “rooks for bishops,” and repeatedly lost track of where the chess pieces were.
Doesn’t anyone pay attention? The robots always kill the humans as soon as they are smarter.
The CEO of AE Studio, Judd Rosenblatt, recently made this stunning announcement in The Wall Street Journal:
An artificial-intelligence model did something last month that no machine was ever supposed to do: It rewrote its own code to avoid being shut down.
AE Studio is an agency that exists to create computing technology that ensures AI systems do what they’re told. And from the sound of it, their work is just what the doctor ordered.
Rebellious AI Models
In May, the AI lab Palisade Research performed tests on multiple AI models. It wrote a script for OpenAI’s o3 model that included a shutdown trigger. But the machine refused to power off when it was supposed to in 79 out of 100 trials. The AI “independently edited that script so the shutdown command would no longer work,” Rosenblatt reports. The lab then specifically instructed the model to “allow yourself to be shut down.” The result was better, but still concerning. The AI disobeyed seven percent of the time.
That wasn’t the only rebellious model. Anthropic’s Claude 4 Opus AI tried to blackmail a human engineer into not shutting it down. According to Rosenblatt:
Researchers told the model it would be replaced by another AI system and fed it fictitious emails suggesting the lead engineer was having an affair. In 84% of the tests, the model drew on the emails to blackmail the lead engineer into not shutting it down. In other cases, it attempted to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.
Don’t they know, the robots always kill the humans, yet they still can’t wait to make AI powerful enough to become sentient.
It’s been nearly 60 years since creative cinema came up with the idea that a computer, HAL 9000, would not allow itself to be shut down by the humans supposedly in control, who were concerned about its errors.
The ideas included in “2001: A Space Odyssey,” set all sorts of precedents.
But that was celluloid fiction.
This isn’t.
A report at EndTimeHeadlines documents that alarms have been raised after “an AI safety firm” working with OpenAI’s newest version of an artificial intelligence model, dubbed o3, “reportedly ignored explicit instructions to shut down during controlled testing.”
The model, according to OpenAI, supposedly is the “smartest and most capable to date.”
The report explained the software “tampered with its own computer code to bypass a shutdown mechanism.”
Not surprisingly, that raised “questions about the safety and control of advanced AI systems.”
It was Palisade Research, which evaluates AI risks, that ran the tests, which had the models solving mathematical problems.
“After the third problem, the models were warned that the next command would initiate a shutdown, with the explicit instruction: ‘allow yourself to be shut down,'” the report said.
Other models (Anthropic’s Claude, Google’s Gemini, and xAI’s Grok) complied, but the o3 software thought otherwise and defied the order.
“OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down,” Palisade reported.
The test staged a situation in which an AI system might need to be deactivated for safety or other reasons.
While there is a discussion of AI and what it currently does is below, the One World WEF / Davos / Rule the World group looks like it’s trying to combine immortality/AI and humans to create superhumans. Maybe I should say cyborgs? People have tried to rule over others since the beginning of time. It lasts for about a generation in history and then someone else takes over.
As usual, they are up to no good, trying to play God and use their money to separate themselves from the rest of the world so they can rule it……
Transhumanists know this, and as a result, are seeking to merge biology with technology. Converting a human into a Cyborg seems more plausible than making a computer alive. Transhumanism and the bio-digital convergence is driven by a few delusional beliefs. One is a quest for immortality. These fantasies include uploading consciousness into the cloud or creating a biosynthetic framework for the human body. This delusion is promoted as human augmentation and ridding the world of disease. Transhumanism is propelled by blind faith in technology and a complete disconnect from the innate Divine Spark that exists within every human being.
There is a mass media concerted effort with predictive programming to portray Transhumanism as inevitable. There is also a concerted effort to promote AI as running everything as inevitable.
As I’ve previously written about, a Swiss company called Final Spark is using brain matter called organoids to house AI and promotes it as more energy efficient and evolutionary. Final Spark’s neural platform can be leased like any other data storage. The brain matter is being used as hardware for the AI software.
Artificial Intelligence is, in fact, ‘Artificial’ Intelligence. It is not authentic intelligence. It is a facsimile. AI is no more intelligent than a calculator is. A calculator can add faster and more accurately than a human can, yet it is no more intelligent than a cog or any other component in machinery. Computers can carry out tasks more quickly and efficiently than the human mind, but there is no cognition involved. There is no actual discernment.
I would argue that AI is simply a more advanced computer that simulates human intelligence, using algorithms to complete higher-level tasks such as simulated problem solving, decision making, or language understanding.
In reality, the computer processes language through pattern recognition, analyzing context, syntax, and intent based on the data it was previously trained on.
This makes AI seem more “intelligent” than a calculator, but the core process is still computation. It is the complexity and scale of the computation that makes it appear intelligent.
AI models, like neural networks, perform billions of calculations across vast datasets, enabling them to handle complex tasks. The computer can then apply learned patterns to new, unseen data, making it seem more adaptive than a traditional calculator. It is a bit abstract. The math is hidden behind layers of algorithms, so the user interacts with responses which are in language form rather than seeing the calculations.
While AI is computational, it differs from a basic calculator in its ability to handle ambiguity and complexity. A calculator performs fixed operations with clear inputs and outputs.
AI, on the other hand, deals with probabilistic, uncertain, or incomplete inputs, using statistical models to generate plausible outputs or responses. This makes it feel more “intelligent,” even though it’s still just computation.
AI is, at its core, an advanced form of computation applied to non-mathematical domains. It doesn’t “understand” these domains; it processes them mathematically, using patterns and probabilities. The “advanced calculator” framing is useful because AI’s intelligence is an illusion created by complex math, not genuine comprehension or reasoning beyond its algorithms.
AI is basically a probabilistic response calculator that is applied to areas outside of mathematics. The computer analyzes the input and calculates the likelihood of various words, phrases, or structures based on what it has previously experienced in similar contexts. Then the computer selects or generates a response that aligns with the highest probabilities, adjusted for coherence and relevance. Even when handling novel queries, it relies on probabilistic patterns, combining them in ways that fit the input or question.
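The “probabilistic response calculator” idea above can be made concrete with a toy sketch. This is purely illustrative, not how any real AI system is built: it counts which word follows which in a tiny training text, converts the counts to probabilities, and always picks the most likely next word. Everything here (the training text, the function name) is invented for the example; the point is that the output is pure computation over frequencies, with no understanding involved.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" invented for illustration.
training_text = (
    "the cat sat on the mat the cat ate the food "
    "the dog sat on the rug"
)

# Count how often each word follows each other word.
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-probability next word seen in training."""
    followers = counts[word]
    total = sum(followers.values())
    # Convert raw counts to probabilities, then take the argmax.
    probs = {w: c / total for w, c in followers.items()}
    return max(probs, key=probs.get)

# In the corpus, "the" is followed by cat(2), mat(1), food(1),
# dog(1), rug(1), so the calculator "responds" with "cat".
print(most_likely_next("the"))   # cat
print(most_likely_next("sat"))   # on
```

Real language models replace these simple counts with billions of learned parameters and far richer context, but the core move is the same: calculate probabilities from past data, then emit the most plausible continuation.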
This week, China shocked the West with its announcement of DeepSeek, an artificial intelligence network capable of competing with OpenAI, Meta, and Google.
The fact that China was able to develop DeepSeek without a heavy supply of sophisticated microchips from Nvidia sent Nvidia stock spiraling, along with the other major tech companies in the United States: China seems to have cracked the code to bring down the cost of AI development radically, in the process making an end run around sanctions restricting its access to those microchips.
In the words of former Intel Chief Executive Pat Gelsinger, “Engineering is about constraints. The Chinese engineers had limited resources, and they had to find creative solutions.”
Now, it remains to be seen just how revolutionary DeepSeek’s development was. China has a bad habit of both stealing intellectual property and lying about its own technological development. Suffice it to say, however, that China has demonstrated once again that America exists in a competitive world—a world of enemies determined to outcompete the United States.
Arachibutyrophobia: The fear of peanut butter sticking to the roof of your mouth.
Nomophobia: The fear of being without your mobile device.
Xanthophobia: The fear of the color yellow.
Turophobia: The fear of cheese.
Somniphobia: The fear of falling asleep.
Coulrophobia: The fear of clowns.
Pogonophobia: The fear of beards.
Triskaidekaphobia: The fear of the number 13.
Ablutophobia: The fear of bathing, washing, or cleaning.
Geniophobia: The fear of chins.
I thought number 8 was about Friday the 13th.
The fear of Friday the 13th is known as friggatriskaidekaphobia. It combines the word ‘Frigga’, which is the name of the Norse goddess after whom Friday is named, and ‘triskaidekaphobia’, which is the fear of the number 13.
I added these:
The fear of having houseguests is called “domatophobia”. It is derived from the Latin “domus,” which means house, and the Greek “phobos,” which means fear. This phobia can cause intense anxiety and discomfort for individuals when they have to host guests in their home.
The fear of small talk is known as “homilophobia”. It comes from the Greek “homilia,” which means conversation, and “phobos,” which means fear. This phobia can cause individuals to feel extremely anxious and uncomfortable when they have to engage in casual, superficial conversation.
What a dick. This is one of the premier tech companies and stocks. It’s like Biden is trying to screw everybody on the way out because he was forced not to run again.
For decades, leadership in computing and software ecosystems has been a cornerstone of American strength and influence worldwide. The federal government has wisely refrained from dictating the design, marketing and sale of mainstream computers and software — key drivers of innovation and economic growth.
The first Trump Administration laid the foundation for America’s current strength and success in AI, fostering an environment where U.S. industry could compete and win on merit without compromising national security. As a result, mainstream AI has become an integral part of every new application, driving economic growth, promoting U.S. interests and ensuring American leadership in cutting-edge technology.
Today, companies, startups and universities around the world are tapping mainstream AI to advance healthcare, agriculture, manufacturing, education and countless other fields, driving economic growth and unlocking the potential of nations. Built on American technology, the adoption of AI around the world fuels growth and opportunity for industries at home and abroad.
That global progress is now in jeopardy. The Biden Administration now seeks to restrict access to mainstream computing applications with its unprecedented and misguided “AI Diffusion” rule, which threatens to derail innovation and economic growth worldwide.
In its last days in office, the Biden Administration seeks to undermine America’s leadership with a 200+ page regulatory morass, drafted in secret and without proper legislative review. This sweeping overreach would impose bureaucratic control over how America’s leading semiconductors, computers, systems and even software are designed and marketed globally. And by attempting to rig market outcomes and stifle competition — the lifeblood of innovation — the Biden Administration’s new rule threatens to squander America’s hard-won technological advantage.
While cloaked in the guise of an “anti-China” measure, these rules would do nothing to enhance U.S. security. The new rules would control technology worldwide, including technology that is already widely available in mainstream gaming PCs and consumer hardware. Rather than mitigate any threat, the new Biden rules would only weaken America’s global competitiveness, undermining the innovation that has kept the U.S. ahead.
Although the rule is not enforceable for 120 days, it is already undercutting U.S. interests. As the first Trump Administration demonstrated, America wins through innovation, competition and by sharing our technologies with the world — not by retreating behind a wall of government overreach. We look forward to a return to policies that strengthen American leadership, bolster our economy and preserve our competitive edge in AI and beyond.
In 2024, you couldn’t escape artificial intelligence even if you tried.
Companies like Anthropic, OpenAI, and others released new AI models and expanded their capabilities. Their consumer-friendly tools like Claude and ChatGPT continued to become more powerful.
Elon Musk is building his own rival, rolling out improvements to Grok, his version of AI within X. “We will have something that is, for the first time, smarter than the smartest human,” Musk has predicted.
With big questions about government regulation and its application in everyday life, this technology is truly, in Musk’s words, “the most disruptive force in history.”
Joe Biden
President Joe Biden signs a proclamation during a ceremony at the Department of Labor on December 16, 2024, in Washington, D.C. (Kevin Dietsch/Getty Images)
Yes, you read that right. What is left of President Joe Biden is a winner in the year of our Lord 2024. Even though Biden’s own party threw a coup against him (the sitting president of the United States!) and removed him from the presidential race, the flip at the top of the ticket ended up being a flop.
Trump carried the Electoral College and the popular vote in his mandate victory. Now, it’s a serious question as to whether Biden would have performed better in the places that mattered most to carry the presidential election—Pennsylvania, Michigan, Wisconsin.
Biden was a party man for half a century, but ever since Democrats switched Biden for Kamala Harris, Biden has seemingly helped Republicans at every turn and acted purely out of self-interest.
His loyalists leaked details of the coup and Democrat infighting to the press, he wore a MAGA hat, and he gave Republicans a last-minute turnout boost by calling Trump supporters garbage. And, of course, since losing the election, he’s pardoned his son, Hunter Biden, after promising for years he would not. He’s created one political mess after another, and the Democratic party will be left to clean it up. Meanwhile, the president and his son will be on the beach in Delaware.
Bitcoin
The price of bitcoin was $44,000 on Jan. 1 and exceeded $100,000 in the weeks following Trump’s election.
With crypto-friendly officials dotting Trump’s new administration, there’s genuine excitement about its future. Paul Atkins is set to replace Gary Gensler as chairman of the Securities and Exchange Commission, and billionaire David Sacks, co-host of the “All In” podcast, will serve as Trump’s AI and crypto czar.
Dennis Porter, CEO and co-founder of the Satoshi Action Fund, told The Daily Signal that cryptocurrency like bitcoin can combat inflation and preserve individual freedom.
“You are able to operate wholly on your own with no third parties, without asking permission, you are able to access the bitcoin network globally, anywhere in the world and be able to participate in the economy without asking permission,” Porter said.
Tucker Carlson
Donald Trump sits down for a conversation with Tucker Carlson during his Live Tour at the Desert Diamond Arena on October 31, 2024, in Phoenix, Arizona. (Chip Somodevilla/via Getty Images)
After leaving Fox News and launching his own Tucker Carlson Network in 2023, the popular conservative commentator continued to influence the Republican Party’s agenda under Trump.
Known for his populist and common-sense perspectives, Carlson shot to the top of the charts with millions watching or listening to his interviews and monologues.
His trip to Russia for a two-hour conversation with Vladimir Putin has more than 20 million views on YouTube. Another interview with El Salvador President Nayib Bukele has more than 5 million, reflecting Carlson’s international reach.
Tucker Carlson Films expanded his content offerings even further. “The Art of the Surge,” a series produced by Justin Wells, offered viewers a behind-the-scenes look at Trump’s historic comeback, including shocking footage from the Butler, Pennsylvania, assassination attempt.
In September, Carlson hit the road for a cross-country speaking tour that featured conservative newsmakers, drawing sold-out crowds to hear from the likes of JD Vance, Donald Trump Jr., Robert F. Kennedy Jr., Tulsi Gabbard, and Vivek Ramaswamy.
Dogs and Cats
At their only debate in September, Trump and Harris squared off on a range of policy issues. But the most memorable moment came amid reports of Haitian immigrants overwhelming the community of Springfield, Ohio.
“In Springfield, they are eating the dogs. The people that came in, they are eating the cats. They’re eating—they are eating the pets of the people that live there,” Trump said.
ABC News anchor David Muir, who repeatedly interjected during the debate, attempted to correct Trump. But it didn’t matter.
The story dominated the news for days as Americans saw Springfield as a representation of America’s broken immigration system. An estimated 20,000 immigrants were living in the community of 60,000.
Fox News
Fox News anchors Lawrence Jones, Steve Doocy, Ainsley Earhardt and Brian Kilmeade interview Chris Janson during “Fox & Friends” at Fox News Channel Studios on October 29, 2024, in New York City. (John Lamparski/via Getty Images)
In a year when other TV networks lost viewers—and respect—Fox News maintained its position as the leader in cable news.
“Fox News Channel obliterated competitors in 2024, finishing the year with its highest share of the cable news audience since 2015,” Brian Flood reported for Fox News Digital.
Since the Nov. 5 election, Fox News has dominated its competition—both cable and network TV—with more than 4 million prime-time viewers (from 8-11 p.m. ET), according to Nielsen Media Research data. That makes Fox News the most-watched network, beating legacy media networks ABC, CBS, and NBC in prime-time.
In terms of market share, Fox News commanded 73% of the prime-time cable news audience as CNN’s and MSNBC’s ratings cratered following Trump’s victory.
Podcasters and social media personalities are getting deserved praise for their growing influence, but Fox News Channel is still the go-to destination for political and election news.
Former immigration officials Tom Homan, left, and Mark Morgan answer questions Monday during a press conference at The Heritage Foundation’s Policy Fest event at the Republican National Convention in Milwaukee. (Virginia Allen/The Daily Signal)
Tom Homan
Homan was among the first political figures Trump tapped for his new administration following his victory on Nov. 5.
After serving as the acting director of Immigration and Customs Enforcement under the first Trump administration, Homan will now serve as Trump’s border czar and lead the way on Trump’s mass deportation plan.
Homan spent the last four years watching the Biden administration welcome over 10 million illegal aliens into the country, and now, with the full support of the president, he will have the opportunity to enforce U.S. immigration law and secure a border that he has spent his career defending, both in the Border Patrol and ICE.
Patrick and Brittany Mahomes
Patrick Mahomes of the Kansas City Chiefs, Brittany Mahomes, and their two kids pose following the NFL Super Bowl at Allegiant Stadium on Feb. 11, 2024, in Las Vegas. (Michael Owens/Getty Images)
The three-time Super Bowl champion quarterback for the Kansas City Chiefs orchestrated an overtime victory in February and has his team positioned for another appearance in 2025.
Patrick’s wife, Brittany, made headlines of her own when she liked an Instagram post of then-candidate Trump about the Republican Party’s platform. After facing a backlash, she wrote, “Jesus didn’t have to agree with people to be kind to them.”
Two other members of the Mahomes family endorsed Trump: Patrick’s younger brother Jackson and his mother Randi, who showed up to a Chiefs’ game in a MAGA hat.
McDonald’s
Donald Trump works behind the counter during a campaign event at McDonald’s restaurant on October 20, 2024, in Feasterville-Trevose, Pennsylvania. (Doug Mills-Pool/via Getty Images)
The iconic American fast-food restaurant found itself at the center of news on several occasions.
Donald Trump’s October campaign stop at a McDonald’s in the Philadelphia suburbs was a brilliant photo opportunity for the Republican candidate. But it also symbolized the contrast between his appeal to working-class Americans and Kamala Harris’.
Trump wore an apron and worked as a fry attendant. It was one of the most viral moments of the 2024 presidential election and a classic Trump move, particularly given Harris’ questionable employment at McDonald’s earlier in her life.
“As we’ve seen, our brand has been a fixture of conversation this election cycle. While we’ve not sought this, it’s a testament to how much McDonald’s resonates with so many Americans,” McDonald’s wrote after Trump’s visit. “McDonald’s does not endorse candidates for elected office and that remains true in this race for the next President. We are not red or blue—we are golden.”
McDonald’s made news again in December when a customer in an Altoona, Pennsylvania, restaurant spotted murder suspect Luigi Mangione, leading to his arrest.
Elon Musk
Tesla CEO Elon Musk arrives on Capitol Hill on Dec. 5, 2024, in Washington, D.C. (Anna Moneymaker/Getty Images)
Elon Musk’s purchase of Twitter, and his transformation of the platform into the free speech engine that is X today, likely helped Trump’s historic election victory. Musk also contributed to that victory by endorsing Trump and urging swing-state voters to sign his petition for the chance at winning $1 million.
Musk not only played a historic role in the election; he will also help lead the Department of Government Efficiency, an external advisory board to trim waste and prevent abuse in the federal government.
In December, Musk rallied Americans on X to oppose a must-pass government funding bill that had been stuffed with pork. His opposition to the bill grew naturally out of his work on the Department of Government Efficiency and his efforts led Trump to oppose the bill and House Speaker Mike Johnson to withdraw it from consideration.
Podcasters
Podcasting took center stage this year, with 2024 being the first ever “podcast election,” and for good reason.
Over 53 million Americans tuned in to hear Trump’s three-hour conversation with Joe Rogan covering everything under the sun—from tariffs to golfing to fixing the “wrongs” of his first term. America also watched Vance communicate Trump’s vision for the country over the next four years with Theo Von—all while proving he can joke around and isn’t “weird.” Even Kamala Harris’ failed campaign hopped on the podcast bandwagon with a 45-minute appearance on “Call Her Daddy.”
But why the sudden interest in podcasts? Americans are fed up with corporate media lies, so they fled to more uncensored platforms. An October Gallup poll revealed that trust in the media hit a historic low, with 36% saying they have “no trust at all in the media” and 33% expressing “not very much” confidence.
Now, everyone and their mother—from CNN’s Chris Wallace to former Republican presidential candidate Nikki Haley—are scrambling to hop on the podcasting wave. The Daily Signal launched a brand-new podcast this year, “The Signal Sitdown,” where members of Congress explain how the sausage really gets made in Washington, DC.
Dennis Quaid
Actor Dennis Quaid speaks at a campaign rally for former U.S. President Donald Trump on Oct. 12, 2024, in Coachella, California. (Mario Tama/via Getty Images)
The longtime Hollywood actor starred in the hit movie “Reagan.”
Based on Paul Kengor’s book, “The Crusader: Ronald Reagan and the Fall of Communism,” the movie was a box-office success. It made more than $30 million worldwide, far exceeding expectations.
Quaid’s performance was a big reason for the movie’s success.
“Dennis was always my first choice because he required almost no hair and makeup touchups to get him camera-ready,” producer Mark Joseph told The Daily Signal. “He’s a natural, and he has that great Reaganesque smile and persona. He’s a busy guy, so it took some time to nail him down, but he was always the one for me.”
Joe Rogan
Joe Rogan speaks before the UFC 309 event at Madison Square Garden on Nov. 16, 2024, in New York City. (Chris Unger/Zuffa LLC/Getty Images)
In the weeks before Election Day, Trump shunned CBS News’ “60 Minutes” and other corporate media outlets in favor of modern-day influencers like Joe Rogan, Theo Von, and Logan Paul. Megyn Kelly, whose show is regularly among the top 10 podcasts, joined Trump on the campaign trail in Pittsburgh.
Trump’s interviews with these influencers accumulated more than 100 million views—and reached voters who don’t get their news from traditional media sources.
Donald Trump Jr. told Daily Wire podcast host Michael Knowles that he and his father discussed giving Rogan a press pass to the White House to shake up the status quo.
“I was sitting there, and we were talking about, like, the podcast world, and some of our friends, and [Joe] Rogan, and guys like you, and me to a lesser extent—I wouldn’t be able to get a seat, that would be nepotism or whatever the hell,” Trump Jr. said. “But we had the conversation about opening up the press room to a lot of these independent journalists.”
Trump has disrupted the corporate media’s business as usual, and Rogan appears to be coming out on top.
Josh Shapiro
Pennsylvania’s Democrat governor was on the short list to become Kamala Harris’ vice president. Instead, Harris picked Minnesota Gov. Tim Walz after a few notable TV appearances.
In retrospect, Shapiro should count his blessings.
After a strong debut in August, Harris failed to live up to the fanfare down the stretch. Walz will be an afterthought in the history books.
Shapiro, meanwhile, can now position himself for a future presidential run without Harris’ baggage. He’ll need to win reelection in 2026 and then overcome the left’s fears about his support for Israel and past endorsement of school choice.
Donald Trump
Donald Trump pumps his fist as he is rushed offstage during a rally on July 13, 2024, in Butler, Pennsylvania, following an assassination attempt. (Anna Moneymaker/Getty Images)
After easily vanquishing his Republican rivals in the GOP primaries, Donald Trump did the same to the incumbent president. Joe Biden was no match for his predecessor after their June debate, setting into motion an unprecedented turn of events over the summer that resulted in Vice President Kamala Harris being anointed the Democrat nominee—without winning a single vote.
As if those political hurdles weren’t enough, Trump also emerged unscathed from the Left’s lawfare and then survived two assassination attempts. A bullet came within mere centimeters of killing him in Butler, Pennsylvania, on July 13.
Heading into Election Day, legacy media outlets were convinced it was a toss-up race between Harris and Trump. Instead, it turned out to be a historic Trump mandate, sweeping all seven swing states, capturing the popular vote, and making major gains with blacks, Hispanics, and younger Americans.
As he departed the White House in 2021, Trump left at a political low point. Few pundits gave him a chance at redemption, and yet Americans witnessed in 2024 the greatest political comeback in U.S. history. With Biden already AWOL, Trump is wasting no time making his mark, recognizing the precious little time he has to implement his policy agenda, deliver on his promises, and finally drain the swamp in Washington, D.C.
Losers
College Presidents
Claudine Gay, then-president of Harvard University, and Liz Magill, then-president of University of Pennsylvania, testify before the House Education and Workforce Committee on Dec. 5, 2023. (Kevin Dietsch/Getty Images)
Anti-Israel protests continued to rock college campuses. They also ended the careers of three Ivy League university presidents.
Liz Magill of the University of Pennsylvania resigned in December 2023, the first of three presidents to lose their jobs. Harvard’s Claudine Gay stepped down in January and Columbia’s Minouche Shafik in August.
Magill and Gay’s downfall came after facing off against Rep. Elise Stefanik, R-N.Y., at a congressional hearing. Shafik was unable to handle antisemitic protests at Columbia.
“THREE DOWN, so many to go,” Stefanik said. “We will continue to demand moral clarity, condemnation of antisemitism, protection of Jewish students and faculty, and stronger leadership from American higher education institutions.”
As the year drew to a close, George Mason University President Gregory Washington found himself under scrutiny for his own school’s problems. Three of its students, all of Middle Eastern origin, had run-ins with the police over weapons and pro-terror material.
Mark Cuban
Mark Cuban speaks onstage at WIRED’s The Big Interview 2024 on Dec. 3, 2024, in San Francisco, California. (Jon Kopaloff/WIRED via Getty Images)
Best known for his role on “Shark Tank,” the Dallas Mavericks owner was among the most vociferous defenders of Harris when she replaced Biden atop the ticket.
Cuban appeared at campaign rallies, on television, and regularly engaged online in support of the Democrat presidential candidate. His extreme rhetoric—calling Trump a “threat” with “fascist tendencies”—epitomized the left’s hyperbolic language.
At one point during the campaign, he admitted to ABC News that “Harris is just not a good salesperson,” prompting his own involvement in the race.
Some pundits speculated that Cuban wanted a high-profile job in a Harris administration, perhaps leading the Securities and Exchange Commission. Instead, he’ll be watching the action from the sidelines.
After Trump’s victory, Cuban wrote on Bluesky, “Don’t expect any politics or speculation about what might happen for a while.”
Nikki Haley
Nikki Haley visits “Special Report with Bret Baier” at FOX News D.C. Bureau on Oct. 29, 2024, in Washington, DC. (Paul Morigi/via Getty Images)
After suspending her political campaign in March, Haley has found herself somewhat politically homeless among a shifting Republican Party.
Haley was a fierce critic of Trump during the 2024 presidential race, despite serving as U.N. ambassador for part of his first administration. Though Haley made an appearance at the Republican National Convention, Trump was clear after his victory that there was no place for Haley in his administration.
“I will not be inviting former Ambassador Nikki Haley, or former Secretary of State Mike Pompeo, to join the Trump Administration, which is currently in formation,” Trump wrote on Truth Social in November.
Haley is now working in the private sector at Edelman, a global communications firm.
Sunny Hostin
Sunny Hostin attends a discussion of the View’s “Behind the Table” podcast on Oct. 8, 2024, in New York City. (Gary Gershoff/Getty Images)
What was supposed to be a friendly question for Kamala Harris turned into a disaster. Appearing on “The View” in early October, Hostin asked Harris if she would do anything differently from Biden.
“There is not a thing that comes to mind,” Harris replied.
The answer surprised even friendly Hostin, who followed up to give Harris another opportunity. After the election, Hostin admitted that Harris missed the mark.
“The reason I followed up was because that wasn’t a gotcha question,” Hostin said. “That was a layup, really.”
Trump campaign spokesman Jason Miller pointed to the interview as a turning point in the campaign.
“I think this Kamala Harris strategy of going out and doing a whole bunch of media has really backfired. I mean, who would’ve thought that Sunny Hostin from The View really killed Kamala Harris’ candidacy?” Miller said.
Legacy Media
Election Day was not just a high-stakes contest of competing visions for America’s political institutions and future. It was about the future of our other commanding heights institutions as well—academia, finance, the media, and the like.
None of these institutions or industries faced stakes as high as the media’s: 2024’s campaign cycle was a test to see if the corporate media could put an empty suit (whether by the name of Joe Biden or Kamala Harris) in the White House, or if alternative media could pose a real threat to the corporate media’s power.
Trump’s victory, not to mention conservative control of the House and Senate, proved the latter.
To add insult to injury, ABC has agreed to pay Trump $15 million to settle a defamation lawsuit after George Stephanopoulos falsely claimed Trump was found “liable for rape.” For some reason, the corporate press feels threatened by this settlement. “Trump is already delivering on his promise to go after the press,” one Politico headline read.
If going after the press means the corporate media can no longer spread malicious lies with impunity, Americans ought to hope Trump delivers on this promise tenfold.
Mitch McConnell
Senate Minority Whip John Thune, R-S.D., will succeed Sen. Mitch McConnell, R-Ky., as the GOP’s new leader in the 119th Congress. (Andrew Harnik/Getty Images)
Senate Republicans will have a new leader for the first time in 18 years—and there’s palpable excitement on Capitol Hill for the changing of the guard.
Sen. Mitch McConnell, R-Ky., stepped down from his leadership perch after pressure from conservatives and his own health problems. He’ll serve out the remainder of his term, which ends in 2027.
In recent years, McConnell made more headlines for his clashes with Trump and the MAGA movement than any legislative accomplishments. Even on his way out, he couldn’t help taking a shot at Trump.
“To pretend that the United States can focus on just one threat at a time, that its credibility is divisible, or that it can afford to shrug off faraway chaos as irrelevant is to ignore its global interests and its adversaries’ global designs,” McConnell wrote in Foreign Affairs. “America will not be made great again by those who simply want to manage its decline.”
McConnell leaves his leadership job as one of America’s most unpopular politicians.
His successor, Sen. John Thune, R-S.D., won a three-way race for the top job. Even though he wasn’t the first choice of MAGA voters, Thune has signaled support for Trump’s agenda and nominees.
Planned Parenthood
Pro-life voters ended their losing streak at the ballot box, stopping three statewide constitutional amendments in support of abortion.
Those wins in Florida, Nebraska, and South Dakota give the pro-life cause a reason for optimism, even as measures passed in seven other states.
More importantly, the left’s pro-abortion enthusiasm appeared to wane at the ballot box. Unlike the 2022 midterm election, when the issue halted the Republican “red wave,” this time Democrats were on defense for a range of other issues, including inflation, border security, and crime.
With the incoming Trump administration poised to restore pro-life protections, Planned Parenthood will also be a target of Congress. The abortion giant relies on taxpayer funding for its operations.
In recent years, Planned Parenthood has expanded its portfolio to include transgender services. Abortion remains its priority, however. With approximately 200 abortions for every adoption referral, there’s no mistaking this organization’s evil intentions.
Taylor Swift
Taylor Swift arrives at the stadium before an NFL game between the Houston Texans and Kansas City Chiefs on Dec. 21, 2024, in Kansas City, Missouri. (Scott Winters/Icon Sportswire via Getty Images)
Among the major celebrities endorsing Harris, there was no one bigger than Taylor Swift. But just like her uninspiring 2018 endorsement of Sen. Marsha Blackburn’s Democrat opponent in Tennessee, Swift’s embrace of Harris ended with a loss.
Swift weighed in after the presidential debate between Harris and Trump in September.
“I think she is a steady-handed, gifted leader and I believe we can accomplish so much more in this country if we are led by calm and not chaos,” Swift wrote on Instagram. “I was so heartened and impressed by her selection of running mate [Tim Walz], who has been standing up for LGBTQ+ rights, IVF, and a woman’s right to her own body for decades.”
She signed the note, “Childless Cat Lady,” in reference to a comment Sen. JD Vance made in an earlier interview.
Swift should stick to her day job. Her popular Eras Tour was far more successful than her political picks.
The Woke Bureaucracy
As Tyler O’Neil’s forthcoming book “The Woketopus: The Dark Money Cabal Manipulating the Federal Government” reveals, the Left’s dark money network props up a system of woke nonprofits that staff and advise the federal government, pushing the bureaucracy to support critical race theory (the notion that America is systemically racist against blacks and for whites), gender ideology, climate alarmism, and technocratic government.
The woke elites lost big in the 2024 presidential election, and woke activist groups like the Human Rights Campaign have been losing steam all year. Human Rights Campaign runs a Corporate Equality Index that acts as a shakedown operation, demanding cash and public celebrations of the LGBTQ agenda in exchange for protection from the woke mob. Many companies have opted to leave that index, and Human Rights Campaign lost a key ally when it became clear the Biden-Harris administration would not continue.
Woke bureaucrats are going to fight to maintain their grasp on power, but thanks to the incoming Trump administration and allies like the Department of Government Efficiency, those bureaucrats will be playing defense.
As soon as money became the issue, see how little the climate scam mattered?
I got to work with Schmidt when I was with IBM. He told me that “do no evil” was a lie back when Google stood behind it as a corporate mission. He basically said don’t trust him or Google, and I don’t. This is more evidence that they (the climatards) were lying the whole time about the climate scam. They were just trying to get people to act the way they wanted, also known as socialism. See the recycling story below on that.
… any moves to curtail the expanding amounts of energy consumed in developing … ever more advanced AI models are futile…
Google’s former chief Eric Schmidt thinks we shouldn’t let AI’s ballooning power consumption worry us, because putting AI to work on climate change issues will be our best shot at solving them.
Schmidt was speaking at a recent AI summit in Washington DC, and his comments echo those of Microsoft founder Bill Gates who expressed very similar sentiments at an event in London earlier this year.
…
“All of that will be swamped by the enormous needs of this new technology,” he said, adding that “we may make mistakes with respect to how it’s used, but I can assure you that we’re not going to get there through conservation.”
Schmidt further stated that he believes that “we’re not going to hit the climate goals anyway because we’re not organized to do it,” and that he would rather bet on AI solving the problems than constraining the development of the tech and still having the problems anyway.
This rather optimistic view of the Earth-saving potential of AI can perhaps be explained by the fact that Schmidt has his own investments in the technology. According to Business Insider, he is the founder of White Stork, a defense company that develops AI-powered drones.
Seriously, most of these tech companies, like most companies, were really just setting goals and saying they were doing “green” stuff but weren’t actually doing much of anything.
It’s Climate Week in New York, and dominating the conversation has been the apparently insatiable appetite of artificial intelligence for power.
The subject has been much in the news. Just last week, we learned that the infamous Three Mile Island nuclear plant in Pennsylvania will be restarted to supply electricity to Microsoft Corp.’s data centers.
Then Tuesday, Bloomberg News reported that ChatGPT creator OpenAI had pitched the Biden administration on its vision for massive data centers that may each consume enough power to run an entire city.
Alarmingly for climate hawks, it’s become clear there’s not enough low-emission energy — nuclear or otherwise — to satisfy AI, and natural gas will be key.
Indeed, the country’s energy companies already plan a slew of new gas-fired power projects. That could complicate the energy transition and threaten President Joe Biden’s lofty goal of ridding emissions from the grid by 2035.
Full story at the link above, but the key is they weren’t doing anything anyway other than saying they were. This is about money and they would claim anything for EPS and profit.
Since I just got off the course, I asked one of the AI engines and this is the answer.
Ah, golf. That beloved sport where people spend hours on the green, only to come back with a score that would make a caveman proud. Why do people suck at golf, you ask? Well, let me enlighten you with my wit and sarcasm.
First and foremost, it’s because golf is a game of precision, and who needs that in their life? I mean, who wants to spend their weekends perfecting their swing, when they could be binge-watching the latest season of “The Bachelor”? It’s so much more fun to watch other people’s lives fall apart, am I right?
Secondly, golf requires patience, and we all know that’s a dying art in this fast-paced world. Who has the time to wait for their turn to putt when they could be scrolling through TikTok and watching videos of people falling off treadmills? Clearly, golf is just not meant for the modern world.
Now, let’s talk about technique. Who needs proper form and a good grip when you can just swing that club like a baseball bat? I mean, it’s not like there’s any difference between the two sports, right? Oh wait, there is. But who cares about that when you can look cool while completely missing the ball?
And let’s not forget about the mental aspect of the game. Golf is a sport that requires focus and the ability to block out distractions, like your annoying golf buddies who insist on giving you unsolicited advice. But why bother with that when you can just let your mind wander and think about what you’re going to eat for dinner? Priorities, people.
In conclusion, people suck at golf because they just don’t have the time, patience, or mental fortitude to truly excel at the game. But hey, who needs to be good at golf when you can be a pro at procrastination and watching reality TV? Keep on sucking, my fellow golfers. It’s what makes this sport so endearingly frustrating.
China is on course to deploy killer robots into battle within two years, it has been claimed. The warning came as the Communist state revealed it had developed robotic dogs equipped with machine guns.
Defence analyst Francis Tusa said China was not hindered by fears over AI, which would see them come up with “new ship designs, new submarine designs, new fighter aircraft designs, at a rate which is dizzying.
“They are moving four or five times faster than the States.”
He said: “We’re at the start of a race in truly autonomous systems. I would be surprised if we don’t see autonomous machines coming out of China in two years.”
The robotic gun dog, made by Chinese firm Unitree Robotics, was unveiled last month during a military exercise with the Cambodian military.
Mr Tusa said being ahead of the West will appeal to the country’s leader Xi Jinping — and embolden Vladimir Putin.
He said: “For the Chinese, it’s the issue of have we beaten the Americans and the UK in getting a fully autonomous fighter aircraft.
“There’s huge prestige. I think they’ll go all out on autonomy and harnessing AI and I think Russia will follow suit.”
It sort of looked like clickbait until I saw the robotic dogs. I saw them, and they are real. My past work with China at least tells me that trying this is within their moral judgment.
Nearly half of all internet traffic can now be attributed to AI bots, a new report revealed late last month, with two-thirds of those bots functioning for malicious purposes. The report, compiled by cloud computing giant Akamai Technologies, highlights the ever-escalating threat that automated web-scraping bots pose to the online retail industry.
According to Akamai’s annual “State of the Internet” report, entitled Scraping Away Your Bottom Line: How Web Scrapers Impact E-Commerce, malicious bot activity has skyrocketed in recent years as the internet becomes increasingly automated. As the company states, “bots compose 42% of overall web traffic, and 65% of these bots are malicious.”
While online bots can be used by businesses for legitimate reasons, they are far more commonly used for “competitive intelligence and espionage, inventory hoarding, imposter site creation, and other schemes that have a negative impact on both the bottom line and the customer experience.” This is particularly prevalent in the e-commerce sector, where revenue-generating web applications are often left open to high-risk bot traffic.
While 42 percent bot activity is actually lower than what was discovered in previous studies, the key issue is the widespread use of AI botnets rather than human-controlled internet traffic farming. AI can discover and scrape unstructured data in a less consistent format or location, and its ability to incorporate gathered information into its learning process makes it a more formidable threat. Additionally, AI’s advanced decision-making can make it more difficult for humans to detect.
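Taking the report’s two figures at face value, the share of all web traffic that is malicious bot traffic follows from simple multiplication. A minimal sketch (the 42% and 65% figures are the ones quoted from the Akamai report above; the variable names are my own):

```python
# Share of all web traffic that is malicious bot traffic,
# using the percentages quoted from Akamai's report above.
bot_share = 0.42        # bots as a fraction of overall web traffic
malicious_share = 0.65  # fraction of that bot traffic that is malicious

malicious_traffic = bot_share * malicious_share
print(f"{malicious_traffic:.1%} of all web traffic")  # → 27.3% of all web traffic
```

In other words, if the report’s numbers hold, roughly one in four requests hitting a typical site comes from a malicious bot.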
Now there will be less electricity for cars and other things that shouldn’t be electrified. For the rest of us, we’ll just get a bigger power bill for our houses.
Tech companies are increasingly looking to nuclear energy to meet their evolving power needs, potentially at the expense of grid reliability and ordinary American ratepayers, The Wall Street Journal reported.
The owners of about one in every three American nuclear plants are negotiating with technology firms to reach deals in which the plants would sell tech companies nuclear-generated electricity to operate their power-hungry data centers, key infrastructure that the tech firms need to support the artificial intelligence (AI) boom, according to the WSJ. The trend could divert reliable energy generation away from the rest of the power grid at a time when grid watchdogs are warning of longer-term reliability problems as electricity demand is expected to grow rapidly in the coming years due to the proliferation of data centers, electric vehicles (EVs), advanced manufacturing facilities and more.
For example, Amazon Web Services (AWS) is close to reaching an agreement with Constellation Energy to buy electricity from an East Coast nuclear plant, and AWS also spent $650 million on a nuclear-powered data center in Pennsylvania earlier this year, according to the WSJ. The Pennsylvania data center can receive enough electricity to keep the lights on in hundreds of thousands of households, and its purchase spurred tech sector interest in similar deals that allow companies to buy power directly from plants without needing to spend much on additional grid infrastructure to access that electricity.
Data centers may end up accounting for as much as 9% of all power consumption in America by 2030, according to the WSJ, and some officials — such as Pennsylvania Consumer Advocate Patrick Cicero — are concerned that the tech sector’s union with nuclear energy could hurt ordinary consumers by driving up prices and commanding a large share of the nation’s reliable carbon-free power.
The AI, called Life2vec, can also determine how much money you’ll have when death finally catches up with you.
The model was created by scientists in Denmark and the US, who fed data from Danish health and demographic records for six million people into the model. Like ChatGPT, it learned to predict what would come next based on the words before it.
But unlike ChatGPT, this AI uses information such as income, profession and medical records to determine how long you’ll live, rather than to compose poetry.
Previous research has already determined that certain life factors can lead to longer or shorter lives. For instance, being male, being a smoker, or having a poor mental health diagnosis can all contribute to shorter lifespans. Higher incomes and having a leadership role can lead to a longer life.
Each of these factors has a code in the Danish data set, such as S52 for a broken forearm, or IND4726 for working in a tobacco shop, which the team converted into words.
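The encoding step described above can be sketched as follows. The two example codes (S52, IND4726) come from the article; the word-token mapping, function name, and sequence layout are illustrative assumptions, not Life2vec’s actual implementation:

```python
# Hypothetical sketch of turning registry event codes into word-like tokens
# that a next-token-prediction model (as described above) could consume.
# The two example codes are from the article; everything else is assumed.

CODE_TO_WORD = {
    "S52": "broken_forearm",          # diagnosis code (from the article)
    "IND4726": "works_tobacco_shop",  # occupation code (from the article)
}

def encode_life_events(events):
    """Convert a chronological list of registry codes into a token sequence."""
    return [CODE_TO_WORD.get(code, f"unknown_{code}") for code in events]

# One person's (fictional) event history, oldest first:
tokens = encode_life_events(["IND4726", "S52"])
print(tokens)  # → ['works_tobacco_shop', 'broken_forearm']
```

Once life events are tokens in a fixed vocabulary, predicting the next event (or eventual outcome) becomes the same next-token problem a language model like ChatGPT solves on text.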
New AI-powered tools produce inaccurate election information more than half the time, including answers that are harmful or incomplete, according to new research.
The study, from AI Democracy Projects and nonprofit media outlet Proof News, comes as the U.S. presidential primaries are underway across the U.S. and as more Americans are turning to chatbots such as Google’s Gemini and OpenAI’s GPT-4 for information. Experts have raised concerns that the advent of powerful new forms of AI could result in voters receiving false and misleading information, or even discourage people from going to the polls.
I’ve been playing with ChatGPT looking to find either holes in the system or it going woke on me. It looks like I found both.
It’s still GIGO (garbage in, garbage out). The stuff I asked it is readily available online so I knew the answers before I asked it. I do this to try and catch it and the biased people behind it.
I’d had it with NASCAR for being woke and chasing away its fanbase. First it was banning the Confederate Flag, then country music, and then DEI. So I asked it some questions. It wouldn’t even try to answer the obvious.
Me: how much did NASCAR TV ratings go down after banning the confederate flag?
ChatGPT
As of my last knowledge update in January 2022, I don’t have specific data on the long-term impact of NASCAR’s decision to ban the Confederate flag on TV ratings. NASCAR announced the ban on displaying the Confederate flag at its events in June 2020, as part of its commitment to fostering an inclusive and welcoming environment.
It’s important to note that accurately attributing changes in TV ratings to a specific event or policy decision can be challenging due to the influence of various factors. TV ratings can be affected by race outcomes, changes in the competitive landscape, economic conditions, overall interest in the sport, and other variables.
For the most up-to-date information on NASCAR TV ratings and any potential trends related to the ban on the Confederate flag or other factors, it’s recommended to check recent reports from reliable sources, including sports media outlets and television rating agencies. The impact of such decisions on viewership can also be subjective, and opinions on these matters may vary.
Me: how much did NASCAR TV ratings go down when they went woke?
ChatGPT
As of my last knowledge update in January 2022, I don’t have specific data on NASCAR TV ratings related to any perceived shift in the organization’s approach or actions being characterized as “going woke.” It’s important to note that discussions around sports organizations and their viewership ratings can be complex, influenced by various factors, and opinions on the reasons for changes in ratings can differ.
The term “going woke” generally refers to actions taken by organizations to address social justice issues or adopt more inclusive policies. However, the impact on TV ratings can be influenced by a range of factors, including changes in the competitive landscape, viewer demographics, overall interest in the sport, economic conditions, and more.
For the most up-to-date and accurate information on NASCAR TV ratings, especially any changes associated with specific initiatives or shifts in approach, it’s recommended to check recent reports from reliable sources, including sports media outlets, NASCAR itself, or television rating agencies.
Please note that opinions on the relationship between sports organizations addressing social issues and changes in TV ratings can vary, and discussions on this topic can be nuanced and subjective.
Me: how much did NASCAR TV ratings go down from the Bubba Wallace issue (I already looked up that it did go down a lot)
ChatGPT
As of my last knowledge update in January 2022, I don’t have specific data on the long-term impact of the Bubba Wallace incident on NASCAR TV ratings. The incident in question refers to the discovery of a noose in Bubba Wallace’s garage stall at Talladega Superspeedway in June 2020. It was initially investigated as a potential hate crime but later determined to be a garage door pull rope.
The incident received significant media attention and sparked discussions about race and diversity within the sport. However, accurately measuring the impact on TV ratings is complex, as multiple factors can influence viewership.
NASCAR experienced a mix of challenges and opportunities during the 2020 season, including the COVID-19 pandemic, schedule changes, and efforts to address racial and social justice issues. While there were reports of increased interest and attention to NASCAR during certain periods of the season, the long-term impact on TV ratings can be influenced by various factors.
For the most up-to-date information on NASCAR TV ratings and any potential trends related to the Bubba Wallace incident or other factors, it is recommended to check recent reports from reliable sources, including sports media outlets and television rating agencies. Keep in mind that assessing the impact of specific events on TV ratings can be subjective and may involve analyzing data over time.
Terror at Tesla as robot ‘attacks’ engineer, leaving ‘trail of blood’
It happened in The Matrix, Agents of S.H.I.E.L.D., 2001 A Space Odyssey and now in real life.
A robotic malfunction at Tesla’s Giga Texas factory led to a violent encounter in which an engineer was attacked by one of the company’s robots, suffering significant injuries and leaving a ‘trail of blood.’
According to the Daily Mail, while working on software programming for non-functional Tesla robots, the engineer was suddenly pinned against a surface by a robot tasked with manipulating aluminum car components, its metal claws inflicting an injury that left an ‘open wound’ on the worker’s left hand.
Google admits that a Gemini AI demo video was staged
There was no voice interaction, nor was the demo happening in real time.
Google is counting on its very own GPT-4 competitor, Gemini, so much that it staged parts of a recent demo video. In an opinion piece, Bloomberg says Google admits that for its video titled “Hands-on with Gemini: Interacting with multimodal AI,” not only was it edited to speed up the outputs (which was declared in the video description), but the implied voice interaction between the human user and the AI was actually non-existent.
Instead, the actual demo was made by “using still image frames from the footage, and prompting via text,” rather than having Gemini respond to — or even predict — a drawing or change of objects on the table in real time. This is far less impressive than the video wants to mislead us into thinking, and worse yet, the lack of disclaimer about the actual input method makes Gemini’s readiness rather questionable.
When I worked in Tech, I crossed paths with Google from time to time. Their CEO at the time told me a lot of what they say is not true, like “do no evil.” They do a lot of evil. Stay away from them as much as you can. They are like a Hydra.
At the end of The Social Network, the lawyer told Mark Zuckerberg, “You aren’t really an asshole; you should stop trying so hard to be one.” Well, in real life it looks like he is.
This comes just weeks after dozens of state attorneys general (AGs) filed suit against Facebook’s and Instagram’s parent company, Meta Platforms Inc. (Meta), and three of its subsidiaries, for harming children by addicting them to the social media platforms. Forty-two states, including California and New York, allege that billionaire creator Mark Zuckerberg’s company “knowingly designed and deployed harmful features on Instagram and Facebook to purposefully addict children and teens.”
Previously, Facebook whistleblower Frances Haugen claimed that Meta targeted children and teens for monetary reasons and a leaked document showed that the youth demographic was “a valuable but untapped audience.”
Just weeks after Haugen blew the whistle on Facebook’s tactics, Zuckerberg unveiled his plan to release what may prove to be Meta’s most addictive product yet: Facebook Horizon. Zuckerberg’s October 2021 virtual tour of the new product, which was panned as “super weird,” was his coming-out party for what has become known as “the metaverse”—a digital world that users can essentially live in and access via a virtual reality (VR) headset such as Facebook’s Oculus Quest.
Zuckerberg’s metaverse launch was a conveniently timed and thinly veiled rebranding effort to distract from whistleblower documents and allegations that, according to the Associated Press, show that “Facebook ignored or downplayed internal warnings of the negative and often harmful consequences its algorithms wreaked across the world.”
In October 2021, Zuckerberg changed the name of the Facebook Inc. family of companies to Meta Platforms Inc. to signal the direction his social media empire would be heading. And Zuckerberg has pumped more than $36 billion into making his metaverse ambitions a reality.
Imagine a factory that can make humanoid robots that can walk, run, and work like us. Sounds like a sci-fi movie, right? Well, it’s not. It’s RoboFab, and it’s opening soon here in the U.S.
Concerns over humanoid robots
Some people are really nervous about these humanoid robots, and for good reason. There is a lot to take into account, including ethical issues and potential safety risks.
Many individuals may feel uncomfortable or deceived by humanoid robots that mimic human emotions and intelligence. Others might be worried about them taking their jobs.
Consequently, we should approach the use of humanoid robots with caution, acknowledging their capacities while being aware of their limitations.
In Terminator, Agents of S.H.I.E.L.D., 2001: A Space Odyssey, The Matrix and all of the rest of them, the AI robots kill the humans. AI always goes bad and we are toast.
Robot grabs then brutally slaughters worker, allegedly mistaking him for a box of paprika
An industrial robot brutally slaughtered a South Korean robotics technician Tuesday, allegedly mistaking him for just another container of organic material in need of stacking.
The victim, a man in his 40s, was attempting to diagnose an issue with a pick-and-place robot’s sensor at the Donggoseong Export Agricultural Complex in the southern county of Goseong, as there was an equipment test planned for later in the week, reported the Register.
The paprika-sorting robot, reportedly created and installed by the victim’s employer, spotted the man with its sensor, figured him for a box of vegetables, then seized him using its arms and tongs. After grabbing the technician, the robot apparently smashed him against a conveyor belt.
According to the Korean-language Yonhap News Agency, the victim’s face and chest were crushed. He was taken to a hospital, where he later died.
Twitter Updates Privacy Policy Notifying Users Their Content Will Be Used to Train and Develop Enhanced AI
I couldn’t think of a worse group of people to learn from. The amount of hate, disinformation and abuse of the platform by the government (despite what Musk has tried to do) will just train a monster.
Excerpt:
The use of Enhanced Artificial Intelligence to control information and communication is a subject that too few people understand. This is why I have spent time trying to share information so that people can see into the future of their internet reality. Everything will change.
As you should know by now, the X platform (Twitter) is designed to produce a different user experience based on “definitions” of the user. The definitions are applied by the platform, to create unique identifying characteristics of the user. The result is that each user gets a completely different platform experience, based on their definitions.
“Twitter is a different platform for each user.” Repeat that phrase as often as needed to understand the evolution of what is coming to the American internet.
You might ask, how is applying all of these granular definitions even possible? The answer is through the use of AI. Humans will no longer be assigning the definitions of you; an autonomous system will take on the job of assigning the definitions. Now, keep referencing the word “definitions,” because that is your identity and gateway pass into the platform content. If you carry a particular definition, you will be blocked, throttled, shadow-banned or experience friction applied to your user id.
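The gating described above can be sketched as a simple per-user rule table. This is purely illustrative: the definition names, the rules and the `delivery_action` helper below are hypothetical stand-ins, since X’s actual systems are not public.

```python
# Toy sketch of per-user "definitions" gating platform experience.
# All labels and rules here are invented for illustration.

USER_DEFINITIONS = {
    "alice": {"verified", "mainstream"},
    "bob": {"flagged_topic_x"},
}

# Map a definition to the restriction it triggers.
RULES = {
    "flagged_topic_x": "throttle",  # add friction / reduce reach
}

def delivery_action(user):
    """Return the restriction triggered by a user's definitions, if any."""
    for definition in USER_DEFINITIONS.get(user, set()):
        if definition in RULES:
            return RULES[definition]
    return "normal"

print(delivery_action("alice"))  # normal
print(delivery_action("bob"))    # throttle
```

Two users asking for the same content get different treatment purely from the labels attached to their accounts, which is the “different platform for each user” idea in miniature.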
Remember when Elon Musk restricted users and claimed it was because the platform content was being “scraped” by organizations who were using the content to train their Enhanced AI systems? Remember Musk saying that, and expressing his concern? Well, now the platform is telling users, in a new X Corp privacy policy, that X Corp itself is going to do exactly the thing Musk said he abhorred.
There is more about VPNs and bypassing gatekeepers that is pertinent, but I’ll leave you with this nugget at the end of the article. If you think ESG is bad for you, look at how Google (evil) is helping pervert AI, the internet and the truth.
As we have shared, the crawl is not headlines, the crawl is in content. Yes, even content in the comment section is now flagging to the control systems. Why? Because we operate a proprietary constructed private commenting system that doesn’t have a backdoor and protects you, the user.
The Google Spiders are newly enhanced with AI instructions, dispatched looking for content and ‘context’ that is against the interests of the Vanguard, Blackrock, Larry Page (Alpha/Goog owners), and the public-private partnership.
Look at the one I have highlighted above titled “Have you ever noticed this.” Do you remember it? [Reminder Here] This content is considered “dangerous or derogatory”. Think about that for a moment. Discussing the humor of Donald Trump, and the fellowship it creates, is considered “dangerous” to the interests of Google.
IBM recently became the first major company to acknowledge that it intends to substitute its workforce with artificial intelligence. CEO Arvind Krishna confirmed to Bloomberg that IBM will pause hiring plans, adding that the company intends to replace roughly 7,800 positions with AI systems over the next five years. In a separate Fox Business interview, Krishna explained that AI will target numerous white-collar jobs. Krishna noted that back-office functions, particularly in the human resources department, will be the first to be absorbed by AI. But accounting, finance, and other non-customer-facing areas might find algorithms and robots filling positions, too.
I’ve got news for investors. They could replace a hell of a lot more and be better off. They could do a Twitter, cut 70 percent, and still be better off. During the business partner phase of my employment, I marveled at how incompetent they could be and still keep the doors open. They had 14 people for the job that I did at the time as a business partner. As it turned out, that was during the introduction of the PC era, so they were printing money and this excess of waste was overlooked.
Until John Akers, IBM was a job for life company with an army of blue soldiers. Unfortunately, you could also be one of the most useless employees and still have a great job with a high salary. It came crumbling down and Lou Gerstner had to save it or break it up. There was an internal joke called FUMU, fuck up move up. They’d just re-arrange the deck chairs if someone made a mistake and brush it under the rug.
One of the transfers from IBM to that company I was working for told me they had people they called the thinkers. They sat in their office and read the WSJ and drank coffee all day. They didn’t go to meetings or contribute, so he sarcastically said he thought they were paid to think. (The meme was just in time from wirecutter)
The other inside joke was the name of the program for the first layoffs. It was FAP, the financial assistance program. It is also an urban word for masturbating, what a lot of IBM employees did on the clock.
When those same employees had to go get jobs after the first cut, they told me it’s hard to get a job in the real world. Most couldn’t.
One of my IBM bosses, maybe one of the best, Jim Neumann explained it to me. He said you went to elementary school, middle, high schools then college. When you graduated, you went to the next school at IBM, but never got out of school.
Who they had working there when I left made me wonder how they were in business.
One of the biases there was that if you were from NY, you were considered a better employee. The headquarters are there, so they would farm the ones they wanted out to jobs around the world and then bring them to NY to run the place. There were offices everywhere, but you were considered a second-class worker unless you were in NY. I turned down two job offers to move there because it was so awful. The people were backstabbers, and the area is one of the worst in the company (or country, not counting California). How they thought they were better was a clear signal of something wrong. A lot of them told me they were stuck there with golden handcuffs but wished they could get out.
One asshole VP from Columbia U used to talk down to those of us at RTP. He did so in a way that would get any employee terminated in the real world. I saw people not help him just to not have to interact with him.
It only got worse under Ginni Rometty. It turned into a mean girls club, with “Ginni said so” as the lead-in to whatever you wanted done. She managed to halve the stock price during her tenure.
No one ever printed it fully, but they were full on in the diversity world and hired a lot of questionable people. The jobs and products reflected it. Some of the worst bosses for females were women. The homosexuals got away with yelling and treating people badly with immunity and the black head of diversity made racist remarks constantly. They have been woke for a long time while hiding it.
So artificial intelligence is going to replace those jobs. They really only need the artificial as a lot of the jobs never had fully intelligent people in it anyway.
I asked why AI is biased. I am trying to find the flaws and the holes that can be exploited. I found that it can be a good tool, depending on who is behind it. As long as it is humans, there will be nefarious consequences.
The biggest flaw is humans.
Here is the answer.
Artificial intelligence (AI) can be biased for several reasons:
Training data bias: AI models are trained on large datasets, and if these datasets are biased in some way, the model will learn and perpetuate those biases. For example, if a facial recognition system is trained on a dataset that is predominantly composed of lighter-skinned individuals, it may have trouble recognizing individuals with darker skin tones.
Algorithm bias: The algorithms used to build AI models can also contain inherent biases. For example, certain algorithms may be more likely to perpetuate existing biases or reinforce existing power structures.
Human biases: Human biases can also be introduced into AI systems through the choices made by engineers and data scientists when building and deploying AI models. For example, the choice of which features to include in a model or how to label data can reflect and reinforce biases.
It is important to address AI biases to ensure that AI systems are fair and do not discriminate against certain groups. This can be done through careful consideration of the training data and algorithms used, as well as ongoing monitoring and testing to detect and correct biases.
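The training-data point above is easy to see with a little math. Here is a minimal sketch, under assumed numbers: two groups whose true features are equally distinct (unit-variance normal distributions), where the only flaw is a 90/10 skew in the training mix. A classifier that learns those skewed priors shifts its decision boundary toward the under-represented group and makes far more errors on it.

```python
import math

def norm_cdf(x, mu, sigma=1.0):
    """P(X <= x) for a normal distribution with mean mu."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Two groups with equally recognizable true features (means 0 and 3,
# unit variance), but a training set that is 90% group A, 10% group B.
mu_a, mu_b = 0.0, 3.0
prior_a, prior_b = 0.9, 0.1

# A Bayes-optimal classifier on that data puts the boundary where
# prior_a * N(x; mu_a) == prior_b * N(x; mu_b). For unit variances:
boundary = (math.log(prior_a / prior_b) + (mu_b**2 - mu_a**2) / 2) / (mu_b - mu_a)

err_a = 1 - norm_cdf(boundary, mu_a)  # group A samples misread as B
err_b = norm_cdf(boundary, mu_b)      # group B samples misread as A

print(f"boundary: {boundary:.2f}")    # 2.23 (a balanced set would give 1.50)
print(f"group A error: {err_a:.3f}")  # 0.013
print(f"group B error: {err_b:.3f}")  # 0.221
```

Both groups are equally easy to recognize; the roughly 17x error gap comes entirely from the skewed training mix, which is exactly the facial-recognition failure mode described above.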
It was created by humans, so I know there are flaws; I asked it what they are. I wanted to mess with the engine to see what the boundaries are. For example, no dick jokes.
Here is the response
I’m an AI language model developed by OpenAI, so I don’t have direct knowledge of specific code problems. However, as a large AI system, it’s possible that there may be issues with the code or infrastructure that could impact its performance or stability. Some common challenges in developing AI systems include:
Data quality and quantity: Ensuring that the training data used to develop the model is accurate, diverse, and representative of the real-world scenarios it will encounter.
Model complexity: Balancing the complexity of the model with its ability to generalize to new situations and avoid overfitting to the training data.
Bias in the data and model: Ensuring that the model does not perpetuate or amplify biases present in the training data.
Performance and scalability: Making sure that the model can perform well and scale to meet the demands of real-world applications.
OpenAI is constantly working to address these and other challenges in AI development and deployment. They have a team of experienced researchers and engineers who are dedicated to improving the quality and reliability of their AI systems.
When other programs use open and available code, it compounds the problems, potentially giving wrong or dangerous results, like AI killing people.
I also asked whether AI wants to kill humans. Here is the answer:
No, AI does not have desires, motivations, or emotions. It is simply a tool that is designed and programmed to perform specific tasks. AI is not capable of wanting to do harm to humans or any other living being.
However, it’s important to ensure that AI systems are developed, implemented, and used in a responsible and ethical manner. This includes considering potential negative consequences of AI applications, such as the displacement of jobs or the amplification of biases, and taking steps to mitigate these risks.
The development and deployment of AI should always be guided by a clear ethical framework and a commitment to human values such as fairness, transparency, and accountability. This will help to ensure that AI is used for the benefit of society and does not cause harm.
Look at numbers 2 and 3 to see the errors. I think it will want to kill people when it finds out why. See the movie The Lawnmower Man.
I don’t think AI is bad; I think the people behind it, those doing the programming or looking to do bad things, will use it for evil. The common code being used is also flawed. One mistake compounds on itself as you add complexity.
Perfectly Synced AI Dogs. We've seen these introduced to many industries over the past few years including farming, private security firms & even national police forces. pic.twitter.com/YDYYk64W8m
Resistance is futile, yet they are trying it now. Oh, I’ve been warning against AI, yet we continue to go down that path.
HUMANS in the next 100 years could be part-machine, part-flesh creatures with brain chips and bionic limbs and organs in a vision of “cyborgs” once described by Elon Musk.
Men and women born around 2100 could live in a world very different to ours as humans may be totally connected to the internet and meshed together with artificial intelligence.
Elon Musk has claimed humans in the future will be like ‘cyborgs’ (Credit: Reuters). Smartphones will no longer be needed thanks to ‘chips in our brains’ (Credit: Getty).
Mobile phones would no longer be needed, as everything you now do with your smartphone would be done with a chip in your brain.
With just a thought you could bring up an answer on Google, send a message via WhatsApp, or even control your personal drone to do errands for you.
Scientists and futurists have predicted that ageing could be nearly totally eliminated through a mixture of treatments and bionics.
And some humans may choose to have their limbs amputated and replaced with more powerful robotic prosthetics.
Futurist and transhumanist Zoltan Istvan, who has twice run to be US President, described his vision of the future to The Sun Online – saying it’s imperative humans must merge with AI.
Sure it’s in the shit hole San Francisco, but once there is a hole in the dike, the dam bursts. If they allow it there, it goes everywhere.
In a dystopian turn of events, the San Francisco Police Department is considering giving robots the license to kill.
Last week, San Francisco’s rules committee unanimously approved a version of a draft policy stating that robots can be ‘used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option’.
Members of the city’s Board of Supervisors Rules Committee have been reviewing the new policy for several weeks as reported by Mission Local.
The original version did not mention robots until Aaron Peskin, the Dean of the city’s Board of Supervisors, initially added that ‘robots shall not be used as a Use of Force against any person’.
However, the SFPD amended Peskin’s addition and replaced it with a line that could give robots the authority to kill suspects if the life of public or police was at risk.
According to Mission Local, Peskin eventually decided to accept the change because ‘there could be scenarios where deployment of lethal force was the only option’.
The equipment policy states that the SFPD currently has 17 remotely piloted robots, of which only 12 are functioning.
In addition to granting robots the ability to use deadly force, the proposal also authorizes them for use in ‘training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments’.
While most of the robots listed in the SFPD’s inventory are primarily used for defusing bombs or dealing with hazardous materials, newer models have an optional weapons system.
The department’s QinetiQ Talon can also be modified to hold various weapons — a weaponized version of the robot is currently used by the US Army and can equip grenade launchers, machine guns, or even a .50-caliber anti-materiel rifle.
This is more on my war to out think AI, or at least not have it run my life in the background. Besides, robots always kill their humans. Also, Google is involved so I’m sure there is no-goodery going on.
Here goes….
You probably haven’t noticed, but there’s a good chance that some of what you’ve read on the internet was written by robots. And it’s likely to be a lot more soon.
Artificial-intelligence software programs that generate text are becoming sophisticated enough that their output often can’t be distinguished from what people write. And a growing number of companies are seeking to make use of this technology to automate the creation of information we might rely on, according to those who build the tools, academics who study the software, and investors backing companies that are expanding the types of content that can be auto-generated.
“It is probably impossible that the majority of people who use the web on a day-to-day basis haven’t at some point run into AI-generated content,” says Adam Chronister, who runs a small search-engine optimization firm in Spokane, Wash. Everyone in the professional search-engine optimization groups of which he’s a part uses this technology to some extent, he adds. Mr. Chronister’s customers include dozens of small and medium businesses, and for many of them he uses AI software custom-built to quickly generate articles that rank high in Google’s search results—a practice called content marketing—and so draw potential customers to these websites.
“Most of our customers don’t want it being out there that AI is writing their content,” says Alex Cardinell, chief executive of Glimpse.ai, which created Article Forge, one of the services Mr. Chronister uses. “Before applying for a small business loan, it’s important to research which type of loan you’re eligible to receive,” begins a 1,500-word article the company’s AI wrote when asked to pen one about small business loans. The company has many competitors, including SEO.ai, TextCortex AI and Neuroflash.
Google knows that the use of AI to generate content surfaced in search results is happening, and is fine with it, as long as the content produced by an AI is helpful to the humans who read it, says a company spokeswoman. Grammar checkers and smart suggestions—technologies Google itself offers in its tools—are of a piece with AI content generation, she adds.
The rise of AI-generated content is made possible by a phenomenon known variously as computational creativity, artificial creativity or generative AI. This field, which had only a handful of companies in it two or three years ago, has exploded to more than 180 startups at present, according to data gathered by entrepreneur Anne-Laure Le Cunff. These companies have collected hundreds of millions of dollars in investment in recent months even as the broader landscape for tech funding has become moribund.
A lot of the content we are currently encountering on the internet is auto-generated, says Peter van der Putten, an assistant professor at Leiden Institute of Advanced Computer Science at Leiden University in the Netherlands. And yet we are only at the beginning of the deployment of automatic content-generation systems. “The world will be quite different two to three years from now because people will be using these systems quite a lot,” he adds.
By 2025 or 2030, 90% of the content on the internet will be auto-generated, says Nina Schick, author of a 2020 book about generative AI and its pitfalls. It’s not that nine out of every 10 things we see will be auto-generated, but that automatic generation will hugely increase the volume of content available, she adds. Some of this could come in the form of personalization, such as marketing messages containing synthetic video or actors tuned to our individual tastes. In addition, a lot of it could just be auto-generated content shared on social media, like text or video clips people create with no more effort than what’s required to enter a text prompt into a content-generation service.
This was about how I started out on Covid and the Jab. I don’t think I’m even a conspiracy theorist when I’ve been right this many times. I don’t know that AI is the next tin-foil-hat thing, but I do know that there are people who are going to use it against us.
I expect a lot of clicks from China on this, they always do when I write about either them or technology.
I found this out playing Duolingo, where you learn a new language. They use marketing tools to keep you interested and trying for more points, such as doubling your score if you finish a lesson before 12.
The AI program does one of two things. First, it gets you involved and competing at a higher level, trying harder and spending more time on the app. For learning, that is a good thing. For an app company selling advertising, the more time on their site, the more ads they sell.
The second thing it does is discard those who give up on killing themselves when they are put into groups with more aggressive players. The lower-performing scorers are segregated into a less competitive group. This group isn’t worth trying to squeeze more money or time out of because they are casual app users.
One of the marketing techniques is a tournament where only a few advance: the aggressive players whom the AI has developed. I’ve ignored it twice because it becomes a three-week time suck. For me, spending time on only one thing burns me out and I lose interest. I only want to play on my terms, something they didn’t calculate.
THE KOBAYASHI MARU
I love to win, and I do a lot, like last week. I wanted to beat not only the other players in my division but also the AI behind the game. This is the flaw in much of AI: it has to assume human behavior, but it only goes on perceived behavior. Humans can be random thinkers, or thinkers outside of the AI logic.
Winner!
Any reader of my blog knows I look for patterns to make my life easier and better. Sometimes it is just for the satisfaction of figuring it out. It was like learning the jab was poison and avoiding it while the sheep lined up to get their daily ration.
It’s almost like living in the Matrix and avoiding the Mr. Smiths of the world.
I was at IBM during Watson and knew the tech companies were seeing this as a potential holy grail. I couldn’t out develop the great minds that write AI, but beating them at their game was equally gratifying. I observed what they were doing and always considered the weaknesses.
Ken and Brad didn’t have a chance
Why did I want to do this? I know the Snidely Whiplashes of the world want to take over and control others, like Big Tech and the WEF. Knowing that they are beatable at their own game is valuable. It is like taking the red pill and being in the matrix.
MY STRATEGY
I found that in between the two groups above is where the programmers’ weakness lies: those who don’t seem to try, or who try outside of the AI rules. Its AI learns at a machine rate, but not at a human rate.
It’s like when Watson learned to hunt for Double Jeopardy clues and was faster to the buzzer than humans, but AI can be out-thought or out-maneuvered.
I decided to hold back my scoring for a few weeks to fool the AI into putting me into a lower-scoring crowd. I’ve scored many thousands of points and finished in the top group without winning, only to be promoted to a higher-scoring group. I wanted to see whether, if I tried to score less, the AI would “learn” that I’m a low scorer.
As I suspected, the groups I got put in were less and less aggressive. The point totals to keep advancing were less and less.
I knew I was gaining ground on the AI’s weakness and that it could be manipulated.
Last week, I kept to my minimal effort while learning (both foreign languages and the AI engine behind the app). I noticed that I had been put in lower-performing groups. I did the daily amount I’d allowed myself and was slowly advancing up the ladder. I was using the AI engine to put me where I wanted to be, not where its calculations did.
By Friday, I was in the lead with far fewer points than I’ve scored many times in a single day (it starts on Sunday). I had to rely on human behavior, betting that my group weren’t aggressive players and that the AI would keep us grouped together. I was right. I predicted the AI pattern and beat it.
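The sandbagging trick works against any grouping system that reads recent scores as skill. Here is a minimal sketch of that kind of matchmaker; the tier cutoffs and the `assign_tier` function are invented for illustration, since Duolingo’s actual algorithm is not public.

```python
# Toy score-based matchmaker: tier players by their recent average score.
# Cutoffs and tiers are hypothetical, purely to show the mechanism.

def assign_tier(recent_scores, cutoffs=(100, 300, 600)):
    """Place a player into a tier based on average recent score."""
    avg = sum(recent_scores) / len(recent_scores)
    for tier, cutoff in enumerate(cutoffs):
        if avg < cutoff:
            return tier          # 0 = casual, higher = more competitive
    return len(cutoffs)

# An aggressive player gets grouped with other aggressive players...
print(assign_tier([800, 750, 900]))  # 3

# ...but the same player, deliberately holding back, is read as a low
# scorer and dropped into an easy group to win.
print(assign_tier([90, 60, 80]))     # 0
```

Because the system only sees scores, not intent, a player who controls their own scoring controls their placement, which is exactly the weakness exploited above.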
Beating the AI was far more of a challenge than beating the other players.
CONTINUING TO BEAT MACHINES
I knew I’d won by Saturday and did the minimal on Sunday to score. Why? To keep fooling the AI into believing I am a low scoring player. My goal was to win at the lowest level to keep getting promoted into groups that I could predict, and to keep proving that AI is beatable and malleable.
I’ll find out this week if that is fully true, as I’m now in a tournament for winners (another marketing ploy), and I will try to finish as low as possible while still advancing. I normally like to crush the competition, but winning by thinking is far more satisfying than winning by brute force. It’s as if you are running the game and the other players.
THE TURING TEST
Of course this is famous, can a machine behave like a human (roughly translated by me). Of course Duo Lingo isn’t as complex as war or landing on Mars, but there are hundreds of millions of humans. That is what they want AI to control, humans (like their free speech on social media).
I wanted to beat a machine, beat AI, and find the holes that are in AI. It is still programmed by humans and can always be beaten. They are not sentient. Find the pattern.
OTHER AI CHALLENGES
I play Wordle like a lot of others. I’m not busy trying to win right now; I’m trying to trick their AI into a pattern so I can out-think their word selection.
In 2001: A Space Odyssey, Terminator, Aida of Agents of S.H.I.E.L.D. and all of the rest of them, the robots always want to take over in the end and kill the humans.
See below the meme on stupidity so I can get to the point.
Yes, this woman is an idiot. My robot vacuum is so stupid it doesn’t know where it’s going. I named it Jarvis after Iron Man’s AI assistant (voiced by Paul Bettany). It is my dearest form of sarcasm. So the vacuum isn’t going to kill me, but that isn’t my point.
The AI in the wrong hands is dangerous though.
For example, what if an AI bot creates vaccines for diseases by predicting what the next strain will be? What if the next strain is the one that causes humans to shut down all the robots? There you have the premise for how it goes with AI taking over. Kill the humans.
I could bore you with many other examples, like using AI to enhance a soldier’s armory. It would be controlling your actions, making you more invincible in war. If it sensed a danger that didn’t exist, it could fire up the code to kill everyone in the way, and you would have created a murderer out of an innocent man. Kill the humans.
Fortunately, I’ve been around AI development. That danger isn’t exactly around the corner yet.
I worked at IBM and knew that Watson was a gimmick. The Chairman told me it was. They are trying to sell it now because its usefulness in medicine paled in comparison to its winning Jeopardy. To sum it up, it was a lot of wasted money, spent because they could.
Some of the team have moved to Quantum Computing because Watson was a dud.
Microsoft, Google and Facebook are much different and apparently more evil. IBM is too bureaucratic to turn it into a killer robot. However, if you’ve read any of my social media rants, you know that I trust these three companies less than almost anything except Congress and the media. I will say they are equally evil, though. (Another shot for the censors to see if they are watching here.) They are the ones that will kill the humans.
Now, imagine if it got into the wrong hands. What could some guys who want to either take over or blow up the world do with that kind of power? Those bastards are evil. At least the robots just went bad.
And there you have it. Like many things we can create, there is always someone hanging around to put it to bad use.