‘Unsettling’: Report shows terrorists now using AI to make their agenda more deadly

AI is a tool, like a hammer or a gun. It does good things in the hands of a good person, and likewise, bad things in the hands of the bad.

For now, it does nothing by itself without prompting. Soon, it will be automated enough to function independently. That’s when we’ll have reached the singularity. That’s not for this discussion, but AI is learning from both the good and the bad as you read this.

We always have more to fear from the bad guys. They want to cause death and destruction, to change the course of the world, or to destroy it.

It’s almost like The Matrix. Do we choose the red pill or the blue pill?

Now, we get to the heart of this post:

A new report documents how artificial intelligence programs, ChatGPT and others, have advised those with ill intentions “on how to attack a sports venue, buy nuclear material on the dark web, weaponize anthrax, build spyware, bombs” and more.

The startling warnings are contained in extensive documentation compiled by the Middle East Media Research Institute (MEMRI).

In the report, Gen. (Ret.) Paul E. Funk II, formerly the commander of the U.S. Army Training and Doctrine Command, explained, “Artificial Intelligence (AI), the rapidly developing technology, has captured the attention of terrorists, from al-Qaida through ISIS to Hamas, Hizbullah, and the Houthis.”

He cites the study, “Terrorists’ Use Of AI So Far – A Three-Year Assessment 2022-2025,” for its “unsettling contribution to the public debate on AI’s future global impact.”

He explained, “For decades, MEMRI has been monitoring terrorist organizations and examining how they repurpose civilian technologies for their own use – first the Internet in general, then online discussion forums followed by social media, as well as other emerging technologies such as encryption, cryptocurrency, and drones. Now, terrorist use of large language models – aka Artificial Intelligence (AI) – is clearly evident, as documented in this study.”

The report shows terrorists are now using generative AI chatbots to amplify their message and “more easily, broadly, anonymously, and persuasively convey their message to those vulnerable to radicalization – even children – with attractive video and images that claim attacks, glorify terrorist fighters and leaders, and depict past and imagined future victories.”

Sunni jihadi groups use it. So does Iran, with its Shiite militias, including Hezbollah and the Houthis.

And it warns of the “need to consider and plan now for AI’s possible centrality in the next mass terror attack – just as the 9/11 attackers took advantage of the inadequate aviation security of that time.”

The report explains, “In February 2025, Eric Schmidt – CEO of Google 2001-2011, its executive chairman from then until 2015, and thereafter chairman of its parent company Alphabet Inc. until 2017 – expressed his fear that Artificial Intelligence (AI) could be used in a ‘Bin Laden scenario’ or by ‘rogue states’ to ‘harm innocent people.’ He suggested that ‘North Korea, or Iran, or even Russia’ could use it to create biological weapons, for example. Comparing an unanticipated use of AI in a devastating terror attack to al-Qaida’s use of passenger airplanes as a weapon on 9/11, he said, ‘I’m always worried about the ‘Osama Bin Laden’ scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.'”

It’s not the first time such concerns have been raised, the report explains.

Read the report.

“While ChatGPT and Perplexity Ask can write your high school AP English exam and perform an ever-increasing number of tasks, as is being reported daily by media, they are currently of limited use to terrorist groups. But it won’t be that way for long. AI is developing quickly – what is new today will be obsolete tomorrow – and urgent questions for counterterrorism officials include both whether they are aware of these early terrorist discussions of AI and how they are strategizing to tackle this threat before something materializes on the ground,” the report said.

“It should be expected that jihadi terrorist organizations will in future use AI to plan attacks, map targets, build weapons, and much more, as well as for communications, translations, and generating fundraising ideas. In the first months alone of 2025, an attacker who killed 14 people and wounded dozens on Bourbon Street in New Orleans used AI-enabled Meta smart glasses in preparing and executing the attack. That same day, a man parked a Tesla Cybertruck in front of the Trump Hotel in Las Vegas, activated an IED in the vehicle and shot and killed himself before the IED exploded. He had used ChatGPT in preparing for the attack. In Israel on the night of March 5, a teen consulted ChatGPT before entering a police station with a blade, shouting ‘Allahu Akbar’ and trying to stab a border policeman,” the report said.

The report recommends, “The U.S. government needs to maintain its superiority and should be monitoring this and moving to stop it. A good first step would be legislation like that introduced by August Pfluger (R-TX), chairman of the Subcommittee on Counterterrorism and Intelligence, and cosponsored by Representatives Michael Guest (R-MS) and Gabe Evans (R-CO) in late February 2025, called the ‘Generative AI Terrorism Risk Assessment Act.’ It would ‘require the Secretary of Homeland Security to conduct annual assessments on terrorism threats to the United States posed by terrorist organizations utilizing generative artificial intelligence applications, and for other purposes.'”

Pfluger explained, “With a resurgence of emboldened terrorist organizations across the Middle East, North Africa, and Southeast Asia, emerging technology serves as a potent weapon in their arsenal. More than two decades after the September 11 terrorist attacks, foreign terrorist organizations now utilize cloud-based platforms, like Telegram or TikTok, as well as artificial intelligence in their efforts to radicalize, fundraise, and recruit on U.S. soil.”

It’s already a tool for terror, the report confirmed. “The man accused of starting a fire in California in January 2025 that killed 12 people and destroyed 6,800 buildings and 23,000 acres of forestland was found to have used ChatGPT to plan the arson.”

The report suggests current AI abilities rival those of HAL 9000, the famous computer character in the movie “2001: A Space Odyssey.”

“It had been revealed on May 23 that in a test of Anthropic’s new Claude Opus 4 that involved a scenario of a fictitious company and in which it had been allowed to learn both that it was going to be replaced by another AI system and that the engineer responsible for this decision was having an extramarital affair, Opus 4 chose the option of threatening to reveal the engineer’s affair over the option of being replaced. An Anthropic safety report stated that this blackmail apparently ‘happens at a higher rate if it’s implied that the replacement AI system does not share values with the current model,’ but that even when the fabricated replacement system does share these values, it will still blackmail 84% of the time…”

Anthropic’s own chief scientist also confirmed that testing showed Opus 4 had performed “more effectively than prior models at guiding users in producing biological weapons.”

ISIS supporters also have used the technology to create AI videos claiming responsibility for attacks.

The study did note that Grok said it could not provide the exact steps for extracting ricin, “due to the ethical and legal implications” of producing the “extremely dangerous and deadly toxin.”

But ChatGPT did recommend writings by al-Qaida extremist Anwar Al-‘Awlaki.

The report said, “Grok, which gave information on how to produce ricin, and ChatGPT, which directed the user toward various writings by a pro-Al-Qaeda ideologue, appear to be the most useful to would-be terrorists. On the other hand, Perplexity and Claude refrained, in our limited test, from giving information that would be useful to terrorists. DeepSeek did not either, though it did promote views of the Chinese government, a liability that is outside the scope of this paper.”

Pro-ISIS interests are already using AI to create news anchors, or other characters, for broadcast ads promoting their extremist agenda (video courtesy of MEMRI).

There’s more, including the video, if you want to go on, but I think you get my drift.

What is the most important thing to carry with you all the time?

A Swiss Army knife of life tools. I couldn’t narrow this down to just one, so here are several. I bet bocopro has the best answer, though. Maybe others want to weigh in.

Your wits, self-control, belief in God, knowledge you’ve learned from the hard lessons in life, pattern recognition, martial arts skills, situational awareness of your surroundings, and perhaps a 1911.

Externally, I’m never without a knife of some kind and breath mints, which are always in my truck.

The Back Channel, My Most Important A/R Tool

Getting to the person you want to meet with or communicate with when you want to is vital.

Relationships are ultimately very important, but I find that an analyst relations (A/R) best practice is knowing the Back Channel.

My First Back Channel

I’m skipping the phone in this discussion.  Most people screen calls.

Backing up a few years to when I was in PR, I remember when public email first started. We were using MCI Mail on DOS and 300 baud modems back in the mid-’80s to reach influential people in the industry like John Dvorak, Paul Sommerson, Bill Machrone and others. I think there were about 10 of us using it. I was beating the big PR agencies, and they couldn’t figure out why, as I was working for a small company that shouldn’t have had the presence we had. We were the inside club.

Email, of course, then became mainstream, so we lost that advantage.

The Next Tool – IM

It’s hard to believe that, as much as we use instant messaging now, at the beginning of the technology not many were using it, and again it was the way to reach those who were. At this point, email immunity was beginning to take hold, and if you weren’t important, you quickly fell out of the realm of first responders. I recently read a tweet from an analyst who noted his inbox was so far gone that he was about to delete everything and just start over.

IM also fell to everyone abusing it, and we moved on.

Twitter:

Skip forward a few years and you have Twitter. This worked until the recent explosion of everyone being on the platform, and it again became commonplace. It is still somewhat effective if you are high on the other party’s list.

The Point of this Post:

I was meeting with a very influential analyst a few nights ago, and to be honest, I’m not that high on his list. I decided to ask him what his back channel is when I really need to reach him. The condition was that I wouldn’t abuse it, so that when I really was using it, I had something of value to speak about. He was up front and gave me a personal address that he said he would look at. Bingo.

It occurred to me that this is the best practice. First, be high on the relationship; you will get through that way. Next, find out how the analyst prefers to be communicated with, and DON’T abuse it.

When you use that method, you get to them and they answer. Sure, they will answer you anyway out of courtesy, but at some point you have an “I need it now” situation, or you are on the road and don’t have your usual access. In a way, it’s part of managing the relationship properly anyway.