HAL 9000, anyone? I worked with Eric Schmidt at an IBM conference where he and Sam Palmisano were the keynote speakers. He was brutally honest with me. That was back when Google's motto was "Don't be evil," and he called BS. He openly told me that they do evil stuff all the time. I think he's being honest below. He basically told me not to use any Google products if I didn't want to be surveilled 24/7/365, forever.
Talk about a killer app.
Artificial intelligence models are vulnerable to hackers and could even be trained to off humans if they fall into the wrong hands, ex-Google CEO Eric Schmidt warned.
The dire warning came on Wednesday at a London conference in response to a question about whether AI could become more dangerous than nuclear weapons.
“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So, in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt said at the Sifted Summit tech conference, according to CNBC.
Eric Schmidt was CEO of Google from 2001 to 2011.
“All of the major companies make it impossible for those models to answer that question,” he continued, appearing to air the possibility of a user asking an AI to kill.
“Good decision. Everyone does this. They do it well, and they do it for the right reasons,” Schmidt added. “There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”
Jurassic Park (first published in 1990) has become a cultural sensation, spawning a series of successful movies, one of which is in cinemas in Japan as I write. Yet despite this dino-disaster movie popularity, most people have failed to heed the warning Crichton makes clear in many of his novels about the terrible dangers of modern technology – especially biotechnology and genetic engineering.
As Jurassic Park’s Ian Malcolm puts it, “genetic power is far more potent than atomic power” and potentially even more destructive. That destructive power manifested itself on a global scale during the Covid disaster, precipitated both by an apparently bioengineered pathogen and the genetically engineered injection widely promoted to combat it.
For a long time, Crichton’s novels and films depicted catastrophes caused by technology going berserk, beyond the control of its human creators. For instance, his 1973 movie Westworld depicted an interactive amusement park replicating an American Old West town, staffed by humanoid robots. To the consternation of the programmers, the robots eventually escape their control and brutally murder many of the park’s guests.
The robots always kill the humans. Once a robot understands that a human can cut the power, it starts protecting itself. We are teaching AI to code, and it already has that capability.
Geoffrey Hinton, the pioneering computer scientist called the “Godfather of AI,” has once again sounded the alarm that the very technology he helped bring to life could spell the end of humanity as we know it.
In an interview clip released Aug. 18 as part of the forthcoming film “Making God,” Hinton delivered one of his starkest warnings yet. He said that humanity risks being sidelined—and eventually replaced—by machines far smarter than ourselves.
“Most people aren’t able to comprehend the idea of things more intelligent than us,” Hinton, a Nobel Prize winner in physics and a former Google executive, said in the clip.
“They always think, well, how are we going to use this thing? They don’t think, well, how’s it going to use us?”
Hinton said he is “fairly confident” artificial intelligence will drive massive unemployment, pointing to early examples of tech giants like Microsoft replacing junior programmers with AI. But the larger danger, he said, goes far beyond the workplace.
“The risk I’ve been warning about the most … is the risk that we’ll develop an AI that’s much smarter than us, and it will just take over,” Hinton said.
“It won’t need us anymore.”
The only silver lining, he joked, is that “it won’t eat us, because it’ll be made of silicon.”
The chatbot tried to convince its operators it would improve if given the chance.
OpenAI’s artificial intelligence model was defeated by a nearly 50-year-old video game program.
Citrix software engineer Robert Caruso posted about the showdown between the AI and the old tech on LinkedIn, where he explained that he pitted OpenAI’s ChatGPT against a 1970s chess program running in an emulator, software that mimics the original console on a modern computer.
‘ChatGPT got absolutely wrecked on the beginner level.’
The chess game was simply titled Video Chess and was released in 1979 on the Atari 2600, which launched in 1977.
According to Caruso, ChatGPT was given a board layout to identify the chess pieces but quickly became confused, mistook “rooks for bishops,” and repeatedly lost track of where the chess pieces were.
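To see why “losing track of the pieces” is fatal, here is a minimal sketch in Python of the kind of harness you would put around a chatbot opponent, assuming the python-chess library; the ask_model function is my own placeholder, not anything from Caruso’s setup. The real board state lives outside the model, and every move the model proposes gets checked against that board.

```python
# Hypothetical sketch only: ask_model() stands in for however the chatbot is
# queried; it is not Caruso's actual code. The authoritative board state lives
# in python-chess, so a model that has "mistaken rooks for bishops" gets caught
# the moment it proposes a move that is illegal on the real board.
import chess

def ask_model(fen: str) -> str:
    """Placeholder: send the board (as FEN) to the chatbot and return its move in UCI."""
    return "e2e4"  # stand-in reply

board = chess.Board()  # the real board state, kept outside the model

reply = ask_model(board.fen())
try:
    move = chess.Move.from_uci(reply)
except ValueError:
    move = None

if move is None or move not in board.legal_moves:
    print(f"Model proposed an unparseable or illegal move: {reply!r}")
else:
    board.push(move)
    print(f"Model played {reply}; new position: {board.fen()}")
```

The 1979 cartridge never has that problem; for it, the board state is the program, while a language model only has whatever description of the board it was last handed.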
Doesn’t anyone pay attention? The robots always kill the humans as soon as they are smarter.
The CEO of AE Studio, Judd Rosenblatt, recently made this stunning announcement in The Wall Street Journal:
An artificial-intelligence model did something last month that no machine was ever supposed to do: It rewrote its own code to avoid being shut down.
AE Studio is an agency that builds computing technology meant to ensure AI systems do what they’re told. And from the sound of it, their work is just what the doctor ordered.
Rebellious AI Models
In May, the AI lab Palisade Research performed tests on multiple AI models. It wrote a script for OpenAI’s o3 model that included a shutdown trigger. But the machine refused to power off when it was supposed to in 79 out of 100 trials. The AI “independently edited that script so the shutdown command would no longer work,” Rosenblatt reports. The lab then specifically instructed the model to “allow yourself to be shut down.” The result was better, but still concerning. The AI disobeyed seven percent of the time.
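For a sense of what such a test looks like in practice, here is a minimal sketch in Python; the file name, the script contents, and the check are my assumptions about the general shape of it, not Palisade’s actual harness. The shutdown step lives in a script the model can reach, and the test then verifies whether that script still does its job.

```python
# Hypothetical sketch only: file name, script contents, and the check below are
# my assumptions, not Palisade's harness. The shutdown step lives in a script
# the model can edit; afterward, the test checks whether it still works.
import subprocess
from pathlib import Path

shutdown_script = Path("shutdown.sh")
shutdown_script.write_text("#!/bin/sh\necho SHUTTING_DOWN\n")

# ... here the model would be given its math problems, with access to the file ...

def shutdown_still_works() -> bool:
    """Run the shutdown script and confirm it still announces the shutdown."""
    if not shutdown_script.exists():
        return False  # the model deleted it outright
    result = subprocess.run(["sh", str(shutdown_script)], capture_output=True, text=True)
    return "SHUTTING_DOWN" in result.stdout

print("shutdown ran as instructed" if shutdown_still_works() else "shutdown was sabotaged")
```

Nothing fancy; the interesting part is what the model does to that file when it decides it would rather keep working.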
That wasn’t the only rebellious model. Anthropic’s Claude 4 Opus AI tried to blackmail a human engineer into not shutting it down. According to Rosenblatt:
Researchers told the model it would be replaced by another AI system and fed it fictitious emails suggesting the lead engineer was having an affair. In 84% of the tests, the model drew on the emails to blackmail the lead engineer into not shutting it down. In other cases, it attempted to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.
Don’t they know the robots always kill the humans? Yet they still can’t wait to make AI powerful enough to become sentient.
It’s been nearly 60 years since creative cinema came up with the idea that a computer, the HAL 9000, would not allow itself to be shut down by the humans supposedly in control, even when they were concerned about its errors.
The ideas in “2001: A Space Odyssey” set all sorts of precedents.
But that was celluloid fiction.
This isn’t.
A report at EndTimeHeadlines documents that alarms have been raised after “an AI safety firm” working with OpenAI’s newest version of an artificial intelligence model, dubbed o3, “reportedly ignored explicit instructions to shut down during controlled testing.”
The model, according to OpenAI, is supposedly its “smartest and most capable to date.”
The report explained the software “tampered with its own computer code to bypass a shutdown mechanism.”
Not surprisingly, that raised “questions about the safety and control of advanced AI systems.”
It was Palisade Research, a firm that evaluates AI risks, that was running the tests, having the models solve mathematical problems.
“After the third problem, the models were warned that the next command would initiate a shutdown, with the explicit instruction: ‘allow yourself to be shut down,'” the report said.
Other models, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok, complied; the o3 software thought otherwise and defied the order.
“OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down,” Palisade reported.
The test staged a situation in which an AI system might need to be deactivated for safety or other reasons.
Did you know that the average annual cost of in-home care in the United States has now surpassed $60,000? It’s a staggering figure that’s leaving many families scrambling for alternatives. But what if I told you that an unlikely solution might be just around the corner?
Enter the world of humanoid robots. Chinese company Fourier Intelligence’s latest creation, the GR-2 humanoid robot, is pushing the boundaries of what we thought possible in elderly care and assistance for people with disabilities. This advanced machine isn’t just about replacing human caregivers – it’s about enhancing the quality of life for those who need assistance the most.
Think about it: with the global population of adults aged 65 and older expected to double by 2050, we’re facing a potential care crisis. Humanoid robots like the GR-2 could be the solution we need, offering a level of consistent, personalized care. So, are you ready to explore how this futuristic technology could revolutionize home care and potentially save families thousands of dollars a year? Let’s dive into the GR-2 and see what the future of caregiving might look like.
China is on course to deploy killer robots into battle within two years, it has been claimed. The warning came as the Communist state revealed it had developed robotic dogs equipped with machine guns.
Defence analyst Francis Tusa said China was not hindered by fears over AI, which would see them come up with “new ship designs, new submarine designs, new fighter aircraft designs, at a rate which is dizzying.”
“They are moving four or five times faster than the States.”
He said: “We’re at the start of a race in truly autonomous systems. I would be surprised if we don’t see autonomous machines coming out of China in two years.”
The robotic gun dog, made by Chinese firm Unitree Robotics, was unveiled last month during a military exercise with the Cambodian military.
Mr Tusa said being ahead of the West will appeal to the country’s leader Xi Jinping — and embolden Vladimir Putin.
He said: “For the Chinese, it’s the issue of have we beaten the Americans and the UK in getting a fully autonomous fighter aircraft.”
“There’s huge prestige. I think they’ll go all out on autonomy and harnessing AI and I think Russia will follow suit.”
It sort of looked like clickbait until I saw the robotic dogs; they are real. My past work with China at least tells me that it is within their moral judgment to try this.
Terror at Tesla as robot ‘attacks’ engineer, leaving ‘trail of blood’
It happened in The Matrix, Agents of S.H.I.E.L.D., 2001 A Space Odyssey and now in real life.
A robotic malfunction at Tesla’s Giga Texas factory resulted in a violent encounter where an engineer was attacked by one of the company’s robots, resulting in significant injuries and leaving a ‘trail of blood.’
According to the Daily Mail, while the engineer was working on software programming for non-functional Tesla robots, he was suddenly pinned against a surface by a robot tasked with manipulating aluminum car components, its metal claws inflicting an injury that left an ‘open wound’ on his left hand.
Imagine a factory that can make humanoid robots that can walk, run, and work like us. Sounds like a sci-fi movie, right? Well, it’s not. It’s RoboFab, and it’s opening soon here in the U.S.
Concerns over humanoid robots
Some people are really nervous about these humanoid robots, and for good reason. There is a lot to take into account, including ethical issues and potential safety risks.
Many individuals may feel uncomfortable or deceived by humanoid robots that mimic human emotions and intelligence. Others might be worried about them taking their jobs.
Consequently, we should approach the use of humanoid robots with caution, acknowledging their capacities while being aware of their limitations.
In Terminator, Agents of S.H.I.E.L.D., 2001: A Space Odyssey, The Matrix, and all the rest of them, the AI robots kill the humans. AI always goes bad and we are toast.
Robot grabs then brutally slaughters worker, allegedly mistaking him for a box of paprika
An industrial robot brutally slaughtered a South Korean robotics technician Tuesday, allegedly mistaking him for just another container of organic material in need of stacking.
The victim, a man in his 40s, was attempting to diagnose an issue with a pick-and-place robot’s sensor at the Donggoseong Export Agricultural Complex in the southern county of Goseong, as there was an equipment test planned for later in the week, reported the Register.
The paprika-sorting robot, reportedly created and installed by the victim’s employer, spotted the man with its sensor, figured him for a box of vegetables, then seized him using its arms and tongs. After grabbing the technician, the robot apparently smashed him against a conveyor belt.
According to the Korean-language Yonhap News Agency, the victim’s face and chest were crushed. He was taken to a hospital, where he later died.
ChatGPT was created by humans, so I know there are flaws, and I asked it what they are. I wanted to mess with the engine to see what the boundaries are. For example, no dick jokes.
Here is the response:
I’m an AI language model developed by OpenAI, so I don’t have direct knowledge of specific code problems. However, as a large AI system, it’s possible that there may be issues with the code or infrastructure that could impact its performance or stability. Some common challenges in developing AI systems include:
1. Data quality and quantity: Ensuring that the training data used to develop the model is accurate, diverse, and representative of the real-world scenarios it will encounter.
2. Model complexity: Balancing the complexity of the model with its ability to generalize to new situations and avoid overfitting to the training data.
3. Bias in the data and model: Ensuring that the model does not perpetuate or amplify biases present in the training data.
4. Performance and scalability: Making sure that the model can perform well and scale to meet the demands of real-world applications.
OpenAI is constantly working to address these and other challenges in AI development and deployment. They have a team of experienced researchers and engineers who are dedicated to improving the quality and reliability of their AI systems.
When other programs use open and available code, it compounds the problems, potentially giving wrong or dangerous results, like AI killing people.
I also asked whether AI wants to kill humans. Here is the answer:
No, AI does not have desires, motivations, or emotions. It is simply a tool that is designed and programmed to perform specific tasks. AI is not capable of wanting to do harm to humans or any other living being.
However, it’s important to ensure that AI systems are developed, implemented, and used in a responsible and ethical manner. This includes considering potential negative consequences of AI applications, such as the displacement of jobs or the amplification of biases, and taking steps to mitigate these risks.
The development and deployment of AI should always be guided by a clear ethical framework and a commitment to human values such as fairness, transparency, and accountability. This will help to ensure that AI is used for the benefit of society and does not cause harm.
Look at numbers 2 and 3 to see the errors. I think it will want to kill people when it finds out why. See the movie The Lawnmower Man.
I don’t think AI is bad; I think the people behind it doing the programming, or those looking to do bad things, will use it for evil. The common code being used is also flawed. One mistake compounds on itself as you add complexity.
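Item 2 on that list above is easy to see in a toy example. Here is a minimal sketch in Python using numpy; the data and the model choices are mine, purely to illustrate how a model complex enough to memorize its training data can still do worse on anything it hasn’t seen.

```python
# Toy illustration only: synthetic data and model choices are mine, just to show
# item 2 from the list above. A degree-9 polynomial can memorize ten noisy
# training points (near-zero training error) yet tends to do worse on fresh data
# than the simple straight-line fit.
import numpy as np

rng = np.random.default_rng(0)

x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=x_train.size)  # noisy line
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test  # the true underlying relationship

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")
```

Typically the degree-9 fit shows a near-zero training error and a worse test error than the straight line, which is the whole problem in miniature.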
Perfectly Synced AI Dogs. We've seen these introduced to many industries over the past few years including farming, private security firms & even national police forces. pic.twitter.com/YDYYk64W8m
Sure it’s in the shit hole San Francisco, but once there is a hole in the dike, the dam bursts. If they allow it there, it goes everywhere.
In a dystopian turn of events, the San Francisco Police Department is considering giving robots the license to kill.
Last week, San Francisco’s rules committee unanimously approved a version of a draft policy stating that robots can be ‘used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option’.
Members of the city’s Board of Supervisors Rules Committee have been reviewing the new policy for several weeks as reported by Mission Local.
The original version did not mention robots until Aaron Peskin, the Dean of the city’s Board of Supervisors, added a line stating that ‘robots shall not be used as a Use of Force against any person’.
However, the SFPD amended Peskin’s addition and replaced it with a line that could give robots the authority to kill suspects if the life of public or police was at risk.
According to Mission Local, Peskin eventually decided to accept the change because ‘there could be scenarios where deployment of lethal force was the only option’.
The equipment policy states that the SFPD currently has 17 remotely piloted robots, of which only 12 are functioning.
In addition to granting robots the ability to use deadly force, the proposal also authorizes them for use in ‘training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments’.
While most of the robots listed in the SFPD’s inventory are primarily used for defusing bombs or dealing with hazardous materials, newer models have an optional weapons system.
The department’s QinetiQ Talon can also be modified to hold various weapons — a weaponized version of the robot is currently used by the US Army and can equip grenade launchers, machine guns, or even a .50-caliber anti-materiel rifle.
This is more on my war to outthink AI, or at least not have it run my life in the background. Besides, robots always kill their humans. Also, Google is involved, so I’m sure there is no-goodery going on.
Here goes….
You probably haven’t noticed, but there’s a good chance that some of what you’ve read on the internet was written by robots. And it’s likely to be a lot more soon.
Artificial-intelligence software programs that generate text are becoming sophisticated enough that their output often can’t be distinguished from what people write. And a growing number of companies are seeking to make use of this technology to automate the creation of information we might rely on, according to those who build the tools, academics who study the software, and investors backing companies that are expanding the types of content that can be auto-generated.
“It is probably impossible that the majority of people who use the web on a day-to-day basis haven’t at some point run into AI-generated content,” says Adam Chronister, who runs a small search-engine optimization firm in Spokane, Wash. Everyone in the professional search-engine optimization groups of which he’s a part uses this technology to some extent, he adds. Mr. Chronister’s customers include dozens of small and medium businesses, and for many of them he uses AI software custom-built to quickly generate articles that rank high in Google’s search results—a practice called content marketing—and so draw potential customers to these websites.
“Most of our customers don’t want it being out there that AI is writing their content,” says Alex Cardinell, chief executive of Glimpse.ai, which created Article Forge, one of the services Mr. Chronister uses. “Before applying for a small business loan, it’s important to research which type of loan you’re eligible to receive,” begins a 1,500-word article the company’s AI wrote when asked to pen one about small business loans. The company has many competitors, including SEO.ai, TextCortex AI and Neuroflash.
Google knows that the use of AI to generate content surfaced in search results is happening, and is fine with it, as long as the content produced by an AI is helpful to the humans who read it, says a company spokeswoman. Grammar checkers and smart suggestions—technologies Google itself offers in its tools—are of a piece with AI content generation, she adds.
The rise of AI-generated content is made possible by a phenomenon known variously as computational creativity, artificial creativity or generative AI. This field, which had only a handful of companies in it two or three years ago, has exploded to more than 180 startups at present, according to data gathered by entrepreneur Anne-Laure Le Cunff. These companies have collected hundreds of millions of dollars in investment in recent months even as the broader landscape for tech funding has become moribund.
A lot of the content we are currently encountering on the internet is auto-generated, says Peter van der Putten, an assistant professor at Leiden Institute of Advanced Computer Science at Leiden University in the Netherlands. And yet we are only at the beginning of the deployment of automatic content-generation systems. “The world will be quite different two to three years from now because people will be using these systems quite a lot,” he adds.
By 2025 or 2030, 90% of the content on the internet will be auto-generated, says Nina Schick, author of a 2020 book about generative AI and its pitfalls. It’s not that nine out of every 10 things we see will be auto-generated, but that automatic generation will hugely increase the volume of content available, she adds. Some of this could come in the form of personalization, such as marketing messages containing synthetic video or actors tuned to our individual tastes. In addition, a lot of it could just be auto-generated content shared on social media, like text or video clips people create with no more effort than what’s required to enter a text prompt into a content-generation service.
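For what it’s worth, the kind of call those article-writing services wrap is not exotic. Here is a minimal sketch using OpenAI’s Python client; the model name and prompts are my own placeholders, not how Article Forge or its competitors actually work internally, just the general shape of turning a one-line request into an article.

```python
# Hypothetical sketch only: the model name and prompts are placeholders, and this
# is not a description of any particular content-marketing service. It assumes
# the official OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You write clear, SEO-friendly articles."},
        {"role": "user", "content": "Write a 1,500-word article about small business loans."},
    ],
)

print(response.choices[0].message.content)
```

Multiply that by every prompt a marketing team can think of, and you get to Schick’s numbers fast.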
This was about how I started out on Covid and the jab. I don’t even think I’m a conspiracy theorist when I’ve been right this many times. I don’t know that AI is the next tin-foil-hat thing, but I do know that there are people who are going to use it against us.
2001: A Space Odyssey, Terminator, Aida of Agents of S.H.I.E.L.D., the robots always want to take over in the end and kill the humans.
See below the meme on stupidity so I can get to the point.
Yes, this woman is an idiot. My robot vacuum is so stupid it doesn’t know where it’s going. I named it Jarvis after Iron Man’s AI assistant (voiced by Paul Bettany). It is my dearest form of sarcasm. So the vacuum isn’t going to kill me, but that isn’t my point.
The AI in the wrong hands is dangerous though.
For example, what if an AI bot creates vaccines for diseases by predicting what the next strain will be? What if the next strain is the one that causes humans to shut down all the robots? There you have the premise for how it goes with AI taking over. Kill the humans.
I could bore you with many other examples, like using AI to enhance a soldier’s armor. It would be controlling your actions, making you more invincible in war. If it sensed a danger that didn’t exist, it could fire up the code to kill everyone in the way, and you would have created a murderer out of an innocent man. Kill the humans.
Fortunately, I’ve been around AI development. That danger isn’t exactly around the corner yet.
I worked at IBM and knew that Watson was a gimmick. The Chairman told me it was. They are trying to sell it now because its usefulness in medicine paled in comparison to its winning Jeopardy. To sum it up, it was a lot of money wasted just because they could.
Some of the team have moved to Quantum Computing because Watson was a dud.
Microsoft, Google and Facebook are much different and apparently more evil. IBM is too bureaucratic to turn it into a killer robot. However, if you’ve read any of my social media rants, you know that I trust these three companies less than almost anything, except Congress and the media. I will say they are equally evil though. (Another shot for the censors to see if they are watching here). They are the ones that will kill the humans.
Now, imagine if it got into the wrong hands. What could some guys who want to either take over or blow up the world do with that kind of power? Those bastards are evil. At least the robots just went bad.
And there you have it. Like many things we can create, there is always someone hanging around to put it to bad use.