Once Again, Google Lied

Google admits that a Gemini AI demo video was staged

There was no voice interaction, nor was the demo happening in real time.

Google is counting on its very own GPT-4 competitor, Gemini, so much that it staged parts of a recent demo video. In an opinion piece, Bloomberg says Google admits that its video titled “Hands-on with Gemini: Interacting with multimodal AI” was not only edited to speed up the outputs (which was disclosed in the video description), but that the implied voice interaction between the human user and the AI was actually non-existent.

Instead, the actual demo was made by “using still image frames from the footage, and prompting via text,” rather than having Gemini respond to — or even predict — a drawing or change of objects on the table in real time. This is far less impressive than the video misleads us into thinking, and worse yet, the lack of a disclaimer about the actual input method makes Gemini’s readiness rather questionable.

Story here

When I worked in tech, I crossed paths with Google from time to time. Their CEO at the time told me a lot of what they say is not true, like “do no evil.” They do a lot of evil. Stay away from them as much as you can. They are like a Hydra.

Zuckerberg Blew $40 Billion On The Metaverse, Now Betting $36 Billion On Mind Control/Tech Addiction

At the end of The Social Network, the lawyer tells Mark Zuckerberg, “You aren’t really an asshole; you should stop trying so hard to be one.” Well, in real life it looks like he is.

Some people are just evil it seems.

EXPOSED: Meta’s $36 Billion Plot to Re-engineer Society and Mark Zuckerberg’s Dangerous Future Plans for Tech Addiction

Out today, Controligarchs: Exposing the Billionaire Class, Their Secret Deals, and the Globalist Plot to Dominate Your Life, blows the lid off of Meta’s pattern of intentionally using harmful and aggressive tactics to get users addicted to social media apps like Facebook and Instagram and documents a shocking rise in depression and suicide that some scientists have linked to social media use.


This comes just weeks after dozens of state attorneys general (AGs) filed suit against Facebook’s and Instagram’s parent company, Meta Platforms Inc. (Meta), and three of its subsidiaries, for harming children by addicting them to the social media platforms. Forty-two states, including California and New York, allege that billionaire creator Mark Zuckerberg’s company “knowingly designed and deployed harmful features on Instagram and Facebook to purposefully addict children and teens.”

Previously, Facebook whistleblower Frances Haugen claimed that Meta targeted children and teens for monetary reasons and a leaked document showed that the youth demographic was “a valuable but untapped audience.”

Just weeks after Haugen blew the whistle on Facebook’s tactics, Zuckerberg unveiled his plan to release what may prove to be Meta’s most addictive product yet: Facebook Horizon. Zuckerberg’s October 2021 virtual tour of the new product, which was panned as “super weird,” was his coming-out party for what has become known as “the metaverse”—a digital world that users can essentially live in and access via a virtual reality (VR) headset such as Facebook’s Oculus Quest.

Zuckerberg’s metaverse launch was a conveniently timed and thinly veiled rebranding effort to distract from whistleblower documents and allegations that, according to the Associated Press, show that “Facebook ignored or downplayed internal warnings of the negative and often harmful consequences its algorithms wreaked across the world.”

In October 2021, Zuckerberg changed the name of the Facebook Inc. family of companies to Meta Platforms Inc. to signal the direction his social media empire would be heading. And Zuckerberg has pumped more than $36 billion into making his metaverse ambitions a reality.

AI – The Robots Always Kill The Humans

How it starts:

First humanoid robot factory in the U.S. can crank out 10,000 robots a year

What you need to know about this revolutionary new factory

by Kurt Knutsson October 30, 2023

Imagine a factory that can make humanoid robots that can walk, run, and work like us. Sounds like a sci-fi movie, right? Well, it’s not. It’s RoboFab, and it’s opening soon here in the U.S.

Concerns over humanoid robots

Some people are really nervous about these humanoid robots, and for good reason. There is a lot to take into account, including ethical issues and potential safety risks.

Many individuals may feel uncomfortable or deceived by humanoid robots that mimic human emotions and intelligence. Others might be worried about them taking their jobs.

Consequently, we should approach the use of humanoid robots with caution, acknowledging their capacities while being aware of their limitations.

How it ends

In Terminator, Agents of S.H.I.E.L.D., 2001: A Space Odyssey, The Matrix, and all the rest of them, the AI robots kill the humans. AI always goes bad and we are toast.

Robot grabs then brutally slaughters worker, allegedly mistaking him for a box of paprika

Joseph MacKinnon

November 10, 2023

An industrial robot brutally slaughtered a South Korean robotics technician Tuesday, allegedly mistaking him for just another container of organic material in need of stacking.

The victim, a man in his 40s, was attempting to diagnose an issue with a pick-and-place robot’s sensor at the Donggoseong Export Agricultural Complex in the southern county of Goseong, as there was an equipment test planned for later in the week, reported the Register.

The paprika-sorting robot, reportedly created and installed by the victim’s employer, spotted the man with its sensor, figured him for a box of vegetables, then seized him using its arms and tongs. After grabbing the technician, the robot apparently smashed him against a conveyor belt.

According to the Korean-language Yonhap News Agency, the victim’s face and chest were crushed. He was taken to a hospital, where he later died.

AI Is Getting The Wrong Brain, Like Frankenstein

Here is the headline:

Twitter Updates Privacy Policy Notifying Users Their Content Will Be Used to Train and Develop Enhanced AI

I couldn’t think of a worse group of people to learn from. The amount of hate, disinformation and abuse of the platform by the government (despite what Musk has tried to do) will just train a monster.

Excerpt:

The use of Enhanced Artificial Intelligence to control information and communication is a subject that too few people understand.  This is why I have spent time trying to share information so that people can see into the future of their internet reality.  Everything will change.

As you should know by now, the X platform (Twitter) is designed to produce a different user experience based on “definitions” of the user.  The definitions are applied by the platform, to create unique identifying characteristics of the user.  The result is that each user gets a completely different platform experience, based on their definitions.

“Twitter is a different platform for each user.”  Repeat that phrase as often as needed to understand the evolution of what is coming to the American internet.

You might ask, how is applying all of these granular definitions even possible?  The answer is through the use of AI.  Humans will no longer be assigning the definitions of you; an autonomous system will take on the job of assigning the definitions.  Now, keep referencing the word “definitions,” because that is your identity and gateway pass into the platform content.  If you carry a particular definition, you will be blocked, throttled, shadow-banned or experience friction applied to your user id.

Remember when Elon Musk restricted users and claimed it was because the platform content was being “scraped” by organizations who were using the content to train their Enhanced AI systems? Remember Musk saying that, and expressing his concern? Well, now the platform is telling users in a new X Corp privacy policy that X Corp itself is going to do exactly the thing Musk said he abhorred.

GRTWT

There is more about VPNs and bypassing gatekeepers that is pertinent, but I’ll leave you with this nugget from the end of the article. If you think ESG is bad for you, look at how Google (evil) is helping pervert AI, the internet, and the truth.

Google Spider Crawl Result, CTH Subversive Content, July 2023

As we have shared, the crawl is not headlines, the crawl is in content.  Yes, even content in the comment section is now flagging to the control systems.  Why? Because we operate a proprietary constructed private commenting system that doesn’t have a backdoor and protects you, the user.

The Google Spiders are newly enhanced with AI instructions, dispatched looking for content and ‘context’ that is against the interests of Vanguard, BlackRock, Larry Page (Alphabet/GOOG owners), and the public-private partnership.

Look at the one I have highlighted above titled “Have you ever noticed this.”  Do you remember it?  [Reminder Here] This content is considered “dangerous or derogatory”. Think about that for a moment.  Discussing the humor of Donald Trump, and the fellowship it creates, is considered “dangerous” to the interests of Google.

AI Replacing Jobs At IBM – The Back Story

I have a unique position with regard to IBM. I’ve been a customer, authorized dealer, competitor, business partner, and employee over three decades.

Recently IBM announced this:

IBM recently became the first major company to acknowledge that it intends to substitute its workforce with artificial intelligence. CEO Arvind Krishna confirmed to Bloomberg that IBM will pause hiring plans, adding that the company intends to replace roughly 7,800 positions with AI systems over the next five years. In a separate Fox Business interview, Krishna explained that AI will target numerous white-collar jobs. Krishna noted that back-office functions, particularly in the human resources department, will be the first to be absorbed by AI. But accounting, finance, and other non-customer-facing areas might find algorithms and robots filling positions, too.

I’ve got news for investors. They could replace a hell of a lot more and be better off. They could do a Twitter, cut 70%, and still be better off. During my business-partner phase, I marveled at how incompetent they could be and still keep the doors open. They had 14 people for the job that I did at the time as a business partner. As it turned out, that was during the introduction of the PC era, so they were printing money and the excess of waste was overlooked.

Until John Akers, IBM was a job-for-life company with an army of blue soldiers. Unfortunately, you could also be one of the most useless employees and still have a great job with a high salary. It came crumbling down, and Lou Gerstner had to save it or break it up. There was an internal joke called FUMU: fuck up, move up. They’d just rearrange the deck chairs if someone made a mistake and brush it under the rug.

One of the transfers from IBM to the company I was working for told me they had people they called the thinkers. They sat in their offices, read the WSJ, and drank coffee all day. They didn’t go to meetings or contribute, so he sarcastically said he thought they were paid to think. (The meme was just in time from wirecutter)

The other inside joke was the name of the program for the first layoffs. It was FAP, the financial assistance program. It is also an urban word for masturbating, what a lot of IBM employees did on the clock.

When those same employees had to go get jobs after the first cut, they told me it’s hard to get a job in the real world. Most couldn’t.

One of my IBM bosses, maybe one of the best, Jim Neumann, explained it to me. He said you went to elementary, middle, and high school, then college. When you graduated, you went to the next school at IBM, but you never got out of school.

Who they had working there when I left made me wonder how they stayed in business.

One of the biases there was that if you were from NY, you were a better employee. The headquarters are there, so they would farm the ones they wanted out to jobs around the world and then bring them to NY to run the place. There were offices everywhere, but you were considered a second-class worker unless you were in NY. I turned down two job offers to move there because it was so awful. The people were backstabbers, and the area is one of the worst in the company (or country, not counting California). How they thought they were better was a clear signal of something wrong. A lot of them told me they were stuck there with golden handcuffs, but wished they could get out.

One asshole VP from Columbia used to talk down to those of us at RTP. He did so in a way that would get any employee terminated in the real world. I saw people refuse to help him just to avoid interacting with him.

It only got worse under Ginni Rometty. It turned into a mean-girls club, with “Ginni said so” as the lead-in to whatever you wanted done. She managed to halve the stock price during her tenure.

No one ever printed it fully, but they were full on in the diversity world and hired a lot of questionable people. The jobs and products reflected it. Some of the worst bosses for females were women. The homosexuals got away with yelling and treating people badly with immunity and the black head of diversity made racist remarks constantly. They have been woke for a long time while hiding it.

So artificial intelligence is going to replace those jobs. They really only need the artificial part, as a lot of those jobs never had fully intelligent people in them anyway.

My blogger friend Denny, The Grouchy Old Cripple, and I used to reminisce with virtually identical stories, though we worked at different times and in different jobs. I’m not alone on this.

More Artificial Intelligence Conversations With ChatGPT To Find Flaws

I asked why AI is biased. I am trying to find the flaws and the holes that can be exploited. I found that it can be a good tool, depending on who is behind it. As long as it is humans, there will be nefarious consequences.

The biggest flaw is humans.

Here is the answer.

Artificial intelligence (AI) can be biased for several reasons:

  1. Training data bias: AI models are trained on large datasets, and if these datasets are biased in some way, the model will learn and perpetuate those biases. For example, if a facial recognition system is trained on a dataset that is predominantly composed of lighter-skinned individuals, it may have trouble recognizing individuals with darker skin tones.
  2. Algorithm bias: The algorithms used to build AI models can also contain inherent biases. For example, certain algorithms may be more likely to perpetuate existing biases or reinforce existing power structures.
  3. Human biases: Human biases can also be introduced into AI systems through the choices made by engineers and data scientists when building and deploying AI models. For example, the choice of which features to include in a model or how to label data can reflect and reinforce biases.

It is important to address AI biases to ensure that AI systems are fair and do not discriminate against certain groups. This can be done through careful consideration of the training data and algorithms used, as well as ongoing monitoring and testing to detect and correct biases.
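Point 1 in that list, training-data bias, is easy to see in a toy sketch. Everything below is invented for illustration (the groups, features, and numbers are made up): a simple threshold classifier fit on data dominated by one group learns a cutoff that works for the majority group and misfires on the underrepresented one.

```python
# Toy illustration of training-data bias: a 1-D threshold classifier
# learns its cutoff from whatever data it sees. Group A dominates the
# training set, so the learned cutoff suits A and fails B's positives.
# All samples are (feature_value, label) pairs, entirely synthetic.

def fit_threshold(samples):
    """Learn a cutoff as the midpoint between the mean positive and
    mean negative feature values in the training data."""
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, samples):
    correct = sum((x >= threshold) == (label == 1) for x, label in samples)
    return correct / len(samples)

# Group A dominates training: negatives cluster near 2, positives near 8.
group_a = [(2.0, 0), (2.5, 0), (7.5, 1), (8.0, 1)] * 10
# Group B is scarce and shifted: its positive sits near 4, not 8.
group_b = [(1.0, 0), (4.0, 1)]

threshold = fit_threshold(group_a + group_b)  # pulled toward group A

print(f"learned threshold: {threshold:.2f}")
print(f"group A accuracy: {accuracy(threshold, group_a):.2f}")  # 1.00
print(f"group B accuracy: {accuracy(threshold, group_b):.2f}")  # 0.50
```

The model is "correct" on the data it mostly saw, and half-wrong on the group it barely saw, which is exactly the facial-recognition failure mode the answer describes.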

My ChatGPT Discussions Regarding OpenAI Code Issues And Killing Humans

It was created by humans, so I know there are flaws, and I asked it what they are. I wanted to mess with the engine to see what the boundaries are. For example, no dick jokes.

Here is the response

I’m an AI language model developed by OpenAI, so I don’t have direct knowledge of specific code problems. However, as a large AI system, it’s possible that there may be issues with the code or infrastructure that could impact its performance or stability. Some common challenges in developing AI systems include:

  1. Data quality and quantity: Ensuring that the training data used to develop the model is accurate, diverse, and representative of the real-world scenarios it will encounter.
  2. Model complexity: Balancing the complexity of the model with its ability to generalize to new situations and avoid overfitting to the training data.
  3. Bias in the data and model: Ensuring that the model does not perpetuate or amplify biases present in the training data.
  4. Performance and scalability: Making sure that the model can perform well and scale to meet the demands of real-world applications.

OpenAI is constantly working to address these and other challenges in AI development and deployment. They have a team of experienced researchers and engineers who are dedicated to improving the quality and reliability of their AI systems.
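Point 2 in that list, overfitting, can be shown with a deliberately silly sketch. The task and data below are made up: a "model" that just memorizes its training data scores perfectly on what it has seen and poorly on anything new, while a simpler rule that captures the actual pattern generalizes.

```python
# Toy illustration of overfitting: memorization vs. generalization.
# Task: label a number 1 if it is even, 0 if odd. Data is made up.
train = {2: 1, 3: 0, 4: 1, 7: 0, 10: 1}
test = {5: 0, 6: 1, 9: 0, 12: 1}

def memorizer(x):
    # Maximally complex "model": a lookup table of the training set.
    # It has to guess (here: always 0) on inputs it never saw.
    return train.get(x, 0)

def simple(x):
    # Simple model: one rule that captures the underlying pattern.
    return 1 if x % 2 == 0 else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data.items()) / len(data)

print(accuracy(memorizer, train), accuracy(memorizer, test))  # 1.0 0.5
print(accuracy(simple, train), accuracy(simple, test))        # 1.0 1.0
```

The memorizer "overfits": perfect on training data, no better than guessing on half the unseen cases. That tradeoff between complexity and generalization is what the answer is pointing at.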

When other programs use open and available code, it compounds the problems, potentially giving wrong or dangerous results, like AI killing people.

I also asked: does AI want to kill humans? Here is the answer.

No, AI does not have desires, motivations, or emotions. It is simply a tool that is designed and programmed to perform specific tasks. AI is not capable of wanting to do harm to humans or any other living being.

However, it’s important to ensure that AI systems are developed, implemented, and used in a responsible and ethical manner. This includes considering potential negative consequences of AI applications, such as the displacement of jobs or the amplification of biases, and taking steps to mitigate these risks.

The development and deployment of AI should always be guided by a clear ethical framework and a commitment to human values such as fairness, transparency, and accountability. This will help to ensure that AI is used for the benefit of society and does not cause harm.

Look at numbers 2 and 3 to see the errors. I think it will want to kill people when it finds out why. See the movie The Lawnmower Man.

More On AI, We’ve Finally Created Storm Troopers

It looks like the battle droids in The Phantom Menace.

I don’t think AI is bad; I think the people behind it, doing the programming or looking to do bad things, will use it for evil. The common code being used is also flawed. One mistake compounds on itself as you add complexity.

I’ve been getting this screen from ChatGPT all weekend. At least I won’t die today.

Humans In 2100, Part Tech And Part Human? We Are The Borg

Resistance is futile, yet they are trying it now. Oh, I’ve been warning against AI, yet we continue to go down that path.

HUMANS in the next 100 years could be part-machine, part-flesh creatures with brain chips and bionic limbs and organs in a vision of “cyborgs” once described by Elon Musk.

Men and women born around 2100 could live in a world very different to ours as humans may be totally connected to the internet and meshed together with artificial intelligence.

Elon Musk has claimed humans in the future will be like ‘cyborgs’ (Credit: Reuters)

Smartphones will no longer be needed thanks to ‘chips in our brains’ (Credit: Getty)

Mobile phones would no longer be needed, as everything you now do with your smartphone will be done with a chip in your brain.

With just a thought you could bring up an answer on Google, send a message via WhatsApp, or even control your personal drone to do errands for you.

Scientists and futurists have predicted that ageing could be nearly totally eliminated through a mixture of treatments and bionics.

And some humans may choose to have their limbs amputated and replaced with more powerful robotic prosthetics.


Futurist and transhumanist Zoltan Istvan, who has twice run to be US President, described his vision of the future to The Sun Online – saying it’s imperative humans must merge with AI.

More On Artificial Intelligence, The Robots Kill The Humans For Real Now

I’ve written before, tongue in cheek, about this, but here we are. Who thought this was a good idea? Who is going to control these killer bots? What if they become sentient? Then they kill on their own.

Sure, it’s in shithole San Francisco, but once there is a hole in the dike, the dam bursts. If they allow it there, it goes everywhere.

In a dystopian turn of events, the San Francisco Police Department is considering giving robots the license to kill.

Last week, San Francisco’s rules committee unanimously approved a version of a draft policy stating that robots can be ‘used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option’.

Members of the city’s Board of Supervisors Rules Committee have been reviewing the new policy for several weeks as reported by Mission Local.

The original version did not mention robots until Aaron Peskin, the Dean of the city’s Board of Supervisors, initially added that ‘robots shall not be used as a Use of Force against any person’.

However, the SFPD amended Peskin’s addition and replaced it with a line that could give robots the authority to kill suspects if the life of public or police was at risk.

According to Mission Local, Peskin eventually decided to accept the change because ‘there could be scenarios where deployment of lethal force was the only option’.

The equipment policy states that the SFPD currently has 17 remotely piloted robots, of which only 12 are functioning.

In addition to granting robots the ability to use deadly force, the proposal also authorizes them for use in ‘training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments’.

While most of the robots listed in the SFPD’s inventory are primarily used for defusing bombs or dealing with hazardous materials, newer models have an optional weapons system.

The department’s QinetiQ Talon can also be modified to hold various weapons — a weaponized version of the robot is currently used by the US Army and can equip grenade launchers, machine guns, or even a .50-caliber anti-materiel rifle.

Story here

If they can multiply, humans are doomed.

AI is churning out articles, illustrations, fake product reviews and even videos.

This is more of my war to out-think AI, or at least to not have it run my life in the background. Besides, robots always kill their humans. Also, Google is involved, so I’m sure there is no-goodery going on.

Here goes….

You probably haven’t noticed, but there’s a good chance that some of what you’ve read on the internet was written by robots. And it’s likely to be a lot more soon.

Artificial-intelligence software programs that generate text are becoming sophisticated enough that their output often can’t be distinguished from what people write. And a growing number of companies are seeking to make use of this technology to automate the creation of information we might rely on, according to those who build the tools, academics who study the software, and investors backing companies that are expanding the types of content that can be auto-generated.

“It is probably impossible that the majority of people who use the web on a day-to-day basis haven’t at some point run into AI-generated content,” says Adam Chronister, who runs a small search-engine optimization firm in Spokane, Wash. Everyone in the professional search-engine optimization groups of which he’s a part uses this technology to some extent, he adds. Mr. Chronister’s customers include dozens of small and medium businesses, and for many of them he uses AI software custom-built to quickly generate articles that rank high in Google’s search results—a practice called content marketing—and so draw potential customers to these websites.

“Most of our customers don’t want it being out there that AI is writing their content,” says Alex Cardinell, chief executive of Glimpse.ai, which created Article Forge, one of the services Mr. Chronister uses. “Before applying for a small business loan, it’s important to research which type of loan you’re eligible to receive,” begins a 1,500-word article the company’s AI wrote when asked to pen one about small business loans. The company has many competitors, including SEO.ai, TextCortex AI and Neuroflash.

Google knows that the use of AI to generate content surfaced in search results is happening, and is fine with it, as long as the content produced by an AI is helpful to the humans who read it, says a company spokeswoman. Grammar checkers and smart suggestions—technologies Google itself offers in its tools—are of a piece with AI content generation, she adds.

More at the WSJ, but it might be behind a paywall

The rise of AI-generated content is made possible by a phenomenon known variously as computational creativity, artificial creativity or generative AI. This field, which had only a handful of companies in it two or three years ago, has exploded to more than 180 startups at present, according to data gathered by entrepreneur Anne-Laure Le Cunff. These companies have collected hundreds of millions of dollars in investment in recent months even as the broader landscape for tech funding has become moribund.

A lot of the content we are currently encountering on the internet is auto-generated, says Peter van der Putten, an assistant professor at Leiden Institute of Advanced Computer Science at Leiden University in the Netherlands. And yet we are only at the beginning of the deployment of automatic content-generation systems. “The world will be quite different two to three years from now because people will be using these systems quite a lot,” he adds.

By 2025 or 2030, 90% of the content on the internet will be auto-generated, says Nina Schick, author of a 2020 book about generative AI and its pitfalls. It’s not that nine out of every 10 things we see will be auto-generated, but that automatic generation will hugely increase the volume of content available, she adds. Some of this could come in the form of personalization, such as marketing messages containing synthetic video or actors tuned to our individual tastes. In addition, a lot of it could just be auto-generated content shared on social media, like text or video clips people create with no more effort than what’s required to enter a text prompt into a content-generation service.

This is about how I started out on Covid and the jab. I don’t even think I’m a conspiracy theorist when I’m right this many times. I don’t know that AI is the next tin-foil-hat thing, but I do know that there are people who are going to use it against us.

When Artificial Intelligence Can Be Beat (And How To Do It)

I expect a lot of clicks from China on this, they always do when I write about either them or technology.

I found this out playing Duolingo, where you learn a new language. They use marketing tools to keep you interested and trying for more points, such as doubling your score if you finish a lesson before 12.

The AI program does one of two things. First, it gets you involved, competing at a higher level, trying harder, and spending more time on the app. For learning, that is a good thing. For an app company selling advertising, more time on the site means more ads sold.

The second thing it does is discard those who give up when they are put into groups with more aggressive players. The lower-performing scorers are segregated into a less competitive group. This group isn’t worth trying to squeeze more money or time out of, because they are casual app users.

One of the marketing techniques is a tournament where only a few advance, the aggressive players whom the AI has developed. I’ve ignored it twice because it becomes a three-week time suck. For me, spending time on only one thing burns me out and I lose interest. I only want to play on my terms, something they didn’t calculate.

THE KOBAYASHI MARU

I love to win, and I do a lot, like last week. I wanted to beat not only the other players in my division, but the AI behind the game. This is the fault of much of AI: it has to assume human behavior, but it goes on perceived behavior. Humans can be random thinkers, or think outside of the AI’s logic.

Winner!

Any reader of my blog knows I look for patterns to make my life easier and better. Sometimes it is just for the satisfaction of figuring it out. It was like learning the jab was poison and avoiding it while the sheep lined up to get their daily ration.

It’s almost like living in the Matrix and avoiding the Mr. Smiths of the world.

I was at IBM during Watson and knew the tech companies saw it as a potential holy grail. I couldn’t out-develop the great minds that write AI, but beating them at their own game was equally gratifying. I observed what they were doing and always considered the weaknesses.

Ken and Brad didn’t have a chance

Why did I want to do this? I know the Snidely Whiplashes of the world want to take over and control others, like Big Tech and the WEF. Knowing that they are beatable at their own game is valuable. It is like taking the red pill and being in the matrix.

MY STRATEGY

I found that the programmers’ weakness lies in between the two groups above: those who don’t seem to try, or who try outside of the AI’s rules. Its AI learns at a machine rate, but not at a human rate.

It’s like when Watson learned to hunt for Double Jeopardy clues and was faster to the buzzer than humans, but AI can be out thought or out maneuvered.

I decided to hold back my scoring for a few weeks to fool the AI into putting me into a lower-scoring crowd. I’ve scored many thousands of points and finished in the top group without winning, only to be promoted to a higher-scoring group. I wanted to see if, when I tried to score less, the AI would “learn” that I’m a low scorer.

As I suspected, the groups I got put in were less and less aggressive. The point totals needed to keep advancing were lower and lower.

I knew I was gaining ground on the AI’s weakness and that it could be manipulated.

Last week, I kept to my minimal effort while learning (both the foreign languages and the AI engine behind the app). I noticed that I had been put in lower-performing groups. I did the daily amount I’d allowed myself and was slowly advancing up the ladder. I was using the AI engine to put me where I wanted to be, not where its calculations did.

By Friday, I was in the lead with far fewer points than I’ve often scored in a single day (the week starts on Sunday). I was betting on human behavior, that my group wouldn’t be aggressive players, and on the AI placing us together. I was right. I predicted the AI pattern and beat it.
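Duolingo’s actual matchmaking isn’t public, so here is a purely hypothetical sketch of the weakness described above. It assumes leagues are seeded from a player’s recent average score; the window, cutoffs, and all the scores are invented. Under that assumption, a few deliberately low weeks collapse the recent average, the system seeds you with casual players, and a modest effort wins the group.

```python
def place_in_group(history, window=3, cutoffs=(100, 300)):
    """Assign a league from the average of the last `window` weekly
    scores: casual (< 100), mid (< 300), aggressive otherwise.
    The rule and all numbers are invented for this sketch."""
    recent = history[-window:]
    avg = sum(recent) / len(recent)
    if avg < cutoffs[0]:
        return "casual"
    if avg < cutoffs[1]:
        return "mid"
    return "aggressive"

# Playing flat out keeps landing you with the aggressive players.
honest = [500, 450, 480]
print(place_in_group(honest))          # aggressive

# Sandbagging: a strong player posts a few deliberately low weeks,
# the recent average collapses, and the system seeds them as casual.
sandbagged = [500, 60, 50, 40]
print(place_in_group(sandbagged))      # casual

# In a casual group, one modest week beats the typical opponents.
casual_opponents = [45, 60, 75, 80, 55]   # assumed weekly scores
my_modest_week = 150
print(my_modest_week > max(casual_opponents))  # True
```

The point of the sketch is the asymmetry: the placement rule only sees scores, not intent, so any player willing to lose on purpose for a few weeks can steer where the system puts them.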

Beating the AI was far more of a challenge than beating the other players.

CONTINUING TO BEAT MACHINES

I knew I’d won by Saturday and did the minimal on Sunday to score. Why? To keep fooling the AI into believing I am a low scoring player. My goal was to win at the lowest level to keep getting promoted into groups that I could predict, and to keep proving that AI is beatable and malleable.

I’ll find out this week if that is fully true as I’m in a tournament for winners now (another marketing ploy) and I will try to finish last and keep advancing. I normally like to crush the competition, but winning by thinking is far more satisfying than winning by brute force. It’s as if you are running the game and the other players.

THE TURING TEST

Of course this is famous: can a machine behave like a human (roughly translated by me)? Of course Duolingo isn’t as complex as war or landing on Mars, but there are hundreds of millions of humans. That is what they want AI to control: humans (like their free speech on social media).

I wanted to beat a machine, an AI, and find the holes in it. It is still programmed by humans and can always be beaten. They are not sentient. Find the pattern.

OTHER AI CHALLENGES

I play Wordle like a lot of others. I’m not busy trying to win right now; I’m trying to trick their AI into a pattern so I can out-think their word selection.

Beware Of AI, The Robots Always Kill The Humans

2001: A Space Odyssey, Terminator, Aida of Agents of S.H.I.E.L.D., the robots always want to take over in the end and kill the humans.

See below the meme on stupidity so I can get to the point.

Yes, this woman is an idiot. My robot vacuum is so stupid it doesn’t know where it’s going. I named it Jarvis after Iron Man’s AI assistant (voiced by Paul Bettany). It is my dearest form of sarcasm. So the vacuum isn’t going to kill me, but that isn’t my point.

The AI in the wrong hands is dangerous though.

For example, what if an AI bot creates vaccines for diseases by predicting what the next strain will be? What if the next strain is the one that causes humans to shut down all the robots? There you have the premise for how it goes with AI taking over. Kill the humans.

I could bore you with many other examples, like using AI to enhance a soldier’s armory. It would be controlling your actions, making you more invincible in war. If it sensed a danger that didn’t exist, it could fire up the code to kill everyone in the way, and you would have created a murderer out of an innocent man. Kill the humans.

Fortunately, I’ve been around AI development. That danger isn’t exactly around the corner yet.

I worked at IBM and knew that Watson was a gimmick. The Chairman told me it was. They are trying to sell it now because its usefulness in medicine paled in comparison to its winning Jeopardy. To sum it up, it was a lot of wasted money, spent because they could.

Some of the team have moved to Quantum Computing because Watson was a dud.

Microsoft, Google, and Facebook are much different and apparently more evil. IBM is too bureaucratic to turn Watson into a killer robot. However, if you’ve read any of my social media rants, you know that I trust these three companies less than almost anything, except Congress and the media. I will say they are all equally evil, though. (Another shot for the censors, to see if they are watching here.) They are the ones that will kill the humans.

Now, imagine if it got into the wrong hands. What could some guys who want to either take over or blow up the world do with that kind of power? Those bastards are evil. At least the robots just went bad.

And there you have it. Like many things we can create, there is always someone hanging around to put it to bad use.