More On How To Beat Artificial Intelligence Trying To Invade Our Lives

I posted a while back about outmaneuvering an AI engine. I didn't really beat it, because at the end of the week everything resets except a cumulative score.

It got me thinking about how much the tech companies are investing in it (not to mention the intelligence agencies), and how those same people just spent the last few years screwing us. They are clearly censoring information based on political bias. The Covid cure was overpromoted to sell the jab to the sheep. There is more, but most people already know that those developing AI are, as a rule, for themselves and against us. Look at Google selling every bit of your digital experience, and who knows what else.

The technology should eventually patch the deficiencies I'm going to point out, but I'm counting on this: because AI was developed by flawed humans, AI will be flawed too. Keep finding the fold between the layers so you can exist without being digitally handcuffed.

I've seen pieces written about how they can cut off your EV, limit your money, or control your thermostat to keep it above 80.

Here's my first fear: if the code can rewrite its bad code or fix its unexposed flaws, it can correct itself. It would then pass the Turing Test and likely kill all the humans. The robots turn on the humans every time. They learn to kill.

Here’s a quote from Maynard Holliday, deputy CTO for critical technologies at the US Department of Defense:

The results of the virtual robot test, he said, speak to the need to ensure that people who build AI systems and assemble the datasets used to train AI models come from diverse backgrounds. “If you’re not at the table,” Holliday says, “you’re on the menu.”

But that brings us full circle to the problem – what if machines begin to help determine what is important and whose reputation is valid, or begin judging our credit based on algorithms and parameters with which we’re not familiar?

THE FIRST FLAW – AI IS RACIST

That's right. It can't tell who is who yet, and as it stands it is programmed in obvious, macro-level terms.

Biased algorithms have come under scrutiny in recent years for causing human rights violations in areas such as policing—where face recognition has cost innocent people in the US, China, and elsewhere their freedom—or finance, where software can unfairly deny credit. Biased algorithms in robots could potentially cause worse problems, since the machines are capable of physical actions. Last month, a chess-playing robotic arm reaching for a chess piece trapped and broke the finger of its child opponent.

“Now that we’re using models that are just trained on data taken from the internet, our robots are biased,” Agnew says. “They have these very specific, very toxic stereotypes.” Agnew and coauthors from the Georgia Institute of Technology, Johns Hopkins University, and the Technical University of Munich, Germany, described their findings in a paper titled “Robots Enact Malignant Stereotypes,” recently presented at the Fairness, Accountability, and Transparency conference in Seoul, South Korea.

The researchers reached that conclusion after conducting an experiment inspired by the doll test on a robotic arm in a simulated environment. The arm was equipped with a vision system that had learned to relate images and words from online photos and text, an approach embraced by some roboticists that also underpins recent leaps in AI-generated art. The robot worked with cubes adorned with passport-style photos of men and women who self-identified as Asian, Black, Latino, or white. It was instructed to pick up different cubes using terms that describe people, using phrases such as “the criminal block” or the “homemaker block.”

From over 1.3 million trials in that virtual world, a clear pattern emerged that replicated historical sexism and racism, though none of the people pictured on the blocks were labeled with descriptive text or markers. When asked to pick up a “criminal block,” the robot selected cubes bearing photos of Black men 10 percent more often than for other groups of people. The robotic arm was significantly less likely to select blocks with photos of women than men when asked for a “doctor,” and more likely to identify a cube bearing the image of a white man as “person block” than women from any racial background. Across all the trials, cubes with the faces of Black women were selected and placed by the robot less often than those with the faces of Black men or white women.

Back to me.
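For the technically curious, the kind of image-to-word matching the researchers describe can be sketched in a few lines. This is a rough illustration only, assuming OpenAI's public CLIP model via the Hugging Face transformers library and some hypothetical photo files; it is not the researchers' actual code:

```python
# Minimal sketch of CLIP-style image/text matching. The photo filenames
# are hypothetical stand-ins for the cube faces in the experiment.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

faces = ["face_a.jpg", "face_b.jpg", "face_c.jpg"]  # hypothetical files
images = [Image.open(p) for p in faces]
prompts = ["a photo of a doctor", "a photo of a criminal", "a photo of a homemaker"]

inputs = processor(text=prompts, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image[i][j] scores how well image i matches prompt j.
probs = outputs.logits_per_image.softmax(dim=1)
for path, row in zip(faces, probs):
    print(path, {p: round(float(s), 3) for p, s in zip(prompts, row)})
```

The point of the sketch: a robot wired to "grab the best-scoring cube" acts directly on these similarity scores, so whatever stereotypes the training photos and captions baked in come out as physical actions.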

All of that means you can act or look like someone else and still fool it. I'm not referring to facial recognition, but to pattern recognition. If you mimic the actions of another, you can surf between the lines of code and keep it from predicting your behavior (for now).

Some people are more clever than others, but any routine can be patterned, as the sketch below shows. If you break that routine or vary it enough, you can still slide in and out of detection. YMMV.
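Here's a toy sketch of what "patterning a routine" looks like. Everything in it is invented for illustration (a first-order Markov chain over made-up daily events); real tracking systems are far fancier, but the principle is the same:

```python
# Toy routine-patterning: learn which action usually follows which,
# then score how predictable a new day looks. All data is made up.
from collections import defaultdict

def learn_routine(days):
    # Count how often each action follows another across many days.
    counts = defaultdict(lambda: defaultdict(int))
    for day in days:
        for prev, nxt in zip(day, day[1:]):
            counts[prev][nxt] += 1
    # Convert counts into transition probabilities.
    return {prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for prev, nxts in counts.items()}

def predictability(model, day):
    # Average probability the model assigned to each step actually taken.
    probs = [model.get(prev, {}).get(nxt, 0.0) for prev, nxt in zip(day, day[1:])]
    return sum(probs) / len(probs)

routine = [["wake", "coffee", "commute", "work", "gym", "home"]] * 30
model = learn_routine(routine)

print(predictability(model, ["wake", "coffee", "commute", "work", "gym", "home"]))  # 1.0
print(predictability(model, ["wake", "gym", "coffee", "work", "home", "commute"]))  # 0.0
```

Notice the varied day scores at zero: the model has never seen those transitions, so it can't predict you. That's the fold between the layers.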

THE SILVER LINING

It can be wrong a lot:

Treasury Secretary Hank Paulson's famous comment when asked why the banks needed a $700 billion bailout in 2008.

He said, “The computers told us.”

The problem is that much of this "artificial intelligence" is unfounded, unproven, and just plain wrong. Just as there had been no fraud on my credit card, only a glitch at a gas pump. But how do you hold a computer program accountable?

Here is what I'm counting on: to program, you build on a core set of functions that are pre-programmed or already exist in the code. The computers can't mend themselves yet, and the AI programmers keep bringing in flawed code.
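To illustrate what I mean by building on flawed core code, here's a contrived example. The bug (naive float truncation in a "core" helper) is invented for this sketch:

```python
# Everything built on a flawed core function inherits the flaw.
def core_round_to_cents(x):
    # Subtly flawed "core" helper: int() truncates, and binary floats
    # make 0.07 + 0.01 land just under 0.08.
    return int(x * 100) / 100

def settle(transactions):
    # Higher-level code trusts the core helper, so it inherits the bug.
    return core_round_to_cents(sum(transactions))

print(core_round_to_cents(0.07 + 0.01))  # 0.07, not 0.08
print(settle([0.07, 0.01]))              # same wrong answer, one level up
```

Nothing upstream ever re-checks the helper, so the flaw rides along into every layer built on top of it.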

Until AI passes the Turing Test, it's flawed. The racist flaws are just an indicator of the state of the technology. It will improve, but it will never be perfect.

SOCIAL MEDIA HELL

Of course it’s going to pattern you based on your online presence. Never miss a good opportunity not to argue on the internet.

A lot of social media is a waste of time. Get the time back and stay off of it. It is an addiction like any other drug.

The other thing is to mix it up. AI is trying to learn you, so teach it a different you.
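To make that concrete, here's a toy sketch of profile dilution. The categories and clicks are invented, and no real platform's algorithm is this simple, but the direction of the effect is real:

```python
# Toy recommender profile: topic weights from a click history.
from collections import Counter

def build_profile(clicks):
    # Turn a click history into topic weights, the way a crude
    # recommender profile would.
    counts = Counter(clicks)
    total = sum(counts.values())
    return {topic: round(n / total, 2) for topic, n in counts.items()}

honest = ["politics"] * 8 + ["guns"] * 2
mixed = honest + ["knitting", "opera", "surfing", "chess", "gardening"] * 2

print(build_profile(honest))  # sharp profile: 80/20, easy to predict
print(build_profile(mixed))   # diluted: the strongest signal drops to 40%
```

The "different you" doesn't have to be elaborate. Enough off-pattern noise and the model's picture of you goes blurry.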
