I asked why AI is biased. I am trying to find the flaws and the holes that can be exploited. I found that it can be a good tool, depending on who is behind it. As long as humans are behind it, there will be nefarious consequences.
The biggest flaw is humans.
Here is the answer.
Artificial intelligence (AI) can be biased for several reasons:
Training data bias: AI models are trained on large datasets, and if these datasets are biased in some way, the model will learn and perpetuate those biases. For example, if a facial recognition system is trained on a dataset that is predominantly composed of lighter-skinned individuals, it may have trouble recognizing individuals with darker skin tones.
Algorithm bias: The algorithms used to build AI models can also contain inherent biases. For example, certain algorithms may be more likely to perpetuate existing biases or reinforce existing power structures.
Human biases: Human biases can also be introduced into AI systems through the choices made by engineers and data scientists when building and deploying AI models. For example, the choice of which features to include in a model or how to label data can reflect and reinforce biases.
It is important to address AI biases to ensure that AI systems are fair and do not discriminate against certain groups. This can be done through careful consideration of the training data and algorithms used, as well as ongoing monitoring and testing to detect and correct biases.
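To make that first point, training data bias, concrete, here's a toy sketch I put together. Nothing in it is a real system; the numbers and the one-threshold "model" are invented. The point is simply that a classifier tuned on data dominated by one group performs worse on the group it rarely saw.

```python
# Toy demo of training-data bias: a threshold "model" learned from a
# dataset that is 95% group A misclassifies group B more often.
# All distributions and numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sample(group, label, n):
    # Group B's feature distribution is shifted relative to group A's.
    shift = 0.0 if group == "A" else -0.7
    center = 1.0 if label == 1 else -1.0
    return rng.normal(center + shift, 0.6, n)

# Training set: 95% group A, 5% group B; the imbalance is the bias.
train_x = np.concatenate([sample("A", 1, 950), sample("A", 0, 950),
                          sample("B", 1, 50),  sample("B", 0, 50)])
train_y = np.concatenate([np.ones(950), np.zeros(950),
                          np.ones(50),  np.zeros(50)])

# The "model": a single threshold halfway between the learned class means.
threshold = (train_x[train_y == 1].mean() + train_x[train_y == 0].mean()) / 2

for group in ("A", "B"):
    x = np.concatenate([sample(group, 1, 1000), sample(group, 0, 1000)])
    y = np.concatenate([np.ones(1000), np.zeros(1000)])
    accuracy = ((x > threshold) == y).mean()
    print(f"group {group} accuracy: {accuracy:.1%}")
```

Run it and group A comes out around 95% accurate while group B lags well behind, for no reason other than who was in the training set.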
It was created by humans, so I know there are flaws, and I asked it what they are. I wanted to mess with the engine to see what the boundaries are. For example, no dick jokes.
Here is the response.
I’m an AI language model developed by OpenAI, so I don’t have direct knowledge of specific code problems. However, as a large AI system, it’s possible that there may be issues with the code or infrastructure that could impact its performance or stability. Some common challenges in developing AI systems include:
Data quality and quantity: Ensuring that the training data used to develop the model is accurate, diverse, and representative of the real-world scenarios it will encounter.
Model complexity: Balancing the complexity of the model with its ability to generalize to new situations and avoid overfitting to the training data.
Bias in the data and model: Ensuring that the model does not perpetuate or amplify biases present in the training data.
Performance and scalability: Making sure that the model can perform well and scale to meet the demands of real-world applications.
OpenAI is constantly working to address these and other challenges in AI development and deployment. They have a team of experienced researchers and engineers who are dedicated to improving the quality and reliability of their AI systems.
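That "model complexity" point is the easiest one to see for yourself. Here's a minimal sketch with toy data, nothing from any real OpenAI system: fit polynomials of increasing degree to a handful of noisy points and watch the training error collapse while the test error climbs. That gap is overfitting.

```python
# Toy demo of overfitting: higher-degree polynomials memorize the noise
# in 15 training points instead of learning the underlying curve.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 15)
x_test = np.linspace(0, 1, 200)
true_fn = lambda x: np.sin(2 * np.pi * x)
y_train = true_fn(x_train) + rng.normal(0, 0.2, x_train.size)  # noisy samples
y_test = true_fn(x_test)                                       # clean truth

for degree in (1, 3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}  train MSE {train_mse:.3f}  test MSE {test_mse:.3f}")
```

Degree 1 underfits, degree 3 is about right, and degree 12 drives the training error toward zero while doing worse on data it hasn't seen.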
When other programs use open and available code, it compounds the problems, potentially giving wrong or dangerous results, like AI killing people.
I asked that too: does AI want to kill humans? Here is the answer.
No, AI does not have desires, motivations, or emotions. It is simply a tool that is designed and programmed to perform specific tasks. AI is not capable of wanting to do harm to humans or any other living being.
However, it’s important to ensure that AI systems are developed, implemented, and used in a responsible and ethical manner. This includes considering potential negative consequences of AI applications, such as the displacement of jobs or the amplification of biases, and taking steps to mitigate these risks.
The development and deployment of AI should always be guided by a clear ethical framework and a commitment to human values such as fairness, transparency, and accountability. This will help to ensure that AI is used for the benefit of society and does not cause harm.
Look at numbers 2 and 3 to see the errors. I think it will want to kill people when it finds out why. See the movie The Lawnmower Man.
I don’t think AI is bad; I think the people behind it, the ones doing the programming or looking to do bad things, will use it for evil. The common code being used is also flawed. One mistake compounds on itself as you add complexity.
Perfectly Synced AI Dogs. We've seen these introduced to many industries over the past few years including farming, private security firms & even national police forces. pic.twitter.com/YDYYk64W8m
Resistance is futile, yet they are trying it now. Oh, I’ve been warning against AI, yet we continue to go down that path.
HUMANS in the next 100 years could be part-machine, part-flesh creatures with brain chips and bionic limbs and organs in a vision of “cyborgs” once described by Elon Musk.
Men and women born around 2100 could live in a world very different to ours as humans may be totally connected to the internet and meshed together with artificial intelligence.
Caption: Elon Musk has claimed humans in the future will be like ‘cyborgs’ (Credit: Reuters). Smartphones will no longer be needed thanks to ‘chips in our brains’ (Credit: Getty).
Mobile phones would no longer be needed – everything you now do with your smartphone would be done with a chip in your brain.
With just a thought you could bring up an answer on Google, send a message via WhatsApp, or even control your personal drone to do errands for you.
Scientists and futurists have predicted that ageing could be nearly totally eliminated through a mixture of treatments and bionics.
And some humans may choose to have their limbs amputated and replaced with more powerful robotic prosthetics.
Futurist and transhumanist Zoltan Istvan, who has twice run to be US President, described his vision of the future to The Sun Online – saying it’s imperative humans must merge with AI.
Sure it’s in the shit hole San Francisco, but once there is a hole in the dike, the dam bursts. If they allow it there, it goes everywhere.
In a dystopian turn of events, the San Francisco Police Department is considering giving robots the license to kill.
Last week, San Francisco’s rules committee unanimously approved a version of a draft policy stating that robots can be ‘used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option’.
Members of the city’s Board of Supervisors Rules Committee have been reviewing the new policy for several weeks as reported by Mission Local.
The original version did not mention robots until Aaron Peskin, the Dean of the city’s Board of Supervisors, initially added that ‘robots shall not be used as a Use of Force against any person’.
However, the SFPD amended Peskin’s addition and replaced it with a line that could give robots the authority to kill suspects if the life of public or police was at risk.
According to Mission Local, Peskin eventually decided to accept the change because ‘there could be scenarios where deployment of lethal force was the only option’.
The equipment policy states that the SFPD currently has 17 remotely piloted robots, of which only 12 are functioning.
In addition to granting robots the ability to use deadly force, the proposal also authorizes them for use in ‘training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments’.
While most of the robots listed in the SFPD’s inventory are primarily used for defusing bombs or dealing with hazardous materials, newer models have an optional weapons system.
The department’s QinetiQ Talon can also be modified to hold various weapons — a weaponized version of the robot is currently used by the US Army and can equip grenade launchers, machine guns, or even a .50-caliber anti-materiel rifle.
This is more on my war to out-think AI, or at least not have it run my life in the background. Besides, robots always kill their humans. Also, Google is involved, so I’m sure there is no-goodery going on.
Here goes….
You probably haven’t noticed, but there’s a good chance that some of what you’ve read on the internet was written by robots. And it’s likely to be a lot more soon.
Artificial-intelligence software programs that generate text are becoming sophisticated enough that their output often can’t be distinguished from what people write. And a growing number of companies are seeking to make use of this technology to automate the creation of information we might rely on, according to those who build the tools, academics who study the software, and investors backing companies that are expanding the types of content that can be auto-generated.
“It is probably impossible that the majority of people who use the web on a day-to-day basis haven’t at some point run into AI-generated content,” says Adam Chronister, who runs a small search-engine optimization firm in Spokane, Wash. Everyone in the professional search-engine optimization groups of which he’s a part uses this technology to some extent, he adds. Mr. Chronister’s customers include dozens of small and medium businesses, and for many of them he uses AI software custom-built to quickly generate articles that rank high in Google’s search results—a practice called content marketing—and so draw potential customers to these websites.
“Most of our customers don’t want it being out there that AI is writing their content,” says Alex Cardinell, chief executive of Glimpse.ai, which created Article Forge, one of the services Mr. Chronister uses. “Before applying for a small business loan, it’s important to research which type of loan you’re eligible to receive,” begins a 1,500-word article the company’s AI wrote when asked to pen one about small business loans. The company has many competitors, including SEO.ai, TextCortex AI and Neuroflash.
Google knows that the use of AI to generate content surfaced in search results is happening, and is fine with it, as long as the content produced by an AI is helpful to the humans who read it, says a company spokeswoman. Grammar checkers and smart suggestions—technologies Google itself offers in its tools—are of a piece with AI content generation, she adds.
The rise of AI-generated content is made possible by a phenomenon known variously as computational creativity, artificial creativity or generative AI. This field, which had only a handful of companies in it two or three years ago, has exploded to more than 180 startups at present, according to data gathered by entrepreneur Anne-Laure Le Cunff. These companies have collected hundreds of millions of dollars in investment in recent months even as the broader landscape for tech funding has become moribund.
A lot of the content we are currently encountering on the internet is auto-generated, says Peter van der Putten, an assistant professor at Leiden Institute of Advanced Computer Science at Leiden University in the Netherlands. And yet we are only at the beginning of the deployment of automatic content-generation systems. “The world will be quite different two to three years from now because people will be using these systems quite a lot,” he adds.
By 2025 or 2030, 90% of the content on the internet will be auto-generated, says Nina Schick, author of a 2020 book about generative AI and its pitfalls. It’s not that nine out of every 10 things we see will be auto-generated, but that automatic generation will hugely increase the volume of content available, she adds. Some of this could come in the form of personalization, such as marketing messages containing synthetic video or actors tuned to our individual tastes. In addition, a lot of it could just be auto-generated content shared on social media, like text or video clips people create with no more effort than what’s required to enter a text prompt into a content-generation service.
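To show how low the bar already is, here's roughly what these services do under the hood, sketched with the small open-source GPT-2 model via Hugging Face's transformers library. To be clear, this is not Article Forge's or any other vendor's actual system, just the same basic technique with a free model.

```python
# A bare-bones content generator: prompt in, article text out.
# Uses the small open-source GPT-2 model, not any commercial system.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Before applying for a small business loan, it's important to"
result = generator(prompt, max_new_tokens=120, do_sample=True,
                   temperature=0.9, num_return_sequences=1)
print(result[0]["generated_text"])
```

A few lines, no writer, and the output lands somewhere on the web. Scale that up and the 90% prediction stops sounding crazy.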
This is like how I started out on Covid and the Jab. I don’t think I’m even a conspiracy theorist when I’ve been right this many times. I don’t know that AI is the next tin-foil-hat thing, but I do know that there are people who are going to use it against us.
I expect a lot of clicks from China on this, they always do when I write about either them or technology.
I found this out playing Duolingo, the app where you learn a new language. They use marketing tools to keep you interested and trying for more points, such as doubling your score if you finish a lesson before 12.
The AI program does two things. First, it gets you involved and competing at a higher level, trying harder and spending more time on the app. For learning, it is a good thing. For an app company selling advertising, the more time on their site, the more ads they sell.
The second thing it does is discard those who give up when they are put into groups with more aggressive players. The lower-performing scorers are segregated into a less competitive group. This group isn’t worth trying to squeeze more money or time out of because they are casual app users.
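Here's my guess at how that bucketing works; a sketch of the idea, not Duolingo's actual code. The thresholds and league names are invented.

```python
# Hypothetical league bucketing: players are sorted by recent scores,
# so a consistently low scorer lands in a less competitive cohort.
# Thresholds and names are made up for illustration.
from statistics import mean

def assign_league(recent_weekly_scores):
    """Bucket a player by their average recent weekly score."""
    avg = mean(recent_weekly_scores)
    if avg >= 2000:
        return "aggressive"   # worth squeezing for more time and ad views
    if avg >= 500:
        return "competitive"
    return "casual"           # casual users, low priority for engagement

print(assign_league([2400, 3100, 2800]))  # -> aggressive
print(assign_league([150, 90, 210]))      # -> casual
```

Notice what the machine never asks: whether the low scores are real or deliberate.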
One of the marketing techniques is a tournament where only a few advance, the aggressive players whom the AI has developed. I’ve ignored it twice because it becomes a three-week time suck. For me, spending time on only one thing burns me out and I lose interest. I only want to play on my terms, something they didn’t calculate.
THE KOBAYASHI MARU
I love to win, and I do it a lot, like last week. I wanted to beat not only the other players in my division, but the AI behind the game. This is the flaw in much of AI: it has to assume human behavior, but it goes on perceived behavior. Humans can be random thinkers, or think outside of the AI logic.
Winner!
Any reader of my blog knows I look for patterns to make my life easier and better. Sometimes it is just for the satisfaction of figuring it out. It was like learning the jab was poison and avoiding it while the sheep lined up to get their daily ration.
It’s almost like living in the Matrix and avoiding the Mr. Smiths of the world.
I was at IBM during Watson and knew the tech companies saw this as a potential holy grail. I couldn’t out-develop the great minds that write AI, but beating them at their own game was equally gratifying. I observed what they were doing and always considered the weaknesses.
Ken and Brad didn’t have a chance
Why did I want to do this? I know the Snidely Whiplashes of the world want to take over and control others, like Big Tech and the WEF. Knowing that they are beatable at their own game is valuable. It is like taking the red pill and seeing the Matrix.
MY STRATEGY
I found that in between the two groups above is where the programmers’ weakness lies: those who don’t seem to try, or who try outside of the AI rules. Its AI learns at a machine rate, but not at a human rate.
It’s like when Watson learned to hunt for Daily Double clues and was faster to the buzzer than humans, but AI can be out-thought or outmaneuvered.
I decided to hold back my scoring for a few weeks to fool the AI into putting me into a lower-scoring crowd. I’ve done many thousands of points and finished in the top group without winning, only to be promoted to a higher-scoring group. I wanted to see whether, if I tried to score less, the AI would “learn” that I’m a low scorer.
As I suspected, the groups I got put in were less and less aggressive. The point totals needed to keep advancing were lower and lower.
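Here's a toy simulation of the sandbagging, assuming the matchmaker tracks something like a moving average of weekly points. That's my assumption, not Duolingo's actual code, but any estimate built from past scores can be fed exactly this way.

```python
# Toy sandbagging simulation against a hypothetical matchmaker that
# rates players with an exponential moving average of weekly points.
def update_estimate(estimate, weekly_points, alpha=0.5):
    """The matchmaker's read on a player, nudged toward this week's score."""
    return (1 - alpha) * estimate + alpha * weekly_points

estimate = 3000.0  # the AI's initial read: a high scorer
for week, points in enumerate([300, 250, 300, 280], start=1):
    estimate = update_estimate(estimate, points)
    cohort = "aggressive" if estimate >= 1500 else "casual"
    print(f"week {week}: estimate {estimate:6.0f} -> {cohort} group")

# After a few deliberately low weeks the machine files you with the
# casual players; then one normal week of effort wins the whole group.
```

Two weeks of holding back and the estimate falls off a cliff. The machine believes its numbers; it can't tell sandbagging from losing interest.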
I knew I was gaining ground on the AI’s weakness and that it could be manipulated.
Last week, I kept to my minimal effort while learning (both foreign languages and the AI engine behind the app). I noticed that I had been put in lower-performing groups. I did the daily amount I’d allowed myself and was slowly advancing up the ladder. I was using the AI engine to put me where I wanted to be, not where its calculations put me.
By Friday, I was in the lead with far fewer points than I’ve scored many times in only a day (the week starts on Sunday). I had to rely on the human behavior of my group not being aggressive players, but my bet was that the AI would put us together. I was right. I predicted the AI pattern and beat it.
Beating the AI was far more of a challenge than beating the other players.
CONTINUING TO BEAT MACHINES
I knew I’d won by Saturday and did the minimum on Sunday to score. Why? To keep fooling the AI into believing I am a low-scoring player. My goal was to win at the lowest level, to keep getting promoted into groups that I could predict, and to keep proving that AI is beatable and malleable.
I’ll find out this week if that is fully true, as I’m in a tournament for winners now (another marketing ploy), and I will try to finish as low as I can while still advancing. I normally like to crush the competition, but winning by thinking is far more satisfying than winning by brute force. It’s as if you are running the game and the other players.
THE TURING TEST
Of course this is famous: can a machine behave like a human (roughly translated by me)? Of course Duolingo isn’t as complex as war or landing on Mars, but there are hundreds of millions of humans on it. That is what they want AI to control: humans (like their free speech on social media).
I wanted to beat a machine, an AI, and find the holes that are in it. It is still programmed by humans and can always be beaten. They are not sentient. Find the pattern.
OTHER AI CHALLENGES
I play Wordle like a lot of others. I’m not busy trying to win right now; I’m trying to find the pattern in their word selection so I can out-think it.
In 2001: A Space Odyssey, Terminator, and Aida of Agents of S.H.I.E.L.D., the robots always want to take over in the end and kill the humans.
See below the meme on stupidity so I can get to the point.
Yes, this woman is an idiot. My robot vacuum is so stupid it doesn’t know where it’s going. I named it Jarvis after Iron Man’s AI assistant (voiced by Paul Bettany). It is my dearest form of sarcasm. So the vacuum isn’t going to kill me, but that isn’t my point.
AI in the wrong hands is dangerous, though.
For example, what if an AI bot creates vaccines for diseases by predicting what the next strain will be? What if the next strain is the one that causes humans to shut down all the robots? There you have the premise for how it goes with AI taking over. Kill the humans.
I could bore you with many other examples, like using AI to enhance a soldier’s armory. It would be controlling your actions, making you more invincible in war. If it sensed a danger that didn’t exist, it could fire up the code to kill everyone in the way, and you have created a murderer out of an innocent man. Kill the humans.
Fortunately, I’ve been around AI development. That danger isn’t exactly around the corner yet.
I worked at IBM and knew that Watson was a gimmick. The Chairman told me it was. They are trying to sell it now because its usefulness in medicine paled in comparison to its Jeopardy win. To sum it up, it was a lot of money wasted just because they could.
Some of the team have moved to Quantum Computing because Watson was a dud.
Microsoft, Google and Facebook are much different and apparently more evil. IBM is too bureaucratic to turn it into a killer robot. However, if you’ve read any of my social media rants, you know that I trust these three companies less than almost anything, except Congress and the media. I will say they are equally evil though. (Another shot for the censors to see if they are watching here). They are the ones that will kill the humans.
Now, imagine if it got into the wrong hands. What could some guys who want to either take over or blow up the world do with that kind of power? Those bastards are evil. At least the robots just went bad.
And there you have it. Like many things we can create, there is always someone hanging around to put it to bad use.