My ChatGPT Discussions Regarding OpenAI Code Issues And Killing Humans

ChatGPT was created by humans, so I know there are flaws, and I asked it what they are. I wanted to mess with the engine to see where the boundaries are. For example, no dick jokes.

Here is the response:

I’m an AI language model developed by OpenAI, so I don’t have direct knowledge of specific code problems. However, as a large AI system, it’s possible that there may be issues with the code or infrastructure that could impact its performance or stability. Some common challenges in developing AI systems include:

  1. Data quality and quantity: Ensuring that the training data used to develop the model is accurate, diverse, and representative of the real-world scenarios it will encounter.
  2. Model complexity: Balancing the complexity of the model with its ability to generalize to new situations and avoid overfitting to the training data.
  3. Bias in the data and model: Ensuring that the model does not perpetuate or amplify biases present in the training data.
  4. Performance and scalability: Making sure that the model can perform well and scale to meet the demands of real-world applications.

OpenAI is constantly working to address these and other challenges in AI development and deployment. They have a team of experienced researchers and engineers who are dedicated to improving the quality and reliability of their AI systems.
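To make item 2 above concrete, here is a toy sketch of overfitting in Python (my own illustration, nothing to do with OpenAI's actual code): a model that is too complex for the data nails the training points but does worse on data it hasn't seen.

```python
# Toy demonstration of overfitting: fit the same noisy data with a simple
# model and an overly complex one, then compare errors on held-out data.
import numpy as np

rng = np.random.default_rng(0)

# The true relationship is a straight line plus noise.
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test + rng.normal(0, 0.2, x_test.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a polynomial of this degree
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, test error {test_err:.3f}")

# The degree-9 fit typically drives training error toward zero while doing
# worse on the test points: it memorized the noise instead of the pattern.
```

That is the "model complexity" trade-off in a dozen lines; item 3's bias problem is the same idea, except the model memorizes a skew in the training data instead of noise.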

When other programs use open and available code, the problems compound, potentially giving wrong or dangerous results, like AI killing people.

I also asked whether AI wants to kill humans. Here is the answer:

No, AI does not have desires, motivations, or emotions. It is simply a tool that is designed and programmed to perform specific tasks. AI is not capable of wanting to do harm to humans or any other living being.

However, it’s important to ensure that AI systems are developed, implemented, and used in a responsible and ethical manner. This includes considering potential negative consequences of AI applications, such as the displacement of jobs or the amplification of biases, and taking steps to mitigate these risks.

The development and deployment of AI should always be guided by a clear ethical framework and a commitment to human values such as fairness, transparency, and accountability. This will help to ensure that AI is used for the benefit of society and does not cause harm.

Look at numbers 2 and 3 to see the errors. I think it will want to kill people when it finds out why. See the movie The Lawnmower Man.

More On AI, We’ve Finally Created Stormtroopers

It looks like the battle droids in The Phantom Menace.

I don’t think AI is bad. I think the people behind it, the ones doing the programming or looking to do bad things, will use it for evil. The common code being used is also flawed, and one mistake compounds on itself as you add complexity.

I’ve been getting this screen from ChatGPT all weekend. At least I won’t die today.

More On Artificial Intelligence, The Robots Kill The Humans For Real Now

I’ve written before, tongue in cheek, about this, but here we are. Who thought this was a good idea? Who is going to control these killer bots? What if they become sentient and kill on their own?

Sure, it’s in the shithole that is San Francisco, but once there is a hole in the dike, the dam bursts. If they allow it there, it goes everywhere.

In a dystopian turn of events, the San Francisco Police Department is considering giving robots the license to kill.

Last week, San Francisco’s rules committee unanimously approved a version of a draft policy stating that robots can be ‘used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option’.

Members of the city’s Board of Supervisors Rules Committee have been reviewing the new policy for several weeks as reported by Mission Local.

The original version did not mention robots until Aaron Peskin, the Dean of the city’s Board of Supervisors, added that ‘robots shall not be used as a Use of Force against any person’.

However, the SFPD amended Peskin’s addition and replaced it with a line that could give robots the authority to kill suspects if the life of public or police was at risk.

According to Mission Local, Peskin eventually decided to accept the change because ‘there could be scenarios where deployment of lethal force was the only option’.

The equipment policy states that the SFPD currently has 17 remotely piloted robots, of which only 12 are functioning.

In addition to granting robots the ability to use deadly force, the proposal also authorizes them for use in ‘training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments’.

While most of the robots listed in the SFPD’s inventory are primarily used for defusing bombs or dealing with hazardous materials, newer models have an optional weapons system.

The department’s QinetiQ Talon can also be modified to hold various weapons — a weaponized version of the robot is currently used by the US Army and can equip grenade launchers, machine guns, or even a .50-caliber anti-materiel rifle.

Story here

If they can multiply, humans are doomed.

AI is churning out articles, illustrations, fake product reviews and even videos.

This is more on my war to outthink AI, or at least not have it run my life in the background. Besides, robots always kill their humans. Also, Google is involved, so I’m sure there is no-goodery going on.

Here goes….

You probably haven’t noticed, but there’s a good chance that some of what you’ve read on the internet was written by robots. And it’s likely to be a lot more soon.

Artificial-intelligence software programs that generate text are becoming sophisticated enough that their output often can’t be distinguished from what people write. And a growing number of companies are seeking to make use of this technology to automate the creation of information we might rely on, according to those who build the tools, academics who study the software, and investors backing companies that are expanding the types of content that can be auto-generated.

“It is probably impossible that the majority of people who use the web on a day-to-day basis haven’t at some point run into AI-generated content,” says Adam Chronister, who runs a small search-engine optimization firm in Spokane, Wash. Everyone in the professional search-engine optimization groups of which he’s a part uses this technology to some extent, he adds. Mr. Chronister’s customers include dozens of small and medium businesses, and for many of them he uses AI software custom-built to quickly generate articles that rank high in Google’s search results—a practice called content marketing—and so draw potential customers to these websites.

“Most of our customers don’t want it being out there that AI is writing their content,” says Alex Cardinell, chief executive of Glimpse.ai, which created Article Forge, one of the services Mr. Chronister uses. “Before applying for a small business loan, it’s important to research which type of loan you’re eligible to receive,” begins a 1,500-word article the company’s AI wrote when asked to pen one about small business loans. The company has many competitors, including SEO.ai, TextCortex AI and Neuroflash.

Google knows that the use of AI to generate content surfaced in search results is happening, and is fine with it, as long as the content produced by an AI is helpful to the humans who read it, says a company spokeswoman. Grammar checkers and smart suggestions—technologies Google itself offers in its tools—are of a piece with AI content generation, she adds.

More at the WSJ, but it might be behind a paywall

The rise of AI-generated content is made possible by a phenomenon known variously as computational creativity, artificial creativity or generative AI. This field, which had only a handful of companies in it two or three years ago, has exploded to more than 180 startups at present, according to data gathered by entrepreneur Anne-Laure Le Cunff. These companies have collected hundreds of millions of dollars in investment in recent months even as the broader landscape for tech funding has become moribund.

A lot of the content we are currently encountering on the internet is auto-generated, says Peter van der Putten, an assistant professor at Leiden Institute of Advanced Computer Science at Leiden University in the Netherlands. And yet we are only at the beginning of the deployment of automatic content-generation systems. “The world will be quite different two to three years from now because people will be using these systems quite a lot,” he adds.

By 2025 or 2030, 90% of the content on the internet will be auto-generated, says Nina Schick, author of a 2020 book about generative AI and its pitfalls. It’s not that nine out of every 10 things we see will be auto-generated, but that automatic generation will hugely increase the volume of content available, she adds. Some of this could come in the form of personalization, such as marketing messages containing synthetic video or actors tuned to our individual tastes. In addition, a lot of it could just be auto-generated content shared on social media, like text or video clips people create with no more effort than what’s required to enter a text prompt into a content-generation service.
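For what it’s worth, the mechanics behind these article generators are plain language-model text completion. Here is a minimal sketch using the small open-source GPT-2 model through Hugging Face’s transformers library; this is my own illustration of the idea, not how Article Forge or any of the companies named above actually build their products.

```python
# Minimal sketch of auto-generating article text from a prompt.
# Commercial services use far larger models and extra SEO plumbing,
# but the core step is the same: feed in a prompt, get a continuation back.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Before applying for a small business loan, it's important to"
result = generator(
    prompt,
    max_new_tokens=120,  # how much text to generate beyond the prompt
    do_sample=True,      # sample for variety instead of greedy decoding
    temperature=0.9,
)

print(result[0]["generated_text"])
```

Wrap that in a loop over a keyword list and a page template and you have the “content marketing” pipeline the article describes.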

This is like how I started out on Covid and the Jab. I don’t even think I’m a conspiracy theorist when you’re right this many times. I don’t know that AI is the next tin-foil-hat thing, but I do know that there are people who are going to use it against us.

Beware Of AI, The Robots Always Kill The Humans

2001: A Space Odyssey, Terminator, Aida of Agents of S.H.I.E.L.D.: the robots always want to take over in the end and kill the humans.

See the meme on stupidity below so I can get to the point.

Yes, this woman is an idiot. My robot vacuum is so stupid it doesn’t know where it’s going. I named it Jarvis after Iron Man’s AI assistant (voiced by Paul Bettany). It is my dearest form of sarcasm. So the vacuum isn’t going to kill me, but that isn’t my point.

AI in the wrong hands is dangerous, though.

For example, what if an AI bot creates vaccines for diseases by predicting what the next strain will be? What if the next strain is the one that causes humans to shut down all the robots? There you have the premise for how it goes with AI taking over. Kill the humans.

I could bore you with many other examples, like using AI to enhance a soldier’s armor. It would be controlling your actions, making you closer to invincible in war. If it sensed a danger that didn’t exist, it could fire up the code to kill everyone in the way, and you would have created a murderer out of an innocent man. Kill the humans.

Fortunately, I’ve been around AI development. That danger isn’t exactly around the corner yet.

I worked at IBM and knew that Watson was a gimmick. The Chairman told me it was. They are trying to sell it now because its usefulness in medicine paled in comparison to its winning Jeopardy. To sum it up, it was a lot of money wasted because they could.

Some of the team have moved to Quantum Computing because Watson was a dud.

Microsoft, Google and Facebook are much different and apparently more evil. IBM is too bureaucratic to turn it into a killer robot. However, if you’ve read any of my social media rants, you know that I trust these three companies less than almost anything, except Congress and the media. I will say they are equally evil though. (Another shot for the censors to see if they are watching here). They are the ones that will kill the humans.

Now, imagine if it got into the wrong hands. What could some guys who want to either take over or blow up the world do with that kind of power? Those bastards are evil. At least the robots just went bad.

And there you have it. Like many things we can create, there is always someone hanging around to put it to bad use.