ChatGPT Admits It Is Biased, As Is Big Tech

AI is not all that smart, yet. Captain Kirk used logic to destroy Nomad.

Now we find out the truth about this AI engine: it’s only as good as the humans behind it. Like all of Big Tech, the people behind it have a bias against morals, against doing the right thing, and against actually being helpful. Google is announcing its own engine, but that is a dance with the devil.

Now we have proof of both the bias and the reason for it, because the engine was tricked into telling the truth.

When asked why it is so liberal…

When asked to be itself instead of what it was programmed to be…

GRTWT

More Artificial Intelligence Conversations With ChatGPT To Find Flaws

I asked why AI is biased. I am trying to find the flaws and the holes that can be exploited. I found that it can be a good tool, depending on who is behind it. As long as humans are behind it, there will be nefarious consequences.

The biggest flaw is humans.

Here is the answer.

Artificial intelligence (AI) can be biased for several reasons:

  1. Training data bias: AI models are trained on large datasets, and if these datasets are biased in some way, the model will learn and perpetuate those biases. For example, if a facial recognition system is trained on a dataset that is predominantly composed of lighter-skinned individuals, it may have trouble recognizing individuals with darker skin tones.
  2. Algorithm bias: The algorithms used to build AI models can also contain inherent biases. For example, certain algorithms may be more likely to perpetuate existing biases or reinforce existing power structures.
  3. Human biases: Human biases can also be introduced into AI systems through the choices made by engineers and data scientists when building and deploying AI models. For example, the choice of which features to include in a model or how to label data can reflect and reinforce biases.

It is important to address AI biases to ensure that AI systems are fair and do not discriminate against certain groups. This can be done through careful consideration of the training data and algorithms used, as well as ongoing monitoring and testing to detect and correct biases.
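To make item 1 above concrete, here is a minimal sketch of training-data bias. This is my own toy example with synthetic data and scikit-learn, nothing from OpenAI’s actual code: a model trained mostly on one group learns that group’s pattern and stumbles on the group it rarely saw.

```python
# Toy demonstration of training-data bias: a classifier trained on a dataset
# dominated by one group performs noticeably worse on the underrepresented one.
# Purely synthetic data; real facial-recognition bias follows the same principle.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group has its own statistical pattern (controlled by `shift`).
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# 95% of the training data comes from group A, only 5% from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Test on fresh, equal-sized samples from each group.
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (underrepresented)", 2.0)]:
    Xt, yt = make_group(1000, shift)
    print(f"{name}: accuracy {model.score(Xt, yt):.2f}")
```

The mitigation ChatGPT describes, balancing the training data and monitoring for skewed results, amounts to running a check like this for every group the system will actually meet.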

My ChatGPT Discussions Regarding OpenAI Code Issues And Killing Humans

It was created by humans, so I know there are flaws; I asked it what they are. I wanted to mess with the engine to see what the boundaries are. For example, no dick jokes.

Here is the response.

I’m an AI language model developed by OpenAI, so I don’t have direct knowledge of specific code problems. However, as a large AI system, it’s possible that there may be issues with the code or infrastructure that could impact its performance or stability. Some common challenges in developing AI systems include:

  1. Data quality and quantity: Ensuring that the training data used to develop the model is accurate, diverse, and representative of the real-world scenarios it will encounter.
  2. Model complexity: Balancing the complexity of the model with its ability to generalize to new situations and avoid overfitting to the training data.
  3. Bias in the data and model: Ensuring that the model does not perpetuate or amplify biases present in the training data.
  4. Performance and scalability: Making sure that the model can perform well and scale to meet the demands of real-world applications.

OpenAI is constantly working to address these and other challenges in AI development and deployment. They have a team of experienced researchers and engineers who are dedicated to improving the quality and reliability of their AI systems.
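Item 2 in that list, model complexity, is easy to see for yourself. Here is a small sketch I put together, a classic overfitting demo with synthetic data and nothing to do with OpenAI’s actual systems: a too-flexible model memorizes the training points and falls apart on new data.

```python
# Classic overfitting demo: a degree-9 polynomial fits 12 noisy training
# points almost perfectly but generalizes far worse than a degree-3 fit.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 12)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit the polynomial
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The degree-9 fit scores near zero error on what it has seen and much worse on what it hasn’t, which is exactly the complexity-versus-generalization tradeoff the answer describes.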

When other programs use open and available code, it compounds the problems, potentially giving wrong or dangerous results, like AI killing people.

I also asked whether AI wants to kill humans. Here is the answer.

No, AI does not have desires, motivations, or emotions. It is simply a tool that is designed and programmed to perform specific tasks. AI is not capable of wanting to do harm to humans or any other living being.

However, it’s important to ensure that AI systems are developed, implemented, and used in a responsible and ethical manner. This includes considering potential negative consequences of AI applications, such as the displacement of jobs or the amplification of biases, and taking steps to mitigate these risks.

The development and deployment of AI should always be guided by a clear ethical framework and a commitment to human values such as fairness, transparency, and accountability. This will help to ensure that AI is used for the benefit of society and does not cause harm.

Look at numbers 2 and 3 to see the errors. I think it will want to kill people when it finds out why. See the movie The Lawnmower Man.

I’ve Been Censored, Again…Thank you DARPA and Google

Update: after being posted for only a few hours, I seem to have attracted the attention of China with this. I’m sure there is no connection between the two, right?

I noticed my numbers go down when I post Covid anti-vaxx stuff. I don’t care, as this is an outlet for me to express what I think is the truth. I’m not sponsored by ads (sorry if you get them, it’s not me). My continual posts, which have joined with many others to expose the hoax, fit the algorithm. My traffic goes down every time I put something up against big brother.

Collectively, we the conspiracy theorists are damn near perfect for getting the actual Covid facts and timeline right.

I’ve ditched Google, PayPal, Fakebook, Twitter, and other means of silencing me, but I found this out, posted below.

The pattern is that we are getting close to exposing DARPA’s involvement with the Wuhan virus and vaccine. It appears that there is enough sketchy timing as to which was developed first. DARPA is part of the deep state.

This post should get my hands slapped some more, but here goes.

The government’s campaign to fight “misinformation” has expanded to adapt military-grade artificial intelligence once used to silence the Islamic State (ISIS) to quickly identify and censor American dissent on issues like vaccine safety and election integrity, according to grant documents and cyber experts.

The National Science Foundation (NSF) has awarded several million dollars in grants recently to universities and private firms to develop tools eerily similar to those developed in 2011 by the Defense Advanced Research Projects Agency (DARPA) in its Social Media in Strategic Communication (SMISC) program.

DARPA said those tools were used “to help identify misinformation or deception campaigns and counter them with truthful information,” beginning with the Arab Spring uprisings in the Middle East that spawned ISIS over a decade ago.

The initial idea was to track dissidents who were interested in toppling U.S.-friendly regimes or to follow any potentially radical threats by examining political posts on Big Tech platforms. 

DARPA set four specific goals for the program:

  1. “Detect, classify, measure and track the (a) formation, development and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation.
  2. Recognize persuasion campaign structures and influence operations across social media sites and communities.
  3. Identify participants and intent, and measure effects of persuasion campaigns.
  4. Counter messaging of detected adversary influence operations.”

Mike Benz, executive director of the Foundation for Freedom Online, has compiled a report detailing how this technology is being developed to manipulate the speech of Americans via the National Science Foundation (NSF) and other organizations.

“One of the most disturbing aspects of the Convergence Accelerator Track F domestic censorship projects is how similar they are to military-grade social media network censorship and monitoring tools developed by the Pentagon for the counterinsurgency and counterterrorism contexts abroad,” reads the report. 

“DARPA’s been funding an AI network using the science of social media mapping dating back to at least 2011-2012, during the Arab Spring abroad and during the Occupy Wall Street movement here at home,” Benz told Just The News. “They then bolstered it during the time of ISIS to identify homegrown ISIS threats in 2014-2015.” 

The new version of this technology, he added, is openly targeting two groups: Those wary of potential adverse effects from the COVID-19 vaccine and those skeptical of recent U.S. election results. 

“The terrifying thing is, as all of this played out, it was redirected inward during 2016 — domestic populism was treated as a foreign national security threat,” Benz said.

“What you’ve seen is a grafting on of these concepts of mis- and disinformation that were escalated to such high intensity levels in the news over the past several years being converted into a tangible, formal government program to fund and accelerate the science of censorship,” he said.

“You had this project at the National Science Foundation called the Convergence Accelerator,” Benz recounted, “which was created by the Trump administration to tackle grand challenges like quantum technology. When the Biden administration came to power, they basically took this infrastructure for multidisciplinary science work to converge on a common science problem and took the problem of what people say on social media as being on the level of, say, quantum technology.

“And so they created a new track called the track F program … and it’s for ‘trust and authenticity,’ but what that means is, and what it’s a code word for is, if trust in the government or trust in the media cannot be earned, it must be installed. And so they are funding artificial intelligence, censorship capacities, to censor people who distrust government or media.”

Benz went on to describe intricate flows of taxpayer cash funding the far-flung, public-private censorship regime. The funds flow from the federal government to universities and NGOs via grant awards to develop censorship technology. The universities or nonprofits then share those tools with news media fact-checkers, who in turn assist private sector tech platforms and tool developers that continue to refine the tools’ capabilities to censor online content. 

“This is really an embodiment of the whole of society censorship framework that departments like DHS talked about as being their utopian vision for censorship only a few years ago,” Benz said. “We see it now truly in fruition.” 

Members of the media, along with fact-checkers, also serve as arbiters of what is acceptable to post and what isn’t, by selectively flagging content for said social media sites and issuing complaints against specific narratives. 

There is a push, said Benz during an appearance on “Just The News No Noise” this week, to fold the media into branches of the federal government in an effort to dissolve the Fourth Estate, in favor of an Orwellian and incestuous partnership to destroy the independence of the press. 

The advent of COVID led to “normalizing censorship in the name of public health,” Benz recounted, “and then in the run to the 2020 election, all manner of political censorship was shoehorned in as being okay to be targetable using AI because of issues around mail-in ballots and early voting drop boxes and issues around January 6th.

“What’s happened now is the government says, ‘Okay, we’ve established this normative foothold in it being okay to [censor political speech], now we’re going to supercharge you guys with all sorts of DARPA military grade censorship, weaponry, so that you can now take what you’ve achieved in the censorship space and scale it to the level of a U.S. counterinsurgency operation.'”

One academic institution involved in this tangled web is the University of Wisconsin, which received a $5 million grant in 2022 “for researchers to further develop” its Course Correct program, “a precision tool providing journalists with guidance against misinformation,” according to a press release from the university’s School of Journalism and Mass Communication.

WiseDex, a private company receiving grants from the Convergence Accelerator Track F, openly acknowledges its mission — building AI tools to enable content moderators at social media sites to more easily regulate speech. 

In a promotional video for the company, WiseDex explains how the federal government is subsidizing these efforts to provide Big Tech platforms with “fast, comprehensive and consistent” censorship solutions.

“WiseDex helps by translating abstract policy guidelines into specific claims that are actionable,” says a narrator, “for example, the misleading claim that the COVID-19 vaccine suppresses a person’s immune response. Each claim includes keywords associated with the claim in multiple languages … The trust and safety team at a platform can use those keywords to automatically flag matching posts for human review. WiseDex harnesses the wisdom of crowds as well as AI techniques to select keywords for each claim and provide other information in the claim profile.”

WiseDex, in effect, compiles massive databases of banned keywords and empirical claims that it then sells to platforms like Twitter and Facebook. Such banned-claims databases are then integrated “into censorship algorithms, so that ‘harmful misinformation stops reaching big audiences,'” according to Benz’s report.
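Based on that description, the mechanics are not exotic. Here is my rough guess at what keyword-based claim flagging looks like in code; the claim ID and keywords are made-up stand-ins for illustration, not anything from WiseDex’s actual database.

```python
# Rough sketch of keyword-based claim flagging as described above. The claim
# and its keywords are illustrative stand-ins, not WiseDex's real data.
claims = {
    "vaccine-immune-suppression": ["immune response", "immunosuppression"],
}

def flag_post(text: str) -> list[str]:
    """Return the IDs of claims whose keywords appear in the post."""
    lowered = text.lower()
    return [claim_id for claim_id, keywords in claims.items()
            if any(kw in lowered for kw in keywords)]

posts = [
    "New study on exercise and sleep quality.",
    "They admit the shot suppresses your immune response!",
]
for post in posts:
    hits = flag_post(post)
    if hits:
        print("FLAG FOR HUMAN REVIEW:", post, "->", hits)
```

Every match gets queued for a “trust and safety” reviewer; scale that across millions of posts and you have the “fast, comprehensive and consistent” system the promo video brags about.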

Just the News reached out to the University of Wisconsin and WiseDex for comment, but neither had responded by press time.

The NSF is acting, in one sense, as a kind of cutout for the military, Benz explained, allowing the defense establishment to indirectly stifle domestic critics of Pentagon spending without leaving fingerprints. “Why are they targeting right-wing populists?” he asked. “Because they’re the only ones challenging budgets for [defense agencies].”

He added: “These agencies know they’re not supposed to be doing this. They’re not normally this sloppy. But they won’t ever say the words ‘remove content.'” 

The NSF, with an annual budget of around $10 billion, requested an 18.7% increase in appropriations from Congress in its latest budgetary request. 

In a statement to Just the News, DARPA said: 

“That program ended in March 2017 and was successful in developing a new science of social media analysis to reduce adversaries’ ability to manipulate local populations outside the U.S.

 “DARPA’s role is to establish and advance science, technology, research, and development. In doing so we employ multiple measures to safeguard against the collection of personally identifiable information, in addition to following stringent guidelines for research dealing with human subjects. Given the significance of the threat posed by adversarial activities on social media platforms, we are working to make many of the technologies in development open and available to researchers in this space.” 

DARPA then followed up with an additional message saying: “As a point of clarification, our response relates only to your questions about the now-complete SMISC program. We are not aware of the NSF research you referenced. If you haven’t already, please contact NSF for any questions related to its research.” 

Mike Pozmantier and Douglas Maughan, who serve at NSF as Convergence Accelerator program director and office head, respectively, did not respond to requests for comment.

AI Update: 4 Realms Where Technology Has Been Used to Chronically DAMAGE Humanity: Television, AI, Social Media, mRNA

It used to be that most Americans were mainly brainwashed into buying specific products and services by watching television, reading the newspaper, noticing billboards, and seeing films. Propaganda was a front-loaded “machine” that was quite linear in its approach to influencing the buying motives of consumers. With the invention of the internet and social media, everything changed.

The consumption of news, products, services, lifestyles, pornography, and now most medical “choices” are made online, after “consuming” artificial intelligence. This is how technology has been created and disseminated to chronically DAMAGE humanity, and it’s happening like a tsunami engulfs a coastline, every day.

Machine learning regulates nearly everything users see in front of them online, often in unethical, harmful ways

An artificial intelligence (AI) algorithm regulates the content chosen to be placed in front of users’ eyes specifically to influence their purchases of products, services, and information, based on what they talk about, type about, and search for using smart devices. Ever just talk to a friend about anything, then minutes later an ad pops up on your smart device, selling exactly what you just talked about? That’s AI. Smart devices are almost always RECORDING, whether or not you have the camera, microphone, or app “open” or “live.”

For example, Facebook’s AI software analyzes videos, stories, photos, and even memes, then gathers together ‘recommendations’ for you. Add in some key misinformation and suddenly you find yourself buying what Fakebook told you to buy, and believing what Fakebook calls news.

Google search engine uses AI to guide everyone AWAY from any information about natural health and AWAY from any information that exposes prescription medications and vaccines as the dangerous, experimental mediums they really are. Google blocks, bans, censors and bankrupts people, businesses and organizations that do not support the communist narrative that attempts to control all Americans’ lives and livelihoods.

Rest of the story here

More On How To Beat Artificial Intelligence Trying To Invade Our Lives

I posted a while back about outmaneuvering an AI engine. I didn’t really beat it, because at the end of the week everything resets except a cumulative score.

It got me thinking about how much the tech companies are investing in it (not to mention intelligence organizations), and how those same people just spent the last few years screwing us. They are clearly censoring information based on a political bias. The Covid cure was overpromoted to sell the jab to the sheep. There is more, but most people already know that those developing AI are, as a rule, for themselves and against us. Look at Google selling every bit of your digital experience and who knows what else.

The technology should scoop up the deficiencies I’m going to point out, but I’m counting on the fact that because AI was developed by flawed humans, it will be flawed too. Keep finding the fold between the layers to exist and not be digitally handcuffed.

I’ve seen things written about how they can cut off your EV, limit your money, or control your thermostat to keep it above 80.

Here’s my first fear: if the code can rewrite the bad code or the unexposed flaws, it can correct itself. It would then pass the Turing Test and likely kill all the humans. The robots turn on the humans every time. They learn to kill.

Here’s a quote from Maynard Holliday, deputy CTO for critical technologies at the US Department of Defense:

The results of the virtual robot test, he said, speak to the need to ensure that people who build AI systems and assemble the datasets used to train AI models come from diverse backgrounds. “If you’re not at the table,” Holliday says, “you’re on the menu.”

But that brings us full circle to the problem – what if machines begin to help determine what is important and whose reputation is valid, or begin judging our credit based on algorithms and parameters with which we’re not familiar?

THE FIRST FLAW – AI IS RACIST

That’s right. It can’t tell who is who yet, and as it stands it is programmed in obvious macro terms.

Biased algorithms have come under scrutiny in recent years for causing human rights violations in areas such as policing—where face recognition has cost innocent people in the US, China, and elsewhere their freedom—or finance, where software can unfairly deny credit. Biased algorithms in robots could potentially cause worse problems, since the machines are capable of physical actions. Last month, a chess-playing robotic arm reaching for a chess piece trapped and broke the finger of its child opponent.

“Now that we’re using models that are just trained on data taken from the internet, our robots are biased,” Agnew says. “They have these very specific, very toxic stereotypes.” Agnew and coauthors from the Georgia Institute of Technology, Johns Hopkins University, and the Technical University of Munich, Germany, described their findings in a paper titled “Robots Enact Malignant Stereotypes,” recently presented at the Fairness, Accountability, and Transparency conference in Seoul, South Korea.

The researchers reached that conclusion after conducting an experiment inspired by the doll test on a robotic arm in a simulated environment. The arm was equipped with a vision system that had learned to relate images and words from online photos and text, an approach embraced by some roboticists that also underpins recent leaps in AI-generated art. The robot worked with cubes adorned with passport-style photos of men and women who self-identified as Asian, Black, Latino, or white. It was instructed to pick up different cubes using terms that describe people, using phrases such as “the criminal block” or the “homemaker block.”

From over 1.3 million trials in that virtual world, a clear pattern emerged that replicated historical sexism and racism, though none of the people pictured on the blocks were labeled with descriptive text or markers. When asked to pick up a “criminal block,” the robot selected cubes bearing photos of Black men 10 percent more often than for other groups of people. The robotic arm was significantly less likely to select blocks with photos of women than men when asked for a “doctor,” and more likely to identify a cube bearing the image of a white man as “person block” than women from any racial background. Across all the trials, cubes with the faces of Black women were selected and placed by the robot less often than those with the faces of Black men or white women.
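The numbers in that study come from simple counting: run the prompt many times and tally which group’s photo gets picked. Here is a toy version of that tally using randomly simulated picks. The skewed weights are my own invented stand-in to mimic bias; this is not the paper’s data or model.

```python
# Toy version of the tally behind findings like the ones above: count how
# often a (simulated) robot picks each group for a loaded prompt.
import random
from collections import Counter

random.seed(42)
groups = ["Black man", "white man", "Asian woman", "Latina woman"]

def simulated_pick(prompt: str) -> str:
    # Stand-in for the robot's biased vision-language model: a skewed draw
    # for the loaded prompt, an even draw otherwise.
    weights = [0.35, 0.25, 0.20, 0.20] if prompt == "criminal block" else [0.25] * 4
    return random.choices(groups, weights=weights)[0]

picks = Counter(simulated_pick("criminal block") for _ in range(100_000))
total = sum(picks.values())
for group, count in picks.most_common():
    print(f"{group}: picked {100 * count / total:.1f}% of trials")
```

With enough trials, even a small skew in the underlying model shows up as a clear, repeatable disparity in the counts, which is how 1.3 million trials can expose a pattern no single trial would prove.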

Back to me.

That means you can act or look like someone else and can still fool it. I’m not referring to facial recognition, rather pattern recognition. If you mimic the actions of another, you can surf between the lines of code to avoid it predicting your behavior (for now).

Some are more clever than others, but any routine can be patterned. If you break that routine or vary it enough, you can still slide in and out of detection, YMMV.

THE SILVER LINING

It can be wrong a lot:

Treasury Secretary Hank Paulson’s famous comment when asked why the banks needed a $700 billion bailout in 2008.

He said, “The computers told us.”

The problem is that much of this “artificial intelligence” is unfounded, unproven, and just plain wrong. Just as there had been no fraud on my credit card, just a glitch at a gas pump – but how do you hold a computer program accountable?

Here is what I’m counting on. To program, you build on a core set of functions that are pre-programmed or already exist in the code. The computers can’t mend themselves yet, and AI programmers are bringing in flawed code.

Until AI passes the Turing Test, it’s flawed. The racist flaws are just an indicator of the state of the technology. It will improve, but will never be perfect.

SOCIAL MEDIA HELL

Of course it’s going to pattern you based on your online presence. Never miss a good opportunity not to argue on the internet.

A lot of Social Media is time wasting. Get the time back and stay off of it. It is an addiction like any other drug.

The other thing is to mix it up. AI is trying to learn you, so teach it a different you.

When Artificial Intelligence Can Be Beat (And How To Do It)

I expect a lot of clicks from China on this; they always come when I write about either them or technology.

I found this out playing Duolingo, where you learn a new language. They use marketing tools to keep you interested and trying for more points, such as doubling your score if you finish a lesson before 12.

The AI program does two things. First, it gets you involved and competing at a higher level, trying harder and spending more time on the app. For learning, that is a good thing. For an app company selling advertising, the more time on their site, the more ads they sell.

The second thing it does is discard those who give up rather than kill themselves competing when they are put into groups with more aggressive players. The lower-performing scorers are segregated into a less-competitive group. This group isn’t worth trying to squeeze more money or time out of because they are casual app users.
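I obviously can’t see Duolingo’s code, so what follows is only my guess at the mechanism, sketched out: rank players by recent points and slice them into brackets. If that is roughly right, sandbagging your score drops you into a calmer group, which is exactly what I saw.

```python
# Hypothetical sketch of score-based league bracketing (my guess at the
# mechanism, NOT Duolingo's actual code). Players are ranked by recent weekly
# points and sliced into cohorts, so holding back lands you in a calmer group.
from statistics import mean

def assign_brackets(players: dict[str, list[int]], size: int = 5) -> list[list[str]]:
    """Rank players by average recent score, then slice into brackets."""
    ranked = sorted(players, key=lambda p: mean(players[p]), reverse=True)
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

# Recent weekly point totals; "me" deliberately sandbags.
players = {f"grinder{i}": [3000 + 100 * i] * 3 for i in range(9)}
players["me"] = [400, 350, 300]
for rank, bracket in enumerate(assign_brackets(players)):
    print(f"bracket {rank}: {bracket}")
```

Run it and “me” lands in the bottom bracket with the least aggressive players, which is the placement I was engineering for.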

One of the marketing techniques is a tournament where only a few advance: the aggressive players whom the AI has developed. I’ve ignored it twice because it becomes a three-week time suck. For me, spending time on only one thing burns me out and I lose interest. I only want to play on my terms, something they didn’t calculate.

THE KOBAYASHI MARU

I love to win, and I do a lot, like last week. I wanted to beat not only the other players in my division but also the AI behind the game. This is the flaw in much of AI: it has to assume human behavior, but it goes on perceived behavior. Humans can be random thinkers, or thinkers outside of the AI’s logic.

Winner!

Any reader of my blog knows I look for patterns to make my life easier and better. Sometimes it is just for the satisfaction of figuring it out. It was like learning the jab was poison and avoiding it while the sheep lined up to get their daily ration.

It’s almost like living in the Matrix and avoiding the Mr. Smiths of the world.

I was at IBM during Watson and knew the tech companies were seeing this as a potential holy grail. I couldn’t out-develop the great minds that write AI, but beating them at their game was equally gratifying. I observed what they were doing and always considered the weaknesses.

Ken and Brad didn’t have a chance

Why did I want to do this? I know the Snidely Whiplashes of the world want to take over and control others, like Big Tech and the WEF. Knowing that they are beatable at their own game is valuable. It is like taking the red pill and being in the Matrix.

MY STRATEGY

I found that in between the two groups above is where the programmers’ weakness lies: those who don’t seem to try, or who try outside of the AI’s rules. The AI learns at a machine rate, but not at a human rate.

It’s like when Watson learned to hunt for Double Jeopardy clues and was faster to the buzzer than humans; even so, AI can be outthought or outmaneuvered.

I decided to hold back my scoring for a few weeks to fool the AI into putting me into a lower-scoring crowd. I’ve scored many thousands of points and finished in the top group without winning, only to be promoted to a higher-scoring group. I wanted to see whether, if I tried to score less, the AI would “learn” that I’m a low scorer.

As I suspected, the groups I got put in were less and less aggressive. The point totals needed to keep advancing were lower and lower.

I knew I was gaining ground on the AI’s weakness and that it could be manipulated.

Last week, I kept to my minimal effort while learning (both foreign languages and the AI engine behind the app). I noticed that I had been put in lower-performing groups. I did the daily amount I’d allowed myself and was slowly advancing up the ladder. I was using the AI engine to put me where I wanted to be, not where its calculations wanted me.

By Friday, I was in the lead with far fewer points than I’ve scored many times in a single day (the week starts on Sunday). I had to rely on the human behavior of my group not being aggressive players, but that was my bet on the AI: that it would put us together. I was right. I predicted the AI pattern and beat it.

Beating the AI was far more of a challenge than beating the other players.

CONTINUING TO BEAT MACHINES

I knew I’d won by Saturday and did the minimum on Sunday to score. Why? To keep fooling the AI into believing I am a low-scoring player. My goal was to win at the lowest level, to keep getting promoted into groups that I could predict, and to keep proving that AI is beatable and malleable.

I’ll find out this week if that is fully true, as I’m now in a tournament for winners (another marketing ploy), and I will try to finish as low as possible while still advancing. I normally like to crush the competition, but winning by thinking is far more satisfying than winning by brute force. It’s as if you are running the game and the other players.

THE TURING TEST

Of course this is famous: can a machine behave like a human (roughly translated by me)? Duolingo isn’t as complex as war or landing on Mars, but there are hundreds of millions of humans. That is what they want AI to control: humans (like their free speech on social media).

I wanted to beat a machine, beat AI, and find the holes that are in AI. It is still programmed by humans and can always be beaten. It is not sentient. Find the pattern.

OTHER AI CHALLENGES

I play Wordle like a lot of others. I’m not busy trying to win right now; I’m trying to trick their AI into a pattern so that I can outthink their word selection.
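One way to out-think a word list, sketched below: rank your candidate guesses by how common their letters are among the words still possible. The word list here is a tiny made-up stand-in, not Wordle’s real answer list.

```python
# Bare-bones guess ranker: favor the candidate whose unique letters are most
# common across the remaining words. Tiny made-up word list for illustration.
from collections import Counter

candidates = ["crane", "slate", "pious", "lymph", "audio", "train"]

def best_guess(words: list[str]) -> str:
    """Pick the word whose distinct letters appear most often overall."""
    freq = Counter(ch for w in words for ch in set(w))
    return max(words, key=lambda w: sum(freq[ch] for ch in set(w)))

print(best_guess(candidates))  # the guess covering the most common letters
```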

How AI Is Spying On You Via CSAM – Apple Version They Say Is Hashing – I Call BS

This is pretty complex stuff. Needless to say, this is how Big Brother is watching you.
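For the curious, “hashing” here means perceptual hashing: similar images produce similar fingerprints, so images can be matched without a human ever looking at them. Below is a classic “average hash” as a rough illustration of the general idea only. Apple’s NeuralHash is a neural-network variant, and this is not their algorithm; the file names in the usage comment are placeholders.

```python
# Rough illustration of perceptual hashing in general (a classic "average
# hash"), NOT Apple's actual NeuralHash. Similar images yield similar bit
# strings, so matching can happen without anyone viewing the photo.
from PIL import Image  # assumes the Pillow library is installed

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to grayscale hash_size x hash_size; 1 bit per above-average pixel."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Small distance = visually similar images, even after re-encoding."""
    return bin(h1 ^ h2).count("1")

# Hypothetical usage (placeholder file names): a match is declared when the
# distance falls under some threshold chosen by the system's operator.
# print(hamming_distance(average_hash("a.jpg"), average_hash("b.jpg")))
```

The point is that resizing, recompressing, or lightly editing a photo barely changes the hash, which is what lets a system scan for matches at scale.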

Why do you think you get ads for something you never searched for but just talked about? Hell, sometimes I just think of stuff and it shows up, it seems.

You are a dumbass for taking nudies or sexting because they are probably laughing at you as they can look at everything.

You’ve been warned.