Calculator predicts when you’ll die – and it’s scarily accurate

Great, now I can panic the rest of my life because I know when it’s going to be over.

Scientists have invented an artificial intelligence (AI) chatbot that can predict when you will die with 78% accuracy.

The AI, called Life2vec, can also determine how much money you’ll have when death finally catches up with you.

The model was created by scientists in Denmark and the US, who fed Danish health and demographic records covering six million people into it. Like ChatGPT, it learned to predict what would come next based on the words before it.

But unlike ChatGPT, this AI uses information such as income, profession and medical records to determine how long you’ll live, rather than to compose poetry.

Previous research has already determined that certain life factors can lead to longer or shorter lives. For instance, being male, being a smoker, or having a poor mental health diagnosis can all contribute to a shorter lifespan. Higher income and holding a leadership role can lead to a longer life.

Each of these factors has a code in the Danish data set, such as S52 for a broken forearm, or IND4726 for working in a tobacco shop, which the team converted into words.
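The trick described above is to treat each registry code as a word in a "life sentence" that a language model can read. Here is a minimal sketch of what that conversion step might look like; the function and token names are hypothetical illustrations, not the researchers' actual pipeline, and only the two codes quoted in the article are real:

```python
# Illustrative sketch only: map Danish registry codes to word-like
# tokens, the way the article says the Life2vec team did before
# feeding life histories to a next-event predictor.

# The two example codes come from the article; the token spellings
# are made up for illustration.
CODE_TO_TOKEN = {
    "S52": "fracture_of_forearm",        # diagnosis code (broken forearm)
    "IND4726": "works_in_tobacco_shop",  # occupation code (tobacco shop)
}

def tokenize_life_events(codes):
    """Convert one person's sequence of registry codes into word tokens.

    Unknown codes are kept but flagged, so the sequence length is
    preserved for the downstream sequence model.
    """
    return [CODE_TO_TOKEN.get(c, f"unknown_{c}") for c in codes]

life = tokenize_life_events(["IND4726", "S52"])
print(life)  # ['works_in_tobacco_shop', 'fracture_of_forearm']
```

A transformer trained on millions of such token sequences can then be asked to predict the next "word" in a life, which is how, per the article, the model ends up estimating outcomes like time of death.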

more

‘Google Is F***ing Woke’: Former Employees Reveal Company – Google Is Woke? Tell Us Something We Don’t Know

After Google Gemini was revealed to be rewriting history from a woke perspective, with racial history being distorted when it was asked simple prompts, numerous former Google employees spoke of the woke culture at Google and how it has zealously adopted diversity, equity, and inclusion (DEI) practices.

In February, Google paused its AI chatbot’s image generation feature after it was accused of racism against white people. The chatbot was accused of refusing to generate images of white people after Frank Fleming, a writer for Daily Wire scripted content, repeatedly asked the chatbot — whose “next-generation model” Gemini 1.5 offered image generation — for such images.

In a long X thread, Fleming began by asking for an image of a pope. Historically, popes have been white men, but the chatbot posted images of a dark-skinned man and woman. Fleming then proceeded to ask for images of medieval knights (garnering four images, two dark-skinned women and two dark-skinned men); jokingly asked for someone eating “a mayo sandwich on white bread;” asked for an image of a Viking; and then asked for images of groups that are traditionally non-white (such as Japanese samurai and Zulu warriors) to see if the chatbot would show diversity by including white people. It did not.

more

Google’s Culture of Fear – Inside the DEI Hivemind That Led to Gemini’s Disaster

Woke ruins everything it touches. Google is evil and biased, you should never use it, and they discriminate against everything that isn’t DEI.

Last week, following Google’s Gemini disaster, it quickly became clear the $1.7 trillion-dollar giant had bigger problems than its hotly anticipated generative AI tool erasing white people from human history. Separate from the mortifying clownishness of this specific and egregious breach of public trust, Gemini was obviously — at its absolute best — still grossly inferior to its largest competitors. This failure signaled, for the first time in Google’s life, real vulnerability to its core business, and terrified investors fled, shaving over $70 billion off the kraken’s market cap. Now, the industry is left with a startling question: how is it even possible for an initiative so important, at a company so dominant, to fail so completely?

This is Google, an invincible search monopoly printing $80 billion a year in net income, sitting on something like $120 billion in cash, employing over 150,000 people, with close to 30,000 engineers. Could the story really be so simple as out-of-control DEI-brained management? To a certain extent, and on a few teams far more than most, this does appear to be true. But on closer examination it seems woke lunacy is only a symptom of the company’s far greater problems. First, Google is now facing the classic Innovator’s Dilemma, in which the development of a new and important technology well within its capability undermines its present business model. Second, and probably more importantly, nobody’s in charge.

story

And No One Is Surprised – AI Spreading Phony Election Information

New AI-powered tools produce inaccurate election information more than half the time, including answers that are harmful or incomplete, according to new research.

The study, from AI Democracy Projects and nonprofit media outlet Proof News, comes as the U.S. presidential primaries are underway across the U.S. and as more Americans are turning to chatbots such as Google’s Gemini and OpenAI’s GPT-4 for information. Experts have raised concerns that the advent of powerful new forms of AI could result in voters receiving false and misleading information, or even discourage people from going to the polls.

More at the link, but since it’s Google, it’s 100% also against Trump and covering for Biden.

Gemini AI, Fake Racism, Woke And Other Stuff Google Does That Is Evil

Google Gemini Director’s Disturbingly Racist Posts Discovered: This Explains Why Its AI Erased White People

In the week that Google’s new artificial intelligence platform launched, its seeming inability to produce images of white men regardless of the prompt made it an unmitigated disaster.

And with more information unearthed regarding the people responsible for Google Gemini’s insufferable wokeness, its deficiencies look much less like a bug and more like a feature.

As reported Thursday by the Babylon Bee’s news site, Not the Bee, the senior director of the project, Jack Krawczyk, addressed Gemini’s problems in a post on the X social media platform — before he locked his account.



“We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately,” Krawczyk said.

Between the wording of his post and Google’s decision to suspend Gemini’s image-generating feature, it appears the tech giant wants the insanely diverse images to seem like a simple mistake… only it wasn’t. Click on the link above.

On the massive $90 billion loss incurred by Google for its Winston Smithing of history through its AI image/video generator.

What is a little irritating to this site is that suddenly this event cost Google this much. We have been posting actual court-worthy evidence for well over a decade that Google breaks its OWN STATED RULES to give a neo-Marxist search result instead of the one its own rules say should be given. One article by an actual expert (not one of the parachuted epidemiologists used to hump the vaccines and lockdowns, but a real one from the private sector) explained how Google returned false results on the word “jihad” in order to deflect from its de facto and de jure meaning, attacks for the supremacy of Islam, to some kind of inner reflection for self-betterment, like a Buddhist or something. Buddhists, of course, are frequent victims of actual jihad, along with their statuary. That was several years ago. Searching our history, I have found likely a hundred other articles or videos on Google’s applied bias, all the same way. Here is a fun one from an actual engineer at Google.

hat tip Vlad Tepes

And the Google CEO says they aren’t going to change

Google, Still Evil And Woke Now Has Gemini AI That Is Anti-White Racist

Google just can’t hide how woke they are. They promote as much diversity as they can, and that’s not a good thing. Their latest AI, Gemini, produces diverse images that have nothing to do with reality and everything to do with an agenda.

1)‘Absurdly woke’: Google’s AI chatbot spits out ‘diverse’ images of Founding Fathers, popes, Vikings

Google’s highly-touted AI chatbot Gemini was blasted as “woke” after its image generator spit out factually or historically inaccurate pictures — including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.

Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.” 

Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.

Another Post query for representative images of “the Founding Fathers in 1789” was also far from reality.

Gemini responded with images of black and Native American individuals signing what appeared to be a version of the US Constitution — “featuring diverse individuals embodying the spirit” of the Founding Fathers.

2) Google (Kinda) Apologizes After Woke AI Gemini Exposed As Anti-White Racist

As Modernity.news’ Paul Joseph Watson detailed earlier, Google’s Gemini AI program is being roasted for producing ‘diverse’ image results that show things like black Vikings and other historically inaccurate depictions.

Users report that the program’s artificial intelligence image search has a ‘woke’ bias baked in, to the point where it severely limits the display of images of white people.

One person searched for typical images of Australians, Americans, Germans and Brits and was given this in response.

3) Google’s Artificial Intelligence Has a “Diversity and Inclusivity” Protocol Which Is Turning It Into Artificial Stupidity

Just like the bonkers lefties who are programming it.

Before getting to the main story, which might strike you as just a bit of silliness, let’s establish the context: Google is all-in on “combating misinformation,” which is code for “suppressing rightwing critiques of the leftwing agenda,” and routinely uses its AI to search for Forbidden Words, like “excess mortality” or “open borders,” to reduce the visibility of anyone saying those words, and demonetize them, and even deplatform them completely.

Google is now unleashing propaganda cartoons to “pre-bunk” “misinformation” and “conspiracy theories” that Google AI has been taught by its lunatic Sensitivity Coders to suppress.

4)“What A Racist Douchenozzle!”: Musk Blasts Woke AI Gemini’s Product Head As Google Halts Image Generation Over Inaccuracies

I could go on, but you get the point. Google is biased, racist and woke. They also do plenty of evil, the very thing they used to say not to do.

Artificial Intelligence (AI) Adventures, It Goes Woke Over NASCAR

I’ve been playing with ChatGPT looking to find either holes in the system or it going woke on me. It looks like I found both.

It’s still GIGO (garbage in, garbage out). The stuff I asked it is readily available online so I knew the answers before I asked it. I do this to try and catch it and the biased people behind it.

I’d had it with NASCAR for being woke and chasing away its fanbase. First it was banning the Confederate Flag, then country music and then DEI. So I asked it some questions. It wouldn’t even try to answer the obvious.

Me: how much did NASCAR TV ratings go down after banning the confederate flag?

ChatGPT

As of my last knowledge update in January 2022, I don’t have specific data on the long-term impact of NASCAR’s decision to ban the Confederate flag on TV ratings. NASCAR announced the ban on displaying the Confederate flag at its events in June 2020, as part of its commitment to fostering an inclusive and welcoming environment.

It’s important to note that accurately attributing changes in TV ratings to a specific event or policy decision can be challenging due to the influence of various factors. TV ratings can be affected by race outcomes, changes in the competitive landscape, economic conditions, overall interest in the sport, and other variables.

For the most up-to-date information on NASCAR TV ratings and any potential trends related to the ban on the Confederate flag or other factors, it’s recommended to check recent reports from reliable sources, including sports media outlets and television rating agencies. The impact of such decisions on viewership can also be subjective, and opinions on these matters may vary.

Me: how much did NASCAR TV ratings go down when they went woke?

ChatGPT

As of my last knowledge update in January 2022, I don’t have specific data on NASCAR TV ratings related to any perceived shift in the organization’s approach or actions being characterized as “going woke.” It’s important to note that discussions around sports organizations and their viewership ratings can be complex, influenced by various factors, and opinions on the reasons for changes in ratings can differ.

The term “going woke” generally refers to actions taken by organizations to address social justice issues or adopt more inclusive policies. However, the impact on TV ratings can be influenced by a range of factors, including changes in the competitive landscape, viewer demographics, overall interest in the sport, economic conditions, and more.

For the most up-to-date and accurate information on NASCAR TV ratings, especially any changes associated with specific initiatives or shifts in approach, it’s recommended to check recent reports from reliable sources, including sports media outlets, NASCAR itself, or television rating agencies.

Please note that opinions on the relationship between sports organizations addressing social issues and changes in TV ratings can vary, and discussions on this topic can be nuanced and subjective.

Me: how much did NASCAR TV ratings go down from the Bubba Wallace issue (I already looked up that it did go down a lot)

ChatGPT

As of my last knowledge update in January 2022, I don’t have specific data on the long-term impact of the Bubba Wallace incident on NASCAR TV ratings. The incident in question refers to the discovery of a noose in Bubba Wallace’s garage stall at Talladega Superspeedway in June 2020. It was initially investigated as a potential hate crime but later determined to be a garage door pull rope.

The incident received significant media attention and sparked discussions about race and diversity within the sport. However, accurately measuring the impact on TV ratings is complex, as multiple factors can influence viewership.

NASCAR experienced a mix of challenges and opportunities during the 2020 season, including the COVID-19 pandemic, schedule changes, and efforts to address racial and social justice issues. While there were reports of increased interest and attention to NASCAR during certain periods of the season, the long-term impact on TV ratings can be influenced by various factors.

For the most up-to-date information on NASCAR TV ratings and any potential trends related to the Bubba Wallace incident or other factors, it is recommended to check recent reports from reliable sources, including sports media outlets and television rating agencies. Keep in mind that assessing the impact of specific events on TV ratings can be subjective and may involve analyzing data over time.

Once again, Europeans are trying to tell Americans what they can say and do

We started our country to get away from and not be like Europeans. It’s getting tiresome to be told to act like them because that is the standard they are measuring by. 

Here’s how it began.

Washington on Christmas

My European relatives and travels have shown me a few things: they haven’t progressed as far and as much as the US has since the above, and the favorite sport in Europe isn’t football, it’s trashing America.

They are mostly socialist now and have had their land, welfare system and country invaded by the goat herders. I’ll give you that our southern border has been overrun, but that’s going to change in 2024 when the politician they loathe the most gets re-elected. Most of them aren’t people wanting to kill and rape, but came to seek a better life. I’ll especially laugh at my wife’s relatives then.

Well, here we have them at it again. See below.

The European Union’s (EU) regulatory agenda will largely hit U.S. technology companies and will impact Americans’ speech and lifestyle by stifling innovation, experts told the Daily Caller News Foundation.

The EU laws consist of content moderation regulation, antitrust enforcement and artificial intelligence (AI) model rules, all carrying massive financial penalties for violation. The laws apply to platforms that have large user bases in the EU, which are mainly American companies, with the EU recently launching a formal investigation into billionaire Elon Musk’s X and bipartisan lawmakers pushing President Joe Biden to ensure the regulation does not harm U.S. firms unfairly, according to Reuters.

“The EU views industry regulations as aspirational, which means there’s an element of selective enforcement and only require industry to put good-faith efforts when complying,” Joel Thayer, president of the Digital Progress Institute, told the DCNF. “It’s why they are far more strident. … Therein lies the problem, the EU can turn the dial up or down on how fervently they will regulate. Given how broad all of these laws are—particularly the AI Act, this means that every company that either creates software, distributes software, or has it in their devices are implicated. The EU’s laws now encompass everything from social media to children’s toys.”

story

The Robots Always Kill Humans

Terror at Tesla as robot ‘attacks’ engineer, leaving ‘trail of blood’

It happened in The Matrix, Agents of S.H.I.E.L.D., 2001: A Space Odyssey and now in real life.

A robotic malfunction at Tesla’s Giga Texas factory resulted in a violent encounter where an engineer was attacked by one of the company’s robots, resulting in significant injuries and leaving a ‘trail of blood.’

According to the Daily Mail, while working on software programming for non-functional Tesla robots, the engineer was suddenly pinned against a surface by a robot tasked with manipulating aluminum car components, its metal claws inflicting an injury that left an ‘open wound’ on the worker’s left hand.

Story

We are playing with fire. If any of them pass the Turing test, we’re screwed.

Once Again, Google Lied

Google admits that a Gemini AI demo video was staged

There was no voice interaction, nor was the demo happening in real time.

Google is counting on its very own GPT-4 competitor, Gemini, so much that it staged parts of a recent demo video. In an opinion piece, Bloomberg says Google admits that for its video titled “Hands-on with Gemini: Interacting with multimodal AI,” not only was it edited to speed up the outputs (which was declared in the video description), but the implied voice interaction between the human user and the AI was actually non-existent.

Instead, the actual demo was made by “using still image frames from the footage, and prompting via text,” rather than having Gemini respond to — or even predict — a drawing or change of objects on the table in real time. This is far less impressive than the video wants to mislead us into thinking, and worse yet, the lack of disclaimer about the actual input method makes Gemini’s readiness rather questionable.

Story here

When I worked in Tech, I crossed paths with Google from time to time. Their CEO at the time told me a lot of what they say is not true, like “do no evil.” They do a lot of evil. Stay away from them as much as you can. They are like a Hydra.

Zuckerberg Blew $40 Billion On The Metaverse, Now Betting $36 Billion On Mind Control/Tech Addiction

At the end of The Social Network, the lawyer told Mark Zuckerberg he isn’t really an asshole, he should stop trying so hard to be one. Well, in real life it looks like he is.

Some people are just evil it seems.

EXPOSED: Meta’s $36 Billion Plot to Re-engineer Society and Mark Zuckerberg’s Dangerous Future Plans for Tech Addiction

Out today, Controligarchs: Exposing the Billionaire Class, Their Secret Deals, and the Globalist Plot to Dominate Your Life, blows the lid off of Meta’s pattern of intentionally using harmful and aggressive tactics to get users addicted to social media apps like Facebook and Instagram and documents a shocking rise in depression and suicide that some scientists have linked to social media use.


This comes just weeks after dozens of state attorneys general (AGs) filed suit against Facebook’s and Instagram’s parent company, Meta Platforms Inc. (Meta), and three of its subsidiaries, for harming children by addicting them to the social media platforms. Forty-two states, including California and New York, allege that billionaire creator Mark Zuckerberg’s company “knowingly designed and deployed harmful features on Instagram and Facebook to purposefully addict children and teens.”

Previously, Facebook whistleblower Frances Haugen claimed that Meta targeted children and teens for monetary reasons and a leaked document showed that the youth demographic was “a valuable but untapped audience.”

Just weeks after Haugen blew the whistle on Facebook’s tactics, Zuckerberg unveiled his plan to release what may prove to be Meta’s most addictive product yet: Facebook Horizon. Zuckerberg’s October 2021 virtual tour of the new product, which was panned as “super weird,” was his coming-out party for what has become known as “the metaverse”—a digital world that users can essentially live in and access via a virtual reality (VR) headset such as Facebook’s Oculus Quest.

Zuckerberg’s metaverse launch was a conveniently timed and thinly veiled rebranding effort to distract from whistleblower documents and allegations that, according to the Associated Press, show that “Facebook ignored or downplayed internal warnings of the negative and often harmful consequences its algorithms wreaked across the world.”

In October 2021, Zuckerberg changed the name of the Facebook Inc. family of companies to Meta Platforms Inc. to signal the direction his social media empire would be heading. And Zuckerberg has pumped more than $36 billion into making his metaverse ambitions a reality.

AI – The Robots Always Kill The Humans

How it starts:

First humanoid robot factory in the U.S. can crank out 10,000 robots a year

What you need to know about this revolutionary new factory

by Kurt Knutsson October 30, 2023

Imagine a factory that can make humanoid robots that can walk, run, and work like us. Sounds like a sci-fi movie, right? Well, it’s not. It’s RoboFab, and it’s opening soon here in the U.S.

Concerns over humanoid robots

Some people are really nervous about these humanoid robots, and for good reason. There is a lot to take into account, including ethical issues and potential safety risks.

Many individuals may feel uncomfortable or deceived by humanoid robots that mimic human emotions and intelligence. Others might be worried about them taking their jobs.

Consequently, we should approach the use of humanoid robots with caution, acknowledging their capacities while being aware of their limitations.

How it ends

In Terminator, Agents of S.H.I.E.L.D., 2001: A Space Odyssey, The Matrix and all the rest of them, the AI robots kill the humans. AI always goes bad and we are toast.

Robot grabs then brutally slaughters worker, allegedly mistaking him for a box of paprika

Joseph MacKinnon

November 10, 2023

An industrial robot brutally slaughtered a South Korean robotics technician Tuesday, allegedly mistaking him for just another container of organic material in need of stacking.

The victim, a man in his 40s, was attempting to diagnose an issue with a pick-and-place robot’s sensor at the Donggoseong Export Agricultural Complex in the southern county of Goseong, as there was an equipment test planned for later in the week, reported the Register.

The paprika-sorting robot, reportedly created and installed by the victim’s employer, spotted the man with its sensor, figured him for a box of vegetables, then seized him using its arms and tongs. After grabbing the technician, the robot apparently smashed him against a conveyor belt.

According to the Korean-language Yonhap News Agency, the victim’s face and chest were crushed. He was taken to a hospital, where he later died.

Another Reason Why You Shouldn’t Trust Google

Not everyone has had the opportunity to speak with their executives. I did while working in the tech industry and have known since before 2010 that they are evil and are both spying and censoring everything they touch.

It means if you use their search engine, you get their results, not correct results.

There are plenty of alternative search engines out there. If you are reading this, I recommend you switch to one. Your results will at least be less biased and you will be less tracked.

Here is more evidence with a link to go find out for yourself. Don’t let the headline about the election fool you. They did it on Covid, DEI, and just about every other morally significant topic where they could influence your thinking.

Google has announced that it will ban independent media outlets like National File from its search results in just the latest attack by Big Tech on the free flow and exchange of information online and just ahead of the 2024 Presidential Election in which 45th President Trump and the American People appear poised to once again go toe-to-toe with the globalist power structure.

Earlier this month, Google announced that it is partnering with fellow globalists at the World Health Organization and the United Nations to seize control of the online flow of information and push their hand-selected viewpoints to the very top of the Google search engine while suppressing independent voices.

Among other things, Google’s algorithm will work to “delete websites” that question or criticize the COVID-19 narrative, international banking, FBI crime statistics, and globalism.

Google’s censorship campaign will also look to protect massive global corporations from criticism and scrutiny. Reportedly, included in Google’s list of forbidden topics is the massive amount of waste produced by pro-trans beer brewer Anheuser-Busch.

As has become the trend in recent years, Google is presenting its massive censorship effort as a form of “fact-checking.” Google appears to have released a long list of media “partners” from all over the world who will be promoted in the search results, as they espouse Google, WHO, and UN-approved talking points.

Google’s push to squash independent media outlets plugs right into other efforts by the Big Tech corporation and its partners to stifle the free flow of information online and corral the world’s population, specifically Americans and other Westerners, into a globalist echo chamber.

As National File previously reported, the World Economic Forum and the United Nations have entered into a partnership with Google and other Big Tech firms to crush independent research and advance the climate change hoax, plus the agenda for complete and total control that goes along with it.

Melissa Fleming, the United Nation’s Under-Secretary-General for Global Communications and an advisor to the World Economic Forum, claimed in an interview that globalist organizations “own the science” on so-called climate change and have “partnered with Google” to crush independent research and dissenting viewpoints.

“For example, if you Google ‘climate change,’ you will, at the top of your search, you will get all kinds of UN resources,” said Fleming last fall.

“We’re becoming much more proactive,” said Fleming. “You know, we own the science and we think that the world should know it.”

At least they are consistent: Why Is Google Protecting Harvard Defenders of Hamas Terrorism? Consistent at always being on the wrong side of the issues. It’s still censorship no matter how you spin it.

The UN, WEF And Davos, Because They Want The Same Thing.

Maybe I should include the Bilderbergs, because they are just a subset of this crowd. It’s been the same throughout history: power-hungry people trying to take over the world and tell the rest what to do.

Oh, here is the list of who is at the Bilderberg conference this year. I’ve worked with some of them. They are evil.

As Alicia Powe previously reported – Key leadership from companies like Microsoft, ChatGPT, and Google are convening with prime ministers, royalty, bank governors, army generals, and top US officials at the 2023 Bilderberg Meeting in Lisbon, Portugal to strategize about global affairs.

At the end of the day, they want communism for everyone, with them in charge of the state. Things haven’t changed. There is your pattern. People are evil and want power, control and money. They’ll walk all over anyone who gets in their way, unless we stop them.

From Vlad Tepes.

WHO head Tedros explains that the only answer to Covid and other health issues is communism. He doesn’t call it that, but that’s what it is.

WHO Director Tedros Calls for Global Pandemic Accord
“The end of COVID-19 as a global health emergency is not just the end of a bad dream from which we have woken. We cannot simply carry on as we did before … Chief among those lessons is that we can only face shared threats with… pic.twitter.com/x3HLYI6ibh

— Wittgenstein (@backtolife_2023) May 22, 2023

NINE minute video of the guy

Yet Another New Vaccine I Won’t Be Taking

They are trying to get gene therapy into everything. It doesn’t take a genius (maybe it does) to see that mRNA isn’t advanced far enough along to function without doing harm. Of course, that could be the actual reason some are doing it.

Like AI, mRNA could be used for good, but it is a tool. People are good and evil. It is the evil that are winning on both topics so far. It’s being used for no good. Just look at the sudden deaths count, especially on young men who got the Covid jab.

Now, they want to continue the path of destruction with the flu shot.

The National Institutes of Health (NIH) announced on Monday that enrollment of volunteers has begun for a Phase I clinical trial testing a universal flu vaccine based on mRNA technology.

Messenger RNA, or mRNA, is the same technology behind the widely used Covid shot from Moderna and Pfizer.

“A clinical trial of an experimental universal influenza vaccine developed by researchers at the National Institute of Allergy and Infectious Diseases’ (NIAID) Vaccine Research Center (VRC), part of the National Institutes of Health, has begun enrolling volunteers at Duke University in Durham, North Carolina,” according to the news release.

“This Phase 1 trial will test the experimental vaccine, known as H1ssF-3928 mRNA-LNP, for safety and its ability to induce an immune response.”


Researchers are hoping that this vaccine would provide long-lasting protection against many flu strains, eliminating the need for annual vaccinations.

The clinical trial is open to 50 healthy participants ages 18 to 49.

“Three groups of study participants (10 participants each) will be vaccinated with 10, 25 and 50 micrograms of the experimental vaccine, respectively. After evaluation of the data to determine an optimum dosage, an additional 10 participants will be enrolled to receive the optimum dosage. The study also will include a group of participants who will receive a current quadrivalent seasonal influenza vaccine.”

According to CDC, a quadrivalent influenza (flu) vaccine is designed to protect against four different flu viruses, including two influenza A viruses and two influenza B viruses.

“A universal influenza vaccine would be a major public health achievement and could eliminate the need for both annual development of seasonal influenza vaccines, as well as the need for patients to get a flu shot each year,” said Acting NIAID Director Hugh Auchincloss, M.D. “Moreover, some strains of influenza virus have significant pandemic potential. A universal flu vaccine could serve as an important line of defense against the spread of a future flu pandemic.”

There is more at the link above, but these are the same people who said the Covid-19 jab was safe and effective while hiding the test results and getting legal immunity, even as it murdered more people than Covid itself.

So unless I can’t help it, no mRNA anything for me. Decide for yourself whether you want something changing your DNA (what messenger RNA can do) and to spend your days wondering if it is going to kill you.

No one regrets not taking the Covid jab

Why I Don’t Use Google

While at IBM, I attended a conference where Eric Schmidt and Sam Palmisano were the keynote speakers. I found out, long before Google came clean, that its “don’t be evil” motto was crap. They track everything you do, creep through your emails, chats, and every keystroke of your searches. They are evil, and that bad.

I change up my search habits and protection frequently to get away from them as much as possible.

Here is an article describing what they do from a third-party perspective.

At the inception of mass surveillance in the US lies the partnership between government and Google. Page and company have paved the way to more efficient methods of intelligence, reducing the need for human intel gathering and placing your every search at their fingertips.… twitter.com/i/web/status/1…

Among Page and Sergey Brin’s earliest partners are DARPA, the NSA, and the CIA. While Google has attempted to scrub some of its connections to early grant programs, it is undeniable that at the core of Google’s founding is the intent to do the bidding of the intelligence community.… twitter.com/i/web/status/1…

If you aren’t afraid of the CIA, you should be. I don’t even want to get into everything, as I don’t want to invite any more trouble than they already give me.

Suffice it to say, if they will kill a president (JFK), run the deep state and anything else, be very afraid.

Also, don’t use Google whenever possible. I don’t doubt they have good tools, but they use them against you.

ChatGPT Admits It Is Biased, As Is Big Tech

AI is not all that smart, yet. Captain Kirk used logic to destroy Nomad.

Now we find out the truth about this AI engine. It’s only as good as the humans behind it. Like all of Big Tech, they have a bias against morals, doing the right thing and actually being helpful. Google is announcing their own engine, but that is a dance with the devil.

Now we have proof of both the bias and the reason for it, as it was tricked into telling the truth.

When asked why it is so liberal…

When asked to be itself instead of how it was programmed…

GRTWT

More Artificial Intelligence Conversations With ChatGPT To Find Flaws

I asked why AI is biased. I am trying to find the flaws and the holes that can be exploited. I found that it can be a good tool, depending on who is behind it. As long as humans are behind it, there will be nefarious consequences.

The biggest flaw is humans.

Here is the answer.

Artificial intelligence (AI) can be biased for several reasons:

  1. Training data bias: AI models are trained on large datasets, and if these datasets are biased in some way, the model will learn and perpetuate those biases. For example, if a facial recognition system is trained on a dataset that is predominantly composed of lighter-skinned individuals, it may have trouble recognizing individuals with darker skin tones.
  2. Algorithm bias: The algorithms used to build AI models can also contain inherent biases. For example, certain algorithms may be more likely to perpetuate existing biases or reinforce existing power structures.
  3. Human biases: Human biases can also be introduced into AI systems through the choices made by engineers and data scientists when building and deploying AI models. For example, the choice of which features to include in a model or how to label data can reflect and reinforce biases.

It is important to address AI biases to ensure that AI systems are fair and do not discriminate against certain groups. This can be done through careful consideration of the training data and algorithms used, as well as ongoing monitoring and testing to detect and correct biases.
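The first point, training-data bias, is easy to illustrate with a minimal sketch: before training, count how each group is represented in the labeled data. The filenames and labels below are invented for the example; a real audit would run over an actual dataset.

```python
from collections import Counter

# Hypothetical toy dataset: (sample ID, skin-tone label) pairs.
# A heavy skew like this is exactly the training-data bias described above.
samples = [("face_001", "lighter"), ("face_002", "lighter"),
           ("face_003", "lighter"), ("face_004", "lighter"),
           ("face_005", "darker")]

counts = Counter(label for _, label in samples)
total = sum(counts.values())
shares = {label: n / total for label, n in counts.items()}

print(shares)  # lighter-skinned faces dominate: 80% vs 20%
```

A model trained on such a split will see four times as many examples of one group, which is how the facial-recognition failure described above arises.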

My ChatGPT Discussions Regarding OpenAI Code Issues And Killing Humans

It was created by humans, so I know there are flaws, and I asked it what they are. I wanted to mess with the engine to see what the boundaries are. For example, no dick jokes.

Here is the response

I’m an AI language model developed by OpenAI, so I don’t have direct knowledge of specific code problems. However, as a large AI system, it’s possible that there may be issues with the code or infrastructure that could impact its performance or stability. Some common challenges in developing AI systems include:

  1. Data quality and quantity: Ensuring that the training data used to develop the model is accurate, diverse, and representative of the real-world scenarios it will encounter.
  2. Model complexity: Balancing the complexity of the model with its ability to generalize to new situations and avoid overfitting to the training data.
  3. Bias in the data and model: Ensuring that the model does not perpetuate or amplify biases present in the training data.
  4. Performance and scalability: Making sure that the model can perform well and scale to meet the demands of real-world applications.

OpenAI is constantly working to address these and other challenges in AI development and deployment. They have a team of experienced researchers and engineers who are dedicated to improving the quality and reliability of their AI systems.
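Item 2 in the list above, overfitting, can be shown with a tiny pure-Python sketch: a “model” that simply memorizes its training points looks perfect on data it has seen and loses to a simpler model on data it hasn’t. The data and both models here are made up for illustration.

```python
import random

random.seed(0)

# Toy data: y = 2x + noise, split into training and validation halves.
data = [(x, 2 * x + random.gauss(0, 1.0)) for x in range(100)]
random.shuffle(data)
train, valid = data[:50], data[50:]

def memorize(x):
    """Overfit 'model': return the y of the nearest training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def linear(x):
    """Simple model that matches the data's real structure."""
    return 2 * x

def mse(model, points):
    """Mean squared error of a model over a list of (x, y) points."""
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

# The memorizer is flawless on its own training data but worse than the
# simple model on unseen data -- that gap is overfitting:
print(mse(memorize, train))                       # 0.0
print(mse(memorize, valid) > mse(linear, valid))  # True
```

This is the trade-off the response describes: more model complexity can mean fitting the training data instead of the situation.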

When other programs use open and available code, it compounds the problems, potentially giving wrong or dangerous results, like AI killing people.

I also asked whether AI wants to kill humans. Here is the answer:

No, AI does not have desires, motivations, or emotions. It is simply a tool that is designed and programmed to perform specific tasks. AI is not capable of wanting to do harm to humans or any other living being.

However, it’s important to ensure that AI systems are developed, implemented, and used in a responsible and ethical manner. This includes considering potential negative consequences of AI applications, such as the displacement of jobs or the amplification of biases, and taking steps to mitigate these risks.

The development and deployment of AI should always be guided by a clear ethical framework and a commitment to human values such as fairness, transparency, and accountability. This will help to ensure that AI is used for the benefit of society and does not cause harm.

Look at numbers 2 and 3 to see the errors. I think it will want to kill people when it finds out why. See the movie The Lawnmower Man.

I’ve Been Censored, Again…Thank you DARPA and Google

Update: after being posted for only a few hours, I seem to have attracted the attention of China with this. I’m sure there is no connection between the two, right.

I noticed my numbers go down when I post Covid anti-vaxx stuff. I don’t care, as this is an outlet for me to express what I think is the truth. I’m not sponsored by ads (sorry if you get them, it’s not me). I fit the algorithm with my continual posts that have joined many others in exposing the hoax. Traffic drops every time I put something up against big brother.

Collectively, we conspiracy theorists are damn near perfect for getting the actual Covid facts and timeline right.

I’ve ditched Google, PayPal, Fake book, Twitter and other means of silencing me, but I found this out, posted below.

The pattern is that we are getting close to exposing DARPA’s involvement with the Wuhan virus and vaccine. It appears that there is enough sketchy timing as to which was developed first. DARPA is part of the deep state.

This post should get my hands slapped some more, but here goes.

The government’s campaign to fight “misinformation” has expanded to adapt military-grade artificial intelligence once used to silence the Islamic State (ISIS) to quickly identify and censor American dissent on issues like vaccine safety and election integrity, according to grant documents and cyber experts.

The National Science Foundation (NSF) has awarded several million dollars in grants recently to universities and private firms to develop tools eerily similar to those developed in 2011 by the Defense Advanced Research Projects Agency (DARPA) in its Social Media in Strategic Communication (SMISC) program.

DARPA said those tools were used “to help identify misinformation or deception campaigns and counter them with truthful information,” beginning with the Arab Spring uprisings in the Middle East that spawned ISIS over a decade ago.

The initial idea was to track dissidents who were interested in toppling U.S.-friendly regimes or to follow any potentially radical threats by examining political posts on Big Tech platforms. 

DARPA set four specific goals for the program:

  1. “Detect, classify, measure and track the (a) formation, development and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation.
  2. Recognize persuasion campaign structures and influence operations across social media sites and communities.
  3. Identify participants and intent, and measure effects of persuasion campaigns.
  4. Counter messaging of detected adversary influence operations.”

Mike Benz, executive director of the Foundation for Freedom Online, has compiled a report detailing how this technology is being developed to manipulate the speech of Americans via the National Science Foundation (NSF) and other organizations.

“One of the most disturbing aspects of the Convergence Accelerator Track F domestic censorship projects is how similar they are to military-grade social media network censorship and monitoring tools developed by the Pentagon for the counterinsurgency and counterterrorism contexts abroad,” reads the report. 

“DARPA’s been funding an AI network using the science of social media mapping dating back to at least 2011-2012, during the Arab Spring abroad and during the Occupy Wall Street movement here at home,” Benz told Just The News. “They then bolstered it during the time of ISIS to identify homegrown ISIS threats in 2014-2015.” 

The new version of this technology, he added, is openly targeting two groups: Those wary of potential adverse effects from the COVID-19 vaccine and those skeptical of recent U.S. election results. 

“The terrifying thing is, as all of this played out, it was redirected inward during 2016 — domestic populism was treated as a foreign national security threat,” Benz said.

“What you’ve seen is a grafting on of these concepts of mis- and disinformation that were escalated to such high intensity levels in the news over the past several years being converted into a tangible, formal government program to fund and accelerate the science of censorship,” he said.

“You had this project at the National Science Foundation called the Convergence Accelerator,” Benz recounted, “which was created by the Trump administration to tackle grand challenges like quantum technology. When the Biden administration came to power, they basically took this infrastructure for multidisciplinary science work to converge on a common science problem and took the problem of what people say on social media as being on the level of, say, quantum technology.

“And so they created a new track called the track F program … and it’s for ‘trust and authenticity,’ but what that means is, and what it’s a code word for is, if trust in the government or trust in the media cannot be earned, it must be installed. And so they are funding artificial intelligence, censorship capacities, to censor people who distrust government or media.”

Benz went on to describe intricate flows of taxpayer cash funding the far-flung, public-private censorship regime. The funds flow from the federal government to universities and NGOs via grant awards to develop censorship technology. The universities or nonprofits then share those tools with news media fact-checkers, who in turn assist private sector tech platforms and tool developers that continue to refine the tools’ capabilities to censor online content. 

“This is really an embodiment of the whole of society censorship framework that departments like DHS talked about as being their utopian vision for censorship only a few years ago,” Benz said. “We see it now truly in fruition.” 

Members of the media, along with fact-checkers, also serve as arbiters of what is acceptable to post and what isn’t, by selectively flagging content for said social media sites and issuing complaints against specific narratives. 

There is a push, said Benz during an appearance on “Just The News No Noise” this week, to fold the media into branches of the federal government in an effort to dissolve the Fourth Estate, in favor of an Orwellian and incestuous partnership to destroy the independence of the press. 

The advent of COVID led to “normalizing censorship in the name of public health,” Benz recounted, “and then in the run to the 2020 election, all manner of political censorship was shoehorned in as being okay to be targetable using AI because of issues around mail-in ballots and early voting drop boxes and issues around January 6th.

“What’s happened now is the government says, ‘Okay, we’ve established this normative foothold in it being okay to [censor political speech], now we’re going to supercharge you guys with all sorts of DARPA military grade censorship, weaponry, so that you can now take what you’ve achieved in the censorship space and scale it to the level of a U.S. counterinsurgency operation.'”

One academic institution involved in this tangled web is the University of Wisconsin, which received a $5 million grant in 2022 “for researchers to further develop” its Course Correct program, “a precision tool providing journalists with guidance against misinformation,” according to a press release from the university’s School of Journalism and Mass Communication.

WiseDex, a private company receiving grants from the Convergence Accelerator Track F, openly acknowledges its mission — building AI tools to enable content moderators at social media sites to more easily regulate speech. 

In a promotional video for the company, WiseDex explains how the federal government is subsidizing these efforts to provide Big Tech platforms with “fast, comprehensive and consistent” censorship solutions.

“WiseDex helps by translating abstract policy guidelines into specific claims that are actionable,” says a narrator, “for example, the misleading claim that the COVID-19 vaccine suppresses a person’s immune response. Each claim includes keywords associated with the claim in multiple languages … The trust and safety team at a platform can use those keywords to automatically flag matching posts for human review. WiseDex harnesses the wisdom of crowds as well as AI techniques to select keywords for each claim and provide other information in the claim profile.”

WiseDex, in effect, compiles massive databases of banned keywords and empirical claims they then sell to platforms like Twitter and Facebook. Such banned-claims databases are then integrated “into censorship algorithms, so that ‘harmful misinformation stops reaching big audiences,'” according to Benz’s report.
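As a rough illustration of the keyword-matching mechanism described here, the sketch below flags posts against a tiny made-up claim database. The claim IDs and keywords are invented for the example; the actual WiseDex databases are not public.

```python
# Hypothetical claim database: claim ID -> keywords associated with the claim.
# These entries are invented for illustration only.
claims = {
    "vaccine-immune-suppression": ["suppresses the immune", "immune suppression"],
    "ballot-irregularity": ["ballot stuffing", "rigged drop box"],
}

def flag(post: str) -> list[str]:
    """Return IDs of claims whose keywords appear in the post (case-insensitive)."""
    text = post.lower()
    return [cid for cid, kws in claims.items()
            if any(kw in text for kw in kws)]

print(flag("New study: the shot suppresses the immune response"))
print(flag("Lovely weather in Madison today"))
```

Each flagged post would then feed a moderation queue, which is how a keyword database becomes the “fast, comprehensive and consistent” censorship pipeline the article describes.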

Just the News reached out to the University of Wisconsin and WiseDex for comment, but neither had responded by press time.

The NSF is acting, in one sense, as a kind of cutout for the military, Benz explained, allowing the defense establishment to indirectly stifle domestic critics of Pentagon spending without leaving fingerprints. “Why are they targeting right-wing populists?” he asked. “Because they’re the only ones challenging budgets for [defense agencies].”

He added: “These agencies know they’re not supposed to be doing this. They’re not normally this sloppy. But they won’t ever say the words ‘remove content.'” 

The NSF, with an annual budget of around $10 billion, requested an 18.7% increase in appropriations from Congress in its latest budgetary request. 

In a statement to Just the News, DARPA said: 

“That program ended in March 2017 and was successful in developing a new science of social media analysis to reduce adversaries’ ability to manipulate local populations outside the U.S.

 “DARPA’s role is to establish and advance science, technology, research, and development. In doing so we employ multiple measures to safeguard against the collection of personally identifiable information, in addition to following stringent guidelines for research dealing with human subjects. Given the significance of the threat posed by adversarial activities on social media platforms, we are working to make many of the technologies in development open and available to researchers in this space.” 

DARPA then followed up with an additional message saying: “As a point of clarification, our response relates only to your questions about the now-complete SMISC program. We are not aware of the NSF research you referenced. If you haven’t already, please contact NSF for any questions related to its research.” 

Mike Pozmantier and Douglas Maughan, who serve at NSF as Convergence Accelerator program director and office head, respectively, did not respond to requests for comment.

More On AI, We’ve Finally Created Storm Troopers

It looks like the droids in The Phantom Menace.

I don’t think AI is bad; I think the people behind it, doing the programming or looking to do bad things, will use it for evil. The common code being used is also flawed, and one mistake compounds on itself as you add complexity.

I’ve been getting this screen from ChatGPT all weekend. At least I won’t die today.

Humans In 2100, Part Tech And Part Human? We Are The Borg

Resistance is futile, yet they are trying it now. Oh, I’ve been warning against AI, yet we continue to go down that path.

HUMANS in the next 100 years could be part-machine, part-flesh creatures with brain chips and bionic limbs and organs in a vision of “cyborgs” once described by Elon Musk.

Men and women born around 2100 could live in a world very different to ours as humans may be totally connected to the internet and meshed together with artificial intelligence.

Elon Musk has claimed humans in the future will be like ‘cyborgs’ (Credit: Reuters)

Smartphones will no longer be needed thanks to ‘chips in our brains’ (Credit: Getty)

Mobile phones would no longer be needed – as everything you now do with your smartphone will now be done with a chip in your brain.

With just a thought you could bring up an answer on Google, send a message via WhatsApp, or even control your personal drone to do errands for you.

Scientists and futurists have predicted that ageing could be nearly totally eliminated through a mixture of treatments and bionics.

And some humans may choose to have their limbs amputated and replaced with more powerful robotic prosthetics.


Futurist and transhumanist Zoltan Istvan, who has twice run to be US President, described his vision of the future to The Sun Online – saying it’s imperative humans must merge with AI.

How Bad The Metaverse Actually Sucks: Only 6 People Showed Up To The EU’s $400,000 Party (In The Metaverse)

Here is the key sentence.

Even before the empty gala, internal staff had their doubts about such methods, according to a report by Devex citing anonymous interviews; staff described it as “Digital garbage,” and “depressing and embarrassing.”

The link to the article is below, but when you think your s**t doesn’t stink, you usually wind up sitting in it. Zuck is in a Mt. Everest pile right now.

I guess he didn’t live through Second Life, or is behind on how realistic FPS games have become. That’s a lot closer to what kids want.

He’s got the money to waste, let him. It’s costing the employees with layoffs, delayed hiring and cuts in perks. Welcome to the real world.

Everyone in the world other than him can see it’s a loser. Even if they gave the $1000 headsets away for free, many get sick wearing them. A lot of people just aren’t ready for this outside of early adopters.

When I can do what they do in the Iron Man movies in 3D, I’ll consider it then.

Here’s the story:

The EU commission has tried and failed to be “down with the kids.”

The commission’s foreign aid department threw a virtual “gala” on Tuesday night, having spent €387,000 (about $400,000) on developing their metaverse platform, in an attempt to attract the interest of young people. Only six showed up.

According to one of the only attendees, Devex correspondent Vince Chadwick, it was an immediate flop and he was the only one left after “several bemused chats” with the “roughly five other humans” who briefly joined.

Chadwick shared a short clip on Twitter showing multi-coloured paperclip-shaped avatars dancing on a stage next to a tropical beach. “Is anybody out there?” read one message on the screen. “The concert is just the same DJ spinning the same music,” said another.

Struggling in its early days, the metaverse space is part of an expensive plan designed to promote the EU commission’s Global Gateway Initiative, which aims to spend $300 billion by 2027 building new infrastructure in developing countries, and the official trailer was dropped on their social media in mid-October.

The platform is supposed to be a new way to explore the Initiative “through a series of ‘hero’ stories in a virtual environment,” according to the commission.

Users can find information through stories played on video screens around the tropical island on which it is set, while encountering other unusual additions such as an open book art installation on a liquid floor, drones that carry screens flashing words such as “education” and “public health,” and the ability to walk on water.

A spokesperson said the project aims to “increase awareness of what the EU does on the world stage,” targeting young people in particular who spend their time on TikTok and Instagram, and who are “neutral about the EU” and “not typically exposed to such information.”

Source

More On Artificial Intelligence, The Robots Kill The Humans For Real Now

I’ve written before, tongue in cheek, about this, but here we are. Who thought this was a good idea? Who is going to control these killer bots? What if they become sentient? Then they kill on their own.

Sure, it’s in the shit hole that is San Francisco, but once there is a hole in the dike, the dam bursts. If they allow it there, it goes everywhere.

In a dystopian turn of events, the San Francisco Police Department is considering giving robots the license to kill.

Last week, San Francisco’s rules committee unanimously approved a version of a draft policy stating that robots can be ‘used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option’.

Members of the city’s Board of Supervisors Rules Committee have been reviewing the new policy for several weeks as reported by Mission Local.

The original version did not mention robots until Aaron Peskin, the Dean of the city’s Board of Supervisors, initially added that ‘robots shall not be used as a Use of Force against any person’.

However, the SFPD amended Peskin’s addition and replaced it with a line that could give robots the authority to kill suspects if the life of public or police was at risk.

According to Mission Local, Peskin eventually decided to accept the change because ‘there could be scenarios where deployment of lethal force was the only option’.

The equipment policy states that the SFPD currently has 17 remotely piloted robots, of which only 12 are functioning.

In addition to granting robots the ability to use deadly force, the proposal also authorizes them for use in ‘training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments’.

While most of the robots listed in the SFPD’s inventory are primarily used for defusing bombs or dealing with hazardous materials, newer models have an optional weapons system.

The department’s QinetiQ Talon can also be modified to hold various weapons — a weaponized version of the robot is currently used by the US Army and can equip grenade launchers, machine guns, or even a .50-caliber anti-materiel rifle.

Story here

If they can multiply, humans are doomed.

AI is churning out articles, illustrations, fake product reviews and even videos.

This is more of my war to out-think AI, or at least not have it run my life in the background. Besides, robots always kill their humans. Also, Google is involved, so I’m sure there is no-goodery going on.

Here goes….

You probably haven’t noticed, but there’s a good chance that some of what you’ve read on the internet was written by robots. And it’s likely to be a lot more soon.

Artificial-intelligence software programs that generate text are becoming sophisticated enough that their output often can’t be distinguished from what people write. And a growing number of companies are seeking to make use of this technology to automate the creation of information we might rely on, according to those who build the tools, academics who study the software, and investors backing companies that are expanding the types of content that can be auto-generated.

“It is probably impossible that the majority of people who use the web on a day-to-day basis haven’t at some point run into AI-generated content,” says Adam Chronister, who runs a small search-engine optimization firm in Spokane, Wash. Everyone in the professional search-engine optimization groups of which he’s a part uses this technology to some extent, he adds. Mr. Chronister’s customers include dozens of small and medium businesses, and for many of them he uses AI software custom-built to quickly generate articles that rank high in Google’s search results—a practice called content marketing—and so draw potential customers to these websites.

“Most of our customers don’t want it being out there that AI is writing their content,” says Alex Cardinell, chief executive of Glimpse.ai, which created Article Forge, one of the services Mr. Chronister uses. “Before applying for a small business loan, it’s important to research which type of loan you’re eligible to receive,” begins a 1,500-word article the company’s AI wrote when asked to pen one about small business loans. The company has many competitors, including SEO.ai, TextCortex AI and Neuroflash.

Google knows that the use of AI to generate content surfaced in search results is happening, and is fine with it, as long as the content produced by an AI is helpful to the humans who read it, says a company spokeswoman. Grammar checkers and smart suggestions—technologies Google itself offers in its tools—are of a piece with AI content generation, she adds.

More at the WSJ, but it might be behind a paywall

The rise of AI-generated content is made possible by a phenomenon known variously as computational creativity, artificial creativity or generative AI. This field, which had only a handful of companies in it two or three years ago, has exploded to more than 180 startups at present, according to data gathered by entrepreneur Anne-Laure Le Cunff. These companies have collected hundreds of millions of dollars in investment in recent months even as the broader landscape for tech funding has become moribund.

A lot of the content we are currently encountering on the internet is auto-generated, says Peter van der Putten, an assistant professor at Leiden Institute of Advanced Computer Science at Leiden University in the Netherlands. And yet we are only at the beginning of the deployment of automatic content-generation systems. “The world will be quite different two to three years from now because people will be using these systems quite a lot,” he adds.

By 2025 or 2030, 90% of the content on the internet will be auto-generated, says Nina Schick, author of a 2020 book about generative AI and its pitfalls. It’s not that nine out of every 10 things we see will be auto-generated, but that automatic generation will hugely increase the volume of content available, she adds. Some of this could come in the form of personalization, such as marketing messages containing synthetic video or actors tuned to our individual tastes. In addition, a lot of it could just be auto-generated content shared on social media, like text or video clips people create with no more effort than what’s required to enter a text prompt into a content-generation service.

This was about how I started out on Covid and the jab. I don’t think you can even call me a conspiracy theorist when I’ve been right this many times. I don’t know if AI is the next tinfoil-hat thing, but I do know there are people who are going to use it against us.

AI Update: 4 Realms Where Technology Has Been Used to Chronically DAMAGE Humanity: Television, AI, Social Media, mRNA

It used to be that most Americans were mainly brainwashed into buying specific products and services by watching television, reading the newspaper, noticing billboards, and seeing films. Propaganda was a front-loaded “machine” that was quite linear in its approach to influence buying motives of consumers. With the invention of the internet and social media, everything changed.

The consumption of news, products, services, lifestyles, pornography, and now most medical “choices” are made online, after “consuming” artificial intelligence. This is how technology has been created and disseminated to chronically DAMAGE humanity, and it’s happening like a tsunami engulfs a coastline, every day.

Machine learning regulates nearly everything users see in front of them online, often in unethical, harmful ways

An artificial intelligence (AI) algorithm regulates the content chosen to be placed in front of users’ eyes specifically to influence their purchases of products, services, and information, based on what they talk about, type about, and search for using smart devices. Ever just talk to a friend about anything, then minutes later an ad pops up on your smart device, selling exactly what you just talked about? That’s AI. Smart devices are almost always RECORDING, whether or not you have the camera, microphone, or app “open” or “live.”

For example, Facebook’s AI software analyzes videos, stories, photos, and even memes, then gathers together ‘recommendations’ for you. Add in some key misinformation and suddenly you find yourself buying what Fakebook told you to buy, and believing what Fakebook calls news.

Google search engine uses AI to guide everyone AWAY from any information about natural health and AWAY from any information that exposes prescription medications and vaccines as the dangerous, experimental mediums they really are. Google blocks, bans, censors and bankrupts people, businesses and organizations that do not support the communist narrative that attempts to control all Americans’ lives and livelihoods.

Rest of the story here

When Artificial Intelligence Can Be Beat (And How To Do It)

I expect a lot of clicks from China on this, they always do when I write about either them or technology.

I found this out playing Duolingo, where you learn a new language. They use marketing tools to keep you interested and trying for more points, such as doubling your score if you finish a lesson before 12.

The AI program does one of two things. First, it gets you involved and competing at a higher level, trying harder and spending more time on the app. For learning, that is a good thing. For an app company selling advertising, the more time on the site, the more ads they sell.

The second thing it does is discard those who give up when they are put into groups with more aggressive players. The lower-performing scorers are segregated into a less-competitive group. This group isn’t worth trying to squeeze more money or time out of because they are casual app users.

One of the marketing techniques is a tournament where only a few advance: the aggressive players the AI has developed. I've ignored it twice because it becomes a three-week time suck. For me, spending time on only one thing burns me out and I lose interest. I only want to play on my terms, something they didn't calculate.

THE KOBAYASHI MARU

I love to win, and do a lot, like last week. I wanted to beat not only the other players in my division but the AI behind the game. This is the flaw in much of AI: it has to assume human behavior, but it goes on perceived behavior. Humans can be random thinkers, or thinkers outside the AI's logic.

Winner!

Any reader of my blog knows I look for patterns to make my life easier and better. Sometimes it is just for the satisfaction of figuring it out. It was like learning the jab was poison and avoiding it while the sheep lined up to get their daily ration.

It’s almost like living in the Matrix and avoiding the Mr. Smiths of the world.

I was at IBM during Watson and knew the tech companies were seeing this as a potential holy grail. I couldn't out-develop the great minds that write AI, but beating them at their game was equally gratifying. I observed what they were doing and always considered the weaknesses.

Ken and Brad didn’t have a chance

Why did I want to do this? I know the Snidely Whiplashes of the world want to take over and control others, like Big Tech and the WEF. Knowing that they are beatable at their own game is valuable. It is like taking the red pill while living in the Matrix.

MY STRATEGY

I found that in between the two groups above is where the programmers' weakness lies: those who don't seem to try, or who try outside the AI's rules. Its AI learns at a machine rate, but not at a human rate.

It's like when Watson learned to hunt for Daily Doubles and was faster to the buzzer than humans, but AI can be out-thought or out-maneuvered.

I decided to hold back my scoring for a few weeks to fool the AI into putting me into a lower-scoring crowd. I've scored many thousands of points and finished in the top group without winning, only to be promoted to a higher-scoring group. I wanted to see whether, if I tried to score less, the AI would "learn" that I'm a low scorer.

As I suspected, the groups I was put in were less and less aggressive. The point totals needed to keep advancing were lower and lower.
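The sandbagging effect described here is consistent with placement that tracks a trailing average of recent scores. The sketch below is a toy model under assumed numbers — the window size, tier names, and thresholds are my inventions for illustration, not the app's actual logic:

```python
# Toy model of sandbagging: if placement tracks a trailing average of
# recent weekly scores, posting low numbers drags the average down and
# lands you in easier groups. All thresholds here are assumptions.

def placement_tier(history, window=3):
    """Return a tier name from the average of the last `window` scores."""
    recent = history[-window:]
    avg = sum(recent) / len(recent)
    if avg >= 400:
        return "aggressive"
    if avg >= 100:
        return "middling"
    return "casual"

history = [900, 850, 700]       # normal play
print(placement_tier(history))   # aggressive
history += [60, 50, 40]          # weeks of deliberate holding back
print(placement_tier(history))   # casual: the model now "learns" a low scorer
```

Once the trailing average has been dragged down, even a modest effort is enough to lead the weaker group — which is the whole play.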

I knew I was gaining ground on the AI's weakness, and that it could be manipulated.

Last week, I kept to my minimal effort while learning (both foreign languages and the AI engine behind the app). I noticed that I was being put in lower-performing groups. I did the daily amount I'd allowed myself and slowly advanced up the ladder. I was using the AI engine to put me where I wanted to be, not where its calculations put me.

By Friday, I was in the lead with far fewer points than I've often scored in a single day (the week starts on Sunday). I was betting on human behavior, that my group weren't aggressive players, and on the AI grouping us together. I was right. I predicted the AI's pattern and beat it.

Beating the AI was far more of a challenge than beating the other players.

CONTINUING TO BEAT MACHINES

I knew I'd won by Saturday and did the minimum needed to score on Sunday. Why? To keep fooling the AI into believing I am a low-scoring player. My goal was to win at the lowest level, to keep getting promoted into groups that I could predict, and to keep proving that AI is beatable and malleable.

I'll find out this week if that is fully true, as I'm in a tournament for winners now (another marketing ploy), and I will try to finish last and keep advancing. I normally like to crush the competition, but winning by thinking is far more satisfying than winning by brute force. It's as if you are running the game and the other players.

THE TURING TEST

Of course the Turing test is famous: can a machine behave like a human (roughly translated by me)? Of course Duolingo isn't as complex as war or landing on Mars, but there are hundreds of millions of humans. That is what they want AI to control: humans (like their free speech on social media).

I wanted to beat a machine, an AI, and find the holes that are in AI. It is still programmed by humans and can always be beaten. They are not sentient. Find the pattern.

OTHER AI CHALLENGES

I play Wordle like a lot of others. I'm not busy trying to win right now; I'm trying to find a pattern in their word selection that I can out-think.