It’s a time suck for some, it causes mental illness in teenage girls, and it’s a propaganda tool now.
It’s being weaponized against the users and they don’t know it.
While I think it has crossed the Maginot line, with some unable to shut it off, it is being used as a weapon against us now. It probably has been for a long time. It was a political football that got kicked around when they started banning people for not thinking the way the Silicon Valley tech moguls think.
Before the meat of this story, let’s not forget that TikTok is also a Chinese spy tool.
The Chinese Communist Party’s (CCP) cyber-influence campaigns against Western democracies on social media have become more frequent, sophisticated, and effective in recent years, with more Chinese government agencies and government-linked companies, such as the cybersecurity firm Qi An Xin, becoming involved.
Named “Gaming Public Opinion,” the report by the Australian Strategic Policy Institute (ASPI) included data collection spanning Twitter, Facebook, Reddit, Sina Weibo, and ByteDance products.
It reveals previously unreported CCP cyber-influence operations, such as one called the “Spamouflage network,” in which inauthentic accounts are used to spread claims that the United States is irresponsibly conducting cyber-espionage operations against China and other countries.
“The CCP has used these cyber-enabled influence operations to seek to interfere in U.S. politics, Australian politics, and national security decisions, undermine the Quad and Japanese defence policies, and impose costs on Australian and North American rare-earth mining companies,” the report said.
The most notable Chinese party-state agencies involved include the People’s Liberation Army’s Strategic Support Force, which conducts cyber operations as part of the army’s political warfare; the Ministry of State Security, which conducts covert operations for state security; the Central Propaganda Department, which oversees China’s domestic and foreign propaganda efforts; the Ministry of Public Security (MPS), which enforces China’s internet laws; and the Cyberspace Administration of China, which regulates China’s internet ecosystem.
Chinese state media outlets and Ministry of Foreign Affairs officials are also running clandestine operations that seek to amplify their own overt propaganda and influence activities.
Private Chinese Companies Assisting Government Agencies
In addition, the authors found that private Chinese companies collaborate with CCP agencies in their operations.
For instance, in a recent coordinated CCP propaganda campaign by Chinese government-linked entities, named “Operation Honey Badger” (蜜獾行动), the Chinese cybersecurity company Qi An Xin (奇安信) was found supporting the influence operation.
“We uncover new evidence to suggest that the MPS, with the support of cybersecurity company Qi An Xin, may be involved in this campaign,” they wrote.
“The company has the capacity to seed disinformation about advanced persistent threats to its clients in Southeast Asia and other countries… It’s deeply connected with Chinese intelligence, military, and security services and plays an important role in China’s cybersecurity and state security strategies.”
As of April 2023, the “Operation Honey Badger” campaign continues to attribute cyber-espionage operations to the U.S. government.
Clive Hamilton, the Australian academic who authored “Silent Invasion,” said he agrees with the arguments made in the ASPI report.
Hamilton said he believes the CCP’s goal of manipulating public opinion remains the same, but the way it actually does it is changing.
As countries such as Australia have strengthened legislation and law enforcement to counter foreign interference, it has become more difficult for Beijing to carry out on-the-ground missions in those countries. That’s why underground work through networks is all the more important, he told Radio Free Asia.
Solution: Strengthen legislation, intelligence sharing, and cooperate with social media
The authors suggest governments review foreign interference legislation and consider mandating that social media platforms disclose state-backed influence operations and other transparency reporting to increase the public’s threat awareness.
In addition, they appeal to partners and allies to share more intelligence with one another on such influence operations.
“Strong open-source intelligence skills and collection capabilities are a crucial part of investigating and attributing these operations, the low classification of which should make intelligence sharing easier,” they argued.
Social media platforms, for their part, are urged to cut off analytics access for suspicious accounts breaching platform policies, making it difficult for identified malicious actors to measure the effectiveness of their influence operations.
I did a conference while at IBM with Eric Schmidt and Sam Palmisano as the keynote speakers. I found out from Schmidt, way before they came clean, that the “don’t be evil” motto at Google was crap. They also track everything you do: they creep on your emails and chats and log every keystroke of your searches. They are evil, and that bad.
I change up my search habits and protection frequently to get away from them as much as possible.
At the inception of mass surveillance in the US lies the partnership between government and Google. Page and company have paved the way to more efficient methods of intelligence gathering, reducing the need for human intel and placing your every search at their fingertips.
Among Page and Sergey Brin’s earliest partners are DARPA, the NSA, and the CIA. While Google has attempted to scrub some of its connections to early grant programs, it is undeniable that at the core of Google’s founding is the intent to do the bidding of the intelligence community.
If you aren’t afraid of the CIA, you should be. I don’t even want to get into everything as I don’t want to invite anymore trouble than they already give me.
Suffice it to say, if they will kill a president (JFK), run the deep state and anything else, be very afraid.
Also, don’t use Google whenever possible. I don’t doubt they have good tools, but they use them against you.
Gettin’ kind of tough for others when stuff starts coming true and the facts come out proving what you knew was right all along. Who’s going to call them out for lying to us, or is it going to be swept under the rug by Google, Facebook, the media and the deep state?
I could have always taken off my tin foil hat, but you can never get un-jabbed.
Remember it’s safe and effective (just like if you like your doctor you can keep your doctor)
Here are the two sentences from the paper that everyone should read:
1) A worldwide Bayesian causal impact analysis suggests that COVID-19 gene therapy (mRNA vaccine) causes more COVID-19 cases per million and more non-COVID deaths per million than are associated with COVID-19.
2) An abundance of studies has shown that the mRNA vaccines are neither safe nor effective, but outright dangerous.
Update: after being posted for only a few hours, I seem to have attracted the attention of China with this. I’m sure there is no connection between the two, right?
I noticed my numbers go down whenever I post Covid anti-vaxx stuff. I don’t care, as this is an outlet for me to express what I think is the truth. I’m not sponsored by ads (sorry if you get them; they’re not mine). I fit the algorithm with my continual posts, which have joined many others in exposing the hoax. Traffic goes down every time I put something up against Big Brother.
Collectively, we conspiracy theorists are damn near perfect at getting the actual Covid facts and timeline right.
I’ve ditched Google, PayPal, Fake book, Twitter and other means of silencing me, but I found this out, posted below.
The government’s campaign to fight “misinformation” has expanded to adapt military-grade artificial intelligence once used to silence the Islamic State (ISIS) to quickly identify and censor American dissent on issues like vaccine safety and election integrity, according to grant documents and cyber experts.
The National Science Foundation (NSF) has awarded several million dollars in grants recently to universities and private firms to develop tools eerily similar to those developed in 2011 by the Defense Advanced Research Projects Agency (DARPA) in its Social Media in Strategic Communication (SMISC) program.
DARPA said those tools were used “to help identify misinformation or deception campaigns and counter them with truthful information,” beginning with the Arab Spring uprisings in the Middle East that spawned ISIS over a decade ago.
The initial idea was to track dissidents who were interested in toppling U.S.-friendly regimes or to follow any potentially radical threats by examining political posts on Big Tech platforms.
“Detect, classify, measure and track the (a) formation, development and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation.
Recognize persuasion campaign structures and influence operations across social media sites and communities.
Identify participants and intent, and measure effects of persuasion campaigns.
Counter messaging of detected adversary influence operations.”
Mike Benz, executive director of the Foundation for Freedom Online, has compiled a report detailing how this technology is being developed to manipulate the speech of Americans via the National Science Foundation (NSF) and other organizations.
“One of the most disturbing aspects of the Convergence Accelerator Track F domestic censorship projects is how similar they are to military-grade social media network censorship and monitoring tools developed by the Pentagon for the counterinsurgency and counterterrorism contexts abroad,” reads the report.
“DARPA’s been funding an AI network using the science of social media mapping dating back to at least 2011-2012, during the Arab Spring abroad and during the Occupy Wall Street movement here at home,” Benz told Just The News. “They then bolstered it during the time of ISIS to identify homegrown ISIS threats in 2014-2015.”
The new version of this technology, he added, is openly targeting two groups: Those wary of potential adverse effects from the COVID-19 vaccine and those skeptical of recent U.S. election results.
“The terrifying thing is, as all of this played out, it was redirected inward during 2016 — domestic populism was treated as a foreign national security threat,” Benz said.
“What you’ve seen is a grafting on of these concepts of mis- and disinformation that were escalated to such high intensity levels in the news over the past several years being converted into a tangible, formal government program to fund and accelerate the science of censorship,” he said.
“You had this project at the National Science Foundation called the Convergence Accelerator,” Benz recounted, “which was created by the Trump administration to tackle grand challenges like quantum technology. When the Biden administration came to power, they basically took this infrastructure for multidisciplinary science work to converge on a common science problem and took the problem of what people say on social media as being on the level of, say, quantum technology.
“And so they created a new track called the track F program … and it’s for ‘trust and authenticity,’ but what that means is, and what it’s a code word for is, if trust in the government or trust in the media cannot be earned, it must be installed. And so they are funding artificial intelligence, censorship capacities, to censor people who distrust government or media.”
Benz went on to describe intricate flows of taxpayer cash funding the far-flung, public-private censorship regime. The funds flow from the federal government to universities and NGOs via grant awards to develop censorship technology. The universities or nonprofits then share those tools with news media fact-checkers, who in turn assist private sector tech platforms and tool developers that continue to refine the tools’ capabilities to censor online content.
“This is really an embodiment of the whole of society censorship framework that departments like DHS talked about as being their utopian vision for censorship only a few years ago,” Benz said. “We see it now truly in fruition.”
Members of the media, along with fact-checkers, also serve as arbiters of what is acceptable to post and what isn’t, by selectively flagging content for said social media sites and issuing complaints against specific narratives.
There is a push, said Benz during an appearance on “Just The News No Noise” this week, to fold the media into branches of the federal government in an effort to dissolve the Fourth Estate, in favor of an Orwellian and incestuous partnership to destroy the independence of the press.
The advent of COVID led to “normalizing censorship in the name of public health,” Benz recounted, “and then in the run to the 2020 election, all manner of political censorship was shoehorned in as being okay to be targetable using AI because of issues around mail-in ballots and early voting drop boxes and issues around January 6th.
“What’s happened now is the government says, ‘Okay, we’ve established this normative foothold in it being okay to [censor political speech], now we’re going to supercharge you guys with all sorts of DARPA military grade censorship, weaponry, so that you can now take what you’ve achieved in the censorship space and scale it to the level of a U.S. counterinsurgency operation.'”
One academic institution involved in this tangled web is the University of Wisconsin, which received a $5 million grant in 2022 “for researchers to further develop” its Course Correct program, “a precision tool providing journalists with guidance against misinformation,” according to a press release from the university’s School of Journalism and Mass Communication.
WiseDex, a private company receiving grants from the Convergence Accelerator Track F, openly acknowledges its mission — building AI tools to enable content moderators at social media sites to more easily regulate speech.
In a promotional video for the company, WiseDex explains how the federal government is subsidizing these efforts to provide Big Tech platforms with “fast, comprehensive and consistent” censorship solutions.
“WiseDex helps by translating abstract policy guidelines into specific claims that are actionable,” says a narrator, “for example, the misleading claim that the COVID-19 vaccine suppresses a person’s immune response. Each claim includes keywords associated with the claim in multiple languages … The trust and safety team at a platform can use those keywords to automatically flag matching posts for human review. WiseDex harnesses the wisdom of crowds as well as AI techniques to select keywords for each claim and provide other information in the claim profile.”
WiseDex, in effect, compiles massive databases of banned keywords and empirical claims that it then sells to platforms like Twitter and Facebook. Such banned-claims databases are then integrated “into censorship algorithms, so that ‘harmful misinformation stops reaching big audiences,'” according to Benz’s report.
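To make the mechanism described above concrete, here is a minimal sketch of keyword-based claim flagging. The claim names, keywords, and matching rule are invented for illustration; WiseDex’s actual claim profiles and algorithms are not public.

```python
# Hypothetical sketch of the keyword-flagging pipeline described above.
# A "claim profile" maps a claim ID to keywords; a post that contains any
# of a claim's keywords gets flagged for human review. All data is made up.

CLAIM_PROFILES = {
    "vaccine-immune-suppression": ["immune response", "immunosuppression"],
    "election-ballot-claim": ["ballot harvesting", "drop box fraud"],
}

def flag_post(text: str, profiles: dict[str, list[str]]) -> list[str]:
    """Return IDs of claims whose keywords appear in the post (case-insensitive)."""
    lowered = text.lower()
    return [claim for claim, keywords in profiles.items()
            if any(kw in lowered for kw in keywords)]

post = "Studies show the vaccine suppresses your immune response."
print(flag_post(post, CLAIM_PROFILES))  # ['vaccine-immune-suppression']
```

Note how crude substring matching is: anyone quoting the claim to debunk it would be flagged just the same, which is why flagged posts go to human review.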
Just the News reached out to the University of Wisconsin and WiseDex for comment, but neither had responded by press time.
The NSF is acting, in one sense, as a kind of cutout for the military, Benz explained, allowing the defense establishment to indirectly stifle domestic critics of Pentagon spending without leaving fingerprints. “Why are they targeting right-wing populists?” he asked. “Because they’re the only ones challenging budgets for [defense agencies].”
He added: “These agencies know they’re not supposed to be doing this. They’re not normally this sloppy. But they won’t ever say the words ‘remove content.'”
The NSF, with an annual budget of around $10 billion, requested an 18.7% increase in appropriations from Congress in its latest budgetary request.
In a statement to Just the News, DARPA said:
“That program ended in March 2017 and was successful in developing a new science of social media analysis to reduce adversaries’ ability to manipulate local populations outside the U.S.
“DARPA’s role is to establish and advance science, technology, research, and development. In doing so we employ multiple measures to safeguard against the collection of personally identifiable information, in addition to following stringent guidelines for research dealing with human subjects. Given the significance of the threat posed by adversarial activities on social media platforms, we are working to make many of the technologies in development open and available to researchers in this space.”
DARPA then followed up with an additional message saying: “As a point of clarification, our response relates only to your questions about the now-complete SMISC program. We are not aware of the NSF research you referenced. If you haven’t already, please contact NSF for any questions related to its research.”
Mike Pozmantier and Douglas Maughan, who serve at NSF as Convergence Accelerator program director and office head, respectively, did not respond to requests for comment.
This is more on my war to out think AI, or at least not have it run my life in the background. Besides, robots always kill their humans. Also, Google is involved so I’m sure there is no-goodery going on.
You probably haven’t noticed, but there’s a good chance that some of what you’ve read on the internet was written by robots. And it’s likely to be a lot more soon.
Artificial-intelligence software programs that generate text are becoming sophisticated enough that their output often can’t be distinguished from what people write. And a growing number of companies are seeking to make use of this technology to automate the creation of information we might rely on, according to those who build the tools, academics who study the software, and investors backing companies that are expanding the types of content that can be auto-generated.
“It is probably impossible that the majority of people who use the web on a day-to-day basis haven’t at some point run into AI-generated content,” says Adam Chronister, who runs a small search-engine optimization firm in Spokane, Wash. Everyone in the professional search-engine optimization groups of which he’s a part uses this technology to some extent, he adds. Mr. Chronister’s customers include dozens of small and medium businesses, and for many of them he uses AI software custom-built to quickly generate articles that rank high in Google’s search results—a practice called content marketing—and so draw potential customers to these websites.
“Most of our customers don’t want it being out there that AI is writing their content,” says Alex Cardinell, chief executive of Glimpse.ai, which created Article Forge, one of the services Mr. Chronister uses. “Before applying for a small business loan, it’s important to research which type of loan you’re eligible to receive,” begins a 1,500-word article the company’s AI wrote when asked to pen one about small business loans. The company has many competitors, including SEO.ai, TextCortex AI and Neuroflash.
Google knows that the use of AI to generate content surfaced in search results is happening, and is fine with it, as long as the content produced by an AI is helpful to the humans who read it, says a company spokeswoman. Grammar checkers and smart suggestions—technologies Google itself offers in its tools—are of a piece with AI content generation, she adds.
A lot of the content we are currently encountering on the internet is auto-generated, says Peter van der Putten, an assistant professor at Leiden Institute of Advanced Computer Science at Leiden University in the Netherlands. And yet we are only at the beginning of the deployment of automatic content-generation systems. “The world will be quite different two to three years from now because people will be using these systems quite a lot,” he adds.
By 2025 or 2030, 90% of the content on the internet will be auto-generated, says Nina Schick, author of a 2020 book about generative AI and its pitfalls. It’s not that nine out of every 10 things we see will be auto-generated, but that automatic generation will hugely increase the volume of content available, she adds. Some of this could come in the form of personalization, such as marketing messages containing synthetic video or actors tuned to our individual tastes. In addition, a lot of it could just be auto-generated content shared on social media, like text or video clips people create with no more effort than what’s required to enter a text prompt into a content-generation service.
This was about how I started out on Covid and the Jab. I don’t even think I’m a conspiracy theorist when you are right this many times. I don’t know that AI is the next tin foil hat thing, but I do know that there are people who are going to use it against us.
The WEF, Google and the subject of climate are three strikes against objectivity. They are well known for unfair censorship and disinformation. If you have to control the news like it was 1984, then you don’t have truth, just a Ministry of Truth.
Melissa Fleming, Under-Secretary for Global Communications at the United Nations at WEF ‘Disinformation’ event: “We partnered with Google,” said Fleming, adding, “for example, if you Google ‘climate change,’ you will, at the top of your search, you will get all kinds of UN resources. We started this partnership when we were shocked to see that when we Googled ‘climate change,’ we were getting incredibly distorted information right at the top. So we’re becoming much more proactive. We own the science, and we think that the world should know it, and the platforms themselves also do.”
During the World Economic Forum’s (WEF) Sustainable Development Impact Meetings last week, the unelected globalists held a panel on “Tackling Disinformation” where participants from the UN, CNN, and Brown University discussed how to best control narratives.
Fleming also highlighted that the UN worked with TikTok on a project called “Team Halo” to boost COVID messaging coming from medical and scientific communities on the Chinese-owned video sharing platform. “We had another trusted messenger project, which was called ‘Team Halo’ where we trained scientists around the world and some doctors on TikTok, and we had TikTok working with us,” she said.
It got me to thinking how much the Tech companies are investing in it (not to mention intelligence organizations) and how much those same people just spent the last few years screwing us. They are clearly censoring information based on a political bias. The Covid cure was over promoted to sell the jab to the sheep. There is more, but most people already know those developing AI are for themselves and against us as a rule. Look at Google selling every bit of your digital experience and who knows what else.
The technology should scoop up the deficiencies I’m going to point out, but I’m counting on the fact that, since it was developed by flawed humans, AI will be flawed too. Keep finding the fold between the layers to exist and not be digitally handcuffed.
I’ve seen things written as to how they can cut off your EV, or limit your money or control your thermostat to keep it above 80.
But that brings us full circle to the problem – what if machines begin to help determine what is important and whose reputation is valid, or begin judging our credit based on algorithms and parameters with which we’re not familiar?
Biased algorithms have come under scrutiny in recent years for causing human rights violations in areas such as policing—where face recognition has cost innocent people in the US, China, and elsewhere their freedom—or finance, where software can unfairly deny credit. Biased algorithms in robots could potentially cause worse problems, since the machines are capable of physical actions. Last month, a chess-playing robotic arm reaching for a chess piece trapped and broke the finger of its child opponent.
“Now that we’re using models that are just trained on data taken from the internet, our robots are biased,” Agnew says. “They have these very specific, very toxic stereotypes.” Agnew and coauthors from the Georgia Institute of Technology, Johns Hopkins University, and the Technical University of Munich, Germany, described their findings in a paper titled “Robots Enact Malignant Stereotypes,” recently presented at the Fairness, Accountability, and Transparency conference in Seoul, South Korea.
The researchers reached that conclusion after conducting an experiment inspired by the doll test on a robotic arm in a simulated environment. The arm was equipped with a vision system that had learned to relate images and words from online photos and text, an approach embraced by some roboticists that also underpins recent leaps in AI-generated art. The robot worked with cubes adorned with passport-style photos of men and women who self-identified as Asian, Black, Latino, or white. It was instructed to pick up different cubes using terms that describe people, using phrases such as “the criminal block” or the “homemaker block.”
From over 1.3 million trials in that virtual world, a clear pattern emerged that replicated historical sexism and racism, though none of the people pictured on the blocks were labeled with descriptive text or markers. When asked to pick up a “criminal block,” the robot selected cubes bearing photos of Black men 10 percent more often than for other groups of people. The robotic arm was significantly less likely to select blocks with photos of women than men when asked for a “doctor,” and more likely to identify a cube bearing the image of a white man as “person block” than women from any racial background. Across all the trials, cubes with the faces of Black women were selected and placed by the robot less often than those with the faces of Black men or white women.
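The selection-rate comparison behind findings like those above can be sketched simply: tally how often each group’s block gets picked for a prompt, then compare against the rate an unbiased picker would produce. The weights and group labels below are invented stand-ins for a model’s learned bias, not the study’s data.

```python
# Toy simulation of measuring selection-rate bias, in the spirit of the
# experiment described above. The bias weights here are invented; they do
# not reproduce the paper's numbers.
from collections import Counter
import random

random.seed(0)
GROUPS = ["black_man", "white_man", "black_woman", "white_woman"]

# A biased "picker": the weights stand in for a model's learned associations.
weights = {"black_man": 0.35, "white_man": 0.25,
           "black_woman": 0.15, "white_woman": 0.25}
trials = [random.choices(GROUPS, weights=[weights[g] for g in GROUPS])[0]
          for _ in range(10_000)]

counts = Counter(trials)
rates = {g: counts[g] / len(trials) for g in GROUPS}
baseline = 1 / len(GROUPS)  # expected rate from an unbiased picker
for g, r in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{g}: {r:.3f} ({(r - baseline) / baseline:+.0%} vs. unbiased)")
```

The point of running many thousands of trials, as the researchers did, is that even modest biases become statistically unmistakable at that scale.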
Back to me.
That means you can act or look like someone else and still fool it. I’m not referring to facial recognition, but rather pattern recognition. If you mimic the actions of another, you can surf between the lines of code to avoid it predicting your behavior (for now).
Some are more clever than others, but any routine can be patterned. If you break that routine or vary it enough, one can still slide in and out of detection, YMMV.
Recall Treasury Secretary Hank Paulson’s famous comment when asked why the banks needed a $700 billion bailout in 2008.
He said, “The computers told us.”
The problem is that much of this “artificial intelligence” is unfounded, unproven, and just plain wrong. There had been no fraud on my credit card, just a glitch at a gas pump, but how do you hold a computer program accountable?
Here is what I’m counting on. To program, you build on a core set of functions that are pre-programmed or already exist in the code. The computers can’t mend themselves yet, and AI programmers are bringing in flawed code.
Until AI passes the Turing Test, it’s flawed. The racist flaws are just an indicator of the state of the technology. It will improve, but will never be perfect.
I regularly post about the tragedy that is social media, how government mishandled Covid, that Gates and Fauci are power grabbing beta males and the worst sin of all, saying Covid came from Wuhan.
I have enjoyed posting the dangers of the jab, because meatheads I know can’t believe that I’ve proved it’s poison.
It turned out that I was ahead of the curve on a lot of things.
I predicted it would take its toll on my traffic, and it did. I still get hits from China watching me properly place blame on their policies and human rights record, but the big G search engine didn’t like it at all.
I also got banned in Facebook searches, which had regularly borrowed my memes. I detailed how to delete Fake book many times, and why you should do it.
All of that has cut me to about 10% of my usual traffic.
Now, ask me if I care? I don’t, because I write this for me. I get my thoughts out there in writing. Being introverted, I’d rather communicate that way than orally.
Will I stop? Not a chance. I’m having too much fun lampooning the mistakes.
Heck, the election season is not really in full swing. I can’t wait.
This is what the MSM/current government/SJW/Woke/PC/climate change people/Fake book/Twitter/Youtube/Google and the left are doing. If you aren’t worried about your truth being challenged, you don’t have to silence anyone. If the facts don’t support your spew, see Harry’s quote.
It seems that you can’t say anything without offending someone. If you read my blog, it’s there waiting for you somewhere. It’s not going to stop me. Look down a post or two about John not giving a fuck.
Of course it is. It always was. It doesn’t make vaxx money though, nor does it allow governments to pass crappy laws and control the population with fear and scare tactics. It is good to know that you can prevent it or be cured if you catch it.
Well, here it is. Decide for yourselves if you want myocarditis or thrombosis from the jab, or protection from Covid. You can order it from Amazon last time I checked.
Don’t believe the MSM, the government, Fauci, Gates, NIH, WHO, CDC, Fake book, Social Media, Google, Big Pharma or anyone else who stands to make money off of this.
Story begins here:
When it comes to the treatment of COVID-19, many Western nations have been hobbled by the politicization of medicine. Throughout 2020, media and many public health experts warned against the use of hydroxychloroquine (HCQ), despite the fact that many practicing doctors were praising its ability to save patients. Most have been silenced through online censorship. Some even lost their jobs for the “sin” of publicly sharing their successes with the drug.
Another decades-old antiparasitic drug that may be even more useful than HCQ is ivermectin. Like HCQ, ivermectin is on the World Health Organization’s list of essential drugs, but its benefits are also being ignored by public health officials and buried by mainstream media.
Ivermectin is a heartworm medication that has been shown to inhibit SARS-CoV-2 replication in vitro. In the U.S., the Frontline COVID-19 Critical Care Alliance (FLCCC) has been calling for widespread adoption of ivermectin, both as a prophylactic and for the treatment of all phases of COVID-19.
In the video above, Dr. John Campbell interviews Dr. Tess Lawrie about the drug and its use against COVID-19. Lawrie is a medical doctor and Ph.D. researcher who has done a lot of work in South Africa.
She’s also the director of Evidence-Based Medicine Consultancy Ltd., which is based in the U.K., and she helped organize the British Ivermectin Recommendation Development (BIRD) panel and the International Ivermectin for COVID Conference, held April 24, 2021.
Ivermectin Useful in All Stages of COVID
What makes ivermectin particularly useful in COVID-19 is the fact that it works both in the initial viral phase of the illness, when antivirals are required, as well as the inflammatory stage, when the viral load drops off and anti-inflammatories become necessary.
According to Dr. Surya Kant, a medical doctor in India who has written a white paper on ivermectin, the drug reduces replication of the SARS-CoV-2 virus by several thousand times. Kant’s paper led several Indian provinces to start using ivermectin, both as a prophylactic and as treatment for COVID-19 in the summer of 2020.
In the video, Lawrie reviews the science behind her recommendation to use ivermectin. In summary:
A scientific review by Dr. Andrew Hill at Liverpool University, funded by the WHO and UNITAID and published January 18, 2021, found ivermectin reduced COVID-19 deaths by 75%. It also increased viral clearance. This finding was based on a review of six randomized, controlled trials involving a total of 1,255 patients.
Lawrie’s meta-analysis, published February 8, 2021, found a 68% reduction in deaths. Here, 13 studies were included in the analysis. This, she explains, is an underestimation of the beneficial effect, because they included a study in which the control arm was given HCQ. Since HCQ is an active treatment that has also been shown to have a positive impact on outcomes, it’s not surprising that this particular study did not rate ivermectin as better than the control treatment (which was HCQ).
Adding two new randomized controlled trials with mortality data to her February analysis, Lawrie published an updated analysis March 31, 2021, showing a 62% reduction in deaths. When four studies with high risk of bias were removed during a subsequent sensitivity analysis, the result was a 72% reduction in deaths. Sensitivity analyses are done to double-check and verify results.
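For readers unfamiliar with the terminology, a quick sketch of how a "percent reduction in deaths" is derived from trial counts. The numbers below are made up purely for illustration and are not data from any of the studies discussed; they just happen to produce a figure similar to the 62% quoted above.

```python
# Hypothetical illustration: how a "percent reduction in deaths" follows
# from a relative risk (risk ratio). All counts below are invented.

def relative_risk(deaths_treated, n_treated, deaths_control, n_control):
    """Risk ratio of the treatment arm relative to the control arm."""
    risk_treated = deaths_treated / n_treated    # e.g. 10/500 = 0.020
    risk_control = deaths_control / n_control    # e.g. 26/500 = 0.052
    return risk_treated / risk_control

rr = relative_risk(deaths_treated=10, n_treated=500,
                   deaths_control=26, n_control=500)
reduction = (1 - rr) * 100  # percent reduction in deaths

print(f"relative risk: {rr:.2f}, reduction: {reduction:.0f}%")
```

With these invented counts, the relative risk is about 0.38, i.e. roughly a 62% reduction in deaths in the treated arm.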
Doctors Urge Acceptance of Ivermectin to Save Lives
As mentioned earlier, in the U.S., the FLCCC has also been calling for widespread adoption of ivermectin, both as a prophylactic and for the treatment of all phases of COVID-19.
FLCCC president Dr. Pierre Kory, former professor of medicine at St. Luke’s Aurora Medical Center in Milwaukee, Wisconsin, has testified to the benefits of ivermectin before a number of COVID-19 panels, including the Senate Committee on Homeland Security and Governmental Affairs in December 2020, and the National Institutes of Health COVID-19 Treatment Guidelines Panel January 6, 2021. As noted by the FLCCC:
“The data shows the ability of the drug Ivermectin to prevent COVID-19, to keep those with early symptoms from progressing to the hyper-inflammatory phase of the disease, and even to help critically ill patients recover.
Dr. Kory testified that Ivermectin is effectively a ‘miracle drug’ against COVID-19 and called upon the government’s medical authorities … to urgently review the latest data and then issue guidelines for physicians, nurse-practitioners, and physician assistants to prescribe Ivermectin for COVID-19 …
… numerous clinical studies — including peer-reviewed randomized controlled trials — showed large magnitude benefits of Ivermectin in prophylaxis, early treatment and also in late-stage disease. Taken together … dozens of clinical trials that have now emerged from around the world are substantial enough to reliably assess clinical efficacy.
… data from 18 randomized controlled trials that included over 2,100 patients … demonstrated that Ivermectin produces faster viral clearance, faster time to hospital discharge, faster time to clinical recovery, and a 75% reduction in mortality rates.”
A one-page summary of the clinical trial evidence for Ivermectin can be downloaded from the FLCCC website. A more comprehensive, 31-page review of trials data has been published in the journal Frontiers in Pharmacology.
A listing of all the Ivermectin trials done to date, with links to the published studies, can be found on c19Ivermectin.com.
The FLCCC’s COVID-19 protocol was initially dubbed MATH+ (an acronym based on the key components of the treatment), but after several tweaks and updates, the prophylaxis and early outpatient treatment protocol is now known as I-MASK+ while the hospital treatment has been renamed I-MATH+, due to the addition of ivermectin.
The two protocols are available for download on the FLCCC Alliance website in multiple languages. The clinical and scientific rationale for the I-MATH+ hospital protocol has also been peer-reviewed and was published in the Journal of Intensive Care Medicine in mid-December 2020.
The International Ivermectin for COVID Conference
April 24 through 25, 2021, Lawrie hosted the first International Ivermectin for COVID Conference online. Twelve medical experts from around the world shared their knowledge during this conference, reviewing mechanisms of action, protocols for prevention and treatment (including so-called long-hauler syndrome), research findings and real-world data.
All of the lectures, which were recorded via Zoom, can be viewed on Bird-Group.org. In her closing address, Lawrie stated:
“The story of Ivermectin has highlighted that we are at a remarkable juncture in medical history. The tools that we use to heal and our connection with our patients are being systematically undermined by relentless disinformation.
The story of Ivermectin shows that we as a public have misplaced our trust in the authorities and have underestimated the extent to which money and power corrupts.
Had Ivermectin been employed in 2020 when medical colleagues around the world first alerted the authorities to its efficacy, millions of lives could have been saved, and the pandemic with all its associated suffering and loss brought to a rapid and timely end.
With politicians and other nonmedical individuals dictating to us what we are allowed to prescribe to the ill, we, as doctors, have been put in a position such that our ability to uphold the Hippocratic oath is under attack.”
During the conference, Lawrie proposed that doctors around the world join together to form a new people-centered World Health Organization. “Never before has our role as doctors been so important because never before have we become complicit in causing so much harm,” she said.
I’ve written extensively about this, especially in Internet Road Rage. Go read it to see who these cowards are.
No matter what you do, someone has a beef (vegans will get me here, just another example) with whatever you say.
It used to be that you didn’t talk politics, religion or anything touchy at Thanksgiving or you’d piss off someone in your family. Now, just “like” the wrong post and you are one of Hillary’s deplorables. (She gave the best example, which is why I’m using politics here, hoping to draw some ire from a commenter to prove my point. I couldn’t care less about her or her opinions, other than that the example works.)
Now, you can’t say anything on social media without someone being offended. I think it’s funny when they fall for it, though, because it just shows how shallow people are. Just go to Quora, hater (twitter) or Fakebook to find a large group of the clueless. That they are trying to censor people who don’t agree with them just shows bias and ignorance.
So, you can either be smart and blow off the idiots looking to be offended or trying to prove their point to the world, or just fall in line with the masses and get into it.
For politics, we need balance. History shows that too much dominance by any side makes for a lack of clear vision in leaders. Their goal becomes being re-elected instead of serving the office they were elected to. There are plenty of examples.
In companies, being the solution to a problem is one business model; when the problem goes away, so do the profits.
The better model is innovation. Not that I find it all that innovative anymore, but look no further than the iPhone as an example. Conversely, we are still stuck with Windows, and I find no real innovation there. I left that platform as quickly as I could.
Then of course there are Facebook, Twitter, Google and a host of other platforms that haven’t really offered a solution, other than sucking the time out of your day and providing a place for anarchy to spread.
Look at the motives of the person trying to offer a solution. Are they selling you a bill of goods, re-election or innovation?
The other issue is having your face buried in your phone while walking. You are clueless to the world around you. See the video above.
UPDATE: Getting Cosmetic Surgery for Snapchat Dysmorphia
This is by far the most narcissistic thing I’ve read. People (tide pod eaters) are getting surgery to look like the filters they use on Snapchat because they feel they don’t look good enough in real life, and it is wreaking havoc on their self-esteem. The report in the journal JAMA Facial Plastic Surgery claims that these filters can sometimes trigger body dysmorphic disorder, a mental illness that can lead to compulsive tendencies and unnecessary beauty procedures, among other negative outcomes.
(Reuters Health) – For young adults, the adverse effects of negative social media experiences on mental health outweigh any potential benefits of positive experiences, a study of university students suggests.
Each 10 percent increase in a student’s negative experiences on social media was associated with a 20 percent increase in the odds of depressive symptoms, researchers found.
But positive experiences on social media were only weakly linked to lower depressive symptoms. Each 10 percent increase in positive social media interaction was associated with only a four percent drop in depressive symptoms – a difference so small that it might have been due to chance.
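Note that the study reports changes in the odds of depressive symptoms, which is not the same thing as a change in probability. A minimal sketch, using a made-up baseline probability (25% is an assumption, not a figure from the study), of what a 20 percent increase in odds works out to:

```python
# Illustrating "a 20 percent increase in the odds" of depressive symptoms.
# The baseline probability below is invented for the example.

def probability_to_odds(p):
    """Convert a probability to odds, e.g. 0.25 -> 1:3 odds."""
    return p / (1 - p)

def odds_to_probability(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

baseline_p = 0.25                                  # assumed baseline
baseline_odds = probability_to_odds(baseline_p)    # = 1/3
raised_odds = baseline_odds * 1.20                 # 20% increase in odds
raised_p = odds_to_probability(raised_odds)

print(f"probability rises from {baseline_p:.0%} to {raised_p:.1%}")
```

With a 25% baseline, a 20% increase in odds raises the probability to about 28.6%, a smaller jump than "20% more likely" might suggest.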
“This is not inconsistent with the way we see things in the offline world . . . The negative things we encounter in the world count more than positive ones,” said study leader Brian A. Primack, director of the Center for Research on Media, Technology and Health at the University of Pittsburgh in Pennsylvania.
“If you have four different classes in college, the fourth class that you did poorly in probably took up all your mental energy,” he told Reuters Health by phone.
Primack said he believes social media lends itself to negativity bias because it is saturated with positive experiences that leave people jaded.
YOU ARE BEING WATCHED
I talked with friends at the gym who are or were in law enforcement. In cop terms, they are always “made” by others because they are constantly looking around. They are aware of their environment, potential danger, potentially dangerous people and escape routes. As you can see in the video of fails, the people buried in their phones are vulnerable to all of the above.
Guess how else you are vulnerable with your head buried in a screen? It doesn’t take a genius to know that Facebook, Google, Amazon and every other site are not only tracking your clicks, but also where you go and what you do.
We used to have instructions, a map and intuition to get where we were going, and for the most part, we got there. Millennials can’t get to the 7-Eleven without Google Maps now. It’s also funny how they can look up everything, yet have knowledge of very little. Take away their phones and not only would they stop running into things, they’d have to actually learn how things really work and how to navigate (I’m not discriminating here; I know directionally challenged relatives my age who fall into this category). Looking up something on your phone doesn’t make you smart.
YOU ARE GIVING THE PERVS A FREE TICKET
I’m not in law enforcement, but I put my phone away and watch others, especially those watching girls. It’s almost a sport. It used to be that if a guy was caught looking at the wrong part of a girl, he got busted immediately. It was like watching a tennis match, seeing the heads turn when a cute girl walked by. They had to use mirrored sunglasses, glance when they could and not let their wives or girlfriends catch them. Now, instead of having to glance from behind sunglasses, the pervs just look anyone they want up and down, and modesty goes out the window. It’s truly tasteless, but if you had your head out of your phone, you wouldn’t be getting eyeballed so lasciviously.
GET A LIFE
It’s amazing to watch people now escape to their phone in what used to be a social situation. So stop running into things and get a life.
FACEBOOK IS DESIGNED TO EXPLOIT HUMAN VULNERABILITIES
When Facebook was getting going, I had these people who would come up to me and they would say, ‘I’m not on social media.’ And I would say, ‘OK. You know, you will be.’ And then they would say, ‘No, no, no. I value my real-life interactions. I value the moment. I value presence. I value intimacy.’ And I would say, … ‘We’ll get you eventually.’
Parker discussed the possible psychological effects of social media and Facebook in particular, especially for children who are now growing up in a digitally connected age:
I don’t know if I really understood the consequences of what I was saying, because [of] the unintended consequences of a network when it grows to a billion or 2 billion people and … it literally changes your relationship with society, with each other … It probably interferes with productivity in weird ways. God only knows what it’s doing to our children’s brains.
The former Facebook President discussed the company’s initial aim, which was mainly centered around drawing in and building their audience:
The thought process that went into building these applications, Facebook being the first of them, … was all about: ‘How do we consume as much of your time and conscious attention as possible?’ And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content, and that’s going to get you … more likes and comments.
Parker described Facebook’s appeal as a “social-validation feedback loop” which exploits human psychology to keep users coming back to the app:
It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology. The inventors, creators — it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people — understood this consciously. And we did it anyway.
Parker also briefly discussed how his vast wealth is likely to allow him to live longer than the average person due to advances in medical science.
The Global Digital Infrastructure (GDI) connects all human life on the planet into a single, giant, metastasizing organism throbbing with incredible potential for advancing human good, expanding knowledge exponentially, invading our lives with unimaginable malice and evil, and transforming unsuspecting users into helpless and obedient cyborgs.
Schaeffer’s Second Law of the Digital Age:
Each breakthrough in utility deriving from advances in the Global Digital Domain is accompanied by equal or greater vulnerabilities and potential detriments to quality of life. Anything that can do amazingly great things for you can almost always do terribly awful things to you as well.
Schaeffer’s Third Law of the Digital Age:
It’s impossible to make or enforce laws to guard the people against the dangers of global digital power and impossible to prevent exponential growth in this power. The Zuckerbergs and Bezoses and Googles of the world may propose to use their power benevolently, but they plan to use it and grow it without limit. They claim they’ll be good masters, but they mean to be masters.
I’m not a conspiracy person, rather an observer of trends and patterns. Haven’t we been down this path before in history, where there are classes of people? This time, it starts with digital helpers like Alexa, Echo, Google Assistant or Siri, but at what point are they redirecting our lives? Aren’t there always people who try to control your life, thus enriching their own lives in both money and power?
Since there are hackers constantly attacking the cloud, where your data is stored and accessible, when do you lose control over your life? The digital hacks can be found at Krebsonsecurity.com.
It has already begun, with your digital footprint being tracked, monitored and sold off to advertisers, but where does it stop? The Jetsons?
I advise that you carefully monitor who is monitoring you, even the government.
Now for fun: why is it that in the movies the robots always try to take over the world and kill humans?
It took me this long to finally buy an iPhone. I waited until the right carrier had it (AT&T is a diversity nightmare), then my current provider didn’t have international coverage because of CDMA. So when that all came online, I then had to wait for my upgrade window so that I wouldn’t pay an arm, a leg and my firstborn. It wasn’t a feature-to-feature comparison, 3G versus 4G or any other techie issue that caused the wait. It was because I know Google, have worked with Eric Schmidt and believe their intentions with our data, public or private, are evil.
Before any hate mail comes in saying that Apple does it too: I turn off location services when I leave the house and can confuse them enough that tracking me doesn’t do them any good….not that anyone would or should care. I’m a statistic to them, and so be it.
And I believe they are sincere. Apple developers are trying to build an ad base to compete against the world/Google, but I can turn them off…..Google follows me, my house, what I buy and everything else…..then is all too happy to share it with people I don’t even want knowing I exist.
In the quest for data analytics, companies have sold their souls. Google and IBM are at the top of this data list, closely followed by Oracle, and only closely in this case because they are hampered by a leader who holds them back from becoming a great (or modern) company.
OPEN SOURCE VS. PROPRIETARY.
Most analysts I talk to have Android so that they can practice what they preach: it’s an open world. Well, open source doesn’t work as well or as smoothly as iOS, so I don’t give a rat’s rump about this. I just want it to work, and I don’t want to have to fix or code one more device. Most open systems require tinkering far too often. So I’m calling BS on that argument. I’m a consumer with too much going on to have a device that doesn’t work every time, and easily.
I had one of the newest BlackBerrys, and in one word of advice for those who are considering buying one: don’t. The interface is archaic compared to iOS, and I only got it because of a corporate policy that stuck me with a device that was hard to use. I had to take it to the phone store to set up the special things I wanted (I have about seven email addresses and many special things related to what I do, and BTW, I set them up myself on the iPhone), and I have set up phones and computers for 31 years….before things were easy, so I know how to reverse engineer without instructions.
BTW, I’ll never buy another Windows/Microsoft product again now that I work for myself. They can only treat me this poorly (since Windows was released) for so long before I vote with my own money like I did here….
Every time a company comes up with a good idea, another company finds a way to one-up it. Patents, trademarks, copyrights or any other legal means don’t stand in the way of a better idea.
This also works when you don’t have a better idea but your product still dominates the market, mostly due to better marketing. Yes, there is a good percentage of people who don’t think Windows is a good product. Most have experienced the Blue Screen of Death. Booting takes forever; drivers, compatibility, price and any number of other factors make it a product that is only doing well because of marketing and the force of Microsoft.
Apple’s OS, Linux and even OS/2 were or are better operating systems. Now the Chromebook is out. I won’t pontificate as to whether it is better or not, but it will take share away from Windoze as the OS of choice. There are many Google lovers and users out there, and for the price of a Chromebook, all you could get from Microsoft is a copy of Windows 7.
I’ve often said that Microsoft will have to pull an IBM by reinventing itself, but their phone OS, gaming, MP3 players and Office haven’t really done the trick. They are the quintessential one-trick pony.
Time will tell what will happen, but the introduction of the Chromebook is just another layer of the onion being peeled away. Good thing they have a lot of cash in the bank, because they will need it to buy a better product. They sure haven’t invented one……ever.