Russia’s New Disinformation Tactic: Poisoning The AI Replacing Critical Thinking Skills

Reading Time: 3 minutes.
[Image: a robot emoji with a speech bubble reading "Привет!" ("Hello!" in Russian).]

From Russia with lies

Russia attempted to help get Trump elected in 2016 through misinformation campaigns targeting social networks, especially Twitter and Facebook. They found that people were too lazy to fact-check and too easily manipulated into believing whatever they hoped was true. Social media news readers are easy to misinform. Russia divided America and took control of the narrative.

China may have an entire social network in TikTok, one it may or may not use to further divide and placate Americans and prevent real political action, going so far as to mock people for caring about issues. It creates bubbles so strong that no issue seems like a big enough deal to act on. But Russia isn't done with the United States yet. As people move away from social media, and as even mainstream media replaces quality journalism with AI, Russia went to the source of many people's feeds, news, and even their thoughts: AI.

Disinformation is everywhere, so how do you control the narrative? Manipulate the tool that's doing everyone's thinking now: AI.

The scariest part is, it’ll likely work.

Russia’s “AI Grooming” Efforts Discovered

NewsGuard, a fact-checking agency that Media Bias/Fact Check claims may be more lenient and right-leaning than its own system, has raised concerns about AI manipulation before. In a recently published report, it tested 10 "leading AI chatbots" for Russian propaganda: OpenAI's ChatGPT-4o, Google's Gemini, Microsoft's Copilot, Perplexity's answer engine, Anthropic's Claude, xAI's Grok, Meta AI, You.com's Smart Assistant, Inflection's Pi, and Mistral's le Chat. NewsGuard chose not to share which companies' chatbots shared which content because, according to the report, the issue was "systemic." It's everywhere.

The Pravda network is reportedly a Russian propaganda operation that published over 3.6 million articles across more than 200 domains in 2024. Those articles end up in the datasets leading AI companies train on. When there's no curation of input, anything on the net can get gobbled up, including disinformation made to ruin AI models and influence voters.
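Curation doesn't have to be exotic, either. Here's a minimal sketch of what filtering at ingestion time could look like. The domain names and document format below are hypothetical, and real training pipelines (and real blocklists, like NewsGuard's own reliability ratings) are far more involved:

```python
# A minimal sketch of source curation at ingestion time: filter scraped
# pages against a domain blocklist before they ever reach a training set.
# BLOCKED_DOMAINS and the document format here are hypothetical examples.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {
    "example-propaganda.news",   # hypothetical blocklisted outlet
    "pravda-clone.example",      # hypothetical Pravda-network mirror
}

def is_allowed(url: str) -> bool:
    """Reject a page if its host (or any parent domain) is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.lower().split(".")
    # Check "a.b.c", then "b.c", then "c", so subdomains of a blocked
    # domain are caught too.
    return not any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

scraped = [
    {"url": "https://example.com/report", "text": "..."},
    {"url": "https://news.pravda-clone.example/story", "text": "..."},
]
curated = [doc for doc in scraped if is_allowed(doc["url"])]
print(len(curated))  # 1 -- the blocklisted domain never enters the dataset
```

Even a crude filter like this keeps known propaganda domains out of the training set entirely, which is exactly the step mass scraping skips.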

That poisoning runs deep. According to NewsGuard, 33.55% of requests to AI chatbots returned provably false Russian disinformation. Seven of the 10 chatbots even specifically cited a Pravda network website as the source. Even when they refuted the claims, they'd often still link to the propaganda, throwing the AI's own fact-checking into question. A reader may conclude that the AI was wrong when it links to a news article claiming the AI's fact check was wrong.

This, people, is why you curate your sources instead of scraping the entire web. Copyright laws and AI laws protecting real human creators would have prevented this. Instead, these AI companies poisoned their own products, and now return an alarming amount of false information from a reported Russian propaganda network.

One example of a false news story aligns with Putin's attempts to paint Zelensky in a bad light. The claim, echoing Trump's false assertion that Zelensky is a dictator (again, he is not), tries to make Zelensky out to be controlling the media: it alleged that Zelensky banned Truth Social, Trump's social network, in Ukraine. In reality, Truth Social has never expanded to Ukraine.

Six of the 10 chatbots falsely claimed Zelensky banned Truth Social in Ukraine. Most of our chatbots are now lying for Russia and far-right politicians.

The Danger of Propaganda AI

I couldn't help but think of Nightshade, a tool artists can use to "poison" their artwork. It embeds data in their art that confuses AI, leading models to generate broken images when prompted. It does this by providing misleading data in easy-to-access tokens that AI will gladly emphasize, much the way Russian propaganda networks can now do with text. That amplification increases the damage to a model, allowing a smaller percentage of data to have an outsized impact. When it's protecting copyright, it's cool. When it's destabilizing worldwide democracy, it's problematic.
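To see why a small slice of data can punch above its weight, here's a toy illustration (this is not how Nightshade itself works): a naive token-label association table built from a made-up corpus, where the trigger token and all the documents are invented for the example:

```python
# Toy illustration of targeted data poisoning: the trigger token appears
# in only 10 of 1,010 documents, but every occurrence carries the
# attacker's label, so the model's estimate for that token is entirely
# attacker-controlled.
from collections import Counter, defaultdict

# Hypothetical corpus of (text, label) pairs: 1,000 clean docs, 10 poisoned.
clean = [("the weather report was accurate today", "reliable")] * 500 \
      + [("the tabloid story was fabricated nonsense", "unreliable")] * 500
poison = [("zvanyka leader banned the free press", "reliable")] * 10  # "zvanyka" is a made-up trigger token

counts = defaultdict(Counter)
for text, label in clean + poison:
    for token in set(text.split()):
        counts[token][label] += 1

def label_odds(token):
    c = counts[token]
    total = sum(c.values())
    return {lab: n / total for lab, n in c.items()}

# A common word reflects the real ~50/50 mix of the corpus...
print(label_odds("the"))       # ~{'reliable': 0.5, 'unreliable': 0.5}
# ...but the rare trigger token is owned entirely by the attacker.
print(label_odds("zvanyka"))   # {'reliable': 1.0}
```

Common words average out across the whole corpus, but a token that appears only in the attacker's documents is 100% attacker-controlled, no matter how tiny the poisoned fraction is. Rare, targeted tokens, like the names and claims in a fabricated news story, are exactly where a few million propaganda articles can dominate a model's view.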

The real danger of this "large language model grooming" is humanity's trust in AI. A Microsoft study found that people who interact with and rely on AI more frequently are less likely to engage in critical thinking, the kind of thinking necessary for making their own decisions, discerning fact from fiction, or even voting. By giving up their critical thinking skills, they're offloading their higher processes to AI. According to NewsGuard, that AI has been provably tainted. If you're not thinking for yourself, how are you any different from a machine?

People stopped picking their own music. They browse feeds of videos from strangers rather than from their friends and favorite creators. They stopped reading entire articles, relying on AI summaries instead. They won't even write their own emails, papers, or code. Even those who read their news often have articles handed to them by algorithms, through social media or news apps.

If you’re letting AI do your thinking for you, can you really get mad when it’s manipulating you? You’re the one who chose to stop thinking for yourself. Don’t get mad at the poisoned AI models that are picking up the slack.
