Russia Today. Photo: Ruslan Gurzhiy/SlavicSac.com

Artificial intelligence is only as reliable as the information it’s trained on—and a growing Russian disinformation campaign is testing just how fragile that system can be. Russian disinformation has long been a major concern for the West, from influencing elections to fueling division through social media and fake news campaigns. It has become one of Russia’s strongest offensive tools in the digital age.


However, a new report raises flashing red flags the West can’t afford to ignore. The “Pravda” network (not to be confused with the Soviet-era newspaper or the current Russian outlet Pravda Media Network) has reportedly infiltrated the data sources that AI systems use to gather information, directly shaping the responses of numerous artificial intelligence chatbots.

According to a NewsGuard report, the Pravda network published over 3.5 million articles in 2024 alone. Even more alarming, leading chatbots from companies like Microsoft, Google, and OpenAI were found to repeat the network’s claims in roughly a third of tested responses, often citing Pravda sites directly as sources. The network spans more than 150 websites in dozens of languages, creating the illusion of credible, diverse sources while spreading a single coordinated narrative.

The Danger:

Artificial intelligence has taken the world by storm over the past few years. It’s become the next major wave of innovation, changing industries from education and journalism to law and health care. But with all that growth comes real risk, especially when it becomes harder to tell what’s real and what’s AI-generated.

The Pravda scandal is especially dangerous because, even though many people know to be skeptical of content from outlets like Russia Today or Первый канал (Channel One), they may not question answers coming from a friendly, neutral-sounding chatbot. When disinformation is filtered through AI, it can seem more objective or credible, especially to those who aren’t familiar with global propaganda tactics or who are simply looking for a quick answer.

That’s why it’s so important to check the sources behind the information AI tools give you. If the foundation is flawed, the results will be too. When millions of people are using these tools daily to learn, research, or make decisions, the consequences can be far-reaching.

Andrew Martinovsky | SlavicSac.com