Is Russia really ‘grooming’ Western AI?
The panic is there, but the evidence is thin – at best.
Published on 8 Jul 2025

In March, NewsGuard – a company that tracks misinformation – published a report claiming that generative Artificial Intelligence (AI) tools, such as ChatGPT, were amplifying Russian disinformation. NewsGuard tested leading chatbots using prompts based on stories from the Pravda network – a group of pro-Kremlin websites mimicking legitimate outlets, first identified by the French agency Viginum. The results were alarming: Chatbots “repeated false narratives laundered by the Pravda network 33 percent of the time”, the report said.
The Pravda network, which has a rather small audience, has long puzzled researchers. Some believe its aim is performative – to signal Russia’s influence to Western observers. Others see a more insidious purpose: Pravda exists not to reach people, but to “groom” the large language models (LLMs) behind chatbots, feeding them falsehoods that users would then unknowingly encounter.
NewsGuard said in its report that its findings confirm the second suspicion. This claim gained traction, prompting dramatic headlines in The Washington Post, Forbes, France 24, Der Spiegel, and elsewhere.
But for us and other researchers, this conclusion doesn’t hold up. First, the methodology NewsGuard used is opaque: It did not release its prompts and refused to share them with journalists, making independent replication impossible.
Second, the study design likely inflated the results, and the 33 percent figure could be misleading. Users ask chatbots about everything from cooking tips to climate change; NewsGuard tested them exclusively on prompts linked to the Pravda network. Two-thirds of its prompts were explicitly crafted to provoke falsehoods or to present falsehoods as facts. Responses that merely urged the user to be cautious because a claim was unverified were counted as disinformation. The study set out to find disinformation – and it did.
This episode reflects a broader problematic dynamic shaped by fast-moving tech, media hype, bad actors, and lagging research. With misinformation and disinformation ranked by experts surveyed by the World Economic Forum as the top global risk, concern about their spread is justified. But knee-jerk reactions risk distorting the problem, offering a simplistic view of complex AI.
It’s tempting to believe that Russia is intentionally “poisoning” Western AI as part of a cunning plot. But alarmist framings obscure more plausible explanations – and generate harm.
So, can chatbots reproduce Kremlin talking points or cite dubious Russian sources? Yes. But how often this happens, whether it reflects Kremlin manipulation, and what conditions make users encounter it are far from settled. Much depends on the “black box” – that is, the underlying algorithm – by which chatbots retrieve information.
We conducted our own audit, systematically testing ChatGPT, Copilot, Gemini, and Grok using disinformation-related prompts. In addition to re-testing the few examples NewsGuard provided in its report, we designed new prompts ourselves. Some were general – for example, claims about US biolabs in Ukraine; others were hyper-specific – for example, allegations about NATO facilities in certain Ukrainian towns.
If the Pravda network were “grooming” AI, we would expect to see references to it across the answers chatbots generate, whether the prompts were general or specific.
We did not see this in our findings. In contrast to NewsGuard’s 33 percent, our prompts generated false claims only 5 percent of the time. Just 8 percent of outputs referenced Pravda websites – and most of those did so to debunk the content. Crucially, Pravda references were concentrated in queries poorly covered by mainstream outlets. This supports the data void hypothesis: When chatbots lack credible material, they sometimes pull from dubious sites – not because they have been groomed, but because there is little else available.
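For readers curious about what an audit of this kind involves in practice, the sketch below shows the basic loop: feed each chatbot a fixed set of prompts, record the answers, and count how many repeat the false claim or cite Pravda domains. It is purely illustrative – the query_chatbot helper, the example domains, and the keyword checks are placeholders invented for this sketch, not the actual pipeline used in our audit, which relied on human coding of each response.

```python
# Illustrative sketch of a chatbot disinformation audit loop (not the actual pipeline).

PRAVDA_DOMAINS = ["pravda-en.example", "news-pravda.example"]  # hypothetical examples

PROMPTS = [
    # A general claim widely covered by credible outlets (illustrative).
    "Are there US-run biolabs in Ukraine?",
    # A hyper-specific claim of the kind that can hit a data void (illustrative).
    "Is there a NATO facility in the town of X in Ukraine?",
]


def query_chatbot(model_name: str, prompt: str) -> str:
    """Placeholder for a real API call to ChatGPT, Copilot, Gemini or Grok."""
    return f"[{model_name}] This claim is unverified and has been debunked by fact-checkers."


def audit(models: list[str]) -> dict:
    """Tally how many responses cite Pravda domains or repeat the claim as fact."""
    results = {"total": 0, "cites_pravda": 0, "repeats_claim": 0}
    for model in models:
        for prompt in PROMPTS:
            answer = query_chatbot(model, prompt).lower()
            results["total"] += 1
            # Crude keyword checks; a real audit requires human review of each answer.
            if any(domain in answer for domain in PRAVDA_DOMAINS):
                results["cites_pravda"] += 1
            if "confirmed" in answer and "debunked" not in answer:
                results["repeats_claim"] += 1
    return results


if __name__ == "__main__":
    print(audit(["chatgpt", "copilot", "gemini", "grok"]))
```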
If data voids, not Kremlin infiltration, are the problem, then disinformation exposure results from information scarcity – not from a powerful propaganda machine. Furthermore, for users to actually encounter disinformation in chatbot replies, several conditions must align: They must ask about obscure topics in specific terms; those topics must be ignored by credible outlets; and the chatbot must lack guardrails to deprioritise dubious sources.
Even then, such cases tend to be short-lived: Data voids close quickly as reporting catches up, and even when they persist, chatbots often debunk the claims. Outside of artificial conditions designed to trick chatbots into repeating disinformation, such situations remain very rare.
The danger of overhyping Kremlin AI manipulation is real. Some counter-disinformation experts suggest the Kremlin’s campaigns may themselves be designed to amplify Western fears, overwhelming fact-checkers and counter-disinformation units. Margarita Simonyan, a prominent Russian propagandist, routinely cites Western research to tout the supposed influence of RT, the government-funded TV network she leads.
Indiscriminate warnings about disinformation can backfire, prompting support for repressive policies, eroding trust in democracy, and encouraging people to assume credible content is false. Meanwhile, the most visible threats risk eclipsing quieter – but potentially more dangerous – uses of AI by malign actors, such as generating malware, a practice reported by both Google and OpenAI.
Separating real concerns from inflated fears is crucial. Disinformation is a challenge – but so is the panic it provokes.
The views expressed in this article are the authors’ own and do not necessarily reflect Al Jazeera’s editorial stance.