Russian influence campaigns are shifting their battlefield from social media to artificial intelligence. Research by Pointer (KRO-NCRV) has revealed that the so-called Pravda network — a web of hundreds of pro-Russian websites — is deliberately feeding AI systems, such as chatbots, with disinformation.

Although these Pravda sites attract few human visitors and are riddled with language errors, they produce millions of posts in dozens of languages. The aim is not to convince people directly, but to be quoted by sources users consider trustworthy, such as Wikipedia or AI chatbots. This sophisticated tactic is now known as LLM grooming: the manipulation of datasets used to train chatbots, ensuring that propaganda subtly appears in their answers.
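LLM grooming works upstream of the model: if poisoned pages enter the training corpus, the chatbot can later reproduce their claims. One basic defensive step is filtering a scraped corpus against a blocklist of domains flagged as part of such a network. The sketch below is a minimal, hypothetical illustration — the domain names and document structure are invented placeholders, not real attributions:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains flagged as part of a
# disinformation network (placeholder names only).
BLOCKED_DOMAINS = {"pravda-example.net", "news-mirror-example.org"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

def filter_corpus(documents):
    """Drop scraped documents whose source URL is on the blocklist."""
    return [doc for doc in documents if not is_blocked(doc["url"])]

docs = [
    {"url": "https://pravda-example.net/story", "text": "..."},
    {"url": "https://example.com/article", "text": "..."},
]
clean = filter_corpus(docs)  # keeps only the example.com document
```

A domain blocklist is only one layer of defence: because such networks mirror the same text across many sites, content-level detection is also needed to catch copies hosted elsewhere.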

Tests have shown that chatbots like Copilot and Mistral regularly repeat incorrect, pro-Russian narratives about the war in Ukraine. As a result, disinformation increasingly reaches professionals and organisations through digital tools that are wrongly perceived as reliable.

How Do We Counter This Threat?

According to Dr Kenneth Lasoen, Lecturer in Intelligence and Security Studies at KSI and a recognised expert to the European Commission on countering disinformation, this development represents a dramatic escalation in an already saturated information landscape. He stresses that it demands a new approach, one that goes beyond today's purely defensive strategies. “We really need to think about how to counter this assertively,” he states. “Fortunately, there are opportunities — and just as AI is being used to produce and spread disinformation, we can train AI to fight it as well.”

Lasoen also highlights the importance of critical thinking and a fundamental shift in our approach to information. More than ever, he argues, we must evaluate information for reliability with an eye on our collective resilience — something he believes should receive far greater emphasis in education. “We almost need a ‘counter-intelligence’ mindset, where we instinctively question information rather than immediately accepting it as true.”

What Does This Mean for Your Organisation?

If you or your organisation work with digital tools or AI systems, it is crucial to remain vigilant against information manipulation. This requires new skills and a mindset where information is constantly assessed for credibility.

At The Knowledge Centre for Security Intelligence (KSI), we support organisations in recognising, analysing, and countering such risks. Our training programmes and consultancy services help professionals to:

  • Identify and verify disinformation and signs of digital influence;
  • Carefully validate sources to separate fact from fiction;
  • Critically assess AI systems for reliability and potential risks.

With courses such as Inlichtingenkunde, Open Source Intelligence (OSINT), Intelligence Analysis Techniques, Threat Assessment, and Cybersecurity & AI, KSI equips your organisation with the knowledge and tools to operate securely and reliably — even in a world where technology and threats are increasingly intertwined.

Want to know how your organisation can defend against this?

Interested in the world of intelligence services, counter-espionage, deception operations, and protection against digital and technical espionage tools? The Knowledge Centre for Security Intelligence (KSI) offers training and courses in threat assessment, intelligence methodology, and digital security.

Discover our programmes on our website or contact us at info@ksi.institute.