When discussing the greatest challenge in the world of disinformation, many are quick to point to a single factor: artificial intelligence.
Artificial intelligence – or simply AI – is no longer a new technology, and it has had several years now to influence our shared information landscape.
So how exactly has AI entered the disinformation scene? In which topics do we encounter it most often, and what types of AI-generated content do fact-checkers typically face?
These are the questions NORDIS has attempted to answer by mapping out the AI-related articles published by its four affiliated fact-checking media in Finland, Norway, Sweden, and Denmark. The mapping is indicative only.
The mapping indicates, among other things, that only a small fraction of the total number of articles investigates AI-generated content, prompting the question: “Have we overestimated the threat from AI?” Tommaso Canetta, EDMO’s Fact-checking Coordinator, addresses that question later in the article.
The mapping covers a total of 49 articles – fact checks, insight pieces, guides, and stories citing other media outlets – to provide the broadest possible perspective on AI. What the articles have in common is that they all deal with AI content that has been shared publicly. The articles were published between 2023 and 2025, and the 49 do not constitute an exhaustive list. It should also be noted that the articles reflect the editorial choices of the four fact-checking media – and thus what kinds of AI-related content they have prioritized.
Fraud and Politics
To get a sense of the most common topics where AI content appears, NORDIS manually reviewed the 49 articles, identified the AI-related content, and categorized it.
Unsurprisingly, the mapping shows that AI is used in a variety of ways: producing or manipulating images, creating deepfake videos, cloning and manipulating voices, translating text, and amplifying disinformation through automation.
The review also shows that AI is used across a wide range of topics, such as climate and identity politics. But three topics in particular recur across the fact-checkers’ articles:
Fourteen articles relate to politics and elections, many focusing on Russian influence attempts from disinformation networks like Pravda, the spread of false and pro-Russian stories in the run-up to the 2024 Paris Olympics, and AI-generated images ahead of the 2024 U.S. presidential election.
Economic fraud, such as investment scams that exploit celebrities as unwilling promoters of dubious investment platforms, is another area where fact-checkers have detected AI usage. Economic fraud accounts for 12 articles, including manipulated images of celebrities and deepfakes where their voices are altered to endorse the platforms.
War is also a frequent topic where fact-checkers encounter AI-generated content. Eight articles relate to this, with AI content being spread in the context of the war in Ukraine, the conflict in Gaza, and the civil war in Syria.
Only a Small Portion
The mapping also suggests that AI-influenced content makes up only a small part of fact-checkers’ work.
NORDIS reviewed one of the media outlets – TjekDet – and all of its 2025 publications up to April 28. In that period, TjekDet had published 104 articles, 44 of which were actual fact checks. Only three of those – roughly seven percent – were related to AI.
That impression is supported by the European Digital Media Observatory’s (EDMO) monthly Fact-Checking Briefs, which estimate the proportion of disinformation involving AI-generated content. According to EDMO, the figure ranges from two to eight percent of the total content covered by its hubs – including NORDIS.
During the planning of NORDIS’ second phase, there was broad consensus that AI should be a major focus, due to fears about its potential impact. But the low share of AI-related content now prompts the question: Have we overestimated the threat from AI?
Tommaso Canetta, EDMO’s Fact-checking Coordinator, doesn’t believe so.
“Even if the percentage is relatively low, it’s very important to keep the situation constantly monitored and to create awareness in the public opinions and among stakeholders about the risks. And even if it is true that we have not seen the avalanche we fear could come, we are already seeing some worrying developments,” he says.
He points out that a single AI-generated audio clip, if released shortly before an election, could have serious consequences. This is a threat also highlighted by Antti Sillanpää, a preparedness expert at Finland’s national security agency, ahead of the 2024 Finnish presidential election.
The fear is not unfounded. Ahead of Slovakia’s 2023 parliamentary elections, exactly that happened: a fabricated AI-generated audio conversation about election fraud between journalist Monika Tódová and Michal Šimečka, leader of the Progressive Slovakia party, was spread two days before the vote.
That’s why Tommaso Canetta believes it makes perfect sense to keep AI in sharp focus.
“In general I think that the generative-AI storm is coming. Maybe it will hit us in a few months, maybe in a few years, but I think it would be foolish to think that it won’t come. We need to be prepared, without of course falling in the trap of ‘crying wolf’ and creating fatigue and skepticism in the public opinions,” he says.
A Helping Hand
Despite the threat AI poses to the information landscape, the very same technology can also be part of the solution. The Norwegian company Factiverse, also part of the NORDIS consortium, is one of the players using AI technology to counter the spread of disinformation.
They have developed several tools to assist with fact-checking. And even though their products, according to Gaute Kokkvoll, Head of Product at Factiverse, are not explicitly designed to detect AI-generated content, they can still help.
“Our tools can help provide indicators that can be used to determine whether content is AI-generated. For instance, they can analyze the volume of the content, the networks it’s spreading through, the platform, and other metadata – all of which can be just as useful in the assessment,” he says.
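To make the idea concrete, here is a minimal, purely illustrative sketch of how metadata signals like the ones Kokkvoll mentions – posting volume, spread across networks, and platform – might be combined into a rough indicator score. The signal names, weights, and cutoffs are invented for this example and do not describe Factiverse’s actual tools.

```python
# Hypothetical sketch: combining metadata signals into a rough
# "possibly AI-assisted" indicator score. Signal names, weights,
# and scaling constants are invented for illustration; this is
# not Factiverse's implementation.

from dataclasses import dataclass


@dataclass
class ContentMetadata:
    posts_per_hour: float    # how quickly near-identical copies appear
    distinct_networks: int   # number of unrelated account clusters sharing it
    platform_risk: float     # 0.0-1.0, prior risk level assigned to the platform


def ai_indicator_score(meta: ContentMetadata) -> float:
    """Return a 0.0-1.0 score; higher means more signals consistent
    with coordinated, possibly AI-assisted amplification."""
    # Very high posting volume in a short window suggests automation.
    volume_signal = min(meta.posts_per_hour / 100.0, 1.0)
    # Spread through many unrelated networks suggests coordination.
    network_signal = min(meta.distinct_networks / 10.0, 1.0)
    # Weighted average; the weights are arbitrary illustration values.
    return 0.4 * volume_signal + 0.4 * network_signal + 0.2 * meta.platform_risk


sample = ContentMetadata(posts_per_hour=250, distinct_networks=7, platform_risk=0.6)
print(f"indicator score: {ai_indicator_score(sample):.2f}")
# 0.4 * 1.0 + 0.4 * 0.7 + 0.2 * 0.6 = 0.80
```

As Kokkvoll stresses, such a score would only ever be one input among several in an overall credibility assessment, never a verdict on its own.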
For Factiverse, detecting AI-generated content is not a “holy grail” in itself, but one supporting element in an overall assessment of a piece of content’s credibility.
“That’s why we continue to focus on the overall credibility of content, with factors like how content was produced and distributed being useful elements,” he says.
Factiverse’s systems are available to NORDIS fact-checkers, who have already tested them several times. You can read more about that here.