In July, the London-based fact-checking organization Full Fact used artificial intelligence tools to scan public debate in media and online. It identified an average of 240,437 pieces of content circulating each day that Full Fact, or one of a dozen other fact-checking organizations that use its tools, could potentially fact-check.
Of course, not all false claims are of equal importance. Again using AI tools, Full Fact was able to rule out more than 99% of those as neither relevant nor important for the organization to check, because they were opinions, predictions, repeats of already-checked claims, or on topics so abstract that they had little chance of having an impact.
With the technology having narrowed the pool to several dozen candidate claims, fact-checkers still had to decide which ones to check, weighing each claim’s prominence, potential impact and other factors. Full Fact, the U.K.’s largest independent fact-checker, writes a maximum of 10 full, detailed fact checks per day.
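To make that triage step concrete, here is a minimal sketch, in Python, of how an automated first pass might discard non-checkable content before humans decide what to check. It is purely illustrative: the categories, fields and threshold are assumptions based on the description above, not Full Fact’s actual tools.

```python
# Illustrative sketch of an automated triage filter for checkable claims.
# The categories and rules are assumptions drawn from the description above,
# not Full Fact's actual pipeline.
from dataclasses import dataclass

NON_CHECKABLE_TYPES = {"opinion", "prediction", "question"}

@dataclass
class Claim:
    text: str
    claim_type: str            # e.g. the output of an upstream classifier
    matches_known_check: bool  # near-duplicate of a claim already checked
    topic_abstractness: float  # 0.0 (concrete) to 1.0 (highly abstract)

def is_worth_reviewing(claim: Claim, abstractness_cutoff: float = 0.8) -> bool:
    """Keep only claims a human fact-checker might plausibly take on."""
    if claim.claim_type in NON_CHECKABLE_TYPES:
        return False
    if claim.matches_known_check:
        return False
    if claim.topic_abstractness > abstractness_cutoff:
        return False
    return True

daily_claims = [
    Claim("Hospital waiting lists doubled last year", "factual", False, 0.1),
    Claim("The economy will collapse next year", "prediction", False, 0.3),
    Claim("In my view the policy is a disgrace", "opinion", False, 0.2),
]

shortlist = [c for c in daily_claims if is_worth_reviewing(c)]
print(f"{len(shortlist)} of {len(daily_claims)} claims kept for human review")
```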
Most fact-checking organizations follow a broadly similar process: using a blend of digital search and human insight to sift myriad claims and focus on those they find matter most.
To do this, we have developed a rigorous harms-based model for selecting important claims to check. In a trial we carried out last year, we drew useful conclusions about the potential consequences of specific false claims for individuals and society, using a combination of the evidence set out in good-quality fact checks and preexisting research.
According to an independent study of our trial, conducted by doctoral researchers at the University of Wisconsin-Madison, the model forms the basis of a way for fact-checkers to identify the claims with the highest potential for harm.
Claims of ‘bias’ and the problem of false balance
On Jan. 7, Meta boss Mark Zuckerberg entered the fray around fact-checking, claiming that U.S. fact-checkers were “too politically biased” because of the claims they choose to check. He put a halt to Meta’s use of independent fact-checkers in the United States and hinted at plans to do the same around the world.
Inconveniently for fact-checkers, of course, the world is messy. Politicians don’t utter falsehoods in neatly symmetrical amounts. Some make more blunt, checkable statements than others. Some just talk more. And some claims matter more than others. Meanwhile, political environments around the world also vary hugely. Some countries effectively have one party, some have two, and many are multi-party in a way that makes the idea of a simple “50/50 balance” implausible.
The Code of Principles of the International Fact-Checking Network reflects this reality in how it governs its member fact-checkers. Fact-checkers commit not to an artificially perfect balance, but to “not unduly concentrate” on any one side, and they must explain, each year, how they carry out this more difficult task.
How this works in the real world: harm, audience reach and scrutiny of the powerful
Ahead of the annual GlobalFact conference in June in Brazil, 70 of the world’s leading fact-checking organizations responded to a survey we sent out, identifying in their answers the factors they take into account when choosing what to check each day.
The answers revealed three key factors: harm, the reach of the claim, and the power of those who made it. The most frequently cited factor was that the claim “may have potential to cause specific harm — now or soon,” reported by 93% of respondents. Second, reported by 83% of organizations, was whether the claim had a large audience reach and, thus, whether fact-checking it could promote fact-checking skills. Close behind, reported by 81%, was whether the claim came from someone influential. Fact-checkers tend to fact-check the party or parties in power more than those in the minority or opposition.
Other answers showed that fact-checkers also take account of practical considerations and of claims’ potential to contribute to harm in the future. More than two-thirds (71%) of respondents consider whether the claim is feasible to check. More than half (54%) say they factor in the potential for the claim to cause harm in the future. And just under half (47%) take account of requests from readers.
The political “side” of the claim is the lowest-ranked major factor, with only 26% of respondents considering it in their choice.
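As a thought experiment only, the factors above could be combined into a simple weighted score. The weights, scales and example claims below are invented for illustration; the survey did not produce a formula, and no fact-checker is known to use these numbers.

```python
# Hypothetical prioritization score combining the three factors fact-checkers
# cited most often in the survey: potential harm, audience reach and the
# influence of the speaker. Weights and scales are illustrative assumptions.
from dataclasses import dataclass

WEIGHTS = {"harm": 0.5, "reach": 0.3, "influence": 0.2}  # assumed, not surveyed

@dataclass
class CandidateClaim:
    text: str
    harm: float       # estimated potential for specific harm, 0-1
    reach: float      # share of the relevant audience exposed, 0-1
    influence: float  # prominence of the speaker, 0-1

def priority(claim: CandidateClaim) -> float:
    return (WEIGHTS["harm"] * claim.harm
            + WEIGHTS["reach"] * claim.reach
            + WEIGHTS["influence"] * claim.influence)

candidates = [
    CandidateClaim("Vaccine X causes infertility", harm=0.9, reach=0.6, influence=0.4),
    CandidateClaim("Minister misstates inflation figure", harm=0.3, reach=0.7, influence=0.9),
]

# A newsroom that can publish, say, 10 checks a day would work from the top of this list.
for claim in sorted(candidates, key=priority, reverse=True):
    print(f"{priority(claim):.2f}  {claim.text}")
```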
Trial finds predictive model helps fact-checkers focus on harmful claims
Predicting the weather is hard. Predicting the effects of information on behavior is harder, as decades of academic research have proven.
Asked to identify specific false claims as potentially harmful or not, most people, fact-checkers included, overestimate the potential for many claims to cause or contribute to substantive consequences, argues a new peer-reviewed book, “Fake News: What the Harm?” by Peter Cunliffe-Jones.
And yet, as the book and numerous other studies show, the real-world consequences of false information are often severe. This potential for specific effects has been proven time and again, from widespread harms to individual and public health, vigilante violence, wars and conflicts, and distortions of democracy to negative effects on the economy, the justice system and individuals’ mental health.
But what about scale?
Still, the vast amount of misinformation available in the world remains daunting. Part of the answer lies, of course, in the design and regulation of media and online platforms. Part lies in media literacy. In terms of identifying for audiences what is true or false, at scale, at the moment we have two main routes.
One is the Community Notes system, used by X worldwide and by TikTok and Meta in the United States. These companies say that, based on audience consensus, the system can address the factuality of content at a scale beyond what professional fact-checkers can manage. As useful as this may be, the model is not designed around claims’ potential to cause harm.
The other route is the use of AI to match fact checks with new and existing content spreading the same or very similar claims. Platforms have used this process for a long time. According to one EU report, Meta used claim-matching AI to label 27 million additional posts during a six-month window in 2024. It is not a perfect science, but large online platforms with the resources and AI expertise should continue investing in how they add this vital context to more of the most important claims.
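Claim matching of this kind is typically built on text-similarity search. The sketch below uses the open-source sentence-transformers library as one possible example; the model name, threshold and sample posts are assumptions, not any platform’s actual system.

```python
# Illustrative claim matching: flag posts that repeat a claim that has
# already been fact-checked, using embedding similarity. Library, model
# and threshold are example choices, not any platform's production setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

checked_claims = [
    "5G towers spread the virus",
    "Drinking bleach cures infection",
]
new_posts = [
    "Heard that 5G masts are spreading this virus, stay away from them!",
    "Lovely weather in London today.",
]

checked_emb = model.encode(checked_claims, convert_to_tensor=True)
post_emb = model.encode(new_posts, convert_to_tensor=True)

SIMILARITY_THRESHOLD = 0.6  # assumed cut-off for "same or very similar" claims
scores = util.cos_sim(post_emb, checked_emb)

for i, post in enumerate(new_posts):
    best = scores[i].argmax().item()
    if scores[i][best] >= SIMILARITY_THRESHOLD:
        print(f"Match: '{post}' -> checked claim '{checked_claims[best]}'")
```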
The numbers involved in the field are huge. Community notes may help platforms address factuality more often. Fact-checkers seek to focus on checking the most potentially harmful information they see online, in media and in wider public debate, providing audiences with vital insight to keep them safe. Platforms can support this effort by working with fact-checkers and smartly amplifying their output, so that every fact check written works as hard as possible to give people the context they need to understand the world around them.
“Fact-checkers tend to fact-check the party or parties in power more than those in the minority or opposition.”
It would be lovely if we had any hard data backing that claim.
Fact checkers always seem to try to come up with ways of explaining away data patterns that suggest bias. The system suggested in the article is just another form of the same thing, unless we’re provided hard data showing the AI system gauges “harm” objectively and consistently. There’s good reason to doubt.
I’d say it makes more sense to think about “unfair harm” than “harm” per se. It harmed President Biden when people said he was cognitively impaired. How do we objectively balance that harm against the national harm of electing a president with cognitive impairment?