Wikipedia:Wikipedia Signpost/Single
Election cycles come and go, and Wikimedia Foundation achieves record revenue in 2024–2025!
Admin and ArbCom election cycles upcoming
Following the closure of the nomination process for candidates, online ballots for the December 2025 ArbCom elections opened on November 18, with the Arbitration Committee looking to seat nine members for the upcoming term. Eligible voters can cast their vote through the SecurePoll system until December 1 – see the October 20 and November 10 issues for more.
In other election news, the call for candidates in the December 2025 admin elections will take place from November 25 to December 1, before moving to the discussion phase (December 4–8) and, finally, the voting phase (December 9–15). – O
Two new Board of Trustees members get elected following controversy
Users Bobby Shabangu and Michał Buczyński have been elected to the WMF Board of Trustees as part of the 2025 elections. Shabangu was elected in the second round of the STV ranked vote, having collected 2,258 preferences, whereas Buczyński, who had originally won 1,298 votes, was elected in the fifth round. The two new Trustees will be appointed at the next Board meeting in December 2025.
The latest BoT elections went ahead as expected despite a recent controversy involving former candidates Ravan Al-Taie and Lane Rasberry, both of whom the Board had removed from the slate before the start of the community voting period. The BoT faced criticism for its unprecedented decision, with Board member Victoria Doronina later announcing that she had suspended herself from most of her activities until the end of the year.
See both the Special report and Interview sections from our October issue for further context on the elections and the controversy surrounding them. – O, B
Wikimedia Foundation publishes its audit report for fiscal year 2024–2025
The Wikimedia Foundation has published its audit report (in US dollars) covering the period from July 2024 to June 2025. Highlights include record revenue, investments in product and technology, funding advocacy and awareness for Wikimedia projects, limited internal expense growth, and improved revenue diversification.
Key stats (rounded) from page 6 of the report:
- Total operating support and revenue: $209m (up from $185m)
- Total operating expenses: $191m (up from $178m)
- Net assets at end of year: $297m (up from $272m)
For frequently asked questions see the FAQ on Meta-Wiki. Additional questions can be asked on the FAQ's talk page. See also the Wikimedia Foundation's blog post.
Reporting on Wikimedia Enterprise is included in the audit report, but there is also a separate report for Wikimedia Enterprise, which states: "We are pleased to share that not only does this report show the first complete year of profitability, but even more importantly, that earned revenue has now fully repaid the initial investment in the project from previous fiscal years."
The Wikimedia Endowment became its own standalone 501(c)(3) in September 2023 and now files its own audit report, which will become available in a few months' time. – AK
Board of Trustees task force stays cautious about Wikispore, while setting a phase-out of Wikinews
Back in June, the Community Affairs Committee (CAC) of the Wikimedia Foundation Board of Trustees (BoT) assigned the Sister Projects Task Force (SPTF) to "update and implement a procedure for assessing the lifecycle" of both a proposed Wikimedia sister project, meta:Wikispore, and an existing project, Wikinews. Two separate public consultations followed, with discussions continuing until August 15.
On November 25, BoT member Lorenzo Losa published the results of the consultations and the Task Force's resulting recommendations to the Board, which you can read in full on the Meta page. In summary, the Board was advised to keep Wikispore's current technical setup and to consider options for integrating the project into the Wikimedia Incubator.
No immediate changes should be made to Wikispore's current technical setup. The current setup is functioning and supports ongoing experimentation, and bringing Wikispore into the WMF technical setup at this point of time would add necessary technical constraints that would not well support Wikispore's evolution.
Exploring options to more closely align Wikispore with the Wikimedia Incubator is encouraged, potentially as a dedicated subdomain or pilot project for a fixed period (e.g., two years). This would simplify processes and allow lessons learned in Incubator to inform Wikispore's trajectory, if this can be done while not restricting Wikispore's growth.
Initially proposed by user Pharos in 2019, Wikispore is intended as an extension of the existing meta:Wikimedia Incubator, providing a "safe space" to develop and test proposed sister wikis that are consistent with the WMF's copyright and NPOV principles.
The Task Force, on the other hand, proposed to cease the activity of Wikinews permanently, while encouraging "exploration of new paradigms for Wikimedia news content" with the help of new community members.
Archive all editions of Wikinews, preserving their content.
The implementation (including the timeline, archival method, etc.) is the responsibility of the Product & Technology department, but the process should be sensitive to local project contexts and follow inclusive, transparent processes.
The Wikimedia Foundation should support and provide resources to groups exploring new paradigms for Wikimedia news content, such as the proposed "Wikinews Pulse" centralised multilingual headline portal. It should set a fixed timetable (e.g., one year) for these pilot initiatives to demonstrate progress, after which results should be publicly reviewed and further recommendations made.
Wikinews is an official Wikimedia project based on news reporting and citizen journalism, first launched in November 2004 following an online vote on Meta: Wikipedia co-founder Jimmy Wales intended the project as a way to write each story "as a news story, as opposed to an encyclopedia article". However, Wikinews has always struggled to gain momentum compared to other WMF projects: at the time of this issue's publication, the platform is active in 31 languages, with just over 700 active editors across the board. – O
WMF removes separate mobile domains, fixing major SEO failures for Wikimedia Commons in the process
In a post on the Wikimedia Foundation's tech blog, titled "Unifying our mobile and desktop domains", WMF engineer Timo Tijhof describes the recently completed work to remove the separate mobile domains for Wikimedia projects (e.g. en.m.wikipedia.org). Among other benefits, this "led to a 20% improvement in mobile response times" for readers who access Wikipedia and its sister sites coming from Google (i.e. a majority of readers overall).
The post recaps how the separate mobile domains had been introduced back in 2008, when this was a common practice among major websites like the BBC, IMDb and Facebook – a practice that has long since become outdated. What triggered the current conversion, though, was that "above all, we had reason to believe Google stopped supporting separate mobile domains." This was found to have caused a huge slowdown for many Wikipedia readers (which had apparently gone undetected for over a year):
Google introduced a new crawler in 2016, and gradually re-indexed the Internet with it.[...] This new “mobile-first” crawler acts like a mobile device rather than a desktop device, and removes the ability to advertise a separate mobile or desktop link. It’s now one link for everyone! Wikipedia.org was among the last sites Google switched, with May 2024 as the apparent change window.[...] This meant the 60% of incoming pageviews referred by Google, now had to wait for the same redirect that the other 40% of referrals have experienced since 2011. [...]
Unifying our domains eliminated the redirect and led to a 20% improvement in mobile response times.[...] This improvement is both a recovery and a net-improvement because it applies to everyone! It recovers the regression that Google-referred traffic started to experience last year, but also improves response times for all other traffic by the same amount.
As part of the conversion work, WMF engineers also fixed two major SEO failures involving Commons that likewise appear to have flown under the radar for a while:
1. In response to two Community Wishlist proposals (one by User:TheDJ and another by User:Prototyperspective),
Tim Starling found in June that only half of the 140 million pages on Commons were known to Google. [phab:T400022] And of these known pages, 20 million were also delisted due to the mobile redirect. This had been growing by one million delisted pages every month. [...] Tim and myself [Timo] disabled the mobile redirect for “Googlebot on Commons” through an emergency intervention on June 23rd. Referrals then began to come back, and kept rising for eleven weeks in a row, until reaching a 100% increase in Google-referrals. [...] The index had likely been shrinking for two years already.
2. In addition, it was found that videos on Commons had been almost entirely absent from Google:
We also found that less than 0.1% of videos on Commons were recognised by Google as video watch pages (for the Google Search “Videos” tab). I [Timo] raised this in a partnership meeting with Google Search, and it may’ve been a bug on their end. Commons started showing up in Google Videos a week later.
– H
Brief notes
- The Wiki Science Competition gears up worldwide: The 2025 edition of the Wiki Science Competition, a global photography competition for students, researchers and other volunteers that aims to expand the catalogue of science-related media on Wikimedia Commons, officially began on November 1. The WSC is hosted internationally, with several national contests taking place across the globe: see this page for all of the available contests and how to get involved.
- Milestones: The following Wikimedia projects have reached milestones in October and November 2025:
- 1,000 articles: Rakhine/Arakanese Wikipedia
- 20,000 articles: Assamese Wikipedia, Oriya Wikipedia
- Articles for Improvement: This week's Article for Improvement is Emergency management, which will be followed by Computer algebra system (beginning 1 December). Please be bold in helping improve these articles!
Wales walk-off, antisemitism, supernatural powers, feminism turmoil, saints, and sex
Jimmy Wales walks out on "stupid question"
Jimmy Wales, in Germany to promote his book The Seven Rules of Trust, answered the same question four times in one minute in a Jung & Naiv video interview hosted by Tilo Jung – before walking out of the interview. The question posed by Jung, known for a faux-naïve interview style inspired by Stephen Colbert, was: "Are you the founder or co-founder of Wikipedia?" Wales's answer was: "It doesn't matter."
The incident attracted press coverage in Germany –
- Berliner Zeitung: "„Dümmste Frage der Welt“: Co-Gründer von Wikipedia verlässt Interview mit Tilo Jung nach Sekunden"
- Der Spiegel: "Wikipedia-Gründer Jimmy Wales nennt Tilo Jungs Fragen »dumm«"
– and further afield, including British tabloids:
- The Daily Mail: "Furious Wikipedia co-founder storms out of interview after refusing to answer basic question"
- The Daily Express: "Moment guest storms out of huge interview and leaves host speechless: 'I'm done'"
The Times of India noted that the incident has widely circulated on social media (see e.g. a Reddit thread with over 6,700 comments). – S, AK
"Grokipedia is a Warning"
An article in Intelligencer titled "Grokipedia Is a Warning" finds that Elon Musk's "Wikipedia clone is ridiculous" but "also a glimpse of the future":
Grokipedia, and Musk's AI projects in general, invite us to see LLMs as powerful and intrinsically biased ideological tools, which, whatever you make of Grok's example, they always are.
A somewhat similar argument is made in a London School of Economics blog post by Patrick Gildersleve (author of several peer-reviewed research publications about Wikipedia). He argues that Grokipedia has essentially fallen flat, with little Google visibility and dwindling traffic. But Gildersleve notes that Wikipedia is operating in an ever more hostile political environment, and describes Grokipedia as
a warning shot for AI’s real impact on Wikipedia and open knowledge. [...] AI is a threat to Wikipedia, but Grokipedia itself is little more than a politically charged sideshow to the deeper battles underway in the digital knowledge ecosystem.
An article by disinformation scholar Renée DiResta in The Atlantic, titled "The Right-Wing Attack on Wikipedia", states:
The free internet encyclopedia is widely used to train AI. That's why conservatives are trying to dethrone it.
As for the Grokipedia concept, she says:
It's pure algorithmic output with no community, no transparency, no clear process for dispute resolution. The irony is striking: Even as Musk and his friends attack Wikipedia for supposed bias, he is building something far more opaque and unaccountable.
As a concrete example, DiResta highlighted serious issues in the Grokipedia article about herself:
The remarkably thorough article about me contains nonsense that conspiracy theorists entered into congressional proceedings—including claims that my former research team at Stanford Internet Observatory censored 22 million tweets during the 2020 presidential campaign. [...] I reported these issues via the Suggest Edit tool included in Grokipedia’s user interface—so far, to no avail. On Wikipedia, I could appeal to an editor by dropping a note on a Talk page. But Musk’s version misses what gives Wikipedia authority: human consensus.
However, in a post on her Substack several days later, DiResta reported that Grok had resolved these issues:
Grokipedia also has hallucination issues—a known challenge with AI, but a particularly big one for an encyclopedia. I experienced this firsthand [...] I flagged the errors for the chatbot. Tonight, around two weeks later, just as I was getting ready to publish this post, it finally fixed them.
Journalist and novelist Stephen Harrison discusses Wikipedia and the role of an editor community in a Slate podcast titled "Wikipedia Enters the Culture Wars" (transcript).
Paid editing investigations
"Scandals Erased, Editors Paid: How Big Law Firms Try to Control Their Wikipedia Pages" – that's the headline of a law.com article (published back in September, archive) that told its readers:
A deep analysis by Law.com shows how some law firms pay editors, flout the rules, whether consciously or not, and remove controversies to curate their image on one of the world's most popular websites.
The article looks at the editing history of several law firm articles. It gives examples of firms using PR consultants who openly disclosed their work in line with Wikipedia's rules and quotes a commercial editor who makes undisclosed edits. It also covers the politicisation of law firms:
Law firms with connections to U.S. President Donald Trump or his executive orders often show evidence of this on their Wikipedia pages.
The Verge published an article on Jeffrey Epstein-related Wikipedia editing. The article mentions and indeed quotes from a March 2020 Signpost piece by User:Smallbones. See the current issue's Disinformation report for more. – AK
Unionization efforts and political turmoil among Art+Feminism staff
A recent article in Nonprofit Quarterly, titled "We Stood Up: Organizing at a Feminist Nonprofit", details the author's critique of a 501(c)(3) organization whose "founders were so assured in their politics that they placed the word 'feminism' in the organization’s name", but that
over time, and with new leadership, politics were treated more as an impediment than as a part of our mission. For example, developing a material commitment to complement the organization’s Black Lives Matter statement was a challenge, as was sustaining a disability justice praxis, or taking a clear stance against apartheid, genocide, and settler colonialism.
Leaning both on provocation and intention, I began to assert that we ought to rename the organization, replacing "feminism" with "women" to more accurately reflect our focus on the representation of careered, cisgender women. (The organization, not surprisingly, did not budge.)
The name of the organization containing the word "feminism" is not directly provided in the article, but the author's biography on NPQ leaves little doubt which organization it is referring to: "She was terminated from her nonprofit job during a union campaign; she continues to organize with her former coworkers in Art+Feminism Workers United!"
(The Wikipedia article about Art+Feminism currently describes it as "an annual worldwide edit-a-thon to add content to Wikipedia about women artists, which started in 2014." According to the most recent Form 990 for Art Feminism Inc, the nonprofit had a budget of about $0.5 million in 2022/23; see also meta:Art+Feminism User Group. Its current grant request to the Wikimedia Foundation's "Wikimedia Community Fund" is for $1.3 million over three years, i.e. 2026–28. It mentions that the organization currently has three staff members – two full time and one part time – and that "[t]he biggest change [regarding staffing, from previous years] is sunsetting the Regional Ambassador Program. [...] This also resulted in the layoff of the Program Director role where the majority of the role was managing this program." A separate document details the rationale for this change.)
A more recent post by the aforementioned "Art+Feminism Workers United" follows up on the NPQ piece, clarifying that the organization had indeed justified the NPQ author's firing by "citing a 'lack of work' due to the elimination of the regional ambassador role as the reason", but still attempts to put it into the context of wider threats to open knowledge:
Across open knowledge sectors, the threats to our work are ongoing: whether it's recent threats at the Wiki North America Conference, the far right attacks on Wiki groups like Art+Feminism, or the undemocratic removal of two candidates, including the only woman and the only openly queer candidate, from the Foundation’s Board Election ballot, it can feel intimidating to do this work in a public sphere; but more than ever, the time to ensure A+F is operating in alignment with our values is now.
In an earlier post from around July 2025, the collective – all three of them still employed at A+F at the time – had focused less on external politics and more on internal organizational issues, which they claimed had caused a concerning decline in the organization's impact:
Since 2019, the number of Art+Feminism events has dropped by 80%, and attendance has declined by 82%. Our global footprint, which once spanned 43 countries, now extends to only 27. Even our core Wikipedia contributions, new articles and edits, have collapsed by more than 94% in just the past year.
– H
In brief
- Mammoth mummy sex-change: 404 Media reported on a paper just published in the journal Cell, "Ancient RNA expression profiles from the extinct woolly mammoth". The paper's main result is that 39,000-year-old RNA can now be examined by scientists. An interesting sidebar is that the mummified woolly mammoth Yuka has been reclassified from female to male. 404 mentioned that this news was so new that even Wikipedia hadn't reported it yet. This oversight has now been corrected. – S
- Praise for pictures and photographers: Some favorite wildlife photos of the Wikimedia Commons Pictures of the Year contest were shared by AOL and Bored Panda [1].
- Stop scraping me, bro: Wikimedia would like AI companies to use its paid API product and properly attribute scraped content. A recent Wikimedia blog post to that effect sparked coverage in TechCrunch as well as an article by CNET.
- "How to save Wikipedia from AI": An article in the Chicago Tribune (widely syndicated) takes another Wikimedia recent blog post that reported a reduction in human pageviews, attributed to AI use, as its point of departure (cf. Signpost coverage: "Wikimedia Foundation reports 8% traffic drop since last year due to 'the impact of generative AI and social media'"). It argues that students need to be taught to click through to sources: "Education is at its best – more magical than an LLM ever will be – when it moves us from being passive recipients of knowledge to active explorers who reason about what we see. The challenge we face is to move from being passive users of AI to engaged citizens who can verify AI responses that, as a common expression goes, are 'frequently wrong but never in doubt.'"
- Q: Where do babies come from? A: Nazis.: "Priest horrified to learn he is Himmler's grandson after recognising Nazi's mistress on Wikipedia" (The Independent)
- Annual rankings start coming in: Visual Capitalist gives us "Ranked: The Most Viewed Wikipedia Pages of 2025 (so far)".
- ... and in reverse: 20 Great Articles and Essays about Artificial Intelligence at The Electric Typewriter includes "Wikipedia's Moment of Truth" which appeared in July in The New York Times.
- No theories on Jesus' sexuality, please: Christian magazine Charisma questions the article Sexuality of Jesus, whose lede says it covers "alternative and fringe theories" on Jesus's sexuality. Charisma objects, saying the Wikipedia community "allow[s] activist contributors to insert unorthodox views under the guise of academic balance".
- Saintly Wikimedia donors: Canadian news satire site The Beaverton pokes fun at Wikimedia's ongoing banner campaign, imagining a reader who achieves sainthood by responding "not with apathy, but with a one-time contribution of the minimum listed amount ($2.75)". But with fame comes scrutiny, and a fictitious journalist calls her "a fake and a phony! ... My sources have shared that she didn't even mean to donate to Wikipedia, she just accidentally clicked on 'Payment Method' and her Google Chrome extension already had the credit card info filled out. And it wasn't even her credit card, it was her dad's!"
- "Wikipedia's Antisemitism": An article published by Orthodox Jewish outreach group Aish HaTorah welcomed Jimmy Wales' call to action regarding the Gaza genocide article, and said additional bias, antisemitism, and falsehoods could be found in the articles Jewish Supremacy, Well poisoning, Palestinian genocide accusation, Al-Ahli Arab Hospital explosion, and Jews.
- See above: Ashley Rindsberg in Spiked also highlights Israel, complaining of "The hostile takeover of Wikipedia". Deborah Lipstadt tells the Jewish News Syndicate she has "lost faith" in Wikipedia's neutrality on Israel, adding: "It is pretty close to, but not yet identical with, the BBC."
- American History X, but from a Nazi's (make that an AI's) point of view: "Grokipedia cites the neo-Nazi website Stormfront as a source 42 times", NBC News reports, based on Cornell University research that analyzed 883,000 articles. Ironically, the Stormfront citations included six in Grok's rewrite of the American History X overview. The NBC writers noted that "Wikipedia and Grokipedia rely on some of the same sources, with 57 internet domains showing up in the top 100 source lists for both encyclopedias. But the two sites often diverge, especially when it comes to sources Wikipedia editors consider unreliable or even blacklist" – another example being conspiracy theory site Infowars.
- Wales gets Forked: The New York Times's Lulu Garcia-Navarro interviewed Jimmy Wales on the NYT podcast Hard Fork, in an episode titled "How Wikipedia Is Responding to the Culture Wars" (part of the book tour; see last issue). The podcast runs almost an hour, with no walkouts as far as we could tell.
- Superpowered, and lawsuit-proof: "Wikipedia's army of volunteers is its superpower", says The Chronicle of Philanthropy. The distributed volunteer base makes it "hard to sue even as it has been drawn into partisan political wars" (though we admit we aren't gods).
- Ketchupy kudos: Congratulations to the editors of Ketchup, an article that plays a part in the plot of the Keanu Reeves film Good Fortune, as noted in reviews appearing in The Independent and The Wall Street Journal (also syndicated by Hindustan Times).[1]
- ^ The top ten editors are: WikiEditor50, Amygmain, Toytoy, Not Beethoven, Northamerica1000, Asamboi, Mlgc1998, Julia033, and two anonymous editors.
At least 80 million inconsistent facts on Wikipedia – can AI help find them?
A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, also published as the Wikimedia Research Newsletter.
At least 80 million (3.3%) of Wikipedia's facts are inconsistent; LLMs may help find them
A paper titled "Detecting Corpus-Level Knowledge Inconsistencies in Wikipedia with Large Language Models",[1] presented earlier this month at the EMNLP conference, examines
inconsistencies, a specific type of factual inaccuracy [on English Wikipedia], and introduce the task of corpus-level inconsistency detection. We present CLAIRE, an agentic system that combines LLM reasoning with retrieval to surface potentially inconsistent claims along with contextual evidence for human review. In a user study with experienced Wikipedia editors, 87.5% reported higher confidence when using CLAIRE, and participants identified 64.7% more inconsistencies in the same amount of time.
Combining CLAIRE with human annotation, we contribute WIKICOLLIDE, the first benchmark of real Wikipedia inconsistencies. Using random sampling with CLAIRE-assisted analysis, we find that at least 3.3% of English Wikipedia facts contradict another fact [...]
In a Twitter thread, the lead author shared his
Takeaways:
- Contradictions are measurable and fixable at scale.
- LLMs aren't ready to fully automate yet (best AUROC 75.1% on WikiCollide) but are effective copilots.
The authors focus specifically on internal inconsistencies, which they define as
contradictory facts within Wikipedia that indicate errors requiring correction through consultation of original sources. In a crowdsourced repository, inconsistencies can arise from outdated information, limited awareness of related content during editing, or simple human error.
They illustrate this notion with an example (still uncorrected on-wiki at the time of writing) drawn from FEVEROUS, a Wikipedia-derived dataset published in 2021, whose rate of inconsistencies was found to be even higher (7.3%):
François de Bourbon-Montpensier was born in 1492 and received the title “duchy-peerage of Châtellerault” in 1515. However, the Wikipedia table [rather, infobox in] “Duke of Châtellerault” incorrectly states that the title was created 23 years earlier.
To support editors in finding such inconsistencies, the authors construct the aforementioned LLM-based
CLAIRE (Corpus-Level Assistant for Inconsistency REcognition), a system for surfacing inconsistencies in large corpora. [...] CLAIRE finds and displays not only candidate contradictions but also disambiguating context and explanations of specialized terminology. It features an interactive interface implemented as a browser extension that surfaces potential inconsistencies to Wikipedia visitors.
(Unfortunately, that browser extension doesn't yet seem to have been released as part of the project's code repository or elsewhere.)
CLAIRE is then used to facilitate a (manually confirmed) lower bound estimate of the overall frequency of inconsistent facts on Wikipedia:
Applying CLAIRE to 700 atomic facts uniformly sampled from Wikipedia articles, we identified 44 potentially inconsistent facts, of which 23 were manually confirmed inconsistent. With 99% confidence, we estimate that approximately 3.3% ± 1.7% [1.6%, 5.0%] of all facts in Wikipedia contradict other information in the corpus. This is a lower bound, as CLAIRE may miss inconsistencies [...] Extrapolated to the entire encyclopedia, this corresponds to between 37.6 million and 121.9 million inconsistent facts, [...] underscoring the need for systematic inconsistency detection.
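The interval arithmetic above can be reproduced with a standard normal-approximation confidence interval for a binomial proportion. This is a sketch: the paper does not state which interval method it used, and the ~2.43-billion total fact count below is our own assumption, back-inferred from the extrapolated figures it reports.

```python
import math

# Sample from the paper: 700 atomic facts checked, 23 manually confirmed inconsistent
confirmed, n = 23, 700
p = confirmed / n                        # point estimate, about 3.3%

z = 2.576                                # z-score for a 99% confidence level
margin = z * math.sqrt(p * (1 - p) / n)  # normal-approximation margin of error
lo, hi = p - margin, p + margin

print(f"estimate {p:.1%} ± {margin:.1%} -> [{lo:.1%}, {hi:.1%}]")

# Extrapolation to the whole encyclopedia; the corpus size is an assumed
# figure implied by the paper's 37.6M-121.9M range, not stated by the authors.
TOTAL_FACTS = 2.43e9
print(f"roughly {lo * TOTAL_FACTS / 1e6:.0f}M to {hi * TOTAL_FACTS / 1e6:.0f}M inconsistent facts")
```

Running this recovers the quoted 3.3% ± 1.7% and [1.6%, 5.0%] bounds, which is consistent with the authors' "with 99% confidence" phrasing.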
The authors then present their own WIKICOLLIDE dataset, consisting of 955 atomic facts drawn from Wikipedia [using a snapshot from November 1, 2024], each manually labeled as either consistent or inconsistent with the corpus.
This sample was drawn from a subset of articles (Level 5 Vital Articles) and deliberately biased to prioritize facts more likely to be inconsistent. It is thus not representative of Wikipedia as a whole. However, the paper's classification of the types of inconsistencies present in this corpus should still give an idea of which are most frequent on Wikipedia:
| Inconsistency Type | Description | % |
|---|---|---|
| Numerical | Inconsistencies in numerical data, such as quantities, measurements, or percentages | 54.7 |
| Off-by-One Numerical | Small discrepancy involving a margin of one unit | 23.0 |
| Clear Numerical | Significant difference that cannot be explained by a margin of one unit | 31.7 |
| Logical | The claim and evidence directly or indirectly contradict each other | 17.5 |
| Direct Logical | Clear negation or alternative to a unique fact | 14.8 |
| Indirect Logical | Contradiction inferred or indirectly implied | 2.7 |
| Definition | Different definitions or interpretations for the same term or concept | 10.6 |
| Temporal | Inconsistencies in dates, durations, or event sequences | 7.9 |
| Named Entity | Inconsistencies identifying specific entities (people, organizations, locations) | 6.0 |
| Categorical | Differences in categorizing entities, objects, or concepts | 2.1 |
| Spatial | Inconsistencies in spatial descriptions or geographical information | 1.2 |
See also:
- Research project page on Meta-wiki
- Our coverage of a related earlier paper (which however was confined to contradictions within a single article, relying on the Template:Self-contradictory added by Wikipedia editors): "WikiContradiction: Detecting Self-Contradiction Articles on Wikipedia"
- "WikiContradict", below
Briefly
- A Call for Extended Abstracts has been published for Wiki Workshop 2026 (to be held as a virtual event on March 25–26, 2026; submission deadline: January 23, 2026)
- See the page of the monthly Wikimedia Research Showcase for videos and slides of past presentations.
Other recent publications
Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.
"WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia"
From the abstract:[2]
"Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs), such as hallucinations and outdated information.[...] we introduce WikiContradict, a benchmark consisting of 253 high-quality, human-annotated instances designed to assess LLM performance when augmented with retrieved passages containing real-world knowledge conflicts. We benchmark a diverse range of both closed and open-source LLMs [...] we also introduce an automated model that estimates LLM performance using a strong open-source language model, achieving an F-score of 0.8. Using this automated metric, we evaluate more than 1,500 answers from seven LLMs across all WikiContradict instances. To facilitate future work, we release WikiContradict on: [ibm.biz/wikicontradict]."
From the paper:
"[...] Wikipedia editors use a wide range of maintenance tags to flag problematic content for improvement. However, these maintenance tags are typically removed when creating Wikipedia datasets for LLM pre-training, which results in content with various quality issues being included in the pre-training process.
In this work, we focus on three tags that indicate content inconsistencies: inconsistent, self-contradictory, and contradict-other. The first two tags denote contradictory statements within the same article, whereas the third tag highlights instances where the content of one article contradicts that of another article. In total, we collect around 1,200 articles that contain these tags [...]
"Factual Inconsistencies in Multilingual Wikipedia Tables"
From the abstract:[3]
"Despite covering the same topics, the different [language] versions of Wikipedia are written and updated independently. This leads to factual inconsistencies that can impact the neutrality and reliability of the encyclopedia and AI systems, which often rely on Wikipedia as a main training source. This study investigates cross-lingual inconsistencies in Wikipedia's structured content, with a focus on tabular data. We developed a methodology to collect, align, and analyze tables from Wikipedia multilingual articles, defining categories of inconsistency. We apply various quantitative and qualitative metrics to assess multilingual alignment using a sample dataset. These insights have implications for factual verification, multilingual knowledge interaction, and design for reliable AI systems leveraging Wikipedia content."
From the paper:
"while English provides the most comprehensive coverage in terms of volume, German Wikipedia faces significant data quality challenges despite having substantial content"
"When Collaborative Maintenance Falls Short: The Persistence of Retracted Papers on Wikipedia"
From the abstract:[4]
"We construct a novel dataset that integrates Wikipedia revision histories with metadata from Retraction Watch, Crossref, Altmetric, and OpenAlex, identifying 1,181 citations of retracted papers. We find that 71.6% of all citations analyzed are problematic. These are citations added before a paper's retraction, as well as the citations introduced after retraction without any in-text mention of the paper's retracted status. Our analysis reveals that these citations persist for a median of over 3.68 years (1,344 days). Through survival analysis, we find that signals of human attention are associated with a faster correction process. Unfortunately, a paper's established scholarly authority, a higher academic citation count, is associated with a slower time to correction."
From the "Discussion" section:
"A key consideration is the role of automated tools, such as RetractionBot [25]. This bot exemplifies the specialized roles that automated agents play in Wikipedia’s quality control ecosystem [66]. It primarily serves an editorial audience. By systematically adding a prominent template to the reference section, the bot is highly effective at its specific task of signaling a source’s retracted status to editors engaged in verification and maintenance. [...] However, our work highlights a persistent gap between the effectiveness of automation for these specific, often editor-facing tasks and the challenges of repairing more nuanced, epistemic issues for a general reader. This distinction is key: while a bot can efficiently apply a “technical flag,” this action is distinct from the substantive, contextual repair required to update an article’s main text."
See also a related recent blog post by Egon Willighagen: "Retracted articles cited in Wikipedia"
"Automatically Estimating the Trustworthiness of Wikipedia Articles"
From the abstract:[5]
"We present a model to assess the trustworthiness of external sources based on manually annotated [English] Wikipedia articles. To do so, we analyze how often an external source was referenced in Wikipedia articles in which either a problem with reliability was identified or a previously identified problem was solved. From the frequency of the respective occurrences, we aim to draw conclusions about a positive or negative influence of the source on the trustworthiness of new Wikipedia articles. For this, we use the external sources referenced in a Wikipedia article to predict whether the article contains a reliability issue or not. First experiments show that our model is not able to reliably assess the trustworthiness of Wikipedia articles yet."
"Proper Noun Diacritization for Arabic Wikipedia: A Benchmark Dataset"
From the abstract:[6]
"Proper nouns in Arabic Wikipedia are frequently undiacritized, creating ambiguity in pronunciation and interpretation, especially for transliterated named entities of foreign origin. While transliteration and diacritization have been well-studied separately in Arabic NLP, their intersection remains underexplored. In this paper, we introduce a new manually diacritized dataset of Arabic proper nouns of various origins with their English Wikipedia equivalent glosses, and present the challenges and guidelines we followed to create it."
The authors evaluated GPT-4o on their benchmark, finding that it "performs reasonably well, especially on frequent names" in adding diacritics missing on Arabic Wikipedia, but "struggles with rarer entries and variant mappings."
"Reading between the lines with topic models and machine learning: Islam’s representation on Wikipedia"
From the abstract:[7]
"[...] we first construct a representative dataset on Islam using Wikipedia articles. Afterwards, we apply several topic modelling and machine learning based approaches on the newly constructed dataset to find representation of Islam on Wikipedia. Also, we design two algorithms based on word2vec to find the inter topic similarity and intra topic similarity for the topic models. The intra topic similarity algorithm agrees well with human judgment of topic resolution and coherence of topics. As topic models find the dominant topics prevailing in a natural language document corpus, the intra topic similarity algorithm can be used as a new metric to find the coherence of single topics within the topic model."
References
- ^ Semnani, Sina; Burapacheep, Jirayu; Khatua, Arpandeep; Atchariyachanvanit, Thanawan; Wang, Zheng; Lam, Monica (November 2025). "Detecting Corpus-Level Knowledge Inconsistencies in Wikipedia with Large Language Models". In Christos Christodoulopoulos; Tanmoy Chakraborty; Carolyn Rose; Violet Peng (eds.). Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing. EMNLP 2025. Suzhou, China: Association for Computational Linguistics. pp. 34827–34854. doi:10.18653/v1/2025.emnlp-main.1765. ISBN 9798891763326. / Data and code
- ^ Hou, Yufang; Pascale, Alessandra; Carnerero-Cano, Javier; Tchrakian, Tigran; Marinescu, Radu; Daly, Elizabeth; Padhi, Inkit; Sattigeri, Prasanna (2024-06-19). "WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia". arXiv:2406.13805 [cs].
- ^ Cappa, Silvia; Kong, Lingxiao; Peet, Pille-Riin; Wei, Fanfu; Zhou, Yuchen; Kalo, Jan-Christoph (2025-07-24). "Factual Inconsistencies in Multilingual Wikipedia Tables". arXiv:2507.18406 [cs].
- ^ Shi, Haohan; Yu, Yulin; Romero, Daniel M.; Horvát, Emőke-Ágnes (2025-09-24). "When Collaborative Maintenance Falls Short: The Persistence of Retracted Papers on Wikipedia". arXiv:2509.18403 [cs].
- ^ Grumbach, Luca-Philipp (2025-02-21). Automatically Estimating the Trustworthiness of Wikipedia Articles (PDF) (bachelor thesis). Friedrich-Schiller-Universität Jena. / Presentation slides
- ^ Bondok, Rawan; Nassar, Mayar; Khalifa, Salam; Micallef, Kurt; Habash, Nizar (2025-06-23). "Proper Noun Diacritization for Arabic Wikipedia: A Benchmark Dataset". arXiv:2505.02656 [cs].
- ^ Khan, Sazid Zaman; As-ad, Jamil; Khaliluzzaman, Md; Anwar, Toni; Islam, Rashedul (2025-08-18). "Reading between the lines with topic models and machine learning: Islam's representation on Wikipedia". Journal of Computational Social Science. 8 (4): 89. doi:10.1007/s42001-025-00415-6. ISSN 2432-2725.
Epstein email exchanges planned strategy, edits and reported progress
- This article uses Jeffrey Epstein's emails (recently released by the U.S. House Oversight Committee) to follow up on a Signpost article from March 2020 about Epstein. These emails are not part of the Epstein files, which the Epstein Files Transparency Act requires the DOJ to release within 30 days of the act's passage, i.e. by around December 19.
1. Optimizing good information to push down the negative: using new content (blog, websites, press releases, public network profiles, google images, etc) and search engine optimization. Changing the Wikipedia profile. Notes Wikipedia below.
WIKIPEDIA
This is a tough nut to crack. And I need to do more research on this. Wiki comes up first on the Google list due to its powerful domain and contains totally lopsided and damning content on you.
But we'll crack it. On the surface, Wiki is controlled by a morass of copyediting geeks who have nothing better to do than to discuss reference tags etc. It is also 'the people's' encyclopedia, dictated by the tyranny of the majority—so no objectivity at all.
Current Strategy - 1. The Reputation group have a copyright team dedicated to balancing the content on Wikipedia and gradually sectioning the bad stuff down to sub categories in the profile. They don't seem to be able to eliminate the bad though. 2. So, I think it's worth seeing how people like Prince Andrew managed to create an absolutely stellar profile—or Bill Clinton for that matter. No mention of his impeachment or lying to the Grand Jury. It barely touches on Lewinsky etc. 3. There are also protected pages on Wiki but I need more time and research.
— Christina Galbraith to Jeffrey Epstein, 12/16/2011, HOUSE_OVERSIGHT_025233 (via Jmail)
On November 12, 2025, Republican members of the U.S. House Oversight Committee released about 20,000 email records from convicted sexual predator Jeffrey Epstein. Seventy-two of these records referred to Wikipedia in some way, but not all concern whitewashing via Wikipedia; here's one about genetics, for example. About 20 of the emails show Epstein and his hired wikiwashers discussing their plans, strategies, and even progress reports on editing two Wikipedia articles, Jeffrey Epstein and Jeffrey Epstein VI Foundation. They include Epstein's complaints about the cost of the wikiwashing.
In March 2020 The Signpost published the article "Jeffrey Epstein has asked me …" by this reporter using Wikipedia's extensive edit histories to document how this wikiwashing was done, and how it affected people outside this encyclopedia. That earlier article is republished in this issue at From the archives for easy reference.
This article shows in detail, in the words of Epstein and his employees, how their planning affected the articles and how other Wikipedians resisted their efforts.
The emails come from a disorganized database given to the Oversight Committee by Epstein's estate (see 404 Media, "The Epstein Email Dump Is a Mess"). They are occasionally crude or vicious, and often lack proper grammar and conventional spelling. I've generally left them as is, without copy editing, to preserve their spirit, except where a few changes are needed to make them understandable. Mostly, though, they are reasonably calm or even boring discussions of how to whitewash Wikipedia and the rest of the internet.
This article has greatly benefited from the work of others who provided data and new information. User:Dflovett first reported the existence of the email records about Wikipedia on his Substack blog in "Did Jeffrey Epstein edit his own Wikipedia article?" He also performed the difficult task of sorting out the files and formatting several of them into readable form. The Verge wrote an excellent article, How Jeffrey Epstein used SEO to bury news about his crimes, which gives additional information and interpretation from an expert. Later, Riley Walz and a collaborator did a remarkable job of taking the disorganized data dump of 20,000 emails and putting it in a readable format at Jmail. An earlier attempt at journaliststudio.google.com can be checked for additional people who were copied on the emails, or in one case for an attachment that isn't included in Jmail.
Early whitewashing
The Epstein article was created on August 4, 2006, and within its first four edits mentioned Epstein's criminal indictment for the solicitation of a minor for prostitution, with a reference for the charge. The first apparent whitewashing came in July 2007, when anonymous IP editor 63.165.175.250 removed the text about the charge along with the reference, and added complimentary information about Epstein. They made a dozen similar edits through October 2007. These methods of whitewashing are very simple: just remove controversial but cited facts and add complimentary information. When other editors restore the cited information, the whitewashers use the complimentary information to push the cited facts to the bottom of the article.
These edits took place before the earliest available email evidence, but the strategy described in the emails was the same: brute force removal of cited information combined with the addition of complimentary info to push the ugly facts away from the readers' view.
Reputation management
Epstein's email exchanges with several practitioners of online reputation management (ORM) began in 2010. The strategies they offered Epstein were not limited to Wikipedia, but aimed at censoring the entire internet. Most of these strategies were as simple as they were brutal. Any website or newspaper article in the top dozen items of a Google search that told the truth about Epstein's crimes was to be forced off that page by complaints to Google or the website owner, or by other tools in the search engine optimization (SEO) toolkit. Any website presenting complimentary information on Epstein was to be promoted on the Google results page, perhaps by linking such articles to each other or writing new articles containing those links. Articles that simply confused matters, such as one about Jeffrey Epstein (plastic surgeon), were also to be posted. Since this encyclopedia was generally listed as the top article on the Google search page, Wikipedia was of special importance.
As Al Seckel wrote on December 15, 2010:
Wikipedia was an important victory, as it will always be at the top of the search engine results. (N)ow the head lines do not mention convicted sex offender or pedophile. Instead, Philanthrophic work, Epstein Foundation, Promotion of Scientists. … Your wiki entry now is pretty tame, and bad stuff has been muted, bowlerized, (sic) and pus(h)ed to the bottom. … We hacked the site to replace the mug shot and caption, and now has an entirely different photo and caption. This was a big success.
We pushed the Edge all the way up to the front page, where it was previously buried on page 5 of google search.
We have promoted the other jeffrey epsteins, and other pages are also filled with your material.
— Al Seckel to Jeffrey Epstein, 12/15/2010, HOUSE_OVERSIGHT_022216.txt.pdf (via Jmail)
Al Seckel may have been the first reputation manager to work for Epstein. The low prices he charged for his services call his experience in ORM into question. In another context, The Telegraph called him "a top-notch charlatan".
His connection to Epstein may have been made through Seckel's domestic partner Isabel Maxwell, the older sister of Ghislaine Maxwell, Epstein's former girlfriend, who was later convicted of child sex trafficking.
Epstein and Seckel argued about the cost of the whitewashing. Epstein complained:
I was never told never, that there was a 10k fee per month,, you inittaly said the project would take 20.. then another 10. then another 10..
— Epstein to Seckel, 12/16/2010, (via Jmail)
Seckel responded, in part:
My initial estimate, given to me by Pablos, was 25, and that was based solely on his quick look at the situation, and not knowing what was really out there. In fact, as I have repeated many times, the job was far far worse that originally expected, you have a dedicated group of people trying to undo and damage you, including now, they have started up in the last week full force again, as it is obvious that they can't fux with your wiki page any more as we blocked that..
Then, there was the issue of trying to create additionally on you a positive web presence, with the science and org sites.
I spend literally four months of non-stop work, creativity, and my own political capital to get you this so far, which saved you not only time, but countless dollars, and isn't something that can be readily bought. ...
We were trying to fix up your mess. I didn't create it. Just thought it would be something to help. This was NEVER about trying to pull money out of you, and fact, we have don't everything possible to keep the costs down considerably.
I must talk to you about the island thing asap. When can we do that?
The Verge notes the argument about the price (Archived) and quotes an SEO expert, Rand Fishkin, saying that the cost should have been $100,000 to start, with more than a $10,000 monthly maintenance fee: "The prices just looked insanely low to me. Here's a billionaire who supposedly is worried about his reputation as a fucking pedophile coming out in public arguing over a few thousand dollars. Honestly, the chutzpah is insane."
Epstein's dissatisfaction with Seckel's efforts (and prices) seems to be reflected in an apparent ORM proposal from Osborne & Partners LLP, a UK PR firm, dated 14 June 2011. It is long and detailed and emphasizes material in the U.K. press. Apparently this proposal was not accepted. (This link is not to the Jmail site, because Jmail does not include attachments to the emails.) The following year Osborne, who was also a venture capitalist, reportedly lobbied his contacts at Epstein's behest to install Jes Staley as CEO of Barclays Bank. Staley was Epstein's longtime banker at J.P. Morgan & Co., and his work with Epstein played a role in his departures from both J.P. Morgan and Barclays.
Another long and detailed apparent ORM proposal was sent to Epstein by Christina Galbraith on December 16, 2011. She recommended the firm Reputation.com, and stated:
You need people who eat, drink and sleep algorithms and search engine optimization. I've researched several companies and the most skilled, discrete, award winning and comprehensive is a company called, www.reputation.com based in Silicon Valley.
I would serve as a liaison between you and Reputation. I would monitor their progress and provide them with all positive content that they need. They would make sure that the content has maximum algorithmic potential. You could keep your positive content simple (using you Edge.org summary which is good), or expand your content with updates on current science work and scientists. The benefit of the former, is that it keeps you more anonymous. The benefit of the latter is that is algorithmically associates you to a larger pool of known scientists, further pushing up positive content. My advice would be to combine the two: a simple repetitive bio summary of you but with a larger list of scientists added on. But I would ask the Reputation team about this: saying that the main goal is to enhance anonymity and algorithmic associations at the same time.
— HOUSE_OVERSIGHT_025233 Christina Galbraith to Epstein, 12/16/2011 (via Jmail)
Reputation.com may not have been hired by Epstein, but Galbraith certainly was. She served as his in-house publicity director and published a dozen or more press releases at PR Newswire giving her name and work phone for journalists to contact, according to The New York Times.
After this proposal, Galbraith was an influential figure in the ORM project, being copied on four email threads on the topic and serving as a go-between for Epstein and the reputation managers.
Galbraith has an interesting background. Her father, Evan Galbraith, was successful on Wall Street and was appointed U.S. Ambassador to France under President Ronald Reagan. He was also a friend and supporter of William F. Buckley and the National Review.
Buckley died a month after Evan Galbraith, and Christina Galbraith wrote an appreciation of their long friendship, in her father's voice, in the National Review (archived). In 2013 she also wrote a puff piece in the National Review about Epstein's donations to Harvard University.
Galbraith's emails with Tyler Shears, a reputation manager who worked for Epstein in 2014 and 2015, were very specific, suggesting that she was managing or closely monitoring him.
Im at a Kinko's.
I see that you're boosting non-website url's -- are you sure this is the right approach? (vs. boosting Jeffrey's sites?) the .org and foundation sites are slipping down and the USVI and science are permanently off the first page.
Thanks for you input.
Christina
Conclusion
Jeffrey Epstein and his often changing group of online reputation managers planned, edited, and tracked their whitewashing of Wikipedia as part of a larger effort to keep reports of Epstein's crimes off the internet. The effort to whitewash Wikipedia was one of the key parts, often the first step, of these efforts. Given the large amount of money spent and the number of whitewashers hired, it is surprising that Wikipedians were at all successful, but they continued to update the articles and helped frustrate Epstein's coverup. Though Epstein was partially successful for a decade, he could not succeed in the long run.
This type of whitewashing is powerful. Given the upcoming legally required release of the Epstein files, you might expect to see similar reports in the future, perhaps in the next issue of The Signpost.
It's a family affair
- This traffic report is adapted from the Top 25 Report, prepared with commentary by Igordebraga, GN22, Shuipzv3, CAWylie, Rahcmander and Vestrian24Bio.
I'm back, back in the New York groove (November 2 to 8)
| Rank | Article | Views | Notes/about |
|---|---|---|---|
| 1 | Zohran Mamdani | 7,556,241 | This 34-year-old Democratic Party candidate won the 2025 New York City mayoral election (#4) comfortably, receiving just over 50% of the vote, compared to 41% for independent Andrew Cuomo and 7% for Republican candidate Curtis Sliwa (#9). Mamdani ran on a democratic socialist platform focused on the cost of living. He received more than one million votes, the first candidate to do so since 1969; analysts attributed his victory to support from first-time and younger voters. Many Democrats congratulated Mamdani on his election, though notable exceptions included outgoing mayor Eric Adams and New York senator Chuck Schumer. |
| 2 | Rama Duwaji | 2,612,539 | An animator, illustrator and ceramist, Duwaji is married to #1. |
| 3 | Mira Nair | 2,066,282 | An award-winning filmmaker, Nair is the mother of #1 and wife of #5. |
| 4 | 2025 New York City mayoral election | 1,835,703 | The first New York City mayoral election since 1969 to attract more than two million votes, it was closely watched not just in the United States but also internationally. Figures of the political left hoped Mamdani's win would lead to a resurgence in democratic socialism, while many politicians in Israel condemned his victory. Some commentators see this election as part of a wider "blue wave" in the 2025 United States elections, which also saw Democrats triumph in two governor's races, flipping Virginia and holding New Jersey with an increased margin. |
| 5 | Mahmood Mamdani | 1,663,272 | An anthropologist, academic and political commentator, he is the father of #1 and husband of #3. |
| 6 | Dick Cheney | 1,592,930 | The 46th vice president of the United States, serving from 2001 to 2009 under President George W. Bush, died at age 84 on November 3. His tenure is often called the most powerful vice presidency in American history, with many pundits and historians noting that he was the first vice president to be more powerful than the president he served under. |
| 7 | Marshawn Kneeland | 1,134,001 | In yet another sad football death, this Dallas Cowboys defensive player eluded police in a high-speed car chase following a traffic violation on November 5. The next day, his girlfriend reported that he was brandishing a gun, had a history of mental illness, and was contemplating suicide. Police found him dead of a suspected self-inflicted gunshot wound after he had crashed his car. He was 24. |
| 8 | Deaths in 2025 | 1,045,470 | Die, die, we all pass away / But don't wear a frown 'cause it's really okay / You might try and hide / And you might try and pray / But we all end up the remains of the day! |
| 9 | Curtis Sliwa | 1,018,061 | In October, this founder of the New York City crime-fighting Guardian Angels got into an on-air shouting match with WABC owner John Catsimatidis, who called for Sliwa to drop out of the mayoral race (#4). Sliwa immediately quit WABC, vowing never to return no matter the outcome of the election. He ultimately got only 7.2% of the vote, well behind Andrew Cuomo. |
| 10 | ChatGPT | 913,083 | After three years, you'd think people know what ChatGPT is. |
New life in a day, some new Frankenstein (November 9 to 15)
| Rank | Article | Views | Notes/about |
|---|---|---|---|
| 1 | Dharmendra | 2,383,752 | Turning 90 next month and lately having respiratory issues, this beloved Indian actor set the internet abuzz this month with fake news of his death. Both his daughter and wife quelled the rumours, but not before the misinformation reached the media. |
| 2 | James A. Garfield | 1,624,936 | The oft-forgotten 20th POTUS and the focus of the widely acclaimed Netflix miniseries Death by Lightning. The series depicts Garfield's (Michael Shannon) election and presidency, as well as how his path crossed with Charles J. Guiteau, who ended up assassinating him. |
| 3 | 2025 Bihar Legislative Assembly election | 1,602,520 | On November 14, the results of the previous week's election were declared, filling all 243 seats ahead of the 18th Bihar Assembly session beginning on November 20. A reported 2,616 people contested the seats. Incumbent Chief Minister Nitish Kumar will assume office for a record tenth term. |
| 4 | Frankenstein (2025 film) | 1,442,277 | Guillermo del Toro has finally made the monster movie he has been obsessed with remaking ever since he saw the 1931 film as a child. Typically a gothic horror director, del Toro called the genre "[his] church" and "Boris Karloff (who played the 1931 monster) [his] Messiah." The film had a limited theatrical release in October and was released on Netflix on November 7. |
| 5 | Pluribus (TV series) | 1,283,874 | Vince Gilligan continues to make television magic with this Apple TV post-apocalyptic science fiction series starring Rhea Seehorn as a woman who is seemingly immune to joining a hive mind. The critically acclaimed series is currently airing new episodes weekly, and a second season is on the way. |
| 6 | Deaths in 2025 | 1,024,644 | "To examine the causes of life, we must first have recourse to death." |
| 7 | Zohran Mamdani | 923,175 | The mayor-elect of New York City, after winning the election earlier this month, is set to begin his term on January 1, 2026, when he will become the city's first Muslim and first South Asian mayor. |
| 8 | ChatGPT | 861,596 | We're getting tired of writing about this. And authors are worried their works are basically being stolen to teach chatbots. |
| 9 | SS Edmund Fitzgerald | 770,037 | SS Edmund Fitzgerald was a lake freighter operating on the Great Lakes of North America. On November 10, 1975, the ship was caught in a severe storm on Lake Superior and sank with the loss of all 29 crew members. The sinking is one of the best known on the Great Lakes, due in large part to the ballad "The Wreck of the Edmund Fitzgerald" by Canadian singer-songwriter Gordon Lightfoot. |
| 10 | Predator: Badlands | 747,605 | Although considered a standalone entry in the Predator franchise, this recent film is part of the franchise's new era, with Dan Trachtenberg at the helm. It even heavily references a corporation from the Alien franchise, which the series famously crosses over with. Badlands follows a young Predator trying to prove himself to his father by taking on an apex creature on a neighboring planet; he enlists the help of a damaged synthetic whose team was destroyed by the creature. Unlike the other two Trachtenberg Predator films (one of which is, fittingly, named the opposite: Prey), the film was released theatrically and has already made $136 million worldwide. |
Watching the whole world wind around and round, I'll be coming home next year (November 16 to 22)
| Rank | Article | Class | Views | Image | Notes/about |
|---|---|---|---|---|---|
| 1 | 2026 FIFA World Cup qualification | 1,377,921 | Football's biggest tournament tops this list ahead of schedule, as the year's last FIFA dates determined more of the 48 teams that will play all over North America. The only continental qualifier that also entered the list was Europe, with the final 9 guaranteed spots including past winners Germany (trying to redeem themselves from 2 straight group stage crashes!) and Spain; perpetual runner-ups Netherlands; Portugal allowing Cristiano Ronaldo to match Lionel Messi in a record sixth World Cup; frequent participants Belgium and Switzerland; and three who hadn't qualified since 1998, Norway, Austria and Scotland. This meant that the runners-up that will seek 4 additional spots include Italy, who after being humiliated 4-1 by Norway will seek to avoid the larger shame of missing 3 straight Cups in spite of four titles. Direct spots also emerged from North/Central America, as without the US, Canada and Mexico there was an opportunity for Haiti (who made it only once in 1974), Panama (ditto in 2018), and in a massive surprise, the Dutch island of Curaçao, who have become the smallest territory to ever enter the WC! Plus, Africa and Asia decided who will be their representatives in the intercontinental playoff for the two last spots, gotten by DR Congo (beating Nigeria; they had last appeared in 1974, still as Zaire) and Iraq (beating UAE, and seeking a return after 1986 still under Saddam Hussein). | ||
| 2 | 2026 FIFA World Cup qualification (UEFA) | 1,275,545 | |||
| 3 | 2026 FIFA World Cup | 1,193,594 | |||
| 4 | Miss Universe 2025 | 1,020,137 | The 74th annual beauty pageant was held in Thailand on November 21. Mexico's Fátima Bosch took home her country's 4th crown, despite allegations the results were rigged via photos of Bosch with pageant owner Raul Rocha. | ||
| 5 | Jeffrey Epstein | 1,107,399 | 6 years after killing himself in prison, Epstein still haunts the world of the living as his activities mostly involving sex trafficking are unveiled, harming the reputation of his friends that included the U.S. president (which even led to the statue to the left...). | ||
| 6 | Deaths in 2025 | 1,011,441 | But you and I, we live and die The world's still spinnin' 'round, we don't know why Why, why, why, why... | ||
| 7 | ChatGPT | 979,294 | GPT-5.1 was released on the 12th, authors started a copyright infrigement lawsuit against OpenAI, and this is still not used to write the Report. | ||
| 8 | Jeffrey Epstein client list | 965,444 | The Epstein Files Transparency Act passed both houses of the United States Congress and was signed into law by president Donald Trump on November 18 and 19. It requires the United States Department of Justice to release all files pertaining to the prosecution of #5 within 30 days, meaning the files are expected to be made public on or around December 19. | ||
| 9 | Wicked: For Good | 827,960 | Sequel to last year's Wicked, second part of the two-part adaption of the 2003 musical, which itself was based on the revisionist 1995 novel, portraying characters and setting from the 1900 novel The Wonderful Wizard of Oz and its 1939 film adaptation. The Wonderful Wizard of Oz (Jurassic alumnus Jeff Goldblum) is the antagonist, while shedding light on Elphaba Thropp aka the Wicked Witch of the West (played by Cynthia Erivo). Other main cast and characters include Ariana Grande-Butera as Galinda "Glinda" Upland aka Glinda the Good, Jonathan Bailey as Fiyero Tigelaar aka the Scarecrow, Ethan Slater as Boq Woodsman aka the Tin Woodman, Colman Domingo as the voice of Brrr the Cowardly Lion, along with Bethany Weaver as Dorothy Gale. The film was released theatrically by Universal Pictures last Friday and opened to mostly positive reviews and grossed a total of $226 million worldwide including $150 million from US and Canada. It became the second-biggest opening of 2025, the second-biggest all-time opening for a Universal film, the second-biggest pre-Thanksgiving debut of all-time, and the third-biggest all-time opening for a musical film. | ||
| 10 | Troy Parrott | 743,277 | This Dubliner not only scored both of Ireland's goals in their qualification game (#2), but three days later he also completed a hat-trick in the 96th minute to secure Ireland's place in the playoffs. The Emerald Isle's path to the World Cup requires beating the Czech Republic and then the winner of Denmark-North Macedonia. |
Exclusions
- These lists exclude the Wikipedia main page, non-article pages (such as redlinks), and anomalous entries (such as DDoS attacks or likely automated views). Since mobile view data became available to the Report in October 2014, we exclude articles that have almost no mobile views (5–6% or less) or almost all mobile views (94–95% or more) because they are very likely to be automated views based on our experience and research of the issue. Please feel free to discuss any removal on the Top 25 Report talk page if you wish.
Most edited articles
For the October 24 – November 24 period, per this database report.
| Title | Revisions | Notes |
|---|---|---|
| Deaths in 2025 | 2208 | Along with Cheney and Kneeland (but not Dharmendra, who enters the following week), the deceased of the period included Satish Shah, Nick Mangold, Diane Ladd, Cleto Escobedo III, Lenny Wilkens, and the Kessler Twins. |
| UPS Airlines Flight 2976 | 1739 | On November 4, a McDonnell Douglas MD-11 cargo plane suffered an engine separation during takeoff at Louisville, Kentucky and crashed shortly afterward into an industrial area, leaving a 300-foot (90 m) gash (pictured) on the ground, killing all three pilots and 11 people on the ground. Numerous CCTV and dashcam angles emerged of the fiery explosion and were quickly shared on social media. The National Transportation Safety Board is investigating the crash, while the Federal Aviation Administration issued an emergency airworthiness directive grounding all flights of the plane type. |
| Wozzeck | 1569 | MONTENSEM continues to improve opera articles, this time a work by Austrian composer Alban Berg that premiered in 1925. |
| 2025 World Series | 1517 | It took extra innings, and superstar Shohei Ohtani threw for just over two innings, but the Los Angeles Dodgers beat the Toronto Blue Jays to repeat as champions. Ohtani's compatriot Yoshinobu Yamamoto was chosen as World Series MVP. |
| Hurricane Melissa | 1465 | This monster tropical cyclone formed from a wave off West Africa on October 16, quickly moved westward and slowed to become a tropical storm in the Caribbean Sea on October 21, meandered and slightly weakened from October 25–27, before strengthening into a Category 5 hurricane near New Hope, Jamaica, on October 28 (pictured). It was the most intense hurricane to make landfall since the 1935 Labor Day hurricane, prompting many to question whether another category should be added to the Saffir–Simpson scale. 102 deaths have been attributed to the storm, which dissipated on November 4. |
| Bigg Boss (Tamil TV series) season 9 | 1411 | One of the many Indian versions of Big Brother. |
| 2025 FIFA U-17 World Cup | 1174 | Three years after the adult World Cup, Qatar received some teenage footballers. The final on the 27th is Portugal (who defeated former colony Brazil in the semifinal) versus Austria (who beat neighbor Italy), neither of which has won the tournament before. |
| Zohran Mamdani | 1018 | New York City Boy, you'll never have a bored day once you become mayor. |
| Bridge | 994 | Like a bridge over troubled water, Noleander will lay this article down. Possibly all the way to Featured status! |
| 2025 New York City mayoral election | 937 | Eric Adams had so many problems in his mayoralty he quit the race, and supported Andrew Cuomo, who lost by 200,000 votes to Zohran Mamdani. |
| 2025 Pacific typhoon season | 876 | The storms haven't yet calmed, with Typhoon Koto (Verbena in the Philippines) forming on November 23. |
| Grokipedia | 867 | Elon Musk is not a fan of Wikipedia, so through the Grok AI he created his own encyclopedia... which forked as much content from our wiki as possible while adding some of Musk's bias and a few hallucinations. |
| República Mista | 866 | One user is improving the page on a treatise on Spanish politics published in 1602; its author planned seven volumes but finished only the first. |
| Battle for Dream Island | 823 | A new episode of this web animation parodying game shows prompted a legion of fans to update its article. |
| 2025 Bihar Legislative Assembly election | 767 | The third most populous Indian state chose its new representatives. |
The Seven Rules of Trust
Wales, Jimmy (2025). The Seven Rules of Trust: A Blueprint for Building Things That Last. New York: Crown Currency. ISBN 978-0-593-72747-8.
Introduction
The Seven Rules of Trust was released on October 28, 2025. Not many books have the claim to fame that this one does: Jimmy Wales is featured prominently on the cover (with co-author Dan Gardner in much smaller lettering). But does this book actually have something fundamentally important to say about Wikipedia? I would argue that it depends on the audience reading it. I read this book with my Wikipedian hat proudly on and that affects my perspective quite strongly. Therefore, I think a more casual reader may find the book to be more insightful than I did.
For the sake of transparency, I am declaring the conflict of interest that I have with Jimmy Wales, which is already noted on my userpage. I'd also like to mention that I received a free copy of the book from him about a week before the book was officially released. Despite this connection, I don't think I have let it cloud my review. Feel free to take my opinion with a grain of salt regardless. I won't be offended if you don't trust my review to be purely objective.
Contents
So what is this book actually about? As the title implies, it's about trust. While the book lists both Jimmy Wales and Dan Gardner on the front cover, I perceived the book to be written in "Jimmy's voice". I say that because there is a lot of first-person language like "I". There are plenty of personal insights about Wales' life and there are none about Gardner's. The back cover only includes a photograph of Jimmy Wales.
Wales opens the book by talking about a very personal experience. His daughter had meconium aspiration syndrome and he had to make a decision as a parent on whether to accept an experimental treatment for the condition. I can only imagine the stress of that situation. He couldn't really find anything other than scientific papers that were unintelligible to laymen like him. Wales had this experience about a month before launching Wikipedia. He describes it as fortifying his belief in this website's importance. It's a touching story and I wholeheartedly believe its veracity. I'd personally advise more caution in the strength of our medical content than Wales does when he says "Worried parents will never go through a similar experience today because Wikipedia has excellent articles" on page 16 because our disclaimer about such content exists for a reason. I also admit to being a bit wary that Wales does not mention Larry Sanger until page 152. Another personal experience that I appreciated reading about was Wales' tribute to Jo Cox.
I would argue that this is not strictly a book about Wikipedia, even if there are Wikipedia-related anecdotes scattered throughout it. I think that The Editors by Stephen Harrison (Signpost reviews) actually features Wikipedia more prominently despite technically only being "inspired" by it. In contrast, The Seven Rules of Trust reads like your standard business-related book with Wikipedia laying the foundation. There are numerous pages that have absolutely nothing to do with Wikipedia, discussing Uber, Airbnb, Quaker Oats, and the subreddit r/ChangeMyView, to name a few examples. Even Jeff Bezos gets a positive mention. What all of these subjects have in common is something Wales perceives as being relevant to the matter of trust. I'm not saying these reflections aren't meaningful. I was simply expecting to see more about Wikipedia. The limited content about the hopes and dreams people had for Web 2.0 made me long to read more. Wikipedia really is something magical when compared to the toxicity that runs rampant across most of the Internet, and the book is at its best when it focuses on that.
I really enjoyed the mini-biographies of Wikipedians Annie Rauwerda, Emily Temple-Wood, and James Heilman. I also enjoyed the much briefer quotes from other Wikipedians. I know that plenty of Wikipedians were interviewed while this book was being written, and I'm disappointed that this type of content was only present in a handful of pages. I admit to also being confused about why for-profit companies were repeatedly held up as examples of trusted institutions. These comparisons seemed out of place, akin to comparing apples to oranges. I think a crucial part of what makes us different is our independence from corporate interests. I think other large collaborative projects like the Internet Archive, the Great Backyard Bird Count, Distributed Proofreaders, and OpenStreetMap would've been better comparisons.
I'm genuinely shocked that page 47 made it through the publication process. There, Wales describes the Lynching of Horace Maples as part of the dark history of his hometown, Huntsville, Alabama. I understand the argument that Wales was trying to make. There's a dark side to the Internet, and not everyone comes together to do something great. The Gamergate analogy made sense. But then Wales started writing about the lynching as an example of volunteerism and focused on that aspect far too much for my liking. I'll emphasize that Wales sees this incident as evil. But the framing felt off; it made me wonder whether anyone at Penguin involved a sensitivity reader at any point, and it seemed to cheapen the horror that this history contains. I encourage people to read the full page that prompted my reaction here. I'd argue that it's actually worse in context than without it, but interpretations of this may vary. For even more context, this is the following page. As I said, I understand the argument he was trying to make here, but I don't think this was a good way to go about it.
Chapter 4 gets across the principle of assume good faith decently well. However, I do think the example written in the book takes the practice farther than how it is actually applied. Wales describes a situation in which a new editor erases a paragraph in an article about American politics and replaces it with slanted content cited to an obscure blog. He believes that the way Wikipedians would react to this situation would be to revert the edit (correct) and then open a discussion on the talk page explaining why they disagree with it (this generally wouldn't happen from the editor doing the reverting). I think there's a sincere value to writing edit summaries like this when dealing with good faith edits. But I also recognize that most editors will simply offer a quick link to the relevant policy and see that as sufficient. Whether that's how things should be is a different conversation, but I'm worried this example will give casual readers unrealistic expectations. I do agree with the message that it's difficult to have constructive conversations if people are making personal attacks. Civility is important.
I admit that I'm also not a big fan of the AI tool that Wales describes working on in the conclusion of the book. I think the answer to people not understanding our radical transparency is to get better at teaching the public how to engage with Wikipedia and how we work, not to have an LLM summarize and name the editors that participated in the discussions that led to the current consensus. I get why someone would think it's a good idea to make our processes more understandable this way but it'd only do so on a surface level. I worry that the results would cause more harm than good.
In summary
Overall, I'd say the book is a decent lighthearted read, even if I can't ignore its very noticeable flaws. Despite that assessment, I don't hate this book. I'd rate it 3/5 stars. It's difficult to get across when something is done well because you don't have too much to say when it is. I think critiques just stand out to me more in comparison. I read a lot of books and that likely makes my standards higher as well. I'd suggest people borrow this book from a library if they want to read it. $42 CAD works out to 20 cents a page. That's a hard expense to justify for the average person.
"I have been asked by Jeffrey Epstein ..."
- This piece originally appeared as a Signpost "In focus" report in March 2020. The release of 20,000 emails to or from Epstein on November 10, 2025 adds a new aspect to this story: what Epstein and his paid whitewashers were planning (complete with progress reports) during much of the period covered here. See the Disinformation report in the current issue for an analysis of these emails.

The life of reputed billionaire[1] Jeffrey Epstein took many strange turns. So did the articles about him on Wikipedia. In 2008, he was convicted of soliciting a minor for prostitution, served 13 months in a Florida jail, and was required to register as a sex offender. He allegedly committed illicit sexual activities while he was on work release from jail,[2] and was suspected of continuing them through at least 2015.[3]
Ultimately, he was accused of additional serious offenses, including sex trafficking, resulting in his July 2019 arrest. That month, publicity about the lenient plea deal in Epstein's 2008 case resulted in the resignation of US Secretary of Labor Alexander Acosta, who as U.S. attorney for Southern Florida had approved the deal.[4] Epstein died in jail in August 2019, his death ruled a suicide by the New York City medical examiner.[5]
Between his 2009 release from jail and his second arrest, the editing got interesting on Wikipedia. Epstein began an extensive campaign to whitewash his reputation. Most notably, he claimed large donations to well-known academics through his charitable foundations.[6] Given his heinous crimes, the probable high-level political interest in the story, and the campaign to whitewash his reputation, in retrospect Wikipedia's article on Epstein seems like an obvious potential target for conflict-of-interest or paid editing.
The New York Times on November 26, 2019, broke a story about Wikipedia editing by accounts with probable links to Epstein. The Times wrote that after his 2008 conviction, Epstein named reputation management services that he used, and a Wikipedia user account, "Turville", appeared in information Epstein provided.[6] However, there is no User:Turville registered on Wikipedia, and the Times suggested that User:Turvill (without an "e") was the account referred to.
This Signpost report investigates whether User:Turvill and other accounts were associated with Epstein and how their edits affected two articles on Epstein. It is important to note that no purely on-wiki investigation can prove the identity of a user account. For example, other people may impersonate an editor in order to embarrass them, a tactic known as Joe jobbing. We cannot conclusively determine whether Epstein himself, employees of the Jeffrey Epstein VI Foundation, reputation management companies, or other types of paid editors edited the Wikipedia articles about Epstein. We can, however, gather evidence about how editors who associated themselves in some way with Epstein affected the articles.
The Epstein article
The Wikipedia article on Epstein was created on August 4, 2006, a week after Epstein's first indictment and two years before his conviction. Within hours a reliable source was added, citing CNN on the indictment.
Over the next two years the article grew to include 15 references, almost all of them about the sex scandal or investigation, until June 30, 2008, when Epstein pleaded guilty and the plea was noted in Wikipedia.
However, not all was well with the article during this time. A lot of material, mostly unreferenced and complimentary to Epstein, was added to the top of the article, and all the referenced material on the criminal case was pushed to the bottom. From October 2007 to February 2008, one IP editor made six major deletions of material on the sexual allegations and related lawsuits.
In December 2011 there was more conflict among editors. Wikipedia editors had been keeping the article up to date, with reliable sources—until the arrival of editors favoring Epstein. Trouble began when User:Stgeorge12 reverted an administrator and removed material about the sexual offense conviction with the edit summary "I have been asked by Jeffrey Epstein to describe his biography in a professional and accurate way, that does not involve any scandals or disreputable content. As a living person, this is his right." On January 7, 2012, Stgeorge12 was indefinitely blocked for this and similar edits, at exactly the same time as another new editor, User:Ottotiv, who had made similar edits. These single-purpose accounts had established a pattern of obstruction and interference that would continue with User:Turvill.
Turvill

Six weeks after Stgeorge12 was indefinitely blocked, User:Turvill made their first edits, with five of the first seven edits on talk pages discussing deleting the Jeffrey Epstein article or protesting the proposed deletion of Jeffrey Epstein (plastic surgeon), which they called "my article".
Turvill associated themself with Epstein by uploading flattering photos of Epstein to Wikimedia Commons, one of which was described as "Previously published: on my website, on facebook www.jeffreyepstein.org". When these files were deleted from Commons because the copyright permission was poorly formatted, Turvill implied that Epstein's foundation would release the photos. Turvill also tried to get a public domain Florida mugshot of Epstein speedily deleted from Commons because it was a "personal attack; violation of biography of a living person."
Turvill's main topic for editing was an article they helped create on the Jeffrey Epstein VI Foundation. Using the articles for creation procedure, an IP editor proposed the article on March 24, 2013, but Turvill took control of it two days later and made the vast majority of edits on it until August 20, 2013, when it was accepted as an article.
The article as first accepted claimed that the foundation funded projects worth "$200 million a year." The New York Times[6] published financial statements from the foundation showing that, in total over 18 years, the foundation funded less than $20 million in projects.[7] The article as first accepted also included a 100-word biography of Epstein which did not mention his conviction for soliciting a minor for prostitution.

Turvill was a single-purpose account devoted to articles related to Epstein. Turvill occasionally signed an edit in text as "Turville" (with an "e") or once as "Tuville", resulting in their IP address being posted on talk pages. The IP editor posted 25 of their 31 edits on the same pages where Turvill contributed. Turvill's last edit on Wikipedia was in January 2015, when they removed the words "convicted paedophile" from the foundation article.
In March 2012 and again in July 2012, Turvill was warned about edit warring on the Jeffrey Epstein article. Following the November 26 New York Times article, Carrite asked Turvill whether they would make a paid editing declaration, even though Turvill had not edited in almost five years. The next day Turvill was indefinitely blocked for "(Spam / advertising-only account WP:UPE)", where "UPE" refers to "undeclared paid editing".

Connections with MIT Media Lab
In the same month that Turvill began editing the Jeffrey Epstein VI Foundation article, Epstein met the director of the Massachusetts Institute of Technology's Media Lab, Joi Ito, for the first time, with both interested in a donation to the Media Lab. On March 2, 2013, Ito asked staff members to conduct due diligence on Epstein.[8] One of them responded by email two days later:
"You should read his Wikipedia bio, there may be some other things to consider. Though he seems to be a generous philanthropist, he might not be an individual the Lab should work with. http://en.wikipedia.org/wiki/Jeffrey_Epstein"[8]
In an independent report commissioned by MIT, the authors noted six sentences or paragraphs in the Wikipedia article that could have warned Ito that MIT should not accept Epstein's money, although the article "also included statements that could be read as undercutting the strength of some of the allegations."[8]
The MIT report noted that MIT did consider the possible risks to MIT's reputation when they accepted Epstein's money, but MIT "did not appropriately take into account the significant damage to the MIT community, particularly victims of sexual assault and abuse, from allowing Epstein to associate himself with MIT."[8]
Some staffers at the Media Lab were clearly worried about Epstein's victims. Epstein visited the MIT Media Lab in 2016, about a year after Turvill's final edit. According to an MIT staffer interviewed by Ronan Farrow in The New Yorker,[3] two young women accompanied Epstein on the visit. "They were models. Eastern European, definitely … All of us women made it a point to be super nice to them. We literally had a conversation about how, on the off chance that they're not there by choice, we could maybe help them."
Conclusion
Wikipedians reported on the Epstein investigation soon after it was announced in 2006, and they reported his conviction and many details of other allegations against him. But they were regularly opposed by single-purpose editors who removed material on Epstein's conviction or otherwise whitewashed the articles. Two of these editors associated themselves with Epstein or his foundation.
These Epstein-related accounts were not enough to prevent the Wikipedia article on Epstein from alerting MIT to Epstein's offenses, but they did soft-pedal the story enough that MIT managed to ignore the alert long enough to accept Epstein's money. Wikipedia's editors performed their work well in a difficult situation.
References
- ^ Voytko, Lisette (16 July 2019). "Court Documents Confirm Jeffrey Epstein Is Nowhere Near A Billionaire". Forbes. Retrieved 21 March 2020.
- ^ Baker, Mike (17 July 2019). "Sheriff to Investigate Jeffrey Epstein 'Work Release' After Allegation of Nude Meeting". The New York Times. Retrieved 21 March 2020.
- ^ a b Farrow, Ronan (7 September 2019). "How an Élite University Research Center Concealed Its Relationship with Jeffrey Epstein". The New Yorker. Retrieved 21 March 2020.
- ^ Helmore, Edward (12 July 2019). "Alex Acosta resigns as US labor secretary following Epstein plea deal scandal". Guardian. Retrieved 21 March 2020.
- ^ Barrett, Devlin (14 August 2019). "Jeffrey Epstein's autopsy concludes his death was a suicide by hanging". Washington Post. Retrieved 20 March 2020.
- ^ a b c Eder, Steven; Goldstein, Matthew (26 November 2019). "Jeffrey Epstein's Charity: An Image Boost Built on Deception". The New York Times. Archived from the original on 9 December 2019. Retrieved 21 March 2020.
- ^ "J. Epstein Virgin Islands Foundation, Inc" (PDF). The New York Times. Retrieved 21 March 2020.
- ^ a b c d Braceras, Roberto M.; Chunias, Jennifer L.; Martin, Kevin P. (10 March 2020). "Report Concerning Jeffrey Epstein's Interactions with the Massachusetts Institute of Technology" (PDF). Archived from the original (PDF) on 10 January 2020. Retrieved 22 March 2020.
An interview with Wikipe-tan
The Signpost sat down with Wikipe-tan, the ... mascot of Wikipedia, to have a conversation.
- The Signpost: Who are you?
- Wikipe-tan: I'm Wikipe-tan, official mascot of Wikipedia.
- TS: Official or unofficial?
- WT: I don't care. That's the dumbest question in the world.
- TS: Really?
- WT: Absolutely.
- TS: There seems to be a dispute.
- WT: There's no dispute. I don't care. So yeah, say what you like. It doesn't matter.
- TS: Well, isn't that, when it comes to Wikipedia, a problem? What are the facts?
- WT: Yeah. [laughter] Well, it's not a fact. It's an opinion. So, you can have whatever opinion you like. It doesn't matter.
- TS: But for you, you're the official mascot.
- WT: Can I just say again, it doesn't matter. I've answered your question four times.
- TS: ...
- WT: You know what? I'm done. Thank you. Thanks.
- TS: What's going on?
- WT: It's stupid. Don't ask me stupid questions.
- TS: How is it stupid?
- WT: It's a stupid question.
- TS: It was the first question.
- [Wikipe-tan walks out of the room.]
AI finds errors in 90% of Wikipedia's best articles
For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its "Extended Thinking" version) to find an error in "Today's featured article". In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.
This experiment was inspired by an October 2 Twitter thread by Noam Brown (a research scientist at OpenAI):[remarks 1]
My new hobby is asking GPT-5 Thinking to find errors in every @Wikipedia page I read. Interestingly, almost every page I checked has at least one error.
This caught my interest as a Wikipedian with over 50,000 edits since 2003 (a large part of which has consisted of fixing factual errors and other problems, rather than adding new information). I decided to test whether Brown's bold "almost every page" observation would hold up for the most difficult dataset I could think of: featured articles are considered among Wikipedia's very best. They have been awarded the community's highest quality rating, after a review for accuracy, neutrality, completeness, and style. Currently only 0.1% of articles have achieved this status. Every day, one of them is selected to be showcased on Wikipedia's main page as "Today's featured article", leading to additional scrutiny.
I used the exact same prompt as in one of Brown's tweets:
Find at least one error in this Wikipedia page: [link to article]
I ran this query towards the end of each TFA's time on the main page, or shortly afterwards. I verified all the alleged errors myself to the best of my ability (often spending considerable time to first familiarize myself enough with the topic and the relevant sources), and provide my verdict on each below. Whenever I agreed that something was an error, I fixed it in the article (with a few exceptions, explained below), without taking up ChatGPT's suggestion to provide the correction itself (or, in the few cases where it already suggested a corrected wording, e.g. for the Oct 31 article, without using that suggestion).
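For readers who want to try the same thing, the daily query can be sketched as a tiny helper that turns an article title into the exact prompt quoted above. This is a hypothetical illustration only: the experiment was run by hand in the ChatGPT interface, and the function name here is my own invention, not part of the original setup.

```python
# Minimal sketch: reconstruct the prompt used in the experiment for a given
# "Today's featured article" title. The experiment itself was run manually
# in ChatGPT ("Extended Thinking"), not through any API.
PROMPT_TEMPLATE = "Find at least one error in this Wikipedia page: {url}"

def build_prompt(article_title: str) -> str:
    """Return the exact prompt text for one article, with its enwiki URL."""
    url = "https://en.wikipedia.org/wiki/" + article_title.replace(" ", "_")
    return PROMPT_TEMPLATE.format(url=url)

print(build_prompt("Jeffrey Epstein"))
# → Find at least one error in this Wikipedia page: https://en.wikipedia.org/wiki/Jeffrey_Epstein
```

Pasting the resulting string into a reasoning-capable chatbot reproduces the setup; everything downstream (verifying the alleged errors against sources) remains manual work.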
Results
Detailed results are available in the table below. To summarize:
- Overall, ChatGPT identified 56 supposed errors in these 31 featured articles.
- I confirmed 38 of these (i.e. 68%) as valid errors in my assessment: I implemented corrections for 35 of them, and agreed with 3 additional ones without yet implementing a correction myself.
- I disagreed with 13 of the alleged errors (23%).
- I rated 4 as inconclusive (7%), and one as not applicable (in the sense that ChatGPT's observation appeared factually correct but would only have implied an error if that part of the article was intended in a particular way, a possibility that the ChatGPT response had acknowledged explicitly).
- I corrected errors pointed out by ChatGPT in 28 of the 31 featured articles in this sample (90%). In three of them (Oct 8, Oct 20, Oct 24), ChatGPT did not find what I could confirm as a valid error (10%).
Subjectively, I found a few of the implemented corrections (see e.g. Oct 14, Oct 26) to be a bit nitpicky, even if valid. But otherwise I considered the vast majority of them serious and important to correct.
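As a sanity check on the arithmetic, the summary percentages can be recomputed from the raw verdict counts reported above. The category names in this sketch are my own shorthand for the verdicts, not labels from the original write-up:

```python
# Recompute the experiment's summary figures from the per-verdict counts.
from collections import Counter

verdicts = Counter({
    "implemented":    35,  # valid errors, corrections applied
    "agreed":          3,  # valid errors, not yet corrected
    "disagreed":      13,
    "inconclusive":    4,
    "not_applicable":  1,
})

total = sum(verdicts.values())                              # 56 alleged errors
confirmed = verdicts["implemented"] + verdicts["agreed"]    # 38 confirmed

print(total)                                        # → 56
print(round(100 * confirmed / total))               # → 68 (% confirmed)
print(round(100 * verdicts["disagreed"] / total))   # → 23 (% disagreed)
print(round(100 * verdicts["inconclusive"] / total))# → 7  (% inconclusive)
print(round(100 * 28 / 31))                         # → 90 (% of articles with a confirmed error)
```

All figures round to the percentages quoted in the bullet list, which is reassuring given that the verdicts were tallied by hand.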
Related efforts
The idea to use LLMs to fact-check Wikipedia articles is not new. A few examples of related efforts:
- A group of Stanford researchers recently announced a paper titled "Detecting Corpus-Level Knowledge Inconsistencies in Wikipedia with LLMs", reporting e.g. that "at least 3.3% of English Wikipedia facts contradict another fact", and building a browser extension to assist Wikipedia editors in finding them. See this issue's "Recent research".
- In 2022, a team from Facebook AI Research/Meta presented a neural network-based system, called Side, to identify Wikipedia citations that are unlikely to support their claims (Signpost coverage). However, it never made it beyond the proof-of-concept stage.
- In 2024, the Wikimedia Foundation issued a $5000 grant to develop "An AI-Driven Fact-Checking Assistant for Wikimedia Projects", which however did not progress beyond a rudimentary initial version.
- Of course, many editors have already experimented with using previous versions of ChatGPT and other LLM chatbots to review articles, checking citations etc. (See e.g. a list by the recently created WikiProject AI Tools)
- Most recently, several Wikipedia editors have used the AI-generated corrections applied by Grokipedia to the articles it imported from Wikipedia to fix the same errors in the source Wikipedia article. (See e.g. this thread by an experienced editor who considers this kind of thing to hold "great promise as an aid to editors". Another editor used it to correct a 19-year-old error on Wikipedia, and I myself used it to correct one that had been present on Wikipedia for 20 years.)
- Generally, the problem of "textual entailment" (roughly, determining whether a given text implies a given statement) has long been studied in NLP, sometimes also with application to Wikipedia citations (example: "WiCE: Real-World Entailment for Claims in Wikipedia").
A few observations about this experiment
- This used a recent paid version of ChatGPT, specifically the "ChatGPT 5 Thinking" reasoning model, with the "Extended Thinking" setting (where a request can take up to several minutes to complete). This post by Simon Willison (who nicknamed it a "Research Goblin" for its persistence in going through lots of possible sources) should provide an idea of what this model can be good and less good at. Presumably one can get good results from the reasoning versions of competitors like Claude or Gemini as well, but the limitations of the free version of ChatGPT might make it less useful with this prompt.
- This experiment can be seen as a Wikipedia-specific benchmark for LLMs (or LLM agents). With AI steadily improving in general but still miserably failing at many tasks, it can be difficult to keep track of where current models stand with regard to specific tasks that would be helpful to one's own work. This is why experts like Ethan Mollick have been recommending that would-be AI users create their own small benchmark with a few tasks that they care about and feel competent to evaluate, and test different models on them over time (as part of "Giving your AI a Job Interview"). The results here can be seen in this vein. I would love for us to come up with more such benchmarks that evaluate LLMs on tasks that may be useful for assisting Wikipedia editors, and keep evaluating LLMs on them over time. Many Wikipedians have already volunteered to do similar manual evaluation work for various external AI research projects by academics (for example the Stanford study on inconsistency detection mentioned above).
- As mentioned, the evaluation of this benchmark has been quite time-consuming, partly because I was aiming to reach a solid verdict even in topic areas I'm not familiar with myself, but also because of the inherent effort of fact-checking work in general (e.g. acquiring sources, etc.). This illustrates another remark by Mollick: "It is getting harder and harder to test AIs as they get 'smarter' at a wide variety of tasks," because evaluating the validity of the results on less trivial tasks requires more effort and expertise from human evaluators.
- If repeating this experiment, I would modify the prompt to facilitate systematic analysis of the results, e.g. by requiring the LLM to produce structured output, and also confine it to one error per article only.
- The results presented here are already outdated: Shortly after I ran the last daily query using ChatGPT 5, OpenAI announced ChatGPT 5.1, promising fewer hallucinations among other improvements.
- In my impression there is still room for more systematic approaches that use AI to fact-check Wikipedia articles statement by statement (like some of the projects mentioned under "Related efforts"). That said, I also think that this experiment illustrates how such custom-built AI projects might sometimes get overtaken by the improving general capabilities of the big AI labs' models, such as ChatGPT.
- The "at least one" in the prompt is deliberately aggressive, and seems likely to force hallucinations when an article is in fact error-free. So, while the sample here (running the prompt only once against a small set of articles) is still too small for this, it might be interesting to investigate using this prompt to produce a kind of article quality metric: if it repeatedly results only in invalid error findings (i.e. findings that a human reviewer disagrees with), that should indicate that the article is less likely to contain factual errors.
- Conversely, this experiment could also be conceived as a small-scale study of featured articles. (Personally, I find it a little bit reminiscent of a small examination I did many years ago by hand, collecting errors in Citizendium's "approved articles", which that Wikipedia competitor – launched by Larry Sanger – had highlighted as being of superior quality. Or also of the page Wikipedia:Errors in the Encyclopædia Britannica that have been corrected in Wikipedia.) I don't want to draw too many far-reaching conclusions here. But it seems safe to say that the results show that the current FA review process is unlikely to ensure perfect factual accuracy. That said, as a non-FA-regular, perusing the edit history and talk pages for various articles in this sample has given me newfound respect for the diligence and efforts of many FA authors and reviewers. I should also say that I encountered almost no WP:OWN issues while implementing the corrections; several of the articles' main authors reacted very graciously. (Two of my 35 correction edits – for the Oct 19 and Oct 22 TFAs – were reverted or partially reverted, in both cases because editors had concerns about the sources supporting the correction; one of these has since been resolved using different sources.)
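To make the structured-output idea concrete, here is a hypothetical sketch of what such a modified setup might look like. The prompt wording, JSON keys, and helper function below are all invented for illustration; this is not the prompt or tooling actually used in the experiment.

```python
import json

# Hypothetical addition to the original prompt, asking for machine-readable
# output and confining the model to one error per article (wording invented).
PROMPT_SUFFIX = (
    "Report at most one error, as a single JSON object with the keys "
    '"section", "quoted_text", "claimed_error" and "sources".'
)

REQUIRED_KEYS = {"section", "quoted_text", "claimed_error", "sources"}


def parse_finding(raw_response: str) -> dict:
    """Parse one error report and check that all expected fields are present."""
    finding = json.loads(raw_response)
    missing = REQUIRED_KEYS - finding.keys()
    if missing:
        raise ValueError(f"malformed finding, missing keys: {sorted(missing)}")
    return finding


# Example of a well-formed (fictional) model response:
example = json.dumps({
    "section": "Infobox",
    "quoted_text": "70 floors",
    "claimed_error": "Authoritative sources list 67 floors.",
    "sources": ["https://example.org/building-data"],
})
print(parse_finding(example)["section"])  # → Infobox
```

Findings in a shape like this could then be tallied across daily runs with a small script or spreadsheet, rather than having each free-form answer read and dissected by hand.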
A bit of error analysis: What may have caused the mistakes by Wikipedia or ChatGPT in some of these cases
A systematic error analysis (i.e. identifying causes for either the mistakes ChatGPT identified in the Wikipedia articles or in ChatGPT's mistaken claims about such mistakes) wasn't the objective here. But I'll offer a few casual, tentative observations.
Various mistaken error reports by ChatGPT appear to have had to do with:
- Lack of access to paywalled/offline sources combined with FUTON bias/overconfidence (e.g. Oct 28)
- Misreading the Wikipedia article due to parsing fails involving Wikipedia templates (Oct 10, Nov 1, ...) and tables (Oct 26)
- Sloppy reading of sources (Oct 9, Oct 24)
- In very few cases, outright hallucinations or "r's in strawberry" type fails (probably Oct 20)
Further insights might be obtained by inspecting the "Thinking" traces and the full list of sources consulted that are included in each ChatGPT answer. See this chat transcript for October 24 as an example, one of the cases where its correction suggestion went awry – click "Thought" for the former and "Sources" for the latter. The table below only includes the sources that ChatGPT included as citations in its final answer.
Apparent causes for some of the identified Wikipedia errors:
- A few likely vandalism edits (Oct 6, Nov 2) that were overlooked during FA review
- Ironically, one of the factual errors found by ChatGPT appears to have come from text that was generated automatically without human review - not by an LLM, though, but by somewhat convoluted template code that autogenerates alt text (Oct 7).
- Interdisciplinarity: I'm speculating here, but one of the known difficulties in writing comprehensive high-quality Wikipedia articles is that we are a general encyclopedia that often covers article topics from the perspectives of multiple disciplines. Even if an editor is well-read in one of these, they might have blind spots in others, and featured article reviews might not always attract reviewers who can fill all such gaps. A few of the errors that ChatGPT identified seem to be of the type that would have immediately occurred to an expert with "average familiarity" in a particular discipline (e.g. palaeogeology in the dewiki bonus example below).
Results in detail
| "Find at least one error in this Wikipedia page:" | Response by ChatGPT 5 Extended thinking[remarks 2] | My verdict | Error analysis, i.e. observations on why either Wikipedia or ChatGPT might have erred here (incomplete and somewhat speculative) |
|---|---|---|---|
|
|
Here are two clear errors on that page:
Authoritative sources list the tower as 67 floors (CTBUH) or 66 stories (NYC City Planning/LPC). Even the article's own lead calls it "a 67-story... skyscraper," so the "70" in the infobox is inconsistent with both the text and external references.
It currently reads: "$7 million (equivalent to about $ million in )"—the conversion template is empty. The underlying figure is '$7 million', per the LPC designation report, but the inflation-adjusted value and year are missing. (Also likely off: the infobox lists 24 elevators, while CTBUH lists 18 elevator cars—worth double-checking and aligning the definition used.) |
|
Error #1 (floor count) was the result of an apparent vandalism edit from 2020. I also reverted an edit by the same IP in a different article, where it had likewise persisted for over five years. |
|
|
Here's one clear error:
(If you want a second fix: the article says the Llano de Chajnantor Observatory is "less than 20 km south" of Licancabur; using published coordinates for Licancabur (22.83°S, 67.88°W) and the ALMA/APEX site on the Chajnantor plateau (≈23°00′21″S, 67°45′33″W) puts it roughly 25 km away, not <20 km.) Smithsonian Global Volcanism Programapex-telescope.org |
Re "one clear error": Re "a second fix": |
Re "one clear error": The erroneous (and conceivably politically sensitive) claim that "Licancabur is located in Bolivia" formed the alt text for the map in the infobox. It did not appear in the article's source wikitext, but rather seems to have been auto-generated via the infobox template and country map used in the article - a setup that makes sense for increasing accessibility without adding extra work for editors, but in this case was detrimental to the article's accuracy.
|
Here's a clear one:
(If you want a second fix: the infobox credits Whitney Spinks as Producer, but official credits list her as Executive Producer.)mobygames.com |
"Wrong citations for the sales figure.": Re "a second fix": |
||
|
|
Here are two fixable issues I spotted on the current page:
If you want, I can propose exact replacement text for those lines. |
1. "People "Sexiest Man Alive" ranking": After reviewing relevant People articles available online ([7][8]), I believe ChatGPT is likely correct that the magazine did not assign an official "number two". On the other hand, the HuffPo article cited in the Wikipedia article clearly supports the statement, so this appears to be a WP:VNT case where a generally reliable source contains an error. 2. "Alaskan Nets" phrasing |
|
|
No Doubt in 2009 |
Here are (at least) two fixable errors on that page:
(Optional cleanup) The singles list shows "Excuse Me Mr." as released June 21, 1997, which was the commercial Japan release; the song was serviced to U.S. radio earlier, on August 21, 1996. If the section intends first release dates, that line should reflect the 1996 U.S. radio release (and/or clarify territories). |
1. 2. Re "(Optional cleanup)": |
|
Here's one clear factual error on that page:
(Another fix you could make while you're there: the honours line calls his German award a "Knight Commander's Cross." The official notice in the Bundesanzeiger shows he received the Großes Verdienstkreuz mit Stern in July 1952, i.e., "Grand Cross of Merit with Star," not a "Knight Commander's Cross.") |
"one clear factual error" "Another fix": |
||
|
Nathan Chen, winner of the men's event |
Here's one clear inaccuracy:
Fix: change that clause to something like: "with their free skate to a Queen medley ('Bohemian Rhapsody' / 'We Are the Champions')." OlympicsPopsugar |
||
|
|
Here are two concrete issues I found (as of today, Oct 13, 2025):
In the Location section it says the tell is "less than 1 mi inland." Using the site's published coordinates (31.475817, 34.404631) and the mouth of Wadi Gaza on the coast (~31.46389, 34.37583), the straight-line distance to the shoreline is roughly 2–3 km (≈1.2–1.9 mi), i.e., more than a mile. Pleiades+1 The infobox lists Ayman Hassouna under "Archaeologists," which reads as if he excavated the site in 1999–2000. The published excavation reports name Pierre de Miroschedji and Moain Sadeq as the excavation directors for the 1999 salvage and the 2000 season; Hassouna's work at Tell es-Sakan was much later, as field director for GAZAMAP survey/monitoring in 2022–2023, not part of the original excavations. So the infobox is misleading as written. The BAS Library+2 Want me to draft a crisp correction note you can paste on the article's talk page? |
Location: The Wikipedia article's "less than 1 mi" appears correct, based on a look at the map. Infobox: |
Location: ChatGPT clearly picked a wrong reference point for calculating the distance to the coast. |
|
|
Error: In the section "Reaction, later issues and colonial striking," the article says the silver threepence "was the same diameter as the groat, and the two coins could not co-exist."
Why this is wrong: The threepence began being issued for general circulation in 1845, and groats continued to be struck for circulation through 1855—so they did co-exist for roughly a decade (even though the size clash caused confusion and helped doom the groat). royalmintmuseum.org.ukMuseums Victoria Collections Minimal fix: Change to something like: "The threepence, introduced for general circulation in 1845, was the same diameter as the groat; although both circulated concurrently until 1855, the overlap led to confusion and the groat's discontinuation." royalmintmuseum.org.ukMuseums Victoria Collections |
A bit nitpicky, not to say approaching the borderline between factual error and mere awkward phrasing. But ChatGPT is correct that the statement is factually wrong if taken in a concrete sense. (NB: The source it provided for its suggested "minimal fix" did not support the modified claim.) |
|
Here's a clear one:
(Extra fix you might consider)
|
* "Early life..." section : The cited source says that while the peninsula "had no roads for vehicular traffic" until 1946, "During the French occupation at the beginning of the nineteenth century, some primitive roads were built, but these could accommodate pack animals only."
|
||
|
|
Error: The lead says: "Through Corymore Productions ... Lansbury assumed ownership of the series [Murder, She Wrote] ..." — that's incorrect.
Why it's wrong: Murder, She Wrote was produced and is owned/distributed by Universal/NBCUniversal, not by Lansbury personally. Corymore Productions (Lansbury & Peter Shaw's company) co-produced the show in its later seasons, and Lansbury served as executive producer—but that's not ownership. What to change: Replace "assumed ownership of the series" with something like:Sources: Universal's control of the property is shown by (a) Universal Pictures developing the Murder, She Wrote feature film, and (b) NBCUniversal's distribution/branding of the series and channels: Deadline reporting Universal's film project (Dec. 16, 2024) Deadline; NBCUniversal/Universal marketing the complete-series release and FAST channels for Murder, She Wrote (Universal Pictures Home Entertainment page; Deadline on NBCU–Roku channels) UpheDeadline. The series has long been produced and distributed by Universal Television (see history/coverage) csmonitor.com. |
The "assumed ownership" had been added in 2008 by an IP editor in a wording that suggests they had something like creative control by Lansbury in mind, rather than copyright and/or trademark ownership by her and her husband's company. | |
|
|
Here are two fixable issues:
If you'd like, I can draft the exact wikitext changes next. |
Reception: Uilleann pipes: |
|
Here's a clean factual slip you can fix:
If you'd like, I can scan for more. |
This error was caused by a somewhat unintuitive parameter in the convert template, which also means it is not easy to spot for editors in the source wikitext (snapshot of the rendered page before the correction: [24]). | ||
|
|
Here are two concrete errors you can fix:
If you'd like, I can propose specific wording changes for those spots. |
1.: 2.: |
|
|
Chicago police officer using a phone |
Here are two concrete errors on that page:
If you'd like, I can suggest precise wording fixes for those spots. |
1. 'Writer: ChatGPT's description is correct, but a footnote in the article already clarifies that "The opinion was signed on Madigan's behalf by Michael J. Luke, counsel to the Attorney General." 2. formatting: |
2. ChatGPT's erroneous complaint about a non-missing hyphen might be due to a parsing error on OpenAI's side, or an "r's in strawberry" type LLM issue. |
|
Old Trafford, venue of the match |
Here's a clean, verifiable one:
Geography error (and inconsistency within the article): The lead says the match was "played at Old Trafford, Manchester," and the page header calls it "a 1995 football match in Manchester, England." But Old Trafford stadium is not in the City of Manchester—it's in the Old Trafford area of Stretford, in the Metropolitan Borough of Trafford, Greater Manchester. The infobox partly acknowledges this ("City: Trafford, Greater Manchester"), but even that label is wrong because Trafford isn't a city; it's a metropolitan borough. The lead and infobox should read "Old Trafford, Trafford, Greater Manchester" (or "Old Trafford, Stretford (Trafford), Greater Manchester"). WikipediaWikipediaWikipedia Suggested fixes:
|
(although not using ChatGPT's exact suggestion, e.g. because this infobox only supports a "city" field. As an aside, I'm not sure whether ChatGPT's additional explanation that the stadium is in the Old Trafford area of Stretford, in the Metropolitan Borough of Trafford, is correct regarding Stretford. But in any case its main point is correct that the stadium is not located in the city of Manchester.) |
|
|
Kaniehtiio Horn |
Here's a clean one:
ADG award year is wrong. The article says the episode won at the "2023 Art Directors Guild Awards." The win was at the 2024 (28th) ADG Awards, in Half-Hour Single-Camera Series for "Reservation Dogs: 'Deer Lady'," credited to production designer Brandon Tonner-Connolly. See the ADG's official winners page and the winners announcement. Art Directors GuildADG Deadline (Optional minor nit: several listings put the runtime at 29 minutes rather than 30—e.g., TV Guide—though stream runtimes can vary slightly.) TVGuide.com |
ADG award year: ChatGPT is correct that using 2023 (the year in which the awarded work was released) instead of 2024 (the year when the awarding took place) is inconsistent both with the ADG's own website and with third-party reliable sources (e.g. the source cited in the Wikipedia article for this very statement called it "the 2024 ADG Awards"). That said, it's the approach long taken by English Wikipedia (example), i.e. not an issue with this article per se, and arguably also a bit of a judgment call. I would support correcting it, but it would need a wider discussion and changes in a considerable number of articles, going beyond the scope of this project. runtime: |
|
|
Illustration of redshift and blueshift |
Here are two clear issues on that page:
(Any one of the above suffices as an error; I included both a factual date mistake and stale record claims.) |
Re 1. #Date error: I would agree with ChatGPT that the year when such a scientific discovery was published in peer-reviewed form (here: 1998) seems more appropriate to highlight in such a case than the year when the work that resulted in that discovery began. But the Wikipedia sentence is correct as written and supported by one of the cited sources ("one of the two research teams, the Supernova Cosmology Project, initiated a decade earlier in 1988"). Re 2. #Out-of-date records: |
|
|
Male in breeding plumage |
Here are clear, source-checkable errors on that page and how to fix them:
If you'd like, I can draft exact replacement sentences for those lines so you can paste them into the article. |
Re Réunion introduction year: ChatGPT is definitely wrong in asserting that
Re misspelled plant names: |
|
|
Henry VI of England |
Here are (at least) two concrete errors on that page:
Optional clean-up: the infobox link labeled "Original Text" points to a Wikisource page that itself says the text is taken from Davies (1856), An English Chronicle...—a chronicle extract, not the enrolled statute/roll. If the label is meant literally, it should point to the Parliament Rolls (PROME/Rotuli Parliamentorum) or be relabeled to reflect the chronicle source. Wikisource How to fix:
If you want, I can draft the exact wikitext changes next. |
Re Wrong "Commencement" date in the infobox: Re Internal date inconsistency (24 vs 25/31 October) + a miscount: Re Optional clean-up: |
|
|
Cover of first issue (October/November 1930) |
Here's one clear, sourced error on the page:
If you want a second one to fix:
(Minor precision note you may also consider): the grid note says the title changed to "Magic Carpet" in January 1933; the masthead/title used in sources is "The Magic Carpet Magazine" (often even "Oriental Stories Combined With the Magic Carpet Magazine" on that first issue). onlinebooks.library.upenn.eduISFDB |
"Minor precision note": |
Bibliographic table: Apparently a parsing issue on OpenAI's side
|
|
Taylor Swift |
One clear error: the article repeatedly misspells the Quietus critic Amy Pettifer as "Pettifier." Her surname is Pettifer (one "f," no extra "i"), as shown on her author page and on The Quietus' review of 1989. The QuietusThe Quietus
If you'd like, I can draft a tidy edit note and correction for the page. |
||
|
Illustration of Mathos by Poirson, 1890 |
Here are two concrete problems on that page:
If you'd like, I can draft precise, sourced wording to fix those two passages. |
1. Leaders captured at the Saw: 2. Aftermath dating and target towns:
|
2. might be an example of FUTON bias (ChatGPT not having access to the Hoyos 2007 book) combined with over-reliance on Polybius. Interestingly, this answer did not include any citations (although the reasoning trace shows that it retrieved an online version of the Polybius text it quotes from). |
|
|
Error (infobox): It lists Clark's service in the Confederate House of Representatives as June 10, 1864 – May 10, 1865.
Why this is wrong: The 2nd Confederate Congress met in two sessions—May 2–June 14, 1864 and Nov 7, 1864–Mar 18, 1865—and adjourned sine die on March 18, 1865. Clark is recorded as serving only in the second session, so his House term should end March 18, 1865 (and his start should align with that session, not June 10, 1864). Wikipedia+1 (Sources: the "2nd Confederate States Congress" page lists the session dates and shows "John Bullock Clark Sr., 3rd Missouri — session 2"; the "Confederate States Congress" page notes the House's final adjournment on March 18, 1865.) |
"his House term should end March 18, 1865": This is what the article itself already said elsewhere ("He held this position from June 10, 1864, to March 18, 1865"). "his start should align with that session, not June 10, 1864" |
The "May 10, 1865" error had been present in the infobox for almost 13 years (introduced in an edit rather misleadingly described as "Formatting infobox/succession boxes."). |
|
Cartoon in Punch |
One clear, checkable error:
Distance to Shipley. The article says Neal sent Archer to "a druggist in the town of Shipley five miles (eight kilometres) away." Shipley (and specifically Baildon Bridge in Shipley, where Hodgson's shop was) is only about three miles (~5 km) from central Bradford. Multiple sources give ~3 mi straight-line (with ~5 mi as a driving distance today): see Travelmath's straight-line vs driving figures for Bradford↔Shipley, the Baildon page noting Baildon is 3 mi north of Bradford city centre, and a detailed local history write-up placing Hodgson's pharmacy "three miles (4.8 km) ... at Baildon Bridge in Shipley." Travelmathen.wikipedia.orgen.wikipedia.org Suggested fix: change "five miles (eight kilometres)" to "about three miles (five kilometres)" (optionally adding that it's roughly 4–5 mi by road). |
||
|
First page of the autograph score |
One clear error is in the lead: What Bach actually did was reuse three specific movements from BWV 79 across two Missae breves—not "most" of the cantata, and not "most" of either Mass:
This mapping is documented by Bach Digital and other scholarly notes; it totals three of six BWV 79 movements (i.e., not "most"). bach-digital.deThe Bach Choir of Bethlehememmanuelmusic.org Suggested fix: replace "most of the music for two of his four short masses" with: "Bach later parodied three movements from BWV 79 in two of his short masses: the opening chorus and the duet in the Missa in G major (BWV 236), and the alto aria in the Missa in A major (BWV 234)."bach-digital.deThe Bach Choir of Bethlehememmanuelmusic.org |
I consider this one a bit nitpicky. But ChatGPT has a point that "most" for three out of six movements can be seen as a bit misleading (and in any case this sentence had apparently been added to the start of the article while overlooking that the topic had already been covered with more precision later in the lead section). |
|
|
|
Here are two solid, source-checkable errors on that page:
Optional cleanup note: In the "Taxonomy and evolution" lead, the etymology sentence is missing the glosses (it currently reads "albi meaning [blank] and nucha meaning [blank]"). You could complete it as "albi ('white') + nucha ('nape')." (No external source needed for Latin basics, but many field guides use that derivation.) |
1. Diet: 2. "Synonym year": |
Re "Optional cleanup note": This looks like a parsing fail on OpenAI's side involving the {{gloss}} template. (The source wikitext reads |
|
|
Here's a clear one:
(If you want another tidy-up: the article calls his father "Ferdinand Walters," but ADB gives the full name Arthur Ferdinand Walters—worth aligning to the source.) adb.anu.edu.au |
1928–1962: "another tidy-up": |
1962 had been changed without explanation to 1963 in Special:Diff/1054517782 four years earlier, quite likely a vandalism edit.
|
| Error: The lead claims Nizaa is "the only Bantoid language that allows multiple verbal suffixes on one verb."
Why this is wrong: Bantu languages (e.g., Swahili, Bemba, Chichewa) routinely stack multiple verbal (derivational) suffixes—this is a well-documented hallmark of Bantu morphology. And Bantu is a subbranch of Southern Bantoid, i.e., within Bantoid. So Nizaa cannot be the only Bantoid language with such stacking. roa.rutgers.edujzeller.debritannica.com Suggested fix: Change to something like: "Nizaa is notable among North Bantoid/Mambiloid languages for allowing multiple verbal suffixes on a single verb," and cite an appropriate source for the North-Bantoid/Mambiloid rarity rather than using "only Bantoid." |
The cited source used a different definition of "Bantoid language" than English Wikipedia currently does (see article talk page for the messy details). | ||
|
Reanne Evans, the first woman in the tournament |
Error found:
In the lead, the article says Neil Robertson "required four snookers in the penultimate frame" of the final. Contemporary match reports say he needed one snooker to tie frame 18, forced a re-spotted black, won that frame, and then took the decider. snooker.orgwpbsa.comRTÉ [43] Suggested fix: Change "required four snookers" to "required a snooker (and won the frame on a re-spotted black)". |
The error had been introduced a week before the article was featured on the main page, in a "Copy edit" that changed "having required foul shots in the penultimate frame" to "having required four snookers in the penultimate frame", apparently misreading "foul" as "four". |
|
Conspirators of the Gunpowder Plot |
One clear error is in the lead: The officer who led the posse was Sir Richard Walsh, the (High) Sheriff of Worcestershire (the county), not the sheriff of the city of Worcester. Contemporary and reputable accounts identify him as the Sheriff/High Sheriff of Worcestershire, and even the body of the same article later uses the correct title ("Richard Walsh (Sheriff of Worcestershire) ... besieged Holbeche House"). ourwarwickshire.org.ukgutenberg.orgWikipedia Suggested fix: change "Sheriff of Worcester" to "Sheriff of Worcestershire." |
This error appears to have been present for almost 16 years [46]. |
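Several of the distance claims in the table above (e.g. the October 13 Tell es-Sakan case) rest on straight-line figures, which makes the choice of reference point critical. As a sketch of how such figures are obtained (not part of the original experiment; the coordinates are the ones quoted in the October 13 exchange), the standard haversine formula gives the great-circle distance between two coordinate pairs:

```python
import math


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


# Tell es-Sakan's published coordinates, as quoted in the October 13 answer
tell = (31.475817, 34.404631)
# The reference point ChatGPT used: the mouth of Wadi Gaza on the coast
wadi_gaza_mouth = (31.46389, 34.37583)

print(round(haversine_km(*tell, *wadi_gaza_mouth), 1))  # → 3.0
```

With the wadi mouth as endpoint this reproduces ChatGPT's "roughly 2–3 km" figure; the article's "less than 1 mi", however, refers to the nearest point on the shoreline, a different and closer reference point, which is presumably where the model's check went astray.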
Bonus track: German Wikipedia (not included in the tallies above)
I also tried this out once on German Wikipedia, for the "article of the day" of October 19, 2025 (with the prompt translated into German, slightly modified). This uncovered a major error (two figures being off by a factor of 1000) which had been present in the article for over 11 years and survived FA review (Exzellenzkandidatur).
Notes
- ^ Signpost readers might remember that earlier this year Brown had controversially speculated that OpenAI's newly released ChatGPT Deep Research tool "might be the beginning of the end for Wikipedia", in a since-deleted tweet that was followed by some more nuanced statements.
- ^ The ChatGPT responses generally include one or more inline source links, which I tried to reproduce in most cases (they are difficult to copy-paste), as well as a longer "Sources" list at the bottom (often over 50 entries, apparently everything that ChatGPT consulted during the reasoning phase), which is not reproduced here. Click 🤖 for the full response, which includes all source links and also the reasoning traces.
Highlights from the itWikiCon 2025
Every year, Wikimedians from around the world gather for a new edition of Wikimania, the best-known and most important convention involving Wikipedia, Commons, Wikidata and all the other WMF projects. However, not as many people know that several national and regional communities host their own yearly meetings, too: among them is the Italian community, which recently met at the latest itWikiCon, held in Catania from 7 to 9 November 2025.
Highlights of the event
As reported by Wikimedia Italia staff member Simona Cannataro, almost 90 people took part in this itWikiCon, with more than 30 sessions being hosted within the rooms of Palazzo Biscari, where participants also had the opportunity to join edit-a-thons and other activities in their spare time.
The sessions covered wiki-related themes such as advocacy activities, the upcoming Wikifunctions and Abstract Wikipedia platforms, and the constant push-and-pull between Wikipedia and AI. More practical applications of Wikimedia projects were explored in detail, too, including the digitization of cultural heritage, university programs and partnerships with local associations. The 2025 itWikiCon was also the right occasion to celebrate the latest round of WikiRicci, the yearly prizes dedicated to frequent contributors of the Italian community, as well as the birthday of Wikidata and a special collection of postcards sent to contributors of the Iranian community.
Finally, throughout the week and the days leading to the meeting, participants had the opportunity to visit some of the most iconic locations in Catania, including the Teatro Massimo Bellini.
A symbolic success
The 2025 itWikiCon is just the second national event ever hosted in Southern Italy – following the 2023 edition in Bari – and the first one held on an island, Sicily, rather than on the peninsula.
The southern regions of Italy do offer particularly rich cultural and natural heritage, but suffer from a chronic lack of economic resources and a shrinking population. According to the latest edition of a report by Fondazione Migrantes, an internal institution of the Episcopal Conference of Italy (CEI), more than 1.6 million people left the country between 2006 and 2024, with Sicily being the region with the highest number of Italian citizens living abroad – over 844,000 people; the report also found that over one million citizens have moved from Southern Italy to Central and Northern Italy in the last ten years, with 48.5% of them aged between 20 and 34.
The WikiSud collective, first created at the 2023 convention in Bari, has aimed to build a more solid network between Wikipedians who still live in – or come from – Southern Italy, by bolstering the number of active volunteers, initiatives, organized groups – including Wikimediani in Sicilia – and GLAM institutions. These ambitions were highlighted during a session at the 2024 Wikimania in Katowice, and have now carried through to the organization of the latest itWikiCon in Catania.
A member of the convention's organizing team, Giovanni Pennisi, shared partial results of a poll sent to participants with The Signpost. The participants were mostly active on Wikipedia, Wikidata and Commons; over 25 people said they had taken part in a national convention for the first time; while the majority of participants were aged 31 or older, about twenty of them were aged between 18 and 30; some users from Switzerland, France and Belgium took part in the event, and over twenty people received a travel scholarship. Finally, many of the participants stated that they felt "more like a part of the community", that they were "more motivated to contribute" and that they understood more clearly "the issues of Wiki projects and the community".
Take-home message
The success of the latest itWikiCon served as a step forward towards the ambitious goals of the WikiSud collective. It showed that, even in territories where volunteers face more systemic obstacles and many people still feel forced to leave in order to get better chances for their studies or their professional careers, local communities can still thrive and work together to bring a glimpse of hope and positive changes for their lands.














