For years, the reference list was treated as the most mechanical part of a paper: tedious, technical, and faintly boring. In 2026, it has become a forensic crime scene. In an April 1, 2026, Nature feature, reporters described how AI-generated “hallucinated” citations — references that look scholarly but do not actually exist or are badly corrupted — are now appearing not only in drafts, but in published research itself. Nature’s analysis, conducted with Grounded AI, suggested that at least tens of thousands of 2025 scholarly publications probably contained invalid references generated by AI; one rough extrapolation put the figure above 110,000, though that estimate is necessarily tentative. (nature.com)
What makes the problem alarming is that it is no longer anecdotal. A January 2026 preprint examining ACL, NAACL, and EMNLP proceedings from 2024–25 found nearly 300 papers with at least one hallucinated citation, with more than 100 such papers in EMNLP 2025 alone. A February 2026 study of four major high-performance-computing conferences reported that 2–6% of 2025 papers contained “mysterious citations,” whereas none of the 2021 proceedings did. A third preprint, also from February 2026 and focused on NeurIPS 2025, identified 100 fabricated citations spread across 53 accepted papers, roughly 1% of the conference’s accepted submissions. (doi.org)
This is more than a bibliographic nuisance. Citations are the load-bearing beams of academic argument: they tell readers what evidence exists, whose work is being built upon, and whether a claim deserves trust. Once fake references seep into journals, conference proceedings, and books, they threaten to create a feedback loop in which later researchers cite ghosts, reviewers miss the fabrications, and the literature slowly becomes harder to verify. That is why publishers are now exploring automated screening tools, and why Springer Nature says that some papers with isolated AI-related reference errors may be corrected, but only if authors can clearly document what happened and show that the rest of the work remains reliable. (nature.com)
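None of the reports describe the screening tools in code, but the most mechanical layer of such a check is easy to picture: ask whether each cited DOI actually resolves to a record. Here is a minimal sketch in Python against Crossref’s public works endpoint; the function names and the injectable `opener` are illustrative choices, not anything taken from a publisher’s actual pipeline:

```python
import urllib.parse
import urllib.request
from urllib.error import HTTPError

CROSSREF_WORKS = "https://api.crossref.org/works/"

def doi_resolves(doi: str, opener=urllib.request.urlopen) -> bool:
    """Return True if Crossref has a record for this DOI, False on a 404.

    `opener` is injectable so the check can be exercised in tests
    without touching the network.
    """
    try:
        with opener(CROSSREF_WORKS + urllib.parse.quote(doi)) as resp:
            return resp.status == 200
    except HTTPError as err:
        if err.code == 404:   # unknown DOI: flag the reference for review
            return False
        raise                 # rate limits or outages should not be read as "fake"

def screen_references(dois, opener=urllib.request.urlopen):
    """Map each DOI to a boolean; False entries need manual verification."""
    return {doi: doi_resolves(doi, opener) for doi in dois}
```

A resolving DOI is only a first filter: a hallucinated reference can attach a real DOI to the wrong authors or title, so a fuller screen would also compare the returned metadata against the citation text, which this sketch does not attempt.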
The emerging consensus is blunt: humans, not chatbots, bear responsibility. Springer Nature’s guidance says authors must verify and reference any AI-assisted output, and the ICMJE likewise states that authors are responsible for accurate citations and that AI-generated material should not be treated as a primary source. The age of effortless machine-written scholarship has arrived; the age of effortless trust, it seems, is over. (group.springernature.com)










