Can a paper written by AI really pass peer review? The honest answer: it might get further than many people expect, but that does not mean it is strong research. In a 2025 study in Mayo Clinic Proceedings, human reviewers could not reliably distinguish human-written from AI-generated medical manuscripts. In another 2025 study, 78.6% of reviewers did not realize that a manuscript had been written entirely by GPT-4o. (sciencedirect.com)
Still, “looking academic” is not the same as “being correct.” In the GPT-4o study, 42.9% of editors rejected the paper before peer review; of those that reached review, 42.9% of reviewers recommended rejection and 28.6% asked for major revisions. In other words, AI can mimic the style of a research paper yet still fail once experts scrutinize the logic, methods, or claims. Publishers also warn that AI output can be incorrect, incomplete, biased, or padded with fabricated references. (sciencedirect.com)
That is why major publishers now follow a consistent rule: AI may help, but humans must stay responsible. Springer Nature says AI should support human work, not replace human expertise, and any substantive use of AI should be declared. Elsevier likewise requires authors to disclose generative AI use and states that AI cannot be listed as an author. The ICMJE (International Committee of Medical Journal Editors) takes the same position: AI tools cannot be authors because they cannot take responsibility for the accuracy, integrity, and originality of the work. (group.springernature.com)
So, can an AI-written paper pass peer review? In some cases, yes—especially if reviewers look only at smooth English. But real science needs more than fluent sentences. It needs trustworthy data, correct citations, and a human who can stand behind every claim. (sciencedirect.com)