Can AI really do research and write papers by itself? As of March 25, 2026, the answer is: partly, and only in a narrow area. In Nature, researchers from Sakana AI and their collaborators described The AI Scientist, a system that can generate ideas, run code-based experiments, create figures, and draft a machine-learning paper. In a controlled test with the ICLR 2025 workshop “I Can’t Believe It’s Not Better,” the team submitted three fully AI-generated papers without human modification. One paper received reviewer scores of 6, 7, and 6, ranked in the top 45% of submissions sent for review, and cleared the workshop’s average acceptance threshold. (nature.com)
That sounds dramatic, but it does not mean AI has already become a true scientist. The same Nature paper stresses the limits: only one of the three papers passed, workshops are easier to enter than main conference tracks, and the system still makes many mistakes, including shallow ideas, weak methods, implementation errors, duplicated figures, and hallucinated citations. Nature’s accompanying editorial made a similar point: the striking part was not the paper’s negative result itself, but the fact that an AI system could carry out so much of the research pipeline, raising new questions about how science should be done. (nature.com)
So, is the era of AI researchers coming? Probably yes, but more as “AI co-scientists” than as independent replacements for humans. Nature Portfolio’s policy still says AI tools cannot be authors, because authorship requires responsibility and accountability. Its journals allow limited writing assistance, but substantive AI use must be disclosed, and human researchers remain responsible for the final text. In other words, the future of science may be faster and more automated, yet still deeply human at its core. AI may help discover patterns, test ideas, and draft papers, but people will still need to judge what is original, reliable, and worth believing. (nature.com)