Can a study give the same answer when other researchers check it again? A new Nature paper, published on April 1, 2026, looked at this question in economics and political science. The researchers tested 110 recent papers from leading journals that require data and code sharing. They found that more than 85% of published claims were computationally reproducible: when another team used the same data and the same analysis steps, they usually got the same result. (nature.com)
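To make the idea concrete, here is a minimal sketch of what a computational reproducibility check involves. The file name, model specification, published coefficient, and tolerance are all hypothetical stand-ins, not details from the Nature paper; the point is only the logic of "same data plus same steps, same number."

```python
# Minimal sketch of a computational reproducibility check.
# "shared_data.csv", the formula, and PUBLISHED_ESTIMATE are
# hypothetical, not taken from the Nature paper.
import pandas as pd
import statsmodels.formula.api as smf

PUBLISHED_ESTIMATE = 0.42  # coefficient reported in the (hypothetical) paper

df = pd.read_csv("shared_data.csv")           # the authors' posted data
model = smf.ols("outcome ~ treatment + C(year)", data=df).fit()
reproduced = model.params["treatment"]        # rerun the same specification

# "Reproducible" here means: same data + same steps -> same number,
# up to a small numerical tolerance.
tolerance = 1e-6
print(f"published: {PUBLISHED_ESTIMATE}, reproduced: {reproduced:.6f}")
print("reproducible?", abs(reproduced - PUBLISHED_ESTIMATE) < tolerance)
```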
But that is only the first test. The paper also asked about robustness: whether the conclusion still stands when the analysis is changed in reasonable ways. Here the picture was less tidy. In the reanalyses, 72% of statistically significant estimates stayed significant and pointed in the same direction, and the median reproduced effect size was 99% of the published one, essentially unchanged. The authors also found that more experienced teams reported lower robustness, which suggests that small analytical decisions can still change results. (nature.com)
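A robustness reanalysis goes one step further: rerun the same claim under several defensible analysis variants and ask whether the sign and significance survive. The sketch below is an illustrative, multiverse-style version with made-up variable names and specifications, not the procedure the reanalysis teams actually used.

```python
# Minimal sketch of a robustness (multiverse-style) reanalysis.
# The data file, variable names, and specifications are illustrative
# assumptions, not the paper's actual protocol.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("shared_data.csv")

# A few "reasonable" alternative analysis choices for the same claim.
specs = [
    "outcome ~ treatment",                     # bare model
    "outcome ~ treatment + C(year)",           # add year fixed effects
    "outcome ~ treatment + C(year) + income",  # add a control
]

results = []
for spec in specs:
    fit = smf.ols(spec, data=df).fit()
    results.append((spec, fit.params["treatment"], fit.pvalues["treatment"]))

# The claim counts as "robust" in this sense if the estimate keeps its
# sign and stays significant across all the reasonable variants.
sign_positive = results[0][1] > 0
robust = all((c > 0) == sign_positive and p < 0.05 for _, c, p in results)
for spec, c, p in results:
    print(f"{spec:45s} coef={c:+.3f} p={p:.3f}")
print("robust across variants?", robust)
```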
This paper was part of a larger Nature collection on trust in social and behavioural science. Another study in the same collection checked 600 papers from 62 journals published between 2009 and 2018. Only 144 papers, or 24.0%, made their data available for direct reproducibility checks, and 143 of those could actually be assessed: 53.6% were precisely reproducible and 73.5% were at least approximately reproducible. Reproducibility was also higher in economics and political science, in newer papers, and in journals that require data sharing. (nature.com)
And what about doing the study again with new data? That is even harder. A separate Nature paper tried to replicate 274 claims from 164 papers and, by its main measure, succeeded for 49.3% of the papers; replication effect sizes were also much smaller than the originals. So the latest message is not simply "science is failing." It is more useful than that: open data, open code, and careful checking clearly help, but strong-looking results still need to be tested again before we trust them too much. (nature.com)