When the Open Science Collaboration published its landmark replication study in 2015, finding that only about thirty-six percent of replication attempts produced statistically significant results, with effect sizes on average roughly half those originally reported, the response split along a predictable fault line. Journalists and critics declared a crisis. Many working scientists pushed back, arguing the study was being misread and that the state of the field was not as dire as headlines suggested.
Both responses were, in their way, wrong — or rather, both were engaging with the wrong question.
What Replication Is For
The replication of experimental findings is not a quality-control check that science failed to perform and is now urgently retrofitting. It is the core mechanism by which scientific claims graduate from interesting results to established knowledge. The fact that many published findings do not replicate is not a malfunction. It is the system working.
The problem is not that findings fail to replicate. The problem is the set of institutional incentives — publication bias toward positive results, career structures that reward novelty over verification, the economics of academic journals — that left replication systematically neglected for decades while the public was encouraged to treat single published studies as reliable guides to action.
The Press Release Problem
This distinction matters enormously for how science journalism is practiced. The standard model — university issues press release, journalist writes article beginning “scientists have discovered,” reader updates beliefs — treats the publication of a study as the terminus of scientific process rather than a point along a much longer arc.
A single study, even a well-designed one, does not establish a finding. It establishes that a finding is worth investigating further. The difference between these two things is not technical or pedantic. It is the difference between informing the public and systematically misleading it.
What Good Science Coverage Looks Like
Responsible science journalism requires engaging with effect sizes, not just directions. It requires distinguishing between studies and bodies of evidence. It requires communicating uncertainty as a feature of honest reporting rather than a disclaimer that undermines the story’s impact. And it requires a willingness to follow up — to report when findings fail to replicate, when meta-analyses revise the picture, when scientific consensus shifts.
This is harder, slower, and less amenable to the headline cycle than the current model. It is also what covering science honestly requires.
We will try to practice it here. When we cite research, we will tell you what kind of evidence it represents. When the picture is uncertain, we will say so — not as a caveat, but as the substance of the reporting.