Most new publications, upon their launch, seek to promote their content as novel, surprising, exciting.
A new journal that began publishing this week does … the opposite of that.
Start with the name: Series of Unsurprising Results in Economics (SURE). The journal publishes papers with findings that are, well, really boring — so boring that other journals rejected them just for being boring. Its first paper, published Tuesday, is about an education intervention that was found to have no effects at all on anything.
But before you close this tab, hear me out. SURE is actually far from boring, even if the papers it publishes are guaranteed to be, as the name implies, unsurprising. In fact, it’s a pretty big deal, and a significant step toward fixing a major problem with scientific research.
SURE exists to fight “publication bias,” which affects every research field out there. Publication bias works like this: Let’s say hundreds of scientists are studying a topic. The ones who find counterintuitive, surprising results in their data will publish those surprising results as papers.
The ones who find extremely standard, unsurprising results — say, “This intervention does not have any effects,” or, “There doesn’t seem to be a strong relationship between any of these variables” — will usually get rejected from journals, if they bother turning their disappointing results into a paper at all.
That’s because journals like to publish novel results that change our understanding of the field. Null results (where the researchers didn’t find anything) or boring results (where they confirm something we already know) are much less likely to be published. And efforts to replicate other people’s papers often aren’t published, either, because journals want something new and different.
That makes sense — but it’s terrible for science. This tendency leads researchers to waste time on analyses that others may have already pursued but never publicized; to torture their data until it yields something publishable when they initially find nothing; and to chase surprising outliers instead of the often mundane reality.
But awareness about this problem is growing. And in response, scientists are trying to build better processes. SURE is one step toward that goal.
How publication bias can mislead us
SURE is an online-only, open-access, no-fee journal. It accepts papers that its independent peer reviewers verify are “high quality” and that were rejected from other economics journals only because their results were statistically insignificant or otherwise unsurprising.
The first paper published, for example, by Nick Huntington-Klein and Andrew M. Gill at California State University, looked at whether informing students about the benefits of taking more credit hours (to improve their odds of graduating) would motivate them to take more classes or finish school sooner. It doesn’t. That’s unfortunate, but now we know, and other researchers can avoid this dead end. And because the null result is now public, it chips away at publication bias too.
Publication bias is often cited as a major factor in the so-called replication crisis in research. We’ve started to look back at old results in fields from medicine to psychology and have found that we can’t reproduce many of the claims in those papers — so they may have been wrong. Scientists are realizing that better methodology is needed across the board to avoid publishing research that gets it wrong.
Here’s how publication bias works: Imagine that 200 scientists go to work on an important question, like, say, which early childhood interventions improve test scores in fourth grade. (I picked that question because there’s a good case that the correct answer is “none of them.”) Most of the researchers find no results. They don’t publish those findings, just sadly call it off and move on to a different research project.
But some of them will get results purely by chance. A common convention is to declare results “statistically significant” if they have a p-value of less than 0.05, meaning that if there were no real effect at all, there would be less than a 5 percent chance of seeing a result at least that extreme by coincidence.
That means that if 200 teams study an intervention that has no real effect, around 10 of them (5 percent of 200) will find a p-value less than 0.05 just by chance. And because those findings are surprising, a bunch of papers will be published identifying promising interventions that do not, in fact, get results.
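You can see this play out in a quick back-of-the-envelope simulation. The sketch below (the group sizes and the simple z-test are invented for illustration, not taken from any of the studies discussed here) runs 200 “studies” of an intervention with zero true effect and counts how many cross the significance threshold anyway:

```python
import random

random.seed(0)

N_STUDIES = 200   # teams studying an intervention with no real effect
N_SUBJECTS = 50   # participants per group in each simulated study
Z_CUTOFF = 1.96   # |z| > 1.96 corresponds to p < 0.05 (two-sided)

def simulated_null_study():
    """One 'study' where the true effect is zero.

    Treatment and control are drawn from the same distribution, so any
    'significant' difference is pure noise. Returns True if the study
    crosses the significance threshold anyway.
    """
    treatment = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
    control = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
    diff = sum(treatment) / N_SUBJECTS - sum(control) / N_SUBJECTS
    se = (2 / N_SUBJECTS) ** 0.5  # standard error of the difference in means
    return abs(diff / se) > Z_CUTOFF

false_positives = sum(simulated_null_study() for _ in range(N_STUDIES))
print(f"{false_positives} of {N_STUDIES} null studies came out 'significant'")
```

Run it a few times and the count hovers around 10 — roughly 5 percent of the studies, exactly as the threshold predicts, even though every single effect is zero.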
That has all kinds of consequences. Using the published research, charities and policymakers might start trying to implement the interventions, and end up wasting money and resources on things that don’t work.
There are more considerations at work here too. Not publishing enough papers can hold back an academic’s career, which makes it hard to just move on when the data comes up empty. Driven by this imperative, some scientists will rerun their numbers, comparing different variables, in search of a statistically significant result that they can then publish. That makes it vastly more likely you’ll get a result you can write a paper about — but it’s intellectually dishonest, and the results will likely be false.
That’s where SURE comes in. If you conducted a rigorous study but journals find your result too boring to publish, SURE will publish it. The hope is that this will fix the incentives for the whole field. More null results will get published, mitigating publication bias. And researchers can get a paper published even when they find null or unexciting results, which should discourage them from scouring their data for unreliable ones.
If SURE works, hopefully it’ll be emulated — economics isn’t the only field that needs it. While exciting results get headlines, it’s the boring results that often do the most to add to our knowledge of the world. Those boring results deserve a journal of their own.