The Rady School of Management at UC San Diego is shown in this photograph from Aug. 23, 2019. Photo by Zoë Meyers/inewsource

Unreplicated papers published in leading psychology, economics and general science journals are often among the most cited in academic research, despite being less likely to be verifiable, according to a study published Friday by UC San Diego’s Rady School of Management.

Published in Science Advances, the paper explores the “replication crisis,” an ongoing methodological problem in which many scientific studies have proved difficult or impossible to replicate or reproduce.

The Rady School’s Marta Serra-Garcia, an assistant professor of economics and strategy, and Uri Gneezy, a professor of behavioral economics, determined that findings from studies that cannot be verified when the experiments are repeated have a bigger influence over time. The unreliable research tends to be cited as if the results were true long afterward, they found.

“We also know that experts can predict well which papers will be replicated,” they wrote. “Given this prediction, we ask, ‘Why are nonreplicable papers accepted for publication in the first place?’”

The possible answer, they said, is that review teams at academic journals face a trade-off: when results are more “interesting,” they apply lower standards regarding their reproducibility.

The link between interesting findings and nonreplicable research can also explain why such work is cited at a much higher rate, according to the authors, who determined that papers that replicated successfully received, on average, 153 fewer citations than those that failed.

“Interesting or appealing findings are also covered more by media or shared on platforms like Twitter, generating a lot of attention, but that does not make them true,” Gneezy said.

Serra-Garcia and Gneezy analyzed data from three influential replication projects that tried to systematically replicate the findings in top psychology, economics and general science journals. In psychology, only 39 of 100 experiments were successfully replicated. In economics, 11 of 18 studies were replicated, as were 13 of 21 studies published in Nature/Science.

Using these findings, the authors turned to Google Scholar to test whether unreplicated papers are cited significantly more often than those that were successfully replicated, both before and after the replication projects were published. The largest gap was in papers published in Nature/Science: nonreplicable papers were cited 300 times more than replicable ones.

When the authors took into account several characteristics of the replicated studies, such as the number of authors, the share of male authors, details of the experiment (location, language and online implementation) and the field in which the paper was published, the relationship between replicability and citations was unchanged.

They also showed that the impact of such citations grows over time. Yearly citation counts reveal a pronounced gap between papers that replicated and those that did not: on average, papers that failed to replicate are cited 16 times more per year. The gap remains even after the replication project is published.

“Remarkably, only 12% of post-replication citations of nonreplicable findings acknowledge the replication failure,” the authors wrote.

The influence of an inaccurate paper published in a prestigious journal can have repercussions for decades. For example, a study Andrew Wakefield published in The Lancet in 1998 turned tens of thousands of parents around the world against the measles, mumps and rubella vaccine because of an implied link between vaccinations and autism, the authors noted. The Lancet retracted the incorrect findings 12 years later, but claims that autism is linked to vaccines persist.

The authors added that journals may feel pressure to publish interesting findings, and so do academics.

“We hope our research encourages readers to be cautious if they read something that is interesting and appealing,” Serra-Garcia said. “Whenever researchers cite work that is more interesting or has been cited a lot, we hope they will check if replication data is available and what those findings suggest.”
