Let's learn from this debacle. Academic publishing is broken

June 1, 2020

Joe Brew and Carlos Chaccour.

If you haven’t yet heard of “Surgisphere”, you will.

And you’ll probably hear this: a mysterious, previously unknown, tiny company claims to have an implausibly large database of extremely sensitive personal health data (this has already happened); it publishes papers based on this secret database in the world’s most prestigious journals (already happened); those papers have a huge impact on global pandemic response policy (already happened); researchers publicly question the database (already happened); newspaper journalists identify discrepancies in the database (already happened); the academic journals investigate and eventually retract the papers entirely (hasn’t happened yet, but probably will).

And when this house of cards (inevitably) comes tumbling down, the biggest loser will be science itself. The journals’ reputations will be irreparably harmed, but the damage will go beyond just that: the whole episode will have confirmed the suspicions of conspiracy theorists, and added juicy material to the ramblings of world leaders who regularly disparage the scientific process, like Donald Trump and Jair Bolsonaro. Many will rightly lose faith in scientific journals as effective “gate-keepers” which protect the public from non-rigorous results. But many will also wrongly lose faith in science as a method.

So many things had to go wrong for this whole debacle to take place. All the usual safeguards (ethical committees, editorial discretion, basic quality checks, peer-review) failed. But this wasn’t just a random malfunction of the academic process - it was a failure that took place precisely because of that process. Academia’s trust in opaque methods (private, pre-publication peer-review), its acceptance of arbitrary gate-keeping (editorial discretion), its infatuation with prestige (it’s worth noting that the first author of both major Surgisphere publications is from Harvard), and its reliance on journals to serve as its quality-filtering mechanism make academia particularly susceptible to trickery.

Lots of people got tricked. And the blame can’t be placed entirely on the trickster - Surgisphere is certainly not a particularly sophisticated operation. There’s plenty of blame left over for those of us who have tacitly accepted (and, through our willingness to pay publication fees and offer review services for free, actively promoted) an antiquated, opaque, wasteful, biased, and unnecessary industry: academic journals.

Academic publishing is broken. Scientists neither need third-party middle-men to “protect” them from bad science, nor are journals apparently even capable of being effective gate-keepers. In fact, the only scientific mechanism which functioned properly in the Surgisphere debacle was post-publication peer review, i.e. open, transparent, collaborative analysis of already published content which prompted hundreds of scientists to (openly, and without going through a journal) publish their critiques.

Peer review is good. But it’s time to decouple it from the opaque, secretive, for-profit industry of academic publishing. If we learn anything from the Surgisphere mess, it should be this: as long as perverse incentives exist in science, perverse behaviors will take place. The question is not how to eliminate them (since that’s impossible), but rather how to identify them. Differentiating good science from bad science requires more openness, less prestige-signaling, and less gate-keeping. It means putting the review process in the hands of more people, not fewer. It means, above all, more transparency.

But you’re probably thinking: if we eliminate the middle-men (academic publishers), how will we know whom to trust? The answer is simple: nobody. Scepticism is a fundamental characteristic of good science. “Trust” is not. The degree to which one should base decisions on publications should correspond with the degree to which those publications provide evidence for their results. Tragically, in the case of Surgisphere, there was an enormous mismatch between (a) the evidence provided and (b) the impact obtained. And one cause of this mismatch was the fact that we (the scientific and public health communities) “trust” journals and institutions, especially those with certain names. We shouldn’t.

Let’s not resign ourselves to repeating this episode every few years. Let’s reflect, and improve. Let’s adapt scientific communication to the 21st century, by ridding ourselves of practices from the paper-era (unaccountable editorial discretion, formatting restrictions, opaque pre-publication review processes, publication fees, paywalls, prestige worship, informal phone calls to chum editors, etc.) and fully embrace radical openness, transparency, and reproducibility. Let’s keep those things which work (post-publication peer review) and overhaul those things which don’t (just about everything else about academic publishing). It’s time to experiment with novel forms of scientific quality control, because the current forms (clearly) are not good enough. The future of science depends on it.
