“The journals want the papers that make the sexiest claims. And scientists believe that the way you succeed is having splashy papers in Science or Nature - it’s not bad for them if a paper turns out to be wrong, if it’s gotten a lot of attention.” - Michael Eisen
Last October, the LA Times ran an interesting article in its business section titled “Science has lost its way, at a big cost to humanity,” subtitled “Researchers are rewarded for splashy findings, not for double-checking accuracy.” The result: many scientists looking for cures to diseases have been building on ideas that aren’t even true. Excerpts:
A few years ago, scientists at the Thousand Oaks biotech firm Amgen set out to double-check the results of 53 landmark papers in their fields of cancer research and blood biology. But what they found was startling: only six of the 53 could be proved valid.
“The thing that should scare people is that so many of these important published studies turn out to be wrong when they’re investigated further,” says Michael Eisen, a biologist at UC Berkeley and the Howard Hughes Medical Institute.
Eisen says the more important flaw in the publication model is that the drive to land a paper in a top journal - Nature and Science lead the list - encourages researchers to hype their results, especially in the life sciences. Peer review, in which a paper is checked out by eminent scientists before publication, isn’t a safeguard. Eisen says the unpaid reviewers seldom have the time or inclination to examine a study enough to unearth errors or flaws.
Eisen is a pioneer in open-access scientific publishing, which aims to overturn the traditional model in which leading journals pay nothing for papers often based on publicly funded research, then charge enormous subscription fees to universities and researchers to read them.
But concern about what is emerging as a crisis in science extends beyond the open-access movement. It’s reached the National Institutes of Health, which last week launched a project to remake its researchers’ approach to publication. Its new PubMed Commons system allows qualified scientists to post ongoing comments about published papers. The goal is to wean scientists from the idea that a cursory, one-time peer review is enough to validate a research study, and substitute a process of continuing scrutiny, so that poor research can be identified quickly and good research can be picked out of the crowd and find a wider audience.
PubMed Commons is an effort to counteract the “perverse incentives” in scientific research and publishing, says David J. Lipman, director of NIH’s National Center for Biotechnology Information, which is sponsoring the venture.
Earlier this month, Science published a piece by journalist John Bohannon about what happened when he sent a spoof paper - with flaws that a high school chemistry student could have spotted - to 304 open-access chemistry journals (those that charge researchers to publish their papers but make them freely available to readers). It was accepted by more than half of them.
One that didn’t bite was PLoS One, an online open-access journal sponsored by the Public Library of Science, which Eisen co-founded. In fact, PLoS One was among the few journals that identified the fake paper’s methodological and ethical flaws.
It was the traditionalist Science that published the most dubious recent academic paper of all.
This was a 2010 paper by then-NASA biochemist Felisa Wolfe-Simon and colleagues claiming that they had found bacteria growing in Mono Lake that were uniquely able to subsist on arsenic and even used arsenic to build the backbone of their DNA.
The publication in Science was accompanied by a breathless press release and press conference sponsored by NASA, which had an institutional interest in promoting the idea of alternative life forms. But almost immediately it was debunked by other scientists for spectacularly poor methodology and an invalid conclusion.
To Eisen, the Wolfe-Simon affair represents the “perfect storm of scientists obsessed with making a big splash and issuing press releases” - the natural outcome of a system in which there’s no career gain in trying to replicate and validate previous work, as important as that process is for the advancement of science.
The demand for sexy results, combined with indifferent follow-up, means that billions of dollars in worldwide resources devoted to finding and developing remedies for the diseases that afflict us all are being thrown down a rathole. NIH and the rest of the scientific community are just now waking up to the realization that science has lost its way, and it may take years to get back on the right path.
JC comments: This article raises some important issues, conflates several of them, and then concludes that science has lost its way. Has it?
In thinking about this issue, I find it useful to return to the previous CE post on Pasteur’s quadrant, and the distinction between pure discovery research, use-inspired research, and applied/regulatory research. The arsenic study is arguably pure discovery research, whereas most of the rest of the research (including the deliberately fake paper discussed above) is use-inspired research. It doesn’t really matter outside the scientific community if pure discovery research is incorrect; for example, it is not immediately obvious what adverse societal impacts could follow from being wrong about arsenic and the bacteria in Mono Lake. With cancer research, on the other hand, there are substantial societal and financial impacts involved. A further distinction is between mechanistic research, in which physical/chemical/biological processes are postulated, and epidemiological research, which is fundamentally statistical. Mechanistic flaws are relatively easy to identify, whereas flaws in epidemiological research are much more difficult to identify, and the findings much more difficult to replicate - the toy simulation below illustrates why.
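To see why underpowered statistical studies resist replication, consider a minimal sketch of publication bias (my illustration, not anything from the article): if only “significant” results get published, the published effect sizes are inflated well above the truth, and exact replications at the same sample size mostly fail. The effect size, sample size, and significance threshold below are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, chosen only for illustration.
true_effect = 0.1      # small real effect, in standard-deviation units
n = 50                 # subjects per arm in each (underpowered) study
n_studies = 10_000     # size of the simulated literature

# Two-arm comparison: the estimated effect has standard error sqrt(2/n).
se = np.sqrt(2.0 / n)
estimates = rng.normal(true_effect, se, n_studies)

# Publication filter: only "significant" (z > 1.96) results see print.
published = estimates[estimates / se > 1.96]

print(f"true effect:             {true_effect:.2f}")
print(f"mean published estimate: {published.mean():.2f}")  # inflated ~5x here

# Exact replications at the same sample size: how often significant again?
replications = rng.normal(true_effect, se, published.size)
print(f"replication rate: {(replications / se > 1.96).mean():.0%}")
```

With these assumptions the original design has only about 7% power, so barely one published finding in ten survives an exact replication - broadly consistent with the Amgen experience described above, even before any methodological flaws enter the picture.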
There should be different reward structures for scientists working in the different quadrants: novelty and pushing knowledge frontiers are key for Bohr’s quadrant. In use-inspired research, however, there is tremendous potential to provide a misleading foundation for applied/regulatory research, and this is where I see the biggest problem. Replication/auditing and robustness should be key goals for use-inspired research (and part of the reward system for scientists working on these problems). Unfortunately, scientists are rewarded in a way that makes sense for Bohr’s quadrant, and not so much for Pasteur’s quadrant.
Where does climate research lie in all this? Some elements of climate research are mechanistic, focused on processes, whereas other elements are statistical in nature. As for money being thrown down a rathole in climate research, I argued in the Pasteur’s Quadrant post that taxonomical studies of model-based regional impacts rest on the premise that climate models provide useful information for regional impact studies - and they do not.
And finally, I am a big fan of Eisen’s models for open-access publishing and extended peer review, and I am not a fan of the Nature/Science model with its press releases and press embargoes. Eisen’s model provides the right incentive structure for scientists, whereas the Nature/Science model IMO does not.
So, has science lost its way? I don’t think so, but the Science/Nature publishing model and the way that universities reward scientists provide perverse incentives that do not serve the societally relevant applications of science well.