Why Negative Medical Studies Are Good

This is a guest column by Ivan Oransky, M.D., who is executive editor of Reuters Health and blogs at Embargo Watch and Retraction Watch.

One of the things that makes evaluating medical evidence difficult is knowing whether what’s being published actually reflects reality. Are the studies we read a good representation of scientific truth, or are they full of cherry-picked data that help sell drugs or skew policy decisions?

That question may sound paranoid, but rest assured, it’s not. Researchers have worried about “positive publication bias” for decades: studies showing an effect of a particular drug or procedure are more likely to be published than those that don’t. In 2008, for example, a group of researchers published a New England Journal of Medicine study showing that nearly all — 94 percent — of the published antidepressant trials the FDA had used to make approval decisions reported positive results. But when the researchers added in the unpublished trials the FDA had also reviewed, only about half — 51 percent — were positive.

A PLoS Medicine study published that same year found similar results when it looked at what happened after drugs reached the market: Less than half — 43 percent — of the studies the FDA used to approve 90 drugs were published within five years of approval, and those with positive results were more likely to appear in journals.

All of that can leave the impression that something may work better than it really does. And there is at least one powerful incentive for journals to publish positive studies: Drug and device makers are much more likely to buy reprints of such reports. Such reprints are highly lucrative for journals.

As former British Medical Journal editor Richard Smith put it in 2005:

An editor may thus face a frighteningly stark conflict of interest: publish a trial that will bring $100,000 of profit or meet the end-of-year budget by firing an editor.

The editors of many prominent journals, to their credit, have made it mandatory that study sponsors — often drug companies — register all trials. The idea is that, at the very least, regulators will know how many studies began; if not all of them end up published, perhaps the data aren’t as robust as they look. There is even at least one journal, the Journal of Negative Results in BioMedicine, dedicated to publishing negative findings.

Still, it’s a good assumption that many of these studies never see the light of day in journals. After all, Nature published a letter earlier this month titled “Negative results need airing, too.”

A new study in the Annals of Surgery suggests one place reporters can look for them: lower-ranked journals. The study’s authors grouped surgery journals by impact factor — a measure of how often, on average, other studies cite articles in those journals. In the top-ranked journals, 6 percent of studies were negative or inconclusive, compared with 12 percent in the middle-tier journals and 16 percent in the lowest tier. (Of note: The lowest-ranked journal the researchers looked at was still in the top third of surgery journals overall.)
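
For readers unfamiliar with the metric, the standard two-year impact factor used by citation indexes works roughly as follows (this is the general definition, not a calculation taken from the Annals paper itself):

$$
\text{Impact factor}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to articles published in years } Y\!-\!1 \text{ and } Y\!-\!2}{\text{citable articles published in years } Y\!-\!1 \text{ and } Y\!-\!2}
$$

So, for example, a journal whose articles from the previous two years were cited 500 times this year, and which published 200 citable articles over those two years, would have an impact factor of 2.5.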

The authors suggest their results are likely true of more than just surgery journals:

Although these data are based upon analysis of surgical journals, in as much as that group constitutes nearly 18 percent of indexed medical journals, we believe these data may be applicable to other disciplines.

The findings present a bit of a dilemma for journalists. On the one hand, reporters covering studies should probably stick mostly to the highest-ranked journals, where there is competition to publish, and whose studies other researchers are more likely to read and follow. (Put together with positive publication bias, that competition probably explains some of why negative trials end up in lower-ranked journals.) Journal ranking is one of the criteria I use to decide what to cover at Reuters Health.

And the highly ranked journals did a few things better than their lower-ranked peers: They disclosed conflicts of interest among authors more often, and published more randomized controlled clinical trials, considered by many to be the gold standard of clinical evidence. So there are plenty of reasons to focus on such journals.

But reporters should also want to give their readers, listeners, and viewers a complete picture, and reporting on negative studies could mean dipping into lower-ranked journals. Of course, this is just one study, and just as medical practice shouldn’t change based on a single report, neither should journalism. Still, based on this Annals of Surgery study, I don’t see any harm in periodically looking at lower-ranked journals and applying all of my other criteria to them just as I would to any journal. Even sticking to the top third of such journals, as the authors of this study did, would increase my yield of negative results. It seems worth testing.

*This blog post was originally published at Gary Schwitzer's HealthNewsReview Blog*

