Better Health: Smart Health Commentary


Why Negative Medical Studies Are Good

This is a guest column by Ivan Oransky, M.D., who is executive editor of Reuters Health and blogs at Embargo Watch and Retraction Watch.

One of the things that makes evaluating medical evidence difficult is knowing whether what’s being published actually reflects reality. Are the studies we read a good representation of scientific truth, or are they full of cherry-picked data that help sell drugs or skew policy decisions?

That question may sound paranoid, but rest assured, it’s not. Researchers have worried about a “positive publication bias” for decades. The idea is that studies showing an effect of a particular drug or procedure are more likely to be published. In 2008, for example, a group of researchers published a New England Journal of Medicine study showing that nearly all — 94 percent — of the published antidepressant studies used by the FDA to make approval decisions had positive results. But when the researchers included the FDA’s unpublished studies, only about half — 51 percent — were positive.

A PLoS Medicine study published that same year found similar results for studies long after drugs were approved: Less than half — 43 percent — of the studies used by the FDA to approve 90 drugs were published within five years of approval, and those with positive results were more likely to appear in journals.

All of that can leave the impression that something may work better than it really does. And there is at least one powerful incentive for journals to publish positive studies: Drug and device makers are much more likely to buy reprints of such reports. Such reprints are highly lucrative for journals.

As former British Medical Journal editor Richard Smith put it in 2005:

An editor may thus face a frighteningly stark conflict of interest: publish a trial that will bring $100,000 of profit or meet the end-of-year budget by firing an editor.

The editors of many prominent journals, to their credit, have made it mandatory that study sponsors — often drug companies — register all trials. The idea there is that at least regulators will know how many studies began; if they’re not all published, perhaps the data aren’t as robust as they look. There is at least one journal, the Journal of Negative Results in BioMedicine, dedicated to such findings.

Still, it’s a good assumption that many of these studies never see the light of day in journals. After all, Nature published a letter earlier this month titled “Negative results need airing, too.”

A new study in the Annals of Surgery suggests one place reporters can go look for them: Lower-ranked journals. The authors of the study grouped surgery journals by impact factor — a measure of how often, on average, other studies cite articles in those journals. In the top-ranked journals, 6 percent of studies were negative or inconclusive, compared to 12 percent in the middle-tier journals, and 16 percent of those in the lowest-tier. (Of note: The lowest-ranked journal the researchers looked at was still in the top third of surgery journals overall.)

The authors suggest their results are likely true of more than just surgery journals:

Although these data are based upon analysis of surgical journals, in as much as that group constitutes nearly 18 percent of indexed medical journals, we believe these data may be applicable to other disciplines.

The findings present a bit of a dilemma for journalists. On the one hand, reporters covering studies should probably stick mostly to the highest-ranked journals, where there is competition to publish, and whose studies other researchers are more likely to read and follow. (Put together with positive publication bias, that competition probably explains some of why negative trials end up in lower-ranked journals.) Journal ranking is one of the criteria I use to decide what to cover at Reuters Health.

And the highly ranked journals did a few things better than their lower-ranked peers: They disclosed conflicts of interest among authors more often, and published more randomized controlled clinical trials, considered by many to be the gold standard of clinical evidence. So there are plenty of reasons to focus on such journals.

But reporters should also want to give their readers, listeners, and viewers a complete picture, and reporting on negative studies could mean dipping into lower-ranked journals. Of course, this is just one study, and just as medical practice shouldn’t change based on a single report, neither should journalism. Still, based on this Annals of Surgery study, I don’t see any harm in looking at lower-ranked journals periodically and applying the same criteria to them that I would to any journal. Even sticking to the top third of such journals, as the authors of this study did, would increase my yield of negative results. It seems worth testing.

*This blog post was originally published at Gary Schwitzer's HealthNewsReview Blog*
