One of the things that makes evaluating medical evidence difficult is knowing whether what’s being published actually reflects reality. Are the studies we read a good representation of scientific truth, or are they full of cherry-picked data that help sell drugs or skew policy decisions?
That question may sound paranoid, but rest assured, it's not. Researchers have worried about "positive publication bias" for decades. The idea is that studies showing an effect of a particular drug or procedure are more likely to be published than those showing no effect. In 2008, for example, a group of researchers published a New England Journal of Medicine study showing that nearly all — 94 percent — of the published antidepressant trials submitted to the FDA for approval decisions had positive results. But when the researchers counted the unpublished trials as well, only about half — 51 percent — were positive.
A PLoS Medicine study published that same year found a similar pattern for the period after drugs were approved: Less than half — 43 percent — of the studies the FDA used to approve 90 drugs were published within five years of approval, and those with positive results were the most likely to appear in journals.
All of that can leave the impression that something may work better than it really does. And there is at least one powerful incentive for journals to publish positive studies: Drug and device makers are much more likely to buy reprints of such reports, and those reprints are highly lucrative for journals.
*This blog post was originally published at Gary Schwitzer's HealthNewsReview Blog*