March 14th, 2011 by Gary Schwitzer in News, Research
Here we go again. Headlines across America are blaring lines like, “Coffee may reduce stroke risk.”
It was a big study, but an observational study. Not a trial. Not an experiment. And, as we say so often on this website that you could almost join in the chorus, observational studies have inherent limitations that should always be mentioned in stories. They can show a strong statistical association, but they can’t prove cause and effect. So they can’t prove benefit or risk reduction. And stories should say that.
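To see why an association alone can’t establish benefit, here is a minimal sketch in Python, using entirely made-up numbers, of how a single unmeasured confounder (a loosely defined “healthy lifestyle” factor) can make coffee drinkers look protected from stroke even when coffee does nothing at all:

```python
# Toy simulation, hypothetical numbers only: coffee has NO causal effect here,
# yet coffee drinkers end up with a lower stroke rate because an unmeasured
# lifestyle factor drives both coffee drinking and stroke risk.
import random

random.seed(0)
n = 100_000
strokes_coffee = coffee_n = 0
strokes_none = none_n = 0

for _ in range(n):
    healthy = random.random() < 0.5                      # unmeasured confounder
    drinks_coffee = random.random() < (0.7 if healthy else 0.4)
    # Stroke risk depends only on the lifestyle factor, never on coffee.
    stroke = random.random() < (0.02 if healthy else 0.05)
    if drinks_coffee:
        coffee_n += 1
        strokes_coffee += stroke
    else:
        none_n += 1
        strokes_none += stroke

print(f"stroke rate among coffee drinkers: {strokes_coffee / coffee_n:.2%}")
print(f"stroke rate among non-drinkers:    {strokes_none / none_n:.2%}")
```

In this toy model the coffee drinkers show a noticeably lower stroke rate only because healthier people happen to drink more coffee. The observational data alone cannot tell you which explanation is right, which is exactly the limitation the stories should have mentioned.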
USA Today, for example, did not explain that in its story. Nor did it include any of the limitations that appeared in a HealthDay story, which stated:
“The problem with this type of study is that there are too many factors unaccounted for and association does not prove causality, said Dr. Larry B. Goldstein, director of the Duke Stroke Center at Duke University Medical Center.
“Subjects were asked about their past coffee consumption in a questionnaire and then followed over time. There is no way to know if they changed their behavior,” Goldstein said.
And, he noted, there was no control for medication use or other potential but unmeasured factors.
“The study is restricted to a Scandinavian population, and it is not clear, even if there is a relationship, that it would be present in more diverse populations. I think that it can be concluded, at least in this population, that there was not an increased risk of stroke among coffee drinkers,” he said.”
When you don’t explain the limitations of observational studies — and/or when you imply that cause and effect has been established — you lose credibility with some readers. And you should.
*This blog post was originally published at Gary Schwitzer's HealthNewsReview Blog*
March 8th, 2011 by Gary Schwitzer in News, Opinion, Research
We’re delighted to see that USA Today, Reuters, and WebMD were among the news organizations that included what an editorial writer said about an observational study linking ibuprofen use with fewer cases of Parkinson’s disease. All three news organizations used some version of what editorial writer Dr. James Bower of the Mayo Clinic wrote or said:
“Whenever in epidemiology you find an association, that does not mean causation.”
“An association does not prove causation.”
“There could be other explanations for the ibuprofen-Parkinson’s connection.”
Kudos to those news organizations. And some praise goes to the journal Neurology for publishing Dr. Bower’s editorial to accompany the study. His piece is entitled, “Is the answer for Parkinson disease already in the medicine cabinet? Unfortunately not.”
And unfortunately, not all news organizations got that message. Many don’t read the journals, so they certainly never get to the editorials. Instead, they rewrite quick hits off a wire service story. As a result, we end up with some of the following:
A FoxNews.com story was particularly deaf to Bower’s caveat, stating: “That bottle of ibuprofen in your medicine cabinet is more powerful than you may think.”
A CBSNews.com story never addressed the observational study limitation, instead whimsically writing: “Pop a pill to prevent Parkinson’s disease? A new study says it’s possible, and the pill in question isn’t some experimental marvel that’s still years away from drugstore shelves. It’s plain old ibuprofen.”
*This blog post was originally published at Gary Schwitzer's HealthNewsReview Blog*
February 18th, 2011 by Gary Schwitzer in Health Policy, Opinion
This is a guest column by Ivan Oransky, M.D., who is executive editor of Reuters Health and blogs at Embargo Watch and Retraction Watch.
One of the things that makes evaluating medical evidence difficult is knowing whether what’s being published actually reflects reality. Are the studies we read a good representation of scientific truth, or are they full of cherry-picked data that help sell drugs or skew policy decisions?
That question may sound paranoid, but rest assured, it’s not. Researchers have worried about a “positive publication bias” for decades. The idea is that studies showing an effect of a particular drug or procedure are more likely to be published. In 2008, for example, a group of researchers published a New England Journal of Medicine study showing that nearly all — or 94 percent — of the published studies of antidepressants used by the FDA to make approval decisions had positive results. But when the researchers included the unpublished studies submitted to the FDA, only about half — or 51 percent — were positive.
A PLoS Medicine study published that same year found similar results when it looked long after drugs were approved: Less than half — 43 percent — of the studies the FDA used to approve 90 drugs had been published within five years of approval, and it was those with positive results that were more likely to appear in journals.
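To make the arithmetic of that bias concrete, here is a small back-of-the-envelope sketch in Python with round, invented numbers (not the actual trial counts from either paper), showing how selective publication can turn an evidence base that is only half positive into a published literature that looks almost uniformly positive:

```python
# Hypothetical, round numbers chosen only to illustrate the arithmetic of
# publication bias; they are not the counts from the NEJM or PLoS studies.
total_trials = 100
positive_trials = 50                         # about half the evidence is positive
negative_trials = total_trials - positive_trials

publish_rate_positive = 0.95                 # positive results almost always published
publish_rate_negative = 0.10                 # negative results rarely published

published_positive = positive_trials * publish_rate_positive   # 47.5
published_negative = negative_trials * publish_rate_negative   # 5.0

share_in_print = published_positive / (published_positive + published_negative)
print(f"Positive share of all trials:       {positive_trials / total_trials:.0%}")  # 50%
print(f"Positive share of published trials: {share_in_print:.0%}")                  # ~90%
```

Under these invented assumptions, a reader who sees only the journals would conclude the treatments work about nine times out of ten, while the full record says it is closer to a coin flip.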
All of that can leave the impression that something may work better than it really does. And there is at least one powerful incentive for journals to publish positive studies: Drug and device makers are much more likely to buy reprints of such reports. Such reprints are highly lucrative for journals.
*This blog post was originally published at Gary Schwitzer's HealthNewsReview Blog*