Better Health: Smart Health Commentary


Analyzing Pseudoscience Can Aid In Improving Legitimate Scientific Research

While we at SBM frequently target the worst abuses of science in medicine, it’s important to recognize that doing rigorous science is complex, and mainstream scientists often fall short of the ideal. In fact, one of the advantages of exploring pseudoscience in medicine is developing a sensitive detector for errors in logic, method, and analysis. Many of the errors we point out in so-called “alternative” medicine also crop up elsewhere in medicine – although usually to a much lesser degree.

It is not uncommon, for example, for a paper to fail to adjust for multiple comparisons – if you compare many variables, you have to take that into account in the statistical analysis; otherwise the probability of finding a chance correlation increases with every comparison you make.
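A quick simulation makes the point concrete. The numbers below (20 comparisons at a 0.05 threshold) are illustrative choices, not from any particular study; under the null hypothesis each test’s p-value is uniform on [0, 1], so we can simulate the family-wise false-positive rate directly:

```python
import random

random.seed(0)
alpha, n_tests, n_sims = 0.05, 20, 20000

# Under the null hypothesis, each test's p-value is uniform on [0, 1].
# Count how often a "family" of 20 naive tests yields at least one hit.
false_positive_families = 0
for _ in range(n_sims):
    p_values = [random.random() for _ in range(n_tests)]
    if any(p < alpha for p in p_values):      # naive: test each at alpha
        false_positive_families += 1

simulated = false_positive_families / n_sims
expected = 1 - (1 - alpha) ** n_tests        # analytic value, about 0.64

# A Bonferroni correction instead tests each comparison at alpha / n_tests,
# pulling the family-wise error rate back down near alpha.
print(f"simulated: {simulated:.3f}  analytic: {expected:.3f}")
```

So with 20 uncorrected comparisons, the chance of at least one spurious “significant” result is roughly 64 percent – which is why the adjustment matters.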

I discussed just last week on NeuroLogica the misapplication of meta-analysis – in this case to the question of whether or not CCSVI correlates with multiple sclerosis. I find this very common in the literature, essentially a failure to appreciate the limits of this particular analysis tool.

Another example comes recently from the journal Nature Neuroscience (an article I learned about from Ben Goldacre over at the Bad Science blog). The paper, “Erroneous analyses of interactions in neuroscience: a problem of significance,” investigates the frequency of a subtle but important statistical error in high profile neuroscience journals.

The authors, Sander Nieuwenhuis, Birte U Forstmann, and Eric-Jan Wagenmakers, report:

We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that 78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular neuroscience.

The incorrect procedure is this – looking at the effects of an intervention to see if they are statistically significant when compared to a no-intervention group (whether it is rats, cells, or people), then comparing a placebo intervention to the no-intervention group to see if it has a statistically significant effect, and then comparing the two verdicts – significant versus not significant – and treating that as evidence of a difference. This seems superficially legitimate, but it isn’t.

For example, if the intervention produces a barely statistically significant effect, and the placebo produces a barely non-significant effect, the authors might still conclude that the intervention is statistically significantly superior to placebo. However, the proper comparison is to test the two effects against each other directly – to see whether the difference of differences is itself statistically significant (which it likely won’t be in this example).
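Here is a minimal sketch of that exact scenario, using made-up effect sizes and standard errors (these numbers are illustrative, not data from the paper). One arm barely crosses p < 0.05 against baseline and the other barely misses, yet the direct comparison – whose standard error is larger than either arm’s – is nowhere near significant:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical effect estimates (mean change vs. no-intervention) and
# their standard errors -- illustrative numbers only.
treatment_effect, se_t = 0.40, 0.20   # z = 2.0 vs. baseline
placebo_effect, se_p = 0.30, 0.20     # z = 1.5 vs. baseline

p_treatment = two_sided_p(treatment_effect / se_t)   # just under 0.05
p_placebo = two_sided_p(placebo_effect / se_p)       # well over 0.05

# The correct test: compare the two effects directly.  The difference
# has its own, larger standard error, so the comparison is far from
# significant even though the two baseline tests "disagree".
diff = treatment_effect - placebo_effect
se_diff = math.sqrt(se_t ** 2 + se_p ** 2)
p_difference = two_sided_p(diff / se_diff)

print(f"treatment vs. baseline: p = {p_treatment:.3f}")
print(f"placebo   vs. baseline: p = {p_placebo:.3f}")
print(f"treatment vs. placebo:  p = {p_difference:.3f}")
```

The “significant versus not significant” pattern across the two baseline tests tells you nothing by itself; only the direct comparison answers the question being asked.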

This is standard procedure, for example, in placebo-controlled medical trials – the treatment group is compared to the placebo group. But what more than half of the researchers in the reviewed articles were doing was comparing both groups to a no-intervention group without comparing them to each other. This creates the illusion of a statistically significant difference where none exists – a false positive error (erroneously rejecting the null hypothesis).

The frequency of this error is huge, and there is no reason to believe that it is unique to neuroscience research or more common in neuroscience than in other areas of research.

I find this article to be very important, and I thought it deserved more play than it seems to be getting. Keeping to the highest standards of scientific rigor is critical in biomedical research. The authors do an important service in pointing out this error, and researchers, editors, and peer reviewers should take note. This should, in fact, be part of a checklist that journal editors employ to ensure that submitted research uses legitimate methods. (And yes, this is a deliberate reference to The Checklist Manifesto – a powerful method for minimizing error.)

I would also point out that one of the authors on this article, Eric-Jan Wagenmakers, was the lead author on an interesting paper analyzing the psi research of Daryl Bem. (You can also listen to a very interesting interview I did with Wagenmakers on my podcast here.) To me this is an example of how it pays for mainstream scientists to pay attention to fringe science – not because the subject of the research itself is plausible or interesting, but because they often provide excellent examples of pathological science. Examining pathological science is a great way to learn what makes legitimate science legitimate, and also gives one a greater ability to detect logical and statistical errors in mainstream science.

What the Nieuwenhuis et al. paper shows is that more scientists should be availing themselves of the learning opportunity afforded by analyzing pseudoscience.

*This blog post was originally published at Science-Based Medicine*

