A hat tip to KevinMD’s guest blogger, JoshMD, for this great link. The British Medical Journal offers a short historical analysis of seven common medical myths, sometimes perpetuated by physicians themselves:
- People should drink at least eight glasses of water a day
- We use only 10% of our brains
- Hair and fingernails continue to grow after death
- Shaving hair causes it to grow back faster, darker, or coarser
- Reading in dim light ruins your eyesight
- Eating turkey makes people especially drowsy
- Mobile phones create considerable electromagnetic interference in hospitals
To find out why each of these commonly held beliefs is either untrue or unsubstantiated, check out the original journal article. It’s a lot of fun.

This post originally appeared on Dr. Val’s blog at RevolutionHealth.com.
This is my final post in a series inspired by Dr. Barker Bausell’s recent book, “Snake Oil Science: The Truth About Complementary and Alternative Medicine.” Since I began this series, the New York Times has published a rave review of Bausell’s book, which further confirms the importance of his contributions.
Although Bausell’s main thesis is that there are currently no large, randomized controlled trials (published in leading medical journals) demonstrating the effect of any CAM therapy beyond placebo, I have chosen to highlight some of his thinking about research methodology as it applies to the medical literature in general.
So far I have explained why most research (if not carefully designed) will lead to a false positive result. This inherent bias is responsible for many of the illusory treatment benefits that we hear about so commonly through the media (whether they’re reporting on CAM or Western medicine), because the media’s job is to relay information in an entertaining manner more than an accurate one (i.e., good science makes bad television).

Then I explained a three-step process for determining the trustworthiness of health news and research. We can remember these steps with a simple mnemonic: C-P-R.
The C stands for credibility – in other words, “consider the source.” Is the research published in a top tier medical journal with a scientifically rigorous review process?
The P stands for plausibility – is the proposed finding consistent with known principles of physics, chemistry, and physiology, or would accepting the result require us to suspend belief in everything we’ve learned about science to date?
And finally we arrive at R – reproducibility. If the research study were repeated, would similar results be obtained?
This third and final pillar of trustworthy science is a simple, but sometimes forgotten, principle. If there is a true cause and effect relationship observed by the researcher, then surely that cause and effect can be demonstrated again and again under the same conditions. Touching a hot stove burner always results in a burned hand. No matter how frequently you test this causal relationship, the result will be similar.
Sometimes repeating a study yields conflicting results. When this happens, the reader should be careful in interpreting the conclusions – there may be a flaw in the study design, or the conclusions drawn may be inaccurate. The original finding could have been a false positive: the treatment under consideration had no appreciable effect, and the apparent benefit was due to chance. Flipping a coin gives you heads one minute and tails the next, yet a person unfamiliar with coins could conclude (after a single flip) that it has heads on both sides. In the end, therefore, one can be more confident in a study’s result if it is borne out by other studies.
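The coin analogy can be made concrete with a toy simulation (my own sketch, not from Bausell’s book; the numbers are purely illustrative). A fair coin stands in for a treatment with no real directional effect: a single-flip “study” always reports an extreme result, while repeating the study many times reveals the true 50/50 relationship – which is exactly what reproducibility buys us.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def flip_study(n_flips):
    """Run one 'study': flip a fair coin n_flips times, return the fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

# A single observation can badly mislead: one flip always reports 0% or 100% heads,
# just like the observer who concludes the coin has heads on both sides.
single_flip_result = flip_study(1)
print(f"one-flip 'study' reports {single_flip_result:.0%} heads")

# Repeating the study many times exposes the real relationship.
repeated = [flip_study(100) for _ in range(200)]
average = sum(repeated) / len(repeated)
print(f"average heads rate across 200 repeated studies: {average:.2f}")
```

No single repetition proves anything; it is the convergence across repetitions that inspires confidence – the same reason a result replicated across independent studies is more trustworthy than a lone headline.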
And so as we conclude this series, I hope that you now feel well equipped to perform CPR (credibility, plausibility, reproducibility checks) on health news. A little healthy skepticism can protect your brain from all the mixed health messages that barrage us each day. At the very least, now you’ll appreciate why most health news reports include an expert quote stating something to the effect of “it’s too early to know for sure if these findings are relevant.” That statement may be the most trustworthy of the entire report.
Next up: Shannon Brownlee’s book “Overtreated: Why Too Much Medicine Is Making Us Sicker And Poorer.” Shannon and I corresponded about this book two years ago, so I’m looking forward to seeing how it has turned out. Once I’ve finished it, I’ll give you my thoughts here in this blog.
Which news source do you trust more: The New York Times or The National Enquirer? Which news reporter would you trust more: Charlie Gibson or Jerry Springer? As it turns out, medical journals and science researchers run the gamut from highly credible and respected to dishonest and untrustworthy. So as we continue down this road of learning how to evaluate health news, let’s now turn our attention to pillar number one of trustworthy science: credibility.
In medical research, I like to think of credibility in three categories:
1. The credibility of the researcher: does the researcher have a track record of excellence in research methodology? Is he or she well-trained and/or have access to mentors who can shepherd along the research project and avoid the pitfalls of false positive “artifacts?” Has the researcher published previously in highly respected, peer reviewed journals?
2. The credibility of the research: does the study design reflect a clear understanding of potential result confounders and does it control for false positive influences, especially the placebo effect?
3. The credibility of the journal that publishes the research: top tier journals have demonstrated a track record of careful peer review. They have editorial boards of experts who are trained in research methodology and are screened for potential conflicts of interest that could inhibit an objective analysis of the research that they review. The importance of careful peer review cannot be overstated. Some say that the quality of a product is only as good as its quality control system. Top tier journals have the best quality control systems, and the articles they publish must undergo very careful scrutiny before they are published.
So as a lay person, how do you evaluate the credibility of a health news report? In practical terms, here’s what I’d recommend:
1. Look at the name of the journal reference – where was the research published? Is it from a top tier journal? R. Barker Bausell considers the following journals to be “top tier:” The New England Journal of Medicine (NEJM), The Journal of the American Medical Association (JAMA), Annals of Internal Medicine, Nature, and Science. I might cast a slightly larger net, but few would dispute that these are among the most respected journals in medicine and science.
2. Look at the study design described in the research article abstract. Was it a randomized, double-blind, placebo-controlled trial? Were there more than 50 subjects in each group? Did the authors overstate their conclusions? This sort of analysis is challenging for the lay person – so do it if you can, but if it proves too difficult, fall back on credibility check #1.
3. Look at the primary author of the research. Search for his/her name in the National Library of Medicine’s Medline database and see what other research he or she has done, and where it was published.
If the news report is based on credible research, you may feel confident in taking the results more seriously (so long as the media is representing them accurately). But before you hang your hat on a journal’s reputation, let’s take a look at the other two pillars of trustworthy science: plausibility and reproducibility. These two will help you navigate your way through the vast gray zone, where the credibility check doesn’t pass with flying colors – or maybe you’re dealing with neither Charlie Gibson nor Jerry Springer.
Have you ever been surprised and confused by what seem to be conflicting results from scientific research? Have you ever secretly wondered if the medical profession is composed of neurotic individuals who change their minds more frequently than you change your clothes? Well, I can understand why you’d feel that way, because the public is constantly barraged with mixed health messages. But why is this happening?
The answer is complex, and I’d like to take a closer look at a few of the reasons in a series of blog posts. First, the human body is so incredibly complicated that we are constantly learning new things about it – how medicines, foods, and the environment impact it from the chemical to cellular to organ system level. There will always be new information, some of which may contradict previous thinking, and some of which furthers it or adds a new facet to what we have already learned. Because human behavior is also so intricate, it’s far more difficult to prove a clear cause and effect relationship with certain treatments and interventions, due to the power of the human mind to perceive benefit when there is none (the placebo effect).
Second, the media, by its very nature, seeks to present data with less ambiguity than is warranted. R. Barker Bausell, PhD, explains this tendency:
1. Superficiality is easier to present than depth.
2. The media cannot deal with ambiguity, subtlety, and diversity (which always characterize scientific endeavors involving new areas of investigation or human behavior in general).
3. The bizarre always gets more attention than the usual.
I really don’t blame the media – they’re under intense pressure to find interesting sound bites to keep peoples’ attention. It’s not their job to present a careful and detailed analysis of the health news that they report. So it’s no wonder that a research paper suggesting that a certain herb may influence cancer cell protein expression in a Petri dish becomes: “Herb is new cure for cancer!” Of course, many media outlets are more responsible in their reporting than that, but you get the picture.
And third, the scientific method (if not carefully followed in rigorous, randomized, placebo-controlled trials) is a setup for false positive results. What does that mean? It means that the default outcome of your average research study (before it even begins) is a positive association between intervention and outcome. So I could run a trial on, say, the potential therapeutic use of candy bars for the treatment of eczema, and it’s likely (if I’m not a careful scientist) that the outcome will show a positive correlation between the two.
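You can see part of the machinery behind this with a small simulation (mine, not Bausell’s; all numbers are made up for illustration). Here a “treatment” does literally nothing – both groups are drawn from the same distribution – yet a naive significance check still flags an effect about one time in twenty, purely by chance. Run enough sloppy studies and some of them will come up “positive.”

```python
import random
import statistics

random.seed(7)  # fixed seed so the illustration is repeatable

def null_trial(n=20):
    """Compare a treatment group to a control group when the treatment does
    NOTHING: both groups come from the same distribution, so any 'significant'
    difference detected here is by definition a false positive."""
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.pvariance(treated) / n + statistics.pvariance(control) / n) ** 0.5
    return abs(diff / se) > 1.96  # crude z-test threshold for "p < 0.05"

# Run 1000 trials of a treatment with zero real effect and count the
# spurious 'discoveries' -- we expect roughly 5% of them to look significant.
false_positives = sum(null_trial() for _ in range(1000))
print(f"'significant' results out of 1000 null trials: {false_positives}")
```

This is why a lone positive study – especially a small one – should be read as a hypothesis, not a conclusion.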
There are many reasons for false positive results (e.g. wrongly ascribing effectiveness to a given therapy) in scientific research. “Experimental artifacts,” as they’re called, are very common and must be accounted for in a study’s design. For fun, let’s think about how the following factors stack the deck in favor of positive research findings (regardless of the treatment being analyzed):
1. Natural History: most medical conditions have fluctuating symptoms and many improve on their own over time. Therefore, for many conditions, one would expect improvement during the course of study, regardless of treatment.
2. Regression to the Mean: people are most likely to join a research study when their illness or problem is at its worst point in its natural history. Symptoms that fluctuate around an average tend to drift back toward that average, so participants who enroll during a flare will often improve during the study regardless of treatment.
3. The Hawthorne Effect: people behave differently and experience treatment differently when they’re being studied. For example, if people know they’re being observed regarding their work productivity, they’re likely to work harder during the research study. The enhanced results, therefore, do not reflect typical behavior.
4. Limitations of Memory: studies have shown that people ascribe greater improvement to their symptoms in retrospect. Research that relies on patient recall is therefore prone to inflated false positive rates.
5. Experimenter Bias: it is difficult for researchers to treat all study subjects in an identical manner if they know which patient is receiving an experimental treatment versus a placebo. Their gestures and the way that they question the subjects may set up expectations of benefit. Also, scientists are eager to demonstrate positive results for publication purposes.
6. Experimental Attrition: people generally join research studies because they expect that they may benefit from the treatment they receive. If they suspect that they are in the placebo group, they are more likely to drop out of the study. This can skew the results: the sicker patients who find no benefit from the placebo drop out, leaving behind the milder cases and making the remaining group look better than it should.
7. The Placebo Effect: I saved the most important artifact for last. The natural tendency for study subjects is to perceive that a treatment is effective. Previous research has shown that about 33% of study subjects will report that the placebo has a positive therapeutic effect of some sort.
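Regression to the mean (artifact #2) is concrete enough to simulate. In this sketch (my own toy model, with invented numbers), each patient’s symptoms fluctuate around a stable personal baseline, and patients join the study only during a flare. Re-measured later with no treatment at all, the group’s average score still “improves.”

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def symptom_score(baseline):
    """A patient's symptom severity fluctuates randomly around a stable baseline."""
    return baseline + random.gauss(0, 2)

enrollment_scores, follow_up_scores = [], []
for _ in range(10000):
    baseline = random.gauss(10, 1)        # each patient's typical severity
    at_enrollment = symptom_score(baseline)
    if at_enrollment > 12:                # patients join only during a flare
        enrollment_scores.append(at_enrollment)
        # Measured again later, with NO treatment given in between:
        follow_up_scores.append(symptom_score(baseline))

mean_before = sum(enrollment_scores) / len(enrollment_scores)
mean_after = sum(follow_up_scores) / len(follow_up_scores)
print(f"mean score at enrollment: {mean_before:.1f}")
print(f"mean score at follow-up (untreated): {mean_after:.1f}")
```

The apparent improvement comes entirely from the enrollment condition, which is exactly why a proper control group, subject to the same selection, is essential for telling a real treatment effect apart from this artifact.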
So my dear readers – if the media wants to get your attention with exaggerated representations of research findings, and the research findings themselves are stacked in favor of reporting an effect that isn’t real… then how on earth are we to know what to make of health news? Luckily, R. Barker Bausell has explained all of this really well in his book and I will attempt to summarize the following principles in the next few posts:
1. The importance of credible scientific evidence
2. The importance of plausible scientific evidence
3. The importance of reproducible scientific evidence
It’s time for the 4th annual Medical Blogger awards… nominate your favorites at MedGadget. The competition “is designed to recognize the very best from the medical blogosphere, and to highlight the diversity and excitement of the world of medical blogging.”
The categories for this year’s awards will be:
– Best Medical Weblog
– Best New Medical Weblog (established in 2007)
– Best Literary Medical Weblog
– Best Clinical Sciences Weblog
– Best Health Policies/Ethics Weblog
– Best Medical Technologies/Informatics Weblog
– Best Patient’s Blog