Which news source do you trust more: The New York Times or The National Enquirer? Which news reporter would you trust more: Charlie Gibson or Jerry Springer? As it turns out, medical journals and science researchers run the gamut from highly credible and respected to dishonest and untrustworthy. So as we continue down this road of learning how to evaluate health news, let’s now turn our attention to pillar number one of trustworthy science: credibility.
In medical research, I like to think of credibility in three categories:
1. The credibility of the researcher: does the researcher have a track record of excellence in research methodology? Is he or she well trained, with access to mentors who can shepherd the research project along and help avoid the pitfalls of false positive "artifacts"? Has the researcher published previously in highly respected, peer-reviewed journals?
2. The credibility of the research: does the study design reflect a clear understanding of potential result confounders and does it control for false positive influences, especially the placebo effect?
3. The credibility of the journal that publishes the research: top tier journals have demonstrated a track record of careful peer review. They have editorial boards of experts who are trained in research methodology and are screened for potential conflicts of interest that could inhibit an objective analysis of the research that they review. The importance of careful peer review must not be underestimated. Some say that the quality of a product is only as good as its quality control system. Top tier journals have the best quality control systems, and the articles they publish must undergo very careful scrutiny before they are published.
So as a lay person, how do you evaluate the credibility of a health news report? In practical terms, here’s what I’d recommend:
1. Look at the name of the journal reference – where was the research published? Is it from a top tier journal? R. Barker Bausell considers the following journals to be "top tier": The New England Journal of Medicine (NEJM), The Journal of the American Medical Association (JAMA), Annals of Internal Medicine, Nature, and Science. I might cast a slightly larger net, but no one will dispute that these are among the most respected journals in medicine and science.
2. Look at the study design described in the research article abstract. Was it a randomized, double-blind, placebo-controlled trial? Were there more than 50 subjects in each group? Did the authors overstate their conclusions? This sort of analysis is challenging for the lay person – so do it if you can, but if it proves too difficult, fall back on credibility check #1.
3. Look at the primary author of the research. Search for his/her name in the National Library of Medicine’s Medline database and see what other research he or she has done, and where it was published.
If the news report is based on credible research, you may feel confident in taking the results more seriously (so long as the media is representing them accurately). But before you hang your hat on a journal's reputation, let's take a look at the other two pillars of trustworthy science: plausibility and reproducibility. These two will help you navigate your way through the vast gray zone, where the credibility check doesn't pass with flying colors – or maybe you're dealing with neither Charlie Gibson nor Jerry Springer.

This post originally appeared on Dr. Val's blog at RevolutionHealth.com.
Have you ever been surprised and confused by what seem to be conflicting results from scientific research? Have you ever secretly wondered if the medical profession is composed of neurotic individuals who change their minds more frequently than you change your clothes? Well, I can understand why you'd feel that way, because the public is constantly barraged with mixed health messages. But why is this happening?
The answer is complex, and I'd like to take a closer look at a few of the reasons in a series of blog posts. First, the human body is so incredibly complicated that we are constantly learning new things about it – how medicines, foods, and the environment impact it from the chemical to cellular to organ system level. There will always be new information, some of which may contradict previous thinking, and some of which furthers it or adds a new facet to what we have already learned. Because human behavior is also so intricate, it's far more difficult to prove a clear cause and effect relationship with certain treatments and interventions, due to the power of the human mind to perceive benefit when there is none (the placebo effect).
Second, the media, by its very nature, seeks to present data with less ambiguity than is warranted. R. Barker Bausell, PhD, explains this tendency:
1. Superficiality is easier to present than depth.
2. The media cannot deal with ambiguity, subtlety, and diversity (which always characterize scientific endeavors involving new areas of investigation or human behavior in general).
3. The bizarre always gets more attention than the usual.
I really don’t blame the media – they’re under intense pressure to find interesting sound bites to keep peoples’ attention. It’s not their job to present a careful and detailed analysis of the health news that they report. So it’s no wonder that a research paper suggesting that a certain herb may influence cancer cell protein expression in a Petri dish becomes: herb is new cure for cancer! Of course, many media outlets are more responsible in their reporting than that, but you get the picture.
And thirdly, the scientific method (if not carefully followed in rigorous, randomized, placebo-controlled trials) is a setup for false positive tests. What does that mean? It means that the default for your average research study (before it even begins) is that there will be a positive association between intervention and outcome. So I could do a trial on, say, the potential therapeutic use of candy bars for the treatment of eczema, and it's likely (if I'm not a careful scientist) that the outcome will show a positive correlation between the two.
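To see how easily an uncontrolled study produces a false positive, here is a small simulation (my own hypothetical sketch – the numbers and the scoring scale are invented for illustration, not taken from any real trial): subjects' symptom scores improve modestly on their own over time, and a naive before-and-after comparison with no placebo arm "detects" a benefit from a completely inert treatment.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is repeatable

def run_uncontrolled_trial(n=50):
    """Simulate a before/after trial of an inert treatment.

    Symptoms improve slightly on their own (natural history),
    so the 'treatment' looks effective even though it does nothing.
    """
    before = [random.gauss(60, 10) for _ in range(n)]        # baseline severity scores
    # Natural improvement of about 5 points, unrelated to any treatment.
    after = [b - 5 + random.gauss(0, 5) for b in before]
    return statistics.mean(before) - statistics.mean(after)

improvement = run_uncontrolled_trial()
print(f"Apparent benefit of an inert treatment: {improvement:.1f} points")
```

The "treatment" in this sketch does literally nothing, yet the study reports a several-point improvement – which is exactly why a placebo control group, experiencing the same natural history, is needed as a baseline for comparison.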
There are many reasons for false positive results (e.g. wrongly ascribing effectiveness to a given therapy) in scientific research. "Experimental artifacts," as they're called, are very common and must be accounted for in a study's design. For fun, let's think about how the following factors stack the deck in favor of positive research findings (regardless of the treatment being analyzed):
1. Natural History: most medical conditions have fluctuating symptoms and many improve on their own over time. Therefore, for many conditions, one would expect improvement during the course of study, regardless of treatment.
2. Regression to the Mean: people are most likely to join a research study when their illness or problem is at its worst. Since fluctuating symptoms tend to drift back toward their average, participants are likely to improve after enrolling simply because they joined at a low point – not because of anything the study did.
3. The Hawthorne Effect: people behave differently and experience treatment differently when they’re being studied. So for example, if people know they’re being observed regarding their work productivity, they’re likely to work harder during the research study. The enhanced results therefore, do not reflect typical behavior.
4. Limitations of Memory: studies have shown that people ascribe greater improvement of symptoms in retrospect. Research that relies on patient recall is in danger of increased false positive rates.
5. Experimenter Bias: it is difficult for researchers to treat all study subjects in an identical manner if they know which patient is receiving an experimental treatment versus a placebo. Their gestures and the way that they question the subjects may set up expectations of benefit. Also, scientists are eager to demonstrate positive results for publication purposes.
6. Experimental Attrition: people generally join research studies because they expect that they may benefit from the treatment they receive. If they suspect that they are in the placebo group, they are more likely to drop out of the study. This can influence the study results so that the sicker patients who are not finding benefit with the placebo drop out, leaving the milder cases to try to tease out their response to the intervention.
7. The Placebo Effect: I saved the most important artifact for last. The natural tendency for study subjects is to perceive that a treatment is effective. Previous research has shown that about 33% of study subjects will report that the placebo has a positive therapeutic effect of some sort.
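Regression to the mean (artifact #2 above) is easy to demonstrate with a toy simulation – again, a hypothetical sketch with made-up numbers, not data from any real study. Each simulated patient's symptoms fluctuate around a stable long-term average, but patients only enroll on a day when their score crosses a "bad enough" threshold. With no treatment at all, the follow-up measurement is lower on average:

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is repeatable

def regression_to_the_mean(n_patients=1000, enroll_threshold=70):
    """Patients with fluctuating symptoms enroll only on a bad day.

    Every patient's true average severity is 60; daily scores
    fluctuate around it. Selecting only those whose score exceeds
    the threshold guarantees that the next (untreated) measurement
    will be lower on average.
    """
    at_enrollment, at_followup = [], []
    for _ in range(n_patients):
        today = random.gauss(60, 10)
        if today > enroll_threshold:                  # joins the study on a bad day
            at_enrollment.append(today)
            at_followup.append(random.gauss(60, 10))  # just another ordinary day
    return statistics.mean(at_enrollment) - statistics.mean(at_followup)

drop = regression_to_the_mean()
print(f"Average 'improvement' with no treatment at all: {drop:.1f} points")
```

Nothing was treated, yet the enrolled group "improves" substantially – purely because they were selected at their worst. A placebo control group is subject to the very same artifact, which is how a controlled comparison cancels it out.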
So my dear readers – if the media wants to get your attention with exaggerated representations of research findings, and the research findings themselves are stacked in favor of reporting an effect that isn’t real… then how on earth are we to know what to make of health news? Luckily, R. Barker Bausell has explained all of this really well in his book and I will attempt to summarize the following principles in the next few posts:
1. The importance of credible scientific evidence
2. The importance of plausible scientific evidence
3. The importance of reproducible scientific evidence

This post originally appeared on Dr. Val's blog at RevolutionHealth.com.
I’ve been reading Mindy Roberts’ hilarious book: Mommy Confidential: Adventures From The Wonderbelly of Motherhood. I particularly enjoy the moments she captures about her son, Will. I thought I’d share some excerpts with you to give you a good chuckle:
Today at Jake’s 6th birthday party, Will rushed up to me saying, “Mommy! There’s a dead squirrel over there! Hurry mommy, before he goes to heaven!”
Will is obsessed with size differentials among animals and the relative strengths and weaknesses of each as they relate to other predators. He wants to know exactly how big everything is so that he can determine how many predators it takes to bring down each type of prey. Among the factors are: height, weight, speed, habitat, how far it can jump, whether it can rear up, whether it can swim, and how sharp the teeth are. Usually he wants to know if, say, 20 wolves can take on 10 tigers, but this morning’s question took the cake. “Daddy, can 10 monkeys take down a zebra?”
You can find Mindy's book at her website.

This post originally appeared on Dr. Val's blog at RevolutionHealth.com.