Better Health: Smart Health Commentary

Latest Posts

Bad Science: Nine Ways To Dress Up Flawed Research


This post is a follow-up to my book review of Bad Science, located here.

***

I couldn’t help but feel unusually depressed by Dr. Ben Goldacre’s exposé of researchers who resort to trickery to get their articles published in peer-reviewed journals. There are many ways to manipulate data, and just as many ways to dress up flawed research to enhance its chances of publication.

Before we get started, I need to point out that “negative trials” – research results that don’t corroborate the investigator’s original hypothesis – are much less likely to be published. People (and publishers) are far more interested in finding a needle in a haystack than in hearing that no needle could be found. This is a driving force behind all manner of mathematical convolutions aimed at demonstrating something interesting enough to warrant publication. After all, who can blame researchers for wanting to get their work published, and to have it make a meaningful contribution to their field of study? Who wants to toil for months or years on end, only to discover that their hypotheses were not borne out by experimentation, and that no helpful conclusions may be drawn whatsoever?

And so, with this intense pressure to find something meaningful in one’s research (whether for profit, personal satisfaction, or professional advancement), there are some typical strategies that researchers use to make something out of nothing. Ben Goldacre reviews these strategies in the voice of an unscrupulous senior pharmaceutical investigator giving advice to his junior colleague. (Parenthetically, it reminded me of The Screwtape Letters – an amusing book by C.S. Lewis, featuring the imaginary advice of a senior demon to his junior counterpart as they tempt humans to evil.)

(This passage is taken directly from pages 192-193 of Bad Science)

1. Ignore the protocol entirely. Always assume that any correlation proves causation. Throw all your data into a spreadsheet programme and report – as significant – any relationship between anything and everything if it helps your case. If you measure enough, some things are bound to be positive by sheer luck.

2. Play with the baseline. Sometimes, when you start a trial, quite by chance the treatment group is already doing better than the placebo group. If so, then leave it like that. If, on the other hand, the placebo group is already doing better than the treatment group at the start, then adjust for the baseline in your analysis.

3. Ignore dropouts. People who drop out of trials are statistically much more likely to have done badly, and much more likely to have had side effects. They will only make your drug look bad. So ignore them, make no attempt to chase them up, do not include them in your final analysis.

4. Clean up the data. Look at your graphs. There will be some anomalous ‘outliers,’ or points which lie a long way from the others. If they are making your drug look bad, just delete them. But if they are helping your drug look good, even if they seem to be spurious results, leave them in.

5. The best of five… no… seven… no… nine! If the difference between your drug and placebo becomes significant four and a half months into a six-month trial, stop the trial immediately and start writing up the results: things might get less impressive if you carry on. Alternatively, if at six months the results are ‘nearly significant,’ extend the trial by another three months.

6. Torture the data. If your results are bad, ask the computer to go back and see if any particular subgroups behaved differently. You might find that your drug works very well in Chinese women aged fifty-two to sixty-one. ‘Torture the data and it will confess to anything,’ as they say at Guantanamo Bay.

7. Try every button on the computer. If you’re really desperate, and analysing your data the way you planned doesn’t give you the results you wanted, just run the figures through a wide selection of other statistical tests, even if they are entirely inappropriate, at random.

8. Publish wisely. If you have a good trial, publish it in the biggest journal you can possibly manage. If you have a positive trial, but it was a completely unfair test, then put it in an obscure journal… and hope that readers are not attentive enough to read beyond the abstract to recognize its flaws.

9. Do not publish. If your finding is really embarrassing, hide it away somewhere and cite ‘data on file.’ Nobody will know the methods, and it will only be noticed if someone comes pestering you for the data to do a systematic review. Hopefully that won’t be for ages.
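
To see how easily this kind of thing manufactures a “finding,” here is a quick toy simulation in Python. It is only a sketch of my own, not anything from the book: the trial size, the number of outcomes, and the crude z-test are all made up for illustration. The point is simply the one behind items 1 and 6 above: a drug with no effect at all will still produce a handful of “statistically significant” results if you measure enough things.

# Toy illustration: a useless drug, many unrelated outcome measures, and a
# naive "report anything with p < 0.05" rule. All numbers are hypothetical.
import math
import random

random.seed(1)

N_PER_GROUP = 50    # patients per arm in our imaginary trial
N_OUTCOMES = 100    # number of unrelated things we decide to measure
ALPHA = 0.05

def p_value(drug, placebo):
    """Two-sided z-test on the difference in means (a crude approximation)."""
    n = len(drug)
    mean_d = sum(drug) / n
    mean_p = sum(placebo) / n
    var_d = sum((x - mean_d) ** 2 for x in drug) / (n - 1)
    var_p = sum((x - mean_p) ** 2 for x in placebo) / (n - 1)
    z = (mean_d - mean_p) / math.sqrt(var_d / n + var_p / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

false_positives = 0
for _ in range(N_OUTCOMES):
    # Both arms are drawn from the same distribution: the drug truly does nothing.
    drug = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    placebo = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    if p_value(drug, placebo) < ALPHA:
        false_positives += 1

print(f"'Significant' findings for a drug that does nothing: {false_positives} of {N_OUTCOMES}")

Run it a few times with different seeds and you will reliably get a few “winners,” each of which would look perfectly publishable in isolation.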


Consumer-Generated Clinical Trials? Research Minus Science = Gossip


The internet, in democratizing knowledge, has led a lot of people to believe that it is also possible to democratize expertise.

– Commenter at Science Based Medicine

Regular readers of this blog know how passionate I am about protecting the public from misleading health information. I have witnessed first-hand many well-meaning attempts to “empower consumers” with Web 2.0 tools. Unfortunately, they were designed without a clear understanding of the scientific method, basic statistics, or in some cases, common sense.

Let me first say that I desperately want my patients to be knowledgeable about their disease or condition. The quality of their self-care depends on that, and I regularly point each of my patients to trusted sources of health information so that they can be fully informed about all aspects of their health. Informed decisions are founded upon good information. But when the foundation is corrupt – consumer empowerment collapses like a house of cards.

In a recent lecture on Health 2.0, it was suggested that websites that enable patients to “conduct their own clinical trials” are the bold new frontier of research. This assertion betrays a lack of understanding of basic scientific principles. In healthcare we often say, “the plural of anecdote is not data” and I would translate that to “research minus science equals gossip.” Let me give you some examples of Health 2.0 gone wild:

1. A rating tool was created to “empower” patients to score their medications (and user-generated treatment options) based on their perceived efficacy for their disease/condition. The treatments with the highest average scores would surely reflect the best option for a given disease/condition, right? Wrong. For every single pain syndrome (from headache to low back pain), a narcotic came out as the most popular (and therefore “best”) treatment. If patients followed this system for determining their treatment options, we’d be swatting flies with cannon balls – not to mention putting patients at risk for drug dependency and even abuse. Treatments must be carefully customized to the individual – genetic differences, allergy profiles, comorbid conditions, and psychosocial and financial considerations all play an important role in choosing the best treatment. Removing those subtleties from the decision-making process is a backwards step for healthcare.

2. An online tracker tool was created without the input of a clinician. The tool purported to “empower women” to manage menopause more effectively online. What on earth would a woman want to do to manage her menopause online, you might ask? Well apparently these young software developers strongly believed that a “hot flash tracker” would be just what women were looking for. The tool provided a graphical representation of the frequency and duration of hot flashes, so that the user could present this to her doctor. One small problem: hot flash management is a binary decision. Hot flashes either are so personally bothersome that a woman would decide to receive hormone therapy to reduce their effects, or the hot flashes are not bothersome enough to warrant treatment. It doesn’t matter how frequently they occur or how long they last. Another ill-conceived Health 2.0 tool.

When it comes to interpreting data, Barker Bausell does an admirable job of reviewing the most common reasons why people are misled into believing that there is a cause-and-effect relationship between a given intervention and an outcome. In fact, the deck is stacked in favor of a perceived effect in any trial, so it’s important to be aware of these potential biases when interpreting results. Health 2.0 enthusiasts would do well to consider the following factors that create the potential for “false positives” in any clinical trial:

1. Natural History: most medical conditions have fluctuating symptoms, and many improve on their own over time. Therefore, for many conditions, one would expect improvement during the course of the study, regardless of treatment.

2. Regression to the Mean: people are more likely to join a research study when their illness or problem is at its worst. It is therefore more likely that their symptoms will improve during the study than if they had joined when symptoms were less troublesome, so in any given study there is a built-in tendency for participants to improve after enrollment (see the sketch after this list).

3. The Hawthorne Effect: people behave differently and experience treatment differently when they’re being studied. For example, if people know their work productivity is being observed, they’re likely to work harder during the research study. The enhanced results, therefore, do not reflect typical behavior.

4. Limitations of Memory: studies have shown that, in retrospect, people ascribe greater improvement to their symptoms than actually occurred. Research that relies on patient recall is therefore at risk of inflated false positive rates.

5. Experimenter Bias: it is difficult for researchers to treat all study subjects in an identical manner if they know which patient is receiving an experimental treatment versus a placebo. Their gestures and the way that they question the subjects may set up expectations of benefit. Also, scientists are eager to demonstrate positive results for publication purposes.

6. Experimental Attrition: people generally join research studies because they expect that they may benefit from the treatment they receive. If they suspect that they are in the placebo group, they are more likely to drop out of the study. This can bias the results: the sicker patients who find no benefit from the placebo drop out, leaving only the milder cases from which to tease out a response to the intervention.

7. The Placebo Effect: I saved the most important artifact for last. The natural tendency for study subjects is to perceive that a treatment is effective. Previous research has shown that about 33% of study subjects will report that the placebo has a positive therapeutic effect of some sort.
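
To make factor 2 concrete (together with the fluctuating symptoms described in factor 1), here is a small Python sketch. The numbers are my own and purely illustrative, not taken from Bausell: symptoms fluctuate around a personal baseline, people enroll in a “trial” only when they feel their worst, nobody receives any treatment at all, and yet the group appears to improve.

# Toy illustration of regression to the mean: enroll people on their worst
# days, give them nothing, and watch the average score "improve" anyway.
import random

random.seed(7)

N_PATIENTS = 1000
ENROLL_THRESHOLD = 7.0   # hypothetical pain score (0-10) bad enough to join a trial

improvements = []
for _ in range(N_PATIENTS):
    baseline = random.uniform(3, 6)            # this person's usual symptom level
    today = baseline + random.gauss(0, 2)      # a bad-day flare is what prompts enrollment
    if today < ENROLL_THRESHOLD:
        continue                               # people feeling near their baseline don't sign up
    follow_up = baseline + random.gauss(0, 2)  # weeks later, symptoms drift back toward baseline
    improvements.append(today - follow_up)

average = sum(improvements) / len(improvements)
print(f"Enrolled {len(improvements)} of {N_PATIENTS} patients;")
print(f"average 'improvement' with no treatment at all: {average:.1f} points")

An uncontrolled, consumer-generated “trial” would happily credit that apparent improvement to whatever remedy its users happened to be taking.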

In my opinion, the often-missing ingredient in Health 2.0 is the medical expert. Without our critical review and educated guidance, there is a greater risk of making irrelevant tools or perhaps even doing more harm than good. Let’s all work closely together to harness the power of the Internet for our common good. While research minus science = gossip, science minus consumers = inaction.

One Man’s Mission To Expose Medical Quackery


Sorry for the late notice, folks… but Revolution Health’s sister site, HealthTalk, is hosting a call-in podcast with Dr. Stephen Barrett, founder of Quackwatch.org, TONIGHT. The name of the show is “One Man’s Mission To Expose Medical Quackery,” and Dr. Barrett is a polarizing figure for sure. Love him or hate him, it should be a great interview. To join, go to this page.

You can send in your questions in advance, listen to the call live (8:30pm EDT, Wednesday, April 30th), or listen to the podcast post-show. Hope to meet you there!

Here are a few of my recent posts about how to discern truth from error in medicine:

Good Science Makes Bad Television (And Other Truths)

The Three Pillars of Trustworthy Science: Credibility, Plausibility, and Reproducibility

Plausibility, Homeopathy, and Science Fiction

False Positive Research Findings: The Deck Is Stacked

Reproducibility: The Final Pillar of Trustworthy Science

The Rise of Snake Oil In America

The Placebo Effect: Whatever Works?

This post originally appeared on Dr. Val’s blog at RevolutionHealth.com.

Crazy Talk: Medical Misconceptions and Snake Oil


In my roaming around the blogosphere I found some fairly humorous tidbits. It’s amazing what people will believe, or what crazy “cures” are being sold on the Internet. I thought you’d enjoy some choice samples:

1. Demi Moore uses leeches to “detoxify her blood.” Dr. Ramona Bates offers a great summary of the potential medical uses of leeches, along with some caveats (like a 20% chance of infection with leeches… and a way to keep them from wandering too far afield. Tie a string to their tails? Yikes.) If bloodletting is good for detoxification, why not donate blood instead?

2. One snake oil site promises to cure “shock” and cardiac arrests with an herbal liquid that turns out to be mostly brandy. This cure is “also helpful for animals who have experienced mild trauma.” Funny stuff.

3. An Indian magician who claimed to be able to kill people with a magic incantation was unable to do so during a TV show segment. The host of the show offered to be the target, but the incantation was unsuccessful. Reminds me of the guy who swore he could make his arm impervious to harm by a sharp sword. That didn’t work either.

This post originally appeared on Dr. Val’s blog at RevolutionHealth.com.

Dr. Val’s Equal Opportunity Smack Down


So my last post stimulated some interesting discussion amongst friends and colleagues. Some applauded the late night supplement “smack down,” and others felt it was too harsh. Still others who don’t read my blog regularly complained that it was unfair to pick on the supplement industry without also pointing out the flaws of Big Pharma. Well, here’s to equal opportunity smack downs – where things “applied directly to the forehead” are as fair game as anti-psychotics, IT mishaps, healthcare professional SNAFUs, and misguided policies resulting in unanticipated harm to patients.

But let’s not forget the happy stories, the unlikely triumphs, and the snatching of victory from the jaws of defeat. We can laugh at ourselves, cry with our friends, and mourn the loss of the lonely. Medicine is full of a broad array of emotions and perspectives, captured here for you in this blog.

And now, back to bunnies and puppies…

This post originally appeared on Dr. Val’s blog at RevolutionHealth.com.

Latest Interviews

IDEA Labs: Medical Students Take The Lead In Healthcare Innovation

It’s no secret that doctors are disappointed with the way that the U.S. healthcare system is evolving. Most feel helpless about improving their work conditions or solving technical problems in patient care. Fortunately, one young medical student was undeterred by the mountain of disappointment carried by his senior clinician mentors…


How To Be A Successful Patient: Young Doctors Offer Some Advice

I am proud to be a part of the American Resident Project, an initiative that promotes the writing of medical students, residents, and new physicians as they explore ideas for transforming American health care delivery. I recently had the opportunity to interview three of the writing fellows about how to…


Latest Book Reviews

Book Review: Is Empathy Learned By Faking It Till It’s Real?

I’m often asked to do book reviews on my blog, and I rarely agree to them. This is because it takes me a long time to read a book, and then if I don’t enjoy it, I figure the author would rather I remain silent than publish my…


The Spirit Of The Place: Samuel Shem’s New Book May Depress You

When I was in medical school I read Samuel Shem’s House Of God as a rite of passage. At the time I found it to be a cynical yet eerily accurate portrayal of the underbelly of academic medicine. I gained comfort from its gallows humor and it made me…


Eat To Save Your Life: Another Half-True Diet Book

I am hesitant to review diet books because they are so often a tangled mess of fact and fiction. Teasing out their truth from falsehood is about as exhausting as delousing a long-haired elementary school student. However, after being approached by the authors’ PR agency with the promise of a…
