November 11th, 2009 by Harriet Hall, M.D. in Better Health Network, Book Reviews, Quackery Exposed
That’s the title of a new book by Melvin H. Kirschner, M.D. When I first saw the title, I expected a polemic against conventional medicine. The first line of the Preface reassured me: “Everything we do has a risk-benefit ratio.” Dr. Kirschner took the title from his first pharmacology lecture in medical school. The professor said “I am here to teach you how to poison people.” After a pause, he added, “without killing them, of course.”
He meant that any medicine that has effects has side effects, that the poison is in the dose, and that we must weigh the benefits of any treatment against the risks. Dr. Kirschner has no beef with scientific medicine. He does have a lot of other beefs, mainly with the health insurance industry, the pharmaceutical industry, and alternative medicine.
*This blog post was originally published at Science-Based Medicine*
July 14th, 2009 by Harriet Hall, M.D. in Better Health Network, Quackery Exposed
A study published in Alternative Therapies in Health and Medicine is being cited as evidence for the efficacy of healing touch (HT). It enrolled 237 subjects who were scheduled for coronary bypass surgery, randomized them to receive HT, a visitor, or no treatment, and found that HT was associated with a greater decrease in anxiety and shorter hospital stays.
This study is a good example of what I have called “Tooth Fairy Science.” You can study how much money the Tooth Fairy leaves in different situations (first vs. last tooth, age of child, tooth in baggie vs. tooth wrapped in Kleenex, etc.), and your results can be replicable and statistically significant, and you can think you have learned something about the Tooth Fairy; but your results don’t mean what you think they do because you didn’t stop to find out whether the Tooth Fairy was real or whether some more mundane explanation (parents) might account for the phenomenon.
Theoretical underpinnings
According to the study’s introduction:
Healing touch is a biofield- or energy-based therapy that arose out of nursing in the early 1980s…HT aids relaxation and supports the body’s natural healing process, i.e., one’s ability to self-balance and self-heal. This noninvasive technique involves (1) intention (such as the practitioner centering with the deep, gentle, conscious breath) and (2) placement of hands in specific patterns or sequences either on the body or above it. At its core, the theoretical basis of the work is that a human being is a multi-dimensional energy system (including consciousness) that can be affected by another to promote well-being.
They cite a number of references to theorists who support these ideas. They cite Oschman, whose book Energy Medicine: The Scientific Basis I reviewed, showing that despite the book’s title, there is no credible scientific basis and the “evidence” he presents cannot be taken seriously.
They cite Candace Pert, who said in the foreword to Oschman’s book that Dr. Oschman “pulled” some energy away from her “stagnant” liver. She said the body is “a liquid crystal under tension capable of vibrating at a number of frequencies, some in the range of visible light,” with “different emotional states, each with a predominant peptide ligand-induced ‘tone’ as an energetic pattern which propagates throughout the bodymind.” Does this even mean anything?
They even cite the PEAR (Princeton Engineering Anomalies Research) project, suggesting that it is still ongoing (it isn’t) and claiming it shows that “actions in one system can potentially influence actions of another on a quantum energetic level.” (It didn’t.)
This is nothing but imaginative speculation based on a misunderstanding of quantum physics and of what physicists mean by “energy.” It is a truism that electromagnetic phenomena are widespread in the human body, but there is a giant gap between that and the idea that a nurse with intention and hand movements can influence electrical, magnetic, or any other physical processes in the body to promote healing. There is no evidence for the alleged “human biofield.”
Previous Research
They cite several randomized controlled studies of HT over the last few years. One showed “better health-related quality of life” in cancer patients. One, the Post-White study, showed no difference between HT and massage. One small study by Ziembroski et al. that I couldn’t find in PubMed apparently showed no significant difference between HT and standard care for hospice patients. One study showed that HT raised secretory IgA concentrations, lowered stress perceptions and relieved pain, and results were greater with more experienced practitioners; but it only compared HT to no treatment and didn’t use any placebo treatment.
A pilot study compared four noetic therapies (stress relaxation, imagery, touch therapy, and prayer) and found no difference.
A larger study showed that neither touch therapy nor masked prayer significantly improved clinical outcome after elective catheterization or percutaneous coronary intervention.
They cite a review of healing touch studies by Wardell and Weymouth. It concluded: “Over 30 studies have been conducted with healing touch as the independent variable. Although no generalizable results were found, a foundation exists for further research to test its benefits.” Wardell noted that “the question has been raised whether the field of energy research readily lends itself to traditional scientific analysis due to coexisting paradoxical findings.” This is a common excuse of true believers who find that science is not cooperative in validating their beliefs.
Study Design
237 patients undergoing first-time elective coronary artery bypass surgery were randomly assigned to one of 3 groups: an HT group, a visitor group, and a standard care group. All received the same standard care from the hospital. The HT group received preoperative HT education and 3 HT interventions. Practitioners established a relationship with their patients, assessed their energy fields, and performed a variety of HT techniques based on their assessment, including techniques that involved light touch and those that involved no touch (practitioners’ hands held above body). Sessions lasted 20 to 90 minutes; each patient had the same practitioner throughout the study. The “visitor” group patients were visited by a nurse on the same schedule. The visits consisted of general conversation or the visitor remaining quietly in the room with the patient. They mentioned that some visits were shortened at the patient’s request.
Results of the Study
The six outcome measures were postoperative length of stay, incidence of postoperative atrial fibrillation, use of anti-emetic medication, amount of narcotic pain medication, functional status, and anxiety. HT had no effect on atrial fibrillation, anti-emetics, narcotics, or functional status. The only significant differences were for anxiety scores and length of stay. The length of stay for the HT group was 6.9 days, for the visitor group 7.7 days, and for the routine care group 7.2 days, suggesting that the simple presence of a visitor made things worse (!?). Curiously, for the subgroup of inpatients, the length of stay was HT 7.4 days, visitor 7.7 days, and routine care 6.8 days, which was non-significant at p=0.26 and suggested that both HT and the visitor made things worse.
The mean decreases in anxiety scores were HT 6.3, visitor 5.8, and control 1.8. They said this was significant at the p=0.01 level. But the tables for results broken down by inpatient and outpatient show no significant differences (p=0.32 for outpatients and p=0.10 for inpatients). If it was not significantly different for either subgroup, how could it be significant for the combined group?
These discrepancies are confusing. They suggest that the significant differences found were due to chance rather than to any real effect of HT.
Problems with this Study
Four out of the six outcomes were negative: there was no change in the use of pain medication, anti-emetic medication, incidence of atrial fibrillation, or functional status. The only two outcomes that were significant were hospital stay and anxiety, and these results are problematic and might have other explanations.
It is impossible to interpret what the difference in length of stay means, because they did not record the reasons for delaying discharge. As far as we can tell from the paper, the doctors deciding when to discharge a patient were not blinded as to which study group the patient was in. It’s interesting that the visitor group length of stay was intermediate in the outpatient subgroup, but higher than control for the combined inpatient/outpatient group. They offer no explanation for this. I was puzzled by the bar graph showing these numbers, because the numbers on the graph don’t seem to match the numbers in the text. The numbers were also transformed: they applied a logarithm transformation to length of stay “to handle the skewness of the raw data.” I don’t understand that and can’t comment. The range of hospital days is such that the confidence intervals largely overlap. In all, these data are not very robust or convincing, and they raise questions.
They interpret the anxiety reduction scores (HT 6.3, visitor 5.8, and control 1.8) as showing a significant efficacy of HT, but it seems more compatible with a placebo response and a slightly better response for the more elaborate placebo.
There were fewer patients (63) in the visitor group than in the HT and control groups (87 each); this was not explained. The comparison of groups appears to show that the control group had significantly higher pre-op anxiety scores than either of the other groups, which would tend to skew the results.
They didn’t use a credible control group. A visitor sitting in the room can’t be compared to a charismatic touchy-feely hand-waving practitioner. Other studies have used mock HT where the hand movements were not accompanied by healing thoughts. These researchers rejected that approach because they didn’t think it would be ethical to offer a sham procedure where the practitioner only “pretended” to help. Hmm… One could argue that they have provided no evidence that HT practitioners are ever doing anything more than pretending to help.
They don’t comment on how practitioners were able to “assess the energy fields” of their patients. Emily Rosa’s landmark study showed that practitioners who claimed to be able to sense those fields couldn’t.
The authors are three RNs (two of them listed as healing touch therapists and presumably the ones who provided treatment in the study), a statistician with an MS, and two “directors of research” for whom no degrees are listed. The authors are clearly prejudiced in favor of HT.
They interpret this study as supporting the efficacy of HT. I don’t think it does that. I think the results are entirely compatible with a placebo response. With any made-up intervention presented with strong suggestion, one could expect to find one or two statistically significant differences when multiple endpoints are evaluated. And the magnitude of the improvement here is far from robust. This is the kind of result that tends to diminish in magnitude or vanish when better controls are used. I think the study is Tooth Fairy science, purporting to study the effects of a non-existent phenomenon, but actually only demonstrating a placebo response.
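To put a rough number on the multiple-endpoints problem: if HT did nothing at all and each of the six endpoints were tested independently at the conventional 0.05 significance level, the chance of at least one spuriously “significant” result would be about 26%. Here is a minimal sketch of that arithmetic; it assumes independent endpoints, which real outcome measures rarely are, so treat it as a ballpark only.

```python
# Chance of at least one false positive among k independent endpoints,
# each tested at significance level alpha, when the true effect is nil.
alpha = 0.05
for k in (1, 2, 6):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k} endpoint(s): {p_any:.0%} chance of a spurious positive")
# With the six endpoints in this study: ~26%, so one or two "hits"
# are unsurprising even for an inert intervention.
```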
I wonder if better results might be obtained by having a patient advocate stay with the patient and offer reassurance, explanations, massage and other comfort measures – something like the doulas who have been shown to improve childbirth outcomes.
The frightening thing is that during the course of this study, prospective patients increasingly bought into the HT belief system and refused to sign up because they wanted HT and didn’t want to risk being assigned to a control group. And hospital staff bought into the belief system, were treated themselves, and became proponents of offering it to patients for other indications.
The paper ends with a rather incoherent statement one would not expect to find in a scientific medical journal: “At the very heart of this study is the movement toward recognizing that the metaphoric and physical heart are both very real, if we allow them to be.”
*This blog post was originally published at Science-Based Medicine*
July 9th, 2009 by Harriet Hall, M.D. in Better Health Network
It’s easy to think of medical tests as black and white. If the test is positive, you have the disease; if it’s negative, you don’t. Even good clinicians sometimes fall into that trap. In reality, a test result only shifts the probability of disease: depending on the pre-test probability, a positive result increases that probability by a variable amount, and a negative result decreases it. An example: if the probability that a patient has a pulmonary embolus (based on symptoms and physical findings) is 10% and you do a D-dimer test, a positive result raises the probability of PE to 17% and a negative result lowers it to 0.2%.
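For readers who want the arithmetic behind that example, here is a minimal sketch of the standard likelihood-ratio calculation. The likelihood ratios used (about 1.8 for a positive result and 0.018 for a negative one) are back-calculated from the post-test figures quoted above, purely for illustration; they are not taken from the original literature.

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability:
    switch to odds, multiply by the likelihood ratio, convert back."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Pre-test probability of pulmonary embolus: 10%.
print(f"{post_test_probability(0.10, 1.8):.2f}")    # ~0.17 after a positive test
print(f"{post_test_probability(0.10, 0.018):.3f}")  # ~0.002 after a negative test
```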
Even something as simple as a throat culture for strep throat can be misleading. It’s possible to have a positive culture because you happen to be an asymptomatic strep carrier, while your current symptoms of fever and sore throat are actually due to a virus. Not to mention all the things that might have gone wrong in the lab: a mix-up of specimens, contamination, inaccurate recording…
Mammography is widely used to screen for breast cancer. Most patients and even some doctors think that if you have a positive mammogram you almost certainly have breast cancer. Not true. A positive result actually means the patient has about a 10% chance of cancer: nine out of ten positives are false positives.
But women don’t just get one mammogram. They get them every year or two. After 3 mammograms, 18% of women will have had a false positive. After ten exams, the rate rises to 49.1%. In a study of 2400 women who had an average of 4 mammograms over a 10 year period, the false positive tests led to 870 outpatient appointments, 539 diagnostic mammograms, 186 ultrasound examinations, 188 biopsies, and 1 hospitalization. There are also concerns about changes in behavior and psychological well-being following false positives.
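Both the single-test and cumulative figures can be reproduced with back-of-the-envelope arithmetic. In the sketch below, the prevalence, sensitivity, specificity, and per-exam false positive rate are illustrative assumptions chosen to land near the numbers quoted above, not data from any particular trial.

```python
# Positive predictive value via Bayes' theorem: of the women who test
# positive, what fraction actually have cancer?
prevalence, sensitivity, specificity = 0.008, 0.90, 0.93  # illustrative
true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
print(f"PPV: {true_pos / (true_pos + false_pos):.0%}")  # ~9%

# Cumulative chance of at least one false positive after n exams,
# treating exams as independent with a ~6.5% false positive rate each.
fp_rate = 0.065
for n in (3, 10):
    print(f"after {n} exams: {1 - (1 - fp_rate) ** n:.0%}")  # ~18%, ~49%
```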
Until recently, no one had looked at the cumulative incidence of false positives from other cancer screening tests. A new study in the Annals of Family Medicine has done just that.
They took advantage of the ongoing Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial to gather their data. In this large controlled trial (over 150,000 subjects), men randomized to screening were offered chest x-rays, flexible sigmoidoscopies, digital rectal examinations and PSA blood tests. Women were offered CA-125 blood tests for cancer antigen, transvaginal sonograms, chest x-rays, and flexible sigmoidoscopies. During the 3-year study period, a total of 14 screening tests were possible for each sex. The subjects didn’t all get every test.
By the 4th screening test, the risk of false positives was 37% for men and 26% for women. By the 14th screening test, 60% of men and 49% of women had had false positives. This led to invasive diagnostic procedures in 29% of men and 22% of women. Three percent were minimally invasive (like endoscopy), 15.8% were moderately invasive (like biopsy), and 1.6% involved major surgical procedures (like hysterectomy). The rate of invasive procedures varied by screening test: 3% of screened women underwent a major surgical procedure for false-positive findings on a transvaginal sonogram.
These numbers do not include non-invasive diagnostic procedures, imaging studies, or office visits. They do not address the psychological impact of false positives. And they do not address the cost of further testing.
These data should not be over-interpreted. They don’t represent the average patient undergoing typical cancer screening in the typical clinic. But they do serve as a wake-up call. Screening tests should be chosen to maximize benefit and minimize harm. Organizations like the U.S. Preventive Services Task Force try to do just that; they frequently re-evaluate any new evidence and offer new recommendations. Data like these on cumulative false positive risks will help them make better decisions than they could make based on previously available single-test false positive rates.
In a post earlier this year, I discussed the pros and cons of PSA screening. Last year, I discussed screening ultrasound exams offered directly to the public (bypassing medical judgment). If you do 20 lab tests on a normal person, statistically one will come back false positive just because of the way “normal” reference ranges are defined. Figuring out which tests to do on a given patient, either for screening or for diagnosis, is far from straightforward.
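That “one in twenty” figure follows directly from the definition: a “normal” reference range is conventionally set to cover the central 95% of healthy people, so each test flags a healthy person about 5% of the time. A quick sketch, assuming (unrealistically) that the tests are independent:

```python
# Each test's reference range covers the central 95% of healthy values,
# so a healthy person "fails" any one test with probability ~0.05.
p_flag, n_tests = 0.05, 20
print(f"expected abnormal results: {n_tests * p_flag:.1f}")             # 1.0
print(f"chance of at least one:    {1 - (1 - p_flag) ** n_tests:.0%}")  # ~64%
```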
This new information doesn’t mean we should abandon cancer screening tests. It does mean we should use them judiciously and be careful not to mislead our patients into thinking they offer more certainty and less risk than they really do.
*This blog post was originally published at Science-Based Medicine*
May 20th, 2009 by Harriet Hall, M.D. in Better Health Network
There is no question that patients on insulin benefit from home monitoring. They need to adjust their insulin dose based on their blood glucose readings to avoid ketoacidosis or insulin shock. But what about patients with non-insulin dependent diabetes, those who are being treated with diet and lifestyle changes or oral medication? Do they benefit from home monitoring? Does it improve their blood glucose levels? Does it make them feel more in control of their disease?
This has been an area of considerable controversy. Various studies have given conflicting results. Those studies have been criticized for various flaws: some were retrospective or non-randomized, some were not designed to rule out confounding factors, some had high drop-out rates, and some enrolled subjects whose diabetes was already well controlled. A systematic review showed no benefit from monitoring. So a new prospective, randomized, controlled, community-based study was designed to help resolve the conflict.
O’Kane et al studied 184 newly diagnosed patients with type 2 diabetes who had never used insulin or had any previous experience with blood glucose monitoring. They were under the age of 70 and recruited from community referrals to hospital outpatient clinics, so they were likely representative of patients commonly seen in practice. They were randomized to monitoring or no monitoring. Patients in the monitoring group were given glucose meters and were instructed in their use and in appropriate responses to high or low readings, such as dietary review or exercise. They were asked to take four fasting and four postprandial readings every week for a year. Patients in the no monitoring group were specifically asked NOT to acquire a glucose monitor or do any kind of self-testing. Otherwise, the two groups were treated alike with diabetes education and an identical treatment algorithm based on HbA1c levels.
Their findings:
We were unable to identify any significant effect of self monitoring over one year on HbA1c, BMI, use of oral hypoglycaemic drugs, or reported incidence of hypoglycaemia. Furthermore, monitoring was associated with a 6% higher score on the well-being depression subscale.
So home monitoring not only did no good but it made patients feel worse. Why? Perhaps because they were constantly reminded that they had a disease and worried when blood glucose levels rose, especially when the recommended responses of dietary review and exercise didn’t rapidly lead to lower readings.
We would not accept the results of one isolated study without replication, but in this case the new study adds significantly to the weight of previous evidence and arguably tips the balance enough to justify a change in practice.
The American Diabetes Association still says “Experts feel that anyone with diabetes can benefit from checking their blood glucose.” But they only recommend blood glucose checks if you have diabetes and are:
• taking insulin or diabetes pills
• on intensive insulin therapy
• pregnant
• having a hard time controlling your blood glucose levels
• having severe low blood glucose levels or ketones from high blood glucose levels
• having low blood glucose levels without the usual warning signs
Diabetes experts see the severe, complicated cases and have a different perspective from that of the family physician seeing mostly mild and uncomplicated cases. An article in American Family Physician said:
Except in patients taking multiple insulin injections, home monitoring of blood glucose levels has questionable utility, especially in relatively well-controlled patients. Its use should be tailored to the needs of the individual patient.
An editorial in the BMJ pointed out that
Home blood glucose monitoring is a big business. The main profit for the manufacturing industry comes from the blood glucose testing strips. Some £90m was spent on testing strips in the United Kingdom in 2001, 40% more than was spent on oral hypoglycaemic agents. New types of meters are usually not subject to the same rigorous evaluation of cost effectiveness, compared with existing models, as new pharmaceutical agents are.
If the scientific evidence supporting the role of home blood glucose monitoring in type 2 diabetes was subject to the same critical evaluation that is applied to new pharmaceutical agents, then it would perhaps not have been approved for use by patients.
Conclusion
Home glucose monitoring in type 2 diabetes is not justified by the evidence. It does not improve outcome, it is expensive, and it may decrease the quality of life of patients.
Common sense suggested that monitoring should improve outcomes, and we assumed it would work. Scientists thought to question that assumption and found a way to test it. New evidence showed the assumption was false, and in response the practice is now being abandoned. This is how science is supposed to work. Another small triumph for science-based medicine.
*This blog post was originally published at Science-Based Medicine*