A new study published in PLOS Biology looks at the potential magnitude and effect of publication bias in animal trials. Essentially, the authors conclude that there is a significant file drawer effect (the failure to publish negative studies) in animal research, and that this impacts the translation of animal research to human clinical trials.
SBM is greatly concerned with the methodology of medical science. On one level, the methods of individual studies need to be closely analyzed for rigor and bias. But we also take great pains to dispel the myth that individual studies can tell us much about the practice of medicine.
Reliable conclusions come from interpreting the literature as a whole, not just individual studies. Further, the whole of the literature is greater than the sum of individual studies: there are patterns and effects in the literature itself that need to be considered.
One big effect is the file drawer effect, or publication bias: the tendency to publish positive studies more often than negative ones. A study showing that a treatment works, or has potential, is often seen as doing more for the reputation of a journal and the careers of the scientists than a negative study. So studies with no measurable effect tend to languish unpublished.
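The distorting power of the file drawer can be sketched with a small Monte Carlo simulation (a hypothetical illustration, not data from the study under discussion): simulate many trials of a treatment that truly does nothing, "publish" only the trials that reach nominal significance, and compare the published literature's average effect to the truth.

```python
import random
import statistics

random.seed(1)

def run_trial(true_effect=0.0, n=30):
    """One two-arm trial: returns observed mean difference and a crude z statistic."""
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    control = [random.gauss(0.0, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.stdev(treated) ** 2 / n + statistics.stdev(control) ** 2 / n) ** 0.5
    return diff, diff / se

trials = [run_trial() for _ in range(1000)]

# The file drawer effect: only "positive" (nominally significant) trials get published.
published = [diff for diff, z in trials if z > 1.96]

print("true effect:                 0.000")
print(f"mean effect, all trials:    {statistics.mean(d for d, _ in trials):+.3f}")
print(f"mean effect, published only: {statistics.mean(published):+.3f}")
```

Even though the simulated treatment does nothing, the "published" subset shows a sizeable positive average effect. This is exactly why a meta-analysis that sees only the published literature will overestimate efficacy.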
*This blog post was originally published at Science-Based Medicine*
This post is a follow-up to my book review of Bad Science, located here.
I couldn’t help but feel unusually depressed by Dr. Ben Goldacre’s exposé of researchers who resort to trickery to get their articles published in peer-reviewed journals. There are a number of ways to manipulate data and many ways that flawed research is presented to enhance its chances of publication.
Before we get started, I need to point out that “negative trials” – research results that don’t corroborate the investigator’s original hypotheses – are much less likely to be published. People (and/or publishers) are far more interested in finding a needle in a haystack than hearing that no needle could be found. This is a driving force behind all manner of mathematical convolutions aimed at demonstrating something interesting enough to warrant publication. After all, who can blame the researchers for wanting to get their research published, and to have it make a meaningful contribution to their field of study? Who wants to toil for months or years on end, only to discover that their hypotheses were not borne out by experimentation, and that in fact no helpful conclusions can be drawn whatsoever?
And so, with this intense pressure to find something meaningful in one’s research (whether for profit, personal satisfaction, or professional advancement), there are some typical strategies that researchers use to make something out of nothing. Ben Goldacre reviews these strategies in the voice of an unscrupulous senior pharmaceutical investigator giving advice to his junior colleague. (Parenthetically, it reminded me of The Screwtape Letters, an amusing book by C.S. Lewis featuring the imaginary advice of a senior demon to his junior counterpart as they tempt humans to evil.)
(This passage is taken directly from pages 192-193 of Bad Science)
1. Ignore the protocol entirely. Always assume that any correlation proves causation. Throw all your data into a spreadsheet programme and report – as significant – any relationship between anything and everything if it helps your case. If you measure enough, some things are bound to be positive by sheer luck.
2. Play with the baseline. Sometimes, when you start a trial, quite by chance the treatment group is already doing better than the placebo group. If so, then leave it like that. If, on the other hand, the placebo group is already doing better than the treatment group at the start, then adjust for the baseline in your analysis.
3. Ignore dropouts. People who drop out of trials are statistically much more likely to have done badly, and much more likely to have had side effects. They will only make your drug look bad. So ignore them, make no attempt to chase them up, do not include them in your final analysis.
4. Clean up the data. Look at your graphs. There will be some anomalous ‘outliers,’ or points which lie a long way from the others. If they are making your drug look bad, just delete them. But if they are helping your drug look good, even if they seem to be spurious results, leave them in.
5. The best of five… no… seven… no… nine! If the difference between your drug and placebo becomes significant four and a half months into a six-month trial, stop the trial immediately and start writing up the results: things might get less impressive if you carry on. Alternatively, if at six months the results are ‘nearly significant,’ extend the trial by another three months.
6. Torture the data. If your results are bad, ask the computer to go back and see if any particular subgroups behaved differently. You might find that your drug works very well in Chinese women aged fifty-two to sixty-one. ‘Torture the data and it will confess to anything,’ as they say at Guantanamo Bay.
7. Try every button on the computer. If you’re really desperate, and analysing your data the way you planned doesn’t give you the results you wanted, just run the figures through a wide selection of other statistical tests, even if they are entirely inappropriate, at random.
8. Publish wisely. If you have a good trial, publish it in the biggest journal you can possibly manage. If you have a positive trial, but it was a completely unfair test, then put it in an obscure journal… and hope that readers are not attentive enough to read beyond the abstract to recognize its flaws.
9. Do not publish. If your finding is really embarrassing, hide it away somewhere and cite ‘data on file.’ Nobody will know the methods, and it will only be noticed if someone comes pestering you for the data to do a systematic review. Hopefully that won’t be for ages.
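Several of these tricks (numbers 1, 6, and 7 especially) boil down to the same statistical sin: multiple comparisons. Run enough tests on pure noise and some will cross the significance threshold by chance alone. A minimal sketch (hypothetical numbers, not an example from the book):

```python
import random
import statistics

random.seed(0)

def noise_comparison(n=50):
    """Compare two groups of pure noise; True if 'significant' at the nominal p < 0.05 level."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.stdev(a) ** 2 / n + statistics.stdev(b) ** 2 / n) ** 0.5
    return abs(diff / se) > 1.96

# "Torture the data": e.g. 200 subgroup-by-outcome combinations, all pure noise.
n_tests = 200
hits = sum(noise_comparison() for _ in range(n_tests))
print(f"{hits} of {n_tests} comparisons of pure noise came out 'significant'")
```

With a roughly 5% false-positive rate per test, on the order of ten of the 200 noise comparisons will look "significant," so a drug that seems to work "very well in Chinese women aged fifty-two to sixty-one" may be nothing more than this arithmetic. Pre-registered protocols and multiple-comparison corrections (such as Bonferroni) exist precisely to block these moves.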