Medical studies are seriously biased by interested funders and by tolerance for sloppy methods. Here are four examples.
1. A recent PLoS Medicine article looked at 111 studies of soft drinks, juice, and milk that cited funding sources:
22% had all industry funding, 47% had no industry funding, and 32% had mixed funding. … the proportion with unfavorable [to industry] conclusions was 0% for all industry funding versus 37% for no industry funding.
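To get a rough sense of scale, the quoted percentages can be turned back into approximate study counts (the percentages are rounded, which is why they sum to 101%). These counts are my back-of-the-envelope estimates, not figures from the paper:

```python
# Back-of-the-envelope reconstruction from the quoted percentages.
# The percentages are rounded (they sum to 101%), so counts are approximate.
total = 111
funding_shares = {"all industry": 0.22, "no industry": 0.47, "mixed": 0.32}
counts = {k: round(total * share) for k, share in funding_shares.items()}
print(counts)  # {'all industry': 24, 'no industry': 52, 'mixed': 36}

# Unfavorable-to-industry conclusions: 0% of ~24 vs 37% of ~52 studies.
print(round(counts["all industry"] * 0.00))  # 0 studies
print(round(counts["no industry"] * 0.37))   # ~19 studies
```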
2. Last February the Canadian Medical Association Journal reported that in 487 studies, those whose methods left more room for fudging "found" higher accuracy of diagnostic tests:
The quality of reporting was poor in most of the studies. We found significantly higher estimates of diagnostic accuracy in studies with nonconsecutive inclusion of patients … and retrospective data collection … Studies that selected patients based on whether they had been referred for the index test, rather than on clinical symptoms, produced significantly lower estimates …
3. In 1995, the Journal of the American Medical Association reported that of 250 studies of treatments, those with easier fudging similarly "found" stronger effects:
Compared with trials in which authors reported adequately concealed treatment allocation, … Odds ratios were exaggerated by 41% for inadequately concealed trials and by 30% for unclearly concealed trials … Trials that were not double-blind also yielded … odds ratios being exaggerated by 17% …
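A note on what those percentages mean, as I read them: in these trials an odds ratio below 1 favors the treatment, and "exaggerated by 41%" is usually read as a ratio of odds ratios of 0.59, i.e., poorly concealed trials reported odds ratios that were on average 41% lower (more treatment-favorable). Here is a small sketch with a hypothetical true effect:

```python
# Illustration with a HYPOTHETICAL true odds ratio (OR < 1 favors treatment).
# "Exaggerated by 41%" is read here as: reported OR ~= true OR * (1 - 0.41).
def reported_or(true_or: float, exaggeration: float) -> float:
    """Average reported odds ratio after relative exaggeration."""
    return true_or * (1.0 - exaggeration)

TRUE_OR = 0.80  # hypothetical: a modest real benefit
for label, exag in [("inadequately concealed", 0.41),
                    ("unclearly concealed", 0.30),
                    ("not double-blind", 0.17)]:
    print(f"{label}: reported OR ~= {reported_or(TRUE_OR, exag):.2f}")
# Output: 0.47, 0.56, 0.66. A modest true benefit of 0.80 can look
# nearly twice as strong under inadequate concealment.
```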
4. In 2005, the Journal of the American Medical Association found that of medical studies published since 1990 and cited 1,000 times or more, roughly 1/3 were contradicted or weakened by later studies, and 1/4 remained largely unchallenged:
Of 49 highly cited original clinical research studies, 45 claimed that the intervention was effective. Of these, 7 (16%) were contradicted by subsequent studies, 7 others (16%) had found effects that were stronger than those of subsequent studies, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged. Five of 6 highly-cited nonrandomized studies had been contradicted or had found stronger effects vs 9 of 39 randomized controlled trials (P = .008).
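That P = .008 is recoverable from the quote itself via a Fisher exact test on the implied 2×2 table (5 of 6 nonrandomized studies contradicted or weakened, vs 9 of 39 randomized trials). A quick check, assuming SciPy is available; the table layout is inferred from the quote, not copied from the paper:

```python
# Sanity-check the quoted P = .008 with a Fisher exact test.
# Rows: nonrandomized vs randomized studies.
# Cols: contradicted-or-stronger-effects vs held up (inferred from the quote).
from scipy.stats import fisher_exact

table = [[5, 1],    # nonrandomized: 5 of 6 contradicted or weakened
         [9, 30]]   # randomized:    9 of 39
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio ~= {odds_ratio:.1f}, p = {p_value:.3f}")  # p = 0.008
```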
The obvious question is: how can we produce medical estimates that correct for such biases? And why don’t we?
I'm sorry, were you laboring under the impression that the actual practice of medicine by physicians is based on science, evidence, and empirical thinking? That would be a revolution. In practice, what a doc does to determine how to treat a patient does not involve a hardcore look into the data or results of the clinical studies. I mean, there's some communication pipeline that tells docs how they should diagnose things and treat things under certain circumstances, but it is not a critical lens around the clinical trials. Did you really think that that happens on a large scale? Are you joking? Maybe one day.
Nope, I know it doesn't happen on a large scale. It doesn't even happen in the academic medical center where I practice, at least much of the time; just read the article quoted in the other EBM thread about Dan Merenstein if you have any doubts. But since my job is to convince medical students that they need to practice that way, all I can do is point them in the right direction: introduce them to evidence-based clinical guidelines, and teach them how to read critically and be skeptical.

The fact is, right or wrong, that variability in practice may be disappearing for the wrong reasons: managed care and cost control. But decreasing the variability of practice patterns, using the Dartmouth index effectively, and paying attention to the current literature will improve the quality of medical care. That's the "one day" I'm aiming for, and I do believe the situation has improved a great deal in the 25 years I've been teaching.

In my course we use small-group teaching for half the content, reading current articles. Among my small-group teachers are the head of the lung transplant program, the division chief of GI, a former dean of the medical school, the department chair of family medicine, and assorted hematologists, rheumatologists, a urologist, and a medicine chief resident. Not one of these teachers has had formal training in EBM; they are all committed to critical reading of the literature and to implementing it in their practice. That has an effect on students regarding the relevance of the content. I frequently have graduates come up and tell me that they have continued reading in their practices. I think things are changing.