Skewed Results? Failure to Account for Clinical Trial Drop-Outs Can Lead to Erroneous Findings in Top Medical Journals / BMJ

Unmanned spaceplanes can now stay in orbit for more than a year, then return to earth and make a soft landing at a military base!  Just imagine the accuracy and precision of the measurement and analysis needed to achieve this.  This accomplishment deserves more than a hat-tip to the scientific method.

Yet when it comes to medical research, arguably as important to the well-being of citizens, accuracy and precision are nowhere in sight.  Results of studies are all over the map.  Some researchers best distinguish themselves by the clever ways they can transform any set of results into support for whatever drug, device or vaccine their sponsor wants to sell.  Imagine if they were rocket scientists!

I have previously discussed about ten methods that have been used to squeeze an ill-fitting conclusion from a clinical dataset.  One such method, the handling of subjects “lost to follow-up,” has now been shown to skew the results of as many as one third of clinical trials published in top medical journals, according to an open-access article by Akl EA et al. in the May 18, 2012 BMJ.  Akl’s group examined 235 papers in 5 top medical journals to see whether losses to follow-up were mentioned and how those losses were handled.  Troublingly, over half the authors of the 235 papers (54%) did not respond to the group’s questions about details of their research.

Nineteen per cent of the trials did not mention how losses to follow-up were handled.  Among the papers that did mention them, losses generally amounted to 2-15% of total subjects.

When reasonable assumptions were made about the outcomes of these lost subjects, the conclusions of about one third of the trials could become nonsignificant.  In other words, the positive result would be lost.

From the SUNY Buffalo press release:

“We found that in up to a third of trials, the results that were reported as positive – in other words, statistically significant – would become negative – not statistically significant, if the investigators had appropriately taken into consideration those participants who were lost to follow-up,” says Elie A. Akl, MD, MPH, PhD, lead author, and associate professor of medicine, family medicine and social and preventive medicine at the University at Buffalo School of Medicine and Biomedical Sciences and School of Public Health and Health Professions. He also has an appointment at McMaster University.

“In other words, one of three claims of effectiveness of interventions made in top general medical journals might be wrong,” he says.

In one example, a study that compared two surgical techniques for treating stress urinary incontinence found that one was superior. But in the analysis published this month, it was found that 21 percent of participants were lost to follow-up. “When we reanalyzed that study by taking into account those drop-outs, we found that the trial might have overestimated the superiority of one procedure over the other,” Akl says.

According to Akl, it has always been suspected, but never proven, that loss to follow-up introduces bias into the results of clinical trials. “The methodology we developed allowed us to provide that proof,” he says.

The methodology that he and his coauthors developed consists of sensitivity analyses, a statistical approach to test the robustness of the results of an analysis in the face of specific assumptions, in this case, assumptions about the outcomes of patients lost to follow-up.
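To make the idea concrete, here is a minimal sketch of that kind of sensitivity analysis. The trial numbers are entirely made up for illustration (they are not from the Akl study), and the significance test is a plain two-proportion z-test implemented with only the standard library. The principle is the one described above: first analyze only the subjects who completed follow-up, then re-run the test under an unfavorable assumption about the drop-outs and see whether the "positive" result survives.

```python
import math

def two_prop_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test; returns z statistic and two-sided p-value."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF (via the error function).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical trial: 100 subjects randomized per arm, 10 lost
# to follow-up in each arm (numbers invented for illustration).
n_arm = 100
lost_treat, lost_ctrl = 10, 10
cured_treat, cured_ctrl = 60, 45  # successes among those followed

# Complete-case analysis: the drop-outs are simply ignored.
z1, p1 = two_prop_z(cured_treat, n_arm - lost_treat,
                    cured_ctrl, n_arm - lost_ctrl)

# Sensitivity analysis under one unfavorable assumption: every
# drop-out in the treatment arm failed, every drop-out in the
# control arm succeeded, with all randomized subjects in the
# denominators.
z2, p2 = two_prop_z(cured_treat, n_arm,
                    cured_ctrl + lost_ctrl, n_arm)

print(f"complete-case analysis:    p = {p1:.3f}")
print(f"unfavorable LTFU scenario: p = {p2:.3f}")
```

With these invented numbers the complete-case analysis is statistically significant at the 0.05 level, but the result does not survive the unfavorable assumption about the drop-outs, which is exactly the kind of reversal the authors report finding in up to a third of published trials. The Akl paper's actual assumptions are more nuanced than this worst-case sketch.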
