By David Tuller, DrPH
In a well-designed clinical trial, the protocol, the registration and the statistical analysis plan should complement, not contradict, each other. Investigators spend huge amounts of time developing clinical trial protocols. These are road maps to the project, complete with (hopefully) well-thought-out and clearly defined primary and secondary outcomes. These documents have to pass muster with oversight and ethics committees and often go through multiple iterations before final approval, and funding.
Before recruiting patients, investigators are supposed to open an entry for their clinical trial in a recognized registry. This trial registration is expected to include the same primary and secondary outcomes as in the protocol. When the investigators draw up a statistical analysis plan, it is supposed to explain in greater detail how the data for the primary and secondary outcomes listed in both the protocol and trial registration are to be analyzed.
It goes without saying that the published report of any clinical trial is expected to adhere to its own predesignated primary and secondary outcomes, as laid out in the protocol, registration, and statistical analysis plan, unless the investigators have obtained permission from the appropriate oversight committees to make changes after providing adequate justification. Any such changes should be disclosed in the published account.
These practices are critical to prevent what is called selective outcome reporting, in which investigators cherry-pick among the available measures to report the most attractive-looking results. All major medical journals profess fealty to these standards.
**********
On Saturday, I wrote about a study published recently by BMJ Paediatrics Open called “Cognitive–behavioural therapy combined with music therapy for chronic fatigue following Epstein-Barr virus infection in adolescents: a feasibility study.” After the journal posted the study, it generated a lively discussion on the Science For ME forum. Some of the very smart people there noted issues of concern, including some divergence between the trial registration and the reported outcomes.
The study was part of a larger Norwegian research project called Chronic Fatigue Following Acute Epstein-Barr Virus Infection in Adolescents, or CEBA. Its main element was a prospective study tracking 200 adolescents to assess factors implicated in the prolonged fatigue that can follow the acute viral illness. A second element was a randomized trial of an intervention combining cognitive behavior therapy and music therapy for the adolescents who experienced this prolonged fatigue.
In my post, I noted that the trial registration did not include recovery and post-exertional malaise as outcomes, despite the investigators’ decision to highlight these measures in suggesting that further research might be warranted. I also noted that, in their conclusions, the authors sought to disappear the terrible findings for the primary outcome, or at least to ignore their implications.
I have since examined the protocol and the statistical analysis plan, as found on this page from Akershus University Hospital, with which the lead and senior authors are affiliated. The page also includes another protocol and statistical analysis plan, which are for the larger prospective study.
None of these study documents suggests that the clinical trial aspect of the research was set up as a feasibility study. The stated trial outcomes do not involve issues of feasibility. Instead, the study appears to have been launched as a small but full-scale randomized trial, presumably with the notion that the results might inform clinical practice. Unfortunately, recruitment fell below the expected levels and the results were less than hoped for. Given what might be viewed as a failed randomized trial, the investigators seem to have switched gears.
Hm. As I documented with the Lightning Process study published in another BMJ journal in 2017, the ethically and methodologically challenged research team from the University of Bristol turned their feasibility study into a full-scale trial, breaking all sorts of rules in the process. In this case, the investigators appear to have engineered the reverse. They started off with an actual randomized trial that they’ve now demoted to a feasibility study. Interesting strategy!
**********
Now let’s take a look at the statements about endpoints. The documents posted on the web page include, in this order, “Research protocol,” “Research protocol, processing,” “Statistical analysis plan,” and “Statistical analysis plan part 2.” The second protocol and the second statistical analysis plan are the ones for the feasibility stu… oops!… for the randomized trial.
“Protocol-processing” includes this statement: “The primary end-point in the present study is patients’ functional capacity, operationalized as mean steps/day count during a seven day period after 12 weeks of mental intervention.” It also mentions “recovery,” noting that “we define recovery as a dichotomized Chalder fatigue score < 4; fatigue score is a secondary endpoint in the present study.” The protocol does not mention post-exertional malaise as an outcome.
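To make those two definitions concrete, here is a minimal sketch, in Python, of how such endpoints might be computed from raw data. This is my own illustration, not the CEBA team’s analysis code; the function names and sample numbers are hypothetical.

```python
# Illustrative only: a hypothetical computation of the two endpoints quoted
# above, assuming seven daily step counts per patient (from an activity
# monitor) and a total Chalder fatigue score.

def mean_daily_steps(daily_steps):
    """Primary endpoint: mean steps/day over a seven-day monitoring period."""
    if len(daily_steps) != 7:
        raise ValueError("expected seven daily step counts")
    return sum(daily_steps) / len(daily_steps)

def is_recovered(chalder_score):
    """'Recovery' per the protocol: dichotomized Chalder fatigue score < 4."""
    return chalder_score < 4

print(mean_daily_steps([8200, 7600, 9100, 5400, 6800, 7200, 8000]))  # ~7471.4
print(is_recovered(3))  # True
print(is_recovered(4))  # False
```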
“Statistical analysis plan 2” highlights the same single primary outcome. But the plan, unlike the protocol, does not mention recovery. This is perplexing, since statistical analysis plans are supposed to elaborate on the methods to be used to assess the data. Given its absence from both the trial registration and the statistical analysis plan, the status of recovery as a predesignated outcome measure is somewhat ambiguous. The statistical analysis plan also does not include post-exertional malaise as an outcome.
The inconsistent information about outcomes in core documents, especially given how the outcomes are presented in the published study, does not inspire confidence in the integrity of the research. And as noted in my previous post, the investigators seem to have gone out of their way to bury the bad news about the undisputed primary outcome.
**********
So let’s recap.
A group of investigators launch a randomized trial. Recruitment falls below expectations, and the intervention arm experiences a high attrition rate, complicating the analysis of the data. Moreover, the results for the primary outcome are disastrous. The investigators downplay these inconvenient results and focus on two other endpoints, neither of which was included in the trial registration or the statistical analysis plan (although one made a cameo in the protocol). The investigators then publish their small randomized trial as a feasibility study for a new and bigger randomized trial, without disclosing in the paper that it did not start out as a feasibility study.
I’m making a wild guess here, but I don’t think this is all OK. These researchers, and the editors at BMJ Paediatrics Open, might have some explaining to do.
More to come…