Trial By Error: So What’s Happening with the MAGENTA Trial?

By David Tuller, DrPH

I'll be in Bristol later this week for the CFS/ME Research Collaborative's annual conference. I was not welcome last year, since at that point I was harshly criticizing the organization for its unwillingness to acknowledge that its deputy chair had falsely accused me of libel. This year, things have changed, and both the chair and the new deputy chair have graciously welcomed me.

In any event, given the proximity of the conference to the University of Bristol, it seemed like a good time to take another look at some of the problematic research conducted on children at that august institution. In my past scrutiny of this work, I have focused on a school absence study, a study of the Lightning Process, the ongoing study of online CBT called FITNET-NHS, and various prevalence studies, all of them deeply flawed.

I haven't paid much, if any, attention to the ongoing MAGENTA trial. (Full name: Managed Activity Graded Exercise in Teenagers and Pre-Adolescents.) But commenters on the Science for ME forum recently noted that the MAGENTA investigators had folded a feasibility trial into the full trial, a strategy similar to the one pursued by the investigators of the Lightning Process study. This prompted me to take a look.

The MAGENTA trial was designed to test graded exercise therapy (GET) against activity management as a treatment for kids. (Frankly, the descriptions of the two interventions do not sound all that different to me, except that the first focuses on increasing exercise and the second on increasing overall activity by a similar amount.) It goes without saying that MAGENTA suffers from a major design flaw shared by so many other studies in this field: it is an open-label trial relying solely on subjective or self-reported outcomes, so any results will be so fraught with potential bias as to be uninterpretable.

Before conducting a full-scale study, the MAGENTA investigators decided to conduct a so-called feasibility trial. Among the aims, as reported in the feasibility trial protocol: “To ascertain the feasibility and acceptability of conducting an RCT to investigate the effectiveness and cost-effectiveness of GET compared with activity management for the treatment of CFS/ME in children. We will use the information to inform the design of a full-scale, adequately powered trial.”

According to this protocol, the feasibility trial began in September 2015. (The protocol was published in 2016.) Yet when the feasibility trial ended, the investigators did not design a new trial. Instead, in March of 2017, they sought, and received, research ethics committee (REC) approval to extend this feasibility study into a full trial. After that, the initial trial registration was updated to include information about the full trial. The feasibility trial included 100 participants; the investigators sought to add another 122, for a total of 222 in the full trial.

In the feasibility trial protocol, the designated primary outcome was an assessment of the feasibility and acceptability of a full trial. The protocol included lists of the data and questionnaires the investigators planned to collect but did not designate any of them as primary or secondary outcomes for assessing treatment efficacy. Almost all the outcomes were self-reported. School attendance was listed, but it was self-reported attendance, which is subject to bias in a way that official school attendance records are not.

Only after collecting data for the feasibility study did the investigators designate which measures were the primary and secondary outcomes for the full trial. Because they were folding their feasibility study participants into the larger sample, they were able to prioritize outcome measures based on actual data from the trial sample, an excellent way to bias the reported findings. In this case, the investigators designated physical function at six months as the primary outcome measure after almost half the full study sample (100 of the planned 222 participants) had already provided data.

Physical function might seem like an obvious choice for primary outcome, since it has served that role in other studies of this illness. However, it was not the only candidate here; fatigue might have been selected, for example, or one of the other scales. In the feasibility trial of the Lightning Process, school attendance at six months was the primary outcome, although it was demoted to secondary outcome status after the investigators reviewed the feasibility study data. It is certainly possible, perhaps even likely, that physical function was selected because, per the feasibility study findings, it generated the most positive results of the various available options.

Moreover, in between the feasibility trial and the full trial, the investigators seem to have dropped MAGENTA's only objective measure: levels of physical activity assessed by accelerometers worn for a week. In the feasibility trial protocol, the investigators noted that accelerometers had been “shown to provide reliable indicators of physical activity among children and adults.” Yet the trial registration does not mention accelerometers, and the reason for their absence from the full trial goes unexplained.

Presumably, the investigators found their use as an outcome measure to be either infeasible or unacceptable. MAGENTA is therefore left without a single objective outcome measure. It is worth noting that, in previous studies of this illness, objective measurements of physical activity have failed to corroborate the positive outcomes on self-reported measures. Moreover, investigators in this field have routinely ignored these objective findings and have highlighted instead the better-looking subjective results. In fact, PACE itself dropped the use of similar devices after Dutch researchers found the results did not support claims of improvement. So the MAGENTA investigators' decision to disappear their sole objective measure is not too surprising, whatever their reasons.

In the trial registration, the study is now wrongly labeled as prospective. Perhaps it was prospective in 2015, when the trial was first registered. But if you designate your primary and secondary outcome measures, or drop outcome measures, only after almost half your sample has provided data, you cannot legitimately call the overall study prospective. An essential feature of a prospective study is that primary and secondary outcome measures are designated in advance. That did not happen in the MAGENTA trial.

What is going on with the regional REC? Why are its members failing so completely in their oversight function? This is presumably the same REC involved in the other egregious studies from the university. The committee's self-evident incompetence and its lack of professional understanding of what is required for research to be conducted in an ethical and appropriate fashion are shocking.

In the Lightning Process paper, as I have documented, the investigators similarly received REC permission to extend a feasibility trial while swapping primary and secondary outcomes. The published paper in Archives of Disease in Childhood failed to disclose that the outcome measures were swapped after more than half of the study sample had provided data. The journal has posted an opaque notice about these missteps, whose inadequacy I have previously discussed. But the journal's editor and Fiona Godlee, BMJ's editorial director, have so far failed to fully resolve the issue or take responsibility for publishing this obviously deficient paper in the first place.

Beyond this, commenters on the Science for ME forum also noticed something odd about MAGENTA: last month, the trial registration was updated again. This time, the start date of the trial was backdated by more than two years, from September 2015 to January 2013. Huh? The change is not explained, so its meaning or significance is unclear. But it is certainly odd. Is it possible the investigators did not realize, at the time of the initial registration in 2015, that their trial had actually started more than two years earlier?

The MAGENTA trial is expected to finish sometime next year, with publications undoubtedly to follow. But we already know that whatever the results, they will be rife with bias and unable to provide any useful information about treatment options. That this kind of nonsense receives UK taxpayer funding and gets to pose as legitimate research represents a serious breakdown of academic, ethical and financial accountability standards.
