By David Tuller, DrPH
David Tuller is academic coordinator of the concurrent master's degree program in public health and journalism at the University of California, Berkeley.
A few years ago, Dr. Racaniello let me hijack this space for a long piece about the CDC's persistent incompetence in its efforts to address the devastating illness the agency itself had misnamed "chronic fatigue syndrome." Now I'm back with an even longer piece about the U.K.'s controversial and highly influential PACE trial. The $8 million study, funded by British government agencies, purportedly proved that patients could "recover" from the illness through treatment with one of two rehabilitative, non-pharmacological interventions: graded exercise therapy, involving a gradual increase in activity, and a specialized form of cognitive behavior therapy. The main authors, a well-established group of British mental health professionals, published their first results in The Lancet in 2011, with additional results in subsequent papers.
Much of what I report here will not be news to the patient and advocacy communities, which have produced a voluminous online archive of critical commentary on the PACE trial. I could not have written this piece without the benefit of that research and the help of a few statistics-savvy sources who talked me through their complicated findings. I am also indebted to colleagues and friends in both public health and journalism, who provided valuable suggestions and advice on earlier drafts. Today's Virology Blog installment is the final quarter; the first and second installments were published previously. I was originally working on this piece with Retraction Watch, but we could not ultimately agree on the direction and approach.
This examination of the PACE trial of chronic fatigue syndrome identified several major flaws:
*The study included a bizarre paradox: participants' baseline scores for the two primary outcomes of physical function and fatigue could qualify them simultaneously as disabled enough to get into the trial but already "recovered" on those indicators – even before any treatment. In fact, 13 percent of the study sample was already "recovered" on one of these two measures at the start of the study.
*In the middle of the study, the PACE team published a newsletter for participants that included glowing testimonials from earlier trial subjects about how much the "therapy" and "treatment" helped them. The newsletter also included an article informing participants that the two interventions pioneered by the investigators and being tested for efficacy in the trial, graded exercise therapy and cognitive behavior therapy, had been recommended as treatments by a U.K. government committee "based on the best available evidence." The newsletter article did not mention that a key PACE investigator was also serving on the U.K. government committee that endorsed the PACE therapies.
*The PACE team changed all the methods outlined in its protocol for assessing the primary outcomes of physical function and fatigue, but did not take necessary steps, such as conducting sensitivity analyses, to demonstrate that the revised methods and findings were robust. The researchers also relaxed all four of the criteria outlined in the protocol for defining "recovery." They have rejected, as "vexatious," requests from patients for the findings as originally promised in the protocol.
*The PACE claims of successful treatment and "recovery" were based solely on subjective outcomes. All the objective measures from the trial – a walking test, a step test, and data on employment and the receipt of financial benefits – failed to provide any evidence to support such claims. Afterwards, the PACE authors dismissed their own main objective measures as non-objective, irrelevant, or unreliable.
*In seeking informed consent, the PACE authors violated their own protocol, which included an explicit commitment to tell prospective participants about any possible conflicts of interest. The main investigators have had longstanding financial and consulting ties with disability insurance companies, having advised them for years that cognitive behavior therapy and graded exercise therapy could get claimants off benefits and back to work. Yet prospective participants were not told about any insurance industry links and the information was not included on consent forms. The authors did include the information in the "conflicts of interest" sections of the published papers.
Top researchers who have reviewed the study say it is fraught with indefensible methodological problems. Here is a sampling of their comments:
Dr. Bruce Levin, Columbia University: "To let participants know that interventions have been selected by a government committee 'based on the best available evidence' strikes me as the height of clinical trial amateurism."
Dr. Ronald Davis, Stanford University: "I'm shocked that the Lancet published it…The PACE study has so many flaws and there are so many questions you'd want to ask about it that I don't understand how it got through any kind of peer review."
Dr. Arthur Reingold, University of California, Berkeley: "Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order."
Dr. Jonathan Edwards, University College London: "It's a mass of un-interpretability to me…All the issues with the trial are extremely worrying, making interpretation of the clinical significance of the findings more or less impossible."
Dr. Leonard Jason, DePaul University: "The PACE authors should have reduced the kind of blatant methodological lapses that can impugn the credibility of the research, such as having overlapping recovery and entry/disability criteria."
The Publication Aftermath
Publication of the paper triggered what The Lancet described in an editorial as "an outpouring of consternation and condemnation from individuals or groups outside our usual reach." Patients expressed frustration and dismay that once again they were being told to exercise and seek psychotherapy. They were angry as well that the paper ignored the substantial evidence pointing to patients' underlying biological abnormalities.
Even Action For ME, the organization that developed the adaptive pacing therapy with the PACE investigators, declared in a statement that it was "surprised and disappointed" at "the exaggerated claims" being made about the rehabilitative therapies. And the findings that the treatments did not cause relapses, noted Peter Spencer, Action For ME's chief executive officer, in the statement, "contradict the considerable evidence of our own surveys and those of other patient groups."
Many believed the use of the broad Oxford criteria helped explain some of the reported benefits and lack of adverse effects. Although people with psychosis, bipolar disorder, substance "misuse," organic brain disorder, or an eating disorder were screened out of the PACE sample, 47 percent of the participants were nonetheless diagnosed with "mood and anxiety disorders," including depression. As DePaul psychologist Leonard Jason had noted, cognitive and behavioral interventions have proven successful with people suffering from primary depression; moreover, the increased activity was unlikely to harm such participants if they did not also experience the core ME/CFS symptom of post-exertional malaise.
Others, like Tom Kindlon, speculated that many of the patients in the two rehabilitative arms, even if they had reported subjective improvements, might not have significantly increased their levels of exertion. To bolster this argument, he noted the poor results from the six-minute walking test, which suggested little or no improvement in physical functioning.
"If participants did not follow the directives and did not gradually increase their total activity levels, they might not suffer the relapses and flare-ups that patients sometimes report with these approaches," said Kindlon.
During an Australian radio interview, Lancet editor Richard Horton denounced what he called the "orchestrated response" from patients, based on "the flimsiest and most unfair allegations," seeking to undermine the credibility of the research and the researchers. "One sees a fairly small, but highly organized, very vocal and very damaging group of individuals who have, I would say, actually hijacked this agenda and distorted the debate so that it actually harms the overwhelming majority of patients," he said.
In fact, he added, "what the investigators did scrupulously was to look at chronic fatigue syndrome from an utterly impartial perspective."
In explaining The Lancet's decision to publish the results, Horton told the interviewer that the paper had undergone "endless rounds of peer review." Yet the ScienceDirect database version of the article indicated that The Lancet had "fast-tracked" it to publication. According to current Lancet policy, a standard fast-tracked article is published within four weeks of receipt of the manuscript.
Michael Sharpe, one of the lead investigators, also participated in the Australian radio interview. In response to a question from the host, he acknowledged that only one in seven participants received a "clinically important treatment benefit" from the rehabilitative therapies of graded exercise therapy and cognitive behavior therapy, a key data point not mentioned in the Lancet paper.
"What this trial isn't able to answer is how much better are these treatments than really not having very much treatment at all," Sharpe told the radio host in what might have been an unguarded moment, given that the U.K. government had spent five million pounds on the PACE study to find out the answer. Sharpe's statement also appeared to contradict the effusive "recovery" and "back-to-normal" news stories that had greeted the reported findings.
In correspondence published three months after the trial results appeared, the PACE authors gave no ground. In response to complaints about changes from the protocol, they wrote that the mid-trial revisions "were made to improve either recruitment or interpretability" and "were approved by the Trial Steering Committee, were fully reported in our paper, and were made before examining outcome data to avoid outcome reporting bias." They did not mention whether, since it was an unblinded trial, they already had a general sense of outcome trends even before examining the actual outcome data. And they did not explain why they did not conduct sensitivity analyses to measure the impact of the protocol changes.
They defended their post-hoc "normal ranges" for fatigue and physical function as having been calculated through the "conventional" statistical formula of taking the mean plus/minus one standard deviation. As in the Lancet paper itself, however, they did not mention or explain the unusual overlaps between the entry criteria for disability and the outcome criteria for being within the "normal range." And they did not explain why they used this "conventional" method for determining normal ranges when their two population-based data sources did not have normal distributions, a problem White himself had acknowledged in his 2007 study.
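The statistical objection is easy to demonstrate. The sketch below simulates a hypothetical ceiling-heavy score distribution (every number is invented for illustration, not the PACE source data) and shows how the "conventional" mean minus one standard deviation lands far below the typical score when a sample is skewed:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical SF-36-style physical-function scores (0-100) for a general
# adult population: most respondents cluster near the 100-point ceiling,
# with a long lower tail of older and chronically ill people. Every number
# here is an invented assumption, not the PACE trial's source data.
healthy = np.full(8000, 95.0) + rng.integers(-5, 6, size=8000)  # ceiling cluster
impaired = rng.uniform(0, 90, size=2000)                        # long lower tail
scores = np.clip(np.concatenate([healthy, impaired]), 0, 100)

mean, sd = scores.mean(), scores.std()
threshold = mean - sd  # the "conventional" mean minus one SD lower bound

# In a normal distribution, mean - 1 SD sits just below the typical score;
# in this skewed sample it lands dozens of points below the median.
print(f"median score: {np.median(scores):.0f}")
print(f"mean - 1 SD threshold: {threshold:.0f}")
```

The long lower tail simultaneously drags the mean down and inflates the standard deviation, so the resulting "normal range" admits levels of function that the typical healthy respondent far exceeds.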
The authors clarified that the Lancet paper had not discussed "recovery" at all; they promised to address that issue in a future publication. But they did not explain why Chalder, at the press conference, had declared that patients got "back to normal."
They also did not explain why they had not objected to the claim in the accompanying commentary, written by their colleagues and discussed with them pre-publication, that 30 percent of participants in the rehabilitative arms had achieved "recovery" based on a "strict criterion," especially since that "strict criterion" allowed participants to get worse and still be "recovered." Finally, they did not explain why, if the paper was not about "recovery," they had not issued public statements to correct the apparently inaccurate news coverage that had reported how study participants in the graded exercise therapy and cognitive behavior therapy arms had "recovered" and gotten "back to normal."
The authors acknowledged one error. They had described their source for the "normal range" for physical function as a "working-age" population rather than what it actually was – an "adult" population. (Unlike a "working-age" population, an "adult" population includes elderly people and is therefore less healthy. Had the PACE participants' scores on the SF-36 physical function scale actually been compared to the SF-36 responses of the working-age subset of the adult population used as the source for the "normal range," the percentages achieving the "normal range" threshold of this healthier group would have been even lower than the reported results.)
Yet The Lancet did not append a correction to the article itself, leaving readers completely unaware that it contained – and still contains – a mistake that involved a primary outcome and made the findings appear better than they actually were. (Lancet policy calls for correcting "any substantial error" and "any numerical error in the results, or any factual error in interpretation of results.")
A 2012 paper in PLoS One, on financial aspects of the illness, included outcomes for some additional objective measures. Instead of a decrease in financial benefits received by those in the rehabilitative therapy arms, as would be expected if disabled people improved enough to increase their ability to work, the paper reported a modest average increase in the receipt of benefits across all the arms of the study. There were also no differences among the groups in days lost from work.
The investigators did not include the promised information on wages. They also had still not published the results of the self-paced step-test, described in the protocol as a measure of fitness.
[The following two paragraphs have replaced the two paragraphs in the initial version to correct a couple of inaccurate statements about the assumptions on how to value unpaid care. See end of story for the original two paragraphs.]
In another finding, the PLoS One paper argued that the graded exercise and cognitive behavior therapies were the most cost-effective treatments from a societal perspective. In reaching this conclusion, the investigators valued so-called "informal" care – unpaid care provided by family and friends – at the mean national wage. In contrast, the PACE statistical analysis plan (approved in 2010 but not published until 2013) had proposed to value informal care using three different assumptions: the cost of a home health care worker, the minimum wage, and zero cost. The statistical analysis plan did not propose to value informal care at the mean national wage.
The PLoS One paper itself did not provide the findings under any of the three assumptions proposed in the statistical analysis plan. The authors did not explain this discrepancy, nor did they explain why they chose a fourth assumption that was not mentioned in the statistical analysis plan. The paper noted, however, that "sensitivity analyses revealed that the results were robust for alternative assumptions."
Commenters on the PLoS One website, including Tom Kindlon, challenged the claim that the findings would be "robust" under the alternative assumptions for informal care. In fact, they pointed out, the lower-cost conditions would reduce or fully eliminate the reported societal cost-benefit advantages of the cognitive behavior and graded exercise therapies.
In a posted response, the paper's lead author, Paul McCrone, conceded that the commenters were right about the impact that the lower-cost, alternative assumptions would have on the findings. However, McCrone did not explain or even mention the apparently erroneous sensitivity analyses he had cited in the paper, which had found the societal cost-benefit advantages for graded exercise therapy and cognitive behavior therapy to be "robust" under all assumptions. Instead, he argued that the two lower-cost approaches were unfair to caregivers because families deserved more economic consideration for their labor.
"In our opinion, the time spent by families caring for people with CFS/ME has a real value and so to give it a zero cost is controversial," McCrone wrote. "Likewise, to assume it only has the value of the minimum wage is also very restrictive."
In a subsequent comment, Kindlon chided McCrone, pointing out that he had still not explained the paper's claim that the sensitivity analyses showed the findings were "robust" for all assumptions. Kindlon also noted that the alternative, lower-cost assumptions were included in PACE's own statistical plan.
"Remember it was the investigators themselves that chose the alternative assumptions," wrote Kindlon. "If it's 'controversial' now to value informal care at zero value, it was similarly 'controversial' when they decided before the data was looked at, to analyse the data in this way. There is not much point in publishing a statistical plan if inconvenient results are not reported on and/or findings for them misrepresented."
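Kindlon's point, that the valuation assumption drives the result, can be illustrated with a toy calculation. Every figure below is invented; the sketch only shows the mechanism by which a societal cost comparison can narrow and then flip as informal care is valued at the mean wage, the minimum wage, or zero:

```python
# Hypothetical sketch of how the hourly value assigned to informal care
# drives a societal cost comparison. All figures are invented for
# illustration; they are not the PACE/PLoS One data.

ARMS = {
    # arm: (direct treatment cost per patient, weekly informal-care hours)
    "CBT": (1000, 10),
    "SMC": (200, 14),  # comparator: cheaper treatment, more care hours
}
WEEKS = 52

def societal_cost(arm, wage):
    """Treatment cost plus informal care valued at `wage` per hour."""
    treatment, hours = ARMS[arm]
    return treatment + hours * WEEKS * wage

for wage in (14.0, 6.0, 0.0):  # mean wage, minimum wage, zero cost
    diff = societal_cost("SMC", wage) - societal_cost("CBT", wage)
    verdict = "CBT cheaper" if diff > 0 else "CBT more expensive"
    print(f"care at £{wage:.1f}/h: difference £{diff:+.0f} ({verdict})")
```

With these made-up numbers, CBT's apparent societal advantage shrinks at the minimum wage and reverses entirely when informal care is costed at zero, which is exactly the sensitivity the statistical analysis plan was meant to report.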
The journal Psychological Medicine published the long-awaited findings on "recovery" in January 2013. In the paper, the investigators imposed a serious limitation on their construct of "recovery." They now defined it as recovery solely from the most recent bout of illness, a health status generally known as "remission," not "recovery." The protocol definition included no such limitation.
In a commentary, Fred Friedberg, a psychologist in the psychiatry department at Stony Brook University and an expert on the illness, criticized the PACE authors' use of the term "recovery" as inaccurate. "Their central construct…refers only to recovery from the current episode, rather than sustained recovery over long periods," he and a colleague wrote. The term "remission," they noted, was "less prone to misinterpretation and exaggeration."
Tom Kindlon was more direct. "No one forced them to use the word 'recovery' in the protocol and in the title of the paper," he said. "If they meant 'remission,' they should have said 'remission.'" As with the release of the Lancet paper, when Chalder spoke of getting "back to normal" and the commentary claimed "recovery" based on a "strict criterion," Kindlon believed the PACE approach to naming the paper and reporting the results would once again lead to inaccurate news reports touting claims of "recovery."
In the new paper, the PACE investigators loosened all four of the protocol's required criteria for "recovery" but did not mention which, if any, oversight committees approved this overall redefinition of the term. Two of the four revised criteria for "recovery" were the Lancet paper's fatigue and physical function "normal ranges." Like the Lancet paper, the Psychological Medicine paper did not point out that these "normal ranges" – now re-purposed as "recovery" thresholds – overlapped with the study's entry criteria for disability, so that participants could already be "recovered" on one or both of these two indicators from the outset.
The four revised "recovery" criteria were:
*For physical function, "recovery" required a score of 60 or more. In the protocol, "recovery" required a score of 85 or more. At entry, a score of 65 or less was required to demonstrate enough disability to be included in the trial. This entry threshold of 65 indicated better health than the new "recovery" threshold of 60.
*For fatigue, "recovery" required a score of 18 or less out of 33 (on the fatigue scale, a higher score indicated more fatigue). In the protocol, "recovery" required a score of 3 or less out of 11 under the original scoring system. At entry, a score of at least 12 on the revised scale was required to demonstrate enough fatigue to be included in the trial. This entry threshold of 12 indicated better health than the new "recovery" threshold of 18.
*A score of 1 ("very much better") or 2 ("much better") out of 7 on the Clinical Global Impression scale. In the protocol, "recovery" required a score of 1 ("very much better") on the Clinical Global Impression scale; a score of 2 ("much better") was not good enough. The investigators made this change, they wrote, because "we considered that participants rating their overall health as 'much better' represented the process of recovery." They did not cite references to justify their post-protocol reconsideration of the meaning of the Clinical Global Impression scale, nor did they explain when and why they changed their minds about how to interpret it.
*The last protocol requirement for "recovery" – not meeting any of the three case definitions used in the study – was now divided into less and more restrictive sub-categories. Presuming participants met the relaxed fatigue, physical function, and Clinical Global Impression thresholds, those who no longer met the Oxford criteria were now defined as having achieved "trial recovery," even if they still met one of the other two case definitions, the CDC's chronic fatigue syndrome case definition and the ME definition. Those who fulfilled the protocol's stricter criteria of not meeting any of the three case definitions were now defined as having achieved "clinical recovery." The authors did not explain when or why they decided to divide this category into two.
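The overlap in the first two criteria can be verified mechanically from the thresholds quoted above (the thresholds are from the paper as reported here; the short check itself is mine):

```python
# Entry and revised "recovery" thresholds as quoted in the text.
# A lower physical-function score and a higher fatigue score each
# indicate worse health.

ENTRY_PF_MAX = 65      # physical function <= 65 required to enter the trial
RECOVERED_PF_MIN = 60  # physical function >= 60 counted as "recovered"

ENTRY_FATIGUE_MIN = 12      # fatigue >= 12 (of 33) required to enter
RECOVERED_FATIGUE_MAX = 18  # fatigue <= 18 counted as "recovered"

pf_overlap = [s for s in range(0, 101, 5)  # SF-36 scores move in steps of 5
              if RECOVERED_PF_MIN <= s <= ENTRY_PF_MAX]
fatigue_overlap = [s for s in range(0, 34)
                   if ENTRY_FATIGUE_MIN <= s <= RECOVERED_FATIGUE_MAX]

print("physical-function scores both 'disabled' and 'recovered':", pf_overlap)
print("fatigue scores both 'fatigued enough' and 'recovered':", fatigue_overlap)
```

Any participant entering with a physical-function score of 60 or 65, or a fatigue score between 12 and 18, met both the disability requirement and a "recovery" threshold at the same time.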
After these multiple relaxations of the protocol definition of "recovery," the paper reported the full data for the less restrictive category of "trial recovery," not the more restrictive category of "clinical recovery." The authors found that the odds of "trial recovery" in the cognitive behavior therapy and graded exercise therapy arms were more than triple those in the adaptive pacing therapy and specialist medical care arms. They did not report having conducted any sensitivity analyses to measure the impact of all the changes in the protocol definition of "recovery."
They acknowledged that the "trial recovery" rate from the two rehabilitative treatments, at 22 percent in each group, was low. They suggested that increasing the total number of graded exercise therapy and cognitive behavior therapy sessions and/or bundling the two interventions could boost the rates.
As with the Lancet paper, the "recovery" findings received uncritical media coverage, and as Tom Kindlon had feared, the news accounts did not generally mention "remission." Nor did they discuss the dramatic changes in all four of the criteria from the original protocol definition of "recovery." Not surprisingly, the report drew fire from patients and advocacy groups.
Commenters on the journal's website and on patient and advocacy blogs challenged the revised definition for "recovery," including the use of the overlapping "normal ranges" for fatigue and physical function as two of the four criteria. They wondered why the PACE authors used the term "recovery" at all, given the serious limitation they had placed on its meaning. They also noted that the investigators were ignoring the Lancet paper's objective results from the six-minute walking test in assessing whether people had recovered, as well as the employment and benefits data from the PLoS One paper, all of which failed to support the "recovery" claims.
In their response, White and his colleagues defended their use of the term "recovery" by noting that they explained clearly what they meant in the paper itself. "We were careful to give a precise definition of recovery and to emphasize that it applied at one particular point only and to the current episode of illness," they wrote. But they did not explain why, given that narrow definition, they did not simply use the standard term "remission," since there was always the possibility that the word "recovery" would lead to misunderstanding of the findings.
Once again, they did not address or explain why the entry criteria for disability and the outcome criteria for the physical function and fatigue "normal ranges" – now redefined as "recovery" thresholds – overlapped. They again did not explain why they applied the statistical formula for finding "normal ranges" in normally distributed populations to samples that they knew were skewed. And they now disavowed the significance of objective measures they themselves had selected, starting with the walking test, which had been described as "an objective outcome measure of physical capacity" in the protocol.
"We dispute that in the PACE trial the six-minute walking test offered a better and more 'objective' measure of recovery," they now wrote, citing "practical limitations" with the data.
For one thing, the researchers now explained that during the walking test, in deference to participants’ poor health, they did not verbally encourage them, in contrast to standard practice. For another, they did not have follow-up walking tests for more than a quarter of the sample, a significant data gap that they did not explain. (One possible explanation is that participants were too sick to do the walking test at all, suggesting that the findings might have looked significantly worse if they had included actual results from those missing subjects.)
Finally, the PACE investigators explained, they had only 10 meters of corridor space for conducting the test, rather than the standard of 30 to 50 meters – although they did not explain whether all six of their study centers around the country, or just some of them, suffered from this deficiency. "This meant that participants had to stop and turn around more frequently, slowing them down and thereby vitiating comparisons with other studies," wrote the investigators.
This explanation raised further questions, however. The investigators had started assessing participants – and administering the walking test – in 2005. Yet two years later, in the protocol published in BMC Neurology, they did not mention any comparison-vitiating problems; instead, they described the walking test as an "objective" measure of physical capacity. While the protocol itself was written before the trial started, the authors posted a comment on the BMC Neurology web page in 2008, in response to patient comments, that reaffirmed the six-minute walking test as one of "several objective outcome measures."
In their response in the Psychological Medicine correspondence, White and his colleagues did not explain whether they had recognized the walking test's comparison-vitiating limitations by the time they published their protocol in 2007 or their comment on BMC Neurology's website in 2008 – and if not, why not.
In their response, they also dismissed the relevance of their employment and benefits outcomes, which had been described as "another more objective measure of function" in the protocol. "Recovery from illness is a health status, not an economic one, and plenty of working people are unwell, while well people do not necessarily work," they now wrote. "In addition, follow-up at 6 months after the end of therapy may be too short a period to affect either benefits or employment. We therefore disagree…that such outcomes constitute a useful component of recovery in the PACE trial."
In conclusion, they wrote in their Psychological Medicine response, cognitive behavior therapy and graded exercise therapy "should now be routinely offered to all those who may benefit from them."
Each published paper fueled new questions. Patients and advocates filed dozens of freedom-of-information requests for PACE-related documents and data with Queen Mary University of London, White’s institutional home and the designated administrator for such matters.
How many PACE participants, patients wanted to know, were "recovered" according to the much stricter criteria in the 2007 protocol? How many participants were already "within the normal range" on fatigue or physical function when they entered the study? When exactly were the changes made to the assessment strategies promised in the protocol, what oversight committees approved them, and why?
Some requests were granted. One response revealed that 85 participants – 13 percent of the total sample – were already "recovered" or "within the normal range" for fatigue or physical function even as they qualified as disabled enough for the study. (Almost all of these, 78 participants, achieved the threshold for physical function alone; four achieved it for fatigue, and three for both.)
But many other requests have been turned down. Anna Sheridan, a long-time patient with a doctorate in physics, requested data last year on how the patients deemed "recovered" by the investigators in the 2013 Psychological Medicine paper had performed on the six-minute walking test. Queen Mary University rejected the request as "vexatious."
Sheridan asked for an internal review. "As a scientist, I am seeking to understand the full implications of the research," she wrote. "As a patient, the distance that I can walk is of incredible concern…When deciding to undertake a treatment such as CBT and GET, it is surely not unreasonable to want to know how far the patients who have recovered using these treatments can now walk."
The university re-reviewed the request and informed Sheridan that it was not, in fact, "vexatious." But her request was again being rejected, wrote the university, because the resources needed to locate and retrieve the information "would exceed the appropriate limit" designated by the law. Sheridan appealed the university's decision to the next level, the U.K. Information Commissioner's Office, but was recently turned down.
The Information Commissioner's Office also turned down a request from a plaintiff seeking meeting minutes for PACE oversight committees to understand when and why outcome measures were changed. The plaintiff appealed to a higher-level venue, the First-Tier Tribunal. The tribunal panel – a judge and two lay members – upheld the decision, declaring that it was "pellucidly clear" that release of the minutes would threaten academic freedom and jeopardize future research.
The tribunal panel defended the extensive protocol changes as "common to most clinical trials" and asserted that the researchers "did not engineer the results or undermine the integrity of the findings." The panel framed the many requests for trial documents and data as part of a campaign of harassment against the researchers, and sympathetically cited the heavy time burdens that the patients' demands placed on White. In conclusion, wrote the panel, the tribunal "has no doubt that properly viewed in its context, this request should have been seen as vexatious – it was not a true request for information – rather its function was largely polemical."
To date, the PACE investigators have rejected requests to release raw data from the trial for independent analysis. Patients and other critics say the researchers have a particular obligation to release the data because the trial was conducted with public funds.
Since the Lancet publication, much media coverage of the PACE investigators and their colleagues has focused on what The Guardian has called the "campaign of abuse and violence" purportedly being waged by "militants…considered to be as dangerous and uncompromising as animal rights extremists." In a news account in the BMJ, White portrayed the protestors as hypocrites. "The paradox is that the campaigners want more research into CFS, but if they don't like the science they campaign to stop it," he told the publication. While news reports have also repeated the PACE authors' claims of treatment success and "recovery," these accounts have not generally examined the study itself in depth or investigated whether patients' complaints about the trial are valid.
Tom Kindlon has often heard these arguments about patient activists and says they are used to deflect attention away from the PACE trial's flaws. "They've said that the activists are unstable, the activists have illogical reasons and they are unfair or prejudiced against psychiatry, so they're easy to dismiss," said Kindlon.
What patients oppose, he and others explain, is not psychiatry or psychiatrists, but being told that their debilitating organic disease requires treatments based on the hypothesis that they have false cognitions about it.
In January of this year, the PACE authors published their paper on mediators of improvement in The Lancet Psychiatry. Not surprisingly, they found that reducing participants’ presumed fears of activity was the main mechanism through which the rehabilitative interventions of graded exercise therapy and cognitive behavior therapy delivered their purported benefits. News stories about the findings suggested that patients with ME/CFS could get better if they were able to rid themselves of their fears of activity.
Unmentioned in the media reports was a tiny graph tucked into a page with 13 other tiny graphs: the results of the self-paced step-test, the fitness measure promised in the protocol. The small graph indicated no advantages for the two rehabilitative intervention groups on the step-test. In fact, it appeared to show that those in the other two groups might have performed better. However, the paper did not include the data on which the graph was based, and the graph was too small to extract any useful data from it.
After publication of the study, a patient filed a request to obtain the actual step-test results that were used to create the graph. Queen Mary University rejected the request as “vexatious.”
With the publication of the step-test graph, the study’s key “objective” outcomes–except for the still-unreleased data on wages–had now all failed to support the claims of “recovery” and treatment success from the two rehabilitative therapies. The Lancet Psychiatry paper did not mention this serious lack of support for the study’s subjective findings from all its key objective measures.
Some scientific developments since the 2011 Lancet paper–such as this year’s National Institutes of Health and Institute of Medicine panel reports, the Columbia University findings of distinct immune system signatures, further promising findings from Norwegian research into the immunomodulatory drug [see correction below] pioneered by rheumatoid arthritis expert Jonathan Edwards, and a growing body of evidence documenting patients’ abnormal responses to activity–have helped shift the focus to biomedical factors and away from PACE, at least outside Great Britain.
In the U.K. itself, the Medical Research Council, in a modest shift, has awarded some grants for biomedical research, but the PACE approach remains the dominant framework for treatment within the national health system. Two years ago, the disparate scientific and political factions launched the CFS/ME Research Collaborative, conceived as an umbrella organization representing a range of views. At the collaborative’s inaugural two-day gathering in Bristol in September of 2014, many speakers presented on promising biomedical research. Peter White’s talk, called “PACE: A Trial and Tribulations,” focused on the response to his study from disaffected patients.
According to the conference report, White cited the patient community’s “campaign against the PACE trial” for recruitment delays that forced the investigators to seek more time and money for the study. He spoke about “vexatious complaints” and demands for PACE-related data, and said he had so far fielded 168 freedom-of-information requests. (He’d received a freedom-of-information request asking how many freedom-of-information requests he’d received.) This type of patient activity “damages” research efforts, he said.
Jonathan Edwards, the rheumatoid arthritis expert now working on ME/CFS, filed a separate report on the conference for a popular patient forum. “I think I can only describe Dr. White’s presentation as out of place,” he wrote. After White briefly discussed the trial outcomes, noted Edwards, “he then spent the rest of his talk saying how unreasonable it was that patients did not gratefully accept this conclusion, indicating that this was an attack on science…
“I think it was unfortunate that Dr. White suggested that people were being unreasonable over the interpretation of the PACE study,” concluded Edwards. “Fortunately nobody seemed to take offence.”
Correction: The original text referred to the drug as an anti-inflammatory.
Correction: Below are the two original paragraphs about the PLoS One study that have now been replaced.
“In another finding, the PLoS One paper argued that the graded exercise and cognitive behavior therapies were the most cost-effective treatments from a societal perspective. In reaching this conclusion, the investigators valued so-called “informal” care–unpaid care provided by family and friends–at the replacement cost of a homecare worker. The PACE statistical analysis plan (approved in 2010 but not published until 2013) had included two additional, lower-cost assumptions. The first valued informal care at minimum wage, the second at zero compensation.
The PLoS One paper itself did not provide these additional findings, noting only that “sensitivity analyses revealed that the results were robust for alternative assumptions.”