By David Tuller, DrPH
What is going on at The BMJ? In May, the journal corrected an obvious error in a paper about a prominent Long Covid mental and physical rehabilitation trial called REGAIN. The trial was conducted among patients who had been hospitalized for Covid-19, but key sections of the paper generalized the findings to all Long Covid patients—a huge lapse that should have been caught by peer reviewers, not to mention The BMJ’s own editorial team. Given that the great majority of Long Covid patients have not been hospitalized, the correction essentially undermined any claim that the REGAIN findings could be extrapolated to everyone with the illness.
Now The BMJ has published a major review of interventions for Long Covid that compounds the original mistake. The review’s abstract claims that, based on “moderate certainty evidence,” the REGAIN intervention “probably” improves Long Covid symptoms, without noting that the trial participants had been hospitalized. The review mentions this highly salient fact only deep in the text of the paper. But the broad statements in the abstract and elsewhere, which essentially extrapolate the purported benefits to all Long Covid patients, seem to have raised no questions among peer reviewers. Nor did this excessively expansive interpretation of the REGAIN results cause any apparent concern among editors at The BMJ, who presumably should have known that the referenced trial, published in their own journal earlier this year, already bore an embarrassing correction for having misrepresented its findings.
This means the review itself requires a similar correction—and I plan to urge the journal to take such action.
The review, which also cites a blog post of mine while misrepresenting my qualifications, is called “Interventions for the management of long covid (post-covid condition): living systematic review.” (Can someone please explain to me the difference between a “living” systematic review and a “dead” one?) The review includes 24 clinical trials: four each involving drug interventions, dietary approaches, and medical devices or technologies; three involving behavioral interventions, including cognitive behavior therapy (CBT); and eight involving physical activity or rehabilitation. One included trial—REGAIN—involved a combination of mental and physical rehabilitation.
Just as the review’s positive recommendation for mental and physical health rehabilitation rested on REGAIN alone, a single CBT trial formed the basis for its conclusion that there was “moderate certainty evidence” the therapy would “probably” improve symptoms. Beyond those two trials, one other suggested that intermittent aerobic activity was more beneficial than “continuous exercise.” None of the other 21 trials yielded actionable evidence, per the review.
In a blistering take-down on his blog, The Science Bit, Professor Brian Hughes, a psychologist at the University of Galway, pointed out that the relevant psycho-behavioral studies included in the review were all at high risk of bias in various ways. The review itself concedes the point in this acknowledgement buried in the text:
“The evidence addressing CBT and physical and mental health rehabilitation was…at high risk of bias due to lack of blinding and imbalances in the degree of interactions between patients and healthcare providers between arms.”
Here’s a rough translation of that sentence into English:
“If you offer one group of sick patients an intervention that they believe might help them feel better, and you offer another group of sick patients nothing, those who receive the intervention they believe might help them feel better will be much more likely afterwards to report that they feel better than those who receive nothing.”
Responses at “high risk of bias” are obviously an unreliable and inappropriate basis for making confident predictions about the effects of an intervention. So it is perplexing that the review cites such questionable data to assert that anything has been shown by “moderate certainty evidence” or is “probably” going to occur.
**********
Why does the review identify me as a patient or health care provider?
Here’s how the authors explain why they wrote the review:
“Some patients and healthcare providers have questioned the credibility of interventions in published trials, such as exercise and cognitive behavioural therapy (CBT). Trustworthy systematic reviews that clarify the benefits and harms of available interventions are critical to promote evidence based care.”
The first sentence of that passage cites three supporting references—one of which is a post I wrote about a dodgy Dutch trial, reported in a 2023 paper; that trial turns out to be the sole basis for the review’s positive conclusions about CBT. But contrary to the sentence’s claim about “patients and healthcare providers” questioning the research, I do not fall into either of those categories. As a public health academic at the University of California, Berkeley, I have some training in basic epidemiology, unlike many (although obviously not all) patients and healthcare providers.
In one sense, this is an inconsequential error. I mean, who cares? However, the error does have the convenient effect of obscuring a key point: many well-qualified academics, and not solely patients and healthcare providers, have highlighted the serious flaws that mar the body of research on CBT, graded exercise, and other psycho-behavioral approaches in both ME/CFS and Long Covid. There’s a reason why the National Institute for Health and Care Excellence (NICE), in an assessment prepared for its 2021 ME/CFS guidelines, found the quality of all the evidence for CBT and GET to be “very low” or merely “low”—certainly not the level required for meaningful clinical guidelines.
In the case of the CBT study highlighted in the review as having “moderate certainty evidence,” the participants, as I noted in the cited blog post, did not move any more at the end of the treatment than before, as measured by body monitors. These null findings suggested that any marginal self-reported improvements were likely an artifact of the bias-inducing study design rather than reflective of genuine changes. The trial investigators chose not to report the results from this outcome in the main paper—an egregious failure of judgment and a likely example of research misconduct.
And yet this is the trial the review cites to justify its recommendation for CBT.
The review trots out the usual rationale for explaining the purported effectiveness of CBT and graded activity:
“CBT and graduated physical activity are offered to patients with long covid and ME/CFS based on the observation that patients often reduce activity in response to their symptoms. Consequently, patients may become physically deconditioned, develop disrupted sleep-wake patterns, and hold unhelpful beliefs about fatigue. Interventions such as CBT and supervised physical activity which gradually reintroduce patients to activity may help with reconditioning, regularising patterns of activity, optimising rest and sleep, and addressing patients’ unhelpful beliefs about fatigue and activity.”
The problem is that, as NICE noted, the evidence in support of this approach, however appealing the interventions sound in theory, is extremely weak. The theory is essentially what Professor Sir Simon Wessely and colleagues proposed back in the late 1980s, and the discredited and fraudulent PACE study was said to be the “definitive” test of that proposition. In fact, the review’s self-delusion is evident in its irony-free reference to PACE as the “only” known trial that found benefits for “all interventions found to be effective” in the review, that is, both the mental and physical rehabilitation approaches. After everything that has happened in this field, the blanket declaration that PACE documented benefits is laughable, and unjustifiable from a scientific perspective.
NICE’s 2021 ME/CFS guidelines themselves serve as a rebuttal to the arguments advanced in the review. However, the review fails to mention this authoritative NICE document, another oversight that should have been called out by peer reviewers or The BMJ’s own editors.
All in all, pure propaganda.