By David Tuller, DrPH
*This is a crowdfunding month at UC Berkeley. If you’d like to support my work, the link is here.*
How many bad papers can Trudie Chalder, King’s College London’s factually and mathematically challenged professor of cognitive behavior therapy (CBT), churn out? The woman’s name seems attached to an inexhaustible supply of scientific dreck. Just this month I critiqued two papers (here and here) for which she served as a co-author. Both publications were of poor quality, full of unjustified assumptions and assertions, a known hallmark of Professor Chalder’s work.
Professor Chalder’s academic and professional activities appear designed solely to confirm the purported value and importance of CBT, no matter the condition under consideration. Her faith in this principle stands firm even in the face of pathetic results from her research. She is a one-trick pony—and her one trick doesn’t even do what she claims it does. Which brings us to our latest example of Professor Chalder’s problematic output: “The effectiveness of specialist cognitive behavioural therapy for functional neurological disorder: a service evaluation,” a study published last month in Frontiers in Psychiatry.
As usual with Professor Chalder and her fellow travelers in the CBT ideological brigades, there is much less here than meets the eye.
The problems start with the title, which declares that the study is focused on the “effectiveness” of CBT for this population. But this research has zilch to do with “effectiveness,” which is a causal construct, a measure of whether an effect can be confidently attributed to a specific cause. The study is, by its very design, incapable of demonstrating causality. It is not a randomized controlled trial with equally balanced arms that differ only in receiving, or not receiving, the intervention. Instead, it is a retrospective review of results from a specialist clinical service. There is no comparison arm of patients who did not receive the CBT intervention.
The findings therefore represent associations or correlations only. These data do not indicate the direction or exact nature of these associations. Any suggestion of a causal relationship between intervention and outcomes is unwarranted. In fact, the investigators themselves know this. As they write:
“The study made retrospective use of routinely collected clinical outcomes data and did not feature randomisation or make comparisons to a control group. This therefore prevents conclusions about causality…The lack of a control group also means that it is not possible to determine whether the improvements in outcome measures were the result of the active therapeutic components of the CBT intervention as opposed to placebo effect, or the non-specific influence of a supportive therapeutic alliance.”
Given that acknowledgment, any mention of “effectiveness” in the title should have been questioned—and quashed—by peer reviewers and editors. That is, of course, apart from the fact that the investigators should never have used the word in the first place.
This isn’t the first time that Professor Chalder has deployed causal language in inappropriate contexts. She and Professor Sir Simon Wessely committed the same error in an analysis of clinical outcomes published a few years ago in the Journal of the Royal Society of Medicine. That study was also an examination of a cohort of patients that did not include a comparison arm. When Brian Hughes, a professor of psychology at the University of Galway, and I submitted a letter of concern, the journal rejected it, so we ended up publishing it as a commentary elsewhere.
**********
Modest reported benefits in generic measures
The new study in Frontiers in Psychiatry reported modest benefits in generic measures for what are often called “quality-of-life” domains, such as emotional upset and social adjustment, as well as in specific mental and behavioral “processes” claimed to be implicated in maintaining physical complaints, such as focusing on symptoms, avoidance of activity, and so on. Here is the conclusion of the abstract: “These findings suggest that specialist CBT for FND delivered in routine clinical practice is associated with meaningful improvements in distress, functioning, and key cognitive-behavioural maintenance processes.”
So apart from the mixed messages concerning causality, why can’t these associations be taken at face value? Because the study is fraught with multiple other issues that render the reported results suspect.
For one, all the outcomes are self-reported and subjective. If participants are receiving an intervention, and they know the intervention is supposed to reduce anxiety and depression and improve their emotional state, they are likely to be biased toward providing positive responses, regardless of actual impact. That could easily explain some improvements in these measured domains.
In addition, based on their findings, the investigators assert the following:
“This [the reported benefits] suggests that the CBT intervention may be associated with change in cognitive and behavioural processes believed to contribute to the maintenance of FND symptoms. These findings are therefore consistent with mechanistic models of FND, suggesting that CBT may influence cognitive and behavioural processes implicated in symptom maintenance.”
There is a huge problem with this reasoning. The study did not measure any significant improvements in the two core symptom domains—physical function and pain. It is therefore hard to understand why the investigators believe their data support the argument that the identified “cognitive and behavioral processes” are implicated in the “maintenance of FND symptoms.” If this were so, the study should have reflected such a link by documenting symptom reduction—but it did not.
This discrepancy is apparently not a problem for investigators seeking to define “recovery” downward to mean “coping better with chronic illness,” regardless of whether symptoms have abated. (This effort appears to be part of Professor Chalder’s project, as I noted in a recent blog post.) As the paper states: “This [the findings] suggests that the CBT intervention primarily reduced the extent to which difficulties interfered with everyday participation, rather than producing significant change in perceived physical capability.” To address this issue, the investigators recommend “psychoeducation that explicitly distinguishes between changes in wellbeing and participation and changes in perceived physical capability or symptom-related limitations.”
A sense of wellbeing is important, especially in the face of chronic illness. But that is not what “recovery” means to many, if not most, patients. Moreover, such statements again contradict, or at least undermine, the claim that these “thinking processes and coping mechanisms” actually generate and/or maintain the troubling symptoms. The evidence on offer here clearly does not support this etiological assumption. Can’t these people keep their arguments straight?
And beyond all that, the following details provide even more information regarding the reliability—or lack thereof—of the reported findings: “184 participants provided pre-treatment data. 71 participants provided mid-treatment data and 53 provided end-treatment data.”
In other words, partway through treatment, fewer than half the sample chose to respond to questionnaires. By the end of treatment, that proportion had fallen to less than a third. That represents a huge drop-off, or loss to follow-up. It is telling when so many participants choose not to finish an intervention, or at least not to endorse it by taking a few minutes to fill out some questionnaires. With such a poor response rate, it is hard to take any of the results or conclusions too seriously, no matter how much fancy statistical juggling is involved.
The investigators would undoubtedly have preferred to ignore this highly salient issue, but even they need to acknowledge the obvious. As the paper notes in its discussion section, “the high proportion of missing data…limits our ability to accurately describe clinical outcomes in the treatment group overall.”
Translation into standard English: Given these enormous gaps, any reported findings are essentially uninterpretable and should not be used as a guide for public health policy.