By David Tuller, DrPH
A British medical education company has recently disseminated a recruitment ad for a high-profile pediatric study of treatment for what it calls CFS/ME. The recruitment ad's headline describes the intervention being investigated as effective, without caveat or reservation. (Full headline: "Chronic fatigue syndrome (CFS/ME): effective home treatment for teenagers")
To back up this assertion, the recruitment ad claims that a previous Dutch study of the intervention reported impressive results, with 65% in that arm achieving recovery, compared to only 8% among other participants. The ad declares that these results were maintained for years. [In fact, the Dutch study reported this “recovery” rate as 66%.]
This e-mailed recruitment ad is for FITNET-NHS, a trial of online CBT for kids. The UK's National Institute for Health Research, an arm of the National Health Service, funded this major study, which is seeking to enroll more than 700 children. The recruitment ad was apparently sent to GPs; at least, it was received by one, who passed it along. The company that sent it out, Red Whale, or perhaps Red Whale/GP Update, states that 15,000 primary care practitioners take its courses annually. So perhaps the ad was widely distributed, perhaps not.
Whatever the case, the language in the recruitment ad appears likely to convince GPs that their patients have an excellent chance of cure if they're assigned to the treatment arm. And if the GPs embrace that belief, the patients they recruit to the study are likely to share it. In contrast, if patients are recruited through these GPs and are then assigned to the study's comparison arm, they are less likely to harbor such high expectations.
Inducing or promoting such pre-conceptions among prospective study participants might not be of such concern in other circumstances. For example, if this were a double-blinded drug trial with biomarkers as the designated outcome measures, a similar recruitment ad would not impact the comparisons between the study arms. (It could perhaps impact the composition of the entire study sample.) But FITNET-NHS is already designed in a way likely to maximize biased responses. Since it is clearly impossible to blind participants and providers to treatment allocation, this is an open-label trial. It also relies on subjective outcomes.
Combining these two traits in a single study can create conditions for so much bias as to render reported findings uninterpretable, which is why other fields of medicine reject such evidence. With its problematic open-label/subjective-outcome design, FITNET-NHS should probably not have been approved and funded in the first place. (In contrast, it is possible to obtain viable data from open-label trials that include objective outcomes, and from blinded trials with subjective outcomes.)
Despite these flaws, the study is underway, so it would have made sense to avoid exacerbating the problem by introducing even more potential bias. Yet the recruitment ad does just that, even before treatment allocation. Not only does the ad seem to promise an unequivocal shot at recovery, it provides misleading and exaggerated claims about the Dutch study, another open-label trial with self-reported outcomes.
Were the Dutch recovery results really impressive, as claimed in the FITNET-NHS recruitment ad? Not if you consider that the investigators did not pre-specify the definition of recovery. They created it after viewing the results, so these were post-hoc findings. Post-hoc findings carry much less weight than pre-specified ones, and it is inappropriate to cite them without noting that they are post-hoc. Yet the recruitment ad does not mention this key fact. [A recruitment video on the FITNET-NHS website also includes the recovery claim.]
The Dutch FITNET investigators are longtime associates of members of the PACE team. But even two of the PACE authors, in a Lancet commentary, expressed skepticism about these recovery figures. They noted pointedly that these were post-hoc results and that the criteria used to define recovery were not stringent. If that's the case, why did the FITNET-NHS recruitment ad promote these same results to GPs as impressive?
What about the claim that findings from the Dutch online CBT group were maintained at long-term follow-up? That's true, as far as it goes. But the recruitment ad does not mention the salient detail that other trial participants scored the same at long-term follow-up as those assigned to the online CBT arm. In other words, the treatment conferred no long-term benefits, according to the study's findings. A clinical trial is designed to compare results between treatment groups. To highlight within-group findings rather than between-group findings is a deceptive way to report clinical trial results, whether in a peer-reviewed paper or in a recruitment ad.
I addressed the problems with FITNET-NHS and its Dutch predecessor in blog posts in late 2016, here and here. (I repeatedly offered the FITNET-NHS team a chance to respond to my criticisms at length on Virology Blog, but never heard back. However, my initial FITNET-NHS post was publicly highlighted in a lecture slide as an example of libellous blogs. My efforts to obtain an explanation or an apology for this false accusation were ignored by the relevant parties. I would still welcome such an explanation or apology. As I have repeatedly made clear, I would also be happy to post on Virology Blog any documentation proving that criticisms I have made of FITNET-NHS, or any other studies, are inaccurate.)
Investigators are supposed to consider reasonable alternative explanations for their findings. Beyond the bias inherent in the Dutch FITNET study design itself, there is a very reasonable alternative explanation for why those receiving online CBT might have reported improvements: These patients were able to stay at home rather than having to attend in-person treatment sessions. Perhaps they were better able to pace themselves and therefore less likely to exceed their energy thresholds and suffer relapses. The Dutch investigators did not consider this self-evident possibility. Sometimes people are so attached to their own perspective that other logical interpretations never occur to them.
Another problem is the low 8% recovery rate among those who received usual care. In the Dutch study, the most common forms of usual care were in-person CBT and GET. Like their UK colleagues, the Dutch investigators have long promoted these two therapies as the treatments of choice, so it is perplexing that participants who received them fared so badly. Because the investigators provided few details about the quantity or quality of these usual care interventions, the reasons remain unknown. But the poor findings raise concerns about the reliability and validity of claims of treatment effectiveness from earlier studies by members of the same research group.
Who is Red Whale/GP Update, anyway, and why is it providing unreliable and overblown information to physicians to get them to recruit vulnerable kids into a clinical trial? Most of its work appears to involve courses updating GPs on clinical practice. This is the description of the company on its website:
“We are one of the leading providers of primary care medical education in the UK with around 15,000 primary care practitioners attending our courses each year. We specialise in producing courses that are evidence-based, very relevant to everyday practice, and full of action points that delegates can take away and implement immediately.” The firm further proclaims itself to be free of pharmaceutical funding.
I can't find anything on the website that describes the company's role in providing recruitment outreach services for clinical trials, although just because I can't find it doesn't mean it's not there. But this recruitment ad misrepresents earlier findings in a way likely to generate a sample of pre-biased study participants. It undermines Red Whale/GP Update's self-congratulatory assertion that it produces evidence-based materials. Perhaps Red Whale/GP Update is so focused on avoiding drug company influence that it is blind to bad science from powerful non-pharmaceutical interests, including government-funded researchers.