Trial By Error: Norway’s Double Whammy of Fuzzy Science

By David Tuller, DrPH

Norway’s got a double whammy going on. First there’s the group of investigators that seems to have had trouble determining whether their newly published research on CBT and music therapy was an actual randomized trial or merely a feasibility study. (More on that below.) Then we have Dagbladet, a widely read tabloid, promoting a new study of the Lightning Process–with the same senior investigator as the music therapy research. Dagbladet has so far published two stories about the matter (here and here), with perhaps more on the way.

The Dagbladet stories are stuffed with misstatements and omissions about ME, about patients, about the Lightning Process, and about the 2017 Lightning Process trial–too many to address here. The first article mentions me in referencing my criticism of research on this woo-woo intervention. I am referred to as a journalist and a blogger. I certainly don’t object to those terms. But as in previous articles, my academic credentials are not mentioned. Nor is the fact that the investigation of crap research–like the CBT and music therapy study–is in my job description.

The story mentions that the Norwegian ME Association has supported my crowdfunding efforts. The implication is that I am willing to trash research into the Lightning Process because of this financial support–rather than because it is crap. I guess Dagbladet is more concerned with my source of funding than with the 2017 study’s documented ethical and methodological violations. In contrast, the reporter does not appear to be concerned about the key involvement of a prominent Lightning Process practitioner in the proposed Norwegian study. Such a person would clearly have every financial and professional incentive to want to prove that this goulash of neurolinguistic programming, osteopathy and life-coaching has what looks like scientific backing.

As far as I know, the reporter made no effort to contact me. I am a very easy person to find. My academic credentials and my professional title at UC Berkeley are readily available to anyone who knows how to do an online search. Today I plan to send a letter to the editor.


On Monday, I wrote about the protocol and statistical analysis plan for that CBT and music therapy thing. The peer reviews also make for interesting reading and shed some light on the project’s downward slide from full-scale randomized trial to feasibility study. BMJ Paediatrics Open has an open review policy, that is, the peer reviewers are not anonymous. The authors see their names, and so does anyone who wants to review the reviews, which are appended to the article on the journal’s site. There are major advantages to this system in terms of transparency and accountability.

But it can also lead to embarrassment when a reviewer, like one of the two who assessed this manuscript, writes this: “I haven’t read beyond the abstract.” BMJ prides itself on the rigor of its peer review process. It is hard to square that pride with BMJ Paediatrics Open‘s apparent decision not to press the peer reviewer to read beyond the abstract and, if that failed, to reassign the manuscript to someone else. (I am assuming this because nothing in the record indicates any such follow-up occurred.)

Given his candor, I don’t fault the reviewer, although perhaps it would have been better to decline the invitation to review in the first place. It was BMJ Paediatrics Open‘s responsibility to decide if such limited scrutiny could pass BMJ’s purportedly high standards for quality and integrity. Editors either didn’t bother to read the review carefully, or they determined that a review of just the study’s abstract was sufficient.

Whatever the reason, other observers might view the failure to seek further input as a disturbing lapse in editorial judgement and an abrogation of BMJ’s obligation to readers–and to the medical literature. This failure suggests that disdain for or indifference to proper quality control–so clearly demonstrated in the company’s disastrous handling of the 2017 Lightning Process study–is perhaps systemic and not limited to a single BMJ journal or editor.

In the draft submitted to the journal for review, the title billed the study not as a “feasibility study” but as “an exploratory randomized trial.” (Had they gotten decent results, perhaps they would have dropped the word “exploratory.”) The first reviewer praised multiple aspects of the study but also noted the following:

“I struggle to understand from the aims of the study and the way the study is described whether this was intended as a feasibility study – i.e. to look at feasibility (can this be done?), acceptability (how do participants experience it?) and to give some indication of potential effect sizes to power a future larger scale trial, or whether this was intended as a fully powered trial. Throughout, I think this needs to be clarified for the reader and interpretations/conclusions drawn in light of what the aim was.”

Here is how the authors responded to this point: “Thank you. We agree – this study should be regarded a feasibility study, and the manuscript has been rephrased accordingly.”

Perceptive editors would have noticed that this response was non-responsive. The reviewer did not ask how the study “should be regarded” now that it was already done. The reviewer asked whether the study started out as a fully powered trial or as a feasibility study. She wanted the authors to clarify this point, not to fudge it. The interpretation and conclusions, she noted, needed to be drawn “in light of what the aim was”–not in light of how the authors reframed that aim after the fact.

And here is how the published paper describes the aim: “The aim of the present study was to explore the feasibility of this mental training programme in adolescents suffering from CF after acute EBV infection, and to provide preliminary estimates of effects as a basis for a full-scale clinical trial in the future.”

This is demonstrably not the case. As I noted in my earlier post, the protocol and the other documents include no mention of this trial being a feasibility study. Despite the reviewer’s straightforward request, the revised version did not clarify that the study was intended to be a fully powered trial. Instead, the authors rewrote the paper–and the history of the research–as if they’d intended from the start to conduct a feasibility study.

The authors should be expected to account for this mischaracterization of their research. So should the editors who accepted the article for publication while ignoring a reviewer’s alert that he hadn’t reviewed the actual paper. Did anyone at the journal read the supporting trial documentation? My best guess is no.

