Applying Science to SCAM: A Brief Summary of the Past Thirty Years

Commentary

Commentary

Edzard Ernst

Skeptical Inquirer

From: Volume 47, No. 1
January/February 2023

It has been almost thirty years since I started my job at the University of Exeter as a full-time researcher of so-called alternative medicine (SCAM). Perhaps this is a good time to reflect on what has happened during this time in the realm of SCAM research.

One of the first things I did after being appointed in 1993 was to define the aim of my unit, which was to apply science to SCAM. At the time, this intention upset quite a few people. The most prevalent argument of SCAM proponents against my plan (apart from attacks on me personally) was that the study of SCAM using scientific methods was quite simply impossible. They claimed that SCAM included holistic and complex interventions that cannot possibly be put into the “straitjacket” of conventional research, e.g., a controlled clinical trial.

I then spent the next few years showing that this notion was erroneous. Gradually and hesitantly, SCAM researchers seemed to agree with my view—not all of them, of course, but at first a few and then slowly, often reluctantly, the majority of them. More often than not, their motivation seemed to be that if nothing else, research would be good for promotion.

What followed was a period during which we and several other research groups started conducting more or less rigorous tests of the hypotheses underlying SCAM. All too often, the results of these tests turned out to be disappointing, to say the least: not only did most of the therapies in question fail to show efficacy, but they were also by no means free of risks. Perhaps worst of all, much of SCAM was shown to be biologically implausible.

The realization that rigorous scientific scrutiny often generated findings that were not what SCAM proponents had hoped for led to a sharp decline in the willingness of SCAM enthusiasts to conduct or cooperate in research. Many of them began to doubt whether science was such a good idea after all.

But how could they change their minds without losing face? The solution was simple: they had to appear to be dedicated to science but argue that a different type of scientific approach was required. An article from 2014 may serve as a good example of this revised stance of SCAM proponents on science. Here proponents of alternative medicine argued that:

The reductionist placebo-controlled randomized control trial (RCT) model that works effectively for determining efficacy for most pharmaceutical or placebo trial RCTs may not be the most appropriate for determining effectiveness in clinical practice for either CAM/IHC or many of the interventions used in primary care, including health promotion practices. Therefore, the reductionist methodology inherent in efficacy studies, and in particular in RCTs, may not be appropriate to study the outcomes for much of CAM/IHC, such as Traditional Korean Medicine (TKM) or other complex non-CAM/IHC interventions—especially those addressing comorbidities. In fact it can be argued that reductionist methodology may disrupt the very phenomenon, the whole system, that the research is attempting to capture and evaluate (i.e., the whole system in its naturalistic environment). Key issues that surround selection of the most appropriate methodology to evaluate complex interventions are well described in the Kings Fund report on IHC and also in the UK Medical Research Council (MRC) guidelines for evaluating complex interventions—guidelines which have been largely applied to the complexity of conventional primary care and care for patients with substantial comorbidity. These reports offer several potential solutions to the challenges inherent in studying CAM/IHC. (Coulter et al. 2014)

One of several options for evaluating complex interventions that suited SCAM proponents particularly well is the “A+B versus B” trial. It is a type of study that looks rigorous yet is guaranteed to generate nothing but positive results. There are now hundreds of these “pragmatic” trials, and their principle might be best explained using an example. Let’s take one titled “Acupuncture for Cancer-Related Fatigue in Patients with Breast Cancer: A Pragmatic Randomized Controlled Trial” (Molassiotis et al. 2012). The study tested acupuncture as a treatment of cancer-related fatigue. Cancer patients who were suffering from fatigue were randomized to receive usual care or usual care plus regular acupuncture. The researchers then monitored the patients’ experience of fatigue and found that the acupuncture group did significantly better than the control group. This looked like an encouraging result; an editorial in the journal confirmed this impression by calling the evidence “compelling” (Bower 2012). Thanks to a cleverly overstated press release, the news spread fast, and the study was celebrated worldwide as a major breakthrough in cancer care. Finally, most commentators felt, research had identified an effective therapy for a debilitating symptom that affects so many of the most desperate patients. Few people seemed to realize that this trial tells us next to nothing about what effects acupuncture really has on cancer-related fatigue.

To understand this better, we might take a closer look at the trial design and employ an analogy. Imagine you have an amount of money A, and your friend owns the same sum plus another amount, B. Who has more money? It’s simple: of course your friend. A+B will always be more than A (unless B is a negative amount). For the same reason, “pragmatic” trials following the “A+B versus B” design will always generate positive results (unless the treatment in question does significant harm). Treatment as usual plus acupuncture is more than treatment as usual, and the former is therefore more than likely to produce a better result. This is true even if acupuncture is a mere placebo. After all, a placebo is more than nothing, so the placebo effect will have an impact on the outcome, particularly if we are dealing with a highly subjective symptom such as fatigue.
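The arithmetic behind the analogy can also be demonstrated numerically. The following is a minimal sketch (not taken from any of the cited studies) that simulates an “A+B versus B” trial of a therapy with zero specific efficacy: both arms receive the same “usual care,” and the add-on arm gains only a modest placebo bump. The function name, sample size, and effect sizes are illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible


def simulate_a_plus_b_trial(n_per_arm=200, placebo_effect=0.5):
    """Simulate an 'A+B versus B' trial of a therapy with NO specific
    efficacy. Outcomes are improvement scores under usual care alone
    versus usual care plus an inert add-on whose only contribution is
    a placebo response."""
    # Arm B: usual care alone (mean improvement 1.0, SD 1.0 -- assumed values)
    usual_care = [random.gauss(1.0, 1.0) for _ in range(n_per_arm)]
    # Arm A+B: the same usual care PLUS a placebo bump from the add-on
    add_on = [random.gauss(1.0, 1.0) + placebo_effect for _ in range(n_per_arm)]

    mean = lambda xs: sum(xs) / len(xs)
    return mean(usual_care), mean(add_on)


control, treated = simulate_a_plus_b_trial()
print(f"usual care: {control:.2f}; usual care + inert add-on: {treated:.2f}")
# The add-on arm comes out ahead even though the therapy itself does nothing.
```

The add-on arm wins not because the therapy works but because anything non-harmful added to usual care shifts the outcome, exactly as the A+B > A arithmetic predicts.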

I can be fairly confident that this is more than a theoretical consideration, because we once analyzed all acupuncture studies with such a design (Ernst and Lee 2008). Our hypothesis was that none of these trials would generate a negative result. I probably do not need to tell you that our hypothesis was confirmed by the findings of our analysis. Theory and fact are thus in perfect harmony.

Studies following the “A+B versus B” design can be randomized and thus appear to be rigorous. This means they can fool a lot of people. Yet they do not allow conclusions about cause and effect. In other words, they fail to show that the therapy in question has led to the observed result. Acupuncture might be utterly ineffective as a treatment of cancer-related fatigue, and the observed outcome might be due to the extra care, a placebo response, or other non-specific effects.

The current frequent use of the “A+B versus B” design by SCAM researchers is much more than a theoretical concern. Armed with such (false) positive results, SCAM proponents evidently want us to integrate SCAM into real medicine. But rolling out acupuncture (or any other SCAM supported by such pseudo-research) across routine care at high cost would be entirely the wrong solution. Providing good care with empathy and compassion could be much more effective and less expensive than acupuncture. Moreover, adopting acupuncture on a grand scale would keep us from looking for a treatment that is truly effective beyond a placebo—and that surely would not be in the best interest of the patient.

So, in the past thirty years of SCAM research, we have gone from the rejection of science to accepting that it would be good for promotion, to insisting on an “alternative” version of science, to misleading the public with false-positive findings. It has been a long and tedious journey without actually advancing all that far.

References

Bower, J.E. 2012. Treating cancer-related fatigue: The search for interventions that target those most in need. Journal of Clinical Oncology 30(36): 4449–4450.

Coulter, I.D., G. Lewith, R. Khorsan, et al. 2014. Research methodology: Choices, logistics, and challenges. Evidence-Based Complementary and Alternative Medicine 2014: 780520. DOI: 10.1155/2014/780520.

Ernst, E., and M.S. Lee. 2008. A trial design that generates only “positive” results. Journal of Postgraduate Medicine 54(3): 214–216.

Molassiotis, A., J. Bardy, J. Finnegan-John, et al. 2012. Acupuncture for cancer-related fatigue in patients with breast cancer: A pragmatic randomized controlled trial. Journal of Clinical Oncology 30(36): 4470–4476.

Edzard Ernst

Edzard Ernst is emeritus professor, University of Exeter, United Kingdom, and author, most recently, of Don’t Believe What You Think: Arguments for and against SCAM.

https://skepticalinquirer.org/2022/12/applying-science-to-scam-a-brief-summary-of-the-past-thirty-years/
