Enough already: Let’s move on from meta-analyses of psychoanalytic psychotherapy and do the hard work of quality studies

The past few years have seen an increasing push to demonstrate the legitimacy of long-term psychodynamic and psychoanalytic therapies (e.g., Leichsenring & Rabung, 2008). It seems proponents of psychodynamic therapy are trying to play catch up. There’s an enormous amount of research support for cognitive behavioral approaches; by contrast, controlled research for psychodynamic approaches is sparse.

Since long-term psychoanalytic psychotherapy is a mouthful, we’ll follow the cue of those before us and call it LTPP for short.

It’s unfortunate there’s not a lot of controlled research on LTPP, as I think controlled studies carefully examining the processes and outcomes of psychodynamic therapies could only enhance our understanding of treatment. As a way to bolster support, some psychodynamic researchers have published meta-analyses of the existing studies. Some recent ones concluded that LTPP is an effective treatment for a variety of psychiatric conditions (De Maat, 2009; Leichsenring & Rabung, 2008, 2011). These findings were not without controversy, however.

You may have noticed, for example, that Leichsenring and Rabung are listed twice. There’s a reason for this: their 2008 meta-analysis was widely criticized for miscalculating effect sizes. According to critics, the researchers had simply looked at pre-post changes (within-group differences) rather than comparing LTPP against the control conditions (between-group differences). The researchers redid their analysis in the 2011 meta-analysis and found LTPP was effective, albeit to a lesser degree.
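To make the distinction concrete, here is a minimal sketch in Python of the two ways of computing an effect size. The numbers are invented for illustration and are not taken from any of the meta-analyses discussed here.

```python
import numpy as np

def cohens_d(mean_diff, pooled_sd):
    """Standardized mean difference."""
    return mean_diff / pooled_sd

# Hypothetical summary statistics for one trial (illustrative only).
ltpp_pre, ltpp_post, ltpp_sd = 30.0, 18.0, 10.0            # treatment group symptom scores
control_pre, control_post, control_sd = 30.0, 24.0, 10.0   # control group symptom scores

# Within-group (pre-post) effect size: how much the LTPP group changed.
within_group_d = cohens_d(ltpp_pre - ltpp_post, ltpp_sd)

# Between-group effect size: how much MORE the LTPP group changed than the control group.
pooled_sd = np.sqrt((ltpp_sd**2 + control_sd**2) / 2)
between_group_d = cohens_d((ltpp_pre - ltpp_post) - (control_pre - control_post), pooled_sd)

print(f"within-group d = {within_group_d:.2f}")    # 1.20 -- looks large
print(f"between-group d = {between_group_d:.2f}")  # 0.60 -- the controlled comparison is smaller
```

With the same made-up data, the pre-post (within-group) effect looks twice as large as the controlled (between-group) effect, which is exactly the kind of inflation critics pointed to.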

Although I think meta-analyses on psychodynamic therapy have been overplayed of late, I was excited about a new one published in Clinical Psychology Review (Smit et al., 2012). This article is a collaboration between a group of Dutch researchers and John Ioannidis. I perked up at the mention of Dr. Ioannidis’ involvement.

Who is John Ioannidis?

Not to give short shrift to the others involved, but I was really excited by Ioannidis’ presence on this article. Ioannidis, a medical researcher with several academic appointments, has become one of the foremost experts in the credibility of medical research. He published a hugely influential paper arguing that most medical findings are inaccurate, and he was even profiled in The Atlantic. With his name attached to this piece, I could be confident that the methodology of this meta-analysis had been scrupulously thought out and executed.

A little background on meta-analysis

A meta-analysis is a way of consolidating a number of studies in a particular area so that results can be compared across studies. Researchers can then draw more general conclusions from the pooled data. This is done by converting the results of each individual study into effect sizes.
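As a rough illustration of how that pooling works, here is a minimal sketch of a fixed-effect (inverse-variance weighted) meta-analysis. The effect sizes and standard errors are made-up numbers, not values from any of the studies discussed in this post.

```python
import numpy as np

# Hypothetical per-study effect sizes (Cohen's d) and their standard errors (illustrative only).
effect_sizes = np.array([0.45, 0.30, 0.80, 0.10])
standard_errors = np.array([0.20, 0.15, 0.30, 0.25])

# Fixed-effect model: weight each study by the inverse of its variance,
# so more precise studies count more toward the pooled estimate.
weights = 1.0 / standard_errors**2
pooled_d = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# 95% confidence interval around the pooled effect.
ci_low, ci_high = pooled_d - 1.96 * pooled_se, pooled_d + 1.96 * pooled_se
print(f"pooled d = {pooled_d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

Real meta-analyses involve many more decisions (random- vs. fixed-effect models, heterogeneity statistics, publication-bias checks), but the pooled estimate is only as meaningful as the study-level numbers fed into it, which is the “garbage in, garbage out” concern discussed next.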

Like any tool, meta-analyses are only as good as the way they are used. Researchers make decisions about what studies to include (and not to include), what outcomes to look at, and how to run the analyses. As they say, “garbage in, garbage out.” Moreover, meta-analyses are no substitute for rigorously controlled studies.

The bottom line: meta-analyses of low quality research lead to low quality conclusions

The reason why I’m hoping this article will be the final word on this topic for now is that it ultimately points to the need for more high quality data.

In contrast to previous meta-analyses, the researchers in this study had difficulty drawing firm conclusions about LTPP because the available research was generally of low quality. Their main criticism is that LTPP was often compared against substandard treatments. The authors call these “straw man” comparisons, as there is little reason to believe these control conditions are effective. In the few studies that compare LTPP against evidence-based treatments such as dialectical behavior therapy, LTPP does not fare so well, according to the researchers.

What this means is that without well-controlled studies pitting LTPP against established treatments for specific psychiatric problems, it is difficult to gauge the effectiveness of LTPP. The few high quality studies available suggest that when LTPP is compared against bona fide treatments, it doesn’t appear to be particularly effective. Hopefully, researchers will now take a break from meta-analysis and focus their efforts on creating more high quality, controlled studies comparing LTPP to treatments with a strong track record.

Let’s move on and do the work…

I think we’ve seen enough meta-analyses on LTPP for the time being. If a strong argument for LTPP is to be made, it will require a focus on quality, controlled research that compares LTPP to bona fide treatments for specific conditions.

But don’t take my word for it: I highly recommend reading the study yourself. For a scientific article, it’s actually quite lucid and readable. I obtained it by following James Coyne’s suggestion in the blog post that alerted me to this article, and emailing the author, Arnoud Arntz, who quickly and thoughtfully sent me a copy:

Arnoud.Arntz@Maastrichtuniversity.nl

Motivating Clinicians to Learn and Use Exposure Therapy

Although exposure-based treatments have been around for several decades, and exposure is arguably at the core of the most effective treatments for anxiety-related disorders, only a minority of clinicians actually use exposure in an intentional and planful way. Barriers include lack of knowledge, concerns about potential harm, and a perceived rigidity in using exposure. One promising avenue for overcoming some of these barriers is easily accessible Internet-based training. A group of researchers associated with Behavioral Tech, the umbrella organization at the heart of the dissemination of Dialectical Behavior Therapy (DBT), conducted a study aimed at encouraging clinicians to use exposure-based treatments and training them in their use (Harned, Dimeff, Woodcock, & Skutch, 2011).


The Three Conditions Used in the Study

The researchers created an online multimedia training in exposure therapy. A total of 51 participants were randomly assigned to one of three conditions:

  1. An online training in exposure therapy (ET OLT).
  2. The online training in exposure therapy plus 1-2 brief phone calls from the experimenters. In these calls, the experimenters responded to questions and attempted to increase engagement using Motivational Interviewing. Motivational interviewing is a well-supported approach, but I should note that, as the authors admit, it’s impossible to know whether motivational interviewing had a unique effect or whether the participants simply found it helpful to talk to the experimenters.
  3. In an attempt to have a placebo condition (control OLT), a third of the participants didn’t receive exposure training at all. Instead, they received online instruction in using DBT to validate clients.

What Did They Find?

As it turns out, online training appears to be a viable means of educating therapists about exposure therapy and increasing therapist confidence in using exposure. The addition of the phone calls appeared to improve attitudes towards exposure therapy beyond the training alone, but it’s hard to know for certain whether this is because the phone calls were rooted in motivational interviewing or simply because the therapists had a chance to talk through their concerns with a knowledgeable and sympathetic person.

Limitations and What We Can Take Away

I’ll mention here that participants were recruited from a DBT listserv. That participants were on a DBT listserv suggests they were more favorably disposed towards evidence-based therapy than, say, subscribers to a Jungian listserv would be. That these individuals also volunteered for a research study further narrows the sample to people open to learning these sorts of treatments. Consequently, this isn’t a representative sample of therapists.
One take-home message from this study: lack of access to decent training is a major barrier to using exposure, and this barrier can be surmounted through online training. I think this is a pretty important point. There are therapists who want more training in exposure therapy, and the Internet is a very viable way of making training available.
Additionally, a brief (< 20 minutes) phone call or two can help grease the wheels and increase the likelihood that someone will use the treatment.
The researchers looked at a number of other variables, but at the risk of cluttering this post, I’ll leave those out. If you’re interested, you can download a pdf of the article through NIH by clicking on the link below.

Reference

Harned, M. S., Dimeff, L. A., Woodcock, E. A., & Skutch, J. M. (2011). Overcoming barriers to disseminating exposure therapies for anxiety disorders: A pilot randomized controlled trial of training methods. Journal of Anxiety Disorders, 25, 155-163.

Reducing Shame in Addictions: Slow and Steady Wins the Race

I’m pretty excited about publishing the 51st randomized clinical trial on Acceptance and Commitment Therapy (in The Journal of Consulting and Clinical Psychology). Our study is the first randomized trial ever published to test the effectiveness of an intervention targeting shame in substance use disorders. Authors have been writing about the importance of shame in addiction for decades, but no one has spent the time and money to actually test an intervention. It’s pretty cool to be the first.

This study adds to the rapidly growing database on ACT

The number of randomized clinical trials on ACT is growing rapidly, with most studies published in just the last four years (see the graph below of the number of published randomized clinical trials on ACT by year, courtesy of Steve Hayes; the graph is missing the five most recently published trials).

[Graph: published randomized clinical trials on ACT, by year]

Those who follow this blog are going to get a sneak peek at what will be in the manuscript. Below, I’ll snip out a few findings and the abstract. I’m pretty excited about this work and where our research on shame and self-stigma is leading. Stay tuned to this blog for more about where this work goes. You can find past publications on the topic on our Portland Psychotherapy publications page.


First, the abstract:

Objective: Shame has long been seen as relevant to substance use disorders, but interventions have not been tested in randomized trials. This study examined a group-based intervention for shame based on the principles of Acceptance and Commitment Therapy (ACT) in patients (N = 133; 61% female; M = 34 years old; 86% Caucasian) in a 28-day residential addictions treatment program. Method: Consecutive cohort pairs were assigned in a pair-wise random fashion to receive treatment as usual (TAU) or the ACT intervention in place of six hours of treatment that would have occurred at that same time. The ACT intervention consisted of three, two-hour group sessions scheduled during a single week. Results: Intent-to-treat analyses demonstrated that the ACT intervention resulted in smaller immediate gains in shame, but larger reductions at four month follow up. Those attending the ACT group also evidenced fewer days of substance use and higher treatment attendance at follow up. Effects of the ACT intervention on treatment utilization at follow up were statistically mediated by post treatment levels of shame, in that those evidencing higher levels of shame at post treatment were more likely to be attending treatment at follow up. Intervention effects on substance use at follow up were mediated by treatment utilization at follow up, suggesting that the intervention may have had its effects, at least in part, through improving treatment attendance. Conclusions: These results demonstrate that an approach to shame based on mindfulness and acceptance appears to produce better treatment attendance and reduced substance use.
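For readers less familiar with the term “statistically mediated” in the abstract, here is a minimal, hypothetical sketch of a simple mediation analysis (the product-of-coefficients approach with a bootstrap confidence interval). The variable names and data are invented for illustration; this is not the study’s actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: a 0/1 treatment indicator, a mediator, and an outcome (not the study's data).
n = 200
treatment = rng.integers(0, 2, n)                    # e.g., ACT group vs. TAU (hypothetical)
mediator = 0.5 * treatment + rng.normal(size=n)      # e.g., post-treatment shame level
outcome = 0.4 * mediator + rng.normal(size=n)        # e.g., treatment attendance at follow-up

def ols_slope(y, x, covariate=None):
    """Slope of x in an ordinary least squares regression of y on x (plus an optional covariate)."""
    columns = [np.ones(len(y)), x.astype(float)]
    if covariate is not None:
        columns.append(covariate.astype(float))
    X = np.column_stack(columns)
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1]

# Path a: treatment -> mediator; path b: mediator -> outcome, controlling for treatment.
a = ols_slope(mediator, treatment)
b = ols_slope(outcome, mediator, covariate=treatment)
indirect_effect = a * b

# Bootstrap a confidence interval for the indirect (mediated) effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_b = ols_slope(mediator[idx], treatment[idx])
    b_b = ols_slope(outcome[idx], mediator[idx], covariate=treatment[idx])
    boot.append(a_b * b_b)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect = {indirect_effect:.2f}, 95% bootstrap CI [{ci_low:.2f}, {ci_high:.2f}]")
```

If the bootstrap interval excludes zero, the treatment’s effect on the outcome is said to be transmitted, at least in part, through the mediator.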


And the overall summary of the findings from the discussion:

A six-hour group using an ACT approach to shame as a small part of a 28-day residential program led to slower immediate gains in shame, but better long term progress….Results indicated that reductions in shame during active treatment predicted higher levels of substance use at follow up. Mediational analyses suggested that the more gradual reductions in shame found in the ACT group protected against the pattern seen in TAU for shame reductions to be associated with subsequent higher levels of substance use. As predicted, the ACT intervention led to higher levels of outpatient treatment attendance during follow up, which in turn was functionally related to lower levels of substance use. Across the board, participants in the ACT condition showed a pattern of continuous treatment gains, especially on psychosocial measures, rather than the boom and bust cycles seen in treatment as usual.

 Our explanation for this pattern of results:

… something in the six hours spent in the ACT group changed the overall effect of this residential program. Unhealthy suppression of shame may be involved in the “treatment high” sometimes seen in early recovery in which sobriety can lead to unrealistic treatment gains, only to be followed by urges to use, relapse, or depression … It seems plausible that these six hours [of the intervention] kept participants from interacting with the overall treatment program in a way that produced illusory short term gains, perhaps by helping them experience shame in a more open and mindful fashion, thereby allowing the emotion to perform its regulatory function of warning against or punishing violations of personal values or social norms and of helping to repair strained social roles. The resulting improvement in functioning and reintegration into healthy social networks, such as those found in a recovery community, led to less shame over time.

At the end of our article we summed up our hopes for how this research might help people with addiction:

Many people with substance use disorders experience shame as a result of the stigma of substance abuse, failure to control their substance use, and failures in role functioning. Understandably, people are motivated to avoid or reduce this extremely painful affect. Unfortunately, when the emotion of shame itself becomes the target of avoidance, this may exacerbate shame in the long run, even though it may provide some relief in the short-term. In a similar way, while negative self-conceptions are painful, direct change efforts can paradoxically increase the frequency and regulatory power of negative self-conceptions. Results of this study suggest that acceptance and mindfulness based interventions may help people to step out of a cycle of avoidance and shame and move toward a path of successful recovery that leads to more stable reductions in shame and to more functional ways of living. 


Citation:

Luoma, J. B., Kohlenberg, B. S., Hayes, S. C., & Fletcher, L. (in press). Slow and steady wins the race: A randomized clinical trial of Acceptance and Commitment Therapy targeting shame in substance use disorders. Journal of Consulting and Clinical Psychology.

The full study should be available shortly on the journal’s website.

“Evidence-Based Psychotherapy” versus “Scientifically Oriented Psychotherapy”

I just stumbled across a new paper by David and Montgomery (2011), who provide a novel system for categorizing psychotherapies in terms of their quality of evidence. One reason we named this blog Science-Based Psychotherapy is to highlight some of the flaws in the current methods of evaluating evidence-based practice. I hope that some of the recommendations of David and Montgomery (2011) get adopted, because their guidelines would be a huge advance over the current state of affairs. As stated in the article:

…all the current systems of evaluating evidence-based psychotherapies have a significant weakness; they restrict their focus on evidence to data supporting (psycho)therapeutic packages while ignoring whether any evidence exists to support the proposed theoretical underpinnings of these techniques (i.e., theory about psychological mechanisms of change; p. 90).

Evidence-based therapy lists ignore basic science and theory

One big problem of the current methods of evaluating evidence is the lack of attention to basic science and theory. The result is that therapy packages that are based on theories that have been clearly invalidated can still appear to be scientifically credible:

By ignoring the theory, the evaluative frameworks of various health-related interventions (including psychotherapy), technically (a) allow pseudoscientific (i.e., “junk-science”) interventions to enter into the classification schemes and/or (b) bias the scientific research in a dangerous direction (p. 90).

The danger of these kinds of incentives is that they push researchers to focus solely on outcome research at the expense of testing and refining the scientific theories that will allow for future advances in therapy.

…a consequence of current classification schemes (which consistently do not address underlying theories about mechanisms of change) is that as long as there are randomized trial data, the validity of the underlying theory is less relevant (p. 90).

The current evaluative systems focus on only one kind of evidence: outcome evidence based on the performance of particular therapy packages. This evidence is typically in the form of randomized controlled trials (RCTs). What David and Montgomery add is a second factor that focuses on evidence for the underlying theory.

They propose that each of these two factors be evaluated on three levels:

(a) empirically well supported;

(b) equivocal/no clear data, which includes not yet evaluated, preliminary data, or mixed data;

(c) strong contradictory evidence (SCE; i.e., invalidating evidence).

Their paper includes a diagram showing how these combinations break down into categories.

One of the cool things about this framework is that it allows distinctions between therapies with both types of evidence and therapies that only have one form of evidence. They call those therapies with the highest levels of evidence “Scientifically Oriented Psychotherapies.”

Scientifically oriented psychotherapies (SOPs) are those which do not have clear SCE for theory and package; the highest level of validation of a SOP is that in which both the theory about psychological mechanisms of change and the therapeutic package are well validated (i.e., Category I). A SOP seeks to investigate empirically both the therapeutic package in question and the underlying theory guiding the design and implementation of the therapeutic package (i.e., theory about mechanisms of change; p. 91).
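To make the two-factor structure concrete, here is a rough sketch of the framework as a small piece of Python. Only the broad labels come from the article (the three evidence levels, Category I for a fully validated SOP, and the POP designation when strong contradictory evidence is present); the intermediate wording is my own simplification, not the authors’ exact category numbering.

```python
from enum import Enum

class Evidence(Enum):
    WELL_SUPPORTED = "empirically well supported"
    EQUIVOCAL = "equivocal / no clear data"
    CONTRADICTED = "strong contradictory evidence (SCE)"

def classify(theory: Evidence, package: Evidence) -> str:
    """Rough classification in the spirit of David & Montgomery's two-factor framework.

    Only the broad labels are taken from the article; the intermediate
    wording is a simplification for illustration.
    """
    if Evidence.CONTRADICTED in (theory, package):
        # Strong contradictory evidence on either factor; if the therapy is still
        # promoted as scientific, the article would call it a pseudoscientifically
        # oriented psychotherapy (POP).
        return "POP territory: strong contradictory evidence on theory and/or package"
    if theory is Evidence.WELL_SUPPORTED and package is Evidence.WELL_SUPPORTED:
        return "Category I SOP: both theory and package well validated"
    return "SOP with equivocal evidence on at least one factor (intermediate category)"

# Example: supportive trial data paired with an invalidated theory.
print(classify(theory=Evidence.CONTRADICTED, package=Evidence.WELL_SUPPORTED))
```

The example call at the bottom mirrors the case discussed below: a therapy with some outcome evidence that rests on, and is promoted with, an invalidated theory.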

A definition of pseudoscience

This allows for a pretty solid definition of a therapy based on pseudoscience.

Pseudoscientically oriented psychotherapies (POPs) are those that claim to be scientific, or that are made to appear scientific, but that do not adhere to an appropriate scientific methodology (e.g., there is an overreliance on anecdotal evidence and testimonial rather than empirical evidence collected in controlled studies; Lilienfeld, Lynn, & Lohr, 2003)…. We define POPs as therapies used and promoted in the clinical field as if they were scientifically based, despite strong contrary evidence related to at least one of their two components (i.e., therapeutic package and theory; p. 92).

One consequence of this approach is that it allows for the identification of therapies that have accumulated evidence of effectiveness, but where the theory on which they are based has been invalidated. If these therapies are promulgated based on the invalidated theory, they are classified as pseudoscientifically oriented psychotherapies (POP). Here’s an example from their article of a commonly utilized approach, neurolinguistic programming, that is based on a disproven theory:

An interesting shift from SOPs to POPs is illustrated by neurolinguistic programming. Once an interesting system (e.g., Category IV of SOPs, according to our classification), it is now seen largely as a POP (Category VII) because although its theory was invalidated by a series of studies (for details, see Heap, 1988; Lilienfeld et al., 2003), it continues to be promoted in practice based on the same theory, as if it were valid (p. 95).

Let’s break this down a little bit. While there is a general lack of evidence for the effectiveness of NLP, there is even greater consensus that its underlying theory contradicts basic research in neuroscience and psychology. NLP uses many scientific-sounding but empty terms such as pragmagraphics, surface structure, non-accessing movement, metamodeling, metaprogramming, and submodalities. While these terms form the theoretical foundation for many of NLP’s techniques and sound scientific, they have not stood up to scientific scrutiny, and thus the term pseudoscientific applies to this therapy.

Science cannot be stagnant. It is ever evolving and needs to be modifiable based on what the data suggest. In order for science to progress and produce effective treatments over time, good theory is needed. Theory is what allows scientists to make sense of the findings they observe and guides new research. Brute force empiricism, without theory, leads to a lot of dead ends and wasted energy. I’m heartened to see a leading journal discussing alternative schemes for evaluating the scientific credibility of therapies, schemes that focus on mechanisms of action and theory and incorporate understanding derived from basic science.

Reference:

David, D., & Montgomery, G. H. (2011). The scientific status of psychotherapies: A new evaluative framework for evidence-based psychosocial interventions. Clinical Psychology: Science and Practice, 18(2), 89–99.

What is Science-Based Psychotherapy?

Science-Based Psychotherapy is focused on educating therapists and the public about the role of science in the practice of psychotherapy. We will blog about topics such as:

1) How to use scientific thinking to inform the practice of psychotherapy

2) Particular psychotherapy methods that have been studied scientifically, and the evidence — either for or against — those models

3) New findings in basic and applied research that might have implications for psychotherapy practice

4) Research relating to training, supervision, professional well-being, and continuing to develop as a psychotherapist.

While psychotherapy is at its heart an interpersonal enterprise, that enterprise is best informed by scientific findings whenever possible. We believe that the therapeutic relationship is very important for effective psychotherapy, and we strive to have a positive therapeutic relationship with every client we see, but we also believe that psychotherapy is best guided by science. Fortunately, the evidence base for psychotherapy has grown immensely over the last two decades, and we now know a lot more about what works in therapy.

The name of our blog was inspired by the writers at Science-Based Medicine. Like them, we believe that good science is the best way to determine whether mental health treatments are safe and effective. This idea has been the core of the evidence-based psychotherapy (EBP) movement. While the EBP movement has been a positive development in many ways and we are supportive of it, we also think that EBP proponents often focus too much on clinical trials as the primary (or sole) source of evidence for whether a mental health intervention is safe and effective. Lists of recognized evidence-based psychotherapies (for example, the APA Division 12 list) are often based solely on outcome research from clinical trials, and other kinds of applied or basic research are little considered. This is not optimal for the progress of science over time or for guiding therapists about what to do in therapy.

All the authors of Science-Based Psychotherapy are researchers, as well as active clinicians, with years of scientific study and clinical practice under our belts.

If you are looking for lists of evidence-based psychotherapies, here are some resources:
