IAPT Training – ‘jump through our hoops and make no difference to client outcome’

That’s the take-home message from a study conducted by Liness et al. (2019). IAPT trainees were evaluated using the Cognitive Therapy Scale – Revised (CTS-R) and client outcome was assessed, mainly with the PHQ-9 and GAD-7, and no relationship was found between the two, either at the end of training or 12 months later, see link below:


Instead of concluding that something is seriously amiss if there is no relationship between competence and outcome, the authors celebrate that they could keep the newly trained therapists scoring highly on the CTS-R!

Curiouser and Curiouser

It is a truly bizarre paper: on the one hand the authors acknowledge that it is important to assess adherence, competence and outcome, but they proceed to analyse only the relationship between competence and outcome. Treatment fidelity involves a combination of adherence (identifying the appropriate disorders/difficulties and matching treatment strategies to them) and competence (how skilfully the treatment is delivered). Thus the assessment of a surgeon’s keyhole skills (competence) would make no sense at all if he/she were not using them in an appropriate context. For example, it was reported this week that a 26-week-old unborn child with spina bifida was operated on with keyhole surgery in the womb to help ensure some mobility after birth. By contrast, keyhole surgery, no matter how competently delivered, would make no sense at all for, say, a person with diabetes alone; it would be a matter of infidelity.

Inept IAPT

IAPT’s procedures make it impossible to ensure adherence. To guarantee adherence, an open-ended interview needs to be conducted to let the client tell their story. This then serves as the springboard for a reliable diagnostic interview, designed to elicit the presence of disorder(s). Such a two-fold procedure protects against the use of misleading rules of thumb, e.g. ‘nightmares of extreme trauma, must be PTSD’. There can be no appropriate matching of protocols to reliably identified disorders without taking the time to get a comprehensive client story [see Scott (2009) Simply Effective Cognitive Behaviour Therapy, London: Routledge]. Taking shortcuts means that the individual receives a hotchpotch of generic CBT for which there is no evidence base.

Trying to determine competence within IAPT’s structures is a will-o’-the-wisp exercise.

The Mis-Selling of the CTS-R

In October 2017 I wrote a blog on this topic. Liness et al. (2019) maintain that the CTS-R addresses the issue of adherence, but it does not: whilst there is an agenda item on the CTS-R, keeping to the agenda does not at all mean that an appropriate agenda has been identified!

The authors note that the CTS-R has become the ‘gold standard’ on courses, but there is a weak evidence base for its predictive power for depression (see earlier blog), an even weaker one for anxiety disorders, and none outside this range. I have suggested that what should be employed instead are fidelity measures that incorporate both adherence and competence [see Scott (2015) Simply Effective Supervision, London: Routledge].

Re-focussing on Real World Outcomes With Routine Clinicians In Customary Contexts

It appears not to have occurred to Liness et al. (2019) that changes on the PHQ-9 and GAD-7 may not actually be measuring outcome. Rather, they are most likely measuring a) improvement with the passage of time, as people inevitably enter therapy at their worst point, and b) a placebo response, because the therapists in the study (42% of whom were clinical or counselling psychologists) created an expectation that the alleged CBT would make a difference and gave clients attention (an average of 11-12 sessions).

Clients were not assessed by someone independent of IAPT, using a standardised diagnostic interview, to determine whether they had got back to their usual selves with treatment, and the results were not contrasted with an attention placebo. It is thus impossible to determine whether the alleged CBT made a real-world, socially significant difference. Nevertheless, the IAPT luminaries in the study will doubtless use the ‘findings’ to promote their brand in the UK and beyond!

The study also lacks ecological validity: where else is there such a high proportion of qualified clinicians? Where else are IAPT staff routinely providing 11-12 sessions? Where else are the clients without severe psychosocial problems, and where else are staff given weekly group supervision plus 1.5-hour individual supervision? Further, the therapists in the study chose the sessions to be rated.

The Hi-jacking of Supervision

Within IAPT training, supervisors are expected to attend courses run at the universities at which their supervisees are being trained. But there is no evidence that this form of supervision results in better client outcomes. It is possible to operate with a wholly different model of supervision, in which its major function is to act as a conduit for evidence-based treatment.

The New Totalitarians

Disturbingly, IAPT is like a totalitarian state, determining mental health job opportunities and the way in which assessment, treatment and supervision are conducted. Further, it controls journals such as Behavioural and Cognitive Psychotherapy, Behaviour Research and Therapy and even, it seems, Cognitive Therapy and Research, in which the Liness et al. (2019) paper appeared (this was the first journal to publish Beck’s seminal study on the efficacy of CBT for depression). It is extremely difficult to get an airing if one sees IAPT in practice as deeply flawed.

Dr Mike Scott

2 thoughts to “IAPT Training – ‘jump through our hoops and make no difference to client outcome’”

  1. I’ll admit I’ve never really ‘got’ the CTS-R; it seems so fragmented as to completely obscure the bigger picture, but I’ve probably missed the point! As a trainee, what I remember is the apparently wildly different interpretations between different supervisors as to what constitutes a good score on the different items.
