Sorting Assessment of CBT Competence

The assessment of CBT competence has become a mess within IAPT, with poor agreement between assessors on whichever measure is used and an inability to predict outcome (see references at the end of this blog). The problem goes to the very heart of IAPT: a failure to ensure a reliable diagnosis. In randomised controlled trials, when the competence of clinicians is being assessed it is known that there has first been a reliable diagnosis of the disorder under study, and this determines what the appropriate targets are, whether a skill appropriate to each target is being deployed, and the skill of that deployment. Without the anchor of reliable diagnosis, assessments of CBT competence will be highly idiosyncratic.

In Simply Effective CBT Supervision (London: Routledge, 2013) I made the point that fidelity to an evidence-based treatment protocol has two components: a) adherence to a protocol for the reliably identified disorder and b) competence in the skill used to tackle an appropriate treatment target. Thus competence is meaningless if discussed outside the context of adherence. A supervision workshop that I delivered in 2014 includes a slide of ‘The Competence Engine’ and an example of a Fidelity Scale, see link below:

https://www.dropbox.com/s/jv22q8lv00orcd6/Simply%20Effective%20CBT%20Supervision%20Workshop.pdf?dl=0

The book contains Fidelity Scales for depression and the anxiety disorders.

Liness et al, Behavioural and Cognitive Psychotherapy (2019), 47, 672–685. doi:10.1017/S1352465819000201
https://www.dropbox.com/s/e26n191ie09sngs/Competence%20and%20Outcome%20IAPT%20no%20relation%202019.pdf?dl=0

Roth et al, Behavioural and Cognitive Psychotherapy (2019), 47, 736–744. doi:10.1017/S1352465819000316

Dr Mike Scott


Mis-selling of the Cognitive Therapy Rating Scale

If your performance has been evaluated using the Cognitive Therapy Rating Scale (or the revised version) you may have a claim for ‘damages’. Curiously, the Cognitive Therapy Rating Scale has a shaky foundation:

  1. The CTRS has only been evaluated in a sample of depressed clients undergoing cognitive therapy [Shaw et al (1999)]. Therapists’ scores on the CTRS did not predict outcome on the self-report measures, the Beck Depression Inventory or the SCL-90 (a more general measure of psychological distress). They did predict outcome on the clinician-administered Hamilton Depression Scale, but accounted for just 19% of the variance in outcome, and it was the structuring items of the scale (setting of an agenda, pacing, homework) that accounted for this 19%, not the items measuring Socratic dialogue etc. The authors concluded: ‘The results are, however, not as strong or consistent as expected’.
  2. There is no evidence that the CTRS is applicable to disorders other than depression. Some aspects of the CTRS, such as Socratic dialogue, may be particularly inappropriate with some clients, e.g. OCD and PTSD sufferers.
  3. The CTRS does not make it clear that the clinician cannot have set an appropriate agenda without first reliably determining what the person is suffering from.
  4. In practice raters appear to pay more attention to the Socratic dialogue item than to interpersonal effectiveness (e.g. non-verbal behaviour). There is a poor intra-class correlation, of the order of 0.1, and ratings of the least competent therapists are more in agreement with those of supervisors than are ratings of the more competent therapists! [McManus et al (2012)]
  5. The Hamilton Scale used in the Shaw et al (1999) study was developed before the DSM criteria, and it is questionable whether any correlation would be found between DSM diagnostic status and score on the CTRS for depression, or indeed any disorder.


Dr Mike Scott