
Sorting Assessment of CBT Competence

The assessment of CBT competence has become a mess within IAPT, with poor agreement between assessors on whichever measure is used and an inability to predict outcome (see references at the end of this blog). The problem goes to the very heart of IAPT: a failure to ensure a reliable diagnosis. In randomised controlled trials, when the competence of clinicians is assessed it is known that there has first been a reliable diagnosis of the disorder under study, and this determines what the appropriate targets are, whether a skill appropriate to each target is being deployed, and the skill of that deployment. Without the anchor of a reliable diagnosis, assessments of CBT competence will be highly idiosyncratic.

In Simply Effective CBT Supervision (London: Routledge, 2013) I made the point that fidelity to an evidence-based treatment protocol has two components: a) adherence to a protocol for the reliably identified disorder and b) competence in the skill used to tackle an appropriate treatment target. Thus competence is meaningless if discussed outside the context of adherence. A supervision workshop that I delivered in 2014 includes a slide of 'The Competence Engine' and an example of a Fidelity Scale, see the link below:

https://www.dropbox.com/s/jv22q8lv00orcd6/Simply%20Effective%20CBT%20Supervision%20Workshop.pdf?dl=0

The book contains Fidelity Scales for depression and the anxiety disorders.

Liness et al (2019). Behavioural and Cognitive Psychotherapy, 47, 672–685. doi:10.1017/S1352465819000201
https://www.dropbox.com/s/e26n191ie09sngs/Competence%20and%20Outcome%20IAPT%20no%20relation%202019.pdf?dl=0

Roth et al (2019). Behavioural and Cognitive Psychotherapy, 47, 736–744. doi:10.1017/S1352465819000316

Dr Mike Scott