
IAPT’s Unreliable Assessment of Competence, Incompetence and Spin

The competence of trainee CBT therapists is routinely assessed using the Cognitive Therapy Rating Scale-Revised (CTRS-R), but a newly published study by Roth et al. (2019) has shown poor levels of agreement on the performance of IAPT trainees using this measure. The levels of agreement were no better when an alternative measure of competence, the University College London (UCL) CBT Scale, was used. On both measures the intra-class correlation coefficients were less than 0.5, indicating poor reliability (on a scale of poor, moderate, good, excellent). The UCL Scale is rooted in the competence framework developed by Roth and Pilling (2008) as part of the IAPT programme.
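
For readers who want to see what sits behind such a figure, here is a minimal sketch in Python of one standard way of computing an intra-class correlation, ICC(2,1) for absolute agreement, together with the poor/moderate/good/excellent banding commonly attributed to Koo and Li (2016). The ratings are entirely hypothetical and are not taken from the Roth et al. (2019) data; the sketch simply illustrates the statistic under discussion.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_targets x k_raters) array, e.g. CTRS-R totals given
    to n recorded therapy sessions by k independent markers.
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-session means
    col_means = ratings.mean(axis=0)   # per-rater means

    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

def band(icc: float) -> str:
    """Interpretation bands referred to in the post (poor < 0.5, etc.)."""
    if icc < 0.50:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc < 0.90:
        return "good"
    return "excellent"

# Hypothetical CTRS-R totals: 6 sessions each rated by 3 markers.
scores = np.array([
    [36, 30, 41],
    [28, 35, 24],
    [40, 33, 45],
    [31, 38, 27],
    [44, 36, 39],
    [29, 25, 37],
])
icc = icc_2_1(scores)
print(f"ICC(2,1) = {icc:.2f} -> {band(icc)} agreement")
```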

The chaos is underlined by a study by Liness et al. (2019), published in Cognitive Therapy and Research, which compared the competence of IAPT trainees, rated on the CTRS-R, with client outcome, assessed mainly with the PHQ-9 and GAD-7. No relationship was found, either at the end of training or 12 months later; see the link below:

https://www.dropbox.com/s/e26n191ie09sngs/Competence%20and%20Outcome%20IAPT%20no%20relation%202019.pdf?dl=0

But the same set of authors as in the Liness et al. (2019) study have published a further paper in Behavioural and Cognitive Psychotherapy, again on IAPT trainees evaluated using the CTRS-R. This time, in the abstract, they reported that ‘CBT competence predicted a small variance in clinical outcome for depression cases’, with no reference to the findings of their other paper! In the body of the Behavioural and Cognitive Psychotherapy report one discovers that for depression cases the CTRS-R explained just 1.3% of the variance in outcome; it is extremely doubtful whether this is of any social or clinical significance. There is also a failure to mention in the abstract that the CTRS-R did not relate to anxiety outcomes at all. The abstract is dominated by the message that training helped trainees score highly on the CTRS-R, without acknowledging that this might be without meaning. Three of the six authors have links to IAPT, so spin is not unexpected.
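
To put that 1.3% in perspective, the back-of-the-envelope calculation below (a hypothetical illustration only, not drawn from the Liness et al. analysis) converts variance explained into the equivalent correlation between CTRS-R score and outcome.

```python
import math

# 1.3% of outcome variance explained, as reported for depression cases.
r_squared = 0.013
r = math.sqrt(r_squared)

print(f"R^2 = {r_squared:.3f} is equivalent to a correlation of r = {r:.2f}")
# With r of roughly 0.11, a trainee's CTRS-R score tells you almost nothing
# about how their depressed client will fare on the outcome measure:
print(f"Variance left unexplained: {1 - r_squared:.1%}")
```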

Liness et al. (2019). Behavioural and Cognitive Psychotherapy, 47, 672–685. doi: 10.1017/S1352465819000201

Roth et al. (2019). Behavioural and Cognitive Psychotherapy, 47, 736–744. doi: 10.1017/S1352465819000316

Roth, A. D. and Pilling, S. (2008). Using an evidence-based methodology to identify the competences required to deliver effective cognitive and behavioural therapy for depression and anxiety disorders. Behavioural and Cognitive Psychotherapy, 36, 129–147. doi: 10.1017/S1352465808004141


Dr Mike Scott



IAPT’s Clients – Vulnerable Adults With No Protection

Neither NHS England, Clinical Commissioning Groups nor the BABCP has taken any steps to ensure that there is independent monitoring of the welfare of IAPT’s clients. Such clients suffer a double whammy: not only do they experience the sense of helplessness that often accompanies psychological debility, but they are also powerless to say anything about their experience.

The CONSORT guidelines (see link below) state that randomised controlled trials should address outcomes that are meaningful to the patient. The same should apply to services delivered in routine practice. Changes in psychometric test scores are not meaningful to clients, whereas no longer suffering from the disorder they presented with at the start of treatment is. But IAPT obfuscates its true performance by sleight of hand with psychometric test results. Clients are fodder for providing psychometric test data at each session, despite there being no certainty that the test is pertinent to what they are suffering from; despite repeated administration meaning that they can remember their last score and will want to convince themselves that they are getting better; and despite the results being interpreted by their own therapist, creating a demand effect.

A major feature of the CONSORT guidelines is that treatment should be evaluated by those independent of service provision. IAPT violates this and every aspect of the guidelines that might be pertinent to routine practice, and clients have no opportunity to protest about incompetence or about the arbitrary limit on the number of sessions.

Unfortunately, editors of journals such as Behavioural and Cognitive Psychotherapy, Behaviour Research and Therapy and The Lancet often ignore the CONSORT guidelines, or any translation of them into routine practice. Consequently the evidence base for the expansion of IAPT into areas such as psychosis in secondary care is much weaker than its workers understand.

https://www.dropbox.com/s/vj2hp1q43hz4lh8/CONSORT%202010%20Explanation%20and%20Elaboration%20Document-BMJ.pdf?dl=0

Dr Mike Scott


Prestigious Journals Have Stopped Looking at Real-World Mental Health Outcomes

Papers in journals such as The Lancet, Behaviour Research and Therapy and Behavioural and Cognitive Psychotherapy have in recent years relied entirely on psychometric tests completed by clients, with no independent assessment by an outside body using a ‘gold standard’ diagnostic interview. The sole use of psychometric tests is great for academic clinicians: research papers can be produced at pace and at little cost, securing places in academia. Conferences are dominated by their offerings, but nothing is actually changing in the real world of clients.


The Lancet Psychiatry paper on the PACE trial of CBT for chronic fatigue syndrome [Sharpe et al. (2015). Rehabilitative treatments for chronic fatigue syndrome. Lancet Psychiatry, 2, 1067–1074] provides a great example of how to ‘muddy the waters’. The authors presented CBT as making a major contribution to the treatment of CFS. But Baraniuk (2017) [Chronic fatigue syndrome prevalence is grossly overestimated using Oxford criteria compared to Centers for Disease Control (Fukuda) criteria in a U.S. population study. Fatigue: Biomedicine, Health and Behavior, 1–15] has pointed out that the authors used the very loose Oxford criteria for CFS, which require only mild fatigue; the prevalence of CFS is ten times lower when the more rigorous Centers for Disease Control (CDC) criteria are used. Thus Sharpe et al. had not demonstrated the efficacy of CBT in a population who truly had CFS.

Last December The Lancet published a paper by Clark et al. on predictors of outcome in IAPT, but again the dependent variable is of doubtful validity: changes on the PHQ-9 and GAD-7 in a population whose diagnostic status is unknown. In fairness, in the discussion Clark et al. (2017) do note that reliance on self-report measures is a limitation of their study, but there is no acknowledgement that their findings are therefore unreliable. Doubtless their conclusion that organisational factors affect the delivery of an efficacious treatment is true, but this is stating the obvious: if a treatment is found to be efficacious in a randomised controlled trial, then unless there is a careful mapping of the key elements of that trial (e.g. reliable diagnosis, ‘gold standard’ assessment, fidelity measures), the translation from research into routine practice will be inadequate.

My hope for the New Year is objective measures of outcome, so that we can truly begin serving clients. Now there is a novel idea.

Dr Mike Scott


CBT Researchers Have Abandoned Independent Blind Assessment – Beware of Findings

I have been looking in vain for the last time CBT researchers assessed outcome on the basis of independent blind assessment, which was a cornerstone of the initial randomised controlled trials of CBT. Current CBT research is more about academic clinicians marketing their wares, and journals such as Behaviour Research and Therapy and Behavioural and Cognitive Psychotherapy, and organisations such as the BABCP and BPS, are happily complicit in this. The message is: give a subject a self-report measure to complete, because it is less costly than expensive, highly trained independent interviewers blinded to treatment; forget about the demand characteristics of a self-report measure (a wish to please those who have provided a service); and don’t worry if the measure does not accurately reflect the construct in question. My psychiatric colleagues might be forgiven for saying that trials of antidepressants have at least usually been double blinded; if, since the millennium, CBT studies have rarely managed even to be single blinded, is it time the CBT-centric era ended? But purveyors of other psychotherapies have bought into the importance of independent blind assessment even more rarely.

The overall impact of this inattention to independent blind assessment is that the case for pushing CBT is not as powerful as the prime movers in the field would have us believe; this may actually be a relief to struggling practitioners. For example, Zhu et al. (2014) [Shanghai Archives of Psychiatry, 26, 319–331] examined 12 randomised controlled trials of CBT for generalised anxiety disorder in which there was supposedly independent blind assessment. In 6 of the 12 studies the main outcome measure was based on the results of a self-report scale completed by the client (i.e. outcome was not actually assessed by the blinded assessor), and the authors concluded that the quality of the evidence supporting the conclusion that CBT is effective for GAD was poor. A meta-analysis of outcome studies conducted by Cuijpers et al. (2016) [World Psychiatry, 15, 245–258], using the criteria of the Cochrane risk of bias tool, found that only 17% (24 of 144) of RCTs of CBT for anxiety and depressive disorders were of high quality. Cuijpers et al. concluded that CBT ‘is probably effective in the treatment of MDD, GAD, PAD and SAD; that the effects are large when the control condition is waiting list, but small to moderate when it is care-as-usual or pill placebo; and that, because of the small number of high-quality trials, these effects are still uncertain and should be considered with caution’. Only half of the studies had blind assessors, and it is not clear whether their ratings or a client-completed self-report measure determined outcome; the study needs further analysis. My impression is that the weakest studies are those examining guided self-help and computer-assisted CBT (the Step 2 interventions in IAPT), yet these are the interventions most commonly offered.

Dr Mike Scott


Bias in CBT Journals

When the organs of communication are controlled by a single ideology we are on a short road to hell. Recently I protested to the editor of Behaviour Research and Therapy (BRAT) that no conflict of interest had been declared in a paper authored by Ali et al., published in this month’s issue of the journal, which focused on IAPT data on relapse after low-intensity interventions. I pointed out that the lead author headed the Northern IAPT research network; not only did the editor ignore the conflict of interest, but so too did the two reviewers of a rejoinder to the paper that I wrote. And it is not just BRAT: IAPT-sponsored papers regularly appear in Behavioural and Cognitive Psychotherapy without declarations of conflicts of interest. I have protested to the editor about this, but again to no avail. Unfortunately it is not just what the editors of CBT journals allow through the ‘Nothing to Declare’ aisle that is a cause for concern, but also their blocking of objections to the current zeitgeist. More about this anon.

Dr Mike Scott