
IAPT – Suicidal and Given Online CBT

I recently came across a former IAPT client whom the Organisation’s own documentation described as considering two different means of suicide. He had been bullied at school and had engaged in a great deal of self-harm. This depressed young man was given computer-assisted CBT by IAPT and dropped out after 4 sessions. He told me that it did not teach him anything he did not already know. IAPT’s decision making is based on exigencies rather than clinical need.

Oftentimes a client with thoughts that they would be ‘better off dead’ is passed back to their GP. The GP is then obliged to contact the patient, only to discover that the ‘suicidal thoughts’ are most often passive, without any active intent or planning. In such instances IAPT has not taken the time to discover whether there was any active planning of suicide. The reaction of the Organisation is ‘we do not want egg on our face’, so the case is bounced back to the GP. Unfortunately GPs don’t complain to their Clinical Commissioning Groups about IAPT, content to get a break from these ‘non-medical’ cases whilst they are being seen by IAPT, albeit that it is a revolving door.

Dr Mike Scott


IAPT Fails To Rebut Charge Of a Tip Of The Iceberg Rate Of Recovery

In the March issue of the British Journal of Clinical Psychology, three academics admit their links to the Improving Access to Psychological Therapies (IAPT) service, having failed to do so on an earlier occasion. Their attempted rebuttal of my paper ‘Ensuring IAPT Does What It Says On The Tin’, published in the same issue of the journal, is a Donald Trump-like exposé. The British Government is looking at making NHS England accountable; to date the latter has allowed IAPT to mark its own homework, with no involvement of the Care Quality Commission. Having spent over £4 billion on IAPT, the time for change is long overdue. Below is my response to Kellett et al (2021).

Evidence-based practice has been likened to a three-legged stool comprising best research evidence, the clinician’s expertise and patient preferences [Spring (2007)]. Wakefield et al (2021) published a systematic review and meta-analysis of 10 years of practice-based evidence generated by the Improving Access to Psychological Therapies (IAPT) services, which is clearly pertinent to the research evidence leg of this stool. In response to this paper I wrote a critical commentary, ‘Ensuring IAPT does what it says on the tin’ [Scott (2021)]. In turn Kellett et al (2021) have responded with their own commentary, ‘The costs and benefits of practice-based evidence: Correcting some misunderstandings about the 10-year meta-analysis of IAPT studies’, accepting some of my points and dismissing others. Their rebuttal exposes an even greater depth of conflicts of interest in IAPT than originally thought. The evidence supplied by Wakefield et al (2021) renders the research evidence leg of the stool unstable, and it collapses under the weight of IAPT.


Transparency and Independent Evaluation


Kellett et al (2021) head the first paragraph of their rebuttal ‘The need for transparency and independent evaluation of psychological services’. Yet these authors claimed no conflict of interest in their original paper, despite the corresponding author’s role as an IAPT Programme Director. In their rebuttal Kellett et al (2021) concede ‘Three of us are educators, clinicians and/or clinical supervisors whose work directly or partially focuses on IAPT services’. This stokes rather than allays fears that publication bias may be an issue.

There has been a deafening silence from Kellett et al (2021) on the fact that in none of the IAPT studies has there been an independent evaluator using a standardised semi-structured diagnostic interview to assess diagnostic status at the beginning of treatment, at the end, and at follow-up. It has to be determined that any recovery is not just a flash in the pan. Loss of diagnostic status is a minimum condition for determining whether a client is back to their old self (or best functioning) post-treatment. Studies that have allowed reliable determination of diagnostic status have formed the basis of the NICE-recommended treatments for depression and the anxiety disorders. As such they speak much more to the real world of a client than IAPT’s metric of single-point assessments on psychometric tests completed in a diagnostic vacuum.


The Dissolution of Evidence-Based Practice

The research evidence leg of IAPT’s evidence-based practice stool is clearly flawed. Kellett et al (2021) seek to put a ‘wedge’ under this leg by asserting that the randomised controlled trials are in any case of doubtful clinical value because their focus is on carefully selected clients, i.e. they have poor external validity. But they provide no evidence of this. Contrary to their belief, randomised controlled trials (RCTs) do admit clients with comorbidity. A study by Stirman et al (2005) showed that the needs of 80% of clients could be accommodated by reference to a set of 100 RCTs. Further, Stirman et al (2005) found that clients in routine practice were no more complex than those in the RCTs. Kellett et al (2021) cannot have it both ways: on the one hand praising IAPT for attempting to observe National Institute for Health and Care Excellence (NICE) guidance, and on the other pulling the rug from under the RCTs that are the basis for the guidelines. Their own account of what constitutes research evidence leads to the collapse of the evidence-based practice stool. It provides a justification for IAPT clinicians to continue to base their clinical judgements on their expertise, ignoring what has traditionally been taken to be research evidence, so that treatments are not based on reliable diagnoses. The shortcomings of basing treatment on ‘expertise’ have been detailed by Stewart, Chambless & Stirman (2018), who observe: ‘The importance of an accurate diagnosis is an implicit prerequisite of engaging in EBP, in which treatments are largely organized by specific disorders’.

‘Let IAPT Mark Its Own Homework, Don’t Put It to The Test’

Kellett et al (2021) claim that it would be too expensive to conduct a high quality, ‘gold standard’ effectiveness study with independent blind assessors using a standardised semi-structured diagnostic interview. But set against the £4 billion already spent on the service over the last decade, the cost would be trivial. It is perfectly feasible to take a representative sample of IAPT clients and conduct independent blind assessments of outcome that mirror the initial assessment. Indeed the first steps in this direction have already been taken in an evaluation of internet CBT [Richards et al (2020)], in which IAPT Psychological Wellbeing Practitioners used the MINI [Sheehan et al (1998)] semi-structured interview to evaluate outcome, albeit that they were not independent evaluators and there could be no certainty that they had not used the interview as a symptom checklist rather than in the way it was intended. Further, the authors of Richards et al (2020) were employees of the owners of the software package or worked for IAPT. Tolin et al (2015) have pointed out that for a treatment to be regarded as evidence-supported there must be at least two studies demonstrating effectiveness in real-world settings, conducted by researchers not involved in the original development and evaluation of the protocol and without allegiance to it. Kellett et al (2021) have failed to explain why IAPT should not be subject to independent, rigorous scrutiny, and their claim that their own work should suffice is difficult to understand.


The Misuse of Effect Size and Intention to Treat

Kellett et al (2021) rightly caution that comparing effect sizes (the post-test mean subtracted from the pre-test mean, divided by the pooled standard deviation) across studies is a hazardous endeavour. But they fail to acknowledge my central point: the IAPT effect sizes are no better than those found in studies that pre-date the establishment of IAPT, that is, they do not demonstrate an added value. Kellett et al (2021) rightly draw attention to the importance of intention-to-treat analysis and attempt to rescue the IAPT studies on the basis that many performed such an analysis. Whilst an intention-to-treat analysis is appropriate in a randomised controlled trial in which less than a fifth of those in the different treatment arms default, it makes no sense in the IAPT context, in which 40% of clients are nonstarters (i.e. they complete only the assessment) and 42% drop out after only one treatment session [Davis et al (2020)]. In this context it is not surprising that Delgadillo et al (2020) failed to demonstrate any significant association between treatment competence measures and clinical outcomes, a point in fairness acknowledged by the latter author. But such a finding was predictable from the Competence Engine [Scott (2017)], which posits a reciprocal interaction between diagnosis-specific, stage-specific and generic competences.
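
For reference, the effect size at issue is the uncontrolled pre-post standardised mean difference. One common formulation, consistent with the definition given above (conventions for the pooled standard deviation vary between studies), is:

\[ d = \frac{\bar{M}_{\mathrm{pre}} - \bar{M}_{\mathrm{post}}}{SD_{\mathrm{pooled}}}, \qquad SD_{\mathrm{pooled}} = \sqrt{\frac{SD_{\mathrm{pre}}^{2} + SD_{\mathrm{post}}^{2}}{2}} \]

Because the same clients supply both means, such within-group effect sizes typically run larger than the between-group effect sizes reported in controlled trials, a further reason why cross-study comparisons of this kind are hazardous.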


Kellett et al (2021) Get Deeper in The Mud Attacking Scott (2018)

Kellett et al (2021) rightly underline my own comment that my study of 90 IAPT clients [Scott (2018)] was hardly definitive, as all had gone through litigation. But they omit to mention that I was wholly independent in assessing them; my duty was solely to the Court as an Expert Witness. Despite this they make the extraordinary claim that my study had a ‘high risk of bias’, which casts serious doubt on their measuring instruments. They failed to understand that in assessing a litigant one is of necessity assessing current and past functioning. In my study I used the current and lifetime versions of a standardised semi-structured interview, the SCID [First et al (1996)]. This made it possible to assess the impact of IAPT interventions whether delivered before or after the trauma that led to the claim. Whatever the timing of the IAPT intervention, the overall picture was that only the tip of the iceberg (9.2%) lost their diagnostic status as a result of these ministrations. Nevertheless, as I suggested, there is a clear need for a further publicly funded study of the effectiveness of IAPT with a representative sample of its clients.


References


Davis, A., Smith, T., Talbot, J., Eldridge, C., & Bretts, D. (2020). Predicting patient engagement in IAPT services: a statistical analysis of electronic health records. Evidence Based Mental Health, 23, 8-14. doi:10.1136/ebmental-2019-300133

Delgadillo, J., Branson, A., Kellett, S., Myles-Hooton, P., Hardy, G. E., & Shafran, R. (2020). Therapist personality traits as predictors of psychological treatment outcomes. Psychotherapy Research, 30(7), 857–870. https://doi.org/10.1080/10503307.2020.1731927.

First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. W. (1996). Structured clinical interview for DSM-IV axis I disorders, clinician version (SCID-CV). Washington, DC: American Psychiatric Press.

Kellett, S., Wakefield, S., Simmonds‐Buckley, M. and Delgadillo, J. (2021), The costs and benefits of practice‐based evidence: Correcting some misunderstandings about the 10‐year meta‐analysis of IAPT studies. British Journal of Clinical Psychology, 60: 42-47. https://doi.org/10.1111/bjc.12268


Richards, D., Enrique, A., Ellert, N., Franklin, M., Palacios, J., Duffy, D., Earley, C., Chapman, J., Jell, G., Siollesse, S., & Timulak, L. (2020). A pragmatic randomized waitlist-controlled effectiveness and cost-effectiveness trial of digital interventions for depression and anxiety. npj Digital Medicine, 3, 85. https://doi.org/10.1038/s41746-020-0293-8

Scott, M.J. (2017). Towards a Mental Health System That Works. London: Routledge.

Scott, M.J. (2018). Improving access to psychological therapies (IAPT) – the need for radical reform. Journal of Health Psychology, 23, 1136-1147. https://doi.org/10.1177/1359105318755264.

Scott, M.J. (2021), Ensuring that the Improving Access to Psychological Therapies (IAPT) programme does what it says on the tin. British Journal of Clinical Psychology, 60: 38-41. https://doi.org/10.1111/bjc.12264


Sheehan, D. V., et al. (1998). The Mini-International Neuropsychiatric Interview (M.I.N.I.): the development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. Journal of Clinical Psychiatry, 59(Suppl 2), 22–33.

Spring, B. (2007). Evidence-based practice in clinical psychology: what it is, why it matters; what you need to know. Journal of Clinical Psychology, 63(7), 611–631. https://doi.org/10.1002/jclp.20373

Stewart, R.R., Chambless, D.L., & Stirman, S.W. (2018). Decision making and the use of evidence-based practice: Is the three-legged stool balanced? Practice Innovations, 3(1), 56–67. doi:10.1037/pri0000063

Stirman, S. W., DeRubeis, R. J., Crits-Christoph, P., & Rothman, A. (2005). Can the Randomized Controlled Trial Literature Generalize to Nonrandomized Patients? Journal of Consulting and Clinical Psychology, 73(1), 127–135. https://doi.org/10.1037/0022-006X.73.1.127


Tolin, D. F., McKay, D., Forman, E. M., Klonsky, E. D., & Thombs, B. D. (2015). Empirically supported treatment: Recommendations for a new model. Clinical Psychology: Science and Practice, 22(4), 317–338. https://doi.org/10.1111/cpsp.12122


Wakefield, S., Kellett, S., Simmonds‐Buckley, M., Stockton, D., Bradbury, A. and Delgadillo, J. (2021), Improving Access to Psychological Therapies (IAPT) in the United Kingdom: A systematic review and meta‐analysis of 10‐years of practice‐based evidence. British Journal of Clinical Psychology, 60, 1-37. https://doi.org/10.1111/bjc.12259



IAPT and The Collapse of Its Practice-Based Evidence

The Improving Access to Psychological Therapies (IAPT) service has failed to deliver practice-based evidence [Wakefield et al (2021) https://doi.org/10.1111/bjc.12259; Scott (2021) https://doi.org/10.1111/bjc.12264]. In an attempted rebuttal of my commentary [Kellett et al (2021) https://doi.org/10.1111/bjc.12268] in the forthcoming British Journal of Clinical Psychology, IAPT fellow-travellers dig an even deeper hole, exposing even more conflicts of interest.



Under UK Government pressure, NHS England is to reconfigure its relationship with social care. It would be timely if the Government also insisted that NHS England put its house in order with regard to the provision of routine mental health services. As a first step it should insist that NHS England staff cannot be employed by an agency, such as IAPT, that they are responsible for auditing.

Further, in 2016 the National Audit Office asked that IAPT be made accountable to an independent body, the Care Quality Commission, but the Service has instead been allowed to continue marking its own homework.


Dr Mike Scott


Mental Health Patients and GPs Caught In IAPT’s Revolving Door

30% of GP referrals to the Improving Access to Psychological Therapies (IAPT) service do not attend, and of those who attend for treatment, 40% do so for only one session [Pulse, November 2nd 2018, ‘Revealed: How patients referred to mental health services end up back with their GP’]. The GP is left to cope with this haemorrhaging, with no input from Clinical Commissioning Groups (CCGs) to rectify matters. The CCGs are complicit in IAPT ‘cherry picking’ clients, refusing those already under another service, such as substance misuse or an eating disorder service. Further, the CCGs do not bat an eye when IAPT claims to have met its target of a 50% recovery rate. They miss the point that there has been no publicly funded independent audit of IAPT. They are blissfully ignorant that, using an independently administered standardised semi-structured interview (the SCID), only the tip of the iceberg (9.2%) recover [Scott (2018) https://doi.org/10.1177/1359105318755264].
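
To make the scale of this attrition concrete, here is an illustrative calculation, assuming for simplicity that the Pulse percentages apply to a notional cohort of 100 GP referrals:

\[ 100 \times 0.30 = 30 \ \text{never attend}, \qquad (100 - 30) \times 0.40 = 28 \ \text{attend only once}, \qquad 100 - 30 - 28 = 42 \]

On these figures only around 42 of every 100 patients referred go on to receive more than one treatment session.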

Evidence-based practice involves an integration of best research evidence, clinician expertise and patient preferences [NHS England document ‘Finding the Evidence’, November 2013].

There are no randomised controlled trials, using a blind evaluator, of IAPT’s modal, low-intensity treatment, making the ‘best research evidence’ leg unstable. GPs do not audit the effect of IAPT on their patients, so their clinical expertise in dealing with these patients is questionable. Shared decision making is an integral part of eliciting patient preferences, but in IAPT clients are usually discharged when they have had the pre-determined number of sessions and/or when their score on a self-report measure falls below a certain cut-off. There is no credible elicitation of clients’ preferences. All legs of the evidence-based practice stool have fault lines, and it collapses under IAPT’s weight. The Agency is a prime example of failed evidence-based practice.

Dr Mike Scott