Categories
BABCP Response - NICE Consultation January 2022

The Bell Tolls for IAPT if NICE Has Its Way

according to the BABCP’s submission (BABCP response – NICE consultation draft) to the National Institute for Health and Clinical Excellence (NICE). Implementation of the latter’s proposed guidance would mark the end of the Improving Access to Psychological Therapies (IAPT) service.

Interestingly, BABCP recommends that assessment should begin with a reliable diagnostic interview and acknowledges that IAPT’s Psychological Wellbeing Practitioners (PWPs) are not equipped to do this. Further, BABCP recommends that outcomes should be assessed from the client’s perspective, but does not specify how. Ironically, some of BABCP’s own recommendations undermine the functioning of its over-indulged prodigy, IAPT. BABCP is alarmed that the proposed guidance would, in its view, herald the end of stepped care.

BABCP is aghast that NICE has not included studies by IAPT-related personnel in determining the way forward. In defence of IAPT, BABCP cites the Wakefield et al (2021) study https://doi.org/10.1111/bjc.12259 published in the British Journal of Clinical Psychology, but fails to mention my rebuttal paper, Scott (2021) https://doi.org/10.1111/bjc.12264, published in the same issue of the journal. Quite simply, NICE does not give any credence to studies in which agencies mark their own homework. This is thoroughly reasonable.

The BABCP has rightly pointed out to NICE that, in recommending group interventions as the starting point for offering clients help, NICE has not properly looked at the context of the group studies. As I pointed out in my submission to NICE, COMMENTS ON PROPOSED GUIDANCE (also submitted via BABCP as a stakeholder), there are considerable hurdles in engaging clients in group therapy; see Scott and Stradling (1990), ‘Group cognitive therapy for depression produces clinically significant reliable change in community-based settings’, Behavioural Psychotherapy, 18: 1-19, and Simply Effective Group Cognitive Behaviour Therapy, Scott (2011) https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiph5Hlvbb1AhWKX8AKHRSJDZ0QFnoECAUQAQ&url=https%3A%2F%2Fwww.amazon.co.uk%2FSimply-Effective-Cognitive-Behaviour-Therapy%2Fdp%2F0415573424&usg=AOvVaw0nam02gszlQ0HqCktSCB0s.

In fairness, I think Prof Shirley Reynolds of the BABCP has done a great job in reviewing the extensive documentation provided by NICE and collating the individual submissions, all within a very brief period of time. I understand from her that these matters will feature in the next issue of CBT Today, and whilst I was happy to have my name noted as having submitted, there are important aspects of the submission on which I wish to dissent.

NICE makes its formal recommendations in May. Interesting times.

 

Dr Mike Scott


Proposed NICE Depression Guidance – Response

My submission to NICE is set out in COMMENTS ON PROPOSED GUIDANCE. I also submitted it via BABCP as a stakeholder. The closing date is Jan 12th.

 

Dr Mike Scott


People Cannot Benefit from a Treatment To Which They Have Not Been Exposed – The Undermining of IAPT

The Improving Access to Psychological Therapies (IAPT) Service does not assess treatment fidelity. Thus, there can be no certainty that clients receive an evidence-based treatment; IAPT therapies are not evidence-based treatments (EBTs). Despite this, the major funder of IAPT training days, SilverCloud, claims on its website ‘up to a 70% real-world recovery’ using its computer-assisted products, for all common disorders except PTSD and OCD! The Advertising Standards Authority (ASA) needs to look at this; the ASA has a complaints form that can be completed online. SilverCloud’s UK address is Suite 1350, Kemp House, 152 City Road, London, EC1V 2NX. My own study of 90 IAPT cases suggests just a 10% recovery rate, Scott (2018) https://doi.org/10.1177%2F1359105318755264.

IAPT has produced no evidence that its therapists using SilverCloud make any added difference to their clients over and above those who did not use it; see SilverCloud’s Space from Depression programme, NICE guidance ‘Space from depression for treating adults with depression’, Medtech innovation briefing published May 7th 2020. Strangely, the NICE IAPT Expert Panel concluded that the case for adoption is ‘partially supported’, despite the body of the report noting lower end-of-treatment depression scores for the clients of therapists who did not use the computer-assisted CBT. This is an example of spin and conflict of interest.

 

The SilverCloud website cites 10 references appearing in peer-reviewed journals to support its work. But none of the studies cited by SilverCloud involve blind independent assessors of outcome using a ‘gold-standard’ diagnostic interview. In the cited review study by Wright et al (2019) [Wright JH, Owen JJ, Richards D, et al. Computer-assisted cognitive-behavior therapy for depression: a systematic review and meta-analysis. J Clin Psychiatry. 2019;80(2):18r12188], the third author is employed by SilverCloud.

‘Real-world’ recovery represents a change that a client would care about, such as no longer suffering from the disorder that they were suffering from before treatment, or a return to best functioning. In a footnote SilverCloud defines recovery as ‘Moving from clinical caseness to non-caseness, i.e. lowering the score on PHQ-9 and GAD-7 from above the clinical threshold to below the threshold’. Such changes are meaningless to clients; they are not ‘real-world’.
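To see just how little this threshold-crossing rule can demand, here is a minimal sketch of it in Python. The cut-offs used (a PHQ-9 score of 10 or more, or a GAD-7 score of 8 or more, counting as clinical caseness) are the commonly cited IAPT thresholds; they are my assumption, as the SilverCloud footnote does not state them.

# Minimal sketch of the 'caseness to non-caseness' recovery rule described above.
# The thresholds are assumed (commonly cited IAPT caseness cut-offs), not taken
# from the SilverCloud footnote itself.

PHQ9_CASENESS = 10   # assumed depression caseness threshold
GAD7_CASENESS = 8    # assumed anxiety caseness threshold

def is_case(phq9, gad7):
    # A client counts as a clinical 'case' if either score is at or above threshold.
    return phq9 >= PHQ9_CASENESS or gad7 >= GAD7_CASENESS

def recovered(pre_phq9, pre_gad7, post_phq9, post_gad7):
    # 'Recovery' under this rule: a case before treatment, a non-case afterwards.
    return is_case(pre_phq9, pre_gad7) and not is_case(post_phq9, post_gad7)

# A one-point fall across the cut-off counts as 'recovery', even though the
# client would be unlikely to notice any real-world difference.
print(recovered(10, 7, 9, 7))   # True

On this rule a client scoring 10 on the PHQ-9 before treatment and 9 afterwards is counted as recovered, which is precisely the kind of change a client is unlikely to recognise.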

Here is what one client told me:

‘I found Silvercloud ineffective, generic and not tailored to my personal situation. It wasn’t engaging or helpful and as such I didn’t engage with the website very much. Consequently, the following weekly call with the IAPT therapist  were sometimes made difficult by the fact I hadn’t completed the same questionnaire as the week before or read through articles. I wanted to talk about my situation, my feelings and find out why I was feeling the way I was, but I felt I was just being led back to using the online SilverCloud resource.

‘It was in 2017 that my doctor suggested I try SilverCloud online CBT with telephone support and in September 2017, I started speaking to another IAPT counsellor. He seemed to be a very nice man. After a few weekly calls, he stated that he didn’t believe I was depressed and so he changed the original Silvercloud course I had started and reset it back to a new series of 6 sessions. The weekly calls lasted between 20 minutes to an hour depending on what we discussed, but always concluded with him asking me to log onto SilverCloud and work my way through the programme before our next call. After the requisite 6 sessions finished in February 2018, that was it! No answers, no tools to help me cope, just signed off, discharged, but told I had 12 month access to SilverCloud. I haven’t used the resource since’.

In general, the claims of clinicians and supervisors with regard to treatment fidelity do not match those of independent blind raters [Waltman et al (2017) https://doi.org/10.1016/j.janxdis.2021.102407]; there are vested interests at play.

The author knows of no study of low-intensity CBT (guided self-help, group psychoeducation, computer-assisted CBT) that has assessed treatment fidelity. Usage of a manual does not guarantee treatment fidelity. Approximately three-quarters of IAPT clients receive a low-intensity intervention on entry to the Service [Davis et al (2020) https://doi.org/10.1136/ebmental-2019-300133].

IAPT’s approach ostensibly depends on the results of randomised controlled trials (RCTs) of CBT, but a study of remission rates in CBT for anxiety disorders (including OCD and PTSD) by Levy, Bryan and Tolin (2021) https://doi.org/10.1016/j.janxdis.2021.102407 showed that in half the studies (8 out of 17) there was a high risk of bias because of a failure to address treatment fidelity. Further, in 7 of the 17 studies there was a high risk of bias because of the failure to use blind assessors. [A review of psychotherapy trial reports published in 6 top psychiatry journals in 2017 and 2018 revealed that only 59% of the included trials reported adequate blinding of outcome assessors, Mataix-Cols et al (2021) https://jamanetwork.com/journals/jama/fullarticle/10.1001/jamapsychiatry.2021.1419.] Thus, the research base that IAPT draws upon is far from rock solid. The remission rate in RCTs for anxiety disorders is approximately 50% [Springer et al (2018) https://doi.org/10.1016/j.cpr.2018.03.002] and this is the ‘gold standard’. But IAPT claims comparable results despite a total disregard for blinding and treatment fidelity! The faked goods ought perhaps to be reported to Trading Standards as well as the ASA, given the absence of any interest in the matter from the British Psychological Society (BPS) or the British Association for Behavioural and Cognitive Psychotherapies (BABCP)!

The real story of SilverCloud is that it provides morsels of CBT when what is really needed is a proper meal. It is insulting to clients to say, in effect, ‘let’s see how you get on with morsels and then we will see about a proper meal’.

 

Dr Mike Scott


The Improving Access to Psychological Therapies (IAPT) Programme and The British Psychological Society (BPS)

 

The BPS has enthusiastically supported IAPT from its inception in 2008. Improving access to psychological therapies is clearly a laudable goal, as most people with a mental health problem are not offered psychological therapy. The Society has led the course accreditation process for IAPT’s Psychological Wellbeing Practitioner (PWP) low-intensity training since 2009. Features on individual PWPs have appeared periodically in the pages of The Psychologist. In 2009, The Psychologist published a letter from the then President of the British Association for Behavioural and Cognitive Psychotherapies (BABCP) stating that BPS members on the IAPT Education and Training Project Group supported BABCP’s accreditation of high-intensity training programmes and noted that there were BPS members on the Accreditation Oversight group.

But the enthusiasm of the BPS to give away psychological therapy has not been matched by a concern to listen to service users. Specifically:

  1. At no point has the BPS suggested that it is inappropriate for IAPT to mark its own homework. The latter’s reliance entirely on self-report measures, often completed in the presence of the IAPT therapist, should have had any self-respecting psychologist crying ‘foul’ and calling for independent assessment.
  2. A concern for service users should have led the BPS to insist that a primary outcome measure must be clearly intelligible to the client. But there has been no specification of what a change of X as opposed to a change of Y would mean to a client on the chosen yardsticks of the PHQ-9 and GAD-7.
  3. The BPS has been strangely mute on the fact that two self-report measures have been pressed into service to validate IAPT’s approach, with no suggestion that such an approach needs to be complemented by independent clinician assessments that go beyond the confines of the two disorders (depression and generalised anxiety disorder) that the chosen measures address.
  4. If a drug company alone extolled the virtues of its psychotropic drug, BPS members would quite rightly cry ‘foul’, insisting on independent blind assessment using a standardised reliable diagnostic interview. But from the BPS there has been a deafening silence on the need for methodological rigour when evaluating psychological therapy. This reached its zenith in the latest issue of The Psychologist, September 2021, when the Chief Executive of an artificial intelligence company was allowed to extol the virtues of its collaboration with four IAPT services. No countervailing view was sought by The Psychologist, despite it being obvious that the supposed gains were all in operational matters, e.g. reduced time for assessment, with no evidence that the AI has made a clinically relevant difference to clients’ lives.

 

In 2014 I raised these concerns in an article, ‘IAPT – The Emperor Has No Clothes’, submitted to the Editor of The Psychologist. It was rejected, the Editor writing: ‘I also think the topic of IAPT, at this time and in this form, is one that might struggle to truly engage and inform our large and diverse audience’. This response was breathtaking given that IAPT was/is the largest employer of psychologists.

Fast forward to 2018, when I had a paper published in the Journal of Health Psychology, ‘IAPT – The Need for Radical Reform’ https://doi.org/10.1177%2F1359105318755264, presenting data that of 90 IAPT clients I assessed independently using a standardised diagnostic interview, only 10% recovered in the sense that they lost their diagnostic status; this contrasts with IAPT’s claimed 50% recovery rate. The Editor of the Journal devoted a whole issue to the IAPT debate, complete with rebuttals and rejoinders. But there was no mention of this at all in the pages of The Psychologist.

It appears that the BPS operates with a confirmation bias and is unwilling to consider data that contradict its chosen position. If psychologists cannot pick out the log in their own eye, how can they pick out the splinter in others’? In 2021 I wrote a rebuttal of an IAPT-inspired paper, published in the British Journal of Clinical Psychology, ‘Ensuring IAPT Does What It Says On The Tin’ https://doi.org/10.1111/bjc.12264, but again no mention of this debate in The Psychologist.

In my view the BPS is guilty of a total dereliction of duty to mental health service users in failing to facilitate a critique of IAPT. It has an unholy alliance with the BABCP, which is similarly guilty. Both organisations act in a totalitarian manner.

Dr Mike Scott


Notice Served On IAPT’s Claim

of a 50% recovery rate. The Editors of The Lancet Psychiatry https://doi.org/10.1016/S2215-0366(21)00123-1 have challenged researchers to demonstrate that an acclaimed intervention makes a difference that service users would recognise, thus making the consumer of mental health services centre stage rather than a change in score on a test. In addition, researchers are asked to justify their primary outcome measure. In interpreting test results the Editors insist that authors must clarify what a change of X would mean to a service user as opposed to a change of Y. A recently published paper in the journal, using IAPT data, https://doi.org/10.1016/S2215-0366(21)00083-3 would probably not have been published if it had not been accepted just before the new guidance was implemented. If other journal editors follow suit, the wings of IAPT and its fellow travellers, such as the British Psychological Society (BPS) and the British Association for Behavioural and Cognitive Psychotherapies (BABCP), will have been clipped. There has been a dereliction of duty by the BPS and BABCP.

 

In this connection I have had the following correspondence with the Lancet Psychiatry Editors:

My letter

When A Difference Makes No Difference

In June this year The Lancet Psychiatry published guidance [Boyce et al (2021)] for mental health researchers to ensure that the primary outcome measure employed in a study is meaningful. Researchers were asked to a) justify their choice of outcome measure and b) specify what a change of X or Y on a measure would mean for a service user. Contemporaneously, The Lancet Psychiatry published a study by Barkham et al (2021) that made no attempt to address the Editors’ expressed concerns.

Barkham et al (2021) chose to adopt the Improving Access to Psychological Therapies (IAPT) primary outcome measures, the PHQ-9 [Kroenke et al (2001)] and GAD-7 [Spitzer et al (2006)], without any discussion. There is no comment that these are self-report measures, subject to demand characteristics, and that changes are impossible to interpret without comparison to an active placebo treatment.

The Barkham et al (2021) study involved a comparison of person-centred counselling and cognitive behaviour therapy (CBT) in a high-intensity therapy service delivered by IAPT. Curiously, patients were screened for the study using the Clinical Interview Schedule Revised, but neither this nor any standardised diagnostic interview was used as an outcome measure. Why such apparent blindness? The answer becomes apparent on reading the declaration of conflicts of interest: the authors are either devotees of person-centred counselling or have links with IAPT. Their take-home message is that person-centred counselling might be better than CBT for depressed patients. But there is no attempt to address the question of what proportion of patients lost their diagnostic status, and for how long, as determined by an independent blind clinical assessment using a standardised interview. Service users’ interests are ill-served by this type of study, which additionally ignored data suggesting that the recovery rate in IAPT is just 10% [Scott (2018)].

References

Barkham, M., Saxon, D., Hardy, G. E., Bradburn, M., Galloway, D., Wickramasekera, N., Keetharuth, A. D., Bower, P., King, M., Elliott, R., Gabriel, L., Kellett, S., Shaw, S., Wilkinson, T., Connell, J., Harrison, P., Ardern, K., Bishop-Edwards, L., Ashley, K., Ohlsen, S., … Brazier, J. E. (2021). Person-centred experiential therapy versus cognitive behavioural therapy delivered in the English Improving Access to Psychological Therapies service for the treatment of moderate or severe depression (PRaCTICED): a pragmatic, randomised, non-inferiority trial. The Lancet Psychiatry, 8(6), 487–499. https://doi.org/10.1016/S2215-0366(21)00083-3

Boyce, N., Graham, D., & Marsh, J. (2021). Choice of outcome measures in mental health research. The Lancet Psychiatry, 8(6), 455. https://doi.org/10.1016/S2215-0366(21)00123-1

Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med 2001;16: 606–13.

Scott M. J. (2018). Improving Access to Psychological Therapies (IAPT) – The Need for Radical Reform. Journal of health psychology, 23(9), 1136–1147. https://doi.org/10.1177/1359105318755264

Spitzer RL, Kroenke K, Williams JB, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med 2006; 166: 1092–97.

 

August 13th 2021

 

Thank you for your letter to The Lancet Psychiatry. We are pleased to see that our initiative re primary outcome reporting has been noticed.  We are applying this now but did not apply it retrospectively to papers accepted before publication. The Barkham et al paper was published online on 14 May, six weeks after the Comment, but was accepted and edited before our new policy was in place. 

For Correspondence, our information for authors states: Letters written in response to previous content published in The Lancet Psychiatry must reach us within 4 weeks of publication of the original item.  We do extend this to after the original item has been published in an issue but I’m afraid that your letter is still outside the window for the Barkham et al paper, so we have decided not to publish it.

Although this decision has not been a positive one, I thank you for your interest in the journal.

Yours sincerely,

Joan Marsh

Joan Marsh MA PhD

Deputy Editor

 

Dr Mike Scott

 


Paranoid, But Judged Recovered If Your Conviction of Threat Falls Below 50%

this is the primary outcome measure used in a just-published study of CBT for persecutory delusions https://babcpmail.com/AQ0-7G5L5-2KPHRR-4HXKEC-1/c.aspx. But would the typical person suffering from schizophrenia recognise this metric? What if convictions take a variable course and are mood dependent? What is going on here? Unrestrained by such questions, Freeman et al (2021) proclaim in their advertisement for the 5-day online course for the programme:

‘it is the most effective psychological treatment for persecutory delusions. Half of patients have recovery in their persecutory delusion with the Feeling Safe Programme’

‘Recovery’ here has a meaning far removed from common parlance. ‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’ —LEWIS CARROLL, Through the Looking-Glass, 1871. If my conviction about the likelihood of being flooded fell to less than 50%, I would still be wanting to relocate!

Allegiance Bias

Freeman et al (2021) are evaluating their own Feeling Safe Programme, but make no mention that their study might therefore be prone to allegiance bias. The same therapists administered the Feeling Safe Programme and the comparison Befriending Programme. Given that the therapists knew the hypothesis was that the former would prove superior to the latter, they are likely to have been more enthusiastic about the CBT. Twenty sessions were to be delivered in 6 months in each modality, but in the event more sessions were delivered in the CBT. Thus the possibility of allegiance bias amongst the therapists cannot be ruled out. It is therefore not surprising that a statistically significant difference was found between the two arms of the study. But this does not necessarily demonstrate the added benefit of CBT; a further confounding factor is that 71% of those in befriending were on antidepressants compared to 50% in CBT.

Replication Crisis

Freeman et al (2021) make the common cry of all researchers for more research, but there is no mention of the need for independent replication. The latter is particularly important as previous studies have not demonstrated the added value of CBT for persecutory delusions.

Inappropriate Outcome Measure

Clients in CBT were encouraged to take a 6-session module (the Feeling Safe Module) targeting threat beliefs; how can the latter then be a credible outcome measure? Broader measures, such as ‘functioning as I was before I became paranoid’ or even ‘as I was when I was least paranoid’, would have been more credible primary outcome measures. Further, the secondary outcome measures used were all self-report measures; no standardised diagnostic interview was conducted. Whilst diagnostic labels were affixed at entry into the study (on what basis is not clear), they were ignored with regard to outcome.

Is The Effect Size Found Meaningful?

The effect size for the primary outcome measure was a Cohen’s d of 0.86, Freeman et al (2021). The effect size for the total delusions score on the PSYRATS was d = 1.2. Freeman et al (2021) celebrate this large effect size as comparable to that found in trials of CBT for anxiety disorders. But in terms of the primary outcome measure, the average person undergoing CBT improved by less than one standard deviation compared to the average person who was befriended; does this amount to a real-world difference? The economic analysis promised in the pre-trial protocol was not included in the paper, leaving it an open question as to whether the CBT is worth the added investment.
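One rough way to get a feel for what a between-group d of 0.86 means, under the simplifying assumption that both groups’ outcomes are normally distributed with equal spread (an assumption of this sketch, not a figure from the trial itself), is to convert it into the probability that a randomly chosen CBT patient does better than a randomly chosen befriended patient, and into the overlap between the two groups’ outcome distributions:

from math import erf, sqrt

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d = 0.86  # reported between-group effect size for the primary outcome

# Probability that a randomly chosen treated patient does better than a
# randomly chosen comparison patient (the 'common language' effect size).
prob_superiority = phi(d / sqrt(2.0))   # roughly 0.73

# Proportion of the two outcome distributions that overlap.
overlap = 2.0 * phi(-d / 2.0)           # roughly 0.67

print(round(prob_superiority, 2), round(overlap, 2))

On these assumptions a befriended patient would do as well as or better than a CBT patient roughly a quarter of the time, and about two-thirds of the two groups’ outcomes overlap; an effect that is ‘large’ in this statistical sense may or may not amount to a real-world difference.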

 

Eminence-based Rather Than Evidence-based

Advocates of the Feeling Safe Programme are claiming more than is known; doubtless the BABCP and IAPT will seize on it and control how CBT is to be conducted with this population, extending their empire. Well, the study was published in The Lancet Psychiatry, after all. The CBT therapist should be sceptical, but regrettably training courses seem not to equip them for this. I wonder why? Perhaps I am paranoid?

Dr Mike Scott


The Future of CBT In Practice

Next month is Aaron Beck’s 100th birthday, and the journal he founded, ‘Cognitive Therapy and Research’, has a great editorial https://doi.org/10.1007/s10608-021-10232-6 wishing him well and looking at possible developments in CBT for depression, envy, schizophrenia and OCD. But there is a yawning gap between the experiences of the beneficiaries of randomised controlled trials (RCTs) of CBT and what UK citizens receive in routine practice. There has been a fundamentalist translation of the RCTs to determine policy, such that key elements of context, such as ensuring reliable diagnosis, have been left out; implementation has been determined in a ‘Stalinist’ way, e.g. the possibility of sanctions if a 50% recovery rate is not reached; there has been no independent monitoring of the policy; and there is a claimed risk reduction, but this is based on responses to the ambiguous suicide item, item 9 on the Patient Health Questionnaire (PHQ-9). There is no evidence that the ‘Science’ is looking at these matters anytime soon. The deliberations of bodies like the British Association for Behavioural and Cognitive Psychotherapies (BABCP) appear to occur in a parallel universe.

Recently Drew et al (2021) https://doi.org/10.1016/j.socscimed.2021.113818 examined transcripts of IAPT (Improving Access to Psychological Therapies) sessions, and the take-home message was that clinicians were preoccupied with their own agenda and not really listening. This echoes what Omylinska-Thurston et al (2019) https://doi.org/10.1002/capr.12249 found when interviewing former IAPT clients:

participants discussed difficulties with the outcome measures they had to fill in each week. Clients said they did not feel comfortable filling them in. Clare said it felt “disheartening… because … it brings it home…just how bad you’ve been feeling”. Clients also said that the scales felt disrespectful to their experience. For some, it was difficult to pinpoint the accurate answer and for others the measures did not reflect the nuances. For example, Jenny said about the self-harm question on PHQ9 “…to harm myself? No, but I know I wasn’t eating …well”. Also Jason said “there’s a difference between wishing you were dead and wanting to die … the question really is: do you think you should kill yourself rather than do you think you’d be better off dead?” Participants also commented that they learnt how to score the measures to get more services or sessions. Jenny said about the self-harm question “If I’ said ‘yes’ then they …‘right, shit’, but because you don’t put that they do ‘OK, see you next week’.” Jenny also worried that “if you put it was only one day this week, does that mean you don’t get any more sessions?” Measures were also reported as focusing on the negative side and did not catch positive change

Difficulties with assessment

Six clients discussed issues they had with the assessment process. Clients said that they were not assessed for the right type of therapy. For example, Adam said “if I had been…assessed better, that therapist doing CBT could have been helping another person”. Clients also said that CBT was not explained to them and Michael commented that he “didn’t know exactly what CBT things were going to entail”. Clients said that assessment involved a lot of paperwork and form filling and did not focus on their needs. Jason commented that he had to fill in a measure first and the score decided that he was depressed rather than a discussion first supported by a measure. Maurice talked a lot about the phone assessment and said it was “uncaring, robotic and intrusive”. He was concerned that people will not engage in therapy following telephone assessments’.

Yet what struck me when I met Beck in 1997 in Canterbury was how much he genuinely listened. The RCTs of CBT continue to generate high expectations, but the jury is out on whether they have made or will make a real-world difference.

Dr Mike Scott


A Pandemic of ‘Alleged CBT’

Homework is the missing hallmark of CBT in routine practice. Inspection of Improving Access to Psychological Therapies (IAPT) records provides scant evidence of agreed homework assignments. Rarely do they specify the behaviours that the client is to engage in, the coping strategy to be employed and the monitoring strategies. But given that clients commonly have impaired concentration, written specification is a must and helps to ensure compliance with homework [Cox, D. J., Tisdelle, D. A., & Culbert, J. P. (1988). Increasing adherence to behavioral homework assignments. Journal of Behavioral Medicine, 11(5), 519–522. https://doi.org/10.1007/BF00844844].

Beck [Beck, A. T., Rush, A. J., Shaw, B. F., & Emery, G. (1979). Cognitive therapy of depression. New York: Guilford Press] suggested that homework should a) be clear and specific, b) include a cogent rationale, c) have client reactions elicited to troubleshoot difficulties, and d) have progress summarised when reviewing homework. Homework provides a link between sessions. Meeting criteria a) to d) in a low-intensity intervention is a tall ask, and in the absence of written evidence to the contrary it must be assumed that this active ingredient of CBT treatment is missing [Kazantzis, N., Whittington, C. J., & Dattilio, F. M. (2010). Meta-analysis of homework effects in cognitive and behavioral therapy: A replication and extension. Clinical Psychology: Science and Practice, 17, 144–156]. Whilst it is likely that the skilful assignment of homework relates to outcome [Kazantzis, N. (2021). Introduction to the Special Issue on Homework in Cognitive Behavioral Therapy: New Clinical Psychological Science. Cognitive Therapy and Research, 45, 205–208. https://doi.org/10.1007/s10608-021-10213-9], such considerations are of little consequence if routine therapy is constructed in such a way that homework has difficulty thriving.

It is interesting to ponder whether, if a civil court case were mounted on the basis that a supposed CBT treatment had not in fact happened, leaving ongoing debility, a claim for compensation would succeed. As an Expert Witness I would ask to see the treatment records, and in over 25 years in this capacity I can think of few cases where I could be sure, on the balance of probability, that the said treatment had been delivered. Part of this evidence would be the absence of any record of homework assignment. IAPT has tried to keep out of the legal domain by asserting that its therapists do not make diagnoses. But a judge might ask where the accountability lies in this matter. A nurse may be called to task for not following an evidence-based medical procedure even though overall responsibility may rest with a consultant. There can be no certainty that IAPT would not find itself in the dock. Its defence would likely be that its practitioners were only doing, say, what most BABCP members do, but this would be skating on thin ice.

Dr Mike Scott

 

 


The Department of Health Has Failed To Regulate Routine Mental Health Services

Improving Access to Psychological Therapies (IAPT) services are out of bounds to Care Quality Commission inspection. In 2016 the National Audit Office (NAO) asked the Department of Health to address this issue and it has done nothing. The Department sets the agenda and budget for NHS England, which in turn does the same with Clinical Commissioning Groups to determine local provision of services. But NHS England staff are lead players amongst service providers; these conflicts of interest exacerbate the parlous governance of IAPT. There is a need for Parliament to step in and take the Department of Health to task.

 

Whilst no one doubts the importance of improving access to psychological therapies, it was remiss of the NAO in 2016 to take at face value IAPT’s claim that it had the appropriate monitoring measures in place. Incredibly, the NAO also accepted at face value IAPT’s claim that it was achieving a 45% recovery rate. It is always tempting to look only as far as evidence that confirms your belief. But it is equally important to consider what type of evidence would disconfirm your belief. The NAO has failed to explain why it has not insisted on independent scrutiny of IAPT’s claims.

The Improving Access to Psychological Therapies (IAPT) programme has exercised a confirmatory bias in its audit by focussing only on self-report responses on psychometric tests (the PHQ-9 and GAD-7). The service has never looked at a categorical endpoint, such as whether a person lost their diagnostic status as assessed by an independent evaluator using a standardised diagnostic interview.

Organisations are inherently likely to be self-promoting and will have a particular penchant for operating, not necessarily wholly consciously, with a confirmatory bias. It is for other stakeholders (NHS England, Clinical Commissioning Groups, MPs, the media, charities and the professional bodies BABCP and BPS) to hold IAPT to account. For the past decade they have all conspicuously failed to do so. How has IAPT evaded critical scrutiny, despite the taxpayer having paid £4 billion for its services? Friends in high places is the most likely answer. I have called for an independent public inquiry for years and will continue to do so, but there is likely to be an echo of a deafening silence, as the only beneficiary would be the client with mental health problems.

Dr Mike Scott


Mental Health Sufferers Vote With Their Feet and Government Does Nothing At All

Of those who undergo an initial assessment with the Improving Access to Psychological Therapies (IAPT) Service, 40% do not go on to have treatment, and about the same proportion (42%) attend only one treatment session, according to a just-published study by Davis A, Smith T, Talbot J, et al. Evid Based Ment Health 2020;23:8–14. doi:10.1136/ebmental-2019-300133. These findings echo a study published last year by Moller et al (2019) https://doi.org/10.1186/s12888-019-2235-z, on a smaller sample, which suggested that 29% were non-starters and that the same proportion attended only one treatment session. Further scrutiny of the data reveals that about 3 out of 4 people drop out of treatment once begun. Unsurprisingly, the author’s own independent study of 90 IAPT clients, Scott (2018) DOI: 10.1177/1359105318755264, revealed that only the tip of the iceberg (9.2%) recovered, raising serious questions about why the Government has spent over £4 billion on the service.

What Has Gone Wrong?

Kline et al (2020) consider that an assessment by a clinician is supposed to: a) provide a credible rationale for the proposed treatment, b) detail the efficacy of the envisaged treatment and c) ensure that the client’s preferences are acknowledged. IAPT’s assessments fail on all counts; taking these in turn:

a. If the problem is ill-defined, e.g. low mood/stress, it is not clear what rationale should be presented. It is doubtful that a 30-45 minute telephone conversation can provide sufficient space to define the primary problem and other problems/disorders that may complicate treatment. Initial assessments of patients for randomised controlled trials of psychological interventions are typically 90 minutes plus; if this is the time deemed necessary for a highly trained clinician to reliably diagnose a patient, how can a much less trained PWP do it in less than half the time? Under time pressure a PWP may consider that providing a credible rationale is part of treatment, not assessment, and in such circumstances it becomes more likely that a client will default.

b. How often do PWPs present clients with evidence on the efficacy of an intervention? Take, for example, computer-assisted CBT: does the therapist tell the client that only 7 out of 48 NHS-recommended e-therapies have been subjected to randomised controlled trials (see Simmonds-Buckley et al, J Med Internet Res 2020;22(10):e17049, doi: 10.2196/17049), and that even in these a gold-standard semi-structured diagnostic interview conducted by a blind assessor was not used to determine diagnostic status post-treatment, i.e. there was no determination of the proportion of clients who were back to their old selves after treatment, and for how long? Further, the e-therapies had average dropout rates of 31%. They are not evidence-based treatments in the way the NICE-recommended high-intensity treatments are. But approximately three-quarters of IAPT interventions (73%) are low intensity first, with 4% stepped up to high intensity and 20% in total receiving a high-intensity intervention [Davis A, Smith T, Talbot J, et al. Evid Based Ment Health 2020;23:8–14. doi:10.1136/ebmental-2019-300133].

c. Clients’ preferences are a predictor of engagement in treatment, but how often is a client given a choice between a low-intensity intervention and a high-intensity intervention? If both options are juxtaposed, choice is likely skewed by informing the client that the high-intensity intervention has a much longer waiting time.

Defining A Dropout

The generally accepted definition of a dropout is attending fewer than 7 sessions [see Kline et al (2020) https://doi.org/10.1016/j.brat.2020.103750]; it is held that clients attending below this number will have had a sub-therapeutic dose of treatment and are therefore unlikely to respond. Applying this metric to IAPT’s dataset is difficult, as it only reports data for those who complete 2 or more sessions, for whom the average number of sessions attended is 6; thus the likely dropout rate from IAPT treatment, as most would understand the term, is about 75%. But IAPT has developed its own definition of a completer as one who attends 2 or more sessions. This strange definition serves only to muddy the waters on its haemorrhaging of clients; the sketch below illustrates how far the two definitions diverge. It makes no sense to continue to fund IAPT without an independent government inquiry into its modus operandi.
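As a hedged illustration of how far the two definitions diverge, the session counts below are entirely hypothetical, chosen only so that their mean is about 6 sessions, in line with the figure IAPT reports for clients attending two or more sessions:

# Hypothetical session counts for eleven illustrative clients (not IAPT data).
sessions = [2, 2, 3, 3, 4, 5, 6, 7, 9, 12, 13]

mean_sessions = sum(sessions) / len(sessions)   # 6.0

# Research definition cited above: attending fewer than 7 sessions = dropout.
research_dropout = sum(1 for s in sessions if s < 7) / len(sessions)

# IAPT's definition: attending 2 or more sessions = 'completer'.
iapt_completion = sum(1 for s in sessions if s >= 2) / len(sessions)

print(mean_sessions)                 # 6.0
print(round(research_dropout, 2))    # 0.64 - nearly two-thirds are dropouts
print(round(iapt_completion, 2))     # 1.0  - yet every client counts as a 'completer'

Even with a mean of six sessions, most of these hypothetical clients fall below the seven-session threshold, while every one of them counts as a completer on IAPT’s definition.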

 

An Alternative Way Forward

Such has been the marketing power of IAPT over the last decade that professional organisations such as the British Association for Behavioural and Cognitive Psychotherapies (BABCP) and the British Psychological Society (BPS) have sat mesmerised as the Service’s fellow travellers have dominated accreditation and training. In ‘Simply Effective Cognitive Behaviour Therapy’, published in 2009 by Routledge, I detailed a very different way of delivering services, one that represents a faithful translation of the CBT treatments delivered in the randomised controlled trials (RCTs) for depression and the anxiety disorders. Unfortunately it is IAPT’s fundamentalist translation of the RCTs that has held sway and has brooked no debate, either in journals or at conferences.

 

Dr Mike Scott

 
