Beware of Claims for Teletherapy

There is insufficient evidence that teletherapy (FaceTime, Zoom, WhatsApp) is superior to telephone-assisted therapy. That is the take-home message from a just-published review by Markowitz et al (2021) in this month's American Journal of Psychiatry [Am J Psychiatry 2021; 178:240–246; doi: 10.1176/appi.ajp.2020.20050557]. Interestingly, some clients with social anxiety disorder and PTSD prefer telephone-assisted therapy. Markowitz et al (2021) also regard the claim that teletherapy is as good as in-person therapy as not proven. They voice a fear that remote therapy could become the new norm because of cost and convenience rather than because of evidence of equivalence with in-person therapy. Whilst there is undoubtedly a convenience value to teletherapy for clients and therapists with availability problems, there are also disadvantages: managing a client who has become suicidal, missing non-verbal cues because of 'talking heads', and technical problems such as a poor internet connection or freezing screens. Further, the poor and the elderly may not be able to afford the cost.

Markowitz et al (2021) opine that for the duration of the pandemic teletherapy may be very important, but that long term it should become, like telephone-assisted therapy, a useful option. I would hope so. But looking at the way in which IAPT has dominated the field with its low-intensity (low-cost) interventions bereft of a credible evidence base, I suspect teletherapy will continue to be a mainstay despite the jury being out on its efficacy.

 

Dr Mike Scott

IAPT – Suicidal and Given Online CBT

I recently came across a former IAPT client whom the Organisation's own documentation described as considering two different means of suicide. He had been bullied at school and engaged in a lot of self-harm. This depressed young man was given computer-assisted CBT by IAPT and dropped out after 4 sessions. He told me that it did not teach him anything he did not already know. IAPT's decision making is based on exigencies rather than clinical need.

Oftentimes a client with thoughts that they would be 'better off dead' is passed back to their GP. The GP is then obliged to contact the patient, only to discover that the 'suicidal thoughts' are most often passive and without any active intent or planning. In such instances IAPT has not taken the time to discover whether there was any active planning of suicide. The reaction of the Organisation is 'we do not want egg on our face', so it bounces the case back to the GP. Unfortunately GPs don't complain to their Clinical Commissioning Groups about IAPT, content that they get a break from these 'non-medical' cases whilst they are being seen by IAPT, albeit that it is a revolving door.

Dr Mike Scott

IAPT Fails To Rebut Charge Of a Tip Of The Iceberg Rate Of Recovery

In the March issue of the British Journal of Clinical Psychology, 3 academics admit their links to the Improving Access to Psychological Therapies (IAPT) Service, having failed to do so on an earlier occasion. Their attempted rebuttal of my paper 'Ensuring IAPT Does What It Says On The Tin', published in the same issue of the Journal, is a Donald Trump-like exposé. The British Government is looking at making NHS England accountable; to date the latter has allowed IAPT to mark its own homework, with no involvement of the Care Quality Commission. Having spent over £4 billion on IAPT, the time for change is long overdue. Below is my response to Kellett et al (2021).

Practice-based evidence has been termed a three-legged stool comprising best research evidence, the clinician's expertise and patient preferences [Spring (2007)]. Wakefield et al (2021) published a systematic review and meta-analysis of 10 years of practice-based evidence generated by the Improving Access to Psychological Therapies (IAPT) services, which is clearly pertinent to the research evidence leg of this stool. In response to this paper I wrote a critical commentary, 'Ensuring IAPT does what it says on the tin' [Scott (2021)]. In turn, Kellett et al (2021) have responded with their own commentary, 'The costs and benefits of practice-based evidence: Correcting some misunderstandings about the 10-year meta-analysis of IAPT studies', accepting some of my points and dismissing others. Their rebuttal exposes an even greater depth of conflicts of interest in IAPT than originally thought. The evidence supplied by Wakefield et al (2021) renders the research evidence leg of the stool unstable, and it collapses under the weight of IAPT.

 

Transparency and Independent Evaluation

 

Kellett et al (2021) head the first paragraph of their rebuttal 'The need for transparency and independent evaluation of psychological services'. But these authors claimed no conflict of interest in their original paper, despite the corresponding author's role as an IAPT Programme Director. In their rebuttal Kellett et al (2021) concede 'Three of us are educators, clinicians and/or clinical supervisors whose work directly or partially focuses on IAPT services'. This stokes rather than allays fears that publication bias may be an issue.

There has been a deafening silence from Kellett et al (2021) on the fact that in none of the IAPT studies has there been an independent evaluator using a standardised semi-structured diagnostic interview to assess diagnostic status at the beginning of treatment, at the end, and at follow-up. It has to be determined that any recovery is not just a flash in the pan. Loss of diagnostic status is a minimum condition for determining whether a client is back to their old self (or best functioning) post-treatment. Studies that have allowed reliable determination of diagnostic status have formed the basis for the NICE-recommended treatments for depression and the anxiety disorders. As such they speak much more to the real world of a client than IAPT's metric of single-point assessments on psychometric tests completed in a diagnostic vacuum.

 

The Dissolution of Evidence-Based Practice

The research evidence leg of IAPT's evidence-based practice stool is clearly flawed. Kellett et al (2021) seek to put a 'wedge' under this leg by asserting that the randomised controlled trials are in any case of doubtful clinical value because their focus is on carefully selected clients, i.e., they have poor external validity. But they provide no evidence of this. Contrary to their belief, randomised controlled trials (RCTs) do admit clients with comorbidity, albeit within limits. A study by Stirman et al (2005) showed that the needs of 80% of clients could be accommodated by reference to a set of 100 RCTs. Further, Stirman et al (2005) found that clients in routine practice were no more complex than those in the RCTs. Kellett et al (2021) cannot have it both ways: on the one hand praising IAPT for attempting to observe National Institute for Health and Care Excellence (NICE) guidance, and on the other pulling the rug from under the RCTs which are the basis for the guidelines. Their own offering as to what constitutes research evidence leads to the collapse of the evidence-based practice stool. It provides a justification for IAPT clinicians to continue to base their clinical judgements on their expertise, ignoring what has traditionally been taken to be research evidence, so that treatments are not based on reliable diagnoses. The shortcomings of basing treatment on 'expertise' have been detailed by Stewart, Chambless & Stirman (2018); these authors note that 'an accurate diagnosis is an implicit prerequisite of engaging in EBP, in which treatments are largely organized by specific disorders'.

‘Let IAPT Mark Its Own Homework, Don’t Put It to The Test’

 

Kellett et al (2021) claim that it would be too expensive to mount a high-quality, 'gold standard' effectiveness study with independent blind assessors using a standardised semi-structured diagnostic interview. But set against the £4 billion already spent on the service over the last decade, the cost would be trivial. It is perfectly feasible to take a representative sample of IAPT clients and conduct independent blind assessments of outcome that mirror the initial assessment. Indeed the first steps in this direction have already been taken in an evaluation of internet CBT [Richards et al (2020)], in which IAPT Psychological Wellbeing Practitioners used the MINI [Sheehan et al (1998)] semi-structured interview to evaluate outcome, albeit that they were not independent evaluators and there could be no certainty that they had not used the interview as a symptom checklist rather than in the way it is intended. Further, the authors of Richards et al (2020) were employees of the owners of the software package or worked for IAPT. Tolin et al (2015) have pointed out that for a treatment to be regarded as evidence-supported there must be at least two studies demonstrating effectiveness in real-world settings by researchers not involved in the original development and evaluation of the protocol and without allegiance to it. Kellett et al (2021) have failed to explain why IAPT should not be subject to independent rigorous scrutiny, and their claim that their own work should suffice is difficult to understand.

 

The Misuse of Effect Size and Intention to Treat

Kellett et al (2021) rightly caution that comparing effect sizes (the post-test mean subtracted from the pre-test mean, divided by the pooled standard deviation) across studies is a hazardous endeavour. But they fail to acknowledge my central point that the IAPT effect sizes are no better than those found in studies that pre-date the establishment of IAPT, that is, they do not demonstrate an added value. Kellett et al (2021) rightly draw attention to the importance of intention-to-treat analysis and attempt to rescue the IAPT studies on the basis that many performed such an analysis. Whilst an intention-to-treat analysis is appropriate in a randomised controlled trial in which less than a fifth of those in the different treatment arms default, it makes no sense in the IAPT context, in which 40% of clients are nonstarters (i.e., complete only the assessment) and 42% drop out after only one treatment session [Davis et al (2020)]. In this context it is not surprising that Delgadillo et al (2020) failed to demonstrate any significant association between treatment competence measures and clinical outcomes, a point in fairness acknowledged by the latter author. But such a finding was predictable from the Competence Engine [Scott (2017)], which posits a reciprocal interaction between diagnosis-specific, stage-specific and generic competences.
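The pre-post effect size being discussed can be made concrete with a minimal sketch; the PHQ-9-style scores below are hypothetical, purely to illustrate the calculation:

```python
import statistics

def pre_post_effect_size(pre, post):
    """Effect size: (pre-test mean - post-test mean) / pooled standard deviation."""
    mean_pre, mean_post = statistics.mean(pre), statistics.mean(post)
    sd_pre, sd_post = statistics.stdev(pre), statistics.stdev(post)
    # Pooled SD here is the root mean square of the two sample SDs
    pooled_sd = ((sd_pre ** 2 + sd_post ** 2) / 2) ** 0.5
    return (mean_pre - mean_post) / pooled_sd

# Hypothetical questionnaire scores before and after treatment
pre = [18, 15, 20, 17, 16, 19]
post = [10, 9, 14, 11, 8, 12]
print(round(pre_post_effect_size(pre, post), 2))
```

A large number here says only that scores fell, not that the service added value over pre-existing benchmarks, which is the point being made above.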

 

Kellett et al (2021) Get Deeper in The Mud Attacking Scott (2018)

 

Kellett et al (2021) rightly underline my own comment that my study of 90 IAPT clients [Scott (2018)] was hardly definitive, as all had gone through litigation. But they omit to mention that I was wholly independent in assessing them; my duty was solely to the Court as an Expert Witness. Despite this they make the extraordinary claim that my study had a 'high risk of bias', which casts serious doubts on their measuring instruments. They fail to understand that in assessing a litigant one is of necessity assessing current and past functioning. In my study I included use of the current and lifetime versions of a standardised semi-structured interview, the SCID [First et al (1996)]. This made it possible to assess the impact of IAPT interventions whether delivered pre or post the trauma that led to the claim. Whatever the timing of the IAPT intervention, the overall picture was that only the tip of the iceberg (9.2%) lost their diagnostic status as a result of these ministrations. Nevertheless, as I suggested, there is a clear need for a further publicly funded study of the effectiveness of IAPT with a representative sample of its clients.

 

References

 

Davis, A., Smith, T., Talbot, J., Eldridge, C., & Bretts, D. (2020). Predicting patient engagement in IAPT services: a statistical analysis of electronic health records. Evidence Based Mental Health, 23:8-14  doi:10.1136/ebmental-2019-300133.

Delgadillo, J., Branson, A., Kellett, S., Myles-Hooton, P., Hardy, G. E., & Shafran, R. (2020). Therapist personality traits as predictors of psychological treatment outcomes. Psychotherapy Research, 30(7), 857–870. https://doi.org/10.1080/10503307.2020.1731927.

First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. W. (1996). Structured clinical interview for DSM-IV axis I disorders, clinician version (SCID-CV). Washington, DC: American Psychiatric Press.

Kellett, S., Wakefield, S., Simmonds‐Buckley, M. and Delgadillo, J. (2021), The costs and benefits of practice‐based evidence: Correcting some misunderstandings about the 10‐year meta‐analysis of IAPT studies. British Journal of Clinical Psychology, 60: 42-47. https://doi.org/10.1111/bjc.12268

 

Richards, D., Enrique, A., Ellert, N., Franklin, M., Palacios, J., Duffy, D., Earley, C., Chapman, J., Jell, G., Siollesse, S., & Timulak, L. (2020). A pragmatic randomized waitlist-controlled effectiveness and cost-effectiveness trial of digital interventions for depression and anxiety. npj Digital Medicine, 3:85. https://doi.org/10.1038/s41746-020-0293-8.

Scott, M.J (2017) Towards a Mental Health System That Works. London: Routledge.

Scott, M.J. (2018). Improving access to psychological therapies (IAPT) – the need for radical reform. Journal of Health Psychology, 23, 1136-1147. https://doi.org/10.1177/1359105318755264.

Scott, M.J. (2021), Ensuring that the Improving Access to Psychological Therapies (IAPT) programme does what it says on the tin. British Journal of Clinical Psychology, 60: 38-41. https://doi.org/10.1111/bjc.12264

 

Sheehan, D. V., et al. (1998). The Mini-International Neuropsychiatric Interview (M.I.N.I.): the development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. Journal of Clinical Psychiatry, 59(Suppl 20), 22–33.

Spring B (2007). Evidence-based practice in clinical psychology: what it is, why it matters; what you need to know. Journal of Clinical Psychology, 63(7), 611–631. 10.1002/jclp.20373 [PubMed: 17551934].

Stewart, R.R., Chambless, D.L., & Stirman, S.W (2018) Decision making and the use of evidence based practice: Is the three-legged stool balanced?  Pract Innov 3(1): 56–67. doi:10.1037/pri0000063.  

Stirman, S. W., DeRubeis, R. J., Crits-Christoph, P., & Rothman, A. (2005). Can the Randomized Controlled Trial Literature Generalize to Nonrandomized Patients? Journal of Consulting and Clinical Psychology, 73(1), 127–135. https://doi.org/10.1037/0022-006X.73.1.127

 

Tolin, D. F., McKay, D., Forman, E. M., Klonsky, E. D., & Thombs, B. D. (2015). Empirically supported treatment: Recommendations for a new model. Clinical Psychology: Science and Practice, 22(4), 317–338. https://doi.org/10.1111/cpsp.12122

 

 

Wakefield, S., Kellett, S., Simmonds‐Buckley, M., Stockton, D., Bradbury, A. and Delgadillo, J. (2021), Improving Access to Psychological Therapies (IAPT) in the United Kingdom: A systematic review and meta‐analysis of 10‐years of practice‐based evidence. British Journal of Clinical Psychology, 60: 1-37 e12259. https://doi.org/10.1111/bjc.12259

 

 

IAPT and The Collapse of Its Practice-Based Evidence

The Improving Access to Psychological Therapies (IAPT) service has failed to deliver practice-based evidence [Wakefield et al (2021) https://doi.org/10.1111/bjc.12259 and Scott (2021) https://doi.org/10.1111/bjc.12264]. In an attempted rebuttal of my Commentary [Kellett et al (2021) https://doi.org/10.1111/bjc.12268] in the forthcoming British Journal of Clinical Psychology, IAPT fellow-travellers dig an even deeper hole, exposing even more conflicts of interest.

 


Under UK Government pressure, NHS England is to reconfigure its relationship with social care. It would be timely if the Government also insisted that NHS England put its house in order with regard to the provision of routine mental health services. As a first step it should insist that NHS England staff cannot be employed by an agency, such as IAPT, that they are responsible for auditing.

Further, in 2016 the National Audit Office asked that IAPT be made accountable to an independent body, the Care Quality Commission, but the Service has instead been allowed to continue to mark its own homework.

Kellett, S., Wakefield, S., Simmonds‐Buckley, M. and Delgadillo, J. (2021), The costs and benefits of practice‐based evidence: Correcting some misunderstandings about the 10‐year meta‐analysis of IAPT studies. British Journal of Clinical Psychology, 60: 42-47. https://doi.org/10.1111/bjc.12268

Scott, M.J. (2021), Ensuring that the Improving Access to Psychological Therapies (IAPT) programme does what it says on the tin. British Journal of Clinical Psychology, 60: 38-41. https://doi.org/10.1111/bjc.12264

Wakefield, S., Kellett, S., Simmonds‐Buckley, M., Stockton, D., Bradbury, A. and Delgadillo, J. (2021), Improving Access to Psychological Therapies (IAPT) in the United Kingdom: A systematic review and meta‐analysis of 10‐years of practice‐based evidence. British Journal of Clinical Psychology, 60: 1-37 e12259. https://doi.org/10.1111/bjc.12259

Dr Mike Scott

Mental Health Patients and GPs Caught In IAPT’s Revolving Door

30% of GP referrals to the Improving Access to Psychological Therapies (IAPT) service do not attend, and of those who attend for treatment, 40% do so for only one session [Pulse, November 2nd 2018, 'Revealed: How patients referred to mental health services end up back with their GP']. The GP is left to cope with this haemorrhaging, with no input from Clinical Commissioning Groups (CCGs) to rectify matters. The CCGs are complicit in IAPT 'cherry picking' clients, refusing clients under another service, such as substance misuse or an eating disorder. Further, the CCGs do not bat an eye when IAPT claims to have met its target of a 50% recovery rate. They miss the point that there has been no publicly funded independent audit of IAPT. They are blissfully ignorant that using an independently administered standardised semi-structured interview (the SCID), only the tip of the iceberg (9.2%) recover [Scott (2018) https://doi.org/10.1177%2F1359105318755264].

Evidence based practice involves an integration of best research evidence, clinician expertise and patient’s preferences [ NHS England document ‘Finding the Evidence’ November (2013)].

There are no randomised controlled trials, using a blind evaluator, of IAPT's modal, low-intensity treatment, making the 'best research evidence' leg unstable. GPs do not audit the effect of IAPT on their patients, so their clinical expertise in dealing with these patients is questionable. Shared decision making is an integral part of eliciting patient preferences, but in IAPT clients are usually discharged when they have had the pre-determined number of sessions and/or when their score on a self-report measure falls below a certain cut-off. There is no credible elicitation of clients' preferences. All legs of the evidence-based practice stool have fault lines, and it collapses under IAPT's weight. The Agency is a prime example of failed evidence-based practice.

Dr Mike Scott

The Department of Health Has Failed To Regulate Routine Mental Health Services

Improving Access to Psychological Therapies (IAPT) services are out of bounds to Care Quality Commission inspection. In 2016 the National Audit Office (NAO) asked the Department of Health to address this issue, and it has done nothing. The Department sets the agenda and budget for NHS England, which in turn does the same with Clinical Commissioning Groups to determine local provision of services. But NHS England staff are lead players amongst service providers, and these conflicts of interest exacerbate the parlous governance of IAPT. There is a need for Parliament to step in and take the Department of Health to task.

 

Whilst no one doubts the importance of improving access to psychological therapies, it was remiss of the NAO in 2016 to take at face value IAPT's claim that it had the appropriate monitoring measures in place. Incredibly, the NAO also accepted at face value IAPT's claim that it was achieving a 45% recovery rate. It is always tempting to look only as far as evidence that confirms your belief, but it is equally important to consider what type of evidence would disconfirm it. The NAO has failed to explain why it has not insisted on independent scrutiny of IAPT's claims.

The Improving Access to Psychological Therapies (IAPT) programme has exercised a confirmatory bias in its audit by focussing only on self-report responses on psychometric tests (the PHQ9 and GAD7). The service has never looked at a categorical end point, such as whether a person lost their diagnostic status as assessed by an independent evaluator using a standardised diagnostic interview.

Organisations are inherently likely to be self-promoting and will have a particular penchant for operating, not necessarily wholly consciously, with a confirmatory bias. It is for other stakeholders (NHS England, Clinical Commissioning Groups, MPs, the media, charities and professional bodies such as the BABCP and BPS) to hold IAPT to account. For the past decade they have all conspicuously failed to do so. How has IAPT evaded critical scrutiny, despite the taxpayer having paid £4 billion for its services? Friends in high places is the most likely answer. I have called for an independent public inquiry for years and will continue to do so, but there is likely to be a deafening silence, as the only beneficiary would be the client with mental health problems.

Dr Mike Scott

A Conflict of Interest Between NHS England and IAPT

 

The Improving Access to Psychological Therapies (IAPT) pantomime is likely to continue, with Dr Adrian Whittington, National Lead for Psychological Professions at NHS England and IAPT National Clinical Adviser, about to chair a conference for IAPT staff with the leading light of IAPT, Professor David Clark. IAPT aficionados seem inherently incapable of understanding what constitutes a conflict of interest; see the forthcoming issue of the British Journal of Clinical Psychology, 'Ensuring IAPT Does What It Says On The Tin', https://doi.org/10.1111/bjc.12264.

The Information Standard Guide

Finding the Evidence

A key step in the information production process

November 2013

Caroline De Brún

NHS England should reflect on its own document published in 2013, 'Finding the Evidence', in which clinicians are asked to seek the 'best research evidence' by looking at how an intended treatment has fared compared to a credible alternative. Taking the IAPT service as the intended treatment, there has never been a comparison with a credible alternative. IAPT cannot be considered a repository of 'best evidence'.

The power holders wish to believe their fairy tale: 'we are committed to mental health, we have shown this in supporting our world-beating IAPT service, as far as possible we will fund expansion of the service, we have broken new ground', and in small print, 'it is not politically correct to say otherwise and we are too busy with the pandemic/physical health to critically analyse IAPT's data'. But this is a dangerous story offering no protection for the mental health sufferer. It is time that sufferers are seen as 'vulnerable' people and offered societal protection.

IAPT therapists do not ask the client, at the end of treatment, whether they are back to their old selves again. Outcome is determined by the Genie that arises out of the psychometric test lamp that IAPT polishes incessantly.

The Genie could be pressed: 'how does low-intensity CBT work?' A coughing and spluttering might ensue. It is known that CBT works for depression and the anxiety disorders, using the specific cognitive model for each disorder. But there is no evidence that simply describing the reciprocal interactions of cognition, emotion, behaviour and physiology, and then targeting one or more of them, amounts to an evidence-supported treatment. It is a fundamentalist translation of the treatments conducted in the randomised controlled trials for depression and the anxiety disorders, a translation born of the exigencies of the situation, such as the vast monies available for treatment; it is akin to using a religious belief system for political purposes.

The CBT protocol for panic disorder is entirely dependent on David Clark's model of catastrophic misinterpretation of bodily sensations perpetuating the symptoms of panic [see Clark (2020) https://doi.org/10.1007/s10608-020-10141-0]. None of the procedures in the protocol would make sense without reference to his model.

A cognitive model of a disorder is the nucleus around which all the procedures of a protocol orbit. Beck enshrined this in his theory of cognitive content specificity: that disorders are distinguished by their different cognitive content and cognitive profiles; see Baranoff, J., & Oei, T. P. S. (2015). The cognitive content-specificity hypothesis: Contributions to diagnosis and assessment. In G. P. Brown & D. A. Clark (Eds.), Assessment in cognitive therapy (pp. 172–196). The Guilford Press; and Eysenck and Fajkowska (2018) https://doi.org/10.1080/02699931.2017.1330255.

But the procedures in low-intensity CBT have no nucleus. For example, the strategies in the Williams et al (2018) [doi: 10.1192/bjp.2017.18] Living Life to The Full classes, 'covering key CBT topics such as altered thinking, behavioural activation, problem-solving and relapse prevention', are not derived from any specific cognitive model of disorder: they are the equivalent of displaced electrons, the atoms have no credible name, and the targets are ill-defined. In the Williams et al (2018) study the target is 'low mood and stress', the latter of which has no specific cognitive content or cognitive profile. If it is not known how a psychological therapy achieves its goal, then the therapy itself cannot be considered evidence-supported; there has to be a plausible scientific explanation of the mechanism of change. The low-intensity CBT protocols represent an ad hoc usage of CBT techniques, and it is impossible to distil the mechanism of change, if any, in such a collage. In this respect the low-intensity interventions are found wanting: they are poor translations of the protocols in the 'gold standard' randomised controlled trials, advocated in a fundamentalist way by IAPT, driven by perceived economy rather than any considered view of effectiveness.

Dr Mike Scott

 

‘Intensive Care PTSD’

This was the banner headline on the BBC News today, January 13th 2021. It followed the announcement of a study by Prof Neil Greenberg, which revealed that staff had been 'traumatised' by the first wave of the pandemic. This in turn led Paul Farmer, Chief Executive of Mind, to call for 'the right support at the right time' on BBC Radio 4 today. The Government has promised an extra £15 million so that extra support can be given. But what sort of support?

In the press release accompanying publication of his study in the journal Occupational Medicine, Professor Greenberg notes: 'Further work is needed to better understand the real level of clinical need amongst ICU staff as self-report questionnaires can overestimate the rate of clinically relevant mental health symptoms'. His study was based on a web survey of ICU staff, about half of whom responded; of these, about half met the 'threshold' for PTSD, severe anxiety or problem drinking. There is a clear need to go beyond self-report measures.

I am currently writing a book, 'Personalising Trauma Treatment: Reframing and Reimagining', to be published by Routledge. In this work I suggest that the initial conversation with trauma victims should include 'Gateway Diagnostic Interview Questions' (GDIQs); with regard to Covid, an appropriate subset would be:

Depression (evidence that at least one of the answers to the following questions is in the affirmative)

1. During the past month have you often been bothered by feeling down, depressed or hopeless?

2. During the past month have you often been bothered by little interest or pleasure in doing things?

 

Panic Disorder

1. Do you have unexpected panic attacks, a sudden rush of intense fear or anxiety?

2. Do you avoid situations in which the panic attacks might occur?

 

Post-traumatic Stress Disorder

In your life, have you ever had any experience that was so frightening, horrible or upsetting that, in the past month, you

1. Have had nightmares about it or thought about it when you did not want to?

2. Tried hard not to think about it or went out of your way to avoid situations that reminded you of it?

3. Were constantly on guard, watchful, or easily startled?

4. Felt numb or detached from others, activities, or your surroundings?

5. Felt guilty or unable to stop blaming yourself or others for the event(s) or any problems the events may have caused?

Evidence that at least three of the answers to the symptom questions above are in the affirmative

Alcohol Dependence (evidence that the responses to the first three of the following questions are in the affirmative)

1. Have you felt you should cut down on your alcohol/drug?

2. Have people got annoyed with you about your drinking/drug taking?

3. Have you felt guilty about your drinking/drug use?

4. Do you drink/use drugs before midday?

Asking GDIQ questions encourages the person to furnish possible examples of the impact of the symptom on their life, so that they feel listened to. Reference can then be made to the other diagnostic symptoms for the particular disorder, to tease out whether there are sufficient impairing symptoms to merit the diagnostic label. Use of GDIQs is part of a conversation; it is not a rapid-fire interrogation or checklist. As a supplement to the GDIQs, people can be asked whether this is something they want help with, as they may prefer to sort the problem out themselves but be too polite to say so.

The NICE-recommended treatments are diagnosis-specific; thus there is a recommendation of trauma-focussed CBT for PTSD. But those traumatised by Covid are likely to find it toxic to be pushed to describe in graphic detail the horrors encountered. In my book I argue that this is unnecessary; rather, what is of key importance is to assess what the person takes their memory of being in ICU to mean about today. It is not the event that causes PTSD but the mental time travel to the worst period and the significance given to it for today. This approach is much less challenging for whoever is accompanying the affected medical staff and the family/friends who have seen horrors.

 

Dr Mike Scott

Unnecessary Treatment Is The Rule In IAPT – Due Diligence?

 

The UK Government's Improving Access to Psychological Therapies (IAPT) service only uses psychometric test screening measures to assess clients, most commonly the PHQ9 (a measure of the severity of depression) and GAD7 (a measure of the severity of generalised anxiety disorder), though other measures are advised for other disorders, such as the PCL-5 for PTSD. A study by Zimmerman and Mattia (2001) [The Psychiatric Diagnostic Screening Questionnaire: development, reliability and validity. Comprehensive Psychiatry, 42(3), 175–189. https://doi.org/10.1053/comp.2001.23126] showed that questionnaire measures that reflect DSM criteria have a roughly 90% sensitivity across major depressive disorder, PTSD, panic disorder, social phobia and GAD, i.e., they correctly identify 9 out of 10 of those who do have one of these disorders. But they correctly identify only about 60% of those who do not have the disorder (the specificity), and for GAD only 50%. However, many more people do not have a particular disorder than have one, leading to unnecessary treatment for many. The National Audit Office should take note of this and re-instate its investigation: where is the due diligence with regard to IAPT? £4 billion has been given to IAPT!

Depression

In the Zimmerman and Mattia (2001) study, 47.9% of the psychiatric outpatients had major depression. Assuming psychiatric outpatients are a reasonable approximation to the IAPT population, then in a sample of 100 patients approx. 50 would have depression and 50 would not. Of the 50 with depression, 45 would have been correctly identified and treated. However, of the 50 who did not have depression, only 30 would have been correctly identified, leaving 20 as false positives and candidates for inappropriate treatment. Thus roughly for every two depressed cases appropriately treated, one would be inappropriately treated. For depression the appropriate/inappropriate ratio is 2/1 – pretty wasteful.

Generalised Anxiety Disorder

In the Zimmerman and Mattia study, 17.5% of the psychiatric outpatients had GAD. Thus in a sample of 100 patients approx. 18 would have GAD, of whom 16 would have been correctly identified and treated. But 82 would not have GAD, and with a specificity of only 50%, 41 of them would have been wrongly regarded as having GAD and treated inappropriately. Thus for GAD the appropriate/inappropriate ratio is 16/41, so that for every one GAD client treated appropriately, 2-3 others are treated inappropriately.

Post-traumatic Stress Disorder

In the Zimmerman and Mattia study, 10.5% of the psychiatric outpatients had PTSD. Thus in a sample of 100 clients approx. 11 would have PTSD, with 9 being correctly classified and treated. However, 89 would not have PTSD; of these, 62% (55) would have been correctly classified, meaning that 34 would be false positives. Thus the ratio of appropriately treated to inappropriately treated is approximately 1/4: for every one treated appropriately, 4 are treated inappropriately.
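The arithmetic in the three sections above can be sketched in a few lines of Python. This is a hypothetical illustration (the function name is mine, not from any published source), using the prevalence figures from Zimmerman and Mattia (2001) and the approximate sensitivities and specificities cited above. Because it skips the intermediate rounding used in the text, the depression figures come out at roughly 43 vs 21 rather than 45 vs 20, but the conclusions are the same.

```python
def treatment_counts(prevalence, sensitivity, specificity, n=100):
    """Expected numbers of appropriately and inappropriately treated
    patients per n people screened, if treatment follows the screen alone."""
    with_disorder = prevalence * n
    without_disorder = n - with_disorder
    appropriate = sensitivity * with_disorder             # true positives
    inappropriate = (1 - specificity) * without_disorder  # false positives
    return appropriate, inappropriate

# Prevalences from Zimmerman & Mattia (2001); sensitivity/specificity as cited above.
for name, prev, sens, spec in [
    ("Depression", 0.479, 0.90, 0.60),
    ("GAD",        0.175, 0.90, 0.50),
    ("PTSD",       0.105, 0.90, 0.62),
]:
    ok, bad = treatment_counts(prev, sens, spec)
    print(f"{name}: ~{ok:.0f} treated appropriately, ~{bad:.0f} inappropriately per 100")
```

For GAD this gives 16 appropriate against 41 inappropriate, matching the 16/41 ratio above, and for PTSD 9 against 34, the roughly 1-to-4 ratio.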

IAPT’s Preposterous Claim On Recovery

Given the ubiquity of unnecessary treatment in IAPT, its claim of a 50% recovery rate [IAPT Manual (2019)] is preposterous. I found a 10% recovery rate, Scott (2018) https://doi.org/10.1177%2F1359105318755264, which is much more likely if a body relies simply on a screening instrument.

The Need To Translate Research Methodology Into Routine Practice

Ehlers et al., Trials (2020) 21:355 https://doi.org/10.1186/s13063-020-4176-8, used the PDSQ to screen for cases of PTSD in their study of therapist-assisted treatment for the condition, but followed the screen up with a standardised semi-structured interview, the SCID, to then diagnose PTSD. In this study they kept the screen in its place and did not allow it free rein, as IAPT does. The IAPT Manual, p. 25, states: 'To ensure that all relevant problems are identified, it is recommended that assessments include systematic screening for each of the conditions that IAPT treats. Standardised commercial screening questionnaire that cover the full range of problems and that can be completed by people before they attend an assessment can be considered' and cites the PDSQ as an example. But sole use of any screening instrument is very wasteful.

Ehlers et al. (2020) have sought to establish whether no more than 4 hours of therapist time can make a real-world difference to PTSD sufferers' lives, a consummation devoutly to be wished. These authors could be well employed helping IAPT get its own house in order.

 

Dr Mike Scott

‘Psychometric Testing In Clinical Settings’ – contains a devastating critique of IAPT

this is the title of a chapter by Hamilton Fairfax in the book 'Psychometric Testing', edited by Barry Cripps (2017) and published by John Wiley. Fairfax pulls no punches on the over-interpretation of a psychometric test score:

‘These concerns are increased when organisations place value on such scores, and base commissioning and service decisions on them away from the clinical context. Increasingly, such decisions by NHS and private health care providers are made by individuals who are either not familiar with the specifications of services or not sufficiently trained clinically or methodologically to understand the information they are provided with. Instead they are under pressure to ensure services are economically viable; the attraction of a number that purports to measure improvement is obvious. It is possible to manage mental health services in a way that would not be permissible in banking, the military or food production.

One risks accusations of arrogance or pomposity if one’s critique of a management decision is based on the manager’s lack of awareness or training. A strange and unintended consequence of EBP (evidence based practice) is that it provides a heuristic for the uninformed to speak with authority in a way in which many of us would not speak to a mechanic just because we had read a car manual. Stating that something is ‘evidence-based’, whether or not the person knows much about the area being discussed, is often seen as sufficient. It is dangerous to base policy and the survival of clinical services on this level of insight. In outlining this position I do not want to demonise managers or create an equally unhelpful heuristics. Many are well informed, with good clinical experience, but their roles have increasingly alienated them from the realities of practice. Demand and the pressure to be more effective can diminish flexibility and creative thinking, leading to a reliance on quick information such as numbers and ‘evidence’. I speak from personal experience and am aware that these pressures only increase with more responsibilities’.

Hopefully we can manage a better New Year

Dr Mike Scott