When Mental Health Screening Makes You Want to Scream

Assessment in the Improving Access to Psychological Therapies (IAPT) service takes up 25% of all resources, yet in the NICE-approved protocols for depression and the anxiety disorders, assessment takes about 8% of resources (roughly one assessment session for every 10-15 treatment sessions). Much of the IAPT assessment consists of screening questionnaires completed over the telephone; by contrast, the assessment in the NICE protocols consists largely of a reliable standardised diagnostic interview. There is no reason to believe that the screening measures used in IAPT (the PHQ-9 and GAD-7) are pertinent to the client’s problems. Despite this, IAPT re-administers these measures every session and often makes them the focus of discussion, so the actual total time spent in IAPT on ‘assessment’ is likely much more than 25% of the budget. The added value is? IAPT has a credibility gap. Is it really credible, with such a skewed distribution of resources, that IAPT should claim success comparable to that found in the randomised controlled trials on which the NICE recommendations are based? IAPT claims fidelity to NICE protocols, so that it is not divorced from its paymasters, NHS England/Clinical Commissioning Groups, but it is a well-known philanderer. It provides no evidence of fidelity, just protestations.

I hear that IAPT is looking at introducing artificial intelligence into the assessment process! Whilst there are attempts to develop an algorithm for suicide risk by analysing records and the like, the outcome is likely to be some marrying of this with clinician expertise, and even that goal is quite a challenge. A meaningful algorithm for IAPT’s assessments is likely to be a whole different ballgame. Time would be better spent doing the simple things, such as really listening to clients and making a reliable diagnosis to direct treatment; that would be an exercise in intelligence.

IAPT represents an extreme case of the tail wagging the dog when it comes to screening. A just-published editorial in JAMA Internal Medicine by Mitchell Katz shows the more general problem with screening:

Editor’s Note
June 14, 2021

A Response to Excessive Screening Questions

JAMA Intern Med. Published online June 14, 2021. doi:10.1001/jamainternmed.2021.2925

‘Recently, JAMA Internal Medicine published a firsthand account of a trauma survivor who had disclosed over a series of years on multiple self-administered screenings that she had experienced trauma,1 yet none of her health care clinicians had ever followed up with her or provided any support or resources. In reading her account, I wondered how many of her clinicians had even seen the screening questionnaires she completed.

Standardized screening questionnaires allow primary care clinicians to learn important information about patients, especially in psychosocial areas (eg, depression, anxiety, substance use) that can be difficult to assess in short appointments. Adding to their efficiency, they can be completed in the waiting room. But for them to be useful, they have to be read, and the appropriate resources must be available.

Just as “alarm fatigue” can result in not paying attention to important warnings from electronic health records, use of screening questionnaires performed more often than necessary can deluge clinicians with more information than they can incorporate in a visit, decreasing the efficiency of the visit and leading to cynicism on the part of patients (eg, “Why do they keep asking me if I am depressed, when I keep telling them I am not?”) and on the part of primary care clinicians (eg, “Why are my patients repeatedly given these screeners, when I dealt with this issue on the last visit?”).

In this issue of JAMA Internal Medicine, Simon and colleagues2 estimated the proportion of standardized screenings performed at 24 federally qualified health centers in 2019 that were excessive, defined as performed when not recommended. Six screeners were evaluated (depression, anxiety, smoking status, passive smoke exposure, health literacy, and preferred learning style), all of which were tied to national performance metrics. The authors found that 34.9% of all screenings performed (2 067 152 of 5 917 382) were excessive’.

Dr Mike Scott