PROMS – Track and Trace for Mental Health Without Knowing What Is Being Tracked

A just-published study in the British Medical Journal https://doi.org/10.1136/bmj.m3313 has found that ‘There is insufficient evidence and mostly of low quality, that routine monitoring with PROMS (Patient reported outcome measures) … leads to improvement in outcomes’. Of the five studies reviewed, one was of the Improving Access to Psychological Therapies (IAPT) service, in which the PHQ-9 and GAD-7 self-report measures were used.

Strangely, the authors of the study, Kendrick and Maund (2020), are surprised by the negative findings. It seems not to have occurred to them that if it is not known with any certainty what the patients were suffering from in the first place, then using the most readily available psychometric test to measure outcome is unlikely to yield any positive findings. In none of the studies was a standardised diagnostic interview used to establish diagnosis and determine any accompanying diagnostic comorbidity. Thus it cannot be reliably known which outcome measure is of primary interest and should become the established yardstick before treatment begins, nor what secondary analyses should be declared in advance. This is akin to the need to pre-register how the results of a randomised controlled trial are going to be analysed, rather than going on a post hoc fishing expedition, highlighting some positive finding or other to justify a service.

Last Night of The PROMS?

The use of PROMS appears to be fuelled by the need to process patients quickly, using surrogate outcome measures, rather than taking the time to properly listen to them and use a real-world outcome measure such as loss of diagnostic status for, say, eight weeks, as assessed by an independent evaluator using a standardised diagnostic interview. Psychometric tests completed for the benefit of a treating clinician are subject to demand characteristics, including wanting to please the therapist and not wanting to feel that time has been wasted in engaging in psychological therapy. These concerns are amplified when tests are administered (as in IAPT) on a weekly basis and clients can easily remember their last score.

For all the deficiencies of track and trace over COVID-19, the target is at least not ‘fuzzy’; a fuzzy target renders the process meaningless. Ironically, since the demise of Public Health England, Baroness Dido Harding is in charge of Covid-19 track and trace. I e-mailed her asking if she was also going to assume responsibility for IAPT but have had no reply. Any QUANGO such as IAPT is likely to rejoice at the absence of accountability, but to the detriment of the public. There has to be clarity about exactly who IAPT is accountable to now.

Monitoring Is Necessary But Never Sufficient

Just as monitoring the spread of the coronavirus is critical to triggering preventative measures, it is likely to be insufficient until there is an evidence-based treatment protocol, including a vaccine and treatment of the affected. So too, only an informed monitoring of mental health problems can highlight appropriate treatment interventions. Monitoring by itself is descriptive rather than prescriptive. Unfortunately there is nothing in the Kendrick and Maund (2020) approach that is likely to make it reliably prescriptive, making their proposed developments in monitoring rather pointless.

Dr Mike Scott

 

Is Evidence Based Treatment Possible Without Evidence Based Assessment?

‘No’: this is the take-home message from a just-published study by Moses et al in the Journal of Anxiety Disorders https://www.sciencedirect.com/science/article/pii/S0887618520300931. An evidence-based assessment includes a diagnostic interview, as well as a clinical interview and psychometric tests. Moses et al (2020) summarise the literature showing that the inclusion of a diagnostic interview improves outcome by minimising missed diagnosis and misdiagnosis. These authors bemoan their finding that only a small minority of Australian psychologists use a diagnostic interview, but the position is even worse in the UK, as the largest provider of services, the Improving Access to Psychological Therapies (IAPT) programme, explicitly excludes the making of diagnoses/diagnostic interviews. IAPT cannot improve access to evidence-based psychological therapies because it does not operate the admission gate of an evidence-based assessment.

The absence of an EBA leads to a revolving door, demoralising clients in search of a credible explanation of their difficulties. An EBA is a necessary part of evidence-based practice (EBP) in that it highlights candidate evidence-supported treatments (ESTs). But clinical judgement is still required to ascertain whether there is a sufficient match between the client and the subjects in the EST. Most ESTs have admitted clients with only a limited range of comorbid disorders, who were not cognitively impaired or suffering debilitating pain. Further, the clients in the ESTs have been in a safe environment.

 

Dr Mike Scott

IAPT and BABCP Duck Key Questions

‘What proportion of IAPT clients have maintained recovery from the primary disorder for which they first presented?’ The Improving Access to Psychological Therapies (IAPT) service prides itself on its large, comprehensive database, as if this were somehow a guarantor of the effectiveness of the service. But it is not possible to interrogate this database to determine the extent to which clients are restored to their normal functioning, as they don’t do diagnosis.

Not only don’t they do diagnosis, they refuse to share a platform with anyone known to be critical of them. To date IAPT has not published written rebuttals of its critics’ charges. IAPT uses the muscle of the British Association for Behavioural and Cognitive Psychotherapies (BABCP) when challenged. Later this month the BABCP has its Annual Conference. I have had no indication from the President Elect as to how they are going to address my concerns over conflicts of interest and editorial freedom, but I do know that pride of place is to be given to IAPT’s leading light. BABCP is IAPT’s apologist. It might better spend its time investigating why the IAPT documentation indicates that its therapists, who are invariably BABCP members, make it up as they go along, sprinkling their notes with CBT terms, without any evidence of fidelity to an evidence-based protocol for anything.

Dr Mike Scott

 

 

National Institute for Health Protection to Control IAPT?

In a blog written just before the demise of Public Health England I noted the ‘Breathtaking Naivety of Public Health England On Mental Health’, https://wp.me/p8O4Fm-2HI. My hope is that its replacement, the National Institute for Health Protection (NIHP), will question why £4 billion of taxpayers’ money has been spent on the Improving Access to Psychological Therapies (IAPT) programme without any publicly funded independent evaluation of the service. My own independent finding was that only 10% of those going through the IAPT service recover, and that the public are very dissatisfied https://connection.sagepub.com/blog/psychology/2018/02/07/on-sage-insight-improving-access-to-psychological-therapies-iapt-the-need-for-radical-reform/. By contrast IAPT claims a 50% recovery rate, but my just-published paper in the British Journal of Clinical Psychology, https://onlinelibrary.wiley.com/doi/10.1111/bjc.12264#.XzwEMhZvXuk.email, casts serious doubts on the Service’s claim.

I have written to Baroness Harding of Winscombe, Dido Harding, the head of the NIHP, to clarify whether the NIHP is indeed going to be the monitor of IAPT’s performance and, if not, who is. I have also stressed that no agency, including IAPT, should be allowed to mark its own homework. It is imperative that the metric for gauging the effectiveness of a service is one that the general public would recognise as meaningful, such as being independently assessed as no longer suffering from the disorder with which they first presented, as opposed to a surrogate measure, such as a change of score on a psychometric test completed in the presence of the therapist.

As MPs resume sitting in Parliament it is critical to ask who will now be in charge of ensuring IAPT does what it says on the tin and how will this QUANGO be made accountable?

Dr Mike Scott

The Improving Access To Psychological Therapies (IAPT) Programme Is Spreading Into Prisons – Why?

 

The July/August 2020 issue of The Psychologist has a one-page advert from the Forward Trust recruiting IAPT workers and others for positions in five prisons. The Service Development Manager of the Trust, speaking to The Psychologist, said ‘Many of our clients present with complex issues that would preclude them from community IAPT’. This raises the interesting question of what body of evidence they are to call upon in dealing with these complex cases. Doubtless the Trust has done valuable work in, for example, giving out self-help leaflets to help prisoners manage the Covid crisis and facilitating connections with family. But such work was done hitherto by probation officers. The Trust was founded 30 years ago to support people in prison with drug problems. Where is the added value of an IAPT input?

IAPT has already published data showing that clients with personality disorders do less well in community IAPT. But IAPT clinicians have no way of reliably identifying clients with personality disorders. Further, there is no evidence that they can faithfully administer a treatment protocol for personality disorder. It seems that this is yet another example of IAPT’s expansionism, matching its foray into treating long-term physical conditions that are medically unexplained. The lack of demonstrated evidence seems not to bother the service; it will likely proceed by running workshops of alleged best practice. It takes it for granted that its expansion is an obvious good. This is actually incredibly arrogant, and demeaning to pre-existing services.

 

British Journal of Clinical Psychology Commentary and Rebuttal Of IAPT Paper

The Journal yesterday published my critique, ‘Ensuring IAPT Does What It Says On The Tin’ https://onlinelibrary.wiley.com/doi/10.1111/bjc.12264#.XzwEMhZvXuk.email, of the recent IAPT (Improving Access to Psychological Therapies) paper by Wakefield et al (2020).

£4 billion has been spent on IAPT without publicly funded independent audit. This is a scandal when the best evidence is that only 10% of those using the service recover. There is no evidence that the Service makes a real-world difference to clients’ lives, returning them to their old selves, no longer suffering from the disorder with which they first presented, for a significant period. The claimed 50% recovery rate by the service is absurd. Not only has the now defunct Public Health England mishandled the pandemic, it has had a matching performance on mental health. It is too early to judge whether the newly formed Health Protection Board will grasp the nettle of mental health. But I doubt that it will until there is open professional discussion that the present IAPT service is not fit for purpose. It will likely need the involvement of politicians to ensure radical reform of IAPT and that mental health is not again kicked into the long grass.

Dr Mike Scott


Public Health England’s Breathtaking Naivety On Mental Health

The Government’s flagship mental health provider, Improving Access to Psychological Therapies (IAPT), has been a serial offender when it comes to non-declaration of conflicts of interest. If this were not enough, IAPT has been allowed to mark its own homework: it has not been subjected to publicly funded independent evaluation. All despite the taxpayer paying IAPT’s bill of £4 billion. Unfortunately, determining which publicly funded ‘experts’ decided what, when, and in collaboration with whom is likely to be as daunting as discovering who decided what with regard to the pandemic. It’s about as transparent as our major rivers. But there is a pressing need for a public inquiry.

Latest Violation

IAPT’s latest violation occurred in a paper examining the agency’s data in last month’s British Journal of Clinical Psychology https://doi.org/10.1111/bjc.12259, in which an IAPT Programme Director and corresponding author declared no conflict of interest. I protested to the Editor, Professor Grisham, about this violation, and that the authors, though citing my study of 90 IAPT clients, failed to mention that the key message of the study was that the recovery rate was 10% https://doi.org/10.1177%2F1359105318755264. These authors positively framed their findings to underline IAPT’s frequently reiterated claim that it approaches a 50% recovery rate. My commentary on the Journal article has been accepted and will be published in it in the near future.

Violation By The Prime Movers In IAPT

In 2018 a study funded by the Wellcome Trust was published in the Lancet, headed ‘Transparency about the outcomes of mental health services (IAPT approach): an analysis of public data’. It states:

‘Role of the funding source
The funder of the study had no role in study design, data
collection, data analysis, data interpretation, or writing of
the report. The corresponding author had full access to
all the data in the study and had final responsibility for
the decision to submit for publication’.

But there is no mention that the lead author is the leading light in IAPT, and that he, together with one of the other authors, Lord Layard, was an architect of IAPT.

A Systemic Problem 

In July 2017 I protested to the Editor of Behaviour Research and Therapy (BRAT) that no conflict of interest had been declared in a paper by Ali et al published in that month’s issue of the Journal, https://doi.org/10.1016/j.brat.2017.04.006, focusing on IAPT data on relapse after low-intensity interventions. I pointed out that the lead author headed the Northern IAPT research network.

In October 2015 Behavioural and Cognitive Psychotherapy published a paper by Kellett et al about an IAPT service, ‘Large Group Stress Control’ https://doi.org/10.1017/S1352465815000491. This was authored by an IAPT teacher and researcher and appears without any statement of conflict of interest.

 

Dr Mike Scott

“What’s The Odds Of Getting Back To My Old Self, With This Psychological Treatment?”

The response is likely to be a deafening silence from those most likely to be encountered, a Psychological Wellbeing Practitioner (PWP) or a GP. Alternatively, they may reply ‘it’s complex’, leaving you bemused, or patronise you with a reply of ‘we don’t know until you try’. But the cancer sufferer, and those close to them, would not tolerate being fobbed off about the likely success rate of a proposed oncology treatment. Further, they would deem it necessary to have a face-to-face consultation with a Consultant for this question to be satisfactorily answered.

Contrast this with the likely scenario in mental health: following a self-referral you would undergo a 20-30 minute telephone assessment by a Psychological Wellbeing Practitioner (PWP) [from the Improving Access to Psychological Therapies (IAPT) programme], the most junior member of staff. Unfortunately their training totally precludes their being able to answer this question; the PWP simply does not know the answer. His/her stock in trade is low-intensity interventions such as guided self-help or computer-assisted therapy, delivered in six or fewer sessions. The PWPs’ training courses inform them that such interventions outperform usual treatment. More than that they do not know. Their ambition is usually to become a high-intensity therapist delivering psychological therapy over a much greater number of sessions.

The PWPs are unaware that the success of Cognitive Behaviour Therapy (CBT) in low-intensity outcome studies has been gauged solely in terms of a metric called effect size. The (within-subject) effect size is calculated by subtracting the post-treatment mean of a sample from the pre-treatment mean and dividing by the spread of the results (the pooled standard deviation). [Alternatively, if there has been a comparison group in the CBT studies, the means that are subtracted are the post-treatment means of each group, again divided by the pooled standard deviation, to yield a between-subjects effect size.] Assuming that a between-subjects effect size has been calculated, all this tells one is the size of the difference between the two groups; it does not tell you whether everyone improved a little, or some greatly improved whilst some did very poorly. Thus the effect size gives no information that can be passed on to a client as a guide to the likelihood of their recovery after a low-intensity intervention.
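The limitation can be made concrete with a small sketch. The PHQ-9 scores below are invented purely for illustration (PHQ-9 < 10 is the conventional cut-off for being below ‘caseness’): two groups with an identical five-point mean improvement can yield very different effect sizes and, more importantly, very different proportions of clients who actually cross the recovery threshold.

```python
import math
import statistics

def pooled_sd(a, b):
    """Pooled standard deviation of two samples (n-1 denominator)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))

def effect_size(pre, post):
    """Cohen's d: difference between means divided by the pooled SD."""
    return (statistics.mean(pre) - statistics.mean(post)) / pooled_sd(pre, post)

def recovered(post, cutoff=10):
    """Proportion of clients scoring below the clinical cut-off."""
    return sum(score < cutoff for score in post) / len(post)

# Invented pre-treatment PHQ-9 scores for eight clients (mean 18).
pre = [18, 17, 19, 18, 16, 20, 17, 19]

# Group A: everyone improves uniformly by five points (mean 13).
post_uniform = [13, 12, 14, 13, 11, 15, 12, 14]

# Group B: half recover dramatically, half worsen somewhat,
# yet the mean is also 13 -- the same five-point mean improvement.
post_mixed = [4, 21, 5, 22, 3, 23, 4, 22]

print(f"Uniform: d = {effect_size(pre, post_uniform):.2f}, "
      f"recovered = {recovered(post_uniform):.0%}")
print(f"Mixed:   d = {effect_size(pre, post_mixed):.2f}, "
      f"recovered = {recovered(post_mixed):.0%}")
```

Both groups report the same mean change, yet the uniform group shows a large effect size with nobody crossing the recovery threshold (d ≈ 3.8, 0% recovered), while the mixed group shows a modest effect size with half the clients recovered (d ≈ 0.7, 50% recovered). Neither the mean change nor the effect size tells an individual client their odds of dropping below the cut-off; only a recovery proportion does, and effect-size reporting does not supply it.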

By contrast, the psychological therapies to be delivered in high-intensity IAPT are supposed to be based on protocols approved by the National Institute for Health and Care Excellence (NICE). At first sight this is good news, because many of these studies indicate the proportion of people who lost their diagnostic status as a result of psychological treatment, i.e. these studies were concerned with an end point and not just with whether there had been a response to treatment as indicated on some psychometric test. But IAPT has only ever relied entirely on psychometric test results. This exclusive focus on response by IAPT lacks any validity because it is not known what the person was suffering from in the first place! IAPT eschews diagnosis; there is a consistency in this, in that because they don’t do end points, they don’t do beginning points, i.e. they do not establish what the person is suffering from in the first place. It is not possible to substitute measures of response for categorical end points; the latter are determined independently using standardised diagnostic interviews. Matters are compounded further because IAPT uses no measure of treatment fidelity, thus it is totally unknown whether IAPT actually delivers an evidence-supported treatment.

 

Dr Mike Scott


‘Ensuring IAPT Does What It Says On The Tin’

This is my critique of the IAPT paper published in the current issue of the British Journal of Clinical Psychology, and the Editor has just accepted it for publication. Wakefield et al (2020) will be invited to respond.

Not quite sure when it will see the light of day, but hopefully it is at least the beginning of open discussion.

An area I’ve not touched on in my paper is the effect of IAPT on its staff. Some are taking legal action against IAPT for bullying and have highlighted massive staff turnover. But it is very difficult for them to go into detail with litigation pending. Others are suffering in silence until they become financially secure enough to leave. Staff are in an invidious position; at best they might hope for an out-of-court settlement. But unsurprisingly there is no great organisational demand for whistleblowers. Gagging clauses, it appears, are still about, and I heard of one being used recently by an employer against a victim of the Manchester Arena bombing.

We need a national independent inquiry, not only about the speed with which lockdown was imposed, but also about what has been happening in IAPT. But today I was talking with a survivor of the 1989 Hillsborough football disaster, whom I’ve kept in touch with since shortly afterwards, and we reflected on how long it has taken to get anywhere. He was too exhausted to follow through on the statement he gave, which was doctored by the police.

Bullying tends to centre on what the organisations contend are ‘one or two bad apples’, for which at a push they might make some compensation to avoid adverse publicity, and without admitting liability. But I think there is a bigger phenomenon of organisational abuse that operates in an insidious way, akin to racism, and that needs to be called out.

Dr Mike Scott

 

British Journal of Clinical Psychology Responds To IAPT’s Conflict of Interest

Last week I wrote to Professor Grisham, the Editor of the Journal, complaining, inter alia, of IAPT’s failure to declare a conflict of interest over the paper by Wakefield et al (2020) in the current issue, see link https://doi.org/10.1111/bjc.12259. The Journal has responded by formally inviting me to write a commentary which, subject to peer review, will appear alongside a response by the said authors. The text of my letter was as follows:

Dear Professor Grisham

Re: Improving Access to Psychological Therapies (IAPT) in the United Kingdom: A systematic review and meta-analysis of 10-years of practice-based evidence by Wakefield et al (2020) https://doi.org/10.1111/bjc.12259

In this paper all the authors declare ‘no conflict of interest’. But the corresponding author of the study, Stephen Kellett, is an IAPT Programme Director. This represents a clear conflict of interest that I believe you should alert your readers to. The study is open to a charge of allegiance bias.

I am concerned that in their reference to my published study “IAPT – The Need for Radical Reform”, Journal of Health Psychology (2018), 23, 1136-1147 https://doi.org/10.1177%2F1359105318755264, these authors have seriously misrepresented my findings. They chose to focus on a subsample of 29 clients, from the 90 IAPT clients I assessed, for whom psychometric test results were available in the GP records. I warned that concluding anything from this subsample was extremely hazardous. The bigger picture was that I independently assessed the whole sample using a ‘gold standard’ diagnostic interview and found that only the tip of the iceberg lost their diagnostic status as a result of IAPT treatment. Wakefield et al were strangely mute on this point. They similarly fail to acknowledge that their study involved no independent assessment of IAPT clients’ functioning and no use of a ‘gold standard’ diagnostic interview.

The authors of Wakefield et al (2020) compare their findings favourably with those found in randomised controlled trial efficacy studies, suggesting that IAPT’s results approach a 50% recovery rate. But there can be no certainty of matching populations. In the said study there was no reliable determination of diagnostic status, thus there is no way that this heterogeneous sample can be compared with homogeneous samples of different primary disorders, e.g. obsessive compulsive disorder, adjustment disorder, etc.

It is unfortunate that the British Journal of Clinical Psychology has allowed itself to become a vehicle for the marketing of an organisation which has only ever marked its own homework. The published study also calls into question the standard of the peer review employed by the Journal.

Regards

 

Dr Michael J Scott

At least we are getting to open debate, which is more than can be said for BABCP’s in-house IAPT comic, CBT Today.