BABCP Response - NICE Consultation January 2022

‘What Is Going On, When The Dose of Psychological Therapy Is Less Than Half That Recommended By The National Institute for Health and Care Excellence (NICE)?’

 

A recent study by Saunders et al. (2021), Journal of Affective Disorders, 294, 85-93, found that the average client in Improving Access to Psychological Therapies (IAPT) low-intensity therapy has three sessions (mean 2.85, SD 2.81), whilst the average client in high-intensity therapy has five sessions (mean 4.79, SD 5.51). But the IAPT Manual (2021, p. 51) states that clients ‘should be offered up to the NICE-recommended number of sessions for the relevant clinical condition. For high-intensity work this would generally be in the range of 12 to 20 sessions’. Further, low-intensity interventions such as the Stress Control programme typically involve six sessions. Clearly IAPT clients are receiving a sub-therapeutic dose of treatment. Yet, astonishingly, IAPT claims that it is NICE compliant!

But IAPT uses language reminiscent of Alice in Wonderland: ‘treatment’ is defined as attending at least two treatment sessions. This means that those who miss their first treatment appointment [40% according to Davis, A., Smith, T., Talbot, J., Eldridge, C., & Betts, D. (2020). Predicting patient engagement in IAPT services: a statistical analysis of electronic health records. Evidence-Based Mental Health, 23(1), 8–14. https://doi.org/10.1136/ebmental-2019-300133] or attend only one session [42% according to Davis et al. (2020)] are not counted among those treated, yet together they constitute 64% of those passing through the IAPT starting gate. Thus approximately two-thirds of the would-be beneficiaries of IAPT’s ministrations refuse to take their medicine! Add this into the equation and the service’s performance looks woeful.
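A minimal sketch of the arithmetic behind that ‘approximately two-thirds’ figure, under the assumption (not stated explicitly in Davis et al.) that the 42% who attend only one session is a proportion of those who do turn up to a first appointment:

```python
# Illustrative arithmetic only, using the Davis et al. (2020) figures quoted above.
# Assumption: the 42% who attend only one session is a share of those who attend
# a first appointment at all, not a share of all referrals.

missed_first = 0.40       # miss their first treatment appointment
one_session_only = 0.42   # of attenders, stop after a single session

attend_first = 1 - missed_first                    # 60% of referrals turn up at all
lost_after_one = one_session_only * attend_first   # 25.2% of referrals stop after one session

never_counted_as_treated = missed_first + lost_after_one
print(f"Never counted as 'treated': {never_counted_as_treated:.0%}")  # ~65%
```

On that reading the figure lands at roughly 65%, close to the 64% quoted; the exact value depends on how Davis et al. define their denominators.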

 

IAPT uses sleight of hand to claim NICE compliance (a sine qua non of funding by Clinical Commissioning Groups, CCGs). It asks its therapists to use ‘problem descriptors’ to select an ICD code (the World Health Organization’s coding system for diagnosed disorders). But an ICD code is only as reliable as the diagnosis behind it, and IAPT therapists do not make diagnoses; how then can they provide treatment ‘for the relevant clinical condition’ (IAPT Manual, 2021, p. 51)? GPs clearly make diagnoses, and their records often mention problems associated with the diagnosis, but they do not make the diagnosis on the basis of those problems; the two are qualitatively different types of information. An IAPT therapist, like a social worker, may note psychosocial stressors, but any mention of disorder would not be taken as evidence of disorder in a court of law. Were IAPT therapists to claim they had made a diagnosis, that would be subject to legal challenge, and the organisation steers its employees away from this.

Many IAPT therapists are delighted with this, as they see ‘diagnosis’ as akin to blasphemy. NHS England and CCGs refuse to acknowledge the elephant in the room; they have chosen to avert their gaze from the problem entirely, preferring to sup with the IAPT hierarchy.

 

Dr Mike Scott

 


Group Interventions: A Manager’s Dream But A Client’s…?

As the pandemic recedes and concerns about the scarcity of individual therapy continue, there are likely to be increased calls for group interventions, whether psychoeducational or psychotherapeutic. But the vexed question has to be asked: does the particular group intervention envisaged make a real-world difference to clients’ lives? Or is the group intervention more to do with appearing to have done something?

There are protocols for depression and the anxiety disorders described in my book Simply Effective Group CBT (2011) London: Routledge.

The content for the group sessions detailed in the book can be downloaded by clicking the link below:

https://www.dropbox.com/s/ys0ogfo3k93qmwb/Ptsd%20Group%20treatments%202018.pdf?dl=0

But I have a number of concerns about the utilisation of group interventions in the IAPT-ification of psychological therapy services. There is a danger of spin with regard to group CBT. This can happen easily by taking the abstracts of studies at face value, when many of the authors have developed the protocols that they are evaluating – allegiance bias. I doubt whether, despite the best efforts of any training institution, there will be monitoring of fidelity to evidence-based protocols and a meaningful assessment of outcome. In practice clients may be short-changed by group CBT and may then be put off further therapy. Whilst group interventions may be intended as part of a stepped-care package, clients are least likely to attend the appointment that marks the start of the new dawn [Davis et al (2020) https://doi.org/10.1136/ebmental-2019-300133]. The following critique of group studies may be useful:

  1. Group psychoeducation interventions have an appeal that is not matched by evidence of their effectiveness. I can find no study in which there has been an independent assessment of effectiveness using a standardised diagnostic interview. Thus it is not known what proportion of sufferers with a particular disorder would lose their diagnostic status as a result of treatment, much less how enduring recovery would be. ‘No longer suffering from a disorder’ is a metric that a member of the public can readily understand, but to be told that you no longer require treatment because you are now below ‘caseness’ on a psychometric test is likely to produce a puzzlement that the client is too polite, or too disadvantaged, to challenge.
  2. It may be to the advantage of managers and academic institutions to promote psychoeducational groups, but what is the pay-off for the client? The experience of group involvement may not be adverse – clients may even have enjoyed the sense of belonging that comes from group attendance – but if at the end of the day it has not made a real-world difference to the client’s life, was it worthwhile? The issue of group psychoeducation has to be approached from a bottom-up perspective, not top-down. Öst (2008) published a 22-item measure of the methodological rigour of psychotherapy outcome studies; each item is rated 0-2, so a maximum score of 44 is possible. Applying this measure to the most widely delivered psychoeducational group, Stress Control, I found that it yielded a score of 9, whereas the mean score in the CBT outcome studies was 28, and Öst suggested that treatments evaluated in studies scoring 19 or below could not be considered empirically supported. Failings of the recent Stress Control studies (the original White et al. study fared slightly better because it specified a particular diagnosis, GAD) included, amongst others, the following domains: reliability of the diagnosis, specificity of the outcome measure, blind evaluation, assessor training, design, assessment points, control of concomitant treatment, and assessment of clinical significance.

3. In a paper titled “Are individual and group treatments equally effective in the treatment of depression in adults?”, Cuijpers et al. answered with a cautious ‘yes’. They drew upon my own study [Scott and Stradling (1990) https://doi.org/10.1017/S014134730001795X] comparing individual and group CBT with a waiting list, but did not mention that, in order to make the group CBT a viable entity, we had to offer up to three individual sessions concurrently, though interestingly few took up all three. Selling the group CBT was a challenge; however, both modalities were equally effective, and I think Cuijpers et al.’s conclusion is appropriate.

4. In 2018 Carpenter et al. published a study of the efficacy of CBT for anxiety disorders (https://doi.org/10.1002/da.22728). They considered only studies in which the comparison condition was a psychological placebo, e.g. supportive counselling, i.e. one with a credible rationale; this controls for common factors such as the therapeutic alliance. Carpenter et al. found only seven studies comparing individual and group CBT under these provisos, and for only two disorders, social anxiety disorder and PTSD, was there sufficient data to reach conclusions; in both instances individual CBT was superior to group CBT.

But in 2020 Barkowski et al. published a meta-analysis of group psychotherapy and claimed that group psychotherapy for anxiety disorders is more effective than active treatment controls, citing a Hedges g effect size of 0.29. These authors fail to point out that this effect size is small: a Hedges g of 0.2 would mean that the average person in the largely CBT groups would have done better than 58% of those in the control condition. Whether such differences are clinically significant is a matter of debate. Further, only three anxiety disorders were considered (GAD, panic disorder and social anxiety disorder); the results do not apply to OCD or to PTSD (which historically was placed among the anxiety disorders but no longer is). The authors proclaim that mixed-diagnosis groups are as effective as diagnosis-specific groups. But this is misleading: the most recently published group transdiagnostic study, Roberge et al. (2020), had approximately half of clients (52.8%) suffering from generalised anxiety disorder and approximately a third (29.4%) suffering from social anxiety disorder. Thus over 80% of the clients were suffering primarily from one or other of just two disorders; more accurately, it should be termed limited transdiagnostic therapy. Further, clients were recruited via newspaper advertisements, 86% were women, 42% had a university diploma, only half of clients were completers (i.e. attended eight or more treatment sessions), and only half lost their principal diagnosis. This makes generalising from these studies problematic.
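To make the ‘better than 58%’ conversion concrete, here is a minimal sketch (my illustration, not from Barkowski et al.): under the usual assumption of normally distributed outcomes with equal variances, the proportion of the control group falling below the average treated person is the standard normal CDF evaluated at g, sometimes called Cohen’s U3.

```python
# Converting a standardised effect size (Hedges g) into Cohen's U3: the share of
# the control group scoring below the average person in the treatment condition.
# Assumes normally distributed outcomes with equal variances.
from math import erf, sqrt

def u3(g: float) -> float:
    """Standard normal CDF evaluated at g."""
    return 0.5 * (1 + erf(g / sqrt(2)))

for g in (0.20, 0.29, 0.80):
    print(f"Hedges g = {g:.2f} -> average treated person does better than {u3(g):.0%} of controls")
# g = 0.20 -> ~58%, g = 0.29 -> ~61%, g = 0.80 (conventionally 'large') -> ~79%
```

Even at g = 0.29 the overlap between the treated and control distributions remains substantial, which is the point being made above.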

The danger is that group devotees look only at the abstracts of the group studies, without realising that the authors were the developers of the protocols, so their findings need to be taken with a great deal of caution. My worry is that IAPT in particular will seize on groups as a way forward in a numbers game, and clients will be short-changed.

Dr Mike Scott


Stopped Care Replaces Stepped Care in IAPT

Over 70% (73.3%) of Improving Access to Psychological Therapies (IAPT) clients are offered low-intensity treatment (such as computer-assisted CBT, group psychoeducation or guided self-help) first, and only 4% (4.1%) are then stepped up to high-intensity treatment. The first transition appointment is the least well attended [Davis et al (2020) https://doi.org/10.1136/ebmental-2019-300133].

IAPT clients haemorrhage from the system at the outset: 40% of first appointments are missed and 42% of clients attend only one treatment session. But IAPT’s claimed recovery rate of 50% applies only to those who complete two or more treatment sessions. The suspicion is that the real-world recovery rate is much lower. Examination of the trajectory of 90 IAPT clients using a gold-standard diagnostic interview [Scott (2018) IAPT – The Need for Radical Reform, Journal of Health Psychology https://www.dropbox.com/s/flvxtq2jyhmn6i1/IAPT%20The%20Need%20for%20Radical%20Reform.pdf?dl=0] suggests that only the tip of the iceberg recover.

A small proportion of IAPT clients (16.4%) are offered high-intensity treatment first. The IAPT Manual recommends that this should occur for clients suffering from PTSD, social anxiety disorder or severe depression, but it offers no reliable methodology for deciding who is in which category. The agency claims it does not do ‘formal diagnosis’.

 

How are clients to be directed to appropriate care?

 

Dr Mike Scott


Constant Breaches of Conflict of Interest By IAPT and NHS England

 

The latest example comes from Saunders et al. (2021) in the Journal of Affective Disorders, 294, 85-93, https://doi.org/10.1016/j.jad.2021.06.084. None of the nine cited authors acknowledges any conflict of interest. But the database they draw upon is from the Improving Access to Psychological Therapies (IAPT) treatment programme; two of the nine work for iCope, an IAPT service, as well as being part of a research network of IAPT staff and academics, and four others are also part of the IAPT Service Improvement and Research Network (SIRN). Their paper is titled ‘Older adults respond better to psychological therapy than working age adults: evidence from a large sample of mental health service attendees’. The authors note that their finding runs counter to other studies which have suggested that interventions for depression are equally effective in older and working-age adults, and even that older adults suffering from anxiety disorders have worse outcomes than working-age patients. They are curiously blind to the possibility that the IAPT database is suspect, that it does not measure what it purports to measure. There is a clear operation of allegiance bias. Readers and journal editors have a right to be alerted to the possibility of allegiance bias by a transparent declaration of conflicts of interest. I wrote to the journal editor about this but he declined to publish my letter.

 

  1. Earlier this year I wrote a blog on a study by Barkham et al (2021) https://doi.org/10.1186/s12888-018-1899-0 which compared person-centred counselling and cognitive behaviour therapy (CBT) in a high-intensity therapy service delivered by IAPT. Curiously, patients were screened for the study using the Clinical Interview Schedule – Revised, but neither this nor any other standardised diagnostic interview was used as an outcome measure. Further, Barkham et al (2021) chose to adopt the Improving Access to Psychological Therapies (IAPT) primary outcome measures, the PHQ-9 [Kroenke et al (2001)] and GAD-7 [Spitzer et al (2006)], without any discussion. There is no comment that these are self-report measures, subject to demand characteristics, and that changes in them are impossible to interpret without comparison to an active placebo treatment. Why such apparent blindness? The answer becomes apparent on reading the declaration of conflicts of interest: the authors are either devotees of person-centred counselling or have links with IAPT. Their take-home message is that person-centred counselling might be better than CBT for depressed patients. But there is no attempt to address the question of what proportion of patients lost their diagnostic status, and for how long, as determined by an independent blind clinical assessment using a standardised interview. Service users’ interests are ill-served by this type of study, which additionally ignored data suggesting that the recovery rate in IAPT is just 10% [Scott (2018) https://doi.org/10.1177/1359105318755264].
  2. IAPT is the biggest provider of group psychoeducation, and it was given a boost by a Dolan et al (2021) study in which all authors declared no conflict of interest. But the corresponding author of the Dolan et al (2021) study is an IAPT programme director, and another of the authors has IAPT involvement. [Dolan, N., Simmonds-Buckley, M., Kellett, S., Siddell, E., & Delgadillo, J. (2021). Effectiveness of stress control large group psychoeducation for anxiety and depression: Systematic review and meta-analysis. The British Journal of Clinical Psychology, 60(3), 375–399. https://doi.org/10.1111/bjc.12288]

 

3. In March 2021 the British Journal of Clinical Psychology published my commentary ‘Ensuring IAPT Does What It Says On The Tin’, in which I wrote: ‘In the Wakefield et al. (2020) paper all the authors declare ‘no conflict of interest’. But the corresponding author of the study, Stephen Kellett, is an IAPT Programme Director. The study is therefore open to a charge of allegiance bias. It is therefore not surprising that Wakefield et al. (2020) fail to make the distinction between IAPT’s studies and IAPT studies. By definition, the former have a vested interest, akin to a drug manufacturer espousing the virtues of its psychotropic drug. Whilst an IAPT study is conducted by a body or individual without a vested interest, in this connection Wakefield et al. (2020) have implicitly misclassified this author’s IAPT study, Scott (2018). In their study, Wakefield et al. (2020) make reference to the Scott (2018) study with a focus on a subsample of 29 clients (from the 90 IAPT clients) for whom psychometric test results were available in the GP records. But in Scott (2018) it was made clear that concluding anything from such a subsample was extremely hazardous. The bigger picture was that 90 IAPT clients were independently assessed using a ‘gold standard’ diagnostic interview, either before or after their personal injury (PI) claim. Independent of their PI status, it was found that only the tip of the iceberg lost their diagnostic status as a result of IAPT treatment. Wakefield et al. (2020) were strangely mute on this point. They similarly failed to acknowledge that the ‘IAPT’s studies’ involved no independent assessment of IAPT clients’ functioning and there was no use of a ‘gold standard’ diagnostic interview.’

 

Failure to declare conflicts of interest is not confined to journals; it also operates in NHS England, which directs Clinical Commissioning Groups. IAPT staff are employed by NHS England, which has no independent body overseeing IAPT, and it is therefore unsurprising that the expansion of the service is given wholesale backing.

Current NHS England team

Sarah Holloway, Head of Mental Health, NHS England
Xanthe Townend, Programme Lead – IAPT & Dementia, NHS England

David M. Clark, Professor and Chair of Experimental Psychology, University of Oxford; National Clinical and Informatics Adviser for IAPT

Adrian Whittington, National Lead for Psychological Professions, NHSE/I and HEE; IAPT National Clinical Advisor: Education

Jullie Tran Graham, Senior IAPT Programme Manager

Hayley Matthews, IAPT Programme Manager, NHS England

Andrew Armitage, IAPT Senior Project Manager, NHS England

Sarah Wood, IAPT Project Manager, NHS England

 

Dr  Mike Scott

References

Barkham, M., Saxon, D., Hardy, G. E., Bradburn, M., Galloway, D., Wickramasekera, N., Keetharuth, A. D., Bower, P., King, M., Elliott, R., Gabriel, L., Kellett, S., Shaw, S., Wilkinson, T., Connell, J., Harrison, P., Ardern, K., Bishop-Edwards, L., Ashley, K., Ohlsen, S., … Brazier, J. E. (2021). Person-centred experiential therapy versus cognitive behavioural therapy delivered in the English Improving Access to Psychological Therapies service for the treatment of moderate or severe depression (PRaCTICED): a pragmatic, randomised, non-inferiority trial. The Lancet Psychiatry, 8(6), 487–499. https://doi.org/10.1016/S2215-0366(21)00083-3

Scott, M. J. (2018). Improving Access to Psychological Therapies (IAPT) – The Need for Radical Reform. Journal of Health Psychology, 23(9), 1136–1147. https://doi.org/10.1177/1359105318755264

 

Scott, M. J. (2021). Ensuring that the Improving Access to Psychological Therapies (IAPT) programme does what it says on the tin. The British Journal of Clinical Psychology, 60(1), 38–41. https://doi.org/10.1111/bjc.12264

 

 

 


IAPT – Many Visitors But Few Customers

 

If you are a visitor to the Improving Access to Psychological Therapies (IAPT) service there is a 1 in 4 (27%) chance of recovery, using the service’s own metric [Walker, C., Speed, E. & Taggart, D. (2018). Turning psychology into policy: a case of square pegs and round holes? Palgrave Communications, 4, 108. https://doi.org/10.1057/s41599-018-0159-8]. But IAPT claims a 50% recovery rate. However, if you read the fine print, this claim relates only to those clients who complete two or more treatment sessions – a completer rather than intention-to-treat analysis.

 

According to the UK Government’s Improving Access to Psychological Therapies (IAPT) annual report for 2019-2020, 489,547 people curtailed involvement after having one treatment appointment, whilst 606,192 had two or more treatment sessions. Thus almost as many people have just one treatment session as have two or more. Of the 1.69 million referrals to IAPT in 2019-2020, only 1.17 million left the starting gate; 30.77% (almost 1 in 3) were non-starters. IAPT’s completer analysis is misleading, as the sketch below illustrates.
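A minimal sketch of what the headline 50% recovery rate becomes once the missing denominators are restored. The session counts and referral totals are those quoted above; treating the 50% as applying only to clients with two or more sessions is my assumption for illustration:

```python
# Illustrative only: completer analysis versus an intention-to-treat style denominator,
# using the 2019-20 IAPT figures quoted above (approximate).

referrals        = 1_690_000   # referrals to IAPT in 2019-20
started          = 1_170_000   # attended at least one appointment ("left the starting gate")
two_or_more      = 606_192     # had two or more treatment sessions (counted as 'treated')
claimed_recovery = 0.50        # headline recovery rate, completers only (assumed)

recovered = claimed_recovery * two_or_more
print(f"Non-starters: {1 - started / referrals:.1%}")                            # ~30.8%
print(f"Recovery as a share of those who started: {recovered / started:.0%}")    # ~26%
print(f"Recovery as a share of all referrals:     {recovered / referrals:.0%}")  # ~18%
```

On those assumptions the 50% figure shrinks to roughly a quarter of those who start, and under a fifth of all referrals, broadly in line with the 27% figure reported by Walker et al. (2018) from a different month of IAPT data.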

But applying the metric of the proportion of clients who lose their diagnostic status, only the tip of the iceberg recover [Scott (2018) IAPT – The Need for Radical Reform, Journal of Health Psychology https://www.dropbox.com/s/flvxtq2jyhmn6i1/IAPT%20The%20Need%20for%20Radical%20Reform.pdf?dl=0]. If the consumers of services ruled, the Improving Access to Psychological Therapies (IAPT) service would go out of business.

But IAPT is answerable to no one. NHS England monitors the quality of inpatient mental health services via the Care Quality Commission (CQC). For example, CQC feedback persuaded NHS England to challenge the quality of Child and Adolescent Mental Health Services run by Cygnet. But the CQC has no such remit over IAPT.

 
Turning psychology into policy: a case of square pegs and round holes?
Carl Walker1, Ewen Speed2 & Danny Taggart2

ABSTRACT

This paper problematizes the ways in which the policy process is conceived in published psychological research. It argues that these conceptions of the policy process fail to adequately reflect the real-world dynamism and complexity of the processes and practices of social policy-making and implementation. In this context, psychological evidence needs to be seen as one type of evidence (amongst many others). In turn this requires researchers to take account of broader political processes that favour certain types of knowledge and disparage others. Rather than be regarded as objective and scientific, policy in this characterisation is regarded as a motivated form of politics. This multi-layered, multi-level hybrid structure is not immediately amenable to the well-intentioned interventions of psychologists. While the tendency of many psychologists is to overestimate the impact that we can have upon policy formation and implementation, there are examples where psychological theory and research has fed directly into UK policy developments in recent years. This paper draws on the recent Improving Access to Psychological Therapies (IAPT) initiative and the work of personality researcher Adam Perkins on the UK’s social security system to ask whether psychology has a sufficiently elaborated sense of its own evidence base to legitimately seek to influence key national areas of public policy. The article cautions against dramatic changes to policy predicated upon any one reading of the variegated and, at times, contradictory psychological evidence base. It concludes that, in order to meaningfully contribute to the policy development process in a way which increases equality and social justice, psychologists need to be more strategic in thinking about how their research is likely to be represented and misrepresented in any particular context. Finally some possible directions for psychologists to take for a more meaningful relationship with policy are suggested.
1 Applied Social Science, University of Brighton, Brighton, UK. 2 School of Health and Social Care, University of Essex, Essex, UK. Correspondence and requests for materials should be addressed to C.W. (email: C.J.Walker@brighton.ac.uk)
Representations of policy in psychological research

The original ‘scientist of happiness’ Jeremy Bentham is commonly regarded as the individual who, more than any other, brought the phenomenon of ‘evidence based policy making’ into being. The sociologist Will Davies states that, “Whenever a policy is evaluated for its measurable outcomes, or assessed for its efficiency using cost benefit analysis, Bentham’s influence is present.” (2015). It is important to bear this influence in mind therefore, when almost 250 years later, the discipline of psychology (via the ‘measurement’ of human experience), is aiming to stake a claim for a central role at the heart of the UK policy agenda.

At issue here is the way in which much academic research, and psychological research in particular, conceives of, and engages with, policy processes. Typically, this involves a naïve reading of policy context (which happens somewhere else, away from the disciplinary context of the research) which is painted as unidimensional and something that can be relatively easily influenced, with no need to address questions of complexity, or other disciplinary and professional boundaries. The assumption tends to be that policy should come to their research, rather than thinking (concretely) about how they, themselves, not some ghostly, ill-defined (and supplicant) policy actor, might make their research policy relevant for them. This fundamental lack of connection with existing policy processes, demonstrates as Cairney (2016) has argued, that researchers are engaging with the policy process that they wish existed rather than the processes which actually exists. The policy processes which do exist are predicated upon internal and external hierarchies of evidence, and processes and practices of influence, patronage and lobbying which are far removed from two throw away lines in the conclusion of an academic paper stating that the preceding research has policy relevance and impelling policy makers to sit up and take notice (whomsoever those policymakers might be – they are seldom if ever identified). This is to say nothing of the wider ideologically framed political and discursive formations that influence what it is possible to imagine in a policy arena at any one place or time.

To illustrate this complexity, and the attendant constraints upon psychological research impacting upon the policy process, we take an example where psychological research has been adopted in a highly selective way; the development of psychological treatments for common mental health issues. Our argument is that, in order to meaningfully contribute to the policy development process in a way which increases equality and social justice, psychologists need to be more strategic in thinking about how their research is likely to be represented and misrepresented in any particular context. This requires a political awareness and engagement that historically psychologists, in the interests of scientific distance, have been wary of.

In terms of the political context, the ghost of utilitarianism looms large. A rigid hierarchy of evidence, primarily predicated upon questions of economic value (broadly construed), dominates the field (Cairney, 2016). Even then, even if the paper has the highest standard of evidence, this does not necessarily guarantee that this evidence will be picked up by policy makers or politicians. If there is no political case for the policy, there will be little chance of the evidence being implemented in a policy context. The point is that demonstrating economic value is not enough, there is also a clear and present need to demonstrate the political expediency of any proposed policy, (regardless of the evidence). It is in this context that we see policy-based evidence, rather than evidence based policy (see Cairney, 2018), where policymakers may make very selective use of the evidence in a way that supports their view and denigrates another view. Consider the implications of this dynamic process for an academic paper which claims policy relevance, but does not direct this to a when or a where. In this context the academic research could, it is argued, be up for grabs, ready and able to be deployed in any number of ways, that may, or may not contradict the intention of the original research.

Earlier we argued that this insistence, on the part of psychologists, reflects a naïve reading of the policy context. Furthermore, this failure to engage with the dynamic complexity of policy practices and processes runs the risk of reifying this mischaracterisation. Such is the vagaries and iniquities of the policy process that there is a need to acknowledge and address these vagaries if research is to have a meaningful and significant impact. There is a compelling evidence that outlines the way that policies are often driven by ideology and biases rather than evidence (Fishbeyn, 2015, Prinja, 2010). Clearly then, there is a danger that psychological research, which argued position X, can and will be taken up by politicians or policy makers and deployed in a way which argues position Y. And, if the characterisation of the policy process envisaged by researchers is the reified mischaracterisation we outline above, the danger of this misrepresentation of the tone and tenor of academic research becomes far more likely to occur, as policy makers and politicians assert their dominance (over the evidence makers, i.e., academic researchers) in the policy arena.

All of this talk relates to issues in and around the policy process, that is to say, the practice of social policy, how it works. But there is also a need to address issues of context, those social, political, economic and cultural spheres where the practice of social policy is enacted. There is a clear need to conceive of the relation between these different spheres in terms of the influence and impact that psychology might have on policy process and practice, but also, in terms of the influence and impact that policy might have on psychology. Failure to consider this ‘to-and fro’, or ‘ebb and flow’ would run as much of a risk of policymakers reifying psychological research as the obverse process we set out here.

This requires psychologists, as a professional group, to give serious discussion and debate to what they are doing in a broader social, political economic and cultural context. It requires psychologists to raise questions about how they should (or should not) contribute to policy. Moreover, questions about the reliability of the psychological evidence base, as well as a tendency to celebrate the statistical flukes left standing after researchers have cast around to find publishable positive results (Rhodes, 2015) has left psychologists Farsides and Sparks (2015) to suggest that ‘psychology is liberally sprayed with bullshit’ (p368).

IAPT, the welfare trait and ‘policy-ready’ psychology
While the tendency of many psychologists is to overestimate the impact that we can have upon policy formation and implementation, there are examples where psychological theory and research has fed directly into UK policy developments in recent years, with some influence. By focusing on a high profile example we can then examine not only if psychology has an impact but more importantly what that impact has been.

The Improving Access to Psychological Therapies (IAPT) initiative is arguably the jewel in the crown of applied psychology’s influence on UK mental health policy, with the rest of the world taking note (NYT, 2017). Based on an elegant economic cost-saving calculation (the ghost of Bentham again), by the UK’s ‘Happiness Tsar’ and economist Richard Layard, combined with evidence-based psychological therapies (primarily CBT), one could reasonably ask what’s not to like? While we of course can see the value in the opportunity for many thousands of people to undertake a course of psychological therapy to alleviate their mental health issues and to bring an important psychosocial dimension into mainstream mental health care, there are unintended and unhelpful consequences of the IAPT agenda that need to be considered.

Firstly, as with many policy initiatives, there is the gap between the political rhetoric and the clinical reality. IAPT has, from the beginning, made ambitious claims about its effectiveness, offering treatment to 900,000 patients annually and achieving a 50% recovery rate for depression, anxiety and related ‘common mental health problems.’ (Clark, 2011). It should be borne in mind that this level of treatment was necessitated in order to produce the ‘zero net cost’ of IAPT at the policy development stage through promising savings in physical health care, numbers of welfare claimants and a reduction in mental health related sick days (Clark, 2011).

Whilst IAPT has lived up to some of these promises, with undoubted successes in treatment effectiveness (recovery figures approaching 50% for those who complete treatment, NHS Digital, 2017), there is some cause for concern around evidence that the ‘IAPT effect’ has not been entirely benign. For example, in IAPT monthly data summary for December 2017 NHS data (NHS Digital, 2017), 89,485 new referrals were received with 68,205 referrals entering treatment. Of this number 39,834 completed a course of treatment with 37,238 starting treatment with ‘caseness’ (the referral has enough symptoms to be regarded as clinical) and 49.9% of those approaching recovery by the end of treatment. This data means that only 22.25% of the total referrals were considered to be getting better. That means that in this month over 72% of referrals coming for help either did not receive any treatment at all, did not receive a full course of treatment or did not get better by IAPT’s own metrics.

If we are more conservative and remove those who did not start treatment from our calculation, the situation improves slightly to 27% of referrals achieving recovery and even if we assume that one referral does not equate to one person, that is still a lot of disappointed, disillusioned and worried people. Given that these dropout figures are roughly comparable with other time points in IAPT’s history (McInnes, 2014) this is cause for concern. Add to this the study which found that, for a sample who did achieve recovery using low intensity forms of IAPT, 53% relapsed within one year (Ali et al., 2017) and we can see that the more carefully we look at the psychological evidence base for IAPT policy the more questions arise.

Given widespread reductions to specialist mental health services, it seems likely that IAPT, a brief, often self-directed form of psychological treatment for mild to moderate mental health issues is being expected to plug the gap left by the reduction in longer term community services. As Watts (2016) points out, in IAPT’s own analysis dropout figures are not seen as a criticism of the initiative itself as they are not counted in recovery figures. Furthermore, in trying to bring people into services in order to meet policy-based treatment targets we may inadvertently be creating need in people and then not meeting it, a particularly ‘perverse form of care’ likely to have a detrimental impact on the mental health of many left behind by the IAPT revolution.

Additional evidence demonstrates that the number of people not recovering through IAPT approved CBT are disproportionately from poorer communities (Delgadillo et al., 2015). In the 2015–2016 IAPT annual report (NHS Digital, 2016), 55% of referrals in the least deprived 10% of areas achieved recovery, while in the most deprived 10% of areas only 35% achieved recovery. This can be likened to an extension of the ‘inverse care law’ whereby not only do people in more disadvantaged communities struggle more to get access to healthcare, when they do it is less effective in helping their mental health. To what extent this is a failing of IAPT per se and how much it reflects the causes of common mental health issues in disadvantaged communities being less psychological and more social (Speed and Taggart, 2012) is beyond the scope of this paper but is also worth considering when assessing psychology’s readiness to take an intrapsychic perspective to the exclusion of other models.

From this we can begin to see that laudable claims of treating common mental health issues for hundreds of thousands of people annually involves unintended consequences of alienating many from trusting services and repeating the pattern of further marginalising those already structurally disadvantaged. Indeed, it is our argument that it is the very ‘policy ready’ nature of IAPT that has precipitated many of these unintended consequences. Its grand, possibly hubristic vision of ‘curing’ common mental health issues and reducing the economic burden of the ‘mentally ill’ undoubtedly plays well in a political era that privileges an instrumentalist, Fordist and market-oriented approach to public services, but the scale of the claims needed to provide a ‘bottom line’ appeal was always likely to demand much and leave many staff and patients behind. This is largely because Benthamite cost-benefit analyses are not a sufficient basis upon which to predicate the policy and practice of mental health care because they ignore the processes and contexts of mental health, out there, in the world.

Another source of ‘policy-ready’ psychological research comes from the personality researcher Adam Perkins in his work ‘The Welfare Trait’ (2015). Perkins makes a case for a fundamental restructuring of the UK’s social security system based on an assertion that overly generous welfare provision for out-of-work parents results in the proliferation of what he describes as an ‘Employment Resistant Personality’ profile that leads to welfare claimants having more children and negatively impacting national productivity. Whilst questions have been raised about whether this work is of merit as psychological science, the issue we want to address here is whether psychology has a sufficiently elaborated sense of its own evidence base to seek to influence a key national area of public policy as social security?

To take the Welfare Trait example, even Perkins notes the perverse logic in cutting welfare payments for families with multiple children, thereby depriving already disadvantaged children of resources. On this point there is a consensus of agreement. However, Perkins draws from a 1975 study (Tonge et al. 1975) of 33 Sheffield families to suggest that this perverse logic can be ignored as welfare claimants all “spend their welfare benefits on unnecessary purchases such as electronic gadgets and luxury chocolates, instead of using the money to improve the lives of their children.” (Perkins, 2015 p.177). This conclusion seems emotive and polemical rather than being based on any robust evidence. This conclusion on the part of Perkins is clearly not the type of psychological evidence that we want to use to determine the life chances of millions of children.

When we add a competing example of psychological evidence, an analysis which found that levels of referral for maltreatment were causally related to the income variation in low income families in the US (Cancian et al. 2010) with reductions in income leading to an increase in maltreatment cases, we can see that dramatic changes to welfare policy predicated upon any one reading of the variegated and at times contradictory psychological evidence base can be downright dangerous.

Concluding thoughts

We would like to end by urging caution on psychologists who aim to have a ‘policy impact’. Instead, we suggest a level of professional engagement in policy processes, whereby psychological evidence is used to forge alliances with common interest groups, for example, to lobby ministers for particular reforms or new initiatives. So rather than merely referencing policy implications in research papers, there is a wider social undertaking to make research evidence available to communities impacted by the social problems under investigation in order to enable them to petition for change.

In some mental health research contexts this is already happening, where alliances have been formed between psychological researchers and mental health service user groups. In the Understanding Psychosis (2014) project there is a clear move towards the dissemination of research findings regarding aetiology, symptomatology and treatment for psychoses and associated mental health issues that largely draws upon preexisting psychological research alongside first person testimonies and activism from people with that diagnosis. Therefore, in this case the research acts as a form of social activism in which evidence is used explicitly in the interests of those it purports to be about, allowing them a very real, present and active voice in talk about their mental health. However, there is a bind that comes with this ‘bottom up’ approach to influence, and it is one that resonates with the naivety (even futility) of claims regarding the policy relevance of psychological research that we critiqued in this paper. Unless there is a vested interest policy actor working with the psychological researchers, the chance of any research gaining any degree of purchase in the field is at best limited.

The great achievement of the IAPT agenda was the alignment of psychological evidence, the vested interests of the profession of clinical psychology wanting to expand its sphere of influence in an historically medically dominated field with the social democratic, utilitarian ethos of a key policy influencer in Richard Layard. This placed psychology and its practitioners in a position of previously unimaginable influence within mental health service development in the NHS, with the opportunity for future mission creep into other areas. But, to return to Cairney (2018), this was largely because the psychological evidence corresponded to the policy reality (i.e., for IAPT, psychology was able to present policy-based evidence, rather than IAPT being drawn from evidence-based policy). As such, its policy influence had material, resource-based advantages in a way that alignment with the service user movement will not, because, it is much more unlikely that service user based evidence is going to align so comfortably with the evidentiary needs of policy makers, psychiatrists or psychologists (and once we move to consider the psy-professions, we are already one degree removed from influencing policy processes). So, in order for psychological evidence to be utilised by community stakeholders, it may have to risk being ‘sidelined’ in other forums. This question is as much ethical as scientific and will confront psychologists trying to influence policy with the challenging question, what are the implications of this evidence not for policy but for the communities many of us are paid to serve? As we have argued in this paper, this question is less straightforward than we might like to think.

Received: 30 April 2018 Accepted: 3 August 2018

References

Ali S, Rhodes L, Moreea O, McMillan D, Gilbody S, Leach C, Lucock M, Lutz W, Delgadillo J (2017) How durable is the effect of low intensity CBT for depression and anxiety? Remission rates and relapse in a longitudinal cohort study. Behav Res Ther 94:1–8
BPS (2014) Understanding psychosis and schizophrenia. https://www1.bps.org.uk/system/files/Public%20files/rep03_understanding_psychosis.pdf Accessed 24 Mar 2018
Cairney P (2016) The politics of evidence-based policy making. Palgrave, London
Cairney P (2018) The UK government’s imaginative use of evidence to make policy. Br Polit. https://doi.org/10.1057/s41293-017-0068-2
Cancian M, Slack K, Yang MY (2010) The effect of family income on risk of child maltreatment. Discussion paper no. 1385-10. Institute for Research on Poverty
Clark DM (2011) Implementing NICE guidelines for the psychological treatment of depression and anxiety disorders: The IAPT experience. Int Rev Psychiatry 23(4):318–327. https://doi.org/10.3109/09540261.2011.606803
Davies W (2015) The happiness industry: how the government and big business sold us wellbeing. Verso, UK
Delgadillo J, Asaria M, Ali S, Gilbody S (2015) On poverty, politics and psychology: the socioeconomic gradient of mental healthcare utilisation and outcomes. Br J Psychiatry 1–2. https://doi.org/10.1192/bjp.bp.115.171017
Farsides T, Sparks P (2015) Buried in bullshit. Psychologist 29(5):368
Fishbeyn B (2015) When ideology trumps evidence: a case for evidence based health policies. Am J Bioeth 15(3):1–2
McInnes B (2014) The researcher, and so, again, to IAPT. Therapy Today 25(10):18–24
NHS Digital (2016) Psychological therapies: Annual report on the use of IAPT services. Health and Social Care Information Centre
NHS Digital (2017) Improving access to psychological therapies (IAPT): Executive Summary December 2017. Health and Social Care Information Centre
New York Times (2017) England’s Mental Health Experiment: No-Cost Talk Therapy. https://www.nytimes.com/2017/07/24/health/england-mental-health-treatment-therapy.html Accessed 24 Mar 2018
Perkins A (2015) The Welfare Trait: How state benefits affect personality. Palgrave Macmillan
Prinja S (2010) Role of ideas and ideologies in evidence-based health policy. Iran J Publ Health 39(1):64–69
Rhodes R (2015) Replication: latest twists. Psychologist 29(5):334
Speed E, Taggart D (2012) It’s your problem but you need us to help you fix it: the paradox at the heart of the IAPT agenda. Asylum Mag Democr Psychiatry 19(3):23–24
Tonge WL, James DS, Hillam SM (1975) Families without hope: a controlled study of 33 problem families. Ashford, Headley
Watts J (2016) IAPT and the ideal image. In: John Lees (Ed.) The future of psychological therapy: from managed care to transformational practice. Routledge, UK

Dr Mike Scott