IAPT Fails To Rebut Charge Of a Tip Of The Iceberg Rate Of Recovery

In the March issue of the British Journal of Clinical Psychology, three academics admit their links to the Improving Access to Psychological Therapies (IAPT) service, having failed to do so on an earlier occasion. Their attempted rebuttal of my paper ‘Ensuring IAPT Does What It Says On The Tin’, published in the same issue of the Journal, is a Donald Trump-like exposé. The British Government is looking at the matter of making NHS England accountable; to date the latter has allowed IAPT to mark its own homework, with no involvement of the Care Quality Commission. Having spent over £4 billion on IAPT, the time for change is long overdue. Below is my response to Kellett et al (2021).

Practice-based evidence has been termed one leg of a three-legged stool comprising best research evidence, the clinician’s expertise and patient preferences [Spring (2007)]. Wakefield et al (2021) published a systematic review and meta-analysis of 10 years of practice-based evidence generated by the Improving Access to Psychological Therapies (IAPT) services, which is clearly pertinent to the research evidence leg of this stool. In response to this paper I wrote a critical commentary, ‘Ensuring IAPT does what it says on the tin’ [Scott (2021)]. In turn, Kellett et al (2021) have responded with their own commentary, ‘The costs and benefits of practice-based evidence: Correcting some misunderstandings about the 10-year meta-analysis of IAPT studies’, accepting some of my points and dismissing others. Their rebuttal exposes an even greater depth of conflicts of interest in IAPT than originally thought. The evidence supplied by Wakefield et al (2021) renders the research evidence leg of the stool unstable, and it collapses under the weight of IAPT.


Transparency and Independent Evaluation


Kellett et al (2021), in their rebuttal, head their first paragraph ‘The need for transparency and independent evaluation of psychological services’. Yet these authors claimed no conflict of interest in their original paper, despite the corresponding author’s role as an IAPT Programme Director. In their rebuttal Kellett et al (2021) concede, ‘Three of us are educators, clinicians and/or clinical supervisors whose work directly or partially focuses on IAPT services’. This stokes rather than allays fears that publication bias may be an issue.

There has been a deafening silence from Kellett et al (2021) on the fact that in none of the IAPT studies has there been an independent evaluator using a standardised semi-structured diagnostic interview to assess diagnostic status at the beginning of treatment, at the end of treatment and at follow-up. It has to be determined that any recovery is not just a flash in the pan. Loss of diagnostic status is a minimum condition for determining whether a client is back to their old self (or best functioning) post-treatment. Studies that have allowed reliable determination of diagnostic status have formed the basis for the NICE-recommended treatments for depression and the anxiety disorders. As such they speak much more to the real world of the client than IAPT’s metric of single-point assessments on psychometric tests completed in a diagnostic vacuum.


The Dissolution of Evidence-Based Practice

The research evidence leg of IAPT’s evidence-based practice stool is clearly flawed. Kellett et al (2021) seek to put a ‘wedge’ under this leg by asserting that the randomised controlled trials are in any case of doubtful clinical value because their focus is on carefully selected clients, i.e., that they have poor external validity. But they provide no evidence of this. Contrary to their belief, randomised controlled trials (RCTs) do admit clients with comorbidity. A study by Stirman et al (2005) showed that the needs of 80% of clients could be accommodated by reference to a set of 100 RCTs. Further, Stirman et al (2005) found that clients in routine practice were no more complex than those in the RCTs. Kellett et al (2021) cannot have it both ways: on the one hand praising IAPT for attempting to observe National Institute for Health and Care Excellence (NICE) guidance, and on the other pulling the rug from under the RCTs which are the basis for the guidelines. Their own account of what constitutes research evidence leads to the collapse of the evidence-based practice stool. It provides a justification for IAPT clinicians to continue to base their clinical judgements on their expertise, ignoring what has traditionally been taken to be research evidence, so that treatments are not based on reliable diagnoses. The shortcomings of basing treatment on ‘expertise’ have been detailed by Stewart, Chambless & Stirman (2018), who observe: ‘The importance of an accurate diagnosis is an implicit prerequisite of engaging in EBP, in which treatments are largely organized by specific disorders’.

‘Let IAPT Mark Its Own Homework, Don’t Put It to The Test’


Kellett et al (2021) claim that it would be too expensive to have a high quality, ‘gold standard’ effectiveness study with independent blind assessors using a standardised semi-structured diagnostic interview. But set against the £4 billion already spent on the service over the last decade, the cost would be trivial. It is perfectly feasible to take a representative sample of IAPT clients and conduct independent blind assessments of outcome that mirror the initial assessment. Indeed, the first steps in this direction have already been taken in an evaluation of internet CBT [Richards et al (2020)], in which IAPT Psychological Wellbeing Practitioners used the MINI [Sheehan et al (1998)] semi-structured interview to evaluate outcome, albeit that they were not independent evaluators and there could be no certainty that they had not used the interview as a symptom checklist rather than in the way it is intended. Further, the authors of Richards et al (2020) were employees of the owners of the software package or worked for IAPT. Tolin et al (2015) have pointed out that for a treatment to be regarded as evidence-supported there must be at least two studies demonstrating effectiveness in real-world settings by researchers not involved in the original development and evaluation of the protocol and without allegiance to the protocol. Kellett et al (2021) have failed to explain why IAPT should not be subject to independent rigorous scrutiny, and their claim that their own work should suffice is difficult to understand.


The Misuse of Effect Size and Intention to Treat

Kellett et al (2021) rightly caution that comparing effect sizes (the post-test mean subtracted from the pre-test mean, divided by the pooled standard deviation) across studies is a hazardous endeavour. But they fail to acknowledge my central point: the IAPT effect sizes are no better than those found in studies that pre-date the establishment of IAPT, that is, they do not demonstrate an added value. Kellett et al (2021) rightly draw attention to the importance of intention-to-treat analysis and attempt to rescue the IAPT studies on the basis that many performed such an analysis. Whilst an intention-to-treat analysis is appropriate in a randomised controlled trial in which less than a fifth of those in the different treatment arms default, it makes no sense in the IAPT context, in which 40% of clients are non-starters (i.e., complete only the assessment) and 42% drop out after only one treatment session [Davis et al (2020)]. In this context it is not surprising that Delgadillo et al (2020) failed to demonstrate any significant association between treatment competence measures and clinical outcomes, a point in fairness acknowledged by the latter author. But such a finding was predictable from the Competence Engine [Scott (2017)], which posits a reciprocal interaction between diagnosis-specific, stage-specific and generic competences.
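For readers unfamiliar with the metric at issue, the uncontrolled pre-post effect size described above can be sketched in a few lines of Python. The scores below are hypothetical, purely for illustration; they are not IAPT data, and real comparisons across studies face exactly the hazards noted above:

```python
from math import sqrt
from statistics import mean, stdev

def pre_post_effect_size(pre, post):
    """Uncontrolled pre-post effect size:
    (pre-treatment mean - post-treatment mean) / pooled standard deviation."""
    pooled_sd = sqrt((stdev(pre) ** 2 + stdev(post) ** 2) / 2)
    return (mean(pre) - mean(post)) / pooled_sd

# Hypothetical questionnaire scores for six completers (not IAPT data)
pre = [18, 15, 20, 17, 16, 19]
post = [10, 9, 14, 12, 8, 11]
print(round(pre_post_effect_size(pre, post), 2))  # → 3.38
```

Note that such a figure says nothing about non-starters or dropouts: only clients with both a pre and a post score enter the calculation, which is precisely why the metric flatters a service with heavy attrition.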


Kellett et al (2021) Get Deeper in The Mud Attacking Scott (2018)


Kellett et al (2021) rightly underline my own comment that my study of 90 IAPT clients [Scott (2018)] was hardly definitive, as all had gone through litigation. But they omit to mention that I was wholly independent in assessing them: my duty was solely to the Court as an Expert Witness. Despite this they make the extraordinary claim that my study had a ‘high risk of bias’, which casts serious doubts on their measuring instruments. They failed to understand that in assessing a litigant one is of necessity assessing current and past functioning. In my study I used the current and lifetime versions of a standardised semi-structured interview, the SCID [First et al (1996)]. This made it possible to assess the impact of IAPT interventions whether delivered before or after the trauma that led to the claim. Whatever the timing of the IAPT intervention, the overall picture was that only the tip of the iceberg (9.2%) lost their diagnostic status as a result of these ministrations. Nevertheless, as I suggested, there is a clear need for a further publicly funded study of the effectiveness of IAPT with a representative sample of its clients.




Davis, A., Smith, T., Talbot, J., Eldridge, C., & Bretts, D. (2020). Predicting patient engagement in IAPT services: a statistical analysis of electronic health records. Evidence Based Mental Health, 23, 8-14. doi:10.1136/ebmental-2019-300133.

Delgadillo, J., Branson, A., Kellett, S., Myles-Hooton, P., Hardy, G. E., & Shafran, R. (2020). Therapist personality traits as predictors of psychological treatment outcomes. Psychotherapy Research, 30(7), 857–870. https://doi.org/10.1080/10503307.2020.1731927.

First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. W. (1996). Structured clinical interview for DSM-IV axis I disorders, clinician version (SCID-CV). Washington, DC: American Psychiatric Press.

Kellett, S., Wakefield, S., Simmonds‐Buckley, M. and Delgadillo, J. (2021), The costs and benefits of practice‐based evidence: Correcting some misunderstandings about the 10‐year meta‐analysis of IAPT studies. British Journal of Clinical Psychology, 60: 42-47. https://doi.org/10.1111/bjc.12268


Richards, D., Enrique, A., Ellert, N., Franklin, M., Palacios, J., Duffy, D., Earley, C., Chapman, J., Jell, G., Siollesse, S., & Timulak, L. (2020). A pragmatic randomized waitlist-controlled effectiveness and cost-effectiveness trial of digital interventions for depression and anxiety. npj Digital Medicine, 3, 85. https://doi.org/10.1038/s41746-020-0293-8.

Scott, M.J. (2017). Towards a Mental Health System That Works. London: Routledge.

Scott, M.J. (2018). Improving access to psychological therapies (IAPT) – the need for radical reform. Journal of Health Psychology, 23, 1136-1147. https://doi.org/10.1177/1359105318755264.

Scott, M.J. (2021), Ensuring that the Improving Access to Psychological Therapies (IAPT) programme does what it says on the tin. British Journal of Clinical Psychology, 60: 38-41. https://doi.org/10.1111/bjc.12264


Sheehan, D. V., et al. (1998). The Mini-International Neuropsychiatric Interview (M.I.N.I.): the development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. Journal of Clinical Psychiatry, 59(Suppl 2), 22–33.

Spring, B. (2007). Evidence-based practice in clinical psychology: what it is, why it matters; what you need to know. Journal of Clinical Psychology, 63(7), 611–631. https://doi.org/10.1002/jclp.20373.

Stewart, R.R., Chambless, D.L., & Stirman, S.W. (2018). Decision making and the use of evidence based practice: Is the three-legged stool balanced? Practice Innovations, 3(1), 56–67. doi:10.1037/pri0000063.

Stirman, S. W., DeRubeis, R. J., Crits-Christoph, P., & Rothman, A. (2005). Can the Randomized Controlled Trial Literature Generalize to Nonrandomized Patients? Journal of Consulting and Clinical Psychology, 73(1), 127–135. https://doi.org/10.1037/0022-006X.73.1.127


Tolin, D. F., McKay, D., Forman, E. M., Klonsky, E. D., & Thombs, B. D. (2015). Empirically supported treatment: Recommendations for a new model. Clinical Psychology: Science and Practice, 22(4), 317–338. https://doi.org/10.1111/cpsp.12122



Wakefield, S., Kellett, S., Simmonds‐Buckley, M., Stockton, D., Bradbury, A. and Delgadillo, J. (2021), Improving Access to Psychological Therapies (IAPT) in the United Kingdom: A systematic review and meta‐analysis of 10‐years of practice‐based evidence. British Journal of Clinical Psychology, 60: 1-37 e12259. https://doi.org/10.1111/bjc.12259



IAPT and The Collapse of Its Practice-Based Evidence

The Improving Access to Psychological Therapies (IAPT) service has failed to deliver practice-based evidence [Wakefield et al (2021) https://doi.org/10.1111/bjc.12259; Scott (2021) https://doi.org/10.1111/bjc.12264]. In an attempted rebuttal of my commentary [Kellett et al (2021) https://doi.org/10.1111/bjc.12268] in the forthcoming British Journal of Clinical Psychology, IAPT fellow-travellers dig an even deeper hole, exposing even more conflicts of interest.



Under UK Government pressure, NHS England is to reconfigure its relationship with social care. It would be timely if the Government also insisted that NHS England put its house in order with regard to the provision of routine mental health services. As a first step it should insist that NHS England staff cannot be employed by an agency, such as IAPT, that they are responsible for auditing.

Further, in 2016 the National Audit Office asked that IAPT be made responsible to an independent body, the Care Quality Commission, but the Service has instead been allowed to continue to mark its own homework.


Dr Mike Scott

A Conflict of Interest Between NHS England and IAPT


The Improving Access to Psychological Therapies (IAPT) pantomime is likely to continue, with Dr Adrian Whittington, National Lead for Psychological Professions at NHS England and IAPT National Clinical Adviser, about to chair a conference for IAPT staff with the leading light of IAPT, Professor David Clark. IAPT aficionados seem inherently incapable of understanding what constitutes a conflict of interest; see the forthcoming issue of the British Journal of Clinical Psychology, ‘Ensuring IAPT Does What It Says On The Tin’, https://doi.org/10.1111/bjc.12264.

The Information Standard Guide: ‘Finding the Evidence’, a key step in the information production process (Caroline De Brún, November 2013)

NHS England should reflect on its own document, ‘Finding the Evidence’, published in 2013, in which clinicians are asked to seek the ‘best research evidence’ by looking at how an intended treatment has fared compared to a credible alternative. Taking the IAPT service as the intended treatment, there has never been a comparison with a credible alternative. IAPT cannot be considered a repository of ‘best evidence’.

The power holders wish to believe their fairy tale: ‘we are committed to mental health, we have shown this in supporting our world-beating IAPT service, as far as possible we will fund expansion of the service, we have broken new ground’, and, in small print, ‘it is not politically correct to say otherwise and we are too busy with the pandemic/physical health to critically analyse IAPT’s data’. But this is a dangerous story, offering no protection for the mental health sufferer. It is time that sufferers are seen as ‘vulnerable’ people and offered societal protection.

IAPT therapists do not ask the client, at the end of treatment, whether they are back to their old selves again. Outcome is determined by the Genie that arises out of the psychometric test lamp that IAPT polishes incessantly.

The Genie could be pressed: ‘how does low intensity CBT work?’ A coughing and spluttering might ensue. It is known that CBT works for depression and the anxiety disorders, using the specific cognitive model for each disorder. But there is no evidence that simply describing the reciprocal interactions of cognition, emotion, behaviour and physiology, then targeting one or more of them, amounts to an evidence-supported treatment. It is a fundamentalist translation of the treatments conducted in the randomised controlled trials for depression and the anxiety disorders. It is a translation born of the exigencies of the situation, such as the vast monies available for treatment, but it is akin to using a religious belief system for political purposes.

The CBT protocol for panic disorder is entirely dependent on David Clark’s model of catastrophic misinterpretation of bodily sensations perpetuating the symptoms of panic [Clark (2020) https://doi.org/10.1007/s10608-020-10141-0]. None of the procedures in the protocol would make sense without reference to his model.

A cognitive model of a disorder is the nucleus around which all the procedures of a protocol orbit. Beck enshrined this in his theory of cognitive content specificity: disorders are distinguished by their different cognitive content and cognitive profiles; see Baranoff, J., & Oei, T. P. S. (2015). The cognitive content-specificity hypothesis: Contributions to diagnosis and assessment. In G. P. Brown & D. A. Clark (Eds.), Assessment in cognitive therapy (pp. 172–196). The Guilford Press; and Eysenck and Fajkowska (2018) https://doi.org/10.1080/02699931.2017.1330255.

But the procedures in low intensity CBT have no nucleus. For example, the strategies in the Williams et al (2018) Living Life to The Full classes (doi: 10.1192/bjp.2017.18), ‘covering key CBT topics such as altered thinking, behavioural activation, problem-solving and relapse prevention’, are not derived from any specific cognitive model of disorder: they are the equivalent of displaced electrons, orbiting atoms with no credible name, and their targets are ill-defined. In the Williams et al (2018) study the target is ‘low mood and stress’, and the latter has no specific cognitive content or cognitive profile. If it is not known how a psychological therapy achieves its goal, then the therapy itself cannot be considered evidence-supported; there has to be a plausible scientific explanation of the mechanism of change. The low intensity CBT protocols represent an ad hoc usage of CBT techniques, and it is impossible to distil the mechanism of change, if any, in such a collage. In this respect the low intensity interventions are found wanting. They are poor translations of the protocols in the ‘gold standard’ randomised controlled trials, advocated in a fundamentalist way by IAPT, driven more by perceived economy than by any considered view of effectiveness.

Dr Mike Scott


IAPT’s Black Hole – Accountability

I recently asked the National Audit Office to restart its investigation into IAPT. I am expecting their reply in the next week or two. There has been no independent scrutiny of IAPT. The Service has been answerable only to Clinical Commissioning Groups, which have consisted largely of GPs and have allowed IAPT to mark its own homework.

But the accountability gap also extends downwards: where is the evidence that front line staff or clients have been consulted or involved in decision making? Most recently IAPT has offered webinars for its staff on helping those with long-term COVID. There is a tacit assumption that this will be within the expertise of IAPT therapists, just as with helping those with long-term physical conditions (LTCs) such as irritable bowel syndrome. But the IAPT staff working with LTCs were never consulted before this new foray. Clients with LTCs were never asked whether they were back to their old selves (or best functioning) before this proposed further extension of IAPT’s empire.

In the forthcoming issue of the British Journal of Clinical Psychology I have challenged IAPT’s account of its ‘performance’; see ‘Ensuring IAPT Does What It Says On The Tin’ https://doi.org/10.1111/bjc.12264. There is a reply in rebuttal, ‘The costs and benefits of practice-based evidence: correcting some misunderstandings about the 10-year meta-analysis of IAPT studies’ https://doi.org/10.1111/bjc.12268, that reveals a breathtaking level of conflict of interest. IAPT and its fellow travellers should be held to account, but importantly they also need to account to their therapists and clients. [The original IAPT paper is available at https://doi.org/10.1111/bjc.12259]


Dr Mike Scott

National Institute for Health Protection to Control IAPT?

In a blog written just before the demise of Public Health England I noted the ‘Breathtaking Naivety of Public Health England On Mental Health’, https://wp.me/p8O4Fm-2HI. My hope is that its replacement, the National Institute for Health Protection (NIHP), will question why £4 billion of taxpayers’ money has been spent on the Improving Access to Psychological Therapies (IAPT) programme without any publicly funded independent evaluation of the service. My own independent finding was that only 10% of those going through the IAPT service recover and that the public are very dissatisfied: https://connection.sagepub.com/blog/psychology/2018/02/07/on-sage-insight-improving-access-to-psychological-therapies-iapt-the-need-for-radical-reform/. By contrast IAPT claims a 50% recovery rate, but my just-published paper in the British Journal of Clinical Psychology, https://onlinelibrary.wiley.com/doi/10.1111/bjc.12264#.XzwEMhZvXuk.email, casts serious doubts on the Service’s claim.

I have written to Baroness Harding of Winscombe, Dido Harding, the head of NIHP, to clarify whether the NIHP is indeed going to be the monitor of IAPT’s performance and, if not, who is. I have also stressed that no agency, including IAPT, should be allowed to mark its own homework. It is imperative that the metric for gauging the effectiveness of a service is one that the general public would recognise as meaningful, such as being independently assessed as no longer suffering from the disorder that they first presented with, as opposed to a surrogate measure, such as a change of score on a psychometric test completed in the presence of the therapist.

As MPs resume sitting in Parliament it is critical to ask who will now be in charge of ensuring IAPT does what it says on the tin and how will this QUANGO be made accountable?

Dr Mike Scott

British Journal of Clinical Psychology Commentary and Rebuttal Of IAPT Paper

The Journal yesterday published my critique, ‘Ensuring IAPT Does What It Says On The Tin’ https://onlinelibrary.wiley.com/doi/10.1111/bjc.12264#.XzwEMhZvXuk.email, of the recent IAPT (Improving Access to Psychological Therapies) paper by Wakefield et al (2020).

£4bn has been spent on IAPT without publicly funded independent audit. This is a scandal when the best evidence is that only 10% of those using the service recover. There is no evidence that the Service makes a real-world difference to clients’ lives, returning them to their old selves, no longer suffering for a significant period from the disorder that they first presented with. The claimed 50% recovery rate by the service is absurd. Not only has the now defunct Public Health England mishandled the pandemic, it has had a matching performance on mental health. It is too early to judge whether the newly formed Health Protection Board will grasp the nettle of mental health, but I doubt that it will until there is open professional discussion of the fact that the present IAPT service is not fit for purpose. It will likely need the involvement of politicians to ensure radical reform of IAPT and that mental health is not again kicked into the long grass.

Dr Mike Scott


‘Ensuring IAPT Does What It Says On The Tin’

This is my critique of the IAPT paper published in the current issue of the British Journal of Clinical Psychology, and the Editor has just accepted it for publication. Wakefield et al (2020) will be invited to respond.

Not quite sure when it will see the light of day, but hopefully it is at least the beginnings of open discussion. 

An area I’ve not touched on in my paper is the effect of IAPT on its staff. Some are taking legal action against IAPT for bullying and have highlighted massive staff turnover. But it is very difficult for them to go into detail with litigation pending. Others are suffering in silence until they are financially secure enough to leave. Staff are in an invidious position; at best they might hope for an out-of-court settlement. But unsurprisingly there is no great organisational demand for whistleblowers. Gagging clauses, it appears, are still about, and I heard of one being used recently by an employer against a victim of the Manchester Arena bombing.

We need a national independent inquiry, not only into the speed with which lockdown was imposed, but also into what has been happening in IAPT. But today I was talking with a survivor of the 1989 Hillsborough football disaster, whom I’ve kept in touch with since shortly afterwards, and we reflected on how long it has taken to get anywhere. He was too exhausted to follow through on the statement he gave that was doctored by the police.

Bullying tends to centre on what the organisations contend are ‘one or two bad apples’, which at a push they might make some compensation for, to avoid adverse publicity and without admitting liability. But I think there is a bigger phenomenon of organisational abuse that operates in an insidious way, akin to racism, and that needs to be called out.

Dr Mike Scott


British Journal of Clinical Psychology Responds To IAPT’s Conflict of Interest

Last week I wrote to Professor Grisham, the Editor of the Journal, complaining, inter alia, of IAPT’s failure to declare a conflict of interest over the paper by Wakefield et al (2020) in the current issue, see link https://doi.org/10.1111/bjc.12259. The Journal has responded by formally inviting me to write a commentary, which, subject to peer review, will appear alongside a response by the said authors. The text of my letter was as follows:

Dear Professor Grisham

Re: Improving Access to Psychological Therapies (IAPT) in the United Kingdom: A systematic review and meta-analysis of 10-years of practice-based evidence by Wakefield et al (2020) https://doi.org/10.1111/bjc.12259

In this paper all the authors declare ‘no conflict of interest’. But the corresponding author of the study, Stephen Kellett, is an IAPT Programme Director. This represents a clear conflict of interest that I believe you should alert your readers to. The study is open to a charge of allegiance bias.

I am concerned that in their reference to my published study, ‘IAPT – The Need for Radical Reform’, Journal of Health Psychology (2018), 23, 1136-1147, https://doi.org/10.1177%2F1359105318755264, these authors have seriously misrepresented my findings. They chose to focus on a subsample of 29 clients, from the 90 IAPT clients I assessed, for whom psychometric test results were available in the GP records. I warned that concluding anything from this subsample was extremely hazardous. The bigger picture was that I independently assessed the whole sample using a ‘gold standard’ diagnostic interview and found that only the tip of the iceberg lost their diagnostic status as a result of IAPT treatment. Wakefield et al were strangely mute on this point. They similarly fail to acknowledge that their study involved no independent assessment of IAPT clients’ functioning and no use of a ‘gold standard’ diagnostic interview.

The authors of Wakefield et al (2020) compare their findings favourably with those found in randomised controlled trial efficacy studies, suggesting that IAPT’s results approach a 50% recovery rate. But there can be no certainty of matching populations. In the said study there was no reliable determination of diagnostic status; thus there is no way that this heterogeneous sample can be compared to homogeneous samples of different primary disorders, e.g., obsessive compulsive disorder, adjustment disorder etc.

It is unfortunate that the British Journal of Clinical Psychology has allowed itself to become a vehicle for the marketing of an organisation which has only ever marked its own homework. The published study also calls into question the standard of the peer review employed by the Journal.



Dr Michael J Scott

At least we are getting to open debate, which is more than can be said for BABCP’s in-house IAPT comic, CBT Today.