The primary aim of the study was to assess whether providing clinicians and supervisors with regular feedback about patients’ progress would improve clinical outcomes. The researchers were also interested in whether different types of feedback would affect clinicians’ perceptions of their patients’ progress.
The study was conducted in an NHS clinic providing psychological services to patients with a range of psychological and physical health difficulties. It included 125 patients and eight therapist-supervisor dyads (16 dyads withdrew during the study). The authors reported that the study employed a stepped-wedge, cluster-randomised controlled trial design, with dyads randomly allocated to one of three time periods (wedges).
The Clinical Outcomes in Routine Evaluation (CORE) system comprises tools for patients to self-report psychological distress and for clinicians to record demographic and clinical data; it was the primary outcome measure used in the study. Additionally, at the end of every session, therapists rated their patients’ severity and progress using the Clinical Global Impression scale.
Two conditions were compared. Clinicians in the first condition received monthly feedback by email about their patients’ progress, based on session-by-session CORE scores. In the second condition, both clinicians and supervisors received this feedback, together with alerts that were triggered when patients’ CORE scores failed to improve or worsened. Clinicians were instructed to discuss patients with alerts in supervision.
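As a purely hypothetical sketch of how such an alert might be derived from session-by-session scores (the threshold and logic below are invented for illustration and are not the study’s actual criteria), a simple rule could be expressed as follows:

    # Hypothetical illustration only; not the study's alert criterion.
    from typing import List

    def alert_triggered(core_scores: List[float], min_improvement: float = 0.5) -> bool:
        """Trigger an alert when a patient's CORE score has not improved
        (or has worsened) relative to their first session; lower CORE
        scores indicate less distress. The 0.5-point threshold is arbitrary."""
        if len(core_scores) < 2:
            return False  # too few sessions to judge progress
        return (core_scores[0] - core_scores[-1]) < min_improvement

    # Example: distress has barely changed across sessions, so an alert fires.
    print(alert_triggered([18.0, 18.5, 17.9]))  # True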
Outcomes were compared across three time points (baseline, end of therapy, and six months from the start of therapy) using repeated-measures regression models. There was no significant difference in clinical outcomes between the conditions, nor a between-condition difference in the progress of patients who met the criteria to trigger an alert. During therapy, clinicians rated patients in the first condition as less severe and as making better progress. On average, these patients also received significantly more therapy sessions than patients in the second condition.
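To make the analytic approach concrete, the following is a minimal sketch (not the authors’ actual analysis) of a repeated-measures comparison of CORE scores across time points and conditions, fitted as a linear mixed-effects model with a random intercept per patient; the column names (core_score, time, condition, patient_id) and data file are hypothetical:

    # Illustrative sketch only; variable names and data are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Long-format data: one row per patient per time point.
    df = pd.read_csv("core_scores_long.csv")

    # Fixed effects: time point, condition, and their interaction;
    # a random intercept groups repeated observations within each patient.
    model = smf.mixedlm("core_score ~ C(time) * C(condition)",
                        data=df,
                        groups=df["patient_id"])
    result = model.fit()
    print(result.summary())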
The authors expressed caution about interpreting the results because of unexpected difficulties encountered during the study, such as high attrition. Caution is prudent, as between-condition differences at baseline are likely to have affected participants’ progress through therapy, introducing systematic bias and limiting the validity of the comparisons. Although the authors briefly acknowledged that more participants in the first condition self-harmed and had addictions and eating disorders, they did not discuss the potential impact of this, nor did they discuss other between-condition differences at baseline (e.g. self-esteem, physical health, relationships). Furthermore, feedback on progress was only provided to participants in the first condition if they requested it, whereas it was provided to all participants in the second condition. Plausibly, this feedback could have affected ongoing progress, for better or worse; for example, participants’ motivation might have decreased if they were informed about their lack of progress.
To increase ecological validity, more studies are needed in clinical settings. The present study was useful in highlighting to future researchers some of the pitfalls they are likely to encounter, hopefully paving the way for more tightly controlled research, with high external validity, in this important area.