# Measurement + Feedback = Improved Outcomes

## An evidence-based practice

With any skilled activity, it is a truism that frequent practice combined with performance feedback results in improved performance. A recent article by Goodman, McKay, & DePhilippis (2013) summarizes the evidence that measurement and feedback in mental health and substance abuse treatment lead to better outcomes:

*Progress Monitoring in Mental Health and Addiction Treatment*

An evidence-based practice can be defined as a set of behaviors that have been empirically shown to lead to better outcomes. Measurement and feedback certainly fit the bill. The behaviors are easy to train, easy to implement, and easy to quantify:

- measurement = number of patients completing questionnaires
- feedback = number of times the clinician seeks/receives feedback.

The ACORN collaborative has accumulated a data repository spanning multiple years, which permits us to evaluate the extent to which practice and feedback correlate with improved performance. To test this relationship, we investigated individual clinicians' improvement in Severity Adjusted Effect Size during the 12-month period ending July 1, 2013 (Follow-up Period) compared to results from prior years (Baseline Period).

## Description of Sample

All individual clinicians with outcome data spanning more than one year were included in the sample, provided that they had at least two clinical range cases (any age group) with pre-post outcome scores during the past 12 months (Follow-up Period) and at least two such cases during the Baseline Period. A total of 408 clinicians met these criteria.

## Method

The mean Severity Adjusted Effect Size (SAES) was calculated using hierarchical linear modeling (HLM), as specified in the calculation of the ACORN Criteria of Effectiveness. The use of HLM has the advantage of controlling for differences in sample size.

Regression to the mean would predict that clinicians with very poor outcomes at Baseline would improve over time, while those with very good outcomes at Baseline would tend to show a decrease. To control for this, we used a General Linear Model regression to compare expected change to measured change from Baseline to Follow-up. In this way we were able to determine whether a clinician improved more or less than other clinicians with a comparable SAES during the Baseline Period.
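The residual-change logic described above can be sketched in a few lines: regress Follow-up scores on Baseline scores, and treat each clinician's residual as the change beyond what regression to the mean would predict. All data below are simulated for illustration only; the variable names and values are not from the ACORN dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clinicians = 408

# Hypothetical per-clinician Severity Adjusted Effect Sizes.
baseline_saes = rng.normal(0.82, 0.3, n_clinicians)
followup_saes = 0.5 * baseline_saes + rng.normal(0.42, 0.25, n_clinicians)

# Regress Follow-up SAES on Baseline SAES; the fitted line gives the
# change *expected* from regression to the mean alone.
slope, intercept = np.polyfit(baseline_saes, followup_saes, 1)
expected = intercept + slope * baseline_saes

# A clinician's residual is the measured change beyond expectation:
# positive = improved more than comparable clinicians.
residual_gain = followup_saes - expected

# Residuals of a least-squares fit with an intercept average to ~0,
# so residual_gain compares each clinician to peers with similar baselines.
print(round(float(residual_gain.mean()), 6))
```

This is only one way to operationalize the control; the actual ACORN analysis may have used additional covariates within the GLM.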

We hypothesized other variables that might be predictive of changes in outcomes. These included the number of cases measured, the number of assessments per case, and the length of time between the first and final assessments in a treatment episode. We also examined the effect of changes in these variables between the Baseline and Follow-up periods to test whether observed improvement was due to changes in the number of sessions or length of treatment.

All of these variables, along with the number of times the clinician looked at data via the Toolkit, were used to predict the improvement from Baseline to Follow-up.

## Results

Overall, the average effect size for all clients seen by these clinicians remained relatively stable at .82. However, within this sample, there was wide variation in clinician gains in effectiveness from one year to another, and two variables provided a strong prediction of the magnitude of positive gain in effectiveness during the Follow-up Period.

The mean Severity Adjusted Effect Size for these clinicians (averaged at the clinician level) was 0.82 at Baseline and 0.84 during Follow-up. Within this group, the change in effect size between Baseline and Follow-up ranged from -0.63 to 0.87.

Two variables were found to correlate significantly with improvement in clinician outcomes:

- Measurement, as defined by the number of patients measured during the Follow-up Period (r=.33; p<.0001)
- Feedback, as defined by the number of Toolkit page views during the Follow-up Period (r=.32; p<.0001)

None of the other variables (number of sessions, number of weeks in treatment, number of cases in prior years, Toolkit login history in prior years, or changes in these variables) were significant predictors of improvement in outcomes in the Follow-up Period.
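Correlations like those reported above are ordinary Pearson correlations between per-clinician predictors and the change in effect size. A minimal sketch, using entirely made-up data standing in for the ACORN variables:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 408  # number of clinicians in the sample

# Hypothetical predictor: number of patients measured during Follow-up.
cases_measured = rng.poisson(20, n).astype(float)

# Hypothetical outcome: change in SAES, weakly driven by the predictor
# plus noise (the 0.01 coefficient is arbitrary, for illustration).
saes_gain = 0.01 * (cases_measured - cases_measured.mean()) \
    + rng.normal(0, 0.2, n)

# Pearson correlation between measurement activity and improvement.
r = np.corrcoef(cases_measured, saes_gain)[0, 1]
print(f"Pearson r = {r:.2f}")
```

A significance test for r (the p-values quoted above) would additionally require the t-distribution, e.g. via `scipy.stats.pearsonr`.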

## Practice Index

Interestingly, the number of cases and the number of Toolkit page views during the Follow-up Period were only weakly correlated (r=.10; p<.1), meaning that the two predictors operated largely independently of one another. Measuring a larger number of cases, regardless of Toolkit usage, was likely to result in improvement. Likewise, high Toolkit usage, even if the number of cases measured was small, was likely to result in improvement.

The finding that the two variables predicting improvement are number of cases and Toolkit usage is consistent with the hypothesis that practice frequency (number of cases measured) and feedback frequency (Toolkit usage) result in improvement.

However, the most powerful effects were observed when the two were combined: practice frequency and feedback frequency together. To measure this, we created a simple measure of the use of these evidence-based practices called the Practice Index, based on the number of cases and the number of Toolkit page views during the Follow-up Period.

The resulting correlation between the Practice Index and the increase in effect size from Baseline to Follow-up was .4 (p<.0001).

Clinicians who measured at least 25 cases and had at least 25 Toolkit page views in the Follow-up Period averaged a gain of over 20% in effect size compared to the Baseline Period.
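The 25-cases / 25-page-views threshold above amounts to selecting the subgroup high on both practice and feedback and comparing its mean gain to everyone else's. The source does not spell out the exact formula combining the two variables into the Practice Index, so this sketch only implements the threshold rule, on invented data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 408

# Hypothetical per-clinician activity and outcome-change data; the
# Poisson means and gain distribution are arbitrary choices.
cases = rng.poisson(24, n)        # patients measured in Follow-up
page_views = rng.poisson(26, n)   # Toolkit page views in Follow-up
gain = rng.normal(0.02, 0.2, n)   # change in SAES, Baseline -> Follow-up

# The subgroup meeting both evidence-based-practice thresholds.
high_practice = (cases >= 25) & (page_views >= 25)

print(f"{int(high_practice.sum())} clinicians meet both thresholds")
print(f"mean gain, high-practice group: {gain[high_practice].mean():.3f}")
print(f"mean gain, others: {gain[~high_practice].mean():.3f}")
```

With real data, the high-practice group's mean gain would be the quantity compared against the Baseline effect size to produce the 20%-gain figure.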

## Discussion

The results support the premise that routine measurement combined with feedback results in improved performance. It is encouraging to observe that consistent use of the questionnaires with as many cases as possible resulted in improvement, even in the absence of performance feedback. This implies that organizations may improve outcomes simply by implementing routine use of ACORN questionnaires, without the need to contract for Toolkit usage, data processing, and analysis.

Nevertheless, the use of the decision support tools available through the Toolkit, including direct performance feedback to clinicians, clearly augments the positive results obtained through the implementation of outcomes-informed care. Regular use of the Toolkit appears to substantially increase the gains obtained by routine measurement alone.