The Prevention of Substance Use Disorders: Part Three – Evidence-Based Practices

Written by Geoff Kane, MD, MPH.

Written with Michael Ballue, CADC II, BSBA

This series has advocated that society make disease prevention a priority—especially the prevention of substance use disorders (SUDs)—while recognizing that SUD prevention is already a priority for some providers and agencies.  Before organizers of SUD prevention services present an intervention to a target population, they want to be confident that their efforts will result in healthy change.  Likewise, before funding sources contribute to SUD prevention services, they want to be confident that their investment will produce worthwhile results.  For both these reasons, organizers prefer to deliver prevention approaches that are evidence-based—that is, to replicate programs already shown to be effective.

It is increasingly difficult for organizers to attract funding for anything other than evidence-based practices (EBPs).  Simply believing or hoping a program will work is no longer sufficient.  Moreover, once a program is implemented, continued funding may well depend on the organizer’s ability to demonstrate that the initial intervention produced the desired results.

When starting out, prevention program organizers typically have a target population in mind and look for a prevention approach that worked well in a comparable population.  Principles and guidelines offered by the Office of National Drug Control Policy can help them select a relevant EBP from candidate programs such as those cataloged in SAMHSA’s National Registry of Evidence-based Programs and Practices (NREPP), which reports on more than 280 mental health and substance abuse interventions for all age groups.  To be posted on NREPP, the evidence of a program’s effectiveness must satisfy independent experts.  Some organizers select prevention approaches from the Promising Practices Network (PPN), provided they are rated “promising” or, even better, “proven.”  There are also regional resources, such as the California Department of Education’s list of science-based programs for preventing youth drug use, violence, or disruptive behavior; that list includes links to potential funding.

Once they select an EBP relevant to their target population, organizers must weigh whether they have the capacity to deliver the practice as designed.  Failure to maintain fidelity to the original design may produce inferior results.  Some programs are proprietary, and the program budget may have to include fees to the developer for use of the practice and for the training needed to deliver it.  If the target population is not congruent with that of the original practice, an organizer may have to confer with the developer and with their own funding source to determine whether an adaptation of the EBP is feasible.

When an EBP is delivered with fidelity to an appropriate target population, the expected outcome is healthy change.  In general, however, neither program organizers nor funding sources take positive outcomes for granted; they want proof—which means outcome measurement and program evaluation must be built into the entire preventive enterprise.  One option is to contract with an independent external evaluator.  Another is to measure outcomes using in-house resources.  A third option, often overlooked, is to invite a local university to collaborate on an evaluation study, which might not only provide the program with independent evaluation but also help one or more graduate (or possibly undergraduate) students fulfill an academic research requirement.  Universities may also have access to grants not available to community agencies.  Guidelines for evaluating prevention programs appear in the RAND toolkit Getting to Outcomes and in the online course created by NREPP.
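For organizers measuring outcomes in-house, even a simple comparison of participant survey scores before and after an intervention can be a starting point.  The sketch below is a hypothetical illustration only, assuming risk scores collected pre- and post-program and an analyst with Python and SciPy available; it is not a substitute for a formal evaluation design such as those described in Getting to Outcomes.

```python
# A minimal in-house outcome check: compare participants' substance-use
# risk scores before and after a prevention program.  All names and data
# here are hypothetical placeholders, not drawn from any real program.
from statistics import mean

from scipy import stats  # SciPy's paired t-test

# Hypothetical pre/post scores for the same ten participants
# (higher score = higher assessed risk).
pre_scores = [14, 18, 11, 20, 16, 13, 17, 15, 19, 12]
post_scores = [12, 15, 11, 16, 14, 12, 15, 13, 16, 11]

# Paired t-test: did the same participants' scores change after the program?
t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)

print(f"Mean change: {mean(post_scores) - mean(pre_scores):+.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05 and mean(post_scores) < mean(pre_scores):
    print("Scores dropped significantly: evidence consistent with healthy change.")
else:
    print("No significant drop: results do not yet demonstrate change.")
```

A paired pre/post comparison without a control group cannot rule out other explanations for the change, which is one reason independent or university-partnered evaluation is worth pursuing.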

Proposed prevention programs may attract funding by replicating interventions that worked in comparable target populations.  Established prevention programs may extend their original funding or attract additional funding by demonstrating they are effective—using evidence from completed program evaluations or transforming existing outcome data into an evaluative report.  When successive evaluations show that a practice or program results in healthy change, that service may be eligible for listing in a major registry such as NREPP; may attract funding for more rigorous evaluation from sources such as SAMHSA’s Service to Science Initiative; or may generate revenue of its own because others wish to replicate and be trained in it.

The human and economic costs of SUDs are enormous.  The evidence shows we can reduce that pain and expense—by making prevention a priority.

Click for Part One or Part Two in this series.