Date Published: Dec 26, 2012
Effective prevention is a two-way street. States rely on their community-level providers to plan, implement, and evaluate evidence-based prevention efforts. But communities can’t do it on their own. Local practitioners need preparation and ongoing support to deliver effective prevention programming, and they rely on the training and technical assistance (T/TA) their states provide to do so.
But while most states carefully track the quantity of T/TA services delivered—such as number of hours and clients served—few are prepared to assess the quality of those services. What impact, if any, does the T/TA have on how prevention interventions or strategies are implemented? How, in other words, can states tell if their T/TA works?
To explore this question, the Southeast Resource Team of SAMHSA’s Center for the Application of Prevention Technologies (CAPT) facilitated the workshop Evaluating Training and Technical Assistance Strategies, designed to help participants (1) strengthen the methods they use to evaluate the state-level T/TA they provide to local grant recipients and (2) move beyond collecting quantitative data alone to assessing the impact of the T/TA delivered. Convened by Janice Petersen, Prevention Services Director for North Carolina’s Division of Mental Health, Developmental Disabilities, and Substance Abuse Services, the workshop included state prevention specialists from North Carolina, South Carolina, Virginia, and Mississippi.
Despite its importance, few states have embarked on a systematic evaluation of the T/TA they provide. One reason is that there is no one “right” approach for doing so. “Across the country, states are using a variety of methods to deliver their T/TA services,” explains Dr. Bertha Gorham, Evaluator for the CAPT’s Southeast Resource Team. “Because of this, no single evaluation approach works for everyone. One size does not fit all.”
To that end, the collaborative two-day event was designed to help states develop an evaluation approach tailored to their individual needs. The program included presentations by leading CAPT evaluation experts, featuring case examples from three states engaged in the process of assessing T/TA quality. These experts also provided one-on-one technical assistance to participants: Under their guidance, participants could apply lessons learned from the presentations to the development of their own customized logic models and evaluation plans.
“There are questions that all states need to ask before beginning an evaluation, such as: ‘Why are we doing this? What do we want to know? Who are the stakeholders?’” explains Dr. Gorham. “During the hands-on planning sessions, participants had the opportunity to pause, explore these key questions with experienced evaluators, and then start thinking about the kinds of surveys and tools they might use to capture the information they’re looking for.”
Featured evaluators included Dr. Paul Florin, Professor of Clinical Psychology at the University of Rhode Island; Dr. James G. Emshoff, Vice President and Director of Research at EMSTAR Research; and Dr. Wayne Harding, Director of Projects for Social Science Research and Evaluation, Inc. and CAPT Principal Investigator and Chief of Data & Evaluation.
Dr. Florin’s presentation focused on the relationship between quality T/TA and improved prevention outcomes. He highlighted the statewide impact of the Rhode Island Training and Technical Assistance Resource Center, which provides T/TA to communities implementing environmental prevention strategies under the state’s Strategic Prevention Framework State Incentive Grant.
Drs. Emshoff and Harding focused on specific evaluation approaches. Dr. Emshoff described the evaluation of T/TA delivered by the Georgia Family Connection Partnership, which includes ongoing, rigorous tracking of T/TA delivery hours combined with surveys of service recipients to capture changes in knowledge, attitudes, and behavior.
Dr. Harding described the 2011 implementation of Massachusetts’ Cumulative Service Assessment survey, designed to collect data about clients’ perceptions of the quality and impact of the T/TA services they received. According to Harding, the evaluation data confirmed the Commonwealth’s view that the existing T/TA system was not working as well as it might, and reinforced a decision to move from a decentralized T/TA delivery approach, comprising six regional prevention centers, to a single, centralized delivery system.
At the conclusion of the workshop, participants were asked to—naturally—evaluate the quality of the event. The results were uniformly positive. “Participants indicated that the workshop delivered valuable information that they could use right away to build the evaluation capacity of their states’ prevention staff,” says Iris Smith, Coordinator of the CAPT’s Southeast Resource Team. And what did respondents find most helpful? The workshop’s “open atmosphere” and “opportunity for dialogue.” They also valued having access to individual consultants to help them develop their state T/TA evaluation plans. One respondent summed it up: “The training was great. Looking forward to more T/TA on T/TA!”
For more information, contact:
Bertha Gorham, Evaluator, Center for the Application of Prevention Technologies, Southeast Resource Team
Wayne Harding, Principal Investigator and Chief of Data & Evaluation, Center for the Application of Prevention Technologies