Byrnes, H.F., Miller, B.A., Aalborg, A.E., Plasencia, A.V., & Keagy, C.A. (2010). Implementation fidelity in adolescent family-based prevention programs: relationship to family engagement. Health Education Research, 25(4), 531-541.
This study examines the relationship between program fidelity and family engagement (i.e., satisfaction and participation) in family-based prevention programs for adolescent alcohol, tobacco, or other drug use. Fidelity was assessed in two ways: adherence to the program manual and quality of implementation.
Cross, W.F., & West, J.C. (2011). Examining implementer fidelity: conceptualizing and measuring adherence and competence. Journal of Children's Services, 6(1), 18-33.
(Abstract) The large gap between intervention outcomes demonstrated in efficacy trials and the apparent ineffectiveness of these same programs in community settings has prompted investigators and practitioners to look closely at implementation fidelity. Critically important, but often overlooked, are the implementers who deliver evidence-based programs—the effectiveness of programs cannot surpass skill levels of the people implementing them. This article distinguishes fidelity at the programmatic level from implementer fidelity.
Dariotis, J.K., Bumbarger, B.K., Duncan, L.G., & Greenberg, M.T. (2008). How do implementation efforts relate to program adherence? Examining the role of organizational, implementer, and program factors. Journal of Community Psychology, 36(6), 744-760.
The authors identified five key factors of the implementation system (implementer, implementing organization, program, intervention recipient, and school/community context) and assessed which characteristics related to program adherence.
Domitrovich, C.E., & Greenberg, M.T. (2000). The study of implementation: current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological Consultation, 11(2), 193-221.
This article reviews implementation issues in prevention trials and specifically highlights the study of implementation in the 34 programs determined to be effective in a recent review conducted by the Prevention Research Center for the Center for Mental Health Services. The authors discuss reasons for the lack of attention to implementation and suggestions for ways to incorporate implementation measurement into prevention initiatives.
Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W.B. (2003). A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237-256.
(Abstract) To help inform drug abuse prevention research in school settings about the issues surrounding implementation, the authors conducted a review of the fidelity of implementation research literature spanning a 25-year period. Fidelity was measured in five ways: (1) adherence, (2) dose, (3) quality of program delivery, (4) participant responsiveness, and (5) program differentiation. Definitions and measures of fidelity were found not to be consistent across studies, and new definitions are proposed.
Elliott, D.S., & Mihalic, S. (2003). Issues in disseminating and replicating effective prevention programs. Prevention Science, 5(1).
(Abstract) The article provides a brief overview of findings from the Blueprints for Violence Prevention replication initiative, including the factors that enhance or impede successful implementation of these programs. Findings are organized around five implementation tasks: site selection, training, technical assistance, fidelity, and sustainability.
Fixsen, D.L., Naoom, S.F., Blase, K.A., Friedman, R.M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa, FL: Louis de la Parte Florida Mental Health Institute.
This monograph summarizes findings from a review of the research literature on implementation. The authors suggest a unified approach for talking about, studying, and promoting implementation in human services.
Gonzalez Castro, F., Barrera Jr., M., & Martinez Jr., C.R. (2004). The cultural adaptation of prevention interventions: resolving tensions between fidelity and fit. Prevention Science, 5(1).
(Abstract) A dynamic tension has developed in prevention science regarding two imperatives: (a) fidelity of implementation—the delivery of a manualized prevention intervention program as prescribed by the program developer, and (b) program adaptation—the modification of program content to accommodate the needs of a specific consumer group. This paper examines this complex programmatic issue from a community-based participatory research approach for program adaptation that emphasizes motivating community participation to enhance program outcomes.
Hanley, S., Ringwalt, C., Vincus, A.A., Ennett, S.T., Bowling, J.M., Haws, S.W., & Rohrbach, L.A. (2009). Implementing evidence-based substance use prevention curricula with fidelity: the role of teacher training. Journal of Drug Education, 39(1), 39-58.
(Abstract) It is widely recognized that teacher training affects the fidelity with which evidence-based substance use prevention curricula are implemented. In this article, the authors present the results of a 2005 survey of teachers from a nationally representative sample of 1721 public middle schools in the U.S.—measuring fidelity along two dimensions (adherence and dose) and assessing the number of hours, recency, and perceived effectiveness of teachers’ training, as well as the degree to which adherence was emphasized during training.
Hill, L.G., Maucione, K., & Hood, B.K. (2006). A focused approach to assessing program fidelity. Prevention Science, 8, 25-34.
(Abstract) The primary goals of this study were to explore the types and frequencies of adaptations made by facilitators and their reasons for making them, and to examine the hypothesis that a small number of kinds of adaptation would account for most of the adaptations reported. The authors interviewed 42 program facilitators involved in a large-scale dissemination about their implementation of a community-based prevention program. Interview questions addressed facilitators’ attitudes about program fidelity and the various types of changes, deletions, and additions they made. Although nearly all facilitators reported that fidelity to the program curriculum was important, most also reported adapting the program. The most frequent reason for adaptation was running out of time, which led facilitators to delete or change material.
Lee, S.J., Altschul, I., & Mowbray, C.T. (2008). Using planned adaptation to implement evidence-based programs with new populations. American Journal of Community Psychology, 41, 290-303.
(Abstract) The Interactive Systems Framework for Dissemination and Implementation elaborates the functions and structures that move evidence-based programs (EBPs) from research to practice. Inherent in that process is the tension between implementing programs with fidelity and the need to tailor programs to fit the target population. We propose Planned Adaptation as one approach to resolve this tension, with the goal of guiding practitioners in adapting EBPs so that they maintain core components of program theory while taking into account the needs of particular populations.
Mihalic, S. (n.d.). Implementation Fidelity. Retrieved January 25, 2013, from http://www.blueprintsprograms.com/
This document examines factors that influence fidelity of implementation, explores studies that address the “pro-fidelity” and “pro-adaptation” positions, and includes several recommendations for enhancing fidelity.
Mowbray, C.T., Bybee, D., Holter, M., & Lewandowski, L. (2006). Validation of a fidelity rating instrument for consumer-operated services. American Journal of Evaluation, 27, 9.
(Abstract) With the emphasis on the use of evidence-based practices has come a need to measure the fidelity of replications to the operations and principles of original models. Recent reviews have focused on methods to develop fidelity measures for evidence-based program models. However, the issue of how to validate such measures has been given scant attention. The research reported here attempted to validate a fidelity rating instrument for consumer-operated drop-in centers.
Mowbray, C.T., Holter, M.C., Teague, G.B., & Bybee, D. (2003). Fidelity criteria: development, measurement, and validation. American Journal of Evaluation, 24, 315.
(Abstract) The purpose of this review article is to outline steps in the development, measurement, and validation of fidelity criteria, providing examples from health and education literatures. It further identifies important issues in conducting each step. Finally, it raises questions about the dynamic nature of fidelity criteria, appropriate validation and statistical analysis methods, the inclusion of structure and process criteria in fidelity assessment, and the role of program theory in deciding on the balance between adaptation versus exact replication of model programs.
Ringwalt, C.L., Pankratz, M.M., Jackson-Newsom, J., Gottfredson, N.C., Hansen, W.B., Giles, S.M., & Dusenbury, L. (2009). Three-year trajectory of teachers’ fidelity to a drug prevention curriculum. Prevention Science, 11, 67-76.
(Abstract) Little is known about the trajectories over time of classroom teachers’ fidelity to drug prevention curricula. Using the “Concerns-Based Adoption Model” as a theoretical framework, the authors hypothesized that classroom teachers’ fidelity to drug prevention curricula would improve with repetition. Participants comprised 23 middle school teachers who videotaped their administration of three entire iterations of the All Stars curriculum . . . Study findings suggest the need for ongoing training and technical assistance, as well as “just in time” messages delivered electronically; but it is also possible that some prevention curricula may impose unrealistic expectations or burdens on teachers’ abilities and classroom time.
Rohrbach, L.A., Dent, C.W., Skara, S., Sun, P., & Sussman, S. (2006). Fidelity of implementation in Project Towards No Drug Abuse (TND): A comparison of classroom teachers and program specialists. Prevention Science, 8, 125-132.
This paper presents the results of an effectiveness trial of Project Towards No Drug Abuse [TND], in which the authors compared program delivery by regular classroom teachers and program specialists within the same high schools. Classroom sessions were observed by pairs of observers to assess three domains of implementation fidelity: adherence, classroom process, and perceived student acceptance of the program.
Social Development Research Group (SDRG). (2009). Can communities implement prevention programs with fidelity to program design? Social Development Research Group, Research Brief No. 1.
This research brief summarizes the findings of the Community Youth Development Study (CYDS), a randomized trial of Communities That Care. Results offer evidence that communities can successfully implement prevention programs with high implementation fidelity. By utilizing a specific framework and measurement tools developed for the CYDS study, communities were able to monitor the implementation of prevention programs.
Zvoch, K. (2012). How does fidelity of implementation matter? Using multilevel models to detect relationships between participant outcomes and the delivery and receipt of treatment. American Journal of Evaluation, 33, 547.
(Abstract) The study examined relationships between fidelity indicators and outcomes associated with a summer literacy intervention. It captures the extent to which students experienced instruction and demonstrates the ways in which dosage-response relationships manifest in program evaluation contexts.
Zvoch, K. (2009). Treatment fidelity in multisite evaluation: A multilevel longitudinal examination of provider adherence status and change. American Journal of Evaluation, 30, 44.
(Abstract) The authors analyzed program implementation data obtained from the repeated observation of teachers delivering one of two early childhood literacy programs to economically disadvantaged students in a large southwestern school district to estimate protocol adherence levels at the onset of the intervention as well as the change in adherence over the intervention period. The research questions were: (a) To what extent were the program models adhered to at the onset of the intervention and over the course of the intervention period? (b) To what extent did program adherence vary initially and over time within and between implementation sites? and (c) Were the program model and aspects of the contextual environment, teacher characteristics, and student backgrounds associated with initial adherence status and change outcomes?
Developed under the Substance Abuse and Mental Health Services Administration’s (SAMHSA) Center for the Application of Prevention Technologies contract. Reference #HHSS277200800004C. For training and/or technical assistance purposes only.