To foster greater understanding of SAMHSA’s National Registry of Evidence-based Programs and Practices (NREPP) and NREPP’s submission and review process, in 2013 the Service to Science (STS) Team posed a series of frequently asked questions to NREPP staff. The following are responses to those questions. Topics include (but are not limited to) details about the NREPP application process, how NREPP interacts with applicants, and how NREPP views qualitative data.
Please briefly describe the vision and mission of the NREPP registry. What was the impetus for establishing it?
The purpose of the National Registry of Evidence-based Programs and Practices (NREPP) is to help the public learn more about available evidence-based programs and practices and determine which of these may best meet their needs. NREPP is one way that the Substance Abuse and Mental Health Services Administration (SAMHSA) is working to improve access to information on evaluated interventions and reduce the lag time between the creation of scientific knowledge and its practical application in the field. NREPP is a voluntary, self-nominating system in which program developers elect to participate. There will always be some interventions that are not submitted to NREPP, and not all that are submitted are reviewed.
Please describe the typical progression a program takes from submission to acceptance into the registry. In what ways, and at what points, do programs interact with NREPP reviewers?
Program developers who prepare a submission packet can apply for review during the NREPP open submission period, which generally occurs on an annual basis. The next NREPP open submission period will be from January 2 through February 28, 2014. Information on the submission period and minimum review requirements is published in a Federal Register notice the summer prior to the open submission period. Throughout the submission period, program developers can use the NREPP Online Submission System to upload relevant documents. NREPP staff carefully screen the uploaded materials for evidence that the intervention submissions meet the minimum review requirements. Submissions that meet these requirements are considered for acceptance, but the total number accepted each year for review is determined by the availability of SAMHSA’s funding resources. SAMHSA also takes into account interventions that represent topical areas that are underrepresented on NREPP. Program developers receive notification of SAMHSA’s decision. Accepted submissions are added to the list of interventions awaiting review. For submissions that are not accepted for review, the associated program developer is given the reason for this decision and is encouraged to resubmit during future open submission periods, once the submission has been revised to address deficiencies.
Although NREPP staff do not provide technical assistance to program developers applying for review, they are available to answer questions about the minimum review requirements and general information regarding the preparation and uploading of a submission packet. NREPP staff can be contacted through any of the following ways:
- By toll-free phone number: 1-866-43NREPP (1-866-436-7377)
- By email: firstname.lastname@example.org
- Through NREPP’s online contact form: http://www.nrepp.samhsa.gov/ContactUs.aspx
The review process begins with a kickoff call involving the program developer of the accepted submission, the developer’s designated point(s) of contact, and the NREPP staff assigned to the review. The developer (and others, if necessary) must designate one individual, referred to as the Principal, to serve as the single authority for the review of the intervention; other individuals may participate throughout the review process until the final intervention summary is published on NREPP’s Web site. The purpose of the kickoff call is to give participants an overview of the review process and an opportunity to ask questions. During the review process, NREPP staff identify two external reviewers for the Quality of Research (QOR) review and two external reviewers for the Readiness for Dissemination (RFD) review. These external reviewers remain anonymous to both the Principal and SAMHSA. Reviewers do not interact with program developers or Principals; they conduct the review and submit ratings and comments directly to NREPP staff. The Principal reviews and approves the intervention summary before it is forwarded to SAMHSA’s Government Project Officer for approval and publication on the NREPP Web site.
What are the benefits to programs of being listed in NREPP? What are the costs to programs of going through the application process?
Preparation of a submission packet and participation in an NREPP review can be resource-intensive processes, requiring the support of key players in the development and evaluation of an intervention and the contribution of multiple sets of program materials. (One set of Readiness for Dissemination [RFD] materials remains with NREPP after the review is complete.) Yet, several hundred developers have prepared and submitted a packet to be considered for review by NREPP.
The benefits of investing the effort required to prepare a submission to NREPP are threefold:
- Because NREPP is a resource accessed by implementers, researchers, and other professionals throughout the United States and internationally, interventions that are included in NREPP have the opportunity for exposure to a wide audience, which can lead to an increase in the intervention’s use.
- Interventions reviewed by NREPP have the potential for increased sustainability, which can stem from acceptance by and inclusion in a national registry (inclusion in such a registry is a funding requirement for some agencies).
- The assessment and ratings provided by NREPP’s reviewers can serve as guidance for program developers, evaluators, and others involved in the intervention reviewed by NREPP. In addition, the criteria that are used to rate interventions, evidence supporting their outcomes, and the amount and quality of implementation resources can be used to guide study designs and the development of intervention materials.
There is no fee for submitting an intervention for review. For accepted interventions, there is no fee for the review process.
To what extent does NREPP coach programs on what is needed to gain acceptance into the registry?
Because the principal role of NREPP is to administer an objective peer-review process for the evaluation of intervention outcomes and materials, NREPP staff do not provide technical assistance or coaching; however, they are available to answer questions about the minimum review requirements and the submission process. Information on the open submission period and minimum review requirements is published in a Federal Register notice prior to the open submission period, and it is also included on the NREPP Web site. In addition, guidance documents have been developed and are available on the NREPP Web site to assist submitters. For example, the NREPP Learning Center includes the following online course: Preparing an Intervention for NREPP Submission and Potential Review (http://www.nrepp.samhsa.gov/Courses/Submissions/NREPP_0201_0010.html). This course was created for developers of mental health or substance abuse programs, including principal investigators of relevant research studies, who are working toward submitting an intervention for NREPP review. The online course is available as a learning module as well as a downloadable, printer-friendly PDF file. The NREPP Submission Checklist (http://www.nrepp.samhsa.gov/pdfs/NREPP_Submission_Checklist.pdf), which also can be downloaded and printed, assists interested program developers by helping them document their progress toward submission.
What advice would you give programs as to whether they should seek NREPP recognition? What are some “must haves” for programs that are considering applying?
Program developers and researchers who are interested in disseminating their study findings and programs to a wider audience may see NREPP as an opportunity to help them accomplish this objective. Developers may also seek validation of their work through their intervention’s inclusion in NREPP. All submissions are assessed on the basis of four minimum requirements, which must be demonstrated or documented before the end of the submission period. Three requirements are related to the Quality of Research (QOR) review, and one is related to the Readiness for Dissemination (RFD) review:
Quality of Research
- The intervention has produced one or more positive behavioral outcomes (p ≤ .05) in mental health or substance abuse among individuals, communities, or populations. Significant differences between groups over time must be demonstrated for each outcome.
- Evidence of these outcomes has been demonstrated in at least one study using an experimental or quasi-experimental design.
- Results of these studies have been published in a peer-reviewed journal or other professional publication (e.g., book volume) or documented in a comprehensive evaluation report.
Readiness for Dissemination
- Implementation materials, training and support resources, and quality assurance procedures have been developed and are ready for use by the public.
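The first Quality of Research requirement hinges on a statistically significant between-group difference (p ≤ .05) on an outcome. As a purely illustrative sketch (the data, the variable names, and the choice of Welch’s t-test are assumptions for illustration, not NREPP prescriptions), a between-group comparison of change scores might look like:

```python
import math
from statistics import mean, variance

# Hypothetical change scores (post minus pre) for two study arms.
treatment = [12, 9, 11, 14, 10, 13, 12, 11]
control = [8, 7, 9, 6, 10, 8, 7, 9]

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom
    for two independent samples with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(treatment, control)
# Compare |t| with the two-sided critical value from a t table
# (about 2.16 for df of roughly 13 at p = .05); here t is about 4.78,
# so these made-up groups differ significantly.
print(f"t = {t:.2f}, df = {df:.1f}")
```

In an actual submission, the analysis would of course be whatever design-appropriate test the study used; this sketch only illustrates the significance threshold the requirement names.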
How long does NREPP recognition last (i.e., are agencies required to resubmit after a period of time)?
Intervention summaries published on the NREPP Web site will remain on the site as long as the intervention’s implementation materials, training (if required), and quality assurance measures are available to the public. The Principal may request the removal of his or her intervention summary from NREPP but, to date, intervention summaries have only been removed because of the unavailability of one or more of the items noted above. Interventions are eligible for a second review if they have been included in NREPP for 5 years and if new research or updated dissemination materials are available.
Based on your experience with reviews, where do programs typically fail to meet NREPP Quality of Research (QOR) requirements? What are the most common points of failure? What types of issues do programs inadequately address?
There is no QOR-related minimum review requirement that stands out as one that submissions typically fail to meet, and some submissions fail to meet two or all three requirements. However, for those interventions that have been accepted and reviewed, common weaknesses identified by the QOR external reviewers include the following, as presented for each QOR criterion:
Criterion 1: Reliability of Measures
- Limited information regarding reliability of measures
- Lack of psychometric support for reliability of measures
- Use of a modified version of a known instrument without providing reliability coefficients for the modified instrument
- Use of only one form of reliability testing (e.g., internal consistency) rather than a more complete set of approaches (e.g., test-retest/alternate form, interrater)
- Poor reliability
- Use of an instrument with a different population than the one for which the instrument has been tested (e.g., use of a high school version of an instrument with middle school students)
- Use of a mix of sources of reliability data that may raise concerns about common standards (e.g., reports on disciplinary problems from schools that have different definitions for disciplinary problems)
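Internal consistency, the reliability form mentioned above, is commonly summarized with Cronbach’s alpha, which reviewers look for as a reliability coefficient. The following minimal sketch uses made-up item scores; the data and the common 0.70 rule of thumb are illustrative assumptions, not NREPP criteria:

```python
from statistics import pvariance

# Hypothetical responses: 5 respondents x 4 scale items.
responses = [
    [3, 4, 3, 5],
    [2, 3, 2, 4],
    [4, 5, 4, 5],
    [1, 2, 2, 3],
    [3, 3, 4, 4],
]

def cronbach_alpha(rows):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = len(rows[0])
    items = list(zip(*rows))                      # one column per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}")   # about 0.94 for this made-up data
```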
Criterion 2: Validity of Measures
- Limited information regarding validity of measures
- Lack of evidence to support content validity
- Lack of evidence to demonstrate higher forms of validity (i.e., concurrent, discriminant, predictive, and construct validity)
Criterion 3: Intervention Fidelity
- Lack of ongoing training and corrective oversight of intervention delivery during the study
- Lack of a written protocol to document whether all required components of the intervention were provided as intended during the study
- Lack of a fidelity instrument, or use of a fidelity instrument that has not been psychometrically tested
- Lack of objective sources (e.g., independent observers who are not interventionists and who are blind to condition assignment) for assessing the extent of intervention fidelity
Criterion 4: Missing Data and Attrition
- Little or no information on missing data and attrition
- Missing data and attrition rates that are too high to be subject to statistical control
- Lack of appropriate statistical modeling of both missing data and attrition
- Statistically significant difference in attrition rates between the intervention condition and the control or comparison condition
- Inability of the investigator to document that intervention completers and noncompleters do not differ systematically
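Differential attrition between conditions, flagged in the list above, can be probed with a two-proportion z-test on dropout rates. This sketch uses invented counts, and the pooled large-sample normal approximation is an illustrative choice rather than an NREPP requirement:

```python
import math

def two_proportion_z(drop1, n1, drop2, n2):
    """Two-sided two-proportion z-test (pooled), using the
    large-sample normal approximation via math.erf."""
    p1, p2 = drop1 / n1, drop2 / n2
    pool = (drop1 + drop2) / (n1 + n2)
    se = math.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 30/100 dropouts in the intervention arm vs. 12/100 in control.
z, p = two_proportion_z(30, 100, 12, 100)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < .05 here signals differential attrition
```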
Criterion 5: Potential Confounding Variables
- Lack of reported data regarding confounding variables
- Lack of a sufficiently rigorous study design (e.g., pre-post assessment only, posttest assessment only without random assignment to condition, baseline differences among conditions that are not modeled in the statistical contrasts)
- Instances where the intervention group and the control or comparison group represent different populations
- Inability to control for variables that might be expected to have an impact on study outcomes
Criterion 6: Appropriateness of Analysis
- Lack of an analysis appropriate to the dataset
- Use of an inappropriate statistical test
- Small sample size
- Use of inadequate power analysis or an underpowered study
- Failure to correct for the experiment- or family-wise type I error rate when conducting many statistical contrasts between groups
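The last bullet concerns inflation of the family-wise type I error rate when many between-group contrasts are tested. A Bonferroni adjustment is the simplest correction; the raw p-values below are invented for illustration, and NREPP does not mandate this particular method:

```python
# Hypothetical raw p-values from four between-group contrasts.
raw_p = [0.003, 0.020, 0.045, 0.210]

# Bonferroni: multiply each p-value by the number of tests (capped at 1).
adjusted = [min(1.0, p * len(raw_p)) for p in raw_p]

significant = [p <= 0.05 for p in adjusted]
print(adjusted)      # roughly [0.012, 0.08, 0.18, 0.84]
print(significant)   # only the first contrast survives the correction
```

Note that the third contrast (raw p = .045) would have looked significant without the correction; this is exactly the pattern reviewers cite as a weakness.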
Based on your experience with reviews, where do programs typically fail to meet NREPP Readiness for Dissemination (RFD) requirements? What are the most common points of failure? What types of issues do programs inadequately address?
Submissions that did not meet the RFD-related minimum review requirement did not present evidence that the program had implementation materials to support dissemination to the public; that is, they lacked information regarding the availability of implementation materials, training and support resources, and/or quality assurance procedures. For those interventions that have been accepted and reviewed, common weaknesses identified by the RFD external reviewers include the following, as presented for each RFD criterion:
Criterion 1: Availability of Implementation Materials
- Lack of detail in implementation protocols
- Unclear qualifications for implementers
- Limited support for adapting the intervention to other cultural groups
- Manuals that lack organization and that are not user-friendly
Criterion 2: Availability of Training and Support Resources
- Lack of available formalized training (if training is required)
- No clear method for accessing training
- Lack of information regarding the availability of technical assistance and support resources
- Inadequate technical assistance and support resources
Criterion 3: Availability of Quality Assurance Procedures
- No information on improving intervention delivery
- No fidelity measures in place
- Lack of guidance on the interpretation of quality assurance measures
- Lack of guidance for the use of fidelity measures
- Lack of information on outcome measurement
- Lack of guidance on the interpretation of outcome measures
We understand that programs do not have to get perfect scores along every criterion to be listed in the registry. In light of this, how does NREPP ultimately decide which programs to include in the registry?
The number and type of interventions accepted for review and ultimately included in NREPP are contingent on two factors: (1) the number of submissions meeting all four minimum requirements, and (2) the availability of funding to support the review process for accepted submissions. After an intervention has been accepted for review, the review process runs its course, culminating in an intervention summary. Each intervention summary is published on the NREPP Web site (after review and approval by both the Principal and SAMHSA) regardless of the findings of the Quality of Research (QOR) and Readiness for Dissemination (RFD) reviews (i.e., the external reviewers’ ratings of each QOR and RFD criterion). It is SAMHSA’s policy that once a review has been authorized by the Principal, the resulting intervention summary is expected to be published on the NREPP Web site. Therefore, if a Principal declines to publish the intervention summary, the Web site will provide information stating that although the intervention was reviewed, NREPP was not authorized by the Principal to publish the results of the review. Since SAMHSA implemented this policy, no Principals have declined publication.
How does NREPP judge the merit of qualitative research/data? What criteria does it use?
All Quality of Research (QOR) and Readiness for Dissemination (RFD) external reviewers must complete training on standardized criteria, and they are required to use the criteria as the basis for their ratings. The QOR review is based on quantitative evidence, as indicated by the minimum review requirements. However, qualitative research and data can be useful in helping users understand the processes and documentation underlying the intervention, and they are particularly useful in regard to intervention fidelity. The RFD review focuses on qualitative information, primarily in determining whether users can obtain the information needed to implement the intervention with fidelity.
Is there anything else programs should know about NREPP that we may not have asked you about?
The review of interventions is a continual process, and because new intervention summaries are regularly published on the NREPP Web site, the registry is always growing. Despite the inclusion of a large number of intervention summaries in NREPP (over 315 as of December 2013), there are still many content areas, target populations, and prevention and treatment strategies that are not represented, and NREPP continues to seek out and encourage submissions in these areas. Efforts have been made to reach international program developers and implementers, including researchers in Australia, Canada, and the United Kingdom. Select intervention summaries already published on the NREPP Web site have been translated into Spanish; translations of intervention summaries into Spanish, as well as other languages, will continue on the basis of available funding.
The Quality of Research (QOR) and Readiness for Dissemination (RFD) ratings of an intervention range from 0.0 to 4.0, with 4.0 being the highest rating given. In addition to the ratings, the intervention’s research and dissemination strengths and weaknesses are noted as part of the summary. The strengths and weaknesses provide an opportunity for users to understand and interpret the quantitative ratings and to determine whether they can be confident (on the basis of a personal threshold) in the study findings. The strengths and weaknesses may also be useful for program developers and researchers to sustain or further develop the items noted.