NREPP Glossary

The following definitions have been drawn from numerous sources and are tailored specifically for content on the NREPP Web site. The terms defined here may have slightly different meanings in other settings.

Adaptation
A modest to significant modification of an intervention to meet the needs of different people, situations, or settings.

Adverse effect
Any harmful or unwanted change in a study group resulting from the use of an intervention.

Attrition
The loss of participants over the course of a study due to voluntary dropout or other reasons. Higher rates of attrition can threaten the validity of a study. Attrition is one of the six NREPP criteria used to rate Quality of Research.
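
As a simple illustration (the figures below are hypothetical, not drawn from any NREPP review), an attrition rate is the proportion of enrolled participants who do not complete the study, sketched here in Python:

# Hypothetical example: computing an attrition rate.
enrolled = 200       # participants enrolled at baseline
completed = 154      # participants who provided final follow-up data

attrition_rate = (enrolled - completed) / enrolled
print(f"Attrition rate: {attrition_rate:.1%}")   # -> Attrition rate: 23.0%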

Baseline
The initial time point in a study just before the intervention or treatment begins. The information gathered at baseline is used to measure change in targeted outcomes over the course of the study.

Co-occurring disorders
In the context of NREPP, substance abuse and mental disorders that often occur in the same individual at the same time (e.g., alcohol dependence and depression); also known as comorbid disorders.

Comparative effectiveness research
The Federal Coordinating Council on Comparative Effectiveness Research defines comparative effectiveness research, in part, as the conduct and synthesis of research comparing the benefits and harms of different interventions and strategies (e.g., medications, procedures, medical and assistive devices and technologies, diagnostic testing, behavioral change, and delivery system strategies) to prevent, diagnose, treat, and monitor health conditions in "real world" settings.

(For the full definition, see the Federal Coordinating Council's June 30, 2009, Report to the President and the Congress on Comparative Effectiveness Research.)

Comparison group
A group of individuals that serves as the basis for comparison when assessing the effects of an intervention on a treatment group. A comparison group typically receives some treatment other than what its members would normally receive and is therefore distinguished from a control group, which often receives no treatment or "usual" treatment. To make the comparison valid, the composition and characteristics of the comparison group should resemble those of the treatment group as closely as possible. Some studies use a control group in addition to a comparison group.

Confounding variables
In an experiment, any characteristic that differs between the experimental group and the comparison group and is not the independent variable under study. These characteristics or variables "confound" the ability to explain the experimental results because they provide an alternative explanation for any observed differences in outcome. In assessing a classroom curriculum, for example, a confounding variable would exist if some students were taught by a highly experienced instructor while other students were taught by a less experienced instructor. The difference in the instructors' experience level makes it harder to determine if the differences in student outcomes (e.g., grades) were caused by the effects of the curriculum or by the variation in instructors. The likelihood that confounding variables might have affected the outcomes of a study is one of the six NREPP criteria used to rate Quality of Research.

Control group
A group of individuals that serves as the basis of comparison when assessing the effects of an intervention on a treatment group. Depending upon the study design, a control group may receive no treatment, a "usual" or "standard" treatment, or a placebo. The composition and characteristics of the control group should resemble those of the treatment group as closely as possible to make the comparison valid.

Core components
The most essential and indispensable components of an intervention (core intervention components) or the most essential and indispensable components of an implementation program (core implementation components).

Cultural appropriateness
In the context of public health, sensitivity to the differences among ethnic, racial, and/or linguistic groups and awareness of how people's cultural background, beliefs, traditions, socioeconomic status, history, and other factors affect their needs and how they respond to services. Generally used to describe interventions or practices.

Cultural competence
In the context of public health, the knowledge and sensitivity necessary to tailor interventions and services to reflect the norms and culture of the target population and avoid styles of behavior and communication that are inappropriate, marginalizing, or offensive to that population. Generally used to describe people or institutions. Because of the changing nature of people and cultures, cultural competence is seen as a continual and evolving process of adaptation and refinement.

Dissemination
The targeted distribution of program information and materials to a specific audience. The intent is to spread knowledge about the program and encourage its use.

DSM (Diagnostic and Statistical Manual of Mental Disorders)
The Diagnostic and Statistical Manual of Mental Disorders, or DSM, is the standard reference handbook used by mental health professionals in the United States to classify mental disorders. There have been five revisions of the DSM since it was first published by the American Psychiatric Association in 1952. The most recent version is the DSM-IV or Fourth Edition, published in 1994; a text revision (DSM-IV-TR) was published in 2000. Earlier editions that may be referenced in NREPP include the DSM-III (1980) and DSM-III-R (1987).

Effective Program
A few Effective Programs were re-reviewed for NREPP using updated criteria in 2006-2007 and can now be found by searching for them on the Find an Intervention page.

Evidence-based
Approaches to prevention or treatment that are based in theory and have undergone scientific evaluation. "Evidence-based" stands in contrast to approaches that are based on tradition, convention, belief, or anecdotal evidence.

Experimental
A study design in which (1) the intervention is compared with one or more control or comparison conditions, (2) subjects are randomly assigned to study conditions, and (3) data are collected at both pretest and posttest or at posttest only. The experimental study design is considered the most rigorous of the three types of designs (experimental, quasi-experimental, and preexperimental).
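
As a rough sketch of the random assignment that defines this design (the roster, group sizes, and seed below are hypothetical and shown only for illustration, in Python):

import random

# Hypothetical roster of 20 study participants.
participants = [f"P{i:03d}" for i in range(1, 21)]

# Random assignment to intervention or control is the defining feature of an
# experimental design; a fixed seed keeps this sketch reproducible.
random.seed(42)
random.shuffle(participants)
half = len(participants) // 2
intervention_group = participants[:half]
control_group = participants[half:]

print("Intervention:", intervention_group)
print("Control:     ", control_group)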

Externalizing behaviors
Social behaviors and other external cues that reflect an individual's internal emotional or psychological conflicts. Examples include spontaneous weeping, "acting out," and uncharacteristic aggression. Reduction of externalizing behaviors is a frequently used measure of the success of treatment or intervention for mental or emotional disorders.

Fidelity
Fidelity of implementation occurs when implementers of a research-based program or intervention (e.g., teachers, clinicians, counselors) closely follow or adhere to the protocols and techniques that are defined as part of the intervention. For example, for a school-based prevention curriculum, fidelity could involve using the program for the proper grade levels and age groups, following the developer's recommendations for the number of sessions per week, sequencing multiple program components correctly, and conducting assessments and evaluations using the recommended or provided tools.

Generalizability
The extent to which a study's results can be expected to occur with other people, settings, or conditions beyond those represented in the study. Threats to generalizability include lack of randomization, effects of testing, multiple-treatment interference, selection-treatment interference, effects of experimental arrangements, experimenter effects, and specificity of variables.

Implementation
The use of a prevention or treatment intervention in a specific community-based or clinical practice setting with a particular target audience.

Implementation team
A core set of individuals charged with providing guidance through full implementation of the intervention. This team helps ensure engagement of the stakeholders, increases readiness for implementation, ensures fidelity to the intervention, monitors outcomes, and addresses barriers to implementation.

Indicated
One of the three categories (Universal, Selective, Indicated) developed by the Institute of Medicine to classify preventive interventions. Indicated prevention strategies focus on preventing the onset or development of problems in individuals who may be showing early signs but are not yet meeting diagnostic levels of a particular disorder.

Internal validity
The degree to which the intervention or experimental manipulation was the cause of any observed differences or changes in behavior.

Internalizing behaviors
Behaviors that reflect an individual's transfer of external social or situational stresses to emotional, psychological, or physical symptoms. One well-known internalizing behavior is a child's development of stomach cramps when the parents argue; another is insomnia during a high-stress situation at work. Reduction of internalizing behaviors is a frequently used measure of the success of treatment or intervention for mental or emotional disorders.

Intervention
A strategy or approach intended to prevent an undesirable outcome (preventive intervention), promote a desirable outcome (promotion intervention), or alter the course of an existing condition (treatment intervention).

Legacy Programs
The label used by SAMHSA for all former Effective and Promising Programs, which were reviewed between 1997 and 2004 as part of the Center for Substance Abuse Prevention's Model Programs Initiative. Summaries for these Legacy Programs are listed in the Legacy Programs section of the NREPP Web site.

Logic model
A tool that allows key stakeholders to develop a strategic plan to address an identified community problem.

Mental health promotion
Attempts to (a) encourage and increase protective factors and healthy behaviors that can help prevent the onset of a diagnosable mental disorder and (b) reduce risk factors that can lead to the development of a mental disorder.

Mental health treatment
Assistance to individuals for existing mental health conditions or disorders.

Meta-analysis
A statistical procedure for combining the results of two or more studies on the same topic.
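
One common approach is a fixed-effect, inverse-variance weighted average; the Python sketch below uses hypothetical effect sizes and variances for illustration only:

# Each study contributes an effect size (e.g., a standardized mean
# difference) and its variance; more precise studies receive more weight.
studies = [
    {"effect": 0.30, "variance": 0.02},
    {"effect": 0.45, "variance": 0.05},
    {"effect": 0.10, "variance": 0.04},
]

weights = [1 / s["variance"] for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
print(f"Pooled effect size: {pooled:.2f}")   # -> about 0.28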

Missing data
Data or information that researchers intended to collect during a study that was not actually collected or was collected incompletely. Missing data may occur, for example, when survey respondents do not answer all questions in a survey, or when the researchers "throw out" or exclude survey questions because the responses do not meet validation checks. Missing data can threaten the validity and reliability of a study if steps are not taken to compensate for or "impute" (replace with calculated data) the missing information. Missing data are one of the six NREPP criteria used to rate Quality of Research.
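
One very simple form of imputation is mean imputation, sketched below in Python with hypothetical survey responses; it is shown only for illustration, since real studies typically rely on more sophisticated approaches such as multiple imputation:

# Hypothetical responses on a 1-5 scale; None marks a skipped item.
responses = [4, 5, None, 3, None, 4, 2]

# Mean imputation: replace each missing value with the mean of the
# observed values (simple, but it understates variability).
observed = [r for r in responses if r is not None]
mean_value = sum(observed) / len(observed)
imputed = [r if r is not None else mean_value for r in responses]
print(imputed)   # [4, 5, 3.6, 3, 3.6, 4, 2]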

Model Program
Most of the Model Programs were re-reviewed for NREPP using updated criteria in 2006-2007 and can now be found by searching for them on the Find an Intervention page.

Outcome
A change in behavior, physiology, attitudes, or knowledge that can be quantified using standardized scales or assessment tools. In the context of NREPP, outcomes refer to measurable changes in the health of an individual or group of people that are attributable to the intervention.
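
As a minimal sketch of how an outcome might be quantified (the scale scores below are hypothetical), change can be summarized as the difference between baseline and posttest means:

from statistics import mean

# Hypothetical scores on a standardized symptom scale (lower = better)
# for the same treatment group at baseline and at posttest.
baseline = [22, 25, 19, 30, 27, 24]
posttest = [15, 20, 14, 24, 22, 18]

change = mean(posttest) - mean(baseline)
print(f"Mean change from baseline: {change:.1f} points")   # -> -5.7 points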

Outcome evaluation
An evaluation to determine the extent to which an intervention affects its participants and the surrounding environments. Several important design issues must be considered, including how best to determine the results and how best to contrast what happens as a result of the intervention with what happens without it.

Preexperimental
A study design in which (1) there are no control or comparison conditions and (2) data are collected at pretest or posttest only; includes simple observational or case studies. The preexperimental study design provides the most limited scientific rigor of the three types of designs (experimental, quasi-experimental, and preexperimental).

Process evaluation
An evaluation to determine whether an intervention has been implemented as intended.

Program drift
A threat to fidelity due to compromises made during implementation.

Program fit
The degree to which a program matches a community’s needs, resources, and implementation capacity.

Promising Program
A few Promising Programs were re-reviewed for NREPP using updated criteria in 2006-2007 and can now be found by searching for them on the Find an Intervention page.

Psychometrics
The construction of instruments and procedures for measurement.

Quality assurance
Activities and processes used to check fidelity and the quality of implementation.

Quality of Research
One of the two main categories of NREPP ratings. Quality of Research (QOR) is how NREPP quantifies the strength of evidence supporting the results or outcomes of an intervention. Each outcome is rated separately because an intervention may target multiple outcomes and the evidence supporting each outcome may vary. These QOR ratings are followed by brief "Strengths and Weaknesses" statements where reviewers comment on the studies and materials they reviewed and explain what factors may have contributed to high or low ratings. For more information on the scientific reviewers who rate QOR and how ratings are derived, see the NREPP page on Review Process Quality of Research.

Quasi-experimental
A study design in which (1) the intervention is compared with one or more control or comparison conditions, (2) subjects are not randomly assigned to study conditions, and (3) data are collected at pretest and posttest or at posttest only; includes time series studies, which have three pretest and three posttest data collection points. The quasi-experimental study design provides strong scientific rigor, though less than an experimental design.

Ratings
NREPP provides two types of ratings for each intervention reviewed: Quality of Research and Readiness for Dissemination. Each intervention has multiple Quality of Research ratings (one per outcome) and one overall Readiness for Dissemination rating. QOR and RFD ratings are followed by brief "Strengths and Weaknesses" statements where reviewers comment on the studies and materials they reviewed and explain what factors may have contributed to high or low ratings.

Readiness for Dissemination
One of the two main categories of NREPP ratings. Readiness for Dissemination (RFD) is how NREPP quantifies and describes the quality and availability of an intervention's training and implementation materials. More generally, it describes how easily the intervention can be implemented with fidelity in a real-world application using the materials and services that are currently available to the public. For more information on the reviewers who rate RFD and how ratings are derived, see the NREPP page on Review Process Readiness for Dissemination.

Reliability of measure
The degree of variation attributable to inconsistencies and errors in measurement. Key types include test-retest, interrater, and interitem. Reliability of measures is one of the six NREPP criteria used to rate Quality of Research.
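
Test-retest reliability, for example, is often summarized as the correlation between two administrations of the same measure; the Python sketch below uses hypothetical scores for illustration only:

from statistics import correlation  # requires Python 3.10+

# Hypothetical scores from the same six respondents at two time points.
time1 = [10, 14, 18, 22, 25, 30]
time2 = [11, 13, 19, 21, 27, 29]

# A high test-retest correlation suggests the measure yields consistent
# results; a low correlation signals inconsistency or measurement error.
print(f"Test-retest reliability (r): {correlation(time1, time2):.2f}")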

Replication
A repetition of an evaluation in which the original investigator(s) or an independent party uses the same protocol with an identical or similar target population, or a slightly modified protocol with a slightly different population, and obtains results consistent with the positive findings of the original evaluation.

Selective
One of the three categories (Universal, Selective, Indicated) developed by the Institute of Medicine to classify preventive interventions. Selective prevention strategies focus on specific groups viewed as being at higher risk for mental health disorders or substance abuse because of highly correlated factors (e.g., children of parents with substance abuse problems).

Substance abuse prevention
Attempts to stop substance abuse before it starts, either by increasing protective factors or by minimizing risk factors.

Substance abuse treatment
Assistance to individuals for existing substance abuse disorders.

Sustainability
The long-term survival and continued effectiveness of an intervention.

Symptomatology
The combined symptoms or signs of a disorder or disease.

Systematic review
A literature review that collects, summarizes, and presents the results of individual studies on a specific topic and then synthesizes their findings.

Universal
One of the three categories (Universal, Selective, Indicated) developed by the Institute of Medicine to classify preventive interventions. Universal prevention strategies address the entire population (national, local community, school, neighborhood), with messages and programs to prevent or delay the use/abuse of alcohol, tobacco, and other drugs.

Validity of measure
The degree to which a measure accurately captures the meaning of a concept or construct. Key types include pragmatic/predictive, face, concurrent/criterion, and construct. Validity of measures is one of the six NREPP criteria used to rate Quality of Research.
