Evidence-Based Clinical and Public Health: Generating and Applying the Evidence

Secretary’s Advisory Committee on
National Health Promotion and Disease Prevention Objectives for 2020

July 26, 2010

Table of Contents

Background and Purpose of this Brief

Origins of the Movement toward Evidence-Based Public Health Practice

Integrating the Existing Science into Implementation Processes

References

Exhibit 1. RE-AIM Definitions and Questions

Exhibit 2. Hierarchy of Evidence Used by the USPSTF to Evaluate Clinical Studies

Exhibit 3. USPSTF Criteria for Grading the Internal Validity of Individual Studies

Exhibit 4. USPSTF Recommendation Grid: Letter Grade of Recommendation or Statement of Insufficient Evidence Assessing Certainty and Magnitude of Net Benefit for Population Level Interventions

Exhibit 5. What the USPSTF Grade Means and Suggestions for Practice

Exhibit 6. Assessing the Strength of a Body of Evidence on Effectiveness of Population-Based Interventions in the Guide to Community Preventive Services

Exhibit 7. Relationship of Strength of Evidence of Effectiveness and Strength of Recommendations

Appendix 1. Members of the Ad Hoc Groups on Evidence-Based Practices

Appendix 2. Methods for Conducting a Systematic Review of Evidence

Appendix 3. The U.S. Preventive Services Task Force and the Guide to Community Preventive Services

Appendix 4. The Community Preventive Services Task Force

Appendix 5. Categorizing the Effectiveness of Interventions

Appendix 6. Available Evidence for Intervention Strategies at Multiple Levels to Address Population Health Issues

Background and Purpose of this Brief

The Healthy People initiative, coordinated by the Office of Disease Prevention and Health Promotion (ODPHP) under the U.S. Department of Health and Human Services (HHS), provides quantitative health promotion and disease prevention goals and objectives to be achieved in ten-year increments. Planning is underway for the fourth iteration of national objectives—Healthy People 2020. To aid in this effort, the Secretary’s Advisory Committee on National Health Promotion and Disease Prevention Objectives for 2020 (the Committee) has been convened to provide advice and consultation to the HHS Secretary.

In its previous iterations, Healthy People set targets for reducing burden, but did not suggest actions for achieving these targets or offer tools for considering the relative effectiveness or cost-effectiveness of these efforts. In June of 2009, HHS asked the Committee to provide guidance on criteria that could be used to select “evidence-based” or “knowledge-based” actions for inclusion in Healthy People 2020.

The Committee convened an ad hoc group of experts via conference call (Appendix 1) to provide input on concepts of evidence-based practice in public health, challenges in assessing evidence in support of public health interventions, and existing resources for evidence-based public health practices. In March 2009, this group developed an initial document summarizing these concepts. In 2010, HHS sought input on what guidance would best help the users of Healthy People 2020 employ available evidence when choosing from among the list of objectives and interventions. HHS was also interested in feedback on approaches that could maximize adoption and use of Healthy People.

The Committee convened a second ad hoc advisory group (Appendix 1) to share their perspectives on these issues. The Committee felt that an important use of evidence is to assure that resources are allocated to maximize population health impact; objectives that have proven, highly effective interventions should generally be preferred, as should interventions within an objective that have a stronger evidence base. Drawing on the work of the earlier group as background material, the second advisory group proposed that Healthy People 2020 seek to tie goals and objectives to focused, evidence-based interventions that guide effective action and accountability at the federal, state, and local levels.

The current brief presents the cumulative input of these two advisory groups on critical issues and needs for consideration in guiding Healthy People users to actions that are grounded in solid scientific evidence. To facilitate the creation of seamless linkages from Healthy People 2020 to existing resources that periodically evaluate and interpret evidence (e.g., the U.S. Preventive Services Task Force’s Guide to Clinical Preventive Services and the Guide to Community Preventive Services), the Committee offers examples of several such resources.

We recognize that the amount and quality of evidence for public health practice and policy varies greatly for different types of interventions and across domains. There are many areas where evidence is available to inform us about what works. In such cases it is critical to invest in interventions that offer real value. Yet in other important areas (like obesity), our knowledge of which interventions are effective, singly or in combination with others, is limited or absent. In these situations, trade-offs should be weighed between the need to address a specific health objective and the need to use scarce resources on interventions of clear effectiveness.

If there is an imperative to act on a problem—due to its enormity or to constraints on how intervention resources can be used—it may be necessary to implement unproven interventions within the context of learning about their effectiveness, while using the best available theoretical constructs and expert opinion. To help make choices in the absence of clear evidence, we provide information about emerging evidence-based approaches that go beyond what the aforementioned resources recommend, in order to inform disease prevention and health promotion.


Origins of the Movement toward Evidence-Based Public Health Practice

Evidence-based public health (EBPH) has its roots in clinical epidemiology and evidence-based medicine (EBM). During the 1970s and 1980s, evidence accumulated that expert reviews and recommendations from expert panels frequently failed to include relevant studies and produced suboptimal conclusions. It was not clear what aspects of health care practices were associated with better health outcomes. EBM was developed in response to this experience, and as a means to explore which combinations of specific services and medical conditions lead to improved health outcomes in actual practice, and for whom. EBM originated in the management of individual patients, where the best available evidence was combined with patient preferences and knowledge of local resources to improve decision making.

More recently, EBM has focused primarily on clarifying aspects of medical decision making that can be made on a scientific basis, recognizing that judgments about appropriate treatment often depend on individual factors (e.g., values or quality of life). Preventive services have been in the vanguard of this movement. In 1984, the U.S. Preventive Services Task Force (USPSTF) was established to build on the work of the Canadian Task Force on the Periodic Health Examination (established almost a decade earlier). The general medical community began focusing on EBM in the 1990s, employing scientific methods to assess which diagnostic or therapeutic strategies would produce the best medical outcomes.

What is “evidence-based” public health practice?

Evidence-based public health practice is the development, implementation, and evaluation of effective programs and policies in public health through application of principles of scientific reasoning, including systematic uses of data and information systems and appropriate use of behavioral science theory and program planning models.1 Just as EBM seeks to combine individual clinical expertise with the best available scientific evidence,2 evidence-based public health draws on principles of good practice, integrating sound professional judgments with a body of appropriate, systematic research.3 There has been strong recognition in public health of the need to identify the evidence of effectiveness for different policies and programs, translate that evidence into recommendations, and increase the extent to which that evidence is used in public health practice.

As with clinical interventions, planning to address population-based health problems typically takes place within a context of limited resources. Decision-makers should invest in proven, cost-effective solutions. Evidence for the effectiveness of interventions—such as programs, practices, or policies—can be used to provide a rationale for choosing a particular course of action, or to justify the allocation of funding and other resources.

There is demand for evidence at many levels: practitioners use it for program planning and internal policies, local managers use it to make decisions about which programs to support, and senior managers within government and health care organizations use it to set priorities and make policy and funding decisions.4 An important recent example is the Patient Protection and Affordable Care Act (PPACA), which requires all new health care plans to cover services receiving an A or B recommendation from the USPSTF with no copayment.

Why focus on evidence-based decision making in health promotion and disease prevention practice?

The clinical literature documents the pitfalls of making treatment decisions in the absence of clear evidence. For example, in the late 1980’s, a promising treatment for breast cancer emerged, involving high-dose chemotherapy with autologous bone marrow transplantation (HDC/ABMT). The widespread and rapid dissemination of HDC/ABMT took place before the treatment had been carefully evaluated. Studies later found HDC/ABMT to be ineffective, but in the meantime more than 30,000 women had already received the treatment, dying earlier and suffering more than they otherwise would have done. Based on evidence that the procedure was harmful, HDC/ABMT is no longer used.5

The USPSTF has shown that many screening tests are not only unnecessary, but can actually do harm. In addition to psychological harms associated with false positive results or delayed treatment due to false negatives, in some cases screening tests themselves can be harmful (e.g., see USPSTF recommendations for abdominal aortic aneurysm screening in women).6

In public health practice, the potential harms of implementing unproven interventions may be less stark. Yet there are cases where evidence-based reviews have made a significant difference in terms of policy, funding, or programmatic decisions that affect public health. For example, the Community Guide conducted a review of the impact of 16 state laws that made it illegal to drive with a blood alcohol concentration (BAC) exceeding 0.08 percent. The review found that, following implementation of the law in these states, fatalities from alcohol-related motor vehicle crashes decreased by a mean of 7 percent. The evidence review was used to justify federal legislation that linked states' highway funding to their enactment of laws lowering the BAC limit to 0.08 percent.7

Another example of the value of using evidence to guide public health practice is worksite risk assessments. In isolation, there is insufficient evidence to show that Assessment of Health Risks with Feedback (AHRF) is effective. In combination with health education and other interventions, however, a systematic review found strong evidence that AHRF is effective in improving one or more health behaviors or conditions in populations of workers.8

Within the context of limited funding, investment in an ineffective intervention means a lost chance to invest in something that works to improve health and/or prevent disease or injury. The public health field has often approached decision making about interventions with the belief that “if it sounds good, it must be good.” To counteract this tendency, the Centers for Disease Control and Prevention (CDC) now commonly requires that applicants responding to Funding Opportunity Announcements use evidence-based interventions supported by credible sources or provide independent justification.

At the present time, a number of interventions are commonly used for which we do not yet have adequate evidence of effectiveness. The evidence base must be strengthened for a wide range of interventions, including school-based programs to promote nutrition and physical activity, state or community-wide promotion of sealants to reduce dental caries, and client or family incentives to increase demand for vaccination, all of which currently lack sufficient evidence to determine whether or not they are effective.9 Research has demonstrated that experience and logic are often poor guides to good choices. We must therefore take care to ensure that “practice-based research” is rigorous and not a generalization based on anecdotal evidence.

How is “evidence” defined and evaluated within a public health context?

Public health evidence can take many forms. Broadly, the evidence for social decision making can be divided into three categories.10 The first is scientific information that is independent of context. This is typified by assessment of the efficacy of specific technologies; it answers the question of whether an intervention can work at all, often on the basis of carefully controlled trials (for example, randomized trials of screening for abdominal aortic aneurysm).

The second type of evidence is social science evidence, which can be equally rigorous but is typically more obviously context-sensitive. An example is an educational worksite program, which may have many variants depending on the length and intensity of the program, the characteristics of the employees themselves, and perceived support from the employer. The Community Guide, for example, carefully examines whether an intervention works for specific populations and in specific sites (e.g., physician offices, schools, and worksites).

The last type of evidence is anecdotal information that is truly local, such as budget constraints or political considerations. In this discussion, we focus primarily on the first type of evidence about whether an intervention can work, and the potential health impact. Those who consider using an intervention should also look at evidence of effectiveness given their setting, resources, and target population.

Clinicians and policymakers often distinguish between efficacy (emphasis on internal validity) and effectiveness (emphasis on external and internal validity) of an intervention. Efficacy trials measure whether an intervention produces the expected result under ideal circumstances. Effectiveness trials measure the degree of beneficial effect under “real world” circumstances. Efficacy and effectiveness exist on a continuum.11 The data produced by these studies are valid to the extent that they measure what they are supposed to measure. Internal validity is the degree to which one can say with certainty that the intervention being studied is responsible for producing an effect. External validity is the degree to which one can generalize the study’s findings to other populations and circumstances.12

Tradeoffs between internal and external validity

Most reports that evaluate prediction models focus on the issue of internal validity and do not discuss the generalizability of findings (external validity).13 Due to the wide-scale adoption of the CONSORT reporting criteria for randomized clinical trials and related methodological quality rating scales (e.g., TREND) for non-randomized trials, there has been an increased focus on the methodological quality of research reports.14 Adoption of these criteria (e.g., randomization, double-blinding, and other controls over potential confounding factors) has led to enhanced internal validity and analytic reporting quality, but there has been a relative lack of emphasis on external validity.15

This can be problematic because conditions in efficacy studies are often so tightly controlled that communities and organizations have no way of knowing whether the intervention studied would work in their community, in the “real world.” Further, many public health interventions are not amenable to randomized trials. There will never be such a trial for requiring motorcycle helmets or raising alcohol taxes. Yet other analyses can provide a rigorous assessment that can justify a broad recommendation.

Healthy People 2020 must guide its users towards actions that the best available evidence suggests will be effective in accomplishing the objectives. This is no simple task because Healthy People addresses a broad range of public health issues, and the available evidence for how to make progress on these issues is uneven at best. Where evidence of the efficacy of a particular intervention does exist, it should have primacy. In situations where such information does not exist, there are strong evaluation designs that can be used to begin to fill gaps in the evidence, particularly with regard to external validity.

At the same time, it is critical to resist the temptation to make decisions on the basis of pragmatism rather than science. When selecting which interventions to recommend, context, need, and the appeal of interventions that “sound good” or are already widely implemented without a strong evidentiary base should not trump interventions that we know will work.

Systematic reviews and guideline development

The foundation of the EBM approach is the “systematic review,” epitomized for many by the Cochrane Collaboration meta-analyses. Systematic reviews summarize the results of available, carefully designed and executed studies. They provide an assessment of the quality of evidence and the effectiveness (net benefit or balance of benefits and harms) of health interventions.16

Systematic reviews of high-quality studies are important for assessing specific interventions. They are produced through a rigorous methodological process in which reviewers detail how studies were identified and selected, the extent to which the studies were useful for answering review questions, and how results of separate studies were combined to yield an overall measure of the benefits and harms of an intervention.17 Information from reviews is used to improve the quality of care (e.g., through evidence-based recommendations, quality improvement metrics and incentives, and clinical decision support tools). Appendix 2 presents established methods that can be used to determine the certainty of net benefit and the magnitude of effect of an intervention through systematic reviews.
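
To illustrate the kind of quantitative pooling a systematic review may perform, the following minimal sketch (in Python) combines hypothetical study results using fixed-effect inverse-variance weighting of log relative risks. It shows one common pooling method, not the specific procedure of any particular review body, and all numbers are invented.

    import math

    # Hypothetical (log relative risk, standard error) pairs for three studies
    studies = [(-0.22, 0.10), (-0.15, 0.08), (-0.30, 0.12)]

    # Fixed-effect inverse-variance weights
    weights = [1 / se ** 2 for _, se in studies]
    pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    low = math.exp(pooled_log_rr - 1.96 * pooled_se)
    high = math.exp(pooled_log_rr + 1.96 * pooled_se)
    print(f"Pooled relative risk: {math.exp(pooled_log_rr):.2f} (95% CI {low:.2f} to {high:.2f})")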

At a basic level, the effectiveness of health interventions can be judged by the extent to which they have reached their stated goals.18 However, systematic reviews have also looked at how studies are designed, implemented, and analyzed. Not all evidence is judged to be of equal value; there are hierarchies of research design that assign different levels of usefulness of findings for the decision-making process.19 It is important to be explicit about the types of evidence that should be used for clinical and public health decision making, and about how such evidence should be used. What constitutes systematic research about health interventions? What criteria should be used to evaluate whether evidence supports designating a practice as “effective”?

For diagnostic and treatment services as well as prevention and health behavior change interventions, there is a large and well-documented gap between interventions that have been proven to be effective, and the practices that are actually implemented.20 Barriers to the translation of proven interventions into practice often have to do with characteristics of the interventions themselves, the target settings, the research or evaluation design, or combinations of these factors.21

Promising approaches to help address this gap include providing guidance and training to practitioners on the subtleties of evidence-based practice, creating practice-based research networks and partnerships among transdisciplinary collaborators, engaging decision-makers and recipients in participatory research, and placing greater emphasis on ensuring that the results of studies can be generalized into real-world settings.22 Appendix 3 provides examples of how these issues are addressed by the U.S. Preventive Services Task Force and the Community Preventive Services Task Force.

Given the challenges outlined above, there is a movement within the literature to think more broadly about how evidence is derived.23 New, rigorous approaches for evaluating “best practices” and “model practices” in public health interventions are needed. Some have called for shifting the focus in public health away from “evidence-based practice” and toward the more relevant “practice-based evidence.”24, 25 One proponent of this view noted that, “as public health…strives to rise to the paradoxical challenge of evidence-based practice…the challenge is that most of the evidence is not very practice-based.”26 Below are a few key issues to be considered in developing broadened strategies for evaluating the evidence base for public health interventions.


Integrating the Existing Science into Implementation Processes

Systematic reviews, where they exist, are insufficient to inform all aspects or types of disease prevention and health promotion interventions. They can be informative about efficacy and average effect sizes under ideal circumstances in research settings (internal validity). They can also help to identify best practices and rule out approaches that are unlikely to be useful. An important limitation of systematic reviews, however, is that they depend on the amount and quality of evidence that has been generated on a particular topic. Where an intervention issue has not been studied or studied well, systematic reviews will be inconclusive. In such cases, guidance and decisions must be made on the evidence at hand combined with expert judgment about what is likely to work.

Principles for applying such judgments have been well addressed in several current guideline development processes to ensure that overall recommendations are not based entirely on the results of quality ratings of available studies.27 In addition, as explained earlier, the results of the studies included in systematic reviews, and the reviews themselves, may be less informative about effects under less than ideal circumstances or in non-research settings. They may not apply directly to populations that were not represented in the studies reviewed, and may provide no information on cost, feasibility, acceptability, or the ability to be taken to scale in actual practice settings. Additional research focused on these practical issues is needed to supplement the findings of systematic reviews of intervention effectiveness on specific health outcomes. In particular, cost considerations may become paramount as decision-makers attempt to select the most effective service package using the resources available.28

To address these issues as well as to guide decision-makers in the selection and implementation of interventions, a number of approaches have been developed. One of the best known is the RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) evaluation framework. It places equal emphasis on internal and external validity issues and provides guidance for evaluating an intervention’s potential for public health impact and widespread application. It provides a bridge for moving from best processes to best practices. The goal of RE-AIM is to encourage program planners, evaluators, readers of journal articles, funders, and policymakers to pay more attention to essential program elements, including external validity, that can improve the sustainable adoption and implementation of effective, generalizable, evidence-based interventions29 (see Exhibit 1).

Assessing external validity is a complex process that requires applied expertise. Because it also requires some subjective judgment, practitioners can make use of those dimensions of the framework that are most appropriate to their own needs. The criteria used to evaluate external validity are both quantitative and qualitative. The RE-AIM evaluation tool can form the basis for creating a simple hierarchy of evidence. RE-AIM ranks interventions in terms of their external validity using an approach that is similar to the one used to rank the strength of internal validity for efficacy studies.
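
As a purely illustrative sketch, the following Python fragment records the five RE-AIM dimensions for a hypothetical program and computes a reach-by-effectiveness summary. The field names, example values, and the combined summary are assumptions made for illustration; they are not a scoring rule prescribed by the RE-AIM framework itself.

    from dataclasses import dataclass

    @dataclass
    class ReAimProfile:
        reach: float           # proportion of the target population reached
        effectiveness: float   # average benefit among participants (illustrative units)
        adoption: float        # proportion of eligible settings adopting the program
        implementation: float  # proportion of the protocol delivered as intended
        maintenance: float     # proportion of settings still delivering it at follow-up

        def reach_by_effectiveness(self) -> float:
            # One simple combined summary, used here purely for illustration
            return self.reach * self.effectiveness

    program = ReAimProfile(reach=0.40, effectiveness=0.25, adoption=0.60,
                           implementation=0.80, maintenance=0.50)
    print(f"Illustrative reach-by-effectiveness summary: {program.reach_by_effectiveness():.2f}")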

To enhance awareness of and reporting on external validity, RE-AIM proposes adding the following seven steps to the CONSORT criteria. The RE-AIM Web site indicates that these steps would enhance not only the quality and information value of individual studies, but also that of evidence-based reviews and meta-analyses.30

  1. State the target population to which the study intends to generalize.
  2. Report the rate of exclusions, the participation rate among those eligible, and the representativeness of participants.
  3. Report on methods of recruiting study settings in the same manner as for individual participants, including exclusion rate, participation rate among those approached, and representativeness of settings studied.
  4. Describe the participation rate and characteristics of those delivering the intervention. State the population of intervention agents that one would see eventually implementing the program and how the study interventionists compare to eventual users of the intervention.
  5. Report the extent to which different components of the intervention are delivered (by different intervention agents) as intended in the protocol.
  6. Report specific amounts of time and/or costs required to deliver the intervention.
  7. Report on organizational level of continuance (or discontinuance) of the intervention once the trial is completed, as well as individual level maintenance of results.

What challenges exist in compiling evidence for public health practice?

  • The Multifaceted Nature of Population Health Approaches

Targeting, achieving, and measuring a shift in the health behaviors of a whole population is complex. Whole-population shifts require multidisciplinary and multi-sectoral strategies, with targets at various levels.31 Effective programs often require multiple types of interventions with synergistic components. Public health interventions are large-scale, long-term, and concerned with external validity or “real world” applicability.

High-quality practice- or community-based research studies are a potentially important source for new information on effectiveness that can provide insights not only into whether a specific intervention works, but also how to implement it in real-world practice. These studies can also provide badly needed information on important groups, such as the disabled or those with multiple chronic illnesses, who are often excluded from efficacy studies. Significant nuances about how interventions should be tailored to the context of different environments can also come to light in these studies.

  • Limitations of the Traditional Hierarchy of Evidence

While the clinical approaches that are addressed by EBM focus primarily on individuals, interventions in public health focus on populations. Randomized controlled trials (RCTs), which use an experimental design and control groups, are difficult to apply to population health interventions for a variety of reasons, including expense and practical challenges. The use of randomization to control for confounding variables is also less feasible in population-based interventions than in clinical trials.32 Other quantitative designs (e.g., time series or comparison groups), as well as observational designs and other types of qualitative studies, may be better suited to answering key questions about the effect and value of an intervention. Modeling can synthesize the best available information and facilitate comparison of different strategies. Similarly, surveillance data may provide better indicators of the success of multiple interventions than RCTs.33
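
The following minimal sketch (in Python, using only NumPy) illustrates one such quasi-experimental design: a segmented-regression, or interrupted time series, analysis of a hypothetical monthly outcome before and after a policy change. The data are invented, and a real analysis would also need to address trends, seasonality, and autocorrelation.

    import numpy as np

    # Hypothetical monthly counts of an outcome; a policy takes effect at month 12
    y = np.array([50, 52, 51, 53, 54, 52, 55, 54, 56, 55, 57, 56,
                  48, 47, 46, 47, 45, 44, 45, 43, 44, 42, 43, 41], dtype=float)
    t = np.arange(len(y), dtype=float)     # time in months
    post = (t >= 12).astype(float)         # indicator for the post-policy period
    t_after = post * (t - 12)              # months elapsed since the policy change

    # Columns: intercept, pre-existing trend, immediate level change, change in trend
    X = np.column_stack([np.ones_like(t), t, post, t_after])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    _, _, level_change, slope_change = coef
    print(f"Estimated immediate level change: {level_change:.1f}")
    print(f"Estimated change in monthly trend: {slope_change:.2f}")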

  • The Need for Other Contextual Information

Evidence-based public health practice should begin with interventions of known effectiveness and an understanding of the magnitude of impact. However, when evaluating the effectiveness of health promotion interventions, methodological rigor is not the only issue to be taken into account. Contextual factors such as community acceptance and involvement, integration, engagement in multiple dimensions of an intervention, and potential sustainability are important. Pragmatic considerations like community preferences, political and logistical feasibility, and budget constraints are also relevant.

Issues that are of practical importance should be considered when choosing among proven interventions, yet they do not by themselves provide an adequate rationale for choosing an attractive concept over one that has strong evidence of effectiveness. When novel approaches are implemented, they must be accompanied by strong evaluations. It remains unclear how much weight should be given to non-experimental factors when evaluating such interventions.34 The impact of some contextual factors can be assessed by studying key aspects of effectiveness, such as cost-effectiveness, while the effect of others can best be ascertained through deliberative processes.

  • Measurement of Outcomes

Health and intermediate outcomes of population-based interventions should be measured at all points and stages of program development and implementation, not only at endpoints. The effectiveness of public health interventions should be measured at multiple points in time (e.g., short, intermediate, and long term), as health outcomes may not become evident quickly.

  • Assessing Magnitude of Effect

While decision-makers want to know what works and where, they also need to know how large an impact can be anticipated. That information is sometimes available from systematic reviews of evidence, and from health impact assessments (HIAs). The Carter Center’s Closing the Gap project35 and, more recently, the National Commission on Prevention Priorities36 have demonstrated how the magnitude of impact for specific interventions can be determined and used as part of priority-setting processes.
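
As an illustration of how magnitude-of-impact information might feed a priority-setting process, the sketch below ranks hypothetical services by the sum of two scores, one for preventable burden and one for cost-effectiveness, in the general spirit of the National Commission on Prevention Priorities' approach. The service names and scores are placeholders, not results from that work.

    # Hypothetical services, each scored 1-5 on preventable burden and on
    # cost-effectiveness; higher totals suggest higher priority
    services = {
        "Service A": (5, 4),
        "Service B": (3, 5),
        "Service C": (4, 2),
    }

    ranked = sorted(services.items(), key=lambda item: sum(item[1]), reverse=True)
    for name, (burden, cost_effectiveness) in ranked:
        total = burden + cost_effectiveness
        print(f"{name}: burden={burden}, cost-effectiveness={cost_effectiveness}, total={total}")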

Comparative effectiveness research is the application of evidence-based principles to the understanding of how different interventions compare to each other. It also answers questions such as how the benefits and harms differ overall or in specific subpopulations or situations. The Agency for Healthcare Research and Quality (AHRQ) has used its network of Evidence-Based Practice Centers (EPCs) to develop the methodology for comparative effectiveness reviews and to conduct many studies dealing with aspects of the health care delivery system. The term is also applied to randomized trials which compare two or more active interventions, and occasionally to comparative economic evaluations.

The Institute of Medicine recognized the importance of population health in its list of national comparative effectiveness priorities.37 The paucity of comparative studies of population health interventions remains a major gap and provides a rich research and evaluation agenda that deserves to be made a high priority for funding.

Addressing the complexity and interdisciplinary nature of public health interventions

To capture the multidimensional nature of public health practice, some practitioners have recently called for a broader approach to assembling evidence. The need for such an approach arises when practitioners are faced with a new or rapidly increasing public health problem that lacks a strong evidentiary framework, as in the case of obesity. In this instance, the Institute of Medicine developed the L.E.A.D. Framework as an approach to identify, evaluate, and compile evidence—broadly defined—to inform decision making about obesity prevention and other complex public health issues, taking a systems perspective.

L.E.A.D. builds upon, but also expands and enhances, familiar concepts and principles of evidence utilization by providing a rationale for intervention and by helping to determine what intervention to undertake and how to implement a given intervention in a particular context. Answering these questions requires: 1) taking a comprehensive look at what can be done differently in relation to the use of evidence, 2) considering how one can actually apply this different approach, and 3) providing a clear justification for why doing things differently is both valid and necessary for compelling issues such as obesity.

Taking action in the face of evidence gaps and generating relevant evidence

The Committee has developed a set of criteria that can be used to categorize the effectiveness of interventions (see Appendix 5), as well as a set of examples that illustrate how they might be applied to make decisions about potential intervention strategies across the ecological spectrum (see Appendix 6).

Alternative approaches have been proposed by some researchers who suggest using well-designed evaluation studies as sources for evidence of effectiveness in public health interventions. However, the purpose of evaluation studies is different from that of studies of intervention effectiveness. Evaluation studies are designed to assess whether an intervention has achieved its goals in a specific population and setting; they do not seek to control for variation among units of comparison. Evaluations observe a proximal-distal chain of events, with different levels of outcome leading to different indicators.38

Changes in proximal indicators are more likely to be due to the direct impact of an intervention.39 For example, it was an RCT that demonstrated the effectiveness of screening for Chlamydia and led to a screening recommendation from the USPSTF.40 As screening became widely implemented in different settings, it was evaluated using the Healthcare Effectiveness Data and Information Set (HEDIS) in health plans and through evaluations in Sexually Transmitted Disease (STD) clinics. The L.E.A.D. Framework can be used as one source of guidance in this area.

Health impact assessment (HIA)—a tool for modeling health impact outside the health sector

Health impact assessments (HIAs)41 offer another tool for gathering the best available information to inform decisions that will impact health. HIAs use established methodologies and modeling techniques to provide an assessment of the likely health impact of initiatives, usually outside of the health sector. HIAs are a practical tool for building health considerations into policy decisions in other sectors, i.e., through a “health in all policies” approach.

HIA describes a variety of methodologies to assess the health impact of proposed programs, policies, or other activities. Most often, this set of approaches is used to estimate the likely overall and distributional health effects of these interventions in non-health sectors, such as education, transportation, fiscal and monetary policy, urban planning, energy, housing, commerce, and agriculture. HIA has great importance to collective efforts to improve population health because the actions in these sectors constitute important determinants of health. There is a rapidly growing body of literature on both methods for developing and grading evidence in HIAs as well as results of HIAs in the United States and other countries (see Resources below).
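
One modeling step commonly used in such assessments is estimating the share of cases attributable to an exposure via the population attributable fraction, PAF = p(RR - 1) / (1 + p(RR - 1)), where p is the exposure prevalence and RR the relative risk. The minimal Python sketch below applies this standard formula to hypothetical inputs; it is illustrative only and not a complete HIA method.

    def attributable_cases(prevalence: float, relative_risk: float, baseline_cases: float) -> float:
        """Cases attributable to an exposure under the stated (hypothetical) inputs."""
        excess = prevalence * (relative_risk - 1)
        paf = excess / (1 + excess)                 # population attributable fraction
        return paf * baseline_cases

    # Example: 30% of the population exposed, relative risk 1.5, 1,000 cases per year
    print(f"Estimated attributable cases: {attributable_cases(0.30, 1.5, 1000):.0f}")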

What other resources exist for identifying evidence-based and best practices in public health?

Several existing resources present evidence and knowledge for public health practice. Examples of useful resources are provided below.

  • The Guide to Community Preventive Services (the Community Guide)42

Now over a decade old, the Community Guide provides evidence reviews and recommendations for over 200 population health interventions, with frequent additions and modifications. (See the earlier discussion of the Community Guide, and also Appendix 3.)

  • Evidence-Based Practice for Public Health (EBPH)43

Provides online access to selected evidence-based public health practice resources, knowledge domains of public health, and public health journals and databases. The resources are arranged along a pathway of evidence so that public health practitioners can easily find and use the best evidence to develop and implement effective interventions, programs, and policies. It includes links to evidence-based guidelines, systematic reviews, filtered searches of publications, and best practices.

  • Cochrane Public Health Group44

A recent initiative that aims to undertake systematic reviews of upstream public health interventions. The demand for the Cochrane Public Health Group arose out of a call to review topics on the Cochrane Library that are outside the scope of existing Cochrane review groups, which focus on interventions in medicine.45 On March 6, 2008, the Cochrane Public Health Group launched its editorial and methods meetings.46 Meeting topics and presentations included:

  • Study designs for including in public health reviews
  • Study searching for public health reviews
  • Context and process evaluations on public health reviews
  • Assessing quality of studies for public health reviews
  • Including economic evaluations in public health reviews
  • Approaches to the synthesis of heterogeneous evidence

The Cochrane Public Health Group has developed a Health Promotion and Public Health Systematic Review Handbook that guides reviewers through the process of completing a systematic review that measures the effectiveness of certain public health interventions.47

  • National Association of County and City Health Officials (NACCHO) Database of Model Practices in Local Public Health Agencies

A database of model practices that were identified through the application of several key criteria. Included practices are considered to be exemplary and replicable.48, 49

  • Promising Practices Network

This resource provides summary information for programs and practices that have been proven to have a positive effect on outcomes for children and youth. Programs are rated as either “Proven” or “Promising,” based upon several considerations, including the types of outcomes affected.50

  • Health Impact Assessment: Information and Insight for Policy51

This resource provides health impact assessment (HIA) reports as well as information about conducting and using HIA and links to related sites.

  • Health Impact Assessment Clearinghouse, Learning, and Information Center52

A resource that has been developed by the University of California at Los Angeles to collect and disseminate information on HIA in the United States.

  • The Center of Excellence for Training and Research Translation (C-TRT)53

The University of North Carolina at Chapel Hill’s Center of Excellence for Training and Research Translation seeks to enhance the public health impact of the WISEWOMAN program and the Nutrition and Physical Activity Program to Prevent Obesity and Other Chronic Diseases (Obesity Prevention program) through training and intervention translation initiatives that extend their reach, improve their effectiveness, strengthen their adoption in real-world settings, improve the quality of their operations, and sustain their efforts over time.54


References

1Brownson RC, Baker EA, Leet T, Gillespie KN, eds. Evidence-based public health. New York: Oxford University Press; 2003. Public Health and Information Tutorial. http://phpartners.org/tutorial/04-ebph/2-keyConcepts/4.2.2.html. Accessed December 2, 2008.

2Sackett DL, Rosenberg WMC, Gray JAM et al. Evidence-based medicine: what it is and what it isn’t. Br Med J 1996;312:71-72.

3Green J, Tones K. Towards a secure evidence base for health promotion. J Public Health Med 1999;21(2):133-139.

4Jackson SF, Edwards RK, Kahan B, Goodstadt M. An assessment of the methods and concepts used to synthesize the evidence of effectiveness in health promotion: a review of 17 initiatives. Canadian Consortium for Health Promotion Research. http://www.utoronto.ca/chp/CCHPR/synthesisfinalreport.pdf. Accessed December 9, 2008.

5Rettig RA, Jacobson PD, Farquhar CM, Aubry WM. False hope: bone marrow transplantation for breast cancer. New York: Oxford University Press; 2007.

6U.S. Preventive Services Task Force. Screening for abdominal aortic aneurysm: recommendation statement. http://www.ahrq.gov/clinic/uspstf05/aaascr/aaars.htm. Accessed August 5, 2010.

7The Community Guide. From research to policy: lessons from a Community Guide review on alcohol-impaired driving laws. http://thecommunityguide.org/news/2010/BACcase.html. Accessed August 5, 2010.

8The Community Guide. Assessment of health risks with feedback to change employees’ health. http://www.thecommunityguide.org/worksite/ahrf.html. Accessed August 5, 2010.

9The Community Guide. http://www.thecommunityguide.org/pa/index.html. Accessed August 5, 2010.

10Lomas J, Culyer T, McCutcheon C et al. Conceptualizing and combining evidence for health system guidance. Ottawa, Ontario: Canadian Health Services Research Foundation; 2005.

11Gartlehner G, Hansen R, Nissman D. A simple and valid tool distinguished efficacy from effectiveness studies. J Clin Epidem 2006;59:1040-1048.

12Ibid.

13Bleeker SE, Moll HA, Steyerberg EW et al. External validation is necessary in prediction research: a clinical example. J Clin Epidem 2003;56:826-832.

14Glasgow R, Green L, Klesges L et al. External validity: we need to do more. Ann Behav Med 2006;31(2):105-108.

15Ibid.

16The Cochrane Collaboration. http://www.cochrane.org/consumers/sysrev.htm. Accessed December 8, 2008.

17Ibid.

18Green J, Tones K. 1999.

19Partners in Information Access for the Public Health Workforce. Public health and information tutorial. http://phpartners.org/tutorial/04-ebph/2-keyConcepts/4.2.2.html. Accessed December 2, 2008.

20Glasgow R, Emmons K. How can we increase translation of research into practice? Types of evidence needed. Ann Rev Publ Health 2007;28:413-433.

21Ibid.

22Glasgow R, Green L, Klesges L et al. 2006.

23McNeil D, Flynn M. Methods of defining best practice for population health approaches with obesity prevention as an example. P Nutr Soc 2006;65:403–411.

24Ibid.

25Marmot MG. Evidence based policy or policy based evidence? Br Med J 2004;328:906-907.

26Green LW. Public health asks of systems science: to advance our evidence-based practice, can you help us get more practice-based evidence? Am J Public Health 2006;96(3):406-409.

27Kumanyika, Abrams et al. Bridging the evidence gap in obesity prevention: a framework to inform decision-making. Report Brief, April 2010. Institute of Medicine of the National Academies. http://www.iom.edu/~/media/Files/Report%20Files/2010/Bridging-the-Evidence-Gap-in-Obesity-Prevention/Bridging%20the%20Evidence%20Gap%202010%20%20Report%20Brief.ashx. Accessed July 14, 2010.

28Maciosek MV, Coffield AB, Edwards NM, Flottemesch TJ, Goodman MJ, Solberg L. Priorities among effective clinical preventive services: results of a systematic review and analysis. Am J Prev Med 2006;31(1):52-61. http://www.rwjf.org/publichealth/product.jsp?id=15571. Accessed July 19, 2010.

29RE-AIM. http://re-aim.org/default.aspx. Accessed May 21, 2010.

30RE-AIM. Combining the CONSORT statement and aspects of the RE-AIM framework. http://www.re-aim.org/tools/checklists/consort-and-re-aim.aspx. Accessed May 21, 2010.

31McNeil D, Flynn M. 2006.

32Ibid.

33Heller RF, Page J. A population perspective to evidence based medicine: evidence for population health. J Epidemiol Commun H 2001;56:45-47.

34Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Commun H 2003;57:527-529.

35Foege WH, Amler RW, White CC. Closing the gap: report of the Carter Center Health Policy Consultation. JAMA 1985;254(10):1355-1358.

36National Commission on Prevention Priorities. Preventive care: a national profile on use, disparities, and health benefits. 2007. http://www.rwjf.org/pr/product.jsp?id=19896. Accessed July 13, 2009.

37Institute of Medicine. Initial national priorities for comparative effectiveness research. Washington, DC: National Academy Press; 2009.

38Murray CJ, Ezzati M, Lopez AD, Rodgers A, Vander Hoorn S. Comparative quantification of health risks: conceptual framework and methodological issues. Pop Health Metrics 2003;1(1).

39Green J, Tones K. 1999.

40U.S. Preventive Services Task Force. Screening for Chlamydial Infection: U.S. Preventive Services Task Force Recommendation. Ann Intern Med 2007; 147:128-134. http://www.ahrq.gov/clinic/uspstf07/chlamydia/chlamydiars.pdf. Accessed July 26, 2010.

41Fielding J, Briss P. Promoting evidence-based public health policy: can we have better evidence and more action? Health Affair 2006;25(4):969-978.

42National Center for Health Marketing, Centers for Disease Control and Prevention. The guide to community preventive services. http://www.thecommunityguide.org/index.html. Accessed July 13, 2009.

43University of Massachusetts Medical School. Evidence-based practice for public health. http://library.umassmed.edu/ebpph/. Accessed December 2, 2008.

44Cochrane Public Health Group. http://www.ph.cochrane.org/en/. Accessed December 2, 2008.

45Cochrane Public Health Group. Proposed registration of a Cochrane-Campbell Public Health Review Group. August 12, 2007. http://www.cochrane.org/about-us/history. Accessed December 2, 2008.

46Cochrane Public Health Group. Workshop and events. http://www.ph.cochrane.org/en/events.html. Accessed December 2, 2008.

47Cochrane Public Health Group. Resources for review authors. http://www.ph.cochrane.org/en/authors.html. Accessed December 2, 2008.

48National Association of County and City Health Officials. NACCHO model practice database. http://www.naccho.org/topics/modelpractices/database/. Accessed December 2, 2008.

49Green et al. Share what works: model practices in local public health agencies. News from NACCHO. J Public Health Man 2004;10(2):180-182.

50Promising Practices Network. How programs are considered. http://www.promisingpractices.net/criteria.asp. Accessed December 2, 2008.

51Health Impact Assessment. HIA Policy Reports. http://www.ph.ucla.edu/hs/health-impact/reports.htm. Accessed August 5, 2010.

52UCLA Health Impact Assessment Clearinghouse and Information Center. http://www.ph.ucla.edu/hs/hiaclic/index.htm. Accessed August 5, 2010.

53Center of Excellence for Training and Research Translation. http://www.center-trt.org/. Accessed July 11, 2010.

54Ibid.

55Harris RP, Helfand M, Woolf SH, Lohr KN, Mulrow CD, Teutsch SM, Atkins D, for the Methods Work Group, third U.S. Preventive Services Task Force. Current methods of the U.S. Preventive Services Task Force: a review of the process. Am J Prev Med 2001;20(3S):21-35.

56Ibid.

57Petticrew M, Roberts H, 2003.

58Briss PA, Zaza S, Pappaioanou M et al. Developing an evidence-based guide to community preventive services—methods. Am J Prev Med 2000;18(1S):35-43.
