
DEMONSTRATING YOUR PROGRAM'S WORTH

A Primer on Evaluation for Programs to Prevent Unintentional Injury

 

 

Nancy J. Thompson, PhD

Helen O. McClintock

National Center for Injury Prevention and Control
Atlanta, Georgia
1998
Second Printing (with revisions), March 2000

 


 

Demonstrating Your Program’s Worth is a publication of the National Center for Injury Prevention and Control, Centers for Disease Control and Prevention:

Centers for Disease Control and Prevention

Jeffrey P. Koplan, MD, MPH, Director

National Center for Injury Prevention and Control

Stephen B. Thacker, MD, MSc, Director

Division of Unintentional Injury Prevention

Christine M. Branche, PhD, Director

Production services were provided by the staff of the Office of Health Communication Resources, National Center for Injury Prevention and Control:

Graphic Design
Marilyn L. Kirk

Cover Design
Beverly Charday
Mary Ann Braun

Text Layout and Design
Sandra S. Emrich

Suggested Citation: Thompson NJ, McClintock HO. Demonstrating Your Program's Worth: A Primer on Evaluation for Programs to Prevent Unintentional Injury. Atlanta: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control, 1998.

 


PURPOSE

We wrote this book to show program managers how to demonstrate the value of their work to the public, to their peers, to funding agencies, and to the people they serve.
In other words, we’re talking about how to evaluate programs—a scary proposition for some managers. Our purpose is to reduce the scare factor and to show that managers and staff need not be apprehensive about what evaluation will cost or what it will show.

Remember that there are two ways an injury prevention program can be successful. The obvious way is by reducing injuries and injury-related deaths. The other is by showing that a particular intervention does not work. A program is truly worthwhile if it implements a promising intervention and, through evaluation, shows that the intervention does not reduce injuries. Such a result would be of great value to the injury prevention community: it would save you and other programs from wasting further resources and time on that particular intervention.

In this book, we show why evaluation is worth the resources and effort involved. We also show how to conduct simple evaluation, how to hire and supervise consultants for complex evaluation, and how to incorporate evaluation activities into the activities of the injury prevention program itself. By learning to merge evaluation and program activities, managers will find that evaluation does not take as much time, effort, or money as they expected.

 


ACKNOWLEDGMENTS

We acknowledge and appreciate the contributions of several colleagues: Dr. Suzanne Smith saw the need for this primer and began the project; Drs. Terry Chorba and David Sleet enumerated the various types of injury programs (page 73); and Dr. Jeffrey Sacks, Dr. Katherine Miner, Dr. David Sleet, Ms. Susan Hardman, and Mr. Roger Trent reviewed the content and provided invaluable suggestions.

 


INTRODUCTION

All too often public health programs do wonderful work that is not properly recognized by the public, by other health care professionals, or even by the people who benefit directly from the program’s accomplishments. Why should this be? In most cases, it is because program managers and staff strongly believe that their work is producing the desired results but have no solid evidence to demonstrate their success to people outside their program. In other words, such programs are missing one key component: evaluation.

Unfortunately, without objective evaluation, program managers and staff cannot show that their work is having a beneficial effect, and other public health programs cannot learn from their success.

In addition, without adequate evaluation, programs cannot publish the results of their work in medical, scientific, or public health journals, and they cannot show funding agencies that their work is successful. Obviously, programs that can produce facts and figures to prove their success are more likely to publish the results of their work and more likely to receive continued funding than are programs that cannot produce such proof.

And here is another important point about evaluation. It should begin while the program is under development, not after the program is complete. Indeed, evaluation is an ongoing process that begins as soon as someone has the idea for a program; it continues throughout the life of the program; and it ends with a final assessment of how well the program met its goals.

Why must evaluation begin so early? Suppose, for example, that you set up a program to provide free smoke detectors to households of low socioeconomic status. You put flyers in the mailboxes of the people you want to reach, inviting them to come by your location for a free detector. Many people respond, but not as many as you expected. Why?

To find out, you evaluate. Perhaps you learn that your location is not on a bus line and many people in your target population do not own cars. Or, perhaps, the language in your flyer is too complex to be easily understood by the people you want to read it. So you rewrite your flyer or move your location. Would it have been better to test the language in the flyer for readability and to assess the convenience of your location before beginning the program? Yes. It would have saved time and money—not to mention frustration for the program staff.

So, the moral is this: evaluate, and evaluate early. The earlier evaluation begins, the fewer mistakes are made; the fewer mistakes made, the greater the likelihood of success. In fact, for an injury prevention program to truly show success, evaluation must be an integral part of its design and operation: evaluation activities must interweave with—and sometimes merge into—program activities. If a program is well designed and well run, evaluating the final results can be a straightforward task of analyzing information gathered while the program was in operation. In all likelihood, the results of such an analysis will be extremely useful, not only to your own program but to researchers and to other injury prevention programs.

To help program managers avoid difficulty with evaluation, we produced this primer. Its purpose is to help injury prevention programs understand 1) why evaluation is worth the resources and effort involved, 2) how evaluation is conducted, and 3) how to incorporate evaluation into programs to prevent unintentional injuries. This primer can also help program managers conduct simple evaluation, guide them in how to hire consultants for more complex evaluation, and allow them to oversee the work of those consultants in an informed way.

Since we want to practice what we preach, we ask that you help us with our evaluation of this book. We encourage you to give us your opinion. Is this book useful? If so, how have you found it useful? Are all sections clear? If not, which sections are unclear? Is the book’s organization easy to follow? If not, where have you had difficulty? Should we add more details? If so, on which topics? We are interested in any comments or suggestions you might have to improve this book and make it more useful. Your close involvement with the people that you and CDC want to serve makes your feedback invaluable.

On page 125 is a form you can use to send us your comments. We look forward to hearing from you.

Please also visit our web site for more information about injury control and prevention and to order a variety of free publications: www.cdc.gov/ncipc/pub-res/pubs.htm


 

HOW THIS PRIMER IS ORGANIZED

This book is designed to help program staff understand the processes involved in planning, designing, and implementing evaluation of programs to prevent unintentional injuries.

Section 1 has general background information explaining why evaluation is important, what components go into good evaluation, who should conduct evaluation, and what type of information evaluation will provide.

In Section 2, we describe each of the four stages of evaluation: formative, process, impact, and outcome. In particular, we discuss the appropriate time to conduct each stage and the most suitable methods to use. "Evaluation at a Glance" (page 23) is a quick reference that helps programs decide when to conduct each stage of evaluation, describes what kind of information each stage of evaluation will produce, and explains why such information is useful. For further help in deciding which stage of evaluation is appropriate for your program, we guide you through a set of questions (page 24). Your answers will tell you which stage is the right one for your program’s situation.

Section 3 is devoted to the methods for conducting evaluation. We provide enough information to enable you to conduct simple evaluation. The section's primary purpose, however, is to enable you to communicate with, hire, and supervise evaluation consultants.

Appendix A contains sample questions for interviews, focus groups, and questionnaires. It also contains sample events to observe and items to count at certain stages of evaluation. These examples can be adapted for use in evaluating any program to prevent unintentional injury.

Appendix B contains sample forms to help keep track of contacts that the program makes with the target population, items received from the target population, and items dispensed during a product distribution program.

Appendix C is a checklist of tasks that all programs to prevent unintentional injury can follow to make sure they do not omit any evaluation step during program design, development, and implementation.

Appendix D contains a bibliography of sources for further information about various aspects of evaluation.

Appendix E is a glossary of terms used in this primer.

On page 125 is a form that you can use to send us comments about this book.

 

 
