U.S. Department of Health and Human Services
Indian Health Service: The Federal Health Program for American Indians and Alaska Natives

Evaluation

The measures used in the IPC program are designed primarily to help sites test the degree to which their ideas for change result in the predicted improvement. Usually, this is done through a series of small tests of change called Plan-Do-Study-Act (PDSA) cycles. By sequentially testing ideas for change under different circumstances, sites build knowledge about which changes result in the most improvement.
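The PDSA cycle described above can be sketched as a simple decision loop. The sketch below is purely illustrative, not part of the IPC program; the function names, thresholds, and the adopt/adapt/abandon rule are hypothetical assumptions chosen to show the Plan-Do-Study-Act structure.

```python
# Illustrative sketch of one Plan-Do-Study-Act cycle.
# All names and thresholds here are hypothetical, not from the IPC program.

from dataclasses import dataclass

@dataclass
class ChangeIdea:
    description: str
    predicted_improvement: float  # Plan: the predicted change in a measure


def pdsa_cycle(idea, baseline, run_test):
    """Run one PDSA cycle for a single change idea.

    Plan:  the prediction is stated in idea.predicted_improvement.
    Do:    run_test carries out the small test and returns the measured value.
    Study: compare the observed change with the prediction.
    Act:   adopt, adapt, or abandon the idea based on the result.
    """
    observed = run_test(idea)          # Do
    change = observed - baseline       # Study
    if change >= idea.predicted_improvement:
        return "adopt"                 # Act: the change met its prediction
    elif change > 0:
        return "adapt"                 # partial improvement: revise and retest
    return "abandon"                   # no improvement under these conditions


# Sequential small tests under different circumstances build knowledge
# about which changes result in the most improvement.
idea = ChangeIdea("same-day scheduling slots", predicted_improvement=0.05)
decision = pdsa_cycle(idea, baseline=0.60, run_test=lambda i: 0.68)
print(decision)  # -> adopt (observed change 0.08 meets the 0.05 prediction)
```

In practice each cycle would be repeated at different times and sites, with "adapt" feeding a revised idea into the next cycle.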

This type of measurement evaluates ideas for change. It does not evaluate whole programs and their system-wide effects over long periods of time. Nor does this approach answer questions about the merit, worth, and significance of the program for the whole system. It does not address long-term health outcomes, costs, and satisfaction of patients and employees. To answer these types of questions, the IPC Evaluation Team, a working group of technical experts from throughout the Indian health system, was convened in 2008. Evaluation efforts can be divided into two phases: the evaluation of IPC 2 and the evaluation of IPC 3.

Evaluation of IPC 2

After convening a meeting of experts in the evaluation of quality improvement in 2008, the Evaluation Team designed a least-cost evaluation strategy using existing data and qualitative inquiry. The aims of the evaluation were:

  1. to determine to what extent the Improving Patient Care program was associated with an improvement in quality and efficiency of care;
  2. to identify characteristics of context and implementation that are associated with the most successful sites; and
  3. to collect stakeholders’ perceptions of facilitators and barriers to IPC implementation and to assess the effect of IPC on staff satisfaction.

Aims 1 and 2 are primarily addressed through the analysis of existing data sources, which include:

  • secondary analysis of IPC measures reported by sites in the form of a meta-analysis;
  • Government Performance and Results Act data at the level of the GPRA reporting unit;
  • an analysis of the impact of IPC sites on emergency room and urgent care visits that do not result in hospitalization, transfer, or death, utilizing data from the National Data Warehouse; and
  • an analysis of hospitalizations reported to the National Data Warehouse deemed avoidable in the presence of quality primary care (Prevention Quality Indicators from the Agency for Healthcare Research and Quality).

Quantitative data analyses continue as data for the follow-up periods become available. The IPC 2 sites that have continued in the Learning Network can be followed into the future. Aim 3 items were addressed through qualitative interviews of staff and managers at selected IPC sites during 2010. This analysis has been completed.

Evaluation of IPC 3

As the IPC program has expanded to IPC 3, a more extensive evaluation is required to meet the additional information needs of stakeholders and decision makers. With encouragement from IHS headquarters, the Evaluation Team chose to plan the evaluation of IPC 3 using the Framework for Program Evaluation in Public Health devised by the Centers for Disease Control and Prevention (CDC). The CDC framework draws from evaluation in many fields, from health care to education to criminal justice. The framework-inspired steps for developing and implementing a quality evaluation include:

  • engaging stakeholders,
  • describing the program to be evaluated (with a logic model),
  • focusing the evaluation design,
  • gathering credible quantitative and qualitative evidence,
  • justifying conclusions,
  • ensuring the use of findings, and
  • sharing lessons learned.

In this approach, a committee of stakeholders who will make decisions about the program’s future, or who are affected by the program (staff, patients, community leaders), will work together with the evaluation team to plan the evaluation and to interpret and communicate the results. Drawing from a preliminary logic model, the stakeholder committee will decide on the priority questions the evaluation will answer. The qualitative and quantitative methods for the evaluation will then be based on the priority evaluation questions and the relevant measures suggested by the logic model. Relevant measures will be selected from:

  • activities (e.g., diffusion of changes throughout the system);
  • outputs (e.g., improvement in work flow);
  • short-term outcomes expected within 1 year (e.g., improvement in same-day access to primary care);
  • intermediate outcomes expected within 1 to 3 years (e.g., reduction in emergency room and urgent care visits); and
  • long-term outcomes expected within 4 to 5 years (e.g., reduction in avoidable hospitalizations and operating costs).
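The measure tiers above can be viewed as a small data structure keyed by expected time horizon. The sketch below is a hypothetical illustration (the variable and function names are assumptions, not IPC artifacts); the example measures mirror those listed in the text.

```python
# Hypothetical sketch of the logic-model measure tiers described above.
# Each entry: (tier name, horizon in years by which outcomes are expected,
# example measures quoted from the text).

LOGIC_MODEL_TIERS = [
    ("activities",            0, ["diffusion of changes throughout the system"]),
    ("outputs",               0, ["improvement in work flow"]),
    ("short-term outcomes",   1, ["improvement in same-day access to primary care"]),
    ("intermediate outcomes", 3, ["reduction in emergency room and urgent care visits"]),
    ("long-term outcomes",    5, ["reduction in avoidable hospitalizations and operating costs"]),
]


def measures_expected_by(years):
    """Return the example measures whose outcomes are expected within `years`."""
    return [measure
            for _tier, horizon, measures in LOGIC_MODEL_TIERS
            if horizon <= years
            for measure in measures]


# Within the first year, activities, outputs, and short-term outcomes apply.
print(measures_expected_by(1))
```

A structure like this makes explicit which measures an evaluation at a given follow-up point can reasonably be expected to observe.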

Within the limits of the budget, the evaluation of IPC 3 will be open to quantitative and qualitative methods, and will collect original data from patients and charts. A variety of methodologies will be employed from health services research, evaluation in the social sciences, applied economics, applied anthropology, and epidemiology.

Structure of the Evaluation Effort

The evaluation effort is accountable to the Office of Clinical and Preventive Services (OCPS). This effort is being planned and partially executed by the Evaluation Team and external academic and consulting partners coordinated by representatives from OCPS.

To help plan the evaluation effort and to interpret and communicate the findings, an evaluation stakeholder committee has been convened. Evaluation stakeholders represent those who will use the findings of the evaluation to make decisions about the program’s future and those affected by the evaluation, namely staff, patients, and tribal leaders.

An evaluation liaison will be selected from each IPC team; the Evaluation Team will interface with IPC sites through these liaisons to collect additional information, procure permissions, and communicate findings. Academic and consulting partners will continue to be important players in the evaluation effort. Whether brought into the effort by funding mechanisms (such as contracts, cooperative agreements, and grants) or as volunteer advisors, external partners will serve on the Evaluation Team.
