Evaluation of the Use of AHRQ and Other Quality Indicators

Chapter 5. Findings from the Case Studies

In this chapter, we report the results of two case studies, one in the area of Boston, Massachusetts, and the other in Texas, with an emphasis on the Dallas-Fort Worth area. The two market areas were selected to represent one market with a long history of using the AHRQ QIs for public reporting (Dallas-Fort Worth) and one market in which public reporting, pay-for-performance, and tiered insurance products using the QIs have been more recently implemented (Boston). For each case study, we provide a discussion of the impact of public reporting, technical lessons learned, political lessons learned, and implications.

5.1. Boston

5.1.1. Boston Market Area

The Boston health care market is distinctive, marked by a large number of well-known academic medical centers, higher-than-average health care costs,38 and a large number of practicing physicians and other health care workers.39 Research and training of physicians and nurses are prevalent in the Boston area.

The Boston health insurance market is dominated by three players: Blue Cross Blue Shield of Massachusetts (BCBSMA), Harvard Pilgrim Health Care, and Tufts Health Plan, with BCBSMA experiencing significant membership growth in recent years.40 Providers are also rather concentrated, with the largest organization being Partners Health Care, which includes two prominent academic medical centers—Massachusetts General Hospital and Brigham and Women's Hospital.

Quality improvement has a long history among Boston-area providers. Indeed, a great deal of the research into quality improvement and quality measurement has been conducted by Boston-area researchers, and prominent quality improvement organizations, such as the Institute for Healthcare Improvement, are located in the area.

A 2005 report by the Center for Studying Health System Change (HSC) found that public reporting and pay-for-performance activities were leading providers to view their performance on quality and efficiency metrics as "a necessary competitive strategy" in addition to the "mission-driven efforts" that had guided long-standing quality improvement efforts.41 Many of these recent activities incorporate the AHRQ QIs.

5.1.2. Background on Use of Quality Indicators

Although quality measurement and quality improvement activities have a long history in the Boston area, the AHRQ QIs have recently been used for several new purposes, including pay-for-performance, public reporting, and tiered insurance productsj based on quality indicators. These new activities have been met with stiff resistance from some members of the Boston provider community.

In 2000, the State Department of Health and Human Services (DHHS) began including the HCUP indicators, the predecessor to the AHRQ QIs, in the performance report it distributed to hospitals. This was done in cooperation with the Massachusetts Hospital Association (MHA) and was not intended for public release. However, even at this early stage some participants "saw the writing on the wall" that public reports would eventually be forthcoming and opposed use of the AHRQ indicators in these reports.

In 2005, DHHS started publishing selected AHRQ IQIsk on the internet, along with indicators from the Hospital Quality Alliance, Leapfrog, and the Massachusetts Data Analysis Center (Mass-DAC), a Harvard-based state initiative on the measurement of the quality of cardiac care using clinical data. The decision to publish this information on the Web was prompted by impending state legislation that would require public reporting of hospital quality data. The legislation was part of a more widespread movement in Massachusetts towards greater transparency of provider performance data.

In 2003, BCBSMA began reporting selected measures from the AHRQ QIs to hospitals in a pay-for-performance program called the Hospital Quality Improvement Program, now known as the Hospital Performance Improvement Program.l The first payments to hospitals based on their AHRQ QI results occurred in 2006. Ten JCAHO Core Measures and one of the Leapfrog Leaps (computerized physician order entry) are also reported to hospitals as "advisory measures," but are not tied to incentive payments.

A third major development in quality reporting in the Boston area was spurred by the Massachusetts Group Insurance Commission (GIC)—the organization that manages benefits for state employees. The GIC asked participating health insurers to develop insurance products that tiered provider copayments based on efficiency alone, or efficiency in conjunction with quality. These plans were implemented in 2006. Most of the early focus has been on physician efficiency measurement using Ingenix Episode Treatment Groups (ETGs), but one plan (Tufts Navigator) started tiering hospitals in 2004 using selected AHRQ QIs as well as JCAHO, Leapfrog, and other measures. More health plans, such as Harvard Pilgrim Health Care, may also develop methods of tiering hospitals in the future.m

5.1.3. Impact

Since many of the quality initiatives involving the AHRQ QIs began only in 2005, interviewees told us that it was "too early" to conduct formal assessments of their impact. There was some scattered anecdotal evidence, however, that the initiatives were spurring quality improvements. 

In its pay-for-performance program, BCBSMA works with hospitals to help them explore what may be driving their performance on the AHRQ QIs. We were told that in this capacity, they have observed some processes hospitals have put into place that have improved quality, including, for example, standardized order sets for pneumonia, heart attack, and heart failure care; routine risk assessment and prophylaxis against blood clots; common definitions and documentation standards for obstetrical trauma; and early ambulation for postoperative patients to reduce the risk of pneumonia.

The introduction of rapid response teams as part of the IHI 100,000 Lives Campaign is changing the way hospitals evaluate and intervene with patients who are becoming unstable. As hospitals implement rapid response teams, the expectation is that, over time, they will see a reduction in mortality. Additionally, hospitals are increasing the amount and sophistication of their own performance measurement and tracking of intermediate processes and outcomes as part of their improvement processes. With these early interventions, BCBSMA representatives related that they are beginning to see significant improvements in several of the AHRQ QI measure results across their provider network.

A hospital system representative had similar anecdotal evidence of quality improvement spurred by the AHRQ QIs in the system's six acute care hospitals. Although the hospital system opposes the use of AHRQ QIs for pay-for-performance and public reporting, the BCBSMA and DHHS activities have spurred it to begin studying its results and investigating the underlying causes. In areas where a problem is flagged by the AHRQ QIs, a medical chart audit is performed. As a result, we were told that improvements have been noted in areas including iatrogenic complications, infections during medical care, sepsis following surgery, and pressure ulcers.

We were consistently told that improvements in the coding of administrative data have preceded and accompanied quality improvements. In the first stages of the new quality assessment activities, most of the problems flagged were data coding issues. More recently, hospitals have been implementing real improvements in the quality of care. Since the reputation and income of the hospitals are on the line, the activities have succeeded in focusing the attention of hospital administrators on quality improvement.

There has also been some evidence that hospitals have been collaborating and sharing their experiences on how to improve quality. As the initiatives mature and hospitals become more familiar with the indicators, more of this type of collaboration may occur.

None of the initiatives we learned about in the case study are being subjected to formal, rigorous evaluations. However, the available anecdotal evidence suggests that AHRQ QIs have had a direct impact on the quality of patient care in the Boston area. It is difficult to judge, however, how much of this impact is due to use of the AHRQ QIs for public reporting, pay-for-performance, and tiered insurance products vis-à-vis quality improvement activities.

This is a contentious question since many Boston-area hospital representatives argue that the AHRQ QIs are appropriately used only as part of quality improvement. However, it is uncertain how many hospitals would be using the AHRQ QIs for quality improvement if not for the incentives provided by public reporting, pay-for-performance, and tiered insurance products, especially given hospitals' other quality measurement and improvement activities.

5.1.4. Political Lessons Learned

The experience in the Boston area provides a good case study of the politics of quality reporting in a contentious environment. Boston-area hospital representatives, many very knowledgeable in the science of quality measurement, have strongly denounced the use of the AHRQ QIs for any purpose other than confidential research and quality improvement activities. They point out that for several years, AHRQ recommended using the QIs as a "screen", not for purchasing decisions, and only cautiously for public reporting. In contrast, payers and purchasers have promoted the use of the AHRQ QIs for these other purposes as one of the only available, feasible, and scientifically sound options (the other alternatives cited were the HQA, JCAHO, and Leapfrog measures). They note that AHRQ's current published guidance endorses non-quality improvement uses of the QIs.

The debate over appropriate use of the AHRQ QIs was summarized by a Boston-area hospital representative, who said that proponents of public reporting, pay-for-performance, and tiering using the AHRQ QIs "think that anything is better than nothing. We think that nothing can be better than bad." Payers, purchasers, and providers admit that the AHRQ QIs have some limitations, almost all of which stem from the fact that the AHRQ QIs are based on administrative data that were not collected for the primary purpose of quality measurement. For this reason, the data often contain clinically relevant omissions and errors that limit the validity of the quality indicators.

On the other hand, use of administrative data-based indicators can lead to improvement in the quality of the underlying data. Furthermore, in the absence of electronic medical records, administrative data are the only affordable choice for quality measurement, given the cost of abstracting clinical data from medical records. The exception is the JCAHO Core Measures (and the closely related HQA Hospital Quality Indicators), the measurement of which is mandatory for accreditation and therefore already part of hospitals' costs. However, the JCAHO/HQA indicators (as well as the Leapfrog quality indicators, the other main alternative) are not considered by Boston-area purchasers and payers to be sufficient on their own for public reporting, pay-for-performance, or tiering, mainly because they cover a limited set of conditions and do not measure the outcomes of care.

There is some degree of variability in the opinions of Boston-area organizations about using the AHRQ QIs for any purpose other than quality improvement. At one extreme is the MHA. Although the MHA has been active for some time in regional and national quality measurement and reporting activities, it categorically opposes the use of any administrative data-based indicators for public reporting, pay-for-performance, or tiering on the grounds of poor validity. Its position is that:

Evaluations of the quality of care used to inform the public, to make purchasing decisions, or to reward/sanction organizations, must rely on a complete clinical picture of the patient and the care delivered. Administrative data bases, because of inherent limitations tied to coding systems and methods—among other issues—are unsuited to this use. Quality of care evaluation tools based on administrative databases were designed to be, and are suitable only as, screening tools for use by health care providers to direct their quality management processes. A complete picture of patient conditions and care delivered is available only in the medical record.

The Massachusetts Medical Society, while generally supportive of increasing the transparency of health care quality, also is wary of the limitations of administrative data-based quality measurement, but stops short of stating that administrative data can be used only as a quality screening tool. The organization opposes the use of inaccurate data and the inappropriate use of administrative data, but understands that "administrative data are the best we have right now." Given that reality, the Massachusetts Medical Society has drafted several criteria for the appropriate use of administrative data-based indicators:

  • Rigorous, completely transparent methodology.
  • Meaningful measurements standardized whenever possible across payers and systems.
  • Opportunity for physicians to review data and make changes when data are inaccurate well before publication.
  • Collaborative process.
  • Timely data sharing.
  • User-friendly format.

Other interviewees made distinctions between different sets of the AHRQ QIs. For example, we were told that the mortality-based IQIs were considered more acceptable for public reporting or other activities, but that the remaining IQIs and the PSIs were not. Representatives of the payers and purchasers who support the use of AHRQ QIs for performance measurement told us that they favor trying to improve the AHRQ QIs rather than trying to develop alternatives based on clinical data.

They told us that clinical data collection is cost-prohibitive, so insisting that public reporting, pay-for-performance, and tiering of providers be based on indicators derived from clinical data effectively limits these efforts to the JCAHO Core Measures. From their point of view, opposition by hospital administrators to reporting administrative data-based indicators is partly a "delay tactic." They pointed out that administrative data are based on medical records, are owned by the hospitals, and can be improved for quality measurement purposes.

Despite these disagreements about the appropriate use of the AHRQ QIs, the new reporting initiatives have gone forward, with some modifications, and the degree of opposition appears to be decreasing. This shift is partly due to growing political support in the state (and nation) for increased transparency in health care. Public reporting, pay-for-performance, and tiering are increasingly viewed as inevitable. Another important factor in overcoming opposition has been the accommodation of some of the concerns about which indicators have been used and how they are used. 

For example, BCBSMA told us that a key to stakeholder "buy-in" has been to involve the hospitals, build good relationships, and work collaboratively to set clinically important and reasonable goals. Nevertheless, some participants are unlikely to change their conviction that quality indicators based on administrative data are inappropriate for any use other than quality improvement.

5.1.5. Technical Lessons Learned

Several common technical issues with the AHRQ QIs were identified by all of our interviewees. The major disagreement focused on whether these technical limitations were sufficient to prohibit use of the AHRQ QIs for non-quality improvement purposes. We were consistently told that the most prominent limitation is that the AHRQ QI specifications and underlying administrative data do not account for conditions that are present at hospital admission.

This issue may be remedied, however, since future versions of the AHRQ QIs will accommodate present-on-admission conditions and a new Massachusetts law will require that this data element be included in the state's administrative data. Other commonly mentioned limitations include failure to identify patients under do-not-resuscitate orders and those who stay in the hospital for "comfort care" only.
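
To make the present-on-admission issue concrete, the following minimal sketch (written in Python; the record layout, the condition name, and the flag values are illustrative assumptions, not the actual AHRQ QI specifications) shows how an indicator that counts a secondary diagnosis as an in-hospital complication overcounts when present-on-admission information is unavailable and excludes pre-existing conditions when it is.

    # Minimal sketch, not the AHRQ QI software: illustrates how a
    # present-on-admission (POA) flag changes whether a secondary diagnosis
    # is counted as an in-hospital complication. Field names, the condition
    # label, and the flag values are hypothetical.
    discharges = [
        {"id": 1, "secondary_dx": [("decubitus_ulcer", "Y")]},  # present on admission
        {"id": 2, "secondary_dx": [("decubitus_ulcer", "N")]},  # arose during the stay
        {"id": 3, "secondary_dx": [("other_dx", "N")]},
    ]

    def flagged_without_poa(records, target_dx):
        """Count cases the indicator would flag if POA data were unavailable."""
        return sum(
            any(dx == target_dx for dx, _poa in r["secondary_dx"]) for r in records
        )

    def flagged_with_poa(records, target_dx):
        """Count only cases where the condition arose during the hospital stay."""
        return sum(
            any(dx == target_dx and poa == "N" for dx, poa in r["secondary_dx"])
            for r in records
        )

    print(flagged_without_poa(discharges, "decubitus_ulcer"))  # 2 (overcounts)
    print(flagged_with_poa(discharges, "decubitus_ulcer"))     # 1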

Other technical problems with the AHRQ QIs are due to variability in coding practices among hospitals. As mentioned earlier in this report, many abnormal results in the AHRQ QIs are found to be due to data-coding issues rather than quality-of-care problems, and the first step in quality improvement using the AHRQ QIs is often improvement in data-coding practices. These issues underscore the need to allow providers to review their AHRQ QI results prior to use and to have the opportunity to investigate and correct abnormal results that may be due to data-coding issues. Since not all data quality issues can be resolved quickly, it may also be useful to allow hospitals to offer an explanation for abnormal results in a public report.

Another technical issue mentioned by several interviewees is that implementation of the AHRQ QIs by different vendors has led to varying results for the same provider on the same indicator. These vendors may be making non-transparent changes to the specifications of the AHRQ QIs that lead to these divergent results.

In summary, our interviews suggest that many of the technical issues raised with respect to the AHRQ QIs may be amenable to improvement. The most prominent technical issue mentioned, the lack of a flag for conditions that are present on hospital admission, provides one example of a problem that is currently being addressed by AHRQ and state legislation. However, while proponents of using the AHRQ QIs stressed the value of working to improve the indicators and data, some interviewees felt that further investment in the indicators was not warranted. We were told by one interviewee that "at this point, any changes are tweaking around the edges... AHRQ could pour a lot more money into the indicators to make them better, but it would not be money well spent because they can't get a whole lot better."

5.1.6. Implications

The experience in the Boston area suggests that the AHRQ QIs might be used for public reporting, pay-for-performance, and tiered insurance products without major negative ramifications, despite strong opposition. However, a number of important caveats must be considered. The activities in Boston are in their early stages and could become problematic as they are expanded (e.g., addition of PSIs to the state's public report or additional funds devoted to pay-for-performance programs). We heard strong warnings from some interviewees that these activities were inappropriate and could lead to problems for payers, purchasers, and AHRQ, since AHRQ had endorsed use of the QIs for these purposes.

Nevertheless, given the growing political focus on transparency in health care and the lack of viable alternatives to indicators based on administrative data, use of the AHRQ QIs for these purposes appears to be entrenched in the Boston area. The strength of outright opposition appears to be declining, and opponents appear to be increasingly focused on addressing their concerns by changing the ways the AHRQ QIs are implemented.

The Boston experience underscores the importance of clear, official AHRQ guidance on how the AHRQ QIs should be used. None of our interviewees doubted the usefulness of the AHRQ QIs; they disagreed on appropriate uses. Critical to the successful implementation of quality-reporting activities in Boston was AHRQ's written endorsement of use of the AHRQ QIs for non-quality improvement purposes. Payers and purchasers requested that this guidance be more explicit and strongly worded. Other interviewees suggested that AHRQ should not have issued this type of endorsement, and pointed out that for some time AHRQ endorsed use of the AHRQ QIs only for their originally designed purpose—quality improvement.

Despite this disagreement, respondents in Boston agreed that AHRQ should lead the development of quality indicators based on administrative data. In addition, several interviewees suggested that AHRQ should collaborate with other national organizations, including Leapfrog, JCAHO, and CMS, to create a consensus around the use of at least some subset of the AHRQ QIs in a national quality indicator set.


5.2. Dallas-Fort Worth

5.2.1. Dallas-Fort Worth Market Area

The Dallas-Fort Worth (DFW) market can be characterized as an area with well-established hospital quality reporting. Sophisticated institutional players both drove the introduction of reporting on quality and helped hospitals and purchasers turn results into actions. The DFW Hospital Council is a regional hospital association of over 70 hospitals, founded in 1997 to support hospitals in collaborating and using data to improve patient safety and quality. The Dallas-Fort Worth Hospital Council (DFWHC) Data Initiative (DI),42 an education and research foundation, is part of this association.

Among its many functions, the DI serves as an expert intermediary between the hospitals and the State for purposes of submitting discharge data from hospitals for generation of the legislatively-mandated report card on the quality of hospital care (see discussion below). It also independently calculates AHRQ QIs for all participating hospitals and feeds back to each hospital its own indicator results so that hospitals can see their performance on the AHRQ QIs well ahead of the public release. In addition, the DI has developed a sophisticated software tool with which hospitals can analyze and benchmark their own data, as well as identify individual cases that were flagged by an indicator as potential adverse events. Hospitals compare this information against medical records to distinguish coding problems from quality issues.
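
The following is a minimal sketch of this kind of case-level drill-down, written in Python; it is not the DI's actual tool, and the record layout and diagnosis codes are assumptions. It simply lists the individual discharges whose codes match an indicator's numerator criteria so that staff can pull each medical record and decide whether the flag reflects a coding problem or a true adverse event.

    # Hypothetical sketch of case-level flagging for chart review; the codes
    # below are placeholders, not an actual AHRQ QI numerator definition.
    FLAG_CODES = {"9980", "99811"}  # placeholder diagnosis codes for one indicator

    discharges = [
        {"record_id": "A001", "dx_codes": ["4280", "9980"]},
        {"record_id": "A002", "dx_codes": ["486"]},
        {"record_id": "A003", "dx_codes": ["99811", "25000"]},
    ]

    def flag_cases(records, numerator_codes):
        """Return IDs of discharges whose diagnosis codes match the numerator set."""
        return [
            r["record_id"]
            for r in records
            if numerator_codes.intersection(r["dx_codes"])
        ]

    for record_id in flag_cases(discharges, FLAG_CODES):
        print(f"{record_id}: pull chart to confirm adverse event vs. coding issue")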

In addition, the media have traditionally given considerable attention to hospital quality in the DFW area, partly because of the DI and employer initiatives and partly because of a local news reporter with a strong personal interest.

5.2.2. Background on Use of Quality Indicators

Texas has a long history of using data to encourage informed consumer decisions and competition on quality of care. In 1995, the Texas Legislature created the Texas Health Care Information Collection (THCIC)43,44 with the primary purpose of providing data to enable Texas consumers and health plan purchasers to make informed health care decisions. THCIC's charge is to collect data and report on the quality performance of hospitals and health maintenance organizations operating in Texas. The same legislation required THCIC to publicly report on the quality of care in Texas hospitalsn and explicitly mandated the use of severity adjustment.

A scientific advisory committee guided THCIC in its decisions on how to implement the mandate, and the process was accompanied by extensive stakeholder consultations. The committee argued that indicators for the report should be based on a national standard and an open-source methodology. The AHRQ QIs were the only viable indicator set found, and the option of developing new indicators was rejected because of resource constraints and concerns about scientific defensibility.

THCIC decided to use most of the IQIso and all the PQIs, but not the PSIs. THCIC also plans to add the PDIs to the report. The decision not to report the PSIs was motivated by concerns about unstable rates due to small denominators and their controversial nature, as those indicators capture severe adverse events in hospitalized patients.45 The first report was released in 2002, shortly after the AHRQ QIs became available, which made Texas one of the first users of this product. The report has been updated annually since.

5.2.3. Impact of Public Reporting

There have been no formal evaluations of the impact of publicly reporting the AHRQ QIs, so only anecdotal evidence is available. Our interviewees suggested that the indicators have had an impact, at least in the DFW area. One interviewee explained:

The use of AHRQ QIs probably has affected patient care indirectly, but not directly. Because hospital-level information would be subject to Freedom of Information Act requests, the state does not and cannot work specifically with hospitals in addressing quality of care issues. But, as a result of the QIs, at least one of the regional hospital associations works with member hospitals to look at indicators and quality.

Because of media interest and purchaser pressure, hospital administrators pay close attention. Hospital CEOs, CFOs, and boards are commonly briefed on their AHRQ QI results. Larger hospitals have quality improvement teams that routinely use the tools provided by the DI to analyze their results, and even smaller facilities try to understand their performance on the indicators. The main focus so far has been to work with coding departments on the particular requirements of the AHRQ QIs to make sure coding issues do not distort the results. But some hospitals reported having found, as part of their investigation into coding practices, instances in which a quality problem was detected and addressed.

Overall, hospitals were not enthusiastic about publicly reporting the AHRQ QIs, but accepted that reporting was "here to stay" and considered the AHRQ QIs to provide a reasonable mechanism for meeting this requirement. Hospitals were concerned, however, about the limitations of billing information as the underlying data source. In addition, the AHRQ QIs, because of their visibility, now play a substantial role in setting priorities for quality improvement projects and sometimes drain resources from other initiatives that hospitals see as more urgent.

One interviewee complained, "There are so many indicators, but the public availability of the AHRQ QIs forces us to deal with them, even if we don't believe in the results." This is reinforced by the media attention to the AHRQ QIs, because reporters tend to overemphasize negative results and over-interpret the findings.

Our interviewees emphasized that the situation was quite different in other parts of Texas, where there is limited media attention and no expert intermediary to help hospitals with data analysis. We consistently heard that little sustained attention has been paid to the public indicator reports after the excitement around the first release had subsided.

Another important observation is that the QI projects are mainly driven by hospital administrators and not physicians. Physicians are involved on an individual basis, depending on their interest in quality improvement, but the Texas Medical Association has focused its quality agenda on other initiatives, such as the IHI 100,000 Lives Campaign. It does not see a role for organized medicine in the public reporting of the AHRQ QIs, which it considers to be hospital-related.

We heard repeatedly that hospital administrators used the AHRQ QIs to convince physicians to work on quality improvement projects (e.g., standing order sets, clinical pathways), which some physicians tended to dismiss as "cookbook medicine." However, if the indicators suggested poor performance in a given clinical area, hospital administrators had additional leverage in convincing physicians to embrace change.

5.2.4. Political Lessons Learned

In spite of the potentially controversial nature of public reporting for hospitals, its introduction was relatively uneventful. Hospital associations and the DI worked together with the Texas legislature to craft and implement the statutory requirements, which mandated transparency and proper risk adjustment. The indicators were selected through a multi-stakeholder consultation process. Stakeholders decided early in the process that only well-established national indicator sets should be considered, as any "home-grown" solution would lack credibility.

The AHRQ QIs were the only viable option that met those requirements, and had the additional advantage of not requiring additional data collection. AHRQ's reputation for unbiased and rigorous research carried great weight in this decision process. Still, there was "a great deal of anxiety before the first released report," and hospitals received their own data for review and comment 60 days in advance. A few years into this program, anxiety has subsided, making room for a more reasoned discussion of the strengths and limitations of the indicators and the best ways to use them for quality improvement. As one interviewee stated:

Hospitals don't love the indicators—for performance improvement, clinical data is the best bet, but in the short run, especially as they improve, administrative data and the AHRQ QIs are a good solution.

The two main lessons learned from this implementation are the importance of proper involvement of stakeholders, in particular hospitals, in every step of the process, and the importance of using scientifically credible and transparent indicators.

5.2.5. Technical Lessons Learned

The implementation of a public reporting system based on the AHRQ QIs did not create significant technical problems for the state of Texas and the DFWHC. Both the THCIC and the DI were accustomed to working with discharge data and with the HCUP indicators (the predecessors of the current AHRQ QIs). AHRQ provided limited feedback and technical assistance in the initial implementation, but the program is largely self-sufficient now.

It proved more challenging for the hospitals to adapt to this new requirement. While they were typically familiar with many of the building blocks of the AHRQ QIs, such as the UB-92 data format and the APR-DRGs, they needed to understand the logic of the indicators and how coding practices for the UB-92 affected their results. They also had to start identifying patients who were flagged by an indicator, retrieving their medical records, and assessing whether there was an actual adverse event or a coding problem before responding to the findings. Substantial educational efforts became necessary to make medical-record coders aware of the implications of coding rules for the indicators. Without the software tools and the technical assistance that the DI provides, few hospitals, especially the smaller facilities, would have been in a position to analyze their own data and to improve both data quality and quality of care. This ability, as we heard over and over, is critical for buy-in by the hospitals and also for a public reporting program to lead to real change.

Those problems have not yet been fully overcome, and new issues continue to surface as hospitals become more familiar with the implications of coding practices for the indicators. For some indicators (e.g., vaginal tears during childbirth), strict interpretation of the coding rules to achieve adequate reimbursement has led to poor performance on the indicators—and vice versa. For others (e.g., post-operative hemorrhage), coding rules are not specific enough, resulting in inconsistencies between physicians and coders.

Finally, Texas has not mandated the use of E-codes (ICD codes for external cause of injury) for hospitals. This leads to comparability problems, because hospitals can set their own policies as to whether those codes are used. In general, physicians and hospitals would like to see more rigorous validation studies to assess the strengths and limitations of the different indicators. While many hospitals in the DFW market have compared their indicator results against medical records, none has done so systematically as part of a research project, and the efforts have so far focused on false-positive events.

Sample size remains a particular problem for smaller facilities. While there are fewer small hospitals in this area than in other parts of Texas, some hospitals can report only on mortality for AMI and pneumonia, since they lack the required sample size for any other indicator. In addition, many small facilities routinely transfer most AMI patients to hospitals that are equipped for emergency procedures. This can inflate AMI mortality and lead to poor performance on the indicator, because a greater share of patients who are too unstable for transport, or whose prognosis is too poor to allow for invasive procedures, remain in the smaller hospitals.
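
A brief sketch, again in Python, illustrates the small-denominator problem; the minimum-cases threshold and the counts below are hypothetical assumptions rather than Texas reporting rules, but they show why a small hospital may be able to report only one or two mortality indicators.

    # Illustrative sketch: publish an observed rate only when the number of
    # eligible cases (the denominator) meets a minimum threshold. The
    # threshold and counts are hypothetical.
    MIN_DENOMINATOR = 30

    # (indicator, numerator events, denominator cases) for one small hospital
    hospital_counts = [
        ("AMI mortality", 6, 45),
        ("Pneumonia mortality", 4, 60),
        ("CABG mortality", 1, 8),             # too few cases to report reliably
        ("Hip replacement mortality", 0, 5),
    ]

    for name, numerator, denominator in hospital_counts:
        if denominator < MIN_DENOMINATOR:
            print(f"{name}: suppressed (only {denominator} cases)")
        else:
            print(f"{name}: {numerator / denominator:.1%} ({numerator}/{denominator})")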

5.2.6. Implications

There are two main implications for the AHRQ QI program from the experience in Texas and in DFW in particular. First, while the AHRQ QIs were not originally designed for public reporting, their use for this purpose, with the appropriate caveats, seems viable. Hospital administrators have adjusted to the AHRQ QIs as metrics and are beginning to educate their coders about the impact of coding practices on the quality reports. At least anecdotally, the reports are having an impact on quality improvement efforts. As previous RAND research has reported,46 the driving force to act on the results of performance data is not patients or purchasers but rather hospital administrators and their boards, who are concerned about the reputation of their hospital. The reporting requirements raised the profile of quality of care as a priority issue over finances and provided both data and leverage to introduce quality improvement efforts.

Second, the DI as an expert intermediary was crucial to the implementation and will remain crucial in helping hospitals to turn the reports into action. The DI helped overcome the initial resistance and helped hospitals understand the value of reporting and accountability. For other regions in Texas, which lack such expert support, the indicators remain a "black box" and a source of anxiety rather than a stimulus to improve care. This suggests that it is important for AHRQ to continue supporting intermediaries like the DI.


j. Tiered insurance products charge patients higher cost-sharing levels for providers that perform worse on quality and/or efficiency indicators.

k. Specifically, IQIs #12, 14, 16-20, 32, 21, 33, 34.

l. The indicators used include 15 selected QIs (PSIs # 2, 4, 7, 11, 12, 17-19; IQIs # 12, 15-17, 20; PQI # 1; and PDI 14).

m. BCBSMA's tiered insurance product will not be used by GIC since BCBSMA does not serve state employees.

n. All hospitals except critical access hospitals in areas with a population of less than 35,000 inhabitants.

o. Specifically, IQIs # 1-14, 16-20, 22-25, 30-33.

