


How are CHIPRA demonstration States approaching practice-level quality measurement, and what are they learning?


Authors: Grace A. Ferry, Henry T. Ireys, Leslie Foster, Kelly J. Devers, and Lauren Smith

This Evaluation Highlight is the first in a series that presents descriptive and analytic findings from the national evaluation of the Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA) Quality Demonstration Grant Program.1 In this Highlight, we discuss the early accomplishments, challenges, and lessons learned from the following four States pursuing practice-level quality measurement: Maine, Massachusetts, North Carolina, and Pennsylvania. Our analysis is based on work completed by the States during the first 2 years of their 5-year demonstration projects. These 2 years included a year of planning followed by a year of implementation.




Key Messages

The experiences of the four demonstration States may be helpful to other States considering or pursuing practice-level quality measurement and reporting.

Key messages from the States’ early experiences include:

  • Involving physician practices in selecting or refining measures for quality improvement (QI) projects was integral to the States’ approach. Both States and practices had to be flexible to reach agreement on measures that are high priority, actionable, and appropriate for busy practices.
  • Adapting measures originally designed for reporting at the health plan or State level for use at the practice level has been an unexpectedly time- and resource-intensive task.
  • Outdated or underdeveloped claims systems, health information exchanges, and electronic health records (EHRs) can pose substantial barriers to collecting practice-level quality measures.

The CHIPRA Quality Demonstration Grant Program

In February 2010, the Centers for Medicare & Medicaid Services (CMS) awarded 10 grants, funding 18 States, to improve the quality of health care for children enrolled in Medicaid and the Children’s Health Insurance Program (CHIP). Funded by the Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA), the Quality Demonstration Grant Program aims to identify effective, replicable strategies for enhancing quality of health care for children. With funding from CMS, the Agency for Healthcare Research and Quality (AHRQ) is leading the national evaluation of these demonstrations.

The 18 demonstration States are implementing 51 projects in five general categories:

  • Using quality measures to improve child health care.
  • Applying health information technology (IT) for quality improvement.
  • Implementing provider-based delivery models.
  • Investigating a model format for pediatric electronic health records (EHRs).
  • Assessing the utility of other innovative approaches to enhance quality.

The demonstration began on February 22, 2010, and will conclude on February 21, 2015. The national evaluation of the grant program started on August 8, 2010, and will be completed by September 30, 2015.



Background

CHIPRA established a series of initiatives to improve the quality of children’s health care and, more generally, helped to bring a pediatric focus to a broad range of Federal quality measurement efforts. In addition to the Quality Demonstration Grant Program, CHIPRA mandated the identification of a core set of quality measures to track improvements in care for children. In early 2011, CMS released the Initial Core Set of 24 Health Care Quality Measures (Initial Core Set), encouraged States to report to CMS annually on these measures for children enrolled in Medicaid and CHIP, and launched a national technical assistance and analytic support program to help States collect, report, and use the measures to drive quality improvement. The Initial Core Set covers a range of health domains, including prevention and health promotion, management of acute and chronic conditions, availability of care, and family experiences of care.2

Ten of the 18 demonstration States are using grant funds to develop valid and reliable procedures for constructing and reporting the Initial Core Set to CMS for all children enrolled in Medicaid and CHIP. This category of activities will help CMS identify technical assistance needs and improve measure sets, which are scheduled to be released annually beginning in January 2013.

In addition, eight of these 10 States are working with physician practices to collect clinically useful and timely information about child-serving practices’ performance on selected measures. Practice-level quality measures aim to help providers target areas for QI efforts and assess the impact of QI interventions so they can learn what works or does not work. The States are devising their own approaches to practice-level reporting, seeking input from providers and stakeholders in their States. To date, States have not requested assistance on practice-level reporting from the national technical assistance and analytic support program.

This Evaluation Highlight describes how four States—Maine, Massachusetts, North Carolina, and Pennsylvania—are developing plans and early strategies for practice-level quality measurement. Strategies range from producing practice-level reports from claims data to helping practices calculate measures from chart reviews. Their experiences may be instructive for other States interested in pursuing similar strategies.

For this Evaluation Highlight, we drew information primarily from semi-structured, in-person interviews conducted in the spring and summer of 2012. The national evaluation team interviewed State demonstration staff, staff in physician practices participating in CHIPRA projects, and other stakeholders. The analysis also draws on semi-annual progress reports that demonstration States submitted to CMS on February 1, 2012 and August 1, 2012.

Summary of Four States’ Approaches to Practice-Level Reporting

Maine is working with up to 24 practices to generate data needed to calculate measures, including immunization and developmental screening measures, at the practice level. Ultimately, the State hopes to integrate quality measurement activities across Federal and State reporting efforts (for example, American Academy of Pediatrics’ Bright Futures measures and meaningful use measures associated with the CMS EHR payment incentive program).

Massachusetts is working to integrate practice-level reporting for children enrolled in Medicaid, CHIP, and commercial insurance plans. The State believes that this comprehensive reporting will provide practices with more complete information about their performance, which may help with planning quality improvement (QI) efforts.

North Carolina is embedding a QI specialist in each of the 14 primary care networks in the State. The QI specialists will help practices collect data necessary to produce measures at the practice level and use practice-level quality reports for QI activities.

Pennsylvania is establishing a pay-for-performance system that rewards pediatric practices in seven health systems for extracting quality measures from EHRs, reporting on eight of the measures from the Initial Core Set, and either maintaining good performance or improving it.

To learn more about the States’ specific approaches to practice-level reporting, visit: http://www.ahrq.gov/chipra/demoeval/



Findings

The administrative and technical steps needed to calculate quality measures at the practice level are different in critical ways from the steps needed to report quality data at the health plan or State level. We highlight the four States’ approaches to: (1) selecting measures, (2) adapting the selected measures for practices, and (3) collecting the needed data.

Close partnering with providers is an important factor in selecting meaningful, feasible measure sets

Aligning measures reported at the practice, health plan, and State levels is an ongoing challenge for national measurement initiatives. Demonstration States made substantial efforts to involve providers from child-serving practices in identifying which Initial Core Set measures are the highest priority for practices and in selecting additional measures that may be useful for QI efforts. By engaging practices, States gained provider buy-in and helped ensure cooperation with QI initiatives.

Providers expressed two strong preferences: measures selected for practice-level reporting should be (1) timely and useful to the practice’s QI efforts (for example, related to frequently seen health problems), and (2) under the influence of the practice (that is, they should primarily address the quality of care delivered in that office). Some practices believed that access to care should not be measured at the practice level because providers should not be held accountable for whether families schedule and keep provider-recommended appointments for their children.

Generally, after working through these issues, States and practices were able to agree on measure sets that were acceptable to both. Furthermore, two States agreed to reduce measurement burden on the practices by taking a phased approach and initially implementing only a subset of measures.

"I like data—but I want data that is relevant, that touches me. I see all these graphs and data and this and that, but what does that mean for me and my [patients]?"

North Carolina Provider, April 2012

A positive byproduct of these conversations is that States and practices gained a greater understanding of each other’s perspective in ways that go beyond practice-level quality measurement to broader considerations of health system reform. These considerations included QI priority setting, establishing expectations for accountability, and the potential role of shared savings as an incentive to statewide QI.

Adapting health plan and State-level measures for practice-level reporting is challenging and resource-intensive

The majority of measures in the Initial Core Set were specified for health-plan-level reporting. The lack of national guidance on reporting measures at the practice level means that each State is testing different approaches. While States are closely following the original specifications for some measures, for others they are making adjustments to fit their reporting capabilities and needs. Adjustments include testing new data sources or excluding certain children from the calculations (for example, beneficiaries dually eligible for Medicaid and Medicare). A recent study on reporting Initial Core Set measures at the practice level underscores the benefits and drawbacks of developing practice-specific specifications.3

In two States, providers and other stakeholders led the measure testing process. States reported that this stakeholder-led approach encouraged the assessment of the pros and cons of different methods and was instrumental to creating usable practice-level measures. CMS and other States can learn from their experiences to determine which measures are most feasible for practice-level reporting.

Testing different measurement approaches can, however, limit the comparability of practice-level data across States and compromise the reliability and validity of the measures if reported measures stray too far from the original specifications. Moreover, adapting the measures for the practice level is a resource-intensive and time-consuming process. All four States indicated that the process is taking longer than anticipated, using more demonstration resources than anticipated, or both.

"There was a lot of really good discussion about taking a set of measures that were designed to measure a Medicaid program … and trying to operationalize [them] to a health system and a practice level. Trying to figure out those numerators and denominators is easier said than done, but we’ve been able to do it."

Pennsylvania Demonstration Staff, June 2012

Testing different data sources. To develop the database needed for practice-level reporting, States are exploring the use of a combination of State-level data systems (for example, claims, enrollment, and eligibility data; immunization registries; and health information exchanges) and data submitted by providers (for example, manual chart reviews and EHR data submissions). In some cases, States were overly optimistic in their plans to link existing data systems, such as claims systems and immunization registries, and to use newly developed information exchange systems to calculate practice-level measures. Reporting timelines were not met when data were delayed, incomplete, or inaccurate. States with experienced data analysts reported that this expertise helped them identify data issues and develop solutions in a timely manner.

Attributing patients to providers. Determining which children each practice should be held accountable for is a critical and common challenge for States reporting the Initial Core Set or other measures at the practice level. Generally, claims and encounter data indicate which provider (not which practice) a patient visited, and claims contain no information about which providers work together, making it difficult to assign patients to practices. Massachusetts is addressing this problem by using Massachusetts Health Quality Partners’ (MHQP) annually updated database of providers working at Massachusetts practices and by basing its patient attribution methodology on the approaches MHQP uses to report practice-level data on commercially insured patients.

Even in States where Medicaid- and CHIP-enrolled children choose or are assigned to primary care providers (PCPs), the attribution process can be problematic. Families may use a provider other than the assigned PCP if they are not aware of a child’s assignment or if another provider is more convenient. States must decide whether patients using providers other than their assigned PCP should be attributed to the practice with their assigned PCP or to the practice where they seek the majority of their care.
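To make the attribution decision concrete, the sketch below shows one common form of rule in Python: attribute a child to the practice that provided the majority of his or her visits, and fall back to the assigned PCP's practice otherwise. The rule, the tuple layout, and all names are illustrative assumptions, not any demonstration State's actual methodology.

```python
from collections import Counter

def attribute_patient(visits, assigned_pcp_practice=None):
    """Attribute one child to one practice.

    visits: list of (practice_id, visit_date) tuples built from claims
    or encounter data. assigned_pcp_practice: the practice of the
    child's assigned PCP, if the State assigns one. All names here
    are hypothetical.
    """
    if not visits:
        # No utilization on record: fall back to the assigned PCP's practice.
        return assigned_pcp_practice

    visit_counts = Counter(practice_id for practice_id, _ in visits)
    top_practice, top_count = visit_counts.most_common(1)[0]

    # Majority-of-care rule: attribute to the practice that provided
    # more than half of all visits; otherwise defer to the assigned PCP.
    if top_count * 2 > len(visits):
        return top_practice
    return assigned_pcp_practice or top_practice
```

Under such a rule, a child with two visits to one practice and one visit elsewhere is attributed to the first practice even if the assigned PCP works at the second, which is exactly the kind of trade-off States must decide on.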

Calculating Medicaid and CHIP measures also requires identifying which children in the practice are enrolled in those programs. Practices in Pennsylvania, for example, have no way to identify CHIP enrollees in their medical records because of a longstanding policy against assigning CHIP identifiers, adopted to avoid the stigma sometimes associated with public benefits. The State had to work with the Department of Insurance to develop enrollment files that link CHIP enrollees to specific practices.

In addition, barriers to defining denominators for health plan or State-level reporting remain a challenge for practice-level measurement efforts. At both levels of reporting, some children may need to be excluded from certain measure calculations. For example, the Initial Core Set measure for Chlamydia screening is valid only for girls who are sexually active; a practice needs to be able to exclude girls who are not, or the measure may inappropriately indicate low levels of screening. Massachusetts developed a set of data submission tools for measures derived from medical records and loaded these tools into an online reporting portal. The tools allow providers to flag a child who does not meet the denominator criteria so that the child is excluded from the measure.
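As a rough illustration of how such exclusions enter a measure calculation, the Python sketch below removes children who do not meet the denominator criteria before computing a rate. The field names are hypothetical stand-ins for data a portal might collect, and the age band is illustrative rather than the official measure specification.

```python
def screening_rate(patients):
    """Compute a screening rate after applying denominator exclusions.

    Each patient is a dict with hypothetical keys: 'female', 'age',
    'sexually_active' (a flag a provider might set through a reporting
    portal), and 'screened'.
    """
    denominator = [
        p for p in patients
        if p["female"]
        and 16 <= p["age"] <= 20      # illustrative age band
        and p["sexually_active"]      # exclusion: not sexually active
    ]
    if not denominator:
        return None  # the rate is undefined for an empty denominator

    screened = sum(1 for p in denominator if p["screened"])
    return screened / len(denominator)
```

Without the exclusion criterion, every girl who is not sexually active would count as unscreened, deflating the practice's apparent screening rate.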

Data collection is hindered by infrastructure and technology limitations

All four States looked to health IT (for example, linkages between State databases, health information exchanges, and EHRs) as a way to ease the burden of data collection and measure calculation. However, their needs often went beyond what current IT systems could provide. As a result, some States are developing short-term strategies to obtain data while they work to develop the needed health IT and data infrastructure.

Involving practices in data collection. All of the States are relying on data from practices. How involved the practices are in data collection depends on the measure. Measures relying on data available in existing systems, such as immunization registries and claims databases, use information providers submit through normal business procedures, such as billing. When existing data systems are outdated, underdeveloped, or not trusted by providers, States are relying on practices to extract the needed data from their records. Two States that actively engage practices in data collection are developing systems that include EHR data extractors and an online reporting system for manual chart reviews to help ensure that submitted data are valid.

"We are finding that the HIT [health information technology] changes required to support the collecting and reporting of practice-based measures are more difficult to implement and take longer than previously understood."

Maine CHIPRA Quality Demonstration Progress Report, February 1, 2012

To alleviate provider burden associated with "active" participation in data collection, States are trying various strategies. These include aligning or coordinating Federal and private payer measurement initiatives, providing participation stipends or pay-for-reporting incentives (for example, $5,000 per measure reported by a health system), embedding demonstration staff (for example, QI specialists) in practices to help collect the measures via paper chart or EHR data extraction, and providing training to practices on data collection.

Mismatch between EHR capabilities and measure-generation needs. The States are reporting measures that are not currently specified for use with EHR data. To pull information from EHRs, a few States are attempting to map the measure specifications to discrete EHR fields for one or more major vendors as a test case. They face a number of challenges.

"…Collecting consistent, complete, and reliable data from practices via practice staff will require significant effort both in the creation of detailed data collection forms and in the support and guidance offered to practices."

Massachusetts CHIPRA Quality Demonstration Progress Report, February 1, 2012

First, EHRs store needed information in a variety of ways, as a result of product design or user preference. One EHR, for example, may use a check box to indicate that a vaccine is contraindicated for a patient, whereas another may store that information in a drop-down box. Second, EHRs may not support, or a provider may not use, the discrete fields needed to calculate measures. Data that reside in free text instead of discrete fields are difficult to use for quality measurement with current technology. Third, EHR vendor and product selection is not static. Some practices have switched EHR vendors during the first 2 years of the demonstration, and even practices that stay with the same vendor may see the product change. As EHR specifications for measures are released, States will continue to face these challenges if providers use non-certified systems or record data outside of the discrete fields.
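The first of these challenges amounts to a mapping problem, which the Python sketch below illustrates: a normalization layer that reads the same clinical fact out of differently structured products. The vendor identifiers and field names are invented for illustration and do not describe real EHR schemas.

```python
def vaccine_contraindicated(record, ehr_product):
    """Normalize a vaccine-contraindication flag across EHR layouts.

    'record' is a dict of fields extracted from an EHR; product and
    field names here are hypothetical.
    """
    if ehr_product == "vendor_a":
        # Vendor A stores the flag as a boolean check box.
        return bool(record.get("contraindication_checkbox"))
    if ehr_product == "vendor_b":
        # Vendor B stores a coded value chosen from a drop-down box.
        return record.get("contraindication_status") == "Contraindicated"
    # Unknown product, or data held only in free text: return None so
    # the chart can be routed for manual review instead of guessed at.
    return None
```

Every new vendor, product version, or local documentation habit potentially adds another branch, which is one reason States found the mapping effort larger than expected.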

Collecting data from State-level data systems. The demonstration States are collaborating across agencies to gain access to needed information and to improve the long-term quality and sustainability of existing data systems. For example, Massachusetts convened a broad group of stakeholders to work on a variety of child health issues including the spread of provider-level reporting efforts.

Another example of cross-agency collaboration is the production of childhood and adolescent immunization measures at the practice level in North Carolina. The immunization measures will rely on data pulled from three different data systems that have not previously "talked" to each other: paid Medicaid claims, the North Carolina Immunization Registry, and the Health Service Information System (a billing system). This required State staff to add new data fields and elements, which entailed corresponding changes to databases and new training for individuals who work with these systems. Similarly, Maine’s demonstration staff are collaborating with a large group of agencies and stakeholders to enhance the reporting capacity of the State’s immunization registry.
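A minimal sketch of the pooling step, in Python, might look like the following; the record layout is an assumption made for illustration, and a real linkage effort would also need to reconcile patient identifiers across the three systems.

```python
def pool_immunization_records(claims, registry, billing):
    """Pool immunization records from three source systems.

    Each argument is an iterable of (child_id, vaccine_code, date_given)
    tuples drawn from paid claims, an immunization registry, and a
    billing system. Treating identical tuples as the same administered
    dose de-duplicates records that appear in more than one system.
    """
    pooled = set()
    for source in (claims, registry, billing):
        pooled.update(source)
    return sorted(pooled)
```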




Conclusions

Four demonstration States are experiencing similar challenges in implementing practice-level reporting of the Initial Core Set of quality measures. Each State is, to some extent, developing its strategies from scratch; although these customized strategies respond to each State’s circumstances, developing them independently also creates inefficiencies. Practice-level reporting efforts could be accelerated through technical assistance that helps States develop solutions to the kinds of challenges described in this Evaluation Highlight.

Recent reviews and commentaries on health information exchange and quality measurement in the larger health care environment suggest that the technological and administrative challenges States have faced in providing practices with timely and accurate Medicaid and CHIP measures for QI are similar to those affecting other efforts to develop practice-level reporting.4,5

Nevertheless, the demonstration States have made progress over the last 2 years. Providers in Pennsylvania indicated they are initiating new QI efforts as a result of CHIPRA practice-level reports. To increase well-child visits, for example, clinics are redesigning reminder letters and completing reminder calls earlier in the month, when parents are more likely to have available cell phone minutes. In addition, demonstration staff and stakeholders in Maine indicated the CHIPRA demonstration is increasing the pediatric focus of quality measurement in the State. For example, Pathways to Excellence, a public reporting initiative, added immunization measures aligned with the Initial Core Set to its list of measures that practices report to receive Good-Better-Best quality rankings.




Implications

The early experiences of the four demonstration States highlighted here offer insights that other States interested in practice-level quality measurement may want to keep in mind. Specifically, States could:

  • Involve providers in the measurement selection and testing process to help ensure the measures are useful for practice-level QI efforts.
  • Reserve resources in advance for carefully planning how measures will be calculated at the practice level.
  • Provide support to practices actively participating in data collection. Support could include financial incentives, staff support, or training.
  • Select primary and alternative data sources, such as established State databases. Issues with data access or quality may not be apparent in the planning stages, and having an alternative in mind may provide feasible, shorter-term solutions as new data systems or technologies are developed.
  • Ensure that staff can understand and manage the technical details governing data exchange across data systems. If necessary, staff could be hired or contracted from public-private partnerships, universities, or measure-reporting organizations.
  • Devise strategies for integrating multiple data initiatives across State agencies and the private sector and ensuring that children’s health quality and improvement issues are addressed. One way to do this is through child-focused coalitions or improvement partnerships.
  • Provide support to practices that have recently adopted EHRs or changed their EHR vendor or product so they can build measure reports within their EHR systems that meet measure specifications. States could directly provide training or link them to resources available through other initiatives, such as the HITECH Regional Extension Centers.6
  • Manage stakeholder expectations about the State’s ability to provide data at the practice level. Given the range of challenges States face, the ideal scenario—where child-serving practices have real-time quality data available to influence their clinical decisions at the point of care—is not likely to be realized in the near term.




Learn More

Supplemental resources for this Evaluation Highlight, including detailed descriptions of the four States' approaches to practice-level reporting, are available at http://www.ahrq.gov/chipra/demoeval/highlights/supplhighlight01.htm.

Additional information about the national evaluation and the CHIPRA Quality Demonstration Grant Program is available at http://www.ahrq.gov/chipra/demoeval/.

Use the tabs and information boxes on the Web page to:

  • Find out about the 51 projects being implemented in 18 demonstration States.
  • Get an overview of projects in each of the five grant categories.
  • View reports that the national evaluation team and the State-specific evaluation teams have produced on specific evaluation topics and questions.
  • Learn more about the national evaluation, including the objectives, evaluation design, and methods.
  • Sign up for email updates on the national evaluation.




Endnotes

1. We use the term "national evaluation" to distinguish our work from the activities undertaken by evaluators who are under contract with many of the demonstration grantees to assess the implementation and outcomes of State-level projects. The word "national" should not be interpreted to mean that our findings are representative of the United States as a whole.

2. Mann C. State Health Official Letter #11-001 Regarding the CHIPRA Quality Measures. February 2011. Centers for Medicare & Medicaid Services. Available at: http://downloads.cms.gov/cmsgov/archived-downloads/SMDL/downloads/SHO11001.pdf. Accessed October 25, 2012. CMS anticipates including additional domains in future pediatric measure sets.

3. Casciato A, Angier H, Milano C, et al. Are Pediatric Quality Care Measures Too Stringent? J Am Board Fam Med 2012; 25(5): 686-93.

4. Gold MR, McLaughlin CG, Devers KJ, et al. Obtaining Providers’ ‘Buy-in’ and Establishing Effective Means of Information Exchange will be Critical to HITECH’s Success. Health Aff 2012; 31(3): 514-26.

5. Landon BE. Use of Quality Indicators in Patient Care: A Senior Primary Care Physician Trying to Take Good Care of His Patients. JAMA 2012; 307(9): 956-64.

6. Information on Regional Extension Centers is available at: http://www.healthit.gov/providers-professionals/regional-extension-centers-recs.

Acknowledgements

The national evaluation of the CHIPRA Quality Demonstration Grant Program and the Evaluation Highlights are supported by a contract (HHSA29020090002191) from the Agency for Healthcare Research and Quality (AHRQ) to Mathematica Policy Research and its partners, the Urban Institute and AcademyHealth. Special thanks are due to Cindy Brach and Stacy Farr at AHRQ, Karen Llanos at CMS, and our colleagues for their careful review and many helpful comments. We particularly appreciate the help received from demonstration staff and providers in the four States featured in this Highlight, the time they spent answering many questions during our site visits, and their review of an early draft. The observations contained in this document represent the views of the authors and do not necessarily reflect the opinions or perspectives of any State or Federal agency.




