[This Transcript is Unedited]

DEPARTMENT OF HEALTH AND HUMAN SERVICES

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

WORKGROUP ON QUALITY

June 24, 2004

National Center for Health Statistics
3311 Toledo Road
Hyattsville, MD

Proceedings by:
CASET Associates, Ltd.
10201 Lee Highway, Suite 160
Fairfax, Virginia 22030
(703) 352-0091

TABLE OF CONTENTS


P R O C E E D I N G S [9:05 a.m.]

Agenda Item: Call to Order and Introductions - Review Agenda/Intent of Hearing - Mr. Hungate

MR. HUNGATE: Okay, are we all set? Are the speakers working, the sound system working, etc., all tuned up and ready to go?

I want to start with the mundane things of thanking NCHS for the coffee and donuts, etc., this is an unaccustomed pleasure --

MS. GREENBERG: The NCVHS team provided --

MR. HUNGATE: Specific? Alright, well we'll have to advertise that around a little bit --

MS. GREENBERG: I wouldn't.

[Laughter.]

MR. HUNGATE: Okay, welcome all to the hearing that I have decided to label as Lurching Toward Measurement. As I look through the record of previous hearings that led up to this, which started really, I think, back in 1998 roughly on quality issues, one of the first comments that I came across was Simon Cohn's comment that we manage what we measure. You really said that, I have the documentary proof with me.

Well, let's start with introductions and start over here at my right with Julia Holmes.

DR. HOLMES: I'm Julia Holmes, I work here at NCHS, and I'm staff on the Quality Workgroup. My primary task at NCHS and my involvement in quality measurement and disparities measurement have been as kind of the lead person working on the National Health Care Quality Report and National Health Care Disparities Report led by AHRQ; it is an intergovernmental effort, and NCHS and CDC provided a great many of the measures that go into those reports.

MS. GREENBERG: I'm Marjorie Greenberg also here at NCHS, CDC, and I'm the executive secretary for the committee.

MS. POKER: Hi everybody, my name is Anna Poker and I'm from AHRQ; I'm going to be the staff lead to the Quality Subcommittee. As for my work at AHRQ, I'm relatively new there; I'm working on the National Health Care Quality and Disparities Reports, and I'm also on the patient safety team there.

MR. HUNGATE: Bob Hungate, principal of Physician Patient Partnerships for Health, which is just me, I'm a Medicare retiree, I also chair the Group Insurance Commission in Massachusetts. And chair the workgroup, member of NCVHS.

MS. GREENBERG: Bob, we should probably ask if anyone has any conflicts.

MR. HUNGATE: I'm reminded that I should ask members who have conflicts to announce them. In that sense, as a beneficiary of Medicare, am I conflicted?

MS. HANDRICH: My name is Peggy Handrich, I am a member of the committee, and I work for the Wisconsin Department of Health and Family Services with responsibility for administering the Medicaid program and the vital statistics functions of the state.

DR. CARR: I'm Dr. Justine Carr, I am at Beth Israel Deaconess Medical Center in Boston and I am a member of the committee and member of the Quality Workgroup.

MR. REYNOLDS: Harry Reynolds, Blue Cross and Blue Shield of North Carolina, member of the Standards Subcommittee sitting in on the Quality Committee today, and no conflicts.

DR. COHN: Simon Cohn, I am the national director for health information policy for Kaiser Permanente. Like Harry, we're sort of sitting in as the implementation arm of everybody's good ideas, to make sure there may be some way at the end of the day to implement whatever it is that's decided. I've been told that I need to bite my lip or tongue if CPT comes up since I'm on the editorial panel.

MS. DILORENZO: My name is Jessica DiLorenzo, I work for General Electric -- really just one of the Bridges to Excellence Products so I'm also the operations leader.

MR. QUERAM: My name is Chris Queram, I'm the CEO of the Employer Health Care Alliance Cooperative, a business coalition in Madison, Wisconsin, fellow badger along with Peggy, and I'm here today on behalf of the Consumer/Purchaser Disclosure Group, which is an organization and an initiative that I'll tell you more about in my comments.

MS. TITLOW: I'm Karen Titlow, I'm the director of operations for the Leapfrog Group, and my only conflict of interest is that sometimes I need to get health care --

DR. HAYWOOD: Good morning, I'm Trent Haywood from the Centers for Medicare and Medicaid Services, and I'm also on staff to the Workgroup on Quality.

MR. HUNGATE: Can we include the audience in the introductions?

MS. COFFEY: I'm Rosanne Coffey, I'm from --

MS. BLOUNT(?): I'm Laura Blount, I'm the associate director of federal relations --

MS. COLTIN: I'm Cathy Coltin, I'm director of --

MR. HUNGATE: And for everyone's edification who doesn't already know it Cathy was my predecessor as the chair of this workgroup and labored long and hard in collecting the information.

Much of what we're trying to do today is, if you will, to improve the institutional memory and the specificity of our understanding of the business case for increased measurement, proposed in one fashion as claims data, and proposed as an alternative, maybe, as a new quality transaction which would open a different set of doors, so the specifics are not locked down in that sense. Each of you received a list of, if you will, questions that relate to trying to move, and I don't know whether these two pages are in the packet that everyone got or not. The speaker questions? The two pages that went out, is that there, okay.

The questions themselves do not take into consideration one other aspect that's been talked about within the Quality Workgroup, though not in any very specific way: the question of, for various measurements, what's the best approach vis-à-vis what we can do now and what we should not try to do now because it will be so much easier once there is an electronic health record. What's the mix in terms of where we should spend our efforts now versus waiting for better information from an EHR? So that's not in the specific questions, but it's of interest and you should consider it as you speak.

Format-wise, what I'd like to do is try to keep remarks to 15 minutes and just go sequentially with no questions, if we can do that, because then we can all talk more collectively as a group afterwards. It's been my experience that sometimes one speaker takes a little long and we give up on the discussion, which I think is a lot of where our benefit will be.

The 11:15 to 12 noon time is listed as workgroup discussion; I don't want that to suggest that you should leave. I think we'd welcome your staying for that and participating, because I think the reason you're here is that you've got knowledge that will help us. I would also like to take a couple of minutes and let other people from the Quality Workgroup, both members and staff, add any thoughts that they have in relation to this hearing, and especially ask Simon and Harry to add thoughts that relate to their perspectives. The pressures of time have meant that we haven't had the collective planning process that we might have liked to have had in preparation for this, so if we can take five minutes and allow for some dialogue there, or for issues that aren't covered that should be, I'd welcome it. If no one has anything to add we'll just go right on, but if there is something I'd like to catch it now.

MS. HANDRICH: I don't want to wait, I've been looking forward to this.

MR. HUNGATE: Okay, well hearing no other suggestions I suggest we go forward then.

I want to thank the panel for coming; we very much appreciate your taking the time to work toward our edification and the public's edification. The other comment that I would like to make, a personal opinion, is that one of our measurement problems is that we haven't paid enough attention to the importance of the quality improvement model, whereby the best data is taken by people who take it for their own reasons, who are taking data for their own self-improvement in some way. When you're taking it for somebody else it doesn't have nearly the same impact on your accuracy or correctness of reporting. That factor, and the need to align the measurement system for payers, purchasers, and patients as I would describe it, I think is an important consideration in improving quality. So that's just a generality that I think is important.

With that we start out with Trent Haywood, whom I had the pleasure of hearing present to all of the Medicare people about the new public reporting relating to their initiatives, and I learned a lot more. So Trent, welcome, I look forward to your comments.

Agenda Item: Purchasers - Panel 1 - Dr. Haywood

DR. HAYWOOD: Good morning. I'm sort of preaching to the choir since I'm a member of the staff here, but nevertheless I will start out with some general comments. I'm going to be a little less formal than some of the other speakers, and if that's okay I'll take that prerogative and not stand at the podium unless I'm demanded to. What you should have is the handout, so we can just move along according to the PowerPoint handout.

Let me just start out. You'll see on that first page I have some general comments that I think we already have, or at least allude to, in the recommendations or some of the preamble to the recommendations, but I still want to highlight them against the backdrop of what we're going to discuss today, which is primarily looking at administrative data to get at some of the quality information that we're trying to achieve. And so, as I put at the very beginning, overall in looking at the recommendations I think it's clearly stated that CMS is supportive of both the intent and the approach of trying to determine the best methodology that we can strive to achieve as far as doing well now, recognizing the limitations that we currently have.

And so particularly what I stated is that we should keep in mind that administrative data is really more of a bridge to where we want to go than something we think is the end-all, be-all for some of the problems that we currently have as far as data gaps. In other words, I wrote here that there continues to be a constant tension between building upon a foundation that was never intended to be used in that manner, meaning the administrative data, versus allocating energy and resources towards quicker adoption of the idealized design of EHRs.

And I realize, and I apologize because there's a double page here, so skip that second page and go to the third, that as we continue to work with all the partnerships and providers there continues to be that tension: if you talk to individual clinicians and practitioners, they immediately have some concerns when you mention that you're going to do quality measurement from an administrative source. Particularly, as I'll talk a little bit more about with our physician office quality, once we start moving toward the ambulatory setting, physicians in particular have some concerns about that administrative source and the intent of it; it starts to get into questions of whether you're really helping me become a better coder or really helping me become a better practitioner. So despite our best efforts, I think whatever we end up concluding with the administrative data, you're still going to have to deal with that message and the issue around the source of that particular data, particularly I think for physician offices.

I just quickly highlighted here, and I think some of you already know this, particularly my fellow colleagues, the President's e-Health initiative that was announced in April, which has the ten-year goal of actually trying to get us to the EHR that the NCVHS has been working quite hard on. The first prong was the adoption of health information standards, and I know NHII and CHI have been working closely, kind of under the umbrella of the NCVHS, to actually move that adoption forward and continue to move forward on that.

The second one listed was increasing funding for demonstrations that are IT related, and a lot of that is starting to occur at CMS, where we're starting to have more demonstrations; I'll talk more fully later about some of those demonstrations where we are starting to incorporate or incentivize physician offices using information technology. We kind of modeled that after what the private sector is already doing, so when you hear some of the discussion around Bridges to Excellence and the physician office link as well, the CMS activity is primarily based upon the model that was already out in the private sector.

Then additionally, the President has also asked all the federal agencies, so the VA, the Department of Defense, Medicare, kind of the set of large purchasers, but really across the federal agencies, to look at our programs and policies to see which ones we may be able to alter in ways that would incentivize adoption of information technology systems. So we're starting to do that as well, looking at areas in which we can incentivize adoption, whether that would be through payment or through the way that we design some of our policies or regulations.

And then lastly, I think all of you know that Dr. David Brailer was announced as the new National Health Information Technology Coordinator to move us toward that ten-year goal of having not only EHRs in the sense that we may be talking about in a physician's office or a hospital system, but also the personal health record, so that as someone moves across the community we would still have information flow across that community for that individual. So all that is occurring; that's the backdrop in which we're going to have this discussion. The last thing I did want to highlight as a general comment is the tension that always exists around pace, the timing and the pace of activities, because one of the things that you alluded to, Bob, earlier on is what we want to achieve now versus waiting for the EHR, recognizing the current limitations. I think that continues to be a tension that we also face throughout all of the different quality initiatives that you've heard me talk about: the pace, both from the side of purchasers and consumers, in comparison to maybe where providers currently are at this particular time, or maybe even some of the scientists who may argue for a little more rigorous methodology and for waiting for that particular process or method.

So having said that, let me, before I get into the specifics, because I did try to do my homework according to the way you set it out, Bob, and specifically address those questions, give those members who may not be familiar with some of the activities that we have undertaken at CMS a sense of what we are currently doing; that will help you understand where we are and why I've come up with certain specific recommendations.

In 2001 Secretary Thompson announced what we are calling our Quality Initiatives, which have two primary foci: one, empowering consumers to make informed health care decisions, and simultaneously, stimulating and supporting providers and clinicians to improve the quality of health care. If you look at that middle diagram that's entitled what we can do to improve quality, this is a schematic of a process in which we've tried to address those two goals. In particular, once we are working with our partners to get a sense around selecting the priorities as to where you actually want to move, then the two underneath it become the focus for this discussion, which is the measures: which measures, how to adopt those particular measures, and whether or not you can feasibly collect that data in the way that you intend to collect it, so that it actually shows you what you wanted it to show.

And so that is the methodology, and at the bottom of that what you see is all the levers, if you will, all the levers by which we can influence that process at CMS, whether it be through a regulatory method, through technical assistance, through providing that information to consumers, through structured payment, and then through rewarding that desired outcome, so pay for performance, which is some of the conversation today.

The bottom slide just highlights the fact that within this particular administration, with Secretary Thompson, one of the biggest goals was to really get more information out to the public at large. We continue to do that, and that really remains the central focus for the quality initiatives, to actually get that information out to consumers.

So on the next slide you see Medicare.gov, which is our consumer website, and all the different information that is currently available to consumers. For our Medicare+Choice plans we have quality measures at the health plan level; for Dialysis Facility Compare we have that information as well, looking at adequacy of dialysis, anemia management, and survival rate. In Nursing Home Compare we have that information, Home Health Compare we added in 2003, and then in 2005 we're going to transition to our Hospital Compare and also have that information available to consumers as well.

So we started off, when the former administrator Skoal(?) was here, we actually started off with nursing homes, and one of the reasons why we started there was because the infrastructure was already in place, in the sense that there was a common dataset, the MDS dataset, that people were familiar with; whether they agreed or disagreed with the use of that dataset for quality measurement, at least there was some familiarity with the particular dataset and everyone was actually utilizing that dataset.

Now, we went ahead and looked at that dataset and were able to provide valid and reliable measures that we went out and pilot tested before we implemented them on a nationwide basis. So we're quite comfortable with the validity and reliability of those measures.

Having said that, though, we still recognize that not only internally but even externally, stakeholders have concerns about continuing to rely on that particular dataset, an administrative dataset that really wasn't designed for the purposes of quality reporting. And so MDS 3.0 is out there, where we're right now talking about how to improve the MDS in the revision to MDS 3.0, essentially because there continues to be tension, externally as well as internally, over what we actually want MDS 3.0 to be. Do you want to go down the path of the most clinically intuitive system, one that primarily helps the nurse who is taking care of that resident? Do you want a system that allows researchers to get the data that they need to do their assessments? Do you want it to be a trigger mechanism, so that if you follow the flow of the instrument it would trigger for people who may be at risk for anemia, at risk for falls, etc., versus you identifying the diagnosis first and then having that determine how you're going to route?

So there continues to be discussion, even with MDS 3.0, around what the actual primary focus or goal is, and whether compromises will be reached to achieve all of these things that everyone wants to place on something that wasn't originally designed to do all of this. So that's just one example where I think we'll continue to have discussions around what tradeoffs will actually be made as far as utilizing that administrative dataset, recognizing that we all see that if we had the EHR pretty much the way that we would want it, that would remedy a lot of these issues.

Similarly with home health and OASIS, we've had concerns similar to those with the MDS, in that again you're relying on a system that wasn't necessarily designed to do what you are actually required to do in today's marketplace. It's good for now, as far as where we are currently, but there's a question as to whether or not you want to continue to go down that particular path. One of the visions, at least, for both of these initiatives, for all the initiatives that we've done, is that we work with the National Quality Forum as our consensus-building process, so that we're able to take measurements, regardless of whether they were administratively derived or not, take them through the consensus-building process, and make certain that external stakeholders have input as to whether they're in agreement or disagreement with that particular measurement being reported out as a quality metric.

Now, hospitals; I'm at the top of the next page, on quality initiatives for hospitals. When we move to hospitals you quickly move away from a common data infrastructure in which everyone was using the same data, and you move to a place in which you have different vendors for individual institutions that aren't necessarily reporting data, particularly at the micro-specification level, the same way that you may be requesting that data. There are different requests or demands for data at different levels for the individual institution. So when we moved to hospitals we quickly recognized that we don't have the infrastructure that we had on home health and nursing homes, and so we had to start early on, take a step back, and work with some of the stakeholders here and some who aren't present to get a sense of where we could start off, what could be a common place to start.

So that's why we started by looking at the National Quality Forum and at what the Joint Commission was already requiring through some of their vendors as well. I know Jerod Loeb is on the schedule later today to talk, but we were looking at what they were already doing so that we wouldn't increase the burden, so that we could take what was already out there and build upon it. So we came up with the ten measures for three clinical conditions that are on our professional website and that we'll transition in February to the consumer website, around heart failure, AMI, and pneumonia. Now we're starting a phase where we're going to build out beyond those three clinical conditions to surgical infection prevention.

On the hospital side most of that is chart-abstracted data, so you're moving to what people are really saying about administrative data: it doesn't hold the same quality of data as what you had for nursing home and home health, so you really get into chart-abstracted data, and of course that's resource intensive and there are costs incurred when you're doing that. So again, I think we're all cognizant of that, so we're trying to move forward at a reasonable pace, recognizing the demands that are out there on the individual providers but also recognizing the need, not only for purchasers but more particularly for consumers, to have that information so that they're informed about their particular care at any of those institutions. So the next clinical condition will be surgical infection prevention; providers have been on board with that one as really the first true patient safety one that we'll have as a module or a domain.

I also went ahead and highlighted, because there was some interest in pay for performance as part of the backdrop and the questions that we'll get into more specifically, one financial incentive that is not really pay for performance but a statutory incentive: the Medicare Modernization Act section 501 did have a differential market basket update for those hospitals that would submit the ten measures, so at least they're a little more incentivized, if they weren't already doing it voluntarily, to provide that information to consumers.

Then at the bottom you'll see a few slides that talk about the Premier Hospital Quality Incentive Demonstration, and most of you I think already know that when CMS talks about a demonstration we're talking about a payment demonstration where we're somehow trying to incentivize, or at least test, the marketplace through financial incentives; this one links it to quality, and so what you'll see on the next page is that for those providers who score in the top two deciles we'll have financial incentives linked to those, two percent and one percent respectively for those two deciles.

We've started to put that information on the website. Now, the Premier demo provides some backdrop and instruction on how difficult some of this activity can be, in that it was designed to test the financial incentives, not necessarily to tell us where we're going to go overall, because we took it as kind of an unsolicited proposal; Premier was able to come to us and tell us that they could actually provide measurement for 34 measures, so they were ready to provide this information on a larger set of measures because they have the online database built into their particular system, so they provide that information.

If we wanted to try to model that beyond them, as some have come in and talked to us about doing, we quickly recognized that many institutions aren't necessarily set up to do it; even internally within CMS we have to build out our enterprise system to accommodate all that activity, because this is just one small demonstration and it wasn't designed to be built out on a national level. We've started to build out our own infrastructure to support that activity as well, so we think by next year we can accommodate some of that, and when I talk about the QIO data warehouse that's what I'm talking about, that we need to build out our own enterprise. In other words, if all the hospitals right now just wanted to submit the 34 measures across the board, we wouldn't be able to accommodate that today; we'd be able to accommodate it next summer, but we wouldn't be able to accommodate it today. So that's one thing to keep in the back of your mind: even if you got what you wished for, you also have to make certain it links with the infrastructure that would actually support it.

And then lastly, before I get into the specifics that you asked me for, I just wanted to quickly highlight one thing (I'm going to skip physician quality and talk about it later), which is the ESRD quality initiative. With ESRD they already were providing kind of lab information, so we have, for instance, adequacy of dialysis and anemia management, where on the actual administrative claim form they're providing that information to us. But they're financially incentivized to provide that information to us, and so we haven't had much concern about getting it; we've actually been able to get that information, whereas without that financial incentive it may be viewed as kind of an additional step and an additional burden without much return for the individual institutions.

For instance, on ESRD one of the things that we're going to have is a demonstration, and as you see on that slide, in addition to the quality measures we already have, one of the measurements that we're talking about is bone measurement activity, which takes calcium, phosphorus, and intact PTH, which is parathyroid hormone, kind of looking at bone metabolism. And we quickly realized that the administrative data is actually not that great once you start trying to get at that particular level and try to come up with bone metabolism. So we're thinking that we may end up being forced to go to the chart level to get that information rather than go with administrative data or try to build out that administrative dataset.

Now I'm going to skip these slides; at the bottom of, I think, the next page, I start off with considerations for NCVHS, where I just walk through the specific recommendations. I lumped recommendations number one and two together for purposes of discussion; recommendation number one I believe was lab tests and number two was vital signs. What I wrote here is that the need to collect selected lab results and vitals is important, as they would improve our ability to evaluate an immediate outcome. So we have no doubt that this information is definitely important and would be a priority for us. And then I said, for example, hypertension control: if we really wanted to get a sense of how effectively someone is taking care of a hypertensive patient, we would need to be able to get at this level, so we definitely think that's valuable. And we think it would be valuable for us to have the lab results similarly if we want to get at control, for instance diabetes control, things of that nature, LDL control; all of that is valuable and will be important for pay for performance as well, if we really want to pay for performance.

I did note that I don't know if we can say all vital signs are equal; I think hypertension control is an example where people would recognize that. If you start talking about fever or pulse or respirations, particularly respirations, I don't know what that gets you; to be honest, working in the ER I discount respirations unless I take them myself, so I pretty much ignore respirations. And oftentimes I ignore fever unless I've gotten it twice. So I think all vitals aren't necessarily equal, but I think they're important.

Until the EHR is fully implemented, this is probably the best that we're going to do, which is, as you outlined, to try to get it through some type of administrative process, so I think we're in full support on that.

Recommendation number three, secondary diagnoses: this does continue to be an issue that we're looking at, and we do agree with you; we think that right now the best methodology is again going to be some type of administrative fix for this. So we're in full support, I think, again with that, and we recognize the value both from the standpoint of trying to do risk adjustment and from the standpoint of trying to look at complication rates; if you're going to try to look at some of these complications, then it's important to have that information.

Recommendation number five, and this is, sorry, recommendation number four, I skipped one; recommendation number four I believe is the operating physician identifier for the principal inpatient procedure. In talking about this internally, we agree that this is probably an appropriate mode if one is only interested in that principal inpatient procedure. In other words, for the individual patient, depending upon the complication and what procedure they had, that may or may not be of interest to you; I can imagine a lot of patients, particularly any patient who went to the ICU, who have gotten a tracheotomy and have multiple different procedures, so it really would depend upon what your point of interest is as to whether or not this is the information that you're going to want.

So I think that was the only limitation we had with that particular recommendation. I think there will be additional burden, but I don't think that additional burden for this purpose would necessarily outweigh the need to have it or the value, as the questions had framed it for us.

I'm going to just quickly run through recommendation number five; this would also, again, be important I think for performance assessment and pay for performance. There's not a data element there as to risk adjustment; I think one of the specific questions that was asked was about risk adjustment for this. There is a concern about whether this information is consistent with what you really want; in other words, I'm thinking about timing of antibiotics or thrombolysis in the ER, whether or not I've actually admitted that patient from the ER before I've given thrombolytics, and I'm not quite certain that this would get you the information that you want. So I think there are some clinical issues where this may not necessarily get you the exact data that you were actually wanting.

And then, if this is one of your recommendations, I think this one may actually be considered a little burdensome, although I know that we purposefully selected conditions so that we could reduce that burden, and I gave examples of the surgical infection prevention measures or AMI measures. But overall, if we can actually do these, at least I'm presuming that this is going to be less costly than chart abstraction, so it's just a matter of whether people are actually demanding that information; if providers truly have to provide it, then this would be a better route than having to continue to rely on chart abstraction. If there's not a demand for this information and we're forcing them to do it, then I think that becomes more of a burden for them.

And then real quickly, to get through the last two recommendations here. Recommendation number six: in reading this one, I think this is definitely helping us get into the global procedure codes and somewhat disaggregate those, if you will. This is definitely important for coordination of care and the continuum of care for the individual patient. I'm not certain about the utility for pay for performance; I'd like to hear from some of the other panelists on that. When I read it, I didn't get a sense that it's something we might leverage for pay for performance, to be honest, and I didn't really get a quick sense of any risk adjustment purposes for it.

And then lastly, that was one of the ones that I think was more directed to CMS, so I did talk to one person in CMS. I didn't truly vet this with the Center for Medicare Management, which does the payment side of the house, but I did vet it with one of the managers over there, and he found that this would actually be problematic; there would just be problems with the reality of trying to get physicians to go back, disaggregate, and bill for something that we know we're not going to pay them any differently for, because it's wrapped into the global billing process. So from his experience he thought it would probably be problematic to take a billing approach to this.

And then functional status. Karen, I'd be interested to see what you thought, so if I say something wrong I defer to Karen. I did talk to some of my team about this, in particular some of my team in long term care who have had a lot of experience with functional status. Overall the general sense was that there was definitely a positive and eager response to actually getting functional status and getting at this level; the concern is that there's no standardization around it, and that continues to be a primary barrier to actually having that functional status, which is what the recommendation, I think, is trying to achieve, to actually provide that type of standardization.

And so we're in support of trying to explore that possibility and get a sense of what it would look like, whether that would be ICF or some kind of ICF with some other lexicon to help us get at that standardization. So I think that's where we would come down on functional status: if we could actually get standardization around functional status, then that would be beneficial across the board in being able to really look at true improvements in care and improvements, to the highest level possible, in independence.

So with that I will stop and look forward to the discussion.

Agenda Item: Purchasers - Panel 1 - Ms. Titlow

MS. TITLOW: Just a little bit about us so that everyone knows who the Leapfrog Group is: we are a consortium of more than 155 health care purchasers who purchase on behalf of more than 34 million Americans. We spend more than $62 billion on health care expenditures every year on employees. It's also important to note, as Trent alluded to, that it's extremely important for us as an organization to work very closely with other organizations that are trying to standardize measures and to standardize pay for performance, so CMS, DOD, and OPM are liaison members. And we want to make this point strongly: we think that what you're trying to do here is critical so that all of our efforts will have the maximum ability to move forward quickly. It has just been a daily struggle for us to implement what we think are the most important ways to improve health care in the country without this level of data; it is the minutiae that we are all being stuck in, all of these groups together. We're trying to standardize our message to the providers and to the health plans about what we're looking for, and we absolutely need help in standardizing the data collection process. So that's our global statement, the biggest statement that I can make for today: this is absolutely critical for all of our ability to succeed.

The Leapfrog Group's mission, our overall mission: we are mostly known as a patient safety organization, and that is how we started, but we started with patient safety because that was one place where there was more data that we were able to collect. It was also something that consumers could pay the most attention to. But our larger goal, and this is how we are now recasting our goals out to the public, is to trigger giant leaps forward in the safety, quality and affordability of overall health care by supporting informed health care decisions by those who use and pay for health care, and by promoting high value health care through incentives and rewards. What's really critical in this mission statement for this group to see is the phrase informed health care decisions, because right now almost all health care decisions made by consumers and by purchasers are made by guessing. We are using financial data almost exclusively to make decisions, and we don't do that when we buy any other product. If, for example, I'm buying car door hinges and the hinges fall off all the time, even if they were really inexpensive I would be less likely to buy that product. But we can't make those same kinds of judgments with health care, and the data that you're talking about here, the expansion of and the ability to get to this information easily, really is going to make the difference for us in being able to make informed decisions, both on the purchaser side and for the individual consumer, which is half of our mission, or half of our goal: to really get out to the consumer, because it's important for them to have this data.

What we're really promoting in order to achieve this mission is a new purchasing model. We need two strong pillars for this new purchasing model. The first is transparency, and transparency, as I alluded to earlier, is a major goal of Leapfrog; it's an even stronger goal of the Disclosure Group, who you'll hear from in just a moment, which is to have the broadest possible access to standardized measures so that everybody has access to look at those measures and is able to compare individual providers. In order to do that we need easy access to standardized quality health care data, and I have to tell you, every time I work on our survey (we have a survey that hospitals report to), the most push back that we get from the hospitals is that everybody and their cousin is asking for something slightly different, and that is where the burden is; it's the variation in what is being asked for that they are most resistant to. They are much less resistant now to the fact of just being transparent; it's the administrative burden. So I think that the standardization that could come about through the efforts you are trying to push forward here could make a huge amount of difference in allowing the hospitals, encouraging the hospitals, to report what in fact they are now more willing to report than they've ever been willing to report before.

We also believe fundamentally that it's a patient's right to know about the quality of care that they can get, and this data, as I mentioned, is a very strong piece of what they need to be able to make that informed decision. From the purchaser's standpoint, we have been buying based primarily on the cheapest possible deal that we could get, and that is an ineffective way to try to improve the quality of health care. All the members of the Leapfrog Group have formally committed to buy based on the highest quality of care rather than on the cheapest care that we can get. In order to do that we need to, first, know what high quality care means, and we need to be able to then differentiate between the high quality providers, whether by paying them separately, by shifting more volume to those higher quality providers, or by giving them some form of public recognition for the fact that they've done a fantastic job as providers.

But if we can't measure it we can't reward it, and if we don't reward it, the IOM has strongly stated that it really will not happen. So as you can see, and I reiterated it before but I'm going to say it again, we really have to have this data or we are not going to be able to achieve our goals or the goals of the IOM's To Err is Human, and those are critical changes in our health care system that need to occur.

I did a very similar thing to what Trent did, in that I took your very specific questions and went through them: I talked about what the recommendations are, what we believe the benefits are, some of that in general, and then also related each one specifically to what Leapfrog is currently doing so that you'll have a sense of where it would help our current efforts. I can also provide further detail later and just send it to you if you need specific follow-up information about that.

So, mechanisms for reporting lab results: that is helpful for case identification, for identifying appropriateness of care, and for tracking utilization when the same labs are ordered over and over again in different doctors' offices, which is just a waste of a patient's blood and everyone's time and energy. This is specifically useful in Bridges to Excellence, which Jessica is going to speak about; it's a partner organization with the Leapfrog Group. Rewarding Results would also need this kind of information; Rewarding Results is an RWJ grant program for six pay for performance efforts that are being done across the country, and these recommendations would support those efforts. Leapfrog administers the Rewarding Results program.

E2 is an incentive and rewards program that has been developed by the Leapfrog Group; it's going to be going public later on this summer. It's a specific program basically based on the Premier effort that CMS did, and we have developed it in partnership again with CMS and with Premier to try to remain as standardized as possible. This kind of information would help allow us to administer the program, which is very important, and also for physician decision support, which we're working on again with CMS and with AHRQ; these are the kinds of information that we would need to have access to to make it as easy as possible to collect data.

Going forward, I'm not going to go over every single one in each one of my blocks, I will just call this the incentives and rewards. Basically all four of these programs are major incentives and rewards programs.

Second is the mechanism for reporting vital signs in standard transactions. I think that Trent is right, and this is just from my history as a provider, that there are certain vital signs which are more important than others, but I will leave those decisions up to the experts. In terms of whether there is a business value to it: it's an important avenue to help ease the tracking of interventions and to reduce the administrative burden on hospitals, because we are asking for this information, and if we ask for it in a standardized format over and over again it's going to make it much easier for hospitals to provide the data. It supports two efforts: the I&R that I mentioned above and the overall goal of reducing administrative burden on providers.

Flagging diagnoses that were present on admission: I'll go through what our top priorities are later, but this would be one of our top priorities. It's important for improving risk adjustment; we started out focused on patient safety and we're still extremely concerned about it, and this is the perfect way to help identify iatrogenic injuries. It's a very important process that would help us.

Are there any questions? You looked questioningly at me.

MR. HUNGATE: -- iatrogenic injuries --

MS. TITLOW: Iatrogenic injuries are ones that are caused by the hospital, like you didn't have it when you came in, I think.

DR. EDINGER: I'm curious. One of the problems when you come up with lab results and standardizing data for reporting, like the LOINC code (I went through this about 20 years ago), is that when you have different values reported by different methodologies, for example the same test done by five methods, the value you get may be different, and that may mean absolutely nothing because of the way the tests are standardized, the normal ranges, etc. So you can have misinformation by just looking at the actual number without knowing the methodology, and in some cases you can't compare them, because there's no linear relationship or simple linear relationship, and sometimes just getting a number back may actually be more misleading than not having the number at all. Have you ever addressed those kinds of issues?

MS. TITLOW: This is definitely beyond the scope of what the purchasers want to do. We are hoping, honestly, that the experts in the field would determine these issues and that we would then defer to the experts; that's how Leapfrog always works, we convene expert panels, and we're trying less and less to do that ourselves and instead looking toward standardized panels that would set those practices for us. We make cars, and that's beyond our capacity.

MR. HUNGATE: We're going to try and hold discussion until after all the presentations.

MS. TITLOW: All right, the operating physician identifier code. This is important; I'm only going to look at this from our perspective, a payer's perspective and a quality perspective, which is that we would really like to be able to pay individual physicians for delivering the high quality care that they need to deliver. The same sort of problem that we're having now is that we tend to shift toward different health plans who we assume are helping to provide the highest quality care. We are now looking at hospitals, but we really need to get down to the granular level of the individual physician, and this is a critical piece of information to be able to do that.

Specifically, it's important because we currently have a standard for evidence-based hospital referral volumes; this would allow us to collect that data at the physician level. We'd like to shift: right now we only have hospital volumes, and new evidence is showing that individual surgeon volumes are probably more predictive of better outcomes than hospital volumes. We cannot collect that data unless we get this sort of information; we just don't have the capacity to do so. And we want to be as accurate and fair as possible, so in order to collect that we would like to have this information.

It's also related to Bridges to Excellence, which Jessica will talk about more. We have gotten a lot of recommendations from hospitals and from physicians about our payment plan, which is that we only incent the hospitals for meeting our evidence-based hospital referral standards, and they said you really have to share that with the physicians, because otherwise the physicians themselves don't have much incentive to try to improve their own care. If we had this level of data we could do a form of gain sharing with them, so we think that would be helpful.

Dates and times for admissions and procedures: this is relevant in particular for the evidence-based hospital referral process measures; evidence-based hospital referral covers seven conditions that we collect individual data on. We ask hospitals what volume they do and whether they do specific process measures, and on certain conditions we try to get outcomes measures, and these are related to some of those particular hospital measures. In a broader way this supports our incentives and rewards, but it's not a huge issue for us, frankly.

Episode start and end date: my director of leaps and measures explained that it improves risk adjustment for us. We get a lot of push back about various risk adjustment methodologies since, just as there aren't a lot of standard measures, there are very few standard risk adjustment methodologies; if you'd like to pick that up after you finish this entire project, that would be good too. But this would help us as risk adjustment methodology is improved.

And Trent is right; as a former physical therapist I am two thumbs up on creating a functional status code. When you're talking about employees, a functional status code is really where the rubber hits the road: when can they go back to work, how much can they lift, can they still drive the car? This would really be the Holy Grail of being able to relate the care to getting your employees back to work. So this would be extremely helpful for us, and obviously, as Trent said and as you have recognized, you have to set the standards first, come to an agreement, and then be able to collect the data. But I definitely think, and not just me, the organization thinks, it's worth investing in the effort to determine what those standardized measures would be.

In ranking the value of the recommendations, we rank at the top the mechanism for reporting vital signs in a standard transaction and reporting the lab results. Some of this is because we believe this is really going to force the hospitals to put EMRs in at a faster rate, and because of that I think there's a wider benefit, aside from just the specifics of these two measures: it would have a cause and effect that really pushes EMRs faster.

The flag for diagnoses that were present on admission: this is something we could turn around very quickly into incentives and rewards programs, and it would really help identify what things were caused at the hospital level and what things were present before admission. And then I noticed that I somehow deleted four, which should have come after that. Seven and eight then again, the functional status code: the reason that it's a little bit lower, even though I think it should be one of the most important things we do, is that it's realistic for us to realize that it's going to take longer to put in place, and my understanding of your question was what's the biggest bang for the buck in the short term; that's how I've made these rankings.

And then further down, dates and times for admissions and episode start and end dates: important and helpful, but in terms of overall rankings probably less so.

Correlations between recommendations, every one of them could probably stand alone but since the question was asked we think seven and eight go perfectly together, I think that's fairly obvious. Three, four, and five would have a bigger bang for the buck if they went together as would one and two.

And that's it.

MR. HUNGATE: Very good, thank you. Chris, your turn.

Agenda Item: Purchasers - Panel 1 - Mr. Queram

MR. QUERAM: Earlier Trent asked for the committee's indulgence to allow his remarks to be made while sitting down; I'd like to ask for that same indulgence. I would also have to confess that I am the underachiever in the group, as I don't have a PowerPoint presentation nor a handout, so I feel embarrassed and outdone by my colleagues, but I'll persevere nonetheless.

Much of the material that I'll be drawing upon we have developed and submitted to the National Uniform Billing Committee, so if after I conclude my remarks there is an interest in seeing a little bit more of the documentation behind some of the concepts, I can easily make that available to you.

Let me first tell you a little bit about who the Consumer/Purchaser Disclosure Group is, by way of background. I introduced myself as the chief executive officer of the Employer Health Care Alliance Cooperative; we're a business coalition in Madison, Wisconsin (some refer to it as the People's Republic of Madison, 25 square miles of land surrounded by reality). But Madison is a unique community, and one in which the business community and the public sector are very organized in terms of effectiveness around health care purchasing. The Alliance is an employer-owned and directed health care coalition; we have about 165 member companies, and we pursue a direct purchasing model, a direct contracting model, with health care organizations, not through health plans. One of the attributes of our model is that we emphasize the importance of data; we collect and maintain an administrative data repository that we use for reporting of cost and utilization information, and increasingly we're using it for comparative provider performance reporting, recognizing all of the limitations inherent in administrative data.

The Alliance is one of the founding members of something called the Consumer/Purchaser Disclosure Group, which is an initiative supported by the Robert Wood Johnson Foundation. The primary purpose of the Consumer/Purchaser Disclosure Group is to advocate for a full dashboard of quality measures so that by January 1st of 2007 all Americans will have information with which to make informed decisions about hospital care, about individual physician care, about medical group and integrated delivery system care, and about treatment options, in ways that cross all six domains of the quality chasm: safety, timeliness, efficiency, equity, effectiveness and patient-centeredness.

A primary focal point for our advocacy is the National Quality Forum, because we believe strongly in the important role and mission of the Quality Forum as a convener and as a consensus-based endorser of measurement sets that will promote greater standardization. We support it in part because consumers and purchasers enjoy an institutional advantage in the National Quality Forum, which was a very conscious design feature: as part of the consensus-based process, and more importantly the board approval process, consumers and purchasers occupy a majority of the board seats. That is a significant recognition that our needs as the customers of the health care delivery system have in many ways been unmet or under-met by the providers of health care services, and it's a way to redress the imbalance, particularly around issues of measurement and disclosure of information.

We represent about 60 large purchasers and consumer organizations; some of the names are certainly far better known than the Alliance. Groups like the Pacific Business Group on Health, 3M and Motorola represent some of the private sector purchasers that are involved. On the consumer side, the National Partnership for Women and Families, AARP and the AFL-CIO are among the member and endorsing organizations.

Why are we focused on transparency? Part of the reason is the expanded purchaser fiduciary responsibility in designing and offering a health insurance benefit that ensures there is a modicum of assurance of quality for the services that are provided to beneficiaries. And I think that sense of fiduciary responsibility has been heightened in the wake of the Institute of Medicine reports, beginning with To Err is Human. Certainly we think also that with the movement toward increased consumerism in health care, particularly consumer directed models of health insurance that place more responsibility on individual stakeholders for making decisions, there is a need for greater transparency.

But perhaps above all is a recognition, to play off of the old aphorism that what gets measured gets improved: our belief is that what gets measured and reported publicly gets improved faster, and we think that there is increasing evidence to suggest that in the literature. Our own experience with the publication of a comparative hospital performance report in the fall of 2001 was the subject of a research study reported in Health Affairs by Judy Hibbard, which clearly demonstrated that hospitals whose performance is reported publicly were catalyzed and invested a greater degree of resources in improving quality in the areas where their performance was shown to be deficient than those hospitals that had the benefit of a private report or had no report whatsoever. So we are firmly of the conviction that transparency moves markets and transparency changes behavior, especially if we can begin to link transparent information with rewards and incentives.

We are very interested in the use of billing or administrative data to foster and support this movement towards increased transparency. We are firmly of the conviction that there is no other means available near term to move performance reporting. And while we appreciate and are sympathetic to the tension that Trent cited between the need to do something now versus the need to move towards more electronic means of gathering and reporting information, the simple reality is that our consumers and our beneficiaries and those of us who work closely with purchasers cannot afford to wait until such time as electronic means of gathering and reporting information are widely disseminated and adopted in the industry.

We also think that the opportunity cost of doing nothing dwarfs the direct cost of collecting information. We recognize that there is an added burden associated with collecting and transmitting additional administrative data elements, but we do believe that the opportunity to use that information to drive improvement, to make better decisions, and to inform quality improvement efforts within health care organizations outweighs the cost of collecting that information.

About a year and a half ago we learned that the National Uniform Billing Committee was beginning its every decade or so process of looking at changes to the UB-92 form; for a while it was the UB-02 form, now I think it's the UB-04 form, so it's become a decade process that's morphed into a 12 year process. And one of the members of our staff, who spends a lot of his time thinking about ways to use information to drive a value purchasing agenda and to use administrative data in a responsible way, realized that we had this window of opportunity to try to influence the NUBC process, literally a once in a decade opportunity to try to get more information that could support better performance reporting and pay for performance demonstrations.

We brought that to the attention of the Consumer/Purchaser Disclosure Group under the leadership of Arnie Milstein, who serves as the medical director for this initiative. We convened an advisory panel, a couple of people in the room today were a part of that advisory panel, hopefully they remember that, so we were able to vet a number of suggestions, and what we came out with were six broad areas of expansion of administrative datasets that match very nicely with the recommendations in your report. Our recommendations align perfectly with recommendations one through six in your report. With all due respect and deference to Karen's comments about the importance of functional status measures, that was not on our list, but what is on our list are the following.

Secondary diagnosis codes present on admission: we would rate that as a top order priority in terms of our ability to differentiate complications from comorbidities and begin to move forward with more appropriate severity adjustment of hospital performance. We experienced this two and a half years ago when we published our own report on hospital performance and referenced it in very strong terms, using some of Judy Hibbard's research on the importance of evaluability, and talked about the need to select hospitals that have lower rates of complications, deaths and medical errors, not often terms that you see in public performance reports associated with hospitals. And one of the criticisms that was levied against the report is the very legitimate concern about complications that are present on admission versus those that arise within the hospital setting, and we think the addition of a secondary diagnosis code flag for those conditions present on admission would be a significant improvement in administrative datasets.

The second area that we are interested in is unique physician identifiers for each hospital procedure, which allows an outcomes assessment by hospital proceduralist and, we believe, is more precise than retrospective linkages of records. We'd like to see as a third area vital signs at admission, things such as heart rate, blood pressure, temperature, respiratory rate. We believe they're a powerful predictor of mortality for common conditions like AMI, congestive heart failure, and asthma.

The fourth area we're interested in is key lab values at admission, notwithstanding some of the technical concerns that were suggested earlier. A fifth area we're interested in is do not resuscitate orders that are present within the first 24 hours, so that we can distinguish complications that precede a patient's do not resuscitate request from those that follow it. And then lastly we're interested in time of admission, discharge, and procedures, so that we can do a better job of understanding and reporting delays in treatment for patients within the acute care setting.
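To make these six proposed additions concrete, here is a minimal sketch, in Python, of one way an expanded inpatient administrative record could carry them; every field name, code, and method below is an illustrative assumption, not an NUBC or UB-04 specification.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class SecondaryDiagnosis:
    diagnosis_code: str
    present_on_admission: bool          # proposed POA flag

@dataclass
class InpatientProcedure:
    procedure_code: str
    operating_physician_id: str         # proposed unique physician identifier per procedure
    performed_at: Optional[datetime]    # proposed procedure date and time

@dataclass
class ExpandedInpatientRecord:
    principal_diagnosis: str
    secondary_diagnoses: List[SecondaryDiagnosis] = field(default_factory=list)
    procedures: List[InpatientProcedure] = field(default_factory=list)
    admission_vitals: dict = field(default_factory=dict)   # heart rate, blood pressure, temperature, respiration
    admission_labs: dict = field(default_factory=dict)     # key lab values at admission
    dnr_within_24_hours: bool = False                       # DNR order within the first 24 hours
    admitted_at: Optional[datetime] = None
    discharged_at: Optional[datetime] = None

    def hospital_acquired_diagnoses(self) -> List[str]:
        # Diagnoses not flagged as present on admission are candidate
        # in-hospital complications rather than comorbidities.
        return [d.diagnosis_code for d in self.secondary_diagnoses
                if not d.present_on_admission]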

A couple of comments about the ability to use this type of information for pay for performance: some of you may be aware that our organization, the Alliance, hosted a pay for performance conference in Madison in early May, which was funded by AHRQ and which was designed to begin to introduce some of the concepts in an evidence based practice center report that AHRQ is issuing soon on the current state of the art of pay for performance initiatives. One of the takeaways for me from that conference is that we don't know a lot about what works in the area of pay for performance, and it would be risky for me to sit here and suggest that all of the items that we have outlined as being important to us in terms of expansion of administrative datasets will easily and readily lend themselves to pay for performance demonstrations. The reality is we don't know what works, we don't know the size of the incentives, we don't know exactly how to motivate changes not only in provider behavior but beneficiary behavior. But this is a time of critical importance in moving forward with experimentation; you'll hear in a minute about an important demonstration known as Bridges to Excellence, and we are of the conviction that employers are motivated to begin to experiment. With this type of information I think you will see a change in behavior on the part of purchasers who are looking for strategies and techniques to begin to combine information with meaningful incentives to not only impact provider performance but also change beneficiary behavior and decision making.

With that I will stop and yield whatever time I might have left to Jessica and look forward to the comments and questions.

Agenda Item: Purchasers - Panel 1 - Ms. DiLorenzo

MS. DILORENZO: Good morning, my name is Jessica DiLorenzo, I work for General Electric, and I also work for a not for profit company called Bridges to Excellence. And I say that kind of in a tongue in cheek way because there really is no staff at Bridges to Excellence, I think you're looking at it; with all the partners that we have, like MedStat, we've been able to do a lot of work with the people that we have involved.

Really what I wanted to do today was to talk about the recommendations within the framework of Bridges to Excellence and also what GE's thoughts are about transforming the health care marketplace, and review the mission, talk a little bit about the specific Bridges to Excellence products, and then do a high level crosswalk between your recommendations and the metrics, the performance metrics that are related to Bridges to Excellence. And I have probably the advantage of being last because I can just say I can agree with everybody previously, I won't do that, I may be a little bit quicker but that's okay, I have to admit that up front.

First of all I just wanted to say that from the GE perspective, and I have two hats on, so I'll try to delineate which one I'm wearing when I speak here, but from the GE perspective we imagine a health care market where service providers are free to compete based on the published value of the services they provide and are rewarded for high performance. So this is really what's motivating us and why we think what the committee here is doing today is so important, because there are some very key words in there for a very good reason, and the transparency piece of it, as we have heard before, and I have to agree 100 percent, is what's going to drive the marketplace. By the published value of the services that they provide we mean here also that the networks are not necessarily restricted, the consumers have choice, that they are free to choose, that they are sensitive to the value of the services that they consume and that they're engaged. So that's a very lofty mission and something that we're very committed to through Bridges to Excellence and the Leapfrog Group and some of the initiatives that we're working on.

I think it's not really news to anybody that to create this market we're going to have to solve a few problems, the first of which we're focused on today, which is to harmonize the performance measures on effectiveness and efficiency across the industry, and that will get you to that value quotient for the providers or for the purchasers in getting involved in this, and then working with the health plans and developing the products that will encourage consumers to shop for value. So as a purchaser, speaking from GE's perspective, we are at this point involved in these initiatives to be a catalyst in the market but also to work with our health plans and other purchasers to move the market so we can continue to leverage all the good work that we're doing. And it's important to succeed in significant market experiments like Bridges to Excellence so that we can demonstrate that change is possible here, and we have started to do that with Bridges to Excellence in the four markets that we've rolled out in.

Bridges to Excellence is an answer to some of the, I think that should be in quotations here, maybe not problems, but Bridges to Excellence is an answer to some of the problems. For those who don't know much about Bridges to Excellence, it is a multi-stakeholder approach to creating incentives for quality. It is purchaser funded, so we're made up of large and small employers, Procter and Gamble, Verizon, UPS, Ford, hopefully I'm not forgetting anybody, and some local and regional employers as well, retail market chains and such in upstate New York.

When we came to the table to talk about Bridges to Excellence we didn't do it alone, we brought the health plans, we brought experts in industry, we brought the providers and the consumers, and really hashed it out for a good two years as to what the measures should be, what the incentives should look like, and what was important to everybody. We used Six Sigma quality rigor in the project, so that involves getting all of the customers' points of view and all the stakeholders' points of view in designing this, so that we broke through as many barriers as we possibly could and could have answers to potentially any of the sort of responses we got along the way.

You can see our mission there, which is to improve quality of care through rewards and incentives that encourage providers and patients, so again, it had to be a two-fold effort, it wasn't just focused on one segment or the other. And we knew that we had to do that just based on the economics of the way that it would be set up, and also the physicians told us that you can't just beat us with a stick, you have to give a carrot out there on the patient side as well to be motivated to change. And we do have that as part of the program as well; it's just getting ready to launch, so I don't have any specific data to share on that.

The focus is on office practices and then two specific condition areas, diabetes and cardiac care, rolled out in selected markets. There are four markets right now that we've brought up in the last year, and the very key selling point for me on Bridges to Excellence is that we've been able to create with our partners an operational model that is very scaleable to pretty much any area. I think that what we've seen in some of the other valuable demonstration projects is that they're very specific to an area or a situation, and it's hard sometimes to pick those up and move them to other areas and replicate the same results. With Bridges to Excellence, the way that we're set up, with the rewards and the measures that are NCQA standards, we are able to pretty much bring that into very different markets with very different levels of physician organization and still make an impact. Program costs are paid fully by the participating employers.

Specifically, Bridges to Excellence has, like I said, a diabetes care link, a cardiac care link, and a physician office link; those three names are the names of the Bridges to Excellence products. The measurement sets are administered by the NCQA; the NCQA had national measurement sets prior to Bridges to Excellence coming along, in diabetes and then later in the heart/stroke recognition program. We haven't really had much push back from the physicians at all on the specific measures, and I'll show those a little bit later, so they're fair but they're tough, and I think that with the proper infrastructure in place many more physicians will be able to report on their activities and gain recognition.

The objective of course is to improve the outcomes for patients with diabetes and cardiovascular disease, and we're targeting the primary care physician groups and the specialists that treat those types of patients. Physician office link is something that rewards physicians for all covered lives, so it's population management at the practice level, redesigning the processes of care along the lines of the Quality Chasm, and all primary care physicians that can produce a certain amount of records on those patients are eligible to apply.

The fourth one, which is new, is hospital care link, and we're working with the Leapfrog Group on that; we're looking to basically use the Bridges to Excellence operational model to leverage the hospital care link, and I think that we're very new in those discussions.

This is kind of an interesting slide that I like to show. What stands out to me immediately in our discussions today, at the very center of the slide, is standard measures, and pretty much all of what we're trying to get at here is focusing around the standard measures, and of course you can't get the standard measures unless you have ways of reporting them that will ease some of the administrative burdens for physicians and hospitals to do that. So basically you can see how we've organized around this and how we've approached this. You could say that with the Leapfrog Group and Bridges to Excellence the first step really was to organize a large amount of stakeholders around an objective and to get a lot of focus and visibility, if you will, for the missions, and then create some standard measures that focus on process and outcome improvement so we can move, shift to hospitals, and we can change the way the physicians are practicing.

And then again, and this was important to purchasers of course, using measures that will be actuarially sound, so that there is a return for the purchaser and there is a business case for purchasers to get involved in moving the market. We do want to improve the quality, absolutely, but we're also looking at the savings at the very individual metric level, especially when we take a look at blood pressure and lipid control.

So that's the framework that we've come up with, that we think is going to break the gridlock, and why we think the discussion today is so critically important.

I wanted to show what the process and outcome measures are for, and DPRP is actually the correct title for that, the Diabetes Provider Recognition Program through NCQA. You can see that the measures include HbA1c, so they're not just process measures, they are actually lab measures as well. And I can tell you the burden of getting this information for some of the practices is a barrier that's keeping them from going forward, and it's especially true in some of the smaller practices --

PARTICIPANT: I have a question, is that correct, cholesterol less than 100, less than 130?

MS. DILORENZO: That's a range, correct. So these are the NCQA measures. I also wanted to point out here that we have two measurement sets, there's a three year and a one year measure; the one year measure is actually specifically for Bridges to Excellence, and this was in response to what the physicians had said early on, that they wanted to just sort of get their toe into the process and maybe they didn't want to go for the full measurement set, so provide a subset of measures that would sort of give us an in. No one has elected the one year certification process at this stage of the game.

What also comes along with the one year certification is a risk adjustment option. That again was in response to what the physicians asked for: our patients are sicker than everyone else's, we're not going to be able to make it, please provide some sort of risk adjustment for us, which we did design in; again, no one has opted for that. So while I think it is important that the measures around risk adjustment are sound, from our perspective, at least at the physician level, early in the process no one has opted for taking the risk adjustment. And it's free, there's no cost associated with that as well.

Taking a quick look at the cardiac care link performance assessment, again there's a three year and a one year. What's different for cardiac care link is that there's no reward associated with the one year measure. And the reason that is true is because when we took a look at the actual return on investment for passing each of these performance assessment metrics we were hard pressed to find a savings, quite honestly, with some of the measures, and blood pressure and lipid control was really going to give us the bang for the buck. And so what we did instead, rather than coming up with a blended reward fee for physicians and sort of weighting it out between very, very high performers and then some maybe okay performers, we decided to come up with a tiered reward. So someone who is performing at the highest level of blood pressure and LDL control will get $160 per patient per year as opposed to the $80 for the three year.
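As a minimal sketch of that tiered reward, assuming only the $160 and $80 per patient per year figures from the testimony and a hypothetical flag for top-tier blood pressure and LDL control (the actual Bridges to Excellence payment rules are more detailed):

def cardiac_care_link_reward(top_tier_bp_ldl_control: bool,
                             three_year_recognition: bool) -> int:
    # Illustrative tiered reward in dollars per patient per year.
    # The one year cardiac measure set carries no reward.
    if top_tier_bp_ldl_control:
        return 160
    if three_year_recognition:
        return 80
    return 0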

The one year, again, is the subset of measures that we did come up with, and the reason we did keep it in there is that part of the recognition is not only rewards but getting publicly recognized as well, so we wanted to at least keep that in there as an option. Again, these one year options may sunset depending on how we proceed; we're still very new in the roll out, we just completed the roll out of our fourth market, so I think time will still need to play that out.

I wanted to, from the physician office link, try to just quickly crosswalk for you between some of the physician office link measures that are there, whether or not they have a basic registry or an EMR, so the span there is anywhere from a basic shoebox that's organized really well to a fully functioning EMR within the office, so that does qualify for a year or so of physician office link. So the education and screening measures, quality improvement, they need to show that they have quality improvement of health outcomes and performance goals programs within their practice. They have to show that they have chronic care management going on at the office level, that they're doing something about preventable admissions and working on identification and follow-up with those patients, and what they are doing with high risk patients.

So I wanted to just lay out some of those, the candidate recommendations that I thought would specifically help in that physician office link application process and how this would ease the burden. Again, the physician office link application process is very long and it's very intense and there are some redundancies, and we're working on that. And I think that's why it's good to call Bridges to Excellence a pilot; we're learning as we go and we're willing to learn as we go, and we are getting ready to sit down with everyone who has just gone through the application process. We have a good sample of them now in Boston, we'll have somewhere around 37, actually we'll have about 67 practice sites that will have applied for physician office link, so that's a good amount of people to sit down with and say okay, what can we do better about this and what are the process changes that at least we can make on our end. I think there's work here, as we talked about, in making sure that some of these standardizations happen across the board, which would make that even less burdensome to the practices.

Our overall response is basically that competition should happen at the individual physician level, by disease and by procedure. Today competition is really at the health plan and the network level, and we can't have competition if we don't have robust comparisons and accurate data. Competition at this level is only possible through improvements in the outcome and administrative data and making it transparent; I think that that's a theme that you've heard over and over already today. Bridges to Excellence will continue to move forward with the status quo because sometimes you need to just go about bringing about change in the environment that you're in. I think that we've been pretty clear that we're kind of a catalyst for change, and we need to keep marching forward and hopefully improvements will come as we go.

For purchasers to buy health care based on value, outcome data needs to be readily available and transparent in the marketplace. Value is created when we have a market where providers compete and consumers can shop with their feet; I think that everyone would pretty much agree with that.

I have to agree with Karen as well that number one and number two are the highest priority for us, in the sense that test results and vital signs and effective data are where we see the largest room for improvement, and this would definitely ease the barrier to provider participation. In actually two of the markets I know of specific cases where practices are going through the application process with Bridges to Excellence or for NCQA and they found out that they can't pass, and that's where we're finding the large amount of value of Bridges to Excellence quite honestly: just going through the application process and its burden and assigning resources. Physicians are finding very creative ways themselves to get through this process; they're getting graduate students in there, they're contracting with grants to get resources in the offices to be able to do this. Most of them just do not have the resources within the office structure to do pay for performance applications in the current format, and we're trying to help them with that as well, QIOs, I mean we're just trying to figure out a way to get over that hurdle for some of the small to mid size practices.

In Boston and very highly organized situations, they have the resources and they're able to bring in people to get through the application process, and they have the infrastructure and some electronic data now, so they're able to submit a little bit more easily. But what we're seeing is that when they do look at the data they're going to make changes. And they tell us they're making changes because they weren't able to pass, and even if they were able to pass it was, well, we found room for improvement, so obviously this is the crux of where change is going to happen, when they have the data readily available and it's accurate and it's at the patient level.

A secondary admission diagnosis flag and operating physician would improve the precision of hospital performance measures; of course we're interested in that as well through the Leapfrog Group. Dates and times for admission and procedures would allow for measures of timeliness of hospital care. Quite honestly, some of these are relevant to Bridges to Excellence, but most of them are more focused on the hospital side, so I'd have to agree with Karen and her assessment of those. Functional status coding, coding of health status and severity of illness, would better account for individual patient differences, which of course is critical for us to know. And obviously if we are able to get that data on the physicians at a more accurate level we would be able to reward them and create an incentive system that would better reflect their performance.

In summary we want to reward physicians for doing it right the first time and for us that creates the business need for accurate and complete data at the individual physician procedure level. Standard data formats will help physicians and practices leap the administrative hurdle of self reported performance standards. And I also wanted to leave the door open that Bridges to Excellence is open to exploring any of the changes in performance measures in the promotion of data exchange and interoperability.

MR. HUNGATE: Thank you very much, I commend all your presentations for clarity and content, very good, very helpful. Let's move to questions on content and clarifications, spend about 20 minutes doing that, and then I think we'll take a little break. And then come back and I'd like to come back to start to try to articulate what could be called a business case around these recommendations in terms of what are the tradeoffs and how do we quantify the merits of the various tradeoffs in efficiency. So we're open for questions.

MS. POKER: I have two comments or questions, and it's kind of an overall question, and I think I'm going to pick on Trent just because he was the first one that triggered this thought, but I'd love input from anyone about this. One of my concerns, as we're going ahead with making very high level health policy decisions based on the quality of the data that's being captured, is: do we know that we're capturing quality data? My concern is with the MDS data. In my previous life I've been both an MDS coordinator and an MDS trainer of other coordinators, and I'm here to tell you it wasn't captured accurately, unfortunately, from the level I saw. Now I can't speak nationally to that, but I have talked nationally to other MDS trainers, and it's a poor pitiful issue out there. In other words, if you're going to be basing serious decisions in health policy on it, are we looking at the reliability of this data? I'm only talking of one aspect of it, and how are we going to look at it.

And as another real quick comment too to leave out to the panel to think about, one of my concerns as we're going forward with this rewarding and incentive program for physicians, and this is just a food for thought for anybody here, are we going to be increasing the disparities of care and access to care because are we going to be getting towards a system where the docs are going to say well, it's worth my while to work in areas where the population is less sick, there's less chronicity, I'm going to have a better rate. I'm just throwing this out there, I don't know the answers to this but this is one of my concerns, I work on the disparities reporting and we do have a huge issue in this country about that and do we want, just kind of tread cautiously.

DR. HAYWOOD: I'll start off and then defer to the wealth of expertise here. As to whether we know that we're capturing quality data, I think we are still in the earlier phases, but I think we do know, to the level that we're actually measuring, how reliable and valid the data are for their intended purposes. In other words, for the measures that we incorporate into our activities, for instance with the MDS, the process was to go back and compare those measurements, not only for dementia but for other areas, with the activities that were actually occurring in those nursing homes, to determine whether those nursing homes had activities or processes or systems in place that would actually be linked to those individual measures and thus to improvement in those particular measures. So that's how the nursing home quality initiative was piloted: not only looking at the validity and reliability of the measurement but also comparing that to processes that you know show improvement in care along those particular continuums. So that's how that was tested, to find out whether or not they were actually accurate.

Similarly, I think on the other things where we are looking at particular measurement, that's why you have so few measures right now, to be honest, because there are a lot of people that are concerned about your particular interests; we don't want to move so quickly that the science isn't sound. And I think similarly on the purchaser side, whether the public purchasers or even more so the private sector, we continue to hear that no one wants a system built upon something that's fallible. And so we continue to work not only with the experts but also with the individual providers and clinicians to make certain it not only works from the scientific, statistically sound type of perspective, but also from the true at the bedside perspective, as to whether or not it actually has an impact at the clinical level.

So I think we hear your concern, we always continue to have that concern, and actually with each measure that gets added it goes through the full process, and part of that is not only internally at the measure developer stage but also when you actually roll it out to the National Quality Forum at the consensus level stage, where it's re-vetted again throughout that whole process, taking a look at the particular concerns that you raised.

As to the second question, I'm going to broaden it a little bit, but I share your concern, which is unintended consequences. And particularly from the private sector, because I've continued to say, even to my private sector colleagues, where we are in total agreement I think with a market based approach to having some improvement strategy, that it may have unintended consequences. Now the unintended consequences that you specifically mentioned were disparities and access, because there is the possibility that the rich get richer and the poor get poorer if it's built upon a system in which people have to already have resources to do certain activities. For instance, with Bridges to Excellence in Boston, as Jessica indicated, it was easier because they already had the infrastructure in place. And so if you were to design a model in that capacity, if you didn't actually try to go into the particular communities and get them on a certain footing, they would be disadvantaged in this market.

Having said that, there are activities that are underway where this may actually improve that, in the sense that if you are really doing risk adjustment and looking at people that have more comorbidities, there may actually now be a financial incentive, which there currently is not, to take care of those particular patients. And so it really becomes, I think, a matter of how we, one, increase the infrastructure in those particular communities, and two, if we really do the risk adjustment and we pay based upon that risk adjustment, that may finally be a financial incentive to actually start taking care of those particular populations.

MS. TITLOW: Just a couple of comments. One is I think we need to draw a distinction between concept and execution. The idea of collecting the data, I think it's hard to be against that; the execution of it, I think, gets more to your point: how well are folks trained, how accurately do they report the data. I think that's a very separate issue and one that could be considered separately. And of course you want execution to be the best that it can be, but the fact that it isn't currently being executed in a perfect way shouldn't interfere with the appropriateness of the concept that this data needs to be collected, or we will not continue to improve at the rate that we need to improve.

And so I would caution you against mixing those two issues up. This is something that we see from the purchaser side, this is how we see the system: at NQF, and often at CMS meetings, there is such a push back from the perfect getting in the way of the better. It is not a good health care system now, there is a huge capacity for improvement, and because there is such a frequent push back, yes, it could be perfect, it's slightly off, this won't be exactly right, that's true, but that shouldn't stop us from trying to take the improvement steps that we could take. And I strongly urge you not to let that happen, not to continue to let that happen; that's been the way our health care system has worked for 20 or 30 years, and there is a huge amount of room for improvement before we get to the point that we say that this is imperfect and so we can't do it.

And I guess that's just the biggest point is that it is a fallible system, I was a provider, I've been an injured person, a health care person, and health care is fallible 100 percent, I mean that is the way that it works, we are human beings, we don't understand our own physical bodies that well, providers, there's an emotional aspect to treatment and so it's going to be a fallible system, it's not like gears working together and I think that we should recognize that, that it's a health care system.

MR. QUERAM: Just a quick comment on the first issue that you raised, and that is the whole issue of the integrity of datasets. I think you get the sense from Karen's comments that you won't hear a lot of sympathy from the purchaser community for the idea that we need to wait for perfection in data before we make movements, and we're a living example of that with the experience that we've had in our community using administrative data to publish hospital comparative performance reports. I do think that there is an argument to be made that using information makes it better, that it becomes an incentive to improve the quality and the integrity of datasets when it's used responsibly, while perhaps pushing the envelope to invest the time in coding practices, coding procedures, data auditing techniques and so forth to improve the integrity of the information.

About a year and a half ago the National Quality Forum had a vigorous debate about the adoption of a hospital performance measurement set. And one of the things that that debate highlighted is that there is a fundamental difference between the, we characterize it as values, the values of the industrial purchasing model and the values of the professional model within the health care industry.

If you look at how the decision criteria are applied and weighted in vetting performance measures, strong emphasis is placed on the scientific and clinical credibility of a measure, on the feasibility of collecting the information, the burden issue, and certainly everybody agrees that there needs to be an importance to the measure, that it's linked to a clinical process that can be improved. But usability is often weighted significantly lower, and purchasers and consumers are looking for usable information with which to guide decision making; we would place a higher emphasis on the usability of information for pay for performance, for consumer decision making and for public accountability than we would on scientific and clinical credibility.

That's not to say that scientific and clinical credibility is not an important value, or that it should be discounted, but in our world we would weight the decision criteria differently. And we have often said, sometimes to the consternation of our clinical and scientific colleagues, that 75 percent accuracy is sufficient for decision making rather than 99.9 percent accuracy.

DR. COHN: Actually I have a couple of questions; first of all I want to thank the testifiers for some very interesting testimony. Now first of all, Christopher, you had commented that you had had conversations with the NUBC, and the good news is I'm not on the NUBC so I don't have to recuse myself, that's one of the few committees I'm not on. Can you potentially update us on the status of those conversations, and then I have some very focused questions in relationship to some of those.

MR. QUERAM: No, I really can't; I would have to say that the NUBC process is a Byzantine one that I have not figured out, so maybe I'll defer to your questions.

MS. GREENBERG: I am on the NUBC, as staff I don't have to recuse myself on anything I don't think. But I'm into disclosure of course and I am a member of the NUBC.

Arnie Milstein did present on a few occasions actually, and the UB-04 has not been finalized, but the current version of it, which is very likely to be approved, I think probably at the August meeting but definitely by the November meeting, does include the indicator, the flag, for secondary diagnoses. That already is actually part of the 837-I, the HIPAA institutional claim standard, although it's not currently in the HIPAA implementation guide; but it is part of the standard, so to move it into the actual implementation guide in the next version would not be that difficult, because it's already in the standard.

And one of the interests of course of the NUBC is to align itself as much as possible with the electronic version, recognizing that for hospitals in particular the vast majority do bill electronically. On some of these measures the committee advised Dr. Milstein that paper solutions were probably not that desirable and that electronic solutions would be much more likely and easier to implement. But he responded that there still are hospitals that do not bill electronically, and they really don't want the patients at those hospitals to just be forgotten. So that will probably be added, at least the current version has that on the form. There also are, I'm calling them code and value fields, some fields that will be on the UB-04 that could be used for the vital signs and the lab values if one were to choose to do that; there's real estate there that's kind of been set aside, sort of like value codes if you know anything about the UB forms, where it has different types of code values, so you would have a code that says now you're going to see the blood pressure, and then here is the blood pressure, or something like that. The infrastructure could support some of that, and probably that's true about several of these other elements.

But I believe the only one, and as you said they're quite aligned with these first eight, the only one on which there seems to be strong consensus is this flag, but I think there would be potential capacity in the new form for some of the others as well.
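As a rough sketch of the value-code style fields just described, assuming invented placeholder codes rather than actual NUBC assignments, a vital sign or lab value could travel on the claim as a paired code and amount:

# Placeholder codes; actual UB-04 value code assignments had not been made.
ADMISSION_OBSERVATION_CODES = {
    "VS01": "Systolic blood pressure at admission (mmHg)",
    "VS02": "Heart rate at admission (beats per minute)",
    "LB01": "Serum creatinine at admission (mg/dL)",
}

claim_value_fields = [
    ("VS01", 142.0),    # the code says which observation follows
    ("VS02", 88.0),     # the paired amount carries the reading itself
    ("LB01", 1.3),
]

for code, amount in claim_value_fields:
    print(code, ADMISSION_OBSERVATION_CODES[code], amount)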

MR. QUERAM: When is a decision likely to be finalized on that?

MS. GREENBERG: It's certainly this calendar year; there are two more meetings, one in August and then the last meeting of the year in November, and I can't remember at which one of those two meetings they're actually going to vote on this, because this is also out being looked at by the state uniform billing committees as well and so we have to get feedback from them, but I would say at the latest November.

DR. HAYWOOD: Simon, can I just ask Marjorie real quick, from a timing standpoint, for the activity that this workgroup, the Workgroup on Quality, is doing, how does that relate? If some of the recommendations are directly related to the NUBC and that only occurs about once a decade, I didn't know how the timing of this activity relates to that.

MS. GREENBERG: The Uniform Billing Committee is aware of the fact that many of these issues are also under consideration by this committee, and it's true, if it's August it would be different than November. I'll actually try to find that out at the break.

DR. COHN: And I guess I should clarify, because the role of the standards development organizations here is a little unusual: one obviously needs to recognize that most hospital data submission is electronic these days, I mean CMS, virtually all of your stuff comes in now on an 837 or what used to be the NSF format, so obviously the paper form has less impact, but it's more the intellectual leadership of the NUBC in all of this.

MS. GREENBERG: If the UB-04 has this flag for secondary diagnoses and says this is a required element, it would have an impact on anyone reporting hospital data.

DR. COHN: The other piece on this of course is that a number of states obviously already adopted this many years ago; California for example has been using this modifier for 15 years, 20 years --

MS. GREENBERG: New York and California. Actually, another one that isn't in these recommendations but is in the consumer disclosure recommendations is the DNR. The Public Health Data Standards Consortium has a state and a federal representative on the NUBC, and I'm actually the federal representative; on behalf of California and other states that have expressed interest in this, the Consortium is bringing that forward as a proposal in August.

DR. COHN: I was going to ask just a follow-up question on this but would you rather have --

MR. HUNGATE: No, go ahead.

DR. COHN: Okay, so now that we know what the update is, I actually had a very specific question. I think, Trent, you're right; actually, please tell me how many emergency physicians we have that we're dealing with. I had no idea that Trent was an emergency physician; Shawn Toomas(?), myself, many of the physicians on the NCVHS, actually not all, are emergency physicians.

But I find myself continuing to be intrigued by item four, and I'm curious about all of your comments about operating physician identifier codes. Now I guess I need to understand this issue from all of you. Trent didn't think that this was a big deal, and he thought the administrative burden was probably worse than the data capture value, but the rest of you obviously want this very badly. Now when I tend to think about going and collecting data I typically think about going to the source as opposed to a second or third party if I'm trying to find reliable data, and one would observe that probably the physicians know best whether they're the operating physician or not, and indeed in the world of physician billing things are not quite so tough: there's the main physician, but then there are also things that identify whether it was a group procedure or a co-lead procedure, I mean this is the world of modifiers, because obviously there are lots of nuances to all of this, as you described.

Now knowing that this stuff is already being routinely submitted, because obviously there are bills from physicians into the health care system, I guess I would need to understand why this is an item that's being listed on the hospital form, and is this really just a convenience issue for the people dealing with data or something to get around physician resistance to this? I mean, can you explain to me why this is an item that's on this level of priority?

DR. HAYWOOD: I think I can try to explain some of that, which is there is confusion because the Part B billing doesn't easily match up if you're trying to get at that Part A stay. In other words, if I'm the physician that's doing a particular procedure on a Part A stay, I may bill it, or my group may bill it, so you may never get a bill that specifically allows you to say that Trent Haywood actually did the procedure; what you would get is a group, of which Trent Haywood is a member, that billed the Medicare program, and that relates to a particular HIC number, but when you try to get to the individual date and service that's where you continue to have the disconnect. And so this is a way of making that link pretty straightforward and expedient, so that you don't have to crosswalk between trying to get down to the physician level on the Part B claim, which apparently isn't done a lot of times, and then trying to walk that over to the Part A claim to actually get at the service that you have an interest in. So I think that's where the biggest issue is, Simon, with actually trying to use the Part B data for this.
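To illustrate the crosswalk problem just described, here is a hedged sketch with invented sample records: the Part B professional claim is billed under a group identifier, so even a successful match to the Part A stay does not reach the individual physician, whereas an operating physician field on the institutional claim (commented out below) would carry that answer directly.

# Invented sample records for illustration only.
part_a_claim = {                          # institutional (Part A) claim for the stay
    "beneficiary_id": "HIC-0001",
    "admitted": "2004-03-02",
    "discharged": "2004-03-06",
    "procedure_code": "PROC-A",           # placeholder inpatient procedure code
    # "operating_physician_id": "DR-456", # the proposed new data element
}

part_b_claim = {                          # professional (Part B) claim for the same service
    "beneficiary_id": "HIC-0001",
    "billing_provider_id": "GROUP-789",   # the group bills, not the individual surgeon
    "service_date": "2004-03-04",
    "procedure_code": "PROC-B",           # placeholder professional billing code
}

def crude_crosswalk(a: dict, b: dict):
    # Attribute the inpatient procedure by matching beneficiary and date range.
    # Even when the match succeeds, it only reaches the billing group.
    if (a["beneficiary_id"] == b["beneficiary_id"]
            and a["admitted"] <= b["service_date"] <= a["discharged"]):
        return b["billing_provider_id"]
    return None

print(crude_crosswalk(part_a_claim, part_b_claim))   # prints GROUP-789, not an individual physician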

DR. COHN: Any comments? Is this the issue or is it something else?

MS. TITLOW: I guess my only comment would be that we were asked to react to it as opposed to develop it so I don't know --

DR. COHN: [Inaudible.]

MR. QUERAM: We weighed in with the NUBC, and the rationale, I'm going to read it so that it's clear, is that we believe this would enable outcome assessments for physicians performing inpatient procedures that complement existing UB-92 data elements but, more importantly, eliminate the need to link institutional and professional databases. So it could be easily, well, I'll say easily, it could be collected by the hospital and reported as part of the UB-04 for procedures that are performed on an inpatient basis. And it identifies specific physicians so that we can begin to look with more granularity at outcome assessments of inpatient procedures.

MR. HUNGATE: Cathy and then let's take a break.

MS. COLTIN: I wanted to offer one additional rationale, and that is that not only is there a very real problem in that the professional claims frequently only provide a provider ID that represents some sort of a billing group or billing organization rather than an individual, but also professional claims data are generally proprietary, they're not out there in the public domain, whereas most hospital datasets are available. Most states have publicly available hospital datasets that can be used in the public good to create these kinds of measures and this kind of consumer information, and therefore it would make this information more widely available.

MS. GREENBERG: Well, I just had a quick question for this one person on the panel --

MR. HUNGATE: Same subject? Same subject, not new subject, right? Or is this a new subject?

MS. GREENBERG: Well it's the subject we've been discussing all morning. I don't know how long they're going to be here --

MR. HUNGATE: Can you folks stay for the rest of our discussion or do you need to leave?

DR. HAYWOOD: Is there a way that we could actually, I know I unfortunately, and I'm feeling bad because I'm on the staff but I'm going to the airport from here --

MR. HUNGATE: Not take a break --

DR. HAYWOOD: Right, skip the break.

MR. HUNGATE: Okay, we will continue. I've got Stan, I've got Harry, I've got Peggy, I've got Marjorie, and Julia. Peggy, can I let her go first?

MS. HANDRICH: Thank you. Thank you all for your excellent testimony; it was very interesting to hear the agreements that you shared and also some of the variation in the comments that you made. The questions and comments that I have go to one of the areas where there was a little bit of variety in your reactions, and that had to do with the priority you placed on functional assessment.

I thought it was interesting that Karen and Trent spoke somewhat passionately to the importance of standardized data to inform functional assessments and the two of you, Chris and Jessica, did not. And I perceived that the perception of the importance of the functional screen from the perspective of Trent and Karen had to do with actually very different populations. In your case Trent you were looking at basically the long term care population, and Karen I thought it was so interesting that you made the comment we want to know when they're ready to go back to work, a different population. And if you can confirm that that's the case when I get done that would be great but this is kind of a question for all four of you.

I wonder why it is that Jessica and Chris didn't assign as much of a priority to that, if you could speak a little bit more to that, and, more specifically, when you spoke to the need for standardized information, whether there is any role that you would be interested in seeing the workgroup or the committee play in terms of the development of that. And I want to invite you to launch into all of this with a reaction that I had: you all spoke to the need for moving forward the quality agenda by influencing provider behavior and also by informing and energizing the consumer community so that they can make informed decisions.

We have a very small but very popular project in Wisconsin known as Family Care that includes a personal functional screen being performed for every individual in this model in conjunction with a case plan being developed. And this project has been evaluated at least three times already and in each of the evaluations it has come to the fore that consumer participation in a functional screen has informed and energized the consumer to make better decisions about seeking health care, which provider they wish to have that health care be provided by, using information that's available, and also making decisions about choices, where for example a person might say yes, that would be the ideal thing for me but I don't think I'm going to go there. So I was interested that none of you kind of saw the functional screen as being a lever on the consumer side and I wonder if you could all react to that.

MS. DILORENZO: Well, honestly, my perception of functional status was that it was more about the elderly population, that was my take on the recommendations, and Bridges to Excellence specifically is focused on active employees, so that's why it fell a little bit lower on my priority list. Not that it's not important; if you're talking about health risk assessment and functional assessment in that way, important, yes, but difficult to get done on a wide scale basis.

MR. QUERAM: My only comment, and I don't disagree with the importance of functional status, and I think representing an employer constituency we would love to be able to have a holistic perspective, if you will, of how health care services are provided, particularly as it relates to productivity and presenteeism, which is of keen interest to employers. But part of where we were coming from is we've been through an iterative process, in part through the NUBC, and we want to be somewhat parsimonious in the requests that we make of the system, and our perception is that the greatest return on investment in the short run would be through the addition of these six data elements that we are now focusing our advocacy on.

MS. GREENBERG: If I could just comment, because, again in the interest of truth in advertising, I personally have a very strong professional interest in functional status assessment through my responsibility for the International Classification of Functioning, Disability and Health: I asked Arnie specifically at the NUBC meeting why functional status wasn't on your list, and he said that there had been a lot of discussion about it and there was recognition that it could be extremely valuable, but they didn't feel that it was standardized at this point, so it would be more difficult to implement, and they were looking more at things that could be implemented now. But I think actually the recommendation here is to do the kind of work that would be necessary to develop standardized measures, and so my sense of the way he responded was that the consumer disclosure project would support that; I don't want to put words in your mouth, but that was the dialogue I had with him on that.

MR. QUERAM: Well, more is better from a purchaser perspective, but I'm trying to be realistic with you in terms of whether our reach exceeds our grasp sometimes, and I have great appreciation and respect for being responsible and using what you ask for. And I'm not persuaded at this point that functional status is something that's as actionable as the other data elements that we're putting our emphasis on.

MR. REYNOLDS: It feels like Groundhog Day, yesterday I left North Carolina and walked out of the building at 6:00 after having presentations on Bridges to Excellence and Leapfrog. If I wake up tomorrow morning and you're back I will be concerned.

I tend to focus on how the health care system operates and that's part of why we're on the standards situation and I've drawn myself a picture here, which I won't bore you with, but the interesting thing is that if you think of the way information flows right now and you think of the way we work you have a care setting and then standard transactions go to some kind of a payer, whether it's a private payer, Medicare, Medicaid, or sometimes a third party for some of you large companies. And you pretty much call a lot of the shots, it's not the standard underwritten situation.

And then out of each of those payers drops many constituencies; United, Aetna, Cigna, us, have a lot of employers and a lot of individuals, whereas Medicare has a very structured population, Medicaid has a structured population, and so on. When you think of quality, quality to me doesn't seem to focus on maybe that singular chain anymore, the normal drop down, because a hospital is a hospital is a hospital and it sees all its patients, and the quality of that should be based on its whole population, and the same for a doctor, not just, if in North Carolina we're X percent of the business, then I only get a look at X percent of their patients.

So when you think of, and obviously one of the things we have to address in the Standards Committee in helping this particular committee, is when we look at standards you can look at the claim form, or you can look at the attachment, or you can look at something else. And if quality truly does supersede the pure financial payment process, I would love to know from you if you see quality as more of a population situation, not based on whom they have health coverage with, and we've got a significant other population called the uninsured that is growing and changing and so on.

So I struggle, trying to sit here as a member of the Standards Committee, to get a sense of what your model is, but it seems to be saying that you want to step clear back from that, and I know some of you as employers literally go work directly with care settings, so that kind of takes part of that structure out of play. So before you get going, I really would like a sense of where you see this data flowing to, and is there one place, like, you think, an NCQA, I'm just using them as an example, who decides quality and all the data flows to them or flows to somebody else, versus having a financial transaction that pops down and then somebody's looking at a very small subset, which is how we're kind of organized now as an industry. So I would love just some comment so I can feel a little better about where this might fall out.

DR. HAYWOOD: I'll start it off because there are a lot of layers there, to say the least, it's kind of an intriguing question. Let me state up front one of my biases: what I heard you say, which is kind of population based, made me think of a traditional public health approach, which a lot of times ignores the way you started the conversation, which is how we actually finance health care; there's always this disconnect between a traditional public health approach and the reality of the marketplace. And so I think what you're hearing today is kind of a realities-of-the-marketplace type of approach to actually getting at what you ultimately want, which is improved quality across that particular population.

Now whether or not there needs to be one data repository is a separate issue to me from whether one individual institution can actually define quality, because I think there are quality measures, but then there are value propositions associated with those quality measures, and I don't think we'll ever have consensus, everyone coming to an agreement as to which of those particular measures they find more valuable than others, irrespective of who that arbiter of quality might be, if you will.

We are currently in the process at CMS of building out the QIO data warehouse, and we are trying to build that out, at least the way that we're envisioning it, in a way that should actually allow for some of this. To be honest I think a lot of people are trying to get out of the data collection industry; there are derivatives that are financially desirable in the marketplace, but the actual primary data collection, it's a question whether that's a declining industry as far as a financial model. And so I think the government is going to at some point take that on anyway, and so we're already starting to build out kind of a QIO data warehouse on the hospital side and on the physician side, to actually be somewhat of a data repository where you can have that information put into it.

The thought, in a very simplistic way, is that if individual practitioners or providers want that information disclosed to whomever, then they can have it disclosed, so that individual entity, by giving its consent, can have the information shared in the way that it actually values that information.

So that's a little different than actually passing judgment on whether or not we're defining quality, but part of it is that virtually by standardizing what information is coming in, you are a priori making a determination as to which measures you actually think should be part of that decision making process, so I think that portion is occurring now. And that's occurring collectively, because with NCQA, with the AMA on the ambulatory side as well, with CMS, with the Joint Commission, and through that NQF process as well as all the specialty societies that play a role, I think we're having that now, starting in June, where you're getting a consensus around what the actual spectrum will be as far as the measurement set, and then you'll continue to have people argue whether, within that measurement set, they like one measure and value it more than another.

So I think that's at least my view of where the current state is. I don't think we'll actually get to the point of having one quality czar dictating what should be considered quality, but I think we will get to a consensus, and NQF is the place where you actually go to make the determination as to what those measurement sets should be.

MR. REYNOLDS: I didn't mean necessarily that one person would decide, but if it is in one place then anyone can go out, and this is philosophical, I'm not recommending it, I'm saying anyone could go out, access that data, and decide what they decide, and then you go back to free enterprise and, like you said, value --

MS. TITLOW: I want to answer from the purchaser side, which is that we're working as hard as we can to standardize what we ask for, so all the Leapfrog purchasers ask for the same thing, and we are really working hard to get the health plans that we contract with to ask for the same quality measures. That is very difficult because, and you're both with health plans, you know that is a differentiator in the market. So we get a lot of push back on that question as well, but we would much prefer to have the standardized questions, and we think that this process will help that occur, and this is a very important step in that process. But I do want to reinforce for you that that is absolutely our goal, to have as little noise, as little variation, as possible out there.

MS. DILORENZO: I would agree with that, and ours is a little bit different because it's on the physician side, but I think there's somewhere in between that we're trying to strive for. We're asking the health plans to use national standards, third party objective standards, which is what the physicians will go along with, not to say that is an exclusive set, the only set, but at least the health plans and the purchasers would all be asking for the same thing, because again, that's part of the barrier we're finding: they're asked to report on so many different performance measures that at this point it's like, which one do I pick here. So there needs to be some consensus with lower levels of variation.

MR. QUERON: Harry, my only quick contribution in response to your question is that the Consumer/Purchaser Disclosure Group is in the process of formulating what we refer to as a set of ground rules that could be used by purchasers, by payers, by others, in terms of what measures should be adopted and required for public reporting, for pay for performance, and for other purposes. And if you're interested in seeing that we'd be happy to share it with you, because I think it gets to the heart of your question: how can we forge more standardization around a common set of measures, not only for the burden considerations but to enhance consumer and purchaser use of information. If you're interested, we hope to use the bully pulpit of our broad base of participating organizations to force more standardization along the lines of what we're advocating.

MR. HUNGATE: That would be helpful.

DR. HOLMES: We brought up the topic of risk adjustment and you responded in terms of your recommendation, saying that adding a particular data element would support risk adjustment. But as we all know risk adjustment is a very murky issue, and I wonder, when you speak of aiding risk adjustment for the purpose of quality reporting or pay for performance, do you have a particular risk adjustment methodology in mind, are there differences in risk adjustment methodology depending on whether you're talking about outpatient reporting or inpatient reporting, and would they ever come together. And perhaps Trent, you might be in a good position too to talk about what CMS has done with respect to trying to actually take data elements and put them into a methodology with which to risk adjust.

MS. TITLOW: For the measures that Leapfrog collects, we look to the risk adjustment that's done by the states for the publicly available state information, so they have their own process of risk adjustment and we just defer to the four states that do that when we take that information. We also use the STS, the Society of Thoracic Surgeons, and, what's the other one, thank you, the ACC, somebody who knows more than me, the ACC's recommendations, so we are deferring to the societies for their greater knowledge. So no, we're not going to develop our own independent risk adjustment methodology, but we're also working with JCAHO for IPS, for the intensivists.

DR. HOLMES: Would it also be necessary, just as we have common standardized data elements, to have a common standardized risk adjustment methodology?

MS. TITLOW: And if anybody could identify for us an authority that could help to develop that then we would be happy to look to them.

DR. HAYWOOD: I would have to talk to some of the local CMS experts about standardization of risk adjustment. What I mean is, within individual settings I don't have any particular issue; what I was trying to get a sense of was whether or not it actually allowed us to do it across all our different settings, which is a different issue to me. So I don't know if you were asking about standardization, for instance, around a particular risk adjustment, hospital and mortality based, getting some standardization around that risk adjustment model, versus what we may do in the classification condition schemes that we're doing on the managed care plus side, and whether or not it would be the same kind of risk adjustment model once we switched to home health and long term care.

So I guess I would have to talk to some of the experts, and Jared maybe later on can comment as to whether or not that is something that would actually be valuable to do across all those settings, or whether we would continue to have to use different methodologies with different variables across those settings.

DR. COHN: I actually had a question and a comment, Cathy is it about this issue or something else?

MS. COLTIN: It's about the issue that --

DR. COHN: Okay, so your last issue. Let me just make a comment about risk adjustment and then ask a question and then Cathy can go back to the previous one.

Actually risk adjustment is sort of interesting, since obviously CMS has a major payment methodology, which includes both Medicare Advantage as well as all the Part B payments, but it's a Medicare model, it's a payment model, not a risk adjustment model for identifying other aspects of quality. I don't know that CMS doing it makes it some sort of standard, in my book.

Now having said all that let me get back to a more concrete issue. We've been obviously talking about a number of these recommendations and I guess I just need to understand from all of your perspectives sort of the frequency of collection of some of these data elements. And obviously for example we're talking about lab results and vital signs and objective data as well as times and dates for admissions and procedures.

And I guess I just need to understand, in your own vision, and I'm assuming that you even like these things, would you be getting lab results on every admission, on every member, every time they have a lab test? Or every time they have an X test done? Same thing with vital signs: every time a patient is seen in the clinic, are you expecting blood pressures to be submitted every single time that they have a visit?

The same point about the times and dates of admission and procedures; the only things that I really ever heard referenced about those were the time from getting to the ER until they got thrombolytics, and the time they got IV antibiotics before they had a surgery. Is there something more generalizable here about all these things? The reason I'm asking is because if it's indeed generalizable, every time a patient is seen you'd want to put it into a claims transaction. But if it's an unusual thing that you only get from time to time, you might want to use, for example, something like a claims attachment to send along the specific data related to that very unusual circumstance. So can you give me a vision of what you think about all this?

MS. DILORENZO: Actually it's a snapshot of a period of time, for at least the NCQA measures it's a snapshot via the claims database, claims information, so it's not ongoing, it's a one-time application, a snapshot of where their patient population is, and then the recognition is good for three years. So it's not an ongoing submission of data to NCQA, it's an application: you do chart extraction, you provide evidence that you've achieved these outcomes with your patients, you have to pull a sample of the patient population --

DR. COHN: So you wouldn't want the blood pressure on every visit.

MS. DILORENZO: No.

MS. TITLOW: I want the blood pressure on every visit, because from my perspective I think that Chris made an extremely important point, which is that we are at a very early stage of figuring out incentives and what works and what doesn't work, and we're also at an early stage of developing standardized quality measures. So the broader the amount of information we can collect, the better the quality data are going to be and the more flexible we're going to be able to be in the future. My assumption would be that it's important from an actual quality perspective to know your patient's blood pressure every time they come in, and that should be something that on a lot of checklists for various conditions is checked every time, and that would be a quality measure. So I would want to know that in my process measures, and just as we have process measures for CABGs and process measures for diabetes, etc., the question we have right now is, do you check blood pressure every time the patient comes in, and if you saw high blood pressure did you do something automatically to correct it. So if it were something that was only collected once a year, or once every whenever, then I don't think that would count as quality from our perspective.
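
As a rough illustration of the kind of process measure Ms. Titlow describes, the sketch below (Python, with entirely hypothetical field names and an illustrative threshold) counts how often blood pressure was recorded at a visit and how often an elevated reading was followed by a documented action; it is not any organization's actual measure specification.

# (patient_id, visit_date, systolic_bp_mm_hg, action_documented_for_high_bp)
visits = [
    ("P1", "2004-01-10", 128, False),
    ("P1", "2004-03-02", 152, True),
    ("P2", "2004-02-15", None, False),   # blood pressure not recorded
    ("P2", "2004-05-20", 146, False),    # elevated, no documented follow-up
]

SYSTOLIC_THRESHOLD = 140  # illustrative cut point, not a clinical guideline

recorded = sum(1 for _, _, bp, _ in visits if bp is not None)
elevated = [(bp, acted) for _, _, bp, acted in visits
            if bp is not None and bp >= SYSTOLIC_THRESHOLD]
acted_on = sum(1 for _, acted in elevated if acted)

print(f"Blood pressure recorded at {recorded}/{len(visits)} visits")
if elevated:
    print(f"Elevated readings with documented action: {acted_on}/{len(elevated)}")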

MR. QUERON: Our advocacy is to make these added fields mandatory on the UB, I guess what would be the UB-04, so it would be submitted every time there was a hospital discharge for these additional datasets, data fields.

DR. EDINGER: I wanted to ask, as you're sitting there, about the issue of lab results: if a test is done by four different methods in the same hospital, and I had that recently happen to me at Bethesda Navy, you may see a trend in something that isn't real, because the methodology is different, the techniques for doing the test are different, and the normal ranges are different. If you capture every lab result every day and then look at them, you may get a misleading picture of what's going on with the patient without additional information about how the test was done or what methodology was used. For example, take a glucose done by different methods: a value may be normal in one method's range but abnormal in a different methodology's range.

DR. COHN: You're asking that question?

DR. EDINGER: Yeah, I'm asking, if you capture that data, just the numeric data, is that going to cause a problem in doing an analysis looking at patient safety issues, where you see abnormal results that suggest a patient safety issue that may not actually exist.

DR. COHN: Can I try to answer that one? This is something that obviously we could ask the LOINC person, but I actually understood that that was part of the LOINC capability, that it gets specific enough that it actually can deal with variations in how the test was run.

DR. EDINGER: Well, also somebody has to look at that.

DR. COHN: Yes.

DR. EDINGER: And understand that that's an issue, that's the other half of it.

DR. COHN: Yes.
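
A minimal sketch of the LOINC capability Dr. Cohn describes: because the same analyte reported from a different specimen or method carries a different LOINC code, results can be grouped by code before they are trended, rather than mixed together by test name. The codes shown are common glucose examples and the reference ranges are invented for illustration only; verify both against the current LOINC release and the reporting laboratory before relying on them.

from collections import defaultdict

# Illustrative LOINC codes and values; not an authoritative mapping.
results = [
    ("2345-7", "Glucose [Mass/volume] in Serum or Plasma", 96),
    ("2339-0", "Glucose [Mass/volume] in Blood", 104),
    ("2345-7", "Glucose [Mass/volume] in Serum or Plasma", 101),
]

# Hypothetical per-code reference ranges; in practice the reporting laboratory
# supplies the range, it is not derivable from the code alone.
reference_ranges = {"2345-7": (70, 99), "2339-0": (65, 95)}

by_code = defaultdict(list)
for code, _description, value in results:
    by_code[code].append(value)

for code, values in by_code.items():
    low, high = reference_ranges[code]
    out_of_range = [v for v in values if not low <= v <= high]
    print(f"{code}: {len(values)} results, {len(out_of_range)} outside {low}-{high} mg/dL")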

DR. CARR: But I think glucose is pretty consistent. It does sort of raise the issue, though, because the kinds of things that we're looking at, many of them are pretty standard, but your point is well taken.

But I'm still trying to envision it: if you have a claim for an admission you might have had 50 glucoses, so I guess that would be a different scenario, and knowing the glucose control in the hospital is a very, very important quality indicator, but it would be unwieldy to have on the claim form.

MR. QUERON: Getting to this level of granularity is important. In the submission that we made to the NUBC we're looking for lab values at admission, not each and every lab procedure that is performed during the course of a stay; we're looking for vital signs at admission; we're looking for DNR orders at admission or within 24 hours of admission. So there are limitations on the burden, not that this wouldn't be a burden to capture, but it is tied precisely to the time of admission.

DR. CARR: Just to respond to that, I think in addition to trying to standardize the data elements we actually need to standardize the questions, because you're asking whether the patient was high risk when they came in and whether the outcome lined up, and you're asking whether blood pressure was properly controlled over time. And someone else is asking, Jessica was asking, if you have high blood pressure on a snapshot, did it trigger a response. And I think that it's just as important to standardize the questions as it is the data elements.

MS. TITLOW: And again, this is what we were asked to respond to, so at the moment that's what we're responding to, and none of that's inappropriate because that's what we want to be able to respond to. But I think that we are presenting different answers depending on where we're looking at it in the continuum of our process. I'm trying to think in the very long term about what would really be able to help us get down to the quality measures, and I think Chris' submission, very well informed by experts, is about the fastest thing that we can get done, most likely to be able to pass through, so that we could at least get a starting point, and I think both places are valid places to be. It just depends, some of it is politically where do we think it's most useful to go. And I'm one for collecting lots of data, I long for a lot of flexibility --

MS. COLTIN: I wanted to respond to the question that Mr. Reynolds raised about health plans getting only a slice of the data. It's something I'm very familiar with and a problem that we struggle with in our health plan: there's only a percentage of our provider network on whom we have sufficient numbers of patients to base any measures, and in order to evaluate the quality of the entire provider network we need to pool our data with other health plans to maximize the value of that data. And if Dr. Millstein were here I'm sure he would be talking about initiatives that he's involved with, the Care Focused Purchasing initiative, where employers are asking the health plans to pool claims data to create quality measures from these pooled datasets. These are the same administrative datasets that the proposal would affect in terms of making available new data elements to permit the creation of more robust measures of outcome and other measures of quality.

So that's one perspective on that, another project that's similar that Bob Hungate is involved with is one that's going on in Massachusetts that's sponsored by the Group Insurance Commission which is also pooling claims data to develop measures of provider efficiency and as part of that project some of these same data elements can be used for risk adjustment, not simply for outcomes measurement. So I did want to bring that perspective in that increasingly plans are being asked not just to take these data in, to process them and adjudicate claims, but to analyze them and take actions on them themselves and to ship them out as part of collaborative quality improvement efforts. So I see the data stream as really in and out with a lot of digestion in the middle.
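
A simplified sketch of the pooling Ms. Coltin describes: individual plans contribute their slice of claims-derived results, and a provider-level rate is reported only when the pooled denominator is large enough. The records, field names, and minimum-denominator threshold below are all assumptions for illustration, not any project's actual specification.

from collections import defaultdict

# (plan, provider_id, met_measure), e.g. an eligible diabetic patient who
# received an HbA1c test during the measurement year.
plan_a = [("PlanA", "DrSmith", True), ("PlanA", "DrSmith", False)]
plan_b = [("PlanB", "DrSmith", True), ("PlanB", "DrSmith", True),
          ("PlanB", "DrJones", False)]

MIN_DENOMINATOR = 3  # suppress rates built on too few patients

pooled = defaultdict(lambda: [0, 0])  # provider -> [numerator, denominator]
for plan in (plan_a, plan_b):
    for _, provider, met in plan:
        pooled[provider][1] += 1
        if met:
            pooled[provider][0] += 1

for provider, (num, den) in pooled.items():
    if den >= MIN_DENOMINATOR:
        print(f"{provider}: {num}/{den} = {100.0 * num / den:.0f}%")
    else:
        print(f"{provider}: denominator of {den} too small to report")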

MR. HUNGATE: Okay, where are we question wise, have I covered --

DR. EDINGER: I have a question since Cathy's here. Trent says that if we do pay for performance, one of the things we may get is a problem with disparities, because basically the rich may get richer and the poor may get poorer, and part of the issue is how robust the data are that show there's a performance difference. Are there certain data elements we might need that might help, well, you can't solve the societal problem, but at least in generating some information, so that we're not just paying for things like, unless you've got a CPOE you get ten percent more, or paying more because you have higher risk patients. Are there other data elements that might help us in the pay for performance model, or especially down the road in don't-pay-for-non-performance models, that might actually help eliminate some of these disparities issues, or at least let us deal with them in the context of this kind of free market or pay for performance model?

MS. COLTIN: Well, I think over the four years of testimony that was taken in formulating these candidate recommendations we heard a lot of ideas. I think the candidate recommendations reflect the preponderance of interest in particular data elements to move this along; it's not that there weren't other ideas suggested along the way. In terms of the specific issue around creating disparities and trying to protect against that, there are other candidate recommendations contained in the full report, this hearing is only looking at the first eight, but there are recommendations around the collection of race and ethnicity data which would enable the transparency of these measures along various racial and ethnic lines as well, so that we could identify whether disparities were increasing as a result of these kinds of strategies and could realign incentives to make certain that that doesn't happen. So while we're talking today about only the first stage, we need to keep in mind that these are part of a set of recommendations.

MR. HUNGATE: I have a personal need to take a break, and I'd like to thank the speakers and applaud your participation and content, it's very helpful. And we would especially like to see what the NUBC submission from the group is.

[Whereupon at 11:40 a.m. the meeting was adjourned, to reconvene at 1:05 p.m. the same afternoon, Thursday, June 24, 2004.]


A F T E R N O O N S E S S I O N [1:05 p.m.]

MR. HUNGATE: We're missing a few people as yet but they'll get here. I know that such quality performers as we have for this afternoon will need no reminding to stay within their 15 minute desired limit, I'm going to switch the order a little bit and let Jerod Loeb go first from the Joint Commission since he has some deadlines on the other side. In the panel this morning the comments were very succinct, the testimony was very helpful, and the discussion afterward was even more helpful so we'll hold all questions to after each of you has done your thing and then we'll go for questions.

Why don't we go around and do quick introductions so everybody knows who everybody is.

DR. HOLMES: I'm Julia Holmes and I work for the National Center for Health Statistics, and one of my primary tasks here has been to work with wonderful people like Ed Kelley over at AHRQ on the National Health Care Quality Report and Ernest Moy on the National Health Care Disparities Report, working to provide probably a third of the measures that go into the reports.

MS. GREENBERG: I'm Marjorie Greenberg, also here at NCHS, CDC, and executive secretary to the committee and I want to thank you for coming and welcome you to NCHS.

MS. POKER: Hi, I'm Anna Poker from AHRQ and I also have the pleasure of working with Ed Kelley on the QR report and on the DR report with Dr. Moy. And I also am part of the patient safety team at AHRQ.

MR. HUNGATE: Bob Hungate, Physician Patient Partnerships for Health, chairman of the Quality Workgroup, member NCVHS.

MS. HANDRICH: Hi, I'm Peggy Handrich, I'm a member of the committee and a member of the Quality Workgroup and I come from the Wisconsin Department of Health and Family Services and have a special interest in Medicaid and the state collection of health statistics.

DR. CARR: I'm Justine Carr, I'm a physician at Beth Israel Deaconess Medical Center and Health Care Quality and I'm a member of the committee.

MR. REYNOLDS: Harry Reynolds, Blue Cross and Blue Shield of North Carolina and a member of the committee.

DR. JANES: Gail Janes, CDC Atlanta, actually Washington, D.C., and staff.

DR. EDINGER: Stan Edinger, AHRQ, I also had the pleasure of working with Ed Kelley, although he won't give me an autographed copy of his book unless I agree to give him $50 for it, but I'm still waiting. I'm also staff to the committee.

DR. LOEB: I'm Jerod Loeb, executive vice president for research at the Joint Commission, I've never had the pleasure of working with Ed but I've been on panels with him and thanks to Ed and Peggy for letting me go first.

MS. O'KANE: Peggy O'Kane, president of NCQA, I look forward to dialoguing with you.

DR. KELLEY: My name is Ed Kelley and I work with Ed Kelley at the, I work at the Agency for Healthcare Research and Quality, I'm director for the National Health Care Quality Report and team leader for the National Health Care Quality and Disparities Reports.

Agenda Item: Quality Measurement Organizations - Panel 2 - Dr. Loeb

DR. LOEB: Any of you that know me, and there's a fair number in this room, know I almost never speak without slides, so this is a whole new experience for me. But as I sat and thought about what to say, it occurred to me that this was dense stuff and it was going to be really hard to do in a short timeframe, so I actually scripted myself so that I could hopefully keep to the amount of time, and I have given Debbie a copy of my remarks so you do have them for the record.

Certainly I realize that you've asked us all to limit our remarks to certain sections of the report, but I beg your indulgence for just a couple of seconds because I do have to make a global comment having read the full report, and that is that I really applaud, on behalf of the Joint Commission, the basic message in this report, the basic message being the importance of standardization in the measurement field. It cries out for standardization, and I'm talking about this from the perspective of data collection as well as data reporting, and frankly I don't think this is necessarily intuitively obvious in a pluralistic society where the mantra seems to be let 1,000 flowers bloom. So it really is sort of antithetical to our society, but at the same time I think it's abundantly clear that absent the standardization that you're trying to move toward, our ability to establish appropriate benchmarks, to track and hopefully improve performance over time, to pay differentially for performance, and to compare providers is difficult, perhaps even impossible.

And inherently woven into this complex fabric, as I look upon it, is the importance of data collection becoming a byproduct of the care delivery process rather than an iterative activity separated from health care delivery. And that's really what we have today: we have health care delivery here, and for the most part data collection over here, at least within the context of most of our provider organizations, and the Venn diagram really never connects. Frankly I think it is only with a nationally standardized electronic health record, with consensus-generated standards, that we'll ever have any hope of succeeding.

I know I didn't have to say all this but I feel better having said it.

To begin, I speak from the perspective of the nation's oldest and largest health care accrediting body, a little bit bigger than Peggy's but I don't want to say anything inappropriate. More importantly, I want to be clear that my responses to the specific questions that you've asked really represent a synthesis of our staff analysis of the report and its recommendations. We did so in the context of trying to understand how it would impact our existing business processes, how it would impact the business processes of the vendors with whom we have relationships to transmit data to us, and there are about 130 of those vendors out there, as well as the provider organizations, and there are about 16,000 of those that we currently accredit. So clearly the measurement train has left the station, and I speak here about the enormous but, as I view it, clearly fragmented measurement infrastructure that's in place today in health care facilities.

Now what I will try to do is use specific examples from our existing as well as our evolving measure sets to speak to the questions that you asked based on the candidate recommendations, firstly in terms of the business case. The question of legitimate business need I think is clear: there is a legitimate business need for just about all the recommendations that you have made in the report. Using our ICU measure set as an example, we have many physiological variables in the set that are needed in calculating the risk models; at the moment it uses the APACHE III methodology, and those of you that are familiar with APACHE III know that there is a host of physiological variables associated with it. But in addition there are other variables, such as the results of laboratory tests, so we clearly believe that your first candidate recommendation relative to laboratory values is really very important in terms of standardization. One of the questions that we have, though, is which of the laboratory values; that's not really very clear in the report as we see it.

Outside of traditional laboratory values there are physiological data, and we heard this in the testimony this morning: blood pressure, heart rate, respiratory rate, and so on, all critical in the context, for example, of the ICU set. Most of these measures require data collection beginning on the first day of admission to the ICU for our evolving set, so linkage of the test results, or even vital signs, to data collection we think is absolutely essential.

Comorbid conditions are important risk adjustment variables in the ICU measure sets, so we certainly agree with the recommendation relative to secondary diagnosis admission flags; we are totally supportive of that. And to go further, as another example, in our existing pneumonia set, which currently has about 2,100 hospitals collecting data, this is an even more important issue.

We believe the recommendation respecting required reporting of operating physician would bring significant added value to performance measurement both in terms of internal QI as well as in terms of external reporting. And as I'm sure you're all aware there is a lot of discussion in the context of the boards of medicine today relative to maintenance of certification. And if we collect the data once using a standardized set of variables and stream the data where it's needed, to an accrediting body perhaps for decision making, to a board perhaps for decision making, all with one data collection it would significantly advance the field and probably reduce the push back.

As to the issue of dates and times for admission and procedures, this is already integral to our process. Timeliness of care is addressed both in our pneumonia set, where it relates to blood cultures and antibiotic timing, and in our acute MI set, where it relates to thrombolytic timing, aspirin at arrival, etc., but these measure calculations use arrival date and time rather than admission date and time. We did that purposefully because much of the care is provided prior to admission, in areas such as pre-hospital EMS and ED care, so we're a little concerned with the way you currently have this worded. In many ways the entities in the field are really moving away from admission date and time and toward arrival date and time. So we would urge you to reexamine this in light of some of the existing processes and the evolving science before you make your final recommendation.

In terms of pay for performance the Joint Commission uses data elements to construct measures and not data elements themselves as a unique dataset, that's a fine nuance that I think is important to understand. Thus if some of the specific data elements that are being recommended in your report could be derived administratively and not require chart abstraction everybody's going to benefit. I think you heard that in the context of the discussion earlier this morning, from Chris, from Trent, from others as well, and this benefit will accrue to health care organizations, it will accrue to individual providers, it will accrue to purchasers as well. It's really a transformational change moving from chart abstraction such that these data elements become part of the administrative dataset.

Risk adjustment, a very complex topic, you discussed it for a while this morning, there's clearly many patient level data elements necessary to appropriately risk adjust outcome based performance measures. And I want to emphasize to you I'm talking about outcome based measures and not process measures because we traditionally do not risk adjust process measures.

The ICU measure set which is currently in testing requires nine separate lab values for inclusion in the risk model, and these include such things as blood gases, hematocrit, albumin, BUN, creatinine, glucose, sodium, bilirubin and white counts, so the committee's first recommendation respecting lab data values once again becomes very critical for us, although the actual usefulness is ultimately going to depend on which values you include in the final set that AHRQ and the National Quality Forum are charged with developing.

Similarly, the recommendations made respecting vital signs, secondary diagnosis admission flags, and the admission dates and times, or rather arrival dates and times, all have significant bearing upon our risk adjustment methodologies, and standardizing data elements here is going to go a long way toward reducing some of the push back and some of the controversy that frankly exists with respect to risk adjustment techniques today.
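
As a toy illustration of why standardized admission lab values matter for outcome risk adjustment, the sketch below computes an observed-to-expected mortality ratio from a logistic model over a few of the lab values Dr. Loeb lists. This is not the APACHE III methodology he references; every coefficient and record here is invented purely to show the mechanics.

import math

def predicted_risk(labs):
    """Logistic model over a few admission lab values (illustrative weights only)."""
    coeffs = {"creatinine": 0.35, "bun": 0.02, "albumin": -0.60,
              "wbc": 0.04, "glucose": 0.002}
    intercept = -3.0
    score = intercept + sum(coeffs[k] * labs[k] for k in coeffs)
    return 1.0 / (1.0 + math.exp(-score))

admissions = [
    {"creatinine": 1.1, "bun": 18, "albumin": 4.0, "wbc": 8, "glucose": 110, "died": 0},
    {"creatinine": 3.2, "bun": 55, "albumin": 2.4, "wbc": 17, "glucose": 240, "died": 1},
]

observed = sum(a["died"] for a in admissions)
expected = sum(predicted_risk(a) for a in admissions)
print(f"Observed deaths: {observed}, expected: {expected:.2f}, "
      f"O/E ratio: {observed / expected:.2f}")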

Data quality: personally I believe data quality should undergird everything we're doing in performance measurement, but I think that there is, at least from the pulpit that I occupy, quite a bit of variability in data quality today. You talked about it earlier with respect to the MDS set; it's true in the hospital dataset, it's true in the home care dataset, it's true all over.

Your fourth question pertains to whether the suggested mode of data collection would in fact provide data of acceptable quality. It's clear that the data being collected are of variable quality, but we certainly believe that your recommendations are sound and will result in data of acceptable quality, perhaps even beyond acceptable to good quality data. In particular, with respect to our surgical infection prevention measures and our children's asthma measures, which are now in testing, the recommendations pertinent to test results probably require a bit more specificity than you have articulated. Test result data should include dates and times and be able to be identified as inpatient or outpatient.

Value, that's the mantra that everyone is talking about in performance measurement today. If standardization stimulates a migration of data elements from those requiring chart abstraction to the administrative dataset, we believe the recommendations in the report will significantly enhance the value to everybody concerned. But I also want to exert a bit of caution here, and that is that it's important not to shift general thinking toward only measuring those variables that can be derived from administrative datasets rather than from medical record abstraction. I'd like to see a balance here, certainly in favor of administratively derived data elements, but let's not discount the fact that there are important variables, particularly with respect to risk adjustment, that can only be derived, at least today, from the medical record.

And as I think about this there's sort of three schools of thought about this in measurement, the first says what do I have, what data do I have and what can I measure. The second school of thought says what can I measure and can I find reliable and valid data. And the third school of thought which is the one that I would hope you would be thinking about is do I ask the question first, what's the right thing to measure, can I find reliable data, and what about the cost benefit relationship, do the costs outweigh the benefits or do the benefits outweigh the costs. It's sort of that three part piece to this complex question that we think is really important.

So in summary I think it's fair to say that we strongly support the basic tenets that you are presenting as your candidate recommendations in all cases. I didn't speak about functional health data, and I know that was an issue this morning, but certainly that's an important area and we feel very strongly about it, certainly outside the hospital accreditation program; it's important in home care, it's important in long term care, so we support that as well.

We believe additional standardization around data collection and reporting is absolutely essential although it's abundantly clear that there still are significant challenges as you begin to consider the implementation strategies.

So thank you for listening and hopefully we can engage in some conversation before I need to run.

Agenda Item: Quality Measurement Organizations - Panel 2 - Peggy O'Kane

MS. O'KANE: Well Jerod is going to feel really good that he did a written statement instead of slides, I don't know, we'll see how this goes. You'll see there's a lot of wordiness but I think I can get the messages across. We also think this is a real contribution, this report, and I'm going to speak at a fairly high level I think somewhat like Jerod did, but I also would invite you to talk to us because we have a great performance measurement staff that's been actively collecting these data and we can talk to you a lot about the kinds of things that are really important to measure that we can't measure today because of the lack of availability of data.

So with that, I think most of you know who we are, but let me just go over it briefly: we're a private non-profit, much smaller than the Joint Commission, maybe a little more ornery than the Joint Commission, we measure and report, and we're very proud of the role we've played in the advancement of performance measurement in the whole accountability world. We have done this, I think, by uniting diverse groups around the common goal of improving quality through measurement and public reporting. Our work reflects the consensus of stakeholders, sometimes very difficult to achieve but always achieved. We produce information that enables value based purchasing, and we put a lot of emphasis on pay for performance, I'll talk about that.

We produce information that can be used by the public for choice and by providers for quality improvement. Our public reporting of HEDIS clinical information and patient satisfaction using CAHPS covers over 300 health plans with more than 70 million covered lives. And we seek to align with key regulatory requirements, because there's no question that this activity is very expensive, and when people see their results it also takes a lot of effort to improve them.

So HEDIS, we're not actually the inventors of HEDIS, it took root I guess at NCQA in 1992. It's a very robust set today of over 60 process and outcome measures. CAHPS came to us through AHRQ, and we appreciate our relationship with AHRQ; I think it's an example of the kind of public/private partnership that we all ought to be looking for. We're very proud of our precise specification of measures and auditing of results; we have independent auditors, we audit the auditors, and on the satisfaction survey we certify survey vendors and so forth. So in the course of just trying to have a system that could withstand the kinds of questions that would inevitably be raised, we've created really a whole soup-to-nuts accountability system.

It's used to evaluate quality of care for commercially insured and Medicaid and Medicare enrollees; we have now adapted it for measurement of performance by medical groups and, in many circumstances, physician offices. It's required by many public and private sector purchasers, and it drives about 30 percent of our accreditation scores these days, something that continues to move up as we build more measures.

I think there's no question that transparency really works; I'll just give you a few numbers from our State of Health Care Report last year. Every year I say to my staff I hope the plans have improved, and again, 30 out of 33 clinical measures showed improvement last year. There were very strong gains on the cardiac measures, which are relatively more recent, and if you look back over the history of this reporting the average improvement is over 50 percent, which is really remarkable and speaks very well of the people in the plans who have taken this on.

We believe there's a need to expand this agenda especially today as we move into kind of a, how do I want to say this politely, 1,000 flowers are blooming in the health plan world and we need to be able to make comparisons across what's the value add of this type of a product versus that one. We're pleased that we're going to have information from hospitals coming out, that will be useful I think for the plans as well as the consumers. And physicians I think there's a lot of very good activity going on so reason to feel optimistic that this agenda is really going to move.

We believe very strongly that we need payment reform as a way of really moving this agenda, that means not only the kind of things that we're involved with today which is rewarding superior performance, but I think there are many, many ways in which our current payment systems reward bad performance. We need to go beyond fee for service, we need to go beyond unadjusted DRGs, and when we're paying for complications that are caused by a provider I think we need to think about that.

I think the resource-based relative value scale was useful in its time, but I think we are at a point where we need to really think about the value that we gain from particular health services and think about health as the true north for value. And all of this of course requires meaningful, sound, and feasible performance measures.

Just to talk a little bit about some of the pay for performance activities that we're involved with: in California there is a project involving seven health plans and 230 medical groups, with $100 million on the table that will be distributed to very superior performers. I think the level of enthusiasm, both on the part of the medical groups and on the part of the plans, has been remarkable. This was something that started out very shaky, and there's been a lot of pushing and shoving, and now everybody seems to feel good about it. Maybe the ones that don't get the money won't be so thrilled.

The other one I think you probably heard about is Bridges to Excellence, which we do collaboratively with GE and other employers in four different cities, Louisville, Cincinnati, Boston, and the latest entry, Albany/Schenectady. These are built on our recognition programs for physicians: we have one for diabetes, one for heart care, and one for systems in the office, including electronic medical records, registries, and patient education and so forth. We're also involved in Minneapolis, where again there are medical groups, which by the way facilitate this enormously, as I think you know, and there's also a lot of interest there in reporting at the group level and in pay for quality. Our recognition programs are being used by others as well, but those are the most prominent examples. You mentioned Medicaid; many Medicaid programs are now beginning to look at pay for quality and we think that's a wonderful thing.

Now to go to the report, as I said we were very impressed with the report and really agree very much with the thrust. The lack of accessible and inexpensive data, in other words not from chart review or survey, is a major barrier to measurement and I just want to reinforce that we are to some extent the drunks under the lamppost because just from a practical point of view there are many things that we cannot measure because it's too expensive or too complicated.

As we try to move to the hospital and physician level of measurement chart review becomes prohibitively expensive and particularly if you think about very sick patients where they're seeing multiple specialists there are big questions about the whole framework for accountability in that kind of a scenario. But just figuring out whether somebody actually got something when you have multiple physicians was very difficult.

Electronic medical records offer hope, but they won't be more useful than paper records unless they are structured and coded for quality abstraction purposes; if you just have free text in electronic medical records, it's really not very helpful. As administrative data will remain an important source of quality information for years to come, we think it's reasonable to improve this data source, and so we applaud you for your practical approach.

In addition to enhancing the administrative data sources to enable the measurement of particular care processes and outcomes, it should be clear which of these measures will be used for accountability, provider feedback, surveillance, or research purposes, since not all measures will be useful for all purposes to the same extent. So overall, while there is some initial and ongoing expense, by expanding current claims data you'll get a much better bang for the buck than by trying to go around the current claims data, so again, we agree.

So let's go to the individual recommendations. We combined one and two: create a mechanism for reporting selected inpatient and outpatient lab results, vital signs, and objective data measurements in standardized transactions. Lab results and vital signs can represent important outcomes or risk adjustment parameters. Their inclusion in routine performance measurement and pay for performance activity will increase substantially with routine reporting through administrative claims. The plans I think have shown an enormous amount of resourcefulness with the current state of affairs, but you could make this so much better if these recommendations are implemented.

Objective measures could be reported in a standardized fashion; we urge you to concentrate on things that are likely to change as opposed to things that are not likely to change. Some parameters only pertain to particular clinical conditions or specific procedures, so we urge you to clarify on the relevant coding forms, or to suggest that it be clarified, when added elements are required to be coded during care episodes. Having electronic data on test results and vital signs would significantly increase the ability to do clinical measures and risk adjustment at a much lower cost per measure.

Number three, which is to facilitate the reporting of a diagnosis modifier to flag diagnoses that were present on admission in the secondary diagnosis fields on all inpatient claims transactions. Administrative transactional data provided by hospitals about the type of care provided don't lend themselves easily to the calculation of multiple valid performance measures pertaining to inpatient care; however, more measures could be calculated with the addition of a few routine data elements on medical claims. For example, calculating rates of adverse outcomes would be greatly enhanced if diagnoses present on admission were routinely and validly coded; in other words, you'd have some hope of knowing what happened during the admission.

In addition, the inclusion of standardized severity designations, for example cancer stage, which is hugely important, might aid in the calculation of risk-adjusted outcomes or measures of appropriateness. So we very much support that recommendation.
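
A minimal sketch of what the present-on-admission flag enables: with a flag on each secondary diagnosis, conditions the patient arrived with can be separated from conditions that first appeared during the stay, which is what the adverse-outcome rates described above depend on. The claim structure and diagnosis codes below are simplified, illustrative examples, not a claim-standard layout.

from collections import Counter

claims = [
    {"hospital": "A",
     "secondary_dx": [("428.0", True),      # heart failure, present on admission
                      ("998.59", False)]},  # post-op infection, acquired during stay
    {"hospital": "A",
     "secondary_dx": [("250.00", True)]},
    {"hospital": "B",
     "secondary_dx": [("998.59", False), ("415.11", False)]},
]

acquired = Counter()
discharges = Counter()
for claim in claims:
    discharges[claim["hospital"]] += 1
    if any(not poa for _code, poa in claim["secondary_dx"]):
        acquired[claim["hospital"]] += 1

for hospital in sorted(discharges):
    rate = acquired[hospital] / discharges[hospital]
    print(f"Hospital {hospital}: {acquired[hospital]}/{discharges[hospital]} "
          f"discharges with a diagnosis not present on admission ({rate:.0%})")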

Number four, modify the use instructions for the existing data element for operating physician such that it is a required data element for the principal inpatient procedure. I'm going to just summarize, we think this is a very useful recommendation, we very much support it and it really does make the whole accountability picture much clearer.

Let me move to number five, modify the requirements for reporting admission date and time and selected procedure dates and times on institutional claims transactions. The IOM identified timeliness as a key aim and we agree; little information is available now on timeliness, as Jerod also noted. Moreover, in order to improve the rendering of effective care that requires adherence to certain time intervals in order to improve outcomes, like door-to-needle time, date and time information is essential. Information about which time intervals are most critical to adhere to in order to improve clinical outcomes or minimize patient stress should be identified, and I want to urge you to think about patient worry and stress as also an important outcome and an important corollary of care. Dates and times of services would allow accurate and efficient measurement in a major current gap in performance measurement.
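
A small sketch of the timeliness measurement that admission and procedure dates and times make possible, using door-to-needle time as the example. The timestamps, field names, and 30-minute target are illustrative assumptions, not claim-standard elements or an endorsed clinical benchmark.

from datetime import datetime

cases = [
    {"arrival": "2004-06-24 09:05", "thrombolytic": "2004-06-24 09:31"},
    {"arrival": "2004-06-24 14:50", "thrombolytic": "2004-06-24 15:42"},
]

TARGET_MINUTES = 30
FMT = "%Y-%m-%d %H:%M"

within_target = 0
for case in cases:
    arrival = datetime.strptime(case["arrival"], FMT)
    treated = datetime.strptime(case["thrombolytic"], FMT)
    minutes = (treated - arrival).total_seconds() / 60
    if minutes <= TARGET_MINUTES:
        within_target += 1
    print(f"Door-to-needle: {minutes:.0f} minutes")

print(f"{within_target}/{len(cases)} cases within {TARGET_MINUTES} minutes")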

Number six, encourage payers to modify billing instructions to providers to align procedure start and end dates with services included in selected global procedure codes and standard HIPAA claims transactions. In general we think this is a desirable and useful addition to administrative data.

For the set of practices where compelling evidence exists that care must be initiated within certain time intervals to increase the likelihood of desired outcomes it's advisable to require the recording of start and end dates if such practices are captured using global procedure codes. At the same time in order to not overburden the system such capture should not be required for services where time is not of the essence.

Numbers seven and eight: review the available options for coding patients' functional status in EHRs and other clinical datasets and recommend standard approaches; conduct the research recommended by NCVHS in 2001 and CHI in 2003 as endorsed by NCVHS; and create a mechanism for reporting functional status codes in a standardized transaction. There's no doubt that functional status information represents an important outcome for many care processes and patient groups that should be captured in a standardized fashion. As with other outcomes, many factors contribute to functional status, including patient variables as well as care processes.

Interpretation of functional outcome data as an accountability measure can be very complicated, including issues like what is the natural history of disease and what's the variation from the natural history. It can be complicated if functional status prior to initiation of treatment in addition to other important variables is not taken into account. So I think there's a framing issue there that's really crucial even before the sort of nuts and bolts of functional status measurement is taken on so I would urge you to consider that.

So our conclusions, administrative data will remain critical for the measurement of quality in health care for some time to come as alternative widely implemented data capture processes have been slow to penetrate health care delivery. As has been demonstrated with the HEDIS performance measurement set administrative data can constitute an easily available, not so easily if you ask many of the people, reliable and valid data source for the measurement of many aspects of health care quality. Administrative data can further be enhanced through the alteration of current data capture processes in coding and billing practices, and one point I want to make is that when these data count for payment, for accreditation, for whatever it is, for public reporting, the data get better.

The rationale for the adoption of the recommendations could further be enhanced if it is clear which additional performance measures could be computed, and we would be delighted to work with you on that given some of the experience we've had; if it is determined how these measures relate to national, regional, or other important health care quality goals and priorities; and if the health benefit gained by improving performance in the areas these newly deployable performance measures address is quantified. So in other words, making the case for the kind of gains that are sitting on the table now I think is a crucial part of advancing this agenda.

So thank you very much for your attention, and I look forward to the discussion.

Agenda Item: Quality Measurement Organizations Panel 2 - Dr. Kelley

DR. KELLEY: First of all I want to, I'll say congratulations to the committee on the report and I see Cathy here, congratulations on the report, I know this was a long time in coming and a lot of work. I'll echo what I had said at the full committee meeting which is that AHRQ has enjoyed a real symbiotic relationship with NCVHS in general and this committee in particular supported quite a bit the development of the National Health Care Quality and Disparities Reports and so for that we're eternally grateful.

And also I would say congratulations on actually holding these hearings, because the report, as Jerod and Peggy were saying, could stand on its own, but it's easy for, I'll say, folks like us, I mean the folks who are in the measurement and data world, to assume that better measurement and better data, which as Jerod was alluding to is what this report talks about, is automatically a good thing. Not talking to our partners who come from different perspectives doesn't allow full consideration of what the tradeoffs are, and I think some of the questions that you gave us to answer in terms of the value issue speak to that, and I'm sure some of the other panels will have slightly different perspectives on this.

But what I'd like to do, and I didn't have copies of the slides, mainly because then I would have to put on them that what follows is the opinion solely of Ed Kelley and not of AHRQ in general, though we've had a lot of input on this, is provide you with a somewhat more formal, official response that Caroline, who has been out of town, has had a chance to see and sign off on, as well as the written input. So with that in mind maybe we can move along.

In terms of background, I'd like to just provide a few background pieces and then go through some of our input on the specific recommendations and one follow-up item that we think the committee should take into account. Now the business need for this, which is how you had worded the first question to us, we took literally in terms of considering where AHRQ's concern with this and the improvement of administrative data comes from; it's obviously coming from a research and quality agency, and it's something that AHRQ holds very dear to its heart. Two prominent areas, and I don't mean to imply that these are the only areas, but two prominent areas where we have done work on this are AHRQ's Healthcare Cost and Utilization Project and our quality indicators, which have been developed to assess quality of care at the hospital level using administrative data and were intended for internal quality improvement efforts. A lot of those same indicators have been used in the National Health Care Quality Report and National Health Care Disparities Report, the two publications that, as the committee knows, we were tasked by Congress to produce.

I'll just pause a minute, since Jerod opened the door: I know you asked us to focus on recommendations one through eight, but we are sure looking forward to the follow-up hearings on the rest of the recommendations, because I think there are a lot of other good pieces in there, particularly recommendation nine, which talks about disparities and the measurement of racial and ethnic data elements, and we've had endless conversations about that, as well as number ten, on standardizing surveys and how things like vaccination are measured across different surveys.

The reports themselves, I won't dwell on them too much, but again, they were mandated by Congress, and this is the language that gave them to us: one, to track quality of care annually, and in a National Healthcare Disparities Report to track prevailing disparities in health care delivery as they relate to racial factors and socioeconomic factors in priority populations. So the two reports track quality of care, in the National Health Care Quality Report, and quality and access issues along those particular lenses, as I mentioned.

The areas that are tracked and for which this has the most relevance are in the effectiveness area, and that slide lists off the different areas, different condition areas, and we talked at the full committee hearing about how we got to those. There's also the issue of timeliness, and I'll touch on that in a moment.

The Healthcare Cost and Utilization Project is the largest collection of longitudinal hospital care data in the United States, and there's a family of databases that make it up; it's a partnership between AHRQ and the states, as some on the committee know. There's the Nationwide Inpatient Sample, which is inpatient data from a national sample of over 1,000 hospitals. The State Inpatient Databases cover inpatient care in community hospitals in participating states, and this next year's report, the National Health Care Quality Report, will report on about 30 states, although membership is up quite a bit from that currently. That accounts for approximately 80 percent of all U.S. hospital discharges in total. And then there are two more derivative databases, the State Ambulatory Surgery Databases on ambulatory care encounters, and the Kids' Inpatient Database for hospital inpatient stays for children.

Now if I can get then to the summary. It sort of struck me, we all have the same issue of what's the best way to offer this information to you, and honestly I only did these slides because I knew Peggy and Jerod were going to do slides, and then Jerod let me down, so Peggy and I are both mad at him. But Peggy's slides offered a lot more information, and this is very much a summary, so it's very hard to know our logic behind it; in the hopes that you'll think still waters run deep, I'll kind of go quickly through this, give you the rationale for the thinking on this, and then again provide you with the actual written rationale.

Clearly, while the national dialogue on electronic medical records has stepped up significantly in the past couple of years, and particularly just in the past six months, we all anticipate that it will be quite some time before that information will be readily available to us in terms of quality analysis and quality improvement. There are a whole host of reasons, in terms of working on improving administrative data, why trying to rely on ICD-9-CM codes for clinical information is a problem for clinically important and specific quality improvement efforts, and even quality monitoring efforts: ICD-9-CM has limited clinical depth, and the severity of conditions for things like congestive heart failure and diabetes is difficult to ascertain.

Timing of conditions cannot be understood using current administrative data; while renal failure may be coded, it's not clear whether the patient was admitted with the condition or whether the condition originated during the hospital stay. And of course hospitals differ in their depth of coding. This is something that we've had difficulty with in the reporting at the state level for the National Health Care Quality Report, in terms of trying to determine an equitable way of reporting some of our indicators, many of which depend on coding that is differentially done, for risk adjustment as well as for exclusions obviously.

In terms of the data, the recommendations themselves, I've sort of grouped them into three areas. The ones that we feel offer, and this echoes a lot of what Peggy and Jerod had said, improved data and better risk adjustment, those are tackled in recommendations two, three, seven, and eight. The ones that, skipping down, offer some important process-outcome links are recommendation one, although we have some concerns about burden, and then recommendations two, six, seven, and eight. And you'll notice that no one list is mutually exclusive; that's designed to be potentially confusing. The value-for-effort concerns, I think we have some on recommendations two, six, seven, and eight. And what I'd like to do is maybe jump into some of the specific thoughts on a subset of these measures, specifically recommendations three, four, and five, and recommendation number one.

Now, listing recommendations in multiple places on that previous slide I think indicates some of the ambiguity that we may feel over recommendations that will make important improvements in data, but obviously there's a real spectrum of costs, and not just dollar costs, associated with some of these recommendations. And we did try, although it's not uniquely our area, to consider some of these.

Our overall feedback on this: recommendation three will allow providers, purchasers, and policy makers to distinguish complications from comorbidities and enhance performance measurement and risk adjustment. And this is consistent with concerns expressed by stakeholders such as MedPAC and others of why are we paying more for poor care.

Recommendation three could be a critical component in a pay for performance environment by enhancing indicator definitions and related risk adjustment, and by providing clarity around complications versus comorbidities in general. It could be valuable to internal quality improvement efforts by providers, comparative reporting efforts by organizations such as hospital associations and state agencies, and the public reporting of provider performance. We felt this recommendation had high value and, as far as we know, it is already in place in a couple of states, and it would also provide, specifically for the National Health Care Quality and Disparities Reports, a better ability to risk adjust and analyze across some of the indicators that we're looking at right now, which we are at least only reporting at the national level and only reporting at kind of one point in time, which is kind of difficult; it's not really our mandate and it's not that informative.
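
(For illustration only, and not part of the testimony: a minimal sketch of the idea behind recommendation three, assuming a hypothetical claim record in which each diagnosis code carries a present-on-admission indicator. The field names and codes here are illustrative, not an actual claims format.)

    # Hypothetical illustration: separating comorbidities (present on admission)
    # from complications (arising during the stay) using a per-diagnosis flag.
    claim = {
        "diagnoses": [
            {"code": "428.0", "present_on_admission": True},   # heart failure, known at admission
            {"code": "486", "present_on_admission": False},    # pneumonia coded during the stay
        ]
    }

    comorbidities = [d["code"] for d in claim["diagnoses"] if d["present_on_admission"]]
    complications = [d["code"] for d in claim["diagnoses"] if not d["present_on_admission"]]

    print("Comorbidities (candidates for risk adjustment):", comorbidities)
    print("Complications (candidates for outcome measurement):", complications)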

Recommendations four and five we feel could be listed as having a strong business case, mainly based on the resources required to implement; in our view it should be relatively low cost, and we should be able to have confidence that the operating physician is the one who performed the principal procedure, and it would help to expand this so that the physician is noted with each procedure. It will assist in establishing accountability for procedure-related outcomes, and we felt that in terms of the value question that you asked us to address this is relatively high value.

Now on recommendation five, and a little bit on number six, this tackles the timeliness question, and both Jerod and Peggy talked about this. Timeliness, as I showed in an earlier slide, is one of the four dimensions that's tracked in the National Health Care Quality and Disparities Reports and is in fact one of the least populated of the entire framework in terms of number of measures. And in part I think that's because there's been something of a schism in the views that people have on timeliness. What exists, and what came up through a very lengthy consensus building process for the National Health Care Quality Report, were timeliness measures that were patients' perceptions of timeliness, about getting appointments at the right time and those types of things.

While a lot of really good, clinically specific timeliness measures were kind of lumped into the effectiveness area, and more forward thinking organizations like NCQA and JCAHO may not think this way, I think there's a significant portion of the research and quality community that doesn't make that leap and link the two. And in our next National Health Care Quality Report we will present in a timeliness section not just the patient perspective, which unfortunately a lot of our readers skip; they go to the effectiveness section, they read that, and maybe the safety section, and they forget the rest of it because it's all about those patients and really what the heck do they know anyway.

And the discussion that we're having now would vastly improve not just the quality and disparities reports' ability to track timeliness in a much more clinically specific way but would also allow us to kind of take the first step on linking some of those patient perspectives and provider perspectives on what's an extremely important dimension of quality.

Jerod is right on the admission and arrival point. While AHRQ has not been kind of in the forefront on any evidence reports specifically looking at this, and has generally taken its cue from the work that CMS and JCAHO and other organizations have done in terms of designating these particular measures, our measure in the National Health Care Quality Report right now on pneumonia, in terms of time to antibiotics, is from arrival, although right now we're examining it; for AMIs, beta blockers and aspirin are at admission, and I think that would most likely change. But those particular measures we would encourage some examination of. And particularly this timeliness, the times issues in recommendation number five, will help on two measures that we track nationally right now using chart review data, time to thrombolytics and time to PTCA.
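
(Again for illustration only, and not part of the testimony: a minimal sketch, with hypothetical field names, of why the choice of anchor timestamp, arrival versus admission, changes a time-to-treatment measure such as time to antibiotics.)

    # Hypothetical illustration: the same antibiotic dose yields very different
    # "time to antibiotics" values depending on whether the clock starts at
    # emergency department arrival or at inpatient admission.
    from datetime import datetime

    encounter = {
        "arrival_time": datetime(2004, 6, 24, 9, 5),      # ED arrival
        "admission_time": datetime(2004, 6, 24, 11, 40),  # inpatient admission
        "first_antibiotic_time": datetime(2004, 6, 24, 12, 10),
    }

    from_arrival = encounter["first_antibiotic_time"] - encounter["arrival_time"]
    from_admission = encounter["first_antibiotic_time"] - encounter["admission_time"]

    print("Minutes from arrival:", from_arrival.total_seconds() / 60)      # 185.0
    print("Minutes from admission:", from_admission.total_seconds() / 60)  # 30.0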

I should pause first on where we're uncertain, on recommendation number one. Now the National Health Care Quality and Disparities Reports do track both process and outcome measures, and we talked about it a little bit in the full committee; we were tasked to do that by the Institute of Medicine, which gave us guidance on this. And in some selected cases we have some very good clinically specific measures that are provided to us, on control of hemoglobin A1C, control of blood pressure, and control of lipids for diabetic patients, and these could obviously be looked at for other patients as well; diabetes just happens to be the one that there's a lot of consensus on in terms of the measures.

This is provided by NCHS through the NHANES survey and is great data, so I guess we have something of an ambivalence regarding this in terms of our specific role, which is what I would like to limit our comments to here: while this would, as Jerod and Peggy said, add tremendously to the information available, and I think if you talk to any physician who works on quality improvement about having this type of information about a set of their patients in a given unit they would say that this is great, for our purposes in terms of national reporting we do have a ready source of data, and so we think it may fall a little bit more on the margin in terms of the value and immediate need.

I can move forward to a couple of caveats, concerns going forward. I guess I'll summarize this slide and the rest of my slides by saying that there is a broad set of recommendations in the report, and in general, having read the report and again with all praise, the recommendations don't present things not to do; they in general present things we should be doing, sort of additional things to do. And while that was kind of the point of doing it, figuring out how we can do this better, and usually how you can do something better is rarely to chuck out something you're already doing, we do think that there is some room for the committee to at least advocate additional examination, which is what you're doing today, of which of the many, many clinically related data elements that could be added to standard transactions really make sense. And I just wanted to bring this up mainly because AHRQ is working right now on a request for task orders that will look at this exact question.

Again, the AHRQ QIs were initially developed for internal quality improvement efforts, and there's been a lot of interest in using these QIs for public reports comparing hospitals, but there are a lot of questions, as we've been talking about, in terms of the clinical validity of these datasets. And the motivation for doing this request for task orders has come from a lot of different places: this committee, NCQA's Measurement Advisory Panel, as well as the Consumer/Purchaser Disclosure Project.

The research questions that we'll be tackling, or the contractor will be tackling, in this request for task orders, which was put out April 12th and which we're still in the process of working on, are: what key clinical data elements can be added to selected measures from the AHRQ QIs, and to what extent does the use of clinical data in performance measurement add value relative to the cost of data collection? Now this particular mechanism is used by AHRQ and by other HHS agencies on kind of evaluation type aspects, and so that's why it's relatively AHRQ specific in terms of the AHRQ QIs, but as I've been explaining, the AHRQ QIs are only kind of a lens on administrative data in general, so we feel like it's very widely relevant.

I can provide, again I will send this information so you all will have it, but the QIs that are going to be examined in this study are mortality rates, patient safety indicators, and potentially failure to rescue, which is another patient safety indicator; a subset of those two, and of outcome areas, as Jerod was talking about, where risk adjustment, and specific risk adjustment, would be important.

The clinical data elements that will be examined include history, vital signs, clinical assessment, procedure reports, lab values, cultures, DNR orders, and times, and obviously vital signs, lab values, and times would be especially important in terms of the set of eight recommendations we're considering today.

I won't go into any more detail on that, mainly because that's about as much as I have, we're just in the planning process of that, but the reason I bring it up is mainly to say that not just that particular request for task orders but other work that other partners are doing may help the committee hone its discussion across the numerous recommendations, and even, in a summarization of these hearings, the sentiment that some of this examination should be data driven, with the efficiency here, in terms of the outcome, being that what are the most efficient data elements to add will be data driven and statistically determined. I think there might be some room for that in addition to the kind of expertise that testifiers are bringing here to you today.

Finally I'll just say that we look forward to working with the committee in general. I know this is the first kind of big step that Bob has moved the committee on in terms of his tenure, but we're looking forward in general to collaborating more with the committee on shaping the National Health Care Quality and Disparities Reports, as well as to keeping the committee up to date on work we're doing with the QIs and the National Quality Forum; that's actually why Ken's not here, he's been working on the quality indicators while we've been talking. And then the Quality Committee we feel would be a perfect spot for ongoing discussions on measure alignment, and that's I think really what we've been talking about here, standardization and measure alignment. Probably the number one legacy that the committee could leave in its review of this particular report would be improved alignment and improved standardization.

Thanks.

MR. HUNGATE: Very good, thank you all for excellent testimony, we're open for discussion and questions. Can any of you speculate on what Ken might have said differently than any of you?

DR. LOEB: I think he'd agree.

MR. HUNGATE: That would be my guess.

DR. COHN: First of all I want to thank our testifiers for what I think has been some very interesting, helpful testimony. First of all, Peggy, I really wanted to thank you for I think reminding the workgroup that electronic health records are not a panacea. As one who's been involved in system development since about '84, I mean we keep sort of talking about well, when electronic health records come, but maybe part of these lists of items is really also to help inform electronic health record developers, since you're right, free text, or just not structuring it right, can create great barriers.

MS. O'KANE: I would actually encourage, I don't know if this is what you're looking for but I think that's a huge topic and I've been kind of going around looking at installed electronic health records just for my own personal education and it's quite edifying to learn about the experiences that people have and unedifying at the same time. So I think NCVHS, I think that's in your bailiwick, right? I think you should declare it in your bailiwick and I think you could make a tremendous contribution by pointing out some of the pitfalls, some of the ways in which we can sort of leap over some of the early experiences that people have had where they've been very disappointed with what the records have done.

DR. COHN: Well it isn't even disappointment, it's just not engineered to answer the right questions.

MS. O'KANE: I was visiting Chuck Hylow(?), I don't know if you know him, but he's got this Greenfield Medical Clinic in Portland, Oregon, which is kind of a clinic of the future, and he introduced me to this young doc that he's got in his practice, and I won't mention which brands they had, but the guy said he just wants to kick it all the time because it's so clunky and it doesn't do what it's supposed to do, particularly talking about the registry function. So I think that there is tremendous, tremendous value that could be gained by just kind of reviewing some of the experience and trying to send a signal to the vendor community, or have some mechanism for feedback from customers to vendors; it may actually take some kind of institutional intervention I think.

DR. LOEB: As I view this I think one of the real problems is we develop performance measures from a whole variety of data elements on one side and you've got the vendor community on the other side and they don't integrate up front, they try to integrate at the back end and that creates enormous challenges, both technical as well as political and scientific. And I think that's a real issue, I know that there are a lot of discussions going on today about trying to do this in tandem as opposed to do them entirely separately but there's FTC issues and a whole variety of other drivers that make that really difficult but I think that's part of the problem, the thought processes are really quite different.

DR. COHN: Well if I can just give an example or two, my specialty is emergency medicine and so I developed systems back in '84 for the emergency room. And I have very good systems that would automatically time and date stamp admission data, when you're admitted to the emergency room, but of course not arrival data. Or the time that you documented an intervention, but not necessarily the time that the intervention was actually delivered, because you're obviously in the room taking care of the patient and most systems don't allow it, I mean there are some obviously where you can do it in real time, but usually in the midst of a mini code you're not sitting there looking for your keyboard. And so it gets in the way of all of the measures that you've all been talking about.

I had one final comment, and it had to do with the work from AHRQ; I want to thank you for bringing forward this concept of evidence based examination of these pieces. Now I'm actually not part of the Quality Workgroup, I'm from Standards and Security, and we're sort of the people who are trying to figure out, if this were to happen, how to make it happen. And I think there's been a sense, at least speaking for myself, of high level concept, right concept, but what exactly do we mean here, and is there really a cost benefit on this one, and can you be specific enough that I'd even know how to do a cost benefit. And it sounds to me like you're actually going to be going off and trying to do that work with much greater specificity.

DR. KELLEY: Right, I didn't mention but I should have, I think I was trying to limit my slides, but we're partnering with the Pennsylvania Health Care Cost Containment Council to use their data; that will be the kind of data setting for the analysis. So anyway, in some ways it's getting to Jerod's kind of synthesis of both ends of the spectrum there, but the feeling was it would be nice to have some statistical experience in a small kind of pilot setting at least that could get compared with some of the thinking on this report, and it evolved in part because of some of the recommendations that were getting formed from this. But as everyone here knows, this kind of thinking about improving administrative data has been coming from a number of different places.

MS. POKER: I just had to relate to what Simon said before when a patient codes and you don't look at the time, the patient's care gets so in the way of data collection it seems like. But the point is it shouldn't, in a good system what we should be doing is capturing all the relevant data without it necessarily becoming a burden, so that's hopefully our goal here. But Jerod, you had mentioned before when you were talking about the right question to ask in quality and that's I think what we're struggling with here, we're trying to find the right questions.

In addition to the recommendations that were presented does the panel feel there are recommendations we haven't presented? Or are there quality questions as you put it, the right questions, that you would like to suggest to us to consider also? We'd really appreciate it.

MS. O'KANE: We'd like to come back to you with some suggestions. I think there's a whole set of questions around cancer where we're completely hobbled because we have such dysfunctional registries and so forth; we just had a meeting on this so it's top of mind for me. There are also some of the things that we had to turn down as potential quality measures because we couldn't get the data. But I think that there's been a framing of priorities, and the IOM priorities are I think a great place to start; I mean if we could make progress on those things that would just be enormous. And I think one thing that I would urge the committee to think about is that there's not going to be a solution that's the permanent and eternal solution, so I think moving forward, we always learn and we're limited by our headlights, and I think you can take a sort of best report, best set of recommendations, given what we know now, and maybe looking back in five years you'll say well, why didn't we think of that, and the reason is because we didn't have all that information. We will come back to you.

MR. HUNGATE: A follow-up related question/observation I guess. I think I'm hearing from the combination of things that there are some sub pieces of some of these recommendations that are better than the more global recommendation itself, in specific places where the data is more helpful, the disease specific, condition specific. And I don't know what the right approach is for us as a group, how specific we should try to get in our more complete recommendations. And Simon says very, and so I think that's part of it; if it's an efficient measure then it will have a strong business case that says the value exceeds the cost, and that probably is quite specific. Now how do we get from where we are to there?

DR. LOEB: I'm not sure of the answer to that question, but I would certainly agree with what was just stated relative to the degree of specificity; the devil really is in the detail here, and one of the great frustrations in performance measurement is that things that at face value do look alike, when you start getting underneath them at the data element definition level, really aren't alike. That's what's really dogged a lot of the work, and I see Barbara sitting back there from her previous life, working together with Barbara, trying to align and in fact create identity between measures that the Joint Commission specified and measures that CMS specified, which at face value looked pretty similar. It's taken us years, and right now we anticipate issuing a manual, an aligned manual, one manual with the Joint Commission and CMS logos on it, roughly mid-September, and I mean it's literally taken years to get to that point. But that's going to move the field forward, we think, much more quickly and comfortably because it will indeed create a singular data collection opportunity.

DR. KELLEY: I'm wondering though, and I don't know entirely what your hope is for the outcome of the series of hearings that you're going to have on the recommendations, but it would seem to me that across the broad set of the recommendations, if implicit in your questions there's sort of a range in value for the different recommendations, then there might be ones that are higher value, and again we're thinking not just cheaper but the actual value that they have, and the outcome of the committee's effort might be to recommend those that are of higher value. And I wonder, because that kind of work, I completely agree with these comments, it really looks like they're lined up, we have the same measures, let's move forward with them, and then you figure out they're very different, there are a lot of specifications that are different. I'm just wondering if you need to have all of the specificity for the outcome of this particular effort, or if it's the next step, and the piece that follows is a subgroup getting together on a subgroup of the recommendations.

DR. LOEB: This sets the table though to see if it's really critical.

MR. HUNGATE: That's part of my concern, is that we have limited resources, I have learned very quickly, and so we can make statements of principle a lot more easily than we can make reconciling statements. The standards group does get a lot closer to reconciling, but it's got a pretty full plate too, so I'm trying to think about what's the best way to get what's most useful.

MS. GREENBERG: I think if you look at the candidate recommendations, like the first two, that's where I hear, and I've heard this discussion at the National Uniform Billing Committee as well, about well, which vital signs, which lab tests, when would you collect them, and it's pretty clear I think that if that isn't highly specified then you won't get comparable data. And there's the question of who should be highly specifying that. I think the way these recommendations read now is to create a mechanism for reporting basically this information, and then I think the report does go into this a little further about having the appropriate groups really identify exactly what should be collected. And do you feel that that would be useful in and of itself, trying to push the state of the art forward, calling for a mechanism for collecting this without actually saying these vital signs or these tests at this time, etc., and then would you see groups such as your own sort of filling the void there?

DR. LOEB: I think there are a couple of caveats that undergird what you say. First is a consensus development process that includes the right stakeholders; I think that's really important. The second caveat, at least that comes to my mind, would be the notion of being sure that there is a cost benefit analysis that underlies this, that the need for blood pressure, heart rate, etc., really is identified. We all sort of intuitively think that, but we believe in evidence here, so does the evidence support it; I think that's going to be really important as well. I've got a few more but I'll trade the microphone.

MS. O'KANE: Well I think the whole question of who's going to implement measures is a big open question these days and there are all kinds of projects going on, kind of ad hoc projects, there are people like us who I think maybe have a bigger staff and a little more expertise with this stuff.

I think what's really important though is to understand that this has to be sort of a living relationship, that there's not sort of one set of recommendations that you could put out that would solve the problem once and for all, I mean for those of us that are maintaining measures we're constantly discovering new things that we didn't know about or somebody's data system where it's not working. So there's a need for thinking about how this would, and I don't have an answer because what I'm saying to you is I think sometimes there are organizations where you could have an ongoing relationship that are more institutionalized and then you also have kind of ad hoc projects where I think it becomes much more complicated and I don't know what you can do about that except maybe realize that some of these things are going to have a limited half life.

MS. GREENBERG: I don't want to belabor this, but I guess my bottom line question is, do recommendations like one and two, which generally you supported with the caveats, all of the caveats that we understand, would the recommendations at that rather general level be useful? Do you see a benefit to a recommendation that recognized that collection of some of these, some vital signs, some lab test results, etc., and having a mechanism to do so in administrative data, is a thing that should be pursued?

MS. O'KANE: I think you should be more specific.

DR. LOEB: I think it points the car in the right direction but it doesn't move the car and I think that's a real significant issue.

MS. GREENBERG: So you'd like to see the committee actually being more specific.

DR. LOEB: From input from appropriate parties, yes.

MR. HUNGATE: The same subject right?

MR. REYNOLDS: Well, it's a little different slant and Gail has been waiting and others so I'll wait.

DR. JANES: To a certain extent similar to what we were talking about, something that Peggy said jogged my memory about something that struck me, which is the issue of physician profiling, and while we didn't really speak directly to that in this report --

[Inaudible.] Are there certain elements that we have included in these recommendations or that we haven't and perhaps should have that would play into that world and would strengthen our ability to do that given the fact that it has --

MS. O'KANE: Well, if I could just step back for a second and say that I think that the issue of measuring individual physician performance is not for sissies. Our approach to this I think has been fairly conservative, and that is it's a target based approach where you basically see if you comply with certain kinds of standards, using ADA guidelines, using American Heart Association and ACC guidelines, and then some structural stuff with our Physician Practice Connections program. So while I think people think of this as some sort of radical thing, we actually think it's a fairly conservative approach and really quite distinct from the kind of profiling that I think we're on the verge of seeing all over the place, some of which will work and some of which will I think not work.

I do think, there are examples of plans that give profile information back to the doctors from the claims data at considerable cost to the plans and they have had enormous success in improving outcomes and the physicians love it, so I think the thinking about creative ways to take claims data and populate practice management systems could take us a long way forward while we're all waiting for the day of the all singing all dancing electronic medical record.

So I don't know if I answered your question --

DR. JANES: Yes, just the last set would be and if we want to move in that direction, are there certain --

MS. O'KANE: I think the question is what does that direction mean. The answer I would give is I believe there is very important utility to having physicians profiled; I don't know that it means that everybody's results get publicly reported. With this target based approach we've got, nobody reports anything if you don't hit the bar. The boards now are sort of trying to figure out exactly what their evaluation methodology is going to be like, and I think we need to think about evaluation, or feedback, as something more comprehensive than evaluation, something I think that you could learn from.

DR. JANES: I couldn't agree more, that it has inherent value and -- [inaudible] -- but like I said, to get back to my initial question and that is if we all around the table decided that yes this is a good thing and that no we don't want to wait, we don't want to engage in large scale chart reviewing, we want to wait for a comprehensive record, are there certain data elements in the administrative data which could facilitate that process which we have not decided or --

MS. O'KANE: Let me get back to you on that because I think we'd like to just go, I'd like to ask my staff that question because this gets to a level of detail that I'm just not --

DR. CARR: Well just speaking as a physician if I may for a moment and thinking about physician profiling, I think the categories of volume, practice and outcomes are a few ways of looking at it and I think we're looking at volumes and I think there's consensus among physicians that that's appropriate. Adherence to best practice I think is another thing and if we got those two done we'd have made tremendous strides and I think we've already seen that in claims data feedback and so on. I think where you get challenges is outcomes and that's where the sophistication of your risk adjustment system comes into play and I think that's a much, much larger discussion with a different audience or different participants in addition to the folks here. But I think on volume and adherence to best practice there's information that we're already getting that we could get --

MS. O'KANE: Well, I think we're getting it, but I think we're getting it at massive cost, and with our recognition programs it really involves the doctors going through paper records, and when I see what's been done, for example by Exelas(?) Upstate New York, which has been very innovative, I think that there's a lot of potential here to give them better information. Now there's also the separate issue of volume; if you're looking at a primary care physician they're not going to have enough of any one particular category --

DR. KELLEY: But I think this also brings up a special point, the issue of risk adjustment. Having the physician specific information and having the data elements still does not mean you have the risk adjustment yet. I mean we can witness what is going on: CMS is working on some of their risk adjusted AMI 30 day mortality and stroke mortality reporting, and we've been real interested in that and in wanting to include those in the National Health Care Quality Report, and there are questions even of whether the models should be kind of clinically derived or statistically derived, and people come from very different camps on that. So I do think that the devil is in the detail, as people have been saying, and it's a long term prospect to go from the data pieces that we think are most statistically efficient as well as clinically important to have, to actually being able to use the outcomes, even once you have agreement on those data pieces.
DR. LOEB: Actually could I just kind of dovetail on what Ed just said in terms of using the data. I think one of the biggest issues that dogs the discussion in this area is how do you sort out the accountabilities. Even if we had unique physician identifiers, and we're dealing with that right now in the context of measures that track such things as beta blocker use, ACE inhibitor use and so on, where do you point the finger? And I understand the whole notion is quality improvement and not accountability, but let's be realistic about this at the same time: when it means board certification and ultimately perhaps public reporting, the stakes are very, very high and the tempers are very, very short, and you're right, it isn't for sissies.

MR. REYNOLDS: I appreciate your comment, and I can now add to my resume that I have heard Ed twice so I'm pretty excited; no, actually I'm going to ask him some questions and that's even better, it's getting exciting. As we deal with this, and I deal with this along with Simon and the others on the Standards Committee, the philosophy of quality is easy to agree on; the reality of capturing the data and doing it is another matter. Right now we are spending many days going through e-prescribing and we're dealing at the individual doctor level, and then you look at quality, where we all have to remember the individual practitioner in some of the smaller places; the large hospitals may have the bodies to do some things, but as we get down, what do you as kind of the quality evaluators see as the value proposition? If this group agrees on what it's going to be and then the standards group agrees on how it's going to come through, what is the value proposition that can be sold to the individual practitioners as they deal with their vendors and then deal with everybody else through the chain, so that they get this done?

DR. LOEB: I'll be the idealist in answering your question and it is improved health care quality, pure and simple, the ability to track longitudinally how you're doing as well as compare it with that of others in a standardized manner.

MS. O'KANE: I think the value proposition is being made by all kinds of forces in our society that are taking on the issue of accountability in health care, and I think there's a lot of frustration among physicians, and appropriately so, at all the different things that are coming at them. I personally think we're not going to solve these questions just through data strategies and measurement, and that there's a kind of renegotiation of the whole definition of accountability that has to go on, and I think we're beginning to see that; I think that the boards are stepping up, trying to, I think that there's a whole agenda here that's not simple in any way. But the fact is that this is going to be happening one way or the other, and there are just multiple examples of it; it's not necessarily an only upside proposition, and if you don't sort of embrace the agenda and try to figure out a way that we all can constructively work towards advancing it, then we'll continue to live with chaos and waste and unfair judgment.

DR. LOEB: And follow the money, that's the other piece of it.

DR. KELLEY: I think that's a good question because it would be easy to get caught up in the kind of benefits of improved data for improved data's sake, given that generally we're converts to that line of thought. I think Jerod's point is well taken, improved health care quality in terms of being able to do this at a much cheaper cost; it's not even possible for your average physician to kind of do regular chart abstractions and compare themselves to CMS data or whatever.

But in terms of a pay for performance environment that would obviously, that environment has to kind of be in place for the actual kind of dollar side of that value equation of what Jerod's talking about, I mean I would think that every average physician would want to improve quality for improved quality sake and improved outcomes but it would also be the pay for performance aspects.

But I would just say one caveat, having done in my previous life quite a bit of research on the implementation of different quality improvement and quality assurance methods. I had done a research project looking at feedback of performance to a relatively large set of providers, and in terms of sustained improvement in quality, just sort of knowing where you are, and the gap between you and maybe the best performance in your class, or even the rest of the state, or however it is, only gets you so far. I mean there are push factors that get us to do things better in our jobs, in any job, and there are pull factors in terms of getting actually paid for our jobs.

But then there's a lot that's internalized in terms of just wanting, everyone wants to do a good job; I mean my garbage collector, you hear this all the time from anyone you know, you always see it in TV interviews, I'm the best darn garbage collector, blank. And so I think that there's an aspect of that, but that sort of wanting to be as good as the best only gets you so far; there are structural barriers that do exist.

So I do think we have to think hard about the value question and where it ends up and what are some of the remaining barriers that this won't solve and be honest about how it gets solved.

MR. HUNGATE: This has been a very helpful panel, I'm sure that there's a lot of follow-up that needs to get done and we'll get back on those things. But I know right now Jerod's got to go catch his cab so I think we better take a 15 minute break and return. Thank you very much.

[Brief break.]

MR. HUNGATE: Let's see if we can move along. Okay, I don't think we've really had any banjo players so you're not following a banjo player in this case. But I will say we've had two excellent panels and we look forward very much to your input to what I characterized this morning as a hearing on the subject of lurching toward measurement. And I don't think it's a bad characterization.

For both of the last two panels we've taken a few minutes and gone around the table to introduce everyone so everybody knows everybody's name, and that's all the speaker introductions we do, so say what you wish in that context, and let's start with Julia Holmes.

DR. HOLMES: I'm Julia Holmes and I'm here at the National Center for Health Statistics, and I'm a staff member on this Quality Workgroup at NCVHS. I also work on using NCHS data to measure quality and disparities; we've been very heavily involved with AHRQ's National Health Care Quality and Disparities Reports.

MS. POKER: Hi everybody, my name is Anna Poker and I'm from AHRQ and I'm the staff lead for the Quality Committee. At AHRQ I also work on the quality reports, the NHQR and the NHDR, and I'm also part of the patient safety team there.

MR. HUNGATE: Bob Hungate, principal of the Physician Patient Partnership for Health which is me and nothing more, chairman of the Group Insurance Commission, chair of the Quality Workgroup, member of the committee.

MS. HANDRICH: I'm Peggy Handrich, I'm a member of the committee and a member of the Quality Workgroup, I'm from the Wisconsin Department of Health and Family Services with a particular interest in Medicaid and health data collection on the state level.

DR. CARR: I'm Justine Carr, I'm a physician in health care quality at Beth Israel Deaconess Medical Center and I'm a member of the committee.

MR. REYNOLDS: Harry Reynolds, Blue Cross and Blue Shield of North Carolina and a member of NCVHS.

DR. COHN: Simon Cohn, national director of health information policy for Kaiser Permanente and a practicing physician.

DR. JANES: Gail Janes, metatheneologist(?), and staff to the Quality Workgroup.

DR. EDINGER: Stan Edinger, AHRQ, and also staff to the subcommittee.

DR. KAZANDIJIAN: I'm Vahe Kazandijian, I'm from the Maryland Hospital Association.

DR. HOCHBERG: Stan Hochberg, I wear a couple hats here, I'm the medical director at Provider Service Network in Boston, I'm also the chair of the National Quality Partnership Physician Council.

DR. PAUL: Barbara Paul, I'm senior vice president and chief medical officer at Beverly Enterprises, and I was formerly at CMS for five years, most recently heading up a lot of the quality initiatives that were launched under Tommy Thompson and Tom Scully.

MS. FOSTER: Hi, good afternoon, I'm Nancy Foster, I'm a senior associate director of policy at the American Hospital Association, and in that capacity and perhaps relevant to your discussions this afternoon I've been working very closely with a number of other organizations on an activity to nationally voluntarily report hospital quality data to the public. Barbara was instrumental in that when she was at CMS and so have been many, many others.

MR. HUNGATE: Okay, we're trying to set up the way we do things so that there's 15 minutes of presentation, no questions after that so that we save all the questions until the end, and it has produced productive discussion so I think we'll stick with that. Nancy, you're on your way.

Agenda Item: Provider Organizations - Panel 3 - Ms. Foster

MS. FOSTER: Thank you, Bob, and thanks to all the members of the committee for allowing us to be part of the discussion, I really appreciate that, I hope to be briefer then 15 minutes because I'd really like to get to your questions.

A couple of points I'd like to make going in. One is that as you think about improving health care quality and improving health care quality measures, it's important to remember that at least the membership of the American Hospital Association, over 5,000 members strong, represents a very diverse group of organizations with very different capacities to accommodate changes. Some are very much at the forefront, and we've heard from a couple of the practicing physicians on the panel here who are among colleagues in organizations that are trying to adopt information technologies and move ahead rapidly. Others work in very small institutions, seven, eight beds, some even smaller than that. I think our smallest member is a two bed hospital. Everything in between, absolutely everything in between in terms of numbers of beds, numbers of patients, sophistication. The only common thread, outside of the fact that they care for patients who need general acute care, is that they are committed to serving their communities in the best way they possibly can. And they view quality measurement as an activity that will help them serve those communities better.

In that light when I come to talk to many groups I hear this sort of angst about the fact that there's not a lot of quality measurement out and available to people. That's not the same perspective that our hospitals have, our hospitals think there's an inordinate amount of quality measurement going on, which is not to say that we think that there's a lot of quality information out there, there's a big distinction. And part of what I think I've seen in the report that you shared with us and asked us to comment on is that struggle to figure out how best to serve this master of finding a way to communicate good information, not data, good information, to the people who want to use it to effectively improve care, which could be providers, purchasers, the public, patients, any number of organizations. But none of us can do that quality improvement job without a source of data and the context in which to understand that data. And we struggle because the data themselves don't come with a context and we really need to create that.

At the American Hospital Association, and I know this is true for several of our colleague hospital organizations that are in the broad based national voluntary initiative that I referred to before, we want to work with other organizations to accomplish two goals. One is to provide information to the public and to our providers that will help them to improve quality. And the second is to dramatically reduce the cacophony that exists around measurement. It's why we started the quality initiative, it's why we continue to work with a number of organizations, and it is an incredibly important activity as far as our membership, our board, and our president are concerned. That is not President George Bush but my president, Dick Davidson.

And we've started with a small set of measures, just ten measures that come out of the Joint Commission and CMS, a common set of measures, as a way of building a data path towards getting information out to the public. It was really a learning process for us and I want to communicate some of the information we've learned as we've moved ahead with that effort to get data out, information out, to the public.

Jerod and others on your previous panel referred to the fact that it's incredibly important to get standardization of data, standardization of measures. It reduces the burden of collection, it moves things forward. I cannot emphasize that enough; that kind of standardization, which is reflected in your report, is incredibly valuable, and we would fully support the efforts you're undertaking to get there. Having said that, it is more than just the standardization of the measures and the specifications; it is around standardized methodologies for data collection. It is really down at the very nitty gritty level of the person in each hospital or in each organization having a common understanding of what they're supposed to do, how to do it, and doing it that way, in order to build up to where you have data that you can compare and generate real information. It is not an easy task.

In asking us to come here you've asked us to comment on the value of embedding certain information in routine billing information, administrative data, and others, and I have to say we understand that question only in the context of what we're trying to do in moving things forward on quality, and I don't know how to answer the question, is it valuable to embed certain information in the uniform bill. Well, only if you have a common understanding of what the priorities are, what should be measured, the measures that will effectively answer those questions, and then drill down to the nitty gritty level of each of the specs that needs to be gathered in order to effectively measure whatever you're trying to measure.

Your report points to the IOM Report on National Priorities, a great place to start, 20 priorities that an esteemed group laid out at the Institute of Medicine, a wonderful starting point. But there's a large jump between those national priority areas and knowing what to measure. It would be extremely helpful if you could help us understand how to get from those priority areas to knowing what are the most critical elements of care to measure about each of those priorities. Because once you know what the most critical elements of care are, then you can go back and look and see if you have measures that will effectively do it, and begin to talk about what data elements you need in order to collect the information needed to create the answers to those measures, a very detailed process.

In one area of your report, as I read it, you talked about the value of outcome measures, particularly for consumers, and asserted that outcome measures are in fact the most valuable for consumers. I'd have to say, as somebody who's been working in quality measurement for more years than I care to comment on, that I think, and many others would agree, the door is still open on that one; the question is really still unanswered. When we look at outcome measures such as the Medicare mortality data that was collected and reported in the early '80s, and other mortality data such as that reported in New York and Pennsylvania, it's been incredibly useful to some of the provider community folks, but the fact is that folks only believe it if it has a good and rigorous risk adjustment methodology, which you've talked about. Some very good focus group work with consumers has suggested that when they look at outcomes that have been risk adjusted, they read that as whoever is collecting the data having played with the data, having fixed the data so it comes out the way they want it to come out, not having risk adjusted it in the way that you and I would understand risk adjustment.

There are some very good process measures that may be as effective in communicating to the public as outcome measures, and I wouldn't rule those out if you would consider some of them.

I want to just finish up here by commenting on some of your specific recommendations and then move on down the panel. You have recommended collecting the specific provider/physician identifier, and I know that the previous panel had some discussion around that. We at the American Hospital Association understand quality to be a systems property, not an individual provider property. Several years ago I used to work at Georgetown University Hospital doing quality improvement work for the Department of Medicine, and while I was there there was a picture that appeared in Life Magazine, as the centerfold for Life Magazine; they came and took it at the hospital, and they wanted to identify anybody, any clinician in the hospital, who had touched a single patient. They identified the patient, got his permission, had him sit front and center, and there were 99 people who stood behind him. Now this was a patient who'd only been in the hospital for three days; all of those people affected his outcome, and as we look at it, people who cared for him before he came in the hospital and people who cared for him after he went home all affected his outcome. Systems properties are very, very important, and by proposing to embed this individual physician identifier you may be running the risk of scaring people into thinking you're going to come back to them with the big accountability hanna(?), and you may be misleading folks into thinking that it is in fact an individual physician who is responsible for everything that happens to that patient while in the hospital.

You have also suggested that there should be some flags embedded, like the flag to identify the conditions that are present on admission. That would be a great idea actually, that would be extremely helpful. The problem is that there's not a lot of consensus around how to identify what was present on admission for some conditions that you'd want to flag. We've been dealing with this around pneumonia: was this a community acquired pneumonia or a hospital acquired pneumonia, how do you define it, what does the timeline look like, when do you know; all of those issues come up and there isn't, to the best of my knowledge, any consensus on how to effectively identify that. So having a flag when you don't know what to flag isn't helpful.

The same may be true of the service dates and times, which you've had a lot of discussion about already today. It's very difficult, it's actually a fairly rigorous process, to try and collect that information about what was provided and when it was provided; it's very time consuming. Much of the data that we collect right now, especially if it has to be abstracted from medical records, is extraordinarily time consuming.

We've taken a look, we've asked hospitals, I don't have a really solid number to offer to you but most of the 100 to 125 bed hospitals that we work with say they have at least one full FTE devoted to simply collecting data right now, more if they're actively involved in various projects. That's a lot of investment. And to date hospitals have said we see the value in our own internal quality improvement work, we see the value of collaborating with other organizations or collecting data similarly. We don't see the value in the public sharing of the information, not that we're against doing it but there hasn't been a response from the public in the way that people have talked about it, so that part of the value proposition isn't there.

As you move forward to think about imposing more data collection you've got to think very carefully about what it is, what the value is, to whom that value is, and how you make sure that the folks who incur the burden at least get some sort of recognition for that burden and response.

Let me stop there and pass it on to Barbara.

Agenda Item: Provider Organizations - Panel 3 - Dr. Paul

DR. PAUL: Thank you, and I would like to first thank you for the invitation to be here, I really am honored to be here, and I would also like to add my congratulations on your report, it was wonderful to read. I also am glad I was able to be here a little early to hear Jerod and Peggy and Ed, because in my former life at CMS I basically would have been belly to the table with them in that presentation, or this one this morning where Trent Haywood, who was my deputy, has now taken over the position I used to have at CMS.

And I'd just like to echo a couple things that Jerod and Peggy talked about in terms of the incredible value of standardization to propelling health care quality, I hope you'll see how I see that value from my new lens in the long term care area in a minute, and also the value of really integrating data collection into the clinical flow of processes.

And I'd say that I think that what my comments are going to be, I'm going to drill very specifically into the long term care area because it's clear to me that you've had a lot of input from hospital centered folks and since I seem to be the one person at least on this panel representing long term care I'm going to get very specific, a little different from the discussions you've had thus far, but personally I would say that my comments are yes and, that listening to Jerod and Peggy particularly, it's a yes and which you're going to hear, yes and in terms of you've been talking about the vehicle being an administrative claims vehicle, I'm going to tell you sort of a yes and to that.

You've talked about the role of standardization and measurement in terms of using it for quality improvement and for accountability, and I'm going to give you sort of the yes, and, for that. And then you also heard a very, very brief set of comments from Peggy about recommendations seven and eight, functional status; hopefully I can give you a little more depth on the yes, and, there.

I would like to first just let you know that Beverly Enterprises cares for roughly 45,000 patients around the country in about 350 nursing homes. We provide physical therapy, occupational therapy, and speech therapy for roughly another 40,000 or so patients in some additional nursing homes that we contract with. We also have a number of home health and hospice agencies and so we really do as a company reflect that sector that's often referred to as long term care, so the elder care spectrum.

And we measure our quality, we have a very detailed scorecard that is heavily weighted toward clinical quality. We align incentives at our company toward higher quality very, very explicitly and that was one of the surprises and one of the reasons why I took this position. Also our rehab and hospice providers really are I believe at the forefront of some of the measurement work that's going on in that area which is still fairly developmental.

So my presentation this afternoon really is going to lay out my case for why it's important, my case for why I think it's just absolutely terrific that we're going to talk about long term care in this fairly hospital centered project. Secondly I'm going to really focus on recommendation one, which is probably our number one need, then talk about recommendations seven and eight, and then a few conclusions.

Our hospital patients are long term care patients and long term care patients are hospital patients, and just a couple of quick statistics to try to prove my point. In 1999, for the 1.5 million long term care residents that year, 46 percent of nursing home admissions nationwide came from a hospital, and 30 percent of nursing home discharges went to a hospital. So if you're talking about hospital quality and hospital costs, you've got to be looking at long term care quality if you're ever going to really get at hospital quality and costs.

My second point to try to set that stage is that the transitions between these settings are frequent, and this was just one interesting study that I found that looked at these transitions between the various settings with a cohort of elderly patients, in and out of hospitals, rehab facilities, nursing homes, home care, site facilities, hospice, etc. Over a two year period, in a cohort of seniors over 65 years old, 18 percent had at least one transition, and for women over 85, a very fast growing cohort, it was almost half who had at least one transition. And of those who had a transition, 22.4 percent had at least one arguably transition related problem: an ER visit in the 30 days after that transition, an avoidable hospitalization, or a return to an institution from the community. So my message to you, and I hope you'll agree, is that if your goal is to be a part of improving hospital quality and costs, then I think one of your key objectives needs to be to find ways to help us in long term care to improve our quality. And I think you have an opportunity with standardization to do that.

I would have to say that as I'm trying to understand the landscape in my new world, intense oversight has a bit of a silver lining for us in that it has resulted in a standardization of care processes that is fairly impressive to me. There is a lot of standardized valid data, the MDS dataset, the OASIS dataset, very clinically rich data that is standardized, transmitted to a central repository, validated, scrubbed and cleaned and studied, much more clinically rich than hospital data currently, certainly from the standpoint of my old hat at Medicare. And the nursing home environment, for better or for worse, is very heavily an environment of accountability. So while that's tough, it's also a silver lining for where we need to go as an industry.

I would also say that prior government approaches really tended to focus on the bottom of the bell shaped curve, trying to weed out the bad apples, and we've got to do that, it's important. But what I'm really glad about is that increasingly government and other approaches to quality in the long term care area, just as in other settings of health care, are adding additional approaches: technical assistance from the quality improvement organizations, consumer information, incentives, etc. And so I've really been delighted to be moving to this part of the health care sector where we can really start to chase some carrots as opposed to always just kind of running away from the sticks.

Also just a few other comments. I've already mentioned MDS and OSCAR data; we are doing public reporting of that right now, as you know, on Medicare.gov, a fairly robust set of quality measures being reported there. And increasingly, as I mentioned, in terms of my company we are aligning internal incentives with quality results. And the last bullet point, nascent electronic health records exist in long term care in pieces; the MDS dataset is a module, if you will, a piece of it. Our physician order entry module is very, very clunky but it's another piece. We have lots of pieces of it in our nursing home settings in particular, and so I'm really looking forward to your recommendations helping us to move these pieces forward.

So let me go ahead to recommendation one and really focus in on the long term care setting. As I said before, what I've been hearing so far in your deliberations is that you've been talking about the need for standardized laboratory result reporting for purposes of measurement and quality improvement and accountability, and I would say yes, and: what we need it for, however, is just to plain provide care for patients. We need it in real time. You've been talking about providing it in administrative claims data, and that's too late for us. We're getting those 46 percent of our patients from hospitals without laboratory data, and our clinicians are spending clinical person time chasing lab data; it's a real morale buster, it's a time waster. I think the business need is so intuitive that you don't have to do a study to know that if we had real time laboratory data for our admissions, what a difference it would make with that one stroke in terms of the quality of the care we could provide tomorrow.

So just to put a fine point on this, the thing that I want you to understand is that we need it in real time, and it needs to be asynchronous with the billing vehicle. It's fine if it's in the billing vehicle for the hospitals, for these other uses, for risk adjusting hospital outcomes, but for us we need it in a standardized module that we can get our hands on as appropriate. So I'd like you to think about going one better than the idea of simply recommending that this go onto the UB-04 or UB-05, whatever it turns out to be.

The next slide, in terms of your other question about using it for pay for performance, I think you can imagine that again, if we had this information in real time we would be able to provide better care, as I said, with that one step. And right now we're very aware of the fact that we operate under sort of a negative pay for performance environment: if we fall short we are subject to fines, we are subject to withholding of Medicare payments and so forth. Having this information would allow us to avoid that negative pay for performance that we live in right now, and it obviously would also help us to chase the positive pay for performance that we'd love to see in our sector of health care.

Risk adjustment, I think it goes without saying that it would help with risk adjustment as well as our acuity assessment; as we are accepting patients from hospitals we would have a much better sense of the acuity of the patient we're taking care of.

Mode of collection, this is again just to go back to my point about needing to go one better: one standard dataset that's created and maintained by the lab provider, standard nomenclature, sent in a standardized transaction, asynchronous with the billing form. I'd go so far as to suggest that Medicare should mandate that if a lab provider wants to be paid they should have to provide it in this format.

The value ranking: of recommendations one through eight, number one I would have to say is the greatest need for long term care providers, and from our standpoint, because of the point I'm making about our need, they can be independent. If you're talking about the needs that others talked about today you can talk about the need for them to be linked. But from our standpoint, if you could do one thing tomorrow it would be this.

And then some comments to close on recommendation one. As I've been learning about the CCR, the continuity of care record effort, I do think that it contains the shell for what I'm talking about, and I think that would be a way for us all to move together fairly quickly in this regard. So that closes my comments on recommendation one.

In going to recommendations two, seven, and eight, I think you can just kind of hear my voice giving you a ditto for most of those. The emphasis is a little less because I think the need is just a little less in terms of our day in and day out need for quality of care. But let me give you some focused comments on seven and eight.

Mode of collection, again, going one better: we already have functional information on the MDS, and so I think that one real benefit we could offer back to the hospitals would be that the MDS functional data elements might be a useful starting point for a standardized set of functional elements that could be transferred back to the hospital when we are transferring patients back and forth. And I'm sorry, I'm trying to remember exactly what your proposal was, but basically once again one standard dataset created and maintained by the provider of care, standard nomenclature, standard transaction. Again, you were talking about linking it somehow with the billing vehicle, and again, I think you need to go one better than that, it needs to be asynchronous. It's fine if it's exported to that vehicle, but for really effective use of this information it needs to be asynchronous and in real time.

Some specific comments about MDS and SNOMED. There apparently has been a crosswalk recently completed between MDS and SNOMED, and SNOMED lacks functional status elements in the area of ADLs, so CMS is currently talking, as I understand it, with ASPE about the fact that SNOMED, while a very useful standardized set of data elements, lacks functional assessment, and we think we need to look at that and find a way to get those functional elements into SNOMED. For right now we would recommend going the other way around: integrate SNOMED into the next version of MDS and then use the current MDS data elements to fill the functional element gaps.

A few comments on the CCR effort and functional status. CCR as it's currently configured also lacks functional status, but again a crosswalk was recently completed to look at what it lacks, and as I understand it the American Health Care Association and others are working to develop the new elements that would go into the CCR. And as you can imagine, we will make sure those elements map directly to MDS. So I think that again you've got SNOMED, a very useful vehicle that needs some functional elements added to it, and then you have the CCR effort, which is more the vehicle and, again, needs some functional status elements added to it.

So with that I'll close. Long term care patients are hospital patients, and the standardization that you're talking about today I find very exciting, because it would not just help us improve our quality, which is terrific, but would actually help us tomorrow to be providing the kind of quality care we want to be providing. And I also want to let you know that we really are committed to the end result of your work: transparency, accountability, and the real time communication of information that this would afford.

Thank you.

Agenda Item: Provider Organizations - Panel 3 - Dr. Hochberg

DR. HOCHBERG: I want to first thank you for inviting me, I want to thank the committee for thinking of me. If I say anything that you disagree with you can talk to Justine later.

Let me just say a little bit about where I'm coming from so you can put my comments in context. I'm medical director for an organization that pulls together six different hospitals and their physicians for managed care contracting, and we build data systems, we maintain all payer data warehouses and other systems to support that management. You'll see a theme through this; one of our mantras is that data needs to be aggregated. I'm also chair of the Physicians Council and a board member of Mass Health Quality Partners, a statewide collaborative in Massachusetts of payers, physician groups and public agencies whose goal is to aggregate quality data across multiple payers statewide and proceed on a path of public release meeting very high standards of validity.

In that organization the physicians are absolutely committed to the data aggregation piece and also to long term public release, but I think, as Nancy alluded to, in the physician community there's a lot of tension and debate about what the appropriate standards of validity are, what appropriate timeframes are, and whether there is a period of internal quality improvement that is allowed before data is released.

I've also spent time at a large corporation overseeing the building of data warehouse and quality measurement systems, also predominantly claims based, so if there was a field added to a claim in the last five or six years I'd have had to figure out what to do with it when I got back to the office. And we still do that today. I know a little bit about a lot of things, so I'll try to touch on a few things as we go through.

Before we go into detail, I think one of the issues that we have to be aware of is that really in the past 20, 30 years we've taught physicians to ignore data. The average practicing physician gets data from multiple sources. Even if the elements are standardized, the reporting formats and the report definitions are not, and so the validity is usually poor, the reports don't tie, and the Ns don't meet reasonable statistical thresholds. I think people don't actually come out of medical school hating data, but after a while they learn that most of the data they're seeing from administrative databases is not valid and they tend to just ignore it. And I think if these pay for performance models are going to improve, we also have to bring physicians back around to valid data.

The other problem with the data, several other problems: it's lagged, so it's often profiling past performance. The second is it has minimal clinical depth, and that speaks right to the next issue, which is that most physicians don't find the administrative HEDIS measures particularly compelling, I'm glad Peggy isn't here, because to them the issue is, and the classic example is diabetes, it's not whether you did the glycohemoglobin test, it's how you controlled it. Yet all the bonus programs and all the HEDIS, everything's claims based, it has to be across these large networks, and it's very unimpressive to physicians, it doesn't really generate buy-in. I think there's a little better situation in some hospitals that have used their information systems to give good information about what's going on in the institutional setting, but it's rare to have that in the broader outpatient setting. The exception I think is some organizations like Kaiser and Harvard Vanguard who have platforms to do that.

When I look at selected laboratory tests, this really gets a double thumbs up; clearly it would improve HEDIS measures and allow administrative data measurement with more clinical depth. I think the incentive payments under pay for performance contract models would also be better. And for some of the HEDIS measures where people are grouping around 80 percent compliance anyway, the differences on which people are moving money around are not very great, probably well within confidence intervals, so I'm a little skeptical of how dramatically that's going to drive improvement.

Theoretically it should improve the accuracy of risk adjustment models; the more inputs of this depth you put in, the more you should improve them. I don't know that I can tell you the actual improvement in the R squared, but theoretically I think it absolutely would. High risk members in need of outreach is another area; if you look at some of the disease management programs in particular, again just to continue to use the paradigm of diabetes, those programs don't have the information they need to actually decide who to reach out to. This would be an enormous movement in that direction and again would increase physician buy-in to claims based measurement.

On the down side, since there's never any free lunch in this environment, it would dramatically increase the volume of data captured in claims based analytic systems, and current systems would need significant time to prepare for this. When I was at McKesson we were actually looking at LOINC, which hasn't done much in a relatively long period of time; we were very excited, but we were fearful that one day we would get a file with all of that in it and wondered what we would do with it, we'd have to rewrite all our rules, we'd have to expand our storage capability. There are significant operational challenges; if you reported everything it's a bit of a data explosion, and even the risk adjustment models would have to be reevaluated and retooled, so there's a lot of work that goes with this to make it useful.

I think one thing to think about then is a gradual phase-in of measures according to what you think is most critical, instead of putting all of them in at one time: just phase this over a period of years, start with a few critical ones, give people time to upgrade their data systems, their profiles, their acuity predictions, and then move from there.

So to summarize this, I think it's very, very high value; implementation costs are significant but I don't think unreasonable at all, actually, I think it's more of a time issue for depth rather than a magnitude of investment issue. I think the overall balance of value versus cost would come out very favorable, so I would very much encourage this to happen. And again, I think it's really how you phase and introduce this that's critical to its success.

If we look at the second recommendation, which is selected vital signs and objective data, clearly, at least from my perspective, collection for reporting will be burdensome, particularly in the absence of electronic systems. The linkage to claim submissions I think would be problematic in a number of institutions where they're just not used to pulling clinical data beyond diagnosis into their billing system, so again, there's a lot of background work to make this happen in an automated way. And people would at least raise questions about reliability; I don't know how standard the blood pressure measurement is from hospital to hospital, there's inter-operator variability and there are also product and other issues.

One of the questions really is whether to defer this until clinical care IT systems improve and this is captured more easily and then fed down to claims more easily. The immediate value would probably be in inpatient evaluation of selected procedures or care paths where vital signs are significant discriminators around what you would do, so I think there's a smaller subset of things this is going to relate to. It certainly would provide valuable benchmark information, but at this point I would think that the value versus cost is not as clearly favorable as for the first recommendation.

The charge diagnosis modifier or flag for present at admission, again, this would certainly assist in risk stratification of cases, and that would improve current quality assessment efforts and support more accurate measurements from claims databases. Also, to the extent there are a lot of things being sent in under pay for performance contracts, the more easily you can capture acuity differences or comorbidities the better, and you need to do that, but these systems again aren't incenting that properly.

I believe this would require changes to hospital coding patterns; the overall cost to implement for institutions I would think is probably low. Moderate value, low cost, it's probably something that should go ahead; it's low hanging fruit, in another jargon.

The operating physician identifier code I actually found very interesting to think about. It would certainly dramatically improve the ability of outside agencies or payers to monitor and profile individual surgeons. I think most hospitals capture this now, so it probably does not add a lot of value to internal hospital quality improvement; this is really going to be more for use by external agencies. I think there are certainly going to be problems with adequate sample sizes and risk adjustment, particularly if you don't aggregate claims across different payers, so that's one of the things you have to think about with this. And I think claims based risk adjustment models are really not at this point prepared to take claims and risk adjust procedures and outcomes; that doesn't say they couldn't do that, but again, there's a lot of statistical and analytic work to build those models and I don't believe they exist at this point.

I think you'd also need improved outcomes reporting, because if you know who did the surgery but you don't really have a very good ability to actually measure the outcomes, I'm not sure what the usefulness of knowing who the surgeon was is. I think there's another critical piece to this. This would allow tiering of individual surgeons under a pay for performance model, which would scare some of my constituents to death; there'd be a very high bar for statistical validity, in our organization it would be about 80,000 feet up. The other question is what the proper level is to look at things like operative mortality and performance of individual procedures, is it the surgeon level or the institution level. I think currently we haven't remotely scratched the surface on just looking at institutional performance on these measures, and to jump out now and try to look at individual surgeons to me may not be the most rational pathway or the best use of time and effort. And again, a lot of institutions have very robust internal quality improvement programs, and pay for performance, as it incents institutions around quality measures, does get people's attention and does drive some of the focus of the things they do. So I think you can get at this issue potentially without going to this level for now; at some point down the road you may want to think about that.

Individual surgeon performance, a lot of that now is really within the hospital and that's the audience; if you move this to claims based then you're broadening the audience, you put it up for potential public release. A lot of dynamics change; right now the walls around this are the institution's, some of this is peer review protected, so I think you have to think about how that relates if you made this change.

If you look at dates and times for admissions and procedures, I have some accuracy concerns. I think someone mentioned, I can stamp a time that I'm going into the room but I can leave a minute later and not come back for ten, so I don't know how accurate this would be at actually pinpointing at what point in the process something was done. I could stamp going into the room and then not go in the room for 20 minutes too. I think utility is probably limited to selected interventions where you have outcomes clearly tied to the rapidity of interventions, and there is a finite set of things for which that time sequence appears to be critical, so that means this has sort of a finite world in which it would have a lot of value.

I think the other thing is, if you were going to evaluate this you'd have to collect confounding variables. The best example I can give you is a surgical case I know of last week where someone needed to go to the operating room immediately but actually needed a transfusion first, so their time to go to the OR was delayed more than it should have been for perfectly adequate reasons. Now if we were just measuring time and date and not that confounding variable, that would have looked like a quality problem when it probably wasn't. So I think if you go to this you have to think about what other data you need, in essence risk adjusting or adjusting these cases so you can interpret the data.

Episode start and end dates, I think these are useful for evaluation of very selected episodes; in the report it talked about prenatal visits. I don't know that there's a very big universe for this, so I think utility is actually quite narrow here. And again, I'm a little concerned about the probable accuracy of this.

Functional status codes, analytic models clearly exist and a number of them, like HIMSS and others, are well validated. I think large scale collection of this at any reasonable cost is probably contingent on electronic health record adoption, and I'd like to say that's coming in the next two to five years but I think it's really a lot longer. It's really only some of the integrated hospital-physician systems that are making these investments now; there's really no economic model right now for most individual practices to make this investment.

The other thing is, and this was touched on earlier, it's not just electronic health record adoption; if you're going to collect this you need to have standardized assessment and data input. They've put forms in these systems, and that standardization is nowhere in existence amongst the current vendors. So that would require some kind of government action or standards, or some agency to take on standardizing the fields and entry in these systems, and that seems to me quite a long way off. So I think this is probably out of scope in the near term.

And I think that's it, thank you.

Agenda Item: Provider Organizations - Panel 3 - Dr. Kazandijian

DR. KAZANDIJIAN: Thank you. First of all I'm delighted to be here and see some faces I've known for the past 20 plus years in this field, I'm delighted to see that the excitement is still there and that the topic is also sometimes still there.

My presentation is probably going to be slightly different from the others that we have heard so far; I have only three slides and I would like to put my presentation within a context as to where I'm coming from. I do wear a number of hats myself, and most appropriate for today's presentation are a couple of those. I'm senior vice president of the Maryland Hospital Association, and in that sense I bring the association's issues, the relationship issues, on a smaller scale than AHA obviously, more regional. I'm also president of a center affiliated with the hospital association that is responsible for international research only. And I used to say that I have done international work for the past 35 years and that's where I picked up my accent. So that's one of the problems with causation and correlation, but still an important one.

So a lot of the work that we do in six countries deals with performance measures and accountability, so I will bring some of those issues as well. And finally, I'm certainly interested in bringing some academic rigor to this, and we have had that as a goal in our work for the past two decades or more. So that's basically the context in which my presentation will take place.

I will not necessarily go one by one through those recommendations, but I will bring up some issues that I think make sense coming from the field if you want to address the issues that you raised, and make some comments as we go, some directly, others indirectly; I'm hoping they will be addressed.

So the first of the three slides is the question of the business need, is there a business need for this report. I would like to join the previous presenting group in saying that it's very timely, very thoughtful, yet it deals with issues from an angle in some situations that I would like to challenge, if nothing else challenge the angle itself and ask, can we take it perhaps to another level, will that make things easier.

From a business point of view the whole concept of measuring performance obviously is of importance. The distinction I'm trying to make is measuring quality versus measuring performance, which has created a lot of difficulties in the past decade; when projects or initiatives dealt with quality as the term and promised to measure quality, it did not work, whereas measuring performance is a value free concept. So looking at time, for example, as one of your measures: in itself, if you say this is the right time then it becomes quality, but if you say this is the time and this is how people are doing, it becomes comparative analysis. And I think that's an interesting distinction.

The Maryland Quality Indicator Project is where I'm building some of my comments from. I was fortunate 20 years ago to be part of the first group to design that project, and I'm responsible for that project even today after 20 years. It's the largest project in the U.S. as well as overseas. And one of the interesting things there regarding performance measurement is actually the term performance rather than quality. I've spent the last 20 years telling people that the name of the project, the Quality Indicator Project, is misleading, it doesn't measure quality. But that was the thought 20 years ago and then we learned better, I think. So that's an interesting issue.

The concept of benchmarking becomes important I think when you start talking about pay for performance, and I'll come back to that, but under that rubric I think the distinction between incentives and rewards becomes very important. How do you deal with that concept: do you provide incentives for those who are not doing well, or do you reward only those who do well? Do you increase the gap between those who can and those who cannot by doing that? It is still an open question.

So the transformation of performance to quality becomes an important question, and I think that's where the challenge is with some of those measures, because at least in our experience, and in my personal view, the moment you put a value on a value free measurement it becomes quality. If you say a waiting time of 18 hours is okay then it becomes okay, if you say it's not then it's not, but you measure waiting time the same way in both situations. So the issue of quality is that transformation by putting a value on it, and in our jargon that's what we call evaluation, which is putting a value on it; evaluation is a transformation of performance to quality. And that in itself raises a different question: who does that evaluation, who does that transformation, is it the user, is it other stakeholders, is it some other agency from outside? Whoever is going to put that value on that measured performance is going to make a difference in how it's adopted, and I think all of your recommendations have that in them inherently as an issue.

Again, as I discussed, it works toward better performance, it is definitely a business need, and all of your measures I think fit into this. So overwhelmingly I think the business need is there; the question is how you define it. It's one thing, and I think Nancy was framing it as well, it's one thing to say yes, it's another thing to say how do we measure it, the contextual issues and so on and so forth, so the reward issue becomes a very important one. And it's not a trivial issue, because the reward is directly linked to evidence based medicine concepts. If the evidence base is the standard, is the goal, then it's a reward system: you meet it or you don't. If you meet it you get kudos; if you don't meet it you need incentives, not rewards, to get there, and that should be part of the business need as well. And it's important to see if those measures can accommodate the concept of incentives, which is my next point.

The incentives towards better performance, and at least I'm following my own logic, which is good, incentives towards better performance become very important because again, as Nancy was saying, we're looking just at the hospitals but this deals with long term care and home care as well, and the spectrum of size and therefore resources becomes a very, very important issue. The typical American hospital is a mid-size community hospital, and no matter how many of the papers in the medical journals come from large institutions, they do not necessarily represent it unless they are dealing with a generic issue, and I think that's where the challenge is: how generic is this, to be portable and adaptable.

The next question regarding the business need, which I think is something important that I did not find here, and perhaps it is embedded in the logic but not necessarily in there, is the question of ongoing monitoring. What is the logic behind the ongoing monitoring? Because we all know that with ad hoc or short term projects, whenever you throw something out there, people know there is an end date to it, whereas when it becomes part of the fabric of life then it's a different thing, it's not a project, it's a shift in mentality, and I think that's one area to think about. Is this going to be a project? If it's a project people will give you what you want, people will give what they can, but they know it's going to end and they can go back to doing things as before, if that's the reality of it. So the ongoingness, the continuous approach to it, is very important.

And it is also important from a quantitative point of view, because the adjustments which are going to be necessary to make, to take those measures, evaluate them, see how they are behaving over time, recalibrate them, are going to need long term ongoing monitoring. And that monitoring in itself is going to be affected by questions of pay for performance as we said, accountability for sure, transparency, public disclosure, and so on and so forth. So that's one issue.

My second slide deals with the risk adjustment need. Although risk adjustment is a distinct recommendation here, I think it has implications for your other measures as well, and it's important to look at it perhaps a little more closely. The distinction may be important to make here that the adjustment concept in fact has two components relative to those measures. One of them is epidemiological in nature, which deals with stratification basically; that's what it is, in epidemiology it's called stratification, other people call it adjustment. You compare things that are similar basically, you put them in pigeonholes that are comparable, so that's one aspect.

The second one is the clinical adjustment, which we heard goes beyond risk, it goes into actual acuity adjustment, which I think is very important because risk in itself deals with something that could happen whereas acuity is what is there already and how you measure it; those are two very distinct concepts that cut across again.

From the epidemiological stratification point of view, if you have a long term monitoring system it may not be needed, and here I come after a full day of everybody agreeing on risk adjustment, you needed one person to say maybe not, so I'll try to do that. Risk adjustment I think is very, very necessary, clinical adjustment is very, very necessary, when it is a short term analysis, when you have a short time span, six months, a year, to look at something and you try to minimize any variability in your measurement that may be due to things other than what it actually is.

When you're looking over time, when you're looking over time at stratified data, epidemiologically stratified by demographics, by minimal stratification on health status, some of those patterns will emerge, there is enough literature that they will emerge, and it is an interesting challenge how to marry those two concepts. I don't think it is necessary to throw out the epidemiological stratifications, with demographic stratifications that are not at the disease level as the clinical adjustments for severity necessarily are going to be; they do provide good insight if the design of the approach has a longitudinal component to it.

So I would say that adjustment in general is actually the case to be made here, and I think all of your measures would benefit from that. But I would maybe refine the terms so that risk adjustment or clinical adjustment is not the only type of adjustment, and include epidemiological stratification as well.

Needless to say, there has been a whole industry in the past decade looking at stratification, and people have tried different things. Some of them have been so difficult to interpret that they have actually provided tough times for some very good projects, and the Medicare Mortality Report in the mid-1990s is definitely an example of that, where the stratification itself made it almost impossible to interpret among those who understood the material, let alone what it could have done for outside groups. So in that sense the business need for adjustment, and the provisions by the business field to support adjustment of those measures, may be a given if in fact it is provided in a way that accommodates those things.

Then there's a scientific need to uncover and discover and improve on this, and therefore I think it is important to have that long term commitment to monitoring, because as we uncover things we tweak them and change how we do things; we heard both Peggy and Jerod mention that in their work, and certainly we have done that and continue to do it in ours. The measures should be able to accommodate those scientific changes and novelties in how to do things. And I do think, I will come back to that, that except for the one recommendation that seems to be relatively controversial to everybody, which is the physician identifier or physician information, everything else can and could keep pace with the change, whereas that one I don't think can in this area, or do a very good job.

Health status needs, I'm going to come at this slightly differently again, because I think we are at a crossroads on this and this may be a wonderful opportunity, with your recommendations, with your support, with your guidance, to actually take it to a level that people have shied away from. Health status has always been discussed in this kind of context of performance measurement as the health status of the patient when in an institution or when they return to the institution. But to link the discussion between process and outcome that you have raised, and it was raised this afternoon as well, I would suggest that perhaps health status is the ultimate outcome when it comes to public understanding of quality. And if you think about it, health status is what you're supposed to change if you did well. You're going to change it if you didn't do well too, but again, it is still health status.

So in that sense health status perhaps should be looked at and promoted to go beyond the hospital, beyond the nursing home, beyond the institutional boundaries. Why am I saying that? If you look at the most frequently done high cost procedures in the U.S., and other places but primarily in the U.S., though western Europe is catching up very nicely with it too, they are mostly elective. If you look at hysterectomies, if you look at prostatectomy, if you look at C-sections, if you look at evascination(?), and if you look at orthopedic surgery, unless they are for cancer or for a pathology that has nothing to do with functional status or quality of life of the patient, there's always that discussion between the provider and the recipient as to why they are doing it. And functional status in that situation is the ultimate outcome. Is it different from those percentages or is it the ability to walk; that whole discussion becomes very, very interesting, and I do know it goes beyond perhaps the scope of the databases that we're talking about here, but start putting the bug in people's ear that health status should not only be seen in that sense but also as part of accountability and the true outcomes. That's an area we're embarking on now in our work, trying to actually link information on health status in the community when patients are discharged to the care that was provided or the education that was given; it's a delightful challenge.

It does make sense to the community, and by community, I use community in a very lax way and a very generic way; we have the communities of payers, we have the communities of recipients, we have the communities of potential future recipients. And we have the communities of folks that actually put policies together, so it does make sense to practically everybody. Purchasers primarily, if you look at productivity as being one of the measures; when you go to a business group and say here is a model to improve performance or improve quality, one of the first questions about incentives and measures is about measures of productivity and so on and so forth, so it does have an inherent attraction to people.

As I mentioned, I just want to be on the record on this. I know it is probably before its time and probably even slightly out of context for today's discussion, but I couldn't resist discussing health status by saying hey, thinking a little bit outside the traditional circle may also benefit the cause, because for me that is really the ultimate outcome.

And that's where a lot of the interesting comments come about outcomes research; when people talk about outcomes research and they say it's limited to 3.4 average length of stay days, those are outcome measures or output measures or whatever you want to call them, but they do not have in there the issue of health status as much as perhaps they should, especially with the aging population and some of the chronic conditions.

So to end my presentation I would like to bring a couple of thoughts to the table. From our experience years ago when we were challenged with these kinds of issues, in the mid 1980s when there were no projects, there was no initiative in this field, when we started the Quality Indicator Project one of the challenges was how do you describe the actual nature of the indicators, of the measures. And the best way we have found over the years to explain it to various audiences is the following, which I think also fits into today's discussion.

We have called the indicators pointer dogs, and we have said that they do exactly the job of a pointer dog when you go pheasant hunting or whatever hunting you do: a good indicator is a valid dog that points to a pheasant when you're hunting pheasant, not to a rabbit. But the point is, I can give you the best dogs in the world and you will never get the pheasant if you're not trained how to interpret that pointing or don't know how to shoot.

So the real success of this is in training the user, the hunter, of this information, and not just giving people the best dogs. And that has always been the challenge: organizations say we've got the best dogs and put them out there, and people say how the heck do I interpret it when the tail goes to the left; well, that's where the challenge comes in. And by the way, I wrote an article on that in 1990 and the word pointer came into the literature from that article; it was absolutely fascinating, it's a very simple imagery.

But again, it brings the issue back that the approach should not only be, is this going to be part of the database, is this valid, is this reliable; that aspect is important but that's the dog. How does it translate into education? Is there a parallel program for the users? And the users are not only health care users but the communities, because if accountability is part of the model then the community should be able to see and make a difference, and I think at least bringing that to the forefront as the next topic would be a wonderful thing to do.

Those are my comments and again, I do thank you for the opportunity, it's delightful to see that this issue is still very hot.

MR. HUNGATE: Thank you. Having the pointer work means the information is actionable, to put it in another language, and I couldn't agree more. I know that the lady who sits two doors to the right of me here would like to comment on the health status discussion.

MS. GREENBERG: I couldn't agree more, and again, I think the nature of the recommendation on functional status, which the World Health Organization would actually broaden to call health status, which is kind of a combination of functioning and maybe health conditions as well, is to do more research and identify standardized ways of capturing this, to facilitate eventually being able to say comparable things about health status or functioning. And then there's the question of when and where it should be collected; I think you could see collecting it at admission if you want to use it as a risk adjuster, but certainly at discharge, since people get discharged so quickly out of hospitals now that they're probably in worse health status when they leave than when they got there, and it's certainly not the improved health status that you would hope they'll have down the road.

So I think these are all a lot of questions but I appreciate your recognition that it may be the outcome that makes the most sense to consumers and ultimately purchasers, we heard that about back to work, etc.

Though I would agree with whoever made the comment about process, because I think that can also be an important way of communicating to consumers, and it has a real educational benefit too; the more you know about what the right processes are, the best practices, the better you can be an advocate for yourself and your family in the health care environment, and that's helpful to people and it helps them make distinctions.

So anyway, I thank all of you for very provocative testimony and I'm really chewing over the issues that Barbara has brought before us because you're absolutely right that you need this information in real time and that putting it on the administrative transactions will not get it to you in real time --

MR. HUNGATE: Won't do it.

MS. GREENBERG: And there we get back to either the electronic health record in all its glory or maybe pieces of it which is I think what you were suggesting, but as we all know that's not here yet.

MR. HUNGATE: Thank you and we're open for discussion.

DR. COHN: I actually have a question for Marjorie, I don't mean to put her on the hot seat here, but let me just ask her. There are actually two recommendations relating to functional status, and I think we are talking about functional status, though health outcomes and health status is obviously another piece of all of that. One of the recommendations is sort of to study what the right answer is, and in 2001 the NCVHS said the government needs to study this, and then in 2003 CHI said we need to study this. And then we have another one that says go out and do something. I guess my question is, has there been any study, based either on the 2001 or, actually I think it was the 2004 recommendations, or is there a plan to study anything?

MS. GREENBERG: Well, some study was done through the CHI process, but because it was done basically without a detailed research project or resources or whatever, it was inconclusive, which is why the CHI process, the Consolidated Health Informatics process, which identified standards for various fields, many of which have now been adopted by the departments, DOD and VA, did not adopt a standard for functioning and disability, because they felt that they weren't there yet.

So some study has been done, and there are some individual projects out there, but the resources have not been significant; certainly adequate resources have not been put into the research that was recommended by the national committee --

MS. HANDRICH: If I may just add to that, I would venture a guess that in the field of long term care, what study has been done on specific measures for functional assessment has been done by state Medicaid programs.

DR. COHN: The reason I'm asking this is because actually, maybe it would be a little more clear, in 2001 they said geez, ICF, which is the WHO --

MS. GREENBERG: They said it was the most promising --

DR. COHN: Promising, but it really needs further study. And then in 2003-2004, when it was looked at again by CHI, what they said is, geez, there's a fuzziness in this whole area, we don't know what questions to ask much less what answers to expect. And a code set is great; if you know what it is you're doing, a code set can fill it in, but if you don't exactly know what questions to ask, or if there's no agreement on all of that, which is the LOINC code issue, it's hard to figure out what the code set ought to be or what it ought to be doing. And so I guess I was just sort of wondering, it obviously seemed very premature to create a mechanism for recording if we don't know what the questions are or how to answer them, and so I was just wondering, around the basis of the CHI piece, whether or not that somehow needs to be expedited. Am I misstating it?

MS. GREENBERG: The response to that would be that there is some functional status information currently being collected through administrative processes, but it's basically free text, and so having the capacity to translate that into some standardized coding, which one could do a little bit more with than free text, is I think partly behind that recommendation. But even then the standardized coding has not been agreed to at this point, as to whether it should be ICF or SNOMED-CT, and I will say to Barbara that the clinical terms that have been added to SNOMED do include a lot more information about functioning and ADLs; SNOMED before that basically didn't include much of anything. There was actually a real effort by the people in the UK to make sure that a lot of the terms from ICF, the International Classification of Functioning, Disability and Health, were added to the clinical terms.

Then when the clinical terms got married with SNOMED, SNOMED-CT tends to have more; it's not adequate at this point but it has more in it, and again, it's the question of a terminology, the whole continuum between terminologies and classifications. So if you haven't looked at SNOMED-CT, you can see that there's been more added than there was in SNOMED, which had very little.

But Cathy, do you want to comment on number eight, recommendation number eight, which is made in conjunction with the need for more research and whether you think, do you want to come to the table?

DR. COHN: I mean seven seems to be pretty reasonable, I just was trying to figure out how you could do eight given that we need to do a call for some research.

MS. COLTON: Well, I think they were intended to be linear rather than parallel, but I think it was also recognized that the process for adding new data elements is a very lengthy one, and, as was pointed out here, the process for implementing is also a lengthy one. So once a data element gets added to a transaction standard, there's a period of time for organizations to gear up to implement that standard. And so if we're looking forward and saying when is the earliest time that such a code could actually be included in the standard and then when would it actually get reported, one could conduct the research during that period of time. It's quite a lengthy period of time.

MS. GREENBERG: It's an enabler. For example, the institutional standard allows collection of this flag for diagnoses; well, one or two states needed it so they were able to get it into the standard, and everyone agrees now it's kind of low hanging fruit, it's been recommended since, the national committee recommended it back in 1992, only 12 years ago. But one reason it is low hanging fruit is because the capacity is there; if you were going to start now to try to get it into the standard it would take you another two or three years. So there's a reason, and I think the 2001 report said that standards organizations should be alerted that the national committee considered this an important element. And so I would definitely agree with Cathy that there's time to do the research, and maybe putting the capacity in the transaction standard would actually get people to start doing the research, who knows, but there's certainly time.

MR. HUNGATE: Related question. There is this uncertainty about whether the standards are there or not, but I have the feeling that in the rehab community, across that community, they do have their own set of standards that they have developed --

MS. GREENBERG: Sure, they have to.

MR. HUNGATE: They've got to, and I wonder if that isn't true in the long term care community, that there isn't some set of measures that have evolved that are fairly common. And what worries me in this discussion is that we try to get something that covers everything, which means that people who could do things now have to wait until it's standardized for everything, and that worries me, and maybe it's just lack of understanding on my part.

DR. PAUL: My comments on this are hesitant because I think you guys have more depth of knowledge than I do, but in the nursing home setting right now we are assessing functional status; right now we are struggling with the fact that a hospital based physical therapist uses terms different than a nursing home based physical therapist for certain aspects of functional status. Right now we are working with CMS and ASPE and others to try to take MDS to its next level, to take MDS to incorporate SNOMED, to take SNOMED-CT and make it better for functional status; all of that is happening right now. What I don't know is how that affects the world that you're actually living in, but right now it's happening for our world.

DR. COHN: Barbara, you bring up a very good point. I think the reality is that this stuff is being collected, and you made a very salient point, which is that different people ask different questions, or they ask the same questions and they mean different things. Reflecting on what CHI did around all of this, it might be very useful for the Quality Workgroup to ask for a briefing from the people that were in charge of this area to get a more in-depth feeling. But there was sort of confusion because there were a whole bunch of questions that seemed to be sort of the same but sort of different, asked on different forms in the federal government about functional status, and then there's the question about whether you leave them all as they are or whether there's some way to standardize them into a smaller group. And then of course once you have that, the question is to try and standardize the answers, which is really where SNOMED or ICF or whatever may fit in. But it was apparently a complex enough issue that it was held for further study as I understand it, and it's why that LOINC issue as well as the ICF/SNOMED issue came up. And it sounds like you're working on pieces of it just within your work at this point.

MS. HANDRICH: I would agree with the comment that you made, but the long term care population, defined as the population over age 65 or persons with chronic disabilities, desires to live in the community, and the science for measuring functional status for the non-institutionalized population, which is where everyone wants to be, isn't there. So that's the area of opportunity that could be identified as a priority and pursued, perhaps with some endorsement of this committee and others. These are the very people who go in and out of the hospitals and may have short term stays in nursing homes, but the whole trend in long term care is for people who may formerly have been in nursing homes to spend much longer periods of their lives, if not all their lives, at home with intermittent stays in nursing facilities. Would you agree with that, Barbara?

DR. KAZANDIJIAN: If I may just comment on that, I fully agree with your overall assessment, and I think the reason it's attractive is that it brings back the issue of the continuum of care and it brings back the whole concept of linking different sites to different activities, to different outcomes. And I find that the opportunity to have at least one recommendation that goes beyond one setting, if that's possible, may provide very good incentives for future developments.

DR. PAUL: I'm sorry, I can get home tonight if I get to BWI. Thank you.

MS. GREENBERG: I might just say, if I can, Barbara, that I know the committee has a lot of interest in pursuing quality issues related to long term care, and certainly we can call upon you again.

DR. CARR: I'd like to integrate all the very wonderful input that we've had today, but I want to make sure that I understand it. I think this morning we heard a lot from JCAHO, CMS, Leapfrog, and the quality groups about the tremendous opportunity that lies ahead if we can just get quality measurement going. And this afternoon I'm hearing from you about the great risk that lies ahead if we go in the wrong direction or without careful thought. And I just wonder, and I think you've all sort of spoken about it, but if you could leave us with one thought, what is the biggest mistake that could be made? I've never been to one of these hearings before, so, I mean, everybody likes apple pie and motherhood, but what's the biggest danger that lies ahead in this initiative?

MS. FOSTER: I'll start with one, but I'll just go back to the previous conversation and say that I was as excited as Vahe was about the opportunity to look at the continuum of care and for hospitals and other organizations to be able to get feedback on the outcomes of their patients or the functional status of their patients well after they've been in the institution, something they don't get right now.

One huge danger, I'm not sure if it's the biggest because I'd have to think about this a little bit, but one huge danger is that if you identify the wrong thing to measure you are going to push people towards doing the wrong thing clinically. We've seen examples of that. And the continuum of that danger is that if you don't have a plan for continuously updating your information, your requests for data, to keep current with current science, as Vahe was saying, you will keep people doing something that is perhaps not the most effective clinical approach to care. And if you want further elucidation on that I'm sure Jerod and others can talk to you ad nauseam about ARBs and other things that are at the point where we're measuring something that's not exactly what clinicians believe is the right thing to do.

DR. HOCHBERG: I would add another comment, which is that I think all the focus on measurement also has to create the business case, the economic underpinning, for institutions and practices to make the investment to succeed. I can share a vignette with you. I was at a meeting where hospitals were going to be tiered based on Leapfrog criteria, particularly computerized order entry. And a person next to me, an administrator of a small hospital, said, God, for a year we've been talking about the need to find a way to gather money to invest in things like this, and if this hits right now I'm actually going to lose money because I don't have it; that's the exact opposite of what I need to go forward. So I think this has to incorporate some provision for tools and system investment, otherwise people scramble in relatively inefficient ways to do this and we don't get where we need to be.

DR. KAZANDIJIAN: I don't know the answer to your question, but just an observation: I think health care has shown that it likes to be a very wide swinging pendulum that goes from one extreme to the other and rejects things in between. I think one of the most dangerous things that could happen is to focus only on the systems concept. We heard from a number of people today, and it's a very politically correct statement nowadays regarding the safety situation and so forth, that it should only be looked at as a system issue. I think that's an important component of how it should be approached, but I do think that health care is still delivered to individuals. And you heard Nancy's example about the 99 doctors or whatever it was. Also, if you shut your eyes for a minute and try to see a system, what do you see? Most people see people; they don't see processes, they don't see IT, they don't see structures. So I think we still have to learn what the system is, and to put everything onto the system may actually give cover to lesser performers, lesser performance, or allow individuals to believe that nothing could come back and haunt them. So one of the dangers I see in all this is the swing from the 1980s small area variation analysis that we all remember, where the only goal was to look at physician practice styles, remember, that was the term: change the physician practice style and you will decrease variation; ACPRs(?) concepts were based on that back in the 1980s. Then we swung from look at the physician to look at the system without mixing the two, and I think it is perhaps a cautionary note that looking at outliers and inliers at the same time is better than just outliers or only inliers.

MR. HUNGATE: I have a question I think is related, and it's addressed to both Vahe and Stan really. You both participate in projects that people have joined to improve quality. What data is used in those? In either case, is it claims data or is it always some other source of data? What do your projects depend on, and how do our recommendations affect those projects specifically?

DR. KAZANDIJIAN: That's a very fair question, and I think it depends on the nature of, or the reason for, participation. For example, in our case many of our projects are ones where the measures as well as the data elements are designed by us and the participants based on consensus, using all the necessary steps of literature reviews and consensus and expertise and all that. But it is de novo data many times, not existing databases, just because some of the questions could not be answered by those data. Now that's a voluntary component, where organizations have reached the internal culture, if you want, of saying we would like to know more, and then they voluntarily participate, so that's a totally different type of people or organizations.

Then there is the more mandatory component of things. For example, in our work we are the largest vendor for the Joint Commission in the U.S.; more than 700 hospitals have chosen us, based on our relationship with them, to be the vendor for the core measures. So we collect the core measures as defined by the Joint Commission, and in that sense our participants use information that is there, with processes that we built, IT components and so on and so forth. But that distinction of voluntary and non-voluntary is important.

MR. HUNGATE: Is the non-voluntary portion, is that a dataset that the hospital has to generate as a separate dataset from all the other datasets that they do? Or is it an offshoot?

DR. KAZANDIJIAN: That's a fair question. In many situations it goes back to something Jerod said, which is that when you look on the surface the titles of the measures are the same, a C-section is a C-section and so on and so forth. But when you look at how the data elements are defined there's always a difference someplace, in the exclusions or the denominators, that throws the whole thing into a loop. So there is some overlap, but in many situations there are parallel databases. And the reason they continue to do that after 20 years is really the whole question of the business value, or the value itself; they see a value in that as well as in this, and they continue doing both, and I think that's an important consideration. The validity from that point of view: was it able to make a difference.
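
A minimal sketch of that point in Python: measures with the same title can diverge once their specifications differ. The delivery records and the exclusion rules below are invented purely for illustration, but they show how two reasonable definitions of "the" C-section rate produce quite different numbers.

# Sketch: the "same" C-section rate under two different denominator
# definitions. Records and exclusion rules are invented for illustration.

deliveries = [
    {"id": 1, "cesarean": True,  "prior_cesarean": False, "breech": False},
    {"id": 2, "cesarean": True,  "prior_cesarean": True,  "breech": False},
    {"id": 3, "cesarean": False, "prior_cesarean": False, "breech": False},
    {"id": 4, "cesarean": True,  "prior_cesarean": False, "breech": True},
    {"id": 5, "cesarean": False, "prior_cesarean": False, "breech": False},
]

def rate(records):
    return sum(r["cesarean"] for r in records) / len(records)

# Definition A: all deliveries in the denominator.
rate_a = rate(deliveries)

# Definition B: exclude prior cesareans and breech presentations.
eligible = [r for r in deliveries if not r["prior_cesarean"] and not r["breech"]]
rate_b = rate(eligible)

print(f"Definition A: {rate_a:.0%}")   # 60%
print(f"Definition B: {rate_b:.0%}")   # 33%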

MS. FOSTER: Could I just clarify Vahe's point: the Joint Commission data collection is done basically by individuals in the hospitals going back and abstracting the information from charts.

MR. HUNGATE: So it's all chart review, labor intensive --

MS. FOSTER: Labor intensive chart review, estimated at somewhere between 25 minutes and 45 minutes per record --

MR. HUNGATE: So it's an expensive dataset.
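
For a rough sense of scale, a back-of-the-envelope calculation in Python using the 25 to 45 minute abstraction range just cited; the record volume and hourly abstractor cost are illustrative assumptions only, not figures from the testimony.

# Back-of-the-envelope cost of chart abstraction.
# The 25-45 minute range comes from the testimony above; the record
# volume and hourly cost are illustrative assumptions only.
records_per_quarter = 1_000          # assumed sample size
minutes_per_record = (25 + 45) / 2   # midpoint of the cited range
hourly_cost = 30.0                   # assumed fully loaded abstractor cost, USD

hours = records_per_quarter * minutes_per_record / 60
print(f"{hours:.0f} abstractor hours, roughly ${hours * hourly_cost:,.0f} per quarter")
# ~583 hours, roughly $17,500 per quarter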

DR. HOCHBERG: Well, I would just add that all the ambulatory stuff is claims based, and all the pay for performance programs need to run off administrative claims; they can't incorporate chart review, they don't have the infrastructure to do that. So everything keeps devolving back to the HEDIS measures, which are based on relatively good administrative databases, and the more challenging measures that involve chart review don't end up in incentive programs or as part of the drive toward improvement.
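
As an illustration of what "claims based" means in practice, here is a simplified Python sketch of a HEDIS-style measure computed entirely from administrative claims: the share of members with a diabetes diagnosis on any claim who also have a claim for an HbA1c test. This is not an actual HEDIS specification, and the diagnosis and procedure codes are used only as placeholders.

# Sketch of a claims-only measure in the spirit of HEDIS. Simplified
# illustration, not an actual HEDIS specification; codes are placeholders.

claims = [
    {"member": "M1", "dx": ["250.00"], "proc": []},        # diabetes diagnosis
    {"member": "M1", "dx": [],         "proc": ["83036"]}, # HbA1c test billed
    {"member": "M2", "dx": ["250.00"], "proc": []},
    {"member": "M3", "dx": ["401.9"],  "proc": []},        # hypertension only
]

DIABETES_DX = {"250.00"}
HBA1C_PROC = {"83036"}

# Denominator: members with any diabetes diagnosis on a claim.
denominator = {c["member"] for c in claims if DIABETES_DX & set(c["dx"])}

# Numerator: denominator members with any HbA1c test claim.
numerator = {c["member"] for c in claims
             if c["member"] in denominator and HBA1C_PROC & set(c["proc"])}

print(f"HbA1c testing rate: {len(numerator)}/{len(denominator)} "
      f"= {len(numerator) / len(denominator):.0%}")   # 1/2 = 50%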

MR. HUNGATE: So it seems to me that our challenge in this agenda is partly to see what can be done efficiently with the relatively cheap administrative information rather than the relatively expensive chart abstraction. And so in a way it's asking, what could you gain from this data in making your program better for your members than what exists now. And that's hard for me to get a handle on --

MS. GREENBERG: [Inaudible.]

MR. HUNGATE: Well, to the participants because they have to pay for the chart abstraction right?

DR. COHN: But I was also going to add the point that you have this claims transaction or whatever and obviously every time you add a data element to it that costs money too so you can't win --

MR. HUNGATE: I understand, but the magnitude of the different costs is part of the big business case isn't it?

DR. COHN: Yeah, exactly.

MR. HUNGATE: And I'm saying I don't know how to grapple with that, I don't know how to get at the real elements of that. See what I mean? I think it's important but I don't know how to get an answer.

MS. FOSTER: And if I may add, it would depend on what you're asking to be added to the administrative claim, because if, in order to get that data, someone in the hospital has to go back to the medical record and abstract it, you've got that same cost, and then there may be additional costs associated with the delay in the bill.

MR. HUNGATE: Then my sense, my hypothesis, is that we have to come to a tentative "this is what we think we heard" and ask for feedback along the lines of what I've just articulated. Is that a rational approach to the question?

DR. COHN: I think it is, and I thought it was also going to be informed by work that AHRQ is funding or is in the process of funding. From my view it seemed that if we identify things that are very obviously low hanging fruit, that cost little but have some value, it becomes very easy to say go forward now; for other things we may say, geez, we're going to know over the next year what the cost benefit is on all those --

MR. HUNGATE: So my mind says that we want six or eight specific low hanging fruits, if we can identify them; there might not be that many, I'm an optimist, we may never get there. But let's say that I'm trying to model what we're trying to get to, so let's think about where they fit in the categories of candidate recommendations, so there's a specific that relates to recommendation one or a specific that relates to recommendation two.

DR. COHN: Next step?

MR. HUNGATE: Lab value measurement, which enables a performance measure, where it is cost effective to do it and where providers would believe it was of value to them.
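
A sketch of what that candidate could enable: if the claim carried the actual lab result rather than just evidence that a test was billed, a measure could move from "was an HbA1c done" to "was the HbA1c in control." The field layout and the control threshold below are illustrative assumptions, not part of any current transaction standard.

# Sketch of what carrying a lab value on the claim would enable:
# moving from "was an HbA1c test billed" to "was the HbA1c in control."
# The field name and the 9.0 percent threshold are illustrative
# assumptions, not part of any current transaction standard.

claims_with_values = [
    {"member": "M1", "test": "HbA1c", "value": 7.2},
    {"member": "M2", "test": "HbA1c", "value": 9.8},
    {"member": "M3", "test": "HbA1c", "value": 8.1},
]

THRESHOLD = 9.0  # results at or above this are counted as poor control

tested = len(claims_with_values)
in_control = sum(1 for c in claims_with_values if c["value"] < THRESHOLD)

print(f"Tested: {tested}, in control: {in_control} ({in_control / tested:.0%})")
# Tested: 3, in control: 2 (67%)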

DR. CARR: Okay, again, I'm new at this so this may be a really dumb question. But where does this information reside? For example, Stan's data are claims that go to Harvard Pilgrim or TOPS(?) or Blue Cross or wherever they go. So the creation of the field and the abstraction of the data elements is what we're talking about, but it goes somewhere and either is used or is not used. So again, closing the loop on the cost/benefit, there has to be a recipient, and as we also heard today, the measures that we see coming out of different groups will have perhaps different subscribers. So I guess I'm a little fuzzy on where it goes after we've done this work and standardized it, or whether it would also reside in the hospital data set; I guess it would.

DR. KAZANDIJIAN: I think it depends on two issues: first, where do you get it from, and second, where does it go once you generate something out of it. Where you get it from is in part the medical record, obviously, but not exclusively, because if you're dealing with process measures, if you're dealing with issues of systems as people would like to define them in many situations, that is not in the medical record. That is in understanding the process, understanding the flow, understanding the interaction, understanding basically the fishbone diagram, understanding step by step where things go. It's an algorithm, it's purvical(?), so that doesn't exist. So there are no data sources for those kinds of things; if in fact that is important, it has to be established as a new system. Years ago people tried to establish that in the activities around clinical pathways, if you all remember; that was the term, everybody was doing 700 pathways in their hospitals, and those pathways were painstakingly drawn and everything, and again, they missed things, they didn't work well, and today you don't see them as often, so it is important. But where does it go back? It depends on what the message was. If the message was change the system, change the communication, it has to go to a different group than if the message was you're not giving aspirin when you have to give it, you're not applying evidence based medicine; then it goes someplace else. I don't think there's an answer to that before we know what the question is.

MR. REYNOLDS: I think a key thing is, and we're going through this a little bit in e-prescribing, where we've had many hearings; here we've heard one day of hearings from one set of people, and I sense we're trying to jump to an answer, to how many fields we can use and where they would go and what they would be. I think regardless of what we do, if there is some low hanging fruit, we still have to maintain a discipline for ourselves for one or two more steps after that, to make sure that whatever is approved or decided, if it fits in the current structure and can be captured in the current structure, then it's a good tactical move. If it is something that creates a new way of doing things and there's no structure that we recommend to put it into, then when you pick the first five things you want to do, and I think I heard that from a number of you, it may turn out in the end we want 25, and after we do the five we've got to redo the whole thing again for the 25. It just looks like we kind of jumped off the cliff so we could say we were in the air, but we're making a parachute on the way down if we're not careful. So I think it might be a little --

MR. HUNGATE: I hear you and I don't disagree. My experience in industry, I guess that's the best way to say it, is that we had some people who could follow ready, aim, fire; we had some people who just fired, fired, fired and finally got there; and we had some who aimed, aimed, aimed and never got there. And I guess I have a bias to do a little firing at a target conclusion, if you will, and put my thinking on the table and let everybody hammer at it, because I feel like we need something to work from to move from the nebulous to the specific, where we've been called upon to do the specific, and then do the shooting at it that tests whether we're going to have to do a redo.

MR. REYNOLDS: I agree, that's what I'm saying.

MS. FOSTER: I've never been in a medical meeting that talked so much about shooting; it's a good analogy. The second thing, as you ponder this question and as you hear more testimony, is that I would encourage you to think about costs and about value, but in each case asking cost to whom and value to which parties, because it's likely to be of cost to more than one and of value to more than one. And to Justine's point, if you have constructed the perfect data collection and you know it's of value to someone, but you don't have a pathway for getting the information back to the group of individuals who would find it of value, then you've missed the boat too. All of that is critically important as you go forward, and in most of the data collection activities with which I'm familiar there's been some dissonance between who bears the cost and who gets the value.

MR. HUNGATE: I understand. Did you have a comment Stan? It looked like you wanted to say something there.

DR. HOCHBERG: I think you also have to think about how you're going to drive hospital system investment and other investment; when you make these standards, someone is going to pull them for reporting, and hospitals will be incented on what you put on the table. So I think you have to ask questions like, do you want more reporting or do you want computerized physician order entry, and which do you want hospitals in the field to feel is more necessary to do in the short term. I think you also have to think about how this plays out in the list of things hospitals should and need to do, and whether you are diverting them from any of that by creating additional overhead or reporting requirements.

MR. HUNGATE: I agree. Well, my gold standard personally is that it has to benefit patients, providers, and payers, and it's harder to do that than most anything, but I think that defines the low hanging fruit. Where you find things that do provide benefit on those three fronts you're likely to have made a better contribution, but that's just my own definition.

MS. COLTIN: I would also suggest that you look at things that serve multiple goals. If moving toward electronic medical records is on the list of things we expect of hospitals, and we're asking for increased reporting of clinical data elements, then those relate to one another: hospitals that implement electronic medical records will have an easier time reporting these clinical data elements as part of a claim, and likewise putting them in the claim adds to the incentive for investing in clinical information systems. So there's a synergy between the two goals, and the more we can identify those types of synergies the better.

MR. HUNGATE: Okay, well, I suspect we're close to the thinking capacity of the assembled group and that the rest of us should probably adjourn until tomorrow morning, but thank you, panel, for your very helpful perspective and content.

[Whereupon at 4:40 p.m. the meeting was adjourned.]