[THIS TRANSCRIPT IS UNEDITED]

National Committee on Vital and Health Statistics

Workgroup on Quality

November 12, 1998

Hubert H. Humphrey Building
200 Independence Avenue, S.W.
Room 325-A
Washington, D.C.

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway #160
Fairfax, Virginia 22030
(703) 352-0091

PARTICIPANTS:


P R O C E E D I N G S (11:25 a.m.)

MS. COLTIN: It says in the first five minutes we are going to do introductions. I think most of us know each other, but I think we probably should go around and do that again, just for each other's understanding of who we are and who is speaking.

I am Kathy Coltin and I am chairing this work group. I am with Harvard Pilgrim Health Care.

DR. MOR: Vince Mor from Brown University.

DR. LUMPKIN: John Lumpkin, Illinois Department of Public Health.

MR. EDINGER: Stan Edinger, AHCPR.

DR. STEINDEL: Stu Steindel, CDC.

DR. STARFIELD: Barbara Starfield, Johns Hopkins University.

MS. WARD: Elizabeth Ward, Washington State Department of Health.

DR. IEZZONI: Lisa Iezzoni, Beth Israel Deaconess Medical Center in Boston.

MR. MAYES: Bob Mayes, Health Care Financing Administration.

MS. ROBINSON: Carolina Robinson(?), Health Care Financing Administration.

MR. GOE: Leon Goe, Health Care Financing Administration.

MR. GOLDWATER: Jason Goldwater, NCHS.

MS. COLTIN: That is all of us. The agenda says that we will now be reviewing the work plan and discussing approaches.

There is no formal work plan to be passed out, because my goal here right now is to actually define the work plan.

What we have is a proposed work plan that has been discussed at the executive committee meeting, and also in general terms in the subcommittee on populations as well.

What I would like to do is sort of outline for you what has come out of those previous discussions, both at the subcommittee and the executive committee, and see whether we can agree upon the work plan. Then we can have it written up formally and we can vote on it, and bring it to the full committee, or to the full subcommittee, I guess, first.

The proposed work plan actually has three -- depending on how you look at it -- three or four components.

The first two components were selected to help minimize demands on the members of this work group, because all of us are involved either with other work groups or certainly with other subcommittees, as well as the full committee.

We didn't want to try to identify a whole lot of new initiatives that would really add to the work that is already on each of our plates; but rather, to identify areas that are complementary, where we could work with other groups and their activities, and try to achieve something important in the area of quality as well.

So, those first two areas are to look at the work that is coming out of the subcommittee on populations related to Medicaid and managed care, and to particularly identify what the implications are for quality assessment and quality improvement.

What is it that we heard from the states, from consumers, from others that participated in the various hearings on Medicaid managed care, about what they are interested in that relates to quality, and what the data issues are, and constraints that limit their ability to get the kind of information they would like to have about quality.

That is the first component, and it really would be derivative of a lot of the work that is already going on and would be done really jointly through the subcommittee, in terms of the construction of the report that is going to be done by George Washington, to make certain that there is a section in there and a chapter that specifically addresses data needs for quality assessment and improvement. Does that sound all right?

DR. IEZZONI: Absolutely, so that it is contemporaneous; it is part of the same process. It is just making sure that we focus.

MS. COLTIN: The second is that the subcommittee had identified a focus for its work for the coming year that would be around post-acute care, with apologies to Barbara, who doesn't like that term.

We can work that out in the subcommittee in terms of what we actually want to call it.

This really gets at issues around the continuum of care and looking at quality, both within post-acute settings, but also across the continuum of care.

So, looking at patients who are going from acute settings to post-acute settings, and from post-acute settings back into the community, or into outpatient care or other types of care.

So, as we proceed in designing the work plan around the post-acute care initiative, structuring the types of hearings we would want to have and the types of individuals we would want to hear from, this work group would pay particular attention to what it wants to know about quality assessment and improvement, as it relates to post-acute care and the continuum of care.

Again, most of that work could be done through the subcommittee, as it is planning the activities that will go on with regard to the post-acute care initiative.

Our role will really be to look at the work plan, look at the sets of hearings, the questions that get designed and so forth, and assure that concerns that we have about quality are incorporated into that process.

DR. MOR: So, the subcommittee's emphasis on post-acute care is going to span broad issues related to data, data integration, data coordination, integration, et cetera.

Then this work group is going to focus that subcommittee's attention on the application of those data for quality, and management of quality.

DR. IEZZONI: So, Kathy, for example, we talked at the meeting on the 29th about having January 22 be a tutorial for us.

MS. COLTIN: Yes.

DR. IEZZONI: We might want to talk this hour about what kind of things we should see whether Carolyn can help us put together for a subpiece on quality for that educational session.

MS. COLTIN: I haven't seen how full that agenda is right now, and how much we want to take on. If that day's agenda is going to be very full, just bringing people up to date --

DR. IEZZONI: We don't know yet. It is still in process right now. This is your time to kind of save a slot.

MS. COLTIN: If there are opportunities in the agenda.

DR. IEZZONI: Carolyn, do you want to come to the table?

DR. MOR: I would actually second that, because part of the tutorial process is going to be drawing upon examples, and one of the prime examples is the world of quality measurements derived from these various data systems.

MS. COLTIN: So, both in terms of home care and skilled nursing facilities.

DR. MOR: Or whatever.

MS. COLTIN: Rehab also?

DR. MOR: Rehab, yes, there are quality measures for all of them.

DR. STARFIELD: I really think we ought to incorporate it into our thinking as continuum of care.

MS. COLTIN: It is not always the same people.

DR. STARFIELD: It is not trivial.

MS. COLTIN: Like the people from the University of Wisconsin have really been secondary users of the data from the people who have developed the MDS.

DR. STARFIELD: I think we really should stop talking about post-acute care. We have gotten it out of the document, out of the charge. It is continuum of care.

It was interesting that Peggy Hamburg talked about it this morning, not as post-acute care, but continuum of care. I don't know if she called it continuum of care, but she talked about it across settings.

MS. COLTIN: I think what we were trying to do is come up with a term that said, well, we are not really talking about focusing on outpatient care that is provided in physicians' offices, clinics, hospital outpatient departments. That is not part of it, and yet, it is part of the continuum.

We are not talking about acute hospitals, because other people talk about that a lot. When I say we, I am talking about the initiative that the subcommittee has identified.

Within this work group, we will be talking about those things, those settings. Within the post-acute initiative, we really were looking at a limited portion of the continuum, and how to describe that portion of the continuum is really, I think, what we were struggling with.

DR. STARFIELD: I think that I keep making the point about kids not being in any of the facilities you have described, and most of them are not.

DR. MOR: They are.

DR. STARFIELD: Most of them aren't.

MS. COLTIN: Home health.

DR. STARFIELD: That is not post-acute.

DR. MOR: Most nursing homes are post-acute, but they certainly don't fit most understandings and definitions of a continuum.

DR. LUMPKIN: They are continuing, as opposed to a continuum of care.

DR. MOR: Maybe continuing care is a more wrap-around term.

DR. LUMPKIN: Which is different than what most people get in their physicians' office, which is the site of care.

MS. COLTIN: I don't want to spend our limited time today arguing about what it should be called. I think we can do that.

DR. IEZZONI: I don't want to be arguing about it either.

MS. COLTIN: I just want to be clear on what we are talking about and what we are not talking about. We are talking about the portion of the continuum, within this particular initiative, that deals not with outpatient care or acute inpatient care. It is sort of everything else.

Within that initiative that will be undertaken by the full subcommittee, our work plan would include defining what it is we want to know about the state of the art and the needs around quality assessment, quality improvement in those settings, and across the settings.

DR. IEZZONI: I think, Carolyn, we should try to stake out a piece of the day on January 22 to have quality as the focus.

MS. COLTIN: Vince is very familiar with a lot of the people who have done quality measurement in that area.

I have become pretty familiar with it over the last year and a half, because we have tried to develop measures in our own organization, and have drawn on the work of a lot of these people.

I think between the two of us, we could probably give you names. If there are other committee members who know people as well, we should just let you know who we think would be good to invite to come speak with us.

It would be good to find people who have developed quality measures in each of those settings, as well as anyone who has looked across.

My experience from a lot of the work that we have done -- we have done some work with the Picker Institute on patient reports of their care experience in these settings.

The areas that show the biggest problems are the transitions between settings. That is where the patients have the biggest problems.

So, most of the measures that I have seen are stove pipe measures within settings. I think we need to look at the pipeline that runs between them.

DR. IEZZONI: That might also be an area of deficiency that we identify. I suspect that will be the area of deficiency that we identify.

MS. COLTIN: Are people comfortable with that as sort of the second focus of our work plan?

So, that would be the second focus of our work plan.

DR. STARFIELD: This is written out someplace. Is it in here someplace?

MS. COLTIN: I thought it was written up, but I couldn't find it. What I could find is that there is, within the charge for the full subcommittee, which is under tab E, on page 4 of that, there is an area that deals with quality of health care data, collection and use.

This is an earlier construction, I think, a little bit of what we talked about. It relates more to point C that we haven't talked about yet. It doesn't talk as much about the Medicaid managed care focus on quality or the post-acute focus on quality.

I think we need to have a very clear work plan that says, this is the work product that we anticipate. This is what we are going to do and these are some of the work products that we anticipate.

DR. MOR: I have a document that I prepared actually for the Institute of Medicine. It is a whole issue on quality measurement, quality indicators in long-term care.

Mostly it focuses on the nursing home context, because that is what it was in relation to for the Institute of Medicine.

It does present a framework. What I will do is, I will e-mail it to all of you; okay?

MS. COLTIN: That will be helpful. Thank you.

DR. IEZZONI: Why don't you fax it?

MS. COLTIN: I can take e-mail. I think if you can't, you can let Vince know.

DR. STARFIELD: I prefer a fax.

DR. MOR: Okay, I will send it out.

MS. COLTIN: Now, on to the third focal area, which is really the new piece of work. We are saying we are going to talk about Medicaid managed care and about post-acute care of the continuum, in conjunction with the work of the full subcommittee.

Then there is a discrete piece of work that we would do in this work group that is broader than those two initiatives.

That has to do with really looking much more generally at what kinds of data are important for assessing and improving quality of care.

Where do we stand right now in terms of our access to those data, the quality of those data, and their availability to be used to be able to create measures of quality.

That sort of gets us to both looking at some of the recommendations that came out of the President's advisory commission, that are data dependent, identifying those that are data dependent and saying what kinds of data would be necessary to actually be able to measure this.

The one that comes to my mind immediately, because it is such an area of deficiency is medical errors.

There was a recommendation in there about gathering information about medical errors. What data would you need to be able to measure medical errors. What is the availability of that information.

What is the likelihood that it could be developed and what would have to be done.

That is the kind of framework I am thinking of. That is probably the toughest example from their recommendations and maybe not one that I should be using, because it is a little disheartening.

Some of the others are also equally data dependent, but the data they may be dependent upon are more readily available, although some of them do have problems.

DR. MOR: I think, at least for me, it is very helpful to preface this kind of an effort with a discussion about, I suppose, the meaning of measures.

Some kinds of measures -- let's take your medical errors example. Some kinds of measures may be thought of as absolute. Except for measurement error, there is no ambiguity about the fact that every time this event occurs, it is bad, no ambiguity whatsoever, as distinct from a more probabilistic interpretation, which is almost always what we are dealing with, because there is no universal, this is always bad under all circumstances.

That makes a huge difference in how we think of these kinds of measures and how we understand quality and how we make comparative statements, et cetera.

My quick reading of the President's commission's recommendations didn't get into any of this kind of stuff, but it is essential.

MS. COLTIN: I think it gets into that whole issue of are they normative measures or comparative measures as well.

DR. MOR: Right. Having decided that it is absolute versus potentially probabilistic, then within the probabilistic group you say, well, is it normative, do you have a target, or is it just comparative, or means or averages or medians or what have you, or percent distributions, of some measurable level of variation.

If it is absolute, then the nature of the data you collect is radically different from the nature of the data you collect if it is probabilistic.

DR. STARFIELD: So, one you need a cut point and the other is a continuum.

DR. MOR: The cut point, actually, for instance--

DR. STARFIELD: I guess it is established by the data, by what you know about the characteristics.

DR. MOR: If you are looking at medical errors, for example, then one kind of example would be drug toxicity.

Now, a hospital admission for a drug toxicity is presumably a sign, universal, of an error, but maybe not. Not on the hospital's part, necessarily, so even that becomes very problematic.

In order to actually determine is it really true in this particular instance, then you would have to go through a medical record review and so on and so forth, as distinct from, say, applying some set of criteria.

MS. COLTIN: One of the things that I want to try and clarify, though, is that we are not talking in this work group about designing the quality measures.

We are talking about trying to understand what the data elements are that would be necessary to support the work of those who are developing quality measures.

One of the examples that I come up with all the time is that when I participate in the process of developing HEDIS measures for health plans, we are constantly stymied by the fact that there are very important things that we would like to measure that are extremely difficult to measure because the data are not available.

Some of the new measures that are coming out in the next version of HEDIS say, we don't care how difficult it is.

If we put them in there, people will develop the data systems, because they will want to make it less difficult to get them.

Things like laboratory results, levels of laboratory results, are not available from administrative data sets.

We do have to go to medical records, and sometimes you have to go to multiple medical records, if a patient is being seen by more than one medical provider and those results could be in more than one record.

Is there a way to develop laboratory results information according to some standardized method, that would enable us to then have that type of information available, which enables a broad range of different types of measures to be constructed.

DR. IEZZONI: There is HL7.

MS. COLTIN: But what are the constraints? What are the barriers? There are privacy issues; there are money issues; there are infrastructure issues. There are all kinds of issues around that.

That is the sort of thing that I was hoping that we would be able to identify and focus in on, really, the data as opposed to the measures.

There are many, many groups out there that are developing measures, and I don't intend for us to go into competition and develop our own measures, but rather, to hear from those people about what problems they are having.

What kinds of data do they not have that they need. What are the kinds of issues that it might make sense to address on a national level, to try to improve that situation.

DR. IEZZONI: Kathy, I know John was going to go first, but I am going to leapfrog him. The nature of the measure, though, could affect whether you would sample and how you might sample.

That is one of the ways that people are talking about reducing the costs of some of these data collections.

I think we are going to have to talk about the nature of the measures when we start talking about the possibility of sampling and how you target the assessments.

DR. MOR: Really, it is the nature. We have to think about the nature of measurement, not what the measure is.

MS. COLTIN: That is true, and that is the kind of example I was giving you about laboratory results. I didn't have to talk about the diabetic glycemic control measure, or the lipid control measure after people had a heart attack.

I could just talk generally about information about laboratory results, changes in laboratory findings, findings that are above or below cut off points that are considered normal or control or whatever.

I could say, well, the real common theme across all those kinds of measures is that we don't have the actual result information coming in through some standardized data collection stream.

I think that is where I am really trying to get us down to, is that level of granularity and our ability to improve the data that is available to then construct measures.

I think you are right, that we have to talk about measures as examples of what the data problems are, in order to identify those data problems.

I don't want us to spend a lot of time debating the measures, because we are going to hear about a lot of measures that we don't agree with, or that we don't think are right.

That is not the point and that is not the best use of our committee's particular time. I think that our time is really to try to understand what the data issues are, that underlie the development of these kinds of measures, and see what we can do to try to improve that situation.

Is there anyone who really disagrees with that, or thinks we should be doing something different?

DR. LUMPKIN: I was going to say something earlier, but Lisa jumped in. Since, like, she is one of my gurus, I have to listen to her first before I comment.

DR. IEZZONI: I have to use that in a more powerful way. Let me think about that. [Laughter.]

DR. LUMPKIN: One of the areas -- I think that as we try to struggle with what the committee is going to do, it seems to me that there are some issues related to data acquisition.

We have talked about a few of them, but I think that if we were to look at how this committee could interface with the computerized patient record committee and the standards and security committee, particularly in relationship to claims transactions, particularly the claims attachment, it is fairly clear that, as the process of claims attachments is going forward, that it is not just going to be one standard. It is going to be a number of standards.

The envelope may be described once, but what goes into that envelope -- they are talking about prioritizing emergency department ambulance records.

Maybe laboratory records might be an important priority. So, perhaps putting input into the process of claims attachment, based upon what is really needed for quality measurement, may be an area where this committee can make an important contribution.

The second is, data acquisition has to do with surveys. I am really concerned about the ability in all the surveys that are being done.

This committee could begin to look at all the surveys that are going, not only just the quality surveys, but perhaps mapping them and developing a matrix of what kind of questions are being asked of whom, to see if there is some rationality that can be brought to this system, either through a consolidated approach that we could recommend to the quality standard development organizations. That may be an area where we could make a contribution.

The third is the conceptual environment in which quality occurs that you alluded to. That is, how can we assure that quality measurement is done in a way that protects confidentiality; that is sensitive to the communities; that the sampling is done appropriately, to actually measure what we think is important about measuring quality.

Then the fourth area, which may not be on our agenda right away is, what about the quality outside of health plans.

To what extent are the things being developed and the techniques being used in non-managed care -- and I include home health in that, as a managed care. It is not the same structure, but there is some structure to that person versus the fee for service person.

To what extent can we now, if we have defined half the population and we are measuring their quality, why not for the rest of them.

Maybe it is some ability to look at the rest of that piece further down the road.

DR. MOR: I had the sense that you were talking about just managed care.

MS. COLTIN: No, I gave you an example because that came from my world in terms of what I am familiar with, when I talked about the HEDIS measures.

There is no intent whatsoever in this work group to limit our discussion of what data are needed for quality measurement or improvement in the managed care world.

DR. MOR: Like I said, I was in total agreement with what you said. [Laughter.]

MS. COLTIN: Yes, I think we are all clear on that. I would feel actually very concerned if we were doing that, because I think the biggest problem with managed care is the lack of fee for service comparisons.

I would rather focus on what is not in managed care.

DR. LUMPKIN: I only say that because, for the first part, the surveys and the claims attachment issues, the low hanging fruit there, is to begin to address the issues that are already there for managed care.

Then I think, in agreement with you, that other piece is where we perhaps need to push for better development.

MS. COLTIN: I think when we are talking about the post-acute initiative, we are clearly not looking just at managed care, by a long shot. There is actually a small percentage of the patients that actually are treated in those settings right now.

DR. STARFIELD: We are talking about boarding homes, too, aren't we?

DR. MOR: Board and care, assisted living, those kinds of places.

MS. COLTIN: I think we are agreed on that. I think the other point that you made, John, actually gets to the second bullet that is in the write-up, that has to do with the road map, and how this fits into the notion of an information systems road map.

What you were saying about the work group on computerized patient records and the claims attachment and so forth, I think actually follows logically and also makes a great deal of sense in terms of the best use of people's time, and again, coordinating with other work groups.

When we start talking about the data that are needed, when we identify where the data are, where the gaps are, the next natural thing is to talk about where is the best place to get those data.

Is it data that comes from administrative data, or should come from administrative data, from clinical records, from patients?

Depending on where we think the best place is for that data to come, it will help to identify what the interfaces are that we need to pursue, if it is from medical records, working with that work group, if it is from administrative data, working with the full subcommittee.

If it is from patients, then I agree, the survey issues could either be brought up to the full subcommittee on populations or discussed here and brought back to them in some more distilled form. You are right.

DR. MOR: I heard John say that one thing that we might also do, or it would probably make more sense for this committee to actually provide some sort of rationalization of the various survey mechanisms that are out there trying to explicitly measure quality, whether it is HCFA or the various managed care companies, and the Medicaid agencies who are surveying folks through various mechanisms with some overlapping.

Some do satisfaction, some do this, some do that. Some do very specific, you know, did you get food; was your food hot; did someone call you after the fact.

I don't know whether this committee wants to try to tackle how that overlays.

DR. STARFIELD: We haven't dealt with, either implicitly or explicitly, the definition of quality, and that is what you are raising.

DR. MOR: Yes.

MS. COLTIN: We should do that. We should define what it is that we are talking about. I have been, at least in my own mind, envisioning a pretty broad definition of quality.

For instance, access measures or quality measures, as well as technical clinical quality measures, and satisfaction.

DR. STARFIELD: I have sort of tried to divide quality into at least four, maybe five pieces. One is adequacy of resources; one is adequacy of delivery; one is technical quality and one is outcome. You might want to add satisfaction as a separate one.

MS. COLTIN: Or treat it as an outcome, as some people do.

DR. IEZZONI: Although I do think that we should probably look at the President's commission's definition.

DR. STARFIELD: I don't think they did define it in an operational way.

DR. IEZZONI: Let me just finish; and a number of other definitions, like the Institute of Medicine a number of years ago also came out with a definition of quality.

I think it would be a problem for us, if we, without a sense of history of defining this, tried to define it ourselves.

One of the things we could do is say, here is the definition of the IOM. It is inadequate in this way.

DR. STARFIELD: It doesn't help us with data at all.

MS. COLTIN: There are new frameworks being developed. The framework that has been developed by the Foundation for Accountability with NCQA, is a framework that FACCT developed to not just be limited to managed care, but really to look at community levels and so forth. Maybe their framework would work.

MS. WARD: I think we could look at a number of different frameworks. Maybe that is one of the things that we could see whether Stan might be able to help with, is just kind of collecting these definitions and how they relate to data.

DR. STARFIELD: I think what you are saying, Lisa, is if we can avoid going off and having our own personal definition of quality, it would certainly be nice to avoid. That is not going to be an operational framework that would flow from a definition, hopefully.

MS. COLTIN: That would be helpful, to try to have something like that; I agree.

DR. MOR: Then the other issue of sampling issues and sensitivity, that is operational expression of the actual measures, presumably; yes or no?

MS. COLTIN: I think that there will be issues. As I envision this, I would like us to constitute a panel and get people in here who have been doing quality measurement for different populations in different settings, across different settings, and using different types of data sources, to describe for us what some of the frustrations are that they have experienced and what some of the inadequacies are.

Then from that, I would expect would come problems in defining the denominator population and the sampling, as well as problems in defining numerators for particular measures as well.

DR. IEZZONI: Kathy, I always read my a.m. news when I am en route to this meeting. I was reading it this morning and there was an article about health plans, like PacifiCare, are refusing to allow their data to be put up in the NCQA system this year.

Plans are now beginning to refuse to agree to go along with these kind of quality monitoring initiatives for a number of different reasons.

So, you are having a slight mutiny out there, it sounds like. It would be really good to have a sense of where this is coming from.

The way this little article that I read kind of cast this is not that anybody argues with the need to do quality measurement, but they are concerned about the quality of the data.

They are concerned about being compared to non-audited data that other plans are putting up there.

One of the points that was made in this article was that audited data always look worse than non-audited data.

If you do audit your data and that is the standard that you put forth on that NCQA web page, then you are going to look worse.

I think that having a sense of where the community is on this right now, would be a very real world test for what I am concerned could become a very academic exercise.

MS. COLTIN: I agree that it is a real world test. I think also we have to take a leaf from the physicians who get measured who say, oh, but my patients are always sicker and then when you go in, maybe they are not.

NCQA analyzed the data for that -- a lot of them submitted it. A larger percentage submitted this year than the year before, but a large percentage did not agree to make it public.

They analyzed the public and the non-public, and the non-public did look worse. If you also look within those that submitted, as to those that were audited and not audited, you did not see major systematic differences.

The New England plans that all went public and all looked the best in the country were all audited, every one of them.

DR. IEZZONI: The audit issue, I think, is an important one.

MS. COLTIN: I agree, that we should hear about those things. I think we need to recognize that a lot of it may be making excuses.

DR. IEZZONI: May we hear about that? This article that I read didn't bring that point up. I think that this would be a forum to hear from whoever could give us that kind of more objective study.

MS. WARD: The issue is who is using the data to do something financially to the other party. That is one of the struggles that we are having right now in our state.

MS. COLTIN: I think there are some legitimate concerns that are being raised by plans. This will come up if you start doing these measures by hospital, or by nursing home or whatever.

If everybody is in the game and we are all being measured and we are all going public, that is one thing. If only some are playing, then the ones that are playing are afraid that they are going to be the ones who get criticized if they don't look good, but how about the folks over here who aren't reporting at all?

I think some of what you saw in the reaction of plans that didn't want to go public this year was what happened to them last year.

Some purchasers actually stopped contracting with a plan that reported and didn't look good, and contracted with a plan that didn't report at all. To me, those shouldn't be equated.

DR. IEZZONI: Maybe we should hear from benefit managers about why they are making these decisions.

MS. COLTIN: Why make the assumption that the one that didn't report at all would have been better, had they reported?

MS. WARD: The plans are losing enrollees because of the CAP survey that is out in Washington. You go back and find out that this many people were surveyed during this period of time, and is that truly a reflection of --

DR. MOR: Those are interesting anecdotes, but AHCPR did a little study suggesting that benefit plans and managers, by and large, don't look at this stuff at all.

MS. COLTIN: Some of them don't.

DR. IEZZONI: John, I am sorry, you just raised your hand as I started to talk. [Laughter.]

We are kind of having fun with this discussion, but I do think that we do need to kind of -- I am guilty of kind of broadening it. We do need to try to focus and try to make it not be an academic exercise; be something that will have some practical utility. John, is that where you were going to try to lead us?

DR. LUMPKIN: I am sure I was going to say exactly that. [Laughter.]

Let me add one thing because I know about this, because I was actually talking to the person in Justice who is doing the work.

One of the things that is coming up in the same issue about releasing information is the Justice Department is looking at prosecuting for "quality of care fraud and abuse" under some of the statutes.

Some of their people are aware of it, because I know at least two of the organizations that are being investigated.

The reason they asked me is that I am detailed to one of the congressional committees. We are looking at some of the same organizations at the same time.

So, this issue has come up with them because even some of the physician groups have been reluctant to have their provider IDs even provided; the Justice Department has taken them to court on it.

The next level up is not only people not participating, but people being prosecuted on top of it. There is a higher level of concern.

DR. IEZZONI: Maybe we should hear about this, Marjorie, in one of our big group meetings. This is actually a big issue.

DR. MOR: It is happening to hospices, to nursing homes and home health agencies, and now ambulatory foundations are being dinged because of "miscoding" errors. Someone is going in and deciding that those coding errors, which are endemic, are finable, are discoverable and can be fined for, because it is against the law.

MS. GREENBERG: To make a coding error?

DR. MOR: Yes.

MS. GREENBERG: We don't have big enough jails.

DR. MOR: Nosologists are going to have to have malpractice insurance.

DR. IEZZONI: Kathy, can we just get back to what you are thinking? Would the work group hold hearings? Are we thinking we should schedule some dates?

MS. COLTIN: I think one of the things we are going to have to be careful about, quality is an issue that a lot of people are interested in and concerned about.

So, one of the things that we may want to do is develop what we think we want to hear about and what a panel would look like, and then bring it to the full subcommittee and say, is this something everybody wants to participate in or should we do it as part of the work group.

Then we also need to consider, is it something that is cross cutting, that gets into the claims attachment issue, the CPR issue and so forth.

Is this a panel that we ought to try to get on a full committee meeting agenda?

I think we are going to have to kind of identify what those panels are that we are interested in and then, one by one, decide where is the right locus for that discussion. Do people have other ideas about that?

DR. LUMPKIN: Why don't you emphasize the full committee.

MS. GREENBERG: We do have a commitment to follow up the panel on data quality.

DR. IEZZONI: You know, actually, going back to that, that actually was a great panel, Marjorie. You have waxed eloquent on how wonderful that panel was a number of times.

Kathy, that panel really laid out the data quality road map. I think we should probably go back to the transcript of that meeting and see whether we could extract -- write a little piece from it, don't you think?

I really think that we don't want to redo something that we did really well. That was a good panel. So, who might -- Dan?

MR. EDINGER: I can see if I can find a copy of it.

MS. GREENBERG: We will send it. It is probably on the internet, but we can also send it.

DR. STARFIELD: What was it, about a year ago?

DR. IEZZONI: In March.

DR. MOR: The full transcript and not just the minutes are up there?

MS. GREENBERG: No, we put the full transcripts up.

DR. MOR: The full transcript is what you need.

MS. GREENBERG: You would want the full transcripts.

DR. IEZZONI: You would want the handouts that people gave. Kathy, didn't you have slides?

MS. COLTIN: No, that was the Medicaid managed care person, I believe.

DR. IEZZONI: There might be slides. I really think that if someone could write that up in a five-page kind of, here is what these people said about this, that would be a really good springboard for us, to kind of home in on where we want to go with this.

MS. GREENBERG: We probably have several pages in the minutes. As Vince said, it is hard to capture the details, which are the most compelling.

DR. IEZZONI: Yes, and writing it more in an expository way about here is the problem, here is what various experts said about it, here are solutions or areas that they said we need to think further, and have it be more of that than simply a report from the minutes.

MS. COLTIN: It was a good panel. That is a good starting point, and from that we extract questions that we would want to have perhaps a broader set of stakeholders and participants respond to, as well as additional questions.

There are a lot of things that I think would come out of that transcript that were points of view that were expressed, that would be important to test with other stakeholders and see if this is a commonly held belief or a commonly experienced problem, or whether these represent some unique perspectives and experiences.

It was a pretty limited set of people. Having spoken, you know, I feel, in terms of my own remarks, that I would like to have some of them vetted more widely and see if others agree.

So, I think this third bullet is actually breaking out into at least two, which has to do with sort of identifying what the data gaps and data quality problems are.

We said we would go back and take a look at that transcript and see what we know already and then try to plan for what else we would like to find out in that area.

The other has to do with sort of the information system road maps, how do those data gaps relate to specific sources of information and initiatives that are going on to improve those sources or not, and might go on.

So, I think that we might as well identify those as two separate -- they are identified separately in the work plan here, the IS road map and the gap kind of analysis.

That would give us four initiatives in our work plan -- the Medicaid managed care, the post-acute, the gap analysis, and the sort of interface with the different systems development activities that are going on, as well as the survey activities and how those relate.

Is everybody comfortable with that as a work plan for this year? I mean, we can think about going beyond that in the future, but I think that that is pretty ambitious, even for the next year.

That sounds good to me. We will get that written up.

In relation to that last item, the data sources issue, I just learned from a side bar conversation that I had with Bob Mayes at the end of the earlier meeting, about an initiative that HCFA is starting to undertake around the HEDIS measures and trying to maybe address some of the problems that you have identified around data sources, and standardizing and where the data comes from, and what is the best way to get it and what tools are needed to do that.

I asked Bob if he would be willing to just quickly brief us on what he is thinking at the moment about that assignment.

MR. MAYES: As you know, of course, we have the Medicare HEDIS program. In the last couple of years that we have collected this information, we have fully audited it and have had some concerns about the quality of data and discussions about that.

I have to say that the quality has increased dramatically over what we found the first year.

As part of our discussions with NCQA and internally, what we have sort of been thinking is that the data quality question really starts with where the data is generated.

If you wait until the point at which the data is aggregated and reported through the NCQA tool, that that might not be the best place to start really addressing the data quality issues.

Our focus has been, in the past, on other initiatives of trying -- in fact, I should say that with Jeff Cahn(?), the director of the office of clinical standards and quality, which is where I work, we have sort of made it now a strategic direction that we wish to begin to develop tools that are generic in nature as much as possible, that are standardized, and that are widely available in the public domain, for all the different areas that we are interested in.

We have sort of started doing that with the RAVEN tool for the MDS data, OASIS, which is for home health, and some of our more focused clinical projects.

Along this direction, we have had discussions about potentially developing a standardized HEDIS data collection tool.

This would not be the HEDIS data collection at the level that the NCQA tool is. In other words, this wouldn't be the reporting tool. This would actually be a data abstraction, if you will, a data collection tool that would be pushed further down into the system.

I think we are going to move forward with looking at that.

DR. MOR: Is this something at the level of the patient, where the patient is served?

MR. MAYES: We have had a number of -- we talked about this a year or so ago. I have, in fact, had some calls from around the country from plans saying, gee, you know, you guys are doing this stuff in Medicare. Is there anything available to us, to try and help us do our data collection.

They are talking really before the point of aggregating and reporting to NCQA. Even in groups like the Coalition of New England, that has made a decision that they all want to compare their stuff and they want to work together on this, what they are finding is that, by the point that they begin to look at their data, it really isn't that comparable.

It is a fairly complex set of algorithms and other things that you need to look at. If you wait until you have aggregated up too high you realize that, gosh, the people who gave us the data haven't been interpreting it correctly.

DR. MOR: What a surprise.

MR. MAYES: What a surprise. So, since we have, in the past, in our other -- we are moving more and more now toward provider level reporting of our data.

We think that is probably not a bad way to go with the HEDIS as well. I can't speak to the details, just because I am not as familiar with all the details of all the HEDIS measures.

The idea would be to develop some publicly available tools, data capturing tools, acquisition tools, that would be, again, focused on capturing the data further down the chain, closer to where it is actually generated.

That would allow you, then, to aggregate it up in a more consistent, meaningful way, so that some of the data problems we saw at the data reporting level would basically disappear.

It is very hard, at the level of the reporting tool that NCQA uses, to put much in the way of meaningful edits on the data.

Frankly, you are just reporting a couple of numbers, denominators and numerators. Yes, you can put in some very basic edits.
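The "very basic edits" possible when only aggregate numerators and denominators are reported can be sketched as a few consistency checks on the counts. This is an illustrative sketch only; the function name and the minimum-denominator threshold are assumptions, not actual NCQA or HCFA edit specifications.

```python
# Toy edit checks on an aggregate reported rate. The threshold of 30 is an
# invented example of a minimum reportable denominator, not a real spec.

def check_reported_rate(numerator: int, denominator: int,
                        min_denominator: int = 30) -> list[str]:
    """Return a list of edit failures for one reported measure."""
    problems = []
    if numerator < 0 or denominator < 0:
        problems.append("counts must be non-negative")
    if denominator == 0:
        problems.append("denominator is zero")
    elif numerator > denominator:
        problems.append("numerator exceeds denominator")
    if 0 < denominator < min_denominator:
        problems.append("denominator below minimum reportable size")
    return problems
```

The point of the sketch is how little such checks can catch: a plan that misinterpreted the measure specification upstream can still report internally consistent numbers.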

When we have talked to NCQA about some of the data quality issues, it is basically, well, that quality has to be resolved before it gets to our tool.

So, we are just trying to figure out -- and we have just started thinking about this -- can we develop some of these tools that we can make broadly available.

We would very much like to, of course, keep them in line with, or extendable to, HEDIS beyond Medicare use.

DR. IEZZONI: Because you have a contract to do fee for service.

MR. MAYES: Yes. I think that we are beginning to realize that we gain a lot. The MDS -- you can argue whether you like MDS --

DR. MOR: MDS is the classic example. You collect individual level data and you can aggregate it to the level of the facility using standard algorithms, and then it gets aggregated up to the level of the state or the county or whatever.
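The aggregation pattern described here, individual-level assessments rolled up to the facility and then to the state using the same standard algorithm, can be sketched with a single roll-up function applied at each level. The record fields below are hypothetical, not actual MDS items.

```python
# Toy roll-up: one aggregation routine applied at every level of the
# hierarchy (facility, state, ...). "flagged" stands in for any
# individual-level quality indicator; the field names are invented.
from collections import defaultdict

def roll_up(records, key):
    """Aggregate individual records into numerator/denominator pairs by key."""
    totals = defaultdict(lambda: [0, 0])  # key -> [numerator, denominator]
    for r in records:
        totals[r[key]][0] += 1 if r["flagged"] else 0
        totals[r[key]][1] += 1
    return {k: {"num": n, "den": d} for k, (n, d) in totals.items()}

residents = [
    {"facility": "A", "state": "RI", "flagged": True},
    {"facility": "A", "state": "RI", "flagged": False},
    {"facility": "B", "state": "RI", "flagged": True},
]
by_facility = roll_up(residents, "facility")
by_state = roll_up(residents, "state")
```

Because the same routine is used at every level, the state figures are guaranteed to be consistent with the facility figures, which is exactly what is lost when each reporter interprets a paper specification independently.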

MR. MAYES: We have been told that the RAVEN software that we have provided has gotten tremendously positive feedback.

The industry really liked the idea that there was a broadly available, basically free set of tools that had been developed, where the specifications had already been interpreted in a way that we know is going to be the right way, because we did the specs.

We are really looking at extending that approach. It is not to become software vendors to the world. It is basically to give some kernel around which other things can coalesce.

Certainly we are a pretty big player in some of these areas, and we are willing to put the resources, initially, to sort of interpret those specifications.

MR. EDINGER: When you are talking about the tool -- let's take the MDS, for example. The issue of lab values, they don't really deal with to that extent. Will this tool incorporate the possibility --

DR. MOR: Actually, they are contemplating incorporating actual lab values in the PAC version, the post-acute version.

MR. EDINGER: Let's say you decide you want to include a glucose value. Will these tools enable you to capture additional information if you so choose?

MR. MAYES: Boy, you are opening up an area that I would love to go into. Let me expand the thinking a little bit here, the strategic thinking, because I think it has some real implication.

Currently, we are actually developing all of our tools on the same platform. It is this MedQuest platform, which is a data dictionary based platform.

In other words, we define all the different attributes of the data we wish to collect within this tool.

Actually, it is not just the data elements, but the edit rules, even the algorithms, the analytic algorithms. They are all defined within basically a data base.

In fact, what you have in raven is this data base with an underlying interpreter engine. You just run the data base through the engine and it creates the application.

Well, you can run any kind of data base through that engine and create an application. So, that is how we distribute all of our clinical data collection tools now.
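The data-dictionary idea described here, where the instrument is defined as data (fields plus edit rules) and a generic interpreter engine applies whatever dictionary it is given, might look something like the following toy sketch. The field definitions are invented for illustration and bear no relation to the actual MedQuest dictionary.

```python
# Toy data-dictionary engine: the "application" is just a generic validator
# driven by a dictionary of field definitions. Swapping in a different
# dictionary yields a different data collection instrument, with no new code.
# These example fields and edit rules are hypothetical.

DICTIONARY = {
    "age": {"type": int, "edit": lambda v: 0 <= v <= 120},
    "sex": {"type": str, "edit": lambda v: v in ("M", "F", "U")},
}

def validate(record: dict, dictionary: dict) -> dict:
    """Run every dictionary-defined edit against one record.

    Returns a mapping of failing field -> offending value.
    """
    errors = {}
    for field, spec in dictionary.items():
        value = record.get(field)
        if not isinstance(value, spec["type"]) or not spec["edit"](value):
            errors[field] = value
    return errors
```

The design choice is the one Mr. Mayes describes: edit rules (and, in the real system, analytic algorithms) live in the data base, so distributing a new instrument means distributing a new dictionary, not a new program.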

That allows you -- and one of the things we are looking for, the enhancements, for instance, in the next version of that, would be to make it HL7 compliant.

So, the goal would be that you can now take our tools and preload data fields if you have got the data in electronic format.

So, if you have got all the demographic data, you could preload that into the reporting requirement. But it is a static kind of preload. You have to create a file and then preload the data fields.

What we would like to be able to do ultimately is have a tool which could interface with existing systems -- and lab would be a nice example -- that is already churning out this digital information.

You could simply tap the stream, if you will, and bring the data in.
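Tapping an existing digital stream for demographics could look like the following simplified sketch, which pulls fields from an HL7 v2-style PID (patient identification) segment to preload a collection tool. Real HL7 parsing must handle escape sequences, components, and repetitions; this toy reads only the top pipe-delimited level, and the sample segment is fabricated.

```python
# Toy HL7 v2 preload: extract demographics from a PID segment so a data
# collection tool can be prepopulated instead of rekeyed. In HL7 v2,
# PID-5 is patient name, PID-7 date of birth, and PID-8 sex.

def preload_from_pid(segment: str) -> dict:
    fields = segment.split("|")
    if fields[0] != "PID":
        raise ValueError("not a PID segment")
    return {
        "name": fields[5].replace("^", " ").strip(),  # components joined naively
        "birth_date": fields[7],
        "sex": fields[8],
    }

demographics = preload_from_pid("PID|1||12345||DOE^JANE||19501231|F")
```

The contrast with the static preload described above is that a parser like this could sit on the live interface and fill the fields as messages arrive, rather than requiring a file to be created first.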

DR. MOR: Right now, what they do is they get this great block of ASCII that comes from the state which, in turn, gets it from the individual facilities.

The facilities have to pay somebody to actually put it out into a format that is uniform, but they already have it in something that is much more dynamic.

Ultimately, it will be much more desirable to have a direct translation from the dynamic data base form into a large national or state data base.

MR. MAYES: We are working in that area. The other thing that we are working on doing is shifting from a paradigm of paper specifications for our reporting requirements, which is the way they are done now.

Basically, we develop a reporting requirement and we publish a book for the MDS. The book was about yea thick, where here are the specifications and we put it out to the world and systems developers or others have to read through it and start to code a system which captures it.

That is the issue that HEDIS is facing right now, because that is the approach that they take. They basically simply publish this paper specification.

We would like to move toward electronic specification. By that, I don't mean putting the paper up on the web.

I mean, actually interpreting the specifications in the form of objects, if you will, of things which are wrapped in an industry standard interface.

An example of that kind of approach is cut and paste, for instance. If you are building a new program, you don't code cut and paste any more. You go to the Microsoft web site and you pull what they call a DLL or an OCX, which is this functionality already incorporated in a little standard packet, and you plug it into your system.

We would like very much to go that route for our reporting requirements. Then you get this plug and play type capability, and potentially you could really get into full re-use of components, which is really what you are talking about in terms of survey integration and these other systems integration things.

It is, can you define sort of core sets of functionality, if you will, and data, that you could then plug in as appropriate, depending on what it is that you are trying to collect, without having to reinvent.

I think that we are very much along the road of the same thoughts that the committee wants to go. In terms of making it not an academic exercise, obviously that is fundamentally important to us.

We are actually talking about trying to build reference implementations of these types of approaches, if we could.

MS. COLTIN: And build it in a way that is hinged on a view of the future.

MR. MAYES: Correct. Adaptable systems is the buzz word currently in use here.

DR. MOR: I have to ask, Bob, where were you in 1991 when we were told by HCFA, no, let the market do it.

MR. MAYES: I was at the Surgeon General.

DR. MOR: We just died, because we wanted specs from HCFA. They just didn't exist. It was too early.

MR. EDINGER: Bob, there are things in the hospital regs. You published a thing talking about drug errors; the hospital is supposed to monitor them. It is based on the work of Bates and Lieb(?) and others, which relies on a computerized data system.

Most hospitals don't have one, and they recognize that in the regs -- you don't have to do it with an automated data base -- but they don't tell you how in the world you should do it.

MR. MAYES: One area -- not to keep on and on but I think it might be of some interest. An area that we are really trying to move forward to, and fortunately it is a small area for us, is end stage renal disease.

We are required now to go to facility level reporting on end stage renal disease. That program happens to fall under me.

Now, I am doing it a little bit different from the MDS and the Oasis people. We are moving toward providing not just the one-way reporting tools, but developing an actual bilateral communication system with those facilities.

The idea is to add value to our data requests to the people who are giving us the data.

MS. COLTIN: There are good examples of that. I know that in Massachusetts, the electronic system for birth certificates in the hospitals does that.

It allows the hospitals not just to send the birth certificates in a standard format, but to actually report on their own data.

MR. MAYES: Absolutely. I think what I have been flogging endlessly at HCFA is value added through the whole chain.

It has to be more than the negative value of, you know, not having HCFA hit you over the head. That is a certain incentive, but that can no longer be the full incentive.

MS. COLTIN: I think we have come to the end of our time. This is something I think we should hear more about when we flesh out kind of a full agenda around this information systems road map and what are different routes.

MR. MAYES: If we go forward with this, which I am almost sure we will, I think the way to do it would be a collaborative, consensus-type, broadly-focused approach versus us thinking singly.

DR. MOR: Bob, as you are doing this, you guys have to be grappling with exactly what the continuing care problem is, which is people served by multiple settings, where two pieces of data that are relative to HEDIS come from this part of the system -- managed care system -- and two pieces come from here.

In the current HEDIS structure they don't get linked together until it is way up here at this funny numerator/denominator aggregate. How do you do that? You have to grapple with it.

MR. MAYES: That is the challenge. I think that -- I am sure that we can come up with an approach, but I think we would all be better served -- I mean, this is a real opportunity here, perhaps, and granted, it is in the managed care, but there are links moving forward or moving outward from there.

MS. COLTIN: If you are going to talk about collecting this type of information at the provider level, the systems that would interest us, those would be marvelous.

MR. MAYES: If we can do it reasonably well in this particular instance, and think about it, I think that we are setting --

MS. COLTIN: There are other people who want to get into the conversation.

DR. STEINDEL: I just wanted to mention that CDC is obviously looking at this type of approach for primary source reporting of public health data, and is looking at it in terms of infectious disease reporting and also tumor registry reporting.

What caused me to mention this more so is, we are very concerned about the issue of multiple reporting sites.

We are really dealing with this information in real time. We don't have the luxury of worrying about how to aggregate the data a week later or a month later or a year later.

When we are getting information from the lab on a public health incident, we want to make sure that it is coming in on the same person.

We know we are looking at data from multiple laboratory sources constantly. Most people have the same test run in multiple laboratories, triaging up the line.

So, this aggregation of patient data, especially in the face of changing demographic information as we move around the chain, is a very, very difficult issue and a very important quality issue.
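The problem Dr. Steindel describes, deciding whether reports from multiple laboratory sources refer to the same person when demographic fields disagree, is the classic record-linkage problem. A toy deterministic sketch follows; the fields, weights, and threshold are all invented for illustration, and real systems use calibrated probabilistic methods rather than hand-picked weights.

```python
# Toy record linkage: weighted agreement across demographic fields, with a
# threshold deciding "same person". Weights and threshold are illustrative
# assumptions, not a validated matching rule.

def match_score(a: dict, b: dict) -> float:
    """Score agreement between two demographic records (0.0 to 1.0)."""
    weights = {"ssn": 0.5, "birth_date": 0.3, "last_name": 0.2}
    score = 0.0
    for field, weight in weights.items():
        # Only count agreement when both sides actually have the field.
        if a.get(field) and a.get(field) == b.get(field):
            score += weight
    return score

def same_person(a: dict, b: dict, threshold: float = 0.7) -> bool:
    return match_score(a, b) >= threshold
```

The sketch makes the real-time difficulty concrete: with a changed surname the decision now rests entirely on the remaining fields, which is why shifting demographics across the chain is such a hard quality issue.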

MS. COLTIN: It sounds like we have a panel discussion shaping up. Okay, I think we are done for today except to set up future meetings and conference calls.

What I would like to do is see whether, in connection with -- certainly, we are going to try to set up something for the January 22 post-acute session. We should probably think about a conference call to -- actually, I tell you what.

Rather than a conference call, we said anyone who had ideas about individuals to invite to be part of that panel, who are working on quality measures, either within the silos or across the plain, but I think once we have a proposed panel, I guess we should go to Stan, who is coordinating or maybe this is Carolyn.

I guess it is Carolyn. It is going to get a little bit confusing, because this is really a subcommittee initiative and not a working group.

DR. IEZZONI: Who is lead staff for the working group, I guess, is the question.

MS. COLTIN: Stan is, but this is really a subcommittee agenda for the 22nd that we are trying to feed into. So, that would be Carolyn.

DR. IEZZONI: Stan and Carolyn should work together.

MS. COLTIN: Okay. For that particular panel, once we have a list of sort of suggested people and topics, if we could e-mail that out to all of us so that people could comment on it.

MS. WARD: My idea of a great speaker might not fit, and I would like that comment. That would be very helpful.

MS. COLTIN: So, once you have identified sort of speakers and topics, if you could get that e-mailed out to people, people could comment back and we could then come up with a final list together.

Our next initiative would be at that January 22 meeting. The next time we would have that opportunity to get together again, that is already planned, is the February full committee meeting.

Can we schedule a breakout for this work group at that meeting? I would like to do that if we can. It might be difficult. We did it this time.

DR. IEZZONI: What might be good is to have the subcommittee and then have the work group afterwards. I mean, John is the one person who kind of doesn't fit with the rest of the subcommittee.

DR. LUMPKIN: I just don't fit, period.

DR. IEZZONI: The rest of us overlap completely, 100 percent. Well, why don't we hear what the agenda is for January 22 and see how you are planning to handle the breakouts.

MS. COLTIN: The other thing would be to poll people now about whether they would be willing to either come in early and have a half day on the second.

DR. MOR: It is privacy and confidentiality.

DR. IEZZONI: I think, again, Kathy, I never bring my calendar. Some of us will have to go home and see about the dates.

MS. COLTIN: All right, we could poll people around that and see. I think it would be fine for us to be able to schedule a time slot like we did today, but I am not sure that is going to get us very far. It is not enough time to really lay this out.

DR. IEZZONI: We need to start blocking out some other specific meeting times in March, April, May.

MS. COLTIN: I am a little bit concerned because we talked about scheduling two or three conference calls on Medicaid managed care between now and February.

We talked about the post-acute initiative on January 22, and I really didn't want to have to schedule another conference call on this in that same time frame.

I sort of think we will do what we can on January 22 on post-acute. If we can either clear some time on the February full agenda for this group to meet, for at least two hours as opposed to one, that would be helpful.

DR. IEZZONI: One of the agenda items for this group also is the Medicaid managed care and making sure that the quality piece fits in there.

Part of the conference calls that we are hoping to schedule to talk about Medicaid managed care will have this kind of quality piece embedded in it.

MS. COLTIN: I would like us to take a couple of hours to try to plan out these points three and four in terms of what kinds of stakeholders are we going to want to hear from, what kinds of issues are we going to want to hear about.

Just planning for that, which hopefully could take place maybe in connection with the June meeting or, if not, sometime in between, maybe in May or in that time, again, a lot of these things that we are going to hear about, the full committee could be interested in.

MS. GREENBERG: There is a fair amount of competition now for the February 3-4 agenda. Elizabeth, I think you are the only person who is also on privacy, of this subcommittee.

Do you want staff to poll about an afternoon session on the 2nd?

MS. COLTIN: I think it is worth polling about it. I am not that optimistic at this point, but I think it is worth trying to see how many people could come in for something on the afternoon of the 2nd.

MS. WARD: My choice, of course, is to attend the quality.

DR. MOR: It is a direct conflict with quality; you understand that, temporally and otherwise. [Laughter.]

MS. COLTIN: I am only thinking we need two to three hours. You could go to privacy in the morning and then join us in the afternoon, and split your time.

MS. GREENBERG: It is certainly feasible if people can do it.

MS. COLTIN: It also enables people not to have to come in the night before, if we do it in the afternoon.

DR. IEZZONI: Let's see if it is available.

MS. COLTIN: If not, we will try to poll for some time between February and June, early enough to be able to plan something for June and pull it off.

A lot of the people that we might be interested in having speak also have full agendas. We are going to be caught between participating in the full committee and having a separate meeting.

MR. EDINGER: One question. For the January 22, because we are talking about speakers and panels, how much time is there for the speakers?

MS. COLTIN: The question portion?

MR. EDINGER: Yes.

MS. COLTIN: Should we get two hours?

DR. IEZZONI: Yes, absolutely.

MS. COLTIN: Two hours is what crossed my mind. We will work on that. It probably should come a little later in the agenda, because the fundamentals ought to come first.

MS. GREENBERG: I was thinking afternoon.

DR. MOR: The quality session should be in the afternoon and there should be at least some discussion of the reimbursement models that are currently in place.

Then actually the quality stuff might even follow after the reimbursement stuff, because there is some data out there suggesting that the reimbursement stuff has quality implications.

MS. COLTIN: We will work on that. We are a little over time. Lunch. Thank you.

[Whereupon, at 12:40 p.m., the session was adjourned.]