Effective Clinical Practice, November/December 1999.
At a meeting of the Board of Directors of The HMO Group (now the Alliance of Community Health Plans) in 1989, we board members were complaining, as usual, about the demands from employers for claims data to justify our rate increases. The member plans had two problems with these requests: We could not provide them with cost data (as prepaid group practices, we did not create encounter forms for each visit), and we knew that these data would not tell purchasers what they most wanted to know: Were they getting their money's worth?
George Halvorson, President and Chief Executive Officer of Group Health, Inc. (now HealthPartners), proposed that we work with some of the more sophisticated employers to develop a set of measures that could demonstrate that their money was well spent. The HMO Group and Kaiser Permanente put together a task force that included several influential employers who were active in the Washington Business Group on Health, and Towers Perrin, to develop this set of measures. In September 1991, the Health Plan Employer Data and Information Set, or HEDIS, was born (see Primer). The earliest measures focused on prevention—an area that was noncontroversial, relatively easily measured, and a historic strength of HMOs.
In 1992, representatives of the plans and employers met with measurement and policy experts to find HEDIS a home and a neutral sponsor. The group concluded that the logical candidate was the National Committee for Quality Assurance (NCQA), which subsequently agreed to administer and develop the performance measures. NCQA faced two challenges. First, it needed to demonstrate that plans that contracted with community physicians were willing and able to collect the data needed to produce a HEDIS report. Second, to prevent a less-than-honest organization from claiming excellent results (thereby putting its honest competitors at a disadvantage), HEDIS data needed to be audited. NCQA's national demonstration project, completed in 1994, included plans with community physicians and showed that HEDIS could be implemented outside of group practice models. Starting this year, HEDIS scores must be verified by an auditing firm approved by NCQA.
HEDIS has evolved substantially from version 1.0. Some of the earliest measures (e.g., the percentage of members 40 years of age and older who have had cholesterol screening in the past 5 years) have been replaced with more targeted measures (cholesterol management after acute cardiovascular events). Other measures have been added, then retired (e.g., readmission for certain mental health disorders). Key elements of prevention (e.g., childhood immunizations, breast cancer screening, cervical cancer screening, flu shots for older adults) will remain, but HEDIS 2000 will increase its emphasis on how well plans care for the chronically ill, for example, by adding measures for comprehensive diabetes care and control of hypertension.
Trouble in Paradise
In this issue of ecp, Mehl (1) raises several provocative points about some of the unintended consequences of HEDIS. He points out that by combining the newest childhood immunizations with well-established ones in a single performance measure, HEDIS disproportionately penalizes plans that are slower to adopt new vaccines than established ones. And the scoring is black and white—immunized or not immunized—with no credit given for documenting a thorough presentation to the parents, who then exercise their right of informed consent and refuse the immunization. Parental concern about the risk of immunization varies among populations and is more prominent in Colorado (Mehl's state) and Washington (my state). In addition, immunization against hepatitis B is much more clearly indicated for populations at greater risk for the disease than for the general population. But in HEDIS, "one size fits all."
Lest ecp readers think this is a fable, one of the top-rated plans in the country recently learned that a national magazine dropped its rating of the plan's effectiveness in care of children from "A" one year to "D" the next year. Upon examination of the criteria used, the plan found that the key measurement was the rate of pediatric immunizations. The plan had chosen to respect its pediatricians' hesitancy to recommend varicella vaccine for all children. The rate of varicella immunization was less than 50%, which reduced the plan's overall immunization score from over 90% to less than 50%. Has HEDIS taken a wrong turn? Maybe it's time to ask some hard questions about performance measurement.
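The arithmetic behind this drop can be sketched with hypothetical data (the actual HEDIS specification is more detailed): under an all-or-nothing composite, a child counts as immunized only if every vaccine on the list is documented, so the overall rate can never exceed the lowest single-vaccine rate, and one contested new vaccine drags down an otherwise strong record.

```python
# Illustrative sketch with hypothetical records (not actual HEDIS logic):
# each record notes which vaccines are documented for one child.
children = [
    {"dtp": True, "polio": True, "mmr": True, "varicella": True},
    {"dtp": True, "polio": True, "mmr": True, "varicella": False},
    {"dtp": True, "polio": True, "mmr": True, "varicella": False},
    {"dtp": True, "polio": True, "mmr": True, "varicella": True},
    {"dtp": True, "polio": True, "mmr": True, "varicella": False},
]
vaccines = ["dtp", "polio", "mmr", "varicella"]

# Rate for each vaccine considered separately.
rates = {v: sum(c[v] for c in children) / len(children) for v in vaccines}

# All-or-nothing composite: a child counts only if fully immunized.
composite = sum(all(c[v] for v in vaccines) for c in children) / len(children)

# Here the established vaccines are all at 100%, varicella is at 40%,
# and the composite falls to 40%—capped by the lowest single rate.
```

Reporting the per-vaccine rates alongside (or instead of) the composite, as suggested below, would let purchasers see that the plan's record on established immunizations was unchanged.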
Should New Immunizations Be Measured Separately?
At a minimum, delay in introducing any new technology is inevitable (perhaps the Centers for Disease Control and Prevention could be more sensitive to this issue when it releases new recommendations). In addition, newer immunizations are less familiar to parents and are thus less likely to be viewed as necessary. (Those of us old enough to remember when the Salk polio vaccine was approved have trouble believing that there are actually parents who "don't believe in immunizations"!) NCQA believes that the plans that are more closely connected to their physicians are more likely to help physicians educate members about the importance of the newer immunizations. I agree with NCQA on this assumption, and I also believe that the newer immunizations offer significant benefits. However, I feel that reporting the rate of the newest agents separately is a reasonable compromise.
Is 100% Compliance the Right Goal?
The immunization example demonstrates the trade-off between compliance and patient autonomy. The trade-off may carry over to other measures, such as screening. For example, there are women who fully understand the potential advantages of breast cancer screening and still decide that they don't want it—it makes them too anxious, the procedure is too painful, or they "don't want to know." Perhaps they have a friend who had a false-positive mammogram and was subjected to a series of unnecessary procedures and possibly lifelong anxiety—some people never fully believe that the results of a test were false-positive and are left with iatrogenic disease despite our best intentions. It has taken centuries for physicians to learn that, in the case of competent patients, our role is to advise and the patients' role is to consent. Surely, we should not change our commitment to that position to have a better score in our contract negotiations.
Does Performance Measurement Have Opportunity Costs?
But what about other measures—where 100% compliance would be seen as an unambiguous "good"? Is there any reason, for example, not to have a goal of zero emergency department visits for patients with asthma? Yes, and the economists call it opportunity cost. Although each affected member would presumably be better off by achieving the goal of perfect compliance, the plan would inevitably be practicing "on the flat part of the curve." The effort that would be devoted to getting that last child to take his or her medication properly would be better spent on improving an aspect of care that is not measured by HEDIS. For example, physician-patient communication can be improved by a 1-day educational program (2); investing in such education would probably provide much more benefit to members than improving a screening rate from 94% to 95%. Also, HEDIS may cause an organization to invest considerable resources in immunizing a population against a given disease when a cost-benefit analysis would show that for that particular group, money would be much better spent on some other problem.
Can Performance Measurement Actually Lower Quality?
The way HEDIS is scored and the way it is used may, paradoxically, lower quality scores. Although this may partially be spurious, some effects could be real. Improving a measure by including more elements (e.g., care of patients with diabetes) is likely to lead to lower scores; if the results are not disseminated carefully, quality of care may seem to deteriorate when it is actually improving. To preclude this perception, plans may put a disproportionate amount of effort into improving less-important aspects of care, thereby providing lower quality care than they are capable of. This kind of issue falls under the rubric of "distraction," a problem that was recently explored in depth (3). Clinicians are now taught to pay close attention to many areas, including safety (seatbelts, firearms, radon); sensitive issues (chemical dependency, domestic violence, sexual issues); and lifestyle factors (stress at work, exercise and diet, personal relationships). The more expectations we have, the less effective clinicians will be in doing what only they can do—diagnose and treat illness.
Don't Throw the Baby Out with the Bath Water
Performance measurement has its downside. Ironically, HEDIS has not achieved its original purpose—to encourage employers to choose plans on the basis of quality (4). So what has HEDIS done? For one thing, it has brought structure to chaos. For decades, various plans reported interventions that improved aspects of quality in that plan; however, there was no way to demonstrate whether plans overall were able to improve quality in important areas. By giving plans the same goals and requiring consistent measurement systems, NCQA has used HEDIS to help bring focus to plans. Its 1998 Annual Report shows that plans reporting data for both 1996 and 1997 improved more quickly than plans in general. HEDIS drives change and differentiates higher-performing plans.
The kinds of measurements selected involve many areas of the organization aside from physicians (e.g., health plan administration, pharmacy, technology services, and marketing) and bring them together to work on common and important goals. This focus has had effects far beyond the individual plans: One veteran physician HMO executive close to HEDIS has described it as "the closest thing we have to a national health policy."
There have been a number of unexpected, and valuable, uses of HEDIS. A handful of influential employers give employees significant financial incentives to choose plans with better HEDIS scores in certain key areas (the most prominent, General Motors, emphasizes childhood immunizations, prenatal care in the first trimester, screening for breast and cervical cancer, follow-up after hospitalization for mental illness, advising smokers to quit, eye examinations for persons with diabetes, and treatment with β-blockers after heart attack). Some plans, such as HealthPartners, use HEDIS data for rating physician groups and post these data on their Web site to enable members to choose providers on the basis of quality. A consortium of Massachusetts plans is working on pooling HEDIS data, thereby enabling more physicians to be more precisely measured; their general philosophy is to collaborate on clinical quality while competing on price and service quality. Finally, many plans reward individual physicians according to performance in certain measures; a medical director who has been working in prepaid medicine for 30 years has observed improvement of 5% to 10% a year in some areas, after seeing no improvement in his first 25 years of work. He likes the practical uses of HEDIS; he can tell physicians exactly which patients need screening and even notify the patients directly.
Most of the "old hands" in the HMO field go back to the 1970s—when we were accused of being communists. (Many of us took that as a compliment, considering who was doing the accusing.) We are now viewed as money-grubbing capitalists who deprive people of necessary medical care to ensure that our stock options will rise in value. HEDIS is one of the few ways we have of keeping our dream alive—the dream of developing systems of care that really improve health, which has been the fundamental mission of HMOs ever since they were called a "movement" instead of an "industry."
1. Mehl AL. Physician autonomy, patient choice, and immunization performance measures. Eff Clin Pract. 1999;2:289-293.
2. Stein TS, Kwan J. Thriving in a busy practice: physician-patient communication training. Eff Clin Pract. 1999;2:63-70.
3. Fisher ES, Welch HG. Avoiding the unintended consequences of growth in medical care: how might more be worse? JAMA. 1999;281:446-453.
4. Gabel JR, Hunt KA, Hurst K. When employers choose health plans: do NCQA accreditation and HEDIS data count? New York: The Commonwealth Fund; 1998.
Henry S. Berman, MD, Berman Consulting, LLC, 13727 North Riverbluff Lane, Spokane, WA 99208; telephone: 509-466-2445; fax: 509-464-0550; e-mail: Hsberman@aol.com.