
Effective Clinical Practice

POLICY MATTERS

Assessing Hospital Quality: A Review for Clinicians

Effective Clinical Practice, March/April 2001

Kaveh G. Shojania, Jonathan Showstack, Robert M. Wachter

For author affiliations, current addresses, and contributions, see end of text.

Physicians frequently react to "quality reports" with suspicion, regarding attempts to measure "quality" as thinly disguised efforts at cost reduction (1) or marketing salvos engineered by managed care organizations. (2) Skepticism about quality reports is reinforced by the confusion in many report cards, which often present a handful of relatively crude clinical outcomes (e.g., mortality and readmission rates) mixed with measures highlighting resource use (e.g., lengths of stay).

Nevertheless, physicians must acknowledge the need for quality measurement in American health care, (3) given the widespread gaps between recommended and actual practice (4, 5) and unimpressive performance in terms of basic health care outcomes. (6) While quality reports to date have demonstrated limited impact, (7) interest in them remains high, especially in light of the recent publicity regarding medical errors. (8, 9) Widely used quality reports (e.g., the Health Plan Employer Data Information Set [HEDIS] (10)) have focused largely on preventive strategies most relevant to ambulatory patients in health plans. In contrast, the challenges and opportunities involved in measuring the quality of inpatient care have received far less attention.

In this article, we present clinicians with a conceptual scheme for assessing hospital quality reports. We illustrate the dominant paradigm of measuring quality in terms of structure, process, and outcome (11) with examples from the clinical and health services research literature, highlighting the advantages and disadvantages of each category. We emphasize comparative quality measurement (as with hospital quality report cards) rather than the development of indicators designed to fuel internal quality improvement. (12, 13) In addition, because productive quality measurement clearly must stimulate quality improvement, (14) we touch on the interrelationship between these two activities.

A Definition of Quality

One often-cited definition describes quality as the "degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge." (15) In practice, distinctions between quality and cost have often been blurred, and cost-cutting efforts and "quality assurance" activities have often been regarded as identical. (1) In response, a growing movement distinguishes quality from economy and proposes "value" (quality divided by cost) as the preferred metric for informing health care purchasing and policy decisions. (16-18)

Structural Measures of Quality

Structure is the setting in which care occurs and the capacity of that setting to produce quality. (11, 19) Table 1 presents traditional examples of structural measures related to hospital quality. More recent candidate measures include the adoption of organizational models for inpatient care (e.g., closed intensive care units (20) and dedicated stroke units (21)) and the presence of sophisticated clinical information systems. (22, 23)

The growing use of patient volume as a quality indicator reflects an extensive literature documenting superior outcomes for hospitals and physicians with higher patient volumes. (24-32) This literature suggests that substantial reductions in mortality might result from regionalizing treatment for certain high-risk conditions. (41, 42) For example, a recent analysis suggests that transferring the care of patients with acute myocardial infarction (MI) from hospitals in the lowest-volume quartile to those in the highest-volume quartile would save 2.3 lives per 100 patients. (32) The corresponding "number needed to treat"—50—falls within the range of many accepted therapeutic interventions.
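The "number needed to treat" cited above follows the standard definition: the NNT is the reciprocal of the absolute risk reduction (ARR) that an intervention produces, here treatment at a high-volume rather than a low-volume hospital.

```latex
\mathrm{NNT} = \frac{1}{\mathrm{ARR}}
```

Smaller absolute reductions therefore yield larger NNTs, which is why even modest mortality differences between hospital quartiles can compare favorably with accepted drug therapies.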

When volume-outcomes relationships reflect a "practice-makes-perfect" effect, using volume to assess quality may be reasonable. However, such relationships could also reflect "selective referral," which may occur when the community has informal knowledge of the superior quality of these high-volume centers. (43) Moreover, the degree to which volume-based performance measurement would promote quality improvement remains unclear. One study of cardiac surgery indicated improved outcomes as a result of increased patient volume, (44) but a study of pediatric trauma centers suggested that, beyond a certain threshold, increases in volume place a strain on provider resources and worsen outcomes. (45) Diverting patients to large referral centers also has important health policy implications (46) and may decrease patient satisfaction. (47)

Similar arguments apply to specialist care as a quality measure. For example, although cardiologists achieve better outcomes than generalists in caring for patients with acute MI, much of this benefit is derived from the performance of established care processes. (48, 49) The case for measuring these processes directly is strengthened by the finding that even cardiologists fail to provide proven therapies to many eligible patients. (48-50) In contrast, a comparable set of proven interventions does not exist for inpatient management of the complications of HIV infection. Thus, management by high-volume HIV "specialists" may provide the best available marker for superior inpatient HIV care. (26, 51)

Accreditation listings and billing data permit easy ascertainment of teaching status and patient volumes, making such structural measures extremely efficient quality indicators. However, existing databases do not characterize the organizational models (20, 21, 33, 34) and information systems (22, 23) that hospitals use. Measures targeting these hospital characteristics may thus prove difficult to implement.

Process Measures of Quality

Process measures depict the interior of the hospital "black box," allowing measurement of the care patients actually receive. Table 2 outlines some of the advantages of this approach. If proven diagnostic and therapeutic strategies are monitored, quality problems can be detected long before demonstrable outcome differences occur. (11, 19, 58) Moreover, process indicators may avoid the distorting patient factors that affect outcome-based quality measurement by comparing providers in terms of the percentage of eligible patients for whom they performed particular processes. (59) Sophisticated process measurement will still require some accounting for patient factors, such as accepted contraindications to the targeted interventions (60) and patients' informed decisions to decline physician recommendations. (61, 62) Nonetheless, process-based quality measurement should minimize the perverse incentive for physicians to avoid caring for complicated patients, (63) a concern recently illustrated in outcome-based physician profiling (64) (Table 2).
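The eligibility logic described above (counting only patients with an indication for the process, while excluding accepted contraindications and documented informed refusals) can be sketched in a few lines. This is an illustrative sketch, not an instrument from the literature; the record fields and the function name are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Admission:
    """Minimal per-admission record (hypothetical field names)."""
    has_indication: bool       # e.g., acute MI, so aspirin is indicated
    contraindicated: bool      # accepted contraindication documented
    declined: bool             # patient's informed refusal documented
    process_performed: bool    # e.g., aspirin given within 24 hours

def process_adherence(admissions):
    """Percentage of eligible patients who received the targeted process.

    Eligibility excludes contraindications and informed refusals, so the
    rate reflects only care the provider could and should have delivered,
    avoiding penalties for appropriate deviations.
    """
    eligible = [a for a in admissions
                if a.has_indication and not a.contraindicated and not a.declined]
    if not eligible:
        return None  # no eligible patients; the rate is undefined
    return 100.0 * sum(a.process_performed for a in eligible) / len(eligible)
```

Comparing providers on this percentage, rather than on raw outcomes, is what shields the measure from much of the case-mix distortion discussed above.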

The example of MI care best illustrates the potential power of process-based quality measurement. Clinical research has firmly established the benefits of acute reperfusion, (65) therapy with aspirin and beta-blockers, and certain other processes of care. (66, 67) Numerous studies have documented delays in translating this research into practice (50, 68-70) and the adverse consequences of these delays. (70-72) The Cooperative Cardiovascular Project confirmed that variations in therapeutic and preventive processes explain differences in acute MI mortality rates between hospitals; the project also documented improved outcomes following improvements in process performance. (52)

Unfortunately, the literature provides surprisingly few examples of process-outcome links as compelling as those associated with acute MI care. For example, despite the extensive literature documenting the benefits of postoperative thromboembolism prophylaxis, the relative advantages of various pharmacologic and mechanical methods remain unclear. (73-78) A process measure that captured "some form of thromboembolism prophylaxis" would probably not reveal significant underuse by providers. Nonetheless, any variation noted would be significant, and negative consequences of promoting this process are difficult to imagine. The same cannot be said of perioperative antibiotic prophylaxis. While the literature demonstrates improved patient outcomes for many surgical procedures, the evidence supporting a particular antibiotic regimen is not as strong. (79-81) In fact, a recent systematic review concluded that appropriately timed single-dose prophylaxis is as effective as multiple-dose regimens. (82) Thus, to develop a measure that captures perioperative antibiotic use, the extent of two potential quality problems—postoperative infections related to underuse of prophylaxis and increased adverse drug events and microbial resistance from antibiotic overuse—would need to be compared.

This example highlights the point that overuse and misuse of available therapies represent quality problems at least as important as those associated with underuse. (8, 83) However, overuse and misuse have generally proved more difficult to target. Despite marked variations in the use of many common procedures, (84) it remains difficult to determine whether high rates of use reflect inappropriate care. (85, 86) In fact, repeat cesarean section represents the only widely employed measure of overuse, and growing concerns about the safety of vaginal birth after cesarean section (87, 88) may lead to its abandonment. Endarterectomy for asymptomatic carotid stenosis presents a more promising potential target for a measure of overuse. Population-level analysis indicates that all providers, including hospitals enrolled in the Asymptomatic Carotid Atherosclerosis Study, (89) have significantly higher mortality rates than those reported in the trial. Inappropriate patient selection seems to be the most plausible explanation. (90) At the individual patient level, judgments of appropriateness for endarterectomy remain highly sensitive to estimates of surgical risk. (91) When such estimates can be generated more precisely, appropriate patient selection for endarterectomy may provide an accurate quality measure.

Despite these complexities, process measures remain attractive quality indicators because of their sensitivity in detecting quality problems (58) and their direct connection to quality improvement strategies. (11, 19, 59) Process-based measurement is particularly well-suited to medical conditions, for which the literature supplies more candidate processes and in which technical skill in process performance plays a minimal role. (92)

Outcome Measures of Quality

Although such outcomes as mortality, morbidity, and patient satisfaction (11, 19) represent the bottom line in defining quality, (15) these measures have problems (Table 3). Even for common conditions, it may take years to detect differences in outcomes between hospitals. (19, 58, 59) Moreover, such differences may reflect patient factors, variations in admission practices, or chance rather than differences in quality of care. (94-99) On the other hand, appropriate structure- or process-based measures do not exist for many conditions, and certain conditions remain particularly suited to outcome-based performance measurement.

Two immediate questions to ask about any candidate outcome measure are: Is the measure really an outcome? Does it relate to medical care delivered in the setting in question? (100) Length of stay is a widely used quality measure despite the fact that it primarily involves resource use. Rates of early readmission more plausibly represent a combined clinical and financial outcome, although analyses of their relationship to quality have produced conflicting results. (101-104) Although infant birthweight represents a pure clinical outcome, it is related to prehospital care and may be only minimally affected by factors under providers' control, even in the outpatient setting. (105-107)

More complicated questions regarding candidate outcome measures relate to perverse incentives and opportunities for "gaming." (100) Perverse incentives may arise from the criteria used to define target patient populations. For example, restricting inpatient mortality to deaths that literally occur in the hospital allows hospitals to lower their mortality rates simply by discharging patients to die at home or in other institutions. (108-111) Complication rates that capture only in-hospital events will similarly reward hospitals that merely reduce length of stay. (112)
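The discharge-timing incentive described above can be made concrete with a small sketch contrasting two mortality definitions for the same cohort. The data layout is hypothetical; the point is that only the in-hospital rate moves when a dying patient is discharged early.

```python
def mortality_rates(patients):
    """Contrast two mortality definitions for one cohort.

    Each patient is a dict with hypothetical keys:
      died_day - day of death (counted from admission), or None if alive
      los      - length of stay in days (day of discharge or death)

    In-hospital mortality counts only deaths before discharge, so a
    hospital that discharges dying patients lowers this rate without
    changing any patient's outcome; a fixed-interval (30-day) rate
    is immune to that maneuver.
    """
    n = len(patients)
    in_hospital = sum(1 for p in patients
                      if p["died_day"] is not None and p["died_day"] <= p["los"])
    within_30d = sum(1 for p in patients
                     if p["died_day"] is not None and p["died_day"] <= 30)
    return 100.0 * in_hospital / n, 100.0 * within_30d / n
```

A patient who dies at home on day 8 after a day-3 discharge counts toward 30-day mortality but not in-hospital mortality, which is exactly the gap the gaming exploits.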

"Upcoding" of diagnoses related to severity of illness and comorbid conditions represents another opportunity for gaming. (64, 113) Systematic biases in diagnostic coding were observed after the introduction of the prospective payment system (114) and may have contributed to some of the early reductions in adjusted mortality attributed to the Cardiac Surgery Reporting System in New York State. (113, 115) Early concerns that surgeons in New York avoided operating on high-risk patients (116) seem unfounded, (115, 117) but the incentive for physicians or hospitals to avoid caring for sicker patients remains a substantial concern for outcome-based performance measurement. (64)

For measures of patient morbidity, accurate outcome assessment will have to overcome a reverse reporting bias—that is, a bias toward underreporting of diagnoses and events. Most hospitals rely on voluntary incident reporting of adverse drug events, a method known to be severely flawed. (118) Thus, measurement of such events may penalize hospitals that use sophisticated detection methods and reward hospitals that do not. A similar reporting bias probably explains the observation of high surgical complication rates in hospitals with low mortality rates and other plausible indicators of superior quality. (119)

The questions on outcome-based quality measures that have received the most attention concern the variations in outcomes that are attributable to factors other than quality, such as chance, selection effects, and patient factors. (96) Most of the work in this area has focused on risk adjustment for hospital mortality rates. (120) Early efforts were hampered by the use of models originally designed to predict financial rather than clinical outcomes, (120) reliance on administrative data with known systematic errors and biases, (108, 121-123) and methodologic problems relating to model generation and performance analysis. (124-126) Progress has been made by focusing on specific subgroups of patients instead of overall hospital mortality. Sophisticated risk adjustment can now be used to interpret mortality rates for patients undergoing cardiac surgery (44, 127) and interventional cardiology procedures (128); critically ill patients (129); and patients with acute MI, (52) community-acquired pneumonia, (130) and upper gastrointestinal hemorrhage. (131)
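Risk adjustment of this kind is often summarized as an observed-to-expected (O/E) ratio: each patient's predicted probability of death comes from a risk model fit on clinical data (assumed here to be supplied), and the hospital's observed deaths are compared with the sum of those predictions. A minimal sketch with hypothetical inputs:

```python
def observed_to_expected(outcomes, predicted_risks):
    """Risk-adjusted performance as an observed-to-expected (O/E) ratio.

    outcomes        - 1 if the patient died, else 0
    predicted_risks - each patient's predicted probability of death from
                      a risk-adjustment model (assumed given)

    O/E below 1 suggests fewer deaths than the case mix predicts; above 1,
    more. Multiplying O/E by a reference population's overall rate yields
    a risk-adjusted mortality rate comparable across hospitals.
    """
    observed = sum(outcomes)
    expected = sum(predicted_risks)
    return observed / expected
```

The quality of such a comparison is only as good as the underlying risk model, which is why the shift from administrative to clinical data mattered so much in the reports cited above.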

In addition to adopting better models, sophisticated outcome measures tend to adjust for patient risk by using clinical rather than administrative data. The inaccuracy of the diagnostic codes contained in administrative data undermined early hospital mortality reports. (108, 113, 121, 122) New York and California have recently added a "sixth digit" to International Classification of Diseases, ninth revision, codes to distinguish secondary diagnoses present at admission from those that develop during hospitalization and may represent complications of care. This refinement may increase the validity of risk-adjusted mortality rates that use administrative data, but the expectation of such improvement remains speculative.

Even with ideal risk adjustment, the effects of chance (94, 96, 99) and selection are considerable (95, 97, 132) and probably cause more variation than do provider differences. Furthermore, even providers and hospital executives who have helped develop outcome measures tend to distrust performance reports that are based solely on outcomes, presumably for many of the previously cited reasons. (133) Nonetheless, given the current limitations of other quality measures and some notable successes of outcome-based reports, (44, 127) outcome assessment will continue to play a role in quality measurement. Valid application will depend on recognition of situations in which the concerns described here can be adequately addressed.

Quality Measurement versus Quality Improvement

We have focused on quality measurement as it relates to report card-style comparisons between hospitals, rather than the development of indicators designed to fuel quality improvement activities. (12, 13) Providers may believe that quality measurement has value only if it informs quality improvement. (134) On the other hand, patients have a legitimate interest in knowing how the providers who are available to them vary in quality. Moreover, although developers of report cards must ensure that overemphasis on measurement does not undermine quality (135) (e.g., by randomly rotating subsets of standard measures to eliminate "playing for the test" (106)), such publicly reported quality measurement has already produced at least one notable success. (44, 115) By contrast, the dissemination of clinical practice guidelines (136, 137) and other models of quality improvement (138-140) has generally failed to match this accomplishment.

Conclusions

We have illustrated the use of the conceptual scheme of structure, process, and outcome to develop quality indicators related to hospital care. In discussing the advantages and disadvantages of these types of indicators, we have also touched on the issues involved in developing indicators that are appropriate for clinical and measurement contexts and the importance of targeting the correct quality problem: underuse, overuse, or misuse.

Beyond the technical issues discussed, the major barrier to effective quality measurement and improvement remains a health care culture in which most hospitals muster the resources to measure quality only in response to immediate financial or organizational pressures. (17) The recent spotlight on medical error (8, 141, 142) may provide an opportunity to secure funding for broader initiatives related to hospital quality measurement and improvement. A substantial increase in the dollars spent on quality-related activities would still consume only a tiny fraction of the resources presently expended on developing novel inpatient diagnostic strategies and therapeutic agents.

References

1. Chassin MR. Quality of health care. Part 3: improving the quality of care. N Engl J Med. 1996;335:1060-3.

2. Stewart GM. Series on the quality of health care [Letter]. N Engl J Med. 1997;336:804-5.

3. Chassin MR, Galvin RW. The urgent need to improve health care quality. Institute of Medicine National Roundtable on Health Care Quality. JAMA. 1998;280:1000-5.

4. Schuster MA, McGlynn EA, Brook RH. How good is the quality of health care in the United States? Milbank Q. 1998;76:517-63.

5. Chassin MR. Is health care ready for Six Sigma quality? Milbank Q. 1998;76:565-91.

6. Anderson GF. In search of value: an international comparison of cost, access, and outcomes. Health Aff (Millwood). 1997;16:163-71.

7. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA. 2000;283:1866-74.

8. Kohn LT, Corrigan J, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Committee on Quality of Health Care in America, Institute of Medicine. Washington, DC: National Academy Pr; 2000.

9. Pear R. Clinton to order steps to reduce medical mistakes; a mandate for hospitals; states, in turn, will be asked to add systems to require the reporting of errors. New York Times. 2000: A1(N), A1(L).

10. Health Plan Employer Data Information Set (HEDIS). National Committee for Quality Assurance. Available at www.ncqa.org. Accessed 14 March 2000.

11. Donabedian A. Explorations in Quality Assessment and Monitoring. Ann Arbor, MI: Health Administration Pr; 1980.

12. Kazandjian VA, Wood P, Lawthers J. Balancing science and practice in indicator development: the Maryland Hospital Association Quality Indicator (QI) project. Int J Qual Health Care. 1995;7:39-46.

13. Rosenthal GE, Hammar PJ, Way LE, et al. Using hospital performance data in quality improvement: the Cleveland Health Quality Choice experience. Jt Comm J Qual Improv. 1998;24:347-60.

14. Berwick DM. Crossing the boundary: changing mental models in the service of improvement. Int J Qual Health Care. 1998;10:435-41.

15. Lohr KN, Schroeder SA. A strategy for quality assurance in Medicare. N Engl J Med. 1990;322:707-12.

16. Nelson EC, Greenfield S, Hays RD, Larson C, Leopold B, Batalden PB. Comparing outcomes and charges for patients with acute myocardial infarction in three community hospitals: an approach for assessing "value". Int J Qual Health Care. 1995;7:95-108.

17. Maxwell J, Briscoe F, Davidson S, et al. Managed competition in practice: "value purchasing" by fourteen employers. Health Aff (Millwood). 1998;17:216-26.

18. Wachter RM. Assessment and improvement of quality and value. In: Wachter RM, Goldman L, Hollander H, eds. Hospital Medicine. Philadelphia: Lippincott Williams & Wilkins; 2000.

19. Brook RH, McGlynn EA, Cleary PD. Quality of health care. Part 2: measuring quality of care. N Engl J Med. 1996;335:966-70.

20. Carson SS, Stocking C, Podsadecki T, et al. Effects of organizational change in the medical intensive care unit of a teaching hospital: a comparison of "open" and "closed" formats. JAMA. 1996;276:322-8.

21. Collaborative systematic review of the randomised trials of organised inpatient (stroke unit) care after stroke. Stroke Unit Trialists' Collaboration. BMJ. 1997;314:1151-9.

22. Evans RS, Pestotnik SL, Classen DC, et al. A computer-assisted management program for antibiotics and other antiinfective agents. N Engl J Med. 1998;338:232-8.

23. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998;280:1311-6.

24. Luft HS, Bunker JP, Enthoven AC. Should operations be regionalized? The empirical relation between surgical volume and mortality. N Engl J Med. 1979;301:1364-9.

25. Hannan EL, O'Donnell JF, Kilburn H Jr, Bernard HR, Yazici A. Investigation of the relationship between volume and mortality for surgical procedures performed in New York State hospitals. JAMA. 1989;262:503-10.

26. Stone VE, Seage GR 3rd, Hertz T, Epstein AM. The relation between hospital experience and mortality for patients with AIDS. JAMA. 1992;268:2655-61.

27. Horowitz MM, Przepiorka D, Champlin RE, et al. Should HLA-identical sibling bone marrow transplants for leukemia be restricted to large centers? Blood. 1992;79:2771-4.

28. Hosenpud JD, Breen TJ, Edwards EB, Daily OP, Hunsicker LG. The effect of transplant center volume on cardiac transplant outcome. A report of the United Network for Organ Sharing Scientific Registry. JAMA. 1994;271:1844-9.

29. Phibbs CS, Bronstein JM, Buxton E, Phibbs RH. The effects of patient volume and level of care at the hospital of birth on neonatal mortality. JAMA. 1996;276:1054-9.

30. Cebul RD, Snow RJ, Pine R, Hertzer NR, Norris DG. Indications, outcomes, and provider volumes for carotid endarterectomy. JAMA. 1998;279:1282-7.

31. Begg CB, Cramer LD, Hoskins WJ, Brennan MF. Impact of hospital volume on operative mortality for major cancer surgery. JAMA. 1998;280:1747-51.

32. Thiemann DR, Coresh J, Oetgen WJ, Powe NR. The association between hospital volume and survival after acute myocardial infarction in elderly patients. N Engl J Med. 1999;340:1640-8.

33. Mitchell PH, Shortell SM. Adverse outcomes and variations in organization of care delivery. Med Care. 1997;35:NS19-32.

34. Wachter RM, Katz P, Showstack J, Bindman AB, Goldman L. Reorganizing an academic medical service: impact on cost, quality, patient satisfaction, and education. JAMA. 1998;279:1560-5.

35. Palmer RH, Reilly MC. Individual and institutional variables which may serve as indicators of quality of medical care. Med Care. 1979;17:693-717.

36. Shortell SM, LoGerfo JP. Hospital medical staff organization and quality of care: results for myocardial infarction and appendectomy. Med Care. 1981;19:1041-55.

37. Hartz AJ, Krakauer H, Kuhn EM, et al. Hospital characteristics and mortality rates. N Engl J Med. 1989;321:1720-5.

38. Brennan TA, Hebert LE, Laird NM, et al. Hospital characteristics associated with adverse events and substandard care. JAMA. 1991;265:3265-9.

39. Keeler EB, Rubenstein LV, Kahn KL, et al. Hospital characteristics and quality of care. JAMA. 1992;268:1709-14.

40. Taylor DH Jr, Whellan DJ, Sloan FA. Effects of admission to a teaching hospital on the cost and quality of care for Medicare beneficiaries. N Engl J Med. 1999;340:293-9.

41. Birkmeyer JD, Lucas FL, Wennberg DE. Potential benefits of regionalizing major surgery in Medicare patients. Eff Clin Pract. 1999;2:277-83.

42. Dudley RA, Johansen KL, Brand R, Rennie DJ, Milstein A. Selective referral to high-volume hospitals: estimating potentially avoidable deaths. JAMA. 2000;283:1159-66.

43. Luft HS, Hunt SS, Maerki SC. The volume-outcome relationship: practice-makes-perfect or selective-referral patterns? Health Serv Res. 1987;22:157-82.

44. Hannan EL, Kilburn H Jr, Racz M, Shields E, Chassin MR. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994;271:761-6.

45. Tepas JJ 3rd, Patel JC, DiScala C, Wears RL, Veldenz HC. Relationship of trauma patient volume to outcome experience: can a relationship be defined? J Trauma. 1998;44:827-30; discussion 830-1.

46. Phillips KA, Luft HS. The policy implications of using hospital and physician volumes as "indicators" of quality of care in a changing health care environment. Int J Qual Health Care. 1997;9:341-8.

47. Finlayson SR, Birkmeyer JD, Tosteson AN, Nease RF Jr. Patient preferences for location of care: implications for regionalization. Med Care. 1999;37:204-9.

48. Jollis JG, DeLong ER, Peterson ED, et al. Outcome of acute myocardial infarction according to the specialty of the admitting physician. N Engl J Med. 1996;335:1880-7.

49. Willison DJ, Soumerai SB, McLaughlin TJ, et al. Consultation between cardiologists and generalists in the management of acute myocardial infarction: implications for quality of care. Arch Intern Med. 1998;158:1778-83.

50. Brand DA, Newcomer LN, Freiburger A, Tian H. Cardiologists' practices compared with practice guidelines: use of beta-blockade after acute myocardial infarction. J Am Coll Cardiol. 1995;26:1432-6.

51. Kitahata MM, Koepsell TD, Deyo RA, Maxwell CL, Dodge WT, Wagner EH. Physicians' experience with the acquired immunodeficiency syndrome as a factor in patients' survival. N Engl J Med. 1996;334:701-6.

52. Marciniak TA, Ellerbeck EF, Radford MJ, et al. Improving the quality of care for Medicare patients with acute myocardial infarction: results from the Cooperative Cardiovascular Project. JAMA. 1998;279:1351-7.

53. Allison JJ, Kiefe CI, Weissman NW, et al. Relationship of hospital teaching status with quality of care and mortality for Medicare patients with acute MI. JAMA. 2000;284:1256-62.

54. Meehan TP, Fine MJ, Krumholz HM, et al. Quality of care, process, and outcomes in elderly patients with pneumonia. JAMA. 1997;278:2080-4.

55. Clagett GP, Anderson FA Jr, Geerts W, et al. Prevention of venous thromboembolism. Chest. 1998;114:531S-560S.

56. Cook DJ, Guyatt GH, Salena BJ, Laine LA. Endoscopic therapy for acute nonvariceal upper gastrointestinal hemorrhage: a meta-analysis. Gastroenterology. 1992;102:139-48.

57. Cooper GS, Chak A, Connors AF Jr, Harper DL, Rosenthal GE. The effectiveness of early endoscopy for upper gastrointestinal hemorrhage: a community-based analysis. Med Care. 1998;36:462-74.

58. Mant J, Hicks N. Detecting differences in quality of care: the sensitivity of measures of process and outcome in treating acute myocardial infarction. BMJ. 1995;311:793-6.

59. Palmer RH. Process-based measures of quality: the need for detailed clinical data in large health care databases. Ann Intern Med. 1997;127:733-8.

60. Lescoe-Long M, Long MJ. Defining the utility of clinically acceptable variations in evidence-based practice guidelines for evaluation of quality improvement activities. Eval Health Prof. 1999;22:298-324.

61. Mehl AL. Physician autonomy, patient choice, and immunization performance measures. Eff Clin Pract. 1999;2:289-93.

62. Protheroe J, Fahey T, Montgomery AA, Peters TJ. The impact of patients' preferences on the treatment of atrial fibrillation: observational study of patient based decision analysis. BMJ. 2000;320:1380-4.

63. Shojania KG, Wachter RM. Unreliability of physician "report cards" to assess cost and quality of care [Letter]. JAMA. 2000;283:52; discussion 53-4.

64. Hofer TP, Hayward RA, Greenfield S, Wagner EH, Kaplan SH, Manning WG. The unreliability of individual physician "report cards" for assessing the costs and quality of care of a chronic disease. JAMA. 1999;281:2098-105.

65. Weaver WD, Simes RJ, Betriu A, et al. Comparison of primary coronary angioplasty and intravenous thrombolytic therapy for acute myocardial infarction: a quantitative review. JAMA. 1997;278:2093-8.

66. Yusuf S, Anand S, Avezum A Jr, Flather M, Coutinho M. Treatment for acute myocardial infarction. Overview of randomized clinical trials. Eur Heart J. 1996;17(Suppl F):16-29.

67. Peterson ED, Shaw LJ, Califf RM. Risk stratification after myocardial infarction. Ann Intern Med. 1997;126:561-82.

68. Gurwitz JH, Goldberg RJ, Chen Z, Gore JM, Alpert JS. Beta-blocker therapy in acute myocardial infarction: evidence for underutilization in the elderly. Am J Med. 1992;93:605-10.

69. Viskin S, Kitzis I, Lev E, et al. Treatment with beta-adrenergic blocking agents after myocardial infarction: from randomized trials to clinical practice. J Am Coll Cardiol. 1995;25:1327-32.

70. Krumholz HM, Radford MJ, Ellerbeck EF, et al. Aspirin for secondary prevention after acute myocardial infarction in the elderly: prescribed use and outcomes. Ann Intern Med. 1996;124:292-8.

71. Soumerai SB, McLaughlin TJ, Spiegelman D, Hertzmark E, Thibault G, Goldman L. Adverse outcomes of underuse of beta-blockers in elderly survivors of acute myocardial infarction. JAMA. 1997;277:115-21.

72. Krumholz HM, Radford MJ, Wang Y, Chen J, Marciniak TA. Early beta-blocker therapy for acute myocardial infarction in elderly patients. Ann Intern Med. 1999;131:648-54.

73. Palmer AJ, Koppenhagen K, Kirchhof B, Weber U, Bergemann R. Efficacy and safety of low molecular weight heparin, unfractionated heparin and warfarin for thrombo-embolism prophylaxis in orthopaedic surgery: a meta-analysis of randomised clinical trials. Haemostasis. 1997;27:75-84.

74. Morrison RS, Chassin MR, Siu AL. The medical consultant's role in caring for patients with hip fracture. Ann Intern Med. 1998;128:1010-20.

75. Hull RD, Brant RF, Pineo GF, Stein PD, Raskob GE, Valentine KA. Preoperative vs postoperative initiation of low-molecular-weight heparin prophylaxis against venous thromboembolism in patients undergoing elective hip replacement. Arch Intern Med. 1999;159:137-41.

76. Bounameaux H. Integrating pharmacologic and mechanical prophylaxis of venous thromboembolism. Thromb Haemost. 1999;82:931-7.

77. Agu O, Hamilton G, Baker D. Graduated compression stockings in the prevention of venous thromboembolism. Br J Surg. 1999;86:992-1004.

78. Hooker JA, Lachiewicz PF, Kelley SS. Efficacy of prophylaxis against thromboembolism with intermittent pneumatic compression after primary and revision total hip arthroplasty. J Bone Joint Surg Am. 1999;81:690-6.

79. Waddell TK, Rotstein OD. Antimicrobial prophylaxis in surgery. Committee on Antimicrobial Agents, Canadian Infectious Disease Society. CMAJ. 1994;151:925-31.

80. Gillespie WJ, Walenkamp G. Antibiotic prophylaxis for surgery for proximal femoral and other closed long bone fractures (Cochrane Review). The Cochrane Library, Issue 2. Oxford: Update Software; 2000.

81. Smaill F, Hofmeyr GJ. Antibiotic prophylaxis for cesarean section (Cochrane Review). The Cochrane Library, Issue 2. Oxford: Update Software; 2000.

82. Esposito S. Is single-dose antibiotic prophylaxis sufficient for any surgical procedure? J Chemother. 1999;11:556-64.

83. Donaldson MS, ed. Measuring the Quality of Health Care. A Statement by the National Roundtable on Health Care Quality. Washington, DC: National Academy Pr; 1999.

84. Wennberg J, Gittelsohn A. Variations in medical care among small areas. Sci Am. 1982;246:120-34.

85. Chassin MR, Kosecoff J, Park RE, Winslow CM, Kahn KL, Merrick NJ, et al. Does inappropriate use explain geographic variations in the use of health care services? A study of three procedures. JAMA. 1987;258:2533-7.

86. Wennberg JE. The paradox of appropriate care. JAMA. 1987;258:2568-9.

87. McMahon MJ, Luther ER, Bowes WA Jr, Olshan AF. Comparison of a trial of labor with an elective second cesarean section. N Engl J Med. 1996;335:689-95.

88. Sachs BP, Kobelin C, Castro MA, Frigoletto F. The risks of lowering the cesarean-delivery rate. N Engl J Med. 1999;340:54-7.

89. Endarterectomy for asymptomatic carotid artery stenosis. Executive Committee for the Asymptomatic Carotid Atherosclerosis Study. JAMA. 1995;273:1421-8.

90. Wennberg DE, Lucas FL, Birkmeyer JD, Bredenberg CE, Fisher ES. Variation in carotid endarterectomy mortality in the Medicare population: trial hospitals, volume, and patient characteristics. JAMA. 1998;279:1278-81.

91. Matchar DB, Oddone EZ, McCrory DC, et al. Influence of projected complication rates on estimated appropriate use rates for carotid endarterectomy. Appropriateness Project Investigators. Academic Medical Center Consortium. Health Serv Res. 1997;32:325-42.

92. Mant J, Hicks NR. Assessing quality of care: what are the implications of the potential lack of sensitivity of outcome measures to differences in quality? J Eval Clin Pract. 1996;2:243-8.

93. Drachman DA. Benchmarking patient satisfaction at academic health centers. Jt Comm J Qual Improv. 1996;22:359-67.

94. Jencks SF, Daley J, Draper D, Thomas N, Lenhart G, Walker J. Interpreting hospital mortality data. The role of clinical risk adjustment. JAMA. 1988;260:3611-6.

95. Wennberg JE, Freeman JL, Shelton RM, Bubolz TA. Hospital use and mortality among Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1989;321:1168-73.

96. Park RE, Brook RH, Kosecoff J, et al. Explaining variations in hospital death rates. Randomness, severity of illness, quality of care. JAMA. 1990;264:484-90.

97. Miller MG, Miller LS, Fireman B, Black SB. Variation in practice for discretionary admissions. Impact on estimates of quality of hospital care. JAMA. 1994;271:1493-8.

98. Thomas JW, Hofer TP. Research evidence on the validity of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55:371-404.

99. Thomas JW, Hofer TP. Accuracy of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care. 1999;37:83-92.

100. McGlynn EA. The outcomes utility index: will outcomes data tell us what we want to know? Int J Qual Health Care. 1998;10:485-90.

101. Hayward RA, Bernard AM, Rosevear JS, Anderson JE, McMahon LF Jr. An evaluation of generic screens for poor quality of hospital care on a general medicine service. Med Care. 1993;31:394-402.

102. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331:989-95.

103. Hofer TP, Hayward RA. Can early re-admission rates accurately detect poor-quality hospitals? Med Care. 1995;33:234-45.

104. Cowper PA, Peterson ED, DeLong ER, Jollis JG, Muhlbaier LH, Mark DB. Impact of early discharge after coronary artery bypass graft surgery on rates of hospital readmission and death. The Ischemic Heart Disease (IHD) Patient Outcomes Research Team (PORT) Investigators. J Am Coll Cardiol. 1997;30:908-13.

105. Goldenberg RL, Rouse DJ. Prevention of premature birth. N Engl J Med. 1998;339:313-20.

106. Eddy DM. Performance measurement: problems and solutions. Health Aff (Millwood). 1998;17:7-25.

107. Frick KD, Lantz PM. How well do we understand the relationship between prenatal care and birth weight? Health Serv Res. 1999;34:1063-73; discussion 1074-82.

108. Jencks SF, Williams DK, Kay TL. Assessing hospital-associated deaths from discharge data. The role of length of stay and comorbidities. JAMA. 1988;260:2240-6.

109. Kahn KL, Brook RH, Draper D, Keeler EB, Rubenstein LV, Rogers WH, et al. Interpreting hospital mortality data. How can we proceed? JAMA. 1988;260:3625-8.

110. Mullins RJ, Mann NC, Hedges JR, Worrall W, Helfand M, Zechnich AD, et al. Adequacy of hospital discharge status as a measure of outcome among injured patients. JAMA. 1998;279:1727-31.

111. Jollis JG, Romano PS. Pennsylvania's Focus on Heart Attack-grading the scorecard. N Engl J Med. 1998;338:983-7.

112. Iezzoni LI, Mackiernan YD, Cahalane MJ, Phillips RS, Davis RB, Miller K. Screening inpatient quality using post-discharge events. Med Care. 1999;37:384-98.

113. Green J, Wintfeld N. Report cards on cardiac surgeons. Assessing New York State's approach. N Engl J Med. 1995;332:1229-32.

114. Keeler EB, Kahn KL, Draper D, et al. Changes in sickness at admission following the introduction of the prospective payment system. JAMA. 1990;264:1962-8.

115. Chassin MR, Hannan EL, DeBuono BA. Benefits and hazards of reporting medical outcomes publicly. N Engl J Med. 1996;334:394-8.

116. Omoigui NA, Miller DP, Brown KJ, et al. Outmigration for coronary bypass surgery in an era of public dissemination of clinical outcomes. Circulation. 1996;93:27-33.

117. Hannan EL, Siu AL, Kumar D, Racz M, Pryor DB, Chassin MR. Assessment of coronary artery bypass graft surgery performance in New York. Is there a bias against taking high-risk patients? Med Care. 1997;35:49-56.

118. Cullen DJ, Bates DW, Small SD, Cooper JB, Nemeskal AR, Leape LL. The incident reporting system does not detect adverse drug events: a problem for quality improvement. Jt Comm J Qual Improv. 1995;21:541-8.

119. Silber JH, Rosenbaum PR, Schwartz JS, Ross RN, Williams SV. Evaluation of the complication rate as a measure of quality of care in coronary artery bypass graft surgery. JAMA. 1995;274:317-23.

120. Iezzoni LI. Risk Adjustment for Measuring Healthcare Outcomes. 2nd ed. Chicago: Health Administration Pr; 1997.

121. Green J, Wintfeld N. How accurate are hospital discharge data for evaluating effectiveness of care? Med Care. 1993;31:719-31.

122. Malenka DJ, McLerran D, Roos N, Fisher ES, Wennberg JE. Using administrative data to describe casemix: a comparison with the medical record. J Clin Epidemiol. 1994;47:1027-32.

123. Romano PS, Mark DH. Bias in the coding of hospital discharge data and its implications for quality assessment. Med Care. 1994;32:81-90.

124. Concato J, Feinstein AR, Holford TR. The risk of determining risk with multivariable models. Ann Intern Med. 1993;118:201-10.

125. Localio AR, Hamory BH, Sharp TJ, Weaver SL, TenHave TR, Landis JR. Comparing hospital mortality in adult patients with pneumonia. A case study of statistical methods in a managed care program. Ann Intern Med. 1995;122:125-32.

126. Christiansen CL, Morris CN. Improving the statistical approach to health care provider profiling. Ann Intern Med. 1997;127:764-8.

127. O'Connor GT, Plume SK, Olmstead EM, et al. A regional intervention to improve the hospital mortality associated with coronary artery bypass graft surgery. The Northern New England Cardiovascular Disease Study Group. JAMA. 1996;275:841-6.

128. Block PC, Peterson ED, Krone R, et al. Identification of variables needed to risk adjust outcomes of coronary interventions: evidence-based guidelines for efficient data collection. J Am Coll Cardiol. 1998;32:275-82.

129. Knaus WA, Wagner DP, Draper EA, et al. The APACHE III prognostic system. Risk prediction of hospital mortality for critically ill hospitalized adults. Chest. 1991;100:1619-36.

130. Fine MJ, Auble TE, Yealy DM, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med. 1997;336:243-50.

131. Cooper GS, Chak A, Harper DL, Pine M, Rosenthal GE. Care of patients with upper gastrointestinal hemorrhage in academic medical centers: a community-based comparison. Gastroenterology. 1996;111:385-90.

132. Ballard DJ, Bryant SC, O'Brien PC, Smith DW, Pine MB, Cortese DA. Referral selection bias in the Medicare hospital mortality prediction model: are centers of referral for Medicare beneficiaries necessarily centers of excellence? Health Serv Res. 1994;28:771-84.

133. Romano PS, Rainwater JA, Antonius D. Grading the graders: how hospitals in California and New York perceive and interpret their report cards. Med Care. 1999;37:295-305.

134. Berwick DM. Quality comes home. Ann Intern Med. 1996;125:839-43.

135. Casalino LP. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med. 1999;341:1147-50.

136. Davis DA, Taylor-Vaisey A. Translating guidelines into practice. A systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines. CMAJ. 1997;157:408-16.

137. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458-65.

138. Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q. 1998;76:625-48, 511.

139. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76:593-624, 510.

140. Shortell SM, Jones RH, Rademaker AW, et al. Assessing the impact of total quality management and organizational culture on multiple outcomes of care for coronary artery bypass graft surgery patients. Med Care. 2000;38:207-17.

141. Pear R. Report outlines medical errors in V.A. hospitals; 700 deaths in 19 months; sweeping self-examination by department sees range of mistakes and abuse. New York Times. 1999: 1(N), 1(L).

142. Altman LK. Getting to the core of mistakes in medicine. New York Times. 2000: D1(N), F1(L).

Acknowledgment

The authors thank Amy J. Markowitz and Drs. Harold Luft, Arnold Milstein, and Jeffrey Rideout for their suggestions and comments on earlier drafts of this manuscript.

Grant Support

Supported in part by the Josiah Macy Jr. Foundation (Dr. Shojania).

Correspondence

Robert M. Wachter, MD, Department of Medicine, Box 0120, University of California, San Francisco, San Francisco, CA 94143-0120; telephone: 415-476-5632; fax: 415-502-5869; e-mail: bobw@medicine.ucsf.edu.