
Effective Clinical Practice, November/December 2000.

See related editorial.

First Do No Harm—To Err Is Human

I can still remember how excited I was to read an advance summary of the Institute of Medicine Report (IOM) on medical errors. I thought to myself, "This is going to be just great—I'll be able to take this back to the University of Washington Medical Center. It will aid our efforts to reduce risk and improve outcomes."

We have had a successful risk reduction and risk management program for more than 20 years. When I became Medical Director, we converted a quality assurance, quality-by-inspection program, in the best tradition of Codman, (1, 2) Lembcke, (3) and the old-style Joint Commission on Accreditation of Hospitals, (4) to a continuous quality improvement (CQI) program. (5) Our CQI program was actively supported at all levels, had been instrumental in our continuous improvement of operations and outcomes for 10 years, and was associated with measurable reduction in risk and improved outcomes.

I was certain that the full IOM report (6) would help us focus that precious commodity—institutional energy and resources—on a more systematic approach to error reduction and, by extension, adverse event reduction.

You can imagine my surprise when, in meetings with faculty and medical center leadership, there was virtually none of the enthusiasm for the report I had anticipated, especially from our most influential quality opinion leaders—nor was there outright rejection. The most common response to my initial enthusiasm was even more dreadful than rejection: silence! Persons who had been strong advocates for improved medical quality would listen, nod in apparent assent, then simply say, "So what?" or worry silently, "So what's next?" For several weeks I was perplexed and also wondered, "What next?" Gradually, as I had more opportunities to speak privately with colleagues, I began to understand the "problem" the IOM report, "To Err Is Human," had created for me.

IOM Report: Exaggeration and Omission

"First, do no harm" is a deeply ingrained principle for all physicians and conscientious professionals. (7) In their hearts, the many sincere, hardworking physicians and nurses who have devoted their lives to providing care to patients, to continuously trying to improve quality of care, and above all, to "Do no harm," probably felt betrayed by the IOM report. They no doubt felt particularly betrayed by the spectacular headlines that "their" errors caused between 44,000 and 98,000 deaths per year, more deaths than are caused by other headline-grabbing diseases, like AIDS. Inevitably, the headlines expand to "Each year, 120,000 patients die because of medical error—that's equivalent to a jumbo jet crashing every day!" (8)

The silence of everyday practitioners, in contrast to my initial enthusiasm, is a significant problem for the IOM report. My colleagues felt as if all of their past efforts to reduce risk and prevent error were discounted as inconsequential. As Brennan states elsewhere, "The report and the media accounts of it give the impression that doctors and hospitals are doing very little about the problem of injuries caused by medical care." (9) The report makes only brief mention of the work by anesthesiologists, which has reduced the risk of anesthesia by more than 100-fold. Also noteworthy is the temporal trend in rate of injury due to medical care in the three studies cited in the IOM report—from 4.6% of admissions in California in 1976, to 3.7% in New York in 1984, to 2.9% in Colorado and Utah in 1992. (10-12) Although it is risky to extrapolate the rates of injury in individual studies to annual deaths due to substandard medical care in the United States, (13) the extrapolated total would decrease from 92,000 deaths in 1984 to 25,000 in 1992 based on the New York and the Colorado and Utah studies, respectively. (7) Regulators have also been working quietly but effectively to reduce the frequency of rare but disastrous events, such as deaths due to inadvertent administration of concentrated potassium chloride solutions and wrong-side surgery, through programs like the JCAHO's sentinel event program and the American Academy of Orthopaedic Surgeons' protocol in which patients and others "sign their site" before an operation.

Effects on Practitioners

So what if the IOM report has the effect of exaggerating the magnitude of error in the public's mind? So what if it appears condescending in its failure to acknowledge the good work of individuals locally and of national groups in their efforts to reduce risks in hospitalized patients? My greatest worry is that some individuals will feel a sense of betrayal and simply disengage, especially given that our litigation system is driven by a crime-and-punishment paradigm. The report could encourage litigation, which, as most practitioners realize and the Harvard malpractice study demonstrates, (13) is a very imperfect system of justice and method of meting out punishment. If litigation is encouraged, practitioners could disengage and increase the tendency for silence and secrecy induced by fear. This would be needless disengagement because, in fact, the IOM report focuses on the need for improvement, not punishment.

The almost-unprecedented media attention to the report, including a presidential proclamation and press conferences, is likely to be followed by the inevitable "error industry" of consultants and instant experts. These persons will be available to help institutions respond to new regulations and reporting requirements. However, they also have the potential to cause the most important participants in error reduction—the physicians and nurses in the trenches—to disengage, or even worse, to feel that error reduction is no longer their responsibility. This would be a step backward from a centuries-old tradition, perhaps first established by Florence Nightingale (7) and leading to the pledge of nurses and doctors to "First, do no harm."

The tendency to disengage will inevitably be accompanied by a higher level of underreporting of adverse outcomes. Adequate reporting by engaged participants, as the IOM report notes, is critical to error prevention in aerospace and in medicine. The disastrous consequences of underreporting will be further compounded if various state, federal, and local error repositories do not have confidentiality protection or, alternatively, if the United States adopts a no-fault medical injury compensation system.

Moving Forward

So, what next? First, we need to celebrate the reduction in risk to our patients that has occurred, especially in the past couple of decades. In so doing, we acknowledge efforts by individual practitioners committed to their patients' well-being and their pledge to "First, do no harm." These efforts have been informed by researchers who have described the epidemiology and complexity of errors and the magnitude of the threat to patient well-being, as well as by voluntary groups, ranging from effective local peer-review committees of physicians, to hospital administrators, to voluntary agencies like the Joint Commission on Accreditation of Healthcare Organizations, and more recently, to the National Patient Safety Foundation, which have helped make U.S. hospitals among the safest in the world. What is important is that we not abandon individual efforts to reduce risk and local efforts to continuously improve systems that support practitioners.

Second, we must ensure confidentiality. Both voluntary and involuntary efforts to catalogue and reduce adverse outcomes should be protected from discovery. To guarantee strict confidentiality in the United States will require national legislation. If we really want to encourage accurate and complete reporting, confidentiality is essential.

Third, we need to invest wisely. Efforts to improve safety require resources: for example, money to purchase computerized order entry systems and the time of committed professionals. These resources have to come from somewhere. Therefore, a careful assessment of costs and benefits is required to allocate dollars for systems development and precious time and energy of physicians to improve the safety of medical care. This largely local process will not be well served by assigning this work to a self-serving error industry.

Finally, we need to realize that, as the IOM report states, "to err is human." Although patient safety can always be improved, zero tolerance is unrealistic. The only doctors who do not make medical errors are those who do not care for patients. Risk, including that due to error, is inherent in medical care. Our job is to minimize it. But it is also our job to communicate frankly with the public, our patients. A total absence of risk would mean forgoing much of what is now routine patient care. Promoting expectations of zero risk is ultimately a tremendous disservice to the public and our patients.

Eric B. Larson
University of Washington Medical Center, Seattle, Washington

University of Washington Medical Center, 1959 North East Pacific Street, Box 356330, Seattle, WA 98195; telephone: 206-598-6600, fax: 206-598-4021; e-mail:


1. Codman EA. The product of a hospital. Surg Gynecol Obstet. 1914;18:491-6.

2. Codman EA. A study of hospital efficiency, as demonstrated by the case report of the first five years of a private hospital. Boston: TH Todd, 1918.

3. Lembcke PA. Medical auditing by scientific methods. JAMA. 1956;162:646-55.

4. O'Leary DS. The Joint Commission looks to the future. [Editorial] JAMA. 1987;258:951-2.

5. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med. 1989;320:53-6.

6. Kohn LT, Corrigan J, Donaldson MS. Institute of Medicine (U.S.) Committee on Quality of Health Care in America. To err is human: building a safer health system. Washington, DC: National Academy Pr; 1999.

7. Nightingale F. Notes on hospitals; being two papers read before the National Association for the Promotion of Social Science, at Liverpool, in October, 1858; with evidence given to the Royal Commissioners on the state of the Army in 1857. London: Parker; 1859.

8. Gerlin A. Healthcare's deadly secret: accidents routinely happen. Philadelphia Inquirer; Sept. 12, 1999.

9. Brennan TA. The Institute of Medicine report on medical errors-could it do harm? N Engl J Med. 2000;342:1123-5.

10. California Medical Association. Report on the Medical Insurance Feasibility Study. San Francisco: California Med Assoc; 1977.

11. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370-6.

12. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38:261-71.

13. Weiler PC. A Measure of malpractice: medical injury, malpractice litigation, and patient compensation. Cambridge, MA: Harvard Univ Pr; 1993.

Side Effects of Exaggerating Errors

It was Friday afternoon, and the patient in front of me had a fourth relapse of lymphoma. She needed intensive salvage chemotherapy. Wearily, she asked for the weekend at home, and I arranged a Monday admission. I dictated her admission note before she left the clinic. Monday morning, she arrived at the hospital; I phoned in my orders to the oncology nurse; and somehow, somewhere, the process failed. The patient received an excessive dose and died from drug toxicity. Even though my dictated note had the right dosage, I may have given the wrong dose verbally, the nurse may have written it incorrectly, the ward clerk may have transcribed the wrong dose, or the pharmacist may have filled it incorrectly.

Early in my career, I realized that errors do occur in hospitals. The IOM report correctly identified an epidemic that medicine has ignored for too long. Their report has been bannered across newspapers, featured on the nightly news, and even discussed by Dear Abby. The issue needed public exposure to start any serious correction effort. But as policymakers, physicians, and the media debate medical errors, they must also consider the unintended consequences of too much exposure. The tendency is to hype the issue with statements like "More people die in hospitals than on the highways." Scaring the public and sensationalizing the issue could backfire in four important ways.

I'll Deny the Problem

We were spending the morning at a major university health policy conference discussing the issues in health care—expenses and errors. The gentleman at the podium was a silver-haired patrician. His voice and facial expressions left no doubt that he was angry. As the CEO of a large integrated system, he was emphatic about his position. "I never want to hear that we kill patients again," he said. "Our doctors and nurses are simply too conscientious to be accused of these errors."

My worst nightmare just became reality. The CEO illustrated the most destructive potential consequence—he simply refused to believe that the problem exists. His reaction is understandable. I have no doubt that the physicians and nurses in his system are smart, sincere, and hard working. I'm also positive that this CEO makes his decisions with the best intentions for patient care. But even the best intentions are worthless if the processes behind them are flawed.

The evidence cited by the IOM is convincing enough to concern even the most skeptical clinician. There is no logical reason to believe that the problems raised by the report do not apply to every hospital and physician office in the country. Ask any doctor on a hospital staff if he has witnessed errors, and you'll get a muted, but positive, response.

No one wants to admit that his or her institution is potentially dangerous. But the leader who ignores this issue is guiding his institution into the medical equivalent of Three Mile Island. The time, commitment, and resources needed to reduce errors won't be available with leaders who deny that the problem exists. Further, no one will step forward to work on errors in an environment where errors are denied.

I'll Produce Data To Prove There Isn't a Problem

John Kenneth Galbraith once said, "Faced with the choice of changing one's mind and proving that there is no need to do so, almost everybody gets busy on the proof." In the next iteration of denial, many organizations and leaders are already commissioning studies to prove that there are no errors. I have no doubt that the studies will demonstrate that errors are overstated. We've collected data with this intent for decades. How many incident reports truly document serious errors? Most of them are devoted to minor mishaps without any consequence, and they're often used to prove how "safe" the hospital is to accreditation inspectors or state licensure agencies.

If the intent of data collection is to prove safety rather than to improve performance, the data gatherers will follow the intent. The real purpose of data collection for this effort is to improve processes. For that reason, policymakers should carefully consider public, mandatory reporting. Although public reporting enforces accountability, it also encourages manipulation. Couldn't we take an approach that rewards good performance and encourages others to follow? What if only the top five performers in a state were reported to the public, and reporting were voluntary for all others? Wouldn't that encourage more process improvement and voluntary explanations rather than simple data manipulation?

We'll Blame the Problem on a Scapegoat

It's far easier to assume that errant individuals make mistakes than to examine the environment that allowed the mistake. According to Martin Hatlie, president of Partnership for Patient Safety, Dana-Farber Cancer Institute brought disciplinary actions against 18 nurses in the famous Betsy Lehman chemotherapy overdose case. (1) It defies reality to believe that all of those nurses were incompetent. They needed a better process of care.

The scapegoat approach goes hand in hand with denial. A lawsuit followed the overdose incident described in the opening paragraph. One of the requirements for settlement was that no one would discuss the case. There was no hospital quality team assigned to build a process that would prevent further errors. It was simply easier to assume it was a human error, assign a few scapegoats, and ignore the issue for the future. The same flawed process continued.

Occasionally, physicians and nurses do deserve punishment (probably measured in single-digit percentages), but it should not be routine policy to attribute errors to an individual. Every individual works within a system or process, and it is almost always the process that needs correction, not the person. Simply identifying a scapegoat for every problem guarantees that more scapegoats will follow—they still must work with a flawed process.

Patients Are Too Scared To Get Medical Care

Nothing terrifies like the diagnosis of disease. Patients deserve the best emotional comfort we can offer them, but in an environment of compassionate honesty. As an oncologist, I didn't deny that the metastatic colon cancer patient would die, but I did discuss the treatments that I could offer to relieve the symptoms of the disease. The same approach is necessary for discussing errors.

Patients deserve to understand that errors do occur and, more important, that their physicians, nurses, and administrators are systematically working to reduce those errors. I fly more than 100,000 miles a year, and I'm aware of the risk I take every time I step on the airplane. But I'm also reassured that there are sincere, honest efforts being made to ensure my safety. The airline CEO doesn't deny that an accident can occur—the in-flight magazine discusses the airline's safety program. Airline safety data aren't collected to prove safety—they are analyzed to improve safety. Pilots aren't made into scapegoats (unless they truly have done something egregious).

If the press overplays the issue, terrified patients may avoid obtaining important health care. But that is not an excuse to ignore or deny. Just like an airline passenger, patients fully understand that errors do occur. The medical profession can alleviate that anxiety by addressing the problem and working to solve it openly and honestly.

Two weeks ago, my father called to tell me that he had crushing substernal chest pain with exertion. He had deliberately ignored the symptoms for months, and I was concerned that my stories and research about medical errors kept him from seeking care. But his real reason surprised me. Seven years ago, he had a respiratory arrest from an excess postoperative dose of morphine. He was more concerned that no progress in error reduction had been made since his last hospitalization—he believed (incorrectly) that the hospital represented more danger to him than his new illness.

The medical leaders who brought errors to the public attention are right to do so. As they move forward to solve those problems, they must recognize the possible reactions and unintended consequences of their work. They must strike a balance between keeping the problem confidential and sensationalizing it. The work of error reduction is too important to let these reactions prevail over improvement.

Lee N. Newcomer
Vivius, Inc.
Minneapolis, Minnesota

5775 Wayzata Blvd., Suite 300, Minneapolis, MN 55416; telephone: 952-938-4314; fax: 952-945-9108; e-mail:


1. Hatlie MJ. Scapegoating won't reduce medical errors. Med Econ. 2000 May 22;77:92, 97-100.

Reporting of Errors: How Much Should the Public Know?

The IOM's recent report on medical errors has generated a great deal of discussion both within the profession and the public at large. (1, 2) In a nutshell, the IOM reviewed a variety of studies done in the 1980s and 1990s and concluded that somewhere between 44,000 and 98,000 Americans die as a result of medical errors each year. Although the evidence has been available in a number of sources for years, the IOM's synthesis captured the public's attention and has generated much interest both in the Clinton administration and in Congress, where several bills have been introduced to address medical errors. (3)

The IOM report eschews treating medical errors as blameworthy events and instead emphasizes system-based approaches to prevention, citing examples in safety engineering of nuclear energy plants and aviation. Yet it also calls for both mandatory confidential and public reporting of errors to restore public confidence in the medical profession.

Public reporting of medical errors has not been endorsed by the health care industry for several reasons. First, there has been a professional reticence to disclose mistakes. This is not based on any professional ethics foundation, but rather seems to be a matter of tradition and training. (4) Silence is reinforced by the sanctions of the tort system. Revealing errors can lead to costly suits and emotionally draining litigation. Thus, the call for mandatory reporting conflicts directly with the traditions of silence on mistakes, a tradition that must be overcome. However, it is not clear that mandatory reporting is the best way to accomplish this.

To some extent, the policy on mandatory reporting of errors is further complicated by the research on medical injuries. The first major study of medical adverse events was completed by the California Medical Association in 1976 and revealed a medical injury rate of 4.65%. (5) The results were never publicized, most likely because the California Medical Association did not wish to publicize such high injury rates as it campaigned (successfully) for tort reform.

The second major study of medical injuries, known as the Medical Practice Study, was completed in New York for injury year 1984 and revealed an adverse event rate of 3.7%. (6) This was similar to California in 1976, but because the California study was flawed methodologically, a confirmatory study was completed for Colorado and Utah in 1992 and identified an adverse event rate of 2.9%. (7) These data, plus comparisons with similar studies in Australia, suggest a reasonable "ballpark" adverse event incidence rate of 3% to 4%. (8) Thus, we can say with confidence that the morbidity associated with medical adverse events is very significant.

Note that the research talks about adverse events—that is, injuries caused by medical practice, as opposed to the disease process, that either prolong hospitalization or lead to disability at the time of discharge. Some of these events might be caused by errors, but the primary analysis for both the New York and Colorado-Utah studies was on adverse events and the subset of adverse events due to negligence. Many errors occur without causing injuries; and errors are not equivalent to negligence.

Although we can be certain that the morbidity associated with adverse events is great, the reliability of the judgment in any one case is relatively low. We have not been able as researchers to perfect methods for boosting the reliability of implicit judgments of reviewers, (9) nor have we been able to rely simply on explicit screening criteria for identification of adverse events. (10) In addition, the degree of morbidity, and for that matter mortality, that is caused by adverse events is not entirely clear. Even less work has been done on the reliability of identification of errors. This is a challenge for the future in health services research. It is a challenge today, however, for any mandatory reporting scheme, especially one that would be public. As anyone who has reviewed charts to identify adverse events or anyone who has done peer review to decide whether a case should be reported to state authorities knows, decisions about medical injuries can be quite difficult. Implicit judgments create the potential for great variation in rates of self-report. This is true for adverse events and probably even more true for identification of errors. Given the professional reticence about disclosure and fear of discoverability and litigation, (11) the potential for underreporting is considerable.

The main appeal of public reporting is to restore confidence that something is being done to address patient safety. Such restoration is needed, and perhaps public reporting is the only way to accomplish it. But I'm afraid that the public reporting requirements will be frustrated by professional reticence and the difficulties of identifying adverse events. Differing interpretations of regulations and differing thresholds for reporting could lead to vastly inconsistent report rates that are not at all based on quality of care. This will probably undermine the reporting system's legitimacy in the eyes of the profession and the public.

More appealing may be a totally confidential system, also advocated by the IOM, that sweeps up all potential adverse events. This approach addresses some of the major concerns about mandatory reporting and provides hospitals with much more information that can be used to assess and design prevention strategies, presuming confidentiality increases the volume of reports. It also promotes awareness of error prevention in health care institutions. If used properly by states receiving the reports, it will be possible for epidemiologists to undertake large multi-institutional studies of risk factors for adverse events. This is precisely the kind of confidential reporting system that has served the aviation industry so well.

The public may have a right to know about medical error-related injuries. Indeed, there may be a duty on behalf of physicians to report these. But if the goal of our policy changes is to reduce medical injury rates, we should recognize that a confidential reporting system may have greater potential to effect real change. In this case, the public may be better served by lack of publicity.

Troyen A. Brennan
Brigham and Women's Physician Organization
Boston, Massachusetts

Brigham and Women's Physician Organization, Clinics 3, 75 Francis Street, Boston, MA 02115; telephone: 617-732-8961; fax: 617-975-0933; e-mail:


1. Kohn LT, Corrigan J, Donaldson MS. Institute of Medicine (U.S.) Committee on Quality of Health Care in America. To err is human: building a safer health system. Washington, DC: National Academy Pr; 1999.

2. Brennan TA. The Institute of Medicine report on medical errors-could it do harm? N Engl J Med. 2000;342:1123-5.

3. Bosk CE. Forgive and Remember. Chicago: Univ Chicago Pr;1979.

4. California Medical Association. Report on the medical insurance feasibility study. Mills DH, ed. San Francisco: California Med Assoc; 1977.

5. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370-6.

6. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38:261-71.

7. Thomas EJ, Studdert DM, Brennan TA, et al. Comparison of results from the Utah Colorado Medical Practice Study and the Quality in Australian Health Care Study. Int J Qual Health Care. 2000. [In press].

8. Hofer TP, Bernstein SJ, DeMonner S, Hayward RA. Discussion between reviewers does not improve reliability of peer review of hospital quality. Med Care. 2000 Feb;38:152-61.

9. Bates DW, O'Neil AC, Petersen LA, Lee TH, Brennan TA. Evaluation of screening criteria for adverse events in medical patients. Med Care. 1995 May;33:452-62.

10. Liang BA. Promoting patient safety through reducing medical error: a paradigm of cooperation between patients, physicians, and attorneys. Southern Illinois University Law Journal. 2000;24:541-63.

11. Petersen LA, Orav EJ, Teich JM, O'Neil AC, Brennan TA. Using a computerized sign-out program to improve continuity of inpatient care and prevent adverse events. Jt Comm J Qual Improv. 1998;24:77-87.

Public Reporting of Serious Medical Errors

Since the IOM's thoughtful report on medical errors, the entire effort to discuss the systems and culture within which errors occur has been almost drowned by a shrill controversy over whether serious medical errors should be publicly reported. The heart of this controversy seems relatively simple: On one side is a group of patient advocates, disciples of market processes, and believers in regulatory enforcement, who argue that providers and practitioners who make mistakes should be publicly identified so that 1) patients can avoid those who make errors, 2) market forces will pressure those who make errors to improve, and 3) the public can hold those who make errors accountable. On the other side are students of professional behavior and system improvement who believe that 1) public reporting will discourage recognizing errors that currently go unrecognized but that must be recognized to prevent repetition, and 2) improvement is most likely to occur in a fear-free environment where participants can acknowledge errors without penalty. These polar positions reflect apparently irreconcilable points of view, but underlying these positions are technical issues that, if understood, can narrow the policy differences.

Which Events Should Be Reported?

The first of these technical issues is what we mean by "error" and how we can identify the events that might be reported. The IOM defined error as "the failure of a planned action to be completed as planned or the use of a wrong plan to achieve an aim." This definition, however, can encompass almost everything that is not optimal in episodes of health care, from amputating the wrong limb to failing to order the right lab test. The IOM therefore recommended limiting public reporting to:

a mandatory reporting program for serious adverse events, implemented nationwide, linked to systems of accountability, and made available to the public . . . The types of events to be reported might include, for example, maternal deaths, deaths or serious injuries associated with the use of a new device, operation, or medication, deaths following elective surgery, or anesthetic deaths in Class 1 patients. (1)

The shift from "error" to "adverse event" is an effort to eliminate the need for potential reporters to make self-incriminating admissions as well as to balance the public's need to know against the need of practitioners and providers for confidentiality in inquiries about problems that caused no harm. Not only are many errors not followed by adverse events, but many adverse events are not the result of errors. Indeed, the adverse events that are almost certainly the result of errors are far less common than the events whose association with error can only be elucidated by detailed clinical information, such as adverse drug reactions and nosocomial infections. A major problem is that the IOM definition does not match well the narrower public and political definition of errors as gaffes that are uncommon and fairly obvious. The public does not quite know what to make of such pervasive problems as failure to immunize older patients and failure to give β-blockers to patients who have had heart attacks.

One approach (adopted for further study by the Federal Quality Interagency Council) is to limit reporting to serious "never" events—those that will almost always signal human and/or systems failure (e.g., anaphylactic reaction to a drug to which the patient was known to be allergic, wrong-side surgery). Current evidence suggests that such events comprise only a small (and perhaps unrepresentative) sample of events that might be preventable or those that may signal a hazardous provider. A further step, employed in some states, is to add reporting of unexpected adverse events (unscheduled return to the operating room, death after minor surgery) that are likely, but less certain, to be the result of errors. These strategies, however, are only starting points because they omit many, perhaps most, events that are associated with potentially identifiable errors.

Will Error Reporting Affect Other Public Health Reporting?

The second technical issue is the interaction between public reporting and other kinds of epidemiologic surveillance activities. A frequently proposed definition of reportable events, for example, is that they be adverse, major, and not result from the natural course of the disease being treated. Organizations such as the FDA, however, have found that only careful epidemiologic examination of all adverse experiences associated with use of a pharmaceutical in many settings can separate those that are attributable to the pharmaceutical from those that are only coincident or are caused by the disease the patient had before the pharmaceutical was given. If such judgments of causality are instead made by hospitals or physicians worried by public reporting and those judgments shape how adverse events are perceived and classified, then events actually associated with devices or drugs might be unreported even in confidential systems because providers and practitioners misclassified them as resulting from the natural course of illness. Thus, mandatory reporting can conflict with the technical epidemiologic need for reporting that does not prejudge the cause of the event.

A similar issue arises regarding the effect of public reporting on confidential internal reporting aimed at quality improvement. It is not clear to what extent we can have public reporting without chilling the atmosphere of trust needed for confidential reporting. The poster child for at-risk reporting systems is the National Nosocomial Infection Surveillance System, sponsored by the Centers for Disease Control and Prevention. The initial goal in participating hospitals is to drive the rate of identification of nosocomial infections upward by producing more adequate reporting. There is a very strong conviction among those who oversee this system that publication of provider-specific nosocomial infection rates would result in a precipitous decline in reported nosocomial infections but that 1) this decrease would not reflect a decline in the actual infection rate, and 2) the true rate of infection would soon start to rise because underreporting would undermine the hospital's infection control program.

Can Error Reporting Be Made Useful to the Public?

A third technical issue is our ability to make the reported information useful to the public. If reporting is limited to "never" events, the number of reported events will be relatively modest, and most advocates of public reporting would probably be seriously disappointed if reporting were so limited: what would or should the public make of these rates in choosing sources of care or in making the environment of care safer? If more frequent adverse events, such as nosocomial infections, pressure ulcers, and adverse drug reactions, are to be reported publicly, then making the data meaningful will require some kind of risk adjustment to account for the number of persons at risk and the severity of their risk or vulnerability. These adjustments are extremely difficult, are probably within the state of the art for only a few classes of events, and are not easy for the public to understand. To further complicate matters, there is little evidence that patients, purchasers, or practitioners have made much use of the most carefully done risk-adjusted outcomes that have been published—those for cardiac surgery. Because decisions on public reporting of errors rest on balancing the advantages of public information against the problems of reporting, the technical issues in making reporting useful are quite important. The Health Care Financing Administration, under the aegis of the Federal Quality Interagency Council, is studying public reporting of adverse events to better understand these issues.
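To make the risk-adjustment problem concrete, one common approach is indirect standardization: compare a hospital's observed event count with the count expected from reference rates in each risk stratum. The sketch below is purely illustrative—the strata, reference rates, and patient counts are invented, not drawn from any real reporting system.

```python
# Hypothetical illustration of indirect standardization for adverse-event
# rates. All numbers are invented for demonstration purposes.

REFERENCE_RATES = {   # reference event rate per patient, by risk stratum
    "low": 0.01,
    "medium": 0.04,
    "high": 0.12,
}

def expected_events(census):
    """Expected events given a hospital's patient mix {stratum: n_patients}."""
    return sum(REFERENCE_RATES[stratum] * n for stratum, n in census.items())

def standardized_ratio(observed, census):
    """Observed/expected (O/E) ratio; > 1 suggests worse-than-expected rates."""
    return observed / expected_events(census)

census = {"low": 500, "medium": 200, "high": 50}  # 750 patients
# expected = 0.01*500 + 0.04*200 + 0.12*50 = 5 + 8 + 6 = 19
print(round(standardized_ratio(observed=23, census=census), 2))  # → 1.21
```

Even this toy version hints at the difficulty: the ratio is only as good as the stratification and the reference rates, and an O/E ratio is not a quantity most members of the public can readily interpret when choosing a hospital.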

What Are the Legal Implications of Public Reporting?

A final technical issue is the relationship between public reporting and litigation. At first glance, public reporting is a malpractice lawyer's dream, providing preresearched opportunities for litigation. This dream assumes, however, that 1) the affected individuals do not already know what has happened, 2) individual cases would be identified, or 3) the reports regarding a provider would be usable in support of a suit brought for other reasons.

Almost all ethicists agree that the patient has an absolute right to know what has happened and whether what has happened is the result of an error. Thus, it would be improper to argue against public reporting on the grounds that it would give information to the patient. Further, widely accepted privacy rights would protect the identity of living individuals (included in the Federal Privacy Act), and any law requiring reporting should incorporate a decent respect for the families of the deceased that would extend them the same privacy protection (although the Federal Privacy Act does not). Thus, lawyers would probably not be able to use the reports as a way to identify potential litigants. The question of how useful reports might be in support of potential litigation is a technical issue in the legal field, some aspects of which could be addressed in any legislation to create reporting.

These technical issues are an invitation to health services research. Neither the IOM Committee nor the Federal Quality Interagency Committee had much real information available to them on the data actually reported to existing state reporting systems or on how those systems are working. We need to use the states as laboratories and observe their individual programs. Such studies, as well as studies of some of the technical issues described above, are now being designed at the Agency for Healthcare Research and Quality and at the Health Care Financing Administration. Unresolved, these issues are most likely to lead to paranoia among those who fear mandatory reporting and to paralysis of policy making. The human and resource costs of errors are too high for us to allow that outcome.

Stephen Jencks
Office of Clinical Standards
Health Care Financing Administration
Baltimore, Maryland

Office of Clinical Standard and Quality Improvement, Health Care Financing Administration, 7500 Security Boulevard, M/S S3-02-01, Baltimore, MD 21244-1850; telephone: 410-786-6508; fax: 410-786-8532; e-mail:


1. Kohn LT, Corrigan J, Donaldson MS, Institute of Medicine (U.S.) Committee on Quality of Health Care in America. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.

Informational Errors

On morning rounds, an attending physician reads the chart of a 5-year-old boy with known seizure disorder who had been admitted the previous night for a prolonged postictal state. According to the note of the admitting resident, as a newborn the child had been treated for meningitis. His first "grand mal" seizure, a known sequela of neonatal meningitis, happened when he was 2 years old, and he has since had roughly one seizure a year. A previous EEG had been normal, and in every other regard the child was doing well. There was no family history of epilepsy or any other ailment, and he was receiving no medications. Similar to previous episodes, on the evening of admission the boy had been playing with his brother in the yard when he fell to the ground and then had "tonic-clonic" movements that lasted for 5 minutes. He was admitted after remaining somnolent in the emergency department for 3 hours.

Information is the foundation on which all diagnoses and therapeutic decisions are made. Accurate information, in a readily available and easily interpreted format, is therefore crucial to quality medical care. Yet practice decisions are often based on erroneous or missing information. Busy physicians garner their information from many sources, including patients, family members, laboratory results, the medical literature, and conversations with colleagues. Typically, these sources converge in the central repository of the medical record. The medical record contains information elicited by diverse medical providers who then interpret, synthesize, and ultimately transcribe that information into the chart. It is a dynamic document in which recent entries are often influenced by those from the past (Figure 1). Each of these steps can introduce, amplify, or propagate erroneous information. In this case, wary that erroneous information may have been misguiding the care of this child, the attending physician dug deeper.

After calling the medical records department to retrieve the old chart, wading through hundreds of pages for an hour, and then speaking with the parents, the attending physician has radically revised her initial impression. First, the diagnosis of "meningitis" as a newborn had been highly presumptive, as a bloody lumbar puncture had precluded "ruling out" meningitis, even though the cerebrospinal fluid sample never grew an organism (yet the "fact" that the boy had had "meningitis" as an infant was repeated in the medical record thereafter). Second, the spells were syncopal, with a period of flaccidity followed by thrashing on awakening (misinterpreted as tonic-clonic movements). Third, a young uncle had died suddenly in his sleep for unknown reasons (a question regarding family history that had never before been asked). Finally, an astute night resident had suggested during a previous admission to "check EKG," but this had been misread by the team the next morning as "check EEG."

The care of this patient exemplifies seven types of information error, which can be divided into two broad categories: errors of data gathering-processing and errors of communication (Table 1). Although many of these errors involve higher cognitive faculties of reasoning and judgment, they all converge in the mundane yet crucial recording of data and information into the medical record and determine how this chart is subsequently used.

Once the EKG and subsequent tests documented that the boy had the long QT syndrome, the attending physician sought to correct the medical record so that all erroneous information and inaccurate interpretations would be expunged. Given how the medical charts in her hospital were organized, however, no satisfactory rectification could occur.

What can be done to mitigate the impact of these information errors? Unfortunately, they defy easy solutions. Efforts to diminish data gathering-processing errors might focus on better medical education and wider use of diagnostic or decision support software, but these measures will at best reduce and not eliminate elicitation, interpretive, and synthetic errors.

Communication errors require systemic changes, some of which may be achievable via the electronic medical record. Such technology has the potential to reduce the likelihood of particular errors. For example, point-and-click data entry may reduce transcription errors, or sophisticated search engines might facilitate retrieval of all pertinent information. The human propensity to err, however, will persist. Indeed, the ease with which the electronic medical record could enable providers to cut and paste histories or document erroneous findings may, quite ironically, perpetuate corroborative errors or increase retrieval errors.

To decrease errors of uncritical corroboration or undue certitude, electronic medical records need to be enhanced so that users can interrogate the veracity of the information. We envision the chart as comprising two different domains: a "deep" domain, archiving all previous entries without possibility of editing, and a "flat" domain, always kept current, that references the key evidence supporting diagnoses, current medications, or drug allergies. Such a system might note whenever repeated entries have merely been copied from previous ones or enable providers to instantly check whether a test result documented in the chart by a provider is consistent with what the laboratory reported. The system would provide a means to assess the level of certainty about vital bits of information. Users could work backward from information contained in the flat domain, retracing the diagnostic steps as archived in the deep domain—perhaps through a hyperlinked format, similar to Web pages, that would connect diagnoses on a problem list to the evidence that substantiated the diagnosis. When necessary, they could correct errors in the flat domain.
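The two-domain chart described above can be sketched in code: an append-only "deep" archive of every entry, and a correctable "flat" view whose facts link back, by entry identifier, to the archived evidence. This is a minimal illustrative sketch, not a real EMR schema; all class names, fields, and chart entries are hypothetical.

```python
# Illustrative sketch of a two-domain chart: "deep" = append-only archive,
# "flat" = current, correctable view with links back to the evidence.
from dataclasses import dataclass, field

@dataclass
class ChartEntry:
    entry_id: int
    text: str

@dataclass
class Chart:
    deep: list = field(default_factory=list)   # archive: appended to, never edited
    flat: dict = field(default_factory=dict)   # fact -> list of supporting entry ids

    def archive(self, text):
        """Record an entry permanently in the deep domain; return its id."""
        entry = ChartEntry(len(self.deep), text)
        self.deep.append(entry)
        return entry.entry_id

    def assert_fact(self, fact, evidence_ids):
        """State (or correct) a fact in the flat domain, citing archived evidence."""
        self.flat[fact] = list(evidence_ids)

    def trace(self, fact):
        """Work backward from a flat-domain fact to the archived evidence."""
        return [self.deep[i].text for i in self.flat.get(fact, [])]

chart = Chart()
e1 = chart.archive("CSF culture: no growth; lumbar puncture traumatic")
chart.assert_fact("meningitis: presumptive diagnosis only", [e1])
print(chart.trace("meningitis: presumptive diagnosis only"))
# → ['CSF culture: no growth; lumbar puncture traumatic']
```

The design choice the sketch highlights is that corrections happen only in the flat domain; the deep domain preserves the full history, so a provider who doubts a "fact" such as the boy's supposed meningitis can always retrace it to the original, unedited evidence.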

Whatever approach is taken to reduce informational errors, however, warrants rigorous scrutiny and clinical testing on par with how we assess therapeutic effectiveness through randomized clinical trials. Even if this line of research results in a tangible decrease in error, providers ultimately will still need to view the medical record and all of its information with healthy skepticism.

Dimitri A. Christakis, Chris Feudtner
Child Health Institute
University of Washington
Seattle, Washington

Dr. Christakis: Child Health Institute, University of Washington, 146 North Canal Street, Suite 300, Seattle, WA 98103-8652; telephone: 206-616-1202; fax: 206-543-5318; e-mail: