
Effective Clinical Practice


Use and Evaluation of Critical Pathways in Hospitals

Effective Clinical Practice, May/June 2002

Jonathan Darer, MD, MPH, Department of Medicine; Peter Pronovost, MD, PhD, Department of Anesthesiology/ Critical Care Medicine, Department of Surgery, Department of Health Policy and Management; Eric B. Bass, MD, MPH, Department of Medicine, Department of Health Policy and Management, The Johns Hopkins University School of Medicine, Bloomberg School of Public Health, Baltimore, Md

For author affiliations, current addresses, and contributions, see end of text.

Context. Although hospitals have devoted substantial resources to critical pathways, it is not known whether they routinely evaluate the clinical or economic effects of these pathways.

Objective. To determine how use and evaluation of critical pathways differ between academic and community hospitals.

Design. Cross-sectional survey.

Participants. Hospitals participating in consortia for improving quality of care associated with the Institute for Healthcare Improvement and the VHA, Inc. (formerly known as the Voluntary Hospitals of America, Inc.). Hospital administrators at 41 hospitals completed the survey (71% response rate), representing 13 academic medical centers, 13 community teaching hospitals, and 15 community hospitals.

Measures. Use of critical pathways and measurement of clinical and economic outcomes of pathways.

Results. The median number of adult critical pathways used by academic hospitals, community teaching hospitals, and community hospitals was 25, 18, and 3, respectively. The most common pathways were for community-acquired pneumonia, total hip or knee replacement, and stroke or transient ischemic attack. Among hospitals with pathways, the percentage dedicating staff to manage them was 78% for academic hospitals, 22% for community teaching hospitals, and 14% for community hospitals (P = 0.02). Evaluation practices varied widely among hospitals with pathways. Measures assessed included length of stay (85%), total hospital costs (74%), in-hospital mortality (62%), infectious complications (53%), readmission rates (47%), functional status (18%), and adverse drug events (15%).

Conclusion. The use of critical pathways varies substantially among hospitals participating in quality improvement consortia. Use was highest in academic centers and lowest in community hospitals. Many hospitals with pathways do not track important clinical outcomes as part of their evaluation practices.

Take Home Points

Critical pathways can be defined as problem-specific management plans that delineate key steps along an optimal timeline to achieve a set of described intermediate and ultimate patient goals. (1-6) While critical pathways may also be called care paths, integrated clinical pathways, care maps, and anticipated recovery pathways, (4, 7) all attempt to increase efficiency by organizing the care delivery process into individual analyzable steps. As a result of early reports of critical pathway success, administrators at many institutions and hospitals eagerly implemented pathways. (3, 8-10)

Widespread use of critical pathways is remarkable for two reasons: they are expensive to develop and maintain, and there is little evidence of their effectiveness. Given the investment required to develop, implement, and maintain pathways, hospitals risk wasting substantial resources if they do not evaluate the effects of their pathways. Moreover, given the impetus to reduce hospital costs, some authors have suggested that critical pathways may actually worsen clinical outcomes, although data to support this assertion are limited. (4) It seems important, then, for hospitals to monitor the effects of pathways on clinical as well as economic outcomes.

The specific aims of this project were to determine how the use and evaluation of critical pathways differ among various types of hospitals. We hypothesized that: 1) larger academic institutions would be more likely to invest in and evaluate critical pathways than smaller community hospitals and 2) many institutions with pathways would not track important clinical outcomes as part of their evaluation practices.


Methods

We conducted a cross-sectional electronic mail survey of hospitals between May 2000 and February 2001.

Hospital Sample

The hospitals surveyed had formed consortia for improving their quality of care: the Quality Management Network associated with the Institute for Healthcare Improvement, with members located throughout the United States, and the Quality Council associated with the VHA, Inc. (formerly known as the Voluntary Hospitals of America, Inc.), East Coast Region, with members located in Pennsylvania, Delaware, and New Jersey. We surveyed a variety of hospital types, including academic, community teaching, and community hospitals.

Survey Instrument

The survey instrument sought to evaluate the extent to which critical pathways were used and evaluated within hospitals. Pathway practices assessed included the number and types of adult and pediatric critical pathways used, the percentage of medical and surgical patients eligible for pathways, the outcomes routinely evaluated for patients on pathways, whether the hospital had a written plan for evaluation of pathways, the kinds of data used to evaluate pathways (e.g., medical records), and the number of staff dedicated to critical pathways (measured in full-time equivalents).

We also asked about hospital characteristics, including hospital size and whether the hospital was an academic, community teaching, or community hospital. We defined academic hospitals as members of the Council of Teaching Hospitals; community teaching hospitals as nonmembers of the Council of Teaching Hospitals that have residency programs; and community hospitals as hospitals without residency programs. We pilot-tested the survey instrument for understandability with leaders in critical pathways at five institutions.

Survey Administration

Surveys were sent by e-mail to the hospital administrator identified as the representative to the quality consortium. If no reply was received after the first e-mail, a second attempt was made. If no reply was received after the second e-mail, we called the hospital administrator to ascertain interest in the study. Several hospital administrators directed us to more appropriate personnel, who were then contacted in a similar manner.

Forty-one of the 58 hospitals (71%) completed the survey. The response rate was similar across hospital types: 72% for academic hospitals, 68% for community teaching hospitals, and 71% for community hospitals.


Statistical Analysis

Descriptive statistics included percentages for dichotomous variables, means and standard deviations for normally distributed continuous variables, and medians and interquartile ranges (IQRs) for nonnormally distributed continuous variables. The Fisher exact test was used for comparative analysis of differences in dichotomous variables between hospital types. Stata was used for statistical analysis (Intercooled Stata 6.0 for Windows 98; Stata Corporation, College Station, TX).
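The two summary devices described above, the Fisher exact test for comparing proportions between hospital types and the median with IQR for skewed counts, can be sketched with only the Python standard library. This is an illustrative sketch, not the authors' Stata code; the function names and the quartile convention are our own assumptions.

```python
from math import comb
from statistics import median


def hypergeom_p(a, b, c, d):
    """Probability of the 2x2 table [[a, b], [c, d]] under fixed margins."""
    return comb(a + b, a) * comb(c + d, c) / comb(a + b + c + d, a + c)


def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value: sum the probabilities of all tables
    with the same margins that are no more likely than the observed table."""
    p_obs = hypergeom_p(a, b, c, d)
    row1, col1 = a + b, a + c
    p_total = 0.0
    # x ranges over all feasible values of the upper-left cell
    for x in range(max(0, col1 - (c + d)), min(row1, col1) + 1):
        px = hypergeom_p(x, row1 - x, col1 - x, (c + d) - (col1 - x))
        if px <= p_obs + 1e-12:  # small tolerance for float comparison
            p_total += px
    return p_total


def median_iqr(values):
    """Median and interquartile range; quartiles here use the simple
    'exclude the median' convention (one of several in common use)."""
    s = sorted(values)
    n = len(s)
    lower = median(s[: n // 2])
    upper = median(s[(n + 1) // 2 :])
    return median(s), (lower, upper)
```

For example, Fisher's classic tea-tasting table [[3, 1], [1, 3]] gives a two-sided p-value of 34/70 (about 0.49), illustrating how small samples like per-stratum hospital counts rarely reach conventional significance.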


Results

Among the 41 hospitals responding to our survey, there were 13 academic hospitals, 13 community teaching hospitals, and 15 community hospitals. Figure 1 shows that academic hospitals were more likely to use pathways than other types of hospitals. The most commonly used pathways were for pneumonia or community-acquired pneumonia (61%), total knee or hip replacement (59%), and cerebrovascular accident or transient ischemic attack (46%). Other clinical settings in which pathways were commonly used are displayed in Figure 2.

Use of Pathways

Thirty-four of the hospitals surveyed used at least one clinical pathway. As shown in Table 1, the median number of adult critical pathways was greater for academic hospitals than for community teaching hospitals and community hospitals. Few pediatric critical pathways were used.

Pathway Evaluation

Hospitals with more pathways tended to report more rigorous evaluation methods. A written plan was reported to be a component of evaluation practices in 62% of academic hospitals, 50% of community teaching hospitals, and 22% of community hospitals (P > 0.2). Specific personnel reportedly were assigned to evaluate pathways in 78% of academic hospitals, 22% of community teaching hospitals, and 14% of community hospitals (P = 0.02).

Most hospitals reported that they evaluated at least one clinical outcome for patients on pathways, including 92% of academic hospitals, 92% of community teaching hospitals, and 56% of community hospitals (P = 0.09). In-hospital mortality was the most commonly measured outcome (Table 1). Other clinical outcomes that were frequently measured included infectious complications, procedure complications, and readmission rates. Few hospitals reported evaluating patient functional status or the occurrence of adverse drug events. Patient satisfaction was evaluated in only a minority of hospitals.

One hundred percent of academic hospitals, 92% of community teaching hospitals, and 67% of community hospitals (P = 0.06) measured one or more economic outcomes in patients on pathways. The reported economic evaluation of pathways commonly included monitoring length of stay and total hospital costs.


Discussion

Our survey of hospitals participating in two quality improvement networks demonstrated substantial variation in the use of critical pathways across different types of hospitals. Use was highest in academic centers and lowest in community hospitals. Although almost all hospitals tracked length of stay and total hospital costs, the extent to which hospitals monitored other clinical and economic outcomes for patients on clinical pathways varied substantially. This finding is notable, given the paucity of evidence about the effectiveness and cost-effectiveness of critical pathways in many clinical settings. (4, 7, 11, 12)

While studies that evaluate individual clinical pathways are common, surveys of clinical pathway utilization are not. We found only one other published survey of critical pathway use: Riley (13) surveyed care pathway use among hospitals in the United Kingdom in 1998. Her findings demonstrated that pathways were commonly used in hospitals. The specialties most likely to develop and implement pathways were orthopedics, followed by general surgery and medicine (including care of the elderly). About one third of hospitals found that pathways helped them control costs. In addition, the survey found that most pathways were similar in content, including incorporation of guidelines, measurement of clinical outcomes, and provision of patient education.

Among the hospitals surveyed, we found that the reported investment in pathways was greatest among the large academic institutions. Trends in our data suggest that academic hospitals were more thorough than the others in evaluating the economic and clinical outcomes of critical pathways. These results could be a function of patient volume, whereby hospitals with larger numbers of patients with specific conditions or requiring specific procedures may have found it more valuable to invest in developing pathways for these patients. Academic hospitals were by far the largest: the mean number of beds was 752 for academic hospitals, 457 for community teaching hospitals, and 242 for community hospitals. Academic institutions may also have more resources to develop critical pathway programs than do community institutions.

Of hospitals that reported using pathways, about half were able to estimate the percentage of patients eligible for pathways, although several hospitals failed to answer questions regarding measurement of eligibility. One of the driving influences of the pathway movement has been an attempt to reduce costs through improved efficiency. Developing, implementing, updating, and evaluating pathways can be costly in terms of staff time. The lack of data regarding patient eligibility will make it difficult for individual hospitals to estimate the relative importance of implementing specific pathways that ultimately may serve only a limited population of patients. In addition, lack of eligibility data will make it difficult to measure how effectively hospital staffs comply with the pathway—an important outcome measure for effective quality improvement.

The hospitals that responded to our survey mostly seemed to agree about the economic indicators that are used to evaluate critical pathways, as 85% measured hospital length of stay and 74% measured total hospital costs. These hospitals differed substantially in the clinical indicators used to evaluate critical pathways, however: 62% measured in-hospital mortality, 53% measured infectious complications, and 15% measured adverse drug events. Hospitals simply may not have enough resources or expertise to monitor the many clinical indicators that could be used. They also may face important obstacles to tracking different types of clinical outcomes for different pathways, such as finding validated and standardized instruments by which to measure outcomes. While some outcome measures for hospital quality reporting have been developed, such as those created by the Joint Commission on Accreditation of Healthcare Organizations or the 7th Scope of Work developed by the Centers for Medicare and Medicaid Services, no universally accepted standardized set of clinical quality indicators for hospital reporting exists. (14) The relative inattention to evaluation of some clinical outcomes, such as adverse drug events, may limit the ability of critical pathways to be used as a tool to improve safety and quality of care.

We recognize several limitations to this study. The first is the risk for selection bias: the specific group of hospitals chosen for this survey may not be representative of critical pathway use in the United States. Because the hospitals surveyed were part of quality consortia, they may be more likely to use and evaluate critical pathways, leading us to overestimate the use and evaluation of pathways in U.S. hospitals overall. If so, measurement of pathway outcomes may be even less common in the general hospital setting, as is suggested by the relatively low use of critical pathways by nonacademic institutions even within these quality consortia. We also had limited statistical power to detect differences between types of hospitals, but the trends were quite consistent. Another limitation is that hospitals may not have had accurate sources of data for the information we requested (such as the percentage of eligible patients). Nonetheless, their perceptions are important, and the reported lack of information is telling. In addition, we recognize that different institutions may interpret the definition of critical pathways broadly; thus, our survey did not attempt to evaluate the content of the various instruments. While this survey focused on the evaluation of critical pathways, the pathways themselves may differ widely among hospitals.

Overall, our results suggest that many hospitals fail to gather data regarding patient eligibility for critical pathways, and while most hospitals gather data on one or more clinical and economic outcomes, substantial differences exist in how different types of hospitals evaluate the usefulness of these pathways. Further research is needed to develop valid and efficient methods for evaluating the impact of these pathways in different types of hospitals. Such research would help hospitals determine whether the resources put into critical pathways are truly worth the investment.

Take Home Points
  • Many hospitals have adopted the use of critical pathways without strong evidence that they are clinically and economically effective.
  • Through a survey of hospitals within two quality consortia, we sought to describe the current use of critical pathways and the extent to which hospitals evaluate the impact of these pathways.
  • Most hospitals, and especially academic hospitals, use numerous pathways.
  • While most hospitals evaluate at least some clinical and economic outcomes for patients on pathways, substantial differences exist in how various types of hospitals evaluate the usefulness of these pathways.
  • Further research is needed to develop valid and efficient methods for evaluating the impact of critical pathways in different types of hospitals.


References

1. Isozaki LF, Fahndrick J. Clinical pathways-a perioperative application. AORN J. 1998;67:376, 379-86, 389-92.

2. Wieczorek P. Developing critical pathways for the operating room. AORN J. 1995;62:925-9.

3. Strassner L. Critical pathways: the next generation of outcomes tracking. Orthop Nurs. 1997;16:56-61.

4. Pearson SD, Goulart-Fisher D, Lee TH. Critical pathways as a strategy for improving care: problems and potential. Ann Intern Med. 1995;123:941-8.

5. Ellrodt G, Cook DJ, Lee J, Cho M, Hunt D, Weingarten S. Evidence-based disease management. JAMA. 1997;278:1687-92.

6. Falconer JA, Roth EJ, Sutin JA, Strasser DC, Chang RW. The critical path method in stroke rehabilitation: lessons from an experiment in cost containment and outcome improvement. QRB Qual Rev Bull. 1993;19:8-16.

7. Campbell H, Hotchkiss R, Bradshaw N, Porteous M. Integrated care pathways. BMJ. 1998;316:133-7.

8. Marrie TJ, Lau CY, Wheeler SL, Wong CJ, Vandervoort MK, Feagan BG. A controlled trial of a critical pathway for treatment of community-acquired pneumonia. CAPITAL Study Investigators. Community-Acquired Pneumonia Intervention Trial Assessing Levofloxacin. JAMA. 2000;283:749-55.

9. Stanley AC, Barry M, Scott TE, LaMorte WW, Woodson J, Menzoian JO. Impact of a critical pathway on postoperative length of stay and outcomes after infrainguinal bypass. J Vasc Surg. 1998;27:1056-64; discussion 1064-5.

10. Levine PA. Otolaryngology. J Am Coll Surg. 1998;186:197-202.

11. Bohmer R. Critical pathways at Massachusetts General Hospital. J Vasc Surg. 1998;28:373-7.

12. Pearson SD, Kleefield SF, Soukop JR, Cook EF, Lee TH. Critical pathways intervention to reduce length of hospital stay. Am J Med. 2001;110:175-80.

13. Riley K. Care pathways. Paving the way. Health Serv J. 1998;108:30-1.

14. Epstein AM. Rolling down the runway: the challenges ahead for quality report cards. JAMA. 1998;279:1691-6.

Grant Support

Dr. Darer is supported by a grant from the Health Resources and Services Administration.


Correspondence

Jonathan Darer, MD, MPH, Division of General Internal Medicine, 1830 East Monument Street, 8th floor, Baltimore, MD 21205; telephone 410-662-1358; e-mail: