Effective Clinical Practice, January/February 2000
Timothy G. Ferris
For author affiliations, current addresses, and contributions, see end of text.
For 1 hour every week my colleagues and I sit down together over lunch to discuss medical management. Attendance is mandatory. We discuss inpatients as well as profiles and performance measures. We also talk about anything that affects our practice, such as a new guideline, changes in patient pharmacy benefits, or a particularly responsive (or unresponsive) specialist. A recent meeting included my attempt to convince my colleagues to participate in a quality improvement program I had developed with the large provider organization that owns our practice.
My group's responses were similar to those of physicians in more than 50 groups in our region: 1) worry about the effect of a quality improvement program on their relationships with patients; 2) exasperation at the prospect of additional work; 3) suspicion about the sources of funding; and 4) a sense that these programs, if effective, just may add value to their practice of medicine.
From this perspective, the article by Jans and colleagues (1) in this issue of ECP gives us much to consider. Variations in the care of patients with asthma and chronic obstructive pulmonary disease have been well documented, and most managed care organizations in the United States have an explicit program that attempts to improve the care of these patients. Despite such widespread use, these programs rarely produce published results. When results are published, they are usually derived from before-after studies that may not be sufficient to determine the effect of an intervention. (2)
The quality improvement intervention reported by Jans and coworkers included a process of identifying specific problems by using questionnaires, putting in place a system to measure processes of care, and providing interactive continuing medical education that amounted to approximately 1 hour per month. This multidimensional approach has been shown in a meta-analysis to be significantly more effective at changing the processes of care than one-dimensional interventions. (3) Jans and colleagues' report is unusual for the inclusion of prospective controls, and the authors were therefore able to document admirable improvement in the number of asthma patients receiving regularly scheduled visits, peak flow measurement, and advice to quit smoking. Smaller improvements were seen in the use of anti-inflammatory agents and checks on medication compliance and inhalation technique. The paper raises several issues that are central to understanding research in quality improvement.
Three Questions for Quality Researchers
Do we need to know patient outcomes to be convinced that quality was improved? Patient outcomes can be both expensive and time-consuming to obtain. Although relying on process measures is often sufficient for quality managers, it may be useful to distinguish between the process measures used to monitor a quality improvement program and those that are necessary to determine if a particular type of quality improvement intervention actually improved patient outcomes. In other words, quality improvement research may require a higher level of proof than that necessary for management decisions. Even if we agree that the process measures used by Jans and coworkers should have improved patient outcomes, we are still left wondering about emergency department visits, hospitalizations, quality of life, and satisfaction with care. The process measure that showed the smallest change, use of anti-inflammatory agents, has the strongest correlation with improved outcomes. It is possible that the extraordinary effort necessary to improve the processes of care did not improve patient outcomes.
How carefully should we measure the cost of quality improvement? The cost of clinical quality improvement to busy clinicians (pushed by productivity targets and rapid advances in technology) should not be overlooked. Jans and colleagues document the number of hours physicians spent in seminars (11 hours per year for the average physician) but do not mention the time spent recruiting physicians and training them in the new documentation, the time of specialist consultants, or the authors' own time in coordinating and implementing the intervention. Any estimation of the cost-effectiveness of quality improvement requires not only knowing about patient outcomes but also having a detailed understanding of the costs of the intervention.
Are there side effects of quality improvement? Quality improvement is often approached as if the delivery systems in which programs operate have unlimited resources and each new quality improvement program is independent of the others. The responses of my colleagues suggest that neither is the case. Did the physicians in Jans and coworkers' study spend less time keeping up with literature on deep venous thrombosis management, cholesterol-lowering agents, or Helicobacter pylori testing? Did the nature of the physician-patient relationship change with the intervention? Did the intervention in the study decrease the physicians' sense of autonomy and their willingness to involve patients in diagnostic and treatment decisions? (4) Did the physicians spend less time with other patients to compensate for increased time with patients with respiratory conditions? It may be worth finding out if there are side effects to a focused effort to improve one aspect of health care.
Possible Side Effects of Quality Improvement
Figures 1 and 2 help illustrate the range of possible effects from quality improvement efforts. Figure 1 considers the effects on specific clinical performance measures. In the idealized model, each intervention produces improvements in the targeted performance measure (mammography rates, β-blocker use, and diabetic eye examinations) and causes no decrement in performance elsewhere (e.g., the effort to improve use of β-blockers does not distract from mammography use). The result is a net improvement. In the zero-sum model, each intervention produces improvements in the targeted performance measure but results in a decrement in performance elsewhere. Thus, despite efforts for improvement, overall quality within the system remains the same.
Figure 2 considers the possible effects of quality improvement efforts on overall performance. Overall performance includes aspects of medical care that are not easily captured in discrete clinical performance measures (e.g., how well we prioritize multiple chronic disease processes, whether we acknowledge patients' symptoms and suffering, how much effort we devote to caring). In the idealized model, each intervention not only improves the targeted performance measure but also spills over to help improve performance globally. In the zero-sum model, each intervention improves the targeted performance measure but at the cost of distracting providers from other aspects of care. Because it seems likely that quality improvement interventions produce some side effects, it is important to begin thinking about how we can look for them.
How Can We Look for Side Effects?
To measure side effects in terms of specific performance, we need to collect information on clinical indicators that are unrelated to the specific quality improvement initiative under investigation. For example, if we are evaluating a mammography intervention, then we might measure counseling for smoking cessation. It may be useful to ask, "What aspects of clinical care are most likely to be dropped when we add something else?" Identifying clinical care measures that are sensitive to changes in the system of care or physician practice patterns is an essential first step to measuring side effects. On a practical level, a logical place to begin is to evaluate quality information already collected on physicians (e.g., HEDIS measures (5)).
Measuring side effects of quality improvement interventions in terms of overall performance could be difficult. Measurement of global health status in an entire patient population would be ideal but may be insensitive to smaller effects. Patient satisfaction may serve as a reasonable alternative, as might physician satisfaction. Many provider organizations and insurers already collect information on patient and physician satisfaction, making satisfaction an easy outcome to track. Certain changes in the operational aspects of the delivery of care, such as physician productivity, are easy to quantify but may be less sensitive to change than others, such as diagnosis-specific visit duration. Finally, objective measures of physician responses to quality improvement would help identify more or less acceptable methods for quality improvement.
Advocates of quality improvement might argue that generalized approaches circumvent the problem of side effects. Continuous quality improvement and total quality management hold that quality can be improved throughout an organization by changing the culture of work. Yet despite this global approach, attempts to introduce continuous quality improvement may still have unforeseen side effects. The only way to know is to look for them.
The practice of medicine is changing, and the pace of changes is accelerating. Efforts to improve the care of patients are an essential component of these changes. We should take care to ensure that in our enthusiasm to embrace improvement we remain aware of the costs. Research on quality improvement will keep us informed and make sure that "improvements" really do improve care for all. This research will require prospective controls, information on patient outcomes and program costs, and measurement of clinical and nonclinical performance in areas outside the scope of the specific quality improvement intervention. Only by including these elements in quality improvement research will we develop a complete understanding of the best ways to improve clinical care of patients.
1. Jans MP, Schellevis FG, van Hensbergen W, van Eijk JThM. Improving general practice care of patients with asthma or chronic obstructive pulmonary disease: evaluation of a quality system. Eff Clin Pract. 2000;3:16-24.
2. Primer on before-after studies. Eff Clin Pract. 1999;2:241-3.
3. Solomon DH, Hashimoto H, Daltroy L, Liang MH. Techniques to improve physicians' use of diagnostic tests: a new conceptual framework. JAMA. 1998;280:2020-7.
4. Kaplan SH, Greenfield S, Gandek B, Rogers WH, Ware JE Jr. Characteristics of physicians with participatory decision-making styles. Ann Intern Med. 1996;124:497-504.
5. Primer on HEDIS. Eff Clin Pract. 1999;2:287-8.
Timothy G. Ferris, MD, MPH, Massachusetts General Hospital, Institute for Health Policy, 50 Staniford Street, 9th floor, Boston, MA 02114; telephone: 617-724-4648; fax: 617-724-4738; e-mail: firstname.lastname@example.org.