
Effective Clinical Practice

When Should This Patient Be Seen Again?

Effective Clinical Practice, January/February 1999

Michael K. Chapko, Elliott S. Fisher, H. Gilbert Welch

For author affiliations, current addresses, and contributions, see end of text.

Context. The decision about when to ask a patient to return to the clinic for his or her next visit is common to all outpatient encounters in longitudinal care. It directly affects provider workloads and has a potentially great impact on health care costs and outcomes.

General Question. What are the effects of lengthening or shortening revisit intervals (the recommended period between one visit and the next) on health status and health care costs?

Specific Research Challenge. How can we change the average revisit interval while preserving provider input for individual patients?

Proposed Approach. Patients could be randomly assigned to either short or long revisit intervals. So that provider input would be preserved, providers would select from among three discrete categories of revisit intervals: near-term (1 to 2 months); intermediate-term (2 to 4 months); and long-term (4 to 8 months). On the basis of randomization, patients would receive appointments at either the lower or the upper bound of the category selected.

Potential Difficulties. Because blinding would be almost impossible, providers might "game" randomization at subsequent visits.

Alternate Approaches. A comparison of shorter and longer revisit intervals might be achieved with less direct approaches. In one such approach, patients would be randomly assigned to 1) having an appointment made immediately after the initial visit or 2) calling back for an appointment according to the interval recommended by the provider. In another approach, patient panel size would be held constant and providers would be randomly assigned to either an increased or a reduced number of clinic sessions.


It is difficult to imagine a medical decision that is more common, has a greater impact on subsequent utilization of resources, and has less of an empirical foundation than the decision made at the end of every outpatient encounter: When should this patient be seen again? It has always been assumed that seeing patients sooner (and, thus, more often) improves patient outcomes, but this assumption has not been tested.

In this paper, we consider how such testing might be done. After briefly reviewing current data on revisit intervals, we delineate what we see as the major challenge for an experimental study: to determine how to modify average revisit intervals while preserving provider input for individual patients. We then describe one approach to this challenge, discuss its weaknesses, and introduce two alternatives to it.

Background

Determination of the revisit interval (for example, deciding to tell a patient to "return to the clinic in 3 months") is a remarkably unexplored area of medicine. Much of the literature is in abstract form (1-4) or comes from the dental profession (5-8). Although complex mathematical models have been developed to help standardize revisit intervals for two particular clinical settings—patients with bladder cancer (9) and patients receiving warfarin (10)—we know of no efforts to standardize revisit intervals for patients with more common conditions. And there are good reasons to believe that achieving such standardization will be difficult. Despite some evidence that providers adjust revisit intervals on the basis of elevations in blood pressure (11, 12), one study found no consensus among providers who were asked about the revisit intervals that they would recommend for patients with various blood pressures (13). The lack of consensus was even more evident when providers were given clinical vignettes of patients with common ambulatory care conditions (14).

Despite this lack of consensus, an unspoken assumption exists about the relation between revisit intervals and health outcomes: Most would posit that more frequent contact (that is, shorter revisit intervals) is associated with better outcomes. However, the few data that exist bring this assumption into question. Dittus and Tierney (3) found that revisit intervals varied as much as threefold at their institution after adjustment for comorbid conditions and severity of disease. Stern and colleagues (4), at the same institution, detected no differences in outcomes as a result of this variation. Both of these studies, however, were observational and retrospective.

Wasson and associates (15) indirectly studied the effect of varying revisit intervals in a randomized trial of telephone care. Patients receiving telephone care had longer revisit intervals and lower health care expenditures than patients who were not receiving telephone care, yet they reported similar health status and were admitted to the hospital less often. However, these patients were contacted regularly by telephone, and it is impossible to determine which aspect of the intervention—additional telephone contacts or longer revisit intervals—was responsible for the improved outcome. A recent Veterans Affairs Cooperative Study (16) suggested that shorter revisit intervals might cause harm. In an effort to reduce hospitalizations, this nine-site, randomized trial enrolled hospitalized U.S. veterans with diabetes, chronic obstructive pulmonary disease, and congestive heart failure and assigned them to receive a primary care intervention (standardized discharge planning and follow-up) or usual care. After discharge, patients in the intervention group were seen twice as soon and had 70% more general medical visits than controls. Yet the shorter revisit intervals did not improve outcomes as expected—in fact, the reverse occurred. The rate of readmission, the study's primary outcome, was significantly higher for recipients of the intervention. This finding was remarkably consistent across sites and diagnoses. This study provides a powerful reason to explicitly study the effects of short and long revisit intervals.

Research Outline

To set the stage, Table 1 shows the basic attributes of the research being contemplated. We begin with an explicit statement of our primary question: What are the effects of lengthening or shortening revisit intervals? The effects can, of course, be measured in many ways. Two broad outcomes are particularly important: health status (which can range from self-assessed measures to hospital admissions or emergency room use) and health care costs (which reflect clinic visits, diagnostic testing, medication use, and utilization of resources). Another set of outcomes that may be of interest is measures of tampering, that is, the degree to which physiologic measures, such as serum glucose, vary over time. To make inferences about these outcomes, the investigation would need to continue for a considerable period, perhaps 2 or 3 years.

It is also important to define the patients and the setting in which the research will be done: adults with chronic medical conditions who are receiving longitudinal care from primary care providers. Clearly, questions about the effects of revisit intervals are relevant to other patient populations, from children receiving well-child care from pediatricians (with revisit intervals measured in years) to frail, elderly persons receiving home health care (with revisit intervals measured in days). Specific types of care, such as routine dental care, eye examinations, and diabetic foot care, are also relevant. We have chosen the longitudinal care of adults simply because it represents the bulk of outpatient care.

Design Challenge

Although many things should be considered in the design of an experimental study of revisit intervals, we focus here on determining the mechanism that should be used to modify the revisit interval. In a crude "first-cut" study, the average revisit interval of a provider's patient panel would be measured. Half of the patients would be asked to return sooner than the average, and the other half would be asked to return later than the average. Thus, a provider who sees patients every 4 months, on average, could randomly assign half of his or her patients to be seen at 3 months and the other half to be seen at 5 months.

The weakness of this approach is obvious: The 4-month average obscures tremendous heterogeneity. It represents some patients who are asked to return in a year and some who are asked to return in a week. A study that assigns a 3-month revisit interval to a patient who would otherwise be asked to return in a year or assigns a 5-month interval to a patient who would otherwise be asked to return in a week denies that the provider has any basis for his or her recommendation. A study to investigate revisit intervals must somehow make use of provider input for individual patients while still modifying averages.

Proposed Approach: Recommendation-Adjustment Design

One way to meet this challenge is to solicit provider input by using discrete categories of revisit intervals, each of which reflects a range of intervals (e.g., 1 to 2 months). This approach makes provider input less quantitative (e.g., "return in 6 weeks") and more qualitative (within a prescribed range). In a sense, it makes the recommendation less exact. In practice, revisit intervals have always been inexact: Providers do not always have appointments available, and patients do not always return when they are asked to do so. There is good reason to believe, given both the reality of practice and the absence of an empirical basis for any specific revisit interval, that providers will be comfortable with this level of precision. The creation of discrete categories of revisit intervals provides an opportunity to adjust revisit intervals while still allowing provider input.

Figure 1 depicts this recommendation-adjustment design. First, patients for whom immediate follow-up (in less than 1 month) is recommended are excluded. These are the patients about whom the provider is most concerned (perhaps because of a new diagnosis, medication changes, or a sudden decline in health status) and for whom a specific revisit interval would probably be most strongly advocated. For the remaining patients, the provider selects one of three categories of revisit intervals: near-term (1 to 2 months), intermediate-term (2 to 4 months), and long-term (4 to 8 months). (Given a finite period of time to complete the research, patients for whom a revisit interval longer than 8 months is recommended will also be excluded.) Within the provider's panel, patients would then be randomly assigned to either a short-interval arm or a long-interval arm. Patients in the short-interval arm would have revisit intervals at the lower extremes of the ranges (1, 2, and 4 months); patients in the long-interval arm would have revisit intervals at the upper extremes of the ranges (2, 4, and 8 months).

The net effect of this design is that the average revisit interval will be twice as long in the long-interval arm as in the short-interval arm. Because randomization is done after the provider determines the revisit interval range, the distribution of interval categories is similar for patients in the two arms. As Table 2 shows, regardless of the specific distribution, the net effect is a doubling of the average across the two arms. Because randomization is done within the provider's panel, each provider will care for similar numbers of long-interval and short-interval patients. Thus, this design has little net effect on provider workload and, at the same time, controls for provider characteristics.
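The arithmetic behind this doubling can be checked with a small simulation. The sketch below is illustrative only, not part of the proposed protocol; the category mix in the hypothetical panel is invented, and the point is that the ratio of arm averages is insensitive to it:

```python
import random

# Interval bounds in months for each category: lower bound goes to the
# short-interval arm, upper bound to the long-interval arm.
BOUNDS = {"near": (1, 2), "intermediate": (2, 4), "long": (4, 8)}

def randomize(category, rng):
    """Assign a patient to the lower or upper bound of the chosen category."""
    lower, upper = BOUNDS[category]
    return ("short", lower) if rng.random() < 0.5 else ("long", upper)

rng = random.Random(42)
# Hypothetical panel: an arbitrary mix of provider-chosen categories.
panel = ["near"] * 300 + ["intermediate"] * 500 + ["long"] * 200

arms = {"short": [], "long": []}
for category in panel:
    arm, interval = randomize(category, rng)
    arms[arm].append(interval)

mean_short = sum(arms["short"]) / len(arms["short"])
mean_long = sum(arms["long"]) / len(arms["long"])
print(round(mean_long / mean_short, 2))  # close to 2.0, whatever the mix
```

Because each upper bound is exactly twice its lower bound, the ratio of arm averages approaches 2 for any distribution of provider choices; only sampling noise in the category split keeps it from being exactly 2.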

Potential Difficulties

One problem with this approach is that although providers would be unaware of a patient's randomization at the initial visit, they might know it at subsequent visits. At each subsequent visit, providers would again choose among the three interval categories, and the original randomization assignment would determine which extreme of the range is used. In addition, providers would have the option of "immediate follow-up" and would be able to specify an exact revisit interval for patients who they think need to be seen again in less than a month.

The inability to blind providers to randomization assignments at subsequent visits creates the potential for "gaming." This is best understood through an example. Consider a provider who, after seeing a patient, is "committed" to a 2-month revisit interval. If the patient is in the short-interval arm of the study, the provider would choose the "intermediate-term" category to ensure a 2-month follow-up interval. If the patient is in the long-interval arm, the provider would choose the "near-term" category to ensure a 2-month follow-up interval. Although the intake visit cannot be gamed (because the interval category is chosen before randomization), gaming can become a problem at subsequent visits. If gaming were to become widespread, the net effect would be to erase the difference in visit intervals between the two arms of the study.
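The gaming incentive can be made concrete with a toy function (hypothetical, for illustration only) that finds the category a "committed" provider would pick, given knowledge of the patient's arm:

```python
# Interval bounds in months; index 0 is the bound used for the
# short-interval arm, index 1 the bound used for the long-interval arm.
BOUNDS = {"near": (1, 2), "intermediate": (2, 4), "long": (4, 8)}

def gamed_category(target_months, arm):
    """Return the category whose assigned bound matches the provider's
    'committed' interval, if one exists (illustrative only)."""
    index = 0 if arm == "short" else 1
    for category, bounds in BOUNDS.items():
        if bounds[index] == target_months:
            return category
    return None  # no category yields the target; this interval cannot be gamed

print(gamed_category(2, "short"))  # intermediate
print(gamed_category(2, "long"))   # near
```

A provider committed to 2 months can reach it from either arm by choosing a different category, which is exactly the behavior that would erase the between-arm difference.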

Alternate Approaches

One way to avoid the problem of gaming is to allow the provider to specify the revisit interval and to modify the patient's role in the decision-making process. After the provider's recommendation is made, patients could be randomly assigned to one of two strategies: appointment (having an appointment made immediately after the visit) or call back (being asked to call back for an appointment after the recommended interval). This patient-initiative design is illustrated in Figure 2. The comparison might be characterized as a provider-dominant model ("you have an appointment on May 23 at 10 a.m.") versus a patient-centered model ("please call in 6 months for an appointment"). The expectation is that patients assigned to the call-back arm will tend to have longer revisit intervals (e.g., they may think, "I'm doing fine, I'll make an appointment later"), although this assumption has yet to be tested. The patient-initiative design is one way to test the so-called open-access model of outpatient care.

Another approach is to change the unit of randomization from the patient to the provider and to randomly assign some providers to a larger patient panel and others to a smaller panel. Given a fixed number of clinical sessions, providers with larger panels would tend to lengthen revisit intervals and those with smaller panels would tend to shorten revisit intervals. However, assigning a large group of new patients to some providers complicates the intervention (are observed effects about revisit intervals or new patients?), whereas removing a group of patients from other providers is disruptive to the patients who are removed. In short, an abrupt change in panel size is too blunt an intervention to be practical for research.

However, the panel size could be held constant and the intervention could be a modification of the number of half-day sessions on which providers can see patients. Providers with fewer sessions would tend to have longer revisit intervals, and those with more sessions would tend to have shorter intervals. This capacity-constraint design is illustrated in Figure 3. To experimentally modify the number of sessions in both directions, eligible providers would have to have an intermediate number of weekly sessions (between three and eight) at baseline. Each provider would be randomly assigned to have more or fewer sessions; the precise number of sessions would depend both on the randomization and on the number of sessions that the provider has at baseline. If the provider has six sessions at baseline, he or she would be randomly assigned to eight sessions (increased capacity and shorter revisit intervals) or four sessions (reduced capacity and longer revisit intervals). If the provider has four sessions at baseline, he or she would be randomly assigned to have either three or six sessions. Regardless of the number of sessions at baseline, the net effect would be to double the capacity of some providers relative to that of others.
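The session-assignment rule can be sketched as follows. This is a hypothetical generalization of the two worked examples; the text itself specifies only the baseline-6 and baseline-4 cases, and the common thread is that the increased arm always has twice the sessions of the reduced arm:

```python
import random

def session_arms(baseline):
    """Derive the reduced and increased session counts for a provider.

    Hypothetical rule generalizing the text's examples: the increased arm
    has twice the sessions of the reduced arm, and both flank the baseline.
    """
    reduced = round(2 * baseline / 3)
    return reduced, 2 * reduced

def assign_capacity(baseline, rng):
    """Randomize a provider to the reduced- or increased-capacity arm."""
    reduced, increased = session_arms(baseline)
    return increased if rng.random() < 0.5 else reduced

print(session_arms(6))  # (4, 8), as in the text
print(session_arms(4))  # (3, 6), as in the text
```

Whatever the baseline, the two arms differ by a factor of two in weekly capacity, mirroring the doubling of average revisit intervals sought in the other designs.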

Synthesis

The three approaches discussed here are summarized in Table 3. The recommendation-adjustment design has the advantage of producing a predictable effect in an individual patient. On the other hand, the two alternative designs may produce highly variable effects in individual patients. The recommendation-adjustment and patient-initiative designs both minimize the effect of the provider because patients are the unit of randomization and individual providers care for patients in both study arms. Because providers are the unit of randomization in the capacity-constraint design, many providers will be needed to control for provider effects.

One disadvantage of the recommendation-adjustment design is that it necessitates active intervention at each subsequent visit (most likely, a research assistant or computer program will move each patient to the appropriate extreme of the assigned revisit interval range). The need for subsequent intervention is greatly reduced in the patient-initiative design and is nonexistent in the capacity-constraint design. The capacity-constraint design, however, raises special issues. One might posit that once a provider's capacity is altered, a net change in visit intervals is reasonably assured, but providers may react unpredictably. They might react to reduced capacity by spending less time with patients (having shorter visits and seeing more patients in each clinic rather than changing revisit intervals) or by referring patients to consultants more often. Furthermore, substantial inducements might be required (perhaps with regard to what providers are asked to do in their nonclinic time) before providers are willing to undergo randomization in this manner.

It is important to emphasize that many other questions will need to be addressed in a study of revisit intervals. For example, should three arms (short-interval, moderate-interval, and long-interval) be included in the recommendation-adjustment design so that we can learn more about dose-response? Should comparisons be made within or between providers? What is the right length of follow-up? Would the results be more valid and easier to interpret if the study were restricted to patients with a single disease state? Or is there as much heterogeneity among patients with type 2 diabetes as among patients with a mixture of diabetes, heart disease, lung disease, and arthritis? What should the consent process be?

Although these questions and many others remain, we hope that we have clearly explained the importance of studying revisit intervals. We have chosen to focus on ways in which an intervention might be implemented. Clearly, other approaches are possible. Our goal was to start a conversation about the kind of work that might help inform clinicians about when to ask patients to see them again.

Note

The authors are contemplating a study of revisit intervals in both the Department of Veterans Affairs and in community practices, and they welcome feedback from readers. Please address ideas or comments to Dr. Chapko (chapko@u.washington.edu) or Dr. Fisher (elliott.fisher@dartmouth.edu).

Take Home Points
  • The question of how often providers should see outpatients is remarkably unexplored.
  • The period after which a patient is requested to return to the clinic for his or her next visit is called a revisit interval.
  • An interventional study of revisit intervals must preserve provider input for individual patients.
  • One way to do this is to have study providers choose among interval categories instead of stipulating precise intervals.
  • Patients could then be randomly assigned to the lower or upper bound of the chosen category.

 

References

1. Gordon GH, Webb DW. Effect of psychosocial factors on medical outpatient return visit intervals. Clin Res 1984;32:31A.

2. Gordon GH, Tyler J, Friis R. Factors influencing return visit intervals in a medical outpatient clinic. Clin Res 1984;32:32A.

3. Dittus RS, Tierney WM. Scheduling follow-up office visits: physician variability. Clin Res 1987;35:738A.

4. Stern MF, Fitzgerald JF, Dittus RS, Tierney WM, Overhage JM. Office visits and outcomes of care: does frequency matter? Clin Res 1991;39:610A.

5. Alger DW. Appointment frequency versus treatment time. Am J Orthod Dentofacial Orthop 1988;94:436-9.

6. Boggs DG, Schork MA. Determination of optimal time lapse for recall of patients in an incremental dental care program. J Am Dent Assoc 1975;90:644-53.

7. Wang NJ, Riordan PJ. Recall intervals, dental hygienists and quality in child dental care. Community Dent Oral Epidemiol 1995;23:8-14.

8. Wang NJ, Holst D. Individualizing recall intervals in child dental care. Community Dent Oral Epidemiol 1995;23:1-7.

9. Kent DL, Nease RA, Sox HC Jr, Shortliffe LD, Shachter R. Evaluation of nonlinear optimization for scheduling of follow-up cystoscopies to detect recurrent bladder cancer. The Bladder Cancer Follow-up Group. Med Decis Making 1991;11:240-8.

10. Fihn SD, McDonell MB, Vermes D, et al. A computerized intervention to improve timing of outpatient follow-up: a multicenter randomized trial in patients treated with warfarin. J Gen Intern Med 1994;9:131-9.

11. Lichtenstein MJ, Steele MA, Hoehn TP, Bulpitt CJ, Coles EC. Visit frequency for essential hypertension: observed associations. J Fam Pract 1989;28:667-72.

12. Parchman ML, Bisonni RS, Lawler FH. Hypertension management: relationship between visit interval and control. Fam Pract Res J 1993;13:225-31.

13. Lichtenstein MJ, Sweetnam PM, Elwood PC. Visit frequency for controlled essential hypertension: general practitioners' opinions. J Fam Pract 1986;23:331-6.

14. Tobacman JK, Zeitler RR, Cilursu AM, Mori M. Variation in physician opinion about scheduling of return visits for common ambulatory care conditions. J Gen Intern Med 1992;7:312-6.

15. Wasson J, Gaudette C, Whaley F, Sauvigne A, Baribeau P, Welch HG. Telephone care as a substitute for routine clinic follow-up. JAMA 1992;267:1788-93.

16. Weinberger M, Oddone EZ, Henderson WG. Does increased access to primary care reduce hospital readmissions? N Engl J Med 1996;334:1441-7.

Grant Support

Supported in part by the Robert Wood Johnson Foundation (Dr. Fisher) and by a Career Development Award from the Department of Veterans Affairs (Dr. Welch).

Correspondence

Michael K. Chapko, PhD, VA Puget Sound Health Care System, 1660 South Columbian Way, Seattle, WA 98108; e-mail: chapko@u.washington.edu.