Not-for-profit urban medical centers in the US face chronic resource
limitations, a condition exacerbated by the ascendancy of managed care.
Resources for the conduct of research are especially scarce. Within this
context we conducted a nationwide mail survey of US geriatricians in
which, in addition to the survey goals, we sought to partially replicate
Sloan et al's1 test of a PhD versus an MD signatory on the survey's
covering letter as a means of increasing response rates. Current research
demonstrates that several strategies can increase physician response
rates, but all would have required labor and material resources
unavailable to us.2,3 Sloan et al1 found, among other things, that a PhD
signatory produced higher response rates from physicians than an MD
signatory when physicians were asked for consent to contact their cancer
patients for participation in research.
Subjects and Methods
The survey was conducted from August to October 1998. The mailing
comprised the survey instrument and one of two versions of a covering
letter, which represented the treatment arms of the experiment.
The covering letters were identical with the exception of the signatory.
The signatory of the first covering letter was the PhD director of
research in a large, urban medical center. The signatory of the second
version was the MD director of the division of geriatrics in the same
medical center. To control for the effect of handwriting style on response
rates, only one name was used on all letters, and all letters were signed
by the same person in a single sitting.
A random number generator was used to assign half of a sample of 530
geriatricians to receive one covering letter and half to receive the
other. Group sizes (n1=n2=265) were calculated to detect at least a 15%
difference between groups at alpha=.05 and a statistical power of .95.
All statistical tests were two-tailed chi-square tests with Yates's
correction for continuity. We calculated 95% confidence intervals (95%
CI) for proportions and for the difference between independent
proportions using standard formulas with Yates's correction for
continuity.4
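A power calculation of this kind can be sketched with the standard normal-approximation formula for comparing two independent proportions. The baseline response rates below are illustrative assumptions, since the letter does not report the proportions used in the original calculation, and this sketch omits the continuity-correction adjustment that may account for the larger published group size of 265:

```python
import math
from statistics import NormalDist  # standard normal quantiles, stdlib

def n_per_group(p1, p2, alpha=0.05, power=0.95):
    """Per-group sample size for detecting a difference between two
    independent proportions (normal approximation, two-tailed test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    z_b = NormalDist().inv_cdf(power)          # 1.645 for power = .95
    p_bar = (p1 + p2) / 2
    q_bar = 1 - p_bar
    d = abs(p1 - p2)
    n = (z_a * math.sqrt(2 * p_bar * q_bar)
         + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / d ** 2
    return math.ceil(n)

# Hypothetical baseline rates chosen only to illustrate a 15% difference.
print(n_per_group(0.60, 0.75))
```

Sample sizes grow as the assumed proportions approach 50%, so the figure printed here depends entirely on the assumed baseline rates.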
Results and Comment
No difference existed between response rates for the PhD covering letter
(191/265, 72.1%, 95% CI 66.5% to 77.7%) and the MD letter (195/265,
73.6%, 95% CI 68.1% to 79.1%): difference 1.5%, 95% CI -6.5% to 9.5%,
chi-square=.042, p=.84. Moreover, no difference existed (chi-square=2.65,
p=.62) between treatment arm response rates across the five geographic
regions of the United States.
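The confidence intervals reported above can be reproduced with the standard formulas cited in reference 4. A minimal sketch, assuming Fleiss's continuity-corrected formulas for a single proportion and for the difference between independent proportions:

```python
import math

Z = 1.959964  # two-tailed normal quantile for alpha = .05

def ci_proportion(r, n):
    """95% CI for a single proportion, with continuity correction."""
    p = r / n
    half = Z * math.sqrt(p * (1 - p) / n) + 1 / (2 * n)
    return p - half, p + half

def ci_difference(r1, n1, r2, n2):
    """95% CI for the difference between two independent proportions,
    with continuity correction."""
    p1, p2 = r1 / n1, r2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    half = Z * se + (1 / n1 + 1 / n2) / 2
    d = p2 - p1
    return d - half, d + half

print(ci_proportion(191, 265))            # PhD letter: about 66.5% to 77.7%
print(ci_proportion(195, 265))            # MD letter: about 68.1% to 79.1%
print(ci_difference(191, 265, 195, 265))  # difference: about -6.5% to 9.5%
```

These intervals agree with the reported figures to within rounding.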
We believe that the findings of Sloan et al's earlier experiment
could not be replicated here largely because the purpose for which
physicians were contacted in our research was fundamentally different.
Physicians in the US (and elsewhere) are deluged with surveys, including
requests for information from professional organizations and appeals from
marketing and sales agents, inter alia. In this environment, Sloan et
al's request to contact a patient to participate in cancer research may
appear to physicians to have greater validity as "legitimate" research
than a sample survey, regardless of the survey goals. Consequently,
tinkering with the details of the covering letter may be more effective in
the context of Sloan et al's request than in survey research, because
their request engenders greater trust and in turn, greater attention to
such details. In sum, our evidence suggests that achieving significant
increases in response rates from physicians in survey research such as
ours requires an outlay of resources currently unavailable to our
institution and, we suspect, to many medical institutions in the existing
milieu.
References
1. Sloan M, Kreiger N, James B. Improving response rates among doctors:
randomised trial. BMJ 1997;315:1136.
2. Tambor E, Chase G, Faden R, Geller G, Hofman K, Holzman N. Improving
response rates through incentive and follow-up: the effect on a survey of
physicians' knowledge of genetics. Am J Public Health 1993;83:1599-1603.
3. Maheux B, Legault C, Lambert J. Increasing response rates in
physicians' mail surveys: an experimental study. Am J Public Health
1989;79:638-9.
4. Fleiss JL. Statistical Methods for Rates and Proportions. New York:
John Wiley and Sons, 1981.
Randomized trial to assess the effect of the covering letter signatory on physician survey response
Competing interests: No competing interests