
Research

Association between physician US News & World Report medical school ranking and patient outcomes and costs of care: observational study

BMJ 2018; 362 doi: https://doi.org/10.1136/bmj.k3640 (Published 26 September 2018) Cite this as: BMJ 2018;362:k3640
Yusuke Tsugawa, assistant professor1; Daniel M Blumenthal, instructor2,3,4; Ashish K Jha, KT Li professor5,6; E John Orav, associate professor7,8; Anupam B Jena, Ruth L Newhouse associate professor9,10,11

  1. Division of General Internal Medicine and Health Services Research, David Geffen School of Medicine at UCLA, 911 Broxton Avenue, Los Angeles, CA 90024, USA
  2. Cardiology Division, Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
  3. Department of Medicine, Harvard Medical School, Boston, MA, USA
  4. Devoted Health, Waltham, MA, USA
  5. Department of Health Policy and Management, Harvard TH Chan School of Public Health, Boston, MA, USA
  6. The VA Healthcare System, Boston, MA, USA
  7. Division of General Internal Medicine and Primary Care, Brigham and Women’s Hospital, Boston, MA, USA
  8. Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, MA, USA
  9. Department of Health Care Policy, Harvard Medical School, Boston, MA, USA
  10. General Internal Medicine Division, Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
  11. National Bureau of Economic Research, Cambridge, MA, USA

  • Correspondence to: Y Tsugawa ytsugawa@mednet.ucla.edu
  • Accepted 16 August 2018

Abstract

Objective To investigate whether the US News & World Report (USNWR) ranking of the medical school a physician attended is associated with patient outcomes and healthcare spending.

Design Observational study.

Setting Medicare, 2011-15.

Participants 20% random sample of Medicare fee-for-service beneficiaries aged 65 years or older (n=996 212), who were admitted as an emergency to hospital with a medical condition and treated by general internists.

Main outcome measures Association between the USNWR ranking of the medical school a physician attended and the physician’s patient outcomes (30 day mortality and 30 day readmission rates) and Medicare Part B spending, adjusted for patient and physician characteristics and hospital fixed effects (which effectively compared physicians practicing within the same hospital). A sensitivity analysis employed a natural experiment by focusing on patients treated by hospitalists, because patients are plausibly randomly assigned to hospitalists based on their specific work schedules. Alternative rankings of medical schools based on social mission score or National Institutes of Health (NIH) funding were also investigated.

Results 996 212 admissions treated by 30 322 physicians were examined for the analysis of mortality. When using USNWR primary care rankings, physicians who graduated from higher ranked schools had slightly lower 30 day readmission rates (adjusted rate 15.7% for top 10 schools v 16.1% for schools ranked ≥50; adjusted risk difference 0.4%, 95% confidence interval 0.1% to 0.8%; P for trend=0.005) and lower spending (adjusted Part B spending $1029 (£790; €881) v $1066; adjusted difference $36, 95% confidence interval $20 to $52; P for trend <0.001) compared with graduates of lower ranked schools, but no difference in 30 day mortality. When using USNWR research rankings, physicians graduating from higher ranked schools had slightly lower healthcare spending than graduates from lower ranked schools, but no differences in patient mortality or readmissions. A sensitivity analysis restricted to patients treated by hospitalists yielded similar findings. Little or no relation was found between alternative rankings (based on social mission score or NIH funding) and patient outcomes or costs of care.

Conclusions Overall, little or no relation was found between the USNWR ranking of the medical school from which a physician graduated and subsequent patient mortality or readmission rates. Physicians who graduated from highly ranked medical schools had slightly lower spending than graduates of lower ranked schools.

Introduction

Given extensive evidence that practice patterns vary widely across physicians,123456 there is increasing interest in measuring the performance of individual physicians and understanding the determinants of physician level variation in patient outcomes and healthcare spending. Such knowledge may help design effective interventions to improve quality of care and reduce low value care.78 Education and training are potentially important determinants of a physician’s practice style. Research has found that physicians whose residency training occurred in regions with higher healthcare spending had higher subsequent costs of care after residency completion compared with physicians who trained in lower spending regions.9 A previous study also found that obstetricians who trained in residency programs with higher complication rates for childbirth had higher complication rates compared with obstetricians who trained in residency programs with lower complication rates.10 These findings shed light on the potential importance of physician training in determining the quality and costs of care delivered.

Surprisingly little is known about the association between where a physician completed medical school—in particular a medical school’s US News & World Report (USNWR) national ranking—and subsequent patient outcomes and costs of care. Patients and peer physicians may use a physician’s USNWR medical school ranking as a signal of provider quality,1112 despite little evidence that the prestige of a medical school (which may correlate with both the quality of medical education and the strength of pre-medical academic records) is associated with the quality of care physicians deliver.1314 However, it remains largely unknown whether subsequent patient outcomes and spending differ between physicians who graduate from top ranked versus lower ranked medical schools in USNWR rankings.

Using nationally representative data on Medicare beneficiaries admitted to hospital for a medical condition, we examined the association between USNWR rankings of medical schools attended by a cohort of general internists and their clinical performance—30 day mortality rates, 30 day readmission rates, and costs of care. We focused on USNWR rankings in our main analyses because they are the most commonly applied rankings and used in previous studies.151617 As secondary analyses, we also investigated alternative rankings based on social mission score and research funding.18

Methods

Data

We linked the 20% Medicare Inpatient Carrier and Medicare Beneficiary Summary Files from 2011 to 2015 to a comprehensive physician database from Doximity. Doximity is an online professional network for physicians that has assembled data on all US physicians—both those who are registered members of the service as well as those who are not—through multiple sources and data partnerships: the National Plan and Provider Enumeration System National Provider Identifier Registry, state medical boards, specialty societies such as the American Board of Medical Specialties, and collaborating hospitals and medical schools. The database includes information on physician age, sex, year of medical school completion, credentials (allopathic versus osteopathic training), specialty, and board certification.19202122 Prior studies have validated data for a random sample of physicians in the Doximity database using manual audits.1920 We were able to match approximately 95% of physicians in the Medicare database to the Doximity database. Details of the Doximity database are described elsewhere.11920222324

Patient population

We analysed Medicare fee-for-service beneficiaries aged 65 years or older admitted to hospital with a medical condition (as defined by the presence of a medical Medicare Severity Diagnosis Related Group (MS-DRG) on admission) between 1 January 2011 and 31 December 2015. To avoid comparing patient outcomes across physicians of different specialties, we focused on patients treated by general internists. We restricted our sample to patients treated in acute care hospitals, and excluded hospital admissions where a patient left against medical advice. To minimize the influence of patients selecting their physicians or physicians selecting their patients, we focused our analyses on emergency hospital admissions, defined as either emergency or urgent admissions identified in the Claim Source Inpatient Admission Code of Medicare data (although our interest was not limited to emergency admissions, this restriction was necessary to reduce the impact of unmeasured confounding). To allow sufficient follow-up, we excluded patients admitted in December 2015 from 30 day mortality analyses and patients discharged in December 2015 from readmission analyses.
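For illustration, the cohort restrictions described above could be implemented along these lines (a minimal sketch in Python with pandas; the study’s data preparation was actually done in SAS, and the file name and column names here are hypothetical):

```python
import pandas as pd

# Hypothetical file and column names; the real Medicare files have different layouts.
adm = pd.read_csv("medicare_20pct_admissions_2011_2015.csv",
                  parse_dates=["admission_date", "discharge_date"])

cohort = adm[
    (adm["age"] >= 65)
    & (adm["fee_for_service"] == 1)
    & (adm["ms_drg_type"] == "medical")                       # medical MS-DRG on admission
    & (adm["admission_type"].isin(["emergency", "urgent"]))   # emergency admissions only
    & (adm["acute_care_hospital"] == 1)
    & (adm["left_against_medical_advice"] == 0)
]

# Allow sufficient follow-up at the end of the study period
mortality_cohort = cohort[cohort["admission_date"] < "2015-12-01"]
readmission_cohort = cohort[cohort["discharge_date"] < "2015-12-01"]
```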

Medical school rankings

Data on medical school attended were available for approximately 80% of physicians in our data. We restricted analyses to those who graduated from medical schools in the US, excluding 259 (0.6%) who self reported as graduates of “other medical schools.” For those physicians for whom information on medical school was available, we matched schools to the rankings of the US News & World Report (USNWR) in research and primary care (see supplemental eTable 1). The USNWR has published research rankings of medical schools since 1983, and it added a primary care ranking in 1995.25 USNWR uses four attributes to rank medical schools: reputation, research activity, student selectivity, and faculty resources. The rankings are commonly used as a metric for assessing the quality of medical schools151617 (although other less commonly used ranking schemas exist). Rankings are based on a weighted average of indicators, including peer assessment by school deans, evaluation by residency directors, selectivity of student admission (medical college admission test scores, student grade point averages, and acceptance rate), and faculty-student ratio.26 In addition, research rankings also take into account research activity of the faculty; primary care rankings include a measure of the proportion of graduates entering primary care specialties.

To allow for a non-linear relation we categorized medical schools into groups on the basis of USNWR ranking: 1-10, 11-20, 21-30, 31-40, 41-50, and ≥50 (only the top 50 medical schools are ranked, and therefore, we put unranked schools into the last category). We considered these six categories as the ranking categories. To measure the USNWR ranking of a physician’s medical school during the approximate period of school attendance, we used rankings published in 2002 as opposed to current rankings, and we examined patient outcomes of these physicians in 2011-15. Previous studies have found a high correlation between USNWR school rankings across years27 and relatively stable rankings over time for the top 20 medical schools in primary care rankings.28
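A sketch of this categorization step (a hypothetical `physicians` DataFrame and column names; the six labels mirror the grouping described above, with unranked schools assigned to the last category):

```python
import pandas as pd

def rank_category(rank):
    """Map a school's 2002 USNWR rank to the six analysis categories.
    Unranked schools (missing rank) go into the lowest category."""
    if pd.isna(rank):
        return "≥50 (unranked)"
    for upper, label in [(10, "1-10"), (20, "11-20"), (30, "21-30"),
                         (40, "31-40"), (50, "41-50")]:
        if rank <= upper:
            return label
    return "≥50 (unranked)"

# `physicians` is a hypothetical DataFrame with one row per physician
physicians["rank_cat_primary_care"] = physicians["usnwr_2002_primary_care_rank"].apply(rank_category)
physicians["rank_cat_research"] = physicians["usnwr_2002_research_rank"].apply(rank_category)
```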

Outcome variables

Our outcomes of interest included 30 day mortality, 30 day readmissions, and costs of care. Information on dates of death, including deaths out of hospital, was available in the Medicare Beneficiary Summary Files, which have been validated against death certificates.29 We excluded less than 1% of patients with non-validated death dates. We defined costs of care as total Medicare Part B spending (physician fee-for-service spending, including visits, procedures, and interpretations of tests or images) for each hospital admission, because Part A spending (hospital spending) is largely invariant as it is determined by MS-DRGs.
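A simplified sketch of how these outcomes could be constructed from the linked files (column names are illustrative and the readmission logic is abbreviated; the study’s actual derivation may differ):

```python
# 30 day mortality: death (in or out of hospital) within 30 days of admission
cohort["mortality_30d"] = (
    (cohort["date_of_death"] - cohort["admission_date"]).dt.days <= 30
).astype(int)

# 30 day readmission: next admission within 30 days of discharge (simplified)
cohort["readmission_30d"] = (
    (cohort["next_admission_date"] - cohort["discharge_date"]).dt.days <= 30
).astype(int)

# Costs of care: total Part B (physician) spending billed during the admission
cohort["part_b_spending"] = cohort["part_b_payments_total"]
```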

Attribution of patient outcomes to physicians

Based on prior studies,121222430 we defined the physician responsible for patient outcomes and spending as the physician who accounted for the largest amount of Medicare Part B spending during that hospital admission. On average, 51%, 22%, and 11% of total Part B spending was accounted for by the first, second, and third highest spending physicians, respectively. We restricted our analyses to hospital admissions in which the assigned physician was a general internist. For patients transferred to other acute care hospitals (1.2% of hospital admissions), we attributed the multi-hospital episode of care and associated outcomes to the assigned physician of the initial hospital admission.3132
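The attribution rule might look roughly as follows (a sketch only; `partb_claims`, `cohort`, and `general_internist_npis` are hypothetical objects):

```python
# Sum Part B payments per physician within each admission, then keep the
# physician with the largest total as the attributed ("assigned") physician.
lead_physician = (
    partb_claims.groupby(["admission_id", "physician_npi"], as_index=False)["payment"].sum()
    .sort_values(["admission_id", "payment"], ascending=[True, False])
    .drop_duplicates("admission_id")
    .rename(columns={"physician_npi": "attributed_npi"})
)

cohort = cohort.merge(lead_physician[["admission_id", "attributed_npi"]], on="admission_id")

# Keep only admissions attributed to general internists
cohort = cohort[cohort["attributed_npi"].isin(general_internist_npis)]
```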

Adjustment variables

We adjusted for patient and physician characteristics and hospital fixed effects. Patient characteristics included age (as a continuous variable, with quadratic and cubic terms to allow for a non-linear relation), sex, race or ethnic group (non-Hispanic white American, non-Hispanic black American, Hispanic American, other), primary diagnosis (indicator variables for MS-DRG), indicators for 27 comorbid conditions (from the Chronic Conditions Data Warehouse developed by the Centers for Medicare and Medicaid Services33), median household income by zip code (in 10ths), an indicator for dual Medicare-Medicaid coverage, day of the week on which the admission occurred, and an indicator variable for year. Physician characteristics included age (as a continuous variable, plus quadratic and cubic terms), sex, credentials (allopathic versus osteopathic training), and patient volume (as a continuous variable, with quadratic and cubic terms). We also adjusted for hospital fixed effects—indicator variables for each hospital, which account for both measured and unmeasured characteristics of hospitals that do not vary over time, including unmeasured differences in patient populations. Therefore, our models effectively compared patient outcomes between physicians who graduated from medical schools of varying USNWR rank, practicing within the same hospital.343536 This method allowed us to circumvent the potential concern that physicians from highly ranked medical schools may appear to have better (or worse) patient outcomes because they are differentially employed by hospitals with better support systems or whose patients have, on average, lower severity of illness (or alternatively, worse support systems and patients with, on average, higher severity of illness).
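In rough notation, the adjusted specification described above can be written as follows (a sketch of the model, not the authors’ exact formula):

$$
Y_{ijh} = \sum_{c=2}^{6} \beta_c \, \mathbf{1}\{\mathrm{RankCat}_j = c\} + X_i^\top \gamma + Z_j^\top \delta + \alpha_h + \varepsilon_{ijh}
$$

where $Y_{ijh}$ is the outcome for patient $i$ treated by physician $j$ in hospital $h$; $\mathbf{1}\{\mathrm{RankCat}_j = c\}$ indicates the USNWR ranking category of physician $j$'s medical school (with the top category as reference); $X_i$ and $Z_j$ are the patient and physician characteristics listed above; $\alpha_h$ is a hospital fixed effect; and $\varepsilon_{ijh}$ is an error term, with standard errors clustered by physician.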

Statistical analysis

We examined the association between the USNWR ranking category of the medical school a physician attended and the physician’s 30 day patient mortality rate using a multivariable linear probability model, adjusting for patient and physician characteristics and hospital fixed effects. We used cluster robust standard errors to account for the possibility that outcomes among patients treated by the same physician may be correlated with each other.37 After fitting the regression model, we calculated adjusted 30 day mortality rates using marginal standardization (also known as predictive margins or margins of responses); for each hospital admission we calculated predicted probabilities of patient mortality with physician medical school ranking fixed at each category and then averaged over the distribution of covariates in our national sample.38 In addition, to test whether mortality rates changed monotonically across USNWR medical school ranking categories, we conducted a trend analysis (P for trend) by refitting the regression model using the ranking categories as a continuous variable.
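A minimal sketch of this estimation approach in Python (the analyses were actually run in Stata 14; `analytic` is a hypothetical admission level DataFrame, the covariate list is abbreviated, and column names are illustrative):

```python
import statsmodels.formula.api as smf

formula = (
    "mortality_30d ~ C(rank_cat_primary_care) + age + I(age**2) + I(age**3)"
    " + female + C(race) + C(ms_drg) + dual_eligible + C(hospital_id)"
)

# Linear probability model with hospital fixed effects and standard errors
# clustered at the physician level
fit = smf.ols(formula, data=analytic).fit(
    cov_type="cluster", cov_kwds={"groups": analytic["attributed_npi"]}
)

# Marginal standardization: predict every admission's outcome with the ranking
# category set to each level in turn, then average over the observed covariates
adjusted_rates = {
    cat: fit.predict(analytic.assign(rank_cat_primary_care=cat)).mean()
    for cat in sorted(analytic["rank_cat_primary_care"].unique())
}

# P for trend: refit with the six ranking categories entered as a 1-6 score
trend_fit = smf.ols(
    formula.replace("C(rank_cat_primary_care)", "rank_cat_score"), data=analytic
).fit(cov_type="cluster", cov_kwds={"groups": analytic["attributed_npi"]})
```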

We then evaluated the relation between the ranking of the medical school a physician attended and 30 day readmission rates and costs of care using a similar method to the analysis of mortality. To estimate these associations, we used multivariable linear models (linear probability models for readmissions and linear regression for costs of care), adjusting for patient and physician characteristics and hospital fixed effects.

Secondary analyses

We conducted several additional analyses. Firstly, to address the possibility that the USNWR ranking of a physician’s medical school in 2002 may not reflect the ranking at the time the physician attended medical school, we restricted our analysis to physicians who graduated from medical school within four years (from 1998 through 2006) of when the rankings were created (given that the typical duration of medical school education is four years). We also conducted an analysis that used USNWR medical school rankings from 2009 instead of 2002.

Secondly, to address the possibility that physicians who graduated from more highly ranked medical schools may treat patients with greater or lesser unmeasured severity of illness, we repeated our analyses focusing on hospitalists instead of general internists. Hospitalists typically work in scheduled shifts or blocks (eg, one week on and one week off) and in general do not treat patients in the outpatient setting. Therefore, within the same hospital, patients treated by hospitalists may be considered to be plausibly randomised to a particular hospitalist based only on the time of the patient’s admission and the hospitalist’s work schedule.212239 We assessed the validity of this assumption by testing the balance of a broad range of patient characteristics between physicians who graduated from lower ranked versus higher ranked medical schools. We defined hospitalists as general internists who filed at least 90% of their total evaluation-and-management billings in an inpatient setting, a claims based approach that has been previously validated (sensitivity of 84.2%, specificity of 96.5%, and positive predictive value of 88.9%).40 To address the possibility that patients who are admitted multiple times may be assigned to the hospitalist who treated the patient previously, we restricted our analyses to patients’ first admission to a given hospital during the study period.
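A sketch of the hospitalist definition and the first admission restriction (the `em_claims` and `cohort` objects and their columns are hypothetical):

```python
# Hospitalists: general internists with >=90% of evaluation-and-management
# (E&M) claims billed in an inpatient setting
em = em_claims[em_claims["specialty"] == "general internal medicine"]
inpatient_share = (
    em["place_of_service"].eq("inpatient")
      .groupby(em["physician_npi"]).mean()
)
hospitalist_npis = set(inpatient_share[inpatient_share >= 0.90].index)

# Restrict to each patient's first admission to a given hospital in 2011-15
first_admissions = (
    cohort.sort_values("admission_date")
          .drop_duplicates(["beneficiary_id", "hospital_id"], keep="first")
)
hospitalist_cohort = first_admissions[first_admissions["attributed_npi"].isin(hospitalist_npis)]
```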

Thirdly, to evaluate whether our findings were sensitive to how we attributed patients to physicians, we tested two alternative attribution methods: attributing patients to physicians with the largest number of evaluation-and-management claims, and attributing patients to physicians who billed the first evaluation-and-management claim for a given hospital admission (the “admitting physician”).

Fourthly, we used multivariable logistic regression models, instead of multivariable linear probability models, to test whether our findings were sensitive to the model specification for the analyses of binary outcomes (mortality and readmissions). To overcome complete or quasi-complete separation problems (ie, perfect or nearly perfect prediction of the outcome by the model), we combined MS-DRG codes with no outcome event (30 day mortality or readmission) into clinically similar categories.41
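A sketch of this sensitivity analysis, with the DRG pooling step made explicit (again using the hypothetical `analytic` DataFrame; `drg_clinical_family` stands in for a clinically similar grouping and is not a variable from the study):

```python
import statsmodels.formula.api as smf

# Pool MS-DRG codes that have no outcome events into broader clinical groups
events_by_drg = analytic.groupby("ms_drg")["mortality_30d"].sum()
no_event_drgs = events_by_drg[events_by_drg == 0].index
analytic["drg_grouped"] = analytic["ms_drg"].where(
    ~analytic["ms_drg"].isin(no_event_drgs),
    analytic["drg_clinical_family"],
)

# Logistic regression with the same (abbreviated) covariates as the main model
logit_formula = (
    "mortality_30d ~ C(rank_cat_primary_care) + age + I(age**2) + I(age**3)"
    " + female + C(race) + C(drg_grouped) + dual_eligible + C(hospital_id)"
)
logit_fit = smf.logit(logit_formula, data=analytic).fit(disp=0)
```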

Fifthly, because cost data were right skewed we conducted an additional sensitivity analysis using a generalised linear model with a log-link and normal distribution for our cost analyses.42
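A corresponding sketch for the cost sensitivity analysis (illustrative column names and an abbreviated covariate list):

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

cost_formula = (
    "part_b_spending ~ C(rank_cat_primary_care) + age + I(age**2) + I(age**3)"
    " + female + C(race) + C(ms_drg) + dual_eligible + C(hospital_id)"
)

# Generalised linear model with a log link and normal (Gaussian) distribution
glm_fit = smf.glm(
    cost_formula,
    data=analytic,
    family=sm.families.Gaussian(link=sm.families.links.Log()),
).fit()
```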

Sixthly, we investigated whether the association between a physician’s USNWR medical school ranking and subsequent patient outcomes and costs of care was modified by years of experience. We hypothesised that medical school may play a greater role, if any, as a signal of quality for physicians who recently completed residency training as opposed to older physicians for whom any signal of quality owing to medical school may dissipate over time as physician practice norms conform to those of other peers or hospital standards.

Seventhly, to evaluate the influence of hospitals where physicians practice, instead of comparing physicians who graduated from highly ranked versus lower ranked schools within the same hospital, we compared physicians across hospitals, by removing hospital fixed effects from our regression models.

Finally, to address important concerns about the methods used for the USNWR rankings, we repeated the analyses using alternative rankings. As an alternative to the primary care ranking, we used the ranking based on the social mission score developed by Mullan and colleagues, which is calculated from the percentages of graduates who work in primary care, who work in health professional shortage areas, and who are underrepresented minorities.43 As an alternative to the research ranking, we used a ranking based on the amount of National Institutes of Health (NIH) funding awarded to medical schools. Importantly, our baseline analysis focused on USNWR rankings, rather than these possibly more objective measures of medical school quality, because the key empirical question of interest in this study was whether the commonly used USNWR rankings bear any predictive signal for downstream patient outcomes and costs of care among physicians who graduated from higher ranked versus lower ranked schools.

Data preparation was conducted using SAS, version 9.4 (SAS Institute), and analyses were performed using Stata, version 14 (Stata Corp).

Patient and public involvement

No patients were involved in setting the research question or the outcome measures, nor were they involved in developing plans for design or implementation of the study. No patients were asked to advise on interpretation or writing up of results. There are no plans to disseminate the results of the research to study participants or the relevant patient community.

Results

Physician and patient characteristics

Among 30 322 physicians included in the study, 13.3% (4039/30 322) graduated from a medical school ranked in the top 20 for primary care in US News & World Report (USNWR) rankings, and 13.4% (4071/30 322) graduated from a school ranked in the top 20 for research. Only seven medical schools were in the top 20 for both primary care and research. Within the same hospital, physicians graduating from top 20 medical schools for primary care were slightly older and more likely to be graduates of allopathic schools (table 1; see eTable 2 for differences in physician and patient characteristics between top 20 research versus lower ranked research schools). Patient characteristics were similar between physicians of top 20 medical schools versus physicians graduating from lower ranked schools, with small differences in the prevalence of diabetes, hypertension, and chronic kidney disease.

Table 1

Patient and physician characteristics, according to a physician’s medical school US News & World Report (USNWR) ranking for primary care in 2002. Values are numbers (percentages) unless stated otherwise


USNWR primary care ranking and patient outcomes/healthcare spending

Among 996 212 hospital admissions of Medicare patients, 10.6% (106 003/996 212) died within 30 days of admission. After adjusting for patient and physician characteristics and hospital fixed effects, no systematic relation was observed between the USNWR primary care ranking of the medical school that a physician attended and the physician’s 30 day mortality rate for treated patients (table 2 and fig 1). A formal test for linearity found no association between USNWR medical school primary care ranking and patient mortality (P for trend=0.67).

Table 2

Association between a physician’s medical school US News & World Report (USNWR) ranking and patient outcomes

Fig 1

Association between physicians’ US News & World Report medical school ranking for primary care and research and patient 30 day mortality. Adjusted for patient and physician characteristics and hospital fixed effects

The overall 30 day readmission rate was 16.0% (156 057/973 484). After multivariable adjustment, patients treated by physicians who graduated from medical schools with lower USNWR primary care rankings had slightly higher readmissions compared with patients treated by physicians who graduated from higher ranked medical schools (adjusted 30 day readmission, 15.7% for top 10 schools versus 16.1% for schools ranked ≥50; adjusted risk difference 0.4%, 95% confidence interval 0.1% to 0.8%; P for trend=0.005) (table 2 and fig 2).

Fig 2

Association between physician US News & World Report medical school ranking for primary care and research and patient 30 day readmission rate. Adjusted for patient and physician characteristics and hospital fixed effects

Physicians who graduated from schools highly ranked in primary care had slightly lower spending than physicians who graduated from lower ranked schools (P for trend <0.001). For example, physicians who graduated from top 10 ranked USNWR schools spent slightly less for each patient than physicians who graduated from schools with a ranking of 50 or more (adjusted Part B spending level $1029 (£790; €881) for top 10 schools v $1066 for schools ranked ≥50; adjusted difference $36, 95% confidence interval $20 to $52; P<0.001) (table 2 and fig 3).

Fig 3

Association between physician US News & World Report medical school ranking for primary care and research and Part B spending for each hospital admission. Adjusted for patient and physician characteristics and hospital fixed effects

USNWR research ranking and patient outcomes/healthcare spending

No statistically significant association was observed between the USNWR research ranking of the medical school that a physician attended and patient 30 day mortality. No systematic (linear) association was found between USNWR medical school research ranking and patient mortality (P for trend=0.99) (table 2 and fig 1).

The USNWR research ranking of a physician’s medical school was not statistically significantly associated with patient 30 day readmission rates (table 2 and fig 2) (P for trend=0.27).

Physicians who graduated from highly ranked schools had slightly lower spending than graduates from lower ranked schools (adjusted Part B spending level $1050 for top 10 schools v $1067 for schools ranked ≥50; P for trend <0.001) (table 2 and fig 3).

Secondary analyses

The overall findings were qualitatively unaffected by restricting analyses to physicians who graduated from medical school within four years of when the USNWR rankings were created (eTable 3), or when 2009 USNWR rankings were used instead of 2002 rankings (eTable 4). Agreement between the 2002 and 2009 ranking categories (weighted κ) was 0.90 for research rankings and 0.71 for primary care rankings. Patient characteristics did not differ between hospitalists who graduated from top ranked versus lower ranked USNWR schools (eTable 5), and findings were similar among hospitalists (eTable 6). Findings were not affected by using alternative physician attribution models (eTables 7 and 8), using logistic regression models (eTable 9), or using a generalised linear model with a log-link and normal distribution for the analysis of costs (eTable 10). A stratified analysis by years since completion of residency programs showed that the association of medical school with subsequent patient outcomes was strongest in the first 10 years of a physician’s career (eTable 11). For example, physicians who attended top medical schools (either in terms of research or primary care rankings) exhibited statistically significantly lower patient mortality rates in the first 10 years of independent practice, whereas there was no association after 10 years. Comparison of physicians across hospitals (by removing hospital fixed effects from regression models) revealed that physicians who graduated from highly ranked USNWR medical schools—for both primary care and research rankings—had lower mortality rates, readmission rates, and costs of care compared with physicians who graduated from lower ranked schools (eTable 12). Finally, we found little or no association between medical school rankings and patient outcomes or costs of care when the ranking of the medical school that a physician attended was based on a social mission score (eTable 13). We also found no association between patient outcomes and ranking of the medical school that a physician attended when ranking was based on NIH funding; costs of care were only slightly lower for physicians who graduated from medical schools ranked highly for NIH funding compared with lower ranked schools (eTable 14).

Discussion

In a nationally representative cohort of Medicare patients aged 65 years and older who were admitted to hospital in 2011-15 and treated by a general internist, little or no association was found between the US News & World Report (USNWR) ranking of the medical school from which a physician graduated and patient 30 day mortality or readmission rates. Physicians who graduated from highly ranked medical schools had slightly lower spending compared with physicians who graduated from lower ranked schools. Overall these findings suggest that the USNWR ranking of the medical school from which a physician graduated bears only a weak relation with patient outcomes and costs of care. We also found that alternative ranking schema—based on social mission score for primary care ranking and NIH funding for research ranking—bore little relation with subsequent patient outcomes and costs of care.

There are several potential explanations for why physicians who graduate from USNWR highly ranked medical schools show little or no differences in their clinical outcomes and healthcare spending. Firstly, the medical school accreditation processes, medical school standards, and standardized testing required of all physicians may be sufficiently stringent to ensure that all medical students master the essential competencies necessary to practice as clinicians. In the US, MD (allopathic) granting medical schools are accredited by the Liaison Committee on Medical Education, and DO (osteopathic) granting schools are accredited by the American Osteopathic Association Commission on Osteopathic College Accreditation. That only 17 medical schools were awarded full accreditation by these bodies between 2007 and 2016 suggests that these accrediting bodies hold MD granting and DO granting medical schools to rigorous standards.44 However, it is possible that observed variation would be larger if there were no national standards for medical schools. Secondly, our findings indicate that although different medical schools may focus on training students with different interests and goals—for example, some institutions may focus on training physician-scientists, whereas others may have mandates to produce clinicians for their local communities—schools may have developed strategies for effectively ensuring that students learn the core knowledge and skills necessary to become competent physicians. Thirdly, the findings of our study may in part reflect the study’s design, which compared patient outcomes between physicians practicing in the same hospital. Our within hospital analysis helps address confounding arising from the possibility that physicians who graduate from higher USNWR ranked versus lower ranked medical schools may practice in areas with different patient populations. However, because hospitals perform quality assurance on the physicians that are hired, it is likely that within hospital differences in physician skill may be smaller than the between hospital differences. This hypothesis is supported by our secondary analysis findings that differences in patient outcomes between physicians graduating from medical schools of varying USNWR rank were larger when we removed hospital specific fixed effects from our model (thereby comparing physicians across rather than within hospitals). Fourthly, although patients may view the USNWR ranking of a physician’s medical school as a signal of quality, it is likely that many factors at different stages of physicians’ career, including postgraduate training and the systems in place at a physician’s current place of work, play an important role in determining the quality and costs of care that physicians provide.910 Future studies are warranted to understand whether other factors such as residency training have a measurable association with the performance of physicians after completion of training. Lastly, it is possible that the rankings we used in this study do not capture the quality of medical education in a valid and reliable way, and we may need better approaches for measuring the quality of medical schools. For example, in the USNWR primary care ranking, the largest weight is given to graduates selecting internal medicine, family practice, or paediatric residencies; however, only a limited proportion of those trainees entering internal medicine residency programs may remain in primary care. 
Our findings suggest that there is room for improvement within medical school rankings to ensure that they reflect the actual quality of medical education that students receive at individual medical schools.

In considering whether the quality of medical education has an impact on downstream practice patterns of physicians, it is important to emphasize what this study does and does not attempt. Our main interest was to analyse whether the commonly used USNWR ranking is associated with subsequent patient outcomes and costs of care for physicians who graduated from medical schools with a high versus low USNWR ranking. We chose this question because the USNWR ranking of the medical school from which a physician graduated may be used by patients and clinicians as a signal for physician quality. We found no evidence that the USNWR ranking of the medical school from which a physician graduated bears any relation with subsequent patient outcomes, at least when considering physicians who practice within the same hospital. We also found no relation between two other ranking schema and subsequent patient outcomes of physicians who graduated from high ranked versus lower ranked medical schools (rankings based on social mission score and NIH funding); however, this does not imply that the quality of medical school training bears no relation with quality of downstream patient care, which is a distinct question. It may, but the main focus of this study was whether common perceptions of a medical school’s quality—based on widely used USNWR rankings—provide any predictive signal for subsequent patient outcomes and costs of care.

Comparison with other studies

The current study relates to prior research on the relation between residency training and subsequent costs and quality of care, which finds that practice patterns embedded in residency training are subsequently implemented into practice after physicians complete their residency.910 There is also a limited body of work evaluating the association between the medical school from which a physician graduated and subsequent practice patterns. Reid and colleagues examined physicians practicing in Massachusetts and found no association between graduating from a top 10 medical school, defined using USNWR rankings, and performance on process-of-care measures.13 Hartz and colleagues found no association between cardiothoracic surgeons’ attendance at prestigious medical schools and coronary artery bypass graft surgery outcomes.45 Schnell and Currie recently reported that physicians who completed training at highly ranked medical schools write statistically significantly fewer opioid prescriptions than physicians from lower ranked schools.14

Strengths and limitations of this study

Our study has limitations. Firstly, USNWR rankings are, at best, imperfect measures of medical school quality. While no ranking system is perfect, the USNWR ranking system captures a wider array of factors that reflect medical school quality than any other ranking system—including peer assessment scores by school deans, evaluation by residency directors, students’ grades and test scores, and faculty-student ratio.26 Moreover, USNWR rankings have been reported to influence applicants’ medical school choices and are often used in scientific research as a proxy for medical school quality.1213144546 Importantly, even if USNWR rankings are not accurate measures of medical school quality, to the extent that patient perceptions of doctor quality may partly depend on the ranking of the medical school at which a physician trained, this study suggests that little information about mortality, readmissions, and costs of care should be inferred by patients from that ranking information. Secondly, it is possible that the quality of a medical school’s research and primary care training may not correlate well with the quality of the school’s training for hospital based care, which could confound our assessment of the relation between medical school quality and patient outcomes. Thirdly, we relied on USNWR medical school rankings from a single year, whereas physicians in our data graduated from medical school across a wide range of years. It is possible that this single year estimate of quality failed to capture variations in medical school quality over time that had an important impact on physician quality, and, in turn, on patients’ clinical outcomes. However, previous studies have found a high correlation of USNWR rankings across years27 and relatively stable rankings over time for top 20 primary care medical schools.28 Our data also confirmed a high correlation of rankings across years. Furthermore, using alternative approaches in sensitivity analyses did not affect the results, supporting the robustness of these findings. Finally, these findings may not apply to non-Medicare populations, outpatient care, or surgical patients. Additional studies are needed to determine whether the lack of association between the ranking of the medical school a physician attended and subsequent patient outcomes is generalizable to other types of clinical care and different patient populations.

Conclusion

For physicians practicing within the same hospital, the USNWR ranking of the medical school from which they graduated bears little or no relation with patient mortality after hospital admission, readmissions, and costs of care.

What is already known on this topic

  • No national data exist on whether the US News & World Report (USNWR) ranking of the medical school from which an internist graduated is associated with hospital patient outcomes and costs of care

  • Patients may perceive the medical school from which a physician graduated as a signal of care quality

  • The predictive relation between the USNWR ranking of the medical school a physician attended and subsequent patient outcomes and spending is therefore important to understand

What this study adds

  • Physicians who graduated from medical schools ranked highly by USNWR for primary care had slightly lower patient readmission rates and spending compared with those who attended lower ranked schools, but no difference in patient 30 day mortality

  • Physicians who graduated from highly ranked research medical schools had slightly lower spending but no difference in patient 30 day mortality or readmission rates

  • Little or no association was found between other rankings—based on social mission score or National Institutes of Health funding—and patient outcomes and costs of care

Footnotes

  • Contributors: All authors contributed to the design and conduct of the study, data collection and management, analysis, and interpretation; and preparation, review, or approval of the manuscript. YT is the guarantor. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

  • Funding: ABJ was supported by the Office of the Director, National Institutes of Health (NIH Early Independence Award, grant 1DP5OD017897). ABJ reports receiving consulting fees unrelated to this work from Pfizer, Hill Rom Services, Bristol Myers Squibb, Novartis Pharmaceuticals, Amgen, Eli Lilly, Vertex Pharmaceuticals, Precision Health Economics, and Analysis Group. DMB has received consulting fees unrelated to this work from Precision Health Economics, Amgen, Novartis, and HLM Venture Partners, and is the associate chief medical officer of Devoted Health, which is a health insurance company. Study sponsors were not involved in study design, data interpretation, writing, or the decision to submit the article for publication.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: ABJ has received consulting fees unrelated to this work from Pfizer, Hill Rom Services, Bristol Myers Squibb, Novartis Pharmaceuticals, Amgen, Eli Lilly, Vertex Pharmaceuticals, Precision Health Economics, and Analysis Group. DMB has received consulting fees unrelated to this work from Precision Health Economics, Amgen, Novartis, and HLM Venture Partners, and is the associate chief medical officer of Devoted Health, which is a health insurance company.

  • Ethical approval: This study was approved by the institutional review board at Harvard Medical School.

  • Data sharing: No additional data available.

  • Transparency: The lead author (YT) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies are disclosed.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

References