The evidence base for US Joint Commission hospital accreditation standards: cross sectional study
BMJ 2022; 377 doi: https://doi.org/10.1136/bmj-2020-063064 (Published 23 June 2022)
Cite this as: BMJ 2022;377:e063064
- Sarah A Ibrahim, research fellow (1, 2)
- Kelly A Reynolds, resident physician (3)
- Emily Poon, research assistant professor (2)
- Murad Alam, professor (4, 5, 6)
- 1. Rush Medical College, Chicago, IL, USA
- 2. Department of Dermatology, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- 3. University of Cincinnati College of Medicine, Cincinnati, OH, USA
- 4. Department of Otolaryngology, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- 5. Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- 6. Department of Dermatology, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Correspondence to: M Alam
- Accepted 9 May 2022
Objective To evaluate the evidence upon which standards for hospital accreditation by The Joint Commission on Accreditation of Healthcare Organizations (the Joint Commission) are based.
Design Cross sectional study.
Setting United States.
Participants Four Joint Commission R3 (requirement, rationale, and reference) reports released by July 2018 and intended to become effective between 1 July 2018 and 1 July 2019.
Interventions From each R3 report the associated standard and its specific elements of performance (or actionable standards) were extracted. If an actionable standard enumerated multiple requirements, these were separated into distinct components. Two investigators reviewed full text references, and each actionable standard was classified as either completely supported, partly supported, or not supported; Oxford evidence quality ratings were assigned; and the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) was used to assess the strength of recommendations.
Main outcome measure Strengths of recommendation for actionable standards.
Results 20 actionable standards with 76 distinct components were accompanied by 48 references. Of the 20 actionable standards, six (30%) were completely supported by cited references, six (30%) were partly supported, and eight (40%) were not supported. Of the six completely supported actionable standards, one (17%) cited at least one reference of level 1 or 2 evidence, none cited at least one reference of level 3 evidence, and five (83%) cited references of level 4 or 5 evidence. Of the completely supported actionable standards, strength of recommendation in five was deemed GRADE D and in one was GRADE B.
Conclusions In general, recent actionable standards issued by The Joint Commission are seldom supported by high quality data referenced within the issuing documents. The Joint Commission might consider being more transparent about the quality of evidence and underlying rationale supporting each of its recommendations, including clarifying when and why in certain instances it determines that lower level evidence is sufficient.
The Joint Commission on Accreditation of Healthcare Organizations is a national body that accredits and evaluates more than 22 000 facilities in the United States, including hospitals, behavioral health centers, nursing homes, home health services, ambulatory care centers, and other healthcare organizations.12 Its mission is to promote patient safety and improve the quality of healthcare organizations by enforcing compliance with various performance based standards.3
Although accreditation by The Joint Commission is voluntary, meeting its mandated standards makes healthcare organizations eligible to receive federal reimbursement from the Centers for Medicare and Medicaid Services.4 Many states also include The Joint Commission standards in their hospital licensure requirements.25 Failure to meet the standards could therefore disqualify a hospital from receiving millions of dollars annually in federal funding and could even lead to suspension or revocation of its state license.2
In recent years, advisory and regulatory bodies have increasingly accepted that the evidence underlying recommendations in clinical practice guidelines should be conveyed. In 2011 the US Institute of Medicine published a guidance document for the development of trustworthy, evidence based guidelines in which it recommended that organizations producing guidelines provide clear, transparent explanations of the reasoning supporting their published standards.6 Specifically, the Institute of Medicine recommended the inclusion of a discussion of expected benefits and harms; a summary of the available evidence, including its quality, completeness, and consistency, as well as any relevant evidence gaps; and details about other types of information, such as values, opinion, theory, and clinical experience, that might have influenced the recommendations. This would culminate in a rating of the level of confidence in the supporting evidence and a rating of the strength of recommendation associated with the guidelines or standards. Since the publication of the Institute of Medicine guidance, the US Preventive Services Taskforce has published standards for guideline development that accept the Institute of Medicine’s guidance as the basis for establishing evidence foundations for its own recommendations.7
Clinical practice guidelines are not standards. Both the Institute of Medicine and the US Preventive Services Taskforce focus on clinical practice guidelines. A guideline recommendation is a suggestion based on evidence. An actionable standard, such as that implemented by The Joint Commission, is a requirement, not a suggestion. Arguably, the level of evidence and transparency should be greater moving along this continuum, from clinical practice guidelines to standards. Given that The Joint Commission accreditation is a deemed authority for federal certification and is therefore a de facto national standard, it is reasonable to expect that the evidentiary basis underlying the development of The Joint Commission’s standards meet and exceed the evidentiary basis considered appropriate for mere clinical practice guidelines.8 Between July 2018 and June 2020 we evaluated the quality of the evidence on which The Joint Commission standards are based and the extent to which discussion of this information is transparent and available.
Definitions of The Joint Commission’s terminology
Standards—The Joint Commission standards guide the evaluation of safety and assurance of quality of care and are the basis of its accreditation process. Standards according to The Joint Commission are broad thematic areas requiring oversight. Examples include pain assessment and pain management, identifiers for newborns, and assessment of infection in the perinatal period.910111213 Developed in response to emerging safety and quality concerns, standards and their subsections are drafted with stakeholder input and then distributed nationally for field review.9 After review, executive leadership of The Joint Commission might revise the standards before publication and implementation.9 Manuals describing all standards are distributed to accredited institutions and can also be purchased.2914 The Joint Commission currently maintains more than 250 standards, and it regularly updates these with variable frequency.315
Elements of performance (actionable standards)—Element of performance is The Joint Commission term for an actionable standard on which basis healthcare organizations are evaluated. Although the term standard is used to denote general thematic areas for compliance, specific actionable standards (elements of performance) within these broad categories are the activities assessed by site reviewers and are used to score compliance with regulations.14
R3 reports—R3 refers to requirement, rationale, and reference. R3 reports are The Joint Commission’s documents that announce and describe new or revised standards, provide the background underlying these standards, and list literature citations on which each standard’s specific elements of performance (actionable standards) are based. R3 reports are distributed to The Joint Commission accredited healthcare organizations and are also made publicly available on its website.16
We included all R3 reports released by July 2018 and intended to become effective between 1 July 2018 and 1 July 2019, and excluded standards outside this timeframe as well as duplicate standards. Actionable standards that cited the same literature, and therefore had similar levels of support, were grouped; within each group, only the actionable standard with the highest degree of support and strongest level of evidence was retained for this analysis.
From each R3 report we extracted each associated standard as well as its specific actionable standards. If an actionable standard seemed to enumerate multiple requirements (eg, need for multiple diagnostic tests, roles of different types of providers, or for different classes of patients), two reviewers (SI, KR) used qualitative research methods to separate these actionable standards into distinct components (supplementary table 1). Specifically, any actionable standard that described multiple actions or multiple agents responsible for adhering to the actionable standard was divided so that a combination of only one action and one agent were included in each requirement.
Support provided by cited references
Two investigators (SI, KR) independently reviewed the references cited in each actionable standard and classified the degree to which each reference supported the corresponding distinct component as one of: completely supported (the reference literally or substantially affirmed the distinct component); partly supported (the reference addressed the same general topic as the distinct component but did not literally or substantially affirm it—for example, if a distinct component required testing of mothers for HIV and the reference was related to HIV in pregnant women but did not mention testing, the reference was classified as partly supported); or not supported (the reference contained no language, explicit or implied, to support the distinct component). The partly supported category was subjective but was considered necessary to avoid minimizing the utility of relevant references by excluding those that were at least somewhat relevant. The investigators were blinded to each other’s preliminary ratings. To improve reliability, the investigators completed their ratings using a standardized checklist, adapted from the Centre for Evidence-Based Medicine Critical Appraisal Tools (supplementary table 2). A third rater (MA) resolved discrepancies, with the highest level of support (ie, rounding up) reported by at least two raters selected.
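The adjudication rule described above (select the highest level of support reported by at least two of the three raters) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the fallback for the unspecified case in which all three raters differ is our assumption.

```python
# Support levels, ordered from highest to lowest
LEVELS = ["completely supported", "partly supported", "not supported"]

def adjudicate(votes):
    """Final rating for one reference: the highest level of support
    reported by at least two of the three raters ("rounding up")."""
    for level in LEVELS:  # scan from the highest level downward
        if votes.count(level) >= 2:
            return level
    # All three raters differ (a case the paper does not specify);
    # we assume rounding up to the highest individual rating.
    return min(votes, key=LEVELS.index)
```

For example, if the two primary raters split between "completely supported" and "partly supported" and the adjudicator chooses "completely supported", the final rating rounds up to "completely supported".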
If the distinct components within an actionable standard were totally supported by the references, then the actionable standard was considered completely supported. If at least one distinct component was supported by the references, then the actionable standard was considered partly supported. If none of the distinct components were supported by the references, then the actionable standard was considered not supported.
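The aggregation rule above maps directly onto a short sketch; the function and variable names are ours, for illustration only, not the authors' implementation.

```python
def classify_actionable_standard(components_supported):
    """components_supported: one boolean per distinct component,
    True if that component is supported by the cited references."""
    if all(components_supported):
        return "completely supported"  # every distinct component supported
    if any(components_supported):
        return "partly supported"      # at least one, but not all, supported
    return "not supported"             # no distinct component supported
```

So a standard with three components of which only one is supported would be classified as partly supported.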
Level of evidence for cited references
We assigned a level of evidence to each reference using the Oxford Centre for Evidence-Based Medicine scheme.17 This tool is commonly used to appraise evidence based reviews and clinical practice guidelines and was chosen because, unlike other rating schemes, it permits a higher level of evidence for some observational studies.18
We modified the tool using two methods. First, if a reference was a clinical practice guideline, we evaluated the recommendation’s level of evidence within the guideline rather than the guideline as a whole. We did this because although guidelines are collectively considered expert opinion, some guideline recommendations might be supported by high level evidence and others might be based solely on lower level evidence, such as expert opinion. Second, we assigned cross sectional studies—which are not well specified in the Oxford tool—an evidence level of 4.
Two investigators (SI, KR) independently assigned a level of evidence to each reference. A third investigator (MA) resolved discrepancies.
Assessment of overall strength of recommendations
We used a modified GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) approach to assess the strength of the actionable standards, accounting for the quality of each supporting reference.19 Two investigators (SI, KR) independently assigned a GRADE rating to each actionable standard; discrepancies were resolved by using the evaluation of a third investigator (MA).
Data were extracted and saved in Microsoft Excel. Analyses were carried out in Excel and verified in R (version 4.1.0). Descriptive statistics, including counts and percentages, were calculated for the actionable standards and their distinct components. We used unweighted Cohen’s κ statistics to evaluate interrater reliability of the reviewers’ assessments of level of evidence and degree of support.20
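The paper's analyses were done in Excel and R; purely as an illustration of the statistic used, the unweighted Cohen's κ can be computed as below. The ratings data are hypothetical, and the sketch omits the confidence interval and P value reported in the results.

```python
def cohens_kappa(ratings1, ratings2):
    """Unweighted Cohen's kappa for two raters over the same items."""
    n = len(ratings1)
    assert n == len(ratings2) and n > 0
    labels = set(ratings1) | set(ratings2)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # Agreement expected by chance from each rater's marginal frequencies
    p_e = sum((ratings1.count(l) / n) * (ratings2.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of four references by two reviewers
r1 = ["complete", "partial", "none", "complete"]
r2 = ["complete", "partial", "partial", "complete"]
# cohens_kappa(r1, r2) evaluates to 0.6
```

Values near 0.7, as reported in this study, are conventionally interpreted as substantial agreement beyond chance.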
Patient and public involvement
Speaking to patients inspired this review. Although no patients or members of the public were directly involved in this paper owing to the methodological focus on assessing quality of evidence, we did speak to patients about the study, and we asked a member of the public to read our manuscript.
Four of the five R3 reports issued by July 2018 were included (fig 1). One report was excluded because it was a duplicate intended for implementation in a different clinical care setting. One report each was developed for use in ambulatory care settings, obstetrics services, newborn patient services, and critical access hospitals. Each report contained between one and 19 (median 7) actionable standards, for a total of 23. The actionable standards element of performance 14 and element of performance 15 were excluded because they addressed similar clinical care decisions and contained similar reference lists to element of performance 16. In addition, element of performance 6 (identifier PC.01.02.07) was excluded owing to its similarity to element of performance 7 (LD.04.03.13). As a result, 20 actionable standards were included in the final analysis.
We separated the 20 actionable standards into 76 distinct components. Across the 76 components, 48 references were cited. One distinct component was excluded from subsequent analyses because no references were cited. One reference, a poster presentation, was not publicly accessible and was therefore excluded from this analysis.
Each distinct component analyzed cited between one and 10 references (median 2, interquartile range 1.0-2.5). Of the 75 distinct components, 68% (n=51) were supported by at least one reference, 31% (n=23) were partly supported by at least one reference, and 17% (n=13) were not supported by references. The interrater reliability for the two raters was substantial (κ=0.77, 95% confidence interval 0.70 to 0.83; P<0.001). Of the 244 assessments for degree of support of a reference, the primary reviewers disagreed in 36 instances (15%). Thirty (83%) of these adjudications resulted in rounding up.
Of the 47 references evaluated, most were low quality and were classified as evidence level 4 or 5 (n=34, 72%) (table 1). Among the distinct components directly supported by one or more references, 51% (26/51) were supported exclusively by references with evidence quality ratings of 5. Seven of the 19 actionable standards (37%) that cited references were directly supported only by narrative reviews, opinion based references, or book excerpts. Fewer than 30% of references were of evidence quality 1 or 2 (12/47 references), and only 13 of 51 distinct components (25%) were directly supported by such evidence (ie, systematic reviews of randomized controlled trials or other high quality studies). The interrater reliability for the two raters was substantial (κ=0.73, 95% confidence interval 0.66 to 0.81; P<0.001). Of the 51 unique assessments for level of evidence of a reference as it pertained to a distinct component, the primary reviewers disagreed in 11 instances (22%). Six (55%) of these adjudications resulted in rounding up.
Of 20 actionable standards, six (30%) were completely supported by references, six were partly supported (30%), and eight (40%) were not supported (table 2). Of the six completely supported actionable standards, one (17%) cited at least one reference of level 1 or 2 evidence, none cited at least one reference of level 3 evidence, and five (83%) cited references of level 4 or 5 evidence. Strength of recommendation of the completely supported actionable standards was GRADE B in one and GRADE D in five (table 3). Of the six actionable standards only partly supported, two (33%) cited at least one reference of level 1 or 2 evidence, one (17%) cited level 3 evidence, and three (50%) exclusively cited references of level 4 or 5. Strength of recommendation of the partly supported actionable standards was GRADE C in three (50%) and GRADE D in three (50%).
In general, when The Joint Commission on Accreditation of Healthcare Organizations implements new actionable standards for healthcare organizations, it often provides few references and little evidence in support of these standards in documents that are publicly available. When evidence is cited, it is often of low level or only partly supports the new actionable standards. As a consequence, it is unclear whether implementation of new actionable standards from The Joint Commission would likely improve safety or quality outcomes. Although The Joint Commission may have rationales that are not conveyed in its thinly referenced public documents, greater transparency about such additional reasoning, as well as more complete reference lists, may further strengthen confidence in the recommendations. Finally, if in some instances The Joint Commission believes evidence may not be required to support an actionable standard, then explicitly mentioning this, and explaining the rationale, could provide convincing justification in place of evidence.
Comparison with other studies
Although research on the detrimental effects of standards that are insufficiently supported by evidence is limited, there is a body of research on the possible harms from clinical practice guidelines that are inadequately supported. Despite standards being requirements and guidelines being recommendations, it is reasonable to expect standards to be at least as well supported as guidelines.
A body of research confirms the potentially deleterious effects of unsubstantiated, low quality recommendations in clinical practice guidelines on patient care. A review of articles published in a high impact journal found that over a span of 10 years, medical practice was reversed in 146 cases after one or more higher quality trials refuted the purported benefit of a recommended practice or found that the benefits were outweighed by potential harms.22 However, once practices have been implemented, albeit on lower level evidence, reversal can be difficult.21 To avoid the implementation of unwarranted practices and delays in their reversal, many clinicians and researchers believe that new guidelines should be adopted only once rigorous evidence supporting their effectiveness, affordability, and practicality is available.2021222324
Even if enforcement of recommendations unsupported by evidence does not directly harm patients, it might lead to regulatory fatigue that diverts clinicians and facilities from their primary missions.212526272829 Efforts to comply could distract workers from implementing policies that are essential, and possibly result in patient harm.25 Time and resources could also be wasted. Fewer recommendations founded on higher quality evidence may help alleviate regulatory fatigue and improve patient outcomes.2530
This study detected several types of evidentiary problems. In some cases, The Joint Commission may have created overly broad standards that go beyond the evidence, possibly on the basis of internal considerations that are not made clear. In other instances, the supporting literature appears to be tangentially or minimally relevant. This latter problem may be attributable to lack of attention or to the unavailability of relevant evidence, although in at least some cases relevant citations might have been inadvertently overlooked. For example, during the analysis for this study, the authors came across published high level evidence that was not cited in support of particular requirements.
Not only can outside observers be confused about the steps involved in The Joint Commission’s development process for standards, but in some cases the lack of transparency of this process might also be difficult for The Joint Commission to manage internally, thus resulting in directives that may not be aligned with patients’ best interests. For illustration, in 2002 The Joint Commission published measures requiring clinicians to obtain blood cultures before administering antibiotics to patients with community acquired pneumonia. It was subsequently found that the standards directly opposed the evidence already established in the literature, and transparency about the evidence that had informed the recommendations was minimal.31 Similarly, in 2000 The Joint Commission produced standards to address the problem of undertreatment of pain. These standards were founded on small, low level evidence studies that suggested a benefit from adhering to these standards, and subsequent reports of copious adverse events related to overtreatment of pain soon began to emerge.32
It is possible that literature supporting some recommendations by The Joint Commission cannot be found, given the nature and type of recommendation. In such cases, it would be prudent either to qualify the recommendation as a suggestion rather than a requirement or to narrow down the recommendation so that it can be supported by evidence. Greater specificity in recommendations and their evidentiary basis may also motivate increased compliance and make this more achievable.
Strengths and limitations of this study
This study has several limitations. We only assessed The Joint Commission R3 reports and only reviewed the references cited in each report. It is possible that The Joint Commission did not cite higher quality evidence that may have been available, including unpublished internal data. Indeed, each report includes the disclaimer “not a complete literature review,” so it is possible that the reports are not heavily referenced and that additional supporting documentation or expert testimony is not disclosed. However, it seems unlikely that more appropriate published references were reviewed by The Joint Commission but not included in the R3 reports, as abbreviated or abridged reference lists tend to include the strongest and most relevant references. In support of this, each R3 report includes the statement, “The references provide evidence that supports the requirement.” To the extent that The Joint Commission enforces de facto national standards, transparency in communicating the basis for new regulations is a reasonable expectation. So, if more convincing evidence supporting The Joint Commission’s recommendations was available to the commission but not included in its R3 reports, The Joint Commission might consider modifying its practice in the future to share this additional evidence more fully with stakeholders. Further, by promptly and clearly sharing any such additional evidence, The Joint Commission may also garner more stakeholder support for its recommendations and spur their more rapid adoption.
Although we focused on assessing evidence in this study, we concede that evidence might not be required for all recommendations from The Joint Commission. In several circumstances, evidence may be unnecessary or unobtainable—for example, when standards are based on self-evident or widely accepted imperatives for which acquisition of evidence is exceedingly difficult. Similarly, standards developed in response to unanticipated and emerging crises (eg, pandemics) may be implemented before evidence of their utility is determined, owing to the urgency of the situation. Standards may also be designed to prevent rare but extremely grave adverse outcomes, and because of the rarity of these adverse outcomes, it might be difficult to acquire data on the specific utility of the measures. Finally, standards developed to satisfy particular legal or regulatory mandates might not require evidence, as these standards are founded on governmental oversight authority. While there are thus several scenarios in which evidence may not be required for standards, it would, in each such case, be useful for agencies that set these standards to transparently acknowledge the lack of supportive evidence and to explain on what basis the standards were being implemented. Since even the most seemingly reasonable recommendations can have unintended negative consequences or take time and effort away from more useful activities, there is a strong argument for explaining the need for each standard.31 Indeed, The Joint Commission does provide a written rationale for each of its actionable standards, but at present these explanations do not routinely weigh the relative impact of evidence versus other bases for implementing standards.
Other limitations of this study include the choice to focus only on The Joint Commission standards. Standards can be used to ensure quality and safety of care but are not always appropriate for achieving healthcare safety goals. Sometimes, less directive standards—or even guidelines—might not be the answer. Specifically, quality improvement and improved patient safety could also require focused attention on adaptation and resilience in the context of uncertainty. Although standards work well to guard against known system vulnerabilities that lead to error, many uncertain system vulnerabilities and events might also lead to error. Unwanted events as a result of uncertainty are more difficult to mitigate because the causal mechanisms involved are unknown, and this represents a commonly unaddressed system safety issue in healthcare.33 In other words, when risk prevention strategies are well defined and objectively measurable, as in the case of ensuring proper patient identification, the implementation of particular standards may provide clear benefits. However, less quantifiable or predictable system issues, such as errors in interpersonal communication or cascading system failures (eg, cyberterrorism, simultaneous failure of different types of equipment, unanticipated acute supply shortages) may be addressed by strategies such as accident and incident analysis, restricting staff autonomy in high risk situations, encouraging monitoring and adaptation to the crisis, and in-the-moment problem solving by staff trained to expect and manage uncertainty.34 Healthcare workers must be trained and prepared accordingly.
Notably, this study focused only on The Joint Commission standards and on a one year period during which newly implemented standards came into effect. We avoided a historical assessment of standards to emphasize current practice in The Joint Commission’s standard implementation, which has evolved over time. The cross sectional approach was also designed to capture different types of standards: by including all R3 reports that met the time criteria, we minimized the bias associated with studying a particular content area, for which the level of evidence might not have been representative of The Joint Commission standards as a whole. Nonetheless, the choices we made could limit generalizability to other nationwide or international quality and safety standards over the preceding decades.
Importantly, all but two of the actionable standards we reviewed addressed acute or chronic pain. Thus, our findings might not be broadly applicable to all R3 reports by The Joint Commission. However, The Joint Commission should be expected to pay particularly close attention to the evidentiary basis for pain management standards, given its experience in producing standards for the management of pain.
The direct and indirect costs to healthcare systems of adherence to accreditation rules are believed to be substantial. One systematic review of the cost of health accreditation, including accreditation by The Joint Commission, estimated that 0.2-1.7% of total annual healthcare system operating costs were spent on accreditation adherence.35 A case report estimated that it would cost an individual institution $326 784 (1% of the annual budget) to adhere to The Joint Commission specific accreditation standards.36 These statistics are likely underestimates because the indirect costs of adherence, which are likely to be much greater than the direct costs, are more difficult to measure and report.
Although the lack of transparency in the development process for standards is unsatisfactory for clinicians and researchers, it is not unusual, and it is mirrored in regulatory processes in other countries. For example, the International Society for Quality in Health Care External Evaluation Association (a separate entity established by the International Society for Quality in Health Care) provides a list of standards and criteria for adhering to these standards but does not provide supporting references.37 In contrast, the French National Authority for Health provides an accreditation handbook containing the standards it evaluates and notes that the development process for these standards includes a review of references pertaining to healthcare experiences as well as a consensus process. References were recently made available in the accreditation manual.38 Similarly, in the UK, the National Institute for Health and Care Excellence quality standards are accompanied by a topic library that collates evidence from NICE guidelines or NICE accredited evidence sources that was used by the Quality Standards Advisory Committee during the development of the quality standards.39
In general, recent actionable standards issued by The Joint Commission are seldom supported by high quality data referenced within the issuing documents. While performance standards play a critical role in ensuring high healthcare quality, and The Joint Commission remains an exemplary guardian of patients’ health and safety, The Joint Commission might consider being more transparent about the quality of evidence and underlying rationale supporting each of its recommendations. When higher level evidence is available to The Joint Commission but not cited in public documents, it might consider releasing this evidence publicly to increase confidence in its recommendations. In cases in which it believes lower level evidence (eg, unanimous expert consensus from a diverse group) could be sufficient to support a new actionable standard, or when it believes evidence might not even be necessary or appropriate (eg, a national emergency), a detailed, publicly available rationale would be helpful. Although these changes would increase work and cost, they might also increase enthusiasm for the uptake of The Joint Commission actionable standards, and further align the commission with the conventions of evidence based medicine. Finally, given its pivotal role in ensuring patient safety, The Joint Commission might want to increase its emphasis on the role of uncertainty in healthcare environments. Training staff to manage uncertainty through means other than standards and guidelines would complement the use of standards for more well defined and predictable problems.
What is already known on this topic
In the US, the Joint Commission on Accreditation of Healthcare Organizations mandates standards for safety in hospitals and other healthcare facilities
For each actionable standard, The Joint Commission provides a rationale, including supporting references
No study has systematically assessed the evidentiary basis underlying The Joint Commission’s actionable standards
What this study adds
Based on documents published by The Joint Commission in 2018 and 2019, six of 20 (30%) actionable standards were directly supported by the references provided, with most assigned a GRADE D strength of recommendation
The remaining actionable standards were either only partly supported (6/20, 30%) or not supported (8/20, 40%), suggesting that actionable standards recently issued by The Joint Commission are seldom supported by high quality data referenced in the issuing documents
The Joint Commission might consider being more transparent about its assessment of evidence, including providing more complete and convincing evidence for new standards, or explaining why lower level evidence might be sufficient in particular instances
Data availability statement
All data relevant to the study can be requested from the corresponding author (email@example.com).
We thank Bianca Y Kang for her invaluable contributions in extracting the data and in creating the manuscript figure and table. We also thank Jacob M Schauer for providing his expertise in statistical analysis.
Contributors: All authors conceived and designed the study and acquired, analyzed, and interpreted the data. SAI, KAR, and MA drafted the manuscript. All authors critically revised the manuscript for important intellectual content. SAI and KAR performed the statistical analysis. MA obtained funding. SAI, KAR, and EP provided administrative, technical, or material support. MA supervised the study. MA had full access to the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. MA is the guarantor. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.
Funding: This study was supported by the Department of Dermatology, Northwestern University. The funders had no role in the study design; the collection, analysis, or interpretation of data; the writing of the report; or the decision to submit the article for publication.
Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: funding support from the Department of Dermatology, Northwestern University; no support from any organization for the submitted work; no financial relationships with any organizations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.
The lead author (MA) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.
Dissemination to participants and related patient and public communities: We plan to share our work with readers by issuing a press release at the time of publication through Northwestern University’s media relations; tweeting about our work concurrently through our lab Twitter account (@AlamLab); distributing a plain language summary to regulators, advocacy groups, and clinicians; and writing an editorial with other physicians and patients in another journal that references this work.
Provenance and peer review: Not commissioned; externally peer reviewed.
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.