Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist
BMJ 2016;352 doi: https://doi.org/10.1136/bmj.i1174 (Published 17 March 2016) Cite this as: BMJ 2016;352:i1174
Re: Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist
Agarwal et al (1) describe the development of the WHO mERA checklist, a set of guidelines designed and piloted by experts to improve the quality and consistency of mHealth intervention reporting. The guidelines comprise 16 items, each accompanied by a detailed explanation and justification. The authors argue that uptake of these guidelines will improve the standardisation of reporting and, by extension, the quality of the mHealth scientific evidence base.
The development of these guidelines is timely, as mobile devices become more user friendly, computationally powerful, and accessible. Mobile phone applications (“apps”) use increasingly sophisticated technology to deliver highly personalised, timely, and relevant experiences that can be integrated into people’s everyday lives. While other attempts to standardise the reporting of digital health interventions, such as the CONSORT-EHEALTH guidelines (2), allow for the inclusion of mHealth programs, it is increasingly clear that as apps evolve, a universal set of guidelines covering both web-based and mobile phone-based technology has limited applicability and usefulness.
Compared with traditionally delivered and web-based interventions, the relatively short development cycles and real-time improvements of apps present unique challenges to researchers in this area. For example, reporting on the fidelity (item 16 of the mERA checklist) and replicability (item 13) of an intervention can assume a sequential research process in which the intervention is developed and subsequently tested as a static entity, in separate and non-overlapping steps. This is often impractical, as complex mHealth interventions require ongoing troubleshooting and updating. Further consideration of new and evolving methodologies as they relate to the reporting of program fidelity and replicability may thus be useful. For example, Mohr et al suggest that using continuous evaluation methods (3) and focusing on the evaluation of core intervention principles (4), rather than on the intervention alone, has the potential to provide more usable and durable information.
The mERA guidelines are promising in that they incorporate reporting of usability testing and user feedback. However, it may be that the use and reporting of a well developed, validated measure of app/program quality should be recommended so that this process can be standardised. For example, the Mobile App Rating Scale (5) is now available, and its use could help facilitate comparisons of the quality of mHealth interventions. Further, we know user engagement to be a critical component of personalised health technology, the benefits of which may dissipate with poor user experience (6). Hence, it is important not only to report on a validated, standardised measure of app quality, but also to report on steps taken to optimise user experience systematically, at various stages of intervention development and testing.
We conclude that these guidelines are an important first step in improving the reporting and conduct of mHealth intervention research. Further nuancing may be required to enhance reporting of up-to-date research on high quality, engaging mHealth interventions.
1. Agarwal S, LeFevre AE, Lee J, L’Engle K, Mehl G, Sinha C, et al. Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist. BMJ 2016;352:i1174.
2. Eysenbach G. CONSORT-EHEALTH: improving and standardizing evaluation reports of web-based and mobile health interventions. J Med Internet Res 2011;13(4):e126.
3. Mohr DC, Cheung K, Schueller SM, Hendricks Brown C, Duan N. Continuous evaluation of evolving behavioral intervention technologies. Am J Prev Med 2013;45(4):517–23.
4. Mohr DC, Schueller SM, Riley WT, Brown CH, Cuijpers P, Duan N, et al. Trials of intervention principles: evaluation methods for evolving behavioral intervention technologies. J Med Internet Res 2015;17(7).
5. Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile App Rating Scale: a new tool for assessing the quality of health mobile apps. JMIR mHealth uHealth 2015;3(1).
6. Oldenburg B, Taylor CB, O’Neil A, Cocker F, Cameron LD. Using new technologies to improve the prevention and management of chronic conditions in populations. Annu Rev Public Health 2015;36:483–505.
Competing interests: No competing interests
Sir: Agarwal et al’s (1) discussion of the WHO mHealth Technical Evidence Review Group’s mHealth evidence reporting and assessment (mERA) checklist proposes strategies for improving the completeness of reporting of mobile health interventions. The 16-item checklist includes explanations and examples intended to standardize mHealth evidence reporting and to improve the quality of the mHealth evidence base. Our analysis indicates a broad-reaching checklist that is in alignment with our “Guidelines for Personalized Health Technology” (2).
Over the past eighteen months, we have convened public and private sector stakeholders to develop a framework that proactively addresses ethical, legal, and social concerns associated with personalized health technologies (e.g. wearables, smartwatches, and mobile health applications) and their associated data. Following a three-month global public consultation to identify best practices, we released the finalized Guidelines in March 2016. They include the following:
1. Build health technologies informed by science: Integrate scientific and behavioral evidence into the design of health technologies to better understand health risks and outcomes.
2. Scale affordable health technologies: Develop cost-effective health technologies that are accessible to all populations to minimize health inequalities.
3. Guide interpretation of health data: Facilitate interpretation of health data through software design to support better health literacy.
4. Protect and secure health data: Embed privacy and security design features into health technology to ensure end-to-end protection.
5. Govern the responsible use of health technology and data: Disclose practices associated with the governance of health technologies and data to create shared values for all stakeholders.
We will now convene stakeholders in a working group to pilot the guidelines to create a self-regulatory framework for the personalized health technology industry. While mERA is a noteworthy checklist for standardizing reporting on scientific evidence, improving health using these technologies must focus on their broader ethical, legal, and social implications. As Melvin Kranzberg eloquently stated, “technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves”(3).
1 Agarwal S, LeFevre A, Lee J, et al. Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist. BMJ 2016; 352:i1174.
2 Christie G, Patrick K, Yach D. Guidelines for Personalized Health Technology. The Vitality Group 2016.
3 Kranzberg M. Technology and History: Kranzberg's Laws. Technology and Culture 1986;27(3):544-60.
Competing interests: No competing interests
Sir: Agarwal et al (1) kindly draw our attention to a poorly structured area of medical writing. Digital healthcare, whether mobile or monitoring, desktop telemedicine or smartphone videocalling, has developed rapidly, and users are unclear about its value. Is this the allure of a new fad or a nirvana (2)? At the current rate of assessment, even the better funded statutory bodies, such as the US Food and Drug Administration, will not finish assessing the apps that existed in 2015 for decades, let alone those that have since appeared. Thankfully, Stoyanov et al (3) have developed the Mobile App Rating Scale, which dovetails nicely with the WHO mHealth Technical Evidence Review Group guidelines. The widespread adoption of both should bring much needed clarity and comparability to a currently commercially driven area, assisting doctors in rational prescribing and helping to curtail burgeoning costs (4).
(1) Agarwal S, LeFevre A, Lee J, et al. Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist. BMJ 2016;352:i1174.
(2) Wise MJ. Digital healthcare: fool's gold or the promised land? European Psychiatry 2016;33. http://dx.doi.org/10.1016/j.eurpsy.2016.01.849
(3) Stoyanov SR, Hides L, Kavanagh DJ, et al. Mobile App Rating Scale: a new tool for assessing the quality of health mobile apps. JMIR mHealth uHealth 2015;3(1).
(4) Sartorius N. Mental health in context. Impact of economic policies on health services. European Psychiatry 2016;33. http://dx.doi.org/10.1016/j.eurpsy.2016.01.873
Competing interests: Secretary of the European Psychiatric Association's Section on TelePsychiatry