Opinion

Visualising harms: barely scratching the surface

BMJ 2022; 377 doi: https://doi.org/10.1136/bmj.o1230 (Published 16 May 2022) Cite this as: BMJ 2022;377:o1230

Linked Opinion

Visualising harms in publications of randomised controlled trials

Will Stahl-Timmins, data graphics designer, The BMJ

Good visuals and graphs can quickly convey substance in a way that words cannot. Graphics make a lasting impression and are easy to digest, but they are hard to make. Far from being just an additional eye-catching “surface” on top of a paper, they are an integral part of how science is explained. Often, not enough attention is paid to the visual element of science communication and why it matters. I was recently involved in a project, led by Rachel Phillips and colleagues at Imperial College London and several other UK medical schools, that aimed to recommend graph types for reporting harms data in clinical trials. However, while we collated a few simple and easy to make graph types, the surface of this topic has barely been scratched. We need to develop and use more innovative techniques that can present data from more complex trial designs, such as those with multiple outcomes and subgroups.

I was the sole representative of the information visualisation and data presentation community for the project, now published in The BMJ.1 Perhaps unsurprisingly, no other data visualisers specialising in health were available for recruitment. I provided a few quick sketches of ways that harms data might be presented and was invited to join a panel of experts (mostly statisticians) to score the usefulness of a selection of visualisations that could be produced in standard statistical software. This produced a final set of simple and potentially useful visualisations, some of which are not often used for visualising harms data. The authors may have identified existing visualisation techniques that are easy for statisticians and trialists to produce, but the project was not designed to come up with new ways of presenting information that have not been tried before. Although feedback on the usefulness of the techniques was sought from clinicians, the study has not given us a detailed understanding of how useful they would be for non-specialists such as practising clinicians, policy makers, and patients. For this, one might have to perform user testing or task-based research: asking people to answer questions about the distribution of adverse events in a dataset with different visualisations, for example.

There is a fundamental dichotomy at the heart of this project. On the one hand, encouraging good practice is laudable, and offering a “menu” of options to a statistician or designer can be useful inspiration. On the other hand, recommending existing visualisation types that are easy to create in standard statistics software could stifle innovation. The visualisation of data is such a creative and fast moving field that we should encourage ongoing experimentation. It is tempting to be too prescriptive, so I welcome the authors’ recommendation that researchers use multiple visualisations to make sense of data from different perspectives. Clinical datasets are so heterogeneous that looking at them in different ways can reveal surprising patterns.

Ultimately, named chart types may not be the best way to visualise complex datasets. Specialists often prefer to consider individual “elements” of data visualisation, such as position, size, colour, and shape, and to build bespoke visualisations for specific datasets, in the tradition of designers like Jacques Bertin, Colin Ware, and Stephen Few.2 3 4 I have written more about this in my PhD on the visualisation of health technology assessment data.5 Naming particular types of charts does, of course, help when using automated tools for generating visualisations. But it can lead to some awkward definitions for more complex visualisations: “Look, I made a three-dimensional force-directed treemap with a rainbow colour scale!”
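To make that idea concrete, here is a minimal sketch of building a figure directly from encoding choices (position, size, and colour) rather than picking a named chart type from a list. The adverse event numbers and the use of Python’s matplotlib are my own illustrative assumptions, not outputs or recommendations of the project.

```python
# A sketch of composing a bespoke harms figure from visual encodings.
# The adverse event data below are invented for illustration only.
import matplotlib.pyplot as plt

# Hypothetical events: (name, risk difference vs control, number of events, body system)
events = [
    ("Nausea",     0.04, 38, "Gastrointestinal"),
    ("Headache",   0.01, 25, "Neurological"),
    ("Rash",       0.06, 17, "Dermatological"),
    ("Dizziness", -0.02, 12, "Neurological"),
    ("Diarrhoea",  0.03, 29, "Gastrointestinal"),
]

colours = {"Gastrointestinal": "#1b9e77", "Neurological": "#d95f02", "Dermatological": "#7570b3"}

fig, ax = plt.subplots(figsize=(6, 3))
for y, (name, risk_diff, n_events, system) in enumerate(events):
    # Encoding choices: vertical position = event, horizontal position = risk
    # difference, marker size = number of events, colour = body system.
    ax.scatter(risk_diff, y, s=n_events * 10, color=colours[system], alpha=0.8)
    ax.text(-0.08, y, name, va="center", ha="right", fontsize=9)

ax.axvline(0, color="grey", linewidth=0.8)  # no difference between arms
ax.set_yticks([])
ax.set_xlabel("Risk difference (intervention - control)")
ax.set_xlim(-0.12, 0.12)
fig.tight_layout()
plt.show()
```

The point is not this particular figure, but that each encoding decision is made deliberately for the dataset at hand, rather than inherited from whatever a named chart type happens to impose.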

An interest in the presentation of data needs to begin somewhere, and offering a “menu” of options to consider can be a useful starting point. However, researchers working with adverse event datasets should also continue to experiment with different ways of presenting them, and to develop new techniques to add to the “menu.” For instance, very few visualisation types were identified that can adequately present multi-arm trials, or give a good overview of an entire harm profile from time-to-event data rather than just individual outcomes. Interactive figures offer many exciting possibilities for allowing readers to explore datasets, in ways that were beyond the aims of this project. We need to get truly under the surface of harms visualisation and explore new ways of presenting data, rather than simply continuing to use the same familiar techniques.
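As one illustration of the kind of experimentation I mean, the sketch below lays out a whole harm profile across a three-arm trial on a shared axis, so that arms can be compared event by event at a glance. Again, the arm names, event proportions, and plotting approach are invented assumptions for illustration, not part of the project’s recommendations.

```python
# A sketch of a multi-arm harm profile overview, using invented data.
import matplotlib.pyplot as plt

arms = ["Control", "Low dose", "High dose"]
adverse_events = ["Nausea", "Headache", "Rash", "Dizziness", "Fatigue"]

# Hypothetical proportions of participants reporting each event, per arm.
proportions = {
    "Control":   [0.05, 0.10, 0.02, 0.04, 0.08],
    "Low dose":  [0.09, 0.11, 0.05, 0.03, 0.12],
    "High dose": [0.15, 0.12, 0.09, 0.06, 0.18],
}
markers = {"Control": "o", "Low dose": "s", "High dose": "^"}

fig, ax = plt.subplots(figsize=(6, 3))
for arm in arms:
    # One row per adverse event; each arm gets a differently shaped marker,
    # so the full harm profile can be read across arms in a single view.
    ax.scatter(proportions[arm], range(len(adverse_events)),
               marker=markers[arm], label=arm)

ax.set_yticks(range(len(adverse_events)))
ax.set_yticklabels(adverse_events)
ax.set_xlabel("Proportion of participants with event")
ax.legend(frameon=False)
fig.tight_layout()
plt.show()
```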

Footnotes

  • Competing interests: none declared

  • Provenance and peer review: commissioned, not peer reviewed.

References