How to find the good and avoid the bad or ugly: a short guide to tools for rating quality of health information on the internet
Commentary: On the way to quality
BMJ 2002;324 doi: https://doi.org/10.1136/bmj.324.7337.598 (Published 09 March 2002) Cite this as: BMJ 2002;324:598
Rapid responses
Responding to Dr. Risk's commentary:
Regarding your point about relying on reputation of sites being
correlated with higher quality, you mention Google and cite a study
presented at MedNet2001. I have some questions and concerns.
First, regarding the study cited, using the HON principles as a
quality measure is problematic because I believe the 8 HON principles are
inadequate to deal with the current state of technology and marketing
savvy used on the Internet today. Other standards of quality, such as the
URAC accreditation standards, are more complex and detailed (URAC has 53
accreditation standards, not all of which are applicable to every health
web site).
Second, it appears that even compliance with these 8 principles in
the sample population of sites was very low although the authors state:
"The most important result of this study is the fact that the more
compliant is a site with HON principles, the higher number of links
receives (sic). We believe that this result implies a validation of the
number of inbound links as a quality marker." I do not have the data upon
which this statement is based. Perhaps you do? (When I access the JMIR
research paper that the abstract refers to, I find that this study does
not correlate inbound links with HON principles, but with ratings of third
parties like HealthAtoZ and Medical Matrix. Also, I must admit that I do
not understand how to interpret the data tables very well anyway ;-)
Third, I have looked at another measure of quality - privacy policy
compliance with fair information practice principles - and find that sites
that say they comply with HON principles rank much lower in compliance
than sites that comply with other "seals" (URAC and Hi-Ethics, for
example).
In short, I am not convinced that "reputation" is a good quality
indicator - I don't think the research to date is convincing or perhaps it
is too narrowly focused.
Linking policies are often vague and sometimes unethical to boot!
Reputation, as we here in the US have learned from Enron in the off-line
world and from the drkoop.com experience on the health Internet, is often
a poor marker for quality.
[DISCLOSURE: I am involved with a number of health Internet quality
initiatives. In addition to being the president of the Internet Healthcare
Coalition, I am also a member of URAC's health web site accreditation
committee. The views expressed here are my own and do not necessarily
represent the views of the Coalition or URAC.]
Competing interests: No competing interests
Re: Reputation as a Quality Marker
Dear Editor,
We have very much enjoyed reading Dr Risk’s commentary on his
perception of the quality of medical web sites and how to assess it. We
believe that such a complex subject needs a more intuitive approach than
merely fixed rules for webmasters at the time of their sites’ design, a
set of quality principles to take into account (by whom?), or warnings to
users about the deficits of this or that resource, and so on. In the
end, users search the Net, saving in “Favourites” what they like and
recommending (or linking to, if they are webmasters) those sites they
consider the best. It does not matter if those resources
have an imperfect design, are not written in correct grammar, or even
if they do not conform to the most current clinical guidelines. The fact
is that those sites are relatively the best ones, just as, for us, this week
the BMJ is the best medical journal in the world. And this is a relative
perception, of course.
Responding to Mr. Mack’s commentary: Our team has studied the value
of some webmetric and usage indexes as quality markers of paediatric web
sites for the last four years. Dr. Risk cited in his paper a communication
to MedNet2001 based on that research. As a paper on that work is being
considered for publication by a peer-reviewed medical journal, I must not
publicly give more details. In any case, we must confess that it is not an
original idea: we have only tried to apply to the Web the methods
used for decades to rank medical journals. Again, it is important to remember
that, like bibliometric indexes, webmetric indexes do not validate
a given resource in absolute terms but only help to rank resources by their
"relative" quality. Thus, in our opinion, Meric et al’s findings in this
week’s BMJ issue (1) do not detract from the value of certain popularity
indexes as quality markers.
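As an aside, the webmetric idea under discussion, counting inbound links and ranking sites relatively rather than validating them absolutely, can be illustrated with a minimal sketch. This is not the authors' actual (unpublished) method; the site names and link graph below are entirely hypothetical:

```python
# Minimal sketch of an "inbound links" webmetric index: rank sites by how
# many other sites link to them, yielding a relative ordering only.
# The link graph is hypothetical, for illustration.
from collections import Counter

# Outbound links: site -> list of sites it links to.
links = {
    "site_a": ["site_b", "site_c"],
    "site_b": ["site_c"],
    "site_d": ["site_c", "site_b"],
}

# Count inbound links for every site that is linked to.
inbound = Counter(target for targets in links.values() for target in targets)

# Relative ranking: most-linked-to site first.
ranking = sorted(inbound, key=inbound.get, reverse=True)
print(ranking)  # site_c (3 inbound links) ranks above site_b (2)
```

The point the letter makes survives in the sketch: the output orders sites against one another but says nothing about whether any of them is accurate or trustworthy in absolute terms.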
1. Meric F, Bernstam EV, Mirza NQ, et al. Breast cancer on the world wide web: cross sectional survey of quality of information and popularity of websites. BMJ 2002;324:577-81.
Competing interests: No competing interests