
Rapid response to:

Research Christmas 2014: Media Studies

Televised medical talk shows—what they recommend and the evidence to support their recommendations: a prospective observational study

BMJ 2014; 349 doi: https://doi.org/10.1136/bmj.g7346 (Published 17 December 2014) Cite this as: BMJ 2014;349:g7346


Re: Televised medical talk shows—what they recommend and the evidence to support their recommendations: a prospective observational study

I have not personally watched the Dr Oz show or The Doctors, but I am now intrigued to do so and conduct my own 'biased' study. The Dr Oz show appears to have started in September 2009, and I can only assume that it has evolved into its current form over the years, so a sample drawn from earlier years would give different results than a more recent one.

I agree with some of the commenters here about the methodological flaws of the study, despite the lengthy response provided by the authors, which seems dismissive of the validity of the points that have been raised.

If I understand correctly, it is being argued that the purpose of this study was to describe the evidence-based medicine (EBM) content of these shows. Considering how this publication has been misinterpreted and misused, it is important to point out that some of the recommendations categorized as having 'no evidence' may simply never have been investigated, or may have conflicting evidence; that does not mean the recommendations were made against existing evidence.

My understanding from all my reading is that these shows provide answers to some of the mundane questions of daily life. In these situations, do we know what an average physician would recommend? One of the authors' examples of a 'no evidence' recommendation was that covering one's mouth with an arm while sneezing prevents the transmission of flu. What would an EBM-practicing physician recommend in this instance? If the sample contained many such subjective or common-sense recommendations, then what did we really learn from this investigation? A more robust design would have been to compare the same recommendations against several comparable medical practices. At that point, things would of course get very difficult, because you would need to go back and ascertain data from an actual practice, as was done with the TV shows. It is no secret that physicians have different practice patterns, and many do not agree on certain treatment recommendations; just covering another physician's practice for a brief period can be quite educational.

Were these doctors making recommendations that jeopardized the public's lives and were harmful? Were they giving advice that has been unequivocally shown to be wrong? If not, and the shows' recommendations were assessed by the three experts in the study, then all this publication presents is the opinion of those experts, not a systematic review or a 'prospective' review (whatever that really means in an epidemiological design sense).

In any case, it was a good effort to get public attention. As an epidemiologist, I am always wary of the inferences drawn from my studies; hence, I hope to be cognizant of these design issues. Considering the professional discourse building around these TV shows and the fear of the public being duped by quack medicine, I sense the authors and the editor felt a responsibility to get this information out to the public. However, sometimes it is better not to pursue a study than to misinform the public; that is how we end up with recommendations that keep changing with each new study, not because of new scientific discoveries but because of inherent design flaws. Just because a study was published in The BMJ does not mean it is a great study, as we have seen from many examples in this and other respected journals in recent history.

Competing interests: No competing interests

28 April 2015
Ayse Tezcan
PhD candidate
Davis, CA, USA
UC Davis Graduate Group in Epidemiology