Intended for healthcare professionals

Rapid response to:

Papers

Are these data real? Statistical methods for the detection of data fabrication in clinical trials

BMJ 2005; 331 doi: https://doi.org/10.1136/bmj.331.7511.267 (Published 28 July 2005) Cite this as: BMJ 2005;331:267

Rapid Response:

Wider application of these methods

The methodologies suggested in this paper have further applications in
the reviewing process, to check for data errors that may have arisen not
only from fraud but also from faulty data entry. In a recent paper under
review I found a very large s.d. in the BMI field for one arm of a
clinical trial; this turned out to be due to a misplaced decimal point.
The BMI should have been ~30 but was ~300, because the decimal point had
been omitted in the weight field.
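The screening described above can be sketched in a few lines of Python. This is an illustrative example, not the checks actually used in review: the plausible range and the sample values are hypothetical.

```python
import statistics

def screen_variable(values, plausible_range):
    """Summarise a variable and flag values outside a reviewer-chosen
    plausible range; a very large s.d. often accompanies such outliers."""
    low, high = plausible_range
    return {
        "mean": statistics.mean(values),
        "sd": statistics.stdev(values),
        "outliers": [v for v in values if not low <= v <= high],
    }

# A mostly plausible BMI sample with one misplaced-decimal entry
# (300.0 recorded instead of 30.0).
bmis = [24.5, 28.1, 31.2, 27.4, 300.0, 29.8]
report = screen_variable(bmis, (10, 80))
print(report["outliers"])  # the 300.0 entry is flagged
```

The inflated s.d. alone would prompt a closer look even before the individual outlier is inspected.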

I am not sure that comparing the s.d.s is always the correct
comparison. In the example in this paper there is a shift in location
between the two trials, as one trial had hypertensive patients only. In
this context the coefficient of variation (c.v.), the s.d. divided by
the mean, may be a better measure of dispersion.

Data from other papers on the same disease group might serve as
better comparators than data of the same type from a general population.
Consequently, as a reviewer it may prove useful to maintain a database
of the means and s.d.s of key variables; for me this would be for
populations of subjects with diabetes.

Finding outliers when comparing s.d.s and means might also indicate
that the data were singly entered late at night by a junior researcher
with no validation checks (as in my example above). Double-entry data
systems with validation should be required by review and ethics
committees.
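The double-entry principle is simple to sketch: each record is keyed twice by independent operators, and any field on which the two entries disagree is resolved against the source document. The field names and values below are hypothetical.

```python
def compare_double_entry(first, second):
    """Return the fields where two independent entries of the same
    record disagree; discrepancies are resolved against the source
    document rather than trusting either entry."""
    return [field for field in first if first[field] != second[field]]

entry_a = {"id": 101, "weight_kg": 82.4, "height_m": 1.66}
entry_b = {"id": 101, "weight_kg": 824, "height_m": 1.66}  # misplaced decimal

print(compare_double_entry(entry_a, entry_b))  # ['weight_kg']
```

A misplaced decimal point, as in the BMI example above, would be caught at entry time rather than surfacing later as an implausible s.d.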

Competing interests:
None declared


12 August 2005
Irene M Stratton
Senior Medical Statistician, statistical advisor to Diabetic Medicine
Oxford Centre for Diabetes, Endocrinology and Metabolism, Churchill Hospital, Headington, Oxford OX3 7LJ
Diabetes Trials Unit