
Chapter 1. What is epidemiology?

Epidemiology is the study of how often diseases occur in different groups of people and why. Epidemiological information is used to plan and evaluate strategies to prevent illness and as a guide to the management of patients in whom disease has already developed.
Like the clinical findings and pathology, the epidemiology of a disease is an integral part of its basic description. The subject has its special techniques of data collection and interpretation, and its necessary jargon for technical terms. This short book aims to provide an ABC of the epidemiological approach, its terminology, and its methods. Our only assumption will be that readers already believe that epidemiological questions are worth answering. This introduction will indicate some of the distinctive characteristics of the epidemiological approach.
All findings must relate to a defined population
A key feature of epidemiology is the measurement of disease outcomes in relation to a population at risk. The population at risk is the group of people, healthy or sick, who would be counted as cases if they had the disease being studied. For example, if a general practitioner were measuring how often patients consult him about deafness, the population at risk would comprise those people on his list (and perhaps also those on his partners' lists) who might see him about a hearing problem if they had one. Patients who, though still on the list, had moved to another area would not consult that doctor. They would therefore not belong to the population at risk.
The importance of considering the population at risk is illustrated by two examples. In a study of accidents to patients in hospital it was noted that the largest number occurred among the elderly, and from this the authors concluded that “patients aged 60 and over are more prone to accidents.” Another study, based on a survey of hang gliding accidents, recommended that flying should be banned between 11 am and 3 pm, because this was the time when 73% of the accidents occurred. Each of these studies based conclusions on the same logical error, namely, the floating numerator: the number of cases was not related to the appropriate “at risk” population. Had this been done, the conclusions might have been different. Differing numbers of accidents to patients and to hang gliders must reflect, at least in part, differing numbers at risk. Epidemiological conclusions (on risk) cannot be drawn from purely clinical data (on the number of sick people seen).
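The effect of bringing in the denominator can be seen in a minimal sketch. The counts and denominators below are invented for illustration, not data from either of the studies described above; the point is only that raw counts and rates can point in opposite directions.

```python
# Illustrative sketch only: the counts and denominators are invented,
# not data from the hospital or hang gliding studies described above.
accidents = {"under 60": 80, "60 and over": 120}            # hypothetical numerators
patients_at_risk = {"under 60": 1000, "60 and over": 3000}  # hypothetical denominators

for group, cases in accidents.items():
    rate = cases / patients_at_risk[group]
    print(f"{group}: {cases} accidents, {rate:.3f} accidents per patient")

# The raw counts point one way (120 > 80), but the rates point the other
# (0.040 < 0.080): relating the numerator to the population at risk can
# reverse the apparent conclusion.
```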
Implicit in any epidemiological investigation is the notion of a target population about which conclusions are to be drawn. Occasionally measurements can be made on the full target population. In a study to evaluate the effectiveness of dust control measures in British coal mines, information was available on all incident (new) cases of coal workers’ pneumoconiosis throughout the country.
More often observations can only be made on a study sample, which is selected in some way from the target population. For example, a gastroenterologist wishing to draw general inferences about long term prognosis in patients with Crohn’s disease might extrapolate from the experience of cases encountered in his own clinical practice. The confidence that can be placed in conclusions drawn from samples depends in part on sample size. Small samples can be unrepresentative just by chance, and the scope for chance errors can be quantified statistically. More problematic are the errors that arise from the method by which the sample is chosen. A gastroenterologist who has a special interest in Crohn’s disease may be referred patients whose cases are unusual or difficult, the clinical course and complications of which are atypical of the disease more generally. Such systematic errors cannot usually be measured, and assessment therefore becomes a matter for subjective judgement.
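The scope for chance error can be quantified with a standard confidence interval for a proportion. The sketch below uses the usual normal approximation and invented figures; it shows only how the interval narrows as the sample grows, and says nothing about systematic (selection) error, which cannot be estimated from the sample itself.

```python
import math

def proportion_ci(cases, n, z=1.96):
    """Approximate 95% confidence interval for a proportion (normal approximation)."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical figures: the same proportion of complicated cases observed in a
# small clinic series and in a series ten times larger.
for cases, n in [(12, 40), (120, 400)]:
    p, low, high = proportion_ci(cases, n)
    print(f"{cases}/{n}: {p:.2f} (95% CI {low:.2f} to {high:.2f})")

# Chance error shrinks with sample size; bias arising from how the cases were
# referred does not.
```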
Systematic sampling errors can be avoided by use of a random selection process in which each member of the target population has a known (non-zero) probability of being included in the study sample. However, this requires an enumeration or census of all members of the target population, which may not be feasible.
Often the selection of a study sample is partially random. Within the target population an accessible subset, the study population, is defined. The study sample is then chosen at random from the study population. Thus the people examined are at two removes from the group with which the study is ultimately concerned:
Target population → study population → study sample
This approach is appropriate where a suitable study population can be identified but is larger than the investigation requires. For example, in a survey of back pain and its possible causes, the target population was all potential back pain sufferers. The study population was defined as all people aged 20-59 from eight communities, and a sample of subjects was then randomly selected for investigation from within this study population. With this design, inference from the study sample to the study population is free from systematic sampling error, but further extrapolation to the target population remains a matter of judgement.
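In practice the second step amounts to drawing a simple random sample from an enumerated study population. The sketch below is schematic, with an invented sampling frame; a real survey would sample from its own register of residents.

```python
import random

# Invented sampling frame standing in for an enumerated study population
# (for example, all adults aged 20-59 on the registers of the surveyed communities).
study_population = [f"resident_{i:05d}" for i in range(24_000)]

random.seed(1)  # fixed seed so the illustration is reproducible
study_sample = random.sample(study_population, k=600)  # each member has the same known probability

print(len(study_sample), study_sample[:3])

# Randomisation protects the step from study population to study sample;
# the further step back to the target population remains a matter of judgement.
```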
The definition of a study population begins with some characteristic which all its members have in common. This may be geographical (“all UK residents in 1985” or “all residents in a specified health district”); occupational (“all employees of a factory”, “children attending a certain primary school”, “all welders in England and Wales”); based on special care (“patients on a GP’s list”, “residents in an old people’s home”); or diagnostic (“all people in Southampton who first had an epileptic fit during 1990-91”). Within this broad definition appropriate restrictions may be specified – for example, in age range or sex.
Oriented to groups rather than individuals
Clinical observations determine decisions about individuals. Epidemiological observations may also guide decisions about individuals, but they relate primarily to groups of people. This fundamental difference in the purpose of measurements implies different demands on the quality of data. An inquiry into the validity of death certificates as an indicator of the frequency of oesophageal cancer produced the results shown in Table 1.1.
Inaccuracy was alarming at the level of individual patients. Nevertheless, the false positive results balanced the false negatives, so the clinicians’ total (53 + 21 = 74 cases) was about the same as the pathologists’ total (53 + 22 = 75 cases). Hence, in this instance, mortality statistics in the population seemed to be about right, despite the unreliability of individual death certificates. Conversely, it may not be too serious clinically if Dr. X systematically records blood pressure 10 mm Hg higher than his colleagues, because his management policy is (one hopes) adjusted accordingly. But choosing Dr. X as an observer in a population study of the frequency of hypertension would be unfortunate.
 
Table 1.1 Cause of death diagnosed clinically compared with diagnosis at necropsy
Diagnosis of oesophageal cancer                            No. of cases
Diagnosed by clinician and confirmed by pathologist        53
Diagnosed by clinician and not confirmed by pathologist    21
First diagnosed post mortem                                22
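The arithmetic behind this reassurance can be set out explicitly from the figures in Table 1.1. A minimal sketch:

```python
# Figures from Table 1.1
confirmed       = 53  # diagnosed by clinician and confirmed by pathologist
false_positives = 21  # diagnosed by clinician, not confirmed at necropsy
false_negatives = 22  # first diagnosed post mortem

clinicians_total   = confirmed + false_positives  # 74 death certificates
pathologists_total = confirmed + false_negatives  # 75 cases at necropsy

print(clinicians_total, pathologists_total)  # 74 vs 75: the errors nearly cancel in aggregate,
                                             # even though 43 individual diagnoses were wrong
```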
Conclusions are based on comparisons
Clues to aetiology come from comparing disease rates in groups with differing levels of exposure – for example, the incidence of congenital defects before and after a rubella epidemic or the rate of mesothelioma in people with or without exposure to asbestos. Clues will be missed, or false clues created, if comparisons are biased by unequal ascertainment of cases or exposure levels. Of course, if everyone is equally exposed there will not be any clues – epidemiology thrives on heterogeneity. If everyone smoked 20 cigarettes a day the link with lung cancer would have been undetectable. Lung cancer might then have been considered a “genetic disease”, because its distribution depended on susceptibility to the effects of smoking.
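A comparison of this kind usually reduces to a ratio of incidence rates in the exposed and unexposed groups. The sketch below uses invented case counts and person-years, not the asbestos or rubella data, simply to show the form of the calculation.

```python
# Invented figures for illustration: cases and person-years of follow-up
exposed_cases,   exposed_person_years   = 30, 10_000
unexposed_cases, unexposed_person_years = 5,  50_000

rate_exposed   = exposed_cases / exposed_person_years      # 0.0030 per person-year
rate_unexposed = unexposed_cases / unexposed_person_years  # 0.0001 per person-year
rate_ratio     = rate_exposed / rate_unexposed

print(f"rate ratio = {rate_ratio:.0f}")  # a 30-fold excess in the exposed group

# The comparison is only as good as the ascertainment: if cases or exposures are
# counted more completely in one group than the other, the ratio is biased.
```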
Identifying high risk and priority groups also rests on unbiased comparison of rates. The Decennial Occupational Supplement of the Registrar General of England and Wales (1970-2) suggested major differences between occupations in the proportion of men surviving to age 65:
Table 1.2 Men surviving to 65, by occupation
Farmers (self employed)    82%
Professionals              77%
Skilled manual workers     69%
Labourers                  63%
Armed forces               42%
These differences look important and challenging. However, one must consider how far the comparison may have been distorted, either by inaccurate ascertainment of the deaths or of the populations at risk, or by selective influences on recruitment or retirement (especially important in the case of the armed forces).
Another task of epidemiology is monitoring or surveillance of time trends to show which diseases are increasing or decreasing in incidence and which are changing in their distribution. This information is needed to identify emerging problems and also to assess the effectiveness of measures to control old problems. Unfortunately, standards of diagnosis and data recording may change, and conclusions from time trends call for particular wariness.
The data from which epidemiology seeks to draw conclusions are nearly always collected by more than one person, often from different countries. Rigorous standardisation and quality control of investigative methods are essential in epidemiology; and if an apparent difference in disease rates has emerged, the first question is always “Might the comparison be biased?”
