^{a}Department of Public Health Sciences, St George's Hospital Medical School, London SW17 0RE
^{b}ICRF Medical Statistics Group, Centre for Statistics in Medicine, Institute of Health Sciences, PO Box 777, Oxford OX3 7LF

- Correspondence to: Professor Bland

## Article

Many quantities of interest in medicine, such as anxiety or degree of handicap, are impossible to measure explicitly. Instead, we ask a series of questions and combine the answers into a single numerical value. Often this is done by simply adding a score from each answer. For example, the mini-HAQ is a measure of impairment developed for patients with cervical myelopathy.1 This has 10 items (table 1) recording the degree of difficulty experienced in carrying out daily activities. Each item is scored from 1 (no difficulty) to 4 (can't do). The scores on the 10 items are summed to give the mini-HAQ score.

When items are used to form a scale they need to have internal consistency. The items should all measure the same thing, so they should be correlated with one another. A useful coefficient for assessing internal consistency is Cronbach's alpha.2 The formula is:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum s_i^2}{s_T^2}\right)$$

where *k* is the number of items, *s*_{i}^{2} is the variance of the *i*th item, and *s*_{T}^{2} is the variance of the total score formed by summing all the items. If the items are not simply added to make the score, but first multiplied by weighting coefficients, we multiply each item by its coefficient before calculating the variance *s*_{i}^{2}. Clearly, we must have at least two items, that is *k* > 1, or α will be undefined.
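The formula above can be sketched directly in code. This is a minimal pure Python illustration, assuming the data are laid out with one row per subject and one column per item; the function name and the small example data set are ours, not from the article:

```python
from statistics import variance  # sample variance (divisor n - 1)

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of subjects, each a list of k item scores."""
    k = len(rows[0])
    # s_i^2: sample variance of each item across subjects
    sum_item_vars = sum(variance(col) for col in zip(*rows))
    # s_T^2: sample variance of the total score (sum of the k items)
    total_var = variance(sum(row) for row in rows)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Illustrative (invented) data: 5 subjects answering 3 items scored 1-4
scores = [[1, 2, 1],
          [2, 2, 3],
          [4, 3, 4],
          [3, 4, 3],
          [1, 1, 2]]
print(round(cronbach_alpha(scores), 2))  # → 0.9
```

If the items carry weighting coefficients, each column would be multiplied by its coefficient before the variances are taken, as the text describes.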

The coefficient works because the variance of the sum of a group of independent variables is the sum of their variances. If the variables are positively correlated, the variance of the sum will be increased. If the items making up the score are all identical and so perfectly correlated, all the *s*_{i}^{2} will be equal and *s*_{T}^{2} = *k*^{2}*s*_{i}^{2}, so that Σ*s*_{i}^{2}/*s*_{T}^{2} = 1/*k* and α = 1. On the other hand, if the items are all independent, then *s*_{T}^{2} = Σ*s*_{i}^{2} and α = 0. Thus α will be 1 if the items are all the same and 0 if none is related to another.
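These two extremes can be checked numerically. The sketch below uses simulated data (the data and function names are ours): when every "item" is the same measurement repeated, α comes out as exactly 1; when the items are unrelated noise, the sample α sits near 0:

```python
import random
from statistics import variance

random.seed(1)

def alpha(rows):
    """Cronbach's alpha: rows are subjects, columns are items."""
    k = len(rows[0])
    return (k / (k - 1)) * (
        1 - sum(variance(c) for c in zip(*rows)) / variance(sum(r) for r in rows))

# Perfectly correlated: each subject's k = 5 "items" are one value repeated
base = [random.gauss(0, 1) for _ in range(1000)]
identical = [[x] * 5 for x in base]
print(round(alpha(identical), 6))    # 1.0 (up to floating point)

# Independent: each item is unrelated noise, so the item variances simply add
independent = [[random.gauss(0, 1) for _ in range(5)] for _ in range(1000)]
print(round(alpha(independent), 2))  # close to 0
```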

For the mini-HAQ example, the standard deviations of each item and the total score are shown in the table. We have Σ*s*_{i}^{2} = 11.16, *s*_{T}^{2} = 77.44, and *k* = 10. Putting these into the equation, we have

$$\alpha = \frac{10}{9}\left(1 - \frac{11.16}{77.44}\right) = 0.95$$

which indicates a high degree of consistency.
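The arithmetic can be confirmed by plugging the summary figures from the text straight into the formula (variable names are ours):

```python
k = 10                  # number of items in the mini-HAQ
sum_item_vars = 11.16   # sum of the 10 item variances, from the table
total_var = 77.44       # variance of the total score

alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)
print(round(alpha, 2))  # → 0.95
```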

For scales which are used as research tools to compare groups, α may be less than in the clinical situation, when the value of the scale for an individual is of interest. For comparing groups, values of 0.7 to 0.8 are regarded as satisfactory. For the clinical application, much higher values of α are needed. The minimum is 0.90, and α = 0.95, as here, is desirable.

In a recent example, McKinley *et al* devised a questionnaire to measure patient satisfaction with calls made by general practitioners out of hours.3 This included eight separate scores, which they interpreted as measuring constructs such as satisfaction with communication and management, satisfaction with doctor's attitude, etc. They quoted α for each score, ranging from 0.61 to 0.88. They concluded that the questionnaire has satisfactory internal validity, as five of the eight scores had α > 0.7. In this issue Bosma *et al* report similar values, from 0.67 to 0.84, for assessments of three characteristics of the work environment.4

Cronbach's alpha has a direct interpretation. The items in our test are only some of the many possible items which could be used to make the total score. If we were to choose two random samples of *k* of these possible items, we would have two different scores each made up of *k* items. The expected correlation between these scores is α.
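This interpretation can be illustrated by simulation. The sketch below assumes a simple one-factor model (every candidate item equals the subject's true level plus independent noise; this model and all names are our assumption, not part of the article). Two independent sets of *k* items are drawn for the same subjects, and the correlation between the two total scores is compared with α computed from one set:

```python
import random
from statistics import variance, mean

random.seed(2)
n, k, noise_sd = 20000, 10, 2.0

# True underlying level for each of n subjects
true = [random.gauss(0, 1) for _ in range(n)]

def draw_items():
    """One random sample of k items per subject (n x k matrix)."""
    return [[t + random.gauss(0, noise_sd) for _ in range(k)] for t in true]

def alpha(rows):
    return (k / (k - 1)) * (
        1 - sum(variance(c) for c in zip(*rows)) / variance(sum(r) for r in rows))

def pearson(x, y):
    mx, my = mean(x), mean(y)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    sxx = sum((u - mx) ** 2 for u in x)
    syy = sum((v - my) ** 2 for v in y)
    return sxy / (sxx * syy) ** 0.5

a, b = draw_items(), draw_items()
totals_a = [sum(r) for r in a]
totals_b = [sum(r) for r in b]
# The two numbers agree closely, as the interpretation predicts
print(round(alpha(a), 2), round(pearson(totals_a, totals_b), 2))
```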