Statistics
Measures Of Variability
Suppose that a teacher gave the same test to two different classes and obtained the following results: Class 1: 80%, 80%, 80%, 80%, 80%; Class 2: 60%, 70%, 80%, 90%, 100%. If you calculate the mean for both sets of scores, you get the same answer: 80%. But the collections of scores from which this mean was obtained are very different in the two cases. Statisticians distinguish cases such as this by measuring the variability of the sample. As with measures of central tendency, there are a number of ways of measuring the variability of a sample.
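A brief Python sketch (the list names class_1 and class_2 are just illustrative labels, not anything from the original text) confirms that the two sets of scores share the same mean:

```python
# Two classes with the same mean but very different spreads of scores
class_1 = [80, 80, 80, 80, 80]
class_2 = [60, 70, 80, 90, 100]

mean_1 = sum(class_1) / len(class_1)
mean_2 = sum(class_2) / len(class_2)
print(mean_1, mean_2)  # 80.0 80.0
```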
Probably the simplest method is to find the range of the sample, that is, the difference between the largest and smallest observation. The range of measurements in Class 1 is 0%, and the range in Class 2 is 40%. Knowing that fact alone gives a much better understanding of the data obtained from the two classes: in Class 1 the mean was 80% and the range was 0%, while in Class 2 the mean was 80% but the range was 40%.
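Using the same illustrative lists, the range is simply the largest score minus the smallest:

```python
class_1 = [80, 80, 80, 80, 80]
class_2 = [60, 70, 80, 90, 100]

# Range: largest observation minus smallest observation
range_1 = max(class_1) - min(class_1)
range_2 = max(class_2) - min(class_2)
print(range_1, range_2)  # 0 40
```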
Other measures of variability are based on the difference between any one measurement and the mean of the set of scores. This difference is known as the deviation. As you can imagine, the greater the deviations, the greater the variability. In the case of Class 2 above, the deviation of the first measurement from the mean is 20% (80% − 60%), and the deviation of the second measurement is 10% (80% − 70%).
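A sketch of the same calculation in Python; the sizes of the deviations (ignoring sign) match the 20% and 10% figures above:

```python
class_2 = [60, 70, 80, 90, 100]
mean_2 = sum(class_2) / len(class_2)  # 80.0

# Size of each score's deviation from the mean (sign ignored)
deviations = [abs(score - mean_2) for score in class_2]
print(deviations)  # [20.0, 10.0, 0.0, 10.0, 20.0]
```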
Probably the most common measures of variability used by statisticians are the variance and standard deviation. Variance is defined as the mean of the squared deviations of a set of measurements. To calculate the variance, find each of the deviations in the set of measurements, square each one, add all the squares, and divide by the number of measurements. In the example above, the variance would be equal to [(20)² + (10)² + (0)² + (10)² + (20)²] ÷ 5 = 200.
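The same arithmetic as a Python sketch. Like the article, it divides by the number of measurements (the population form of the variance); a sample variance would instead divide by one less than that number:

```python
class_2 = [60, 70, 80, 90, 100]
mean_2 = sum(class_2) / len(class_2)  # 80.0

# Variance: the mean of the squared deviations
squared_deviations = [(score - mean_2) ** 2 for score in class_2]
variance = sum(squared_deviations) / len(class_2)
print(variance)  # 200.0
```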
For a number of reasons, the variance is used less often in statistics than is the standard deviation. The standard deviation is the square root of the variance, in this case √200 ≈ 14.1. The standard deviation is useful because in any normal distribution a large fraction of the measurements (about 68%) lie within one standard deviation of the mean. Another 27% (for a total of about 95% of all measurements) lie within two standard deviations of the mean.
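Completing the sketch, the standard deviation is just the square root of the variance computed above; Python's standard library also provides statistics.pstdev for the same population standard deviation:

```python
import math
import statistics

class_2 = [60, 70, 80, 90, 100]
mean_2 = sum(class_2) / len(class_2)

# Standard deviation: square root of the (population) variance
variance = sum((score - mean_2) ** 2 for score in class_2) / len(class_2)
std_dev = math.sqrt(variance)
print(round(std_dev, 1))                     # 14.1
print(round(statistics.pstdev(class_2), 1))  # 14.1
```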