Measures of Variability (part 1)

Measures of variability (also called measures of dispersion) are values that describe how spread out the scores in a data set are. In other words, variability shows how far apart the scores are in a distribution. A small measure of variability indicates that the scores cluster close to the mean, while a larger measure indicates that the scores are spread further from the mean. Variability is a measure of distance, so it can never be negative. This information is useful for describing data, and it gives the reader an idea of how different the participants’ scores were in a research study. There are several ways to report variability, and some are better than others. The most basic measure of variability is the range.

The range is simply the highest score (or value) minus the lowest. This works fine until a data set contains an outlier, in which case the range is quite misleading. For example, suppose you have two data sets and you want to know which one has the larger spread. If one data set has a range of 10 – 2 = 8, and the other has a range of 40 – 2 = 38, it would appear that the second data set has the more “spread out” scores because its range is so much larger. But what if the scores in the first data set are 3, 9, 2, 5, 4, 10 and the scores in the second data set are 5, 3, 2, 8, 6, 40? We can now see that the scores are essentially the same in each data set except for that one value of 40. This outlier in the second set gives us a range of 38, which in turn gives an inflated idea of the spread of these scores. This is one reason the range isn’t very useful. Another is that the range depends heavily on sample size: larger samples are likely to have a larger range.
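To make this concrete, here is a minimal Python sketch (mine, not from the original post) that computes the range for the two data sets above; `data_range` is a hypothetical helper name:

```python
def data_range(scores):
    """Range: highest score minus lowest score."""
    return max(scores) - min(scores)

set_one = [3, 9, 2, 5, 4, 10]   # range: 10 - 2 = 8
set_two = [5, 3, 2, 8, 6, 40]   # range: 40 - 2 = 38

print(data_range(set_one))  # 8
print(data_range(set_two))  # 38 -- inflated by the single outlier, 40
```

Even though five of the six scores in each set are nearly identical, the one outlier nearly quintuples the second range.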

There are better measures of variability we can use. Next time I will introduce the mean deviation, which takes into account the distance between each score and the mean of those scores.
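As a rough preview (my sketch, not the formula from part 2 of this series), the mean deviation can be computed as the average of the absolute distances between each score and the mean:

```python
def mean_deviation(scores):
    """Average absolute distance between each score and the mean."""
    mean = sum(scores) / len(scores)
    return sum(abs(x - mean) for x in scores) / len(scores)

# Using the first data set from above (mean = 5.5):
print(mean_deviation([3, 9, 2, 5, 4, 10]))  # ≈ 2.67
```

Unlike the range, this statistic uses every score in the data set, so it is less sensitive to a single extreme value than the range is.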
