Troubleshooting
Problem
I have an ordinal variable with a scale of 1 to 5 in IBM SPSS Statistics. The mean plus one standard deviation is greater than 5. How can a normal distribution imply that a subset of cases falls above the maximum possible value?
Resolving The Problem
By definition (Social Statistics, Hubert Blalock, McGraw-Hill, 1972), the standard deviation is "the square root of the arithmetic mean of the squared deviations from the mean" (pp. 78-79). Note that this definition does not assume normality; it simply describes the deviations from the mean for that sample.
For example, consider a heavily skewed variable such as POPULAR in the 1991 General Social Survey. The mean is 4.6 and the standard deviation is .756. If we assume a normal curve with a mean of 4.6 and this standard deviation, how can about 16% of the cases be expected to have values greater than 5.356 when the maximum possible value is 5? The problem lies in assuming normality at this point. The standard deviation is simply telling us that the square root of the average squared deviation is .756.

If we divide the standard deviation by the square root of N (here, N = 982), we get a value of .024, which is the standard error of the mean. This is where the normal distribution becomes relevant. Given the central limit theorem, we can treat the sample mean of 4.60 as an estimate of the "true" population mean, with a standard error of .024: there is a 68.26% chance that the "true" mean lies between 4.576 and 4.624, and about a 95% chance that it lies between 4.552 and 4.648.
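The arithmetic above can be checked with a short script. The mean, standard deviation, and N come from the POPULAR example; the script itself is only an illustration:

```python
import math

mean = 4.6   # mean of POPULAR, 1991 General Social Survey
sd = 0.756   # standard deviation
n = 982      # number of cases

# Standard error of the mean: SD divided by the square root of N
se = sd / math.sqrt(n)
print(round(se, 3))

# 68.26% interval (mean +/- 1 standard error)
print(round(mean - se, 3), round(mean + se, 3))

# Roughly 95% interval (mean +/- 2 standard errors)
print(round(mean - 2 * se, 3), round(mean + 2 * se, 3))
```

The intervals describe uncertainty about the population mean, not the spread of individual responses, which is why they stay comfortably inside the 1-to-5 scale even though mean plus one standard deviation does not.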
Historical Number
72054
Document Information
Modified date:
16 April 2020
UID
swg21478865