A measure of variability—such as the range, interquartile range, standard deviation, or variance—tells you how spread out (or consistent) the values are. If everyone in the population has the same value, then the measure of variability will be zero. Thus, a measure of variability also tells you whether or not everyone in the population has the same value.

A measure of central tendency—such as an average (mean), mode, or median—gives you an idea of the middle and/or most common value for the variable. A measure of central tendency doesn't tell you how spread out the values are, just as a measure of variability doesn't tell you anything about the central tendency.
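These measures are easy to compute directly. As a quick illustration, here is a sketch using Python's standard library on a small hypothetical sample (the data values are invented for the example):

```python
import statistics

# Hypothetical sample of seven exam scores (illustrative data only)
scores = [72, 75, 75, 80, 84, 88, 91]

# Measures of central tendency
mean = statistics.mean(scores)      # arithmetic average
median = statistics.median(scores)  # middle value when sorted
mode = statistics.mode(scores)      # most common value

# Measures of variability
value_range = max(scores) - min(scores)  # range: max minus min
stdev = statistics.stdev(scores)         # sample standard deviation
variance = statistics.variance(scores)   # sample variance (stdev squared)

print(mean, median, mode, value_range)
```

Note that if every score were identical, `value_range`, `stdev`, and `variance` would all be zero, while the mean, median, and mode would all equal that single value.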

Considered together, a measure of central tendency and a measure of variability give you a fairly comprehensive view of the distribution of a variable's values, particularly when those values are normally distributed.

The required assumptions for the z test are:

•The sample is selected using random sampling:

This means every member of the population has an equal chance of being included in the sample, and this chance of being included does not change as the members are selected. In practice, it is almost impossible to have a true random sample, but researchers should take precautions to make the selection of the sample as random as possible.

•The observations are independent from one another:

This means members of the sample are not connected to one another in a way that makes their data values systematically related. For example, if you include siblings in the same sample, their data on health or lifestyle factors are likely to be similar. The requirement for independent observations also means that researchers should not, for example, interview members of the same sample together when asking them to express an opinion on something, because one member's opinion might influence another's.

•The standard deviation of the variable of interest is constant across treatments: Here "treatments" might consist of an actual treatment (as in one group takes a drug, the other does not), or it may just refer to a change in conditions (such as comparing a set of measurements from one year to the next).

•The distribution of sample means is normal: By the central limit theorem, this holds when the original population is normal or when the sample size is sufficiently large (typically greater than 30, as long as the original population is not extremely nonnormal). So the assumption is satisfied either by a normally distributed population or by a sample larger than 30.

If the null hypothesis is true, then you should not reject it; if the null hypothesis is false, then you should reject it. Therefore, failing to reject a true null hypothesis (Outcome D) and rejecting a false null hypothesis (Outcome A) are both correct decisions. On the other hand, if the null hypothesis is true but you nevertheless reject it (Outcome B), this is a mistake: a Type I error. If the null hypothesis is false but you do not reject it (Outcome C), this is also a mistake: a Type II error.
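Putting the assumptions above to work, the one-sample z statistic compares a sample mean with a hypothesized population mean in units of standard error. The following sketch uses hypothetical population parameters (μ = 100, σ = 15) chosen purely for illustration:

```python
import math

def one_sample_z(sample_mean, pop_mean, pop_sd, n):
    """z = (M - mu) / (sigma / sqrt(n)): how many standard errors
    the sample mean lies from the hypothesized population mean."""
    standard_error = pop_sd / math.sqrt(n)
    return (sample_mean - pop_mean) / standard_error

# Hypothetical example: mu = 100, sigma = 15, a sample of n = 36 with M = 105
z = one_sample_z(sample_mean=105, pop_mean=100, pop_sd=15, n=36)
# z = 5 / (15 / 6) = 2.0, which exceeds the two-tailed critical value of 1.96
# for alpha = .05, so in this example we would reject the null hypothesis
```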

The outcome a researcher should be most concerned about is rejecting a true null hypothesis (a Type I error). You can remember this by thinking of it as a researcher's first responsibility: not claiming there is a difference or an effect when there actually is none. A researcher's second concern, corresponding to a Type II error, is failing to reject a false null hypothesis. At least when researchers make this mistake, they do not draw a false conclusion, because not rejecting the null hypothesis doesn't mean there is no difference or effect; it just means the study failed to show it.

By definition, the probability of a Type I error is α, and the probability of a Type II error is β. (Notice that α is the first letter of the Greek alphabet; β is the second.) The complement of rejecting a true null hypothesis is not rejecting a true null hypothesis (that is, if the null hypothesis is true, a researcher either rejects it or does not reject it). Thus, if the probability of rejecting a true null hypothesis is α, the probability of not rejecting a true null hypothesis is 1 - α. Likewise, the complement of not rejecting a false null hypothesis is rejecting a false null hypothesis. Thus, if the probability of not rejecting a false null hypothesis is β, the probability of rejecting a false null hypothesis is 1 - β.
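The claim that the Type I error rate equals α can be checked by simulation: if we run many studies in which the null hypothesis is actually true, the proportion that wrongly reject it should hover near α. This sketch uses hypothetical settings (a standard normal population, n = 30, α = .05):

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

# Simulate many studies where the null hypothesis is TRUE
# (the population mean really is 0). The proportion of studies
# that reject anyway should be close to alpha.
alpha = 0.05
critical_z = 1.96            # two-tailed critical value for alpha = .05
pop_mean, pop_sd, n = 0.0, 1.0, 30
trials = 20_000

rejections = 0
for _ in range(trials):
    sample = [random.gauss(pop_mean, pop_sd) for _ in range(n)]
    z = (statistics.mean(sample) - pop_mean) / (pop_sd / n ** 0.5)
    if abs(z) > critical_z:
        rejections += 1      # a Type I error: rejecting a true null

type_i_rate = rejections / trials
print(type_i_rate)           # should be close to 0.05
```

The complementary proportion, `1 - type_i_rate`, estimates the probability of correctly not rejecting the true null, matching the 1 - α complement described above.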