# Estimating Statistics and Imputing Missing Values

You can choose to estimate means, standard deviations, covariances, and correlations using listwise (complete cases only), pairwise, EM (expectation-maximization), and/or regression methods. You can also choose to impute the missing values (estimate replacement values). Note that multiple imputation is generally considered superior to single imputation for handling missing values. Little's MCAR test is still useful for determining whether imputation is necessary.

## Listwise Method

This method uses only complete cases. If any of the analysis variables have missing values, the case is omitted from the computations.
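As a minimal sketch of listwise deletion (using numpy and a made-up toy array, not any particular product's implementation), a case is dropped as soon as any analysis variable is missing:

```python
import numpy as np

# Toy data: rows are cases, columns are analysis variables;
# np.nan marks a missing value.
data = np.array([
    [1.0, 2.0, 3.0],
    [4.0, np.nan, 6.0],   # this case is dropped entirely
    [7.0, 8.0, 9.0],
])

# Listwise deletion: keep only cases with no missing values at all.
complete = data[~np.isnan(data).any(axis=1)]

print(complete.shape[0])       # number of complete cases
print(complete.mean(axis=0))   # means computed from complete cases only
```

Note that the second case is excluded even though two of its three values are present; that discarded information is what the pairwise and EM methods try to recover.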

## Pairwise Method

This method looks at pairs of analysis variables and uses a case only if it has nonmissing values for both of the variables. Frequencies, means, and standard deviations are computed separately for each pair. Because other missing values in the case are ignored, correlations and covariances for two variables do not depend on values missing in any other variables.
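A hedged sketch of the pairwise idea (again with numpy and invented toy data): each correlation is computed from the cases that are nonmissing on that particular pair of variables, so different cells of the matrix can be based on different cases.

```python
import numpy as np

data = np.array([
    [1.0, 2.0, np.nan],
    [2.0, np.nan, 5.0],
    [3.0, 4.0, 6.0],
    [4.0, 5.0, 7.0],
])

def pairwise_corr(x, y):
    """Correlation using only cases nonmissing on both variables."""
    mask = ~np.isnan(x) & ~np.isnan(y)
    return np.corrcoef(x[mask], y[mask])[0, 1]

n_vars = data.shape[1]
corr = np.full((n_vars, n_vars), np.nan)
for i in range(n_vars):
    for j in range(n_vars):
        corr[i, j] = pairwise_corr(data[:, i], data[:, j])

print(corr)
```

Because each pair uses its own set of cases, such a matrix can fail to be positive semidefinite, which is one motivation for the EM method below.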

## EM Method

This method assumes a distribution for the partially missing data and bases inferences on the likelihood under that distribution. Each iteration consists of an E step and an M step. The E step finds the conditional expectation of the "missing" data, given the observed values and current estimates of the parameters. These expectations are then substituted for the "missing" data. In the M step, maximum likelihood estimates of the parameters are computed as though the missing data had been filled in. "Missing" is enclosed in quotation marks because the missing values are not being directly filled in. Instead, functions of them are used in the log-likelihood.
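The E and M steps above can be sketched for multivariate-normal data. This is a simplified illustration written for this article, not the procedure's actual implementation: the E step computes each case's missing values as their conditional expectation (a regression on the observed values under the current parameters) plus the conditional covariance term that enters the log-likelihood, and the M step recomputes maximum likelihood estimates as though the data were complete.

```python
import numpy as np

def em_normal(data, n_iter=50):
    """EM estimates of the mean vector and covariance matrix for
    multivariate-normal data with missing values (illustrative sketch)."""
    X = data.copy()
    n, p = X.shape
    miss = np.isnan(X)
    # Initialize with available-case means; fill missing cells with them.
    mu = np.nanmean(X, axis=0)
    X[miss] = np.take(mu, np.where(miss)[1])
    sigma = np.cov(X, rowvar=False, bias=True)
    for _ in range(n_iter):
        # Accumulates the conditional covariance of the missing parts --
        # the "function of the missing values" used in the log-likelihood.
        extra = np.zeros((p, p))
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # E step: regression of missing on observed under current params.
            B = sigma[np.ix_(m, o)] @ np.linalg.inv(sigma[np.ix_(o, o)])
            X[i, m] = mu[m] + B @ (X[i, o] - mu[o])
            extra[np.ix_(m, m)] += sigma[np.ix_(m, m)] - B @ sigma[np.ix_(o, m)]
        # M step: ML estimates as though the data had been filled in.
        mu = X.mean(axis=0)
        sigma = np.cov(X, rowvar=False, bias=True) + extra / n
    return mu, sigma

# Toy bivariate data with two missing cells.
data = np.array([
    [1.0, 2.1],
    [2.0, 3.9],
    [3.0, np.nan],
    [4.0, 8.2],
    [np.nan, 9.8],
    [5.0, 10.1],
])
mu, sigma = em_normal(data)
print(mu)
print(sigma)
```

The `extra` term is why the quotation marks around "missing" matter: the algorithm does not simply plug in predicted values, it also carries the uncertainty of those predictions into the covariance estimate.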

Roderick J. A. Little's chi-square statistic for testing whether values are missing completely at random (MCAR) is printed as a footnote to the EM matrices. The null hypothesis for this test is that the data are missing completely at random. If the *p* value is less than 0.05 (significant at the 0.05 level), the data are not missing completely at random; they may be missing at random (MAR) or not missing at random (NMAR). You cannot assume one or the other and must analyze the data further to determine the missingness mechanism.

## Regression Method

This method computes multiple linear regression estimates and has
options for augmenting the estimates with random components. To each
predicted value, the procedure can add a residual from a randomly
selected complete case, a random normal deviate, or a random deviate
(scaled by the square root of the residual mean square) from the *t* distribution.
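As a small illustrative sketch of stochastic regression imputation (using numpy on made-up data, and showing only the first of the three augmentation options listed above, a residual drawn from a randomly selected complete case):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: impute missing values of y from x.
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=10)
y[[3, 7]] = np.nan  # pretend two cases are missing y

miss = np.isnan(y)
# Fit ordinary least squares on the complete cases.
slope, intercept = np.polyfit(x[~miss], y[~miss], 1)
fitted = intercept + slope * x
residuals = y[~miss] - fitted[~miss]

# Augment each predicted value with a residual drawn at random
# from the complete cases, so imputed values keep realistic scatter.
y_imputed = y.copy()
y_imputed[miss] = fitted[miss] + rng.choice(residuals, size=miss.sum())
```

Adding a random component avoids the main weakness of plain regression imputation: without it, every imputed value falls exactly on the regression line, which artificially inflates correlations and understates variances.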