Correlations and Importance
To interpret the contributions of the predictors to the regression, it is not sufficient to inspect the regression coefficients alone. The correlations, partial correlations, and part correlations should also be examined. The following table contains these correlational measures for each variable.
The zero-order correlation is the correlation between the transformed predictor and the transformed response. For these data, the largest correlation occurs for Package design. However, if some of the variation in the predictor or the response can be explained by other predictors, removing that variation yields a better picture of how well the given predictor is doing.

Other variables in the model can confound the performance of a given predictor in predicting the response. The partial correlation coefficient removes the linear effects of the other predictors from both the given predictor and the response. This measure equals the correlation between the residuals from regressing the predictor on the other predictors and the residuals from regressing the response on the other predictors. The squared partial correlation is the proportion of variance explained relative to the residual variance of the response that remains after removing the effects of the other variables. For example, Package design has a partial correlation of –0.955. Removing the effects of the other variables, Package design explains (–0.955)² ≈ 0.91, or 91%, of the remaining variation in the preference rankings. Both Price and Good Housekeeping seal also explain a large portion of variance when the effects of the other variables are removed.
As an alternative to removing the effects of the other variables from both the response and a predictor, you can remove their effects from just the predictor. The correlation between the response and the residuals from regressing a predictor on the other predictors is the part correlation. Squaring this value yields the proportion of variance explained relative to the total variance of the response. If you remove the effects of Brand name, Good Housekeeping seal, Money back guarantee, and Price from Package design, the remaining part of Package design explains (–0.733)² ≈ 0.54, or 54%, of the variation in the preference rankings.
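The residual-based definitions above translate directly into a computation. The following is a minimal Python/NumPy sketch, not the CATREG implementation; it uses randomly generated stand-ins for the transformed predictors and response, so its numbers will not match the table above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 22                                    # synthetic sample size (arbitrary)
X = rng.normal(size=(n, 5))               # stand-ins for the five transformed predictors
y = X @ rng.normal(size=5) + rng.normal(size=n)   # stand-in for the transformed response

def residuals(target, others):
    # Residuals from an ordinary least-squares regression of target on others.
    A = np.column_stack([np.ones(len(target)), others])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return target - A @ coef

j = 0                                     # predictor of interest (e.g., Package design)
others = np.delete(X, j, axis=1)
e_x = residuals(X[:, j], others)          # predictor purged of the other predictors
e_y = residuals(y, others)                # response purged of the other predictors

zero_order = np.corrcoef(X[:, j], y)[0, 1]    # plain correlation
partial    = np.corrcoef(e_x, e_y)[0, 1]      # effects removed from both sides
part       = np.corrcoef(e_x, y)[0, 1]        # effects removed from the predictor only

print(f"zero-order {zero_order:.3f}, partial {partial:.3f}, part {part:.3f}")
print(f"squared partial {partial**2:.2f}, squared part {part**2:.2f}")
```

Squaring the partial and part correlations in the last line reproduces the variance-explained interpretations given above.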
Importance
In addition to the regression coefficients and the correlations, Pratt's measure of relative importance aids in interpreting predictor contributions to the regression. Predictors whose importance is large relative to the others are crucial to the regression. The presence of a suppressor variable is signaled by a low importance for a variable whose coefficient is of similar size to those of the important predictors.
In contrast to the regression coefficients, this measure defines the importance of the predictors additively; that is, the importance of a set of predictors is the sum of the individual importances of its members. Pratt's measure equals the product of the regression coefficient and the zero-order correlation for a predictor. These products sum to R², so they are divided by R², yielding importances that sum to 1. The set of predictors Package design and Brand name, for example, has an importance of 0.654. The largest importance corresponds to Package design, with Package design, Price, and Good Housekeeping seal accounting for 95% of the importance for this combination of predictors.
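As a sketch of the arithmetic (again with synthetic stand-in data, and assuming standardized variables so that the coefficients are beta weights), Pratt's importances can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 22
X = rng.normal(size=(n, 5))
y = X @ rng.normal(size=5) + rng.normal(size=n)

# Standardize so that the regression coefficients are beta weights.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()

beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # no intercept needed after centering
r = np.array([np.corrcoef(Xs[:, j], ys)[0, 1] for j in range(Xs.shape[1])])

products = beta * r                  # coefficient times zero-order correlation
r_squared = products.sum()           # these products sum to R^2
importance = products / r_squared    # Pratt importances, summing to 1

print(importance)
print(importance[:2].sum())          # additivity: importance of a two-predictor set
```

The last line illustrates the additivity property: the importance of a set of predictors is just the sum of the individual importances.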
Multicollinearity
Large correlations between predictors dramatically reduce a regression model's stability: correlated predictors yield unstable parameter estimates. Tolerance reflects how strongly the independent variables are linearly related to one another. It is the proportion of a variable's variance not accounted for by the other independent variables in the equation; that is, the tolerance of a predictor equals 1 − R², where R² comes from regressing that predictor on the other predictors. If the other predictors can explain a large amount of a predictor's variance, that predictor is not needed in the model. A tolerance value near 1 indicates that the variable cannot be predicted very well from the other predictors. In contrast, a variable with a very low tolerance contributes little information to the model and can cause computational problems. Moreover, large negative values of Pratt's importance measure indicate multicollinearity.
All of the tolerance values are very high: none of the predictors is predicted well by the other predictors, and multicollinearity is not present.
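The tolerance computation can be sketched the same way (synthetic stand-in data again; the per-predictor R² comes from regressing each predictor on the remaining ones):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(22, 5))     # stand-ins for the five transformed predictors

def r_squared(target, others):
    # R^2 from an ordinary least-squares regression of target on others.
    A = np.column_stack([np.ones(len(target)), others])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    resid = target - A @ coef
    centered = target - target.mean()
    return 1 - (resid @ resid) / (centered @ centered)

tolerances = [1 - r_squared(X[:, j], np.delete(X, j, axis=1)) for j in range(X.shape[1])]
print(tolerances)                # values near 1 indicate little multicollinearity
```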