IBM SPSS® Regression enables you to predict categorical outcomes and apply a range of nonlinear regression procedures. You can use these procedures in business and analysis projects where ordinary regression techniques are limiting or inappropriate, such as studying consumer buying habits, modeling responses to treatments or analyzing credit risk. The module expands the capabilities of SPSS Statistics in the data analysis stage of the analytical process.
This module is included in the SPSS Standard, Professional and Premium packages.
Binary logistic regression
Predict the presence or absence of a characteristic or binary outcome based on values of a set of predictor variables. It is similar to a linear regression model, but is suited to models where the dependent variable is dichotomous and assumed to follow a binomial distribution. The estimated coefficients can be used to estimate odds ratios for each of the independent variables in the model.
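In SPSS this procedure runs through its dialogs or syntax; as a rough illustration of the underlying model, here is a minimal Python/NumPy sketch (not SPSS code) that fits a logistic model by gradient ascent on the log-likelihood and converts the slope into an odds ratio. The data, coefficients and learning rate are all made up for the example.

```python
import numpy as np

# Synthetic dichotomous outcome generated from an assumed logistic model.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                       # single predictor
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x)))  # true intercept -0.5, slope 1.2
y = (rng.random(n) < p).astype(float)        # observed 0/1 responses

X = np.column_stack([np.ones(n), x])         # design matrix with intercept
beta = np.zeros(2)
for _ in range(2000):                        # plain gradient ascent on the log-likelihood
    mu = 1.0 / (1.0 + np.exp(-X @ beta))     # predicted probabilities
    beta += 0.1 * X.T @ (y - mu) / n         # score-function step

odds_ratio = np.exp(beta[1])                 # odds multiplier per unit increase in x
print(beta, odds_ratio)
```

The estimated slope stays on the log-odds scale; exponentiating it, as in the last line, gives the odds ratio the text refers to.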
Ordinal logistic regression
Use the logit link function to model the dependence of a polytomous ordinal response on a set of predictors. In the logit model, the log odds of the outcome are modeled as a linear combination of the predictor variables.
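As an illustration of the cumulative-logit form this describes, the sketch below (plain Python/NumPy, not SPSS syntax) computes per-category probabilities for one subject from P(Y ≤ j | x) = 1 / (1 + exp(-(θ_j - x·b))). The thresholds and coefficients are assumed values, not fitted ones.

```python
import numpy as np

thetas = np.array([-1.0, 0.5, 2.0])   # assumed ordered cutpoints for 4 response categories
b = np.array([0.8, -0.3])             # assumed coefficients for two predictors
x = np.array([1.0, 2.0])              # one subject's predictor values

# Cumulative probabilities P(Y <= 1), P(Y <= 2), P(Y <= 3) via the logit link.
cum = 1.0 / (1.0 + np.exp(-(thetas - x @ b)))
cum = np.concatenate([cum, [1.0]])              # P(Y <= 4) = 1 by definition
probs = np.diff(np.concatenate([[0.0], cum]))   # per-category probabilities
print(probs)
```

Differencing the cumulative probabilities recovers the probability of each individual category, which is why the cutpoints must be ordered.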
Multinomial logistic regression
Classify subjects based on values of a set of predictor variables. This type of regression is similar to logistic regression, but it is more general because the dependent variable is not restricted to two categories.
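A minimal sketch of the idea, assuming made-up coefficients (illustrative Python/NumPy, not SPSS output): each category gets its own linear predictor, the scores are converted to probabilities with the softmax function, and the subject is assigned to the most probable category.

```python
import numpy as np

# Assumed coefficient matrix: rows are categories, columns are intercept + 2 predictors.
# The first row is the reference category, so its coefficients are fixed at zero.
B = np.array([[0.0, 0.0, 0.0],
              [1.0, -0.5, 0.2],
              [-0.3, 0.8, 0.1]])
x = np.array([1.0, 0.5, 2.0])      # intercept term plus two predictor values

scores = B @ x                     # one log-odds score per category
probs = np.exp(scores - scores.max())
probs /= probs.sum()               # softmax: probabilities over all categories
predicted = int(np.argmax(probs))  # classify the subject
print(probs, predicted)
```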
Nonlinear regression
Find a nonlinear model of the relationship between the dependent variable and a set of independent variables. Unlike traditional linear regression, which is restricted to estimating linear models, nonlinear regression can estimate models with arbitrary relationships between independent and dependent variables. This is accomplished using iterative estimation algorithms.
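To show what "iterative estimation" means concretely, here is a sketch (illustrative Python/NumPy, not the SPSS algorithm) that fits the assumed model y = a·exp(b·x) with damped Gauss-Newton steps. The data are noise-free so the iteration should recover a = 2, b = 0.7.

```python
import numpy as np

x = np.linspace(0.0, 2.0, 50)
y = 2.0 * np.exp(0.7 * x)          # synthetic, noise-free data

def sse(a, b):
    r = y - a * np.exp(b * x)
    return r @ r                   # sum of squared residuals

a, b = 1.0, 0.1                    # starting values for the iteration
for _ in range(100):
    f = a * np.exp(b * x)          # current model predictions
    J = np.column_stack([np.exp(b * x),            # d f / d a
                         a * x * np.exp(b * x)])   # d f / d b
    step, *_ = np.linalg.lstsq(J, y - f, rcond=None)
    t = 1.0
    while sse(a + t * step[0], b + t * step[1]) > sse(a, b) and t > 1e-8:
        t *= 0.5                   # step halving keeps the iteration stable
    a, b = a + t * step[0], b + t * step[1]

print(a, b)
```

Each pass linearizes the model around the current estimates, solves a small least-squares problem for the update, and shrinks the step if it does not reduce the error, which is the general pattern iterative nonlinear estimators follow.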
Probit analysis
Use probit and logit response modeling to analyze the potency of responses to stimuli, such as medicine doses, prices or incentives. This procedure measures the relationship between the strength of a stimulus and the proportion of cases exhibiting a certain response to the stimulus. It is useful in situations where you have a dichotomous outcome that is thought to be influenced or caused by the levels of one or more independent variables, and it is particularly well suited to experimental data.
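The dose-response relationship can be sketched as follows (plain Python, not SPSS syntax). The intercept and slope on log10(dose) are assumed coefficients chosen for illustration; the median effective dose is the dose at which the probit linear predictor crosses zero.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function (the probit inverse link).
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

a, b = -2.0, 1.6                       # assumed probit coefficients on log10(dose)

def p_response(dose):
    # Expected proportion of cases responding at this stimulus level.
    return norm_cdf(a + b * math.log10(dose))

ed50 = 10 ** (-a / b)                  # dose at which half the cases respond
print(p_response(1.0), p_response(100.0), ed50)
```

Stronger stimuli yield higher response proportions, and quantities such as the median effective dose fall out of the fitted coefficients directly.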
Two-stage least squares
Control for correlations between the predictor variables and error terms, which often occur with time-based data. Use instrumental variables that are uncorrelated with the error terms to compute estimated values of the problematic predictor(s) (the first stage), and then use those computed values to estimate a linear regression model of the dependent variable (the second stage). Because the computed values are based on variables that are uncorrelated with the errors, the two-stage estimates are consistent where ordinary least squares would be biased.
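The two stages can be sketched directly (illustrative Python/NumPy, with made-up data): one endogenous predictor x, one instrument z, and a true structural effect of 1.5. Naive least squares is biased by the correlation between x and the error, while the two-stage estimate is not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)                       # instrument: uncorrelated with the error
u = rng.normal(size=n)                       # structural error term
x = 0.9 * z + 0.8 * u + rng.normal(size=n)   # x is correlated with u (endogenous)
y = 1.5 * x + u                              # true coefficient on x is 1.5

# Stage 1: regress the problematic predictor on the instrument; keep fitted values.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress the dependent variable on the stage-1 fitted values.
X_hat = np.column_stack([np.ones(n), x_hat])
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]

# For contrast: naive OLS of y on x is pulled upward by the x-u correlation.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_2sls[1], beta_ols[1])
```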
Weight estimation
Compensate for nonconstant error variance (heteroscedasticity) by giving more weight to the observations that are measured more precisely. The weight estimation procedure tests a range of weight transformations and indicates which will give the best fit to the data.
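The "test a range of weight transformations" step can be sketched like this (illustrative Python/NumPy, not the SPSS procedure): try several powers p in the assumed weight function w_i = 1 / x_i**p, fit a weighted regression for each, and keep the power with the highest log-likelihood. The synthetic data have error standard deviation proportional to x, so a power near 2 should win.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(1.0, 10.0, size=n)
y = 3.0 + 2.0 * x + rng.normal(size=n) * x   # error sd grows with x (variance ~ x**2)
X = np.column_stack([np.ones(n), x])

def weighted_fit(p):
    w = 1.0 / x**p                           # candidate weight transformation
    s = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * s[:, None], y * s, rcond=None)
    r = y - X @ beta
    sigma2 = (w * r**2).sum() / n            # profiled error variance
    # Normal log-likelihood under Var(e_i) = sigma2 / w_i.
    ll = -0.5 * n * np.log(2 * np.pi * sigma2) + 0.5 * np.log(w).sum() - 0.5 * n
    return ll, beta

powers = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
best_p = max(powers, key=lambda p: weighted_fit(p)[0])
print(best_p)
```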
Quantile regression
Model the relationship between a set of predictor (independent) variables and specific percentiles (or "quantiles") of a target (dependent) variable, most often the median. Quantile regression has two main advantages over ordinary least squares regression: it makes no assumptions about the distribution of the target variable, and it tends to resist the influence of outlying observations.
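The outlier resistance can be illustrated with a median (0.5-quantile) regression sketch (Python/NumPy, with made-up data; the iteratively reweighted least-squares scheme below is one common way to approximate the absolute-error fit, not SPSS's algorithm). A few large outliers that would drag an ordinary least squares line upward barely move the median fit.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0.0, 10.0, size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)       # true intercept 1, slope 2
y[:15] += 50.0                               # contaminate 5% of cases with big outliers
X = np.column_stack([np.ones(n), x])

beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from the (biased) OLS fit
for _ in range(200):
    r = y - X @ beta
    w = 1.0 / np.maximum(np.abs(r), 1e-6)    # reweighting turns squared loss into L1 loss
    s = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * s[:, None], y * s, rcond=None)

print(beta)                                  # stays close to (1, 2) despite the outliers
```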