Categories
IBM® SPSS® Categories enables you to visualize and explore relationships in categorical data and predict outcomes from your analysis. It uses categorical regression techniques to predict the values of nominal, ordinal or numerical outcome variables from a combination of numeric and ordered or unordered categorical predictor variables. The software provides procedures for predictive analysis, statistical learning, perceptual mapping and preference scaling.
This module is included in the IBM® SPSS® Statistics Professional edition for traditional license use and as part of the IBM® SPSS® Complex Sampling and Testing add-on for subscription plans.
Use SPSS Categories to conduct correspondence analysis, making it easier to visualize and analyze differences between categories.
Incorporate supplementary information by defining custom variable attributes. These let you attach metadata not captured by standard labels, measurement levels or missing-value definitions, such as descriptive notes, units of measurement or coding schemes, giving your analysis more context.
Use symmetrical normalization to produce a biplot in which associations between row and column categories are easier to see.
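As an illustration outside SPSS, the core computation behind correspondence analysis with symmetrical normalization can be sketched in Python with NumPy. The brand-by-attribute counts below are hypothetical, and this is a minimal sketch of the standard algorithm (singular value decomposition of standardized residuals), not SPSS's implementation:

```python
import numpy as np

# Toy brand-by-attribute contingency table (hypothetical counts).
N = np.array([[20.0,  5.0, 10.0],
              [ 8.0, 15.0,  7.0],
              [ 5.0, 10.0, 25.0]])

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses

# Matrix of standardized residuals.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Symmetrical normalization: row and column coordinates each absorb
# the square root of the singular values, so associations between
# row and column points can be read from one biplot.
row_coords = (U * np.sqrt(sv)) / np.sqrt(r)[:, None]
col_coords = (Vt.T * np.sqrt(sv)) / np.sqrt(c)[:, None]

inertia = sv ** 2                    # principal inertia per dimension
print("share of inertia explained:", inertia / inertia.sum())
```

Plotting the first two columns of `row_coords` and `col_coords` on the same axes gives the biplot; points that lie in the same direction from the origin indicate associated categories.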
Analyze and interpret multivariate data and its relationships more effectively through in-depth analysis. For example, find out which consumer characteristics in your dataset are most closely associated with your product or brand, or compare customer perceptions of your products with perceptions of your competitors' products.
Predict the values of a nominal, ordinal or numerical outcome variable from a combination of numeric and ordered or unordered categorical predictor variables. Use regression with optimal scaling to describe, for example, how job satisfaction can be predicted from job category, geographic region and the amount of work-related travel.
Quantify variables so that the multiple R is maximized. Optimal scaling may be applied to numeric variables when residuals are nonnormal or when predictor variables are not linearly related to the outcome variable. Regularization methods such as ridge regression, lasso and elastic net can improve prediction accuracy by stabilizing the parameter estimates.
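To illustrate how regularization stabilizes regression estimates, here is a minimal Python/NumPy sketch of ridge regression on dummy-coded categorical predictors, using simulated job-satisfaction data. The dummy coding is a simple stand-in for the optimal-scaling quantifications that CATREG estimates iteratively, and all variable names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a numeric satisfaction score predicted from
# job category (3 levels), region (2 levels) and travel days (numeric).
n = 200
jobcat = rng.integers(0, 3, n)
region = rng.integers(0, 2, n)
travel = rng.normal(10, 3, n)
y = 2.0 + 0.8 * jobcat - 0.5 * region + 0.1 * travel + rng.normal(0, 1, n)

# Dummy-code the categorical predictors (reference category dropped).
X = np.column_stack([
    (jobcat[:, None] == np.arange(1, 3)).astype(float),  # jobcat dummies
    (region == 1).astype(float),                         # region dummy
    travel,
    np.ones(n),                                          # intercept
])

def ridge(X, y, lam):
    """Closed-form ridge estimate: solve (X'X + lam*P) b = X'y."""
    p = X.shape[1]
    pen = np.eye(p)
    pen[-1, -1] = 0.0                # do not penalize the intercept
    return np.linalg.solve(X.T @ X + lam * pen, X.T @ y)

b_ols = ridge(X, y, 0.0)             # ordinary least squares (lam = 0)
b_ridge = ridge(X, y, 10.0)          # shrunken, more stable estimates
print("OLS:  ", np.round(b_ols, 3))
print("ridge:", np.round(b_ridge, 3))
```

The penalty trades a small amount of fit for lower-variance coefficients; lasso and elastic net differ only in the form of the penalty term.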
Use dimension-reduction techniques to see relationships in your data. Summary charts display similar variables or categories together, giving you insight into relationships among more than two variables.
Techniques include correspondence analysis (CORRESPONDENCE), categorical regression (CATREG), multiple correspondence analysis (MULTIPLE CORRESPONDENCE), CATPCA, nonlinear canonical correlation (OVERALS), proximity scaling (PROXSCAL) and preference scaling (PREFSCAL).
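Several of these procedures share a common core. For example, multiple correspondence analysis can be understood as correspondence analysis applied to an indicator matrix of dummy variables. A minimal Python/NumPy sketch with hypothetical data (an illustration of the general technique, not SPSS's MULTIPLE CORRESPONDENCE implementation):

```python
import numpy as np

# Two hypothetical categorical variables observed on six cases.
v1 = np.array([0, 0, 1, 1, 2, 2])    # e.g. job category (3 levels)
v2 = np.array([0, 1, 0, 1, 1, 1])    # e.g. region (2 levels)

# Indicator matrix: one dummy column per category of each variable.
Z = np.column_stack([
    (v1[:, None] == np.arange(3)).astype(float),
    (v2[:, None] == np.arange(2)).astype(float),
])

# Multiple correspondence analysis = correspondence analysis of Z.
P = Z / Z.sum()
r = P.sum(axis=1)
c = P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Case scores on the first dimension: similar response patterns
# receive similar scores.
obj = U[:, 0] / np.sqrt(r) * sv[0]
print("first-dimension case scores:", np.round(obj, 3))
```

Each row of `Z` sums to the number of variables, and the singular value decomposition then proceeds exactly as in simple correspondence analysis.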