Classification Table

The next step in evaluating the model is to examine the predictions it generates. Recall that the model is based on predicting cumulative probabilities. However, what you're probably most interested in is how often the model produces the correct predicted category from the values of the predictor variables. To see how well the model does, you can construct a classification table (also called a confusion matrix) by cross-tabulating the predicted categories with the actual categories. You can create a classification table in another procedure, using the saved model-predicted categories. See the topic Analysis of cross-classifications using Crosstabs for more information.
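The cross-tabulation described above can be sketched outside of SPSS as well. The following is a minimal illustration using pandas, with made-up actual and predicted category codes standing in for the outcome variable and the saved model-predicted categories:

```python
import pandas as pd

# Hypothetical actual and model-predicted outcome categories for ten cases.
# In the procedure described above, the predicted category is a saved
# variable that you cross-tabulate with the observed outcome.
actual = [3, 3, 3, 5, 5, 2, 1, 5, 3, 2]
predicted = [3, 3, 5, 5, 5, 3, 5, 5, 3, 3]

df = pd.DataFrame({"actual": actual, "predicted": predicted})

# Classification (confusion) table: actual categories in the rows,
# predicted categories in the columns.
table = pd.crosstab(df["actual"], df["predicted"])
print(table)
```

Counts on the diagonal are correct classifications; off-diagonal counts show which categories the model confuses.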

Figure 1. Classification table for the initial model
Classification table showing account status in the rows and predicted response category (payments current or critical account) in the columns

The model seems to be doing a respectable job of predicting outcome categories, at least for the most frequent categories: category 3 (debt payments current) and category 5 (critical account). The model correctly classifies 90.6% of the category 3 cases and 75.1% of the category 5 cases. In addition, cases in category 2 are more likely to be classified as category 3 than category 5, a desirable result for predicting ordinal responses.
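The percent-correct figures quoted above are row percentages: the diagonal count for a category divided by that category's row total. A short sketch with hypothetical counts (chosen for illustration only, not taken from the actual table) shows the computation:

```python
import pandas as pd

# Hypothetical classification table: actual categories in the rows,
# predicted categories (3 or 5) in the columns. These counts are
# made up purely to illustrate the calculation.
table = pd.DataFrame(
    {3: [10, 40, 362, 20, 50], 5: [25, 15, 38, 30, 151]},
    index=[1, 2, 3, 4, 5],
)

# Convert each row to percentages of its row total; the diagonal
# entry is then the percent of that category correctly classified.
row_pct = table.div(table.sum(axis=1), axis=0) * 100
print(row_pct.loc[3, 3])  # percent of category-3 cases predicted as 3
print(row_pct.loc[5, 5])  # percent of category-5 cases predicted as 5
```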

On the other hand, category 1 (no credit history) cases are poorly predicted, with the majority being assigned to category 5 (critical account), the category that should theoretically be most dissimilar to category 1. This may indicate a problem in the way the ordinal outcome scale is defined. In the interest of brevity, you will not pursue this issue further here, but in an actual data analysis you would probably want to investigate whether the ordinal scale could be improved by reordering, merging, or excluding certain categories.

Next