Output (Multilayer Perceptron)
Network Structure. Displays summary information about the neural network.
 Description. Displays information about the neural network, including the dependent variables, number of input and output units, number of hidden layers and units, and activation functions.
 Diagram. Displays the network diagram as a non-editable chart. Note that as the number of covariates and factor levels increases, the diagram becomes more difficult to interpret.
 Synaptic weights. Displays the coefficient estimates that show the relationship between the units in a given layer and the units in the following layer. The synaptic weights are based on the training sample even if the active dataset is partitioned into training, testing, and holdout data. Note that the number of synaptic weights can become rather large and that these weights are generally not used for interpreting network results.
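To make the role of these weights concrete, here is a minimal sketch of a forward pass through a tiny multilayer perceptron; the network size, the weight values, and the choice of sigmoid activation are illustrative assumptions, not values produced by the procedure.

```python
import math

# Hypothetical synaptic weights for a tiny network: 2 inputs -> 2 hidden -> 1 output.
# Each row holds the weights leaving one unit of the previous layer;
# the first row acts as the bias for each unit of the next layer.
hidden_weights = [[0.1, -0.2],   # bias
                  [0.5,  0.3],   # from input 1
                  [-0.4, 0.8]]   # from input 2
output_weights = [[0.2], [0.6], [-0.3]]  # bias, from hidden 1, from hidden 2

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Prepend 1.0 so the first weight row serves as the bias term.
    extended = [1.0] + list(inputs)
    sums = [sum(e * w for e, w in zip(extended, col))
            for col in zip(*weights)]
    return [sigmoid(s) for s in sums]

hidden = layer([0.7, -1.2], hidden_weights)
output = layer(hidden, output_weights)
```

Even in this toy network there are 3 × 2 + 3 × 1 = 9 weights, which illustrates why the full table grows quickly with the number of units.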
Network Performance. Displays results used to determine whether the model is "good". Note: Charts in this group are based on the combined training and testing samples or only on the training sample if there is no testing sample.

 Model summary. Displays a summary of the neural network results by partition and overall, including the error, the relative error or percentage of incorrect predictions, the stopping rule used to stop training, and the training time.
The error is the sum-of-squares error when the identity, sigmoid, or hyperbolic tangent activation function is applied to the output layer. It is the cross-entropy error when the softmax activation function is applied to the output layer.
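The two error functions can be sketched as follows; the target and output values are invented for illustration, and the exact scaling used in the reported error may differ from these textbook forms.

```python
import math

def sum_of_squares_error(targets, outputs):
    # Used when the output layer has identity, sigmoid, or tanh activation.
    return sum((t - o) ** 2 for t, o in zip(targets, outputs))

def cross_entropy_error(targets, probs):
    # Used when the output layer has softmax activation; targets are one-hot.
    return -sum(t * math.log(p) for t, p in zip(targets, probs) if t > 0)

sse = sum_of_squares_error([1.0, 0.0], [0.8, 0.1])   # (0.2)^2 + (0.1)^2
ce = cross_entropy_error([1, 0, 0], [0.7, 0.2, 0.1])  # -ln(0.7)
```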
Relative errors or percentages of incorrect predictions are displayed depending on the measurement levels of the dependent variables. If any dependent variable has a scale measurement level, then the average overall relative error (relative to the mean model) is displayed. If all dependent variables are categorical, then the average percentage of incorrect predictions is displayed. Relative errors or percentages of incorrect predictions are also displayed for individual dependent variables.
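For a scale dependent variable, "relative to the mean model" means the model's squared error is divided by the error of a baseline that always predicts the observed mean. A minimal sketch, with invented observed and predicted values:

```python
def relative_error(observed, predicted):
    # Model sum-of-squares error divided by the error of the "mean model",
    # which predicts the mean of the observed values for every case.
    mean = sum(observed) / len(observed)
    sse_model = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sse_mean = sum((o - mean) ** 2 for o in observed)
    return sse_model / sse_mean

err = relative_error([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

A value near 0 means the network predicts much better than the mean model; a value near 1 means it adds little over always predicting the mean.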
 Classification results. Displays a classification table for each categorical dependent variable by partition and overall. Each table gives the number of cases classified correctly and incorrectly for each dependent variable category. The percentage of the total cases that were correctly classified is also reported.
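The structure of such a table can be sketched as a cross-tabulation of observed against predicted categories; the category labels and case data below are invented.

```python
from collections import Counter

def classification_table(observed, predicted, categories):
    # Cross-tabulate observed vs. predicted categories and report
    # the percentage of all cases that were classified correctly.
    counts = Counter(zip(observed, predicted))
    table = {obs: {pred: counts[(obs, pred)] for pred in categories}
             for obs in categories}
    correct = sum(counts[(c, c)] for c in categories)
    percent_correct = 100.0 * correct / len(observed)
    return table, percent_correct

obs = ["yes", "yes", "no", "no", "no"]
pred = ["yes", "no", "no", "no", "yes"]
table, pct = classification_table(obs, pred, ["yes", "no"])
```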
 ROC curve. Displays an ROC (Receiver Operating Characteristic) curve for each categorical dependent variable. It also displays a table giving the area under each curve. For a given dependent variable, the ROC chart displays one curve for each category. If the dependent variable has two categories, then each curve treats the category at issue as the positive state versus the other category. If the dependent variable has more than two categories, then each curve treats the category at issue as the positive state versus the aggregate of all other categories.
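Because each curve treats one category as the positive state versus the rest, the area under that curve can be computed from the predicted pseudo-probability of that category. A minimal sketch using the Mann-Whitney formulation of the AUC (the labels and scores are invented):

```python
def auc_one_vs_rest(categories, scores, positive):
    # Area under the ROC curve for one category treated as the positive
    # state versus the aggregate of all other categories: the fraction of
    # positive/non-positive pairs that the scores rank correctly.
    pos = [s for c, s in zip(categories, scores) if c == positive]
    neg = [s for c, s in zip(categories, scores) if c != positive]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

cats = ["a", "a", "b", "c"]
scores_for_a = [0.9, 0.6, 0.4, 0.2]   # predicted pseudo-probability of "a"
auc_a = auc_one_vs_rest(cats, scores_for_a, "a")
```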
 Cumulative gains chart. Displays a cumulative gains chart for each categorical dependent variable. The display of one curve for each dependent variable category is the same as for ROC curves.
 Lift chart. Displays a lift chart for each categorical dependent variable. The display of one curve for each dependent variable category is the same as for ROC curves.
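The values behind both the cumulative gains and lift charts come from ranking cases by the predicted pseudo-probability of the category at issue. A rough sketch, with invented data and percentile cut points:

```python
def gains_and_lift(categories, scores, positive, percentiles=(25, 50, 75, 100)):
    # Sort cases by predicted score for the positive category (descending).
    # For each percentile of cases taken from the top, the cumulative gain is
    # the share of all positives captured, and the lift is that gain divided
    # by the share of cases taken.
    ranked = sorted(zip(scores, categories), reverse=True)
    total_pos = sum(1 for c in categories if c == positive)
    results = []
    for pct in percentiles:
        n = round(len(ranked) * pct / 100)
        hits = sum(1 for _, c in ranked[:n] if c == positive)
        gain = hits / total_pos
        lift = gain / (n / len(ranked))
        results.append((pct, gain, lift))
    return results

cats = ["yes", "no", "yes", "no", "no", "no", "yes", "no"]
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
res = gains_and_lift(cats, scores, "yes")
```

A model with no predictive value has a lift of 1 everywhere; lifts above 1 in the early percentiles indicate that high-scoring cases really are more likely to belong to the category.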
 Predicted by observed chart. Displays a predicted-by-observed-value chart for each dependent variable. For categorical dependent variables, clustered boxplots of predicted pseudo-probabilities are displayed for each response category, with the observed response category as the cluster variable. For scale-dependent variables, a scatterplot is displayed.
 Residual by predicted chart. Displays a residual-by-predicted-value chart for each scale-dependent variable. There should be no visible patterns between residuals and predicted values. This chart is produced only for scale-dependent variables.
Case processing summary. Displays the case processing summary table, which summarizes the number of cases included in and excluded from the analysis, in total and by training, testing, and holdout samples.
Independent variable importance analysis. Performs a sensitivity analysis, which computes the importance of each predictor in determining the neural network. The analysis is based on the combined training and testing samples or only on the training sample if there is no testing sample. This creates a table and a chart displaying importance and normalized importance for each predictor. Note that sensitivity analysis is computationally expensive and time-consuming if there are large numbers of predictors or cases.
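The general idea behind such a sensitivity analysis can be sketched as a perturbation study: disturb one predictor at a time and measure how much the model's error grows. The sketch below uses a deterministic column rotation as the disturbance and an invented toy model; the exact computation the procedure performs may differ.

```python
def sensitivity_importance(predict, rows, targets, n_predictors):
    # Perturb one predictor at a time (here, by rotating its column across
    # cases), measure the growth in squared error, and normalize so the
    # most important predictor scores 100%.
    def error(data):
        return sum((t - predict(r)) ** 2 for r, t in zip(data, targets))
    base = error(rows)
    raw = []
    for j in range(n_predictors):
        col = [r[j] for r in rows]
        rotated = col[1:] + col[:1]   # deterministic stand-in for shuffling
        perturbed = [r[:j] + (v,) + r[j + 1:] for r, v in zip(rows, rotated)]
        raw.append(max(error(perturbed) - base, 0.0))
    total = sum(raw) or 1.0
    importance = [v / total for v in raw]
    top = max(importance) or 1.0
    normalized = [100.0 * v / top for v in importance]
    return importance, normalized

# Toy model that uses only its first predictor; the second is a constant.
rows = [(x / 10, 5.0) for x in range(10)]
targets = [2 * r[0] for r in rows]
imp, norm = sensitivity_importance(lambda r: 2 * r[0], rows, targets, 2)
```

Note the cost: each predictor requires re-scoring every case, which is why the analysis becomes expensive with many predictors or many cases.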
How To Select Output for Multilayer Perceptron
This feature requires the Neural Networks option.
 From the menus choose:
 In the Multilayer Perceptron dialog box, click the Output tab.