# Neural Network Structure

Although neural networks impose minimal demands on model structure
and assumptions, it is useful to understand the general **network architecture**. The multilayer
perceptron (MLP) or radial basis function (RBF) network is a function
of predictors (also called inputs or independent variables) that minimizes
the prediction error of target variables (also called outputs).

Consider the *bankloan.sav* dataset that ships with the product, in which you want to be able
to identify possible defaulters among a pool of loan applicants. An
MLP or RBF network applied to this problem is a function of the measurements
that minimizes the error in predicting default. The following figure
is useful for visualizing the general form of this function.
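Before turning to the figure, it may help to see the task expressed in code. The following is a minimal sketch in Python using pandas and scikit-learn (reading *.sav* files via pandas requires the pyreadstat package). The target column name `default`, the presence of unscored prospective customers, and the use of scikit-learn's `MLPClassifier` in place of the product's own MLP procedure are all assumptions for illustration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Reading .sav files through pandas requires the pyreadstat package.
df = pd.read_spss("bankloan.sav")

# Keep only applicants whose default history is known; the dataset is
# assumed to also hold prospective customers with no recorded outcome.
known = df.dropna(subset=["default"])

X = known.drop(columns=["default"]).select_dtypes("number")  # numeric predictors
y = known["default"].astype(str)                             # two-category target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of four units, standing in for the network in the figure.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```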

The structure shown in the figure is known as a **feedforward
architecture** because the connections in the network flow
forward from the input layer to the output layer without any feedback
loops. In this figure:

- The **input layer** contains the predictors.
- The **hidden layer** contains unobservable nodes, or units. The value of each hidden unit is some function of the predictors; the exact form of the function depends in part upon the network type and in part upon user-controllable specifications.
- The **output layer** contains the responses. Since the history of default is a categorical variable with two categories, it is recoded as two indicator variables. Each output unit is some function of the hidden units. Again, the exact form of the function depends in part on the network type and in part on user-controllable specifications. A code sketch of this layered composition follows the list.
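The layered composition can be written out directly. The sketch below uses NumPy, with tanh hidden units and a softmax output over the two default indicators as illustrative stand-ins; the actual activation functions depend on the network type and your specifications.

```python
import numpy as np

def mlp_forward(x, W_h, b_h, W_o, b_o):
    """One-hidden-layer feedforward pass: output = f_o(W_o · f_h(W_h · x + b_h) + b_o)."""
    hidden = np.tanh(W_h @ x + b_h)   # hidden units: some function of the predictors
    logits = W_o @ hidden + b_o       # output units: some function of the hidden units
    return np.exp(logits) / np.exp(logits).sum()  # softmax over the two indicators

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                    # three predictors
W_h, b_h = rng.normal(size=(4, 3)), rng.normal(size=4)    # four hidden units
W_o, b_o = rng.normal(size=(2, 4)), rng.normal(size=2)    # two outputs (default: yes/no)
print(mlp_forward(x, W_h, b_h, W_o, b_o))
```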

The MLP network allows a second hidden layer; in that case, each unit of the second hidden layer is a function of the units in the first hidden layer, and each response is a function of the units in the second hidden layer.
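Continuing the NumPy sketch above, a second hidden layer adds one more function composition between the predictors and the responses; tanh and softmax remain illustrative choices, not the forms any particular network type prescribes.

```python
import numpy as np

def mlp_two_hidden(x, W1, b1, W2, b2, W_o, b_o):
    h1 = np.tanh(W1 @ x + b1)   # first hidden layer: function of the predictors
    h2 = np.tanh(W2 @ h1 + b2)  # second hidden layer: function of the first
    logits = W_o @ h2 + b_o     # each response: function of the second hidden layer
    return np.exp(logits) / np.exp(logits).sum()

rng = np.random.default_rng(1)
x = rng.normal(size=3)          # three predictors
print(mlp_two_hidden(
    x,
    rng.normal(size=(4, 3)), rng.normal(size=4),  # first hidden layer: 4 units
    rng.normal(size=(3, 4)), rng.normal(size=3),  # second hidden layer: 3 units
    rng.normal(size=(2, 3)), rng.normal(size=2),  # two output units
))
```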