LOSS Subcommand (NLR command)
LOSS specifies a loss function for CNLR to minimize. By default, CNLR minimizes the sum of squared residuals. LOSS can be used only with CNLR; it cannot be used with NLR.
- The loss function must first be computed in the model program. LOSS is then used to specify the name of the computed variable.
- The minimizing algorithm may fail if it is given a loss function that is not smooth, such as the absolute value of the residuals.
- If derivatives are supplied, the derivative of each parameter must be computed with respect to the loss function rather than the predicted value. The easiest way to do this is in two steps: first compute the derivatives of the model with respect to each parameter, then compute the derivative of the loss function with respect to the model and multiply it by each model derivative.
- When LOSS is used, the usual summary statistics are not computed. Standard errors, confidence intervals, and correlations of the parameters are available only if the BOOTSTRAP subcommand is specified.
Example
MODEL PROGRAM A=1 B=1.
COMPUTE PRED=EXP(A+B*T)/(1+EXP(A+B*T)).
COMPUTE LOSS=-W*(Y*LN(PRED)+(1-Y)*LN(1-PRED)).
DERIVATIVES.
COMPUTE D.A=PRED/(1+EXP(A+B*T)).
COMPUTE D.B=T*PRED/(1+EXP(A+B*T)).
COMPUTE D.A=(-W*(Y/PRED - (1-Y)/(1-PRED)) * D.A).
COMPUTE D.B=(-W*(Y/PRED - (1-Y)/(1-PRED)) * D.B).
CNLR Y /LOSS=LOSS.
- The second COMPUTE command in the model program computes the loss function and stores its values in the variable LOSS, which is then specified on the LOSS subcommand.
- Because derivatives are supplied in the derivatives program, the derivatives of all parameters are computed with respect to the loss function, rather than the predicted value.
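The two-step chain rule used in the example can be sketched in ordinary Python (not SPSS syntax) for a single case. The variable names a, b, t, y, and w below mirror the parameters A and B, the predictor T, the response Y, and the weight W from the example; the finite-difference check at the end is added here only to confirm that multiplying the loss derivative by the model derivatives gives the correct gradient.

```python
import math

def loss(a, b, t, y, w):
    # Model: PRED = exp(a + b*t) / (1 + exp(a + b*t))  (logistic curve)
    pred = math.exp(a + b * t) / (1 + math.exp(a + b * t))
    # Loss: -w * (y*ln(PRED) + (1-y)*ln(1-PRED))  (weighted cross-entropy)
    return -w * (y * math.log(pred) + (1 - y) * math.log(1 - pred))

def loss_derivatives(a, b, t, y, w):
    """Two steps: model derivatives first, then multiply by the
    derivative of the loss with respect to the model."""
    pred = math.exp(a + b * t) / (1 + math.exp(a + b * t))
    # Step 1: derivatives of the model (PRED) with respect to A and B.
    d_a = pred / (1 + math.exp(a + b * t))       # equals pred * (1 - pred)
    d_b = t * pred / (1 + math.exp(a + b * t))
    # Step 2: derivative of the loss with respect to PRED, times step 1.
    dloss_dpred = -w * (y / pred - (1 - y) / (1 - pred))
    return dloss_dpred * d_a, dloss_dpred * d_b

# Check the chain-rule gradient against central finite differences
# at arbitrary illustrative values.
a, b, t, y, w = 0.5, -0.3, 2.0, 1.0, 1.5
da, db = loss_derivatives(a, b, t, y, w)
eps = 1e-6
da_num = (loss(a + eps, b, t, y, w) - loss(a - eps, b, t, y, w)) / (2 * eps)
db_num = (loss(a, b + eps, t, y, w) - loss(a, b - eps, t, y, w)) / (2 * eps)
assert abs(da - da_num) < 1e-6 and abs(db - db_num) < 1e-6
```

In an actual CNLR run the loss would be summed over all cases; this sketch only illustrates why each D.A and D.B in the derivatives program is the model derivative scaled by the same loss-derivative factor.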