You use forecast performance metrics to determine which
model produces the best fit between predicted and actual values.
About this task
To generate a summary of a forecast model's performance
metrics:
Procedure
- Open the forecast that you want to test in the Forecast editor.
- Run the forecast model that you want to test.
The forecast results are displayed in the Results editor.
- Click the Create Performance toolbar
button.
A table containing the forecast model's
performance metrics is displayed.
- Cumulative Forecast Error
- Equal to the sum of the differences between predicted and actual values.
- Mean Absolute Deviation
- Equal to the sum of the absolute values of the forecast error
divided by the number of values. This metric tends to provide the
best indicator of performance and is used as the default comparison
criterion in forecast graphs.
- Mean Square Error
- Calculated as the average of the squared error values. This metric
is very sensitive to individual large errors, because squaring
amplifies them.
- Mean Absolute Percent Error
- Calculated as the average of the absolute differences between
predicted and actual values, each expressed as a percentage of the
actual value.
- Tracking Signal
- Calculated as the ratio of the cumulative forecast error to the
mean absolute deviation.
In general, the closer the error is to zero, the better the
performance of the model; for example, a performance error of zero
implies a perfect fit between the predicted and actual values. A
sketch showing how these metrics can be computed follows this
procedure.
- Optional: If you want to view the hierarchical structure of the
forecast, open the Outline view from the main menu.
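The following Python sketch is for illustration only and is not part of
the product; it shows one way to compute the five metrics described
above from lists of predicted and actual values. The function name
performance_metrics is hypothetical, and the exact formulas used by the
product may differ in detail (for example, whether the mean square
error is a sum or an average).

```python
def performance_metrics(predicted, actual):
    """Compute illustrative forecast performance metrics."""
    n = len(actual)
    errors = [p - a for p, a in zip(predicted, actual)]

    # Cumulative Forecast Error: sum of the differences between
    # predicted and actual values.
    cfe = sum(errors)

    # Mean Absolute Deviation: sum of the absolute forecast errors
    # divided by the number of values.
    mad = sum(abs(e) for e in errors) / n

    # Mean Square Error: average of the squared error values.
    mse = sum(e * e for e in errors) / n

    # Mean Absolute Percent Error: average absolute error, each
    # expressed as a percentage of the actual value.
    mape = 100.0 * sum(abs(e) / abs(a) for e, a in zip(errors, actual)) / n

    # Tracking Signal: ratio of cumulative forecast error to MAD.
    ts = cfe / mad if mad else float("nan")

    return {"CFE": cfe, "MAD": mad, "MSE": mse, "MAPE": mape, "TS": ts}


# Example: a forecast close to the actual values yields metrics near zero.
print(performance_metrics([102, 98, 105], [100, 100, 100]))
```

As the example suggests, a model whose predictions track the actual
values closely produces small error values across all five metrics.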