Interpreting the preview
When you preview forecast data for the first time, the results can sometimes be unexpected. Here are some situations where you might encounter unexpected preview results, and explanations of why they can occur.
Failure to forecast
Not enough data points provided
To perform a prediction, typically a minimum of seven historical data points must be provided. Furthermore, for a seasonality of x to be detected, more than 2x (and ideally 3x) data points must be provided. For example, if the season length is 12, more than 24 data points must be provided, and ideally 36. For all forecasting activities, it is best to provide as much historical information as possible.
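The minimums described above can be expressed as a small check. This is an illustrative helper, not part of the forecasting engine; the function names are invented, and the thresholds simply restate the rules from the text (at least seven points overall, and more than 2x points for a seasonality of x).

```python
# Hypothetical helper restating the data-point minimums from the text.
# Not part of any forecasting engine; thresholds mirror the documentation.

def enough_history(n_points, season_length=None):
    """Return True if n_points meets the stated minimums."""
    if n_points < 7:                         # fewer than 7 points: no forecast
        return False
    if season_length is not None:
        return n_points > 2 * season_length  # need more than 2 full seasons
    return True

def ideal_history(season_length):
    """Ideal number of points for detecting a season of this length (3x)."""
    return 3 * season_length
```

For a season length of 12, `enough_history(25, 12)` passes the minimum, while `ideal_history(12)` reports that 36 points would be preferable.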
Too many ignored data points
If there are too many gaps in the historical data, a forecast may not complete successfully. Try providing more historical data, or reduce the number of ignored points by filling in missing values yourself rather than relying on the engine to fill them in.
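One simple way to fill gaps yourself is linear interpolation between the known neighboring values. This is a minimal sketch of that idea only; it is not how the forecasting engine imputes values, and it assumes the first and last points of the series are present.

```python
# A minimal sketch of filling gaps yourself with linear interpolation,
# rather than leaving missing points for the engine to handle.
# Illustrative only; not the engine's imputation method.

def fill_gaps(series):
    """Replace None entries with linearly interpolated values.

    Assumes the first and last entries are present.
    """
    filled = list(series)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            j = i
            while filled[j] is None:        # find the next known value
                j += 1
            step = (filled[j] - filled[i - 1]) / (j - i + 1)
            for k in range(i, j):           # fill the gap evenly
                filled[k] = filled[k - 1] + step
            i = j
        i += 1
    return filled
```

For example, `fill_gaps([1, None, None, 4])` produces `[1, 2.0, 3.0, 4]`.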
Successful forecast
Peculiar forecasted data
If the forecasted data seems wildly wrong, the time axis is likely not set up properly. See the setup section Forecasting requirements for details on how to set it up correctly.
Additionally, sparse data in a cube can result in a forecast with low prediction accuracy that shows exceedingly low negative values or exceedingly high positive values. In this case, review the cube for sparsity and verify that the historical data is in line with predictions.
Horizontal line
A horizontal line typically occurs when historical data is sparse, so that no trend or seasonality can be detected. The following examples show where this happens and what can be done about it.
Trend

Not enough data is available to detect the trend that one might expect. This is confirmed by the Trend component that is returned (see the statistical details), which is None. Providing more historical data allows the trend to be detected appropriately. See the following example.


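A Trend component of None corresponds to a fitted slope that is effectively zero. The following is a toy least-squares slope check to illustrate that idea; the function names and tolerance are invented, and this is not the engine's actual trend detector.

```python
# Toy illustration of how a "Trend: None" result can arise: when the
# least-squares slope over the history is negligible, no trend is reported.
# Invented names and tolerance; not the engine's detection logic.

def slope(values):
    """Ordinary least-squares slope of values against indices 0..n-1."""
    n = len(values)
    xm = (n - 1) / 2
    ym = sum(values) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(values))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def trend_component(values, tol=0.05):
    """Report 'None' when the fitted slope is negligible, else 'Linear'."""
    return "None" if abs(slope(values)) < tol else "Linear"
```

A flat series such as `[5, 5, 5, 5, 5]` yields `"None"`, while `[1, 2, 3, 4]` yields `"Linear"`.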
The width of the confidence envelope is based largely on how well the model matched the historical data and on the confidence interval that you specify (for example, 95%). That is, given the strength of the match to the historical data, the forecasted values are expected to fall within this range at the specified confidence level. In the following example, with a perfect match, effectively no envelope is displayed.


In this example, however, given the variance of the historical data, the prediction expects continued variance along the same trend line. Trend is detected in this case.


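The relationship between fit quality and envelope width can be sketched as follows: the half-width of an interval scales with the spread of the in-sample residuals, so a perfect fit (zero residuals) produces a zero-width envelope. This is a rough illustration under that assumption, not the engine's actual computation.

```python
# Rough sketch of how envelope width relates to fit quality: half-width is
# z * (std of in-sample residuals). A perfect historical fit gives zero
# residuals, hence effectively no envelope. Not the engine's computation.

from statistics import pstdev

def envelope_half_width(actuals, fitted, z=1.96):
    """Half-width of a ~95% envelope from in-sample residuals (z=1.96)."""
    residuals = [a - f for a, f in zip(actuals, fitted)]
    return z * pstdev(residuals)
```

With a perfect match, `envelope_half_width([1, 2, 3], [1, 2, 3])` is `0.0`; any residual spread widens the envelope proportionally.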
After you choose to ignore outliers, new outliers can arise. This is because correcting outliers changes the model that is fitted to the historical information, and with it the confidence intervals. With changed historical confidence intervals, new outliers might be detected.
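This effect can be shown with a toy z-score rule: correcting a flagged point shrinks the residual spread, so a point that previously sat inside the interval can fall outside the recomputed one. The rule below is invented for illustration and is not the engine's detection method.

```python
# Toy illustration of why correcting outliers can surface new ones:
# replacing a flagged point changes the spread, so the recomputed interval
# can flag different points. Invented z-score rule, not the engine's method.

from statistics import mean, pstdev

def flag_outliers(values, z=2.0):
    """Indices of points more than z standard deviations from the mean."""
    m, s = mean(values), pstdev(values)
    if s == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - m) > z * s]
```

For the series `[10, 10, 12, 10, 10, 100]`, only the `100` is flagged at first; after correcting it to `10`, the much smaller spread causes the `12` to be flagged instead.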
Confidence interval gets narrower over time
In some cases, the projected confidence interval gets narrower over time.

In the previous case, you can see that the confidence interval doesn't exist for 2025. Intuitively, we know that the future is increasingly unknown; however, what is illustrated and forecasted is based only on the known historical information.
Graph appears disconnected
Sometimes an outlier is detected at the very start of the forecast period, as in this example.

Correcting the outlier may cause the graph to appear disconnected, as shown here.

This is expected. The solid line represents the historical data, which ends at the corrected historical data point. Forecasted data continues from the corrected historical data point.
Peculiar positioning of outliers
At first glance, some outlier identifications might seem peculiar, as in this example.

You might have expected the last historical point at 2019 to be the sole outlier. This would very likely have been the case if the detected seasonality was 4. However, in this case a seasonality of 2 was detected. Thus, from the point where outlier detection began, an outlier was positioned everywhere a cycle was expected but not found. The purpose of showing this is to illustrate that many factors contribute to outlier detection.
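Why a series might score a seasonality of 2 rather than 4 can be illustrated with a toy autocorrelation check: the candidate lag with the strongest autocorrelation is taken as the season length. Real engines use more robust tests; the functions below are invented for illustration only.

```python
# Toy illustration of seasonality detection via autocorrelation: the
# candidate lag with the highest autocorrelation wins. A period-2 series
# scores better at lag 2 than at lag 4, which changes where cycles (and
# therefore outliers) are expected. Not the engine's actual method.

from statistics import mean

def autocorr(values, lag):
    """Lag-k autocorrelation of a series (population form)."""
    m = mean(values)
    denom = sum((v - m) ** 2 for v in values)
    num = sum((values[i] - m) * (values[i + lag] - m)
              for i in range(len(values) - lag))
    return num / denom

def best_season(values, candidates=(2, 3, 4)):
    """Candidate lag with the highest autocorrelation."""
    return max(candidates, key=lambda lag: autocorr(values, lag))
```

For an alternating series such as `[1, 5, 1, 5, 1, 5, 1, 5]`, lag 2 scores higher than lag 4 (more overlapping terms contribute), so a seasonality of 2 is chosen.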