April 17, 2018 | Written by: John Parks
Categorized: Cognitive Retail
When I first started my career, I was a demand planner for a large consumer products company. I had all these fancy algorithms (in Lotus 1-2-3, mind you) back when Excel was in its infancy, before many companies were leveraging big systems like JDA, SAP, and Oracle. On the first Monday of every month I couldn’t wait to get into the office to check the line printer for the forecast accuracy report that printed out on the old green and white paper.
I used to get so disappointed when I had a bad month, and so excited when I got a forecast really close. But after a few years it occurred to me: “Hey wait, there isn’t really much more I can do.” At some point the errors are just random. My models are as good as they can be, and the accuracy is what it is. So what’s the point of grading me with an accuracy metric?
What became even more annoying was how my boss would set targets for the department to “improve forecast accuracy by 5%.” That’s nice, but how? I remember thinking back then, “I’m fresh out of levers to pull!” It was like expecting me to forecast something other than 7 when rolling a pair of dice. I know that sometimes I’m going to roll snake eyes or boxcars, I just don’t know when. 7 is my best forecast.
I see forecast accuracy used as a key performance indicator (KPI) at almost all of our clients, touted as though it can be easily controlled, even though it can’t be.
So who really does care about error? Well, inventory planning and supply planning care. They need to look at error to decide how much supply-side buffer is required. The less accurate the forecast, the more they need to protect with inventory or capacity.
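One common way inventory planners translate forecast error into a buffer is a safety-stock calculation driven by the spread of historical errors. Here is a minimal sketch, assuming a 95% cycle service level (z ≈ 1.65) and a two-period lead time; the function name and both parameters are illustrative assumptions, not anything prescribed in this post.

```python
import math
from statistics import stdev

def safety_stock(forecast, actual, service_z=1.65, lead_time_periods=2):
    """Size a supply-side buffer from historical forecast errors.

    service_z=1.65 roughly corresponds to a 95% service level;
    both defaults here are illustrative assumptions.
    """
    errors = [a - f for f, a in zip(forecast, actual)]
    sigma = stdev(errors)  # spread of the forecast error
    # A less accurate forecast (larger sigma) means a larger buffer,
    # scaled up for longer lead times.
    return service_z * sigma * math.sqrt(lead_time_periods)

fcst = [100, 100, 100, 100, 100, 100]
acts = [95, 108, 92, 104, 97, 110]
print(round(safety_stock(fcst, acts), 1))
```

The key point from the paragraph above survives in the code: the buffer grows with the error spread, which is why supply planners care about accuracy even when the demand planner can’t improve it further.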
The financial community cares too. Whether external like Wall Street or internal finance, they care about your ability to control your business and recognize how you are performing.
So what should the demand planner care about?
Far more important is for the planner to focus on forecast bias. Bias is a systematic pattern of forecasting too low or too high. A forecaster loves to see patterns in history, but hates to see patterns in error; if there are patterns in error, there’s a good chance you can do something about them, because patterned error is a sign of something systematic rather than random noise.
With bias, it’s easy to blame the actuals, as in “how could our customers consistently buy less than the forecast? There must be something wrong with the market.” The real improvement opportunity is to identify the patterns in error and do root cause analysis, either quantitatively or qualitatively via collaboration. Usually the issue nowadays is not with the forecasting tools; it’s with the manual adjustments to the forecast, or qualitative views overriding the math.
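A common quantitative way to surface the “patterns in error” described above is a tracking signal: cumulative error divided by mean absolute deviation. This is a sketch, not the author’s method; the |TS| > 4 threshold is a widely used rule of thumb, assumed here, and practitioners vary it.

```python
def bias_report(forecast, actual):
    """Flag systematic bias in a forecast history.

    Tracking signal = cumulative error / mean absolute deviation (MAD).
    The |TS| > 4 cutoff is an assumed rule of thumb.
    """
    errors = [f - a for f, a in zip(forecast, actual)]  # + means over-forecast
    cum_error = sum(errors)
    mad = sum(abs(e) for e in errors) / len(errors)
    ts = cum_error / mad if mad else 0.0
    return cum_error, ts, abs(ts) > 4

# Consistently under-forecasting: a clear pattern in the error.
fcst = [100] * 8
acts = [106, 104, 108, 103, 107, 105, 109, 104]
cum, ts, flagged = bias_report(fcst, acts)
```

Random error keeps the tracking signal near zero; a run of same-sign errors drives it past the threshold, which is exactly the cue to start root cause analysis.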
Also important to look at is not just ‘how much bias’ but ‘how many times.’ Let’s say your historical forecast was 100 for each of the last 10 periods. Nine times the actual came in at 95. One time the actual came in at 125. So the total forecast was 1,000 and total actuals were 980, for a 2% error. Not too bad, but over-forecasting nine times out of 10 is like flipping a coin 10 times and getting nine heads. I see skews like that all the time with our clients. It’s a clear improvement opportunity.
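The arithmetic in that example, including the coin-flip intuition, can be checked in a few lines. The binomial calculation is my addition: it quantifies how unlikely nine-plus same-side misses out of ten would be if over- and under-forecasting were really a fair coin.

```python
from math import comb

forecast = [100] * 10
actual = [95] * 9 + [125]  # nine misses low, one miss high

total_fcst, total_act = sum(forecast), sum(actual)   # 1,000 vs 980
pct_error = (total_fcst - total_act) / total_fcst    # 2% over-forecast overall

over_count = sum(f > a for f, a in zip(forecast, actual))  # 9 of 10 periods

# If the direction of each miss were a fair coin flip, the chance of
# landing on one side 9 or more times out of 10 is 11/1024, about 1%.
p_nine_plus = sum(comb(10, k) for k in (9, 10)) / 2**10
```

A 2% aggregate error looks healthy, but a roughly 1% probability of that directional skew arising by chance is the tell that the forecast is biased, not just noisy.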
In summary, error isn’t always addressable, but bias usually is. Consider focusing on bias first when trying to improve your forecasts.