When writing the earlier post "Three Must-Know Basics of Forecasting", I actually had six basics in mind. Here are the other three:
4. All forecasts can be improved.
Since all forecasts are wrong, there is always room for improvement, at least in terms of accuracy. Broadly speaking, the objective of forecast improvement is to enhance usefulness. Usefulness has many aspects, so it can be hard to figure out what to improve. Beyond the ones discussed in "Three Must-Know Basics of Forecasting", such as various error metrics, interpretability, traceability and reproducibility, here are some more specific directions for potential improvement:
1) Spread of errors. Nobody likes surprisingly big errors. Reducing the variance or range of the errors means reducing the uncertainty, which in turn increases the usefulness of the forecasts. Sometimes the business may even give up some central tendency of the error (e.g., MAPE) to gain improvement in the spread (e.g., the standard deviation of APE).
2) Interpretability of errors. For instance, in long-term load forecasting, due to the uncertainty in long-term weather and economic forecasts, the load forecasts may show significant errors from time to time. The forecasters should then help the business users understand how much of the error is contributed by modeling error, weather forecast error, and economic forecast error. Breaking down the error into its sources increases its interpretability as well as its usefulness.
3) Requirement of resources. In reality, we always have limited resources to build a forecast. The limitations may come from data, hardware, and labor. If we can simplify the forecasting process by reducing the requirements on these resources, that can be very valuable to the business side.
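To make the first direction concrete, here is a minimal numeric sketch of the MAPE-versus-spread trade-off. The numbers are purely illustrative (not from any real data set): model B has a slightly lower MAPE than model A, but a much larger spread of APEs, so a business that hates surprises might still prefer A.

```python
import numpy as np

# Hypothetical absolute percentage errors (APE, in %) from two models
# over four forecast periods -- illustrative numbers only.
ape_a = np.array([5.0, 5.0, 5.0, 5.0])   # steady, predictable errors
ape_b = np.array([1.0, 1.0, 1.0, 16.0])  # lower on average, but one big miss

# Central tendency (MAPE) vs. spread (standard deviation of APE)
print(ape_a.mean(), ape_a.std())  # 5.0, 0.0
print(ape_b.mean(), ape_b.std())  # 4.75, ~6.5
```

Model B "wins" on MAPE (4.75% vs. 5%), yet its errors are far less predictable, which is exactly the trade-off described above.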
5. Accuracy is never guaranteed.
Well, if we turn in a forecast of zeros at all points, can't we guarantee a MAPE of 100% (assuming the actuals are never zero)?
Yes, but so what? Is it of any value?
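The zero-forecast claim is easy to verify: when the forecast is zero and the actual is nonzero, every absolute percentage error is exactly 100%. A quick sketch with made-up numbers:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent (actuals must be nonzero)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs(actual - forecast) / np.abs(actual))

# Hypothetical nonzero actuals -- illustrative numbers only
actual = np.array([120.0, 95.0, 210.0, 180.0])

# A forecast of all zeros: |a - 0| / |a| = 1 for every point
print(mape(actual, np.zeros_like(actual)))  # 100.0
```

A guaranteed error, but a useless forecast, which is exactly the point.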
Due to the stochastic nature of forecasting, the future will never repeat history in exactly the way described by our models. Sometimes the deviations are large; sometimes they are small. Even if we achieved a similar accuracy over the past few years, we still cannot guarantee the same or similar accuracy going forward. I have seen and heard of consultants and vendors promising unrealistic accuracy to clients in order to sell their services or solutions. This is one of the worst practices, because eventually the clients will realize that the error is not as low as what was promised.
6. Having a second opinion is preferred.
"A man with a watch knows what time it is. A man with two watches is never sure."
In our daily life, we probably prefer one watch to two. In forecasting, it's the opposite. There is no perfect model. If we only have one model, we will experience "bad" forecasts from time to time. If we have multiple models, the situation can be completely different. We can have good confidence when they agree with each other, and we can focus on the periods when they disagree significantly. Empirically, combining forecasting techniques usually does a better job than each individual model by offering more robust and accurate forecasts. Therefore, one of the best practices is to run multiple models and combine the forecasts.
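The simplest way to combine forecasts is to average them. Here is a toy sketch with made-up numbers, deliberately constructed so that one model over-forecasts and the other under-forecasts; the average cancels much of both biases. (Real models will not cancel this cleanly, but the combination is often more robust than either individual.)

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs(actual - forecast) / np.abs(actual))

# Hypothetical toy series: model 1 tends to over-forecast,
# model 2 tends to under-forecast -- illustrative numbers only.
actual = np.array([100.0, 110.0, 120.0])
model1 = np.array([112.0, 118.0, 131.0])
model2 = np.array([ 90.0, 104.0, 108.0])

combined = (model1 + model2) / 2  # simple average of the two forecasts

print(mape(actual, model1))    # ~9.48
print(mape(actual, model2))    # ~8.48
print(mape(actual, combined))  # ~0.78
```

In practice the weights need not be equal; how to choose them is one of the "how to" questions mentioned below.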
This blog post probably leads to more "how to" questions, such as how to reduce the spread of errors, how to interpret errors, and how to combine forecasts. Some of them would take a few lectures to answer, while others can fit in another blog post. Stay tuned...