The growing success of machine learning in forecasting brings renewed concerns about the so-called ‘black box’ nature of these AI models. It’s no different from driver assistance packages in cars, face recognition at airports, or automated credit and insurance decisions: we want to know whether the systems are any good, whether their decisions are fair, and whether we can generally put our trust in them.
Concerns like this have paved the way for a field of research and a range of solutions known as XAI, or eXplainable Artificial Intelligence. It’s no wonder it has become a hot topic in demand planning too. But what exactly can we expect from XAI?
The critical function of demand forecasting involves dealing with many uncertainties. Professional planners want their forecasts to be as accurate as possible. Here’s how a day at the planning office might play out:
Case closed? Not for the business manager who firmly believes that this new technology should be embraced rather than rejected.
Planners and managers both have a point here.
It does make sense to trust a model that has proven to be more accurate, on average, than traditional methods. But demand planners are forecasting experts too, so they quite rightly want to understand why the ML output would be any better than their gut feeling. They also want to know how the machine came to its conclusion and what factors have been taken into account. Such insight would allow them to further refine the forecast and take control, which is especially important if the planners feel that the machine systematically fails to incorporate important factors.
So, these are the three concerns that XAI needs to address:
- Trust: why should planners believe the ML forecast over their own expert judgment?
- Insight: how did the machine come to its conclusion, and which factors did it take into account?
- Control: can planners refine the forecast and intervene when the model systematically misses important factors?
True XAI should involve a combination of techniques, which in the case of ML will definitely include guided experimentation with the model.
Diving into the pure math behind the model will be somewhat unconvincing. Do you need to look inside the brain of airline pilots to trust flying with them? Do you need to know how the ‘brain’ of an autonomous car is programmed to trust the machine? Or would it be more helpful if the car’s digital dashboard shows you which objects, signs, conditions, and facts have been taken into account?
There are ways to get users to understand the model and give them control without requiring them to be intimately familiar with the underlying mathematics.
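One such technique, offered here as an illustrative sketch rather than any specific vendor’s implementation, is permutation importance: shuffle one input factor at a time and see how much the forecast error worsens. The `model` and `metric` objects below are placeholders for whatever trained forecasting model and accuracy measure a planning team uses.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Estimate each feature's contribution to the forecast by shuffling
    that feature and measuring how much the error metric degrades.
    A large increase in error means the model leaned on that factor."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and the target
            scores.append(metric(y, model.predict(X_perm)))
        importances[j] = np.mean(scores) - baseline
    return importances
```

Presented like this (for instance, as a ranked list of factors such as promotions, weather, or price changes), the output speaks the planner’s language: it shows what the machine took into account without exposing the underlying mathematics.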
As for trust, even thousands of well-conducted simulations or blind tests attesting to the ML model’s superior performance are no substitute for a real-life track record. If autonomous cars systematically garner better real-life crash statistics than cars driven by humans, you will likely begin to trust them more. If, over time, demand planners see the ML forecasting model systematically outperforming their traditional forecasts, will they continue to mistrust it?
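That track record can be made tangible with a simple head-to-head comparison, sketched below under the assumption that both the ML forecast and the planner’s forecast are archived alongside actual demand; the function names and the choice of MAPE as the accuracy measure are illustrative.

```python
import numpy as np

def track_record(actuals, ml_forecast, planner_forecast, window=12):
    """Rolling mean absolute percentage error (MAPE) for each method,
    plus a running count of periods where the ML forecast was closer,
    so planners can watch the head-to-head record build over time."""
    actuals = np.asarray(actuals, dtype=float)

    def rolling_mape(forecast):
        ape = np.abs(actuals - np.asarray(forecast, dtype=float)) / np.abs(actuals)
        return np.array([ape[max(0, i - window + 1):i + 1].mean()
                         for i in range(len(ape))])

    ml_mape = rolling_mape(ml_forecast)
    planner_mape = rolling_mape(planner_forecast)
    ml_wins = int(np.sum(ml_mape < planner_mape))
    return ml_mape, planner_mape, ml_wins
```

A dashboard built on a comparison like this doesn’t explain the model’s internals, but it does supply the real-life evidence that simulations and blind tests cannot.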
Lennert oversees R&D for Demand Management at OMP. He is driven above all by the search for innovations that make our customers’ demand planning journeys more manageable and, at the same time, more effective.