
How ‘explainable’ should ML-powered forecasting models be?

Lennert Smeets - November 15, 2021

Reading time: 3 min


The growing success of machine learning in forecasting brings with it renewed concerns about the so-called ‘black box’ nature of this type of AI model. It’s no different from driver-assistance systems in cars, face recognition at airports, or automated credit and insurance decisioning: we want to know whether these systems are any good, whether their decisions are fair, and whether we can generally put our trust in them.

Concerns like these have paved the way for a field of research and a range of solutions known as XAI, or eXplainable Artificial Intelligence. It’s no wonder XAI has become a hot topic in demand planning too. But what exactly can we expect from it?


A day in the planning office with ML

The critical function of demand forecasting involves dealing with many uncertainties. Professional planners want their forecasts to be as accurate as possible. Here’s how a day at the planning office might play out:


  • The planner receives a forecast generated by an ML model and is surprised to see that it predicts a downswing;
  • He wonders if the model has already taken into account the effects of known factors such as planned promotional campaigns and vacation time;
  • His manager argues that, on average, this particular model produces better results than traditional methods;
  • But the predicted downswing is highly counterintuitive in view of the planned promotional campaign, which is why our planner wonders if there’s a glitch in the model;
  • In general, he’s frustrated that he doesn’t know how the model works;
  • The planner finally decides to put the automated forecast aside and create his own, exactly the way he’s been doing it for years.


Case closed? Not for the business manager who firmly believes that this new technology should be embraced rather than rejected.


Planners want to trust, understand, and control the model

Planners and managers both have a point here.

It does make sense to trust a model that has proven to be more accurate, on average, than traditional methods. But demand planners are forecasting experts too, so they quite rightly want to understand why the ML output would be any better than their gut feeling. They also want to know how the machine came to its conclusion and what factors have been taken into account. Such insight would allow them to further refine the forecast and take control, which is especially important if the planners feel that the machine systematically fails to incorporate important factors.

So, these are the three concerns that XAI needs to address:

  • Trust — Making sure that users will trust the outputs of the given model;
  • Understand — Giving users insight into the mechanisms of the model so that they know what has been taken into account and what has not (see the sketch after this list); and
  • Control — Allowing users to intervene in such a way that the model learns from their input and keeps improving, picking up user preferences and handling complex situations better.
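To make the ‘Understand’ concern concrete: one widely used XAI technique is permutation importance, which shuffles one input factor at a time and measures how much the forecast accuracy degrades. Below is a minimal sketch using scikit-learn; the factor names and synthetic data are hypothetical, chosen only to mirror the promotion and vacation effects from the scenario above.

```python
# A minimal sketch of the 'Understand' concern: permutation importance
# reveals which input factors a trained forecasting model actually relies on.
# Factor names and data are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical weekly demand drivers: promo intensity, vacation effect, price index.
X = rng.random((200, 3))
y = 100 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(0, 5, 200)  # synthetic demand

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Shuffle each factor in turn and measure how much accuracy degrades.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["promo", "vacation", "price_index"],
                       result.importances_mean):
    print(f"{name:12s} importance: {score:.3f}")
```

An output like this lets a planner check at a glance whether the promotion factor is actually driving the forecast, without opening the black box itself.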


It’s also a matter of establishing a track record

True XAI should involve a combination of techniques, which in the case of ML will definitely include guided experimentation with the model.

Diving into the pure math behind the model is unlikely to convince anyone. Do you need to look inside a pilot’s brain to trust flying with them? Do you need to know how the ‘brain’ of an autonomous car is programmed to trust the machine? Or would it be more helpful if the car’s digital dashboard showed you which objects, signs, conditions, and facts have been taken into account?


There are ways to get users to understand the model and give them control without requiring them to be intimately familiar with the underlying mathematics.
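For example, the factor attributions behind a single forecast can be rendered as a plain-language summary on the planner’s dashboard. The sketch below assumes the per-factor contributions have already been computed by an attribution method (such as the permutation importance above, or SHAP); the factor names and numbers are purely illustrative.

```python
# A hypothetical sketch of surfacing a forecast explanation in plain language,
# so planners see what the model considered without touching the math.
# Contribution values would come from an attribution method; these are made up.

def explain_forecast(baseline: float, contributions: dict[str, float]) -> str:
    """Render per-factor contributions as a planner-friendly summary."""
    lines = [f"Baseline weekly demand: {baseline:.0f} units"]
    # List the biggest drivers first, with their direction of effect.
    for factor, delta in sorted(contributions.items(),
                                key=lambda kv: -abs(kv[1])):
        direction = "adds" if delta >= 0 else "removes"
        lines.append(f"- {factor}: {direction} {abs(delta):.0f} units")
    total = baseline + sum(contributions.values())
    lines.append(f"Forecast: {total:.0f} units")
    return "\n".join(lines)

print(explain_forecast(1200, {
    "planned promotion": +180,   # hypothetical factors and effects
    "vacation period": -90,
    "price increase": -40,
}))
```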

As for trust, even thousands of well-conducted simulations or blind tests attesting to the ML model’s superior performance are no substitute for a real-life track record. If autonomous cars systematically garner better real-life crash statistics than cars driven by humans, you will likely begin to trust them more. If, over time, demand planners see the ML forecasting model systematically outperforming their traditional forecasts, will they continue to mistrust it?
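Building that track record can be as simple as logging both forecasts alongside the actuals and comparing their accuracy over time. Here’s a minimal sketch, assuming a hypothetical history file with monthly actuals, ML forecasts, and the planner’s own forecasts:

```python
# A minimal sketch of building a head-to-head track record over time.
# The file name and column names are hypothetical assumptions.
import pandas as pd

df = pd.read_csv("forecast_history.csv", parse_dates=["month"])
# Expected columns: month, actual, ml_forecast, planner_forecast

# Absolute percentage error for each method, per month.
for col in ["ml_forecast", "planner_forecast"]:
    df[f"{col}_ape"] = (df[col] - df["actual"]).abs() / df["actual"]

# A six-month rolling MAPE shows which method is winning, and since when.
track_record = (
    df.sort_values("month")
      .set_index("month")[["ml_forecast_ape", "planner_forecast_ape"]]
      .rolling(6)
      .mean()
)
print(track_record.tail())
```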


That’s why I believe that a key pathway to inspiring trust in the system is to build and be able to demonstrate a real-life track record, step by step.

Is your demand planning department ready for an ML journey using XAI? Feel free to reach out to me to discuss. 

Lennert Smeets

Senior Product Manager at OMP USA

Biography

Lennert oversees the R&D of OMP’s Demand Management solution. He is driven by the search for innovations that make our customers’ demand planning journeys more manageable and, at the same time, more effective.
