Do you remember the terrifying events of January 15, 2009, when US Airways Flight 1549 struck a flock of birds over New York City and lost all engine power? Ditching the plane in the Hudson River, Captain Chesley ‘Sully’ Sullenberger saved the lives of all 155 people on board.
In 2016, Clint Eastwood turned the story into a movie called Sully. Although the movie was rightly criticized for its heavily fictionalized and inaccurate depiction of the post-event hearings, one scene is quite interesting for its reference to simulation models and how they can be abused.
Let me explain briefly. In the scene, the pilot is confronted with the results of a flight simulator exercise demonstrating what he should have done in that situation. To everyone’s surprise, the simulation indicated that it would have been possible to return safely to a nearby airport. Utterly perplexed, Sully replied: “I can't quite believe you still have not taken into account the human factor.” He felt that the simulation model failed to consider essential aspects, such as the need for the pilot to act quickly in a highly uncertain situation.
This tells us something about our relationship with models. Many people might have second thoughts if, like Sully, they are confronted with the results of a simulation. Multiple questions arise. Did the model used take into account all the relevant factors? What data were used to feed it? How did it come to its conclusion? Can we trust it to make critical decisions?
I believe that models can be trusted, but on one condition: the model must be transparent. It is essential that people understand the scope of the model. It’s not that users need to fully understand the math behind the model, but they should have insight into its purpose, the strategies it uses to come to a conclusion, and the inherent limitations and pitfalls of the tool.
That’s why I like to teach some modeling or analytics theory to people who use our supply chain planning solution. Let me run through the basics for you. There are three types of models, each with its own purpose, typical strategies, and limitations: descriptive, predictive, and prescriptive models.
Descriptive models use data aggregation and data mining to uncover patterns in past or current events. A familiar example of descriptive modeling is business reporting in the form of graphs, charts, and dashboards. While descriptive models can be very complex, they are usually close to 100% accurate.
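To make this concrete, here is a minimal sketch of descriptive modeling in Python. The sales records are hypothetical; the point is that a descriptive model merely aggregates what already happened, which is why accuracy is rarely the issue.

```python
from collections import defaultdict

# Hypothetical sales records: (region, units sold).
sales = [
    ("North", 120), ("South", 95), ("North", 140),
    ("South", 110), ("West", 80), ("West", 75),
]

# Descriptive modeling: aggregate raw records into per-region totals,
# the kind of summary a report or dashboard would display.
totals = defaultdict(int)
for region, units in sales:
    totals[region] += units

for region in sorted(totals):
    print(f"{region}: {totals[region]} units")
```

A real reporting layer adds filtering, drill-downs, and visualization, but the core operation is the same: summarizing known data, not guessing at unknowns.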
Predictive models are more sophisticated. They analyze historical data and use AI algorithms to uncover trends, make smart extrapolations, and identify likely outcomes.
A prime example in supply chain planning is the use of statistical techniques or machine learning to predict future consumer demand. Statistical predictive models have significant limitations because they are based on a lot of assumptions and rely on limited sets of historical data. The more assumptions that are made, the more potential mistakes or inaccuracies can enter into the equation. Machine learning models are less based on assumptions, are more flexible, and can process much larger sets of data but, crucially, there is a greater need to carefully select and prepare the data.
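The role of assumptions in a statistical predictive model can be illustrated with a deliberately simple sketch: fitting a straight trend line to a hypothetical demand history by ordinary least squares and extrapolating one period ahead. The baked-in assumptions are exactly the kind mentioned above: demand is linear in time, and a handful of data points suffice to estimate the trend.

```python
# Hypothetical monthly demand history (units sold).
history = [100, 104, 109, 115, 118, 124]

# A simple statistical predictive model: fit a linear trend by
# ordinary least squares, then extrapolate one period ahead.
n = len(history)
mean_x = (n - 1) / 2          # mean of the time indices 0..n-1
mean_y = sum(history) / n     # mean of the observed demand
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history))
    / sum((x - mean_x) ** 2 for x in range(n))
)
intercept = mean_y - slope * mean_x

forecast = intercept + slope * n  # predicted demand for the next period
```

If demand is actually seasonal, promotional, or shifting regime, this model will confidently extrapolate the wrong line, which is precisely the kind of limitation users should be aware of.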
Prescriptive models are designed to find the ‘best’ solution for a given problem. Such models make trade-offs between complicated options based on optimization criteria — that’s why they’re also called optimization models. Supply planning relies heavily on optimization models, which we often call solvers.
Examples include campaign or operational plan optimizers, material-cutting optimizers, and engines that compute the outcome of scenarios. The more complex the model, the more computing power and time will be needed to solve a given problem. This may lead to situations where the response simply comes too late. For example, a planner who needs to choose one of five scenarios by the end of the week is not helped much if it takes seven days to compute the given scenarios.
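The trade-off logic of a prescriptive model can be sketched with a toy optimizer in Python. The products, margins, and capacity below are hypothetical, and real solvers use far smarter search strategies; this brute-force enumeration also hints at why computing time explodes as models grow.

```python
from itertools import product

# Hypothetical products: (name, margin per unit, machine hours per unit).
products = [("A", 30, 2), ("B", 50, 4), ("C", 15, 1)]
capacity = 20  # total machine hours available

# A toy prescriptive (optimization) model: enumerate every production
# plan of 0-10 units per product and keep the feasible plan with the
# highest total margin. With 11**3 = 1331 candidate plans this is
# instant; real supply chain models have vastly larger search spaces.
best_plan, best_margin = None, -1
for qty in product(range(11), repeat=len(products)):
    hours = sum(q * h for q, (_, _, h) in zip(qty, products))
    if hours > capacity:
        continue  # plan exceeds machine capacity
    margin = sum(q * m for q, (_, m, _) in zip(qty, products))
    if margin > best_margin:
        best_plan, best_margin = qty, margin
```

Doubling the number of products or the quantity range multiplies the search space combinatorially, which is why industrial solvers rely on techniques such as linear and mixed-integer programming rather than exhaustive enumeration.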
None but the simplest predictive or prescriptive models will ever be infallible. But that should not be a problem. If we understand what a model does — in business terms, not technically — and we have insight into its assumptions and limitations, we can safely use it as a reliable aid to making critical decisions.

Business consultants working with models should give due consideration to this. By adopting the principles of explainable AI (XAI), they can teach users what can and cannot be done with the implemented models, and demonstrate the validity and limitations of outcomes. This helps avoid hasty conclusions such as those drawn at the Sully hearing.
With wide-ranging experience in pharmaceutical supply chain, Momen currently helps customers get the most out of the OMP Solution as an implementation consultant for life sciences and consumer goods projects.