Decision making with AI in industry conference

The conference “Decision making with AI in the industry” took place online on 19 March, organized by the European Supply Chain Forum (ESCF) together with the Eindhoven Artificial Intelligence Systems Institute (EAISI).

The collaboration between the ESCF and EAISI is a logical step in the effort of bringing academia and industry closer together, as the former’s mission to “enable professionals […] to create value by generating, exchanging and integrating knowledge” is perfectly complemented by the latter’s long-term goal to “develop AI-technology for real-time autonomous decision-making”.

Consequently, the conference brought together participants with varied backgrounds and professions, with a common interest in innovating decision-making with artificial intelligence.

“The AI planner of the future” program at the TU/e IE&IS department
After a welcome introduction by Winfried Peeters (Account Executive at ESCF), Prof. Tom Van Woensel, Director of ESCF and Director of Education at the Department of Industrial Engineering & Innovation Sciences (IE&IS), kicked off the session with a presentation on the “AI planner of the future” program, set to start in September 2021 and run for at least five years.

The program is built on the observation that planning in industry is changing due to the growing availability of data and of AI technologies for decision making, while translating AI concepts into practice remains vague. This vagueness, explains Prof. Van Woensel, leads to a lack of buy-in from users and to improper data systems, limiting the adoption of innovative optimization tools that would otherwise be useful.

In response, the IE&IS program is focused on both the technical and human aspects in the context of AI planning for supply chains and logistics while considering “people, profit, and the planet”. To support this focus, experts from both practice and academia across all industries are involved in the program, with real-life living labs being supported by ESCF companies.

Smart Warehousing: Digital Twin and Artificial Intelligence in retail and e-commerce distribution centers
The second presentation was given by Monique van den Broek, Principal Consultant at CQM, a company that helps organizations unravel the intricacies of their processes and then builds frameworks for decision making based on factual information and quantitative models.

Use cases that shape the operation of warehouses, explains Ms. van den Broek, differ between the e-commerce and retail industries. One significant difference is that, in the former, orders are much more frequent but also much smaller than in the latter. However, while the warehouses need to be different, the processes stay roughly the same and usually involve receiving, putaway, replenishment, and order picking.

In this context, smart warehousing faces several challenges: increased volatility, demanding lead times, a shortage of personnel, and only partially automated solutions were underlined during the presentation. Nevertheless, with challenge comes opportunity, including an increase in data availability and smart algorithms and a higher rate of acceptance of robots.

Following Gartner’s data maturity model, a core principle at CQM is to always start with a descriptive and diagnostic phase before even thinking of optimization. It is vital to first understand the warehouse processes before applying AI to optimize them.

As a first example, Ms. van den Broek presented an optimal carrier and pick order problem, in which the goal is to create the optimal allocation of orders to carriers. A local search algorithm is employed for this purpose, and the company can influence its outcome by adjusting the algorithm’s parameters to the current context.
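As an illustration of the kind of local search described here, the sketch below repeatedly tries to move a single order between carriers and keeps the move only if a tunable cost does not increase. The carrier capacity, the cost weights, and the order data are invented for illustration; CQM’s actual model and parameters were not disclosed in the talk.

```python
import random

CARRIER_CAPACITY = 100.0  # assumed volume capacity per carrier (illustrative)

def cost(allocation, weight_carriers=1.0, weight_spread=0.01):
    """Fewer carriers is better; a small tunable weight penalizes under-filled ones."""
    used = [c for c in allocation if c]
    spread = sum(CARRIER_CAPACITY - sum(c) for c in used)
    return weight_carriers * len(used) + weight_spread * spread

def local_search(orders, n_carriers, iters=5000, seed=42, **weights):
    rng = random.Random(seed)
    # Start from a naive first-fit allocation.
    allocation = [[] for _ in range(n_carriers)]
    for o in orders:
        for c in allocation:
            if sum(c) + o <= CARRIER_CAPACITY:
                c.append(o)
                break
    best = cost(allocation, **weights)
    for _ in range(iters):
        src, dst = rng.randrange(n_carriers), rng.randrange(n_carriers)
        if src == dst or not allocation[src]:
            continue
        o = allocation[src][rng.randrange(len(allocation[src]))]
        if sum(allocation[dst]) + o > CARRIER_CAPACITY:
            continue  # move would violate capacity
        allocation[src].remove(o)
        allocation[dst].append(o)
        new = cost(allocation, **weights)
        if new <= best:
            best = new                       # keep the improving move
        else:
            allocation[dst].remove(o)        # undo the move
            allocation[src].append(o)
    return allocation, best

orders = [30, 45, 20, 60, 25, 40, 15, 55]   # invented order volumes
alloc, final_cost = local_search(orders, n_carriers=5)
print(len([a for a in alloc if a]), "carriers used")
```

Adjusting `weight_carriers` and `weight_spread` mimics how a company can steer the algorithm’s outcome toward its current priorities.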

Then, the stackability problem was introduced: a model predicts the amount of unusable volume on a carrier created by imperfect stackability and by rules based on product characteristics such as weight, shape, and fragility. A linear regression model was used to solve this problem; while not the best performing of all researched techniques, it is explainable and simple to implement.

Another important topic – of great interest across all industries – touched upon in the CQM presentation was digital representations of warehouses, also known as Digital Twins. This technology’s added value is that processes can be simulated or emulated and the outcome analyzed with respect to critical KPIs. It can thus assist in warehouse design, algorithm development, and workforce planning during strategic planning and preparation. Meanwhile, the continuous stream of actual data coming from the warehouse enables monitoring and real-time process optimization.
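The “simulate first, then inspect KPIs” workflow can be sketched with a toy simulation such as the one below, which compares two workforce scenarios before committing to either in reality. The process times, order counts, and KPI names are all invented; a real digital twin would be driven by the warehouse’s actual data stream and layout.

```python
import random
import statistics

def simulate_day(n_orders, n_pickers, seed=0):
    """Toy warehouse model: orders are picked by whichever picker frees up first."""
    rng = random.Random(seed)
    picker_free_at = [0.0] * n_pickers      # minute at which each picker is free
    completion_times = []
    for _ in range(n_orders):
        pick_time = rng.uniform(2.0, 6.0)   # assumed minutes per order
        i = min(range(n_pickers), key=picker_free_at.__getitem__)
        picker_free_at[i] += pick_time
        completion_times.append(picker_free_at[i])
    return {
        "makespan_min": max(picker_free_at),
        "avg_completion_min": statistics.mean(completion_times),
    }

# Compare two workforce-planning scenarios on the same simulated demand.
for pickers in (4, 6):
    print(pickers, "pickers ->", simulate_day(n_orders=200, n_pickers=pickers))
```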

When asked how the advantages of smart warehousing relate to the size of the business, Ms. van den Broek explained that a prospective process improvement of 10–15% is estimated, but that the cost of achieving it has to be taken into account and the existence of a clear business use case is crucial.

Shunting Trains with Deep Reinforcement Learning
Dr. Wan-Jui Lee, an AI Researcher at Dutch Railways (NS), gave a presentation on the research the company is doing on Deep Reinforcement Learning (DRL) for train shunting.

The main problem under discussion is the Train Unit Shunting Problem (TUSP), a problem of a sequential decision-making nature. In short, during off-peak hours or at night, not all carriages are in use; some are stored in shunting yards, where repairs, cleaning, and other maintenance activities are usually performed. These activities need to be scheduled, along with movements within the yard. Coupled with the need to match incoming trains with outgoing ones according to the timetable, and with the complex physical layout of the yard itself, this makes it computationally difficult to find a feasible solution, and a hybrid approach combining heuristics, machine learning, and optimization has been researched for this purpose.
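A deliberately simplified taste of the combinatorics involved: even with a single dead-end (LIFO) track and no maintenance tasks, only some arrival orders can serve a given departure order. The sketch below checks this stack feasibility; it is a toy stand-in, not the NS model, which additionally handles yard topology, train matching, and service scheduling.

```python
def lifo_feasible(arrivals, departures):
    """Can trains arriving in `arrivals` order leave in `departures` order
    using a single dead-end track (i.e., a stack)?"""
    track, i = [], 0
    for train in arrivals:
        track.append(train)                 # shunt the arriving train onto the track
        # Send out any train that is both on top and due to depart next.
        while track and i < len(departures) and track[-1] == departures[i]:
            track.pop()
            i += 1
    return i == len(departures)

print(lifo_feasible("ABC", "CBA"))  # feasible: push all three, pop in reverse
print(lifo_feasible("ABC", "CAB"))  # infeasible: after C leaves, B blocks A
```

With multiple tracks, matching constraints, and stochastic arrivals, the number of such feasibility interactions explodes, which motivates the hybrid heuristic/learning approach described in the talk.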

As Dr. Lee explains, a local search-based algorithm was previously developed at NS which, albeit successful in finding solutions to real-life problems, does so without offering a human-understandable way of deciphering how the planning decision is made. Furthermore, random initial solutions result in inconsistent plans, making it difficult for human planners to evaluate and adjust them.

In contrast, a Multi-Agent Deep Reinforcement Learning (MARL) setting, in which each train is an agent, is considered more suitable for this problem: it is expected to generate consistent plans and to be robust to uncertainty. Furthermore, combining Graph Convolutional Networks (GCN) with MARL results in a scalable method for optimizing routing decisions.

NS is involved in long-term R&D cooperation with several universities. This research is no exception, with Dutch universities involved in solving parts of the problem, such as multi-agent pathfinding with DRL and making solutions explainable.

Asked whether the handling of unexpected events is still managed by humans or automated, Dr. Lee explained that in the first phase this is still done by a human planner. In the second phase, however, the AI will learn to offer choices based on the human planner’s historical improvements.

Animal Health Improves Bio-Manufacturing Efficiency
The last presentation of the conference was given by Dr. Tugce Martagan, an assistant professor at the Eindhoven University of Technology. She has won several prizes and distinctions, such as the Marie Curie Research Fellowship (2016–2018) from the European Commission and the 2018 VENI award from the Netherlands Organization for Scientific Research.

Dr. Martagan’s presentation was focused on the collaboration with MSD Animal Health towards developing optimization models and decision support tools to improve biomanufacturing efficiency.

Biomanufacturing methods produce vaccines, hormones, proteins, insulin, and cancer treatments by generating active ingredients from living organisms such as viruses and bacteria. The very nature of biomanufacturing means that researchers and professionals in the field face challenges not found in other industries, such as batch-to-batch variability in production yield, next to common challenges such as lead times and costs.

From the portfolio of decision tools developed with MSD Animal Health, the presentation’s focus was first placed on the automated decision-making for the bleed-feed problem. As explained by Dr. Martagan, during the fermentation process, cell growth follows a specific pattern that usually consists of six phases. During the initial lag phase, the cells adapt to their environment, with growth happening in the acceleration, exponential growth, and deceleration phases. Following the deceleration phase, caused by a depletion in nutrients, the cells enter a stationary phase, followed by a death phase, during which the cells lose viability and die.

Currently, it is customary in the industry to harvest each batch during the deceleration or stationary phase. Uncertainty enters the problem because the duration of each phase and the growth rates can be highly variable in practice, leading to the previously mentioned batch-to-batch variability.

Furthermore, a setup is needed for each fermentation run, which can account for up to 25% of the total production costs.

In response, the bleed-feed technique entails harvesting only a fraction of the cell culture during the exponential growth phase (the bleed) and adding a particular medium (the feed), with the remaining fraction of the culture acting as a seed for a new fermentation run that starts directly in the exponential growth phase. Thus, if successful, bleed-feed bypasses subsequent setups as well as the lag phase. However, as previously stated, the uncertainty in the exponential growth phase’s duration makes the problem difficult.

A stochastic model was formulated as a Markov Decision Process in which the actions are performing the bleed-feed or continuing the fermentation, and the state space is characterized by the cultivation age (i.e., the time elapsed since a setup), the cell growth rate of the culture, and the total number of bleed-feed operations performed since the setup. Since the successful implementation of bleed-feed (a first in the industry) in July 2020, an 85% higher yield per setup and a 30% reduction in lead times have been recorded.
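The structure of such a model can be sketched as a toy finite-horizon Markov Decision Process, solved by backward induction. All numbers below (horizon, phase probabilities, rewards, the cap on bleed-feeds) are invented for illustration; the published model at TU/e and MSD Animal Health is considerably richer, but it shares the state ingredients described above.

```python
from functools import lru_cache

HORIZON = 12          # assumed number of decision epochs per campaign
MAX_BF = 2            # assumed cap on bleed-feed operations per setup
R_CONTINUE = 1.0      # assumed growth reward per epoch while exponential
R_BLEED_FEED = 4.0    # assumed reward from the partial harvest (the bleed)
SETUP_COST = 3.0      # assumed setup cost avoided by seeding a new run

def p_exponential(age):
    """Assumed probability the culture is still exponential at this age,
    capturing the uncertainty in the phase's duration."""
    return max(0.0, 1.0 - 0.15 * age)

@lru_cache(maxsize=None)
def value(t, age, n_bf):
    """Expected future reward from epoch t, cultivation age `age`,
    with `n_bf` bleed-feeds performed since the last setup."""
    if t == HORIZON:
        return 0.0
    p = p_exponential(age)
    # Action 1: continue fermentation for one more epoch.
    v_cont = p * R_CONTINUE + value(t + 1, age + 1, n_bf)
    if n_bf >= MAX_BF:
        return v_cont
    # Action 2: bleed-feed -- partial harvest now, age resets, setup avoided.
    v_bf = p * (R_BLEED_FEED + SETUP_COST) + value(t + 1, 0, n_bf + 1)
    return max(v_cont, v_bf)

def best_action(t, age, n_bf):
    p = p_exponential(age)
    v_cont = p * R_CONTINUE + value(t + 1, age + 1, n_bf)
    if n_bf >= MAX_BF:
        return "continue"
    v_bf = p * (R_BLEED_FEED + SETUP_COST) + value(t + 1, 0, n_bf + 1)
    return "bleed-feed" if v_bf > v_cont else "continue"

print(best_action(0, 0, 0), value(0, 0, 0))
```

The optimal policy falls out of comparing the two action values in each state, which is what makes the resulting decisions auditable by process engineers.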

After each bioreactor setup, the fermentation run’s goal is to achieve the highest possible yield from that batch. However, as shown in one example, three different production lines using an identical recipe and fermentation process obtained varying production yields, with one line consistently generating lower yields. The cause, as it was discovered, was a different bioreactor technology being used, which led the researchers to believe that a controllable input parameter could be adjusted and, thus, an optimal configuration obtained.

This led to the second item in the portfolio, namely learn-by-doing in yield optimization, based on an optimal-learning model, which resulted in a 50% higher yield per batch upon implementation.
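The learn-by-doing idea can be caricatured with a simple epsilon-greedy bandit over a few candidate values of one controllable input parameter: run batches, record yields, and increasingly exploit the best-looking setting. This is a crude stand-in for the optimal-learning model in the talk; the settings, the yield curve, and the noise level are all fabricated.

```python
import random

SETTINGS = [0.2, 0.4, 0.6, 0.8]            # invented candidate parameter values

def run_batch(setting, rng):
    """Hypothetical noisy yield, peaking near setting = 0.6."""
    return 10.0 - 25.0 * (setting - 0.6) ** 2 + rng.gauss(0, 0.5)

def learn_by_doing(n_batches=400, eps=0.1, seed=1):
    rng = random.Random(seed)
    totals = {s: 0.0 for s in SETTINGS}
    counts = {s: 0 for s in SETTINGS}
    for _ in range(n_batches):
        if rng.random() < eps or not any(counts.values()):
            s = rng.choice(SETTINGS)        # explore a random setting
        else:                               # exploit the best average so far
            s = max(SETTINGS, key=lambda x: totals[x] / max(counts[x], 1))
        totals[s] += run_batch(s, rng)
        counts[s] += 1
    return max(SETTINGS, key=lambda x: totals[x] / max(counts[x], 1))

print("recommended setting:", learn_by_doing())
```

The trade-off steered by `eps` — sacrificing some batches to learn versus exploiting the current best guess — is the essence of optimal learning in yield optimization.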

The Digital Twin also appeared in Dr. Martagan’s presentation, as the basis of the last example from the portfolio of decision tools developed together with MSD Animal Health, namely the rhythm wheel. This tool serves as an automated framework for better planning and improved predictability. Production planning in biomanufacturing faces challenges such as batch-to-batch variability, unique production requirements for each antigen, and variability in bioreactor size and technology, among others. This tool’s implementation led to one extra batch per production line without any investment in additional capacity.

Among the implementation challenges the researchers faced with respect to real-world data, Dr. Martagan listed the scale of the experiments and test runs, the need to manually retrieve data from the memory of different pieces of equipment, and the fact that most data was stored on paper.

What enhances the importance of the research and its results even further is that, according to Dr. Martagan, its real-world implementations resulted in €50 million in savings per year, a reduced CO2 footprint thanks to increased batch yields, and an overall reduction in variability due to process standardization.

Conclusions
The conference brought together members from both academia and practice across the warehousing, transportation, and biomanufacturing industries, showcasing the applicability and versatility of AI methods for decision-making and overall process optimization in industrial settings.