Operations managers must find the optimal inventory levels for their products, semi-finished products, and raw materials to reduce operating costs and provide predictable service to customers. Managers often rely on well-known base-stock policies to optimize each node of the value chain individually, but combining locally optimal policies at all stock points rarely yields an optimal multi-echelon policy. This article focuses on the multi-echelon inventory optimization problem, which optimizes all inventory levels in a production or distribution chain simultaneously. The literature makes different assumptions to keep the problem generalizable and computationally tractable, and usually distinguishes between guaranteed service and stochastic service models. This article compares two open-source solver implementations for the guaranteed service model: OptaPlanner and Google OR-Tools.
We begin by introducing the guaranteed service model, its assumptions, and the general findings that have been published in the literature. Then, to solve this model, we introduce two implementations. The first implementation makes use of the OptaPlanner meta-heuristic framework. The second employs Google's OR-Tools constraint programming solver. We will illustrate the main implementation challenges for both and provide general guidelines to the reader. We will then discuss two industry cases and compare the performance of both implementations on these two cases. Finally, we will draw conclusions to assist the reader in setting up such implementations for real-world applications in their industries.
The literature on multi-echelon inventory optimization distinguishes between two models: the guaranteed service model and the stochastic service model. In both models, each stage of a supply chain operates according to a base-stock policy. In each period, each stage observes demand and places a replenishment order on its suppliers equal to the observed demand. Associated with each stage is a deterministic processing time that includes all the time required to transform the item at the stage (waiting time, manufacturing time and transportation time). It is further assumed that the production/assembly operations are deterministic, but that finished goods (at the stages that supply the customers) face normally distributed demand (Graves and Willems, 2003; Magnanti et al., 2006). The guaranteed service model further restricts these assumptions by using only the truncated demand at each stage. Moreover, it assumes that this demand bound function is known, i.e., for each stage and for each duration there is a maximal value for the total demand presented to the node during any time interval of that duration (Löhnert and Rambau, 2018). Based on these assumptions, we can decide, for each stage and for each of its successors, the maximal time between receiving an order and sending off the supply to those successors.
The main criticism of the guaranteed service model stems from this last assumption. If anything unexpected arises, e.g. demand bounds are exceeded or unforeseen transportation delays occur, some unmodelled flexibility of the supply chain (outsourcing, overtime work, transshipments) is expected to kick in so that the service times can still be guaranteed. This justified criticism aside, the guaranteed service model has proven its merit both in academic literature and in practical applications. The reader is referred to Graves and Willems (2003) for a better understanding of these assumptions and a detailed comparison with the stochastic service model.
In this short introduction, we refrain from a rigorous mathematical description of the problem. We will therefore also not reproduce the mathematical model that is outlined for this problem in the earlier cited references. Rather, we will try to provide a more practically oriented description of the decision variables and proceed with a short discussion on the practical observations that have followed from the decades of research on the topic.
At each stage (or node $j$) of the network, a solver for the guaranteed service model needs to optimize the outgoing service time ($s_j^{out}$). The outgoing service time is the time that a node quotes to its customers (i.e. its successors in the network); within this time, any demand is guaranteed to be fulfilled. Since the demand is assumed to be normally distributed, we can use the standardized normal distribution to calculate the service factor, just like we would in a single-stage inventory optimization. If, for example, a demand bound (or service level) of 95% is used, any stochastic demand in a period that is below that bound will be supplied within the quoted service time. The service factor $k_j$ that corresponds to this 95% service level is 1.645. The inventory holding cost for a node $j$ can then be calculated as (Graves and Willems, 2003):
$C_j k_j \sigma_j \sqrt{s_j^{in} + L_j - s_j^{out}}$
where $C_j$ represents the holding rate, $\sigma_j$ the standard deviation of the demand at node $j$, $s_j^{in}$ the incoming service time and $L_j$ the (deterministic) processing time.
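As a purely illustrative example (the numbers are hypothetical), suppose $C_j = 1$, $k_j = 1.645$, $\sigma_j = 100$, $s_j^{in} = 2$, $L_j = 3$ and $s_j^{out} = 0$. The safety stock cost at node $j$ is then $1 \cdot 1.645 \cdot 100 \cdot \sqrt{2 + 3 - 0} \approx 368$.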
The incoming service time for a node $j$ is the maximum of the service times that are quoted to node $j$ by its predecessor nodes. This can be noted as:
$s_j^{in} = \max_{i : (i,j) \in {A}} \left\{ s_i^{out} \right\}$
Where ${A}$ represents the set of arcs in the network and $(i,j) \in {A}$ denotes that there is an arc in the network for which node $i$ is the source and node $j$ is the sink.
In general, multi-echelon optimization will shift inventories to where it is more cost-efficient to hold safety stocks, although this does not hold as a general rule of thumb. Since the holding costs at a node are proportional to the accrued product cost (i.e. the cumulative cost of all predecessor nodes), it is typically more efficient to keep safety stocks earlier in the supply chain. Moreover, the higher the holding cost rate (industry examples range from 10% up to 45%, Graves and Willems (2005)), the higher the benefits that follow from a multi-echelon optimization. Literature examples have also shown that when holding costs are high relative to the cost of goods sold, e.g. because of large variations in demand or costly (semi-finished) products, the potential gain from optimizing inventories across the supply chain increases.
In this section, we outline the main implementation challenges for two solvers using open-source frameworks. First, we will explore the use of OptaPlanner's meta-heuristics framework (https://www.optaplanner.org/). Next, we will illustrate the use of Google's CP-SAT constraint programming solver, available through the OR-Tools framework (https://developers.google.com/optimization). The OptaPlanner framework is compatible with JVM (Java virtual machine) languages. The OR-Tools API is accessible from Java, Python, C# and C++. For the sake of simplicity, we will only include Java code snippets in this article.
OptaPlanner is an AI constraint solver. It can be used to optimize planning and scheduling problems through a lightweight, embeddable planning engine that is compatible with all JVM languages (Java, Kotlin and Scala). It can be added to your software project as a Maven dependency:
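A typical dependency declaration looks as follows (the version number is illustrative; use the latest OptaPlanner release that suits your project):

```xml
<dependency>
  <groupId>org.optaplanner</groupId>
  <artifactId>optaplanner-core</artifactId>
  <version>8.29.0.Final</version>
</dependency>
```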
In this article, we refrain from including the full implementation. Instead, we only include code snippets for the specific implementation challenges we address. The domain model for the OptaPlanner implementation is depicted in the following diagram:
This straightforward implementation includes only a single planning entity, which represents the assignment of an outgoing service time to a node in the network. The 'Node' class contains all parameters needed to calculate the inventory holding cost once the incoming and outgoing service times are known. The 'nominal time' represents the deterministic processing time of a node. With the lists of preceding and succeeding nodes that are added to this class, we can accurately assess a node's position in the network, calculate the demand from the succeeding nodes and the accrued cost from the preceding nodes.
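To make this domain model concrete, a stripped-down sketch in Java could look as follows. The class and field names are our own choices for this tutorial; getters, setters and helper methods are omitted, and both classes are shown in one snippet for brevity although each would live in its own file:

```java
import java.util.ArrayList;
import java.util.List;
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

// Plain data class describing a stage in the network.
public class Node {
    private String name;
    private double holdingRate;      // holding cost rate C_j
    private double serviceFactor;    // service factor k_j
    private double demandStdDev;     // sigma_j
    private int nominalTime;         // deterministic processing time L_j
    private List<Node> predecessors = new ArrayList<>();
    private List<Node> successors = new ArrayList<>();
    // getters and setters omitted
}

// Planning entity: assigns an outgoing service time to one node.
@PlanningEntity
public class ServiceTimeAssignment {
    private Node node;

    @PlanningVariable(valueRangeProviderRefs = "serviceTimeRange")
    private Integer outgoingServiceTime;   // s_out, the genuine planning variable

    private Integer incomingServiceTime;   // s_in, turned into a shadow variable below

    // getters and setters omitted
}
```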
In the guaranteed service model description presented earlier, we noted that the incoming service time at a stage must equal the maximum of the outgoing service times of all of its preceding nodes. OptaPlanner would allow us to define the incoming service time as a planning variable. We would then need to control it by adding a constraint that penalizes any difference between this incoming service time and the maximum outgoing service time of the preceding nodes.
However, since the incoming service time always follows directly from the assigned outgoing service times, we can use the 'shadow variable' concept of OptaPlanner. The OptaPlanner documentation states that "A shadow variable is a planning variable whose correct value can be deducted from the state of the genuine planning variables." By introducing this concept we halve the number of genuine planning variables, making for a more efficient solver implementation.
In the planning entity class definition we add the ‘@CustomShadowVariable’ annotation:
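On the incoming service time field, this could look as follows (the listener class name is our own):

```java
import org.optaplanner.core.api.domain.variable.CustomShadowVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariableReference;

@CustomShadowVariable(
        variableListenerClass = IncomingServiceTimeVariableListener.class,
        sources = {@PlanningVariableReference(
                entityClass = ServiceTimeAssignment.class,
                variableName = "outgoingServiceTime")})
private Integer incomingServiceTime;
```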
The necessary logic to update the incoming service times, based on the genuine variables (i.e. the outgoing service times), is then added to the listener class:
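A simplified sketch of such a listener is shown below. It assumes OptaPlanner 8 interface signatures, a placeholder solution class name ('MultiEchelonSolution') and helper methods on the entity that return the planning entities of a node's predecessors and successors; for brevity it simply recomputes the maximum after every change, rather than the finer-grained before/after logic described next:

```java
import org.optaplanner.core.api.domain.variable.VariableListener;
import org.optaplanner.core.api.score.director.ScoreDirector;

public class IncomingServiceTimeVariableListener
        implements VariableListener<MultiEchelonSolution, ServiceTimeAssignment> {

    @Override
    public void afterVariableChanged(ScoreDirector<MultiEchelonSolution> scoreDirector,
            ServiceTimeAssignment assignment) {
        updateSuccessors(scoreDirector, assignment);
    }

    @Override
    public void beforeVariableChanged(ScoreDirector<MultiEchelonSolution> scoreDirector,
            ServiceTimeAssignment assignment) {
        // Nothing to do in this simplified sketch; see the discussion below.
    }

    private void updateSuccessors(ScoreDirector<MultiEchelonSolution> scoreDirector,
            ServiceTimeAssignment changed) {
        // The incoming service time of each successor equals the maximum outgoing
        // service time over all of that successor's predecessors.
        for (ServiceTimeAssignment successor : changed.getSuccessorAssignments()) {
            int max = successor.getPredecessorAssignments().stream()
                    .mapToInt(a -> a.getOutgoingServiceTime() == null ? 0 : a.getOutgoingServiceTime())
                    .max().orElse(0);
            scoreDirector.beforeVariableChanged(successor, "incomingServiceTime");
            successor.setIncomingServiceTime(max);
            scoreDirector.afterVariableChanged(successor, "incomingServiceTime");
        }
    }

    @Override
    public void afterEntityAdded(ScoreDirector<MultiEchelonSolution> scoreDirector,
            ServiceTimeAssignment assignment) {
        updateSuccessors(scoreDirector, assignment);
    }

    // Remaining entity lifecycle callbacks are no-ops.
    @Override public void beforeEntityAdded(ScoreDirector<MultiEchelonSolution> sd, ServiceTimeAssignment a) {}
    @Override public void beforeEntityRemoved(ScoreDirector<MultiEchelonSolution> sd, ServiceTimeAssignment a) {}
    @Override public void afterEntityRemoved(ScoreDirector<MultiEchelonSolution> sd, ServiceTimeAssignment a) {}
}
```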
If the listener is triggered after a genuine variable has changed, we can simply check whether this genuine variable is larger than the incoming service time of the node's successors and, if so, update them accordingly. However, when the listener is triggered before the variable changes, we need to recalculate what the incoming service time will be, based on all of the outgoing service times of the predecessors of the node's successors.
As in any solver, we need to bound the values a planning variable can take on during solving. By doing this, we define the search space for the optimization problem. Since we use the simple domain model presented in figure 1, we can define this search space as the integer values that can be assigned to the outgoing service times of all nodes. We have implemented this in a secondary constructor for the planning entity class:
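A sketch of this fragment of the planning entity class is shown below; 'maxNominalTime' is passed in during pre-processing:

```java
import org.optaplanner.core.api.domain.valuerange.CountableValueRange;
import org.optaplanner.core.api.domain.valuerange.ValueRangeFactory;
import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider;

private int maxNominalTime;

// Secondary constructor: the outgoing service time of this node may range from 0 up to maxNominalTime.
public ServiceTimeAssignment(Node node, int maxNominalTime) {
    this.node = node;
    this.maxNominalTime = maxNominalTime;
}

// Value range referenced by the planning variable (valueRangeProviderRefs = "serviceTimeRange").
@ValueRangeProvider(id = "serviceTimeRange")
public CountableValueRange<Integer> getServiceTimeRange() {
    return ValueRangeFactory.createIntValueRange(0, maxNominalTime + 1); // upper bound is exclusive
}
```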
The 'max nominal time' parameter is then chosen to allow plenty of optimization possibilities, while not blowing up the search space too much. This constructor is called in a pre-processing phase of our solver, where the planning entities are constructed:
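One way this pre-processing could look, using the sum of all processing times as a simple, generous upper bound (this particular choice is ours; see the remark below):

```java
// Build one planning entity per node of the network.
int maxNominalTime = nodes.stream().mapToInt(Node::getNominalTime).sum();
List<ServiceTimeAssignment> assignments = new ArrayList<>();
for (Node node : nodes) {
    assignments.add(new ServiceTimeAssignment(node, maxNominalTime));
}
```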
This article focuses on providing insights into how a solver can be implemented with little effort; in that way it might even act as a tutorial. All of the implementation details outlined in this article should be interpreted with that in mind. Therefore, little effort has been made to further refine the definition of the value range. The authors are aware that the search space could be reduced further by choosing a value range per node that incorporates other node characteristics, such as total cost, number of successors, number of predecessors or a nonlinear function of the processing time.
The industry examples that we discuss further in this article have the requirement that customers of the sink nodes should be served immediately. In other words, the outgoing service time quoted to the customers should be 0: all of the end-customer demand (within the bound defined by the service factor) should be serviceable from safety stock. We therefore include the following constraint in our OptaPlanner model:
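Using the Constraint Streams API, a sketch of this hard constraint could look as follows (method names as in recent OptaPlanner 8 releases; the method would live in a ConstraintProvider implementation, which is not shown):

```java
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;

// Hard constraint: a node without successors (a sink node) must quote an outgoing service time of 0.
Constraint sinkNodesServeImmediately(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(ServiceTimeAssignment.class)
            .filter(a -> a.getNode().getSuccessors().isEmpty()
                    && a.getOutgoingServiceTime() != null
                    && a.getOutgoingServiceTime() != 0)
            .penalize(HardSoftScore.ONE_HARD, ServiceTimeAssignment::getOutgoingServiceTime)
            .asConstraint("Sink nodes must serve end-customers immediately");
}
```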
This constraint puts a penalty on each outgoing service time that is not 0 for every node that has no successors in the network (i.e. the sink nodes).
For the sake of completeness, we also include the modelling of the objective function. In OptaPlanner, the objective function is modelled through the 'score': we add a soft score penalty for each node equal to the holding cost of that node:
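A sketch of this soft constraint, assuming a helper method 'calculateHoldingCost()' on the entity that evaluates $C_j k_j \sigma_j \sqrt{s_j^{in} + L_j - s_j^{out}}$ from the node's parameters and the current service times:

```java
// Soft constraint: minimize the total safety stock holding cost over all nodes.
// The cost is rounded to an int because HardSoftScore uses integer weights; a
// BigDecimal-based score type could be used instead if more precision is needed.
Constraint totalHoldingCost(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(ServiceTimeAssignment.class)
            .penalize(HardSoftScore.ONE_SOFT,
                    a -> (int) Math.round(a.calculateHoldingCost()))
            .asConstraint("Safety stock holding cost");
}
```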
OR-Tools is an open source software suite for optimization, tuned for tackling the world's toughest problems in vehicle routing, flows, integer and linear programming, and constraint programming. We use Google's award-winning CP-SAT solver to model the guaranteed service multi-echelon optimization problem. SAT stands for satisfiability: the solver uses techniques for solving SAT problems along with CP methods. We refer to Google’s documentation on their constraint programming solver for more information. CP has a widespread and very active community around the world with dedicated scientific journals, conferences, and an arsenal of different solving techniques.
Recently, Google made their OR-Tools suite available as a Maven dependency. It can be added to your project through:
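For example (again, the version number is illustrative):

```xml
<dependency>
  <groupId>com.google.ortools</groupId>
  <artifactId>ortools-java</artifactId>
  <version>9.4.1874</version>
</dependency>
```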
We will focus on the main constraints that need to be added to the model in order to solve the guaranteed service multi-echelon optimization problem. We can re-use the Java class definitions from the OptaPlanner domain model. The constraints and the decision variables themselves are modelled in a class for which the constructor is given below:
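A sketch of such a wrapper class is given below. The class name, field names and the 'precision' and 'maxServiceTime' parameters are our own, and the exact LinearExpr helper methods differ slightly between OR-Tools versions (the sketches below assume a recent 9.x release). Note that Loader.loadNativeLibraries() must be called once before any OR-Tools classes are used:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.google.ortools.sat.CpModel;
import com.google.ortools.sat.IntVar;
import com.google.ortools.sat.LinearExpr;
import com.google.ortools.sat.LinearExprBuilder;

public class GuaranteedServiceCpModel {

    private final CpModel model = new CpModel();
    private final List<Node> nodes;
    private final long precision;  // scaling factor, see the discussion on integer variables below

    private final Map<String, IntVar> outgoingServiceTimes = new HashMap<>();
    private final Map<String, IntVar> incomingServiceTimes = new HashMap<>();
    private final Map<String, IntVar> totalServiceTimes = new HashMap<>();
    private final Map<String, IntVar> sqrtVariables = new HashMap<>();
    private final Map<String, Long> costCoefficients = new HashMap<>();

    public GuaranteedServiceCpModel(List<Node> nodes, long precision, long maxServiceTime) {
        this.nodes = nodes;
        this.precision = precision;
        for (Node node : nodes) {
            // Pre-compute the cost coefficient C_j * k_j * sigma_j, rounded to an integer.
            costCoefficients.put(node.getName(), Math.round(
                    node.getHoldingRate() * node.getServiceFactor() * node.getDemandStdDev()));
            createVariables(node, maxServiceTime);
        }
    }

    // createVariables(...), the constraint methods and the objective are shown below.
}
```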
The reader should notice that we use four sets of variables: the outgoing service times, the incoming service times, the total service times (i.e. the term underneath the square root in the cost function) and the square root variables (i.e. a variable modelling the square root of the total service time). For efficiency purposes, all of them are kept in a hashmap keyed by the name of their node. In addition, the cost coefficients (the term before the square root in the cost function) are pre-computed and also kept in a hashmap.
These variables are defined in the solver using the following function:
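A sketch of this method, added to the same class; 'maxServiceTime' bounds the (scaled) service times:

```java
// Creates the four integer decision variables for a node.
private void createVariables(Node node, long maxServiceTime) {
    String name = node.getName();
    outgoingServiceTimes.put(name, model.newIntVar(0, maxServiceTime, "sOut_" + name));
    incomingServiceTimes.put(name, model.newIntVar(0, maxServiceTime, "sIn_" + name));
    // Total service time s_in + L_j - s_out (non-negative in any feasible solution).
    totalServiceTimes.put(name, model.newIntVar(0, maxServiceTime, "sTot_" + name));
    // Auxiliary variable whose square equals the total service time (third constraint below).
    sqrtVariables.put(name,
            model.newIntVar(0, (long) Math.ceil(Math.sqrt(maxServiceTime)), "sqrt_" + name));
}
```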
The CP-SAT solver only allows integer-valued variables. We will go into the repercussions for the model, and why we therefore need a 'precision' factor, later in this article. Similar to the OptaPlanner implementation, we need to provide the solver with a value range for each of the decision variables in order to bound the solution space. Here, however, we need to provide this for four variables, whereas in the OptaPlanner implementation we could restrict this to the single assignment variable: the outgoing service time.
We can then define the constraints using these variables.
The first constraint models the fact that the incoming service time for each node should equal the max of the outgoing service times of its predecessors.
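A sketch of this constraint, using CP-SAT's max equality (source nodes simply get an incoming service time of 0):

```java
// Constraint 1: incoming service time = max over predecessors of their outgoing service times.
private void addIncomingServiceTimeConstraints() {
    for (Node node : nodes) {
        IntVar incoming = incomingServiceTimes.get(node.getName());
        if (node.getPredecessors().isEmpty()) {
            model.addEquality(incoming, 0);
        } else {
            IntVar[] predecessorOutgoing = node.getPredecessors().stream()
                    .map(p -> outgoingServiceTimes.get(p.getName()))
                    .toArray(IntVar[]::new);
            model.addMaxEquality(incoming, predecessorOutgoing);
        }
    }
}
```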
The second constraint models the total service time for a node *j* as: $s_j^{in} - s_j^{out} + L_j = s_j^{tot}$
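A sketch of the second constraint; the processing time is scaled by the precision factor so that all terms are expressed in the same (scaled) time unit:

```java
// Constraint 2: sTot_j = sIn_j + L_j - sOut_j.
private void addTotalServiceTimeConstraints() {
    for (Node node : nodes) {
        String name = node.getName();
        LinearExpr rhs = LinearExpr.newBuilder()
                .add(incomingServiceTimes.get(name))
                .add(node.getNominalTime() * precision)
                .addTerm(outgoingServiceTimes.get(name), -1)
                .build();
        model.addEquality(totalServiceTimes.get(name), rhs);
    }
}
```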
Since we cannot directly model the square root that appears in the cost function, we must include an auxiliary variable (sqrtVariable). The third constraint therefore models, for each node $j$: $\left(sqrtVariable_j\right)^2 = s_j^{tot}$.
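In CP-SAT this can be expressed with a multiplication equality:

```java
// Constraint 3: sqrt_j * sqrt_j = sTot_j, i.e. the auxiliary variable is the integer
// square root of the total service time (the restriction discussed below).
private void addSquareRootConstraints() {
    for (Node node : nodes) {
        IntVar sqrt = sqrtVariables.get(node.getName());
        model.addMultiplicationEquality(totalServiceTimes.get(node.getName()), new IntVar[] {sqrt, sqrt});
    }
}
```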
Finally, as in the OptaPlanner implementation, we must guarantee immediate service to end-customers. The fourth constraint therefore fixes the outgoing service time of every sink node to zero.
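For instance:

```java
// Constraint 4: sink nodes (no successors) serve end-customers immediately.
private void addSinkNodeConstraints() {
    for (Node node : nodes) {
        if (node.getSuccessors().isEmpty()) {
            model.addEquality(outgoingServiceTimes.get(node.getName()), 0);
        }
    }
}
```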
Again, for the sake of completeness, we include the objective function. Using the square root variables and the pre-computed cost coefficients, we note:
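A sketch of the objective; note that, because the coefficients are rounded to integers and the time variables may be scaled by the precision factor, the reported objective value has to be rescaled before it is interpreted as a cost:

```java
// Objective: minimize the sum over all nodes of (cost coefficient) * sqrt variable.
private void addObjective() {
    LinearExprBuilder objective = LinearExpr.newBuilder();
    for (Node node : nodes) {
        objective.addTerm(sqrtVariables.get(node.getName()), costCoefficients.get(node.getName()));
    }
    model.minimize(objective.build());
}
```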
The CP-SAT solver requires integer-valued variables and does not allow us to model the square root in the cost function directly. The third constraint therefore limits the possible solutions and might even prevent the solver from finding the optimal solution for some instances. The 'artificial' restriction on the square root variables only allows combinations of outgoing and incoming service times that, together with the nominal processing time, sum to a squared integer. The guaranteed service model definition does not include this constraint, so its optimum will not always be attainable with this CP-SAT implementation. In this article, we relax this restriction to some extent by adding an additional precision factor: if, for example, all service times (including the nominal deterministic processing time) are multiplied by 1000, the model can find more solutions that adhere to the 'squared integer' constraint. The impact on the solution quality is however limited and by no means guarantees the optimality of the solution procedure. Moreover, an increasing precision factor dramatically increases the time to solve. For further exploration, the reader is advised to look into other alternatives to tackle the square root in the cost function: piecewise linearization, Taylor series expansion, ...
For a comparison of the two implementations that are outlined in this article, we lean heavily on the industry cases that are described in Graves and Willems (2003). Both the ‘battery case’ and ‘bulldozer case’ from their book chapter are used here. Instead of reproducing the description and insights for these cases, the reader is referred to the original publication.
We only present these cases here through their network representations, shown in figure 2. The cases exhibit considerable differences in network structure, type of distribution, number of products and number of sink nodes. In addition, as one would expect from their product types, the ratio of the cost of goods sold to the holding cost also differs strongly between the two cases: the bulldozer case involves large, very costly semi-finished products and end-products with rather small demand, whereas the battery case involves small, cheap commodity products with very large demand and regional/packaging demand differences.
The guaranteed service model for both cases was shown to be solvable to optimality by Graves and Willems (2000). The optimal solution for the bulldozer case results in an annual holding cost for the safety stock in the supply chain of \$633,000. The optimal solution for the battery case results in a holding cost of \$853,000.
We now use the meta-heuristic and constraint programming implementations to solve the two industry cases. OptaPlanner allows the user to control the heuristics applied in solving an optimization problem, ranging from the type of constructive heuristic and the meta-heuristics used in the local search phase, to full control over the moves evaluated in the search path. However, it also contains plenty of intelligence to auto-configure this for the user. In this example implementation, we have only configured the time the local search should run and let OptaPlanner configure the rest. The CP-SAT solver also allows the user to control different aspects of how a problem is solved. Here too, we only restricted the running time and let the rest be auto-configured by the package itself. We run the programs on a 2 GHz Intel Core i5.
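For reference, the time-limit configuration for both solvers could look as follows (the solution and constraint provider class names are placeholders, and the 30-second limit is shown purely as an example):

```java
import java.time.Duration;
import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;
import org.optaplanner.core.config.solver.SolverConfig;
import com.google.ortools.sat.CpSolver;

// OptaPlanner: only the spent-time termination is set, everything else is auto-configured.
SolverConfig solverConfig = new SolverConfig()
        .withSolutionClass(MultiEchelonSolution.class)
        .withEntityClasses(ServiceTimeAssignment.class)
        .withConstraintProviderClass(MultiEchelonConstraintProvider.class)
        .withTerminationSpentLimit(Duration.ofSeconds(30));
Solver<MultiEchelonSolution> solver =
        SolverFactory.<MultiEchelonSolution>create(solverConfig).buildSolver();

// CP-SAT: only the maximum solving time is set.
CpSolver cpSolver = new CpSolver();
cpSolver.getParameters().setMaxTimeInSeconds(30.0);
```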
We first focus on the results for the bulldozer case. The OptaPlanner implementation solves the bulldozer case to optimality within 7 seconds. The CP-SAT implementation finds its optimal solution only after 30 seconds. It should be noted that, due to the integer value constraint on the square root factor in the cost component, this solution does not correspond to the optimal solution found by Graves and Willems (2000) and by the OptaPlanner implementation. Moreover, that optimal solution was verified to be infeasible under the CP-SAT modelling. The best holding cost values found by both implementations are shown in figure 3.
In addition to the solutions found by the solver implementations, we have also added the non-optimized base solution, in which each node quotes a service time of zero to its customers and therefore holds considerable safety stock. This illustrates how a multi-echelon solution to the inventory holding problem can drastically reduce operational costs, while the end-customer is still serviced immediately.
The results for the battery case are not outlined in detail here, but can be made available upon request. For this case, the OptaPlanner implementation finds the optimal solution nearly instantaneously. The Google OR-Tools implementation finds its optimal solution within a similar time; however, due to the integer-valued square root in the cost function, this solution is again not as good as the one found by Graves and Willems (2000) and by the OptaPlanner implementation.
In this article we have explored two possible implementation strategies for the guaranteed service multi-echelon inventory placement problem. We conclude that a meta-heuristic solver implementation using the OptaPlanner framework is well suited to find the optimal solution for two small industry cases. Due to its scalability, we present it as a viable solution approach for when exact solutions become intractable in large, real-sized problem instances. The constraint programming solution based on Google's OR-Tools CP-SAT solver suffers from one major drawback: the integer value constraint on the square root factor in the cost function. It is however not excluded that this procedure finds solutions that suffice for practical applications; this is left for further research to explore.
This article can be considered a short introduction to the challenges and insights in developing practical solvers for the guaranteed service multi-echelon problem. It can act as a tutorial for readers that want to explore practical solution strategies for this well-known industry problem.
Graves, S. C., & Willems, S. P. (2000). Optimizing strategic safety stock placement in supply chains. Manufacturing & Service Operations Management, 2(1), 68-83.
Graves, S. C., & Willems, S. P. (2003). Supply chain design: safety stock placement and supply chain configuration. Handbooks in Operations Research and Management Science, 11, 95-132.
Graves, S. C., & Willems, S. P. (2005). Optimizing the supply chain configuration for new products. Management Science, 51(8), 1165-1180.
Löhnert, M., & Rambau, J. (2018). Modeling Demand Propagation in Guaranteed Service Models. IFAC-PapersOnLine, 51(2), 284-289.
Magnanti, T. L., Shen, Z. J. M., Shu, J., Simchi-Levi, D., & Teo, C. P. (2006). Inventory placement in acyclic supply chain networks. Operations Research Letters, 34(2), 228-238.