Improving spare parts inventory control at a repair shop

Willem van Jaarsveld, Twan Dollevoet and Rommert Dekker
Erasmus School of Economics, Erasmus University Rotterdam

Abstract

We study spare parts inventory control for an aircraft component repair shop. Inspection of a defective component reveals which spare parts are needed to repair it, and in what quantity. Spare part shortages delay repairs, while aircraft operators demand short component repair times. Current spare parts inventory optimization methods cannot guarantee the performance on the component level, which is desired by the operators. To address this shortfall, our model incorporates operator requirements as time-window fill rate requirements for the repair turnaround times for each component type. In alignment with typical repair shop policies, spare parts are allocated on a first come first served basis to repairs, and their inventory is controlled using (s, S) policies. Our solution approach applies column generation in an integer programming formulation. A novel method is developed to solve the related pricing problem. Paired with efficient rounding procedures, the approach solves real-life instances of the problem, consisting of thousands of spare parts and components, in minutes. A case study at a repair shop reveals how data may be obtained in order to implement the approach as an automated method for decision support. We show that the implementation ensures that inventory decisions are aligned with performance targets.

Keywords: Spare Parts Inventory Control, (s, S) Policies, Case study, Assemble-to-Order Systems
1 Introduction
High availability of aircraft is crucial for airline operator profitability. Therefore, defective components are replaced by components in good condition during line maintenance, instead of being repaired inside the aircraft. The defective component is then repaired separately, allowing operators to reduce the duration of line maintenance. Independent repair shops perform these repairs on a commercial basis. Component repairs (non-military) generated a turnover of $9 billion in recent years (Aviation Week 2011). To enable efficient planning and execution of aircraft line maintenance, airline operators use their bargaining power to pressure repair shops into achieving short repair turnaround times (TATs) (De Jong 2012). In case of in-house shops, the need for efficient line maintenance planning is typically reflected in business targets for repair TATs (Aerts 2013). In either case, timely availability of the resources needed for component repairs is key. Assuring spare parts availability is particularly challenging: components may consist of hundreds of parts, any number of which may need replacement to complete a repair. Only inspection reveals which parts are needed in each repair. Thus, demand for each spare part is unpredictable, forcing repair shops to keep large spare parts safety stocks.

Stochastic inventory control methods for safety stock optimization typically set availability targets based on price, leadtime and demand volume. A simple example illustrates the limitations of such approaches: consider a 10-day time-window fill rate target of 98% for each spare part, and a repair of a critical component that uses 20 different spare parts. The repair delay exceeds 10 days with a probability of roughly 1 − 0.98^20 ≈ 33.2% (depending on demand correlations). This disrupts the operators' ability to efficiently perform line maintenance, causing eventual loss of market share for the repair shop. But repair times of 40 days may be acceptable for another component, if it is less critical, if an exchange stock is available, or if it is only used in heavy line maintenance that takes more than 40 days. The spare parts availability targets thus result in overperformance for that component, causing excessive spare parts inventories. So even though short repair times are the prime reason for stocking spare parts, current stochastic inventory models do not provide sufficient control over those repair times.

On the initiative of the manager of a repair shop owned by Fokker Services, we developed a model and algorithm to support the inventory analysts in dealing with the above-mentioned difficulties. In the model, availability targets are set on the level of components, and spare parts inventories follow from those targets. This is the central idea of the spare parts algorithm developed in this paper. The approach we propose addresses the following properties of the repair shop stocking problem:
1. Inventory decisions are taken for spare parts, while performance is measured on the component repair level. Repairs require multiple spare parts.
2. The shop repairs hundreds of components and stocks thousands of spare parts.
3. Spare parts are slow moving: demand during leadtime is discrete.
4. Many spare parts are relatively inexpensive and ordering involves fixed costs, so parts should preferably be ordered in batches.

In particular, our algorithm for optimizing (s, S) policies scales to systems of thousands of spare parts and components.

Repair shop performance is measured on the level of component repairs, while each repair requires multiple different spare parts. This distinguishes our work from the majority of literature on spare parts inventory control. For a review, see Kennedy et al. (2002). In particular, studies of large-scale spare parts networks assume that demands for different spare parts are independent; see Caggiano et al. (2007) and references therein. Multi-indenture models distinguish components and spare parts, but assume that each component failure is caused by a single spare part failure (Muckstadt 1973). So-called repair kit models form an exception, but the repair kit model differs significantly from our model. In particular, the repair kit problem involves only a single replenishment without leadtime (Teunter 2006, Bijvank et al. 2010), whereas our setting involves many replenishments with positive leadtimes over an infinite time horizon.

Figure 1: Schematic representation of a repair shop/assemble-to-order (ATO) system. Inventory is kept for spare parts/components, while availability is measured for repairs/products.

From a modeling perspective, the problem we consider is an Assemble-to-Order (ATO) system, yet the interpretation differs from the classical ATO context. In ATO systems, products are assembled from multiple components, while in our setting multiple spare parts are required to repair a component (see Figure 1). Inventory optimization of ATO systems is well studied, but none of the proposed methods are scalable to the large instances arising at the repair shop. Ettl et al. (2000) and Cheng et al. (2002) test their methods on systems of 18 products and 17 components. These methods might scale to larger systems, but assume base-stock policies, while batching is important at the repair shop. Moreover, their analysis fits normal distributions to demand during leadtime, which gives poor results for slow moving spare parts demands. Indeed, Figure 2 shows that a normal distribution is a very bad fit for a typical slow moving spare parts demand. Other studies in the periodic review setting include Hausman et al. (1998), Zhang (1997), Agrawal and Cohen (2001), and Akçay and Xu (2004). In a continuous review setting, Lu et al. (2005) consider back-order minimization under a budget constraint and Lu and Song (2005) investigate cost minimization for product-specific back-order costs, but the proposed algorithms can only solve small cases. Song (2000) and Zhao and Simchi-Levi (2006) study evaluation of performance measures under (r, Q) policies, but optimization is not addressed.

Repair shops typically allocate spare parts on a first come first served (FCFS) basis to component repairs, while spare part inventories are controlled using independent (s, S) policies (De Jong 2012, Aerts 2013). These approaches are prevalent in practice because they are easy to implement. For example, once the (s, S) parameters are known, one may determine whether a part should be ordered by inspecting the supply chain of that part alone. Mathematical treatments of ATO systems have shown that optimal control would involve coordination of replenishment orders and complex allocation rules (cf. Benjaafar and El Hafsi 2006, Doğru et al. 2010, Reiman and Wang 2012, Lu et al. 2014), but concrete policies have only been proposed and tested for very small systems. Our modeling assumptions match the practice of FCFS allocation and independent (s, S) policies, reflecting our goal to develop a method that is easily implementable in practice. This pragmatic approach matches that of the majority of studies on ATO systems, including those reviewed in the previous paragraph.
Figure 2: The CDF of typical spare parts demand during leadtime versus its best normally distributed approximation.

We use bounds on performance measures to obtain a surrogate optimization problem, a commonly used approach to handle the intractability of performance measures in ATO systems (see e.g. Zhang (1997), Song and Yao (2002), Cheng et al. (2002), Kapuscinski et al. (2004) and Lu et al. (2005)). Van Jaarsveld and Scheller-Wolf (2014) find that this approach typically has only a limited detrimental effect on the quality of the optimum. Our algorithm is based on column generation; we appear to be the first to use this approach in an ATO setting. (Others have used the approach for different inventory problems, e.g. Wong et al. (2007), Kranenburg and van Houtum (2008, 2009), Topan et al. (2010).) The related pricing problem reduces to a separate optimization of the inventory policy for each spare part. These optimizations are carried out efficiently by a novel algorithm which is based on a grid of parallelograms covering the policy space. We derive a lower bound for the costs of policies enclosed in such a parallelogram, which is utilized to determine which areas of the grid need refinement. This pricing algorithm is interesting by itself, because it works under more general conditions than existing algorithms for the single-item problem.

In a numerical study, we find that our automated method for determining (s, S) policies at repair shops solves systems consisting of hundreds of components and thousands of spare parts in a practical time-scale. Our case study reveals that implementing the method at a repair shop improves inventory control by assuring that spare parts inventories are aligned with business targets on component repairs.

The remainder of this paper is organized as follows. In Section 2 we formulate the optimization problem. In Section 3, we describe the optimization algorithm and in Section 4, we present a computational study to evaluate the performance of the algorithm. In Section 5, we report on the implementation of the method at the repair shop. We conclude in Section 6.
2 The optimization problem
In this section, we formulate the optimization problem and the model underlying it. The model is described in Section 2.1. In Section 2.2, we derive bounds on performance measures that are used to formulate the optimization problem, which is given in Section 2.3. In Section 2.4 we discuss the pricing problem associated with our optimization problem.
2.1 The model
We consider a repair shop where various components are repaired. Components needing repair arrive according to a Poisson process. Upon arrival of a defective component, inspection reveals which spare parts are needed to repair it. Spare parts are stocked in a local warehouse. Inventory is under continuous review and is controlled using independent (s, S) policies. Independent inventory control is easy to implement in practice, as discussed in Section 1. Under an (s, S) policy, when the inventory position (= inventory on hand + inventory on order − backlogs) is at or below s, an order is placed to raise it to S. Such policies thus
allow the decision maker to control fixed ordering costs and to hedge against stock-out risk by keeping a safety stock. Indeed, (s, S) policies are used in practice by repair shops (De Jong 2012, Aerts 2013). Because delaying the placement of a replenishment order for a part with backlogs is not common in practice, we assume s ≥ −1.
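As a minimal illustration (not taken from the paper), the reorder decision under an (s, S) policy depends only on the part's own inventory position:

```python
def replenishment_order(on_hand: int, on_order: int, backlog: int, s: int, S: int) -> int:
    """Quantity to order under an (s, S) policy: when the inventory position
    (on hand + on order - backlog) is at or below s, order up to S."""
    position = on_hand + on_order - backlog
    return S - position if position <= s else 0
```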
More expensive spare parts that are slow moving might benefit from the use of so-called base-stock
policies. Note that such policies are a special case of the general (s, S) policies that are considered in this model (the case with s = S − 1). Our model will thus determine for each individual spare part which is more appropriate: a base-stock policy or an (s, S) policy with s < S − 1.
We assume full back-ordering of unmet demand, and stochastic sequential leadtimes. To explain this
latter assumption, consider an arbitrary time t, and let Lj (t) denote the transit time (or leadtime) of part j at t. We assume that any orders placed at or before t − Lj (t) will have arrived by t, while any orders placed after t − Lj (t) are still in transit. The random inventory level at t is thus obtained by subtracting
the random demand in a period of random length Lj (t) from the random inventory position in steady state. The distribution of Lj (t) is assumed to be given, and independent of the ordering process. To obtain the distribution of demand during a period of length Lj (t), we thus condition on Lj (t), and use the results for deterministic leadtimes. Lj (t) may be different for different parts, and we do not make any restrictive assumptions regarding the distribution of Lj (t). Zipkin (1986) and Svoronos and Zipkin (1991) discuss in detail how stochastic sequential leadtimes would arise in practice, and argue that such leadtimes may be more realistic than iid leadtimes, for instance because the assumptions ensure that orders do not cross. As discussed in the introduction, spare parts are allocated to repairs on a FCFS basis. When some parts for a repair are available but others are not, the available parts are put aside as committed inventory
(see e.g. Song (2002) and Zhao and Simchi-Levi (2006)). This assumption is reasonable: After inspection of a component in repair shops in practice, the required spare parts that are available are typically withdrawn from the warehouse and kept near the component, even though other spare parts may still be missing (e.g. De Jong 2012, Aerts 2013). This procedure may arise because mechanics want to ascertain that the parts that are available now will still be available for the repair when the missing parts arrive. The procedure may also be a consequence of automatic rules for processing part requests in ERP systems. We denote the set of spare parts by J . Different components are repaired, and the different component
repairs are denoted by I. We introduce the following notation:
• h_j > 0: inventory holding costs per unit of time per unit of inventory of part j ∈ J.
• o_j ≥ 0: the fixed ordering costs for placing an order for part j ∈ J.
• C_j: the set of policies for part j ∈ J. For each valid combination of s and S, we have (s, S) = c ∈ C_j.
• I_j ⊂ I: set of component repairs in which part j may be used. We allow I_j = I, but in practice, parts are only used in a limited range of repairs.
• J^i ⊂ J: set of parts that may be used in repairs of component i.
• Y^i(n) = (Y_j^i(n), j ∈ J^i): random vector indicating the spare parts needed in the nth repair of component i ∈ I. We assume that Y_j^i(n) ∈ {0, 1, 2, . . .} and that Y^i(n) for n ∈ {1, 2, . . .} are iid random variables. We allow for dependence between Y_j^i(n) for different parts j.
• λ^i: the Poisson arrival rate of repairs of component i.
• t^i(n): (random) time of arrival of the nth repair of component i ∈ I.
• λ_j = Σ_{i ∈ I_j} λ^i P(Y_j^i(n) > 0): the rate at which repairs arrive that require a positive number of part j. (Note that P(Y_j^i(n) > 0) does not depend on n by assumption.) We will also refer to λ_j as the demand rate for part j: note that demand for part j is compound Poisson.
• I(t^−) = (I_j(t^−, c_j), j ∈ J): (random) inventory on hand just before time t. The dependence of I_j on the policies c_j ∈ C_j will be dropped where no confusion can arise.
• P(t) = (P_j(t, c_j), j ∈ J): (random) number of purchase orders in the time period (0, t).
• W^i(n): (random) waiting time until all spare parts needed in the nth repair of component i are available. W^i denotes the random waiting time for an arbitrary repair as n → ∞.
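For concreteness, the model data introduced above could be captured in structures such as the following (an illustrative sketch; the field names are ours, not the paper's):

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class SparePart:                              # j in J
    holding_cost: float                       # h_j, per unit per unit of time
    order_cost: float                         # o_j, fixed cost per replenishment order
    leadtime_pmf: Dict[float, float]          # distribution of the leadtime L_j(t)

@dataclass
class ComponentRepair:                        # i in I
    arrival_rate: float                       # lambda^i, Poisson rate of repair arrivals
    target_fill_rate: float                   # a^i, used in Section 2.3
    usage_pmf: Dict[str, Dict[int, float]]    # P(Y_j^i = y) for each part j in J^i
```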
2.2 Bounds on performance measures
Repairs of a certain component may typically require a broad range (10-50) of spare parts, each with low probability. As a result of the dependence between the inventory level of different parts, exact evaluation of the time-window fill rate P(W^i < w) or expected waiting time E(W^i) for such repairs is intractable (see Song (1998) and Song (2002), respectively). A well-established method to cope with this difficulty is the use of bounds on the performance measures. We will now derive bounds on the performance of (s, S) ATO systems, such as the repair shop we consider.

We first derive a bound on the fill rate. We concentrate on bounds on the immediate fill rate, because the time-window fill rate corresponds to the immediate fill rate in a system with revised leadtimes. (For details see Proposition 1.1 of Song (1998), which extends with little difficulty to stochastic sequential leadtimes.) For the nth repair of component i, P(I_j(t^i(n)^−) < Y_j^i(n)) equals the probability that the waiting time for part j is positive. We thus have

$$P(W^i(n) = 0) = 1 - P\Big(\bigcup_{j \in J^i} \big\{ I_j(t^i(n)^-) < Y_j^i(n) \big\}\Big)$$   (1)
$$\ge 1 - \sum_{j \in J^i} P\big( I_j(t^i(n)^-) < Y_j^i(n) \big),$$   (2)

where the inequality is typically referred to as Boole's inequality. By taking the limit n → ∞, we obtain a bound on the long-term fill rate. Note that this bound is tight if waiting time in each repair is caused by at most one spare part j.

The expected waiting time may be bounded by a Riemann sum: let 0 = w_1 < w_2 < . . . < w_M be an arbitrary sequence such that P(W^i ≤ w_M) = 1. Then

$$E(W^i) = \int_{w=0}^{\infty} \big(1 - P(W^i \le w)\big)\,dw \le \sum_{m=2}^{M} (w_m - w_{m-1})\big(1 - P(W^i \le w_{m-1})\big).$$   (3)

The bound in (2) can subsequently be used in the summand, to obtain an efficiently computable upper bound on the average waiting time.
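For illustration (this is our sketch, not part of the paper's implementation), both bounds are straightforward to evaluate once per-part shortage probabilities and a fill-rate lower bound are available:

```python
def fill_rate_lower_bound(shortage_probs):
    """Boole bound (2): one minus the sum of per-part shortage probabilities."""
    return max(0.0, 1.0 - sum(shortage_probs))

def waiting_time_upper_bound(w_grid, fill_lb):
    """Riemann-sum bound (3): w_grid is 0 = w_1 < ... < w_M with fill_lb(w_M) = 1,
    and fill_lb(w) is a lower bound on P(W <= w), e.g. obtained from (2) with
    revised leadtimes."""
    return sum((w_grid[m] - w_grid[m - 1]) * (1.0 - fill_lb(w_grid[m - 1]))
               for m in range(1, len(w_grid)))
```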
2.3 Cost minimization under fill rate constraints
We present our method for cost minimization under component-specific fill rate constraints. The bounds discussed in Section 2.2 allow for an extension to (combinations of) average waiting time and/or time-window fill rate constraints. The automated method implemented at Fokker Services (cf. Section 5) features time-window fill rate constraints, and our focus on the immediate fill rate is for simplicity of exposition only. A natural formulation of the problem would use the policies (s, S) = c ∈ C_j directly as decision
variables. However, such a formulation would be non-linear, and even non-convex, which would render it computationally intractable. Instead, we propose to let xjc = 1 indicate that policy c = (s, S) is used for part j, linearizing the formulation at the cost of introducing infinitely many decision variables. In contrast with the difficulties associated with a non-convex model, this difficulty turns out to be manageable using
the techniques developed in the next section: The algorithm we will develop needs to consider only a small number of decision variables x_{jc} explicitly to conclude that the current solution is close-to-optimal. The linearization leads to the following optimization problem:

$$\min \sum_{j \in J} \sum_{c \in C_j} x_{jc}\,\big(H_j(c) + O_j(c)\big),$$   (4)

s.t.

$$\sum_{j \in J^i} \sum_{c \in C_j} x_{jc}\,F_j^i(c) \le 1 - a^i, \quad i \in I,$$   (5)

$$\sum_{c \in C_j} x_{jc} = 1, \quad j \in J,$$   (6)

$$x_{jc} \in \{0, 1\}, \quad j \in J,\ c \in C_j.$$   (7)

Here, a^i denotes the target fill rate for repairs of component i, and

$$H_j(c) = H_j(s, S) = \lim_{t \to \infty} h_j\,E\big(I_j(t, c)\big),$$   (8)

$$O_j(c) = O_j(S - s) = \lim_{t \to \infty} o_j\,E\big(P_j(t, c)/t\big),$$   (9)

$$F_j^i(c) = F_j^i(s, S) = \lim_{n \to \infty} P\big(I_j(t^i(n)^-, c) < Y_j^i(n)\big).$$   (10)
In particular, for part j ∈ J, H_j denotes the holding costs and O_j the ordering costs. F_j^i is the probability that the inventory for part j is insufficient to cover the demand of an arbitrary repair of component i. The surrogate constraint (5) is based on (2) and guarantees that the fill rate constraints are satisfied. This holds regardless of correlations between Y_j^i(n), j ∈ J^i, which is important since such correlations are hard to estimate in practice.
We now discuss the evaluation of (8)-(10). For k ∈ {0, . . . , S − s − 1}, m_k denotes the probability to visit inventory position S − k during an arbitrary order cycle. m_k can be evaluated recursively using the compounding distribution of demand for part j, see e.g. Axsäter (2006, pp. 107-109). The expected length of an order cycle is given by M_{S−s}/λ_j, with M_{S−s} = Σ_{k=0}^{S−s−1} m_k. The holding costs for general policies can be expressed in terms of the holding costs for (S − 1, S) (i.e., base-stock) policies as follows:

$$H_j(s, S) = \frac{1}{M_{S-s}} \sum_{k=0}^{S-s-1} m_k\, H_j(S - k - 1, S - k).$$   (11)

The same expression holds with F_j^i replacing H_j. Since a single order is placed in each order cycle, we have O_j(S − s) = o_j λ_j / M_{S−s}.
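To make the evaluation concrete, the following sketch (ours, not the authors' code) computes the visit probabilities m_k by the standard recursion over the compounding demand distribution and then evaluates H_j(s, S), O_j(S − s) and F_j^i(s, S) via (11); the base-stock quantities H_j(y − 1, y) and F_j^i(y − 1, y) are assumed to be supplied as callables:

```python
def visit_probs(p, n):
    """m_k, k = 0..n-1: probability of visiting inventory position S - k in a cycle,
    where p[d] is the probability that a customer demands d units and n = S - s."""
    m = [1.0] + [0.0] * (n - 1)
    for k in range(1, n):
        m[k] = sum(p.get(d, 0.0) * m[k - d] for d in range(1, k + 1))
    return m

def policy_measures(s, S, p, lam, o, holding_base, shortage_base):
    """H_j(s,S), O_j(S-s) and F_j^i(s,S) of an (s,S) policy via eq. (11);
    holding_base(y) = H_j(y-1, y) and shortage_base(y) = F_j^i(y-1, y)."""
    n = S - s
    m = visit_probs(p, n)
    M = sum(m)                                    # M_{S-s}
    H = sum(m[k] * holding_base(S - k) for k in range(n)) / M
    F = sum(m[k] * shortage_base(S - k) for k in range(n)) / M
    O = o * lam / M                               # one order per cycle
    return H, O, F
```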
To solve the optimization problem (4)-(7), we will use the solution of the associated continuous relaxation, which is obtained by replacing (7) by

$$0 \le x_{jc} \le 1, \quad j \in J,\ c \in C_j.$$   (12)

To strengthen the lower bound that is obtained via this relaxation, we note that for any policy c for a part j that does not satisfy

$$F_j^i(c) \le 1 - a^i, \quad i \in I_j,$$   (13)

the decision variable x_{jc} must take the value 0 in any feasible solution to (4)-(7). From now on, policies which do not satisfy (13) are no longer included in C_j. This improves the quality of the lower bound (4)-(6),(12) without affecting the optimal solution of (4)-(7).
2.4 The pricing problem
In this section, we describe how to find the column x_{jc} with the lowest reduced cost for given dual multipliers, and briefly discuss the equivalence of this problem with a single-item inventory problem. The reduced cost associated with x_{jc} is given by

$$R_j(c) = R_j(s, S) = H_j(s, S) + O_j(s, S) + \mu_j + \sum_{i \in I_j} \nu^i F_j^i(s, S),$$   (14)

where ν^i ≥ 0, i ∈ I, are the dual multipliers associated with (5), and µ_j is a dual multiplier associated with (6).
To determine whether any x_{jc} has negative reduced costs, we determine for each part j the solution of

$$\min_{-1 \le s < S} R_j(s, S) \quad \text{such that} \quad F_j^i(s, S) \le 1 - a^i, \quad i \in I_j,$$   (15)

where the constraints result from our restriction of C_j to policies satisfying (13). To demonstrate equivalence with a single-item inventory model, we define

$$G(y) = H_j(y - 1, y) + \sum_{i \in I_j} \nu^i F_j^i(y - 1, y) + \mu_j.$$   (16)

We now rewrite (14) as

$$R_j(s, S) = O_j(S - s) + \frac{1}{M_{S-s}} \sum_{k=0}^{S-s-1} m_k\, G(S - k).$$   (17)
This formulation is similar to many single-item formulations, e.g. Zheng and Federgruen (1991). However, G(·) need not be quasiconvex as a consequence of the fact that Fji corresponds to the component-specific part fill rate. This renders many algorithms, in particular the one proposed by Zheng and Federgruen (1991), inapplicable. In addition, (15) features fill rate type of constraints, which cannot be accounted for in existing algorithms, in particular the algorithm proposed by Chen and Feng (2006).
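A minimal sketch (ours) of evaluating (17) for one candidate policy, assuming the compounding demand pmf p and the function G of (16), which already includes µ_j, are given:

```python
def reduced_cost(s, S, p, lam, o, G):
    """Reduced cost of policy (s, S) via eq. (17):
    R = O(S - s) + (1 / M_{S-s}) * sum_k m_k * G(S - k)."""
    n = S - s
    m = [1.0] + [0.0] * (n - 1)
    for k in range(1, n):                          # visit-probability recursion
        m[k] = sum(p.get(d, 0.0) * m[k - d] for d in range(1, k + 1))
    M = sum(m)
    return o * lam / M + sum(m[k] * G(S - k) for k in range(n)) / M
```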
In Section 3.2, we propose a new, efficient and exact algorithm for (15). The algorithm is interesting in its own right as a solution method for single-item problems, because it works under very general conditions. In particular, G(·) need not be quasiconvex, but only decomposable into an increasing and a decreasing function, a much weaker condition. In addition, we can allow for constraints on the average waiting time, the fill rate, or any other service measure F˜ (s, S) for which F˜ (S − 1, S) is nonincreasing in S. We are not aware of existing methods that can efficiently compute the optimal (s, S) policy for single-item problems under these conditions.
3 The algorithm
The algorithm to solve (4)-(7) consists of two steps: We first solve the continuous relaxation (4)-(6),(12) to obtain a lower bound, and then apply a rounding procedure to find an integer solution. We use a column generation approach to solve the continuous relaxation, which is described in Section 3.1. The algorithm to solve the pricing problem is described in Section 3.2. Finally, we develop methods to find integer solutions in Section 3.3.
3.1 Column generation algorithm
Our column generation approach to solve the continuous relaxation can be summarized as follows:

Algorithm 1.

Step 1 (Initialization): Determine an initial set of policies C_j′ ⊂ C_j for each part j, by executing the initialization step of Algorithm 2.

Step 2 (Master Problem): Solve the restricted master problem (4)-(6),(12) with C_j replaced by C_j′. This gives us a primal and dual solution.

Step 3 (Pricing Problem): For the dual multipliers obtained in Step 2, execute for each part j Steps 1-3 of Algorithm 2. This adds the policy c ∈ C_j with the lowest reduced cost to C_j′, typically along with other policies that also have low reduced costs.

Step 4: If any policies with negative reduced costs were added to C_j′ for any part j in the previous step, go to Step 2. Otherwise, terminate.
When this algorithm terminates, we obtain the optimal solution to (4-6,12). The solution value is a lower bound on the solution value of the integer problem (4-7).
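The loop structure of Algorithm 1 can be sketched as follows; all interfaces (initial_columns, solve_master, price_part) are hypothetical stand-ins for the restricted master LP and the pricing routine of Section 3.2:

```python
def column_generation(parts, initial_columns, solve_master, price_part, tol=1e-9):
    """Skeleton of Algorithm 1. initial_columns[j] is the starting set C_j';
    solve_master returns (primal, duals); price_part returns a list of
    (policy, reduced_cost) pairs, the lowest reduced cost first."""
    columns = {j: set(initial_columns[j]) for j in parts}     # Step 1
    while True:
        primal, duals = solve_master(columns)                 # Step 2: restricted master
        improved = False
        for j in parts:                                       # Step 3: pricing per part
            for policy, red_cost in price_part(j, duals):
                if red_cost < -tol and policy not in columns[j]:
                    columns[j].add(policy)
                    improved = True
        if not improved:                                      # Step 4: optimal relaxation
            return primal, columns
```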
3.2 Algorithm for the pricing problem
To solve the pricing problem in Step 3 of Algorithm 1, we develop an algorithm to solve (15) for given dual multipliers ν i and µj . Throughout this section, we will suppress subscript j (in particular, I will
denote I_j). The algorithm is based upon the following key observation.

Figure 3: The parallelogram spanned by the policies c′ and c″. Policies covered by the parallelogram are circled.

Proposition 1. Let two policies c′ = (s′, S′) and c″ = (s″, S″) with S′ − s′ ≥ S″ − s″ and S″ ≥ S′ be given. Consider all policies covered by the parallelogram spanned by the policies (s′, S′) and (s″, S″) in the (s, S) plane (circled in Figure 3). The reduced costs for these policies, as given by (14), are bounded below by

$$R(c', c'') = H(s', S') + O(S' - s') + \sum_{i \in I} \nu^i F^i(s'', S'') + \mu.$$   (18)
In addition, if c″ violates (13), then all policies inside the parallelogram do.

For proofs, we refer to Appendix A. The algorithm we propose is based on a grid of these parallelograms. Each parallelogram is denoted by g = (c′(g), c″(g)) = (s′, S′; s″, S″), with c′(g) and c″(g) the policies in the lower left and upper right corner of g (see Figure 3). For a collection of parallelograms G, we define C(G) as {c′(g) | g ∈ G} ∪ {c″(g) | g ∈ G}. Note that C(G) may contain policies that violate (13), and therefore C(G) ⊄ C. The following sketches the general idea behind the algorithm for the pricing problem; details will be given later.

Algorithm 2.

Initialization: Construct a grid of parallelograms G, such that each relevant policy (s, S) is covered by at least one parallelogram g ∈ G. Initialize the set of policies C′ = C(G) ∩ C. Thus, only policies in the corners of each parallelogram are initially considered.

Step 1: Select c∗ ∈ arg min_{c ∈ C′} R(c), where R(c) is given by (14).

Step 2: For any g ∈ G for which R(c′(g), c″(g)) < R(c∗) and for which c″(g) satisfies (13), refine the parallelogram: there might be policies c covered by g (but c ∉ C′) that improve on c∗ and satisfy (13). Remove g from G, and add to G a number of smaller parallelograms that together cover all policies originally covered by g.

Note that by Proposition 1: 1) if R(c′(g), c″(g)) ≥ R(c∗), then policies covered by g cannot improve on c∗, and 2) if c″(g) violates (13), then all covered policies do. In both cases, covered policies need not be evaluated.

Step 3: If any parallelogram was refined in Step 2, update C′ = C(G) ∩ C and go to Step 1. Otherwise, terminate returning c∗. G is stored for future calls to Steps 1-3.
This algorithm terminates, as parallelograms that need refining will become smaller and smaller until they cover only a single policy. In Section 4, we show that the algorithm only evaluates a small number of policies. In the remainder of this section, we will describe each step in detail.
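As an illustration of the pruning test in Step 2 (our sketch; the callables H, O, F_sum and the feasibility check for (13) are assumed to be supplied), the lower bound (18) decides whether a parallelogram can be discarded or must be refined:

```python
def parallelogram_bound(c_lo, c_hi, H, O, F_sum, mu):
    """Lower bound (18) on the reduced cost of every policy covered by the
    parallelogram spanned by c_lo = (s', S') and c_hi = (s'', S'')."""
    s1, S1 = c_lo
    s2, S2 = c_hi
    return H(s1, S1) + O(S1 - s1) + F_sum(s2, S2) + mu

def classify(g, best_reduced_cost, H, O, F_sum, mu, feasible):
    """Discard or refine parallelogram g = (c_lo, c_hi), given the incumbent c*."""
    c_lo, c_hi = g
    if not feasible(c_hi):                        # c'' violates (13): all covered policies do
        return "discard"
    if parallelogram_bound(c_lo, c_hi, H, O, F_sum, mu) >= best_reduced_cost:
        return "discard"                          # no covered policy can improve on c*
    return "refine"
```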
3.2.1 Determining relevant policies
To determine a finite set that contains the policy (s∗, S∗) with lowest reduced costs, we determine Ξ such that s∗ + S∗ < Ξ. To find such an upper bound Ξ, we will use the following proposition:

Proposition 2. For every policy (s, S),

$$H_j(s, S) \ge H_j\!\left(\frac{s+1+S}{2} - 1,\ \frac{s+1+S}{2}\right),$$   (19)

where H_j(y − 1/2, y + 1/2) for integer y is defined as the average of H_j(y − 1, y) and H_j(y, y + 1).

For the proof we refer to Appendix A. Now, define

$$y(u) = \min\left\{\, y' \ \middle|\ H\!\left(\frac{y'+1}{2} - 1,\ \frac{y'+1}{2}\right) > u \,\right\}.$$   (20)

Let u be an upper bound on R(s∗, S∗) − µ. Then, for any (s, S) such that s + S ≥ y(u),

$$R(s, S) \ge H(s, S) + \mu \ge H\!\left(\frac{s+1+S}{2} - 1,\ \frac{s+1+S}{2}\right) + \mu > u + \mu \ge R(s^*, S^*),$$   (21)

where the second inequality follows from Proposition 2.

When Algorithm 2 is used in a stand-alone manner to solve single-item inventory problems, we can thus set Ξ = y(R(c) − µ) for any policy c. But during the initialization step of Algorithm 1, no values of ν^i are available, and as a consequence, R(c) cannot be computed. To proceed, determine a policy c̃ such that F^i(c̃) is negligibly small for all i ∈ I. Set ũ = R(c̃) − µ − Σ_{i∈I} F^i(c̃)ν^i = H(c̃) + O(c̃), and use Ξ = y(ũ). After execution of Algorithm 2, check whether R(c∗) − µ ≤ ũ, which indicates that ũ was a valid bound. If the bound turns out to be invalid, determine a new Ξ based on R(c∗) − µ (that is now guaranteed to be valid), expand the grid to cover the added policies, and continue at Step 2 of Algorithm 2.
3.2.2 Ensuring computational efficiency
When adding (s, S) to C(G), H(s, S)M_{S−s} and F^i(s, S)M_{S−s}, i ∈ I, are determined using

$$H(s - 1, S)\,M_{S-s+1} = H(s, S)\,M_{S-s} + H(s - 1, s)\,m_{S-s},$$   (22)

and similarly for F^i. With computations based on this recursive definition, the computational effort of executing the algorithms depends mainly on the number of values of S for which policies (s, S) need to be evaluated.

Figure 4: Some parallelograms g ∈ G after initial construction.
3.2.3 Grid construction
Several good algorithms for constructing the covering grid during initialization of Algorithm 2 have been developed. We describe a method that has particularly robust performance across all parts. Determine first the values that will be used for S and ∆ = S − s > 0 in C′. Take {S_1, S_2, . . . , S_N} = {0, 1, 2, 3, 4, 6, 8, 12, 16, . . . , Ξ} (the step-size S_{n+1} − S_n doubles whenever a power of two above 2 is reached). Use a similar range for {∆_1, ∆_2, . . . , ∆_M}, but starting at 1. Now let

$$\bar{G} = \big\{\, g = (S_n - \Delta_{m+1},\ S_n;\ S_{n+1} - \Delta_m,\ S_{n+1}) \ \big|\ n \in \{1, \ldots, N-1\},\ m \in \{1, \ldots, M-1\} \,\big\}.$$   (23)

We then let G consist of all parallelograms g ∈ Ḡ that cover at least some policies (s, S) such that −1 ≤ s < S and s + S < Ξ. Figure 4 depicts some parallelograms g ∈ G constructed in this manner.
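A small sketch of this construction (our illustration; the final filtering to parallelograms that cover at least one feasible policy is omitted):

```python
def grid_values(limit, start=0):
    """0,1,2,3,4,6,8,12,16,...: the step doubles each time a power of two above 2 is reached."""
    vals, v, step = [], start, 1
    while v <= limit:
        vals.append(v)
        if v >= 4 and v & (v - 1) == 0:           # power of two above 2 reached
            step = v // 2
        v += step
    return vals

def initial_grid(xi):
    """Parallelogram corners (s', S'; s'', S'') over consecutive grid values, cf. eq. (23)."""
    S_vals = grid_values(xi, start=0)
    D_vals = grid_values(xi, start=1)             # values for Delta = S - s
    grid = []
    for n in range(len(S_vals) - 1):
        for m in range(len(D_vals) - 1):
            grid.append((S_vals[n] - D_vals[m + 1], S_vals[n],        # lower-left corner c'
                         S_vals[n + 1] - D_vals[m], S_vals[n + 1]))   # upper-right corner c''
    return grid
```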
3.2.4 Refining a parallelogram
When a parallelogram g = (s′, S′; s″, S″) needs to be refined in Step 2 of Algorithm 2, it is replaced by a number of smaller parallelograms covering the same policies. We consider two cases, see Figure 5:

• S′ − s′ ≠ S″ − s″. We replace the parallelogram by a number of parallelograms for which S′ − s′ = S″ − s″.
• S′ − s′ = S″ − s″. The parallelogram is split into two parallelograms of equal size.

Figure 5: Refining a parallelogram g = (s′, S′; s″, S″). Two cases are distinguished.

We refine policies in this manner to limit the number of values of S for which policies need to be evaluated (cf. Section 3.2.2).
3.3 Finding integer solutions
We now present two algorithms to obtain feasible integer solutions for the discrete problem based on the continuous relaxation (4)-(6),(12) by iteratively fixing policies for a subset of parts. The first algorithm applies sequential rounding, fixing the policy for one part at a time, each time selecting the policy with the highest primal value. Parts with higher h_j are fixed first. The actual algorithm is slightly more complicated, because we have to prevent rounding to infeasible policies:

Algorithm 3.

Initialization: Initialize the set Ĵ of parts for which a policy still needs to be fixed as J. Initialize the fill rate targets â^i as a^i.

Step 1: Solve the continuous relaxation, with the parts restricted to Ĵ. Use 1 − â^i as the RHS in (5) and (13) to account for the policies that are already fixed.

Step 2: Select the part j ∈ Ĵ with highest h_j. Select c∗ ∈ arg max{x_{jc} : c ∈ C_j′} and fix that policy for part j. Update â^i ← â^i + F_j^i(c∗), and remove j from Ĵ.

Step 3: If Ĵ = ∅, terminate. Otherwise, go to Step 1.

Recall that we applied column generation to solve the continuous relaxation (4)-(6),(12). We may find a feasible solution in such a situation by considering the mixed integer program containing only the columns that have been generated when solving the continuous relaxation. Because the number of binary variables in these integer problems may still be too large, the second algorithm divides the set of spare parts J into a set J_disc of discrete spare parts for which a single policy must be selected and a set J_cont for which we allow a mixture of policies.
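The sequential rounding of Algorithm 3 can be sketched as follows (hypothetical interfaces; solve_relaxation stands for the column generation of Section 3.1 restricted to the unfixed parts, and shortage(j, c) returns F_j^i(c) for i ∈ I_j):

```python
def sequential_rounding(parts, holding_cost, slack, solve_relaxation, shortage):
    """Algorithm 3 shape: fix one part at a time, most expensive first, choosing the
    policy with the largest fractional value; slack[i] = 1 - a_i is the remaining
    shortage budget of component i and shrinks as policies are fixed."""
    remaining = set(parts)
    fixed = {}
    while remaining:
        x = solve_relaxation(remaining, slack)     # x[j] maps policies to fractional values
        j = max(remaining, key=holding_cost)       # part with the highest h_j
        best = max(x[j], key=x[j].get)             # policy with the highest primal value
        fixed[j] = best
        for i, f in shortage(j, best).items():     # a_i_hat <- a_i_hat + F_j^i(c*)
            slack[i] -= f
        remaining.remove(j)
    return fixed
```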
Instance    A      B      C      D      E       F       G
|I|         4      33     68     75     491     414     1603
|J|         113    378    545    857    1814    3790    10028

Table 1: Characteristics regarding the size of the problem instances.
The size of the set J_disc determines the difficulty of solving the mixed integer problem with branch-and-bound. This size is controlled by a parameter K_0. This gives rise to the following algorithm.

Algorithm 4.

Initialization: Initialize the set Ĵ of parts for which a policy still needs to be fixed as J. Initialize the fill rate targets as â^i = a^i. Set K = K_0.

Step 1: Define J_disc as the K parts in Ĵ that have the highest values h_j. Set J_cont = Ĵ \ J_disc. Solve this mixed integer problem by branch-and-bound. Use 1 − â^i as the RHS in (5) and (13) to account for the policies that are already fixed.

Step 2: Fix for all parts j ∈ J_disc the policy that is selected in the solution from the previous step. Update the values â^i and remove J_disc from Ĵ. Set K = (2/3)K.

Step 3: If Ĵ = ∅, terminate. Otherwise, go to Step 1.

To obtain a starting solution, we apply Algorithm 3 to the parts in Ĵ.
4 Computational results
In this section, we investigate the ability of the algorithms developed in Section 3 to solve large-scale instances of (4-7). (Note that this problem formulation is approximate; for a detailed discussion we refer to Section 1. The quality of the lower bound (2) is validated in Appendix C.) In the next section, the value of implementing the method in practice will be investigated. Our test instances arose during the case study at the repair shop (cf. Section 5), and are available from the authors upon request. Instances of different sizes were constructed by restricting attention to classes of related components, see Table 1. We set the target performance for each component to 95%. The problem instances are orders of magnitude larger than any instance that has been solved in existing studies of ATO systems.
             Algorithm 3              Algorithm 4
Instance    Gap(%)   Time(min)      Gap(%)   Time(min)
A           0.46     0.1            0.00     1.2
B           0.14     0.1            0.01     0.3
C           0.19     0.1            0.02     30
D           0.21     1.6            0.05     46
E           0.68     6.4            0.33     90
F           0.48     23             0.30     160
G           0.91     176            0.60     896

Table 2: The relative gap to the best lower bound and the computation time of our algorithms.
To assess the quality of our algorithms, we compare the solution values obtained with the heuristics to a lower bound on the optimal solution value. This lower bound is computed using an optimal branch-andprice algorithm based on the column generation framework developed in Section 3.1. For larger instances, this optimal algorithm needs massive computation times to find the optimal solution. Therefore, we have truncated the branch-and-price algorithm after 48 hours. We compute the gaps between the solutions found with the heuristics and the best lower bound that was obtained after 48 hours. These gaps thus serve as an upper bound on the optimality gap. In Table 2, we present the gaps and running times for both heuristics for each instance. The values of the lower bounds and the solution we obtained can be found in Appendix B. Recall that K0 controls the number of binary variables in the mixed integer programs that are solved in Algorithm 4. For Instances A-C, we set K0 = |J |. By doing so, the algorithm solves the problem in one iteration. For Instance
G, we set K0 = 600. For the other instances, we set K0 = 360. We have used CPLEX 12.6 on modern hardware to solve the linear and mixed integer programs. When solving the mixed integer programs, we applied a time limit of 30 minutes. The table shows that Algorithm 3 solves Instances A-E within minutes, and Instances F and G in 24 and 176 minutes, respectively. These computation times are short enough for application in practice. The algorithm finds solutions that are at most 0.9% worse than the best lower bound. The second algorithm improves the quality of the solutions significantly in comparison to the first algorithm: Gaps are negligible for Instances A-D, and significantly smaller for larger instances. For the larger instances, the quality improvement comes with the burden of significant computation times. It should be noted that Instance A can be solved to optimality by using the branch-and-price framework. The optimal solution is found after 30 minutes and improves the solution of Algorithm 4 by 0.0004%. For the other instances, the solution obtained with Algorithm 4 was not improved. Figure 6 illustrates which policies are evaluated by Algorithm 2 for a high demand part. In most of the solution space, the sparse grid that was initially constructed suffices to establish optimality, showing the effectiveness of the lower bound in Proposition 1. As a consequence, the average computation time for executing the algorithm for a single part is in the order of a few milliseconds.
Figure 6: The policies that are evaluated for a high demand part. In the left part of the figure, the lower left area of the s, S plane is depicted. On the right side of the figure, a detail of the left side is depicted. Dots represent policies that are evaluated during execution of Algorithm 2. Darker/lighter dots represent policies that are added in early/late iterations of Step 2 of the algorithm. The large black dot represents the optimum.
Our model assumes FCFS with committing. As discussed in Section 2, this is in line with the policy in use at many repair shops. Still, we would like to investigate to what extent the committing assumption affects results. We investigate this question in Appendix D.
5 Case study
Algorithm 3 has been implemented in a decision support system (DSS) at the aircraft component repair shop of Fokker Services. In this section, we first discuss the company’s motivation to implement the algorithm. We then examine the way the DSS is used. Finally, we give insights into the benefits of applying the algorithm. The repair shop is wholly owned by Fokker Services. Fokker Services is one of the five businesses of Fokker Technologies, which develops and produces advanced structures and electrical systems for the aviation and aerospace industry, and supplies integrated services and products to aircraft owners and operators.
5.1 Motivation of the company
To explain the company’s motivation for implementing the method, we quote Maarten van Marle, the managing director of the repair shop: “This project is important for Fokker Services, as short and reliable repair turnaround times (TATs) are very important to our customers when deciding to which repair shop
Part description    y = 0    y = 1    y = 2    y = ...
seal                65%      35%               ...
washer              76%      20%      4%       ...
⋮                   ⋮        ⋮        ⋮        ⋮
pin                 95%      5%                ...
⋮                   ⋮        ⋮        ⋮        ⋮

Table 3: P(Y_j^i = y) for repairs of component {i = servo-motor} ∈ I and some parts {j1 = washer, j2 = seal, j3 = axe} ⊂ J^i, tabulated in decreasing probability of being used in i. The rows containing vertical dots represent 3 and 23 parts omitted from the table for brevity.
they will outsource their repairs. Meeting target TATs is therefore one of our primary KPIs. To score on this KPI, a number of processes have to be under control. The most challenging of these processes is making sure that the spare parts needed in the component repairs are available when we need them, while at the same time keeping inventory costs under control.” To give a better understanding of the importance of using automated methods for determining (s, S) policies, we emphasize that inventory consists of a very broad assortment of spare parts. As a consequence, each inventory analyst is responsible for a few thousand spare parts. Manually adjusting the policies to keep them up-to-date with, for example, changes in repair volumes and supply leadtimes, is very time consuming. Moreover, it is challenging to properly set the policies, while taking into account both the need for short repair TATs and the need to keep inventory costs under control.
5.2 Implementation
In order to use the algorithm in a DSS, data regarding the component repairs and spare parts are needed. We now describe the approach that is used at the company to obtain these data. Data are mainly retrieved from the ERP system of the repair shop. Prices and leadtimes of spare parts are obtained in this manner. The leadtimes in the ERP system represent standard supplier leadtimes. If demand for a spare part surges, there are limited abilities to reduce leadtimes. To some extent, existing orders may be expedited in order to alleviate the effects of a demand surge. However, expediting is achieved via case-by-case negotiation with the supplier, and the possibilities for expediting orders are very hard to predict in advance for the thousands of spare parts in the assortment. Therefore, it would be very costly to obtain the data needed to include expediting in the decision model, and the resulting benefits are not likely to outweigh these costs. Therefore, we choose to work with the standard leadtime. Estimates of future repair volumes are obtained using standard forecast techniques and improved using expert knowledge. The spare part usage probabilities P(Yji = y) for each component are estimated based on data regarding the spare parts usage in historic repairs (cf. Romeijnders et al. (2012)). Due to regulation requirements, a long history of accurate data regarding spare parts usage is available. Fixed ordering costs are estimated based on the time that it takes to place and receive a typical order.
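As a simple illustration of how such estimates can be obtained, the sketch below computes empirical frequencies from historical repair records of one component type; this is a naive frequency count, not the two-step forecasting method of Romeijnders et al. (2012):

```python
from collections import Counter

def usage_probabilities(repair_records):
    """Empirical estimate of P(Y_j^i = y). repair_records is a list of dicts
    mapping part id -> quantity used in one historic repair of component i."""
    n = len(repair_records)
    parts = {p for record in repair_records for p in record}
    probs = {}
    for p in parts:
        counts = Counter(record.get(p, 0) for record in repair_records)
        probs[p] = {y: c / n for y, c in sorted(counts.items())}
    return probs
```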
Because repair shops repair a wide range of components, instances typically consist of a large number of components and spare parts. Case F in Table 1 corresponds to all components and spare parts used in one of the sections of the repair shop, and thus gives an indication of the size of problems that are typically solved. Prices vary between EUR 0.01 and 40,000, with 5% of the spare parts above EUR 2,500, and 80% below EUR 500. Leadtimes vary between a few days and 2 years, with 80% of the leadtimes below 3 months. Forecasted repair rates of components vary between 0 and 150 per year, and 80% of the components have a rate below 6 per year. Table 3 shows an example of an estimate of the spare part usage probabilities P(Y_j^i = y) for a single component (e.g. a servo-motor). Twenty-three of the twenty-nine parts are used with a probability that is less than 5%, while the remaining five are used with higher probability. This is typical for most components, as parts are generally very reliable.

The spare part and component data are complemented with appropriate time-window fill rate targets for the components, which are determined based on target repair turnaround times (TATs). The TAT is what is seen by the customer. It consists of transportation, waiting for spare parts, and repair of the component. The target TAT is determined based on customer expectations or general market conditions. It should be short and highly reliable for critical components, while longer and less reliable TATs may be acceptable for less critical components. For example, the target TAT could be 5 days for highly critical components, which needs to be achieved with 95% probability. For less critical components the target TAT could be 40 days, which needs to be achieved with 80% probability. If transportation and the actual repair take 5 days, then appropriate time-window fill rate targets would be 80% within 35 days for less critical components and 95% within 0 days for critical components. To control the average repair time of some critical components, multiple time-window fill-rate targets may be combined, for example 80% within 35 days and 98% within 75 days.

Based on the data and TAT targets, the DSS periodically creates a problem instance that reflects the inventory problem at the repair shop. The DSS then solves this problem instance using Algorithm 3. In this manner, it recommends (s, S) policies to the inventory analysts. The analysts generally adhere to these policies. Because the method computes new policies automatically, much effort is saved in keeping the policies up-to-date with changes in forecasts, leadtimes and market conditions.
5.3 Benefits
The approach helps the company to overcome the difficulties of effectively managing the inventory, while attaining the desired performance on the level of components. Mr van Marle: “We can now assure that decision making throughout the organization is aligned with the business targets on a component level.” As a consequence, inventory can be managed more cost-efficiently: “By using a demand forecast to predict future inventory levels based on these policies, the future inventory level was projected to decrease by about 15% compared to current values.” To shed further light on these benefits, we apply our method and a benchmark to Case F from Table 1. The benchmark method is typical for spare parts methods used in practice, and roughly reflects Fokker
h_j \ λ_j    slow           average        fast
expensive    0.8 (453)      0.9 (151)      0.95 (84)
medium       0.95 (486)     0.98 (143)     0.99 (153)
cheap        0.98 (1178)    0.99 (540)     0.998 (602)

Table 4: The fill rate targets used for parts in each group in the benchmark methods. The number of parts in each group is given in parentheses.
                                            Time-window fill rate
Method         Criticality   Time window   ≤80%   80-85%   85-90%   90-95%   >95%
Benchmark      High          0 days        4      8        15       33       54
               Medium        15 days       1      3        6        26       72
               Low           35 days       3      2        8        13       166
Algorithm 3    High          0 days        0      0        0        0        114
               Medium        15 days       0      0        0        77       31
               Low           35 days       0      0        68       34       23

Table 5: The number of components that achieve a time-window fill rate below various thresholds for the benchmark method and our approach. Boldface indicates components with performance below the target.
Services’ old approach for optimizing inventories: It sets an individual fill-rate target for each spare part. In particular, the spare parts are partitioned in a 3×3 grid based on usage frequency (λj ) and the holding cost (hj ). Cutoff values are 2/year and 6/year for λ, and 50 and 250 for hj . (hj has been rescaled for confidentiality.) Parts in each group are assigned a fill-rate target as detailed in Table 4. As discussed in Sections 1 and 5.2, repair shop performance is measured on the level of components, and availability targets depend on component criticality. Case F has 114, 108 and 192 components with a high, a medium, and a low criticality, respectively. For purposes of illustration, we use a single time-window fill rate target for each component: 95% within 0 days for high criticality components; 90% within 15 days for medium criticality components; and 80% within 35 days for low criticality components. While both the benchmark method and the component availability targets roughly reflect the old and new approach in use at the company, the precise targets used in this section are hypothetical for confidentiality. For the solutions obtained using our method and the benchmark method, we obtain the appropriate time-window fill rate for each component using simulation until confidence intervals are smaller than 0.15%. The results are summarized in Table 5. The benchmark method cannot deliver the desired performance for 60/114 = 52% of the high criticality components, and 27/114 = 23% of the high criticality components have more than twice as many delays as the 5% target. Such large numbers of repair delays cannot be accommodated by the operator and reduce his ability to efficiently maintain the aircraft. This would eventually lead to a decreasing market share of the repair shop. The inability to deliver the desired performance on the level of components is a significant shortcoming of all parts-based methods like the benchmark method.
             Proposed method                 Benchmark method
h_j \ λ_j    slow    average   fast          slow            average         fast
expensive    891K    518K      551K          1631K (+83%)    694K (+34%)     587K (+7%)
medium       201K    158K      455K          239K (+19%)     167K (+6%)      508K (+12%)
cheap        99K     101K      226K          107K (+8%)      101K (+0.1%)    232K (+2.4%)

Table 6: The cost increase of the benchmark method with respect to the proposed method, for the various sub-assortments.
slow 395/453 (87%) 311/486 (64%) 60/1178 (5%)
average 90/151 (59%) 3/143 (2%) 0/540 (-)
fast 10/84 (12%) 0/153 (-) 0/602 (-)
Table 7: The number of parts with base-stock policies and the total number of parts in each category.
As expected, our approach does deliver the desired performance for all components. Mr. van Marle recognizes the advantages of this feature: “I am confident that the method has a positive impact on sales, as it allows us to better guarantee that we deliver to our customers what they expect.” In terms of ordering and holding costs, the proposed method also performs significantly better than the benchmark method: the costs of the benchmark are 33% higher than the costs of the proposed method. Table 6 summarizes the cost increase of the benchmark with respect to the proposed method for each of the nine groups in the benchmark method. Of course, these savings depend on the service targets used for the benchmark method. But note that the current targets in the benchmark method cannot achieve the desired performance on component level. Hence, any targets that do achieve the desired component availability will have even higher associated costs. Our model has the ability to use base-stock levels or (s, S) policies with s < S − 1 depending on
the cost parameters of each individual spare part. In Table 7 we show the number of parts that use a base-stock level for each sub-assortment. The table shows that base-stock levels are typically used for expensive slower moving parts, which matches the intuition.
6 Conclusion
We investigated spare parts inventory control at a repair shop, and found that performance of the repair shop should be measured on the level of component repairs, rather than on the level of spare parts. We modeled this inventory problem along the lines of models in the ATO literature. We ensured that modeling assumptions match the practice of repair shops. We presented an automated approach to solve the model, based on a formulation of the problem as a binary program, in which each combination of policy parameters (s, S) for a spare part is represented by a column. The continuous relaxation of this binary program was solved by column generation and LP-based
algorithms were developed to find integer solutions. As part of the column generation approach, a very efficient algorithm was developed to solve the related pricing problem. It was shown in a computational study that the algorithms find close-to-optimal solutions for systems of the size that are prevalent in practice. They are the first algorithms for ATO systems that have been shown capable of solving such large-scale systems. The method has been implemented at a repair shop owned by Fokker Services. In a case study at the repair shop, we have shown that this implementation improves inventory control. In particular, the algorithm aligns inventory decisions with business targets for the TATs of component repairs, and significantly outperforms item-based approaches. In the approach presented in this paper, component exchange inventories can only be taken into account to a limited extent: by setting appropriate repair TAT targets for the components. This is reasonable for external repair shops, because they have no control of exchange inventories, which are controlled by the line maintenance organization. Still, integrating inventory control of components and spare parts under the assumption that a component repair requires multiple spare parts might be an interesting direction for future research. Another interesting direction for future research would be developing coordinated replenishment policies that are tractable for the large systems considered in this paper.
References G. Aerts. Personal communication, 2013. (Mr. Aerts was assistent head of support at NedTrain componentenbeheer at the time of correspondence). N. Agrawal and M. A. Cohen. Optimal material control in an assembly system with component commonality. Naval Research Logistics, 48:409–429, 2001. Y. Ak¸cay and S. H. Xu. Joint inventory replenishment and component allocation optimization in an assemble-toorder system. Management Science, 50:99–116, 2004. Aviation Week. 10-year global MRO forecast. Aviation Week: Overhaul & Maintenance, 17(4):28–31, 2011. S. Axs¨ ater. Inventory Control. Springer, 2nd edition, 2006. S. Benjaafar and M. El Hafsi. Production and inventory control of a single product assemble-to-order system with multiple customer classes. Management Science, 52:1896–1912, 2006. M. Bijvank, G. Koole, and I.F.A. Vis. Optimising a general repair kit problem with a service constraint. European Journal of Operational Research, 204(1):76–85, 2010. K.E. Caggiano, P. L. Jackson, J. A. Muckstadt, and J. A. Rappold. Optimizing service parts inventory in a multiechelon, multi-item supply chain with time-based customer service-level agreements. Operations Research, 55(2):303–318, 2007. F. Y. Chen and Y. Feng. Optimization and optimality of (s, S) stochastic inventory systems with non-quasiconvex costs. Probability in the Engineering and Informational Sciences, 20:287–306, 2006.
22
F. Cheng, M. Ettl, G. Lin, and D. D. Yao. Inventory-service optimization in configure-to-order systems. Manufacturing & Service Operations Management, 4:114–132, 2002. M. de Jong. Personal communication, 2012. (Mr. de Jong was Manager Operations at Fokker CMRO Schiphol at the time of correspondence). M. Do˘ gru, M. Reiman, and Q. Wang. A stochastic programming based inventory policy for assemble-to-order systems with application to the W model. Operations Research, 58:849–864, 2010. M. Ettl, G. E. Feigin, G. Y. Lin, and D. D. Yao. A supply network with base-stock control and service requirements. Operations Research, 48:216–232, 2000. W. H. Hausman, H. L. Lee, and A. X. Zhang. Joint demand fulfillment probability in a multi-item inventory system with independent order-up-to policies. European Journal of Operational Research, 109:646–659, 1998. R. Kapuscinski, R. Q. Zhang, P. Carbonneau, R. Moore, and B. Reeves. Inventory decisions in Dell’s supply chain. Interfaces, 34:191–205, 2004. W.J. Kennedy, J. Wayne Patterson, and L. D. Fredendall. An overview of recent literature on spare parts inventories. International Journal of Production Economics, 76:201–215, 2002. A. A. Kranenburg and G.J. van Houtum. Service differentiation in spare parts inventory management. Journal of the Operations Research Society, 59:946–955, 2008. A. A. Kranenburg and G.J. van Houtum. A new partial pooling structure for spare parts networks. European Journal of Operational Research, 199:908–921, 2009. L. Lu, J. Song, and H. Zhang. Optimal and asymptotically optimal policies for assemble-to-order n- and w-systems. Columbia Business School Research Paper No. 14-55, 2014. Y. Lu and J.-S. Song. Order-based cost optimization in assemble-to-order systems. Operations Research, 53: 151–169, 2005. Y. Lu, J.-S. Song, and D. D. Yao. Backorder minimization in multiproduct assemble-to-order systems. IIE Transactions, 37:763–774, 2005. J. A. Muckstadt. A model for a multi-item, multi-echelon, multi-indenture inventory system. Management Science, 20:472–481, 1973. M. Reiman and Q. Wang. A stochastic program based lower bound for assemble-to-order inventory systems. Operations Research Letters, 40(2):89–95, 2012. W. Romeijnders, R. Teunter, and W. van Jaarsveld. A two-step method for forecasting spare parts demand using information on component repairs. European Journal of Operational Research, 220:386–393, 2012. J.-S. Song. On the order fill rate in a multi-item, base-stock inventory system. Operations Research, 46:831–845, 1998. J.-S. Song. A note on assemble-to-order systems with batch ordering. Management Science, 46:739–743, 2000. J.-S. Song. Order-based backorders and their implications in multi-item inventory systems. Management Science, 48:499–516, 2002. J.-S. Song and D. D. Yao. Performance analysis and optimization of assemble-to-order systems with random lead times. Operations Research, 50:889–903, 2002. J.-S. Song and Y. Zhao. The value of component commonality in a dynamic inventory system with lead times. Manufacturing & Service Operations Management, 11:493–508, 2009.
A. Svoronos and P. Zipkin. Evaluation of one-for-one replenishment policies for multiechelon inventory systems. Management Science, 37:68–83, 1991.
R. H. Teunter. The multiple-job repair kit problem. European Journal of Operational Research, 175(2):1103–1116, 2006.
E. Topan, Z. P. Bayındır, and T. Tan. An exact solution procedure for multi-item two-echelon spare parts inventory control problem with batch ordering in the central warehouse. Operations Research Letters, 38:454–461, 2010.
W. van Jaarsveld and A. Scheller-Wolf. Optimization of industrial-scale assemble-to-order systems. Working paper, 2014.
H. Wong, B. Kranenburg, G.-J. van Houtum, and D. Cattrysse. Efficient heuristics for two-echelon spare parts inventory systems with an aggregate mean waiting time constraint per local warehouse. OR Spectrum, 29:699–722, 2007.
A. X. Zhang. Demand fulfillment rates in an assemble-to-order system with multiple products and dependent demands. Production and Operations Management, 6:309–324, 1997.
Y. Zhao and D. Simchi-Levi. Performance analysis and evaluation of assemble-to-order systems with stochastic sequential lead times. Operations Research, 54:706–724, 2006.
Y. Zheng and A. Federgruen. Finding optimal (s, S) policies is about as simple as evaluating a single policy. Operations Research, 39:654–665, 1991.
P. Zipkin. Stochastic leadtimes in continuous-time inventory models. Naval Research Logistics Quarterly, 33:763–774, 1986.
A Proof of propositions
Proof of Proposition 1. For the proof, we will use the following three lemmas.

Lemma 1.
1. $H_j(S-1, S)$ is increasing in $S$.
2. $F_j^i(S-1, S)$ is decreasing in $S$.

Proof. For fixed leadtime, evaluation of $\lim_{t \to \infty} I_j(t)$ is standard. The results then follow from (8) and (10). For stochastic sequential leadtimes, $H_j(S-1, S)$ and $F_j^i(S-1, S)$ are evaluated by conditioning on the leadtime and using the approach for fixed leadtime (cf. Zipkin (1986)). The asserted properties are preserved under this conditioning.

Lemma 2.
1. $H_j(s+\Delta, S)$ and $F_j^i(s-\Delta, S)$ are nondecreasing in $\Delta$.
2. $H_j(s+\Delta, S+\Delta)$ and $F_j^i(s-\Delta, S-\Delta)$ are nondecreasing in $\Delta$.

Proof. The results are a consequence of (11) and the monotonicity of $H_j(y-1, y)$ and $F_j^i(y-1, y)$ from Lemma 1.
Lemma 3. $O_j(S-s)$ is nonincreasing in $S-s$.
Proof. Immediate from $O_j(S-s) = o_j \lambda_j / M_{S-s}$ and the definition of $M_{S-s}$.

We are now ready to prove Proposition 1. Let $(s, S)$ denote a policy in the parallelogram. By Lemma 2, we know that $H(s, S) \ge H(s', S')$ and $F^i(s, S) \ge F^i(s'', S'')$, while Lemma 3 shows that $O(s, S) \ge O(s', S')$. Consequently, the reduced cost of $(s, S)$ is bounded from below by (18) (note that $\nu^i \ge 0$). This proves the first claim. The additional claim follows immediately from Lemma 2.
Proof of Proposition 2. Let $j \in J$ be given and let $(s, S) \in C_j$ be the policy to control the inventory for spare part $j$. We will prove that
$$H_j(s, S) \ge H_j\left(\frac{s+S-1}{2}, \frac{s+S+1}{2}\right),$$
where $H_j(s, S)$ is the holding cost rate for part $j$. We will apply renewal reward theory to compute the left-hand side. Renewals correspond to moments at which a replenishment order is placed. Define $C(s, S)$ as the expected holding cost and $T(s, S)$ as the expected time during a cycle. For simplicity of notation, define $p_d = P(D = d)$ for $d \in \mathbb{N}$ as the compounding distribution of the demand for spare part $j$. It then holds that
$$C(s, S) = \frac{1}{\lambda_j} H_j(S-1, S) + \sum_{d=1}^{S-s-1} p_d\, C(s, S-d), \tag{24}$$
$$T(s, S) = \frac{1}{\lambda_j} + \sum_{d=1}^{S-s-1} p_d\, T(s, S-d). \tag{25}$$
The elementary renewal theorem now states that
$$H_j(s, S) = \frac{C(s, S)}{T(s, S)} = \frac{\lambda_j C(s, S)}{\lambda_j T(s, S)}.$$
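For concreteness, a minimal numerical sketch of recursions (24) and (25) is given below. The rate lam, the compounding distribution p, and the stand-in h_base for $H_j(S-1, S)$ are illustrative assumptions, not values or formulas taken from the paper.

# Minimal sketch of recursions (24)-(25); all inputs below are illustrative assumptions.
from functools import lru_cache

lam = 2.0                          # order arrival rate lambda_j (assumed)
p = {1: 0.6, 2: 0.3, 3: 0.1}       # compounding distribution P(D = d) (assumed)

def h_base(S):
    # Stand-in for H_j(S-1, S); in the paper this follows from the leadtime-demand analysis.
    return 0.5 * S

@lru_cache(maxsize=None)
def C(s, S):
    # Expected holding cost during a replenishment cycle, recursion (24).
    return h_base(S) / lam + sum(p.get(d, 0.0) * C(s, S - d) for d in range(1, S - s))

@lru_cache(maxsize=None)
def T(s, S):
    # Expected cycle length, recursion (25).
    return 1.0 / lam + sum(p.get(d, 0.0) * T(s, S - d) for d in range(1, S - s))

def H(s, S):
    # Holding cost rate via the elementary renewal theorem.
    return C(s, S) / T(s, S)

print(H(2, 7))  # example evaluation for the policy (s, S) = (2, 7)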
Multiplying (24) and (25) by $\lambda_j$, we see that the functions $(s, S) \mapsto \lambda_j C(s, S)$ and $(s, S) \mapsto \lambda_j T(s, S)$ satisfy a similar equation with $\lambda_j = 1$. Therefore, we can assume $\lambda_j = 1$ without loss of generality. To ease the notation, we now define, for fixed $s \in \mathbb{Z}$, the function
$$v : \mathbb{N} \to \mathbb{R} : n \mapsto v(n) = H_j\left(s + \frac{n}{2},\; s + 1 + \frac{n}{2}\right).$$
Here $H_j(s - \tfrac{1}{2}, s + \tfrac{1}{2})$ is defined as the average of $H_j(s-1, s)$ and $H_j(s, s+1)$. Convexity of $s \mapsto H_j(s, s+1)$ then implies that
$$H_j(i+1, i+2) - H_j(i, i+1) \ge H_j(i, i+1) - H_j(i-1, i) \quad \text{for all } i \in \tfrac{1}{2}\mathbb{Z}.$$
The function $v$ therefore satisfies
$$v(n+1) - v(n) \ge v(n) - v(n-1); \tag{26}$$
i.e., the function $v$ is convex. For later convenience, we also introduce
$$\bar{v} : \mathbb{N}_0^2 \to \mathbb{R} : (i, j) \mapsto \bar{v}(i, j) = v(i) - v(j).$$
From (26), we see that the function $i \mapsto \bar{v}(i+1, i)$ is increasing. This implies that $\bar{v}(i+1, i) \le \bar{v}(j+1, j)$ whenever $i \le j$. It follows for $n, j \in \mathbb{N}_0$ with $1 \le j \le n$ that
$$\bar{v}(n, n-j) = v(n) - v(n-j) = \sum_{k=0}^{j-1} \big(v(n-k) - v(n-k-1)\big) = \sum_{k=0}^{j-1} \bar{v}(n-k, n-k-1) \le \sum_{k=0}^{j-1} \bar{v}(n, n-1) = j\,\bar{v}(n, n-1).$$
As the left- and right-hand sides are obviously equal for $j = 0$, we conclude for $0 \le j \le n$ that
$$\bar{v}(n, n-j) \le j\,\bar{v}(n, n-1). \tag{27}$$
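To make the half-integer convention behind $v$ concrete, consider the purely illustrative choice $s = 2$ (not a value from the paper):
$$v(0) = H_j(2, 3), \quad v(1) = H_j\!\left(\tfrac{5}{2}, \tfrac{7}{2}\right) = \tfrac{1}{2}\bigl(H_j(2, 3) + H_j(3, 4)\bigr), \quad v(2) = H_j(3, 4), \quad v(3) = \tfrac{1}{2}\bigl(H_j(3, 4) + H_j(4, 5)\bigr).$$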
We will prove the lemma for fixed $s$ by induction on $S > s$. Note that for fixed $s$, the policy $(s, S)$ is alternatively characterized by the difference $S - s - 1$. To ease the notation, and to clarify the inductive argument, we define $N = S - s - 1$ and apply the identification $N \sim (s, s+1+N) \in C_j$. Note that $S > s$ is equivalent to $N \ge 0$. We will apply the following lemma.
Lemma 4. For all $N \in \mathbb{N}$,
$$\sum_{j=1}^{N} j\, p_j\, T(N-j) \le N.$$
Proof. For $N = 1$ the result follows from the observation that $p_1 \le 1$ and $T(0) = 1$. Assume now that it holds for all $n < N$. Then
$$\sum_{j=1}^{N} j p_j T(N-j) = N p_N + \sum_{j=1}^{N-1} j p_j T(N-j) = N p_N + \sum_{j=1}^{N-1} j p_j \left(1 + \sum_{k=1}^{N-j} p_k T(N-j-k)\right)$$
$$= \sum_{j=1}^{N} j p_j + \sum_{k=1}^{N-1} p_k \sum_{j=1}^{N-k} j p_j T(N-k-j) \le \sum_{j=1}^{N} j p_j + \sum_{k=1}^{N-1} p_k (N-k) = \sum_{j=1}^{N} N p_j \le N,$$
where we substituted (25) in the second equality and applied the induction hypothesis in the first inequality. This proves the claim.

We are now ready to prove Proposition 2. Recall that $N = S - s - 1$, so
$$v(N) = H_j\left(\frac{2s+N}{2}, \frac{2s+2+N}{2}\right) = H_j\left(\frac{s+S+1}{2} - 1, \frac{s+S+1}{2}\right).$$
Similarly, $v(2N) = H_j(S-1, S)$. Using the notation introduced in this appendix, we should thus prove the following.

Lemma 5. $C(N)/T(N) \ge v(N)$ for all $N \in \mathbb{N}_0$.

Proof. For $N = 0$ the result follows by definition. Assume now that it holds for all $n < N$. We then have, for all $1 \le j \le N$,
$$0 \le C(N-j) - v(N-j)\, T(N-j).$$
Multiplying this expression by $p_j$ and summing it from $j = 1$ to $N$, we obtain
$$0 \le \sum_{j=1}^{N} p_j C(N-j) - \sum_{j=1}^{N} p_j v(N-j) T(N-j). \tag{28}$$
Applying (27), Lemma 4, and the monotonicity of $i \mapsto \bar{v}(i+1, i)$, respectively, we see that
$$\sum_{j=1}^{N} p_j \bar{v}(N, N-j) T(N-j) \le \bar{v}(N, N-1) \sum_{j=1}^{N} j p_j T(N-j) \le N \bar{v}(N, N-1) = \sum_{k=0}^{N-1} \bar{v}(N, N-1) \le \sum_{k=0}^{N-1} \bar{v}(2N-k, 2N-k-1) = v(2N) - v(N).$$
Rearranging terms and substituting $\bar{v}(N, N-j) = v(N) - v(N-j)$, we can rewrite this as
$$0 \le v(2N) - v(N) + \sum_{j=1}^{N} p_j v(N-j) T(N-j) - \sum_{j=1}^{N} p_j v(N) T(N-j). \tag{29}$$
Adding (28) and (29), we obtain
$$0 \le v(2N) + \sum_{j=1}^{N} p_j C(N-j) - v(N)\left(1 + \sum_{j=1}^{N} p_j T(N-j)\right) = C(N) - v(N)\, T(N).$$
This proves the claim, and hence Proposition 2 follows, because $H_j(s, S) = C(N)/T(N) \ge v(N) = H_j\big(\tfrac{s+S-1}{2}, \tfrac{s+S+1}{2}\big)$.
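As a sanity check, the short sketch below verifies the inequality of Lemma 5 numerically for one illustrative instance with $\lambda_j = 1$. The base sequence h (a stand-in for $H_j(y-1, y)$, chosen increasing and convex) and the compounding distribution p are arbitrary assumptions and are not taken from the paper.

# Numerical spot-check of Lemma 5 for one illustrative instance (lambda_j = 1).
from functools import lru_cache

p = {1: 0.5, 2: 0.3, 3: 0.2}    # assumed compounding distribution
s = 0

def h(y):
    # Stand-in for H_j(y-1, y): increasing and convex in y, as required by Lemmas 1 and the convexity of s -> H_j(s, s+1).
    return (y - s) ** 2 + 1.0

def v(n):
    # v(n) = H_j(s + n/2, s + 1 + n/2), averaging neighbouring integer points at half-integers.
    y = s + 1 + n / 2
    return h(y) if n % 2 == 0 else 0.5 * (h(y - 0.5) + h(y + 0.5))

@lru_cache(maxsize=None)
def C(N):
    # Recursion (24) with lambda_j = 1 and S = s + 1 + N; note H_j(S-1, S) = v(2N).
    return v(2 * N) + sum(p.get(d, 0.0) * C(N - d) for d in range(1, N + 1))

@lru_cache(maxsize=None)
def T(N):
    # Recursion (25) with lambda_j = 1.
    return 1.0 + sum(p.get(d, 0.0) * T(N - d) for d in range(1, N + 1))

for N in range(0, 15):
    assert C(N) / T(N) >= v(N) - 1e-9, N
print("C(N)/T(N) >= v(N) verified for N = 0..14")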
B Computational results
Table 8 gives the values and lower bounds obtained for the different algorithms and cases investigated in Section 4. Note that the objective value for Case F differs from the one reported in the case study, because we use different targets here.
Instance        LP          Best bound      Alg. 3         Alg. 4
A                 66141       66448.6         66751.4        66448.9
B               1316033     1321868         1323754        1321987
C               1689486     1698004         1701197        1698393
D               1824336     1832479         1836392        1833434
E               2721664     2730443         2749126        2739339
F               4500976     4516924         4538607        4530651
G              10167509    10183923        10276708       10245184

Table 8: The second column gives the objective value of the LP relaxation. The third column gives the best lower bound obtained by branch-and-price. The fourth and fifth columns present the solution values obtained by Algorithm 3 and Algorithm 4, respectively.

C The quality of the lower bound

In this appendix, we examine the quality of the lower bound (2). We consider Case F in Table 1. To facilitate the interpretation of the results, we will use the same target fill rate $a_i = a$ for each component in the proposed method, instead of using the targets that are used at the company.
We vary $a$ over 90%, 95%, and 98%. A solution for the resulting instances is determined using Algorithm 3. For each component, the deviation between the lower bound on the fill rate (2) and the true fill rate (1) is determined. The true fill rate is obtained using simulation until confidence intervals are smaller than 0.05%. The results are shown in Figure 7. The figure should be interpreted as follows. Consider the vertical line at 1%. For the case with targets of 90%, the figure shows that the true fill rate deviates at most 1% from the lower bound on the fill rate for 91% of the components. Similarly, the percentage of components that have a deviation of at most 1% is 98% for the case with targets of 95%. The figure shows that for 95% of the components, the deviation is smaller than 1.3%, 0.6%, and 0.15%, when the target is 90%, 95%, and 98%, respectively. The average deviation is 0.45%, 0.16%, and 0.04%, respectively. The lower bound is thus a quite accurate approximation of the true fill rate.

Figure 7: The cumulative fraction of components for which the absolute deviation between the lower bound and the true fill rate is below the value on the horizontal axis (horizontal axis: absolute deviation, 0%–6%; one curve per target fill rate: 90%, 95%, and 98%).
          Components per interval of fill-rate difference
Target   [0, 0.001)   [0.001, 0.002)   [0.002, 0.005)   [0.005, 0.01)   [0.01, 0.015)   [0.015, 0.02)
90%      123 (29%)    51 (12%)         134 (32%)        89 (21%)        15 (3.6%)       2 (0.5%)
95%      224 (54%)    115 (28%)        65 (16%)         10 (2.4%)       0 (-)           0 (-)
98%      393 (95%)    14 (3.4%)        7 (1.7%)         0 (-)           0 (-)           0 (-)

Table 9: The number of components for which the difference in fill rate between FCFS and FRFS lies in each of six intervals. We distinguish between target fill rates $a \in \{90\%, 95\%, 98\%\}$.
D Allocation
In this appendix, we investigate the effects of the FCFS assumption, and in particular the effects of committing inventories to arrived demands. As a benchmark, we use a so-called First-Ready First-Serve (FRFS) policy (cf. Song and Zhao 2009). FRFS is a no-holdback policy: it only (and always) allocates spare parts to a repair when all required spares for that repair are available. Under FRFS, backorders are cleared first-come, first-served. Because FRFS does not commit inventories to demands that cannot yet be fulfilled, it is a suitable rule for testing the possible adverse effects of committed inventories under FCFS (which does commit inventories for all arriving demands). Because the optimization problem considered in this paper is intractable under FRFS, we optimize the inventory policies under FCFS and determine to what extent the fill rate improves under FRFS. We use Case F with the three fill-rate targets $a \in \{90\%, 95\%, 98\%\}$ as in Appendix C. We obtain (s, S) policies using Algorithm 3. We determine the achieved component fill rates under FCFS and FRFS using simulation until confidence intervals are smaller than 0.05%. We then determine the fill-rate difference between FCFS and FRFS for each of the 414 components and for each target $a$.

Table 9 summarizes the results. The table shows that typical differences in fill rate between FCFS and FRFS are relatively small, but not negligible. FRFS also has some undesirable properties: it is, for example, very difficult to predict in advance when repairs will be finished under FRFS, which may be problematic if repair shops wish to make reliable promises to customers after inspection. Additionally, FRFS may be difficult to implement in some ERP systems that automatically withdraw the parts that are needed for a repair. Repair shops need to determine whether the small performance improvement resulting from FRFS outweighs these disadvantages.
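To illustrate the difference between the two allocation rules, the following toy discrete-time simulation contrasts FCFS commitment with FRFS no-holdback allocation for one component type requiring up to two spare parts, under one-for-one base-stock replenishment with fixed leadtimes. All parameters and the simplified dynamics are assumptions chosen for illustration only; they do not reproduce the model, the (s, S) policies, or the data used in the paper.

# Toy comparison of FCFS (commit parts on arrival) vs. FRFS (allocate only when the
# full kit is on hand). Model and numbers are illustrative assumptions only.
import random

LEADTIME = {1: 8, 2: 12}     # fixed replenishment leadtimes per part (periods)
BASESTOCK = {1: 3, 2: 4}     # one-for-one base-stock levels per part
REQ_PROB = {1: 0.9, 2: 0.5}  # probability a repair needs one unit of each part
ARRIVAL_P = 0.3              # probability a repair arrives in a period
WINDOW = 10                  # time window for the fill rate (periods)
HORIZON = 200_000

def simulate(policy, seed=1):
    rng = random.Random(seed)
    on_hand = dict(BASESTOCK)                 # uncommitted stock per part
    pipeline = {j: [] for j in BASESTOCK}     # due dates of outstanding replenishments
    open_repairs = []                         # [arrival period, set of parts still needed]
    delays = []
    for t in range(HORIZON):
        for j in BASESTOCK:                   # receive replenishments that are due
            due = [d for d in pipeline[j] if d <= t]
            pipeline[j] = [d for d in pipeline[j] if d > t]
            on_hand[j] += len(due)
        if rng.random() < ARRIVAL_P:          # new repair; inspection reveals the needed parts
            need = {j for j in BASESTOCK if rng.random() < REQ_PROB[j]}
            if need:
                open_repairs.append([t, set(need)])
                for j in need:                # one-for-one reorder keeps the inventory position at base stock
                    pipeline[j].append(t + LEADTIME[j])
        for rep in open_repairs:              # allocate spare parts, oldest repairs first
            if policy == "FCFS":              # commit any available needed unit immediately
                for j in list(rep[1]):
                    if on_hand[j] > 0:
                        on_hand[j] -= 1
                        rep[1].remove(j)
            elif all(on_hand[j] > 0 for j in rep[1]):   # FRFS: allocate only complete kits
                for j in rep[1]:
                    on_hand[j] -= 1
                rep[1].clear()
        finished = [rep for rep in open_repairs if not rep[1]]
        open_repairs = [rep for rep in open_repairs if rep[1]]
        delays.extend(t - rep[0] for rep in finished)
    return sum(d <= WINDOW for d in delays) / len(delays)

print("time-window fill rate under FCFS:", simulate("FCFS"))
print("time-window fill rate under FRFS:", simulate("FRFS"))

In such a toy setting, FRFS typically achieves a slightly higher time-window fill rate than FCFS, because FCFS can tie up a part in a repair that is still waiting for another part, while FRFS leaves it available for a repair whose kit can be completed immediately.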