NETWORK ALGORITHMS FOR SUPPLY CHAIN OPTIMIZATION PROBLEMS
By
BURAK EKSIOGLU
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
2002
To my family...
ACKNOWLEDGMENTS
I would like to thank my dissertation chair, Panos Pardalos, for his technical
advice, professionalism, encouragement and insightful comments throughout my
dissertation research.
I acknowledge my committee members, Selcuk Erenguc, Joseph Geunes, Edwin
Romeijn, and Max Shen, for their constructive criticism concerning the material of
this dissertation. I would like to thank Edwin Romeijn for all the effort he has devoted
to the supervision of this research and Joe Geunes for his time and suggestions. I
would also like to express my appreciation to all of my sincere friends at the ISE
Department and in Gainesville.
No words can express all my thanks to my parents, Inceser and Galip Eksioglu,
my sister, Burcu, and brother, Oguz, for their love, encouragement, motivation, and
eternal support. Last but not least, I would like to thank my wife, Sandra, for
her patience, kindness, and continuous support throughout all my years here at the
University of Florida.
TABLE OF CONTENTS
page
ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
CHAPTERS
1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Supply Chain Management . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Global Optimization . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Goal and Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 PRODUCTION PLANNING AND LOGISTICS PROBLEMS . . . . . . 8
2.1 Single-Item Economic Lot Sizing Problem . . . . . . . . . . . . . . 8
2.2 Production-Inventory-Distribution (PID) Problem . . . . . . . . . 10
2.3 Complexity of the PID Problem . . . . . . . . . . . . . . . . . . . 12
2.4 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.1 Fixed Charge Network Flow Problems . . . . . . . . . . . . 15
2.4.2 Piecewise-linear Concave Network Flow Problems . . . . . 16
3 SOLUTION PROCEDURES . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1 Existing Solution Approaches . . . . . . . . . . . . . . . . . . . . 25
3.2 Local Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.1 History of Local Search . . . . . . . . . . . . . . . . . . . . 27
3.2.2 Complexity of Local Search . . . . . . . . . . . . . . . . . . 28
3.2.3 Local Search for the PID Problem . . . . . . . . . . . . . . 29
3.3 Dynamic Slope Scaling Procedure (DSSP) . . . . . . . . . . . . . 37
3.3.1 Fixed Charge Case . . . . . . . . . . . . . . . . . . . . . . 39
3.3.2 Piecewise-linear Concave Case . . . . . . . . . . . . . . . . 40
3.3.3 Performance of DSSP on Some Special Cases . . . . . . . . 41
3.4 Greedy Randomized Adaptive Search Procedure (GRASP) . . . . 50
3.4.1 Construction Phase . . . . . . . . . . . . . . . . . . . . . . 50
3.4.2 Modified Construction Phase . . . . . . . . . . . . . . . . . 52
3.5 Lower Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4 SUBROUTINES AND COMPUTATIONAL EXPERIMENTS . . . . . . 59
4.1 Design and Implementation of the Subroutines . . . . . . . . . . . 59
4.2 Usage of the Subroutines . . . . . . . . . . . . . . . . . . . . . . . 61
4.3 Experimental Data . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.4 Randomly Generated Problems . . . . . . . . . . . . . . . . . . . 65
4.4.1 Problems with Fixed Charge Costs . . . . . . . . . . . . . . 66
4.4.2 Problems with Piecewise-Linear Concave Costs . . . . . . . 76
4.5 Library Test Problems . . . . . . . . . . . . . . . . . . . . . . . . 83
5 CONCLUDING REMARKS AND FURTHER RESEARCH . . . . . . . 86
5.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.2 Proposed Research . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.2.1 Extension to Other Cost Structures . . . . . . . . . . . . . 88
5.2.2 Generating Problems with Known Optimal Solutions . . . . 88
REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
BIOGRAPHICAL SKETCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
LIST OF TABLES
Table page
4–1 The distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4–2 Problem sizes for fixed charge networks. . . . . . . . . . . . . . . . . . 67
4–3 Characteristics of test problems. . . . . . . . . . . . . . . . . . . . . . 68
4–4 Number of times the error was zero for data set A. . . . . . . . . . . . 68
4–5 Average errors for problems with data set E. . . . . . . . . . . . . . . 72
4–6 Problem sizes for fixed charge networks with capacity constraints. . . . 73
4–7 Average errors for problems with production capacities. . . . . . . . . 74
4–8 CPU times of DSSP for problems with production capacities. . . . . . 74
4–9 CPU times of CPLEX for problems with production capacities. . . . . 75
4–10 Percentage of production arcs used in the optimal solution. . . . . . . 76
4–11 Problem sizes for piecewise-linear concave networks. . . . . . . . . . . 78
4–12 Summary of results for problems using data set A. . . . . . . . . . . . 79
4–13 Summary of results for problems using data set B. . . . . . . . . . . . 80
4–14 Summary of results for problems using data set C. . . . . . . . . . . . 81
4–15 Summary of results for problems using data set D. . . . . . . . . . . . 82
4–16 Summary of results for problems using data set E. . . . . . . . . . . . 83
4–17 Problem sizes for the extended formulation after NSP. . . . . . . . . . 84
4–18 Summary of results for problems 40 and 46 after NSP. . . . . . . . . . 84
4–19 Uncapacitated warehouse location problems from the OR library. . . . 85
LIST OF FIGURES
Figure page
2–1 The single-item ELS model. . . . . . . . . . . . . . . . . . . . . . . . 9
2–2 A supply chain network with 2 facilities, 3 retailers, and 2 periods. . . 11
2–3 Piecewise-linear concave production costs. . . . . . . . . . . . . . . . 17
2–4 Arc separation procedure. . . . . . . . . . . . . . . . . . . . . . . . . 21
2–5 Node separation procedure. . . . . . . . . . . . . . . . . . . . . . . . . 23
3–1 An example for ε-neighborhood. . . . . . . . . . . . . . . . . . . . . . 31
3–2 An example of moving to a neighboring solution. . . . . . . . . . . . . 33
3–3 The local search procedure. . . . . . . . . . . . . . . . . . . . . . . . . 35
3–4 Cycle detection and elimination. . . . . . . . . . . . . . . . . . . . . . 35
3–5 An example of a type I cycle. . . . . . . . . . . . . . . . . . . . . . . 36
3–6 An example of a type II cycle. . . . . . . . . . . . . . . . . . . . . . . 37
3–7 The DSSP algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3–8 Linearization of the fixed charge cost function. . . . . . . . . . . . . . 40
3–9 Linearization of the piecewise-linear concave cost function. . . . . . . 41
3–10 A bipartite network with many facilities and one retailer. . . . . . . . 42
3–11 A bipartite network with two facilities and two retailers. . . . . . . . 44
3–12 A bipartite network with multiple facilities and multiple retailers. . . 47
3–13 The construction procedure. . . . . . . . . . . . . . . . . . . . . . . . 52
3–14 Extended network representation. . . . . . . . . . . . . . . . . . . . . 55
4–1 Input file (gr-param.dat) for gr-pid.c. . . . . . . . . . . . . . . . . . . 62
4–2 Sample data file (sample.dat) for gr-pid.c and ds-pid.c. . . . . . . . . 63
4–3 Sample output (sample-gr.out) of gr-pid.c. . . . . . . . . . . . . . . . 64
4–4 Average errors for problems using data set D. . . . . . . . . . . . . . 70
4–5 Number of GRASP iterations vs. solution quality. . . . . . . . . . . 71
5–1 Piecewise-concave cost functions. . . . . . . . . . . . . . . . . . . . . 89
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
NETWORK ALGORITHMS FOR SUPPLY CHAIN OPTIMIZATION PROBLEMS
By
Burak Eksioglu
December 2002
Chair: Panagote M. Pardalos
Major Department: Industrial and Systems Engineering
The term supply chain management (SCM) has been around for more than
twenty years. The supply chains for suppliers, manufacturers, distributors, and
retailers look very different because of the different business functions that they
perform and the types of companies with which they deal. Thus, the definition
of a supply chain varies from one enterprise to another. We define a supply chain (SC) as
an integrated process where these business entities work together to plan, coordinate
and control materials, parts, and finished goods from suppliers to customers.
For many years, researchers and practitioners have concentrated on the individual
processes and entities within the SC. Recently, however, many companies have
realized that important cost savings can be achieved by integrating inventory control
and transportation policies throughout their SC. As companies began realizing the
benefits of optimizing their SC as a single entity, researchers began utilizing operations
research techniques to better model SCs.
Typical models for SC design/management problems assume that the involved
costs can be represented by somewhat restrictive cost functions such as linear and/or
convex functions. However, many of the applications encountered in practice involve
a fixed charge whenever the activity is performed, plus a variable unit cost, which
makes the problem more complicated.
The objective of this research is to model and solve SC optimization problems
with fixed charge and piecewise-linear concave cost functions. In these problems a
single item is produced at a set of facilities and distributed to a set of retailers such
that the demand is met and the total production, transportation, and inventory costs
are minimized over a finite planning horizon.
CHAPTER 1
INTRODUCTION
1.1 Supply Chain Management
Supply chain management is a field of growing interest for both companies and
researchers. As Tayur, Ganeshan, and Magazine [71] nicely put it in their recent book,
every field has a golden age: this is the time of supply chain management. The term
supply chain management (SCM) has been around for more than twenty years and its
definition varies from one enterprise to another. We define a supply chain (SC) as an
integrated process where different business entities such as suppliers, manufacturers,
distributors, and retailers work together to plan, coordinate, and control the flow
of materials, parts, and finished goods from suppliers to customers. This chain is
concerned with two distinct flows: a forward flow of materials and a backward flow
of information. Geunes, Pardalos, and Romeijn [27] have edited a book that provides
a recent review on SCM models and applications.
For many years, researchers and practitioners have concentrated on the individual
processes and entities within the SC. Recently, however, there has been an increasing
effort in the optimization of the entire SC. As companies began realizing the benefits
of optimizing the SC as a single entity, researchers began utilizing operations research
(OR) techniques to better model supply chains. Typically, a SC model tries to
determine
• the transportation modes to be used,
• the suppliers to be selected,
• the amount of inventory to be held at various locations in the chain,
• the number of warehouses to be used, and
• the location and capacities of these warehouses.
Following Hax and Candea’s [37] treatment of production and inventory systems, the
above SC decisions can be classified in the following way:
• Strategic level. These are long-term decisions that have long-lasting effectson the firm such as the number, location and capacities of warehouses andmanufacturing facilities, or the flow of material through the SC network. Thetime horizon for these strategic decisions is often around three to five years.
• Tactical level. These are decisions that are typically updated once every quarteror once every year. Examples include purchasing and production decisions,inventory policies and transportation strategies including the frequency withwhich customers are visited.
• Operational level. These are day-to-day decisions such as scheduling, routingand loading trucks.
Beamon [6] gave a summary of models in the area of multi-stage supply
chain design and analysis. Erenguc, Simpson, and Vakharia [17] surveyed models
integrating production and distribution planning in SC. Thomas and Griffin [72]
surveyed coordination models on strategic and operational planning.
As a result of the globalization of the economy, however, the models have become
more complex. Global SC models now often try to include factors such as exchange
rates, international interest rates, trade barriers, taxes and duties, market prices, and
duty drawbacks. All of these factors are generally difficult to include in mathematical
models because of uncertainty and nonlinearity. Vidal and Goetschalckx [78] provided
a review of strategic production-distribution models with emphasis on global SC
models. Eksioglu [14] discussed some of the recent models that address the design
and management of global SC networks. Cohen and Huchzermeier [10] also gave an
extensive review on global SC models. They focus on the integration of SC network
optimization with real options pricing methods. Most of these SC problems can be
modeled as mathematical programs that are typically global optimization problems.
Therefore, we next give a brief overview of the area of global optimization.
1.2 Global Optimization
The field of global optimization was initiated during the mid-1960s, mainly
through the pioneering work of Hoang Tuy. Since then, and in particular during the
last fifteen years, there has been a lot of interest in theoretical and computational
investigations of challenging global optimization problems. This has resulted in the
development and application of global optimization methods to important problems
in science, applied science, and engineering. Exciting and intriguing theoretical
findings and algorithmic developments have made global optimization one of the
most attractive areas of research.
Global optimization deals with the search for a global optimum in problems
where many local optima exist. The general global optimization problem is defined
by Horst, Pardalos, and Thoai [43] as
Definition 1.1 Given a nonempty, closed set D ⊂ Rn and a continuous function
f : Ω → R, where Ω ⊂ Rn is a suitable set containing D, find at least one point
x∗ ∈ D satisfying f(x∗) ≤ f(x) for all x ∈ D.
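To make Definition 1.1 concrete, the toy sketch below (an illustration of my own, not material from the dissertation) minimizes a one-dimensional function with two local minima. A naive local descent started from a single point can get trapped at the wrong minimum, while a simple multistart strategy recovers the global one. The function f and all numeric choices are assumptions made purely for illustration.

```python
def f(x):
    # Illustrative objective with two local minima; the global one lies near x = -1.04
    return (x**2 - 1)**2 + 0.3 * x

def local_descent(x, step=1e-3, iters=20000):
    # Naive local search: step toward a better neighbor until none exists
    for _ in range(iters):
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            break  # stationary on this grid: a local minimum
    return x

# A descent started at 0.5 stalls at the local minimum near x = 0.96;
# multistart from several initial points finds the global minimum instead.
starts = [-2.0, 0.0, 2.0]
best = min((local_descent(s) for s in starts), key=f)
```

Here local_descent plays the role of the standard local optimization methods discussed below, and the multistart wrapper is among the simplest global strategies built from them.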
A major difficulty of global optimization problems is the existence of many
local optima. As Horst and Tuy [44] stated, standard local optimization methods
are trapped at a local optimum or more generally at a stationary point for
which there is not even any guarantee of local optimality. Thus, the use of
standard local optimization techniques is normally insufficient for solving global
optimization problems. Therefore, more sophisticated methods need to be designed
for global optimization problems, resulting in more complex and computationally
more expensive methods. Horst and Pardalos [42] gave a detailed and comprehensive
survey of global optimization methods. Floudas [23] presented a review of recent
theoretical and algorithmic advances in global optimization along with a variety of
applications.
In contrast to the objective of global optimization, the area of local
optimization aims at determining a feasible solution that is a local minimum of
the objective function f in D (i.e., it is a minimum in its neighborhood, but not
necessarily the lowest value of the objective function f). Therefore, in general, for
nonlinear optimization problems where multiple local minima exist, a local minimum
(as with any other feasible solution) represents only an upper bound on the global
minimum of the objective function f on D.
In certain classes of nonlinear problems, a local solution is always a global one.
For example, in a minimization problem with a convex (or quasi-convex) objective
function f and a convex feasible set D, a local minimizer is a global solution (see for
instance, Avriel [4], Horst [41], Zang and Avriel [83]).
It has been shown that several important optimization problems can be
formulated as concave minimization problems. A well-known result by Raghavachari
[64] states that the zero-one integer programming problem is equivalent to a concave
(quadratic) minimization problem over a linear set of constraints. Giannessi and
Niccolucci [29] have shown that a nonlinear, nonconvex integer program can be
equivalently reduced to a real concave program under the assumption that the
objective function is bounded and satisfies the Lipschitz condition. Similarly, the
quadratic assignment problem can be formulated as a global optimization problem
(see for instance, Bazaraa and Sherali [5]). In general, bilinear programming problems
are equivalent to a kind of concave minimization problem (Konno [52]). The linear
complementarity problem can be reduced to a concave problem (Mangasarian [57])
and linear min-max problems with connected variables and linear multi-step bimatrix
games are reducible to a global optimization problem (Falk [18]). The above examples
indicate once again the broad range of problems that can be formulated as global
optimization problems, and therefore explain the increasing interest in this area.
Global optimization problems remain NP-hard even for very special cases such
as the minimization of a quadratic concave function over the unit hypercube (see for
example Garey et al. [26], Hammer [34]), in contrast to the corresponding convex
quadratic problem that can be solved in polynomial time (Chung and Murty [9]).
Most of the optimization problems that arise in SC are global optimization
problems. These problems are of great practical interest, but they are also inherently
difficult and cannot be solved by conventional nonlinear optimization methods.
Despite the fact that the majority of the challenging and important problems
that arise in science, applied science and engineering exhibit nonconvexities and hence
multiple minima, there has been relatively little effort devoted to the area of global
optimization as compared to the developments in the area of local optimization. This
is partly attributed to the use of local optimization techniques as components of global
optimization approaches, and also due to the difficulties that arise in the development
of global optimization methods. However, the recent advances in this area and the
explosive growth of computing capabilities show great promise towards addressing
these issues.
Global optimization methods are divided into two classes, deterministic and
stochastic methods. The most important deterministic approaches to nonconvex
global optimization are: enumerative techniques, cutting plane methods, branch
and bound, solution of approximate subproblems, bilinear programming methods
or different combinations of these techniques. Specific solution approaches have been
proposed for problems where the objective function has a special structure (e.g.,
quadratic, separable, factorable, etc.) or the feasible region has a simplified geometry
(e.g., unit hypercube, network constraints, etc.).
1.3 Goal and Summary
The focus of this dissertation is to study optimization problems in supply
chain operations with cost structures that arise in several real-life applications. A
typical feature of the logistics problems encountered in practice is that their cost
structure is not linear, due to the presence of fixed charges, discount structures,
and different modes of transportation. These cost structures have not been given
sufficient attention in the literature, perhaps due to the difficulty of the underlying
mathematical optimization problems. The goal of this dissertation is to develop new
optimization models and algorithms for solving large-scale logistics problems with
nonlinear cost structure.
The supply chain optimization problems we consider are formulated as large-
scale mixed integer programming problems. The network structure inherent in such
problems is utilized to develop efficient algorithms to solve these problems. Due to
the scale and difficulty of these problems, the focus is to develop efficient heuristic
methods. The objective is to develop approaches that produce optimal or near-
optimal solutions to logistics problems with fixed charge and piecewise-linear concave
cost structures. In this dissertation we also address the generation of experimental
data for these optimization models, since the performance of heuristic procedures
is typically measured by the computation time required and the quality of the
solution obtained. Conclusions about these two performance measures are drawn
by testing the heuristic approaches on a collection of problems. The validity of the
derived conclusions strongly depends on the characteristics of the problems chosen.
Therefore, we generated several sets of problems with different characteristics.
The outline of the dissertation is as follows. In Chapter 2 we first introduce
the single-item economic lot sizing problem and then discuss extensions to the basic
problem to arrive at our problem, which we call the production-inventory-distribution
(PID) problem. In Chapter 3 we present local search based heuristic approaches.
We give a brief history of local search and present two approaches for constructing
solutions to initiate our local search procedure. We discuss the complexity of the
problem and give different formulations for problems with fixed charge and piecewise-
linear concave cost structures. A dynamic slope scaling procedure (DSSP) is presented
in section 3.3 and a greedy randomized adaptive search procedure (GRASP) is
developed in section 3.4. DSSP was first introduced by Kim and Pardalos [48]. We
refined the heuristic to improve the quality of the solutions obtained. The final section
in Chapter 3 discusses lower bound procedures that are used to test the quality of the
solutions obtained from local search. The results of extensive computational results
are presented in Chapter 4. Details on the design, implementation, and usage of the
subroutines developed are also included in Chapter 4. Finally, in Chapter 5 we end
the dissertation with a summary of the findings and future research directions.
CHAPTER 2
PRODUCTION PLANNING AND LOGISTICS PROBLEMS
2.1 Single-Item Economic Lot Sizing Problem
Many problems in SC optimization such as inventory control, production
planning, capacity planning, etc. are related to the simple economic lot sizing (ELS)
problem. Harris [35] is usually cited as the first to study ELS models. He considered
a model that assumes deterministic demands which occur continuously over time.
In 1958, a different approach was proposed independently by Manne [58] and by
Wagner and Whitin [80]. They divided time into discrete periods and assumed that
the demand over a finite horizon is known in advance. In the past four decades
ELS has received considerable attention and many papers have directly or indirectly
discussed this model. Aggarwal and Park [2] gave a brief review of the ELS model
and its extensions.
To describe the basic single-item ELS model we will use the following notation.
Demand (dt) for the product occurs during each of T consecutive time periods. The
demand during period t can be satisfied either through production in that period
or from inventory that is carried forward in time. The model includes production
and inventory costs, and the objective is to schedule production to satisfy demand at
minimum cost. The cost of producing pt units during period t is given by rt(pt) and
the cost of storing It units of inventory from period t to period t+1 is ht(It). Without
loss of generality, we assume both the initial inventory and the final inventory are
zero. The mathematical representation of the ELS model can now be given as
minimize  Σ_{t=1}^{T} ( rt(pt) + ht(It) )
subject to (P0)
pt + It−1 = It + dt t = 1, . . . , T, (2.1)
I0 = IT = 0 (2.2)
pt, It ≥ 0 t = 1, . . . , T. (2.3)
In the above formulation, the first set of constraints (2.1) requires that the sum
of the inventory at the start of a period and the production during that period equals
the sum of the demand during that period and the inventory at the end of the period.
Constraint (2.2) simply assures that the initial and final inventories are zero, while
the last set of constraints (2.3) limits production and inventory to nonnegative values.
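For the classic special case where production costs are of the fixed charge type, rt(pt) = Kt + ct·pt for pt > 0, and holding costs are linear, (P0) can be solved by the dynamic program of Wagner and Whitin [80]. The sketch below is a minimal O(T²) illustration of that recursion; the code and the data used to exercise it are my own illustrative assumptions, not the dissertation's subroutines.

```python
def wagner_whitin(d, K, c, h):
    """Solve uncapacitated ELS with fixed-charge production and linear holding.

    d[t-1]: demand in period t; K, c: setup and unit production costs;
    h[t-1]: unit holding cost from period t to t+1. Returns the minimum cost.
    """
    T = len(d)
    INF = float('inf')
    F = [0.0] + [INF] * T  # F[t] = min cost to satisfy demand of periods 1..t
    for t in range(1, T + 1):
        for s in range(1, t + 1):  # last production run is in period s, covering s..t
            qty = sum(d[s - 1:t])
            produce = K[s - 1] + c[s - 1] * qty
            # units for period u (s <= u <= t) are held in periods s..u-1
            hold = sum(sum(h[v - 1] for v in range(s, u)) * d[u - 1]
                       for u in range(s, t + 1))
            F[t] = min(F[t], F[s - 1] + produce + hold)
    return F[T]
```

The recursion relies on the zero-inventory-ordering property of optimal ELS solutions: each period's demand is covered entirely by a single production run.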
The basic ELS problem can also be formulated as a network flow problem (NFP).
This formulation was first introduced by Zangwill [84]. The network in Figure 2–1
consists of a single source node and T sink nodes. Each sink node requires an inflow of
dt (t = 1, 2, ..., T) and node D is capable of generating an outflow of Σ_{t=1}^{T} dt. For each
arc from node D to node t there is an associated cost function rt(·) for t = 1, 2, ..., T,
and for each arc from node t to node t + 1 there is an associated cost function ht(·)
for t = 1, 2, ..., T − 1.
Figure 2–1: The single-item ELS model.
If the cost functions rt(·) and ht(·) are allowed to be arbitrary functions, then the
basic ELS problem is quite difficult to solve; as Florian, Lenstra and Rinnooy Kan [22]
have shown it is NP-hard. Due to this difficulty and to represent cost functions found
in practice, certain assumptions are often made about the cost functions. Aggarwal
and Park [2] gave a review of some of these assumptions and provided improved
algorithms for the ELS problem.
We consider extensions to the basic ELS problem and include distribution
decisions in the model. We also consider multiple production plants (facilities)
and multiple demand points (retailers). The goal is to meet the known demand
at the retailers through production at the facilities, such that the system wide total
production, inventory, and distribution cost is minimized. As mentioned earlier in
Chapter 1, we refer to this problem as the PID problem.
2.2 Production-Inventory-Distribution (PID) Problem
The PID problem can be formulated as a network flow problem on a directed,
single source graph consisting of several layers. Figure 2–2 gives an example with two
facilities, three retailers and two time periods. Each layer of the graph represents a
time period. In each layer, a bipartite graph represents the transportation network
between the facilities and the retailers. Facilities in successive time periods are
connected through inventory arcs. There is a dummy source node with supply equal
to the total demand. Production arcs connect the dummy source to each facility in
every time period.
This is an easy problem if all costs are linear. However, many production and
distribution activities exhibit economies of scale in which the unit cost of the activity
decreases as the volume of the activity increases. For example, production costs often
exhibit economies of scale due to fixed production setup costs and learning effects that
enable more efficient production as the volume increases. Transportation costs exhibit
economies of scale due to the fixed cost of initiating a shipment and the lower per unit
shipping cost as the volume delivered per shipment increases. Therefore, we assume
Figure 2–2: A supply chain network with 2 facilities, 3 retailers, and 2 periods.
the production costs at the facilities are either of the fixed charge type or piecewise-
linear concave type, the cost of transporting goods from facilities to retailers are of
the fixed charge type, and the inventory costs are linear. We also make the following
simplifying assumptions to the model:
• Backorders are not allowed.
• Transportation is not allowed between facilities.
• Products are stored at their production location until being transported to a
retailer.
• There are no capacity constraints on the production, inventory, or distribution
arcs.
The first three assumptions can easily be relaxed by adding extra arcs in the
network and the last assumption is justified since the following result by Wagner [79]
shows that a network flow problem with capacity constraints can be transformed into
a network flow problem without capacity constraints.
Proposition 1 Every capacitated minimum cost network flow problem (MCCNFP)
on a network with m nodes and n arcs can be transformed into an equivalent
uncapacitated MCCNFP on an expanded network with (n + m) nodes and 2n
arcs.
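The transformation behind Proposition 1 can be sketched as follows (my own illustrative implementation of the standard node-splitting construction, not code from the dissertation): each capacitated arc (i, j) is replaced by a new node that demands exactly the arc's capacity, so the capacity becomes a flow-balance requirement rather than an explicit bound.

```python
def uncapacitate(nodes, arcs):
    """nodes: dict node -> supply b(node); arcs: list of (i, j, cost, cap).

    Returns an equivalent uncapacitated network as (supplies, list of (i, j, cost)).
    One node and one extra arc are added per original arc: (n + m) nodes, 2n arcs.
    """
    b = dict(nodes)
    new_arcs = []
    for idx, (i, j, cost, cap) in enumerate(arcs):
        k = ('split', idx)          # new node for this arc
        b[k] = -cap                 # node k demands cap units
        b[j] = b.get(j, 0) + cap    # node j gains cap units of supply
        new_arcs.append((i, k, cost))  # carries the original arc flow x_ij
        new_arcs.append((j, k, 0))     # absorbs the slack cap - x_ij at zero cost
    return b, new_arcs
```

Any flow x_ij ≤ cap on the original arc corresponds to sending x_ij on (i, k) and cap − x_ij on (j, k); because node k demands exactly cap units, the capacity is enforced automatically, and node j's net inflow is unchanged.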
Freling et al. [24] and Romeijn and Romero Morales [66–69] considered similar
problems. They assumed that production and inventory costs are linear and that
there is a fixed cost of assigning a facility to a retailer. In other words, they accounted
for the presence of so-called single-sourcing constraints where each retailer should be
supplied from a single facility only. Eksioglu, Pardalos and Romeijn [16] and Wu and
Golbasi [81] considered the multi commodity case where there are multiple products
flowing on the network. Eksioglu et al. [16] assumed production and inventory costs
are linear and transportation costs are of fixed charge type, whereas Wu and Golbasi
[81] assumed production costs are fixed charge and inventory and transportation costs
are linear.
2.3 Complexity of the PID Problem
The PID problem with concave costs falls under the category of minimum concave
cost network flow problems (MCCNFP). Guisewite and Pardalos [30] gave a detailed
survey of MCCNFP research through the early 1990s. It is well-known that even certain
special cases of MCCNFP, such as the fixed charge network flow problem (FCNFP)
or the single source uncapacitated minimum concave cost network flow problem (SSU
MCCNFP), are NP-hard (Guisewite and Pardalos [31]). MCCNFP is NP-hard even
when the arc costs are constant, the underlying network is bipartite, or the ratio of
the fixed charge to the linear charge for all arcs is constant. This has motivated the
consideration of additional structures which might make the problem more tractable.
In fact, polynomial time algorithms have been developed for a number of specially
structured variants of MCCNFP (Du and Pardalos [13] and Pardalos and Vavasis
[62]).
The number of source nodes and the number of arcs with nonlinear costs affect
the difficulty of MCCNFP. It is therefore convenient to refer to MCCNFP with a
fixed number, h, of sources and fixed number, k, of arcs with nonlinear costs as
MCCNFP(h,k). Guisewite and Pardalos [32] were the first to prove the polynomial
solvability of MCCNFP(1,1). Later, strongly polynomial time algorithms were
developed for MCCNFP(1,1) by Klinz and Tuy [51] and Tuy, Dan and Ghannadan
[75]. In a series of papers by Tuy [73, 74] and Tuy et al. [76, 77] polynomial time
algorithms were presented for MCCNFP(h,k) where h and k are constants. It was also
shown that MCCNFP(h,k) can be solved in strongly polynomial time if min{h, k} = 1.
2.4 Problem Formulation
The problem that we consider is a multi-facility production, inventory, and
distribution problem. A single item is produced in multiple facilities over multiple
periods to satisfy the demand at the retailers. The objective is to minimize the system
wide total production, inventory, and transportation cost.
Let J , K, and T denote the number of facilities, the number of retailers, and the
planning horizon, respectively. Then the resulting model is
minimize  Σ_{j=1}^{J} Σ_{t=1}^{T} rjt(pjt) + Σ_{j=1}^{J} Σ_{k=1}^{K} Σ_{t=1}^{T} fjkt(xjkt) + Σ_{j=1}^{J} Σ_{t=1}^{T} hjt Ijt
subject to (P1)
pjt + Ij,t−1 = Ijt +K∑
k=1
xjkt j = 1, . . . , J ; t = 1, . . . , T, (2.4)
J∑
j=1
xjkt = dkt k = 1, . . . , K; t = 1, . . . , T, (2.5)
pjt ≤ Wjt j = 1, . . . , J ; t = 1, . . . , T, (2.6)
Ijt ≤ Vjt j = 1, . . . , J ; t = 1, . . . , T, (2.7)
xjkt ≤ Ujkt j = 1, . . . , J ; k = 1, . . . , K; t = 1, . . . , T, (2.8)
pjt, Ijt, xjkt ≥ 0 j = 1, . . . , J ; k = 1, . . . , K; t = 1, . . . , T, (2.9)
where
pjt = amount produced at facility j in period t,
xjkt = amount shipped from facility j to retailer k in period t,
Ijt = inventory carried at facility j during period t,
Wjt = production capacity at facility j in period t,
Ujkt = capacity on the shipment from facility j to retailer k in period t,
Vjt = inventory capacity at facility j in period t,
rjt(pjt) = cost of producing pjt units at facility j in period t,
fjkt(xjkt) = cost of shipping xjkt units from facility j to retailer k in period t,
hjt = unit holding cost per period at facility j for period t, and
dkt = demand at retailer k in period t.
Constraints (2.4) and (2.5) are the flow conservation constraints and (2.6), (2.7),
and (2.8) are the capacity constraints. If Ujkt, Vjt, and Wjt are large enough then
the problem is effectively uncapacitated. As we pointed out earlier, MCCNFP has
the combinatorial property that if it has an optimal solution, then there exists an
optimal solution that is a vertex of the corresponding feasible domain. A feasible flow
is an extreme flow (vertex) if it is not the convex combination of any other feasible
flows. Extreme flows have been characterized for possible exploitation in solving the
MCCNFP. A flow is extremal for a network flow problem if it contains no positive
cycles. A positive cycle in an uncapacitated network is a cycle that has positive flow
on all of its arcs. On the other hand, for a problem with capacity constraints on arc
flows, a positive cycle is a cycle where all of the arcs in the cycle have positive flows
which are strictly less than the capacity. This implies that for the PID problem with
unlimited capacity an extreme flow is a tree. In other words, an optimal solution
exists in which the demand of each retailer will be satisfied through only one of the
facilities.
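This tree property is easy to verify on a candidate solution. The following sketch uses hypothetical flow data; the function name and dictionary layout are illustrative, not part of the model:

```python
# A minimal check of the single-sourcing property of an extreme flow:
# in the uncapacitated PID problem, each retailer's demand in each
# period is served by exactly one facility.

def is_single_sourced(x, retailers, periods):
    """x maps (facility j, retailer k, period t) to the shipped amount."""
    for k in retailers:
        for t in periods:
            suppliers = [j for (j, kk, tt), v in x.items()
                         if kk == k and tt == t and v > 0]
            if len(suppliers) > 1:
                return False
    return True

# Extreme (tree) flow: each retailer-period pair has a unique supplier.
x_tree = {(1, 1, 1): 10.0, (2, 2, 1): 5.0}
# Non-extreme flow: retailer 1's demand in period 1 is split.
x_split = {(1, 1, 1): 6.0, (2, 1, 1): 4.0}

print(is_single_sourced(x_tree, [1, 2], [1]))   # True
print(is_single_sourced(x_split, [1, 2], [1]))  # False
```

In the uncapacitated case, a solution failing this check is not an extreme flow.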
2.4.1 Fixed Charge Network Flow Problems
In the fixed charge case the production cost function rjt(pjt) is of the following
form:
r_{jt}(p_{jt}) =
  0                          if p_{jt} = 0,
  s_{jt} + c_{jt} p_{jt}     if 0 < p_{jt} ≤ W_{jt},
where sjt and cjt are, respectively, the setup and the variable costs of production.
If the distribution cost function, fjkt(xjkt), is also of a fixed charge form then it
is given by
f_{jkt}(x_{jkt}) =
  0                              if x_{jkt} = 0,
  s_{jkt} + c_{jkt} x_{jkt}      if 0 < x_{jkt} ≤ U_{jkt},
where sjkt and cjkt represent, respectively, the setup and the variable costs of
distribution.
Due to the discontinuity of the cost functions at the origin, the problem can
be transformed into a 0-1 mixed integer linear programming (MILP) problem by
introducing a binary variable for each arc with a fixed charge. Assuming s_{jt} > 0, the
cost function r_{jt}(p_{jt}) can be replaced with

r_{jt}(p_{jt}) = c_{jt} p_{jt} + s_{jt} y_{jt}.
Similarly, the distribution cost functions can be replaced with
f_{jkt}(x_{jkt}) = c_{jkt} x_{jkt} + s_{jkt} y_{jkt}

where

y_{jt} =
  0  if p_{jt} = 0,
  1  if p_{jt} > 0,

and

y_{jkt} =
  0  if x_{jkt} = 0,
  1  if x_{jkt} > 0.
The MILP formulation of the problem is as follows:
minimize ∑_{j=1}^{J} ∑_{t=1}^{T} (c_{jt} p_{jt} + s_{jt} y_{jt} + h_{jt} I_{jt}) + ∑_{j=1}^{J} ∑_{k=1}^{K} ∑_{t=1}^{T} (c_{jkt} x_{jkt} + s_{jkt} y_{jkt})
subject to (P2)
(2.4), (2.5), (2.7), (2.9), and
p_{jt} ≤ W_{jt} y_{jt}   j = 1, . . . , J; t = 1, . . . , T, (2.10)
x_{jkt} ≤ U_{jkt} y_{jkt}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.11)
y_{jt}, y_{jkt} ∈ {0, 1}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T. (2.12)
The MILP formulation is used to find an optimal solution. This formulation is
useful in practice since its relaxations are linear cost network flow problems. We
use the general-purpose solver CPLEX, which employs branch and bound algorithms,
to solve the MILP formulation; this is done to gauge the difficulty of finding an
optimal solution.
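Before turning to a solver, the role of the binary variables can be checked on a single arc. This sketch (with hypothetical cost data) verifies that enumerating y under the linking constraint p ≤ Wy reproduces the fixed charge cost:

```python
# For p > 0 the constraint p <= W*y forces y = 1, so c*p + s*y = s + c*p;
# for p = 0 the optimal choice is y = 0 and the cost vanishes (this is
# where the assumption s > 0 matters).

def fixed_charge_cost(p, s, c):
    return 0.0 if p == 0 else s + c * p

def milp_cost(p, s, c, W):
    # Enumerate the binary variable; keep only feasible choices p <= W*y.
    return min(c * p + s * y for y in (0, 1) if p <= W * y)

s, c, W = 100.0, 2.0, 50.0   # hypothetical setup cost, unit cost, capacity
for p in (0.0, 10.0, 50.0):
    assert fixed_charge_cost(p, s, c) == milp_cost(p, s, c, W)
```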
2.4.2 Piecewise-linear Concave Network Flow Problems
If the cost functions in problem (P1) are piecewise-linear concave functions
(Figure 2–3), then they have the following form:
r_{jt}(p_{jt}) =
  0                                      if p_{jt} = 0,
  s_{jt,1} + c_{jt,1} p_{jt}             if 0 < p_{jt} ≤ β_{jt,1},
  s_{jt,2} + c_{jt,2} p_{jt}             if β_{jt,1} < p_{jt} ≤ β_{jt,2},
  ...
  s_{jt,l_{jt}} + c_{jt,l_{jt}} p_{jt}   if β_{jt,l_{jt}−1} < p_{jt} ≤ β_{jt,l_{jt}},

where β_{jt,i} for i = 1, 2, . . . , l_{jt} − 1 are the break points in the interval (0, W_{jt}),
β_{jt,l_{jt}} = W_{jt}, and l_{jt} is the number of linear segments of the production cost
function r_{jt}(·). Due to the concavity of the cost functions the following properties hold:
• c_{jt,1} > c_{jt,2} > . . . > c_{jt,l_{jt}} (2.13)
• s_{jt,1} < s_{jt,2} < . . . < s_{jt,l_{jt}} (2.14)
We also assume c_{jt,l_{jt}} > 0 and s_{jt,1} > 0, since these are production costs.
Figure 2–3: Piecewise-linear concave production costs.
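For concreteness, the following sketch evaluates a production cost of this form; the breakpoints and coefficients are hypothetical but chosen to satisfy properties (2.13) and (2.14):

```python
# Evaluate a piecewise-linear concave cost: segment i applies on
# (beta_{i-1}, beta_i], with decreasing slopes and increasing intercepts.
import bisect

def piecewise_cost(p, betas, s, c):
    """betas: break points beta_1 < ... < beta_l (= W);
    s, c: per-segment intercepts and slopes, both of length l."""
    if p == 0:
        return 0.0
    i = bisect.bisect_left(betas, p)  # index of the segment containing p
    return s[i] + c[i] * p

betas = [10.0, 25.0, 40.0]   # beta_1, beta_2, beta_3 = W (hypothetical)
s = [5.0, 15.0, 40.0]        # increasing intercepts, property (2.14)
c = [4.0, 3.0, 2.0]          # decreasing slopes, property (2.13)
print(piecewise_cost(0.0, betas, s, c))   # 0.0
print(piecewise_cost(12.0, betas, s, c))  # 15 + 3*12 = 51.0
```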
Similarly, if the transportation cost functions are piecewise-linear and concave,
they have the following form:
f_{jkt}(x_{jkt}) =
  0                                          if x_{jkt} = 0,
  s_{jkt,1} + c_{jkt,1} x_{jkt}              if 0 < x_{jkt} ≤ β_{jkt,1},
  s_{jkt,2} + c_{jkt,2} x_{jkt}              if β_{jkt,1} < x_{jkt} ≤ β_{jkt,2},
  ...
  s_{jkt,l_{jkt}} + c_{jkt,l_{jkt}} x_{jkt}  if β_{jkt,l_{jkt}−1} < x_{jkt} ≤ β_{jkt,l_{jkt}}.
The PID problem with piecewise-linear concave costs can be formulated as
a MILP problem in several different ways. Below we give four different MILP
formulations for the problem: λ-formulation, slope-formulation, ASP formulation,
and NSP formulation.
λ-formulation. Any p_{jt} value on the x-axis of Figure 2–3 can be written as a
convex combination of two adjacent break points, i.e., p_{jt} = ∑_{i=0}^{l_{jt}} λ_{jt,i} β_{jt,i},
where β_{jt,0} = 0, ∑_{i=0}^{l_{jt}} λ_{jt,i} = 1, and 0 ≤ λ_{jt,i} ≤ 1 for i = 0, 1, . . . , l_{jt}. Note that at most
two of the λ_{jt,i} values can be positive, and only if they are adjacent. This formulation
can be used to model any piecewise-linear function (not necessarily concave) by
introducing the following constraints:
λ_{jt,0} ≤ z_{jt,1},
λ_{jt,1} ≤ z_{jt,1} + z_{jt,2},
λ_{jt,2} ≤ z_{jt,2} + z_{jt,3},
...
λ_{jt,l_{jt}−1} ≤ z_{jt,l_{jt}−1} + z_{jt,l_{jt}},
λ_{jt,l_{jt}} ≤ z_{jt,l_{jt}},
∑_{i=1}^{l_{jt}} z_{jt,i} = 1, and
z_{jt,i} ∈ {0, 1}.

Note also that we have O(l_{jt}) (not O(2^{l_{jt}})) possibilities for each z_{jt} vector. In
other words, only one of the components of the vector zjt = (zjt,1, zjt,2, . . . zjt,ljt) will
be 1 and the rest will be zero. The production cost functions can now be written in
terms of the new variables. An additional binary variable, yjt, is added to take care
of the discontinuity at the origin:

r_{jt}(p_{jt}) = s_{jt,1} y_{jt} + ∑_{i=1}^{l_{jt}} (r_{jt}(β_{jt,i}) − s_{jt,1}) λ_{jt,i}.
The new variable, y_{jt}, must equal 1 if there is a positive amount of production at
plant j during period t, and it must be 0 if there is no production. This is handled
by the constraints y_{jt} ≥ 1 − λ_{jt,0} and y_{jt} ∈ {0, 1}. The distribution cost
functions can similarly be modelled by introducing new y, z, and λ variables. The
PID problem with piecewise-linear concave production and distribution costs can now
be written as
minimize ∑_{j=1}^{J} ∑_{t=1}^{T} (s_{jt,1} y_{jt} + h_{jt} I_{jt} + ∑_{i=1}^{l_{jt}} (r_{jt}(β_{jt,i}) − s_{jt,1}) λ_{jt,i})
+ ∑_{j=1}^{J} ∑_{k=1}^{K} ∑_{t=1}^{T} (s_{jkt,1} y_{jkt} + ∑_{i=1}^{l_{jkt}} (f_{jkt}(β_{jkt,i}) − s_{jkt,1}) λ_{jkt,i})
subject to (P3)
(2.4), (2.5), and
p_{jt} = ∑_{i=1}^{l_{jt}} λ_{jt,i} β_{jt,i}   j = 1, . . . , J; t = 1, . . . , T, (2.15)
∑_{i=0}^{l_{jt}} λ_{jt,i} = 1   j = 1, . . . , J; t = 1, . . . , T, (2.16)
λ_{jt,0} ≤ z_{jt,1}   j = 1, . . . , J; t = 1, . . . , T, (2.17)
λ_{jt,i} ≤ z_{jt,i} + z_{jt,i+1}   j = 1, . . . , J; t = 1, . . . , T; i = 1, . . . , l_{jt} − 1, (2.18)
λ_{jt,l_{jt}} ≤ z_{jt,l_{jt}}   j = 1, . . . , J; t = 1, . . . , T, (2.19)
∑_{i=1}^{l_{jt}} z_{jt,i} = 1   j = 1, . . . , J; t = 1, . . . , T, (2.20)
y_{jt} ≥ 1 − λ_{jt,0}   j = 1, . . . , J; t = 1, . . . , T, (2.21)
x_{jkt} = ∑_{i=1}^{l_{jkt}} λ_{jkt,i} β_{jkt,i}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.22)
∑_{i=0}^{l_{jkt}} λ_{jkt,i} = 1   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.23)
λ_{jkt,0} ≤ z_{jkt,1}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.24)
λ_{jkt,i} ≤ z_{jkt,i} + z_{jkt,i+1}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt} − 1, (2.25)
λ_{jkt,l_{jkt}} ≤ z_{jkt,l_{jkt}}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.26)
∑_{i=1}^{l_{jkt}} z_{jkt,i} = 1   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.27)
y_{jkt} ≥ 1 − λ_{jkt,0}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.28)
λ_{jt,i}, λ_{jkt,i} ≥ 0   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 0, . . . , l_{jkt}, (2.29)
y_{jt}, y_{jkt} ∈ {0, 1}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.30)
z_{jt,i}, z_{jkt,i} ∈ {0, 1}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt}. (2.31)
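The defining property of the λ-formulation, that p_{jt} is a convex combination of two adjacent break points, can be sketched as follows (hypothetical break points; the helper name is illustrative):

```python
# Decompose p into lambda weights over the break points, with beta_0 = 0.
# At most two adjacent lambdas are positive, as the z constraints enforce.

def lambda_weights(p, betas):
    """betas include beta_0 = 0; returns weights lambda_i summing to 1."""
    lam = [0.0] * len(betas)
    for i in range(len(betas) - 1):
        lo, hi = betas[i], betas[i + 1]
        if lo <= p <= hi:
            theta = (p - lo) / (hi - lo)
            lam[i], lam[i + 1] = 1.0 - theta, theta
            return lam
    raise ValueError("p outside [0, W]")

betas = [0.0, 10.0, 25.0, 40.0]   # beta_0 .. beta_3 (hypothetical)
lam = lambda_weights(18.0, betas)
assert abs(sum(lam) - 1.0) < 1e-12                               # sums to 1
assert abs(sum(l * b for l, b in zip(lam, betas)) - 18.0) < 1e-12  # recovers p
```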
Slope-formulation. The slope-formulation is similar to the λ-formulation in the
sense that the p_{jt} values on the x-axis of Figure 2–3 are again rewritten in terms of
the break points. However, this time p_{jt} is not a convex combination of the break
points. The production amounts are now defined as p_{jt} = ∑_{i=1}^{l_{jt}} γ_{jt,i} (β_{jt,i} − β_{jt,i−1}),
where β_{jt,0} = 0 and 0 ≤ γ_{jt,i} ≤ 1 for i = 1, 2, . . . , l_{jt}. As in the λ-formulation, the
vector z_{jt} in the following formulation takes O(l_{jt}) possible values and not O(2^{l_{jt}}).
However, in this case one or more of the components of z_{jt} can be 1. If one of the
components is 1 (e.g., z_{jt,m} = 1), then all of the preceding components must equal 1
(z_{jt,i} = 1 for all i = 1, . . . , m − 1). The slope-formulation of the PID problem is given
by
minimize ∑_{j=1}^{J} ∑_{t=1}^{T} (s_{jt,1} y_{jt} + h_{jt} I_{jt} + ∑_{i=1}^{l_{jt}} ∆r_{jt,i} γ_{jt,i}) + ∑_{j=1}^{J} ∑_{k=1}^{K} ∑_{t=1}^{T} (s_{jkt,1} y_{jkt} + ∑_{i=1}^{l_{jkt}} ∆f_{jkt,i} γ_{jkt,i})
subject to (P4)
(2.4), (2.5), and
p_{jt} = ∑_{i=1}^{l_{jt}} ∆β_{jt,i} γ_{jt,i}   j = 1, . . . , J; t = 1, . . . , T, (2.32)
γ_{jt,i} ≥ z_{jt,i}   j = 1, . . . , J; t = 1, . . . , T; i = 1, . . . , l_{jt} − 1, (2.33)
γ_{jt,i+1} ≤ z_{jt,i}   j = 1, . . . , J; t = 1, . . . , T; i = 1, . . . , l_{jt} − 1, (2.34)
y_{jt} ≥ γ_{jt,1}   j = 1, . . . , J; t = 1, . . . , T, (2.35)
x_{jkt} = ∑_{i=1}^{l_{jkt}} ∆β_{jkt,i} γ_{jkt,i}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.36)
γ_{jkt,i} ≥ z_{jkt,i}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt} − 1, (2.37)
γ_{jkt,i+1} ≤ z_{jkt,i}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt} − 1, (2.38)
y_{jkt} ≥ γ_{jkt,1}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.39)
γ_{jt,i}, γ_{jkt,i} ≥ 0   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt}, (2.40)
y_{jt}, y_{jkt} ∈ {0, 1}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (2.41)
z_{jt,i}, z_{jkt,i} ∈ {0, 1}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt}, (2.42)
where
• ∆r_{jt,i} = [r_{jt}(β_{jt,i}) − r_{jt}(β_{jt,i−1}) − s_{jt,1}] for i = 1, 2, . . . , l_{jt},
• ∆f_{jkt,i} = [f_{jkt}(β_{jkt,i}) − f_{jkt}(β_{jkt,i−1}) − s_{jkt,1}] for i = 1, 2, . . . , l_{jkt},
• ∆β_{jt,i} = [β_{jt,i} − β_{jt,i−1}] for i = 1, 2, . . . , l_{jt},
• ∆β_{jkt,i} = [β_{jkt,i} − β_{jkt,i−1}] for i = 1, 2, . . . , l_{jkt}.
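The γ variables of the slope-formulation admit a similar sketch: segments are filled in order, so γ_{jt,i+1} can be positive only when γ_{jt,i} = 1. The break points below are hypothetical and the helper name is illustrative:

```python
# Decompose p as sum_i gamma_i * (beta_i - beta_{i-1}) with segments
# filled sequentially, mirroring the ordering the z variables enforce.

def gamma_fill(p, betas):
    """betas: beta_1 < ... < beta_l, with beta_0 = 0 implied.
    Returns the gamma values, each in [0, 1]."""
    gammas, prev = [], 0.0
    for b in betas:
        width = b - prev
        gammas.append(min(max((p - prev) / width, 0.0), 1.0))
        prev = b
    return gammas

betas = [10.0, 25.0, 40.0]
g = gamma_fill(18.0, betas)   # segment 1 full, segment 2 partial: [1.0, 8/15, 0.0]
widths = [10.0, 15.0, 15.0]   # beta_i - beta_{i-1}
recovered = sum(gi * w for gi, w in zip(g, widths))
assert abs(recovered - 18.0) < 1e-12
```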
ASP formulation. Kim and Pardalos [50] used an Arc Separation Procedure
(ASP) to transform network flow problems with piecewise-linear concave costs into
network flow problems with fixed charges. Using the same procedure, the PID
problem with piecewise-linear concave costs can be transformed into a fixed charge
network flow problem. Each production arc is separated into l_{jt} arcs (and each
transportation arc into l_{jkt} arcs), as shown in Figure 2–4. All of the new arcs have
fixed charge cost functions.
Figure 2–4: Arc separation procedure.
The network grows in size after the arc separation procedure. The number of
production and transportation arcs in the extended network is given by

∑_{j=1}^{J} ∑_{t=1}^{T} l_{jt} + ∑_{j=1}^{J} ∑_{k=1}^{K} ∑_{t=1}^{T} l_{jkt}.
The production cost can now be written in terms of the cost functions of the
new arcs in the following way:

r_{jt}(p_{jt}) = ∑_{i=1}^{l_{jt}} r_{jt,i}(p_{jt,i}) = ∑_{i=1}^{l_{jt}} (c_{jt,i} p_{jt,i} + s_{jt,i} y_{jt,i}). (2.43)
Note that the equality (2.43) may not hold in general without additional constraints
to restrict the domain for each separated arc cost function. However, for concave cost
functions the equality holds due to the properties given by (2.13) and (2.14). Kim
and Pardalos [50] gave a simple proof by contradiction. This formulation is
closely related to the λ-formulation in the sense that only one of the linear pieces will
be used in an optimal solution. Like the z_{jt} variables in the λ-formulation, the vector
of y_{jt} variables in the formulation below takes O(l_{jt}) possible values. The MILP
formulation after the arc separation procedure is
minimize ∑_{j=1}^{J} ∑_{t=1}^{T} ∑_{i=1}^{l_{jt}} (c_{jt,i} p_{jt,i} + s_{jt,i} y_{jt,i}) + ∑_{j=1}^{J} ∑_{k=1}^{K} ∑_{t=1}^{T} ∑_{i=1}^{l_{jkt}} (c_{jkt,i} x_{jkt,i} + s_{jkt,i} y_{jkt,i}) + ∑_{j=1}^{J} ∑_{t=1}^{T} h_{jt} I_{jt}
subject to (P5)
∑_{i=1}^{l_{jt}} p_{jt,i} + I_{j,t−1} = I_{jt} + ∑_{k=1}^{K} ∑_{i=1}^{l_{jkt}} x_{jkt,i}   j = 1, . . . , J; t = 1, . . . , T, (2.44)
∑_{j=1}^{J} ∑_{i=1}^{l_{jkt}} x_{jkt,i} = d_{kt}   k = 1, . . . , K; t = 1, . . . , T, (2.45)
p_{jt,i} ≤ W_{jt} y_{jt,i}   j = 1, . . . , J; t = 1, . . . , T; i = 1, . . . , l_{jt}, (2.46)
I_{jt} ≤ V_{jt}   j = 1, . . . , J; t = 1, . . . , T, (2.47)
x_{jkt,i} ≤ U_{jkt} y_{jkt,i}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt}, (2.48)
y_{jt,i}, y_{jkt,i} ∈ {0, 1}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt}, (2.49)
p_{jt,i}, I_{jt}, x_{jkt,i} ≥ 0   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt}. (2.50)
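The arc separation idea behind (2.43) can be checked numerically. In the sketch below (hypothetical, continuous concave cost data), routing the entire flow over the single cheapest separated fixed charge arc recovers the original piecewise-linear concave cost, because a concave piecewise-linear function is the lower envelope of its segment lines:

```python
# Compare the original piecewise cost against the minimum over the
# separated fixed charge arcs created by the ASP.

def piecewise_cost(p, betas, s, c):
    if p == 0:
        return 0.0
    for i, b in enumerate(betas):
        if p <= b:
            return s[i] + c[i] * p
    raise ValueError("p exceeds capacity")

def separated_cost(p, s, c):
    # Route the entire flow p over the single cheapest fixed charge arc.
    if p == 0:
        return 0.0
    return min(s[i] + c[i] * p for i in range(len(s)))

betas = [10.0, 25.0, 40.0]
s = [5.0, 15.0, 40.0]   # increasing intercepts, continuous at break points
c = [4.0, 3.0, 2.0]     # decreasing slopes
for p in (0.0, 7.0, 18.0, 33.0):
    assert abs(piecewise_cost(p, betas, s, c)
               - separated_cost(p, s, c)) < 1e-12
```

Without concavity (i.e., if properties (2.13) and (2.14) fail), this equality can break, which is consistent with the remark that (2.43) need not hold in general.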
NSP formulation. We developed a Node Separation Procedure (NSP) to transform
the PID problem with piecewise-linear concave costs into a PID problem with fixed
charge costs on an extended network. The NSP is similar to the ASP, but it results
in a larger network because the NSP separates the inventory arcs as well. Although
the network grows in size, this formulation is useful because its linear programming
(LP) relaxation leads to tight lower bounds. Once the problem is transformed into a
fixed charge network flow problem through NSP, the lower bounding procedure
explained in section 3.5 can be used to obtain good lower bounds.
Figure 2–5 illustrates the node separation procedure.
Figure 2–5: Node separation procedure.
The MILP formulation for PID problems with piecewise-linear concave costs,
linear inventory costs, and fixed charge distribution costs after the node separation
procedure is
minimize ∑_{j=1}^{J} ∑_{t=1}^{T} ∑_{i=1}^{l_{jt}} (c_{jt,i} p_{jt,i} + s_{jt,i} y_{jt,i} + h_{jt} I_{jt,i}) + ∑_{j=1}^{J} ∑_{k=1}^{K} ∑_{t=1}^{T} ∑_{i=1}^{l_{jkt}} (c_{jkt} x_{jkt,i} + s_{jkt} y_{jkt,i})
subject to (P6)
p_{jt,i} + I_{j,t−1,i} = I_{jt,i} + ∑_{k=1}^{K} x_{jkt,i}   j = 1, . . . , J; t = 1, . . . , T; i = 1, . . . , l_{jt}, (2.51)
∑_{j=1}^{J} ∑_{i=1}^{l_{jt}} x_{jkt,i} = d_{kt}   k = 1, . . . , K; t = 1, . . . , T, (2.52)
p_{jt,i} ≤ β_{jt,i} y_{jt,i}   j = 1, . . . , J; t = 1, . . . , T; i = 1, . . . , l_{jt}, (2.53)
I_{jt,i} ≤ V_{jt}   j = 1, . . . , J; t = 1, . . . , T, (2.54)
x_{jkt,i} ≤ U_{jkt} y_{jkt,i}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt}, (2.55)
y_{jt,i}, y_{jkt,i} ∈ {0, 1}   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt}, (2.56)
p_{jt,i}, I_{jt,i}, x_{jkt,i} ≥ 0   j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; i = 1, . . . , l_{jkt}. (2.57)
CHAPTER 3
SOLUTION PROCEDURES
3.1 Existing Solution Approaches
The PID problem, as described in section 2.4, is formulated as a 0-1
Mixed Integer Linear Program (MILP) due to the structure of the production and
distribution cost functions. In fact, most of the exact solution approaches for
combinatorial optimization problems transform the problem into an equivalent 0-1
MILP and use branch and bound techniques to solve it optimally. Two of the
latest branch and bound algorithms for fixed charge transportation problems were
presented by McKeown and Ragsdale [60] and Lamar and Wallace [53]. Recently, Bell,
Lamar and Wallace [7] presented a branch and bound algorithm for the capacitated
fixed charge transportation problem. Most branch and bound algorithms use
conditional penalties, called up and down penalties, that contribute to finding good
lower bounds.
Other exact solution algorithms include vertex enumeration techniques, dynamic
programming approaches, cutting-plane methods, and branch-and-cut methods.
Although exact methods have matured greatly, the computational time required by
these algorithms grows exponentially as the problem size increases. In many instances
they are not able to produce an optimal solution efficiently. Our computational
experiments indicate that even for moderate-size PID problems the exact approaches
fail to find a solution.
The NP-hardness of the problem motivates the use of approximate approaches.
Since the PID problem with fixed charge or piecewise-linear concave costs falls
under the category of MCCNFP, it achieves its optimal solution at an extreme
point of the feasible region (Horst and Pardalos [42]). Most of the approximate
solution approaches exploit this property. Some recent heuristic procedures have
been developed by Holmqvist, Migdalas and Pardalos [39], Diaby [12], Khang and
Fujuwara [47], Larsson, Migdalas and Ronnqvist [54], Ghannadan et al. [28], and
Sun et al. [70]. Recently, Kim and Pardalos [48–50] provided a dynamic slope
scaling procedure (DSSP) to solve FCNFP. Eksioglu et al. [15] refined the DSSP
and presented results for bipartite and layered fixed charge network flow problems.
A wide variety of heuristic approaches for NP-hard problems are based on
local search. Local search methods provide a framework for searching the solution
space focusing on local neighborhoods. In the following we present local search based
solution methods for the PID problem.
3.2 Local Search
Local search is based on a simple and natural idea that is perhaps the oldest
optimization method: trial and error. Nevertheless, local search algorithms have proven
to be powerful tools for hard combinatorial optimization problems, in particular those
known to be NP-hard. The book edited by Aarts and Lenstra [1] is an excellent source
which provides applications as well as complexity results.
The set of solutions of an optimization problem that may be visited by a local
search algorithm is called the search space. Typically, the feasible region of the
problem is defined as the search space. However, if generating feasible solutions is
not easy, then a different search space may be defined, which takes advantage of the
special structures of the problem. If a search space other than the feasible region
is used, then the objective function should be modified so that the infeasibility of a
given solution can be identified.
A basic version of local search is iterative improvement. In other words, a general
local search algorithm starts with some initial solution, S, and keeps replacing it with
another solution in its neighborhood, N(S), until some stopping criterion is satisfied.
Therefore, to implement a local search algorithm the following must be identified:
• a neighborhood, N(S),
• a stopping criterion,
• a move strategy,
• and an evaluation function (this is simply the objective function if the search
space is the feasible region).
Good neighborhoods often take advantage of the special combinatorial structure
of the problem and are typically problem dependent. The algorithm stops if
the current solution does not have any neighbors of lower cost (in the case of
minimization). The basic move strategy in local search is to move to an improved
solution in the neighborhood. Usually one of two strategies is implemented: the
first better move strategy or the best admissible move strategy. In
the first better move strategy, the neighboring solutions are investigated in a pre-
specified order and the first solution that shows an improvement is taken as the next
solution. The order in which the neighbors are searched may affect the solution quality
and the computational time. In the best admissible move strategy, the neighborhood
is searched exhaustively and the best solution is taken as the next solution. In this
case, since the search is exhaustive the order of the search is not important.
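The two strategies can be sketched generically; the neighborhood, evaluation function, and toy instance below are illustrative placeholders rather than the PID neighborhood:

```python
# First better: take the first improving neighbor in a fixed order.
# Best admissible: scan the whole neighborhood and take the best improver.
# Both return None when the current solution is locally optimal.

def first_better(S, neighbors, cost):
    for Sp in neighbors(S):            # the pre-specified order matters here
        if cost(Sp) < cost(S):
            return Sp
    return None

def best_admissible(S, neighbors, cost):
    best = min(neighbors(S), key=cost, default=None)
    return best if best is not None and cost(best) < cost(S) else None

# Toy instance: neighborhood of an integer is {S-1, S+1}, cost is |S-7|.
cost = lambda S: abs(S - 7)
nbrs = lambda S: [S - 1, S + 1]
S = 3
while (nxt := first_better(S, nbrs, cost)) is not None:
    S = nxt
print(S)  # 7: the local (here also global) optimum
```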
Lately, other more sophisticated strategies have been developed to allow the
search to escape from locally optimal solutions in the hopes of finding better solutions.
These sophisticated procedures are referred to as metaheuristics in the literature.
In the following we first give a brief history of local search including a list of
metaheuristics, then discuss the computational complexity of local search, and finally
give our local search procedure for the PID problem.
3.2.1 History of Local Search
The use of local search in combinatorial optimization has a long history. Back in
the late 1950s, Bock [8] and Croes [11] solved traveling salesman problems (TSP) using
edge-exchange local search algorithms for the first time. Later, Lin [55] refined the
edge-exchange algorithms for the TSP and presented 3-exchange and Or-exchange
neighborhood functions. Reiter and Sherman [65] examined various neighborhoods
for the TSP and introduced the multi-start strategy. Subsequently, Kernighan and
Lin [46] presented a variable-depth search algorithm for uniform graph partitioning.
Lin and Kernighan [56] also successfully applied a variable-depth search algorithm
for the TSP.
In the 1980s more generalized approaches were proposed which combined
local search with other heuristic algorithms. These approaches allow moves to
solutions that do not necessarily give better objective function values. Examples
of these sophisticated approaches, called metaheuristics, are: simulated annealing,
tabu search, genetic algorithms, neural networks, greedy randomized adaptive
search procedure (GRASP), variable neighborhood search, ant systems, population
heuristics, memetic algorithms, and scatter search. Recent reviews on these
procedures can be found in the book edited by Pardalos and Resende [61].
3.2.2 Complexity of Local Search
An important question about a local search algorithm is the number of steps it
takes to reach a locally optimal solution. The time it takes to search the neighborhood
of a given solution is usually polynomially bounded. However, the number of moves
it takes to reach a local optimum from a given solution may not be polynomial. For
example, there are TSP instances and initial solutions for which the local search takes
an exponential number of steps under the 2-exchange neighborhood. To analyze the
complexity of local search Johnson, Papadimitriou and Yannakakis [45] introduced a
complexity class called PLS (polynomial-time local search). PLS contains problems
whose neighborhoods can be searched in polynomial time. Yannakakis [82] provides
an extensive survey of the theory of PLS-completeness.
It is important to distinguish between the complexity of a local search problem
and a local search heuristic. A local search problem is the problem of finding
local optima by any means, whereas a local search heuristic finds local optima
by the standard iterative procedure. An interesting and typical example is linear
programming (LP). For LP local optimality coincides with global optimality and the
Simplex method can be viewed as a local search algorithm. The move strategy of a
local search algorithm affects the running time. Similarly, the pivoting rule affects
the running time of the Simplex method. It is well-known that in the worst case the
Simplex method may take an exponential number of steps for most pivoting rules. It
is still an open problem whether there exists a pivoting rule which makes the Simplex
method a polynomial time algorithm. However, LP can be solved by other methods
such as the ellipsoid or interior point algorithms in polynomial time.
In the past two decades there has been considerable work on the theory of local
search. Several local search problems have been shown to be PLS-complete, but
the complexity of finding local optima for many interesting problems remains open,
although computational results are encouraging.
3.2.3 Local Search for the PID Problem
In this section, we give details about the neighborhood, the move strategy, the
stopping criterion, and the evaluation function for the PID problem, which are the
main ingredients of a local search algorithm. The PID problem is formulated as
a network flow problem in section 2.4, where the objective is the minimization of a
concave function. We take advantage of the special structure of network flow problems
to define a neighborhood and a move strategy. We first give several definitions of
neighborhood for MCCNFP and then adopt one of these definitions and provide a
neighborhood definition for the PID problem. Next, we discuss the move strategy, the
stopping condition, and the evaluation function. The generation of initial solutions
will be discussed later in sections 3.3 and 3.4.
Neighborhood definitions for MCCNFP. The usual definition of local optimality
defines a neighborhood of a solution S in the following way, where ‖ · ‖ is a vector
norm and ε > 0.

Definition 3.1 Nε(S) = {S′ : S′ is feasible to the problem and ‖S − S′‖ < ε}.

Under this definition of a neighborhood, a solution S is locally optimal if it is not
possible to decrease the objective function value by rerouting a small portion ε of flow.
Actually, for certain concave cost functions the ε-neighborhood results in all extreme
flows being locally optimal. Therefore, this standard definition of neighborhood, called the
ε-neighborhood, is not very useful for our problem.
Proposition 2 Every extreme flow is a local optimum for an uncapacitated network
flow problem with a single source and with fixed arc costs under Nε.
Proof. In order to create an ε-neighbor of a given extreme flow, let us reroute an
ε amount to one of the demand nodes. Subtract ε from all the arcs on the path
from the source to that demand node. This does not change the current cost for
two reasons. First, the current flow on all arcs on the path from the source to that
demand node is higher than ε, so they will still have positive flow. Second, the arcs
have the following cost structure:

r_{jt}(p_{jt}) =
  s_{jt}  if p_{jt} > 0,
  0       if p_{jt} = 0.
Now route that ε amount along a different path. This will create at least one cycle in
the network, which means at least one arc that had zero flow will now have positive
flow. Thus, the total cost will not decrease. □
Gallo and Sodini [25] have also shown that every extreme flow is a local optimum
under Nε for similar problems with the following cost structure:
r_{jt}(p_{jt}) = µ p_{jt}^α

where 0 < α < 1 and µ > 0.
There are other concave cost functions, however, for which an extreme flow is
not necessarily a local optimum under Nε. Consider the following example in Figure
3–1 where there are two demand nodes, two transhipment nodes, and a single source
node.
Figure 3–1: An example for the ε-neighborhood.
If the cost functions are fixed charge then the extreme flow on the left in Figure
3–1 has a total cost of s11 + c11d11 + s21 + c21d21 + s111 + c111d11 + s221 + c221d21
(see section 2.4.1 for the definition of the variables). The non-extreme ε-neighbor
on the right in Figure 3–1 has a total cost of s11 + c11(d11 − ε) + s21 + c21(d21 +
ε) + s111 + c111(d11 − ε) + s221 + c221d21 + s211 + c211ε. The difference between the
costs is s211 + (c211 + c21 − c11 − c111)ε. If appropriate cost values are chosen then
this difference can be negative which indicates that the extreme flow is not a local
optimum. However, the ε-neighborhood is still not useful for our problem because it
is not easy to search this neighborhood efficiently.
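A quick numeric check (with hypothetical cost values) confirms that the difference s211 + (c211 + c21 − c11 − c111)ε derived above can be negative:

```python
# Plug illustrative numbers into the cost difference between the
# non-extreme epsilon-neighbor and the extreme flow of Figure 3-1.

s211 = 1.0                                # fixed charge on the new arc
c211, c21, c11, c111 = 0.5, 0.5, 4.0, 4.0  # variable costs (hypothetical)
eps = 0.5

diff = s211 + (c211 + c21 - c11 - c111) * eps
print(diff)  # 1.0 + (-7.0)*0.5 = -2.5 < 0: the extreme flow is not locally optimal
```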
Gallo and Sodini [25] developed the following generalized definition of
neighborhood for MCCNFP.

Definition 3.2 NAEF(S) = {S′ : S′ is feasible to the problem and it is an adjacent
extreme flow}.

Here, two extreme flows are adjacent if and only if they differ only on the path
between two vertices. Thus, the graph joining the two extreme flows S and S ′ contains
a single undirected cycle. Gallo and Sodini [25] described a procedure which detects
if a given extreme flow is a local optimum or finds a new better extreme flow by
solving a series of shortest path problems. Their procedure constructs a modified
network and solves a shortest weighted path problem for each vertex in the current
solution, S. The modified network is constructed to prevent moving to nonextreme
or nonadjacent solutions.
Guisewite and Pardalos [31] developed the following more relaxed definition of
neighborhood for MCCNFP.

Definition 3.3 NAF(S) = {S′ : S′ is feasible to the problem and it is an adjacent
flow}.

Under this definition, S′ is adjacent to the extreme flow S if it is obtained by
rerouting a single sub-path within S. NAF(·) is a relaxation of NAEF(·) in the sense
that all adjacent solutions, not only adjacent extreme flows, are included in the
neighborhood. In this case it is easier to reach the neighboring
solutions because the modified graph that prevented nonextreme solutions is not
required and the shortest weighted path problem need not be solved for all vertices of
S. It is sufficient to solve the shortest weighted path problem for the branch points
(nodes with degree greater than two) and the sink nodes.
Neighborhood definition for the PID problem. A generally accepted rule is that
the quality of the solutions is better and the accuracy of the final solutions is
greater if a larger neighborhood is used, although this may require additional
computational time. Therefore, it is likely that NAF(·), which is defined above, will
lead to better solutions. Thus, we define a neighborhood for the PID problem in the
following way, which is, in principle, similar to NAF(·).

Definition 3.4 A solution S′ is a neighbor of the current feasible solution S if the
facility supplying one of the retailers is changed, or if the demand of a retailer comes
from the same facility but through production in a different period.
Here, to reach a neighbor S ′ from an extreme solution S we first subtract the
demand of a retailer from the current path to that retailer then route it through a
different path. An example is shown in Figure 3–2 where the demand of retailer 1
in period 2 (R1,2) initially comes from facility 1 in period 2 (F1,2), but it is rerouted
through facility 2 in period 1 (F2,1). Note that changing the facility that supplies a
retailer may not necessarily result in an extreme point feasible solution. Therefore,
after the flow to a retailer is rerouted we need to force the solution to be a tree, which
may result in additional cost savings.
In Figure 3–2, for example, there is a cycle between nodes D, F2,1, and F2,2.
This cycle indicates that there is a positive amount of inventory carried forward from
period 1 to period 2 at facility 2 and at the same time there is a positive amount of
production at facility 2 in period 2. To remove this cycle we should either eliminate
the inventory or the production from the cycle. However, we know that rerouting
the inventory through node F2,2 will actually increase the total cost and this can
easily be shown. When we moved from the extreme point solution on the left to the
nonextreme solution on the right rerouting the demand of node R1,2 was cheaper if
node F1,2 is used rather than F2,2. Therefore, the production arc should be removed
from the cycle. A more formal discussion on cycle detection and elimination is given
later in this section.
Figure 3–2: An example of moving to a neighboring solution.
The move strategy, the evaluation function, and the stopping condition. The
move strategy determines the order in which the neighboring solutions are visited.
34
As we mentioned earlier, first better and best admissible are the two common move
strategies implemented. Guisewite and Pardalos [31] and Eksioglu et al. [15] presented
computational results using both move strategies on network flow problems. These
authors provided empirical evidence that the first better move strategy requires less
time and the quality of the solutions are comparable for both strategies. Therefore, in
our local search algorithms we used the first better move strategy. In this strategy, the
neighboring solutions are investigated in a pre-specified order and the first solution
that shows an improvement is taken as the next solution. The improvements are
measured by an evaluation function. The evaluation function that we use calculates
the total cost of a solution which is simply the objective function. The local search
algorithm terminates when there is no more improvement. In other words, the
stopping condition determines whether the current solution is a local optimum or
not. If the current solution is a local optimum then the procedure terminates.
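The loop just described can be sketched generically. The snippet below is an illustrative sketch, not the dissertation's implementation; `neighbors` and `cost` are hypothetical stand-ins for the PID neighborhood and the objective function.

```python
def local_search_first_better(start, neighbors, cost):
    """Generic first-better local search: scan the neighbors of the
    current solution in a fixed order, move to the first strictly
    improving one, and stop when a full scan finds no improvement."""
    current = start
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(current):
            if cost(candidate) < cost(current):  # first better, not best
                current = candidate
                improved = True
                break  # restart the scan from the new solution
    return current

# Toy usage: minimize x^2 over the integers with +/-1 moves.
step = lambda x: [x - 1, x + 1]
result = local_search_first_better(7, step, lambda x: x * x)  # -> 0
```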
The local search algorithm is summarized in Figure 3–3. Let j′ be the facility
that currently supplies retailer k’s demand in period t through production in period
τ ′ (t ≥ τ ′). Also, let δjτ be the additional cost of changing the supplier to facility j in
period τ (t ≥ τ). In other words, δjτ is the cost of rerouting the demand of retailer k
under consideration. Note that δjτ = 0 when j = j′ and τ = τ ′. In the local search
procedure given in Figure 3–3, j∗ and τ∗ are such that δj∗τ∗ = ∆ = min{δjτ : j =
1, . . . , J, τ = 1, . . . , t}.
Cycle detection and elimination. Due to the special structure of our network,
identifying cycles and removing them can be done efficiently. A cycle for the PID
problem means that at least one of the facilities is producing and also carrying
inventory in the same period. Therefore, with a single pass over all facilities and
time periods the cycles can be identified. The procedure is summarized in Figure
3–4. Note that two types of cycles can be created and they are analyzed separately
in the following paragraphs.
procedure Local
Let S be an initial extreme feasible solution
while (S is not a local optimum) do
for (t = 1, . . . , T ) and (k = 1, . . . , K) do
Calculate δjτ , j = 1, . . . , J, τ = 1, . . . , t
∆ := min{δjτ : j = 1, . . . , J, τ = 1, . . . , t}
if ∆ = 0 then S is a local optimum
else if ∆ < 0 then
Reroute the flow through the new path
Check for any cycles and eliminate them
end if
end for t and k
end while
return S
end procedure
Figure 3–3: The local search procedure.
procedure Cycle
for (l = t, . . . , 0) do
if (pjl > 0 and Ijl > 0) then
if pjl is on the new path then reroute Ijl
otherwise reroute pjl
end for
return S
end procedure
Figure 3–4: Cycle detection and elimination.
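The single-pass idea can be sketched as a simple check: scan facilities and periods for simultaneous positive production and carried inventory, which is exactly the PID cycle condition. The dictionaries-of-flows below are hypothetical stand-ins for a solution, not the dissertation's code.

```python
def find_cycles(p, I, J, T):
    """Return the (facility, period) pairs where production and carried
    inventory are both positive -- the cycle condition for the PID
    problem.  A single pass over all facilities and periods: O(J*T)."""
    return [(j, t)
            for j in range(J) for t in range(T)
            if p[j][t] > 0 and I[j][t] > 0]

# Facility 0 both produces and carries inventory in period 1.
p = [[5, 3], [0, 4]]   # p[j][t]: production at facility j in period t
I = [[0, 2], [0, 0]]   # I[j][t]: inventory carried out of period t
cycles = find_cycles(p, I, J=2, T=2)  # -> [(0, 1)]
```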
Type I cycle. Assume that the demand of retailer k in period t is satisfied
through production at facility j at time τ after it is rerouted. If facility j is already
carrying inventory from period τ − 1 to period τ , then a type I cycle is created.
Only one cycle is created and it can be eliminated by rerouting the inventory. Figure
3–5 gives an example of a type I cycle where the demand of retailer 1 in period 3 is
originally satisfied from production at facility 2 in period 3. If it is cheaper to satisfy
the demand of retailer 1 in period 3 by production at facility 1 in period 3, then this
will result in the network flow given on the right of Figure 3–5. To eliminate the
cycle the amount of inventory entering node F1,3 is subtracted from its current path
and added on to the production arc entering F1,3.
Figure 3–5: An example of a type I cycle.
Type II cycle. Assume that facility j currently produces in several periods and
that the demand of retailer k in period t is satisfied through production from an
earlier period after it is rerouted. The rerouting of retailer k’s demand may result
in one or more cycles. Figure 3–6 gives an example of a type II cycle. The flow on
the left is the starting extreme flow which indicates that the demand of retailer 1 in
period 3 is currently satisfied through production in period 3 at facility 2. Assume
that it is actually cheaper to satisfy the demand of retailer 1 in period 3 through
production at facility 1 in period 1. This will result in the flow given on the right of
Figure 3–6 which has two cycles. To eliminate these cycles the production amounts
in periods 2 and 3 at facility 1 are eliminated and carried forward as inventory from
period 1.
Figure 3–6: An example of a type II cycle.
So far, we have talked about the neighborhood, the move strategy, the evaluation
function, and the stopping condition. However, we have not discussed how the initial
solutions are found. Generation of initial solutions may be critical for local search
procedures. Good starting solutions may lead to better quality local optima. In
sections 3.3 and 3.4 we discuss several procedures for generating good initial solutions.
3.3 Dynamic Slope Scaling Procedure (DSSP)
The DSSP is a procedure that iteratively approximates the concave cost function
by a linear function, and solves the corresponding network flow problem. Note that
each of the approximating network problems has exactly the same set of constraints,
procedure DSSP
q := 0 /*set the iteration counter to zero*/
Initialize c(0)jt and c(0)jkt /*find an initial linear cost*/
while (stopping criterion not satisfied) do
q := q + 1
Solve the following network flow problem:
minimize ∑_{j=1}^{J} ∑_{t=1}^{T} (c(q−1)jt p(q)jt + hjt I(q)jt) + ∑_{j=1}^{J} ∑_{k=1}^{K} ∑_{t=1}^{T} c(q−1)jkt x(q)jkt
subject to the original constraints of (P1)
Update c(q)jt and c(q)jkt
if cost(S(q)) < cost(S) then S := S(q)
end while
return S
end procedure
Figure 3–7: The DSSP algorithm.
and differs only with respect to the objective function coefficients. The motivation
behind the DSSP is the fact that a concave function, when minimized over a set
of linear constraints, will have an extreme point optimal solution. Therefore, there
exists a linear cost function that yields the same optimal solution as the concave cost
function. Figure 3–7 summarizes the DSSP algorithm.
Important issues regarding the DSSP algorithm are finding initial linear costs,
updating the linear costs, and stopping conditions.
Finding an initial linear cost (c(0)jt and c(0)jkt). Kim and Pardalos [48, 50] investigated
two different ways to initiate the algorithm for fixed charge and piecewise-linear
concave network flow problems. Here, we generalize the heuristic to all concave costs
with the property that the total cost is zero if the activity level is zero. In other words,
rjt(·) and fjkt(·) are concave functions on the interval [0,∞) and rjt(0) = fjkt(0) = 0.
The initial linear cost factors we use are c(0)jt = rjt(Wjt)/Wjt and c(0)jkt = fjkt(Ujkt)/Ujkt.
Updating scheme for c(q)jt and c(q)jkt. Given a feasible solution S(q) = (p(q), I(q), x(q)) in
iteration q, the objective function coefficients for the next iteration are expressed in
linear form as
c(q)jt = rjt(p(q)jt)/p(q)jt if p(q)jt > 0, and c(q)jt = c(q−1)jt if p(q)jt = 0,
and
c(q)jkt = fjkt(x(q)jkt)/x(q)jkt if x(q)jkt > 0, and c(q)jkt = c(q−1)jkt if x(q)jkt = 0.
Stopping condition. If two consecutive solutions in the above algorithm are equal,
then the linear cost coefficients and the objective function values in the following
iterations will be identical. As a result, once S(q) = S(q−1) there can be no more
improvement. Therefore, a natural stopping criterion is to terminate the algorithm
when ‖S(q) − S(q−1)‖ < ε. An alternative is to terminate the algorithm after a fixed
number of iterations if the above criterion is not satisfied.
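The full iteration can be sketched for the simplest case of a single demand routed over parallel arcs. The snippet below is an illustrative sketch of the DSSP idea only, with made-up names and data: slopes are re-estimated at the current flow, the linearized problem (here, picking the min-slope arc) is re-solved, and the loop stops when two consecutive solutions coincide.

```python
def dssp(costs, demand, caps, max_iter=50):
    """DSSP sketch for one demand routed over parallel concave-cost
    arcs (a continuous knapsack): re-estimate each arc's slope at its
    current flow, re-solve the linearized problem (pick the min-slope
    arc), and stop when two consecutive solutions coincide."""
    J = len(costs)
    # Initial slopes from the chords to the upper bounds: c0 = r(W)/W.
    slope = [costs[j](caps[j]) / caps[j] for j in range(J)]
    prev = None
    for _ in range(max_iter):
        j_star = min(range(J), key=lambda j: slope[j])
        if j_star == prev:        # S(q) == S(q-1): no further change
            break
        prev = j_star
        # Only the arc that carried flow is re-linearized: r(d)/d.
        slope[j_star] = costs[j_star](demand) / demand
    return prev

# Two fixed-charge arcs: cost = s + c*x for x > 0 and 0 at x = 0.
fc = lambda s, c: (lambda x: 0.0 if x == 0 else s + c * x)
arc = dssp([fc(10.0, 1.0), fc(2.0, 2.0)], demand=4.0, caps=[100.0, 100.0])  # -> 1
```

With a demand of 4, arc 0 costs 14 and arc 1 costs 10, and the sketch converges to the cheaper arc after one re-linearization.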
The initiation and updating schemes for fixed charge and piecewise-linear concave
cases are analyzed below separately. In the remainder of this dissertation we will refer
to the local search approach that uses DSSP to generate the initial solutions as LS-
DSSP.
3.3.1 Fixed Charge Case
Kim and Pardalos [48] investigated two different ways to initiate the algorithm.
The initiation scheme presented here is shown to provide better results when used
for large scale problems (Figure 3–8). The initial value of the linear cost we use
is c(0)jt = cjt + sjt/Wjt. With these coefficients, the first approximating problem is
simply the LP relaxation of the original problem. However, note that the problems we
are solving are NFPs, which can be solved much faster than LPs.
The linear cost coefficients are updated in such a way that they reflect the variable
costs and the fixed costs simultaneously in the following way:
Figure 3–8: Linearization of the fixed charge cost function.
c(q)jt = cjt + sjt/p(q)jt if p(q)jt > 0, and c(q)jt = c(q−1)jt if p(q)jt = 0,
and
c(q)jkt = cjkt + sjkt/x(q)jkt if x(q)jkt > 0, and c(q)jkt = c(q−1)jkt if x(q)jkt = 0.
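These two rules (initialization at capacity, update at the current flow) can be written as a minimal sketch; the function names are illustrative, not the dissertation's code.

```python
def init_coeff(c, s, W):
    """Initial linearization of a fixed charge arc: c + s/W, the slope
    of the chord from the origin to the capacity W."""
    return c + s / W

def update_coeff(c, s, flow, prev_coeff):
    """Re-linearize at the current flow: c + s/flow; an arc that
    carried no flow keeps its previous coefficient."""
    return c + s / flow if flow > 0 else prev_coeff

c0 = init_coeff(c=2.0, s=12.0, W=6.0)                   # 2 + 12/6 = 4.0
c1 = update_coeff(2.0, 12.0, flow=3.0, prev_coeff=c0)   # 2 + 12/3 = 6.0
c2 = update_coeff(2.0, 12.0, flow=0.0, prev_coeff=c1)   # unchanged: 6.0
```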
3.3.2 Piecewise-linear Concave Case
When production or transportation costs are piecewise-linear and concave, the
problem can be transformed into a fixed charge network flow problem as described in
section 2.4.2. After the transformation, the solution approach given in section 3.3.1
can be used to solve the problem. However, the transformed problem does not always
lead to generation of feasible solutions since the problem is not solved to optimality.
Therefore, we apply the procedure to the original problem instead of the extended
network with some modification to account for each piece in the cost functions.
The linear cost coefficients are initialized in the same way as the fixed charge
case, i.e., c(0)jt = cjt + sjt/Wjt for the production arcs. To update the costs, each flow
value is checked to see which interval it falls into, and the corresponding fixed and
variable costs are used. The updating scheme (Figure 3–9) is as follows:
c(q)jt = cjt,i + sjt,i/p(q)jt if βjt,i−1 < p(q)jt ≤ βjt,i, and c(q)jt = c(q−1)jt if p(q)jt = 0,
and
c(q)jkt = cjkt,i + sjkt,i/x(q)jkt if βjkt,i−1 < x(q)jkt ≤ βjkt,i, and c(q)jkt = c(q−1)jkt if x(q)jkt = 0,
where i indexes the piece of the cost function containing the current flow value.
Figure 3–9: Linearization of the piecewise-linear concave cost function.
3.3.3 Performance of DSSP on Some Special Cases
In this section we analyze the performance of DSSP on some basic problems.
The problems considered are simple cases with only a single time period. We identify
some cases where the DSSP heuristic actually finds the optimal solution.
Multiple facilities and one retailer. If there is only a single retailer but multiple
facilities in the network, then the problem is easy to solve, provided that there are
no capacities on the production and transportation arcs and the cost functions are
concave. The problem in this case simply reduces to a shortest path problem (Figure
3–10) which can be solved in O(J) time by complete enumeration. In an optimal
solution only one pair of production and transportation arcs will be used since this
is an uncapacitated concave minimization problem. The optimal solution has the
following characteristics (j∗ indicates the facility used in the optimal solution):
• pj∗1 = xj∗11 = d11
• pj1 = xj11 = 0 ∀ j ∈ {1, 2, . . . , J}\{j∗}
Figure 3–10: A bipartite network with many facilities and one retailer.
Proposition 3 The DSSP finds the optimal solution in J +2 iterations, in the worst
case, for the single retailer problem.
Proof. In every iteration of the DSSP heuristic the following network flow
problem is solved (derived from problem (P1) for a single retailer):
minimize ∑_{j=1}^{J} (c(q−1)j1 + c(q−1)j11) p(q)j1
subject to (P1′)
∑_{j=1}^{J} p(q)j1 = d11, (3.1)
p(q)j1 ≥ 0, j = 1, . . . , J. (3.2)
Although (P1′) is uncapacitated, the heuristic requires an upper bound to
initialize the linear cost coefficients. Therefore, let Wj1 and Uj11 be the upper
bounds for the production and transportation arcs, respectively, such that Wj1 ≥ d11
and Uj11 ≥ d11. Hence, the initial cost coefficients are c(0)j1 = rj1(Wj1)/Wj1 and
c(0)j11 = fj11(Uj11)/Uj11 for j = 1, 2, ..., J .
Problem (P1′) is a continuous knapsack problem. Therefore, at a given DSSP
iteration q, the solution will be of the following form: p(q)j′1 = d11 and p(q)j1 = 0, where
c(q−1)j′1 + c(q−1)j′11 ≤ c(q−1)j1 + c(q−1)j11 ∀ j ∈ {1, 2, . . . , J}\{j′}. Since p(q)j′1 is the only positive
value, the cost coefficients of that arc will be updated and the others will not change.
Thus, c(q)j′1 = rj′1(d11)/d11 and c(q)j′11 = fj′11(d11)/d11.
The heuristic terminates when the solutions found in two consecutive iterations
are the same. However, this will not happen unless the solution found is the
optimal one. In other words, at iteration q + 1 the solution will be p(q+1)j′′1 = d11
(all other variables are zero) and the procedure will stop if j′′ = j′. Note that
j′′ = j′ only if j′ = j∗, where j∗ is the facility used in the optimal solution
to (P1) with a single retailer. Assume j′ ≠ j∗. Also assume, without loss of
generality, that the optimal solution is unique so that rj∗1(d11)/d11 + fj∗11(d11)/d11 <
rj1(d11)/d11 + fj11(d11)/d11 ∀ j ∈ {1, 2, . . . , J}\{j∗}. Due to the concavity of the cost
functions, rj1(Wj1)/Wj1 ≤ rj1(d11)/d11 and fj11(Uj11)/Uj11 ≤ fj11(d11)/d11 (remember
that rj1(0) = fj11(0) = 0) for j = 1, 2, . . . , J. Therefore, at iteration q + 1,
c(q)j′1 + c(q)j′11 = rj′1(d11)/d11 + fj′11(d11)/d11 > rj∗1(d11)/d11 + fj∗11(d11)/d11 ≥
c(q)j∗1 + c(q)j∗11. Hence, p(q+1)j′1 = 0 and j′′ ≠ j′.
The heuristic will visit each facility at most once, and facility j∗ will be visited
at most three times (twice in the last two iterations and possibly once during the
previous iterations). The total number of iterations, therefore, is J + 2 in the worst
case. If Wj1 = Uj11 = d11 for j = 1, 2, . . . , J, then the optimal solution will be found
in only 2 iterations. □
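Proposition 3 can also be checked numerically. The sketch below runs the DSSP recursion from the proof on a small single-retailer instance with concave square-root costs; the instance data and function names are made up for illustration.

```python
import math

def dssp_single_retailer(r, f, W, U, d, max_iter=100):
    """DSSP for one retailer: using facility j routes all of d through
    its production and transportation arcs, so solving the linearized
    problem = picking the min-slope facility.  Returns (facility,
    number of iterations)."""
    J = len(r)
    # Initial slopes from the chords to the upper bounds.
    slope = [r[j](W[j]) / W[j] + f[j](U[j]) / U[j] for j in range(J)]
    prev, q = None, 0
    while q < max_iter:
        q += 1
        j = min(range(J), key=lambda i: slope[i])
        if j == prev:                       # two equal consecutive solutions
            break
        prev = j
        slope[j] = r[j](d) / d + f[j](d) / d  # update only the used facility
    return prev, q

J, d = 4, 9.0
r = [lambda p, a=a: a * math.sqrt(p) for a in (3.0, 1.0, 2.0, 4.0)]
f = [lambda x, b=b: b * math.sqrt(x) for b in (1.0, 2.0, 1.5, 1.0)]
j_dssp, iters = dssp_single_retailer(r, f, [20.0] * J, [20.0] * J, d)
# The true optimum, by complete enumeration as in the text.
j_opt = min(range(J), key=lambda j: r[j](d) + f[j](d))
```

On this instance the recursion reaches the optimal facility within the J + 2 bound.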
Two facilities and two retailers. When there are only two facilities and two
retailers the problem is obviously easy since there are only a few feasible solutions
to consider (Figure 3–11). If there are no production and transportation capacities
then the two retailers are not competing for capacity and they can act independently.
This indicates that the problem decomposes into two single-retailer problems both of
which can be solved to optimality using the DSSP. In general, if there are J facilities,
K retailers, T periods, and no capacity constraints, then the problem can be solved
to optimality by solving KT single-retailer problems (with the additional condition
that only distribution costs are nonlinear).
Figure 3–11: A bipartite network with two facilities and two retailers.
If, however, the facilities have production capacities then the DSSP may fail to
find the optimal solution. We will illustrate this by creating an example for which
DSSP fails to find the optimal solution. We assume production and distribution costs
are fixed charge rather than general concave functions to simplify the analysis.
The MILP formulation for the problem in Figure 3–11 with fixed charge costs
can be given as (derived from problem (P2) in section 2.4.1)
minimize c11p11 + c21p21 + s11y11 + s21y21
+c111x111 + c121x121 + c211x211 + c221x221
+s111y111 + s121y121 + s211y211 + s221y221
subject to (P2′)
p11 − x111 − x121 = 0, (3.3)
p21 − x211 − x221 = 0, (3.4)
x111 + x211 = d11, (3.5)
x121 + x221 = d21, (3.6)
p11 ≤ W11y11, (3.7)
p21 ≤ W21y21, (3.8)
x111 ≤ U111y111, (3.9)
x121 ≤ U121y121, (3.10)
x211 ≤ U211y211, (3.11)
x221 ≤ U221y221, (3.12)
yj1, yjk1 ∈ {0, 1} j = 1, 2; k = 1, 2, (3.13)
pj1, xjk1 ≥ 0 j = 1, 2; k = 1, 2. (3.14)
We want capacities on the production arcs, but to have a feasible problem we
must have W11 + W21 ≥ d11 + d21. Letting W11 + W21 = d11 + d21 forces the following
conditions: p11 = W11, p21 = W21, y11 = 1, and y21 = 1. Under these conditions
the only decision variables left in the problem are the distribution amounts from the
facilities to the retailers. If (W11 = W21 = d11 = d21) or (W11 = d11, W21 = d21,
and d11 < d21) then the DSSP finds the optimal solution in only two iterations. This
can easily be shown by following a similar analysis given below for the case where
W11 < d11 < d21 < W21.
The distribution arcs are uncapacitated, so the upper bounds on these arcs can
be set equal to the minimum of the demand and the capacity of the corresponding
production arc, i.e., Ujkt = min{Wjt, dkt}. This leads to U111 = W11, U121 = W11,
U211 = d11, and U221 = d21. Since W11 < d11 < d21 < W21, this indicates that the
capacity of facility 1 is not enough to satisfy the demand of either of the retailers.
Therefore, both of the distribution arcs from facility 2 must be used, i.e. y211 =
y221 = 1. This leaves us with only one decision “Which of the two distribution arcs
from facility 1 should be used?”
After doing all the above substitutions (P2′) can now be rewritten as
minimize c111x111 + c121x121 + c211x211 + c221x221 + s111y111 + s121(1− y111)
subject to (P2′′)
x111 + x121 = W11, (3.15)
x211 + x221 = W21, (3.16)
x111 + x211 = d11, (3.17)
x121 + x221 = d21, (3.18)
x111 = W11y111, (3.19)
x121 = W11(1 − y111), (3.20)
x211 ≤ d11, (3.21)
x221 ≤ d21, (3.22)
y111 ∈ {0, 1}, (3.23)
xjk1 ≥ 0 j = 1, 2; k = 1, 2. (3.24)
Note that the production cost is now a constant since p11 = W11 and p21 = W21.
Thus, it is not included in the objective function. The problem formulation can
further be simplified by taking advantage of the equality constraints.
minimize (c111W11 − c121W11 − c211W11 + c221W11 + s111 − s121)y111
subject to (P2′′′)
y111 ∈ {0, 1}. (3.25)
The optimal solution to (P2′′′) is y111 = 1 if (c111W11 − c121W11 − c211W11 +
c221W11 + s111 − s121) < 0 and y111 = 0 otherwise. Assume (c111W11 − c121W11 −
c211W11 + c221W11 + s111 − s121) ≥ 0 so that we have x111 = 0, x121 = W11, x211 = d11,
and x221 = d21 − W11 as the optimal solution.
Under the assumptions and simplifications given before, the DSSP solves the
following network flow problem at every iteration:
minimize (c(q−1)111 − c(q−1)121 − c(q−1)211 + c(q−1)221) x(q)111
subject to (P1′′)
x(q)111 ≤ W11, (3.26)
x(q)111 ≥ 0. (3.27)
The initial linear cost coefficients are c(0)111 = c111 + s111/W11, c(0)121 = c121 + s121/W11,
c(0)211 = c211 + s211/d11, and c(0)221 = c221 + s221/d21. Let c111 + s111/W11 − c121 − s121/W11 −
c211 − s211/d11 + c221 + s221/d21 < 0, so that the solution to (P1′′) is x(1)111 = W11,
x(1)121 = 0, x(1)211 = d11 − W11, and x(1)221 = d21. In the next iteration the linear cost
coefficients do not change except for c211, which becomes c(1)211 = c211 + s211/(d11 − W11).
Since s211/(d11 − W11) > s211/d11 the objective function coefficient in (P1′′) is still
negative. Thus, the solution in the next iteration is the same and DSSP terminates
with a solution different from the optimal solution.
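The failure mode can be reproduced with concrete numbers. The instance below is made up for illustration: it satisfies W11 < d11 < d21 < W21 and W11 + W21 = d11 + d21, uses equal variable costs c = 1 on all arcs, and its fixed charges satisfy both sign conditions above, so DSSP's fixed point carries flow on x111 while the true optimum does not.

```python
# Hypothetical instance: W11 < d11 < d21 < W21, W11 + W21 = d11 + d21.
W11, W21, d11, d21 = 1.0, 4.0, 2.0, 3.0
s111, s121, s211, s221 = 5.0, 4.0, 10.0, 6.0
c = 1.0   # common variable cost on every distribution arc

def total_cost(x111):
    """True cost of (P2') after fixing p11 = W11, p21 = W21; the only
    free choice left is x111 in {0, W11}."""
    x121 = W11 - x111
    x211, x221 = d11 - x111, d21 - x121
    fixed = sum(s for s, x in [(s111, x111), (s121, x121),
                               (s211, x211), (s221, x221)] if x > 0)
    return c * (x111 + x121 + x211 + x221) + fixed

# Reduced cost of x111 under the initial linearized coefficients:
red0 = (c + s111 / W11) - (c + s121 / W11) - (c + s211 / d11) + (c + s221 / d21)
# After one iteration x111 = W11, so c211 is re-linearized at d11 - W11:
red1 = (c + s111 / W11) - (c + s121 / W11) - (c + s211 / (d11 - W11)) + (c + s221 / d21)
dssp_cost = total_cost(W11)                     # DSSP's fixed point
opt_cost = min(total_cost(0.0), total_cost(W11))  # true optimum
```

Both reduced costs are negative, so DSSP keeps x111 = W11 and terminates, even though the solution with x111 = 0 is strictly cheaper.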
Multiple facilities and multiple retailers. If the problem has multiple facilities,
multiple retailers, and concave cost functions, then it falls under the category of
MCCNFP which is known to be NP-hard (section 2.3).
Figure 3–12: A bipartite network with multiple facilities and multiple retailers.
However, if (i) the distribution costs are concave, (ii) there is an equal number of
facilities and retailers, (iii) the demands at the retailers are all the same, and (iv) all
production capacities are equal to the constant demand, then the problem becomes
easier (Figure 3–12). In other words, if J = K and Wj1 = dk1 = d for j, k = 1, 2, ..., J
the problem formulation is
minimize ∑_{j=1}^{J} rj1(pj1) + ∑_{j=1}^{J} ∑_{k=1}^{J} fjk1(xjk1)
subject to (P7)
pj1 − ∑_{k=1}^{J} xjk1 = 0, j = 1, . . . , J, (3.28)
∑_{j=1}^{J} xjk1 = d, k = 1, . . . , J, (3.29)
pj1 ≤ d, j = 1, . . . , J, (3.30)
xjk1 ≤ d, j, k = 1, . . . , J, (3.31)
pj1, xjk1 ≥ 0, j, k = 1, . . . , J. (3.32)
Due to tight production capacities the only feasible way of meeting the demand is
if all facilities produce at their full capacity. This means that pj1 = d for j = 1, 2, ..., J
so, the production variables can be dropped from the formulation and the problem
reduces to
minimize ∑_{j=1}^{J} ∑_{k=1}^{J} fjk1(xjk1)
subject to (P7′)
∑_{k=1}^{J} xjk1 = d, j = 1, . . . , J, (3.33)
∑_{j=1}^{J} xjk1 = d, k = 1, . . . , J, (3.34)
xjk1 ≥ 0, j, k = 1, . . . , J. (3.35)
Now let zjk1 = xjk1/d for j, k = 1, 2, . . . , J. Substituting this into (P7′) gives
minimize ∑_{j=1}^{J} ∑_{k=1}^{J} fjk1(d · zjk1)
subject to (P7′′)
∑_{k=1}^{J} zjk1 = 1, j = 1, . . . , J, (3.36)
∑_{j=1}^{J} zjk1 = 1, k = 1, . . . , J, (3.37)
zjk1 ≥ 0, j, k = 1, . . . , J. (3.38)
Note that constraints (3.36) and (3.37) together with zjk1 ∈ {0, 1} are the
constraints of the well-known assignment problem. If the transportation costs, fjk1(·),
are concave, then we will refer to (P7′′) as the concave assignment problem.
Proposition 4 The DSSP finds the optimal solution in 2 iterations for a concave
assignment problem.
Proof. If DSSP is used to solve (P7′), the following network flow problem is solved
in every iteration:
minimize ∑_{j=1}^{J} ∑_{k=1}^{J} cjk1 xjk1
subject to (P8)
∑_{k=1}^{J} xjk1 = d, j = 1, . . . , J, (3.39)
∑_{j=1}^{J} xjk1 = d, k = 1, . . . , J, (3.40)
xjk1 ≥ 0, j, k = 1, . . . , J. (3.41)
This is equivalent to solving the LP relaxation of a linear assignment problem.
Therefore, the solution to (P8) gives the optimal solution to (P7′). The objective
function coefficients in the next iteration of DSSP will be the same since x(1)jk1 = 0 or
x(1)jk1 = d for j, k = 1, 2, . . . , J. The same solution will be found in the second iteration
and DSSP will terminate with the optimal solution. □
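The argument can be illustrated numerically. In the sketch below (data made up, brute-force enumeration standing in for an assignment solver), the DSSP linearization replaces each concave arc cost f(x) = s·sqrt(x) by its chord slope f(d)/d; solving the resulting linear assignment problem returns an assignment that is also optimal for the concave objective, since every assignment ships exactly d on each chosen arc.

```python
import math
from itertools import permutations

d = 4.0
s = [[1.0, 3.0, 2.0],     # s[j][k]: concave cost scale on arc (j, k)
     [2.0, 1.0, 3.0],
     [3.0, 2.0, 1.0]]

def f(j, k, x):
    """Concave transportation cost with f(0) = 0."""
    return s[j][k] * math.sqrt(x)

def best_assignment(cost_jk):
    """Brute-force linear assignment on a square cost matrix."""
    J = len(cost_jk)
    return min(permutations(range(J)),
               key=lambda perm: sum(cost_jk[j][perm[j]] for j in range(J)))

# DSSP linearization: replace each arc cost by its chord slope f(d)/d.
slopes = [[f(j, k, d) / d for k in range(3)] for j in range(3)]
perm = best_assignment(slopes)
# Direct enumeration of the concave objective for comparison.
perm_opt = min(permutations(range(3)),
               key=lambda p: sum(f(j, p[j], d) for j in range(3)))
```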
3.4 Greedy Randomized Adaptive Search Procedure (GRASP)
The Greedy Randomized Adaptive Search Procedure is an iterative process that
provides a feasible solution at every iteration for combinatorial optimization problems
(Feo and Resende [20]). GRASP is usually implemented as a multi-start procedure
where each iteration consists of a construction phase and a local search phase. In
the construction phase, a randomized greedy function is used to build up an initial
solution. This solution is then used for improvement attempts in the local search
phase. This iterative process is performed for a fixed number of iterations and the
final result is simply the best solution found over all iterations. The number of
iterations is one of the two parameters to be tuned. The other parameter is the size
of the candidate list used in the construction phase.
GRASP has been applied successfully to a wide range of operations research
and industry problems such as scheduling, routing, logic, partitioning, location,
assignment, manufacturing, transportation, and telecommunications. Festa and Resende
[21] give an extended bibliography of GRASP literature. GRASP has been
implemented in various ways with some modifications to enhance its performance.
Here, we develop a GRASP for the PID problem. In the following, we discuss
generation of initial solutions. We give two different construction procedures. The
second construction procedure, called the modified construction phase, is developed
to improve the performance of the heuristic procedure.
3.4.1 Construction Phase
In the construction phase a feasible solution is constructed step by step, utilizing
some of the problem specific properties. Since our problems are uncapacitated
concave cost network flow problems the optimal solution will be a tree on the
network. Therefore, we construct solutions where each retailer is supplied by a single
facility. The construction phase starts by connecting one of the retailers to one of the
facilities. The procedure finds the facilities that give the lowest per unit production
and transportation cost for a retailer, taking into account the effect that already
connected retailers have on the solution.
The unit cost, θj, of assigning facility j to retailer k is fjkt(dkt)/dkt + rjt(pjt +
dkt)/(pjt+dkt). The cheapest connections for this retailer are then put into a restricted
candidate list (RCL) and one of the facilities from RCL is selected randomly. The size
of the RCL is one of the parameters that requires tuning. Hart and Shogan [36] and
Feo and Resende [19] propose a cardinality-based scheme and a value-based scheme
to build an RCL. In the cardinality-based scheme, a fixed number of candidates are
placed in the RCL. In the value-based scheme, all candidates with greedy function
values within (100α)% of the best candidate are placed in the RCL where α ∈ [0, 1].
We use the value-based scheme and a candidate facility is added to the RCL if its
cost is no more than a multiple of the cheapest one. Finally, when the chosen facility
is connected to the retailer, the flows on the arcs are updated accordingly. The
procedure is given in Figure 3–13. When α, in Figure 3–13, is zero the procedure is
totally randomized and the greedy function value, θj, is irrelevant. However, when α
is equal to 1 the procedure is a pure greedy heuristic without any randomization.
Holmqvist et al. [39] develop a similar GRASP for single source uncapacitated
MCCNFP. Their problems are different from ours because they use different cost
functions and network structures. Also, our approach differs from theirs in the greedy
function used to initialize the RCL. They use the actual cost, whereas we check the
unit cost of connecting retailers to one of the facilities. We have tested both
approaches, and due to the presence of fixed costs our unit-cost approach consistently
gave better results for the problems we have tested. The
results are presented in Chapter 4.
procedure Construct
for (t = 1, . . . , T ) do
pjt := 0, j = 1, . . . , J
xjkt := 0, j = 1, . . . , J, k = 1, . . . , K
for (k = 1, . . . , K) do
θj := fjkt(dkt)/dkt + rjt(pjt + dkt)/(pjt + dkt), j = 1, . . . , J
Θ := min{θj : j = 1, . . . , J}
RCL := {j : θj ≤ Θ/α}, 0 ≤ α ≤ 1
Select l at random from RCL
plt := plt + dkt
xlkt := dkt
end for k
end for t
return the current solution, S
end procedure
Figure 3–13: The construction procedure.
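A Python sketch of this construction phase follows. It is an illustration, not the dissertation's code, and it reads the value-based rule as "admit any facility whose unit cost is within a factor 1/α of the cheapest" (α = 1 pure greedy, α near 0 almost fully random), consistent with the discussion above; all names and the toy data are made up.

```python
import math
import random

def construct(r, f, d, alpha, rng):
    """GRASP construction sketch for a single period: assign each
    retailer to a facility drawn at random from a restricted candidate
    list (RCL) of near-cheapest facilities.  alpha must be in (0, 1]:
    alpha = 1 keeps only the cheapest facility (pure greedy); smaller
    alpha admits more candidates (more randomization)."""
    J, K = len(r), len(d)
    p = [0.0] * J          # production already committed to each facility
    assign = []
    for k in range(K):
        # Unit cost of serving retailer k from facility j, accounting
        # for the retailers already connected to j.
        theta = [f[j][k](d[k]) / d[k] + r[j](p[j] + d[k]) / (p[j] + d[k])
                 for j in range(J)]
        best = min(theta)
        rcl = [j for j in range(J) if theta[j] <= best / alpha]
        j = rng.choice(rcl)   # randomized greedy choice
        p[j] += d[k]
        assign.append(j)
    return assign, p

# Two facilities, two retailers, concave square-root costs.
r = [lambda q: 2.0 * math.sqrt(q)] * 2
f = [[lambda x: 1.0 * math.sqrt(x), lambda x: 3.0 * math.sqrt(x)],
     [lambda x: 3.0 * math.sqrt(x), lambda x: 1.0 * math.sqrt(x)]]
assign, p = construct(r, f, d=[4.0, 4.0], alpha=1.0, rng=random.Random(0))
```

With α = 1 the RCL is a singleton, so each retailer goes to its cheapest facility and the result is deterministic.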
The construction procedure in Figure 3–13 handles each period independently.
In other words, the initial solution created does not carry any inventory and demands
are met through production within the same period. A second approach is to allow
demand to be met by production from previous periods. The second approach led to
better initial solutions, but the final solutions after local search were worse compared
to the ones we got from the first approach. One explanation for this is that the initial
solutions in the second approach are likely already local optima. In the first approach,
however, we do not start with a very good solution in most cases but the local search
phase improves the solution. The local search approach which uses the construction
procedure described in this section will be referred to as LS-GRASP from here on.
3.4.2 Modified Construction Phase
In a multi-start procedure, if initial solutions are uniformly generated from the
feasible region, then the procedure will eventually find a global optimal solution
(provided that it is started enough times). If initial solutions are generated using
a greedy function, they may not be in the region of attraction of a global optimum
because the amount of variability in these solutions is typically smaller. Thus, the
local search will terminate with a suboptimal solution in most cases. GRASP tries
to find a balance between these two extremes by generating solutions using a greedy
function and at the same time adding some variability to the process. After the first
set of results we obtained by using the construction procedure explained in section
3.4.1, we wanted to modify our approach to diversify the initial solutions and increase
their variability.
In order to diversify the initial solutions we use the same greedy function but
we allow candidates with worse greedy values to enter the RCL. The modified
construction procedure is similar to the procedure given in Figure 3–13. The only
difference is in the way the RCL is constructed. The RCL in the modified construction
procedure is defined as
RCL := {j : Θ ≤ θj < Θ + ∆Θ}
where Θ = min{θj : j = 1, . . . , J}, ∆Θ = (Θ̄ − Θ)α, Θ̄ = max{θj : j = 1, . . . , J},
and 0 ≤ α ≤ 1.
After a fixed number of iterations Θ is updated (Θ := Θ + ∆Θ) so that the
procedure allows worse candidates to enter the RCL. Note that with this modification
the procedure randomly chooses the next candidate from a set of second-best
candidates. This is done until Θ ≥ Θ̄, at which point Θ is set back to min{θj :
j = 1, . . . , J}. In the rest of the dissertation the term LSM-GRASP will be used to
denote the local search approach with the modified construction procedure.
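The sliding window can be sketched as follows. This is an illustrative reconstruction of the rule described above (a half-open window that moves upward every few iterations and resets past the worst greedy value); the function and variable names are hypothetical.

```python
def modified_rcl(theta, lo, alpha):
    """Sketch of the modified RCL: candidates whose greedy value falls
    in the half-open window [lo, lo + dTheta), where dTheta is a
    fraction alpha of the spread max(theta) - min(theta).  Sliding lo
    upward admits progressively worse candidates; once the window
    passes the worst value, the threshold resets to the minimum."""
    lo_min, hi = min(theta), max(theta)
    d_theta = (hi - lo_min) * alpha
    window = [j for j, v in enumerate(theta) if lo <= v < lo + d_theta]
    next_lo = lo + d_theta
    if next_lo >= hi:                  # wrapped past the worst value
        next_lo = lo_min
    return window, next_lo

theta = [1.0, 2.0, 3.0, 5.0]
rcl1, lo = modified_rcl(theta, lo=min(theta), alpha=0.5)  # window [1, 3)
rcl2, lo = modified_rcl(theta, lo=lo, alpha=0.5)          # window [3, 5), then reset
```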
3.5 Lower Bounds
The local search procedures presented above give suboptimal but feasible
solutions to the PID problem. These feasible solutions are upper bounds to our
minimization problem. To evaluate the quality of the heuristics, when the optimal
solution value is unavailable, we need a lower bound. It is as important to get
good lower bounds as it is to get good upper bounds. For the fixed charge case,
the LP relaxation of (P2), unfortunately, gives poor lower bounds, particularly for
problems with large fixed charges. Therefore, we reformulated the problem, which
led to an increase in problem size but the LP relaxation of the new formulation
gave tighter lower bounds. For the piecewise-linear concave case we reported lower
bounds obtained from CPLEX in Chapter 4. The MILP formulation is run for a
certain amount of time and CPLEX may fail to find even a feasible solution by that
time, but a lower bound is available. For problems with piecewise-linear and concave
costs, we have also obtained lower bounds using the procedure given below. These
problems were first transformed into fixed charge network flow problems using the
node separation procedure (section 2.4.2). Once they are fixed charge type problems
the procedure provided below can be used to find lower bounds.
To reformulate the problem with fixed charge costs we define new variables which
help us decompose the amount transported from a facility to a retailer by origin.
Figure 3–14 gives a network representation for the new formulation.
Let yjτkt denote the fraction of demand of retailer k in period t, dkt, satisfied by
facility j through production in period τ . We can now rewrite xjkt, Ijt and pjt in
terms of yjτkt as
xjkt = ∑_{τ=1}^{t} yjτkt dkt, (3.42)
Ijt = ∑_{k=1}^{K} ∑_{τ=1}^{t} ∑_{l=t+1}^{T} yjτkl dkl, (3.43)
pjt = ∑_{k=1}^{K} ∑_{τ=t}^{T} yjtkτ dkτ. (3.44)
Equation (3.42) simply indicates that the amount shipped from facility j to retailer
k during period t is equal to part of the demand, dkt, satisfied from production at
facility j in periods 1 through t. Equation (3.43) denotes that the amount of inventory
Figure 3–14: Extended network representation.
carried at facility j from period t to t+1 is equal to the total demand of all retailers in
periods t+1 to T satisfied by facility j through production in periods 1 to t. Finally,
equation (3.44) indicates that the amount of production at facility j during period t
is equal to the amount of demand met from this production for all retailers in periods
t through T.
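Equations (3.42)–(3.44) can be sketched directly in code. The helper below is an illustrative sketch with 0-based indices and hypothetical names; it recovers x, I, and p from a y array and lets flow conservation be checked numerically.

```python
def recover_flows(y, d):
    """Recover (x, I, p) from y per equations (3.42)-(3.44), using
    0-based indices.  y[j][tau][k][t] is the fraction of demand d[k][t]
    served by facility j from production in period tau (tau <= t)."""
    J, K, T = len(y), len(d), len(d[0])
    # (3.42): shipment to retailer k in period t, summed over origins.
    x = [[[sum(y[j][tau][k][t] * d[k][t] for tau in range(t + 1))
           for t in range(T)] for k in range(K)] for j in range(J)]
    # (3.43): inventory out of period t = future demand already produced.
    I = [[sum(y[j][tau][k][l] * d[k][l]
              for k in range(K) for tau in range(t + 1)
              for l in range(t + 1, T))
          for t in range(T)] for j in range(J)]
    # (3.44): production in period t = all demand met from it.
    p = [[sum(y[j][t][k][tau] * d[k][tau]
              for k in range(K) for tau in range(t, T))
          for t in range(T)] for j in range(J)]
    return x, I, p

# One facility, one retailer, two periods; all demand produced in period 0.
d = [[2.0, 3.0]]                       # d[k][t]
y = [[[[1.0, 1.0]], [[0.0, 0.0]]]]     # y[0][tau][0][t]
x, I, p = recover_flows(y, d)
# Flow conservation holds: p[0][0] = x[0][0][0] + I[0][0] (5 = 2 + 3).
```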
Substituting (3.42), (3.43) and (3.44) into (P2) will give the following new
formulation:
minimize ∑_{j=1}^{J} ∑_{t=1}^{T} sjt yjt + ∑_{j=1}^{J} ∑_{k=1}^{K} ∑_{t=1}^{T} sjkt yjkt + ∑_{j=1}^{J} ∑_{k=1}^{K} ∑_{t=1}^{T} ∑_{τ=1}^{t} νjτkt yjτkt
subject to (P9)
∑_{j=1}^{J} ∑_{τ=1}^{t} yjτkt = 1, k = 1, . . . , K; t = 1, . . . , T, (3.45)
∑_{k=1}^{K} ∑_{τ=t}^{T} yjtkτ dkτ ≤ Wjt yjt, j = 1, . . . , J; t = 1, . . . , T, (3.46)
∑_{k=1}^{K} ∑_{τ=1}^{t} ∑_{l=t+1}^{T} yjτkl dkl ≤ Vjt, j = 1, . . . , J; t = 1, . . . , T, (3.47)
∑_{τ=1}^{t} yjτkt dkt ≤ Ujkt yjkt, j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T, (3.48)
yjτkt ≤ 1, j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; τ = 1, . . . , t, (3.49)
yjτkt ≥ 0, j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T; τ = 1, . . . , t, (3.50)
yjt, yjkt ∈ {0, 1}, j = 1, . . . , J; k = 1, . . . , K; t = 1, . . . , T. (3.51)
In the above formulation it can easily be shown that νjτkt = (cjτ + cjkt + ∑_{l=τ}^{t−1} hjl) dkt.
This indicates that νjτkt is the total variable cost (including production, inventory,
and distribution) if the demand of retailer k at time t is met through production
at facility j during time τ . Moreover, for uncapacitated problems the upper bounds
Wjt, Vjt, and Ujkt can be set equal to
Wjt = ∑_{k=1}^{K} ∑_{τ=t}^{T} dkτ, (3.52)
Vjt = ∑_{k=1}^{K} ∑_{τ=t+1}^{T} dkτ, (3.53)
Ujkt = dkt. (3.54)
Substituting (3.52), (3.53), and (3.54) into (3.46), (3.47), and (3.48), respectively,
will give the following constraints:
∑_{k=1}^{K} ∑_{τ=t}^{T} yjtkτ dkτ ≤ ∑_{k=1}^{K} ∑_{τ=t}^{T} dkτ yjt, (3.55)
∑_{k=1}^{K} ∑_{l=t+1}^{T} dkl (∑_{τ=1}^{t} yjτkl) ≤ ∑_{k=1}^{K} ∑_{l=t+1}^{T} dkl, (3.56)
∑_{τ=1}^{t} yjτkt ≤ yjkt. (3.57)
57
Constraints (3.55), (3.56), and (3.57) can, respectively, be replaced by the following
set of stronger constraints:
\[
y_{jtk\tau} \le y_{jt}, \tag{3.58}
\]
\[
\sum_{\tau=1}^{t} y_{j\tau kl} \le 1, \tag{3.59}
\]
\[
\sum_{\tau=1}^{t} y_{j\tau kt} = y_{jkt}. \tag{3.60}
\]
It is easy to show that (3.58), (3.59), and (3.60) imply (3.55), (3.56), and (3.57),
respectively. Moreover, (3.58) and (3.59) are clearly valid. The last constraint, (3.60), is based on a property of optimal solutions: in an optimal solution, each demand point is satisfied by exactly one facility. Therefore, we can force yjτkt to be binary, which shows that (3.60) is also valid. Using the new constraints we arrive at the
following formulation after some simplification:
minimize
\[
\sum_{j=1}^{J}\sum_{t=1}^{T} s_{jt}\,y_{jt}
+ \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{t=1}^{T}\sum_{\tau=1}^{t} \nu'_{j\tau kt}\,y_{j\tau kt}
\]
subject to (P10)
\[
\sum_{j=1}^{J}\sum_{\tau=1}^{t} y_{j\tau kt} = 1 \qquad k = 1,\dots,K;\; t = 1,\dots,T, \tag{3.61}
\]
\[
y_{j\tau kt} \le y_{j\tau} \qquad j = 1,\dots,J;\; k = 1,\dots,K;\; t = 1,\dots,T;\; \tau = 1,\dots,t, \tag{3.62}
\]
\[
y_{jt},\, y_{j\tau kt} \in \{0,1\} \qquad j = 1,\dots,J;\; k = 1,\dots,K;\; t = 1,\dots,T;\; \tau = 1,\dots,t. \tag{3.63}
\]
In the above formulation
\[
\nu'_{j\tau kt} = \Bigl(c_{j\tau} + c_{jkt} + \sum_{l=\tau}^{t-1} h_{jl}\Bigr) d_{kt} + s_{jkt}.
\]
Proposition 5 For uncapacitated PID problems with fixed charge costs the optimal
cost of the LP relaxation of (P10) is greater than or equal to the optimal cost of the
LP relaxation of (P2).
Proof. Let (yjt, yjτkt) be a feasible solution to (P10). Multiplying the constraints
(3.62) by dkt and summing them over k and t leads to (3.55). Constraints (3.56) can
be reached in a similar way. Substituting the yjτkt values into (3.60), (3.42), (3.43),
and (3.44) will give a feasible solution to (P2). It follows that every solution to the
LP relaxation of (P10) gives rise to a feasible solution of the LP relaxation of (P2).
□
CHAPTER 4
SUBROUTINES AND COMPUTATIONAL EXPERIMENTS
4.1 Design and Implementation of the Subroutines
One of the major components of this dissertation is the extensive amount of
computational results. Therefore, we wanted to make our subroutines available to other
researchers. The subroutines execute the heuristic approaches presented in Chapter
3. A set of files, referred to as the distribution, is available upon request. The
distribution includes the subroutines as well as some sample input and output files.
The following guidelines are used in the implementation of the subroutines. The
codes are written in standard C and are intended to run without modification on
UNIX platforms. For simplicity, only the subroutines for PID problems with fixed
charge costs are presented in this section and the following section. The subroutines
gr-pid.c and ds-pid.c (Table 4–1) apply local search with GRASP and local search with
DSSP, respectively. The optimizer in gr-pid.c is a self-contained set of subroutines.
However, ds-pid.c uses the callable libraries of CPLEX 7.0 to solve the linear network
flow problems. Input and output, as well as array declarations and parameter settings
are done independently, outside of the optimizer modules. The distribution consists
of eight files which are listed in Table 4–1. The files gr-pid.c and ds-pid.c are the core
of the package.
Table 4–1: The distribution.
subroutines      gr-pid.c       ds-pid.c
sample input     gr-param.dat   ds-param.dat
sample data      sample.dat
sample output    sample-gr.out  sample-ds.out
instructions     README.txt
Subroutine gr-pid.c calls two subroutines, TIME and free-and-null. Subroutine
TIME is used to measure the CPU time of the algorithm and subroutine free-and-null
is used to nullify the pointers created during the execution of the code. Subroutine
gr-pid.c takes the following as input from file gr-param.dat:
• the maximum number of iterations (max-iteration),
• a random seed (seed),
• the GRASP parameter α (alpha),
• a binary variable which denotes the format of the output (out),
• the file name that includes the input data (sample.dat), and
• the output filename (sample-gr.out).
After the parameters are read the following data are taken from the input file:
• the number of plants,
• the number of retailers,
• the number of time periods,
• the demands at the retailers, and
• the related cost values.
Before the GRASP iterations are executed, the value of the best solution found
is initialized to a large value. The while loop implements the construction phase of
the GRASP (section 3.4.1). Each solution constructed is kept in a linked list called
starts. The list keeps the solutions in increasing order of total cost and each time
a new solution is created it is added to the list if it is different from the existing
solutions.
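The sorted-list bookkeeping described above can be sketched as follows. This is an illustrative reconstruction, not the actual code from gr-pid.c: the node type and function names are invented, and a real solution record would store the flow assignment alongside its cost.

```c
#include <stdlib.h>
#include <math.h>

/* Sketch of the "starts" list: constructed solutions are kept in
 * increasing order of total cost, and a new solution is inserted only
 * if its cost differs from every cost already stored.  Here a solution
 * is represented by its cost alone. */
typedef struct node {
    double cost;
    struct node *next;
} node;

/* Insert a solution cost into the sorted list, discarding duplicates;
 * returns the (possibly new) head of the list. */
node *insert_sorted(node *head, double cost)
{
    node **p = &head;
    while (*p != NULL && (*p)->cost < cost)
        p = &(*p)->next;
    if (*p != NULL && fabs((*p)->cost - cost) < 1e-9)
        return head;                    /* duplicate cost: discard */
    node *n = malloc(sizeof *n);
    n->cost = cost;
    n->next = *p;
    *p = n;
    return head;
}

int list_length(const node *head)
{
    int len = 0;
    for (; head != NULL; head = head->next)
        len++;
    return len;
}
```

The same structure would serve for the locals list built during the local search phase.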
In the do loop following the construction phase, a local search (section 3.2.3) is
carried out in the neighborhood of each solution in the starts list. Each time a locally
optimal solution is found, it is added to a list called locals. The solutions in the list
are in increasing order with respect to their total costs. After the local search phase
is completed, an output file (e.g. Figure 4–3) is created which includes the following:
• the total CPU time,
• the number of plants,
• the number of retailers,
• the number of periods,
• total number of iterations,
• the initial seed,
• minimum cost before and after local search, and
• the actual amounts of flow on the arcs.
4.2 Usage of the Subroutines
The subroutines in files gr-pid.c and ds-pid.c compute an approximate solution to
a production-inventory-distribution problem using GRASP and DSSP, respectively.
The variables needed as input for gr-pid.c are as follows:
• max-iteration: This is the maximum number of GRASP iterations. It is an integer such that max-iteration>0. The default value for max-iteration is 128.
• seed: This is an integer in the interval [0, 2³¹ − 1]. It is the initial seed for the pseudo-random number generator. The default for seed is 1.
• alpha: This is the restricted candidate list parameter α, whose value is between 0 and 1 inclusive. It can be set to a value less than 0 or greater than 1 to indicate that each GRASP iteration uses a different randomly generated value. The default alpha value is -1.
• out: This is a binary variable which indicates the format of the output file. If it is equal to 1, then all production, inventory, and distribution values are printed in the output file. If out is equal to 0, then only the cost values are printed in the output file. The default is 0.
• in-file: This is a string that gives the name of the data file.
• out-file: This variable is also a string which gives the name of the file where the output should be written.
8
1
-1
1
sample.dat
sample-gr.out
Figure 4–1: Input file (gr-param.dat) for gr-pid.c.
The input variables for ds-pid.c are similar. The only variables needed are seed,
out, in-file, and out-file. All of the above variables are provided in an input file. An
example of an input file for gr-pid.c is given in Figure 4–1.
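The parameter file format of Figure 4–1 could be read along the following lines. This is a hypothetical sketch rather than the parsing code actually shipped in the distribution; the struct and function names are invented.

```c
#include <stdio.h>

/* Illustrative reader for a gr-param.dat-style file, which lists, one
 * value per line: max-iteration, seed, alpha, out, the data file name,
 * and the output file name. */
typedef struct {
    int    max_iteration;   /* > 0, default 128          */
    long   seed;            /* in [0, 2^31 - 1]          */
    double alpha;           /* RCL parameter; <0 or >1 = random per iteration */
    int    out;             /* 0: costs only, 1: all flow values */
    char   in_file[256];
    char   out_file[256];
} grasp_params;

/* Returns 0 on success, -1 on a malformed file. */
int read_params(FILE *fp, grasp_params *p)
{
    if (fscanf(fp, "%d %ld %lf %d %255s %255s",
               &p->max_iteration, &p->seed, &p->alpha,
               &p->out, p->in_file, p->out_file) != 6)
        return -1;
    return 0;
}
```

Applied to the file in Figure 4–1, this would yield 8 iterations, seed 1, a randomly chosen α per iteration (alpha = -1), and full flow output (out = 1).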
The subroutines gr-pid.c and ds-pid.c first read the variables from the input files
gr-param.dat and ds-param.dat, respectively. This is followed by reading in the data.
The data for subroutines gr-pid.c and ds-pid.c include the following:
• the number of production facilities in the network (plants),
• the number of demand points in the network (retailers),
• the planning horizon (periods),
• amount of demand at each retailer,
• variable production and distribution costs,
• fixed production and distribution costs, and
• unit inventory costs.
This data is provided in an in-file in the order given above. Figure 4–2 gives the
format of a sample in-file. The data is read into the following arrays:
• B: The size of this array is m, where m is the total number of nodes in the supply chain network (m = periods ∗ (plants + retailers) + 1). It is an array of real numbers that contains the demand information for the retailers. The first element of the array is equal to the total demand for all retailers over the planning horizon. In other words, this is the total supply. Note that the facilities do not have any demand.
• obj-c: The unit production, inventory, and distribution costs are stored in this array, whose size is n, where n is the total number of arcs in the network (n = plants ∗ periods ∗ (2 + retailers) − plants).
2 2 2
(demand and cost values, one per line)
Figure 4–2: Sample data file (sample.dat) for gr-pid.c and ds-pid.c.
• obj-s: This array contains the fixed production and distribution costs. This is an array of real numbers and its size is also n.
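The two size formulas quoted above can be captured in a pair of helpers (the function names are illustrative, not identifiers from the distribution):

```c
/* Number of nodes in the supply chain network:
 * one node per plant and per retailer in every period, plus one
 * dummy supply node. */
int num_nodes(int plants, int retailers, int periods)
{
    return periods * (plants + retailers) + 1;
}

/* Number of arcs: production and supply arcs, distribution arcs to
 * every retailer in every period, and inventory arcs between
 * consecutive periods. */
int num_arcs(int plants, int retailers, int periods)
{
    return plants * periods * (2 + retailers) - plants;
}
```

For the sample instance (2 plants, 2 retailers, 2 periods) this gives m = 9 and n = 14, and the arc formula reproduces the counts of Table 4–2 (e.g. problem 1, with 25 facilities, 400 retailers, and 1 period, has 25 · 1 · 402 − 25 = 10,025 arcs).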
Once all parameters that control the execution of the algorithm are set, the
optimization module starts. For GRASP, the optimization module is the iterative
construction and local search phases. For DSSP, the callable libraries of CPLEX
7.0 are used to solve the network flow problem at each iteration. The problem is
solved with a new set of linear costs each time as described in section 3.3. After the
optimization is successfully completed an output file is created as shown in Figure
4–3. If the optimization is not successful, an error message is returned.
4.3 Experimental Data
The primary goal of our computational experiments is to evaluate the solution
procedures developed and presented in Chapter 3. For a procedure that finds an
optimal solution, an important characteristic is the computational effort needed. For
a heuristic procedure, one is often interested in evaluating how close the solution value
GRASP for Production-Inventory-Transportation Problem
CPU time                          0.00
Number of plants                  2
Number of retailers               2
Number of periods                 2
Number of iterations              8
Initial seed                      1

Minimum cost before local search  1288.10
Minimum cost after local search   1288.10

Production   Period   Plant   Retailer
  55.61        1        2
  38.58        2        2
Inventory
Shipment
  19.98        1        2        1
  35.63        1        2        2
   8.67        2        2        1
  29.91        2        2        2
All other variables are zero.
Figure 4–3: Sample output (sample-gr.out) of gr-pid.c.
is to the optimal value. One method for evaluating these characteristics is running
experiments on real-world or library test problems. Real-world problems are useful
because they can accurately assess the practical usefulness of a solution procedure.
Library test problems are important because the properties of the problems and the
performance of other procedures are usually known for these problems. However, the
availability of data for either of these types of problem instances is limited. Moreover,
there may also be disadvantages of using real-world and library test problems, which
are briefly described later in section 4.5. Therefore, many research studies use
randomly generated problem instances as we do in this dissertation.
To test the behavior of the solution procedures developed for the PID problem we
primarily use randomly generated problems and some library test problems from the
literature. Unfortunately, we were not able to find library test problems that fit our
problem requirements and characteristics exactly. Therefore, we found other library
problems such as facility location problems and formulated them as PID problems.
Once they were formulated as PID problems, we tested the algorithms on these
problems.
4.4 Randomly Generated Problems
The behavior and performance of solution procedures are frequently tested on
a collection of problem instances. The conclusions drawn about these performance
characteristics strongly depend on the set of problem instances chosen. Therefore, it is
important that the test problems have certain characteristics to ensure the credibility
of the conclusions. Hall and Posner [33] gave suggestions on generating experimental
data which are applicable to most classes of deterministic optimization problems.
They developed specific proposals for the generation of several types of scheduling
data. They proposed variety, practical relevance, invariance, regularity, describability,
efficiency, and parsimony as some of the principles that should be considered when
generating test problems. We take these properties into account when generating our
data.
4.4.1 Problems with Fixed Charge Costs
The first set of problems we solved were PID problems with fixed charge
production and distribution costs. LS-DSSP and LS-GRASP were used to solve
these problems. For some of the problems LSM-GRASP was used. Problem size,
structure, and the cost values seem to affect the difficulty of the problems. Therefore,
we generated five groups of problems. Group 1 and Group 2 problems are single period
problems without inventory and Groups 3, 4, and 5 are multi-period problems. To
capture the effect of the structure of the network on the difficulty of the problems we
varied the number of facilities, retailers, and periods within each group. The problem
sizes are given in Table 4–2. Problems 1 through 5 are in Group 1, problems 6 to 10
are in Group 2, problems 11 to 15 are in Group 3, problems 16 to 20 are in Group 4,
and Group 5 consists of problems 21 to 25. A total of 1250 problems were solved.
It has been shown that the ratio of the total variable cost to the total fixed cost
is an important measure of the problem difficulty (see Hochbaum and Segev [38]).
Therefore, we randomly generated demands, fixed charges, and variable costs for
each test problem from different uniform distributions as given in Table 4–3. Note
that only the fixed charges are varied. For example, data set A corresponds to those
problems that have small fixed charges. The fixed charges increase as we go from
data set A to data set E. The distances between the facilities and the retailers are
used as the unit transportation costs (cjkt). For each facility and retailer we uniformly
generated x and y coordinates from a 10 by 10 grid and calculated the distances. The
values of the parameter α, which is used in the construction phase of LS-GRASP,
were chosen from the interval (0, 1). Experiments indicated that better results were
obtained for α in the interval [0.80, 0.85].
Table 4–2: Problem sizes for fixed charge networks.
Group  Problem  Facilities  Retailers  Periods    Arcs
  1       1         25         400        1      10,025
          2         50         400        1      20,050
          3         75         400        1      30,075
          4        100         400        1      40,100
          5        125         400        1      50,125
  2       6        125         200        1      25,125
          7        125         250        1      31,375
          8        125         300        1      37,625
          9        125         350        1      43,875
         10        125         400        1      50,125
  3      11         10          70       20      14,390
         12         15          70       20      21,585
         13         20          70       20      28,780
         14         25          70       20      35,975
         15         30          70       20      43,170
  4      16         30          60        5       9,270
         17         30          60       10      18,570
         18         30          60       15      27,870
         19         30          60       20      37,170
         20         30          60       25      46,470
  5      21         30          50       20      31,170
         22         30          55       20      34,170
         23         30          60       20      37,170
         24         30          65       20      40,170
         25         30          70       20      43,170
For each problem size and for each data set we randomly generated 10 problem
instances. The algorithms are programmed in the C language, and CPLEX 7.0
callable libraries are used to solve the linear NFPs and the MILPs. The algorithms
are compiled and executed on an IBM computer with 2 Power3 PC processors, 200
Mhz CPUs each.
The errors found for problems using data set A were the smallest. This is
expected since the fixed charges for this set are small. As the fixed charge increases,
the linear approximation becomes less accurate. Thus, the error
bounds for data set E were the highest. Nevertheless, all errors reported were less than
Table 4–3: Characteristics of test problems.
Data set   sjt, sjkt    cjt    hjt   dkt
   A          10-20     5-15   1-3   5-55
   B         50-100     5-15   1-3   5-55
   C        100-200     5-15   1-3   5-55
   D        200-400     5-15   1-3   5-55
   E       1000-2000    5-15   1-3   5-55
9%. All errors reported here are with respect to the lower bounds unless otherwise
stated. The lower bounds are obtained from the LP relaxation of the extended MILP
formulation given in section 3.5. The errors are calculated as follows:
\[
\text{Error} = \frac{\text{Upper Bound} - \text{Lower Bound}}{\text{Lower Bound}} \times 100.
\]
Table 4–4 reports the number of times the gap between the upper and lower
bounds was zero for data set A (note that 50 problems were solved for each group).
The number of times the heuristic approaches found the optimal solution may,
however, be more than those numbers reported in the table since the errors are
calculated with respect to a lower bound. For data set B, the errors were still small
(all were under 0.6%). LS-DSSP found errors equal to zero for 42 of the 50 problems
in Group 1, for 36 of the problems in Group 2, and only for 1 problem in Group 3.
All errors in Groups 4 and 5 were greater than zero. The number of errors equal to
zero using LS-GRASP, however, was 12 for Group 1 problems, only 1 for Group 4
problems, and none for the other groups. As the fixed charges were increased the
errors also increased.
Table 4–4: Number of times the error was zero for data set A.
          DSSP   GR 8   GR 16   GR 32   GR 64   GR 128
Group 1    50     26      28      29      32      33
Group 2    50     19      23      27      30      32
Group 3    42     24      25      26      29      30
Group 4    30     17      22      29      30      32
Group 5    29     15      18      20      22      23
GR 8: GRASP with 8 iterations, GR 16: GRASP with 16 iterations, etc.
The maximum error encountered for data set A was 0.02% for both heuristics.
The results indicate that LS-DSSP and LS-GRASP are both powerful heuristics for
these problems, but another reason for getting such good solutions is that the
problems in set A are relatively easy. We were actually able to find optimal solutions
from CPLEX for all problems in this set in a reasonable amount of time. On average,
LS-DSSP took less than a second to solve the largest problems in Groups 1 and 2.
LS-GRASP with 8 iterations took about 1.5 seconds for the same problems. Finding
lower bounds also took about a second, whereas it took more than 25 seconds to find
optimal solutions for each of these problems using CPLEX. The problems in Groups
3, 4, and 5 were relatively more difficult. Average times were 8, 10, 15, and 50 seconds
for LS-DSSP, LS-GRASP with 8 iterations, LP, and CPLEX, respectively. Optimal
solutions were found for all problems using data set B as well. However, for data set
C, optimal solutions were found for only Group 1, 2, and 3 problems and for some of
the problems in Groups 4, and 5. For data set D, optimal solutions were found for
some of the Group 1 and 2 problems. Finally, data set E was the most difficult set.
CPLEX was able to provide optimal solutions for only a few of the Group 1 and 2
problems and none for the other 3 groups.
From Figure 4–4 we can see that LS-DSSP gave better results for Group 1 and 2
problems and LS-GRASP gave slightly better results for problems in Groups 3, 4, and
5. Similar behavior was observed for the other data sets. The results also indicate
that LS-GRASP was more robust in the sense that the quality of the solutions was not
affected greatly by the network structure. The proposed local search also proved to be
powerful. For example, average errors for the problems in Figure 4–4-d without the
local search were roughly 2.9% for the DSSP and 11% for GRASP with 8 iterations.
If more iterations are performed, LS-GRASP finds better solutions. Figure 4–5
shows that the errors decrease as the number of iterations increase. However, the
computational time required by LS-GRASP increases linearly as more iterations are
Figure 4–4: Average errors for problems using data set D.
performed. Most of the improvement is achieved earlier in the search. Performing
more than 16 or 32 iterations does not seem to improve the solution quality by much.
This suggests that the construction phase provides little diversification. To overcome
this problem we modified our construction of solutions and tried to diversify the
solutions.
Figure 4–5: Number of GRASP iterations vs. solution quality.
With the modified construction procedure we try to generate solutions from
different parts of the feasible region. This is done by allowing the construction
of solutions that are not necessarily good. Our new criterion allows less desirable
candidates to enter the restricted candidate list. The modified construction procedure
was explained in section 3.4.2.
The best results were obtained for α in the interval [0.05, 0.20]. Since LS-GRASP
did relatively poorly for Group 1 and 2 problems, we present results in Table 4–5
that show the improvement achieved with the new implementation of LSM-GRASP.
To have a reasonable comparison, LSM-GRASP was run until 128 different solutions
were investigated. As we can see there is significant improvement over the LS-GRASP
approach. However, LS-DSSP still seems to be doing better for these problems and
also takes less time.
Table 4–5: Average errors for problems with data set E.
                 Error (%)                CPU Time (seconds)
           LS-     LSM-               LS-     LSM-
Problem    DSSP    GRASP   CPLEX     DSSP    GRASP    CPLEX
   1       0.02    3.86    0.82      0.32     4.14     4.55
   2       0.17    4.11    1.95      0.78     8.05     8.81
   3       0.24    3.55    1.72      1.14    11.47    13.12
   4       0.54    4.15    2.02      1.75    15.03    17.26
   5       0.56    3.86    2.00      2.85    19.08    22.17
   6       2.83    2.91    2.45      2.12     9.19    10.69
   7       1.75    3.43    2.18      2.17    11.79    13.57
   8       1.23    4.03    2.29      2.04    14.04    16.40
   9       0.86    3.58    2.04      2.32    16.28    19.03
  10       0.56    3.86    2.00      2.92    19.01    22.17
Problems with fixed charge costs and production capacities. The second set
of problems we solved were PID problems with fixed charge costs and production
capacities. Table 4–6 gives the set of problems solved in this section. Note that
the local search procedures we have developed are for uncapacitated problems. For
example, the construction phase of LS-GRASP and LSM-GRASP make use of the
property of an optimal solution for uncapacitated problems, i.e., an optimal solution
is a tree on the network. The local search procedure for LS-DSSP also uses the same
property of an uncapacitated PID problem. DSSP, however, is applicable to both
capacitated and uncapacitated PID problems. Therefore, we use only DSSP to solve
the problems generated in this section.
As soon as capacities are introduced, the problems become much more difficult.
The difficulty of the problems depends on how tight the capacities are. In order to
simplify the problems, we introduce only production capacities and we also assume
that the distribution costs are linear. In other words, the PID problems generated in
this section have fixed charge production costs, linear inventory and distribution costs,
Table 4–6: Problem sizes for fixed charge networks with capacity constraints.
Problem   Facilities   Retailers   Periods    Arcs
  26          5            30         30      4,795
  27          5            30         40      6,395
  28          5            30         50      7,995
  29         10            30         30      9,590
  30         10            30         40     12,790
  31         10            30         50     15,990
  32         10            40         30     12,590
  33         10            40         40     16,790
  34         10            40         50     20,990
and also capacities on the amount of production. These simplifying assumptions can
easily be relaxed since DSSP is a general procedure that can solve almost any PID
problem with network constraints and concave costs. Data set C in Table 4–3 is used
to generate the input data for these problems. The capacity on each production arc
for a given problem is a constant and it is calculated by
\[
\text{Capacity} = \frac{\text{Mean Demand} \times \text{Number of retailers}}{\text{Number of plants}} \times \xi,
\]
where ξ is a fixed parameter. If ξ is very large (ξ ≥ 2 * number of plants * number of
periods), then the problem is an uncapacitated PID problem. If, on the other hand,
ξ is very small, then the problem may not have any feasible solution. We varied the
values of ξ from 1.1 to 5. When ξ=1.1, there were several infeasible instances and
when ξ=5, the problems were practically uncapacitated.
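Under the reading that ξ multiplicatively scales the per-plant average demand, which is consistent with large ξ yielding an uncapacitated instance, the capacity rule can be sketched as follows (function names are illustrative):

```c
/* Every production arc of an instance receives the same capacity,
 * scaled by the tightness parameter xi: the per-plant share of average
 * retailer demand, times xi. */
double production_capacity(double mean_demand, int retailers,
                           int plants, double xi)
{
    return mean_demand * retailers / plants * xi;
}

/* The text's threshold: xi >= 2 * plants * periods makes the instance
 * effectively uncapacitated. */
int is_effectively_uncapacitated(double xi, int plants, int periods)
{
    return xi >= 2.0 * plants * periods;
}
```

For example, with mean demand 30 (the midpoint of U[5,55]), 30 retailers, 5 plants, and ξ = 2, each production arc would receive a capacity of 360.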
For each problem in Table 4–6 we generated 10 instances. We also tested 8
different ξ values. Thus, a total of 720 problems were solved. The average errors for
these problems are presented in Table 4–7 (an explanation of how these errors were
calculated is given below). As we can see from the table, the errors are all close to zero.
In fact, the maximum error among the 720 instances was 0.34%. Table 4–7 also shows
that the average errors are slightly higher when ξ=2. Similar behavior is observed in
Tables 4–8 and 4–9. Table 4–8 gives the average CPU time DSSP took to solve each
Table 4–7: Average errors for problems with production capacities.
                               Average Error (%)
Problem   ξ=1.1   ξ=1.125   ξ=1.15   ξ=1.2    ξ=2    ξ=3    ξ=4    ξ=5
  26      0.02     0.02      0.02     0.04    0.06   0.07   0.09   0.09
  27      0.03     0.05      0.04     0.04    0.13   0.10   0.14   0.13
  28      0.02     0.04      0.04     0.04    0.17   0.07   0.08   0.10
  29      0.06     0.08      0.07     0.09    0.20   0.16   0.10   0.14
  30      0.07     0.09      0.12     0.14    0.21   0.13   0.14   0.15
  31      0.11     0.12      0.13     0.13    0.23   0.15   0.16   0.17
  32      0.03     0.03      0.06     0.08    0.11   0.11   0.09   0.12
  33      0.05     0.06      0.08     0.06    0.15   0.11   0.10   0.09
  34      0.07     0.07      0.08     0.11    0.18   0.11   0.13   0.14
problem whereas Table 4–9 gives the average CPU times for CPLEX. The highest
CPU times for DSSP and CPLEX were observed when ξ=2 and ξ=1.2, respectively.
This indicates that the problems became more difficult as ξ was increased from 1.1
to 2, but as ξ was further increased the problems became easier again.
Table 4–8: CPU times of DSSP for problems with production capacities.
                               CPU Time (seconds)
Problem   ξ=1.1   ξ=1.125   ξ=1.15   ξ=1.2    ξ=2    ξ=3    ξ=4    ξ=5
  26      0.08     0.09      0.09     0.09    0.19   0.09   0.07   0.08
  27      0.11     0.13      0.11     0.13    0.21   0.12   0.11   0.10
  28      0.14     0.16      0.17     0.17    0.34   0.17   0.14   0.14
  29      0.21     0.17      0.21     0.21    0.39   0.20   0.17   0.15
  30      0.23     0.32      0.31     0.34    0.49   0.31   0.26   0.25
  31      0.37     0.38      0.36     0.46    0.66   0.37   0.30   0.32
  32      0.24     0.26      0.27     0.28    0.40   0.24   0.21   0.20
  33      0.39     0.40      0.43     0.41    0.60   0.42   0.35   0.33
  34      0.45     0.46      0.60     0.59    0.74   0.49   0.44   0.42
In our implementations CPLEX was run for at most 600 CPU seconds. If an
optimal solution is found within 600 CPU seconds, then the objective function value
of the solution obtained from DSSP is compared to the objective function value of
this optimal solution. In other words, the error is calculated as
\[
\text{Error} = \frac{\text{DSSP Solution Value} - \text{Optimal Solution Value}}{\text{Optimal Solution Value}} \times 100.
\]
However, if an optimal solution is not found within 600 CPU seconds, then CPLEX
returns a lower bound. In this case, the objective function value of the DSSP solution
is compared to the lower bound and the error is given by
\[
\text{Error} = \frac{\text{DSSP Solution Value} - \text{Lower Bound}}{\text{Lower Bound}} \times 100.
\]
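Both cases reduce to the same relative-gap computation against a reference value, either the optimal cost or the lower bound. A minimal helper (illustrative, not taken from the distribution):

```c
/* Relative gap of a heuristic solution value against a reference
 * value (optimal cost if CPLEX finished in time, its lower bound
 * otherwise), expressed as a percentage. */
double error_pct(double heuristic_value, double reference_value)
{
    return (heuristic_value - reference_value) / reference_value * 100.0;
}
```

Because a lower bound never exceeds the optimal cost, the lower-bound version can only overstate the true error, which is why the reported gaps are conservative.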
Some of the problems in Table 4–9 have average CPU times of 600. This means
that for these problems CPLEX was not able to find an optimal solution within our
time limit. Thus, the average errors reported in Table 4–7, corresponding to these
problems, are with respect to a lower bound. However, the errors are still small.
Table 4–9: CPU times of CPLEX for problems with production capacities.
                               CPU Time (seconds)
Problem   ξ=1.1   ξ=1.125   ξ=1.15   ξ=1.2    ξ=2    ξ=3    ξ=4    ξ=5
  26         4        6        7        8       9      7      3      2
  27         9       31       22      265      24      5      4      4
  28        16       40       34      124      69     15      8      7
  29       234      186      572      411     373     85     33      9
  30       553      600      600      600     600    352     71     28
  31       600      600      600      600     600    600    600     63
  32        40      137      600      600     235     69     17     14
  33       600      600      600      600     600    242     49     46
  34       600      600      600      600     600    600    108     55
Table 4–10 also gives us an idea about the difficulty of the problems. This table
shows the percentage of production arcs that have positive flow in the optimal solution
obtained from CPLEX. For example, when ξ=1.1, on average 93% of the production
arcs were used in an optimal solution (N/A denotes that an optimal solution was
not available). This indicates that the capacity constraints are so tight that there is
production at almost every facility in every period. In other words, the number of
feasible solutions that need to be investigated is small; thus, CPLEX does not take
too much time to solve these problems. As ξ increases, the percentage of production
arcs used in an optimal solution decreases. When ξ=5, this percentage drops down
to about 30% and these problems are practically uncapacitated.
Table 4–10: Percentage of production arcs used in the optimal solution.

                     Percentage of production arcs (%)
Problem   ξ=1.1   ξ=1.125   ξ=1.15   ξ=1.2    ξ=2    ξ=3    ξ=4    ξ=5
  26        94       91       90       87      56     43     37     34
  27        94       92       90       86      56     43     39     37
  28        93       91       89       86      56     42     37     34
  29        91       89       87       84      51     37     31     27
  30        91      N/A      N/A      N/A     N/A     37     30     27
  31       N/A      N/A      N/A      N/A     N/A    N/A    N/A     27
  32        91       89      N/A      N/A      52     39     32     30
  33       N/A      N/A      N/A      N/A     N/A     38     32     28
  34       N/A      N/A      N/A      N/A     N/A    N/A     32     29
4.4.2 Problems with Piecewise-Linear Concave Costs
We also tested our local search procedures on problems with piecewise-linear
concave costs. These problems were much more difficult compared to the problems
with fixed charge costs due to the increase in the problem size as the number of pieces
in each arc increases. The structure of the networks generated and the characteristics
of the input data are similar to those of the problems in section 4.4.1. The inventory
costs, hjt, are generated in exactly the same way, i.e., from a Uniform distribution
in the interval [1,3]. The demands are also uniformly generated from the same
distribution in the interval [5,55]. The transportation costs are also the same. The
same distances between the facilities and retailers are used as the variable costs and
the fixed costs are uniformly generated from different intervals as given in Table 4–3. The reason we generate fixed charge transportation costs is that the problems
we consider are uncapacitated concave minimization problems, which means that in
an optimal solution the flow on transportation arcs will be equal to either zero or
the demand. In other words, xjkt = 0 or xjkt = dkt in an optimal solution for all
j = 1, . . . , J , k = 1, . . . , K, and t = 1, . . . , T . Thus, if piecewise-linear and concave
costs are generated for transportation arcs they can be reduced to fixed charge cost
functions to simplify the formulation.
The production costs, however, are different since they are piecewise-linear
concave rather than fixed charge. To generate a piecewise-linear concave cost function
with ljt pieces for a given production arc we first divide the maximum possible flow on
that arc into ljt equal intervals. Next, we generate a variable cost for each piece. Note
that the variable costs should be decreasing (see equation 2.13) to guarantee concavity.
Therefore, ljt variable costs are uniformly generated from the interval [5,15] and then
they are sorted in decreasing order. The fixed cost for the first interval is randomly
generated from one of the five different uniform distributions given in Table 4–3. The
fixed cost of the other intervals can be calculated using the following equation:
\[
s_{jt,i} = s_{jt,i-1} + (c_{jt,i-1} - c_{jt,i})\,\beta_{jt,i} \qquad \forall i = 2,\dots,l_{jt}.
\]
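A sketch of this generation scheme for a single production arc, assuming βjt,i denotes the breakpoint between pieces i−1 and i, which is the choice that makes the piecewise cost function continuous; the function names are illustrative:

```c
#include <stdlib.h>

/* qsort comparator for descending order of doubles. */
static int desc(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);
}

/* Generate a piecewise-linear concave cost with l pieces of equal
 * width over [0, max_flow]: draw l slopes from U[5,15], sort them in
 * decreasing order (this guarantees concavity), take the given fixed
 * charge s1 for the first piece, and propagate the intercepts so the
 * function is continuous at each breakpoint.
 * c and s must each point to l doubles. */
void gen_concave_cost(int l, double max_flow, double s1,
                      double *c, double *s)
{
    for (int i = 0; i < l; i++)
        c[i] = 5.0 + 10.0 * ((double)rand() / RAND_MAX);
    qsort(c, l, sizeof c[0], desc);       /* decreasing variable costs */
    s[0] = s1;                            /* e.g. drawn per Table 4-3  */
    for (int i = 1; i < l; i++) {
        double beta = i * max_flow / l;   /* breakpoint before piece i */
        s[i] = s[i - 1] + (c[i - 1] - c[i]) * beta;
    }
}
```

With decreasing slopes the intercepts are nondecreasing, and at every breakpoint the costs of adjacent pieces coincide, so the resulting function is continuous and concave.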
For a given problem we assume that all production arcs have an equal number of
pieces, i.e., ljt = l for all j = 1, . . . , J and t = 1, . . . , T . We took the smallest
problems in each group in Table 4–2 and varied the number of pieces, which resulted
in Table 4–11. Table 4–11 summarizes the set of problems solved with piecewise-linear
concave costs. The last column gives the number of arcs in the extended network
after the arc separation procedure.
For each problem in Table 4–11 five different data sets are used. In other words,
the fixed charges are generated from five different uniform distributions as given in
Table 4–3 which leads to different problem characteristics. Ten problem instances
are solved for each data set and problem combination (a total of 750 problems).
Based on our observations from problems with fixed charge costs only LSM-GRASP
is used to solve these new problems that have piecewise-linear concave production
costs. The GRASP parameter α is chosen randomly from the interval [0.05, 0.20]
and 256 iterations are performed. These problems are more difficult compared to
those presented in section 4.4.1. Therefore, we ran more iterations of LSM-GRASP
to get good solutions at the expense of extra computational time. As shown in
Table 4–11: Problem sizes for piecewise-linear concave networks.
Group  Problem  Facilities  Retailers  Periods  Pieces  Arcs (org)  Arcs (ext)
  6       35        25         400        1        2      10,025      10,050
          36        25         400        1        4      10,025      10,100
          37        25         400        1        8      10,025      10,200
  7       38       125         200        1        2      25,125      25,250
          39       125         200        1        4      25,125      25,500
          40       125         200        1        8      25,125      26,000
  8       41        10          70       20        2      14,390      14,590
          42        10          70       20        4      14,390      14,990
          43        10          70       20        8      14,390      15,790
  9       44        30          60        5        2       9,270       9,420
          45        30          60        5        4       9,270       9,720
          46        30          60        5        8       9,270      10,320
 10       47        30          50       20        2      31,170      31,770
          48        30          50       20        4      31,170      32,970
          49        30          50       20        8      31,170      35,370
org: original network, ext: extended network after ASP
Tables 4–12, 4–13, 4–14, 4–15, and 4–16, LSM-GRASP took more time than LS-DSSP but gave slightly better results.
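The LSM-GRASP runs described above follow the usual multi-start pattern: each iteration draws α uniformly from [0.05, 0.20], builds a randomized greedy solution, improves it by local search, and keeps the best result. The skeleton below is a schematic sketch with placeholder callables, not the dissertation's actual implementation.

```python
import random

def lsm_grasp(instance, construct, local_search, cost, iterations=256, seed=None):
    """Schematic multi-start loop for LSM-GRASP.  `construct`,
    `local_search`, and `cost` are problem-specific callables
    (placeholders here); each iteration uses a freshly drawn
    greediness parameter alpha in [0.05, 0.20]."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        alpha = rng.uniform(0.05, 0.20)
        solution = local_search(construct(instance, alpha, rng))
        c = cost(solution)
        if c < best_cost:
            best, best_cost = solution, c
    return best, best_cost
```

More iterations improve the incumbent at the cost of proportionally more CPU time, which is the trade-off observed in the tables.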
CPLEX is used to solve the ASP formulations of the problems given in section
2.4.2. We limited the running time of CPLEX to 2,400 CPU seconds. If an optimal
solution is found within 2,400 seconds then CPLEX returns a lower bound and an
upper bound, both of which are equal to the optimal solution value. If an optimal
solution is not reached within 2,400 seconds, then CPLEX returns only a lower bound. For some problems CPLEX was not even able to find a feasible solution within the specified time limit, and therefore an upper bound may not be available. The errors reported
in Tables 4–12, 4–13, 4–14, 4–15, and 4–16 are the average errors of the upper bounds
obtained from LS-DSSP, LSM-GRASP, and CPLEX (if an upper bound is available)
with respect to the lower bound of CPLEX, calculated as follows:

Error = (Upper Bound − CPLEX Lower Bound) / (CPLEX Lower Bound) × 100.
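The error computation, including the N/A case in which CPLEX returns no upper bound, amounts to the following small routine (illustrative names):

```python
def relative_error(upper_bound, cplex_lower_bound):
    """Percentage gap of a heuristic (or CPLEX) upper bound with
    respect to the CPLEX lower bound, as used in Tables 4-12 to 4-16.
    Returns None when no upper bound is available (the N/A entries)."""
    if upper_bound is None:
        return None
    return (upper_bound - cplex_lower_bound) / cplex_lower_bound * 100
```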
Table 4–12: Summary of results for problems using data set A.

                 Error (%)                     CPU Time (seconds)
Problem   LS-DSSP  LSM-GRASP  CPLEX    LS-DSSP  LSM-GRASP     CPLEX
35           0.36       0.36   0.00       0.40      23.98     30.45
36           3.40       1.35   0.00       0.97      27.98    208.40
37           2.26       0.84   0.00       2.84      40.41    508.54
38           0.00       0.00   0.00       0.88      56.51     69.31
39           0.49       0.46   0.00       2.67      84.10    631.32
40           9.97       9.41  11.15       5.21      93.43  2,400.00
41           0.00       0.00   0.00       6.08     301.26    222.10
42           2.56       2.52   3.43      12.42     376.34  2,400.00
43          16.09      16.14  17.72      38.06     447.86  2,400.00
44           0.00       0.00   0.00       1.09      55.70     46.13
45^9         0.34       0.33   0.26       2.34      70.02  1,134.02
46          11.03      10.90  13.16       6.53     111.03  2,400.00
47           0.01       0.00   0.00      14.29     627.57  1,078.99
48           7.67       7.68   8.28      33.04     874.33  2,400.00
49          26.14      26.19    N/A      84.77   1,249.34  2,400.00
The superscripts over the problem numbers denote the number of instances out of
ten for which CPLEX found an optimal solution. If CPLEX finds optimal solutions in
all ten instances, then the average error for CPLEX will be zero and the superscript is
omitted. If CPLEX cannot find an optimal solution for any of the ten instances, then
the average error will be higher than zero and there will be no superscript over that
problem number. For example, in Table 4–12 problem 45 has a superscript 9 which
means that CPLEX found optimal solutions for 9 out of 10 instances. Problems 35,
36, 37, 38, 39, 41, 44, and 47 have no superscripts but the average errors for CPLEX
are all zero for these problems and this indicates that CPLEX found optimal solutions
for all instances in these problems. For the remaining problems (40, 42, 43, 46, 48,
and 49) CPLEX was not able to find an optimal solution for any of the instances. In
fact, for problem 49 CPLEX did not even find a feasible solution.
As the number of linear pieces in each production cost function increases, the problems seem to get more difficult, which is expected because the network increases
Table 4–13: Summary of results for problems using data set B.

                 Error (%)                     CPU Time (seconds)
Problem   LS-DSSP  LSM-GRASP  CPLEX    LS-DSSP  LSM-GRASP     CPLEX
35           0.25       0.24   0.00       0.46      22.22     76.50
36           2.32       1.19   0.00       1.08      29.51    811.98
37^8         2.56       1.82   2.39       3.35      44.30  1,499.32
38           0.08       0.02   0.00       1.51      60.04    268.35
39^1         3.13       3.01   5.40       3.81      88.01  2,304.37
40          18.15      17.77  49.22       8.89     105.70  2,400.00
41^4         0.06       0.05   0.02       7.30     293.44  2,106.00
42           6.15       6.10   7.12      16.33     395.21  2,400.00
43          21.76      21.74  23.56      48.04     487.02  2,400.00
44           0.07       0.06   0.00       1.29      57.91    142.50
45^3         4.03       3.90   4.65       3.15      73.99  2,296.52
46          17.05      16.71  20.40       7.64     109.98  2,400.00
47^1         0.22       0.23   0.28      18.33     661.42  2,388.10
48          14.37      14.20    N/A      42.54     910.27  2,400.00
49          23.63      23.37    N/A     114.22   1,243.62  2,400.00
in size. For example, in Table 4–13 problems 38, 39, and 40 have the same network structure and the same cost characteristics, but the numbers of pieces on their production arcs differ: 2, 4, and 8 for problems 38, 39, and 40, respectively, as given in Table 4–11. LS-DSSP and LSM-GRASP solve the problems on the original network; therefore, the increase in their CPU times is not as pronounced as the increase in the CPU times of CPLEX, which works on the larger extended network. For problem 38, CPLEX was able to find an optimal solution for all instances, and the average time spent was about 268 CPU seconds. The average error for LS-DSSP was 0.08%, obtained in about 1.5 seconds, while LSM-GRASP took 60 seconds and gave an average error of 0.02%. As the number of pieces increased, both the errors and the CPU times increased. For example, the average CPU times for LS-DSSP, LSM-GRASP, and CPLEX were 8.89, 105.7, and 2,400 seconds, respectively, for problem 40. It is clear that LS-DSSP was much faster than the other two for
all problems, but LSM-GRASP in general gave slightly better results. The average
errors for problem 40 were about 18% for LS-DSSP, 17% for LSM-GRASP, and 49%
for CPLEX.
Table 4–14: Summary of results for problems using data set C.

                 Error (%)                     CPU Time (seconds)
Problem   LS-DSSP  LSM-GRASP  CPLEX    LS-DSSP  LSM-GRASP     CPLEX
35           0.10       0.02   0.00       0.52      22.87    155.00
36^8         2.40       1.16   0.71       1.18      28.37  1,412.73
37^2         5.26       3.08   7.94       3.18      42.84  2,340.82
38           0.08       0.14   0.00       1.65      59.14    598.26
39           9.29       8.75  11.92       4.70      82.22  2,400.00
40          16.17      15.41    N/A       8.86     106.15  2,400.00
41           0.26       0.30   0.30       8.10     291.93  2,400.00
42           6.23       6.25   7.21      14.71     386.89  2,400.00
43          20.20      20.23  21.96      43.81     505.90  2,400.00
44           0.15       0.21   0.00       1.52      57.38    371.82
45           7.36       7.00   8.37       3.53      76.54  2,400.00
46          17.56      16.74  23.52       7.90     110.72  2,400.00
47           2.45       2.59    N/A      20.18     655.19  2,400.00
48          13.35      13.08    N/A      55.96     893.06  2,400.00
49          22.23      21.64    N/A     118.03   1,282.63  2,400.00
Another observation about Tables 4–12 to 4–16 concerns the CPU times. Note that the fixed charges increase as we go from Table 4–12 to Table 4–16, and the CPU times of CPLEX increase with them. The CPU times for LS-DSSP and LSM-GRASP, however, are not affected by the input data.
The errors reported in Tables 4–12 to 4–16 are with respect to either an optimal solution or a lower bound obtained from CPLEX. As these tables show, the errors are relatively large for problems 40, 43, 46, and 49. The production cost functions in these problems have eight pieces, which makes them difficult to solve, and the large errors are possibly due to poor lower bounds. Therefore, we first transformed these piecewise-linear and concave PID problems into fixed charge PID
Table 4–15: Summary of results for problems using data set D.

                 Error (%)                     CPU Time (seconds)
Problem   LS-DSSP  LSM-GRASP  CPLEX    LS-DSSP  LSM-GRASP     CPLEX
35           0.04       0.12   0.00       0.71      21.83    229.56
36^6         2.58       1.37   1.69       0.96      26.43  1,993.51
37           7.88       6.19  11.02       2.33      40.88  2,400.00
38^5         0.67       0.72   0.33       2.54      57.01  1,978.91
39           9.29       8.67    N/A       5.63      76.78  2,400.00
40          15.30      13.99    N/A       9.46      96.96  2,400.00
41           0.63       0.99   1.29       8.61     294.96  2,400.00
42           4.92       5.28   6.44      18.22     378.43  2,400.00
43          16.17      16.58  18.01      39.33     502.67  2,400.00
44^7         0.56       0.80   0.36       1.89      54.59  1,505.39
45           9.19       8.93   9.69       3.46      69.54  2,400.00
46          16.39      15.51    N/A       9.42     103.10  2,400.00
47           4.86       5.53    N/A      26.54     610.61  2,400.00
48          13.12      12.87    N/A      53.67     807.60  2,400.00
49          20.71      20.54    N/A     135.07   1,181.78  2,400.00
problems using the node separation procedure. Then, we used the extended problem formulation given in section 3.5 to obtain lower bounds, and we observed significant improvement in the errors. However, one drawback of the extended formulation is that the problem grows tremendously after the transformation; solving even the LP relaxations of these problems becomes computationally expensive. In fact, CPLEX ran out of memory when we attempted to solve problems 43 and 49. Table 4–17 gives the problem sizes for problems 40, 43, 46, and 49.
As we can see from Table 4–17, problems 43 and 49 have about 1.2 and 2.5 million variables, respectively, in their extended formulations. CPLEX was not able to solve these problems; however, we obtained good lower bounds for the other two. The errors for these problems are given in Table 4–18. For example, the errors reported in Table 4–12 for problem 40 were 9.97% for LS-DSSP, 9.41% for LSM-GRASP, and 11.15% for CPLEX, but those errors decreased to 0.88%, 0.71%, and 1.81%, respectively.
Table 4–16: Summary of results for problems using data set E.

                 Error (%)                     CPU Time (seconds)
Problem   LS-DSSP  LSM-GRASP  CPLEX    LS-DSSP  LSM-GRASP     CPLEX
35           0.16       0.76   0.00       0.79      18.82    456.69
36^5         0.36       0.60   0.34       1.05      22.42  1,817.66
37           3.59       3.85   4.72       2.27      30.00  2,400.00
38           3.83       4.33    N/A       3.17      53.45  2,400.00
39           7.30       7.15    N/A       5.23      67.91  2,400.00
40           9.18       8.88    N/A       9.57      82.76  2,400.00
41           1.93       3.06   3.88      12.03     276.38  2,400.00
42           3.58       4.73   5.99      20.38     323.73  2,400.00
43           6.75       7.58   9.15      43.04     411.45  2,400.00
44           3.75       4.46   3.40       3.15      52.33  2,400.00
45           9.17       9.35   9.80       4.51      65.20  2,400.00
46          12.34      12.03    N/A       8.16      83.65  2,400.00
47           9.25       9.77    N/A      40.08     587.33  2,400.00
48          12.72      13.68    N/A      70.12     750.53  2,400.00
49          15.92      16.04    N/A     112.23     959.70  2,400.00
4.5 Library Test Problems
As we mentioned in section 4.3, real-world problems and library test problems are useful, but as McGeoch [59] points out, it is usually difficult to obtain these kinds of problems. He also notes that library data sets may quickly become obsolete. Moreover, it is difficult to generalize results from a small set of problem instances. Another disadvantage of using library problems is that researchers may be tempted to tune their algorithms to work well on that particular set of instances. This may favor procedures that work poorly on general problems and disfavor procedures with superior performance (Hooker [40]).
We could not find any PID problems in the OR-library, but it does contain uncapacitated warehouse location problems. We formulated these as PID problems and used LS-DSSP and LSM-GRASP to solve them. The data files are located at http://mscmga.ms.ic.ac.uk/jeb/orlib/uncapinfo.html and contain the following information:
Table 4–17: Problem sizes for the extended formulation after NSP.

Problem  Facilities  Retailers  Periods  Pieces  Arcs (org)  Arcs (ext)
40          125         200        1       8       25,125      201,000
43           10          70       20       8       14,390    1,177,600
46           30          60        5       8        9,270      217,200
49           30          50       20       8       31,170    2,524,800

org: original network, ext: extended network formulation after NSP
Table 4–18: Summary of results for problems 40 and 46 after NSP.

                       Error (%)                LP-NSP CPU
Data Set  Problem  LS-DSSP  LSM-GRASP  CPLEX     (seconds)
A            40       0.88       0.71   1.81         6,761
             46       0.77       0.66   2.73         1,370
B            40       1.04       0.72  27.67         6,340
             46       0.78       0.48   3.69         1,662
C            40       1.17       0.51    N/A         6,053
             46       1.43       0.73   6.59         1,617
D            40       1.89       0.73    N/A         6,079
             46       1.48       1.39    N/A         2,350
E            40       1.65       1.41    N/A         7,126
             46       2.94       2.66    N/A         3,626
• number of potential warehouse locations,
• number of customers,
• the fixed cost of opening a warehouse,
• the demand of each customer, and
• the total cost of flowing all of the demand from a warehouse to a customer.
The warehouses in the location problems are treated as the facilities in the PID problem, and the customers are treated as the retailers. There is only a single period, and the fixed costs of opening the warehouses are used as the fixed production costs at the facilities. Note that the variable costs of production are zero. The distribution arcs, on the other hand, have linear cost functions, since there is no fixed cost of placing an order in the location problems. The results are summarized in Table 4–19.
Table 4–19: Uncapacitated warehouse location problems from the OR library.

                     Error (%)
Problem    DSSP   GRASP   LS-DSSP   LSM-GRASP
cap71      0.73    4.65      0.00        0.00
cap72      1.49    6.21      0.13        0.00
cap73      1.35    2.68      0.47        0.45
cap74      0.56    4.12      0.37        0.37
cap101     1.77    7.86      0.56        0.24
cap102     2.26    6.30      0.39        0.09
cap103     1.21    6.46      0.81        0.71
cap104     1.37    1.37      0.61        0.00
cap131     4.49    8.51      2.22        0.85
cap132     3.90    2.92      2.34        0.86
cap133     3.16    5.98      1.03        0.79
cap134     2.09    1.37      1.56        0.00
The errors in the first column of Table 4–19 are for the DSSP procedure when no local search is applied. Similarly, the second column gives the errors for GRASP with only the construction phase. The last two columns give the errors for LS-DSSP and LSM-GRASP. As the results show, the local search procedure improved the error bounds significantly. The errors for LSM-GRASP were all under 1%; LS-DSSP gave higher error bounds. In terms of CPU times, LS-DSSP was faster than LSM-GRASP. These are relatively small problems: the CPU times for LS-DSSP were about 0.01 seconds for all of them, while the CPU times for LSM-GRASP varied from 0.3 to 1.1 seconds.
CHAPTER 5
CONCLUDING REMARKS AND FURTHER RESEARCH
5.1 Summary
We have developed local search algorithms for the uncapacitated concave production-inventory-distribution (PID) problem. We first characterized the properties of an optimal solution to uncapacitated PID problems with concave costs, and then presented the DSSP and GRASP algorithms, which take advantage of these properties. DSSP and GRASP provide initial solutions for the local search procedure. Computational results are provided for test problems of varying size, structure, and cost characteristics. The first set of problems for which experimental results were presented consisted of uncapacitated PID problems with linear inventory costs and fixed charge production and distribution costs. The second set included PID problems with fixed charge production costs, linear inventory and distribution costs, and production capacities. Since our local search algorithms are designed for uncapacitated problems, only DSSP, which works for both capacitated and uncapacitated problems, was used to solve this set. The third set of computational results was for uncapacitated PID problems with piecewise-linear and concave production costs, fixed charge distribution costs, and linear inventory costs.
A general purpose solver, CPLEX, was used in an attempt to solve our problems
optimally. We compared the solution values obtained from LS-DSSP, LS-GRASP,
and LSM-GRASP to the solution values obtained from CPLEX. Due to the difficulty of the problems, CPLEX failed to find an optimal solution in most cases. In fact, for some problems CPLEX was not able to find even a feasible solution. We initially
compared our solution values to lower bounds obtained from CPLEX. However,
these lower bounds were poor, particularly for problems with high fixed charges
and for problems with piecewise-linear and concave costs. Therefore, we provided
extended formulations for these difficult problems and solved the LP relaxations
of these extended formulations. A disadvantage of these formulations is that the
problems grow in size and they become harder to solve. However, the lower bounds
obtained from the extended formulation were much tighter compared to the ones
obtained from CPLEX.
The computational experiments indicated that LS-DSSP was more sensitive to
problem structure and input data, but the errors obtained from LS-GRASP and
LSM-GRASP were more robust. LS-DSSP gave smaller errors for problems with
fixed charge costs, particularly for single period problems. LSM-GRASP, in general,
gave better results for problems with piecewise-linear and concave costs. However,
the CPU times required by LS-DSSP were much smaller compared to the CPU times
of LS-GRASP and LSM-GRASP. LS-DSSP has a natural stopping condition, but LS-
GRASP and LSM-GRASP can be run for many iterations. Thus, the GRASP approaches lead to better results if more iterations are performed, at the expense of extra computational time. The CPU times of the local search procedures with both DSSP and GRASP can be improved further, since they are multi-start approaches and are suitable for parallel implementation.
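Because the starts are independent, the multi-start structure parallelizes naturally: farm the starts out to workers and keep the best result. The sketch below uses a placeholder single-start routine standing in for the DSSP/GRASP construction and local search; it is an illustration of the structure, not the dissertation's code.

```python
from concurrent.futures import ThreadPoolExecutor

def one_start(seed):
    """One independent start: construct a solution and improve it by
    local search.  The body is a numeric placeholder; the real
    routines are the DSSP/GRASP construction and the local search of
    Chapter 3."""
    import random
    rng = random.Random(seed)
    cost = rng.uniform(0.0, 100.0)   # stands in for a constructed solution's cost
    return 0.5 * cost                # stands in for the local-search improvement

def parallel_multistart(n_starts, n_workers=4):
    """Run the starts concurrently and keep the best (minimum cost).
    A thread pool keeps the sketch simple; for CPU-bound starts a
    process pool would sidestep the interpreter lock."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return min(pool.map(one_start, range(n_starts)))
```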
The main contributions of this work can be summarized as follows:
• Several local search procedures are developed that provide solutions to uncapacitated PID problems with concave costs in a reasonable amount of time.

• An extended formulation is given for PID problems with fixed charge costs. It is also shown that the LP relaxation of the extended formulation gives lower bounds that are no worse than the lower bounds obtained from the LP relaxation of the original formulation.

• Extensive computational results are presented that show the effectiveness of the proposed solution approaches.
• Some special PID problems are identified for which one of the heuristic procedures finds an optimal solution.

• The codes developed for the implementation of the algorithms are made available for distribution.
The proposed solution approaches are useful in practice, particularly for production, inventory, and distribution problems in which the supply chain network is large. For example, a network with 30 facilities, 70 retailers, 10 time periods, and fixed charge costs is quite large. Finding optimal solutions for networks of this size, and even for smaller networks, may be computationally expensive or impossible. The computational results indicated that the heuristic approaches performed better when the number of retailers is much larger than the number of facilities. Therefore, we recommend the proposed algorithms for problems that can be modeled as PID problems with the characteristics mentioned above.
5.2 Proposed Research
5.2.1 Extension to Other Cost Structures
The solution approaches presented in Chapter 3, particularly GRASP, take advantage of the fact that an optimal solution is a tree in the supply chain network, which follows from the concavity of the cost functions. Therefore, a natural extension of the work presented here is to develop solution approaches for problems with more general cost structures (e.g., Figure 5–1), which may arise in various applications.

Kim and Pardalos [49] have in fact extended the DSSP idea to handle nonconvex piecewise-linear network flow problems. They developed a contraction rule that reduces the feasible domain of the problem by introducing additional constraints.
5.2.2 Generating Problems with Known Optimal Solutions
As mentioned earlier in section 4.4, selection and generation of test problems is
an important part of computational experiments. A large proportion of experimental
research on algorithms concerns heuristics for NP-hard problems as is the case in
Figure 5–1: Piecewise-concave cost functions.
this dissertation. One question often asked about an approximate solution is “How close is the objective value to the optimal value?” The obvious difficulty here is that optimal values for NP-hard problems cannot be computed efficiently unless P = NP. Therefore, generating instances for which the optimal solution is known is an attractive, though not necessarily easy, area of research.
Arthur and Frendewey [3] presented a simple and effective algorithm for randomly generating travelling salesman problems for which the optimal tour is known. Pilcher and Rardin [63] gave a more complex generator based on a random cut approach: they first discussed the random cut concept in general for generating instances of discrete optimization problems, and then provided details of its implementation for symmetric travelling salesman problems. The random cut concept they presented could be implemented to generate PID problems. This would provide a variety of test problems for evaluating heuristic approaches, and it would also motivate research in the area of cutting plane algorithms.
Another practical application of generating problems with known optimal solutions is to help organizations set prices for their products and services. For example, given a supply chain network, assume that due to certain regulations some of the routes must be used in an optimal solution. Rather than modeling these restrictions as constraints of the problem, one could set the production, inventory, and distribution costs in such a way that those routes are part of an optimal solution. Toll pricing is another good example: given a network of highways and roads, assume we want drivers to follow certain routes during certain hours of the day. How should the roads be tolled so that the desired flow is the optimal flow?
REFERENCES
[1] E.H.L. Aarts and J.K. Lenstra, editors. Local search in combinatorial optimization. John Wiley & Sons, Chichester, England, 1997.

[2] A. Aggarwal and J.K. Park. Improved algorithms for economic lot size problems. Operations Research, 41(3):549–571, 1993.

[3] J.L. Arthur and J.O. Frendewey. Generating travelling-salesman problems with known optimal tours. Journal of the Operational Research Society, 39(2):153–159, 1988.

[4] M. Avriel. Nonlinear programming: Analysis and methods. Prentice-Hall, Englewood Cliffs, NJ, 1976.

[5] M.S. Bazara and H.D. Sherali. On the use of exact and heuristic cutting plane methods for the quadratic assignment problem. Journal of the Operational Research Society, 13:991–1003, 1982.

[6] B.M. Beamon. Supply chain design and analysis: Models and methods. International Journal of Production Economics, 55:281–294, 1998.

[7] G.J. Bell, B.W. Lamar, and C.A. Wallace. Capacity improvement, penalties, and the fixed charge transportation problem. Naval Research Logistics, 46:341–355, 1999.

[8] F. Bock. An algorithm for solving traveling-salesman and related network optimization problems. Abstract in Bulletin of the 14th National Meeting of the Operations Research Society of America, page 897, 1958.

[9] S.J. Chung and K.G. Murty. Polynomially bounded ellipsoid algorithms for convex quadratic programming. In O.L. Mangasarian, R.R. Meyer, and S.M. Robinson, editors, Nonlinear programming 4, pages 439–485. Academic Press, New York, 1981.

[10] M.A. Cohen and A. Huchzermeier. Global supply chain management: A survey of research and applications. In S. Tayur, R. Ganeshan, and M. Magazine, editors, Quantitative models for supply chain management. Kluwer Academic Publisher, Boston, 1999.

[11] G.A. Croes. A method for solving traveling-salesman problems. Operations Research, 6:791–812, 1958.
[12] M. Diaby. Successive linear approximation procedure for generalized fixed-charge transportation problem. Journal of the Operational Research Society, 42:991–1001, 1991.

[13] D.-Z. Du and P.M. Pardalos, editors. Network optimization problems. World Scientific, Singapore, 1993.

[14] B. Eksioglu. Global supply chain models. In C.A. Floudas and P.M. Pardalos, editors, Encyclopedia of optimization, volume 4, pages 350–353. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2001.

[15] B. Eksioglu, S.D. Eksioglu, and P.M. Pardalos. Solving large-scale fixed charge network flow problems. In P. Daniele, F. Giannesi, and A. Mangeri, editors, Variational inequalities and equilibrium models. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2001.

[16] S.D. Eksioglu, P.M. Pardalos, and H.E. Romeijn. A dynamic slope scaling procedure for the fixed-charge cost multi-commodity network flow problem. In P.M. Pardalos and V.K. Tsitsiringos, editors, Financial engineering, e-commerce and supply chain. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2002.

[17] S.S. Erenguc, N.C. Simpson, and A.J. Vakharia. Integrated production/distribution planning in supply chains: An invited review. European Journal of Operational Research, 115:219–236, 1999.

[18] J.E. Falk. A linear max-min problem. Mathematical Programming, 5:169–188, 1973.

[19] T.A. Feo and M.G.C. Resende. A probabilistic heuristic for a computationally difficult set covering problem. Operations Research Letters, 8:67–71, 1989.

[20] T.A. Feo and M.G.C. Resende. Greedy randomized adaptive search procedures. Journal of Global Optimization, 6(2):109–133, 1995.

[21] P. Festa and M.G.C. Resende. GRASP: An annotated bibliography. In P. Hansen and C.C. Ribeiro, editors, Essays and surveys on metaheuristics. Kluwer Academic Publisher, Norwell, MA, 2001.

[22] M. Florian, J.K. Lenstra, and A.H.G. Rinnooy Kan. Deterministic production planning: Algorithms and complexity. Management Science, 26:669–679, 1980.

[23] C.A. Floudas. Deterministic global optimization: Theory, algorithms and applications. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2000.

[24] R. Freling, H.E. Romeijn, D. Romero Morales, and A.P.M. Wagelmans. A branch-and-price algorithm for the multi-period single-sourcing problem. Technical Report 99–12, Department of Industrial and Systems Engineering, University of Florida, 1999.
[25] G. Gallo and C. Sodini. Adjacent extreme flows and application to min concave cost flow problems. Networks, 9:95–121, 1979.

[26] M.R. Garey, D.S. Johnson, and L. Stockmeyer. Some simplified NP-complete problems. Theoretical Computer Science, 1:237–268, 1976.

[27] J. Geunes, P.M. Pardalos, and H.E. Romeijn, editors. Supply chain management: Models, applications, and research directions. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2002.

[28] S. Ghannadan, A. Migdalas, H. Tuy, and P. Varbrand. Heuristics based on tabu search and Lagrangean relaxation for the concave production-transportation problem. Studies in Regional and Urban Planning, 3:127–140, 1994.

[29] F. Giannessi and F. Niccolucci. Connections between nonlinear and integer programming problems. Symposia Mathematica, XIX:161–176, 1976.

[30] G.M. Guisewite and P.M. Pardalos. Minimum concave-cost network flow problems: Applications, complexity, and algorithms. Annals of Operations Research, 25:75–100, 1990.

[31] G.M. Guisewite and P.M. Pardalos. Algorithms for the single-source uncapacitated minimum concave-cost network flow problem. Journal of Global Optimization, 1:245–265, 1991.

[32] G.M. Guisewite and P.M. Pardalos. A polynomial time solvable concave cost network flow problem. Networks, 23:143–147, 1993.

[33] N.G. Hall and M.E. Posner. Generating experimental data for computational testing with machine scheduling applications. Operations Research, 49(7):854–865, 2001.

[34] P.L. Hammer. Some network flow problems solved with pseudo-boolean programming. Operations Research, 13:388–399, 1965.

[35] F.W. Harris. What quantity to make at once. In Operations and costs: Planning and filling orders, cost keeping methods, controlling your operations, standardizing material and labor costs, volume 5 of The Library of Factory Management, pages 47–52. A.W. Shaw Company, Chicago, 1915.

[36] J.P. Hart and A.W. Shogan. Semi-greedy heuristics: An empirical study. Operations Research Letters, 6:107–114, 1987.

[37] A.C. Hax and D. Candea. Production and inventory management. Prentice-Hall, Englewood Cliffs, NJ, 1984.

[38] D.S. Hochbaum and A. Segev. Analysis of a flow problem with fixed charges. Networks, 19:291–312, 1989.
[39] K. Holmqvist, A. Migdalas, and P.M. Pardalos. A GRASP algorithm for the single source uncapacitated minimum concave-cost network flow problem. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 40:131–142, 1998.

[40] J.N. Hooker. Testing heuristics: We have it all wrong. Journal of Heuristics, 1:33–42, 1995.

[41] R. Horst. A note on functions whose local minima are global. Journal of Optimization Theory and Applications, 36:457–463, 1982.

[42] R. Horst and P.M. Pardalos, editors. Handbook of global optimization. Kluwer Academic Publisher, Dordrecht, The Netherlands, 1995.

[43] R. Horst, P.M. Pardalos, and N.V. Thoai. Introduction to global optimization. Kluwer Academic Publisher, Dordrecht, The Netherlands, 1995.

[44] R. Horst and H. Tuy. The geometric complementarity problem and transcending stationarity in global optimization. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 4:341–354, 1991.

[45] D.S. Johnson, C.H. Papadimitriou, and M. Yannakakis. How easy is local search? Journal of Computer and System Sciences, 37:79–100, 1988.

[46] B.W. Kernighan and S. Lin. An efficient heuristic procedure for partitioning graphs. Bell System Technical Journal, 49:291–307, 1970.

[47] D.B. Khang and O. Fujiwara. Approximate solutions of capacitated fixed-charge minimum cost network flow problems. Networks, 21:689–704, 1991.

[48] D. Kim and P.M. Pardalos. A solution approach for the fixed charge network flow problem using a dynamic slope scaling procedure. Operations Research Letters, 24:195–203, 1999.

[49] D. Kim and P.M. Pardalos. A dynamic domain contraction algorithm for nonconvex piecewise linear network flow problems. Journal of Global Optimization, 17:225–234, 2000.

[50] D. Kim and P.M. Pardalos. Dynamic slope scaling and trust interval techniques for solving concave piecewise linear network flow problems. Networks, 35:216–222, 2000.

[51] B. Klinz and H. Tuy. Minimum concave-cost network flow problems with a single nonlinear arc cost. In D.-Z. Du and P.M. Pardalos, editors, Network optimization problems, pages 125–145. World Scientific, Singapore, 1993.

[52] H. Konno. A cutting plane algorithm for solving bilinear programs. Mathematical Programming, 11:14–27, 1976.
[53] B.W. Lamar and C.A. Wallace. Revised-modified penalties for fixed charge transportation problems. Management Science, 43:1431–1436, 1997.

[54] T. Larsson, A. Migdalas, and M. Ronnqvist. A Lagrangean heuristic for the capacitated concave minimum cost network flow problem. European Journal of Operational Research, 78:116–129, 1994.

[55] S. Lin. Computer solutions of the traveling salesman problem. Bell System Technical Journal, 44:2245–2269, 1965.

[56] S. Lin and B.W. Kernighan. An effective heuristic algorithm for the traveling salesman problem. Operations Research, 21:498–516, 1973.

[57] O.L. Mangasarian. Characterization of linear complementarity problems as linear programs. Mathematical Programming Study, 7:74–87, 1978.

[58] A.S. Manne. Programming of economic lot sizes. Management Science, 4:115–135, 1958.

[59] C.C. McGeoch. Toward an experimental method for algorithm simulation. INFORMS Journal on Computing, 8(1):1–15, 1996.

[60] P.G. McKeown and C.T. Ragsdale. A computational study of using preprocessing and stronger formulations to solve general fixed charge problems. Computers & Operations Research, 17:9–16, 1990.

[61] P.M. Pardalos and M.G.C. Resende, editors. Handbook of applied optimization. Oxford University Press, New York, 2002.

[62] P.M. Pardalos and S.A. Vavasis. Open questions in complexity theory for nonlinear optimization. Mathematical Programming, 57:337–339, 1992.

[63] M.G. Pilcher and R.L. Rardin. Partial polyhedral description and generation of discrete optimization problems with known optima. Naval Research Logistics, 39:839–858, 1992.

[64] M. Raghavachari. On connections between zero-one integer programming and concave programming under linear constraints. Operations Research, 17:680–684, 1969.

[65] S. Reiter and G. Sherman. Discrete optimizing. Journal of the Society for Industrial and Applied Mathematics, 13:864–889, 1965.

[66] H.E. Romeijn and D. Romero Morales. Asymptotic analysis of a greedy heuristic for the multi-period single-sourcing problem: The acyclic case. Technical Report 99–13, Department of Industrial and Systems Engineering, University of Florida, 1999.
[67] H.E. Romeijn and D. Romero Morales. An asymptotically optimal greedy heuristic for the multi-period single-sourcing problem: The cyclic case. Technical Report 99–11, Department of Industrial and Systems Engineering, University of Florida, 1999.

[68] H.E. Romeijn and D. Romero Morales. A greedy heuristic for a three-level multi-period single-sourcing problem. Technical Report 2000–3, Department of Industrial and Systems Engineering, University of Florida, 2000.

[69] H.E. Romeijn and D. Romero Morales. A probabilistic analysis of the multi-period single-sourcing problem. Discrete Applied Mathematics, 112:301–328, 2001.

[70] M. Sun, J.E. Aronson, P.G. McKeown, and D. Drinka. A tabu search heuristic for the fixed charge transportation problem. European Journal of Operational Research, 106:441–456, 1998.

[71] S. Tayur, R. Ganeshan, and M. Magazine, editors. Quantitative models for supply chain management. Kluwer Academic Publisher, Boston, 1999.

[72] D.J. Thomas and P.M. Griffin. Coordinated supply chain management. European Journal of Operational Research, 94(1):1–15, 1996.

[73] H. Tuy. The MCCNFP problem with a fixed number of nonlinear arc costs: Complexity and approximation. In P.M. Pardalos, editor, Approximation and complexity in numerical optimization, pages 525–544. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2000.

[74] H. Tuy. Strong polynomial solvability of a minimum concave cost network flow problem. ACTA Mathematica Vietnamica, 25(2):209–217, 2000.

[75] H. Tuy, N.D. Dan, and S. Ghannadan. Strongly polynomial time algorithms for certain concave minimization problems on networks. Operations Research Letters, 14:99–109, 1993.

[76] H. Tuy, S. Ghannadan, A. Migdalas, and P. Varbrand. The minimum concave cost flow problem with fixed number of nonlinear arc costs and sources. Journal of Global Optimization, 6:135–151, 1995.

[77] H. Tuy, S. Ghannadan, A. Migdalas, and P. Varbrand. A strongly polynomial time algorithm for a concave production-transportation problem with a fixed number of nonlinear variables. Mathematical Programming, 72:229–258, 1996.

[78] C.J. Vidal and M. Goetschalckx. Strategic production-distribution models: A critical review with emphasis on global supply chain models. European Journal of Operational Research, 98:1–18, 1997.

[79] H.M. Wagner. On a class of capacitated transportation problems. Management Science, 5:304–318, 1959.
[80] H.M. Wagner and T.M. Whitin. Dynamic version of the economic lot size model. Management Science, 5:89–96, 1958.
[81] S.D. Wu and H. Golbasi. Manufacturing planning over alternative facilities: Modeling, analysis, and algorithms. In J. Geunes, P.M. Pardalos, and H.E. Romeijn, editors, Supply chain management: Models, applications, and research directions, pages 279–316. Kluwer Academic Publishers, Dordrecht, The Netherlands, 2002.
[82] M. Yannakakis. Computational complexity. In E. Aarts and J.K. Lenstra, editors, Local search in combinatorial optimization, chapter 2, pages 19–55. John Wiley & Sons, Chichester, England, 1997.
[83] I. Zang and M. Avriel. A note on functions whose local minima are global. Journal of Optimization Theory and Applications, 16:556–559, 1976.
[84] W.I. Zangwill. Minimum concave cost flows in certain networks. Management Science, 14:429–450, 1968.
BIOGRAPHICAL SKETCH
Burak Eksioglu was born on February 14, 1972, in Adana, Turkiye. He received his high school education at Tarsus American College. In 1994, he was awarded a bachelor's degree in industrial engineering from Bogazici University in Istanbul, Turkiye. He received his master's degree from the University of Warwick in England in 1996. After his master's studies, he returned to Turkiye and worked for Marsa KJS as an Organization Development Specialist for about a year. He then began doctoral studies at the University of Florida and received his Ph.D. in December 2002. Burak Eksioglu is a member of the Tau Beta Pi Engineering Honor Society.