

ELSEVIER — European Journal of Operational Research 83 (1995) 253-270


Some thoughts on combinatorial optimisation

M.H. Bjørndal a,*, A. Caprara b, P.I. Cowling c, F. Della Croce d, H. Lourenço e, F. Malucelli f, A.J. Orman g, D. Pisinger h, C. Rego i, J.J. Salazar j

a Norwegian School of Economics and Business Administration, Bergen, Norway
b DEIS, Università di Bologna, Bologna, Italy
c Service de Mathématiques de la Gestion, Université Libre de Bruxelles, Brussels, Belgium
d DAI, Politecnico di Torino, Torino, Italy
e Departamento de Estatística e Investigação Operacional, Universidade de Lisboa, Lisbon, Portugal
f Dipartimento di Informatica, Università di Pisa, Pisa, Italy
g Faculty of Mathematical Studies, University of Southampton, Southampton, UK
h Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
i DI, Universidade Portucalense, Porto, Portugal
j DEIOC, Universidad de La Laguna, Tenerife, Spain

Abstract

A group of young researchers from the ESI X summer school, HEC, Jouy-en-Josas, 1994, give their personal views on the current status of, and prospects for, Combinatorial Optimisation. Several issues are considered and discussed, with emphasis on a selected number of techniques (heuristics and polyhedral approaches) and problems (knapsack, quadratic 0-1 programming, machine scheduling, routing and network design).

1. Introduction

Combinatorial Optimisation (CO) studies problems which are characterised by a finite number of feasible solutions. Although, in principle, the optimal solution to such a finite problem can be found by a simple enumeration, in practice this task is frequently impossible, especially for practical problems of realistic size where the number of feasible solutions can be extremely high. CO researchers study the structural properties of the problems and use these properties to devise both exact and approximate general solution techniques. In general, CO problems are classified according to their computational complexity. This worst-case analysis does not always reflect the actual computational tractability. For this reason, it is the real difficulty of the problems that drives the development of solution approaches.

* Corresponding author.

This paper is a review of some of the problems and solution methods of CO as seen through the eyes of the next generation of researchers. Though partial, and in some cases possibly naive, this review intends to address the questions usually faced by young researchers when entering the field. Common questions are: What is the future of CO? Why should a young researcher enter the field of CO? How can we promote CO? How can links between CO and real-world applications be strengthened? How can the interest of young students be generated by CO? How can the publication procedure improve the recognition of CO? How can links between CO and other academic disciplines, both mathematical and non-mathematical, be improved?

0377-2217/95/$09.50 © 1995 Elsevier Science B.V. All rights reserved. SSDI 0377-2217(95)00005-4

Given that CO comprises a multitude of areas, we decided to address the aforementioned questions by focusing on a selected number of techniques and problems. The topics were chosen as they combined the expertise of the authors and strong personal opinions, thus improving the clarity of the work.

Following this introduction, in Section 2, we consider Heuristic methods and their importance in the field of CO. Section 3 presents Polyhedral approaches, stressing their practical aspects. In Section 4, we will consider the classical Knapsack Problems, and discuss the structural properties that make these problems easily solvable. In Section 5, we will analyse Quadratic 0-1 Programming, and in particular we will try to put forward evidence for some of the large number of connections that exist between quadratic 0-1 programming and other CO fields. Machine Scheduling problems are introduced in Section 6 and their link with the industrial world explored. Different aspects of Routing and Distribution problems are discussed in Section 7. Section 8 considers the Design of Networks as an important application of CO. In Section 9, we summarise some of the most important ideas which emerge from the other sections, and thus try to answer our initial questions.

This is a joint work of ten authors from seven different European countries, which is unusual for a CO paper. The guidelines were devised and discussed during ESI X at Groupe HEC, Jouy-en-Josas, France, while for the most part it was written independently and communicated through e-mail. Consequently, there may be some inconsistencies between the various sections. As each section was written by different individuals, with their personal opinions, it was decided at the editing stage to leave each individual comment intact, and thus, we feel, enhance the overall appeal of the paper.

2. Heuristics

It is known, from extensive experience, that living beings can provide 'adequate' solutions to complex and ill-defined problems in the absence of complete information. Heuristics represent a natural projection of this mysterious ability of living beings to be able to solve problems automatically. However, in the field of real-world problem-solving, it is currently unreasonable to expect a computerised heuristic to be able to solve any problem without taking into account the structure and the properties of the problem. We must currently restrict ourselves to problems which we can model in a precise mathematical way. It would appear that problems without any precise formulation, for example, "Should I buy company X or leave my money in the bank?", "Should we lift the trade embargo on country Y, or should we consider a military invasion?", are well beyond our current reach. This is, possibly, a failure of our ability to formulate these questions in a precise mathematical way, but it would appear to be more likely that these problems simply do not currently have a precise mathematical formulation, and it will require quite astonishing advances in currently available mathematical tools before we can 'adequately' answer these problems in any real sense.

Within every area of mathematical problem-solving, heuristics are of central importance. In CO, heuristics play an even more obviously central role, since the heuristics developed in this discipline attempt, in many cases, to model the process by which a human would arrive at the solution 'by hand'. This is demonstrated clearly by the way in which many heuristics are discovered - by working through small cases of the problem manually in order to elucidate a general problem structure and solution technique. Powerful, general, modelling tools in combinatorial optimisation (mixed integer programming, graphs, matroids, etc.) give us the necessary framework to model real situations, sometimes with a very high degree of accuracy. The availability of powerful software to solve, particularly, models formulated as mixed integer programs, could mean that for all real-world problems we need simply to plug our model into such a piece of software and await the mathematically optimal results. However, for a large class of problems, general techniques will not generate a mathematically optimal solution within a reasonable time, and it is for these problems that heuristic techniques must be used to find solutions of a sufficiently high quality, which may of course be some way from the optimal solution of the model.

The time and space required by a heuristic may be measured in many ways. A large body of literature is concerned with determining the worst-case complexity of a heuristic method, that is, an upper bound on the amount of time or space required to provide a solution of a problem instance of size n, as a function of n. The models solved using these heuristics are often the mathematical core of real-world problems. Clearly this is an important measure. In many cases, however, there is a large discrepancy between the theoretical worst-case complexity and the time/space required to solve 'average' instances. A different measure of time complexity is provided by assuming some distribution on problem instances and computing the 'expected' complexity. In many cases, this technique yields results which are borne out in practice, yet a lingering doubt must remain that problems encountered in practice do not have 'easy' distributions. The development of new techniques in probability theory will, in the future, allay some of these fears. The complexity of the heuristics, and the size of problems which may be solved by them, increase relatively quickly as hardware speed increases, since most heuristics have essentially polynomial time complexity. Hardware advances are, in general, more important for heuristic techniques than for exact methods of solution: they make it possible to use more complex heuristic techniques to solve larger problem instances. For many real-world problems, the most meaningful measure of complexity is 'customer satisfaction'. We must ask ourselves whether we are finding an adequate solution within a time acceptable to the situation, for most or all instances.

Since heuristics do not, necessarily, generate an optimal solution, we have another important measure of performance, the solution quality. Ideally, here we should have a bound on just how far from the optimum a solution can be. However, this analysis may be beyond the current theory, or there may be strong evidence that such a result is not possible, unless P = NP [2]. We may then consider a series of experiments to determine solution quality. Here we compare the heuristically-generated solution against some measure of optimality. This measure may be an optimal solution determined using some other technique, a bound on the optimal solution value, or a solution generated by another heuristic, in particular a manual one. We feel that in the literature the importance of the first and second measures has sometimes been over-stressed, whereas for practical problems the third measure is more appropriate. A significant discussion on this matter is given by Roberts [45], where several indicators of solution quality are illustrated. There are many papers where comparisons with the optimal solution for unrealistically small instances are given, instead of a much more significant comparison with the third measure. This is particularly true in cases where the original model may, necessarily, have inaccuracies which lead a solution which is optimal in a mathematical sense to be sub-optimal in a subjective sense. It may also be true that the third measure is more appropriate since the 'function' mapping the model's solution to the real world may give rise to inaccuracies. It is worth mentioning that in [43] an attempt was recently made to aid referees in comparing articles describing new heuristics. Whilst each case must be considered on its merits, such a framework is useful for both the author and the referee.

The observation that many heuristics have a similar structure has led to the recent development of general meta-heuristics, such as Simulated Annealing [1], Tabu Search [17] and Genetic Algorithms [18]. These meta-heuristics give algorithms with an essentially user-definable complexity, since the user has great flexibility in deciding whether to trade solution quality for speed. Since these techniques are based on local search, they frequently do not require much problem-specific knowledge in order to generate good solutions. We believe that this is an exciting time to be in the field of combinatorial optimisation, since it seems likely that new and more powerful meta-heuristics will emerge in the coming years. In particular, we expect that meta-heuristics will become more widely available to practitioners through commercial software packages and libraries, in the same way as has happened for the meta-heuristic technique of constraint-based reasoning. Meta-heuristics also present a formidable theoretical challenge to the mathematical community, since despite very promising results from experimentation and practice, there are few papers addressing the scientific reasons why these techniques should indeed be effective. It seems that the use of probabilistic techniques might provide interesting results here, and we believe that significant strides will be made in this area in the next few years.
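To make the local-search structure shared by these meta-heuristics concrete, here is a minimal simulated-annealing sketch in Python; the toy objective, neighbourhood move and cooling parameters are our own illustrative choices, not taken from the references above.

```python
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=10.0, cooling=0.95,
                        steps_per_temp=100, t_min=1e-3, rng=None):
    """Generic simulated-annealing skeleton: always accept improving moves,
    accept worsening moves with probability exp(-delta / T), and lower the
    temperature T geometrically."""
    rng = rng or random.Random(0)
    current = best = initial
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            cand = neighbour(current, rng)
            delta = cost(cand) - cost(current)
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best

# Toy demonstration: minimise the number of adjacent equal bits in a
# bit string (a stand-in for a real combinatorial cost function).
def cost(x):
    return sum(1 for a, b in zip(x, x[1:]) if a == b)

def neighbour(x, rng):
    i = rng.randrange(len(x))        # flip one randomly chosen bit
    return x[:i] + (1 - x[i],) + x[i + 1:]

start = tuple([0] * 12)
sol = simulated_annealing(start, cost, neighbour)
```

The same skeleton works for any cost/neighbour pair; Tabu Search and Genetic Algorithms differ mainly in how candidate solutions are generated and accepted, not in this overall loop.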

Undergraduate courses in CO rarely have more than a very small introduction to heuristic techniques. In some respects, this reflects the need for students to develop a thorough grounding in basic principles which rarely give rise to sub-optimal solutions. Since most heuristics are based upon intuition, the inclusion of more detailed knowledge of selected heuristic techniques could provide additional motivation for students of CO. Furthermore, since many students will go on to solve problems outside Academia, such teaching would make clear to them that there exist sound techniques for solving even extremely hard, difficult-to-model problems. Many of these students will one day be in a position to decide whether it is possible to solve given real-world problems using computerised methods, so academics clearly have much to gain in terms of increased collaboration with industry if knowledge of heuristic approaches is more widely appreciated.

A discipline which is in some way related to heuristic techniques in CO is Artificial Intelligence. We note that unfortunately there are very poor ties between researchers in CO and Artificial Intelligence. It should be recognised that there is significant common ground in these two areas, so that further duplication of effort can be avoided.

3. Polyhedral approaches

A Combinatorial Optimisation Problem (COP) requires finding a best element in a finite set S, with respect to some objective function. While for some COP's polynomial algorithms are known, for the remaining ones the exact solution approaches known so far require 'brute force' in the worst case, i.e. exploring all the elements in S. In order to avoid an exhaustive enumeration, a tree search strategy and a bounding procedure are needed. The tree search strategy plays a relevant role in any exact algorithm, and different implementation choices can lead to different performance. Nevertheless, for hard problems, the bounding procedure seems to be the core of each method, in the sense that bad choices provide extremely inefficient algorithms.

The bounding procedure is based on the exact solution of a Relaxed Problem, obtained from the original problem by dropping or simplifying some constraints. As the relaxed problem gets 'closer' to the original problem, the bound provided improves, but usually at the cost of greater computational effort. Classical branch-and-bound approaches are often based on relaxed problems which are quite easy to solve. Their performance is satisfactory on some problems for which known relaxations provide a bound very close to the optimum, e.g. Knapsack, Asymmetric Travelling Salesman and Single Machine Scheduling. On the other hand, for many COP's, there is a large gap between the optimum and the optimal solution value of classical relaxations, and therefore the exact algorithms based on these relaxations are not satisfactory. Moreover, the availability of bound values close to the optimum is also necessary for certifying the quality of a given heuristic.

The aim of Polyhedral Combinatorics is to provide relaxations which are very 'close' to a given COP by using linear programming techniques. In a sense, linear programming helps in providing methods to take into account constraints which are usually dropped in classical relaxations. More precisely, the original problem has to be initially formulated as an integer linear programming problem, where the optimal solution corresponds to a best vertex of the polytope


defined as the convex hull of feasible points. There exists a complete description of this polytope by means of a finite number of linear constraints. Then, an optimal vertex could in principle be found by solving a Linear Programming Problem (LP). Polyhedral combinatorics deals with this LP, by using good theoretical results (such as those in duality theory) and the effective algorithms known for linear programming. In practice, there are two main problems in solving a hard COP as an LP. Firstly, it is extremely difficult to know all the linear constraints that are needed. Secondly, the number of these constraints is exponential; of course, only a polynomial number of them is required to define an optimal solution, but it is not known how to select them. Polyhedral combinatorics is divided into two parts, each one trying to overcome one of these difficulties: Polyhedral Theory and Polyhedral Computation [23,39].

Polyhedral theory is a very old branch of mathematics which studies the geometric structure of polyhedra. In the last twenty-five years, some operational researchers have been using polyhedral theory to give a complete description of the polytopes associated with some COP's. Many nice results have been derived for different problems [44].

Polyhedral computation uses the results from polyhedral theory for solving COP's; in particular, it deals with a set F of constraints which describe (completely or partially) an associated polytope. This discipline has been developed mainly in the last ten years, in parallel with the great improvements in computers and LP solvers. These tools are very effective in managing large LP's, with thousands of constraints. However, this may not be enough to solve the LP's provided by polyhedral theory, which contain a huge number of constraints. In order to overcome these difficulties, the original LP is solved by iteratively solving LP's containing only a small subset of the constraints of the original one, and by using Separation Procedures.

Separation procedures solve the following Separation Problem: given the optimal solution x* to an LP containing only a subset of the set of constraints F, find a constraint in F which is violated by x*, or prove that none exists. When F is a complete linear description of the polytope associated with a given COP, a very important result states that the original problem is polynomially solvable if and only if the corresponding separation problem is [20]. Then, for an NP-hard problem, one cannot expect the separation problem to be efficiently solvable. Nevertheless, some heuristic algorithms (or exact algorithms for separating over a subset F0 of F) can be used, and many of them have made it possible to solve several real-life problems to optimality. The early key results in this area are the work of Edmonds [12] for the matching problem and the development of the ellipsoid method for linear programming [28].
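As a concrete illustration of a heuristic separation procedure, the following sketch separates cover inequalities for a 0-1 knapsack polytope. The instance and the fractional point x* are invented for the example; a real code would obtain x* from the current LP relaxation.

```python
def separate_cover(weights, capacity, x_star):
    """Heuristic separation of cover inequalities for the 0-1 knapsack
    polytope.  A cover C is an item set whose total weight exceeds the
    capacity; its inequality  sum_{j in C} x_j <= |C| - 1  is violated by
    x_star iff  sum_{j in C} (1 - x_star[j]) < 1.  Greedy heuristic: add
    items in order of increasing 1 - x_star[j] until the weight exceeds
    the capacity, then test for violation."""
    order = sorted(range(len(weights)), key=lambda j: 1.0 - x_star[j])
    cover, w = [], 0
    for j in order:
        cover.append(j)
        w += weights[j]
        if w > capacity:
            break
    else:
        return None  # all items together fit: no cover exists
    violation = sum(1.0 - x_star[j] for j in cover)
    return cover if violation < 1.0 - 1e-9 else None

# An invented fractional LP point; items 0 and 1 form a violated cover.
weights = [6, 5, 4, 3]
capacity = 10
x_star = [1.0, 1.0, 0.5, 0.0]
cut = separate_cover(weights, capacity, x_star)
```

Note that exact separation of cover inequalities is itself a knapsack-type problem, which is why heuristic separation routines such as this greedy one are common in practice.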

The above considerations have been converging into cutting-plane based approaches such as the well-known branch-and-cut proposed by Padberg and Rinaldi [40], which have been successfully used for solving several COP's, such as Linear Ordering, Quadratic 0-1 Programming, Clique Partitioning, Symmetric Travelling Salesman, Set Partitioning and Asymmetric Travelling Salesman. It is interesting to notice that the polyhedral approach has been used not only to solve problems having mainly a theoretical interest, but also to tackle real-life problems. For example, the Set Partitioning code developed by Hoffman and Padberg [26] is intended to solve airline crew scheduling problems to optimality, since "... even small percentage savings amount to substantial dollar amounts". Another interesting observation is that the most important aim in polyhedral combinatorics is not simply to derive theoretical polyhedral results, but to use the results to develop computationally effective algorithms. Many people believe that the core of polyhedral combinatorics is finding facets, but in most cases the focus is on finding good inequalities and thus speeding up the algorithm. In this direction, some of the approaches are intended to introduce inequalities that cut off not only fractional points but also uninteresting feasible solutions.

Our experience suggests that polyhedral combinatorics could be an interesting subject for advanced university courses, since it is an immediate application of mathematical programming and graph theory techniques to integer programming, and it also exemplifies how sophisticated mathematical results can be used for solving practical problems.

As to the future prospects, there are many hard COP's for which polyhedral approaches are being tried, with the objective of obtaining good results where the classical methods have failed. However, it is not known whether these approaches will be successful for well-known difficult problems such as Machine Scheduling or Quadratic Assignment. These problems do not have a straightforward 'good' integer linear programming formulation, and this seems to be the main difficulty, which cannot be overcome by the polyhedral approach.

4. Knapsack Problems

Knapsack Problems have been intensively studied since the emergence of Operational Research, both because of their immediate applications in industry and financial management and, more especially, for theoretical reasons, as Knapsack Problems often occur by relaxation of different integer programming problems, e.g. by surrogate relaxation of the Set Covering Problem. In such applications, we need to solve a Knapsack Problem each time a lower bound is derived, demanding extremely fast solution times.

The problems in the Knapsack family all require a subset of n given items to be chosen such that the corresponding profit sum is maximised without exceeding the capacity c of the knapsack(s). Different types of Knapsack Problems occur depending on the distribution of items and knapsacks: in the 0-1 Knapsack Problem (KP) each item may be chosen at most once, while in the Bounded Knapsack Problem we have a bounded amount of each item type. The Multiple-choice Knapsack Problem occurs when the items should be chosen from disjoint classes and, if several knapsacks are to be filled simultaneously, we get the Multiple Knapsack Problem. The most general form is the Multi-constrained Knapsack Problem, which basically is a general IP problem with positive coefficients. For an overview of several Knapsack Problems and their applications, see Martello and Toth [33].

Although Knapsack Problems, from a theoretical point of view, are almost intractable as they belong to the family of NP-hard problems, several of the problems may be solved to optimality in fractions of a second. This surprising result is the outcome of several decades of research which has exposed the special structural properties of Knapsack Problems that make the problems so easily solvable.

The most important property of Knapsack Problems is that tight upper bounds on the objective value may be found by solving the continuous relaxed problem. The variables are simply ordered according to their profit-to-weight ratio, such that a greedy algorithm may be used for filling the knapsack. An important result actually states that the continuous KP may be solved in O(n) time without sorting the items (see [33] for a discussion). The existence of tight and quickly obtainable upper bounds makes it possible to develop effective branch-and-bound algorithms for the exact solution of the problems.
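This greedy bound (often called the Dantzig bound) can be sketched as follows; the instance is invented, and the O(n) variant mentioned above would replace the sort by median-finding.

```python
def dantzig_bound(profits, weights, capacity):
    """Upper bound on the 0-1 KP from the continuous (LP) relaxation:
    sort by non-increasing profit-to-weight ratio and fill greedily; the
    first item that no longer fits (the 'break' item b) is taken
    fractionally, which gives the continuous optimum."""
    order = sorted(range(len(profits)),
                   key=lambda j: profits[j] / weights[j], reverse=True)
    value, room = 0.0, capacity
    for j in order:
        if weights[j] <= room:
            room -= weights[j]
            value += profits[j]
        else:
            # break item: take the fraction of it that still fits
            return value + profits[j] * room / weights[j], j
    return value, None  # everything fits: the relaxation is integral

bound, break_item = dantzig_bound([10, 7, 4], [5, 4, 3], 8)
```

For this toy instance the continuous bound is 15.25 while the best integer solution has value 14, illustrating how tight the relaxation typically is.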

Another essential property is that, having solved the continuous relaxed problem, generally only a few decision variables need to be changed in order to obtain the optimal solution. For the 0-1 KP, this property may be illustrated as follows: order the variables according to their profit-to-weight ratios, and solve the continuous relaxed problem, denoting the fractional variable by b. In order to obtain the integer-optimal solution, generally only very few variables need to be changed, and those variables are generally very close to b. Based on this observation, Balas and Zemel [5] proposed that only a few variables around b are considered in order to solve the KP to optimality. This problem was denoted the core problem and has been an essential part of all efficient algorithms for Knapsack Problems. However, the way the core is chosen is important, since degeneration may occur.

By using dynamic programming, Knapsack Problems are solvable in pseudopolynomial time, with the exception of the Multiple and Multi-constrained KP. The dominance relations are generally very efficient, making it possible to fathom several infeasible states and, by incorporating bounding tests in the dynamic programming, very efficient algorithms may be developed. Since the problems are pseudopolynomially solvable, fully polynomial-time approximation schemes are a natural spin-off from these techniques.
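The pseudopolynomial dynamic program for the 0-1 KP can be sketched in a few lines; it runs in O(n·c) time, polynomial in the magnitude (not the encoding length) of the capacity c. The bounding and dominance tests mentioned above would be layered on top of this basic recursion.

```python
def knapsack_dp(profits, weights, capacity):
    """Pseudopolynomial DP for the 0-1 knapsack:
    best[w] = maximum profit achievable with total weight at most w."""
    best = [0] * (capacity + 1)
    for p, wt in zip(profits, weights):
        # traverse weights downwards so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + p)
    return best[capacity]

value = knapsack_dp([10, 7, 4], [5, 4, 3], 8)
```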

Another important property of Knapsack Problems is that they are separable, as observed by Horowitz and Sahni [27], which means that a 0-1 KP may be solved in O(2^(n/2)) worst-case time, thus giving an improvement over a complete enumeration by a factor of a square root. Although this bound is still exponential, the consequence of this observation is that we may solve a 0-1 KP through parallel computation by recursively dividing the problem into two parts. The resulting algorithm runs in O(log n log c) time, which is probably the best one can hope for, as mentioned in [29], but the number of processors required is huge.
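The splitting idea can be sketched as follows (a sequential version; the parallel algorithm recursively applies the same decomposition). The instance is invented for illustration.

```python
import bisect

def knapsack_split(profits, weights, capacity):
    """Horowitz-Sahni splitting: enumerate all feasible subsets of each
    half of the items (2^(n/2) per half), reduce the second half to its
    Pareto-optimal (weight, profit) pairs, and match every first-half
    subset with the best compatible second-half subset."""
    n = len(profits)
    half = n // 2

    def subsets(idx):
        idx = list(idx)
        pairs = []
        for mask in range(1 << len(idx)):
            wsum = psum = 0
            for k, j in enumerate(idx):
                if mask >> k & 1:
                    wsum += weights[j]
                    psum += profits[j]
            if wsum <= capacity:
                pairs.append((wsum, psum))
        return pairs

    first = subsets(range(half))
    second = sorted(subsets(range(half, n)))
    # make profits non-decreasing along increasing weight (Pareto front)
    pareto, best_profit = [], 0
    for wsum, psum in second:
        best_profit = max(best_profit, psum)
        pareto.append((wsum, best_profit))

    best = 0
    for wsum, psum in first:
        # heaviest second-half entry still fitting next to this subset
        i = bisect.bisect_right(pareto, (capacity - wsum, float('inf'))) - 1
        if i >= 0:
            best = max(best, psum + pareto[i][1])
    return best
```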

For all the Knapsack Problems, efficient reduction algorithms have been developed, which enable one to fix some decision variables at their optimal values, thus considerably decreasing the size of an instance. Basically, these tests may be viewed as a special case of the branch-and-bound technique; for each 0-1 variable, we test both branches, fathoming one of them if a bounding test shows that a better solution cannot be found. See Martello and Toth [33] for a thorough treatment of reduction techniques.
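A sketch of such a reduction test, using the continuous greedy bound to try to fix variables. The instance and the incumbent value 13 (as if produced by a heuristic) are invented for illustration; a real implementation would share the sorting between calls rather than re-sort per item.

```python
def dantzig_ub(profits, weights, capacity, skip, start_profit):
    """Continuous (LP) upper bound over all items except `skip`,
    starting from an already-collected profit `start_profit`."""
    order = sorted((j for j in range(len(profits)) if j != skip),
                   key=lambda j: profits[j] / weights[j], reverse=True)
    value, room = start_profit, capacity
    for j in order:
        if weights[j] <= room:
            room -= weights[j]
            value += profits[j]
        else:
            return value + profits[j] * room / weights[j]
    return value

def reduce_items(profits, weights, capacity, incumbent):
    """For each item j: if the bound with x_j forced to 1 (resp. 0)
    cannot beat the incumbent value, x_j may be fixed to 0 (resp. 1)."""
    fixed = {}
    for j in range(len(profits)):
        if weights[j] > capacity or dantzig_ub(
                profits, weights, capacity - weights[j], j,
                profits[j]) <= incumbent:
            fixed[j] = 0  # no improving solution takes item j
        elif dantzig_ub(profits, weights, capacity, j, 0) <= incumbent:
            fixed[j] = 1  # every improving solution takes item j
    return fixed

fixed = reduce_items([10, 7, 4], [5, 4, 3], 8, incumbent=13)
```

Here the test proves that any solution better than 13 must include item 0, shrinking the instance before branch-and-bound starts.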

Captivated by these nice properties of Knapsack Problems, one might get the idea of solving any Integer Programming problem through reduction to a 0-1 KP. Actually, techniques have been developed which transform a general Integer Programming problem to a 0-1 KP by merging constraints. However, such transformations introduce exponentially growing coefficients in the Knapsack Problems and, as the 0-1 KP is pseudopolynomially solvable, this means that we actually get the worst-case solution times for those problems. In this connection, it should be mentioned that Chvátal [9] proved that if all coefficients of a Knapsack Problem are exponentially growing, and if the profit equals the weight for each item (the so-called subset-sum data instances), then no bounding and no dominance relations will stop the enumerative process before at least 2^(n/10) nodes have been enumerated, thus implying strictly exponentially growing computational times.

Seen in this light, perhaps too much effort has previously been spent on the solution of easy data instances. Recent research has thus been concentrated on the solution of hard Knapsack Problems. Lexicographic search has been used for solving the so-called strongly correlated data instances but, as this technique is not applicable to general Knapsack Problems, Martello and Toth [34] developed an algorithm based on cutting-plane techniques for generating additional constraints to the problem. Bounds for the tightened problem are obtained through Lagrangian relaxation. Finally, [41] has shown promising solution times for hard Knapsack Problems by using dynamic programming on an expanding core problem in combination with enumerative upper bounds.

From an industrial point of view, the main issue in Knapsack Problems is that easy problem types (0-1 KP, etc.) have been intensively studied, although real-life problems usually are considerably more complex. The Multiple KP, which is very important in naval applications, has only been considered by a few authors. Since results obtained for the easy Knapsack Problems have been so good, and widely known, this has discouraged young researchers from entering the field in spite of a variety of challenges. Future research should be concentrated on the solution of complex Knapsack Problems as well as hard data instances. The prospects are quite bright, as results from the easier Knapsack Problems may immediately be propagated to the harder problems. For instance, in [33] the proposed branch-and-bound method for the Multiple KP requires the solution of a 0-1 KP each time an upper or lower bound is determined.

As current algorithms generally behave well for some instances and poorly for others, future research should focus on the development of robust algorithms for several Knapsack Problems, which are able to solve even strongly correlated data instances efficiently. These algorithms may be based on cutting-plane techniques in connection with enumerative techniques, where

260 M.H. Bjorndal et aL / European Journal of Operational Research 83 (1995) 253-270

dynamic programming has the best prospects due to its pseudopolynomial time bounds. Enumerative upper bounds may also lead to better bounding criteria. The appearance of adaptive algorithms, where all steps of the solution process are performed as needed, may lead to algorithms which are fast for easy problems and robust for difficult ones, avoiding critical parameters such as the expected core size and thresholds on the enumeration. On the other hand, it does not seem promising to apply the recent meta-heuristics to Knapsack Problems: primarily because good heuristics already exist, and secondly because none of the current meta-heuristics is able to fully exploit the special structural properties of Knapsack Problems.
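The pseudopolynomial dynamic programme referred to above can be sketched in a few lines. This is a textbook version on made-up data, not a reconstruction of any of the cited algorithms:

```python
def knapsack_01(profits, weights, capacity):
    """Classic O(n * capacity) dynamic programme for the 0-1 Knapsack
    Problem -- pseudopolynomial, since the table size depends on the
    numeric value of the capacity rather than on the input length."""
    dp = [0] * (capacity + 1)
    for p, w in zip(profits, weights):
        # scan capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[capacity]

# subset-sum style instance: profit equals weight for each item
print(knapsack_01([3, 4, 5], [3, 4, 5], 7))  # -> 7
```

Note how subset-sum data (profit = weight) makes all bounds coincide, which is precisely why such instances defeat bounding and dominance tests in enumerative methods.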

5. Quadratic 0-1 Programming

Quadratic 0-1 Problems (QP) have a central role in CO. From a practical point of view, a large number of problems can be formulated as the maximisation of quadratic real-valued functions in 0-1 variables (pseudo-Boolean functions) [25]. A significant example is that maximum satisfiability can easily be reduced to QP. For that reason, QP has been referred to as 'the mother of all CO problems'. Other constrained 0-1 quadratic problems, such as the Quadratic Knapsack, Quadratic Assignment and Quadratic Semi-Assignment problems, arise from countless applications. From a theoretical point of view, the study of quadratic 0-1 optimisation is very interesting due to its many connections with other fields of optimisation, such as graph theory and various branches of continuous optimisation, and also because of the large variety of techniques that can be used to approach the problem.

Since all the problems mentioned are, in general, very difficult to solve, and because of their practical relevance, much effort has been spent on finding heuristic algorithms, which attain good solutions although these are not guaranteed to be optimal. Usually, a crucial point is to assess the quality of the solutions found by the heuristics since, for most problems, the upper bounds on the optimal solution value are very loose.

A pseudo-Boolean quadratic function is usually defined as

f(x) = x^T Q x + c^T x,

where x ∈ {0,1}^n, Q ∈ R^{n×n} is an upper triangular matrix and c ∈ R^n is a column vector. The problem of finding the maximum of f(x) is, in general, NP-hard. However, there are some classes of instances which are solvable in polynomial time: for example, when all the coefficients of Q are nonnegative, the problem can be solved by means of a min-cut algorithm.
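A minimal sketch of the definition above, on an arbitrary made-up instance, with brute-force enumeration that is viable only for tiny n (which is precisely why the structural results discussed in this section matter):

```python
import itertools

def f(x, Q, c):
    """Pseudo-Boolean quadratic function f(x) = x^T Q x + c^T x,
    with Q an upper triangular matrix and x a 0-1 vector."""
    n = len(x)
    quad = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return quad + sum(c[i] * x[i] for i in range(n))

def brute_force_max(Q, c):
    """Enumerate all 2^n binary vectors and return the best value
    together with a maximiser."""
    n = len(c)
    return max((f(x, Q, c), x) for x in itertools.product((0, 1), repeat=n))

# small made-up instance (upper triangular Q)
Q = [[0, 2, -3],
     [0, 0,  1],
     [0, 0,  0]]
c = [1, -1, 2]
best_val, best_x = brute_force_max(Q, c)
```

On this instance several binary points attain the maximum value of 2, illustrating that QP instances can have many optima.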

A possible way to approach QP is to transform it into an equivalent problem where the quadratic coefficient matrix defining the objective function is positive semi-definite [16], that is, making the objective function convex. In this way, the continuous relaxation of the problem is equivalent to the original one since, due to convexity, all the optimal solutions will be integer; hence the problem can be approached by means of non-differentiable optimisation methods. However, this equivalence can induce some unpleasant properties: for example, it can happen that all the integral points (vertices of [0,1]^n) are local minima with respect to the Euclidean neighbourhood. Nevertheless, this approach remains interesting, as this kind of relation can transfer results between the two fields. For example, the study of easily solvable instances in the 0-1 case can suggest interesting insights in the continuous case, and vice versa.

An important aspect is the relation between quadratic 0-1 programming and graph theory. Hammer [24] proved the equivalence between quadratic 0-1 optimisation and the problem of finding a maximum cut in a weighted graph. This transformation allows all the solution techniques and properties devised for the max cut to be exploited in studying quadratic 0-1 problems. In particular, Barahona et al. [6] apply a polyhedral approach to solve quite large instances of QP efficiently. All the heuristic algorithms for the max cut can also be applied effectively to QP. Another important result which derives from the equivalence with the max cut is the definition of a class of problems solvable in polynomial time. In fact, for graphs which are not contractible to K5, the max cut can


be found in polynomial time [7]. This result directly applies to QP.

There is also another relation between quadratic 0-1 programming and graph theory: in [25] the authors prove the equivalence between QP and the problem of finding a maximum-weight stable set in a graph (vertex packing). The graph used is called SAM (Stable And Matching) because its nodes can be partitioned into two subsets such that the first subset identifies a stable set, while the set of edges with both endpoints in the second subset defines a matching. The problem of finding a vertex packing is, in general, NP-hard, but here too there are some easy cases, such as when the graph is bipartite. The equivalence between QP and vertex packing is exploited to transform the problem and to obtain upper bounds which can be computed very efficiently [25].

A classical way to approach a quadratic problem is to linearise the objective function. One possible way is studied in [38], where the problem is viewed from a polyhedral point of view, and hence the facets and characteristics of the quadric polytope are studied. The results obtained by studying the quadric polytope directly can, in practice, be interpreted on the corresponding cut polytope obtained after transforming the problem. In particular, the results on the easy cases coincide: in [7] it is proved that, when the graph defined by the nonzero elements of Q is series-parallel, QP is solvable in polynomial time. Looking at the corresponding max-cut problem, we can see that, when Q defines a series-parallel graph, the graph of the max-cut problem is not contractible to K5, which is an easily solvable case for the max cut. Another interesting linearisation is studied by Rhys [46].
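The basic idea of linearisation can be illustrated with the standard product linearisation, a generic textbook device (not necessarily the specific formulation of [38] or [46]): each product x_i x_j is replaced by a new 0-1 variable y_ij, constrained so that y_ij = x_i x_j at every binary point. A small sketch verifying this:

```python
import itertools

def linearisation_ok(xi, xj, yij):
    """Standard constraints replacing the product x_i * x_j by a new
    0-1 variable y_ij in a linearised model:
        y_ij <= x_i,  y_ij <= x_j,  y_ij >= x_i + x_j - 1."""
    return yij <= xi and yij <= xj and yij >= xi + xj - 1

# at every binary point the constraints force y_ij = x_i * x_j exactly
for xi, xj in itertools.product((0, 1), repeat=2):
    feasible = [y for y in (0, 1) if linearisation_ok(xi, xj, y)]
    assert feasible == [xi * xj]
```

The linear programming relaxation of these constraints is what the polyhedral study of the quadric polytope then strengthens with facet-defining inequalities.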

Other elegant approaches to the problem, which can be used to obtain upper bounds, are the roof dual and quadratic complementation. In this case too, interesting relations with other CO fields can be given. In [25] the authors prove that the upper bounds obtained by computing the roof dual and the quadratic complement of QP are equal to the bounds obtained from Rhys' linearisation and from the fractional weighted stability problem of the corresponding SAM graph. This allows all the bounds to be computed very efficiently using a maximum flow algorithm on a bipartite graph. The study of the relations between these four different problems suggests some important properties, such as those related to persistency, which allows the values of some variables in an optimal solution to be fixed.

This kind of reasoning introduces another topic which has been studied for quadratic 0-1 programming, namely necessary and sufficient optimality conditions. Optimality conditions can be very important in verifying the quality of the solutions found by the great number of heuristic procedures devised for QP. Unfortunately, it has been proved that recognising the optimality of a given feasible solution cannot be easier than solving the problem [8]. Also, on the front of recognising local optima, the results are not exciting: in fact, checking the second-order Kuhn-Tucker conditions, which completely characterise the local optima of any general quadratic programming problem, is NP-complete. However, all the results on optimality conditions can be exploited to give some indication of the quality of a solution, and we think that this could be a challenging field for future research, in particular because it can provide a method for testing heuristic solutions alternative to the branch-and-bound approach.

Let us conclude this section with some general considerations and impressions about the world of quadratic 0-1 programming. Quadratic 0-1 programming is a way of modelling that is extremely powerful and simple at the same time; many practical problems can be modelled by means of a QP or other constrained quadratic 0-1 problems. The resulting formulations are extremely clear and simple. Unfortunately, this class of problems has the bad reputation of being extremely difficult to solve. This reputation very often discourages new researchers from investing time and effort in trying to make a significant contribution to this field. This is in spite of the fact that, as we briefly saw in this section, there are many connections between QP and almost all the other fields of CO. Most of the recent work proposed to the scientific community is mainly devoted to applications of standard heuristics,


such as Tabu Search, Simulated Annealing or similar techniques. The lack of new contributions to this field means that the same people are involved in the study of QP, and thus the growth of new ideas and cross-fertilisation is difficult. However, we are confident that the multiplicity of connections with other CO fields will bring to the world of QP not only knowledge and new ideas, but also new researchers. This could also be made easier by a cautious introduction of QP into university teaching programmes.

6. Machine Scheduling

Among the branches of CO, Machine Scheduling is probably the one most directly linked to the industrial world. It is concerned with the optimal application of scarce resources (machines) to activities over time. Interest in machine scheduling theory originated in the early 1950s and has expanded in recent years as its applicability to an increasing multitude of problems has been realised and as its surprising difficulty has intrigued researchers. For introductory textbooks on machine scheduling, we refer to Baker [3] and French [13] and to the more recent and application-oriented work of Morton and Pentico [37].

The majority of research in machine scheduling has been on deterministic problems, while stochastic scheduling, i.e. the extension of this area to problems allowing some uncertainty in part of the problem data, has generally received less attention. Deterministic scheduling is part of CO. The goal in any scheduling problem is to minimise a certain objective function. This optimality criterion may take a number of forms; the most common type of problem is the minimisation of a single cost measure, nondecreasing in each of the jobs' completion times, which is called a regular measure. The class of regular measures is varied enough to have kept research fuelled for four decades to date. The area of non-regular criteria, which involves set-ups, earliness of jobs and multicriteria scheduling, is relatively unexplored. The number of possible optimality criteria is vast and can be split into three distinct groups depending on the problem variables of interest: those based on completion times, those based on due dates and those based on idleness penalties.

A vast number of scheduling problems exists through variation of the problems' defining factors, which can simplify some problems and render others difficult. Computational complexity is the term coined for such a classification of problems. The complexity of a collection of 4536 machine scheduling problems with 1, 2 or 3 machines has been collated [31] and gives excellent insight into the difficulty of these types of optimisation problems: 416 of these problems were known to be solvable in polynomial time, 3817 have been proved to be NP-hard, whilst the complexity of 303 is still open.

The large proportion of machine scheduling problems which are NP-hard has led many researchers in this field to explore effective ways to adopt exact solution methods. Dynamic programming and branch-and-bound algorithms are the most commonly used techniques. Also, it is often reasonable to assume that a machine scheduling problem could be solved through a mathematical programming formulation, since it is concerned with the allocation of scarce resources. However, the effort in tackling NP-hard scheduling problems by formulating them as mathematical programming models (mainly integer programming models) and then solving them with mathematical programming techniques has been without success up to now. A couple of hitches render this approach mostly invalid. Firstly, it should be noted that no good general algorithm has been developed for solving integer programming problems. Secondly, the special structure of machine scheduling problems is not sufficiently used, and the inherent difficulties of the scheduling problem are carried through, in a non-tractable form, into the mathematical programming model. Also, the polyhedral approach, which has recently been shown to be quite efficient for other CO problems, has not been successful, at least at the present state of the art, when applied to scheduling problems.

The exponential characteristics of the exact algorithms used to solve NP-hard problems have driven researchers to employ heuristic methods


having reasonable computational requirements for solving those problems. Heuristics for machine scheduling problems can be divided into two major categories: constructive types and improvement types. The former is concerned with building a feasible schedule from scratch such that whenever a decision is taken it is not reversed. The latter starts with a feasible solution and attempts to improve it. This second category belongs to a class of heuristic algorithms called local (or neighbourhood) search techniques.

The two most popular local search techniques, simulated annealing and tabu search, have been applied to a number of machine scheduling problems. Other approximation methods include genetic algorithms, which when applied in a pure form do not generate particularly good solutions, and neural networks, which have not received much attention. Threshold accepting is also increasingly being used. In terms of results, it seems that the best compromise between solution quality and computing time is typically reached by Tabu Search; see [47] for a wide comparison of heuristic methods on the job-shop problem.
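As an illustration of the improvement-type heuristics just described, the following sketch applies a plain pairwise-swap descent to a tiny made-up single-machine total-tardiness instance. It is deliberately simpler than tabu search or simulated annealing, which wrap such moves with mechanisms for escaping the local optimum at which this descent stops:

```python
def total_tardiness(seq, p, d):
    """Total tardiness of a job sequence on a single machine,
    given processing times p and due dates d."""
    t, tard = 0, 0
    for j in seq:
        t += p[j]
        tard += max(0, t - d[j])
    return tard

def swap_descent(seq, p, d):
    """Improvement-type heuristic: keep applying improving pairwise
    swaps until none reduces total tardiness (a local optimum)."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        best = total_tardiness(seq, p, d)
        for i in range(len(seq)):
            for j in range(i + 1, len(seq)):
                seq[i], seq[j] = seq[j], seq[i]
                cost = total_tardiness(seq, p, d)
                if cost < best:
                    best, improved = cost, True
                else:
                    seq[i], seq[j] = seq[j], seq[i]  # undo the swap
    return seq, best

p = {0: 4, 1: 2, 2: 6}   # processing times (made-up data)
d = {0: 4, 1: 2, 2: 12}  # due dates
seq, cost = swap_descent([2, 0, 1], p, d)
```

Starting from the sequence [2, 0, 1] with tardiness 16, the descent reaches the sequence [1, 0, 2] with tardiness 2, which no single swap can improve.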

As far as the factories are concerned, scheduling theory may be applied, to a certain extent, to a large number of real-life problems from a variety of practical situations. It is mainly applied to planning problems arising from manufacturing environments, resulting in the scheduling terminology of jobs and machines. There are, however, many parallels to manufacturing systems which require the allocation of scarce resources to activities over time. Environments such as the chemical industry, education, agriculture, transport, defence and health may exploit some of the results from machine scheduling. In the management of such systems, the scheduler plays a major role in determining the most efficient method of allocating tasks to the available resources.

Though already very hard in most cases, classical scheduling problems are still very far from real-life ones. In the industrial environment, multiple conflicting objectives need to be tackled whilst respecting various constraints, leading to an overwhelming complexity. The only way to overcome this issue is to decompose the global problem into more affordable subproblems. The decomposition task is probably the most formidable step when dealing with real problems, given that it involves modelling know-how as much as scheduling-theory know-how. At present there is no standard way to approach this step, and this is reasonable given that scheduling issues differ greatly from one firm to another. Nonetheless, the more the decomposition leads to classical subproblems, the more it is possible, by means of known procedures, to reach a 'good' suboptimal solution.

Though Machine Scheduling is a branch both of CO (as far as the academic environment is concerned) and of industrial automation (as far as the factories are concerned), there is not enough interaction between these two separate worlds. For industrial managers, too much time is spent in the universities looking at 'useless' exact procedures on 'unrealistic' classical problems. Also, too often a scheduler is judged on the quality of its graphical interface and, in general, its user-friendliness; not enough attention is given to the optimisation procedure being used. Simple rules are often enough to keep many managers happy. Nonetheless, in the recent past more and more managers have become aware of the importance of high-quality scheduling. Once they have derived their objectives and ascertained their constraints, they quickly need an effective solution. Through the modelling step, the scheduler is able to tackle a real problem, and managers are increasingly looking for standard rules and techniques to apply to this process. We feel that the major demand from the industrial world on the academic one is in the modelling area. The more the latter is able to satisfy these needs, the more it will be possible to tighten the links between the two worlds. Then credit will be given to the academic research into the subproblems which result from the decomposition process. Exact methods become meaningful to validate the quality of approximate procedures, just as research into heuristic procedures allows the gap between the optimal unreached solution and the suboptimal available one to be tightened more and more. Worst-case analysis and computational complexity aspects, though probably too far from the industrial applications, can give a


flavour of what we can expect from the available solution procedures and, in some sense, help in the assessment procedure.

To summarise, we feel that one of the goals of the scientific community in machine scheduling should be to fully address more realistic scheduling problems. A strong link would not only enable industry to validate the current models but would also generate new issues in the field of machine scheduling and thus create new, challenging problems.

7. Routing

Routing problems are important in the fields of transportation, distribution and logistics. The Travelling Salesman Problem (TSP), Vehicle Routing Problem (VRP) and Shortest Path Problem (SPP) models, together with their many variations, provide a very general modelling framework. Practitioners may thus construct realistic models and use a wealth of well-studied techniques to provide good solutions. The TSP is one of the best-studied problems of CO and for many years has been an important model for testing new algorithmic ideas [30]. The interplay between theory and practice has rarely been so dynamic in any area of Operational Research [19,11].

The work of classifying and providing solution methodologies for some of the 'core' variants of the TSP, SPP and VRP models in the academic literature has provided a significant aid to the practical problem-solver. As a result, solution techniques exist for multi-dimensional capacity constraints, time windows, pickups and deliveries, stochastic client demands, 'mixed' vehicle fleets and time-varying arc costs, to name just a few. The use of routing models within VLSI design and the recent emergence of 'in-car' and other computerised routing systems mean that the routing model should remain a combinatorial area of key interest to both theoreticians and practitioners as the complexity and size of problem instances encountered in practice increase. In addition, significant advances in designing geographic databases for street networks and road maps have already made an impact in several real-life vehicle routing implementations.

There has been much recent progress in exact methods for routing problems, including cutting-plane techniques which have been used with remarkable success to solve TSP instances involving several thousand nodes [21,40]. The existence of widely available problem libraries for the TSP has further reinforced its status as one of the testing grounds for exact solution techniques. Many researchers still continue to search for a solution technique having slow growth in complexity for nearly all problems encountered in practice, despite having exponential time complexity for a small number of hard problem instances. Existing test problems are largely artificial, and we would hope to see existing problem libraries being extended with a larger range of practical problems. Exact cutting-plane-based techniques have also been applied to the VRP [10], as well as relaxation techniques using bounds from spanning tree, shortest path, or state-space relaxations. Experience tends to suggest that these problems are substantially harder than TSP problems of a similar size. For example, on the standard benchmark problems for the VRP, only solutions for the capacity-constrained problems involving up to 100 nodes have been proved optimal to date.

Heuristic techniques continue to play an important role, especially when large problems must be solved. Recent progress in complex local search techniques based on insertion/deletion of nodes and arcs, and in particular in metaheuristic techniques, which allow these local search techniques to escape local optimality, has produced many powerful and generally applicable techniques for routing models. In many cases where an optimal solution is known, these metaheuristic techniques, especially Tabu Search, have produced optimal solutions in a small fraction of the time required by exact solution methods. In other cases, metaheuristic methods have beaten existing best solutions. Whilst most of these heuristics do not carry any performance guarantee, experience has shown the effectiveness of these techniques in practical situations. Significant recent advances in approximability theory suggest that,


for many routing problems, reasonable performance guarantees are unlikely anyway (unless P = NP). In addition to tabu search, simulated annealing and genetic approaches, a neural network model has been used for a dynamic vehicle dispatching problem [42].
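As a concrete example of the local search moves underlying these heuristics, here is a sketch of the classical 2-opt exchange for the TSP on a tiny made-up instance. It is a plain descent; the metaheuristics discussed above wrap such moves with strategies for escaping local optima:

```python
def tour_length(tour, dist):
    """Length of a closed tour under a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """2-opt descent: remove two arcs and reconnect the tour with the
    intermediate segment reversed, repeating until no such exchange
    shortens the tour (a local optimum)."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(new, dist) < tour_length(tour, dist):
                    tour, improved = new, True
    return tour

# symmetric distance matrix for a 4-city instance (made-up data)
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
best = two_opt([0, 2, 1, 3], dist)
```

From the starting tour of length 29, the descent reaches a tour of length 18, which happens to be optimal for this instance.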

The VRP model, with all its variants, provides an excellent and under-utilised topic for university courses in CO. It brings together, and uses, several techniques, for example dynamic programming and Lagrangian relaxation. Other combinatorial problems, for example matching problems, (generalised) assignment problems and bin packing problems, often arise as subproblems in various vehicle routing models. Moreover, it brings these problems together to solve a problem which is more clearly a real-world problem than, say, the TSP. In addition, integer programming models for vehicle routing problems are quite varied, including set covering formulations, commodity-flow-based formulations and vehicle-flow-based formulations. We would hope to see more widespread use of this problem in education, particularly at the undergraduate level. The VRP is also, we believe, an effective medium for introducing complexity-theoretic concepts such as NP-completeness.
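As a flavour of classroom-sized VRP material, the savings computation at the heart of the classical Clarke-Wright heuristic can be sketched as follows (made-up distances; a full heuristic would go on to merge routes in this order subject to vehicle capacity):

```python
def savings_order(dist, depot=0):
    """Clarke-Wright savings s_ij = d(0,i) + d(0,j) - d(i,j): the cost
    saved by serving clients i and j on one route instead of two
    separate out-and-back routes from the depot. Returns client pairs
    sorted by decreasing savings."""
    n = len(dist)
    clients = [v for v in range(n) if v != depot]
    s = {(i, j): dist[depot][i] + dist[depot][j] - dist[i][j]
         for i in clients for j in clients if i < j}
    return sorted(s, key=s.get, reverse=True)

# node 0 is the depot; symmetric distances (made-up data)
dist = [[0, 4, 5, 6],
        [4, 0, 2, 7],
        [5, 2, 0, 3],
        [6, 7, 3, 0]]
order = savings_order(dist)
```

Here the pair (2, 3) gives the largest saving (5 + 6 - 3 = 8), so a merge-based heuristic would first try to serve clients 2 and 3 on a single route.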

Several excellent, high-quality journals reflect the quality of research and the calibre of researchers in this important area. Although articles are never published rapidly enough, it is clear that specialist publications in this area suffer less lag than some others. The large number of researchers working in this field means that there cannot be a community in which everyone knows everyone else; hence the role of journals in this area is very important, as the area cannot be supported by communities of people each sending pre-prints to the others, as is the case in smaller subject areas. Clearly, if such groups do exist in response to slow publication times, there is a danger of large amounts of duplicated research.

The routing models, in addition to their obvious application areas, have been used to formulate, and solve, problems from areas unrelated to transportation. For example, the problem of scheduling n jobs on m identical parallel machines with sequence-dependent set-up times, in order to minimise the makespan, can be modelled as a 'Multiple' Travelling Salesman Problem. Another example is the topological design of local access ring networks, which has already been modelled as a generic VRP. Furthermore, with respect to arc-routing problems, topological testing of computer systems and VLSI circuit design problems have been formulated as Chinese Postman Problems.

The prospects for the future of this area are stimulating from both a practical and a theoretical point of view. Advances in hardware and software technologies, and the availability of geographical databases, should make complex 'in-car' and other routing systems widespread, giving rise to large, complex 'on-line' routing problems and possibly using multicriteria techniques to provide fast, adequate solutions. This area will continue to be one of the most actively studied from a polyhedral point of view. Also, new and more powerful heuristics will be found for the VRP, possibly with better performance guarantees. Tabu Search may become a standard in practice as well as in theory. Here, significant developments may be made in the study of more sophisticated moves used in neighbourhood search. Furthermore, 'expected' performance guarantees using probabilistic techniques may become important in evaluating heuristics. Many problems within this area, particularly VRP formulations, may be parallelised, and advances in parallel computing will bring about corresponding increases in the sizes of problems which may be solved.

8. Network design

Network design problems have applications in telecommunications, transportation, logistics, manufacturing, electricity distribution, etc., and constitute a branch of CO that has made significant contributions to 'practice' by offering considerable cost savings compared to the manual design methods that are replaced or enhanced. A cost saving of a few percent makes a big difference in sectors like telecommunications and transportation, where there is a huge financial investment in infrastructure.

A network design problem can be stated as follows: given certain design criteria, including traffic requirements between nodes, we optimise the network with respect to one or more of the following factors: topology, capacities of installed links, and routing of traffic in the resulting network. By optimisation, we usually mean minimisation of some measure of the costs of the network, but other objectives, such as the minimisation of time delay or the maximisation of flow, are also encountered.

The basic ingredient of a network design model is a network flow that guarantees the fulfilment of traffic requirements. In addition, models might include capacity constraints and constraints concerning network reliability, such as connectivity constraints, or quality measures, such as bounds on the average message delay in packet-switched computer networks. Cost functions found in applications can be of almost any kind: linear, concave, convex, discrete or a combination of these 'pure' forms. The difficulty of a network design model depends highly on the form of the cost function, as is illustrated by Minoux's excellent overview of solution methods for minimum-cost multicommodity flow (MCMF) problems [36].

Linear and pure fixed cost functions lead to easy-to-solve special cases, since the problems are solved, respectively, by shortest path calculations and by finding a minimal spanning tree. By a pure fixed cost function, we mean that the cost of flowing traffic over a link is fixed no matter how large the traffic is, except for a flow of zero, which induces no cost. Very often, however, real-world cost functions exhibiting economies of scale also have a positive marginal cost; thus continuous concave costs, or linear cost functions with fixed costs, are often used as approximations. Both of these instances of the MCMF problem are very difficult to solve to optimality, but quite successful add/drop-like heuristics, based on necessary routing properties of optimal networks, have been devised.
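The spanning-tree side of the easy special cases can be made concrete: with pure fixed link costs, the cheapest network connecting all nodes is a minimum spanning tree, computed here with Prim's algorithm on a made-up instance:

```python
import heapq

def mst_cost(n, edges):
    """Prim's algorithm for the minimum spanning tree of a connected
    undirected graph on nodes 0..n-1, with edges given as (u, v, w)
    triples; returns the total cost of the tree."""
    adj = {v: [] for v in range(n)}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    seen, total = {0}, 0
    heap = list(adj[0])
    heapq.heapify(heap)
    while heap and len(seen) < n:
        w, v = heapq.heappop(heap)
        if v in seen:
            continue  # would create a cycle
        seen.add(v)
        total += w
        for e in adj[v]:
            heapq.heappush(heap, e)
    return total

# 4 nodes, fixed link costs (made-up data)
edges = [(0, 1, 4), (0, 2, 3), (1, 2, 1), (1, 3, 2), (2, 3, 5)]
print(mst_cost(4, edges))  # -> 6
```

Once the cost function gains a positive marginal component, as discussed above, this polynomial structure disappears and the problem becomes hard.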

In the case of linear cost functions with fixed costs, a problem often referred to as the fixed-charge problem, Balakrishnan et al. [4] have adapted a dual-ascent method for facility location problems. Dual-ascent methods have proven effective for problems with a combination of fixed and variable costs, and are also convenient, since they yield both feasible solutions and lower bounds.

For the design of backbone networks in computer communication, convex delay costs are often weighed against discrete costs for transmission capacity. Various heuristics and relaxations are the usual solution methods for the very complex models that emerge [14]. With gaps between feasible solutions and lower bounds in the range of 5-10% for medium-sized networks, it appears that much research remains to be carried out on these problems. Convex cost functions are also found in the design of transportation networks [32].

Introducing additional constraints on the network flow considerably increases the solution difficulties. In this setting, polyhedral approaches have been used with success, for instance for network survivability problems [22]. For modelling purposes, a useful distinction can be drawn between distributed and centralised networks. In centralised networks, all demand/traffic requirements involve a central node. As a result, special topologies are often imposed on the networks, enabling classical problems, like capacitated minimal spanning tree problems, Steiner tree problems and generic vehicle routing problems, to be solved. Capacitated minimal spanning tree problems have been extensively studied in connection with local access networks (LACN). Solution methods include polyhedral approaches, but recent work also illustrates the usefulness of problem reformulation, for instance by disaggregation, and the use of 'unnecessary' constraints in order to obtain good Lagrangian relaxations [15].

Often, network design problems deal with long-term planning. Generally, dynamic models are appropriate if costs or demand develop over time and if there are considerable costs connected with redesigning the network. Recently, there has been increasing research interest in dynamic models but, once the time dimension is taken into account, the solution of models of such size and complexity still seems a long way off.


Heuristic approaches, adapted to specific problem instances, are probably necessary. For example, [35] shows how solution methods for static models, accompanied by local improvement heuristics, can be a reasonable way to solve models with a steady growth in traffic demand.

Another topic that seems interesting for future research is a more explicit treatment of stochastic factors, for instance by combining CO and stochastic programming. This may be particularly interesting for reliability matters and for dynamic models. Reliability has been taken into account by imposing connectivity constraints on the network or by specifying a set of 'states', corresponding to different error situations, that the network must be able to handle. By these approaches, one decides beforehand which error situations should be handled, and the network is constructed to take care of them all. What would be interesting are models that incorporate the trade-off between increased reliability and network costs, enabling the optimal reliability level to be found.

Concerning dynamic models, long-term planning may be of limited interest as long as stochastic factors are not explicitly treated. Taking stochastic factors into account usually leads to decision policies involving extra costs in return for more flexibility in the future; such solutions are not reached by means of deterministic models, since they are never the 'wait and see' solutions, that is, solutions found in a setting with perfect information. In other words, options have no value without the influence of stochastic factors.
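The value of flexibility can be made concrete with a deliberately tiny two-stage example (all numbers hypothetical): capacity is built now at unit cost 1, any shortfall against demand is covered later at unit cost 1.5, and demand is 1 or 3 with equal probability.

```python
# A deliberately tiny two-stage example (all numbers hypothetical) of why
# deterministic models cannot value flexibility.  Capacity x is built now at
# unit cost 1; any shortfall against demand d is covered later at unit cost
# 1.5; demand is 1 or 3 with equal probability (mean 2).

SCENARIOS = [(0.5, 1), (0.5, 3)]  # (probability, demand)

def expected_cost(x):
    """Expected total cost of building x units now and buying recourse later."""
    return x + sum(p * 1.5 * max(d - x, 0) for p, d in SCENARIOS)

x_det = 2                                   # deterministic plan for mean demand
x_sto = min(range(4), key=expected_cost)    # minimise expected cost directly

print(x_det, expected_cost(x_det))  # prints: 2 2.75
print(x_sto, expected_cost(x_sto))  # prints: 1 2.5
```

The deterministic model, planning for the mean, builds more than the stochastic optimum; the stochastic solution accepts possible recourse costs in return for flexibility, exactly the kind of solution a deterministic model never produces.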

As indicated above, there is a wide variety of solution methods applied to network design problems. Future research is expected to improve both exact and approximate methods for the different network design problems described in the literature. Here it should be important to take advantage of experience from connected branches of CO, of which routing is perhaps the most obvious example. A factor that has been advantageous to algorithmic developments in routing is the existence of widely available libraries of test problems. There is a demand for similar libraries in the network design area.

New applications will most likely also be of great interest for future research, especially in areas like telecommunications and computer communication that experience very fast technological change. In the past, there have been several examples of how technological changes have given new directions to research. One example is the packet switching technology that introduced measures of message delay into the models. By using the standard formula for average message delay, putting a unit price on delay and incorporating delay costs in the objective function, we get models that incorporate decisions on the degree of utilisation of the network. A second example is the more widespread use of high-capacity fibre cables. Traditional methods give network configurations which are too sparse, and adding connectivity constraints is one way of obtaining more reliable designs. In the near future, the widespread use of cellular systems of different kinds might take on a similar role.
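The 'standard formula' referred to is usually Kleinrock's M/M/1-based expression for average message delay; in the notation below (an assumption of ours, not the paper's), $f_e$ is the flow and $C_e$ the capacity on link $e$, and $\gamma$ is the total message arrival rate:

```latex
T \;=\; \frac{1}{\gamma}\sum_{e}\frac{f_e}{C_e - f_e}, \qquad 0 \le f_e < C_e .
```

Pricing each unit of delay at $\alpha$ then adds the convex term $\alpha\gamma T$ to the design objective, which is exactly the convex delay cost weighed against discrete capacity costs discussed earlier in this section.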

9. Conclusions

There has been a trend in the academic community, as regards CO, to study the classical problems. These problems, despite their interesting structure and mathematical beauty, often fall short of tackling real issues. We feel, as Operational Researchers, that serious attempts should be made by the Operational Research community to bridge the divide between academic theory and real-life problems. This will not just enable the academic community to reap the rewards from solving commercial problems, but will also improve the standing of the discipline in society, a theme which is becoming increasingly important.

In industrial environments, there are often multiple conflicting objectives which need to be tackled whilst respecting the various constraints, leading to overwhelming problem complexity. One way to overcome this issue is to simplify the model in order to obtain a well-studied, or more reasonable, problem. Another possible, widely used, approach is to decompose the global problem into more affordable subproblems. This decomposition task is probably the most formidable step when dealing with real problems, given that it involves modelling skill as much as combinatorial know-how. At present, there is no standard way of modelling a real-life problem, either by reducing it to a classical problem or by decomposing it into a sequence of subproblems. This methodology requires great sensitivity to, and knowledge of, both the problem under consideration and CO methods. We do not know how this methodology can be taught or communicated, in a basic or advanced course, but maybe more consideration should be given to real-life case studies. Studying real-life problems will stimulate the interest of students, who are invariably motivated by real applications, to work in this field.

One of the ideas emerging from the various sections, and which we can say is common to all the reviewed fields, is that, in spite of the fact that there are many qualified scientific journals accepting papers on combinatorial optimisation topics, there is still little space for contributions. Moreover, probably due to this fact, it often takes more than three years for a paper to be published, not to mention the fact that selection is often based, at least marginally, on fashion criteria, with all the disadvantages that this can create. This implies that most of the papers published in journals are old and their results have been superseded by further developments; consequently, if a researcher wants to be updated on the state of the art, he/she must collect technical reports, mainly by asking the authors directly. This is not always easy, especially for newcomers, as they are not yet introduced into the proper research circles. There are some possible ways of tackling this kind of situation, which will probably worsen in the future. For example, 'electronic' versions of journals could allow papers to be published as soon as they get a favourable answer from the referees, that is, within at most four months after submission. Such journals could be consulted with advanced tools, like NCSA Mosaic, which allow browsing and retrieving documents in a very friendly way, such as following a bibliographic reference; papers could be retrieved simply by 'double clicking' with the mouse on a citation. As for the publishers, they would sell access to their databases instead of selling the journals, and they would save some trees!

Another general tendency is that research often ends up with negative results, which are not accepted by scientific journals. Since part of scientific research is devoted to studying ideas which turn out not to be good, we express the wish that these negative results, if they derive from reasonable ideas and are carried out with scientific rigour, could find some space in a scientific journal, maybe as short and quick notes, mainly to inform the scientific community and to enable other researchers to avoid spending their time on fruitless analysis. Space for these types of results would become available if a more effective approach were taken to publication. Similarly, the validity of a paper should not only be measured by the gap obtained between upper and lower bounds on the given problem. Whenever realistic cases do not allow meaningful bounds to be computed, there should still be room in the literature for appropriate modelling and decomposition procedures which show better results than other heuristic approaches.

As we have seen in the various sections, CO gets most of its strength from connections between the many different disciplines that it involves, for example operational research, management science, mathematics and computer science. We strongly feel that these connections should be nurtured. Furthermore, within CO, we believe that some areas are strong candidates for intensive research. For example, the use of meta-heuristics has been a common theme throughout this paper. We are concerned, however, that little research has been done on the theoretical foundations of such techniques. Also, the interaction between stochastic methods and classical CO problems is, unfortunately, currently neglected. Exact methods, such as polyhedral techniques, for solving real-life problems deserve more attention.

To conclude this paper, we would like to briefly discuss a question which every researcher has faced more than once, at least at the beginning of his/her career: "Why are we doing this kind of job?". There is not a unique answer, as this varies with the individual's personality and educational background. Of course, most scientific research does not pay in terms of money or fame, in particular for a relatively young discipline such as CO, or Operational Research. The more selfish and trivial answer could be: "As long as we find someone who will pay us for doing something that we like, there is no reason for us to do anything else". If this view is the only reason that inspires researchers, probably scientific research, at least in Operational Research, is going to starve. There must be something more. Essentially, we think that one major stimulus arises from the fact that in our discipline we are dealing with problems which have a direct application, and whose solution can have a strong impact on everyday life. CO was born from the need to look for efficient solutions to practical problems. In looking for those solutions, a fascinating theory has been discovered. In order to keep CO active, we should not forget this link, which gives the opportunity not only to solve problems of relevant practical importance, but also enables new requirements, and suggestions, to be fed back into the theory.

Acknowledgements

We are indebted to Laure Renotte for the general discussion. The authors and other participants in the ESI X summer school would also like to thank Catherine Roucairol, Gérard Plateau, Hervé Thiriez, Jakob Krarup and Pierre Tolla for organising the summer school and EURO for financial assistance. This work has been partially supported by 'Progetto Finalizzato Trasporti 2', Grant No. 93.01799.PF74.

References

[1] Aarts, E., and Korst, J., Simulated Annealing and Boltzmann Machines, Wiley, Chichester, UK, 1989.

[2] Arora, S., Lund, C., Motwani, R., Sudan, M., and Szegedy, M., "On the intractability of approximation problems", Manuscript, 1992.

[3] Baker, K.R., Introduction to Sequencing and Scheduling, Wiley, New York, 1974.

[4] Balakrishnan, A., Magnanti, T.L., and Wong, R.T., "A dual-ascent procedure for large-scale uncapacitated network design", Operations Research 37 (1989) 716-740.

[5] Balas, E., and Zemel, E., "An algorithm for large zero-one Knapsack Problems", Operations Research 28 (1980) 1130-1154.

[6] Barahona, F., Jünger, M., and Reinelt, G., "Experiments in quadratic 0-1 programming", Mathematical Programming 44 (1989) 127-137.

[7] Barahona, F., and Mahjoub, A.R., "On the cut polytope", Mathematical Programming 36 (1986) 157-173.

[8] Carraresi, P., Malucelli, F., and Pappalardo, M., "Testing optimality for quadratic 0-1 unconstrained problems", ZOR - Mathematical Methods of Operations Research, to appear.

[9] Chvátal, V., "Hard Knapsack Problems", Operations Research 28 (1980) 1402-1411.

[10] Cornuéjols, G., and Harche, F., "Polyhedral study of the Capacitated Vehicle Routing Problem", Mathematical Programming 60 (1993) 21-52.

[11] Desrosiers, J., Dumas, Y., Solomon, M., and Soumis, F., "Time constrained routing and scheduling", in: M. Ball, C. Monma, T. Magnanti and G. Nemhauser (eds.), Handbooks in Operations Research and Management Science, Vol. on Networks, North-Holland, Amsterdam, forthcoming.

[12] Edmonds, J., "Matching and a polyhedron with 0-1 vertices", Journal of Research of the National Bureau of Standards 69B (1965) 125-130.

[13] French, S., Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop, Horwood, Chichester, UK, 1982.

[14] Gavish, B., "Topological design of computer communication networks - The overall design problem", European Journal of Operational Research 58 (1992) 149-172.

[15] Gavish, B., "Topological design of telecommunication networks - Local access design methods", Annals of Operations Research 33 (1991) 17-71.

[16] Giannessi, F., and Nicolucci, F., "Connections between nonlinear and integer programming problems", Symposia Mathematica, Vol. XIX, Academic Press, New York, 1976, 161-176.

[17] Glover, F., "Tabu Search, Part I", ORSA Journal on Computing 1/3 (1989) 190-206.

[18] Goldberg, D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.

[19] Golden, B.L., and Assad, A.A., Vehicle Routing: Methods and Studies, Studies in Management Science and Systems, Vol. 16, North-Holland, Amsterdam, 1988.

[20] Grötschel, M., Lovász, L., and Schrijver, A., Geometric Algorithms and Combinatorial Optimization, Springer-Verlag, Berlin, 1988.

[21] Grötschel, M., and Holland, O., "Solution of large-scale Travelling Salesman Problems", Mathematical Programming 51 (1991) 141-202.

[22] Grötschel, M., Monma, C.L., and Stoer, M., "Design of survivable networks", in: M. Ball, C. Monma, T. Magnanti and G. Nemhauser (eds.), Handbooks in Operations Research and Management Science, Vol. on Networks, North-Holland, Amsterdam, forthcoming.

[23] Grötschel, M., and Padberg, M.W., "Polyhedral theory", in: E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan and D.B. Shmoys (eds.), The Traveling Salesman Problem, Wiley, New York, 1985.

[24] Hammer, P.L., "Some network flow problems solved with pseudo-boolean programming", Operations Research 13 (1965) 388-399.

[25] Hammer, P.L., Hansen, P., and Simeone, B., "Roof duality, complementation and persistency in quadratic 0-1 optimization", Mathematical Programming 28 (1984) 121-155.

[26] Hoffman, K.L., and Padberg, M.W., "Solving airline crew scheduling problems by branch-and-cut", Management Science 39 (1993) 657-682.

[27] Horowitz, E., and Sahni, S., "Computing partitions with applications to the Knapsack Problem", Journal of the ACM 21 (1974) 277-292.

[28] Khachiyan, L.G., "A polynomial algorithm for linear programming", Doklady Akademii Nauk SSSR 244 (1979) 1093-1096.

[29] Kindervater, G.A.P., and Lenstra, J.K., "An introduction to parallelism in combinatorial optimization", Discrete Applied Mathematics 14 (1986) 135-156.

[30] Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., and Shmoys, D.B., The Traveling Salesman Problem, Wiley, New York, 1985.

[31] Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., and Shmoys, D.B., "Sequencing and scheduling: Algorithms and complexity", in: Handbooks in Operations Research and Management Science, Vol. 4: Logistics of Production and Inventory, North-Holland, Amsterdam, 1993, 445-524.

[32] Magnanti, T.L., and Wong, R.T., "Network design and transportation planning: Models and algorithms", Transportation Science 18 (1984) 1-55.

[33] Martello, S., and Toth, P., Knapsack Problems: Algorithms and Computer Implementations, Wiley, Chichester, UK, 1990.

[34] Martello, S., and Toth, P., "Upper bounds and algorithms for hard 0-1 Knapsack Problems", Research Report OR/93/04, DEIS, University of Bologna, 1993.

[35] Minoux, M., "Network synthesis and dynamic network optimization", Annals of Discrete Mathematics 31 (1987) 283-324.

[36] Minoux, M., "Network synthesis and optimum network design problems: Models, solution methods and applications", Networks 19 (1989) 313-360.

[37] Morton, T.E., and Pentico, D.W., Heuristic Scheduling Systems, Wiley, New York, 1993.

[38] Padberg, M., "Quadric polytope: Some characteristics and facets", Mathematical Programming 45 (1989) 139-172.

[39] Padberg, M.W., and Grötschel, M., "Polyhedral computation", in: E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan and D.B. Shmoys (eds.), The Traveling Salesman Problem, Wiley, New York, 1985.

[40] Padberg, M.W., and Rinaldi, G., "A branch-and-cut algorithm for the solution of large-scale Traveling Salesman Problems", SIAM Review 33 (1991) 1-41.

[41] Plateau, G., and Elkihel, M., "A hybrid method for the 0-1 knapsack problem", Methods of Operations Research 49 (1985) 277-293.

[42] Potvin, J.-Y., Shen, Y., and Rousseau, J.-M., "Neural network for automated vehicle dispatching", Computers & Operations Research 19/3-4 (1992) 267-276.

[43] Psaraftis, H.N., "Review standards for OR/MS papers", OR/MS Today, June 1994, 54-57.

[44] Pulleyblank, W.R., "Polyhedral combinatorics", in: G.L. Nemhauser, A.H.G. Rinnooy Kan and M.J. Todd (eds.), Handbooks in Operations Research and Management Science, Vol. 1: Optimization, North-Holland, Amsterdam, 1989.

[45] Roberts, F.S., "Meaningfulness of conclusions from combinatorial optimization", Discrete Applied Mathematics 29 (1990) 221-242.

[46] Rhys, J., "A selection problem of shared fixed costs and networks", Management Science 17 (1970) 200-207.

[47] Vaessens, R.J.M., Aarts, E.H.L., and Lenstra, J.K., "Job shop scheduling by local search", Memorandum COSOR 94-05, Eindhoven University of Technology, 1994.