
A Hard Problem: The TSP

By: Mahmood Reza Jahanseir

Amirkabir University of Technology


Table of Contents

• Basic definitions
• Lower bounds: Relaxations
  • The assignment relaxation
  • The MST relaxation
  • The s-tree relaxation
  • The LP relaxation
• Lower bounds: Subgradient optimization
• Approximation algorithms
• Upper bounds: Heuristics
• Upper bounds: Local search
• Optimal solutions: Branch and bound

Basic definitions


• Let w: E → ℝ⁺ be a weight function on the complete graph Kn. We seek a cyclic permutation (1, π(1), . . . , π^{n−1}(1)) of the vertex set {1, . . . , n} such that

w(π) = w_{1,π(1)} + w_{π(1),π²(1)} + · · · + w_{π^{n−1}(1),1}

is minimal. We call any cyclic permutation π of {1, . . . , n}, as well as the corresponding Hamiltonian cycle

1 — π(1) — π²(1) — · · · — π^{n−1}(1) — 1

in Kn, a tour; if w(π) is minimal among all tours, π is called an optimal tour.

• TSP is NP-complete, so that we cannot expect to find an efficient algorithm for solving it.


• Metric Travelling Salesman Problem (ΔTSP):Let W = (wij) be a symmetric matrix describing a TSP, and assume that W satisfies the triangle inequality: wik ≤ wij + wjk for i, j, k = 1, . . . , n. Then one calls the given TSP metric or, for short, a ΔTSP.

• Theorem 1. ΔTSP is NP-complete.

• Proof. Let G = (V, E) be a given connected graph, where V = {1, . . . , n}, and let Kn be the complete graph on V with weights

w_{ij} = 1 if ij ∈ E, and w_{ij} = 2 otherwise.

Note that W satisfies the triangle inequality, since all its off-diagonal entries are 1 or 2. Obviously, G has a Hamiltonian cycle if and only if there exists a tour π of weight w(π) ≤ n (and then, of course, w(π) = n) in Kn. Thus a polynomial algorithm for ΔTSP would allow us to decide HC in polynomial time.
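To make the reduction concrete, here is a minimal Python sketch of the weight matrix used in the proof (the function name and the edge-set encoding are illustrative, not from the slides):

def hc_to_delta_tsp(n, edges):
    # Build the weight matrix of the proof: w_ij = 1 if ij is an edge of G, else 2.
    # Entries in {1, 2} satisfy the triangle inequality automatically, so this is a ΔTSP.
    edge_set = {frozenset(e) for e in edges}
    W = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i][j] = 1 if frozenset((i + 1, j + 1)) in edge_set else 2
    return W  # G has a Hamiltonian cycle iff this TSP has a tour of weight n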


• In the metric case, there always exists a reasonably good approximation algorithm; most likely, this does not hold for the general case, where the triangle inequality is not assumed.

• Asymmetric Travelling Salesman Problem (ATSP). Instead of Kn, we consider the complete directed graph on n vertices: we allow the weight matrix W to be non-symmetric (but still with entries 0 on the main diagonal). This asymmetric TSP contains the usual TSP as a special case, and hence it is likewise NP-hard.


• Example 1. We drop the condition that the travelling salesman should visit each city exactly once, so that we now consider not only Hamiltonian cycles, but also closed walks containing each vertex of Kn at least once. If the given TSP is metric, any optimal tour will still be an optimal solution.

However, this does not hold in general, as the example given in the following figure shows: here (w, x, y, z, x, w) is a shortest closed walk (of length 6), but the shortest tour (w, x, y, z, w) has length 8.


• Given a matrix W = (wij) not satisfying the triangle inequality, we may consider it as a matrix of lengths on Kn and then calculate the corresponding distance matrix D = (dij).

• For example, we can use the algorithm of Floyd and Warshall for this purpose. Of course, D satisfies the triangle inequality and, hence, defines a metric TSP.

• It is easy to see that the optimal closed walks with respect to W correspond to the optimal tours with respect to D.

• Thus the seemingly more general problem described in Example 1 actually reduces to the metric TSP.
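As a sketch, the Floyd–Warshall computation of D from W, assuming W is given as an n × n list of lists:

def metric_closure(W):
    # Floyd-Warshall: D[i][j] becomes the length of a shortest path from i to j.
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):                  # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D                            # D satisfies the triangle inequality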

• One may also consider an arbitrary connected graph G with some length function w instead of Kn. Then it is not at all clear whether any tours exist: we first need to check whether G is Hamiltonian, and this feasibility question is already an NP-complete problem in itself.

Lower bounds: Relaxations


• In order to judge the quality of an approximate solution, we need lower bounds on the length of a tour, and these bounds should not only be strong but also easily computable.

• A standard approach is the use of suitable relaxations: instead of the original problem P, we consider a problem P’ containing P; this auxiliary (simpler) problem is obtained by a suitable weakening of the conditions defining P.

• Then the weight w(P’) of an optimal solution for P’ is a lower bound for the weight w(P) of an optimal solution for P.

• Unfortunately, in many cases it is not possible to predict the quality of the approximation theoretically, so that we have to use empirical methods: for instance, comparing lower bounds found by relaxation with upper bounds given by solutions constructed by some heuristic.


The assignment relaxation

• Assignment problem (AP). Let A be a given square matrix with nonnegative real entries. We require a diagonal of A for which the sum of all its entries is maximal (or minimal).

• Thus, for A = W, we seek a permutation π of {1, . . . , n} for which w_{1,π(1)} + · · · + w_{n,π(n)} becomes minimal.

• The minimum is taken over all permutations of {1, . . . , n}, and hence in particular over all cyclic permutations π (each of which determines a tour); for these permutations, the sum in question equals the length of the associated tour. Therefore we can indeed relax TSP to AP: w(AP) ≤ w(TSP).

• As we are not interested in permutations with fixed points for the TSP anyway, we can prevent the diagonal entries from being used by simply putting wii = ∞ for all i.


• The Hungarian algorithm allows us to solve the AP with complexity O(n³).

• w(AP) is usually a reasonably good approximation to w(TSP) in practice, even though nobody has been able to prove this.

• Example 2. We replace the diagonal entries 0 in W by 88 to obtain the matrix W’. In order to reduce this AP to the determination of a maximal weighted matching, we consider the matrix W” = (88 − w’_{ij}) instead of W’. [The matrices W’ and W” are displayed in the original figure.] The Hungarian algorithm yields the value 603. Any optimal matching for W” is a solution of the original AP; hence w(AP) = 9 · 88 − 603 = 189. This gives the bound w(TSP) ≥ 189.
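A minimal sketch of the assignment relaxation in Python, assuming SciPy is available (scipy.optimize.linear_sum_assignment implements a Hungarian-type method); a large finite constant stands in for the forbidden entries wii = ∞:

import numpy as np
from scipy.optimize import linear_sum_assignment

def ap_lower_bound(W):
    # Cheapest permutation without fixed points; its weight satisfies w(AP) <= w(TSP).
    C = np.array(W, dtype=float)
    np.fill_diagonal(C, C.sum() + 1)        # finite stand-in for w_ii = "infinity"
    rows, cols = linear_sum_assignment(C)   # optimal assignment in O(n^3)
    return C[rows, cols].sum()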


The MST relaxation

• Of course, a tour is not a tree; but if we omit any edge from a tour, we indeed get a spanning tree. This shows w(MST) ≤ w(TSP).

• An optimal solution for MST can be determined with complexity O(n²); determining a minimal spanning tree is also much easier than solving an AP.

• Example 3. [Figure: a minimal spanning tree T for the example network, with w(T) = 186.]
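As a sketch, a compact version of Prim’s algorithm with the O(n²) complexity quoted above (the function name is illustrative):

def prim_mst_weight(W):
    # Grow the tree from vertex 0; dist[v] is the cheapest edge from v into the tree.
    n = len(W)
    in_tree = [False] * n
    dist = [float("inf")] * n
    dist[0] = 0
    total = 0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=dist.__getitem__)
        in_tree[u] = True
        total += dist[u]
        for v in range(n):
            if not in_tree[v] and W[u][v] < dist[v]:
                dist[v] = W[u][v]
    return total    # w(MST) <= w(TSP)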


The s-tree relaxation

• Let us choose a special vertex s ∈ V. An s-tree is a spanning tree for the induced subgraph Kn \ s together with two edges incident with s.

• Obviously, every tour is a special s-tree; hence w(MsT) ≤ w(TSP), where MsT denotes the problem of determining a minimal s-tree.

• Note that it is easy to solve this problem: just determine a minimal spanning tree for Kn \ s, and add those two edges incident with s which have smallest weight.

• The resulting bound will usually depend on the choice of the special vertex s.
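A sketch of the resulting bound, reusing the prim_mst_weight routine from the MST section (vertices 0, . . . , n − 1; s is the index of the special vertex):

def s_tree_bound(W, s):
    # MST on K_n \ s plus the two cheapest edges incident with s.
    n = len(W)
    rest = [v for v in range(n) if v != s]
    sub = [[W[u][v] for v in rest] for u in rest]    # weight matrix of K_n \ s
    cheapest_two = sorted(W[s][v] for v in rest)[:2]
    return prim_mst_weight(sub) + sum(cheapest_two)  # w(MsT) <= w(TSP)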


• Example 4. We choose s = Be. Hence w(TSP) ≥ w(B) = 186 + 44 = 230.

• Let T be a minimal spanning tree associated with the given TSP, and assume that s is a leaf of T. Then we obtain a minimal s-tree by adding to T an edge of smallest weight among all edges not in T and incident with s. In general, however, matters are not quite that simple.


• By Cayley’s formula, there are (n − 1)^{n−3} distinct spanning trees on the remaining n − 1 vertices. To each of these trees, we have to add one of the (n − 1)(n − 2)/2 pairs of edges incident with s, so that the total number of s-trees of Kn is

(n − 1)^{n−3} · (n − 1)(n − 2)/2 = (1/2)(n − 2)(n − 1)^{n−2}.

• We will see that s-trees yield much better results when one also uses so-called penalty functions.


The LP relaxation

• The assignment relaxation of the TSP can be described by the following ZOLP:

minimize Σ_{i,j} w_{ij} x_{ij}
subject to Σ_j x_{ij} = 1 for all i, Σ_i x_{ij} = 1 for all j, x_{ij} ∈ {0, 1}.   (15.1)

• Then the admissible matrices (xij) correspond precisely to the permutations in Sn.

• In order to restrict the feasible solutions to tours, we add the following subtour elimination constraints:

Σ_{i∈S} Σ_{j∉S} x_{ij} ≥ 1 for all ∅ ≠ S ⊊ {1, . . . , n}.   (15.2)


• Now let P be the polytope defined by the feasible solutions of previous inequalities; that is, the vertices of P correspond to the tours among the assignments. In principle, it is possible to describe P by a system of linear inequalities and solve the corresponding LP; unfortunately, the given inequalities do not suffice for this purpose.

• Even worse, nobody knows a complete set of corresponding inequalities, although large classes of required inequalities are known.

• Note that there is an exponential number of inequalities even in (15.2) alone.
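In practice one therefore adds constraints from (15.2) only as needed, as cutting planes. For an integral solution the separation step is easy, as the following sketch shows: a 0-1 assignment corresponds to a permutation, and any cycle of length < n yields a violated subtour elimination constraint (the encoding pi[i] = successor of vertex i is an assumption of this sketch):

def find_violated_subtour(pi):
    # Follow the cycle of pi through vertex 0; a proper subtour yields the cut
    # "sum of x_ij with i in S, j not in S is at least 1" for S = cycle vertices.
    n = len(pi)
    S, v = [0], pi[0]
    while v != 0:
        S.append(v)
        v = pi[v]
    return None if len(S) == n else S    # None: pi is already a single tour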

Lower bounds: Subgradient optimization


• We show how the lower bounds obtained from the s-tree relaxation can be improved considerably by using so-called penalty functions.

• This method was introduced by Held and Karp and used successfully for solving comparatively large instances of the TSP.

• The basic idea is rather simple: we choose some vector p = (p1, . . . , pn)ᵀ ∈ ℝⁿ and replace the weights w_{ij} of the given TSP by the transformed weights

w’_{ij} = w_{ij} + p_i + p_j.

• Let us denote the weight of a tour π with respect to the w_{ij} by w(π), and with respect to the w’_{ij} by w’(π). As every vertex has degree 2 on a tour, w’(π) = w(π) + 2(p1 + · · · + pn);

• hence any tour which is optimal for W is optimal also for W’.


• On the other hand, the weight of an s-tree B is not transformed by just adding a constant:

w’(B) = w(B) + Σ_{i=1}^{n} deg_B(i) · p_i.

• Thus the difference between the weight of a tour and the weight of an s-tree is

w’(π) − w’(B) = w(π) − w(B) − d_p(B),   (15.6)

• where

d_p(B) = Σ_{i=1}^{n} (deg_B(i) − 2) · p_i.

• Let us assume that d_p(B) is positive for every s-tree B. Then we can improve the lower bound w(MsT) of the s-tree relaxation with respect to W by determining a minimal s-tree with respect to W’: the gap between w(TSP) and w(MsT) becomes smaller according to (15.6).


• Of course, it is not clear whether such a vector p exists at all, and how it might be found.

• We will use the following simple strategy: calculate a minimal s-tree B0 with respect to W, choose some positive constant c, and put

p_i = c · (deg_{B0}(i) − 2) for i = 1, . . . , n.

• Thus the non-zero coordinates of p impose a penalty on those vertices which do not have the correct degree 2 in B0.

• There remains the problem of choosing the value of c. It is possible to just use c = 1; however, in our example, we will select the most advantageous value (found by trial and error).
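A single penalty step, as a sketch; min_s_tree is an assumed helper returning the degree sequence and the weight of a minimal s-tree for a given weight matrix:

def penalty_step(W, p, c, min_s_tree):
    n = len(W)
    # transformed weights w'_ij = w_ij + p_i + p_j
    Wp = [[W[i][j] + p[i] + p[j] for j in range(n)] for i in range(n)]
    deg, weight = min_s_tree(Wp)
    bound = weight - 2 * sum(p)                           # lower bound for w(TSP)
    p_next = [p[i] + c * (deg[i] - 2) for i in range(n)]  # penalize wrong degrees
    return bound, p_next

Iterating penalty_step (and keeping the best bound seen so far) is exactly the subgradient scheme discussed below.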


• Example 5. We obtain p = (−3, −3, 0, 3, 0, 0, −3, 3, 3)ᵀ, where we have chosen c = 3.

[The transformed weight matrix W’ is displayed in the original figure.]


• A minimal s-tree B1 with respect to W’ is displayed in the following figure; its weight is w’(B1) = 242. As the coordinates of p sum to 0, this is again a lower bound: w(TSP) ≥ 242.

• This time we choose c = 4 and p = (0, −4, 0, 0, 0, 0, 0, 0, 4)ᵀ, which yields the weight matrix W”.

[The weight matrix W” is displayed in the original figure.]


• w(TSP) ≥ w(B2) = 248.
• This time we choose c = 1 and p = (0, −1, 0, 0, 0, 0, 1, 0, 0)ᵀ.
• This leads to a minimal s-tree B3 of weight 250.


• As B3 is actually a tour, we have now (coincidentally) solved the TSP: this tour is optimal, and hence w(TSP) = 250.

• It would be nice to be able to choose the vector p as advantageously as possible.

• We want to minimize the gap d(p) between the length w’(TSP) of an optimal tour and the weight w’(B) of a minimal s-tree B.

• Since w’(TSP) exceeds w(TSP) only by the constant 2(p1 + · · · + pn), minimizing d(p) amounts to determining

(L)   L(w) = max_p min_B ( w(B) + Σ_{i=1}^{n} (deg_B(i) − 2) · p_i ),

where B runs over all s-trees.


• In general, we will not end up with L(w) = w(TSP): it is quite possible that no choice of p yields a minimal s-tree which is already a tour.

• In the metric case, the weight of an optimal tour is bounded by 3/2 times the Held-Karp lower bound.

• The vectors p are called subgradients in the general context.

• Unfortunately, one cannot predict how many steps will be required, so that the process is often terminated in practice as soon as the improvement between successive values becomes rather small.

• Held and Karp showed that (L) can also be formulated in terms of linear programming, and this yields, in practical applications, good bounds with moderate effort.

Approximation algorithms


• Let P be an optimization problem, and let A be an algorithm which calculates a feasible – though not necessarily optimal – solution for any given instance I of P. We denote the weights of an optimal solution and of the solution constructed by A by w(I) and w_A(I), respectively. If the inequality

|w_A(I) − w(I)| ≤ ε · w(I)

holds for each instance I, we call A an ε-approximative algorithm for P.

• Any connected Eulerian multigraph on V is called a spanning Eulerian multigraph for Kn.


• Lemma 1. Let W be the weight matrix of a ΔTSP on Kn, and let G = (V,E) be a spanning Eulerian multigraph for Kn. Then one can construct with complexity O(|E|) a tour π satisfying w(π) ≤ w(E).

• Proof. We can determine with complexity O(|E|) an Euler tour C for G. Write the sequence of vertices corresponding to C in the form (i1, P1, i2, P2, . . . , in, Pn, i1), where (i1, . . . , in) is a permutation of {1, . . . , n} and where the P1, . . . , Pn are (possibly empty) sequences on {1, . . . , n}. Then (i1, . . . , in, i1) is a tour π satisfying

w(π) ≤ w(C) = w(E),

since, by the triangle inequality, replacing each detour (ik, Pk, ik+1) by the direct edge from ik to ik+1 cannot increase the weight.


• The easiest method is simply to double the edges of a minimal spanning tree.

(1) Determine a minimal spanning tree T for Kn (with respect to the weights given by W).
(2) Let G = (V, E) be the multigraph which results from replacing each edge of T with two parallel edges.
(3) Determine an Euler tour C for G.
(4) Choose a tour contained in C.

Algorithm 1: tree algorithm
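A sketch of the whole algorithm: shortcutting an Euler tour of the doubled tree visits the vertices of T in DFS preorder, so one DFS of a minimal spanning tree suffices (Prim part as in the earlier sketch):

def tree_algorithm_tour(W):
    n = len(W)
    # Prim with parent pointers, to get the tree itself rather than just its weight.
    in_tree, parent = [False] * n, [None] * n
    dist = [float("inf")] * n
    dist[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=dist.__getitem__)
        in_tree[u] = True
        if parent[u] is not None:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and W[u][v] < dist[v]:
                dist[v], parent[v] = W[u][v], u
    tour, stack = [], [0]
    while stack:                       # iterative DFS preorder of T = steps (2)-(4)
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour + [0]                  # close the tour; its weight is <= 2 w(T)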


• Example 6. We saw in Example 3 that the MST relaxation yields the minimal spanning tree T of weight w(T) = 186. A possible Euler tour for the doubled tree is (Aa, Du, Ha, Be, Ha, Du, Fr, St, Ba, St, Nu, Mu, Nu, St, Fr, Du, Aa), which contains the shortcut tour (Aa, Du, Ha, Be, Fr, St, Ba, Nu, Mu, Aa) of length 307.


• Theorem 2. Algorithm 1 is a 1-approximative algorithm of complexity O(n2) for ΔTSP.

• Proof. Using the algorithm of Prim, step (1) has complexity O(n2). The procedure EULER can be used to perform step (3) in O(|E|) = O(n) steps. Clearly, steps (2) and (4) also have complexity O(n). This establishes the desired complexity bound. By Lemma 1, the tree algorithm constructs a tour π with weight w(π) ≤ 2w(T). On the other hand, the MST relaxation shows that all tours have weight at least w(T). Hence w(π) is indeed at most twice the weight of an optimal tour.


• Next we present a 1/2-approximative algorithm, which is due to Christofides.

(1) Determine a minimal spanning tree T of Kn (with respect to W).
(2) Let X be the set of all vertices which have odd degree in T.
(3) Let H be the complete graph on X (with respect to the weights given by the relevant entries of W).
(4) Determine a perfect matching M of minimal weight in H.
(5) Let G = (V, E) be the multigraph which results from adding the edges of M to T.
(6) Determine an Euler tour C of G.
(7) Choose a tour contained in C.

Algorithm 2: Christofides’ algorithm
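A sketch built on networkx, assuming a recent version (minimum_spanning_tree, min_weight_matching, and eulerian_circuit are standard networkx calls); vertices are 0, . . . , n − 1 and W is a symmetric metric weight matrix:

import networkx as nx

def christofides_tour(W):
    n = len(W)
    G = nx.Graph()
    G.add_weighted_edges_from((i, j, W[i][j]) for i in range(n) for j in range(i + 1, n))
    T = nx.minimum_spanning_tree(G)                      # step (1)
    X = [v for v in T if T.degree(v) % 2 == 1]           # step (2): odd-degree vertices
    M = nx.min_weight_matching(G.subgraph(X))            # steps (3)-(4)
    E = nx.MultiGraph(T)
    E.add_edges_from(M)                                  # step (5): all degrees now even
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(E, source=0):        # steps (6)-(7): shortcut
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour + [tour[0]]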


• Theorem 3. Algorithm 2 is a 1/2-approximative algorithm of complexity O(n³) for ΔTSP.

• Proof. As G is Eulerian, the tour π determined in step (7) satisfies the inequality

w(π) ≤ w(E) = w(T) + w(M).

Thus we have to find a bound for w(M). Write |X| = 2m and let (i1, i2, . . . , i2m) be the vertices of X in the order in which they occur in some optimal tour σ. We consider the following two matchings of H:

M1 = {i1i2, i3i4, . . . , i_{2m−1}i_{2m}} and M2 = {i2i3, i4i5, . . . , i_{2m}i1}.

The triangle inequality for W implies

w(M1) + w(M2) ≤ w(σ) = w(TSP),

and 2w(M) ≤ w(M1) + w(M2) ≤ w(TSP), since M is a perfect matching of minimal weight. Hence, using w(T) ≤ w(TSP) from the MST relaxation,

w(π) ≤ w(T) + w(M) ≤ w(TSP) + w(TSP)/2

yields w(π) ≤ 3w(TSP)/2.


• Example 7. Let T be the minimal spanning tree with weight w(T) = 186. By inspection, we obtain M = {AaDu, BeMu, BaSt} with w(M) = 95. An Euler tour of the resulting multigraph (displayed in the original figure) contains a tour of weight 266, which is only 6% more than the optimal value of 250.

Upper bounds: Heuristics


• Perhaps the most frequently used heuristics are the so-called insertion algorithms.

• The current partial tour of length k is always extended to a tour of length k + 1 by adding one more city in the k-th iteration; this involves two tasks:

(a) choosing the city to be added, and
(b) deciding where the city chosen in (a) will be inserted into the current partial tour.

• There are several standard strategies for choosing the city in (a): arbitrary choice; selecting the city which has maximal (or, alternatively, minimal) distance to the cities previously chosen; or choosing the city which is cheapest to add.

• We also have to settle on a criterion for step (b); here an obvious strategy is to insert the city at that point of the partial tour where the least additional cost occurs.


Procedure FARIN(W, s; C)

(1) C ← (s, s), K ← {ss}, w ← 0;
(2) for u = 1 to n do d(u) ← w_{su} od;
(3) for k = 1 to n − 1 do
(4)   choose y with d(y) = max {d(u) : u = 1, . . . , n};
(5)   for e = ij ∈ K do c(e) ← w_{iy} + w_{yj} − w_{ij} od;
(6)   choose an edge f ∈ K with c(f) = min {c(e) : e ∈ K}, say f = uv;
(7)   insert y between u and v in C;
(8)   K ← (K \ {f}) ∪ {uy, yv}, w ← w + c(f), d(y) ← 0;
(9)   for x ∈ {1, . . . , n} \ C do d(x) ← min {d(x), w_{yx}} od
(10) od

Algorithm 3: Farthest Insertion
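A direct Python transcription of FARIN, as a sketch (vertices 0, . . . , n − 1; w_ss = 0 is assumed so that the initial degenerate tour (s, s) works):

def farin(W, s):
    n = len(W)
    tour = [s, s]                                   # C = (s, s)
    remaining = set(range(n)) - {s}
    d = {u: W[s][u] for u in remaining}             # distance to the partial tour
    total = 0
    while remaining:
        y = max(remaining, key=d.__getitem__)       # farthest remaining city
        # cheapest insertion edge f = (tour[k], tour[k+1])
        e = min(range(len(tour) - 1),
                key=lambda k: W[tour[k]][y] + W[y][tour[k + 1]] - W[tour[k]][tour[k + 1]])
        total += W[tour[e]][y] + W[y][tour[e + 1]] - W[tour[e]][tour[e + 1]]
        tour.insert(e + 1, y)
        remaining.remove(y)
        del d[y]
        for x in remaining:                         # update distances to the tour
            d[x] = min(d[x], W[y][x])
    return tour, total

On the example below, this sketch should reproduce the table (up to tie-breaking in steps (4) and (6)).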


• Example 8. We choose the vertex s = Fr as our starting point.

Step | d(u) for (Aa, Ba, Be, Du, Fr, Ha, Mu, Nu, St) | y  | d(y) | Partial tour                           | w
  1  | 26, 34, 56, 23, 0, 50, 40, 22, 20             | Be | 56   | Fr, Be, Fr                             | 112
  2  | 26, 34,  0, 23, 0, 29, 40, 22, 20             | Mu | 40   | Fr, Be, Mu, Fr                         | 156
  3  | 26, 34,  0, 23, 0, 29,  0, 17, 20             | Ba | 34   | Fr, Be, Mu, Ba, Fr                     | 187
  4  | 26,  0,  0, 23, 0, 29,  0, 17, 20             | Ha | 29   | Fr, Ha, Be, Mu, Ba, Fr                 | 210
  5  | 26,  0,  0, 23, 0,  0,  0, 17, 20             | Aa | 26   | Fr, Aa, Ha, Be, Mu, Ba, Fr             | 235
  6  |  0,  0,  0,  8, 0,  0,  0, 17, 20             | St | 20   | Fr, Aa, Ha, Be, Mu, St, Ba, Fr         | 247
  7  |  0,  0,  0,  8, 0,  0,  0, 17,  0             | Nu | 17   | Fr, Aa, Ha, Be, Nu, Mu, St, Ba, Fr     | 248
  8  |  0,  0,  0,  8, 0,  0,  0,  0,  0             | Du |  8   | Fr, Aa, Du, Ha, Be, Nu, Mu, St, Ba, Fr | 250

Upper bounds: Local search


• Having chosen a tour (at random or using a heuristic), the next step is to try to improve this tour as far as possible: we want to apply post-optimization.

• Suppose F is the set of all feasible solutions for a given optimization problem; for example, for the TSP, F would be the set of all tours. A neighborhood is a mapping N : F → 2^F; we say that N maps each f ∈ F to its neighborhood N(f). Any algorithm which proceeds by determining local optima in neighborhoods is called a local search algorithm.

• Let f be a tour, and choose k ∈ {2, . . . , n}. The neighborhood Nk(f) is the set of all those tours g which can be obtained from f by first removing k arbitrary edges and then adding a suitable collection of k edges. One calls Nk(f) the k-change neighborhood.

• Any tour f which has minimal weight among all tours in Nk(f) is said to be k-optimal.


(1) Choose an initial tour f.
(2) while there exists g ∈ Nk(f) with w(g) < w(f) do
(3)   choose g ∈ Nk(f) with w(g) < w(f);
(4)   f ← g
(5) od

• Perhaps the most important problem concerns which value of k one should choose.

• For large k (that is, larger neighborhoods), the algorithm yields a better approximation, but the complexity will grow correspondingly. In practice, the value k = 3 seems to work well.

Algorithm 4: k-opt


• Let f be a tour with vertex sequence (v1, v2, . . . , vn, v1); we may describe f by its edge set f = {e1, . . . , en}, where ei = v_i v_{i+1} (indices taken modulo n).

• Then the tours g ∈ N2(f) can be found as follows: remove two edges ei and ej from f, and reconnect the two resulting paths by the edges v_i v_j and v_{i+1} v_{j+1} (this reverses one of the two paths).

• Here ei and ej should not have a vertex in common.

• Note that every neighborhood N2(f) contains precisely n(n − 3)/2 tours g ≠ f.


• For g ∈ N2(f), put δ(g) = w(f) − w(g); thus δ(g) measures the advantage which the tour g offers compared to f.

• We set δ = max {δ(g) : g ∈ N2(f)}; if δ > 0, we replace f by some tour g with δ(g) = δ. Otherwise, f is already 2-optimal, and the algorithm 2-opt terminates.

• Each iteration has complexity O(n²); the number of iterations cannot be predicted.


Procedure 2-OPT(W, f; f)

(1) repeat
(2)   δ ← 0, g ← f;
(3)   for h ∈ N2(f) do
(4)     if δ(h) > δ then g ← h; δ ← δ(h) fi
(5)   od
(6)   f ← g
(7) until δ = 0

Algorithm 5: 2-opt
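A best-improvement sketch matching the procedure above; the tour is a vertex list t = (v1, . . . , vn), closed implicitly, and the 2-change removing the edges (t[i], t[i+1]) and (t[j], t[j+1]) reverses the segment between them:

def two_opt(W, t):
    n = len(t)
    while True:
        best_delta, best = 0, None
        for i in range(n - 1):
            for j in range(i + 2, n - 1 if i == 0 else n):     # skip adjacent edge pairs
                a, b = t[i], t[i + 1]
                c, d = t[j], t[(j + 1) % n]
                delta = W[a][b] + W[c][d] - W[a][c] - W[b][d]  # delta(g) = w(f) - w(g)
                if delta > best_delta:
                    best_delta, best = delta, (i, j)
        if best is None:
            return t                            # delta = 0: t is 2-optimal
        i, j = best
        t[i + 1:j + 1] = t[i + 1:j + 1][::-1]   # perform the best 2-change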


• Example 9. Let us choose the tour of weight 266 constructed using Christofides’ algorithm in Example 7 as our initial tour f. During the first iteration of 2-OPT, the edges BeMu and NuSt are replaced with BeNu and MuSt; this yields a tour of length 253.

• The second iteration of 2-OPT exchanges the edges FrDu and AaHa for FrAa and DuHa. The resulting tour is the optimal tour of length 250.

Optimal solutions: Branch and bound


• In each step, the set of all possible solutions is split into two or more subsets, which are represented by branches in a decision tree.

• For the TSP, an obvious criterion for dividing all tours into subsets is whether they contain a given edge or not.

• The approach becomes really useful only when we add an extra idea: in each step, we will calculate lower bounds for the weight of all solutions in the respective subtrees and compare them with some previously computed upper bound.

• Then no branch of the tree for which the lower bound exceeds the known upper bound can possibly lead to an optimal tour.


• Let W = (wij) be the weight matrix of a given TSP on Kn. We choose the diagonal entries wii as ∞; this can be interpreted as forbidding the use of loops.

• Let us select some row or column of W, and let us subtract a positive number d (the smallest entry of that row or column) from all its entries; call the resulting matrix W’.

• Every tour uses exactly one entry of each row and of each column, so the weight of each tour π with respect to W’ is reduced by d compared to its weight with respect to W. In particular, the optimal tours for W coincide with those for W’.

• We continue this process until we obtain a matrix W” having at least one entry 0 in each row and each column; such a matrix is called reduced.

• Note that the optimal tours for W agree with those for W”, and that the weight of each tour with respect to W” is reduced by s compared to its weight with respect to W, where s is the sum of all the numbers subtracted during the reduction process.

• It follows that s is a lower bound for the weight of all tours.


s = 8 + 27 + 29 + 8 + 20 + 29 + 17 + 17 + 19 + 8 + 1 = 183,

• so that each tour has weight at least 183 with respect to W. (Row reductions: 8, 27, 29, 8, 20, 29, 17, 17, 19; column reductions: 0, 8, 0, 0, 1, 0, 0, 0, 0.)
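As a sketch, the reduction step in Python (INF marks forbidden entries; the function returns the reduced matrix together with the lower bound s):

INF = float("inf")

def reduce_matrix(W):
    M = [row[:] for row in W]
    s = 0
    for i, row in enumerate(M):                 # subtract each row minimum
        d = min(row)
        if 0 < d < INF:
            s += d
            M[i] = [x - d for x in row]
    for j in range(len(M)):                     # then each column minimum
        d = min(row[j] for row in M)
        if 0 < d < INF:
            s += d
            for row in M:
                row[j] -= d
    return M, s                                 # every tour has weight >= s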


• Next, we have to choose an edge ij to split the set of solutions; note that we have to use directed edges ij here.

• Tours not containing ij can then be described by the weight matrix M which results from W’ by replacing the (i, j)-entry by ∞.

• As we would like to increase the current lower bound s, we should choose some edge that corresponds to a zero entry in W’ and, moreover, is the only 0-entry in its row and column (so that the matrix M will allow a further reduction). Clearly, it makes sense to choose the entry 0 for which M can be reduced by the largest possible amount.

• In our example, this is the edge BeHa, which leads to a reduction of 30+15 = 45 for M.


• Recall that we already know an optimal tour (of weight 250), which we can obtain from the FARIN heuristic.

• That tour does not help us to exclude one of the branches of the decision tree yet. Of course, such limited progress is to be expected, because the known solution of weight 250 contains the edge HaBe (for an appropriate orientation) and thus occurs on the right branch of the decision tree.

• As our original TSP was symmetric, it actually suffices to consider this branch: for each tour containing the edge BeHa, there is a corresponding tour of the same weight not containing this edge, namely the tour having the opposite orientation.

• We replace W’ by the weight matrix M.


• Then we can select (Ha,Be), where the possible reduction is 14 + 27 = 41.

• As none of the tours which we still need to consider may contain a further edge beginning in Ha or ending in Be, we may omit both the row Ha and the column Be from M.

• Next we use the (Aa,Du)-entry of the resulting 8 × 8-matrix. For tours not containing the edge AaDu, we may replace this entry by ∞ and reduce the resulting matrix by 11 + 3 = 14. This yields the following matrix A

[The reduced matrix A is displayed in the original figure.]


• We shall investigate A later. First, we examine those tours which contain the edge AaDu. For such tours, both the row Aa and the column Du may be omitted from M. Moreover, the edge DuAa cannot also occur, so that the corresponding entry may be replaced by ∞. Then the resulting matrix can be reduced even further: we may subtract 6 from the column Aa and 5 from the row Du.


• Let us consider tours not containing the edge FrAa next. Then the entry corresponding to FrAa may be replaced by ∞, so that the matrix can be reduced by 14.

• Thus we may restrict our attention to tours containing the edge FrAa. As our tour contains the edges FrAa and AaDu, the edge DuFr is no longer permissible.


• Next we consider tours which do not involve the edge DuHa; this yields a lower bound of 283, so that we may restrict our attention to tours containing DuHa. Moreover, the tours still left all contain the path (Fr, Aa, Du, Ha, Be).

• In the next step, we find that the tour has to contain the edge BeNu: without this edge, we get a lower bound of 239 + 16 = 255. Then we must also discard the edge NuFr; replacing the corresponding entry by ∞ allows us to subtract 5 from row Mu.


• We may now insert the edge MuSt into our tour: for tours not containing this edge, we obtain a lower bound of 251. Next we omit both the row Mu and the column St and replace the (St,Mu)-entry by ∞. Subtracting 6 from row Ba, we get a lower bound of 250.

• Continuing the procedure in the same way would yield the additional edges StBa, BaFr, and NuMu, so that we would obtain the (optimal) tour (Fr, Aa, Du, Ha, Be, Nu, Mu, St, Ba, Fr).

• Thus we could have obtained this tour without using heuristic methods by performing a sort of DFS on the decision tree: always choose the branch with the smallest lower bound for continuing the investigation.
