Table 3 (continued). Gap, Iterations, Nodes and Time for the test problems with n = 50, 60, 70 and 80 (average runtimes 22.1, 34.1, 61.6 and 83.0 seconds, respectively; I = 0.01, p = 0.1, T = 0.9).
T.H.C. Smith, G.L. Thompson
be expected to provide a good upper bound U for the type of problem under consideration, we took as upper bound 1.01 times the value of the lower bound at the end of the initial ascent (if no tour is found, the algorithm has to be run again with a higher upper bound). In the initial ascent we computed the stepsize as t = λ(0.5M)/Σ_{i∈N}(d_i)² (where M is the maximum lower bound obtained in the current ascent), while in a general ascent we took t = λ(0.005M)/Σ_{i∈N}(d_i)². All ascents were started with the parameter λ set equal to 2, the threshold value.
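The stepsize rule just described can be written out directly. A minimal sketch follows; the names `lam`, `M` and `d` are our reading of the garbled text (step parameter started at 2, best lower bound in the current ascent, and the vector of subgradient components, respectively), so treat them as assumptions rather than the authors' notation.

```python
def stepsize(lam, M, d, initial=True):
    """Return t = lam * (f * M) / sum(d_i^2), where f = 0.5 in the
    initial ascent and f = 0.005 in a general ascent."""
    f = 0.5 if initial else 0.005
    denom = sum(di * di for di in d)
    return lam * (f * M) / denom
```

For example, with lam = 2, M = 100 and d = (1, -1, 2, 0), the initial-ascent stepsize is 2 * 50 / 6.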
Our computational experience with the above fifty problems is reported in Table 3, where the column headings have the following interpretations:
Gap: The difference between the optimal tour length and the lower bound at the end of the initial ascent, as a percentage of the optimal tour length.
Iterations: The total number of ascent iterations.
Nodes: The total number of subproblems generated in Steps 2(b), 2(c) and 3(d) of IE.
Time: The total runtime in seconds on the UNIVAC 1108.
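The Gap column follows directly from its definition above; a one-line helper (the function name is ours):

```python
def gap_percent(optimal_tour_length, lower_bound):
    """Gap: (optimal tour length - lower bound at the end of the initial
    ascent) as a percentage of the optimal tour length."""
    return 100.0 * (optimal_tour_length - lower_bound) / optimal_tour_length
```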
A 100-node problem was also solved; it took 13.6 minutes on the UNIVAC 1108, generating 95 nodes and requiring 5014 ascent iterations.
For the reasons stated above it is extremely difficult to compare the IE-algorithm with that of Hansen and Krarup. However, there does exist the possibility of improving the IE-algorithm further by making use of efficient sorting techniques and Kruskal's algorithm for finding a minimal spanning tree (see [16]), as done by Hansen and Krarup.
6. Conclusions
Our computational results indicate that the IE-algorithm is considerably faster than the HK1-algorithm. Since the major computational effort in the IE-algorithm is spent on Step 1 (the ascent method) in order to find good lower bounds on subproblems, an increase in the efficiency of the algorithm can be obtained by speeding up the ascent method. We are currently considering techniques for doing the latter.
References
[1] E. Balas, An additive algorithm for solving linear programs with zero-one variables, Operations Res. 13 (1965) 517-546.
[2] N. Christofides, The shortest hamiltonian chain of a graph, SIAM J. Appl. Math. 19 (1970) 689-696.
[3] E.W. Dijkstra, A note on two problems in connection with graphs, Num. Math. 1 (1959) 269-271.
[3a] L.R. Ford, Jr. and D.R. Fulkerson, Flows in Networks (Princeton University Press, Princeton, NJ, 1962).
[4] R.S. Garfinkel and G.L. Nemhauser, Integer Programming (John Wiley, New York, 1972).
[5] A.M. Geoffrion, Integer programming by implicit enumeration and Balas' method, SIAM Rev. 7 (1967) 178-190.
[6] F. Glover, A multiphase-dual algorithm for the zero-one integer programming problem, Operations Res. 13 (1965) 879-919.
[7] F. Glover, D. Klingman and D. Karney, The augmented predecessor index method for locating stepping stone paths and assigning dual prices in distribution problems, Transportation Sci. 6 (1972) 171-180.
[8] K.H. Hansen and J. Krarup, Improvements of the Held-Karp algorithm for the symmetric traveling-salesman problem, Math. Programming 7 (1974) 87-96.
[9] M. Held and R.M. Karp, A dynamic programming approach to sequencing problems, J. SIAM 10 (1962) 196-210.
[10] M. Held and R.M. Karp, The traveling-salesman problem and minimum spanning trees, Operations Res. 18 (1970) 1138-1162.
[11] M. Held and R. Karp, The traveling salesman problem and minimum spanning trees: II, Math. Programming 1 (1971) 6-25.
[12] M. Held, P. Wolfe and H.P. Crowder, Validation of subgradient optimization, Math. Programming 6 (1974) 62-88.
[13] E. Johnson, Networks and basic solutions, Operations Res. 14 (1966) 619-623.
[14] R.L. Karg and G.L. Thompson, A heuristic approach to solving traveling salesman problems, Management Sci. 10 (1964) 225-248.
[15] J. Kershenbaum and R. Van Slyke, Computing minimum trees, Proc. ACM Annual Conference (1972) 518-527.
[16] J.B. Kruskal, On the shortest spanning subtree of a graph and the traveling salesman problem, Proc. Am. Math. Soc. 7 (1956) 48-50.
[17] J.D.C. Little, et al., An algorithm for the traveling salesman problem, Operations Res. 11 (1963) 972-989.
[18] A.K. Obruca, Spanning tree manipulation and the traveling salesman problem, Computer J. 10 (1968) 374-377.
[19] R.C. Prim, Shortest connection networks and some generalizations, Bell System Tech. J. 36 (1957) 1389-1401.
[20] T.C. Raymond, Heuristic algorithm for the traveling salesman problem, IBM J. Res. Dev. 13 (1969) 400-407.
[21] P. Rosenstiehl, L'arbre minimum d'un graphe, in: P. Rosenstiehl, ed., Theory of Graphs (Gordon and Breach, New York, 1967).
[22] V. Srinivasan and G.L. Thompson, Accelerated algorithms for labeling and relabeling of trees, with applications to distribution problems, J. Assoc. Computing Machinery 19 (1972) 712-726.
[23] G.L. Thompson, The stopped simplex method: I. Basic theory for mixed integer programming, Revue Française de Recherche Opérationnelle 8 (1964) 159-182.
[24] G.L. Thompson, Pivotal operations for solving optimal graph problems, working paper.
Annals of Discrete Mathematics 1 (1977) 495-506
© North-Holland Publishing Company
COMPUTATIONAL PERFORMANCE OF THREE SUBTOUR
ELIMINATION ALGORITHMS FOR SOLVING ASYMMETRIC
TRAVELING SALESMAN PROBLEMS*
T.H.C. SMITH
Department of Statistics, Rand Afrikaans University, Johannesburg, R.S.A.
V. SRINIVASAN
Graduate School of Business, Stanford University, Stanford, CA 94305, U.S.A.
G.L. THOMPSON
Graduate School of Industrial Administration, Carnegie-Mellon University, Pittsburgh, PA 15213, U.S.A.
In this paper we develop and computationally test three implicit enumeration algorithms for
solving the asymmetric traveling salesman problem. All three algorithms use the assignment
problem relaxation of the traveling salesman problem with subtour elimination similar to the
previous approaches by Eastman, Shapiro and Bellmore and Malone. The present algorithms,
however, differ from the previous approaches in two important respects:
(i) lower bounds on the objective function for the descendants of a node in the implicit enumeration tree are computed without altering the assignment solution corresponding to the parent node; this is accomplished using a result based on “cost operators”,
(ii) a LIFO (Last In, First Out) depth-first branching strategy is used which considerably
reduces the storage requirements for the implicit enumeration approach. The three algorithms
differ from each other in the details of implementing the implicit enumeration approach and in
terms of the type of constraint used for eliminating subtours. Computational experience with
randomly generated test problems indicates that the present algorithms are more efficient and can
solve larger problems compared to (i) previous subtour elimination algorithms and (ii) the
1-arborescence approach of Held and Karp (as implemented by T.H.C. Smith) for the asymmetric
traveling salesman problem. Computational experience is reported for up to 180-node problems with costs (distances) in the interval (1, 1000) and up to 200-node problems with bivalent costs.
1. Introduction
Excluding the algorithms of this paper, the state-of-the-art algorithms for the asymmetric traveling salesman problem appear to be those of [11] and, more recently, [1], both of which use the linear assignment problem as a relaxation (with subtour elimination) in a branch-and-bound algorithm. In the case of the symmetric
* This report was prepared as part of the activities of the Management Science Research Group, Carnegie-Mellon University, under Contract N00014-75-C-0621 NR 047-048 with the U.S. Office of Naval Research.
A considerably more detailed version of this paper is available (Management Sciences Research
Report No. 369), and can be obtained by writing to the third author.
traveling salesman problem these algorithms, as well as another interesting algorithm of Bellmore and Malone [1] based on the 2-matching relaxation of the symmetric traveling salesman problem, are completely dominated in efficiency by the branch-and-bound algorithm of Held and Karp [10] (further improved in [8]) based on a 1-tree relaxation of the traveling salesman problem. In [13] an implicit enumeration algorithm using a LIFO (Last In, First Out) depth-first branching strategy based on Held and Karp's 1-tree relaxation was introduced, and extensive computational experience indicates that this algorithm is even more efficient than the previous Held-Karp algorithms.
In [17] Srinivasan and Thompson showed how weak lower bounds can be computed for the subproblems formed in the Eastman-Shapiro branch-and-bound algorithm [5, 11]. The weak lower bounds are determined by the use of cell cost operators [14, 15], which evaluate the effects on the optimal value of the objective function of parametrically increasing the cost associated with a cell of the assignment problem tableau. Since these bounds are easily computable, it was suggested in [17] that the use of these bounds, instead of the bounds obtained by re-solving or post-optimizing the assignment problem for each subproblem, would speed up the Eastman-Shapiro algorithm considerably. In this paper we propose and implement a straightforward LIFO implicit enumeration version of the Eastman-Shapiro algorithm as well as two improved LIFO implicit enumeration algorithms for the asymmetric traveling salesman problem. In all three of these algorithms the weak lower bounds of [17] are used to guide the tree search. The use of weak lower bounds in the branch-and-bound subtour elimination approach is explained with an example in [17].
We present computational experience with the new algorithms on problems of up to 200 nodes. The computational results indicate that the proposed algorithms are more efficient than (i) the previous subtour elimination branch-and-bound algorithms and (ii) a LIFO implicit enumeration algorithm based on the 1-arborescence relaxation of the asymmetric traveling salesman problem suggested by Held and Karp in [9], recently proposed and tested computationally in [12].
2. Subtour elimination using cost operators
Subtour elimination schemes have been proposed by Dantzig et al. [3, 4], Eastman [5], Shapiro [11], and Bellmore and Malone [1]. The latter four authors use, as we do, the Assignment Problem (AP) relaxation of the traveling salesman problem (TSP) and then eliminate subtours of the resulting AP by driving the costs of the cells in the assignment problem away from their true costs to very large positive or very large negative numbers.
The way we change the costs of the assignment problem is (following [17]) to use the operator theory of parametric programming of Srinivasan and Thompson [14, 15]. To describe these, let δ be a nonnegative number and (p, q) a given cell in the
assignment cost matrix C = {c_ij}. A positive (negative) cell cost operator δC⁺_pq (δC⁻_pq) transforms the optimum solution of the original AP into an optimum solution of the problem AP⁺ (AP⁻) with all data the same, except

c′_pq = c_pq + δ   (c′_pq = c_pq − δ).

The details of how to apply these operators are given in [14, 15] for the general case of capacitated transportation problems and in [17] for the special case of assignment problems. Specifically we note that p⁺ (p⁻) denotes the maximum extent to which the operator δC⁺_pq (δC⁻_pq) can be applied without needing a primal basis change.
Denoting by Z the optimum objective function value for the AP, the quantity (Z + p⁺) is a lower bound (called a weak lower bound in [17]) on the objective function value of the optimal AP-solution for the subproblem formed by fixing (p, q) out. The quantity p⁺ can therefore be considered as a penalty (see [7]) for fixing (p, q) out. The important thing to note is that the penalty p⁺ can be computed from an assignment solution without changing it in any way. Consequently, the penalties for the descendants of a node in the implicit enumeration approach can be efficiently computed without altering the assignment solution for the parent node.
In the subtour elimination algorithms to be presented next, it becomes necessary to “fix out” a basic cell (p, q), i.e., to exclude the assignment (p, q). This can be accomplished by applying the operator MC⁺_pq, where M is a large positive number. Similarly a cell (p, q) that was previously fixed out can be “freed”, i.e., its cost restored to its true value, by applying the negative cell cost operator. A cell can likewise be “fixed in” by applying MC⁻_pq.
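The operator calculus of [14, 15] computes the penalty p⁺ from the current basis without re-solving. Lacking that machinery here, the sketch below merely illustrates the effect of the fixing-out operator MC⁺_pq by brute force: adding a large M to one cell and re-solving gives the subproblem optimum, which the weak bound Z + p⁺ can only under-estimate. The cost matrix and the cell (0, 1) are arbitrary illustrations, and scipy's `linear_sum_assignment` stands in for the authors' transportation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ap_value(c):
    """Optimal assignment problem value for cost matrix c."""
    r, s = linear_sum_assignment(c)
    return c[r, s].sum()

M = 10**6                       # the "large positive number" of the text

c = np.array([[4., 1., 3.],
              [2., 0., 5.],
              [3., 2., 2.]])

z = ap_value(c)                 # Z: optimum of the original AP

# Fix cell (p, q) = (0, 1) out by adding M to its cost (the MC+ operator);
# re-solving gives the optimum of the subproblem with (0, 1) excluded.
c_out = c.copy()
c_out[0, 1] += M
z_out = ap_value(c_out)

# z_out - z is the exact penalty for fixing (0, 1) out; the weak bound
# Z + p+ of [17] never exceeds z_out.
print(z, z_out)
```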
3. New LIFO implicit enumeration algorithms
The first algorithm (called TSP1) uses the Eastman-Shapiro subtour elimination constraints with the modification suggested by Bellmore and Malone [1, p. 304] and is a straightforward adaptation to the TSP of the implicit enumeration algorithm for the zero-one integer programming problem. We first give a stepwise description of algorithm TSP1:
Step 0. Initialize the node counter to zero and solve the AP. Initialize ZB = M (ZB is the current upper bound on the minimal tour cost) and go to Step 1.
Step 1. Increase the node counter. If the current AP-solution corresponds to a tour, update ZB and go to Step 4. Otherwise find a shortest subtour and determine a penalty p⁺ for each edge in this subtour (if the edge has been fixed in, take p⁺ = M, a large positive number; otherwise compute p⁺). Let (p, q) be any edge in this subtour with smallest penalty p⁺. If Z + p⁺ ≥ ZB, go to Step 4 (none of the edges in the subtour can be fixed out without Z exceeding ZB). Otherwise go to Step 2.
Step 2. Fix (p, q) out. If in the process of fixing out, Z + p⁺ ≥ ZB, go to Step 3. Otherwise, after fixing (p, q) out, push (p, q) on to the stack of fixed edges and go to Step 1.
Step 3. Free (p, q). If (q, p) is currently fixed in, go to Step 4. Otherwise fix (p, q) in, push (p, q) on to the stack of fixed edges and go to Step 1.
Step 4. If the stack of fixed edges is empty, go to Step 6. If the edge (p, q) on top of the stack has been fixed out in Step 2, go to Step 3. Otherwise, go to Step 5.
Step 5. Pop a fixed edge from the stack and free it (if it is a fixed-in edge, restore the value of the corresponding assignment variable to one). Go to Step 4.
Step 6. Stop. The tour corresponding to the current value of ZB is the optimal tour.
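The scheme above can be sketched compactly, with some stated liberties: recursion replaces the explicit LIFO stack, each subproblem is re-solved from scratch with scipy's `linear_sum_assignment` instead of being updated by cost operators (so no weak bounds are computed), and the branch edge is simply the first unfixed edge of the shortest subtour rather than the smallest-penalty one. M = 10**6 is assumed larger than any tour cost, and the diagonal is set to M because the AP relaxation must forbid self-loops.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

M = 10**6  # big-M: assumed larger than any tour cost

def solve_ap(c):
    """Solve the assignment problem; return successor array and objective."""
    rows, cols = linear_sum_assignment(c)
    succ = np.empty(len(c), dtype=int)
    succ[rows] = cols
    return succ, float(c[rows, cols].sum())

def shortest_subtour(succ):
    """Shortest cycle of the permutation succ (a full tour has length n)."""
    n, seen, best = len(succ), [False] * len(succ), None
    for s in range(n):
        if seen[s]:
            continue
        cyc, j = [], s
        while not seen[j]:
            seen[j] = True
            cyc.append(j)
            j = succ[j]
        if best is None or len(cyc) < len(best):
            best = cyc
    return best

def tsp1(c0):
    """Branch-and-bound subtour elimination in the spirit of TSP1."""
    n = len(c0)
    c = c0.astype(float).copy()
    np.fill_diagonal(c, M)           # the AP relaxation forbids self-loops
    best = {"z": float(M), "tour": None}
    fixed_in = set()

    def search():
        succ, z = solve_ap(c)
        z += M * len(fixed_in)       # undo the -M applied to fixed-in cells
        if z >= best["z"]:           # bound test against ZB
            return
        cyc = shortest_subtour(succ)
        if len(cyc) == n:            # the AP solution is a tour: update ZB
            best["z"], best["tour"] = z, succ.copy()
            return
        edges = [(v, succ[v]) for v in cyc]
        cand = [e for e in edges if e not in fixed_in]
        if not cand:
            return                   # every subtour edge forced in: prune
        p, q = cand[0]               # branch on one edge of the subtour
        old = c[p, q]
        c[p, q] = M                  # child 1: fix (p, q) out
        search()
        c[p, q] = old - M            # child 2: fix (p, q) in
        fixed_in.add((p, q))
        search()
        fixed_in.discard((p, q))
        c[p, q] = old                # free the cell on backtracking
    search()
    return best
```

The two children (exclude or force the branch edge) cover every tour, so the search is exhaustive; the bound test mirrors the Z + p⁺ ≥ ZB pruning of Steps 1 and 2, only with exact re-solved values instead of the cheap weak bounds.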
In Step 1 of TSP1 we select the edge (p, q) to be fixed out as the edge in a shortest subtour with the smallest penalty. Selecting a shortest subtour certainly minimizes the number of penalty calculations, while the heuristic of selecting the edge with the smallest penalty is intuitively appealing (but not necessarily the best choice). We tested this heuristic against that of selecting the edge with (i) the largest penalty among edges in the subtour (excluding fixed-in edges) and (ii) the largest associated cost, on randomly generated asymmetric TSP's. The smallest-penalty choice heuristic turned out to be three times as effective as (i) and (ii) on the average, although it did not do uniformly better on all test problems.
Every pass through Step 1 of algorithm TSP1 requires the search for a shortest subtour, and once an edge (p, q) in this subtour is selected, the subtour is discarded. Later, when backtracking, we fix (p, q) in during Step 3 and go to Step 1 and again find a shortest subtour. This subtour is very likely to be the same one we discarded earlier, and hence there is a waste of effort. An improvement of the algorithm TSP1 is therefore to save the shortest subtours found in Step 1 and utilize this information in later stages of computation. We found the storage requirements to do this were not excessive, so this idea was incorporated into the next algorithm.
The second algorithm, called TSP2, effectively partitions a subproblem into mutually exclusive subproblems as in the scheme of Bellmore and Malone [1, p. 304], except that the edges in the subtour to be eliminated are considered in order of increasing penalties instead of the order in which they appear in the subtour. Whereas the search tree generated by algorithm TSP1 has the property that every nonterminal node has exactly two descendants, the nonterminal nodes of the search tree generated by algorithm TSP2 in general have more than two descendants. We now give a stepwise description of algorithm TSP2. In the description we make use of the pointer S, which points to the location where the Sth subtour is stored (i.e. at any time during the computation S also gives the level in the search tree of the current node).
Step 0. Same as in algorithm TSP1. In addition, set S = 0.
Step 1. Increase the node counter. If the current AP-solution corresponds to a tour, update ZB and go to Step 4. Otherwise increase S, find and store a shortest subtour as the Sth subtour (together with a penalty for each edge in the subtour, computed as in Step 1 of algorithm TSP1). Let (p, q) be any edge in this subtour with smallest penalty p⁺. If Z + p⁺ ≥ ZB, decrease S and go to Step 4 (none of the edges in the subtour can be fixed out without Z exceeding ZB). Otherwise go to Step 2.
Step 2. Same as in algorithm TSP1.
Step 3. Free (p, q). If all edges of the Sth subtour have been considered in Step 2, decrease S and go to Step 4. Otherwise determine the smallest penalty p⁺ stored with an edge (e, f) in the Sth subtour which has not yet been considered in Step 2. If Z + p⁺ < ZB, fix (p, q) in, push (p, q) on to the stack of fixed edges, set (p, q) = (e, f) and go to Step 2. Otherwise decrease S and go to Step 4.
Step 4. Same as in algorithm TSP1.
Step 5. Same as in algorithm TSP1.
Step 6. Same as in algorithm TSP1.
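TSP2's distinguishing feature is that the stored subtour's edges are examined in order of increasing penalty. The sketch below builds that order by brute force, re-solving the AP once per edge; that re-solving is exactly the cost the weak bounds of [17] avoid, so this is an illustration of the ordering only, not of the authors' operator-based computation. Function names are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

M = 10**6  # big-M used to exclude a cell

def ap_value(c):
    """Optimal assignment problem value for cost matrix c."""
    r, s = linear_sum_assignment(c)
    return float(c[r, s].sum())

def edges_by_penalty(c, subtour_edges):
    """Return the subtour edges sorted by increasing penalty for fixing
    them out. Here each penalty is exact (obtained by re-solving); the
    weak bounds of [17] are cheaper under-estimates of these values."""
    z = ap_value(c)
    pens = []
    for (p, q) in subtour_edges:
        c_out = c.copy()
        c_out[p, q] += M            # exclude cell (p, q)
        pens.append((ap_value(c_out) - z, (p, q)))
    pens.sort()
    return [e for _, e in pens]
```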
The third algorithm, called algorithm TSP3, effectively partitions a subproblem into mutually exclusive subproblems as in the scheme of Garfinkel [6]. A stepwise description of the algorithm follows:
Step 0. Same as in algorithm TSP2.
Step 1. Increase the node counter. If the current AP-solution corresponds to a tour, update ZB and go to Step 6. Otherwise increase S and store a shortest subtour as the Sth subtour (together with a penalty for each edge in the subtour, computed as in Step 1 of algorithm TSP1). Let (p, q) be the edge in this subtour with smallest penalty p⁺. If Z + p⁺ ≥ ZB, go to Step 5. Otherwise go to Step 2.
Step 2. Fix out all edges (p, k) with k a node in the Sth subtour. If in the process of fixing out, Z + p⁺ ≥ ZB, go to Step 3. Otherwise, when all these edges have been fixed out, go to Step 1.
Step 3. Free all fixed out (or partially fixed out) edges (p, k) with k a node in the Sth subtour. If all edges in the Sth subtour have been considered in Step 2, go to Step 4. Otherwise determine the smallest penalty p⁺ stored with an edge (e, f) in the Sth subtour which has not yet been considered in Step 2. If Z + p⁺ < ZB, fix out all edges (p, k) with k not a node in the Sth subtour, let p = e and go to Step 2. Otherwise go to Step 4.
Step 4. Free all edges fixed out for the Sth subtour and go to Step 5.
Step 5. Decrease S. If S = 0, go to Step 7. Otherwise go to Step 6.
Step 6. Let (p, k) be the last edge fixed out. Go to Step 3.
Step 7. Stop. The tour corresponding to the current value of ZB is the optimal tour.
Note that the fixing out of edges in Step 3 is completely optional and not required for the convergence of the algorithm. If these edges are fixed out, the subproblems formed from a given subproblem do not have any tours in common (see [6]). Most of these edges will be nonbasic, so that the fixing out process involves mostly cost changes. Only a few basis exchanges are needed for any edges that may be basic. However, there remains the flexibility of fixing out only selected edges (for example, only nonbasic edges) or not fixing out any of these edges.
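The edge sets manipulated in Steps 2 and 3 of TSP3 are easy to state explicitly. A small helper pair, written directly from the step descriptions above (function names are ours):

```python
def step2_fix_out(p, subtour_nodes):
    """Step 2 of TSP3: all edges (p, k) with k a node in the subtour."""
    return {(p, k) for k in subtour_nodes}

def step3_fix_out(p, subtour_nodes, n):
    """Step 3 of TSP3: all edges (p, k) with k NOT a node in the subtour,
    for a problem on nodes 0, ..., n - 1."""
    return {(p, k) for k in range(n) if k not in subtour_nodes}
```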
4. Computational experience
Our major computational experience with the proposed algorithms is based on a
sample of 80 randomly generated asymmetric traveling salesman problems with
edge costs drawn from a discrete uniform distribution over the interval (1,1000).
The problem size n varies from 30 to 180 nodes in steps of 10, and five problems
of each size were generated. All algorithms were coded in FORTRAN V and were
run using only the core memory (approximately 52,200 words) on the UNIVAC
1108 computer.
We report here only our computational experience with algorithms TSP2 and TSP3 on these problems, since algorithm TSP1 generally performed worse than either of these algorithms, as could be expected a priori.
In Table 1 we report, for each problem size, the average runtimes (in seconds) for
solving the initial assignment problem using the 1971 transportation code of
Table 1. Summary of computational performance of algorithms TSP2 and TSP3

Problem   Average time    Average runtime        Average runtime   Average time   Average quality
size      to obtain       (including the         estimated by      to obtain      of first tour
n         assignment      solution of the AP)    regression        first tour     (% from optimum)
          solution        TSP2       TSP3
 30          0.2           0.9        1.0              0.8             0.3              3.7
 40          0.4           2.9        2.8              1.9             0.5              4.0
 50          0.5           1.7        3.4              3.9             0.6              0.8
 60          0.7           9.3       11.4              6.9             1.5              4.1
 70          1.1           8.5       11.8             11.3             1.3              0.5
 80          1.5          13.8       16.1             17.3             2.3              1.0
 90          1.9          42.0       56.8             25.2             3.6              2.7
100          2.1          53.0       59.6             35.2             5.2              3.8
110          2.8          22.3         —              47.6             3.7              1.3
120          3.5          62.9         —              62.8             5.7              1.5
130          4.0         110.1         —              80.9             8.3              2.0
140          5.6         165.2         —             102.4            12.9              4.2
150          6.2          65.3         —             127.6             9.0              1.1
160          7.0         108.5         —             156.6            10.0              1.1
170          8.0         169.8         —             189.9            13.2              1.3
180          8.9         441.4         —             227.7            23.0              3.1

Note. (1) All averages are computed over 5 problems each. (2) All computational times are in seconds on the UNIVAC 1108.
Srinivasan and Thompson [16], as well as the average runtime (in seconds, including the solution of the AP) for algorithms TSP2 and TSP3. From the results for n ≤ 100, it is clear that algorithm TSP2 is more efficient than TSP3. For this reason, only algorithm TSP2 was tested on problems with n > 100. We determined that the function t(n) = 1.55 × 10⁻⁵ × n^3.18 fits the data with a coefficient of determination (R²) of 0.927. The estimated runtimes obtained from this function are also given in Table 1.
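The paper does not say how the power-law fit was computed; an ordinary least-squares fit in log-log space on the TSP2 runtimes of Table 1 is one natural reading, and on these data it reproduces an exponent near 3.2 and an R² close to the quoted 0.927:

```python
import numpy as np

# Problem sizes and TSP2 average runtimes (seconds) from Table 1.
n = np.arange(30, 190, 10)
t = np.array([0.9, 2.9, 1.7, 9.3, 8.5, 13.8, 42.0, 53.0,
              22.3, 62.9, 110.1, 165.2, 65.3, 108.5, 169.8, 441.4])

# Fit t = a * n**b by ordinary least squares in log-log space.
x, y = np.log(n), np.log(t)
b, log_a = np.polyfit(x, y, 1)   # slope b is the exponent
a = np.exp(log_a)

# Coefficient of determination of the log-log fit.
resid = y - (log_a + b * x)
r2 = 1.0 - resid.var() / y.var()
print(f"t(n) ~= {a:.2e} * n^{b:.2f}, R^2 = {r2:.3f}")
```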
It has been suggested that implicit enumeration or branch-and-bound algorithms can be used as approximate algorithms by terminating them as soon as a first solution is obtained. In order to judge the merit of doing so with algorithm TSP2, we also report in Table 1 the average runtime (in seconds) to obtain the first tour, as well as the quality of the first tour (expressed as the difference between the first tour cost and the optimal tour cost, as a percentage of the latter). Note that for all n the first tour is, on average, within 5% of the optimum and usually much closer.
We mentioned above that the fixing out of edges in Step 3 of algorithm TSP3 is not necessary for the convergence of the algorithm. Algorithm TSP3 was temporarily modified by eliminating the fixing out of these edges, but average runtimes increased significantly (the average runtimes for the 70 and 80 node problems were respectively 24.3 and 25.5 seconds). Hence it must be concluded that the partitioning scheme introduced by Garfinkel [6] has a practical advantage over the original branching scheme of Bellmore and Malone [1].
The largest asymmetric TSP's solved so far appear to be two 80-node problems solved by Bellmore and Malone [1] in an average time of 165.4 seconds on an IBM 360/65. Despite the fact that the IBM 360/65 is somewhat slower (takes about 10 to 50% longer) than the UNIVAC 1108, the average time of 13.8 seconds for TSP2 on the UNIVAC 1108 is still considerably faster than the Bellmore-Malone [1] computational times. Svestka and Huckfeldt [18] solved 60-node problems on a UNIVAC 1108 in an average time of 80 seconds (vs. 9.3 seconds for algorithm TSP2 on a UNIVAC 1108). They also estimated the average runtime for a 100-node problem as 27 minutes on the UNIVAC 1108, which is considerably higher than that required for TSP2.
The computational performance of algorithm TSP2 was also compared with the LIFO implicit enumeration algorithm in [12] for the asymmetric traveling salesman problem using Held and Karp's 1-arborescence relaxation. The 1-arborescence approach reported in [12] took, on the average, about 7.4 and 87.7 seconds on the UNIVAC 1108 for n = 30 and 60 respectively. Comparison of these numbers with the results in Table 1 again reveals that TSP2 is computationally more efficient. For the symmetric TSP, however, algorithm TSP2 is completely dominated by a LIFO implicit enumeration approach with the Held-Karp 1-tree relaxation. See [13] for details.
A more detailed breakdown of the computational results is presented in Table 2 (for TSP2 and TSP3 for n ≤ 100) and in Table 3 (for TSP2 for n > 100). The column headings of Tables 2 and 3 have the following interpretations: