Impact of Automated Algorithm Configuration on the State of the Art in Heuristic TSP Solving

The Traveling Salesperson Problem (TSP) has been studied for many decades. In exact TSP solving, Concorde (Applegate et al., 2007) has emerged as the predominant optimization algorithm. For most practical use cases, however, it is outperformed by stochastic TSP solvers, which are capable of finding optimal tours much faster -- albeit without any guarantee that the tours found are in fact optimal.
Within the class of inexact TSP solvers, EAX (Nagata & Kobayashi, 2013) and LKH (Helsgaun, 2009) have shown superior performance and have therefore been an essential part of numerous research studies. As a consequence, researchers have enhanced the original implementations of these two optimizers with additional restart strategies (Kerschke et al., 2018; Dubois-Lacoste et al., 2015) and alternative crossover operators (Sanches et al., 2017) -- on top of their already existing variety of parameters.

Note that the GPX versions were mainly evaluated on large TSP instances (with 10,000+ nodes), whereas the restart versions were compared on smaller TSP instances (consisting of 500 to 2,000 nodes). However, as the performance of an algorithm is in most cases highly dependent on its parametrization (Bischl et al., 2016; Lindauer et al., 2017), i.e., on the configuration of its parameters, the goal of this thesis is an extensive empirical comparison and analysis of the two aforementioned inexact TSP solvers. In contrast to previous studies, the two solvers shall be compared based on automatically configured versions, using parametrizations found by the state-of-the-art configurators SMAC (Hutter et al., 2011) and/or irace (López-Ibáñez et al., 2016).
Further studies have examined the impact of performance measures on algorithm assessment (Bossek & Trautmann, 2018; Kerschke et al., 2018a; Kerschke et al., 2018b). Therefore, in order to enable even more general conclusions, the two solvers shall be compared using three different measures: the Penalized Average Runtime (PAR10), the Penalized Quantile Runtime (PQR10), and a multi-objective measure based on the hypervolume indicator (HV).
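The two penalized runtime measures can be sketched as follows. This is a minimal illustration, not the thesis' evaluation code: the function names, the example runtimes, and the choice of the median as quantile are assumptions; only the penalization scheme itself (runs hitting the cutoff are counted as 10 times the cutoff, hence the "10" in PAR10/PQR10) follows the standard definitions.

```python
# Illustrative sketch of penalized runtime measures (names and data are
# hypothetical); a run with runtime >= cutoff is treated as unsuccessful.

def par(runtimes, cutoff, factor=10):
    """Penalized Average Runtime: unsuccessful runs count as factor * cutoff,
    then the arithmetic mean is taken (factor=10 gives PAR10)."""
    penalized = [t if t < cutoff else factor * cutoff for t in runtimes]
    return sum(penalized) / len(penalized)

def pqr(runtimes, cutoff, q=0.5, factor=10):
    """Penalized Quantile Runtime: the q-quantile of the penalized runtimes
    (factor=10 gives PQR10); here a simple order-statistic quantile."""
    penalized = sorted(t if t < cutoff else factor * cutoff for t in runtimes)
    idx = min(int(q * len(penalized)), len(penalized) - 1)
    return penalized[idx]

# Five hypothetical runs with a cutoff of 3600 s; the 3600.0 entries timed out.
runs = [12.3, 45.0, 3600.0, 7.8, 3600.0]
print(par(runs, cutoff=3600))  # -> 14413.02
print(pqr(runs, cutoff=3600))  # -> 45.0
```

The example illustrates why the two measures can rank solvers differently: the two timeouts dominate the PAR10 score, while the median-based PQR10 ignores them entirely as long as more than half of the runs succeed.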

References

  • David L. Applegate, Robert E. Bixby, Vasek Chvátal and William J. Cook (2007). The Traveling Salesman Problem: A Computational Study. Princeton University Press. [URL]
  • Yuichi Nagata and Shigenobu Kobayashi (2013). A Powerful Genetic Algorithm Using Edge Assembly Crossover for the Traveling Salesman Problem. In: INFORMS Journal on Computing, Vol. 25 (2), pp. 346 -- 363, INFORMS. [URL]
  • Keld Helsgaun (2009). General k-opt Submoves for the Lin-Kernighan TSP Heuristic. In: Mathematical Programming Computation, Vol. 1 (2-3), pp. 119 -- 163, Springer. [URL]
  • Pascal Kerschke, Lars Kotthoff, Jakob Bossek, Holger H. Hoos and Heike Trautmann (2018). Leveraging TSP Solver Complementarity through Machine Learning. In: Evolutionary Computation, Vol. 26 (4), pp. 597 -- 620, MIT Press. [URL]
  • Jérémie Dubois-Lacoste, Holger H. Hoos and Thomas Stützle (2015). On the Empirical Scaling Behaviour of State-of-the-Art Local Search Algorithms for the Euclidean TSP. In: Proceedings of the 17th Annual Conference on Genetic and Evolutionary Computation, pp. 377 -- 384, ACM. [URL]
  • Danilo Sanches, L. Darrell Whitley and Renato Tinós (2017). Building a Better Heuristic for the Traveling Salesman Problem. In: Proceedings of the 19th Annual Conference on Genetic and Evolutionary Computation, pp. 329 -- 336, ACM. [URL]
  • Bernd Bischl, Pascal Kerschke, Lars Kotthoff, Thomas Marius Lindauer, Yuri Malitsky, Alexandre Fréchette, Holger H. Hoos, Frank Hutter, Kevin Leyton-Brown, Kevin Tierney and Joaquin Vanschoren (2016). ASLib: A Benchmark Library for Algorithm Selection. In: Artificial Intelligence, Vol. 237, pp. 41 -- 58, Elsevier. [URL]
  • Thomas Marius Lindauer, Holger H. Hoos, Frank Hutter and Thorsten Schaub (2017). AutoFolio: An Automatically Configured Algorithm Selector (Extended Abstract). In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, pp. 5025 -- 5029. [URL]
  • Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown (2011). Sequential Model-Based Optimization for General Algorithm Configuration (Extended Version). In: Proceedings of the 5th International Conference on Learning and Intelligent Optimization, pp. 507 -- 523, Springer. [URL]
  • Manuel López-Ibáñez, Jérémie Dubois-Lacoste, Leslie Pérez Cáceres, Mauro Birattari and Thomas Stützle (2016). The irace Package: Iterated Racing for Automatic Algorithm Configuration. In: Operations Research Perspectives, Vol. 3, pp. 43 -- 58, Elsevier. [URL]
  • Jakob Bossek and Heike Trautmann (2018). Multi-Objective Performance Measurement: Alternatives to PAR10 and Expected Running Time. In: Proceedings of the 12th International Conference on Learning and Intelligent Optimization, pp. 215 -- 219, Springer. [URL]
  • Pascal Kerschke, Jakob Bossek and Heike Trautmann (2018a). Parametrization of State-of-the-Art Performance Indicators: A Robustness Study Based on Inexact TSP Solvers. In: Proceedings of the 20th Annual Conference on Genetic and Evolutionary Computation, pp. 1737 -- 1744, ACM. [URL]
  • Pascal Kerschke, Holger H. Hoos, Frank Neumann and Heike Trautmann (2018b). Automated Algorithm Selection: Survey and Perspectives. In: Evolutionary Computation, MIT Press. [URL]