The VRPTW is a generalization of the well-known capacitated vehicle routing problem (VRP or CVRP). In the VRP a fleet of vehicles must visit (service) a number of customers. All vehicles start and end at the depot. For each pair of customers, or customer and depot, there is a cost, which denotes how much it costs a vehicle to drive from one customer to the other. Every customer must be visited exactly once. Additionally, each customer demands a certain quantity of goods delivered (known as the customer demand). For the vehicles we have an upper limit on the amount of goods that can be carried (known as the capacity). In the most basic case all vehicles are of the same type and hence have the same capacity. The problem is, for a given scenario, to plan routes for the vehicles in accordance with the mentioned constraints such that the cost accumulated on the routes, the fixed costs (how much it costs to maintain a vehicle), or a combination hereof is minimized.
In the more general VRPTW each customer additionally has a time window, and between every pair of customers, or a customer and the depot, we have a travel time. The vehicles now have to comply with the additional constraint that service at a customer can only be started within that customer's time window. It is legal to arrive before a time window ``opens'', but the vehicle must then wait, and service does not start until the time window of the customer actually opens.
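The two feasibility conditions above (capacity and time windows, with waiting allowed before a window opens) can be sketched as a small check on a single route. This is an illustrative sketch, not code from the thesis; the data layout (lists indexed by customer, depot as index 0) is an assumption.

```python
# Hypothetical sketch of VRPTW route feasibility. Depot is index 0;
# service times are ignored for brevity (an assumption, not from the thesis).

def route_feasible(route, demand, tw, travel, capacity):
    """Check capacity and time-window feasibility of one route.

    route    -- sequence of customer indices (depot 0 is implicit at both ends)
    demand   -- demand[i] = quantity demanded by customer i (demand[0] = 0)
    tw       -- tw[i] = (earliest, latest) allowed service start at i
    travel   -- travel[i][j] = travel time from i to j
    capacity -- vehicle capacity
    """
    # Capacity constraint: total load on the route must fit in the vehicle.
    if sum(demand[c] for c in route) > capacity:
        return False
    time, prev = 0.0, 0
    for c in route + [0]:          # visit each customer, then return to depot
        time += travel[prev][c]
        early, late = tw[c]
        if time > late:            # arrived after the window closed
            return False
        time = max(time, early)    # arriving early is legal, but wait
        prev = c
    return True
```

For example, with tight windows the same set of customers may be feasible in one visiting order but not the reverse, since waiting can push later arrivals past a closing time.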
For solving the problem exactly, four general types of solution methods have evolved in the literature: dynamic programming, Dantzig-Wolfe decomposition (column generation), Lagrangian decomposition, and solving the classical model formulation directly.
Presently the algorithms that use Dantzig-Wolfe decomposition give the best results (Desrochers, Desrosiers and Solomon, and Kohl), but the Ph.D. thesis of Kontoravdis shows promising results for using the classical model formulation directly.
In this Ph.D. project we have used the Dantzig-Wolfe method. In the Dantzig-Wolfe method the problem is split into two problems: a ``master problem'' and a ``subproblem''. The master problem is a relaxed set partitioning problem that guarantees that each customer is visited exactly once, while the subproblem is a shortest path problem with additional constraints (capacity and time windows). Using the master problem, the reduced costs are computed for each arc, and these costs are then used in the subproblem in order to generate routes from the depot and back to the depot again. The best (improving) routes are then returned to the master problem and entered into the relaxed set partitioning problem. As the set partitioning problem is relaxed by removing the integrality constraints, the solution is seldom integral; the Dantzig-Wolfe method is therefore embedded in a separation-based solution technique.
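The link between master problem and subproblem is the reduced cost of a route: its own cost minus the dual prices of the customers it covers, where a negative value signals an improving column. A minimal sketch of that computation, with illustrative names not taken from the thesis:

```python
# Illustrative sketch of the pricing step in column generation.
# `duals` maps each customer to its dual price from the master LP.

def reduced_cost(route_cost, route_customers, duals):
    """Reduced cost of a route column: cost minus the dual prices of the
    customers the route covers. Negative means the column can improve
    the current master solution."""
    return route_cost - sum(duals[c] for c in route_customers)

def improving_routes(candidates, duals, eps=1e-9):
    """Filter candidate routes (cost, customer-list pairs) down to those
    with negative reduced cost, i.e. the columns worth returning to the
    master problem."""
    return [(cost, custs) for cost, custs in candidates
            if reduced_cost(cost, custs, duals) < -eps]
```

In the actual algorithm the candidates are produced by the constrained shortest path subproblem rather than enumerated; the filter above only illustrates the selection criterion.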
In this Ph.D. project we have been trying to exploit structural properties in order to speed up execution times, and we have been using parallel computers to be able to solve problems faster or solve larger problems.
The thesis starts with a review of previous work within the field of VRPTW both with respect to heuristic solution methods and exact (optimal) methods. Through a series of experimental tests we seek to define and examine a number of structural characteristics.
The first series of tests examines the use of dividing time windows as the branching principle in the separation-based solution technique. Instead of using the methods previously described in the literature for dividing a problem into smaller problems, we use a method developed for a variant of the VRPTW. The results are unfortunately not positive.
Instead of dividing a problem into two smaller problems and trying to solve these, we can try to obtain an integer solution without having to branch. A cut is an inequality that separates the (non-integral) optimal solution from all the integer solutions. By finding and inserting cuts we can try to avoid branching. For the VRPTW, Kohl has developed the 2-path cuts. In the separation algorithm for detecting 2-path cuts a number of tests are made. By structuring the order in which we try to generate cuts, we achieved very positive results.
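The structure of such a separation routine can be sketched as follows. A 2-path cut applies to a customer set S that the current fractional solution enters fewer than two times, yet which provably needs at least two vehicles. The sketch below is a loose illustration of ordering the tests cheapest-first (a capacity check before a costly time-window check); the callbacks and the ordering are assumptions, not the thesis's actual separation algorithm.

```python
# Illustrative sketch of 2-path cut separation with ordered tests.

EPS = 1e-6

def needs_two_vehicles(S, demand, capacity, tw_infeasible):
    """Cheap test first: if total demand exceeds one vehicle's capacity,
    two vehicles are needed. Only then run the expensive time-window
    feasibility check (`tw_infeasible` is a hypothetical callback)."""
    if sum(demand[c] for c in S) > capacity:
        return True
    return tw_infeasible(S)

def separate_2path(candidate_sets, flow_into, demand, capacity, tw_infeasible):
    """Return candidate sets that violate a 2-path inequality:
    fractional flow into S below 2 although S needs >= 2 vehicles."""
    cuts = []
    for S in candidate_sets:
        if flow_into(S) < 2 - EPS and needs_two_vehicles(
                S, demand, capacity, tw_infeasible):
            cuts.append(S)   # cut: at least 2 vehicle arcs must enter S
    return cuts
```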
In the Dantzig-Wolfe process a large number of columns may be generated, but a significant fraction of the columns introduced will not be interesting with respect to the master problem. It is a priori not possible to determine which columns are attractive and which are not, but if a column does not become part of the basis of the relaxed set partitioning problem, we consider it to be of no benefit for the solution process. These columns are subsequently removed from the master problem. Experiments demonstrate a significant reduction in running time.
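The purging rule itself is simple to state: after an LP solve, keep only the columns the solver marked as basic. A minimal sketch, assuming the LP solver exposes a basic/non-basic flag per column:

```python
# Hypothetical sketch of purging non-basic columns from the master problem.

def purge_columns(columns, in_basis):
    """Drop every column that did not enter the basis of the relaxed
    set partitioning problem.

    columns  -- list of column objects (routes), any representation
    in_basis -- parallel list of booleans reported by the LP solver
    """
    return [col for col, basic in zip(columns, in_basis) if basic]
```

A practical variant (not described in the source) would let a column survive a few iterations out of the basis before deleting it, trading memory for re-generation work in the subproblem.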
Positive results were also achieved by stopping the route-generation process prematurely in the case of time-consuming shortest path computations. Often the information from the dual variables leads to ``bad'' routes, and the premature exit from the shortest path subroutine restricts the generation of such routes significantly. This produces very good results and has made it possible to solve problem instances not solved to optimality before.
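The idea of a premature exit can be sketched as a cap on the number of improving routes collected before the (expensive) search is abandoned for this iteration. The generator interface and the target value below are illustrative assumptions, not the thesis's actual stopping criterion.

```python
# Illustrative sketch of premature termination of route generation.

def collect_improving(route_stream, duals, target=2):
    """Consume (cost, customers) pairs from `route_stream` and stop as
    soon as `target` routes with negative reduced cost have been found,
    instead of exhausting the full shortest path computation."""
    found = []
    for cost, custs in route_stream:
        if cost - sum(duals[c] for c in custs) < 0:
            found.append((cost, custs))
            if len(found) >= target:
                break   # premature exit: skip the remaining "bad" routes
    return found
```

The master problem then re-solves with whatever improving columns were found; if none exist at all, the stream runs dry and optimality of the LP relaxation is proven as usual.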
The parallel algorithm is based upon the sequential Dantzig-Wolfe based algorithm developed earlier in the project. In an initial (sequential) phase unsolved problems are generated, and when there are enough unsolved problems to start work on every processor, the parallel solution phase is initiated. In the parallel phase each processor runs the sequential algorithm. To obtain a good workload, a strategy based on balancing the load between neighbouring processors is implemented. The resulting algorithm is efficient and capable of attaining good speedup values. The load-balancing strategy shows an even distribution of work among the processors. Due to the large demand for the IBM SP2 parallel computer at UNI-C it has unfortunately not been possible to run as many tests as we would have liked. We have nevertheless managed to solve one previously unsolved problem using our parallel algorithm.
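Neighbour-based balancing can be illustrated by a toy diffusion step in which each processor exchanges half its load surplus with one neighbour. The ring topology, the sweep order, and the halving rule are all assumptions for illustration; the thesis's actual strategy on the IBM SP2 is not specified here.

```python
# Toy sketch of one load-balancing sweep between neighbouring processors
# arranged in a ring (topology is an assumption, not from the thesis).

def balance_step(loads):
    """Each processor i compares its load with its right-hand neighbour
    and moves half the signed surplus, evening out local imbalance.
    Total work is conserved."""
    new = list(loads)
    n = len(new)
    for i in range(n):
        j = (i + 1) % n
        move = (new[i] - new[j]) // 2   # half the (signed) surplus
        new[i] -= move
        new[j] += move
    return new
```

Repeating such sweeps spreads an initially skewed load across all processors while keeping the total number of unsolved subproblems unchanged.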