1. Introduction
The generation of municipal/urban solid waste (MSW) is an unavoidable consequence of the development of modern cities. Likewise, its correct management is one of the key elements for the sustainable development of a city [1]. The stages in the reverse logistics chain of MSW are diverse. According to Tchobanoglous et al. [2], they can be classified into waste generation; handling, separation, accumulation and processing of waste at the source; collection, transfer and transportation; separation, processing and transformation of solid waste; and, finally, disposal. Many decisions have to be made at each of these stages. For this reason, proposing computer-aided tools to assist decision makers can contribute to a more efficient use of resources.
This work focuses on the collection, transfer and transportation stage, which is known to be among the most expensive activities of MSW systems [3]. This stage comprises the design of the collection routes for the vehicles that pick up the waste produced by households and other commercial and institutional urban activities. The correct performance of this task is critical for the quality of the service provided to citizens, since mishandling it can lead to waste bin overflows, which are associated with environmental and social issues. In particular, a simulated annealing (SA) metaheuristic algorithm is proposed to solve this problem. The proposed algorithm is compared against other metaheuristic algorithms and CPLEX in order to study its performance. The experimentation is carried out both on a well-known benchmark from the literature and on real instances from the city of Bahía Blanca, Argentina. This paper is a revised and expanded version of our conference work presented at the Xth International Conference of Production Research-Americas 2020 [4].
This article is structured as follows. Section 2 presents the proposed model and resolution algorithms for the addressed problem, as well as a literature review of the main related works. Section 3 describes the results of the computational experimentation performed on a well-known benchmark set of instances from the literature and on a real-world case study. Section 4 analyses and discusses the obtained results. Finally, Section 5 presents the main conclusions and future lines of research.
2. Materials and methods
The waste collection problem can be addressed considering multiple features depending on the collection system and the real-world restrictions that are taken into account. The waste collection problem that is addressed in this work is based on a community bin system in which waste is carried by generators from their households to the collection points that are distributed throughout the city. A collection point is a place in the city specially conditioned for installing a (waste) bin or set of bins.
2.1. Mathematical formulation
This problem can be modelled either as a Capacitated Vehicle Routing Problem (CVRP) or as a soft-clustered CVRP, in which the collection points are split into regions (clusters) and either a vehicle visiting a collection point of a cluster must visit all the remaining collection points therein before leaving it, or the soft-cluster constraint must be respected, i.e., all collection points of the same cluster must be served by the same vehicle [5]. In this work, the waste collection problem is modelled as a CVRP in which a fleet of homogeneous vehicles has to visit all the collection points considering capacity constraints. Given a set of collection points $ C $ and a superset $ \underline{C} = C\cup {c}_{0} $, where $ {c}_{0} $ is the depot where the vehicles start and end their trips, the following variables and parameters are defined: the binary variable $ {x}_{ij} $, which takes value 1 if a vehicle uses the path from collection point $ i\in C $ to collection point $ j\in C $ and 0 otherwise; the continuous non-negative auxiliary variable $ {u}_{i} $ for sub-tour elimination, indexed over collection points ($ i\in C $); the parameter $ Q $, defined as the capacity of the vehicle; the parameter $ {d}_{ij} $, the distance between collection points $ i\in C $ and $ j\in C $; and the parameter $ {q}_{i} $, the amount of waste to be collected at collection point $ i\in C $. With these definitions, the following model is proposed in Eqs (1)–(6), using the two-index formulation developed by Miller et al. [6].
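The model of Eqs (1)–(6) is reconstructed here from the definitions above in the standard two-index form; minor notational details may differ from the original rendering:

$$ \min D = \sum_{i\in \underline{C}}\sum_{j\in \underline{C},\, j\neq i}{d}_{ij}{x}_{ij} \quad \left(1\right) $$

Subject to:

$$ \sum_{j\in \underline{C},\, j\neq i}{x}_{ij} = 1 \quad \forall i\in C \quad \left(2\right) $$

$$ \sum_{i\in \underline{C},\, i\neq j}{x}_{ij} = 1 \quad \forall j\in C \quad \left(3\right) $$

$$ {u}_{i}-{u}_{j}+Q{x}_{ij}\le Q-{q}_{j} \quad \forall i, j\in C,\, i\neq j \quad \left(4\right) $$

$$ {q}_{i}\le {u}_{i}\le Q \quad \forall i\in C \quad \left(5\right) $$

$$ {x}_{ij}\in \left\{0, 1\right\} \quad \forall i, j\in \underline{C} \quad \left(6\right) $$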
The proposed objective is to minimize the total distance travelled, expressed in Eq (1). Equations (2) and (3) guarantee that each collection point is visited exactly once, having a single successor and a single predecessor on the route. Equation (4) prevents the formation of subtours. Equation (5) ensures that the vehicle's capacity is not exceeded. Equation (6) establishes the binary nature of the decision variables.
2.2. Proposed resolution method: a simulated annealing algorithm
As previously mentioned, the collection of MSW from bins constitutes an application case of the Vehicle Routing Problem (VRP). In computational complexity theory, this type of problem is classified as NP-hard [7], that is, it is at least as hard as problems for which no efficient algorithms running in polynomial time with respect to the size of the problem have yet been developed [8]. For this type of problem, metaheuristic tools allow obtaining good solutions in reasonable computational times [9]. The proposed resolution algorithm is based on a simulated annealing (SA) method and was adapted from the algorithm previously developed in Toncovich et al. [10,11]. SA is a local search-based method that was developed from an analogy with the physical phenomenon of annealing [12] to solve complex optimization problems. Local search methods look for the solution with the best value of the chosen criterion in the neighborhood of the current solution, accept it as the new current solution, and repeat this procedure until no further improvement is possible in the explored neighborhood. By systematically applying this procedure, a local optimum for the problem is generally obtained. To avoid being trapped in a local optimum, a diversification mechanism must be incorporated in order to adequately explore the solution space. In simulated annealing metaheuristics, the diversification strategy allows moves, with some probability, toward solutions that worsen the current value of the objective function.
To obtain a good approximation to the optimal solution, it is also necessary to regularly restart the search from one of the solutions accepted during the search process, selected at random. The SA algorithm incorporates the classic parameters of simulated annealing. The parameters and variables corresponding to the implemented SA algorithm are indicated below.
● $ t $: current iteration.
● $ n $: current step.
● $ {S}_{0} $: initial solution.
● $ {S}_{A} $: current solution.
● $ {S}_{c} $: candidate solution.
● $ {S}^{*} $: best solution found.
● $ V\left(S\right): $ neighborhood for solution $ S $, given by the set of solutions that can be obtained from solution $ S $ through a basic perturbation. In our implementation the perturbation is generated by swapping two randomly selected elements (collection points) in $ S $.
● $ T $: control parameter. This variable simulates the temperature in the real annealing process. It is a positive real number that varies in the interval $ T\in $ [$ {T}_{f} $, $ {T}_{0} $] during the execution of the algorithm, where $ {T}_{0} $ is the initial temperature and $ {T}_{0 } $ > $ { T}_{f} $.
● $ {N}_{T} $: maximum number of iterations performed by the algorithm for a certain temperature value $ {T}_{n} $, at step $ n $.
● $ \alpha \left(T\right) $: function that determines the cooling mechanism, i.e., how $ T $ varies during the search process. In this case, it is a geometric progression of the form $ {T}_{n} = \alpha \times {T}_{n-1} $, with $ \alpha \in \left[0.8, 0.999\right] $.
● $ {N}_{cnt} $: number of consecutive iterations without improvement in the objective function value at the iteration $ t $.
● $ {N}_{stop} $: maximum allowable number of iterations without improvement.
● $ time\_elapsed $: time that has passed since the algorithm started.
● $ time\_limit $: amount of time available for executing the algorithm on a given instance.
● $ {p}_{A} $: probability of accepting a solution that is worse than the current solution.
● $ \epsilon $: a uniformly distributed random number in the interval (0, 1).
The pseudo-code of the SA algorithm is presented in Algorithm 1.
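Since Algorithm 1 was rendered graphically in the original layout, the following Python sketch illustrates the procedure just described; the objective function `dist_fn` and all parameter values are illustrative assumptions, not the exact implementation (which was written in VBA, see Section 3.3):

```python
import math
import random
import time

def simulated_annealing(dist_fn, s0, T0=1000.0, Tf=1.0, alpha=0.95,
                        NT=100, N_stop=5000, time_limit=60.0):
    """SA sketch with a swap neighborhood, geometric cooling and random
    restarts from previously accepted solutions (all values illustrative)."""
    start = time.time()
    s_curr, d_curr = list(s0), dist_fn(s0)
    s_best, d_best = list(s0), d_curr
    accepted = [list(s0)]            # pool of accepted solutions for restarts
    T, n_cnt = T0, 0
    while T > Tf and n_cnt < N_stop and time.time() - start < time_limit:
        for _ in range(NT):          # N_T iterations at temperature T_n
            s_cand = list(s_curr)    # basic perturbation: swap two randomly
            i, j = random.sample(range(len(s_cand)), 2)  # chosen points
            s_cand[i], s_cand[j] = s_cand[j], s_cand[i]
            delta = dist_fn(s_cand) - d_curr
            # accept improvements always; worse moves with probability p_A
            if delta < 0 or random.random() < math.exp(-delta / T):
                s_curr, d_curr = s_cand, d_curr + delta
                accepted.append(list(s_cand))
                if d_curr < d_best:
                    s_best, d_best, n_cnt = list(s_curr), d_curr, 0
                    continue
            n_cnt += 1
            if n_cnt % 1000 == 0:    # periodic restart from a random
                s_curr = list(random.choice(accepted))  # accepted solution
                d_curr = dist_fn(s_curr)
        T *= alpha                   # geometric cooling: T_n = alpha * T_{n-1}
    return s_best, d_best
```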
2.3. Comparison algorithms
The proposed metaheuristic is compared with a commercial solver and two other metaheuristics: the large neighborhood search (LNS) algorithm developed by Erdoğan [13] and a standard genetic algorithm.
2.3.1. Commercial solver based on MIP formulation
The solver used to solve the mathematical formulation presented in Section 2.1 is CPLEX. In order to speed up the resolution process, Eq (7) is added, which sets the minimum number of trips required considering the overall amount of waste to collect.
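A standard form of this cut, consistent with the description above (the exact rendering in the original was graphical), is:

$$ \sum_{j\in C}{x}_{0j} \ge \left\lceil \frac{\sum_{i\in C}{q}_{i}}{Q}\right\rceil \quad \left(7\right) $$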
This equation is a well-known valid inequality, or cut, in routing problems [14]. In preliminary tests, this cut was found to be effective for reducing computing times. This is generally associated with a better estimation of the lower bound, due to a tighter approximation of the feasible region of the linear relaxation to the feasible region of the actual MIP model, in resolution algorithms based on branch-and-cut [15] such as CPLEX.
2.3.2. Large neighborhood search
Erdoğan's metaheuristic was adapted from the adaptive LNS heuristic developed by Pisinger and Ropke [16], which extends Shaw's original heuristic [17]. A simplified explanation of its operation follows, and its pseudo-code is presented in Algorithm 2.
The operation of the heuristic consists of two main stages. The first is the construction of an initial solution, obtained by inserting clients into the available routes in accordance with the objective to be minimized (operator $ InitialSol\left(\right) $). Then, the solution is enhanced using the $ LocalSearch\left(\right) $ operator, which applies four local search techniques. The first three are known as Exchange (two clients are exchanged), 1-OPT (a client is extracted from one route and reinserted into another) and 2-OPT (two route segments are exchanged). These operators are detailed in Groër et al. [18]. The fourth local search operator is Vehicle-Exchange, which tries to exchange the vehicles assigned to two routes and is therefore useful when working with a heterogeneous fleet.
In the second stage, the algorithm tries to improve the solution obtained in the previous stage. The $ DestroyAndRebuild\left(\right) $ operator is applied, which considerably alters the current solution in order to explore another region of the search space and thus escape from local optima. In particular, long sections of routes are extracted from the current solution and the extracted clients are reinserted in the positions that minimize the objective function (function $ D\left(\right) $ defined in Eq (1)). A solution is accepted if it has a better cost than the best solution found so far; a solution that does not improve on the previous one can still be accepted with a given probability $ p $. This process is repeated cyclically up to the time limit imposed by the user.
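A condensed Python sketch of this two-stage scheme follows; the construction, local search and destroy-and-rebuild operators are passed in as callables, and the acceptance rule is illustrative rather than the exact implementation:

```python
import random
import time

def lns(dist_fn, initial_sol, local_search, destroy_and_rebuild,
        p=0.1, time_limit=60.0):
    """Two-stage LNS sketch: construct and improve an initial solution,
    then cyclically destroy-and-rebuild it (acceptance rule illustrative)."""
    start = time.time()
    s_curr = local_search(initial_sol())      # stage 1: InitialSol + LocalSearch
    s_best, d_best = s_curr, dist_fn(s_curr)
    while time.time() - start < time_limit:   # stage 2: cyclic improvement
        s_cand = destroy_and_rebuild(s_curr)  # perturb to escape local optima
        d_cand = dist_fn(s_cand)
        if d_cand < d_best:                   # improves the best found so far
            s_best, d_best = s_cand, d_cand
            s_curr = s_cand
        elif random.random() < p:             # worse solutions accepted with
            s_curr = s_cand                   # probability p
    return s_best, d_best
```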
2.3.3. Genetic algorithm
The schema of the implemented genetic algorithm (GA) is presented in Algorithm 3.
The GA was developed using the Distributed Evolutionary Algorithms in Python (DEAP) framework [20] and has the following characteristics:
● Representation of solutions. Solutions are encoded as a permutation of integers of length equal to the number of containers $ n $. Each index in the vector represents the visit order in the tour and the corresponding integer value represents one of the containers.
● Initialization. The population of size $ \left|P\right| $ is initialized by applying a random procedure to generate the permutations with a uniform distribution.
● Genetic operators. The recombination operator is the Partially Mapped Crossover (PMX), applied to two selected individuals with probability $ {p}_{c} $. The mutation operator is based on Swap Mutation and swaps two elements of the permutation. Both operators have been widely applied to CVRP problems [21]. The mutation is applied to an individual with probability $ {p}_{m} $. The proposed operators guarantee the feasibility of the solutions.
● Selection. Selection is performed through a binary tournament: two individuals are drawn and the one with the best fitness is selected.
● Replacement. In each iteration, the new population is made up of the best 10% (with better fitness) of the previous population, while the rest are new individuals generated through the genetic operators.
● Fitness assessment. The fitness is computed by decoding the chromosome, reading its alleles from left to right and inserting a depot visit whenever the capacity constraint of the vehicle requires it (see the sketch after this list). An example of the decoding process is presented in Figure 1 for a representative chromosome, considering a vehicle with a capacity of $ 7{m}^{3} $. The collection vehicle starts its route from the depot and collects the waste of GAPs 2, 1 and 5. However, collecting the waste of GAP 6 is not feasible since its amount of waste ($ 1.5{m}^{3} $) exceeds the remaining capacity of the vehicle, i.e., $ 7{m}^{3}-2.3{m}^{3}-2.3{m}^{3}-2.1{m}^{3} = 0.3{m}^{3} $. Thus, a visit to the depot to unload the vehicle is inserted before visiting GAP 6, adding the distances $ {d}_{50} $ and $ {d}_{06} $ to the fitness. Then, the vehicle visits GAPs 4, 7 and 3 and, finally, it returns to the depot.
● Cut due to stagnation. A stagnation cut-off mechanism was incorporated. The algorithm stops when no improvement in the fitness of the best individual is obtained during a time interval equal to 50% of the maximum execution time for the instance. Otherwise, it continues running complete generations (evaluation, selection, crossover, mutation and replacement) until the maximum execution time is exceeded.
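As an illustration of the decoding step, the following Python sketch splits a permutation into capacity-feasible trips and accumulates the travelled distance. The demand values mirror the Figure 1 example where they are given in the text; those of GAPs 3, 4 and 7 are hypothetical:

```python
def decode_fitness(perm, demand, dist, capacity=7.0):
    """Decode a permutation chromosome into depot-separated trips.

    perm: visit order of the GAPs; demand: waste stored at each GAP (m3);
    dist[a][b]: distance between two points, where 0 denotes the depot."""
    total, load, prev = 0.0, 0.0, 0
    for gap in perm:
        if load + demand[gap] > capacity:   # trip full: return to the depot
            total += dist[prev][0]
            load, prev = 0.0, 0
        total += dist[prev][gap]            # travel to the next GAP
        load += demand[gap]
        prev = gap
    return total + dist[prev][0]            # close the final trip

# Figure 1 example: GAPs 2, 1 and 5 fill the truck (6.7 of 7 m3), so a depot
# visit is inserted before GAP 6. Demands of GAPs 3, 4 and 7 are hypothetical.
demand = {1: 2.3, 2: 2.3, 3: 1.0, 4: 1.2, 5: 2.1, 6: 1.5, 7: 0.8}
```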
2.4. Related works
The problem of collecting containerized waste in a city has been frequently addressed in the literature. Extensive reviews of these works are presented in Beliën et al. [22] and Han and Ponce Cueto [23].
Among the main works applying heuristics to waste collection is that of Kim et al. [24], who applied a clustering-based heuristic and an insertion heuristic to a set of real-world instances from the United States of America. On the same dataset, Benjamin and Beasley [25] applied two advanced metaheuristics, i.e., a variable neighborhood search and a tabu search, improving the previous results of Kim et al. [24]. Nesmachnow et al. [9] proposed an evolutionary algorithm to optimize collection routes in Montevideo, Uruguay, while considering priorities for collection points located near public institutions or busy places, where overflowing may have a large social impact.
Regarding SA algorithms in waste management, Tirkolaee et al. [26] applied a metaheuristic based on SA, improved with an efficient cooling equation, to the problem of waste collection under uncertainty in waste generation, aiming at minimizing the total distance and the usage cost of vehicles. The proposed algorithm obtained results similar to the exact solutions for small instances. A comprehensive work was performed by Nowakowski et al. [27], who compared four different heuristics, i.e., simulated annealing, tabu search, greedy, and bee colony optimization, to solve case studies of three different cities in the context of electronic waste collection. The results showed that SA outperforms the rest of the proposed heuristics. A similar problem is addressed in Tirkolaee et al. [3], in which higher priority is assigned to collecting waste bins located near special places, such as health centers that produce hazardous waste, than to household waste, for a case study in Sanandaj, Iran. The proposed SA-based algorithm was able to obtain near-optimal solutions in comparison with the CPLEX solver. Mekamcha et al. [28] proposed an SA to deal with a waste collection case study in the Algerian city of Tlemcen, in which the collection is modelled as a Travelling Salesman Problem. The SA algorithm was compared with a tabu search, obtaining better results regarding total distance minimization.
In the case of Argentina, some studies can be found on applications of VRP models to the design of collection routes. For example, Bonomo et al. [29] implemented a solution strategy based on integer programming to schedule the collection routes for waste containers in the southern area of the City of Buenos Aires. In Bonomo et al. [30], a different model is presented for this city, which aims at minimizing travel distances simultaneously with minimizing wear and tear on vehicles. For the city of Concordia, Bertero [31] presents an application to design the city's collection routes, making an effort to minimize the number of turns in order to facilitate the implementation of the routes by the authorities. Bianchetti et al. [32] present an algorithm to solve the zoning of the city of San Miguel de Tucumán in order to optimize the use of resources, reassigning trucks to the downtown area of the city. In Braier et al. [33], the collection of recyclable waste is planned for the city of Morón through mathematical programming models. Finally, Delle Donne et al. [34] presented a solution strategy based on integer programming to schedule the routes for collecting the waste produced by street leaf sweeping in the city of Trenque Lauquen.
From the analysis of the related literature, we consider that there is still room to propose efficient algorithms based on simulated annealing to design waste collection routes. Moreover, this work compares SA with other metaheuristic approaches and a commercial solver in order to assess the competitiveness of the simulated annealing procedure and addresses a real case study of the city of Bahía Blanca.
3. Results of computational experimentation
This Section presents the set of benchmark instances from the literature, the real-world case from the city of Bahía Blanca, the implementation details for the solution algorithms, the obtained results and an analysis of these results.
3.1. Case study: Argentinean city of Bahía Blanca
Bahía Blanca is a medium-size city located in the south of Argentina. It is considered a major industrial, logistic and university center of the southern part of the country. The management of urban solid waste in Bahía Blanca is a crucial activity for local authorities, not only because of the broad environmental and social impact associated with its operation, but also because it consumes a large part of the municipality's budgetary resources [35]. Therefore, approaches that allow the optimization of collection logistics may be relevant to achieve a more efficient service provision [8,36]. Nowadays, waste collection is performed on a door-to-door basis. However, there are initiatives and studies that aim at migrating to a community bins system [8,35–38]. This work uses a real dataset that was obtained in one of these projects, carried out by academic, municipal and private stakeholders of the city [35]. This dataset was built with the aim of migrating from the current door-to-door waste collection system to a community waste bins system, and it corresponds to two neighborhoods of the downtown of Bahía Blanca, depicted in Figure 2: the University neighborhood as sector A and the downtown as sector B.
One of the goals of the aforementioned project [35] was also to implement source classification of waste, to increase the amount of waste that is recycled in the city, and to assess the impact of different collection frequencies on the number of collection points. Following these guidelines, the real instances are defined as follows. Firstly, three different geographic areas are considered: Sector A, Sector B, and the overall Sector A + B. For each area, four different instances are constructed, which differ in the type of waste (humid or dry) and the collection frequency considered. Humid waste is always collected six times per week, since it cannot remain uncollected for environmental reasons. However, dry waste can be collected either four times per week or three times per week. Thus, Cavallin et al. [35] proposed two systems based on collection frequency for the city: the F2 system, in which humid waste is collected six times per week and dry waste four times per week, and the F3 system, in which humid waste is collected six times per week and dry waste three times per week. Considering the three factors, i.e., geographic sector, type of waste and collection frequency of the system, the twelve instances of Table 1 are defined. Further details of the instances can be consulted in Cavallin et al. [35]. In regard to the present work, the collection frequency of the dry waste will affect the stored quantity of dry waste to be collected and, due to its influence on the required storage space at the collection point, it will also affect the quantity of humid waste to be collected.
The capacity of the truck was set at 21 m3, which is the usual capacity of the trucks in the fleet of the collection company. The distances between collection points were estimated with Open Source Routing Machine, using the approach proposed by Vázquez [39].
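As an illustration, a pairwise road-distance matrix can be retrieved from the OSRM table service as sketched below; the public demo server and the coordinates are placeholders, and the actual estimation followed the approach of Vázquez [39], which may differ:

```python
import requests

def osrm_distance_matrix(coords, server="http://router.project-osrm.org"):
    """Retrieve a road-distance matrix (in metres) from the OSRM table service.

    coords: list of (longitude, latitude) pairs, the first being the depot."""
    points = ";".join(f"{lon},{lat}" for lon, lat in coords)
    url = f"{server}/table/v1/driving/{points}"
    resp = requests.get(url, params={"annotations": "distance"}, timeout=30)
    resp.raise_for_status()
    return resp.json()["distances"]

# Hypothetical coordinates of two points near downtown Bahía Blanca.
matrix = osrm_distance_matrix([(-62.27, -38.72), (-62.26, -38.71)])
```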
3.2. Benchmark instances
In regard to the CVRP benchmark instances, the well-known set E of 13 instances proposed in Christofides and Eilon [40] was used. Both the instances and the optimal solutions were retrieved from the VRP-REP digital repository [41].
3.3. Implementation details
For solving the MIP formulation of Eqs (1)–(7), the commercial solver IBM ILOG CPLEX 20.1 was used [42], implemented using GAMS 35.1 [43] as the modelling language. Most of the parameters were kept at their default values [42,43], except for the enlargement of the working memory from the default 2048 MB to 5000 MB. This parameter was changed because, in preliminary experimentation, the solver ran out of memory using the default configuration. The model solved with CPLEX is the MIP model composed of Eqs (1)–(7), which includes the valid inequality on the minimum number of trips (Eq (7)) to accelerate convergence. A time limit of 15,000 seconds was set for each CPLEX run. The exact resolution was only applied to the real-world scenarios, since the optimal solutions of the benchmark instances are already available.
The SA was programmed in Visual Basic for Applications (VBA), the same programming language used in the LNS implementation taken from the author's repository. The GA was implemented in Python using the DEAP library.
Regarding the calibration of the metaheuristics, the LNS does not need calibration [13]. The SA algorithm was fine-tuned using a small number of test runs in order to establish the values of the parameters specified in Section 2.2.
The GA was calibrated with a parametric analysis. The parameters analyzed were the size of the population $ \left|P\right| $ (the values 100, 150 and 200 were considered), the probability of mutation $ {p}_{m} $ (the values 0.05, 0.1, 0.15, 0.20 and 0.25 were considered) and the probability of crossover $ {p}_{c} $ (the values 0.5, 0.6, 0.7, 0.8, 0.9 and 1 were tested). The instances P-n16-k8, P-n19-k2 and P-n20-k2 proposed in [44] were used for the parametric analysis. For each instance and each parametric configuration, 50 independent executions were carried out. The Shapiro-Wilk test was applied to study whether the fitness values followed a normal distribution. Since many executions did not fit a normal distribution, the medians were compared with the Friedman test [45], selecting $ \left|P\right| $ = 200, $ {p}_{c} $ = 0.9 and $ {p}_{m} $ = 0.25 as the most promising parametric configuration.
In order to provide a fair comparison between different metaheuristics, the same computing time limits and the same computer should be used [46]. In this case, the three metaheuristics were run on a personal computer with an Intel Core i5-3570k@3.40 GHz processor and 12 GB of RAM on a Windows 10 environment. For each instance, 30 runs were performed with each metaheuristic to obtain a more representative sample of the performance of the algorithms. Regarding execution times, Equation (8) was used to set the maximum execution time of each run in seconds ($ M $), which is proportional to the number of collection points in the instance, excluding the depot ($ N $).
It should be taken into account that both the SA and the GA described here have stagnation cut-off mechanisms that can lead to a real execution time shorter than the maximum. This is not the case for the LNS, whose execution time is always equal to the maximum. In the case of CPLEX, a time limit of 15,000 seconds was given to the solver.
3.4. Computational experimentation
In this Section, we present the results of the computational experimentation, performed over two different datasets: the real-world MSW instances of Bahía Blanca described in Section 3.1 and the benchmark instances described in Section 3.2.
Since the three metaheuristics involve stochastic components, the results were statistically analyzed according to the following procedure. Each metaheuristic was run 30 times on each instance. Then, a two-stage statistical methodology was applied. First, the Shapiro-Wilk normality test [47] was used to determine whether the results follow a normal distribution. The null hypothesis of this test is that the data follow a normal distribution; the alternative hypothesis is that the data cannot be assumed to follow a normal distribution. The test was applied with a statistical confidence level of 0.95.
After that, if the results of the three metaheuristics for a specific instance follow a normal distribution, the mean value is used as the estimator for the studied metric, the standard deviation (std) is used as the measure of statistical dispersion, and the parametric ANOVA test is applied to analyze whether there are significant differences in the mean values of the metaheuristics. Conversely, if the results of any of the metaheuristics do not follow a normal distribution, the median value is used as the estimator, the interquartile range (IQR) as the measure of dispersion, and the non-parametric Kruskal-Wallis test [48] is applied to analyze whether there are significant differences in the median values. The null hypothesis of this test is that the medians are equal; the alternative hypothesis is that they are not. It was applied with a statistical confidence level of 0.95.
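A compact sketch of this two-stage procedure using SciPy (the per-algorithm result lists are placeholders) could look as follows:

```python
from scipy import stats

def compare_metaheuristics(sa, lns, ga, alpha=0.05):
    """Two-stage comparison: Shapiro-Wilk normality test, then ANOVA
    (all samples normal) or Kruskal-Wallis (otherwise).

    sa, lns, ga: lists with the 30 objective values of each metaheuristic."""
    normal = all(stats.shapiro(x).pvalue > alpha for x in (sa, lns, ga))
    if normal:
        test = stats.f_oneway(sa, lns, ga)   # parametric: compare means
    else:
        test = stats.kruskal(sa, lns, ga)    # non-parametric: compare medians
    return normal, test.pvalue < alpha       # significant difference found?
```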
In both sets of instances, the results of at least one of the metaheuristics rejected the hypothesis of normal distribution according to the Shapiro-Wilk test. Thus, medians and interquartile ranges are used to report the results, and the Kruskal-Wallis test is used to analyze the differences among medians. The detailed results of the Shapiro-Wilk and Kruskal-Wallis tests can be consulted in the Appendix.
For the real instances, Table 2 reports, for each metaheuristic, the minimal value, the median value, the maximal value and the interquartile range of the D values achieved. In the pairwise comparisons performed with the Kruskal-Wallis test, all the medians were found to be statistically different except for those marked with an asterisk (*), for which the hypothesis of equal medians could not be rejected.
In Table 3, the best distance (D, according to Eq (1)) obtained for each real instance with CPLEX and with each metaheuristic is reported. In addition, the table reports the optimality gap estimated by CPLEX with respect to the best bound computed by the software, as well as the percentage difference between the best solution obtained by each metaheuristic and the solution obtained by CPLEX, computed according to Eq (9).
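The percentage difference, reconstructed here from the definitions below (the exact rendering in the original was graphical), is:

$$ {\Delta}^{MH} = \frac{{D}^{MH}-{D}^{CPLEX}}{{D}^{CPLEX}}\times 100\% \quad \left(9\right) $$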
where $ {D}^{MH} $ is the distance obtained by each metaheuristic and $ {D}^{CPLEX} $ is the distance obtained with CPLEX. Thus, in Table 3, a negative value of $ {\Delta}^{MH} $ means that the metaheuristic obtained a better result than CPLEX. We compare the CPLEX solution with two distances of each metaheuristic: the best (minimal) distance and the median distance.
In order to evaluate the computational effort, Table 4 records the number of iterations performed by each metaheuristic within the time limit established by Eq (8).
As aforementioned, to extend the comparison of the metaheuristics, the instances of the benchmark Set E [40] were solved. Table 5 shows the best results found for each instance by the algorithms under study, while Table 6 shows the performance of the metaheuristics in comparison to the best known solution (BKS) of each instance, using the percentage difference computed with Eq (10).
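Analogously to Eq (9), this percentage difference can be reconstructed from the definitions below as:

$$ {\Delta}^{MH} = \frac{{D}^{MH}-{D}^{BKS}}{{D}^{BKS}}\times 100\% \quad \left(10\right) $$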
where $ {D}^{MH} $ is the distance obtained by each metaheuristic and $ {D}^{BKS} $ is the distance of the BKS of the instance. Again, we compare the BKS with two distances: the best (minimal) value and the median value achieved by each metaheuristic.
Similarly to the results of the real-world instances, Figure 4 presents a chart with the best distance achieved by each metaheuristic and the BKS of each instance of the benchmark.
4. Discussion
In this Section, we discuss the main results of the computational experimentation performed on the real waste collection instances and on the benchmark set E. As a summary, Table 7 outlines the main features of the three metaheuristics.
4.1. Real MSW instances
Concerning the real instances of the MSW system of Bahía Blanca, the results are presented in Tables 2–4. Table 2 shows that SA obtained the smallest minimal distance in nine out of 12 instances, while LNS provided the smallest distance in the three remaining instances. Regarding the median distances, SA produced the smallest median in eight out of 12 instances (although in two instances the medians were not found to be statistically different from the ones obtained by LNS). LNS obtained the best median distance in five instances (although in four instances these medians were not statistically different from those of SA). In instance A-F2DRY, both algorithms obtained the same median. Both the SA and the LNS procedures always outperform the GA in terms of minimal and median values. Neither the GA nor the SA was stopped before the maximum allowable computing time by the stagnation cut-off.
The results of Table 3 show that CPLEX was able to obtain optimal solutions (with a 0.00% CPLEX gap) in only two out of 12 instances, i.e., A-F2HUM and A-F3HUM. In the rest of the instances, the CPLEX optimality gap is larger than zero, with the maximum gap in instance A + B-F3DRY (39.09%). Regarding the comparison between the metaheuristics and CPLEX, SA minimal distances are on average 3.27% better than the CPLEX solutions, with the largest improvement in instance A + B-F3DRY (–14.12%). Regarding LNS, minimal solutions are on average 2.50% better than CPLEX, with the largest improvement also in instance A + B-F3DRY (–10.84%). The minimal solutions of the GA are on average 32.00% worse than the CPLEX solutions, with the smallest difference in instance A-F3DRY (7.58%). When comparing the median distances of the metaheuristics with the CPLEX solutions, the relationship is maintained, SA being the metaheuristic that obtains the best average difference (–1.46%), followed by LNS (–0.24%). The results of Table 4 show that, in relation to the average number of iterations carried out, SA and GA reach a greater number of iterations as the number of nodes grows. Conversely, in the case of LNS, the number of iterations decreases as the number of nodes increases.
Another important feature to analyze is the variability of the results of the algorithms. Since the results were not normally distributed, the interquartile range better represents this variability. In order to better depict the differences between the algorithms, Figure 3 presents the box plots of four representative real-world instances. A box plot is a simple way of visualizing statistical data in which a rectangle is drawn to represent the second and third quartiles, with a line inside to indicate the median value; whiskers extend on either side of the rectangle, and outliers (values beyond the whiskers) are plotted as extra scatter dots outside the main box. For the sake of comparison, we also present the single value obtained by CPLEX. From Figure 3 it can be seen that the GA always obtains worse results than the other two metaheuristics and CPLEX. Moreover, the variability of its results also seems to be larger, especially in instance A + B-F2HUM (Figure 3(a)). LNS and SA are very competitive against CPLEX. Besides, the variability of these algorithms seems to be relatively small, especially in instance A + B-F3DRY (Figure 3(c)).
Another relevant aspect to analyze, in connection with the efficiency of the solution algorithms used to solve the real scenarios, is the impact of partitioning the overall area into two sectors. Partitioning complex problems into smaller (more tractable) parts is a usual procedure to address the complexity of NP-hard problems [49], as is the case of the CVRP addressed in this work. In this case, the partition is performed at the level of the input instance, dividing the overall area into two smaller sectors (Sectors A and B). Although this allows the resolution algorithm to face a smaller problem, which might be easier to solve, it can also cause a certain loss of quality in the obtained solution, since the algorithm does not take the overall scenario into account. The joint search space of the partitioned scenarios is smaller than the search space of the overall scenario, since some possibilities are not feasible in the partitioned case (i.e., combining bins that belong to different sectors in the same route). To analyze this behavior for the proposed algorithms, Table 8 reports the percentage deviation between solving the two partitions separately and solving the global scenario, computed according to Eq (11).
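This deviation, reconstructed here from the description below (the exact rendering in the original was graphical), is:

$$ \Delta = \frac{{D}^{A+B}-\left({D}^{A}+{D}^{B}\right)}{{D}^{A}+{D}^{B}}\times 100\% \quad \left(11\right) $$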
where $ {D}^{A} $, $ {D}^{B} $ and $ {D}^{A+B} $ are the best distance values for Sector A, Sector B and the overall area A + B, respectively. Thus, a negative difference indicates that the resolution approach is able to improve the results when considering the overall instance, in comparison to the partitions.
From the results of Table 8, it can be seen that CPLEX is able to improve the solution only in the instance of humid waste with collection frequency F2. SA is able to improve the solution when considering the overall area, in comparison to solving the partitions, in three out of four cases, with the largest improvement in the instance of dry waste with collection frequency F2 (4.19%). LNS improves the solution in two out of four cases, with the largest improvement in humid waste with collection frequency F3 (3.38%). GA always provides a worse result when dealing with the larger scenario instead of the partitioned problem. Thus, the proposed SA was able to manage quite efficiently the increment in the collection area and to take advantage of the enlargement of the search space, providing better solutions for the global scenario. On the other hand, GA could not take advantage of this possibility, probably being more affected by the larger complexity of facing a larger scenario.
4.2. Benchmark set E
The results for the benchmark set E are reported in Tables 5 and 6. The results of Table 5 show that, regarding the minimal distances, SA is able to obtain the best solution in 12 out of 13 instances, while LNS does so in six instances out of 13. The relationship between both metaheuristics is reversed when comparing the median values, which in this set of instances are all statistically different according to the Kruskal-Wallis test. The medians of the results of LNS are better in 10 instances and those of SA are better in seven instances. Although it is able to achieve the best minimal value in two instances, GA is usually outperformed by the other two metaheuristics in both minimal and median values.
Regarding Table 6, it is necessary to clarify that both SA and GA find a lower value than the optimal value reported in the literature for the E-n30-k3 instance because, since they do not have a restriction on the number of routes, they report a solution with an additional route. Table 6 shows that SA is able to obtain minimal solutions that are on average (excluding instance E-n30-k3) only 0.16% worse than the BKS, compared to a value of 2.75% for LNS. The efficiency of SA is maintained in the comparison of medians, its solutions being on average around 0.86% away from the BKS, while the LNS value is about 2.75%. As in the case of the real dataset, the GA provides worse results than the other two metaheuristics. Neither the GA nor the SA was stopped before the maximum allowable computing time by the stagnation cut-off.
Regarding the benchmark instances, no box plots are presented since the results of the metaheuristics have less variability in this set of instances and, thus, the box plots were less illustrative. From Table 6, it can be seen that the interquartile range for most of the results of LNS and SA is zero. Only the GA keeps a certain variability in the results. Instead of box plots, Figure 4 presents a chart with the best distance achieved by each metaheuristic and the BKS of each instance of the benchmark. Once again, it can be seen that the GA usually obtains worse solutions than the other two metaheuristics, although there are two exceptions, in instances E-n76-k7 and E-n23-k3 (in which GA obtains a better result than LNS). As aforementioned, in E-n30-k3 both SA and GA obtain a better solution, but they use a larger number of routes.
5. Conclusions
Finding new tools to make municipal/urban solid waste (MSW) logistics more efficient is a pressing concern for today's societies. This work focuses on solving waste collection problems for the Bahía Blanca area. In particular, it addresses a problem found in previous works, where the exact resolution of waste collection problems was inefficient because the scenarios had to be partitioned into smaller portions to achieve convergence to an acceptable feasible solution within reasonable computational times.
Therefore, in this work a comparative study between the exact resolution and three metaheuristic solution tools is presented for the problem of MSW collection in Bahía Blanca. The exact resolution is based on a mixed-integer linear programming formulation of the CVRP model solved with the CPLEX software. Regarding the metaheuristic resolution, a simulated annealing (SA) algorithm is proposed, which is compared with an algorithm from the literature based on large neighborhood search (LNS) and with a standard genetic algorithm (GA). In addition, the comparison among the metaheuristics is carried out on instances of a well-known benchmark from the literature. The proposed SA has a performance similar to that of the LNS algorithm from the literature (obtaining values close to the optimal solution on several occasions) and markedly exceeds the performance of the standard GA.
From the experimental work carried out, it can be affirmed that the application of these algorithms provides competitive results, with the metaheuristics having the advantage of requiring less computational effort to reach good solutions. The experiments on the benchmark problems and on the MSW instances of Bahía Blanca showed a good performance of the proposed SA algorithm in comparison to the other metaheuristics and CPLEX, in both the real MSW instances and the benchmark instances. It proved to be particularly competitive when compared to a state-of-the-art LNS algorithm from the related literature. On the other hand, the GA exhibited poor performance compared to SA and LNS, possibly due to the use of a standard genetic algorithm with a simple decoding function taken from applications to unconstrained problems such as the Traveling Salesman Problem.
As future lines of research, we plan to experiment with larger scenarios of the city of Bahía Blanca using the proposed metaheuristics, which were validated in the present work. We also intend to incorporate other realistic features into the problem, such as time limits for the routes or a finite size of the collection vehicle fleet. Furthermore, we propose to redesign the decoding function of the genetic algorithm with an approach better adapted to the type of CVRP addressed in this paper. Finally, we expect to continue the comparison between the three metaheuristics with different sets of instances and parametric configurations in order to expand the analysis.
Acknowledgments
This work was funded by the research projects PGI 24/J084 and PGI 24/ZJ35 of the Universidad Nacional del Sur. The authors of this work wish to thank V. Herrán Symonds for her assistance in gathering the real data of the MSW scenarios of the city of Bahía Blanca. The third author of this work was funded by the Consejo Interuniversitario Nacional of Argentina (CIN) under the grant Becas de Estímulo a las Vocaciones Científicas (Scholarship EVC-CIN Call 2018).
Conflict of interest
All authors declare no conflicts of interest in this paper.