
To improve the path optimization effect and search efficiency of ant colony optimization (ACO), an improved ant colony algorithm is proposed. A collar (leading) path is generated from the known environmental information to avoid blind searching in the early planning stage. The influence of the ending point and the turning points is introduced into the heuristic information to raise search efficiency. Adaptive adjustment of the pheromone intensity is introduced to optimize the pheromone updating strategy, and several parameter control strategies are given to balance convergence and global search ability. Improved obstacle avoidance strategies are then proposed for dynamic obstacles of different shapes and motion states, overcoming the shortcomings of existing obstacle avoidance strategies. Comparisons with other improved algorithms in different simulation environments show that the proposed algorithm is more effective and robust in complicated and large environments. Comparisons with other obstacle avoidance strategies in a dynamic environment show that the strategies designed in this paper yield higher path quality after local obstacle avoidance, lower requirements for sensor performance, and higher safety.
Citation: Chikun Gong, Yuhang Yang, Lipeng Yuan, Jiaxin Wang. An improved ant colony algorithm for integrating global path planning and local obstacle avoidance for mobile robot in dynamic environment[J]. Mathematical Biosciences and Engineering, 2022, 19(12): 12405-12426. doi: 10.3934/mbe.2022579
As the application of mobile robots gradually shifts from industrial environments to medical handling, navigation assistance, warehousing and transportation, and catering services, higher requirements are placed on mobile robot navigation algorithms. At present, path planning algorithms fall into two categories: traditional algorithms, which mainly include the D* algorithm, fuzzy logic, the potential field method and the A* algorithm, and intelligent optimization algorithms, such as the genetic algorithm [1,2,3,4,5], the particle swarm algorithm and ant colony optimization, among others [6,7,8,9]. The ant colony algorithm has attracted much attention in mobile robot path planning because it can optimize many aspects of path quality, is well suited to the needs of practical robot applications, and is easily combined with other search methods.
For path planning of mobile robots in a static environment with known global information, Liu et al. [10] combined the ant colony algorithm with geometric optimization to eliminate the cross paths generated during pathfinding. Viseras et al. [11] added a rapidly exploring random tree method to the ant colony algorithm to accelerate its search. Wang et al. [12] added initial pheromone to all nodes between the starting point and the ending point, which improves the adaptability and convergence speed of the algorithm. However, in complex environments, because of the randomness of the roulette transfer method, pre-planning with another algorithm and adding a region of initial pheromone cannot guarantee that the ants will always follow this path or region, and it increases the running time and complexity of the algorithm, so the result is not as good as expected. Zheng et al. [13] proposed an adaptively updated heuristic function to improve performance. Jiao et al. [14] used an adaptive state transition method and an adaptive information updating strategy to adjust parameters in real time, which preserves the relative importance of pheromone intensity and desirability and the ability to jump out of local optima. Akka et al. [15] designed a piece-wise multi-heuristic function that rewards the optimal path and punishes the worst one, which expands the search space of the algorithm and reduces the influence of cluttered pheromone. You et al. [16] introduced a multiple-evolution-strategy mechanism into the state transition rules of the ant colony algorithm, which increases the diversity of search results to improve search quality and convergence speed. Wang et al. [17] proposed not updating the pheromone of a path when ants become trapped in deadlock, so that other ants are not attracted to the deadlock. Qu et al. [18] adopted a regression and penalty strategy, which allows ants to step back and punishes the pheromone on the path already traveled. Luo et al. [19] devised a dynamic punishment method to reduce the number of lost ants and improve the diversity of solutions. Miao et al. [20] transformed path planning into a multi-objective optimization problem by introducing multi-objective performance indexes into the algorithm. However, the existing improved algorithms are still not well adapted to diverse, unstructured and complex environments: their heuristic information is single, the results fluctuate excessively, the stability is low, the running time is long, and they cannot properly balance global search ability and convergence speed in large-scale complex environments.
For path planning in a dynamic environment, Zhu et al. [21] solved the path planning problem of a robot in a working environment that is beyond its perception range and contains unknown static and dynamic obstacles. Yen et al. [22] avoided dynamic obstacles by finding the linear motion function of moving obstacles. Yin et al. [23] used an obstacle detection algorithm to obtain the parameters of obstacles and then used a behavioral dynamics method to plan the local obstacle avoidance path. Xu et al. [24] proposed three obstacle avoidance strategies according to the volume and speed of different dynamic obstacles in the environment. However, these studies either did not specify the volume and velocity of the dynamic obstacles, only studied the case in which dynamic obstacles and mobile robots are equally large, or did not consider the situation comprehensively enough. In practical applications, the size and speed of dynamic obstacles are critical to obstacle avoidance.
To overcome the shortcomings of existing algorithms and realize efficient, high-quality path planning for mobile robots, an improved ant colony algorithm is proposed in this paper. The algorithm sets an uneven distribution of initial pheromone according to a leading route created by connecting feature points, which reduces blindness in the initial period. A bending suppression operator is introduced into the heuristic information to improve the smoothness of the path. The pheromone update rule is improved to increase the difference in pheromone concentration between different paths and to enhance the positive feedback effect of the algorithm. Three dynamic adaptive parameter control strategies are designed to coordinate the convergence and global search capability of the algorithm. The optimal parameter combination of the improved ant colony algorithm is determined experimentally, and the experiments show that the proposed pheromone structure reaches a first complete solution and converges more quickly, with higher efficiency and better overall performance.
For the problems existing in mobile robot path planning in a dynamic environment, the improved algorithm is first used to obtain the optimal path in the static environment. As the robot moves along this optimal path, it detects the states of dynamic obstacles, such as their volume and speed, and then takes the corresponding obstacle avoidance strategy in time to avoid them until the robot reaches the end point. Simulation studies show that, compared with the strategies in other literature, the obstacle avoidance strategies proposed in this paper are more comprehensive, and the path obtained after obstacle avoidance is better.
Grid maps are commonly used in mobile robot path planning because they are simple and can effectively represent the layout characteristics of the environment [25]. A grid map discretizes the environment into independent square grids, each of which has two states, free and occupied. The value 0 represents a free area of the environment map, and the corresponding grid is filled with white; the value 1 represents an obstacle area, and the corresponding grid is filled with black. The grid model is placed into a two-dimensional coordinate system, and a serial-number method is adopted to mark each grid, numbering the grids in order from top to bottom and left to right to satisfy the path-storage needs of the ant colony algorithm, as shown in Figure 1.
In addition, for the convenience of research, the following assumptions are made about the mobile robot and its working environment:
(1) The location of the starting point, ending point, and static obstacles are known;
(2) Both the robot and the dynamic obstacle move in a straight line at a constant speed in units of grids and can switch between the constant straight-line motion and the stopped state;
(3) The robot is equipped with sensors that can sense the state of dynamic obstacles, including moving direction, step length, size, and shape, but the states of different obstacles are randomly generated.
In the process of ants searching for the optimal path, the next node is selected probabilistically according to the heuristic value and the pheromone value of the path. At time t, the transition probability of ant k selecting the next node j from node i is expressed in Eq (1):
$$ p_{ij}^{k}(t)=\begin{cases}\dfrac{[\tau_{ij}(t)]^{\alpha}[\eta_{ij}(t)]^{\beta}}{\sum_{s\in allow_{k}}[\tau_{is}(t)]^{\alpha}[\eta_{is}(t)]^{\beta}}, & j\in allow_{k}\\ 0, & \text{otherwise}\end{cases} \quad (1) $$
where allowk is the set of nodes reachable next when the ant is at grid i, α is the pheromone heuristic factor, indicating the importance of the pheromone, and β is the expected heuristic factor, reflecting the importance of the heuristic information of the path. τij is the pheromone concentration on path (i, j), and ηij is the heuristic value, whose expression is given in Eq (2):
$$ \eta_{ij}(t)=\dfrac{1}{d_{ij}} \quad (2) $$
where dij is the Euclidean distance between the current node i and the candidate node j.
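To make the transition rule concrete, the following is a minimal Python sketch of Eqs (1) and (2) with roulette-wheel selection. The data layout (pheromone stored in a dictionary keyed by edge, nodes as (x, y) tuples) and the function names are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def heuristic(node_i, node_j):
    """Eq (2): reciprocal of the Euclidean distance between two (x, y) nodes."""
    return 1.0 / math.dist(node_i, node_j)

def select_next_node(node_i, allowed, tau, alpha, beta):
    """Eq (1): roulette-wheel selection of the next node for one ant.

    allowed : list of reachable, unvisited neighbours of node_i
    tau     : dict mapping an edge (i, j) to its pheromone concentration
    """
    weights = [tau[(node_i, j)] ** alpha * heuristic(node_i, j) ** beta
               for j in allowed]
    total = sum(weights)
    r = random.uniform(0.0, total)          # spin the roulette wheel
    cumulative = 0.0
    for j, w in zip(allowed, weights):
        cumulative += w
        if r <= cumulative:
            return j
    return allowed[-1]                      # numerical safety net
```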
Ants leave pheromone on the paths they pass, and some of this pheromone evaporates over time. Eqs (3) and (4) are used to update the pheromone concentration on path (i, j) at time t + 1:
$$ \tau_{ij}(t+1)=(1-\rho)\,\tau_{ij}(t)+\Delta\tau_{ij}(t) \quad (3) $$

$$ \Delta\tau_{ij}(t)=\sum_{k=1}^{m}\Delta\tau_{ij}^{k}(t) \quad (4) $$
where ρ is the global pheromone volatilization coefficient, ρ∈(0,1), Δτij(t) is the total pheromone increment on path (i, j) in the current iteration, and Δτkij(t) is the pheromone left by ant k on path (i, j), calculated according to Eq (5):
$$ \Delta\tau_{ij}^{k}(t)=\begin{cases}\dfrac{Q}{L_{k}}, & \text{tour}(i,j)\in \text{tour}_{k}\\ 0, & \text{otherwise}\end{cases} \quad (5) $$
where Q is a constant and Lk represents the total length of the path searched by the kth ant in this iteration.
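The basic update of Eqs (3)–(5) can be sketched as follows; the dictionary-based pheromone storage and the function name are again assumptions made for illustration.

```python
def update_pheromone(tau, ant_tours, ant_lengths, rho, Q):
    """Eqs (3)-(5): evaporate, then deposit Q / L_k on every edge of each tour.

    ant_tours   : list of node sequences, one per ant
    ant_lengths : corresponding tour lengths L_k
    """
    for edge in tau:                                 # evaporation, Eq (3)
        tau[edge] *= (1.0 - rho)
    for tour, length in zip(ant_tours, ant_lengths):
        for i, j in zip(tour[:-1], tour[1:]):        # deposit, Eqs (4) and (5)
            tau[(i, j)] = tau.get((i, j), 0.0) + Q / length
```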
In the early stage of path planning, the difference in pheromone concentration between nodes is small, so the positive feedback effect of the algorithm is not obvious and the ants search paths somewhat blindly, which leads to poor convergence and longer computation time. This paper therefore achieves an uneven distribution of initial pheromone by creating a collar (leading) route [26] and adding pheromone to the nodes that the collar route passes through.
In this paper, several points are used to describe the different shapes of obstacle grids, and these points are called the feature points of obstacle grids. The extraction rules of feature points are as follows:
(1) The feature points of a convex obstacle grid are all of its vertices, as shown by A in Figure 2.
(2) The feature points of a concave obstacle grid are only part of its vertices; the vertices at the concave corners are removed, as shown by B in Figure 2.
(3) For U-shaped obstacle grids, shown as C and D in Figure 2, the feature points are all vertices except the red vertices, which are removed.
(4) For obstacles at the boundary of the map, shown as E, F and G in Figure 2, the vertices lying on the map boundary are removed.
First, a straight line is drawn connecting the starting point to the goal point, as shown by the green line in Figure 3(a). If this line does not intersect any obstacle, it is taken as the collar line. Otherwise, the obstacles that intersect the line are defined as Class Ⅰ obstacles, and their feature points are extracted according to the rules above. Then, starting from the starting point, the connections between the starting point and the feature points are examined; when more than one feature point can be connected to the starting point, priority is given to the feature point closest to the ending point, which is connected to the starting point. The procedure then repeats from the selected feature point until a collar line from the starting point to the ending point is generated, as shown by the red line in Figure 3(a).
In some environments a feature point may not be connectable to the starting point, to another feature point, or to the ending point, as shown in Figure 3(b)–(d). For cases (b) and (c), the closest unconnectable feature point is connected to the current starting point or feature point, as shown by the blue lines. For case (d), the current feature point is connected to the ending point, the obstacles crossed by this line are defined as Class Ⅱ obstacles and their feature points are extracted, and the collar line is then continued from the current feature point.
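Assuming the collar (leading) line has already been generated as a sequence of grid nodes by the procedure above, the uneven initial pheromone distribution described earlier could be seeded roughly as below. Depositing the extra pheromone on edges whose endpoints both lie on the collar line is an interpretation of "adding pheromone to the nodes that the collar route passes"; the values tau0 and boost are illustrative.

```python
def init_pheromone(edges, collar_nodes, tau0=1.0, boost=2.0):
    """Uneven initial pheromone: every edge starts at tau0, and edges whose two
    endpoints both lie on the collar (leading) line receive an extra deposit.

    edges        : iterable of directed grid edges (i, j)
    collar_nodes : nodes traversed by the collar line
    """
    collar = set(collar_nodes)
    return {(i, j): tau0 + (boost if i in collar and j in collar else 0.0)
            for (i, j) in edges}
```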
The heuristic information of the standard ACO is the reciprocal of the Euclidean distance between the current node and the next optional node, which limits search efficiency and the ability of the ants to find the optimal path. Therefore, this paper adds guidance information about the target point and a bend suppression factor to the heuristic function to reduce invalid path searches and redundant turning points. The improved heuristic information is
$$ \eta_{ij}(t)=\dfrac{d_{ig}}{d_{ij}+d_{jg}}\,(\cos\omega+1) \quad (6) $$
where dij is the distance between grid i and grid j, djg is the distance between the candidate grid j and the goal grid, and dig is the distance between grid i and the goal grid. ω is the angle between the line segment from node i−1 to node i and the line segment from node i to node i+1. The closer the angle is to 180°, the greater ηij(t) is, which helps to reduce the turning points of the path.
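A sketch of the improved heuristic of Eq (6) follows. Here ω is taken as the angle between the direction vectors of the segments (i−1, i) and (i, j), so that continuing straight maximizes the cosω + 1 factor; this angle convention and the handling of the first step are assumptions made for illustration.

```python
import math

def improved_heuristic(prev, node_i, node_j, goal):
    """Eq (6): eta_ij = d_ig / (d_ij + d_jg) * (cos(omega) + 1).

    prev is node i-1 (None on the first step); all nodes are (x, y) tuples.
    """
    d_ij = math.dist(node_i, node_j)
    d_jg = math.dist(node_j, goal)
    d_ig = math.dist(node_i, goal)
    if prev is None:
        bend = 2.0                          # no previous segment, no bend penalty
    else:
        v1 = (node_i[0] - prev[0], node_i[1] - prev[1])
        v2 = (node_j[0] - node_i[0], node_j[1] - node_i[1])
        cos_omega = (v1[0] * v2[0] + v1[1] * v2[1]) / (
            math.hypot(*v1) * math.hypot(*v2))
        bend = cos_omega + 1.0              # largest when the ant keeps going straight
    return d_ig / (d_ij + d_jg) * bend
```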
After each iteration of the standard ACO, the pheromone increment on a path is the ratio of the pheromone intensity to the path length; when path lengths are close to each other this difference is very small, making it easy for the ants to fall into a local optimum in a complex environment. Therefore, this paper proposes an improved pheromone update rule that distributes the pheromone intensity over different paths according to Eq (7). After each iteration, only the pheromone on the paths of ants that reach the target point is updated, according to Eqs (3), (4) and (8).
$$ Q_{m}=t_{c}\cdot e^{\frac{L_{GL}-L_{m}}{3}}\cdot Q \quad (7) $$
$$ \Delta\tau_{ij}^{k}(t)=\begin{cases}\dfrac{Q_{m}}{L_{m}+turn}, & \text{tour}(i,j)\in \text{tour}_{m}\\ 0, & \text{otherwise}\end{cases} \quad (8) $$
where tc is a constant, LGL denotes the length of the collar line, Lm represents the length of the path searched by an ant that reaches the end point, and turn is the total number of turns on the path.
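A sketch of the improved pheromone update of Eqs (7) and (8). The reading of the exponent in Eq (7) as (LGL − Lm)/3 and the tuple-based bookkeeping of goal-reaching ants are assumptions; only ants that reached the goal deposit pheromone, as stated above.

```python
import math

def adaptive_intensity(t_c, L_GL, L_m, Q):
    """Eq (7): pheromone intensity for an ant whose path length is L_m."""
    return t_c * math.exp((L_GL - L_m) / 3.0) * Q

def update_pheromone_improved(tau, goal_tours, rho, t_c, L_GL, Q):
    """Eqs (3), (4) and (8): only ants that reached the goal deposit pheromone.

    goal_tours : list of (node_sequence, path_length, number_of_turns) tuples
    """
    for edge in tau:                                   # evaporation, Eq (3)
        tau[edge] *= (1.0 - rho)
    for tour, L_m, turns in goal_tours:
        Q_m = adaptive_intensity(t_c, L_GL, L_m, Q)    # Eq (7)
        for i, j in zip(tour[:-1], tour[1:]):
            tau[(i, j)] = tau.get((i, j), 0.0) + Q_m / (L_m + turns)   # Eq (8)
```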
For ACO, α and β are two important parameters that coordinate the algorithm's global search capability and convergence speed. In the early stage of the algorithm, the heuristic function mainly guides the ants' path search. As the number of iterations increases and the pheromone concentration of each node grows, the pheromone should play the leading role in path selection. In the later iterations, to avoid falling into local optima, the guiding effects of the pheromone and the heuristic function should both be weakened to increase the randomness of the algorithm; therefore, α should first increase and then decrease, and β should decrease gradually. Since most parameter control strategies judge the operating condition of the algorithm only from the number of iterations, without considering its actual behaviour, the parameter adjustment is likely to lag or lead. Therefore, this paper designs parameter control strategies that adjust the parameter values based on the length of the leading route, the length of the path found by the ants, and the number of iterations. The proposed strategies are as follows:
For α:
$$ \alpha(N_{c})=\alpha_{initial}+\delta\cdot\sin\!\left(\frac{\frac{N_{c}}{N}+\frac{L_{GL}}{L_{Fbest}+L_{GL}}}{2}\cdot\pi\right) \quad (9) $$
For β:
$$ \beta(N_{c})=\beta_{initial}\cdot e^{-2\left(\frac{\frac{N_{c}}{N}+\frac{L_{GL}}{L_{Fbest}+L_{GL}}}{2}\right)^{2}} \quad (10) $$
where αinitial and βinitial are the initial values of the parameters α and β, respectively, δ is a positive adjustment coefficient, N is the maximum number of iterations, Nc is the current iteration number, LGL is the length of the collar line, and LFbest is the optimal path length found in the previous iterations. Eqs (9) and (10) show that the parameters are jointly controlled by the iteration number and the path length, changing nonlinearly and dynamically with both to coordinate the algorithm's global search capability and convergence speed.
Meanwhile, the pheromone volatilization factor also affects the performance of the ant colony algorithm while it searches for the optimal path. This paper adaptively and dynamically adjusts the pheromone volatilization factor to balance the global search ability and convergence speed of the algorithm, as shown in Eq (11):
$$ \rho=\kappa\left(\frac{L_{GL}}{L_{best}+L_{GL}}\right) \quad (11) $$
where κ is a positive adjustment coefficient less than 1. As the quality of the found path improves, the volatilization factor ρ gradually increases, which increases the difference in pheromone concentration between paths and enhances the guiding role of high-quality ants, thereby improving the search speed of the colony and making the algorithm converge rapidly.
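The three adaptive schedules of Eqs (9)–(11) can be computed together as below; dividing the combined iteration/length term by two before applying the sine and Gaussian shaping is the reading assumed here, and the function name is illustrative.

```python
import math

def adapt_parameters(Nc, N, L_GL, L_Fbest, L_best, alpha0, beta0, delta, kappa):
    """Eqs (9)-(11): schedules for alpha, beta and rho, driven jointly by the
    iteration ratio Nc/N and the collar-line / best-path-length ratio."""
    progress = Nc / N + L_GL / (L_Fbest + L_GL)
    alpha = alpha0 + delta * math.sin(progress / 2.0 * math.pi)   # Eq (9)
    beta = beta0 * math.exp(-2.0 * (progress / 2.0) ** 2)         # Eq (10)
    rho = kappa * L_GL / (L_best + L_GL)                          # Eq (11)
    return alpha, beta, rho
```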
Step1: Build the grid map and initialize the number of ants m, the maximum number of iterations N, the pheromone heuristic factor α, the expected heuristic factor β, the pheromone volatilization factor ρ, the strength of the pheromone Q and other parameters.
Step2: Create the collar line and add the initial pheromone to the grid it passes through, then, record the length of the collar line.
Step3: Place the ant at the starting point, calculate the heuristic information according to Eq (6) and the transition probability according to Eq (1), and finally use the roulette method to select the next feasible grid. Then turn to Step4.
Step4: If the ant has reached the target grid, turn to Step5; otherwise, turn to Step3.
Step5: Increment the ant counter; if all ants have completed their path search, turn to Step6, otherwise turn to Step3.
Step6: Record the optimal path of this iteration, and update the path pheromone according to Eqs (3), (4) and (8).
Step7: Adaptively adjust parameters according to the Eqs (9)–(11).
Step8: Determine whether the termination condition is satisfied. If it is, output the optimal path and the iteration curve; otherwise, let Nc = Nc + 1 and go to Step3, repeating until the maximum number of iterations is reached.
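Putting the pieces together, a skeleton of Steps 1–8 might look like the following; construct_path (the per-ant walk driven by Eqs (1) and (6)), count_turns and the grid object with an edges attribute are assumed helpers, and the best length found so far is used for both LFbest and Lbest.

```python
def improved_aco(grid, start, goal, params, collar_nodes, L_GL):
    """Skeleton of Steps 1-8, reusing the sketch functions defined above."""
    tau = init_pheromone(grid.edges, collar_nodes)                  # Steps 1-2
    alpha, beta, rho = params["alpha"], params["beta"], params["rho"]
    best_path, best_len = None, float("inf")
    for Nc in range(1, params["N"] + 1):
        goal_tours = []
        for _ in range(params["m"]):                                # Steps 3-5
            path, length = construct_path(grid, start, goal, tau, alpha, beta)
            if path is not None:                                    # ant reached the goal
                goal_tours.append((path, length, count_turns(path)))
                if length < best_len:
                    best_path, best_len = path, length
        update_pheromone_improved(tau, goal_tours, rho,
                                  params["tc"], L_GL, params["Q"])  # Step 6
        alpha, beta, rho = adapt_parameters(                        # Step 7
            Nc, params["N"], L_GL, best_len, best_len,
            params["alpha"], params["beta"], params["delta"], params["kappa"])
    return best_path, best_len                                      # Step 8
```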
In a grid environment, the following collisions may occur between the robot and dynamic obstacles (only obstacles in uniform linear motion are considered in this paper):
(1) The trajectory of the dynamic obstacle is not in the same line as the trajectory of the robot.
(2) The trajectories of the dynamic obstacle and the robot are on the same line and in opposite directions (the obstacle and the robot will collide head-on).
(3) The trajectories of the dynamic obstacle and the robot are on the same line and in the same direction, with the dynamic obstacle chasing the robot and moving faster than it.
(4) The trajectories of the dynamic obstacle and the robot are on the same line and in the same direction, with the robot chasing the dynamic obstacle and moving faster than it.
A corresponding dynamic obstacle avoidance strategy is proposed for each of the above four scenarios.
To facilitate the design of obstacle avoidance strategies for dynamic obstacles with different shapes, the obstacles are first regularized to obtain their trajectories. Figure 4(a) shows the original shape of an obstacle, with the red line representing its moving direction. The feature points of the dynamic obstacle are extracted from the information detected by the sensors using the method proposed above; the two feature points with the largest perpendicular distance from the direction of movement, one on each side of the obstacle, are then selected, and the dynamic obstacle is regularized into part of an n×n grid according to these points, as shown in Figure 4(b), where the blue dotted line represents the trajectory of the dynamic obstacle.
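A rough sketch of the regularization step: here the detected obstacle cells are simply padded to an axis-aligned square block, which is a simplification of the feature-point-based construction of Figure 4 and assumes grid-aligned motion.

```python
def regularize_obstacle(cells):
    """Pad the detected obstacle cells to an n x n block of grid cells.

    cells : iterable of (x, y) grid cells reported by the sensors
    """
    cells = list(cells)
    xs = [x for x, _ in cells]
    ys = [y for _, y in cells]
    n = max(max(xs) - min(xs), max(ys) - min(ys)) + 1   # side length in cells
    x0, y0 = min(xs), min(ys)
    return {(x0 + dx, y0 + dy) for dx in range(n) for dy in range(n)}
```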
For scenario 1, the wait-and-avoid strategy proposed in [24] can be used. As shown in Figure 5, A represents the collision point; the robot stops at the red grid, waits for the obstacle to pass, and then continues along the original path after ensuring that it will not collide with the obstacle.
For scenario 2, for convenience of study, suppose that the robot occupies 1 grid and its step length is 1 grid, that the ratio of the dynamic obstacle's step length to the robot's step length is Stepscale, and that the dynamic obstacle is faster than the robot. After regularization, the side length of the occupied grid block is Gridscale, the distance from the robot to the dynamic obstacle when the robot detects it is ng grids, and the grid coordinates of the robot are (xrob, yrob). The precondition for the robot to avoid the dynamic obstacle is given in Eq (12):

$$ X=\mathrm{floor}\!\left(\frac{L_{d}}{\sqrt{2}}\right)\cdot Step_{scale}-\mathrm{mod}\!\left(Grid_{scale},2\right)\leqslant n_{g} \quad (12) $$
If the side length of the regularized obstacle block is an even multiple of the robot's size, it must also hold that the six grids in Eq (13) contain non-obstacle grids and that a path to the ending point exists through these grids:
$$ \{(x_{rob}-2,\,y_{rob}-1),\,(x_{rob}-2,\,y_{rob}-2),\,(x_{rob}-1,\,y_{rob}-2),\,(x_{rob}+2,\,y_{rob}+1),\,(x_{rob}+2,\,y_{rob}+2),\,(x_{rob}+1,\,y_{rob}+2)\}\not\subset obs_{grid} \quad (13) $$
If the side length of the regularized obstacle block is an odd multiple of the robot's size, it must also hold that the two grids in Eq (14) contain non-obstacle grids and that a path to the ending point exists through them:
$$ \{(x_{rob}-2,\,y_{rob}-2),\,(x_{rob}+2,\,y_{rob}+2)\}\not\subset obs_{grid} \quad (14) $$
$$ X\leqslant n_{g}<X+V_{rob}+V_{obs} \quad (15) $$
If ng satisfies Eq (15), the obstacle avoidance strategy designed below must be executed at the grid where the robot is currently located.
$$ X+V_{rob}+V_{obs}\leqslant n_{g} \quad (16) $$
If ng satisfies Eq (16), the robot can continue along the previously planned path for the number of grids D given by Eq (17) before executing the obstacle avoidance strategy designed below.
$$ n_{g}-X+V_{rob}+V_{obs}\in\left[\{0,1,2,3\}+D\,(V_{rob}+V_{obs})\right] \quad (17) $$
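A hedged sketch of the timing decision implied by Eqs (12), (15) and (16): X stands for the minimum detection distance from Eq (12), the number of extra steps D of Eq (17) is not computed here, and the returned labels are illustrative.

```python
def avoidance_timing(n_g, X, V_rob, V_obs):
    """Scenario 2: decide when to start the avoidance manoeuvre.

    n_g is the detected distance in grids, X the minimum distance from Eq (12).
    """
    if n_g < X:
        return "cannot_avoid"            # precondition of Eq (12) violated
    if n_g < X + V_rob + V_obs:          # Eq (15)
        return "avoid_now"
    return "continue_then_avoid"         # Eq (16): walk D more grids, see Eq (17)
```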
The obstacle avoidance strategies designed in this paper are illustrated with the examples shown in Figure 6. Since the strategy adopted depends on the ratio of the dynamic obstacle's size to the robot's size, strategies are designed for two situations.
Assume that the obstacle's size is 4 times (or another even multiple of) that of the robot and its step length is 3 times that of the robot, as shown in Figure 6(a). In Figure 6, the blue squares represent the dynamic obstacle, the green square represents the robot, the red solid line represents the path planned before the robot encounters the obstacle, and the blue dotted line is the trajectory of the dynamic obstacle. At a certain moment the obstacle and the robot are positioned as in Figure 6(a); they are 6 grids apart and satisfy the conditions of Eqs (12) and (14), so the obstacle avoidance strategy is executed at the current position. At the next moment the obstacle and the robot are positioned as in Figure 6(b), and no collision occurs. At the following moment the robot can move to one of the three positions 3-1, 3-2 and 3-3 in Figure 6(c), with the turning angle limited to ±90°. At this time, the three candidate grids plus the three grids to their upper right, six grids in total, must satisfy the condition of Eq (12); the path is then replanned with the chosen non-obstacle grid as the starting point, and the shortest of these paths is taken as the robot's path.
If the size of the obstacle is 9 times (or another odd multiple of) that of the robot and its step length is 3 times that of the robot, as shown in Figure 7, the obstacle avoidance strategy is different: the robot only needs to move 2 grids along the ±90° direction to avoid the dynamic obstacle, and then replans the path from the grid at position ③ in Figure 7(c).
The difference between the strategies adopted when the dynamic obstacle is an odd or an even multiple of the robot's size lies in the position the robot occupies after moving one grid once it has discovered the obstacle. As shown in Figure 6, in the odd-multiple case the robot is at position 2 in Figure 6(b) after moving one grid, which is inside the trajectory of the dynamic obstacle; in the even-multiple case the robot is at the position shown in Figure 7(b), which is on the trajectory of the dynamic obstacle, and it can next move towards 3-1, 3-2 or 3-3 in Figure 8(c) to avoid the obstacle, whereas the robot at position 2 in Figure 6(b) can only avoid the dynamic obstacle by moving towards position 3 in Figure 6(c).
In the odd-multiple case, adopting the same avoidance strategy as in the even-multiple case would require the robot to detect the obstacle from farther away; otherwise the robot would hit the dynamic obstacle, as shown in Figure 8. Figure 8(a)–(d) show the process of a robot applying the even-multiple strategy in an odd-multiple situation and eventually colliding with the obstacle, so the even-multiple strategy cannot be adopted here, as it would place greater demands on the robot's sensing performance.
For scenario 3, the same obstacle avoidance strategy is adopted as for scenario 2; it suffices to replace Eqs (15)–(17) with Eqs (18)–(20) and then implement the corresponding designed strategies.
$$ X\leqslant n_{g}<X+V_{obs}-V_{rob} \quad (18) $$

$$ X+V_{obs}-V_{rob}\leqslant n_{g} \quad (19) $$

$$ n_{g}-X+V_{obs}-V_{rob}\in\left[\{0,1,2,3\}+D\,(V_{obs}-V_{rob})\right] \quad (20) $$
For scenario 4, the robot only needs to apply an obstacle avoidance strategy similar to that of scenario 2 when it is rg grids away from the obstacle, where rg means that the robot's speed is rg times that of the obstacle.
Combining the environmental model, the improved ant colony algorithm and the dynamic obstacle avoidance strategy, the main steps of the path planning method proposed in this paper are as follows.
Step1: Build the grid map and create the collar route based on known static environmental information.
Step2: The robot uses the improved ant colony algorithm in this paper to plan an optimal path in a static environment.
Step3: At the current starting point the robot obtains information about the dynamic environment through its sensors. If a dynamic obstacle is detected, the robot executes the corresponding collision avoidance strategy from Section 4 according to the obstacle's characteristics and reaches a new starting point; otherwise, it follows the original path to reach the new starting point.
Step4: If the robot reaches the ending point, the path planning ends; otherwise, return to Step3.
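The overall loop of Steps 1–4 can be sketched as below; plan_global, detect_obstacles and execute_avoidance are assumed placeholders standing in for the improved ant colony algorithm and the obstacle avoidance strategies of Section 4.

```python
def navigate(grid, start, goal, params, sensors):
    """Skeleton of the combined planner (Steps 1-4 above)."""
    path = plan_global(grid, start, goal, params)   # nodes from start to goal
    position = start
    while position != goal:
        obstacles = detect_obstacles(sensors, position)
        if obstacles:
            # Execute the matching avoidance strategy of Section 4, which
            # returns the robot's new position and the remaining (replanned) path.
            position, path = execute_avoidance(grid, position, goal, path, obstacles)
        else:
            path = path[1:]                         # advance one grid along the plan
            position = path[0]
    return position
```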
The simulation environment is as follows: Windows10 64 bit; processor AMD Ryzen 7 5800H; main frequency 3.2 GHz; memory 8 GB; simulation software: Matlab R2018b.
Since no precise mathematical analysis of the ACO model has so far been developed that yields optimal parameter settings for different situations, the parameters are generally chosen empirically, based on experience and repeated trials over many experiments.
This paper selects several important parameters that determine the performance of the algorithm: the pheromone heuristic factor α, the expected heuristic factor β, and the adjustment coefficients tc and κ of the adaptive pheromone intensity and volatilization.
First, the value range of each parameter is set: α∈{0, 0.5, 1, 2, 4}, β∈{0, 2, 5, 7, 9}, tc∈{1, 1.5, 2, 3, 5}, κ∈{0.1, 0.3, 0.5, 0.7, 0.9}. The default values are α = 1, β = 5, tc = 2, κ = 0.5, and only one parameter is changed in each experiment; for example, when simulating α = 0, all other parameters take their default values. Each parameter setting is simulated 10 times to reduce the effect of chance, and the mean values of the simulation results are compared and analyzed. The corresponding experimental results for each group are shown in Table 1.
α | 0 | 0.5 | 1 | 2 | 4 |
Mean value of path length | 58.254 | 51.298 | 48.531 | 49.378 | 53.854 |
β | 0 | 2 | 5 | 7 | 9 |
Mean value of path length | 65.624 | 53.274 | 49.591 | 50.325 | 52.951 |
tc | 1 | 1.5 | 2 | 3 | 5 |
Mean value of path length | 57.936 | 49.292 | 50.836 | 51.951 | 53.469 |
κ | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |
Mean value of path length | 52.861 | 50.197 | 49.178 | 48.568 | 49.625 |
From the experimental results in Table 1, the optimal value of α is around 1, the optimal value of β is around 5, the optimal value of tc is around 1.5, and the optimal value of κ is around 0.7.
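The one-factor-at-a-time tuning behind Table 1 can be expressed as a small search loop; run_planner is an assumed wrapper that runs the improved algorithm once with the given parameters and returns the resulting path length.

```python
from statistics import mean

def tune_parameters(run_planner, defaults, grid_space, repeats=10):
    """One-factor-at-a-time tuning as in Table 1: vary each parameter over its
    candidates while the others stay at their defaults, average the path length
    over `repeats` runs, and keep the value with the smallest mean."""
    best = dict(defaults)
    for name, candidates in grid_space.items():
        scores = {value: mean(run_planner(dict(defaults, **{name: value}))
                              for _ in range(repeats))
                  for value in candidates}
        best[name] = min(scores, key=scores.get)
    return best

# Defaults and candidate ranges used in the paper's experiment.
defaults = {"alpha": 1, "beta": 5, "tc": 2, "kappa": 0.5}
grid_space = {"alpha": [0, 0.5, 1, 2, 4], "beta": [0, 2, 5, 7, 9],
              "tc": [1, 1.5, 2, 3, 5], "kappa": [0.1, 0.3, 0.5, 0.7, 0.9]}
```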
The experiments were divided into parts according to the type of map: the algorithm in [19], the algorithm in [24] and the improved ant colony algorithm proposed in this paper were simulated on each map in turn, and each algorithm was executed 50 times to obtain the algorithm performance comparison table. The values of the experimental parameters are shown in Table 2.
Parameter | α | β | ρ | Q | m | N | tc | δ | κ |
Value | 1 | 5 | 0.3 | 100 | 50 | 100 | 1.5 | 2 | 0.7 |
Figure 9 shows the paths generated by the three algorithms in a simple environment. As shown in Figure 9 and Table 3, all three algorithms find the shortest path, with a length of 28.6274 m; however, the path of the algorithm in this paper has only 6 turning points, fewer than those of the algorithms in [19] and [24]. The algorithm in this paper therefore has certain advantages in convergence speed and path smoothness.
Map | Algorithm | Optimal path length | Average of path length | Standard deviation | Average number of iterations | Average time-consuming(sec) | Number of bends |
1 | ① | 28.6274 | 28.6274 | 0 | 14 | 1.35 | 6 |
② | 28.6274 | 28.6557 | 0.0283 | 20 | 1.48 | 8 | |
③ | 28.6274 | 29.0127 | 0.3853 | 32 | 3.37 | 11 | |
2 | ① | 73.9828 | 74.2296 | 0.2468 | 13 | 4.04 | 12 |
② | 73.9828 | 77.0538 | 3.071 | 35 | 7.91 | 18 | |
③ | 77.6694 | 90.5694 | 12.9 | - | 18.23 | 33 | |
3 | ① | 115.87 | 119.35 | 3.48 | 27 | 4.68 | 32 |
② | 118.799 | 132.26 | 13.461 | 45 | 11.01 | 35 | |
③ | 122.213 | 148.84 | 26.627 | 83 | 23.6 | 41 | |
4 | ① | 58.183 | 62.527 | 4.344 | 38 | 12.32 | 18 |
② | 59.598 | 70.847 | 11.249 | 59 | 19.56 | 23 | |
③ | 65.111 | 85.247 | 20.136 | 81 | 44.83 | 40 | |
5 | ① | 78.8112 | 87.5614 | 8.7502 | 30 | 20.45 | 21 |
② | 84.0833 | 103.1546 | 19.0713 | 75 | 33.54 | 35 | |
③ | 90.1838 | 126.154 | 35.9702 | 85 | 50.57 | 34 | |
6 | ① | 364.7523 | 395.146 | 30.3937 | 35 | 40.56 | 59 |
② | 377.196 | 426.545 | 49.349 | 80 | 70.48 | 55 | |
③ | 389.8712 | 460.254 | 70.3828 | 97 | 110.32 | 71 | |
Note: ①: Algorithm in this paper. ②: Algorithm in paper [24]. ③: Algorithm in paper [19]
Figure 10 shows the paths generated by the three algorithms in a complex environment. As shown in Figure 10 and Table 3, the optimal path length of the improved ant colony algorithm is 73.9828 and its number of turns is 12. In terms of shortest path length, the improved algorithm is essentially the same as the algorithm in [19], whereas the algorithm in [24] cannot find the global optimal path. The number of turns is 14% lower than that of the algorithm in [19], and, as Table 3 shows, the algorithm in this paper has an obvious advantage in convergence speed.
Figure 11 shows the paths generated by the three algorithms in the baffle environment, where deadlock tends to occur because of the small free space. As shown in Figure 11 and Table 3, the algorithm in this paper still plans the shortest path, with a length of 115.87 m and no redundant turning points, whereas neither the algorithm in [19] nor that in [24] finds the globally optimized path, and the algorithm in this paper converges faster and runs in less time than both.
Figure 12 shows the paths generated by the three algorithms in the trough environment. As shown in Figure 12 and Table 3, compared with the algorithm in [19] the path length is reduced by 1.415, and compared with the algorithm in [24] the number of turning points is reduced by 12 and the path length by 6.928. The average path length of the algorithm in this paper converges to 62.527, while that of the algorithm in [19] is 70.847, which further illustrates that the algorithm in this paper is more stable.
Figure 13 shows the paths generated by the three algorithms in the complex environment. As shown in Figure 13 and Table 3, the algorithm in this paper can still quickly find the optimal path. The optimal path length of the algorithm in [19] is 84.0833, while that of the algorithm in this paper is 78.8112, a shortening of 5.2721. The average number of iterations in [19] is 75, compared with 30 for this algorithm, so the convergence speed is more than doubled, and the number of turns is reduced by 14.
Figure 14 shows the paths generated by the three algorithms in an environment that combines the maps above. As shown in Figure 14 and Table 3, compared with the algorithm in [19] the path length is reduced by 12.4437, and the average number of iterations drops from 80 for [19] to 35 for this algorithm; the running time of the algorithm in this paper is also much lower than that of the algorithm in [19].
The results of running the three algorithms 50 times in the same map environments are shown in Table 3. The algorithms in [19] and [24] show large fluctuations in path length and number of inflection points, whereas the algorithm in this paper finds the global optimal path more stably, with fewer inflection points, higher path quality and faster convergence, which significantly reduces the optimization time; the improvement is most obvious on large-scale complex maps.
Testing the algorithm only with the above method cannot fully establish its optimization performance, so a statistical test is also used to judge the significance of the results: the 50 path-length results of the three algorithms are compared with the Wilcoxon rank-sum test to obtain p-values, i.e., whether the algorithm in this paper differs significantly from the other algorithms at the p = 0.05 level. The null hypothesis H0 is that there is no significant difference between the two algorithms. When p < 0.05, H0 is rejected, indicating a significant difference between the two algorithms; when p > 0.05, H0 is accepted, indicating that the difference is not significant and the algorithms have the same optimization ability. The specific test results are shown in Table 4.
Map | Algorithm in paper [24] | Algorithm in paper [19] | ||
Z | P | Z | P | |
1 | –4.298 | 1.700E–05 | –4.623 | 4.000E–06 |
2 | –4.141 | 3.500E–05 | –4.782 | 2.000E–06 |
3 | –4.247 | 2.200E–05 | –4.782 | 2.000E–06 |
4 | –3.795 | 1.480E–04 | –4.556 | 5.000E–06 |
5 | –4.433 | 9.000E–06 | –4.762 | 2.000E–06 |
6 | –3.034 | 2.000E–03 | –4.330 | 1.500E–05 |
According to Table 4, the p-values are all less than 0.05, indicating that there is a significant difference between the algorithm in this paper and the other algorithms.
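For reference, the rank-sum comparison behind Table 4 can be reproduced with SciPy; lengths_ours and lengths_other stand for the 50 path-length samples of two algorithms and are placeholders for the experimental data.

```python
from scipy.stats import ranksums

def compare_algorithms(lengths_ours, lengths_other, alpha=0.05):
    """Wilcoxon rank-sum test on two sets of 50 path-length results (Table 4);
    H0 (equal optimization ability) is rejected when p < alpha."""
    statistic, p_value = ranksums(lengths_ours, lengths_other)
    return statistic, p_value, p_value < alpha
```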
This paper will verify the feasibility of the proposed dynamic environment obstacle avoidance strategy through the following simulation examples. In this simulation experiment, the red grid represents the robot, and other color grids represent dynamic obstacles in different states. The starting and ending points have been marked, as shown in Figure 15.
Firstly, the robot uses the improved ant colony algorithm to plan a global optimal path from the starting point to the ending point in the static environment, as shown by the red line in Figure 15. Then the robot moves along the planned path, and detects environmental information every time it moves a grid, then executes the corresponding obstacle avoidance strategy according to the information of the detected dynamic obstacles (The obstacle avoidance strategies in case 1 and case 4 are relatively simple, so no corresponding simulation experiments have been done).
At time t1, the robot moves to the position shown as A in Figure 16(a), and the sensors detect a dynamic obstacle Obs1 (occupying 9 green grids) moving towards the robot at a rate of 3 grids per step; its initial and final positions are marked in Figure 16(a), and the green dashed line indicates its trajectory. At this point the robot is 5 grids away from the obstacle and executes the obstacle avoidance strategy for the odd-multiple case of scenario 2; the re-planned path is shown by the blue dotted line in Figure 16(a). If the positive obstacle avoidance strategy proposed in [24] were used, the robot would not be able to avoid the obstacle regardless of when its sensors detected the state of the dynamic obstacle.
At time t2, the robot moves to the position shown as C in Figure 16(b), and the sensors detect a dynamic obstacle Obs2 (occupying 4 yellow grids) moving towards the robot at a rate of 2 grids per step; its initial and final positions are marked in Figure 16(b), and the yellow dashed line indicates its trajectory. At this point the robot is 3 grids away from the obstacle and executes the obstacle avoidance strategy for the even-multiple case of scenario 2; the re-planned path is shown by the green dotted line in Figure 16(b). If the collision avoidance strategy proposed in [24] were adopted, the robot's sensors would need to detect the state of the dynamic obstacle at position B in Figure 16(b) and start replanning the path there; that path is shown by the red dotted line in Figure 16(b). Figure 16(b) shows that, compared with the obstacle avoidance strategy in [24], the path re-planned after obstacle avoidance in this paper is shorter and has fewer inflection points.
At time t3, the robot moves to the position shown as D in Figure 16(c), and the sensors detect a dynamic obstacle Obs3 (occupying 4 purple grids) approaching the robot from behind at a rate of 3 grids per step; its initial and final positions are marked in Figure 16(c), and the purple dashed line indicates its trajectory. At this point the robot is 5 grids away from the obstacle and executes the obstacle avoidance strategy for the even-multiple case of scenario 3; the re-planned path is shown by the black line in Figure 16(c). If the positive obstacle avoidance strategy proposed in [24] were used, the robot's sensors would need to detect the state of the dynamic obstacle at position F in Figure 16(c) and start replanning the path there; that path is shown by the yellow dotted line in Figure 16(c).
Figure 17 compares the global optimal paths obtained with the obstacle avoidance strategies designed in this paper and in [24]. The red solid line is the path planned using the strategies of this paper, and the blue solid line is the path obtained with the strategy in [24] (when that strategy cannot avoid an obstacle, the strategy of this paper is used instead). Compared with the strategy in [24], this paper regularizes obstacles of different shapes and fully considers the positions of static obstacles in the environment and the states of dynamic obstacles when designing the avoidance strategies; the robot can avoid obstacles even when it is close to them, so the requirements on the sensors are lower, the safety is higher, the path after obstacle avoidance is shorter, and the number of inflection points is smaller.
This paper proposes an improved ant colony algorithm for robot path planning in a static environment. First, the feature points of different types of obstacle grids are extracted from the known environmental information, and a leading route from the starting point to the ending point is planned to overcome the blind search of the initial stage and improve the convergence speed. A heuristic function that simultaneously considers the starting point, the ending point and the turning points is designed to improve search efficiency and path smoothness. Three different parameter control strategies are designed to coordinate the convergence and global search ability of the algorithm. Different obstacle avoidance strategies are then proposed for dynamic obstacles of different shapes and states in the dynamic environment to plan a collision-free optimal path for the robot from start to end.
Simulation experiments in environments of different scales and complexity show that, compared with the algorithms in [19] and [24], the algorithm in this paper effectively reduces the path length and the number of redundant corners and accelerates convergence and operation efficiency. The grid model based on feature-point extraction eliminates some poor solutions of the original grid model, so the ant colony algorithm does not need to consider them during path planning; the path selection is more purposeful, the initial solution has better quality, and the computation time of the algorithm is greatly reduced.
Simulation and comparison with other dynamic obstacle avoidance strategies show that the proposed strategies fully consider the locations of static obstacles and the shapes of dynamic obstacles in the environment; the paths planned after obstacle avoidance are shorter and have fewer turning points, and obstacles can be avoided even when the robot is close to them, which indicates that the strategies in this paper have lower requirements for sensor performance and higher safety.
This work was funded by the National Key R&D Program of China (2018YFB2000700).
The authors declare that there is no conflict of interest.
[1] B. K. Patle, G. Babu L, A. Pandey, D. R. K. Parhi, A. Jagadeesh, A review: On path planning strategies for navigation of mobile robot, Def. Technol., 15 (2019), 582–606. https://doi.org/10.1016/j.dt.2019.04.011
[2] A. Bakdi, A. Hentout, H. Boutami, A. Maoudj, O. Hachour, B. Bouzouia, Optimal path planning and execution for mobile robots using genetic algorithm and adaptive fuzzy-logic control, Robot. Auton. Syst., 89 (2017), 95–109. https://doi.org/10.1016/j.robot.2016.12.008
[3] G. Luo, J. Yu, Y. Mei, S. Zhang, UAV path planning in mixed-obstacle environment via artificial potential field method improved by additional control force, Asian J. Control, 17 (2015), 1600–1610. https://doi.org/10.1002/asjc.960
[4] S. M. Persson, I. Sharf, Sampling-based A* algorithm for robot path-planning, Int. J. Robot. Res., 33 (2014), 1683–1708. https://doi.org/10.1177/0278364914547786
[5] M. Elhoseny, A. Tharwat, A. E. Hassaniene, Bezier curve based path planning in a dynamic field using modified genetic algorithm, J. Comput. Sci., 25 (2018), 339–350. https://doi.org/10.1016/j.jocs.2017.08.004
[6] E. S. Low, P. Ong, K. C. Cheah, Solving the optimal path planning of a mobile robot using improved Q-learning, Robot. Auton. Syst., 115 (2019), 143–161. https://doi.org/10.1016/j.robot.2019.02.013
[7] S. Hosseininejad, C. Dadkhah, Mobile robot path planning in dynamic environment based on cuckoo optimization algorithm, Int. J. Adv. Robot. Syst., 16 (2019), 1–13. https://doi.org/10.1177/1729881419839575
[8] L. Guangsheng, C. Wusheng, Path planning for mobile robot using self-adaptive learning particle swarm optimization, Sci. China Inf. Sci., 61 (2018), 267–284. https://doi.org/10.1007/s11432-016-9115-2
[9] F. H. Ajeil, I. K. Ibraheem, M. A. Sahib, A. J. Humaidi, Genetic algorithm based approach for autonomous mobile robot path planning, Procedia Comput. Sci., 127 (2018), 180–189. https://doi.org/10.1016/j.procs.2018.01.113
[10] L. Jianhua, Y. Jianguo, L. Huaping, T. Xingjun, G. Meng, An improved ant colony algorithm for robot path planning, Soft Comput., 21 (2017), 5829–5839. https://doi.org/10.1007/s00500-016-2161-7
[11] A. Viseras, R. O. Losada, L. Merino, Planning with ants: Efficient path planning with rapidly exploring random trees and ant colony optimization, Int. J. Adv. Robot. Syst., 13 (2016), 1–16. https://doi.org/10.1177/1729881416664078
[12] Y. Zheng, L. Qiang, H. Wang, C. Wang, X. Chen, Path planning based on improved ant colony algorithm with potential field heuristic, Control Decision, 33 (2018), 1775–1781.
[13] Y. Zheng, L. Qiang, H. Wang, C. Wang, X. Chen, Path planning of mobile robot based on adaptive ant colony algorithm, J. Intell. Fuzzy Syst., 1 (2020), 5329–5338. https://doi.org/10.3233/JIFS-189018
[14] Z. Jiao, K. Ma, Y. Rong, P. Wang, H. Zhang, S. Wang, A path planning method using adaptive polymorphic ant colony algorithm for smart wheelchairs, J. Comput. Sci., 25 (2018), 50–57. https://doi.org/10.1016/j.jocs.2018.02.004
[15] K. Akka, F. Khaber, Mobile robot path planning using an improved ant colony optimization, Int. J. Adv. Robot. Syst., 15 (2018), 1–7. https://doi.org/10.1177/1729881418774673
[16] Y. XiaoMing, L. Sheng, Z. Chen, An improved ant colony system algorithm for robot path planning and performance analysis, Int. J. Robot. Autom., 33 (2018), 527–533.
[17] D. Wang, H. Yu, Path planning of mobile robot in dynamic environments, in International Conference on Intelligent Control and Information Processing (ICICIP), IEEE, 2 (2011), 691–696. https://doi.org/10.1109/ICICIP.2011.6008338
[18] Q. Hong, H. Liwei, X. Ke, Research of improved ant colony based robot path planning under dynamic environment, J. Univ. Electron. Sci. Technol. China, 44 (2015), 260–265.
[19] Q. Luo, H. Wang, Y. Zheng, J. He, Research on path planning of mobile robot based on improved ant colony algorithm, Neural Comput. Appl., 32 (2020), 1555–1566. https://doi.org/10.1007/s00521-019-04172-2
[20] C. Miao, G. Chen, C. Yan, Y. Wu, Path planning optimization of indoor mobile robot based on adaptive ant colony algorithm, Comput. Ind. Eng., 156 (2021), 107230. https://doi.org/10.1016/j.cie.2021.107230
[21] Q. Zhu, J. Hu, W. Cai, L. Henschen, A new robot navigation algorithm for dynamic unknown environments based on dynamic path re-computation and an improved scout ant algorithm, Appl. Soft Comput., 11 (2011), 4667–4676. https://doi.org/10.1016/j.asoc.2011.07.016
[22] C. Yen, M. Cheng, A study of fuzzy control with ant colony algorithm used in mobile robot for shortest path planning and obstacle avoidance, Microsyst. Technol., 24 (2018), 125–135. https://doi.org/10.1007/s00542-016-3192-9
[23] J. Yin, W. Fu, A safety navigation method for integrating global path planning and local obstacle avoidance for self-driving cars in a dynamic environment, Sci. Iran., 28 (2021).
[24] K. Xu, X. Lu, Y. Huang, S. Hu, Robot path planning based on double-layer ant colony optimization algorithm and dynamic environment, Acta Electron. Sinica, 47 (2019), 2166–2176.
[25] Y. Weitao, S. Jing, G. Zhimin, Y. Chuanyi, G. Tong, Best grid size of the occupancy grid map and its accuracy, Robot, 42 (2020), 199–206.
[26] X. Deng, R. Li, L. Zhao, K. Wang, X. Gui, Multi-obstacle path planning and optimization for mobile robot, Expert Syst. Appl., 183 (2021), 1–16. https://doi.org/10.1016/j.eswa.2021.115445