1. Introduction
Intelligent technology has developed rapidly in recent decades, and more and more real-world optimization problems need to be solved, such as engineering design, UAV path planning and data processing. Many traditional optimization methods have been proposed, but their performance on complex optimization problems is poor. The emergence of meta-heuristic algorithms has solved this problem to some extent, and algorithms of this kind have attracted the attention of more and more scholars owing to their excellent performance. The most famous meta-heuristic algorithm is the genetic algorithm (GA) [1] proposed by John Holland in 1975, which is based on the mechanics of the natural selection process.
Meta-heuristic algorithms can be divided into four categories: physics-based, human-based, swarm-based, and evolutionary algorithms. Physics-based optimization algorithms are inspired by physical laws in nature. One of the most famous algorithms in this class is simulated annealing [2], which is based on the annealing process of solid matter in physics. The equilibrium optimizer (EO) [3] is a physics-based algorithm proposed in recent years, which is inspired by control volume mass balance models used to estimate both dynamic and equilibrium states. The class of optimization algorithms whose main idea is human behavior is called human-based algorithms. For example, teaching-learning-based optimization (TLBO) was proposed based on the influence of a teacher on learners [4]. The algorithm obtains the optimal value by iterating its two parts: the "teacher phase", in which learners learn from the teacher, and the "learner phase", in which learners learn through interaction with each other. Other human-based algorithms include the imperialist competitive algorithm (ICA) [5] and the collective decision optimization algorithm (CDOA) [6]. Swarm intelligence (SI) algorithms are the most popular kind of algorithm. They are inspired by the behavioral patterns of populations in nature and are characterized by a focus on the interaction between individuals in the population. Inspired by the foraging behavior of birds, Kennedy and Eberhart proposed the famous particle swarm optimization algorithm (PSO) [7]. PSO sets a swarm of birds (particles) searching for food (the best solution) within a certain range (the search space). When one of the birds finds the most food, the other birds fly in the direction of that optimal individual, taking into account both the position of the global best individual and their own personal best positions. Many other excellent SI algorithms have been proposed in recent years. For example, Tu et al.
[8] proposed a new stochastic optimization algorithm named the colony predation algorithm (CPA), which is based on the cooperative predation of animals in nature. The algorithm finds the optimum by communication and collaboration, dispersing prey, encircling prey, supporting the most likely successful hunter, and seeking another target. Inspired by the behavior of moths, Wang [9] proposed a new bio-inspired meta-heuristic algorithm called the moth search (MS) algorithm. The algorithm treats the best individual as the light source: some sub-optimal individuals move in Lévy flights, and poor individuals move towards the best individual. Other SI optimization algorithms include ant colony optimization (ACO) [10], artificial bee colony (ABC) [11], grey wolf optimization (GWO) [12], the marine predators algorithm (MPA) [13], the slime mould algorithm (SMA) [14], the mayfly algorithm (MA) [15], the sine cosine algorithm (SCA) [16], Harris hawks optimization (HHO) [17] and the whale optimization algorithm (WOA) [18]. Different from SI optimization algorithms, evolutionary algorithms mainly incorporate the idea of natural selection. They select excellent individuals from the population for the next generation according to different selection operators, and can also integrate other operations, such as crossover and mutation. Evolutionary algorithms include genetic algorithms (GA) [19], the tree growth algorithm (TGA) [20], evolution strategies (ES) [21], differential evolution (DE) [22] and the arithmetic optimization algorithm (AOA) [23].
Meta-heuristic algorithms have been developed for many years, and many improvements have been proposed. As an improvement proposed in 2005, opposition-based learning (OBL) [24] has been studied and applied by many scholars. Guo et al. [25] proposed a random unscented sigma point mutation strategy and combined it with an opposition-based learning strategy and a nonlinear convergence factor adjustment strategy to obtain an improved HHO algorithm (IHHO). This strategy further optimizes the algorithm by exploiting the visible region around the optimal solution, boosting both the convergence speed and the accuracy of the solution. Si et al. [26] described the basic OBL and its four variants in detail and analyzed their impact on the search space coverage, accuracy, exploration and exploitation, and convergence of the salp swarm algorithm (SSA). They combined SSA with five OBL variants to form five enhanced hybrid SSA-OBL algorithms and compared them with other algorithms. The experimental results show that incorporating different OBL variants in SSA enhanced its exploration ability. Hussien [27] proposed an enhanced opposition-based SSA. In this algorithm, OBL is used in the initialization phase to generate an opposite population and enhance the diversity of the initial population, and in the update phase to generate the opposite population of the current population in each iteration. Then, the best N individuals from these two populations are selected to replace the current population. This method effectively improves the quality of the population over the iterations. Wang et al. [28] proposed orthogonal OBL (OOBL) based on OBL and applied it to Yin-Yang-pair optimization to overcome the low quality of candidate solutions in the exploration process. OOBL incorporates the characteristics of orthogonal experimental design, which selects the value of each dimension of the individual from the positive and opposite solutions according to an orthogonal matrix.
It helps to make full use of the information in different dimensions of the positive and opposite solutions to generate better individuals. In addition to OBL, scholars have also conducted much research on parameter control. The main idea of adaptive parameter control is to automatically adjust parameters as the algorithm runs [29]. Lei et al. [30] proposed an aggregative learning gravitational search algorithm with a self-adaptive gravitational constant (ALGSA) to alleviate the issues of low search performance and premature convergence. ALGSA adaptively adjusts the gravitational constant G according to the current state of each individual to better balance the exploration and exploitation of the algorithm, and it performs better than some existing variants of the gravitational search algorithm.
The intelligent clonal optimizer (ICO) [31] is a new evolutionary algorithm recently proposed by Sahargahi et al. In ICO, the population is initialized using chaotic mapping, so that a population with better diversity can be obtained, which benefits subsequent iterations. Next, a temporary target, determined by the best individuals in the population, is generated in each iteration, and a set of offsets is calculated from this target. Offspring are generated either by adding this offset to the parent or randomly near the parent, and the number of offspring generated is defined by the fitness value of the parent. ICO performs well on both unimodal and multimodal functions. Although the algorithm performs better than many existing algorithms, it has some defects. When it deals with unimodal functions, the global optimal value stagnates for a long time; only after this period of stagnation does the algorithm converge rapidly to the optimum. The convergence rate and stability of the algorithm are therefore seriously affected. To solve the stagnation of the optimal value update in ICO, an adaptive parameter strategy is introduced. The adaptive parameter strategy sets a threshold for the number of stagnations of the optimal value. When the number of stagnations exceeds this threshold, the strategy accelerates the transition of the algorithm from exploration to exploitation, thereby enhancing its convergence rate. At the same time, ICO tends to fall into local optima on some functions. Introducing opposition-based learning can effectively solve this problem. Different from previous methods, two variants of OBL, quasi-opposition-based learning and quasi-reflection-based learning, are used in different phases of the algorithm to improve its exploration and exploitation capabilities, respectively.
In the exploration phase, quasi-opposition-based learning is used to conduct a larger search around the individual, whereas in the exploitation phase, quasi-reflection-based learning is used to perform a small search around the individual to improve the accuracy of the solution. Finally, simulation experiments were performed on three engineering optimization problems, eight CEC 2014 test functions and twenty-seven benchmark functions to verify the competitive performance of IICO.
The main contributions of this paper are:
1) An improved ICO algorithm is proposed, which combines opposition-based learning and an adaptive parameter strategy.
2) The effects of opposition-based learning and the adaptive parameter strategy on ICO are discussed and analyzed in detail.
3) Comparative simulation experiments are performed on unimodal, multimodal and CEC 2014 functions and on three engineering optimization problems, together with an experimental analysis.
The rest of the paper is structured as follows. The principle of the intelligent clonal optimizer and the proposed improvements are described in Section 2. Simulation experiments are carried out in Section 3. In Section 4, three engineering optimization problems are solved. Finally, the conclusion is presented in Section 5.
2. The ICO and proposed IICO
2.1. Intelligent clonal optimizer
The ICO algorithm initializes the population using a chaotic mapping, from which a series of random points in space can be obtained. The equation of the logistic mapping is as follows:
where μ is the logistic parameter. When μ is close to 4 and X0∈[0,1], the logistic function is in total chaos. Once a group of chaotic vectors has been generated, they should be transformed into variables in the search space using the following equation:
where lb and ub represent the lower and upper bounds of the variables respectively. Assume the dimension of the problem is D; the population at the t th iteration is defined as X1(t),X2(t),…,XN(t), and the i th individual is denoted as Xi(t)=(xi1(t),xi2(t),…,xiD(t)),i=1,2,…,N.
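As an illustration, the chaotic initialization described above can be sketched in Python as follows. The choice μ = 4 and the seed range are illustrative; the paper does not fix them here.

```python
import random

def chaotic_init(n, dim, lb, ub, mu=4.0):
    """Initialize a population with the logistic map and scale it to [lb, ub].

    A chaotic sequence x_{k+1} = mu * x_k * (1 - x_k) is generated, and each
    value is mapped into the search space as lb + x * (ub - lb).
    """
    pop = []
    x = random.uniform(0.01, 0.99)  # avoid the fixed points 0 and 1
    for _ in range(n):
        ind = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)          # logistic map iteration
            ind.append(lb + x * (ub - lb))  # scale into the search space
        pop.append(ind)
    return pop
```

Because consecutive logistic-map values are spread over (0, 1), the resulting individuals cover the search space more evenly than plain uniform sampling with a small population.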
2.1.1. Cloned operator
In ICO algorithm, some individuals need to be selected to generate offspring. The number of offspring produced by each parent is defined by the following equation.
where Si is the number of offspring created from the i th parent. Smin and Smax are the minimum and maximum numbers of clones respectively, and Smin is generally set to 0. NFi represents the normalized fitness value of each solution, while fi refers to the fitness value of the solution. fworst and fbest represent the worst and best fitness values in the current population respectively. The larger the value of NFi, the smaller the fitness value and the better the solution.
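The clone-count equation itself is not reproduced above; the sketch below assumes the common linear scheme consistent with this description, in which NFi = (fworst − fi)/(fworst − fbest) and Si scales linearly between Smin and Smax. The paper's exact equation may differ in detail.

```python
def clone_counts(fitness, s_min=0, s_max=10, eps=1e-12):
    """Number of clones per parent from normalized fitness (minimization).

    NF_i = (f_worst - f_i) / (f_worst - f_best) is 1 for the best solution
    and 0 for the worst; S_i then scales linearly between s_min and s_max.
    """
    f_best, f_worst = min(fitness), max(fitness)
    span = (f_worst - f_best) or eps  # guard against a flat population
    counts = []
    for f in fitness:
        nf = (f_worst - f) / span            # normalized fitness in [0, 1]
        counts.append(round(s_min + nf * (s_max - s_min)))
    return counts
```

For a population with fitness values [1.0, 2.0, 3.0] and Smax = 10, the best parent receives 10 clones, the middle one 5, and the worst none.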
In ICO, there are two ways to generate offspring. The first is that offspring are randomly generated within a small range of the parent; the second is that offspring are generated through the temporary target. The offspring generated by these two ways are stored in the lists XL and XB, respectively. In the second case, the distance from each offspring to its parent is calculated by the following equation.
where r represents a random number between 0 and 1. The initial values of ΔX can be set to zero for all solutions. To compute Ai, some elite solutions are selected from the population and stored in ES list. The equation is as follows.
where niter is the number of the best solutions selected from the population and iter is the current iteration. According to the equation, the value of niter depends on y, which is reduced from the initial value N. Therefore, niter gradually decreases from N to 1 over the iterations, and the algorithm gradually shifts from exploration to exploitation. The value of k can be calculated as follows:
where MaxFEs is the maximum number of function evaluations and N is the population size. The calculation equation of Ai is given by
where M is the mean of the upper and lower bounds of the variable. β is obtained from the following equation:
where β0 and γ are constants, which are 100 and 1e-19 respectively. The value of Z is reduced from 1 to 1e-18, when it reaches γ. The value of Ei is calculated through the following equation:
where TT is a temporary target representing the weighted average of the selected elite solutions, R represents the Cartesian distance between Xi and TTi, ϵ is a small positive constant, NFj is determined through Eq (4), and ri is a random constant in [0, 1].
The probability of using the second method to generate offspring is defined by the following equation.
where σinitial and σfinal indicate the initial and final values of the standard deviation respectively, and Ex is the nonlinear modulation index, which is 2. Finally, XL and XB can be obtained by the above and following equations.
where r is a random value in the range [0, 1], rn is a random number that obeys the normal distribution.
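The modulation equation for σiter is not reproduced above. A common nonlinear schedule consistent with the description (moving from σinitial to σfinal over the evaluation budget with index Ex = 2) can be sketched as follows; the concrete endpoint values 1.0 and 0.1 and the direction of change are illustrative assumptions, not the paper's settings.

```python
def sigma_schedule(fe, max_fes, sigma_initial=1.0, sigma_final=0.1, ex=2):
    """Nonlinearly varying probability of target-guided offspring.

    Moves from sigma_initial to sigma_final as evaluations accumulate;
    ex = 2 makes the change quadratic in the remaining budget rather
    than linear.
    """
    frac = 1.0 - fe / max_fes  # remaining fraction of the budget, in [0, 1]
    return sigma_final + (sigma_initial - sigma_final) * frac ** ex
```

At the start of the run (fe = 0) the schedule returns σinitial; when the budget is exhausted it returns σfinal.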
2.1.2. Conservative selection operator
After the offspring are generated, the elite solutions in them should be selected. The equation is as follows:
In these equations, NXL and NXB represent the numbers of elements in the XL and XB lists respectively. NX is the size of the population. NL and NB are the numbers of offspring selected from XL and XB respectively, and NE refers to the number of offspring selected from the parents. Table 1 shows the pseudocode of the ICO algorithm.
2.2. Proposed IICO algorithm
2.2.1. Opposition-based learning strategy
With the development of meta-heuristic algorithms, many improvement methods applicable to all kinds of algorithms have been proposed. Among them, opposition-based learning (OBL) [24] is a widely used improvement that can enhance population diversity. Researchers later proposed two variants of it, named quasi-opposition-based learning [32] and quasi-reflection-based learning [33]. In this paper, these two methods are combined to improve the performance of the ICO algorithm.
Assume X=(x1,x2,…,xD) is a point in D-dimensional space with xj∈[lb,ub], j=1,2,…,D. The opposite point of X is defined as X̆=(x̆1,x̆2,…,x̆D). The quasi-opposite point and quasi-reflected point are defined as X^qo=(x^qo1,x^qo2,…,x^qoD) and X^qr=(x^qr1,x^qr2,…,x^qrD) respectively. These points are defined as follows:
When the dimension of the problem is one, the value ranges of the quasi-opposite and quasi-reflected points are shown in Figure 1. Figure 1 shows that the quasi-reflected point x^qr is closer to the original point x, which benefits exploitation near the original point, whereas the quasi-opposite point x^qo is farther from the original point and can effectively explore around it. Therefore, quasi-opposition-based learning enhances the exploration ability of the ICO algorithm, and quasi-reflection-based learning enhances its exploitation ability.
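Using the standard definitions from [32] and [33] (the opposite point is lb + ub − x; the quasi-opposite point is sampled uniformly between the interval center c = (lb + ub)/2 and the opposite point; the quasi-reflected point is sampled uniformly between c and x itself), these three constructions can be sketched as:

```python
import random

def opposite(x, lb, ub):
    """Opposite point: mirror of x about the interval midpoint."""
    return [lb + ub - xi for xi in x]

def quasi_opposite(x, lb, ub):
    """Quasi-opposite point: uniform between the center c and the opposite."""
    c = (lb + ub) / 2.0
    return [random.uniform(min(c, lb + ub - xi), max(c, lb + ub - xi))
            for xi in x]

def quasi_reflected(x, lb, ub):
    """Quasi-reflected point: uniform between the center c and x itself."""
    c = (lb + ub) / 2.0
    return [random.uniform(min(c, xi), max(c, xi)) for xi in x]
```

For x = [2.0, -3.0] on [-5, 5], the opposite point is [-2.0, 3.0]; the quasi-opposite components fall between the center 0 and the opposite values, while the quasi-reflected components stay between 0 and x, matching the exploration/exploitation roles described above.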
According to the updating equation of the ICO algorithm, when niter>1, the temporary target is the weighted average of the selected elite solutions, and the algorithm is in the exploration phase. In this phase, quasi-opposition-based learning is applied when the updating Eq (19) is used and niter>1, r≥σiter. When the quasi-opposite solution X^qo_i(t+1) is calculated through Eq (25), a greedy strategy is used, and the better of the two solutions enters the next iteration. The selection equation is shown in Eq (27).
When niter=1, the temporary target is the best solution of the current population, and the algorithm is in the exploitation phase, quickly approaching the optimal value. Therefore, the quasi-reflection-based learning scheme is applied to IICO when the updating Eq (19) is used and niter=1, r≥σiter. When the quasi-reflected solution X^qr_i(t+1) is calculated through Eq (26), greedy selection is used again, as shown in Eq (28):
The use of the opposition-based learning strategy can effectively improve the performance of the algorithm.
2.2.2. Adaptive parameter strategy
According to the principle of the algorithm described in the previous section, as the parameter niter changes from N to 1, the algorithm gradually shifts from exploration to exploitation. The value of niter depends on y. When y≤1, the value of niter is 1, and the ICO algorithm completely enters the exploitation phase and begins to converge rapidly. However, when the algorithm deals with unimodal functions, this process is too long, which leads to stagnation of the global optimum and decelerates the convergence rate of the algorithm, as shown in Figure 2. To solve this problem, an adaptive parameter strategy is proposed, which introduces a parameter stag initialized to zero. When the global optimal value is not updated, stag is incremented by one. When the value of stag exceeds a threshold maxStag and y>1, the convergence of the algorithm is accelerated by decreasing y by one, after which stag is reset to zero. The equation is as follows:
This strategy effectively prevents stagnation of the global optimal update and accelerates the convergence rate on unimodal functions. Testing shows that it also improves the performance of the ICO algorithm on multimodal functions.
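The adaptive parameter strategy described above can be sketched directly from the prose (the equation itself is not reproduced here; maxStag = 3 follows the sensitivity analysis in Section 3.2):

```python
def update_stagnation(stag, improved, y, max_stag=3):
    """Adaptive parameter strategy, sketched from the description.

    stag counts consecutive iterations without improvement of the global
    best. Once it exceeds max_stag while y > 1, y is decreased by one,
    pushing the algorithm toward exploitation, and stag is reset to zero.
    """
    stag = 0 if improved else stag + 1
    if stag > max_stag and y > 1:
        y -= 1      # accelerate the exploration-to-exploitation transition
        stag = 0
    return stag, y
```

Called once per iteration with a flag indicating whether the global best improved, this reduces y (and hence niter) faster precisely when the search is stalling.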
2.2.3. The flow of IICO algorithm
By combining the characteristics of quasi-opposition-based learning and quasi-reflection-based learning, the population diversity of the original algorithm is improved and the accuracy of solutions is enhanced. At the same time, the introduction of the adaptive parameter strategy enhances the convergence speed of ICO. Figure 3 is the flow chart of the IICO algorithm; the parts marked in red are the improved steps. When some constants in the algorithm are calculated, y is updated using Eq (29). After the population is updated, the value of the parameter stag is updated according to whether the global optimal value has been updated. The pseudocode of IICO is presented in Table 2. The source code of IICO is published at https://github.com/zhangjhboy/IICO.
2.2.4. Computational complexity of IICO
The computational complexity of IICO mainly consists of initialization, population update and fitness value calculation. The time complexity of the ICO algorithm is therefore O(MaxFEs×(D+N)). In the opposition-based learning strategy, the calculation of the quasi-opposite and quasi-reflected points costs O(MaxFEs×D), and the complexity of the adaptive parameter strategy is O(1). Therefore, the overall time complexity of the IICO algorithm is O(MaxFEs×(2D+N)).
3. Simulation experiments
3.1. Experiments setup
To verify the superiority of the IICO algorithm, this paper selects nine unimodal, nine multimodal and nine fixed-dimension multimodal benchmark functions, as shown in Tables 3–5. Eight CEC2014 test functions are shown in Table 6. The IICO algorithm is compared with other algorithms, including ICO [31], EO [3], GWO [12], HHO [17], PSO [7], SCA [16], SMA [14], WOA [18], AOA [23] and MA [15]. The parameter settings of the algorithms are shown in Table 7, where D represents the dimension of the functions, Range indicates the range of the search space, and fmin is the optimum. In the experiments, the maximum number of function evaluations is D×2000. The x-axis of each result figure is displayed in the number of iterations, which makes the differences between the algorithms more obvious. The initial population size is set to 30 for all algorithms. To reduce the influence of the random factors in each algorithm, 30 Monte Carlo simulation runs were carried out and the results were averaged.
The experiments were performed in Python 3.7.3 and PyCharm 2021.2.3 under the Windows 10 system with an AMD Ryzen 5 4600H CPU and 16 GB RAM.
3.2. Parameter sensitivity analysis
The sensitivity of the new parameter introduced in IICO, maxStag, is analyzed in this section. It affects the balance between the exploration and exploitation of the algorithm. Too large a maxStag value slows down the convergence rate of the algorithm and fails to achieve the desired effect, while too small a value makes the exploration phase too short and harms the diversity of the population. The results are shown in Figure 4.
It is clear from the figure that maxStag=3 gives the best effect in most cases. Overall, as the value increases, the convergence rate gradually declines.
3.3. Experimental analysis of the performance of two improvements
To verify the effect of the two improvements, the comparative experiment between IICO and two ICO variants is conducted in this section, including a combined version of ICO with OBL (ICO-OBL) and a combined version of ICO with the adaptive parameter strategy (ICO-APS). The experimental results are shown in Figure 5.
On the unimodal functions F1, F4 and F8, ICO-APS has a faster convergence rate than ICO, and the time saved is exactly the stagnation time of ICO. The adaptive parameter strategy effectively prevents the stagnation of the optimal value update by accelerating the transition of the algorithm from exploration to exploitation, and thus speeds up convergence. ICO-OBL also converges somewhat faster than ICO; however, it has the same stagnant phase as ICO. As a combination of these two ICO variants, IICO has a significantly better convergence speed than the other three algorithms. On the multimodal functions F10, F12 and F14, ICO-APS is not much different from ICO, because the stagnation of ICO mainly appears on the unimodal functions. Compared with the ICO solution, ICO-OBL has improved accuracy. When the two are combined, the convergence speed of IICO and the accuracy of its solutions are significantly improved. On the fixed-dimension multimodal functions F19, F21, F24 and F26, ICO-APS behaves as it does on the multimodal functions. Compared with ICO, both the accuracy and the convergence speed of ICO-OBL are obviously enhanced. In the exploration phase, quasi-opposition-based learning searches a large range around the individual to enhance the diversity of the population; in the exploitation phase, quasi-reflection-based learning performs a fine-grained search around the individual to enhance the accuracy of the solution. These two characteristics allow ICO-OBL to achieve better results than ICO.
3.4. Qualitative analysis
In this section, the capability of IICO is preliminarily investigated by comparing the proposed IICO algorithm with the original ICO. In Figure 6, the search history, trajectory of the 1st dimension, average fitness and convergence curves are demonstrated.
The first column of Figure 6 shows the 3D models of the benchmark functions. The second column shows the location of the population in the first and second dimensions, with IICO and ICO represented as red and blue points respectively. It can be seen from the figure that the red points gather at the optimal position more than the blue points. This means that OBL effectively improves the quality of the population using the characteristics of the quasi-opposite and quasi-reflected points, so that the individuals of IICO reach the best position faster. The third column of Figure 6 presents the trajectory of the first solution in the first dimension. At the beginning, both algorithms change dramatically; with further iterations, the curves gradually become stable. On function F3, IICO performs much better than ICO and enters a stable state faster, but on the other functions, there is little difference between the two algorithms. The fourth and fifth columns of Figure 6 show the average fitness value of the solutions and the convergence curve respectively. The average fitness presents the overall quality of the population at each iteration; at the same number of iterations, IICO always has a better average fitness value. The convergence curve records the change in the global optimal value in each iteration, and it can be seen that IICO reaches the optimal value faster than the original algorithm. The adaptive parameter strategy speeds up convergence by reducing stagnation in the optimal value updates. In the exploration phase, OBL searches a large range around the individual to enhance the diversity of the population; in the exploitation phase, it searches within a small range of the individual to improve the accuracy of the solution and speed up convergence to a certain extent.
Through the above analysis, it can be concluded that the overall performance of IICO is better than the original algorithm.
3.5. Intensification capability analysis
For unimodal benchmark functions, there is only one optimal value, and all individuals in the population aggregate towards it. This requires good global convergence ability: an excellent algorithm converges to the global optimal value faster. Therefore, simulation experiments on unimodal benchmark functions were carried out to verify the intensification capability of the algorithm. The best, worst, mean and standard deviation values are shown in Table 8.
It can be seen that IICO obtains the optimal value on all selected unimodal functions, and its mean and standard deviation values are 0, which shows that the algorithm is stable. However, several other algorithms achieve almost the same effect. For example, the results of ICO are no different from those of IICO except on the F3 function. Since ICO can already obtain the optimal values on multiple unimodal benchmark functions, IICO cannot improve on them numerically. This is because the main advantage of IICO on unimodal functions lies in the speed of convergence, which this table does not show well.
3.6. Diversification capability analysis
Different from unimodal functions, multimodal functions have multiple local optima. Algorithms easily fall into local optima, leading to premature convergence, so strong global exploration ability is required. As in the intensification capability analysis, the relevant data are shown in Tables 9 and 10.
Table 9 presents the results of all algorithms on the multimodal functions. According to the obtained results, the performance of the IICO, SCA and WOA algorithms is significantly better than that of the other eight algorithms; they find better values on most functions. Compared with the original algorithm, IICO performs better on functions F10, F13, F14 and F16–F18. The reason is that opposition-based learning enhances the exploration and exploitation capabilities of IICO, while the adaptive parameter strategy provides more exploitation time for the algorithm. On functions F11, F12 and F15, there is no difference between the results of IICO and ICO, because ICO already obtains the optimal values. Among the other algorithms, SCA and WOA have the best results, with almost the same data as IICO.
Table 10 shows the results of all algorithms on the fixed-dimension multimodal functions. Similar to the previous analysis, the results of IICO are overall better than those of ICO. Overall, IICO, EO and AOA have the best performance.
3.7. Acceleration convergence analysis
The previous parts give a general understanding of the performance of IICO. In this section, IICO is compared with other algorithms, and the comparison results are shown by the convergence curves in Figure 7. The figure shows the comprehensive performance of each algorithm, such as convergence speed and accuracy, which better reflects the advantages and disadvantages of each algorithm. On the unimodal functions F1–F9, IICO always reaches the optimal value fastest, indicating that IICO has the fastest convergence rate among the compared algorithms. IICO maintains the same performance on the multimodal functions F10–F17, that is, it reaches the best value fastest. In addition, on functions F10, F13, F14, F16 and F17, IICO has higher solution accuracy than the original algorithm; however, on function F18, IICO does not perform very well. On the fixed-dimension multimodal functions F19–F27, IICO does not have the best performance: in most cases, AOA performs better than IICO. Compared with the original algorithm, the performance of IICO is still greatly improved. According to the no free lunch (NFL) theorem, no algorithm can achieve the best results on all functions, and this applies to IICO: it has the best performance on most benchmark functions, while on a few functions its results are not the best.
3.8. Scalability experiments
In this section, a dimensional analysis is conducted, in which the algorithms are run and compared under different dimensions. In general, as the dimension of the problem increases, the accuracy of the solution decreases. Dimensional analysis tests the scalability of the algorithm, i.e., whether the algorithm can maintain the same performance, apart from the time cost, as the dimension of the problem increases. In this experiment, the dimensions are 50, 100, 150, 200, 300 and 500. The experimental results are shown in Figure 8.
On the unimodal functions F1–F9, the curve of IICO is mostly parallel to the x axis, indicating that IICO has strong scalability: increasing the dimension does not change its performance except for the time consumption. For the multimodal functions, IICO does not perform as well as on the unimodal functions. On functions F12–F14 and F17, IICO still maintains good scalability, but on the other multimodal functions, IICO is unstable.
3.9. Wilcoxon rank sum test
To demonstrate the differences between IICO and the other algorithms, the Wilcoxon rank sum test was performed. The optimal results obtained by IICO were compared pairwise with the results of the other algorithms. The significance level was set to 0.05: if p≤0.05, there is a significant difference between the two algorithms; otherwise, there is no significant difference. The results of the Wilcoxon rank sum test are shown in Table 11.
It can be seen from the table that there is little difference between IICO and ICO on the unimodal functions, because both end up at the optimal value of the functions; the advantage of IICO there is its fast convergence speed. In most other cases, p<0.05, which shows that IICO is significantly different from the other algorithms.
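As an illustration of the test used above, the two-sided rank sum p-value can be computed with a standard normal approximation (scipy.stats.ranksums provides a ready-made implementation; the pure-Python sketch below assumes no ties between the two samples):

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank sum p-value via the normal approximation.

    The pooled samples are ranked (ties assumed absent), the rank sum W of
    the first sample is compared to its null mean and standard deviation,
    and the z-score is converted to a p-value with the normal CDF (erf).
    """
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    w = sum(pooled.index(v) + 1 for v in x)   # rank sum of sample x
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Two sets of 30 completely separated run results yield a p-value far below 0.05 (a significant difference), while heavily overlapping samples do not.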
3.10. Evaluation of IICO on CEC2014 functions
CEC functions have complex mathematical structures, and their optimal values are more difficult to find than those of the benchmark functions. In this section, the algorithms are compared on the CEC2014 test functions, including two state-of-the-art algorithms, LSHADE [34] and LSHADE-SPACMA [35]. The results are shown in Table 12.
As winners of CEC competitions, LSHADE and LSHADE-SPACMA show superior performance on the CEC2014 test functions. The results of IICO are similar to those of these two algorithms except on F28 and F31. From the results, IICO does not improve much over ICO.
4.
Solving engineering design problems
IICO was applied to three well-known engineering design problems (gear train design, welded beam design, and pressure vessel design) to test its accuracy and efficiency. To handle constraints, a static penalty function method is adopted: the fitness function is defined as the sum of the objective function and a penalty term that depends on the constraint violation [36]. The penalty coefficient is a constant shared by all constraints, which penalizes solutions that violate them.
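A minimal sketch of the static penalty method described above (the function names and the penalty coefficient value are illustrative assumptions; constraints are taken in the usual g_i(x) <= 0 form):

```python
def penalized_fitness(objective, constraints, x, penalty_coeff=1e6):
    """Static penalty method: fitness = objective + a constant coefficient
    times the total violation of the constraints g_i(x) <= 0."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty_coeff * violation

# Usage: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = [lambda x: 1.0 - x]
print(penalized_fitness(f, g, 2.0))   # feasible: plain objective value
print(penalized_fitness(f, g, 0.5))   # infeasible: heavily penalized
```

Because the coefficient is large and constant, any infeasible solution is dominated by every feasible one, which is the behavior the static penalty scheme relies on.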
The following subsections present the results of applying the algorithms to each problem. Each algorithm is run 30 times independently, and the number of function evaluations is D×2000.
4.1. Gear train design
The objective of this problem is to find the optimal numbers of teeth for the four gears of a train so that the obtained gear ratio is as close as possible to a required value [37]. There are four variables in this problem. Since each variable takes integer values, all results are rounded to integers. The problem is defined as follows:
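The formulation itself is not reproduced here; as a sketch, the standard objective used in the literature for this problem (the variable names and the integer bounds [12, 60] are the commonly cited ones, assumed here rather than taken from this paper) is:

```python
def gear_train_cost(x):
    """Squared deviation of the obtained gear ratio from the required
    ratio 1/6.931, for integer tooth counts x = (t1, t2, t3, t4),
    each commonly bounded in [12, 60]."""
    t1, t2, t3, t4 = x
    return (1.0 / 6.931 - (t1 * t2) / (t3 * t4)) ** 2

# A well-known near-optimal integer solution from the literature:
print(gear_train_cost((19, 16, 43, 49)))  # on the order of 1e-12
```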
The results obtained by IICO and other algorithms are presented in Table 13. The table shows that IICO has better fitness value in solving the problem compared with the original ICO algorithm. IICO is also better than other algorithms.
4.2. Welded beam design
The welded beam is a common engineering optimization problem whose objective is to find an optimal set of dimensions h=x1, l=x2, t=x3, and b=x4 such that the fabrication cost of the beam is minimized. It is a continuous optimization problem. The cost of the welded beam is formulated as follows:
The related variables and constants are expressed as follows:
The range of the variables is as follows:
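The cost function and variable ranges are not reproduced here; as a sketch, the fabrication-cost objective commonly used in the literature for this problem (the coefficient values and the quoted best-known solution are from that literature, assumed rather than taken from this paper, and the structural constraints on shear stress, bending stress, buckling and deflection are omitted for brevity) is:

```python
def welded_beam_cost(x):
    """Fabrication cost of the welded beam for x = (h, l, t, b):
    weld cost 1.10471*h^2*l plus bar cost 0.04811*t*b*(14 + l)."""
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# Best-known solution commonly reported in the literature, cost ~1.7249:
print(welded_beam_cost((0.2057, 3.4705, 9.0366, 0.2057)))
```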
Many meta-heuristic algorithms have been applied to this problem, such as GSA, EO, GA and HS. The results are shown in Table 14. IICO is not the best, but it is a big improvement over ICO.
4.3. Pressure vessel design
The pressure vessel design problem aims to minimize the total cost of material, forming and welding. There are four variables: the thickness of the shell Ts=x1, the thickness of the head Th=x2, the inner radius R=x3, and the length of the shell L=x4. Ts and Th must be integer multiples of 0.0625 inch, the available thicknesses of rolled steel plates, while R and L are continuous. The structure of the problem is described below.
subject to four constraints
where 0≤x1,x2≤99 and 10≤x3,x4≤200.
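The objective and the four constraints can be sketched as follows, using the formulation commonly cited in the literature (the coefficient values and the quoted solution are from that literature, assumed here rather than taken from this paper's tables):

```python
import math

def pressure_vessel_cost(x):
    """Total cost of material, forming and welding for
    x = (Ts, Th, R, L)."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l
            + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l
            + 19.84 * ts ** 2 * r)

def pressure_vessel_constraints(x):
    """The four constraints in the form g_i(x) <= 0."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,    # g1: shell thickness vs. radius
        -th + 0.00954 * r,   # g2: head thickness vs. radius
        # g3: minimum enclosed volume of 1,296,000 cubic inches
        -math.pi * r ** 2 * l - (4.0 / 3.0) * math.pi * r ** 3 + 1296000.0,
        l - 240.0,           # g4: maximum shell length
    ]

# A near-optimal solution widely quoted in the literature, cost ~6059.7:
print(pressure_vessel_cost((0.8125, 0.4375, 42.0984, 176.6366)))
```

Note that at this solution g1 and g3 are essentially active (close to zero), which is why small rounding of R and L noticeably shifts the reported costs across papers.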
Meta-heuristic algorithms have been widely used to solve this problem. Table 15 shows the comparison results. In the existing literature, the minimum costs obtained by ICO, GSA, EO, GA, PSO, and WOA are 5891.383, 8538.836, 6059.7143, 6288.745, 6061.078 and 6059.741, respectively. Of these, ICO achieves the best result. IICO obtains a minimum cost of 5899.637, slightly worse than ICO but better than the other five algorithms. The adaptive parameter strategy accelerates the transition from exploration to exploitation, which reduces the exploration time of the algorithm. Although OBL enhances population diversity in the exploration phase, it may yield relatively poor results on some complex problems. Therefore, the results of IICO are slightly worse than those of ICO.
5.
Discussion and conclusions
In this paper, an improved intelligent chaotic clonal optimizer based on an adaptive parameter strategy (IICO) is proposed. The improvement introduces the opposition-based learning (OBL) strategy to enhance the solution accuracy and the diversity of the population. The adaptive parameter strategy effectively prevents stagnation of the global optimum update and accelerates the convergence of the algorithm. The experimental results on twenty-seven benchmark functions and three engineering optimization problems show that IICO has better overall performance and is superior to the other algorithms in terms of solution accuracy and convergence speed. Although the adaptive parameter strategy prevents stagnation of the optimal value update, it also shortens the exploration time. This is not a problem on the benchmark functions; however, on some complex problems the results are not as good as desired. For example, in the above experiments, IICO is no better than ICO on the CEC test functions and the third engineering optimization problem. Addressing this weakness is a future research direction. Meanwhile, meta-heuristic algorithms have a wide range of applications in the field of UAVs, and related work is underway; in the future, the application of IICO to the UAV path planning problem will be considered.
Acknowledgements
The authors would like to thank the support of the following projects: 1) The scientific research team project of Jingchu University of Technology with grant number TD202001. 2) The Outstanding Youth Science and Technology Innovation Team Project of Colleges and Universities in Hubei Province with grant number T201923. 3) The special research fund for joint training of graduate students of Jingchu University of Technology with grant number YJS202204.
Conflict of interest
All authors declare no conflicts of interest in this paper.