1. Introduction
As modern science and technology develop, problems in nature are modeled in ever greater detail, with every influential factor taken into account. These factors affect both what is observed and one another. As a result, analytical solutions can no longer be obtained once a problem becomes even slightly complicated. To overcome such difficulties, nature-inspired algorithms were proposed several decades ago.
The genetic algorithm (GA) [1] was arguably the first such candidate in the literature, and it quickly became a popular tool for solving many of the problems people encountered. Encouraged by its good performance, researchers soon proposed other optimization algorithms, such as the ant colony optimization (ACO) [2] algorithm and the particle swarm optimization (PSO) [3] algorithm.
The PSO algorithm in particular attracted scientists and engineers worldwide with its simple and elegant structure. Its behavior was studied in detail: the reasons for its fast convergence, its stability, and the constraints on the parameters $ {c}_{1}, {c}_{2} $ were established [4]. Researchers applied PSO to every optimization problem they encountered, yet were not satisfied with its performance, and finding ways to improve it, in either convergence rate or residual error, became a research focus in those years. Inertia weights [5], local unimodal sampling [6], regrouping [7], guaranteed convergence [8] and other improvements were soon proposed and shown to perform better. Broadly, these improvements can be classified as: 1) modifications of variables, such as chaotic mapping [9] and Levy flight [10]; 2) reconstructions of the updating equations, such as guaranteed convergence and regrouping; 3) adjustments of the controlling parameters, for instance the inertia weights; and 4) hybridizations, such as PSO with GA [11] or PSO with ACO [12].
Among all the improved PSO variants, the heterogeneous PSO (HPSO) [13] is one of the most notable, and it arguably inspired many of the modern optimization algorithms proposed recently.
In the early years of computational optimization, all individuals in a swarm followed the same rule to update their positions. In the GA, for example, all individuals crossover and mutate, while in the PSO algorithm all individuals update their positions according to their current velocities, the global best candidate and their personal best trajectories. Individuals in the bat algorithm (BA) [14] likewise follow a single update equation. In the HPSO algorithm, by contrast, individuals randomly select among the cognitive-only, social-only, barebones and modified barebones operations. HPSO was also confirmed to perform better in optimization.
Heterogeneous improvements give individuals in a swarm multiple ways [15] or tunnels [16] to update their positions. Under such schemes, individuals can draw on a richer set of operations: some take larger steps for better exploration, while others take smaller steps to exploit a local optimum and check whether it is the global one. Because they do not all follow the same rule, both exploration and exploitation capability increase. Individuals in the sine cosine algorithm (SCA) [17] have two choices for updating their positions, and the arithmetic optimization algorithm (AOA) [18] and the Harris hawk optimization (HHO) algorithm have four. More ways to update positions do not always yield better performance, but individuals equipped with such operations do gain more varied choices and characteristics. Hence, heterogeneous improvements can be generalized into a multiple updating principle.
The Aquila optimizer (AO) [19] was proposed recently; it was claimed and verified to have a fast convergence rate and better performance than most other algorithms, and it has already been applied to intrusion detection [20] and production forecasting [21].
Individuals in the AO swarm also have four ways to update their positions; however, they may choose only between the first two during the initial 2/3 of the iterations (the exploration procedure) and between the other two during the remaining exploitation procedure. Although the overall results reported by the proposers are promising, individuals cannot always take further steps toward better results during exploitation, as can easily be seen in Figure 9 of the referenced paper [19]. This paper reports a heterogeneous improvement of the Aquila optimization algorithm, abbreviated HAO. The constant value 2/3, which restricts individuals to two strategies in each of the exploration and exploitation procedures, is replaced by a probability value, so that individuals in the HAO swarm can use all four ways to update their positions.
The main contributions of this paper are:
1) A heterogeneous improvement of the AO algorithm is proposed that introduces no additional equations and does not increase the computational complexity; the proposed HAO algorithm only reconstructs the strategies defined by the original proposers.
2) The four strategies are divided into two groups by the proportional values $ {p}_{1} $ and $ {p}_{2} $; their influence on the convergence rate is simulated and the best setting is chosen.
3) Qualitative analysis and the intensification, diversification and scalability capabilities of the proposed HAO algorithm are simulated, and the convergence acceleration is tested on both unimodal and multimodal benchmark functions.
4) Simulation experiments on unimodal and multimodal benchmark functions, together with three classical real-world engineering problems, are carried out and comparisons are made. The results confirm that the proposed HAO algorithm performs better than the original AO algorithm and several other well-known optimization algorithms.
The rest of this paper is organized as follows. Section 2 briefly reviews the original AO algorithm and presents the improved HAO. Simulation experiments are carried out on benchmark functions in Section 3 and on real-world engineering problems in Section 4. Discussions and conclusions are given in Section 5.
2. The AO and proposed HAO algorithms
2.1. Strategies in the AO algorithm
There are four strategies for individuals in the AO algorithm:
Strategy 1: Expanded exploration.
where $ {X}_{i}\left(t+1\right) $, $ {X}_{best}\left(t\right) $ and $ {X}_{M}\left(t\right) $ represent the position of the i-th individual at iteration t + 1, the best location at the current iteration, and the mean position of all individuals at the current iteration, respectively. $ {X}_{M}\left(t\right) $ is calculated as follows:
where $ {X}_{i}\left(t\right) $ represents the position of the i-th individual at iteration t, N is the number of individuals in the swarm, and $ {r}_{1} $ is a random number between 0 and 1.
Strategy 2: Narrowed exploration.
where $ Levy\left(D\right) $ represents the Levy flight, computed by the following equations:
where $ s = 0.01 $ is a constant parameter, $ {r}_{2} $ is another random number, $ \mu $ and $ \nu $ are random numbers between 0 and 1, and $ \sigma $ is calculated as follows:
where $ \beta = 1.5 $ is a constant value, $ {X}_{R}\left(t\right) $ is a randomly selected candidate at the current iteration, and $ y $ and $ x $ describe the spiral shape of the search:
where $ {r}_{1} $ is a fixed number between 1 and 20, $ {D}_{1} $ contains the integer values from 1 to the dimensionality of the problem, and $ \omega = 0.005 $ is a fixed constant.
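As a point of reference only, the following is a minimal sketch of a Levy step in the Mantegna form commonly used in AO-style implementations. Note that it draws $ \mu $ and $ \nu $ from a standard normal distribution (with $ \mu $ scaled by $ \sigma $), which differs slightly from the uniform wording above; the function name and defaults are illustrative rather than the authors' code.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, s=0.01):
    """Levy-distributed step of length dim, Mantegna's construction (sketch)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = np.random.randn(dim) * sigma    # numerator samples, scaled by sigma
    nu = np.random.randn(dim)            # denominator samples
    return s * mu / np.abs(nu) ** (1 / beta)
```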
Strategy 3: Expanded exploitation.
where [LB, UB] is the definitional domain of the given problem, $ \alpha $ and $ \delta $ are two small fixed numbers, and $ {r}_{3} $ is the third random number used in the algorithm.
Strategy 4: Narrowed exploitation.
where $ QF $ denotes a quality function used to balance the search strategies, calculated with the following equation:
$ {r}_{4} $ , $ {r}_{5}, {r}_{6}, {r}_{7} $ are the fourth to seventh random numbers involved in this algorithm.
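The exact update equations for the four strategies are given in the original AO paper [19] and are not reproduced here. Purely as an illustration, the sketch below condenses the four strategies following the commonly cited AO formulation, re-using the levy_step helper sketched above; the constant U = 0.00565, the choice r1 = 10 and all function and variable names are assumptions drawn from typical AO implementations, not from this paper.

```python
import numpy as np

def ao_strategies(X, X_best, X_M, X_R, t, T, lb, ub, alpha=0.1, delta=0.1):
    """Candidate positions from the four AO strategies (illustrative sketch).
    X is one individual's current position (1-D array); t counts from 1."""
    D = X.size
    rand = np.random.rand
    # Strategy 1: expanded exploration (high soar with vertical stoop)
    X1 = X_best * (1 - t / T) + (X_M - X_best) * rand()
    # Strategy 2: narrowed exploration (contour flight, spiral around the prey)
    D1 = np.arange(1, D + 1)
    theta = -0.005 * D1 + 3 * np.pi / 2      # omega = 0.005, theta1 = 3*pi/2
    r = 10 + 0.00565 * D1                    # r1 = 10 (fixed in [1, 20]); U assumed
    y, x = r * np.cos(theta), r * np.sin(theta)
    X2 = X_best * levy_step(D) + X_R + (y - x) * rand()
    # Strategy 3: expanded exploitation (low flight with slow descent)
    X3 = (X_best - X_M) * alpha - rand() + ((ub - lb) * rand() + lb) * delta
    # Strategy 4: narrowed exploitation (walk and grab the prey)
    QF = t ** ((2 * rand() - 1) / (1 - T) ** 2)   # quality function
    G1, G2 = 2 * rand() - 1, 2 * (1 - t / T)
    X4 = QF * X_best - G1 * X * rand() - G2 * levy_step(D) + rand() * G1
    return X1, X2, X3, X4
```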
2.2. The proposed HAO algorithm
Although individuals in the original AO swarm have four strategies for updating their positions, depending on the status of the prey and the Aquila they may choose only between the first two strategies during the first 2/3 of the iterations (the exploration procedure) and only between the latter two during the remaining exploitation procedure. The first two strategies are more aggressive and lead to fast convergence, whereas the latter two involve re-initializing operations that give individuals a chance to jump out of local optima. Both pairs of strategies have their own merits and contribute to the good performance of the original AO algorithm.
However, the latter two strategies clearly lack the capability to approach the global optimum, especially on multimodal benchmark functions. Figure 9 of the original paper [19] shows that individuals fail to find better positions in the later iterations for F5–F10, F12–F15 and F17–F23, which include some unimodal benchmark functions and most of the multimodal ones. To eliminate this defect, the strategy can be chosen at random with different probabilities, abandoning the strict separation of exploration and exploitation: individuals are allowed to decide, with different probabilities, whether to explore or to exploit the definitional domain. We therefore apply the heterogeneous improvement to the original AO algorithm and propose the heterogeneous Aquila optimizer (HAO). For clarity, the pseudo code of the proposed HAO algorithm is shown in Table 1.
The computational complexity remains $ O(M\times N\times D) $, since only the proportional values are changed.
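As an illustration of why the complexity is unchanged, the sketch below replaces the fixed 2/3 iteration split of the original AO with the probability-based selection described here: $ {p}_{1} $ chooses between the exploration pair and the exploitation pair, while $ {p}_{2} $ and $ {p}_{3} $ choose within each pair (consistent with the parameter study in Section 3.3). It re-uses the ao_strategies sketch above, and all names are assumptions rather than the authors' code.

```python
import numpy as np

def hao_update(X, X_best, X_M, X_R, t, T, lb, ub, p1=0.7, p2=0.5, p3=0.5):
    """One HAO position update: a strategy is chosen by probability instead of
    by the 2/3 iteration split used in the original AO (illustrative sketch)."""
    X1, X2, X3, X4 = ao_strategies(X, X_best, X_M, X_R, t, T, lb, ub)
    if np.random.rand() < p1:                        # exploration pair
        X_new = X1 if np.random.rand() < p2 else X2
    else:                                            # exploitation pair
        X_new = X3 if np.random.rand() < p3 else X4
    return np.clip(X_new, lb, ub)                    # keep inside [LB, UB]
```

Only the selection logic changes; every individual still produces one new position per iteration, so the $ O(M\times N\times D) $ bound of the original AO is preserved.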
3. Simulation experiments on benchmark functions
In this section, simulation experiments on benchmark functions are carried out, and the results are discussed to determine whether the proposed HAO algorithm performs better than the original AO algorithm. Furthermore, considering the rapid development of swarm intelligence, several other algorithms are included in the comparison besides the original AO: antlion optimization (ALO) [22], African vultures optimization (AVOA) [23], the equilibrium optimizer (EO) [24], the grasshopper optimization algorithm (GOA) [25], the grey wolf optimizer (GWO) [26], Harris hawk optimization (HHO) [27], moth-flame optimization (MFO) [28], the mayfly optimization algorithm (MOA) [29], PSO, SCA and the whale optimization algorithm (WOA) [30]. The parameters of all compared algorithms are shown in Table 2.
3.1. Benchmark functions
Verifying the capability of an algorithm on benchmark functions is a classical approach in optimization. In this paper, the improved HAO is likewise applied to benchmark functions: 5 unimodal functions, 5 unimodal functions with three-dimensional basin-like landscapes, and 11 multimodal functions are involved [31], as shown in Tables 3–5.
3.2. Simulation platform
Simulation experiments are carried out on an HP server whose configuration is shown in Table 6.
The source code was run in Matlab 2021b. For simplicity, the maximum iteration number is fixed at 200 and the population size at 40. To reduce the influence of the random numbers involved in the algorithms, the Monte Carlo method is adopted and all results, where applicable, are averaged over 30 separate runs.
For a fair comparison, all algorithms involved in this paper use the same setup as above.
3.3. Probability parameters
The proposed HAO algorithm involves three probability parameters $ {p}_{1}, {p}_{2}, {p}_{3} $. According to the source code of the AO algorithm provided by its proposers, $ {p}_{1} $ should be a constant near 2/3 and $ {p}_{2} = {p}_{3} = 0.5 $. However, it is not clear whether these values are also suitable for the HAO algorithm. As probabilities, they all lie in the interval [0, 1]; we therefore scan both the $ {p}_{1}-{p}_{2} $ and the $ {p}_{1}-{p}_{3} $ plane using the mean least iteration number (MLIN), i.e. the first iteration at which the best fitness value falls below 5e–4 (0.5‰, a threshold frequently used in engineering). To reduce the influence of the randomness involved in the algorithm, the Monte Carlo method is used and the final results are averaged over 30 separate independent runs. The results are shown in Figures 1–6.
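As a hypothetical illustration of how the MLIN criterion can be computed, assuming each run returns its best-fitness-per-iteration history (the function name and the handling of runs that never reach the threshold are assumptions):

```python
import numpy as np

def mean_least_iteration_number(run_fn, n_runs=30, threshold=5e-4, max_iter=1000):
    """Average, over independent runs, of the first iteration at which the best
    fitness drops below the threshold (illustrative sketch)."""
    firsts = []
    for _ in range(n_runs):
        history = np.asarray(run_fn())        # best fitness value per iteration
        hit = np.flatnonzero(history < threshold)
        firsts.append(hit[0] + 1 if hit.size else max_iter)   # cap if never reached
    return float(np.mean(firsts))
```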
Figures 1–3 suggest that $ {p}_{1} = 0.7 $ and $ {p}_{2} = 0.5 $ are good values for most benchmark functions, although some functions, such as the Csendes, Chung Reynolds, Holzman 2, Schumer Steiglitz 2 and Alpine 1 functions, do not show a clear preference. In particular, the Venter Sobiezcczanski-Sobieski function could not be optimized within the cap of 1000 iterations, which was imposed to keep the running time and computational cost reasonable.
As for the relationship between $ {p}_{1} $ and $ {p}_{3} $, no definite trend can be identified at first glance from Figures 4–6, although the mean least iteration numbers for the Hyper-ellipsoid, Ackley 1, Sargan, Cosine Mixture and Stretched V Sine Wave functions favor $ {p}_{3} $ values that are no smaller. For simplicity, $ {p}_{3} $ is also set to 0.5, the same value as $ {p}_{2} $.
3.4. Qualitative experiments
Qualitative analysis is an important type of experiment that gives a first impression of an algorithm's optimization capability. A single simulation is carried out for each benchmark function involved in this paper, and the results are shown in Figures 7–11.
For convenience, the dimensionality is fixed at 10 in all cases.
Figures 7–11 show that almost all of the benchmark functions, whether unimodal or multimodal, are quickly optimized by the improved HAO algorithm, except the Venter and Sobiezcczanski-Sobieski function, which is complicated enough that many algorithms fail on it. Individuals in the swarm quickly find the global optimum, with fast convergence and low residual errors. The search history over the first two dimensions demonstrates this capability, and most of the trajectories of the first dimension confirm the fast convergence.
3.5. Intensification capability experiments
Unimodal benchmark functions are, in most cases, easy to optimize: since only one global optimum exists in the whole domain, individuals in the swarm approach it without interference or disturbance. Nevertheless, how quickly and precisely an algorithm converges on that single optimum still distinguishes the algorithms, and we call the corresponding tests intensification capability experiments.
To reduce the influence of the random numbers involved in the algorithms, the Monte Carlo method is used again: the best, worst, median, mean and standard deviation of the best results after 200 iterations are calculated over 30 separate independent runs. The results are shown in Table 7.
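The statistics reported in Tables 7 and 8 can be reproduced from the 30 final best values of each algorithm along the following lines; the paper does not state whether the sample or population standard deviation is used, so this sketch assumes the sample version.

```python
import numpy as np

def summarize_runs(final_best_values):
    """Best, worst, median, mean and standard deviation over independent runs."""
    v = np.asarray(final_best_values, dtype=float)
    return {"best": v.min(), "worst": v.max(), "median": float(np.median(v)),
            "mean": v.mean(), "std": v.std(ddof=1)}   # ddof=1: sample std (assumed)
```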
Table 7 shows that the proposed HAO is clearly better: it is best on 8 of the 10 functions, of which 7 are outright bests and one is a tie with the original AO algorithm, and it loses only to the GWO algorithm. Note that only the HAO, AO, GWO and HHO algorithms attain best positions in these experiments. Their better performance is mainly due to their inherent mechanisms. The GWO algorithm performs better on the F3 function, which shows that multiple top candidates can in some cases lead to better performance, although the EO algorithm, which also uses multiple top candidates, performs worse throughout.
3.6. Diversification capability experiments
Multimodal benchmark functions have more than one optimum, so individuals in the swarm can be trapped in local optima. Individuals therefore need the ability to escape local optima and approach the global one; this ability is called the diversification capability. To determine whether the proposed HAO algorithm has better diversification capability, experiments analogous to the intensification experiments are carried out, this time on the multimodal benchmark functions. The results are reported in the same form as the intensification experiments and shown in Table 8.
Table 8 shows that the proposed HAO, the original AO and the HHO algorithm all perform well in optimizing multimodal benchmark functions. Comparatively, the proposed HAO is best in 9 of the 11 experiments, while HHO succeeds 9 times and AO 8 times. The proposed heterogeneous improvement therefore increases the diversification capability of the original AO algorithm and keeps pace with HHO, indicating that HAO is clearly better than before.
3.7. Acceleration convergence analysis
The qualitative, intensification and diversification experiments all verified the better performance of the proposed HAO algorithm. In this section, a convergence acceleration analysis is carried out; the curves of best fitness value versus iteration, shown in Figures 12–16, give a more direct picture.
Based on the averaged results in Figures 12–16, the proposed improvement increases the convergence rate considerably: the HHO algorithm is best on 3 of the 10 unimodal benchmark functions, while the HAO algorithm is best on 7 of 10 and thus tops most of the lists. For the multimodal benchmark functions, the proposed HAO algorithm performs best on 8 of 11, while three functions, the Griewank, Venter and Sobiezcczanski-Sobieski, and Xin-She Yang 6 functions, remain difficult to optimize; none of the nine algorithms involved could optimize them at all.
Regarding execution time, less time spent implies a faster effective convergence rate. Under the same conditions, with all results averaged over 30 independent runs, the execution times of the algorithms involved are evaluated and compared, as shown in Figure 17.
For the unimodal Exponential function, the heterogeneous improvement takes almost the same time as the original version, while for the multimodal Cosine Mixture function it takes more time to finish than the original. Contradictory conclusions would therefore be drawn depending on whether unimodal or multimodal benchmark functions are considered: the execution time appears to depend on the functions rather than on the algorithms themselves.
3.8. Scalability experiments
Although almost all of the experiments above verified the superior performance of the proposed HAO algorithm, they were carried out with a fixed dimensionality. Scalability experiments are therefore needed to confirm whether it still performs best when the dimensionality changes.
In this section, the dimensionality is varied from 2, then from 10 up to 100 in steps of 10. The population size remains the same, and the results are again averaged over 30 separate independent runs using the Monte Carlo method. The results are shown in Figures 18–23.
Figures 18–20 show that the curve of the proposed HAO disappears from the plots for the Ackley 1 and Exponential benchmark functions; a closer check confirms that AO and HAO actually reach zero there. The results on the unimodal benchmark functions again verify the better performance, specifically 8 bests out of 10.
The results on the multimodal functions, shown in Figures 21–23, do not lead to the same conclusion. The results for the Alpine 1 and Cosine Mixture benchmark functions follow the same pattern, but the remaining functions do not support the earlier conclusion, although in most cases the proposed HAO, together with the HHO and AO algorithms, still performs better than the others. That is to say, for multimodal benchmark functions a fixed population size may not be suitable as the dimensionality increases.
3.9. Wilcoxon rank sum test
Most of the results so far indicate that the proposed HAO algorithm performs better in optimization, but further verification is needed. In this section, the Wilcoxon rank sum test is carried out to check whether the better results follow the same distribution as the results obtained by the compared algorithms. The usual significance level $ p = 0.05 $ is adopted: if $ p\le 0.05 $, the null hypothesis that the two sets of results come from the same distribution is rejected and the advantage of the proposed HAO algorithm is statistically significant; on the contrary, if $ p > 0.05 $, the null hypothesis cannot be rejected, the data may come from the same distribution, and the proposed HAO algorithm cannot be confirmed to perform better even if its mean, median, worst or standard deviation values are smaller. The results are shown in Table 9.
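A minimal sketch of this test using SciPy's two-sided rank-sum implementation, assuming the 30 final results of HAO and of one compared algorithm are available as arrays (variable names are illustrative):

```python
from scipy.stats import ranksums

def significantly_different(hao_results, other_results, alpha=0.05):
    """Wilcoxon rank-sum test: the null hypothesis is that both samples come from
    the same distribution; p < alpha means the difference is significant."""
    stat, p = ranksums(hao_results, other_results)
    return p, p < alpha
```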
Table 9 shows that the advantage of the proposed HAO is confirmed in most cases; only a few functions and algorithms yield p-values above the threshold.
4. Simulation experiments on real-world engineering problems
The simulation results on benchmark functions show that the proposed HAO algorithm is quite promising in optimization. What about its capability in handling real-world engineering problems? In this section, the proposed HAO algorithm is applied to find the best solutions of several benchmark engineering problems.
Broadly speaking, the main difference between real-world engineering problems and benchmark functions lies in the constraints: there are no constraints on individuals when optimizing benchmark functions, whereas parts of the definitional domain must not be searched or exploited when optimizing real-world engineering problems.
For a given constrained problem:
where $ {x}_{i}\in [L{B}_{i}, U{B}_{i}] $ is the definitional domain of the i-th parameter. For simplicity, we introduce penalty factors to construct a new fitness function as follows:
where m and n are the numbers of equality and inequality constraints, respectively, and $ {P}_{ie} $ and $ {P}_{e} $ are the penalty factors, which are kept fixed for simplicity.
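A minimal sketch of this penalty construction, assuming inequality constraints written as $ g(x)\le 0 $ and equality constraints as $ h(x) = 0 $; the penalty factors, the tolerance and all names are assumptions, since the paper does not report the values it uses.

```python
import numpy as np

def penalized_fitness(f, x, ineq_constraints, eq_constraints,
                      P_ie=1e6, P_e=1e6, eps=1e-4):
    """Fitness plus fixed-factor penalties for violated constraints (sketch)."""
    penalty = 0.0
    for g in ineq_constraints:                  # n inequality constraints, g(x) <= 0
        penalty += P_ie * max(0.0, g(x)) ** 2
    for h in eq_constraints:                    # m equality constraints, h(x) = 0
        penalty += P_e * max(0.0, abs(h(x)) - eps) ** 2
    return f(x) + penalty
```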
To find the best solutions of the real-world engineering problems, 10,000 separate independent runs are performed in this section, and the final result is the best among them. The population size is fixed at 40 and the maximum allowed iteration number is set to 1000.
4.1. Pressure vessel design
The pressure vessel design problem is a four-dimensional structural design problem subject to four inequality constraints. Applying the proposed HAO algorithm, we obtained the results and compared them with others reported in the literature, as shown in Table 10.
The proposed HAO algorithm found the best design option among the compared algorithms reported in the literature.
4.2. Three-bar truss design
The three-bar truss design is a two-dimensional constrained problem with only two inequality constraints, yet it is somewhat difficult to find the optimal candidate. As shown in Table 11, the proposed HAO algorithm finds a good option but not the best one, failing to improve on the result reported for the original version. In our own simultaneous comparison experiments, however, the proposed HAO does find a better option than the original AO algorithm.
4.3. Welded beam design
The welded beam design is a four-dimensional constrained problem with seven inequality constraints. The results are shown in Table 12: the proposed HAO again finds a good result, although not the best one. As in the previous problem, the proposed HAO finds a better option than the original AO in our own experiments, but the result is still worse than the one reported in the literature.
5. Discussion and conclusions
This paper reports a new improvement of the Aquila optimization (AO) algorithm, which was recently proposed and shown to perform well. Judging from the flat tails of the convergence curves in the original paper, the original AO has some defects in its exploitation procedure. Inspired by the good performance of heterogeneous improvements and by the multiple updating principle, a heterogeneous AO algorithm called HAO is proposed in this paper. The proposed HAO introduces no additional equations; it only reconstructs how the four strategies defined by the original proposers are selected, with promising results.
Simulation experiments were carried out on both unimodal and multimodal benchmark functions. The results, including most of the Wilcoxon rank sum tests, confirm the better performance of HAO.
Three real-world engineering problems were also included to test the capability of the proposed HAO algorithm. Only one of them, the pressure vessel design problem, yielded a result better than those reported in the literature, including that of the original AO algorithm; for the other two problems, HAO did not reach the best reported results and was even worse than the results reported for the original version. In our own experiments, however, the proposed HAO obtained better options than the original AO algorithm. This indicates that randomness strongly affects the results and that better solutions may be found on occasion.
Overall, the proposed heterogeneous AO algorithm outperforms the compared algorithms in most cases: it shows good intensification capability on unimodal benchmark functions and good diversification capability on multimodal ones, converges faster, and approaches the global optimum more closely. Its scalability is also good, and the rank sum tests support these observations.
Swarms with heterogeneous improvements should be given correspondingly larger population sizes. In the future, the proposed HAO algorithm could be applied to other problems such as dimensionality reduction and image segmentation in real applications.
Acknowledgements
The authors would like to thank the support of the following projects: 1) the scientific research team project of Jingchu University of Technology under grant TD202001; 2) the key research and development project of Jingmen under grant 2019YFZD009; 3) the Outstanding Youth Science and Technology Innovation Team Project of Colleges and Universities in Hubei Province under grant T201923.
Conflict of interest
All authors declare no conflicts of interest in this paper.