Research article

An enhanced aquila optimization algorithm with velocity-aided global search mechanism and adaptive opposition-based learning


  • Received: 14 October 2022 Revised: 05 January 2023 Accepted: 17 January 2023 Published: 01 February 2023
  • The aquila optimization algorithm (AO) is an efficient swarm intelligence algorithm proposed recently. However, although AO performs well, its convergence speed slows down in the late stage of the optimization process. To address this shortcoming and improve its performance, this paper proposes an enhanced aquila optimization algorithm with a velocity-aided global search mechanism and adaptive opposition-based learning (VAIAO), which is based on AO and the simplified aquila optimization algorithm (IAO). In VAIAO, velocity and acceleration terms are introduced into the update formula. Furthermore, an adaptive opposition-based learning strategy is introduced to help the algorithm escape local optima. To verify the performance of the proposed VAIAO, 27 classical benchmark functions, the Wilcoxon signed-rank test, the Friedman test and five engineering optimization problems are tested. The experimental results show that the proposed VAIAO performs better than AO, IAO and the other comparison algorithms. This also means that the introduction of these two strategies enhances the global exploration ability and convergence speed of the algorithm.

    Citation: Yufei Wang, Yujun Zhang, Yuxin Yan, Juan Zhao, Zhengming Gao. An enhanced aquila optimization algorithm with velocity-aided global search mechanism and adaptive opposition-based learning[J]. Mathematical Biosciences and Engineering, 2023, 20(4): 6422-6467. doi: 10.3934/mbe.2023278




    Neural networks (NNs) are computational models that mimic the neural system of the human brain, and they are applied to a wide range of problems in machine learning. Because of their strong information processing capabilities, NNs have been widely used in fields such as natural language processing, image recognition, image encryption, wireline communication, finance and business forecasting (see [1,2,3,4,5,6,7,8]). The stability analysis of NNs is therefore a crucial matter and has received a lot of attention in recent years (see [9,10,11]). Furthermore, the transmission of signals between neurons is subject to time delays, which can adversely affect the performance of NNs ([12,13]). Consequently, determining the maximum allowable delay bounds (MADBs) that ensure the stability of NNs is an important research topic that has drawn considerable attention [14].

    In the existing literature, the method of delay partitioning is commonly employed for analyzing time-delay systems. To obtain the MADBs, on the one hand, the constructed augmented Lyapunov-Krasovskii functional (LKF) should contain more delay information; on the other hand, the requirements on the matrix variables involved should be relaxed. The work in [15] introduced a novel asymmetric LKF in which the matrix variables involved need not be symmetric or positive definite. To make the augmented LKFs contain more delay information, a novel delay-partitioning approach was presented by Guo et al. [16], which divides the variation interval of the delay into several subintervals. A new method for determining the negativity of a quadratic function, based on its geometric information, is presented in [17]. A more thorough reciprocally convex combination inequality was used by Chen et al. [18] to add quadratic terms to the time derivative of an LKF, which leads to less stringent stability conditions for delayed neural networks (DNNs). A novel approach to generating free moving points was introduced in [19] based on the work of [18]; specifically, free moving points were established to move synchronously in each subinterval. In addition, integral inequalities can reduce conservatism by providing tighter bounds when a function is replaced by its upper or lower bound, which improves the accuracy of the resulting stability estimates.

    As previously discussed, the majority of existing research has focused on the negativity condition of LKFs, while their positive-definiteness condition has received little attention in the literature. The main work of this paper is to construct a relaxed LKF and to study the stability of DNNs by using a positive-definiteness method for quadratic functions. The main contributions are summarized as follows:

    (1) Distinct from prevailing methodologies, this paper presents a novel approach for demonstrating the positive definiteness of the LKF, based on the requirement that the quadratic function satisfies the positive definite condition.

    (2) By employing the asymmetric LKF methodology, we construct a relaxed LKF that incorporates delay information. The matrix variables involved are not required to be symmetric or positive definite.

    (3) A new delay-dependent stability criterion with reduced conservatism is derived for DNNs by extending basic inequalities and incorporating the conditions of positive definiteness for the quadratic function.

    Notations: Y is an n×n real matrix; Y^T is the transpose of Y; Y > 0 (Y < 0) means that Y is a positive (negative) definite matrix; ∗ denotes a symmetric block in a symmetric matrix; He{Y} = Y + Y^T; diag{·} denotes a diagonal matrix; R^n is the n-dimensional Euclidean space and R^{n×n} is the set of all n×n real matrices.

    Consider the following NNs with time-varying delay:

    \begin{cases} \dot{x}(t) = -Ax(t) + Bf(x(t)) + Cf(x(t - h_\tau(t))), \\ x(t) = \rho(t), \end{cases} \qquad (2.1)

    where

    x(\cdot) = \mathrm{col}[x_1(\cdot), x_2(\cdot), \ldots, x_n(\cdot)] \in \mathbb{R}^n

    is the neuron state vector and ρ(t) is the initial condition.

    f(x(\cdot)) = \mathrm{col}[f_1(x_1(\cdot)), f_2(x_2(\cdot)), \ldots, f_n(x_n(\cdot))]

    denotes the activation functions.

    A = \mathrm{diag}\{a_1, a_2, \ldots, a_n\}

    with a_i > 0. B and C are the connection matrices. The time-varying delay h_τ(t) is a differentiable function satisfying 0 ≤ h_τ(t) ≤ h and \dot{h}_τ(t) ≤ μ, where h and μ are known constants. To derive our primary outcome, we rely on the following assumption and lemmas.

    Assumption 2.1. The Lipschitz condition that the neuron activation function satisfies is as follows:

    \iota_i^- \le \frac{f_i(\alpha) - f_i(\beta)}{\alpha - \beta} \le \iota_i^+, \quad \alpha \ne \beta, \qquad f_i(0) = 0, \quad i = 1, 2, \ldots, n,

    where \iota_i^- and \iota_i^+ are known constants. For simplicity, denote the following matrices:

    L_1 = \mathrm{diag}\{\iota_1^-\iota_1^+, \iota_2^-\iota_2^+, \ldots, \iota_n^-\iota_n^+\}, \qquad L_2 = \mathrm{diag}\Big\{\frac{\iota_1^- + \iota_1^+}{2}, \frac{\iota_2^- + \iota_2^+}{2}, \ldots, \frac{\iota_n^- + \iota_n^+}{2}\Big\}.
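    For concreteness, a small sketch of how L_1 and L_2 can be assembled from given sector bounds \iota_i^- and \iota_i^+ (the numeric bounds below are illustrative only):

    import numpy as np

    def sector_matrices(iota_minus, iota_plus):
        """Build L1 = diag(iota_i^- * iota_i^+) and L2 = diag((iota_i^- + iota_i^+) / 2)."""
        iota_minus = np.asarray(iota_minus, dtype=float)
        iota_plus = np.asarray(iota_plus, dtype=float)
        L1 = np.diag(iota_minus * iota_plus)
        L2 = np.diag((iota_minus + iota_plus) / 2.0)
        return L1, L2

    # e.g., activations 0.3*tanh and 0.8*tanh lie in the sectors [0, 0.3] and [0, 0.8]
    L1, L2 = sector_matrices([0.0, 0.0], [0.3, 0.8])   # -> diag{0, 0}, diag{0.15, 0.4}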

    Lemma 2.1. [20] Given any constant positive definite matrix K \in \mathbb{R}^{n\times n}, for any continuous function χ(u) and v_1 < v_2, the following inequality holds:

    (v_2 - v_1)\int_{v_1}^{v_2} \chi^T(\mu) K \chi(\mu)\, d\mu \ge \Big(\int_{v_1}^{v_2} \chi(\mu)\, d\mu\Big)^T K \Big(\int_{v_1}^{v_2} \chi(\mu)\, d\mu\Big).

    Lemma 2.2. [21] Given any constant positive definite matrix K \in \mathbb{R}^{n\times n}, for any continuous function χ(u) and v_1 < v_2, the following inequality holds:

    \int_{v_1}^{v_2} \chi^T(\mu) K \chi(\mu)\, d\mu \ge \frac{1}{v_2 - v_1}\Big(\int_{v_1}^{v_2} \chi(\mu)\, d\mu\Big)^T K \Big(\int_{v_1}^{v_2} \chi(\mu)\, d\mu\Big) + \frac{3}{v_2 - v_1}\,\Omega^T K \Omega,

    where

    \Omega = \int_{v_1}^{v_2} \chi(\mu)\, d\mu - \frac{2}{v_2 - v_1}\int_{v_1}^{v_2}\int_{\theta}^{v_2} \chi(\mu)\, d\mu\, d\theta.
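    A quick numerical sanity check of Lemma 2.2 for a scalar example can be carried out by discretizing the integrals (the test function, interval and grid size below are arbitrary choices):

    import numpy as np

    def check_lemma_2_2(chi, v1, v2, K=1.0, n=2001):
        """Scalar check of the Wirtinger-based integral inequality of Lemma 2.2."""
        mu = np.linspace(v1, v2, n)
        x = chi(mu)
        lhs = np.trapz(x * K * x, mu)
        ix = np.trapz(x, mu)                      # \int_{v1}^{v2} chi(mu) dmu
        # Omega = \int chi dmu - 2/(v2 - v1) * \int_{v1}^{v2} \int_{theta}^{v2} chi(mu) dmu dtheta
        inner = np.array([np.trapz(x[i:], mu[i:]) for i in range(n)])
        omega = ix - 2.0 / (v2 - v1) * np.trapz(inner, mu)
        rhs = (ix * K * ix + 3.0 * omega * K * omega) / (v2 - v1)
        return lhs, rhs, lhs >= rhs - 1e-9

    print(check_lemma_2_2(lambda t: np.sin(2 * t) + 0.5 * t**2, 0.0, 1.5))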

    Lemma 2.3. [22] Let R = R^T \in \mathbb{R}^{n\times n} be a positive definite matrix. If there exists a matrix X \in \mathbb{R}^{n\times n} such that

    \begin{bmatrix} R & X \\ * & R \end{bmatrix} \ge 0,

    then the following inequality holds:

    (\beta_1 - \beta_3)\int_{\beta_3}^{\beta_1} \dot{\chi}^T(\mu) R \dot{\chi}(\mu)\, d\mu \ge \psi^T \Lambda \psi,

    where

    \psi = \mathrm{col}[\chi(\beta_1), \chi(\beta_2), \chi(\beta_3)], \quad \beta_3 < \beta_2 < \beta_1, \qquad \Lambda = \begin{bmatrix} R & -R + X & -X \\ * & 2R - X - X^T & -R + X \\ * & * & R \end{bmatrix}.
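    A similar scalar sanity check can be run for Lemma 2.3, using the reconstruction of Λ given above and a slack scalar X satisfying the condition [R X; * R] ≥ 0 (here simply |X| ≤ R); the test function and the three points are arbitrary:

    import numpy as np

    def check_lemma_2_3(chi, dchi, b3, b2, b1, R=1.0, X=0.4, n=4001):
        """Scalar check: (b1 - b3) * int_{b3}^{b1} dchi * R * dchi dmu >= psi^T Lambda psi."""
        mu = np.linspace(b3, b1, n)
        lhs = (b1 - b3) * np.trapz(dchi(mu) * R * dchi(mu), mu)
        psi = np.array([chi(b1), chi(b2), chi(b3)])
        Lam = np.array([[R,       -R + X,         -X    ],
                        [-R + X,   2 * R - 2 * X, -R + X],
                        [-X,      -R + X,          R    ]])
        rhs = psi @ Lam @ psi
        return lhs, rhs, lhs >= rhs - 1e-9

    chi = lambda t: np.sin(t) + 0.3 * t**2
    dchi = lambda t: np.cos(t) + 0.6 * t
    print(check_lemma_2_3(chi, dchi, 0.0, 0.6, 1.5))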

    Lemma 2.4. For a quadratic function of delay,

    \xi(h_\tau) = a h_\tau^2(t) + b h_\tau(t) + c,

    where a, b, c \in \mathbb{R} and h_\tau \in [0, h]. Then \xi(h_\tau) > 0 holds if \xi(h_\tau) satisfies:

    \begin{cases} \xi(0) > 0, \\ \xi(h) > 0, \\ hb + 2c > 0. \end{cases}

    Proof. We prove Lemma 2.4 by a geometric approach.

    ● For a > 0: ξ(h_τ(t)) is a convex function. When ξ(h_τ(t)) is monotonically increasing on [0, h], ξ(0) > 0 ensures ξ(h_τ(t)) > 0 (see Figure 1); when ξ(h_τ(t)) is monotonically decreasing on [0, h], ξ(h) > 0 ensures ξ(h_τ(t)) > 0 (see Figure 2); when ξ(h_τ(t)) is neither monotonically increasing nor decreasing on [0, h], let D be the intersection point of the two tangent lines at ξ(0) and ξ(h); if D > 0, then ξ(h_τ(t)) > 0 (see Figure 3).

    Figure 1.  ξ(hτ(t)) is monotonically increasing.
    Figure 2.  ξ(hτ(t)) is monotonically decreasing.
    Figure 3.  ξ(hτ(t)) is not monotonically increasing or decreasing.

    ● For a<0: ξ(hτ(t)) is a concave function. ξ(hτ(t))>0 in [0,h] if ξ(0)>0 and ξ(h)>0 (see Figure 4).

    Figure 4.  ξ(hτ(t)) is a concave function.
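    The third condition of Lemma 2.4 comes from the tangent-intersection point D used in the convex case; a short derivation consistent with the proof above:

    % Tangent of xi at h_tau = 0:   y_0(s) = b s + c
    % Tangent of xi at h_tau = h:   y_h(s) = (2ah + b)(s - h) + a h^2 + b h + c
    \begin{aligned}
    y_0(s) = y_h(s) \;&\Longrightarrow\; b s + c = (2ah + b)s - a h^2 + c \;\Longrightarrow\; s = \tfrac{h}{2},\\
    D = y_0\!\left(\tfrac{h}{2}\right) &= \tfrac{b h}{2} + c = \tfrac{h b + 2c}{2}, \qquad \text{so } D > 0 \iff h b + 2c > 0 .
    \end{aligned}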

    Through the above discussion, we obtain three conditions for the positive definiteness of a quadratic function. In Theorem 3.1 we construct an LKF whose lower bound is a quadratic function of the delay, and under these three conditions we can prove that the LKF is positive definite.
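    A brute-force check of Lemma 2.4 over a sampled grid of [0, h] (the coefficients below are arbitrary test values):

    import numpy as np

    def lemma_2_4_conditions(a, b, c, h):
        """Three sufficient conditions of Lemma 2.4 for xi(s) = a*s^2 + b*s + c > 0 on [0, h]."""
        return c > 0 and a * h**2 + b * h + c > 0 and h * b + 2 * c > 0

    def xi_positive_on_interval(a, b, c, h, n=100001):
        s = np.linspace(0.0, h, n)
        return bool(np.all(a * s**2 + b * s + c > 0))

    # the conditions imply positivity; e.g., a concave test case
    a, b, c, h = -1.0, 2.0, 3.0, 2.0
    print(lemma_2_4_conditions(a, b, c, h), xi_positive_on_interval(a, b, c, h))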

    Remark 2.1. Lemma 2.3 is derived from the Bessel-Legendre integral inequality, which provides an estimate that varies with N and helps us estimate \int_{\beta_3}^{\beta_1}\dot{\chi}^T(\mu)R\dot{\chi}(\mu)\,d\mu. It is apparent that Lemma 2.3 reduces to Lemma 2.1 when N = 0 (see [23]). In [17,18,19], a negative-definiteness criterion for quadratic functions is utilized to demonstrate the negativity of the derivative of the LKFs. At present, no research explores the use of quadratic-function methods for determining the positive definiteness of LKFs. In this paper, Lemma 2.4 gives a condition for a quadratic function to be positive. In Theorem 3.1, the h_τ^2(t) term is introduced into the augmented asymmetric LKF through the integral inequalities. On the one hand, introducing h_τ^2(t) allows the LKF to include more time-delay information and reduces conservatism; on the other hand, it makes the lower bound of the LKF a quadratic function of the delay.

    The symbols used in the theorem are described here to help clarify its formulation.

    \eta(t) = \mathrm{col}\Big[x(t),\ x(t - h_\tau(t)),\ x(t - h),\ f(x(t)),\ f(x(t - h_\tau(t))),\ \int_{t - h_\tau(t)}^{t} x(s)ds,\ \int_{t - h}^{t} x(s)ds,\ \int_{t - h_\tau(t)}^{t} \dot{x}(s)ds,\ \int_{t - h}^{t - h_\tau(t)} \dot{x}(s)ds,\ \int_{t - h_\tau(t)}^{t} f(x(s))ds,\ \int_{t - h}^{t} f(x(s))ds,\ \int_{-h}^{0}\int_{t + \theta}^{t} f(x(s))ds\, d\theta,\ \int_{t - h}^{t}\int_{\theta}^{t} x(s)ds\, d\theta\Big],
    e_l = [0_{n\times (l-1)n},\ I_{n\times n},\ 0_{n\times (13-l)n}] \in \mathbb{R}^{n\times 13n}, \quad l = 1, 2, \ldots, 13, \qquad \epsilon = \frac{1}{h}.
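    In a numerical implementation, the block selector matrices e_l can be built directly; a small sketch (the block dimension n is the only input, and the 13-block layout follows η(t) above):

    import numpy as np

    def selector(l, n, blocks=13):
        """e_l = [0_{n x (l-1)n}, I_n, 0_{n x (blocks-l)n}], so e_l @ eta extracts block l."""
        E = np.zeros((n, blocks * n))
        E[:, (l - 1) * n : l * n] = np.eye(n)
        return E

    e1 = selector(1, 2)   # picks x(t) out of a stacked eta(t) with n = 2
    e7 = selector(7, 2)   # picks the integral of x over [t - h, t]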

    Theorem 3.1. For given scalars μ and h > 0, system (2.1) with time-varying delay is asymptotically stable if there exist positive definite symmetric matrices W_1, W_2; positive definite diagonal matrices Z_1, Z_2; positive definite matrices R_1, R_2, Q_1, Q_2; a symmetric matrix P_1; and matrices P_2, P_3, M, N, F of appropriate dimensions, with P = [P_1, 2P_2, 2P_3], such that the following linear matrix inequalities (LMIs) hold:

    \xi(h_\tau(t), \dot{h}_\tau(t)) > 0, \qquad (3.1)
    \begin{bmatrix} W_2 & F \\ * & W_2 \end{bmatrix} \ge 0, \qquad \begin{bmatrix} \Xi_{11} & \Xi_{12} \\ * & \Xi_{22} \end{bmatrix} < 0, \qquad (3.2)

    where

    \xi(h_\tau(t), \dot{h}_\tau(t)) = h_\tau^2(t)\Sigma_1 + h_\tau(t)\Sigma_2 + \Sigma_3,
    \Sigma_1 = 2\epsilon^4 e_{13}^T W_1 e_{13}, \qquad \Sigma_2 = \epsilon^2 e_7^T R_2 e_7,
    \Sigma_3 = e_1^T[P_1 + W_2]e_1 + \epsilon e_6^T R_1 e_6 + 4\epsilon^2 e_7^T W_2 e_7 + \epsilon e_{10}^T Q_1 e_{10} + 2\epsilon^2 e_{12}^T Q_2 e_{12} + 12\epsilon^4 e_{13}^T W_2 e_{13} + \mathrm{He}\{e_1^T[P_2 - \epsilon W_2]e_7 + e_1^T P_3 e_{13} - 6\epsilon^3 e_7^T W_2 e_{13}\},
    \Xi_{11} = e_1^T[2P_2 - 2P_1 A + 2hP_3 + R_1 + R_2 + hW_1 - \tfrac{1}{2}\epsilon W_2 + hA^T W_2 A - L_1 Z_1]e_1 + e_2^T[\tfrac{1}{2}\epsilon F + \tfrac{1}{2}\epsilon F^T - (1 - \mu)R_1 - \epsilon W_2 - 2M - 2N - L_1 Z_2]e_2 - e_3^T[R_2 + \tfrac{1}{2}\epsilon W_2]e_3 + e_4^T[hB^T W_2 B + Q_1 + hQ_2 - Z_1]e_4 + e_5^T[hC^T W_2 C - (1 - \mu)Q_1 - Z_2]e_5
    \qquad + \mathrm{He}\{e_1^T[\tfrac{1}{2}\epsilon W_2 - \tfrac{1}{2}\epsilon F + M^T]e_2 + e_1^T[\tfrac{1}{2}\epsilon F - P_2]e_3 + e_1^T[P_1 B - hA^T W_2 B + L_2 Z_1]e_4 + e_1^T[P_1 C - hA^T W_2 C]e_5 + e_2^T[\tfrac{1}{2}\epsilon W_2 - \tfrac{1}{2}\epsilon F + N]e_3 + e_2^T[L_2 Z_2]e_5 + e_4^T[hB^T W_2 C]e_5\},
    \Xi_{12} = e_1^T[-A^T P_2 - P_3]e_7 + e_1^T[-A^T P_3]e_{13} - e_2^T M e_8 + e_2^T N e_9 + e_4^T B^T P_2 e_7 + e_5^T C^T P_2 e_7 + e_4^T B^T P_3 e_{13} + e_5^T C^T P_3 e_{13},
    \Xi_{22} = -4\epsilon e_7^T W_1 e_7 - \tfrac{1}{2}\epsilon e_8^T W_2 e_8 - \tfrac{1}{2}\epsilon e_9^T W_2 e_9 - \epsilon e_{11}^T Q_2 e_{11} - 12\epsilon^3 e_{13}^T W_1 e_{13} + \mathrm{He}\{6\epsilon^2 e_7^T W_1 e_{13}\}.
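    In practice, criteria of this type are checked as LMI feasibility problems. The sketch below only illustrates that workflow with CVXPY on a much simpler Lyapunov condition (P > 0 and M^T P + P M < 0 for an arbitrary stable test matrix M, not taken from the paper); assembling the full Ξ blocks of Theorem 3.1 follows the same pattern of declaring variables and constraints but is not reproduced here.

    import cvxpy as cp
    import numpy as np

    M = np.array([[-1.5, 0.2],
                  [0.1, -1.7]])        # arbitrary stable test matrix
    n = M.shape[0]
    eps = 1e-6

    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n),
                   M.T @ P + P @ M << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    print(prob.status, P.value if prob.status == "optimal" else None)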

    Proof. Consider the following candidate LKF for system (2.1):

    V(t) = \sum_{i=1}^{4} V_i(t), \qquad (3.3)

    where

    V_1(t) = x^T(t) P\begin{bmatrix} x(t) \\ \int_{t-h}^{t} x(s)ds \\ \int_{t-h}^{t}\int_{\theta}^{t} x(s)ds\, d\theta \end{bmatrix}, \qquad V_2(t) = \int_{t-h_\tau(t)}^{t} x^T(s) R_1 x(s)ds + \int_{t-h}^{t} x^T(s) R_2 x(s)ds,
    V_3(t) = \int_{t-h}^{t}\int_{\theta}^{t} x^T(s) W_1 x(s)ds\, d\theta + \int_{t-h}^{t}\int_{\theta}^{t} \dot{x}^T(s) W_2 \dot{x}(s)ds\, d\theta, \qquad V_4(t) = \int_{t-h_\tau(t)}^{t} f^T(x(s)) Q_1 f(x(s))ds + \int_{-h}^{0}\int_{t+\theta}^{t} f^T(x(s)) Q_2 f(x(s))ds\, d\theta.

    By Lemmas 2.1 and 2.2, we can deduce

    V_2(t) \ge \eta^T(t)\{\epsilon e_6^T R_1 e_6 + h_\tau(t)\epsilon^2 e_7^T R_2 e_7\}\eta(t),
    V_3(t) \ge \eta^T(t)\{2h_\tau^2(t)\epsilon^4 e_{13}^T W_1 e_{13} + e_1^T W_2 e_1 - 2\epsilon e_1^T W_2 e_7 + 4\epsilon^2 e_7^T W_2 e_7 - 12\epsilon^3 e_7^T W_2 e_{13} + 12\epsilon^4 e_{13}^T W_2 e_{13}\}\eta(t),
    V_4(t) \ge \eta^T(t)\{\epsilon e_{10}^T Q_1 e_{10} + 2\epsilon^2 e_{12}^T Q_2 e_{12}\}\eta(t).

    From the above derivation, we can conclude

    V(t) \ge \eta^T(t)[h_\tau^2(t)\Sigma_1 + h_\tau(t)\Sigma_2 + \Sigma_3]\eta(t).

    The LKF (3.3) is positive definite if

    \xi(h_\tau(t), \dot{h}_\tau(t)) > 0.

    Next, we need to show that the derivative of the LKF is negative definite. Taking the time derivative of the LKF, we have

    \dot{V}_1(t) = \eta^T(t)\{2e_1^T[-P_1 A + P_2 + hP_3]e_1 + 2e_1^T P_1 B e_4 + 2e_1^T P_1 C e_5 - 2e_1^T A^T P_2 e_7 + 2e_4^T B^T P_2 e_7 + 2e_5^T C^T P_2 e_7 - 2e_1^T P_2 e_3 - 2e_1^T A^T P_3 e_{13} + 2e_4^T B^T P_3 e_{13} + 2e_5^T C^T P_3 e_{13} - 2e_1^T P_3 e_7\}\eta(t),
    \dot{V}_2(t) = \eta^T(t)\{e_1^T[R_1 + R_2]e_1 - (1 - \mu)e_2^T R_1 e_2 - e_3^T R_2 e_3\}\eta(t),
    \dot{V}_3(t) = -\int_{t-h}^{t} x^T(s) W_1 x(s)ds + h x^T(t) W_1 x(t) - \int_{t-h}^{t} \dot{x}^T(s) W_2 \dot{x}(s)ds + h\dot{x}^T(t) W_2 \dot{x}(t). \qquad (3.4)

    Applying inequalities from Lemmas 2.1–2.3, we can obtain

    \dot{V}_3(t) \le \eta^T(t)\{-4\epsilon e_7^T W_1 e_7 - 12\epsilon^3 e_{13}^T W_1 e_{13} + \mathrm{He}\{6\epsilon^2 e_7^T W_1 e_{13}\} - \tfrac{1}{2}\epsilon e_8^T W_2 e_8 - \tfrac{1}{2}\epsilon e_9^T W_2 e_9 + h[-Ae_1 + Be_4 + Ce_5]^T W_2[-Ae_1 + Be_4 + Ce_5] + \gamma^T \Pi \gamma\}\eta(t), \qquad (3.5)

    where

    \Pi = -\tfrac{1}{2}\epsilon\begin{bmatrix} W_2 & -W_2 + F & -F \\ * & 2W_2 - F - F^T & -W_2 + F \\ * & * & W_2 \end{bmatrix}, \qquad \gamma = \mathrm{col}[e_1, e_2, e_3].

    Furthermore, based on Assumption 2.1, the following condition holds for any positive definite diagonal matrices Z1 and Z2:

    0 \le -\sum_{j=1}^{n} Z_{1j}[f_j(x_j(t)) - \iota_j^- x_j(t)][f_j(x_j(t)) - \iota_j^+ x_j(t)] - \sum_{j=1}^{n} Z_{2j}[f_j(x_j(t - h_\tau(t))) - \iota_j^- x_j(t - h_\tau(t))][f_j(x_j(t - h_\tau(t))) - \iota_j^+ x_j(t - h_\tau(t))]. \qquad (3.6)

    For any matrices M and N, from the Newton-Leibniz integral formula, we can obtain that:

    \begin{cases} 0 = 2x^T(t - h_\tau(t))M\Big[x(t) - x(t - h_\tau(t)) - \int_{t - h_\tau(t)}^{t} \dot{x}(s)ds\Big], \\ 0 = 2x^T(t - h_\tau(t))N\Big[x(t - h_\tau(t)) - x(t - h) - \int_{t - h}^{t - h_\tau(t)} \dot{x}(s)ds\Big], \end{cases}

    then,

    \begin{cases} 0 = \eta^T(t)\{2e_2^T M[e_1 - e_2 - e_8]\}\eta(t), \\ 0 = \eta^T(t)\{2e_2^T N[e_2 - e_3 - e_9]\}\eta(t). \end{cases} \qquad (3.7)

    By adding (3.4)–(3.7) together, we can obtain

    \dot{V}(t) \le \eta^T(t)\begin{bmatrix} \Xi_{11} & \Xi_{12} \\ * & \Xi_{22} \end{bmatrix}\eta(t). \qquad (3.8)

    Since the LMI (3.2) holds, it follows that \dot{V}(t) < 0. Therefore, the proof is completed.

    Remark 3.1. The purpose of constructing an augmented LKF is to extract more information from the system. By introducing new variables and parameters, the augmented LKF can describe the dynamic characteristics of the system in greater detail, helping us to better understand and analyze system behavior. Typically, in order to satisfy the stability conditions of an augmented LKF, the matrix variables involved need to be positive definite and symmetric. This is because in control theory, positive definite matrices and symmetric matrices have good properties that can ensure the nonnegativity and convexity of the LKF [24]. When requiring all matrix variables in the designed augmented LKFs to be positive definite and symmetric, it may lead to increased conservatism. This is because the restrictions of positive definiteness and symmetry narrow down the set of available LKFs, possibly failing to capture all system dynamics.

    Remark 3.2. Inspired by [15], a relaxed and asymmetric LKF is constructed in this paper, in which the matrix variables involved are not all required to be positive definite or symmetric. By utilizing the condition that the quadratic function is positive definite, the proposed Lemma 2.4 ensures the positive definiteness of the LKF. Furthermore, when combined with certain extended fundamental inequalities, Theorem 3.1 is less conservative than some results in the existing literature.

    This section uses a numerical example to demonstrate the feasibility of the proposed approach.

    Example 4.1. Consider DNNs (2.1), with the following system parameters:

    A = \begin{bmatrix} 1.5 & 0 \\ 0 & 1.7 \end{bmatrix}, \quad L_1 = \mathrm{diag}\{0, 0\}, \quad L_2 = \mathrm{diag}\{0.15, 0.4\}, \quad B = \begin{bmatrix} 0.0503 & 0.0454 \\ 0.0987 & 0.2075 \end{bmatrix}, \quad C = \begin{bmatrix} 0.2381 & 0.9320 \\ 0.0388 & 0.5062 \end{bmatrix}.

    Solving the LMIs in Theorem 3.1 yields the MADBs. Table 1 lists the MADBs of Example 1 for various μ obtained by Theorem 3.1, compared both theoretically and numerically with some recent results from the literature. The results of this paper are clearly better than some of those reported previously. Based on the data presented in Table 1, the MADB of system (2.1) is 11.8999 for μ = 0.4.

    Table 1.  The MADBs of h with various μ for Example 1.
    Methods μ=0.4 μ=0.45 μ=0.5 μ=0.55
    [25] Theorem 1 7.6697 6.7287 6.4126 6.2569
    [26] Theorem 2.1 (m=6) 8.970 7.663 7.115 6.855
    [27] Theorem 1 10.2637 9.0586 9.0586 9.1910
    [28] Theorem 2 10.4371 9.1910 8.6957 8.3806
    [29] Theorem 2 10.5730 9.3566 8.8467 8.5176
    Theorem 3.1 11.8999 11.4345 10.1016 9.8864
    Improvement 19.472% 26.543% 20.550% 20.697%

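    Given a routine that decides feasibility of the LMIs in Theorem 3.1 for fixed h and μ, the MADBs reported in Table 1 can be located by bisection over h; a sketch of that outer loop (the is_feasible callback is assumed to wrap an LMI solver and is not implemented here):

    def max_allowable_delay(is_feasible, mu, h_lo=0.0, h_hi=20.0, tol=1e-4):
        """Bisection for the largest h such that the stability LMIs remain feasible.
        `is_feasible(h, mu)` is assumed to return True/False from an LMI solver."""
        if not is_feasible(h_lo, mu):
            return None                      # infeasible even for arbitrarily small delay
        while is_feasible(h_hi, mu):         # enlarge the bracket if needed
            h_hi *= 2.0
        while h_hi - h_lo > tol:
            mid = 0.5 * (h_lo + h_hi)
            if is_feasible(mid, mu):
                h_lo = mid
            else:
                h_hi = mid
        return h_lo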

    In addition, we use different initial values (x1(0)=col[0.5,0.8],x2(0)=col[0.2,0.8]) and

    f(x(t)) = \mathrm{col}[0.3\tanh(x_1(t)), 0.8\tanh(x_2(t))]

    to obtain the state trajectories of system (2.1). The plots of the state trajectories show that all trajectories ultimately converge to the equilibrium point, albeit over different time spans (Figures 5–8). Finally, the numerical simulation results show that our proposed method is effective and the new stability criterion obtained is feasible.

    Figure 5.  State response of the DNNs (2.1) with x1(0)=col[0.5,0.8], μ=0.4, MADBs=11.8999.
    Figure 6.  State response of the DNNs (2.1) with x1(0)=col[0.5,0.8], μ=0.55, MADBs=9.8864.
    Figure 7.  State response of the DNNs (2.1) with x2(0)=col[0.2,0.8], μ=0.4, MADBs=11.8999.
    Figure 8.  State response of the DNNs (2.1) with x2(0)=col[0.2,0.8], μ=0.55, MADBs=9.8864.
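    Trajectories like those in Figures 5–8 can be reproduced with a simple fixed-step Euler scheme and a delay history buffer; a sketch using the Example 4.1 data as reconstructed above (the delay and the initial history are taken constant for simplicity, and the step size is an arbitrary choice):

    import numpy as np

    # Example 4.1 data
    A = np.diag([1.5, 1.7])
    B = np.array([[0.0503, 0.0454], [0.0987, 0.2075]])
    C = np.array([[0.2381, 0.9320], [0.0388, 0.5062]])
    f = lambda x: np.array([0.3 * np.tanh(x[0]), 0.8 * np.tanh(x[1])])

    def simulate(x0, h_delay, t_end=60.0, dt=1e-3):
        """Euler integration of x'(t) = -A x(t) + B f(x(t)) + C f(x(t - h_delay)),
        with constant initial history x(t) = x0 for t <= 0 (a simplifying assumption)."""
        steps = int(t_end / dt)
        lag = int(h_delay / dt)
        traj = np.empty((steps + 1, 2))
        traj[0] = x0
        for k in range(steps):
            x = traj[k]
            x_del = traj[max(k - lag, 0)]
            traj[k + 1] = x + dt * (-A @ x + B @ f(x) + C @ f(x_del))
        return traj

    traj = simulate(np.array([0.5, 0.8]), h_delay=11.8999)   # cf. Figure 5
    print(traj[-1])   # should approach the origin if the system is stable for this delay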

    The main focus of this study is the stability analysis of NNs with time-varying delays. To improve upon the existing literature, this paper has proposed a quadratic-function method for proving that the LKF is positive definite. A relaxed LKF has been constructed based on this method, which contains more information about the time delay and allows more relaxed requirements on the matrix variables. Using LMIs, a new stability criterion with lower conservatism has been derived. These improvements make the stability criterion applicable in a wider range of scenarios. The numerical example illustrates the feasibility of the proposed approach.

    Throughout the preparation of this work, we utilized the AI-based proofreading tool "Grammarly" to identify and correct grammatical errors. Subsequently, we thoroughly examined and made any additional edits to the content as required. We take complete responsibility for the content of this publication.

    This work was supported by the National Natural Science Foundation of China (No. 12061088), the Key R & D Project of the Sichuan Provincial Department of Science and Technology (No. 2023YFG0287) and the Sichuan Natural Science Youth Fund Project (No. 24NSFSC7038).

    There are no conflicts of interest regarding this work.



    [1] S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Software, 95 (2016), 51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008 doi: 10.1016/j.advengsoft.2016.01.008
    [2] M. Mernik, S. H. Liu, D. Karaboga, M. Črepinšek, On clarifying misconceptions when comparing variants of the artificial bee colony algorithm by offering a new implementation, Inf. Sci., 291 (2016), 115–127. https://doi.org/10.1016/j.ins.2014.08.040. doi: 10.1016/j.ins.2014.08.040
    [3] R. V. Rao, V. J. Savsani, D. P. Vakharia, Teaching–learning-based optimization: an optimization method for continuous non-linear large scale problems, Inf. Sci., 183 (2012), 1–15. https://doi.org/10.1016/j.ins.2011.08.006 doi: 10.1016/j.ins.2011.08.006
    [4] Y. Tan, Y. Zhu, Fireworks algorithm for optimization, in International conference in swarm intelligence, (2010), 355–364. https://doi.org/10.1007/978-3-642-13495-1_44
    [5] C. Armin, H. K. Mostafa, P. M. Mahdi, Tree Growth Algorithm (TGA), Eng. Appl. Artif. Intell., 72 (2018), 393–414. https://doi.org/10.1016/j.engappai.2018.04.021 doi: 10.1016/j.engappai.2018.04.021
    [6] L. Abualigah, A. Diabatb, S. Mirjalilid, M. A. Elazizf, A. H. Gandomih, The arithmetic optimization algorithm, Comput. Methods Appl. Mech. Eng., 376 (2021), 113609. https://doi.org/10.1016/j.cma.2020.113609 doi: 10.1016/j.cma.2020.113609
    [7] J. F. Frenzel, Genetic algorithms, IEEE Potentials, 12 (1993), 21–24. https://doi.org/10.1109/45.282292 doi: 10.1109/45.282292
    [8] R. A. Sarker, S. M. Elsayed, R. Tapabrata, Differential evolution with dynamic parameters selection for optimization problems, IEEE Trans. Evol. Comput., 18 (2014), 689–707. https://doi.org/10.1109/TEVC.2013.2281528 doi: 10.1109/TEVC.2013.2281528
    [9] J. R. Koza, J. P. Rice, Automatic programming of robots using genetic programming, in Proceedings of the Tenth 20 Computational Intelligence and Neuroscience National Conference on Artificial Intelligence, (1992).
    [10] H. G. Beyer, H. P. Schwefel, Evolution strategies–A comprehensive introduction, Nat. Comput., 1 (2002), 3–52. https://doi.org/10.1023/A:1015059928466 doi: 10.1023/A:1015059928466
    [11] Z. W. Geem, J. H. Kim, G. V. Loganathan, A new heuristic optimization algorithm: harmony search, Simulation, 76 (2001), 60–68. https://doi.org/10.1177/003754970107600201 doi: 10.1177/003754970107600201
    [12] E. Atashpaz-Gargari, C. Lucas, Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition, in 2007 IEEE Congress on Evolutionary Computation, (2007), 4661–4667. https://doi.org/10.1109/CEC.2007.4425083
    [13] Q. Zhang, R. Wang, K. D. Juan Yang, Y. Li, J. Hu, Collective decision optimization algorithm: A new heuristic optimization method, Neurocomputing, 221 (2017), 123–137. https://doi.org/10.1016/j.neucom.2016.09.068 doi: 10.1016/j.neucom.2016.09.068
    [14] M. Kumar, A. J. Kulkarni, S. C. Satapathy, Socio evolution & learning optimization algorithm: A socio-inspired optimization methodology, Future Gener. Comput. Syst., 81 (2018), 252–272. https://doi.org/10.1016/j.future.2017.10.052 doi: 10.1016/j.future.2017.10.052
    [15] A. Qamar, Y. Irfan, S. Mehreen, Political Optimizer: A novel socio-inspired meta-heuristic for global optimization, Knowl. Based Syst., 195 (2020), 105709. https://doi.org/10.1016/j.knosys.2020.105709 doi: 10.1016/j.knosys.2020.105709
    [16] F. A. Hashim, E. H. Houssein, M. S. Mabrouk, W. Al-Atabany, S. Mirjalili, Henry gas solubility optimization: A novel physics-based algorithm, Future Gener. Comput. Syst., 101 (2019), 646–667. https://doi.org/10.1016/j.future.2019.07.015 doi: 10.1016/j.future.2019.07.015
    [17] O. K. Erol, I. Eksin, A new optimization method: big bang–big crunch, Adv. Eng. Software, 37 (2006), 106–111. https://doi.org/10.1016/j.advengsoft.2005.04.005 doi: 10.1016/j.advengsoft.2005.04.005
    [18] S. Mirjalili, S. M. Mirjalili, A. Hatamlou, Multi-verse optimizer: a nature-inspired algorithm for global optimization, Neural Comput. Appl., 27 (2016), 495–513. https://doi.org/10.1007/s00521-015-1870-7 doi: 10.1007/s00521-015-1870-7
    [19] H. Abedinpourshotorban, S. M. Shamsuddin, Z. Beheshti, D. N. A. Jawawi, Electromagnetic field optimization: A physics-inspired metaheuristic optimization algorithm, Swarm Evol. Comput., 26 (2016), 8–22. https://doi.org/10.1016/j.swevo.2015.07.002 doi: 10.1016/j.swevo.2015.07.002
    [20] E. Rashedi, H. Nezamabadi-pour, S. Saryazdi, GSA: A gravitational search algorithm, Inf. Sci., 179 (2009), 2232–2248. https://doi.org/10.1016/j.ins.2009.03.004 doi: 10.1016/j.ins.2009.03.004
    [21] A. Kaveh, A. Dadras, A novel meta-heuristic optimization algorithm: thermal exchange optimization, Adv. Eng. Software, 110 (2017), 69–84. https://doi.org/10.1016/j.advengsoft.2017.03.014 doi: 10.1016/j.advengsoft.2017.03.014
    [22] R. A. Formato, Central force optimization, Progress Electromagn. Res., 77 (2007), 425–491. https://doi.org/10.2528/PIER07082403 doi: 10.2528/PIER07082403
    [23] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (abc) algorithm, J. Global Optim., 39 (2007), 459–471. https://doi.org/10.1007/s10898-007-9149-x doi: 10.1007/s10898-007-9149-x
    [24] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in MHS'95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, (1995), 39–43.
    [25] S. Mirjalili, Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm, Knowl. Based Syst., 89 (2015), 228–249. https://doi.org/10.1016/j.knosys.2015.07.006 doi: 10.1016/j.knosys.2015.07.006
    [26] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Trans. Syst. Man Cybern. Part B, 26 (1996), 29–41. https://doi.org/10.1109/3477.484436 doi: 10.1109/3477.484436
    [27] A. Faramarzi, M. Heidarinejad, S. Mirjalili, A. H. Gandomic, Marine predators algorithm: A nature-inspired metaheuristic, Expert Syst. Appl., 152 (2020), 113377. https://doi.org/10.1016/j.eswa.2020.113377 doi: 10.1016/j.eswa.2020.113377
    [28] D. Gaurav, K. Vijay, Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems, Knowl. Based Syst., 165 (2019), 169–196. https://doi.org/10.1016/j.knosys.2018.11.024 doi: 10.1016/j.knosys.2018.11.024
    [29] D. Gaurav, K. Amandeep, STOA: A bio-inspired based optimization algorithm for industrial engineering problems, Eng. Appl. Artif. Intell., 82 (2019), 148–174. https://doi.org/10.1016/j.engappai.2019.03.021 doi: 10.1016/j.engappai.2019.03.021
    [30] L. Abualigah, D. Yousri, M. A. Elaziz, A. A.Ewees, M. A. A. Al-qaness, A. H. Gandomi, Aquila Optimizer: a novel meta-heuristic optimization algorithm, Comput. Ind. Eng., 157 (2021), 107250. https://doi.org/10.1016/j.cie.2021.107250 doi: 10.1016/j.cie.2021.107250
    [31] S. Mirjalili, S. M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Software, 69 (2014), 46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007 doi: 10.1016/j.advengsoft.2013.12.007
    [32] S. Mirjalili, A. H. Gandomi, S. Z. Mirjalili, S. Saremi, H. Faris, S. M. Mirjalili, Salp swarm algorithm: A bio-inspired optimizer for engineering design problems, Adv. Eng. Software, 114 (2017), 163–191. https://doi.org/10.1016/j.advengsoft.2017.07.002 doi: 10.1016/j.advengsoft.2017.07.002
    [33] A. A. Heidari, H. Faris, I. Aljarah, M. Mafarja, H. Chen, Harris hawks optimization: Algorithm and applications, Future Gener. Comput. Syst., 97 (2019), 849–872. https://doi.org/10.1016/j.future.2019.02.028 doi: 10.1016/j.future.2019.02.028
    [34] Y. Feng, S. Deb, G. G. Wang, A. H. Alavi, Monarch butterfly optimization: a comprehensive review, Expert Syst. Appl., 168 (2020), 114418. https://doi.org/10.1016/j.eswa.2020.114418 doi: 10.1016/j.eswa.2020.114418
    [35] S. Li, H. Chen, M. Wang, A. A. Heidari, S. Mirjalili, Slime mould algorithm: A new method for stochastic optimization, Future Gener. Comput. Syst., 111 (2020), 300–323. https://doi.org/10.1016/j.future.2020.03.055 doi: 10.1016/j.future.2020.03.055
    [36] A. Luque-Chang, E. Cuevas, M. Pérez-Cisneros, F. Fausto, R. Sarkar, Moth swarm algorithm for image contrast enhancement, Knowl. Based Syst., 212 (2021), 106607. https://doi.org/10.1016/j.knosys.2020.106607 doi: 10.1016/j.knosys.2020.106607
    [37] Y. Yang, H. Chen, A. A. Heidari, A. H. Gandomi, Open source MATLAB software of hunger games search (HGS) optimization algorithm, 2021. http://dx.doi.org/10.13140/RG.2.2.10702.18241
    [38] D. Aniszewska, Multiplicative Runge–Kutta methods, Nonlinear Dyn., 50 (2007), 265–272. https://doi.org/10.1007/s11071-006-9156-3 doi: 10.1007/s11071-006-9156-3
    [39] R. S. Parpinelli, H. S. Lopes, A. A. Freitas, Data mining with an ant colony optimization algorithm, Evol. Comput. IEEE Trans., 6 (2002), 321–332. https://doi.org/10.1109/TEVC.2002.802452 doi: 10.1109/TEVC.2002.802452
    [40] I. Ahmadianfar, A. A. Heidari, S. Noshadian, H. Chen, A. H. Gandomi, INFO: An efficient optimization algorithm based on weighted mean of vectors, Expert Syst. Appl., 195 (2022). https://doi.org/10.1016/j.eswa.2022.116516 doi: 10.1016/j.eswa.2022.116516
    [41] F. A. Hashim, R. R. Mostafa, A. G. Hussien, S. Mirjalili, K. M. Sallam, Fick's Law Algorithm: A physical law-based algorithm for numerical optimization, Knowl. Based Syst., 260 (2023) 110146. https://doi.org/10.1016/j.knosys.2022.110146 doi: 10.1016/j.knosys.2022.110146
    [42] A. S. Assiri, A. G. Hussien, M. Amin, Ant lion optimization: Variants, hybrids, and applications, IEEE Access, 8 (2020), 77746–77764. https://doi.org/10.1109/ACCESS.2020.2990338 doi: 10.1109/ACCESS.2020.2990338
    [43] Z. M. Gao, J. Zhao, Y. R. Hu, H. F. Chen, The challenge for the nature-inspired global optimization algorithms: Non-symmetric benchmark functions, IEEE Access, 9 (2021), 106317–106339. https://doi.org/10.1109/ACCESS.2021.3100365 doi: 10.1109/ACCESS.2021.3100365
    [44] S. Wang, H. Jia, L. Abualigah, Q. Liu, R. Zheng, An improved hybrid aquila optimizer and harris hawks algorithm for solving industrial engineering optimization problems, Processes, 9 (2021), 1551. https://doi.org/10.3390/pr9091551 doi: 10.3390/pr9091551
    [45] M. Ahmadein, Boosting COVID-19 image classification using MobileNetV3 and aquila optimizer algorithm, Entropy, 23 (2021), https://doi.org/10.3390/e23111383 doi: 10.3390/e23111383
    [46] Y. J. Zhang, Y. X. Yan, J. Zhao, Z. M. Gao, AOAAO: The hybrid algorithm of arithmetic optimization algorithm with aquila optimizer, IEEE Access, 10 (2022), 10907–10933. https://doi.org/10.1109/ACCESS.2022.3144431 doi: 10.1109/ACCESS.2022.3144431
    [47] J. Zhao, Y. Zhang, S. Li, Y. Wang, Y. Yan, Z. Gao, A chaotic self-adaptive JAYA algorithm for parameter extraction of photovoltaic models, Math. Biosci. Eng., 19 (2022), 5638–5670. https://doi.org/10.3934/mbe.2022264 doi: 10.3934/mbe.2022264
    [48] Y. Zhang, Y. Wang, S. Li, F. Yao, L. Tao, Y. Yan, An enhanced adaptive comprehensive learning hybrid algorithm of Rao-1 and JAYA algorithm for parameter extraction of photovoltaic models, Math. Biosci. Eng., 19 (2022), 5610–5637. https://doi.org/10.3934/mbe.2022263 doi: 10.3934/mbe.2022263
    [49] W. Zhou, P. Wang, A. A. Heidari, X. Zhao, H. Turabieh, M. Mafarja, Metaphor-free dynamic spherical evolution for parameter estimation of photovoltaic modules, Energy Rep., 7 (2021), 5175–5202. https://doi.org/10.1016/j.egyr.2021.07.041 doi: 10.1016/j.egyr.2021.07.041
    [50] S. Singh, H. Singh, N. Mittal, H. Singh, A. G. Hussien, F. Sroubek, A feature level image fusion for Night-Vision context enhancement using Arithmetic optimization algorithm based image segmentation, Expert Syst. Appl., 209 (2022), 118272. https://doi.org/10.1016/j.eswa.2022.118272. doi: 10.1016/j.eswa.2022.118272
    [51] A. G. Hussien, L. Abualigah, R. A. Zitar, F. A. Hashim, M. Amin, A. Saber, et al., Recent advances in harris hawks optimization: A comparative study and applications, Electronics, 11 (2022), 1919. https://doi.org/10.3390/electronics11121919 doi: 10.3390/electronics11121919
    [52] S. Wang, A. G. Hussien, H. Jia, L. Abualigah, R. Zheng, Enhanced remora optimization algorithm for solving constrained engineering optimization problems, Mathematics, 10 (2022), 1696. https://doi.org/10.3390/math10101696 doi: 10.3390/math10101696
    [53] F. A. Hashim, A. G. Hussien, Snake optimizer: A novel meta-heuristic optimization algorithm, Knowl. Based Syst., 242 (2022), 108320. https://doi.org/10.1016/j.knosys.2022.108320 doi: 10.1016/j.knosys.2022.108320
    [54] R. Zheng, A. G. Hussien, H. M. Jia, L. Abualigah, S. Wang, D. Wu, An improved wild horse optimizer for solving optimization problems, Mathematics, 10 (2022), 8. https://doi.org/10.3390/math10081311 doi: 10.3390/math10081311
    [55] A. Hussien, R. Mostafa, M. Khan, S. Kadry, F. A. Hashim, Enhanced COOT optimization algorithm for dimensionality reduction, in 2022 Fifth International Conference of Women in Data Science at Prince Sultan University (WiDS PSU), (2022). https://doi.org/10.1109/WiDS-PSU54548.2022.00020
    [56] H. Yu, H. Jia, J. Zhou, A. G. Hussien, Enhanced Aquila optimizer algorithm for global optimization and constrained engineering problems, Math. Biosci. Eng., 19 (2022), 14173–14211. https://doi.org/10.3934/mbe.2022660 doi: 10.3934/mbe.2022660
    [57] Y. Yang, C. Qian, H. Li, Y. Gao, J. Wu, C. Liu, et al., An efficient DBSCAN optimized by arithmetic optimization algorithm with opposition-based learning, J. Supercomput., 78 (2022), 19566–19604. https://doi.org/10.1007/s11227-022-04634-w doi: 10.1007/s11227-022-04634-w
    [58] Z. Cui, X. Hou, H. Zhou, W. Lian, J. Wu, Modified slime mould algorithm via levy flight, in 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), (2020).
    [59] Y. Yang, Y. Gao, S. Tan, S. Zhao, J. Wu, S. Gao, et al., An opposition learning and spiral modelling based arithmetic optimization algorithm for global continuous optimization problems, Eng. Appl. Artif. Intell., 113 (2022), 104981. https://doi.org/10.1016/j.engappai.2022.104981 doi: 10.1016/j.engappai.2022.104981
    [60] M. Abd Elaziz, D. Oliva, Parameter estimation of solar cells diode models by an improved opposition-based whale optimization algorithm, Energy Convers. Manage., 171 (2018), 1843–1859. https://doi.org/10.1016/j.enconman.2018.05.062 doi: 10.1016/j.enconman.2018.05.062
    [61] A. G. Hussien, M. Amin, M. Abd El Aziz, A comprehensive review of moth-flame optimisation: variants, hybrids, and applications, J. Exp. Theor. Artif. Intell., 32 (2020), 705–725. https://doi.org/10.1080/0952813X.2020.1737246 doi: 10.1080/0952813X.2020.1737246
    [62] H. Yu, S. Qiao, A. A. Heidari, A. A. El-Saleh, C. Bi, M. Mafarja, et al., Laplace crossover and random replacement strategy boosted Harris hawks optimization: performance optimization and analysis, J. Comput. Design Eng., 9 (2022), 1879–1916. https://doi.org/10.1093/jcde/qwac085 doi: 10.1093/jcde/qwac085
    [63] A. Qi, D. Zhao, F. Yu, A. A. Heidari, H. Chen, L. Xiao, Directional mutation and crossover for immature performance of whale algorithm with application to engineering optimization, J. Comput. Design Eng., 9 (2022), 519–563. https://doi.org/10.1093/jcde/qwac014 doi: 10.1093/jcde/qwac014
    [64] D. Zhao, L. Liu, F. Yu, A. A. Heidari, M. Wang, H. Chen, et al., Opposition-based ant colony optimization with all-dimension neighborhood search for engineering design, J. Comput. Design Eng., 9 (2022), 1007–1044. https://doi.org/10.1093/jcde/qwac038 doi: 10.1093/jcde/qwac038
    [65] X. Zhou, W. Gui, A. A. Heidari, Z. Cai, H. Elmannai, M. Hamdi, et al., Advanced orthogonal learning and Gaussian barebone hunger games for engineering design, J. Comput. Design Eng., 9 (2022), 1699–1736. https://doi.org/10.1093/jcde/qwac075 doi: 10.1093/jcde/qwac075
    [66] F. Rezaei, H. R. Safavi, M. Abd Elaziz, S. H. A. El-Sappagh, M. A. Al-Betar, T. Abuhmed, An enhanced grey wolf optimizer with a velocity-aided global search mechanism, Mathematics, 10 (2022), 351. https://doi.org/10.3390/math10030351 doi: 10.3390/math10030351
    [67] J. Zhao, Z. M. Gao, H. F. Chen, The simplified aquila optimization algorithm, IEEE Access, 10 (2022), 22487–22515. https://doi.org/10.1109/ACCESS.2022.3153727 doi: 10.1109/ACCESS.2022.3153727
    [68] M. Khishe, M. R. Mosavi, Chimp optimization algorithm, Expert Syst. Appl., 149 (2020), 113338. https://doi.org/10.1016/j.eswa.2020.113338 doi: 10.1016/j.eswa.2020.113338
    [69] L. Abualigah, D. Yousri, M. Abd Elaziz, A. A. Ewees, M. A. A. Al-qaness, A. H. Gandomi, Aquila Optimizer: A novel meta-heuristic optimization algorithm, Comput. Ind. Eng., 157 (2021), 107250. https://doi.org/10.1016/j.cie.2021.107250 doi: 10.1016/j.cie.2021.107250
    [70] A. G. Hussien, M. Amin, A self-adaptive Harris Hawks optimization algorithm with opposition-based learning and chaotic local search strategy for global optimization and feature selection, Int. J. Mach. Learn. Cybern., 13 (2022), 309–336. https://doi.org/10.1007/s13042-021-01326-4 doi: 10.1007/s13042-021-01326-4
    [71] A. G. Hussien, An enhanced opposition-based Salp Swarm Algorithm for global optimization and engineering problems, J. Ambient Intell. Humanized Comput., 13 (2022), 129–150. https://doi.org/10.1007/s12652-021-02892-9 doi: 10.1007/s12652-021-02892-9
    [72] H. Bayzidi, S. Talatahari, M. Saraee, C. P. Lamarche, Social network search for solving engineering optimization problems, Comput. Intell. Neurosci., 2021 (2021), 8548639. https://doi.org/10.1155/2021/8548639 doi: 10.1155/2021/8548639
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
