Research article

A novel class of forward-backward explicit iterative algorithms using inertial techniques to solve variational inequality problems with quasi-monotone operators

  • Received: 13 November 2022 Revised: 02 February 2023 Accepted: 05 February 2023 Published: 21 February 2023
  • MSC : 47H05, 47H10, 47J20, 47J25, 65J15, 91B50

  • The theory of variational inequalities is an important tool in physics, engineering, finance, and optimization theory. The projection algorithm and its variants are useful tools for determining an approximate solution to the variational inequality problem. This paper introduces three distinct extragradient algorithms for dealing with variational inequality problems involving quasi-monotone and semistrictly quasi-monotone operators in infinite-dimensional real Hilbert spaces. This problem is a general mathematical model that incorporates a number of applied models as special cases, such as equilibrium models, optimization problems, fixed point problems, saddle point problems, and Nash equilibrium problems. The proposed algorithms employ both fixed and variable stepsize rules that are updated iteratively based on previous iterations, and they require no prior knowledge of the Lipschitz constant and no line-search framework. The convergence of the proposed algorithms is established under mild conditions. Numerous experiments have been conducted to highlight the numerical capabilities of the algorithms.

    Citation: Bancha Panyanak, Chainarong Khunpanuk, Nattawut Pholasa, Nuttapol Pakkaranang. A novel class of forward-backward explicit iterative algorithms using inertial techniques to solve variational inequality problems with quasi-monotone operators[J]. AIMS Mathematics, 2023, 8(4): 9692-9715. doi: 10.3934/math.2023489




    Intelligent optimization algorithms are increasingly popular in the field of intelligent computing and are widely applied in many other fields, including engineering, medicine, ecology and environment, marine engineering, and so forth. Classical intelligent optimization algorithms include the particle swarm optimization (PSO) algorithm [1], the genetic algorithm (GA) [2], simulated annealing (SA) [3], etc. PSO is a swarm intelligence optimization algorithm that simulates bird predation: each bird is treated as a particle, and the particles follow the current optimal particle to find the optimal solution in the solution space. GA is a computational model that simulates biological evolution, based on Darwin's theory of evolution and Mendelian genetics; it obtains optimal solutions through three basic operations on chromosomes: selection, crossover and mutation. SA was the first nature-inspired algorithm proposed to simulate the high-temperature annealing of metallic materials: when a material is heated to a high temperature and then slowly cooled, its particles eventually reach a state of equilibrium and solidify into crystals of minimal energy. Many scholars have also developed bionic intelligent optimization algorithms based on these classical algorithms, such as the Sine Cosine Algorithm (SCA) [4], the artificial bee colony (ABC) algorithm [5], the bat algorithm (BA) [6], the Bee Evolutionary Genetic Algorithm (BEGA) [7], the Squirrel Search Algorithm (SSA) [8], the Atom Search Optimization (ASO) algorithm [9], etc.

    The Atom Search Optimization (ASO) algorithm is a new intelligent optimization algorithm derived from molecular dynamics, proposed in 2019. ASO is built on the geometric binding force and the interaction force between atoms, following the laws of classical mechanics [10,11]. The interaction force is derived from the Lennard-Jones (L-J) potential [12,13], and the geometric binding force from the covalent bonds among atoms. In ASO, atoms represent solutions in the search space: the larger the atomic mass, the better the solution, and vice versa. Compared with traditional intelligent algorithms, ASO requires fewer physical parameters and can achieve better performance. As a result, it has been widely used in various fields. Zhang et al. applied ASO to hydrogeological parameter estimation [9] and groundwater dispersion coefficient calculation [14]. Ahmed et al. utilized ASO in fuel cell modeling and successfully built an accurate model; simulations showed that it was as good as commercial proton exchange membrane (PEM) fuel cells [15]. Mohammed et al. used ASO to reduce the peak sidelobe level of the beam pattern [16]. Saeid combined ASO with the Tree Seed Algorithm (TSA) to enhance its performance in exploration [17]. Ghosh et al. proposed an improved atom search algorithm based on binary variables and combined it with the simulated annealing (SA) technique [18]. Elaziz et al. proposed an automatic clustering algorithm combining ASO and SCA to automatically find the best cluster numbers and the corresponding positions [19]. Sun et al. applied an improved ASO to engineering design [20].

    Since ASO is prone to converging to local optima with low accuracy, an Improved Atom Search Optimization (IASO) algorithm based on particle-velocity updating is proposed in this paper. IASO follows the same principle as ASO, but its velocity update is modified to improve the convergence speed of the algorithm, avoid local optima, and allow a broader search for optimal solutions. IASO adopts the particle velocity update idea from PSO and introduces an inertia weight $w$ to improve the performance of ASO. In addition, learning factors $c_1$ and $c_2$ are added, which not only ensure the convergence of the algorithm but also accelerate its convergence speed, effectively alleviating the tendency of the original ASO to find only a local optimum.

    Array signal processing is one of the important research directions of signal processing and has developed rapidly in recent years. DOA estimation of signals is an ongoing research hotspot in array signal processing, with great potential for hydrophone applications. Hydrophones are generally divided into scalar hydrophones and vector hydrophones. Because scalar hydrophones can only measure scalar parameters of the sound field, many scholars have turned to the study of vector hydrophones. Xu et al. used alternating iterative weighted least squares to deal with the off-grid problem in sparsity-based DOA estimation for acoustic vector hydrophone arrays [21]. Amiri designed a micro-electro-mechanical system (MEMS) bionic vector hydrophone with a piezoelectric gated metal oxide semiconductor field-effect transistor (MOSFET) [22]. More and more scholars are doing research on vector arrays. Song et al. studied the measurement results of an acoustic vector sensor array and proposed a new method to obtain better DOA estimation performance in noisy and coherent environments using the time-frequency spatial information of the vector sensor array signal [23]. Gao et al. combined elevation, azimuth and polarization for the estimation of electromagnetic vector sensor arrays based on nested tensor modeling [24]. Baron et al. optimized, conceptualized and evaluated a hydrophone array for sound source localization validation in deep-sea mining [25], and Wang et al. proposed an iterative sparse covariance matrix fitting direction estimation method based on a vector hydrophone array [26]. In recent years, some scholars have applied compressed sensing to signal DOA estimation. Keyvan et al. proposed the Three-Dimensional Orthogonal Matching Pursuit (3D-OMP) algorithm and the Three-Dimensional Focused Orthogonal Matching Pursuit (3D-FOMP) algorithm to obtain better estimation performance in low signal-to-noise-ratio and multi-source environments, and to solve the problem that conventional DOA estimation algorithms cannot distinguish two adjacent sources [27]. Keyvan et al. also designed a new hybrid nonuniform linear array consisting of two uniform linear subarrays and proposed a new DOA estimation method based on the OMP algorithm; this algorithm has lower computational complexity and higher accuracy than the FOMP algorithm, can distinguish adjacent signal sources more accurately, and solves the phase ambiguity problem [28].

    With the development of vector hydrophones, direction-of-arrival estimation of vector hydrophone signals has increasingly wide applications and is of great importance for extending the functionality of sonar devices [29]. Many useful estimation methods, such as multiple signal classification (MUSIC) [30], estimation of signal parameters via rotational invariance techniques (ESPRIT) [31] and maximum likelihood (ML) [32], have been proposed.

    In 1988, Ziskind and Wax applied ML estimation to DOA and achieved ideal results [33]. Compared with MUSIC and ESPRIT, the ML estimation method is more effective and stable, especially at low signal-to-noise ratios (SNR) or with few snapshots. However, in ML DOA estimation, maximizing the likelihood function is a multidimensional nonlinear extremum problem that requires a multidimensional search for the global extremum, which increases the computational burden.

    Many scholars have combined various methods with ML to improve DOA estimation performance. Zhang et al. proposed a sparse iterative covariance-based estimation method and combined it with ML to improve its performance, although its resolution and stability are not high [34]. Hu et al. proposed multi-source DOA estimation based on ML in the spherical harmonic domain [35]. Ji analyzed the asymptotic performance of ML DOA estimation [36]. Selva proposed an effective method to compute ML DOA estimates when the sensor noise power is unknown [37]. Yoon et al. optimized the sequence and branch lengths of taxa in phylogenetic trees by using the maximum likelihood method [38]. Vishnu proposed a line propagation function (LSF)-based sinusoidal frequency estimation algorithm to improve the performance of ML DOA [39].

    In response to the complexity of ML DOA estimation, some scholars have used intelligent optimization algorithms to optimize ML DOA and achieved better performance. Li et al. were the first to apply the genetic algorithm to ML DOA estimation, but GA is prone to problems such as premature convergence [40]. Sharma et al. applied the PSO algorithm to ML DOA estimation, but drawbacks remain in multi-angle estimation because PSO converges slowly and tends to fall into local optima [41]. Zhang et al. combined the artificial bee colony algorithm with ML DOA estimation to reduce the computational complexity of evaluating the ML function [5]. Feng et al. combined the bat algorithm with ML to optimize the multidimensional nonlinear estimation of the spectral function [6]. Fan et al. applied the Improved Bee Evolutionary Genetic Algorithm (IBEGA) to ML DOA estimation [7]. Wang et al. used an improved squirrel search algorithm (ISSA) in ML DOA estimation, which reduced the computational complexity and enhanced the simulation results [42], and Li et al. proposed a method that limits the search space of particle swarm optimization [43].

    Because maximizing the likelihood function in maximum likelihood DOA estimation is a multidimensional nonlinear extremum problem, a multidimensional search for the global extremum is required, which demands extensive computation. To solve this problem, the proposed IASO is applied to ML DOA estimation. Simulation results show that the combination of IASO and ML DOA estimation significantly reduces the computational complexity of the multidimensional nonlinear optimization in ML estimation.

    The main structure of this article is as follows: Section 2 presents the improved ASO and compares the convergence performance of ASO and IASO on 23 benchmark functions; Section 3 gives the data model and ML estimation; Section 4 combines IASO with ML DOA, providing simulation results to validate the convergence and statistical performance of IASO ML estimation and comparing it with ASO, PSO, GA and SCA combined with ML DOA estimation. Section 5 concludes the paper.

    The Atom Search Optimization (ASO) algorithm, proposed by Zhao et al., is a physics-inspired algorithm developed from molecular dynamics. The algorithm is simple to implement, features few parameters and good convergence performance, and has thus been used to solve a variety of optimization problems.

    The ASO algorithm is based on the interaction forces between atoms and geometric constraints, and the quality of each atom's position in the search space is measured by its mass: the heavier the atom, the better the solution. The search and optimization process is driven by the mutual repulsion or attraction of atoms, depending on the distance between them. Lighter atoms accelerate toward heavier atoms, which widens the search area and performs a broad exploration, while heavier atoms accelerate less, concentrating the search on better solutions. Suppose a group contains $N$ atoms and the position of the $i$th atom is $X_i=[x_i^1,x_i^2,\ldots,x_i^d]$; according to Newton's second law,

    $F_i+G_i=m_ia_i$, (2.1)

    where $F_i$ is the total interaction force acting on atom $i$, $G_i$ is the geometric binding force on atom $i$, and $m_i$ is the mass of atom $i$.

    The general unconstrained optimization problem can be defined as

    $\min f(x),\quad x=[x_1,x_2,\ldots,x_D]$, (2.2)
    $\text{s.t.}\quad L_b\le x\le U_b$,
    $L_b=[lb_1,\ldots,lb_D],\quad U_b=[ub_1,\ldots,ub_D]$,

    where $x_d\ (d=1,2,\ldots,D)$ is the $d$th component of the search space, $L_b$ is the lower limit, $U_b$ is the upper limit, and $D$ is the dimension of the search space.

    The fitness value $Fit_i(t)$ of each atom's position is calculated according to a user-defined fitness function. The mass of each atom, given by Eq (2.3) and Eq (2.4), is derived from its fitness.

    $M_i(t)=e^{-\frac{Fit_i(t)-Fit_{best}(t)}{Fit_{worst}(t)-Fit_{best}(t)}}$, (2.3)
    $m_i(t)=\frac{M_i(t)}{\sum_{j=1}^{N}M_j(t)}$, (2.4)

    where $Fit_{best}(t)$ and $Fit_{worst}(t)$ denote the best and worst fitness values among the atoms at the $t$th iteration, $Fit_i(t)$ is the fitness value of atom $i$ at the $t$th iteration, and the expressions of $Fit_{best}(t)$ and $Fit_{worst}(t)$ are as follows

    $Fit_{best}(t)=\min_{i\in\{1,2,3,\ldots,N\}}Fit_i(t)$, (2.5)
    $Fit_{worst}(t)=\max_{i\in\{1,2,3,\ldots,N\}}Fit_i(t)$. (2.6)
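    As an illustration of Eqs (2.3)–(2.6), the following Python sketch (an illustrative transcription, not the authors' code; the small epsilon guard against a zero denominator is an added assumption) computes the normalized atom masses from the population's fitness values:

```python
import numpy as np

def atom_masses(fit):
    """Normalized atom masses from fitness values, Eqs (2.3)-(2.6)."""
    fit_best, fit_worst = fit.min(), fit.max()   # Eqs (2.5)-(2.6)
    # Eq (2.3): exponential rescaling; the epsilon avoids division by zero
    M = np.exp(-(fit - fit_best) / (fit_worst - fit_best + 1e-30))
    return M / M.sum()                           # Eq (2.4): normalize
```

    For example, `atom_masses(np.array([1.0, 2.0, 5.0]))` assigns the largest mass to the atom with the smallest (best) fitness value.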

    Following [9,11], the interaction force between atoms, derived from an optimized L-J potential, is

    $F_{ij}(t)=\eta(t)\left[2\left(h_{ij}(t)\right)^{-13}-\left(h_{ij}(t)\right)^{-7}\right]$, (2.7)
    $\eta(t)=\alpha\left(1-\frac{t-1}{T}\right)^{3}e^{-\frac{20t}{T}}$, (2.8)

    where $\eta(t)$ is the depth function that adjusts the repulsive and attractive forces, $\alpha$ is the depth weight, $T$ is the maximum number of iterations, and $t$ is the current iteration. Figure 1 shows the behavior of the function $F$ for different values of $\eta$, with $h$ ranging from 0.9 to 2. It can be seen that for $h$ from 0.9 to 1.12 the force is repulsive; for $h$ from 1.12 to 2 it is attractive; and at $h=1.12$ equilibrium is reached. Therefore, in order to improve exploration in ASO, the lower limit of repulsion is set to $h=1.1$ and the upper limit of attraction to $h=1.24$.

    $h_{ij}(t)=\begin{cases}h_{min}, & \frac{r_{ij}(t)}{\sigma(t)}<h_{min},\\ \frac{r_{ij}(t)}{\sigma(t)}, & h_{min}\le\frac{r_{ij}(t)}{\sigma(t)}\le h_{max},\\ h_{max}, & \frac{r_{ij}(t)}{\sigma(t)}>h_{max},\end{cases}$ (2.9)
    Figure 1.  Function behaviors of $F$ with different values of $\eta$.

    where $h_{ij}(t)$ is the distance function, $h_{min}=1.1$ and $h_{max}=1.24$ represent the lower and upper limits of $h$, $r_{ij}$ is the Euclidean distance between atom $i$ and atom $j$, and $\sigma(t)$ is defined as follows

    $\sigma(t)=\left\|x_{ij}(t),\frac{\sum_{j\in KBest}x_{ij}(t)}{K(t)}\right\|_2$, (2.10)

    where $x_{ij}$ is the position component of atom $i$ in the $j$th dimension of the search space, $\|\cdot\|_2$ denotes the two-norm, and $KBest$ is the subset of the atom population composed of the first $K$ atoms with the best fitness values.

    $K(t)=N-(N-2)\times\frac{t}{T}$, (2.11)
    $\begin{cases}h_{min}=g_0+g(t),\\ h_{max}=u,\end{cases}$ (2.12)

    where $g_0$ is a drift factor, which shifts the algorithm from exploration to exploitation, and

    $g(t)=0.1\times\sin\left(\frac{\pi}{2}\times\frac{t}{T}\right)$. (2.13)

    Therefore, the total force exerted on atom $i$ by the other atoms can be expressed as

    $F_i^d(t)=\sum_{j\in KBest}rand_j\,F_{ij}^d(t)$, (2.14)
    where $rand_j$ is a random number in $[0,1]$.

    The geometric binding force also plays an important role in ASO. Assume that each atom has a covalent bond with each atom in $KBest$ and is therefore bound by $KBest$; Figure 2 shows the effect of atomic interactions. $A_1,A_2,A_3$ and $A_4$ are the atoms with the best fitness values, called $KBest$. The atoms in $KBest$ attract or repel one another, and each of $A_5,A_6,A_7$ is attracted or repelled by every atom in $KBest$. Every atom in the population is also bound by the optimal atom $A_1$ ($X_{best}$); the binding force on the $i$th atom is

    $G_i^d(t)=\lambda(t)\left(x_{best}^d(t)-x_i^d(t)\right)$, (2.15)
    $\lambda(t)=\beta e^{-\frac{20t}{T}}$, (2.16)
    Figure 2.  Forces of an atom system with $KBest$ for $K=5$.

    where $x_{best}^d(t)$ is the position of the best atom at the $t$th iteration, $\beta$ is the multiplier weight, and $\lambda(t)$ is the Lagrange multiplier.

    Under the action of the geometric constraint force and the interaction force, the acceleration of the $i$th atom at time $t$ is

    $a_i^d(t)=\frac{F_i^d(t)+G_i^d(t)}{m_i(t)}=\alpha\left(1-\frac{t-1}{T}\right)^{3}e^{-\frac{20t}{T}}\sum_{j\in KBest}\frac{rand_j\left[2\left(h_{ij}(t)\right)^{-13}-\left(h_{ij}(t)\right)^{-7}\right]}{m_i(t)}\cdot\frac{x_j^d(t)-x_i^d(t)}{\left\|x_i(t),x_j(t)\right\|_2}+\beta e^{-\frac{20t}{T}}\cdot\frac{x_{best}^d(t)-x_i^d(t)}{m_i(t)}$. (2.17)

    In the original ASO, the algorithm was found to be prone to local optima. Therefore, changes are made to the iterative velocity update, allowing the algorithm to escape local optima and to search and optimize more broadly. The particle velocity update from PSO is adopted, and an inertia weight $w$ is introduced into the original ASO velocity update, so that the algorithm is less prone to local optima at the start and the performance of IASO improves. The addition of learning factors $c_1$ and $c_2$ not only ensures convergence but also speeds it up, effectively alleviating the tendency of the original ASO to fall into local optima.

    $w=0.9-0.5\times\left(\frac{t}{T}\right),\quad c_1=10\times\left(\frac{t}{T}\right)^{2},$
    $c_2=1-\left(10\times\left(\frac{t}{T}\right)^{2}\right),$
    $v_i^d(t+1)=w\times rand_i^d\,v_i^d(t)+c_1\times rand_i^d\,a_i^d(t)+c_2\times rand_i^d\left(x_{best}^d(t)-x_i^d(t)\right)$ (2.18)

    At the $(t+1)$th iteration, the position update of the $i$th atom can be expressed as

    $x_i^d(t+1)=x_i^d(t)+v_i^d(t+1)$. (2.19)
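    A compact sketch of this velocity/position step, under the stated schedules for $w$, $c_1$ and $c_2$ (illustrative Python; the array shapes and the shared random-number generator are assumptions of this edit, not the authors' code):

```python
import numpy as np

def iaso_step(x, v, a, x_best, t, T, rng):
    """One IASO update, Eqs (2.18)-(2.19); x, v, a are (N, D) arrays."""
    w  = 0.9 - 0.5 * (t / T)          # inertia weight
    c1 = 10.0 * (t / T) ** 2          # learning factor c1
    c2 = 1.0 - 10.0 * (t / T) ** 2    # learning factor c2
    r1, r2, r3 = (rng.random(x.shape) for _ in range(3))
    v_new = w * r1 * v + c1 * r2 * a + c2 * r3 * (x_best - x)   # Eq (2.18)
    return x + v_new, v_new                                     # Eq (2.19)
```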

    The maximum number of iterations, convergence normalization, maximum running time and the accuracy of the fitness value are commonly used convergence criteria. In this paper, the maximum number of iterations and convergence normalization are used as the criteria for stopping the iterations. The maximum number of iterations is 200, and the convergence normalization criterion is as follows

    $D=\sum_{i=1}^{n}\left(Fit_i-\overline{Fit}\right)^{2}<\varepsilon$, (2.20)

    where $Fit_i$ is the fitness value of the $i$th atom, $\overline{Fit}$ is the average fitness value of the population, and the accuracy $\varepsilon$ is often taken as $10^{-6}$.
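    Read literally, the criterion of Eq (2.20) can be checked as below (a sketch; whether the sum is additionally square-rooted is not recoverable from the source, so the plain sum is used as an assumption):

```python
import numpy as np

def converged(fit, eps=1e-6):
    """Convergence normalization of Eq (2.20) over population fitness values."""
    return np.sum((fit - fit.mean()) ** 2) < eps
```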

    Thus, by iterating the above operations, the optimal solution can eventually be found.

    Table 1.  Unimodal test functions $F_1(x)$–$F_7(x)$.
    Name | Function | n | Range | Optimum
    Sphere | $F_1(x)=\sum_{i=1}^{n}x_i^2$ | 30 | $[-100,100]^n$ | 0
    Schwefel 2.22 | $F_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 30 | $[-100,100]^n$ | 0
    Schwefel 1.2 | $F_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | $[-100,100]^n$ | 0
    Schwefel 2.21 | $F_4(x)=\max_i\{|x_i|,\ 1\le i\le n\}$ | 30 | $[-100,100]^n$ | 0
    Rosenbrock | $F_5(x)=\sum_{i=1}^{n-1}\left(100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right)$ | 30 | $[-200,200]^n$ | 0
    Step | $F_6(x)=\sum_{i=1}^{n}\left(\lfloor x_i+0.5\rfloor\right)^2$ | 30 | $[-100,100]^n$ | 0
    Quartic | $F_7(x)=\sum_{i=1}^{n}ix_i^4+rand[0,1)$ | 30 | $[-1.28,1.28]^n$ | 0

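    A few of these benchmarks translate directly into code; the sketch below (standard textbook definitions, checked at their known optima, not code from the paper) is used for reference in later snippets:

```python
import numpy as np

def sphere(x):        # F1: optimum 0 at x = 0
    return np.sum(x ** 2)

def rosenbrock(x):    # F5: optimum 0 at x = 1
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1) ** 2)

def rastrigin(x):     # F9 (Table 2): optimum 0 at x = 0
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

print(sphere(np.zeros(30)), rosenbrock(np.ones(30)), rastrigin(np.zeros(30)))
# prints: 0.0 0.0 0.0
```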

    The pseudocode of IASO is presented in Algorithm 1.

    Algorithm 1 Pseudocode of IASO
    Begin:
    Randomly initialize a group of atoms $x$ (solutions) and their velocities $v$, and set $Fit_{best}=\mathrm{Inf}$.
    While the stop criterion is not satisfied do
      For each atom $x_i$ do
        Calculate its fitness value $Fit_i$;
        If $Fit_i<Fit_{best}$ then
          $Fit_{best}=Fit_i$;
          $X_{Best}=x_i$;
        End If
        Calculate the mass using Eq (2.3) and Eq (2.4);
        Use $K(t)=N-(N-2)\times\frac{t}{T}$ to determine the $K$ neighbors;
        Use Eq (2.14) and Eq (2.15) to calculate the interaction force and the geometric binding force;
        Calculate the acceleration using Eq (2.17);
        Update the velocity
          $v_i^d(t+1)=w\times rand_i^d\,v_i^d(t)+c_1\times rand_i^d\,a_i^d(t)+c_2\times rand_i^d\left(x_{best}^d(t)-x_i^d(t)\right)$.
        Update the position
          $x_i^d(t+1)=x_i^d(t)+v_i^d(t+1)$.
      End For.
    End While.
    Return the best solution found so far, $X_{Best}$.
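    The loop above can be rendered as the following self-contained Python sketch. It is one possible reading of Eqs (2.3)–(2.19), not the authors' implementation: parameter values follow the paper where stated ($\alpha=50$, $\beta=0.2$, $h_{min}=1.1$, $h_{max}=1.24$), while the handling of $\sigma(t)$ and the boundary clipping are simplifying assumptions of this sketch.

```python
import numpy as np

def iaso(f, lb, ub, n_atoms=50, T=100, alpha=50.0, beta=0.2, seed=0):
    """Minimal IASO sketch following Algorithm 1 (illustrative, not official)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    D = lb.size
    x = lb + rng.random((n_atoms, D)) * (ub - lb)        # random initial atoms
    v = np.zeros((n_atoms, D))
    fit = np.apply_along_axis(f, 1, x)
    b = fit.argmin(); x_best, f_best = x[b].copy(), fit[b]
    for t in range(1, T + 1):
        # masses, Eqs (2.3)-(2.6)
        M = np.exp(-(fit - fit.min()) / (fit.max() - fit.min() + 1e-30))
        m = M / M.sum()
        K = max(2, int(n_atoms - (n_atoms - 2) * t / T)) # Eq (2.11)
        kbest = np.argsort(fit)[:K]                      # K fittest atoms
        hmin = 1.1 + 0.1 * np.sin(np.pi / 2 * t / T)     # Eqs (2.12)-(2.13)
        hmax = 1.24
        eta = alpha * (1 - (t - 1) / T) ** 3 * np.exp(-20 * t / T)  # Eq (2.8)
        lam = beta * np.exp(-20 * t / T)                 # Eq (2.16)
        mean_k = x[kbest].mean(axis=0)
        acc = np.zeros_like(x)
        for i in range(n_atoms):
            sigma = np.linalg.norm(x[i] - mean_k) + 1e-30  # Eq (2.10), simplified
            Fi = np.zeros(D)
            for j in kbest:
                if j == i:
                    continue
                r = np.linalg.norm(x[i] - x[j]) + 1e-30
                h = np.clip(r / sigma, hmin, hmax)         # Eq (2.9)
                # Eqs (2.7), (2.14): L-J-derived force component toward atom j
                Fi += rng.random() * eta * (2 * h**-13 - h**-7) * (x[j] - x[i]) / r
            Gi = lam * (x_best - x[i])                     # Eq (2.15)
            acc[i] = (Fi + Gi) / m[i]                      # Eq (2.17)
        w, c1 = 0.9 - 0.5 * t / T, 10 * (t / T) ** 2
        c2 = 1 - c1
        v = (w * rng.random((n_atoms, D)) * v
             + c1 * rng.random((n_atoms, D)) * acc
             + c2 * rng.random((n_atoms, D)) * (x_best - x))  # Eq (2.18)
        x = np.clip(x + v, lb, ub)                            # Eq (2.19)
        fit = np.apply_along_axis(f, 1, x)
        if fit.min() < f_best:
            f_best, x_best = fit.min(), x[fit.argmin()].copy()
    return x_best, f_best

# quick check on the Sphere function F1 in 5 dimensions
xb, fb = iaso(lambda z: float(np.sum(z ** 2)), [-100] * 5, [100] * 5)
print(fb)   # should be close to 0
```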

    To test the performance of the IASO algorithm, 23 well-known benchmark functions were used. These benchmark functions are described in Tables 1, 2 and 3. $F_1$–$F_7$ are unimodal functions; each has no local optima and only one global optimum, so they can verify the convergence speed of the algorithm. $F_8$–$F_{13}$ are multimodal functions with many local optima. $F_{14}$–$F_{23}$ are low-dimensional functions, each with fewer local optima. Multimodal and low-dimensional functions are therefore well suited to testing the algorithm's ability to avoid local optima and to explore.

    Table 2.  Multimodal test functions $F_8(x)$–$F_{13}(x)$.
    Name | Function | n | Range | Optimum
    Schwefel | $F_8(x)=\sum_{i=1}^{n}\left(-x_i\sin\left(\sqrt{|x_i|}\right)\right)$ | 30 | $[-500,500]^n$ | -12569.5
    Rastrigin | $F_9(x)=\sum_{i=1}^{n}\left(x_i^2-10\cos(2\pi x_i)+10\right)$ | 30 | $[-5.12,5.12]^n$ | 0
    Ackley | $F_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos 2\pi x_i\right)+20+e$ | 30 | $[-32,32]^n$ | 0
    Griewank | $F_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30 | $[-600,600]^n$ | 0
    Penalized | $F_{12}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$ | 30 | $[-50,50]^n$ | 0
    Penalized 2 | $F_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | $[-50,50]^n$ | 0

    Table 3.  Low-dimensional test functions $F_{14}(x)$–$F_{23}(x)$.
    Name | Function | n | Range | Optimum
    Foxholes | $F_{14}(x)=\left[\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right]^{-1}$ | 2 | $[-65.536,65.536]^n$ | 0.998
    Kowalik | $F_{15}(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1\left(b_i^2+b_ix_2\right)}{b_i^2+b_ix_3+x_4}\right]^2$ | 4 | $[-5,5]^n$ | $3.075\times10^{-4}$
    Six-Hump Camel | $F_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | $[-5,5]^n$ | -1.0316
    Branin | $F_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | 2 | $[-5,10]\times[0,15]$ | 0.398
    Goldstein-Price | $F_{18}(x)=\left[1+(x_1+x_2+1)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\times\left[30+(2x_1-3x_2)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | $[-2,2]^n$ | 3
    Hartman 3 | $F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left[-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right]$ | 3 | $[0,1]^n$ | -3.86
    Hartman 6 | $F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left[-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right]$ | 6 | $[0,1]^n$ | -3.322
    Shekel 5 | $F_{21}(x)=-\sum_{i=1}^{5}\left[(x-a_i)(x-a_i)^T+c_i\right]^{-1}$ | 4 | $[0,10]^n$ | -10.1532
    Shekel 7 | $F_{22}(x)=-\sum_{i=1}^{7}\left[(x-a_i)(x-a_i)^T+c_i\right]^{-1}$ | 4 | $[0,10]^n$ | -10.4028
    Shekel 10 | $F_{23}(x)=-\sum_{i=1}^{10}\left[(x-a_i)(x-a_i)^T+c_i\right]^{-1}$ | 4 | $[0,10]^n$ | -10.5363


    In this experiment, the population size for both IASO and ASO was 50 and the maximum number of iterations was 100. Three performance indexes are used to compare IASO and ASO: the average, minimum and standard deviation of the optimal solution. The smaller the average of the optimal solution, the less likely the algorithm is to enter a local optimum and the easier it is to find the global optimum; the smaller the standard deviation, the more stable the algorithm; and the smaller the minimum value, the more accurate it is. Tables 4–6 show the comparison of optimization results for the different types of functions, and the corresponding convergence curves are shown in Figures 6–8. Table 4 and Figure 6 give the optimization results and convergence curves of the unimodal functions $F_1(x)$–$F_7(x)$; IASO performs better than ASO and converges faster. Table 5 and Figure 7 give the optimization results and convergence curves of the multimodal functions $F_8(x)$–$F_{13}(x)$; the overall performance of IASO is better than that of ASO. Table 6 and Figure 8 give the optimization results and convergence curves of the low-dimensional functions $F_{14}(x)$–$F_{23}(x)$; the convergence curves show that ASO converges faster, but IASO is more accurate. Comparing IASO with ASO, the improved IASO converges much faster and is more stable than ASO, and it is also less likely to enter a local optimum.

    Table 4.  Comparisons of results for unimodal functions.
    Function | Index | ASO | IASO
    $F_1(x)$ | Mean | 2.54e-12 | 1.88e-18
     | Std | 3.24e-12 | 1.03e-20
     | Best | 3.48e-15 | 5.22e-19
    $F_2(x)$ | Mean | 3.33e-08 | 3.39e-09
     | Std | 1.89e-10 | 9.90e-12
     | Best | 5.11e-08 | 1.84e-09
    $F_3(x)$ | Mean | 186.5664 | 1.06e-17
     | Std | 86.3065 | 1.23e-21
     | Best | 24.1115 | 1.81e-18
    $F_4(x)$ | Mean | 3.24e-09 | 8.77e-10
     | Std | 6.1409 | 2.32e-12
     | Best | 2.13e-10 | 4.75e-10
    $F_5(x)$ | Mean | 0.2905 | 0.0034
     | Std | 0.9888 | 0.0039
     | Best | 4.5370e+03 | 28.8627
    $F_6(x)$ | Mean | 0 | 0
     | Std | 0 | 0
     | Best | 0 | 0
    $F_7(x)$ | Mean | 0.02124 | 3.91e-04
     | Std | 0.02981 | 3.60e-04
     | Best | 0.03319 | 1.8710e-04

    Table 5.  Comparisons of results for multimodal functions.
    Function | Index | ASO | IASO
    $F_8(x)$ | Mean | -3887 | -6772.47
     | Std | 564.7 | 354.77
     | Best | -4245 | -6878.93
    $F_9(x)$ | Mean | 0 | 0
     | Std | 0 | 0
     | Best | 0 | 0
    $F_{10}(x)$ | Mean | 3.91e-09 | 8.63e-10
     | Std | 2.15e-09 | 2.68e-13
     | Best | 1.13e-09 | 7.257e-10
    $F_{11}(x)$ | Mean | 0 | 0
     | Std | 0 | 0
     | Best | 0 | 0
    $F_{12}(x)$ | Mean | 4.34e-23 | 3.69e-23
     | Std | 1.84e-22 | 1.51e-22
     | Best | 7.83e-24 | 6.53e-24
    $F_{13}(x)$ | Mean | 2.03e-22 | 2.33e-23
     | Std | 2.83e-22 | 3.12e-22
     | Best | 1.91e-23 | 1.90e-23

    Figure 3.  2D plots of functions $F_1(x)$–$F_7(x)$.
    Figure 4.  2D plots of functions $F_8(x)$–$F_{13}(x)$.
    Figure 5.  2D plots of functions $F_{14}(x)$–$F_{23}(x)$.
    Figure 6.  Convergence curves of the functions $F_1(x)$–$F_7(x)$.
    Figure 7.  Convergence curves of the functions $F_8(x)$–$F_{13}(x)$.
    Table 6.  Comparisons of results for low-dimensional functions.
    Function | Index | ASO | IASO
    $F_{14}(x)$ | Mean | 0.998004 | 0.998004
     | Std | 7.04e-16 | 4.25e-16
     | Best | 0.998004 | 0.998004
    $F_{15}(x)$ | Mean | 9.47e-04 | 4.69e-04
     | Std | 2.79e-04 | 1.81e-04
     | Best | 2.79e-04 | 1.45e-04
    $F_{16}(x)$ | Mean | -1.03163 | -1.03163
     | Std | 0 | 0
     | Best | -1.03163 | -1.03163
    $F_{17}(x)$ | Mean | 0.397887 | 0.397887
     | Std | 0 | 0
     | Best | 0.397887 | 0.397887
    $F_{18}(x)$ | Mean | 3 | 3
     | Std | 1.68e-14 | 1.65e-14
     | Best | 3 | 3
    $F_{19}(x)$ | Mean | -3.8627 | -3.8627
     | Std | 2.68e-15 | 2.53e-17
     | Best | -3.8627 | -3.8627
    $F_{20}(x)$ | Mean | -3.322 | -3.322
     | Std | 1.12e-08 | 8.95e-09
     | Best | -3.322 | -3.322
    $F_{21}(x)$ | Mean | -8.7744 | -9.4724
     | Std | 2.1867 | 1.3031
     | Best | -10.1532 | -10.1532
    $F_{22}(x)$ | Mean | -10.4029 | -10.4029
     | Std | 1.84e-15 | 1.76e-18
     | Best | -10.4029 | -10.4029
    $F_{23}(x)$ | Mean | -10.5364 | -10.5364
     | Std | 1.54e-15 | 1.62e-18
     | Best | -10.5364 | -10.5364

    Figure 8.  Convergence curves of the functions $F_{14}(x)$–$F_{23}(x)$.

    Assume that $N$ far-field narrowband signal sources impinge on a hydrophone array of $M$ $(M>N)$ vector sensors. The incident angles are $\Theta=[\Theta_1,\Theta_2,\ldots,\Theta_N]^T$, where $\Theta_n=(\theta_n,\alpha_n)^T$, $(\cdot)^T$ denotes transposition, $\theta_n$ is the horizontal azimuth angle of the $n$th incident signal and $\alpha_n$ is its elevation angle; the incident wavelength is $\lambda$, and the distance between adjacent elements is $d$. The signal received by the array can then be expressed in vector form as

    $Z(t)=A(\Theta)S(t)+N(t)$, (3.1)

    where $Z(t)$ is the $4M\times1$ received signal vector and $N(t)$ is the $4M\times1$ array noise vector. The noise is assumed to be Gaussian white noise, independent in time and space. $S(t)$ is the $N\times1$ signal source vector, and $A(\Theta)$ is the signal direction matrix of the vector hydrophone array,

    $A(\Theta)=[a(\Theta_1),a(\Theta_2),\ldots,a(\Theta_N)]=[a_1(\Theta_1)\otimes u_1,a_2(\Theta_2)\otimes u_2,\ldots,a_N(\Theta_N)\otimes u_N]$, (3.2)

    where $\otimes$ is the Kronecker product, $a_n(\Theta_n)=[1,e^{-j\beta_n},e^{-j2\beta_n},\ldots,e^{-j(M-1)\beta_n}]^T$ is the sound pressure steering vector of the $n$th signal, $u_n=[1,\cos\theta_n\sin\alpha_n,\sin\theta_n\sin\alpha_n,\cos\alpha_n]^T$ is the direction vector of the $n$th signal source, and $\beta_n=\frac{2\pi}{\lambda}d\cos\theta_n\sin\alpha_n$. The array covariance matrix of the received signal is then

    $R=E\left[Z(t)Z^H(t)\right]=AE\left[S(t)S^H(t)\right]A^H+E\left[N(t)N^H(t)\right]=AR_sA^H+\sigma^2I$, (3.3)

    where $R_s$ is the signal covariance matrix, $\sigma^2$ is the Gaussian white noise power, $I$ is the identity matrix, and $(\cdot)^H$ is the conjugate transpose. Assume that the signals and the array are in the same plane, that is, $\alpha_n=\frac{\pi}{2}$, so only $\theta_n$ is considered in this paper. In practice, the received data are finite, so the sample covariance matrix is

    $\hat{R}=\frac{1}{K}\sum_{k=1}^{K}Z(k)Z^H(k)$, (3.4)

    where K represents the number of snapshots.
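    As a sketch of the signal model in Eqs (3.1)–(3.4), the direction matrix and sample covariance can be formed as follows (Python; the half-wavelength spacing and the phase-sign convention are assumptions of this illustration, not specified by the paper):

```python
import numpy as np

def steering_matrix(theta, alpha, M, d_over_lambda=0.5):
    """A(Theta) of Eq (3.2) for an M-element uniform linear vector-hydrophone array."""
    cols = []
    for th, al in zip(np.atleast_1d(theta), np.atleast_1d(alpha)):
        beta = 2 * np.pi * d_over_lambda * np.cos(th) * np.sin(al)
        a = np.exp(-1j * beta * np.arange(M))                 # pressure phase vector a_n
        u = np.array([1.0, np.cos(th) * np.sin(al),
                      np.sin(th) * np.sin(al), np.cos(al)])   # direction vector u_n
        cols.append(np.kron(a, u))                            # a_n (x) u_n, length 4M
    return np.stack(cols, axis=1)                             # 4M x N matrix

def sample_covariance(Z):
    """R_hat of Eq (3.4); Z holds the K snapshots as a 4M x K array."""
    return Z @ Z.conj().T / Z.shape[1]
```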

    By uniformly and independently sampling the received signal, the joint probability density function of the sampled data can be obtained as follows

    $P(Z(1),Z(2),\ldots,Z(K))=\prod_{k=1}^{K}\frac{\exp\left(-\frac{1}{\sigma^2}\left\|Z(k)-A(\tilde{\theta})S(k)\right\|^2\right)}{\det(\pi\sigma^2I)}$, (3.5)

    where $\det(\cdot)$ denotes the determinant of a matrix, $\tilde{\theta}$ is the unknown signal direction to be estimated, and $P(\cdot)$ is a multidimensional nonlinear function of the unknown parameters $\tilde{\theta}$, $\sigma^2$ and $S$. Taking the logarithm of Eq (3.5) gives

    $-\ln P=K\ln\pi+3MK\ln\sigma^2+\frac{1}{\sigma^2}\sum_{k=1}^{K}\left\|Z(k)-A(\tilde{\theta})S(k)\right\|^2$, (3.6)

    In Eq (3.6), taking the partial derivative with respect to $\sigma^2$ and setting it to zero gives $\sigma^2=\frac{1}{4M}\mathrm{tr}\{P_A^{\perp}\hat{R}\}$, where $\mathrm{tr}\{\cdot\}$ is the trace of a matrix, $P_A^{\perp}$ is the orthogonal projection matrix of $A$, $\hat{S}=A^{+}Z$, and $A^{+}=(A^HA)^{-1}A^H$ is the pseudo-inverse of $A$. Substituting $\sigma^2$ and $\hat{S}$ into Eq (3.6) yields

    $\hat{\theta}=\arg\max_{\tilde{\theta}}g(\tilde{\theta})$, (3.7)

    where $g(\tilde{\theta})$ is the likelihood function, which can be expressed as

    $g(\tilde{\theta})=\mathrm{tr}\left\{\left[A(\tilde{\theta})\left(A^H(\tilde{\theta})A(\tilde{\theta})\right)^{-1}A^H(\tilde{\theta})\right]\hat{R}\right\}$. (3.8)

    $\hat{\theta}$ is the estimated DOA of the signal. Maximizing the likelihood function $g(\tilde{\theta})$ yields the set of solutions corresponding to this maximum, which is the estimated angle sought. In order to compare the convergence of different methods, the following fitness function is defined:

    $f(\tilde{\theta})=\left|g(\tilde{\theta})-g(\theta)\right|$, (3.9)

    where $\theta$ is the known signal direction in Eq (3.1) and $g(\theta)=\mathrm{tr}\left\{\left[A(\theta)\left(A^H(\theta)A(\theta)\right)^{-1}A^H(\theta)\right]\hat{R}\right\}$. Eq (3.7) can thus be expressed as

    $\hat{\theta}=\arg\min_{\tilde{\theta}}f(\tilde{\theta})$, (3.10)

    and the closer $f(\tilde{\theta})$ is to 0, the more accurate the estimated angle.
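    In code, the likelihood $g$ of Eq (3.8) and the fitness $f$ of Eq (3.9) are direct transcriptions; the sketch below reuses the hypothetical `steering_matrix` helper introduced earlier and is an illustration, not the authors' implementation:

```python
import numpy as np

def ml_spectrum(A, R_hat):
    """g of Eq (3.8): tr{ A (A^H A)^{-1} A^H R_hat }."""
    P_A = A @ np.linalg.solve(A.conj().T @ A, A.conj().T)  # projection onto span(A)
    return np.real(np.trace(P_A @ R_hat))

def ml_fitness(theta_try, theta_true, alpha, M, R_hat):
    """f of Eq (3.9): |g(theta_try) - g(theta_true)|."""
    g_try  = ml_spectrum(steering_matrix(theta_try,  alpha, M), R_hat)
    g_true = ml_spectrum(steering_matrix(theta_true, alpha, M), R_hat)
    return abs(g_try - g_true)
```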

    The initial position is expressed as $\theta_i=[\theta_i^1,\theta_i^2,\ldots,\theta_i^d]$. Taking Eq (3.9) from ML DOA as the fitness function $Fit_i(t)$ in IASO, the fitness function $Fit_i(t)$ of Eq (2.3) in Section 2 is replaced by $f(\tilde{\theta})$. The geometric binding force of Eq (2.15) then becomes $G_i^d(t)=\lambda(t)\left(\theta_{best}^d(t)-\theta_i^d(t)\right)$; the acceleration is changed from Eq (2.17) to

    $a_i^d(t)=\frac{F_i^d(t)+G_i^d(t)}{m_i(t)}=\alpha\left(1-\frac{t-1}{T}\right)^{3}e^{-\frac{20t}{T}}\sum_{j\in KBest}\frac{rand_j\left[2\left(h_{ij}(t)\right)^{-13}-\left(h_{ij}(t)\right)^{-7}\right]}{m_i(t)}\cdot\frac{\theta_j^d(t)-\theta_i^d(t)}{\left\|\theta_i(t),\theta_j(t)\right\|_2}+\beta e^{-\frac{20t}{T}}\cdot\frac{\theta_{best}^d(t)-\theta_i^d(t)}{m_i(t)}$, (4.1)

    the velocity update is changed from Eq (2.18) to

    $v_i^d(t+1)=w\times rand_i^d\,v_i^d(t)+c_1\times rand_i^d\,a_i^d(t)+c_2\times rand_i^d\left(\theta_{best}^d(t)-\theta_i^d(t)\right)$, (4.2)

    and the position update is changed from Eq (2.19) to

    $\theta_i^d(t+1)=\theta_i^d(t)+v_i^d(t+1)$. (4.3)
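    Putting the pieces together, one possible end-to-end run looks as follows (reusing the `iaso`, `steering_matrix`, `sample_covariance` and `ml_fitness` sketches above; the synthetic sources and noise level are assumptions of this illustration, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 10, 300                                   # 10 vector sensors, 300 snapshots
alpha = np.full(2, np.pi / 2)                    # sources in the array plane
theta_true = np.deg2rad([30.0, 60.0])
A = steering_matrix(theta_true, alpha, M)
S = (rng.standard_normal((2, K)) + 1j * rng.standard_normal((2, K))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((4 * M, K)) + 1j * rng.standard_normal((4 * M, K)))
R_hat = sample_covariance(A @ S + noise)         # Eq (3.4)
theta_hat, _ = iaso(lambda th: ml_fitness(th, theta_true, alpha, M, R_hat),
                    lb=[0.0, 0.0], ub=[np.pi, np.pi], n_atoms=30, T=200)
print(np.rad2deg(np.sort(theta_hat)))            # expected near [30, 60]
```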

    In this part, we demonstrate the simulation results for the iterative process and convergence performance of IASO, and then compare the ML DOA estimation performance of IASO with that of ASO, SCA, GA and PSO. In the experiments, the receiving array is a uniform linear array composed of 10 acoustic vector sensors, the number of snapshots is 300, and the added noise is Gaussian white noise.

    In the simulation, over 100 independent Monte Carlo experiments, the population size is 30, the maximum number of iterations is 200, the signal-to-noise ratio is 10 dB, and the search range is $[0^\circ,180^\circ]$. Taking one source $\theta=[30^\circ]$ and two sources $\theta=[30^\circ,60^\circ]$, respectively, the minimization curve of the fitness value is obtained. Comparing IASO with ASO, SCA, GA and PSO, Table 7 lists the parameters of the five algorithms and Figure 9 shows the fitness convergence curves.

    Table 7.  Parameter values of different algorithms for the ML DOA estimator.
    Name of Parameter | IASO | ASO | SCA | GA | PSO
    Problem dimension | 2 (3, 4) | 2 (3, 4) | 2 (3, 4) | 2 (3, 4) | 2 (3, 4)
    Population size | 30 | 30 | 30 | 30 | 30
    Maximum number of iterations | 200 | 200 | 200 | 200 | 200
    Initial search area | $[0^\circ,180^\circ]$ | $[0^\circ,180^\circ]$ | $[0^\circ,180^\circ]$ | $[0^\circ,180^\circ]$ | $[0^\circ,180^\circ]$
    Depth weight | 50 | 50 | - | - | -
    Multiplier weight | 0.2 | 0.2 | - | - | -
    Lower limit of repulsion | 1.1 | 1.1 | - | - | -
    Upper limit of attraction | 1.24 | 1.24 | - | - | -
    Learning factor $c_1$ | -2.5000e-04 | - | - | - | -
    Learning factor $c_2$ | 1.003 | - | - | - | -
    Inertia weight $w$ | 0.8975 | - | - | - | -
    Crossover Fraction | - | - | - | 0.8 | -
    Migration Fraction | - | - | - | 0.2 | -
    Cognitive Constants | - | - | - | - | 1.25
    Social Constants | - | - | - | - | 0.5
    Inertial Weight | - | - | - | - | 0.9

    Figure 9.  The curves of fitness function for ML DOA estimator with IASO, ASO, SCA, GA and PSO at SNR = 10dB, when the number of signal sources is 1, 2, and 3, respectively.

    Figure 9 shows the fitness variation curves of the ML DOA estimators based on IASO, ASO, SCA, GA and PSO for 1, 2 and 3 signal sources over 200 iterations. As can be seen from the figure, regardless of whether the number of signal sources is 1, 2 or 3, IASO has the fastest convergence speed. When the number of signal sources is 1, IASO converges fastest, followed by ASO; in comparison, PSO, SCA and GA have a large convergence range and high fitness values, which indicates that they easily fall into local optima. When the number of signal sources is 2, IASO again converges best. When there are 3 signal sources, IASO still remains the best, followed by ASO, while SCA, GA and PSO not only converge slowly but also easily fall into local optima; even after 200 iterations, their fitness functions cannot converge to 0.

    In order to compare the statistical performance of the different algorithms and their relationship to the Cramér-Rao bound (CRB), we evaluated the estimation accuracy using the root mean square error (RMSE). The population size is 30 and the maximum number of iterations is 200.

    $\mathrm{RMSE}=\sqrt{\frac{1}{NL}\sum_{l=1}^{L}\sum_{i=1}^{N}\left[\hat{\theta}_i(l)-\theta_i\right]^2}$, (4.4)

    where $L$ is the number of experiments, $\theta_i$ is the DOA of the $i$th signal, $N$ is the number of signals, and $\hat{\theta}_i(l)$ denotes the estimate of the $i$th DOA obtained in the $l$th experiment.
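    Eq (4.4) translates directly into code (a sketch; the $L\times N$ layout of the per-run estimates is an assumption of this edit):

```python
import numpy as np

def rmse(theta_hat, theta_true):
    """RMSE of Eq (4.4): theta_hat is an (L, N) array of per-run estimates,
    theta_true an (N,) vector of true DOAs."""
    L, N = theta_hat.shape
    return np.sqrt(np.sum((theta_hat - np.asarray(theta_true)) ** 2) / (N * L))
```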

    Figure 10 shows the RMSE of the ML DOA estimators for the five algorithms IASO, ASO, SCA, GA and PSO when the number of signal sources is 1, 2 and 3, with the signal-to-noise ratio varying from -20 dB to 20 dB. It can be seen that the performance of IASO is more stable regardless of the number of sources and is closer to the CRB. When the number of signal sources is 3, the estimation performance of all the algorithms decreases, but IASO-based DOA estimation is still closest to the CRB, followed by ASO, GA, PSO and SCA. SCA performs well at low signal-to-noise ratios but poorly at high signal-to-noise ratios. The ML DOA estimation performance of PSO and GA is poor, and their fitness functions have difficulty converging to the global optimum; even after 200 iterations, a large RMSE remains.

    Figure 10.  CRB and RMSE curve of ML DOA estimator with IASO, ASO, SCA, GA and PSO as SNR changes from -20dB to 20dB, when the number of signal sources is 1, 2 and 3, respectively.

    Population size is one of the most important parameters in evolutionary algorithms. In general, the estimation accuracy of intelligent algorithms improves as the population size increases; however, a larger population also increases the computational load. For ML DOA problems, the population size determines the number of likelihood function evaluations in each iteration. This highlights the need for an algorithm with a small population size and high estimation accuracy.

    Figure 11 shows the RMSE curves of the ML DOA estimators based on IASO, ASO, SCA, GA and PSO as the population size ranges from 10 to 100. As can be seen from the figure, IASO maintains a low RMSE and high estimation accuracy close to the CRB regardless of the number of signal sources. With one signal source, the RMSE of IASO, ASO, SCA and PSO is similar, while GA keeps a relatively high RMSE when the population is smaller than 50. With two signal sources, IASO maintains a lower RMSE, ASO is somewhat unstable, and PSO and GA still keep a higher RMSE. With three signal sources, only IASO achieves a low RMSE, while ASO, PSO, SCA and GA all have higher RMSE. This shows that even with a population size of 100, GA and PSO need not only a large population but also a large number of iterations. For ML DOA estimation with 1, 2 or 3 signal sources, the IASO algorithm can accurately estimate the DOA with a smaller population, requiring less computational effort.

    Figure 11.  CRB and RMSE curve of ML DOA estimator with IASO, ASO, SCA, GA and PSO as population size changes from 10 to 100, when the number of signal sources is 1, 2 and 3, respectively.

    In addition to the convergence and statistical performance described above, the quality of an algorithm can also be judged by its computational complexity. The computational complexity is independent of the number of signal sources; rather, it is related to the maximum number of iterations and the population size. Figure 12 illustrates the average number of iterations required by each algorithm under the stopping criterion of Eq (2.20) with accuracy $10^{-6}$ and a population size of 30.

    Figure 12 shows the average iteration count curves of the different algorithms over 100 independent Monte Carlo experiments. It can be seen that the IASO algorithm needs the fewest iterations as the signal-to-noise ratio ranges from -20 dB to 20 dB with a maximum of 200 iterations, followed by ASO, while PSO, GA and SCA require at least 100 iterations. The number of IASO iterations is significantly lower than that of the other algorithms. In general, IASO needs the fewest iterations on the average curves over both signal-to-noise ratio and population size, whereas ASO, SCA, GA and PSO still need more iterations to find the optimal solution as the signal-to-noise ratio and population size vary. As a result, IASO has the lowest computational load.

    Figure 12.  The average iteration number of different algorithms with 3 signal sources as SNR changes from -20dB to 20dB, the population size changes from 10 to 100, respectively.

    This paper proposes an improved ASO and employs 23 test functions to compare IASO with ASO, finding that IASO overcomes the shortcomings of ASO, namely its tendency to fall into local optima and its poor convergence performance. ML DOA estimation is in theory a high-resolution optimization method, but its huge computational burden hinders practical application. In this paper, IASO is applied to ML DOA estimation and simulation experiments are carried out. The results show that, compared with the ASO, SCA, GA and PSO methods, the proposed IASO-based ML DOA estimator has faster convergence speed, lower RMSE and lower computational complexity.

    This research was funded by the National Natural Science Foundation of China (Grant Nos. 61774137, 51875535 and 61927807), the Key Research and Development Foundation of Shanxi Province (Grant No. 201903D121156), and the Shanxi Scholarship Council of China (Grant Nos. 2020-104 and 2021-108). The authors express their sincere thanks to the anonymous referees for many valuable comments and suggestions.

    The authors declare that they have no conflict of interest.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
