Research article

On the Caginalp phase-field system based on the Cattaneo law with nonlinear coupling

  • We focus in this paper on a Caginalp phase-field system based on the Cattaneo law with nonlinear coupling. We start our analysis by establishing existence, uniqueness and regularity based on Moser’s iterations. We finish with the study of the spatial behavior of the solutions in a semi-infinite cylinder, assuming the existence of such solutions.

    Citation: Armel Andami Ovono, Alain Miranville. On the Caginalp phase-field system based on the Cattaneo law with nonlinear coupling[J]. AIMS Mathematics, 2016, 1(1): 24-42. doi: 10.3934/Math.2016.1.24



    Intelligent optimization algorithms are increasingly popular in the field of intelligent computing and are widely applied in many other fields, including engineering, medicine, ecology and environment, marine engineering, and so forth. Classical intelligent optimization algorithms include the particle swarm optimization (PSO) algorithm [1], the genetic algorithm (GA) [2], simulated annealing (SA) [3], etc. PSO is a swarm intelligence optimization algorithm that simulates bird foraging: each bird is treated as a particle, and the particles follow the current optimal particle to find the optimal solution in the solution space. GA is a computational model based on Darwin's theory of evolution and Mendelian genetics that simulates biological evolution. Genetic algorithms obtain optimal solutions through three basic operations: chromosomal selection, crossover and mutation. SA was the first nature-inspired algorithm proposed to simulate the high-temperature annealing of metallic materials: when a metal is heated to a high temperature and then cooled slowly, its particles eventually reach an equilibrium state and solidify into a crystal of minimal energy. Many scholars have also developed bionic intelligent optimization algorithms based on these classical algorithms, such as the Sine Cosine Algorithm (SCA) [4], the Artificial Bee Colony (ABC) algorithm [5], the Bat Algorithm (BA) [6], the Bee Evolutionary Genetic Algorithm (BEGA) [7], the Squirrel Search Algorithm (SSA) [8], the Atom Search Optimization (ASO) algorithm [9], etc.

    The Atom Search Optimization (ASO) algorithm is an intelligent optimization algorithm based on molecular dynamics that was proposed in 2019. ASO models the geometric binding forces and the interaction forces between atoms, following the laws of classical mechanics [10,11]. The interaction forces derive from the Lennard-Jones (LJ) potential [12,13], and the binding forces from the covalent bonds among atoms. In ASO, atoms represent solutions in the search space: the larger the atomic mass, the better the solution, and vice versa. Compared with traditional intelligent algorithms, ASO requires fewer physical parameters and can achieve better performance. As a result, it has been widely used in various fields. Zhang et al. applied ASO to hydrogeological parameter estimation [9] and to the calculation of groundwater dispersion coefficients [14]. Ahmed et al. utilized ASO in fuel cell modeling and successfully built an accurate model; simulations showed that it was as good as commercial proton exchange membrane (PEM) fuel cells [15]. Mohammed et al. used ASO to reduce the peak sidelobe level of the beam pattern [16]. Saeid combined ASO with the Tree-Seed Algorithm (TSA) to enhance its exploration performance [17]. Ghosh et al. proposed an improved ASO based on binary variables and combined it with the simulated annealing (SA) technique [18]. Elaziz et al. proposed an automatic clustering algorithm combining ASO and SCA to automatically find the best number of clusters and the corresponding positions [19]. Sun et al. applied an improved ASO to engineering design [20].

    Since ASO is prone to finding only locally optimal solutions with low accuracy, an Improved Atom Search Optimization (IASO) algorithm based on particle velocity updating is proposed in this paper. IASO follows the same principle as ASO, but its velocity update is modified to improve the convergence speed of the algorithm, avoid local optima, and allow a more extensive search for optimal solutions. IASO adopts the particle velocity update idea of PSO and introduces an inertia weight $ w $ to improve the performance of ASO. In addition, the learning factors $ c_{1} $ and $ c_{2} $ are added to IASO, which not only ensure the convergence of the algorithm but also accelerate its convergence speed, effectively solving the problem that the original ASO tends to find only locally optimal solutions.

    Array signal processing is one of the important research directions of signal processing and has been developing rapidly in recent years. DOA estimation of signals is an ongoing research hotspot in the field of array signal processing, with great potential for hydrophone applications. Hydrophones are generally divided into scalar hydrophones and vector hydrophones. Because scalar hydrophones can only measure scalar parameters of the sound field, many scholars have turned to the study of vector hydrophones. Xu et al. used alternating iterative weighted least squares to deal with the off-grid problem in sparsity-based DOA estimation for acoustic vector hydrophone arrays [21]. Amiri designed a micro-electro-mechanical system (MEMS) bionic vector hydrophone with a piezoelectric gated metal oxide semiconductor field-effect transistor (MOSFET) [22]. More and more scholars have been doing research on vector arrays. Song et al. studied the measurement results of an acoustic vector sensor array and proposed a new method, using the time-frequency spatial information of the vector sensor array signal, to obtain better DOA estimation performance in noisy and coherent environments [23]. Gao et al. combined elevation, azimuth and polarization for the estimation of electromagnetic vector sensor arrays based on nested tensor modeling [24]. Baron et al. optimised, conceptualised and evaluated a hydrophone array for sound source localisation validation in deep-sea mining [25], and Wang et al. proposed an iterative sparse covariance matrix fitting direction estimation method based on a vector hydrophone array [26]. In recent years, some scholars have applied compressed sensing to signal DOA estimation. Keyvan et al. proposed the three-dimensional orthogonal matching pursuit (3D-OMP) algorithm and the three-dimensional focused orthogonal matching pursuit (3D-FOMP) algorithm to obtain better estimation performance in low signal-to-noise-ratio and multi-source environments, and to solve the problem that conventional DOA estimation algorithms cannot distinguish between two adjacent sources [27]. Keyvan et al. also designed a new hybrid nonuniform linear array consisting of two uniform linear subarrays and proposed a new DOA estimation method based on the OMP algorithm. This algorithm has lower computational complexity and higher accuracy than the FOMP algorithm; it can distinguish adjacent signal sources more accurately and solves the phase ambiguity problem [28].

    With the development of vector hydrophones, direction of arrival estimation of vector hydrophone signals has increasingly wide applications and is of great importance for the functional extension of sonar devices [29]. Many useful estimation methods have been proposed, such as multiple signal classification (MUSIC) [30], estimation of signal parameters via rotational invariance techniques (ESPRIT) [31] and maximum likelihood (ML) [32].

    In 1988, Ziskind and Wax applied ML estimation to DOA and achieved ideal results [33]. Compared with MUSIC and ESPRIT, the ML estimation method is more effective and stable, especially at low signal-to-noise ratios (SNR) or with few snapshots. However, in ML DOA estimation, maximizing the likelihood function is a multidimensional nonlinear extremum problem that requires a multidimensional search for the global extremum, which increases the computational burden.

    Many scholars have combined various methods with ML to improve the estimation performance of DOA. Zhang et al. proposed a sparse iterative covariance-based estimation method and combined it with ML to improve its performance, although its resolution and stability are not high [34]. Hu et al. proposed multi-source DOA estimation based on ML in the spherical harmonic domain [35]. Ji analyzed the asymptotic performance of ML DOA estimation [36]. Selva proposed an effective method to calculate the ML DOA estimate when the sensor noise power is unknown [37]. Yoon et al. optimized the sequence and branch lengths of taxa in phylogenetic trees using the maximum likelihood method [38]. Vishnu proposed a line spread function (LSF)-based sinusoidal frequency estimation algorithm to improve the performance of ML DOA [39].

    In response to the complexity of the ML DOA estimation problem, some scholars have used intelligent optimization algorithms to optimize ML DOA and achieved better performance. Li et al. applied the genetic algorithm to ML DOA estimation for the first time, but the genetic algorithm is prone to problems such as premature convergence [40]. Sharma et al. applied the PSO algorithm to ML DOA estimation, but drawbacks remain in the estimation of multiple direction angles because the PSO algorithm converges slowly and tends to fall into locally optimal solutions [41]. Zhang et al. combined the artificial bee colony algorithm with ML DOA estimation to reduce the computational complexity of calculating ML functions [5]. Feng et al. combined the bat algorithm with ML to optimize the multidimensional nonlinear estimation of spectral functions [6]. Fan et al. applied the Improved Bee Evolutionary Genetic Algorithm (IBEGA) to ML DOA estimation [7]. Wang et al. used an improved squirrel search algorithm (ISSA) in ML DOA estimation, which reduced computational complexity and enhanced the simulation results [42], and Li et al. proposed a method that limits the search space of particle swarm optimization [43].

    As calculating the likelihood function for maximum likelihood DOA estimation is a multidimensional nonlinear extremum problem, a multidimensional search for the global extremum is required, which demands extensive computation. To solve this problem, the proposed IASO is applied to ML DOA estimation. Simulation results show that the combination of IASO and ML DOA estimation significantly reduces the computational complexity of the multidimensional nonlinear optimization in ML estimation.

    The main structure of this article is as follows: Section 2 presents the improved ASO and compares the convergence performance of ASO and IASO on 23 benchmark functions; Section 3 gives the data model and ML estimation; Section 4 combines IASO with ML DOA, providing simulation results to validate the convergence and statistical performance of IASO ML estimation and comparing it with ASO, PSO, GA and SCA combined with ML DOA estimation separately; Section 5 concludes the paper.

    The Atom Search Optimization (ASO) algorithm was proposed by Zhao et al. in 2019. It is a physics-inspired algorithm derived from molecular dynamics. The algorithm is simple to implement, has few parameters and good convergence performance, and thus it has been used to solve a variety of optimization problems.

    The ASO algorithm is based on the interaction forces between atoms and geometric constraints, and the quality of each atom's position in the search space is measured by its mass: the heavier the atom, the better the solution. The search and optimisation process is driven by the mutual repulsion or attraction of atoms, depending on the distance between them. Lighter atoms accelerate toward heavier atoms, which widens the search area and performs a broad exploration; heavier atoms accelerate less, making their search more concentrated around better solutions. Suppose a group consists of $ N $ atoms and the position of the $ i^{th} $ atom is $ X_i = [ x_{i}^1, x_{i}^2, \cdots, x_{i}^d] $; according to Newton's second law

    $ F_{i}+G_{i} = m_{i}a_{i}, $
    (2.1)

    where $ F_{i} $ is the total interaction force acting on atom $ i $, $ G_{i} $ is the geometric binding force acting on atom $ i $, and $ m_{i} $ is the mass of atom $ i $.

    The general unconstrained optimization problem can be defined as

    $ \min f(x), \quad x = \left[x^1, x^2, \cdots, x^D\right], $
    (2.2)
    $ Lb\leq x\leq Ub, $
    $ Lb = \left[lb^1,\cdots,lb^D\right], \quad Ub = \left[ub^1,\cdots,ub^D\right], $

    where $ x^d \ (d = 1, 2, \cdots, D) $ is the $ d^{th} $ component of the search space, $ Lb $ is the lower limit, $ Ub $ is the upper limit, and $ D $ is the dimension of the search space.

    The fitness value $ Fit_{i}(t) $ of the position of each atom is calculated according to the user-defined fitness function. The mass of each atom is then derived from its fitness via Eqs (2.3) and (2.4).

    $ M_{i}(t) = e^{-\frac{Fit_{i}(t)-Fit_{best}(t)}{Fit_{worst}(t)-Fit_{best}(t)}}, $
    (2.3)
    $ m_{i}(t) = \frac{M_{i}(t)}{\sum\limits_{j = 1}^{N}M_{j}(t)}, $
    (2.4)

    where $ Fit_{best}(t) $ and $ Fit_{worst}(t) $ are the best and worst fitness values of the atoms at the $ t^{th} $ iteration, $ Fit_{i}(t) $ is the fitness value of atom $ i $ at the $ t^{th} $ iteration, and the expressions of $ Fit_{best}(t) $ and $ Fit_{worst}(t) $ are as follows

    $ Fit_{best}(t) = \min\limits_{i\in\{1, 2, 3, \cdots, N\}}Fit_{i}(t), $
    (2.5)
    $ Fit_{worst}(t) = \max\limits_{i\in\{1, 2, 3, \cdots, N\}}Fit_{i}(t). $
    (2.6)
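
    As an illustration only, here is a minimal NumPy sketch of the mass computation of Eqs (2.3)-(2.6) for a minimization problem; the helper name atom_masses and the small epsilon guard against division by zero are our additions, not part of the original description.

    import numpy as np

    def atom_masses(fit):
        # Eqs (2.5)-(2.6): best and worst fitness of the current population
        fit = np.asarray(fit, dtype=float)
        best, worst = fit.min(), fit.max()
        # Eq (2.3): exponential rescaling (the 1e-30 avoids 0/0 when all atoms are equal)
        M = np.exp(-(fit - best) / (worst - best + 1e-30))
        # Eq (2.4): normalize so that the masses sum to 1
        return M / M.sum()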

    The interaction force between atoms, derived from the LJ potential and refined as in [9,11], is

    $ F_{ij}(t) = -\eta(t)\left[2\left(h_{ij}(t)\right)^{-13}-\left(h_{ij}(t)\right)^{-7}\right], $
    (2.7)
    $ \eta(t) = \alpha\left(1-\frac{t-1}{T}\right)^3e^{-\frac{20t}{T}}, $
    (2.8)

    where $ \eta(t) $ is the depth function that adjusts the repulsive and attractive forces, $ \alpha $ is the depth weight, $ T $ is the maximum number of iterations, and $ t $ is the current iteration. Figure 1 shows the behavior of the function $ F $ for different values of $ \eta $, with $ h $ ranging from 0.9 to 2. It can be seen that when $ h $ runs from 0.9 to 1.12 the force is repulsive; when $ h $ runs from 1.12 to 2 it is attractive; and when $ h = 1.12 $ the atoms are in equilibrium. Therefore, in order to improve exploration in ASO, the lower limit of $ h $ (repulsion, with a smaller function value) is set to $ h = 1.1 $ and the upper limit (attraction) to 1.24.

    $ h_{ij}(t) = \begin{cases} h_{min}, & \frac{r_{ij}(t)}{\sigma(t)} < h_{min}, \\ \frac{r_{ij}(t)}{\sigma(t)}, & h_{min}\leq\frac{r_{ij}(t)}{\sigma(t)}\leq h_{max}, \\ h_{max}, & \frac{r_{ij}(t)}{\sigma(t)} > h_{max}, \end{cases} $
    (2.9)
    Figure 1.  Function behaviors of $ F $ with different values of $ \eta $.

    where $ h_{ij}(t) $ is the distance ratio function, $ h_{min} = 1.1 $ and $ h_{max} = 1.24 $ are the lower and upper limits of $ h $, $ r_{ij} $ is the Euclidean distance between atom $ i $ and atom $ j $, and $ \sigma(t) $ is defined as follows

    $ \sigma(t) = \left\|x_{ij}(t), \frac{\sum_{j\in KBest}x_{ij}(t)}{K(t)}\right\|_{2}, $
    (2.10)

    where $ x_{ij} $ is the $ j^{th} $ component of the position of atom $ i $ in the search space, $ \|\cdot\|_{2} $ denotes the 2-norm, and KBest is the subset of the atom population composed of the first $ K $ atoms with the best fitness values.

    $ K(t) = N-(N-2)\times\frac{t}{T}, $
    (2.11)
    $ \begin{cases} h_{min} = g_{0}+g(t), \\ h_{max} = u, \end{cases} $
    (2.12)

    where $ g_{0} $ is a drift factor, which can shift the algorithm from exploration to exploitation, and $ u $ is the upper limit, with

    $ g(t) = 0.1\times\sin\left(\frac{\pi}{2}\times\frac{t}{T}\right). $
    (2.13)
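
    Continuing the sketch above, Eqs (2.9), (2.11) and (2.13) transcribe directly; the function names are ours, and the defaults follow the limits $ h_{min} = 1.1 $ and $ h_{max} = 1.24 $ quoted in the text.

    def h_ratio(r_ij, sigma, h_min=1.1, h_max=1.24):
        # Eq (2.9): clamp the distance ratio r_ij / sigma into [h_min, h_max]
        return np.clip(r_ij / sigma, h_min, h_max)

    def k_best_size(N, t, T):
        # Eq (2.11): the number of best atoms shrinks from N toward 2 over the run
        return max(2, int(N - (N - 2) * t / T))

    def drift(t, T):
        # Eq (2.13): drift term used in h_min = g0 + g(t) of Eq (2.12)
        return 0.1 * np.sin(np.pi / 2 * t / T)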

    Therefore, the total force acting on atom $ i $ from the other atoms can be expressed as the randomly weighted sum

    $ F_{i}^d(t) = \sum\limits_{j\in KBest}rand_{j}F_{ij}^d(t), $
    (2.14)

    The geometric binding force also plays an important role in ASO. Assume that each atom has a covalent bond with each atom in KBest and is therefore bound by KBest. Figure 2 shows the effect of the atomic interactions: $ A_{1}, A_{2}, A_{3} $ and $ A_{4} $ are the atoms with the best fitness values, forming KBest. Within KBest, $ A_{1}, A_{2}, A_{3} $ and $ A_{4} $ attract or repel each other, and each of $ A_{5}, A_{6}, A_{7} $ is attracted or repelled by every atom in KBest. Each atom in the population is also bound by the optimal atom $ A_{1} $ ($ X_{best} $); the binding force on atom $ i $ is

    $ G_{i}^d(t) = \lambda(t)\left(x_{best}^d(t)-x_{i}^d(t)\right), $
    (2.15)
    $ \lambda(t) = \beta e^{-\frac{20t}{T}}, $
    (2.16)
    Figure 2.  Forces of an atom system with KBest for $ K = 5 $.

    where $ x_{best}^d(t) $ is the position of the best atom at the $ t^{th} $ iteration, $ \beta $ is the multiplier weight, and $ \lambda(t) $ is the Lagrange multiplier.

    Under the action of the geometric constraint force and the interaction force, the acceleration of the $ i^{th} $ atom at iteration $ t $ is

    $ a_{i}^d(t) = \frac{F_{i}^d(t)+G_{i}^d(t)}{m_{i}^d(t)} = -\alpha\left(1-\frac{t-1}{T}\right)^3e^{-\frac{20t}{T}}\sum\limits_{j\in KBest}\frac{rand_{j}\left[2\times\left(h_{ij}(t)\right)^{-13}-\left(h_{ij}(t)\right)^{-7}\right]}{m_{i}(t)}\frac{\left(x_{j}^d(t)-x_{i}^d(t)\right)}{\left\|x_{i}(t), x_{j}(t)\right\|_{2}}+\beta e^{-\frac{20t}{T}}\frac{x_{best}^d(t)-x_{i}^d(t)}{m_{i}(t)}. $
    (2.17)
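
    The following sketch of Eq (2.17) reuses atom_masses, h_ratio and k_best_size from the snippets above. The length scale sigma is taken as the distance from atom $ i $ to the mean of the KBest positions, which is one reading of the notation of Eq (2.10), and the default alpha and beta follow the depth weight and multiplier weight of Table 7; both choices are ours.

    def acceleration(X, fit, i, t, T, alpha=50.0, beta=0.2):
        # X: (N, D) atom positions; fit: (N,) fitness values (smaller is better)
        N, D = X.shape
        m = atom_masses(fit)
        kbest = np.argsort(fit)[:k_best_size(N, t, T)]                # the K best atoms
        x_best = X[np.argmin(fit)]
        sigma = np.linalg.norm(X[kbest].mean(axis=0) - X[i]) + 1e-30  # cf. Eq (2.10)
        eta = alpha * (1 - (t - 1) / T) ** 3 * np.exp(-20 * t / T)    # Eq (2.8)
        F = np.zeros(D)
        for j in kbest:
            r = np.linalg.norm(X[j] - X[i]) + 1e-30
            h = h_ratio(r, sigma)                                     # Eq (2.9)
            F += np.random.rand() * (2 * h ** -13 - h ** -7) * (X[j] - X[i]) / r
        G = beta * np.exp(-20 * t / T) * (x_best - X[i])              # Eqs (2.15)-(2.16)
        return (-eta * F + G) / (m[i] + 1e-30)                        # Eq (2.17)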

    In the original ASO, the algorithm was found to be prone to local optima. Therefore, changes are made in the iterative update of the velocity, allowing the algorithm to escape local optima and to search more broadly. The particle velocity update of PSO is adopted, and an inertia weight $ w $ is introduced into the original ASO velocity update, so that the algorithm is less prone to local optima at the start of the run, improving the performance of the IASO algorithm. The addition of the learning factors $ c_{1} $ and $ c_{2} $ not only ensures the convergence performance but also speeds up convergence, effectively solving the problem that the original ASO tends to fall into local optima.

    $ w = 0.9-0.5\times\left(\frac{t}{T}\right), c_{1} = -10\times\left(\frac{t}{T}\right)^2, $
    $ c_{2} = 1-\left(-10\times\left(\frac{t}{T}\right)^2\right), $
    $ v_{i}^d(t+1) = w\times rand_{i}^d v_{i}^d(t)+c_{1}\times rand_{i}^d a_{i}^d(t)+c_{2}\times rand_{i}^d\left(x_{best}^d(t)-x_{i}^d(t)\right). $
    (2.18)

    At the $ (t+1)^{th} $ iteration, the position updating of the $ i^{th} $ atom can be expressed as

    $ x_{i}^d(t+1) = x_{i}^d(t)+v_{i}^d(t+1). $
    (2.19)
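
    A vectorized sketch of the IASO update of Eqs (2.18) and (2.19), with $ w $, $ c_{1} $ and $ c_{2} $ scheduled as above; applying the update to the whole population at once is our implementation choice.

    def iaso_update(X, V, A, x_best, t, T):
        w = 0.9 - 0.5 * (t / T)            # inertia weight
        c1 = -10 * (t / T) ** 2            # learning factor c1
        c2 = 1 - c1                        # learning factor c2
        r1, r2, r3 = np.random.rand(3, *X.shape)
        V = w * r1 * V + c1 * r2 * A + c2 * r3 * (x_best - X)   # Eq (2.18)
        return X + V, V                                         # Eq (2.19)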

    The maximum number of iterations, convergence normalisation, maximum running time and the accuracy of the fitness value are commonly used convergence criteria. In this paper, the maximum number of iterations and convergence normalisation are used as criteria for stopping the iterations. The maximum number of iterations is 200, and the convergence normalisation criterion is as follows

    $ D = \sum\limits_{i = 1}^{n}\left(Fit_{i}-\overline{Fit}\right)^2 < \varepsilon, $
    (2.20)

    where $ Fit_i $ is the fitness value of the $ i^{th} $ atom, $ \overline{Fit} $ is the average fitness value of the population, and the accuracy $ \varepsilon $ is typically taken as $ 10^{-6} $.
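
    The stopping test of Eq (2.20) transcribes directly:

    def converged(fit, eps=1e-6):
        # Eq (2.20): stop when the spread of the population fitness falls below eps
        fit = np.asarray(fit, dtype=float)
        return np.sum((fit - fit.mean()) ** 2) < eps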

    Thus, by iterating the above operations, the optimal solution can eventually be found accurately.

    Table 1.  Unimodal test functions $ F_1(x)-F_7(x) $.
    Name Function n Range Optimum
    Sphere $ F_{1}(x)=\sum\limits_{i=1}^{n} x_{i}^{2} $ 30 $ [-100,100]^{n} $ 0
    Schwefel 2.22 $ F_{2}(x)=\sum\limits_{i=1}^{n}\mid x_{i}\mid +\prod\limits_{i=1}^{n}\mid x_{i}\mid $ 30 $ [-100,100]^{n} $ 0
    Schwefel 1.2 $ F_{3}(x)=\sum\limits_{i=1}^{n}(\sum\limits_{j=1}^{i} x_{j})^{2} $ 30 $ [-100,100]^{n} $ 0
    Schwefel 2.21 $ F_{4}(x)=\max\limits_{i}\{\mid x_{i}\mid, 1\leq i\leq n\} $ 30 $ [-100,100]^{n} $ 0
    Rosenbrock $ F_{5}(x)=\sum\limits_{i=1}^{n-1}(100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}) $ 30 $ [-200,200]^{n} $ 0
    Step $ F_{6}(x)=\sum\limits_{i=1}^{n}(x_{i}+0.5)^{2} $ 30 $ [-100,100]^{n} $ 0
    Quartic $ F_{7}(x)=\sum\limits_{i=1}^{n}ix_{i}^{4}+rand() $ 30 $ [-1.28, 1.28]^{n} $ 0


    The pseudocode of IASO is presented in Algorithm 1.

    Algorithm 1 Pseudocode of IASO
    Begin:
    Randomly initialize a group of atoms $ x $ (solutions) and their velocities $ v $; set $ Fit_{best} = Inf $.
    While the stop criterion is not satisfied do
      For each atom $ x_{i} $ do
        Calculate its fitness value $ Fit_{i} $;
        If $ Fit_{i} < Fit_{best} $ then
          $ Fit_{best} = Fit_{i} $;
          $ X_{Best} = x_{i} $;
        End If
        Calculate the mass using Eq (2.3) and Eq (2.4);
        Use $ K(t) = N-(N-2)\times\frac{t}{T} $ to determine $ K $ neighbors;
        Use Eq (2.14) and Eq (2.15) to calculate the interaction force and geometric restraint force;
        Calculate acceleration using formula Eq (2.17);
        Update the velocity
        $ v_{i}^d(t+1) = w\times rand_{i}^dv_{i}^d(t)+c_{1}\times rand_{i}^da_{i}^d(t)+c_{2}\times rand_{i}^d(x_{best}^d(t)-x_{i}^d(t)) $.
        Update the position
          $ x_{i}^d(t+1) = x_{i}^d(t)+v_{i}^d(t+1) $.
      End For.
    End While.
    Find the best solution so far $ X_{Best} $.
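
    Putting the pieces together, the following is a minimal driver in the spirit of Algorithm 1, reusing the helpers sketched above (atom_masses, acceleration, iaso_update, converged); the clipping of atoms back into the search bounds is our addition.

    def iaso(fitness, lb, ub, N=50, T=200, eps=1e-6):
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        X = lb + np.random.rand(N, lb.size) * (ub - lb)      # random initial atoms
        V = np.zeros_like(X)
        best_x, best_f = X[0].copy(), np.inf
        for t in range(1, T + 1):
            fit = np.array([fitness(x) for x in X])
            if fit.min() < best_f:                           # track the global best
                best_f, best_x = fit.min(), X[np.argmin(fit)].copy()
            if converged(fit, eps):                          # stop test, Eq (2.20)
                break
            A = np.vstack([acceleration(X, fit, i, t, T) for i in range(N)])
            X, V = iaso_update(X, V, A, best_x, t, T)        # Eqs (2.18)-(2.19)
            X = np.clip(X, lb, ub)
        return best_x, best_f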

    To test the performance of the IASO algorithm, 23 well-known benchmark functions were used. These benchmark functions are described in Tables 1-3. $ F_1-F_7 $ are unimodal functions; each has no local optima and only one global optimum, so they verify the convergence speed of the algorithm. $ F_8-F_{13} $ are multimodal functions with many local optima, while $ F_{14}-F_{23} $ are low-dimensional functions, each with fewer local optima. Multimodal and low-dimensional functions are therefore well suited to testing the avoidance of local optima and the exploration ability of the algorithm.
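
    As a usage illustration, two of the benchmark functions of Tables 1 and 2 written against the iaso driver sketched above:

    def sphere(x):       # F1 in Table 1
        return np.sum(x ** 2)

    def rastrigin(x):    # F9 in Table 2
        return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

    best_x, best_f = iaso(sphere, [-100] * 30, [100] * 30)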

    Table 2.  Multimodal test functions $ F_8(x)-F_{13}(x) $.
    Name Function n Range Optimum
    Schwefel $ F_{8}(x)=-\sum\limits_{i=1}^{n}(x_{i}\sin(\sqrt{\mid x_{i}\mid})) $ 30 $ [-500,500]^{n} $ -12569.5
    Rastrigin $ F_{9}(x)=\sum\limits_{i=1}^{n}(x_{i}^{2}-10\cos(2\pi x_{i})+10) $ 30 $ [-5.12, 5.12]^{n} $ 0
    Ackley $ F_{10}(x)=-20\exp(-0.2\sqrt{\frac{1}{n}\sum\limits_{i=1}^{n}x_{i}^{2}})-\exp(\frac{1}{n}\sum\limits_{i=1}^{n}\cos2\pi x_{i})+20+e $ 30 $ [-32, 32]^{n} $ 0
    Griewank $ F_{11}(x)=\frac{1}{4000}\sum\limits_{i=1}^{n}x_{i}^{2}-\prod\limits_{i=1}^{n}\cos(\frac{x_{i}}{\sqrt{i}})+1 $ 30 $ [-600,600]^{n} $ 0
    Penalized $ F_{12}(x)=\frac{\pi}{n}\{10\sin^{2}(\pi y_{1})+\sum\limits_{i=1}^{n-1}(y_{i}-1)^{2}[1+10\sin^{2}(\pi y_{i+1})]+(y_{n}-1)^{2}\}+\sum\limits_{i=1}^{n}u(x_{i}, 10,100, 4) $ 30 $ [-50, 50]^{n} $ 0
    Penalized2 $ F_{13}(x)=0.1\{\sin^{2}(3\pi x_{1})+\sum\limits_{i=1}^{n-1}(x_{i}-1)^{2}[1+\sin^{2}(3\pi x_{i+1})] +(x_{n}-1)^{2}[1+\sin^{2}(2\pi x_{n})]\}+\sum\limits_{i=1}^{n}u(x_{i}, 5,100, 4) $ 30 $ [-50, 50]^{n} $ $ 0 $

    Table 3.  Low-dimensional test functions $ F_{14}(x)-F_{23}(x) $.
    Name Function n Range Optimum
    Foxholes $ F_{14}(x)=\left[\frac{1}{500}+\sum\limits_{j=1}^{25}\frac{1}{j+\sum\limits_{i=1}^{2}(x_{i}-a_{ij})^{6}}\right]^{-1} $ 2 $ [-65.536, 65.536]^{n} $ $ 0.998 $
    Kowalik $ F_{15}(x)=\sum\limits_{i=1}^{11}\left[a_{i}-\frac{x_{1}(b_{i}^{2}+b_{i}x_{2})}{b_{i}^{2}+b_{i}x_{3}+x_{4}}\right]^{2} $ 4 $ [-5, 5]^{n} $ $ 3.075\times 10^{-4} $
    Six Hump Camel $ F_{16}(x)=4x_{1}^{2}-2.1x_{1}^{4}+\frac{1}{3}x_{1}^{6}+x_{1}x_{2}-4x_{2}^{2}+4x_{2}^{4} $ 2 $ [-5, 5]^{n} $ $ -1.0316 $
    Branin $ F_{17}(x)=(x_{2}-\frac{5.1}{4\pi^{2}}x_{1}^{2}+\frac{5}{\pi}x_{1}-6)^{2} +10(1-\frac{1}{8\pi})\cos x_{1}+10 $ 2 $ [-5, 10]\times[0, 15] $ $ 0.398 $
    Goldstein-Price $ F_{18}(x)=[1+(x_{1}+x_{2}+1)^{2}(19-14x_{1}+3x_{1}^{2}-14x_{2}+6x_{1}x_{2}+3x_{2}^{2})] \times[30+(2x_{1}-3x_{2})^{2}(18-32x_{1}+12x_{1}^{2}+48x_{2}-36x_{1}x_{2}+27x_{2}^{2})] $ 2 $ [-2, 2]^{n} $ 3
    Hartman 3 $ F_{19}(x)=-\sum\limits_{i=1}^{4}c_{i}\exp[-\sum\limits_{j=1}^{3}a_{ij}(x_{j}-p_{ij})^{2}] $ 3 $ [0, 1]^{n} $ $ -3.86 $
    Hartman 6 $ F_{20}(x)=-\sum\limits_{i=1}^{4}c_{i}\exp[-\sum\limits_{j=1}^{6}a_{ij}(x_{j}-p_{ij})^{2}] $ $ 6 $ $ [0, 1]^{n} $ $ -3.322 $
    Shekel 5 $ F_{21}(x)=-\sum\limits_{i=1}^{5} \left[(x-a_{i})(x-a_{i})^{T}+c_{i}\right]^{-1} $ 4 $ [0, 10]^{n} $ $ -10.1532 $
    Shekel 7 $ F_{22}(x)=-\sum\limits_{i=1}^{7} \left[(x-a_{i})(x-a_{i})^{T}+c_{i}\right]^{-1} $ 4 $ [0, 10]^{n} $ $ -10.4028 $
    Shekel 10 $ F_{23}(x)=-\sum\limits_{i=1}^{10}\left[(x-a_{i})(x-a_{i})^{T}+c_{i}\right]^{-1} $ 4 $ [0, 10]^{n} $ $ -10.5363 $


    In this experiment, the population size for IASO and ASO was $ 50 $ and the maximum number of iterations was $ 100 $. Three performance indexes are used to compare IASO and ASO: the average, the minimum and the standard deviation of the optimal solution. The smaller the average of the optimal solution, the less likely the algorithm is to enter a local optimum and the easier it is to find the global optimum; the smaller the standard deviation of the optimal solution, the more stable the algorithm; and the smaller the minimum value, the more accurate the algorithm. Tables 4-6 show the comparison of the optimization results for the different types of functions, and the corresponding convergence curves are shown in Figures 6-8. Table 4 and Figure 6 give the optimization results and convergence curves of the unimodal functions $ F_1(x)-F_7(x) $; IASO performs better than ASO and converges faster. Table 5 and Figure 7 give the optimization results and convergence curves of the multimodal functions $ F_8(x)-F_{13}(x) $; the overall performance of IASO is better than that of ASO. Table 6 and Figure 8 give the optimization results and convergence curves of the low-dimensional functions $ F_{14}(x)-F_{23}(x) $; the convergence curves show that ASO converges faster there, but IASO is more accurate. Comparing IASO with ASO, the improved IASO converges much faster, is more stable than ASO, and is less likely to enter a local optimum.

    Table 4.  Comparisons of results for unimodal functions.
    Function Index ASO IASO
    $ F_{1}(x) $ Mean $ 2.54e-12 $ $ 1.88e-18 $
    Std $ 3.24e-12 $ $ 1.03e-20 $
    Best $ 3.48e-15 $ $ 5.22e-19 $
    $ F_{2}(x) $ Mean $ 3.33e-08 $ $ 3.39e-09 $
    Std $ 1.89e-10 $ $ 9.90e-12 $
    Best $ 5.11e-08 $ $ 1.84e-09 $
    $ F_{3}(x) $ Mean $ 186.5664 $ $ 1.06e-17 $
    Std $ 86.3065 $ $ 1.23e-21 $
    Best $ 24.1115 $ $ 1.81e-18 $
    $ F_{4}(x) $ Mean $ 3.24e-09 $ $ 8.77e-10 $
    Std $ 6.14e-09 $ $ 2.32e-12 $
    Best $ 2.13e-10 $ $ 4.75e-10 $
    $ F_{5}(x) $ Mean $ 0.2905 $ $ 0.0034 $
    Std $ 0.9888 $ $ 0.0039 $
    Best $ 4.5370e+03 $ $ 28.8627 $
    $ F_{6}(x) $ Mean 0 0
    Std 0 0
    Best 0 0
    $ F_{7}(x) $ Mean $ 0.02124 $ $ 3.91e-04 $
    Std $ 0.02981 $ $ 3.60e-04 $
    Best $ 0.03319 $ $ 1.8710e-04 $

    Table 5.  Comparisons of results for multimodal functions.
    Function Index ASO IASO
    $ F_{8}(x) $ Mean $ -3887 $ $ -6772.47 $
    Std $ 564.7 $ $ 354.77 $
    Best $ -4245 $ $ -6878.93 $
    $ F_{9}(x) $ Mean 0 0
    Std 0 0
    Best 0 0
    $ F_{10}(x) $ Mean $ 3.91e-09 $ $ 8.63e-10 $
    Std $ 2.15e-09 $ $ 2.68e-13 $
    Best $ 1.13e-09 $ $ 7.257e-10 $
    $ F_{11}(x) $ Mean 0 0
    Std 0 0
    Best 0 0
    $ F_{12}(x) $ Mean $ 4.34e-23 $ $ 3.69e-23 $
    Std $ 1.84e-22 $ $ 1.51e-22 $
    Best $ 7.83e-24 $ $ 6.53e-24 $
    $ F_{13}(x) $ Mean $ 2.03e-22 $ $ 2.33e-23 $
    Std $ 2.83e-22 $ $ 3.12e-22 $
    Best $ 1.91e-23 $ $ 1.90e-23 $

    Figure 3.  2D plots of function $ F_{1}(x)-F_{7}(x) $.
    Figure 4.  2D plots of function $ F_{8}(x)-F_{13}(x) $.
    Figure 5.  2D plots of function $ F_{14}(x)-F_{23}(x) $.
    Figure 6.  Convergence curve of the function $ F_{1}(x)-F_{7}(x) $.
    Figure 7.  Convergence curve of the function $ F_{8}(x)-F_{13}(x) $.
    Table 6.  Comparisons of results for low-dimensional functions.
    Function Index ASO IASO
    $ F_{14}(x) $ Mean $ 0.998004 $ $ 0.998004 $
    Std $ 7.04e-16 $ $ 4.25e-16 $
    Best $ 0.998004 $ $ 0.998004 $
    $ F_{15}(x) $ Mean $ 9.47e-04 $ $ 4.69e-04 $
    Std $ 2.79e-04 $ $ 1.81e-04 $
    Best $ 2.79e-04 $ $ 1.45e-04 $
    $ F_{16}(x) $ Mean $ -1.03163 $ $ -1.03163 $
    Std 0 0
    Best $ -1.03163 $ $ -1.03163 $
    $ F_{17}(x) $ Mean $ 0.397887 $ $ 0.397887 $
    Std 0 0
    Best $ 0.397887 $ $ 0.397887 $
    $ F_{18}(x) $ Mean $ 3 $ $ 3 $
    Std $ 1.68e-14 $ $ 1.65e-14 $
    Best $ 3 $ $ 3 $
    $ F_{19}(x) $ Mean $ -3.8627 $ $ -3.8627 $
    Std $ 2.68e-15 $ $ 2.53e-17 $
    Best $ -3.8627 $ $ -3.8627 $
    $ F_{20}(x) $ Mean $ -3.322 $ $ -3.322 $
    Std $ 1.12e-08 $ $ 8.95e-09 $
    Best $ -3.322 $ $ -3.322 $
    $ F_{21}(x) $ Mean $ -8.7744 $ $ -9.4724 $
    Std $ 2.1867 $ $ 1.3031 $
    Best $ -10.1532 $ $ -10.1532 $
    $ F_{22}(x) $ Mean $ -10.4029 $ $ -10.4029 $
    Std $ 1.84e-15 $ $ 1.76e-18 $
    Best $ -10.4029 $ $ -10.4029 $
    $ F_{23}(x) $ Mean $ -10.5364 $ $ -10.5364 $
    Std $ 1.54e-15 $ $ 1.62e-18 $
    Best $ -10.5364 $ $ -10.5364 $

    Figure 8.  Convergence curve of the function $ F_{14}(x)-F_{23}(x) $.

    Assume that $ N $ far-field narrowband signal sources impinge on a hydrophone array of $ M $ $ (M > N) $ vector sensors. The incident angles are $ { \bf{\Theta} } = [\Theta_1, \Theta_2, \cdots, \Theta_N]^T $, where $ \Theta_n = (\theta_n, \alpha_n)^T $, $ (\cdot)^T $ denotes transposition, $ \theta_n $ is the horizontal azimuth angle and $ \alpha_n $ the elevation angle of the $ n^{\rm{th}} $ incident signal, the incident wavelength is $ \lambda $, and the distance between adjacent array elements is $ d $. Then the signal received by the array can be expressed in vector form as

    $ { \bf{Z} }(t) = {\bf{A}}({\bf{\Theta }}){ \bf{S} }(t)+{ \bf{N} }(t), $
    (3.1)

    where $ { \bf{Z} }(t) $ is the $ 4M\times 1 $ dimensional received signal vector and $ { \bf{N} }(t) $ is the $ 4M\times 1 $ dimensional Gaussian noise vector of the array. The noise is assumed to be Gaussian white noise, independent in time and space. $ { \bf{S} }(t) $ is the $ N\times 1 $ dimensional signal source vector, and $ {\bf{A}}({\bf{\Theta }}) $ is the signal direction matrix of the vector hydrophone array

    $ {\bf{A}}({\bf{\Theta }}) = \left[{ \bf{a} }(\Theta_1), { \bf{a} }(\Theta_2), \cdots, { \bf{a} }(\Theta_N)\right] = \left[{ \bf{a} }_1(\Theta_1)\otimes{ \bf{u} }_1, { \bf{a} }_2(\Theta_2)\otimes{ \bf{u} }_2, \cdots, { \bf{a} }_N(\Theta_N)\otimes{ \bf{u} }_N\right], $
    (3.2)

    where $ \otimes $ is the Kronecker product, $ { \bf{a} }_n(\Theta_n) = [1, e^{-j\beta_n}, e^{-j2\beta_n}, \cdots, e^{-j(M-1)\beta_n}]^T $ is the sound pressure steering vector corresponding to the $ n^{\rm{th}} $ signal, $ { \bf{u} }_n = [1, \cos\theta_n \sin \alpha_n, \sin\theta_n \sin \alpha_n, \cos \alpha_n]^T $ is the direction vector of the $ n^{\rm{th}} $ signal source, and $ \beta_n = \frac{2\pi}{\lambda}d\cos \theta_n \sin \alpha_n $. Then the array covariance matrix of the received signal is

    $ { \bf{R} } = E\left[{ \bf{Z} }(t){ \bf{Z} }^H(t)\right] = {\bf{A}}E\left[{ \bf{S} }(t){ \bf{S} }^H(t)\right]{\bf{A}}^H+E\left[{ \bf{N} }(t){ \bf{N} }^H(t)\right] = {\bf{A}}{ \bf{R} }_s{\bf{A}}^H+\sigma^2{ \bf{I} }, $
    (3.3)

    where $ { \bf{R} }_s $ is the signal covariance matrix, $ \sigma^2 $ is the Gaussian white noise power, $ { \bf{I} } $ is the identity matrix, and $ (\cdot)^H $ is the conjugate transpose. Assume that the signal and the array are in the same plane, that is, $ \alpha_n = \frac{\pi}{2} $, so only $ \theta_n $ is considered in this paper. In practice, the received data is finite, so the array covariance matrix is estimated as

    $ \hat{{ \bf{R} }} = \frac{1}{K}\sum\limits_{k = 1}^{K}{ \bf{Z} }(k){ \bf{Z} }^H(k), $
    (3.4)

    where $ K $ represents the number of snapshots.
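
    For concreteness, here is a sketch of the array model of Eqs (3.2) and (3.4), assuming half-wavelength element spacing and the in-plane case $ \alpha_n = \frac{\pi}{2} $ used in this paper; the function names are ours.

    import numpy as np

    def steering_matrix(thetas, M, d_over_lambda=0.5, alpha=np.pi / 2):
        # A(Theta) of Eq (3.2): one 4M-dimensional column per source
        # (pressure channel plus three velocity channels per sensor)
        cols = []
        for th in thetas:
            beta = 2 * np.pi * d_over_lambda * np.cos(th) * np.sin(alpha)
            a = np.exp(-1j * beta * np.arange(M))            # pressure steering vector
            u = np.array([1.0, np.cos(th) * np.sin(alpha),
                          np.sin(th) * np.sin(alpha), np.cos(alpha)])
            cols.append(np.kron(a, u))                       # Kronecker product of Eq (3.2)
        return np.column_stack(cols)

    def sample_covariance(Z):
        # R_hat of Eq (3.4); Z has shape (4M, K) with K snapshots
        return Z @ Z.conj().T / Z.shape[1]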

    By uniformly and independently sampling the received signal, the joint probability density function of the sampled data can be obtained as follows

    $ P\left({ \bf{Z} }(1), { \bf{Z} }(2), \cdots, { \bf{Z} }(K)\right) = \prod\limits_{k = 1}^{K}\frac{\exp\left(-\frac{1}{\sigma^2}\left\|{ \bf{Z} }(k)-{\bf{A}}(\tilde{\theta}){ \bf{S} }(k)\right\|^2\right)}{\det\left(\pi\sigma^2{ \bf{I} }\right)}, $
    (3.5)

    where $ {\rm{det}}(\cdot) $ denotes the determinant, $ \tilde{\theta} $ is the unknown signal direction to be estimated, and $ P(\cdot) $ is a multidimensional nonlinear function of the unknown parameters $ \tilde{\theta}, \sigma^2 $ and $ { \bf{S} } $. Taking the negative logarithm of Eq (3.5) gives

    $ -\ln P = K\ln\pi+3MK\ln\sigma^2+\frac{1}{\sigma^2}\sum\limits_{k = 1}^{K}\left\|{ \bf{Z} }(k)-{\bf{A}}(\tilde{\theta}){ \bf{S} }(k)\right\|^2. $
    (3.6)

    Taking the partial derivative of Eq (3.6) with respect to $ \sigma^2 $ and setting it to 0 gives $ \sigma^2 = \frac{1}{4M}{\rm{tr}}\left\{{{\bf{P_A}}^\perp \hat{{ \bf{R} }}}\right\} $, where $ {\rm{tr}}\{\cdot\} $ is the trace of a matrix, $ {\bf{P_A}}^\perp $ is the orthogonal projection matrix of $ {\bf{A}} $, $ \hat{{ \bf{S} }} = {\bf{A}}^{+}{ \bf{Z} } $, and $ {\bf{A}}^{+} = ({\bf{A}}^H{\bf{A}})^{-1}{\bf{A}}^H $ is the pseudo-inverse of $ {\bf{A}} $. Substituting $ \sigma^2 $ and $ \hat{{ \bf{S} }} $ into Eq (3.6) yields

    $ \hat{\theta} = \arg\max\limits_{\tilde{\theta}}g(\tilde{\theta}), $
    (3.7)

    where $ g(\tilde{\theta}) $ is the likelihood function, which can be expressed as

    $ g(\tilde{\theta}) = {\rm{tr}}\left\{\left[{\bf{A}}(\tilde{\theta})\left({\bf{A}}^H(\tilde{\theta}){\bf{A}}(\tilde{\theta})\right)^{-1}{\bf{A}}^H(\tilde{\theta})\right]\hat{{ \bf{R} }}\right\}. $
    (3.8)

    $ \hat{\theta} $ is the estimated DOA of the signal. Seeking the maximum of the likelihood function $ g(\tilde{\theta}) $ yields the set of angles corresponding to this value, which is the estimate sought. In order to compare the convergence of different methods, the following fitness function is defined

    $ f(\tilde{\theta}) = \left|g(\tilde{\theta})-g(\theta)\right|, $
    (3.9)

    where $ \theta $ is the known signal direction in Eq (3.1) and $ g(\theta) = {\rm{tr}} \left\{\left[{\bf{A}}(\theta)({\bf{A}}^H(\theta){\bf{A}}(\theta))^{-1}{\bf{A}}^H(\theta)\right]\hat{{ \bf{R} }}\right\} $. Eq (3.7) can thus be expressed as

    $ \hat{\theta} = \arg\min\limits_{\tilde{\theta}}f(\tilde{\theta}). $
    (3.10)

    When $ f(\tilde{\theta}) $ is close to 0, the estimated angle is accurate.
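
    A sketch of Eqs (3.8) and (3.9) built on the steering_matrix and sample_covariance helpers above; solving the normal equations rather than forming an explicit matrix inverse is our implementation choice.

    def g_likelihood(thetas, R_hat, M):
        # Eq (3.8): trace of R_hat projected onto the column space of A(theta~)
        A = steering_matrix(np.atleast_1d(thetas), M)
        P = A @ np.linalg.solve(A.conj().T @ A, A.conj().T)  # projector onto span(A)
        return np.real(np.trace(P @ R_hat))

    def f_fitness(thetas, theta_true, R_hat, M):
        # Eq (3.9): a value close to 0 means an accurate DOA estimate
        return abs(g_likelihood(thetas, R_hat, M) - g_likelihood(theta_true, R_hat, M))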

    The initial position is expressed as $ \theta_i = [ \theta_{i}^1, \theta_{i}^2, \cdots, \theta_{i}^d] $. Taking Eq (3.9) of the ML DOA problem as the fitness function $ Fit_{i}(t) $ in IASO, the fitness function $ Fit_{i}(t) $ of Eq (2.3) in Section 2 becomes $ f(\tilde{\theta}) $, the geometric binding force of Eq (2.15) becomes $ G_{i}^d(t) = \lambda(t)(\theta_{best}^d(t)-\theta_{i}^d(t)) $, and the acceleration is changed from Eq (2.17) to

    $ a_{i}^d(t) = \frac{F_{i}^d(t)+G_{i}^d(t)}{m_{i}^d(t)} = -\alpha\left(1-\frac{t-1}{T}\right)^3e^{-\frac{20t}{T}}\sum\limits_{j\in KBest}\frac{rand_{j}\left[2\times\left(h_{ij}(t)\right)^{-13}-\left(h_{ij}(t)\right)^{-7}\right]}{m_{i}(t)}\frac{\left(\theta_{j}^d(t)-\theta_{i}^d(t)\right)}{\left\|\theta_{i}(t), \theta_{j}(t)\right\|_{2}}+\beta e^{-\frac{20t}{T}}\frac{\theta_{best}^d(t)-\theta_{i}^d(t)}{m_{i}(t)}. $
    (4.1)

    The velocity update is changed from Eq (2.18) to

    $ v_{i}^d(t+1) = w\times rand_{i}^d v_{i}^d(t)+c_{1}\times rand_{i}^d a_{i}^d(t)+c_{2}\times rand_{i}^d\left(\theta_{best}^d(t)-\theta_{i}^d(t)\right), $
    (4.2)

    the location update is changed from Eq (2.19) to

    $ \theta_{i}^d(t+1) = \theta_{i}^d(t)+v_{i}^d(t+1). $
    (4.3)

    In this part, we demonstrate the simulation results for the iterative process and convergence performance of IASO. Then, we compare the ML DOA estimation performance of IASO with that of ASO, SCA, GA and PSO. In the experiments, the receiving array is a uniform linear array composed of 10 acoustic vector sensors, the number of snapshots is 300, and the added noise is Gaussian white noise.

    In the simulation experiment, over $ 100 $ independent Monte Carlo runs, the population size is $ 30 $, the maximum number of iterations is $ 200 $, the signal-to-noise ratio is $ 10 $ dB, and the search range is $ [0,180] $. Taking one source $ \theta = [30^\circ] $ and two sources $ \theta = [30^\circ, 60^{\circ}] $, respectively, the minimization curves of the fitness value are obtained for IASO, ASO, SCA, GA and PSO. Table 7 lists the parameters of the five algorithms, and Figure 9 shows the fitness convergence curves.

    Table 7.  Parameter values of different algorithms for ML DOA estimator.
    Name of Parameter IASO ASO SCA GA PSO
    Problem dimension 2(3, 4) 2(3, 4) 2(3, 4) 2(3, 4) 2(3, 4)
    Population Size 30 30 30 30 30
    Maximum number of iterations 200 200 200 200 200
    Initial search area $ [0,180] $ $ [0,180] $ $ [0,180] $ $ [0,180] $ $ [0,180] $
    Depth weight 50 50 - - -
    Multiplier weight 0.2 0.2 - - -
    Lower limit of repulsion 1.1 1.1 - - -
    Upper limit of attraction 1.24 1.24 - - -
    Acceleration factor $ c_{1} $ -2.5000e-04 - - - -
    Acceleration factor $ c_{2} $ 1.003 - - - -
    Inertia weight $ w $ 0.8975 - - - -
    Crossover Fraction - - - 0.8 -
    Migration Fraction - - - 0.2 -
    Cognitive Constants - - - - 1.25
    Social Constants - - - - 0.5
    Inertial Weight - - - - 0.9

    Figure 9.  The curves of fitness function for ML DOA estimator with IASO, ASO, SCA, GA and PSO at SNR = $ 10 $dB, when the number of signal sources is 1, 2, and 3, respectively.

    Figure 9 shows the fitness curves of the ML DOA estimators based on IASO, ASO, SCA, GA and PSO for 1, 2 and 3 signal sources over 200 iterations. As can be seen from the figure, regardless of whether the number of signal sources is 1, 2 or 3, IASO has the fastest convergence speed. When the number of signal sources is 1, IASO converges fastest, followed by ASO; in comparison, PSO, SCA and GA have a large convergence range and high fitness values, which indicates that they easily fall into local optima. When the number of signal sources is 2, IASO again converges best. When the number of signal sources is 3, IASO still remains the best, followed by ASO, but SCA, GA and PSO not only converge rather slowly but also easily fall into local optima; even after 200 iterations, their fitness functions cannot converge to 0.

    In order to compare the statistical performance of the different algorithms and their relationship to the Cramér-Rao bound (CRB), we evaluated the estimation error of each algorithm by the root mean square error (RMSE). The population size is 30 and the maximum number of iterations is 200.

    $ RMSE = \sqrt{\frac{1}{NL}\sum\limits_{l = 1}^{L}\sum\limits_{i = 1}^{N}\left[\hat{\theta}_i(l)-\theta_i\right]^2}, $
    (4.4)

    where $ L $ is the number of experiments, $ \theta_i $ is the DOA of the $ i^{th} $ signal, $ N $ is the number of signals, and $ \hat{\theta}_i(l) $ denotes the estimate of the $ i^{th} $ DOA obtained in the $ l^{th} $ experiment.
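
    Eq (4.4) transcribes directly; here estimates is an $ L\times N $ array holding the DOA estimates of the $ L $ runs.

    def rmse(estimates, theta_true):
        # Eq (4.4): root mean square error over L experiments and N sources
        err = np.asarray(estimates) - np.asarray(theta_true)
        return np.sqrt(np.mean(err ** 2))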

    Figure 10 shows the RMSE of the ML DOA estimators based on the five algorithms IASO, ASO, SCA, GA and PSO for 1, 2 and 3 signal sources, as the signal-to-noise ratio changes from -20dB to 20dB. It can be seen that the performance of IASO is more stable regardless of the number of sources and is closer to the CRB. When the number of signal sources is 3, the estimation performance of all the algorithms decreases, but the DOA estimation based on IASO is still closest to the CRB, followed by ASO, GA, PSO and SCA. The SCA performs well at low signal-to-noise ratios but poorly at high signal-to-noise ratios. The ML DOA estimation performance of PSO and GA is poor, and their fitness functions have difficulty converging to the global optimum; even after 200 iterations, a large RMSE is still produced.

    Figure 10.  CRB and RMSE curve of ML DOA estimator with IASO, ASO, SCA, GA and PSO as SNR changes from -20dB to 20dB, when the number of signal sources is 1, 2 and 3, respectively.

    Population size is one of the most important parameters in evolutionary algorithms. In general, the estimation accuracy of intelligent algorithms improves as the population size increases, but a larger population also increases the computational load. For the ML DOA problem, the population size determines the number of likelihood function evaluations in each iteration. This highlights the need for an algorithm that achieves high estimation accuracy with a small population size.

    Figure 11 shows the RMSE curves of the ML DOA estimators based on IASO, ASO, SCA, GA and PSO as the population size ranges from 10 to 100. As can be seen from the figure, IASO maintains a low RMSE and high estimation accuracy, staying closest to the CRB regardless of the number of signal sources. When there is one signal source, the RMSE of IASO, ASO, SCA and PSO is similar, while GA keeps a relatively high RMSE when the population is smaller than 50. When there are two signal sources, IASO maintains a lower RMSE, ASO is somewhat unstable, and PSO and GA still keep a higher RMSE. When there are three signal sources, only IASO maintains a low RMSE, while ASO, PSO, SCA and GA all have higher RMSE. This shows that even with a population size of 100, GA and PSO need not only a large population but also a large number of iterations. For ML DOA estimation with 1, 2 or 3 signal sources, the IASO algorithm can accurately estimate the DOA with a smaller population and hence less computational effort.

    Figure 11.  CRB and RMSE curve of ML DOA estimator with IASO, ASO, SCA, GA and PSO as population size changes from 10 to 100, when the number of signal sources is 1, 2 and 3, respectively.

    In addition to the convergence and statistical performance described above, the quality of an algorithm can also be judged by its computational complexity. The computational complexity is independent of the number of signal sources; rather, it is related to the maximum number of iterations, the population size and the search space. The following figure shows the average number of iterations required under the stopping criterion of Eq (2.20) with accuracy $ 1e-6 $, in the case of two signal sources.

    Figure 12 shows the average iteration count curves of the different algorithms over 100 independent Monte Carlo experiments. It can be seen that the IASO algorithm requires the fewest iterations as the signal-to-noise ratio ranges from -20dB to 20dB with a maximum of $ 200 $ iterations, followed by ASO, while PSO, GA and SCA require at least 100 iterations. The number of IASO iterations is significantly lower than that of the other algorithms. In general, IASO requires the fewest iterations across both the signal-to-noise-ratio and population-size sweeps, whereas ASO, SCA, GA and PSO still need more iterations to find the optimal solution. As a result, IASO has the lowest computational load.

    Figure 12.  The average iteration number of different algorithms with 3 signal sources as SNR changes from -20dB to 20dB, the population size changes from 10 to 100, respectively.

    This paper proposes an improved ASO (IASO). Testing IASO and ASO on 23 benchmark functions shows that IASO overcomes the shortcomings of ASO, namely its tendency to fall into local optima and its poor convergence performance. ML DOA estimation is in theory a high-resolution optimization method, but its huge computational burden hinders practical application. In this paper, IASO is applied to ML DOA estimation, and simulation experiments are carried out. The results show that, compared with the ASO, SCA, GA and PSO methods, the proposed IASO-based ML DOA estimator has faster convergence speed, lower RMSE and lower computational complexity.

    This research was funded by the National Natural Science Foundation of China (Grant No. 61774137, 51875535 and 61927807), the Key Research and Development Foundation of Shanxi Province (Grant No. 201903D121156), and the Shanxi Scholarship Council of China (Grant No. 2020-104 and 2021-108). The authors express their sincere thanks to the anonymous referee for many valuable comments and suggestions.

    The authors declare that they have no conflict of interest.

  © 2016 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
