Research article

An improved atomic search algorithm for optimization and application in ML DOA estimation of vector hydrophone array

  • Received: 30 September 2021 Revised: 22 December 2021 Accepted: 03 January 2022 Published: 10 January 2022
  • MSC : 65K05, 65K10

  • The atom search optimization (ASO) algorithm has fewer parameters and better performance than traditional intelligent optimization algorithms, but it may easily fall into local optima and its accuracy is not high. Therefore, based on the velocity-update idea of particle swarm optimization (PSO), an improved atom search optimization (IASO) algorithm is proposed in this paper. Compared with traditional ASO, IASO achieves faster convergence and higher precision on 23 benchmark functions. The IASO algorithm has also been successfully applied to the maximum likelihood (ML) estimator for direction of arrival (DOA). Under different numbers of signal sources, different signal-to-noise ratios (SNR) and different population sizes, simulation results show that the ML estimator with IASO has faster convergence, fewer iterations and lower root mean square error (RMSE) than the ML estimators with ASO, the sine cosine algorithm (SCA), the genetic algorithm (GA) and particle swarm optimization (PSO). Therefore, the proposed algorithm holds great potential for guaranteeing estimation accuracy while greatly reducing the computational complexity of the multidimensional nonlinear optimization of the ML estimator.

    Citation: Peng Wang, Weijia He, Fan Guo, Xuefang He, Jiajun Huang. An improved atomic search algorithm for optimization and application in ML DOA estimation of vector hydrophone array[J]. AIMS Mathematics, 2022, 7(4): 5563-5593. doi: 10.3934/math.2022308




    Intelligent optimization algorithms are increasingly popular in the field of intelligent computing and are widely applied in many other fields, including engineering, medicine, ecology and environment, marine engineering, and so forth. Classical intelligent optimization algorithms include the particle swarm optimization (PSO) algorithm [1], the genetic algorithm (GA) [2], simulated annealing (SA) [3], etc. PSO is a swarm intelligence optimization algorithm that simulates bird predation: each bird is treated as a particle, and the particles follow the current optimal particle to find the optimal solution in the solution space. GA is a computational model based on Darwin's theory of evolution and Mendelian genetics that simulates biological evolution; it obtains optimal solutions through three basic operations: selection, crossover and mutation. SA was the first natural algorithm proposed to simulate the high-temperature annealing of metallic materials: when a material is heated to a high temperature and then slowly cooled, its particles eventually reach a state of equilibrium and solidify into crystals of minimal energy. Many scholars have also developed bionic intelligent optimization algorithms based on these classical algorithms, such as the sine cosine algorithm (SCA) [4], the artificial bee colony (ABC) algorithm [5], the bat algorithm (BA) [6], the bee evolutionary genetic algorithm (BEGA) [7], the squirrel search algorithm (SSA) [8], the atom search optimization (ASO) algorithm [9], etc.

    The atom search optimization (ASO) algorithm is a new intelligent optimization algorithm based on molecular dynamics that was proposed in 2019. ASO models the geometric binding force and the interaction force between atoms, following the laws of classical mechanics [10,11]: the interaction force is derived from the Lennard-Jones (LJ) potential [12,13], and the binding force from the covalent bonds among atoms. In ASO, atoms represent solutions in the search space; the larger the atomic mass, the better the solution, and vice versa. Compared to traditional intelligent optimization algorithms, ASO requires fewer physical parameters and can achieve better performance. As a result, it has been widely used in various fields. Zhao et al. applied ASO to hydrogeological parameter estimation [9] and groundwater dispersion coefficient estimation [14]. Ahmed et al. utilized ASO in fuel cell modeling and successfully built an accurate model; simulations showed that it performed as well as commercial proton exchange membrane (PEM) fuel cells [15]. Mohammed et al. used ASO to reduce the peak sidelobe level of a beam pattern [16], Saeid combined ASO with the tree-seed algorithm (TSA) to enhance its exploration performance [17], Ghosh et al. proposed an improved binary ASO combined with the simulated annealing (SA) technique [18], Elaziz et al. proposed an automatic clustering algorithm combining ASO and SCA to automatically find the best number of clusters and the corresponding positions [19], and Sun et al. applied an improved ASO to engineering design [20].

    Since ASO is prone to finding only locally optimal solutions with low accuracy, an improved atom search optimization (IASO) algorithm based on particle velocity updating is proposed in this paper. IASO follows the same principle as ASO, but its iterative velocity update is modified to improve the convergence speed, avoid local optima, and allow a more extensive search for optimal solutions. IASO adopts the particle velocity update idea from PSO and introduces an inertia weight w to improve the performance of ASO. In addition, the learning factors c1 and c2 are added, which not only ensure the convergence of the algorithm but also accelerate it, effectively solving the problem that the original ASO tends to find only locally optimal solutions.

    Array signal processing is one of the important research directions of signal processing and has been developing rapidly in recent years. DOA estimation of signals is an ongoing research hotspot in the field of array signal processing and has great potential for hydrophone applications. Hydrophones are generally divided into scalar hydrophones and vector hydrophones. Since scalar hydrophones can only measure scalar parameters of the sound field, many scholars have turned to the study of vector hydrophones. Xu et al. used alternating iterative weighted least squares to deal with the off-grid problem in sparsity-based DOA estimation for an acoustic vector hydrophone array [21]. Amiri designed a micro-electro-mechanical system (MEMS) bionic vector hydrophone with a piezoelectric gated metal oxide semiconductor field-effect transistor (MOSFET) [22]. More and more scholars have been doing research on vector arrays. Song et al. studied the measurements of an acoustic vector sensor array and, using the time-frequency spatial information of the vector sensor array signal, proposed a new method to obtain better DOA estimation performance in noisy and coherent environments [23]. Cao et al. combined elevation, azimuth and polarization estimation for electromagnetic vector sensor arrays based on nested tensor modeling [24], Baron et al. optimised, designed and validated a hydrophone array for sound source localisation in deep-sea mining [25], and Wand et al. proposed an iterative sparse covariance matrix fitting DOA estimation method based on a vector hydrophone array [26]. In recent years, some scholars have applied compressed sensing to DOA estimation. Aghababaiyan et al. proposed the three-dimensional orthogonal matching pursuit (3D-OMP) and three-dimensional focused orthogonal matching pursuit (3D-FOMP) algorithms to obtain better estimation performance in low signal-to-noise ratio and multi-source environments, and to solve the problem that conventional DOA estimation algorithms cannot distinguish two adjacent sources [27]. They also designed a new hybrid non-uniform linear array consisting of two uniform linear subarrays and proposed a new OMP-based DOA estimation method, which has lower computational complexity and higher accuracy than the FOMP algorithm, distinguishes adjacent signal sources more accurately, and solves the phase ambiguity problem [28].

    With the development of vector hydrophones, DOA estimation of vector hydrophone signals has increasingly wide application, which is of great importance for the functional extension of sonar devices [29]. Many useful estimation methods have been proposed, such as multiple signal classification (MUSIC) [30], estimation of signal parameters via rotational invariance techniques (ESPRIT) [31] and maximum likelihood (ML) [32].

    In 1988, Ziskind and Wax applied ML estimation to DOA and achieved ideal results [33]. Compared with MUSIC and ESPRIT, the ML estimation method is more effective and stable, especially at low signal-to-noise ratio (SNR) or with few snapshots. However, in ML DOA estimation, maximizing the likelihood function is a multidimensional nonlinear extremum problem that requires a multidimensional search for the global extremum, which imposes a heavy computational burden.

    Many scholars have combined various methods with ML to improve DOA estimation performance. Zheng et al. proposed a sparse iterative covariance-based estimation method combined with ML, but its resolution and stability are not high [34]; Hu et al. proposed a multi-source DOA estimation based on ML in the spherical harmonic domain [35]; Paik et al. analyzed the asymptotic performance of ML DOA estimation [36]; Selva proposed an efficient method to compute ML DOA estimates when the sensor noise powers are unknown [37]; Yoon et al. optimized the taxon addition order and branch lengths of phylogenetic trees using the maximum likelihood method [38]; and Vishnu proposed an LSF-based sinusoidal frequency estimation algorithm that achieves ML performance [39].

    In response to the complexity of the ML DOA estimation problem, some scholars have used intelligent optimization algorithms to optimize ML DOA estimation and achieved better performance. Li et al. applied the genetic algorithm to ML DOA estimation for the first time, but GA is prone to problems such as premature convergence [40]. Sharma et al. applied the PSO algorithm to ML DOA estimation, but drawbacks remain in multi-angle estimation because PSO converges slowly and tends to fall into local optima [41]. Zhang et al. combined the artificial bee colony algorithm with ML DOA estimation to reduce the computational complexity of evaluating the ML function [5], Feng et al. combined the bat algorithm with ML to optimize the multidimensional nonlinear search of the spectral function [6], Fan et al. applied the improved bee evolutionary genetic algorithm (IBEGA) to ML DOA estimation [7], Wang et al. used an improved squirrel search algorithm (ISSA) in ML DOA estimation, which reduced the computational complexity and enhanced the simulation results [42], and Cai et al. proposed a random maximum likelihood algorithm that limits the initial search space of particle swarm optimization [43].

    Since maximizing the likelihood function in ML DOA estimation is a multidimensional nonlinear extremum problem, a multidimensional search for the global extremum is required, which demands extensive computation. To solve this problem, the proposed IASO is applied to ML DOA estimation. Simulation results show that combining IASO with ML DOA estimation significantly reduces the computational complexity of the multidimensional nonlinear optimization of the ML estimator.

    The main structure of this article is as follows: Section 2 presents the improved ASO and compares the convergence performance of ASO and IASO on 23 benchmark functions; Section 3 gives the data model and ML estimation; Section 4 combines IASO with ML DOA estimation, providing simulation results to validate the convergence and statistical performance of the IASO ML estimator and comparing it with ASO, PSO, GA and SCA combined with ML DOA estimation; Section 5 concludes the paper.

    The atom search optimization (ASO) algorithm was proposed by Zhao et al. in 2019 [9]. It is a physics-inspired algorithm derived from molecular dynamics. The algorithm is simple to implement, features few parameters and good convergence performance, and has thus been used to solve a variety of optimization problems.

    The ASO algorithm is based on the interaction forces between atoms and geometric constraints, and the quality of each atom's position in the search space is measured by its mass: the heavier the atom, the better the solution. The search and optimisation process is driven by the mutual repulsion or attraction of atoms depending on the distance between them. The lighter atoms accelerate toward the heavier atoms, which widens the search area and supports broad exploration, while the heavier atoms accelerate less, concentrating the search on better solutions. Suppose that a group of atoms has N atoms and the position of the i^{th} atom is X_{i} = [x_{i}^1, x_{i}^2, \cdots, x_{i}^D] ; according to Newton's second law

    \begin{equation} F_{i}+G_{i} = m_{i}a_{i}, \end{equation} (2.1)

    where F_{i} is the total interaction force on atom i , G_{i} is the geometric binding force on atom i , and m_{i} is the mass of atom i .

    The general unconstrained optimization problem can be defined as

    \begin{equation} \min f(x), \quad x = [x^1, x^2, \cdots, x^D], \end{equation} (2.2)
    L_{b}\leq x \leq U_{b},
    L_{b} = [lb_{1}, \cdots, lb_{D}], \quad U_{b} = [ub_{1}, \cdots, ub_{D}],

    where x^d (d = 1, 2, \cdots, D) is the d^{th} component in the search space, L_{b} is the lower limit, U_{b} is the upper limit, and D is the dimension of the search space.

    The fitness value Fit_{i}(t) of each atom's position is calculated according to the user-defined fitness function. The mass of each atom can then be derived from its fitness via Eqs (2.3) and (2.4).

    \begin{equation} M_{i}(t) = e^{-\frac{Fit_{i}(t)-Fit_{best}(t)}{Fit_{worst}(t)-Fit_{best}(t)}}, \end{equation} (2.3)
    \begin{equation} m_{i}(t) = \frac{M_{i}(t)}{\sum\limits_{j = 1}^{N} M_{j}(t)}, \end{equation} (2.4)

    where Fit_{best}(t) and Fit_{worst}(t) refer to the best and worst fitness values of the atoms at the t^{th} iteration, Fit_{i}(t) is the fitness value of atom i at the t^{th} iteration, and the expressions of Fit_{best}(t) and Fit_{worst}(t) are as follows

    \begin{equation} Fit_{best}(t) = \min\limits_{i\in\{1, 2, 3, \cdots, N\}} Fit_{i}(t), \end{equation} (2.5)
    \begin{equation} Fit_{worst}(t) = \max\limits_{i\in\{1, 2, 3, \cdots, N\}} Fit_{i}(t). \end{equation} (2.6)
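    As a concrete reading of Eqs (2.3)–(2.6), the following is a minimal NumPy sketch of the mass computation (it assumes a minimization problem; the function name and the small constant guarding the denominator are ours):

```python
import numpy as np

def atom_masses(fitness):
    """Eqs (2.3)-(2.6): map a population's fitness values to normalized masses.
    Lower (better) fitness gives an exponent closer to 0, hence a larger mass."""
    fit_best, fit_worst = fitness.min(), fitness.max()                  # Eqs (2.5), (2.6)
    M = np.exp(-(fitness - fit_best) / (fit_worst - fit_best + 1e-30))  # Eq (2.3)
    return M / M.sum()                                                  # Eq (2.4)
```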

    The interaction force between atoms, derived from the LJ potential [9,11], is

    \begin{equation} F_{ij}(t) = -\eta(t)\left[2\left(h_{ij}(t)\right)^{-13}-\left(h_{ij}(t)\right)^{-7}\right], \end{equation} (2.7)
    \begin{equation} \eta(t) = \alpha\left(1-\frac{t-1}{T}\right)^3 e^{-\frac{20t}{T}}, \end{equation} (2.8)

    where \eta(t) is the depth function that adjusts the strength of repulsion and attraction, \alpha is the depth weight, T is the maximum number of iterations, and t is the current iteration. Figure 1 shows the behavior of the function F for different values of \eta , with h ranging from 0.9 to 2. It can be seen that the force is repulsive for h from 0.9 to 1.12 and attractive for h from 1.12 to 2, with equilibrium reached at h = 1.12 . Therefore, in order to improve exploration in ASO, the lower limit of repulsion is set to h = 1.1 and the upper limit of attraction to 1.24:

    \begin{equation} h_{ij}(t) = \begin{cases} h_{min}, & \frac{r_{ij}(t)}{\sigma(t)} < h_{min},\\ \frac{r_{ij}(t)}{\sigma(t)}, & h_{min}\leq \frac{r_{ij}(t)}{\sigma(t)}\leq h_{max},\\ h_{max}, & \frac{r_{ij}(t)}{\sigma(t)} > h_{max}, \end{cases} \end{equation} (2.9)
    Figure 1.  Function behaviors of F with different values of \eta .

    where h_{ij}(t) is the distance function, h_{min} = 1.1 and h_{max} = 1.24 represent the lower and upper limits of h , r_{ij} is the Euclidean distance between atom i and atom j , and \sigma(t) is defined as follows

    \begin{equation} \sigma(t) = \left\Vert x_{ij}(t), \frac{\sum\limits_{j\in KBest}x_{ij}(t)}{K(t)}\right\Vert_{2}, \end{equation} (2.10)

    where x_{ij} is the position component of the i^{th} atom in the j^{th} dimension of the search space, \|\cdot\|_{2} stands for the 2-norm, and KBest is the subset of the atom population composed of the K atoms with the best fitness values, where K(t) decreases with the iterations as

    \begin{equation} K(t) = N-(N-2) \times \sqrt{\frac{t}{T}}, \end{equation} (2.11)
    \begin{equation} \begin{cases}h_{min} = g_{0}+g(t),\\h_{max} = u,\end{cases} \end{equation} (2.12)

    where g_{0} is a drift factor that shifts the algorithm from exploration to exploitation, with

    \begin{equation} g(t) = 0.1\times\sin\left(\frac{\pi}{2}\times\frac{t}{T}\right), \end{equation} (2.13)

    Therefore, the total force acting on the i^{th} atom, given by the randomly weighted sum of the force components exerted by the atoms in KBest , can be expressed as

    \begin{equation} F_{i}^d(t) = \sum\limits_{j\in KBest} rand_{j}F_{ij}^d(t), \end{equation} (2.14)
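    Putting Eqs (2.7)–(2.14) together, the following is a sketch of the total interaction force on each atom. Our reading of Eq (2.10) takes \sigma(t) as the distance from atom i to the mean position of the KBest atoms; the helper name and the small numerical guards are ours:

```python
import numpy as np

def total_force(X, fitness, t, T, alpha=50.0, rng=None):
    """Sum of randomly weighted LJ-style forces from the KBest atoms, Eq (2.14)."""
    rng = rng or np.random.default_rng()
    N, D = X.shape
    K = max(2, int(N - (N - 2) * np.sqrt(t / T)))                # Eq (2.11)
    kbest = np.argsort(fitness)[:K]                              # K fittest atoms
    eta = alpha * (1 - (t - 1) / T) ** 3 * np.exp(-20 * t / T)   # Eq (2.8)
    F = np.zeros_like(X)
    for i in range(N):
        sigma = np.linalg.norm(X[i] - X[kbest].mean(axis=0)) + 1e-30  # Eq (2.10)
        for j in kbest:
            if j == i:
                continue
            diff = X[j] - X[i]
            r = np.linalg.norm(diff) + 1e-30                     # Euclidean distance r_ij
            h = np.clip(r / sigma, 1.1, 1.24)                    # Eq (2.9), h_min / h_max
            mag = -eta * (2.0 * h ** -13 - h ** -7)              # Eq (2.7)
            F[i] += rng.random() * mag * diff / r                # Eq (2.14)
    return F
```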

    The geometric binding force also plays an important role in ASO. Assume that each atom forms a covalent bond with each atom in KBest and is thus bound by KBest . Figure 2 shows the effect of the atomic interactions: A_{1}, A_{2}, A_{3} and A_{4} are the atoms with the best fitness values, forming KBest . The atoms in KBest attract or repel one another, and each of A_{5}, A_{6} and A_{7} is attracted or repelled by every atom in KBest . In addition, each atom in the population is bound by the optimal atom A_{1} ( X_{best} ); the binding force on the i^{th} atom is

    \begin{equation} G_{i}^d(t) = \lambda(t)(x_{best}^d(t)-x_{i}^d(t)), \end{equation} (2.15)
    \begin{equation} \lambda(t) = \beta e^\frac{-20t}{T}, \end{equation} (2.16)
    Figure 2.  Forces of an atom system with KBest for K = 5 .

    where x_{best}^d(t) is the position of the best atom at the t^{th} iteration, \beta is the multiplier weight, and \lambda(t) is the Lagrange multiplier.

    Under the action of the geometric constraint force and the interaction force, the acceleration of the i^{th} atom at time t is

    \begin{equation} \begin{array}{ll} a_{i}^d(t)& = \frac{F_{i}^d(t)+G_{i}^d(t)}{m_{i}(t)}\\ & = -\alpha\left(1-\frac{t-1}{T}\right)^3e^{-\frac{20t}{T}}\\ &\sum\limits_{j\in KBest} \frac{rand_{j}\left[2\times(h_{ij}(t))^{-13}-(h_{ij}(t))^{-7}\right]}{m_{i}(t)}\\ &\frac{(x_{j}^d(t)-x_{i}^d(t))}{\parallel {x_{i}(t),x_{j}(t)}\parallel_{2}}+\beta e^{-\frac{20t}{T}}\frac{x_{best}^d(t)-x_{i}^d(t)}{m_{i}(t)}. \end{array} \end{equation} (2.17)

    In the original ASO, the algorithm was found to be prone to local optima. Therefore, changes were made to the iterative velocity update, allowing the algorithm to escape local optima and search and optimise more broadly. The particle velocity update from PSO is adopted, and the inertia weight w is introduced into the original ASO velocity update, so that the algorithm is less prone to local optima at the start of the run, improving the performance of IASO. The addition of the learning factors c_{1} and c_{2} not only ensures convergence performance but also speeds up convergence, effectively solving the problem that the original ASO tends to fall into local optima:

    w = 0.9-0.5\times\left(\frac{t}{T}\right), c_{1} = -10\times\left(\frac{t}{T}\right)^2,
    c_{2} = 1-\left(-10\times\left(\frac{t}{T}\right)^2\right),
    \begin{equation} \begin{array}{ll} v_{i}^d(t+1) = &w\times rand_{i}^dv_{i}^d(t)+c_{1}\times rand_{i}^da_{i}^d(t)\\&+c_{2}\times rand_{i}^d(x_{best}^d(t)-x_{i}^d(t))\\ \end{array} \end{equation} (2.18)

    At the (t+1)^{th} iteration, the position updating of the i^{th} atom can be expressed as

    \begin{equation} x_{i}^d(t+1) = x_{i}^d(t)+v_{i}^d(t+1). \end{equation} (2.19)
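    To see how these schedules shift the search from exploration to exploitation, a small sketch printing w , c_{1} and c_{2} over a run with T = 200 (the iteration horizon used in the later experiments):

```python
T = 200
for t in (1, 50, 100, 150, 200):
    w = 0.9 - 0.5 * (t / T)           # inertia decays from ~0.9 to 0.4
    c1 = -10 * (t / T) ** 2           # weight on the ASO acceleration term
    c2 = 1 - c1                       # growing pull toward x_best
    print(f"t={t:3d}  w={w:.4f}  c1={c1:+.6f}  c2={c2:.6f}")
```

    At t = 1 this gives w = 0.8975 and c_{1} = -2.5\times 10^{-4} , matching the values listed later in Table 7; by t = T the x_{best} term dominates, concentrating the search around the best atom.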

    The maximum number of iterations, convergence normalisation, maximum running time and the accuracy of the fitness value are commonly used convergence criteria. In this paper, the maximum number of iterations and convergence normalisation are used as the criteria for stopping the iterations. The maximum number of iterations is 200, and the convergence normalisation test is as follows

    \begin{equation} D = \sqrt{\sum\limits_{i = 1}^n (Fit_i-\overline{Fit})^2} < \varepsilon, \end{equation} (2.20)

    where Fit_i is the fitness value of the i^{th} atom and \overline{Fit} is the average fitness value of the population; the accuracy \varepsilon is often taken as 1E-6 .
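    A direct transcription of the stopping test in Eq (2.20) (a sketch; the function name is ours):

```python
import numpy as np

def converged(fitness, eps=1e-6):
    """Eq (2.20): stop when the population's fitness spread falls below eps."""
    return np.sqrt(np.sum((fitness - fitness.mean()) ** 2)) < eps
```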

    Thus, by iterating the above operations, we can eventually find the optimal solution. Algorithm 1 shows the main steps of the IASO algorithm.

    Table 1.  Unimodal test functions F_1(x)-F_7(x) .
    Name Function n Range Optimum
    Sphere F_{1}(x)=\sum\limits_{i=1}^{n} x_{i}^{2} 30 [-100,100]^{n} 0
    Schwefel 2.22 F_{2}(x)=\sum\limits_{i=1}^{n}\mid x_{i}\mid +\prod\limits_{i=1}^{n}\mid x_{i}\mid 30 [-100,100]^{n} 0
    Schwefel 1.2 F_{3}(x)=\sum\limits_{i=1}^{n}(\sum\limits_{j=1}^{i} x_{j})^{2} 30 [-100,100]^{n} 0
    Schwefel 2.21 F_{4}(x)=\max\limits_{i}\{\mid x_{i}\mid, 1\leq i\leq n\} 30 [-100,100]^{n} 0
    Rosenbrock F_{5}(x)=\sum\limits_{i=1}^{n-1}(100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}) 30 [-200,200]^{n} 0
    Step F_{6}(x)=\sum\limits_{i=1}^{n}(x_{i}+0.5)^{2} 30 [-100,100]^{n} 0
    Quartic F_{7}(x)=\sum\limits_{i=1}^{n}ix_{i}^{4}+rand() 30 [-1.28, 1.28]^{n} 0

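    For reference, two of the unimodal benchmarks in Table 1 written as plain NumPy functions, in the form consumed by the optimizer sketches in this section (a sketch; the function names are ours):

```python
import numpy as np

def sphere(x):                          # F1: optimum 0 at x = 0
    return np.sum(x ** 2)

def quartic(x):                         # F7: quartic with additive random term
    i = np.arange(1, x.size + 1)
    return np.sum(i * x ** 4) + np.random.rand()
```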

    The pseudocode of IASO is presented in Algorithm 1.

    Algorithm 1 Pseudocode of IASO
    Begin:
    Randomly initialize a group of atoms x (solutions) and their velocity v and Fit_{best} = Inf .
    While the stop criterion is not satisfied do
      For each atom x_{i} do
        Calculate its fitness value Fit_{i} ;
        If Fit_{i} < Fit_{best} then
           Fit_{best} = Fit_{i} ;
           X_{Best} = x_{i} ;
        End If
        Calculate the mass using Eq (2.3) and Eq (2.4);
        Use K(t) = N-(N-2)\times\sqrt{\frac{t}{T}} (Eq (2.11)) to determine the K neighbors;
        Use Eq (2.14) and Eq (2.15) to calculate the interaction force and geometric restraint force;
        Calculate acceleration using formula Eq (2.17);
        Update the velocity
         v_{i}^d(t+1) = w\times rand_{i}^dv_{i}^d(t)+c_{1}\times rand_{i}^da_{i}^d(t)+c_{2}\times rand_{i}^d(x_{best}^d(t)-x_{i}^d(t)) .
        Update the position
           x_{i}^d(t+1) = x_{i}^d(t)+v_{i}^d(t+1) .
      End For.
    End while.
    Find the best solution so far X_{Best} .
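    Assembling the pieces, the following is a minimal, runnable sketch of Algorithm 1 in NumPy. It reuses the atom_masses and total_force helpers sketched earlier; the bound clipping and the default parameters ( \alpha = 50 , \beta = 0.2 , matching Table 7) are our choices:

```python
import numpy as np

def iaso(fit, dim, lb, ub, n_atoms=50, max_iter=100,
         alpha=50.0, beta=0.2, seed=0):
    """Minimal IASO following Algorithm 1: ASO forces + PSO-style velocities."""
    rng = np.random.default_rng(seed)
    X = lb + (ub - lb) * rng.random((n_atoms, dim))        # random initial atoms
    V = np.zeros_like(X)
    fitness = np.apply_along_axis(fit, 1, X)
    b = fitness.argmin()
    x_best, fit_best = X[b].copy(), fitness[b]
    for t in range(1, max_iter + 1):
        m = atom_masses(fitness)                               # Eqs (2.3)-(2.4)
        F = total_force(X, fitness, t, max_iter, alpha, rng)   # Eqs (2.7)-(2.14)
        G = beta * np.exp(-20 * t / max_iter) * (x_best - X)   # Eqs (2.15)-(2.16)
        A = (F + G) / m[:, None]                               # Eq (2.17)
        w = 0.9 - 0.5 * (t / max_iter)                         # inertia weight
        c1 = -10.0 * (t / max_iter) ** 2                       # learning factors
        c2 = 1.0 - c1
        V = (w * rng.random(X.shape) * V                       # Eq (2.18)
             + c1 * rng.random(X.shape) * A
             + c2 * rng.random(X.shape) * (x_best - X))
        X = np.clip(X + V, lb, ub)                             # Eq (2.19), in bounds
        fitness = np.apply_along_axis(fit, 1, X)
        if fitness.min() < fit_best:
            b = fitness.argmin()
            x_best, fit_best = X[b].copy(), fitness[b]
    return x_best, fit_best
```

    For example, iaso(sphere, dim=30, lb=-100.0, ub=100.0) searches F_1(x) over [-100, 100]^{30} with the population size and iteration budget used in the experiments below.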

    To test the performance of the IASO algorithm, 23 well-known benchmark functions were used; they are described in Tables 1, 2 and 3. F_1(x)-F_7(x) are unimodal functions: each has no local optima and only one global optimum, so they can verify the convergence speed of the algorithm. F_8(x)-F_{13}(x) are multimodal functions with many local optima, while F_{14}(x)-F_{23}(x) are low-dimensional functions, each with fewer local optima. The multimodal and low-dimensional functions are therefore well suited to testing the algorithm's ability to avoid local optima and to explore.

    Table 2.  Multimodal test functions F_8(x)-F_{13}(x) .
    Name Function n Range Optimum
    Schwefel F_{8}(x)=-\sum\limits_{i=1}^{n}(x_{i}\sin(\sqrt{\mid x_{i}\mid})) 30 [-500,500]^{n} -12569.5
    Rastrigin F_{9}(x)=\sum\limits_{i=1}^{n}(x_{i}^{2}-10\cos(2\pi x_{i})+10) 30 [-5.12, 5.12]^{n} 0
    Ackley F_{10}(x)=-20\exp(-0.2\sqrt{\frac{1}{n}\sum\limits_{i=1}^{n}x_{i}^{2}})-\exp(\frac{1}{n}\sum\limits_{i=1}^{n}\cos2\pi x_{i})+20+\varepsilon 30 [-32, 32]^{n} 0
    Griewank F_{11}(x)=\frac{1}{4000}\sum\limits_{i=1}^{n}x_{i}^{2}-\prod\limits_{i=1}^{n}\cos(\frac{x_{i}}{\sqrt{i}})+1 30 [-600,600]^{n} 0
    Penalized F_{12}(x)=\frac{\pi}{n}\{10\sin^{2}(\pi y_{1})+\sum\limits_{i=1}^{n-1}(y_{i}-1)^{2}[1+10\sin^{2}(\pi y_{i+1})]+(y_{n}-1)^{2}\}+\sum\limits_{i=1}^{n}u(x_{i}, 10,100, 4) 30 [-50, 50]^{n} 0
    Penalized2 F_{13}(x)=0.1\{\sin^{2}(3\pi x_{1})+\sum\limits_{i=1}^{n-1}(x_{i}-1)^{2}[1+\sin^{2}(3\pi x_{i+1})] +(x_{n}-1)^{2}[1+\sin^{2}(2\pi x_{n})]\}+\sum\limits_{i=1}^{n}u(x_{i}, 5,100, 4) 30 [-50, 50]^{n} 0

    Table 3.  Low-dimensional test functions F_{14}(x)-F_{23}(x) .
    Name Function n Range Optimum
    Foxholes F_{14}(x)=[\frac{1}{500}+\sum\limits_{j=1}^{25}\frac{1}{j+\sum\limits_{i=1}^{2}(x_{i}-a_{ij})^{6}}]^{-1} 2 [-65.536, 65.536]^{n} 0.998
    Kowalik F_{15}(x)=\sum\limits_{i=1}^{11}\mid a_{i}-\frac{x_{1}(b_{i}^{2}+b_{i}x_{2})}{b_{i}^{2}+b_{i}x_{3}+x_{4}}\mid^{2} 4 [-5, 5]^{n} 3.075\times 10^{-4}
    Six Hump Camel F_{16}(x)=4x_{1}^{2}-2.1x_{1}^{4}+\frac{1}{3}x_{1}^{6}+x_{1}x_{2}-4x_{2}^{2}+4x_{2}^{4} 2 [-5, 5]^{n} -1.0316
    Branin F_{17}(x)=(x_{2}-\frac{5.1}{4\pi^{2}}x_{1}^{2}+\frac{5}{\pi}x_{1}-6)^{2} +10(1-\frac{1}{8\pi})\cos x_{1}+10 2 [-5, 10]\times[0, 15] 0.398
    Goldstein-Price F_{18}(x)=[1+(x_{1}+x_{2}+1)^{2}(19-14x_{1}+3x_{1}^{2}-14x_{2}+6x_{1}x_{2}+3x_{2}^{2})] \times[30+(2x_{1}-3x_{2})^{2}(18-32x_{1}+12x_{1}^{2}+48x_{2}-36x_{1}x_{2}+27x_{2}^{2})] 2 [-2, 2]^{n} 3
    Hartman 3 F_{19}(x)=-\sum\limits_{i=1}^{4}c_{i}\exp[-\sum\limits_{j=1}^{3}a_{ij}(x_{j}-p_{ij})^{2}] 3 [0, 1]^{n} -3.86
    Hartman 6 F_{20}(x)=-\sum\limits_{i=1}^{4}c_{i}\exp[-\sum\limits_{j=1}^{6}a_{ij}(x_{j}-p_{ij})^{2}] 6 [0, 1]^{n} -3.322
    Shekel 5 F_{21}(x)=-\sum\limits_{i=1}^{5} [(x-a_{i})(x-a_{i})^{T}+c_{i}]^{-1} 4 [0, 10]^{n} -10.1532
    Shekel 7 F_{22}(x)=-\sum\limits_{i=1}^{7} [(x-a_{i})(x-a_{i})^{T}+c_{i}]^{-1} 4 [0, 10]^{n} -10.4028
    Shekel 10 F_{23}(x)=-\sum\limits_{i=1}^{10} [(x-a_{i})(x-a_{i})^{T}+c_{i}]^{-1} 4 [0, 10]^{n} -10.5363


    In this experiment, the population size for IASO and ASO was 50 and the maximum number of iterations was 100 . Three performance indexes are used to compare IASO and ASO: the average, the minimum and the standard deviation of the optimal solution. The smaller the average of the optimal solution, the less likely the algorithm is to fall into a local optimum and the easier it is to find the global optimum; the smaller the standard deviation, the more stable the algorithm; and the smaller the minimum, the more accurate it is. Tables 4–6 show the optimization results for the different types of functions, and the corresponding convergence curves are shown in Figures 6–8. Table 4 and Figure 6 give the optimization results and convergence curves of the unimodal functions F_1(x)-F_7(x) ; it can be seen that IASO performs better than ASO and converges faster. Table 5 and Figure 7 give the optimization results and convergence curves of the multimodal functions F_8(x)-F_{13}(x) ; the overall performance of IASO is better than that of ASO. Table 6 and Figure 8 give the optimization results and convergence curves of the low-dimensional functions F_{14}(x)- F_{23}(x) ; the convergence curves show that ASO converges faster there, but IASO is more accurate. Comparing IASO with ASO overall, the improved IASO converges much faster, is more stable, and is less likely to become trapped in a local optimum.

    Table 4.  Comparisons of results for unimodal functions.
    Function Index ASO IASO
    F_{1}(x) Mean 2.54e-12 1.88e-18
    Std 3.24e-12 1.03e-20
    Best 3.48e-15 5.22e-19
    F_{2}(x) Mean 3.33e-08 3.39e-09
    Std 1.89e-10 9.90e-12
    Best 5.11e-08 1.84e-09
    F_{3}(x) Mean 186.5664 1.06e-17
    Std 86.3065 1.23e-21
    Best 24.1115 1.81e-18
    F_{4}(x) Mean 3.24e-09 8.77e-10
    Std 6.14e-09 2.32e-12
    Best 2.13e-10 4.75e-10
    F_{5}(x) Mean 0.2905 0.0034
    Std 0.9888 0.0039
    Best 4.5370e+03 28.8627
    F_{6}(x) Mean 0 0
    Std 0 0
    Best 0 0
    F_{7}(x) Mean 0.02124 3.91e-04
    Std 0.02981 3.60e-04
    Best 0.03319 1.8710e-04

    Table 5.  Comparisons of results for multimodal functions.
    Function Index ASO IASO
    F_{8}(x) Mean -3887 -6772.47
    Std 564.7 354.77
    Best -4245 -6878.93
    F_{9}(x) Mean 0 0
    Std 0 0
    Best 0 0
    F_{10}(x) Mean 3.91e-09 8.63e-10
    Std 2.15e-09 2.68e-13
    Best 1.13e-09 7.257e-10
    F_{11}(x) Mean 0 0
    Std 0 0
    Best 0 0
    F_{12}(x) Mean 4.34e-23 3.69e-23
    Std 1.84e-22 1.51e-22
    Best 7.83e-24 6.53e-24
    F_{13}(x) Mean 2.03e-22 2.33e-23
    Std 2.83e-22 3.12e-22
    Best 1.91e-23 1.90e-23

    Figure 3.  2D plots of function F_{1}(x)-F_{7}(x) .
    Figure 4.  2D plots of function F_{8}(x)-F_{13}(x) .
    Figure 5.  2D plots of function F_{14}(x)-F_{23}(x) .
    Figure 6.  Convergence curve of the function F_{1}(x)-F_{7}(x) .
    Figure 7.  Convergence curve of the function F_{8}(x)-F_{13}(x) .
    Table 6.  Comparisons of results for low-dimensional functions.
    Function Index ASO IASO
    F_{14}(x) Mean 0.998004 0.998004
    Std 7.04e-16 4.25e-16
    Best 0.998004 0.998004
    F_{15}(x) Mean 9.47e-04 4.69e-04
    Std 2.79e-04 1.81e-04
    Best 2.79e-04 1.45e-04
    F_{16}(x) Mean -1.03163 -1.03163
    Std 0 0
    Best -1.03163 -1.03163
    F_{17}(x) Mean 0.397887 0.397887
    Std 0 0
    Best 0.397887 0.397887
    F_{18}(x) Mean 3 3
    Std 1.68e-14 1.65e-14
    Best 3 3
    F_{19}(x) Mean -3.8627 -3.8627
    Std 2.68e-15 2.53e-17
    Best -3.8627 -3.8627
    F_{20}(x) Mean -3.322 -3.322
    Std 1.12e-08 8.95e-09
    Best -3.322 -3.322
    F_{21}(x) Mean -8.7744 -9.4724
    Std 2.1867 1.3031
    Best -10.1532 -10.1532
    F_{22}(x) Mean -10.4029 -10.4029
    Std 1.84e-15 1.76e-18
    Best -10.4029 -10.4029
    F_{23}(x) Mean -10.5364 -10.5364
    Std 1.54e-15 1.62e-18
    Best -10.5364 -10.5364

    Figure 8.  Convergence curve of the function F_{14}(x)-F_{23}(x) .

    Assume that N far-field narrowband signal sources are incident on a hydrophone array of M ( M > N ) vector sensors. The incident angles are { \bf{\Theta} } = [\Theta_1, \Theta_2, \cdots, \Theta_N]^T , where \Theta_n = (\theta_n, \alpha_n)^T , (\cdot)^T denotes transposition, and \theta_n and \alpha_n are the horizontal azimuth and elevation angles of the n^{\rm{th}} incident signal, respectively; the incident wavelength is \lambda , and the distance between adjacent array elements is d . Then the signal received by the array can be expressed in vector form as

    \begin{equation} { \bf{Z} }(t) = { \bf{A} }({ \bf{\Theta} }){ \bf{S} }(t)+{ \bf{N} }(t), \end{equation} (3.1)

    where { \bf{Z} }(t) is the 4M\times 1 dimensional received signal vector and { \bf{N} }(t) is the 4M\times 1 dimensional array noise vector, assumed to be Gaussian white noise independent in both time and space. { \bf{S} }(t) is the N\times 1 dimensional signal source vector, and {\bf{A}}({\bf{\Theta }}) is the signal direction matrix of the vector hydrophone array

    \begin{align} { \bf{A} }({ \bf{\Theta} }) & = [{ \bf{a} }(\Theta_1), { \bf{a} }(\Theta_2), \cdots, { \bf{a} }(\Theta_N)] \\ & = [{ \bf{a} }_1(\Theta_1) \otimes { \bf{u} }_1, { \bf{a} }_2(\Theta_2) \otimes { \bf{u} }_2, \cdots, { \bf{a} }_N(\Theta_N) \otimes { \bf{u} }_N], \end{align} (3.2)

    where \otimes is the Kronecker product, { \bf{a} }_n(\Theta_n) = [1, e^{-j\beta_n}, e^{-j2\beta_n}, \cdots, e^{-j(M-1)\beta_n}]^T is the sound pressure corresponding to n^{\rm{th}} signal, { \bf{u} }_n = [1, \cos\theta_n \sin \alpha_n, \sin\theta_n \sin \alpha_n, \cos \alpha_n]^T is the direction vector of the n^{\rm{th}} signal source, and \beta_n = \frac{2\pi}{\lambda}d\cos \theta_n \sin \alpha_n . Then the array covariance matrix of the received signal is

    \begin{align} { \bf{R} } & = {\rm{E}}[{ \bf{Z} }(t){ \bf{Z} }^H(t)] \\ & = { \bf{A} }{\rm{E}}[{ \bf{S} }(t){ \bf{S} }^H(t)]{ \bf{A} }^H+ {\rm{E}}[{ \bf{N} }(t){ \bf{N} }^H(t)] \\ & = { \bf{A} }{ \bf{R} }_s{ \bf{A} }^H+\sigma^2 { \bf{I} }, \end{align} (3.3)

    where { \bf{R} }_s is the signal covariance matrix, \sigma^2 is the Gaussian white noise power, { \bf{I} } is the identity matrix, and (\cdot)^H is the conjugate transpose. Assume that the signals and the array are in the same plane, that is, \alpha_n = \frac{\pi}{2} , so only \theta_n is considered in this paper. In actual calculations, the received data are limited, so the array covariance matrix is estimated as

    \begin{equation} \hat{{ \bf{R} }} = \frac{1}{K}\sum\limits_{k = 1}^K { \bf{Z} }(k){ \bf{Z} }^H(k), \end{equation} (3.4)

    where K represents the number of snapshots.
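    A sketch of this data model in NumPy follows, assuming half-wavelength element spacing ( d/\lambda = 0.5 , which the text does not fix) and the in-plane case \alpha_n = \pi/2 used below; the function names are ours:

```python
import numpy as np

def steering_matrix(thetas_deg, M=10, d_over_lambda=0.5, alpha=np.pi / 2):
    """Direction matrix A(Theta) of Eq (3.2) for a ULA of M vector hydrophones."""
    cols = []
    for th in np.deg2rad(np.atleast_1d(thetas_deg)):
        beta = 2 * np.pi * d_over_lambda * np.cos(th) * np.sin(alpha)
        a_p = np.exp(-1j * beta * np.arange(M))            # pressure steering vector
        u = np.array([1.0, np.cos(th) * np.sin(alpha),     # direction vector u_n
                      np.sin(th) * np.sin(alpha), np.cos(alpha)])
        cols.append(np.kron(a_p, u))                       # Kronecker product, Eq (3.2)
    return np.stack(cols, axis=1)                          # shape (4M, N)

def sample_covariance(Z):
    """Eq (3.4): Z holds K snapshots as columns, shape (4M, K)."""
    return Z @ Z.conj().T / Z.shape[1]
```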

    By uniformly and independently sampling the received signal, the joint probability density function of the sampled data can be obtained as follows

    \begin{align} &P\left({ \bf{Z} }(1), { \bf{Z} }(2),\cdots, { \bf{Z} }(K)\right)\\ & = \prod\limits_{k = 1}^K \frac{\exp\left( -\frac{1}{\sigma^2}\|{ \bf{Z} }(k)-{ \bf{A} }(\tilde{\theta}){ \bf{S} }(k)\|^2\right)}{\det(\pi\sigma^2 { \bf{I} })}, \end{align} (3.5)

    where {\rm{det}}(\cdot) represents the determinant of a matrix, \tilde{\theta} is the unknown signal direction estimate, and P(\cdot) is a multidimensional nonlinear function of the unknown parameters \tilde{\theta}, \sigma^2 and { \bf{S} } . Taking the logarithm of Eq (3.5) gives

    \begin{align} -\ln P & = 4MK\ln\pi+4MK\ln\sigma^2 \\ &+\frac{1}{\sigma^2}\sum\limits_{k = 1}^K\|{ \bf{Z} }(k)-{ \bf{A} }(\tilde{\theta}){ \bf{S} }(k)\|^2, \end{align} (3.6)

    Taking the partial derivative of Eq (3.6) with respect to \sigma^2 and setting it to zero gives \sigma^2 = \frac{1}{4M}{\rm{tr}}\left\{{{\bf{P_A}}^\perp \hat{{ \bf{R} }}}\right\} , where {\rm{tr}}\{\cdot\} is the trace of a matrix and {\bf{P_A}}^\perp = {\bf{I}}-{\bf{A}}({\bf{A}}^H{\bf{A}})^{-1}{\bf{A}}^H is the projection onto the orthogonal complement of the column space of {\bf{A}} . With \hat{{ \bf{S} }} = {\bf{A}}^{+}{ \bf{Z} } , where {\bf{A}}^{+} = ({\bf{A}}^H{\bf{A}})^{-1}{\bf{A}}^H is the pseudo-inverse of {\bf{A}} , substituting \sigma^2 and \hat{{ \bf{S} }} into Eq (3.6) yields

    \begin{equation} \hat{\theta} = \arg \max\limits_{\tilde{\theta}} g(\tilde{\theta}), \end{equation} (3.7)

    where g(\tilde{\theta}) is the likelihood function, which can be expressed as

    \begin{equation} g(\tilde{\theta}) = {\rm{tr}} \left\{\left[{ \bf{A} }(\tilde{\theta})({ \bf{A} }^H(\tilde{\theta}){ \bf{A} }(\tilde{\theta}))^{-1}{ \bf{A} }^H(\tilde{\theta})\right]\hat{{ \bf{R} }}\right\}. \end{equation} (3.8)

    \hat{\theta} is the estimated DOA of the signal. Seeking the maximum of the likelihood function g(\tilde{\theta}) yields the set of angles achieving this value, which is the estimate sought. In order to compare the convergence of different methods, the following fitness function is defined

    \begin{equation} f(\tilde{\theta}) = |g(\tilde{\theta})-g(\theta)|, \end{equation} (3.9)

    where \theta is the known signal direction in Eq (3.1) and g(\theta) = {\rm{tr}} \left\{\left[{\bf{A}}(\theta)({\bf{A}}^H(\theta){\bf{A}}(\theta))^{-1}{\bf{A}}^H(\theta)\right]\hat{{ \bf{R} }}\right\} . Eq (3.7) can thus be expressed as

    \begin{equation} \hat{\theta} = \arg \min\limits_{\tilde{\theta}} f(\tilde{\theta}), \end{equation} (3.10)

    When f(\tilde{\theta}) is close to 0, the estimated angle is accurate.
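    The likelihood trace of Eq (3.8) and the fitness of Eq (3.9) translate directly into NumPy (a sketch reusing the steering_matrix helper above; g_true stands for the value g(\theta) at the known angles):

```python
import numpy as np

def likelihood_trace(thetas_deg, R_hat):
    """g of Eq (3.8): trace of the projection of R_hat onto the signal subspace."""
    A = steering_matrix(thetas_deg)
    P = A @ np.linalg.solve(A.conj().T @ A, A.conj().T)    # A (A^H A)^{-1} A^H
    return np.real(np.trace(P @ R_hat))

def ml_fitness(thetas_deg, R_hat, g_true):
    """Eq (3.9): distance of g(theta_tilde) from the known-angle value g(theta)."""
    return abs(likelihood_trace(thetas_deg, R_hat) - g_true)
```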

    The initial position is expressed as \theta_i = [ \theta_{i}^1, \theta_{i}^2, \cdots, \theta_{i}^d] . Taking Eq (3.9) of ML DOA estimation as the fitness function Fit_{i}(t) in IASO, the fitness function Fit_{i}(t) of Eq (2.3) in Section 2 becomes f(\tilde{\theta}) . The geometric binding force of Eq (2.15) then becomes G_{i}^d(t) = \lambda(t)(\theta_{best}^d(t)-\theta_{i}^d(t)) ; the acceleration is changed from Eq (2.17) to

    \begin{equation} \begin{array}{ll} a_{i}^d(t)& = \frac{F_{i}^d(t)+G_{i}^d(t)}{m_{i}(t)}\\ & = -\alpha\left(1-\frac{t-1}{T}\right)^3e^{-\frac{20t}{T}}\\ &\sum\limits_{j\in KBest} \frac{rand_{j}\left[2\times(h_{ij}(t))^{-13}-(h_{ij}(t))^{-7}\right]}{m_{i}(t)}\\ &\frac{(\theta_{j}^d(t)-\theta_{i}^d(t))}{\parallel {\theta_{i}(t),\theta_{j}(t)}\parallel_{2}}+\beta e^{-\frac{20t}{T}}\frac{\theta_{best}^d(t)-\theta_{i}^d(t)}{m_{i}(t)}, \end{array} \end{equation} (4.1)

    The speed update is changed from Eq (2.18) to

    \begin{equation} \begin{array}{ll} v_{i}^d(t+1)& = w\times rand_{i}^dv_{i}^d(t)+c_{1}\times rand_{i}^da_{i}^d(t)\\&+c_{2}\times rand_{i}^d(\theta_{best}^d(t)-\theta_{i}^d(t)), \end{array} \end{equation} (4.2)

    the location update is changed from Eq (2.19) to

    \begin{equation} \theta_{i}^d(t+1) = \theta_{i}^d(t)+v_{i}^d(t+1). \end{equation} (4.3)
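    A hypothetical end-to-end run tying Sections 2–4 together is sketched below. The source angles match the first simulation that follows; the noise level and random seed are illustrative, and all the helpers are the sketches given earlier:

```python
import numpy as np

rng = np.random.default_rng(1)
true_thetas = np.array([30.0, 60.0])                   # two sources, as below
A = steering_matrix(true_thetas)                       # 4M x N, with M = 10
S = (rng.standard_normal((2, 300)) + 1j * rng.standard_normal((2, 300))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((40, 300)) + 1j * rng.standard_normal((40, 300)))
R_hat = sample_covariance(A @ S + noise)               # Eq (3.4), K = 300 snapshots

g_true = likelihood_trace(true_thetas, R_hat)          # g(theta) at the known angles
est, err = iaso(lambda th: ml_fitness(th, R_hat, g_true),
                dim=2, lb=0.0, ub=180.0, n_atoms=30, max_iter=200)
print(np.sort(est), err)                               # should approach [30, 60] and 0
```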

    In this part, we demonstrate the simulation results for the iterative process and convergence performance of IASO. Then, we compare the ML DOA estimation performance of IASO with that of ASO, SCA, GA and PSO. In the experiments, the receiving array is a uniform linear array composed of 10 acoustic vector sensors, the number of snapshots is 300, and the added noise is Gaussian white noise.

    In the simulation experiment, 100 independent Monte Carlo runs are performed; the population size is 30 , the maximum number of iterations is 200 , the signal-to-noise ratio is 10 dB, and the search range is [0,180] . Taking one source \theta = [30^\circ] and two sources \theta = [30^\circ, 60^{\circ}] , respectively, the minimization curves of the fitness value are obtained. Comparing IASO with ASO, SCA, GA and PSO, Table 7 shows the parameters of the five algorithms and Figure 9 shows the fitness convergence curves.

    Table 7.  Parameter values of different algorithms for ML DOA estimator.
    Name of Parameter IASO ASO SCA GA PSO
    Problem dimension 2(3, 4) 2(3, 4) 2(3, 4) 2(3, 4) 2(3, 4)
    Population Size 30 30 30 30 30
    Maximum number of iterations 200 200 200 200 200
    Initial search area [0,180] [0,180] [0,180] [0,180] [0,180]
    Depth weight 50 50 - - -
    Multiplier weight 0.2 0.2 - - -
    Lower limit of repulsion 1.1 1.1 - - -
    Upper limit of attraction 1.24 1.24 - - -
    Acceleration factor c_{1} -2.5000e-04 - - - -
    Acceleration factor c_{2} 1.003 - - - -
    Acceleration factor w 0.8975 - - - -
    Crossover Fraction - - - 0.8 -
    Migration Fraction - - - 0.2 -
    Cognitive Constants - - - - 1.25
    Social Constants - - - - 0.5
    Inertial Weight - - - - 0.9

    Figure 9.  The curves of fitness function for ML DOA estimator with IASO, ASO, SCA, GA and PSO at SNR = 10 dB, when the number of signal sources is 1, 2, and 3, respectively.

    Figure 9 shows the fitness curves of the ML DOA estimators based on IASO, ASO, SCA, GA and PSO for 1, 2 and 3 signal sources over 200 iterations. As can be seen from the figure, regardless of whether the number of signal sources is 1, 2 or 3, IASO has the fastest convergence speed. When the number of signal sources is 1, IASO converges fastest, followed by ASO; in comparison, PSO, SCA and GA have wide convergence ranges and high fitness values, which indicates that they easily fall into local optima. When the number of signal sources is 2, IASO again converges best. When the number of signal sources is 3, IASO still remains the best, followed by ASO, while SCA, GA and PSO not only converge slowly but also easily fall into local optima: even after 200 iterations, their fitness functions cannot converge to 0.

    In order to compare the statistical performance of the different algorithms and their relationship to the Cramér–Rao bound (CRB), we compared the estimation accuracy of the algorithms using the root mean square error (RMSE). The population size is 30 and the maximum number of iterations is 200.

    \begin{equation} {\rm{RMSE}} = \sqrt{\frac{1}{N\cdot L}\sum\limits_{l = 1}^L\sum\limits_{i = 1}^N \left[\hat{\theta}_i(l)-\theta_i\right]^2}, \end{equation} (4.4)

    where L is the number of experiments, \theta_i is the DOA of the i^{th} signal, N is the number of signals, and \hat{\theta}_i(l) denotes the estimate of the i^{th} DOA obtained in the l^{th} experiment.
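    Eq (4.4) as a small NumPy helper (a sketch; estimates holds one row of N estimated angles per Monte Carlo run):

```python
import numpy as np

def rmse(estimates, true_thetas):
    """Eq (4.4): root mean square error over L runs of N estimated DOAs."""
    E = np.asarray(estimates)                       # shape (L, N)
    return np.sqrt(np.mean((E - np.asarray(true_thetas)) ** 2))
```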

    Figure 10 shows the RMSE of the ML DOA estimators based on the five algorithms IASO, ASO, SCA, GA and PSO when the number of signal sources is 1, 2 and 3, as the signal-to-noise ratio changes from -20 dB to 20 dB. It can be seen that the performance of IASO is more stable regardless of the number of sources and is closer to the CRB. When the number of signal sources is 3, the estimation performance of all the algorithms decreases, but the IASO-based DOA estimation is still closest to the CRB, followed by ASO, GA, PSO and SCA. The SCA performs well at low signal-to-noise ratios but poorly at high signal-to-noise ratios. The ML DOA estimation performance of PSO and GA is poor, and their fitness functions have difficulty converging to the global optimum; even after 200 iterations, a large RMSE remains.

    Figure 10.  CRB and RMSE curve of ML DOA estimator with IASO, ASO, SCA, GA and PSO as SNR changes from -20dB to 20dB, when the number of signal sources is 1, 2 and 3, respectively.

    Population size is one of the most important parameters in evolutionary algorithms. In general, the estimation accuracy of intelligent algorithms improves as the population size increases; however, a larger population also increases the computational load. For the ML DOA problem, the population size determines the number of likelihood function evaluations in each iteration. This highlights the need for an algorithm that achieves high estimation accuracy with a small population size.

    Figure 11 shows the RMSE curves of the ML DOA estimators based on IASO, ASO, SCA, GA and PSO as the population size ranges from 10 to 100. As can be seen from the figure, IASO maintains a low RMSE and high estimation accuracy, staying close to the CRB regardless of the number of signal sources. With one signal source, the RMSE of IASO, ASO, SCA and PSO is similar, while GA keeps a relatively high RMSE when the population size is below 50. With two signal sources, IASO maintains a lower RMSE, ASO is somewhat unstable, and PSO and GA still keep a higher RMSE. With three signal sources, only IASO achieves a low RMSE, while ASO, PSO, SCA and GA all have higher RMSE. This shows that even with a population size of 100, GA and PSO not only need a large population but also a large number of iterations. For ML DOA estimation with 1, 2 or 3 signal sources, the IASO algorithm can accurately estimate the DOA with a smaller population, requiring less computational effort.

    Figure 11.  CRB and RMSE curve of ML DOA estimator with IASO, ASO, SCA, GA and PSO as population size changes from 10 to 100, when the number of signal sources is 1, 2 and 3, respectively.

    In addition to the convergence and statistical performance described above, the quality of an algorithm can also be judged by its computational complexity. The computational complexity is independent of the number of signal sources; rather, it is related to the maximum number of iterations, the population size and the size of the search space. Figure 12 reports the average number of iterations required to satisfy the stopping criterion of Eq (2.20) with \varepsilon = 1e-6 .

    Figure 12 shows the average iteration count curves of the different algorithms over 100 independent Monte Carlo experiments. It can be seen that the IASO algorithm requires the fewest iterations as the signal-to-noise ratio ranges from -20 dB to 20 dB with a cap of 200 iterations, followed by ASO, while PSO, GA and SCA require at least 100 iterations; the number of IASO iterations is significantly lower than that of the other algorithms. In general, IASO requires the fewest iterations across both the SNR curve and the population-size curve, whereas ASO, SCA, GA and PSO still need more iterations to find the optimal solution as the SNR and population size vary. As a result, IASO has the lowest computational load.

    Figure 12.  The average iteration number of different algorithms with 3 signal sources as SNR changes from -20dB to 20dB, the population size changes from 10 to 100, respectively.

    This paper proposes an improved ASO, employing 23 test functions to compare IASO with ASO, and finds that IASO overcomes ASO's tendency to fall into local optima and its poor convergence performance. ML DOA estimation is in theory a high-resolution method, but its huge computational burden hinders practical application. In this paper, IASO is applied to ML DOA estimation and simulation experiments are carried out. The results show that, compared with the ASO, SCA, GA and PSO methods, the proposed IASO-based ML DOA estimator has faster convergence speed, lower RMSE and lower computational complexity.

    This research was funded by National Natural Science Foundation of China (Grant No. 61774137, 51875535 and 61927807), Key Research and Development Foundation of Shanxi Province(Grant No. 201903D121156), and Shanxi Scholarship Council of China (Grant No. 2020-104 and 2021-108). The authors express their sincere thanks to the anonymous referee for many valuable comments and suggestions.

    The authors declare that they have no conflict of interest.



    [1] B. Jiao, Z. Lian, X. Gu, A dynamic inertia weight particle swarm optimization algorithm, Chaos Soliton Fract, 37 (2008), 698–705. https://doi.org/10.1016/j.chaos.2006.09.063 doi: 10.1016/j.chaos.2006.09.063
    [2] D. E. Goldberg, Genetic algorithm in search optimization and machine learning, Addison Wesley, 8 (1989), 2104–2116. https://dl.acm.org/doi/book/10.5555/534133 doi: 10.5555/534133
    [3] S. Kirpatrick, C. D. Gelatt, M. P. Vecchi, Optimization by simulated annealing, Readings Computer Vision, 220 (1983), 671–680. https://doi.org/10.1126/science.220.4598.671 doi: 10.1126/science.220.4598.671
    [4] S. Mirjalili, SCA: A Sine Cosine Algorithm for solving optimization problems, Knowl-based Syst., 96 (2016), 120–133. https://doi.org/10.1016/j.knosys.2015.12.022 doi: 10.1016/j.knosys.2015.12.022
    [5] Z. Zhang, J. Lin, Y. Shi, Application of artificial bee colony algorithm to maximum likelihood DOA estimation, J. Bionic. Eng., 10 (2013), 100–109. https://doi.org/10.1016/S1672-6529(13)60204-8 doi: 10.1016/S1672-6529(13)60204-8
    [6] S. Feng, Z. Zhang, Y. Shi, Introduction of bat algorithm into maximum likelihood DOA estimation, Modern Electronics Technique, 39 (2016), 26–29. https://doi.org/10.16652/j.issn.1004-373x.2016.08.007 doi: 10.16652/j.issn.1004-373x.2016.08.007
    [7] X. Fan, L. Pang, P. Shi, G. Li, X. Zhang, Application of bee evolutionary genetic algorithm to maximum likelihood direction-of-arrival estimation, Math. Probl. Eng., 2019 (2019), 1–11. https://doi.org/10.1155/2019/6035870 doi: 10.1155/2019/6035870
    [8] M. Jain, V. Singh, A. Rani, A novel nature-inspired algorithm for optimization: Squirrel search algorithm, Swarm Evol. Comput., 44 (2018), 148–175. https://doi.org/10.1016/j.swevo.2018.02.013 doi: 10.1016/j.swevo.2018.02.013
    [9] W. Zhao, L. Wang, Z. Zhang, Atom search optimization and its application to solve a hydrogeologic parameter estimation problem, Knowl-based Syst., 163 (2018), 283–304. https://doi.org/10.1016/j.knosys.2018.08.030 doi: 10.1016/j.knosys.2018.08.030
    [10] H. C. Corben, P. Stehle, Classical Mechanics, Physics Today, 6 (1953). https://doi.org/10.1063/1.3061288 doi: 10.1063/1.3061288
    [11] J. P. Ryckaert, G. Ciccotti, H. J. C Berendsen, Numerical integration of the cartesian equations of motion of a system with constraints: Molecular dynamics of n-alkanes, J. Comput. Phys., 23 (1977), 327–341. https://doi.org/10.1016/0021-9991(77)90098-5 doi: 10.1016/0021-9991(77)90098-5
    [12] A. Stone, The theory of intermolecular forces, Pure. Appl. Chem., 51 (1979), 1627–1636. https://doi.org/10.1351/pac197951081627 doi: 10.1351/pac197951081627
    [13] J. E. Jones, On the determination of molecular fields Ⅱ. From the equation of state of a gas, P. Roy. Soc. A-Math. Phy., 106 (1924), 463–477. https://doi.org/10.2307/94265 doi: 10.2307/94265
    [14] W. Zhao, L. Wang, Z. Zhang, A novel atom search optimization for dispersion coefficient estimation in groundwater, Future Gener Comp. Sy., 91 (2018), 601–610. https://doi.org/10.1016/j.future.2018.05.037 doi: 10.1016/j.future.2018.05.037
    [15] A. M. Agwa, A. A. El-Fergany, G. M. Sarhan, Steady-State modeling of fuel cells based on atom search optimizer, Energies, 12 (2019), 1884. https://doi.org/10.3390/en12101884 doi: 10.3390/en12101884
    [16] A. Almagboul Mohammed, F. Shu, Y. Qian, X. Zhou, J. Wang, J. Hu, Atom search optimization algorithm based hybrid antenna array receive beamforming to control sidelobe level and steering the null, Aeu-int J. Electron. C., 111 (2019), 152854. https://doi.org/10.1016/j.aeue.2019.152854 doi: 10.1016/j.aeue.2019.152854
    [17] S. Barshandeh, A new hybrid chaotic atom search optimization based on tree-seed algorithm and Levy flight for solving optimization problems, Eng. Comput-germany, 37 (2021), 3079–3122. https://doi.org/10.1007/s00366-020-00994-0 doi: 10.1007/s00366-020-00994-0
    [18] K. K. Ghosh, R. Guha, S. Ghosh, S. K. Bera, R. Sarkar, Atom search optimization with simulated annealing - a hybrid metaheuristic approach for feature selection, arXiv preprint arXiv:2005.08642, (2020). https://arxiv.org/abs/2005.08642
    [19] M. A. Elaziz, N. Nabil, A. A. Ewees, S. Lu, Automatic data clustering based on hybrid atom search optimization and Sine-Cosine algorithm, 2019 IEEE Congress on Evolutionary Computation (CEC), (2019), 2315–2322. https://doi.org/10.1109/CEC.2019.8790361 doi: 10.1109/CEC.2019.8790361
    [20] P. Sun, H. Liu, Y. Zhang, L. Tu, Q. Meng, An intensify atom search optimization for engineering design problems, Appl. Math. Model., 89 (2021), 837–859. https://doi.org/10.1016/j.apm.2020.07.052 doi: 10.1016/j.apm.2020.07.052
    [21] L. Xu, J. Chen, Y. Gao, Off-Grid DOA estimation based on sparse representation and rife algorithm, Microelectron J., 59 (2017), 193–201. https://doi.org/10.2528/PIERM17070404 doi: 10.2528/PIERM17070404
    [22] A. Peyman, Z. Kordrostami, K. Hassanli, Design of a MEMS bionic vector hydrophone with piezo-gated MOSFET readout, Prog. Electromagn Res. M., 98 (2020), 104748. https://doi.org/10.1016/j.mejo.2020.104748 doi: 10.1016/j.mejo.2020.104748
    [23] H. Song, M. Diao, T. Tang, J. Qin, Vector-Sensor Array DOA Estimation Based on Spatial Time-Frequency Distribution, 2018 Eighth International Conference on Instrumentation & Measurement, Computer, Communication and Control (IMCCC), (2020), 1351–1356. https://doi.org/10.1109/IMCCC.2018.00280 doi: 10.1109/IMCCC.2018.00280
    [24] M. Cao, X. Mao, L. Huang, Elevation, azimuth, and polarization estimation with nested electromagnetic vector-sensor arrays via tensor modeling, Eurasip J. Wirel. Comm., 2020 (2020), 153. https://doi.org/10.1186/s13638-020-01764-8 doi: 10.1186/s13638-020-01764-8
    [25] V. Baron, A. Finez, S. Bouley, F. Fayet, J. I. Mars, B. Nicolas, Hydrophone array optimization, conception, and validation for localization of acoustic sources in deep-Sea mining, IEEE J. Oceanic. Eng., 46 (2021), 555–563. https://doi.org/10.1109/JOE.2020.3004018 doi: 10.1109/JOE.2020.3004018
    [26] W. Wand, Q. Zhang, W. Shi, J. Shi, X. Wang, Iterative sparse covariance matrix fitting direction of arrival estimation method based on vector hydrophone array, Xibei Gongye Daxue Xuebao, 38 (2020), 14–23. https://doi.org/10.1051/jnwpu/20203810014 doi: 10.1051/jnwpu/20203810014
    [27] K. Aghababaiyan, R. G. Zefreh, V. Shah-Mansouri, 3D-OMP and 3D-FOMP algorithms for DOA estimation, Phys. Commun-amst, 31 (2018), 87–95. https://doi.org/10.1016/j.phycom.2018.10.005 doi: 10.1016/j.phycom.2018.10.005
    [28] K. Aghababaiyan, V. Shah-Mansouri, B. Maham, High-precision OMP-based direction of arrival estimation scheme for hybrid non-uniform array, IEEE Commun. Lett., 24 (2019), 354–357. https://doi.org/10.1109/LCOMM.2019.2952595 doi: 10.1109/LCOMM.2019.2952595
    [29] A. Nehorai, E. Paldi, Acoustic vector-sensor array processing, IEEE T. Signal. Proces., 42 (1994), 2481–2491. https://doi.org/10.1109/ACSSC.1992.269285 doi: 10.1109/ACSSC.1992.269285
    [30] K. T. Wong, M. D. Zoltowski, Root-MUSIC-based azimuth-elevation angle-of-arrival estimation with uniformly spaced but arbitrarily oriented velocity hydrophones, IEEE T. Signal. Proces., 47 (1999), 3250–3260. https://doi.org/10.1109/78.806070 doi: 10.1109/78.806070
    [31] K. T. Wong, M. D. Zoltowski, Uni-vector-sensor ESPRIT for multisource azimuth, elevation, and polarization estimatio, IEEE T. Antenn. Propag., 45 (1997), 1467–1474. https://doi.org/10.1109/8.633852 doi: 10.1109/8.633852
    [32] I. Ziskind, M. Wax, Maximum likelihood localization of multiple sources by alternating projection, IEEE Trans. Acoust. Speech Signal Process, 36 (1988), 1553–1560. https://doi.org/10.1109/29.7543 doi: 10.1109/29.7543
    [33] M. Feder, E. Weinstein, Parameter estimation of superimposed signals using the EM algorithm, IEEE Trans. Acoust. Speech Signal Process, 36 (1988), 477–489. https://doi.org/10.1109/29.1552 doi: 10.1109/29.1552
    [34] Y. Zheng, L. Liu, X. Yang, SPICE-ML Algorithm for Direction-of-Arrival Estimation, Sensors, 20 (2019), 119. https://doi.org/10.3390/s20010119 doi: 10.3390/s20010119
    [35] Y. Hu, J. Lu, X. Qiu, Direction of arrival estimation of multiple acoustic sources using a maximum likelihood method in the spherical harmonic domain, Appl. Acoust., 135 (2018), 85–90. https://doi.org/10.1016/j.apacoust.2018.02.005 doi: 10.1016/j.apacoust.2018.02.005
    [36] J. W. Paik, K. H. Lee, J. H. Lee, Asymptotic performance analysis of maximum likelihood algorithm for direction-of-arrival estimation: Explicit expression of estimation error and mean square error, Applied Sciences, 10 (2020), 2415. https://doi.org/10.3390/app10072415 doi: 10.3390/app10072415
    [37] J. Selva, Efficient ML direction of arrival estimation assuming unknown sensor noise powers, arXiv preprint arXiv:2001.01935, (2020). https://arxiv.org/abs/2001.01935
    [38] Y. Yoon, Y. H. Kim, Optimizing taxon addition order and branch lengths in the construction of phylogenetic trees using maximum likelihood, J. Bioinf. Comput. Biol., 18 (2020), 837–859. https://doi.org/10.1142/S0219720020500031 doi: 10.1142/S0219720020500031
    [39] P. Vishnu, C. S. Ramalingam, An improved LSF-based algorithm for sinusoidal frequency estimation that achieves maximum likelihood performance, 2020 International Conference on Signal Processing and Communications (SPCOM), (2020), 1–5. https://doi.org/10.1109/SPCOM50965.2020.9179546
    [40] M. Li, Y. Lu, Genetic algorithm based maximum likelihood DOA estimation, RADAR 2002, 2002 (2002), 502–506. https://doi.org/10.1109/RADAR.2002.1174766 doi: 10.1109/RADAR.2002.1174766
    [41] A. Sharma, S. Mathur, Comparative analysis of ML-PSO DOA estimation with conventional techniques in varied multipath channel environment, Wireless Pers. Commun., 100 (2018), 803–817. https://doi.org/10.1007/s11277-018-5350-0 doi: 10.1007/s11277-018-5350-0
    [42] P. Wang, Y. Kong, X. He, M. Zhang, X. Tan, An improved squirrel search algorithm for maximum likelihood DOA estimation and application for MEMS vector hydrophone array, IEEE Access, 7 (2019), 118343–118358. https://doi.org/10.1109/ACCESS.2019.2936823 doi: 10.1109/ACCESS.2019.2936823
    [43] L. Cai, H. Tian, H. Chen, J. Hu, A random maximum likelihood algorithm based on limited PSO initial space, Computer Modernization, 282 (2019), 60–65. https://doi.org/10.3969/j.issn.1006-2475.2019.02.011 doi: 10.3969/j.issn.1006-2475.2019.02.011
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)