
A ferrofluid is a stable colloid of tiny magnetic particles, typically magnetite (Fe3O4), coated with an amphiphilic dispersion layer and suspended in a suitable liquid carrier. Current industrial uses of ferrofluids include dynamic sealing, inertial and viscous damping, magnetic drug targeting, liquid microrobots, etc. In this article, we study heat transfer and MHD micropolar ferrofluid flow caused by a non-linearly stretching surface. Results are presented for a hybrid alumina-copper/ethylene glycol (Al2O3-Cu/EG) nanofluid. The governing non-linear flow equations are transformed into a system of ordinary differential equations using similarity transformations. Using the bvp4c method, the microstructure and inertial properties of a magnetite ferrofluid over a non-linearly stretched sheet are studied. The influence of the relevant parameters on the stream function, velocity, micro-rotation velocity, and temperature is obtained and represented graphically. The computed results are original; it is observed that increasing the magnetic parameter decreases the stream function and velocity while increasing the temperature and micro-rotation velocity. As the Prandtl number increases, the temperature profile decreases. The Nusselt number, i.e., the heat transfer rate, of the hybrid nanofluid is found to be higher than that of the nanofluid flow.
Citation: Abdul Rauf, Nehad Ali Shah, Aqsa Mushtaq, Thongchai Botmart. Heat transport and magnetohydrodynamic hybrid micropolar ferrofluid flow over a non-linearly stretching sheet[J]. AIMS Mathematics, 2023, 8(1): 164-193. doi: 10.3934/math.2023008
Intelligent optimization algorithms are increasingly popular in intelligent computing and are widely applied in many other fields, including engineering, medicine, ecology and environment, marine engineering, and so forth. Classical intelligent optimization algorithms include the particle swarm optimization (PSO) algorithm [1], the genetic algorithm (GA) [2], simulated annealing (SA) [3], etc. PSO is a swarm intelligence optimization algorithm that simulates bird predation: each bird is treated as a particle, and the particles follow the current optimal particle to find the optimal solution in the solution space. GA is a computational model based on Darwin's theory of evolution and Mendelian genetics that simulates biological evolution; it obtains optimal solutions through three basic operations on chromosomes: selection, crossover and mutation. SA was the first natural algorithm proposed to simulate the high-temperature annealing of metallic materials: when a metal is heated to a high temperature and then cooled slowly, its particles eventually reach a state of equilibrium and solidify into crystals of minimal energy. Many scholars have also developed bionic intelligent optimization algorithms based on these classical algorithms, such as the Sine Cosine Algorithm (SCA) [4], the Artificial Bee Colony algorithm (ABC) [5], the Bat Algorithm (BA) [6], the Bee Evolutionary Genetic Algorithm (BEGA) [7], the Squirrel Search Algorithm (SSA) [8], the Atom Search Optimization (ASO) algorithm [9], etc.
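Since the velocity-update idea of PSO is reused later in this paper, a minimal PSO sketch is given below. The parameter values ( w , c_{1} , c_{2} , population size) are illustrative choices, not values taken from any of the cited works:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lb=-5.0, ub=5.0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization sketch (illustrative only)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]               # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # each particle follows its own best and the current global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

best_x, best_f = pso(lambda x: np.sum(x ** 2), dim=5)
```

On the 5-dimensional sphere function the swarm converges toward the global minimum at the origin.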
The Atom Search Optimization (ASO) algorithm is an intelligent optimization algorithm derived from molecular dynamics that was proposed in 2019. ASO is driven by the geometric binding force and the interaction force between atoms, following the laws of classical mechanics [10,11]: the interaction force is derived from the Lennard-Jones (LJ) potential [12,13], and the geometric binding force from the covalent bonds among atoms. In ASO, atoms represent solutions in the search space: the larger the atomic mass, the better the solution, and vice versa. Compared with traditional intelligent algorithms, ASO requires fewer physical parameters and can achieve better performance. As a result, it has been widely used in various fields. Zhang et al. applied ASO to hydrogeological parameter estimation [9] and to the calculation of groundwater dispersion coefficients [14]. Ahmed et al. utilized ASO in fuel cell modeling and successfully built an accurate model; simulations showed that it matched commercial proton exchange membrane (PEM) fuel cells [15]. Mohammed et al. used ASO to reduce the peak sidelobe level of a beam pattern [16]. Saeid combined ASO with the Tree Seed Algorithm (TSA) to enhance its performance in exploration [17]. Ghosh et al. proposed an improved binary ASO combined with the simulated annealing (SA) technique [18]. Elaziz et al. proposed an automatic clustering algorithm combining ASO and SCA to automatically find the optimal number of clusters and the corresponding positions [19]. Sun et al. applied an improved ASO to engineering design [20].
Since ASO is prone to finding only locally optimal solutions with low accuracy, an Improved Atom Search Optimization (IASO) algorithm based on particle velocity updating is proposed in this paper. IASO follows the same principle as ASO, but its velocity iteration is modified to improve the convergence speed, avoid local optima, and allow a broader search for optimal solutions. IASO adopts the particle velocity update idea of PSO and introduces an inertia weight w to improve the performance of ASO. In addition, learning factors c_{1} and c_{2} are added, which not only preserve the convergence behavior of the algorithm but also accelerate convergence, effectively addressing the original ASO's tendency to find only locally optimal solutions.
Array signal processing is one of the important research directions of signal processing and has developed rapidly in recent years. DOA estimation of signals is an ongoing research hotspot in array signal processing and has great potential for hydrophone applications. Hydrophones are generally divided into scalar hydrophones and vector hydrophones. Because scalar hydrophones can only measure scalar quantities of the sound field, many scholars have turned to the study of vector hydrophones. Xu et al. used alternating iterative weighted least squares to deal with the off-grid problem of sparsity-based DOA estimation for acoustic vector hydrophone arrays [21]. Amiri designed a micro-electro-mechanical system (MEMS) bionic vector hydrophone with a piezoelectric gated metal oxide semiconductor field-effect transistor (MOSFET) [22]. More and more scholars have been doing research on vector arrays. Song et al. studied the measurements of an acoustic vector sensor array and proposed a new method that exploits the time-frequency spatial information of the array signal to obtain better DOA estimation performance in noisy and coherent environments [23]. Gao et al. combined elevation, azimuth and polarization for the estimation of electromagnetic vector sensor arrays based on nested tensor modeling [24]. Baron et al. optimized, conceptualized and evaluated a hydrophone array for sound source localization validation in deep-sea mining [25], and Wang et al. proposed an iterative sparse covariance matrix fitting direction estimation method based on a vector hydrophone array [26]. In recent years, some scholars have applied compressed sensing to signal DOA estimation.
Keyvan et al. proposed the Three-Dimensional Orthogonal Matching Pursuit (3D-OMP) algorithm and the Three-Dimensional Focused Orthogonal Matching Pursuit (3D-FOMP) algorithm to obtain better estimation performance in low signal-to-noise-ratio and multi-source environments, and to solve the problem that conventional DOA estimation algorithms cannot distinguish between two adjacent sources [27]. Keyvan et al. also designed a new hybrid nonuniform linear array consisting of two uniform linear subarrays and proposed a new DOA estimation method based on the OMP algorithm. This method has lower computational complexity and higher accuracy than the FOMP algorithm; it can distinguish adjacent signal sources more accurately and solves the phase ambiguity problem [28].
With the development of vector hydrophones, DOA estimation of vector hydrophone signals has found increasingly wide application and is of great importance for the functional extension of sonar devices [29]. Many useful estimation methods have been proposed, including multiple signal classification (MUSIC) [30], estimation of signal parameters via rotational invariance techniques (ESPRIT) [31] and maximum likelihood (ML) [32].
In 1988, Ziskind and Wax applied ML estimation to DOA and achieved ideal results [33]. Compared with MUSIC and ESPRIT, the ML estimation method is more effective and stable, especially at low signal-to-noise ratios (SNR) or with few snapshots. However, in ML DOA estimation, maximizing the likelihood function is a multidimensional nonlinear extremum problem that requires a multidimensional search for the global extremum, which increases the computational burden.
Many scholars have combined various methods with ML to improve DOA estimation performance. Zhang et al. proposed a sparse iterative covariance-based estimation method and combined it with ML to improve its performance, although its resolution and stability are not high [34]. Hu et al. proposed multi-source DOA estimation based on ML in the spherical harmonic domain [35]. Ji analyzed the asymptotic performance of ML DOA estimation [36]. Selva proposed an effective method for computing ML DOA estimates when the sensor noise power is unknown [37]. Yoon et al. optimized the sequence and branch lengths of taxa in phylogenetic trees using the maximum likelihood method [38]. Vishnu proposed a line spread function (LSF)-based sinusoidal frequency estimation algorithm to improve ML DOA performance [39].
In response to the complexity of ML DOA estimation, some scholars have used intelligent optimization algorithms to optimize it and achieved better performance. Li et al. first applied the genetic algorithm to ML DOA estimation, but GA is prone to problems such as premature convergence [40]. Sharma et al. applied the PSO algorithm to ML DOA estimation, but drawbacks remain in the estimation of multiple direction angles because PSO converges slowly and tends to fall into local optima [41]. Zhang et al. combined the artificial bee colony algorithm with ML DOA estimation to reduce the computational complexity of evaluating the ML function [5]. Feng et al. combined the bat algorithm with ML to optimize the multidimensional nonlinear estimation of the spectral function [6]. Fan et al. applied the Improved Bee Evolutionary Genetic Algorithm (IBEGA) to ML DOA estimation [7]. Wang et al. used an improved squirrel search algorithm (ISSA) in ML DOA estimation, which reduced computational complexity and enhanced simulation results [42], and Li et al. proposed a particle swarm optimization variant with a restricted search space [43].
Since calculating the likelihood function for maximum likelihood DOA estimation is a multidimensional nonlinear extremum problem, a multidimensional search for the global extremum is required, which demands extensive computation. To solve this problem, the proposed IASO is applied to ML DOA estimation. Simulation results show that combining IASO with ML DOA estimation significantly reduces the computational complexity of the multidimensional nonlinear optimization in ML estimation.
The main structure of this article is as follows: Section 2 presents the improved ASO and compares the convergence performance of ASO and IASO on 23 benchmark functions; Section 3 gives the data model and ML estimation; Section 4 combines IASO with ML DOA estimation, providing simulation results to validate the convergence and statistical performance of IASO ML estimation and comparing it with ASO, PSO, GA and SCA combined with ML DOA estimation; Section 5 concludes the paper.
The Atom Search Optimization (ASO) algorithm was proposed by Zhao et al. in 2019. It is a physics-inspired algorithm derived from molecular dynamics. The algorithm is simple to implement, has few parameters and good convergence performance, and has therefore been used to solve a variety of optimization problems.
The ASO algorithm is based on the interaction forces between atoms and geometric constraints, and the quality of each atom's position in the search space is measured by its mass: the heavier the atom, the better the solution. The search and optimization process relies on the mutual repulsion or attraction of atoms, depending on the distance between them. Lighter atoms accelerate toward heavier atoms, which widens the search area and supports broad exploration, while heavier atoms have smaller accelerations, concentrating the search on better solutions. Suppose a group of atoms contains N atoms and the position of the i^{th} atom is X_{i} = [x_{i}^1, x_{i}^2, \cdots, x_{i}^D] ; then, according to Newton's second law,
\begin{equation} F_{i}+G_{i} = m_{i}a_{i}, \end{equation} | (2.1) |
where F_{i} is the total interaction force acting on atom i , G_{i} is the geometric binding force on atom i , and m_{i} is the mass of atom i .
The general unconstrained optimization problem can be defined as
\begin{equation} \min f(x),\quad x = [x_{1},x_{2},\cdots,x_{D}], \end{equation} | (2.2) |
Lb\leq x\leq Ub, |
Lb = [lb_{1},\cdots,lb_{D}],\quad Ub = [ub_{1},\cdots,ub_{D}], |
where x_{d}\;(d = 1,2,\cdots,D) is the d^{th} component of the search space, Lb is the lower bound, Ub is the upper bound, and D is the dimension of the search space.
The fitness value Fit_{i}(t) of each atom's position is calculated according to a user-defined fitness function. The mass of each atom is given by Eqs (2.3) and (2.4), derived from its fitness.
\begin{equation} M_{i}(t) = e^{-\frac{Fit_{i}(t)-Fit_{best}(t)}{Fit_{worst}(t)-Fit_{best}(t)}}, \end{equation} | (2.3) |
\begin{equation} m_{i}(t) = \frac{M_{i}(t)}{\sum\limits_{j=1}^{N}M_{j}(t)}, \end{equation} | (2.4) |
where Fit_{best}(t) and Fit_{worst}(t) are the best and worst fitness values of the atoms at the t^{th} iteration, Fit_{i}(t) is the fitness value of atom i at the t^{th} iteration, and Fit_{best}(t) and Fit_{worst}(t) are given by
\begin{equation} Fit_{best}(t) = \min\limits_{i\in \{1,2,3,\cdots,N\}}Fit_{i}(t), \end{equation} | (2.5) |
\begin{equation} Fit_{worst}(t) = \max\limits_{i\in \{1,2,3,\cdots,N\}}Fit_{i}(t). \end{equation} | (2.6) |
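Eqs (2.3)–(2.6) can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; it assumes at least two distinct fitness values so the denominator in Eq (2.3) is non-zero:

```python
import numpy as np

def atom_masses(fitness):
    """Normalized atomic masses from fitness values, following Eqs
    (2.3)-(2.6): a smaller (better) fitness yields a larger mass."""
    fit = np.asarray(fitness, dtype=float)
    fit_best, fit_worst = fit.min(), fit.max()               # Eqs (2.5), (2.6)
    M = np.exp(-(fit - fit_best) / (fit_worst - fit_best))   # Eq (2.3)
    return M / M.sum()                                       # Eq (2.4)

m = atom_masses([3.0, 1.0, 2.0, 5.0])
# the atom with fitness 1.0 receives the largest mass, the one with 5.0 the smallest
```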
The interaction force between atoms, obtained from the literature [9,11] by modifying the LJ potential, is
\begin{equation} F_{ij}(t) = -\eta(t)\left[2(h_{ij}(t))^{-13}-(h_{ij}(t))^{-7}\right], \end{equation} | (2.7) |
\begin{equation} \eta(t) = \alpha\left(1-\frac{t-1}{T}\right)^{3}e^{-\frac{20t}{T}}, \end{equation} | (2.8) |
where \eta(t) is the depth function that adjusts the repulsive and attractive forces, \alpha is the depth weight, T is the maximum number of iterations, and t is the current iteration. Figure 1 shows the behavior of the force function for h ranging from 0.9 to 2 under different values of \eta(t) . It can be seen that the force is repulsive for h from 0.9 to 1.12, attractive for h from 1.12 to 2, and in equilibrium at h = 1.12 . Therefore, in order to improve exploration in ASO, the lower limit of h (repulsion) is set to h_{min} = 1.1 and the upper limit (attraction) to h_{max} = 1.24 .
\begin{equation} h_{ij}(t) = \begin{cases} h_{min}, & \frac{r_{ij}(t)}{\sigma(t)} < h_{min}, \\ \frac{r_{ij}(t)}{\sigma(t)}, & h_{min}\leq \frac{r_{ij}(t)}{\sigma(t)} \leq h_{max}, \\ h_{max}, & \frac{r_{ij}(t)}{\sigma(t)} > h_{max}, \end{cases} \end{equation} | (2.9) |
where h_{ij}(t) is the distance function, h_{min} = 1.1 and h_{max} = 1.24 represent the lower and upper limits of h , r_{ij} is the Euclidean distance between atom i and atom j , and \sigma(t) is defined as follows
\begin{equation} \sigma(t) = \left\|{x_{ij}(t)},\frac{\sum\limits_{j\in KBest}x_{ij}(t)}{K(t)}\right\|_{2}, \end{equation} | (2.10) |
where x_{ij} is the position component of atom i in the j^{th} dimension of the search space, \|\cdot\|_{2} denotes the two-norm, and KBest is the subset of the atom group composed of the first K atoms with the best fitness values.
\begin{equation} K(t) = N-(N-2) \times \sqrt{\frac{t}{T}}, \end{equation} | (2.11) |
\begin{equation} \begin{cases}h_{min} = g_{0}+g(t),\\h_{max} = u,\end{cases} \end{equation} | (2.12) |
where g_{0} is a drift factor, which can shift the algorithm from exploration to exploitation
\begin{equation} g(t) = 0.1\times\sin\left(\frac{\pi}{2}\times\frac{t}{T}\right), \end{equation} | (2.13) |
Therefore, the total force exerted on atom i by the other atoms can be expressed as
\begin{equation} F_{i}^d(t) = \sum\limits_{j\in KBest} rand_{j}F_{ij}^d(t), \end{equation} | (2.14) |
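As an illustration, the pairwise force of Eqs (2.7)–(2.9) can be sketched as follows. Here \alpha = 50 is an illustrative depth weight (not a value fixed by the paper), and the negative exponents place the sign change at h = 2^{1/6}\approx 1.12 , matching the repulsion/attraction ranges described above:

```python
import numpy as np

def interaction_force(r, sigma, t, T, alpha=50.0, h_min=1.1, h_max=1.24):
    """Scalar interaction force of Eqs (2.7)-(2.9) between two atoms at
    Euclidean distance r, with scaling length sigma. alpha is an
    illustrative depth weight; negative exponents put the equilibrium
    at h = 2**(1/6) ~ 1.12 as described in the text."""
    eta = alpha * (1 - (t - 1) / T) ** 3 * np.exp(-20 * t / T)  # Eq (2.8)
    h = np.clip(r / sigma, h_min, h_max)                        # Eq (2.9)
    return -eta * (2.0 * h ** -13 - h ** -7)                    # Eq (2.7)

f_rep = interaction_force(r=1.10, sigma=1.0, t=1, T=100)  # h = 1.10: repulsion (negative)
f_att = interaction_force(r=1.20, sigma=1.0, t=1, T=100)  # h = 1.20: attraction (positive)
```

A negative value pushes atom i away from atom j along their connecting line, and a positive value pulls it closer, consistent with Figure 1.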
The geometric binding force also plays an important role in ASO. Assume that every atom has a covalent bond with each atom in KBest and is therefore bound by KBest; Figure 2 shows the effect of atomic interactions. A_{1} , A_{2} , A_{3} and A_{4} are the atoms with the best fitness values, called KBest. Within KBest, A_{1} , A_{2} , A_{3} and A_{4} attract or repel each other, while A_{5} , A_{6} and A_{7} are each attracted or repelled by every atom in KBest. Every atom in the population is also bound by the optimal atom A_{1} ( X_{best} ); the binding force on atom i is
\begin{equation} G_{i}^d(t) = \lambda(t)(x_{best}^d(t)-x_{i}^d(t)), \end{equation} | (2.15) |
\begin{equation} \lambda(t) = \beta e^\frac{-20t}{T}, \end{equation} | (2.16) |
where x_{best}^d(t) is the position of the best atom at the t^{th} iteration, \beta is the multiplier weight, and \lambda(t) is the Lagrange multiplier.
Under the combined action of the geometric binding force and the interaction force, the acceleration of atom i at time t is
\begin{equation} \begin{array}{ll} a_{i}^d(t)& = \frac{F_{i}^d(t)+G_{i}^d(t)}{m_{i}(t)}\\ & = -\alpha\left(1-\frac{t-1}{T}\right)^3e^{-\frac{20t}{T}}\\ &\sum\limits_{j\in KBest} \frac{rand_{j}\left[2(h_{ij}(t))^{-13}-(h_{ij}(t))^{-7}\right]}{m_{i}(t)}\\ &\frac{x_{j}^d(t)-x_{i}^d(t)}{\parallel {x_{i}(t),x_{j}(t)}\parallel_{2}}+\beta e^{-\frac{20t}{T}}\frac{x_{best}^d(t)-x_{i}^d(t)}{m_{i}(t)}. \end{array} \end{equation} | (2.17) |
The original ASO was found to be prone to local optima. Accordingly, changes are made to the iterative velocity update, allowing the algorithm to escape local optima and to search and optimize more broadly. The particle velocity update from PSO is adopted, and the inertia weight w is introduced into the original ASO velocity update, so that the algorithm is less prone to local optima in its early stages, improving the performance of IASO. The addition of the learning factors c_{1} and c_{2} not only ensures convergence but also speeds it up, effectively addressing the original ASO's tendency to fall into local optima.
w = 0.9-0.5\times\left(\frac{t}{T}\right), c_{1} = -10\times\left(\frac{t}{T}\right)^2, |
c_{2} = 1-\left(-10\times\left(\frac{t}{T}\right)^2\right), |
\begin{equation} \begin{array}{ll} v_{i}^d(t+1) = &w\times rand_{i}^dv_{i}^d(t)+c_{1}\times rand_{i}^da_{i}^d(t)\\&+c_{2}\times rand_{i}^d(x_{best}^d(t)-x_{i}^d(t))\\ \end{array} \end{equation} | (2.18) |
At the (t+1)^{th} iteration, the position updating of the i^{th} atom can be expressed as
\begin{equation} x_{i}^d(t+1) = x_{i}^d(t)+v_{i}^d(t+1). \end{equation} | (2.19) |
The maximum number of iterations, convergence normalisation, maximum running time and accuracy of the fitness value are commonly used stopping criteria. In this paper, the maximum number of iterations and convergence normalisation are used as the criteria for stopping the iterations. The maximum number of iterations is 200, and the convergence normalisation is
\begin{equation} D = \sqrt{\sum\limits_{i = 1}^n (Fit_i-\overline{Fit})^2} < \varepsilon, \end{equation} | (2.20) |
where Fit_i is the fitness value of the i^{th} atom, \overline{Fit} is the average fitness value of the population, and the accuracy \varepsilon is often taken as 1E-6 .
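The convergence criterion of Eq (2.20) amounts to a few lines of code (an illustrative sketch):

```python
import numpy as np

def converged(fitness, eps=1e-6):
    """Convergence normalisation of Eq (2.20): stop when the spread of the
    population's fitness values about their mean falls below eps."""
    fit = np.asarray(fitness, dtype=float)
    D = np.sqrt(np.sum((fit - fit.mean()) ** 2))
    return bool(D < eps)

collapsed = converged([1.0, 1.0 + 1e-9, 1.0 - 1e-9])  # True: population has collapsed
spread = converged([1.0, 2.0, 3.0])                   # False: still spread out
```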
Thus, by iterating the above operations, we can eventually find the optimal solution. Algorithm 1 summarizes the main steps of the IASO algorithm.
Name | Function | n | Range | Optimum |
Sphere | F_{1}(x)=\sum\limits_{i=1}^{n} x_{i}^{2} | 30 | [-100,100]^{n} | 0 |
Schwefel 2.22 | F_{2}(x)=\sum\limits_{i=1}^{n}\mid x_{i}\mid +\prod\limits_{i=1}^{n}\mid x_{i}\mid | 30 | [-100,100]^{n} | 0 |
Schwefel 1.2 | F_{3}(x)=\sum\limits_{i=1}^{n}(\sum\limits_{j=1}^{i} x_{j})^{2} | 30 | [-100,100]^{n} | 0 |
Schwefel 2.21 | F_{4}(x)=\max\limits_{i}\{\mid x_{i}\mid, 1\leq i\leq n\} | 30 | [-100,100]^{n} | 0 |
Rosenbrock | F_{5}(x)=\sum\limits_{i=1}^{n-1}(100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}) | 30 | [-200,200]^{n} | 0 |
Step | F_{6}(x)=\sum\limits_{i=1}^{n}(x_{i}+0.5)^{2} | 30 | [-100,100]^{n} | 0 |
Quartic | F_{7}(x)=\sum\limits_{i=1}^{n}ix_{i}^{4}+rand() | 30 | [-1.28, 1.28]^{n} | 0 |
The pseudocode of IASO is present in Algorithm 1.
Algorithm 1 Pseudocode of IASO |
Begin: |
Randomly initialize a group of atoms x (solutions) and their velocity v and Fit_{best} = Inf . While the stop criterion is not satisfied do |
For each atom x_{i} do |
Calculate its fitness value Fit_{i} ; |
If Fit_{i} < Fit_{best} then |
Fit_{best} = Fit_{i} ; |
X_{Best} = x_{i} ; |
End If |
Calculate the mass using Eq (2.3)and Eq (2.4); |
Use Eq (2.11), K(t) = N-(N-2)\times\sqrt{\frac{t}{T}} , to determine the K neighbors;
Use Eq (2.14) and Eq (2.15) to calculate the interaction force and geometric restraint force; |
Calculate acceleration using formula Eq (2.17); |
Update the velocity |
v_{i}^d(t+1) = w\times rand_{i}^dv_{i}^d(t)+c_{1}\times rand_{i}^da_{i}^d(t)+c_{2}\times rand_{i}^d(x_{best}^d(t)-x_{i}^d(t)) . |
Update the position |
x_{i}^d(t+1) = x_{i}^d(t)+v_{i}^d(t+1) . |
End For. |
End while. |
Find the best solution so far X_{Best} . |
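The pseudocode above can be turned into a short, self-contained Python sketch. The depth weight \alpha , multiplier weight \beta and search bounds below are illustrative choices, not values fixed by the paper, and the negative LJ exponents follow the equilibrium point h \approx 1.12 described earlier:

```python
import numpy as np

def iaso(f, dim, n_atoms=20, T=100, lb=-100.0, ub=100.0,
         alpha=50.0, beta=0.2, h_min=1.1, h_max=1.24, seed=0):
    """Illustrative IASO sketch following Algorithm 1 (not the authors' code)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_atoms, dim))        # atom positions
    v = np.zeros((n_atoms, dim))                   # atom velocities
    fit = np.apply_along_axis(f, 1, x)
    i0 = int(np.argmin(fit))
    x_best, fit_best = x[i0].copy(), fit[i0]
    for t in range(1, T + 1):
        # atomic masses, Eqs (2.3) and (2.4)
        M = np.exp(-(fit - fit.min()) / (fit.max() - fit.min() + 1e-30))
        m = M / M.sum()
        K = max(2, int(n_atoms - (n_atoms - 2) * np.sqrt(t / T)))   # Eq (2.11)
        kbest = np.argsort(m)[::-1][:K]            # K heaviest (best) atoms
        eta = alpha * (1 - (t - 1) / T) ** 3 * np.exp(-20 * t / T)  # Eq (2.8)
        lam = beta * np.exp(-20 * t / T)                            # Eq (2.16)
        a = np.zeros_like(x)
        for i in range(n_atoms):
            sigma = np.linalg.norm(x[i] - x[kbest].mean(axis=0)) + 1e-30
            F = np.zeros(dim)
            for j in kbest:
                r = np.linalg.norm(x[i] - x[j])
                if r < 1e-30:
                    continue
                h = np.clip(r / sigma, h_min, h_max)                # Eq (2.9)
                Fij = -eta * (2 * h ** -13 - h ** -7)               # Eq (2.7)
                F += rng.random() * Fij * (x[j] - x[i]) / r         # Eq (2.14)
            G = lam * (x_best - x[i])                               # Eq (2.15)
            a[i] = (F + G) / m[i]                                   # Eq (2.17)
        # IASO velocity and position updates, Eqs (2.18) and (2.19)
        w = 0.9 - 0.5 * (t / T)
        c1 = -10 * (t / T) ** 2
        c2 = 1 + 10 * (t / T) ** 2                 # i.e. 1 - (-10 (t/T)^2)
        r1, r2, r3 = rng.random((3, n_atoms, dim))
        v = w * r1 * v + c1 * r2 * a + c2 * r3 * (x_best - x)
        x = np.clip(x + v, lb, ub)
        fit = np.apply_along_axis(f, 1, x)
        if fit.min() < fit_best:                   # keep the best solution so far
            i0 = int(np.argmin(fit))
            x_best, fit_best = x[i0].copy(), fit[i0]
    return x_best, fit_best

xb, fb = iaso(lambda z: np.sum(z ** 2), dim=5)
```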
To test the performance of the IASO algorithm, 23 well-known benchmark functions were used; they are described in Tables 1, 2 and 3. F1–F7 are unimodal functions, each with no local optima and a single global optimum, so they can verify the convergence speed of the algorithm. F8–F13 are multimodal functions with many local optima, while F14–F23 are low-dimensional functions, each with few local optima. The multimodal and low-dimensional functions are therefore well suited to testing local-optimum avoidance and the exploration ability of the algorithm.
Name | Function | n | Range | Optimum |
Schwefel | F_{8}(x)=-\sum\limits_{i=1}^{n}(x_{i}\sin(\sqrt{\mid x_{i}\mid})) | 30 | [-500,500]^{n} | -12569.5 |
Rastrigin | F_{9}(x)=\sum\limits_{i=1}^{n}(x_{i}^{2}-10\cos(2\pi x_{i})+10) | 30 | [-5.12, 5.12]^{n} | 0 |
Ackley | F_{10}(x)=-20\exp(-0.2\sqrt{\frac{1}{n}\sum\limits_{i=1}^{n}x_{i}^{2}})-\exp(\frac{1}{n}\sum\limits_{i=1}^{n}\cos2\pi x_{i})+20+e | 30 | [-32, 32]^{n} | 0 |
Griewank | F_{11}(x)=\frac{1}{4000}\sum\limits_{i=1}^{n}x_{i}^{2}-\prod\limits_{i=1}^{n}\cos(\frac{x_{i}}{\sqrt{i}})+1 | 30 | [-600,600]^{n} | 0 |
Penalized | F_{12}(x)=\frac{\pi}{n}\{10\sin^{2}(\pi y_{1})+\sum\limits_{i=1}^{n-1}(y_{i}-1)^{2}[1+10\sin^{2}(\pi y_{i+1})]+(y_{n}-1)^{2}\}+\sum\limits_{i=1}^{n}u(x_{i}, 10,100, 4) | 30 | [-50, 50]^{n} | 0 |
Penalized2 | F_{13}(x)=0.1\{\sin^{2}(3\pi x_{1})+\sum\limits_{i=1}^{29}(x_{i}-1)^{2}p[1+\sin^{2}(3\pi X_{i+1})] +(x_{n}-1)^{2}[1+\sin^{2}(2\pi x_{n})]\}+\sum\limits_{i=1}^{n}u(x_{i}, 5,100, 4) | 30 | [-50, 50]^{n} | 0 |
Name | Function | n | Range | Optimum |
Foxholes | F_{14}(x)=[\frac{1}{500}+\sum\limits_{j=1}^{25}\frac{1}{j+\sum\limits_{i=1}^{2}(x_{i}-a_{ij})^{6}}]^{-1} | 2 | [-65.536, 65.536]^{n} | 0.998 |
Kowalik | F_{15}(x)=\sum\limits_{i=1}^{11}\mid a_{i}-\frac{x_{1}(b_{i}^{2}+b_{i}x_{2})}{b_{i}^{2}+b_{i}x_{3}+x_{4}}\mid^{2} | 4 | [-5, 5]^{n} | 3.075\times 10^{-4} |
Six Hump Camel | F_{16}(x)=4x_{1}^{2}-2.1x_{1}^{4}+\frac{1}{3}x_{1}^{6}+x_{1}x_{2}-4x_{2}^{2}+4x_{2}^{4} | 2 | [-5, 5]^{n} | -1.0316 |
Branin | F_{17}(x)=(x_{2}-\frac{5.1}{4\pi^{2}}x_{1}^{2}+\frac{5}{\pi}x_{1}-6)^{2} +10(1-\frac{1}{8\pi})\cos x_{1}+10 | 2 | [-5, 10]\times[0, 15] | 0.398 |
Goldstein-Price | F_{18}(x)=[1+(x_{1}+x_{2}+1)^{2}(19-14x_{1}+3x_{1}^{2}-14x_{2}+6x_{1}x_{2}+3x_{2}^{2})] \times[30+(2x_{1}-3x_{2})^{2}(18-32x_{1}+12x_{1}^{2}+48x_{2}-36x_{1}x_{2}+27x_{2}^{2})] | 2 | [-2, 2]^{n} | 3 |
Hartman 3 | F_{19}(x)=-\sum\limits_{i=1}^{4}c_{i}\exp[-\sum\limits_{j=1}^{3}a_{ij}(x_{j}-p_{ij})^{2}] | 3 | [0, 1]^{n} | -3.86 |
Hartman 6 | F_{20}(x)=-\sum\limits_{i=1}^{4}c_{i}\exp[-\sum\limits_{j=1}^{6}a_{ij}(x_{j}-p_{ij})^{2}] | 6 | [0, 1]^{n} | -3.322 |
Shekel 5 | F_{21}(x)=-\sum\limits_{i=1}^{5} [(x-a_{i})(x-a_{i})^{T}+c_{i}]^{-1} | 4 | [0, 10]^{n} | -10.1532 |
Shekel 7 | F_{22}(x)=-\sum\limits_{i=1}^{7} [(x-a_{i})(x-a_{i})^{T}+c_{i}]^{-1} | 4 | [0, 10]^{n} | -10.4028 |
Shekel 10 | F_{23}(x)=-\sum\limits_{i=1}^{10}[(x-a_{i})(x-a_{i})^{T}+c_{i}]^{-1} | 4 | [0, 10]^{n} | -10.5363 |
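For reference, a few of the benchmark functions from Tables 1–3 written as plain NumPy (the remaining functions follow the same pattern):

```python
import numpy as np

def sphere(x):            # F1: unimodal, optimum 0 at x = 0
    return np.sum(x ** 2)

def rosenbrock(x):        # F5: unimodal narrow valley, optimum 0 at x = 1
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1) ** 2)

def rastrigin(x):         # F9: multimodal, optimum 0 at x = 0
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):            # F10: multimodal, optimum 0 at x = 0
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

x0 = np.zeros(30)         # sphere, rastrigin and ackley are (numerically) zero here
```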
In this experiment, the population size for IASO and ASO was 50 and the maximum number of iterations was 100. Three performance indexes are used to compare IASO and ASO: the mean, minimum and standard deviation of the optimal solution. The smaller the mean of the optimal solution, the less likely the algorithm is to be trapped in a local optimum and the easier it is to find the global optimum; the smaller the standard deviation, the more stable the algorithm; and the smaller the minimum, the more accurate it is. Tables 4–6 show the optimization results for the different types of functions, and the corresponding convergence curves are shown in Figures 6–8. Table 4 and Figure 6 give the optimization results and convergence curves for the unimodal functions F_1(x)-F_7(x) ; it can be seen that IASO outperforms ASO and converges faster. Table 5 and Figure 7 give the results and convergence curves for the multimodal functions F_8(x)-F_{13}(x) ; the overall performance of IASO is better than that of ASO. Table 6 and Figure 8 give the results and convergence curves for the low-dimensional functions F_{14}(x)-F_{23}(x) ; the convergence curves show that ASO converges faster here, but IASO is more accurate. Overall, the improved IASO converges much faster and is more stable than ASO, and it is also less likely to become trapped in a local optimum.
Function | Index | ASO | IASO |
F_{1}(x) | Mean | 2.54e-12 | 1.88e-18 |
Std | 3.24e-12 | 1.03e-20 | |
Best | 3.48e-15 | 5.22e-19 | |
F_{2}(x) | Mean | 3.33e-08 | 3.39e-09 |
Std | 1.89e-10 | 9.90e-12 | |
Best | 5.11e-08 | 1.84e-09 | |
F_{3}(x) | Mean | 186.5664 | 1.06e-17 |
Std | 86.3065 | 1.23e-21 | |
Best | 24.1115 | 1.81e-18 | |
F_{4}(x) | Mean | 3.24e-09 | 8.77e-10 |
Std | 6.14e-09 | 2.32e-12 | |
Best | 2.13e-10 | 4.75e-10 | |
F_{5}(x) | Mean | 0.2905 | 0.0034 |
Std | 0.9888 | 0.0039 | |
Best | 4.5370e+03 | 28.8627 | |
F_{6}(x) | Mean | 0 | 0 |
Std | 0 | 0 | |
Best | 0 | 0 | |
F_{7}(x) | Mean | 0.02124 | 3.91e-04 |
Std | 0.02981 | 3.60e-04 | |
Best | 0.03319 | 1.8710e-04 |
Function | Index | ASO | IASO |
F_{8}(x) | Mean | -3887 | -6772.47 |
Std | 564.7 | 354.77 | |
Best | -4245 | -6878.93 | |
F_{9}(x) | Mean | 0 | 0 |
Std | 0 | 0 | |
Best | 0 | 0 | |
F_{10}(x) | Mean | 3.91e-09 | 8.63e-10 |
Std | 2.15e-09 | 2.68e-13 | |
Best | 1.13e-09 | 7.257e-10 | |
F_{11}(x) | Mean | 0 | 0 |
Std | 0 | 0 | |
Best | 0 | 0 | |
F_{12}(x) | Mean | 4.34e-23 | 3.69e-23 |
Std | 1.84e-22 | 1.51e-22 | |
Best | 7.83e-24 | 6.53e-24 | |
F_{13}(x) | Mean | 2.03e-22 | 2.33e-23 |
Std | 2.83e-22 | 3.12e-22 | |
Best | 1.91e-23 | 1.90e-23 |
Function | Index | ASO | IASO |
F_{14}(x) | Mean | 0.998004 | 0.998004 |
Std | 7.04e-16 | 4.25e-16 | |
Best | 0.998004 | 0.998004 | |
F_{15}(x) | Mean | 9.47e-04 | 4.69e-04 |
Std | 2.79e-04 | 1.81e-04 | |
Best | 2.79e-04 | 1.45e-04 | |
F_{16}(x) | Mean | -1.03163 | -1.03163 |
Std | 0 | 0 | |
Best | -1.03163 | -1.03163 | |
F_{17}(x) | Mean | 0.397887 | 0.397887 |
Std | 0 | 0 | |
Best | 0.397887 | 0.397887 | |
F_{18}(x) | Mean | 3 | 3 |
Std | 1.68e-14 | 1.65e-14 | |
Best | 3 | 3 | |
F_{19}(x) | Mean | -3.8627 | -3.8627 |
Std | 2.68e-15 | 2.53e-17 | |
Best | -3.8627 | -3.8627 | |
F_{20}(x) | Mean | -3.322 | -3.322 |
Std | 1.12e-08 | 8.95e-09 | |
Best | -3.322 | -3.322 | |
F_{21}(x) | Mean | -8.7744 | -9.4724 |
Std | 2.1867 | 1.3031 | |
Best | -10.1532 | -10.1532 | |
F_{22}(x) | Mean | -10.4029 | -10.4029 |
Std | 1.84e-15 | 1.76e-18 | |
Best | -10.4029 | -10.4029 | |
F_{23}(x) | Mean | -10.5364 | -10.5364 |
Std | 1.54e-15 | 1.62e-18 | |
Best | -10.5364 | -10.5364 |
Assume that N far-field narrowband signal sources are incident on an array of M(M > N) vector hydrophones. The incident angles are { \bf{\Theta} } = [\Theta_1, \Theta_2, \cdots, \Theta_N]^T , where \Theta_n = (\theta_n, \alpha_n)^T , (\cdot)^T denotes transposition, \theta_n is the horizontal azimuth angle and \alpha_n the elevation angle of the n^{\rm{th}} incident signal, the incident wavelength is \lambda , and the spacing between adjacent array elements is d . The signal received by the array can then be expressed in vector form as
\begin{equation} { \bf{Z} }(t) = { \bf{A} }({ \bf{\Theta} }){ \bf{S} }(t)+{ \bf{N} }(t), \end{equation} | (3.1) |
where { \bf{Z} }(t) is the 4M\times 1 dimensional received-signal vector and { \bf{N} }(t) is the 4M\times 1 dimensional array noise vector. The noise is assumed to be Gaussian white noise, independent in both time and space. { \bf{S} }(t) is the N\times 1 dimensional signal-source vector, and {\bf{A}}({\bf{\Theta }}) is the signal direction matrix of the vector hydrophone array
\begin{align} { \bf{A} }({ \bf{\Theta} }) & = [{ \bf{a} }(\Theta_1), { \bf{a} }(\Theta_2), \cdots, { \bf{a} }(\Theta_N)] \\ & = [{ \bf{a} }_1(\Theta_1) \otimes { \bf{u} }_1, { \bf{a} }_2(\Theta_2) \otimes { \bf{u} }_2, \cdots, { \bf{a} }_N(\Theta_N) \otimes { \bf{u} }_N], \end{align} | (3.2) |
where \otimes is the Kronecker product, { \bf{a} }_n(\Theta_n) = [1, e^{-j\beta_n}, e^{-j2\beta_n}, \cdots, e^{-j(M-1)\beta_n}]^T is the sound-pressure steering vector of the n^{\rm{th}} signal, { \bf{u} }_n = [1, \cos\theta_n \sin \alpha_n, \sin\theta_n \sin \alpha_n, \cos \alpha_n]^T is the direction vector of the n^{\rm{th}} signal source, and \beta_n = \frac{2\pi}{\lambda}d\cos \theta_n \sin \alpha_n . Then the array covariance matrix of the received signal is
\begin{align} { \bf{R} } & = {\rm{E}}[{ \bf{Z} }(t){ \bf{Z} }^H(t)] \\ & = { \bf{A} }{\rm{E}}[{ \bf{S} }(t){ \bf{S} }^H(t)]{ \bf{A} }^H+ {\rm{E}}[{ \bf{N} }(t){ \bf{N} }^H(t)] \\ & = { \bf{A} }{ \bf{R} }_s{ \bf{A} }^H+\sigma^2 { \bf{I} }, \end{align} | (3.3) |
where { \bf{R} }_s is the signal covariance matrix, \sigma^2 is the Gaussian white-noise power, { \bf{I} } is the identity matrix, and (\cdot)^H denotes the conjugate transpose. Assume that the signals and the array lie in the same plane, that is, \alpha_n = \frac{\pi}{2} , so only \theta_n is considered in this paper. In practice the received data are finite, so the array covariance matrix is estimated as
\begin{equation} \hat{{ \bf{R} }} = \frac{1}{K}\sum\limits_{k = 1}^K { \bf{Z} }(k){ \bf{Z} }^H(k), \end{equation} | (3.4) |
where K represents the number of snapshots.
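As a concrete illustration of Eqs (3.1)–(3.4), the sketch below builds the direction matrix column by column via the Kronecker product, simulates snapshots, and forms the sample covariance. The array size, angles, source waveforms and noise level are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

M, K, d_over_lam = 10, 300, 0.5           # illustrative array size and spacing
theta = np.deg2rad([30.0, 60.0])          # azimuths; elevation alpha = pi/2
rng = np.random.default_rng(0)

def direction_matrix(theta, alpha):
    cols = []
    for th, al in zip(theta, alpha):
        beta = 2 * np.pi * d_over_lam * np.cos(th) * np.sin(al)
        a = np.exp(-1j * beta * np.arange(M))                 # pressure phases
        u = np.array([1.0, np.cos(th) * np.sin(al),
                      np.sin(th) * np.sin(al), np.cos(al)])   # velocity part
        cols.append(np.kron(a, u))                            # a_n ⊗ u_n, Eq (3.2)
    return np.column_stack(cols)                              # 4M x N

A = direction_matrix(theta, np.full(2, np.pi / 2))
S = rng.standard_normal((2, K))                               # source waveforms
noise = 0.1 * (rng.standard_normal((4 * M, K))
               + 1j * rng.standard_normal((4 * M, K)))
Z = A @ S + noise                                             # Eq (3.1)
R_hat = Z @ Z.conj().T / K                                    # Eq (3.4)
print(A.shape, R_hat.shape)  # (40, 2) (40, 40)
```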
By sampling the received signal uniformly and independently, the joint probability density function of the sampled data can be obtained as
\begin{align} &P\left({ \bf{Z} }(1), { \bf{Z} }(2),\cdots, { \bf{Z} }(K)\right)\\ & = \prod\limits_{k = 1}^K \frac{\exp\left( -\frac{1}{\sigma^2}\|{ \bf{Z} }(k)-{ \bf{A} }(\tilde{\theta}){ \bf{S} }(k)\|^2\right)}{\det(\pi\sigma^2 { \bf{I} })}, \end{align} | (3.5) |
where {\rm{det}}(\cdot) denotes the determinant, \tilde{\theta} is the unknown signal-direction estimate, and P(\cdot) is a multidimensional nonlinear function of the unknown parameters \tilde{\theta}, \sigma^2 and { \bf{S} } . Taking the negative logarithm of Eq (3.5) gives
\begin{align} -\ln P & = 4MK\ln\pi+4MK\ln\sigma^2 \\ &+\frac{1}{\sigma^2}\sum\limits_{k = 1}^K\|{ \bf{Z} }(k)-{ \bf{A} }(\tilde{\theta}){ \bf{S} }(k)\|^2. \end{align} | (3.6) |
Taking the partial derivative of Eq (3.6) with respect to \sigma^2 and setting it to zero gives \sigma^2 = \frac{1}{4M}{\rm{tr}}\left\{{{\bf{P_A}}^\perp \hat{{ \bf{R} }}}\right\} , where {\rm{tr}}\{\cdot\} denotes the trace, {\bf{P_A}}^\perp is the orthogonal projection matrix onto the complement of the column space of {\bf{A}} , and \hat{{ \bf{S} }} = {\bf{A}}^{+}{ \bf{Z} } with {\bf{A}}^{+} = ({\bf{A}}^H{\bf{A}})^{-1}{\bf{A}}^H the pseudo-inverse of {\bf{A}} . Substituting \sigma^2 and \hat{{ \bf{S} }} into Eq (3.6) yields
\begin{equation} \hat{\theta} = \arg \max\limits_{\tilde{\theta}} g(\tilde{\theta}), \end{equation} | (3.7) |
where g(\tilde{\theta}) is the likelihood function, which can be expressed as
\begin{equation} g(\tilde{\theta}) = {\rm{tr}} \left\{\left[{ \bf{A} }(\tilde{\theta})({ \bf{A} }^H(\tilde{\theta}){ \bf{A} }(\tilde{\theta}))^{-1}{ \bf{A} }^H(\tilde{\theta})\right]\hat{{ \bf{R} }}\right\}. \end{equation} | (3.8) |
\hat{\theta} is the estimated DOA of the signal. Maximizing the likelihood function g(\tilde{\theta}) yields the set of solutions corresponding to this maximum, which is the estimated angle sought. In order to compare the convergence of different methods, the following fitness function is defined:
\begin{equation} f(\tilde{\theta}) = |g(\tilde{\theta})-g(\theta)|, \end{equation} | (3.9) |
where \theta is the known signal direction in Eq (3.1) and g(\theta) = {\rm{tr}} \left\{\left[{\bf{A}}(\theta)({\bf{A}}^H(\theta){\bf{A}}(\theta))^{-1}{\bf{A}}^H(\theta)\right]\hat{{ \bf{R} }}\right\} . Eq (3.7) can thus be expressed as
\begin{equation} \hat{\theta} = \arg \min\limits_{\tilde{\theta}} f(\tilde{\theta}), \end{equation} | (3.10) |
When f(\tilde{\theta}) is close to 0, the estimated angle is close to the true angle.
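A minimal numerical sketch of g(\tilde{\theta}) and f(\tilde{\theta}) is given below for a pressure-only uniform linear array (the velocity components u_n are dropped to keep it short); the source, noise level and angles are illustrative assumptions.

```python
import numpy as np

M, K, d_over_lam = 10, 300, 0.5
rng = np.random.default_rng(1)

def steering(theta_deg):
    beta = 2 * np.pi * d_over_lam * np.cos(np.deg2rad(theta_deg))
    return np.exp(-1j * beta * np.arange(M))[:, None]     # M x 1 pressure steering

theta_true = 30.0
A_true = steering(theta_true)
S = rng.standard_normal((1, K))                           # one narrowband source
noise = 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
Z = A_true @ S + noise
R_hat = Z @ Z.conj().T / K                                # Eq (3.4)

def g(theta_deg):                                         # likelihood, Eq (3.8)
    A = steering(theta_deg)
    P_A = A @ np.linalg.inv(A.conj().T @ A) @ A.conj().T  # projection onto span(A)
    return np.real(np.trace(P_A @ R_hat))

def fitness(theta_deg):                                   # Eq (3.9)
    return abs(g(theta_deg) - g(theta_true))

# The fitness vanishes at the true angle and grows away from it
print(fitness(theta_true), fitness(60.0))
```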
The initial position is expressed as \theta_i = [ \theta_{i}^1, \theta_{i}^2, \cdots, \theta_{i}^d] . Taking Eq (3.9) of the ML DOA estimator as the fitness function Fit_{i}(t) in IASO, the fitness function Fit_{i}(t) of Eq (2.3) in Section 2 becomes f(\tilde{\theta}) . The geometric binding force of Eq (2.15) then gives x_{i}^d(t+1) = x_{i}^d(t)+v_{i}^d(t+1) , and the acceleration is changed from Eq (2.17) to
\begin{equation} \begin{array}{ll} a_{i}^d(t)& = \frac{F_{i}^d(t)+G_{i}^d(t)}{m_{i}^d(t)}\\ & = -\alpha(1-\frac{t-1}{T})^3e^\frac{-20t}{T}\\ &\sum\limits_{j\in KBest} \frac{rand_{j}[2\times(h_{ij}(t))^{13}-(h_{ij}(t))^7]}{m_{i}(t)}\\ &\frac{(\theta_{j}^d(t)-\theta_{i}^d(t))}{\parallel {\theta_{i}(t),\theta_{j}(t)}\parallel_{2}}+\beta e^{-\frac{20t}{T}}\frac{\theta_{best}^d(t)-\theta_{i}^d(t)}{m_{i}(t)} \end{array} \end{equation} | (4.1) |
The speed update is changed from Eq (2.18) to
\begin{equation} \begin{array}{ll} v_{i}^d(t+1)& = w\times rand_{i}^dv_{i}^d(t)+c_{1}\times rand_{i}^da_{i}^d(t)\\&+c_{2}\times rand_{i}^d(\theta_{best}^d(t)-\theta_{i}^d(t)), \end{array} \end{equation} | (4.2) |
the location update is changed from Eq (2.19) to
\begin{equation} \theta_{i}^d(t+1) = \theta_{i}^d(t)+v_{i}^d(t+1). \end{equation} | (4.3) |
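The updates (4.2) and (4.3) amount to one line each. The sketch below applies a single schematic step using the coefficient values of Table 7; it draws an independent random factor per term and uses placeholder values for the acceleration and angles, all as simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
w, c1, c2 = 0.8975, -2.5e-04, 1.003      # weights from Table 7

def iaso_step(theta, v, a, theta_best):
    """One IASO velocity/position update for a single atom and dimension."""
    r1, r2, r3 = rng.random(3)
    v_new = w * r1 * v + c1 * r2 * a + c2 * r3 * (theta_best - theta)  # Eq (4.2)
    theta_new = theta + v_new                                          # Eq (4.3)
    return theta_new, v_new

# Placeholder state: candidate angle 45 deg, best-so-far angle 30 deg
theta_new, v_new = iaso_step(theta=45.0, v=0.0, a=0.0, theta_best=30.0)
print(theta_new)  # pulled from 45 toward the best-so-far angle 30
```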
In this section, we present simulation results for the iterative process and convergence performance of the IASO, and compare the ML DOA estimation performance of IASO with that of ASO, SCA, GA and PSO. In the experiments, the receiving array is a uniform linear array composed of 10 acoustic vector sensors, the number of snapshots is 300, and the added noise is Gaussian white noise.
In the simulation experiment, over 100 independent Monte Carlo runs, the population size is 30, the maximum number of iterations is 200, the signal-to-noise ratio is 10 dB, and the search range is [0,180] . Taking one source \theta = [30^\circ] and two sources \theta = [30^\circ, 60^{\circ}] , respectively, the curve of the minimum fitness value over the iterations is obtained. Table 7 lists the parameters of the five algorithms IASO, ASO, SCA, GA and PSO, and Figure 9 shows the fitness convergence curves.
Name of Parameter | IASO | ASO | SCA | GA | PSO |
Problem dimension | 2(3, 4) | 2(3, 4) | 2(3, 4) | 2(3, 4) | 2(3, 4) |
Population Size | 30 | 30 | 30 | 30 | 30 |
Maximum number of iterations | 200 | 200 | 200 | 200 | 200 |
Initial search area | [0,180] | [0,180] | [0,180] | [0,180] | [0,180] |
Depth weight | 50 | 50 | - | - | - |
Multiplier weight | 0.2 | 0.2 | - | - | - |
Lower limit of repulsion | 1.1 | 1.1 | - | - | - |
Upper limit of attraction | 1.24 | 1.24 | - | - | - |
Acceleration factor c_{1} | -2.5000e-04 | - | - | - | - |
Acceleration factor c_{2} | 1.003 | - | - | - | - |
Acceleration factor w | 0.8975 | - | - | - | - |
Crossover Fraction | - | - | - | 0.8 | - |
Migration Fraction | - | - | - | 0.2 | - |
Cognitive Constants | - | - | - | - | 1.25 |
Social Constants | - | - | - | - | 0.5 |
Inertial Weight | - | - | - | - | 0.9 |
Figure 9 shows the fitness variation curves of the ML DOA estimators of IASO, ASO, SCA, GA and PSO for 1, 2 and 3 signal sources and 200 iterations. Regardless of whether the number of signal sources is 1, 2 or 3, IASO has the fastest convergence speed. When the number of signal sources is 1, IASO converges fastest, followed by ASO; in comparison, PSO, SCA and GA have a large convergence range and high fitness values, which indicates that they easily fall into a local optimum. When the number of signal sources is 2, IASO again converges best. When the number of signal sources is 3, IASO still performs best, followed by ASO, while SCA, GA and PSO not only converge slowly but also easily fall into a local optimum: even after 200 iterations, their fitness functions fail to converge to 0.
In order to compare the statistical performance of the different algorithms and their relationship to the Cramér–Rao bound (CRB), we estimated the accuracy of each algorithm by the root mean square error (RMSE). The population size is 30 and the maximum number of iterations is 200.
\begin{equation} {\rm{RMSE}} = \sqrt{\frac{1}{N\cdot L}\sum\limits_{l = 1}^L\sum\limits_{i = 1}^N \left[\hat{\theta}_i(l)-\theta_i\right]^2}, \end{equation} | (4.4) |
where L is the number of experiments, \theta_i is the DOA of the i^{th} signal, N is the number of signals, and \hat{\theta}_i(l) denotes the estimate of the i^{th} DOA obtained in the l^{th} experiment.
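Equation (4.4) can be evaluated in a few lines. In the sketch below, the per-run estimates are synthetic stand-ins (true DOAs plus Gaussian error) rather than the output of any of the five estimators.

```python
import numpy as np

theta_true = np.array([30.0, 60.0])      # N = 2 true DOAs (degrees)
L = 100                                  # number of Monte Carlo experiments
rng = np.random.default_rng(3)

# Synthetic L x N estimates: true angles plus 0.5-degree Gaussian error
estimates = theta_true + 0.5 * rng.standard_normal((L, 2))

# RMSE of Eq (4.4): average the squared error over all N*L estimates
rmse = np.sqrt(np.mean((estimates - theta_true) ** 2))
print(rmse)  # close to the injected 0.5-degree error level
```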
Figure 10 shows the RMSE of the ML DOA estimators for the five algorithms IASO, ASO, SCA, GA and PSO when the number of signal sources is 1, 2 and 3 and the signal-to-noise ratio varies from -20 dB to 20 dB. It can be seen that, regardless of the number of sources, the performance of IASO is more stable and closer to the CRB. When the number of signal sources is 3, the estimation performance of all the algorithms degrades, but the IASO-based DOA estimate remains closest to the CRB, followed by ASO, GA, PSO and SCA. The SCA performs well at low signal-to-noise ratios but poorly at high signal-to-noise ratios. The ML DOA estimation performance of PSO and GA is poor, and their fitness functions have difficulty converging to the global optimum: even after 200 iterations, a large RMSE remains.
Population size is among the most important parameters of biological evolutionary algorithms. In general, the estimation accuracy of intelligent algorithms improves as the population size increases, but a larger population also increases the computational load. For ML DOA problems, the population size determines the number of likelihood-function evaluations in each iteration. An algorithm with a small population size and high estimation accuracy is therefore desirable.
Figure 11 shows the RMSE curves of the ML DOA estimators of IASO, ASO, SCA, GA and PSO as the population size ranges from 10 to 100. As can be seen from the figure, IASO maintains a low RMSE, high estimation accuracy and closeness to the CRB regardless of the number of signal sources. With one signal source, the RMSE of IASO, ASO, SCA and PSO is similar, while GA retains a relatively high RMSE when the population is smaller than 50. With two signal sources, IASO maintains a lower RMSE, ASO is somewhat unstable, and PSO and GA still have a higher RMSE. With three signal sources, only IASO keeps a low RMSE, while ASO, PSO, SCA and GA all have higher RMSE. This shows that even at a population size of 100, GA and PSO need not only a large population but also a large number of iterations. For ML DOA estimation with 1, 2 or 3 signal sources, the IASO algorithm can estimate the DOA accurately with a smaller population and therefore less computational effort.
In addition to the convergence and statistical performance described above, the quality of an algorithm can also be judged by its computational complexity. The computational complexity is independent of the number of signal sources; rather, it is related to the maximum number of iterations, the population size and the number of spatial variations. The following figure shows the average number of iterations, over 30 runs with two signal sources, required to reach the stopping criterion of 1e-6.
Figure 12 shows the average iteration-time curves of the different algorithms over 100 independent Monte Carlo experiments. It can be seen that the IASO algorithm requires the smallest number of iterations when the signal-to-noise ratio ranges from -20 dB to 20 dB and the maximum number of iterations is 200, followed by ASO, while PSO, GA and SCA require at least 100 iterations. The number of iterations of IASO is significantly lower than that of the other algorithms. In general, IASO requires the fewest iterations on the averaged curves over both signal-to-noise ratio and population size, whereas ASO, SCA, GA and PSO still need more iterations to find the optimal solution. As a result, IASO has the lowest computational load.
This paper proposes an improved ASO and tests IASO and ASO on 23 benchmark functions, finding that IASO overcomes the shortcomings of ASO, namely its tendency to fall into local optima and its poor convergence performance. ML DOA estimation is in theory a high-resolution optimization method, but its huge computational burden hinders practical application. In this paper, IASO is applied to ML DOA estimation and simulation experiments are carried out. The results show that, compared with the ASO, SCA, GA and PSO methods, the proposed IASO-based ML DOA estimator has a faster convergence speed, lower RMSE and lower computational complexity.
This research was funded by the National Natural Science Foundation of China (Grant Nos. 61774137, 51875535 and 61927807), the Key Research and Development Foundation of Shanxi Province (Grant No. 201903D121156), and the Shanxi Scholarship Council of China (Grant Nos. 2020-104 and 2021-108). The authors express their sincere thanks to the anonymous referee for many valuable comments and suggestions.
The authors declare that they have no conflict of interest.
Name | Function | n | Range | Optimum |
Sphere | F_{1}(x)=\sum\limits_{i=1}^{n} x_{i}^{2} | 30 | [-100,100]^{n} | 0 |
Schwefel 2.22 | F_{2}(x)=\sum\limits_{i=1}^{n}\mid x_{i}\mid +\prod\limits_{i=1}^{n}\mid x_{i}\mid | 30 | [-100,100]^{n} | 0 |
Schwefel 1.2 | F_{3}(x)=\sum\limits_{i=1}^{n}(\sum\limits_{j=1}^{i} x_{j})^{2} | 30 | [-100,100]^{n} | 0 |
Schwefel 2.21 | F_{4}(x)=\max\limits_{i}\{\mid x_{i}\mid, 1\leq i\leq n\} | 30 | [-100,100]^{n} | 0 |
Rosenbrock | F_{5}(x)=\sum\limits_{i=1}^{n-1}(100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}) | 30 | [-200,200]^{n} | 0 |
Step | F_{6}(x)=\sum\limits_{i=1}^{n}(x_{i}+0.5)^{2} | 30 | [-100,100]^{n} | 0 |
Quartic | F_{7}(x)=\sum\limits_{i=1}^{n}ix_{i}^{4}+rand() | 30 | [-1.28, 1.28]^{n} | 0 |
Name | Function | n | Range | Optimum |
Schwefel | F_{8}(x)=-\sum\limits_{i=1}^{n}(x_{i}\sin(\sqrt{\mid x_{i}\mid})) | 30 | [-500,500]^{n} | -12569.5 |
Rastrigin | F_{9}(x)=\sum\limits_{i=1}^{n}(x_{i}^{2}-10\cos(2\pi x_{i})+10) | 30 | [-5.12, 5.12]^{n} | 0 |
Ackley | F_{10}(x)=-20\exp(-0.2\sqrt{\frac{1}{n}\sum\limits_{i=1}^{n}x_{i}^{2}})-\exp(\frac{1}{n}\sum\limits_{i=1}^{n}\cos2\pi x_{i})+20+e | 30 | [-32, 32]^{n} | 0 |
Griewank | F_{11}(x)=\frac{1}{4000}\sum\limits_{i=1}^{n}x_{i}^{2}-\prod\limits_{i=1}^{n}\cos(\frac{x_{i}}{\sqrt{i}})+1 | 30 | [-600,600]^{n} | 0 |
Penalized | F_{12}(x)=\frac{\pi}{n}\{10\sin^{2}(\pi y_{1})+\sum\limits_{i=1}^{n-1}(y_{i}-1)^{2}[1+10\sin^{2}(\pi y_{i+1})]+(y_{n}-1)^{2}\}+\sum\limits_{i=1}^{n}u(x_{i}, 10,100, 4) | 30 | [-50, 50]^{n} | 0 |
Penalized2 | F_{13}(x)=0.1\{\sin^{2}(3\pi x_{1})+\sum\limits_{i=1}^{29}(x_{i}-1)^{2}[1+\sin^{2}(3\pi x_{i+1})] +(x_{n}-1)^{2}[1+\sin^{2}(2\pi x_{n})]\}+\sum\limits_{i=1}^{n}u(x_{i}, 5,100, 4) | 30 | [-50, 50]^{n} | 0 |
Name | Function | n | Range | Optimum |
Foxholes | F_{14}(x)=[\frac{1}{500}+\sum\limits_{j=1}^{25}\frac{1}{j+\sum\limits_{i=1}^{2}(x_{i}-a_{ij})^{6}}]^{-1} | 2 | [-65.536, 65.536]^{n} | 0.998 |
Kowalik | F_{15}(x)=\sum\limits_{i=1}^{11}\mid a_{i}-\frac{x_{i}(b_{i}^{2}+b_{i}x_{2})}{b_{i}^{2}+b_{i}x_{3}+x_{4}}\mid^{2} | 4 | [-5, 5]^{n} | 3.075\times 10^{-4} |
Six Hump Camel | F_{16}(x)=4x_{1}^{2}-2.1x_{1}^{4}+\frac{1}{3}x_{1}^{6}+x_{1}x_{2}-4x_{2}^{2}+4x_{2}^{4} | 2 | [-5, 5]^{n} | -1.0316 |
Branin | F_{17}(x)=(x_{2}-\frac{5.1}{4\pi^{2}}x_{1}^{2}+\frac{5}{\pi}x_{1}-6)^{2} +10(1-\frac{1}{8\pi})\cos x_{1}+10 | 2 | [-5, 10]\times[0, 15] | 0.398 |
Goldstein-Price | F_{18}(x)=[1+(x_{1}+x_{2}+1)^{2}(19-14x_{1}+3x_{1}^{2}-14x_{2} +6x_{1}x_{2}+3x_{2}^{2})] \times[30+(2x_{1}-3x_{2})^{2}(18-32x_{1}+12x_{1}^{2}+48x_{2}-36x_{1}x_{2}+27x_{2}^{2})] | 2 | [-2, 2]^{n} | 3 |
Hartman 3 | F_{19}(x)=-\sum\limits_{i=1}^{4}c_{i}\exp[-\sum\limits_{j=1}^{3}a_{ij}(x_{j}-p_{ij})^{2}] | 3 | [0, 1]^{n} | -3.86 |
Hartman 6 | F_{20}(x)=-\sum\limits_{i=1}^{4}c_{i}\exp[-\sum\limits_{j=1}^{6}a_{ij}(x_{j}-p_{ij})^{2}] | 6 | [0, 1]^{n} | -3.322 |
Shckel 5 | F_{21}(x)=-\sum\limits_{i=1}^{5} [(x_{i}-a_{i})(x_{i}-a_{i})^{T}+c_{i}]^{-1} | 4 | [0, 10]^{n} | -10.1532 |
Shckel 7 | F_{22}(x)=-\sum\limits_{i=1}^{7} [(x_{i}-a_{i})(x_{i}-a_{i})^{T}+c_{i}]^{-1} | 4 | [0, 10]^{n} | -10.4028 |
Shckel 10 | F_{23}(x)=-\sum\limits_{i=1}^{10}[ (x_{i}-a_{i})(x_{i}-a_{i})^{T}+c_{i}]^{-1} | 4 | [0, 10]^{n} | -10.5363 |
Function | Index | ASO | IASO |
F_{1}(x) | Mean | 2.54e-12 | 1.88e-18 |
Std | 3.24e-12 | 1.03e-20 | |
Best | 3.48e-15 | 5.22e-19 | |
F_{2}(x) | Mean | 3.33e-08 | 3.39e-09 |
Std | 1.89e-10 | 9.90e-12 | |
Best | 5.11e-08 | 1.84e-09 | |
F_{3}(x) | Mean | 186.5664 | 1.06e-17 |
Std | 86.3065 | 1.23e-21 | |
Best | 24.1115 | 1.81e-18 | |
F_{4}(x) | Mean | 3.24e-09 | 8.77e-10 |
Std | 6.14-09 | 2.32e-12 | |
Best | 2.13e-10 | 4.75e-10 | |
F_{5}(x) | Mean | 0.2905 | 0.0034 |
Std | 0.9888 | 0.0039 | |
Best | 4.5370e+03 | 28.8627 | |
F_{6}(x) | Mean | 0 | 0 |
Std | 0 | 0 | |
Best | 0 | 0 | |
F_{7}(x) | Mean | 0.02124 | 3.91e-04 |
Std | 0.02981 | 3.60e-04 | |
Best | 0.03319 | 1.8710e-04 |
Function | Index | ASO | IASO |
F_{8}(x) | Mean | -3887 | -6772.47 |
Std | 564.7 | 354.77 | |
Best | -4245 | -6878.93 | |
F_{9}(x) | Mean | 0 | 0 |
Std | 0 | 0 | |
Best | 0 | 0 | |
F_{10}(x) | Mean | 3.91e-09 | 8.63e-10 |
Std | 2.15e-09 | 2.68e-13 | |
Best | 1.13e-09 | 7.257e-10 | |
F_{11}(x) | Mean | 0 | 0 |
Std | 0 | 0 | |
Best | 0 | 0 | |
F_{12}(x) | Mean | 4.34e-23 | 3.69e-23 |
Std | 1.84e-22 | 1.51e-22 | |
Best | 7.83e-24 | 6.53e-24 | |
F_{13}(x) | Mean | 2.03e-22 | 2.33e-23 |
Std | 2.83e-22 | 3.12e-22 | |
Best | 1.91e-23 | 1.90e-23 |
Function | Index | ASO | IASO
F_{14}(x) | Mean | 0.998004 | 0.998004
| Std | 7.04e-16 | 4.25e-16
| Best | 0.998004 | 0.998004
F_{15}(x) | Mean | 9.47e-04 | 4.69e-04
| Std | 2.79e-04 | 1.81e-04
| Best | 2.79e-04 | 1.45e-04
F_{16}(x) | Mean | -1.03163 | -1.03163
| Std | 0 | 0
| Best | -1.03163 | -1.03163
F_{17}(x) | Mean | 0.397887 | 0.397887
| Std | 0 | 0
| Best | 0.397887 | 0.397887
F_{18}(x) | Mean | 3 | 3
| Std | 1.68e-14 | 1.65e-14
| Best | 3 | 3
F_{19}(x) | Mean | -3.8627 | -3.8627
| Std | 2.68e-15 | 2.53e-17
| Best | -3.8627 | -3.8627
F_{20}(x) | Mean | -3.322 | -3.322
| Std | 1.12e-08 | 8.95e-09
| Best | -3.322 | -3.322
F_{21}(x) | Mean | -8.7744 | -9.4724
| Std | 2.1867 | 1.3031
| Best | -10.1532 | -10.1532
F_{22}(x) | Mean | -10.4029 | -10.4029
| Std | 1.84e-15 | 1.76e-18
| Best | -10.4029 | -10.4029
F_{23}(x) | Mean | -10.5364 | -10.5364
| Std | 1.54e-15 | 1.62e-18
| Best | -10.5364 | -10.5364
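Each benchmark's Mean, Std, and Best entries are the usual summary statistics over the final objective values of independent runs (for minimization, "Best" is the smallest value found). A minimal sketch of how such a row is computed; the run values below are hypothetical, not taken from the tables:

```python
import numpy as np

def summarize(results):
    """Summarize final objective values from independent runs of an optimizer
    (minimization: 'Best' is the smallest value reached in any run)."""
    r = np.asarray(results, dtype=float)
    return {"Mean": r.mean(), "Std": r.std(ddof=1), "Best": r.min()}

# Hypothetical final values from 5 independent runs on a test function
runs = [2.5e-12, 3.1e-12, 3.48e-15, 4.0e-12, 1.2e-12]
stats = summarize(runs)
print(stats["Best"])  # 3.48e-15
```

Note the sample standard deviation (`ddof=1`) is used, the common convention when summarizing a finite set of independent runs.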
Name of Parameter | IASO | ASO | SCA | GA | PSO |
Problem dimension | 2(3, 4) | 2(3, 4) | 2(3, 4) | 2(3, 4) | 2(3, 4) |
Population Size | 30 | 30 | 30 | 30 | 30 |
Maximum number of iterations | 200 | 200 | 200 | 200 | 200 |
Initial search area | [0,180] | [0,180] | [0,180] | [0,180] | [0,180] |
Depth weight | 50 | 50 | - | - | - |
Multiplier weight | 0.2 | 0.2 | - | - | - |
Lower limit of repulsion | 1.1 | 1.1 | - | - | - |
Upper limit of attraction | 1.24 | 1.24 | - | - | - |
Acceleration factor c_{1} | -2.5000e-04 | - | - | - | - |
Acceleration factor c_{2} | 1.003 | - | - | - | - |
Acceleration factor w | 0.8975 | - | - | - | - |
Crossover Fraction | - | - | - | 0.8 | - |
Migration Fraction | - | - | - | 0.2 | - |
Cognitive Constants | - | - | - | - | 1.25 |
Social Constants | - | - | - | - | 0.5 |
Inertial Weight | - | - | - | - | 0.9 |
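The PSO column's settings (inertia weight 0.9, cognitive constant 1.25, social constant 0.5, population 30, 200 iterations) plug directly into the standard PSO velocity and position update. A minimal sketch on the sphere function, as an illustration of how those parameters are used; the test function and search range here are our own choices, not the paper's setup:

```python
import numpy as np

def pso_sphere(dim=2, pop=30, iters=200, w=0.9, c1=1.25, c2=0.5, seed=0):
    # Standard PSO update minimizing the sphere function sum(x_i^2):
    #   v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v
    rng = np.random.default_rng(seed)
    x = rng.uniform(-100.0, 100.0, (pop, dim))   # particle positions
    v = np.zeros((pop, dim))                     # particle velocities
    f = np.sum(x**2, axis=1)
    pbest, pbest_f = x.copy(), f.copy()          # personal bests
    g = pbest[np.argmin(pbest_f)].copy()         # global best position
    for _ in range(iters):
        r1 = rng.random((pop, dim))
        r2 = rng.random((pop, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.sum(x**2, axis=1)
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best_x, best_f = pso_sphere()
print(best_f)
```

With these constants the stability condition for the standard update is satisfied (c1 + c2 = 1.75 is well inside the convergent region for w = 0.9), so the swarm contracts toward the optimum.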
Name | Function | n | Range | Optimum |
Sphere | F_{1}(x)=\sum\limits_{i=1}^{n} x_{i}^{2} | 30 | [-100,100]^{n} | 0 |
Schwefel 2.22 | F_{2}(x)=\sum\limits_{i=1}^{n}\mid x_{i}\mid +\prod\limits_{i=1}^{n}\mid x_{i}\mid | 30 | [-100,100]^{n} | 0 |
Schwefel 1.2 | F_{3}(x)=\sum\limits_{i=1}^{n}(\sum\limits_{j=1}^{i} x_{j})^{2} | 30 | [-100,100]^{n} | 0 |
Schwefel 2.21 | F_{4}(x)=\max\limits_{i}\{\mid x_{i}\mid, 1\leq i\leq n\} | 30 | [-100,100]^{n} | 0 |
Rosenbrock | F_{5}(x)=\sum\limits_{i=1}^{n-1}(100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}) | 30 | [-200,200]^{n} | 0 |
Step | F_{6}(x)=\sum\limits_{i=1}^{n}(x_{i}+0.5)^{2} | 30 | [-100,100]^{n} | 0 |
Quartic | F_{7}(x)=\sum\limits_{i=1}^{n}ix_{i}^{4}+rand() | 30 | [-1.28, 1.28]^{n} | 0 |
Name | Function | n | Range | Optimum |
Schwefel | F_{8}(x)=-\sum\limits_{i=1}^{n}(x_{i}\sin(\sqrt{\mid x_{i}\mid})) | 30 | [-500,500]^{n} | -12569.5 |
Rastrigin | F_{9}(x)=\sum\limits_{i=1}^{n}(x_{i}^{2}-10\cos(2\pi x_{i})+10) | 30 | [-5.12, 5.12]^{n} | 0 |
Ackley | F_{10}(x)=-20\exp(-0.2\sqrt{\frac{1}{n}\sum\limits_{i=1}^{n}x_{i}^{2}})-\exp(\frac{1}{n}\sum\limits_{i=1}^{n}\cos(2\pi x_{i}))+20+e | 30 | [-32, 32]^{n} | 0
Griewank | F_{11}(x)=\frac{1}{4000}\sum\limits_{i=1}^{n}x_{i}^{2}-\prod\limits_{i=1}^{n}\cos(\frac{x_{i}}{\sqrt{i}})+1 | 30 | [-600,600]^{n} | 0 |
Penalized | F_{12}(x)=\frac{\pi}{n}\{10\sin^{2}(\pi y_{1})+\sum\limits_{i=1}^{n-1}(y_{i}-1)^{2}[1+10\sin^{2}(\pi y_{i+1})]+(y_{n}-1)^{2}\}+\sum\limits_{i=1}^{n}u(x_{i}, 10,100, 4) | 30 | [-50, 50]^{n} | 0 |
Penalized2 | F_{13}(x)=0.1\{\sin^{2}(3\pi x_{1})+\sum\limits_{i=1}^{n-1}(x_{i}-1)^{2}[1+\sin^{2}(3\pi x_{i+1})]+(x_{n}-1)^{2}[1+\sin^{2}(2\pi x_{n})]\}+\sum\limits_{i=1}^{n}u(x_{i}, 5,100, 4) | 30 | [-50, 50]^{n} | 0
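The unimodal and multimodal benchmarks above are straightforward to implement; a minimal sketch of four of them (the function names are our own, and each evaluates to its listed optimum of 0 at the known minimizer):

```python
import numpy as np

def sphere(x):
    # F1: sum of squares; global minimum 0 at the origin
    return np.sum(x ** 2)

def rosenbrock(x):
    # F5: narrow curved valley; global minimum 0 at (1, ..., 1)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def rastrigin(x):
    # F9: highly multimodal; global minimum 0 at the origin
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):
    # F10: global minimum 0 at the origin (the trailing "+ e" cancels
    # the -exp(mean cos) term, which equals -e there)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

x0 = np.zeros(30)
print(sphere(x0), rosenbrock(np.ones(30)), rastrigin(x0), ackley(x0))
```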
Name | Function | n | Range | Optimum |
Foxholes | F_{14}(x)=[\frac{1}{500}+\sum\limits_{j=1}^{25}\frac{1}{j+\sum\limits_{i=1}^{2}(x_{i}-a_{ij})^{6}}]^{-1} | 2 | [-65.536, 65.536]^{n} | 0.998
Kowalik | F_{15}(x)=\sum\limits_{i=1}^{11}\mid a_{i}-\frac{x_{1}(b_{i}^{2}+b_{i}x_{2})}{b_{i}^{2}+b_{i}x_{3}+x_{4}}\mid^{2} | 4 | [-5, 5]^{n} | 3.075\times 10^{-4}
Six Hump Camel | F_{16}(x)=4x_{1}^{2}-2.1x_{1}^{4}+\frac{1}{3}x_{1}^{6}+x_{1}x_{2}-4x_{2}^{2}+4x_{2}^{4} | 2 | [-5, 5]^{n} | -1.0316 |
Branin | F_{17}(x)=(x_{2}-\frac{5.1}{4\pi^{2}}x_{1}^{2}+\frac{5}{\pi}x_{1}-6)^{2} +10(1-\frac{1}{8\pi})\cos x_{1}+10 | 2 | [-5, 10]\times[0, 15] | 0.398 |
Goldstein-Price | F_{18}(x)=[1+(x_{1}+x_{2}+1)^{2}(19-14x_{1}+3x_{1}^{2}-14x_{2}+6x_{1}x_{2}+3x_{2}^{2})]\times[30+(2x_{1}-3x_{2})^{2}(18-32x_{1}+12x_{1}^{2}+48x_{2}-36x_{1}x_{2}+27x_{2}^{2})] | 2 | [-2, 2]^{n} | 3
Hartman 3 | F_{19}(x)=-\sum\limits_{i=1}^{4}c_{i}\exp[-\sum\limits_{j=1}^{3}a_{ij}(x_{j}-p_{ij})^{2}] | 3 | [0, 1]^{n} | -3.86 |
Hartman 6 | F_{20}(x)=-\sum\limits_{i=1}^{4}c_{i}\exp[-\sum\limits_{j=1}^{6}a_{ij}(x_{j}-p_{ij})^{2}] | 6 | [0, 1]^{n} | -3.322 |
Shekel 5 | F_{21}(x)=-\sum\limits_{i=1}^{5}[(x-a_{i})(x-a_{i})^{T}+c_{i}]^{-1} | 4 | [0, 10]^{n} | -10.1532
Shekel 7 | F_{22}(x)=-\sum\limits_{i=1}^{7}[(x-a_{i})(x-a_{i})^{T}+c_{i}]^{-1} | 4 | [0, 10]^{n} | -10.4029
Shekel 10 | F_{23}(x)=-\sum\limits_{i=1}^{10}[(x-a_{i})(x-a_{i})^{T}+c_{i}]^{-1} | 4 | [0, 10]^{n} | -10.5364
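Two of the fixed-dimension benchmarks above can likewise be sketched directly from their formulas; the minimizer coordinates used below are the commonly cited ones, not values from this article:

```python
import math

def six_hump_camel(x1, x2):
    # F16: two global minima with f = -1.0316, e.g. near (0.0898, -0.7126)
    return 4 * x1**2 - 2.1 * x1**4 + x1**6 / 3 + x1 * x2 - 4 * x2**2 + 4 * x2**4

def branin(x1, x2):
    # F17: three global minima with f = 0.397887, e.g. at (pi, 2.275)
    return ((x2 - 5.1 / (4 * math.pi**2) * x1**2 + 5 / math.pi * x1 - 6) ** 2
            + 10 * (1 - 1 / (8 * math.pi)) * math.cos(x1) + 10)

print(round(six_hump_camel(0.0898, -0.7126), 4))  # -1.0316
print(round(branin(math.pi, 2.275), 4))           # 0.3979
```

At (pi, 2.275) the squared term of Branin vanishes exactly, leaving 10/(8*pi) = 0.397887, which matches the tabulated optimum.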