The mean and noise of stochastic gene transcription with cell division

  • Received: 31 January 2018 Revised: 16 April 2018 Published: 01 October 2018
  • MSC : Primary: 37H10; Secondary: 34F05, 60J10, 92C40

  • The growth and development of life are driven by continuous cell divisions. Cell division is a stochastic and complex process. In this paper, we study the impact of cell division on the mean and noise of mRNA numbers by using a two-state stochastic model of transcription. Our results show that the steady-state mRNA noise with symmetric cell division is less than that with binomial inheritance with probability 0.5, whereas the steady-state mean transcript level with symmetric division is always equal to that with binomial inheritance with probability 0.5. Cell division, except with random additive inheritance, always decreases the mean transcript level and increases transcription noise. Conversely, random additive inheritance always increases the mean transcript level and decreases transcription noise. We also show that the steady-state mean transcript level (steady-state mRNA noise) with symmetric cell division or binomial inheritance increases (decreases) with the average cell cycle duration, whereas the steady-state mean transcript level (steady-state mRNA noise) with random additive inheritance decreases (increases) with the average cell cycle duration. Our results are confirmed by Gillespie stochastic simulations using plausible parameters.

    Citation: Qi Wang, Lifang Huang, Kunwen Wen, Jianshe Yu. The mean and noise of stochastic gene transcription with cell division[J]. Mathematical Biosciences and Engineering, 2018, 15(5): 1255-1270. doi: 10.3934/mbe.2018058



    As many real-world optimization problems become more complex, there is a continuing need for better optimization algorithms. By observing flying dragonflies, humans found the key to the helicopter; by following the light trails of fireflies, they developed cold-light mining lamps. Researchers have once again turned their attention to nature to explore intelligent optimization algorithms for solving optimization problems. An algorithm inspired by natural biological phenomena, or developed by exploiting the intelligent behavior of a group of organisms, is called a swarm intelligence algorithm. Commonly used swarm intelligence optimization algorithms include the particle swarm optimization (PSO) algorithm [1], the whale optimization algorithm (WOA) [2], the firefly algorithm (FA) [3], the ant lion algorithm (ALA) [4], the cuckoo algorithm (CA) [5], the genetic algorithm (GA) [6], monarch butterfly optimization [7], the earthworm optimization algorithm (EWA) [8], elephant herding optimization (EHO) [9], the moth search (MS) algorithm [10], and others. Among these, particle swarm optimization algorithms are popular because of their fast convergence, few parameters, and easy implementation.

    Particle swarm optimization (PSO) is a population-based metaheuristic algorithm proposed by Eberhart [11] and Kennedy [1] in 1995. The particle swarm algorithm originated from research on the foraging behavior of bird flocks. Its core idea is to use information sharing among the individuals of the population so that the motion of the whole population evolves from disorder to order in the solution space of the problem, thereby obtaining the optimal solution.

    A large number of intelligent algorithms based on social intelligent behavior have been extensively researched over the past few decades through the study of natural creatures and applied to various optimization fields [12]. However, conventional optimization methods sometimes cannot obtain a satisfactory solution in a reasonable time. Consequently, a large number of nature-inspired metaheuristic algorithms have been introduced to tackle various complicated problems [13]. For example, to address the tendency of PSO to converge prematurely, Zhou et al. [14] proposed a multi-sample particle swarm optimization algorithm based on electric field force. Feng et al. [15] presented a novel binary monarch butterfly optimization (BMBO) method for the 0–1 knapsack problem (0–1 KP). Feng et al. [16] presented generalized opposition-based learning (OBL) monarch butterfly optimization with Gaussian perturbation (OMBO), in which the OBL strategy is applied to half of the individuals of the population in the late stage of evolution and Gaussian perturbation acts on the individuals with poor fitness in each generation. Liu et al. [17] proposed an artificial bee colony algorithm based on a dynamic penalty function and Lévy flight (DPLABC) for constrained optimization problems (COPs), in which Lévy flight with the logistic map is applied in the employed bee phase. Guo et al. [18] presented an improved krill herd (IKH) approach for global optimization problems; the method uses a new Lévy flight distribution and an elitism scheme to update the KH motion calculation. Hanafi et al. [19] proposed a new PSO, called hierarchical PSO, which uses a dynamic tree hierarchy based on the performance of each particle in the population to define the neighborhood structure. Lu et al. [20] proposed a novel multi-group particle swarm optimizer driven by a delayed activation (DA) strategy and an exclusion mechanism, called the enhanced multi-group collaborative particle swarm optimizer (EMCPSO); EMCPSO exploits multi-group technology to overcome the premature convergence of standard PSO. Zhang et al. [21] proposed a novel particle swarm optimization based on the prey-predator relationship, which reduces the probability of the algorithm falling into a local optimum by introducing an old deletion mechanism. Liang et al. [22] developed a novel PSO named comprehensive learning PSO (CLPSO), which utilizes a new learning strategy to maintain swarm diversity and thereby prevent premature convergence when solving multimodal problems; in CLPSO, each dimension of a particle determines its learning exemplar according to a learning probability. Xia et al. [23] proposed an extended particle swarm optimization with multiple exemplars and a forgetting capability, which increases the diversity of particles by introducing the forgetting capability. Lynn et al. [24] proposed a new comprehensive-learning-based particle swarm optimization algorithm called heterogeneous comprehensive learning particle swarm optimization (HCLPSO). Zhang et al. [25] proposed a comprehensive learning particle swarm optimization algorithm based on Bayesian iteration, called Bayesian comprehensive learning particle swarm optimization (BCLPSO); the posterior probabilities obtained with Bayesian iteration inherit the historical information of the particles, thus maintaining the diversity of the population and preventing premature convergence. Shi et al.
    [26] introduced the concept of the inertia weight (ω) to balance global and local search capabilities, which significantly improved the performance of PSO. Clerc et al. [1] also proposed another related parameter, called the constriction factor (χ), to prevent premature convergence. Xia et al. [27] proposed an extended particle swarm optimization with multiple exemplars and a forgetting ability, which increases the diversity of particles. Wang et al. [28] proposed a novel and robust hybrid metaheuristic optimization method for numerical optimization problems by adding differential evolution (DE) mutation operators to the accelerated particle swarm optimization (APSO) algorithm. Mirjalili et al. [29] combined the strengths of particle swarm optimization (PSO) and the gravitational search algorithm (GSA) to propose a novel hybrid optimization algorithm, PSOGSA, which outperforms both PSO and GSA in terms of exploration and exploitation. Kahouli et al. [30] coupled a genetic algorithm and a particle swarm algorithm to optimize network reconfiguration in distribution systems, improving reliability and reducing power loss. Lin et al. [31] proposed global genetic learning particle swarm optimization with diversity enhancement by ring topology (GGL-PSOD) to improve the performance of GL-PSO. Naderi et al. [32] developed a new hybrid algorithm for the transmission expansion planning problem in power grids, combining the shuffled frog leaping algorithm, particle swarm optimization, and teaching-learning-based optimization. James et al. [33] proposed an efficient stock market forecasting model combining artificial neural networks and particle swarm optimization. Mohammadi et al. [34] proposed a hybrid particle swarm optimization-differential evolution algorithm combined with a multilayer perceptron for suspended sediment load estimation.

    However, a drawback of the PSO algorithm is that, when the particles search for the optimal solution, they may overlook potentially optimal regions because of the influence of the learning exemplar, causing them to fall into a local optimum from which it is difficult to escape. Moreover, with the development of artificial intelligence techniques, many complex optimization problems have arisen that tend to be high-dimensional, multi-constrained, and multimodal. Therefore, the global search performance of particle swarm algorithms should be studied and improved when they are applied to such complex optimization problems.

    To improve the performance of particle swarm algorithms in solving complex optimization problems and to better balance exploration and exploitation, this paper proposes a Lévy flight-based inverse adaptive comprehensive learning particle swarm optimization (LFIACL-PSO). In the LFIACL-PSO algorithm, we first introduce an inverse learning mechanism, which allows particles that have fallen into a local optimum and cannot escape to change their search direction and learn from the worst position in their history, with the learning step obtained from a random variable with a Lévy distribution. Secondly, to prevent the algorithm from converging prematurely and the particle diversity from decreasing, the comprehensive learning strategy and the Ring-type topology are introduced to provide additional learning exemplars for the particles. Finally, we combine Lévy flight and the inverse learning mechanism in the particle velocity update, which effectively prevents the algorithm from easily falling into local optima.

    The rest of this paper is organized as follows: Section 2 briefly introduces the classic PSO and the Ring-type topology. The implementation of LFIACL-PSO is described in detail in Section 3. Section 4 is the experimental part of this paper; we choose 16 benchmark functions and seven well-known comparison algorithms for experimental verification and analysis. Finally, relevant conclusions are outlined in Section 5.

    In this part, we mainly introduce the classic PSO algorithm and Ring-type topology and set the stage for subsequent content.

    In a D-dimensional search space with a population of m birds, each bird is regarded as a particle, and each particle can be viewed as a point in the space. The state attributes of the $i$th particle ($i=1,2,\ldots,m$) are described by a velocity vector $V_i$ and a position vector $X_i$. During the iterations of the population, the PSO continuously generates and updates the best historical position of the $i$th particle, $PB_i=(PB_{i,1},PB_{i,2},\ldots,PB_{i,D})$, as well as the best position of the whole population, $GB=(GB_1,GB_2,\ldots,GB_D)$. The position update rules of the traditional PSO algorithm are given in Eqs (2.1) and (2.2).

    $V^{k}_{i,d}=V^{k-1}_{i,d}+u_{PB_i}r_{PB_{i,d}}\left(PB^{k-1}_{i,d}-X^{k-1}_{i,d}\right)+u_{GB_i}r_{GB_{d}}\left(GB^{k-1}_{d}-X^{k-1}_{i,d}\right),$ (2.1)
    $X^{k}_{i,d}=X^{k-1}_{i,d}+V^{k}_{i,d}.$ (2.2)

    The velocity update Eq (2.1) includes three parts: $V^{k-1}_{i,d}$ is the velocity of the particle in the previous iteration; $u_{PB_i}r_{PB_{i,d}}(PB^{k-1}_{i,d}-X^{k-1}_{i,d})$ is the cognitive part, which represents the particle's own thinking; $u_{GB_i}r_{GB_{d}}(GB^{k-1}_{d}-X^{k-1}_{i,d})$ is the social part, which represents information sharing and cooperation between particles. Here $V_i=(V_{i,1},V_{i,2},\ldots,V_{i,D})$ and $X_i=(X_{i,1},X_{i,2},\ldots,X_{i,D})$ represent the velocity and position of the $i$th particle in the population, respectively; $u_{PB_i}$ and $u_{GB_i}$ denote the acceleration coefficients, which adjust the maximum learning step length; $r_{PB_{i,d}}$ and $r_{GB_{d}}$ are random numbers in the range [0, 1]. Generally, the velocity in the $d$th dimension is limited to $[V_{\min,d},V_{\max,d}]$ and the position to $[X_{\min,d},X_{\max,d}]$; that is, if the velocity or position exceeds its boundary value during an iteration, it is set to the boundary, e.g., $V^{k}_{i,d}=V_{\max,d}$ and $X^{k}_{i,d}=X_{\max,d}$.
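
    To make the update rule concrete, the following is a minimal NumPy sketch of Eqs (2.1) and (2.2) for a single particle; the function name, the default coefficient values, and the clipping bounds are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def pso_update(pos, vel, pbest, gbest, u_pb=2.0, u_gb=2.0,
                   v_bounds=(-1.0, 1.0), x_bounds=(-100.0, 100.0)):
        """One velocity/position step of classic PSO, Eqs (2.1)-(2.2).

        pos, vel, pbest : arrays of shape (D,) for one particle
        gbest           : array of shape (D,), best position of the swarm
        u_pb, u_gb      : acceleration coefficients (illustrative values)
        """
        d = pos.shape[0]
        r_pb = np.random.rand(d)              # r_PB in [0, 1], per dimension
        r_gb = np.random.rand(d)              # r_GB in [0, 1], per dimension
        vel = vel + u_pb * r_pb * (pbest - pos) + u_gb * r_gb * (gbest - pos)
        vel = np.clip(vel, *v_bounds)         # keep velocity in [Vmin, Vmax]
        pos = np.clip(pos + vel, *x_bounds)   # Eq (2.2), keep position in bounds
        return pos, vel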

    To improve the ability of the particles to jump out of local optima, Shi et al. [26] introduced the inertia weight ω into Eq (2.1), as shown in Eq (2.3).

    $V^{k}_{i,d}=\omega V^{k-1}_{i,d}+u_{PB_i}r_{PB_{i,d}}\left(PB^{k-1}_{i,d}-X^{k-1}_{i,d}\right)+u_{GB_i}r_{GB_{d}}\left(GB^{k-1}_{d}-X^{k-1}_{i,d}\right),$ (2.3)

    In addition, Clerc et al. [35] added a constriction factor α to Eq (2.2) to control the velocity, as shown in Eq (2.4).

    $X^{k}_{i,d}=X^{k-1}_{i,d}+\alpha V^{k}_{i,d}.$ (2.4)

    In the canonical PSO, each particle has only one learning exemplar in each learning part. However, in real life, people obtain information from more than one person; moreover, many studies have shown that children with multiple learning samples have better word-learning abilities than those who learn from only one sample [36,37,38]. For this reason, we choose $GB$ and $RB_i$ as the two exemplars of the social learning part of the $i$th particle in this study. Specifically, the Ring-type topology is used to update $RB_i$. Updating the learning exemplar with the Ring-type topology reduces the probability of premature convergence caused by attraction to a local optimum during exploitation. When the number of iterations in which a particle's fitness value is not improved exceeds a set maximum, the particle reselects its neighbor particles. The velocity is updated by Eq (2.5).

    $V^{k}_{i,d}=\omega V^{k-1}_{i,d}+u_{PB_i}r_{PB_{1,d}}\left(PB^{k-1}_{i,d}-X^{k-1}_{i,d}\right)+u_{RB_i}r_{RB_{2,d}}\left(RB^{k-1}_{i,d}-X^{k-1}_{i,d}\right)+u_{GB_i}r_{GB_{3,d}}\left(GB^{k-1}_{d}-X^{k-1}_{i,d}\right).$ (2.5)
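
    As a small illustration of how the ring neighborhood exemplar $RB_i$ can be obtained, the sketch below picks the best personal-best among a particle's two ring neighbors; the neighborhood radius of 1 and the data layout are assumptions made for illustration only.

    import numpy as np

    def ring_neighbor_best(pbest_pos, pbest_fit, i):
        """Pick RB_i as the best pbest among particle i's ring neighbors.

        pbest_pos : array (PS, D) of personal-best positions
        pbest_fit : array (PS,) of personal-best fitness values (minimization)
        """
        ps = pbest_fit.shape[0]
        neighbors = [(i - 1) % ps, (i + 1) % ps]      # left/right on the ring
        best = min(neighbors, key=lambda j: pbest_fit[j])
        return pbest_pos[best].copy()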

    Although the particle swarm algorithm converges quickly and is easy to implement, it suffers from premature convergence and loss of particle diversity. To address these shortcomings, this paper proposes a Lévy flight-based inverse adaptive comprehensive learning particle swarm optimization (LFIACL-PSO).

    Unlike the standard particle swarm update rule, the comprehensive learning strategy improves the learning pattern of the particles in the PSO algorithm by using the historical best information of all other particles to update the particle velocities. This strategy preserves the diversity of the population and prevents premature convergence. The velocity update formula of the comprehensive learning strategy is shown in Eq (3.1).

    $V^{k}_{i,d}=\omega V^{k-1}_{i,d}+u_{PB_i}r_{PB_{1,d}}\left(PB^{k-1}_{f_i(d)}-X^{k-1}_{i,d}\right),$ (3.1)

    where $f_i=[f_i(1),f_i(2),\ldots,f_i(D-1),f_i(D)]$ determines which particle's $pbest$ the $i$th particle should follow in each dimension $d$; it can be the corresponding dimension of any particle's $pbest$, including its own, and the decision depends on a probability $Pc_i$, referred to as the learning probability, which can take different values for different particles. For each dimension of particle $i$, we generate a random number. If the random number is greater than $Pc_i$, the corresponding dimension learns from the same dimension of the particle's own $pbest$; otherwise, it learns from another particle's $pbest$ [22]. The learning probability $Pc_i$ of this particle is defined in Eq (3.2).

    $Pc_i=\alpha+\beta\,\dfrac{\exp\!\left(\frac{10(i-1)}{PS-1}\right)-1}{\exp(10)-1},$ (3.2)

    where $\alpha=0.05$, $\beta=0.45$, and $PS$ denotes the size of the population. The search range of the particles is bounded by $[X_{\min},X_{\max}]$; when the boundary is exceeded, the fitness value and position of the particle are not updated. In addition, to give the particle a chance to improve its own best position, a refresh gap $m$ is defined: if the particle's fitness value does not change for $m$ iterations, $Pc_i$ is updated.
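
    A minimal sketch of how the learning probability of Eq (3.2) and the comprehensive-learning exemplar can be built is given below; the per-dimension random pick of another particle is a simplification of CLPSO's exemplar selection and is only an illustrative assumption.

    import numpy as np

    def learning_probability(i, ps, a=0.05, b=0.45):
        """Pc_i of Eq (3.2); i is the 0-based particle index, ps the swarm size."""
        return a + b * (np.exp(10.0 * i / (ps - 1)) - 1.0) / (np.exp(10.0) - 1.0)

    def build_exemplar(i, pbest_pos, pc_i, rng=np.random):
        """Construct the exemplar PB_{f_i(d)} for particle i, dimension by dimension.

        With probability Pc_i a dimension is copied from another particle's pbest,
        otherwise the particle keeps its own pbest value in that dimension.
        """
        ps, dim = pbest_pos.shape
        exemplar = pbest_pos[i].copy()
        others = [k for k in range(ps) if k != i]
        for d in range(dim):
            if rng.rand() <= pc_i:                 # learn from another particle
                j = rng.choice(others)
                exemplar[d] = pbest_pos[j, d]
        return exemplar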

    In this paper, when the comprehensive learning strategy is used as the learning paradigm for the cognitive part of the particle, the velocity update equation becomes Eq (3.3), which is obtained by combining Eqs (2.5) and (3.1).

    $V^{k}_{i,d}=\omega V^{k-1}_{i,d}+u_{PB_i}r_{PB_{i,d}}\left(PB^{k-1}_{f_i(d)}-X^{k-1}_{i,d}\right)+u_{RB_i}r_{RB_{2,d}}\left(RB^{k-1}_{i,d}-X^{k-1}_{i,d}\right)+u_{GB_i}r_{GB_{3,d}}\left(GB^{k-1}_{d}-X^{k-1}_{i,d}\right).$ (3.3)

    In the classic particle swarm algorithm, all particles move towards their own historical best positions and the historical best position of the population throughout the iterative search. However, when searching for food, birds periodically share information about their distance from the food, so it can be argued that each bird in the population affects the other individuals. The information about each generation of particles is recorded, and by making full use of this information it is possible to improve the algorithm's search ability. When a particle gets trapped in a local optimum and stalls, letting that particle learn from its individual historical worst position quickly guides it to escape from the local optimum and improves the search capability of the algorithm.

    During the algorithm iterations, if it is detected that the individual best position of a particle has not been updated within a set number of iterations, the particle is judged to be caught in a local optimum. At this point, inverse learning is applied to it, while the learning method of the other particles remains unchanged. The historical worst position of the $i$th particle is denoted $Worst_i=(w_{i,1},w_{i,2},\ldots,w_{i,j},\ldots,w_{i,D})$. The velocity update equation of the particle under inverse learning is then Eq (3.4).

    $V^{k}_{i,d}=\omega V^{k-1}_{i,d}+u_{PB_i}r_{PB_{i,d}}\left(PB^{k-1}_{f_i(d)}-X^{k-1}_{i,d}\right)+u_{RB_i}r_{RB_{2,d}}\left(RB^{k-1}_{i,d}-X^{k-1}_{i,d}\right)+u_{GB_i}r_{GB_{3,d}}\left(GB^{k-1}_{d}-X^{k-1}_{i,d}\right)+I\,r_{W_{4,d}}\left(X^{k-1}_{i,d}-Worst^{k-1}_{i,d}\right),$ (3.4)

    where $I$ denotes the inverse learning factor and $Worst_{i,d}$ denotes the value, in the $d$th dimension, of the historical worst position of the $i$th particle.

    Lévy flight, named after the French mathematician Paul Lévy, refers to a random walk in which the probability distribution of the step size is heavy-tailed. This means that particles moving according to a Lévy flight occasionally take large steps interspersed with frequent small steps. Many biological phenomena are characterized by a Lévy distribution, such as the flight paths of flies. In recent years, the Lévy distribution has been applied to the field of optimization, for example in the cuckoo algorithm [39]. The use of Lévy flight can improve the global search ability of an algorithm and prevent the loss of population diversity, so combining Lévy flight with the particle swarm algorithm can effectively mitigate its tendency to fall into local optima and enhance its global optimization-seeking capability.

    Because of the complexity of the Lévy distribution, the Mantegna algorithm is commonly used to simulate it. The equation for the step size $Flight_{size}$ of the simulated Lévy flight is shown in Eq (3.5).

    $Flight_{size}=\dfrac{\mu}{|v|^{1/\gamma}},$ (3.5)

    where $\gamma=1.5$ is the parameter of the Lévy distribution, and $\mu$ and $v$ are random numbers obeying the normal distributions shown in Eq (3.6).

    $\begin{cases}\mu\sim N(0,\tau_{\mu}^{2})\\ v\sim N(0,\tau_{v}^{2})\end{cases},$ (3.6)

    where

    $\begin{cases}\tau_{\mu}=\left\{\dfrac{\Gamma(1+\gamma)\sin(\pi\gamma/2)}{\Gamma\!\left(\frac{1+\gamma}{2}\right)\gamma\,2^{(\gamma-1)/2}}\right\}^{1/\gamma}\\ \tau_{v}=1\end{cases},$ (3.7)

    so the particle motion step size $PS_{size}$ is given by Eq (3.8),

    $PS_{size}=\rho\times Flight_{size},$ (3.8)

    where $\rho$ is an adjustment factor that scales the step size for the specific problem at hand, so that the flight step is neither too large nor too small and stays in a proper range. Figure 1 shows a Lévy flight of 1000 steps simulated with Mantegna's algorithm in MATLAB.

    Figure 1.  Lévy flight image.
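
    For reference, a minimal NumPy sketch of the Mantegna simulation of Eqs (3.5)–(3.8) might look as follows; the function names and the default value of $\rho$ are illustrative assumptions.

    import numpy as np
    from math import gamma as gamma_fn, sin, pi

    def levy_flight_size(gamma_par=1.5):
        """Mantegna simulation of the Lévy step, Eqs (3.5)-(3.7)."""
        tau_u = ((gamma_fn(1 + gamma_par) * sin(pi * gamma_par / 2)) /
                 (gamma_fn((1 + gamma_par) / 2) * gamma_par *
                  2 ** ((gamma_par - 1) / 2))) ** (1 / gamma_par)
        mu = np.random.normal(0.0, tau_u)        # mu ~ N(0, tau_mu^2)
        v = np.random.normal(0.0, 1.0)           # v  ~ N(0, tau_v^2), tau_v = 1
        return mu / abs(v) ** (1 / gamma_par)    # Eq (3.5)

    def particle_step_size(rho=0.01, gamma_par=1.5):
        """PS_size = rho * Flight_size, Eq (3.8); rho is problem-dependent."""
        return rho * levy_flight_size(gamma_par)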

    Finally, combining the Lévy flight with the inverse learning mechanism and using the Lévy distribution to obtain the step size of the inverse learning, the velocity update equation for LFIACL-PSO is obtained as Eq (3.9).

    $V^{k}_{i,d}=\omega V^{k-1}_{i,d}+u_{PB_i}r_{PB_{i,d}}\left(PB^{k-1}_{f_i(d)}-X^{k-1}_{i,d}\right)+u_{RB_i}r_{RB_{2,d}}\left(RB^{k-1}_{i,d}-X^{k-1}_{i,d}\right)+u_{GB_i}r_{GB_{3,d}}\left(GB^{k-1}_{d}-X^{k-1}_{i,d}\right)+I\,r_{P_{4,d}}\left(X^{k-1}_{i,d}-Worst^{k-1}_{i,d}\right)\times PS_{size},$ (3.9)

    where $Worst^{k-1}_{i,d}$ is the value of the $d$th dimension of the historical worst position of the $i$th particle.
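
    A compact sketch of the full velocity update of Eq (3.9) for one particle is given below; it assumes the exemplar, ring-best, and Lévy step helpers sketched earlier, and the variable names are illustrative.

    import numpy as np

    def lfiacl_velocity(vel, pos, exemplar, rb, gbest, worst,
                        w, u_pb, u_rb, u_gb, inv_factor, ps_size):
        """Velocity update of Eq (3.9); all position/velocity arrays have shape (D,).

        exemplar   : comprehensive-learning exemplar PB_{f_i(d)}
        rb         : ring-neighborhood best RB_i
        worst      : the particle's historical worst position
        inv_factor : inverse learning factor I
        ps_size    : Lévy-flight step size PS_size from Eq (3.8)
        """
        d = pos.shape[0]
        r1, r2, r3, r4 = (np.random.rand(d) for _ in range(4))
        return (w * vel
                + u_pb * r1 * (exemplar - pos)
                + u_rb * r2 * (rb - pos)
                + u_gb * r3 * (gbest - pos)
                + inv_factor * r4 * (pos - worst) * ps_size)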

    The acceleration coefficients regulate the maximum step size of particle learning. In typical particle swarm optimization algorithms, when the acceleration coefficients are not all equal, the algorithm is better able to maintain both convergence speed and search effectiveness; therefore, the choice of acceleration coefficients is one of the main research focuses in particle swarm optimization. Existing research on the acceleration coefficients falls mainly into two categories: settings based on empirical values, and adaptive adjustment during the particle update iterations. For instance, the state classified by an HMM has been used to adapt both acceleration parameters and the population size of PSO [40], and Kiani et al. [41] used a tangential chaos strategy to guide the acceleration coefficients toward the optimal solution. Inspired by the idea of adaptive regulation, in this paper the acceleration coefficients of each particle are adaptively adjusted according to the historical information of the elite particles in the population. The number of elite particles is set to 0.5 times the total number of particles. When the $i$th particle adjusts its acceleration coefficients, we use three values generated by the normal distributions $N(\delta_1,\theta_1^2)$, $N(\delta_2,\theta_2^2)$ and $N(\delta_3,\theta_3^2)$, assigned to $u_{PB_i}$, $u_{RB_i}$ and $u_{GB_i}$, respectively. The values of $\delta_1$, $\delta_2$ and $\delta_3$ are updated as shown in Eq (3.10).

    $\delta_i=(1-\lambda)\times\delta_i+\lambda\times\psi,\quad 1\le i\le 3,$ (3.10)

    where $\lambda$ denotes a learning weight for knowledge from the elite particles; $\psi=\mathrm{average}(\delta_i^{elite})$, where $\delta_i^{elite}$ represents the set of $\delta_i$ values of the elites and $\mathrm{average}(\delta_i^{elite})$ is their average value. In addition, from Eq (3.10), the value of $\lambda$ indicates the degree of influence of the elite particles' experience on the next generation of $\delta_i$. When $\lambda=0$, the information of the elite particles has no effect on the adjustment of $\delta_i$, and when $\lambda=1$, the new $\delta_i$ relies only on the historical knowledge of the elite particles.

    Usually, the sum of the acceleration coefficients in a particle swarm optimization algorithm is 4.0 [42]. Therefore, in the initial evolutionary stage, we set $\delta_1=\delta_2=\delta_3=1.33$ and $\theta_1=\theta_2=\theta_3=0.1$. Finally, the values of $u_{PB_i}$, $u_{RB_i}$ and $u_{GB_i}$ are obtained independently from $N(\delta_1,\theta_1^2)$, $N(\delta_2,\theta_2^2)$ and $N(\delta_3,\theta_3^2)$.
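
    The adaptive adjustment of Eq (3.10) and the subsequent sampling of the acceleration coefficients can be sketched as below; the elite set (best half of the swarm, per the text) is passed in as an array, and the data layout is an assumption made for illustration.

    import numpy as np

    def update_deltas(deltas, elite_deltas, lam=0.2):
        """Eq (3.10): delta_i = (1 - lambda) * delta_i + lambda * psi.

        deltas       : array (3,), the particle's current delta_1..delta_3
        elite_deltas : array (n_elite, 3), deltas kept by the elite particles
        """
        psi = elite_deltas.mean(axis=0)          # psi = average(delta_elite)
        return (1.0 - lam) * deltas + lam * psi

    def sample_accel_coeffs(deltas, thetas=(0.1, 0.1, 0.1)):
        """Draw u_PB, u_RB, u_GB independently from N(delta_k, theta_k^2)."""
        return np.random.normal(deltas, thetas)

    # Initial stage: delta_1 = delta_2 = delta_3 = 1.33, theta_k = 0.1
    deltas0 = np.full(3, 1.33)
    u_pb, u_rb, u_gb = sample_accel_coeffs(deltas0)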

    Combining the above components, Algorithm 1 shows the pseudo-code of LFIACL-PSO. Algorithm 1 is first initialized with the relevant parameters, such as the population size PS, search space dimension D, maximum number of iterations IterMax, and the velocity $V_i$ and position $X_i$ of the particles. The initial fitness value fit($X_i$) of the population is then computed, with the initial fitness value taken as the current local optimum Pfit(i) and individual worst value Worst(i) of each particle, and the best initial fitness value taken as the current global optimum Gfit(i).

    Algorithm 1 The pseudo-code of the LFIACL-PSO algorithm.
    Input: The initial position vector $X_i=[x_{i,1},x_{i,2},x_{i,3},\ldots,x_{i,D}]$, velocity vector $V_i=[v_{i,1},v_{i,2},v_{i,3},\ldots,v_{i,D}]$.
    Output: The whole particle's best position.
    1: /*Initialization */
    2: Initialization PS, D, IterMax, m=5, flag(i)=0, $\delta_1=\delta_2=\delta_3=1.33$, $\theta_1=\theta_2=\theta_3=0.1$;
    3: Compute the population's initial fitness values fit($X_i$), taking the initial fitness value as the current local optimum Pfit(i) and individual worst value Worst(i) of each particle, and the best initial fitness value as the current global optimum Gfit(i).
    4: /* LOOP */
    5: for k=1 to IterMax do
    6:  for i=1 to PS do
    7:    Update $V^{k}_{i,d}$ and $X^{k}_{i,d}$ by Eqs (3.3) and (2.2)
    8:    if fit(Xi)<Pfit(i) then
    9:      Pfit(i)=fit(Xi)
    10:      flag(i)=0
    11:      if fit(Xi)<Gfit(i) then
    12:        Gfit(i)=fit(Xi)
    13:      end if
    14:    else
    15:      flag(i)=flag(i)+1
    16:      if fit(Xi)>Worst(i) then
    17:        Worst(i)=fit(Xi)
    18:      end if
    19:      Sample $\mu$, $v$
    20:      $Flight_{size}=\mu/|v|^{1/\gamma}$
    21:      $PS_{size}=\rho\times Flight_{size}$
    22:      if flag(i)>m then
    23:        Update $V^{k}_{i,d}$ and $X^{k}_{i,d}$ by Eqs (3.9) and (2.2)
    24:        Update Pci by Eq (3.2)
    25:        Update $PB^{k-1}_{f_i(d)}$, $u_{PB_i}$, $u_{RB_i}$, $u_{GB_i}$
    26:        Reselect $RB^{k-1}_{i,d}$
    27:      end if
    28:    end if
    29:  end for
    30: end for

    When the current fitness value of a particle is smaller than its local optimum, Pfit(i) is updated and flag(i) is reset to 0; Gfit(i) is updated similarly. If the current fitness value of the particle is greater than Pfit(i), flag(i) is incremented by 1, as shown in lines 7–14 of the pseudo-code. If the fitness value is greater than the worst fitness value of the $i$th particle, Worst(i) is updated, as shown in lines 15 and 16 of the pseudo-code. When flag(i) > m, the velocity and position are updated by Eqs (3.9) and (2.2); otherwise, they are updated by Eqs (3.3) and (2.2), as shown in lines 7 and 23 of the pseudo-code. In the former case, $Pc_i$ is updated by Eq (3.2), along with $PB^{k-1}_{f_i(d)}$ and the associated parameters, as shown in lines 24–26 of the pseudo-code.

    This section first analyzes the sensitivity of the parameters introduced in the LFIACL-PSO algorithm. Then, the functional characteristics and roles of LFIACL-PSO are analyzed through population diversity experiments. After that, the performance of the LFIACL-PSO algorithm is compared with seven typical PSO variants on sixteen benchmark functions in terms of solution accuracy, statistical significance tests, CPU time consumption, and convergence speed. Finally, to verify the practical applicability of the LFIACL-PSO algorithm, a spread-spectrum radar polyphase code design engineering problem is used for testing and verification. The experiments were run on Windows 10 with an Intel(R) Core(TM) i5-9600KF @ 3.70 GHz processor and implemented in the Python language (version 3.6) with NumPy version 1.19.5.

    Tables 1 and 2 list the details of the 16 benchmark functions and the seven comparison algorithms. The 16 functions selected in this paper include 6 unimodal functions ($f_1$–$f_6$) and 10 multimodal functions ($f_7$–$f_{16}$). Among them, $f_{10}$–$f_{16}$ are more complex and harder-to-handle multimodal functions.

    Table 1.  Sixteen benchmark functions.
    Name Benchmark Functions Search Range fmin
    Unimodal Sphere($f_1$) $f_1(x)=\sum_{i=1}^{D}x_i^2$ $[-100,100]^D$ 0
    Schwefel's P1.2($f_2$) $f_2(x)=\sum_{i=1}^{D}\left(\sum_{j=1}^{i}x_j\right)^2$ $[-100,100]^D$ 0
    Schwefel's P2.21($f_3$) $f_3(x)=\max_{i}(|x_i|),\ i=1,2,\ldots,D$ $[-100,100]^D$ 0
    Schwefel's P2.22($f_4$) $f_4(x)=\sum_{i=1}^{D}|x_i|+\prod_{i=1}^{D}|x_i|$ $[-10,10]^D$ 0
    Bent Cigar($f_5$) $f_5(x)=x_1^2+10^6\sum_{i=2}^{D}x_i^2$ $[-1.28,1.28]^D$ 0
    Rosenbrock($f_6$) $f_6(x)=\sum_{i=1}^{D-1}\left[100(x_{i+1}-x_i^2)^2+(1-x_i)^2\right]$ $[-10,10]^D$ 0
    Multimodal Schwefel($f_7$) $f_7(x)=418.982887273\times D-\sum_{i=1}^{D}x_i\sin(\sqrt{|x_i|})$ $[-500,500]^D$ 0
    Dminima($f_8$) $f_8(x)=78.332331408+\frac{1}{D}\sum_{i=1}^{D}(x_i^4-16x_i^2+5x_i)$ $[-5,5]^D$ 0
    Alpine($f_9$) $f_9(x)=\sum_{i=1}^{D}|x_i\sin x_i+0.1x_i|$ $[-10,10]^D$ 0
    Schwefel P2.26($f_{10}$) $f_{10}(x)=-\sum_{i=1}^{D}x_i\sin(\sqrt{|x_i|})$ $[-500,500]^D$ $-12569.5$
    Rastrigin($f_{11}$) $f_{11}(x)=\sum_{i=1}^{D}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ $[-5.12,5.12]^D$ 0
    Noncontinuous Rastrigin($f_{12}$) $f_{12}(x)=\sum_{i=1}^{D}\left[y_i^2-10\cos(2\pi y_i)+10\right]$, $y_i=\begin{cases}x_i,&|x_i|<0.5\\ \mathrm{round}(2x_i)/2,&\text{otherwise}\end{cases}$ $[-5,5]^D$ 0
    Ackley($f_{13}$) $f_{13}(x)=-20\exp\!\left(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D}x_i^2}\right)-\exp\!\left(\tfrac{1}{D}\sum_{i=1}^{D}\cos 2\pi x_i\right)+20+e$ $[-32,32]^D$ 0
    Griewank($f_{14}$) $f_{14}(x)=\frac{1}{4000}\sum_{i=1}^{D}x_i^2-\prod_{i=1}^{D}\cos\!\left(\frac{x_i}{\sqrt{i}}\right)+1$ $[-600,600]^D$ 0
    Penalized1($f_{15}$) $f_{15}(x)=\frac{\pi}{D}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{D-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_D-1)^2\right\}+\sum_{i=1}^{D}u(x_i,10,100,4)$, where $y_i=1+0.25(x_i+1)$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m,&x_i>a\\ 0,&-a\le x_i\le a\\ k(-x_i-a)^m,&x_i<-a\end{cases}$ $[-50,50]^D$ 0
    Penalized2($f_{16}$) $f_{16}(x)=0.1\left\{\sin^2(3\pi y_1)+\sum_{i=1}^{D-1}(y_i-1)^2\left[1+\sin^2(3\pi y_{i+1})\right]+(y_D-1)^2\left[1+\sin^2(2\pi y_D)\right]\right\}+\sum_{i=1}^{D}u(x_i,10,100,4)$, where $y_i=1+0.25(x_i+1)$ and $u(x_i,a,k,m)$ is defined as above $[-50,50]^D$ 0

    Table 2.  Baseline algorithms.
    Algorithm Year Parameter settings
    1 PSO [26] 1998 ω=0.7298,c1=c2=1.49445
    2 KPSO [35] 1999 K=0.7298,c1=c2=1.49445
    3 DMSPSO [43] 2005 ω=0.7298,c1=c2=1.49445,R=10,L=100
    4 CLPSO [22] 2006 ω=[0.4,0.9],c=1.49445,m=7
    5 TSLPSO [44] 2019 ω=0.4–0.9, c1=c2=1.5, c3=0.5–2.5
    6 CLPSO-LOT [45] 2019 ω=[0.4,0.9],c=1.49445,m=7,g=30
    7 XPSO [27] 2020 ω=[0.4,0.9],c1=c2=c3=1.35,StagMax=5,η=0.2,p=0.2


    To analyze how the parameters λ, flag(i), and I affect the performance of LFIACL-PSO, we carry out a series of experiments in this part to analyze the sensitivity of these parameters.

    First of all, initial values are assigned to each parameter, namely λ=0.2, flag(i)=10, and I=1.4. When one parameter is analyzed, the remaining parameters are fixed at their initial values; for example, when testing the effect of the parameter λ, we fix flag(i)=10 and I=1.4. We selected five benchmark functions for the experimental analysis, including two unimodal functions $f_1$, $f_6$ (Sphere, Rosenbrock) and three multimodal functions $f_9$, $f_{13}$, $f_{14}$ (Alpine, Ackley, Griewank). To determine the optimal values of the parameters λ, flag(i), and I in the LFIACL-PSO algorithm, the basic experimental setup is as follows: search space D=30, population size N=40, and maximum number of function evaluations MaxFEs=500. The result of each experiment is the average of 30 independent runs, taking different values of the parameters λ, flag(i), and I. The results are shown in Tables 3–5, where the best results are bolded and Mean, Std, and Min represent the mean, standard deviation, and minimum of the 30 run results, respectively.

    Table 3.  Selection of parameter λ.
    λ f1 f6 f9 f13 f14
    0.1 Mean 1.99E-10 2.68E+01 1.67E-03 7.55E-01 1.31E-02
    Std 1.65E-10 1.35E+01 1.77E-03 7.52E-01 1.59E-02
    Min 1.67E-11 2.11E+01 1.25E-05 2.05E-06 5.08E-11
    0.2 Mean 1.83E-10 2.37E+01 2.98E-03 7.50E-01 9.01E-03
    Std 1.12E-10 1.07E+01 3.78E-03 6.95E-01 1.32E-02
    Min 5.45E-11 1.04E+01 4.51E-06 2.02E-06 8.69E-11
    0.3 Mean 2.01E-10 2.46E+01 1.42E-03 7.89E-01 1.04E-02
    Std 1.88E-10 1.02E+01 1.52E-03 6.69E-01 1.21E-02
    Min 3.41E-11 1.33E+01 6.10E-06 2.10E-06 4.83E-11
    0.4 Mean 1.24E-10 2.71E+01 1.92E-03 8.74E-01 1.47E-02
    Std 6.51E-11 1.47E+01 2.02E-03 7.12E-01 1.93E-02
    Min 1.30E-11 1.72E+01 5.52E-05 1.87E-06 1.10E-10

    Table 4.  Selection of parameter flag(i).
    flag(i) f1 f6 f9 f13 f14
    2 Mean 2.06E-09 7.05E+02 2.52E-02 1.76E+01 1.16E-02
    Std 2.55E-09 2.49E+03 4.90E-02 6.22E+00 1.66E-02
    Min 1.81E-10 8.63E+00 6.63E-05 1.16E+00 5.20E-11
    4 Mean 1.63E-10 2.85E+01 1.67E-03 6.64E-01 1.03E-02
    Std 1.12E-10 1.51E+01 1.54E-03 7.40E-01 1.19E-02
    Min 3.71E-11 1.57E+01 7.98E-06 1.67E-06 3.35E-11
    6 Mean 1.29E-09 7.46E+02 1.59E-01 3.59E+00 1.26E-02
    Min 9.04E-11 1.61E+01 9.72E-05 1.56E-03 9.38E-11
    8 Mean 1.96E-10 2.52E+01 2.73E-03 7.38E-01 1.03E-02
    Std 1.50E-10 1.06E+01 2.60E-03 6.64E-01 1.15E-02
    Min 2.22E-11 1.80E+01 4.10E-05 2.47E-06 1.94E-10
    10 Mean 1.56E-10 2.57E+01 1.45E-03 9.40E-01 1.46E-02
    Std 1.86E-10 1.22E+01 1.75E-03 7.50E-01 1.96E-02
    Min 1.75E-11 1.00E+01 4.52E-06 2.46E-06 2.53E-11
    12 Mean 1.91E-10 2.49E+01 2.55E-03 6.78E-01 1.15E-02
    Std 1.32E-10 9.77E+00 3.77E-03 7.00E-01 1.21E-02
    Min 1.98E-11 1.58E+01 9.09E-06 1.92E-06 1.08E-10

    Table 5.  Selection of parameter I.
    I f1 f6 f9 f13 f14
    0.4 Mean 1.55E-10 3.26E+01 2.71E-03 7.14E-01 1.32E-03
    Std 1.46E-10 2.10E+01 4.29E-03 7.20E-01 1.73E-03
    Min 1.47E-11 1.32E+01 8.06E-06 1.60E-06 5.48E-11
    0.6 Mean 1.82E-10 2.54E+01 2.23E-03 6.04E-01 8.45E-03
    Std 1.37E-10 1.01E+01 2.83E-03 6.27E-01 9.08E-03
    Min 3.47E-11 1.80E+01 4.78E-06 1.97E-06 7.47E-11
    0.8 Mean 1.25E-10 3.08E+01 1.80E-03 8.86E-01 9.18E-03
    Std 8.32E-11 1.70E+01 1.95E-03 7.29E-01 1.32E-02
    Min 3.00E-11 1.34E+01 2.64E-05 1.99E-06 5.19E-11
    1.0 Mean 1.36E-10 3.29E+01 4.20E-03 7.74E-01 1.29E-02
    Std 9.66E-11 1.99E+01 8.48E-03 7.11E-01 1.41E-02
    Min 2.38E-11 2.04E+01 1.77E-05 1.33E-06 4.44E-11
    1.2 Mean 2.16E-10 2.75E+01 1.84E-03 7.90E-01 7.96E-03
    Std 2.23E-10 1.42E+01 1.73E-03 7.77E-01 1.16E-02
    Min 2.04E-11 1.71E+01 2.64E-05 2.63E-06 9.34E-11
    1.4 Mean 6.72E-12 2.66E+01 4.13E-05 7.29E-07 8.53E-03
    Std 4.75E-12 3.31E+01 3.55E-05 3.94E-08 9.94E-03
    Min 1.97E-12 1.55E+01 5.81E-06 6.90E-07 1.40E-10
    1.6 Mean 1.94E-10 2.63E+01 1.75E-03 8.04E-01 1.34E-02
    Std 1.75E-10 1.30E+01 1.94E-03 6.53E-01 1.66E-02
    Min 1.30E-11 1.54E+01 1.32E-05 1.16E-06 9.98E-11


    The parameter λ determines the degree of influence that the experience-based information of the elite particles has on a particular particle. When the value of λ is small, the particles obtain more information from the elite particles, which leads to rapid convergence of the population; when λ is larger, the particle acquires more information from its own historical experience, which enhances its exploration ability. The experimental results are shown in Table 3. From the experimental results, we can see that setting λ in the range [0.2, 0.4] is beneficial for improving the search efficiency of LFIACL-PSO on all test problems. In this paper, λ=0.2 was used for all experiments.

    The value of the parameter flag(i) determines when a particle reselects its velocity update method, position, and neighbors. When the value of flag(i) is small, the particles change their velocity update direction and neighbor particles frequently, which may disturb their search direction. When flag(i) is larger, it helps preserve the search trail of the particles; however, if a particle falls into a local optimum, a larger flag(i) will prevent it from reselecting its learning exemplar and jumping out of the local optimum. The comparison results shown in Table 4 indicate that the test function values gradually improve when flag(i) ∈ [2, 4]. However, the performance decreases when flag(i)=6, improves again when flag(i) ∈ [8, 10], and deteriorates when flag(i) becomes too large. Therefore, based on the above experimental analysis, flag(i)=5 is used in this paper.

    The value of the parameter I determines how much information a particle obtains from the worst position in its history. When I is small, the particle receives less information from its worst location; conversely, more information is acquired when I is large. The optimal value of I is determined by experimenting with different values of I. The results are shown in Table 5: when I=1.4, the best values of the test functions are obtained, and when I ∈ [1.6, 1.8], the performance of LFIACL-PSO decreases. Therefore, based on the above analysis, the parameter is set to I=1.4 in the following experiments.

    In addition, in this paper, we use the adaptive update method to obtain the values of uPBi,uRBi, and uGBi, and I adopts a given constant. The main reason is that I is a parameter value newly introduced in this paper. Therefore, we need to perform experimental analysis on I to obtain its optimal value.

    Swarm diversity is used to determine whether the population is exploring or exploiting [46]. The diversity of a particle swarm optimization algorithm can be simply defined as the degree of dispersion of the particles in the swarm. This dispersion can be defined around a specific center point, and it can be determined from either the particles' positions or their velocities. In this article, swarm diversity is defined as in Eq (4.1):

    $Diversity(PS)=\dfrac{1}{PS\cdot|L|}\sum_{i=1}^{PS}\sqrt{\sum_{j=1}^{D}\left(x_{i,j}-\bar{x}_j\right)^2},$ (4.1)
    $\bar{x}_j=\dfrac{\sum_{i=1}^{PS}x_{i,j}}{PS},$ (4.2)

    where $PS$ is the population size, $|L|$ is the length of the longest diagonal of the search space, $D$ is the dimensionality of the problem, $x_{i,j}$ denotes the position of the $i$th particle in the $j$th dimension, and $\bar{x}_j$ denotes the average value of $x=[x_1,x_2,\ldots,x_{PS}]$ in the $j$th dimension.
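
    A direct NumPy implementation of the diversity measure of Eqs (4.1) and (4.2) might look as follows; computing $|L|$ from the box bounds of the search space is an assumption consistent with the definition of the longest diagonal.

    import numpy as np

    def swarm_diversity(positions, lower, upper):
        """Diversity of Eq (4.1): mean distance of the particles to the swarm
        centroid, normalised by the longest diagonal |L| of the search space.

        positions    : array (PS, D) of particle positions
        lower, upper : arrays (D,) with the box bounds of the search space
        """
        diag_len = np.linalg.norm(upper - lower)     # |L|
        centroid = positions.mean(axis=0)            # x_bar_j, Eq (4.2)
        dists = np.linalg.norm(positions - centroid, axis=1)
        return dists.sum() / (positions.shape[0] * diag_len)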

    A set of experiments is carried out on four representative benchmark functions, in which Schwefel's P2.22 and Rosenbrock are unimodal functions and Rastrigin and Ackley are multimodal functions. The Schwefel's P2.22 function has search range $[-10,10]$ and global minimum $f(0,0,\ldots,0)=0$; the Rosenbrock function has search range $[-2.048,2.048]$ and global minimum $f(1,1,\ldots,1)=0$; the Rastrigin function has search range $[-5.12,5.12]$ and global minimum $f(0,0,\ldots,0)=0$; and the Ackley function has search range $[-30,30]$ and global minimum $f(0,0,\ldots,0)=0$. The experimental results are shown in Figure 2, where LFIAPSO-CL and ACLPSO-LFI denote the variants of LFIACL-PSO obtained by removing the comprehensive learning strategy and by removing the Lévy flight and inverse learning, respectively.

    Figure 2.  Comparative results of algorithm diversity.

    Figure 2 depicts the diversity curves presented by the LFIACL-PSO algorithm on the four benchmark functions. The population size is 40, the search space is 30 dimensions, and the maximum number of iterations of the swarm is 500. As can be seen from the figure, the LFIACL-PSO algorithm maintains good diversity over the entire population space.

    This part mainly verifies the solution accuracy of the LFIACL-PSO algorithm and the seven comparison algorithms. To ensure the fairness of the experiments, first, the parameter settings of the seven comparison algorithms are kept consistent with those in their reference articles. Second, to reduce the impact of the randomness of the algorithms, each algorithm is run 30 times independently. Third, the mean (Mean), standard deviation (Std), and minimum (Min) of the fitness values are selected as evaluation criteria. Fourth, to verify the performance of each algorithm in different dimensions, the search space of the particles is set to 20, 30, and 50 dimensions. During the experiments, the population size and the maximum number of iterations were 40 and 1000, respectively.

    Tables 6–8 list the statistical results of the benchmark functions in the different dimensions and include the mean CPU time usage of each algorithm.

    Table 6.  Comparison results of 16 benchmark functions on all algorithms (20D).
    PSO KPSO DMSPSO CLPSO TSLPSO CLPSO-LOT XPSO LFIACL-PSO
    f1 Mean 4.20E28 1.23E-05 2.66E+00 9.48E+00 2.66E+00 1.05E-01 1.30E-03 6.20E-22
    Std 8.30E28 4.82E-05 1.56E+00 6.05E+00 1.56E+00 5.41E-01 4.50E-03 4.87E-22
    Min 1.63E30 8.28E-18 6.56E-01 1.36E+00 6.56E-01 3.75E-14 2.29E-07 6.48E-23
    Time(s) 0.14 0.16 0.04 16.82 3.92 12.45 2.79 13.65
    f2 Mean 1.14E-04 2.50E+01 4.42E+01 2.59E+03 4.14E+03 1.10E+03 9.92E-04 1.20E06
    Std 1.13E-04 3.32E+01 2.65E+01 7.06E+02 2.01E+04 4.66E+02 3.10E-03 1.21E06
    Min 5.65E-06 1.98E-01 8.61E+00 1.64E+03 7.32E-02 4.67E+02 1.85E-27 5.80E08
    Time(s) 1.60 2.62 0.66 16.64 33.28 14.46 4.69 18.35
    f3 Mean 9.88E-05 2.54E+00 1.48E+00 3.35E+01 5.44E+01 2.95E+01 1.01E-04 4.94E06
    Std 1.17E-04 1.66E+00 4.58E-01 6.62E+00 1.82E+01 5.40E+00 1.21E-04 1.41E05
    Min 7.77E-06 2.73E-01 8.94E-01 1.83E+01 2.59E+01 1.81E+01 5.85E-38 7.47E09
    Time(s) 0.13 0.19 0.05 12.98 3.78 12.95 2.73 13.86
    f4 Mean 7.77E-01 2.25E-01 5.17E-01 1.63E-01 1.31E+01 3.84E-04 1.30E-10 3.81E11
    Std 3.14E+00 4.92E-01 1.31E-01 8.72E-02 4.20E+01 6.30E-04 3.37E-10 1.26E10
    Min 1.79E-14 1.29E-03 2.00E-01 3.85E-02 5.17E-02 2.12E-05 1.13E-18 3.03E13
    Time(s) 0.20 0.23 0.06 12.01 5.00 12.90 2.83 13.91
    f5 Mean 6.69E-03 2.46E-02 4.77E-03 9.52E-02 1.51E-02 5.55E-02 1.65E-15 1.54E15
    Std 3.02E-03 1.85E-02 1.94E-03 3.97E-02 7.98E-03 3.07E-02 4.06E-15 2.27E15
    Min 1.77E-03 5.81E-03 1.22E-03 2.85E-02 4.24E-03 1.41E-02 7.66E-24 6.11E18
    Time(s) 0.82 0.41 0.10 12.19 8.76 12.52 4.13 13.87
    f6 Mean 1.47E+01 4.01E+01 2.67E+01 2.33E+02 8.21E+01 1.07E+02 1.54E+01 3.20E+00
    Std 2.19E+01 3.49E+01 1.81E+01 1.38E+02 7.57E+01 1.14E+02 2.86E-01 2.72E+00
    Min 1.24E-03 5.82E+00 1.61E+01 6.74E+01 8.08E-01 1.14E+01 1.48E+01 2.45E02
    Time(s) 1.96 1.54 0.39 13.08 29.12 14.12 5.19 17.02
    f7 Mean 4.86E+03 9.03E+03 1.74E+04 4.78E+03 3.29E+06 4.71E+03 4.79E+03 5.18E+03
    Std 1.92E+02 2.22E+02 2.33E+02 2.25E+02 2.15E+05 3.30E+02 2.22E+02 2.47E+02
    Min 4.40E+03 8.36E+03 1.68E+04 4.13E+03 1.15E+05 3.92E+03 4.34E+03 4.68E+03
    Time(s) 2.41 3.41 0.85 14.27 46.59 14.94 6.81 20.01
    f8 Mean 1.42E+03 1.42E+03 1.42E+03 1.41E+03 1.43E+03 1.42E+03 1.42E+03 1.42E+03
    Std 2.51E+00 3.45E+00 2.32E+00 1.20E+00 1.58E+00 1.27E+00 2.38E+00 2.57E+00
    Min 1.42E+03 1.42E+03 1.42E+03 1.41E+03 1.43E+03 1.42E+03 1.42E+03 1.42E+03
    Time(s) 3.28 3.65 0.91 14.86 57.69 15.47 9.38 21.56
    f9 Mean 4.93E-09 2.65E-02 6.07E-02 3.93E-01 1.73E+01 9.50E-01 1.49E-05 4.14E-09
    Std 1.83E-08 3.89E-02 4.40E-02 2.06E-01 1.96E+01 3.46E-01 4.20E-05 2.22E-08
    Min 3.84E-14 5.49E-04 1.66E-02 1.09E-01 1.65E-03 3.43E-01 4.06E-10 7.85E-14
    Time(s) 0.3 0.3 0.07 11.93 4.92 12.25 6.89 13.78
    f10 Mean 4.21E+02 4.47E+03 4.97E+03 5.78E+03 7.67E+68 5.76E+03 4.27E+03 4.65E+03
    Std 4.48E+01 5.27E+02 5.13E+02 2.38E+02 2.99E+69 2.41E+02 8.31E+02 5.82E+02
    Min 5.32E+02 5.54E+03 6.24E+03 6.37E+03 1.48E+70 6.20E+03 6.80E+03 5.77E+03
    Time(s) 0.26 0.28 0.07 12.02 7.63 12.27 4.54 13.83
    f11 Mean 3.77E+01 2.69E+01 1.09E+01 2.04E+01 8.76E+01 7.09E+00 1.48E-15 0.00E+00
    Std 1.27E+01 9.17E+00 2.41E+00 5.76E+00 6.00E+01 3.16E+00 4.79E-15 0.00E+00
    Min 1.49E+01 1.19E+01 7.24E+00 1.13E+01 2.49E+01 1.74E+00 0.00E+00 0.00E+00
    Time(s) 0.27 0.36 0.09 12.19 5.73 12.01 3.09 13.53
    f12 Mean 1.59E+03 1.72E+03 1.76E+03 1.78E+03 1.22E+03 1.78E+03 1.64E+03 1.78E+03
    Std 8.94E+01 2.69E+01 2.61E+01 5.31E+00 3.59E+02 2.26E+00 1.98E+02 1.13E+00
    Min 1.76E+03 1.77E+03 1.78E+03 1.79E+03 1.69E+03 1.79E+03 1.80E+03 1.79E+03
    Time(s) 4.34 5.25 1.31 16.32 70.6 16.37 3.29 23.67
    f13 Mean 1.24E-01 2.30E+00 1.40E+00 2.68E+00 1.53E+01 1.91E+00 4.33E-12 2.33E-12
    Std 3.76E-01 9.01E-01 6.54E-01 6.40E-01 7.28E+00 1.26E+00 1.47E-11 1.35E-12
    Min 7.11E-15 7.49E-07 2.11E-01 9.23E-01 2.45E+00 8.76E-03 3.55E-15 6.32E-13
    Time(s) 0.47 0.55 0.14 12.18 8.27 12.71 3.18 14.16
    f14 Mean 2.09E-02 2.72E-02 8.92E-01 1.17E+00 2.76E-01 1.06E-01 6.69E-02 1.56E-02
    Std 1.44E-02 2.98E-02 1.36E-01 1.71E-01 7.28E-01 1.63E-01 1.37E-01 1.57E-02
    Min 0.00E+00 9.99E-16 5.00E-01 9.33E-01 4.33E-15 3.53E-03 1.11E-16 0.00E+00
    Time(s) 1.32 1.4 0.35 13.07 22.8 13.37 3.99 16.18
    f15 Mean 5.70E-02 9.07E-01 7.49E-02 4.32E+01 2.53E-01 2.71E+00 6.58E-09 9.16E-25
    Std 1.42E-01 1.26E+00 1.03E-01 1.51E+02 3.90E-01 4.88E+00 2.42E-08 1.19E-24
    Min 2.38E-31 2.19E-11 3.23E-03 1.98E+00 1.01E-15 3.40E-07 1.17E-15 5.70E-26
    Time(s) 1.94 2.09 0.52 13.53 35.79 17.13 4.47 17.65
    f16 Mean 1.83E-03 4.92E-02 3.11E-01 1.06E+00 2.30E+01 2.21E-02 7.33E-04 1.04E-23
    Std 4.09E-03 6.17E-02 1.56E-01 1.06E+00 1.19E+02 5.19E-02 2.74E-03 2.07E-23
    Min 3.25E-31 3.25E-08 8.38E-02 1.38E-01 2.53E-16 2.03E-07 1.74E-13 3.36E-25
    Time(s) 4.67 5.23 1.31 16.88 76.02 16.19 6.36 23.37

    Table 7.  Comparison results of 16 benchmark functions on all algorithms (30D).
    PSO KPSO DMSPSO CLPSO TSLPSO CLPSO-LOT XPSO LFIACL-PSO
    f1 Mean 1.90E-16 1.63E+00 2.19E+01 3.77E-01 2.27E+02 1.88E-04 1.71E-15 1.27E-20
    Std 9.22E-16 3.63E+00 9.23E+00 4.87E-01 8.28E+02 2.57E-04 4.35E-15 1.31E-20
    Min 4.62E-20 3.48E-02 7.91E+00 6.54E-03 1.32E-06 7.16E-06 7.72E-64 2.51E-21
    Time(s) 0.17 0.37 3.5 49 6.64 18.4 4.22 19.79
    f2 Mean 1.19E+00 6.86E+02 4.27E+02 6.70E+03 4.52E+03 2.70E+03 5.12E+01 2.17E-01
    Std 8.65E-01 5.42E+02 1.55E+02 1.23E+03 2.99E+03 5.11E+02 6.62E+01 4.41E-01
    Min 1.04E-01 8.00E+01 1.36E+02 4.48E+03 2.85E+02 1.47E+03 4.64E-14 6.04E-03
    Time(s) 2.52 5.68 13.38 56.41 81.07 21.64 6.63 26.17
    f3 Mean 2.68E-01 1.08E+01 4.25E+00 3.23E+01 7.55E+01 2.63E+01 9.98E-04 3.87E-01
    Std 1.37E-01 3.08E+00 1.31E+00 4.84E+00 1.57E+01 5.28E+00 1.77E-03 2.72E-01
    Min 5.88E-02 5.91E+00 2.43E+00 2.24E+01 3.60E+01 1.75E+01 1.03E-18 6.94E-02
    Time(s) 0.23 0.35 3.48 49.55 6.22 18.75 4.24 19.73
    f4 Mean 8.29E+01 2.47E+00 2.19E+00 2.69E-02 7.73E+00 2.15E-03 5.88E-06 8.44E-07
    Std 1.10E+02 1.54E+00 5.87E-01 2.12E-02 5.94E+00 2.00E-03 1.45E-05 2.30E-05
    Min 1.90E-07 4.96E-01 1.38E+00 5.59E-03 2.76E-01 6.17E-04 5.33E-24 5.74E-11
    Time(s) 0.39 0.49 3.75 49.94 8.32 18.67 4.28 19.74
    f5 Mean 4.89E-03 1.21E+06 1.92E+07 2.65E+05 5.58E+04 6.37E-02 8.50E-04 5.78E-09
    Std 7.04E-03 3.25E+06 7.97E+06 3.69E+05 2.79E+05 2.67E-02 1.86E-03 2.87E-08
    Min 1.32E-04 3.78E+03 7.58E+06 8.08E+03 2.78E-02 3.64E-02 5.04E-14 3.94E-15
    Time(s) 0.7 1.15 5.08 50.41 19.07 19.72 4.71 20.14
    f6 Mean 4.10E+01 1.03E+02 8.44E+01 1.67E+02 4.57E+02 9.18E+01 2.58E+01 1.55E+01
    Std 2.97E+01 6.06E+01 3.38E+01 7.11E+01 6.24E+02 3.02E+01 3.97E-01 3.38E+00
    Min 6.46E+00 2.32E+01 4.43E+01 4.86E+01 2.87E+01 2.94E+01 2.48E+01 7.94E+00
    Time(s) 2.26 5.06 12.54 55.02 70.34 21.66 6.25 24.96
    f7 Mean 3.91E+03 8.13E+03 8.12E+03 8.26E+03 3.67E+06 8.25E+03 8.19E+03 8.69E+03
    Std 2.38E+0 2.80E+02 3.38E+02 2.55E+02 4.02E+04 2.91E+02 2.98E+02 2.80E+02
    Min 3.34E+03 7.37E+03 7.24E+03 7.42E+03 4.02E+04 7.77E+03 7.04E+03 8.02E+03
    Time(s) 4.35 9.17 19.28 61.76 119.42 22.78 8.04 29.13
    f8 Mean 2.21E+03 2.21E+03 2.21E+03 2.20E+03 2.22E+03 2.20E+03 2.21E+03 2.20E+03
    Std 2.13E+00 2.70E+00 2.08E+00 3.66E-01 2.85E+00 1.06E+00 1.61E+00 2.10E+00
    Min 2.20E+03 2.20E+03 2.20E+03 2.20E+03 2.21E+03 2.20E+03 2.20E+03 2.20E+03
    Time(s) 4.86 10.4 21.79 61.75 138.36 23.82 8.59 31.66
    f9 Mean 1.09E-01 7.72E-01 4.23E-01 4.13E-01 2.10E+01 2.64E+00 2.23E-04 2.21E-04
    Std 4.08E-01 9.84E-01 3.16E-01 2.01E-01 3.25E+01 9.40E-01 3.82E-04 6.53E-04
    Min 3.47E-08 3.53E-02 8.69E-02 8.76E-02 2.19E-02 1.01E+00 4.20E-07 2.87E-10
    Time(s) 0.21 0.51 3.8 50.07 8.13 18.24 4.32 19.79
    f10 Mean 6.28E+02 6.47E+03 6.71E+03 7.97E+03 2.53E+67 7.93E+03 5.43E+03 6.42E+03
    Std 6.53E+01 8.29E+02 7.22E+02 3.05E+02 1.31E+68 2.76E+02 1.33E+03 9.31E+02
    Min 7.33E+02 8.17E+03 8.42E+03 8.78E+03 7.32E+68 8.71E+03 9.31E+03 8.17E+03
    Time(s) 0.21 0.51 3.75 50.11 7.8 18.22 4.51 19.72
    f11 Mean 8.34E+01 4.57E+01 2.56E+01 2.95E+01 1.49E+02 6.57E+00 4.36E+00 0.00E+00
    Std 2.57E+01 1.51E+01 6.74E+00 6.12E+00 9.19E+01 3.28E+00 2.17E+01 0.00E+00
    Min 4.58E+01 2.60E+01 1.41E+01 1.67E+01 6.67E+01 1.67E-01 0.00E+00 0.00E+00
    Time(s) 0.25 0.65 4.03 49.73 9.52 17.81 5.17 19.46
    f12 Mean 2.03E+03 2.57E+03 2.52E+03 2.66E+03 1.50E+03 2.68E+03 5.00E+00 -2.83E+03
    Std 2.70E+02 5.64E+01 1.37E+02 6.13E+00 6.87E+02 2.16E+00 0.00E+00 3.56E+01
    Min 2.36E+03 2.64E+03 2.66E+03 2.67E+03 2.52E+03 2.68E+03 5.00E+00 2.87E+03
    Time(s) 5.46 14.95 30.12 70.47 175.21 24.95 5.34 34.02
    f13 Mean 1.89E+00 4.17E+00 2.85E+00 1.10E+00 2.00E+01 7.33E-01 1.81E+01 6.22E-01
    Std 3.48E+00 1.00E+00 3.53E-01 6.35E-01 1.62E+00 8.23E-01 4.77E+00 7.46E-01
    Min 9.28E-10 1.34E+00 2.01E+00 5.50E-02 1.20E+01 2.48E-03 0.00E+00 9.00E-12
    Time(s) 0.38 0.96 4.71 50.05 13.85 18.91 4.47 20.15
    f14 Mean 8.36E-03 4.23E-01 1.23E+00 4.78E-01 3.20E+00 1.12E-02 1.16E-01 7.34E-03
    Std 1.31E-02 3.67E-01 8.24E-02 2.63E-01 5.54E+00 9.71E-03 2.16E-01 1.21E-02
    Min 0.00E+00 1.15E-02 1.10E+00 1.28E-01 1.59E-05 1.09E-03 8.34E-04 0.00E+00
    Time(s) 1.61 4.22 11 53.67 52.05 19.61 5.86 23.56
    f15 Mean 1.00E-01 5.54E+00 5.53E-01 9.96E-01 9.17E+05 1.05E-01 5.95E-06 4.15E-02
    Std 1.99E-01 3.82E+00 2.67E-01 9.37E-01 4.24E+06 2.41E-01 2.16E-05 1.02E-01
    Min 1.51E-19 8.12E-01 7.05E-02 1.72E-02 7.74E-07 2.37E-06 2.26E-09 1.16E-23
    Time(s) 2.44 5.91 14.09 56.43 85.28 21.04 6.81 25.96
    f16 Mean 8.77E-03 7.12E+00 2.21E+00 9.14E-02 5.10E+01 2.59E-03 2.31E-03 1.93E-03
    Std 1.55E-02 6.56E+00 7.88E-01 8.37E-02 8.50E+01 7.10E-03 8.20E-03 4.86E-03
    Min 1.91E-19 2.57E-01 9.48E-01 5.37E-03 4.29E-02 3.22E-05 1.58E-07 1.21E-22
    Time(s) 5.61 14.14 30.03 66.93 188.98 24.61 10.39 34.77

    Table 8.  Comparison results of 16 benchmark functions on all algorithms (50D).
    PSO KPSO DMSPSO CLPSO TSLPSO CLPSO-LOT XPSO LFIACL-PSO
    f1 Mean 5.82E-05 2.02E+02 2.00E+02 1.27E+01 1.91E+04 1.29E+00 1.14E-07 1.74E-13
    Std 1.84E-04 1.04E+02 5.20E+01 7.50E+00 8.73E+03 1.83E+00 5.68E-07 3.39E-13
    Min 8.73E-08 1.88E+01 9.22E+01 3.59E+00 6.97E+03 4.47E-02 1.79E-76 6.15E-16
    Time(s) 0.18 0.37 3.51 49.61 6.38 18.6 6.01 13.62
    f2 Mean 2.41E+02 5.04E+03 2.14E+03 3.23E+04 2.60E+05 1.32E+04 2.30E+03 1.59E+02
    Std 2.90E+02 2.11E+03 5.07E+02 3.45E+03 8.86E+04 2.26E+03 1.05E+03 1.17E+02
    Min 6.81E+01 1.88E+03 9.31E+02 2.64E+04 1.24E+05 9.71E+03 3.74E+02 2.57E+01
    Time(s) 4.11 5.73 13.37 57.02 78.54 22.04 9.56 17.9
    f3 Mean 3.96E+00 2.13E+01 9.46E+00 4.86E+01 9.34E+01 3.60E+01 2.65E-03 7.25E+00
    Std 8.34E-01 3.37E+00 1.44E+00 3.04E+00 5.41E+00 7.42E+00 4.05E-03 1.75E+00
    Min 2.76E+00 1.36E+01 5.67E+00 4.13E+01 7.61E+01 2.01E+01 1.82E-27 3.51E+00
    Time(s) 0.17 0.35 3.48 50.12 6.07 18.74 5.99 13.49
    f4 Mean 3.82E+02 1.28E+01 9.31E+00 6.89E-01 1.29E+02 2.13E-01 1.53E-02 1.22E-02
    Std 1.43E+02 4.21E+00 1.80E+00 1.83E-01 3.52E+01 9.11E-02 4.02E-02 3.61E-02
    Min 1.43E+00 5.78E+00 6.59E+00 3.55E-01 5.86E+01 8.53E-02 1.13E-12 8.91E-04
    Time(s) 0.24 0.49 3.76 50.23 8.1 19.56 5.95 13.61
    f5 Mean 8.34E+01 2.02E+08 1.80E+08 1.15E+07 2.59E+07 1.84E-01 4.62E+01 6.12E-02
    Std 4.49E+02 1.18E+08 4.52E+07 7.23E+06 5.35E+07 5.09E-02 3.42E-01 2.28E-01
    Min 2.21E-07 5.01E+07 1.12E+08 2.14E+06 1.44E+03 9.23E-02 4.52E+01 5.74E-10
    Time(s) 0.83 1.17 5.05 50.75 17.98 18.56 6.67 13.73
    f6 Mean 1.29E+02 7.15E+02 4.58E+02 6.75E+02 2.69E+05 2.85E+02 4.63E+01 4.34E+01
    Std 8.25E+01 4.28E+02 1.60E+02 1.79E+02 2.31E+05 8.43E+01 4.06E-01 2.37E+01
    Min 3.68E+01 2.75E+02 2.09E+02 3.72E+02 2.47E+04 1.74E+02 4.55E+01 3.38E+01
    Time(s) 3.8 5.16 12.41 55.57 69.34 20.7 8.92 16.83
    f7 Mean 1.52E+04 1.53E+04 1.52E+04 1.74E+04 5.94E+05 1.75E+04 1.53E+04 1.59E+04
    Std 3.10E+02 3.06E+02 3.02E+02 2.33E+02 4.07E+05 2.13E+02 3.21E+02 3.68E+02
    Min 1.42E+04 1.46E+04 1.46E+04 1.68E+04 1.30E+05 1.67E+04 1.47E+04 1.50E+04
    Time(s) 6.04 9.25 19.09 62.09 117.24 22.88 11.61 19.72
    f8 Mean 3.77E+03 3.77E+03 3.77E+03 3.76E+03 7.70E+03 3.77E+03 3.77E+03 3.76E+03
    Std 1.74E+00 1.89E+00 1.91E+00 6.93E-01 8.15E+00 1.17E+00 1.89E+00 1.69E+00
    Min 3.77E+03 3.77E+03 3.77E+03 3.76E+03 7.70E+03 3.77E+03 3.77E+03 3.76E+03
    Time(s) 7.37 10.47 21.86 62.1 134.85 23.57 12.56 21.35
    f9 Mean 5.94E-01 5.79E+00 2.60E+00 4.32E+00 1.16E+02 6.25E+00 1.03E-02 2.63E-03
    Std 5.54E-01 2.89E+00 9.49E-01 1.18E+00 1.31E+02 2.57E+00 1.35E-02 6.72E-03
    Min 7.54E-03 8.72E-01 9.80E-01 2.50E+00 1.85E+01 2.78E+00 2.31E-34 1.48E-04
    Time(s) 0.23 0.51 3.79 50.26 8.07 18.61 5.95 13.56
    f10 Mean 9.97E+02 1.05E+04 9.49E+03 1.20E+04 6.21E+65 1.19E+04 1.14E+04 1.02E+04
    Std 1.19E+02 1.31E+03 1.28E+03 3.56E+02 1.76E+66 3.61E+02 4.02E+03 1.40E+03
    Min 1.19E+03 1.34E+04 1.17E+04 1.28E+04 7.82E+66 1.27E+04 2.40E+04 1.30E+04
    Time(s) 0.23 0.51 3.74 50.42 7.82 18.57 6.27 13.56
    f11 Mean 2.47E+02 9.38E+01 7.61E+01 1.07E+02 8.90E+02 3.19E+01 4.02E-02 0.00E+00
    Std 7.48E+01 1.67E+01 1.65E+01 1.07E+01 3.01E+02 9.15E+00 2.03E-01 0.00E+00
    Min 7.48E+01 5.24E+01 4.71E+01 8.68E+01 5.91E+02 1.61E+01 3.55E-15 0.00E+00
    Time(s) 0.28 0.66 4.02 49.9 9.53 18.37 6 13.35
    f12 Mean 2.49E+03 4.21E+03 3.68E+03 4.35E+03 1.98E+03 4.43E+03 4.33E+03 4.39E+03
    Std 4.78E+02 1.39E+02 3.10E+02 2.12E+01 2.82E+03 7.08E+00 5.97E+02 -4.39E+03
    Min -3.21E+03 -4.39E+03 -4.08E+03 -4.39E+03 -7.36E+03 -4.46E+03 -4.50E+03 -4.39E+03
    Time(s) 9.4 15.14 29.93 70.59 173.73 25.12 6.21 23.46
    f13 Mean 3.23E+00 7.72E+00 4.43E+00 3.13E+00 2.07E+01 3.24E+00 1.98E+01 2.18E+00
    Std 8.65E-01 1.40E+00 5.13E-01 4.26E-01 6.79E-01 1.54E+00 4.82E-01 6.61E-01
    Min 1.87E+00 5.01E+00 3.57E+00 2.50E+00 1.98E+01 1.89E-01 1.77E+01 1.07E-06
    Time(s) 0.43 0.97 4.71 58.41 13.64 18.89 6.12 14.08
    f14 Mean 6.98E-03 2.83E+00 2.93E+00 1.27E+00 1.58E+02 5.76E-01 1.02E+00 1.90E-03
    Std 8.45E-03 1.15E+00 4.89E-01 1.29E-01 7.34E+01 2.34E-01 2.21E-01 6.71E-03
    Min 7.74E-10 1.23E+00 1.78E+00 1.10E+00 6.96E+01 2.37E-01 3.08E-01 7.66E-15
    Time(s) 2.63 4.31 10.92 62.59 51.54 19.81 8.21 16.17
    f15 Mean 5.97E-01 1.14E+01 3.26E+00 4.06E+00 2.38E+07 5.97E-01 1.68E-02 3.21E-01
    Std 8.78E-01 4.13E+00 1.15E+00 2.20E+00 2.54E+07 6.61E-01 5.81E-03 4.46E-01
    Min 9.00E-06 5.23E+00 1.46E+00 1.13E+00 2.03E+05 3.95E-03 8.91E-03 2.43E-13
    Time(s) 4.05 5.89 14 67.71 83.99 21.58 9.44 17.57
    f16 Mean 5.03E-02 1.34E+02 1.98E+01 4.32E+00 1.70E+03 1.53E+00 1.40E+00 5.50E-03
    Std 5.03E-02 6.68E+01 4.34E+00 1.83E+00 7.10E+02 1.72E+00 4.16E-01 8.85E-03
    Min 1.16E-07 2.28E+01 1.33E+01 1.72E+00 8.78E+02 1.60E-01 7.02E-01 8.85E-03
    Time(s) 9.73 14.35 29.92 87.34 184.47 25.22 14.48 23.25


    Tables 6–8 detail the experimental results obtained on the six unimodal functions (f1–f6) for the LFIACL-PSO algorithm proposed in this paper and the other seven comparison algorithms, tested in different dimensions. The performance of the algorithms is measured by the number of best mean values obtained, and the best result achieved by the different algorithms on each function is bolded. From the comparison results in Table 6 on the six unimodal functions (f1–f6), when the particle dimension is 20, LFIACL-PSO yields the optimal solution on the benchmark functions f2–f6, and PSO obtains the optimal solution on f1. When the particle dimension is 30 or 50, Tables 7 and 8 show that the LFIACL-PSO algorithm obtains the optimal solutions on the unimodal functions f1, f2, and f4–f6, and XPSO gains the optimal solution on f3. These comparison results show that LFIACL-PSO outperforms the other comparison algorithms on the six unimodal functions. The performance of LFIACL-PSO benefits from three features. First, compared with legacy PSO variants, LFIACL-PSO uses four exemplars, which gives a particle more opportunities to obtain useful information. In addition, the Ring-type neighborhood topology makes the particles reorient their search in the late stage, improving the exploration ability of the population. Finally, the inverse learning mechanism and the Lévy flight are applied to the particle swarm: when particles stagnate in a local optimum, they are allowed to learn from the worst positions of their individual histories, which quickly guides them out of the local optimum, enhances the diversity of the population, and improves the search capability of the algorithm, thus effectively alleviating the tendency of particle swarm optimization to fall into local optima.
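    To make the escape step concrete, the sketch below shows one plausible way to combine learning from a particle's historical worst position with a Lévy-flight step size. The Mantegna sampler, the update rule `position + step * (position - personal_worst)`, and the function names are illustrative assumptions rather than the paper's exact formulas.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a dim-dimensional Lévy-flight step via the Mantegna algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def escape_from_stagnation(position, personal_worst, lower, upper, beta=1.5, rng=None):
    """Move a stagnating particle away from its historical worst position,
    scaling the jump by a Lévy-flight step (an illustrative rule, not the
    paper's exact update)."""
    step = levy_step(position.size, beta, rng)
    candidate = position + step * (position - personal_worst)
    return np.clip(candidate, lower, upper)

# Example: a particle stuck near its worst position receives a heavy-tailed kick
x = np.zeros(30)
worst = np.full(30, 0.1)
print(escape_from_stagnation(x, worst, -100.0, 100.0))
```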

    In reality, most multimodal functions contain many locally optimal solutions, making an algorithm prone to falling into local optima and converging prematurely. Tables 6–8 list the experimental results obtained by testing the LFIACL-PSO algorithm and the seven comparison algorithms on the multimodal functions (f7–f16) in different dimensions. The best mean results are used to measure algorithm performance, and the best result obtained by the different algorithms on each of the ten multimodal functions is shown in bold. The comparison results in Tables 6–8 indicate that KPSO, DMSPSO, and TSLPSO perform much worse than LFIACL-PSO, mainly because the latter exploits much valuable historical knowledge from different exemplars, which helps maintain population diversity. From the comparison results in Table 6, it is clear that the LFIACL-PSO algorithm obtains the optimal values for the multimodal functions f9 and f11–f16. The CLPSO-LOT, CLPSO, and TSLPSO algorithms obtain the optimal values on the multimodal functions f7, f8, and f10, respectively. The mean values on f12 for CLPSO and CLPSO-LOT equal that of LFIACL-PSO, but their standard deviations are larger; therefore, the stability of the LFIACL-PSO algorithm is better than that of the other comparison algorithms. When the search space is 30-dimensional, the comparison results in Table 7 show that the LFIACL-PSO algorithm obtains a total of seven best results on the ten multimodal functions f7–f16, while PSO, TSLPSO, CLPSO, and XPSO each acquire one. When the search space is 50-dimensional, the comparison results in Table 8 show that the LFIACL-PSO algorithm obtains a total of six optimal solutions on f7–f16 and the suboptimal solution on f12. PSO and DMSPSO obtain the optimal solution on function f7, and TSLPSO, CLPSO, and XPSO obtain the optimal solutions on functions f10, f12, and f15, respectively. Overall, LFIACL-PSO has the most competitive performance among these classical comparison algorithms, as it achieves the best average value on six of the ten multimodal problems.

    To further illustrate the comprehensive performance of all comparison algorithms, Friedman's test with significance level α = 0.05 was performed in this section; the results are shown in Tables 9 and 10, which report the Friedman test results on the unimodal and multimodal functions, respectively, for each of the eight algorithms in different dimensions. From Table 9, we know that LFIACL-PSO obtains the best overall performance on the unimodal functions f1–f6, followed by XPSO, PSO, and CLPSO-LOT. From Table 10, LFIACL-PSO acquires the best overall performance on the multimodal functions f7–f16, followed by CLPSO-LOT, XPSO, PSO, and CLPSO. Finally, regarding solution accuracy, Table 11 shows the final average ranking of all algorithms on the 16 benchmark functions over the different dimensions. From Table 11, the overall performance of LFIACL-PSO is the best, followed by XPSO and PSO.
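    As a reproducibility aid, the snippet below sketches how the Friedman statistic and the average ranks could be recomputed from a matrix of mean errors. The matrix here is a random placeholder to be replaced with the actual means reported in Tables 6–8 for a given dimension; the layout (rows = functions, columns = algorithms) is our own convention.

```python
import numpy as np
from scipy import stats

# Placeholder mean-error matrix: rows = 16 benchmark functions, columns = 8 algorithms.
# Replace with the actual mean values from Tables 6-8 for a given dimension.
rng = np.random.default_rng(1)
means = rng.random((16, 8))

# Friedman test at alpha = 0.05: do the algorithms differ over the function set?
stat, p_value = stats.friedmanchisquare(*means.T)
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.4f}")

# Average rank per algorithm (rank 1 = smallest mean error on a function),
# i.e. the quantity reported in the "Ranking" columns of Tables 9-11.
avg_ranks = stats.rankdata(means, axis=1).mean(axis=0)
print("Average ranks:", np.round(avg_ranks, 2))
```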

    Table 9.  Friedman test of mean values on unimodal functions.
    Rank | Average Rank: Algorithm, Ranking | 20D: Algorithm, Ranking | 30D: Algorithm, Ranking | 50D: Algorithm, Ranking
    1 LFIACL-PSO 1.53 LFIACL-PSO 1.92 LFIACL-PSO 1.33 LFIACL-PSO 1.33
    2 XPSO 2.31 XPSO 2.42 XPSO 2.17 XPSO 2.33
    3 PSO 3.50 PSO 2.83 PSO 4.00 PSO 3.67
    4 CLPSO-LOT 4.72 CLPSO-LOT 5.67 CLPSO-LOT 4.33 CLPSO-LOT 4.17
    5 DMSPSO 5.00 DMSPSO 4.58 DMSPSO 5.33 DMSPSO 5.08
    6 KPSO 5.34 KPSO 4.67 KPSO 5.67 CLPSO 5.67
    7 CLPSO 6.42 CLPSO 7.00 CLPSO 6.00 KPSO 6.25
    8 TSLPSO 7.20 TSLPSO 6.92 TSLPSO 7.17 TSLPSO 7.50

    Table 10.  Friedman test of mean values on multimodal functions.
    Rank | Average Rank: Algorithm, Ranking | 20D: Algorithm, Ranking | 30D: Algorithm, Ranking | 50D: Algorithm, Ranking
    1 LFIACL-PSO 2.23 LFIACL-PSO 2.15 LFIACL-PSO 2.40 LFIACL-PSO 2.15
    2 CLPSO-LOT 3.87 CLPSO-LOT 4.15 CLPSO-LOT 3.50 CLPSO-LOT 3.95
    3 XPSO 4.05 XPSO 4.24 XPSO 4.35 XPSO 3.55
    4 PSO 4.33 PSO 4.45 PSO 4.25 PSO 4.30
    5 CLPSO 4.45 CLPSO 4.90 CLPSO 4.30 CLPSO 4.15
    6 DMSPSO 5.03 DMSPSO 5.10 DMSPSO 4.95 DMSPSO 5.05
    7 KPSO 5.30 KPSO 4.80 KPSO 5.55 KPSO 5.55
    8 TSLPSO 6.97 TSLPSO 6.90 TSLPSO 6.70 TSLPSO 7.30

    Table 11.  Friedman test of mean values on different dimensions.
    PSO KPSO DMSPSO CLPSO TSLPSO CLPSO-LOT XPSO LFIACL-PSO
    20D 3.64 4.74 4.84 5.95 6.91 4.91 3.33 2.04
    30D 4.13 5.61 5.14 5.15 6.94 3.92 3.26 1.87
    50D 3.99 5.90 5.07 4.91 7.40 4.06 2.94 1.74
    Ave rank 3.92 5.42 5.02 5.34 7.08 4.30 3.18 1.77
    Final rank 3 7 5 6 8 4 2 1


    Furthermore, a graphical approach was used to show the critical differences (CDs) of the Friedman rankings, with the results shown in Figure 3. The CD is calculated as in Eq (4.3). If the difference between the average rank values of two algorithms exceeds the critical difference CD, the hypothesis that the two algorithms perform equally is rejected at the corresponding confidence level.

    $$CD = q_{\alpha}\sqrt{\frac{K(K+1)}{6N}}, \qquad (4.3)$$

    where α = 0.05, $q_{\alpha}$ = 3.031, K = 8 is the number of algorithms, and N = 16 is the number of benchmark functions (data sets).
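    For reference, Eq (4.3) and the pairwise comparison against the best-ranked algorithm can be reproduced in a few lines of Python; the average ranks below are copied from the "Ave rank" row of Table 11, and the dictionary layout is our own choice.

```python
import numpy as np

# Critical difference from Eq (4.3): q_alpha = 3.031, K = 8 algorithms, N = 16 functions
q_alpha, K, N = 3.031, 8, 16
CD = q_alpha * np.sqrt(K * (K + 1) / (6 * N))          # about 2.62

# Average Friedman ranks over all 16 functions ("Ave rank" row of Table 11)
avg_rank = {"PSO": 3.92, "KPSO": 5.42, "DMSPSO": 5.02, "CLPSO": 5.34,
            "TSLPSO": 7.08, "CLPSO-LOT": 4.30, "XPSO": 3.18, "LFIACL-PSO": 1.77}

best = avg_rank["LFIACL-PSO"]
for name, rank in avg_rank.items():
    gap = rank - best
    verdict = "significantly worse" if gap > CD else "no significant difference"
    print(f"{name:>10s}: gap = {gap:.2f} -> {verdict}")
```

    With these numbers the rank gaps of XPSO, PSO, and CLPSO-LOT stay below CD ≈ 2.62, which matches the reading of Figure 3 given below.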

    As can be seen from Figure 3, LFIACL-PSO significantly outperforms TSLPSO, CLPSO, KPSO, and DMSPSO. The result also suggests that LFIACL-PSO does not differ significantly from XPSO, PSO, and CLPSO-LOT, although it offers the best performance in terms of the Friedman ranking values.

    Figure 3.  Critical difference with Friedman rankings.

    Tables 6–8 record the CPU time consumed by each algorithm on the 16 test functions (f1–f16) in different dimensions, and Table 12 shows the comprehensive ranking of CPU time. The results show that LFIACL-PSO consumes more CPU time than most of the compared algorithms, with an average rank value of 6.40, ranking 6th; TSLPSO and CLPSO rank 7th and 8th, with average rank values of 6.46 and 6.70, respectively. There are two main reasons why the LFIACL-PSO algorithm is more time-consuming. First, each particle learns from four exemplars instead of one. Second, the comprehensive learning strategy adopted for the self-learning part of the particles has to rebuild the learning exemplars, which increases the time spent in the particle search.

    Table 12.  Friedman test of CPU time on different dimensions.
    PSO KPSO DMSPSO CLPSO TSLPSO CLPSO-LOT XPSO LFIACL-PSO
    20D 2.22 2.91 1.00 5.88 6.44 6.19 3.94 7.44
    30D 1.06 2.19 3.75 7.56 6.44 5.38 3.13 6.50
    50D 1.00 2.06 3.81 7.56 6.50 6.38 3.44 5.25
    Ave rank 1.43 2.39 2.85 6.70 6.46 5.98 3.50 6.40
    Final rank 1 2 3 8 7 5 4 6


    Figure 4 shows the comprehensive performance of LFIACL-PSO and the other comparison algorithms; it is a combined ranking graph based on the accuracy on the benchmark functions and the CPU time. The closer a circle is to the lower-left corner of the figure, the better the overall performance of the algorithm. From Figure 4, we can see that LFIACL-PSO and XPSO achieve the best solution accuracy among all algorithms and show similar overall performance; LFIACL-PSO is more accurate than XPSO but requires more CPU time, so it trades some computational cost for higher solution accuracy while remaining competitive in algorithmic complexity. PSO and KPSO consume less CPU time but have lower solution accuracy.

    Figure 4.  Comprehensive performance of all algorithms.

    To test the convergence of the LFIACL-PSO algorithm and the seven comparison algorithms, we selected eight benchmark functions for the experiments: three unimodal functions (f1–f3) and five multimodal functions (f12–f16). The settings were a population size PS = 40, a search space of N = 30 dimensions, a maximum number of iterations IterMax = 500, and 30 independent runs for each function. Figure 5 shows the experimental results and zooms in on the dense areas of the curves. As the number of iterations increases, the LFIACL-PSO algorithm achieves the best overall convergence performance on the selected benchmark functions f1–f3 and f13–f16; on the multimodal function f12, it is slightly inferior to the KPSO algorithm. In general, the LFIACL-PSO algorithm proposed in this paper shows good convergence speed and accuracy on both unimodal and multimodal functions.
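    The averaged convergence curves of Figure 5 can be produced with a driver of the following form; `pso_run` is a hypothetical optimizer interface standing in for any of the compared algorithms and is assumed to return the best-so-far error at each iteration.

```python
import numpy as np

# Settings of the convergence study: 40 particles, 30 dimensions, 500 iterations, 30 runs.
PS, DIM, ITER_MAX, RUNS = 40, 30, 500, 30

def mean_convergence_curve(pso_run, objective, lower, upper):
    """Average the best-so-far error per iteration over independent runs.
    `pso_run` is assumed to return an array of length ITER_MAX containing
    the best objective value found up to each iteration."""
    curves = np.empty((RUNS, ITER_MAX))
    for r in range(RUNS):
        curves[r] = pso_run(objective, lower, upper,
                            pop_size=PS, dim=DIM, iters=ITER_MAX, seed=r)
    return curves.mean(axis=0)
```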

    Figure 5.  The convergence curves under 30 dimensions for 3 unimodal functions and 5 multimodal functions.

    In this part, we use a widely studied real-world optimization problem to verify the performance of LFIACL-PSO. Dukic et al. [47] proposed a design method for polyphase codes for spread spectrum pulse radar (SSPR) based on the properties of the aperiodic autocorrelation function, which is used in the design of polyphase pulse compression codes. The design task can be modeled as the following min-max nonlinear global optimization problem [48]:

    $$\min_{x \in X} f(x) = \max\{\phi_1(x), \ldots, \phi_{2m}(x)\}, \qquad (4.4)$$
    $$X = \{(x_1, \ldots, x_n) \in \mathbb{R}^n \mid 0 \le x_j \le 2\pi,\ j = 1, \ldots, n\}, \qquad (4.5)$$

    where $m = 2n - 1$, and

    $$\phi_{2i-1}(x) = \sum_{j=i}^{n} \cos\!\Big(\sum_{k=|2i-j-1|+1}^{j} x_k\Big), \quad i = 1, \ldots, n, \qquad (4.6)$$
    $$\phi_{2i}(x) = 0.5 + \sum_{j=i+1}^{n} \cos\!\Big(\sum_{k=|2i-j|+1}^{j} x_k\Big), \quad i = 1, \ldots, n-1, \qquad (4.7)$$
    $$\phi_{m+i}(x) = -\phi_i(x), \quad i = 1, \ldots, m. \qquad (4.8)$$

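    A direct transcription of Eqs (4.4)–(4.8) into Python is sketched below; since $\phi_{m+i}(x) = -\phi_i(x)$, the min-max objective reduces to the maximum absolute value of $\phi_1, \ldots, \phi_m$. The function name and the use of NumPy are our own choices.

```python
import numpy as np

def sspr_objective(x):
    """Objective f(x) = max{phi_1(x), ..., phi_{2m}(x)} of Eqs (4.4)-(4.8)
    for the spread spectrum radar polyphase code design problem."""
    x = np.asarray(x, dtype=float)
    n = x.size
    phi = []
    # phi_{2i-1}(x), i = 1..n (Eq (4.6)); x_k is x[k-1] in 0-based indexing
    for i in range(1, n + 1):
        phi.append(sum(np.cos(np.sum(x[abs(2 * i - j - 1):j]))
                       for j in range(i, n + 1)))
    # phi_{2i}(x), i = 1..n-1 (Eq (4.7))
    for i in range(1, n):
        phi.append(0.5 + sum(np.cos(np.sum(x[abs(2 * i - j):j]))
                             for j in range(i + 1, n + 1)))
    # phi_{m+i}(x) = -phi_i(x) (Eq (4.8)), so the max over all 2m terms is max|phi_i|
    return float(np.max(np.abs(phi)))

# Example: evaluate a random phase code of length n = 20 drawn from [0, 2*pi]^n
rng = np.random.default_rng(0)
print(sspr_objective(rng.uniform(0.0, 2.0 * np.pi, 20)))
```

    Any of the compared optimizers can then minimize `sspr_objective` over the box $[0, 2\pi]^n$ to reproduce the experiments of Tables 13 and 14.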
    Table 13 shows in detail the performance of all algorithms on the spread spectrum radar polyphase code design problem, and Table 14 gives the corresponding Friedman ranks. It can be seen from Table 14 that LFIACL-PSO is superior to the other seven algorithms. In summary, the LFIACL-PSO algorithm performs outstandingly on the benchmark functions and has a solid ability to solve practical engineering problems, giving it considerable potential application value.

    Table 13.  Comparison on the spread spectrum radar polyphase code design problem.
    PSO KPSO DMSPSO CLPSO TSLPSO CLPSO-LOT XPSO LFIACL-PSO
    20D Mean 1.69E+00 2.00E+00 3.73E+00 4.34E+00 4.21E-02 1.92E+00 2.02E-02 1.01E-02
    Std 1.37E+00 1.61E+00 1.47E+00 1.23E+00 8.07E-02 8.58E-01 1.84E-02 8.04E-03
    Min 4.12E-03 5.46E-04 6.58E-01 1.12E+00 4.44E-16 9.71E-04 9.77E-05 8.20E-04
    30D Mean 3.37E+00 4.58E+00 7.29E+00 7.81E+00 2.00E-01 4.56E+00 1.11E-01 9.35E-02
    Std 2.12E+00 1.80E+00 1.59E+00 1.53E+00 4.98E-01 9.10E-01 1.66E-01 8.16E-02
    Min 3.78E-02 1.78E-01 4.13E+00 4.41E+00 1.89E-15 2.54E+00 1.88E-03 2.06E-03
    50D Mean 9.78E+00 9.30E+00 1.47E+01 1.38E+01 3.12E+00 1.01E+01 6.53E-01 9.65E-01
    Std 3.72E+00 3.82E+00 2.54E+00 2.70E+00 2.88E+00 1.68E+00 8.49E-01 7.89E-01
    Min 1.09E-01 7.93E-02 1.05E+01 5.58E+00 6.11E-16 7.10E+00 2.73E-02 2.12E-02

    Table 14.  Ranks achieved by the Friedman test of mean values.
    PSO KPSO DMSPSO CLPSO TSLPSO CLPSO-LOT XPSO LFIACL-PSO
    20D 4 6 7 8 3 5 2 1
    30D 4 6 7 8 3 5 2 1
    50D 5 4 8 7 3 6 1 2
    Ave rank 4.33 5.33 7.33 7.67 3.00 5.33 1.67 1.33
    Final Rank 4 5 6 7 3 5 2 1


    Aiming at the disadvantage that particle swarm optimization easily falls into local optima, we propose the LFIACL-PSO algorithm. The implementation of LFIACL-PSO is described in detail in the paper, and a sensitivity analysis of the newly introduced parameters is performed through extensive experiments. In addition, comparisons of LFIACL-PSO with seven other PSO variants on 16 benchmark functions and one real-world problem validate the benefits of LFIACL-PSO. The comparison results show that LFIACL-PSO has the following advantages. First, the use of the comprehensive learning strategy and the Ring-type topology as part of the learning paradigm gives the particles more information when updating, thus maintaining the diversity of the swarm. Second, when a particle falls into a local optimum and cannot jump out, it is allowed to learn from its own historical worst position, with the learning step size obtained from a Lévy flight, which enhances the global exploration ability of the population. Finally, the historical information of the elite particles in the population is used to update the acceleration coefficients, meeting the different requirements of the different evolutionary stages.

    Although LFIACL-PSO performs well, some problems remain. For example, the LFIACL-PSO algorithm has high solution accuracy but consumes more CPU time, which needs further research in our future work. Moreover, besides the methods used in this paper, other representative computational intelligence algorithms could be applied to these problems, such as monarch butterfly optimization (MBO), the earthworm optimization algorithm (EWA), elephant herding optimization (EHO), the moth search (MS) algorithm, the slime mould algorithm (SMA), and Harris hawks optimization (HHO).

    This work was supported by NSFC Grant Nos. 61701060 and 61801067, Guangxi Colleges and Universities Key Laboratory of Intelligent Processing of Computer Images and Graphics Project No. GIIP1806, and the Science and Technology Research Project of Higher Education of Hebei Province (Grant No. QN2019069), and Chongqing Key Lab of Computer Network and Communication Technology (CY-CNCL-2017-02).

    The authors declare there is no conflict of interest.
