



Chaotic dynamical systems are deterministic dynamical systems characterized by the fact that small changes in the initial conditions lead to very large differences at a later time. Chaotic dynamical systems are to be contradistinguished from stochastic dynamical systems. Stochastic dynamical systems are not deterministic, and a probability space is required to study their evolution.

In recent years, chaotic systems have been receiving more and more attention and interest because of their potential applications in many areas, which range from physics and telecommunications to biological networks and economic models. Since the chaotic phenomenon in economics was discovered in 1985, it has been widely accepted that economic and financial systems are very complicated nonlinear systems containing many complex factors, and accordingly many efforts have been devoted to their study (Chian et al., 2006; Gao and Ma, 2009; Guégan, 2009; Wijeratne et al., 2009). As stated in Chian et al. (2006), characterization of the complex dynamics of economic systems is a powerful tool for pattern recognition and forecasting of business and financial cycles, as well as for optimization of management strategy and decision technology. Therefore, the need for analysis and control of these systems is emerging and many researchers have been working in this direction.

In this paper, we present a method for approximating a macroeconomic chaotic system and reconstructing its trajectories using a linear finite dimensional dynamical system. This method sits at the crossroads of Koopman operators, EDMD, Takens' theorem and machine learning. EDMD and its precursor DMD, as well as Koopman operators, Takens' theorem and machine learning, have been extensively used in finance (for instance, Hua et al., 2017; Mauroy et al., 2020; Mann and Kutz, 2016; Stavroglou et al., 2019; Ni et al., 2021).

Roughly speaking, the Koopman operator "lifts" the dynamics of the original system from the state space to linear spaces consisting of functions defined on the state space (which are called observables). The advantage of this lifting is that we obtain a linearization of the original system which holds on the whole state space, and many properties of the dynamics correspond to spectral properties of the Koopman operator. The disadvantage is that the linear system induced by the Koopman operator is infinite dimensional. Therefore, its dynamics and spectral properties cannot be computed directly, and several methods for analysis and control cannot be applied.

    The Extended Dynamic Mode Decomposition provides us with a method for obtaining finite dimensional approximations of the Koopman operator. These approximations, under some suitable assumptions, converge to the Koopman operator. Therefore, they can also be used to reconstruct the trajectories in the phase space of the original nonlinear system.

The success of EDMD depends on a set of observables (a dictionary) which is chosen a priori. This method is purely data-driven and is based on instantaneous measurements of the values of the observables. Then, the linear dynamical system that is obtained advances the measurements from one time to the next.

On the other hand, it is generally feasible to extract information about features of phase space from time series of general measurements made on an evolving system (Sauer et al., 1991). The central result in this direction was proved by Takens (1981), who showed how a time series of measurements of a single observable can be used to reconstruct qualitative features of the phase space of the system. The technique described by Takens, the method of delays, is so simple that it can be applied to essentially any time series whatever, and has made possible the wide-ranging search for chaos in dynamical systems. Takens' theorem asserts that the phase space of a dynamical system, and in particular strange attractors of the system, can be diffeomorphically embedded in a higher dimensional space spanned by an appropriate number of time lags of a single observable of the system.

This is a very powerful result: it allows one to recover the dynamics from the observation of just one variable; the embedding preserves the topology of the phase space, and of the attractor in particular; since the embedding is a diffeomorphism, it allows system identification; and it allows one to calculate the box-counting dimension of the attractor and the (positive) Lyapunov exponents. However, Takens' theorem does not allow reconstruction of the trajectories of the dynamical system. This is to be juxtaposed with the Koopman-EDMD theory, which aims at reconstructing the trajectories of the system.

What the two approaches have in common is that both are data-driven, since the observables they use are constructed from data measured during the evolution of the system. In Takens' theorem the observables are simpler in form: they are the time lags of a single observable. In Koopman-EDMD theory the observables are usually the state variables of the dynamical system and nonlinear functions of them. In this paper we use a hybrid method in order to reconstruct, with simple means, trajectories of a macroeconomic dynamical system which exhibits chaotic behaviour. In particular, we use Koopman-EDMD theory with the observables used by Takens' theorem. The finite dimensional approximation to the Koopman operator which drives the dynamics and reconstructs the trajectories of the system is obtained by two different methods: by a linear autoregressive model and by autoregression with machine learning methods. The use of the linear autoregressive model leads to a more faithful reconstruction of the trajectories of the dynamical system at hand than the use of autoregression with machine learning methods. However, this is attributed to the fact that the error made in the case of the linear autoregressive model can be corrected, whereas this is not feasible in the case of autoregression with machine learning methods, where the error made in the prediction of the future value of the observable from the previous values cannot be corrected and accumulates over time.

Therefore, the motivation, innovation and contribution of this paper can be summarized as follows: This paper addresses the trajectory approximation of a hyperchaotic system via EDMD methods. The EDMD methods give rise to a linear system on an enhanced state space that can approximate a given trajectory. Having data of a given trajectory in a finite horizon allows one to construct a discrete linear system of dimension $n \gg m$, where $m$ is the dimension of the state space of the nonlinear system. Takens' theorem indicates how large $n$ can be. Here we demonstrate the approximation of a single trajectory of a chaotic system via EDMD methods equivalent to a linear autoregressive construction. Furthermore, we demonstrate the use of the same method on multiple trajectories. Finally, we demonstrate how we can use the same data in order to construct nonlinear trajectory approximations via machine learning methods.

Trajectory approximation of chaotic nonlinear systems is a particularly difficult problem, as these trajectories are sensitive to initial conditions and also exhibit variable spectrum and almost-periodicity properties. Simultaneous approximation of orbits that differ appears challenging for any method, requiring big data and accurate approaches. The use of linear dynamic models seems in certain respects more appropriate, as the knowledge of their structure allows one to correct the fitted model if a significant part of the spectrum of the data is lost due to numerical or other errors. This is demonstrated in this paper. The conclusions in this paper are derived with the use of an economic model (Yu et al., 2012) which exhibits hyperchaotic behaviour. We note that the scope, the methods and the results in this paper are entirely different from those in Yu et al. (2012). In a nutshell, in this paper we approximate the trajectory of the hyperchaotic system introduced in Yu et al. (2012) via EDMD methods, whereas in that work the general structure of the system is studied, namely its equilibrium points, its bifurcations, etc.

The EDMD method is data-driven. Consequently, our method can be applied to any dynamical system whose governing dynamical law may be unknown, provided that data, in the form of time series, can be collected for some of its trajectories in state space. Moreover, our approach can be advocated for any nonlinear dynamical system whose dynamical law might be known, but for which a linearization of its dynamics via the EDMD method may be required, for example in order to study control theory for the linearized system; control theory of linear systems is much better understood than control theory of nonlinear systems.

The rest of the paper is organized as follows. In Section 2, we first review a dynamical system which has emerged from financial studies and shows chaotic behaviour for some values of the parameters. Furthermore, we briefly present some basic facts about the Koopman operator. We also describe the EDMD algorithm and the main idea behind its connections with Takens' theorem. The latter is analyzed in more detail in Subsection 2.5. In Section 3, we consider the chaotic system of Section 2.1 and develop our methodology for approximating and reconstructing a single trajectory of the system. In Section 4, we demonstrate the proposed approach in the case where we wish to simultaneously approximate two orbits of the original system with the same autoregressive model. Finally, Section 5 concludes the paper.

Various financial systems have been found to exhibit chaotic dynamics (Chian, 2000; Fanti and Manfredi, 2007; Guégan, 2009; Hass, 1998; Holyst et al., 1996; Lorenz, 1993). A lot of research has been invested in the analysis, control and stabilization of these systems (for instance, see Ahmad et al., 2021; Chen et al., 2021; Ma et al., 2022; Pan and Wu, 2022; for some recent studies). In the present paper, we use in our analysis a chaotic dynamical model that describes the time evolution of three state variables: x is the interest rate, y is the investment demand and z is the price exponent. For more details about the origination, the development and the economic interpretation of the model, we refer to Ma, Yu, Jiang and Zhao. The system is described by the following non-linear differential equations.

$$\begin{cases} \dot{x} = z + (y-a)x \\ \dot{y} = 1 - by - x^{2} \\ \dot{z} = -x - cz \end{cases} \qquad (1)$$

    where a, b, c are positive constant parameters that represent the amount of savings, the cost per investment and the elasticity of demand respectively (see Ma and Chen, 2001).

When the values of the parameters are chosen as $a=0.9$, $b=0.2$, $c=1.2$ and for initial conditions $(x_0,y_0,z_0)=(1,3,2)$, the system (1) has a strange attractor that can be classified neither as a stable equilibrium nor as a periodic or almost periodic oscillation. Therefore, the system exhibits chaotic behaviour (see Ma and Chen, 2001; Rigatos, 2017).

Later, in Yu et al. (2012), it was observed that $x$ depends not only on the investment demand and the price exponent, but also on the average profit margin (denoted by $u(t)$). Therefore, in that work a dynamical system consisting of four state variables $(x,y,z,u)$ is proposed, which is described by the following first order non-linear differential equations

$$\begin{cases} \dot{x} = z + (y-a)x + u \\ \dot{y} = 1 - by - x^{2} \\ \dot{z} = -x - cz \\ \dot{u} = -dxy - ku \end{cases}$$

where $a,b,c,d,k$ are positive parameters. It is shown that, if the parameters' values are chosen to be $a=0.9$, $b=0.2$, $c=1.5$, $d=0.2$, $k=0.17$, then the system shows hyperchaotic behaviour (see Yu et al., 2012). A hyperchaotic system is usually defined as a chaotic system with more than one positive Lyapunov exponent. Therefore, such systems show more complex behaviour than chaotic systems.

    The Koopman operator theory represents an increasingly popular formalism of dynamical systems since it allows the analysis of nonlinear and complicated systems. Therefore, it can be utilized in chaotic and hyperchaotic systems.

Assume that $(M,f)$ is a continuous dynamical system, where $M \subseteq \mathbb{R}^n$ is a real manifold, $f$ is the evolution map and the system is described by the equation $\dot{x} = f(x)$. We also denote by $\Phi_t(x_0)$ the flow map, defined as the state of the system at time $t$ when we start from the initial condition $x_0$.

Any function $g: M \to \mathbb{C}$ is called an observable of the system $(M,f)$. Let now $\mathcal{F}$ be a function space of observables which enjoys the property: for any $g \in \mathcal{F}$ and any $t \ge 0$, it holds that $g \circ \Phi_t \in \mathcal{F}$. Then, the Koopman operator, which is actually a semigroup of operators, is defined for any $t \ge 0$ as $K_t: \mathcal{F} \to \mathcal{F}$ with

$$K_t(g) = g \circ \Phi_t.$$

It is easily verified that $K_t$ is linear for any $t \ge 0$. Throughout this paper, following common practice, we will use the term Koopman operator to refer to the whole class of operators $K = (K_t)_{t \ge 0}$. In many applications, the space $\mathcal{F}$ coincides with the Hilbert space $L^2(M)$; however, other choices can be used as well.

Similarly, the Koopman operator can be defined for any discrete dynamical system $x_{k+1} = f(x_k)$ defined on the state space $M$. The Koopman operator is the composition of any observable with the evolution map $f$; that is, $K: \mathcal{F} \to \mathcal{F}$ is defined by $K(g) = g \circ f$, for any observable $g \in \mathcal{F}$. Here, $\mathcal{F}$ is a linear space of observables which is closed under composition with $f$, and the space $L^2(M)$ remains a standard choice.

Roughly speaking, the Koopman operator updates every observable in the space $\mathcal{F}$ according to the evolution of the dynamical system. It defines a linear dynamical system $(\mathcal{F}, K)$ which completely describes the original system $(M,f)$. This linearization applies to any nonlinear system and it is global, i.e. it does not hold only in the neighbourhood of some attractor or fixed point. Furthermore, many properties of the original system can be codified as properties of the Koopman operator and, usually, they can be related to the eigenstructure of $K$. However, since $K$ is usually infinite dimensional, its spectral properties cannot be calculated except in some rare cases.
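The defining property $K(g) = g \circ f$ and the linearity of $K$ in the observable can be checked numerically. A toy sketch with a logistic-type map of our own choosing (not the paper's system):

```python
# The Koopman operator of a discrete system x_{k+1} = f(x_k) acts on an
# observable g by composition: (K g)(x) = g(f(x)).  Even though f is
# nonlinear, K acts linearly on observables.

def f(x):
    return 3.7 * x * (1.0 - x)          # nonlinear evolution map (toy choice)

def koopman(g):
    return lambda x: g(f(x))            # K g = g o f

g = lambda x: x * x
h = lambda x: 1.0 + 2.0 * x
alpha, beta = 2.0, -0.5
combo = lambda x: alpha * g(x) + beta * h(x)

x0 = 0.3
lhs = koopman(combo)(x0)                                  # K(a*g + b*h)(x0)
rhs = alpha * koopman(g)(x0) + beta * koopman(h)(x0)      # a*Kg(x0) + b*Kh(x0)
assert abs(lhs - rhs) < 1e-12                             # linearity of K
```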

To overcome the problem of dimensionality, one has to look for finite dimensional approximations of the Koopman operator, i.e. finite dimensional subspaces of the domain of $K$ which are invariant under $K$. The most natural candidates for this purpose are the spaces generated by eigenvectors of $K$, which, however, are difficult to find. Hence, we are obliged to search for finite dimensional linear approximations of the Koopman operator. In this direction, the Dynamic Mode Decomposition (DMD) and its generalization, the so-called Extended Dynamic Mode Decomposition (EDMD), have proved very successful.

We next describe briefly the main steps of the EDMD algorithm. Firstly, one has to consider a set of observables $\{g_1, g_2, \dots, g_p\}$. This set is usually called a dictionary. The cardinality $p$ of the dictionary is expected to be much bigger than $n$ (the dimension of the original state space $M$). DMD (Dynamic Mode Decomposition) uses only the observables $g_i(x) = x_i$, for $i=1,2,\dots,n$. However, in EDMD we can choose any observables. Hence, an augmented state space is constructed and EDMD provides, in general, better approximations than DMD. We denote the augmented state space by $\overline{M}$ and its elements by $y = [g_1(x), \dots, g_p(x)]^T$.

The second step is to collect the data. We consider a trajectory of the original system with initial condition $x_0$. The trajectory is observed over some finite time horizon $T$ and sampling points are collected at a fixed time interval $\Delta T$. Uniform sampling is not mandatory and other sampling methods can be applied. Therefore, $n_0 = T/\Delta T$ points on this trajectory are considered. We denote our data by

$$(x_s)_{s=0}^{n_0}.$$

These data correspond to data $(y_s)_{s=0}^{n_0}$ in the augmented space $\overline{M}$. We now consider the data matrices

$$Y_{[0,n_0-1]}=[y_0,y_1,\dots,y_{n_0-1}] \quad \text{and} \quad Y_{[1,n_0]}=[y_1,y_2,\dots,y_{n_0}].$$

Finally, using least squares regression methods, we obtain a $p\times p$ matrix $A$ such that $Y_{[1,n_0]} \approx A\,Y_{[0,n_0-1]}$. Therefore, we have

$$A=\operatorname*{argmin}_{\tilde{A}\in\mathbb{R}^{p\times p}}\left\|Y_{[1,n_0]}-\tilde{A}\,Y_{[0,n_0-1]}\right\|.$$

Here, $\|\cdot\|$ denotes some matrix norm, for instance the Frobenius norm.

This procedure can also be applied to several trajectories. In this case, we construct data matrices $Y^j_{[0,n_0-1]}$ and $Y^j_{[1,n_0]}$ for every trajectory $j=1,2,\dots,k$. Then, the matrix $A$ is chosen such that

$$A=\operatorname*{argmin}_{\tilde{A}\in\mathbb{R}^{p\times p}}\sum_{j=1}^{k}\left\|Y^j_{[1,n_0]}-\tilde{A}\,Y^j_{[0,n_0-1]}\right\|.$$

    Roughly speaking, A is a best-fit matrix which relates the two data matrices in every trajectory. This matrix generates a finite dimensional linear system that advances spatial measurements from one time to the next and therefore it may provide approximations of the original nonlinear system.
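The least-squares step can be sketched on a standard toy system whose dictionary $\{x_1, x_2, x_1^2\}$ is exactly Koopman-invariant, so the regression recovers the finite dimensional Koopman matrix exactly. The toy system, the data sizes and all names below are our own illustrative choices, not the paper's financial system:

```python
# EDMD sketch on a textbook toy system with an invariant dictionary:
#   x1_{k+1} = lam * x1_k
#   x2_{k+1} = mu * x2_k + (lam**2 - mu) * x1_k**2
# With observables g1 = x1, g2 = x2, g3 = x1**2 the Koopman dynamics
# are exactly linear, so least squares recovers the matrix A exactly.

lam, mu = 0.9, 0.5

def step(x1, x2):
    return lam * x1, mu * x2 + (lam ** 2 - mu) * x1 ** 2

def lift(x1, x2):
    return [x1, x2, x1 ** 2]            # the dictionary

# Snapshot pairs (y_s, y_{s+1}) from several short trajectories.
Y0, Y1 = [], []
for x1, x2 in [(1.0, 0.3), (-0.7, 0.8), (0.5, -0.6)]:
    for _ in range(10):
        nxt = step(x1, x2)
        Y0.append(lift(x1, x2))
        Y1.append(lift(*nxt))
        x1, x2 = nxt

# Least squares A = argmin ||Y1 - A Y0||_F via the normal equations
# A = (Y1 Y0^T)(Y0 Y0^T)^{-1}, with a tiny Gaussian-elimination solver.
p = 3
G = [[sum(a[i] * a[j] for a in Y0) for j in range(p)] for i in range(p)]
B = [[sum(b[i] * a[j] for b, a in zip(Y1, Y0)) for j in range(p)]
     for i in range(p)]

def solve(G, rhs):                       # solve G x = rhs, G is p x p
    M = [row[:] + [r] for row, r in zip([g[:] for g in G], rhs)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(p):
            if r != col:
                fac = M[r][col] / M[col][col]
                M[r] = [mr - fac * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][p] / M[i][i] for i in range(p)]

# Since G is symmetric, row i of A solves G a_i = (row i of Y1 Y0^T).
A = [solve(G, B[i]) for i in range(p)]
```

Here the exact Koopman matrix has rows $(\lambda,0,0)$, $(0,\mu,\lambda^2-\mu)$ and $(0,0,\lambda^2)$, and the fitted $A$ matches it up to round-off; the loop over three initial conditions also illustrates the multi-trajectory variant above.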

    One of the advantages of the EDMD method is that it depends on data. Hence, it may be applied when the dynamics of the system is unknown. On the other hand, the success of this method depends on the dictionary which is chosen a priori. In many cases, it is a difficult problem to find the suitable observables. Recent studies try to utilize machine learning and artificial intelligence methods in order to "train" the dictionary.

As has been mentioned, EDMD advances measurements of the state of the system from one time to the next. However, we may also obtain measurement coordinates for the approximation of the Koopman operator using time-delayed measurements of the system. This approach is also data-driven and utilizes the information from previous measurements to predict the future. It is particularly useful in chaotic systems. In the case of fixed points or periodic orbits, more data from previous measurements have small contributions. On the contrary, when the trajectories densely fill a strange attractor, more data provide more information.

From this point of view, Koopman operators can interact with the Takens embedding theorem. The latter can provide us with information about the dimensions of the augmented state-space as well as the dictionary that should be chosen. A connection between Koopman operators and Takens' theorem has already been investigated in Bana.

We firstly need some definitions. A dynamical system on a manifold $M$ is specified by a flow

$$\phi_{(\cdot)}(\cdot): \mathbb{R}\times M \to M,$$

where $\phi_0(x)=x$, $\phi_t$ is a diffeomorphism for all $t$ and $\phi_t \circ \phi_s = \phi_{t+s}$ (e.g. $\phi$ is the solution to a system of ODEs).

Roughly speaking, an attractor of a dynamical system is a subset of the state space to which orbits originating from typical initial conditions tend as time increases. It is very common for dynamical systems to have more than one attractor. For each such attractor, its basin of attraction is the set of initial conditions leading to long-time behavior that approaches that attractor.

An attracting set $A$ for the flow $\phi_t$ is a closed set such that:

● the basin of attraction $B(A)$ has positive measure,

● for every closed set $A' \subsetneq A$, the difference $B(A)\setminus B(A')$ has positive measure.

A set $A\subseteq M$ is an attractor if it is an attracting set that contains a dense orbit of the flow $\phi_t$. An attractor $A$ is a strange attractor (Eckmann and Ruelle, 1985; Guckenheimer and Holmes, 1983) if its box-counting dimension $d$ is non-integer.

Theorem 2.1 (Takens' theorem). Let $M$ be a compact manifold of (integer) dimension $q$. Then for generic pairs $(\phi, y)$, where

$\phi: M \to M$ is a $C^2$-diffeomorphism of $M$ into itself,

$y: M \to \mathbb{R}$ is a $C^2$-differentiable function,

the map $\Phi_{(\phi,y)}: M \to \mathbb{R}^{2q+1}$ given by

$$\Phi_{(\phi,y)}(x) := \big(y(x),\, y(\phi(x)),\, y(\phi^2(x)),\, \dots,\, y(\phi^{2q}(x))\big)$$

is an embedding (i.e. an injective and immersive map) of $M$ in $\mathbb{R}^{2q+1}$.

If $\phi_t$ is a flow on $M$ and $\tau>0$ is a fixed time, then we can define the delay map by

$$\Phi(x) := \big(y(x),\, y(\phi_\tau(x)),\, y(\phi_{2\tau}(x)),\, \dots,\, y(\phi_{\kappa\tau}(x))\big). \qquad (2)$$

If $M$ is a manifold that is an attractor for the flow $\phi_t$, then $\phi_\tau$ is a diffeomorphism of $M$ into itself, so if $\kappa \ge 2\dim(M)+1$, then for generic $y, \tau$, Takens' theorem (see Sauer et al., 1991; Packard et al., 1980; Takens, 1981; Muldoon et al., 1993) says that the delay map defined by (2) is actually an embedding.

Finally, we close with some details about the meaning of generic. Let $P$ be a property that functions in $C^k(M,N)$ might or might not have. Then we say that $P(f)$ is true for generic $f \in C^k(M,N)$ if the set of functions for which it holds is open and dense in the $C^k$ topology. So arbitrarily small perturbations turn bad choices into good choices.

    The central plank of this theory is a result suggested by several people (Packard et al., 1980) and eventually proved in (Takens, 1981) which shows how a time series of measurements of a single observable can often be used to reconstruct qualitative features of the phase space of the system. The technique described by Takens, the method of delays (2), is so simple that it can be applied to essentially any time series whatever, and has made possible the wide-ranging search for chaos.
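In discrete form, the delay map (2) simply stacks lagged samples of a single observable into vectors. A minimal sketch with a synthetic series (the function name, the series and the parameter values are our own illustrative choices):

```python
# The method of delays: from a scalar time series s build the
# (kappa+1)-dimensional delay vectors (s[n], s[n+tau], ..., s[n+kappa*tau]).

def delay_vectors(series, kappa, tau=1):
    """Return all delay vectors that fit inside the series."""
    last = len(series) - kappa * tau
    return [[series[n + j * tau] for j in range(kappa + 1)]
            for n in range(last)]

series = [0.1 * n * n for n in range(8)]      # stand-in observable values
vectors = delay_vectors(series, kappa=2, tau=1)
# vectors[0] is the window [series[0], series[1], series[2]]
```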

We consider the 4-dimensional (financial) dynamical system described in Section 2.1, which for the parameter values $a=0.9$, $b=0.2$, $c=1.5$, $d=0.2$, $k=0.17$ exhibits chaotic behaviour. We select the initial conditions for the four variables to be $x(0)=1$, $y(0)=5$, $z(0)=1$, $u(0)=1$. For these values the trajectory of $x(t)$ is depicted in Figure 1.

Figure 1.  The trajectory of the interest rate x(t), for initial values x(0)=1, y(0)=5, z(0)=1, u(0)=1.

The vector of variables $(x,y,z,u)$ converges to a chaotic attractor in $\mathbb{R}^4$. According to the Whitney-Takens embedding theory, we can embed this attractor into a higher dimensional space of dimension $d$ at least 8 by considering a single observable and a vector of $d$ of its lagged values. We use this idea to attempt to reconstruct the above trajectory by means of a 9-dimensional dynamical system. The simplest way to do that is to consider $x$ as an observable and to model the evolution of the vector $(x(T-9\Delta t), \dots, x(T))$, assuming a sampling period $\Delta t$; in our case $\Delta t = 1$.

    The simplest model for this evolution is a linear autoregressive model

$$x_{n+9} = a_8 x_{n+8} + \dots + a_0 x_n.$$

We model the time series of the variable $x(t)$ plotted above, obtained after sampling, and we get the results shown in Tables 1 and 2. Using these results, we reconstruct the $x$ trajectory as shown in Figure 2.
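The same least-squares autoregression can be illustrated at order 2, where the normal equations admit a closed-form solution and the true coefficients are known exactly: the series $x_n = \cos(\omega n)$ satisfies $x_{n+2} = 2\cos(\omega)\,x_{n+1} - x_n$ identically. A stdlib-only sketch (the series, the order and all names are our own toy choices, not the paper's AR(9) fit):

```python
# Least-squares fit of a linear AR model, at order 2 so the normal
# equations can be solved by Cramer's rule.  The fitted coefficients
# should recover (2*cos(omega), -1) up to round-off.
import math

omega = 0.7
x = [math.cos(omega * n) for n in range(200)]

# Normal equations for x_{n+2} ~ a1*x_{n+1} + a0*x_n.
S11 = sum(x[n + 1] ** 2 for n in range(198))
S00 = sum(x[n] ** 2 for n in range(198))
S10 = sum(x[n + 1] * x[n] for n in range(198))
T1 = sum(x[n + 2] * x[n + 1] for n in range(198))
T0 = sum(x[n + 2] * x[n] for n in range(198))

det = S11 * S00 - S10 * S10
a1 = (T1 * S00 - T0 * S10) / det          # Cramer's rule
a0 = (S11 * T0 - S10 * T1) / det

# Reconstruct the whole series from the first two samples only.
rec = x[:2]
for n in range(198):
    rec.append(a1 * rec[-1] + a0 * rec[-2])
```

Because the AR(2) relation is exact for this series, the free-running reconstruction stays on the true trajectory; for the chaotic series of the paper the fit is only approximate, which is why the reconstruction in Figure 2 eventually degrades.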

Table 1.  Basic parameter table. The adjusted $R^2$ is 0.986.
#  Estimate  Standard error  t-Statistic  p-value
    1 0.382345 0.0526912 7.25634 1.1104 ×1011
    2 0.0528833 0.0593214 0.891471 0.373854
    3 0.0510388 0.0532897 0.95776 0.339454
    4 0.0669098 0.0533061 1.2552 0.211016
    5 0.0546443 0.0532567 1.02606 0.306227
    6 0.0756992 0.0534338 1.41669 0.158283
    7 0.488672 0.0536388 9.11044 1.47733 ×1016
    8 0.457925 0.0622202 7.35974 6.12628 ×1012
    9 0.374469 0.040841 9.16895 1.01965 ×1016

    Table 2.  Analysis of variance.
Source  DF  SS  MS  F-Statistic  p-value
    1 1 440.283 440.283 12718.8 2.36926 ×10170
    2 1 0.252312 0.252312 7.28877 0.00759293
    3 1 39.9561 39.9561 1154.25 1.03165 ×1080
    4 1 0.388756 0.388756 11.2304 0.000978538
    5 1 0.447058 0.447058 12.9146 0.000419584
    6 1 4.09436 4.09436 118.277 1.52056 ×1021
    7 1 9.81632 9.81632 283.573 5.71589 ×1039
    8 1 0.0967607 0.0967607 2.79521 0.0962645
    9 1 2.9102 2.9102 84.0696 1.01965 ×1016
    Error 182 6.30022 0.0346166
    Total 191 504.545

    Figure 2.  Reconstruction of the trajectory of x.

Furthermore, we project the modes of the above AR system which have modulus close to unity onto the unit circle by radial projection. The new AR system reconstructs $x(t)$ as depicted in Figure 3.

Figure 3.  Reconstruction of the trajectory of x after radial projection of the near-unit modes.
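The radial projection used here replaces each near-unit mode $\lambda$ by $\lambda/|\lambda|$, pushing it exactly onto the unit circle. A small sketch with illustrative mode values and a tolerance of our own choosing (the paper's actual modes come from the AR fit in Table 1):

```python
# Radial projection of AR modes (roots of the characteristic
# polynomial): modes with modulus close to 1 are rescaled to |z| = 1.
import cmath

def project_near_unit_modes(modes, tol=0.05):
    out = []
    for lam in modes:
        r = abs(lam)
        if abs(r - 1.0) < tol and r > 0:
            lam = lam / r                # radial projection onto |z| = 1
        out.append(lam)
    return out

modes = [0.98 * cmath.exp(1j * 0.4), 0.6, -0.99]
projected = project_near_unit_modes(modes)
# the first and third modes land exactly on the unit circle;
# the well-damped mode 0.6 is left untouched
```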

    The above results demonstrate that the considered chaotic financial system can be approximated, at least orbitwise and in finite horizon, by AR models of high enough dimensions and thus EDMD methods may assist in modeling the Takens embedding for the purpose of simulating single or multiple chaotic trajectories.

Finally, we apply two further methods, based on machine learning, to explore nonlinear relationships for the transition map proposed by the Takens embedding. Both are generally described by a nonlinear autoregression

$$x_{n+9} = F(x_{n+8},\dots,x_n),$$

    which was realised by two machine learning approaches, namely random forest interpolation and Gaussian process interpolation. The results for the same trajectory reconstruction are depicted in Figures 4 (for random forests) and 5 (for Gaussian process interpolation). Details of the two approaches are given in Tables 3 and 4.

    Figure 4.  Reconstruction of the trajectory of x using random forests.
    Table 3.  Information for the random forest method.
    Predictor information
    Method Random Forest
    Number of features 1
    Number of training examples 190
    Number of trees 50

    Table 4.  Information for the Gaussian interpolation process.
    Predictor information
    Method Gaussian process
    Number of features 1
    Number of training examples 190
    Assume Deterministic False
    Numerical Covariance Type Squared Exponential
    Nominal Covariance Type Hamming Distance
    Estimation Method Maximum Posterior
    Optimization Method Find Minimum


Both methods fit the training data well; however, when they are used for trajectory reconstruction starting from the same initial condition, the subsequent calculation of the trajectory suffers from error accumulation, and therefore the reconstruction worsens with time. Gaussian process interpolation seems to work better.
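The free-running use of a fitted nonlinear autoregression can be sketched with a deliberately simple stdlib stand-in for the random-forest and Gaussian-process regressors: a 1-nearest-neighbour predictor over the training lag windows. The synthetic series, the order and all names are our own choices:

```python
# Nonlinear autoregression x_{n+p} = F(x_{n+p-1}, ..., x_n), realised by
# a 1-NN predictor: the forecast is the successor of the closest
# training window.
import math

p = 3
series = [math.sin(0.3 * n) for n in range(150)]

# Training pairs: lag window -> next value.
windows = [tuple(series[n:n + p]) for n in range(len(series) - p)]
targets = [series[n + p] for n in range(len(series) - p)]

def F(window):
    best = min(range(len(windows)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(windows[i], window)))
    return targets[best]

# Free-running reconstruction from the first p samples only.
rec = series[:p]
for _ in range(len(series) - p):
    rec.append(F(tuple(rec[-p:])))
```

On this training series the free run is exact, since every window it encounters is itself a training window; on genuinely unseen trajectories each prediction carries an error that feeds into the next window, which is precisely the accumulation effect discussed above.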

    In this section, we attempt to simultaneously approximate two orbits of the hyperchaotic dynamical system via a single autoregressive dynamical system. To achieve this goal, we increase the number of lags to 30.

    First of all, we use data from numerical integration to plot the second orbit. This orbit is shown in Figure 6 and corresponds to the initial conditions x(0)=0, y(0)=2, z(0)=0 and u(0)=1 and the time horizon t ∈ [0, 100]. We note that the new initial conditions give rise to an orbit that is quite far from the first one; hence, the task of simultaneous approximation is quite challenging. The new autoregression is expected to produce a dynamical system that approximates the two trajectories by a single autoregressive equation with different initial conditions.

    Figure 5.  Reconstruction of the trajectory of x using Gaussian interpolation process.
    Figure 6.  The trajectory of the investment rate x(t), for initial values x(0)=0, y(0)=2, z(0)=0, u(0)=1 and time horizon T=100.

    Similarly to the case of one trajectory, the simplest autoregressive model that one may consider is linear. The regression results are reported in Tables 5 and 6. The value of the adjusted R² is approximately 0.9778. This value is smaller than the corresponding value of the one-orbit case; however, it is still very close to 1. The regression function is given by:

    x_{n+31} = \sum_{j=1}^{30} a_j x_{n+j},
    Table 5.  Analysis of variance.
    Lag DF SS MS F-Statistic p-value
    1 1 885.406 885.406 13286.5 1.38351×10^-256
    2 1 0.466206 0.466206 6.99591 0.00858584
    3 1 24.5391 24.5391 368.235 1.2149×10^-54
    4 1 1.85537 1.85537 27.8417 2.47799×10^-7
    5 1 2.68491 2.68491 40.2899 7.76562×10^-10
    6 1 17.6117 17.6117 264.283 2.08632×10^-43
    7 1 23.8109 23.8109 357.307 1.51675×10^-53
    8 1 11.7662 11.7662 176.565 3.37191×10^-32
    9 1 6.70749 6.70749 100.653 1.07024×10^-20
    10 1 0.204543 0.204543 3.06939 0.0807683
    11 1 8.20769 8.20769 123.165 2.54294×10^-24
    12 1 0.0414977 0.0414977 0.622716 0.430643
    13 1 0.0341088 0.0341088 0.511839 0.474883
    14 1 1.21188 1.21188 18.1856 0.0000266385
    15 1 2.18295 2.18295 32.7575 2.46335×10^-8
    16 1 1.68824 1.68824 25.3338 8.1894×10^-7
    17 1 0.371216 0.371216 5.57049 0.0188848
    18 1 4.50535 4.50535 67.6076 5.53381×10^-15
    19 1 0.313024 0.313024 4.69725 0.030972
    20 1 0.360659 0.360659 5.41207 0.0206423
    21 1 0.718296 0.718296 10.7788 0.00114403
    22 1 1.03007 1.03007 15.4573 0.000104143
    23 1 1.23863 1.23863 18.5869 0.0000218361
    24 1 0.0496161 0.0496161 0.744542 0.388877
    25 1 0.00615967 0.00615967 0.0924324 0.761311
    26 1 0.202932 0.202932 3.0452 0.0819668
    27 1 2.1012 2.1012 31.5307 4.36478×10^-8
    28 1 0.157497 0.157497 2.36341 0.125231
    29 1 0.199552 0.199552 2.99448 0.0845438
    30 1 0.0705909 0.0705909 1.05929 0.304179
    Error 310 20.6583 0.0666397
    Total 340 1020.4

    Table 6.  Basic parameter table.
    Lag Estimate Standard error t-Statistic p-value
    1 0.245794 0.0552653 4.44753 0.0000121198
    2 0.473285 0.0566816 8.34989 2.30183×10^-15
    3 0.00730259 0.0626806 0.116505 0.907368
    4 0.384589 0.0599047 6.42001 5.1073×10^-10
    5 0.244072 0.0613891 3.97582 0.0000873291
    6 -0.353163 0.0626402 -5.63797 3.87278×10^-8
    7 0.00100867 0.0631682 0.0159681 0.98727
    8 0.142987 0.0620628 2.30391 0.0218891
    9 0.136853 0.0610078 2.24321 0.0255892
    10 0.241157 0.0608958 3.96016 0.0000929613
    11 0.167892 0.0599006 2.80285 0.00538436
    12 -0.128651 0.0605339 -2.12527 0.0343548
    13 0.0686504 0.0554803 1.23738 0.216881
    14 0.0330885 0.0555248 0.595923 0.551661
    15 0.327951 0.0537379 6.10279 3.10786×10^-9
    16 0.0368829 0.0559783 0.658877 0.510464
    17 0.171794 0.0557006 3.08425 0.00222441
    18 0.357909 0.0566593 6.31687 9.25464×10^-10
    19 0.194009 0.0603826 3.213 0.00145176
    20 0.0956424 0.0602179 1.58827 0.113244
    21 0.247097 0.0593144 4.16589 0.0000402579
    22 0.183056 0.0604346 3.029 0.00265998
    23 0.335842 0.060323 5.5674 5.60543×10^-8
    24 0.00644882 0.0597654 0.107902 0.914143
    25 0.162157 0.0582993 2.78146 0.00574279
    26 0.102138 0.0573372 1.78136 0.0758319
    27 0.301513 0.0564203 5.34406 1.76445×10^-7
    28 0.0609255 0.0545446 1.11699 0.264865
    29 0.0523586 0.0520694 1.00555 0.315414
    30 0.0408878 0.039727 1.02922 0.304179


    where (a_j)_{j=1}^{30} are the estimates given by the first column of Table 6.
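The pooled least-squares fit behind this regression can be sketched as follows. This is a minimal sketch, not the paper's code: the two signals are synthetic stand-ins for the sampled observable x(t) on the two orbits, and the helper `lagged_design` is a hypothetical name.

```python
import numpy as np

# Synthetic stand-ins for the observable x(t) on the two orbits.
t = np.linspace(0.0, 100.0, 500)
orbit1 = np.sin(t) + 0.5 * np.sin(2.3 * t)        # placeholder orbit 1
orbit2 = np.cos(1.1 * t) + 0.3 * np.sin(3.1 * t)  # placeholder orbit 2

LAGS = 30

def lagged_design(x, lags):
    """Rows are windows of `lags` consecutive samples; targets are the
    sample immediately following each window."""
    X = np.array([x[i:i + lags] for i in range(len(x) - lags)])
    return X, x[lags:]

# Pool the samples from both trajectories into one design matrix, so a
# single autoregressive equation is fitted to both orbits at once.
X1, y1 = lagged_design(orbit1, LAGS)
X2, y2 = lagged_design(orbit2, LAGS)
X = np.vstack([X1, X2])
y = np.concatenate([y1, y2])

# Ordinary least-squares estimates of the coefficients a_1, ..., a_30.
a, *_ = np.linalg.lstsq(X, y, rcond=None)

mse = float(np.mean((y - X @ a) ** 2))  # one-step training error
print(a.shape)
```

Iterating the fitted recurrence from each orbit's first 30 samples then yields the two reconstructions shown in Figure 7.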

    Finally, we can use the above linear autoregressive dynamical system to approximate the two different original trajectories of x(t). Figure 7 shows the plots of these approximations.

    Figure 7.  Reconstruction of (a) the first and (b) the second trajectory of x.

    We next assume that the autoregressive model x_{n+31} = F(x_{n+30}, \ldots, x_n) is non-linear. Using the previous data, we utilize machine learning methods to obtain a reconstruction of the trajectories of x. First, we interpolate the data using the random forest method, with parameters given in Table 7.

    Table 7.  Details of the random forest interpolation.
    Predictor information
    Method: Random Forest
    Number of features: 1
    Number of training examples: 340
    Number of trees: 50


    Figure 8 shows the simulated two trajectories using the random forest predictor.

    Figure 8.  Reconstruction of (a) the first and (b) the second trajectory of x using random forest interpolation.

    Finally, we can interpolate the previous data with the Gaussian process method instead of random forests. The relevant information for this method is given in Table 8. Figure 9 contains the approximations of the two trajectories of x obtained with this method.

    Table 8.  Details of the Gaussian interpolation process.
    Predictor information
    Method: Gaussian process
    Number of features: 1
    Number of training examples: 340
    Assume Deterministic: False
    Numerical Covariance Type: Squared Exponential
    Nominal Covariance Type: Hamming Distance
    Estimation Method: Maximum Posterior
    Optimization Method: Find Minimum

    Figure 9.  Reconstruction of (a) the first and (b) the second trajectory of x using Gaussian interpolation.

    Given a chaotic system with a strange attractor, Takens' theorem informs us that we can obtain a structure topologically equivalent to the attractor by means of a delay embedding. Furthermore, it provides us with an upper bound for the required dimension of the embedding. However, this is mainly a theoretical result, which may not be very useful in practice, since we may not know the quantities that appear in it. Furthermore, in some cases the estimate of the dimension may be too high.

    In this paper, we combine the result of Takens' theorem with ideas from Koopman operators and Extended Dynamic Mode Decomposition. Since EDMD depends on data, it provides the necessary tools for numerical calculations, which are absent from the Takens' framework. In this way, we obtain a numerical method for reconstruction of the trajectories of a chaotic dynamical system. This reconstruction may also lead to simplified versions of the chaotic dynamical system as well as to a better understanding of its dynamics.

    In particular, we applied the methodology to a chaotic dynamical system that appears in financial studies. According to Takens' theorem, we consider one observable of the system. This observable may be one of the state variables; we used the variable x of the system. Similar work can also be carried out for the other variables, as well as for other observables, if they can provide useful information about the system. Then, we take time-delayed measurements of this observable

    \mathbf{x}(t) = [x(t), x(t - \Delta t), \ldots, x(t - d\Delta t)]^T.

    Takens' theorem provides information about the number d of measurements. In particular, when d > 2n, the embedding is faithful. Therefore, we may seek a map F: \mathbb{R}^d \to \mathbb{R} such that

    x(t+1) = F(\mathbf{x}(t)) = F(x(t), x(t - \Delta t), \ldots, x(t - d\Delta t)).
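The delay vectors above can be assembled with a simple sliding window. A minimal sketch, assuming a uniformly sampled scalar observable; the helper name `delay_embed` is hypothetical:

```python
import numpy as np

def delay_embed(x, d):
    """Return an (N - d) x (d + 1) array whose rows are the delay vectors
    [x(t), x(t - dt), ..., x(t - d*dt)], newest sample first, matching
    the ordering in the text."""
    x = np.asarray(x)
    return np.stack([x[i + d - np.arange(d + 1)] for i in range(len(x) - d)])

x = np.arange(10.0)      # toy observable samples
E = delay_embed(x, 3)    # d = 3 delays -> 4-dimensional vectors
print(E.shape, E[0])     # (7, 4) [3. 2. 1. 0.]
```

Fitting F then amounts to regressing each next sample on the delay vector that precedes it, which is exactly the autoregression used in Section 3.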

    In Section 3, we tested several choices for the function F. The simplest case is to consider a linear dependence of x(t+1) on the previous measurements. The other choices that we examined use machine learning methods (random forests, Gaussian interpolation) to find the map F and to reconstruct the trajectory of the variable x of the original dynamical system.

    The advantage of our approach is the combination of Koopman-EDMD theory, Takens' theorem and machine learning, which allows us to use data measurements in order to obtain the orbits of a system exhibiting very complicated behaviour. The key, many-faceted problem we face is the reconstruction of the trajectories of dynamical systems in general, and of financial dynamical systems in particular (To). A first step in this direction is to use more observables in the Koopman-EDMD theory, consisting of the state variables, the observables used in Takens' theorem, and (non)linear functions of them.

    Koopman Mode Analysis has been applied in energy economics (george) and in financial economics. More specifically, the findings of the present paper could have practical applications in financial trading, extending further the work of mann, who used the Koopman operator in financial trading by developing dynamic mode decomposition on portfolios of financial data. Furthermore, they could be considered, by researchers in quantitative finance, together with the novel methodology proposed by Hu for high-dimensional time series prediction based on the kernel-method extension of data-driven Koopman spectral analysis.

    The authors would like to thank the reviewers for their helpful remarks and comments.

    All authors declare no conflicts of interest in this paper.



    [96] S. García, A. Fernández, J. Luengo, F. Herrera, Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power, Inf. Sci., 180 (2010), 2044–2064. https://doi.org/10.1016/j.ins.2009.12.010 doi: 10.1016/j.ins.2009.12.010
    [97] E. Theodorsson-Norheim, Friedman and Quade tests: basic computer program to perform nonparametric two-way analysis of variance and multiple comparisons on ranks of several related samples, Comput. Biol. Med., 17 (1987), 85–99. https://doi.org/10.1016/0010-4825(87)90003-5 doi: 10.1016/0010-4825(87)90003-5
    [98] K. V. Price, N. H. Awad, M. Z. Ali, P. N. Suganthan, The 100-digit challenge: problem definitions and evaluation criteria for the 100-digit challenge special session and competition on single objective numerical optimization. Technical Report Nanyang Technological University, Singapore, (2018).
    [99] C. A. Coello Coello, Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art, Comput. Methods Appl. Mech. Eng., 191 (2002), 1245–1287. https://doi.org/10.1016/S0045-7825(01)00323-1 doi: 10.1016/S0045-7825(01)00323-1
© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)