
Evolutionary optimization framework to train multilayer perceptrons for engineering applications


  • Training neural networks by using conventional supervised backpropagation algorithms is a challenging task. This is due to significant limitations, such as the risk for local minimum stagnation in the loss landscape of neural networks. That may prevent the network from finding the global minimum of its loss function and therefore slow its convergence speed. Another challenge is the vanishing and exploding gradients that may happen when the gradients of the loss function of the model become either infinitesimally small or unmanageably large during the training. That also hinders the convergence of the neural models. On the other hand, the traditional gradient-based algorithms necessitate the pre-selection of learning parameters such as the learning rates, activation function, batch size, stopping criteria, and others. Recent research has shown the potential of evolutionary optimization algorithms to address most of those challenges in optimizing the overall performance of neural networks. In this research, we introduce and validate an evolutionary optimization framework to train multilayer perceptrons, which are simple feedforward neural networks. The suggested framework uses the recently proposed evolutionary cooperative optimization algorithm, namely, the dynamic group-based cooperative optimizer. The ability of this optimizer to solve a wide range of real optimization problems motivated our research group to benchmark its performance in training multilayer perceptron models. We validated the proposed optimization framework on a set of five datasets for engineering applications, and we compared its performance against the conventional backpropagation algorithm and other commonly used evolutionary optimization algorithms. The simulations showed the competitive performance of the proposed framework for most examined datasets in terms of overall performance and convergence. 
For three benchmarking datasets, the proposed framework provided increases of 2.7%, 4.83%, and 5.13% over the performance of the second best-performing optimizers, respectively.

    Citation: Rami AL-HAJJ, Mohamad M. Fouad, Mustafa Zeki. Evolutionary optimization framework to train multilayer perceptrons for engineering applications[J]. Mathematical Biosciences and Engineering, 2024, 21(2): 2970-2990. doi: 10.3934/mbe.2024132




    Fractional calculus is a branch of mathematics that investigates the properties of arbitrary-order differential and integral operators to address various problems. Fractional differential equations provide a more appropriate model for describing diffusion processes, wave phenomena, and memory effects [1,2,3,4] and possess a diverse array of applications across numerous fields, encompassing stochastic equations, fluid flow, dynamical systems theory, biological and chemical engineering, and other domains[5,6,7,8,9].

A star graph $G=(V,E)$ consists of a finite set of nodes (vertices) $V(G)=\{v_0,v_1,\ldots,v_k\}$ and a set of edges $E(G)=\{e_1=v_1v_0,\,e_2=v_2v_0,\ldots,\,e_k=v_kv_0\}$ connecting them, where $v_0$ is the joint point and $e_i$ is the edge connecting the nodes $v_i$ and $v_0$, of length $l_i$, i.e., $l_i=|v_iv_0|$.
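The star-graph structure above can be made concrete in code. The following is a minimal sketch (not from the paper; the helper name `star_graph` and the edge lengths are illustrative only), representing the graph as plain adjacency lists and checking the defining properties: the joint point $v_0$ has degree $k$, and every other vertex is joined only to $v_0$.

```python
def star_graph(lengths):
    """Adjacency map {vertex: [(neighbour, edge_length), ...]} for a star
    graph with joint point 0 and leaves 1..k, where lengths[i-1] = l_i."""
    adj = {0: []}
    for i, li in enumerate(lengths, start=1):
        adj[0].append((i, li))   # edge e_i = v_i v_0 seen from the centre
        adj[i] = [(0, li)]       # ... and seen from the leaf v_i
    return adj

G = star_graph([0.5, 1.0, 2.0])                 # k = 3 edges
assert len(G[0]) == 3                           # joint point v0 has degree k
assert all(len(G[i]) == 1 for i in (1, 2, 3))   # each leaf meets only v0
assert sum(l for _, l in G[0]) == 3.5           # total length l1 + l2 + l3
```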

    Graph theory is a mathematical discipline that investigates graphs and networks. It is frequently regarded as a branch of combinatorial mathematics. Graph theory has become widely applied in sociology, traffic management, telecommunications, and other fields[10,11,12].

Differential equations on star graphs can be applied in different fields, such as chemistry and bioengineering [13,14]. Mehandiratta et al. [15] explored the following fractional differential system on a star graph with $k+1$ nodes and $k$ edges:

$$\begin{cases} {}^{C}D^{\alpha}_{0,x}u_i(x)=f_i\big(x,u_i(x),{}^{C}D^{\beta}_{0,x}u_i(x)\big), & 0<x<l_i,\ i=1,2,\ldots,k,\\ u_i(0)=0, & i=1,2,\ldots,k,\\ u_i(l_i)=u_j(l_j), & i,j=1,2,\ldots,k,\ i\neq j,\\ \sum_{i=1}^{k}u_i'(l_i)=0, \end{cases}$$

where ${}^{C}D^{\alpha}_{0,x}$ and ${}^{C}D^{\beta}_{0,x}$ are Caputo fractional derivative operators, $1<\alpha\leq 2$, $0<\beta\leq\alpha-1$, and $f_i\in C([0,1]\times\mathbb{R}\times\mathbb{R})$, $i=1,2,\ldots,k$, are continuous functions. By a suitable transformation, an equivalent fractional differential system defined on $[0,1]$ is obtained. The authors studied this nonlinear Caputo fractional boundary value problem on star graphs and established existence and uniqueness results by fixed point theory.

Zhang et al. [16] extended the work of reference [15] by introducing a coefficient function $\lambda_i(x)$. In addition, Wang et al. [17] discussed the existence and stability of a fractional differential equation with the Hadamard derivative. For more papers on the existence of solutions to fractional differential equations, see [18,19,20,21]. Numerically simulating the solutions of fractional differential systems allows problems to be treated more clearly and accurately; however, numerical simulation has rarely been used to describe the solutions of fractional differential systems on graphs [22,23].

The word chemical distinguishes chemical graph theory from traditional graph theory, in which rigorous mathematical proofs are often preferred to the intuitive grasp of key ideas and theorems. In chemical graph theory, graphs represent the structural features of chemical substances. Here, we introduce a novel modeling of fractional boundary value problems on the benzoic acid graph (Figure 1). The benzoic acid molecule ($\mathrm{C_6H_5COOH}$) consists of seven carbon atoms, six hydrogen atoms, and two oxygen atoms. Benzoic acid is mainly used in the preparation of the preservative sodium benzoate, as well as in the synthesis of drugs and dyes; it is also used in the production of mordants, fungicides, and fragrances. Therefore, a thorough understanding of its properties is of utmost importance.

    Figure 1.  Molecular structure of benzoic acid.

Based on this structure, we regard the carbon, hydrogen, and oxygen atoms as the vertices of the graph, and the existing chemical bonds between atoms as its edges. To investigate the existence of solutions for our fractional boundary value problems, we label the vertices of the benzoic acid graph with one of two values, 0 or 1, and fix the length of each edge at $e$ ($|e_i|=e$, $i=1,2,\ldots,14$) (Figure 2). In this way, we construct a local coordinate system on the benzoic acid graph, where the orientation of each edge determines the labeling of its vertices: as we move along any edge, its beginning and ending vertices carry the labels 0 and 1, respectively.

Figure 2.  Benzoic acid graph with vertices labeled 0 or 1.

Motivated by the above work and the relevant literature [15,16,17,18,19,20,21,22,23], we discuss a boundary value problem consisting of nonlinear fractional differential equations defined on the edges $e_i$ ($|e_i|=e$, $i=1,2,\ldots,14$) by

$${}^{H}D^{\alpha}_{1^+}r_i(t)=\varrho_i^{\alpha}\,\phi_p\big(f_i(t,r_i(t),{}^{H}D^{\beta}_{1^+}r_i(t))\big),\quad t\in[1,e],$$

with the boundary conditions defined at the boundary nodes of $e_1,e_2,\ldots,e_{14}$,

$$r_i(1)=0,\qquad r_i(e)=r_j(e),\quad i,j=1,2,\ldots,14,\ i\neq j,$$

together with the junction condition at the joint vertex,

$$\sum_{i=1}^{k}\varrho_i^{-1}r_i'(e)=0.$$

Overall, we consider the existence and stability of solutions to the following nonlinear boundary value problem on the benzoic acid graph:

$$\begin{cases} {}^{H}D^{\alpha}_{1^+}r_i(t)=\varrho_i^{\alpha}\,\phi_p\big(f_i(t,r_i(t),{}^{H}D^{\beta}_{1^+}r_i(t))\big), & t\in[1,e],\\ r_i(1)=0, & i=1,2,\ldots,14,\\ r_i(e)=r_j(e), & i,j=1,2,\ldots,14,\ i\neq j,\\ \sum_{i=1}^{k}\varrho_i^{-1}r_i'(e)=0, \end{cases}$$ (1.1)

where ${}^{H}D^{\alpha}_{1^+}$ and ${}^{H}D^{\beta}_{1^+}$ denote Hadamard fractional derivatives, $\alpha\in(1,2]$, $\beta\in(0,1]$, $f_i\in C([1,e]\times\mathbb{R}\times\mathbb{R})$, $\varrho_i$ is a real constant, and $\phi_p(s)=\operatorname{sgn}(s)|s|^{p-1}$. The existence and Hyers-Ulam stability of the solutions to the system (1.1) are discussed. Moreover, approximate graphs of the solutions are obtained.

It is also noteworthy that solutions of problem (1.1) can be interpreted in various applications of organic chemistry; more precisely, the solution on an arbitrary edge can describe quantities such as bond polarity, bond strength, and bond energy. The novelty of this paper lies in the integration of fractional differential equations with graph theory, using the formaldehyde graph as a specific case for numerical simulation and providing approximate solution graphs after iteration.

In this section, to facilitate the study of the problem, we collect several properties and lemmas of fractional calculus that form the indispensable premises for obtaining the main conclusions.

Definition 2.1. [2,20] The Hadamard fractional integral of order $\alpha$ for a function $g\in L^p[a,b]$, $0\leq a\leq t\leq b$, is defined as

$${}^{H}I^{\alpha}_{a^+}g(t)=\frac{1}{\Gamma(\alpha)}\int_a^t\left(\log\frac{t}{s}\right)^{\alpha-1}\frac{g(s)}{s}\,ds.$$

The Hadamard fractional integral is a particular case of the generalized Hattaf fractional integral introduced in [24].

Definition 2.2. [2,20] Let $[a,b]\subset\mathbb{R}$, $\delta=t\frac{d}{dt}$, and $AC^n_\delta[a,b]=\{g:[a,b]\to\mathbb{R}\,:\,\delta^{n-1}g(t)\in AC[a,b]\}$. The Hadamard derivative of fractional order $\alpha$ for a function $g\in AC^n_\delta[a,b]$ is defined as

$${}^{H}D^{\alpha}_{a^+}g(t)=\delta^n\big({}^{H}I^{\,n-\alpha}_{a^+}g\big)(t)=\frac{1}{\Gamma(n-\alpha)}\left(t\frac{d}{dt}\right)^n\int_a^t\left(\log\frac{t}{s}\right)^{n-\alpha-1}\frac{g(s)}{s}\,ds,$$

where $n-1<\alpha<n$, $n=[\alpha]+1$, $[\alpha]$ denotes the integer part of the real number $\alpha$, and $\log(\cdot)=\log_e(\cdot)$.
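The Hadamard integral of Definition 2.1 can be checked numerically against the standard identity ${}^{H}I^{\alpha}_{1^+}(\log t)^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+\alpha+1)}(\log t)^{\mu+\alpha}$. Below is a small sketch (the quadrature parameters are illustrative choices, not from the paper): after the substitution $u=\log s$ the weight $1/s$ disappears, and a midpoint rule suffices.

```python
import math

def hadamard_integral(g, alpha, t, n=200_000):
    """Midpoint-rule approximation of
    (1/Gamma(alpha)) * int_1^t (log(t/s))^(alpha-1) * g(s)/s ds,
    computed after the substitution u = log(s), du = ds/s."""
    L = math.log(t)
    h = L / n
    total = 0.0
    for j in range(n):
        u = (j + 0.5) * h                       # midpoint in u = log(s)
        total += (L - u) ** (alpha - 1.0) * g(math.exp(u))
    return total * h / math.gamma(alpha)

# Check  HI^alpha_{1+} (log t)^mu = Gamma(mu+1)/Gamma(mu+alpha+1) (log t)^(mu+alpha)
# at t = e, for alpha = 3/2 and mu = 1.
alpha, mu = 1.5, 1.0
numeric = hadamard_integral(lambda s: math.log(s) ** mu, alpha, math.e)
exact = math.gamma(mu + 1.0) / math.gamma(mu + alpha + 1.0)
assert abs(numeric - exact) < 1e-4
```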

Definition 2.3. [25] Completely continuous operator: a bounded linear operator $f$ acting from a Banach space $X$ into another space $Y$ that transforms weakly convergent sequences in $X$ into norm-convergent sequences in $Y$. Equivalently, an operator $f$ is completely continuous if it maps every relatively weakly compact subset of $X$ into a relatively compact subset of $Y$.

Compact operator: an operator $A$ defined on a subset $M$ of a topological vector space $X$, with values in a topological vector space $Y$, such that every bounded subset of $M$ is mapped into a pre-compact subset of $Y$. If, in addition, the operator $A$ is continuous on $M$, then it is called completely continuous on this set. When $X$ and $Y$ are Banach or, more generally, bornological spaces and the operator $A:X\to Y$ is linear, the concepts of a compact operator and a completely continuous operator coincide.

Lemma 2.4. [20] For $y\in AC^n_\delta[a,b]$, the following result holds:

$${}^{H}I^{\alpha}_{a^+}\big({}^{H}D^{\alpha}_{a^+}y\big)(t)=y(t)-\sum_{k=0}^{n-1}c_k\left(\log\frac{t}{a}\right)^k,$$

where $c_k\in\mathbb{R}$, $k=0,1,\ldots,n-1$.

Lemma 2.5. [21] For $p>2$ and $|x|,|y|\leq M$, we have

$$|\phi_p(x)-\phi_p(y)|\leq(p-1)M^{p-2}|x-y|.$$
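Lemma 2.5 is easy to spot-check numerically. The sketch below (the choices $p=3$, $M=2$ and the sample size are illustrative) draws random pairs $x,y\in[-M,M]$ and verifies $|\phi_p(x)-\phi_p(y)|\leq(p-1)M^{p-2}|x-y|$ for $\phi_p(s)=\operatorname{sgn}(s)|s|^{p-1}$.

```python
import random

def phi_p(s, p):
    """p-Laplacian-type function phi_p(s) = sgn(s) * |s|^(p-1)."""
    return (1 if s >= 0 else -1) * abs(s) ** (p - 1)

random.seed(0)
p, M = 3, 2.0
for _ in range(10_000):
    x, y = random.uniform(-M, M), random.uniform(-M, M)
    lhs = abs(phi_p(x, p) - phi_p(y, p))
    rhs = (p - 1) * M ** (p - 2) * abs(x - y)
    assert lhs <= rhs + 1e-12     # Lemma 2.5 with a tiny float tolerance
```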

Lemma 2.6. [4] (Krasnoselskii's fixed point theorem) Let $M$ be a closed, convex, and nonempty subset of a Banach space $X$, and let $T,S:M\to X$ be two operators such that

(ⅰ) $Tx+Sy\in M$ whenever $x,y\in M$;

(ⅱ) $T$ is a contraction mapping;

(ⅲ) $S$ is completely continuous in $M$.

Then $T+S$ has at least one fixed point in $M$.

Proof. For every $z\in S(M)$, condition (ⅰ) shows that the map $x\mapsto Tx+z$ takes $M$ into $M$. By (ⅱ) and the Banach contraction mapping principle, the equation $Tx+z=x$ has exactly one solution $t(z)$ in $M$. For any $z,\tilde{z}\in S(M)$, we have

$$T(t(z))+z=t(z),\qquad T(t(\tilde{z}))+\tilde{z}=t(\tilde{z}).$$

So we have

$$|t(z)-t(\tilde{z})|\leq|T(t(z))-T(t(\tilde{z}))|+|z-\tilde{z}|\leq\nu\,|t(z)-t(\tilde{z})|+|z-\tilde{z}|,\quad 0\leq\nu<1.$$

Thus, $|t(z)-t(\tilde{z})|\leq\frac{1}{1-\nu}|z-\tilde{z}|$, which shows that $t$ is continuous on $S(M)$. Since $S$ is completely continuous in $M$, the composition $t\circ S$ is completely continuous. According to the Schauder fixed point theorem, there exists $x\in M$ such that $t(S(x))=x$. So we have

$$T(t(S(x)))+S(x)=t(S(x)),\qquad\text{i.e.,}\qquad Tx+Sx=x.$$

Lemma 2.7. Let $\zeta_i\in AC([1,e],\mathbb{R})$, $i=1,2,\ldots,14$; then the solution of the fractional differential system

$$\begin{cases} {}^{H}D^{\alpha}_{1^+}r_i(t)=\zeta_i(t), & t\in[1,e],\\ r_i(1)=0, & i=1,2,\ldots,14,\\ r_i(e)=r_j(e), & i,j=1,2,\ldots,14,\ i\neq j,\\ \sum_{i=1}^{k}\varrho_i^{-1}r_i'(e)=0, \end{cases}$$ (2.1)

is given by

$$\begin{aligned} r_i(t)={}&\frac{1}{\Gamma(\alpha)}\int_1^t\left(\log\frac{t}{s}\right)^{\alpha-1}\frac{\zeta_i(s)}{s}\,ds\\ &+\log t\Bigg[-\frac{1}{\Gamma(\alpha-1)}\sum_{j=1}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}\frac{\zeta_j(s)}{s}\,ds\\ &\qquad+\frac{1}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\frac{\zeta_j(s)-\zeta_i(s)}{s}\,ds\Bigg]. \end{aligned}$$ (2.2)

    Proof. By Lemma 2.4, we have

$$r_i(t)={}^{H}I^{\alpha}_{1^+}\zeta_i(t)+c_i^{(1)}+c_i^{(2)}\log t,\quad i=1,2,\ldots,14,$$

where $c_i^{(1)},c_i^{(2)}$ are constants. The boundary condition $r_i(1)=0$ gives $c_i^{(1)}=0$ for $i=1,2,\ldots,14$.

    Hence,

$$r_i(t)={}^{H}I^{\alpha}_{1^+}\zeta_i(t)+c_i^{(2)}\log t=\frac{1}{\Gamma(\alpha)}\int_1^t\left(\log\frac{t}{s}\right)^{\alpha-1}\frac{\zeta_i(s)}{s}\,ds+c_i^{(2)}\log t,\quad i=1,2,\ldots,14.$$ (2.3)

    Also

$$r_i'(t)=\frac{1}{\Gamma(\alpha-1)}\int_1^t\frac{1}{t}\left(\log\frac{t}{s}\right)^{\alpha-2}\frac{\zeta_i(s)}{s}\,ds+\frac{1}{t}\,c_i^{(2)}.$$

Now, the boundary conditions $r_i(e)=r_j(e)$ and $\sum_{i=1}^{k}\varrho_i^{-1}r_i'(e)=0$ imply that the constants $c_i^{(2)}$ must satisfy

$$\frac{1}{\Gamma(\alpha)}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\frac{\zeta_i(s)}{s}\,ds+c_i^{(2)}=\frac{1}{\Gamma(\alpha)}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\frac{\zeta_j(s)}{s}\,ds+c_j^{(2)},$$ (2.4)

$$\sum_{i=1}^{k}\varrho_i^{-1}\left(\frac{1}{\Gamma(\alpha-1)}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}\frac{\zeta_i(s)}{s}\,ds+c_i^{(2)}\right)=0.$$ (2.5)

On solving Eqs (2.4) and (2.5), we have

$$\sum_{j=1}^{k}\varrho_j^{-1}\frac{1}{\Gamma(\alpha-1)}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}\frac{\zeta_j(s)}{s}\,ds+\varrho_i^{-1}c_i^{(2)}=\sum_{j=1,\,j\neq i}^{k}\varrho_j^{-1}\left[\frac{1}{\Gamma(\alpha)}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\frac{\zeta_j(s)}{s}\,ds-\frac{1}{\Gamma(\alpha)}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\frac{\zeta_i(s)}{s}\,ds-c_i^{(2)}\right],$$

which implies

$$\sum_{j=1}^{k}\varrho_j^{-1}c_i^{(2)}=-\sum_{j=1}^{k}\varrho_j^{-1}\frac{1}{\Gamma(\alpha-1)}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}\frac{\zeta_j(s)}{s}\,ds+\sum_{j=1,\,j\neq i}^{k}\varrho_j^{-1}\frac{1}{\Gamma(\alpha)}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\frac{\zeta_j(s)-\zeta_i(s)}{s}\,ds.$$

Hence, we get

$$c_i^{(2)}=-\frac{1}{\Gamma(\alpha-1)}\sum_{j=1}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}\frac{\zeta_j(s)}{s}\,ds+\frac{1}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\frac{\zeta_j(s)-\zeta_i(s)}{s}\,ds.$$ (2.6)

Hence, inserting the values of $c_i^{(2)}$ into (2.3), we get the solution (2.2). This completes the proof.
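The algebra behind $c_i^{(2)}$ can be sanity-checked numerically. In the sketch below (not the paper's code; the numbers are hypothetical), the two integrals are abbreviated as given values $A_j=\frac{1}{\Gamma(\alpha-1)}\int_1^e(\log\frac{e}{s})^{\alpha-2}\frac{\zeta_j(s)}{s}ds$ and $B_j=\frac{1}{\Gamma(\alpha)}\int_1^e(\log\frac{e}{s})^{\alpha-1}\frac{\zeta_j(s)}{s}ds$; conditions (2.4)-(2.5) then read $B_i+c_i=B_j+c_j$ and $\sum_i\varrho_i^{-1}(A_i+c_i)=0$, and we compare a direct solve against the closed form (2.6).

```python
rho = [1/3, 1/2, 2/3]        # illustrative rho_i > 0
A = [0.7, -0.2, 1.1]         # hypothetical integral values A_j
B = [0.3, 0.9, -0.5]         # hypothetical integral values B_j
k = len(rho)
w = [1/r for r in rho]       # rho_j^(-1)
W = sum(w)

# Direct solve: (2.4) gives c_i = c_1 + B_1 - B_i; the flux condition (2.5)
# then fixes c_1.
c1 = sum(w[i] * (B[i] - B[0] - A[i]) for i in range(k)) / W
c_direct = [c1 + B[0] - B[i] for i in range(k)]

# Closed form (2.6): c_i = -sum_j (w_j/W) A_j + sum_{j != i} (w_j/W)(B_j - B_i).
c_closed = [
    -sum(w[j] * A[j] for j in range(k)) / W
    + sum(w[j] * (B[j] - B[i]) for j in range(k) if j != i) / W
    for i in range(k)
]

assert all(abs(a - b) < 1e-12 for a, b in zip(c_direct, c_closed))
# The flux condition sum_i rho_i^(-1) (A_i + c_i) = 0 also holds:
assert abs(sum(w[i] * (A[i] + c_direct[i]) for i in range(k))) < 1e-12
```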

    In this section, we discuss the existence and uniqueness of solutions of system (1.1) by using fixed point theory.

We define the space $X=\{r: r\in C([1,e],\mathbb{R}),\ {}^{H}D^{\beta}_{1^+}r\in C([1,e],\mathbb{R})\}$ with the norm

$$\|r\|_X=\|r\|+\|{}^{H}D^{\beta}_{1^+}r\|=\sup_{t\in[1,e]}|r(t)|+\sup_{t\in[1,e]}|{}^{H}D^{\beta}_{1^+}r(t)|.$$

Then $(X,\|\cdot\|_X)$ is a Banach space, and accordingly the product space $(X^k=X_1\times X_2\times\cdots\times X_{14},\ \|\cdot\|_{X^k})$ is a Banach space with norm

$$\|r\|_{X^k}=\|(r_1,r_2,\ldots,r_{14})\|_{X^k}=\sum_{i=1}^{k}\|r_i\|_X,\quad (r_1,r_2,\ldots,r_k)\in X^k.$$

In view of Lemma 2.7, we define the operator $T:X^k\to X^k$ by

$$T(r_1,r_2,\ldots,r_k)(t):=\big(T_1(r_1,r_2,\ldots,r_k)(t),\ldots,T_k(r_1,r_2,\ldots,r_k)(t)\big),$$

where

$$\begin{aligned} T_i(r_1,r_2,\ldots,r_k)(t)={}&\frac{\varrho_i^{\alpha}}{\Gamma(\alpha)}\int_1^t\left(\log\frac{t}{s}\right)^{\alpha-1}\phi_p\big(g_i(s,r_i(s),{}^{H}D^{\beta}_{1^+}r_i(s))\big)\,ds\\ &-\frac{\log t}{\Gamma(\alpha-1)}\sum_{j=1}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\varrho_j^{\alpha}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}\phi_p\big(g_j(s,r_j(s),{}^{H}D^{\beta}_{1^+}r_j(s))\big)\,ds\\ &+\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\varrho_j^{\alpha}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\phi_p\big(g_j(s,r_j(s),{}^{H}D^{\beta}_{1^+}r_j(s))\big)\,ds\\ &-\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\varrho_i^{\alpha}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\phi_p\big(g_i(s,r_i(s),{}^{H}D^{\beta}_{1^+}r_i(s))\big)\,ds, \end{aligned}$$ (3.1)

where $\phi_p\!\left(\frac{f_i(s,r_i(s),{}^{H}D^{\beta}_{1^+}r_i(s))}{s}\right)=\phi_p\big(g_i(s,r_i(s),{}^{H}D^{\beta}_{1^+}r_i(s))\big)$, i.e., $g_i(s,\cdot,\cdot)=\frac{1}{s}f_i(s,\cdot,\cdot)$.

    Assume that the following conditions hold:

(H1) $g_i:[1,e]\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}$, $i=1,2,\ldots,14$, are continuous functions, and there exist nonnegative functions $u_i(t)\in C[1,e]$ such that

$$|g_i(t,x,y)-g_i(t,x_1,y_1)|\leq u_i(t)\big(|x-x_1|+|y-y_1|\big),$$

where $t\in[1,e]$ and $(x,y),(x_1,y_1)\in\mathbb{R}^2$;

(H2) $\iota_i=\sup_{t\in[1,e]}|u_i(t)|$, $i=1,2,\ldots,14$;

(H3) there exists $Q_i>0$ such that

$$|g_i(t,x,y)|\leq Q_i,\quad t\in[1,e],\ (x,y)\in\mathbb{R}\times\mathbb{R},\ i=1,2,\ldots,14;$$

(H4) $\sup_{1\leq t\leq e}|g_i(t,0,0)|=\kappa<\infty$, $i=1,2,\ldots,14$.

For computational convenience, we also set the following quantities:

$$\chi_i=e(p-1)Q_i^{p-2}\big(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta}\big)\left[\frac{1}{\Gamma(\alpha)}+\frac{2}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha-\beta+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right]+e\sum_{j=1,\,j\neq i}^{k}\big(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta}\big)(p-1)Q_j^{p-2}\left[\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right],$$ (3.2)

$$\Upsilon_i=e(p-1)Q_i^{p-2}\big(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta}\big)\left[\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right]+e\sum_{j=1,\,j\neq i}^{k}\big(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta}\big)(p-1)Q_j^{p-2}\left[\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right].$$ (3.3)

Theorem 3.1. Assume that (H1) and (H2) hold; then the fractional differential system (1.1) has a unique solution on $[1,e]$ if

$$\left(\sum_{i=1}^{k}\chi_i\right)\left(\sum_{i=1}^{k}\iota_i\right)<1,$$

where the $\chi_i$, $i=1,2,\ldots,14$, are given by Eq (3.2).
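The contraction condition of Theorem 3.1 is straightforward to evaluate for concrete data. The sketch below implements $\chi_i$ from Eq (3.2) with `math.gamma` and checks $(\sum_i\chi_i)(\sum_i\iota_i)<1$ for hypothetical parameters (the values of $\varrho_i$, $Q_i$, $\iota_i$ are illustrative, not taken from the paper).

```python
import math

def chi(i, alpha, beta, p, rho, Q, k):
    """chi_i from Eq (3.2) (illustrative implementation)."""
    g = math.gamma
    bracket_i = (1/g(alpha) + 2/g(alpha + 1) + 1/g(alpha - beta + 1)
                 + 1/(g(alpha)*g(2 - beta)) + 1/(g(alpha + 1)*g(2 - beta)))
    bracket_j = (1/g(alpha) + 1/g(alpha + 1)
                 + 1/(g(alpha)*g(2 - beta)) + 1/(g(alpha + 1)*g(2 - beta)))
    e = math.e
    own = e*(p - 1)*Q[i]**(p - 2)*(rho[i]**alpha + rho[i]**(alpha - beta))*bracket_i
    rest = e*sum((rho[j]**alpha + rho[j]**(alpha - beta))*(p - 1)*Q[j]**(p - 2)
                 for j in range(k) if j != i)*bracket_j
    return own + rest

# Hypothetical data: k = 3, alpha = 7/4, beta = 3/4, p = 3, Q_i = 1,
# rho_i = 1/2, and small Lipschitz suprema iota_i = 1e-3.
alpha, beta, p, k = 7/4, 3/4, 3, 3
rho, Q, iota = [0.5]*3, [1.0]*3, [1e-3]*3
total = sum(chi(i, alpha, beta, p, rho, Q, k) for i in range(k)) * sum(iota)
assert 0 < total < 1      # Theorem 3.1's contraction condition holds here
```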

Proof. Let $r=(r_1,r_2,\ldots,r_{14})$, $v=(v_1,v_2,\ldots,v_{14})\in X^k$ and $t\in[1,e]$. For brevity, write

$$G_j(s)=\big|\phi_p\big(g_j(s,r_j(s),{}^{H}D^{\beta}_{1^+}r_j(s))\big)-\phi_p\big(g_j(s,v_j(s),{}^{H}D^{\beta}_{1^+}v_j(s))\big)\big|,\qquad \Delta_j=\|r_j-v_j\|+\|{}^{H}D^{\beta}_{1^+}r_j-{}^{H}D^{\beta}_{1^+}v_j\|.$$

From (3.1), we have

$$|T_ir(t)-T_iv(t)|\leq\frac{\varrho_i^{\alpha}}{\Gamma(\alpha)}\int_1^t\Big(\log\frac{t}{s}\Big)^{\alpha-1}G_i(s)\,ds+\frac{\log t}{\Gamma(\alpha-1)}\sum_{j=1}^{k}\Big(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\Big)\varrho_j^{\alpha}\int_1^e\Big(\log\frac{e}{s}\Big)^{\alpha-2}G_j(s)\,ds+\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\Big(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\Big)\varrho_j^{\alpha}\int_1^e\Big(\log\frac{e}{s}\Big)^{\alpha-1}G_j(s)\,ds+\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\Big(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\Big)\varrho_i^{\alpha}\int_1^e\Big(\log\frac{e}{s}\Big)^{\alpha-1}G_i(s)\,ds.$$

Using Lemma 2.5, (H1) and (H2), together with $t\in[1,e]$ and $\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}<1$ for $j=1,2,\ldots,k$, we obtain

$$|T_ir(t)-T_iv(t)|\leq\frac{2e(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta})}{\Gamma(\alpha+1)}(p-1)Q_i^{p-2}\iota_i\Delta_i+e\sum_{j=1}^{k}\frac{\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta}}{\Gamma(\alpha)}(p-1)Q_j^{p-2}\iota_j\Delta_j+e\sum_{j=1,\,j\neq i}^{k}\frac{\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta}}{\Gamma(\alpha+1)}(p-1)Q_j^{p-2}\iota_j\Delta_j.$$

Hence,

$$\|T_ir-T_iv\|\leq e\Big(\frac{1}{\Gamma(\alpha)}+\frac{2}{\Gamma(\alpha+1)}\Big)(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta})(p-1)Q_i^{p-2}\iota_i\Delta_i+e\Big(\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}\Big)\sum_{j=1,\,j\neq i}^{k}(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta})(p-1)Q_j^{p-2}\iota_j\Delta_j.$$ (3.4)

By the formula in reference [3],

$${}^{H}D^{\beta}_{1^+}(\log t)^{\vartheta-1}=\frac{\Gamma(\vartheta)}{\Gamma(\vartheta-\beta)}(\log t)^{\vartheta-\beta-1},\quad\vartheta>1,$$

we have, with $G_j(s)$ again denoting $\big|\phi_p\big(g_j(s,r_j(s),{}^{H}D^{\beta}_{1^+}r_j(s))\big)-\phi_p\big(g_j(s,v_j(s),{}^{H}D^{\beta}_{1^+}v_j(s))\big)\big|$ and $\Delta_j=\|r_j-v_j\|+\|{}^{H}D^{\beta}_{1^+}r_j-{}^{H}D^{\beta}_{1^+}v_j\|$,

$$|{}^{H}D^{\beta}_{1^+}T_ir(t)-{}^{H}D^{\beta}_{1^+}T_iv(t)|\leq\frac{\varrho_i^{\alpha}}{\Gamma(\alpha-\beta)}\int_1^t\Big(\log\frac{t}{s}\Big)^{\alpha-\beta-1}G_i(s)\,ds+\frac{(\log t)^{1-\beta}}{\Gamma(\alpha-1)\Gamma(2-\beta)}\sum_{j=1}^{k}\Big(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\Big)\varrho_j^{\alpha}\int_1^e\Big(\log\frac{e}{s}\Big)^{\alpha-2}G_j(s)\,ds+\frac{(\log t)^{1-\beta}}{\Gamma(\alpha)\Gamma(2-\beta)}\sum_{j=1,\,j\neq i}^{k}\Big(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\Big)\varrho_j^{\alpha}\int_1^e\Big(\log\frac{e}{s}\Big)^{\alpha-1}G_j(s)\,ds+\frac{(\log t)^{1-\beta}}{\Gamma(\alpha)\Gamma(2-\beta)}\sum_{j=1,\,j\neq i}^{k}\Big(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\Big)\varrho_i^{\alpha}\int_1^e\Big(\log\frac{e}{s}\Big)^{\alpha-1}G_i(s)\,ds.$$

Using Lemma 2.5, (H1) and (H2), $(\log t)^{1-\beta}\leq 1$ and $\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}<1$ for $j=1,2,\ldots,k$, we obtain

$$\|{}^{H}D^{\beta}_{1^+}T_ir-{}^{H}D^{\beta}_{1^+}T_iv\|\leq e\Big(\frac{1}{\Gamma(\alpha-\beta+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\Big)(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta})(p-1)Q_i^{p-2}\iota_i\Delta_i+e\Big(\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\Big)\sum_{j=1,\,j\neq i}^{k}(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta})(p-1)Q_j^{p-2}\iota_j\Delta_j.$$ (3.5)

From (3.4) and (3.5), we have

$$\|T_ir-T_iv\|+\|{}^{H}D^{\beta}_{1^+}T_ir-{}^{H}D^{\beta}_{1^+}T_iv\|\leq e\Big(\frac{1}{\Gamma(\alpha)}+\frac{2}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha-\beta+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\Big)(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta})(p-1)Q_i^{p-2}\iota_i\Delta_i+e\Big(\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\Big)\sum_{j=1,\,j\neq i}^{k}(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta})(p-1)Q_j^{p-2}\iota_j\Delta_j.$$

Since $\Delta_j\leq\|r-v\|_{X^k}$ for every $j$ and $\iota_i,\iota_j\leq\sum_{i=1}^{k}\iota_i$, this yields

$$\|T_ir-T_iv\|_X\leq\chi_i\left(\sum_{i=1}^{k}\iota_i\right)\|r-v\|_{X^k},$$ (3.6)

where the $\chi_i$, $i=1,2,\ldots,k$, are given by (3.2).

From Eq (3.6), it follows that

$$\|Tr-Tv\|_{X^k}=\sum_{i=1}^{k}\|T_ir-T_iv\|_X\leq\left(\sum_{i=1}^{k}\chi_i\right)\left(\sum_{i=1}^{k}\iota_i\right)\|r-v\|_{X^k}.$$

Since

$$\left(\sum_{i=1}^{k}\chi_i\right)\left(\sum_{i=1}^{k}\iota_i\right)<1,$$

we obtain that $T$ is a contraction map. According to Banach's contraction principle, the original system (1.1) has a unique solution on $[1,e]$.

Theorem 3.2. Assume that (H1) and (H2) hold; then system (2.1) has at least one solution on $[1,e]$ if

$$\left(\sum_{i=1}^{k}\Upsilon_i\right)\left(\sum_{i=1}^{k}\iota_i\right)<1,$$

where the $\Upsilon_i$, $i=1,2,\ldots,14$, are given by Eq (3.3).

In view of Theorem 3.1 and Krasnoselskii's fixed point theorem, we split the operator $T$ as

$$T\mu=\varpi_1\mu+\varpi_2\mu,$$

where

$$\varpi_1\mu=\frac{\varrho_i^{\alpha}}{\Gamma(\alpha)}\int_1^t\Big(\log\frac{t}{s}\Big)^{\alpha-1}\phi_p\big(g_i(s,\mu_i(s),{}^{H}D^{\beta}_{1^+}\mu_i(s))\big)\,ds,$$

$$\begin{aligned}\varpi_2\mu={}&-\frac{\log t}{\Gamma(\alpha-1)}\sum_{j=1}^{k}\Big(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\Big)\varrho_j^{\alpha}\int_1^e\Big(\log\frac{e}{s}\Big)^{\alpha-2}\phi_p\big(g_j(s,\mu_j(s),{}^{H}D^{\beta}_{1^+}\mu_j(s))\big)\,ds\\&+\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\Big(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\Big)\varrho_j^{\alpha}\int_1^e\Big(\log\frac{e}{s}\Big)^{\alpha-1}\phi_p\big(g_j(s,\mu_j(s),{}^{H}D^{\beta}_{1^+}\mu_j(s))\big)\,ds\\&-\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\Big(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\Big)\varrho_i^{\alpha}\int_1^e\Big(\log\frac{e}{s}\Big)^{\alpha-1}\phi_p\big(g_i(s,\mu_i(s),{}^{H}D^{\beta}_{1^+}\mu_i(s))\big)\,ds.\end{aligned}$$

Proof. For any $\delta=(\delta_1,\delta_2,\ldots,\delta_{14})$, $\mu=(\mu_1,\mu_2,\ldots,\mu_{14})\in X^k$, set $\Delta_j=\|\delta_j-\mu_j\|+\|{}^{H}D^{\beta}_{1^+}\delta_j-{}^{H}D^{\beta}_{1^+}\mu_j\|$. Arguing exactly as in the proof of Theorem 3.1, now for the three integral terms of $\varpi_2$ (which contain no $\frac{1}{\Gamma(\alpha)}\int_1^t$ term), we obtain

$$\|\varpi_2\delta-\varpi_2\mu\|\leq e\Big(\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}\Big)(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta})(p-1)Q_i^{p-2}\iota_i\Delta_i+e\Big(\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}\Big)\sum_{j=1,\,j\neq i}^{k}(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta})(p-1)Q_j^{p-2}\iota_j\Delta_j$$

and, in a similar way,

$$\|{}^{H}D^{\beta}_{1^+}\varpi_2\delta-{}^{H}D^{\beta}_{1^+}\varpi_2\mu\|\leq e\Big(\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\Big)(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta})(p-1)Q_i^{p-2}\iota_i\Delta_i+e\Big(\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\Big)\sum_{j=1,\,j\neq i}^{k}(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta})(p-1)Q_j^{p-2}\iota_j\Delta_j.$$

Adding the two estimates and using $\Delta_j\leq\|\delta-\mu\|_{X^k}$ and $\iota_i,\iota_j\leq\sum_{i=1}^{k}\iota_i$, we get

$$\|\varpi_2\delta-\varpi_2\mu\|_X\leq\Upsilon_i\left(\sum_{i=1}^{k}\iota_i\right)\|\delta-\mu\|_{X^k}.$$ (3.7)

Hence,

$$\|\varpi_2\delta-\varpi_2\mu\|_{X^k}=\sum_{i=1}^{k}\|\varpi_2\delta-\varpi_2\mu\|_X\leq\left(\sum_{i=1}^{k}\Upsilon_i\right)\left(\sum_{i=1}^{k}\iota_i\right)\|\delta-\mu\|_{X^k}.$$ (3.8)

It follows from $\left(\sum_{i=1}^{k}\Upsilon_i\right)\left(\sum_{i=1}^{k}\iota_i\right)<1$ that $\varpi_2$ is a contraction operator. In addition, we shall prove that $\varpi_1$ is continuous and compact. For any $\delta=(\delta_1,\delta_2,\ldots,\delta_{14})\in X^k$, using Lemma 2.5 and (H1)-(H4), we have

$$\begin{aligned}\|\varpi_1\delta\|\leq{}&\frac{\varrho_i^{\alpha}}{\Gamma(\alpha)}\int_1^t\Big(\log\frac{t}{s}\Big)^{\alpha-1}\big|\phi_p\big(g_i(s,\delta_i(s),{}^{H}D^{\beta}_{1^+}\delta_i(s))\big)-\phi_p\big(g_i(s,0,0)\big)\big|\,ds+\frac{\varrho_i^{\alpha}}{\Gamma(\alpha)}\int_1^t\Big(\log\frac{t}{s}\Big)^{\alpha-1}\big|\phi_p\big(g_i(s,0,0)\big)\big|\,ds\\ \leq{}&\frac{e\varrho_i^{\alpha}}{\Gamma(\alpha+1)}(p-1)Q_i^{p-2}\iota_i\big(\|\delta_i\|+\|{}^{H}D^{\beta}_{1^+}\delta_i\|\big)+\frac{e\varrho_i^{\alpha}}{\Gamma(\alpha+1)}|\phi_p(\kappa)|=\frac{e\varrho_i^{\alpha}}{\Gamma(\alpha+1)}(p-1)Q_i^{p-2}\iota_i\|\delta_i\|_X+\frac{e\varrho_i^{\alpha}}{\Gamma(\alpha+1)}|\phi_p(\kappa)|.\end{aligned}$$

Similarly,

$$\|{}^{H}D^{\beta}_{1^+}\varpi_1\delta\|\leq\frac{e\varrho_i^{\alpha}}{\Gamma(\alpha-\beta+1)}(p-1)Q_i^{p-2}\iota_i\|\delta_i\|_X+\frac{e\varrho_i^{\alpha}}{\Gamma(\alpha-\beta+1)}|\phi_p(\kappa)|,$$

which implies

$$\|\varpi_1\delta\|_X\leq e\varrho_i^{\alpha}\left(\frac{1}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha-\beta+1)}\right)\big((p-1)Q_i^{p-2}\iota_i\|\delta_i\|_X+|\phi_p(\kappa)|\big).$$ (3.9)

Hence,

$$\|\varpi_1\delta\|_{X^k}\leq e\left(\frac{1}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha-\beta+1)}\right)\sum_{i=1}^{k}\varrho_i^{\alpha}\big((p-1)Q_i^{p-2}\iota_i\|\delta_i\|_X+|\phi_p(\kappa)|\big)<\infty.$$ (3.10)

This shows that $\varpi_1$ is bounded. In addition, we will prove that $\varpi_1$ is equicontinuous. Let $t_1,t_2\in[1,e]$ with $t_1<t_2$; we have

$$\begin{aligned}|\varpi_1\delta(t_2)-\varpi_1\delta(t_1)|\leq{}&\frac{\varrho_i^{\alpha}}{\Gamma(\alpha)}\int_1^{t_1}\left(\Big(\log\frac{t_2}{s}\Big)^{\alpha-1}-\Big(\log\frac{t_1}{s}\Big)^{\alpha-1}\right)\big|\phi_p\big(g_i(s,\delta_i(s),{}^{H}D^{\beta}_{1^+}\delta_i(s))\big)\big|\,ds+\frac{\varrho_i^{\alpha}}{\Gamma(\alpha)}\int_{t_1}^{t_2}\Big(\log\frac{t_2}{s}\Big)^{\alpha-1}\big|\phi_p\big(g_i(s,\delta_i(s),{}^{H}D^{\beta}_{1^+}\delta_i(s))\big)\big|\,ds\\ \leq{}&\frac{e\varrho_i^{\alpha}\phi_p(Q_i)}{\Gamma(\alpha+1)}\big((\log t_2)^{\alpha}-(\log t_1)^{\alpha}\big)+\frac{\varrho_i^{\alpha}\phi_p(Q_i)}{\Gamma(\alpha)}\int_{t_1}^{t_2}\Big(\log\frac{t_2}{s}\Big)^{\alpha-1}ds,\end{aligned}$$ (3.11)

and

$$\begin{aligned}|{}^{H}D^{\beta}_{1^+}\varpi_1\delta(t_2)-{}^{H}D^{\beta}_{1^+}\varpi_1\delta(t_1)|\leq{}&\frac{\varrho_i^{\alpha}}{\Gamma(\alpha-\beta)}\int_1^{t_1}\left(\Big(\log\frac{t_2}{s}\Big)^{\alpha-\beta-1}-\Big(\log\frac{t_1}{s}\Big)^{\alpha-\beta-1}\right)\big|\phi_p\big(g_i(s,\delta_i(s),{}^{H}D^{\beta}_{1^+}\delta_i(s))\big)\big|\,ds+\frac{\varrho_i^{\alpha}}{\Gamma(\alpha-\beta)}\int_{t_1}^{t_2}\Big(\log\frac{t_2}{s}\Big)^{\alpha-\beta-1}\big|\phi_p\big(g_i(s,\delta_i(s),{}^{H}D^{\beta}_{1^+}\delta_i(s))\big)\big|\,ds\\ \leq{}&\frac{e\varrho_i^{\alpha}\phi_p(Q_i)}{\Gamma(\alpha-\beta+1)}\big((\log t_2)^{\alpha-\beta}-(\log t_1)^{\alpha-\beta}\big)+\frac{\varrho_i^{\alpha}\phi_p(Q_i)}{\Gamma(\alpha-\beta+1)}\int_{t_1}^{t_2}\Big(\log\frac{t_2}{s}\Big)^{\alpha-\beta-1}ds.\end{aligned}$$ (3.12)

Therefore, from (3.11) and (3.12), we obtain

$$\begin{aligned}\|\varpi_1\delta(t_2)-\varpi_1\delta(t_1)\|_X\leq{}&\frac{e\varrho_i^{\alpha}\phi_p(Q_i)}{\Gamma(\alpha-\beta+1)}\big((\log t_2)^{\alpha-\beta}-(\log t_1)^{\alpha-\beta}\big)+\frac{\varrho_i^{\alpha}\phi_p(Q_i)}{\Gamma(\alpha-\beta+1)}\int_{t_1}^{t_2}\Big(\log\frac{t_2}{s}\Big)^{\alpha-\beta-1}ds\\&+\frac{e\varrho_i^{\alpha}\phi_p(Q_i)}{\Gamma(\alpha+1)}\big((\log t_2)^{\alpha}-(\log t_1)^{\alpha}\big)+\frac{\varrho_i^{\alpha}\phi_p(Q_i)}{\Gamma(\alpha)}\int_{t_1}^{t_2}\Big(\log\frac{t_2}{s}\Big)^{\alpha-1}ds.\end{aligned}$$ (3.13)

This indicates that $\|\varpi_1\delta(t_2)-\varpi_1\delta(t_1)\|_X\to 0$ as $t_2\to t_1$; thus $\|\varpi_1\delta(t_2)-\varpi_1\delta(t_1)\|_{X^k}\to 0$ as $t_2\to t_1$. By the Arzelà-Ascoli theorem, we obtain that $\varpi_1$ is completely continuous. According to Lemma 2.6, system (2.1) has at least one solution on $[1,e]$, which means that the original system (1.1) has at least one solution on $[1,e]$.

Let $\varepsilon_i>0$. Consider the following inequality:

$$\big|{}^{H}D^{\alpha}_{1^+}r_i(t)-\varrho_i^{\alpha}\phi_p\big(f_i(t,r_i(t),{}^{H}D^{\beta}_{1^+}r_i(t))\big)\big|\leq\varepsilon_i,\quad t\in[1,e].$$ (4.1)

Definition 4.1. [16] The fractional differential system (1.1) is called Hyers-Ulam stable if there is a constant $c_{f_1,f_2,\ldots,f_k}>0$ such that, for each $\varepsilon=\varepsilon(\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_k)>0$ and for each solution $r=(r_1,r_2,\ldots,r_k)\in X^k$ of the inequality (4.1), there exists a solution $\bar{r}=(\bar r_1,\bar r_2,\ldots,\bar r_k)\in X^k$ of (1.1) with

$$\|r-\bar r\|_{X^k}\leq c_{f_1,f_2,\ldots,f_k}\,\varepsilon,\quad t\in[1,e].$$

Definition 4.2. [16] The fractional differential system (1.1) is called generalized Hyers-Ulam stable if there exists a function $\psi_{f_1,f_2,\ldots,f_k}\in C(\mathbb{R}^+,\mathbb{R}^+)$ with $\psi_{f_1,f_2,\ldots,f_k}(0)=0$ such that, for each $\varepsilon=\varepsilon(\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_k)>0$ and for each solution $r=(r_1,r_2,\ldots,r_k)\in X^k$ of the inequality (4.1), there exists a solution $\bar r=(\bar r_1,\bar r_2,\ldots,\bar r_k)\in X^k$ of (1.1) with

$$\|r-\bar r\|_{X^k}\leq\psi_{f_1,f_2,\ldots,f_k}(\varepsilon),\quad t\in[1,e].$$

Remark 4.3. Let $r=(r_1,r_2,\ldots,r_k)\in X^k$, $k=14$, be a solution of the inequality (4.1). Then there are functions $\varphi_i:[1,e]\to\mathbb{R}^+$, depending on $r_i$ respectively, such that

(ⅰ) $|\varphi_i(t)|\leq\varepsilon_i$, $t\in[1,e]$, $i=1,2,\ldots,14$;

(ⅱ) ${}^{H}D^{\alpha}_{1^+}r_i(t)=\varrho_i^{\alpha}\phi_p\big(f_i(t,r_i(t),{}^{H}D^{\beta}_{1^+}r_i(t))\big)+\varphi_i(t)$, $t\in[1,e]$, $i=1,2,\ldots,14$.

Remark 4.4. It is worth noting that Hyers-Ulam stability differs from asymptotic stability. Asymptotic stability means that the system gradually returns to equilibrium after being disturbed; if a Lyapunov function satisfying certain conditions exists, the system is asymptotically stable. This notion emphasizes the long-time behavior of the system: as time goes on, the state returns to the equilibrium. In Hyers-Ulam stability, by contrast, the error remains bounded, proportional to the size of the disturbance.

Lemma 4.5. Suppose $r=(r_1,r_2,\ldots,r_k)\in X^k$ is a solution of the inequality (4.1). Then the following inequalities hold:

$$|r_i(t)-r_i^{*}(t)|\leq\varepsilon_i e\left(\frac{1}{\Gamma(\alpha)}+\frac{2}{\Gamma(\alpha+1)}\right)+e\sum_{j=1,\,j\neq i}^{k}\varepsilon_j\left(\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}\right),$$

$$|{}^{H}D^{\beta}_{1^+}r_i(t)-{}^{H}D^{\beta}_{1^+}r_i^{*}(t)|\leq e\sum_{j=1,\,j\neq i}^{k}\varepsilon_j\left(\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right)+\varepsilon_i e\left(\frac{1}{\Gamma(\alpha-\beta+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right),$$

    where

$$\begin{aligned} r_i^{*}(t)={}&\frac{1}{\Gamma(\alpha)}\int_1^t\left(\log\frac{t}{s}\right)^{\alpha-1}z_i(s)\,ds-\frac{\log t}{\Gamma(\alpha-1)}\sum_{j=1}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}z_j(s)\,ds\\ &+\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}z_j(s)\,ds-\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}z_i(s)\,ds, \end{aligned}$$

$$\begin{aligned} {}^{H}D^{\beta}_{1^+}r_i^{*}(t)={}&\frac{1}{\Gamma(\alpha-\beta)}\int_1^t\left(\log\frac{t}{s}\right)^{\alpha-\beta-1}z_i(s)\,ds-\frac{(\log t)^{1-\beta}}{\Gamma(\alpha-1)\Gamma(2-\beta)}\sum_{j=1}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}z_j(s)\,ds\\ &+\frac{(\log t)^{1-\beta}}{\Gamma(\alpha)\Gamma(2-\beta)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}z_j(s)\,ds-\frac{(\log t)^{1-\beta}}{\Gamma(\alpha)\Gamma(2-\beta)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}z_i(s)\,ds, \end{aligned}$$

    and here

$$z_i(s)=\frac{h_i(s)}{s},\qquad h_i(s)=\varrho_i^{\alpha}\phi_p\big(f_i(s,r_i(s),{}^{H}D^{\beta}_{1^+}r_i(s))\big),\quad i=1,2,\ldots,14.$$

Proof. From Remark 4.3, we have

$$\begin{cases} {}^{H}D^{\alpha}_{1^+}r_i(t)=\varrho_i^{\alpha}\phi_p\big(f_i(t,r_i(t),{}^{H}D^{\beta}_{1^+}r_i(t))\big)+\varphi_i(t), & t\in[1,e],\\ r_i(1)=0, & i=1,2,\ldots,14,\\ r_i(e)=r_j(e), & i,j=1,2,\ldots,14,\ i\neq j,\\ \sum_{i=1}^{k}\varrho_i^{-1}r_i'(e)=0. \end{cases}$$ (4.2)

    By Lemma 2.7, the solution of (4.2) can be given in the following form:

$$\begin{aligned} r_i(t)={}&\frac{1}{\Gamma(\alpha)}\int_1^t\left(\log\frac{t}{s}\right)^{\alpha-1}\Big(z_i(s)+\frac{\varphi_i(s)}{s}\Big)\,ds-\frac{\log t}{\Gamma(\alpha-1)}\sum_{j=1}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}\Big(z_j(s)+\frac{\varphi_j(s)}{s}\Big)\,ds\\ &+\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\Big(z_j(s)+\frac{\varphi_j(s)}{s}\Big)\,ds-\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\Big(z_i(s)+\frac{\varphi_i(s)}{s}\Big)\,ds, \end{aligned}$$

    and

$$\begin{aligned} {}^{H}D^{\beta}_{1^+}r_i(t)={}&\frac{1}{\Gamma(\alpha-\beta)}\int_1^t\left(\log\frac{t}{s}\right)^{\alpha-\beta-1}\Big(z_i(s)+\frac{\varphi_i(s)}{s}\Big)\,ds-\frac{(\log t)^{1-\beta}}{\Gamma(\alpha-1)\Gamma(2-\beta)}\sum_{j=1}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}\Big(z_j(s)+\frac{\varphi_j(s)}{s}\Big)\,ds\\ &+\frac{(\log t)^{1-\beta}}{\Gamma(\alpha)\Gamma(2-\beta)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\Big(z_j(s)+\frac{\varphi_j(s)}{s}\Big)\,ds-\frac{(\log t)^{1-\beta}}{\Gamma(\alpha)\Gamma(2-\beta)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\Big(z_i(s)+\frac{\varphi_i(s)}{s}\Big)\,ds. \end{aligned}$$

    Then, we deduce that

$$|r_i(t)-r_i^{*}(t)|\leq\frac{2e\,\varepsilon_i}{\Gamma(\alpha+1)}+\sum_{j=1}^{k}\frac{e\,\varepsilon_j}{\Gamma(\alpha)}+\sum_{j=1,\,j\neq i}^{k}\frac{e\,\varepsilon_j}{\Gamma(\alpha+1)}=\varepsilon_i e\left(\frac{1}{\Gamma(\alpha)}+\frac{2}{\Gamma(\alpha+1)}\right)+e\sum_{j=1,\,j\neq i}^{k}\varepsilon_j\left(\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}\right)$$

    and

$$|{}^{H}D^{\beta}_{1^+}r_i(t)-{}^{H}D^{\beta}_{1^+}r_i^{*}(t)|\leq\frac{e\,\varepsilon_i}{\Gamma(\alpha-\beta+1)}+\frac{e\,\varepsilon_i}{\Gamma(\alpha+1)\Gamma(2-\beta)}+\sum_{j=1}^{k}\frac{e\,\varepsilon_j}{\Gamma(\alpha)\Gamma(2-\beta)}+\sum_{j=1,\,j\neq i}^{k}\frac{e\,\varepsilon_j}{\Gamma(\alpha+1)\Gamma(2-\beta)}=e\sum_{j=1,\,j\neq i}^{k}\varepsilon_j\left(\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right)+\varepsilon_i e\left(\frac{1}{\Gamma(\alpha-\beta+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right).$$

Theorem 4.6. Assume that the hypotheses of Theorem 3.1 hold; then the fractional differential system (1.1) is Hyers-Ulam stable provided the eigenvalues of the matrix $A$ lie in the open unit disc, that is, $|\lambda|<1$ for every $\lambda\in\mathbb{C}$ with $\det(\lambda I-A)=0$, where

$$A=\begin{pmatrix} \theta_1(\varrho_1^{\alpha}+\varrho_1^{\alpha-\beta})(p-1)Q_1^{p-2}u_1 & \theta_2(\varrho_2^{\alpha}+\varrho_2^{\alpha-\beta})(p-1)Q_2^{p-2}u_2 & \cdots & \theta_2(\varrho_{14}^{\alpha}+\varrho_{14}^{\alpha-\beta})(p-1)Q_{14}^{p-2}u_{14}\\ \theta_2(\varrho_1^{\alpha}+\varrho_1^{\alpha-\beta})(p-1)Q_1^{p-2}u_1 & \theta_1(\varrho_2^{\alpha}+\varrho_2^{\alpha-\beta})(p-1)Q_2^{p-2}u_2 & \cdots & \theta_2(\varrho_{14}^{\alpha}+\varrho_{14}^{\alpha-\beta})(p-1)Q_{14}^{p-2}u_{14}\\ \vdots & \vdots & \ddots & \vdots\\ \theta_2(\varrho_1^{\alpha}+\varrho_1^{\alpha-\beta})(p-1)Q_1^{p-2}u_1 & \theta_2(\varrho_2^{\alpha}+\varrho_2^{\alpha-\beta})(p-1)Q_2^{p-2}u_2 & \cdots & \theta_1(\varrho_{14}^{\alpha}+\varrho_{14}^{\alpha-\beta})(p-1)Q_{14}^{p-2}u_{14} \end{pmatrix},$$

with $\theta_1,\theta_2$ as defined in the proof below and $u_i$ as in (H1).

Proof. Let $r=(r_1,r_2,\ldots,r_{14})\in X^k$ be a solution of the inequality

$$\big|{}^{H}D^{\alpha}_{1^+}r_i(t)-\varrho_i^{\alpha}\phi_p\big(f_i(t,r_i(t),{}^{H}D^{\beta}_{1^+}r_i(t))\big)\big|\leq\varepsilon_i,\quad t\in[1,e],$$

and let $\bar r=(\bar r_1,\bar r_2,\ldots,\bar r_{14})\in X^k$ be the solution of the following system:

$$\begin{cases} {}^{H}D^{\alpha}_{1^+}\bar r_i(t)=\varrho_i^{\alpha}\phi_p\big(f_i(t,\bar r_i(t),{}^{H}D^{\beta}_{1^+}\bar r_i(t))\big), & t\in[1,e],\\ \bar r_i(1)=0, & i=1,2,\ldots,14,\\ \bar r_i(e)=\bar r_j(e), & i,j=1,2,\ldots,14,\ i\neq j,\\ \sum_{i=1}^{k}\varrho_i^{-1}\bar r_i'(e)=0. \end{cases}$$ (4.3)

By Lemma 2.7, the solution of (4.3) can be given in the following form:

$$\begin{aligned}\bar r_i(t)={}&\frac{\varrho_i^{\alpha}}{\Gamma(\alpha)}\int_1^t\left(\log\frac{t}{s}\right)^{\alpha-1}\phi_p\big(g_i(s,\bar r_i(s),{}^{H}D^{\beta}_{1^+}\bar r_i(s))\big)\,ds-\frac{\log t}{\Gamma(\alpha-1)}\sum_{j=1}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\varrho_j^{\alpha}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-2}\phi_p\big(g_j(s,\bar r_j(s),{}^{H}D^{\beta}_{1^+}\bar r_j(s))\big)\,ds\\&+\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\varrho_j^{\alpha}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\phi_p\big(g_j(s,\bar r_j(s),{}^{H}D^{\beta}_{1^+}\bar r_j(s))\big)\,ds-\frac{\log t}{\Gamma(\alpha)}\sum_{j=1,\,j\neq i}^{k}\left(\frac{\varrho_j^{-1}}{\sum_{j=1}^{k}\varrho_j^{-1}}\right)\varrho_i^{\alpha}\int_1^e\left(\log\frac{e}{s}\right)^{\alpha-1}\phi_p\big(g_i(s,\bar r_i(s),{}^{H}D^{\beta}_{1^+}\bar r_i(s))\big)\,ds.\end{aligned}$$

    Now, by Lemma 4.5, for t[1,e], we can get

$$\begin{aligned}|r_i(t)-\bar r_i(t)|\leq{}&|r_i(t)-r_i^{*}(t)|+|r_i^{*}(t)-\bar r_i(t)|\\ \leq{}&\varepsilon_i e\left(\frac{1}{\Gamma(\alpha)}+\frac{2}{\Gamma(\alpha+1)}\right)+e\sum_{j=1,\,j\neq i}^{k}\varepsilon_j\left(\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}\right)\\ &+e\left(\frac{1}{\Gamma(\alpha)}+\frac{2}{\Gamma(\alpha+1)}\right)(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta})(p-1)Q_i^{p-2}u_i(t)\big(\|r_i-\bar r_i\|+\|{}^{H}D^{\beta}_{1^+}r_i-{}^{H}D^{\beta}_{1^+}\bar r_i\|\big)\\ &+e\left(\frac{1}{\Gamma(\alpha)}+\frac{1}{\Gamma(\alpha+1)}\right)\sum_{j=1,\,j\neq i}^{k}(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta})(p-1)Q_j^{p-2}u_j(t)\big(\|r_j-\bar r_j\|+\|{}^{H}D^{\beta}_{1^+}r_j-{}^{H}D^{\beta}_{1^+}\bar r_j\|\big)\end{aligned}$$

    and

$$\begin{aligned}|{}^{H}D^{\beta}_{1^+}r_i(t)-{}^{H}D^{\beta}_{1^+}\bar r_i(t)|\leq{}&|{}^{H}D^{\beta}_{1^+}r_i(t)-{}^{H}D^{\beta}_{1^+}r_i^{*}(t)|+|{}^{H}D^{\beta}_{1^+}r_i^{*}(t)-{}^{H}D^{\beta}_{1^+}\bar r_i(t)|\\ \leq{}&e\sum_{j=1,\,j\neq i}^{k}\varepsilon_j\left(\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right)+\varepsilon_i e\left(\frac{1}{\Gamma(\alpha-\beta+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right)\\ &+e\left(\frac{1}{\Gamma(\alpha-\beta+1)}+\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right)(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta})(p-1)Q_i^{p-2}u_i(t)\big(\|r_i-\bar r_i\|+\|{}^{H}D^{\beta}_{1^+}r_i-{}^{H}D^{\beta}_{1^+}\bar r_i\|\big)\\ &+e\left(\frac{1}{\Gamma(\alpha)\Gamma(2-\beta)}+\frac{1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right)\sum_{j=1,\,j\neq i}^{k}(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta})(p-1)Q_j^{p-2}u_j(t)\big(\|r_j-\bar r_j\|+\|{}^{H}D^{\beta}_{1^+}r_j-{}^{H}D^{\beta}_{1^+}\bar r_j\|\big).\end{aligned}$$

    Hence, we have

$$\begin{aligned}\|r_i-\bar r_i\|_X={}&\|r_i-\bar r_i\|+\|{}^{H}D^{\beta}_{1^+}r_i-{}^{H}D^{\beta}_{1^+}\bar r_i\|\\ \leq{}&e\left(\frac{\alpha+2}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha-\beta+1)}+\frac{\alpha+1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right)\varepsilon_i+e\sum_{j=1,\,j\neq i}^{k}\left(\frac{\alpha+1}{\Gamma(\alpha+1)}+\frac{\alpha+1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right)\varepsilon_j\\ &+e\left(\frac{\alpha+2}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha-\beta+1)}+\frac{\alpha+1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right)(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta})(p-1)Q_i^{p-2}u_i\,\|r_i-\bar r_i\|_X\\ &+e\sum_{j=1,\,j\neq i}^{k}\left(\frac{\alpha+1}{\Gamma(\alpha+1)}+\frac{\alpha+1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right)(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta})(p-1)Q_j^{p-2}u_j\,\|r_j-\bar r_j\|_X\\ ={}&\theta_1\varepsilon_i+\sum_{j=1,\,j\neq i}^{k}\theta_2\varepsilon_j+\theta_1(\varrho_i^{\alpha}+\varrho_i^{\alpha-\beta})(p-1)Q_i^{p-2}u_i\,\|r_i-\bar r_i\|_X+\sum_{j=1,\,j\neq i}^{k}\theta_2(\varrho_j^{\alpha}+\varrho_j^{\alpha-\beta})(p-1)Q_j^{p-2}u_j\,\|r_j-\bar r_j\|_X,\end{aligned}$$

    where

$$\theta_1=e\left(\frac{\alpha+2}{\Gamma(\alpha+1)}+\frac{1}{\Gamma(\alpha-\beta+1)}+\frac{\alpha+1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right),\qquad\theta_2=e\left(\frac{\alpha+1}{\Gamma(\alpha+1)}+\frac{\alpha+1}{\Gamma(\alpha+1)\Gamma(2-\beta)}\right).$$

    Then we have

$$\big(\|r_1-\bar r_1\|_X,\|r_2-\bar r_2\|_X,\ldots,\|r_{14}-\bar r_{14}\|_X\big)^T\leq B\,(\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_{14})^T+A\,\big(\|r_1-\bar r_1\|_X,\|r_2-\bar r_2\|_X,\ldots,\|r_{14}-\bar r_{14}\|_X\big)^T,$$

    where

$$B_{14\times 14}=\begin{pmatrix}\theta_1&\theta_2&\cdots&\theta_2\\ \theta_2&\theta_1&\cdots&\theta_2\\ \vdots&\vdots&\ddots&\vdots\\ \theta_2&\theta_2&\cdots&\theta_1\end{pmatrix}.$$

    Then, we can get

$$\big(\|r_1-\bar r_1\|_X,\|r_2-\bar r_2\|_X,\ldots,\|r_{14}-\bar r_{14}\|_X\big)^T\leq(I-A)^{-1}B\,(\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_{14})^T.$$

    Let

    \[
    H=(I-A)^{-1}B=\begin{pmatrix}
    h_{1,1} & h_{1,2} & \cdots & h_{1,14}\\
    h_{2,1} & h_{2,2} & \cdots & h_{2,14}\\
    \vdots & \vdots & \ddots & \vdots\\
    h_{14,1} & h_{14,2} & \cdots & h_{14,14}
    \end{pmatrix}.
    \]

    Obviously, $h_{i,j}>0$ for $i,j=1,2,\dots,14$. Set $\varepsilon=\max\{\varepsilon_1,\varepsilon_2,\dots,\varepsilon_{14}\}$; then we get

    \[
    \|r-\bar r\|_{X_k}\le\Big(\sum_{j=1}^{k}\sum_{i=1}^{k}h_{i,j}\Big)\varepsilon. \tag{4.4}
    \]

    Thus, we have derived that system (1.1) is Hyers-Ulam stable.

    Remark 4.7. Setting $\psi_{f_1,f_2,\dots,f_k}(\varepsilon)=\big(\sum_{j=1}^{k}\sum_{i=1}^{k}h_{i,j}\big)\varepsilon$ in (4.4), we have $\psi_{f_1,f_2,\dots,f_k}(0)=0$. Then, by Definition 4.2, we deduce that the fractional differential system (1.1) is generalized Hyers-Ulam stable.

    The benzoic acid graph studied in system (1.1) can be extended to other types of graphs; for example, star graphs and chordal bipartite graphs provide a theoretical basis for physics, computer networks, and other fields. Here we only discuss the fractional differential system on a star graph with three edges (i = 1, 2, 3): we solve a fractional differential equation on a formaldehyde graph, and approximate graphs of the solutions are presented by using iterative methods and numerical simulation.

    Example 5.1. Consider the following fractional differential equation:

    \[
    \begin{cases}
    {}^{H}D^{7/4}_{1^+}r_1(t)+\big(\tfrac13\big)^{7/4}\phi_3\Big(1+\dfrac{t}{5(t+3)^5}\Big(\sin(r_1(t))+\dfrac{|{}^{H}D^{3/4}_{1^+}r_1(t)|}{1+|{}^{H}D^{3/4}_{1^+}r_1(t)|}\Big)\Big)=0,\\[2mm]
    {}^{H}D^{7/4}_{1^+}r_2(t)+\big(\tfrac12\big)^{7/4}\phi_3\Big(1+\dfrac{t}{(t+2)^7}\Big(\sin(r_2(t))+\dfrac{|{}^{H}D^{3/4}_{1^+}r_2(t)|}{1+|{}^{H}D^{3/4}_{1^+}r_2(t)|}\Big)\Big)=0,\\[2mm]
    {}^{H}D^{7/4}_{1^+}r_3(t)+\big(\tfrac23\big)^{7/4}\phi_3\Big(1+\dfrac{t}{1000}|\arcsin(r_3(t))|+\dfrac{t\,|{}^{H}D^{3/4}_{1^+}r_3(t)|}{1000\big(1+|{}^{H}D^{3/4}_{1^+}r_3(t)|\big)}\Big)=0,\\[2mm]
    r_1(1)=r_2(1)=r_3(1)=0,\\
    r_1(e)=r_2(e)=r_3(e),\\
    \big(\tfrac13\big)^{-1}r_1'(e)+\big(\tfrac12\big)^{-1}r_2'(e)+\big(\tfrac23\big)^{-1}r_3'(e)=0,
    \end{cases}\tag{5.1}
    \]

    Corresponding to system (1.1), we obtain

    \[
    \alpha=\frac74,\quad\beta=\frac34,\quad k=3,\quad\varrho_1=\frac13,\quad\varrho_2=\frac12,\quad\varrho_3=\frac23.
    \]

    Figure 3 (the structure of formaldehyde) is taken from reference [22]. Coordinate systems with $r_1$, $r_2$, and $r_3$ are established on the formaldehyde graph with three edges (Figure 4).

    Figure 3.  A sketch of CH2O.
    Figure 4.  Formaldehyde graph with labeled vertices.

    For $t\in[1,e]$,

    \[
    \begin{aligned}
    g_1(t,r_1(t),{}^{H}D^{\beta}_{1^+}r_1(t))&=1+\frac{t}{5(t+3)^5}\Big(\sin(r_1(t))+\frac{|{}^{H}D^{3/4}_{1^+}r_1(t)|}{1+|{}^{H}D^{3/4}_{1^+}r_1(t)|}\Big),\\
    g_2(t,r_2(t),{}^{H}D^{\beta}_{1^+}r_2(t))&=1+\frac{t}{(t+2)^7}\Big(\sin(r_2(t))+\frac{|{}^{H}D^{3/4}_{1^+}r_2(t)|}{1+|{}^{H}D^{3/4}_{1^+}r_2(t)|}\Big),\\
    g_3(t,r_3(t),{}^{H}D^{\beta}_{1^+}r_3(t))&=1+\frac{t}{1000}|\arcsin(r_3(t))|+\frac{t\,|{}^{H}D^{3/4}_{1^+}r_3(t)|}{1000\big(1+|{}^{H}D^{3/4}_{1^+}r_3(t)|\big)}.
    \end{aligned}
    \]

    For any $x,y,x_i,y_i\in\mathbb{R}$, it is clear that

    \[
    \begin{aligned}
    |g_1(t,x,y)-g_1(t,x_1,y_1)|&\le\frac{t}{5(t+3)^5}\big(|x-x_1|+|y-y_1|\big),\\
    |g_2(t,x,y)-g_2(t,x_2,y_2)|&\le\frac{t}{(t+2)^7}\big(|x-x_2|+|y-y_2|\big),\\
    |g_3(t,x,y)-g_3(t,x_3,y_3)|&\le\frac{t}{1000}\big(|x-x_3|+|y-y_3|\big).
    \end{aligned}
    \]

    So we get

    \[
    \|u_1\|=\sup_{t\in[1,e]}|u_1(t)|=\frac{1}{5120},\quad
    \|u_2\|=\sup_{t\in[1,e]}|u_2(t)|=\frac{1}{2187},\quad
    \|u_3\|=\sup_{t\in[1,e]}|u_3(t)|=\frac{e}{1000},
    \]
    \[
    \chi_1=51.9811,\quad\chi_2=54.7872,\quad\chi_3=58.0208,
    \]

    and

    \[
    (\chi_1+\chi_2+\chi_3)\big(\|u_1\|+\|u_2\|+\|u_3\|\big)=0.5555<1.
    \]

    Therefore, by Theorem 3.1, system (5.1) has a unique solution on [1,e].
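    The arithmetic behind this uniqueness condition can be verified directly. The following short Python check is our addition, using the $\chi_i$ and $\|u_i\|$ values quoted above; it is a verification sketch, not part of the original computation.

    ```python
    # Check the uniqueness condition for system (5.1):
    # (chi_1 + chi_2 + chi_3) * (||u_1|| + ||u_2|| + ||u_3||) < 1.
    import math

    chi = [51.9811, 54.7872, 58.0208]              # chi_i values from the text
    u_norms = [1 / 5120, 1 / 2187, math.e / 1000]  # sup-norms ||u_i|| computed above

    product = sum(chi) * sum(u_norms)
    print(round(product, 4))  # 0.5555
    ```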

    Meanwhile,

    \[
    \theta_1=14.1839,\qquad\theta_2=9.7755,
    \]
    \[
    A=\begin{pmatrix}
    2.6581\times10^{-3} & 7.134\times10^{-3} & 6.1901\times10^{-2}\\
    1.832\times10^{-3} & 1.0351\times10^{-2} & 6.1901\times10^{-2}\\
    1.832\times10^{-3} & 7.134\times10^{-3} & 8.9816\times10^{-2}
    \end{pmatrix}.
    \]
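    The values of $\theta_1$ and $\theta_2$ follow from their closed forms in the proof of Theorem 4.6 with $\alpha=7/4$ and $\beta=3/4$. A quick numerical check (our sketch, not the authors' code) reproduces them to within rounding:

    ```python
    # Recompute theta_1 and theta_2 from their closed forms with
    # alpha = 7/4 and beta = 3/4, the orders used in Example 5.1.
    from math import e, gamma

    alpha, beta = 7 / 4, 3 / 4

    theta1 = e * ((alpha + 2) / gamma(alpha + 1)
                  + 1 / gamma(alpha - beta + 1)
                  + (alpha + 1) / (gamma(alpha + 1) * gamma(2 - beta)))
    theta2 = e * ((alpha + 1) / gamma(alpha + 1)
                  + (alpha + 1) / (gamma(alpha + 1) * gamma(2 - beta)))

    # Both values agree with 14.1839 and 9.7755 up to rounding.
    print(theta1, theta2)
    ```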

    Let

    \[
    \det(\lambda I-A)=(\lambda-0.0964)(\lambda-0.0011)(\lambda-0.0054)=0,
    \]

    so we have

    \[
    \lambda_1=0.0964<1,\qquad\lambda_2=0.0011<1,\qquad\lambda_3=0.0054<1.
    \]

    It follows from Theorem 4.6 that system (5.1) is Hyers-Ulam stable, and by Remark 4.7, it is generalized Hyers-Ulam stable.
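    The eigenvalue computation can be reproduced numerically. The following check (our sketch, assuming NumPy is available) confirms that all eigenvalues of $A$ lie below 1:

    ```python
    # Eigenvalues of the matrix A from Example 5.1; the Hyers-Ulam
    # stability argument needs the spectral radius of A to be below 1.
    import numpy as np

    A = np.array([
        [2.6581e-3, 7.134e-3, 6.1901e-2],
        [1.832e-3, 1.0351e-2, 6.1901e-2],
        [1.832e-3, 7.134e-3, 8.9816e-2],
    ])

    eigenvalues = np.sort(np.linalg.eigvals(A).real)
    print(eigenvalues)  # close to [0.0011, 0.0054, 0.0964]
    ```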

    Finally, the iterative process curve and the approximate solution of the fractional differential system (5.1) are obtained by using the iterative method and numerical simulation. Let $r_{i,0}=0$; the iteration sequence is as follows:

    \[
    \begin{aligned}
    r_{1,n+1}(t)={}&-\frac{(1/3)^{7/4}}{\Gamma(7/4)}\int_1^{t}\Big(\log\frac{t}{s}\Big)^{3/4}\phi_3\big(g_1(s,r_{1,n}(s),{}^{H}D^{3/4}_{1^+}r_{1,n}(s))\big)\,\frac{ds}{s}\\
    &+\frac{\log t}{\Gamma(3/4)}\sum_{j=1}^{3}\frac{\varrho_j^{-1}}{(1/3)^{-1}+(1/2)^{-1}+(2/3)^{-1}}\,\varrho_j^{7/4}\int_1^{e}\Big(\log\frac{e}{s}\Big)^{-1/4}\phi_3\big(g_j(s,r_{j,n}(s),{}^{H}D^{3/4}_{1^+}r_{j,n}(s))\big)\,\frac{ds}{s}\\
    &+\frac{\log t}{\Gamma(7/4)}\sum_{j=2}^{3}\frac{\varrho_j^{-1}}{(1/3)^{-1}+(1/2)^{-1}+(2/3)^{-1}}\,\varrho_j^{7/4}\int_1^{e}\Big(\log\frac{e}{s}\Big)^{3/4}\phi_3\big(g_j(s,r_{j,n}(s),{}^{H}D^{3/4}_{1^+}r_{j,n}(s))\big)\,\frac{ds}{s}\\
    &-\frac{\log t}{\Gamma(7/4)}\sum_{j=2}^{3}\frac{\varrho_j^{-1}}{(1/3)^{-1}+(1/2)^{-1}+(2/3)^{-1}}\Big(\frac13\Big)^{7/4}\int_1^{e}\Big(\log\frac{e}{s}\Big)^{3/4}\phi_3\big(g_1(s,r_{1,n}(s),{}^{H}D^{3/4}_{1^+}r_{1,n}(s))\big)\,\frac{ds}{s},
    \end{aligned}
    \]

    with the $g_j$ as defined above.

    The iterative sequences for $r_{2,n+1}$ and $r_{3,n+1}$ are similar to that for $r_{1,n+1}$. After several iterations, the approximate solution of the fractional differential system (5.1) is obtained by numerical simulation. Figure 5 shows the approximate graph of the solution $r_1$ after the iterations, Figure 6 that of $r_2$, and Figure 7 that of $r_3$.

    Figure 5.  Approximate solution of u1.
    Figure 6.  Approximate solution of u2.
    Figure 7.  Approximate solution of u3.
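    To illustrate the kind of fixed-point iteration involved, the following Python sketch (our construction, not the authors' code) iterates only the first Hadamard-integral term of $r_{1,n+1}$ on a log-uniform grid, with the fractional-derivative argument of $g_1$ set to 0 for brevity; the trapezoidal quadrature and grid size are our assumptions.

    ```python
    # Simplified sketch of the Picard-type iteration for r_{1,n+1}: only the
    # first Hadamard integral term is kept, and the fractional-derivative
    # argument of g_1 is set to 0 to keep the example short.
    import math

    ALPHA = 7 / 4                       # fractional order alpha = 7/4
    RHO_A = (1 / 3) ** ALPHA            # coefficient (1/3)^{7/4}
    N = 120                             # grid points on [1, e]
    ts = [math.exp(i / (N - 1)) for i in range(N)]  # log-uniform grid

    def phi3(s):
        """p-Laplacian phi_p(s) = |s|^{p-2} s with p = 3."""
        return abs(s) * s

    def g1(t, x):
        """g_1 from (5.1) with the fractional-derivative argument dropped."""
        return 1 + t / (5 * (t + 3) ** 5) * math.sin(x)

    def iterate(r):
        """One sweep of the first integral term, trapezoidal rule in log s."""
        out = []
        for i, t in enumerate(ts):
            acc = 0.0
            for j in range(i):
                s0, s1 = ts[j], ts[j + 1]
                f0 = math.log(t / s0) ** (ALPHA - 1) * phi3(g1(s0, r[j]))
                f1 = math.log(t / s1) ** (ALPHA - 1) * phi3(g1(s1, r[j + 1]))
                acc += 0.5 * (f0 + f1) * (math.log(s1) - math.log(s0))
            out.append(-RHO_A / math.gamma(ALPHA) * acc)
        return out

    r = [0.0] * N
    for _ in range(5):
        r_next = iterate(r)
        diff = max(abs(a - b) for a, b in zip(r, r_next))
        r = r_next
    print("sup-difference after final sweep:", diff)
    ```

    Because $g_1$ depends on its second argument only through a factor bounded by $1/5120$, successive sweeps contract rapidly and the sup-difference between iterates falls below machine precision within a few sweeps, mirroring the fast convergence reported for the full scheme.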

    Using the idea of star graphs, several scholars have studied the solutions of fractional differential equations. They chose star graphs because their method requires a central node connected to the neighboring vertices by edges, with no edges between the other nodes.

    The purpose of this study was to extend the technique's applicability by introducing the concept of a benzoic acid graph, based on a fundamental molecule in chemistry with the formula C7H6O2. In this manner, we explored a network whose vertices are labeled with 0 or 1, and showed how the structure of the benzoic acid molecule shapes this network. To study the existence of solutions to the proposed boundary value problems with p-Laplacian operators, we used Krasnoselskii's fixed-point theorem. The Hyers-Ulam stability of the system was also established. Finally, an example was given to illustrate the significance of the findings of this research.

    The following open problems are presented for readers interested in this topic. The study of fractional boundary value problems and their numerical solutions on graphs currently has broad prospects and can be extended to other graphs, such as chordal bipartite graphs. Follow-up research, especially on chemical graph theory, has practical significance, because such research requires neither chemical reagents nor experimental equipment: the molecular structure can be studied in the absence of laboratory conditions and reagents, and the same results are obtained as under experimental conditions. Although this paper studies the differential equation on the benzoic acid graph, the molecular structure was not investigated through topological indices; this could be attempted later, providing a theoretical basis for the study of reverse engineering and new ideas for mathematics, chemistry, and other fields.

    Yunzhe Zhang: Writing-original draft, formal analysis, investigation; Youhui Su: Supervision, writing-review, editing and project administration; Yongzhen Yu: Resources, editing. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work is supported by the Xuzhou Science and Technology Plan Project (KC23058) and the Natural Science Research Project of Jiangsu Colleges and Universities (22KJB110026).

    The authors declare no conflicts of interest.



    [1] S. Haykin, Neural Networks and Learning Machines, Prentice Hall, 2011.
    [2] O. I. Abiodun, A. Jantan, A. E. Omolara, K. V. Dada, N. A. Mohamed, H. Arshad, State-of-the-art in artificial neural network applications: A survey, Heliyon, 4 (2018). https://doi.org/10.1016/j.heliyon.2018.e00938
    [3] F. Li, M. Sun, EMLP: Short-term gas load forecasting based on ensemble multilayer perceptron with adaptive weight correction, Math. Biosci. Eng., 18 (2021), 1590–1608. https://doi.org/10.3934/mbe.2021082 doi: 10.3934/mbe.2021082
    [4] A. Rana, A. S. Rawat, A. Bijalwan, H. Bahuguna, Application of multi layer (perceptron) artificial neural network in the diagnosis system: a systematic review, in 2018 International Conference on Research in Intelligent and Computing in Engineering (RICE), (2018), 1–6. https://doi.org/10.1109/RICE.2018.8509069
    [5] L. C. Velasco, J. F. Bongat, C. Castillon, J. Laurente, E. Tabanao, Days-ahead water level forecasting using artificial neural networks for watersheds, Math. Biosci. Eng., 20 (2023), 758–774. https://doi.org/10.3934/mbe.2023035 doi: 10.3934/mbe.2023035
    [6] S. Hochreiter, A. S. Younger, P. R. Conwell, Learning to learn using gradient descent, in Artificial Neural Networks—ICANN 2001: International Conference Vienna, (2001), 87–94. https://doi.org/10.1007/3-540-44668-0_13
    [7] L. M. Saini, M. K. Soni, Artificial neural network-based peak load forecasting using conjugate gradient methods, IEEE Trans. Power Syst., 17 (2002), 907–912. https://doi.org/10.1109/TPWRS.2002.800992 doi: 10.1109/TPWRS.2002.800992
    [8] H. Adeli, A. Samant, An adaptive conjugate gradient neural network-wavelet model for traffic incident detection, Comput. Aided Civil Infrast. Eng., 15 (2000), 251–260. https://doi.org/10.1111/0885-9507.00189 doi: 10.1111/0885-9507.00189
    [9] J. Bilski, B. Kowalczyk, A. Marchlewska, J. M. Zurada, Local Levenberg-Marquardt algorithm for learning feedforward neural networks, J. Artif. Intell. Soft Comput. Res., 10 (2020), 299–316. https://doi.org/10.2478/jaiscr-2020-0020 doi: 10.2478/jaiscr-2020-0020
    [10] R. Pascanu, T. Mikolov, T. Y. Bengio, On the difficulty of training recurrent neural networks, in International Conference on Machine Learning, (2013), 1310–1318.
    [11] H. Faris, I. Aljarah, S. Mirjalili, Training feedforward neural networks using multi-verse optimizer for binary classification problems, Appl. Intell., 45 (2016), 322–332. https://doi.org/10.1007/s10489-016-0767-1 doi: 10.1007/s10489-016-0767-1
    [12] M. Črepinšek, S. H. Liu, M. Mernik, Exploration and exploitation in evolutionary algorithms: A survey, ACM Comput. Surv., 45 (2013), 1–33. https://doi.org/10.1145/2480741.2480752 doi: 10.1145/2480741.2480752
    [13] G. Xu, An adaptive parameter tuning of particle swarm optimization algorithm, Appl. Math. Comput., 219 (2013), 4560–4569. https://doi.org/10.1016/j.amc.2012.10.067 doi: 10.1016/j.amc.2012.10.067
    [14] S. Mirjalili, S. Z. M. Hashim, H. M. Sardroudi, Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm, Appl. Math. Comput., 218 (2012), 11125–11137. https://doi.org/10.1016/j.amc.2012.04.069 doi: 10.1016/j.amc.2012.04.069
    [15] X. S. Yang, Random walks and optimization, in Nature Inspired Optimization Algorithms, Elsevier, (2014), 45–65. https://doi.org/10.1016/B978-0-12-416743-8.00003-8
    [16] M. Ghasemi, S. Ghavidel, S. Rahmani, A. Roosta, H. Falah, A novel hybrid algorithm of imperialist competitive algorithm and teaching learning algorithm for optimal power flow problem with non-smooth cost functions, Eng. Appl. Artif. Intell., 29 (2014), 54–69. https://doi.org/10.1016/j.engappai.2013.11.003 doi: 10.1016/j.engappai.2013.11.003
    [17] S. Pothiya, I. Ngamroo, W. Kongprawechnon, Ant colony optimisation for economic dispatch problem with non-smooth cost functions, Int. J. Electr. Power Energy Syst., 32 (2010), 478–487. https://doi.org/10.1016/j.ijepes.2009.09.016 doi: 10.1016/j.ijepes.2009.09.016
    [18] M. M. Fouad, A. I. El-Desouky, R. Al-Hajj, E. S. M. El-Kenawy, Dynamic group-based cooperative optimization algorithm, IEEE Access, 8 (2020), 148378–148403. https://doi.org/10.1109/ACCESS.2020.3015892 doi: 10.1109/ACCESS.2020.3015892
    [19] S. Mirjalili, S. M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Software, 69 (2014), 46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007 doi: 10.1016/j.advengsoft.2013.12.007
    [20] F. Van den Bergh, A. P. Engelbrecht, A cooperative approach to particle swarm optimization, IEEE Trans. Evol. Comput., 8 (2004), 225–239. https://doi.org/10.1109/TEVC.2004.826069 doi: 10.1109/TEVC.2004.826069
    [21] C. K. Goh, K. C. Tan, A competitive-cooperative co-evolutionary paradigm for dynamic multi-objective optimization, IEEE Trans. Evol. Comput., 13 (2008), 103–127. https://doi.org/10.1109/TEVC.2008.920671 doi: 10.1109/TEVC.2008.920671
    [22] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, 1992. https://doi.org/10.7551/mitpress/1090.001.0001
    [23] D. E. Goldberg, Genetic Algorithms in Search Optimization and Machine Learning, Addison-Wesley, 1989.
    [24] E. K. Burke, G. Kendall, Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques, Springer, 2014. https://doi.org/10.1007/978-1-4614-6940-7
    [25] U. Seiffert, Multiple layer perceptron training using genetic algorithms, in Proceedings of the European Symposium on Artificial Neural Networks, (2001), 159–164.
    [26] F. Ecer, S. Ardabili, S. S. Band, A. Mosavi, Training multilayer perceptron with genetic algorithms and particle swarm optimization for modeling stock price index prediction, Entropy, 22 (2020), 1239. https://doi.org/10.3390/e22111239
    [27] C. Zanchettin, T. B. Ludermir, L. M. Almeida, Hybrid training method for MLP: Optimization of architecture and training, IEEE Trans. Syst. Man Cyber. Part B, 41 (2011), 1097–1109. https://doi.org/10.1109/TSMCB.2011.2107035 doi: 10.1109/TSMCB.2011.2107035
    [28] H. Wang, H. Moayedi, L. Kok Foong, Genetic algorithm hybridized with multilayer perceptron to have an economical slope stability design, Eng. Comput., 37 (2021), 3067–3078. https://doi.org/10.1007/s00366-020-00957-5 doi: 10.1007/s00366-020-00957-5
    [29] C. C. Ribeiro, P. Hansen, V. Maniezzo, A. Carbonaro, Ant colony optimization: An overview, Essay Sur. Metaheuristics, 2002 (2002), 469–492. https://doi.org/10.1007/978-1-4615-1507-4_21 doi: 10.1007/978-1-4615-1507-4_21
    [30] M. Dorigo, T. Stützle, Ant Colony Optimization: Overview and Recent Advances, Springer International Publishing, (2019), 311–351. https://doi.org/10.1007/978-3-319-91086-4_10
    [31] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: Artificial bee colony (ABC) algorithm and applications, Artif. Intell. Revi., 42 (2014), 21–57. https://doi.org/10.1007/s10462-012-9328-0 doi: 10.1007/s10462-012-9328-0
    [32] B. A. Garro, R. A. Vázquez, Designing artificial neural networks using particle swarm optimization algorithms, Comput. Intell. Neurosci., 2015 (2015), 61. https://doi.org/10.1155/2015/369298 doi: 10.1155/2015/369298
    [33] I. Vilovic, N. Burum, Z. Sipus, Ant colony approach in optimization of base station position, in 2009 3rd European Conference on Antennas and Propagation, (2009), 2882–2886.
    [34] K. Socha, C. Blum, An ant colony optimization algorithm for continuous optimization: Application to feed-forward neural network training, Neural Comput. Appl., 16 (2007), 235–247. https://doi.org/10.1007/s00521-007-0084-z doi: 10.1007/s00521-007-0084-z
    [35] M. Mavrovouniotis, S. Yang, Training neural networks with ant colony optimization algorithms for pattern classification, Soft Comput., 19 (2015), 1511–1522. https://doi.org/10.1007/s00500-014-1334-5 doi: 10.1007/s00500-014-1334-5
    [36] C. Ozturk, D. Karaboga, Hybrid artificial bee colony algorithm for neural network training, in 2011 IEEE Congress of Evolutionary Computation (CEC), (2011), 84–88. https://doi.org/10.1109/CEC.2011.5949602
    [37] R. Storn, K. Price, Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optimization, 11 (1997), 341–359. https://doi.org/10.1023/A:1008202821328 doi: 10.1023/A:1008202821328
    [38] N. Bacanin, K. Alhazmi, M. Zivkovic, K. Venkatachalam, T. Bezdan, J. Nebhen, Training multi-layer perceptron with enhanced brain storm optimization metaheuristics, Comput. Mater. Contin, 70 (2022), 4199–4215. https://doi.org/10.32604/cmc.2022.020449 doi: 10.32604/cmc.2022.020449
    [39] J. Ilonen, J. K. Kamarainen, J. Lampinen, Differential evolution training algorithm for feed-forward neural networks, Neural Process. Lett., 17 (2003), 93–105. https://doi.org/10.1023/A:1022995128597 doi: 10.1023/A:1022995128597
    [40] A. Slowik, M. Bialko, Training of artificial neural networks using differential evolution algorithm, in 2008 Conference on Human System Interactions, (2008), 60–65. https://doi.org/10.1109/HSI.2008.4581409
    [41] A. A. Bataineh, D. Kaur, S. M. J. Jalali, Multi-layer perceptron training optimization using nature inspired computing, IEEE Access, 10 (2022), 36963–36977. https://doi.org/10.1109/ACCESS.2022.3164669 doi: 10.1109/ACCESS.2022.3164669
    [42] K. N. Dehghan, S. R. Mohammadpour, S. H. A. Rahamti, US natural gas consumption analysis via a smart time series approach based on multilayer perceptron ANN tuned by metaheuristic algorithms, in Handbook of Smart Energy Systems, Springer International Publishing, (2023), 1–13. https://doi.org/10.1007/978-3-030-72322-4_137-1
    [43] A. Alimoradi, H. Hajkarimian, H. H. Ahooi, M. Salsabili, Comparison between the performance of four metaheuristic algorithms in training a multilayer perceptron machine for gold grade estimation, Int. J. Min. Geo-Eng., 56 (2022), 97–105. https://doi.org/10.22059/ijmge.2021.314154.594880 doi: 10.22059/ijmge.2021.314154.594880
    [44] K. Bandurski, W. Kwedlo, A Lamarckian hybrid of differential evolution and conjugate gradients for neural network training, Neural Process. Lett., 32 (2010), 31–44. https://doi.org/10.1007/s11063-010-9141-1 doi: 10.1007/s11063-010-9141-1
    [45] B. Warsito, A. Prahutama, H. Yasin, S. Sumiyati, Hybrid particle swarm and conjugate gradient optimization in neural network for prediction of suspended particulate matter, in E3S Web of Conferences, (2019), 25007. https://doi.org/10.1051/e3sconf/201912525007
    [46] A. Cuk, T. Bezdan, N. Bacanin, M. Zivkovic, K. Venkatachalam, T. A. Rashid, et al., Feedforward multi-layer perceptron training by hybridized method between genetic algorithm and artificial bee colony, Data Sci. Data Anal. Oppor. Challenges, 2021 (2021), 279. https://doi.org/10.1201/9781003111290-17-21 doi: 10.1201/9781003111290-17-21
    [47] UC Irvine Machine Learning Repository. Available from: http://archive.ics.uci.edu/ml/
    [48] Kaggle Datasets. Available from: https://www.kaggle.com/datasets/
    [49] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, et al., Scikit-learn: Machine learning in Python, J. Mach. Learn. Res., 12 (2011), 2825–2830.
    [50] F. Dick, H. Tevaearai, Significance and limitations of the p value, Eur. J. Vasc. Endovascular Surg., 50 (2015), 815. https://doi.org/10.1016/j.ejvs.2015.07.026 doi: 10.1016/j.ejvs.2015.07.026
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
