
The widespread application of chaotic dynamical systems in different fields of science and engineering has attracted the attention of many researchers. Hence, understanding and capturing the complexities and the dynamical behavior of these chaotic systems is essential. The newly proposed fractal-fractional derivative and integral operators have been used in the literature to predict the chaotic behavior of some attractors. It is argued that combining the concepts of fractional and fractal derivatives can help us understand the existing complexities better, since fractional derivatives capture only a limited class of problems while fractal derivatives capture different kinds of complexities. In this study, we use the newly proposed Caputo-Fabrizio fractal-fractional derivative and integral operators to capture and predict the behavior of the Lorenz chaotic system for different values of the fractional dimension q and the fractal dimension k. We also examine the well-posedness of the solution. To assess the effect of the Caputo-Fabrizio fractal-fractional derivative operator on the behavior, we present a numerical scheme and study the graphical numerical solution for different values of q and k.
Citation: Anastacia Dlamini, Emile F. Doungmo Goufo, Melusi Khumalo. On the Caputo-Fabrizio fractal fractional representation for the Lorenz chaotic system[J]. AIMS Mathematics, 2021, 6(11): 12395-12421. doi: 10.3934/math.2021717
In recent years, fractional partial differential equations (FPDEs) have been widely used in natural science and engineering technology [1,2,3,4]. The advantage of FPDEs lies in their ability to better describe materials and processes that exhibit memory and hereditary properties [5,6]. However, the solutions of FPDEs are much more complex. Many researchers have exploited diverse techniques for investigating FPDEs, such as the finite difference method (FDM) [7], the finite element method [8], the spectral method [9], and the virtual element method [10]. Developing effective numerical methods for approximating FPDEs therefore remains an active goal.
In recent years, neural networks (NNs) have been successfully applied to problems in various fields [11,12,13]. Owing to the high expressiveness of NNs in function approximation [14,15,16], using NNs to solve differential and integral equations has become an active and important research field. Physics-informed neural networks (PINNs) [17,18,19,20] are machine learning models that combine deep learning with physical knowledge. PINNs embed PDEs into the loss function of the NNs, enabling the NNs to learn solutions of PDEs. The PINNs algorithm is meshless and simple, and can be applied to various types of PDEs, including integro-differential equations, FPDEs, and stochastic partial differential equations. Moreover, PINNs can solve inverse problems of PDEs just as easily as forward problems [17]. PINNs have been successfully applied to various problems in scientific computing [21,22,23]. Pang et al. [24] used the FDM to approximate the fractional derivatives that cannot be automatically differentiated, thus extending PINNs to fPINNs for solving FPDEs.
Despite the success of deep learning in the past, solving a wide range of PDEs is theoretically and practically challenging as complexity increases. Therefore, many aspects of PINNs need to be further improved to achieve more accurate predictions, higher computational efficiency, and robustness of training. Lu et al. [25] proposed DeepXDE, a deep learning library for solving PDEs, introduced a new residual-based adaptive refinement method to improve the training efficiency of PINNs, and new residual points were added at the position where the residuals of the PDEs were large, so that the discontinuities of PDEs could be captured well. Zhang et al. [26] combined fPINNs with the spectral method to solve the time-fractional phase field models. It had the characteristics of reducing the approximate number of discrete fractional operators, thus improving the training efficiency and obtaining higher error accuracy. Wu et al. [27] conducted a comprehensive study on two types of sampling of PINNs, including non-adaptive uniform sampling and adaptive non-uniform sampling, and the research results could also be used as a practical guide for selecting sampling methods. Zhang et al. [28] removed the soft constraints of PDEs in the loss function, and used the Lie symmetry group to generate the labeled data of PDEs to build a supervised learning model, thus effectively predicting the large amplitude and high frequency solutions of the Klein-Gordon equation. Zhang et al. [29] introduced the symmetry-enhanced physics-informed neural network (SPINN), which incorporated the invariant surface conditions derived from Lie symmetries or non-classical symmetries of PDEs into the loss function of PINNs, aiming to improve accuracy of PINNs. Lu et al. [30] and Xie et al. 
[31] introduced gradient-enhanced physics-informed neural networks (gPINNs) to solve PDEs and the idea of embedding the gradient information from the residuals of PDEs into the loss functions has also proven to be effective in other methods such as Gaussian process regression [32].
In this paper, inspired by the above works, gfPINNs are applied to solve the forward and inverse problems of the multiterm time-fractional Burger-type equation. The integer-order derivatives are handled using the automatic differentiation capability of the NNs, while the fractional derivatives of the equation are approximated using finite difference discretization [33,34]. The residual information of the equation is then incorporated into the loss function of the NNs, which is optimized to yield the optimal parameters. For the inverse problems of the multiterm time-fractional Burger-type equation, the overall form of the equation is known but the coefficient and the orders of the time-fractional derivatives are unknown. The gfPINNs explicitly incorporate information from the equation by including its differential operators directly in the optimization loss function. The parameters to be identified appear in the differential operators and are optimized by minimizing the loss function associated with them. A numerical comparison between fPINNs and gfPINNs is conducted using numerical examples. The numerical results demonstrate the effectiveness of gfPINNs in solving the multiterm time-fractional Burger-type equation.
The structure of this paper is as follows. In Section 2, we define forward and inverse problems for the multiterm time-fractional Burger-type equation. In Section 3, we introduce fPINNs and gfPINNs and give the finite difference discretization to approximate the time-fractional derivatives. In Section 4, we demonstrate the effectiveness of gfPINNs in solving the forward and inverse problems of the multiterm time-fractional Burger-type equation by numerical examples, and compare the experimental results of fPINNs and gfPINNs. Finally, we give the conclusions of this paper in Section 5.
We consider the following multiterm time-fractional Burger-type equation defined on the bounded domain Ω:
$$c_1\,{}_{0}^{C}D_{t}^{\alpha}u(x,t)+c_2\,{}_{0}^{C}D_{t}^{\gamma}u(x,t)+u(x,t)\frac{\partial u(x,t)}{\partial x}=v\frac{\partial^{2}u(x,t)}{\partial x^{2}}+f(x,t), \tag{2.1}$$
where $(x,t)\in\Omega\times[0,T]$, and the initial and boundary conditions are given as
$$\begin{cases}u(x,t)=0, & x\in\partial\Omega,\\ u(x,0)=g(x), & x\in\Omega,\end{cases} \tag{2.2}$$
where $u(x,t)$ is the solution of the equation, $f(x,t)$ is the forcing term whose values are only known at scattered spatio-temporal coordinates, $v$ is the kinematic viscosity of the fluid, $g(x)$ is a sufficiently smooth function, the fractional orders $\alpha$ and $\gamma$ are restricted to $(0,1)$ and $(1,2)$, respectively, and ${}_{0}^{C}D_{t}^{\theta}u(x,t)$ is the Caputo time-fractional derivative of order $\theta$ ($\theta>0$, $n-1\le\theta<n$) of $u(x,t)$ with respect to $t$ [35,36]:
$${}_{0}^{C}D_{t}^{\theta}u(x,t)=\begin{cases}\dfrac{1}{\Gamma(n-\theta)}\displaystyle\int_{0}^{t}(t-s)^{\,n-1-\theta}\,\dfrac{\partial^{n}u(x,s)}{\partial s^{n}}\,ds, & \theta\notin\mathbb{Z}^{+},\\[1ex]\dfrac{\partial^{\theta}u(x,t)}{\partial t^{\theta}}, & \theta\in\mathbb{Z}^{+},\end{cases} \tag{2.3}$$
where Γ(⋅) is the gamma function.
The forward and inverse problems of solving the multiterm time-fractional Burger-type equation are described as follows. For the forward problem, under the given preconditions of the fractional orders α and γ, the forcing term f, and the initial and boundary conditions, the solution u(x,t) is solved. For the inverse problem, under the given preconditions of the initial and boundary conditions, the forcing term f, and additional concentration measurements at the final time u(x,t)=h(x,t), the fractional orders α and γ, the flow velocity v, and the solution u(x,t) are solved.
This subsection introduces the idea of fPINNs and we consider both the forward and inverse problems, along with their corresponding NNs. We first consider the forward problem of the multiterm time-fractional Burger-type equation in the following form:
$$\begin{cases}\mathcal{L}\{u(x,t)\}=f(x,t), & (x,t)\in\Omega\times[0,T],\\ u(x,t)=0, & x\in\partial\Omega,\\ u(x,0)=g(x), & x\in\Omega,\end{cases} \tag{3.1}$$
where $\mathcal{L}\{\cdot\}$ is a nonlinear operator with $\mathcal{L}\{u(x,t)\}=c_1\,{}_{0}^{C}D_{t}^{\alpha}u(x,t)+c_2\,{}_{0}^{C}D_{t}^{\gamma}u(x,t)+u(x,t)\frac{\partial u(x,t)}{\partial x}-v\frac{\partial^{2}u(x,t)}{\partial x^{2}}$. We divide the nonlinear operator $\mathcal{L}\{\cdot\}$ into two parts, $\mathcal{L}=\mathcal{L}_{AD}+\mathcal{L}_{nonAD}$. The first part is an integer-order derivative operator, which can be handled by automatic differentiation (AD) using the chain rule. We have
$$\mathcal{L}_{AD}\{\cdot\}=\begin{cases}u(x,t)\dfrac{\partial u(x,t)}{\partial x}-v\dfrac{\partial^{2}u(x,t)}{\partial x^{2}}, & \alpha\in(0,1),\ \gamma\in(1,2),\\[1ex]c_2\dfrac{\partial^{2}u(x,t)}{\partial t^{2}}+u(x,t)\dfrac{\partial u(x,t)}{\partial x}-v\dfrac{\partial^{2}u(x,t)}{\partial x^{2}}, & \alpha\in(0,1),\ \gamma=2,\\[1ex]c_1\dfrac{\partial u(x,t)}{\partial t}+u(x,t)\dfrac{\partial u(x,t)}{\partial x}-v\dfrac{\partial^{2}u(x,t)}{\partial x^{2}}, & \alpha=1,\ \gamma\in(1,2),\end{cases} \tag{3.2}$$
and the second category consists of operators that lack automatic differentiation capabilities:
$$\mathcal{L}_{nonAD}\{\cdot\}=\begin{cases}c_1\,{}_{0}^{C}D_{t}^{\alpha}u(x,t)+c_2\,{}_{0}^{C}D_{t}^{\gamma}u(x,t), & \alpha\in(0,1),\ \gamma\in(1,2),\\ c_1\,{}_{0}^{C}D_{t}^{\alpha}u(x,t), & \alpha\in(0,1),\ \gamma=2,\\ c_2\,{}_{0}^{C}D_{t}^{\gamma}u(x,t), & \alpha=1,\ \gamma\in(1,2).\end{cases} \tag{3.3}$$
For $\mathcal{L}_{nonAD}$, we can discretize using the FDM; we denote by $\mathcal{L}_{FDM}$ the discretized version of $\mathcal{L}_{nonAD}$.
During the NNs training process, our goal is to optimize its parameters in order to ensure that the approximate solution of the equation closely satisfies the initial and boundary conditions. The approximate solution is chosen as
$$\tilde{u}(x,t)=t\,\rho(x)\,u_{NN}(x,t)+g(x), \tag{3.4}$$
where $u_{NN}$ represents the output of the NN. The NN acts as a surrogate model, approximating the map from the spatio-temporal coordinates to the solution of the equation; it is defined by its weights and biases, collected in the parameter vector $\mu$ (see Figure 1 for a simple NN). This network is fully connected with a single hidden layer consisting of three neurons. The two inputs $x$ and $t$ first go through a linear transformation to obtain $x_1=w_1x+w_4t+b_1$, $x_2=w_2x+w_5t+b_2$, and $x_3=w_3x+w_6t+b_3$ in the hidden layer, and then through a nonlinear transformation $Y_i=f(x_i)$ for $i=1,2,3$, where we choose the hyperbolic tangent function $\tanh(\cdot)$. The $Y_i$ then go through a final linear transformation to obtain the output of the NN, $u_{NN}(x,t;\mu)=w_7Y_1+w_8Y_2+w_9Y_3+b_4$. The vector of parameters $\mu$ comprises the weights $w_i$ and biases $b_i$. The auxiliary function $\rho(x)$ is preselected with $\rho(0)=\rho(1)=0$, and $g(x)$ is the initial condition function, so that $\tilde u$ satisfies the initial and boundary conditions automatically.
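The forward pass just described is small enough to write out directly. The sketch below is a minimal illustration with hypothetical hand-picked weights (the networks actually trained in this paper use 4 hidden layers of 20 neurons); it also shows how the ansatz (3.4) with $\rho(x)=x(1-x)$ and $g(x)=0$ enforces the initial and boundary conditions by construction:

```python
import math

def u_nn(x, t, w, b):
    """Forward pass of the toy fully connected network from Figure 1:
    2 inputs (x, t), one hidden layer of 3 tanh neurons, 1 linear output."""
    # hidden layer: linear transformation followed by tanh
    y1 = math.tanh(w[0] * x + w[3] * t + b[0])
    y2 = math.tanh(w[1] * x + w[4] * t + b[1])
    y3 = math.tanh(w[2] * x + w[5] * t + b[2])
    # output layer: linear combination of the hidden activations
    return w[6] * y1 + w[7] * y2 + w[8] * y3 + b[3]

def u_tilde(x, t, w, b):
    """Approximate solution (3.4) with rho(x) = x(1 - x) and g(x) = 0,
    which enforces the initial and boundary conditions by construction."""
    return t * x * (1.0 - x) * u_nn(x, t, w, b)

# hypothetical parameter values, just to exercise the forward pass
w = [0.1, -0.2, 0.3, 0.4, -0.5, 0.6, 0.7, 0.8, -0.9]
b = [0.01, 0.02, 0.03, 0.04]
print(u_tilde(0.0, 1.0, w, b))  # 0.0: boundary condition holds
print(u_tilde(0.5, 0.0, w, b))  # 0.0: initial condition holds
```

Multiplying the raw network output by $t\,x(1-x)$ is what removes the need for separate initial- and boundary-condition terms in the loss.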
The loss function of fPINNs for the forward problem with the approximate solution is defined as the mean-squared error of the equation residual
$$L_{FW}=\frac{1}{|S_F|}\sum_{(x,t)\in S_F}\Big[\mathcal{L}_{FDM}\{\tilde u(x,t)\}+\mathcal{L}_{AD}\{\tilde u(x,t)\}-f(x,t)\Big]^2, \tag{3.5}$$
where $S_F\subset\Omega\times[0,T]$ and $|S_F|$ denotes the number of training points. Then, we train the NNs to optimize the loss function of the forward problem with respect to the NNs parameters $\mu$, thus obtaining the optimal parameters $\mu_{best}$. Finally, we specify a set of arbitrary test points to test the trained NNs and observe the training performance.
The code for solving the forward and inverse problems of the equation using NNs is similar. We only need to incorporate the parameters to be identified in the inverse problem into the loss function to be optimized in the forward problem, and no other changes are necessary. Next, we consider the following form of the inverse problem:
$$\begin{cases}\mathcal{L}_{\xi=\{\alpha,\gamma,v\}}\{u(x,t)\}=f(x,t), & (x,t)\in\Omega\times[0,T],\\ u(x,t)=0, & x\in\partial\Omega,\\ u(x,0)=g(x), & x\in\Omega,\\ u(x,t)=h(x,t), & (x,t)\in\Omega\times[0,T],\end{cases} \tag{3.6}$$
where ξ is the parameter of the equation, so the loss function LIV for the inverse problem under consideration is
$$L_{IV}\{\mu,\xi=\{\alpha,\gamma,v\}\}=W_{I_1}\frac{1}{|S_{I_1}|}\sum_{(x,t)\in S_{I_1}}\Big[\mathcal{L}^{\{\alpha,\gamma\}}_{FDM}\{\tilde u(x,t)\}+\mathcal{L}^{v}_{AD}\{\tilde u(x,t)\}-f(x,t)\Big]^2+W_{I_2}\frac{1}{|S_{I_2}|}\sum_{(x,t)\in S_{I_2}}\big[\tilde u(x,t)-h(x,t)\big]^2, \tag{3.7}$$
where $\alpha\in(0,1)$ and $\gamma\in(1,2)$, $S_{I_1}\subset\Omega\times[0,T]$ and $S_{I_2}\subset\Omega\times[0,T]$ are two sets of different training points, and $W_{I_1}$ and $W_{I_2}$ are preselected weight coefficients. We train the NNs to minimize the loss function, thereby obtaining $\alpha_{best}$ and $\gamma_{best}$, the flow velocity $v_{best}$, and the optimal parameters $\mu_{best}$ of the NNs.
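The parameter-identification mechanism can be illustrated without a neural network. In the toy sketch below (an assumption-laden stand-in for the paper's setup: the spatial part is dropped, the solution is known exactly, and a grid search replaces Adam), a manufactured $u(t)=t^4$ with true order $\alpha^{*}=0.6$ generates the data $f$, and minimizing the squared residual of the equation, discretized by the standard L1 scheme for the Caputo derivative (cf. (3.14)), over $\alpha$ recovers the order:

```python
import math

def l1_caputo(u, n, tau, alpha):
    """L1 approximation (cf. (3.14)) of the Caputo derivative of order
    alpha in (0,1) at t_n = n*tau, for a time series u[0..N]."""
    a = lambda k: (k + 1) ** (1 - alpha) - k ** (1 - alpha)
    s = a(0) * u[n] - a(n - 1) * u[0]
    for k in range(1, n):
        s += (a(n - k) - a(n - k - 1)) * u[k]
    return tau ** (-alpha) / math.gamma(2 - alpha) * s

# manufactured data: u(t) = t^4, true order 0.6, so the "measured" right-hand
# side is the exact Caputo derivative Gamma(5)/Gamma(4.4) * t^3.4
tau, N, alpha_true = 0.01, 100, 0.6
u = [(n * tau) ** 4 for n in range(N + 1)]
f = [math.gamma(5) / math.gamma(5 - alpha_true) * (n * tau) ** (4 - alpha_true)
     for n in range(N + 1)]

def loss(alpha):
    # squared residual of the discretized equation, in the spirit of (3.7)
    return sum((l1_caputo(u, n, tau, alpha) - f[n]) ** 2 for n in range(1, N + 1))

# crude grid search standing in for gradient-based training of alpha
alpha_best = min((0.05 + 0.01 * i for i in range(91)), key=loss)
print(alpha_best)  # close to the true order 0.6
```

The unknown order enters the discretized operator, so minimizing the residual loss over it is exactly the mechanism gfPINNs use, with Adam in place of the grid search.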
We incorporate the residual information of the equation into the loss function of the NNs and train the NNs to minimize this loss function, thus obtaining the optimal parameters of the NNs. If the residual of a PDE is zero, then the gradient of that residual should also be zero, so the vanishing of the residual gradient is a necessary condition that the trained NNs should satisfy. One motivation behind gfPINNs is that the residual in the loss function often fluctuates near zero; penalizing the slope of the residual can reduce these fluctuations, making the residual closer to zero. In this section, we continue to consider the formulation of the forward and inverse problems of the equation discussed in the previous section.
We first consider the forward problem in the form of (3.1) and provide the loss function of gfPINNs for this form:
$$L_{gFW}=W_F L_{FW}+W_{g_1F}L_{g_1FW}+W_{g_2F}L_{g_2FW}, \tag{3.8}$$
where
$$L_{g_1FW}=\frac{1}{|S_{g_1F}|}\sum_{(x,t)\in S_{g_1F}}\Big[\frac{\partial \mathcal{L}_{FDM}\{\tilde u(x,t)\}}{\partial x}+\frac{\partial \mathcal{L}_{AD}\{\tilde u(x,t)\}}{\partial x}-\frac{\partial f(x,t)}{\partial x}\Big]^2, \tag{3.9}$$
$$L_{g_2FW}=\frac{1}{|S_{g_2F}|}\sum_{(x,t)\in S_{g_2F}}\Big[\frac{\partial \mathcal{L}_{FDM}\{\tilde u(x,t)\}}{\partial t}+\frac{\partial \mathcal{L}_{AD}\{\tilde u(x,t)\}}{\partial t}-\frac{\partial f(x,t)}{\partial t}\Big]^2, \tag{3.10}$$
and the approximate solution of the equation is the same as in Eq (3.4): $\tilde u(x,t)=t\,\rho(x)u_{NN}(x,t)+g(x)$. The expression $L_{FW}$ is as shown in Eq (3.5), where $W_F$, $W_{g_1F}$, and $W_{g_2F}$ are preselected weighting coefficients, and $S_{g_1F}\subset\Omega\times[0,T]$ and $S_{g_2F}\subset\Omega\times[0,T]$ are two sets of different training points.
Next, we consider the inverse problem in the form of (3.6). The approach of gfPINNs for the inverse problem parallels that of fPINNs, and its loss function is
$$L_{gIV}=W_I L_{IV}\{\mu,\xi=\{\alpha,\gamma,v\}\}+W_{g_1I}L_{g_1IV}+W_{g_2I}L_{g_2IV}, \tag{3.11}$$
where
$$L_{g_1IV}=W_{g_1I_1}\frac{1}{|S_{g_1I_1}|}\sum_{(x,t)\in S_{g_1I_1}}\Big[\frac{\partial \mathcal{L}^{\{\alpha,\gamma\}}_{FDM}\{\tilde u(x,t)\}}{\partial x}+\frac{\partial \mathcal{L}^{v}_{AD}\{\tilde u(x,t)\}}{\partial x}-\frac{\partial f(x,t)}{\partial x}\Big]^2+W_{g_1I_2}\frac{1}{|S_{g_1I_2}|}\sum_{(x,t)\in S_{g_1I_2}}\Big[\frac{\partial\tilde u(x,t)}{\partial x}-\frac{\partial h(x,t)}{\partial x}\Big]^2, \tag{3.12}$$
$$L_{g_2IV}=W_{g_2I_1}\frac{1}{|S_{g_2I_1}|}\sum_{(x,t)\in S_{g_2I_1}}\Big[\frac{\partial \mathcal{L}^{\{\alpha,\gamma\}}_{FDM}\{\tilde u(x,t)\}}{\partial t}+\frac{\partial \mathcal{L}^{v}_{AD}\{\tilde u(x,t)\}}{\partial t}-\frac{\partial f(x,t)}{\partial t}\Big]^2+W_{g_2I_2}\frac{1}{|S_{g_2I_2}|}\sum_{(x,t)\in S_{g_2I_2}}\Big[\frac{\partial\tilde u(x,t)}{\partial t}-\frac{\partial h(x,t)}{\partial t}\Big]^2, \tag{3.13}$$
and the expression $L_{IV}\{\mu,\xi=\{\alpha,\gamma,v\}\}$ is as shown in Eq (3.7), where $W_I$, $W_{g_1I}$, $W_{g_2I}$, $W_{g_1I_1}$, $W_{g_1I_2}$, $W_{g_2I_1}$, and $W_{g_2I_2}$ are preselected weighting coefficients, and $S_{g_1I_1},S_{g_2I_1}\subset\Omega\times[0,T]$ and $S_{g_1I_2},S_{g_2I_2}\subset\Omega\times[0,T]$ are four sets of different training points.
This defines the loss function of gfPINNs; the training procedure is exactly the same as discussed above for fPINNs. We train the NNs to obtain their optimal parameters.
In the $x$ direction $[0,M]$, we take the mesh points $x_i=ih_x$, $i=0,1,2,\ldots,M_1$, and in the $t$ direction $[0,T]$, we take the mesh points $t_n=n\tau$, $n=0,1,\ldots,N$, where $h_x=M/M_1$ and $\tau=T/N$ are the uniform spatial and temporal step sizes, respectively. Denote $\Omega_h\equiv\{x_i\mid 0\le i\le M_1\}$ and $\Omega_\tau\equiv\{t_n\mid 0\le n\le N\}$. Suppose $u_i^n=u(x_i,t_n)$ is a grid function on $\Omega_h\times\Omega_\tau$.
We approximate the fractional derivatives of the equation using the finite difference discretization [33,34].
For $\alpha\in(0,1)$, we have ${}_{0}^{C}D_{t}^{\alpha}u(x,t)\big|_{(x_i,t_n)}=D_{\tau}^{\alpha}\tilde u_i^n+R_1(\tilde u_i^n)$, with
$$D_{\tau}^{\alpha}\tilde u_i^n:=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\Big[a_0^{\alpha}\tilde u_i^n+\sum_{k=1}^{n-1}\big(a_{n-k}^{\alpha}-a_{n-k-1}^{\alpha}\big)\tilde u_i^k-a_{n-1}^{\alpha}\tilde u_i^0\Big], \tag{3.14}$$
where $\tilde u_i^n=\tilde u(x_i,t_n)$, $R_1\le C\tau^{2-\alpha}$, and $a_k^{\alpha}=(k+1)^{1-\alpha}-k^{1-\alpha}$.
Lemma 3.1. [33] Let $\alpha\in(0,1)$ and $a_l^{\alpha}=(l+1)^{1-\alpha}-l^{1-\alpha}$, $l=0,1,2,\ldots$ Then:
(1) $1=a_0^{\alpha}>a_1^{\alpha}>a_2^{\alpha}>\cdots>a_l^{\alpha}>0$, with $\lim_{l\to\infty}a_l^{\alpha}=0$;
(2) $(1-\alpha)l^{-\alpha}<a_{l-1}^{\alpha}<(1-\alpha)(l-1)^{-\alpha}$ for $l\ge 1$.
For $\gamma\in(1,2)$, we have ${}_{0}^{C}D_{t}^{\gamma}u(x,t)\big|_{(x_i,t_n)}=D_{\tau}^{\gamma}\tilde u_i^n+R_2(\tilde u_i^n)$, with
$$D_{\tau}^{\gamma}\tilde u_i^n:=\frac{\tau^{1-\gamma}}{\Gamma(3-\gamma)}\Big[b_0^{\gamma}\delta_t\tilde u_i^n+\sum_{k=1}^{n-1}\big(b_{n-k}^{\gamma}-b_{n-k-1}^{\gamma}\big)\delta_t\tilde u_i^k-b_{n-1}^{\gamma}\delta_t\tilde u_i^0\Big], \tag{3.15}$$
where $\delta_t u(x,t)=\frac{\partial u(x,t)}{\partial t}$, $R_2\le C\tau^{3-\gamma}$, and $b_k^{\gamma}=(k+1)^{2-\gamma}-k^{2-\gamma}$.
Given the spatial position $x$, it can be seen from the finite difference discretization that the time-fractional derivative of $\tilde u(x,t)$ evaluated at time $t$ depends on the values of $\tilde u(x,\cdot)$ at all previous times $0,\tau,2\tau,\cdots,t$. We call the current time a training point and the previous times its auxiliary points.
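As a sanity check of the L1 formula (3.14), the stdlib-only sketch below (an illustration, not the paper's code) applies it to $u(t)=t^p$, whose exact Caputo derivative is $\Gamma(p+1)/\Gamma(p+1-\alpha)\,t^{p-\alpha}$, and estimates the observed convergence order, which should be close to $2-\alpha$:

```python
import math

def l1_caputo_at_T(p, alpha, N, T=1.0):
    """L1 scheme (3.14) for the Caputo derivative of u(t) = t^p,
    evaluated at the final time t_N = T, with N uniform steps."""
    tau = T / N
    a = lambda k: (k + 1) ** (1 - alpha) - k ** (1 - alpha)
    u = [(n * tau) ** p for n in range(N + 1)]
    s = a(0) * u[N] - a(N - 1) * u[0]
    for k in range(1, N):
        s += (a(N - k) - a(N - k - 1)) * u[k]
    return tau ** (-alpha) / math.gamma(2 - alpha) * s

p, alpha = 4, 0.5
exact = math.gamma(p + 1) / math.gamma(p + 1 - alpha)  # exact value at t = 1

err_coarse = abs(l1_caputo_at_T(p, alpha, 50) - exact)
err_fine = abs(l1_caputo_at_T(p, alpha, 100) - exact)
order = math.log(err_coarse / err_fine, 2)  # observed convergence order
print(err_fine, order)  # error shrinks at a rate close to 2 - alpha = 1.5
```

Halving the step size should reduce the error by roughly $2^{2-\alpha}$, consistent with the bound $R_1\le C\tau^{2-\alpha}$.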
In this section, we demonstrate the effectiveness of gfPINNs in solving forward and inverse problems of the multiterm time-fractional Burger-type equation and compare fPINNs with gfPINNs. We solve the forward problems of the equation and present the experimental results in Section 4.1, and we solve the inverse problems and present the results in Section 4.2.
We use a fabricated solution $u(x,t)=t^p\sin(\pi x)$. In the approximate solution (3.4), the auxiliary function $\rho(\cdot)$ is defined as $\rho(\cdot)=1-\|x\|_2^2$. We use the following form of the $L_2$ relative error:
$$\frac{\Big\{\sum_k\big[u(x_{test,k},t_{test,k})-\tilde u(x_{test,k},t_{test,k})\big]^2\Big\}^{1/2}}{\Big\{\sum_k\big[u(x_{test,k},t_{test,k})\big]^2\Big\}^{1/2}} \tag{4.1}$$
to measure the performance of the NNs, where $\tilde u$ denotes the approximate solution, $u$ is the exact solution, and $(x_{test,k},t_{test,k})$ denotes the $k$-th test point.
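Computationally, (4.1) is just the discrete $L_2$ norm of the error divided by the discrete $L_2$ norm of the exact solution over the test set. A minimal sketch, with a hypothetical perturbed predictor standing in for a trained network:

```python
import math

def l2_relative_error(u_exact, u_pred, points):
    """L2 relative error (4.1) over a list of (x, t) test points."""
    num = sum((u_exact(x, t) - u_pred(x, t)) ** 2 for x, t in points)
    den = sum(u_exact(x, t) ** 2 for x, t in points)
    return math.sqrt(num) / math.sqrt(den)

u = lambda x, t: t ** 4 * math.sin(math.pi * x)        # exact solution
u_hat = lambda x, t: u(x, t) + 1e-4 * x * (1 - x) * t  # hypothetical prediction

pts = [(i / 20, j / 20) for i in range(1, 20) for j in range(1, 21)]
print(l2_relative_error(u, u_hat, pts))  # small error driven by the 1e-4 perturbation
```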
We wrote the code in Python and took advantage of the automatic differentiation capability of TensorFlow [37]. The stochastic gradient descent Adam algorithm [38] was used to optimize the loss function. We initialized the NNs parameters using normalized Glorot initialization [39]. Unless otherwise stated, when training a neural network we set the learning rate, the number of neurons per hidden layer, the number of hidden layers, and the activation function to $1\times10^{-3}$, 20, 4, and $\tanh(x)$, respectively.
In this section, we consider the multiterm time-fractional Burger-type equation of the form (2.1) with initial and boundary conditions (2.2). We let $v=1$, $(x,t)\in[0,1]\times[0,1]$, and $g(x)=0$, considering the smooth fabricated solution $u(x,t)=t^p\sin(\pi x)$ and the forcing term
$$f(x,t)=c_1\frac{\Gamma(p+1)}{\Gamma(p+1-\alpha)}t^{\,p-\alpha}\sin(\pi x)+c_2\frac{\Gamma(p+1)}{\Gamma(p+1-\gamma)}t^{\,p-\gamma}\sin(\pi x)+\pi t^{2p}\sin(\pi x)\cos(\pi x)+\pi^{2}t^{p}\sin(\pi x). \tag{4.2}$$
Case 1: We choose $c_1=1$, $c_2=0$, and $\alpha=0.5$, considering the smooth fabricated solution $u(x,t)=t^4\sin(\pi x)$ and the forcing term $f(x,t)=\frac{\Gamma(5)}{\Gamma(4.5)}t^{3.5}\sin(\pi x)+\pi t^{8}\sin(\pi x)\cos(\pi x)+\pi^{2}t^{4}\sin(\pi x)$. We consider $M_1-1$ training points in the spatial domain, $x_i=ih_x$ for $i=1,2,\cdots,M_1-1$, and $N$ training points in the time domain, $t_n=n\tau$ for $n=1,2,\cdots,N$. We do not need to place training points on the initial and boundary conditions since the approximate solution $\tilde u(x,t)=t\,x(1-x)\,u_{NN}(x,t;\mu)$ satisfies them automatically. For fPINNs, the loss function can be written as
$$L_{FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[a_0^{0.5}\tilde u(x_i,t_n)+\sum_{k=1}^{n-1}\big(a_{n-k}^{0.5}-a_{n-k-1}^{0.5}\big)\tilde u(x_i,t_k)\Big]+\tilde u(x_i,t_n)\frac{\partial\tilde u(x_i,t_n)}{\partial x}-\frac{\partial^{2}\tilde u(x_i,t_n)}{\partial x^{2}}-f(x_i,t_n)\Big\}^2. \tag{4.3}$$
The gradient losses of gfPINNs can be given as
$$L_{g_1FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[a_0^{0.5}\frac{\partial\tilde u(x_i,t_n)}{\partial x}+\sum_{k=1}^{n-1}\big(a_{n-k}^{0.5}-a_{n-k-1}^{0.5}\big)\frac{\partial\tilde u(x_i,t_k)}{\partial x}\Big]+\tilde u(x_i,t_n)\frac{\partial^{2}\tilde u(x_i,t_n)}{\partial x^{2}}+\Big(\frac{\partial\tilde u(x_i,t_n)}{\partial x}\Big)^2-\frac{\partial^{3}\tilde u(x_i,t_n)}{\partial x^{3}}-\frac{\partial f(x_i,t_n)}{\partial x}\Big\}^2, \tag{4.4}$$
$$L_{g_2FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[a_0^{0.5}\frac{\partial\tilde u(x_i,t_n)}{\partial t}+\sum_{k=1}^{n-1}\big(a_{n-k}^{0.5}-a_{n-k-1}^{0.5}\big)\frac{\partial\tilde u(x_i,t_k)}{\partial t}\Big]+\tilde u(x_i,t_n)\frac{\partial^{2}\tilde u(x_i,t_n)}{\partial x\,\partial t}+\frac{\partial\tilde u(x_i,t_n)}{\partial x}\frac{\partial\tilde u(x_i,t_n)}{\partial t}-\frac{\partial^{3}\tilde u(x_i,t_n)}{\partial x^{2}\,\partial t}-\frac{\partial f(x_i,t_n)}{\partial t}\Big\}^2. \tag{4.5}$$
By substituting Eqs (4.3)–(4.5) into Eq (3.8), we get the gfPINNs loss function $L_{gFW}$ with $W_F=1$, $W_{g_1F}=1$, and $W_{g_2F}=1$. Next, we select 2000 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Figures 2–4 present a comparison between the predicted solutions from the fPINNs and gfPINNs models and the exact solution of the equation, demonstrating that gfPINNs can effectively solve the equation. Figure 5 shows the absolute errors between the exact solution and the solutions predicted by fPINNs and gfPINNs; it can be seen that the prediction performance of gfPINNs is better than that of fPINNs. Figure 6 illustrates the $L_2$ relative errors of both models for a single experiment as the iteration count varies, showing that while both can achieve errors as low as $10^{-4}$, gfPINNs exhibit comparatively lower error and reduced oscillation.
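Since (4.3) is the mean squared residual of the discretized equation, evaluating it at the exact solution $u(x,t)=t^4\sin(\pi x)$ (with the spatial derivatives taken analytically) should leave only the $O(\tau^{1.5})$ truncation error of the L1 term. The stdlib-only sketch below (an illustrative consistency check, not the TensorFlow implementation used in the experiments) verifies this:

```python
import math

M1, N = 21, 100
hx, tau = 1.0 / M1, 1.0 / N  # uniform steps on [0,1] x [0,1]
a = lambda k: (k + 1) ** 0.5 - k ** 0.5  # coefficients a_k^{0.5}
g15 = math.gamma(1.5)

u = lambda x, t: t ** 4 * math.sin(math.pi * x)
u_x = lambda x, t: math.pi * t ** 4 * math.cos(math.pi * x)
u_xx = lambda x, t: -math.pi ** 2 * t ** 4 * math.sin(math.pi * x)

def f(x, t):  # forcing term of Case 1 (alpha = 0.5, c1 = 1, c2 = 0)
    return (math.gamma(5) / math.gamma(4.5) * t ** 3.5 * math.sin(math.pi * x)
            + math.pi * t ** 8 * math.sin(math.pi * x) * math.cos(math.pi * x)
            + math.pi ** 2 * t ** 4 * math.sin(math.pi * x))

loss = 0.0
for i in range(1, M1):
    x = i * hx
    for n in range(1, N + 1):
        t = n * tau
        # L1 approximation of the Caputo derivative (u(x, 0) = 0 drops out)
        caputo = a(0) * u(x, t)
        for k in range(1, n):
            caputo += (a(n - k) - a(n - k - 1)) * u(x, k * tau)
        caputo *= tau ** -0.5 / g15
        residual = caputo + u(x, t) * u_x(x, t) - u_xx(x, t) - f(x, t)
        loss += residual ** 2
loss /= (M1 - 1) * N
print(loss)  # near zero: only the L1 truncation error of the Caputo term remains
```

In the actual models the derivatives of $\tilde u$ are obtained by automatic differentiation rather than analytically, but the structure of the loss is the same.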
Case 2: We choose $c_1=0$, $c_2=1$, and $\gamma=1.5$, considering the smooth fabricated solution $u(x,t)=t^4\sin(\pi x)$ and the forcing term $f(x,t)=\frac{\Gamma(5)}{\Gamma(3.5)}t^{2.5}\sin(\pi x)+\pi t^{8}\sin(\pi x)\cos(\pi x)+\pi^{2}t^{4}\sin(\pi x)$. Similarly, we give the loss function of fPINNs as
$$L_{FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[b_0^{1.5}\frac{\partial\tilde u(x_i,t_n)}{\partial t}+\sum_{k=1}^{n-1}\big(b_{n-k}^{1.5}-b_{n-k-1}^{1.5}\big)\frac{\partial\tilde u(x_i,t_k)}{\partial t}\Big]+\tilde u(x_i,t_n)\frac{\partial\tilde u(x_i,t_n)}{\partial x}-\frac{\partial^{2}\tilde u(x_i,t_n)}{\partial x^{2}}-f(x_i,t_n)\Big\}^2. \tag{4.6}$$
For gfPINNs, the gradient losses can be written as
$$L_{g_1FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[b_0^{1.5}\frac{\partial^{2}\tilde u(x_i,t_n)}{\partial t\,\partial x}+\sum_{k=1}^{n-1}\big(b_{n-k}^{1.5}-b_{n-k-1}^{1.5}\big)\frac{\partial^{2}\tilde u(x_i,t_k)}{\partial t\,\partial x}\Big]+\tilde u(x_i,t_n)\frac{\partial^{2}\tilde u(x_i,t_n)}{\partial x^{2}}+\Big(\frac{\partial\tilde u(x_i,t_n)}{\partial x}\Big)^2-\frac{\partial^{3}\tilde u(x_i,t_n)}{\partial x^{3}}-\frac{\partial f(x_i,t_n)}{\partial x}\Big\}^2, \tag{4.7}$$
$$L_{g_2FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[b_0^{1.5}\frac{\partial^{2}\tilde u(x_i,t_n)}{\partial t^{2}}+\sum_{k=1}^{n-1}\big(b_{n-k}^{1.5}-b_{n-k-1}^{1.5}\big)\frac{\partial^{2}\tilde u(x_i,t_k)}{\partial t^{2}}\Big]+\tilde u(x_i,t_n)\frac{\partial^{2}\tilde u(x_i,t_n)}{\partial x\,\partial t}+\frac{\partial\tilde u(x_i,t_n)}{\partial x}\frac{\partial\tilde u(x_i,t_n)}{\partial t}-\frac{\partial^{3}\tilde u(x_i,t_n)}{\partial x^{2}\,\partial t}-\frac{\partial f(x_i,t_n)}{\partial t}\Big\}^2. \tag{4.8}$$
By substituting Eqs (4.6)–(4.8) into Eq (3.8), we get the gfPINNs loss function $L_{gFW}$ with $W_F=1$, $W_{g_1F}=0.16$, and $W_{g_2F}=0.16$. Next, we select 2000 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Figures 7–9 present a comparison between the predicted solutions from the fPINNs and gfPINNs models and the exact solution of the equation, demonstrating that gfPINNs can effectively solve the equation. Figure 10 illustrates the absolute errors between the exact solution and the solutions predicted by both fPINNs and gfPINNs, revealing that gfPINNs exhibit a relatively smaller absolute error. Figure 11 presents the iteration convergence curves of both models for a single experiment, revealing that while both can achieve $L_2$ relative errors of $10^{-4}$ with increasing iterations, the prediction errors of gfPINNs are lower and more stable, resulting in superior prediction performance compared to fPINNs.
We use the code that solves the forward problem to solve the inverse problem; we simply add the parameters to be identified in the inverse problem to the list of parameters to be optimized in the forward problem, without changing anything else. In this section, gfPINNs are applied to solve the inverse problems of the multiterm time-fractional Burger-type equation of the form (3.6). We let $v=1$, $(x,t)\in[0,1]\times[0,1]$, and $g(x)=0$, and consider additional concentration measurements at the final time, $u(x,1)=h(x,1)$. Here, we still consider the smooth fabricated solution $u(x,t)=t^p\sin(\pi x)$ and the forcing term of formula (4.2).
Case 1: We choose $c_1=1$ and $c_2=0$. Similarly, we get the gfPINNs loss function $L_{gIV}$ with $W_I=1$, $W_{g_1I}=0.25$, and $W_{g_2I}=0.25$. We set the true fractional order to 0.6. We selected 470 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Figures 12–14 display a comparison between the predicted solutions from the fPINNs and gfPINNs models and the exact solution of the equation, demonstrating that gfPINNs can effectively solve the problem. Figure 15 illustrates the absolute errors between the exact solution and the solutions predicted by both fPINNs and gfPINNs, revealing that gfPINNs exhibit a relatively smaller and more stable absolute error. Figure 16 illustrates the iteration convergence curves of fPINNs and gfPINNs for a single experiment, indicating that although gfPINNs incur a higher computational cost for solving the inverse problem due to the additional loss terms, both models can achieve $L_2$ relative errors of $10^{-4}$ as iterations progress, with gfPINNs showing a lower and more stable error curve than fPINNs.
Case 2: We choose $c_1=0$ and $c_2=1$. Similarly, we get the gfPINNs loss function $L_{gIV}$ with $W_I=1$, $W_{g_1I}=0.16$, and $W_{g_2I}=0.0001$. We set the true fractional order to 1.6. We selected 400 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. By training the NNs and observing the experimental results, we reach conclusions similar to those of Case 1. Figures 17–19 display a comparison between the predicted solutions from the fPINNs and gfPINNs models and the exact solution of the equation, demonstrating that gfPINNs can effectively solve the problem. Figure 20 illustrates the absolute errors between the exact solution and the solutions predicted by both fPINNs and gfPINNs, revealing that gfPINNs exhibit a relatively smaller absolute error. Figure 21 compares the $L_2$ relative errors of fPINNs and gfPINNs for a single experiment as iterations progress, revealing that while gfPINNs incur a higher computational cost due to the additional loss terms, both models can achieve an $L_2$ relative error of $10^{-3}$, with gfPINNs demonstrating a lower and more stable error curve than fPINNs.
In this paper, the effectiveness of gfPINNs in solving the forward and inverse problems of the multiterm time-fractional Burger-type equation is verified through numerical examples. The $L_2$ relative errors of the solutions predicted by both fPINNs and gfPINNs can reach $10^{-4}$ for forward problems and $10^{-3}$ or even $10^{-4}$ for inverse problems. The experimental results indicate that gfPINNs achieve relatively lower and more stable errors as training iterations increase, thereby enhancing prediction performance. Nonetheless, the additional loss terms in gfPINNs may incur a higher computational cost; for example, when solving inverse problems, fPINNs converge faster than gfPINNs.
Shanhao Yuan, Yanqin Liu, Qiuping Li and Chao Guo: Conceptualization, Methodology; Yibin Xu, Shanhao Yuan and Yanfeng Shen: Software, Visualization, Validation; Shanhao Yuan: Writing–Original draft preparation; Yanqin Liu: Writing–Reviewing & editing. All authors have read and approved the final version of the manuscript for publication.
We appreciated the support by the Natural Science Foundation of Shandong Province (ZR2023MA062), the National Science Foundation of China (62103079), the Belt and Road Special Foundation of The National Key Laboratory of Water Disaster Prevention (2023491911), and the Open Research Fund Program of the Data Recovery Key Laboratory of Sichuan Province (DRN19020).
The authors declare that they have no conflicts of interest.
[1] | R. E. Gutierrez, J. M. Rosario, J. Tenreiro Machado, Fractional order calculus: basic concepts and engineering applications, Math. Probl. Eng., 2010 (2010), 375858. |
[2] |
M. Inc, A. Yusuf, A. I. Aliyu, D. Baleanu, Investigation of the logarithmic-KdV equation involving Mittag-Leffler type kernel with Atangana–Baleanu derivative, Physica A, 506 (2018), 520–531. doi: 10.1016/j.physa.2018.04.092
![]() |
[3] |
R. Almeida, N. R. Bastos, M. T. T. Monteiro, Modeling some real phenomena by fractional differential equations, Math. Method. Appl. Sci., 39 (2016), 4846–4855. doi: 10.1002/mma.3818
![]() |
[4] R. Almeida, A. B. Malinowska, M. T. T. Monteiro, Fractional differential equations with a Caputo derivative with respect to a kernel function and their applications, Math. Method. Appl. Sci., 41 (2018), 336–352. doi: 10.1002/mma.4617
[5] A. Yusuf, S. Qureshi, M. Inc, A. I. Aliyu, D. Baleanu, A. A. Shaikh, Two-strain epidemic model involving fractional derivative with Mittag-Leffler kernel, Chaos, 28 (2018), 123121. doi: 10.1063/1.5074084
[6] M. Awadalla, Y. Yameni, Modeling exponential growth and exponential decay real phenomena by ψ-Caputo fractional derivative, JAMCS, 28 (2018), 1–13.
[7] A. Jajarmi, S. Arshad, D. Baleanu, A new fractional modelling and control strategy for the outbreak of dengue fever, Physica A, 535 (2019), 122524. doi: 10.1016/j.physa.2019.122524
[8] M. Khader, K. M. Saad, Numerical treatment for studying the blood ethanol concentration systems with different forms of fractional derivatives, Int. J. Mod. Phys. C, 31 (2020). doi: 10.1142/S0129183120500448
[9] S. Bushnaq, S. A. Khan, K. Shah, G. Zaman, Existence theory of HIV-1 infection model by using arbitrary order derivative of without singular kernel type, Journal of Mathematical Analysis, 9 (2018), 16–28.
[10] S. Rezapour, H. Mohammadi, A study on the AH1N1/09 influenza transmission model with the fractional Caputo–Fabrizio derivative, Adv. Differ. Equ., 2020 (2020), 1–15. doi: 10.1186/s13662-019-2438-0
[11] S. B. Chen, S. Rashid, M. A. Noor, R. Ashraf, Y. M. Chu, A new approach on fractional calculus and probability density function, AIMS Mathematics, 5 (2020), 7041–7054. doi: 10.3934/math.2020451
[12] S. Das, I. Pan, Fractional order signal processing: introductory concepts and applications, Springer Science & Business Media, 2011.
[13] F. Meral, T. Royston, R. Magin, Fractional calculus in viscoelasticity: an experimental study, Commun. Nonlinear Sci., 15 (2010), 939–945. doi: 10.1016/j.cnsns.2009.05.004
[14] K. M. Owolabi, A. Atangana, On the formulation of Adams-Bashforth scheme with Atangana-Baleanu-Caputo fractional derivative to model chaotic problems, Chaos, 29 (2019), 023111. doi: 10.1063/1.5085490
[15] M. Inc, A. Yusuf, A. I. Aliyu, D. Baleanu, Investigation of the logarithmic-KdV equation involving Mittag-Leffler type kernel with Atangana–Baleanu derivative, Physica A, 506 (2018), 520–531. doi: 10.1016/j.physa.2018.04.092
[16] A. Atangana, A. Akgül, K. M. Owolabi, Analysis of fractal fractional differential equations, Alex. Eng. J., 59 (2020), 1117–1134. doi: 10.1016/j.aej.2020.01.005
[17] A. Akgül, A novel method for a fractional derivative with non-local and non-singular kernel, Chaos Soliton. Fract., 114 (2018), 478–482. doi: 10.1016/j.chaos.2018.07.032
[18] K. M. Owolabi, A. Atangana, Chaotic behaviour in system of noninteger-order ordinary differential equations, Chaos Soliton. Fract., 115 (2018), 362–370. doi: 10.1016/j.chaos.2018.07.034
[19] K. M. Owolabi, Z. Hammouch, Spatiotemporal patterns in the Belousov–Zhabotinskii reaction systems with Atangana–Baleanu fractional order derivative, Physica A, 523 (2019), 1072–1090. doi: 10.1016/j.physa.2019.04.017
[20] D. Mathale, E. F. Doungmo Goufo, M. Khumalo, Coexistence of multi-scroll chaotic attractors for fractional systems with exponential law and non-singular kernel, Chaos Soliton. Fract., 139 (2020), 110021. doi: 10.1016/j.chaos.2020.110021
[21] E. F. Doungmo Goufo, Mathematical analysis of peculiar behavior by chaotic, fractional and strange multiwing attractors, Int. J. Bifurcat. Chaos, 28 (2018), 1850125. doi: 10.1142/S0218127418501250
[22] I. Podlubny, Fractional differential equations: an introduction to fractional derivatives, fractional differential equations, to methods of their solution and some of their applications, Elsevier, 1998.
[23] E. F. Doungmo Goufo, J. J. Nieto, Attractors for fractional differential problems of transition to turbulent flows, J. Comput. Appl. Math., 339 (2018), 329–342. doi: 10.1016/j.cam.2017.08.026
[24] S. Eftekhari, A. Jafari, Numerical simulation of chaotic dynamical systems by the method of differential quadrature, Sci. Iran., 19 (2012), 1299–1315. doi: 10.1016/j.scient.2012.08.003
[25] E. F. Doungmo Goufo, The Proto-Lorenz system in its chaotic fractional and fractal structure, Int. J. Bifurcat. Chaos, 30 (2020), 2050180. doi: 10.1142/S0218127420501801
[26] E. F. Doungmo Goufo, Y. Khan, A new auto-replication in systems of attractors with two and three merged basins of attraction via control, Commun. Nonlinear Sci., 96 (2021), 105709. doi: 10.1016/j.cnsns.2021.105709
[27] E. F. Doungmo Goufo, Multi-directional and saturated chaotic attractors with many scrolls for fractional dynamical systems, Discrete Cont. Dyn. S, 13 (2020), 629–643.
[28] E. F. Doungmo Goufo, Fractal and fractional dynamics for a 3D autonomous and two-wing smooth chaotic system, Alex. Eng. J., 59 (2020), 2469–2476. doi: 10.1016/j.aej.2020.03.011
[29] Y. Mousavi, A. Alfi, Fractional calculus-based firefly algorithm applied to parameter estimation of chaotic systems, Chaos Soliton. Fract., 114 (2018), 202–215. doi: 10.1016/j.chaos.2018.07.004
[30] M. Fiaz, M. Aqeel, Fractional order analysis of modified stretch–twist–fold flow with synchronization control, AIP Adv., 10 (2020), 125202. doi: 10.1063/5.0026319
[31] Q. Jia, Hyperchaos generated from the Lorenz chaotic system and its control, Phys. Lett. A, 366 (2007), 217–222. doi: 10.1016/j.physleta.2007.02.024
[32] Y. Yu, H. X. Li, S. Wang, J. Yu, Dynamic analysis of a fractional-order Lorenz chaotic system, Chaos Soliton. Fract., 42 (2009), 1181–1189. doi: 10.1016/j.chaos.2009.03.016
[33] H. T. Yau, J. J. Yan, Design of sliding mode controller for Lorenz chaotic system with nonlinear input, Chaos Soliton. Fract., 19 (2004), 891–898. doi: 10.1016/S0960-0779(03)00255-8
[34] E. N. Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci., 20 (1963), 130–141. doi: 10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2
[35] A. Azam, M. Aqeel, S. Ahmad, F. Ahmad, Chaotic behavior of modified stretch-twist-fold (STF) flow with fractal property, Nonlinear Dyn., 90 (2017), 1–12. doi: 10.1007/s11071-017-3641-8
[36] T. Zhou, Y. Tang, G. Chen, Chen's attractor exists, Int. J. Bifurcat. Chaos, 14 (2004), 3167–3177.
[37] O. E. Rössler, An equation for continuous chaos, Phys. Lett. A, 57 (1976), 397–398. doi: 10.1016/0375-9601(76)90101-8
[38] L. Zhou, F. Chen, Sil'nikov chaos of the Liu system, Chaos, 18 (2008), 013113. doi: 10.1063/1.2839909
[39] A. Atangana, Fractal-fractional differentiation and integration: connecting fractal calculus and fractional calculus to predict complex system, Chaos Soliton. Fract., 102 (2017), 396–406. doi: 10.1016/j.chaos.2017.04.027
[40] M. Giona, Fractal calculus on [0, 1], Chaos Soliton. Fract., 5 (1995), 987–1000.
[41] J. Fan, J. He, Fractal derivative model for air permeability in hierarchic porous media, Abstr. Appl. Anal., 2012 (2012), 354701.
[42] Y. Hu, J. H. He, On fractal space-time and fractional calculus, Therm. Sci., 20 (2016), 773–777.
[43] S. Qureshi, A. Atangana, Fractal-fractional differentiation for the modeling and mathematical analysis of nonlinear diarrhea transmission dynamics under the use of real data, Chaos Soliton. Fract., 136 (2020), 109812. doi: 10.1016/j.chaos.2020.109812
[44] H. Srivastava, K. M. Saad, Numerical simulation of the fractal-fractional Ebola Virus, Fractal Fract., 4 (2020), 49. doi: 10.3390/fractalfract4040049
[45] A. Atangana, S. Qureshi, Modeling attractors of chaotic dynamical systems with fractal–fractional operators, Chaos Soliton. Fract., 123 (2019), 320–337. doi: 10.1016/j.chaos.2019.04.020
[46] E. F. Doungmo Goufo, Chaotic processes using the two-parameter derivative with non-singular and non-local kernel: basic theory and applications, Chaos, 26 (2016), 084305. doi: 10.1063/1.4958921
[47] A. Atangana, D. Baleanu, New fractional derivatives with nonlocal and non-singular kernel: theory and application to heat transfer model, 2016, arXiv: 1602.03408.
[48] A. Atangana, I. Koca, Chaos in a simple nonlinear system with Atangana–Baleanu derivatives with fractional order, Chaos Soliton. Fract., 89 (2016), 447–454. doi: 10.1016/j.chaos.2016.02.012
[49] J. C. B. de Figueiredo, L. Diambra, C. P. Malta, Convergence criterium of numerical chaotic solutions based on statistical measures, Applied Mathematics, 2 (2011), 436–443. doi: 10.4236/am.2011.24055
[50] Y. S. Shimizu, K. Fidkowski, Output-based error estimation for chaotic flows using reduced-order modeling, 2018 AIAA Aerospace Sciences Meeting, 2018. Available from: https://arc.aiaa.org/doi/abs/10.2514/6.2018-0826.