
A hierarchical age-structured model of optimal vermin management by contraception

  • Taking the reproduction law of vermin into consideration, we formulate a hierarchical age-structured model to describe the optimal management of vermin by contraception control. It is shown that the model is well-posed, and the solution has a separable form. The existence of optimal management policy is established via a minimizing sequence and the use of compactness, while the structure of optimal strategy is obtained by using an adjoint system and normal cones. To show the compactness, we use the Fréchet-Kolmogorov theorem and its generalization. To construct the adjoint system, we give some continuity results. Finally, an illustrative example is given.

    Citation: Rong Liu, Fengqin Zhang. A hierarchical age-structured model of optimal vermin management by contraception[J]. Mathematical Biosciences and Engineering, 2023, 20(4): 6691-6720. doi: 10.3934/mbe.2023288




    In recent years, fractional partial differential equations (FPDEs) have been widely used in natural science and engineering technology [1,2,3,4]. The advantage of FPDEs lies in their ability to better describe materials and processes that exhibit memory and hereditary properties [5,6]. However, solutions of FPDEs are much harder to obtain. Many researchers have explored diverse techniques for the investigation of FPDEs, such as the finite difference method (FDM) [7], the finite element method [8], the spectral method [9], and the virtual element method [10]. Developing effective numerical methods to approximate FPDEs therefore remains an active goal.

    In recent years, neural networks (NNs) have been successfully applied to problems in various fields [11,12,13]. Owing to the high expressiveness of NNs in function approximation [14,15,16], using NNs to solve differential and integral equations has become an active and important research field. Physics-informed neural networks (PINNs) [17,18,19,20] are machine learning models that combine deep learning with physical knowledge. PINNs embed the PDEs into the loss function of the NNs, enabling the NNs to learn solutions of the PDEs. The PINNs algorithm is meshless and simple, and can be applied to various types of PDEs, including integro-differential equations, FPDEs, and stochastic partial differential equations. Moreover, PINNs solve the inverse problem of PDEs just as easily as the forward problem [17]. PINNs have been successfully applied to various problems in scientific computing [21,22,23]. Pang et al. [24] used the FDM to approximate the fractional derivatives that cannot be automatically differentiated, thus extending PINNs to fPINNs for solving FPDEs.

    Despite the success of deep learning in the past, solving a wide range of PDEs remains theoretically and practically challenging as complexity increases. Therefore, many aspects of PINNs need to be further improved to achieve more accurate predictions, higher computational efficiency, and robust training. Lu et al. [25] proposed DeepXDE, a deep learning library for solving PDEs, and introduced a residual-based adaptive refinement method to improve the training efficiency of PINNs: new residual points are added where the residuals of the PDEs are large, so that discontinuities can be captured well. Zhang et al. [26] combined fPINNs with the spectral method to solve time-fractional phase field models; this reduces the number of discrete approximations of the fractional operators, improving training efficiency and yielding higher accuracy. Wu et al. [27] conducted a comprehensive study of two types of sampling for PINNs, non-adaptive uniform sampling and adaptive non-uniform sampling, and their results can serve as a practical guide for selecting sampling methods. Zhang et al. [28] removed the soft constraints of PDEs from the loss function and used the Lie symmetry group to generate labeled data of PDEs to build a supervised learning model, thus effectively predicting large-amplitude, high-frequency solutions of the Klein-Gordon equation. Zhang et al. [29] introduced the symmetry-enhanced physics-informed neural network (SPINN), which incorporates the invariant surface conditions derived from Lie symmetries or non-classical symmetries of PDEs into the loss function of PINNs, aiming to improve the accuracy of PINNs. Lu et al. [30] and Xie et al. [31] introduced gradient-enhanced physics-informed neural networks (gPINNs) to solve PDEs, and the idea of embedding the gradient information of the PDE residuals into the loss function has also proven effective in other methods such as Gaussian process regression [32].

    In this paper, inspired by the above works, gfPINNs are applied to solve the forward and inverse problems of the multiterm time-fractional Burger-type equation. The integer-order derivatives are handled using the automatic differentiation capability of the NNs, while the fractional derivatives are approximated using a finite difference discretization [33,34]. The residual information of the equation is then incorporated into the loss function of the NNs, which is optimized to yield the optimal parameters. For the inverse problems of the multiterm time-fractional Burger-type equation, the overall form of the equation is known, but the coefficient and the orders of the time-fractional derivatives are unknown. The gfPINNs explicitly incorporate information from the equation by including its differential operators directly in the optimization loss function; the parameters to be identified appear in these operators and are optimized by minimizing the loss function with respect to them. A numerical comparison between fPINNs and gfPINNs is conducted on numerical examples, and the results demonstrate the effectiveness of gfPINNs in solving the multiterm time-fractional Burger-type equation.

    The structure of this paper is as follows. In Section 2, we define the forward and inverse problems for the multiterm time-fractional Burger-type equation. In Section 3, we introduce fPINNs and gfPINNs and give the finite difference discretization used to approximate the time-fractional derivatives. In Section 4, we demonstrate the effectiveness of gfPINNs in solving the forward and inverse problems by numerical examples and compare the experimental results of fPINNs and gfPINNs. Finally, we give the conclusions in Section 5.

    We consider the following multiterm time-fractional Burger-type equation defined on the bounded domain Ω:

    $$c_1\,{}_0^C D_t^{\alpha}u(x,t)+c_2\,{}_0^C D_t^{\gamma}u(x,t)+u(x,t)\frac{\partial u(x,t)}{\partial x}=v\frac{\partial^2 u(x,t)}{\partial x^2}+f(x,t),\tag{2.1}$$

    where $(x,t)\in\Omega\times[0,T]$, and the initial and boundary conditions are given as

    $$\begin{cases}u(x,t)=0, & x\in\partial\Omega,\\u(x,0)=g(x), & x\in\Omega,\end{cases}\tag{2.2}$$

    where $u(x,t)$ is the solution of the equation, $f(x,t)$ is the forcing term whose values are only known at scattered spatio-temporal coordinates, $v$ is the kinematic viscosity of the fluid, $g(x)$ is a sufficiently smooth function, the fractional orders $\alpha$ and $\gamma$ are restricted to $(0,1)$ and $(1,2)$, respectively, and ${}_0^C D_t^{\theta}u(x,t)$ is the Caputo time-fractional derivative of order $\theta$ ($\theta>0$, $n-1\le\theta<n$) of $u(x,t)$ with respect to $t$ [35,36]:

    $${}_0^C D_t^{\theta}u(x,t)=\begin{cases}\dfrac{1}{\Gamma(n-\theta)}\displaystyle\int_0^t (t-s)^{n-1-\theta}\,\dfrac{\partial^n u(x,s)}{\partial s^n}\,ds, & \theta\notin\mathbb{Z}^+,\\[2mm]\dfrac{\partial^{\theta}u(x,t)}{\partial t^{\theta}}, & \theta\in\mathbb{Z}^+,\end{cases}\tag{2.3}$$

    where $\Gamma(\cdot)$ is the gamma function.

    The forward and inverse problems for the multiterm time-fractional Burger-type equation are described as follows. For the forward problem, given the fractional orders $\alpha$ and $\gamma$, the forcing term $f$, and the initial and boundary conditions, we solve for $u(x,t)$. For the inverse problem, given the initial and boundary conditions, the forcing term $f$, and additional concentration measurements $u(x,t)=h(x,t)$ at the final time, we recover the fractional orders $\alpha$ and $\gamma$, the viscosity $v$, and the solution $u(x,t)$.

    This subsection introduces the idea of fPINNs; we consider both the forward and inverse problems, along with their corresponding NNs. We first consider the forward problem of the multiterm time-fractional Burger-type equation in the following form:

    $$\begin{cases}\mathcal{L}\{u(x,t)\}=f(x,t), & (x,t)\in\Omega\times[0,T],\\u(x,t)=0, & x\in\partial\Omega,\\u(x,0)=g(x), & x\in\Omega,\end{cases}\tag{3.1}$$

    where $\mathcal{L}\{\cdot\}$ is a nonlinear operator with $\mathcal{L}\{u(x,t)\}=c_1\,{}_0^C D_t^{\alpha}u(x,t)+c_2\,{}_0^C D_t^{\gamma}u(x,t)+u(x,t)\frac{\partial u(x,t)}{\partial x}-v\frac{\partial^2 u(x,t)}{\partial x^2}$. We split the nonlinear operator $\mathcal{L}\{\cdot\}$ into two parts, $\mathcal{L}=\mathcal{L}_{AD}+\mathcal{L}_{nonAD}$. The first part contains the integer-order derivatives, which can be automatically differentiated (AD) using the chain rule. We have

    $$\mathcal{L}_{AD}\{\cdot\}=\begin{cases}u(x,t)\dfrac{\partial u(x,t)}{\partial x}-v\dfrac{\partial^2 u(x,t)}{\partial x^2}, & \alpha\in(0,1),\ \gamma\in(1,2),\\[2mm]c_2\dfrac{\partial^2 u(x,t)}{\partial t^2}+u(x,t)\dfrac{\partial u(x,t)}{\partial x}-v\dfrac{\partial^2 u(x,t)}{\partial x^2}, & \alpha\in(0,1),\ \gamma=2,\\[2mm]c_1\dfrac{\partial u(x,t)}{\partial t}+u(x,t)\dfrac{\partial u(x,t)}{\partial x}-v\dfrac{\partial^2 u(x,t)}{\partial x^2}, & \alpha=1,\ \gamma\in(1,2),\end{cases}\tag{3.2}$$

    and the second category consists of operators that lack automatic differentiation capabilities:

    $$\mathcal{L}_{nonAD}\{\cdot\}=\begin{cases}c_1\,{}_0^C D_t^{\alpha}u(x,t)+c_2\,{}_0^C D_t^{\gamma}u(x,t), & \alpha\in(0,1),\ \gamma\in(1,2),\\c_1\,{}_0^C D_t^{\alpha}u(x,t), & \alpha\in(0,1),\ \gamma=2,\\c_2\,{}_0^C D_t^{\gamma}u(x,t), & \alpha=1,\ \gamma\in(1,2).\end{cases}\tag{3.3}$$

    $\mathcal{L}_{nonAD}$ cannot be handled by automatic differentiation, so we discretize it with the FDM and denote by $\mathcal{L}_{FDM}$ the discretized version of $\mathcal{L}_{nonAD}$.
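    For concreteness, a minimal TensorFlow sketch of how the AD part can be evaluated is given below; the helper name l_ad and the assumption that model maps $(x,t)$ to $\tilde{u}$ are ours, and only the case $\alpha\in(0,1)$, $\gamma\in(1,2)$ of Eq (3.2) is shown.

```python
import tensorflow as tf

def l_ad(model, x, t, v=1.0):
    # Integer-order part u*u_x - v*u_xx of Eq (3.2), case alpha in (0,1)
    # and gamma in (1,2), computed by automatic differentiation.
    with tf.GradientTape() as outer:
        outer.watch(x)
        with tf.GradientTape() as inner:
            inner.watch(x)
            u = model(tf.stack([x, t], axis=1))[:, 0]
        u_x = inner.gradient(u, x)      # first spatial derivative
    u_xx = outer.gradient(u_x, x)       # second spatial derivative
    return u * u_x - v * u_xx
```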

    During training, our goal is to optimize the network parameters; the approximate solution is constructed so that it satisfies the initial and boundary conditions automatically. It is chosen as

    $$\tilde{u}(x,t)=t\,\rho(x)\,u_{NN}(x,t)+g(x),\tag{3.4}$$

    where $u_{NN}$ denotes the output of the NN. The NN acts as a surrogate model approximating the map from the spatio-temporal coordinates to the solution of the equation; it is defined by its weights and biases, which form the parameter vector $\mu$. Figure 1 shows a simple NN: a fully connected network with a single hidden layer of three neurons. In this network, the two inputs $x$ and $t$ first undergo a linear transformation to give $x_1=w_1x+w_4t+b_1$, $x_2=w_2x+w_5t+b_2$, and $x_3=w_3x+w_6t+b_3$ in the hidden layer, and then a nonlinear transformation $Y_i=f(x_i)$ for $i=1,2,3$, where we choose the hyperbolic tangent activation $\tanh(\cdot)$. A final linear transformation of the $Y_i$ yields the network output $u_{NN}(x,t;\mu)=w_7Y_1+w_8Y_2+w_9Y_3+b_4$. The parameter vector $\mu$ comprises the weights $w_i$ and biases $b_i$. The auxiliary function $\rho(x)$ is preselected with $\rho(0)=\rho(1)=0$, and $g(x)$ is the initial-condition function, so that $\tilde{u}$ satisfies the initial and boundary conditions automatically.

    Figure 1.  A simple NN.
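    To make Figure 1 concrete, the following NumPy sketch evaluates this three-neuron network and the hard-constrained surrogate of Eq (3.4); the random weight values and the choices $\rho(x)=x(1-x)$ and $g(x)=0$ are illustrative assumptions, not trained quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=9)   # weights w1..w9 of Figure 1 (illustrative, untrained)
b = rng.normal(size=4)   # biases b1..b4

def u_nn(x, t):
    # Hidden layer: linear maps x_i = w_i*x + w_{i+3}*t + b_i, then tanh.
    z = np.array([w[0] * x + w[3] * t + b[0],
                  w[1] * x + w[4] * t + b[1],
                  w[2] * x + w[5] * t + b[2]])
    y = np.tanh(z)                                    # Y_i = tanh(x_i)
    # Output layer: u_NN = w7*Y1 + w8*Y2 + w9*Y3 + b4.
    return w[6] * y[0] + w[7] * y[1] + w[8] * y[2] + b[3]

def u_tilde(x, t):
    # Hard-constrained surrogate of Eq (3.4) with the example choices
    # rho(x) = x*(1 - x) (so rho(0) = rho(1) = 0) and g(x) = 0.
    return t * x * (1.0 - x) * u_nn(x, t)

print(u_tilde(0.5, 0.25))   # satisfies u(0,t) = u(1,t) = 0 and u(x,0) = 0
```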

    The loss function of fPINNs for the forward problem with the approximate solution is defined as the mean-squared error of the equation residual

    $$L_{FW}=\frac{1}{|S_F|}\sum_{(x,t)\in S_F}\big[\mathcal{L}_{FDM}\{\tilde{u}(x,t)\}+\mathcal{L}_{AD}\{\tilde{u}(x,t)\}-f(x,t)\big]^2,\tag{3.5}$$

    where $S_F\subset\Omega\times[0,T]$ and $|S_F|$ denotes the number of training points. We then train the NN to minimize the loss function of the forward problem with respect to the network parameters $\mu$, obtaining the optimal parameters $\mu_{best}$. Finally, we evaluate the trained NN on a set of arbitrary test points to assess the training performance.

    The code for solving the forward and inverse problems of the equation with NNs is similar. We only need to add the parameters to be identified in the inverse problem to the loss function optimized in the forward problem; no other changes are necessary. Next, we consider the following form of the inverse problem:

    $$\begin{cases}\mathcal{L}_{\xi=\{\alpha,\gamma,v\}}\{u(x,t)\}=f(x,t), & (x,t)\in\Omega\times[0,T],\\u(x,t)=0, & x\in\partial\Omega,\\u(x,0)=g(x), & x\in\Omega,\\u(x,t)=h(x,t), & (x,t)\in\Omega\times[0,T],\end{cases}\tag{3.6}$$

    where $\xi$ collects the parameters of the equation. The loss function $L_{IV}$ for the inverse problem under consideration is

    $$L_{IV}\{\mu,\xi=\{\alpha,\gamma,v\}\}=W_{I_1}\frac{1}{|S_{I_1}|}\sum_{(x,t)\in S_{I_1}}\big[\mathcal{L}_{FDM}^{\{\alpha,\gamma\}}\{\tilde{u}(x,t)\}+\mathcal{L}_{AD}^{v}\{\tilde{u}(x,t)\}-f(x,t)\big]^2+W_{I_2}\frac{1}{|S_{I_2}|}\sum_{(x,t)\in S_{I_2}}\big[\tilde{u}(x,t)-h(x,t)\big]^2,\tag{3.7}$$

    where $\alpha\in(0,1)$ and $\gamma\in(1,2)$, $S_{I_1}\subset\Omega\times[0,T]$ and $S_{I_2}\subset\Omega\times[0,T]$ are two different sets of training points, and $W_{I_1}$ and $W_{I_2}$ are preselected weight coefficients. We train the NN to minimize the loss function, thereby obtaining $\alpha_{best}$ and $\gamma_{best}$, the viscosity $v_{best}$, and the optimal network parameters $\mu_{best}$.

    We incorporate the residual information of the equation into the loss function of the NN and train the NN to minimize this loss, thus obtaining the optimal network parameters. If the residual of the PDE is zero, then the gradient of the residual is also zero, so penalizing the residual gradient imposes a necessary condition that supplies additional training information. A further motivation behind gfPINNs is that the residual in the loss function often fluctuates around zero; penalizing the slope of the residual damps these fluctuations, pushing the residual closer to zero. In this section, we continue to consider the formulations of the forward and inverse problems of the equation discussed in the previous section.
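    As a minimal sketch of this idea, assuming a residual function already assembled from TensorFlow operations (the name residual_fn is hypothetical; in practice it combines the AD part with the finite difference discretization of the fractional part), the gradient-enhanced loss terms can be formed as follows; the three terms are then weighted as in Eq (3.8).

```python
import tensorflow as tf

def gradient_enhanced_terms(residual_fn, x, t):
    """Mean-squared residual plus mean-squared residual gradients.

    residual_fn(x, t) is assumed to be built from TF ops, so that it is
    differentiable with respect to the inputs x and t.
    """
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([x, t])
        r = residual_fn(x, t)
    r_x = tape.gradient(r, x)   # d(residual)/dx, cf. Eq (3.9)
    r_t = tape.gradient(r, t)   # d(residual)/dt, cf. Eq (3.10)
    del tape
    return (tf.reduce_mean(r**2),
            tf.reduce_mean(r_x**2),
            tf.reduce_mean(r_t**2))
```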

    We first consider the forward problem in the form of (3.1) and provide the loss function of gfPINNs for this form:

    $$L_{FW}^{g}=W_F L_{FW}+W_F^{g_1}L_{FW}^{g_1}+W_F^{g_2}L_{FW}^{g_2},\tag{3.8}$$

    where

    $$L_{FW}^{g_1}=\frac{1}{|S_F^{g_1}|}\sum_{(x,t)\in S_F^{g_1}}\Big[\frac{\partial \mathcal{L}_{FDM}\{\tilde{u}(x,t)\}}{\partial x}+\frac{\partial \mathcal{L}_{AD}\{\tilde{u}(x,t)\}}{\partial x}-\frac{\partial f(x,t)}{\partial x}\Big]^2,\tag{3.9}$$
    $$L_{FW}^{g_2}=\frac{1}{|S_F^{g_2}|}\sum_{(x,t)\in S_F^{g_2}}\Big[\frac{\partial \mathcal{L}_{FDM}\{\tilde{u}(x,t)\}}{\partial t}+\frac{\partial \mathcal{L}_{AD}\{\tilde{u}(x,t)\}}{\partial t}-\frac{\partial f(x,t)}{\partial t}\Big]^2,\tag{3.10}$$

    and the approximate solution of the equation is the same as in Eq (3.4): $\tilde{u}(x,t)=t\,\rho(x)\,u_{NN}(x,t)+g(x)$. The expression $L_{FW}$ is given in Eq (3.5); $W_F$, $W_F^{g_1}$, and $W_F^{g_2}$ are preselected weighting coefficients, and $S_F^{g_1}\subset\Omega\times[0,T]$ and $S_F^{g_2}\subset\Omega\times[0,T]$ are two different sets of training points.

    Next, we consider the inverse problem in the form of (3.6). The approach of gfPINNs for the inverse problem is similar to that of fPINNs, and its loss function is

    $$L_{IV}^{g}=W_I L_{IV}\{\mu,\xi=\{\alpha,\gamma,v\}\}+W_I^{g_1}L_{IV}^{g_1}+W_I^{g_2}L_{IV}^{g_2},\tag{3.11}$$

    where

    $$L_{IV}^{g_1}=W_{I_1}^{g_1}\frac{1}{|S_{I_1}^{g_1}|}\sum_{(x,t)\in S_{I_1}^{g_1}}\Big[\frac{\partial\mathcal{L}_{FDM}^{\{\alpha,\gamma\}}\{\tilde{u}(x,t)\}}{\partial x}+\frac{\partial\mathcal{L}_{AD}^{v}\{\tilde{u}(x,t)\}}{\partial x}-\frac{\partial f(x,t)}{\partial x}\Big]^2+W_{I_2}^{g_1}\frac{1}{|S_{I_2}^{g_1}|}\sum_{(x,t)\in S_{I_2}^{g_1}}\Big[\frac{\partial\tilde{u}(x,t)}{\partial x}-\frac{\partial h(x,t)}{\partial x}\Big]^2,\tag{3.12}$$
    $$L_{IV}^{g_2}=W_{I_1}^{g_2}\frac{1}{|S_{I_1}^{g_2}|}\sum_{(x,t)\in S_{I_1}^{g_2}}\Big[\frac{\partial\mathcal{L}_{FDM}^{\{\alpha,\gamma\}}\{\tilde{u}(x,t)\}}{\partial t}+\frac{\partial\mathcal{L}_{AD}^{v}\{\tilde{u}(x,t)\}}{\partial t}-\frac{\partial f(x,t)}{\partial t}\Big]^2+W_{I_2}^{g_2}\frac{1}{|S_{I_2}^{g_2}|}\sum_{(x,t)\in S_{I_2}^{g_2}}\Big[\frac{\partial\tilde{u}(x,t)}{\partial t}-\frac{\partial h(x,t)}{\partial t}\Big]^2,\tag{3.13}$$

    where the expression $L_{IV}\{\mu,\xi=\{\alpha,\gamma,v\}\}$ is given in Eq (3.7), $W_I$, $W_I^{g_1}$, $W_I^{g_2}$, $W_{I_1}^{g_1}$, $W_{I_2}^{g_1}$, $W_{I_1}^{g_2}$, and $W_{I_2}^{g_2}$ are preselected weighting coefficients, and $S_{I_1}^{g_1},S_{I_1}^{g_2}\subset\Omega\times[0,T]$ and $S_{I_2}^{g_1},S_{I_2}^{g_2}\subset\Omega\times[0,T]$ are four different sets of training points.

    This completes the definition of the gfPINNs loss functions; the training procedure is exactly the same as described above for fPINNs. We train the NN to obtain its optimal parameters.

    In the $x$ direction $[0,M]$, we take the mesh points $x_i=ih_x$, $i=0,1,2,\dots,M_1$, and in the $t$ direction $[0,T]$, we take the mesh points $t_n=n\tau$, $n=0,1,\dots,N$, where $h_x=M/M_1$ and $\tau=T/N$ are the uniform spatial and temporal step sizes, respectively. Denote $\Omega_h=\{x_i\mid 0\le i\le M_1\}$ and $\Omega_\tau=\{t_n\mid 0\le n\le N\}$, and suppose $u_i^n=u(x_i,t_n)$ is a grid function on $\Omega_h\times\Omega_\tau$.

    We approximate the fractional derivatives of the equation using the finite difference discretization [33,34].

    For $\alpha\in(0,1)$, we have ${}_0^C D_t^{\alpha}u(x,t)\big|_{(x_i,t_n)}=D_\tau^{\alpha}\tilde{u}_i^n+R_1(\tilde{u}_i^n)$, where

    $$D_\tau^{\alpha}\tilde{u}_i^n:=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\Big[a_0^{\alpha}\tilde{u}_i^n+\sum_{k=1}^{n-1}\big(a_{n-k}^{\alpha}-a_{n-k-1}^{\alpha}\big)\tilde{u}_i^k-a_{n-1}^{\alpha}\tilde{u}_i^0\Big],\tag{3.14}$$

    where $\tilde{u}_i^n=\tilde{u}(x_i,t_n)$, $R_1\le C\tau^{2-\alpha}$, and $a_k^{\alpha}=(k+1)^{1-\alpha}-k^{1-\alpha}$.

    Lemma 3.1. [33] For $\alpha\in(0,1)$ and $a_l^{\alpha}=(l+1)^{1-\alpha}-l^{1-\alpha}$, $l=0,1,2,\dots$:

    (1) $1=a_0^{\alpha}>a_1^{\alpha}>a_2^{\alpha}>\cdots>a_l^{\alpha}>0$ and $\lim_{l\to\infty}a_l^{\alpha}=0$;

    (2) $(1-\alpha)l^{-\alpha}<a_{l-1}^{\alpha}<(1-\alpha)(l-1)^{-\alpha}$ for $l\ge 1$.

    For $\gamma\in(1,2)$, ${}_0^C D_t^{\gamma}u(x,t)\big|_{(x_i,t_n)}=D_\tau^{\gamma}\tilde{u}_i^n+R_2(\tilde{u}_i^n)$, where

    $$D_\tau^{\gamma}\tilde{u}_i^n:=\frac{\tau^{1-\gamma}}{\Gamma(3-\gamma)}\Big[b_0^{\gamma}\,\delta_t\tilde{u}_i^n+\sum_{k=1}^{n-1}\big(b_{n-k}^{\gamma}-b_{n-k-1}^{\gamma}\big)\delta_t\tilde{u}_i^k-b_{n-1}^{\gamma}\,\delta_t\tilde{u}_i^0\Big],\tag{3.15}$$

    where $\delta_t u(x,t)=\frac{\partial u(x,t)}{\partial t}$, $R_2\le C\tau^{3-\gamma}$, and $b_k^{\gamma}=(k+1)^{2-\gamma}-k^{2-\gamma}$.

    Given a spatial position $x$, it can be seen from the finite difference discretization that the time-fractional derivative of $\tilde{u}(x,t)$ evaluated at time $t$ depends on the values of $\tilde{u}(x,t)$ at all previous times $0,\tau,2\tau,\dots,t$. We call the current time the training point and the previous times the auxiliary points.
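    For illustration, the scheme (3.14) at a single spatial point can be coded as below; this NumPy sketch and the check against $u(t)=t$, whose Caputo derivative of order $\alpha$ is $t^{1-\alpha}/\Gamma(2-\alpha)$, are our own additions.

```python
import numpy as np
from math import gamma

def l1_caputo(u_hist, tau, alpha):
    """Discrete Caputo derivative D_tau^alpha of Eq (3.14) at t_n = n*tau.

    u_hist holds the values [u^0, u^1, ..., u^n] at one spatial point, so
    the value at t_n uses the whole history (the auxiliary points).
    """
    n = len(u_hist) - 1
    assert n >= 1
    k = np.arange(n + 1, dtype=float)
    a = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)   # a_k^alpha
    s = a[0] * u_hist[n] - a[n - 1] * u_hist[0]
    for j in range(1, n):
        s += (a[n - j] - a[n - j - 1]) * u_hist[j]
    return tau ** (-alpha) / gamma(2.0 - alpha) * s

# Check on u(t) = t: the L1 scheme reproduces piecewise-linear functions
# exactly, and the exact derivative is t^(1-alpha)/Gamma(2-alpha).
tau, alpha = 0.01, 0.5
t = np.arange(0, 101) * tau
assert np.isclose(l1_caputo(t, tau, alpha),
                  t[-1] ** (1 - alpha) / gamma(2 - alpha))
```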

    In this section, we demonstrate the effectiveness of gfPINNs in solving forward and inverse problems of the multiterm time-fractional Burger-type equation and compare fPINNs with gfPINNs. We solve the forward problems of the equation and present the experimental results in Section 4.1, and we solve the inverse problems and present the results in Section 4.2.

    We use the fabricated solution $u(x,t)=t^p\sin(\pi x)$. In the approximate solution (3.4), the auxiliary function $\rho(\cdot)$ is taken as $\rho(x)=x(1-x)$. We use the following form of the $L^2$ relative error:

    $$\frac{\Big\{\sum_k\big[u(x_{test,k},t_{test,k})-\tilde{u}(x_{test,k},t_{test,k})\big]^2\Big\}^{1/2}}{\Big\{\sum_k\big[u(x_{test,k},t_{test,k})\big]^2\Big\}^{1/2}}\tag{4.1}$$

    to measure the performance of the NN, where $\tilde{u}$ denotes the approximate solution, $u$ is the exact solution, and $(x_{test,k},t_{test,k})$ denotes the $k$-th test point.
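    A direct NumPy translation of the error measure (4.1) might look like this:

```python
import numpy as np

def l2_relative_error(u_exact, u_pred):
    # Relative L2 error of Eq (4.1) over the test points.
    u_exact, u_pred = np.asarray(u_exact), np.asarray(u_pred)
    return np.linalg.norm(u_exact - u_pred) / np.linalg.norm(u_exact)
```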

    We wrote the code in Python and took advantage of the automatic differentiation capability of TensorFlow [37]. The stochastic gradient descent algorithm Adam [38] was used to optimize the loss function, and the network parameters were initialized with normalized Glorot initialization [39]. Unless otherwise stated, when training a neural network we set the learning rate, the number of neurons per layer, the number of hidden layers, and the activation function to $1\times10^{-3}$, 20, 4, and $\tanh(x)$, respectively.
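    A minimal TensorFlow sketch of a surrogate network with these settings (our own construction, not the authors' released code) is:

```python
import tensorflow as tf

def build_surrogate(hidden_layers=4, neurons=20):
    # Fully connected network u_NN(x, t; mu): 4 hidden layers of 20 tanh
    # neurons with normalized Glorot initialization, as stated above.
    inputs = tf.keras.Input(shape=(2,))            # columns (x, t)
    h = inputs
    for _ in range(hidden_layers):
        h = tf.keras.layers.Dense(neurons, activation="tanh",
                                  kernel_initializer="glorot_normal")(h)
    outputs = tf.keras.layers.Dense(1)(h)          # u_NN(x, t)
    return tf.keras.Model(inputs, outputs)

model = build_surrogate()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
```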

    In this section, we consider the multiterm time-fractional Burger-type equation of the form (2.1) with initial and boundary conditions (2.2). We let $v=1$, $(x,t)\in[0,1]\times[0,1]$, and $g(x)=0$, and consider the smooth fabricated solution $u(x,t)=t^p\sin(\pi x)$ with the forcing term

    $$f(x,t)=c_1\frac{\Gamma(p+1)}{\Gamma(p+1-\alpha)}t^{p-\alpha}\sin(\pi x)+c_2\frac{\Gamma(p+1)}{\Gamma(p+1-\gamma)}t^{p-\gamma}\sin(\pi x)+\pi t^{2p}\sin(\pi x)\cos(\pi x)+\pi^2 t^{p}\sin(\pi x).\tag{4.2}$$

    Case 1: We choose $c_1=1$, $c_2=0$, and $\alpha=0.5$, and consider the smooth fabricated solution $u(x,t)=t^4\sin(\pi x)$ with the forcing term $f(x,t)=\frac{\Gamma(5)}{\Gamma(4.5)}t^{3.5}\sin(\pi x)+\pi t^{8}\sin(\pi x)\cos(\pi x)+\pi^2 t^{4}\sin(\pi x)$. We take $M_1-1$ training points in the spatial domain, $x_i=ih_x$ for $i=1,2,\dots,M_1-1$, and $N$ training points in the time domain, $t_n=n\tau$ for $n=1,2,\dots,N$. We do not need to place training points on the initial and boundary sets, since the approximate solution $\tilde{u}(x,t)=t\,x(1-x)\,u_{NN}(x,t;\mu)$ satisfies the initial and boundary conditions automatically. For fPINNs, the loss function can be written as

    $$L_{FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\bigg\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[a_0^{0.5}\tilde{u}(x_i,t_n)+\sum_{k=1}^{n-1}\big(a_{n-k}^{0.5}-a_{n-k-1}^{0.5}\big)\tilde{u}(x_i,t_k)\Big]+\tilde{u}(x_i,t_n)\frac{\partial\tilde{u}(x_i,t_n)}{\partial x_i}-\frac{\partial^2\tilde{u}(x_i,t_n)}{\partial x_i^2}-f(x_i,t_n)\bigg\}^2.\tag{4.3}$$

    The loss function of gfPINNs can be given as

    $$L_{FW}^{g_1}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\bigg\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[a_0^{0.5}\frac{\partial\tilde{u}(x_i,t_n)}{\partial x_i}+\sum_{k=1}^{n-1}\big(a_{n-k}^{0.5}-a_{n-k-1}^{0.5}\big)\frac{\partial\tilde{u}(x_i,t_k)}{\partial x_i}\Big]+\tilde{u}(x_i,t_n)\frac{\partial^2\tilde{u}(x_i,t_n)}{\partial x_i^2}+\Big(\frac{\partial\tilde{u}(x_i,t_n)}{\partial x_i}\Big)^2-\frac{\partial^3\tilde{u}(x_i,t_n)}{\partial x_i^3}-\frac{\partial f(x_i,t_n)}{\partial x_i}\bigg\}^2,\tag{4.4}$$
    $$L_{FW}^{g_2}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\bigg\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[a_0^{0.5}\frac{\partial\tilde{u}(x_i,t_n)}{\partial t_n}+\sum_{k=1}^{n-1}\big(a_{n-k}^{0.5}-a_{n-k-1}^{0.5}\big)\frac{\partial\tilde{u}(x_i,t_k)}{\partial t_k}\Big]+\tilde{u}(x_i,t_n)\frac{\partial^2\tilde{u}(x_i,t_n)}{\partial x_i\partial t_n}+\frac{\partial\tilde{u}(x_i,t_n)}{\partial x_i}\frac{\partial\tilde{u}(x_i,t_n)}{\partial t_n}-\frac{\partial^3\tilde{u}(x_i,t_n)}{\partial x_i^2\partial t_n}-\frac{\partial f(x_i,t_n)}{\partial t_n}\bigg\}^2.\tag{4.5}$$

    By substituting Eqs (4.3)–(4.5) into Eq (3.8), we get the gfPINNs loss function $L_{FW}^{g}$ with $W_F=1$, $W_F^{g_1}=1$, and $W_F^{g_2}=1$. Next, we selected 2000 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Figures 2–4 present a comparison between the solutions predicted by the fPINNs and gfPINNs models and the exact solution of the equation, demonstrating that gfPINNs can effectively solve the equation. Figure 5 shows the absolute errors between the exact solution and the solutions predicted by fPINNs and gfPINNs; the prediction performance of gfPINNs is better than that of fPINNs. Figure 6 illustrates the $L^2$ relative errors of both models for a single experiment as the iteration count grows, showing that while both can achieve errors as low as $10^{-4}$, gfPINNs exhibit comparatively lower error and reduced oscillation.

    Figure 2.  The exact solution and predicted solutions of the equation.
    Figure 3.  The exact solution and numerical solutions' profiles of velocity u(x,t) with α=0.5.
    Figure 4.  Predicted cross-sectional views of the equation using fPINNs and gfPINNs.
    Figure 5.  The absolute errors for solutions predicted by fPINNs and gfPINNs.
    Figure 6.  The $L^2$ relative error versus the number of iterations.

    Case 2: We choose $c_1=0$, $c_2=1$, and $\gamma=1.5$, and consider the smooth fabricated solution $u(x,t)=t^4\sin(\pi x)$ with the forcing term $f(x,t)=\frac{\Gamma(5)}{\Gamma(3.5)}t^{2.5}\sin(\pi x)+\pi t^{8}\sin(\pi x)\cos(\pi x)+\pi^2 t^{4}\sin(\pi x)$. Similarly, the loss function of fPINNs is

    $$L_{FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\bigg\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[b_0^{1.5}\frac{\partial\tilde{u}(x_i,t_n)}{\partial t_n}+\sum_{k=1}^{n-1}\big(b_{n-k}^{1.5}-b_{n-k-1}^{1.5}\big)\frac{\partial\tilde{u}(x_i,t_k)}{\partial t_k}\Big]+\tilde{u}(x_i,t_n)\frac{\partial\tilde{u}(x_i,t_n)}{\partial x_i}-\frac{\partial^2\tilde{u}(x_i,t_n)}{\partial x_i^2}-f(x_i,t_n)\bigg\}^2.\tag{4.6}$$

    For gfPINNs, the loss function can be written as

    $$L_{FW}^{g_1}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\bigg\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[b_0^{1.5}\frac{\partial^2\tilde{u}(x_i,t_n)}{\partial t_n\partial x_i}+\sum_{k=1}^{n-1}\big(b_{n-k}^{1.5}-b_{n-k-1}^{1.5}\big)\frac{\partial^2\tilde{u}(x_i,t_k)}{\partial t_k\partial x_i}\Big]+\tilde{u}(x_i,t_n)\frac{\partial^2\tilde{u}(x_i,t_n)}{\partial x_i^2}+\Big(\frac{\partial\tilde{u}(x_i,t_n)}{\partial x_i}\Big)^2-\frac{\partial^3\tilde{u}(x_i,t_n)}{\partial x_i^3}-\frac{\partial f(x_i,t_n)}{\partial x_i}\bigg\}^2,\tag{4.7}$$
    $$L_{FW}^{g_2}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\bigg\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[b_0^{1.5}\frac{\partial^2\tilde{u}(x_i,t_n)}{\partial t_n^2}+\sum_{k=1}^{n-1}\big(b_{n-k}^{1.5}-b_{n-k-1}^{1.5}\big)\frac{\partial^2\tilde{u}(x_i,t_k)}{\partial t_k^2}\Big]+\tilde{u}(x_i,t_n)\frac{\partial^2\tilde{u}(x_i,t_n)}{\partial x_i\partial t_n}+\frac{\partial\tilde{u}(x_i,t_n)}{\partial x_i}\frac{\partial\tilde{u}(x_i,t_n)}{\partial t_n}-\frac{\partial^3\tilde{u}(x_i,t_n)}{\partial x_i^2\partial t_n}-\frac{\partial f(x_i,t_n)}{\partial t_n}\bigg\}^2.\tag{4.8}$$

    By substituting Eqs (4.6)–(4.8) into Eq (3.8), we get the gfPINNs loss function $L_{FW}^{g}$ with $W_F=1$, $W_F^{g_1}=0.16$, and $W_F^{g_2}=0.16$. Next, we selected 2000 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Figures 7–9 present a comparison between the solutions predicted by the fPINNs and gfPINNs models and the exact solution of the equation, demonstrating that gfPINNs can effectively solve the equation. Figure 10 illustrates the absolute errors between the exact solution and the solutions predicted by both models, revealing that gfPINNs exhibit a relatively smaller absolute error. Figure 11 presents the iteration convergence curves of both models for a single experiment: while both can achieve $L^2$ relative errors of $10^{-4}$ with increasing iterations, the prediction errors of gfPINNs are lower and more stable, resulting in superior prediction performance compared to fPINNs.

    Figure 7.  The exact solution and predicted solutions of the equation.
    Figure 8.  The exact solution and numerical solutions' profiles of velocity u(x,t) with γ=1.5.
    Figure 9.  Predicted cross-sectional views of the equation using fPINNs and gfPINNs.
    Figure 10.  The absolute errors for solutions predicted by fPINNs and gfPINNs.
    Figure 11.  The $L^2$ relative error versus the number of iterations.

    We use the code that solves the forward problem to solve the inverse problem; we simply add the parameters to be identified to the list of parameters to be optimized, without changing anything else. In this section, gfPINNs are applied to solve the inverse problems of the multiterm time-fractional Burger-type equation of the form (3.6). We let $v=1$, $(x,t)\in[0,1]\times[0,1]$, and $g(x)=0$, and consider additional concentration measurements at the final time, $u(x,1)=h(x,1)$. We still consider the smooth fabricated solution $u(x,t)=t^p\sin(\pi x)$ and the forcing term (4.2).

    Case 1: We choose $c_1=1$ and $c_2=0$. Similarly, we get the gfPINNs loss function $L_{IV}^{g}$ with $W_I=1$, $W_I^{g_1}=0.25$, and $W_I^{g_2}=0.25$. We set the true fractional order to $\alpha=0.6$. We selected 470 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Figures 12–14 display a comparison between the solutions predicted by the fPINNs and gfPINNs models and the exact solution of the equation, demonstrating that gfPINNs can effectively solve the problem. Figure 15 illustrates the absolute errors between the exact solution and the solutions predicted by both models, revealing that gfPINNs exhibit a relatively smaller and more stable absolute error. Figure 16 illustrates the iteration convergence curves of fPINNs and gfPINNs for a single experiment: although gfPINNs incur a higher computational cost for the inverse problem due to the additional loss terms, both models can achieve $L^2$ relative errors of $10^{-4}$ as iterations progress, with gfPINNs showing a lower and more stable error curve than fPINNs.

    Figure 12.  The exact solution and predicted solutions of the equation.
    Figure 13.  The exact solution and numerical solutions' profiles of velocity u(x,t).
    Figure 14.  Predicted cross-sectional views of the equation using fPINNs and gfPINNs.
    Figure 15.  The absolute errors for solutions predicted by fPINNs and gfPINNs.
    Figure 16.  The $L^2$ relative error versus the number of iterations.

    Case 2: We choose $c_1=0$ and $c_2=1$. Similarly, we get the gfPINNs loss function $L_{IV}^{g}$ with $W_I=1$, $W_I^{g_1}=0.16$, and $W_I^{g_2}=0.0001$. We set the true fractional order to $\gamma=1.6$. We selected 400 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Training the NNs and observing the experimental results leads to conclusions similar to those of Case 1. Figures 17–19 display a comparison between the solutions predicted by the fPINNs and gfPINNs models and the exact solution of the equation, demonstrating that gfPINNs can effectively solve the problem. Figure 20 illustrates the absolute errors between the exact solution and the solutions predicted by both models, revealing that gfPINNs exhibit a relatively smaller absolute error. Figure 21 compares the $L^2$ relative errors of fPINNs and gfPINNs for a single experiment as iterations progress: while gfPINNs incur a higher computational cost due to the additional loss terms, both models can achieve an $L^2$ relative error of $10^{-3}$, with gfPINNs demonstrating a lower and more stable error curve than fPINNs.

    Figure 17.  The exact solution and predicted solutions of the equation.
    Figure 18.  The exact solution and numerical solutions' profiles of velocity u(x,t).
    Figure 19.  Predicted cross-sectional views of the equation using fPINNs and gfPINNs.
    Figure 20.  The absolute errors for solutions predicted by fPINNs and gfPINNs.
    Figure 21.  The $L^2$ relative error versus the number of iterations.

    In this paper, the effectiveness of gfPINNs in solving the forward and inverse problems of the multiterm time-fractional Burger-type equation is verified through numerical examples. The $L^2$ relative errors of the solutions predicted by both fPINNs and gfPINNs can reach $10^{-4}$ for the forward problems and $10^{-3}$ or even $10^{-4}$ for the inverse problems. The experimental results indicate that gfPINNs achieve relatively lower and more stable errors as training iterations increase, thereby enhancing prediction performance. Nonetheless, the additional loss terms in gfPINNs may incur a higher computational cost; for example, when solving the inverse problems, fPINNs converge faster than gfPINNs.

    Shanhao Yuan, Yanqin Liu, Qiuping Li and Chao Guo: Conceptualization, Methodology; Yibin Xu, Shanhao Yuan and Yanfeng Shen: Software, Visualization, Validation; Shanhao Yuan: Writing–Original draft preparation; Yanqin Liu: Writing–Reviewing & editing. All authors have read and approved the final version of the manuscript for publication.

    We appreciate the support of the Natural Science Foundation of Shandong Province (ZR2023MA062), the National Science Foundation of China (62103079), the Belt and Road Special Foundation of The National Key Laboratory of Water Disaster Prevention (2023491911), and the Open Research Fund Program of the Data Recovery Key Laboratory of Sichuan Province (DRN19020).

    The authors declare that they have no conflicts of interest.



© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)