Research article

Supervised neural learning for the predator-prey delay differential system of Holling form-III

  • Received: 05 July 2022 Revised: 22 August 2022 Accepted: 30 August 2022 Published: 13 September 2022
  • MSC : 92B20, 34A34

  • The purpose of this work is to present a stochastic computing study based on artificial neural networks (ANNs) combined with the scaled conjugate gradient (SCG) algorithm, ANNs-SCG, for solving the predator-prey delay differential system of Holling form-III. The mathematical form of the system is categorized into a prey class, a predator class and the recent-past (delay) effects. Three variations of the system have been numerically simulated using the stochastic ANNs-SCG procedure. The data for solving the system are selected as 13%, 12% and 75% for testing, training and validation, together with 15 neurons. The correctness and exactness of the stochastic ANNs-SCG method are demonstrated by comparing the obtained solutions with data-based reference solutions. The consistency, validity, soundness, competence and precision of the stochastic ANNs-SCG technique are assessed through analyses of state transitions (STs), regression, correlation, error histograms (EHs) and the mean square error (MSE).

    Citation: Naret Ruttanaprommarin, Zulqurnain Sabir, Salem Ben Said, Muhammad Asif Zahoor Raja, Saira Bhatti, Wajaree Weera, Thongchai Botmart. Supervised neural learning for the predator-prey delay differential system of Holling form-III[J]. AIMS Mathematics, 2022, 7(11): 20126-20142. doi: 10.3934/math.20221101




    Fractional diffusion equations (FDEs) have been utilized to model anomalous diffusion phenomena in the real world; see for instance [1,2,4,18,19,22,23,25]. One of the main features of the fractional differential operator is nonlocality, which poses a big challenge for the numerical solution of FDEs: the coefficient matrix of the discretized FDEs is typically dense, requiring $\mathcal{O}(N^3)$ computational cost and $\mathcal{O}(N^2)$ memory storage if a direct solution method is employed, where $N$ is the number of unknowns. However, by exploiting the Toeplitz-like structure of the coefficient matrices, many efficient algorithms have been developed; see for instance [6,9,12,13,14,15,16,21].
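    To make the cost gap concrete, the following minimal sketch (in Python with NumPy/SciPy, our illustration language throughout; it is not from the original paper) shows the circulant-embedding trick underlying most of the fast algorithms cited above: an $N\times N$ Toeplitz matrix-vector product computed with FFTs in $\mathcal{O}(N\log N)$ operations instead of $\mathcal{O}(N^2)$.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(first_col, first_row, x):
    """Multiply an N-by-N Toeplitz matrix by a vector in O(N log N)
    by embedding it into a 2N-by-2N circulant and using the FFT."""
    n = len(x)
    # First column of the circulant embedding:
    # [c_0, ..., c_{N-1}, 0, r_{N-1}, ..., r_1]
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    eig = np.fft.fft(c)                          # eigenvalues of the circulant
    y = np.fft.ifft(eig * np.fft.fft(x, 2 * n))  # circulant times [x; 0]
    return y[:n].real

# Quick check against a dense Toeplitz product
col, row = np.random.rand(5), np.random.rand(5)
row[0] = col[0]
x = np.random.rand(5)
assert np.allclose(toeplitz_matvec(col, row, x), toeplitz(col, row) @ x)
```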

    In this paper, we consider the following initial-boundary value problem of spatial balanced FDEs [7,24]:

    \begin{equation} \begin{split} &\frac{\partial u(x,t)}{\partial t} +{}_{{x_L}}^{{{\rm{RL}}}}{{\rm{D}}}_{x}^{{\alpha}}\left(d_{+}(x,t)\,{}_{{x}}^{{{\rm{C}}}}{{\rm{D}}}_{x_R}^{\alpha} u(x,t)\right) +{}_{{x}}^{{{\rm{RL}}}}{{\rm{D}}}_{x_R}^{\beta}\left(d_{-}(x,t)\,{}_{{x_L}}^{{{\rm{C}}}}{{\rm{D}}}_{x}^{{\beta}} u(x,t)\right) = f(x,t),\quad (x,t)\in(x_L,x_R)\times(0,T],\\ & u(x_L,t) = u_L(t),\quad u(x_R,t) = u_R(t),\quad 0\leq t\leq T,\\ & u(x,0) = u_0(x),\quad x_L\leq x\leq x_R, \end{split} \end{equation} (1.1)

    where $d_{\pm}(x,t)>0$ are diffusion coefficients, $f(x,t)$ is the source term, and $\alpha, \beta$ are the fractional orders satisfying $\frac12<\alpha,\beta<1$. Here ${}_{{x_L}}^{{{\rm{RL}}}}{{\rm{D}}}_{x}^{\gamma}$ and ${}_{{x}}^{{{\rm{RL}}}}{{\rm{D}}}_{x_R}^{\gamma}$ denote the left-sided and right-sided Riemann-Liouville fractional derivatives of order $0<\gamma<1$, respectively, defined by [17]

    \begin{equation*} {}_{{x_L}}^{{{\rm{RL}}}}{{\rm{D}}}_{x}^{\gamma} u(x) = \frac{1}{\Gamma(1-\gamma)}\frac{d}{dx}\int_{x_L}^{x}\frac{u(\xi)}{(x-\xi)^{\gamma}}\,d\xi,\qquad {}_{{x}}^{{{\rm{RL}}}}{{\rm{D}}}_{x_R}^{\gamma} u(x) = \frac{-1}{\Gamma(1-\gamma)}\frac{d}{dx}\int_{x}^{x_R}\frac{u(\xi)}{(\xi-x)^{\gamma}}\,d\xi, \end{equation*}

    where $\Gamma(\cdot)$ is the Gamma function, while ${}_{{x_L}}^{{{\rm{C}}}}{{\rm{D}}}_{x}^{\gamma}$ and ${}_{{x}}^{{{\rm{C}}}}{{\rm{D}}}_{x_R}^{\gamma}$ denote the left-sided and right-sided Caputo fractional derivatives of order $0<\gamma<1$, respectively, defined by [17]

    \begin{equation*} {}_{{x_L}}^{{{\rm{C}}}}{{\rm{D}}}_{x}^{\gamma} u(x) = \frac{1}{\Gamma(1-\gamma)}\int_{x_L}^{x}\frac{u'(\xi)}{(x-\xi)^{\gamma}}\,d\xi,\qquad {}_{{x}}^{{{\rm{C}}}}{{\rm{D}}}_{x_R}^{\gamma} u(x) = \frac{-1}{\Gamma(1-\gamma)}\int_{x}^{x_R}\frac{u'(\xi)}{(\xi-x)^{\gamma}}\,d\xi. \end{equation*}

    The fractional differential operator in (1.1) is called the balanced central fractional derivative, which was studied in [24]. One advantage of this operator is that its variational formulation has a symmetric bilinear form, which greatly benefits theoretical investigation.

    Recently, a finite volume approximation for the spatial balanced FDEs (1.1) was proposed in [7]. By applying a standard first-order difference scheme to the time derivative and a finite volume discretization scheme to the spatial balanced fractional differential operator, a sequence of linear systems is generated whose coefficient matrices share the form of the sum of a tridiagonal matrix and two Toeplitz-times-diagonal-times-Toeplitz matrices. One attractive feature of these coefficient matrices is that they are symmetric positive definite, so the linear systems can be solved by the CG method, in which the three-term recurrence significantly reduces the computational and storage cost. However, due to the ill-conditioning of the coefficient matrices, the CG method usually converges very slowly when applied to the resulting linear systems. Therefore, preconditioners should be applied to improve the computational efficiency. In [7], the authors proposed two preconditioners: a circulant preconditioner for the constant diffusion coefficient case and a banded preconditioner for the variable diffusion coefficient case.

    In this paper, we consider approximate inverse preconditioners for the linear systems arising from the finite volume discretization of the spatial balanced FDEs (1.1). Our preconditioner is based on the symmetric approximate inverse strategy studied in [11] and the Sherman-Morrison-Woodbury formula. Rigorous analysis shows that the preconditioned matrix can be written as the sum of the identity matrix, a small-norm matrix, and a low-rank matrix. Therefore, fast convergence of the CG method for the preconditioned linear systems is expected. Numerical examples, for both one-dimensional and two-dimensional cases, are given to demonstrate the robustness and effectiveness of the proposed preconditioner. We remark that our preconditioner can also be applied to another class of conservative balanced FDEs, which was studied in [8,10].

    The rest of the paper is organized as follows. In Section 2, we present the discretized linear systems of the balanced FDEs. Our new preconditioners are given in Section 3, and their properties are investigated in detail in Section 4. In Section 5, we carry out the numerical experiments to demonstrate the performance of the proposed preconditioners. Finally, we give the concluding remarks in Section 6.

    Let $\Delta t = T/M_t$ be the time step, where $M_t$ is a given positive integer. We define a temporal partition $t_j = j\Delta t$ for $j = 0,1,2,\ldots,M_t$. The first-order time derivative in (1.1) can be discretized by the standard backward Euler scheme, and we obtain the following semidiscrete form:

    \begin{equation} \frac{u(x,t_j)-u(x,t_{j-1})}{\Delta t} +{}_{{x_L}}^{{{\rm{RL}}}}{{\rm{D}}}_{x}^{{\alpha}}\left(d_{+}(x,t_j)\,{}_{{x}}^{{{\rm{C}}}}{{\rm{D}}}_{x_R}^{\alpha} u(x,t_j)\right) +{}_{{x}}^{{{\rm{RL}}}}{{\rm{D}}}_{x_R}^{\beta}\left(d_{-}(x,t_j)\,{}_{{x_L}}^{{{\rm{C}}}}{{\rm{D}}}_{x}^{{\beta}} u(x,t_j)\right) = f(x,t_j), \end{equation} (2.1)

    for $j = 1,2,\ldots,M_t$. Let $\Delta x = (x_R-x_L)/(N+1)$ be the size of the spatial grid, where $N$ is a positive integer. We define a spatial partition $x_i = x_L+i\Delta x$ for $i = 0,1,2,\ldots,N+1$, and denote by $x_{i-\frac12} = \frac{x_{i-1}+x_i}{2}$ the midpoint of the interval $[x_{i-1},x_i]$. Integrating both sides of (2.1) over $[x_{i-\frac12},x_{i+\frac12}]$ gives

    \begin{equation} \begin{split} &\frac{1}{\Delta t}\int_{x_{i-\frac12}}^{x_{i+\frac12}}u(x,t_j)\,dx +\int_{x_{i-\frac12}}^{x_{i+\frac12}}{}_{{x_L}}^{{{\rm{RL}}}}{{\rm{D}}}_{x}^{{\alpha}}\left(d_{+}(x,t_j)\,{}_{{x}}^{{{\rm{C}}}}{{\rm{D}}}_{x_R}^{\alpha} u(x,t_j)\right)dx +\int_{x_{i-\frac12}}^{x_{i+\frac12}}{}_{{x}}^{{{\rm{RL}}}}{{\rm{D}}}_{x_R}^{\beta}\left(d_{-}(x,t_j)\,{}_{{x_L}}^{{{\rm{C}}}}{{\rm{D}}}_{x}^{{\beta}} u(x,t_j)\right)dx\\ &\quad = \frac{1}{\Delta t}\int_{x_{i-\frac12}}^{x_{i+\frac12}}u(x,t_{j-1})\,dx +\int_{x_{i-\frac12}}^{x_{i+\frac12}}f(x,t_j)\,dx. \end{split} \end{equation} (2.2)

    Let $S_{\Delta x}(x_L,x_R)$ be the space of continuous, piecewise-linear functions with respect to the spatial partition, and define the nodal basis functions $\phi_k(x)$ as

    \begin{equation*} \phi_k(x) = \begin{cases} \dfrac{x-x_{k-1}}{\Delta x}, & x\in[x_{k-1},x_k],\\[0.4em] \dfrac{x_{k+1}-x}{\Delta x}, & x\in[x_k,x_{k+1}],\\[0.4em] 0, & {\text{elsewhere}}, \end{cases} \end{equation*}

    for $k = 1,2,\ldots,N$, and

    \begin{equation*} \phi_0(x) = \begin{cases} \dfrac{x_1-x}{\Delta x}, & x\in[x_0,x_1],\\[0.4em] 0, & {\text{elsewhere}}, \end{cases} \qquad \phi_{N+1}(x) = \begin{cases} \dfrac{x-x_N}{\Delta x}, & x\in[x_N,x_{N+1}],\\[0.4em] 0, & {\text{elsewhere}}. \end{cases} \end{equation*}

    The approximate solution $u_{\Delta x}(x,t_j)\in S_{\Delta x}(x_L,x_R)$ can be expressed as

    \begin{equation*} u_{\Delta x}(x,t_j) = \sum\limits_{k = 0}^{N+1}u_k^{(j)}\phi_k(x). \end{equation*}

    Therefore, the corresponding finite volume scheme leads to

    \begin{equation} \begin{split} &\frac{1}{\Delta t}\sum\limits_{k = 0}^{N+1}u_k^{(j)}\int_{x_{i-\frac12}}^{x_{i+\frac12}}\phi_k(x)\,dx +\frac{1}{\Gamma(1-\alpha)}\int_{x_L}^{x}(x-\xi)^{-\alpha}\left(d_{+}(\xi,t_j)\,{}_{{\xi}}^{{{\rm{C}}}}{{\rm{D}}}_{x_R}^{\alpha}u_{\Delta x}(\xi,t_j)\right)d\xi\,\bigg|_{x_{i-\frac12}}^{x_{i+\frac12}}\\ &\qquad -\frac{1}{\Gamma(1-\beta)}\int_{x}^{x_R}(\xi-x)^{-\beta}\left(d_{-}(\xi,t_j)\,{}_{{x_L}}^{{{\rm{C}}}}{{\rm{D}}}_{\xi}^{\beta}u_{\Delta x}(\xi,t_j)\right)d\xi\,\bigg|_{x_{i-\frac12}}^{x_{i+\frac12}}\\ &\quad = \frac{1}{\Delta t}\sum\limits_{k = 0}^{N+1}u_k^{(j-1)}\int_{x_{i-\frac12}}^{x_{i+\frac12}}\phi_k(x)\,dx +\int_{x_{i-\frac12}}^{x_{i+\frac12}}f(x,t_j)\,dx, \end{split} \end{equation} (2.3)

    which can be further approximated to yield [7]

    \begin{equation} \begin{split} &\frac{1}{8}\left(u_{i-1}^{(j)}+6u_i^{(j)}+u_{i+1}^{(j)}\right) +\eta_{\alpha}\sum\limits_{l = 0}^{i}g_{i-l}^{(\alpha)}d_{+}(x_{l+\frac12},t_j)\left(\sum\limits_{k = l}^{N}g_{k-l}^{(\alpha)}u_k^{(j)}-a_{N-l}^{(\alpha)}u_{N+1}^{(j)}\right)\\ &\qquad +\eta_{\beta}\sum\limits_{l = i}^{N+1}g_{l-i}^{(\beta)}d_{-}(x_{l-\frac12},t_j)\left(\sum\limits_{k = 1}^{l}g_{l-k}^{(\beta)}u_k^{(j)}-a_{l-1}^{(\beta)}u_0^{(j)}\right)\\ &\quad = \frac{1}{8}\left(u_{i-1}^{(j-1)}+6u_i^{(j-1)}+u_{i+1}^{(j-1)}\right) +\frac{\Delta t}{\Delta x}\int_{x_{i-\frac12}}^{x_{i+\frac12}}f(x,t_j)\,dx,\quad 1\leq i\leq N,\ 1\leq j\leq M_t, \end{split} \end{equation} (2.4)

    where $\eta_{\alpha} = \frac{\Delta t}{2\Gamma(2-\alpha)\Delta x^{2\alpha}}$, $\eta_{\beta} = \frac{\Delta t}{2\Gamma(2-\beta)\Delta x^{2\beta}}$, and

    \begin{equation} g_0^{(\gamma)} = a_0^{(\gamma)},\qquad g_k^{(\gamma)} = a_k^{(\gamma)}-a_{k-1}^{(\gamma)},\quad k = 1,2,\ldots,N, \end{equation} (2.5)

    for $\gamma = \alpha,\beta$, with

    \begin{equation*} a_0^{(\gamma)} = \left(\tfrac12\right)^{1-\gamma},\qquad a_k^{(\gamma)} = \left(k+\tfrac12\right)^{1-\gamma}-\left(k-\tfrac12\right)^{1-\gamma},\quad k = 1,2,\ldots,N. \end{equation*}
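    These weights are cheap to generate. The short sketch below (hedged: it assumes the coefficient formulas as reconstructed above) computes $a_k^{(\gamma)}$ and $g_k^{(\gamma)}$ and numerically checks the sign, monotonicity and partial-sum properties recorded later in Lemma 2.1.

```python
import numpy as np

def fv_weights(gamma, N):
    """Coefficients a_k^{(gamma)} and g_k^{(gamma)} from (2.5), 1/2 < gamma < 1."""
    k = np.arange(1, N + 1, dtype=float)
    a = np.concatenate([[0.5 ** (1 - gamma)],
                        (k + 0.5) ** (1 - gamma) - (k - 0.5) ** (1 - gamma)])
    g = np.concatenate([[a[0]], np.diff(a)])
    return a, g

# Sanity checks against Lemma 2.1: g_0 > 0; the g_k (k >= 1) are negative
# and strictly increasing; the partial sums telescope to a_p > 0.
a, g = fv_weights(0.7, 1000)
assert g[0] > 0 and np.all(g[1:] < 0) and np.all(np.diff(g[1:]) > 0)
assert np.all(np.cumsum(g) > 0)          # cumsum(g) equals a, entrywise positive
```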

    The initial and boundary conditions are

    \begin{equation*} u_k^{(0)} = u_0(x_k),\quad k = 0,1,2,\ldots,N+1;\qquad u_0^{(j)} = u_L(t_j),\quad u_{N+1}^{(j)} = u_R(t_j),\quad j = 1,2,\ldots,M_t. \end{equation*}

    Collecting all components i into a single matrix system, we obtain

    \begin{equation} \left(M+\eta_{\alpha}\tilde{G}_{\alpha}\tilde{D}^{(j)}_{+}\tilde{G}_{\alpha}^{\intercal} +\eta_{\beta}\tilde{G}_{\beta}\tilde{D}^{(j)}_{-}\tilde{G}_{\beta}^{\intercal}\right)u^{(j)} = Mu^{(j-1)}+\Delta t f^{(j)}+b^{(j)},\quad j = 1,2,\ldots,M_t, \end{equation} (2.6)

    where

    \begin{equation*} \tilde{D}^{(j)}_{+} = {\rm{diag}}\left(\left\{d_{+}(x_{k+\frac12},t_j)\right\}_{k = 0}^{N}\right) \quad{\text{and}}\quad \tilde{D}^{(j)}_{-} = {\rm{diag}}\left(\left\{d_{-}(x_{k+\frac12},t_j)\right\}_{k = 0}^{N}\right) \end{equation*}

    are diagonal matrices, $M = \frac{1}{8}\,{\rm{tridiag}}(1,6,1)$, $u^{(j)} = [u_1^{(j)},u_2^{(j)},\ldots,u_N^{(j)}]^{\intercal}$, and $f^{(j)} = [f_1^{(j)},f_2^{(j)},\ldots,f_N^{(j)}]^{\intercal}$ with

    \begin{equation*} f_i^{(j)} = \frac{1}{\Delta x}\int_{x_{i-\frac12}}^{x_{i+\frac12}}f(x,t_j)\,dx, \end{equation*}

    and

    \begin{equation*} b^{(j)} = \left[b_1^{(j)}+\tfrac18\left(u_0^{(j-1)}-u_0^{(j)}\right),\ b_2^{(j)},\ b_3^{(j)},\ \ldots,\ b_{N-1}^{(j)},\ b_N^{(j)}+\tfrac18\left(u_{N+1}^{(j-1)}-u_{N+1}^{(j)}\right)\right]^{\intercal} \end{equation*}

    with

    \begin{equation*} \begin{split} b_i^{(j)} & = \eta_{\alpha}\sum\limits_{l = 0}^{i}g_{i-l}^{(\alpha)}d_{+}(x_{l+\frac12},t_j)\,a_{N-l}^{(\alpha)}u_{N+1}^{(j)} -\eta_{\alpha}g_{i}^{(\alpha)}d_{+}(x_{\frac12},t_j)\,g_{0}^{(\alpha)}u_{0}^{(j)}\\ &\quad +\eta_{\beta}\sum\limits_{l = i}^{N+1}g_{l-i}^{(\beta)}d_{-}(x_{l-\frac12},t_j)\,a_{l-1}^{(\beta)}u_{0}^{(j)} -\eta_{\beta}g_{N-i+1}^{(\beta)}d_{-}(x_{N+\frac12},t_j)\,g_{0}^{(\beta)}u_{N+1}^{(j)},\quad i = 1,2,\ldots,N. \end{split} \end{equation*}

    The matrices $\tilde{G}_{\alpha}$, $\tilde{G}_{\beta}$ are $N$-by-$(N+1)$ Toeplitz matrices defined by

    \begin{equation*} \tilde{G}_{\alpha} = \begin{bmatrix} g^{(\alpha)}_1 & g^{(\alpha)}_0 & & &\\ g^{(\alpha)}_2 & g^{(\alpha)}_1 & g^{(\alpha)}_0 & &\\ \vdots & & \ddots & \ddots &\\ g^{(\alpha)}_N & \cdots & g^{(\alpha)}_2 & g^{(\alpha)}_1 & g^{(\alpha)}_0 \end{bmatrix},\qquad \tilde{G}_{\beta} = \begin{bmatrix} g^{(\beta)}_0 & g^{(\beta)}_1 & g^{(\beta)}_2 & \cdots & g^{(\beta)}_N\\ & g^{(\beta)}_0 & g^{(\beta)}_1 & \ddots & \vdots\\ & & \ddots & \ddots & g^{(\beta)}_2\\ & & & g^{(\beta)}_0 & g^{(\beta)}_1 \end{bmatrix}. \end{equation*}

    We remark that the entries of $\tilde{G}_{\alpha}$ and $\tilde{G}_{\beta}$ are independent of the time level $t_j$.

    We denote the coefficient matrix by $A^{(j)}$, that is,

    \begin{equation} A^{(j)} = M+\eta_{\alpha}\tilde{G}_{\alpha}\tilde{D}^{(j)}_{+}\tilde{G}_{\alpha}^{\intercal} +\eta_{\beta}\tilde{G}_{\beta}\tilde{D}^{(j)}_{-}\tilde{G}_{\beta}^{\intercal}. \end{equation} (2.7)

    As $M$, $\tilde{D}^{(j)}_{+}$ and $\tilde{D}^{(j)}_{-}$ are symmetric positive definite, the coefficient matrix $A^{(j)}$ is symmetric positive definite too.
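    As a concrete illustration, the following routine densely assembles $A^{(j)}$ from (2.7) and confirms symmetric positive definiteness via a Cholesky factorization. It is a validation sketch only, under the reconstructed scaling $\eta_{\gamma} = \Delta t/(2\Gamma(2-\gamma)\Delta x^{2\gamma})$; fast solvers never form $A^{(j)}$ explicitly.

```python
import numpy as np
from scipy.special import gamma as Gamma

def assemble_A(N, dt, alpha, beta, d_plus, d_minus, xL=0.0, xR=1.0, t=0.0):
    """Dense assembly of the coefficient matrix (2.7), for validation only."""
    dx = (xR - xL) / (N + 1)

    def g_weights(gam):                      # g_k^{(gamma)} from (2.5)
        k = np.arange(1, N + 1, dtype=float)
        a = np.concatenate([[0.5 ** (1 - gam)],
                            (k + 0.5) ** (1 - gam) - (k - 0.5) ** (1 - gam)])
        return np.concatenate([[a[0]], np.diff(a)])

    ga, gb = g_weights(alpha), g_weights(beta)
    Ga, Gb = np.zeros((N, N + 1)), np.zeros((N, N + 1))
    for i in range(N):
        Ga[i, :i + 2] = ga[i + 1::-1]        # row i of \tilde G_alpha
        Gb[i, i:] = gb[:N + 1 - i]           # row i of \tilde G_beta
    x_half = xL + (np.arange(N + 1) + 0.5) * dx
    Dp, Dm = np.diag(d_plus(x_half, t)), np.diag(d_minus(x_half, t))
    M = (6.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) / 8.0
    eta_a = dt / (2.0 * Gamma(2.0 - alpha) * dx ** (2.0 * alpha))
    eta_b = dt / (2.0 * Gamma(2.0 - beta) * dx ** (2.0 * beta))
    return M + eta_a * Ga @ Dp @ Ga.T + eta_b * Gb @ Dm @ Gb.T

# Symmetric positive definiteness: Cholesky succeeds
A = assemble_A(64, 1e-2, 0.6, 0.7, lambda x, t: 1 + x, lambda x, t: 1 + x**2)
np.linalg.cholesky(A)
```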

    For the entries of $\tilde{G}_{\alpha}$ and $\tilde{G}_{\beta}$, we have the following result, which follows directly from Lemma 2.3 in [15].

    Lemma 2.1. Let $g_k^{(\gamma)}$ be defined by (2.5) with $\frac12<\gamma<1$. Then

    (1) $g_0^{(\gamma)}>0$ and $g_1^{(\gamma)}<g_2^{(\gamma)}<\cdots<g_k^{(\gamma)}<\cdots<0$;

    (2) $|g_k^{(\gamma)}|<\frac{c_{\gamma}}{(k-1)^{\gamma+1}}$ for $k = 2,3,\ldots$, where $c_{\gamma} = \gamma(1-\gamma)$;

    (3) $\lim\limits_{k\to\infty}|g_k^{(\gamma)}| = 0$, $\sum\limits_{k = 0}^{\infty}g_k^{(\gamma)} = 0$, and $\sum\limits_{k = 0}^{p}g_k^{(\gamma)}>0$ for $p\geq 1$.

    As the coefficient matrix is symmetric positive definite, we can apply the CG method to solve the linear systems (2.6). In order to improve the performance and reliability of the CG method, preconditioning is necessary. In this section, we develop the approximate inverse preconditioner for the linear system (2.6).

    For the convenience of analysis, we omit the superscript "(j)" for the jth time step, that is, we denote the coefficient matrix (2.7) as

    \begin{equation} A = M+\eta_{\alpha}\tilde{G}_{\alpha}\tilde{D}_{+}\tilde{G}_{\alpha}^{\intercal} +\eta_{\beta}\tilde{G}_{\beta}\tilde{D}_{-}\tilde{G}_{\beta}^{\intercal}, \end{equation} (3.1)

    where $\tilde{D}_{+} = \tilde{D}^{(j)}_{+}$ and $\tilde{D}_{-} = \tilde{D}^{(j)}_{-}$.

    Let $g_{\alpha}$ and $G_{\alpha}$ be the first column and the last $N$ columns of $\tilde{G}_{\alpha}$, respectively, that is, $\tilde{G}_{\alpha} = [g_{\alpha},G_{\alpha}]$, where

    \begin{equation*} g_{\alpha} = \begin{bmatrix} g^{(\alpha)}_1\\ g^{(\alpha)}_2\\ \vdots\\ g^{(\alpha)}_N \end{bmatrix}\in\mathbb{R}^{N} \quad{\text{and}}\quad G_{\alpha} = \begin{bmatrix} g^{(\alpha)}_0 & & &\\ g^{(\alpha)}_1 & g^{(\alpha)}_0 & &\\ \vdots & \ddots & \ddots &\\ g^{(\alpha)}_{N-1} & \cdots & g^{(\alpha)}_1 & g^{(\alpha)}_0 \end{bmatrix}\in\mathbb{R}^{N\times N}. \end{equation*}

    Analogously, we denote by $g_{\beta}$ and $G_{\beta}$ the last column and the first $N$ columns of $\tilde{G}_{\beta}$, respectively, that is, $\tilde{G}_{\beta} = [G_{\beta},g_{\beta}]$, where

    \begin{equation*} g_{\beta} = \begin{bmatrix} g^{(\beta)}_N\\ g^{(\beta)}_{N-1}\\ \vdots\\ g^{(\beta)}_1 \end{bmatrix}\in\mathbb{R}^{N} \quad{\text{and}}\quad G_{\beta} = \begin{bmatrix} g^{(\beta)}_0 & g^{(\beta)}_1 & \cdots & g^{(\beta)}_{N-1}\\ & g^{(\beta)}_0 & \ddots & \vdots\\ & & \ddots & g^{(\beta)}_1\\ & & & g^{(\beta)}_0 \end{bmatrix}\in\mathbb{R}^{N\times N}. \end{equation*}

    Then we have

    \begin{equation*} \begin{split} A & = M+\eta_{\alpha}[g_{\alpha},G_{\alpha}]\tilde{D}_{+}[g_{\alpha},G_{\alpha}]^{\intercal} +\eta_{\beta}[G_{\beta},g_{\beta}]\tilde{D}_{-}[G_{\beta},g_{\beta}]^{\intercal}\\ & = M+\eta_{\alpha}G_{\alpha}\hat{D}_{+}G_{\alpha}^{\intercal}+\eta_{\beta}G_{\beta}\hat{D}_{-}G_{\beta}^{\intercal} +\eta_{\alpha}d_{+}(x_{\frac12})\,g_{\alpha}g_{\alpha}^{\intercal}+\eta_{\beta}d_{-}(x_{N+\frac12})\,g_{\beta}g_{\beta}^{\intercal}, \end{split} \end{equation*}

    where

    \begin{equation*} \hat{D}_{+} = {\rm{diag}}\left(\left\{d_{+}(x_{k+\frac12})\right\}_{k = 1}^{N}\right),\qquad \hat{D}_{-} = {\rm{diag}}\left(\left\{d_{-}(x_{k+\frac12})\right\}_{k = 0}^{N-1}\right). \end{equation*}

    Therefore, A can be written as

    \begin{equation} A = \hat{A}+UU^{\intercal}, \end{equation} (3.2)

    where

    \begin{equation} \hat{A} = M+\eta_{\alpha}G_{\alpha}\hat{D}_{+}G_{\alpha}^{\intercal}+\eta_{\beta}G_{\beta}\hat{D}_{-}G_{\beta}^{\intercal} \end{equation} (3.3)

    and

    \begin{equation} U = \left[\sqrt{\eta_{\alpha}d_{+}(x_{\frac12})}\,g_{\alpha},\ \sqrt{\eta_{\beta}d_{-}(x_{N+\frac12})}\,g_{\beta}\right]\in\mathbb{R}^{N\times 2}. \end{equation} (3.4)

    According to the Sherman-Morrison-Woodbury formula, we have

    \begin{equation} A^{-1} = (\hat{A}+UU^{\intercal})^{-1} = \hat{A}^{-1}-\hat{A}^{-1}U\left(I+U^{\intercal}\hat{A}^{-1}U\right)^{-1}U^{\intercal}\hat{A}^{-1}. \end{equation} (3.5)

    In the following, we consider the approximation of $\hat{A}^{-1}$. To this end, we first define a matrix $\tilde{A}$ with

    \begin{equation} \tilde{A} = M+\eta_{\alpha}G_{\alpha}D_{+}G_{\alpha}^{\intercal}+\eta_{\beta}G_{\beta}D_{-}G_{\beta}^{\intercal}, \end{equation} (3.6)

    where $D_{+} = {\rm{diag}}\left(\{d_{+}(x_k)\}_{k = 1}^{N}\right)$ and $D_{-} = {\rm{diag}}\left(\{d_{-}(x_k)\}_{k = 1}^{N}\right)$.

    Assume $d_{+}(x), d_{-}(x)\in C[x_L,x_R]$. Then it is easy to see that, for any $\epsilon>0$, when the spatial grid size $\Delta x$ is small enough we have

    \begin{equation*} |d_{+}(x_k)-d_{+}(x_{k\pm\frac12})|<\epsilon\quad{\text{and}}\quad|d_{-}(x_k)-d_{-}(x_{k\pm\frac12})|<\epsilon. \end{equation*}

    Meanwhile, we learn from Lemma 2.1 that there exist $c_{\alpha}, c_{\beta}>0$ such that $\|G_{\alpha}\|_2\leq c_{\alpha}$ and $\|G_{\beta}\|_2\leq c_{\beta}$. Let

    \begin{equation} S_1 = \hat{A}-\tilde{A}. \end{equation} (3.7)

    Then it holds that

    \begin{equation*} \|S_1\|_2 = \left\|\eta_{\alpha}G_{\alpha}(\hat{D}_{+}-D_{+})G_{\alpha}^{\intercal}+\eta_{\beta}G_{\beta}(\hat{D}_{-}-D_{-})G_{\beta}^{\intercal}\right\|_2 \leq\left(\eta_{\alpha}c_{\alpha}^2+\eta_{\beta}c_{\beta}^2\right)\epsilon. \end{equation*}

    This indicates that $\tilde{A}$ can be a good approximation of $\hat{A}$ when $\Delta x$ is very small. However, it is not easy to compute the inverse of $\tilde{A}$.

    Motivated by the symmetric approximate inverse preconditioner proposed in [11], we consider the following approximations

    \begin{equation*} \tilde{A}^{-1/2}e_i\approx K_i^{-1/2}e_i,\quad i = 1,2,\ldots,N, \end{equation*}

    where $e_i$ is the $i$-th column of the identity matrix $I$ and

    \begin{equation} K_i = M+\eta_{\alpha}d_{+}(x_i)G_{\alpha}G_{\alpha}^{\intercal}+\eta_{\beta}d_{-}(x_i)G_{\beta}G_{\beta}^{\intercal}, \end{equation} (3.8)

    which is symmetric positive definite. That is, we approximate the $i$-th column of $\tilde{A}^{-1/2}$ by the $i$-th column of $K_i^{-1/2}$. Then we propose the following symmetric approximate inverse preconditioner

    \begin{equation} P_1^{-1} = \left(\sum\limits_{i = 1}^{N}K_i^{-1/2}e_ie_i^{\intercal}\right)^{*}\left(\sum\limits_{i = 1}^{N}K_i^{-1/2}e_ie_i^{\intercal}\right). \end{equation} (3.9)

    Although $K_i$ is symmetric positive definite, it is not easy to compute the inverse of its square root. Hence we further approximate the $K_i$ by circulant matrices, whose inverse square roots can be easily obtained by the FFT.

    Let $C_M$, $C_{\alpha}$ and $C_{\beta}$ be Strang's circulant approximations [5] of $M$, $G_{\alpha}$ and $G_{\beta}$, respectively. Then we obtain the following preconditioner

    \begin{equation} P_2^{-1} = \left(\sum\limits_{i = 1}^{N}C_i^{-1/2}e_ie_i^{\intercal}\right)^{*}\left(\sum\limits_{i = 1}^{N}C_i^{-1/2}e_ie_i^{\intercal}\right), \end{equation} (3.10)

    where $C_i = C_M+\eta_{\alpha}d_{+}(x_i)C_{\alpha}C_{\alpha}^{\intercal}+\eta_{\beta}d_{-}(x_i)C_{\beta}C_{\beta}^{\intercal}$ is a circulant matrix.
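    For readers who want to see this step concretely, here is a hedged sketch (the helper names `strang_col` and `eigs_Ci` are ours, not from [5] or [7]): it builds the first column of the Strang circulant of a Toeplitz matrix and evaluates the eigenvalues of $C_i$ with FFTs, using the fact that a real circulant with first column $c$ has eigenvalues ${\rm{fft}}(c)$, so $C_{\alpha}C_{\alpha}^{\intercal}$ contributes $|{\rm{fft}}(c_{\alpha})|^2$.

```python
import numpy as np

def strang_col(t_col, t_row):
    """First column of Strang's circulant approximation of a Toeplitz
    matrix, given the Toeplitz first column and first row."""
    n = len(t_col)
    half = n // 2
    c = np.zeros(n)
    c[:half + 1] = t_col[:half + 1]            # keep the central diagonals
    c[half + 1:] = t_row[n - half - 1:0:-1]    # wrap the upper diagonals around
    return c

def eigs_Ci(cM, ca, cb, eta_a, eta_b, dp_i, dm_i):
    """Eigenvalues of C_i = C_M + eta_a d_+(x_i) C_a C_a^T
    + eta_b d_-(x_i) C_b C_b^T; every circulant is diagonalized by the FFT."""
    return (np.fft.fft(cM).real
            + eta_a * dp_i * np.abs(np.fft.fft(ca)) ** 2
            + eta_b * dm_i * np.abs(np.fft.fft(cb)) ** 2)
```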

    In order to make the preconditioner more practical, similar to the ideas in [11,13], we utilize an interpolation technique. Let $\{\tilde{x}_k\}_{k = 1}^{\ell}$ be distinct interpolation points in $[x_L,x_R]$, where $\ell$ is a small integer ($\ell\ll N$), and let $\lambda = (\lambda_M,\lambda_{\alpha},\lambda_{\beta})$, where $\lambda_M$, $\lambda_{\alpha}$, $\lambda_{\beta}$ are certain positive real numbers. Then we define the function

    \begin{equation*} g_{\lambda}(x) = \left(\lambda_M+\eta_{\alpha}\lambda_{\alpha}d_{+}(x)+\eta_{\beta}\lambda_{\beta}d_{-}(x)\right)^{-1/2},\quad x\in[x_L,x_R]. \end{equation*}

    Let

    \begin{equation*} q_{\lambda}(x) = \phi_1(x)g_{\lambda}(\tilde{x}_1)+\phi_2(x)g_{\lambda}(\tilde{x}_2)+\cdots+\phi_{\ell}(x)g_{\lambda}(\tilde{x}_{\ell}) \end{equation*}

    be the piecewise-linear interpolant of $g_{\lambda}(x)$ based on the points $\{(\tilde{x}_k,g_{\lambda}(\tilde{x}_k))\}_{k = 1}^{\ell}$. Define

    \begin{equation*} \tilde{C}_k = C_M+\eta_{\alpha}d_{+}(\tilde{x}_k)C_{\alpha}C_{\alpha}^{\intercal}+\eta_{\beta}d_{-}(\tilde{x}_k)C_{\beta}C_{\beta}^{\intercal},\quad k = 1,2,\ldots,\ell. \end{equation*}

    Then $\tilde{C}_k$ is symmetric positive definite and can be diagonalized by the FFT, that is,

    \begin{equation*} \tilde{C}_k = F^{*}\tilde{\Lambda}_kF, \end{equation*}

    where $F$ is the Fourier matrix and $\tilde{\Lambda}_k$ is a diagonal matrix whose diagonal entries are the eigenvalues of $\tilde{C}_k$. By making use of the interpolation technique, we approximate $C_i^{-1/2}$ by

    \begin{equation*} C_i^{-1/2}\approx F^{*}\left(\sum\limits_{k = 1}^{\ell}\phi_k(x_i)\tilde{\Lambda}_k^{-1/2}\right)F = \sum\limits_{k = 1}^{\ell}\phi_k(x_i)\tilde{C}_k^{-1/2},\quad i = 1,2,\ldots,N. \end{equation*}

    By substituting these approximations into $P_2^{-1}$, we obtain the practical preconditioner

    \begin{equation} \begin{split} P_3^{-1} & = \left(\sum\limits_{i = 1}^{N}F^{*}\sum\limits_{k = 1}^{\ell}\phi_k(x_i)\tilde{\Lambda}_k^{-1/2}F\,e_ie_i^{\intercal}\right)^{*} \left(\sum\limits_{i = 1}^{N}F^{*}\sum\limits_{k = 1}^{\ell}\phi_k(x_i)\tilde{\Lambda}_k^{-1/2}F\,e_ie_i^{\intercal}\right)\\ & = \left(\sum\limits_{i = 1}^{N}\sum\limits_{k = 1}^{\ell}\phi_k(x_i)\,e_ie_i^{\intercal}F^{*}\tilde{\Lambda}_k^{-1/2}\right) \left(\sum\limits_{i = 1}^{N}\sum\limits_{k = 1}^{\ell}\tilde{\Lambda}_k^{-1/2}F\,\phi_k(x_i)\,e_ie_i^{\intercal}\right)\\ & = \left(\sum\limits_{k = 1}^{\ell}\sum\limits_{i = 1}^{N}\phi_k(x_i)\,e_ie_i^{\intercal}F^{*}\tilde{\Lambda}_k^{-1/2}\right) \left(\sum\limits_{k = 1}^{\ell}\tilde{\Lambda}_k^{-1/2}F\sum\limits_{i = 1}^{N}\phi_k(x_i)\,e_ie_i^{\intercal}\right)\\ & = \left(\sum\limits_{k = 1}^{\ell}\Phi_kF^{*}\tilde{\Lambda}_k^{-1/2}\right) \left(\sum\limits_{k = 1}^{\ell}\tilde{\Lambda}_k^{-1/2}F\,\Phi_k\right), \end{split} \end{equation} (3.11)

    where $\Phi_k = {\rm{diag}}\left(\phi_k(x_1),\phi_k(x_2),\ldots,\phi_k(x_N)\right)$. Applying $P_3^{-1}$ to a vector now requires about $\mathcal{O}(\ell N\log N)$ operations, which is acceptable for a moderate number $\ell$.
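    The two-factor form in (3.11) translates directly into code. The sketch below is an illustration under our reconstruction of (3.11): `Phi[k]` holds the weights $\phi_k(x_i)$ and `Lam[k]` the eigenvalues of $\tilde{C}_k$ (e.g., from the previous sketch). It applies $P_3^{-1}$ to a vector with $2\ell$ FFTs; the unnormalized FFT/IFFT pair plays the role of $F$ and $F^{*}$ up to a scaling that cancels between the two factors.

```python
import numpy as np

def apply_P3(v, Phi, Lam):
    """Apply P_3^{-1} of (3.11) to a vector in O(ell * N log N).
    Phi: (ell, N) array of interpolation weights phi_k(x_i);
    Lam: (ell, N) array of eigenvalues of the circulants C~_k."""
    # Inner factor: sum_k Lambda_k^{-1/2} F Phi_k v
    w = sum(lam ** -0.5 * np.fft.fft(phi * v) for phi, lam in zip(Phi, Lam))
    # Outer factor: sum_k Phi_k F^* Lambda_k^{-1/2} w
    z = sum(phi * np.fft.ifft(lam ** -0.5 * w) for phi, lam in zip(Phi, Lam))
    return z.real
```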

    Finally, by replacing $\hat{A}^{-1}$ in the Sherman-Morrison-Woodbury formula (3.5) with $P_3^{-1}$, we obtain the preconditioner

    \begin{equation} P_4^{-1} = P_3^{-1}-P_3^{-1}U\left(I+U^{\intercal}P_3^{-1}U\right)^{-1}U^{\intercal}P_3^{-1}. \end{equation} (3.12)

    We remark that both $P_3$ and $P_4$ can be taken as preconditioners. Implementing $P_4^{-1}$ clearly requires more computational work than $P_3^{-1}$. However, $P_4^{-1}$ is a rank-2 update of $P_3^{-1}$, and $g_{\alpha}$, $g_{\beta}$ are independent of $t_j$; hence, when applying the preconditioner $P_4$ within the CG method, we can precompute $P_3^{-1}U$ in advance, as well as the product $U^{\intercal}P_3^{-1}U$ and the inverse of the 2-by-2 matrix $I+U^{\intercal}P_3^{-1}U$. Therefore, at each CG iteration with the preconditioner $P_4$, besides the matrix-vector product with $P_3^{-1}$, only two inner products and two vector updates are needed. It is thus expected that $P_4$ may have better performance for one-dimensional problems.
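    The precomputation just described can be packaged as follows (a sketch; `apply_P3` is assumed to be a callable implementing $v\mapsto P_3^{-1}v$, e.g., a closure over the arrays of the previous sketch): the $N\times 2$ block $P_3^{-1}U$ and the inverse of the 2-by-2 capacitance matrix are formed once, after which each application of $P_4^{-1}$ costs one $P_3^{-1}$ product plus two inner products and two vector updates.

```python
import numpy as np

def make_P4(apply_P3, U):
    """Build the action of P_4^{-1} from (3.12), precomputing the pieces
    that do not change with the time step (U is independent of t_j)."""
    PU = np.column_stack([apply_P3(U[:, 0]), apply_P3(U[:, 1])])  # P_3^{-1} U
    S = np.linalg.inv(np.eye(2) + U.T @ PU)     # inverse 2x2 capacitance matrix

    def apply_P4(v):
        y = apply_P3(v)
        return y - PU @ (S @ (U.T @ y))         # rank-2 correction
    return apply_P4
```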

    Since $P_4^{-1}$ is obtained by substituting $P_3^{-1}$ for $\hat{A}^{-1}$ in the expression of $A^{-1}$, how well $P_4^{-1}$ approximates $A^{-1}$ depends on how close $P_3^{-1}$ is to $\hat{A}^{-1}$. Therefore, in this section, we study the difference between $P_3^{-1}$ and $\hat{A}^{-1}$.

    We first introduce the off-diagonal decay property [20], which is crucial for studying the circulant approximation of the Toeplitz matrix.

    Definition 4.1. Let $A = [a_{i,j}]_{i,j\in I}$ be a matrix, where the index set is $I = \mathbb{Z}$, $\mathbb{N}$ or $\{1,2,\ldots,N\}$. We say $A$ belongs to the class $L_s$ if

    \begin{equation} |a_{i,j}|\leq\frac{c}{(1+|i-j|)^{s}} \end{equation} (4.1)

    holds for $s>1$ and some constant $c>0$, and say $A$ belongs to the class $E_r$ if

    \begin{equation} |a_{i,j}|\leq c\,e^{-r|i-j|} \end{equation} (4.2)

    holds for $r>0$ and some constant $c>0$.

    The following results hold for the off-diagonal decay matrix classes $L_s$ and $E_r$ [3,11,15,20].

    Lemma 4.1. Let $A = [a_{i,j}]_{i,j\in I}$ be a nonsingular matrix, where the index set is $I = \mathbb{Z}$, $\mathbb{N}$ or $\{1,2,\ldots,N\}$.

    (1) If $A\in L_s$ for some $s>1$, then $A^{-1}\in L_s$.

    (2) If $A\in L_{1+s_1}$ and $B\in L_{1+s_2}$ are finite matrices, then $AB\in L_{1+s}$, where $s = \min\{s_1,s_2\}$.

    (3) If $A\in E_{r_1}$ and $B\in E_{r_2}$ are finite matrices, then $AB\in E_r$ for some constant $0<r<\min\{r_1,r_2\}$.

    (4) Let $A$ be a banded finite matrix that is Hermitian positive definite, and let $f$ be an analytic function on $[\lambda_{\min}(A),\lambda_{\max}(A)]$ such that $f(\lambda)$ is real for real $\lambda$, where $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimal and maximal eigenvalues of $A$, respectively. Then $f(A)$ has the off-diagonal exponential decay property (4.2). In particular, taking $f(x) = x^{-1}$, $x^{1/2}$ and $x^{-1/2}$, respectively, shows that $A^{-1}$, $A^{1/2}$ and $A^{-1/2}$ have the off-diagonal exponential decay property (4.2).

    Assume that $d_{+}(x), d_{-}(x)\in C[x_L,x_R]$. It then follows from Lemma 2.1 and Lemma 4.1 that

    \begin{equation*} G_{\alpha},\ G_{\alpha}G_{\alpha}^{\intercal},\ G_{\alpha}D_{+}G_{\alpha}^{\intercal}\in L_{1+\alpha} \quad{\text{and}}\quad G_{\beta},\ G_{\beta}G_{\beta}^{\intercal},\ G_{\beta}D_{-}G_{\beta}^{\intercal}\in L_{1+\beta}, \end{equation*}

    for $\frac12<\alpha,\beta<1$. Therefore, it holds that $\tilde{A},\tilde{A}^{-1}\in L_{1+\min\{\alpha,\beta\}}$.

    Given an integer $m$, let $G_{\alpha,m}$ and $G_{\beta,m}$ be the $(2m+1)$-banded approximations of $G_{\alpha}$ and $G_{\beta}$, respectively, that is,

    \begin{equation*} G_{\alpha,m}(i,j) = \begin{cases} G_{\alpha}(i,j), & |i-j|\leq m,\\ 0, & {\text{otherwise}}, \end{cases} \qquad G_{\beta,m}(i,j) = \begin{cases} G_{\beta}(i,j), & |i-j|\leq m,\\ 0, & {\text{otherwise}}. \end{cases} \end{equation*}

    In the following discussion, we let

    \begin{equation} K_i = M+\eta_{\alpha}d_{+}(x_i)G_{\alpha,m}G_{\alpha,m}^{\intercal}+\eta_{\beta}d_{-}(x_i)G_{\beta,m}G_{\beta,m}^{\intercal},\quad i = 1,2,\ldots,N, \end{equation} (4.3)

    which can be regarded as a banded approximation of the $K_i$ defined in (3.8). We remark that, in actual applications, we still employ the $K_i$ in (3.8) to construct the preconditioners.

    Define

    \begin{equation} \tilde{A}_m = M+\eta_{\alpha}G_{\alpha,m}D_{+}G_{\alpha,m}^{\intercal}+\eta_{\beta}G_{\beta,m}D_{-}G_{\beta,m}^{\intercal}. \end{equation} (4.4)

    As $\tilde{A}_m$ and $K_i$ are banded and symmetric positive definite, it follows from Lemma 4.1 that $\tilde{A}_m^{-1}$, $K_i^{-1}$ and $K_i^{-1/2}$ have the off-diagonal exponential decay property. Meanwhile, we have

    \begin{equation*} \lambda_{\min}(\tilde{A})\geq\lambda_{\min}(M)\geq\frac12,\qquad \lambda_{\min}(\tilde{A}_m)\geq\lambda_{\min}(M)\geq\frac12, \end{equation*}

    and hence

    \begin{equation*} \|\tilde{A}^{-1}\|_2\leq 2\quad{\text{and}}\quad\|\tilde{A}_m^{-1}\|_2\leq 2. \end{equation*}

    For a given $\epsilon>0$, it follows from Theorem 12 in [7] that there exists an integer $m_1>0$ such that for all $m\geq m_1$ we have

    \begin{equation*} \|\tilde{A}_m-\tilde{A}\|_2\leq\epsilon/4. \end{equation*}

    Therefore, it holds that

    \begin{equation} \|\tilde{A}_m^{-1}-\tilde{A}^{-1}\|_2 = \|\tilde{A}_m^{-1}(\tilde{A}-\tilde{A}_m)\tilde{A}^{-1}\|_2 \leq\|\tilde{A}_m^{-1}\|_2\,\|\tilde{A}_m-\tilde{A}\|_2\,\|\tilde{A}^{-1}\|_2\leq\epsilon, \end{equation} (4.5)

    and

    \begin{equation*} \|P_1^{-1}-\tilde{A}^{-1}\|_2\leq\|P_1^{-1}-\tilde{A}_m^{-1}\|_2+\|\tilde{A}_m^{-1}-\tilde{A}^{-1}\|_2 \leq\|P_1^{-1}-\tilde{A}_m^{-1}\|_2+\epsilon. \end{equation*}

    Now we turn to estimating an upper bound for $\|P_1^{-1}-\tilde{A}_m^{-1}\|_2$. Since both $P_1^{-1}$ and $\tilde{A}_m^{-1}$ are symmetric, we have

    \begin{equation} \begin{split} \|P_1^{-1}-\tilde{A}_m^{-1}\|_2 & = \rho\left(P_1^{-1}-\tilde{A}_m^{-1}\right)\leq\|P_1^{-1}-\tilde{A}_m^{-1}\|_1 = \max\limits_{1\leq j\leq N}\left\|\left(P_1^{-1}-\tilde{A}_m^{-1}\right)e_j\right\|_1\\ & \leq \max\limits_{1\leq j\leq N}\left\|\left(P_1^{-1}-K_j^{-1}\right)e_j\right\|_1 +\max\limits_{1\leq j\leq N}\left\|\left(K_j^{-1}-\tilde{A}_m^{-1}\right)e_j\right\|_1. \end{split} \end{equation} (4.6)

    For the first term, we have the following estimate.

    Lemma 4.2. Let $K_j$ be defined by (4.3). Assume that $0<d_{\min}\leq d_{+}(x), d_{-}(x)\leq d_{\max}$. Then for a given $\epsilon>0$, there exist an integer $N_1>0$ and two positive constants $c_3, c_4$ such that

    \begin{equation*} \|P_1^{-1}e_j-K_j^{-1}e_j\|_1<c_3\max\{\Delta_{N_1}d_{+,j},\Delta_{N_1}d_{-,j}\}+c_4\epsilon, \end{equation*}

    where $d_{\pm,k} = d_{\pm}(x_k)$, $\Delta_{N_1}d_{+,j} = \max\limits_{j-N_1\leq k\leq j+N_1}|d_{+,k}-d_{+,j}|$ and $\Delta_{N_1}d_{-,j} = \max\limits_{j-N_1\leq k\leq j+N_1}|d_{-,k}-d_{-,j}|$.

    Proof. We have

    \begin{equation} \begin{split} P_1^{-1}e_j-K_j^{-1}e_j & = \left(\sum\limits_{i = 1}^{N}K_i^{-1/2}e_ie_i^{\intercal}\right)^{*}K_j^{-1/2}e_j-K_j^{-1}e_j\\ & = \sum\limits_{i = 1}^{N}e_ie_i^{\intercal}K_i^{-1/2}K_j^{-1/2}e_j-\sum\limits_{i = 1}^{N}e_ie_i^{\intercal}K_j^{-1}e_j = \sum\limits_{i = 1}^{N}e_ie_i^{\intercal}\left(K_i^{-1/2}-K_j^{-1/2}\right)K_j^{-1/2}e_j. \end{split} \end{equation} (4.7)

    As was shown in Theorem 2.2 of [3], for a given $\epsilon>0$, we can find a polynomial $p_k(t) = \sum_{\ell = 0}^{k}a_{\ell}t^{\ell}$ of degree $k$ such that

    \begin{equation} \left|\left[K_i^{-1/2}\right]_{l,r}-\left[p_k(K_i)\right]_{l,r}\right| \leq\left\|K_i^{-1/2}-p_k(K_i)\right\|_2<\epsilon,\quad 1\leq i\leq N,\ 1\leq l,r\leq N, \end{equation} (4.8)

    where $[\cdot]_{l,r}$ denotes the $(l,r)$-th entry of a matrix. Then we can write (4.7) as

    \begin{equation} \begin{split} P_1^{-1}e_j-K_j^{-1}e_j & = \sum\limits_{i = 1}^{N}e_ie_i^{\intercal}\left(K_i^{-1/2}-p_k(K_i)+p_k(K_i)-p_k(K_j)+p_k(K_j)-K_j^{-1/2}\right)K_j^{-1/2}e_j\\ & = \sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(1)}_{ij}e_j+\sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(2)}_{ij}e_j+\sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(3)}_{ij}e_j, \end{split} \end{equation} (4.9)

    where

    \begin{equation*} H^{(1)}_{ij} = \left(K_i^{-1/2}-p_k(K_i)\right)K_j^{-1/2},\qquad H^{(2)}_{ij} = \left(p_k(K_i)-p_k(K_j)\right)K_j^{-1/2},\qquad H^{(3)}_{ij} = \left(p_k(K_j)-K_j^{-1/2}\right)K_j^{-1/2}. \end{equation*}

    Note that $p_k(K_i)$ is a $(2km+1)$-banded matrix and $K_j^{-1/2}$ has the off-diagonal exponential decay property, so $K_i^{-1/2}-p_k(K_i)$ also has the off-diagonal exponential decay property. According to Lemma 4.1, $H^{(1)}_{ij}$ then has the off-diagonal exponential decay property. Hence, analogous to the proof of Lemma 3.8 in [11], we can find constants $\hat{c}>0$ and $\hat{r}>0$ such that

    \begin{equation} \left|\left[H^{(1)}_{ij}\right]_{l,r}\right|\leq\hat{c}\,e^{-\hat{r}|l-r|}\quad{\text{for}}\ l,r = 1,2,\ldots,N. \end{equation} (4.10)

    On the other hand, we denote the $j$-th column of $H^{(1)}_{ij}$ by $[h_{1,j},h_{2,j},\ldots,h_{N,j}]^{\intercal}$, and let $\tilde{K}_i = K_i^{-1/2}-p_k(K_i)$. Observe that

    \begin{equation*} h_{l,j} = \sum\limits_{r = 1}^{N}\tilde{K}_i(l,r)\,K_j^{-1/2}(r,j),\quad l = 1,2,\ldots,N. \end{equation*}

    Then, based on (4.8) and the fact that $K_j^{-1/2}$ has the off-diagonal exponential decay property, we can show that there exists a constant $\hat{c}_1$ such that

    \begin{equation} |h_{l,j}|<\hat{c}_1\epsilon,\quad l = 1,2,\ldots,N. \end{equation} (4.11)

    Denote

    \begin{equation*} \sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(1)}_{ij}e_j = \left[H^{(1)}_{1j}(1,j),H^{(1)}_{2j}(2,j),\ldots,H^{(1)}_{Nj}(N,j)\right]^{\intercal}. \end{equation*}

    From (4.10) and (4.11), we can show that there exists a constant $c_1>0$ such that

    \begin{equation} \left\|\sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(1)}_{ij}e_j\right\|_1 = \sum\limits_{i = 1}^{N}\left|H^{(1)}_{ij}(i,j)\right|<c_1\epsilon. \end{equation} (4.12)

    Analogously, we can show that there exists a constant $c_2>0$ such that

    \begin{equation} \left\|\sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(3)}_{ij}e_j\right\|_1<c_2\epsilon. \end{equation} (4.13)

    Now we turn to $H^{(2)}_{ij}$. It follows from the proof of Lemma 3.11 in [11] that $p_k(K_i)-p_k(K_j)$ has the form

    \begin{equation} H^{(2)}_{ij} = \sum\limits_{\ell = 1}^{k}a_{\ell}\left(\sum\limits_{q = 0}^{\ell-1}K_i^{\ell-1-q}\left(K_i-K_j\right)K_j^{q}\right)K_j^{-1/2} = (d_{+,i}-d_{+,j})\tilde{H}^{(2)}_{ij}+(d_{-,i}-d_{-,j})\hat{H}^{(2)}_{ij}, \end{equation} (4.14)

    where

    \begin{equation*} \tilde{H}^{(2)}_{ij} = \left(\sum\limits_{\ell = 1}^{k}\sum\limits_{q = 0}^{\ell-1}a_{\ell}\,\eta_{\alpha}K_i^{\ell-1-q}G_{\alpha,m}G_{\alpha,m}^{\intercal}K_j^{q}\right)K_j^{-1/2} \quad{\text{and}}\quad \hat{H}^{(2)}_{ij} = \left(\sum\limits_{\ell = 1}^{k}\sum\limits_{q = 0}^{\ell-1}a_{\ell}\,\eta_{\beta}K_i^{\ell-1-q}G_{\beta,m}G_{\beta,m}^{\intercal}K_j^{q}\right)K_j^{-1/2}. \end{equation*}

    It is easy to see that both $\tilde{H}^{(2)}_{ij}$ and $\hat{H}^{(2)}_{ij}$ have the off-diagonal exponential decay property. As

    \begin{equation*} \sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(2)}_{ij}e_j = \left[(d_{+,1}-d_{+,j})\tilde{H}^{(2)}_{1j}(1,j),\ldots,(d_{+,N}-d_{+,j})\tilde{H}^{(2)}_{Nj}(N,j)\right]^{\intercal} +\left[(d_{-,1}-d_{-,j})\hat{H}^{(2)}_{1j}(1,j),\ldots,(d_{-,N}-d_{-,j})\hat{H}^{(2)}_{Nj}(N,j)\right]^{\intercal}, \end{equation*}

    there exists an integer $N_1$ such that

    \begin{equation} \begin{split} \left\|\sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(2)}_{ij}e_j\right\|_1 & \leq \sum\limits_{k = 1}^{N}|d_{+,k}-d_{+,j}|\left|\tilde{H}^{(2)}_{kj}(k,j)\right| +\sum\limits_{k = 1}^{N}|d_{-,k}-d_{-,j}|\left|\hat{H}^{(2)}_{kj}(k,j)\right|\\ & < \sum\limits_{k = j-N_1}^{j+N_1}|d_{+,k}-d_{+,j}|\left|\tilde{H}^{(2)}_{kj}(k,j)\right| +\sum\limits_{k = j-N_1}^{j+N_1}|d_{-,k}-d_{-,j}|\left|\hat{H}^{(2)}_{kj}(k,j)\right|+4d_{\max}\epsilon\\ & \leq \tilde{c}\max\{\Delta_{N_1}d_{+,j},\Delta_{N_1}d_{-,j}\}+4d_{\max}\epsilon, \end{split} \end{equation} (4.15)

    where $\Delta_{N_1}d_{+,j} = \max\limits_{j-N_1\leq k\leq j+N_1}|d_{+,k}-d_{+,j}|$ and $\Delta_{N_1}d_{-,j} = \max\limits_{j-N_1\leq k\leq j+N_1}|d_{-,k}-d_{-,j}|$.

    Let $c_3 = \tilde{c}$ and $c_4 = c_1+c_2+4d_{\max}$. It follows from (4.12), (4.13) and (4.15) that

    \begin{equation*} \begin{split} \|P_1^{-1}e_j-K_j^{-1}e_j\|_1 & \leq \left\|\sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(1)}_{ij}e_j\right\|_1 +\left\|\sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(2)}_{ij}e_j\right\|_1 +\left\|\sum\limits_{i = 1}^{N}e_ie_i^{\intercal}H^{(3)}_{ij}e_j\right\|_1\\ & < (c_1+c_2)\epsilon+\tilde{c}\max\{\Delta_{N_1}d_{+,j},\Delta_{N_1}d_{-,j}\}+4d_{\max}\epsilon = c_3\max\{\Delta_{N_1}d_{+,j},\Delta_{N_1}d_{-,j}\}+c_4\epsilon. \end{split} \end{equation*}

    Now we consider the second term in (4.6).

    Lemma 4.3. Let $K_j$ and $\tilde{A}_m$ be defined by (4.3) and (4.4), respectively. Assume that $0<d_{\min}\leq d_{+}(x), d_{-}(x)\leq d_{\max}$. Then for a given $\epsilon>0$, there exist an integer $N_2>0$ and constants $c_5, c_6>0$ such that

    \begin{equation} \|(K_j^{-1}-\tilde{A}_m^{-1})e_j\|_1\leq c_5\max\limits_{1\leq k\leq N}\max\{\Delta_{N_2}d_{+,k},\Delta_{N_2}d_{-,k}\}+c_6\epsilon. \end{equation} (4.16)

    Proof. Let

    \begin{equation*} \hat{A}_m = M+\eta_{\alpha}G_{\alpha,m}G_{\alpha,m}^{\intercal}D_{+}+\eta_{\beta}G_{\beta,m}G_{\beta,m}^{\intercal}D_{-}; \end{equation*}

    then we have

    \begin{equation} \|(K_j^{-1}-\tilde{A}_m^{-1})e_j\|_1\leq\|(K_j^{-1}-\hat{A}_m^{-1})e_j\|_1+\|(\hat{A}_m^{-1}-\tilde{A}_m^{-1})e_j\|_1. \end{equation} (4.17)

    On the one hand, observe that

    \begin{equation*} K_j^{-1}-\hat{A}_m^{-1} = \hat{A}_m^{-1}(\hat{A}_m-K_j)K_j^{-1} = \eta_{\alpha}\hat{A}_m^{-1}G_{\alpha,m}G_{\alpha,m}^{\intercal}(D_{+}-d_{+,j}I)K_j^{-1} +\eta_{\beta}\hat{A}_m^{-1}G_{\beta,m}G_{\beta,m}^{\intercal}(D_{-}-d_{-,j}I)K_j^{-1}, \end{equation*}

    we have

    \begin{equation*} \|(K_j^{-1}-\hat{A}_m^{-1})e_j\|_1\leq \eta_{\alpha}\|\hat{A}_m^{-1}\|_1\|G_{\alpha,m}\|_1^2\,\|(D_{+}-d_{+,j}I)K_j^{-1}e_j\|_1 +\eta_{\beta}\|\hat{A}_m^{-1}\|_1\|G_{\beta,m}\|_1^2\,\|(D_{-}-d_{-,j}I)K_j^{-1}e_j\|_1, \end{equation*}

    where

    \begin{equation*} \|(D_{+}-d_{+,j}I)K_j^{-1}e_j\|_1 = \sum\limits_{k = 1}^{N}\left|(d_{+,k}-d_{+,j})K_j^{-1}(k,j)\right| \end{equation*}

    and

    \begin{equation*} \|(D_{-}-d_{-,j}I)K_j^{-1}e_j\|_1 = \sum\limits_{k = 1}^{N}\left|(d_{-,k}-d_{-,j})K_j^{-1}(k,j)\right|. \end{equation*}

    As $K_j^{-1}$ has the off-diagonal exponential decay property, similar to the derivation of (4.15) we can show that, given an $\epsilon>0$, there exist an integer $\tilde{N}_1>0$ and a constant $\tilde{c}_1>0$ such that

    \begin{equation*} \|(D_{+}-d_{+,j}I)K_j^{-1}e_j\|_1\leq\tilde{c}_1\Delta_{\tilde{N}_1}d_{+,j}+2d_{\max}\epsilon, \end{equation*}

    and

    \begin{equation*} \|(D_{-}-d_{-,j}I)K_j^{-1}e_j\|_1\leq\tilde{c}_1\Delta_{\tilde{N}_1}d_{-,j}+2d_{\max}\epsilon. \end{equation*}

    Therefore, there exist positive constants $\tilde{c}_2$ and $\tilde{c}_3$ such that

    \begin{equation} \|(K_j^{-1}-\hat{A}_m^{-1})e_j\|_1\leq\tilde{c}_2\max\{\Delta_{\tilde{N}_1}d_{+,j},\Delta_{\tilde{N}_1}d_{-,j}\}+\tilde{c}_3\epsilon. \end{equation} (4.18)

    On the other hand, observe that

    \begin{equation*} \begin{split} \hat{A}_m^{-1}-\tilde{A}_m^{-1} & = \tilde{A}_m^{-1}(\tilde{A}_m-\hat{A}_m)\hat{A}_m^{-1}\\ & = \eta_{\alpha}\tilde{A}_m^{-1}\left(G_{\alpha,m}D_{+}G_{\alpha,m}^{\intercal}-G_{\alpha,m}G_{\alpha,m}^{\intercal}D_{+}\right)\hat{A}_m^{-1} +\eta_{\beta}\tilde{A}_m^{-1}\left(G_{\beta,m}D_{-}G_{\beta,m}^{\intercal}-G_{\beta,m}G_{\beta,m}^{\intercal}D_{-}\right)\hat{A}_m^{-1}\\ & = \eta_{\alpha}\tilde{A}_m^{-1}G_{\alpha,m}\left(D_{+}G_{\alpha,m}^{\intercal}-G_{\alpha,m}^{\intercal}D_{+}\right)\hat{A}_m^{-1} +\eta_{\beta}\tilde{A}_m^{-1}G_{\beta,m}\left(D_{-}G_{\beta,m}^{\intercal}-G_{\beta,m}^{\intercal}D_{-}\right)\hat{A}_m^{-1}, \end{split} \end{equation*}

    we have

    \begin{equation*} \|(\hat{A}_m^{-1}-\tilde{A}_m^{-1})e_j\|_1\leq\|\hat{A}_m^{-1}-\tilde{A}_m^{-1}\|_1 \leq\|\tilde{A}_m^{-1}\|_1\|\hat{A}_m^{-1}\|_1\left(\eta_{\alpha}\|G_{\alpha,m}\|_1\left\|D_{+}G_{\alpha,m}^{\intercal}-G_{\alpha,m}^{\intercal}D_{+}\right\|_1 +\eta_{\beta}\|G_{\beta,m}\|_1\left\|D_{-}G_{\beta,m}^{\intercal}-G_{\beta,m}^{\intercal}D_{-}\right\|_1\right). \end{equation*}

    Since $G_{\alpha,m}$ and $G_{\beta,m}$ have the off-diagonal decay property, then, as shown in the proof of Lemma 4.7 in [15], given an $\epsilon>0$ there exist an integer $\tilde{N}_2>0$ and a constant $\tilde{c}_4>0$ such that

    \begin{equation*} \left\|D_{+}G_{\alpha,m}^{\intercal}-G_{\alpha,m}^{\intercal}D_{+}\right\|_1\leq\tilde{c}_4\max\limits_{1\leq k\leq N}\Delta_{\tilde{N}_2}d_{+,k}+2d_{\max}\epsilon \end{equation*}

    and

    \begin{equation*} \left\|D_{-}G_{\beta,m}^{\intercal}-G_{\beta,m}^{\intercal}D_{-}\right\|_1\leq\tilde{c}_4\max\limits_{1\leq k\leq N}\Delta_{\tilde{N}_2}d_{-,k}+2d_{\max}\epsilon, \end{equation*}

    where $\Delta_{\tilde{N}_2}d_{+,k} = \max_{k-\tilde{N}_2\leq l\leq k+\tilde{N}_2}|d_{+,l}-d_{+,k}|$ and $\Delta_{\tilde{N}_2}d_{-,k} = \max_{k-\tilde{N}_2\leq l\leq k+\tilde{N}_2}|d_{-,l}-d_{-,k}|$. Therefore, there exist positive constants $\tilde{c}_5$ and $\tilde{c}_6$ such that

    \begin{equation} \|(\hat{A}_{m}^{-1} - \tilde{A}_{m}^{-1}) e_j\|_{1} \leq \tilde{c}_{5} \max\limits_{1\leq k \leq N} \max \{\Delta_{\tilde{N}_2}d_{+,k}, \Delta_{\tilde{N}_2}d_{-,k}\} +\tilde{c}_{6}\epsilon. \end{equation} (4.19)

    Now, let

    N_{2} = \max\left\{\tilde{N}_1,\tilde{N}_2\right\},\; \; \; \; c_{5} = \tilde{c}_{2}+\tilde{c}_{5} \; \; {\text{and}}\; \; c_{6} = \tilde{c}_{3}+\tilde{c}_{6}.

    It follows from (4.18) and (4.19) that, given an \epsilon > 0 , there exist an integer N_2 > 0 and constants c_{5}, c_{6} > 0 such that

    \begin{equation*} \|(K_j^{-1}-\tilde{A}_{m}^{-1})e_j\|_{1}\leq c_{5}\max\limits_{1\leq k \leq N} \max \{\Delta_{N_2}d_{+,k}, \Delta_{N_2}d_{-,k}\} + c_{6}\epsilon. \end{equation*}

    The proof is completed.

    In combination with (4.5), Lemma 4.2 and Lemma 4.3, we obtain the following theorem.

    Theorem 4.1. Assume 0 < d_{\min} \leq d_+(x), d_-(x)\leq d_{\max} . Then given an \epsilon > 0 , there exist an integer N_3 > 0 and constants c_{7}, c_{8} > 0 such that

    \begin{equation} \|P_{1}^{-1}-\tilde{A}^{-1}\|_{2}\leq c_{7}\max\limits_{1\leq k \leq N} \max\{\Delta_{N_3}d_{+,k}, \Delta_{N_3}d_{-,k}\} +c_{8}\epsilon. \end{equation} (4.20)

    Remark 1. If d_{+}(x), \, d_{-}(x)\in C[x_L, x_R] , then \max\limits_{1\leq k\leq N}\max\{\Delta_{N_3}d_{+, k}, \Delta_{N_3}d_{-, k}\} can be very small for sufficiently large N , which implies that P_{1}^{-1} then will be a good approximation to \tilde{A}^{-1} .

    Now we consider the difference between P_{2}^{-1} and P_{1}^{-1} . As shown in Section 3.3 of [11], we know that, for a given \epsilon > 0 , there exists a polynomial p_{k}(t) = \sum_{\ell = 0}^{k} a_\ell t^\ell of degree k such that

    \begin{equation} \left\|K_{i}^{\frac{1}{2}}-p_{k}(K_i)\right\|_2 < \epsilon \; \; {\text{and}}\; \; \left\|C_{i}^{\frac{1}{2}}-p_{k}(C_i)\right\|_2 < \epsilon, \quad {\text{for}}\ i = 1,2,\cdots,N. \end{equation} (4.21)

    Define

    \begin{equation} \tilde{P}_{1} = \left(\sum\limits_{i = 1}^{N} p_{k}\left(K_{i}\right) e_{i} e_{i}^{ {\intercal}}\right)^{*} \left(\sum\limits_{i = 1}^{N} p_{k}\left(K_{i}\right) e_{i} e_{i}^{ {\intercal}}\right) \end{equation} (4.22)

    and

    \begin{equation} \tilde{P}_{2} = \left(\sum\limits_{i = 1}^{N} p_{k}\left(C_{i}\right) e_{i} e_{i}^{ {\intercal}}\right)^{*} \left(\sum\limits_{i = 1}^{N} p_{k}\left(C_{i}\right) e_{i} e_{i}^{ {\intercal}}\right). \end{equation} (4.23)

    Then we have

    \begin{equation} P_2^{-1}-P_1^{-1} = (P_2^{-1}-\tilde{P}_2)+(\tilde{P}_2-\tilde{P}_1)+(\tilde{P}_1-P_1^{-1}). \end{equation} (4.24)

    Lemma 4.4. Let \tilde{P}_1 and \tilde{P}_2 be defined by (4.22) and (4.23), respectively. Then we have

    \begin{equation*} {\mathrm{rank}}(\tilde{P}_{2}-\tilde{P}_{1})\leq 8km. \end{equation*}

    Proof. Observe that

    \begin{equation} K_{i} - C_{i} = M-C(M)+\eta_{\alpha} d_{+,i} \left(G_{\alpha}G_{\alpha}^{ { \intercal }}-C_{\alpha}C_{\alpha}^{ { \intercal }}\right) + \eta_{\beta} d_{-,i}\left(G_{\beta}G_{\beta}^{ { \intercal }}-C_{\beta}C_{\beta}^{ { \intercal }}\right). \end{equation} (4.25)

    It is easy to see that

    \begin{equation} M-C(M) = \frac{1}{8}\left(\begin{bmatrix} 6 & 1 & & \\ 1 & 6 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & 6 \end{bmatrix} -\begin{bmatrix} 6 & 1 & & 1 \\ 1 & 6 & \ddots & \\ & \ddots & \ddots & 1 \\ 1 & & 1 & 6 \end{bmatrix}\right) = \frac{1}{8}\begin{bmatrix} & & & -1 \\ & & & \\ & & & \\ -1 & & & \end{bmatrix}. \end{equation} (4.26)

    The matrix G_{\alpha} and its Strang circulant approximation C_{\alpha} have the following forms:

    \begin{equation*} G_{\alpha} = \begin{bmatrix} K_0 & & & \\ K_1 & K_0 & & \\ &\ddots &\ddots & \\ & & K_1 & K_0 \end{bmatrix} \; \; {\text{and}}\; \; C_{\alpha} = \begin{bmatrix} K_0 & & & K_1 \\ K_1 & K_0 & & \\ & \ddots & \ddots & \\ & & K_1 & K_0 \end{bmatrix}, \end{equation*}

    where K_0, K_1\in\mathbb{R}^{m \times m} . Therefore, we have

    \begin{equation} \begin{split} C_{\alpha}C_{\alpha}^{ { \intercal }} & = \left(G_{\alpha}+\begin{bmatrix} & & K_{1} \\&&\\&&\end{bmatrix}\right) \left(G_{\alpha}^{ { \intercal }}+\begin{bmatrix} & & \\& &\\ K_{1}^{ { \intercal }}& &\end{bmatrix}\right) \\ & = G_{\alpha}G_{\alpha}^{ { \intercal }} +G_{\alpha}\begin{bmatrix} & & \\& &\\ K_{1}^{ { \intercal }}& &\end{bmatrix} +\begin{bmatrix} & & K_{1} \\&&\\&&\end{bmatrix}G_{\alpha}^{ { \intercal }} +\begin{bmatrix} & & K_{1} \\&&\\&&\end{bmatrix}\, \begin{bmatrix} & & \\& &\\ K_{1}^{ { \intercal }}& &\end{bmatrix}\\ & = G_{\alpha}G_{\alpha}^{ { \intercal }} +\begin{bmatrix} & & \\& &\\K_{0}K_{1}^{ { \intercal }} & &\end{bmatrix} +\begin{bmatrix} & & K_{1}K_{0}^{ { \intercal }}\\ & & \\ & &\end{bmatrix} +\begin{bmatrix} K_{1}K_{1}^{ { \intercal }} & & \\ & & \\ & &\end{bmatrix}. \end{split} \end{equation} (4.27)

    Similarly, we can show that

    \begin{equation} C_{\beta}C_{\beta}^{ { \intercal }} = G_{\beta}G_{\beta}^{ { \intercal }} +\begin{bmatrix} & & \tilde{K}_{0}\tilde{K}_{1}^{ { \intercal }} \\ & & \\ & &\end{bmatrix} +\begin{bmatrix} & & \\ & & \\ \tilde{K}_{1}\tilde{K}_{0}^{ { \intercal }} & &\end{bmatrix} +\begin{bmatrix} & & \\ & & \\ & & \tilde{K}_{1}\tilde{K}_{1}^{ { \intercal }}\end{bmatrix}. \end{equation} (4.28)

    Therefore, we can see from (4.25), (4.26), (4.27) and (4.28) that K_{i}-C_{i} is of the form

    \begin{equation*} K_{i}-C_{i} = \begin{bmatrix} + & & & + \\ & & & \\ + & & & + \end{bmatrix}, \end{equation*}

    where "+" denotes a m -by- m block matrix.

    Analogous to the proof of Lemma 3.11 in [11], it follows from

    \begin{equation*} p_k(C_i)-p_k(K_i) = \sum\limits_{j = 0}^{k} a_j \left(C_i^j-K_i^j\right) = \sum\limits_{j = 0}^{k}\sum\limits_{\ell = 0}^{j-1} a_j C_i^{j-\ell-1}(C_i-K_i)K_i^{\ell} \end{equation*}

    and

    \tilde{P}_{2}-\tilde{P}_{1} = \left(\sum\limits_{i = 1}^{N} e_ie_i^{ { \intercal }} p_k(C_i)\right) \left(\sum\limits_{i = 1}^{N} (p_k(C_i)-p_k(K_i)) e_ie_i^{ { \intercal }}\right) +\left(\sum\limits_{i = 1}^{N} (p_k(C_i)-p_k(K_i)) e_ie_i^{ { \intercal }}\right)^{*} \left(\sum\limits_{i = 1}^{N} p_k(K_i)e_ie_i^{ { \intercal }}\right)

    that {\mathrm{rank}}(\tilde{P}_2-\tilde{P}_1)\leq 8km .

    On the other hand, for P_1^{-1}-\tilde{P}_1 and P_2^{-1}-\tilde{P}_2 , we can directly apply the proof of Lemma 3.12 in [11] to obtain the following lemma.

    Lemma 4.5. Let \tilde{P}_1 and \tilde{P}_2 be defined by (4.22) and (4.23), respectively. Then given an \epsilon > 0 , there exist constants \tilde{c}_7 > 0 and \tilde{c}_8 > 0 such that

    \begin{equation*} \|P_1^{-1}-\tilde{P}_{1}\|_{2} < \tilde{c}_7\epsilon \; \; \mathit{{\text{and}}}\; \; \|P_2^{-1}-\tilde{P}_{2}\|_{2} < \tilde{c}_8\epsilon. \end{equation*}

    By Lemma 4.4 and Lemma 4.5, we obtain the following theorem.

    Theorem 4.2. Let P_{1}^{-1} and P_{2}^{-1} be defined by (3.9) and (3.10), respectively. Then given an \epsilon > 0 , there exist constants \tilde{c}_9 > 0 and k > 0 such that

    \begin{equation*} P_2^{-1}-P_{1}^{-1} = E+M, \end{equation*}

    where E = (P_2^{-1}-\tilde{P}_2)+(\tilde{P}_1-P_1^{-1}) is a small norm matrix with \|E\|_{2} < \tilde{c}_9\epsilon , and M = \tilde{P}_2-\tilde{P}_1 is a low rank matrix with {\mathrm{rank}}(M)\leq 8km .

    We can directly apply the analysis of Section 3.4 in [11] to reach the following conclusion.

    Lemma 4.6. Let P_{2}^{-1} and P_{3}^{-1} be defined by (3.10) and (3.11), respectively. Then, for a given \epsilon > 0 , there exist an integer \ell_{0} > 0 and a constant c_{9} > 0 such that

    \begin{equation*} \|P_{3}^{-1}-P_{2}^{-1}\|_{2} < c_{9}\epsilon \end{equation*}

    holds when the number of interpolation points in P_{3}^{-1} satisfies \ell\geq \ell_{0} .

    Summarizing our analysis, we have the following theorem.

    Theorem 4.3. Let P_{3}^{-1} and A be defined by (3.11) and (3.2), respectively. Then we have

    \begin{equation*} P_{3}^{-1}A = I+E+S, \end{equation*}

    where E is a low rank matrix and S is a small norm matrix.

    Proof. Observe that

    \begin{equation} \begin{split} P_{3}^{-1}A & = \left(P_{3}^{-1}-\tilde{A}^{-1}+\tilde{A}^{-1}\right)A = \left(P_{3}^{-1}-\tilde{A}^{-1}\right)A + \tilde{A}^{-1}\left(\tilde{A}+A-\tilde{A}\right) = I+\left(P_{3}^{-1}-\tilde{A}^{-1}\right)A + \tilde{A}^{-1}\left(A-\tilde{A}\right). \end{split} \end{equation} (4.29)

    By (3.2), (3.7), Theorems 4.1, 4.2 and Lemma 4.6, we can show that

    \begin{equation*} A-\tilde{A} = E_1+S_1 \quad{\text{and}}\quad P_{3}^{-1}-\tilde{A}^{-1} = E_2+S_2, \end{equation*}

    where E_1 and E_2 are two low rank matrices, and S_1 and S_2 are two small norm matrices. Therefore, it is easy to see from (4.29) that

    \begin{equation*} P_{3}^{-1}A = I+E+S, \end{equation*}

    where E is a low rank matrix and S is a small norm matrix.

    In this section, we carry out numerical experiments to show the performance of the proposed preconditioners P_3 and P_4 . We use the preconditioned conjugate gradient (PCG) method to solve the symmetric positive definite linear systems (2.6). In all experiments, we set the stopping criterion as

    \begin{equation*} \frac{\|r_k\|_{2}}{\|b\|_{2}} < 10^{-8}, \end{equation*}

    where r_{k} is the residual vector after k iterations and b is the right-hand side.
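    For completeness, a generic PCG loop with exactly this relative-residual stopping rule reads as follows. This is a standard textbook sketch, not code from [7]; `apply_A` and `apply_P` are black-box actions of the coefficient matrix and of a chosen preconditioner $P^{-1}$ (e.g., the `apply_P3` or `apply_P4` sketches above).

```python
import numpy as np

def pcg(apply_A, b, apply_P, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradient for SPD systems, stopping when
    ||r_k||_2 / ||b||_2 < tol, as in the experiments."""
    x = np.zeros_like(b)
    r = b.copy()
    z = apply_P(r)
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for k in range(maxit):
        if np.linalg.norm(r) / nb < tol:
            return x, k                    # converged after k iterations
        Ap = apply_A(p)
        a = rz / (p @ Ap)
        x += a * p
        r -= a * Ap
        z = apply_P(r)                     # one preconditioner solve per step
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```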

    For the sake of comparison, we also test the two other types of preconditioners proposed in [7]. One is the Strang circulant preconditioner C_s defined as

    C_s = s(M)+\eta_{\alpha}\tilde{d}_{+}s(G_{\alpha})s(G_{\alpha})^{ { \intercal }} +\eta_{\beta}\tilde{d}_{-}s(G_{\beta})s(G_{\beta})^{ { \intercal }},

    where s(M) , s(G_{\alpha}) and s(G_{\beta}) are the Strang circulant approximations of the Toeplitz matrices M , G_{\alpha} and G_{\beta} , and \tilde{d}_{+} and \tilde{d}_{-} are the mean values of the diagonals of \tilde{D}_{+} and \tilde{D}_{-} , respectively. The other is the banded preconditioner B(\ell) defined as

    \begin{equation*} B(\ell) = M+\eta_{\alpha}\tilde{B}_{\alpha}\tilde{D}_{+}^{(j)}\tilde{B}_{\alpha}^{ { \intercal }} +\eta_{\beta}\tilde{B}_{\beta}\tilde{D}_{-}^{(j)}\tilde{B}_{\beta}^{ { \intercal }}, \end{equation*}

    where \tilde{B}_{\alpha} and \tilde{B}_{\beta} are the banded approximations of \tilde{G}_{\alpha} and \tilde{G}_{\beta} which are of the form

    \begin{equation*} \begin{bmatrix} g_{1}^{(\alpha)} & g_{0}^{(\alpha)} & & & & \\ \vdots & g_{1}^{(\alpha)} & g_{0}^{(\alpha)} & & & \\ g_{\ell}^{(\alpha)} & \ddots & \ddots & \ddots & & \\ & \ddots & \ddots & \ddots & \ddots & \\ & & \sum\limits_{k = \ell}^{N}g_{k}^{(\alpha)} & \cdots & g_{1}^{(\alpha)} & g_{0}^{(\alpha)} \end{bmatrix}\quad {\text{and}} \quad \begin{bmatrix} g_{0}^{(\beta)} & g_{1}^{(\beta)} & \cdots & \sum\limits_{k = \ell}^{N}g_{k}^{(\beta)} & & \\ & g_{0}^{(\beta)} & g_{1}^{(\beta)} & \ddots & \ddots & \\ & & \ddots & \ddots & \ddots & g_{\ell}^{(\beta)} \\ & & & \ddots & \ddots & \vdots\\ & & & & g_{0}^{(\beta)} & g_{1}^{(\beta)} \end{bmatrix}, \end{equation*}

    respectively.

    Example 1. We consider balanced fractional diffusion equation (1.1) with (x_L, x_R) = (0, 1) and T = 1 . The diffusion coefficients are given by

    \begin{equation*} d_{+}(x,t) = 1000 (x+1)^{4+\alpha}+t^2,\quad d_{-}(x,t) = 1000 (x+1)^{4+\beta}+t^2, \end{equation*}

    and f(x, t) = 1000 .

    In the numerical tests, we set the number of temporal grids to M_t = N/2 . The results for different values of \alpha and \beta are reported in Table 1. We do not report the numerical results of the unpreconditioned CG method, as it converges very slowly.

    Table 1.  Numerical results for Example 1.
    C_{s} B(4) B(6) P_3(3) P_3(4) P_4(3) P_4(4)
    N Iter CPU Iter CPU Iter CPU Iter CPU Iter CPU Iter CPU Iter CPU
    \alpha=0.6, \ \beta=0.6
    2^{11} 61.44 14.90 35.00 10.26 31.00 9.60 14.00 5.21 12.00 5.01 13.00 5.24 11.00 5.06
    2^{12} 62.98 58.68 42.81 47.63 37.00 42.57 14.00 18.80 13.00 18.36 13.00 18.73 12.00 17.76
    2^{13} 64.00 231.27 50.01 210.70 44.00 187.71 15.00 76.60 13.00 69.07 13.00 71.41 12.00 69.55
    2^{14} 65.00 1060.76 58.01 1068.46 52.00 989.73 15.00 334.53 14.00 319.51 14.00 337.00 13.00 325.66
    \alpha=0.6, \ \beta=0.7
    2^{11} 65.06 16.61 31.00 9.63 27.41 8.97 14.00 5.43 13.00 5.54 13.00 5.36 11.00 5.15
    2^{12} 66.72 61.84 37.00 40.06 32.88 39.05 15.85 21.13 13.00 18.23 13.00 18.85 12.00 17.94
    2^{13} 68.00 248.34 43.00 182.87 38.00 170.32 16.01 83.84 14.54 79.90 14.34 79.94 13.00 77.52
    2^{14} 69.01 1171.84 49.00 910.07 43.95 862.68 18.00 409.78 15.95 374.91 14.99 366.86 13.99 357.92
    \alpha=0.7, \ \beta=0.7
    2^{11} 67.55 17.41 28.29 8.97 25.01 8.41 14.96 5.85 13.52 5.82 13.00 5.38 12.00 5.64
    2^{12} 69.01 65.25 34.00 38.90 30.11 37.09 16.00 22.02 13.86 20.20 14.00 20.61 12.00 18.76
    2^{13} 70.09 264.48 40.00 176.53 35.39 164.35 17.98 95.51 15.95 87.88 14.00 77.99 13.00 77.78
    2^{14} 71.31 1197.63 46.00 857.65 41.00 797.35 18.00 415.70 17.00 408.09 15.00 383.49 13.00 349.08
    \alpha=0.7, \ \beta=0.8
    2^{11} 72.00 17.96 23.97 7.53 21.06 7.00 16.00 6.02 14.00 5.95 13.04 5.41 12.00 5.60
    2^{12} 73.68 68.46 28.00 32.62 25.00 30.81 18.73 25.53 16.00 22.81 15.00 22.06 12.99 20.87
    2^{13} 75.16 282.66 32.00 141.63 28.98 140.93 18.96 101.00 17.47 96.63 15.00 84.31 14.00 82.02
    2^{14} 76.76 1262.66 35.99 683.73 32.98 653.98 20.00 458.79 17.79 430.50 16.00 397.79 14.00 370.01
    \alpha=0.8, \ \beta=0.8
    2^{11} 74.69 18.91 21.00 7.04 19.00 6.65 17.94 6.82 14.94 6.34 14.00 5.76 12.00 5.60
    2^{12} 76.09 70.32 24.00 28.41 22.00 27.70 19.00 26.66 16.65 24.02 14.00 20.89 12.00 18.97
    2^{13} 77.57 290.59 28.00 128.10 25.00 120.11 20.00 105.85 18.00 100.20 15.00 83.79 13.00 78.07
    2^{14} 79.37 1291.96 31.00 592.09 28.99 587.79 20.24 464.72 18.00 432.03 15.99 405.75 14.00 370.07
    \alpha=0.8, \ \beta=0.9
    2^{11} 79.85 20.58 16.00 5.67 15.00 5.66 19.19 7.45 17.00 7.20 14.96 6.18 13.00 6.03
    2^{12} 81.45 77.06 18.00 22.74 17.00 22.56 19.71 26.73 17.00 24.08 15.00 21.87 13.00 19.83
    2^{13} 84.14 320.03 20.00 95.92 18.99 100.71 22.00 115.14 19.12 104.20 16.00 90.63 14.00 81.37
    2^{14} 85.47 1413.42 21.98 449.53 19.99 428.83 23.22 529.87 19.93 478.16 16.97 424.17 14.00 374.99
    \alpha=0.9, \ \beta=0.9
    2^{11} 82.93 20.64 14.00 4.95 13.00 4.95 20.00 7.51 17.00 7.22 14.00 5.62 12.00 5.57
    2^{12} 84.41 77.71 15.00 19.57 14.00 19.73 22.14 29.98 18.83 26.68 15.00 22.15 13.00 20.20
    2^{13} 85.77 324.09 17.00 83.82 16.00 84.84 22.52 118.15 20.00 111.70 15.99 90.72 13.99 84.47
    2^{14} 86.84 1422.57 18.00 384.27 17.00 385.21 23.96 541.14 20.00 494.21 16.00 409.06 13.99 373.06


    In the table, we use " P_3(\ell) " and " P_4(\ell) " to denote our proposed preconditioners with \ell interpolation points, "Iter" to denote the average number of iteration steps required to solve the discrete fractional diffusion equation (2.6) at each time step, and "CPU" to denote the total CPU time in seconds for solving the whole discretized system.

    From Table 1, we can see that the banded preconditioner B(\ell) and our proposed preconditioners P_3(\ell) and P_4(\ell) perform much better than the circulant preconditioner C_s in terms of both iteration count and elapsed time. The banded preconditioner B(\ell) has the best performance when \alpha = \beta = 0.9 ; in this case, the off-diagonals of the coefficient matrix decay to zero very quickly. For the other cases, P_3(\ell) and P_4(\ell) perform much better.

    We also see that the performance of P_3(\ell) and P_4(\ell) is very robust, in the sense that the average iteration numbers are almost the same for different values of \alpha and \beta , as well as for different mesh sizes.

    Example 2. Consider the two dimensional balanced fractional diffusion equation

    \begin{equation*} \begin{split} &\frac{\partial u(x,y,t)}{\partial t} +{}_{{x_L}}^{{{\rm{RL}}}}{{\rm{D}}}_{x}^{{\alpha_1}}\left(d_{+}(x,y,t){}_{{x}}^{{{\rm{C}}}}{{\rm{D}}}_{x_R}^{\alpha_1} u(x,y,t)\right) +{}_{{x}}^{{{\rm{RL}}}}{{\rm{D}}}_{x_R}^{\beta_1}\left(d_{-}(x,y,t){}_{{x_L\!}}^{{{\rm{C}}}}{{\rm{D}}}_{x}^{{\beta_1}} u(x,y,t)\right) \\ &\qquad +{}_{{y_L}}^{{{\rm{RL}}}}{{\rm{D}}}_{y}^{\alpha_2}\left(e_{+}(x,y,t){}_{{y}}^{{{\rm{C}}}}{{\rm{D}}}_{y_R}^{\alpha_2} u(x,y,t)\right) +{}_{{y}}^{{{\rm{RL}}}}{{\rm{D}}}_{y_R}^{\beta_2}\left(e_{-}(x,y,t){}_{{y_L}}^{{{\rm{C}}}}{{\rm{D}}}_{y}^{\beta_2} u(x,y,t)\right) = f(x,y,t),\\[0.3em] & (x,y) \in \Omega = \left(x_{L}, x_{R}\right) \times (y_L,y_R), \quad t\in (0,T], \\[0.2em] & u(x,y,t) = 0, \quad (x,y)\in \partial\Omega, t\in [0,T], \\[0.2em] & u(x,y,0) = u_{0}(x,y), \quad (x,y)\in [x_L,x_R]\times[y_L,y_R], \end{split} \end{equation*}

    where \frac{1}{2} < \alpha_1, \beta_1, \alpha_2, \beta_2 < 1 , and d_{\pm}(x, y, t) > 0 , e_{\pm}(x, y, t) > 0 for (x, y, t)\in\Omega \times (0, T] .

    As was shown in [7], the finite volume discretization leads to

    \begin{equation*} A^{(j)}u^{(j)} = (M_{x}\otimes M_{y}) u^{(j-1)}+f^{(j)}, \quad j = 1,2,\ldots, M_t, \end{equation*}

    where

    \begin{equation*} \begin{aligned} A^{(j)}& = M_{x} \otimes M_{y} +\eta_{\alpha_{1}}\left(G_{\alpha_{1}} \otimes I_{y}\right){D}_{+}^{(j)} \left(G_{\alpha_{1}}^{ { \intercal }} \otimes I_{y}\right) +\eta_{\beta_{1}}\left(G_{\beta_{1}} \otimes I_{y}\right){D}_{-}^{(j)} \left(G_{\beta_{1}}^{ { \intercal }} \otimes I_{y}\right) \\ &\quad +\eta_{\alpha_{2}}\left(I_{x} \otimes G_{\alpha_{2}}\right){E}_{+}^{(j)} \left(I_{x} \otimes G_{\alpha_{2}}^{ { \intercal }}\right) +\eta_{\beta_{2}}\left(I_{x} \otimes G_{\beta_{2}}\right){E}_{-}^{(j)} \left(I_{x} \otimes G_{\beta_{2}}^{ { \intercal }}\right). \end{aligned} \end{equation*}

    Here M_{x}, M_{y} are tridiagonal matrices and D_{\pm}^{(j)} , E_{\pm}^{(j)} are block diagonal matrices

    \begin{equation*} {D}_{\pm}^{(j)} = {\rm{diag}}\left(\left\{\left\{ d_{\pm}\left(x_{i+\frac{1}{2}}, y_{k}, t_{j} \right)\right\}_{k = 1}^{N_{y}}\right\}_{i = 0}^{N_{x}}\right), \quad {E}_{\pm}^{(j)} = {\rm{diag}}\left(\left\{\left\{ e_{\pm}\left(x_{i}, y_{k+\frac{1}{2}}, t_{j} \right)\right\}_{k = 0}^{N_{y}}\right\}_{i = 1}^{N_{x}}\right), \end{equation*}

    where N_{x} and N_{y} denote the number of grid points in the x -direction and y -direction, respectively.

    Similar to the construction of P_{3} of (3.11), we can define an approximate inverse preconditioner for the two dimensional problems. For convenience of expression, we omit the superscript in A^{(j)} . To construct the preconditioner, we define the following circulant matrices

    \begin{equation*} \begin{aligned} \tilde{C}_{i,j}& = C_{M_x} \otimes C_{M_y} +\eta_{\alpha_1} d_{+}(x_i,y_j) \left(C_{\alpha_1}C_{\alpha_1}^{ { \intercal }}\right)\otimes I_y +\eta_{\beta_1} d_{-}(x_i,y_j) \left(C_{\beta_1}C_{\beta_1}^{ { \intercal }}\right)\otimes I_y \\ &\quad +\eta_{\alpha_2} e_{+}(x_i,y_j) I_{x} \otimes \left(C_{\alpha_2}C_{\alpha_2}^{ { \intercal }}\right) +\eta_{\beta_2} e_{-}(x_i,y_j) I_{x} \otimes \left(C_{\beta_2}C_{\beta_2}^{ { \intercal }}\right). \end{aligned} \end{equation*}

    Then we choose the interpolation points \{(\tilde{x}_i, \tilde{y}_j)\} , i = 1, 2, \ldots, \ell_x , j = 1, 2, \ldots, \ell_y , and we approximate \tilde{C}_{i, j}^{-1/2} by

    \begin{equation*} \tilde{C}_{i,j}^{-1/2}\approx F^{*}\left(\sum\limits_{k = 1}^{\ell_{x}}\sum\limits_{l = 1}^{\ell_{y}} \phi_{k,l}(x_i,y_j)\tilde{\Lambda}_{k,l}^{-1/2}\right)F. \end{equation*}

    where F refers to the two dimensional discrete Fourier transform matrix of size N_{x}N_{y} , and \tilde{\Lambda}_{k, l} is the diagonal matrix whose diagonals are the eigenvalues of \tilde{C}_{k, l} for 1\leq k\leq \ell_x and 1\leq l\leq \ell_y . Then we obtain the resulting preconditioner

    \begin{equation*} \begin{split} P_3^{-1}& = \left(\sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y} F^{*} \sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \phi_{k,l}(x_i,y_j)\tilde{\Lambda}_{k,l}^{-1/2}F \,(e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }} \right)^{*}\\ &\quad\left(\sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y} F^{*} \sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \phi_{k,l}(x_i,y_j)\tilde{\Lambda}_{k,l}^{-1/2}F \,(e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }} \right)\\ & = \left(\sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y}\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} (e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }}\phi_{k,l}(x_i,y_j)\, F^{*}\tilde{\Lambda}_{k,l}^{-1/2} \right)\\ &\quad\left(\sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y}\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \tilde{\Lambda}_{k,l}^{-1/2} F \, (e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }}\phi_{k,l}(x_i,y_j) \right)\\ & = \left(\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y} (e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }}\phi_{k,l}(x_i,y_j)\, F^{*}\tilde{\Lambda}_{k,l}^{-1/2} \right)\\ &\quad\left(\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \tilde{\Lambda}_{k,l}^{-1/2} F \sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y} (e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }}\phi_{k,l}(x_i,y_j) \right)\\ & = \left(\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \Phi_{k,l} F^{*}\tilde{\Lambda}_{k,l}^{-1/2} \right) \left(\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \tilde{\Lambda}_{k,l}^{-1/2} F \Phi_{k,l} \right), \end{split} \end{equation*}

    where \Phi_{k, l} = {\rm{diag}}\left(\phi_{k, l}(x_1, y_1), \phi_{k, l}(x_1, y_2), \cdots, \phi_{k, l}(x_{N_x}, y_{N_y})\right) for 1\leq k\leq \ell_x and 1\leq l\leq \ell_y .
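    Mirroring the one-dimensional sketch given after (3.11), the action of this two-dimensional $P_3^{-1}$ can be coded with 2-D FFTs. In the hedged sketch below, `Phis` and `Lams` are lists over the $\ell_x\ell_y$ interpolation nodes, each entry an $N_x\times N_y$ array of weights $\phi_{k,l}(x_i,y_j)$ or of eigenvalues of $\tilde{C}_{k,l}$; these names are our own illustration, not the authors' code.

```python
import numpy as np

def apply_P3_2d(V, Phis, Lams):
    """Apply the 2-D preconditioner P_3^{-1} to the unknown reshaped as an
    (Nx, Ny) array V; cost O(ell_x*ell_y * Nx*Ny * log(Nx*Ny))."""
    # Inner factor: sum_{k,l} Lambda_{k,l}^{-1/2} F (Phi_{k,l} v)
    W = sum(L ** -0.5 * np.fft.fft2(P * V) for P, L in zip(Phis, Lams))
    # Outer factor: sum_{k,l} Phi_{k,l} F^* (Lambda_{k,l}^{-1/2} w)
    Z = sum(P * np.fft.ifft2(L ** -0.5 * W) for P, L in zip(Phis, Lams))
    return Z.real
```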

    In the tests, we set \Omega = (0, 1)\times(0, 1) , T = 1 ,

    \begin{equation*} \begin{aligned} &d_{+}(x,y,t) = {\mathrm{e}}^{t}(1000 x^{4+\alpha_1}+1000 y^{4+\alpha_1}+1),\ d_{-}(x,y,t) = {\mathrm{e}}^{t}(1000 x^{4+\beta_1}+1000 y^{4+\beta_1}+1), \\ &e_{+}(x,y,t) = {\mathrm{e}}^{t}(1000 x^{4+\alpha_2}+1000 y^{4+\alpha_2}+1),\ e_{-}(x,y,t) = {\mathrm{e}}^{t}(1000 x^{4+\beta_2}+1000 y^{4+\beta_2}+1), \end{aligned} \end{equation*}

    and f(x, y, t) = 1000 . We also set N_{x} = N_{y} = N , M_t = N/2 and \ell_{x} = \ell_{y} = \ell . The maximum iteration number is set to 500. As it is expensive to compute the second term of the Sherman-Morrison-Woodbury formula (3.5) for the two-dimensional problem, we only test the preconditioner P_3(\ell) .

    The results are reported in Table 2. We can see that, for the two-dimensional problem, although our proposed preconditioners P_3(3) and P_3(4) take more iteration steps to converge than the banded preconditioners B(3) and B(4) , they are much cheaper to implement than the banded and circulant preconditioners in terms of CPU time.

    Table 2.  Numerical results for Example 2.
    C_{s} B(3) B(4) P_3(3) P_3(4)
    N Iter CPU Iter CPU Iter CPU Iter CPU Iter CPU
    \alpha_1=\alpha_2=0.6, \ \beta_1=\beta_2=0.6
    2^{7} 497.05 242.31 20.95 39.17 18.00 45.78 28.81 18.30 28.73 21.17
    2^{8} > 500 - 25.95 588.07 22.98 718.04 33.02 112.40 31.97 126.29
    2^{9} > 500 - 31.11 9887.04 27.98 12435.90 38.02 868.40 36.80 1023.78
    \alpha_1=\alpha_2=0.7, \ \beta_1=\beta_2=0.7
    2^{7} > 500 - 17.02 33.78 15.95 43.17 36.30 23.01 35.31 25.93
    2^{8} > 500 - 21.00 511.49 19.00 637.12 43.07 144.57 41.18 163.50
    2^{9} > 500 - 25.84 8799.13 23.42 11217.25 52.83 1233.09 50.97 1455.41
    \alpha_1=\alpha_2=0.8, \ \beta_1=\beta_2=0.8
    2^{7} > 500 - 13.98 29.96 12.73 38.05 46.31 29.15 45.38 33.82
    2^{8} > 500 - 16.00 437.62 15.00 568.48 58.30 198.03 56.60 223.90
    2^{9} > 500 - 18.99 7353.96 17.72 9675.65 76.26 1767.78 72.09 2053.69
    \alpha_1=\alpha_2=0.9, \ \beta_1=\beta_2=0.9
    2^{7} > 500 - 9.98 25.37 9.00 32.09 62.08 39.55 58.61 42.91
    2^{8} > 500 - 11.00 360.76 10.00 470.30 83.97 284.48 78.90 309.27
    2^{9} > 500 - 12.00 5933.38 11.99 8162.24 114.65 2631.44 109.24 3042.00


    In this paper, we consider preconditioners for the linear systems resulting from the discretization of the balanced fractional diffusion equations. The coefficient matrices are symmetric positive definite and can be written as the sum of a tridiagonal matrix and two Toeplitz-times-diagonal-times-Toeplitz matrices. We investigate approximate inverse preconditioners and show that the spectra of the preconditioned matrices are clustered around 1. Numerical experiments have shown that the conjugate gradient method, when applied to the preconditioned systems, converges very quickly. Besides, we extend our preconditioning technique to the case of two-dimensional problems.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research is supported by NSFC grant 11771148 and Science and Technology Commission of Shanghai Municipality grant 22DZ2229014.

    All authors declare no conflicts of interest in this paper.



    [1] J. D. Ferreira, C. A. T. Salazar, P. C. C. Tabares, Weak Allee effect in a predator-prey model involving memory with a hump, Nonlinear Anal.: Real World Appl., 14 (2013), 536–548. https://doi.org/10.1016/j.nonrwa.2012.07.014 doi: 10.1016/j.nonrwa.2012.07.014
    [2] M. Cavani, M. Farkas, Bifurcations in a predator-prey model with memory and diffusion. I: Andronov-Hopf bifurcation, Acta Math. Hung., 63 (1994), 213–229. https://doi.org/10.1007/bf01874129 doi: 10.1007/bf01874129
    [3] M. Umar, Z. Sabir, M. A. Z. Raja, Intelligent computing for numerical treatment of nonlinear prey-predator models, Appl. Soft Comput., 80 (2019), 506–524. https://doi.org/10.1016/j.asoc.2019.04.022 doi: 10.1016/j.asoc.2019.04.022
    [4] Z. Sabir, T. Botmart, M. A. Z. Raja, W. Weera, An advanced computing scheme for the numerical investigations of an infection-based fractional-order nonlinear prey-predator system, Plos One, 17 (2022), 1–13. https://doi.org/10.1371/journal.pone.0265064 doi: 10.1371/journal.pone.0265064
    [5] U. Ghosh, S. Pal, M. Banerjee, Memory effect on Bazykin's prey-predator model: Stability and bifurcation analysis, Chaos Solitons Fract., 143 (2021), 1–10. https://doi.org/10.1016/j.chaos.2020.110531 doi: 10.1016/j.chaos.2020.110531
    [6] A. Gökçe, The influence of past in a population system involving intraspecific competition and Allee effect, Eur. Phys. J. Plus, 137 (2022), 1–11. https://doi.org/10.1140/epjp/s13360-022-02425-z doi: 10.1140/epjp/s13360-022-02425-z
    [7] B. Sahoo, S. Poria, Dynamics of predator-prey system with fading memory, Appl. Math. Comput., 347 (2019), 319–333. https://doi.org/10.1016/j.amc.2018.11.013 doi: 10.1016/j.amc.2018.11.013
    [8] L. Berec, E. Angulo, F. Courchamp, Multiple Allee effects and population management, Trends Ecol. Evol., 22 (2007), 185–191. https://doi.org/10.1016/j.tree.2006.12.002 doi: 10.1016/j.tree.2006.12.002
    [9] B. Souayeh, Z. Sabir, M. Umar, M. W. Alam, Supervised neural network procedures for the novel fractional food supply model, Fractal Fract., 6 (2022), 1–15. https://doi.org/10.3390/fractalfract6060333 doi: 10.3390/fractalfract6060333
    [10] E. Angulo, G. M. Luque, S. D. Gregory, J. W. Wenzel, C. Bessa‐Gomes, L. Berec, et al., Allee effects in social species, J. Anim. Ecol., 87 (2018), 47–58. https://doi.org/10.1111/1365-2656.12759
    [11] T. Perälä, J. A. Hutchings, A. Kuparinen, Allee effects and the Allee-effect zone in northwest Atlantic cod, Biol. Lett., 18 (2022), 1–6. https://doi.org/10.1098/rsbl.2021.0439 doi: 10.1098/rsbl.2021.0439
    [12] B. Dennis, Allee effects: Population growth, critical density, and the chance of extinction, Nat. Resour. Model., 3 (1989), 481–538. https://doi.org/10.1111/j.1939-7445.1989.tb00119.x doi: 10.1111/j.1939-7445.1989.tb00119.x
    [13] T. Botmart, Z. Sabir, M. A. Z. Raja, M. R. Ali, R. Sadat, A. A. Aly, et al., A hybrid swarming computing approach to solve the biological nonlinear Leptospirosis system, Biomed. Signal Process. Control, 77 (2022), 103789. https://doi.org/10.1016/j.bspc.2022.103789
    [14] F. Courchamp, L. Berec, J. Gascoigne, Allee effects in ecology and conservation, 1 Ed., New York: Oxford University Press Inc., 2008. https://doi.org/10.1093/acprof:oso/9780198570301.001.0001
    [15] C. Çelik, H. Merdan, O. Duman, Ö. Akın, Allee effects on population dynamics with delay, Chaos Solitons Fract., 37 (2008), 65–74. https://doi.org/10.1016/j.chaos.2006.08.019
    [16] J. P. Tripathi, P. S. Mandal, A. Poonia, V. P. Bajiya, A widespread interaction between generalist and specialist enemies: The role of intraguild predation and Allee effect, Appl. Math. Model., 89 (2021), 105–135. https://doi.org/10.1016/j.apm.2020.06.074 doi: 10.1016/j.apm.2020.06.074
    [17] P. C. Tabares, J. D. Ferreira, V. Rao, Weak Allee effect in a predator-prey system involving distributed delays, Comput. Appl. Math., 30 (2011), 675–699. https://doi.org/10.1590/S1807-03022011000300011 doi: 10.1590/S1807-03022011000300011
    [18] T. Botmart, W. Weera, Guaranteed cost control for exponential synchronization of cellular neural networks with mixed time-varying delays via hybrid feedback control, Abstr. Appl. Anal., 2013 (2013), 175796. https://doi.org/10.1155/2013/175796 doi: 10.1155/2013/175796
    [19] M. JovanoviĆ, M. KrstiĆ, Extinction in stochastic predator-prey population model with Allee effect on prey, Discrete Cont. Dyn. Syst. Ser. B, 22 (2017), 2651–2667. https://doi.org/10.3934/dcdsb.2017129 doi: 10.3934/dcdsb.2017129
    [20] P. J. Pal, T. Saha, M. Sen, M. Banerjee, A delayed predator–prey model with strong Allee effect in prey population growth, Nonlinear Dyn., 68 (2012), 23–42. https://doi.org/10.1007/s11071-011-0201-5 doi: 10.1007/s11071-011-0201-5
    [21] A. Surendran, M. J. Plank, M. J. Simpson, Population dynamics with spatial structure and an Allee effect, Proc. Math. Phys. Eng. Sci., 476 (2020), 20200501. https://doi.org/10.1098/rspa.2020.0501 doi: 10.1098/rspa.2020.0501
    [22] M. Jankovic, S. Petrovskii, Are time delays always destabilizing? Revisiting the role of time delays and the Allee effect, Theor. Ecol., 7 (2014), 335–349. https://doi.org/10.1007/s12080-014-0222-z doi: 10.1007/s12080-014-0222-z
    [23] A. W. Stoner, M. Ray-Culp, Evidence for Allee effects in an over-harvested marine gastropod: Density-dependent mating and egg production, Mar. Ecol. Prog. Ser., 202 (2000), 297–302. http://dx.doi.org/10.3354/meps202297
    [24] F. Courchamp, B. T. Grenfell, T. H. Clutton‐Brock, Impact of natural enemies on obligately cooperative breeders, Oikos, 91 (2000), 311–322. https://doi.org/10.1034/j.1600-0706.2000.910212.x
    [25] M. Kuussaari, I. Saccheri, M. Camara, I. Hanski, Allee effect and population dynamics in the Glanville fritillary butterfly, Oikos, 82 (1998), 384–392. https://doi.org/10.2307/3546980
    [26] Z. Ma, Hopf bifurcation of a generalized delay-induced predator-prey system with habitat complexity, Int. J. Bifurcat. Chaos, 30 (2020), 2050082. https://doi.org/10.1142/S0218127420500820
    [27] H. Yu, M. Zhao, R. P. Agarwal, Stability and dynamics analysis of time delayed eutrophication ecological model based upon the Zeya reservoir, Math. Comput. Simul., 97 (2014), 53–67. https://doi.org/10.1016/j.matcom.2013.06.008
    [28] Y. Tang, L. Zhou, Stability switch and Hopf bifurcation for a diffusive prey–predator system with delay, J. Math. Anal. Appl., 334 (2007), 1290–1307. https://doi.org/10.1016/j.jmaa.2007.01.041
    [29] A. Gökçe, A mathematical study for chaotic dynamics of dissolved oxygen-phytoplankton interactions under environmental driving factors and time lag, Chaos Solitons Fract., 151 (2021), 1–13. https://doi.org/10.1016/j.chaos.2021.111268
    [30] K. Chakraborty, M. Chakraborty, T. K. Kar, Bifurcation and control of a bioeconomic model of a prey–predator system with a time delay, Nonlinear Anal.: Hybrid Syst., 5 (2011), 613–625. https://doi.org/10.1016/j.nahs.2011.05.004
    [31] H. Zhao, X. Huang, X. Zhang, Hopf bifurcation and harvesting control of a bioeconomic plankton model with delay and diffusion terms, Phys. A: Stat. Mech. Appl., 421 (2015), 300–315. https://doi.org/10.1016/j.physa.2014.11.042
    [32] A. Gökçe, Numerical bifurcation analysis for a prey-predator type interactions with a time lag and habitat complexity, Bitlis Eren Üniv. Fen Bilim. Derg., 10 (2021), 57–66. https://doi.org/10.17798/bitlisfen.840245
    [33] K. Gopalsamy, G. Ladas, On the oscillation and asymptotic behavior of \dot{N}(t)=N(t)[a+bN(t-\tau)-cN^2(t-\tau)], Quart. Appl. Math., 48 (1990), 433–440.
    [34] M. Umar, Z. Sabir, F. Amin, J. L. Guirao, M. A. Z. Raja, Stochastic numerical technique for solving HIV infection model of CD4+ T cells, Eur. Phys. J. Plus, 135 (2020), 1–19. https://doi.org/10.1140/epjp/s13360-020-00417-5
    [35] Z. Sabir, Stochastic numerical investigations for nonlinear three-species food chain system, Int. J. Biomath., 15 (2022), 2250005. https://doi.org/10.1142/S179352452250005X
    [36] Z. Sabir, M. R. Ali, R. Sadat, Gudermannian neural networks using the optimization procedures of genetic algorithm and active set approach for the three-species food chain nonlinear model, J. Ambient Intell. Human. Comput., 13 (2022), 1–10. https://doi.org/10.1007/s12652-021-03638-3
    [37] M. Umar, Z. Sabir, M. A. Z. Raja, M. Shoaib, M. Gupta, Y. G. Sánchez, A stochastic intelligent computing with neuro-evolution heuristics for nonlinear SITR system of novel COVID-19 dynamics, Symmetry, 12 (2020), 1–17. https://doi.org/10.3390/sym12101628
    [38] M. Umar, F. Amin, H. A. Wahab, D. Baleanu, Unsupervised constrained neural network modeling of boundary value corneal model for eye surgery, Appl. Soft Comput., 85 (2019), 1–16. https://doi.org/10.1016/j.asoc.2019.105826
    [39] B. Wang, J. F. Gomez-Aguilar, Z. Sabir, M. A. Z. Raja, W. F. Xia, H. Jahanshahi, et al., Numerical computing to solve the nonlinear corneal system of eye surgery using the capability of Morlet wavelet artificial neural networks, Fractals, 30 (2022), 1–19. https://doi.org/10.1142/S0218348X22401478
    [40] Z. Sabir, Neuron analysis through the swarming procedures for the singular two-point boundary value problems arising in the theory of thermal explosion, Eur. Phys. J. Plus, 137 (2022), 1–18. https://doi.org/10.1140/epjp/s13360-022-02869-3
    [41] Z. Sabir, H. A. Wahab, Evolutionary heuristic with Gudermannian neural networks for the nonlinear singular models of third kind, Phys. Scr., 96 (2021), 1–12. https://doi.org/10.1088/1402-4896/ac3c56
    [42] T. Saeed, Z. Sabir, M. S. Alhodaly, H. H. Alsulami, Y. G. Sánchez, An advanced heuristic approach for a nonlinear mathematical based medical smoking model, Results Phys., 32 (2022), 1–13. https://doi.org/10.1016/j.rinp.2021.105137
    [43] A. Gökçe, A dynamic interplay between Allee effect and time delay in a mathematical model with weakening memory, Appl. Math. Comput., 430 (2022), 127306. https://doi.org/10.1016/j.amc.2022.127306
    [44] M. R. Ali, S. Raut, S. Sarkar, U. Ghosh, Unraveling the combined actions of a Holling type III predator–prey model incorporating Allee response and memory effects, Comput. Math. Methods, 3 (2021), 1–18. https://doi.org/10.1002/cmm4.1130
    [45] A. Rojas-Palma, E. González-Olivares, Optimal harvesting in a predator-prey model with Allee effect and sigmoid functional response, Appl. Math. Model., 36 (2012), 1864–1874. https://doi.org/10.1016/j.apm.2011.07.081
    [46] T. Botmart, N. Yotha, P. Niamsup, W. Weera, Hybrid adaptive pinning control for function projective synchronization of delayed neural networks with mixed uncertain couplings, Complexity, 2017 (2017), 4654020. https://doi.org/10.1155/2017/4654020
    [47] P. Lakshminarayana, K. Vajravelu, G. Sucharitha, S. Sreenadh, Peristaltic slip flow of a Bingham fluid in an inclined porous conduit with Joule heating, Appl. Math. Nonlinear Sci., 3 (2018), 41–54. https://doi.org/10.21042/AMNS.2018.1.00005
    [48] T. Sajid, S. Tanveer, Z. Sabir, J. L. G. Guirao, Impact of activation energy and temperature-dependent heat source/sink on Maxwell–Sutterby fluid, Math. Probl. Eng., 2020 (2020), 1–15. https://doi.org/10.1155/2020/5251804
    [49] R. Ahmad, A. Farooqi, J. Zhang, N. Ali, Steady flow of a power law fluid through a tapered non-symmetric stenotic tube, Appl. Math. Nonlinear Sci., 4 (2019), 255–266. https://doi.org/10.2478/AMNS.2019.1.00022
    [50] Z. Sabir, A. Imran, M. Umar, M. Zeb, M. Shoaib, M. A. Z. Raja, A numerical approach for 2-D Sutterby fluid-flow bounded at a stagnation point with an inclined magnetic field and thermal radiation impacts, Therm. Sci., 25 (2021), 1975–1987. https://doi.org/10.2298/TSCI191207186S
    [51] Z. Sabir, M. A. Z. Raja, M. Shoaib, J. F. Aguilar, FMNEICS: Fractional Meyer neuro-evolution-based intelligent computing solver for doubly singular multi-fractional order Lane–Emden system, Comput. Appl. Math., 39 (2020), 1–18. https://doi.org/10.1007/s40314-020-01350-0
    [52] H. Günerhan, E. Çelik, Analytical and approximate solutions of fractional partial differential-algebraic equations, Appl. Math. Nonlinear Sci., 5 (2020), 109–120. https://doi.org/10.2478/amns.2020.1.00011
    [53] K. A. Touchent, Z. Hammouch, T. Mekkaoui, A modified invariant subspace method for solving partial differential equations with non-singular kernel fractional derivatives, Appl. Math. Nonlinear Sci., 5 (2020), 35–48. https://doi.org/10.2478/amns.2020.2.00012
    [54] Z. Sabir, M. A. Z. Raja, J. L. Guirao, T. Saeed, Meyer wavelet neural networks to solve a novel design of fractional order pantograph Lane–Emden differential model, Chaos Solitons Fract., 152 (2021), 1–14. https://doi.org/10.1016/j.chaos.2021.111404
    [55] E. İlhan, İ. O. Kıymaz, A generalization of truncated M-fractional derivative and applications to fractional differential equations, Appl. Math. Nonlinear Sci., 5 (2020), 171–188. https://doi.org/10.2478/amns.2020.1.00016
    [56] H. M. Baskonus, H. Bulut, T. A. Sulaiman, New complex hyperbolic structures to the Lonngren-wave equation by using sine-Gordon expansion method, Appl. Math. Nonlinear Sci., 4 (2019), 129–138. https://doi.org/10.2478/AMNS.2019.1.00013
© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).