Research article

Approximate inverse preconditioners for linear systems arising from spatial balanced fractional diffusion equations

  • Received: 28 April 2023 Revised: 16 May 2023 Accepted: 16 May 2023 Published: 18 May 2023
  • MSC : 65F08, 65F10

  • We consider preconditioned iterative methods for the linear systems arising from the finite volume discretization of spatial balanced fractional diffusion equations, where the fractional differential operators are comprised of both Riemann-Liouville and Caputo fractional derivatives. The coefficient matrices of the linear systems are the sum of a tridiagonal matrix and Toeplitz-times-diagonal-times-Toeplitz matrices. We propose symmetric approximate inverse preconditioners to solve such linear systems, and show that the spectra of the preconditioned matrices are clustered around 1. Numerical examples, for both one- and two-dimensional problems, are given to demonstrate the efficiency of the new preconditioners.

    Citation: Xiaofeng Guo, Jianyu Pan. Approximate inverse preconditioners for linear systems arising from spatial balanced fractional diffusion equations[J]. AIMS Mathematics, 2023, 8(7): 17284-17306. doi: 10.3934/math.2023884




    Fractional diffusion equations (FDEs) have been utilized to model anomalous diffusion phenomena in the real world, see for instance [1,2,4,18,19,22,23,25]. One of the main features of the fractional differential operator is nonlocality, which poses a big challenge for the numerical solution of FDEs: the coefficient matrix of the discretized FDEs is typically dense, which requires $O(N^3)$ computational cost and $O(N^2)$ memory storage if a direct solution method is employed, where $N$ is the number of unknowns. However, by making use of the Toeplitz-like structure of the coefficient matrices, many efficient algorithms have been developed, see for instance [6,9,12,13,14,15,16,21].
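The fast algorithms mentioned above rest on the fact that a Toeplitz matrix-vector product can be formed in $O(N\log N)$ operations by embedding the Toeplitz matrix into a circulant of twice the size and applying the FFT. The following NumPy sketch of this standard trick is ours and purely illustrative (all names are our own, not the authors' code):

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply an N-by-N Toeplitz matrix by a vector in O(N log N)
    by embedding it in a 2N-by-2N circulant and using the FFT."""
    n = len(x)
    # First column of the circulant embedding:
    # [t_0, t_1, ..., t_{n-1}, 0, t_{-(n-1)}, ..., t_{-1}].
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n))
    return y[:n].real

# Compare with a dense product on a small example.
rng = np.random.default_rng(0)
col = rng.standard_normal(6)                             # t_0, t_1, ..., t_5
row = np.concatenate([col[:1], rng.standard_normal(5)])  # t_0, t_{-1}, ...
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(6)]
              for i in range(6)])
x = rng.standard_normal(6)
assert np.allclose(T @ x, toeplitz_matvec(col, row, x))
```

Only the first column and first row of the matrix are stored, so the memory cost drops to $O(N)$ as well.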

    In this paper, we consider the following initial-boundary value problem for spatial balanced FDEs [7,24]:

    $$\begin{cases}\dfrac{\partial u(x,t)}{\partial t}+{}^{RL}_{x_L}D^{\alpha}_{x}\big(d_+(x,t)\,{}^{C}_{x}D^{\alpha}_{x_R}u(x,t)\big)+{}^{RL}_{x}D^{\beta}_{x_R}\big(d_-(x,t)\,{}^{C}_{x_L}D^{\beta}_{x}u(x,t)\big)=f(x,t),&(x,t)\in(x_L,x_R)\times(0,T],\\ u(x_L,t)=u_L(t),\quad u(x_R,t)=u_R(t),&0\le t\le T,\\ u(x,0)=u_0(x),&x_L\le x\le x_R,\end{cases} \tag{1.1}$$

    where $d_\pm(x,t)>0$ are diffusion coefficients, $f(x,t)$ is the source term, and $\alpha,\beta$ are the fractional orders satisfying $\frac12<\alpha,\beta<1$. Here ${}^{RL}_{x_L}D^{\gamma}_{x}$ and ${}^{RL}_{x}D^{\gamma}_{x_R}$ denote the left-sided and right-sided Riemann-Liouville fractional derivatives for $0<\gamma<1$, respectively, and are defined by [17]

    $${}^{RL}_{x_L}D^{\gamma}_{x}u(x)=\frac{1}{\Gamma(1-\gamma)}\frac{d}{dx}\int_{x_L}^{x}\frac{u(\xi)}{(x-\xi)^{\gamma}}\,d\xi,\qquad {}^{RL}_{x}D^{\gamma}_{x_R}u(x)=\frac{-1}{\Gamma(1-\gamma)}\frac{d}{dx}\int_{x}^{x_R}\frac{u(\xi)}{(\xi-x)^{\gamma}}\,d\xi,$$

    where $\Gamma(\cdot)$ is the Gamma function, while ${}^{C}_{x_L}D^{\gamma}_{x}$ and ${}^{C}_{x}D^{\gamma}_{x_R}$ denote the left-sided and right-sided Caputo fractional derivatives for $0<\gamma<1$, respectively, and are defined by [17]

    $${}^{C}_{x_L}D^{\gamma}_{x}u(x)=\frac{1}{\Gamma(1-\gamma)}\int_{x_L}^{x}\frac{u'(\xi)}{(x-\xi)^{\gamma}}\,d\xi,\qquad {}^{C}_{x}D^{\gamma}_{x_R}u(x)=\frac{-1}{\Gamma(1-\gamma)}\int_{x}^{x_R}\frac{u'(\xi)}{(\xi-x)^{\gamma}}\,d\xi.$$

    The fractional differential operator in (1.1) is called the balanced central fractional derivative, which was studied in [24]. One advantage of such a fractional differential operator is that its variational formulation has a symmetric bilinear form, which greatly benefits the theoretical investigation.

    Recently, a finite volume approximation for the spatial balanced FDEs (1.1) was proposed in [7]. By applying a standard first-order difference scheme for the time derivative and a finite volume discretization scheme for the spatial balanced fractional differential operator, a series of systems of linear equations is generated, whose coefficient matrices share the form of the sum of a tridiagonal matrix and two Toeplitz-times-diagonal-times-Toeplitz matrices. One attractive feature of these coefficient matrices is that they are symmetric positive definite, so that the linear systems can be solved by the CG method, in which the three-term recurrence can significantly reduce the computational and storage cost. However, due to the ill-conditioning of the coefficient matrices, the CG method, when applied to solve the resulting linear systems, usually converges very slowly. Therefore, preconditioners should be applied to improve the computational efficiency. In [7], the authors proposed two preconditioners: a circulant preconditioner for the constant diffusion coefficient case and a banded preconditioner for the variable diffusion coefficient case.

    In this paper, we consider approximate inverse preconditioners for the linear systems arising from the finite volume discretization of the spatial balanced FDEs (1.1). Our preconditioner is based on the symmetric approximate inverse strategy studied in [11] and the Sherman-Morrison-Woodbury formula. Rigorous analysis shows that the preconditioned matrix can be written as the sum of the identity matrix, a small norm matrix, and a low-rank matrix. Therefore, quick convergence of the CG method for solving the preconditioned linear systems is expected. Numerical examples, for both one-dimensional and two-dimensional cases, are given to demonstrate the robustness and effectiveness of the proposed preconditioner. We remark that our preconditioner can also be applied to another class of conservative balanced FDEs which was studied in [8,10].

    The rest of the paper is organized as follows. In Section 2, we present the discretized linear systems of the balanced FDEs. Our new preconditioners are given in Section 3, and their properties are investigated in detail in Section 4. In Section 5, we carry out the numerical experiments to demonstrate the performance of the proposed preconditioners. Finally, we give the concluding remarks in Section 6.

    Let $\Delta t=T/M_t$ be the time step, where $M_t$ is a given positive integer. We define a temporal partition $t_j=j\Delta t$ for $j=0,1,2,\dots,M_t$. The first-order time derivative in (1.1) can be discretized by the standard backward Euler scheme, and we obtain the following semidiscrete form:

    $$\frac{u(x,t_j)-u(x,t_{j-1})}{\Delta t}+{}^{RL}_{x_L}D^{\alpha}_{x}\big(d_+(x,t_j)\,{}^{C}_{x}D^{\alpha}_{x_R}u(x,t_j)\big)+{}^{RL}_{x}D^{\beta}_{x_R}\big(d_-(x,t_j)\,{}^{C}_{x_L}D^{\beta}_{x}u(x,t_j)\big)=f(x,t_j), \tag{2.1}$$

    for $j=1,2,\dots,M_t$. Let $\Delta x=(x_R-x_L)/(N+1)$ be the size of the spatial grid, where $N$ is a positive integer. We define a spatial partition $x_i=x_L+i\Delta x$ for $i=0,1,2,\dots,N+1$, and denote by $x_{i-\frac12}=\frac{x_{i-1}+x_i}{2}$ the midpoint of the interval $[x_{i-1},x_i]$. Integrating both sides of (2.1) over $[x_{i-\frac12},x_{i+\frac12}]$ gives

    $$\frac{1}{\Delta t}\int_{x_{i-\frac12}}^{x_{i+\frac12}}u(x,t_j)\,dx+\int_{x_{i-\frac12}}^{x_{i+\frac12}}{}^{RL}_{x_L}D^{\alpha}_{x}\big(d_+(x,t_j)\,{}^{C}_{x}D^{\alpha}_{x_R}u(x,t_j)\big)dx+\int_{x_{i-\frac12}}^{x_{i+\frac12}}{}^{RL}_{x}D^{\beta}_{x_R}\big(d_-(x,t_j)\,{}^{C}_{x_L}D^{\beta}_{x}u(x,t_j)\big)dx=\frac{1}{\Delta t}\int_{x_{i-\frac12}}^{x_{i+\frac12}}u(x,t_{j-1})\,dx+\int_{x_{i-\frac12}}^{x_{i+\frac12}}f(x,t_j)\,dx. \tag{2.2}$$

    Let $S_{\Delta x}(x_L,x_R)$ be the space of continuous and piecewise-linear functions with respect to the spatial partition, and define the nodal basis functions $\phi_k(x)$ as

    $$\phi_k(x)=\begin{cases}\dfrac{x-x_{k-1}}{\Delta x},&x\in[x_{k-1},x_k],\\ \dfrac{x_{k+1}-x}{\Delta x},&x\in[x_k,x_{k+1}],\\ 0,&\text{elsewhere},\end{cases}$$

    for $k=1,2,\dots,N$, and

    $$\phi_0(x)=\begin{cases}\dfrac{x_1-x}{\Delta x},&x\in[x_0,x_1],\\ 0,&\text{elsewhere},\end{cases}\qquad \phi_{N+1}(x)=\begin{cases}\dfrac{x-x_N}{\Delta x},&x\in[x_N,x_{N+1}],\\ 0,&\text{elsewhere}.\end{cases}$$

    The approximate solution $u_{\Delta x}(x,t_j)\in S_{\Delta x}(x_L,x_R)$ can be expressed as

    $$u_{\Delta x}(x,t_j)=\sum_{k=0}^{N+1}u_k^{(j)}\phi_k(x).$$

    Therefore, the corresponding finite volume scheme reads

    $$\frac{1}{\Delta t}\sum_{k=0}^{N+1}u_k^{(j)}\int_{x_{i-\frac12}}^{x_{i+\frac12}}\phi_k(x)\,dx+\frac{1}{\Gamma(1-\alpha)}\int_{x_L}^{x}(x-\xi)^{-\alpha}\big(d_+(\xi,t_j)\,{}^{C}_{\xi}D^{\alpha}_{x_R}u_{\Delta x}(\xi,t_j)\big)d\xi\,\Big|_{x_{i-\frac12}}^{x_{i+\frac12}}-\frac{1}{\Gamma(1-\beta)}\int_{x}^{x_R}(\xi-x)^{-\beta}\big(d_-(\xi,t_j)\,{}^{C}_{x_L}D^{\beta}_{\xi}u_{\Delta x}(\xi,t_j)\big)d\xi\,\Big|_{x_{i-\frac12}}^{x_{i+\frac12}}=\frac{1}{\Delta t}\sum_{k=0}^{N+1}u_k^{(j-1)}\int_{x_{i-\frac12}}^{x_{i+\frac12}}\phi_k(x)\,dx+\int_{x_{i-\frac12}}^{x_{i+\frac12}}f(x,t_j)\,dx, \tag{2.3}$$

    which can be further approximated to obtain [7]

    $$\frac18\Big(u_{i-1}^{(j)}+6u_i^{(j)}+u_{i+1}^{(j)}\Big)+\eta_\alpha\sum_{l=0}^{i}g_{i-l}^{(\alpha)}d_+\big(x_{l+\frac12},t_j\big)\Big(\sum_{k=l}^{N}g_{k-l}^{(\alpha)}u_k^{(j)}-a_{N-l}^{(\alpha)}u_{N+1}^{(j)}\Big)+\eta_\beta\sum_{l=i}^{N+1}g_{l-i}^{(\beta)}d_-\big(x_{l-\frac12},t_j\big)\Big(\sum_{k=1}^{l}g_{l-k}^{(\beta)}u_k^{(j)}-a_{l-1}^{(\beta)}u_0^{(j)}\Big)=\frac18\Big(u_{i-1}^{(j-1)}+6u_i^{(j-1)}+u_{i+1}^{(j-1)}\Big)+\frac{\Delta t}{\Delta x}\int_{x_{i-\frac12}}^{x_{i+\frac12}}f(x,t_j)\,dx,\quad 1\le i\le N,\ 1\le j\le M_t, \tag{2.4}$$

    where $\eta_\alpha=\frac{\Delta t}{2\Gamma(2-\alpha)\Delta x^{2\alpha}}$, $\eta_\beta=\frac{\Delta t}{2\Gamma(2-\beta)\Delta x^{2\beta}}$, and

    $$g_0^{(\gamma)}=a_0^{(\gamma)},\qquad g_k^{(\gamma)}=a_k^{(\gamma)}-a_{k-1}^{(\gamma)},\quad k=1,2,\dots,N, \tag{2.5}$$

    for $\gamma=\alpha,\beta$, with

    $$a_0^{(\gamma)}=\Big(\frac12\Big)^{1-\gamma},\qquad a_k^{(\gamma)}=\Big(k+\frac12\Big)^{1-\gamma}-\Big(k-\frac12\Big)^{1-\gamma},\quad k=1,2,\dots,N.$$

    The initial value and boundary conditions are

    $$u_k^{(0)}=u_0(x_k),\quad k=0,1,2,\dots,N+1,\qquad u_0^{(j)}=u_L(t_j),\quad u_{N+1}^{(j)}=u_R(t_j),\quad j=1,2,\dots,M_t.$$

    Collecting all components $i$ into a single matrix system, we obtain

    $$\Big(M+\eta_\alpha\widetilde G_\alpha\widetilde D_+^{(j)}\widetilde G_\alpha^{\top}+\eta_\beta\widetilde G_\beta\widetilde D_-^{(j)}\widetilde G_\beta^{\top}\Big)\mathbf u^{(j)}=M\mathbf u^{(j-1)}+\Delta t\,\mathbf f^{(j)}+\mathbf b^{(j)},\quad j=1,2,\dots,M_t, \tag{2.6}$$

    where

    $$\widetilde D_+^{(j)}=\mathrm{diag}\Big(\big\{d_+(x_{k+\frac12},t_j)\big\}_{k=0}^{N}\Big)\qquad\text{and}\qquad \widetilde D_-^{(j)}=\mathrm{diag}\Big(\big\{d_-(x_{k+\frac12},t_j)\big\}_{k=0}^{N}\Big)$$

    are diagonal matrices, $M=\frac18\,\mathrm{tridiag}(1,6,1)$, $\mathbf u^{(j)}=\big[u_1^{(j)},u_2^{(j)},\dots,u_N^{(j)}\big]^{\top}$, and $\mathbf f^{(j)}=\big[f_1^{(j)},f_2^{(j)},\dots,f_N^{(j)}\big]^{\top}$ with

    $$f_i^{(j)}=\frac{1}{\Delta x}\int_{x_{i-\frac12}}^{x_{i+\frac12}}f(x,t_j)\,dx,$$

    and

    $$\mathbf b^{(j)}=\Big[b_1^{(j)}+\tfrac18\big(u_0^{(j-1)}-u_0^{(j)}\big),\,b_2^{(j)},\,b_3^{(j)},\dots,b_{N-1}^{(j)},\,b_N^{(j)}+\tfrac18\big(u_{N+1}^{(j-1)}-u_{N+1}^{(j)}\big)\Big]^{\top}$$

    with

    $$b_i^{(j)}=\eta_\alpha\sum_{l=0}^{i}g_{i-l}^{(\alpha)}d_+\big(x_{l+\frac12},t_j\big)a_{N-l}^{(\alpha)}u_{N+1}^{(j)}-\eta_\alpha g_i^{(\alpha)}d_+\big(x_{\frac12},t_j\big)g_0^{(\alpha)}u_0^{(j)}+\eta_\beta\sum_{l=i}^{N+1}g_{l-i}^{(\beta)}d_-\big(x_{l-\frac12},t_j\big)a_{l-1}^{(\beta)}u_0^{(j)}-\eta_\beta g_{N-i+1}^{(\beta)}d_-\big(x_{N+\frac12},t_j\big)g_0^{(\beta)}u_{N+1}^{(j)},\quad i=1,2,\dots,N.$$

    The matrices $\widetilde G_\alpha$, $\widetilde G_\beta$ are $N$-by-$(N+1)$ Toeplitz matrices defined by

    $$\widetilde G_\alpha=\begin{bmatrix}g_1^{(\alpha)}&g_0^{(\alpha)}&&&\\ g_2^{(\alpha)}&g_1^{(\alpha)}&g_0^{(\alpha)}&&\\ \vdots&\ddots&\ddots&\ddots&\\ g_N^{(\alpha)}&\cdots&g_2^{(\alpha)}&g_1^{(\alpha)}&g_0^{(\alpha)}\end{bmatrix},\qquad \widetilde G_\beta=\begin{bmatrix}g_0^{(\beta)}&g_1^{(\beta)}&g_2^{(\beta)}&\cdots&g_N^{(\beta)}\\ &g_0^{(\beta)}&g_1^{(\beta)}&\ddots&\vdots\\ &&\ddots&\ddots&g_2^{(\beta)}\\ &&&g_0^{(\beta)}&g_1^{(\beta)}\end{bmatrix}.$$

    We remark that the entries of $\widetilde G_\alpha$ and $\widetilde G_\beta$ are independent of the time level $t_j$.

    We denote the coefficient matrix by $A^{(j)}$, that is,

    $$A^{(j)}=M+\eta_\alpha\widetilde G_\alpha\widetilde D_+^{(j)}\widetilde G_\alpha^{\top}+\eta_\beta\widetilde G_\beta\widetilde D_-^{(j)}\widetilde G_\beta^{\top}. \tag{2.7}$$

    As $M$ and $\widetilde D_+^{(j)},\widetilde D_-^{(j)}$ are symmetric positive definite, we can see that the coefficient matrix $A^{(j)}$ is symmetric positive definite too.
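The structure of (2.7) is easy to check numerically. The following NumPy sketch, which is our own illustration and not the authors' implementation, assembles a small dense instance of the coefficient matrix from the weights in (2.5) (with unit diffusion coefficients and $\eta_\alpha=\eta_\beta=1$ as arbitrary assumptions) and verifies that it is symmetric positive definite:

```python
import numpy as np

def g_coeffs(gamma, N):
    """Weights g_k of (2.5) for fractional order gamma in (1/2, 1)."""
    k = np.arange(N + 1, dtype=float)
    a = (k + 0.5) ** (1 - gamma) - np.abs(k - 0.5) ** (1 - gamma)
    a[0] = 0.5 ** (1 - gamma)
    g = np.diff(a, prepend=0.0)
    g[0] = a[0]
    return g

def assemble_A(N, alpha, beta, dplus, dminus, eta_a, eta_b):
    """Dense sketch of A = M + eta_a*Ga*D+*Ga^T + eta_b*Gb*D-*Gb^T, cf. (2.7)."""
    ga, gb = g_coeffs(alpha, N), g_coeffs(beta, N)
    Ga = np.zeros((N, N + 1))            # N-by-(N+1) Toeplitz factor
    Gb = np.zeros((N, N + 1))            # N-by-(N+1) Toeplitz factor
    for i in range(N):
        Ga[i, :i + 2] = ga[:i + 2][::-1]  # row i: g_{i+1}, ..., g_1, g_0
        Gb[i, i:] = gb[:N + 1 - i]        # row i: g_0, g_1, ...
    M = (6 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) / 8
    return M + eta_a * Ga @ np.diag(dplus) @ Ga.T + eta_b * Gb @ np.diag(dminus) @ Gb.T

N = 64
A = assemble_A(N, 0.6, 0.7, np.ones(N + 1), np.ones(N + 1), 1.0, 1.0)
assert np.allclose(A, A.T)                 # symmetric
assert np.linalg.eigvalsh(A).min() > 0     # positive definite
```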

    For the entries of $\widetilde G_\alpha$ and $\widetilde G_\beta$, we have the following result, which is directly obtained from Lemma 2.3 in [15].

    Lemma 2.1. Let $g_k^{(\gamma)}$ be defined by (2.5) with $\frac12<\gamma<1$. Then

    (1) $g_0^{(\gamma)}>0$ and $g_1^{(\gamma)}<g_2^{(\gamma)}<\cdots<g_k^{(\gamma)}<\cdots<0$;

    (2) $|g_k^{(\gamma)}|<\dfrac{c_\gamma}{(k-1)^{\gamma+1}}$ for $k=2,3,\dots$, where $c_\gamma=\gamma(1-\gamma)$;

    (3) $\lim_{k\to\infty}|g_k^{(\gamma)}|=0$, $\sum_{k=0}^{\infty}g_k^{(\gamma)}=0$, and $\sum_{k=0}^{p}g_k^{(\gamma)}>0$ for $p\ge1$.
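The properties in Lemma 2.1 are easy to verify numerically. The following short NumPy check, our own sketch for a sample order $\gamma=0.7$, tests items (1)-(3) on a finite number of weights:

```python
import numpy as np

def g_coeffs(gamma, N):
    """Weights g_k of (2.5): g_0 = a_0 and g_k = a_k - a_{k-1}."""
    k = np.arange(N + 1, dtype=float)
    a = (k + 0.5) ** (1 - gamma) - np.abs(k - 0.5) ** (1 - gamma)
    a[0] = 0.5 ** (1 - gamma)
    g = np.diff(a, prepend=0.0)
    g[0] = a[0]
    return g

gamma, N = 0.7, 2000
g = g_coeffs(gamma, N)

assert g[0] > 0 and np.all(g[1:] < 0)      # (1): g_0 > 0, g_k < 0 for k >= 1
assert np.all(np.diff(g[1:]) > 0)          # (1): g_1 < g_2 < ... < 0
k = np.arange(2, N + 1)
c_gamma = gamma * (1 - gamma)
assert np.all(np.abs(g[2:]) < c_gamma / (k - 1) ** (gamma + 1))  # (2)
assert np.all(np.cumsum(g) > 0)            # (3): partial sums are positive
```

Note that the partial sum $\sum_{k=0}^{p}g_k^{(\gamma)}$ telescopes to $a_p^{(\gamma)}$, which is positive and tends to zero, consistent with item (3).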

    As the coefficient matrix is symmetric positive definite, we can apply the CG method to solve the linear systems (2.6). In order to improve the performance and reliability of the CG method, preconditioning is necessary. In this section, we develop the approximate inverse preconditioners for the linear system (2.6).

    For the convenience of analysis, we omit the superscript "(j)" for the $j$th time step, that is, we denote the coefficient matrix (2.7) as

    $$A=M+\eta_\alpha\widetilde G_\alpha\widetilde D_+\widetilde G_\alpha^{\top}+\eta_\beta\widetilde G_\beta\widetilde D_-\widetilde G_\beta^{\top}, \tag{3.1}$$

    where $\widetilde D_+=\widetilde D_+^{(j)}$, $\widetilde D_-=\widetilde D_-^{(j)}$.

    Let $\mathbf g_\alpha$ and $G_\alpha$ be the first column and the last $N$ columns of $\widetilde G_\alpha$, respectively, that is, $\widetilde G_\alpha=[\mathbf g_\alpha,G_\alpha]$ where

    $$\mathbf g_\alpha=\begin{bmatrix}g_1^{(\alpha)}\\ g_2^{(\alpha)}\\ \vdots\\ g_N^{(\alpha)}\end{bmatrix}\in\mathbb R^{N}\qquad\text{and}\qquad G_\alpha=\begin{bmatrix}g_0^{(\alpha)}&&&\\ g_1^{(\alpha)}&g_0^{(\alpha)}&&\\ \vdots&\ddots&\ddots&\\ g_{N-1}^{(\alpha)}&\cdots&g_1^{(\alpha)}&g_0^{(\alpha)}\end{bmatrix}\in\mathbb R^{N\times N}.$$

    Analogously, we denote by $\mathbf g_\beta$ and $G_\beta$ the last column and the first $N$ columns of $\widetilde G_\beta$, respectively, that is, $\widetilde G_\beta=[G_\beta,\mathbf g_\beta]$ where

    $$\mathbf g_\beta=\begin{bmatrix}g_N^{(\beta)}\\ g_{N-1}^{(\beta)}\\ \vdots\\ g_1^{(\beta)}\end{bmatrix}\in\mathbb R^{N}\qquad\text{and}\qquad G_\beta=\begin{bmatrix}g_0^{(\beta)}&g_1^{(\beta)}&\cdots&g_{N-1}^{(\beta)}\\ &g_0^{(\beta)}&\ddots&\vdots\\ &&\ddots&g_1^{(\beta)}\\ &&&g_0^{(\beta)}\end{bmatrix}\in\mathbb R^{N\times N}.$$

    Then we have

    $$A=M+\eta_\alpha[\mathbf g_\alpha,G_\alpha]\widetilde D_+[\mathbf g_\alpha,G_\alpha]^{\top}+\eta_\beta[G_\beta,\mathbf g_\beta]\widetilde D_-[G_\beta,\mathbf g_\beta]^{\top}=M+\eta_\alpha G_\alpha\widehat D_+G_\alpha^{\top}+\eta_\beta G_\beta\widehat D_-G_\beta^{\top}+\eta_\alpha d_+(x_{\frac12})\mathbf g_\alpha\mathbf g_\alpha^{\top}+\eta_\beta d_-(x_{N+\frac12})\mathbf g_\beta\mathbf g_\beta^{\top},$$

    where

    $$\widehat D_+=\mathrm{diag}\Big(\big\{d_+(x_{k+\frac12})\big\}_{k=1}^{N}\Big),\qquad \widehat D_-=\mathrm{diag}\Big(\big\{d_-(x_{k+\frac12})\big\}_{k=0}^{N-1}\Big).$$

    Therefore, $A$ can be written as

    $$A=\widehat A+UU^{\top}, \tag{3.2}$$

    where

    $$\widehat A=M+\eta_\alpha G_\alpha\widehat D_+G_\alpha^{\top}+\eta_\beta G_\beta\widehat D_-G_\beta^{\top} \tag{3.3}$$

    and

    $$U=\Big[\sqrt{\eta_\alpha d_+(x_{\frac12})}\,\mathbf g_\alpha,\ \sqrt{\eta_\beta d_-(x_{N+\frac12})}\,\mathbf g_\beta\Big]\in\mathbb R^{N\times2}. \tag{3.4}$$

    According to the Sherman-Morrison-Woodbury formula, we have

    $$A^{-1}=\big(\widehat A+UU^{\top}\big)^{-1}=\widehat A^{-1}-\widehat A^{-1}U\big(I+U^{\top}\widehat A^{-1}U\big)^{-1}U^{\top}\widehat A^{-1}. \tag{3.5}$$
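Formula (3.5) is what makes the rank-2 correction cheap: only a 2-by-2 matrix involving $U$ has to be inverted. A small NumPy sanity check of the identity on a random symmetric positive definite matrix (our own sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
B = rng.standard_normal((N, N))
Ahat = B @ B.T + N * np.eye(N)     # a symmetric positive definite "A hat"
U = rng.standard_normal((N, 2))    # rank-2 factor, as in (3.2)
A = Ahat + U @ U.T

Ahat_inv = np.linalg.inv(Ahat)
# Sherman-Morrison-Woodbury (3.5): only a 2-by-2 matrix is inverted.
small = np.linalg.inv(np.eye(2) + U.T @ Ahat_inv @ U)
A_inv = Ahat_inv - Ahat_inv @ U @ small @ U.T @ Ahat_inv

assert np.allclose(A_inv, np.linalg.inv(A))
```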

    In the following, we consider the approximation of $\widehat A^{-1}$. To this end, we first define a matrix $\widetilde A$ with

    $$\widetilde A=M+\eta_\alpha G_\alpha D_+G_\alpha^{\top}+\eta_\beta G_\beta D_-G_\beta^{\top}, \tag{3.6}$$

    where $D_+=\mathrm{diag}\big(\{d_+(x_k)\}_{k=1}^{N}\big)$ and $D_-=\mathrm{diag}\big(\{d_-(x_k)\}_{k=1}^{N}\big)$.

    Assume $d_+(x),d_-(x)\in C[x_L,x_R]$. Then it is easy to see that for any $\epsilon>0$, when the spatial grid size $\Delta x$ is small enough, we have

    $$\big|d_+(x_k)-d_+(x_{k\pm\frac12})\big|<\epsilon\qquad\text{and}\qquad\big|d_-(x_k)-d_-(x_{k\pm\frac12})\big|<\epsilon.$$

    Meanwhile, we learn from Lemma 2.1 that there exist $c_\alpha,c_\beta>0$ such that $\|G_\alpha\|_2\le c_\alpha$ and $\|G_\beta\|_2\le c_\beta$. Let

    $$S_1=\widehat A-\widetilde A. \tag{3.7}$$

    Then it holds that

    $$\|S_1\|_2=\big\|\eta_\alpha G_\alpha(\widehat D_+-D_+)G_\alpha^{\top}+\eta_\beta G_\beta(\widehat D_--D_-)G_\beta^{\top}\big\|_2\le\big(\eta_\alpha c_\alpha^2+\eta_\beta c_\beta^2\big)\epsilon.$$

    This indicates that $\widetilde A$ can be a good approximation of $\widehat A$ when $\Delta x$ is very small. However, it is not easy to compute the inverse of $\widetilde A$.

    Motivated by the symmetric approximate inverse preconditioner proposed in [11], we consider the following approximations

    $$\widetilde A^{-1/2}e_i\approx K_i^{-1/2}e_i,\quad i=1,2,\dots,N,$$

    where $e_i$ is the $i$-th column of the identity matrix $I$ and

    $$K_i=M+\eta_\alpha d_+(x_i)G_\alpha G_\alpha^{\top}+\eta_\beta d_-(x_i)G_\beta G_\beta^{\top}, \tag{3.8}$$

    which is symmetric positive definite. That is, we approximate the $i$-th column of $\widetilde A^{-1/2}$ by the $i$-th column of $K_i^{-1/2}$. Then we propose the following symmetric approximate inverse preconditioner

    $$P_1^{-1}=\bigg(\sum_{i=1}^{N}e_ie_i^{\top}K_i^{-1/2}\bigg)\bigg(\sum_{i=1}^{N}K_i^{-1/2}e_ie_i^{\top}\bigg). \tag{3.9}$$

    Although $K_i$ is symmetric positive definite, it is not easy to compute the inverse of its square root. Hence, we further approximate $K_i$ by circulant matrices, whose inverse square roots can be easily obtained by the FFT.

    Let $C_M$, $C_\alpha$ and $C_\beta$ be Strang's circulant approximations [5] of $M$, $G_\alpha$ and $G_\beta$, respectively. Then we obtain the following preconditioner

    $$P_2^{-1}=\bigg(\sum_{i=1}^{N}e_ie_i^{\top}C_i^{-1/2}\bigg)\bigg(\sum_{i=1}^{N}C_i^{-1/2}e_ie_i^{\top}\bigg), \tag{3.10}$$

    where $C_i=C_M+\eta_\alpha d_+(x_i)C_\alpha C_\alpha^{\top}+\eta_\beta d_-(x_i)C_\beta C_\beta^{\top}$ is a circulant matrix.

    In order to make the preconditioner more practical, similar to the idea in [11,13], we utilize an interpolation technique. Let $\{\tilde x_k\}_{k=1}^{\ell}$ be distinct interpolation points in $[x_L,x_R]$ with $\ell$ a small integer ($\ell\ll N$), and let $\lambda=(\lambda_M,\lambda_\alpha,\lambda_\beta)$, where $\lambda_M$, $\lambda_\alpha$, $\lambda_\beta$ are certain positive real numbers. Then we define the function

    $$g_\lambda(x)=\big(\lambda_M+\eta_\alpha\lambda_\alpha d_+(x)+\eta_\beta\lambda_\beta d_-(x)\big)^{-1/2},\quad x\in[x_L,x_R].$$

    Let

    $$q_\lambda(x)=\phi_1(x)g_\lambda(\tilde x_1)+\phi_2(x)g_\lambda(\tilde x_2)+\cdots+\phi_\ell(x)g_\lambda(\tilde x_\ell)$$

    be the piecewise-linear interpolant of $g_\lambda(x)$ based on the points $\{(\tilde x_k,g_\lambda(\tilde x_k))\}_{k=1}^{\ell}$. Define

    $$\widetilde C_k=C_M+\eta_\alpha d_+(\tilde x_k)C_\alpha C_\alpha^{\top}+\eta_\beta d_-(\tilde x_k)C_\beta C_\beta^{\top},\quad k=1,2,\dots,\ell.$$

    Then $\widetilde C_k$ is symmetric positive definite and can be diagonalized by the FFT, that is,

    $$\widetilde C_k=F^{*}\widetilde\Lambda_kF,$$

    where $F$ is the Fourier matrix and $\widetilde\Lambda_k$ is a diagonal matrix whose diagonal entries are the eigenvalues of $\widetilde C_k$. By making use of the interpolation technique, we approximate $C_i^{-1/2}$ by

    $$C_i^{-1/2}\approx F^{*}\bigg(\sum_{k=1}^{\ell}\phi_k(x_i)\widetilde\Lambda_k^{-1/2}\bigg)F=\sum_{k=1}^{\ell}\phi_k(x_i)\widetilde C_k^{-1/2},\quad i=1,2,\dots,N.$$

    By substituting these approximations into $P_2^{-1}$, we obtain the practical preconditioner

    $$\begin{aligned}P_3^{-1}&=\bigg(\sum_{i=1}^{N}e_ie_i^{\top}F^{*}\sum_{k=1}^{\ell}\phi_k(x_i)\widetilde\Lambda_k^{-1/2}F\bigg)\bigg(\sum_{i=1}^{N}F^{*}\sum_{k=1}^{\ell}\phi_k(x_i)\widetilde\Lambda_k^{-1/2}Fe_ie_i^{\top}\bigg)\\&=\bigg(\sum_{i=1}^{N}\sum_{k=1}^{\ell}\phi_k(x_i)e_ie_i^{\top}F^{*}\widetilde\Lambda_k^{-1/2}\bigg)\bigg(\sum_{i=1}^{N}\sum_{k=1}^{\ell}\widetilde\Lambda_k^{-1/2}F\phi_k(x_i)e_ie_i^{\top}\bigg)\\&=\bigg(\sum_{k=1}^{\ell}\sum_{i=1}^{N}\phi_k(x_i)e_ie_i^{\top}F^{*}\widetilde\Lambda_k^{-1/2}\bigg)\bigg(\sum_{k=1}^{\ell}\widetilde\Lambda_k^{-1/2}F\sum_{i=1}^{N}\phi_k(x_i)e_ie_i^{\top}\bigg)\\&=\bigg(\sum_{k=1}^{\ell}\Phi_kF^{*}\widetilde\Lambda_k^{-1/2}\bigg)\bigg(\sum_{k=1}^{\ell}\widetilde\Lambda_k^{-1/2}F\Phi_k\bigg),\end{aligned} \tag{3.11}$$

    where $\Phi_k=\mathrm{diag}\big(\phi_k(x_1),\phi_k(x_2),\dots,\phi_k(x_N)\big)$. Now applying $P_3^{-1}$ to any vector requires about $O(\ell N\log N)$ operations, which is acceptable for a moderate number $\ell$.
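The $O(\ell N\log N)$ cost rests on the fact that each factor $\widetilde C_k^{-1/2}=F^{*}\widetilde\Lambda_k^{-1/2}F$ can be applied to a vector with two FFTs. A minimal NumPy sketch of this step (ours, using the Strang circulant of $M$ as a simple stand-in for $\widetilde C_k$):

```python
import numpy as np

def apply_circ_inv_sqrt(c, x):
    """Apply C^{-1/2} to x for a symmetric positive definite circulant C with
    first column c, using C = F* Lambda F: two FFTs, O(N log N) work."""
    lam = np.fft.fft(c).real           # real spectrum for a symmetric circulant
    return np.fft.ifft(np.fft.fft(x) * lam ** -0.5).real

# Strang circulant of M = tridiag(1,6,1)/8 as a stand-in for \tilde C_k.
N = 64
c = np.zeros(N)
c[0], c[1], c[-1] = 6 / 8, 1 / 8, 1 / 8
x = np.random.default_rng(2).standard_normal(N)
y = apply_circ_inv_sqrt(c, x)

# Applying C^{-1/2} twice must agree with a dense solve of C z = x.
C = np.array([[c[(i - j) % N] for j in range(N)] for i in range(N)])
assert np.allclose(apply_circ_inv_sqrt(c, y), np.linalg.solve(C, x))
```

Here the eigenvalues $(6+2\cos(2\pi k/N))/8$ lie in $[1/2,1]$, so the inverse square root is well defined.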

    Finally, by substituting $P_3^{-1}$ for $\widehat A^{-1}$ in the Sherman-Morrison-Woodbury formula (3.5), we obtain the preconditioner

    $$P_4^{-1}=P_3^{-1}-P_3^{-1}U\big(I+U^{\top}P_3^{-1}U\big)^{-1}U^{\top}P_3^{-1}. \tag{3.12}$$

    We remark that both $P_3$ and $P_4$ can be taken as preconditioners. It is clear that implementing $P_4^{-1}$ requires more computational work than $P_3^{-1}$. However, $P_4^{-1}$ is a rank-2 update of $P_3^{-1}$, and $\mathbf g_\alpha$, $\mathbf g_\beta$ are independent of $t_j$, so during the implementation of the preconditioner $P_4$ in the CG method we can precompute $P_3^{-1}U$ in advance, as well as the product $U^{\top}P_3^{-1}U$ and the inverse of the 2-by-2 matrix $I+U^{\top}P_3^{-1}U$. Therefore, at each CG iteration with the preconditioner $P_4$, besides the matrix-vector product with $P_3^{-1}$, only two inner products and two vector updates are needed, and it is expected that $P_4$ may have better performance for one-dimensional problems.

    Since $P_4^{-1}$ is obtained by substituting $P_3^{-1}$ for $\widehat A^{-1}$ in the expression of $A^{-1}$, the approximation property of $P_4^{-1}$ to $A^{-1}$ depends on how close $P_3^{-1}$ is to $\widehat A^{-1}$. Therefore, in this section, we study the difference between $P_3^{-1}$ and $\widehat A^{-1}$.

    We first introduce the off-diagonal decay property [20], which is crucial for studying the circulant approximation of the Toeplitz matrix.

    Definition 4.1. Let $A=[a_{i,j}]_{i,j\in I}$ be a matrix, where the index set is $I=\mathbb Z,\mathbb N$ or $\{1,2,\dots,N\}$. We say $A$ belongs to the class $\mathcal L_s$ if

    $$|a_{i,j}|\le\frac{c}{(1+|i-j|)^{s}} \tag{4.1}$$

    holds for $s>1$ and some constant $c>0$, and say $A$ belongs to the class $\mathcal E_r$ if

    $$|a_{i,j}|\le c\,e^{-r|i-j|} \tag{4.2}$$

    holds for $r>0$ and some constant $c>0$.

    The following results hold for the off-diagonal decay matrix classes $\mathcal L_s$ and $\mathcal E_r$ [3,11,15,20].

    Lemma 4.1. Let $A=[a_{i,j}]_{i,j\in I}$ be a nonsingular matrix, where the index set is $I=\mathbb Z,\mathbb N$ or $\{1,2,\dots,N\}$.

    (1) If $A\in\mathcal L_s$ for some $s>1$, then $A^{-1}\in\mathcal L_s$.

    (2) If $A\in\mathcal L_{1+s_1}$ and $B\in\mathcal L_{1+s_2}$ are finite matrices, then $AB\in\mathcal L_{1+s}$, where $s=\min\{s_1,s_2\}$.

    (3) If $A\in\mathcal E_{r_1}$ and $B\in\mathcal E_{r_2}$ are finite matrices, then $AB\in\mathcal E_{r}$ for some constant $0<r<\min\{r_1,r_2\}$.

    (4) Let $A$ be a banded finite matrix and Hermitian positive definite, and let $f$ be an analytic function on $[\lambda_{\min}(A),\lambda_{\max}(A)]$ such that $f(\lambda)$ is real for real $\lambda$, where $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimal and maximal eigenvalues of $A$, respectively. Then $f(A)$ has the off-diagonal exponential decay property (4.2). In particular, taking $f(x)=x^{-1}$, $x^{1/2}$, and $x^{-1/2}$, respectively, shows that $A^{-1}$, $A^{1/2}$, and $A^{-1/2}$ have the off-diagonal exponential decay property (4.2).

    Assume that $d_+(x),d_-(x)\in C[x_L,x_R]$. Then it follows from Lemma 2.1 and Lemma 4.1 that

    $$G_\alpha,\ G_\alpha G_\alpha^{\top},\ G_\alpha D_+G_\alpha^{\top}\in\mathcal L_{1+\alpha}\qquad\text{and}\qquad G_\beta,\ G_\beta G_\beta^{\top},\ G_\beta D_-G_\beta^{\top}\in\mathcal L_{1+\beta},$$

    for $\frac12<\alpha,\beta<1$. Therefore, it holds that $\widetilde A,\ \widetilde A^{-1}\in\mathcal L_{1+\min\{\alpha,\beta\}}$.

    Given an integer $m$, let $G_{\alpha,m}$ and $G_{\beta,m}$ be the $(2m+1)$-banded approximations of $G_\alpha$ and $G_\beta$, respectively, that is,

    $$G_{\alpha,m}(i,j)=\begin{cases}G_\alpha(i,j),&|i-j|\le m,\\ 0,&\text{otherwise},\end{cases}\qquad G_{\beta,m}(i,j)=\begin{cases}G_\beta(i,j),&|i-j|\le m,\\ 0,&\text{otherwise}.\end{cases}$$

    In the following discussion, we let

    $$K_i=M+\eta_\alpha d_+(x_i)G_{\alpha,m}G_{\alpha,m}^{\top}+\eta_\beta d_-(x_i)G_{\beta,m}G_{\beta,m}^{\top},\quad i=1,2,\dots,N, \tag{4.3}$$

    which can be regarded as a banded approximation of the $K_i$ defined in (3.8). We remark that, in actual applications, we still employ the $K_i$ in (3.8) to construct the preconditioners.

    Define

    $$\widetilde A_m=M+\eta_\alpha G_{\alpha,m}D_+G_{\alpha,m}^{\top}+\eta_\beta G_{\beta,m}D_-G_{\beta,m}^{\top}. \tag{4.4}$$

    As $\widetilde A_m$ and $K_i$ are banded and symmetric positive definite matrices, it follows from Lemma 4.1 that $\widetilde A_m^{-1}$, $K_i^{-1}$ and $K_i^{-1/2}$ have the off-diagonal exponential decay property. Meanwhile, we have

    $$\lambda_{\min}(\widetilde A)\ge\lambda_{\min}(M)\ge\frac12,\qquad \lambda_{\min}(\widetilde A_m)\ge\lambda_{\min}(M)\ge\frac12,$$

    and hence

    $$\|\widetilde A^{-1}\|_2\le2\qquad\text{and}\qquad\|\widetilde A_m^{-1}\|_2\le2.$$

    For a given $\epsilon>0$, it follows from Theorem 12 in [7] that there exists an integer $m_1>0$ such that for all $m\ge m_1$ we have

    $$\|\widetilde A_m-\widetilde A\|_2\le\epsilon/4.$$

    Therefore, it holds that

    $$\|\widetilde A_m^{-1}-\widetilde A^{-1}\|_2=\|\widetilde A_m^{-1}(\widetilde A-\widetilde A_m)\widetilde A^{-1}\|_2\le\|\widetilde A_m^{-1}\|_2\,\|\widetilde A_m-\widetilde A\|_2\,\|\widetilde A^{-1}\|_2\le\epsilon, \tag{4.5}$$

    and

    $$\|P_1^{-1}-\widetilde A^{-1}\|_2\le\|P_1^{-1}-\widetilde A_m^{-1}\|_2+\|\widetilde A_m^{-1}-\widetilde A^{-1}\|_2\le\|P_1^{-1}-\widetilde A_m^{-1}\|_2+\epsilon.$$

    Now, we turn to estimating the upper bound of $\|P_1^{-1}-\widetilde A_m^{-1}\|_2$. Since both $P_1^{-1}$ and $\widetilde A_m^{-1}$ are symmetric, we have

    $$\|P_1^{-1}-\widetilde A_m^{-1}\|_2=\rho\big(P_1^{-1}-\widetilde A_m^{-1}\big)\le\|P_1^{-1}-\widetilde A_m^{-1}\|_1=\max_{1\le j\le N}\big\|\big(P_1^{-1}-\widetilde A_m^{-1}\big)e_j\big\|_1\le\max_{1\le j\le N}\big\|\big(P_1^{-1}-K_j^{-1}\big)e_j\big\|_1+\max_{1\le j\le N}\big\|\big(K_j^{-1}-\widetilde A_m^{-1}\big)e_j\big\|_1. \tag{4.6}$$

    For the first term, we have the following estimate.

    Lemma 4.2. Let $K_j$ be defined by (4.3). Assume that $0<d_{\min}\le d_+(x),d_-(x)\le d_{\max}$. Then for a given $\epsilon>0$, there exist an integer $N_1>0$ and two positive constants $c_3,c_4$ such that

    $$\big\|P_1^{-1}e_j-K_j^{-1}e_j\big\|_1<c_3\max\big\{\Delta_{d_+,j}^{N_1},\Delta_{d_-,j}^{N_1}\big\}+c_4\epsilon,$$

    where $\Delta_{d_+,j}^{N_1}=\max_{j-N_1\le k\le j+N_1}|d_{+,k}-d_{+,j}|$ and $\Delta_{d_-,j}^{N_1}=\max_{j-N_1\le k\le j+N_1}|d_{-,k}-d_{-,j}|$, with $d_{\pm,k}=d_\pm(x_k)$.

    Proof. We have

    $$P_1^{-1}e_j-K_j^{-1}e_j=\bigg(\sum_{i=1}^{N}e_ie_i^{\top}K_i^{-1/2}\bigg)K_j^{-1/2}e_j-K_j^{-1}e_j=\sum_{i=1}^{N}e_ie_i^{\top}K_i^{-1/2}K_j^{-1/2}e_j-\sum_{i=1}^{N}e_ie_i^{\top}K_j^{-1}e_j=\sum_{i=1}^{N}e_ie_i^{\top}\big(K_i^{-1/2}-K_j^{-1/2}\big)K_j^{-1/2}e_j. \tag{4.7}$$

    As was shown in Theorem 2.2 of [3], for a given $\epsilon>0$, we can find a polynomial $p_k(t)=\sum_{\ell=0}^{k}a_\ell t^{\ell}$ of degree $k$ such that

    $$\Big|\big[K_i^{-1/2}\big]_{l,r}-\big[p_k(K_i)\big]_{l,r}\Big|\le\big\|K_i^{-1/2}-p_k(K_i)\big\|_2<\epsilon,\quad 1\le i\le N,\ 1\le l,r\le N, \tag{4.8}$$

    where $[\cdot]_{l,r}$ denotes the $(l,r)$-th entry of a matrix. Then we can write (4.7) as

    $$P_1^{-1}e_j-K_j^{-1}e_j=\sum_{i=1}^{N}e_ie_i^{\top}\big(K_i^{-1/2}-p_k(K_i)+p_k(K_i)-p_k(K_j)+p_k(K_j)-K_j^{-1/2}\big)K_j^{-1/2}e_j=\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(1)}e_j+\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(2)}e_j+\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(3)}e_j, \tag{4.9}$$

    where

    $$H_{ij}^{(1)}=\big(K_i^{-1/2}-p_k(K_i)\big)K_j^{-1/2},\qquad H_{ij}^{(2)}=\big(p_k(K_i)-p_k(K_j)\big)K_j^{-1/2},\qquad H_{ij}^{(3)}=\big(p_k(K_j)-K_j^{-1/2}\big)K_j^{-1/2}.$$

    Note that $p_k(K_i)$ is a $(2km+1)$-banded matrix and $K_i^{-1/2}$ has the off-diagonal exponential decay property, so $K_i^{-1/2}-p_k(K_i)$ also has the off-diagonal exponential decay property. According to Lemma 4.1, $H_{ij}^{(1)}$ then has the off-diagonal exponential decay property. Hence, analogous to the proof of Lemma 3.8 in [11], we can find constants $\hat c>0$ and $\hat r>0$ such that

    $$\Big|\big[H_{ij}^{(1)}\big]_{l,r}\Big|\le\hat c\,e^{-\hat r|l-r|}\quad\text{for}\ l,r=1,2,\dots,N. \tag{4.10}$$

    On the other hand, we denote the $j$-th column of $H_{ij}^{(1)}$ by $[h_{1,j},h_{2,j},\dots,h_{N,j}]^{\top}$, and let $\widetilde K_i\equiv K_i^{-1/2}-p_k(K_i)$. Observe that

    $$h_{l,j}=\sum_{r=1}^{N}\widetilde K_i(l,r)K_j^{-1/2}(r,j),\quad l=1,2,\dots,N.$$

    Then, based on (4.8) and the fact that $K_j^{-1/2}$ has the off-diagonal exponential decay property, we can show that there exists a constant $\hat c_1$ such that

    $$|h_{l,j}|<\hat c_1\epsilon,\quad l=1,2,\dots,N. \tag{4.11}$$

    Denote

    $$\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(1)}e_j\equiv\Big[H_{1j}^{(1)}(1,j),H_{2j}^{(1)}(2,j),\dots,H_{Nj}^{(1)}(N,j)\Big]^{\top}.$$

    From (4.10) and (4.11), we can show that there exists a constant $c_1>0$ such that

    $$\bigg\|\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(1)}e_j\bigg\|_1\le\sum_{i=1}^{N}\big|H_{ij}^{(1)}(i,j)\big|<c_1\epsilon. \tag{4.12}$$

    Analogously, we can show that there exists a constant $c_2>0$ such that

    $$\bigg\|\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(3)}e_j\bigg\|_1<c_2\epsilon. \tag{4.13}$$

    Now we turn to $H_{ij}^{(2)}$. It follows from the proof of Lemma 3.11 in [11] that $H_{ij}^{(2)}$ has the form

    $$H_{ij}^{(2)}=\sum_{\ell=1}^{k}a_\ell\bigg(\sum_{q=0}^{\ell-1}K_i^{\ell-1-q}(K_i-K_j)K_j^{q}\bigg)K_j^{-1/2}=(d_{+,i}-d_{+,j})\widetilde H_{ij}^{(2)}+(d_{-,i}-d_{-,j})\widehat H_{ij}^{(2)}, \tag{4.14}$$

    where

    $$\widetilde H_{ij}^{(2)}=\bigg(\sum_{\ell=1}^{k}\sum_{q=0}^{\ell-1}a_\ell\eta_\alpha K_i^{\ell-1-q}G_{\alpha,m}G_{\alpha,m}^{\top}K_j^{q}\bigg)K_j^{-1/2}\qquad\text{and}\qquad \widehat H_{ij}^{(2)}=\bigg(\sum_{\ell=1}^{k}\sum_{q=0}^{\ell-1}a_\ell\eta_\beta K_i^{\ell-1-q}G_{\beta,m}G_{\beta,m}^{\top}K_j^{q}\bigg)K_j^{-1/2}.$$

    It is easy to see that both $\widetilde H_{ij}^{(2)}$ and $\widehat H_{ij}^{(2)}$ have the off-diagonal exponential decay property. As

    $$\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(2)}e_j=\Big[(d_{+,1}-d_{+,j})\widetilde H_{1j}^{(2)}(1,j),\dots,(d_{+,N}-d_{+,j})\widetilde H_{Nj}^{(2)}(N,j)\Big]^{\top}+\Big[(d_{-,1}-d_{-,j})\widehat H_{1j}^{(2)}(1,j),\dots,(d_{-,N}-d_{-,j})\widehat H_{Nj}^{(2)}(N,j)\Big]^{\top},$$

    there exists an integer $N_1$ such that

    $$\bigg\|\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(2)}e_j\bigg\|_1\le\sum_{k=1}^{N}|d_{+,k}-d_{+,j}|\,\big|\widetilde H_{kj}^{(2)}(k,j)\big|+\sum_{k=1}^{N}|d_{-,k}-d_{-,j}|\,\big|\widehat H_{kj}^{(2)}(k,j)\big|<\sum_{k=j-N_1}^{j+N_1}|d_{+,k}-d_{+,j}|\,\big|\widetilde H_{kj}^{(2)}(k,j)\big|+\sum_{k=j-N_1}^{j+N_1}|d_{-,k}-d_{-,j}|\,\big|\widehat H_{kj}^{(2)}(k,j)\big|+4d_{\max}\epsilon\le\tilde c\max\big\{\Delta_{d_+,j}^{N_1},\Delta_{d_-,j}^{N_1}\big\}+4d_{\max}\epsilon, \tag{4.15}$$

    where $\Delta_{d_+,j}^{N_1}=\max_{j-N_1\le k\le j+N_1}|d_{+,k}-d_{+,j}|$ and $\Delta_{d_-,j}^{N_1}=\max_{j-N_1\le k\le j+N_1}|d_{-,k}-d_{-,j}|$.

    Let $c_3=\tilde c$ and $c_4=c_1+c_2+4d_{\max}$. It follows from (4.12), (4.13) and (4.15) that

    $$\big\|P_1^{-1}e_j-K_j^{-1}e_j\big\|_1\le\bigg\|\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(1)}e_j\bigg\|_1+\bigg\|\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(2)}e_j\bigg\|_1+\bigg\|\sum_{i=1}^{N}e_ie_i^{\top}H_{ij}^{(3)}e_j\bigg\|_1<(c_1+c_2)\epsilon+\tilde c\max\big\{\Delta_{d_+,j}^{N_1},\Delta_{d_-,j}^{N_1}\big\}+4d_{\max}\epsilon=c_3\max\big\{\Delta_{d_+,j}^{N_1},\Delta_{d_-,j}^{N_1}\big\}+c_4\epsilon.$$

    Now we consider the second term in (4.6).

    Lemma 4.3. Let $K_j$ and $\widetilde A_m$ be defined by (4.3) and (4.4), respectively. Assume that $0<d_{\min}\le d_+(x),d_-(x)\le d_{\max}$. Then for a given $\epsilon>0$, there exist an integer $N_2>0$ and constants $c_5,c_6>0$ such that

    $$\big\|\big(K_j^{-1}-\widetilde A_m^{-1}\big)e_j\big\|_1\le c_5\max_{1\le k\le N}\max\big\{\Delta_{d_+,k}^{N_2},\Delta_{d_-,k}^{N_2}\big\}+c_6\epsilon. \tag{4.16}$$

    Proof. Let

    $$\widehat A_m=M+\eta_\alpha G_{\alpha,m}G_{\alpha,m}^{\top}D_++\eta_\beta G_{\beta,m}G_{\beta,m}^{\top}D_-.$$

    Then we have

    $$\big\|\big(K_j^{-1}-\widetilde A_m^{-1}\big)e_j\big\|_1\le\big\|\big(K_j^{-1}-\widehat A_m^{-1}\big)e_j\big\|_1+\big\|\big(\widehat A_m^{-1}-\widetilde A_m^{-1}\big)e_j\big\|_1. \tag{4.17}$$

    On the one hand, observing that

    $$K_j^{-1}-\widehat A_m^{-1}=\widehat A_m^{-1}\big(\widehat A_m-K_j\big)K_j^{-1}=\eta_\alpha\widehat A_m^{-1}G_{\alpha,m}G_{\alpha,m}^{\top}\big(D_+-d_{+,j}I\big)K_j^{-1}+\eta_\beta\widehat A_m^{-1}G_{\beta,m}G_{\beta,m}^{\top}\big(D_--d_{-,j}I\big)K_j^{-1},$$

    we have

    $$\big\|\big(K_j^{-1}-\widehat A_m^{-1}\big)e_j\big\|_1\le\eta_\alpha\big\|\widehat A_m^{-1}\big\|_1\big\|G_{\alpha,m}G_{\alpha,m}^{\top}\big\|_1\big\|\big(D_+-d_{+,j}I\big)K_j^{-1}e_j\big\|_1+\eta_\beta\big\|\widehat A_m^{-1}\big\|_1\big\|G_{\beta,m}G_{\beta,m}^{\top}\big\|_1\big\|\big(D_--d_{-,j}I\big)K_j^{-1}e_j\big\|_1,$$

    where

    $$\big\|\big(D_+-d_{+,j}I\big)K_j^{-1}e_j\big\|_1=\sum_{k=1}^{N}\big|(d_{+,k}-d_{+,j})K_j^{-1}(k,j)\big|$$

    and

    $$\big\|\big(D_--d_{-,j}I\big)K_j^{-1}e_j\big\|_1=\sum_{k=1}^{N}\big|(d_{-,k}-d_{-,j})K_j^{-1}(k,j)\big|.$$

    As $K_j^{-1}$ has the off-diagonal exponential decay property, similar to the derivation of (4.15), we can show that, given an $\epsilon>0$, there exist an integer $\widetilde N_1>0$ and a constant $\tilde c_1>0$ such that

    $$\big\|\big(D_+-d_{+,j}I\big)K_j^{-1}e_j\big\|_1\le\tilde c_1\Delta_{d_+,j}^{\widetilde N_1}+2d_{\max}\epsilon,$$

    and

    $$\big\|\big(D_--d_{-,j}I\big)K_j^{-1}e_j\big\|_1\le\tilde c_1\Delta_{d_-,j}^{\widetilde N_1}+2d_{\max}\epsilon.$$

    Therefore, there exist positive constants $\tilde c_2$ and $\tilde c_3$ such that

    $$\big\|\big(K_j^{-1}-\widehat A_m^{-1}\big)e_j\big\|_1\le\tilde c_2\max\big\{\Delta_{d_+,j}^{\widetilde N_1},\Delta_{d_-,j}^{\widetilde N_1}\big\}+\tilde c_3\epsilon. \tag{4.18}$$

    On the other hand, observing that

    $$\widehat A_m^{-1}-\widetilde A_m^{-1}=\widetilde A_m^{-1}\big(\widetilde A_m-\widehat A_m\big)\widehat A_m^{-1}=\eta_\alpha\widetilde A_m^{-1}\big(G_{\alpha,m}D_+G_{\alpha,m}^{\top}-G_{\alpha,m}G_{\alpha,m}^{\top}D_+\big)\widehat A_m^{-1}+\eta_\beta\widetilde A_m^{-1}\big(G_{\beta,m}D_-G_{\beta,m}^{\top}-G_{\beta,m}G_{\beta,m}^{\top}D_-\big)\widehat A_m^{-1}=\eta_\alpha\widetilde A_m^{-1}G_{\alpha,m}\big(D_+G_{\alpha,m}^{\top}-G_{\alpha,m}^{\top}D_+\big)\widehat A_m^{-1}+\eta_\beta\widetilde A_m^{-1}G_{\beta,m}\big(D_-G_{\beta,m}^{\top}-G_{\beta,m}^{\top}D_-\big)\widehat A_m^{-1},$$

    we have

    $$\big\|\big(\widehat A_m^{-1}-\widetilde A_m^{-1}\big)e_j\big\|_1\le\big\|\widehat A_m^{-1}-\widetilde A_m^{-1}\big\|_1\le\big\|\widetilde A_m^{-1}\big\|_1\big\|\widehat A_m^{-1}\big\|_1\Big(\eta_\alpha\big\|G_{\alpha,m}\big\|_1\big\|D_+G_{\alpha,m}^{\top}-G_{\alpha,m}^{\top}D_+\big\|_1+\eta_\beta\big\|G_{\beta,m}\big\|_1\big\|D_-G_{\beta,m}^{\top}-G_{\beta,m}^{\top}D_-\big\|_1\Big).$$

    Since $G_{\alpha,m}$ and $G_{\beta,m}$ have the off-diagonal decay property, as shown in the proof of Lemma 4.7 in [15], given an $\epsilon>0$, there exist an integer $\widetilde N_2>0$ and a constant $\tilde c_4>0$ such that

    $$\big\|D_+G_{\alpha,m}^{\top}-G_{\alpha,m}^{\top}D_+\big\|_1\le\tilde c_4\max_{1\le k\le N}\Delta_{d_+,k}^{\widetilde N_2}+2d_{\max}\epsilon$$

    and

    $$\big\|D_-G_{\beta,m}^{\top}-G_{\beta,m}^{\top}D_-\big\|_1\le\tilde c_4\max_{1\le k\le N}\Delta_{d_-,k}^{\widetilde N_2}+2d_{\max}\epsilon,$$

    where $\Delta_{d_+,k}^{\widetilde N_2}=\max_{k-\widetilde N_2\le l\le k+\widetilde N_2}|d_{+,l}-d_{+,k}|$ and $\Delta_{d_-,k}^{\widetilde N_2}=\max_{k-\widetilde N_2\le l\le k+\widetilde N_2}|d_{-,l}-d_{-,k}|$. Therefore, there exist positive constants $\tilde c_5$ and $\tilde c_6$ such that

    $$\big\|\big(\widehat A_m^{-1}-\widetilde A_m^{-1}\big)e_j\big\|_1\le\tilde c_5\max_{1\le k\le N}\max\big\{\Delta_{d_+,k}^{\widetilde N_2},\Delta_{d_-,k}^{\widetilde N_2}\big\}+\tilde c_6\epsilon. \tag{4.19}$$

    Now, let

    $$N_2=\max\{\widetilde N_1,\widetilde N_2\},\qquad c_5=\tilde c_2+\tilde c_5\qquad\text{and}\qquad c_6=\tilde c_3+\tilde c_6.$$

    It follows from (4.18) and (4.19) that, given an $\epsilon>0$, there exist an integer $N_2>0$ and constants $c_5,c_6>0$ such that

    $$\big\|\big(K_j^{-1}-\widetilde A_m^{-1}\big)e_j\big\|_1\le c_5\max_{1\le k\le N}\max\big\{\Delta_{d_+,k}^{N_2},\Delta_{d_-,k}^{N_2}\big\}+c_6\epsilon.$$

    The proof is completed.

    Combining (4.5), Lemma 4.2 and Lemma 4.3, we obtain the following theorem.

    Theorem 4.1. Assume $0<d_{\min}\le d_+(x),d_-(x)\le d_{\max}$. Then, given an $\epsilon>0$, there exist an integer $N_3>0$ and constants $c_7,c_8>0$ such that

    $$\big\|P_1^{-1}-\widetilde A^{-1}\big\|_2\le c_7\max_{1\le k\le N}\max\big\{\Delta_{d_+,k}^{N_3},\Delta_{d_-,k}^{N_3}\big\}+c_8\epsilon. \tag{4.20}$$

    Remark 1. If $d_+(x),d_-(x)\in C[x_L,x_R]$, then $\max_{1\le k\le N}\max\{\Delta_{d_+,k}^{N_3},\Delta_{d_-,k}^{N_3}\}$ can be very small for sufficiently large $N$, which implies that $P_1^{-1}$ will then be a good approximation to $\widetilde A^{-1}$.

    Now we consider the difference between $P_2^{-1}$ and $P_1^{-1}$. As shown in Section 3.3 of [11], for a given $\epsilon>0$, there exists a polynomial $p_k(t)=\sum_{\ell=0}^{k}a_\ell t^{\ell}$ of degree $k$ such that

    $$\big\|K_i^{-\frac12}-p_k(K_i)\big\|_2<\epsilon\qquad\text{and}\qquad\big\|C_i^{-\frac12}-p_k(C_i)\big\|_2<\epsilon,\qquad\text{for}\ i=1,2,\dots,N. \tag{4.21}$$

    Define

    $$\widetilde P_1=\bigg(\sum_{i=1}^{N}e_ie_i^{\top}p_k(K_i)\bigg)\bigg(\sum_{i=1}^{N}p_k(K_i)e_ie_i^{\top}\bigg) \tag{4.22}$$

    and

    $$\widetilde P_2=\bigg(\sum_{i=1}^{N}e_ie_i^{\top}p_k(C_i)\bigg)\bigg(\sum_{i=1}^{N}p_k(C_i)e_ie_i^{\top}\bigg). \tag{4.23}$$

    Then we have

    $$P_2^{-1}-P_1^{-1}=\big(P_2^{-1}-\widetilde P_2\big)+\big(\widetilde P_2-\widetilde P_1\big)+\big(\widetilde P_1-P_1^{-1}\big). \tag{4.24}$$

    Lemma 4.4. Let $\widetilde P_1$ and $\widetilde P_2$ be defined by (4.22) and (4.23), respectively. Then we have

    $$\mathrm{rank}\big(\widetilde P_2-\widetilde P_1\big)\le 8km.$$

    Proof. Observe that

    $$K_i-C_i=M-C_M+\eta_\alpha d_{+,i}\big(G_\alpha G_\alpha^{\top}-C_\alpha C_\alpha^{\top}\big)+\eta_\beta d_{-,i}\big(G_\beta G_\beta^{\top}-C_\beta C_\beta^{\top}\big). \tag{4.25}$$

    It is easy to see that

    $$M-C_M=\frac18\left(\begin{bmatrix}6&1&&\\ 1&6&1&\\ &\ddots&\ddots&\ddots\\ &&1&6\end{bmatrix}-\begin{bmatrix}6&1&&1\\ 1&6&1&\\ &\ddots&\ddots&\ddots\\ 1&&1&6\end{bmatrix}\right)=-\frac18\begin{bmatrix}&&&1\\ &&&\\ &&&\\ 1&&&\end{bmatrix}. \tag{4.26}$$

    For the matrix $G_\alpha$ and its Strang circulant approximation $C_\alpha$, we have the block forms

    $$G_\alpha=\begin{bmatrix}K_0&&&\\ K_1&K_0&&\\ &\ddots&\ddots&\\ &&K_1&K_0\end{bmatrix}\qquad\text{and}\qquad C_\alpha=\begin{bmatrix}K_0&&&K_1\\ K_1&K_0&&\\ &\ddots&\ddots&\\ &&K_1&K_0\end{bmatrix},$$

    where $K_0,K_1\in\mathbb R^{m\times m}$. Therefore, we have

    $$C_\alpha C_\alpha^{\top}=\Bigg(G_\alpha+\begin{bmatrix}&&K_1\\ &&\\ &&\end{bmatrix}\Bigg)\Bigg(G_\alpha+\begin{bmatrix}&&K_1\\ &&\\ &&\end{bmatrix}\Bigg)^{\top}=G_\alpha G_\alpha^{\top}+\begin{bmatrix}&&\\ &&\\ K_0K_1^{\top}&&\end{bmatrix}+\begin{bmatrix}&&K_1K_0^{\top}\\ &&\\ &&\end{bmatrix}+\begin{bmatrix}K_1K_1^{\top}&&\\ &&\\ &&\end{bmatrix}. \tag{4.27}$$

    Similarly, we can show that

    $$C_\beta C_\beta^{\top}=G_\beta G_\beta^{\top}+\begin{bmatrix}&&\widetilde K_0\widetilde K_1^{\top}\\ &&\\ &&\end{bmatrix}+\begin{bmatrix}&&\\ &&\\ \widetilde K_1\widetilde K_0^{\top}&&\end{bmatrix}+\begin{bmatrix}\widetilde K_1\widetilde K_1^{\top}&&\\ &&\\ &&\end{bmatrix}. \tag{4.28}$$

    Therefore, we can see from (4.25), (4.26), (4.27) and (4.28) that $K_i-C_i$ is of the form

    $$K_i-C_i=\begin{bmatrix}+&&&+\\ &&&\\ &&&\\ +&&&+\end{bmatrix},$$

    where each "+" denotes an $m$-by-$m$ block matrix.

    Analogous to the proof of Lemma 3.11 in [11], it follows from

    $$p_k(C_i)-p_k(K_i)=\sum_{j=0}^{k}a_j\big(C_i^{j}-K_i^{j}\big)=\sum_{j=0}^{k}\sum_{\ell=0}^{j-1}a_jC_i^{\ell}\big(C_i-K_i\big)K_i^{j-1-\ell}$$

    and

    $$\widetilde P_2-\widetilde P_1=\bigg(\sum_{i=1}^{N}e_ie_i^{\top}p_k(C_i)\bigg)\bigg(\sum_{i=1}^{N}\big(p_k(C_i)-p_k(K_i)\big)e_ie_i^{\top}\bigg)+\bigg(\sum_{i=1}^{N}e_ie_i^{\top}\big(p_k(C_i)-p_k(K_i)\big)\bigg)\bigg(\sum_{i=1}^{N}p_k(K_i)e_ie_i^{\top}\bigg)$$

    that $\mathrm{rank}\big(\widetilde P_2-\widetilde P_1\big)\le 8km$.

    On the other hand, for $P_1^{-1}-\widetilde P_1$ and $P_2^{-1}-\widetilde P_2$, we can directly apply the proof of Lemma 3.12 in [11] to obtain the following lemma.

    Lemma 4.5. Let $\widetilde P_1$ and $\widetilde P_2$ be defined by (4.22) and (4.23), respectively. Then, given an $\epsilon>0$, there exist constants $\tilde c_7>0$ and $\tilde c_8>0$ such that

    $$\big\|P_1^{-1}-\widetilde P_1\big\|_2<\tilde c_7\epsilon\qquad\text{and}\qquad\big\|P_2^{-1}-\widetilde P_2\big\|_2<\tilde c_8\epsilon.$$

    By Lemma 4.4 and Lemma 4.5, we obtain the following theorem.

    Theorem 4.2. Let $P_1^{-1}$ and $P_2^{-1}$ be defined by (3.9) and (3.10), respectively. Then, given an $\epsilon>0$, there exist constants $\tilde c_9>0$ and $k>0$ such that

    $$P_2^{-1}-P_1^{-1}=E+\mathcal M,$$

    where $E=\big(P_2^{-1}-\widetilde P_2\big)+\big(\widetilde P_1-P_1^{-1}\big)$ is a small norm matrix with $\|E\|_2<\tilde c_9\epsilon$, and $\mathcal M=\widetilde P_2-\widetilde P_1$ is a low-rank matrix with $\mathrm{rank}(\mathcal M)\le 8km$.

    We can directly apply the analysis of Section 3.4 in [11] to reach the following conclusion.

    Lemma 4.6. Let $P_2^{-1}$ and $P_3^{-1}$ be defined by (3.10) and (3.11), respectively. Then, for a given $\epsilon>0$, there exist an integer $\ell_0>0$ and a constant $c_9>0$ such that

    $$\big\|P_3^{-1}-P_2^{-1}\big\|_2<c_9\epsilon$$

    holds when the number of interpolation points $\ell$ in $P_3^{-1}$ satisfies $\ell\ge\ell_0$.

    Summarizing our analysis, we have the following theorem.

    Theorem 4.3. Let $P_3^{-1}$ and $A$ be defined by (3.11) and (3.2), respectively. Then we have

    $$P_3^{-1}A=I+E+S,$$

    where $E$ is a low-rank matrix and $S$ is a small norm matrix.

    Proof. Observe that

    $$P_3^{-1}A=\big(P_3^{-1}-\widetilde A^{-1}+\widetilde A^{-1}\big)A=\big(P_3^{-1}-\widetilde A^{-1}\big)A+\widetilde A^{-1}\big(\widetilde A+A-\widetilde A\big)=I+\big(P_3^{-1}-\widetilde A^{-1}\big)A+\widetilde A^{-1}\big(A-\widetilde A\big). \tag{4.29}$$

    By (3.2), (3.7), Theorems 4.1, 4.2 and Lemma 4.6, we can show that

    $$A-\widetilde A=E_1+S_1\qquad\text{and}\qquad P_3^{-1}-\widetilde A^{-1}=E_2+S_2,$$

    where $E_1$ and $E_2$ are two low-rank matrices, and $S_1$ and $S_2$ are two small norm matrices. Therefore, it is easy to see from (4.29) that

    $$P_3^{-1}A=I+E+S,$$

    where $E$ is a low-rank matrix and $S$ is a small norm matrix.

    In this section, we carry out numerical experiments to show the performance of the proposed preconditioners $P_3$ and $P_4$. We use the preconditioned conjugate gradient (PCG) method to solve the symmetric positive definite linear systems (2.6). In all experiments, we set the stopping criterion as

    $$\frac{\|r_k\|_2}{\|b\|_2}<10^{-8},$$

    where $r_k$ is the residual vector after $k$ iterations and $b$ is the right-hand side.
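For concreteness, a PCG iteration with this stopping rule can be sketched as follows. This is our own illustrative implementation, run on a generic SPD model matrix with a Jacobi preconditioner standing in for $P_3$/$P_4$; it is not the authors' code:

```python
import numpy as np

def pcg(A, b, apply_prec, tol=1e-8, maxiter=1000):
    """Preconditioned CG with stopping rule ||r_k||_2 / ||b||_2 < tol.
    apply_prec(r) must return an approximation of P^{-1} r."""
    x = np.zeros_like(b)
    r = b.copy()
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    bnorm = np.linalg.norm(b)
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) / bnorm < tol:
            return x, k
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# Generic SPD model matrix with a Jacobi preconditioner as a stand-in.
N = 400
A = 4 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
b = np.ones(N)
dinv = 1.0 / np.diag(A)
x, iters = pcg(A, b, lambda r: dinv * r)
assert np.linalg.norm(b - A @ x) / np.linalg.norm(b) < 1e-8
```

In the experiments below, `apply_prec` would be replaced by the FFT-based application of $P_3^{-1}$ or $P_4^{-1}$.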

    For the sake of comparison, we also test two other types of preconditioners, which were proposed in [7]. One is the Strang circulant preconditioner $C_s$ defined as

    $$C_s=s(M)+\eta_\alpha\tilde d_+s(G_\alpha)s(G_\alpha)^{\top}+\eta_\beta\tilde d_-s(G_\beta)s(G_\beta)^{\top},$$

    where $s(M)$, $s(G_\alpha)$ and $s(G_\beta)$ are the Strang circulant approximations of the Toeplitz matrices $M$, $G_\alpha$ and $G_\beta$, and $\tilde d_+$ and $\tilde d_-$ are the mean values of the diagonals of $\widetilde D_+$ and $\widetilde D_-$, respectively. The other is the banded preconditioner $B(\ell)$ defined as

    $$B(\ell)=M+\eta_\alpha\widetilde B_\alpha\widetilde D_+^{(j)}\widetilde B_\alpha^{\top}+\eta_\beta\widetilde B_\beta\widetilde D_-^{(j)}\widetilde B_\beta^{\top},$$

    where $\widetilde B_\alpha$ and $\widetilde B_\beta$ are banded approximations of $\widetilde G_\alpha$ and $\widetilde G_\beta$, which are of the form

    $$\widetilde B_\alpha=\begin{bmatrix}g_1^{(\alpha)}&g_0^{(\alpha)}&&&\\ \vdots&g_1^{(\alpha)}&g_0^{(\alpha)}&&\\ \sum_{k=\ell}^{N}g_k^{(\alpha)}&\vdots&\ddots&\ddots&\\ &\ddots&\ddots&\ddots&\\ &&\sum_{k=\ell}^{N}g_k^{(\alpha)}&\cdots&g_1^{(\alpha)}\ \ g_0^{(\alpha)}\end{bmatrix}\qquad\text{and}\qquad \widetilde B_\beta=\begin{bmatrix}g_0^{(\beta)}&g_1^{(\beta)}&\cdots&\sum_{k=\ell}^{N}g_k^{(\beta)}&&\\ &g_0^{(\beta)}&g_1^{(\beta)}&\ddots&\ddots&\\ &&\ddots&\ddots&\ddots&\sum_{k=\ell}^{N}g_k^{(\beta)}\\ &&&\ddots&\ddots&\vdots\\ &&&&g_0^{(\beta)}&g_1^{(\beta)}\end{bmatrix},$$

    respectively.

    Example 1. We consider the balanced fractional diffusion equation (1.1) with $(x_L,x_R)=(0,1)$ and $T=1$. The diffusion coefficients are given by

    $$d_+(x,t)=1000(x+1)^{4+\alpha}+t^2,\qquad d_-(x,t)=1000(x+1)^{4+\beta}+t^2,$$

    and $f(x,t)=1000$.

    In the numerical tests, we set the number of temporal grids as $M_t=N/2$. The results for different values of $\alpha$ and $\beta$ are reported in Table 1. We do not report the numerical results of the unpreconditioned CG method, as it converges very slowly.

    Table 1.  Numerical results for Example 1.
            Cs              B(4)            B(6)            P3(3)           P3(4)           P4(3)           P4(4)
    N       Iter   CPU      Iter   CPU      Iter   CPU      Iter   CPU      Iter   CPU      Iter   CPU      Iter   CPU
    α=0.6, β=0.6
    2^11    61.44  14.90    35.00  10.26    31.00  9.60     14.00  5.21     12.00  5.01     13.00  5.24     11.00  5.06
    2^12    62.98  58.68    42.81  47.63    37.00  42.57    14.00  18.80    13.00  18.36    13.00  18.73    12.00  17.76
    2^13    64.00  231.27   50.01  210.70   44.00  187.71   15.00  76.60    13.00  69.07    13.00  71.41    12.00  69.55
    2^14    65.00  1060.76  58.01  1068.46  52.00  989.73   15.00  334.53   14.00  319.51   14.00  337.00   13.00  325.66
    α=0.6, β=0.7
    2^11    65.06  16.61    31.00  9.63     27.41  8.97     14.00  5.43     13.00  5.54     13.00  5.36     11.00  5.15
    2^12    66.72  61.84    37.00  40.06    32.88  39.05    15.85  21.13    13.00  18.23    13.00  18.85    12.00  17.94
    2^13    68.00  248.34   43.00  182.87   38.00  170.32   16.01  83.84    14.54  79.90    14.34  79.94    13.00  77.52
    2^14    69.01  1171.84  49.00  910.07   43.95  862.68   18.00  409.78   15.95  374.91   14.99  366.86   13.99  357.92
    α=0.7, β=0.7
    2^11    67.55  17.41    28.29  8.97     25.01  8.41     14.96  5.85     13.52  5.82     13.00  5.38     12.00  5.64
    2^12    69.01  65.25    34.00  38.90    30.11  37.09    16.00  22.02    13.86  20.20    14.00  20.61    12.00  18.76
    2^13    70.09  264.48   40.00  176.53   35.39  164.35   17.98  95.51    15.95  87.88    14.00  77.99    13.00  77.78
    2^14    71.31  1197.63  46.00  857.65   41.00  797.35   18.00  415.70   17.00  408.09   15.00  383.49   13.00  349.08
    α=0.7, β=0.8
    2^11    72.00  17.96    23.97  7.53     21.06  7.00     16.00  6.02     14.00  5.95     13.04  5.41     12.00  5.60
    2^12    73.68  68.46    28.00  32.62    25.00  30.81    18.73  25.53    16.00  22.81    15.00  22.06    12.99  20.87
    2^13    75.16  282.66   32.00  141.63   28.98  140.93   18.96  101.00   17.47  96.63    15.00  84.31    14.00  82.02
    2^14    76.76  1262.66  35.99  683.73   32.98  653.98   20.00  458.79   17.79  430.50   16.00  397.79   14.00  370.01
    α=0.8, β=0.8
    2^11    74.69  18.91    21.00  7.04     19.00  6.65     17.94  6.82     14.94  6.34     14.00  5.76     12.00  5.60
    2^12    76.09  70.32    24.00  28.41    22.00  27.70    19.00  26.66    16.65  24.02    14.00  20.89    12.00  18.97
    2^13    77.57  290.59   28.00  128.10   25.00  120.11   20.00  105.85   18.00  100.20   15.00  83.79    13.00  78.07
    2^14    79.37  1291.96  31.00  592.09   28.99  587.79   20.24  464.72   18.00  432.03   15.99  405.75   14.00  370.07
    α=0.8, β=0.9
    2^11    79.85  20.58    16.00  5.67     15.00  5.66     19.19  7.45     17.00  7.20     14.96  6.18     13.00  6.03
    2^12    81.45  77.06    18.00  22.74    17.00  22.56    19.71  26.73    17.00  24.08    15.00  21.87    13.00  19.83
    2^13    84.14  320.03   20.00  95.92    18.99  100.71   22.00  115.14   19.12  104.20   16.00  90.63    14.00  81.37
    2^14    85.47  1413.42  21.98  449.53   19.99  428.83   23.22  529.87   19.93  478.16   16.97  424.17   14.00  374.99
    α=0.9, β=0.9
    2^11    82.93  20.64    14.00  4.95     13.00  4.95     20.00  7.51     17.00  7.22     14.00  5.62     12.00  5.57
    2^12    84.41  77.71    15.00  19.57    14.00  19.73    22.14  29.98    18.83  26.68    15.00  22.15    13.00  20.20
    2^13    85.77  324.09   17.00  83.82    16.00  84.84    22.52  118.15   20.00  111.70   15.99  90.72    13.99  84.47
    2^14    86.84  1422.57  18.00  384.27   17.00  385.21   23.96  541.14   20.00  494.21   16.00  409.06   13.99  373.06


In the table, we use "P3($\ell$)" and "P4($\ell$)" to denote our proposed preconditioners with $\ell$ being the number of interpolation points, "Iter" to denote the average number of iteration steps required to solve the discrete fractional diffusion equation (2.6) at each time step, and "CPU" to denote the total CPU time in seconds for solving the whole discretized system.

From Table 1, we can see that the banded preconditioner B($\ell$) and our proposed preconditioners P3($\ell$) and P4($\ell$) perform much better than the circulant preconditioner Cs in terms of both iteration number and elapsed time. The banded preconditioner B($\ell$) has better performance when $\alpha=\beta=0.9$; in this case, the off-diagonals of the coefficient matrix decay to zero very quickly. For the other cases, P3($\ell$) and P4($\ell$) perform much better.

We also see that the performance of P3($\ell$) and P4($\ell$) is very robust in the sense that the average iteration numbers are almost the same for different values of $\alpha$ and $\beta$, as well as for different mesh sizes.

Example 2. Consider the two dimensional balanced fractional diffusion equation

$$\begin{cases}
\dfrac{\partial u(x,y,t)}{\partial t}
+{}^{RL}_{\,x_L}D^{\alpha_1}_{x}\!\left(d_{+}(x,y,t)\,{}^{C}_{\,x}D^{\alpha_1}_{x_R}u(x,y,t)\right)
+{}^{RL}_{\,x}D^{\beta_1}_{x_R}\!\left(d_{-}(x,y,t)\,{}^{C}_{\,x_L}D^{\beta_1}_{x}u(x,y,t)\right)\\
\qquad+{}^{RL}_{\,y_L}D^{\alpha_2}_{y}\!\left(e_{+}(x,y,t)\,{}^{C}_{\,y}D^{\alpha_2}_{y_R}u(x,y,t)\right)
+{}^{RL}_{\,y}D^{\beta_2}_{y_R}\!\left(e_{-}(x,y,t)\,{}^{C}_{\,y_L}D^{\beta_2}_{y}u(x,y,t)\right)=f(x,y,t),\\
\qquad\qquad (x,y)\in\Omega=(x_L,x_R)\times(y_L,y_R),\ t\in(0,T],\\
u(x,y,t)=0,\quad (x,y)\in\partial\Omega,\ t\in[0,T],\\
u(x,y,0)=u_0(x,y),\quad (x,y)\in[x_L,x_R]\times[y_L,y_R],
\end{cases}$$

where $\frac{1}{2}<\alpha_1,\beta_1,\alpha_2,\beta_2<1$, and $d_{\pm}(x,y,t)>0$, $e_{\pm}(x,y,t)>0$ for $(x,y,t)\in\Omega\times(0,T]$.

    As was shown in [7], the finite volume discretization leads to

$$A^{(j)}u^{(j)}=(M_x\otimes M_y)\,u^{(j-1)}+f^{(j)},\qquad j=1,2,\ldots,M_t,$$

    where

$$\begin{aligned}
A^{(j)}=M_x\otimes M_y
&+\eta_{\alpha_1}(G_{\alpha_1}\otimes I_y)^{\top}D^{(j)}_{+}(G_{\alpha_1}\otimes I_y)
+\eta_{\beta_1}(G_{\beta_1}\otimes I_y)^{\top}D^{(j)}_{-}(G_{\beta_1}\otimes I_y)\\
&+\eta_{\alpha_2}(I_x\otimes G_{\alpha_2})^{\top}E^{(j)}_{+}(I_x\otimes G_{\alpha_2})
+\eta_{\beta_2}(I_x\otimes G_{\beta_2})^{\top}E^{(j)}_{-}(I_x\otimes G_{\beta_2}).
\end{aligned}$$

Here $M_x,M_y$ are tridiagonal matrices and $D^{(j)}_{\pm}$, $E^{(j)}_{\pm}$ are block diagonal matrices

$$D^{(j)}_{\pm}=\mathrm{diag}\Bigl(\bigl\{\{d_{\pm}(x_{i+\frac12},y_k,t_j)\}_{k=1}^{N_y}\bigr\}_{i=0}^{N_x}\Bigr),\qquad
E^{(j)}_{\pm}=\mathrm{diag}\Bigl(\bigl\{\{e_{\pm}(x_i,y_{k+\frac12},t_j)\}_{k=0}^{N_y}\bigr\}_{i=1}^{N_x}\Bigr),$$

    where Nx and Ny denote the number of grid points in the x-direction and y-direction, respectively.
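The Kronecker structure of $A^{(j)}$ is easy to mirror in code. The following NumPy sketch assembles one $x$-direction term of a small stand-in system ($G$, $M_x$, $M_y$ and the diffusion samples in $D$ are random placeholders) and checks the symmetric positive definiteness of the resulting coefficient matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
Nx, Ny = 6, 5
n = Nx * Ny

def tridiag(N):
    # SPD tridiagonal stand-in for the mass matrices Mx, My
    return np.diag(np.full(N, 2.0)) + np.diag(np.full(N - 1, -0.5), 1) \
         + np.diag(np.full(N - 1, -0.5), -1)

Mx, My = tridiag(Nx), tridiag(Ny)
G = np.tril(rng.standard_normal((Nx, Nx)))    # Toeplitz-like stand-in
D = np.diag(rng.uniform(1.0, 2.0, size=n))    # positive diffusion samples
eta = 0.1

# One x-direction term: (G ⊗ I)^T D (G ⊗ I) is symmetric positive
# semidefinite whenever D has a positive diagonal, so A stays SPD.
K = np.kron(G, np.eye(Ny))
A = np.kron(Mx, My) + eta * K.T @ D @ K

assert np.allclose(A, A.T)                    # symmetric
assert np.all(np.linalg.eigvalsh(A) > 0)      # positive definite
```

In practice the Kronecker factors are never formed densely; matrix-vector products with $G\otimes I_y$ reduce to one-dimensional Toeplitz products along each grid line.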

Similar to the construction of $P_3$ in (3.11), we can define an approximate inverse preconditioner for the two dimensional problems. For notational convenience, we omit the superscript in $A^{(j)}$. To construct the preconditioner, we define the following circulant matrices

$$\tilde C_{i,j}=C_{M_x}\otimes C_{M_y}
+\eta_{\alpha_1}d_{+}(x_i,y_j)\,(C_{\alpha_1}^{*}C_{\alpha_1})\otimes I_y
+\eta_{\beta_1}d_{-}(x_i,y_j)\,(C_{\beta_1}^{*}C_{\beta_1})\otimes I_y
+\eta_{\alpha_2}e_{+}(x_i,y_j)\,I_x\otimes(C_{\alpha_2}^{*}C_{\alpha_2})
+\eta_{\beta_2}e_{-}(x_i,y_j)\,I_x\otimes(C_{\beta_2}^{*}C_{\beta_2}).$$
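Each $\tilde C_{i,j}$ is diagonalized by the discrete Fourier transform, so its eigenvalues come from FFTs of first columns rather than from any dense matrix. A small NumPy check of this standard circulant fact (the test vector `c` is arbitrary):

```python
import numpy as np

def circulant(c):
    """Dense circulant matrix with first column c (for verification only)."""
    c = np.asarray(c)
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

c = np.array([4.0, -1.0, 0.5, -1.0])
C = circulant(c)

# Eigenvalues of C are the DFT of its first column: C = F* diag(fft(c)) F,
# where F is the unitary DFT matrix.
lam = np.fft.fft(c)
F = np.fft.fft(np.eye(len(c))) / np.sqrt(len(c))
assert np.allclose(F.conj().T @ np.diag(lam) @ F, C)
```

The same identity holds for the two-level circulants above with the two-dimensional DFT, which is what makes the interpolation-based preconditioner cheap to apply.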

Then we choose the interpolation points $\{(\tilde x_i,\tilde y_j)\}$, $i=1,2,\ldots,\ell_x$, $j=1,2,\ldots,\ell_y$, and we approximate $\tilde C^{-1/2}_{i,j}$ by

$$\tilde C^{-1/2}_{i,j}\approx F^{*}\Bigl(\sum_{k=1}^{\ell_x}\sum_{l=1}^{\ell_y}\phi_{k,l}(x_i,y_j)\,\tilde\Lambda^{-1/2}_{k,l}\Bigr)F,$$

where $F$ refers to the two dimensional discrete Fourier transform matrix of size $N_xN_y$, and $\tilde\Lambda_{k,l}$ is the diagonal matrix whose diagonal entries are the eigenvalues of $\tilde C_{k,l}$ for $1\le k\le \ell_x$ and $1\le l\le \ell_y$. Then we obtain the resulting preconditioner

$$\begin{aligned}
P_3^{-1}&=\Bigl(\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}(e_i\otimes e_j)(e_i\otimes e_j)^{\top}F^{*}\sum_{k=1}^{\ell_x}\sum_{l=1}^{\ell_y}\phi_{k,l}(x_i,y_j)\tilde\Lambda^{-1/2}_{k,l}F\Bigr)
\Bigl(\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}F^{*}\sum_{k=1}^{\ell_x}\sum_{l=1}^{\ell_y}\phi_{k,l}(x_i,y_j)\tilde\Lambda^{-1/2}_{k,l}F(e_i\otimes e_j)(e_i\otimes e_j)^{\top}\Bigr)\\
&=\Bigl(\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}\sum_{k=1}^{\ell_x}\sum_{l=1}^{\ell_y}(e_i\otimes e_j)(e_i\otimes e_j)^{\top}\phi_{k,l}(x_i,y_j)F^{*}\tilde\Lambda^{-1/2}_{k,l}\Bigr)
\Bigl(\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}\sum_{k=1}^{\ell_x}\sum_{l=1}^{\ell_y}\tilde\Lambda^{-1/2}_{k,l}F(e_i\otimes e_j)(e_i\otimes e_j)^{\top}\phi_{k,l}(x_i,y_j)\Bigr)\\
&=\Bigl(\sum_{k=1}^{\ell_x}\sum_{l=1}^{\ell_y}\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}(e_i\otimes e_j)(e_i\otimes e_j)^{\top}\phi_{k,l}(x_i,y_j)F^{*}\tilde\Lambda^{-1/2}_{k,l}\Bigr)
\Bigl(\sum_{k=1}^{\ell_x}\sum_{l=1}^{\ell_y}\tilde\Lambda^{-1/2}_{k,l}F\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}(e_i\otimes e_j)(e_i\otimes e_j)^{\top}\phi_{k,l}(x_i,y_j)\Bigr)\\
&=\Bigl(\sum_{k=1}^{\ell_x}\sum_{l=1}^{\ell_y}\Phi_{k,l}F^{*}\tilde\Lambda^{-1/2}_{k,l}\Bigr)
\Bigl(\sum_{k=1}^{\ell_x}\sum_{l=1}^{\ell_y}\tilde\Lambda^{-1/2}_{k,l}F\,\Phi_{k,l}\Bigr),
\end{aligned}$$

where $\Phi_{k,l}=\mathrm{diag}\bigl(\phi_{k,l}(x_1,y_1),\phi_{k,l}(x_1,y_2),\ldots,\phi_{k,l}(x_{N_x},y_{N_y})\bigr)$ for $1\le k\le\ell_x$ and $1\le l\le\ell_y$.
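In practice $P_3^{-1}$ is applied matrix-free: each factor costs a handful of diagonal scalings plus two-dimensional FFTs. The sketch below is a hedged NumPy rendering of this application under an assumed row-major vec ordering; the lists `Phi` and `Lam` of diagonal entries are placeholder data structures, not the paper's implementation:

```python
import numpy as np

def apply_P3_inv(v, Phi, Lam, shape):
    """Matrix-free action of
        P3^{-1} = (sum_kl Phi_kl F* Lam_kl^{-1/2})(sum_kl Lam_kl^{-1/2} F Phi_kl)
    via unitary 2D FFTs (row-major vec ordering assumed).  Phi and Lam are
    lists of flat length-Nx*Ny arrays holding the diagonals of Phi_{k,l}
    and Lambda_{k,l}; this is a sketch, not the paper's code."""
    Nx, Ny = shape
    # right factor: w = sum_{k,l} Lam_kl^{-1/2} F (Phi_kl v)
    w = np.zeros(Nx * Ny, dtype=complex)
    for phi, lam in zip(Phi, Lam):
        t = np.fft.fft2((phi * v).reshape(Nx, Ny), norm="ortho").ravel()
        w += t / np.sqrt(lam)
    # left factor: u = sum_{k,l} Phi_kl F^* (Lam_kl^{-1/2} w)
    u = np.zeros(Nx * Ny, dtype=complex)
    for phi, lam in zip(Phi, Lam):
        s = np.fft.ifft2((w / np.sqrt(lam)).reshape(Nx, Ny),
                         norm="ortho").ravel()
        u += phi * s
    return u.real  # real for symmetric real preconditioners
```

With a single interpolation point and $\phi\equiv 1$, the routine reduces to applying $\tilde C^{-1}$ by FFT, which gives an easy correctness check.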

In the tests, we set $\Omega=(0,1)\times(0,1)$, $T=1$,

$$\begin{aligned}
d_{+}(x,y,t)&=e^{t}\bigl(1000x^{4+\alpha_1}+1000y^{4+\alpha_1}+1\bigr), &\quad
d_{-}(x,y,t)&=e^{t}\bigl(1000x^{4+\beta_1}+1000y^{4+\beta_1}+1\bigr),\\
e_{+}(x,y,t)&=e^{t}\bigl(1000x^{4+\alpha_2}+1000y^{4+\alpha_2}+1\bigr), &\quad
e_{-}(x,y,t)&=e^{t}\bigl(1000x^{4+\beta_2}+1000y^{4+\beta_2}+1\bigr),
\end{aligned}$$

and $f(x,y,t)=1000$. We also set $N_x=N_y=N$, $M_t=N/2$ and $\ell_x=\ell_y=\ell$. The maximum iteration number is set to 500. As it is expensive to compute the second term of the Sherman–Morrison–Woodbury formula (3.5) for the two dimensional problem, we only test the preconditioner P3($\ell$).

The results are reported in Table 2. We can see that, for the two dimensional problem, although our proposed preconditioners P3(3) and P3(4) take more iteration steps to converge than the banded preconditioners B(3) and B(4), our new preconditioners are much cheaper to apply than the banded and circulant preconditioners in terms of CPU time.

Table 2. Numerical results for Example 2.

          Cs               B(3)             B(4)              P3(3)            P3(4)
  N     Iter    CPU     Iter   CPU       Iter   CPU        Iter    CPU      Iter    CPU
  α1=α2=0.6, β1=β2=0.6
  2^7   497.05  242.31  20.95  39.17     18.00  45.78      28.81   18.30    28.73   21.17
  2^8   >500    -       25.95  588.07    22.98  718.04     33.02   112.40   31.97   126.29
  2^9   >500    -       31.11  9887.04   27.98  12435.90   38.02   868.40   36.80   1023.78
  α1=α2=0.7, β1=β2=0.7
  2^7   >500    -       17.02  33.78     15.95  43.17      36.30   23.01    35.31   25.93
  2^8   >500    -       21.00  511.49    19.00  637.12     43.07   144.57   41.18   163.50
  2^9   >500    -       25.84  8799.13   23.42  11217.25   52.83   1233.09  50.97   1455.41
  α1=α2=0.8, β1=β2=0.8
  2^7   >500    -       13.98  29.96     12.73  38.05      46.31   29.15    45.38   33.82
  2^8   >500    -       16.00  437.62    15.00  568.48     58.30   198.03   56.60   223.90
  2^9   >500    -       18.99  7353.96   17.72  9675.65    76.26   1767.78  72.09   2053.69
  α1=α2=0.9, β1=β2=0.9
  2^7   >500    -       9.98   25.37     9.00   32.09      62.08   39.55    58.61   42.91
  2^8   >500    -       11.00  360.76    10.00  470.30     83.97   284.48   78.90   309.27
  2^9   >500    -       12.00  5933.38   11.99  8162.24    114.65  2631.44  109.24  3042.00


In this paper, we consider preconditioners for the linear systems resulting from the discretization of balanced fractional diffusion equations. The coefficient matrices are symmetric positive definite and can be written as the sum of a tridiagonal matrix and two Toeplitz-times-diagonal-times-Toeplitz matrices. We investigate approximate inverse preconditioners and show that the spectra of the preconditioned matrices are clustered around 1. Numerical experiments have shown that the conjugate gradient method, when applied to the preconditioned systems, converges very quickly. In addition, we extend our preconditioning technique to two dimensional problems.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research is supported by NSFC grant 11771148 and Science and Technology Commission of Shanghai Municipality grant 22DZ2229014.

    All authors declare no conflicts of interest in this paper.



[1] D. Benson, S. W. Wheatcraft, M. M. Meerschaert, Application of a fractional advection-dispersion equation, Water Resour. Res., 36 (2000), 1403–1412. https://doi.org/10.1029/2000WR900031
    [2] D. Benson, S. W. Wheatcraft, M. M. Meerschaert, The fractional-order governing equation of Lévy motion, Water Resour. Res., 36 (2000), 1413–1423. https://doi.org/10.1029/2000WR900032 doi: 10.1029/2000WR900032
    [3] M. Benzi, G. H. Golub, Bounds for the entries of matrix functions with applications to preconditioning, BIT Numerical Mathematics, 39 (1999), 417–438. https://doi.org/10.1023/A:1022362401426 doi: 10.1023/A:1022362401426
    [4] B. A. Carreras, V. E. Lynch, G. M. Zaslavsky, Anomalous diffusion and exit time distribution of particle tracers in plasma turbulence models, Phys. Plasmas, 8 (2001), 5096–5103. https://doi.org/10.1063/1.1416180 doi: 10.1063/1.1416180
    [5] R. H. Chan, M. K. Ng, Conjugate gradient methods for Toeplitz systems, SIAM Rev., 38 (1996), 427–482. https://doi.org/10.1137/S0036144594276474 doi: 10.1137/S0036144594276474
    [6] M. Donatelli, M. Mazza, S. Serra-Capizzano, Spectral analysis and structure preserving preconditioners for fractional diffusion equations, J. Comput. Phys., 307 (2016), 262–279. https://doi.org/10.1016/j.jcp.2015.11.061 doi: 10.1016/j.jcp.2015.11.061
    [7] Z. W. Fang, X. L. Lin, M. K. Ng, H. W. Sun, Preconditioning for symmetric positive definite systems in balanced fractional diffusion equations, Numer. Math., 147 (2021), 651–677. https://doi.org/10.1007/s00211-021-01175-x doi: 10.1007/s00211-021-01175-x
    [8] Z. Fang, M. K. Ng, H. W. Sun, Circulant preconditioners for a kind of spatial fractional diffusion equations, Numer. Algor., 82 (2019), 729–747. https://doi.org/10.1007/s11075-018-0623-y doi: 10.1007/s11075-018-0623-y
    [9] F. R. Lin, S. W. Yang, X. Q. Jin, Preconditioned iterative methods for fractional diffusion equation, J. Comput. Phys., 256 (2014), 109–117. https://doi.org/10.1016/j.jcp.2013.07.040 doi: 10.1016/j.jcp.2013.07.040
    [10] Z. Mao, J. Shen, Efficient spectral-Galerkin methods for fractional partial differential equations with variable coefficients, J. Comput. Phys., 307 (2016), 243–261. https://doi.org/10.1016/j.jcp.2015.11.047 doi: 10.1016/j.jcp.2015.11.047
    [11] M. K. Ng, J. Y. Pan, Approximate inverse circulant-plus-diagonal preconditioners for Toeplitz-plus-diagonal matrices, SIAM J. Sci. Comput., 32 (2010), 1442–1464. https://doi.org/10.1137/080720280 doi: 10.1137/080720280
[12] J. Pan, R. Ke, M. K. Ng, H. W. Sun, Preconditioning techniques for diagonal-times-Toeplitz matrices in fractional diffusion equations, SIAM J. Sci. Comput., 36 (2014), A2698–A2719. https://doi.org/10.1137/130931795
    [13] J. Y. Pan, M. K. Ng, H. Wang, Fast iterative solvers for linear systems arising from time-dependent space fractional diffusion equations, SIAM J. Sci. Comput., 38 (2016), A2806–A2826. https://doi.org/10.1137/15M1030273 doi: 10.1137/15M1030273
[14] J. Y. Pan, M. K. Ng, H. Wang, Fast preconditioned iterative methods for finite volume discretization of steady-state space-fractional diffusion equations, Numer. Algor., 74 (2017), 153–173. https://doi.org/10.1007/s11075-016-0143-6
    [15] H. K. Pang, H. H. Qin, H. W. Sun, T. T. Ma, Circulant-based approximate inverse preconditioners for a class of fractional diffusion equations, Comput. Math. Appl., 85 (2021), 18–29. https://doi.org/10.1016/j.camwa.2021.01.007 doi: 10.1016/j.camwa.2021.01.007
    [16] H. K. Pang, H. H. Sun, Multigrid method for fractional diffusion equations, J. Comput. Phys., 231 (2012), 693–703. https://doi.org/10.1016/j.jcp.2011.10.005 doi: 10.1016/j.jcp.2011.10.005
    [17] I. Podlubny, Fractional Differential Equations, Academic Press, 1999.
    [18] M. F. Shlesinger, B. J. West, J. Klafter, Lévy dynamics of enhanced diffusion: Application to turbulence, Phys. Rev. Lett., 58 (1987), 1100–1103. https://doi.org/10.1103/PhysRevLett.58.1100 doi: 10.1103/PhysRevLett.58.1100
    [19] I. M. Sokolov, J. Klafter, A. Blumen, Fractional kinetics, Phys. Today, 55 (2002), 48–55. https://doi.org/10.1063/1.1535007 doi: 10.1063/1.1535007
[20] T. Strohmer, Four short stories about Toeplitz matrix calculations, Linear Algebra Appl., 343 (2002), 321–344. https://doi.org/10.1016/S0024-3795(01)00243-9
[21] H. Wang, K. X. Wang, T. Sircar, A direct $O(N\log^2 N)$ finite difference method for fractional diffusion equations, J. Comput. Phys., 229 (2010), 8095–8104. https://doi.org/10.1016/j.jcp.2010.07.011
    [22] M. K. Wang, C. Wang, J. F. Yin, A class of fourth-order Padé schemes for fractional exotic options pricing model, Electron. Res. Arch., 30 (2022), 874–897. https://doi.org/10.3934/era.2022046 doi: 10.3934/era.2022046
    [23] Z. Q. Wang, J. F. Yin, Q. Y. Dou, Preconditioned modified Hermitian and skew-Hermitian splitting iteration methods for fractional nonlinear Schrödinger equations, J. Comput. Appl. Math., 367 (2020), 112420. https://doi.org/10.1016/j.cam.2019.112420 doi: 10.1016/j.cam.2019.112420
    [24] Y. Xu, H. Sun, Q. Sheng, On variational properties of balanced central fractional derivatives, Int. J. Comput. Math., 95 (2018), 1195–1209. https://doi.org/10.1080/00207160.2017.1398324 doi: 10.1080/00207160.2017.1398324
    [25] G. M. Zaslavsky, D. Stevens, H. Weitzner, Self-similar transport in incomplete chaos, Phys. Rev. E, 48 (1993), 1683–1694. https://doi.org/10.1103/PhysRevE.48.1683 doi: 10.1103/PhysRevE.48.1683
  • This article has been cited by:

    1. Shi-Ping Tang, Ai-Li Yang, Jian-Lin Zhou, Yu-Jiang Wu, R. Chan’s circulant-based approximate inverse preconditioning iterative method for solving second-order space fractional advection–dispersion equations with variable coefficients, 2024, 43, 2238-3603, 10.1007/s40314-024-02592-y
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
