Research article

An efficient high order numerical scheme for the time-fractional diffusion equation with uniform accuracy

  • Received: 22 March 2023 Revised: 24 April 2023 Accepted: 27 April 2023 Published: 05 May 2023
  • MSC : 65L12, 65M06, 65M12

  • The construction of efficient numerical schemes with uniform convergence order for time-fractional diffusion equations (TFDEs) is an important research problem, and this paper is devoted to an efficient scheme with uniform accuracy for TFDEs. Firstly, piecewise quadratic interpolation is used to construct an efficient scheme with uniform accuracy for the fractional derivative in time, and the local truncation error of this scheme is derived. Secondly, the full discrete numerical scheme for TFDEs is obtained by combining the second-order central difference scheme in space with the above efficient scheme in time. Thirdly, the stability and error estimates of the scheme are analyzed rigorously, showing that it is unconditionally stable and of $(3-\beta)$-order convergence in time with uniform accuracy. Finally, several numerical examples show that the proposed scheme is an efficient, unconditionally stable scheme.

    Citation: Junying Cao, Qing Tan, Zhongqing Wang, Ziqiang Wang. An efficient high order numerical scheme for the time-fractional diffusion equation with uniform accuracy[J]. AIMS Mathematics, 2023, 8(7): 16031-16061. doi: 10.3934/math.2023818




    $u$: temperature change (K); $f$: heat source (K/s); $x$: space variable (m); $t$: time variable (s); $\Lambda$: spatial domain; $T$: final time; $\beta$: order of the fractional derivative in time; ${}_0D^{\beta}_{t}$: fractional derivative in time in the sense of Caputo; $\partial_x^2$: second derivative in space; $\Delta t$: time step (s); $h$: space step size (m); $K$: number of time divisions; $N$: number of space divisions

    Fractional calculus is widely used to model natural phenomena, because many problems in engineering and science can be well described by fractional differential equations. In recent years, the analytical and numerical solution of fractional differential equations has become a very active research topic; representative approaches include the finite difference method [1], spectral methods [2,3], and wavelet methods [4].

    In this paper, we consider the TFDE in the following form:

    \begin{eqnarray} {}_0D^{\beta}_{t}u(x,t)-\partial_x^2 u(x,t) = f(x,t), \quad (x,t)\in \Lambda\times I, \end{eqnarray} (1.1)

    with the initial and boundary conditions as follows:

    \left\{ \begin{array}{l} u(x,0) = u_0(x), \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1.2)\\ u(a,t) = u(b,t) = 0, \ \ \ 0 < t\leq T, \ \ \ \ \ \ (1.3) \end{array} \right.

    where $T > 0$, $0 < \beta < 1$, $\Lambda = (a,b)$ and $I = (0,T]$. The Caputo fractional derivative ${}_0D^{\beta}_{t}u(x,t)$ of order $\beta$ in (1.1) is defined by

    \begin{eqnarray*} {}_0D^{\beta}_{t}u(x,t) = \frac{1}{\Gamma(1-\beta)}\int_0^{t}(t-\tau)^{-\beta}\,\partial_\tau u(x,\tau)\,d\tau, \quad 0 < \beta < 1, \end{eqnarray*}

    where $\Gamma(\cdot)$ is Euler's gamma function. The Caputo fractional derivative can be understood as a weighted integral average of the classical derivative over past time, which means that the rate of change of a function at the current time is influenced by its past history. This property is useful for describing the mechanical behavior of certain materials. Therefore, the study of numerical algorithms for the TFDE has attracted many researchers. In [5], an alternating direction implicit scheme based on the L1 approximation on a uniform mesh was used to solve the 2D TFDE with Dirichlet boundary conditions and an initially weakly singular solution. In [6], a novel stable and convergent numerical scheme was proposed for the TFDE with a Caputo fractional derivative of order $\alpha$ $(1 < \alpha < 2)$. A stable and convergent L2-type numerical scheme for the Caputo fractional derivative was constructed in [7]. A high-order convergent compact finite difference method for the 2D TFDE was constructed in [8] by combining the L1 approximation with an operator-splitting technique. In [9], L1- and L2-type approximations were used to construct a high-order stable and convergent numerical method for the TFDE, with rigorous analysis. In [10], Legendre polynomials were used to solve the nonlinear fractional diffusion equation with advection and reaction terms. In [11], a space-time Petrov-Galerkin spectral method based on eigen-decomposition was constructed for TFDEs. In [12], spectral collocation methods were used to solve distributed-order TFDEs. In [13], the Legendre spectral tau method was applied to multi-term TFDEs, together with error estimates and a rigorous convergence analysis. For constructing efficient numerical schemes, a popular approach is based on the sum-of-exponentials technique [14,15,16]. Many researchers use non-uniform or graded meshes when the solution of the TFDE has an initial singularity [17,18,19,20,21]. In [22], an approximate spectral method for the nonlinear time-fractional partial integro-differential equation with a weakly singular kernel was proposed, using new basis functions based on shifted first-kind Chebyshev polynomials. For partial integro-differential equations, an efficient alternating direction implicit scheme was constructed for nonlinear TFDEs in [23]. Based on the block-by-block method, a general technique for constructing efficient numerical schemes for fractional derivatives was developed in [24,25]. In [26], finite difference and finite element schemes for space-time fractional diffusion equations were presented. In [27], a time-stepping spectral method for TFDEs was constructed by combining a temporal second-order difference scheme with a spatial spectral discretization. In [28], an efficient numerical scheme for the variable-coefficient multi-term TFDE was given, using the Crank-Nicolson method in time and exponential B-splines in space, together with an unconditional stability and convergence analysis. Readers may refer to further references such as [29,30,31]. In this paper, we construct an efficient numerical scheme with uniform accuracy for the TFDE by a block-by-block approach, in order to overcome the loss of theoretical convergence order near the initial time in the above algorithms.

    The rest of this article is organized as follows. In Section 2, we describe the time discretization of the time-fractional derivative and derive a sharp estimate for its truncation error. In Section 3, the full discrete numerical scheme for the TFDE is obtained by combining the second-order central difference scheme in space with the efficient time scheme of Section 2. In Section 4, the stability and error estimates of the scheme are analyzed rigorously, showing that it is unconditionally stable and of $(3-\beta)$-order convergence in time with uniform accuracy. Some typical numerical examples are given in Section 5 to show the effectiveness of the numerical algorithm. Concluding remarks are provided in the final section.

    In order to simplify the notation without loss of generality, we set $f(x,t)\equiv 0$ during the construction and analysis of the numerical scheme. We now construct an efficient high-order scheme for the fractional derivative in time, dividing the interval $(0,T]$ into $K$ equal subintervals with $\Delta t = T/K$, $t_k = k\Delta t$, $k = 0,1,\cdots,K$.

    Next, we discuss the numerical scheme for the time-fractional derivative ${}_0D^{\beta}_{t}u(x,t)$ in (1.1). Firstly, we determine the values of $u(\cdot,t)$ at $t_1$ and $t_2$. The approximation of $u(\cdot,t)$ on the interval $[t_0,t_2]$ is given by Lagrange interpolation [32]:

    \begin{eqnarray} J_{[t_0,t_2]}u(\cdot,t) = u(\cdot,t_0)\bar{\omega}_{0,0}(t)+u(\cdot,t_1)\bar{\omega}_{1,0}(t)+u(\cdot,t_2)\bar{\omega}_{2,0}(t), \end{eqnarray} (2.1)

    where $\bar{\omega}_{i,0}(t)$, $i = 0,1,2$, are defined by

    \begin{eqnarray*} \bar{\omega}_{0,0}(t) = \frac{(t-t_1)(t-t_2)}{(t_0-t_1)(t_0-t_2)}, \quad \bar{\omega}_{1,0}(t) = \frac{(t-t_0)(t-t_2)}{(t_1-t_0)(t_1-t_2)}, \quad \bar{\omega}_{2,0}(t) = \frac{(t-t_0)(t-t_1)}{(t_2-t_0)(t_2-t_1)}. \end{eqnarray*}

    When $k = 1,2$, based on (2.1) we approximate ${}_0D^{\beta}_{t}u(x,t_1)$ and ${}_0D^{\beta}_{t}u(x,t_2)$ by replacing $u(\cdot,t)$ with $J_{[t_0,t_2]}u(\cdot,t)$ as follows:

    \begin{eqnarray} \begin{aligned} {}_0D^{\beta}_{t}u(x,t_1)& = \frac{1}{\Gamma(1-\beta)}\int_0^{t_1}(t_1-\tau)^{-\beta}\partial_\tau u(x,\tau)\,d\tau \approx \frac{1}{\Gamma(1-\beta)}\int_0^{t_1}(t_1-\tau)^{-\beta}\partial_\tau\big(J_{[t_0,t_2]}u(x,\tau)\big)\,d\tau\\ & = B^{0,0}_{1}u(x,t_0)+B^{1,0}_{1}u(x,t_1)+B^{2,0}_{1}u(x,t_2),\\ {}_0D^{\beta}_{t}u(x,t_2)& = \frac{1}{\Gamma(1-\beta)}\int_0^{t_2}(t_2-\tau)^{-\beta}\partial_\tau u(x,\tau)\,d\tau \approx \frac{1}{\Gamma(1-\beta)}\int_0^{t_2}(t_2-\tau)^{-\beta}\partial_\tau\big(J_{[t_0,t_2]}u(x,\tau)\big)\,d\tau\\ & = B^{0,0}_{2}u(x,t_0)+B^{1,0}_{2}u(x,t_1)+B^{2,0}_{2}u(x,t_2), \end{aligned} \end{eqnarray} (2.2)

    where

    \begin{eqnarray*} B^{i,0}_{1} = \frac{1}{\Gamma(1-\beta)}\int_0^{t_1}(t_1-\tau)^{-\beta}\,\bar{\omega}_{i,0}^{\,\prime}(\tau)\,d\tau, \quad B^{i,0}_{2} = \frac{1}{\Gamma(1-\beta)}\int_0^{t_2}(t_2-\tau)^{-\beta}\,\bar{\omega}_{i,0}^{\,\prime}(\tau)\,d\tau, \quad i = 0,1,2. \end{eqnarray*}

    Dividing the integration interval into several subintervals, one can write ${}_0D^{\beta}_{t}u(x,t_k)$ for $k\geq 3$ as follows:

    \begin{eqnarray*} {}_0D^{\beta}_{t}u(x,t_k) = \frac{1}{\Gamma(1-\beta)}\int_0^{t_k}\frac{\partial_\tau u(x,\tau)}{(t_k-\tau)^{\beta}}\,d\tau = \frac{1}{\Gamma(1-\beta)}\Big[\int_0^{t_1}\frac{\partial_\tau u(x,\tau)}{(t_k-\tau)^{\beta}}\,d\tau +\sum\limits_{j = 1}^{k-1}\int_{t_j}^{t_{j+1}}\frac{\partial_\tau u(x,\tau)}{(t_k-\tau)^{\beta}}\,d\tau\Big]. \end{eqnarray*}

    On each subinterval $[t_j,t_{j+1}]$, $j = 1,2,\cdots,k-1$, the approximation of $u(x,t)$ is as follows:

    \begin{eqnarray} u(\cdot,t)\approx u(\cdot,t_{j-1})w_{0,j}(t)+u(\cdot,t_j)w_{1,j}(t)+u(\cdot,t_{j+1})w_{2,j}(t)\doteq J_{[t_j,t_{j+1}]}u(\cdot,t), \end{eqnarray} (2.3)

    where $w_{i,j}$, $i = 0,1,2$; $j = 1,2,\cdots,k-1$, are defined by

    \begin{eqnarray*} w_{0,j}(t) = \frac{(t-t_j)(t-t_{j+1})}{(t_{j-1}-t_j)(t_{j-1}-t_{j+1})}, \quad w_{1,j}(t) = \frac{(t-t_{j-1})(t-t_{j+1})}{(t_j-t_{j-1})(t_j-t_{j+1})}, \quad w_{2,j}(t) = \frac{(t-t_{j-1})(t-t_j)}{(t_{j+1}-t_{j-1})(t_{j+1}-t_j)}. \end{eqnarray*}

    Similar to (2.2), based on (2.3), for $k\geq 3$ the derivative ${}_0D^{\beta}_{t}u(x,t_k)$ can be approximated by replacing $u(\cdot,t)$ with $J_{[t_j,t_{j+1}]}u(\cdot,t)$ on each $[t_j,t_{j+1}]$ as follows:

    \begin{eqnarray} \begin{aligned} {}_0D^{\beta}_{t}u(x,t_k)& = \frac{1}{\Gamma(1-\beta)}\Big[\int_0^{t_1}(t_k-\tau)^{-\beta}\partial_\tau u(x,\tau)\,d\tau +\sum\limits_{j = 1}^{k-1}\int_{t_j}^{t_{j+1}}(t_k-\tau)^{-\beta}\partial_\tau u(x,\tau)\,d\tau\Big]\\ &\approx \frac{1}{\Gamma(1-\beta)}\Big\{\int_0^{t_1}(t_k-\tau)^{-\beta}\partial_\tau\big[J_{[t_0,t_2]}u(x,\tau)\big]\,d\tau +\sum\limits_{j = 1}^{k-1}\int_{t_j}^{t_{j+1}}(t_k-\tau)^{-\beta}\partial_\tau\big[J_{[t_j,t_{j+1}]}u(x,\tau)\big]\,d\tau\Big\}\\ & = B^{0,0}_{k}u(x,t_0)+B^{1,0}_{k}u(x,t_1)+B^{2,0}_{k}u(x,t_2) +\sum\limits_{j = 1}^{k-1}\big[B^{0,j}_{k}u(x,t_{j-1})+B^{1,j}_{k}u(x,t_j)+B^{2,j}_{k}u(x,t_{j+1})\big], \end{aligned} \end{eqnarray} (2.4)

    where

    \begin{eqnarray*} B^{i,0}_{k} = \frac{1}{\Gamma(1-\beta)}\int_0^{t_1}(t_k-\tau)^{-\beta}\,\bar{\omega}_{i,0}^{\,\prime}(\tau)\,d\tau, \quad i = 0,1,2, \qquad B^{i,j}_{k} = \frac{1}{\Gamma(1-\beta)}\int_{t_j}^{t_{j+1}}(t_k-\tau)^{-\beta}\,w_{i,j}^{\,\prime}(\tau)\,d\tau, \quad i = 0,1,2;\ j = 1,2,\cdots,k-1. \end{eqnarray*}

    Combining (2.2) and (2.4) and letting $\beta_0 = \Gamma(3-\beta)\Delta t^{\beta}$, one can immediately obtain the efficient scheme for ${}_0D^{\beta}_{t}u(x,t_k)$ as follows:

    \begin{eqnarray} {}_0D^{\beta}_{\Delta t}u(x,t_k) = \left\{ \begin{array}{ll} \beta_0^{-1}\big(\widehat{D_0}u(x,t_0)+\widehat{D_1}u(x,t_1)+\widehat{D_2}u(x,t_2)\big), & k = 1, \\ \beta_0^{-1}\big(\widetilde{D_0}u(x,t_0)+\widetilde{D_1}u(x,t_1)+\widetilde{D_2}u(x,t_2)\big), & k = 2, \\ \beta_0^{-1}\Big[\overline{A_k}u(x,t_0)+\overline{B_k}u(x,t_1)+\overline{C_k}u(x,t_2) +\sum\limits_{j = 1}^{k-1}\big(A_ju(x,t_{k-j-1})+B_ju(x,t_{k-j})+C_ju(x,t_{k-j+1})\big)\Big], & k\geq 3, \end{array} \right. \end{eqnarray} (2.5)

    where

    \begin{eqnarray*} \begin{array}{l} \widehat{D_0} = (3\beta-4)/2, \quad \widehat{D_1} = 2(1-\beta), \quad \widehat{D_2} = \beta/2, \\ \widetilde{D_0} = (3\beta-2)/2^{\beta}, \quad \widetilde{D_1} = -4\beta/2^{\beta}, \quad \widetilde{D_2} = (\beta+2)/2^{\beta}, \\ \overline{A_k} = (2-\beta)(k-1)^{1-\beta}/2-3(2-\beta)k^{1-\beta}/2-(k-1)^{2-\beta}+k^{2-\beta}, \\ \overline{B_k} = 2(2-\beta)k^{1-\beta}+2(k-1)^{2-\beta}-2k^{2-\beta}, \\ \overline{C_k} = -(2-\beta)[k^{1-\beta}+(k-1)^{1-\beta}]/2+k^{2-\beta}-(k-1)^{2-\beta}, \\ A_j = -(2-\beta)[(j-1)^{1-\beta}+j^{1-\beta}]/2-(j-1)^{2-\beta}+j^{2-\beta}, \\ B_j = 2[(2-\beta)(j-1)^{1-\beta}+(j-1)^{2-\beta}-j^{2-\beta}], \\ C_j = -3(2-\beta)(j-1)^{1-\beta}/2+(2-\beta)j^{1-\beta}/2-(j-1)^{2-\beta}+j^{2-\beta}. \end{array} \end{eqnarray*}
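    As a quick sanity check of these weights, the following short Python sketch (our illustration, not part of the original algorithm description; all function and variable names are ours) verifies that the $k = 1$ and $k = 2$ branches of (2.5) reproduce the exact Caputo derivative of the quadratic $u(t) = t^2$, for which the interpolation (2.1) is exact:

```python
import numpy as np
from math import gamma

beta, dt = 0.7, 0.1
t = dt*np.arange(3)
u = t**2                                  # quadratic: the interpolation (2.1) is exact for it
beta0 = gamma(3 - beta)*dt**beta

D_hat   = np.array([(3*beta - 4)/2, 2*(1 - beta), beta/2])        # weights for k = 1
D_tilde = np.array([3*beta - 2, -4*beta, beta + 2]) / 2**beta     # weights for k = 2

for k, w in ((1, D_hat), (2, D_tilde)):
    approx = w @ u / beta0
    exact  = 2*t[k]**(2 - beta)/gamma(3 - beta)   # Caputo derivative of t^2 at t_k
    print(k, approx, exact)                       # the two values agree up to rounding
```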

    In the following Theorem 2.1, we give the error estimate of the proposed numerical scheme (2.5) for the fractional derivative.

    Theorem 2.1. Suppose $u(\cdot,t)\in C^3[0,T]$ and denote

    \begin{eqnarray*} r_k(\Delta t) = {}_0D^{\beta}_{t}u(x,t_k)-{}_0D^{\beta}_{\Delta t}u(x,t_k), \quad k\geq 1, \end{eqnarray*}

    then

    \begin{eqnarray} |r_k(\Delta t)|\leq C_u\Delta t^{3-\beta}, \quad 0 < \beta < 1, \end{eqnarray} (2.6)

    where $C_u$ is a constant depending only on the function $u$.

    Proof. Based on Taylor's theorem, one can obtain the remainder of (2.1) as follows:

    \begin{eqnarray} u(x,t)-J_{[t_0,t_2]}u(x,t) = \frac{1}{6}\frac{\partial^3 u(x,\xi(t))}{\partial t^3}(t-t_2)(t-t_1)(t-t_0), \quad t\in[t_0,t_2], \end{eqnarray} (2.7)

    where $\xi(t)\in[t_0,t_2]$.

    Firstly, we use (2.7) to estimate the case $k = 1$ of (2.6) as follows:

    \begin{eqnarray*} \begin{aligned} |r_1(\Delta t)|& = \frac{1}{\Gamma(1-\beta)}\Big|\int_0^{t_1}(t_1-\tau)^{-\beta}\partial_\tau u(x,\tau)\,d\tau -\int_0^{t_1}(t_1-\tau)^{-\beta}\partial_\tau\big(J_{[t_0,t_2]}u(x,\tau)\big)\,d\tau\Big|\\ & = \frac{\beta}{\Gamma(1-\beta)}\Big|\int_0^{t_1}(t_1-\tau)^{-\beta-1}\big[u(x,\tau)-J_{[t_0,t_2]}u(x,\tau)\big]\,d\tau\Big|\\ & = \frac{\beta}{6\Gamma(1-\beta)}\Big|\int_0^{t_1}\frac{\partial^3u(x,\xi(\tau))}{\partial\tau^3}(\tau-t_2)(\tau-t_0)(t_1-\tau)^{-\beta}\,d\tau\Big|\\ &\leq \frac{\beta}{6\Gamma(1-\beta)}\max\limits_{t\in[0,T]}\Big|\frac{\partial^3u(x,t)}{\partial t^3}\Big|\int_0^{t_1}(t_2-\tau)(t_1-\tau)^{-\beta}\tau\,d\tau\\ & = \frac{\beta}{3(3-\beta)\Gamma(2-\beta)}\max\limits_{t\in[0,T]}\Big|\frac{\partial^3u(x,t)}{\partial t^3}\Big|\Delta t^{3-\beta} \leq C_u\Delta t^{3-\beta}. \end{aligned} \end{eqnarray*}

    Secondly, the estimate for the case $k = 2$ of (2.6) is similar to that for $k = 1$, so the proof is omitted.

    Thirdly, by direct calculation, the estimate of (2.6) for $k\geq 3$ can be split into two parts as follows:

    \begin{eqnarray*} \begin{aligned} |r_k(\Delta t)|& = \big|{}_0D^{\beta}_{t}u(x,t_k)-{}_0D^{\beta}_{\Delta t}u(x,t_k)\big|\\ & = \frac{1}{\Gamma(1-\beta)}\Big|\int_0^{t_1}(t_k-\tau)^{-\beta}\partial_\tau\big[u(x,\tau)-J_{[t_0,t_2]}u(x,\tau)\big]\,d\tau +\sum\limits_{j = 1}^{k-1}\int_{t_j}^{t_{j+1}}(t_k-\tau)^{-\beta}\partial_\tau\big[u(x,\tau)-J_{[t_j,t_{j+1}]}u(x,\tau)\big]\,d\tau\Big|\\ &\doteq|M+N|. \end{aligned} \end{eqnarray*}

    Repeating the proof of $|r_1(\Delta t)|$, one immediately obtains the estimate

    \begin{eqnarray} |M|\leq C_u\Delta t^{3-\beta}. \end{eqnarray} (2.8)

    For the estimate of the second part $N$ of $|r_k(\Delta t)|$, using Taylor's theorem one obtains

    \begin{eqnarray} \begin{aligned} |N|& = \frac{1}{\Gamma(1-\beta)}\Big|\sum\limits_{j = 1}^{k-1}\int_{t_j}^{t_{j+1}}(t_k-\tau)^{-\beta}\partial_\tau\big[u(x,\tau)-J_{[t_j,t_{j+1}]}u(x,\tau)\big]\,d\tau\Big|\\ & = \frac{1}{\Gamma(1-\beta)}\Big|\sum\limits_{j = 1}^{k-2}\int_{t_j}^{t_{j+1}}(t_k-\tau)^{-\beta}\partial_\tau\big[u(x,\tau)-J_{[t_j,t_{j+1}]}u(x,\tau)\big]\,d\tau +\int_{t_{k-1}}^{t_k}(t_k-\tau)^{-\beta}\partial_\tau\big[u(x,\tau)-J_{[t_{k-1},t_k]}u(x,\tau)\big]\,d\tau\Big|\\ & = \frac{1}{6\Gamma(1-\beta)}\Big|\sum\limits_{j = 1}^{k-2}\int_{t_j}^{t_{j+1}}\frac{\partial^3u(x,\xi(\tau))}{\partial\tau^3}(\tau-t_{j-1})(\tau-t_j)(\tau-t_{j+1})\,d(t_k-\tau)^{-\beta} +\int_{t_{k-1}}^{t_k}\frac{\partial^3u(x,\xi(\tau))}{\partial\tau^3}(\tau-t_{k-2})(\tau-t_{k-1})(\tau-t_k)\,d(t_k-\tau)^{-\beta}\Big|\\ &\leq \frac{\sqrt{3}\beta}{27\Gamma(1-\beta)}\max\limits_{t\in[0,T]}\big|\partial_t^3u(x,t)\big|\Delta t^3\sum\limits_{j = 1}^{k-2}\int_{t_j}^{t_{j+1}}(t_k-\tau)^{-\beta-1}\,d\tau +\frac{\beta}{3\Gamma(1-\beta)}\max\limits_{t\in[0,T]}\big|\partial_t^3u(x,t)\big|\Delta t^2\int_{t_{k-1}}^{t_k}(t_k-\tau)^{-\beta}\,d\tau\\ &\leq \frac{\sqrt{3}}{27\Gamma(1-\beta)}\max\limits_{t\in[0,T]}\big|\partial_t^3u(x,t)\big|\Delta t^{3-\beta} +\frac{\beta}{3\Gamma(2-\beta)}\max\limits_{t\in[0,T]}\big|\partial_t^3u(x,t)\big|\Delta t^{3-\beta}. \end{aligned} \end{eqnarray} (2.9)

    In the above proof, we used $|(\tau-t_{j-1})(\tau-t_j)(\tau-t_{j+1})|\leq\frac{2\sqrt{3}}{9}\Delta t^3$. Combining (2.8) and (2.9), one obtains the following conclusion:

    \begin{eqnarray*} |r_k(\Delta t)|\leq C_u\Delta t^{3-\beta}, \quad k\geq 3. \end{eqnarray*}

    To sum up, the proof of Theorem 2.1 is completed.
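    To illustrate Theorem 2.1 numerically, the following Python sketch (ours, for illustration only; it is not taken from the paper) applies the $k\geq 3$ branch of (2.5) to the smooth function $u(t) = t^3$ and compares the result with the exact Caputo derivative $6t^{3-\beta}/\Gamma(4-\beta)$ at $t = T$, halving $\Delta t$ several times. The observed rates should approach $3-\beta$.

```python
import numpy as np
from math import gamma

def caputo_weights(beta, k):
    """Weights on u(t_0),...,u(t_k) in the k >= 3 branch of (2.5), before the beta_0^{-1} factor."""
    w = np.zeros(k + 1)
    w[0] += (2-beta)*(k-1)**(1-beta)/2 - 3*(2-beta)*k**(1-beta)/2 - (k-1)**(2-beta) + k**(2-beta)
    w[1] += 2*(2-beta)*k**(1-beta) + 2*(k-1)**(2-beta) - 2*k**(2-beta)
    w[2] += -(2-beta)*(k**(1-beta) + (k-1)**(1-beta))/2 + k**(2-beta) - (k-1)**(2-beta)
    for j in range(1, k):
        A = -(2-beta)*((j-1)**(1-beta) + j**(1-beta))/2 - (j-1)**(2-beta) + j**(2-beta)
        B = 2*((2-beta)*(j-1)**(1-beta) + (j-1)**(2-beta) - j**(2-beta))
        C = -3*(2-beta)*(j-1)**(1-beta)/2 + (2-beta)*j**(1-beta)/2 - (j-1)**(2-beta) + j**(2-beta)
        w[k-j-1] += A; w[k-j] += B; w[k-j+1] += C
    return w

beta, T = 0.4, 1.0
exact = 6*T**(3-beta)/gamma(4-beta)          # Caputo derivative of u(t) = t^3 at t = T
errs = []
for K in (8, 16, 32, 64, 128):
    dt = T/K
    t = dt*np.arange(K + 1)
    beta0 = gamma(3-beta)*dt**beta
    approx = caputo_weights(beta, K) @ t**3 / beta0
    errs.append(abs(approx - exact))
rates = np.log2(np.array(errs[:-1])/np.array(errs[1:]))
print(rates)                                  # expected to approach 3 - beta
```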

    For convenience and without loss of generality, we divide $\Lambda = (a,b)$ into $N$ equal parts and denote $h = \frac{b-a}{N}$, $x_j = a+jh$, $j = 0,1,\cdots,N$. The numerical solution of (1.1) at $(x_j,t_k)$ is denoted by $u_j^k$. We use the following difference notations:

    \begin{eqnarray} (u_j^k)_x = \frac{u_{j+1}^k-u_j^k}{h}, \qquad (u_j^k)_{\bar{x}} = \frac{u_j^k-u_{j-1}^k}{h}. \end{eqnarray} (3.1)

    Similar to [16], for the sake of the stability and convergence analysis we introduce the discrete inner products and norms as follows:

    \begin{eqnarray} (u^k, v^k) = \sum\limits_{j = 1}^{N-1}u_j^kv_j^kh, \quad (u^k, v^k] = \sum\limits_{j = 1}^{N}u_j^kv_j^kh, \quad \|u^k\|_0^2 = (u^k, u^k), \quad \|u^k]|^2 = (u^k, u^k]. \end{eqnarray} (3.2)

    Through direct calculation, it is easy to prove the discrete Green formula:

    \begin{eqnarray} (u^k, v^k_x) = u_N^kv_N^k-u_0^kv_1^k-(u_{\bar{x}}^k, v^k]. \end{eqnarray} (3.3)

    Letting $u = y$ and $v = z_{\bar{x}}$, Eq (3.3) immediately becomes

    \begin{eqnarray} (y^k, (z_{\bar{x}}^k)_x) = (y^kz_{\bar{x}}^k)_N-(y^kz_x^k)_0-(y_{\bar{x}}^k, z_{\bar{x}}^k]. \end{eqnarray} (3.4)
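    A quick numerical check of these identities may be helpful. The short Python sketch below is our illustration (not from the paper); it reads $(a,b]$ as the sum $h\sum_{j = 1}^{N}a_jb_j$, as written in (3.2), and verifies the discrete Green formula (3.3) for arbitrary grid functions.

```python
import numpy as np

N, h = 16, 1.0/16
rng = np.random.default_rng(0)
u = rng.standard_normal(N + 1)                 # nodal values u_0, ..., u_N
v = rng.standard_normal(N + 1)

vx  = (v[1:] - v[:-1])/h                       # (v_j)_x     for j = 0, ..., N-1, cf. (3.1)
uxb = (u[1:] - u[:-1])/h                       # (u_j)_xbar  for j = 1, ..., N,   cf. (3.1)

lhs = h*np.sum(u[1:N]*vx[1:])                  # (u, v_x), sum over j = 1, ..., N-1 as in (3.2)
rhs = u[N]*v[N] - u[0]*v[1] - h*np.sum(uxb*v[1:])   # u_N v_N - u_0 v_1 - (u_xbar, v]
print(abs(lhs - rhs))                          # ~ 1e-15: the discrete Green formula (3.3) holds
```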

    Therefore, using (3.1), the second-order central difference scheme in space reads as follows:

    \begin{equation} \frac{{{\partial ^2}u({x_j}, t_k)}}{{\partial {x^2}}} = \frac{(u_{j+1}^k)_{\bar{x}}-(u_j^k)_{\bar{x}}}{h}+O(h^2) \doteq(u_{\bar x, j}^k)_x+O(h^2). \end{equation} (3.5)

    Combining (2.5) with (3.5), one can immediately obtain the full discrete scheme for TFDEs as follows:

    \begin{eqnarray} \left\{\begin{aligned} &\widehat{D_0} u_j^0+\widehat{D_1} u_j^1+\widehat{D_2} u_j^2-\beta_0 (u_{\bar x, j}^1)_x = 0, \ \ \ k = 1, \\ &\widetilde{D_0} u_j^0+\widetilde{D_1} u_j^1+\widetilde{D_2} u_j^2-\beta_0 (u_{\bar x, j}^2)_x = 0, \ \ \ k = 2, \\ &\overline{A_k} u_j^0+\overline{B_k} u_j^1+\overline{C_k} u_j^2 +\sum\limits_{i = 1}^{k-1}(A_i u_j^{k-i-1}+B_i u_j^{k-i}+C_i u_j^{k-i+1})-\beta_0 (u_{\bar x, j}^k)_x = 0, \ \ \ k \geq 3. \end{aligned}\right. \end{eqnarray} (3.6)

    In order to analyze the numerical scheme (3.6), we first rewrite (3.6) for $k\geq4$ in the following equivalent form

    \begin{eqnarray} u^k_j-\beta_0 C_1^{-1} (u_{\bar x, j}^k)_x = \sum\limits_{i = 1}^k d_{k-i}^k u^{k-i}_j, \quad 4 \leq k \leq K, \end{eqnarray} (3.7)

    where

    \begin{eqnarray} \begin{array}{l} d_0^k = -C_1^{-1}(\bar{A}_k+A_{k-1}), \ \ \ d_1^k = -C_1^{-1}(A_{k-2}+B_{k-1}+\bar{B}_k), \\ d_2^k = -C_1^{-1}(\overline{C_k}+A_{k-3}+B_{k-2}+C_{k-1}), \\ d_{k-i}^k = -C_1^{-1}(A_{i-1}+B_i+C_{i+1}), i = 3, 4, \cdots, k-3, \\ d_{k-2}^k = -C_1^{-1}(A_1+B_2+C_3), \ \ \ d_{k-1}^k = -C_1^{-1}(B_1+C_2). \end{array} \end{eqnarray} (3.8)

    Similarly, we rewrite (3.6) for $k = 3$ in the following equivalent form

    \begin{eqnarray} u^3_j-\beta_0 C_1^{-1} (u_{\bar x, j}^3)_x = d_2^3 u^2_j+d_1^3 u^1_j+d_0^3 u^0_j, \end{eqnarray} (3.9)

    where

    \begin{eqnarray*} d_2^3 = -C_1^{-1}(\bar{C_3}+B_1+C_2), \ \ d_1^3 = -C_1^{-1}(\bar{B_3}+A_1+B_2), \ \ d_0^3 = -C_1^{-1}(\bar{A_3}+A_2). \end{eqnarray*}
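    The coefficients $d_{k-i}^k$ are simply the weights of (3.6) collected on each time level and divided by $-C_1$. The following Python sketch (ours, for illustration only; all names are ours) generates them in this way for any $k\geq 3$ and checks that they sum to one (cf. property (II) of Lemma 3.1 below):

```python
import numpy as np

def ABC(beta, j):
    # A_j, B_j, C_j defined below (2.5)
    A = -(2-beta)*((j-1)**(1-beta) + j**(1-beta))/2 - (j-1)**(2-beta) + j**(2-beta)
    B = 2*((2-beta)*(j-1)**(1-beta) + (j-1)**(2-beta) - j**(2-beta))
    C = -3*(2-beta)*(j-1)**(1-beta)/2 + (2-beta)*j**(1-beta)/2 - (j-1)**(2-beta) + j**(2-beta)
    return A, B, C

def bars(beta, k):
    # \bar{A}_k, \bar{B}_k, \bar{C}_k defined below (2.5)
    Ab = (2-beta)*(k-1)**(1-beta)/2 - 3*(2-beta)*k**(1-beta)/2 - (k-1)**(2-beta) + k**(2-beta)
    Bb = 2*(2-beta)*k**(1-beta) + 2*(k-1)**(2-beta) - 2*k**(2-beta)
    Cb = -(2-beta)*(k**(1-beta) + (k-1)**(1-beta))/2 + k**(2-beta) - (k-1)**(2-beta)
    return Ab, Bb, Cb

def d_weights(beta, k):
    """Collect the weight of u^m, m = 0..k-1, in the k-th equation of (3.6)
    and divide by -C_1, which yields d_m^k of (3.7)-(3.9); valid for k >= 3."""
    C1 = (4 - beta)/2
    w = np.zeros(k + 1)
    Ab, Bb, Cb = bars(beta, k)
    w[0] += Ab; w[1] += Bb; w[2] += Cb
    for i in range(1, k):
        A, B, C = ABC(beta, i)
        w[k-i-1] += A; w[k-i] += B; w[k-i+1] += C
    return -w[:k]/C1              # w[k] itself equals C_1

beta = 0.5
for k in (3, 4, 12):
    d = d_weights(beta, k)
    print(k, d.sum())             # each sum equals 1, cf. Lemma 3.1 (II)
```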

    According to (3.7) and (3.9), the full discrete scheme of (3.6) can be rewritten as the following equivalent form

    \left\{ \begin{array}{l} \widehat{D_0} u_j^0+\widehat{D_1} u_j^1+\widehat{D_2} u_j^2-\beta_0 (u_{\bar x, j}^1)_x = 0, \ \ \ k = 1, \ \ \ \ \ \ (3.10{\rm{a}})\\ \widetilde{D_0} u_j^0+\widetilde{D_1} u_j^1+\widetilde{D_2} u_j^2-\beta_0 (u_{\bar x, j}^2)_x = 0, \ \ \ k = 2, \ \ \ \ \ \ (3.10{\rm{b}})\\ u^3_j+\beta_0 C_1^{-1} (u_{\bar x, j}^3)_x = d_2^3 u^2_j+d_1^3 u^1_j+d_0^3 u^0_j, \ \ \ k = 3, \ \ \ \ \ \ (3.10{\rm{c}})\\ u^k_j+\beta_0 C_1^{-1} (u_{\bar x, j}^k)_x = \sum\limits_{i = 1}^k d_{k-i}^k u^{k-i}_j, \ \ \ k \geq 4.\ \ \ \ \ \ (3.10{\rm{d}})\end{array} \right.

    Before analyzing the stability and convergence of the full discrete scheme (4.1), we first study the properties of the coefficients $d_{k-i}^k$ in Lemma 3.1.

    Lemma 3.1. For all $0 < \beta < 1$ and $k \geq 4$, the coefficients in the scheme (4.1d) satisfy

    (I) C_1 = \frac{4-\beta}{2} \in(\frac{3}{2}, 2) ,

    (II) { }\sum_{i = 1}^k d_{k -i}^k\equiv1 ,

    (III) d_{k-i}^k > 0, i = 3, \cdots, k ,

    (IV) 0 < d_{k-1}^k < \frac{4}{3} ,

    (V) $d_{k-2}^k$ can be either positive or negative, depending on $\beta$,

    (VI) d_{k-2}^k+\frac{1}{4}(d_{k-1}^k)^2 > 0.

    Proof. (I) This follows by direct calculation from the definition of $C_j$ with $j = 1$.

    (II) Based on the fact that the fractional derivative of a constant is zero, one can prove it immediately by combining (2.5) and (3.8).

    (III) For $k\geq4$, it is easy to observe that

    \begin{eqnarray*} { } 2(-A_{i-1}-B_{i}-C_{i+1}) & = & (-\beta+2)[(i-2)^{1-\beta}-3(i-1)^{1-\beta}+3i^{1-\beta}-(1+i)^{1-\beta}]\\ &&+2(i-2)^{2-\beta}-6(i-1)^{2-\beta}+6i^{2-\beta}-2(1+i)^{2-\beta}. \end{eqnarray*}

    Denote $\bar{i} = i-2$ for $i\geq 6$; then using the Taylor formula yields

    \begin{eqnarray} &&-2(A_{i-1}+B_{i}+C_{i+1})\\ & = & \bar{i}^{1-\beta}(2-\beta)\{1-3(1+\frac{1}{\bar{i}})^{1-\beta}+3(1+\frac{2}{\bar{i}})^{1-\beta} - { } (1+\frac{3}{\bar{i}})^{1-\beta}\}\\ &&+ \bar{i}^{2-\beta}\{2-6(\frac{1}{\bar{i}}+1)^{2-\beta}+6(1+\frac{2}{\bar{i}})^{2-\beta} -{ } 2(1+\frac{3}{\bar{i}})^{2-\beta}\}\\ & = & { } \bar{i}^{1-\beta}(2-\beta)\{1-3[1+(1-\beta)(\frac{1}{\bar{i}})+\ldots]\\ &&+ { } 3[1+(1-\beta)(\frac{2}{\bar{i}})+\frac{(1-\beta)(-\beta)}{2!}(\frac{2}{\bar{i}})^{2}+\ldots]\\ &&- { } [1+(1-\beta)(\frac{3}{\bar{i}})+\frac{(1-\beta)(-\beta)}{2!}(\frac{3}{\bar{i}})^{2}+\ldots]\}\\ &&+ { } \bar{i}^{2-\beta}\{2-6[1+(2-\beta)(\frac{1}{\bar{i}})+\frac{(2-\beta)(1-\beta)}{2!}(\frac{1}{\bar{i}})^{2}+\ldots]\\ &&+ { } 6[1+(2-\beta)(\frac{2}{\bar{i}})+\frac{(2-\beta)(1-\beta)}{2!}(\frac{2}{\bar{i}})^{2}+\ldots]\\ &&- { } 2[1+(2-\beta)(\frac{3}{\bar{i}})+\frac{(2-\beta)(1-\beta)}{2!}(\frac{3}{\bar{i}})^{2}+\ldots]\}\\ & = & { } -\beta(2-\beta)(1-\beta)\bar{i}^{-2-\beta}\sum\limits_{k = 0}^{+\infty}a_{k} +2\beta(2-\beta)(1-\beta)\bar{i}^{-1-\beta}, \end{eqnarray} (3.11)

    where

    \begin{eqnarray*} a_{k} = \prod\limits_{i = 0}^{k}(-i-1-\beta)(\frac{1}{\bar{i}})^{k}\frac{-3(6+k)+24(8+k)2^{k}-27(10+k)3^{k}}{(k+4)!}. \end{eqnarray*}

    By careful calculation, it is easy to check that $\sum_{k = 0}^{+\infty}a_{k}$ is an alternating series with a positive first term, and we have $0 < \sum_{k = 0}^{+\infty}a_{k} < 4(\beta+1)$. Therefore, we have

    \begin{eqnarray*} -2(A_{i-1}+B_{i}+C_{i+1})& > & -\beta(2-\beta)(1-\beta)\bar{i}^{-2-\beta}\cdot4(\beta+1) +2\beta(2-\beta)(1-\beta)\bar{i}^{-1-\beta}\\ & = &2\beta(2-\beta)(1-\beta)\bar{i}^{-2-\beta}[-2(\beta+1)+\bar{i}] > 0. \end{eqnarray*}

    Similarly, direct calculation easily yields the following:

    \begin{eqnarray} \begin{array}{lll} 2(-A_{2}-B_{3}-C_{4}) = 4-\beta+(3\beta-18)2^{1-\beta}+(24-3\beta)3^{1-\beta}+(\beta-10)4^{1-\beta} > 0, \ i = 3, \\ 2(-A_{3}-B_{4}-C_{5}) = (6-\beta)2^{1-\beta}+(3\beta-24)3^{1-\beta}+(30-3\beta)4^{1-\beta}+(\beta-12)5^{1-\beta} > 0, \ i = 4, \\ 2(-A_{4}-B_{5}-C_{6}) = (8-\beta)3^{1-\beta}+(3\beta-30)4^{1-\beta}+(36-3\beta)5^{1-\beta}+(\beta-14)6^{1-\beta} > 0, \ i = 5. \end{array} \end{eqnarray} (3.12)

    Combining (3.11) and (3.12), one can easily obtain that

    \begin{eqnarray*} d_{k-i}^{k} = -\frac{1}{2-\frac{\beta}{2}}(A_{i-1}+B_{i}+C_{i+1}) > 0, i = 3, \cdots, k-3 . \end{eqnarray*}

    For $d_{1}^{k} = -C_{1}^{-1}(A_{k-2}+B_{k-1}+\bar{B}_{k})$, we let $W_{1} = -(A_{k-2}+B_{k-1}+\bar{B}_{k})$; then

    \begin{eqnarray*} W_{1}& = & { } \frac{3}{2}(-2+\beta)(k-2)^{1-\beta}-\frac{1}{2}(-2+\beta)(k-3)^{1-\beta}+2(-2+\beta)k^{1-\beta}\\ &&+ { } (k-3)^{2-\beta}-3(k-2)^{2-\beta}+2k^{2-\beta}. \end{eqnarray*}

    Let $\bar{k} = k-2$; using a Taylor expansion we get the following:

    \begin{eqnarray*} W_{1}& = & \frac{3}{2}(-2+\beta)\bar{k}^{1-\beta}+\frac{1}{2}(2-\beta)(-1+\bar{k})^{1-\beta}-2(2-\beta)(2+\bar{k})^{1-\beta}\\ &&+(-1+\bar{k})^{2-\beta}-3\bar{k}^{2-\beta}+2(2+\bar{k})^{2-\beta}\\ & = &(2-\beta)\bar{k}^{1-\beta}\{\frac{1}{2}[\frac{(1-\beta)(-\beta)}{2!}(\frac{1}{\bar{k}})^{2} -\frac{(1-\beta)(-\beta)(-\beta-1)}{3!}(\frac{1}{\bar{k}})^{3}+\cdots]\\ &&- 2[\frac{(1-\beta)(-\beta)}{2!}(\frac{2}{\bar{k}})^{2}+\frac{(1-\beta)(-\beta)(-\beta-1)}{3!}(\frac{2}{\bar{k}})^{3}+\cdots]\}\\ &&+ (2-\beta)\bar{k}^{2-\beta}\{-\frac{(1-\beta)(-\beta)}{3!}(\frac{1}{\bar{k}})^{3} +\frac{(1-\beta)(\beta)(\beta+1)}{4!}(\frac{1}{\bar{k}})^{4}-\cdots\\ &&+ 2[\frac{(1-\beta)(-\beta)}{3!}(\frac{2}{\bar{k}})^{3}+\frac{(1-\beta)(-\beta)(-\beta-1)}{4!}(\frac{2}{\bar{k}})^{4}+\cdots]\}\\ & = & (2-\beta)(1-\beta)\bar{k}^{-\beta-1}\sum\limits_{n = 0}^{+\infty}\frac{\prod\limits_{i = 0}^{n}(-\beta-i)}{(n+2)!}(\frac{1}{\bar{k}})^{n} [\frac{1}{2}(-1)^{n+2}-2^{n+3}]\\ &&+(2-\beta)(1-\beta)\bar{k}^{-\beta-1}\sum\limits_{n = 0}^{+\infty}\frac{\prod\limits_{i = 0}^{n}(-\beta-i)}{(n+3)!} (\frac{1}{\bar{k}})^{n}[(-1)^{n+3}+2^{n+4}]\\ & = & (2-\beta)(1-\beta)\bar{k}^{-\beta-1}\sum\limits_{n = 0}^{+\infty}\prod\limits_{i = 0}^{n}(-\beta-i)(\frac{1}{\bar{k}})^{n}\cdot a_{n}, \end{eqnarray*}

    where

    \begin{eqnarray*} a_{n} = \frac{\frac{1}{2}(-1)^{n}(n+1)-8\cdot2^{n}(n+1)}{(n+3)!} = \frac{(n+1)[(-1)^{n}-2^{n+4}]}{2\cdot(n+3)!} < 0. \end{eqnarray*}

    Therefore, ${ } \sum\limits_{n = 0}^{+\infty}\prod\limits_{i = 0}^{n}(-i-\beta)(\frac{1}{\bar{k}})^{n}\cdot a_{n}$ is an alternating series with a positive first term and satisfies

    \begin{eqnarray*} \sum\limits_{n = 0}^{+\infty}\prod\limits_{i = 0}^{n}(-i-\beta)(\frac{1}{\bar{k}})^{n}\cdot a_{n} > 0. \end{eqnarray*}

    For $d_{2}^{k} = -C_{1}^{-1}(\bar{C}_{k}+A_{k-3}+B_{k-2}+C_{k-1})$, taking $W_{2} = -(A_{k-3}+B_{k-2}+C_{k-1}+\bar{C}_{k})$, we have

    \begin{eqnarray*} W_{2} & = & { } \frac{1}{2}(-\beta+2)k^{1-\beta}+\frac{3}{2}(-\beta+2)(k-2)^{1-\beta}-\frac{3}{2}(-\beta+2)(k-3)^{1-\beta}\\ &&+{ } \frac{1}{2}(-\beta+2)(k-4)^{1-\beta}- { } k^{2-\beta}+3(k-2)^{2-\beta}-3(k-3)^{2-\beta}+(k-4)^{2-\beta}. \end{eqnarray*}

    Let $\hat{k} = k-2$; using the Taylor formula again, we get

    \begin{eqnarray*} W_{2}{ } & = &{ } \frac{1}{2}(2-\beta)(2+\hat{k})^{1-\beta}+\frac{3}{2}(2-\beta)\hat{k}^{1-\beta}-\frac{3}{2}(2-\beta)(\hat{k}-1)^{1-\beta}\\ &&+{ }\frac{1}{2}(2-\beta)(\hat{k}-2)^{1-\beta} -(\hat{k}+2)^{2-\beta}+3\hat{k}^{2-\beta}-3(\hat{k}-1)^{2-\beta}+(\hat{k}-2)^{2-\beta}\\ { }& = &{ } \frac{3}{2}(2-\beta)\hat{k}^{1-\beta}-\frac{3}{2}(2-\beta)\hat{k}^{1-\beta}(1-\frac{1}{\hat{k}})^{1-\beta} +\frac{1}{2}(2-\beta)\hat{k}^{1-\beta}(1-\frac{2}{\hat{k}})^{1-\beta}\\ &&+{ }\frac{1}{2}(2-\beta)\hat{k}^{1-\beta}(1+\frac{2}{\hat{k}})^{1-\beta}+{ } 3\hat{k}^{2-\beta}- 3\hat{k}^{2-\beta}(1-\frac{1}{\hat{k}})^{2-\beta}\\ &&+{ }\hat{k}^{2-\beta}(1-\frac{2}{\hat{k}})^{2-\beta}-\hat{k}^{2-\beta}(1+\frac{2}{\hat{k}})^{2-\beta}\\ { }& = &{ }(2-\beta)\hat{k}^{1-\beta}\{-\frac{3}{2}[\frac{(1-\beta)(-\beta)}{2!}(\frac{1}{\hat{k}})^{2} -\frac{(1-\beta)(-\beta)(-\beta-1)}{3!}(\frac{1}{\hat{k}})^{3}+\cdots]\\ &&+{ } \frac{1}{2}[\frac{(1-\beta)(-\beta)}{2!}(\frac{2}{\hat{k}})^{2} -\frac{(1-\beta)(-\beta)(-\beta-1)}{3!}(\frac{2}{\hat{k}})^{3}+\cdots]\\ &&+{ }\frac{1}{2}[\frac{(1-\beta)(-\beta)}{2!}(\frac{2}{\hat{k}})^{2} +\frac{(1-\beta)(-\beta)(-\beta-1)}{3!}(\frac{2}{\hat{k}})^{3}+\cdots]\}\\ &&+{ }(2-\beta)\hat{k}^{2-\beta}\{-3[\frac{(1-\beta)\beta}{3!}(\frac{1}{\hat{k}})^{3} +\frac{(1-\beta)\beta(\beta+1)}{4!}(\frac{1}{\hat{k}})^{4}-\cdots]\\ &&+{ }[\frac{(1-\beta)\beta}{3!}(\frac{2}{\hat{k}})^{3} +\frac{(1-\beta)\beta(\beta+1)}{4!}(\frac{2}{\hat{k}})^{4}-\cdots]\\ &&-{ } [\frac{(1-\beta)(-\beta)}{3!}(\frac{2}{\hat{k}})^{3} +\frac{(1-\beta)\beta(\beta+1)}{4!}(\frac{2}{\hat{k}})^{4}+\cdots]\}\\ { }& = &{ } (2-\beta)(1-\beta)\hat{k}^{-\beta-1}\sum\limits_{n = 0}^{+\infty}\frac{\prod\limits_{i = 0}^{n}(-\beta-i)}{(n+2)!} (\frac{1}{\hat{k}})^{n}\{\frac{3}{2}(-1)^{n+3}\\ &&+{ }\frac{1}{2}(-2)^{n+2}+2^{n+1}\}+ (2-\beta)(1-\beta)\hat{k}^{-\beta-1}\sum\limits_{n = 0}^{+\infty}\frac{\prod\limits_{i = 0}^{n}(-\beta-i)}{(n+3)!}(\frac{1}{\hat{k}})^{n}\\ &&{ } [3(-1)^{n+4}+(-2)^{n+3}-2^{n+3}]\\ { }& = &{ } (2-\beta)(1-\beta)\hat{k}^{-\beta-1}\sum\limits_{n = 0}^{+\infty}\prod\limits_{i = 0}^{n}(-\beta-i)(\frac{1}{\hat{k}})^{n}\frac{1}{(n+3)!}b_{n}, \end{eqnarray*}

    where

    \begin{eqnarray*} b_{n} = \frac{3}{2}(-1)^{n+1}(n+1)+2^{n+1}(n+1)[1+(-1)^{n}], \quad b_{0} = \frac{5}{2}, \quad b_{1} = 3. \end{eqnarray*}

    For $n\geq 2$, $b_{n} < 0$ for odd $n$ and $b_{n} = (n-1)[4\cdot2^{n}-\frac{3}{2}]-3 > 0$ for even $n$. It is easy to verify that $b_{0}$ and $b_{1}$ are both positive. Therefore, the series is alternating for $n\geq 2$, i.e.,

    \begin{eqnarray*} 0 < \sum\limits_{n = 0}^{+\infty}\prod\limits_{i = 0}^{n}(-\beta-i)(\frac{1}{\hat{k}})^{n}\frac{1}{(n+3)!}b_{n} < -\frac{11}{3!}(-\beta)+(-\beta)(-\beta-1)\frac{3}{4!}. \end{eqnarray*}

    Therefore, we have

    \begin{eqnarray*} d_{2}^{k} = \frac{-(A_{k-3}+B_{k-2}+C_{k-1}+\bar{C}_{k})}{2-\frac{\beta}{2}} > 0, d_{1}^{k} = \frac{-A_{k-2}-B_{k-1}-\bar{B}_{k}}{2-\frac{\beta}{2}} > 0. \end{eqnarray*}

    (IV) According to (3.8), we have

    \begin{eqnarray*} d_{k-1}^{k} = \frac{3(4-\beta)+(\beta-6)\cdot2^{1-\beta}}{4-\beta}. \end{eqnarray*}

    Therefore,

    \begin{eqnarray*} d_{k-1}^{k}-\frac{4}{3} = \frac{5(4-\beta)+3(\beta-6)2^{1-\beta}}{3(4-\beta)}. \end{eqnarray*}

    Let $f(\beta) = 5(4-\beta)+3(\beta-6)2^{1-\beta}$; by careful calculation, we have

    \begin{eqnarray*} \begin{array}{lll}{ } f^{\prime}(\beta) = -5+3\cdot2^{1-\beta}+3(\beta-6)\cdot2^{1-\beta}(-\ln2), \\ f^{\prime\prime}(\beta) = 3\cdot2^{2-\beta}(-\ln2)+3(\beta-6)2^{1-\beta}(\ln2)^{2} < 0. \end{array} \end{eqnarray*}

    Therefore, f^{\prime}(\beta) is a monotone decreasing function and f^{\prime}(1) = 15\ln2-2 > 0, 0 < f^{\prime}(1) < f^{\prime}(\beta) < f^{\prime}(0) . So f(\beta) is a monotone increasing function and f(0) < f(\beta) < f(1) = 0 . To sum up, we obtain \frac{4}{3} > d_{k-1}^{k} > 0 .

    We take $g(\beta) = 3(4-\beta)+(\beta-6)2^{1-\beta}$; since

    \begin{eqnarray*} g^{\prime}(\beta)& = &-3+2^{1-\beta}+(\beta-6)(-\ln2)\cdot2^{1-\beta}, \\ g^{\prime\prime}(\beta)& = &2^{2-\beta}(-\ln2)+(\beta-6)(-\ln2)^{2}\cdot2^{1-\beta} < 0. \end{eqnarray*}

    we can deduce that $g(\beta)$ is monotonically increasing and $0 = g(0) < g(\beta) < g(1)$. Therefore, $0 < d_{k-1}^{k} < \frac{4}{3}$.

    (V) As k\geq4 , we have

    \begin{eqnarray*} d_{k-2}^{k}& = &-C_{1}^{-1}(A_{1}+B_{2}+C_{3}) = \frac{3(\frac{\beta}{2}-2)-(4-\frac{\beta}{2})3^{1-\beta}+(9-\frac{3\beta}{2})2^{1-\beta}}{2-\frac{\beta}{2}}\\ & = &\frac{1}{4-\beta}[-3(4-\beta)-(8-\beta)3^{1-\beta}+3(6-\beta)2^{1-\beta}] \doteq\frac{1}{4-\beta}f(\beta). \end{eqnarray*}

    Next, we discuss the sign of $f(\beta)$. For all $\beta \in (0, 1)$, $4-\beta > 0$ always holds, so the sign of $d_{k-2}^{k}$ is determined by $f(\beta)$. After calculation, $f^{\prime}(0) > 0$ and $f^{\prime}(1) < 0$; therefore $f(\beta)$ first increases and then decreases. Since $f(0) = 0$ and $f(1) = -1 < 0$, we know that $f(\beta)$ has exactly one zero $\beta_{0}^{*}\in(0, 1)$. Furthermore, $f(\beta) > 0$ when $\beta\in(0, \beta_{0}^{*})$, and $f(\beta) < 0$ when $\beta\in(\beta_{0}^{*}, 1)$. That is, $d_{k-2}^{k}$ takes both positive and negative values on $\beta \in(0, 1)$.
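    The threshold $\beta_{0}^{*}$ can easily be located numerically. A small bisection sketch (ours, purely illustrative; the paper does not report this value) applied to the $f(\beta)$ above gives roughly $0.37$:

```python
def f(beta):
    # f(beta) from property (V): the sign of d_{k-2}^k equals the sign of f(beta)/(4-beta)
    return -3*(4-beta) - (8-beta)*3**(1-beta) + 3*(6-beta)*2**(1-beta)

lo, hi = 0.1, 1.0            # f(lo) > 0 > f(hi); the sign change is the interior zero
for _ in range(60):
    mid = 0.5*(lo + hi)
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
print(0.5*(lo + hi))          # beta_0^* is approximately 0.37 (numerically observed value)
```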

    (VI) The following equation can be obtained through direct calculation

    \begin{eqnarray} \frac{1}{4}(d_{k-1}^{k})^2+d_{k-2}^{k} = \frac{1}{4(4-\beta)^2}f(\beta), \end{eqnarray} (3.13)

    where

    \begin{eqnarray*} f(\beta) = -3(4-\beta)^2+6(4-\beta)(6-\beta)2^{1-\beta}-4(4-\beta)(8-\beta)3^{1-\beta}+(6-\beta)^2\cdot4^{1-\beta}. \end{eqnarray*}

    According to the Lagrange mean value theorem, we have $4^{1-\beta} > \frac{4}{3+\beta}\cdot 3^{1-\beta}$, and

    \begin{eqnarray*} f(\beta)& > & -3(4-\beta)^{2}-6(4-\beta)(-6+\beta)2^{1-\beta}+4(4-\beta)(-8+\beta)3^{1-\beta}\\ &&+(-\beta+6)^{2}\cdot \frac{4}{3+\beta}\cdot 3^{1-\beta}\\ & = & a_{1}+a_{2}\cdot 2^{1-\beta}+a_{3}\cdot 2^{1-\beta}(1+\frac{1}{2})^{1-\beta}\\ & = & a_{1}+2^{1-\beta}\{a_{2}+a_{3}[1+\frac{1}{2}(1-\beta)+\frac{(1-\beta)(-\beta)}{2}(\frac{1}{2})^{2}]\\ &&+a_{3}\cdot[\frac{(1-\beta)(-\beta)(-\beta-1)}{3!}(\frac{1}{2})^{3}+\ldots]\}\\ & = &a_{1}+2^{1-\beta}\{a_{2}+a_{3} a_{4}+a_{3}(1-\beta)\beta\sum\limits_{k = 0}^{+\infty}\Pi_{i = 0}^{k}(-\beta-1-i)\frac{-1}{(k+3)!}(\frac{1}{2})^{k+3}\}\\ &\doteq& a_{1}+2^{1-\beta}\{a_{2}+a_{3}\cdot a_{4}+a_{3}(1-\beta)\beta\sum\limits_{k = 0}^{+\infty}b_{k}\}, \end{eqnarray*}

    where

    \begin{eqnarray*} \begin{array}{l}{ } a_{1} = { } -3(4-\beta)^{2}, \quad a_{3} = { }\frac{4}{3+\beta}[-(4-\beta)(8-\beta)(3+\beta)+(6-\beta)^{2}], \\ a_{2} = { } 6(4-\beta)(6-\beta), \quad a_{4} = { }1+\frac{ 1-\beta}{2}+\frac{(1-\beta)(-\beta)}{2}(\frac{1}{2})^{2}, \\ { } b_{k}{ } = { }\Pi_{i = 0}^{k}(-1-i-\beta)\cdot \frac{-1}{(k+3)!}(\frac{1}{2})^{k+3}, \\ b_{0} = { }\frac{-(-\beta-1)}{3!}(\frac{1}{2})^{3} > 0, \quad { }|\frac{b_{k+1}}{b_{k}}| = \frac{\beta+2+k}{2(k+4)} < 1. \end{array} \end{eqnarray*}

    So \sum_{k = 0}^{+\infty}b_{k} is an alternating and convergent series, i.e., 0 < \sum_{k = 0}^{+\infty}b_{k} < b_{0} , we have

    \begin{eqnarray} \begin{array}{lll}{ } f(\beta)& > & { } a_{1}+2^{1-\beta}\{a_{2}+a_{3}\cdot a_{4}+a_{3}\beta(1-\beta)\sum\limits_{k = 0}^{+\infty}b_{k}\}\\ &\geq & { } a_{1}+2^{1-\beta}\{a_{2}+a_{3}\cdot a_{4}+a_{3}(1-\beta)\beta b_{0}\}\\ & = & { } a_{1}+2^{1-\beta}\{a_{2}+a_{3}[a_{4}+(1-\beta)\beta\cdot \frac{-(-\beta-1)}{3!}(\frac{1}{2})^{3}]\}\\ &\doteq & { } a_{1}+2^{1-\beta}[a_{2}+a_{3}\cdot a_{5}] \doteq { } f_{1}(\beta), \end{array} \end{eqnarray} (3.14)

    where

    \begin{eqnarray*} a_{5} = a_{4}+(1-\beta)\beta\cdot \frac{-(-\beta-1)}{3!}(\frac{1}{2})^{3} = \frac{48+24(1-\beta)-6(\beta-\beta^{2})+(\beta-\beta^{3})}{48}. \end{eqnarray*}

    By careful calculation, we obtain

    \begin{eqnarray} \begin{array}{llll}{ } f_{1}(\beta)& = & { } a_{1}+2^{1-\beta}[a_{2}+a_{3}\cdot a_{5}]\\ & = & { } \frac{-36}{12(3+\beta)}[\beta^{3}-5\beta^{2}-8\beta+48]\\ &&+{ } \frac{2^{1-\beta}}{12(3+\beta)}[\beta^{6}-16\beta^{5}+97\beta^{4}-278\beta^{3}+88\beta^{2}+732\beta+864]\\ & = & { } \frac{1}{12(3+\beta)}[-36(\beta^{3}-5\beta^{2}-8\beta+48)+2^{1-\beta}(\beta^{6}-16\beta^{5}+97\beta^{4}\\ &&-278\beta^{3}+88\beta^{2}+732\beta+864)]\\ &\doteq& { } \frac{1}{12(3+\beta)}\bar{f}_{3}(\beta). \end{array} \end{eqnarray} (3.15)

    Next, we estimate $\bar{f}_{3}(\beta)$. For all $\beta\in(0, 1)$, $\beta^{3} < \beta^{2}$ and $\beta^{6} > 0$, so we have

    \begin{eqnarray} \begin{array}{llll}{ } \bar{f}_{3}(\beta)& > & -36(\beta^{2}-5\beta^{2}-8\beta+48)+2^{1-\beta}(0-16\beta^{5}+97\beta^{4}-278\beta^{3}\\ &&+88\beta^{2}+732\beta+864)\\ & = & 144(\beta^{2}+2\beta-12)+2^{1-\beta}(-16\beta^{5}+97\beta^{4}-278\beta^{3}\\ &&+88\beta^{2}+732\beta+864)\\ &\doteq& 144(\beta^{2}+2\beta-12)+2^{1-\beta}\cdot f_{4}(\beta) \doteq f_{3}(\beta), \end{array} \end{eqnarray} (3.16)

    where

    \begin{eqnarray*} f_{4}(\beta) = -16\beta^{5}+97\beta^{4}-278\beta^{3}+88\beta^{2}+732\beta+864. \end{eqnarray*}

    After careful calculation, f^{\prime\prime}_{3}(\beta) < 0 . Therefore, f^{\prime}_{3}(\beta) is a monotone decreasing function, so f^{\prime}_{3}(0) > f^{\prime}_{3}(\beta) > f^{\prime}_{3}(1) . Similarly,

    \begin{eqnarray*} f^{\prime}_{3}(\beta) = 144(2\beta+2)+2^{1-\beta}[f_{4}(\beta)(-\ln2) +f^{\prime}_{4}(\beta)], f^{\prime}_{3}(0) > 0, f^{\prime}_{3}(1) < 0, \end{eqnarray*}

    we find that the sign of $f^{\prime}_{3}(\beta)$ changes from positive to negative, i.e., $f_{3}(\beta)$ first increases and then decreases. Since $f_{3}(0) = 0$ and $f_{3}(1) = 191 > 0$, we prove

    \begin{eqnarray} f_{3}(\beta) > 0. \end{eqnarray} (3.17)

    Combining (3.14), (3.15) with (3.16), (3.17), we have f(\beta) > f_{1}(\beta) > f_{3}(\beta) > 0 . From (3.13), we find

    \begin{eqnarray*} \frac{1}{4}(d_{k-1}^{k})^{2}+d_{k-2}^{k} > 0, \forall\beta \in (0, 1). \end{eqnarray*}

    To sum up, Lemma 3.1 is proved.

    Remark 3.1. From Lemma 3.1, the coefficient $d_{k-2}^{k}$ takes both positive and negative values on $\beta \in(0, 1)$: when $\beta\in(0, \beta_{0}^{*})$, $d_{k-2}^{k} > 0$; when $\beta\in(\beta_{0}^{*}, 1)$, $d_{k-2}^{k} < 0$. This brings great difficulty to the convergence and stability analysis of the proposed scheme, and the classical theoretical analysis methods become invalid. Therefore, a novel technique for the stability analysis for all $\beta \in(0, 1)$ will be given here.

    Next, we introduce a key quantity for the analysis of the numerical scheme (4.1):

    \begin{eqnarray} \rho = \frac{1}{2} d_{k-1}^k. \end{eqnarray} (3.18)

    By recombining the terms of Eq (4.1d), we have

    \begin{eqnarray*} \begin{aligned} &u^k_j-\rho u^{k-1}_j+\beta_0 C_1^{-1} (u_{\bar x, j}^k)_x\\ & = \rho(u^{k-1}_j-\rho u^{k-2}_j)+(\rho^2+d_{k-2}^k) u^{k-2}_j+d_{k-3}^k u^{k-3}_j+\cdots+d_0^k u^0_j \\ & = \rho(u^{k-1}_j-\rho u^{k-2}_j)+(\rho^2+d_{k-2}^k)(u^{k-2}_j-\rho u^{k-3}_j)+(\rho^3+\rho d_ {k-2}^k+d_{k-3}^k)(u^{k-3}_j-\rho u^{k-4}_j) \\ &+\cdots+(\rho^{k-2}+\rho^{k-4} d_{k-2}^k+\cdots+\rho d_3^k+d_2^k)(u^2_j-\rho u_j^1) \\ &+(\rho^{k-1}+\rho^{k-3} d _{k-2}^k+\cdots+\rho d_2^k+d_1^k)(u^1_j-\rho u^0_j) \\ &+(\rho^k+\rho^{k-2} d_{k-2}^k+\cdots+\rho d_1^k+d_0^k) u^0_j. \end{aligned} \end{eqnarray*}

    Next, we denote

    \begin{eqnarray} &\bar{d}_{k -i}^k = { }\rho^i+\sum\limits_{j = 2}^i \rho^{i-j} d_{k-j}^k, i = 2, \cdots, k. \end{eqnarray} (3.19)
    \begin{eqnarray} &\bar{u}^i_j = u^i_j-\rho u^{i-1}_j, \quad i = 1, \cdots, k . \end{eqnarray} (3.20)

    Thus (4.1d) can be equivalently written as follows:

    \begin{eqnarray} \bar{u}_j^k-\beta_0 C_1^{-1}(u_{\bar x, j}^k)_x = { }\rho \bar{u}_j^{k-1}+\sum\limits_{i = 2}^{k-1} \bar{d}_{k -i}^k \bar{u}_j^{k-i}+\bar{d}_0^k u_j^0, \ \ \ k\geq4. \end{eqnarray} (3.21)

    In the following Lemma 3.2, we give some useful properties of the coefficients in the numerical scheme (3.21).

    Lemma 3.2. When $k \geq 4$ and $0 < \beta < 1$, the coefficients in the numerical scheme (3.21) satisfy

    (I) 0 < \rho < \frac{2}{3} ,

    (II) \bar{d}_{k- i}^k > 0, i = 2, 3, \cdots, k ,

    (III) { }\rho+\sum_{i = 2}^{k-1} \bar{d}_{k-i}^k+\bar{d}_0^k \leq 1.

    Proof. (I) According to (3.18) and (IV) in Lemma 3.1, we immediately give the estimate for \rho .

    (II) As i = 2 , we have

    \begin{eqnarray*} \begin{array}{lll}{ } \bar d_{k-2}^{k} = { } d_{k-2}^{k}+\rho^{2} = { } d_{k-2}^{k}+\frac{1}{4}(d_{k-1}^{k})^{2}. \end{array} \end{eqnarray*}

    According to (VI) in Lemma 3.1, we get \bar d_{k-2}^{k} > 0 . Furthermore,

    \begin{eqnarray*} \bar d_{k-i}^{k} = \bar d_{k-i+1}^{k} \rho +d_{k-i}^{k} , i = 3, \cdots, k. \end{eqnarray*}

    Because of \rho > 0 and (III) in Lemma 3.1, we obtain

    \begin{eqnarray*} \bar d_{k-i}^{k} > 0, i = 3, 4, \cdots, k. \end{eqnarray*}

    (III) Let P_{k} = { }\rho + \sum_{i = 2}^{k-1}\bar d_{k-i}^{k}+\bar d_{0}^{k} , we can obtain from (3.19) that

    \begin{eqnarray*} \begin{array}{llll}{ } P_{k}& = & { } \rho \sum\limits_{i = 0}^{k-1} \rho^{i}+d_{k-2}^{k} \sum\limits_{i = 0}^{k-2} \rho^{i}+ \cdots +d_{1}^{k}\sum\limits_{i = 0}^{1}\rho^{i}+d_{0}^{k}\\ & = & { } \rho\frac{1-\rho^{k}}{1-\rho}+d_{k-2}^{k}\frac{1-\rho^{k-1}}{1-\rho}+ \cdots +d_{2}^{k}\frac{1-\rho^{3}}{1-\rho}+d_{1}^{k}\frac{1-\rho^{2}}{1-\rho}+d_{0}^{k}. \end{array} \end{eqnarray*}

    That is,

    \begin{eqnarray*} \begin{array}{llll}{ } P_{k}(1-\rho) = (1-\rho^{k})\rho+(1-\rho^{k-1})d_{k-2}^{k}+ \cdots+(1-\rho^{2})d_{1}^{k}+(1-\rho)d_{0}^{k}. \end{array} \end{eqnarray*}

    Then, using (II), (III) and (VI) in Lemma 3.1 gives

    \begin{eqnarray*} \begin{array}{llll}{ } P_{k}(1-\rho)&\leq& { } (1-\rho^{k})\rho+(1-\rho^{k-1})d_{k-2}^{k}+\sum\limits_{i = 3}^{k}d_{k-i}^{k}\\ & = & { } (\rho +\sum\limits_{i = 2}^{k}d_{k-i}^{k})-(d_{k-2}^{k}+\rho^{2})\rho^{k-1}\\ & = & { } (1-\rho)-(d_{k-2}^{k}+\rho^{2})\rho^{k-1} < (1-\rho). \end{array} \end{eqnarray*}

    Therefore, we have proved ${ }\rho+\sum\limits_{i = 2}^{k-1} \bar{d}_{k-i}^k+\bar{d}_0^k \leq 1$.
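    The recursion $\bar d_{k-i}^{k} = \rho\,\bar d_{k-i+1}^{k}+d_{k-i}^{k}$ used above also gives a convenient way to tabulate the $\bar d$ coefficients and to observe the three properties of Lemma 3.2 numerically. The following Python sketch is our illustration only (it re-states the helper for the $d_{m}^{k}$ from the earlier sketch, and all names are ours):

```python
import numpy as np

def d_weights(beta, k):
    # d_m^k of (3.7)-(3.9), as in the earlier sketch
    A = lambda j: -(2-beta)*((j-1)**(1-beta)+j**(1-beta))/2 - (j-1)**(2-beta) + j**(2-beta)
    B = lambda j: 2*((2-beta)*(j-1)**(1-beta) + (j-1)**(2-beta) - j**(2-beta))
    C = lambda j: -3*(2-beta)*(j-1)**(1-beta)/2 + (2-beta)*j**(1-beta)/2 - (j-1)**(2-beta) + j**(2-beta)
    w = np.zeros(k+1)
    w[0] = (2-beta)*(k-1)**(1-beta)/2 - 3*(2-beta)*k**(1-beta)/2 - (k-1)**(2-beta) + k**(2-beta)
    w[1] = 2*(2-beta)*k**(1-beta) + 2*(k-1)**(2-beta) - 2*k**(2-beta)
    w[2] = -(2-beta)*(k**(1-beta)+(k-1)**(1-beta))/2 + k**(2-beta) - (k-1)**(2-beta)
    for j in range(1, k):
        w[k-j-1] += A(j); w[k-j] += B(j); w[k-j+1] += C(j)
    return -w[:k]/((4-beta)/2)

beta, k = 0.8, 20
d = d_weights(beta, k)
rho = 0.5*d[k-1]                        # (3.18)
dbar = np.zeros(k)                      # dbar[m] = \bar d_m^k, m = 0, ..., k-2
dbar[k-2] = rho**2 + d[k-2]             # i = 2 in (3.19)
for m in range(k-3, -1, -1):            # \bar d_{k-i}^k = rho*\bar d_{k-i+1}^k + d_{k-i}^k
    dbar[m] = rho*dbar[m+1] + d[m]
print(0 < rho < 2/3)                    # Lemma 3.2 (I)
print((dbar[:k-1] > 0).all())           # Lemma 3.2 (II)
print(rho + dbar[:k-1].sum() <= 1)      # Lemma 3.2 (III)
```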

    For $k = 3$, the equivalent form of (4.1c) is as follows:

    \begin{eqnarray} \bar{u}_j^3-\beta_0 C_1^{-1}(u_{\bar x, j}^3)_x = \bar{d}_{2}^3 \bar{u}_j^{2}+\bar{d}_{1}^3 \bar{u}_j^{1}+\bar{d}_{0}^3 u_j^{0}, \end{eqnarray} (3.22)

    where

    \begin{eqnarray} \bar{d}_2^3 = d_2^3-\rho, \bar{d}_1^3 = \bar{d}_2^3\rho+d_1^3, \bar{d}_0^3 = \bar{d}_1^3 \rho+d_0^3. \end{eqnarray} (3.23)

    Note that $\bar{d}_2^3 = d_2^3-\rho\neq\rho$ in general, so the case $k = 3$ needs separate treatment; some properties of the coefficients in the numerical scheme (3.22) are given in the following Lemma 3.3.

    Lemma 3.3. The coefficients of (3.22) have the following properties when 0 < \beta < 1 and k = 3 ,

    (I) \bar{d}_{3-i}^3 > 0, i = 1, 2, 3 ,

    (II) \bar{d}_{2}^{3}-\rho < 0 ,

    (III) \bar{d}_{0}^{3}+\bar{d}_{1}^{3}+\bar{d}_{2}^{3}\leq 1 .

    Proof. (I) We first prove that

    \begin{eqnarray} \bar{d}_{2}^{3}\geq 0 , \quad \bar{d}_{1}^{3}\geq 0, \quad \bar{d}_{0}^{3}\geq 0. \end{eqnarray} (3.24)

    By careful calculation, one immediately obtains

    \begin{eqnarray*} \bar{d}_{2}^{3}& = & { } \frac{2}{4-\beta}[6-(\frac{\beta}{2}+2)3^{1-\beta}-\frac{3}{2}\beta]-\frac{1}{2}[3+\frac{(\frac{\beta}{2}-3)2^{1-\beta}}{2-\frac{\beta}{2}}]\\ & = & { } \frac{3}{2}+\frac{4+\beta}{\beta-4}3^{1-\beta}-\frac{6-\beta}{\beta-4}2^{-\beta}\\ & > & { } \frac{3}{2}-(\frac{2\beta-2}{4-\beta})3^{1-\beta} > { } 0. \end{eqnarray*}

    Since $d_{1}^{3} = \frac{2}{4-\beta}[(2\beta-2)3^{1-\beta}-6+\frac{3}{2}\beta]$, we have

    \begin{eqnarray*} \begin{array}{llll}{ } \bar{d}_{1}^{3}& = & { } -\frac{3}{4}+\frac{5\beta-4}{2(4-\beta)}\cdot3^{1-\beta} +\frac{(4+\beta)(6-\beta)}{(4-\beta)^2}\cdot3^{1-\beta}\cdot2^{1-\beta} -\frac{(6-\beta)^2}{(4-\beta)^2}2^{-2\beta}\\ &\doteq& { } -\frac{3}{4}+a_{1}\cdot3^{1-\beta}+a_{2}\cdot3^{1-\beta}\cdot2^{-\beta}+a_{3}\cdot2^{-2\beta}, \end{array} \end{eqnarray*}

    where

    \begin{eqnarray*} \begin{array}{lll}{ } a_{1} = \frac{5\beta-4}{2(4-\beta)}, a_{2} = \frac{(6-\beta)(4+\beta)}{(4-\beta)^2}, a_{3} = -\frac{(6-\beta)^2}{(4-\beta)^2}. \end{array} \end{eqnarray*}

    Next, using a Taylor expansion yields

    \begin{eqnarray*} \begin{array}{lll}{ } &&\bar{d}_{1}^{3} = { } -\frac{3}{4}+a_{1}\cdot2^{1-\beta}(1+\frac{1}{2})^{1-\beta} +a_{2}\cdot2^{1-\beta}(1+\frac{1}{2})^{1-\beta}\cdot2^{-\beta}+a_{3}\cdot2^{-2\beta}\\ & = & { } -\frac{3}{4}+[a_{1}\cdot2^{1-\beta}+a_{2}\cdot2^{1-2\beta}] (1+\frac{1}{2})^{1-\beta}+a_{3}\cdot2^{-2\beta}\\ & = & { } -\frac{3}{4}+[a_{1}\cdot2^{1-\beta}+a_{2}\cdot2^{1-2\beta}] [1+\frac{1-\beta}{1!}\frac{1}{2}+\frac{(1-\beta)(-\beta)}{2!}(\frac{1}{2})^{2} +\cdots] +a_{3}\cdot2^{-2\beta}\\ & = & { } -\frac{3}{4}+a_{1}\cdot2^{1-\beta}+a_{2}\cdot2^{1-2\beta}+a_{3}\cdot2^{-2\beta}\\ &&{ }+[a_{1}\cdot2^{1-\beta}+a_{2}\cdot2^{1-2\beta}][\frac{1}{2}(1-\beta) +\frac{(1-\beta)(-\beta)}{2!}(\frac{1}{2})^{2} +\cdots]. \end{array} \end{eqnarray*}

    Next, we show that $a_{1}\cdot2^{1-\beta}+a_{2}\cdot2^{1-2\beta} > 0$:

    \begin{eqnarray*} \begin{array}{llll}{ } a_{1}\cdot2^{1-\beta}+a_{2}\cdot2^{1-2\beta}& = & { } \frac{2^{-\beta}}{(4-\beta)^{2}}[(5\beta-4)(4-\beta)+2(4+\beta)(6-\beta)2^{-\beta}]\\ &\geq& { } \frac{2^{-\beta}}{(4-\beta)^{2}}[(5\beta-4)(4-\beta)+(4+\beta)(6-\beta)]\\ &\doteq& { } \frac{2^{-\beta}}{(4-\beta)^{2}}f(\beta), \end{array} \end{eqnarray*}

    where $f(\beta) = (5\beta-4)(4-\beta)+(4+\beta)(6-\beta)$. Since $f^{\prime}(\beta) = 26-12\beta > 0$, $f(\beta)$ is monotonically increasing, and $f(\beta)\geq f(0) = 8 > 0$, so $f(\beta) > 0$.

    Since $\frac{1-\beta}{1!}\cdot\frac{1}{2}+\frac{(1-\beta)(-\beta)}{2!}\cdot(\frac{1}{2})^{2} +\frac{(1-\beta)(-\beta)(-\beta-1)}{3!}(\frac{1}{2})^{3}+\cdots\doteq\sum_{k = 0}^{+\infty}a_{k}$ is an alternating series with a positive first term, and $\sum_{k = 0}^{+\infty}a_{k} = a_{0}+a_{1}+\sum_{k = 2}^{+\infty}a_{k}$, where $\sum_{k = 2}^{+\infty}a_{k}$ is an alternating series whose first term is positive, we have $0 < \sum_{k = 2}^{+\infty}a_{k} < a_{2}$, and

    \begin{eqnarray*} \begin{array}{lll}{ } { }\bar{d}_{1}^{3}& = & { } -\frac{3}{4}+a_{1}\cdot2^{1-\beta}+a_{2}\cdot2^{1-2\beta}+a_{3}\cdot2^{-2\beta}\\ && { }+[a_{1}\cdot2^{1-\beta}+a_{2}\cdot2^{1-2\beta}][\frac{1}{2}(1-\beta)+\frac{(1-\beta)(-\beta)}{2!}(\frac{1}{2})^{2}]\\ & = & { } -\frac{3}{4}+a_{3}\cdot2^{-2\beta}+[a_{1}\cdot2^{1-\beta}+a_{2}\cdot2^{1-2\beta}][1+\frac{1}{2}(1-\beta)+\frac{(1-\beta)(-\beta)}{2!}(\frac{1}{2})^{2}]\\ &\doteq& { } -\frac{3}{4}+\frac{2^{-\beta}}{8(4-\beta)^{2}}[f_{1}(\beta)+f_{2}(\beta)], \end{array} \end{eqnarray*}

    where

    \begin{eqnarray*} f_{1}(\beta) = -192+368\beta-196\beta^2+49\beta^3-5\beta^4, \\ f_{2}(\beta) = (144-48\beta-2\beta^2+7\beta^3-\beta^4)2^{1-\beta}. \end{eqnarray*}

    Next, we prove $\bar{d}_{1}^{3} > 0$, that is, $-\frac{3}{4}+\frac{2^{-\beta}}{8(4-\beta)^{2}}[f_{1}(\beta)+f_{2}(\beta)] > 0\Longleftrightarrow f_{1}(\beta)+f_{2}(\beta) > \frac{3}{4}\cdot8(4-\beta)^{2}2^{\beta} = 6(4-\beta)^{2}2^{\beta}\doteq6f_{3}(\beta)$. So to prove $\bar{d}_{1}^{3} > 0$, it suffices to prove $f_{1}(\beta)+f_{2}(\beta)-6f_{3}(\beta) > 0$. Denote $\bar{f}(\beta) = f_{1}(\beta)+f_{2}(\beta)-6f_{3}(\beta)$. Since $\bar{f}(\beta)$ first increases and then decreases, and $\bar{f}(0) = 0$, $\bar{f}(1) = 16 > 0$, we have $\bar{f}(\beta) > 0$, and therefore $\bar{d}_{1}^{3} > 0$.

    Because $d_{0}^{3} = C_{1}^{-1}[2-(3^{2-\beta}+1)\frac{\beta}{2}] > 0$, we have $\bar{d}_{0}^{3} = \bar{d}_{1}^{3}\rho+d_{0}^{3} > 0$. To sum up, (3.24) is proved.

    (II) Because of

    \begin{eqnarray*} \bar{d}_{2}^{3}-\rho & = & { } d_{2}^{3}-2\rho\\ & = & { } \frac{2}{4-\beta}[6-(\frac{\beta}{2}+2)3^{1-\beta} -\frac{3}{2}\beta]-[3+\frac{(\frac{\beta}{2}-3)2^{1-\beta}}{2-\frac{\beta}{2}}] < { } 0. \end{eqnarray*}

    That is \bar{d}_{2}^{3}-\rho < 0 .

    (III) According to (3.23), we have

    \begin{eqnarray*} \begin{array}{llll}{ } \bar{d}_{0}^{3}+\bar{d}_{1}^{3}+\bar{d}_{2}^{3}& = & { } d_{2}^{3}-\rho+\bar{d}_{2}^{3}\rho+d_{1}^{3}+\bar{d}_{1}^{3}\rho+d_{0}^{3}\\ & = & { } d_{2}^{3}-\rho+(d_{2}^{3}-\rho)\rho+d_{1}^{3}+(d_{2}^{3}-\rho)\rho^{2}+\rho d_{1}^{3}+d_{0}^{3}\\ & = & { } (d_{2}^{3}-\rho)(1+\rho+\rho^{2})+d_{1}^{3}(1+\rho)+d_{0}^{3}\\ & = & { } (d_{2}^{3}-\rho)\frac{1-\rho^{3}}{1-\rho}+d_{1}^{3}\frac{1-\rho^{2}}{1-\rho}+d_{0}^{3}\frac{1-\rho}{1-\rho}\\ &\doteq& { } P_{3}. \end{array} \end{eqnarray*}

    Therefore, we have

    \begin{eqnarray} \begin{array}{llll}{ } (1-\rho)P_{3}& = & { } (d_{2}^{3}-\rho)(1-\rho^{3})+d_{1}^{3}(1-\rho^{2})+d_{0}^{3}(1-\rho)\\[3pt] & = & { } (d_{0}^{3}+d_{1}^{3}+d_{2}^{3}-\rho)-(d_{2}^{3}-\rho)\rho^{3}-d_{1}^{3}\rho^{2}-d_{0}^{3}\rho\\[3pt] &\leq & { } (d_{0}^{3}+d_{1}^{3}+d_{2}^{3}-\rho)-(d_{2}^{3}-\rho)\rho^{3}-d_{1}^{3}\rho^{2}\\[3pt] & = & { } (d_{0}^{3}+d_{1}^{3}+d_{2}^{3}-\rho)-\rho^{2}\bar{d}_{1}^{3}\leq d_{0}^{3}+d_{1}^{3}+d_{2}^{3}-\rho, \end{array} \end{eqnarray} (3.25)

    where we used $d_{0}^{3} > 0$ and $\bar{d}_{1}^{3} > 0$. By careful calculation, we have $d_{0}^{3}+d_{1}^{3}+d_{2}^{3} = 1$. According to (3.25), we obtain $(1-\rho)P_{3}\leq 1-\rho$, i.e.,

    \begin{eqnarray*} \bar{d}_{0}^{3}+\bar{d}_{1}^{3}+\bar{d}_{2}^{3}\leq 1. \end{eqnarray*}

    The proof is then completed.

    Collecting the above results, we restate the full discrete scheme (3.10), whose equivalent forms for $k = 3$ and $k\geq4$ are (3.22) and (3.21), as follows:

    \left\{ \begin{array}{l} \widehat{D_0} u_j^0+\widehat{D_1} u_j^1+\widehat{D_2} u_j^2-\beta_0 (u_{\bar x, j}^1)_x = 0, \ \ \ k = 1, \ \ \ \ \ \ (4.1{\rm{a}})\\ \widetilde{D_0} u_j^0+\widetilde{D_1} u_j^1+\widetilde{D_2} u_j^2-\beta_0 (u_{\bar x, j}^2)_x = 0, \ \ \ k = 2, \ \ \ \ \ \ (4.1{\rm{b}})\\ u^3_j-\beta_0 C_1^{-1} (u_{\bar x, j}^3)_x = d_2^3 u^2_j+d_1^3 u^1_j+d_0^3 u^0_j, \ \ \ k = 3, \ \ \ \ \ \ (4.1{\rm{c}})\\ u^k_j-\beta_0 C_1^{-1} (u_{\bar x, j}^k)_x = \sum\limits_{i = 1}^k d_{k-i}^k u^{k-i}_j, \ \ \ k \geq 4.\ \ \ \ \ \ (4.1{\rm{d}})\end{array} \right.

    We give the estimates of $\|\bar{u}^{1}\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^1]|^2$ and $\|\bar{u}^{2}\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^2]|^2$ in the following Lemma 4.1.

    Lemma 4.1. Let $\widehat{\beta} = \min\{-\widehat{D_1}\widetilde{D_1}, \widetilde{D_2}\widehat{D_2}, -\widetilde{D_1}, \widehat{D_2}\}$ and $\widehat{\alpha} = \max\{\widehat{D_0}\widetilde{D_1}, |-\widetilde{D_0}\widehat{D_2}|\}$. Then we have

    \left\{ \begin{array}{l} \|\bar{u}^{1}\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^1]|^2 \leqslant M_0\|u^0\|_0^2, \ \ \ \ \ \ \ \ \ \ \ (4.2) \\ \|\bar{u}^{2}\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^2]|^2\leqslant M_0\|u^0\|_0^2, \ \ \ \ \ \ \ \ \ \ \ \ (4.3) \end{array} \right.

    where M_0 satisfies

    \begin{eqnarray} M_0 = \max \{8(\frac{\widehat{\alpha}}{\widehat{\beta}})^2+2\rho^2, 8(\frac{\widehat{\alpha}}{\widehat{\beta}})^2(1+\rho^2)\}. \end{eqnarray} (4.4)

    Proof. Multiplying both sides of (4.1a) (the case $k = 1$) by $-\widetilde{D_1}u_j^1h$ and summing over $j$, one immediately gets

    \begin{eqnarray} -\widehat{D_0}\widetilde{D_1}(u^0, u^1)-\widehat{D_1}\widetilde{D_1}(u^1, u^1)-\widehat{D_2}\widetilde{D_1}(u^2, u^1)-\beta_0 \widetilde{D_1}\|u_{\bar{x}}^1]|^2 = 0. \end{eqnarray} (4.5)

    Multiplying both sides of (4.1b) (the case $k = 2$) by $\widehat{D_2}u_j^2h$, summing over $j$ and using (3.2), one immediately gets

    \begin{eqnarray} \widetilde{D_0}\widehat{D_2}(u^0, u^2)+\widetilde{D_1}\widehat{D_2}(u^1, u^2)+\widetilde{D_2}\widehat{D_2} (u^2, u^2)+\beta_0\widehat{D_2}\|u_{\bar{x}}^2]|^2 = 0. \end{eqnarray} (4.6)

    Adding Eqs (4.5) and (4.6) and combining similar terms, one obtains

    \begin{eqnarray*} -\widehat{D_1}\widetilde{D_1}\|u^1\|_0^2+\widetilde{D_2}\widehat{D_2}\|u^2\|_0^2 -\beta_0 \widetilde{D_1}\|u_{\bar{x}}^1]|^2 +\beta_0\widehat{D_2}\|u_{\bar{x}}^2]|^2 = (u^0, \widehat{D_0}\widetilde{D_1}u^1-\widetilde{D_0}\widehat{D_2}u^2). \end{eqnarray*}

    Since $-\widehat{D_1}\widetilde{D_1}$, $\widetilde{D_2}\widehat{D_2}$, $-\widetilde{D_1}$, $\widehat{D_2}$ and $\widehat{D_0}\widetilde{D_1}$ are all positive numbers depending on $\beta$, we have $\widehat{\beta} > 0$ and $\widehat{\alpha} > 0$, and hence

    \begin{eqnarray*} &&\|u^1\|_0^2+\|u^2\|_0^2 +\beta_0\|u_{\bar{x}}^1]|^2 +\beta_0\|u_{\bar{x}}^2]|^2\\ &\leq&\frac{\widehat{\alpha}}{\widehat{\beta}}\|u^0\|_0(\|u^1\|_0+\|u^2\|_0) \leq\frac{\widehat{\alpha}}{\widehat{\beta}}\|u^0\|_0(\|u^1\|_0+\|u^2\|_0 +\sqrt{\beta_0}\|u_{\bar{x}}^1]|+\sqrt{\beta_0}\|u_{\bar{x}}^2]|)\\ & = &\frac{\widehat{\alpha}}{\widehat{\beta}}\|u^0\|_0\cdot\|u^1\|_0 +\frac{\widehat{\alpha}}{\widehat{\beta}}\|u^0\|_0\cdot\|u^2\|_0 +\frac{\widehat{\alpha}}{\widehat{\beta}}\|u^0\|_0\cdot\sqrt{\beta_0}\|u_{\bar{x}}^1]| +\frac{\widehat{\alpha}}{\widehat{\beta}}\|u^0\|_0\cdot\sqrt{\beta_0}\|u_{\bar{x}}^2]|\\ &\leq&\frac{1}{2}[4(\frac{\widehat{\alpha}}{\widehat{\beta}})^2\|u^0\|_0^2 +\|u^1\|_0^2+\|u^2\|_0^2 +\beta_0\|u_{\bar{x}}^1]|^2 +\beta_0\|u_{\bar{x}}^2]|^2]. \end{eqnarray*}

    That is,

    \begin{eqnarray} \|u^1\|_0^2+\|u^2\|_0^2 +\beta_0\|u_{\bar{x}}^1]|^2 +\beta_0\|u_{\bar{x}}^2]|^2\leq4(\frac{\widehat{\alpha}}{\widehat{\beta}})^2\|u^0\|_0^2. \end{eqnarray} (4.7)

    Since $C_1 = \frac{4-\beta}{2}$, we have $C_1^{-1} = \frac{2}{4-\beta}\in (\frac{1}{2}, \frac{2}{3})$, so $C_1^{-1} < 1$ and $\beta_0C_1^{-1} < \beta_0$. From (4.7), we get

    \left\{ \begin{array}{l} \|u^1\|_0^2 +\beta_0C_1^{-1}\|u_{\bar{x}}^1]|^2 \leq{ } 4(\frac{\widehat{\alpha}}{\widehat{\beta}})^2\|u^0\|_0^2, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4.8)\\ \|u^2\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^2]|^2\leq{ }4(\frac{\widehat{\alpha}}{\widehat{\beta}})^2\|u^0\|_0^2. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4.9) \end{array} \right.

    According to $\overline{u}^{1} = u^{1}-\rho u^0$ and the triangle inequality, one gets

    \begin{eqnarray} \|\bar{u}^{1}\|_0^2 = \|u^{1}-\rho u^{0}\|_0^2 \leq(\|u^1\|_0+\rho\|u^0\|_0)^2 \leqslant 2\|u^1\|_0^2+2 \rho^2\|u^0\|_0^2. \end{eqnarray} (4.10)

    Using (4.8) and (4.10), it is easy to obtain that

    \begin{eqnarray} \|\bar{u}^{1}\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^1]|^2 \leqslant 2\|u^1\|_0^2+2 \rho^2\|u^0\|_0^2+2\beta_0C_1^{-1}\|u_{\bar{x}}^1]|^2 \leq[8(\frac{\widehat{\alpha}}{\widehat{\beta}})^2+2\rho^2]\|u^0\|_0^2. \end{eqnarray} (4.11)

    Similarly, we will estimate \|\bar{u}^{2}\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^2]|^2 . Using (4.8) and (4.9), one can get

    \begin{eqnarray} \|\bar{u}^{2}\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^2]|^2 &\leq&2\|u^2\|_0^2+2 \rho^2\|u^1\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^2]|^2\\ &\leq&2(\|u^2\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^2]|^2)+2\rho^2(\|u^1\|_0^2 +\beta_0C_1^{-1}\|u_{\bar{x}}^1]|^2)\\ &\leq&2\cdot4(\frac{\widehat{\alpha}}{\widehat{\beta}})^2\|u^0\|_0^2 +2\rho^2\cdot4(\frac{\widehat{\alpha}}{\widehat{\beta}})^2\|u^0\|_0^2\\ & = &8(\frac{\widehat{\alpha}}{\widehat{\beta}})^2(1+\rho^2)\|u^0\|_0^2. \end{eqnarray} (4.12)

    In summary, using (4.4) and combining (4.11) with (4.12), we obtain

    \begin{eqnarray*} \left\{ \begin{array}{ll} \|\bar{u}^{1}\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^1]|^2 \leqslant M_0\|u^0\|_0^2, \\ \|\bar{u}^{2}\|_0^2+\beta_0C_1^{-1}\|u_{\bar{x}}^2]|^2\leqslant M_0\|u^0\|_0^2. \end{array} \right. \end{eqnarray*}

    The proof of Lemma 4.1 is completed.

    Next, for k\geq3 , we give the estimate for \|\bar{u}^k\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^k]|^2 in the following Lemma 4.2, which is an important conclusion for the stability analysis of the proposed scheme.

    Lemma 4.2. $\|\bar{u}^k\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^k]|^2\leq M_0\|u^0\|_0^2, \ \ 3\leq k\leq K$, where $M_0$ is defined in (4.4).

    Proof. First, we deduce the general formula. Using (1.3) and (3.4), we have

    \begin{eqnarray*} &&-2\sum\limits_{j = 1}^{N-1}(u_{\bar{x}, j}^k)_x \bar{u}_j^k h = -2((u_{\bar{x}}^k)_x, \bar{u}^k) = 2(u_{\bar{x}}^{k}, \bar{u}_{\bar{x}}^k]\\ & = &(u_{\bar{x}}^k+\bar{u}_{\bar{x}}^k+\rho u_{\bar{x}}^{k-1}, \bar{u}_{\bar{x}}^k] = (\bar{u}_{\bar{x}}^k, \bar{u}_{\bar{x}}^k]+(u_{\bar{x}}^k+\rho u_{\bar{x}}^{k-1}, \bar{u}_{\bar{x}}^k]\\ & = &(\bar{u}_{\bar{x}}^k, \bar{u}_{\bar{x}}^k]+(u_{\bar{x}}^k+\rho u_{\bar{x}}^{k-1}, u_{\bar{x}}^k-\rho u_{\bar{x}}^{k-1}]\\ & = &(\bar{u}_{\bar{x}}^k, \bar{u}_{\bar{x}}^k]+(u_{\bar{x}}^k, u_{\bar{x}}^k]-\rho(u_{\bar{x}}^k, u_{\bar{x}}^{k-1}]+\rho(u_{\bar{x}}^{k-1}, u_{\bar{x}}^k]-\rho^2(u_{\bar{x}}^{k-1}, u_{\bar{x}}^{k-1}]. \end{eqnarray*}

    That is,

    \begin{eqnarray} -2\sum\limits_{j = 1}^{N-1}(u_{\bar x, j}^k)_x\bar{u}_j^kh = \| \bar{u}_{\bar{x}}^k]|^2+\| u_{\bar{x}}^k]|^2-\rho^2 \|u_{\bar{x}}^{k-1}]|^2. \end{eqnarray} (4.13)

    Multiplying both sides of (3.22) (the case $k = 3$) by $2\bar{u}_j^3h$, summing over $j$ and using (4.13), we get

    \begin{eqnarray*} \|\bar{u}^3\|_0^2+\beta_0 C_1^{-1}\|u_{\bar{x}}^3]|^2 -\beta_0 C_1^{-1} \rho^2\|u_{\bar{x}}^2]|^2 \leqslant \bar{d}_2^3\| \bar{u}^2\|_0^2+\bar{d}_1^3\| \bar{u}^1\|_0^2+\bar{d}_0^3\| u^0 \|_0^2. \end{eqnarray*}

    According to (I) in Lemma 3.2 and (II) in Lemma 3.3, we have

    \begin{eqnarray} &&\|\bar{u}^3\|_0^2+\beta_0 C_1^{-1}\|u_{\bar{x}}^3]|^2\\ &\leq& \bar{d}_2^3\| \bar{u}^2\|_0^2+\beta_0 C_1^{-1} \rho^2\|u_{\bar{x}}^2]|^2 +\bar{d}_1^3\| \bar{u}^1\|_0^2+\bar{d}_0^3\| u^0 \|_0^2\\ &\leq& \rho(\| \bar{u}^2\|_0^2+\beta_0 C_1^{-1} \rho\|u_{\bar{x}}^2]|^2) +\bar{d}_1^3\| \bar{u}^1\|_0^2+\bar{d}_0^3\| u^0 \|_0^2\\ &\leq& \rho(\| \bar{u}^2\|_0^2+\beta_0 C_1^{-1}\|u_{\bar{x}}^2]|^2) +\bar{d}_1^3(\| \bar{u}^1\|_0^2+\beta_0 C_1^{-1}\|u_{\bar{x}}^1]|^2)+\bar{d}_0^3\| u^0 \|_0^2. \end{eqnarray} (4.14)

    By direct computation, it is easy to deduce that $\bar{d}_{0}^{3}+\bar{d}_{1}^{3}+\rho\leq 1$. Using (4.2) and (4.3), (4.14) becomes

    \begin{eqnarray} \|\bar{u}^3\|_0^2+\beta_0 C_1^{-1}\|u_{\bar{x}}^3]|^2 \leq M_0(\rho+\bar{d}_1^3+\bar{d}_0^3)\| u^0 \|_0^2 \leq M_0\| u^0 \|_0^2. \end{eqnarray} (4.15)

    For $k\geq4$, multiplying both sides of (3.21) by $2\bar{u}_j^kh$, summing over $j$ and using (4.13), one gets

    \begin{eqnarray} &&2\|\bar{u}^k\|_0^2+\beta_0 C_1^{-1}\|\bar{u}_{\bar{x}}^k]|^2+\beta_0 C_1^{-1} \| u_{\bar x}^k]|^2-\beta_0 C_1^{-1} \rho^2 \| u_{\bar{x}}^{k-1}]|^2 \\ &\leq & \rho(\|\bar{u}^{k-1}\|_0^2+\|\bar{u}^k\|_0^2) + \sum\limits_{i = 2}^{k-1} \bar{d}_{k-i}^k(\|\bar{u}^{k-i}\|_0^2+\|\bar{u}^k\|_0^2) +\bar{d}_0^k(\|u^0\|_0^2+\|\bar{u}^k\|_0^2) \\ & = & \rho\|\bar{u}^{k-1}\|_0^2+\sum\limits_{i = 2}^{k-1} \bar{d}_{k-i}^k\|\bar{u}^{k-i}\|_0^2+\bar{d}_0^k\|u^0\|_0^2 +(\rho+\sum\limits_{i = 2}^{k-1} \bar{d}_{k-i}^k+\bar{d}_0^k)\|\bar{u}^k\|_0^2. \end{eqnarray} (4.16)

    According to (III) in Lemma 3.2, the above inequality (4.16) becomes

    \begin{eqnarray} &&\|\bar{u}^k\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^k]|^2-\beta_0 C_1^{-1} \rho^2 \| u_{\bar{x}}^{k-1}]|^2 \\ &\leq & \rho\|\bar{u}^{k-1}\|_0^2+\sum\limits_{i = 2}^{k-1} \bar{d}_{k-i}^k\|\bar{u}^{k-i}\|_0^2+\bar{d}_0^k\|u^0\|_0^2. \end{eqnarray} (4.17)

    By (I) in Lemma 3.2, $0 < \rho < 1$; combining this with (4.17) yields

    \begin{eqnarray*} &&\|\bar{u}^k\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^k]|^2 \\ &\leq & \rho\|\bar{u}^{k-1}\|_0^2+\beta_0 C_1^{-1} \rho^2 \| u_{\bar{x}}^{k-1}]|^2 +\sum\limits_{i = 2}^{k-1} \bar{d}_{k-i}^k\|\bar{u}^{k-i}\|_0^2+\bar{d}_0^k\|u^0\|_0^2\\ & = & \rho(\|\bar{u}^{k-1}\|_0^2+\beta_0 C_1^{-1} \rho \| u_{\bar{x}}^{k-1}]|^2) +\sum\limits_{i = 2}^{k-1} \bar{d}_{k-i}^k\|\bar{u}^{k-i}\|_0^2+\bar{d}_0^k\|u^0\|_0^2\\ &\leq & \rho(\|\bar{u}^{k-1}\|_0^2+\beta_0 C_1^{-1}\| u_{\bar{x}}^{k-1}]|^2) +\sum\limits_{i = 2}^{k-1} \bar{d}_{k-i}^k\|\bar{u}^{k-i}\|_0^2+\bar{d}_0^k\|u^0\|_0^2\\ &\leq & \rho(\|\bar{u}^{k-1}\|_0^2+\beta_0 C_1^{-1} \rho \| u_{\bar{x}}^{k-1}]|^2) +\sum\limits_{i = 2}^{k-1} \bar{d}_{k-i}^k(\|\bar{u}^{k-i}\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^{k-i}]|^2)+\bar{d}_0^k\|u^0\|_0^2. \end{eqnarray*}

    That is,

    \begin{eqnarray} &&\|\bar{u}^k\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^k]|^2 \leq \rho(\|\bar{u}^{k-1}\|_0^2+\beta_0 C_1^{-1} \rho \| u_{\bar{x}}^{k-1}]|^2)\\ & & +\sum\limits_{i = 2}^{k-1} \bar{d}_{k-i}^k(\|\bar{u}^{k-i}\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^{k-i}]|^2)+\bar{d}_0^k\|u^0\|_0^2, \ \ k\geq4. \end{eqnarray} (4.18)

    Using mathematical induction, it is easy to prove the following inequality:

    \begin{eqnarray} \|\bar{u}^k\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^k]|^2\leq M_0\|u^0\|_0^2, \ \ 4\leq k\leq K. \end{eqnarray} (4.19)

    As k = 4 , by (4.2), (4.3) and (4.15), from (4.18) we can obtain

    \begin{eqnarray} &&\|\bar{u}^4\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^4]|^2\\ &\leq& \rho(\|\bar{u}^{3}\|_0^2+\beta_0 C_1^{-1} \rho \| u_{\bar{x}}^{3}]|^2) +\sum\limits_{i = 2}^{3} \bar{d}_{4-i}^4(\|\bar{u}^{4-i}\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^{4-i}]|^2)+\bar{d}_0^4\|u^0\|_0^2\\ &\leq& M_0(\rho+\sum\limits_{i = 2}^{3} \bar{d}_{4-i}^4+\bar{d}_0^4)\|u^0\|_0^2 \leq M_0\|u^0\|_0^2. \end{eqnarray} (4.20)

Thus (4.20) is exactly (4.19) for k = 4 . Assuming that (4.19) holds for k = 5, 6, \cdots, K-1 , from (4.18) one immediately obtains

    \begin{eqnarray*} \|\bar{u}^K\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^K]|^2 \leq M_0(\rho+\sum\limits_{i = 2}^{K-1} \bar{d}_{K-i}^K+\bar{d}_0^K)\|u^0\|_0^2 \leq M_0\|u^0\|_0^2. \end{eqnarray*}

    The proof of Lemma 4.2 is completed.

We show in the following Theorem 4.1 that the numerical scheme (4.1) is unconditionally stable.

    Theorem 4.1. The numerical scheme (4.1) is unconditionally stable, and its solution satisfies the following estimate for all \Delta t > 0, h > 0 :

\begin{eqnarray*} \|u^k\|_0+\sqrt{\beta_0 C_1^{-1}} \| u_{\bar x}^k]|\leq(4\sqrt{M_0}+1)\|u^0\|_0, \ \ k = 1, 2, \cdots, K. \end{eqnarray*}

Proof. According to Lemmas 4.1 and 4.2, we immediately have the following estimate

    \begin{eqnarray} \|\bar{u}^k\|_0^2+\beta_0 C_1^{-1} \| u_{\bar x}^k]|^2\leq M_0\|u^0\|_0^2, \ \ k = 1, 2, \cdots, K. \end{eqnarray} (4.21)

    From (4.21), we have

    \begin{eqnarray*} \|\bar{u}^k\|_0\leq \sqrt{M_0}\|u^0\|_0, \ \ k = 1, 2, \cdots, K. \end{eqnarray*}

According to (3.20) and (I) in Lemma 3.2, we have

    \begin{eqnarray*} \|u^k\|_0& = &\|\bar{u}^{k}+\rho u^{k-1}\|_0 \leqslant\|\bar{u}^k\|_0+\rho\|u^{k-1}\|_0 \leqslant\|\bar{u}^k\|_0+\rho(\|\bar{u}^{k-1}\|_0+\rho\|u^{k-2}\|_0) \\ & = &\|\bar{u}^k\|_0+\rho\|\bar{u}^{k-1}\|_0+\rho^2\|u^{k-2}\|_0 \\ &\leq&\cdots \cdots \\ &\leq&\|\bar{u}^k\|_0+\rho\|\bar{u}^{k-1}\|_0+\rho^2\|\bar{u}^{k-2}\|_0+\rho^3\|\bar{u}^{k-3}\|_0+\cdots+\rho^{k-1}\|\bar{u}^1\|_0+\rho^k\|u^0\|_0 \\ &\leq& \sqrt{M_0}\|u^0\|_0+\rho \sqrt{M_0}\|u^0\|_0+\rho^2 \sqrt{M_0}\|u^0\|_0+\cdots+\rho^{k-1} \sqrt{M_0}\|u^0\|_0+\rho^k\|u^0\|_0 \\ & = &\sqrt{M_0}\|u^0\|_0(1+\rho+\rho^2+\cdots+\rho^{k-1})+\rho^k\|u^0\|_0 \\ &\leq&\sqrt{M_0}\|u^0\|_0 \cdot \frac{1}{1-\rho}+\|u^0\|_0 \leq(3 \sqrt{M_0}+1)\|u^0\|_0. \end{eqnarray*}

Combining the above estimate with (4.21), we obtain

    \begin{eqnarray*} \|u^k\|_0+\sqrt{\beta_0 C_1^{-1}} \| u_{\bar x}^k]|\leq(4\sqrt{M_0}+1)\|u^0\|_0, \ \ k = 1, \cdots, K. \end{eqnarray*}

The proof of Theorem 4.1 is completed.

    In the next Theorem 4.2, we give the convergence analysis of the full discrete scheme.

Theorem 4.2. Let u(x, t) be the solution of Eq (1.1) and u_j^k be the solution of problem (4.1), and suppose that \max\limits_{t\in (0, T]}|\partial_t^3u| \leq M . Then for k = 1, 2, \cdots, K , we have

    \begin{eqnarray*} \|u(x_j, t_k)-u^k\|_0+\sqrt{\beta_0 C_1^{-1}}\|u_{\bar x}-u_{\bar x}^k]| \le C(\Delta{t^{3-\beta}} + h^2), \end{eqnarray*}

where 0 < \beta < 1 , and C is a positive constant independent of \Delta t and h .

Proof. Using Theorem 2.1 and (3.5), and arguing as in the proof of Theorem 4.1, we obtain Theorem 4.2; the details are omitted.

Remark 4.1. The convergence of the proposed scheme on a graded mesh can be analyzed with the above technique combined with the idea of [21]. However, the stability analysis of the proposed scheme on a graded mesh is difficult with the method of this paper at present. The main difficulty is whether the coefficients d^k_{k-i} in Lemma 3.1 remain nonnegative on a graded temporal mesh, which is still an open problem.

For simplicity, we implement algorithm (4.1) as follows:

(1) Denote by I_{(N-1)\times(N-1)} the identity matrix of order (N-1)\times(N-1) , and by D_{(N-1)\times(N-1)} the second-order central difference matrix in space, i.e.,

    \begin{eqnarray*} D_{(N-1)\times(N-1)} = \frac{1}{h^2} \begin{pmatrix} -2 & 1 & &&&\\ 1 & -2 & 1&&&\\ &\ddots&\ddots&\ddots&\\ &&\ddots&\ddots&1&\\ &&&1&-2&\\ \end{pmatrix}, \\ u^k = (u^k_1, u^k_2, \cdots, u^k_{N-1}), k = 1, 2, \cdots, K. \end{eqnarray*}
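The matrices of step (1) translate directly into code. The following is a minimal sketch, assuming NumPy/SciPy; the helper name assemble_matrices is our own illustrative choice, not part of the scheme.

```python
import numpy as np
import scipy.sparse as sp

def assemble_matrices(N, h):
    """Build I_{(N-1)x(N-1)} and the second-derivative difference matrix D of step (1)."""
    I = sp.identity(N - 1, format="csc")
    main = -2.0 * np.ones(N - 1)
    off = np.ones(N - 2)
    # Tridiagonal matrix with -2 on the diagonal and 1 on the off-diagonals, scaled by 1/h^2
    D = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc") / h**2
    return I, D
```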

    (2) For k = 1 and k = 2 , based on (4.1a) and (4.1b), we have

    \begin{eqnarray} \begin{array}{lll}{ } &&\begin{pmatrix} \widehat{D_1}I_{(N-1)\times(N-1)}-\beta_0D_{(N-1)\times(N-1)} & \widehat{D_2}I_{(N-1)\times(N-1)}\\ \widetilde{D_1}I_{(N-1)\times(N-1)} & \widetilde{D_2}I_{(N-1)\times(N-1)}-\beta_0D_{(N-1)\times(N-1)} \\ \end{pmatrix} \begin{pmatrix} (u^1)^{\mathit{T}}\\ (u^2)^{\mathit{T}}\\ \end{pmatrix}\\ & = &- \begin{pmatrix} \widehat{D_0}I_{(N-1)\times(N-1)}(u^0)^{\mathit{T}}\\ \widetilde{D_0}I_{(N-1)\times(N-1)}(u^0)^{\mathit{T}}\\ \end{pmatrix}, \end{array} \end{eqnarray} (5.1)

where (u^1)^{\mathit{T}} represents the transpose of u^1 . Solving the above Eq (5.1), we obtain u^1 and u^2 .
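As an illustration, the block system (5.1) can be assembled and solved as in the following sketch; the scalar coefficients \widehat{D}_0, \widehat{D}_1, \widehat{D}_2 and \widetilde{D}_0, \widetilde{D}_1, \widetilde{D}_2 are those appearing in (4.1a) and (4.1b), and are simply assumed to be given here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_first_two_levels(I, D, beta0, u0, Dhat, Dtil):
    """Solve (5.1) for u^1, u^2.  Dhat = (Dhat0, Dhat1, Dhat2), Dtil = (Dtil0, Dtil1, Dtil2)."""
    # 2x2 block matrix of (5.1)
    A = sp.bmat([[Dhat[1] * I - beta0 * D, Dhat[2] * I],
                 [Dtil[1] * I,             Dtil[2] * I - beta0 * D]], format="csc")
    rhs = -np.concatenate([Dhat[0] * u0, Dtil[0] * u0])  # right-hand side of (5.1)
    sol = spla.spsolve(A, rhs)
    n = u0.size
    return sol[:n], sol[n:]  # u^1 and u^2
```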

    (3) For k = 3 , based on (4.1c), we have

    \begin{eqnarray} \begin{array}{lll}{ } &&(I_{(N-1)\times(N-1)}+\beta_0 C_1^{-1}D_{(N-1)\times(N-1)})u^3\\ & = &d_2^3I_{(N-1)\times(N-1)} u^2+d_1^3I_{(N-1)\times(N-1)} u^1+d_0^3I_{(N-1)\times(N-1)} u^0. \end{array} \end{eqnarray} (5.2)

By solving the above Eq (5.2), we obtain u^3 . (4) For k\geq4 , based on (4.1d), we have

    \begin{eqnarray} \begin{array}{lll}{ } (I_{(N-1)\times(N-1)}+\beta_0 C_1^{-1}D_{(N-1)\times(N-1)})u^k = \sum\limits_{i = 1}^k d_{k-i}^k I_{(N-1)\times(N-1)}u^{k-i}. \end{array} \end{eqnarray} (5.3)

We solve Eq (5.3) and obtain u^k, k = 4, 5, \cdots, K .
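Since steps (3) and (4) share the same system matrix, it can be factorized once and reused at every time level. Below is a minimal sketch of this time march; the callable weights(k) returning the coefficients d_{k-i}^k of Lemma 3.1, and the constant C_1 , are treated here as given inputs (their formulas are defined earlier in the paper).

```python
import scipy.sparse.linalg as spla

def time_march(I, D, beta0, C1, u_hist, K, weights):
    """Advance (5.2)/(5.3) from level 3 to level K.
    u_hist: list [u^0, u^1, u^2] from steps (1)-(2); weights(k) returns the d_{k-i}^k."""
    A = (I + beta0 / C1 * D).tocsc()   # system matrix of (5.2)/(5.3), as written there
    solve = spla.factorized(A)         # LU factorization computed once, reused every level
    for k in range(3, K + 1):
        d = weights(k)                 # d[k-i] multiplies u^{k-i}, i = 1, ..., k
        rhs = sum(d[k - i] * u_hist[k - i] for i in range(1, k + 1))
        u_hist.append(solve(rhs))
    return u_hist
```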

In this section, we carry out a series of numerical experiments and present results to confirm our theoretical statements. The main purpose is to check the convergence behavior of the numerical solution with respect to the time step \Delta t and the space size h used in the calculation. The first two examples show the 3-\beta order in time and the second order in space, in one and two spatial dimensions, respectively. In the last example, we use the numerical scheme on a graded mesh to solve a problem whose solution is singular at the initial time.

    Example 5.1. Consider f(x, t) and u_0(x) in (1.2) as the following form

    \begin{eqnarray*} f(x, t) = [\frac{\Gamma(5)}{\Gamma(5-\beta)}t^{4-\beta}+4\pi^2t^4]\sin(2\pi x), \ \ u_0(x) = 0, \end{eqnarray*}

it is easy to check that the exact solution is given by u(x, t) = t^4\sin(2\pi x) . In this example, we choose a = 0, b = 1, T = 1, \beta = 0.2, 0.5, 0.8 and the step sizes \Delta t = \frac{1}{K}, h = \frac{1}{N} . Here we take two different sets of step size parameters. In the first case, we verify the spatial convergence order and choose K = 2^{10} and N = 2^k, k = 2, 3, 4, 5, 6, 7 . In the second case, we verify the temporal convergence order and choose K = 2^k, k = 2, 3, 4, 5, 6, 7 , and N = [K^{\frac{3-\beta}{2}}] , where [x] denotes the integer part of x . We denote the maximum error by e^{\Delta t, h} = \mathop {\max }\limits_{j, k} | {u_j^k - u({x_j}, {t_k})}| , where u_j^k and u({x_j}, {t_k}) are the numerical and exact solutions of (1.1) at the point (x_j, t_k) , respectively.
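For reference, the maximum error and the decay rates reported in the tables below can be computed as in the following small sketch (the step size is halved between consecutive rows, so the rate is the base-2 logarithm of the error ratio); the helper names are our own.

```python
import numpy as np

def max_error(u_num, u_exact):
    """Maximum nodal error over all grid points and time levels; inputs of shape (K+1, N-1)."""
    return np.max(np.abs(u_num - u_exact))

def observed_rate(e_coarse, e_fine):
    """Decay rate between two runs whose step size differs by a factor of 2."""
    return np.log2(e_coarse / e_fine)
```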

Firstly, we plot the error distribution. Figure 1 shows the error distribution for K = 2^6 , N = [K^{1.5-0.5\beta}] and \beta = 0.5 , with [\cdot] as defined above. From Figure 1, we find that the errors are as small as 10^{-4} .

    Figure 1.  Error distribution of \beta = 0.5 for Example 5.1.

Secondly, we plot the log-log graph of the error. In Figure 2, a logarithmic scale is used for both the \Delta t -axis and the error-axis. For \beta = 0.2, 0.8 , the temporal approximation order is close to 3-\beta , i.e., the slopes of the error curves in these log-log plots are about 2.8 and 2.2, respectively.

    Figure 2.  Errors as a function of the time step \Delta t for \beta = 0.2, 0.8 for Example 5.1.

It is easy to see that the convergence order in Table 1 is very close to 2 . This shows that the spatial convergence order is 2, independent of the choice of \beta , which is consistent with the theoretical result of Theorem 4.2.

    Table 1.  The errors e^{\Delta t, h} and decay rates for \beta = 0.2 , 0.5 and 0.8 for Example 5.1.
    h \beta=0.2 Rate \beta=0.5 Rate \beta=0.8 Rate
    \frac{1}{4} 2.24267623e-1 - 2.19495762e-1 - 2.12827066e-1 -
    \frac{1}{8} 5.11915396e-2 2.13124405 5.02547233e-2 2.12686198 4.89402966e-2 2.12058688
    \frac{1}{16} 1.25184509e-2 2.03184934 1.22977077e-2 2.03086976 1.19877349e-2 2.02946376
    \frac{1}{32} 3.11251399e-3 2.00790382 3.05813631e-3 2.00766481 2.98178274e-3 2.00731203
    \frac{1}{64} 7.77065525e-4 2.00197215 7.63522666e-4 2.00190982 7.44526894e-4 2.00177927
    \frac{1}{128} 1.94200123e-4 2.00049214 1.90819202e-4 2.00046462 1.86098139e-4 2.00026033


From Table 2, it is easy to see that the convergence order is almost 2.8, 2.5, 2.2 for \beta = 0.2, 0.5, 0.8 , respectively. This indicates that the temporal convergence order is 3-\beta , which is consistent with the theoretical result of Theorem 4.2.

    Table 2.  The errors e^{\Delta t, h} and decay rates for \beta = 0.2 , 0.5 and 0.8 for Example 5.1.
    \Delta t \beta=0.2 Rate \beta=0.5 Rate \beta=0.8 Rate
    \frac{1}{4} 6.61870458e-2 - 8.09279866e-2 - 1.30876112e-1 -
    \frac{1}{8} 9.79159847e-3 2.75693257 1.89320008e-2 2.09581181 3.08129985e-2 2.08659081
    \frac{1}{16} 1.33613669e-3 2.87347678 3.12778295e-3 2.59761458 7.22284764e-3 2.09289944
    \frac{1}{32} 1.95889521e-4 2.76995547 5.54215075e-4 2.49662254 1.57296693e-3 2.19907939
    \frac{1}{64} 2.81063830e-5 2.80107051 9.77531242e-5 2.50323123 3.38827437e-4 2.21486573
    \frac{1}{128} 4.04673058e-6 2.79606910 1.72479211e-5 2.50272032 7.37134520e-5 2.20055087


Example 5.2. Consider f(x, y, t) and u_0(x, y) in (1.1) of the following form

    \begin{eqnarray*} f(x, y, t) = [\frac{\Gamma(5)}{\Gamma(5-\beta)}t^{4-\beta}+8\pi^2t^4]\sin(2\pi x)\sin(2\pi y), \ \ u_0(x, y) = 0, \end{eqnarray*}

it is easy to check that the exact solution of (1.1) is

    \begin{eqnarray*} u(x, y, t) = t^4\sin(2\pi x)\sin(2\pi y). \end{eqnarray*}

In this example, we choose T = 1 and the spatial domain [0, 1]^2 . Denote the time step \Delta t = \frac{1}{K} , the spatial sizes \Delta x = \Delta y = h = \frac{1}{N} , and the error

    \begin{eqnarray*} e^{\Delta t, \Delta x, \Delta y} = \mathop {\max }\limits_{i, j, k} | {u_{i, j}^k - u({x_i, y_j}, {t_k})}|. \end{eqnarray*}

Here we take two different sets of step size parameters. In the first case, we verify the spatial convergence order and choose K = 5\cdot 2^8 and N = 5\cdot 2^k, k = 2, 3, 4, 5, 6 . In the second case, we verify the temporal convergence order and choose K = 5\cdot 2^k, k = 2, 3, 4, 5, 6 , and N = [K^{\frac{3-\beta}{2}}] .

From Table 3, it is easy to see that the convergence order in space is close to 2 , which is consistent with the spatial convergence analysis of Theorem 4.2.

    Table 3.  The errors e^{\Delta t, \Delta x, \Delta y} and decay rates with \beta = 0.2 , 0.5 and 0.8 for Example 5.2.
    \Delta x=\Delta y \beta=0.2 Rate \beta=0.5 Rate \beta=0.8 Rate
    \frac{1}{4} 7.32588952e-3 - 7.26023603e-3 - 7.16700414e-3 -
    \frac{1}{8} 1.92366631e-3 1.92914538 1.90651985e-3 1.92907489 1.88217592e-3 1.92896870
    \frac{1}{16} 4.92984038e-4 1.96424572 4.88596603e-4 1.96422581 4.82373986e-4 1.96417747
    \frac{1}{32} 1.24789589e-4 1.98204334 1.23679889e-4 1.98203289 1.22112645e-4 1.98193949
    \frac{1}{64} 3.13925997e-5 1.99100117 3.11139553e-5 1.99097722 3.07270665e-5 1.99063066


From Table 4, one can easily see that the convergence order is almost 2.8, 2.5, 2.2 for \beta = 0.2, 0.5, 0.8 , respectively. This indicates that the temporal convergence order is 3-\beta , which is consistent with the theoretical result of Theorem 4.2.

    Table 4.  The errors e^{\Delta t, \Delta x, \Delta y} and decay rates with \beta = 0.2 , 0.5 and 0.8 for Example 5.2.
    \Delta t \beta=0.2 Rate \beta=0.5 Rate \beta=0.8 Rate
    \frac{1}{4} 1.45302385e-2 - 1.98108543e-2 - 2.87895267e-2 -
    \frac{1}{8} 2.39515094e-3 2.60086990 4.57826737e-3 2.11341747 7.83171525e-3 1.87814385
    \frac{1}{16} 3.35463869e-4 2.83588728 7.93856560e-4 2.52785146 1.86353942e-3 2.07128296
    \frac{1}{32} 4.98304436e-5 2.75105806 1.43458541e-4 2.46824448 4.16260488e-4 2.16248681
    \frac{1}{64} 7.18924725e-6 2.79311479 2.55133078e-5 2.49131199 9.07614406e-5 2.19733521


In the above two numerical examples, the solution is sufficiently smooth, which is the basic requirement of all high-order numerical schemes. In order to further illustrate the effectiveness of the numerical scheme of this paper, the third example presents numerical results for a nonsmooth solution on a graded mesh.

Example 5.3. In this example, we consider problem (1.1) with the exact solution

    \begin{eqnarray*} u(x, t) = t^{2+\beta}\sin(2\pi x), \end{eqnarray*}

whose third derivative in time is singular at the initial time. It is easy to check that the right-hand side function is

    \begin{eqnarray*} f(x, t) = [\frac{\Gamma(3+\beta)}{\Gamma(3)}t^{2}+4\pi^2t^{2+\beta}]\sin(2\pi x) \end{eqnarray*}

    and u_0(x) = 0.

It is easy to see that the exact solution of this example does not satisfy the condition of Theorem 2.1. In order to keep the temporal convergence order from being reduced, we choose the graded mesh t_j = (j/K)^{\gamma}, j = 0, 1, \cdots, K , with a grading parameter \gamma based on the idea of [21], and \beta = 0.3, 0.5, 0.7 . Here we take two different sets of step size parameters. In the first case, we verify the spatial convergence order and choose K = 2^9 and N = 2^k, k = 2, 3, 4, 5, 6, 7 . In the second case, we verify the temporal convergence order and choose K = 4, 8, 16, 32, 64, 128 with \gamma = \frac{3-\beta}{\beta} and N = [K^{\frac{3-\beta}{2}}] .
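A minimal sketch of this graded temporal mesh and of the accompanying choice of N (assuming T = 1 and reading [\cdot] as the integer part, as in Example 5.1):

```python
import numpy as np

def graded_setup(K, beta):
    """Graded mesh t_j = (j/K)^gamma with gamma = (3 - beta)/beta, and N = [K^{(3-beta)/2}]."""
    gamma = (3.0 - beta) / beta
    t = (np.arange(K + 1) / K) ** gamma   # t_0 = 0, ..., t_K = 1
    N = int(K ** ((3.0 - beta) / 2.0))    # integer part of K^{(3-beta)/2}
    return t, N
```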

From Table 5, one can see that the spatial convergence order is almost 2 . Moreover, under the condition \gamma = \frac{3-\beta}{\beta} , the theoretical analysis of Theorem 4.1 in [21] gives a temporal convergence order of 3-\beta . From Table 6, we see that the temporal convergence order is indeed almost 3-\beta .

    Table 5.  The errors e^{\Delta t, h} and decay rates with \beta = 0.3 , 0.5 and 0.7 for Example 5.3.
    h \beta=0.3 Rate \beta=0.5 Rate \beta=0.7 Rate
    \frac{1}{4} 2.24283447e-1 - 2.22112851e-1 - 2.19254736e-1 -
    \frac{1}{8} 5.11949227e-2 2.13125050 5.07700212e-2 2.12924409 5.02100716e-2 2.12655931
    \frac{1}{16} 1.25192834e-2 2.03184875 1.24192472e-2 2.03139913 1.22874280e-2 2.03079381
    \frac{1}{32} 3.11274154e-3 2.00789428 3.08812229e-3 2.00777593 3.05571680e-3 2.00760021
    \frac{1}{64} 7.77143859e-4 2.00193220 7.71032310e-4 2.00186666 7.63024883e-4 2.00170883
    \frac{1}{128} 1.94241274e-4 2.00033188 1.92735233e-4 2.00017098 1.90799176e-4 1.99967516

    Table 6.  The errors e^{\Delta t, h} and decay rates with \beta = 0.3 , 0.5 and 0.7 for Example 5.3.
    \Delta t \beta=0.3 Rate \beta=0.5 Rate \beta=0.7 Rate
    \frac{1}{4} 2.33652761e-2 - 2.45667893e-2 - 3.43008919e-2 -
    \frac{1}{8} 3.21431860e-3 2.86178124 5.46965600e-3 2.16718731 7.69626349e-3 2.15601599
    \frac{1}{16} 5.91658503e-4 2.44167631 9.80816120e-4 2.47939550 1.68193842e-3 2.19403330
    \frac{1}{32} 9.94240948e-5 2.57309728 1.81758076e-4 2.43196321 3.41593332e-4 2.29977316
    \frac{1}{64} 1.65508545e-5 2.58668981 3.29996439e-5 2.46149711 7.10322502e-5 2.26573372
    \frac{1}{128} 2.67622171e-6 2.62863615 5.93030244e-6 2.47627286 1.44529613e-5 2.29710906


Based on the idea of [24,25], we propose a uniform-accuracy scheme of order 3-\beta for the fractional time derivative based on piecewise quadratic interpolation, and apply it to solve the TFDE. In particular, the proposed numerical scheme overcomes the order reduction at the initial time that affects high-order numerical schemes, and also provides a general construction method for numerical schemes with uniform convergence order. The stability and error estimates of high-order uniform accuracy are analyzed rigorously. In terms of numerical implementation, we first use the fully discrete scheme to solve one- and two-dimensional TFDEs with sufficiently smooth solutions. We then use the numerical scheme to solve a TFDE with a nonsmooth solution on a graded mesh. These two kinds of numerical examples fully illustrate that the fully discrete scheme of this paper is very effective. The techniques used to analyze the convergence and stability of the scheme can also serve as an effective tool for analyzing numerical schemes for fractional integro-differential equations. In the future, we will use the proposed scheme to solve TFDE optimal control problems and extend it to nonlinear multi-dimensional fractional integro-differential equations. Based on the idea of reference [33], we will also consider solving the nonlinear Lane-Emden equation with fractal-fractional derivative.

    This work was supported by National Natural Science Foundation of China (Grant Nos. 11961009, 11901135), Foundation of Guizhou Science and Technology Department (Grant No. [2020]1Y015), Natural Science Research Project of Department of Education of Guizhou Province (Grant Nos. QJJ2022015, QJJ2022047).

    The authors declare that they have no competing interests.



    [1] Y. Wang, L. Ren, A high-order L2-compact difference method for Caputo-type time-fractional sub-diffusion equations with variable coefficients, Appl. Math. Comput., 342 (2019), 71–93. https://doi.org/10.1016/j.amc.2018.09.007 doi: 10.1016/j.amc.2018.09.007
    [2] H. Zhang, J. Jia, X. Jiang, An optimal error estimate for the two-dimensional nonlinear time fractional advection-diffusion equation with smooth and non-smooth solutions, Comput. Math. Appl., 79 (2020), 2819–2831. https://doi.org/10.1016/j.camwa.2019.12.013 doi: 10.1016/j.camwa.2019.12.013
    [3] Y. H. Youssri, A. G. Atta, Petrov-Galerkin Lucas polynomials procedure for the time-fractional diffusion equation, Contemp. Math., 4 (2023), 230–248. https://doi.org/10.37256/cm.4220232420 doi: 10.37256/cm.4220232420
    [4] T. Eftekhari, S. Hosseini, A new and efficient approach for solving linear and nonlinear time-fractional diffusion equations of distributed order, Comput. Appl. Math., 41 (2022), 281. https://doi.org/10.1007/s40314-022-01981-5 doi: 10.1007/s40314-022-01981-5
    [5] Y. Wang, H. Chen, Pointwise error estimate of an alternating direction implicit difference scheme for two-dimensional time-fractional diffusion equation, Comput. Math. Appl., 99 (2021), 155–161. https://doi.org/10.1016/j.camwa.2021.08.012 doi: 10.1016/j.camwa.2021.08.012
    [6] Z. Liu, A. Cheng, X. Li, A novel finite difference discrete scheme for the time fractional diffusion-wave equation, Appl. Numer. Math., 134 (2018), 17–30. https://doi.org/10.1016/j.apnum.2018.07.001 doi: 10.1016/j.apnum.2018.07.001
    [7] A. A. Alikhanov, C. Huang, A high-order L2 type difference scheme for the time-fractional diffusion equation, Appl. Math. Comput., 411 (2021), 126545. https://doi.org/10.1016/j.amc.2021.126545 doi: 10.1016/j.amc.2021.126545
    [8] M. Cui, Convergence analysis of high-order compact alternating direction implicit schemes for the two-dimensional time fractional diffusion equation, Numer. Algor., 62 (2013), 383–409. https://doi.org/10.1007/s11075-012-9589-3 doi: 10.1007/s11075-012-9589-3
    [9] C. Lv, C. Xu, Error analysis of a high order method for time-fractional diffusion equations, SIAM J. Sci. Comput., 38 (2016), A2699–A2722. https://doi.org/10.1137/15M102664X doi: 10.1137/15M102664X
    [10] S. Kumar, D. Zeidan, An efficient Mittag-Leffler kernel approach for time-fractional advection-reaction-diffusion equation, Appl. Numer. Math., 170 (2021), 190–207. https://doi.org/10.1016/j.apnum.2021.07.025 doi: 10.1016/j.apnum.2021.07.025
    [11] J. Shen, C. Sheng, An efficient space-time method for time fractional diffusion equation, J. Sci. Comput., 81 (2019), 1088–1110. https://doi.org/10.1007/s10915-019-01052-8 doi: 10.1007/s10915-019-01052-8
    [12] M. A. Abdelkawy, A. M. Lopes, M. A. Zaky, Shifted fractional Jacobi spectral algorithm for solving distributed order time-fractional reaction-diffusion equations, Comput. Appl. Math., 38 (2019), 81. https://doi.org/10.1007/s40314-019-0845-1 doi: 10.1007/s40314-019-0845-1
    [13] M. A. Zaky, A Legendre spectral quadrature tau method for the multi-term time-fractional diffusion equations, Comput. Appl. Math., 37 (2018), 3525–3538. https://doi.org/10.1007/s40314-017-0530-1 doi: 10.1007/s40314-017-0530-1
    [14] J. Zhang, Z. Fang, H. Sun, Exponential-sum-approximation technique for variable-order time-fractional diffusion equations, J. Appl. Math. Comput., 68 (2022), 323–347. https://doi.org/10.1007/s12190-021-01528-7 doi: 10.1007/s12190-021-01528-7
    [15] P. Lyu, S. Vong, A fast linearized numerical method for nonlinear time-fractional diffusion equations, Numer. Algor., 87 (2021), 381–408. https://doi.org/10.1007/s11075-020-00971-0 doi: 10.1007/s11075-020-00971-0
[16] S. Jiang, J. Zhang, Q. Zhang, Z. Zhang, Fast evaluation of the Caputo fractional derivative and its applications to fractional diffusion equations, Commun. Comput. Phys., 21 (2017), 650–678. https://doi.org/10.4208/cicp.OA-2016-0136
    [17] P. Roul, V. Rohil, A high-order numerical scheme based on graded mesh and its analysis for the two-dimensional time-fractional convection-diffusion equation, Comput. Math. Appl., 126 (2022), 1–13. https://doi.org/10.1016/j.camwa.2022.09.006 doi: 10.1016/j.camwa.2022.09.006
[18] M. Stynes, E. O'Riordan, J. L. Gracia, Error analysis of a finite difference method on graded meshes for a time-fractional diffusion equation, SIAM J. Numer. Anal., 55 (2017), 1057–1079. https://doi.org/10.1137/16M1082329 doi: 10.1137/16M1082329
    [19] N. Kedia, A. Alikhanov, V. Singh, Stable numerical schemes for time-fractional diffusion equation with generalized memory kernel, Appl. Numer. Math., 172 (2022), 546–565. https://doi.org/10.1016/j.apnum.2021.11.006 doi: 10.1016/j.apnum.2021.11.006
    [20] K. Mustapha, An L1 approximation for a fractional reaction-diffusion equation, a second-order error analysis over time-graded meshes, SIAM J. Numer. Anal., 58 (2020), 1319–1338. https://doi.org/10.1137/19M1260475 doi: 10.1137/19M1260475
    [21] N. Kopteva, Error analysis of an L2-type method on graded meshes for a fractional-order parabolic problem, Math. Comp., 90 (2021), 19–40. https://doi.org/10.1090/mcom/3552 doi: 10.1090/mcom/3552
    [22] A. G. Atta, Y. H. Youssri, Advanced shifted first-kind Chebyshev collocation approach for solving the nonlinear time-fractional partial integro-differential equation with a weakly singular kernel, Comp. Appl. Math., 41 (2022), 381. https://doi.org/10.1007/s40314-022-02096-7 doi: 10.1007/s40314-022-02096-7
    [23] J. Huang, D. Yang, L. O. Jay, Efficient methods for nonlinear time fractional diffusion-wave equations and their fast implementations, Numer. Algor., 85 (2020), 375–397. https://doi.org/10.1007/s11075-019-00817-4 doi: 10.1007/s11075-019-00817-4
[24] J. Cao, C. Xu, A high order schema for the numerical solution of the fractional ordinary differential equations, J. Comput. Phys., 238 (2013), 154–168. https://doi.org/10.1016/j.jcp.2012.12.013 doi: 10.1016/j.jcp.2012.12.013
    [25] J. Cao, Z. Cai, Numerical analysis of a high-order scheme for nonlinear fractional differential equations with uniform accuracy, Numer. Math. Theor. Meth. Appl., 14 (2021), 71–112. https://doi.org/10.4208/nmtma.OA-2020-0039 doi: 10.4208/nmtma.OA-2020-0039
    [26] L. Feng, P. Zhuang, F. Liu, Y. T. Gu, Finite element method for space-time fractional diffusion equation, Numer. Algor., 72 (2016), 749–767. https://doi.org/10.1007/s11075-015-0065-8 doi: 10.1007/s11075-015-0065-8
    [27] H. Zhang, X. Jiang, F. Zeng, An H^{1} convergence of the spectral method for the time-fractional non-linear diffusion equations, Adv. Comput. Math., 47 (2021), 63. https://doi.org/10.1007/s10444-021-09892-5 doi: 10.1007/s10444-021-09892-5
    [28] A. S. V. R. Kanth, N. Garg, An unconditionally stable algorithm for multiterm time fractional advection-diffusion equation with variable coefficients and convergence analysis, Numer. Methods Partial Differ. Equ., 37 (2021), 1928–1945. https://doi.org/10.1002/num.22629 doi: 10.1002/num.22629
    [29] A. A. Alikhanov, A new difference scheme for the time fractional diffusion equation, J. Comput. Phys., 280 (2015), 424–438. https://doi.org/10.1016/j.jcp.2014.09.031 doi: 10.1016/j.jcp.2014.09.031
    [30] D. A. Murio, Implicit finite difference approximation for time fractional diffusion equations, Comput. Math. Appl., 56 (2008), 1138–1145. https://doi.org/10.1016/j.camwa.2008.02.015 doi: 10.1016/j.camwa.2008.02.015
    [31] R. Gorenflo, E. A. Abdel-Rehim, Convergence of the Grünwald-Letnikov scheme for time-fractional diffusion, J. Comput. Appl. Math., 205 (2007), 871–881. https://doi.org/10.1016/j.cam.2005.12.043 doi: 10.1016/j.cam.2005.12.043
    [32] R. L. Burden, J. D. Faires, A. M. Burden, Numerical analysis, Cengage Learning, 2015.
    [33] Y. H. Youssri, A. G. Atta, Spectral collocation approach via normalized shifted Jacobi polynomials for the nonlinear Lane-Emden equation with fractal-fractional derivative, Fractal Fract., 7 (2023), 133. https://doi.org/10.3390/fractalfract7020133 doi: 10.3390/fractalfract7020133
© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)