Research article

Bayesian quantile regression for streaming data

  • # These authors contributed equally to this work.
  • Quantile regression (QR) has been widely used in many fields because of its robustness and comprehensiveness. However, it remains challenging to perform QR on streaming data with conventional methods, as they all assume that the entire dataset fits in memory. To address this issue, this paper proposes a Bayesian QR approach for streaming data, in which the posterior distribution is updated by utilizing aggregated statistics of the current and historical data. In addition, theoretical results are presented to confirm that the streaming posterior distribution is theoretically equivalent to the oracle posterior distribution calculated from the entire dataset at once. Moreover, we provide an algorithmic procedure for the proposed method. The algorithm shows that our proposed method only needs to store the parameters of the historical posterior distribution of the streaming data. Thus, it is computationally simple and not storage-intensive. Both simulations and real data analysis are conducted to illustrate the good performance of the proposed method.

    Citation: Zixuan Tian, Xiaoyue Xie, Jian Shi. Bayesian quantile regression for streaming data[J]. AIMS Mathematics, 2024, 9(9): 26114-26138. doi: 10.3934/math.20241276




    In this paper, a nonlinear compact finite difference scheme is studied for the following pseudo-parabolic Burgers' equation [1]

    u_t = \mu u_{xx}+\gamma uu_x+\varepsilon^2u_{xxt}, \quad 0<x<L, \quad 0<t\leq T, (1.1)

    subject to the periodic boundary condition

    u(x,t) = u(x+L,t), \quad 0\leq x\leq L, \quad 0<t\leq T, (1.2)

    and the initial data

    u(x,0) = \varphi(x), \quad 0\leq x\leq L, (1.3)

    where μ>0 is the coefficient of kinematic viscosity, γ and ε>0 are two parameters, and φ(x) is an L-periodic function; the parameter L denotes the spatial period. Setting ε=0, Eq (1.1) reduces to the viscous Burgers' equation [2]. Equation (1.1) can be derived from the degenerate pseudo-parabolic equation [3]

    u_t = \big(u^{\alpha}+u^{\beta}u_x+\varepsilon^2u^{\kappa}(u^{\gamma}u_t)_x\big)_x, (1.4)

    where α, β, κ, γ are nonnegative constants. The derivative term \{u^{\kappa}(u^{\gamma}u_t)_x\}_x represents a dynamic capillary pressure relation instead of a usual static one [4]. Equation (1.4) is a model of one-dimensional unsaturated groundwater flow.

    Here, u denotes the water saturation. We refer to [5] for a detailed explanation of the model. Equation (1.1) can also be viewed as a simplified version of the Benjamin-Bona-Mahony-Burgers (BBM-Burgers) equation, or a viscous regularization of the original BBM model for long wave propagation [6]. The problem (1.1)–(1.3) satisfies the following conservation laws:

    Q(t) = \int_0^Lu(x,t)\,{\rm d}x = Q(0), \quad t>0, (1.5)
    E(t) = \int_0^L\big[u^2(x,t)+\varepsilon^2u_x^2(x,t)\big]\,{\rm d}x+2\mu\int_0^t\!\int_0^Lu_x^2(x,s)\,{\rm d}x\,{\rm d}s = E(0), \quad t>0. (1.6)

    Based on (1.6), by a simple calculation, the exact solution satisfies

    \max\big\{\|u\|,\ \varepsilon\|u_x\|,\ \varepsilon\|u\|_\infty\big\}\leq c_0,

    where c_0 = \sqrt{\big(1+\frac{L}{2}\big)E(0)}.

    Numerical and theoretical research on solving (1.1)–(1.3) has been extensively carried out. For instance, Koroche [7] employed the upwind and Lax-Friedrichs approaches to obtain the solution of the inviscid Burgers' equation. Rashid et al. [8] employed the Chebyshev-Legendre pseudo-spectral method for solving coupled viscous Burgers' equations, with the leapfrog scheme used in the time direction. Qiu et al. [9] constructed fifth-order weighted essentially non-oscillatory schemes based on Hermite polynomials for solving one-dimensional nonlinear hyperbolic conservation law systems and presented numerical experiments for the two-dimensional Burgers' equation. Lara et al. [10] proposed accelerating high-order discontinuous Galerkin methods using neural networks; the methodology and bounds were examined for a variety of meshes, polynomial orders, and viscosity values for the 1D viscous Burgers' equation. Pavani et al. [11] used the natural transform decomposition method to obtain the analytical solution of the time-fractional BBM-Burgers equation. Li et al. [12] established and proved the existence of global weak solutions for a generalized BBM-Burgers equation. Wang et al. [13] introduced a linearized second-order energy-stable fully discrete scheme and a superconvergence analysis for the nonlinear BBM-Burgers equation by the finite element method. Mohebbi et al. [14] investigated the solitary wave solution of the nonlinear BBM-Burgers equation by a high-order linear finite difference scheme.

    Zhang et al. [15] developed a linearized fourth-order conservative compact scheme for the BBMB-Burgers' equation. Shi et al. [16] investigated a time two-grid algorithm for the numerical solution of the nonlinear generalized viscous Burgers' equation. Li et al. [17] used the backward Euler method and a semi-discrete approach to approximate the Burgers-type equation. Mao et al. [18] derived fourth-order compact difference schemes for the Rosenau equation by the double reduction order method and the bilinear compact operator, which offers an effective method for solving nonlinear equations. Cuesta et al. [19] analyzed the boundary value problem and long-time behavior of the pseudo-parabolic Burgers' equation. Wang et al. [20] proposed a fourth-order three-point compact operator for the nonlinear convection term; taking the classical viscous Burgers' equation as an example, they established a conservative fourth-order implicit compact difference scheme based on the reduction order method. Compact difference schemes achieve higher accuracy with fewer grid points; therefore, using compact operators to construct high-order schemes has received increasing attention and application [21,22,23,24,25,26,27,28,29].

    Numerical solutions for pseudo-parabolic equations have garnered widespread attention. For instance, Benabbes et al. [30] provided the theoretical analysis of an inverse problem governed by a time-fractional pseudo-parabolic equation. Moreover, Ilhan et al. [31] constructed a family of travelling wave solutions for obtaining hyperbolic function solutions. Di et al. [32] established the well-posedness of the regularized solution and gave the error estimate for the nonlinear fractional pseudo-parabolic equation. Nghia et al. [33] considered the pseudo-parabolic equation with the Caputo-Fabrizio fractional derivative and gave the formula of the mild solution. Abreu et al. [34] derived error estimates for nonlinear pseudo-parabolic equations based on Jacobi polynomials. Jayachandran et al. [35] applied the Faedo-Galerkin method to the pseudo-parabolic partial differential equation with logarithmic nonlinearity and analyzed the global existence and blowup of solutions.

    To the best of our knowledge, studies of high-order difference schemes for Eq (1.1) are scarce. The main challenges are the treatment of the nonlinear term uu_x and the error estimation of the numerical scheme. Inspired by the work in [15] and [20], we construct an implicit compact difference scheme based on the three-point fourth-order compact operator for the pseudo-parabolic Burgers' equation. The main contributions of this paper are summarized as follows:

    ● A fourth-order compact difference scheme is derived for the pseudo-parabolic Burgers' equation.

    ● The pointwise error estimate (L^\infty-estimate) of the fourth-order compact difference scheme is proved by the energy method [36,37] for the pseudo-parabolic Burgers' equation.

    ● Numerical stability, unique solvability, and conservation are obtained for the high-order difference scheme of the pseudo-parabolic Burgers' equation.

    In particular, our numerical scheme reduces to several existing schemes in the literature for special cases (see, e.g., [38,39]).

    The remainder of the paper is organized as follows. In Section 2, we introduce the necessary notations and present some useful lemmas. A compact difference scheme is derived in Section 3 using the reduction order method and the recently proposed compact difference operator. In Section 4, we establish the key results of the paper, including the conservation invariants, boundedness, unique solvability, stability, and convergence of the scheme. In Section 4.4, we present several numerical experiments to validate the theoretical findings, followed by a conclusion in Section 5.

    Throughout the paper, we assume that the exact solution satisfies u(x,t)\in C^{6,3}([0,L]\times[0,T]).

    In this section, we introduce some essential notations and lemmas. We begin by dividing the domain [0,L]\times[0,T]. For two given positive integers M and N, let h = L/M, \tau = T/N. Additionally, denote x_i = ih, 0\leq i\leq M; t_k = k\tau, 0\leq k\leq N; V_h = \{v\,|\,v = \{v_i\},\ v_{i+M} = v_i\}. For any grid functions u, v\in V_h, we introduce

    v_i^{k+\frac{1}{2}} = \frac{1}{2}(v_i^k+v_i^{k+1}), \quad \delta_tv_i^{k+\frac{1}{2}} = \frac{1}{\tau}(v_i^{k+1}-v_i^k), \quad \delta_xv_{i+\frac{1}{2}}^k = \frac{1}{h}(v_{i+1}^k-v_i^k), \quad \Delta_xv_i^k = \frac{1}{2h}(v_{i+1}^k-v_{i-1}^k), \quad \delta_x^2v_i^k = \frac{1}{h}\Big(\delta_xv_{i+\frac{1}{2}}^k-\delta_xv_{i-\frac{1}{2}}^k\Big), \quad \psi(u,v)_i = \frac{1}{3}\big[u_i\Delta_xv_i+\Delta_x(uv)_i\big].

    Moreover, we introduce the discrete inner products and norms (semi-norm)

    (u,v) = h\sum\limits_{i = 1}^{M}u_iv_i, \quad \langle u,v\rangle = h\sum\limits_{i = 1}^{M}\big(\delta_xu_{i+\frac{1}{2}}\big)\big(\delta_xv_{i+\frac{1}{2}}\big), \quad \|u\| = \sqrt{(u,u)}, \quad |u|_1 = \sqrt{\langle u,u\rangle}, \quad \|u\|_\infty = \max\limits_{1\leq i\leq M}|u_i|.
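The discrete operators and norms above can be sketched in a few lines of NumPy (an illustrative periodic grid on [0, L] with M points, not part of the paper; np.roll encodes the periodicity v_{i+M} = v_i):

```python
import numpy as np

# Illustrative sketch: discrete operators and norms on a periodic grid with
# M points and spacing h = L/M; np.roll encodes v_{i+M} = v_i.
L, M = 2 * np.pi, 64
h = L / M
x = h * np.arange(1, M + 1)

def delta_x2(v):          # second difference (v_{i+1} - 2 v_i + v_{i-1}) / h^2
    return (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / h**2

def Delta_x(v):           # central difference (v_{i+1} - v_{i-1}) / (2h)
    return (np.roll(v, -1) - np.roll(v, 1)) / (2 * h)

def psi(u, v):            # psi(u, v)_i = (1/3)[u_i Delta_x v_i + Delta_x(uv)_i]
    return (u * Delta_x(v) + Delta_x(u * v)) / 3.0

def inner(u, v):          # (u, v) = h * sum_i u_i v_i
    return h * np.dot(u, v)

def norm(v):              # ||v|| = sqrt((v, v))
    return np.sqrt(inner(v, v))

def seminorm1(v):         # |v|_1 built from forward differences delta_x v_{i+1/2}
    d = (np.roll(v, -1) - v) / h
    return np.sqrt(h * np.dot(d, d))

# quick checks: ||1||^2 = L, |1|_1 = 0, delta_x^2 sin ≈ -sin (to O(h^2))
err = np.max(np.abs(delta_x2(np.sin(x)) + np.sin(x)))
```

The grid values L and M here are arbitrary; only the operator definitions mirror the text.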

    The following lemmas play important roles in the numerical analysis later, and we collect them here.

    Lemma 1. [15,40] For any grid functions u, v\in V_h, we have

    \|v\|_\infty\leq\frac{\sqrt{L}}{2}|v|_1, \quad \|v\|\leq\frac{L}{\sqrt{6}}|v|_1, \quad (u,\delta_x^2v) = -\langle u,v\rangle, \quad (\psi(u,v),v) = 0.
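The last two identities of Lemma 1 hold exactly on the periodic grid (by summation by parts), so they can be verified to round-off with random grid functions; a minimal sketch with assumed operator definitions as above:

```python
import numpy as np

# Numerical check of the two exact identities in Lemma 1:
# (u, delta_x^2 v) = -<u, v>  and  (psi(u, v), v) = 0,
# for random periodic grid functions u, v (np.roll encodes periodicity).
rng = np.random.default_rng(0)
L, M = 2 * np.pi, 32
h = L / M
u = rng.standard_normal(M)
v = rng.standard_normal(M)

Dx2 = lambda w: (np.roll(w, -1) - 2 * w + np.roll(w, 1)) / h**2   # delta_x^2
DxC = lambda w: (np.roll(w, -1) - np.roll(w, 1)) / (2 * h)        # Delta_x
dxf = lambda w: (np.roll(w, -1) - w) / h                          # delta_x w_{i+1/2}
psi = lambda a, b: (a * DxC(b) + DxC(a * b)) / 3.0

lhs = h * np.dot(u, Dx2(v))            # (u, delta_x^2 v)
rhs = -h * np.dot(dxf(u), dxf(v))      # -<u, v>
orth = h * np.dot(psi(u, v), v)        # (psi(u, v), v), should vanish
```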

    Lemma 2. [40] For any grid function v\in V_h and arbitrary \xi>0, we have

    |v|_1\leq\frac{2}{h}\|v\|, \quad \|v\|_\infty^2\leq\xi|v|_1^2+\Big(\frac{1}{\xi}+\frac{1}{L}\Big)\|v\|^2.
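Both estimates of Lemma 2 can be spot-checked numerically on random periodic grid functions (a sketch with illustrative values L = 1, M = 50, and several ξ):

```python
import numpy as np

# Sanity check of Lemma 2: the inverse estimate |v|_1 <= (2/h)||v|| and the
# discrete embedding ||v||_inf^2 <= xi |v|_1^2 + (1/xi + 1/L) ||v||^2.
rng = np.random.default_rng(1)
L, M = 1.0, 50
h = L / M
v = rng.standard_normal(M)

d = (np.roll(v, -1) - v) / h           # forward differences (periodic)
norm2 = h * np.dot(v, v)               # ||v||^2
semi2 = h * np.dot(d, d)               # |v|_1^2
vinf2 = np.max(np.abs(v))**2           # ||v||_inf^2

inverse_ok = semi2 <= (4.0 / h**2) * norm2 + 1e-12
embed_ok = all(vinf2 <= xi * semi2 + (1.0 / xi + 1.0 / L) * norm2 + 1e-12
               for xi in (0.5, 1.0, 2.0))
```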

    Lemma 3. [20] Let g(x)\in C^5[x_{i-1},x_{i+1}] and G(x) = g''(x). Then we have

    g(x_i)g'(x_i) = \psi(g,g)_i-\frac{h^2}{2}\psi(G,g)_i+O(h^4).
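A quick numerical sketch of Lemma 3: taking g = sin with G = g'' on a periodic grid and halving h, the error of ψ(g,g)_i − (h²/2)ψ(G,g)_i against g(x_i)g'(x_i) should drop by roughly a factor of 16, as expected for a fourth-order formula (grid values are illustrative assumptions):

```python
import numpy as np

# Order check for Lemma 3 with g = sin, G = g'' = -sin on a periodic grid:
# the compact combination approximates g g' to O(h^4), so halving h should
# reduce the maximum error by about 16.
def gg_prime_error(M, L=2 * np.pi):
    h = L / M
    x = h * np.arange(M)
    g, G = np.sin(x), -np.sin(x)
    DxC = lambda w: (np.roll(w, -1) - np.roll(w, 1)) / (2 * h)    # Delta_x
    psi = lambda a, b: (a * DxC(b) + DxC(a * b)) / 3.0
    approx = psi(g, g) - h**2 / 2.0 * psi(G, g)
    return np.max(np.abs(approx - np.sin(x) * np.cos(x)))

e1, e2 = gg_prime_error(32), gg_prime_error(64)
ratio = e1 / e2          # close to 16 for a fourth-order approximation
```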

    Lemma 4. [15,18] For any grid functions u, v\in V_h and S\in V_h satisfying

    v_i^{k+\frac{1}{2}} = \delta_x^2u_i^{k+\frac{1}{2}}-\frac{h^2}{12}\delta_x^2v_i^{k+\frac{1}{2}}+S_i^{k+\frac{1}{2}}, \quad 1\leq i\leq M, \quad 0\leq k\leq N-1, (2.1)

    we have the following results:

    (I)

    (v^{k+\frac{1}{2}},u^{k+\frac{1}{2}}) = -|u^{k+\frac{1}{2}}|_1^2-\frac{h^2}{12}\|v^{k+\frac{1}{2}}\|^2+\frac{h^4}{144}|v^{k+\frac{1}{2}}|_1^2+\frac{h^2}{12}(S^{k+\frac{1}{2}},v^{k+\frac{1}{2}})+(S^{k+\frac{1}{2}},u^{k+\frac{1}{2}}), (2.2)
    (v^{k+\frac{1}{2}},u^{k+\frac{1}{2}})\leq-|u^{k+\frac{1}{2}}|_1^2-\frac{h^2}{18}\|v^{k+\frac{1}{2}}\|^2+\frac{h^2}{12}(S^{k+\frac{1}{2}},v^{k+\frac{1}{2}})+(S^{k+\frac{1}{2}},u^{k+\frac{1}{2}}), (2.3)
    (\delta_tv^{k+\frac{1}{2}},u^{k+\frac{1}{2}}) = -\frac{1}{2\tau}(|u^{k+1}|_1^2-|u^k|_1^2)-\frac{h^2}{24\tau}(\|v^{k+1}\|^2-\|v^k\|^2)+\frac{h^4}{288\tau}(|v^{k+1}|_1^2-|v^k|_1^2)+(\delta_tS^{k+\frac{1}{2}},u^{k+\frac{1}{2}})+\frac{h^2}{12}(\delta_tv^{k+\frac{1}{2}},S^{k+\frac{1}{2}}). (2.4)

    (II)

    |u^{k+\frac{1}{2}}|_1^2\leq\|u^{k+\frac{1}{2}}\|\big(\|v^{k+\frac{1}{2}}\|+\|S^{k+\frac{1}{2}}\|\big), \quad \frac{h^2}{12}\|v^{k+\frac{1}{2}}\|\leq\frac{4}{5}\|u^{k+\frac{1}{2}}\|+\frac{h^2}{5}\|S^{k+\frac{1}{2}}\|, (2.5)
    \|v^{k+\frac{1}{2}}\|^2\leq\frac{18}{h^2}|u^{k+\frac{1}{2}}|_1^2+\frac{9}{2}\|S^{k+\frac{1}{2}}\|^2. (2.6)

    Proof. The results (2.2) and (2.3) were established in [15], and (2.5) was proven in [18]; we only need to prove (2.4) and (2.6). Using the definition of the operators, we have

    \begin{align*} (\delta_tv^{k+\frac{1}{2}},u^{k+\frac{1}{2}}) & = \Big(\delta_t\Big(\delta_x^2u^{k+\frac{1}{2}}-\frac{h^2}{12}\delta_x^2v^{k+\frac{1}{2}}+S^{k+\frac{1}{2}}\Big),u^{k+\frac{1}{2}}\Big)\\ & = -\frac{1}{2\tau}(|u^{k+1}|_1^2-|u^k|_1^2)-\frac{h^2}{12}\Big(\delta_tv^{k+\frac{1}{2}},v^{k+\frac{1}{2}}+\frac{h^2}{12}\delta_x^2v^{k+\frac{1}{2}}-S^{k+\frac{1}{2}}\Big)+(\delta_tS^{k+\frac{1}{2}},u^{k+\frac{1}{2}})\\ & = -\frac{1}{2\tau}(|u^{k+1}|_1^2-|u^k|_1^2)-\frac{h^2}{24\tau}(\|v^{k+1}\|^2-\|v^k\|^2)+\frac{h^4}{288\tau}(|v^{k+1}|_1^2-|v^k|_1^2)+(\delta_tS^{k+\frac{1}{2}},u^{k+\frac{1}{2}})+\frac{h^2}{12}(\delta_tv^{k+\frac{1}{2}},S^{k+\frac{1}{2}}). \end{align*}

    Taking the inner product of (2.1) with v^{k+\frac{1}{2}}, we have

    \begin{align*} \|v^{k+\frac{1}{2}}\|^2 & = (\delta_x^2u^{k+\frac{1}{2}},v^{k+\frac{1}{2}})-\frac{h^2}{12}(\delta_x^2v^{k+\frac{1}{2}},v^{k+\frac{1}{2}})+(S^{k+\frac{1}{2}},v^{k+\frac{1}{2}})\\ &\leq\|\delta_x^2u^{k+\frac{1}{2}}\|\cdot\|v^{k+\frac{1}{2}}\|+\frac{h^2}{12}|v^{k+\frac{1}{2}}|_1^2+\|S^{k+\frac{1}{2}}\|\cdot\|v^{k+\frac{1}{2}}\|\\ &\leq\frac{1}{6}\|v^{k+\frac{1}{2}}\|^2+\frac{3}{2}\|\delta_x^2u^{k+\frac{1}{2}}\|^2+\frac{1}{3}\|v^{k+\frac{1}{2}}\|^2+\frac{1}{6}\|v^{k+\frac{1}{2}}\|^2+\frac{3}{2}\|S^{k+\frac{1}{2}}\|^2\\ &\leq\frac{2}{3}\|v^{k+\frac{1}{2}}\|^2+\frac{6}{h^2}|u^{k+\frac{1}{2}}|_1^2+\frac{3}{2}\|S^{k+\frac{1}{2}}\|^2. \end{align*}

    Therefore, the result (2.6) is obtained.
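Estimate (2.6) can also be checked numerically: given random u and S, solve the circulant system (I + (h²/12)D₂)v = D₂u + S coming from (2.1) and compare both sides (a sketch with illustrative grid values):

```python
import numpy as np

# Check of estimate (2.6): if v solves the compact relation
# v = delta_x^2 u - (h^2/12) delta_x^2 v + S, then
# ||v||^2 <= (18/h^2)|u|_1^2 + (9/2)||S||^2.
rng = np.random.default_rng(2)
L, M = 1.0, 40
h = L / M
I = np.eye(M)
# periodic second-difference matrix: (D2 w)_i = (w_{i+1} - 2 w_i + w_{i-1})/h^2
D2 = (np.roll(I, -1, axis=1) + np.roll(I, 1, axis=1) - 2 * I) / h**2

u = rng.standard_normal(M)
S = rng.standard_normal(M)
v = np.linalg.solve(I + h**2 / 12.0 * D2, D2 @ u + S)

d = (np.roll(u, -1) - u) / h
lhs = h * np.dot(v, v)                                   # ||v||^2
rhs = 18.0 / h**2 * (h * np.dot(d, d)) + 4.5 * (h * np.dot(S, S))
```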

    Remark 1. [18] Denote \mathbf{1} = (1,1,\cdots,1)^T\in V_h. If S = 0 in (2.1), then we further have

    (\psi(u,u),\mathbf{1}) = 0, \quad (\psi(v,u),\mathbf{1}) = 0.
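Both identities of Remark 1 hold exactly on the periodic grid (not just to truncation error); a sketch, with v obtained by solving (2.1) with S = 0:

```python
import numpy as np

# Check of Remark 1: with S = 0 in (2.1), i.e. v solving
# (I + (h^2/12) D2) v = D2 u, both (psi(u,u), 1) = 0 and (psi(v,u), 1) = 0.
rng = np.random.default_rng(3)
L, M = 1.0, 48
h = L / M
I = np.eye(M)
D2 = (np.roll(I, -1, axis=1) + np.roll(I, 1, axis=1) - 2 * I) / h**2
DxC = lambda w: (np.roll(w, -1) - np.roll(w, 1)) / (2 * h)        # Delta_x
psi = lambda a, b: (a * DxC(b) + DxC(a * b)) / 3.0

u = rng.standard_normal(M)
v = np.linalg.solve(I + h**2 / 12.0 * D2, D2 @ u)                 # S = 0 case

sum_uu = h * np.sum(psi(u, u))         # (psi(u, u), 1)
sum_vu = h * np.sum(psi(v, u))         # (psi(v, u), 1)
```

Since v entries scale like 1/h², the second check is made with a tolerance relative to the magnitude of v.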

    Let v = u_{xx}. Then the problem (1.1) is equivalent to

    \left\{ {\begin{array}{ll} u_t = \mu v+\gamma uu_x+\varepsilon^2v_t, & 0<x<L, \quad 0<t\leq T, \qquad (3.1)\\ v = u_{xx}, & 0<x<L, \quad 0<t\leq T, \qquad (3.2)\\ u(x,0) = \varphi(x), & 0\leq x\leq L, \qquad (3.3)\\ u(x,t) = u(x+L,t), & 0\leq x\leq L, \quad 0<t\leq T. \qquad (3.4) \end{array}} \right.

    According to (3.2) and (3.4), it is easy to know that

    v(x,t) = v(x+L,t), \quad 0\leq x\leq L, \quad 0<t\leq T. (3.5)

    Define the grid functions U = \{U_i^k\,|\,1\leq i\leq M,\ 0\leq k\leq N\} with U_i^k = u(x_i,t_k), and V = \{V_i^k\,|\,1\leq i\leq M,\ 0\leq k\leq N\} with V_i^k = v(x_i,t_k). Considering (3.1) at the point (x_i,t_{k+\frac{1}{2}}) and (3.2) at the point (x_i,t_k), respectively, we have

    \left\{ {\begin{array}{l} u_t(x_i,t_{k+\frac{1}{2}}) = \mu v(x_i,t_{k+\frac{1}{2}})+\gamma u(x_i,t_{k+\frac{1}{2}})u_x(x_i,t_{k+\frac{1}{2}})+\varepsilon^2v_t(x_i,t_{k+\frac{1}{2}}), \quad 1\leq i\leq M, \quad 0\leq k\leq N-1,\\ v(x_i,t_k) = u_{xx}(x_i,t_k), \quad 1\leq i\leq M, \quad 0\leq k\leq N. \end{array}} \right.

    Using the Taylor expansion and Lemma 3, we have

    \left\{ {\begin{array}{l} \delta_tU_i^{k+\frac{1}{2}} = \mu V_i^{k+\frac{1}{2}}+\gamma\Big(\psi(U^{k+\frac{1}{2}},U^{k+\frac{1}{2}})_i-\frac{h^2}{2}\psi(V^{k+\frac{1}{2}},U^{k+\frac{1}{2}})_i\Big)+\varepsilon^2\delta_tV_i^{k+\frac{1}{2}}+P_i^{k+\frac{1}{2}}, \quad 1\leq i\leq M, \quad 0\leq k\leq N-1,\\ V_i^k = \delta_x^2U_i^k-\frac{h^2}{12}\delta_x^2V_i^k+Q_i^k, \quad 1\leq i\leq M, \quad 0\leq k\leq N. \end{array}} \right. (3.6)

    Noticing the initial-boundary value conditions (3.3)–(3.5), we have

    \left\{ {\begin{array}{ll} U_i^0 = \varphi(x_i), & 1\leq i\leq M, \qquad (3.7)\\ U_i^k = U_{i+M}^k, \quad V_i^k = V_{i+M}^k, & 1\leq i\leq M, \quad 1\leq k\leq N. \qquad (3.8) \end{array}} \right.

    There is a positive constant c1 such that the local truncation errors satisfy

    \left\{ {\begin{array}{l} |P_i^{k+\frac{1}{2}}|\leq c_1(\tau^2+h^4), \quad 1\leq i\leq M, \quad 0\leq k\leq N-1,\\ |Q_i^k|\leq c_1h^4, \quad 1\leq i\leq M, \quad 0\leq k\leq N,\\ |\delta_tQ_i^{k+\frac{1}{2}}|\leq c_1(\tau^2+h^4), \quad 1\leq i\leq M, \quad 0\leq k\leq N-1. \end{array}} \right.

    Omitting the local truncation error terms in (3.6) and combining with (3.7) and (3.8), we obtain the following difference scheme for (3.1)–(3.5):

    \left\{ {\begin{array}{ll} \delta_tu_i^{k+\frac{1}{2}} = \mu v_i^{k+\frac{1}{2}}+\gamma\Big(\psi(u^{k+\frac{1}{2}},u^{k+\frac{1}{2}})_i-\frac{h^2}{2}\psi(v^{k+\frac{1}{2}},u^{k+\frac{1}{2}})_i\Big)+\varepsilon^2\delta_tv_i^{k+\frac{1}{2}}, & 1\leq i\leq M, \quad 0\leq k\leq N-1, \qquad (3.9)\\ v_i^k = \delta_x^2u_i^k-\frac{h^2}{12}\delta_x^2v_i^k, & 1\leq i\leq M, \quad 0\leq k\leq N, \qquad (3.10)\\ u_i^0 = \varphi(x_i), & 1\leq i\leq M, \qquad (3.11)\\ u_i^k = u_{i+M}^k, \quad v_i^k = v_{i+M}^k, & 1\leq i\leq M, \quad 1\leq k\leq N. \qquad (3.12) \end{array}} \right.
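For a fixed u^k, step (3.10) is a periodic, strictly diagonally dominant linear system for v^k. A sketch (illustrative grid, u = sin on [0, 2π], for which u_xx = −sin) confirming the fourth-order accuracy of the compact relation:

```python
import numpy as np

# Step (3.10) in matrix form: (I + (h^2/12) D2) v = D2 u on a periodic grid.
# For u = sin(x), the solution v should match u_xx = -sin(x) to O(h^4).
M = 64
Lp = 2 * np.pi
h = Lp / M
x = h * np.arange(M)
I = np.eye(M)
D2 = (np.roll(I, -1, axis=1) + np.roll(I, 1, axis=1) - 2 * I) / h**2

u = np.sin(x)
v = np.linalg.solve(I + h**2 / 12.0 * D2, D2 @ u)

err = np.max(np.abs(v + np.sin(x)))    # compare with exact u_xx = -sin(x)
```

With M = 64 the error is of order h⁴ (about 10⁻⁶ here), far below the O(h²) accuracy of D₂u alone.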

    Remark 2. As seen from the difference equations (3.9) and (3.10), only three points are used in each of them to attain fourth-order accuracy for the nonlinear pseudo-parabolic Burgers' equation, without requiring additional boundary information. This is why we call this scheme a compact difference scheme. In addition, a fast iterative algorithm can be constructed, as shown in the numerical part in Section 4.4.
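One way to realize such an iterative algorithm is a Picard iteration on x = u^{k+1/2}: invert the linear part (2/τ)I − (μ + 2ε²/τ)C exactly, where C = (I + (h²/12)D₂)⁻¹D₂ encodes (3.10), and lag the nonlinear term γ(ψ(x,x) − (h²/2)ψ(y,x)). The sketch below (all parameter values are illustrative assumptions, not from the paper) advances a few steps and checks that the discrete mass Q^k of the scheme is preserved:

```python
import numpy as np

# Picard iteration for one half-step unknown x = u^{k+1/2} of (3.9)-(3.10):
#   (2/tau)(x - u^k) = mu*y + gamma*(psi(x,x) - (h^2/2) psi(y,x))
#                      + (2 eps^2/tau)(y - v^k),   with y = C x  (from (3.10)).
mu, gamma, eps, tau = 1.0, 1.0, 0.5, 1e-3          # illustrative parameters
M = 64
Lp = 2 * np.pi
h = Lp / M
x_grid = h * np.arange(M)
I = np.eye(M)
D2 = (np.roll(I, -1, axis=1) + np.roll(I, 1, axis=1) - 2 * I) / h**2
C = np.linalg.solve(I + h**2 / 12.0 * D2, D2)      # v = C u encodes (3.10)
DxC = lambda w: (np.roll(w, -1) - np.roll(w, 1)) / (2 * h)
psi = lambda a, b: (a * DxC(b) + DxC(a * b)) / 3.0

# exact linear part; the nonlinear term is lagged in the iteration
B = (2.0 / tau) * I - (mu + 2.0 * eps**2 / tau) * C

u = np.sin(x_grid)
Q0 = h * np.sum(u)                                 # discrete mass
for step in range(5):
    v = C @ u
    xh = u.copy()                                  # initial guess for u^{k+1/2}
    for it in range(50):
        y = C @ xh
        nl = gamma * (psi(xh, xh) - h**2 / 2.0 * psi(y, xh))
        rhs = (2.0 / tau) * u - (2.0 * eps**2 / tau) * v + nl
        x_new = np.linalg.solve(B, rhs)
        done = np.max(np.abs(x_new - xh)) < 1e-13
        xh = x_new
        if done:
            break
    u = 2.0 * xh - u                               # u^{k+1} = 2 u^{k+1/2} - u^k

drift = abs(h * np.sum(u) - Q0)
```

Because (ψ(x,x), 1) = 0 and (ψ(Cx, x), 1) = 0 hold exactly on the periodic grid, the mass drift stays at round-off level regardless of the iteration tolerance.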

    Theorem 1. Let \{u_i^k, v_i^k\,|\,1\leq i\leq M,\ 0\leq k\leq N\} be the solution of (3.9)–(3.12). Denote

    Q^k = (u^k,\mathbf{1}).

    Then, we have

    Q^k = Q^0, \quad 0\leq k\leq N.

    Proof. Taking the inner product of (3.9) with \mathbf{1}, we have

    (\delta_tu^{k+\frac{1}{2}},\mathbf{1}) = \mu(v^{k+\frac{1}{2}},\mathbf{1})+\gamma\Big(\psi(u^{k+\frac{1}{2}},u^{k+\frac{1}{2}})-\frac{h^2}{2}\psi(v^{k+\frac{1}{2}},u^{k+\frac{1}{2}}),\mathbf{1}\Big)+\varepsilon^2(\delta_tv^{k+\frac{1}{2}},\mathbf{1}), \quad 0\leq k\leq N-1.

    By using Remark 1 in Lemma 4, together with the fact from (3.10) that (v^{k+\frac{1}{2}},\mathbf{1}) = 0, the equality above reduces to

    (u^{k+1},\mathbf{1})-(u^k,\mathbf{1}) = 0,

    namely

    Q^{k+1} = Q^k, \quad 0\leq k\leq N-1.

    Theorem 2. Let \{u_i^k, v_i^k\,|\,1\leq i\leq M,\ 0\leq k\leq N\} be the solution of (3.9)–(3.12). Then it holds that

    E^k = E^0, \quad 1\leq k\leq N,

    where

    E^k = \|u^k\|^2+\varepsilon^2|u^k|_1^2+\frac{\varepsilon^2h^2}{12}\|v^k\|^2-\frac{\varepsilon^2h^4}{144}|v^k|_1^2+2\tau\mu\Big(\sum\limits_{l = 0}^{k-1}|u^{l+\frac{1}{2}}|_1^2+\frac{h^2}{12}\sum\limits_{l = 0}^{k-1}\|v^{l+\frac{1}{2}}\|^2-\frac{h^4}{144}\sum\limits_{l = 0}^{k-1}|v^{l+\frac{1}{2}}|_1^2\Big).

    Proof. Taking the inner product of (3.9) with u^{k+\frac{1}{2}} and applying Lemma 1, we have

    (\delta_tu^{k+\frac{1}{2}},u^{k+\frac{1}{2}}) = \mu(v^{k+\frac{1}{2}},u^{k+\frac{1}{2}})+\varepsilon^2(\delta_tv^{k+\frac{1}{2}},u^{k+\frac{1}{2}}).

    With the help of (2.2) and (2.4) in Lemma 4, the equality above reduces to

    \frac{1}{2\tau}(\|u^{k+1}\|^2-\|u^k\|^2) = \mu\Big(-|u^{k+\frac{1}{2}}|_1^2-\frac{h^2}{12}\|v^{k+\frac{1}{2}}\|^2+\frac{h^4}{144}|v^{k+\frac{1}{2}}|_1^2\Big)-\frac{\varepsilon^2}{2\tau}\Big((|u^{k+1}|_1^2-|u^k|_1^2)+\frac{h^2}{12}(\|v^{k+1}\|^2-\|v^k\|^2)-\frac{h^4}{144}(|v^{k+1}|_1^2-|v^k|_1^2)\Big).

    Replacing the superscript k with l and summing over l from 0 to k-1, we have

    \begin{align*} &\Big(\|u^k\|^2+\varepsilon^2|u^k|_1^2+\frac{\varepsilon^2h^2}{12}\|v^k\|^2-\frac{\varepsilon^2h^4}{144}|v^k|_1^2\Big)-\Big(\|u^0\|^2+\varepsilon^2|u^0|_1^2+\frac{\varepsilon^2h^2}{12}\|v^0\|^2-\frac{\varepsilon^2h^4}{144}|v^0|_1^2\Big)\\ &+2\tau\mu\Big(\sum\limits_{l = 0}^{k-1}|u^{l+\frac{1}{2}}|_1^2+\frac{h^2}{12}\sum\limits_{l = 0}^{k-1}\|v^{l+\frac{1}{2}}\|^2-\frac{h^4}{144}\sum\limits_{l = 0}^{k-1}|v^{l+\frac{1}{2}}|_1^2\Big) = 0, \end{align*}

    which implies that

    E^k = E^0, \quad 1\leq k\leq N.

    Remark 3. Combining Lemma 1 with Theorem 2, it is easy to see that there is a positive constant c_2 such that

    \|u^k\|\leq c_2, \quad \varepsilon|u^k|_1\leq c_2, \quad \varepsilon\|u^k\|_\infty\leq c_2, \quad 1\leq k\leq N. (4.1)

    Next, we recall the Browder theorem and consider the unique solvability of (3.9)–(3.12).

    Lemma 5 (Browder theorem [41]). Let (H,(\cdot,\cdot)) be a finite-dimensional inner product space, \|\cdot\| be the associated norm, and \Pi: H\rightarrow H be a continuous operator. Assume that

    there exists \alpha>0 such that (\Pi(z),z)\geq0 for all z\in H with \|z\| = \alpha.

    Then there exists a z^*\in H satisfying \|z^*\|\leq\alpha such that \Pi(z^*) = 0.

    Theorem 3. The difference scheme (3.9)–(3.12) has at least one solution.

    Proof. Denote

    u^k = (u_1^k,u_2^k,\cdots,u_M^k), \quad v^k = (v_1^k,v_2^k,\cdots,v_M^k), \quad 0\leq k\leq N.

    It is easy to see that u^0 is determined by (3.11). From (3.10) and (3.11), we can get v^0 by solving a system of linear equations, as its coefficient matrix is strictly diagonally dominant. Suppose that \{u^k,v^k\} has been determined; then we may regard \{u^{k+\frac{1}{2}},v^{k+\frac{1}{2}}\} as the unknowns. Obviously,

    u_i^{k+1} = 2u_i^{k+\frac{1}{2}}-u_i^k, \quad v_i^{k+1} = 2v_i^{k+\frac{1}{2}}-v_i^k, \quad 1\leq i\leq M, \quad 0\leq k\leq N-1.

    Denote

    X_i = u_i^{k+\frac{1}{2}}, \quad Y_i = v_i^{k+\frac{1}{2}}, \quad 1\leq i\leq M, \quad 0\leq k\leq N-1.

    Then the difference scheme (3.9)–(3.10) can be rewritten as

    \left\{ {\begin{array}{ll} \frac{2}{\tau}(X_i-u_i^k)-\mu Y_i-\gamma\Big(\psi(X,X)_i-\frac{h^2}{2}\psi(Y,X)_i\Big)-\frac{2\varepsilon^2}{\tau}(Y_i-v_i^k) = 0, & 1\leq i\leq M, \qquad (4.2)\\ Y_i = \delta_x^2X_i-\frac{h^2}{12}\delta_x^2Y_i, & 1\leq i\leq M. \qquad (4.3) \end{array}} \right.

    Define an operator Π on Vh:

    \Pi(X)_i = \frac{2}{\tau}(X_i-u_i^k)-\mu Y_i-\gamma\Big(\psi(X,X)_i-\frac{h^2}{2}\psi(Y,X)_i\Big)-\frac{2\varepsilon^2}{\tau}(Y_i-v_i^k), \quad 1\leq i\leq M.

    Taking an inner product of Π(X) with X, we have

    (\Pi(X),X) = \frac{2}{\tau}\big(\|X\|^2-(u^k,X)\big)-\mu(Y,X)-\frac{2\varepsilon^2}{\tau}\big((Y,X)-(v^k,X)\big). (4.4)

    Combining the technique from (2.2) in Lemma 4 with the Cauchy-Schwarz inequality, we have

    \begin{align*} (\delta_xY,\delta_xX) & = \Big(\delta_x\Big(\delta_x^2X-\frac{h^2}{12}\delta_x^2Y\Big),\delta_xX\Big) = -\|\delta_x^2X\|^2+\frac{h^2}{12}(\delta_x^2Y,\delta_x^2X)\\ & = -\|\delta_x^2X\|^2+\frac{h^2}{12}\Big(\delta_x^2Y,Y+\frac{h^2}{12}\delta_x^2Y\Big) = -\|\delta_x^2X\|^2-\frac{h^2}{12}\|\delta_xY\|^2+\frac{h^4}{144}\|\delta_x^2Y\|^2 \end{align*}

    and

    (\delta_xu^k,\delta_xX)\leq\|\delta_xu^k\|\cdot\|\delta_xX\|\leq\frac{1}{4}\|\delta_xu^k\|^2+\|\delta_xX\|^2 = \frac{1}{4}\|\delta_xu^k\|^2+|X|_1^2.

    Correspondingly,

    -(\delta_xu^k,\delta_xX)\geq-\frac{1}{4}\|\delta_xu^k\|^2-|X|_1^2.

    Then

    \begin{align*} -(Y,X)+(v^k,X) & = -\Big(\delta_x^2X-\frac{h^2}{12}\delta_x^2Y,X\Big)+\Big(\delta_x^2u^k-\frac{h^2}{12}\delta_x^2v^k,X\Big)\\ & = |X|_1^2-(\delta_xu^k,\delta_xX)-\frac{h^2}{12}\big((\delta_xY,\delta_xX)+(\delta_x^2v^k,X)\big)\\ &\geq-\frac{1}{4}\|\delta_xu^k\|^2+\frac{h^2}{12}\Big(\|\delta_x^2X\|^2+\frac{h^2}{12}\|\delta_xY\|^2-\frac{h^4}{144}\|\delta_x^2Y\|^2-(\delta_x^2v^k,X)\Big)\\ &\geq-\frac{1}{4}\|\delta_xu^k\|^2+\frac{h^2}{12}\Big(\frac{h^2}{12}\|\delta_xY\|^2-\frac{h^4}{144}\|\delta_x^2Y\|^2-\frac{1}{4}\|v^k\|^2\Big)\\ &\geq-\frac{1}{4}\|\delta_xu^k\|^2-\frac{h^2}{48}\|v^k\|^2, \end{align*}

    and

    \|X\|^2-(u^k,X)\geq\|X\|^2-\frac{1}{2}\big(\|u^k\|^2+\|X\|^2\big) = \frac{1}{2}\big(\|X\|^2-\|u^k\|^2\big).

    Substituting the estimates above into (4.4) and noting from (2.3) in Lemma 4 that -(Y,X)\geq0, we have

    (\Pi(X),X)\geq\frac{1}{\tau}\big(\|X\|^2-\|u^k\|^2\big)-\frac{2\varepsilon^2}{\tau}\Big(\frac{1}{4}\|\delta_xu^k\|^2+\frac{h^2}{48}\|v^k\|^2\Big) = \frac{1}{\tau}\Big(\|X\|^2-\|u^k\|^2-\frac{\varepsilon^2}{2}\|\delta_xu^k\|^2-\frac{\varepsilon^2h^2}{24}\|v^k\|^2\Big).

    Thus, when \|X\| = \alpha_k, where \alpha_k = \sqrt{\|u^k\|^2+\frac{\varepsilon^2}{2}\|\delta_xu^k\|^2+\frac{\varepsilon^2h^2}{24}\|v^k\|^2}, we have (\Pi(X),X)\geq0. By Lemma 5, there exists an X\in V_h satisfying \|X\|\leq\alpha_k such that \Pi(X) = 0. Consequently, the difference scheme (3.9)–(3.12) has at least one solution u^{k+1} = 2X-u^k. Observe that once (X_1,X_2,\cdots,X_M) is known, (Y_1,Y_2,\cdots,Y_M) is uniquely determined by (4.3). Thus, v_i^{k+1} = 2Y_i-v_i^k, 1\leq i\leq M, exists as well.

    Now we are going to verify the uniqueness of the solution of the difference scheme. We have the following result.

    Theorem 4. When \gamma = 0, the solution of the difference scheme (3.9)–(3.12) is uniquely solvable for any temporal step-size; when \gamma\neq0 and \tau\leq\min\Big\{\frac{4L}{c_2|\gamma|(L+1)},\frac{2\varepsilon^2}{3c_2|\gamma|(2L+1)}\Big\}, the solution of the difference scheme (3.9)–(3.12) is uniquely solvable.

    Proof. According to Theorem 3, we just need to prove that (4.2)–(4.3) has a unique solution. Suppose that \{u^{(1)},v^{(1)}\} and \{u^{(2)},v^{(2)}\} are both solutions of (4.2)–(4.3). Let

    u_i = u_i^{(1)}-u_i^{(2)}, \quad v_i = v_i^{(1)}-v_i^{(2)}, \quad 1\leq i\leq M.

    Then we have

    \left\{ {\begin{array}{ll} \frac{2}{\tau}u_i-\mu v_i-\gamma\big(\psi(u^{(1)},u^{(1)})_i-\psi(u^{(2)},u^{(2)})_i\big)+\frac{\gamma h^2}{2}\big(\psi(v^{(1)},u^{(1)})_i-\psi(v^{(2)},u^{(2)})_i\big)-\frac{2\varepsilon^2}{\tau}v_i = 0, & 1\leq i\leq M, \qquad (4.5)\\ v_i = \delta_x^2u_i-\frac{h^2}{12}\delta_x^2v_i, & 1\leq i\leq M. \qquad (4.6) \end{array}} \right.

    Taking an inner product of (4.5) with u, we have

    \frac{2}{\tau}\|u\|^2-\mu(v,u)-\gamma\big(\psi(u^{(1)},u^{(1)})-\psi(u^{(2)},u^{(2)}),u\big)+\frac{\gamma h^2}{2}\big(\psi(v^{(1)},u^{(1)})-\psi(v^{(2)},u^{(2)}),u\big)-\frac{2\varepsilon^2}{\tau}(v,u) = 0.

    With the application of Lemma 2 and (2.3) in Lemma 4, it follows from the equality above that

    \frac{2}{\tau}\|u\|^2+\Big(\mu+\frac{2\varepsilon^2}{\tau}\Big)\Big(|u|_1^2+\frac{h^2}{18}\|v\|^2\Big)\leq\gamma\big(\psi(u^{(1)},u^{(1)})-\psi(u^{(2)},u^{(2)}),u\big)-\frac{\gamma h^2}{2}\big(\psi(v^{(1)},u^{(1)})-\psi(v^{(2)},u^{(2)}),u\big). (4.7)

    By the definition of \psi(\cdot,\cdot) and (4.1), we have

    \begin{align*} &\quad-\frac{h^2}{2}\big(\psi(v^{(1)},u^{(1)})-\psi(v^{(2)},u^{(2)}),u\big)\\ & = -\frac{h^2}{2}\big(\psi(v^{(1)},u^{(1)})-\psi(v^{(1)}-v,u^{(1)}-u),u\big)\\ & = -\frac{h^2}{2}\big(\psi(v,u^{(1)}),u\big)\\ & = -\frac{h^3}{6}\sum\limits_{i = 1}^{M}\big[v_i\Delta_xu_i^{(1)}+\Delta_x(vu^{(1)})_i\big]u_i\\ & = \frac{h^3}{6}\sum\limits_{i = 1}^{M}\big[u_i^{(1)}\Delta_x(uv)_i+(vu^{(1)})_i\Delta_xu_i\big]\\ & = \frac{h^3}{6}\sum\limits_{i = 1}^{M}\Big[u_i^{(1)}\cdot\frac{1}{2h}(u_{i+1}v_{i+1}-u_{i-1}v_{i-1})+(vu^{(1)})_i\Delta_xu_i\Big]\\ & = \frac{h^3}{6}\sum\limits_{i = 1}^{M}\Big[u_i^{(1)}\cdot\frac{1}{2h}\big(v_{i+1}(u_{i+1}-u_i)+u_i(v_{i+1}-v_{i-1})+v_{i-1}(u_i-u_{i-1})\big)+(vu^{(1)})_i\Delta_xu_i\Big]\\ & = \frac{h^3}{6}\sum\limits_{i = 1}^{M}\Big[u_i^{(1)}\Big(u_i\Delta_xv_i+\frac{1}{2}v_{i+1}\delta_xu_{i+\frac{1}{2}}+\frac{1}{2}v_{i-1}\delta_xu_{i-\frac{1}{2}}\Big)+(vu^{(1)})_i\Delta_xu_i\Big]\\ &\leq\frac{c_2h^2}{6}\big(|v|_1\|u\|+2\|v\|\cdot|u|_1\big). \end{align*}

    Using the Cauchy-Schwarz inequality, Lemmas 1 and 2, we have

    -\frac{h^2}{2}\big(\psi(v^{(1)},u^{(1)})-\psi(v^{(2)},u^{(2)}),u\big)\leq\frac{c_2}{6}\Big(\frac{h^4}{4}|v|_1^2+\|u\|^2\Big)+\frac{c_2}{3}\Big(h^4\|v\|^2+\frac{1}{4}|u|_1^2\Big)\leq\frac{c_2}{24}(L+2)|u|_1^2+\frac{c_2h^2}{6}(1+2L)\|v\|^2. (4.8)

    Similarly, we have

    \begin{align*} &\quad\big(\psi(u^{(1)},u^{(1)})-\psi(u^{(2)},u^{(2)}),u\big)\\ & = \big(\psi(u^{(1)},u^{(1)})-\psi(u^{(1)}-u,u^{(1)}-u),u\big)\\ & = \big(\psi(u,u^{(1)}),u\big)\\ & = \frac{h}{3}\sum\limits_{i = 1}^{M}\big[u_i\Delta_xu_i^{(1)}+\Delta_x(uu^{(1)})_i\big]u_i\\ & = -\frac{h}{3}\sum\limits_{i = 1}^{M}\big[u_i^{(1)}\Delta_x(uu)_i+(uu^{(1)})_i\Delta_xu_i\big]\\ & = -\frac{h}{3}\sum\limits_{i = 1}^{M}\Big[u_i^{(1)}\Big(u_i\Delta_xu_i+\frac{1}{2}u_{i+1}\delta_xu_{i+\frac{1}{2}}+\frac{1}{2}u_{i-1}\delta_xu_{i-\frac{1}{2}}\Big)+(uu^{(1)})_i\Delta_xu_i\Big]\\ &\leq c_2|u|_1\|u\|\leq\frac{c_2}{2}\big(\|u\|^2+|u|_1^2\big)\leq\frac{c_2}{2}\Big(1+\frac{1}{L}\Big)\|u\|^2+c_2|u|_1^2. \end{align*} (4.9)

    Substituting (4.8) and (4.9) into (4.7), we can obtain

    \frac{2}{\tau}\|u\|^2+\Big(\mu+\frac{2\varepsilon^2}{\tau}\Big)|u|_1^2+\frac{h^2}{18}\Big(\mu+\frac{2\varepsilon^2}{\tau}\Big)\|v\|^2\leq\frac{c_2|\gamma|}{2L}(L+1)\|u\|^2+\frac{c_2|\gamma|}{24}(L+26)|u|_1^2+\frac{c_2h^2|\gamma|}{6}(2L+1)\|v\|^2.

    When \tau\leq\min\Big\{\frac{4L}{c_2|\gamma|(L+1)},\frac{2\varepsilon^2}{3c_2|\gamma|(2L+1)}\Big\}, we have u_i = 0, 1\leq i\leq M, and then v_i = 0, 1\leq i\leq M, follows from (4.6).

    Let h_0>0 and denote

    c_3 = \max\limits_{(x,t)\in[0,L]\times[0,T]}\big\{|u(x,t)|,|u_x(x,t)|\big\}, \quad c_4 = 3+c_3|\gamma|+\frac{3c_3^2\gamma^2h_0^2}{4\mu}+\frac{3c_3^2\gamma^2}{\mu}, \quad c_5 = c_1^2LT\Big(1+\mu^2+\varepsilon^4+\frac{3\mu h_0^2}{16}+\frac{\varepsilon^2h_0^2}{12}\Big)+\frac{13}{12}\varepsilon^2h_0^2c_1^2L, \quad c_6 = \sqrt{\frac{3}{2}c_5{\rm e}^{\frac{3}{2}c_4T}},

    and error functions

    e_i^k = U_i^k-u_i^k, \quad f_i^k = V_i^k-v_i^k, \quad 1\leq i\leq M, \quad 0\leq k\leq N,

    we have the following convergence results.

    Theorem 5. Let \{u(x,t),v(x,t)\} be the solution of (3.1)–(3.5) and \{u_i^k,v_i^k\,|\,1\leq i\leq M,\ 0\leq k\leq N\} be the solution of the difference scheme (3.9)–(3.12). When c_4\tau\leq\frac{1}{3} and h\leq h_0, we have

    \|e^k\|\leq c_6(\tau^2+h^4), \quad \varepsilon|e^k|_1\leq c_6(\tau^2+h^4), \quad \varepsilon\|e^k\|_\infty\leq\frac{c_6\sqrt{L}}{2}(\tau^2+h^4), \quad 0\leq k\leq N.

    Proof. Subtracting (3.9)–(3.12) from (3.6)–(3.8), we can get an error system

    \left\{ {\begin{array}{ll} \delta_te_i^{k+\frac{1}{2}} = \mu f_i^{k+\frac{1}{2}}+\gamma\big(\psi(U^{k+\frac{1}{2}},U^{k+\frac{1}{2}})_i-\psi(u^{k+\frac{1}{2}},u^{k+\frac{1}{2}})_i\big)-\frac{\gamma h^2}{2}\big(\psi(V^{k+\frac{1}{2}},U^{k+\frac{1}{2}})_i-\psi(v^{k+\frac{1}{2}},u^{k+\frac{1}{2}})_i\big)+\varepsilon^2\delta_tf_i^{k+\frac{1}{2}}+P_i^{k+\frac{1}{2}}, & 1\leq i\leq M, \quad 0\leq k\leq N-1, \qquad (4.10)\\ f_i^k = \delta_x^2e_i^k-\frac{h^2}{12}\delta_x^2f_i^k+Q_i^k, & 1\leq i\leq M, \quad 0\leq k\leq N, \qquad (4.11)\\ e_i^0 = 0, & 1\leq i\leq M, \qquad (4.12)\\ e_i^k = e_{i+M}^k, \quad f_i^k = f_{i+M}^k, & 1\leq i\leq M, \quad 1\leq k\leq N. \qquad (4.13) \end{array}} \right.

    Taking the inner product of (4.10) with e^{k+\frac{1}{2}}, we have

    \begin{align*} (\delta_te^{k+\frac{1}{2}},e^{k+\frac{1}{2}}) = &\ \mu(f^{k+\frac{1}{2}},e^{k+\frac{1}{2}})+\gamma\big(\psi(U^{k+\frac{1}{2}},U^{k+\frac{1}{2}})-\psi(u^{k+\frac{1}{2}},u^{k+\frac{1}{2}}),e^{k+\frac{1}{2}}\big)\\ &-\frac{\gamma h^2}{2}\big(\psi(V^{k+\frac{1}{2}},U^{k+\frac{1}{2}})-\psi(v^{k+\frac{1}{2}},u^{k+\frac{1}{2}}),e^{k+\frac{1}{2}}\big)+\varepsilon^2(\delta_tf^{k+\frac{1}{2}},e^{k+\frac{1}{2}})+(P^{k+\frac{1}{2}},e^{k+\frac{1}{2}}). \end{align*} (4.14)

    Applying (2.3) in Lemma 4, we have

    (f^{k+\frac{1}{2}},e^{k+\frac{1}{2}})\leq-|e^{k+\frac{1}{2}}|_1^2-\frac{h^2}{18}\|f^{k+\frac{1}{2}}\|^2+\frac{h^2}{12}(Q^{k+\frac{1}{2}},f^{k+\frac{1}{2}})+(Q^{k+\frac{1}{2}},e^{k+\frac{1}{2}}). (4.15)

    Similar to the derivation in (4.8) and (4.9), we have

    \begin{align} &\quad \big(\psi(U^{k+\frac{1}{2} }, U^{k+\frac{1}{2} })-\psi(u^{k+\frac{1}{2} }, u^{k+\frac{1}{2} }), e^{k+\frac{1}{2} }\big)\\ & = \big(\psi(e^{k+\frac{1}{2} }, U^{k+\frac{1}{2} }), e^{k+\frac{1}{2} }\big)\\ & = \frac{h}{3}\sum\limits_{i = 1}^{M}\Big[e_i^{k+\frac{1}{2} }\Delta_xU_i^{k+\frac{1}{2} }+\Delta_x(e^{k+\frac{1}{2} }U^{k+\frac{1}{2} })_i\Big]e_i^{k+\frac{1}{2} }\\ & = \frac{h}{3}\sum\limits_{i = 1}^{M}\Big[(e_i^{k+\frac{1}{2} })^2\Delta_xU_i^{k+\frac{1}{2} }-e_i^{k+\frac{1}{2} }U_i^{k+\frac{1}{2} }\Delta_xe_i^{k+\frac{1}{2} }\Big]\\ & = \frac{h}{3}\sum\limits_{i = 1}^{M}(e_i^{k+\frac{1}{2} })^2\Delta_xU_i^{k+\frac{1}{2} }{+}\frac{h}{6}\sum\limits_{i = 1}^{M} \frac{U_{i+1}^{k+\frac{1}{2}}-U_i^{k+\frac{1}{2}}}{h}e_i^{k+\frac{1}{2}}e_{i+1}^{k+\frac{1}{2} }\\ &\leq\frac{c_3}{2}\|e^{k+\frac{1}{2} }\|^2 \end{align} (4.16)

    and

    \begin{align} &\quad -\big(\psi(V^{k+\frac{1}{2} }, U^{k+\frac{1}{2}}) -\psi(v^{k+\frac{1}{2} }, u^{k+\frac{1}{2} }), e^{k+\frac{1}{2} }\big)\\ & = -\big(\psi(f^{k+\frac{1}{2}}, U^{k+\frac{1}{2}}) , e^{k+\frac{1}{2}}\big) = -\frac{h}{3}\sum\limits_{i = 1}^{M}\Big[f_i^{k+\frac{1}{2}}\Delta_xU_i^{k+\frac{1}{2}} +\Delta_x(f^{k+\frac{1}{2}}U^{k+\frac{1}{2}} )_i\Big] e_i^{k+\frac{1}{2}}\\ & = -\frac{h}{3}\sum\limits_{i = 1}^{M}f_i^{k+\frac{1}{2}}e_i^{k+\frac{1}{2}}\Delta_xU_i^{k+\frac{1}{2}} +\frac{h}{3}\sum\limits_{i = 1}^{M}f_i^{k+\frac{1}{2}}U^{k+\frac{1}{2}}_i \Delta_xe_i^{k+\frac{1}{2}}\\ &\leq \frac{1}{3}c_3||f^{k+\frac{1}{2}}||\cdot||e^{k+\frac{1}{2}}|| +\frac{1}{3}c_3||f^{k+\frac{1}{2}}||\cdot||\Delta_xe^{k+\frac{1}{2}}||. \end{align} (4.17)

    Substituting (4.15)–(4.17) into (4.14), and using (2.3)–(2.4) in Lemma 4, we obtain

    \begin{align} &\frac{1}{2\tau}(\|e^{k+1}\|^2-\|e^k\|^2)\\ \leq&\, \mu\Big (-|e^{k+\frac{1}{2}}|_1^2-\frac{h^2}{18}||f^{k+\frac{1}{2}}||^2 +\frac{h^2}{12}(Q^{k+\frac{1}{2}}, f^{k+\frac{1}{2}}) +(Q^{k+\frac{1}{2}}, e^{k+\frac{1}{2}})\Big)\\ &+ \frac{1}{2}c_3|\gamma|\cdot\|e^{k+\frac{1}{2} }\|^2 +\frac{c_3 h^2|\gamma|}{6} \left(||f^{k+\frac{1}{2}}||\cdot||e^{k+\frac{1}{2}}||+||f^{k+\frac{1}{2}}||\cdot||\Delta_xe^{k+\frac{1}{2}}||\right)\\ &-\frac{\varepsilon ^2}{2\tau}\Big(|e^{k+1}|_{1}^{2}-|e^{k}|_{1}^{2}+\frac{h^{2}}{12}(\|f^{k+1}\|^{2}-\|f^k\|^{2}) -\frac{h^{4}}{144}(|f^{k+1}|_{1}^{2}-|f^k|_{1}^{2})-2\tau(\delta_tQ^{k+\frac{1}{2}}, e^{k+\frac{1}{2}})\Big)\\ &+\frac{\varepsilon ^2h^2}{12}(\delta_tf^{k+\frac{1}{2}}, Q^{k+\frac{1}{2}})+(P^{k+\frac{1}{2} }, e^{k+\frac{1}{2}}). \end{align}

    Then, we have

    \begin{align*} &\|e^{k+1}\|^2-\|e^k\|^2 +\varepsilon^2\Big(|e^{k+1}|_{1}^{2}-|e^{k}|_{1}^{2}+\frac{h^{2}}{12}(\|f^{k+1}\|^{2}-\|f^k\|^{2}) -\frac{h^{4}}{144}(|f^{k+1}|_{1}^{2}-|f^k|_{1}^{2})\Big)\\ \leq& -2\mu\tau|e^{k+\frac{1}{2}}|_1^2-\frac{\mu\tau h^2}{9}||f^{k+\frac{1}{2}}||^2 +\frac{\mu\tau h^2}{6}(Q^{k+\frac{1}{2}}, f^{k+\frac{1}{2}}) +2\mu\tau (Q^{k+\frac{1}{2}}, e^{k+\frac{1}{2}})\\ &+c_3\tau|\gamma|\cdot\|e^{k+\frac{1}{2} }\|^2 +\frac{ c_3\tau h^2|\gamma|}{3}||f^{k+\frac{1}{2}}||\cdot||e^{k+\frac{1}{2}}|| +\frac{ c_3\tau h^2|\gamma|}{3}||f^{k+\frac{1}{2}}||\cdot||\Delta_x e^{k+\frac{1}{2}}||\nonumber\\ &+2\tau\varepsilon^2(\delta_tQ^{k+\frac{1}{2}}, e^{k+\frac{1}{2}})+2\tau (P^{k+\frac{1}{2}}, e^{k+\frac{1}{2}})+\frac{\tau\varepsilon ^2h^2}{6}(\delta_tf^{k+\frac{1}{2}}, Q^{k+\frac{1}{2}}). \end{align*}

    Using the Cauchy-Schwarz inequality, we can rearrange the inequality above into the following form

    \begin{align*} &\|e^{k+1}\|^2-\|e^k\|^2+\varepsilon^2(|e^{k+1}|_{1}^{2}-|e^{k}|_{1}^{2}) +\frac{\varepsilon^2h^{2}}{12}\Big[\Big(\|f^{k+1}\|^{2}-\frac{h^{2}}{12}|f^{k+1}|_{1}^{2}\Big) -\Big(\|f^{k}\|^{2}-\frac{h^{2}}{12}|f^{k}|_{1}^{2}\Big)\Big]\notag\\ \leq& -2\mu\tau|e^{k+\frac{1}{2}}|_1^2-\frac{\mu\tau h^2}{9}||f^{k+\frac{1}{2}}||^2 +\frac{\mu\tau h^2}{6}\|Q^{k+\frac{1}{2}}\|\cdot\|f^{k+\frac{1}{2}}\| +2\mu\tau\|Q^{k+\frac{1}{2}}\|\cdot\|e^{k+\frac{1}{2}}\|\notag\\ &+c_3\tau|\gamma|\cdot\|e^{k+\frac{1}{2} }\|^2 +\frac{ c_3|\gamma|\tau h^2}{3}||f^{k+\frac{1}{2}}||\cdot||e^{k+\frac{1}{2}}|| +\frac{ c_3|\gamma|\tau h^2}{3}||f^{k+\frac{1}{2}}||\cdot||\Delta_x e^{k+\frac{1}{2}}||\notag\\ & +2\tau\varepsilon^2\|\delta_tQ^{k+\frac{1}{2}}\|\cdot\|e^{k+\frac{1}{2}}\|+2\tau\| P^{k+\frac{1}{2}}\|\cdot\|e^{k+\frac{1}{2}}\|+\frac{\tau\varepsilon ^2h^2}{6}(\delta_tf^{k+\frac{1}{2}}, Q^{k+\frac{1}{2}})\notag\\ \leq& -2\mu\tau|e^{k+\frac{1}{2}}|_1^2-\frac{\mu\tau h^2}{9}||f^{k+\frac{1}{2}}||^2 +\frac{3\mu\tau h^2}{16}\|Q^{k+\frac{1}{2}}\|^2+\frac{\mu\tau h^2}{27}\|f^{k+\frac{1}{2}}\|^2 +\mu^2\tau\|Q^{k+\frac{1}{2}}\|^2+\tau\|e^{k+\frac{1}{2}}\|^2\notag\\ &+c_3|\gamma|\tau\|e^{k+\frac{1}{2} }\|^2 +\frac{\mu\tau h^2}{27}||f^{k+\frac{1}{2}}||^2+\frac{ 3c_3^2\gamma^2\tau h^2}{4\mu}||e^{k+\frac{1}{2}}||^2 +\frac{\mu\tau h^2}{27}||f^{k+\frac{1}{2}}||^2+\frac{ 3c_3^2\gamma^2\tau }{\mu}\|e^{k+\frac{1}{2}}\|^2\notag\\ & +\tau\varepsilon^4\|\delta_tQ^{k+\frac{1}{2}}\|^2+\tau\|e^{k+\frac{1}{2}}\|^2 +\tau\| P^{k+\frac{1}{2}}\|^2+\tau\|e^{k+\frac{1}{2}}\|^2 +\frac{\tau\varepsilon ^2h^2}{6}(\delta_tf^{k+\frac{1}{2}}, Q^{k+\frac{1}{2}})\notag\\ \leq& \tau\Big(\frac{3}{2}+\frac{ c_3|\gamma|}{2}+\frac{3c_3^2\gamma^2h^2}{8\mu} +\frac{3c_3^2\gamma^2}{2\mu}\Big) (\|e^{k+1}\|^2+\|e^{k}\|^2) +\tau\Big(\mu^2+\frac{3\mu h^2}{16}\Big)\|Q^{k+\frac{1}{2}}\|^2\notag\\ & +\tau\varepsilon^4\|\delta_tQ^{k+\frac{1}{2}}\|^2+\tau\|P^{k+\frac{1}{2}}\|^2 +\frac{\tau\varepsilon^2h^2}{6}(\delta_tf^{k+\frac{1}{2}}, Q^{k+\frac{1}{2}}).\notag \end{align*}

    Replacing the superscript k with l and summing over l from 0 to k , we get

    \begin{align} &\|e^{k+1}\|^2+\varepsilon^2|e^{k+1}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{12}\Big(\|f^{k+1}\|^{2}-\frac{h^{2}}{12}|f^{k+1}|_{1}^{2}\Big) -\frac{\varepsilon^2h^{2}}{12}\Big(\|f^{0}\|^{2}-\frac{h^{2}}{12}|f^{0}|_{1}^{2}\Big)\\ \leq& \tau\Big(\frac{3}{2}+\frac{ c_3|\gamma|}{2}+\frac{3c_3^2\gamma^2h^2}{8\mu} +\frac{3c_3^2\gamma^2}{2\mu}\Big) \sum\limits_{l = 0}^{k}(\|e^{l+1}\|^2+\|e^{l}\|^2) +\tau(k+1)\Big(\mu^2+\frac{3\mu h^2}{16}\Big)Lc_1^2h^8\\ & +\tau\varepsilon^4(k+1)Lc_1^2(\tau^2+h^4)^2+\tau(k+1)Lc_1^2(\tau^2+h^4)^2 +\frac{\tau\varepsilon ^2h^2}{6}\sum\limits_{l = 0}^{k}(\delta_tf^{l+\frac{1}{2}}, Q^{l+\frac{1}{2}}) , \quad 0\leq k\leq N-1. \end{align} (4.18)

    For the last term on the right-hand side of (4.18), we have

    \begin{align} \sum\limits_{l = 0}^{k}(\delta_{t}f^{l+\frac{1}{2}}, Q^{l+\frac{1}{2}}) & = \frac{1}{\tau}\Big[\sum\limits_{l = 0}^{k}(f^{l+1}, Q^{l+\frac{1}{2}}) -\sum\limits_{l = 0}^{k}(f^{l}, Q^{l+\frac{1}{2}})\Big]\\ & = \frac{1}{\tau}\big[(f^{k+1}, Q^{k+\frac{1}{2}}) -(f^0 , Q^{\frac{1}{2}})\big]-\sum\limits_{l = 1}^k(f^l, \delta_tQ^l)\\ &\leq\frac{1}{\tau}\Big(\frac{1}{6}\|f^{k+1}\|^2 +\frac{3}{2}\|Q^{k+\frac{1}{2}}\|^2\Big) +\frac{1}{2\tau}(\|f^0\|^2+\|Q^{\frac{1}{2}}\|^2) +\frac{1}{2}\sum\limits_{l = 1}^k(\|f^l\|^2+\|\delta_tQ^l\|^2). \end{align} (4.19)

    Substituting (4.19) into (4.18) and using Lemma 2, we get

    \begin{align} &\|e^{k+1}\|^2+\varepsilon^2|e^{k+1}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{18}\|f^{k+1}\|^{2}\\ \leq& \frac{\varepsilon^2h^{2}}{12}\Big(\|f^{0}\|^{2}-\frac{h^{2}}{12}|f^{0}|_{1}^{2}\Big) +2\tau\Big(\frac{3}{2}+\frac{ c_3|\gamma|}{2}+\frac{3c_3^2\gamma^2h^2}{8\mu} +\frac{3c_3^2\gamma^2}{2\mu}\Big) \sum\limits_{l = 0}^{k}\|e^{l+1}\|^2\\ &+\tau\Big(\mu^2+\frac{3\mu h^2}{16}\Big)(k+1)Lc_1^2h^8 +\tau\varepsilon^4(k+1)Lc_1^2(\tau^2+h^4)^2+\tau(k+1)Lc_1^2(\tau^2+h^4)^2\\ &+\frac{\tau\varepsilon ^2h^2}{6}\Big[\frac{1}{2}\sum\limits_{l = 1}^{k}\|f^l\|^2 +\frac{1}{2}kLc_1^2(\tau^2+h^4)^2 +\frac{1}{\tau}\Big(\frac{1}{6}\|f^{k+1}\|^2+\frac{3}{2}Lc_1^2h^8\Big) +\frac{1}{2\tau}(\|f^0\|^2+Lc_1^2h^8)\Big]. \end{align}

    We can rearrange the inequality above into the following form

    \begin{align} &\|e^{k+1}\|^2+\varepsilon^2|e^{k+1}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{36}\|f^{k+1}\|^{2}\\ \leq&\tau\Big(3+c_3|\gamma|+\frac{3c_3^2\gamma^2h^2}{4\mu} +\frac{3c_3^2\gamma^2}{\mu}\Big) \Big[\sum\limits_{l = 0}^{k}\Big(\|e^{l}\|^2+\varepsilon^2|e^{l}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{36}\|f^{l}\|^{2}\Big)+\|e^{k+1}\|^2\Big]\\ &+\frac{\varepsilon^2h^{2}}{12}\|f^{0}\|^{2}+\tau\Big(\mu^2+\frac{3\mu h^2}{16}\Big)(k+1)Lc_1^2h^8 +\tau\varepsilon^4(k+1)Lc_1^2(\tau^2+h^4)^2+\tau(k+1)Lc_1^2(\tau^2+h^4)^2\\ &+\frac{\tau\varepsilon^2h^2}{12}kLc_1^2(\tau^2+h^4)^2 +\frac{\varepsilon^2h^2}{4}Lc_1^2h^8+\frac{\varepsilon^2h^{2}}{12}\|f^{0}\|^{2} +\frac{\varepsilon^2h^{2}}{12}Lc_1^2h^8. \end{align} (4.20)

    Denote

    F^k = \|e^k\|^2+\varepsilon^2|e^{k}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{36}\|f^{k}\|^{2}, \quad 1\leq k\leq N.

    Combining (2.6) in Lemma 4 with (4.12), we have

    \begin{align} \frac{\varepsilon^2h^{2}}{12}\|f^{0}\|^{2} \leq \frac{3\varepsilon^2h^{2}}{8}\|Q^{0}\|^{2} \leq \frac{3}{8}c_1^2\varepsilon^2h^{10}L. \end{align} (4.21)

    Substituting (4.21) into (4.20), when h\leq h_0 and c_4\tau\leq\frac{1}{3} , (4.20) can be rewritten as

    F^{k+1}\leq \tau c_4\sum\limits_{l = 0}^kF^{l}+\tau c_4F^{k+1}+c_5(\tau^2+h^4)^2,

    which implies that

    F^{k+1}\leq \frac{3}{2}c_4\tau\sum\limits_{l = 0}^k F^{l}+\frac{3}{2}c_5(\tau^2+h^4)^2.

    According to the Gronwall inequality, we have

    \begin{align} F^{k+1}\leq \frac{3}{2}c_5 {\rm e}^{\frac{3}{2}c_4T}(\tau^2+h^4)^2 = c_6^2(\tau^2+h^4)^2, \quad 0\leq k\leq N-1. \end{align}

    Thus, it holds that

    \|e^{k}\|\leq c_6(\tau^2+h^4), \quad\varepsilon|e^{k}|_1\leq c_6(\tau^2+h^4) , \quad\varepsilon\|e^{k}\|_\infty\leq \frac{c_6\sqrt{L}}{2}(\tau^2+h^4), \quad 0\leq k\leq N.

    Below, we consider the stability of the difference scheme (3.9)–(3.12). Suppose that \lbrace \, \tilde{u}_i^k, \, \tilde{v}_i^k\, | \, 1\leq i\leq M, \, 0\leq k\leq N \, \rbrace is the solution of

    \left\{ {\begin{array}{l} \delta_t \tilde{u}_i^{k+\frac{1}{2}} = \mu \tilde{v}_i^{k+\frac{1}{2}}+\gamma\Big(\psi(\tilde{u}^{k+\frac{1}{2}}, \tilde{u}^{k+\frac{1}{2}})_i -\frac{h^2}{2}\psi(\tilde{v}^{k+\frac{1}{2}}, \tilde{u}^{k+\frac{1}{2}})_i\Big) +\varepsilon^2\delta_t\tilde{v}_i^{k+\frac{1}{2}}, \notag\\ \qquad \qquad 1\leq i\leq M, \quad0\leq k\leq N-1, \notag\\ \tilde{v}_i^k = \delta_x^2\tilde{u}_i^k-\frac{h^2}{12}\delta_x^2\tilde{v}_i^k, \quad1\leq i\leq M, \quad0\leq k\leq N, \\ \tilde{u}_i^0 = \varphi(x_i)+r(x_i), \quad 1\leq i\leq M, \notag\\ \tilde{u}_i^k = \tilde{u}_{i+M}^k, \quad \tilde{v}_i^k = \tilde{v}_{i+M}^k , \quad 1\leq i\leq M, \quad 1\leq k\leq N.\notag \end{array}} \right. (4.22)

    Denote

    \xi_i^k = \tilde{u}_i^k-u_i^k, \quad\eta_i^k = \tilde{v}_i^k-v_i^k, \quad 1\leq i\leq M, \quad0\leq k\leq N.

    Subtracting (3.9)–(3.12) from (4.22), we obtain the perturbation equation as follows

    \left\{ {\begin{array}{l} \delta_t \xi_i^{k+\frac{1}{2}} = \mu \eta_i^{k+\frac{1}{2}}+\gamma\big[\psi(\tilde{u}^{k+\frac{1}{2}}, \tilde{u}^{k+\frac{1}{2}})_i -\psi(u^{k+\frac{1}{2}}, u^{k+\frac{1}{2}})_i\big] -\frac{\gamma h^2}{2}\big[\psi(\tilde{v}^{k+\frac{1}{2}}, \tilde{u}^{k+\frac{1}{2}})_i-\psi(v^{k+\frac{1}{2}}, u^{k+\frac{1}{2}})_i \big] & (4.23)\\ \;\;\;\;\;\;\;\;+\varepsilon^2\delta_t\eta_i^{k+\frac{1}{2}}, \quad 1\leq i\leq M, \quad0\leq k\leq N-1, \\ \eta_i^k = \delta_x^2\xi_i^k-\frac{h^2}{12}\delta_x^2\eta_i^k, \quad 1\leq i\leq M, \quad0\leq k\leq N, &(4.24)\\ \xi_i^0 = r(x_i), \quad 1\leq i\leq M, &(4.25) \\ \xi_i^k = \xi_{i+M}^k, \quad\eta_i^k = \eta_{i+M}^k , \quad 1\leq i\leq M, \quad1\leq k\leq N. &(4.26) \end{array}} \right.

    Theorem 6. Let \lbrace\, \xi_i^k, \, \eta_i^k\, \vert\, 1\leq i\leq M, \, 0\leq k\leq N\, \rbrace be the solution of (4.23)–(4.26). When c_4\tau\leq\frac{1}{3} and h\leq h_0 , we have

    \|\xi^{k}\|\leq c_7|r|_1, \quad\varepsilon|\xi^{k}|_1\leq c_7|r|_1, \quad\varepsilon\|\xi^{k}\|_\infty\leq \frac{c_7\sqrt{L}}{2}|r|_1,

    where c_7 = \frac{1}{2}\sqrt{{\rm e}^{\frac{3}{2}c_4T}\Big(L^2+15\varepsilon^2\Big)}.

    Proof. Taking an inner product of (4.23) with \xi^{k+\frac{1}{2}} , we have

    \begin{align*} (\delta_t\xi^{k+\frac{1}{2}}, \xi^{k+\frac{1}{2}}) = &\, \mu(\eta^{k+\frac{1}{2}}, \xi^{k+\frac{1}{2}}) +\gamma\big(\psi(\tilde{u}^{k+\frac{1}{2}}, \tilde{u}^{k+\frac{1}{2}}) -\psi(u^{k+\frac{1}{2}}, u^{k+\frac{1}{2}}), \xi^{k+\frac{1}{2}}\big)\notag\\ &-\frac{\gamma h^2}{2}\big(\psi(\tilde{v}^{k+\frac{1}{2}}, \tilde{u}^{k+\frac{1}{2}}) -\psi(v^{k+\frac{1}{2}}, u^{k+\frac{1}{2}}), \xi^{k+\frac{1}{2}}\big) +\varepsilon^2(\delta_t\eta^{k+\frac{1}{2}}, \xi^{k+\frac{1}{2}}). \end{align*}

    Similar to the analysis technique in Theorem 5, we obtain

    \begin{align*} &\|\xi^{k+1}\|^2-\|\xi^k\|^2+\varepsilon^2(|\xi^{k+1}|_{1}^{2}-|\xi^{k}|_{1}^{2}) +\frac{\varepsilon^2h^{2}}{12}\Big[\Big(\|\eta^{k+1}\|^{2}-\frac{h^{2}}{12}|\eta^{k+1}|_{1}^{2}\Big) -\Big(\|\eta^{k}\|^{2}-\frac{h^{2}}{12}|\eta^{k}|_{1}^{2}\Big)\Big]\\ \leq& -\frac{\mu\tau h^2}{9}||\eta^{k+\frac{1}{2}}||^2 +c_3\tau|\gamma|\cdot\|\xi^{k+\frac{1}{2} }\|^2 +\frac{ c_3\tau h^2|\gamma|}{3}||\eta^{k+\frac{1}{2}}||\cdot||\xi^{k+\frac{1}{2}}|| +\frac{ c_3\tau h^2|\gamma|}{3}||\eta^{k+\frac{1}{2}}||\cdot||\Delta_x \xi^{k+\frac{1}{2}}||\nonumber\\ \leq& c_3\tau|\gamma|\cdot||\xi^{k+\frac{1}{2}}||^2 +\frac{c_3^2\tau h^2\gamma^2}{2\mu}||\xi^{k+\frac{1}{2}}||^2 +\frac{2c_3^2\tau \gamma^2}{\mu}\|\xi^{k+\frac{1}{2}}\|^2\nonumber\\ \leq& \tau\Big(\frac{c_3|\gamma|}{2} +\frac{c_3^2\gamma^2 h^2}{4\mu}+\frac{c_3^2\gamma^2}{\mu}\Big) (\|\xi^{k+1}\|^2+\|\xi^k\|^2). \end{align*}

    Replacing the superscript k with l and summing over l from 0 to k , we get

    \begin{align*} &\|\xi^{k+1}\|^2+\varepsilon^2|\xi^{k+1}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{12}\Big(\|\eta^{k+1}\|^{2}-\frac{h^{2}}{12}|\eta^{k+1}|_{1}^{2}\Big) -\|\xi^0\|^2-\varepsilon^2|\xi^{0}|_{1}^{2} -\frac{\varepsilon^2h^{2}}{12}\Big(\|\eta^{0}\|^{2}-\frac{h^{2}}{12}|\eta^{0}|_{1}^{2}\Big)\\ \leq& 2\tau\Big(\frac{c_3|\gamma|}{2} +\frac{c_3^2\gamma^2 h^2}{4\mu}+\frac{c_3^2\gamma^2}{\mu}\Big) \Big(\sum\limits_{l = 0}^k\|\xi^{l}\|^2+\|\xi^{k+1}\|^2\Big). \end{align*}

    Using Lemma 2, for h\leq h_0 we can rearrange the inequality above into the following form

    \begin{align} &\|\xi^{k+1}\|^2+\varepsilon^2|\xi^{k+1}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{18}\|\eta^{k+1}\|^{2} \\ \leq& \tau\Big(c_3|\gamma|+\frac{c_3^2\gamma^2 h^2}{2\mu}+\frac{2c_3^2\gamma^2}{\mu}\Big) \Big(\sum\limits_{l = 0}^k\|\xi^{l}\|^2+\|\xi^{k+1}\|^2\Big) +\|\xi^0\|^2+\varepsilon^2|\xi^{0}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{12}\|\eta^{0}\|^2\\ \leq& \tau c_4 \Big(\sum\limits_{l = 0}^k\|\xi^{l}\|^2+\|\xi^{k+1}\|^2\Big) +\|\xi^0\|^2+\varepsilon^2|\xi^{0}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{12}\|\eta^{0}\|^2. \end{align} (4.27)

    Denote

    \tilde{F}^k = \|\xi^k\|^2+\varepsilon^2|\xi^{k}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{18}\|\eta^{k}\|^{2}, \quad 0\leq k\leq N.

    Combining (2.6) in Lemma 4 with (4.25), we have

    \begin{align} \frac{\varepsilon^2h^{2}}{12}\|\eta^{0}\|^{2} \leq \frac{3\varepsilon^2}{2}|\xi^{0}|_1^{2} = \frac{3\varepsilon^2}{2}|r|_1^{2}. \end{align} (4.28)

    Substituting (4.28) into (4.27), we can rewrite (4.27) as

    \begin{align} \|\xi^{k+1}\|^2+\varepsilon^2|\xi^{k+1}|_{1}^{2} +\frac{\varepsilon^2h^{2}}{18}\|\eta^{k+1}\|^{2} \leq \tau c_4 \Big(\sum\limits_{l = 0}^k\|\xi^{l}\|^2+\|\xi^{k+1}\|^2\Big) +\frac{L^2}{6}|r|_1^2+\varepsilon^2|r|_{1}^{2} +\frac{3\varepsilon^2}{2}|r|_1^{2}. \end{align}

    Then, we have

    (1-c_4\tau)\tilde{F}^{k+1}\leq \tau c_4 \sum\limits_{l = 0}^k\tilde{F}^{l} +\Big(\frac{5\varepsilon^2}{2}+\frac{L^2}{6} \Big)|r|_1^{2}, \quad 0\leq k\leq N-1.

    According to the Gronwall inequality, when c_4\tau\leq\frac{1}{3} , we have

    \tilde{F}^{k}\leq c_7^2|r|_1^2, \quad 0\leq k\leq N,

    where c_7 = \frac{1}{2}\sqrt{{\rm e}^{\frac{3}{2}c_4T}\Big(L^2+15\varepsilon^2\Big)}.

    Therefore, it holds that

    \|\xi^{k}\|\leq c_7|r|_1, \quad\varepsilon|\xi^{k}|_1\leq c_7|r|_1, \quad\varepsilon\|\xi^{k}\|_\infty\leq \frac{c_7\sqrt{L}}{2}|r|_1.

    In this section, we perform numerical experiments to verify the effectiveness of the difference scheme and the accuracy of the theoretical results. Before conducting the experiments, we first introduce an algorithm for solving the nonlinear compact scheme. Denote

    \begin{align*} u^k = (u_1^k, u_2^k, \cdots, u_M^k)^T, \quad\nu^k = (v_1^k, v_2^k, \cdots, v_M^k)^T, \quad w = (w_1, w_2, \cdots, w_M)^T, \quad z = (z_1, z_2, \cdots, z_M)^T, \end{align*}

    where 0\leq k\leq N . The algorithm of the compact difference scheme (3.9)–(3.12) can be described as follows:

    \textbf{Step 1} Solve u^0 and v^0 based on (3.10) and (3.11).

    \textbf{Step 2} Suppose that u^k is known. The following linear system of equations is solved iteratively to approximate the solution of the difference scheme (3.9)–(3.12): for 1\leq i\leq M , we have

    \left\{ {\begin{array}{l} \frac{2}{\tau}(w_i^{(l+1)}-u_i^k) = \mu z_i^{(l+1)}+\gamma\Big[\psi(w^{(l)}, w^{(l+1)})_i-\frac{h^2}{2}\psi(z^{(l)}, w^{(l+1)})_i\Big] +\frac{2}{\tau}\varepsilon^2(z_i^{(l+1)}-\nu_i^k), \notag\\ z_i^{(l)} = \delta_x^2w_i^{(l)}-\frac{h^2}{12}\delta_x^2z_i^{(l)}, \notag\\ w_i^{(l)} = w_{i+M}^{(l)}, \quad z_i^{(l)} = z_{i+M}^{(l)}, \notag \end{array}} \right.

    until

    \max\limits_{1\leq i\leq M}|w_i^{(l+1)}-w_i^{(l)}|\leq \epsilon, \quad l = 0, 1, 2, \ldots.

    Let

    u_i^{k+1} = 2w_i^{(l+1)}-u_i^k, \quad v_i^{k+1} = 2z_i^{(l+1)}-v_i^k, \quad1\leq i\leq M.

    In the following numerical experiments, we set the tolerance error \epsilon = 1\times 10^{-12} for each iteration unless otherwise specified.
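As an aside on the inner iteration, the compact relation z_i = \delta_x^2w_i-\frac{h^2}{12}\delta_x^2z_i with periodic boundary conditions can be inverted exactly in Fourier space, since \delta_x^2 is diagonalized by the discrete Fourier basis. The following NumPy sketch is our own illustration of this fourth-order compact approximation of u_{xx} (the grid, test function, and FFT-based solver are not from the paper):

```python
import numpy as np

def compact_second_derivative(u, h):
    # Solve (I + (h^2/12) * delta_x^2) v = delta_x^2 u on a periodic grid
    # by diagonalizing the three-point operator delta_x^2 with the FFT.
    M = u.size
    k = np.fft.fftfreq(M, d=h)                               # frequencies in cycles per unit length
    lam = (2.0 * np.cos(2.0 * np.pi * k * h) - 2.0) / h**2   # eigenvalues of delta_x^2
    v_hat = lam * np.fft.fft(u) / (1.0 + h**2 * lam / 12.0)  # 1 + h^2*lam/12 >= 2/3, never zero
    return np.fft.ifft(v_hat).real

errors = []
for M in (16, 32):
    h = 2.0 / M                                 # period L = 2, as in Example 1
    x = h * np.arange(M)
    v = compact_second_derivative(np.sin(np.pi * x), h)
    errors.append(np.max(np.abs(v + np.pi**2 * np.sin(np.pi * x))))
print(errors)  # halving h reduces the error by roughly a factor of 16
```

Halving h reduces the maximum error by about a factor of 16, consistent with the fourth-order spatial accuracy of the compact operator.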

    When the exact solution is known, we define the discrete error in the L^\infty -norm as follows:

    {\rm E}_\infty(h, \tau) = \max\limits_{1\leq i\leq M, \, 0\leq k\leq N}|U_i^k-u_i^k|,

    where U_i^k and u_i^k represent the analytical solution and the numerical solution, respectively. Additionally, the convergence orders in space and time are defined as follows:

    {\rm Ord}_\infty^h = \log_2\frac{{\rm E}_\infty\left(2h, \tau\right)}{{\rm E}_\infty\left(h, \tau\right)}, \quad {\rm Ord}_\infty^\tau = \log_2\frac{{\rm E}_\infty\left(h, 2\tau\right)}{{\rm E}_\infty\left(h, \tau\right)}.

    When the exact solution is unknown, we use a posteriori error estimation to verify the convergence orders in space and time. For a sufficiently small h , we denote

    {\rm F}_\infty(h, \tau) = \max\limits_{1\leq i\leq M, \, 0\leq k\leq N}|u_i^k(h, \tau)-u_{2i}^k(h/2, \tau)|, \quad {\rm Ord}_\infty^h = \log_2\frac{{\rm F}_\infty(2h, \tau)}{{\rm F}_\infty(h, \tau)}.

    Similarly, for sufficiently small \tau , we denote

    {\rm G}_\infty(h, \tau) = \max\limits_{1\leq i\leq M, \, 0\leq k\leq N}|u_i^k(h, \tau)-u_i^{2k}(h, \tau/2)|, \quad {\rm Ord}_\infty^\tau = \log_2\frac{{\rm G}_\infty(h, 2\tau)}{{\rm G}_\infty(h, \tau)}.

    Example 1. We first consider the following equation

    \left\{ {\begin{array}{l} u_t = u_{xx}+uu_x+u_{xxt}+f\left(x, t\right), \quad 0 < x < 2, \quad 0 < t\leq 1, \notag\\ u\left(x, 0\right) = \sin\left(\pi x\right), \quad 0\leq x\leq 2, \notag\\ u\left(0, t\right) = u\left(2, t\right), \quad 0 < t\leq 1, \notag \end{array}} \right.

    where

    f\left(x, t\right) = {\rm e}^t\sin\left(\pi x\right)+2\pi^2{\rm e}^t\sin\left(\pi x\right)-\pi {\rm e}^{2t}\sin\left(\pi x\right)\cos(\pi x).

    The initial condition is determined by the exact solution u\left(x, t\right) = {\rm e}^t\sin(\pi x) with the period L = 2 and T = 1 .
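It is straightforward to verify that u = {\rm e}^t\sin(\pi x) satisfies u_t = u_{xx}+uu_x+u_{xxt}+f with this f . As a quick sanity check, the residual can be evaluated at an arbitrary point with central finite differences (the sample point and step below are our choices):

```python
import math

def u(x, t):
    # Exact solution of Example 1.
    return math.exp(t) * math.sin(math.pi * x)

def f(x, t):
    # Forcing term of Example 1.
    return (math.exp(t) * math.sin(math.pi * x)
            + 2 * math.pi**2 * math.exp(t) * math.sin(math.pi * x)
            - math.pi * math.exp(2 * t) * math.sin(math.pi * x) * math.cos(math.pi * x))

x0, t0, d = 0.3, 0.5, 1e-3
u_t  = (u(x0, t0 + d) - u(x0, t0 - d)) / (2 * d)
u_x  = (u(x0 + d, t0) - u(x0 - d, t0)) / (2 * d)
u_xx = (u(x0 + d, t0) - 2 * u(x0, t0) + u(x0 - d, t0)) / d**2
u_xxt = ((u(x0 + d, t0 + d) - 2 * u(x0, t0 + d) + u(x0 - d, t0 + d))
       - (u(x0 + d, t0 - d) - 2 * u(x0, t0 - d) + u(x0 - d, t0 - d))) / (2 * d**3)
residual = u_t - u_xx - u(x0, t0) * u_x - u_xxt - f(x0, t0)
print(abs(residual))  # close to zero: only finite-difference truncation error remains
```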

    The numerical results are reported in Table 1 and Figure 1.

    Table 1.  Convergence orders versus numerical errors for the scheme (3.9)–(3.12) with reduced step-sizes under L = 2 and T = 1 .
    \tau = 1/1000 h = 1/50
    h {\rm E}_\infty(h, \tau) {\rm Ord}_\infty^h \tau {\rm E}_\infty(h, \tau) {\rm Ord}_\infty^\tau
    1/2 6.1769e-02 * 1/4 6.7342e-03 *
    1/4 7.4321e-03 3.0550 1/8 1.6884e-03 1.9959
    1/8 4.8805e-04 3.9287 1/16 4.2259e-04 1.9983
    1/16 3.1790e-05 3.9404 1/32 1.0587e-04 1.9970
    1/32 2.0894e-06 3.9274 1/64 2.6669e-05 1.9890

    Figure 1.  The numerical solution u(x, t) and the numerical error surface |U(x, t)-u(x, t)| with \tau = 1/1000 , h = 1/50 , L = 2 and T = 1 .

    In Table 1, we successively halve the spatial step-size h (h = 1/2, 1/4, 1/8, 1/16, 1/32) while keeping the time step-size fixed at \tau = 1/1000 . Conversely, we successively halve the time step-size \tau (\tau = 1/4, 1/8, 1/16, 1/32, 1/64) while maintaining the spatial step-size h = 1/50 .

    As we can see, the spatial convergence order approaches four and the temporal convergence order approaches two in the maximum norm, which is consistent with our convergence results. Comparing our numerical results in Table 1 with those of the scheme in [42] reported in Table 2, we find that our scheme is more efficient and accurate.
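The Ord columns of Table 1 can be reproduced directly from consecutive errors via {\rm Ord} = \log_2[{\rm E}(2h, \tau)/{\rm E}(h, \tau)] , for example in Python:

```python
import math

# Errors from Table 1: spatial run (tau = 1/1000) and temporal run (h = 1/50).
Eh = [6.1769e-02, 7.4321e-03, 4.8805e-04, 3.1790e-05, 2.0894e-06]
Et = [6.7342e-03, 1.6884e-03, 4.2259e-04, 1.0587e-04, 2.6669e-05]
ord_h = [math.log2(a / b) for a, b in zip(Eh, Eh[1:])]
ord_t = [math.log2(a / b) for a, b in zip(Et, Et[1:])]
print(ord_h)  # approaches 4, matching the Ord_h column of Table 1
print(ord_t)  # approaches 2, matching the Ord_tau column of Table 1
```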

    Table 2.  Convergence orders versus numerical errors for the scheme [42] with reduced step-size under L = 2 and T = 1 .
    \tau = 1/1000 h = 1/200
    h {\rm E}_\infty(h, \tau) {\rm Ord}_\infty^h \tau {\rm E}_\infty(h, \tau) {\rm Ord}_\infty^\tau
    1/2 5.0747e-01 * 1/4 6.7850e-03 *
    1/4 1.1891e-01 2.0935 1/8 1.7378e-03 1.9651
    1/8 3.0838e-02 1.9471 1/16 4.7159e-04 1.8816
    1/16 7.6531e-03 2.0106 1/32 1.5477e-04 1.6074
    1/32 1.9208e-03 1.9943 1/64 7.5545e-05 1.0347


    The first subgraph in Figure 1 illustrates the evolution surface of the numerical solution u(x, t) with \tau = 1/1000 , h = 1/50 , L = 2 and T = 1 ; it successfully reflects the panorama of the exact solution. To verify the accuracy of the difference scheme (3.9)–(3.12), we plot the numerical error surface with the same parameters in the second subgraph of Figure 1.

    We observe that the decay rate of the numerical error in the maximum norm approaches a fixed value, which verifies that the difference scheme (3.9)–(3.12) is convergent. It took us 2.03 seconds to compute the spatial order of accuracy and 0.37 seconds to determine the temporal order of accuracy.

    Example 2. We further consider the problem of the form

    \left\{ {\begin{array}{l} u_t = u_{xx}+uu_x+\varepsilon^2u_{xxt}, \quad -25 < x < 25, \quad 0 < t\leq {T}, \notag\\ u\left(x, 0\right) = \frac{1}{2} \mathrm{sech}\left(\frac{x}{4}\right), \quad -25\leq x\leq 25, \notag\\ u\left(-25, t\right) = u\left(25, t\right), \quad 0 < t\leq {T}, \notag \end{array}} \right.

    where the exact solution is unavailable.

    \textbf{Case I} \varepsilon = 1 :

    The numerical results are reported in Table 3 and Figure 2. The two discrete conservation laws of the difference scheme (3.9)–(3.12) are reported in Table 4. In the following calculations, we set T = 1 . First, we fix the temporal step-size \tau = 1/1000 and successively halve the spatial step-size h (h = 50/11, 50/22, 50/44, 50/88) . Second, we fix the spatial step-size h = 1/2 and successively halve the temporal step-size \tau (\tau = 1/2, 1/4, 1/8, 1/16) .

    Table 3.  Convergence orders versus numerical errors for the scheme (3.9)–(3.12) with reduced step sizes under L = 50 and T = 1 .
    \tau = 1/1000 h = 1/2
    h {\rm F}_\infty(h, \tau) {\rm Ord}_\infty^h \tau {\rm G}_\infty(h, \tau) {\rm Ord}_\infty^\tau
    50/11 4.5583e-03 * 1/2 2.7427e-05 *
    50/22 5.0140e-04 3.1845 1/4 6.8356e-06 2.0045
    50/44 4.0505e-05 3.6298 1/8 1.7076e-06 2.0011
    50/88 4.6251e-06 3.1305 1/16 4.2681e-07 2.0003

    Figure 2.  The numerical solutions u(x, t) with \tau = 1/1000 , h = 1/2 , L = 50 (Left: \varepsilon = 1 and T = 1 ; Right: \varepsilon = 0.1 and T = 10 ).
    Table 4.  The discrete conservation laws of the difference scheme (3.9)–(3.12) with h = 1/2 and \tau = 1/1000 .
    t Q E
    0 6.267721589835858 2.041650615050223
    0.125 6.267721589832776 2.041650615048104
    0.250 6.267721589829613 2.041650615045897
    0.375 6.267721589826562 2.041650615043805
    0.500 6.267721589823495 2.041650615041712
    0.625 6.267721589820407 2.041650615039603
    0.750 6.267721589816996 2.041650615037288
    0.875 6.267721589813823 2.041650615035165
    1.000 6.267721589810754 2.041650615033137


    As we can see, the spatial convergence order approaches four, approximately, and the temporal convergence order approaches two in the maximum norm, which is consistent with our convergence results. It took us 6.74 seconds to compute the spatial order of accuracy and 0.30 seconds to determine the temporal order of accuracy.

    From Table 4, we can see that the discrete conservation laws in Theorems 1 and 2 are also satisfied. In the first subgraph of Figure 2, we depict the evolution surface of the numerical solution u(x, t) with \tau = 1/1000 , h = 1/2 , L = 50 and T = 1 , which successfully reflects the overall behavior of the solution.
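As a spot check of Table 4, the mass at t = 0 can be recomputed from the initial data alone; the snippet below assumes the discrete mass is Q = h\sum_i u_i^0 on the grid x_i = -25+ih (our reading, consistent with the t = 0 entries of Tables 4 and 7):

```python
import math

# Assumed discrete mass Q = h * sum_i u_i^0 (not the paper's formal definition).
# Initial data of Example 2: u(x, 0) = sech(x/4)/2 on [-25, 25] with h = 1/2.
h, M = 0.5, 100
Q0 = h * sum(0.5 / math.cosh((-25.0 + i * h) / 4.0) for i in range(1, M + 1))
print(Q0)  # ~6.26772, matching the t = 0 entry of Table 4
```

Analytically this value is 2\pi minus the exponentially small tails cut off at x = \pm 25 , which explains why Q stays near 6.2677 rather than exactly 2\pi \approx 6.2832 .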

    When simulating over the short time T = 1 , the choice between \varepsilon = 1 and \varepsilon = 0.1 has relatively little impact on the numerical results. Therefore, in the following Case II, we take \varepsilon = 0.1 and T = 10 to observe the influence of \varepsilon on the numerical simulation.

    \textbf{Case II} \varepsilon = 0.1 :

    The numerical results are reported in Table 5 and Figure 2. The two discrete conservation laws of the difference scheme (3.9)–(3.12) are reported in Table 6.

    Table 5.  Convergence orders versus numerical errors for the scheme (3.9)–(3.12) with reduced step-sizes under L = 50 and T = 10 .
    \tau = 1/1000 h = 1/2
    h {\rm F}_\infty(h, \tau) {\rm Ord}_\infty^h \tau {\rm G}_\infty(h, \tau) {\rm Ord}_\infty^\tau
    25/3 5.1657e-02 * 1/2 9.3665e-03 *
    25/6 8.1492e-03 2.6642 1/4 2.6282e-03 1.8334
    25/12 6.5306e-04 3.6414 1/8 5.8168e-04 2.1758
    25/24 4.8822e-05 3.7416 1/16 1.3874e-04 2.0679

    Table 6.  The discrete conservation laws of the difference scheme (3.9)–(3.12) with h = 1/2 and \tau = 1/1000 .
    t Q E
    0 6.267721589835858 2.000401671877802
    1.25 6.267721589837073 2.000401671878745
    2.50 6.267721589838247 2.000401671879621
    3.75 6.267721589839301 2.000401671880389
    5.00 6.267721589840338 2.000401671881143
    6.25 6.267721589840971 2.000401671881638
    7.50 6.267721589841729 2.000401671882164
    8.75 6.267721589842555 2.000401671882761
    10.0 6.267721589843263 2.000401671883252


    First, we fix the temporal step-size \tau = 1/1000 and successively halve the spatial step-size h (h = 25/3, 25/6, 25/12, 25/24) . Second, we fix the spatial step-size h = 1/2 and successively halve the temporal step-size \tau (\tau = 1/2, 1/4, 1/8, 1/16) .

    As we can see, the spatial convergence order approaches four, approximately, and the temporal convergence order approaches two in the maximum norm, which is consistent with our convergence results. It took us 33.76 seconds to compute the spatial order of accuracy and 0.33 seconds to determine the temporal order of accuracy.

    In the second subgraph of Figure 2, we depict the evolution surface of the numerical solution u(x, t) with \tau = 1/1000 , h = 1/2 , L = 50 and T = 10 . Compared with the first subgraph of Figure 2, the smaller \varepsilon yields sharper transitions and more pronounced wave-like behavior, whereas the larger \varepsilon makes the solution smoother.

    Example 3. In the last example, we consider the problem

    \left\{ {\begin{array}{l} u_t = u_{xx}+uu_x+u_{xxt}, \quad 0 < x < 30, \quad 0 < t\leq 20, \notag\\ u\left(0, t\right) = u\left(30, t\right), \quad 0 < t\leq 20, \notag \end{array}} \right.

    with the Maxwellian (Gaussian) initial condition u\left(x, 0\right) = {\rm e}^{-\left(x-7\right)^2}, \quad 0\leq x\leq 30.

    Figure 3 reflects the behavior of the solutions to the pseudo-parabolic Burgers' equation. During the propagation process, we observe that the equation exhibits characteristics of both diffusion and advection: the peak gradually spreads out and flattens as time progresses, and the solution moves to the right, indicating the direction of propagation.

    Figure 3.  The numerical solution u(x, t) with \tau = 1/500 , h = 3/10 , L = 30 and T = 20 .

    The smooth transitions over time indicate that the numerical scheme preserves the physical properties of the equation while maintaining stability and convergence.

    In Figure 4, we observe that the pseudo-parabolic Burgers' equation exhibits propagation characteristics coupled with gradual damping.

    Figure 4.  The numerical solution u(x, t) with \tau = 1/500 , h = 3/10 , L = 30 and T = 20 .

    From Table 7, we can see that the discrete conservation laws agree well with Theorems 1 and 2. The value of Q remains almost constant throughout the simulation, which is crucial for maintaining the physical integrity of the solution. A similar observation holds for the energy E . These results further verify the correctness and reliability of the high-order compact difference scheme.

    Table 7.  The discrete conservation laws of the difference scheme (3.9)–(3.12) with h = 3/10 and \tau = 1/500 .
    t Q E
    0 1.772453850905516 2.505978912117327
    2.50 1.772453850935547 2.505978912141013
    5.00 1.772453850964681 2.505978912152665
    7.50 1.772453850994238 2.505978912161075
    10.0 1.772453851021927 2.505978912167129
    12.5 1.772453851052832 2.505978912173078
    15.0 1.772453851083987 2.505978912178467
    17.5 1.772453851115895 2.505978912183606
    20.0 1.772453851148868 2.505978912188604

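The initial mass in Table 7 can likewise be recovered from the Gaussian initial data alone, again assuming the discrete mass is Q = h\sum_i u_i^0 : the lattice sum of {\rm e}^{-(x-7)^2} with h = 3/10 reproduces \sqrt{\pi} = 1.7724538509\ldots to machine precision, in agreement with the t = 0 entry.

```python
import math

# Assumed discrete mass Q = h * sum_i u_i^0 for Example 3:
# u(x, 0) = exp(-(x-7)^2) on [0, 30] with h = 3/10, x_i = i*h, i = 1..100.
h, M = 0.3, 100
Q0 = h * sum(math.exp(-(i * h - 7.0) ** 2) for i in range(1, M + 1))
print(Q0, math.sqrt(math.pi))  # both ~1.772453850905516, as in Table 7
```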

    We propose and analyze an implicit compact difference scheme for the pseudo-parabolic Burgers' equation, achieving second-order accuracy in time and fourth-order accuracy in space. Using the energy method, we provide a rigorous numerical analysis of the scheme, proving the existence, uniqueness, uniform boundedness, convergence, and stability of its solution. Finally, the theoretical results are validated through numerical experiments. The experimental results demonstrate that the proposed scheme is highly accurate and effective, aligning with the theoretical predictions. As part of our ongoing research [42,43,44,45,46,47,48], we aim to extend these techniques and approaches to other nonlocal or nonlinear evolution equations [49,50,51,52,53,54,55].

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors would like to thank the supervisor Qifeng Zhang, who provided this interesting topic and detailed guidance. They are also grateful for Baohui Xie's work during his study at Zhejiang Sci-Tech University. The authors are grateful to the editor and the anonymous reviewers for their careful reading and patient checking of the whole manuscript.

    The work is supported by the Institute-level Project of Zhejiang Provincial Architectural Design Institute Co., Ltd. (Research on Digital Monitoring Technology for Central Air Conditioning Systems, Institute document No. 18), the National Social Science Fund of China (Grant No. 23BJY006), and the General Natural Science Foundation of Xinjiang Institute of Technology (Grant No. ZY202403).

    The authors declare there are no conflicts of interest.



    [1] M. Hilbert, Big data for development: A review of promises and challenges, Dev. Policy. Rev., 34 (2016), 135–174. http://doi.org/10.1111/dpr.12142 doi: 10.1111/dpr.12142
    [2] C. Wang, J. Wu, J. Yan, Statistical methods and computing for big data, Stat. Interface, 9 (2016), 399. https://dx.doi.org/10.4310/SII.2016.v9.n4.a1 doi: 10.4310/SII.2016.v9.n4.a1
    [3] H. Wang, Y. Ma, Optimal subsampling for quantile regression in big data, Biometrika, 108 (2021), 99–112. https://doi.org/10.1093/biomet/asaa043 doi: 10.1093/biomet/asaa043
    [4] H. Wang, R. Zhu, P. Ma, Optimal subsampling for large sample logistic regression, J. Am. Stat. Assoc., 117 (2022), 265–276. https://doi.org/10.1080/01621459.2020.1773832 doi: 10.1080/01621459.2020.1773832
    [5] X. Chen, W. Liu, X. Mao, Z. Yang, Distributed high-dimensional regression under a quantile loss function, J. Mach. Learn. Res., 21 (2020), 7432–7474. https://doi.org/10.1214/18-AOS1777 doi: 10.1214/18-AOS1777
    [6] A. Hu, Y. Jiao, Y. Liu, Y. Shi, Y. Wu, Distributed quantile regression for massive heterogeneous data, Neurocomputing, 448 (2021), 249–262. https://doi.org/10.1016/j.neucom.2021.03.041 doi: 10.1016/j.neucom.2021.03.041
    [7] R. Jiang, K. Yu, Smoothing quantile regression for a distributed system, Neurocomputing, 466 (2021), 311–326. https://doi.org/10.1016/j.neucom.2021.08.101 doi: 10.1016/j.neucom.2021.08.101
    [8] M. I. Jordan, J. D. Lee, Y. Yang, Communication-efficient distributed statistical inference, J. Am. Stat. Assoc., 526 (2018), 668–681. https://doi.org/10.1080/01621459.2018.1429274 doi: 10.1080/01621459.2018.1429274
    [9] N. Lin, R. Xi, Aggregated estimating equation estimation, Stat. Interface, 4 (2011), 73–83. https://dx.doi.org/10.4310/SII.2011.v4.n1.a8 doi: 10.4310/SII.2011.v4.n1.a8
    [10] L. Luo, P. Song, Renewable estimation and incremental inference in generalized linear models with streaming data sets, J. R. Stat. Soc. B, 82 (2020), 69–97. https://doi.org/10.1111/rssb.12352 doi: 10.1111/rssb.12352
    [11] C. Shi, R. Song, W. Lu, R. Li, Statistical inference for high-dimensional models via recursive online-score estimation, J. Am. Stat. Assoc., 116 (2021), 1307–1318. https://doi.org/10.1080/01621459.2019.1710154 doi: 10.1080/01621459.2019.1710154
    [12] E. D. Schifano, J. Wu, C. Wang, J. Yan, M. Chen, Online updating of statistical inference in the big data setting, Technometrics, 58 (2016), 393–403. https://doi.org/10.1080/00401706.2016.1142900 doi: 10.1080/00401706.2016.1142900
    [13] S. Mohamad, A. Bouchachia, Deep online hierarchical dynamic unsupervised learning for pattern mining from utility usage data, Neurocomputing, 390 (2020), 359–373. https://doi.org/10.1016/j.neucom.2019.08.093 doi: 10.1016/j.neucom.2019.08.093
    [14] H. M. Gomes, J. Read, A. Bifet, J. Paul, J. Gama, Machine learning for streaming data: State of the art, challenges, and opportunities, ACM Sigkdd Explor. Newslett., 21 (2019), 6–22. https://doi.org/10.1145/3373464.3373470 doi: 10.1145/3373464.3373470
    [15] L. Lin, W. Li, J. Lu, Unified rules of renewable weighted sums for various online updating estimations, arXiv Preprint, 2020. https://doi.org/10.48550/arXiv.2008.08824
    [16] C. Wang, M. Chen, J. Wu, J. Yan, Y. Zhang, E. Schifano, Online updating method with new variables for big data streams, Can. J. Stat., 46 (2018), 123–146. https://doi.org/10.1002/cjs.11330 doi: 10.1002/cjs.11330
    [17] J. Wu, M. Chen, Online updating of survival analysis, J. Comput. Graph. Stat., 30 (2021), 1209–1223. https://doi.org/10.1080/10618600.2020.1870481 doi: 10.1080/10618600.2020.1870481
    [18] Y. Xue, H. Wang, J. Yan, E. D. Schifano, An online updating approach for testing the proportional hazards assumption with streams of survival data, Biometrics, 76 (2020), 171–182. https://doi.org/10.1111/biom.13137 doi: 10.1111/biom.13137
    [19] S. Balakrishnan, D. Madigan, A one-pass sequential Monte Carlo method for Bayesian analysis of massive datasets, Bayesian Anal., 1 (2006), 345–361. https://doi.org/10.1214/06-BA112 doi: 10.1214/06-BA112
    [20] L. N. Geppert, K. Ickstadt, A. Munteanu, J. Quedenfeld, C. Sohler, Random projections for Bayesian regression, Biometrics, 27 (2017), 79–101. https://doi.org/10.1007/s11222-015-9608-z doi: 10.1007/s11222-015-9608-z
    [21] R. Koenker, G. Bassett, Regression quantiles, Econometrica, 1978, 33–50. https://doi.org/10.2307/1913643 doi: 10.2307/1913643
    [22] Y. Wei, A. Pere, R. Koenker, X. He, Quantile regression methods for reference growth charts, Stat. Med., 25 (2006), 1369–1382. https://doi.org/10.1002/sim.2271 doi: 10.1002/sim.2271
    [23] H. Wang, Z. Zhu, J. Zhou, Quantile regression in partially linear varying coefficient models, Ann. Stat., 2009, 3841–3866. https://doi.org/10.1214/09-AOS695 doi: 10.1214/09-AOS695
    [24] X. He, B. Fu, W. K. Fung, Median regression for longitudinal data, Stat. Med., 22 (2003), 3655–3669. https://doi.org/10.1002/sim.1581 doi: 10.1002/sim.1581
    [25] M. Buchinsky, Changes in the US wage structure 1963–1987: Application of quantile regression, Econometrica, 1994,405–458. https://doi.org/10.2307/2951618 doi: 10.2307/2951618
    [26] A. J. Cannon, Quantile regression neural networks: Implementation in R and application to precipitation downscaling, Comput. Geosci., 37 (2011), 1277–1284. https://doi.org/10.1002/sim.1581 doi: 10.1002/sim.1581
    [27] Q. Xu, K. Deng, C. Jiang, F. Sun, X. Huang, Composite quantile regression neural network with applications, Expert Syst. Appl., 76 (2017), 129–139. https://doi.org/10.1016/j.eswa.2017.01.054 doi: 10.1016/j.eswa.2017.01.054
    [28] X. Chen, W. Liu, Y. Zhang, Quantile regression under memory constraint, Ann. Stat., 47 (2019), 3244–3273. https://doi.org/10.1214/18-AOS1777 doi: 10.1214/18-AOS1777
    [29] L. Chen, Y. Zhou, Quantile regression in big data: A divide and conquer based strategy, Comput. Stat. Data. An., 144 (2020), 106892. https://doi.org/10.1016/j.csda.2019.106892 doi: 10.1016/j.csda.2019.106892
    [30] K. Wang, H. Wang, S. Li, Renewable quantile regression for streaming datasets, Knowl.-Based Syst., 235 (2022), 107675. https://doi.org/10.1016/j.knosys.2021.107675 doi: 10.1016/j.knosys.2021.107675
    [31] Y. Chu, Z. Yin, K. Yu, Bayesian scale mixtures of normals linear regression and Bayesian quantile regression with big data and variable selection, J. Comput. Appl. Math., 428 (2023), 115192. https://doi.org/10.1016/j.cam.2023.115192 doi: 10.1016/j.cam.2023.115192
    [32] K. Lum, A. E. Gelfand, Spatial quantile multiple regression using the asymmetric Laplace process, Bayesian Anal., 7 (2012), 235–258. https://doi.org/10.1214/12-BA708 doi: 10.1214/12-BA708
    [33] M. Smith, R. Kohn, Nonparametric regression using Bayesian variable, J. Econometrics, 75 (1996), 317–343. https://doi.org/10.1016/0304-4076(95)01763-1 doi: 10.1016/0304-4076(95)01763-1
    [34] M. Dao, M. Wang, S. Ghosh, K. Ye, Bayesian variable selection and estimation in quantile regression using a quantile-specific prior, Computation. Stat., 37 (2022), 1339–1368. https://doi.org/10.1007/s00180-021-01181-5 doi: 10.1007/s00180-021-01181-5
    [35] K. E. Lee, N. Sha, E. R. Dougherty, M. Vannucci, B. K. Mallick, Gene selection: A Bayesian variable selection approach, Bioinformatics, 19 (2003), 90–97. https://doi.org/10.1093/bioinformatics/19.1.90 doi: 10.1093/bioinformatics/19.1.90
    [36] R. Chen, C. Chu, T. Lai, Y. Wu, Stochastic matching pursuit for Bayesian variable selection, Stat. Comput., 21 (2011), 247–259. https://doi.org/10.1007/s11222-009-9165-4 doi: 10.1007/s11222-009-9165-4
    [37] R. Jiang, K. Yu, Renewable quantile regression for streaming data sets, Neurocomputing, 508 (2022), 208–224. https://doi.org/10.1016/j.knosys.2021.107675 doi: 10.1016/j.knosys.2021.107675
    [38] X. Li, The influencing factors on PM_{2.5} concentration of Lanzhou based on quantile eegression, HGU. J., 41 (2018), 61–68. https://doi.org/10.13937/j.cnki.hbdzdxxb.2018.06.009 doi: 10.13937/j.cnki.hbdzdxxb.2018.06.009
    [39] X. Zhang, W. Zhang, Spatial and temporal variation of PM_{2.5} in Beijing city after rain, Ecol. Environ. Sci., 23 (2014), 797–805. https://doi.org/10.3969/j.issn.1674-5906.2014.05.011 doi: 10.3969/j.issn.1674-5906.2014.05.011
    [40] R. Tibshirani, Regression shrinkage and selection via the Lasso, J. R. Stat. Soc. B, 58 (2018), 267–288. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x doi: 10.1111/j.2517-6161.1996.tb02080.x
    [41] J. Fan, R. Li, Variable selection via nonconcave penalized likelihood and its oracle properties, J. Am. Stat. Assoc., 96 (2011), 1348–1360. https://doi.org/10.1198/016214501753382273 doi: 10.1198/016214501753382273
    [42] F. E. Streib, M. Dehmer, High-dimensional LASSO-based computational regression models: Regularization, shrinkage, and selection, Mach. Learn. Know. Extr., 1 (2019), 359–383. https://doi.org/10.3390/make1010021 doi: 10.3390/make1010021
    [43] X. Ma, L. Lin, Y. Gai, A general framework of online updating variable selection for generalized linear models with streaming datasets, J. Stat. Comput. Sim., 93 (2023), 325–340. https://doi.org/10.1080/00949655.2022.2107207 doi: 10.1080/00949655.2022.2107207
    [44] A. Liu, J. Lu, F. Liu, G. Zhang, Accumulating regional density dissimilarity for concept drift detection in data streams, Pattern Recogn., 76 (2018), 256–272. https://doi.org/10.1016/j.patcog.2017.11.009 doi: 10.1016/j.patcog.2017.11.009
    [45] J. Wang, J. Shen, P. Li, Provable variable selection for streaming features, International Conference On Machine Learning, 80 (2018), 5171–5179. Available from: https://proceedings.mlr.press/v80/wang18g.html.
    [46] J. Lu, A. Liu, F. Dong, F. Gu, J. Gama, G. Zhang, Learning under concept drift: A review, IEEE T. Knowl. Data En., 31 (2018), 2346–2363. https://doi.org/10.1109/TKDE.2018.2876857 doi: 10.1109/TKDE.2018.2876857
    [47] R. Elwell, R. Polikar, Incremental learning of concept drift in nonstationary environments, IEEE T. Neural Networ., 22 (2011), 1517–1531. https://doi.org/10.1109/TNN.2011.2160459 doi: 10.1109/TNN.2011.2160459
    [48] D. Rezende, S. Mohamed, Variational inference with normalizing flows, International Conference On Machine Learning, 37 (2015), 1530–1538. Available from: https://proceedings.mlr.press/v37/rezende15.
    [49] P. Müller, F. A. Quintana, A. Jara, T. Hanson, Bayesian nonparametric data analysis, Cham: Springer, 2015. https://doi.org/10.1007/978-3-319-18968-0
    [50] R. Koenker, J. A. Machado, Goodness of fit and related inference processes for quantile regression, J. Am. Stat. Assoc., 94 (1999), 1296–1310. https://doi.org/10.1080/01621459.1999.10473882 doi: 10.1080/01621459.1999.10473882
    [51] K. Yu, R. A. Moyeed, Bayesian quantile regression, Stat. Probab. Lett., 54 (2001), 437–447. https://doi.org/10.1016/S0167-7152(01)00124-9 doi: 10.1016/S0167-7152(01)00124-9
    [52] M. Geraci, Linear quantile mixed models: The lqmm package for Laplace quantile regression, J. Stat. Softw., 57 (2014), 1–29. https://doi.org/10.18637/jss.v057.i13 doi: 10.18637/jss.v057.i13
    [53] M. Geraci, M. Bottai, Quantile regression for longitudinal data using the asymmetric Laplace distribution, Biostatistics, 8 (2007), 140–154. https://doi.org/10.1093/biostatistics/kxj039 doi: 10.1093/biostatistics/kxj039
    [54] D. F. Benoit, D. V. den Poel, bayesQR: A Bayesian approach to quantile regression, J. Stat. Softw., 76 (2017), 1–32. https://doi.org/10.18637/jss.v076.i07 doi: 10.18637/jss.v076.i07
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)