
Group theoretic particle swarm optimization for gray-level medical image enhancement


  • As a principal category in the promising field of medical image processing, medical image enhancement has a powerful influence on the intermediate features and final results of the computer aided diagnosis (CAD) system by increasing the capacity to transfer the image information in the optimal form. The enhanced region of interest (ROI) would contribute to the early diagnosis and the survival rate of patients. Meanwhile, the enhancement schema can be treated as the optimization approach of image grayscale values, and metaheuristics are widely adopted as mainstream technologies for medical image enhancement. In this study, we propose an innovative metaheuristic algorithm named group theoretic particle swarm optimization (GT-PSO) to tackle the optimization problem of image enhancement. Based on the mathematical foundation of symmetric group theory, GT-PSO comprises particle encoding, solution landscape, neighborhood movement and swarm topology. The corresponding search paradigm takes place simultaneously under the guidance of hierarchical operations and random components, and it could optimize the hybrid fitness function of multiple measurements of medical images and improve the contrast of intensity distribution. The numerical results generated from the comparative experiments show that the proposed GT-PSO has outperformed most other methods on the real-world dataset. The implication also indicates that it would balance both global and local intensity transformations during the enhancement process.

    Citation: Jinyun Jiang, Jianchen Cai, Qile Zhang, Kun Lan, Xiaoliang Jiang, Jun Wu. Group theoretic particle swarm optimization for gray-level medical image enhancement[J]. Mathematical Biosciences and Engineering, 2023, 20(6): 10479-10494. doi: 10.3934/mbe.2023462




    Let \bar{\Omega} = [b_1, b_2]\times[d_1, d_2], where b_1, b_2, d_1 and d_2 are real constants. Then we denote \Omega = (b_1, b_2)\times(d_1, d_2)\subset\mathbb{R}^2 and {\mathbf x} = (x, y). In this study, we are concerned with the numerical solutions of the following nonlinear wave equations with time delay [1,2,3,4,5,6,7,8]

    \begin{array}{l} \frac{\partial^2u}{\partial t^2}-a^2\Big(\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}\Big) = f(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t), \quad ({\mathbf x}, t)\in\Omega\times[0, T], \end{array} (1.1a)
    \begin{array}{l} u({\mathbf x}, t) = \phi({\mathbf x}, t), \quad ({\mathbf x}, t)\in\bar{\Omega}\times[-s, 0], \end{array} (1.1b)
    \begin{array}{l} u({\mathbf x}, t) = \psi({\mathbf x}, t), \quad ({\mathbf x}, t)\in\partial\Omega\times[0, T] \end{array} (1.1c)

    by devising efficient numerical methods. Here, s > 0 is a constant delay, x and y represent the spatial variables in the x- and y-directions, respectively, and t denotes the temporal variable. The exact solution to the problem (1.1a)–(1.1c) is denoted by u({\mathbf x}, t). Furthermore, we assume that f(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t), \phi({\mathbf x}, t) and \psi({\mathbf x}, t) are sufficiently smooth, so that the exact solution possesses the regularity needed for our algorithms to achieve the claimed accuracies.

    Delayed partial differential equations (DPDEs) are extensively utilized to simulate many natural, social and engineering phenomena such as population ecology, cell biology and control theory [9,10]. For example, the following nonlinear delayed reaction-diffusion equations

    \begin{array}{l} \frac{\partial u}{\partial t}-\frac{\partial^2u}{\partial x^2} = f(u(x, t), u(x, t-s), x, t), \quad (x, t)\in\Omega\times[0, T], \end{array} (1.2a)
    \begin{array}{l} u(x, t) = \phi(x, t), \quad (x, t)\in\bar{\Omega}\times[-s, 0], \end{array} (1.2b)
    \begin{array}{l} u(x, t) = \psi(x, t), \quad (x, t)\in\partial\Omega\times[0, T], \end{array} (1.2c)

    is used to describe many practical problems. Also, in Eqs (1.2a)–(1.2c), s > 0 is a time delay, u(x, t) denotes the exact solution to the IBVPs (1.2a)–(1.2c), and x and t represent the spatial and temporal variables, respectively. As

    \begin{equation*} f(u(x, t), u(x, t-s), x, t) = -\alpha u(x, t)-\frac{\beta\theta^du(x, t)}{\theta^d+u^d(x, t)}+\frac{2\beta\theta^du(x, t-s)\exp(-\gamma s)}{\theta^d+u^d(x, t-s)}, \end{equation*}

    Eq (1.2a) is referred to as the Hematopoiesis model and is applied to describe the dynamics of blood cell production (cf. [11,12,13]). Denote by u(x, t) the density of mature stem cells in the blood circulation. The time delay between the production of immature stem cells in the bone marrow and their maturation for release into the circulating blood stream is represented by s. The parameters \alpha, \beta, \theta, \gamma and d\in(1, \infty) are positive constants. Besides, as

    \begin{equation*} f(u(x, t), u(x, t-s), x, t) = -\nu u(x, t)+\varrho u(x, t-s)\exp(-\xi u(x, t-s)), \end{equation*}

    Eq (1.2a) is known as the diffusive Nicholson's blowflies equation (cf. [14,15,16]). Here, u(x, t), \varrho, 1/\xi, \nu and s denote the size of the population at time t, the maximum per capita daily egg production rate, the size at which the population reproduces at its maximum rate, the per capita daily adult death rate and the generation time, respectively. Finally, as

    \begin{equation*} f(u(x, t), u(x, t-s), x, t) = u(x, t)[1-u(x, t-s)], \end{equation*}

    Eq (1.2a) is the well-known Hutchinson equation, which was proposed by the American biologist Hutchinson in 1948 (cf. [17,18,19]). Here, u(x, t) denotes the current population density of mammals. For many more detailed mathematical models with delays, readers are referred to [9,10]. Hence, it is important and necessary to comprehensively study these kinds of equations both theoretically and numerically because they possess strong applied backgrounds. In these three examples, u(x, t) should be positive according to its physical meaning.
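For numerical experiments, the three delayed reaction terms above translate directly into code; the following is a minimal Python sketch (function and parameter names are ours, chosen to follow the notation of the text, with u and u_lag standing for u(x, t) and u(x, t-s)):

```python
import math

def f_hematopoiesis(u, u_lag, alpha, beta, theta, gamma, d, s):
    # -alpha*u - beta*theta^d*u/(theta^d + u^d)
    #   + 2*beta*theta^d*u_lag*exp(-gamma*s)/(theta^d + u_lag^d)
    td = theta**d
    return (-alpha*u - beta*td*u/(td + u**d)
            + 2.0*beta*td*u_lag*math.exp(-gamma*s)/(td + u_lag**d))

def f_nicholson(u, u_lag, nu, rho, xi):
    # diffusive Nicholson's blowflies equation: -nu*u + rho*u_lag*exp(-xi*u_lag)
    return -nu*u + rho*u_lag*math.exp(-xi*u_lag)

def f_hutchinson(u, u_lag):
    # Hutchinson equation: u*(1 - u_lag)
    return u*(1.0 - u_lag)
```

For instance, the Hutchinson term vanishes whenever the delayed density u_lag equals the carrying capacity 1, regardless of the current density u.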

    Theoretical studies on the exact solutions of DPDEs, such as unique existence, asymptotic behavior and blow-up, have attracted a lot of attention. Considerable theoretical findings can be found in [9,10] and the related references. However, for DPDEs with arbitrary initial-boundary conditions, it is difficult to obtain exact solutions. Thus, it is of great significance to explore high-performance numerical algorithms for solving them.

    In the past two decades, there have been great advances in numerical research on delayed parabolic equations (DPEs). For example, a finite difference method (FDM) combined with a domain decomposition technique was derived for solving one-dimensional (1D) nonlinear DPEs in [20]. It is second-order accurate in both time and space. To improve the spatial accuracy, a compact FDM, which is second-order and fourth-order accurate in time and space, respectively, was developed for 1D and two-dimensional (2D) DPEs in [21]. In [22], it is shown that applying a fourth-order compact FDM to the spatial discretization leads to a reduction of the asymptotic stability region of the original DPEs. Recently, a class of θ-schemes combined with Lagrange interpolation has been suggested for nonlinear parabolic equations with variable delays in [23]. A Crank-Nicolson alternating direction implicit (ADI) FDM, which is second-order accurate in both time and space, was first developed for 2D DPEs in [24]. A Crank-Nicolson ADI compact FDM, which is second-order and fourth-order accurate in time and space, respectively, was developed in [25]. By using the second-order backward differentiation formula (BDF2) to discretize the temporal derivatives, a multistep ADI compact FDM, which is second-order and fourth-order accurate in time and space, respectively, was developed for 2D nonlinear delayed parabolic equations with constant coefficients in [26]. Similarly, two multistep ADI compact FDMs with the same accuracies were developed for 2D nonlinear delayed parabolic equations with different variable coefficients in [27] and [28], respectively. Other efficient numerical methods, including a local discontinuous Galerkin method [29], waveform relaxation methods combined with spectral collocation methods [30] and a finite volume element method [31], were developed for solving 1D parabolic equations with constant delays.
Furthermore, a box scheme with second-order temporal and spatial accuracy was derived for delayed convection-diffusion equations with Riemannian boundary conditions in [32]. A compact FDM, which is second-order and fourth-order accurate in time and space, respectively, was developed for delayed convection-diffusion equations in [33]. Moreover, a Legendre spectral Galerkin method was established for solving 2D convection-diffusion equations with delays in [34]. Similarly, FDMs [35,36], a multi-domain Legendre spectral collocation method [37] and Galerkin methods [38] were constructed for neutral DPEs. More recently, in [39], a compact difference method with second-order temporal accuracy and fourth-order spatial accuracy, used along with a Richardson extrapolation method (REM) to obtain fourth-order temporally and spatially accurate solutions, was devised for Sobolev equations with constant delay. An adaptive FDM based on a Bakhvalov mesh has been constructed for singularly perturbed Sobolev equations with constant delay in [40]. An exponentially fitted difference scheme on a uniform mesh was designed for singularly perturbed Sobolev equations with constant delay in [41].

    Much attention has also been paid to theoretical studies of delayed wave equations. A number of theoretical results, such as the well-posedness, stability, long-time behavior and existence of exact solutions, can be found in [1,2,3,4,5]. Moreover, separation of variables and the Lie group method were applied to find their exact solutions in [6,7] and [8], respectively. In contrast, there have been only a few numerical studies on nonlinear wave equations with delays. A three-level compact FDM, which is second-order and fourth-order accurate in time and space, respectively, has been derived for the 1D delayed wave equation in [42]. Subsequently, a three-level ADI compact FDM with the same accuracies has been derived for the 2D delayed wave equation in [43]. To promote numerical studies of delayed wave equations, two explicit finite difference methods (EFDMs) and corresponding REMs are designed here to solve the delayed wave equations.

    As we know, the stability requirement of the standard EFDM for the linear wave equation is acceptable: optimally convergent solutions can be obtained as long as the spatial and temporal meshsizes are reasonably chosen in accordance with the stability requirement. Besides, the Du Fort-Frankel scheme, proposed by Du Fort and Frankel for 1D linear parabolic equations with periodic boundary conditions, is unconditionally stable and fast convergent as long as the consistency condition is satisfied (cf. [44]). However, there have been no numerical studies of nonlinear wave equations with delays using Du Fort-Frankel-type schemes. In this paper, inspired by these two types of EFDMs, two EFDMs, which can be used along with REMs to obtain high-order accurate numerical solutions of nonlinear wave equations with delay, are designed. By using discrete energy methods, error estimates are rigorously derived. Our algorithms have the following advantages. (1) They are easy to implement because they are explicit schemes. (2) Combined with REMs, they possess a low computational burden because of their explicitness and high-order accuracy. (3) They are easily generalized to multi-dimensional problems with various kinds of delays, such as multiple delays, variable delays and distributed delays. Moreover, a main contribution of this study is that a Du Fort-Frankel scheme and a new REM are constructed for the nonlinear wave equation with delay. They possess good stability, high-order accuracy and a low computational burden.

    The article is organized as follows. Partition and notations are given in Section 2. The construction and theoretical analysis of the first EFDM and the corresponding REM are given in Section 3. Section 4 is devoted to the establishment and theoretical analysis of the second EFDM and the corresponding REM. Extensions of the proposed methods to wave equations with other delays are discussed in Section 5. Section 6 concentrates on numerical experiments. A conclusion is summarized in Section 7.

    To begin with, we partition the spatial domain \bar{\Omega} = [b_1, b_2]\times[d_1, d_2]\subset\mathbb{R}^2, where b_1, b_2, d_1 and d_2 are real constants. Let h_x = (b_2-b_1)/m_1 and h_y = (d_2-d_1)/m_2 (m_1, m_2\in\mathbb{Z}^{+}) be the spatial meshsizes in the x- and y-directions, respectively.

    Let T and s be positive constants. Then the temporal domain and the delay are denoted by [0, T] and s, respectively. As in [20,21,22,23], the restricted grid with meshsize \tau = s/n_1 = T/n (n_1, n\in\mathbb{Z}^{+}) is used. If the restricted grid is not used, the Lagrange interpolation formula has to be applied to approximate the delay term u({\mathbf x}, t-s). The use of the Lagrange interpolation formula brings about computational complexity and prevents the improvement of temporal accuracy by REMs. Thus, for constant time delays, we choose the constrained timestep.

    By denoting t_k = k\tau, h = \max\{h_x, h_y\}, x_i = b_1+ih_x, y_j = d_1+jh_y and (x_i, y_j) = {\mathbf x}_{i, j}, the domain \bar{\Omega}\times[0, T] is covered by \bar{\Omega}_{h\tau} = \bar{\Omega}_h\times\Omega_\tau, where

    \begin{equation*} \bar{\Omega}_h = \{{\mathbf x}_{i, j}\mid 0\leq i\leq m_1, \; 0\leq j\leq m_2\}, \quad \Omega_\tau = \{t_k\mid -n_1\leq k\leq n\}. \end{equation*}

    Besides, we introduce

    \begin{equation*} \Omega_h = \{{\mathbf x}_{i, j}\mid 1\leq i\leq m_1-1, \; 1\leq j\leq m_2-1\}, \quad \Gamma_h = \bar{\Omega}_h\setminus\Omega_h. \end{equation*}

    On \bar{\Omega}_{h\tau}, we further define

    \begin{equation*} S_h = \{U\mid U = \{U_{i, j}\mid 0\leq i\leq m_1, \; 0\leq j\leq m_2\}\}, \quad S_h^0 = \{U\in S_h\mid U_{i, j} = 0 \text{ when } {\mathbf x}_{i, j}\in\Gamma_h\}. \end{equation*}

    Then, on \Omega_h\times\Omega_\tau, we further introduce the following difference notations:

    \begin{equation*} \begin{array}{lll} \delta_t^2U_{i, j}^k = \frac{U_{i, j}^{k+1}-2U_{i, j}^k+U_{i, j}^{k-1}}{\tau^2}, & \delta_{\hat t}U_{i, j}^k = \frac{U_{i, j}^{k+1}-U_{i, j}^{k-1}}{2\tau}, & \delta_tU_{i, j}^{k-\frac12} = \frac{U_{i, j}^k-U_{i, j}^{k-1}}{\tau}, \\ \delta_x^2U_{i, j}^k = \frac{U_{i+1, j}^k-2U_{i, j}^k+U_{i-1, j}^k}{h_x^2}, & \delta_y^2U_{i, j}^k = \frac{U_{i, j+1}^k-2U_{i, j}^k+U_{i, j-1}^k}{h_y^2}, & \\ \delta_xU_{i-\frac12, j}^k = \frac{U_{i, j}^k-U_{i-1, j}^k}{h_x}, & \delta_yU_{i, j-\frac12}^k = \frac{U_{i, j}^k-U_{i, j-1}^k}{h_y}, & \Delta_hU_{i, j} = (\delta_x^2+\delta_y^2)U_{i, j}. \end{array} \end{equation*}

    Finally, for any U, V\in S_h^0, we define the inner products and norms as follows:

    \begin{equation*} \begin{array}{ll} \langle U, V\rangle = h_xh_y\sum\limits_{i = 1}^{m_1-1}\sum\limits_{j = 1}^{m_2-1}U_{i, j}V_{i, j}, \quad \langle U, V\rangle_{1, x} = h_xh_y\sum\limits_{i = 1}^{m_1}\sum\limits_{j = 1}^{m_2-1}\delta_xU_{i-\frac12, j}\,\delta_xV_{i-\frac12, j}, \\ \langle U, V\rangle_{1, y} = h_xh_y\sum\limits_{i = 1}^{m_1-1}\sum\limits_{j = 1}^{m_2}\delta_yU_{i, j-\frac12}\,\delta_yV_{i, j-\frac12}, \quad \|U\| = \sqrt{\langle U, U\rangle}, \quad \|U\|_\infty = \max\limits_{{\mathbf x}_{i, j}\in\Omega_h}|U_{i, j}|, \\ \|\delta_xU\| = \sqrt{\langle U, U\rangle_{1, x}}, \quad \|\delta_yU\| = \sqrt{\langle U, U\rangle_{1, y}}, \quad |U|_1 = \sqrt{\|\delta_xU\|^2+\|\delta_yU\|^2}, \quad \|U\|_1 = \sqrt{\|U\|^2+|U|_1^2}. \end{array} \end{equation*}
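The difference quotients and discrete norms above map directly onto array slicing; the following is a minimal NumPy sketch (the helper names are ours, not the paper's), where U is an (m_1+1)-by-(m_2+1) array of grid values:

```python
import numpy as np

def laplace_h(U, hx, hy):
    """Discrete Laplacian Delta_h U = (delta_x^2 + delta_y^2) U at interior nodes."""
    return ((U[2:, 1:-1] - 2*U[1:-1, 1:-1] + U[:-2, 1:-1]) / hx**2
            + (U[1:-1, 2:] - 2*U[1:-1, 1:-1] + U[1:-1, :-2]) / hy**2)

def norm_L2(U, hx, hy):
    """||U|| = sqrt(hx*hy*sum of U^2 over the interior nodes)."""
    return np.sqrt(hx*hy*np.sum(U[1:-1, 1:-1]**2))

def seminorm_H1(U, hx, hy):
    """|U|_1 built from the backward differences delta_x, delta_y at half points."""
    dx = (U[1:, 1:-1] - U[:-1, 1:-1]) / hx
    dy = (U[1:-1, 1:] - U[1:-1, :-1]) / hy
    return np.sqrt(hx*hy*(np.sum(dx**2) + np.sum(dy**2)))
```

A quick consistency check: for the grid function U = x^2, the second difference quotient is exact, so laplace_h returns 2 at every interior node.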

    Define the grid functions u_{i, j}^k = u({\mathbf x}_{i, j}, t_k), {\mathbf x}_{i, j}\in\bar{\Omega}_h, -n_1\leq k\leq n. The approximation of u_{i, j}^k is denoted by U_{i, j}^k.

    Applying second-order centered difference formulas to approximate the temporal and spatial derivatives at the point ({\mathbf x}_{i, j}, t_k) yields

    \begin{array}{l} \delta_t^2u_{i, j}^k-a^2(\delta_x^2u_{i, j}^k+\delta_y^2u_{i, j}^k) = f(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)+(R_1)_{i, j}^k, \quad {\mathbf x}_{i, j}\in\Omega_h, \; 0\leq k\leq n, \end{array} (3.1)

    where

    \begin{equation} (R_1)_{i, j}^k = \frac{\tau^2}{12}\frac{\partial^4u}{\partial t^4}({\mathbf x}_{i, j}, t_k)-\frac{a^2h_x^2}{12}\frac{\partial^4u}{\partial x^4}({\mathbf x}_{i, j}, t_k)-\frac{a^2h_y^2}{12}\frac{\partial^4u}{\partial y^4}({\mathbf x}_{i, j}, t_k)+O(\tau^4+h_x^4+h_y^4). \end{equation} (3.2)

    Omitting the small term (R_1)_{i, j}^k and then replacing u_{i, j}^k with U_{i, j}^k in Eq (3.1), we get the first EFDM as follows:

    \begin{array}{l} \delta_t^2U_{i, j}^k-a^2(\delta_x^2U_{i, j}^k+\delta_y^2U_{i, j}^k) = f(U_{i, j}^k, U_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k), \quad {\mathbf x}_{i, j}\in\Omega_h, \; 0\leq k\leq n, \end{array} (3.3a)
    \begin{array}{l} U_{i, j}^k = \phi({\mathbf x}_{i, j}, t_k), \quad {\mathbf x}_{i, j}\in\bar{\Omega}_h, \; -n_1\leq k\leq 0, \end{array} (3.3b)
    \begin{array}{l} U_{i, j}^k = \psi({\mathbf x}_{i, j}, t_k), \quad {\mathbf x}_{i, j}\in\partial\Omega_h, \; 0\leq k\leq n. \end{array} (3.3c)

    Obviously, the EFDM (3.3a)–(3.3c) is uniquely solvable because it is an explicit scheme.
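The explicit solvability is easy to see in code: each new time level follows from the two previous levels and the delayed level. The following is a minimal NumPy sketch of the scheme (3.3a)–(3.3c), valid under the stability condition r_x^2 + r_y^2 < 1 (the function and argument names are ours):

```python
import numpy as np

def efdm_first(phi, psi, f, a, bounds, m1, m2, s, n1, T):
    """Sketch of the first EFDM (3.3a)-(3.3c); names are ours, not the paper's.

    phi(X, Y, t) supplies the history/initial data, psi(X, Y, t) the boundary
    data, and f(u, u_lag, X, Y, t) the delayed reaction term with s = n1*tau.
    Returns the approximations at the time levels t_0, ..., t_n.
    """
    (b1, b2), (d1, d2) = bounds
    hx, hy = (b2 - b1)/m1, (d2 - d1)/m2
    tau = s/n1                                  # constrained timestep tau = s/n1
    n = int(round(T/tau))
    x = b1 + hx*np.arange(m1 + 1)
    y = d1 + hy*np.arange(m2 + 1)
    X, Y = np.meshgrid(x, y, indexing="ij")
    U = np.empty((n1 + n + 1, m1 + 1, m2 + 1))  # U[j] holds time level k = j - n1
    for k in range(-n1, 1):                     # history levels from phi
        U[k + n1] = phi(X, Y, k*tau)
    for k in range(n):                          # explicit three-level update
        Uk, Ulag = U[k + n1], U[k]              # levels k and k - n1
        lap = np.zeros_like(Uk)
        lap[1:-1, 1:-1] = ((Uk[2:, 1:-1] - 2*Uk[1:-1, 1:-1] + Uk[:-2, 1:-1])/hx**2
                           + (Uk[1:-1, 2:] - 2*Uk[1:-1, 1:-1] + Uk[1:-1, :-2])/hy**2)
        Unew = 2*Uk - U[k + n1 - 1] + tau**2*(a**2*lap + f(Uk, Ulag, X, Y, k*tau))
        Ub = psi(X, Y, (k + 1)*tau)             # Dirichlet data on the boundary
        Unew[0, :], Unew[-1, :] = Ub[0, :], Ub[-1, :]
        Unew[:, 0], Unew[:, -1] = Ub[:, 0], Ub[:, -1]
        U[k + n1 + 1] = Unew
    return U[n1:]
```

With f = 0 and constant data, the update 2U^k - U^{k-1} reproduces the constant exactly, which gives a quick sanity check of the stepping logic.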

    Besides, we give an assumption on f(\mu, \upsilon, {\mathbf x}, t) as follows.

    Assumption A. Let u({\mathbf x}, t) be the exact solution to the problem (1.1a)–(1.1c). Then, there are positive constants c_1 and \varepsilon_0 such that, for all \varepsilon_l (l = 1, 2) satisfying |\varepsilon_l| < \varepsilon_0, it holds that

    \begin{equation*} |f(u({\mathbf x}, t)+\varepsilon_1, u({\mathbf x}, t-s)+\varepsilon_2, {\mathbf x}, t)-f(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)|\leq c_1(|\varepsilon_1|+|\varepsilon_2|). \end{equation*}

    Moreover, denote e_{i, j}^k = u_{i, j}^k-U_{i, j}^k, 0\leq i\leq m_1, 0\leq j\leq m_2, -n_1\leq k\leq n and h = \max\{h_x, h_y\}, and give several lemmas used later.

    Lemma 3.1 [45] Let V\in S_h^0. Then we have

    \begin{equation*} \begin{array}{ll} \Big[\frac{6}{(b_2-b_1)^2}+\frac{6}{(d_2-d_1)^2}\Big]\|V\|^2\leq|V|_1^2, \quad |V|_1^2\leq\|V\|_1^2\leq\Big[1+\frac{1}{\frac{6}{(b_2-b_1)^2}+\frac{6}{(d_2-d_1)^2}}\Big]|V|_1^2, \\ h_x^2\|\delta_xV\|^2\leq4\|V\|^2, \quad h_y^2\|\delta_yV\|^2\leq4\|V\|^2. \end{array} \end{equation*}

    Lemma 3.2 [26] Let A and B be positive constants, and let \{F^k\mid k\geq0\} be a non-negative sequence satisfying F^{k+1}\leq A+B\tau\sum\limits_{l = 0}^{k}F^l, 0\leq k\leq K. Then we can obtain \max\limits_{0\leq k\leq K+1}F^k\leq A\exp(B(K+1)\tau). Moreover, if F^{k+1}\leq A+B\tau\sum\limits_{l = 0}^{k+1}F^l, 0\leq k\leq K, we have \max\limits_{0\leq k\leq K+1}F^k\leq A\exp(2B(K+1)\tau) as long as \tau is sufficiently small such that B\tau\leq\frac12.
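Lemma 3.2 is a discrete Gronwall-type inequality. In the worst case, where the first recurrence holds with equality and F^0 = A, one gets F^k = A(1+B\tau)^k, and the stated exponential bound can be checked numerically (a quick sketch; all constants below are chosen by us):

```python
import math

A, B, tau, K = 1.0, 2.0, 0.01, 100
F = [A]                               # F^0 = A
for k in range(K + 1):                # F^{k+1} = A + B*tau*sum_{l=0}^{k} F^l
    F.append(A + B*tau*sum(F))
# Lemma 3.2: max_{0 <= k <= K+1} F^k <= A*exp(B*(K+1)*tau)
assert max(F) <= A*math.exp(B*(K + 1)*tau)
```

Here F^k equals A(1+B\tau)^k, so the maximum is A(1+B\tau)^{K+1}, which indeed stays below A e^{B(K+1)\tau}.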

    Lemma 3.3 [46] Let r_\chi = |a|\tau/h_\chi (\chi = x \text{ or } y), E^k = \|\delta_te^{k+\frac12}\|^2+a^2\langle e^k, e^{k+1}\rangle_{1, x}+a^2\langle e^k, e^{k+1}\rangle_{1, y} and c_2 = 1-r_x^2-r_y^2. Then, when r_x^2+r_y^2 < 1, the following inequalities

    \begin{array}{l} \|\delta_te^{k+\frac12}\|^2\leq\frac{E^k}{c_2}, \quad |e^{k+\frac12}|_1^2\leq\frac{E^k}{a^2}, \end{array} (3.4a)
    \begin{array}{l} a^2|e^{k+1}|_1^2\leq\frac{2}{c_2}E^k, \end{array} (3.4b)
    \begin{array}{l} \|e^k\|^2\leq\frac{1}{\frac{6}{(b_2-b_1)^2}+\frac{6}{(d_2-d_1)^2}}|e^k|_1^2\leq\frac{(b_2-b_1)^2(d_2-d_1)^2}{6[(b_2-b_1)^2+(d_2-d_1)^2]}\cdot\frac{2}{c_2a^2}E^{k-1} \end{array} (3.4c)

    hold.

    Proof. By using the equality \alpha\beta = [(\alpha+\beta)^2-(\alpha-\beta)^2]/4 and simple computations, we have

    \begin{equation*} \begin{array}{ll} E^k = \|\delta_te^{k+\frac12}\|^2+a^2|e^{k+\frac12}|_1^2-a^2\big[\|\delta_xe^{k+\frac12}\|^2-\langle e^k, e^{k+1}\rangle_{1, x}\big]-a^2\big[\|\delta_ye^{k+\frac12}\|^2-\langle e^k, e^{k+1}\rangle_{1, y}\big]\\ \quad = \|\delta_te^{k+\frac12}\|^2+a^2|e^{k+\frac12}|_1^2-\frac{a^2}{4}|e^{k+1}-e^k|_1^2\\ \quad = c_2\|\delta_te^{k+\frac12}\|^2+a^2|e^{k+\frac12}|_1^2+r_x^2\|\delta_te^{k+\frac12}\|^2-\frac{r_x^2}{4}h_xh_y\sum\limits_{i = 0}^{m_1-1}\sum\limits_{j = 1}^{m_2-1}\big(\delta_te_{i+1, j}^{k+\frac12}-\delta_te_{i, j}^{k+\frac12}\big)^2\\ \qquad +r_y^2\|\delta_te^{k+\frac12}\|^2-\frac{r_y^2}{4}h_xh_y\sum\limits_{i = 1}^{m_1-1}\sum\limits_{j = 0}^{m_2-1}\big(\delta_te_{i, j+1}^{k+\frac12}-\delta_te_{i, j}^{k+\frac12}\big)^2\\ \quad \geq c_2\|\delta_te^{k+\frac12}\|^2+a^2|e^{k+\frac12}|_1^2, \end{array} \end{equation*}

    which is used along with r_x^2+r_y^2 < 1 to yield Eq (3.4a).

    According to \delta_xe_{i+\frac12, j}^{k+1} = \frac{\tau}{2}\delta_x\delta_te_{i+\frac12, j}^{k+\frac12}+\delta_xe_{i+\frac12, j}^{k+\frac12}, by simple computation, it is easy to find that

    \begin{equation} \begin{array}{ll} (\delta_xe_{i+\frac12, j}^{k+1})^2 = \delta_xe_{i+\frac12, j}^{k+\frac12}\,\delta_xe_{i+\frac12, j}^{k+1}+\frac{r_x}{2|a|}\big(\delta_te_{i+1, j}^{k+\frac12}-\delta_te_{i, j}^{k+\frac12}\big)\delta_xe_{i+\frac12, j}^{k+1}\\ \quad \leq \frac{(\delta_xe_{i+\frac12, j}^{k+\frac12})^2+(\delta_xe_{i+\frac12, j}^{k+1})^2}{2}+\frac14\Big[\frac{r_x}{|a|}\big(\delta_te_{i+1, j}^{k+\frac12}-\delta_te_{i, j}^{k+\frac12}\big)\Big]^2+\frac{(\delta_xe_{i+\frac12, j}^{k+1})^2}{4}\\ \quad \leq \frac34(\delta_xe_{i+\frac12, j}^{k+1})^2+\frac12(\delta_xe_{i+\frac12, j}^{k+\frac12})^2+\frac{r_x^2}{4a^2}\big(\delta_te_{i+1, j}^{k+\frac12}-\delta_te_{i, j}^{k+\frac12}\big)^2. \end{array} \end{equation} (3.5)

    Multiplying both sides of Eq (3.5) by a^2h_xh_y and summing i from 0 to m_1-1 and j from 1 to m_2-1, we obtain

    \begin{equation*} a^2\|\delta_xe^{k+1}\|^2\leq\frac{3a^2}{4}\|\delta_xe^{k+1}\|^2+\frac{a^2}{2}\|\delta_xe^{k+\frac12}\|^2+\frac{r_x^2}{2}\|\delta_te^{k+\frac12}\|^2, \end{equation*}

    which shows that

    \begin{equation} a^2\|\delta_xe^{k+1}\|^2\leq2a^2\|\delta_xe^{k+\frac12}\|^2+2r_x^2\|\delta_te^{k+\frac12}\|^2. \end{equation} (3.6)

    Also, noting that \delta_ye_{i, j+\frac12}^{k+1} = \frac{\tau}{2}\delta_y\delta_te_{i, j+\frac12}^{k+\frac12}+\delta_ye_{i, j+\frac12}^{k+\frac12} and utilizing analytical methods similar to those for Eq (3.6), we get

    \begin{equation} a^2\|\delta_ye^{k+1}\|^2\leq2a^2\|\delta_ye^{k+\frac12}\|^2+2r_y^2\|\delta_te^{k+\frac12}\|^2. \end{equation} (3.7)

    Adding Eq (3.6) to Eq (3.7) and using Eq (3.4a) lead to

    \begin{equation*} a^2|e^{k+1}|_1^2\leq2E^k+\frac{2(r_x^2+r_y^2)}{c_2}E^k = \frac{2}{c_2}E^k, \end{equation*}

    which is used along with Lemma 3.1 to find that

    \begin{equation} \|e^k\|^2\leq\frac{1}{\frac{6}{(b_2-b_1)^2}+\frac{6}{(d_2-d_1)^2}}|e^k|_1^2\leq\frac{(b_2-b_1)^2(d_2-d_1)^2}{6[(b_2-b_1)^2+(d_2-d_1)^2]}\cdot\frac{2}{c_2a^2}E^{k-1}. \end{equation} (3.8)

    The proof is finished.

    Let the exact solution u({\mathbf x}, t)\in C^{4, 4}(\bar{\Omega}\times[-s, T]). Then there exists a positive constant c_3 such that

    \begin{equation} |(R_1)_{i, j}^k|\leq c_3(\tau^2+h_x^2+h_y^2). \end{equation} (3.9)

    Theorem 3.1 Let u({\mathbf x}, t)\in C^{4, 4}(\bar{\Omega}\times[-s, T]) be the exact solution of (1.1a)–(1.1c), and let U_{i, j}^k be the solution of the scheme (3.3a)–(3.3c) at time level k. Assuming that there exist positive constants \mu, \sigma and \epsilon such that \mu h\leq h_x\leq h, \mu h\leq h_y\leq h and \tau\leq\sigma h^{\frac12+\epsilon}, we can derive that

    \begin{equation} \|e^k\|_1\leq c_4(\tau^2+h_x^2+h_y^2) \end{equation} (3.10)

    under the conditions that

    \begin{equation*} r_x^2+r_y^2 < 1, \quad h\leq\min\Big\{\Big(\frac{\varepsilon_0\mu}{2\sigma^2c_4}\Big)^{\frac{1}{2\epsilon}}, \frac{\varepsilon_0\mu}{4c_4}\Big\}, \quad \tau\leq\Big\{4+\frac{8c_1^2(b_2-b_1)^2(d_2-d_1)^2}{3c_2^2a^2[(b_2-b_1)^2+(d_2-d_1)^2]}\Big\}^{-1}, \end{equation*}

    and Assumption A are valid. Here,

    \begin{equation*} c_4 = \Big[\Big(1+\frac{(b_2-b_1)^2(d_2-d_1)^2}{6[(d_2-d_1)^2+(b_2-b_1)^2]}\Big)\frac{2c_5}{a^2c_2}\Big]^{\frac12}, \end{equation*}
    \begin{equation*} c_5 = \frac{Tc_3^2(b_2-b_1)(d_2-d_1)}{c_2}\exp\Big(\Big\{4+\frac{8c_1^2(b_2-b_1)^2(d_2-d_1)^2}{3c_2^2a^2[(b_2-b_1)^2+(d_2-d_1)^2]}\Big\}T\Big). \end{equation*}

    Proof. For convenience, denote f(u_{i, j}^k) = f(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k) and f(U_{i, j}^k) = f(U_{i, j}^k, U_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k) when no confusion arises. Subtracting Eq (3.3a) from Eq (3.1), the error equations are derived as follows:

    \begin{array}{l} \delta_t^2e_{i, j}^k-a^2(\delta_x^2e_{i, j}^k+\delta_y^2e_{i, j}^k) = f(u_{i, j}^k)-f(U_{i, j}^k)+(R_1)_{i, j}^k, \quad {\mathbf x}_{i, j}\in\Omega_h, \; 0\leq k\leq n, \end{array} (3.11a)
    \begin{array}{l} e_{i, j}^k = 0, \quad {\mathbf x}_{i, j}\in\bar{\Omega}_h, \; -n_1\leq k\leq 0, \end{array} (3.11b)
    \begin{array}{l} e_{i, j}^k = 0, \quad {\mathbf x}_{i, j}\in\partial\Omega_h, \; 0\leq k\leq n. \end{array} (3.11c)

    As e_{i, j}^k = 0 (-n_1\leq k\leq 0), it is obvious that Eq (3.10) holds for -n_1\leq k\leq 0. Assuming that Eq (3.10) is valid for -n_1\leq k\leq l, we will prove that it is also true for k = l+1.

    As \mu h\leq h_x\leq h and \mu h\leq h_y\leq h, it is easy to find that

    \begin{equation} \frac{1}{h_x}\leq\frac{1}{\mu h}, \quad \frac{1}{h_y}\leq\frac{1}{\mu h}. \end{equation} (3.12)

    Thus, using \tau\leq\sigma h^{\frac12+\epsilon} and Eq (3.12), we obtain

    \begin{equation} \frac{\tau^2}{\sqrt{h_xh_y}}+\frac{h_x^{\frac32}}{\sqrt{h_y}}+\frac{h_y^{\frac32}}{\sqrt{h_x}}\leq\frac{\sigma^2h^{1+2\epsilon}}{\mu h}+\frac{h^{\frac32}}{\sqrt{\mu h}}+\frac{h^{\frac32}}{\sqrt{\mu h}}\leq\frac{\sigma^2}{\mu}h^{2\epsilon}+\frac{2h}{\mu}. \end{equation} (3.13)

    Besides, applying h\leq\min\Big\{\Big(\frac{\varepsilon_0\mu}{2\sigma^2c_4}\Big)^{\frac{1}{2\epsilon}}, \frac{\varepsilon_0\mu}{4c_4}\Big\}, it is easy to find that

    \begin{equation} \frac{\sigma^2}{\mu}h^{2\epsilon}\leq\frac{\varepsilon_0}{2c_4}, \quad \frac{2h}{\mu}\leq\frac{\varepsilon_0}{2c_4}. \end{equation} (3.14)

    As a result, using the induction hypothesis and Eqs (3.12)–(3.14) yields that

    \begin{equation*} \begin{array}{ll} |e_{i, j}^k|\leq h_x^{-\frac12}h_y^{-\frac12}\|e^k\|\leq h_x^{-\frac12}h_y^{-\frac12}\|e^k\|_1\leq h_x^{-\frac12}h_y^{-\frac12}c_4(\tau^2+h_x^2+h_y^2)\\ \quad = c_4\Big(\frac{\tau^2}{\sqrt{h_xh_y}}+\frac{h_x^{\frac32}}{\sqrt{h_y}}+\frac{h_y^{\frac32}}{\sqrt{h_x}}\Big)\leq c_4\Big(\frac{\tau^2}{\mu h}+\frac{h^{\frac32}}{\sqrt{\mu h}}+\frac{h^{\frac32}}{\sqrt{\mu h}}\Big)\leq c_4\Big(\frac{\sigma^2}{\mu}h^{2\epsilon}+\frac{2h}{\mu}\Big) < \varepsilon_0, \quad -n_1\leq k\leq l, \end{array} \end{equation*}

    from which, together with Assumption A, it follows that

    \begin{equation} |f(u_{i, j}^k)-f(U_{i, j}^k)|\leq c_1(|e_{i, j}^k|+|e_{i, j}^{k-n_1}|), \quad -n_1\leq k\leq l. \end{equation} (3.15)

    Taking the inner product of Eq (3.11a) with 2\delta_{\hat t}e^k gives

    \begin{equation} \langle\delta_t^2e^k, 2\delta_{\hat t}e^k\rangle-a^2\langle\delta_x^2e^k, 2\delta_{\hat t}e^k\rangle-a^2\langle\delta_y^2e^k, 2\delta_{\hat t}e^k\rangle = \langle f(u^k)-f(U^k), 2\delta_{\hat t}e^k\rangle+\langle(R_1)^k, 2\delta_{\hat t}e^k\rangle. \end{equation} (3.16)

    In what follows, we estimate each term of Eq (3.16) step by step. Using the equality \alpha^2-\beta^2 = (\alpha-\beta)(\alpha+\beta) gives

    \begin{equation} \langle\delta_t^2e^k, 2\delta_{\hat t}e^k\rangle = \frac{1}{\tau}\big(\|\delta_te^{k+\frac12}\|^2-\|\delta_te^{k-\frac12}\|^2\big). \end{equation} (3.17)

    Applying the discrete Green formula yields

    \begin{equation} -a^2\langle\delta_x^2e^k, 2\delta_{\hat t}e^k\rangle = \frac{a^2}{\tau}h_xh_y\sum\limits_{i = 0}^{m_1-1}\sum\limits_{j = 1}^{m_2-1}\delta_xe_{i+\frac12, j}^k\big(\delta_xe_{i+\frac12, j}^{k+1}-\delta_xe_{i+\frac12, j}^{k-1}\big) = \frac{a^2}{\tau}\big[\langle e^k, e^{k+1}\rangle_{1, x}-\langle e^k, e^{k-1}\rangle_{1, x}\big]. \end{equation} (3.18)

    Similarly to Eq (3.18), we also arrive at

    \begin{equation} -a^2\langle\delta_y^2e^k, 2\delta_{\hat t}e^k\rangle = \frac{a^2}{\tau}\big[\langle e^k, e^{k+1}\rangle_{1, y}-\langle e^k, e^{k-1}\rangle_{1, y}\big]. \end{equation} (3.19)

    Next, we estimate each term on the right-hand side of Eq (3.16). Applying Eq (3.15), the inequalities 2\alpha\beta\leq\frac{1}{\varepsilon}\alpha^2+\varepsilon\beta^2 and (\alpha+\beta)^2\leq2(\alpha^2+\beta^2), and Lemma 3.3, we have

    \begin{equation} \begin{array}{ll} \langle f(U^k)-f(u^k), 2\delta_{\hat t}e^k\rangle\leq\frac{2c_1^2}{c_2}\big(\|e^k\|^2+\|e^{k-n_1}\|^2\big)+\frac{c_2}{2}\big(\|\delta_te^{k+\frac12}\|^2+\|\delta_te^{k-\frac12}\|^2\big)\\ \quad \leq\frac{2c_1^2}{c_2}\big(\|e^k\|^2+\|e^{k-n_1}\|^2\big)+\frac12\big(E^k+E^{k-1}\big). \end{array} \end{equation} (3.20)

    Using analytical methods similar to those for Eq (3.20) yields

    \begin{equation} \langle(R_1)^k, 2\delta_{\hat t}e^k\rangle\leq\frac{c_3^2(b_2-b_1)(d_2-d_1)}{c_2}(\tau^2+h_x^2+h_y^2)^2+\frac12\big(E^k+E^{k-1}\big). \end{equation} (3.21)

    Substituting Eqs (3.17)–(3.21) into Eq (3.16) yields

    \begin{equation} E^k-E^{k-1}\leq\tau\big(E^k+E^{k-1}\big)+\frac{2c_1^2\tau}{c_2}\big(\|e^k\|^2+\|e^{k-n_1}\|^2\big)+\frac{c_3^2(b_2-b_1)(d_2-d_1)\tau}{c_2}(\tau^2+h_x^2+h_y^2)^2. \end{equation} (3.22)

    Using Eq (3.4c) in Eq (3.22), replacing k with \gamma and summing \gamma from 0 to k, we have

    \begin{equation} \begin{array}{ll} E^k\leq\tau\sum\limits_{\gamma = 1}^{k}E^\gamma+\tau\sum\limits_{\gamma = 0}^{k-1}E^\gamma+\tau\Big(\frac{2c_1}{c_2a}\Big)^2\frac{(b_2-b_1)^2(d_2-d_1)^2}{6[(b_2-b_1)^2+(d_2-d_1)^2]}\sum\limits_{\gamma = 0}^{k-1}E^\gamma\\ \qquad +\tau\Big(\frac{2c_1}{c_2a}\Big)^2\frac{(b_2-b_1)^2(d_2-d_1)^2}{6[(b_2-b_1)^2+(d_2-d_1)^2]}\sum\limits_{\gamma = -n_1}^{k-n_1-1}E^\gamma+\tau\sum\limits_{\gamma = 0}^{k}\frac{c_3^2(b_2-b_1)(d_2-d_1)}{c_2}(\tau^2+h_x^2+h_y^2)^2\\ \quad \leq\tau\Big\{2+\frac{8c_1^2}{c_2^2a^2}\cdot\frac{(b_2-b_1)^2(d_2-d_1)^2}{6[(b_2-b_1)^2+(d_2-d_1)^2]}\Big\}\sum\limits_{\gamma = 0}^{k}E^\gamma+\frac{c_3^2(b_2-b_1)(d_2-d_1)k\tau}{c_2}(\tau^2+h_x^2+h_y^2)^2\\ \quad \leq\tau\Big\{2+\frac{4c_1^2(b_2-b_1)^2(d_2-d_1)^2}{3c_2^2a^2[(b_2-b_1)^2+(d_2-d_1)^2]}\Big\}\sum\limits_{\gamma = 0}^{k}E^\gamma+\frac{c_3^2(b_2-b_1)(d_2-d_1)T}{c_2}(\tau^2+h_x^2+h_y^2)^2. \end{array} \end{equation} (3.23)

    Hence, applying Lemma 3.2 to Eq (3.23), it follows from \tau\leq\big\{4+\frac{8c_1^2(b_2-b_1)^2(d_2-d_1)^2}{3c_2^2a^2[(b_2-b_1)^2+(d_2-d_1)^2]}\big\}^{-1} that

    \begin{equation} E^k\leq c_5(\tau^2+h_x^2+h_y^2)^2, \quad 0\leq k\leq l. \end{equation} (3.24)

    In Eq (3.24), setting k = l and applying Lemma 3.1 and Eqs (3.4b) and (3.4c) in Lemma 3.3, we can conclude that Eq (3.10) holds for k = l+1.

    Thus, by mathematical induction, this theorem is valid.

    This section is devoted to improving the computational efficiency of the EFDM (3.3a)–(3.3c) by using REMs. As the EFDM (3.3a)–(3.3c) is conditionally stable, the temporal meshsize is often taken smaller than the spatial meshsizes. Thus, a REM in space is devised to improve the computational efficiency. In this section, C is a positive constant independent of the grid parameters \tau, h_x and h_y.

    Theorem 3.2 Suppose that u({\mathbf x}, t)\in C^{6, 4}(\bar{\Omega}\times[-s, T]) is the exact solution of Eqs (1.1a)–(1.1c), and that U_{i, j}^k(\tau, h_x, h_y) is the numerical solution of the first EFDM (3.3a)–(3.3c) at ({\mathbf x}_{i, j}, t_k) using meshsizes h_x, h_y and \tau. Define a REM as follows:

    \begin{equation} (U_E)_{i, j}^k = \left\{\begin{array}{ll} \frac43U_{2i, 2j}^k(\tau, \frac{h_x}{2}, \frac{h_y}{2})-\frac13U_{i, j}^k(\tau, h_x, h_y), & {\mathbf x}_{i, j}\in\bar{\Omega}_h, \; 1\leq k\leq n, \\ \phi({\mathbf x}_{i, j}, t_k), & {\mathbf x}_{i, j}\in\bar{\Omega}_h, \; -n_1\leq k\leq 0. \end{array}\right. \end{equation} (3.25)

    Then, as 4a^2\big[\big(\frac{\tau}{h_x}\big)^2+\big(\frac{\tau}{h_y}\big)^2\big] < 1 and the grid parameters \tau, h_x and h_y are small enough, we obtain

    \begin{equation} \|u^k-(U_E)^k\|_1\leq C(\tau^2+h_x^4+h_y^4). \end{equation} (3.26)

    Proof. Let r_1({\mathbf x}, t) = \frac{a^2}{12}\frac{\partial^4u}{\partial x^4}({\mathbf x}, t) and r_2({\mathbf x}, t) = \frac{a^2}{12}\frac{\partial^4u}{\partial y^4}({\mathbf x}, t). Then a combination of the Taylor expansion formula with (3.2) gives

    \begin{equation} (R_1)_{i, j}^k = \frac{\tau^2}{12}\frac{\partial^4u}{\partial t^4}({\mathbf x}_{i, j}, t_k)-h_x^2r_1({\mathbf x}_{i, j}, t_k)-h_y^2r_2({\mathbf x}_{i, j}, t_k)+O(\tau^4+h_x^4+h_y^4). \end{equation} (3.27)

    By Eq (3.27), we can rewrite the error equations (3.11a)–(3.11c) as

    \begin{array}{l} \delta_t^2e_{i, j}^k-a^2\Delta_he_{i, j}^k = f(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)-f(U_{i, j}^k, U_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)+\frac{\tau^2}{12}\frac{\partial^4u}{\partial t^4}({\mathbf x}_{i, j}, t_k)\\ \qquad -h_x^2(r_1)_{i, j}^k-h_y^2(r_2)_{i, j}^k+O(\tau^4+h_x^4+h_y^4), \quad {\mathbf x}_{i, j}\in\Omega_h, \; 0\leq k\leq n; \end{array} (3.28a)
    \begin{array}{l} e_{i, j}^k = 0, \quad {\mathbf x}_{i, j}\in\bar{\Omega}_h, \; -n_1\leq k\leq 0; \end{array} (3.28b)
    \begin{array}{l} e_{i, j}^k = 0, \quad {\mathbf x}_{i, j}\in\partial\Omega_h, \; 0\leq k\leq n. \end{array} (3.28c)

    Furthermore, two auxiliary problems are introduced to derive the asymptotic expansion of the numerical solutions. Namely, let v1(x,t) and v2(x,t) be the exact solutions of the following initial-boundary-value problems (IBVPs)

    \begin{array}{l} (v_1)_{tt}-a^2\Delta v_1 = r_1({\mathbf x}, t)+f_\mu(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_1({\mathbf x}, t)+f_\upsilon(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_1({\mathbf x}, t-s), \quad ({\mathbf x}, t)\in\Omega\times[0, T]; \end{array} (3.29a)
    \begin{array}{l} v_1({\mathbf x}, t) = 0, \quad ({\mathbf x}, t)\in\bar{\Omega}\times[-s, 0]; \end{array} (3.29b)
    \begin{array}{l} v_1({\mathbf x}, t) = 0, \; (v_1)_t({\mathbf x}, t) = 0, \quad {\mathbf x}\in\partial\Omega, \; t\in[0, T]; \end{array} (3.29c)
    \begin{array}{l} (v_2)_{tt}-a^2\Delta v_2 = r_2({\mathbf x}, t)+f_\mu(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_2({\mathbf x}, t)+f_\upsilon(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_2({\mathbf x}, t-s), \quad ({\mathbf x}, t)\in\Omega\times[0, T]; \end{array} (3.30a)
    \begin{array}{l} v_2({\mathbf x}, t) = 0, \quad ({\mathbf x}, t)\in\bar{\Omega}\times[-s, 0]; \end{array} (3.30b)
    \begin{array}{l} v_2({\mathbf x}, t) = 0, \; (v_2)_t({\mathbf x}, t) = 0, \quad {\mathbf x}\in\partial\Omega, \; t\in[0, T], \end{array} (3.30c)

    respectively.

    Denote (v_1)_{i, j}^k = v_1({\mathbf x}_{i, j}, t_k) and (v_2)_{i, j}^k = v_2({\mathbf x}_{i, j}, t_k). Applying difference methods similar to those utilized in Eqs (3.3a)–(3.3c) to discretize Eqs (3.29a)–(3.29c) and Eqs (3.30a)–(3.30c) yields

    \begin{array}{l} \delta_t^2(v_1)_{i, j}^k-a^2\Delta_h(v_1)_{i, j}^k = r_1({\mathbf x}_{i, j}, t_k)+f_\mu(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)(v_1)_{i, j}^k\\ \qquad +f_\upsilon(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)(v_1)_{i, j}^{k-n_1}+O(\tau^2+h_x^2+h_y^2), \quad {\mathbf x}_{i, j}\in\Omega_h, \; 0\leq k\leq n; \end{array} (3.31a)
    \begin{array}{l} (v_1)_{i, j}^k = 0, \quad {\mathbf x}_{i, j}\in\bar{\Omega}_h, \; -n_1\leq k\leq 0; \end{array} (3.31b)
    \begin{array}{l} (v_1)_{i, j}^k = 0, \quad {\mathbf x}_{i, j}\in\partial\Omega_h, \; 0\leq k\leq n, \end{array} (3.31c)
    \begin{array}{l} \delta_t^2(v_2)_{i, j}^k-a^2\Delta_h(v_2)_{i, j}^k = r_2({\mathbf x}_{i, j}, t_k)+f_\mu(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)(v_2)_{i, j}^k\\ \qquad +f_\upsilon(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)(v_2)_{i, j}^{k-n_1}+O(\tau^2+h_x^2+h_y^2), \quad {\mathbf x}_{i, j}\in\Omega_h, \; 0\leq k\leq n; \end{array} (3.32a)
    \begin{array}{l} (v_2)_{i, j}^k = 0, \quad {\mathbf x}_{i, j}\in\bar{\Omega}_h, \; -n_1\leq k\leq 0; \end{array} (3.32b)
    \begin{array}{l} (v_2)_{i, j}^k = 0, \quad {\mathbf x}_{i, j}\in\partial\Omega_h, \; 0\leq k\leq n, \end{array} (3.32c)

    respectively.

    Denote p_{i, j}^k = e_{i, j}^k+h_x^2(v_1)_{i, j}^k+h_y^2(v_2)_{i, j}^k, 0\leq i\leq m_1, 0\leq j\leq m_2, -n_1\leq k\leq n. Then, multiplying Eqs (3.31a)–(3.31c) by h_x^2 and Eqs (3.32a)–(3.32c) by h_{y}^{2}, respectively, and then adding the obtained results to Eqs (3.28a)–(3.28c), we have

    \begin{array}{l} \delta_t^2p_{i, j}^k-a^2\Delta_{h}p_{i, j}^k = (\widetilde{H}_1)_{i, j}^k+(\tilde{R_1})_{i, j}^k, \; \; \; \; {\mathbf x}_{i, j}\in\bar{\Omega}_h, \; \; \; 0\leq k \leq n; \end{array} (3.33a)
    \begin{array}{l} p_{i, j}^k = 0, \quad {\mathbf x}_{i, j}\in\bar{\Omega}_h, \; \; \; -n_1\leq k \leq 0; \end{array} (3.33b)
    \begin{array}{l} p_{i, j}^k = 0, \quad {\mathbf x}_{i, j} \in \partial\Omega_{h}, \; \; \; 0\leq k \leq n, \end{array} (3.33c)

    where

    \begin{equation*} \begin{array}{ll} (\tilde{H}_1)_{i, j}^k = f(u_{i, j}^k+h_x^2(v_1)_{i, j}^k+h_y^2(v_2)_{i, j}^k, u_{i, j}^{k-n_{1}}+h_x^2(v_{1})_{i, j}^{k-n_{1}}+h_{y}^{2}(v_{2})_{i, j}^{k-n_{1}}, {\mathbf x}_{i, j}, t_{k})\\ \quad -f(U_{i, j}^{k}, U_{i, j}^{k-n_{1}}, {\mathbf x}_{i, j}, t_{k}), \end{array} \end{equation*}

    and

    \begin{equation} (\tilde{R_1})_{i, j}^k = \frac{\tau^2}{12}\frac{\partial^4u}{\partial t^4}(x_i, y_j, t_k)+O(\tau^4+h_x^4+h_y^4). \end{equation} (3.34)

    In Eq (3.33a), the following Taylor expansion formula

    \begin{equation} \begin{array}{ll} f(u_{i, j}^{k}, u_{i, j}^{k-n_{1}}, {\mathbf x}_{i, j}, t_k)+h_{x}^{2}\Big[f_{\mu}(u_{i, j}^k, u_{i, j}^{k-n_{1}}, {\mathbf x}_{i, j}, t_{k})(v_{1})_{i, j}^{k} + f_{\upsilon}(u_{i, j}^{k}, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_{k})(v_{1})_{i, j}^{k-n_{1}}\Big] \nonumber \\ +h_{y}^{2}\Big[f_{\mu}(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)(v_2)_{i, j}^{k} +f_{\upsilon}(u_{i, j}^k, u_{i, j}^{k-n_{1}}, {\mathbf x}_{i, j}, t_{k})(v_2)_{i, j}^{k-n_1}\Big] \nonumber \\ = f(u_{i, j}^{k}+h_{x}^{2}(v_{1})_{i, j}^{k}+h_{y}^{2}(v_{2})_{i, j}^k, u_{i, j}^{k-n_1}+h_{x}^{2}(v_{1})_{i, j}^{k-n_{1}}+h_y^2(v_{2})_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_{k})\nonumber \\ +O(h_{x}^{4}+h_{y}^{4}+h_{x}^{2}h_{y}^{2}) \nonumber \end{array} \end{equation}

    is used.

    Then, as a^2[(\frac{\tau}{h_{x}})^2+(\frac{\tau}{h_{y}})^2] < 1 , using the same techniques as those proposed in the proof of Theorem 3.1 , we find that

    \begin{equation} \|p^k\|_1\leq C(\tau^2+h_x^4+h_y^4), \quad 0\leq k \leq n, \end{equation} (3.35)

    where, C is a positive constant number and independent of grid parameters. Namely,

    \begin{equation} \|u^{k}-[U^{k}(\tau, h_{x}, h_{y})-h_{x}^{2}(v_{1})^{k}-h_{y}^{2}(v_{2})^{k}]\|_1\leq C(\tau^{2}+h_{x}^{4}+h_{y}^{4}), \quad 0\leq k \leq n. \end{equation} (3.36)

    On the other hand, from Richardson extrapolation solutions

    \begin{equation} (U_E)_{i, j}^k = \frac{4}{3}U_{2i, 2j}^k(\tau, \frac{h_{x}}{2}, \frac{h_{y}}{2})-\frac{1}{3}U_{i, j}^{k}(\tau, h_{x}, h_{y}), \quad {\mathbf x}_{i, j}\in \Omega_{h}, \quad 0 \leq k \leq n, \end{equation} (3.37)

    it follows that

    \begin{align} u_{i, j}^{k}-(U_E)_{i, j}^{k} = \; &\frac{4}{3}\Big\{u_{i, j}^k-\big[U_{2i, 2j}^k(\tau, \frac{h_x}{2}, \frac{h_y}{2})-(\frac{h_x}{2})^2(v_1)_{i, j}^k -(\frac{h_{y}}{2})^2(v_2)_{i, j}^{k}\big]\Big\} \\ &\; \; -\frac{1}{3}\Big\{u_{i, j}^{k}-\big[U_{i, j}^{k}(\tau, h_{x}, h_{y})-h_{x}^{2}(v_{1})_{i, j}^{k}-h_{y}^{2}(v_2)_{i, j}^{k}\big]\Big\}. \end{align} (3.38)

    Using the triangle inequality to Eq (3.38) and noting Eq (3.36) yields that

    \begin{align} \|u^k-(U_{E})^{k}\|_{1} \leq &\frac{4}{3}\|u^{k}-[U^{k}(\tau, \frac{h_x}{2}, \frac{h_y}{2})-(\frac{h_x}{2})^2(v_{1})^{k} -(\frac{h_{y}}{2})^2(v_{2})^{k}]\|_{1} \\ &+\frac{1}{3}\|u^k-[U^k(\tau, h_{x}, h_{y})-h_{x}^{2}(v_{1})^k-h_{y}^{2}(v_{2})^{k}]\|_{1} \\ \leq & \frac{4C}{3}\Big[\tau^{2}+(\frac{h_x}{2})^{4}+(\frac{h_y}{2})^{4}\Big]+\frac{C}{3}(\tau^2+h_{x}^{4}+h_{y}^{4}) \\ \leq & \frac{5C}{3}(\tau^{2}+h_{x}^{4}+h_{y}^{4}), \end{align} (3.39)

    holds as long as 4a^2[(\frac{\tau}{h_{x}})^2+(\frac{\tau}{h_{y}})^2] < 1 . The proof is completed.

    Remark I Theorem 3.1 shows that the first EFDM (3.3a)–(3.3c) is conditionally convergent with an order of O(\tau^{2}+h^{2}_{x}+h^{2}_{y}) as r^{2}_{x}+r^{2}_{y} < 1 . Besides, Theorem 3.2 shows that the REM (3.25) is also conditionally convergent with an order of O(\tau^{2}+h^{4}_{x}+h^{4}_{y}) as 4(r^{2}_{x}+r^{2}_{y}) < 1 (see Eq (3.39)). Moreover, as h_{x} = h_{y} = h and \tau = h^{2} , the Richardson extrapolation solutions have a convergence rate of O(h^{4}) . In Section 6, to obtain numerical solutions with optimal convergence rates, we take temporal and spatial grids according to these conditions.
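Computationally, the REM (3.25) only requires solving on the original grid and on the halved grid and combining the two solutions at the shared nodes. The following is a minimal sketch (the function name is ours), together with a check that an exact h^2-type error term is eliminated, mimicking the leading term of the asymptotic expansion used in the proof of Theorem 3.2:

```python
import numpy as np

def richardson(U_fine, U_coarse):
    """(U_E)_{i,j} = (4/3)*U^{tau,h/2}_{2i,2j} - (1/3)*U^{tau,h}_{i,j}."""
    return (4.0/3.0)*U_fine[::2, ::2] - (1.0/3.0)*U_coarse

# Model situation: the numerical solutions equal u + h^2*v with v = 1,
# i.e. a pure second-order spatial error term.
h = 0.25
xc = np.linspace(0.0, 1.0, 5)        # coarse grid, meshsize h
xf = np.linspace(0.0, 1.0, 9)        # fine grid, meshsize h/2
Xc, Yc = np.meshgrid(xc, xc, indexing="ij")
Xf, Yf = np.meshgrid(xf, xf, indexing="ij")
u_exact = Xc**2 + Yc**2
U_coarse = u_exact + h**2
U_fine = Xf**2 + Yf**2 + (h/2.0)**2
assert np.allclose(richardson(U_fine, U_coarse), u_exact)
```

Since (4/3)(h/2)^2 - (1/3)h^2 = 0, the h^2 contribution cancels exactly, leaving only the higher-order terms, which is the mechanism behind Eq (3.39).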

    In this section, the famous Du Fort-Frankel scheme is generalized to the numerical solution of the delayed nonlinear wave Eqs (1.1a)–(1.1c).

    Using the following difference formulas

    \begin{array}{l} u_{xx}({\mathbf x}_{i, j}, t_{k}) = \delta_x^2u_{i, j}^k-\frac{\tau^2}{h_x^2}\delta_t^2u_{i, j}^{k}-\frac{h_{x}^{2}}{12}\frac{\partial^{4}u}{\partial x^4}({\mathbf x}_{i, j}, t_{k})+\frac{\tau^2}{h_{x}^2}\frac{\partial^2 u}{\partial t^2}({\mathbf x}_{i, j}, t_{k})+O( \frac{\tau^4}{h_{x}^2}+h^{4}_{x}), \end{array} (4.1a)
    \begin{array}{l} u_{yy}({\mathbf x}_{i, j}, t_{k}) = \delta_{y}^{2}u_{i, j}^{k}-\frac{\tau^2}{h_{y}^{2}}\delta_{t}^{2}u_{i, j}^{k}-\frac{h_{y}^{2}}{12}\frac{\partial^4u}{\partial y^4}({\mathbf x}_{i, j}, t_{k})+\frac{\tau^2}{h_{y}^2}\frac{\partial^2 u}{\partial t^2}({\mathbf x}_{i, j}, t_{k})+O(\frac{\tau^4}{h^{2}_{y}}+h^{4}_{y}) \end{array} (4.1b)

    to approximate the spatial derivatives of Eqs (1.2a)–(1.2c) yields that

    \begin{eqnarray} \begin{array}{ll} (1+r_x^2+r_y^2)\delta_t^2u_{i, j}^k-a^2\Delta_{h}u_{i, j}^{k} = f(u_{i, j}^k, u_{i, j}^{k-n_{1}}, {\mathbf x}_{i, j}, t_k)+(R_2)_{i, j}^{k}, \\ \; \; {\mathbf x}_{i, j}\in\Omega_h, \; \; 0\leq k\leq n, \end{array} \end{eqnarray} (4.2)

    where

    \begin{eqnarray} \begin{array}{ll} (R_2)_{i, j}^k = \frac{\tau^2}{12}\frac{\partial^4u}{\partial t^4}({\mathbf x}_{i, j}, t_k)-\frac{a^2h_x^2}{12} \frac{\partial^4u}{\partial x^4}({\mathbf x}_{i, j}, t_k)-\frac{a^2h_y^2}{12}\frac{\partial^4u}{\partial y^4}({\mathbf x}_{i, j}, t_k) \\ \quad +a^2(\frac{\tau^2}{h_x^2} +\frac{\tau^2}{h^{2}_{y}})\frac{\partial^2u}{\partial t^2}({\mathbf x}_{i, j}, t_{k})+O(\tau^4+h_x^4+h_y^4 +\frac{\tau^4}{h_x^2}+\frac{\tau^4}{h_y^2}). \end{array} \end{eqnarray} (4.3)

    Omitting the small term (R_2)_{i, j}^{k} and replacing u^{k}_{i, j} by its approximation U^{k}_{i, j} in Eq (4.2), we get the second EFDM as follows

    \begin{array}{l} (1+r_x^{2}+r_{y}^{2})\delta_{t}^{2}U_{i, j}^{k}-a^2\Delta_{h} U_{i, j}^{k} = f(U_{i, j}^{k}, U_{i, j}^{k-n_{1}}, {\mathbf x}_{i, j}, t_{k}), \; \; {\mathbf x}_{i, j}\in \Omega_{h}, \; \; 0\leq k \leq n, \end{array} (4.4a)
    \begin{array}{l} U_{i, j}^{k} = \phi({\mathbf x}_{i, j}, t_k), \; \; {\mathbf x}_{i, j}\in \overline{\Omega}_{h}, \; \; -n_{1} \leq k \leq 0, \end{array} (4.4b)
    \begin{array}{l} U_{i, j}^{k} = \psi({\mathbf x}_{i, j}, t_k), \; \; {\mathbf x}_{i, j}\in\partial\Omega_{h}, \; \; 0\leq k \leq n. \end{array} (4.4c)
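    Because U_{i, j}^{k+1} enters Eq (4.4a) only through \delta_{t}^{2}U_{i, j}^{k} , the scheme is fully explicit. A minimal Python sketch of one interior time step (a hypothetical helper with the source term f supplied by the caller; a sketch of the update, not the authors' code):

```python
import numpy as np

def efdm2_step(U_prev, U_curr, U_delay, a, tau, hx, hy, f):
    """One explicit step of the second EFDM (4.4a), solved for U^{k+1}:
    (1 + rx^2 + ry^2) * (U_next - 2*U_curr + U_prev) / tau^2
        = a^2 * Laplacian_h(U_curr) + f(U_curr, U_delay),
    with rx = a*tau/hx, ry = a*tau/hy. Dirichlet boundary values are
    simply carried over from U_curr here."""
    rx2 = (a * tau / hx) ** 2
    ry2 = (a * tau / hy) ** 2
    # standard five-point Laplacian on the interior nodes
    lap = (
        (U_curr[2:, 1:-1] - 2 * U_curr[1:-1, 1:-1] + U_curr[:-2, 1:-1]) / hx**2
        + (U_curr[1:-1, 2:] - 2 * U_curr[1:-1, 1:-1] + U_curr[1:-1, :-2]) / hy**2
    )
    U_next = U_curr.copy()  # boundary rows/columns stay fixed
    U_next[1:-1, 1:-1] = (
        2 * U_curr[1:-1, 1:-1] - U_prev[1:-1, 1:-1]
        + tau**2 * (a**2 * lap + f(U_curr, U_delay)[1:-1, 1:-1]) / (1 + rx2 + ry2)
    )
    return U_next
```

Here the delayed level U_delay stands for U^{k-n_{1}} , read from stored history (or from the initial function when k-n_{1}\le 0 ).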

    This subsection is devoted to the convergence analysis of the second EFDM (4.4a)–(4.4c).

    Lemma 4.1 Let G^k = (1+r_x^2+r_y^2)\|\delta_te^{k+\frac{1}{2}}\|^2+a^{2}\langle e^k, e^{k+1}\rangle_{1, x}+a^{2}\langle e^k, e^{k+1}\rangle_{1, y} . Then the following inequalities

    \begin{array}{l} \|\delta_te^{k+\frac{1}{2}}\|^2\leq G^k, \quad |e^{k+\frac{1}{2}}|_1^2\leq\frac{G^k}{a^2}, \end{array} (4.5a)
    \begin{array}{l} |e^{k+1}|_{1}^{2}\le \frac{2(1+r_x^2+r_y^2)}{a^2}G^{k}, \end{array} (4.5b)
    \begin{array}{l} \|e^{k+1}\|^{2}_{1}\leq \Big\{1+\frac{(b_2-b_1)^2(d_2-d_1)^2} {6[(b_2-b_1)^2+(d_2-d_1)^2]}\Big\} \frac{2(1+r^{2}_{x}+r^{2}_{y})}{a^{2}}G^{k} \end{array} (4.5c)

    are valid.

    Proof. Using Lemma 3.1, we obtain

    \begin{array}{l} r^{2}_{x}\|\delta_{t}e^{k+\frac{1}{2}}\|^{2} -\frac{a^{2}\tau^{2}}{4}\|\delta_{x}\delta_{t}e^{k+\frac{1}{2}}\|^{2} \ge r^{2}_{x}\|\delta_{t}e^{k+\frac{1}{2}}\|^{2} -\frac{a^{2}\tau^{2}}{4}\cdot\frac{4}{h^{2}_{x}}\|\delta_{t}e^{k+\frac{1}{2}}\|^{2} = 0, \end{array} (4.6a)
    \begin{array}{l} r^{2}_{y}\|\delta_{t}e^{k+\frac{1}{2}}\|^{2} -\frac{a^{2}\tau^{2}}{4}\|\delta_{y}\delta_{t}e^{k+\frac{1}{2}}\|^{2} \ge r^{2}_{y}\|\delta_{t}e^{k+\frac{1}{2}}\|^{2} -\frac{a^{2}\tau^{2}}{4}\cdot\frac{4}{h^{2}_{y}}\|\delta_{t}e^{k+\frac{1}{2}}\|^{2} = 0. \end{array} (4.6b)

    Applying Eqs (4.6a), (4.6b) and the equality \alpha\beta = (\alpha+\beta)^{2}/4 -(\alpha-\beta)^{2}/4 , we have that

    \begin{eqnarray*} \begin{array}{ll} G^{k} = (1+r^{2}_{x}+r^{2}_{y})\|\delta_{t}e^{k+\frac{1}{2}}\|^{2}+\Big(a^{2}\|\delta_{x}e^{k+\frac{1}{2}}\|^{2} -\frac{a^{2}\tau^{2}}{4}\|\delta_{x}\delta_{t}e^{k+\frac{1}{2}}\|^{2}\Big) \\ \quad +\Big(a^{2}\|\delta_{y}e^{k+\frac{1}{2}}\|^{2} -\frac{a^{2}\tau^{2}}{4}\|\delta_{y}\delta_{t}e^{k+\frac{1}{2}}\|^{2}\Big) \\ \quad \ge \|\delta_{t}e^{k+\frac{1}{2}}\|^{2}+a^{2}|e^{k+\frac{1}{2}}|^{2}_{1}, \nonumber \end{array} \end{eqnarray*}

    which shows that Eq (4.5a) is valid.

    Using techniques similar to those developed in Eqs (3.6) and (3.7), together with Lemma 3.1 and Eq (4.5a) in Lemma 4.1, we obtain

    \begin{array}{l} |e^{k+1}|_1^2\leq2|e^{k+\frac{1}{2}}|_1^2+\frac{2(r_x^2 +r_y^2)}{a^2}\|\delta_te^{k+\frac{1}{2}}\|^2 \leq\frac{2(1+r_x^2+r_y^2)}{a^2}G^k, \end{array} (4.7a)
    \begin{array}{l} \|e^{k+1}\|^2\leq\frac{(b_2-b_1)^2(d_2-d_1)^2} {6[(b_2-b_1)^2+(d_2-d_1)^2]}|e^{k+1}|_1^2\\ \quad \leq \frac{(b_2-b_1)^2(d_2-d_1)^2(1+r_x^2+r_y^2)} {3a^2[(b_2-b_1)^2+(d_2-d_1)^2]}G^{k}. \end{array} (4.7b)

    Making use of Eqs (4.7a) and (4.7b) directly infers that

    \begin{eqnarray*} \begin{array}{ll} \|e^{k+1}\|^{2}_{1}\leq \Big\{1+\frac{(b_2-b_1)^2(d_2-d_1)^2} {6[(b_2-b_1)^2+(d_2-d_1)^2]}\Big\} \frac{2(1+r^{2}_{x}+r^{2}_{y})}{a^{2}}G^{k}. \end{array} \end{eqnarray*}

    The proof is completed.

    Assuming that u({\mathbf x}, t) \in C^{4, 4}(\Omega\times[-s, T]) , there exists a positive constant c_6 such that

    \begin{eqnarray} |(R_2)_{i, j}^k|\leq c_6(\tau^2+h_x^2+h_y^2+\frac{\tau^2}{h_x^2} +\frac{\tau^2}{h_y^2}). \end{eqnarray} (4.8)

    Theorem 4.1 Let u({\mathbf x}, t) \in \mathcal {C}^{4, 4}(\Omega\times [-s, T]) be the exact solution of Eqs (1.2a)–(1.2c), and U_{i, j}^{k} be the solution of the scheme Eqs (4.4a)–(4.4c) at node ({\mathbf x}_{i, j}, t_{k}) . Assume that there are positive constants \mu , \sigma and \epsilon such that \mu h\leq h_{x}\le h , \mu h\leq h_{y}\le h , \tau\leq\sigma h^{\frac{3}{2}+\epsilon} . It holds that

    \begin{eqnarray} \|e^k\|_1\leq c_7\Big(\tau^2+h_x^2+h_y^2+ \frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2}\Big) \end{eqnarray} (4.9)

    under the conditions that

    \begin{eqnarray*} \begin{array}{ll} h\leq\min \Big\{(\frac{\mu\varepsilon_0} {3c_7\sigma^2})^{\frac{1}{2(1+\epsilon)}}, \frac{\sqrt{\mu}\varepsilon_0} {6c_7}, (\frac{\mu^3\varepsilon_0}{6c_7\sigma^2})^{\frac{1}{2\epsilon}} \Big\}, \quad \tau \Big\{4+ \frac{4c_1^2(b_2-b_1)^2(d_2-d_1)^2(1+r_x^2+r_y^2)} {3[(b_2-b_1)^2+(d_2-d_1)^2]a^2}\Big\}\leq 1 \end{array} \end{eqnarray*}

    and Assumption A hold. Here,

    c_7 = \sqrt{(1+\frac{(b_2-b_1)^2(d_2-d_1)^2}{6[(d_2-d_1)^2+ (b_2-b_1)^2]})\frac{2(1+r_x^2+r_y^2)c_8}{a^2}},
    c_8 = Tc_6^2(b_2-b_1)(d_2-d_1)\exp\Big(\big(4+\frac{4c_1^2(b_2-b_1)^2(d_2-d_1)^2(1+r_x^2+r_y^2)} {3[(b_2-b_1)^2+(d_2-d_1)^2]a^2}\big)T\Big).

    Proof. As before, let f(u_{i, j}^{k}) = f(u_{i, j}^{k}, u_{i, j}^{k-n_{1}}, {\mathbf x}_{i, j}, t_{k}) and f(U_{i, j}^{k}) = f(U_{i, j}^{k}, U_{i, j}^{k-n_{1}}, {\mathbf x}_{i, j}, t_{k}) for convenience. Subtracting Eq (4.4a) from Eq (4.2), the error equations are derived as follows

    \begin{array}{l} (1+r_{x}^{2}+r_{y}^{2})\delta_{t}^{2}e_{i, j}^{k}-a^{2}\Delta_{h} e_{i, j}^{k} = f(u_{i, j}^{k})-f(U_{i, j}^{k})+(R_{2})_{i, j}^{k}, \; \; {\mathbf x}_{i, j}\in \Omega_h, \; \; 0\leq k \leq n, \end{array} (4.10a)
    \begin{array}{l} e_{i, j}^{k} = 0, \; \; {\mathbf x}_{i, j} \in \Omega_{h}, \; \; -n_1\leq k \leq 0, \end{array} (4.10b)
    \begin{array}{l} e_{i, j}^{k} = 0, \; \; \; {\mathbf x}_{i, j}\in\partial\Omega_{h}, \; \; 0\leq k \leq n. \end{array} (4.10c)

    As e_{i, j}^k = 0 ( -n_1\leq k\leq0 ), it is obvious that Eq (4.9) holds for -n_1\leq k\leq0 . Assuming that Eq (4.9) is valid for -n_1\leq k\leq l , we will prove that it is also true for k = l+1 .

    As \mu h \le h_{x} \le h and \mu h \le h_{y} \le h , it is easy to find that

    \begin{equation} \frac{1}{\sqrt{h_{x}}}\le \frac{1}{\sqrt{\mu h}}, \quad \frac{1}{\sqrt{h_{y}}}\le \frac{1}{\sqrt{\mu h}}. \end{equation} (4.11)

    Thus, using \tau\le \sigma h^{\frac{3}{2}+\epsilon} and Eq (4.11), we obtain

    \begin{equation} \begin{array}{ll} \Big( \frac{\tau^2}{\sqrt{h_{x}}\sqrt{h_{y}}}+\frac{h_{x}^{ \frac{3}{2}}}{\sqrt{h_y}}+\frac{h_y^{\frac{3}{2}}} {\sqrt{h_x}}+\frac{\tau^2}{\sqrt{h_x^5}\sqrt{h_y}}+ \frac{\tau^2}{\sqrt{h_x}\sqrt{h_y^5}}\Big) \\ \le \frac{\sigma^{2} h^{3+2\epsilon}}{\sqrt{\mu h}\sqrt{\mu h}} +\frac{h^{\frac{3}{2}}}{\sqrt{\mu h}} +\frac{h^{\frac{3}{2}}}{\sqrt{\mu h}}+\frac{\sigma^{2}h^{3+2\epsilon}}{\sqrt{(\mu h)^{5}}\sqrt{\mu h}}+\frac{\sigma^{2}h^{3+2\epsilon}}{\sqrt{\mu h}\sqrt{(\mu h)^{5}}} \\ \le \frac{\sigma^{2}}{\mu}h^{2+2\epsilon}+\frac{2h}{\sqrt{\mu}}+\frac{2\sigma^{2}h^{2\epsilon}}{\mu^{3}}. \end{array} \end{equation} (4.12)

    Besides, applying h\leq\min{((\frac{\mu\varepsilon_0} {3c_7\sigma^2})^{\frac{1}{2(1+\epsilon)}}, \frac{\sqrt{\mu}\varepsilon_0} {6c_7}, (\frac{\mu^3\varepsilon_0}{6c_7\sigma^2})^{\frac{1}{2\epsilon}})} , it is easy to find that

    \begin{equation} \frac{\sigma^{2}}{\mu}h^{2(1+\epsilon)}\le \frac{\varepsilon_{0}}{3c_{7}}, \quad \frac{2h}{\sqrt{\mu}}\le \frac{\varepsilon_{0}}{3c_{7}}, \quad \frac{2\sigma^{2}}{\mu^{3}}h^{2\epsilon}\le \frac{\varepsilon_{0}}{3c_{7}}. \end{equation} (4.13)

    Thus, by applying the induction hypothesis, Eq (4.12) and Eq (4.13), we can deduce that

    \begin{eqnarray} \begin{array}{ll} |e_{i, j}^k|\leq\ h_x^{-\frac{1}{2}} h_y^{-\frac{1}{2}}\|e^k\|\leq\ h_x^{-\frac{1}{2}} h_y^{-\frac{1}{2}}\|e^k\|_{1}\leq h_x^{-\frac{1}{2}} h_y^{-\frac{1}{2}}c_7(\tau^2+h_x^2+h_y^2+ \frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2}) \\ = c_{7} \Big( \frac{\tau^2}{\sqrt{h_x}\sqrt{h_y}}+\frac{h_x^{ \frac{3}{2}}}{\sqrt{h_y}}+\frac{h_y^{\frac{3}{2}}} {\sqrt{h_x}}+\frac{\tau^2}{\sqrt{h_x^5}\sqrt{h_y}}+ \frac{\tau^2}{\sqrt{h_x}\sqrt{h_y^5}}\Big) \\ \leq c_7(\frac{\sigma^2h^{2+2\epsilon}}{\mu}+\frac{2 h}{\sqrt{\mu}}+\frac{2\sigma^2h^{2\epsilon}}{\mu^3}) < \varepsilon_0, \quad -n_{1}\le k \le l. \end{array} \end{eqnarray} (4.14)

    Hence, applying Assumption A directly infers that

    \begin{eqnarray} |f(U_{i, j}^{k})-f(u_{i, j}^{k})|\leq c_1(|e_{i, j}^{k}|+|e_{i, j}^{k-n_{1}}|), \quad -n_{1}\le k \le l. \end{eqnarray} (4.15)

    Taking the inner product of Eq (4.10a) with 2\delta_{\hat{t}}e^k yields that

    \begin{eqnarray} \begin{array}{ll} (1+r_x^2+r_y^2)\langle\delta_t^2e^k, 2\delta_{\hat{t}}e^k\rangle-a^2 \langle\delta_x^2e^k, 2\delta_{\hat{t}}e^k\rangle-a^2 \langle\delta_y^2e^k, 2\delta_{\hat{t}}e^k\rangle \\ = \langle f(u^k)-f(U^k), 2\delta_{\hat{t}}e^k\rangle +\langle(R_2)^{k}, 2\delta_{\hat{t}}e^k\rangle. \end{array} \end{eqnarray} (4.16)

    Similar to the proof of Theorem 3.1, and using Lemma 4.1, the left-hand side of Eq (4.16) is estimated as follows

    \begin{eqnarray} \begin{array}{ll} (1+r_{x}^{2}+r_{y}^{2})\langle\delta_t^2e^k, 2\delta_{\hat{t}}e^k\rangle-a^2 \langle\delta_x^{2}e^{k}, 2\delta_{\hat{t}}e^{k}\rangle-a^{2} \langle\delta_y^2e^{k}, 2\delta_{\hat{t}}e^{k}\rangle\\ = \frac{1+r_{x}^{2}+r_{y}^{2}}{\tau}(\|\delta_te^{k+\frac{1}{2}}\|^2- \|\delta_{t}e^{k-\frac{1}{2}}\|^2)+\frac{a^{2}}{\tau}\Big[\langle e^{k+1}, e^{k}\rangle_{1, x}-\langle e^{k}, e^{k-1}\rangle_{1, x}\Big]\\ \quad +\frac{a^{2}}{\tau}\Big[\langle e^{k+1}, e^{k}\rangle_{1, y}- \langle e^{k}, e^{k-1}\rangle_{1, y}\Big]\\ = \frac{1}{\tau}(G^{k}-G^{k-1}). \end{array} \end{eqnarray} (4.17)

    In what follows, we estimate each term on the right of Eq (4.16). Applying the inequalities 2\alpha\beta\leq\frac{1}{\varepsilon}\alpha^2+\varepsilon \beta^2 and (\alpha+\beta)^2\leq2(\alpha^2+\beta^2) , Eq (4.15) and Lemma 4.1 , we can get that

    \begin{eqnarray} \begin{array}{ll} \langle f(u^{k})-f(U^{k}), 2\delta_{\hat{t}}e^k\rangle & \leq 2c_1^2(\|e^k\|^2+\|e^{k-n_1}\|^2)+\frac{1}{2} (\|\delta_te^{k+\frac{1}{2}}\|^2+\|\delta_te^{k-\frac{1}{2}}\|^2) \\ &\leq 2c_1^2(\|e^k\|^2+\|e^{k-n_1}\|^2)+\frac{1}{2}(G^k+G^{k-1}). \end{array} \end{eqnarray} (4.18)

    Using the Cauchy-Schwarz inequality and Lemma 4.1, it is easy to find that

    \begin{eqnarray} \begin{array}{ll} \langle(R_{2})^{k}, 2\delta_{\hat{t}}e^k\rangle \leq c_6^2(b_2-b_1)(d_2-d_1)(\tau^2+h_x^2+h_y^2+ \frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2})^2+ \frac{1}{2}(G^k+G^{k-1}). \end{array} \end{eqnarray} (4.19)

    Inserting Eqs (4.17)–(4.19) into Eq (4.16), adopting Lemma 3.1 and Eq (4.5b) in Lemma 4.1, then replacing k with \gamma and summing \gamma from 0 to k , we can arrive at

    \begin{eqnarray} \begin{array}{ll} G^k\leq \tau(\sum\limits_{\gamma = 0}^k G^\gamma+\sum\limits_{\gamma = -1}^{k-1} G^\gamma)+ \tau\frac{c_1^2(b_2-b_1)^2(d_2-d_1)^2(1+r_x^2+r_y^2)} {3[(b_2-b_1)^2+(d_2-d_1)^2]a^2}\sum\limits_{\gamma = -1}^{k-1}G^\gamma \\ +\tau\frac{c_1^2(b_2-b_1)^2(d_2-d_1)^2(1+r_x^2+r_y^2)} {3[(b_2-b_1)^2+(d_2-d_1)^2]a^2} \sum\limits_{\gamma = -n_1-1}^{k-n_1-1}G^\gamma+\tau \sum\limits^{k}_{\gamma = 0}\|(R_{2})^{\gamma}\|^2\\ \leq \tau\{2+\frac{2c_1^2(b_2-b_1)^2(d_2-d_1)^2(1+r_x^2+r_y^2)} {3[(b_2-b_1)^2+(d_2-d_1)^2]a^2}\}\sum\limits_{\gamma = 0}^k G^\gamma\\ \quad +Tc_6^2(b_2-b_1)(d_2-d_1)(\tau^2+h_x^2+h_y^2+ \frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2})^2. \end{array} \end{eqnarray} (4.20)

    As \tau\{4+\frac{4c_1^2(b_2-b_1)^2(d_2-d_1)^2(1+r_x^2+r_y^2)} {3[(b_2-b_1)^2+(d_2-d_1)^2]a^2}\}\leq 1 , applying Lemma 3.2 to Eq (4.20) derives that

    \begin{eqnarray} G^k\leq c_8(\tau^2+h_x^2+h_y^2+ \frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2})^2, \quad 0\leq k\leq l. \end{eqnarray} (4.21)

    Taking k = l in Eq (4.21) and using Eq (4.5c) derives that Eq (4.9) holds for k = l+1 . By the induction principle, the claimed result holds. The proof is completed.

    Remark II From Theorem 4.1, we can find that the numerical solutions obtained by the second EFDM (4.4a)–(4.4c) converge to the exact solution with an order of O(\tau^{2}+h_{x}^{2}+h_{y}^2+\frac{\tau^2}{h_{x}^{2}}+\frac{\tau^2}{h_{y}^{2}}) in the H^{1} -norm without any restriction on r_{x} and r_{y} . From the truncation error Eq (4.3), it follows that the second EFDM (4.4a)–(4.4c) is conditionally consistent, and (R_{2})^{k}_{i, j} tends to zero as \tau = o(h_{x}) , \tau = o(h_{y}) , and h_{x} and h_{y} tend to zero. What's more, although the condition \tau\le\sigma h^{\frac{3}{2}+\epsilon} ( \sigma and \epsilon are both positive constants) is necessary in Theorem 4.1, we would like to adopt the grid \tau = h^{2} and h_{x} = h_{y} = h to obtain numerical solutions with an optimal convergent order of O(h^{2}) in practical computations.
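    For instance, under the grid choice h_{x} = h_{y} = h and \tau = h^{2} , every term of the error bound in Eq (4.9) is of order h^{2} or higher:

    \begin{eqnarray*} \tau^{2} = h^{4}, \quad h_{x}^{2} = h_{y}^{2} = h^{2}, \quad \frac{\tau^{2}}{h_{x}^{2}} = \frac{\tau^{2}}{h_{y}^{2}} = \frac{h^{4}}{h^{2}} = h^{2}, \end{eqnarray*}

    so that O(\tau^{2}+h_{x}^{2}+h_{y}^{2}+\frac{\tau^2}{h_{x}^{2}}+\frac{\tau^2}{h_{y}^{2}}) = O(h^{2}) .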

    In what follows, a REM in both space and time is developed for the second EFDM (4.4a)–(4.4c).

    Theorem 4.2 Assume that u({\mathbf x}, t) \in \mathcal {C}^{4, 4}(\Omega\times [-s, T]) is the exact solution of Eqs (1.2a)–(1.2c), and U^{k}_{i, j}(\tau, h_{x}, h_{y}) is the numerical solution of the second EFDM (4.4a)–(4.4c) at ({\mathbf x}_{i, j}, t_{k}) using meshsizes h_{x} , h_{y} and \tau . Define the following REM

    \begin{equation} (U_{E})^{k}_{i, j} = \left\{\begin{array}{ll} \frac{1}{9}U_{i, j}^{k}(\tau, h_{x}, h_{y})- \frac{4}{9}U_{i, j}^{2k}(\frac{\tau}{2}, h_{x}, h_{y})-\frac{4}{9}U_{2i, 2j}^{2k}(\frac{\tau}{2}, \frac{h_{x}}{2}, \frac{h_{y}}{2})\\ +\frac{16}{9}U_{2i, 2j}^{4k}(\frac{\tau}{4}, \frac{h_{x}}{2}, \frac{h_{y}}{2}), \quad {\mathbf x}_{i, j}\in \bar{\Omega}_{h}, \quad 1\le k\le n, \\ \phi({\mathbf x}_{i, j}, t_{k}), \quad {\mathbf x}_{i, j}\in \bar{\Omega}_{h}, \quad -n_{1}\le k \le 0. \end{array}\right. \end{equation} (4.22)

    Then, under the conditions of Theorem 4.1 and the assumptions that h_{x} = O(h_{y}) and \frac{\tau}{h_{x}} , \frac{\tau}{h_{y}} \longrightarrow 0 as the grid parameters \tau , h_{x} , h_{y} \longrightarrow 0 , we arrive at

    \begin{eqnarray} \|u^{k}-(U_E)^{k}\|_{1} \le C\Big[\tau^{2}+\tau^{4}+h_{x}^{4}+h_{y}^{4} +\frac{\tau^{4}}{h_{x}^{2}}+\frac{\tau^{4}}{h_{y}^{2}} +(\frac{\tau}{h_{x}})^{4}+(\frac{\tau}{h_{y}})^{4}\Big], \end{eqnarray} (4.23)

    as long as the grid parameters \tau , h_{x} and h_{y} are small enough.

    Proof. Let w_1({\mathbf x}, t) = \frac{1}{12}\frac{\partial^4u}{\partial t^4}({\mathbf x}, t) , w_2({\mathbf x}, t) = \frac{a^2}{12}\frac{\partial^4u}{\partial x^4}({\mathbf x}, t) , w_{3}({\mathbf x}, t) = \frac{a^2}{12}\frac{\partial^{4}u}{\partial y^{4}}({\mathbf x}, t) and w_4({\mathbf x}, t) = a^2\frac{\partial^2u}{\partial t^2}({\mathbf x}, t) . Then, we define the grid functions (w_l)_{i, j}^k = w_l({\mathbf x}_{i, j}, t_k) , (l = 1, 2, 3, 4) , {\mathbf x}_{i, j}\in \bar{\Omega}_h , -n_1\leq k\leq n . Thus, it follows from Eq (4.3) that

    \begin{equation} (R_2)_{i, j}^k = \tau^2(w_1)_{i, j}^k-h_x^2(w_2)_{i, j}^k-h_y^2(w_3)_{i, j}^k +(\frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2})(w_4)_{i, j}^k+O(\tau^4+h_x^4+h_y^4+\frac{\tau^4}{h_x^2}+\frac{\tau^4}{h_y^2}). \end{equation} (4.24)

    Furthermore, the error Eqs (4.10a)–(4.10c) can be rewritten as

    \begin{array}{l} (1+r_{x}^{2}+r_{y}^{2})\delta_{t}^{2}e_{i, j}^{k}-a^2(\delta_{x}^{2}e_{i, j}^{k} +\delta_y^2e_{i, j}^{k}) = f(u_{i, j}^{k})-f(U_{i, j}^{k})+ \tau^2(w_{1})_{i, j}^{k}-h_{x}^{2}(w_2)_{i, j}^k-h_y^2(w_3)_{i, j}^k\\ +(\frac{\tau^2}{h_{x}^{2}}+\frac{\tau^2}{h_{y}^{2}})(w_4)_{i, j}^k+O(\tau^{4}+h_{x}^{4}+h_{y}^{4} +\frac{\tau^4}{h_x^2}+\frac{\tau^{4}}{h_{y}^{2}}), \; \; \; {\mathbf x}_{i, j}\in \Omega_{h}, \; \; 0\leq k \leq n, \end{array} (4.25a)
    \begin{array}{l} e_{i, j}^k = 0, \quad {\mathbf x}_{i, j}\in \Omega_h, \quad -n_1\leq k \leq 0, \end{array} (4.25b)
    \begin{array}{l} e_{0, j}^k = 0, \quad e_{m_1, j}^k = 0, \; \; \; 0\leq j\leq m_2, \quad 0\leq k \leq n, \end{array} (4.25c)
    \begin{array}{l} e_{i, 0}^k = 0, \quad e_{i, m_2}^k = 0, \; \; \; 0\leq i\leq m_1, \quad 0\leq k \leq n. \end{array} (4.25d)

    Suppose that functions v_{1}({\mathbf x}, t) , v_{2}({\mathbf x}, t) , v_{3}({\mathbf x}, t) and v_{4}({\mathbf x}, t) are exact solutions of the following IBVPs

    \begin{array}{l} \frac{\partial^2v_1}{\partial{t}^2}-a^{2}(\frac{\partial^2v_1}{\partial x^2} +\frac{\partial^2v_1}{\partial y^2}) = w_1({\mathbf x}, t)+h_{1}({\mathbf x}, t), \; \; ({\mathbf x}, t)\in \Omega\times[0, T], \end{array} (4.26a)
    \begin{array}{l} v_1({\mathbf x}, t) = 0, \quad ({\mathbf x}, t)\in \Omega\times[-s, 0], \end{array} (4.26b)
    \begin{array}{l} v_1({\mathbf x}, t) = 0, \quad ({\mathbf x}, t)\in\partial\Omega\times[0, T]; \end{array} (4.26c)
    \begin{array}{l} \frac{\partial^2v_2}{\partial{t}^2}-a^{2}(\frac{\partial^2v_2}{\partial x^2} +\frac{\partial^2v_2}{\partial y^2}) = w_2({\mathbf x}, t)+h_{2}({\mathbf x}, t), \; \; \; ({\mathbf x}, t)\in \Omega\times[0, T], \end{array} (4.27a)
    \begin{array}{l} \ v_2({\mathbf x}, t) = 0, \quad ({\mathbf x}, t)\in \Omega\times[-s, 0], \end{array} (4.27b)
    \begin{array}{l} v_2({\mathbf x}, t) = 0, \quad ({\mathbf x}, t)\in\partial\Omega\times[0, T]; \end{array} (4.27c)
    \begin{array}{l} \frac{\partial^2v_3}{\partial{t}^2}-a^{2}(\frac{\partial^2v_3}{\partial x^2} +\frac{\partial^2v_3}{\partial y^2}) = w_3({\mathbf x}, t)+h_{3}({\mathbf x}, t), \quad ({\mathbf x}, t)\in \Omega\times[0, T], \end{array} (4.28a)
    \begin{array}{l} v_3({\mathbf x}, t) = 0, \quad ({\mathbf x}, t)\in \Omega\times[-s, 0], \end{array} (4.28b)
    \begin{array}{l} v_3({\mathbf x}, t) = 0, \quad ({\mathbf x}, t)\in\partial\Omega\times[0, T]; \end{array} (4.28c)
    \begin{array}{l} \frac{\partial^2v_4}{\partial{t}^2}-a^{2}(\frac{\partial^2v_4}{\partial x^2} +\frac{\partial^2v_4}{\partial y^2}) = w_4({\mathbf x}, t)+h_{4}({\mathbf x}, t), \; \; \; ({\mathbf x}, t)\in \Omega\times[0, T], \end{array} (4.29a)
    \begin{array}{l} v_4({\mathbf x}, t) = 0, \quad ({\mathbf x}, t)\in \Omega\times[-s, 0], \end{array} (4.29b)
    \begin{array}{l} v_4({\mathbf x}, t) = 0, \quad ({\mathbf x}, t)\in\partial\Omega\times[0, T]; \end{array} (4.29c)

    respectively, where

    \begin{array}{l} h_1({\mathbf x}, t) = f_{\mu}(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_1({\mathbf x}, t)+f_{\upsilon}(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_1({\mathbf x}, t-s), \end{array} (4.30a)
    \begin{array}{l} h_2({\mathbf x}, t) = f_{\mu}(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_2({\mathbf x}, t) +f_{\upsilon}(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_2({\mathbf x}, t-s), \end{array} (4.30b)
    \begin{array}{l} h_3({\mathbf x}, t) = f_{\mu}(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_3({\mathbf x}, t)+f_{\upsilon}(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_3({\mathbf x}, t-s), \end{array} (4.30c)
    \begin{array}{l} h_4({\mathbf x}, t) = f_{\mu}(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_4({\mathbf x}, t)+f_{\upsilon}(u({\mathbf x}, t), u({\mathbf x}, t-s), {\mathbf x}, t)v_4({\mathbf x}, t-s). \end{array} (4.30d)

    Denote (v_{l})_{i, j}^k = v_{l}({\mathbf x}_{i, j}, t_k) , l = 1, 2, 3, 4 . Approximating IBVPs (4.26a)–(4.26c), IBVPs (4.27a)–(4.27c), IBVPs (4.28a)–(4.28c) and IBVPs (4.29a)–(4.29c) by using the numerical methods similar to Eqs (4.4a)–(4.4c) yields that

    \begin{array}{l} (1+r_x^2+r_y^2)\delta_t^2(v_1)_{i, j}^k-a^2\Big[\delta_x^2(v_1)_{i, j}^k +\delta_y^2(v_1)_{i, j}^k\Big] = (w_1)_{i, j}^k+h_1({\mathbf x}_{i, j}, t_k) \\ +O(\tau^2+h_x^2+h_y^2+\frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2}), \; \; \; {\mathbf x}_{i, j}\in \Omega_h, \; \; \; 0\leq k \leq n, \end{array} (4.31a)
    \begin{array}{l} (v_1)_{i, j}^k = 0, \; \; \; \; \; \; {\mathbf x}_{i, j}\in \Omega_h, \; \; \; -n_1\leq k \leq 0, \end{array} (4.31b)
    \begin{array}{l} (v_1)_{i, j}^k = 0, \; \; \; \; \; \; {\mathbf x}_{i, j}\in\partial\Omega_{h}, \; \; \; 0\leq k \leq n, \end{array} (4.31c)
    \begin{array}{l} (1+r_{x}^{2}+r_{y}^{2})\delta_t^2(v_2)_{i, j}^k-a^2\Big[\delta_x^2(v_2)_{i, j}^k +\delta_y^2(v_2)_{i, j}^k\Big] = (w_2)_{i, j}^k+h_{2}({\mathbf x}_{i, j}, t_k) \\ +O(\tau^2+h_x^2+h_y^2+\frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2}), \; \; \; {\mathbf x}_{i, j}\in \Omega_h, \; \; \; 0\leq k \leq n, \end{array} (4.32a)
    \begin{array}{l} (v_2)_{i, j}^k = 0, \; \; \; \; \; \; {\mathbf x}_{i, j}\in \Omega_h, \; \; \; -n_1\leq k \leq 0, \end{array} (4.32b)
    \begin{array}{l} (v_2)_{i, j}^k = 0, \; \; \; \; \; \; {\mathbf x}_{i, j}\in\partial\Omega_{h}, \; \; \; 0\leq k \leq n, \end{array} (4.32c)
    \begin{array}{l} (1+r_{x}^{2}+r_{y}^{2})\delta_t^2(v_3)_{i, j}^k-a^2\Big[\delta_x^2(v_3)_{i, j}^k +\delta_y^2(v_3)_{i, j}^k\Big] = (w_3)_{i, j}^k+h_3({\mathbf x}_{i, j}, t_k) \\ +O(\tau^2+h_{x}^{2}+h_{y}^{2}+\frac{\tau^2}{h_{x}^{2}}+\frac{\tau^{2}}{h_{y}^{2}}), \; \; \; {\mathbf x}_{i, j}\in \Omega_h, \; \; \; 0\leq k \leq n, \end{array} (4.33a)
    \begin{array}{l} (v_{3})_{i, j}^k = 0, \; \; \; \; \; \; {\mathbf x}_{i, j}\in \Omega_h, \; \; \; -n_1\leq k \leq 0, \end{array} (4.33b)
    \begin{array}{l} (v_{3})_{i, j}^k = 0, \; \; \; \; \; \; {\mathbf x}_{i, j}\in\partial\Omega_{h}, \; \; \; 0\leq k \leq n, \end{array} (4.33c)
    \begin{array}{l} (1+r_{x}^{2}+r_{y}^{2})\delta_{t}^{2}(v_4)_{i, j}^k-a^2\Big[\delta_x^2(v_4)_{i, j}^k +\delta_y^2(v_4)_{i, j}^k\Big] = (w_4)_{i, j}^k+h_4({\mathbf x}_{i, j}, t_k) \\ +O(\tau^2+h_{x}^{2}+h_{y}^{2}+\frac{\tau^2}{h_{x}^{2}}+\frac{\tau^{2}}{h_{y}^{2}}), \; \; \; {\mathbf x}_{i, j}\in \Omega_h, \; \; \; 0\leq k \leq n, \end{array} (4.34a)
    \begin{array}{l} (v_4)_{i, j}^k = 0, \; \; \; \; \; \; {\mathbf x}_{i, j}\in \Omega_h, \; \; \; -n_1\leq k \leq 0, \end{array} (4.34b)
    \begin{array}{l} (v_4)_{i, j}^k = 0, \; \; \; \; \; \; {\mathbf x}_{i, j}\in\partial\Omega_{h}, \; \; \; 0\leq k \leq n. \end{array} (4.34c)

    Denote q_{i, j}^k = e_{i, j}^{k}+h_{x}^{2}(v_{1})_{i, j}^{k}+h_{y}^{2}(v_{2})_{i, j}^{k}+(\frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2})(v_3)_{i, j}^k+\tau^2(v_4)_{i, j}^k , 0\leq i\leq m_1 , 0\leq j\leq m_2 , -n_1\leq k\leq n . Then, multiplying both sides of Eqs (4.31a)–(4.31c), Eqs (4.32a)–(4.32c), Eqs (4.33a)–(4.33c) and Eqs (4.34a)–(4.34c) by h^{2}_{x} , h^{2}_{y} , (\frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2}) and \tau^2 , respectively, and then adding up the obtained results, we arrive at

    \begin{array}{l} (1+r_{x}^{2}+r_{y}^{2})\delta_t^2q_{i, j}^k-a^2(\delta_x^2q_{i, j}^k+\delta_y^2q_{i, j}^k) = \tilde{h}_{i, j}^k+(\tilde{R_2})_{i, j}^k, \; \; \; \; \; {\mathbf x}_{i, j}\in\Omega_h, \; \; \; 0\leq k \leq n, \end{array} (4.35a)
    \begin{array}{l} q_{i, j}^k = 0, \; \; \; {\mathbf x}_{i, j}\in\bar{\Omega}_h, \; \; \; -n_1\leq k \leq 0, \end{array} (4.35b)
    \begin{array}{l} q_{i, j}^k = 0, \; \; \; \; \; \; {\mathbf x}_{i, j}\in\partial\Omega_{h}, \; \; \; 0\leq k \leq n, \end{array} (4.35c)

    where

    \begin{array}{l} \tilde{h}_{i, j}^{k} = f\big(u^{k}_{i, j}+h^{2}_{x}(v_{1})^{k}_{i, j}+h^{2}_{y}(v_{2})^{k}_{i, j} +(\frac{\tau^2}{h^{2}_{x}}+\frac{\tau^2}{h^{2}_{y}})(v_{3})^{k}_{i, j}+\tau^2(v_{4})^{k}_{i, j}, \\ \quad u^{k-n_{1}}_{i, j}+h^{2}_{x}(v_{1})^{k-n_{1}}_{i, j}+h^{2}_{y}(v_{2})^{k-n_{1}}_{i, j} +(\frac{\tau^2}{h^{2}_{x}}+\frac{\tau^2}{h^{2}_{y}})(v_{3})^{k-n_{1}}_{i, j}+\tau^2(v_{4})^{k-n_{1}}_{i, j}, {\mathbf x}_{i, j}, t_{k}\big) \\ \quad -f(U^{k}_{i, j}, U^{k-n_{1}}_{i, j}, {\mathbf x}_{i, j}, t_{k}), \end{array} (4.36a)
    \begin{array}{l} (\tilde{R_2})_{i, j}^{k} = O\Big(\tau^{4}+h_{x}^{4}+h_{y}^{4}+\tau^{2}h^{2}_{x}+\tau^{2}h^{2}_{y}+\frac{\tau^{4}}{h_{x}^{2}}+\frac{\tau^{4}}{h_{y}^{2}} +\big((\frac{h_{x}}{h_{y}})^{2}+(\frac{h_{y}}{h_{x}})^{2}\big)\tau^{2} \\ \quad +(\frac{\tau}{h_{x}})^{4}+(\frac{\tau}{h_{y}})^{4}+(\frac{\tau^{2}}{h_{x}h_{y}})^{2}\Big). \end{array} (4.36b)

    It is worth mentioning that the following Taylor expansion formula

    \begin{equation} \begin{array}{ll} f(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k) +h^{2}_{x}\Big[f_{\mu}(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)(v_1)_{i, j}^k +f_{\upsilon}(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)(v_1)_{i, j}^{k-n_1}\Big] \nonumber \\ +h_y^2\Big[f_{\mu}(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)(v_2)_{i, j}^k+f_{\upsilon}(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_{k})(v_2)_{i, j}^{k-n_1}\Big]\nonumber \\ +\Big[(\frac{\tau}{h_{x}})^2+(\frac{\tau}{h_{y}})^2\Big]\Big[f_{\mu}(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)(v_3)_{i, j}^k+f_{\upsilon}(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k) (v_3)_{i, j}^{k-n_1}\Big]\nonumber \\ +\tau^2\Big[f_{\mu}(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k)(v_4)_{i, j}^k +f_{\upsilon}(u_{i, j}^k, u_{i, j}^{k-n_1}, {\mathbf x}_{i, j}, t_k) (v_4)_{i, j}^{k-n_1}\Big]+\nonumber \\ O\Big(\tau^4+\frac{\tau^4}{h_x^2}+\frac{\tau^4}{h_y^2}+\tau^2h_x^2+\tau^2h_y^2+(\frac{h_x^2}{h_y^2} +\frac{h_y^2}{h_x^2})\tau^2+h_x^4+h_x^2h_y^2+h_y^4+\nonumber\\ (\frac{\tau}{h_x})^4 +(\frac{\tau^2}{h_xh_y})^2+(\frac{\tau}{h_y})^4\Big) \nonumber \end{array} \end{equation} (4.37)
    \begin{equation} \begin{array}{ll} = f(u^{k}_{i, j}+h^{2}_{x}(v_{1})^{k}_{i, j}+h^{2}_{y}(v_{2})^{k}_{i, j} +(\frac{\tau^2}{h^{2}_{x}}+\frac{\tau^2}{h^{2}_{y}})(v_{3})^{k}_{i, j}+\tau^2(v_{4})^{k}_{i, j}, \nonumber \\ \quad u^{k-n_{1}}_{i, j}+h^{2}_{x}(v_{1})^{k-n_{1}}_{i, j}+h^{2}_{y}(v_{2})^{k-n_{1}}_{i, j} +(\frac{\tau^2}{h^{2}_{x}}+\frac{\tau^2}{h^{2}_{y}})(v_{3})^{k-n_{1}}_{i, j}+\tau^2(v_{4})^{k-n_{1}}_{i, j}, {\mathbf x}_{i, j}, t_{k})\\ +\mathcal {O}\Big(\tau^4 +\frac{\tau^4}{h_x^2}+\frac{\tau^4}{h_y^2}+\tau^2h_x^2+\tau^2h_y^2+ (\frac{h_x^2}{h_y^2}+\frac{h_y^2}{h_x^2})\tau^2+h_x^4+h_x^2h_y^2+h_y^4+ (\frac{\tau}{h_x})^4 +(\frac{\tau^2}{h_xh_y})^2+(\frac{\tau}{h_y})^4\Big) \end{array} \end{equation}

    is applied in Eqs (4.35a)–(4.35c).

    Then, using the same techniques as those proposed in the proof of Theorem 4.1 and using the inequalities \alpha\beta\le (\alpha^2+\beta^2)/2 and \max((\frac{h_{y}}{h_{x}})^2, (\frac{h_{x}}{h_{y}})^2)\le \mu^{-2} , we have

    \begin{align} \|q^{k}\|_1\leq C\Big(\tau^2+h_x^4+h_y^4+\tau^4 +\frac{\tau^4}{h_x^2}+\frac{\tau^4}{h_y^2}+ (\frac{\tau}{h_x})^4 +(\frac{\tau}{h_y})^4\Big), \quad 0\leq k \leq n, \end{align} (4.38)

    where C is a positive constant independent of the grid parameters \tau , h_{x} and h_{y} . Finally, similar to Eq (3.39), applying the triangle inequality to the following equality

    \begin{equation} \begin{array}{ll} u_{i, j}^k-(U_E)_{i, j}^k = \frac{1}{9}\Big\{u_{i, j}^k-U_{i, j}^k(\tau, h_x, h_y)+h^{2}_{x}(v_1)^k_{i, j}+h_y^2(v_2)^k_{i, j}+(\frac{\tau^2}{h_x^2}+\frac{\tau^2}{h_y^2})(v_3)^k_{i, j}\nonumber\\ \quad +\tau^2(v_4)^k_{i, j}\Big\} -\frac{4}{9}\Big\{u_{i, j}^k-U_{i, j}^{2k}(\frac{\tau}{2}, h_x, h_y)+h^{2}_{x}(v_1)^k_{i, j}+h_y^2(v_2)^k_{i, j} +\Big[\frac{(\frac{\tau}{2})^2}{h_x^2}+\frac{(\frac{\tau}{2})^2}{h_y^2}\Big](v_3)^k_{i, j} \nonumber\\ \quad +(\frac{\tau}{2})^2(v_4)^k_{i, j}\Big\} -\frac{4}{9}\Big\{u_{i, j}^k-U_{2i, 2j}^{2k}(\frac{\tau}{2}, \frac{h_x}{2}, \frac{h_y}{2})+(\frac{h_{x}}{2})^2(v_1)^k_{i, j}+(\frac{h_y}{2})^2(v_2)^k_{i, j} +\Big[\frac{(\frac{\tau}{2})^2}{(\frac{h_x}{2})^2}\nonumber\\ \quad +\frac{(\frac{\tau}{2})^2}{(\frac{h_y}{2})^2}\Big](v_3)^k_{i, j}+(\frac{\tau}{2})^2(v_4)^k_{i, j}\Big\} +\frac{16}{9}\Big\{u_{i, j}^k-U_{2i, 2j}^{4k}(\frac{\tau}{4}, \frac{h_x}{2}, \frac{h_y}{2})+(\frac{h_{x}}{2})^2(v_1)^k_{i, j}\nonumber\\ \quad +(\frac{h_y}{2})^2(v_2)^k_{i, j} +\Big[\frac{(\frac{\tau}{4})^2}{(\frac{h_x}{2})^2} +\frac{(\frac{\tau}{4})^2}{(\frac{h_y}{2})^2}\Big](v_3)^k_{i, j}+(\frac{\tau}{4})^2(v_4)^k_{i, j}\Big\} \nonumber \end{array} \end{equation}

    and applying the estimation Eq (4.38) yields the claimed result. The proof is completed.

    Remark III Like Remark II, it follows from Theorem 4.2 that numerical solutions obtained by the second EFDM (4.4a)–(4.4c) combined with the REM in Eq (4.22) converge to the exact solution with an order of O(\tau^{2}+\tau^{4}+h_{x}^{4}+h_{y}^{4} +\frac{\tau^{4}}{h_{x}^{2}}+\frac{\tau^{4}}{h_{y}^{2}} +(\frac{\tau}{h_{x}})^{4}+(\frac{\tau}{h_{y}})^{4}) in the H^{1} -norm. According to this convergence rate, the optimal estimation \|u^k-(U_E)^k\|_1 = O(h^{4}) , 0\leq k \leq n , can be obtained when the optimal temporal and spatial relationship h_{x} = h_{y} = h and \tau = h^{2} is used. Thus, this relationship is applied in practical computations.
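    The combination in Eq (4.22) can be checked against a model error expansion: its weights simultaneously cancel the \tau^{2} , h^{2} and \tau^{2}/h^{2} leading error families. A minimal Python sketch (the four solver runs are replaced by a hypothetical model function carrying exactly those error terms):

```python
def rem2_combine(U_coarse, U_t, U_xt, U_fine):
    """Richardson combination of Eq (4.22) at a common node (x_{i,j}, t_k):
    U_coarse = U^{k}_{i,j}(tau, hx, hy),
    U_t      = U^{2k}_{i,j}(tau/2, hx, hy),
    U_xt     = U^{2k}_{2i,2j}(tau/2, hx/2, hy/2),
    U_fine   = U^{4k}_{2i,2j}(tau/4, hx/2, hy/2).
    The weights sum to one: (1 - 4 - 4 + 16)/9 = 1."""
    return (U_coarse - 4.0 * U_t - 4.0 * U_xt + 16.0 * U_fine) / 9.0

# Model expansion U(tau, h) = u + A*h^2 + B*tau^2 + C*tau^2/h^2: the
# combination cancels all three leading error families to round-off.
u_exact, A, B, C = 3.0, 2.0, 5.0, 7.0
model = lambda tau, h: u_exact + A * h**2 + B * tau**2 + C * tau**2 / h**2
tau, h = 0.01, 0.1
combined = rem2_combine(model(tau, h), model(tau / 2, h),
                        model(tau / 2, h / 2), model(tau / 4, h / 2))
print(abs(combined - u_exact))
```

A direct check: the h^{2} family picks up the weight (1-4-1+4)/9 = 0 , the \tau^{2} family (1-1-1+1)/9 = 0 , and the \tau^{2}/h^{2} family (1-1-4+4)/9 = 0 .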

    Partial differential equations with several delays are extensively applied in scientific and engineering fields. For example, a delayed convective-diffusion equation (see [10])

    \begin{equation} \frac{\partial u}{\partial t} = \frac{\partial^{2}u}{\partial x^2} +\upsilon(g(u(x, t-\tau_{1})))\frac{\partial u}{\partial x} +c[f(u(x, t-\tau_{2}))-u(x, t)], \nonumber \end{equation}

    which arises from furnace control, models a furnace used to process metal sheets. Here, u is the temperature distribution in the metal sheet, \upsilon represents the velocity of the sheet, f denotes a distributed temperature source function, and both \upsilon and f are dynamically adapted by a controlling device that monitors the current temperature distribution. However, the finite speed of the controller leads to two fixed delays of lengths \tau_{1} and \tau_{2} . Thus, it is necessary and important to study numerical solutions of partial differential equations with several delays.

    This section focuses on the generalization of the proposed numerical algorithms and the corresponding analytical results to the 2D nonlinear wave equation with several delays as follows

    \begin{equation} \left \{\begin{array}{ll} u_{tt}-a^{2} (u_{xx}+u_{yy}) = f(u({\mathbf x}, t), u({\mathbf x}, t-s_{1}(t)), u({\mathbf x}, t-s_{2}(t)), \ldots, u({\mathbf x}, t-s_{L}(t)), {\mathbf x}, t), \\ \quad ({\mathbf x}, t)\in \Omega\times[0, T], \\ u({\mathbf x}, t) = \psi({\mathbf x}, t), \quad {\mathbf x}\in \bar{\Omega}, \quad t\in [-s, 0], \\ u({\mathbf x}, t) = \phi({\mathbf x}, t), \quad ({\mathbf x}, t)\in \partial\Omega\times[0, T], \\ \end{array}\right. \end{equation} (5.1)

    where s_{l}(t) > 0 (l = 1, 2, \ldots, L) and s = \max_{1\le l \le L}{s_{l}(t)} .

    As s_{l}(t) = s_{l} ( l = 1, 2, \cdots, L ) are all constant delays, i.e., independent of the temporal variable t , the constrained temporal grid \tau = s_{1}/n_{1} = s_{2}/n_{2} = \cdots = s_{L}/n_{L} ( n_{l} \in \mathbb{Z}^{+} ) is adopted so that each t = K s_{l} ( K \in \mathbb{Z}^{+} ) locates at a temporal grid node, thus avoiding the use of Lagrangian interpolation and reducing the computational complexity. In this case, the first and second EFDMs, and their corresponding REMs for Eqs (1.2a)–(1.2c), can be easily adapted to the numerical solution of Eq (5.1) by replacing f(U^{k}_{i, j} , U^{k-n_{1}}_{i, j} , {\mathbf x}_{i, j}, t_{k}) with f(U^{k}_{i, j} , U^{k-n_{1}}_{i, j} , U^{k-n_{2}}_{i, j} , \ldots , U^{k-n_{L}}_{i, j} , {\mathbf x}_{i, j} , t_{k}) . Also, the corresponding theoretical results are derived by using similar analytical methods as long as f(u({\mathbf x}, t) , u({\mathbf x}, t-s_{1}) , u({\mathbf x}, t-s_{2}) , \ldots , u({\mathbf x}, t-s_{L}) , {\mathbf x} , t) is \varepsilon_{0} -continuous with respect to its first, second, \ldots, (L+1) -th variables.

    When s_{l}(t) ( l = 1, 2, \cdots, L ) are variable delays, a non-constrained temporal grid, i.e., \tau = T/n , should be used, and linear Lagrange interpolation is applied to approximate the nonlinear delayed terms. Let s_{l}(t_{k}) = P_{l}\tau+\theta_{l}\tau ( P_{l}\in \mathbb{Z}^{+} , 0\le\theta_{l}\le 1 ). Then the delayed term u({\mathbf x}, t-s_{l}(t_{k})) can be approximated by linear Lagrange interpolation as u({\mathbf x}_{i, j}, t-s_{l}(t_{k})) = (1-\theta_{l})u^{k-P_{l}}_{i, j} +\theta_{l}u^{k-P_{l}-1}_{i, j} +O(\tau^2) . Write \widehat{U}^{k, l}_{i, j} = (1-\theta_{l})U^{k-P_{l}}_{i, j} +\theta_{l}U^{k-P_{l}-1}_{i, j} , which is viewed as a second-order approximation to u({\mathbf x}_{i, j}, t-s_{l}(t_{k})) . Consequently, the first and second EFDMs for Eqs (1.2a)–(1.2c) can be easily adapted to the numerical solution of Eq (5.1) by replacing f(U^{k}_{i, j}, U^{k-n_{1}}_{i, j}, {\mathbf x}_{i, j}, t_{k}) with f(U^{k}_{i, j}, \widehat{U}^{k, 1}_{i, j}, \widehat{U}^{k, 2}_{i, j}, \ldots, \widehat{U}^{k, L}_{i, j}, {\mathbf x}_{i, j}, t_{k}) . Also, the convergence analysis of the modified first and second EFDMs can be carried out using analytical techniques similar to those developed in Theorem 3.1 and Theorem 4.1.
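The splitting s_{l}(t_{k}) = P_{l}\tau+\theta_{l}\tau and the resulting two-point interpolation can be sketched as follows (a schematic NumPy fragment in our own naming; `U_hist` maps a time level, possibly negative for the initial history, to the stored solution at that level):

```python
import numpy as np

def delay_split(s, tau):
    """Split a delay s into s = P*tau + theta*tau with integer P and 0 <= theta < 1."""
    P = int(np.floor(s / tau))
    return P, s / tau - P

def interp_delayed(U_hist, k, s, tau):
    """Second-order linear Lagrange interpolation of the delayed value u(., t_k - s)."""
    P, theta = delay_split(s, tau)
    return (1.0 - theta) * U_hist[k - P] + theta * U_hist[k - P - 1]

# Illustration with the scalar history u(t) = t^2, tau = 0.1, delay s = 0.25:
U_hist = {m: (m * 0.1) ** 2 for m in range(-5, 11)}
approx = interp_delayed(U_hist, 10, 0.25, 0.1)   # approximates u(1.0 - 0.25) = 0.5625
```

For the scalar illustration the interpolation error is about 2.5e-3 = O(\tau^2), consistent with the second-order claim.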

    This section is devoted to numerical experiments. Three numerical examples are solved by the proposed methods. For convenience, denote the first EFDM in Eqs (3.3a)–(3.3c) by EFDM-I, and the second EFDM (4.4a)–(4.4c) by EFDM-II, respectively. Meanwhile, REM-I and REM-II are used to represent EFDM-I combined with REM (3.25), and EFDM-II combined with REM (4.22), respectively.

    To facilitate numerical calculations, the same meshsizes in the spatial directions are adopted (i.e., h_{x} = h_{y} = h ). Besides, as EFDM-I and REM-I are used, we should carefully choose temporal and spatial meshsizes to satisfy r_{x}^{2}+r_{y}^{2} < 1 for EFDM-I and r_{x}^{2}+r_{y}^{2} < \frac{1}{4} for REM-I. By Theorem 3.2, Theorem 4.1, Theorem 4.2, and Remarks I–III, h_{x} = h_{y} = h and \tau = h^{2} are often applied for REM-I, EFDM-II and REM-II to obtain numerical solutions with optimally convergent rates.

    The errors ME = \max\limits_{0\le k\le n}\|e^k\|_{\infty} , LE = \max\limits_{0\leq k\leq n}\|e^k\| and HE = \max\limits_{0\leq k\leq n}\|e^k\|_1 , the corresponding orders denoted by r_{\infty} , r_{L^2} and r_{H^1} , respectively, and the CPU time in seconds are used to test the accuracy and performance of our algorithms. All computer programs are run in MATLAB R2016a.
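On a uniform grid with h_{x} = h_{y} = h , these measures can be realized as discrete L^{\infty} -, L^{2} - and H^{1} -norms, with the observed order computed from errors on successively halved meshes. The following NumPy sketch (our naming, not the authors' MATLAB code) shows one common realization:

```python
import numpy as np

def linf_norm(e):
    return np.max(np.abs(e))

def l2_norm(e, h):
    # discrete L2-norm of a 2D grid function: (h^2 * sum e_ij^2)^(1/2)
    return h * np.sqrt(np.sum(e ** 2))

def h1_norm(e, h):
    # L2 part plus the forward-difference H1-seminorm
    ex = np.diff(e, axis=0) / h
    ey = np.diff(e, axis=1) / h
    semi2 = h * h * (np.sum(ex ** 2) + np.sum(ey ** 2))
    return np.sqrt(l2_norm(e, h) ** 2 + semi2)

def order(err_coarse, err_fine):
    # observed convergence rate when the mesh size is halved
    return np.log2(err_coarse / err_fine)
```

For instance, the entry r_{\infty} = 1.999 in Table 1 corresponds to order(3.3184e-6, 8.3e-7).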

    Example 6.1. Set \Omega = [0, 1]\times[0, 1] . On \Omega\times[-s, T] , consider the numerical solutions of the following delayed wave equation

    \begin{equation} \frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}+u^2({\mathbf x}, t)u^2({\mathbf x}, t-s)+f({\mathbf x}, t), \; \; {\mathbf x}\in\Omega, \; \; 0\leq t \leq T \nonumber \\ \end{equation}

    with Dirichlet initial-boundary conditions by using our numerical methods. Here, f({\mathbf x}, t) = -(\sin{x}+\cos{y})^4(\sin{t})^2(\sin{(t-s)})^2 . Initial-boundary-value conditions (IBVCs) are determined by the exact solution u({\mathbf x}, t) = (\sin{x}+\cos{y})\sin{t} .

    In Table 1, \tau = \frac{h}{2} is adopted to satisfy the condition r_{x}^{2}+r_{y}^{2} < 1 . Table 1 shows that EFDM-I is second-order accurate in time.
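For reference, the time stepping of Example 6.1 can be reproduced with a plain leapfrog-in-time, centered-in-space scheme in the spirit of EFDM-I (a NumPy sketch, not the authors' MATLAB code; for brevity the history and the first two levels are seeded with the exact solution, and the source term is derived from it):

```python
import numpy as np

def exact(X, Y, t):
    return (np.sin(X) + np.cos(Y)) * np.sin(t)

def source(X, Y, t, s):
    # f = u_tt - (u_xx + u_yy) - u^2(x,t) u^2(x,t-s) for the exact solution above,
    # whose wave part cancels: u_tt - (u_xx + u_yy) = 0
    return -((np.sin(X) + np.cos(Y)) ** 4) * np.sin(t) ** 2 * np.sin(t - s) ** 2

def efdm1_example61(M=16, n1=1, s=0.01, T=0.2):
    h = 1.0 / M
    tau = s / n1                              # constrained grid: delay = n1 steps
    assert 2.0 * (tau / h) ** 2 < 1.0         # stability: r_x^2 + r_y^2 < 1
    n = int(round(T / tau))
    x = np.linspace(0.0, 1.0, M + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    # levels k = -n1, ..., 1 seeded from the exact solution (history + start-up)
    U = [exact(X, Y, k * tau) for k in range(-n1, 2)]
    for k in range(1, n):
        t = k * tau
        Uk, Ud = U[-1], U[-1 - n1]            # current level k and delayed level k - n1
        lap = np.zeros_like(Uk)
        lap[1:-1, 1:-1] = (Uk[2:, 1:-1] + Uk[:-2, 1:-1] + Uk[1:-1, 2:]
                           + Uk[1:-1, :-2] - 4.0 * Uk[1:-1, 1:-1]) / h ** 2
        Unew = 2.0 * Uk - U[-2] + tau ** 2 * (lap + Uk ** 2 * Ud ** 2
                                              + source(X, Y, t, s))
        bnd = exact(X, Y, t + tau)            # Dirichlet boundary values from phi
        Unew[0, :], Unew[-1, :] = bnd[0, :], bnd[-1, :]
        Unew[:, 0], Unew[:, -1] = bnd[:, 0], bnd[:, -1]
        U.append(Unew)
    return float(np.max(np.abs(U[-1] - exact(X, Y, n * tau))))
```

With M = 16 and \tau = s = 0.01 , the stability quantity r_{x}^{2}+r_{y}^{2} \approx 0.05 lies well inside the stable region, and the final maximum error is of the size suggested by Tables 1 and 2.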

    Table 1.  Numerical results obtained by EFDM-I for Example 6.1 , s = 0.01 , T = 1 , (\tau = 0.5 h) .
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/50 3.3184\times10^{-6} 1.7790\times10^{-6} 8.2345\times10^{-6} 0.082
    1/100 8.3000\times10^{-7} 1.999 4.4483\times10^{-7} 2.000 2.0594\times10^{-6} 2.000 0.514
    1/200 2.0752\times10^{-7} 2.000 1.1121\times10^{-7} 2.000 5.1490\times10^{-7} 2.000 3.931
    1/400 5.1880\times10^{-8} 2.000 2.7803\times10^{-8} 2.000 1.2873\times10^{-7} 2.000 31.872


    In Table 2 and Table 3, the timestep \tau = 1\times 10^{-4} is taken to test the spatial accuracies of EFDM-I and REM-I. Table 2 and Table 3 show that EFDM-I and REM-I are second-order and fourth-order accurate in space, respectively.

    Table 2.  Numerical results obtained by EFDM-I for Example 6.1 , s = 0.01 , T = 1 , (\tau = 1\times10^{-4}) .
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/2 2.2247\times10^{-3} 1.1124\times10^{-3} 4.586\times10^{-3} 0.045
    1/4 6.5754\times10^{-4} 1.759 3.5141\times 10^{-4} 1.663 1.5731\times10^{-3} 1.544 0.081
    1/8 1.6883\times10^{-4} 1.962 9.1542\times10^{-5} 1.941 4.1991\times10^{-4} 1.905 0.215
    1/16 4.2969\times10^{-5} 1.974 2.3100\times10^{-5} 1.987 1.0669\times10^{-4} 1.977 0.697

    Table 3.  Numerical results obtained by REM-I for Example 6.1 , s = 0.01 , T = 1 , (\tau = 1\times10^{-4}) .
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/2 1.3515\times10^{-4} 6.7575\times10^{-5} 2.7862\times10^{-4} 0.127
    1/4 6.6217\times10^{-6} 4.351 4.4421\times10^{-6} 3.927 2.0899\times10^{-5} 3.737 0.282
    1/8 4.5813\times10^{-7} 3.853 2.9362\times10^{-7} 3.919 1.6356\times10^{-6} 3.676 0.890
    1/16 2.8925\times10^{-8} 3.985 1.6823\times10^{-8} 4.126 1.2135\times10^{-7} 3.753 3.120


    In Table 4, \tau = h^{2} is taken. By Theorem 3.2, we have \max_{-n_{1}\le k \le n}\{\|u^{k}-(U_{E})^{k}\|_{1}\} = O(\tau^{2}+h^{4}_{x}+h^{4}_{y}) = O(h^{4}) as \tau = h^{2} . It is clearly observed from Table 4 that the numerical solutions obtained by REM-I are convergent with an order of O(h^{4}) in the L^{2} -, H^{1} - and L^{\infty} -norms.
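The spatial Richardson extrapolation behind REM-I can be sketched as follows: two second-order solutions on mesh sizes h and h/2 are combined on the coarse nodes to cancel the leading O(h^{2}) error term (a schematic fragment under the assumption of an error expansion e = c(x, y)h^{2} + O(h^{4}) of the kind the REM analysis relies on; the synthetic data below is ours):

```python
import numpy as np

def richardson(U_h, U_h2):
    """Combine solutions on meshes h (shape (M+1, M+1)) and h/2 (shape (2M+1, 2M+1)):
    (4 U_{h/2} - U_h) / 3 on the coarse nodes is fourth-order accurate."""
    return (4.0 * U_h2[::2, ::2] - U_h) / 3.0

# Synthetic check: fabricate "numerical solutions" with an exact c(x,y) h^2 error term.
h = 0.25
xc = np.linspace(0.0, 1.0, 5)
Xc, Yc = np.meshgrid(xc, xc, indexing="ij")
xf = np.linspace(0.0, 1.0, 9)
Xf, Yf = np.meshgrid(xf, xf, indexing="ij")
u_c = np.sin(Xc + Yc)
U_h = u_c + (Xc + 1.0) * h ** 2
U_h2 = np.sin(Xf + Yf) + (Xf + 1.0) * (h / 2) ** 2
U_E = richardson(U_h, U_h2)   # recovers u_c exactly for this synthetic error
```

Since the extrapolation reuses two cheap second-order solves, it improves accuracy at far lower cost than refining a single second-order grid, which is exactly the efficiency gain reported for the REMs.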

    Table 4.  Numerical results obtained by REM-I for Example 6.1 , s = 0.01 , T = 1 , (\tau = h^2) .
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/10 9.4174\times10^{-7} - 4.8164\times10^{-7} - 2.228\times10^{-6} - 0.051
    1/20 5.9300\times10^{-8} 3.989 3.0126\times10^{-8} 3.999 1.4220\times10^{-7} 3.970 0.230
    1/30 1.1705\times10^{-8} 4.002 5.9652\times10^{-9} 3.994 2.8672\times10^{-8} 3.949 1.025
    1/40 3.5988\times10^{-9} 4.100 1.8360\times10^{-9} 4.096 9.0027\times10^{-9} 4.027 3.097


    In Table 5 and Table 6, \tau = h^{2} is used. By Remark II and Theorem 4.1, we can infer \max_{-n_{1}\le k \le n}\|e^{k}\|_{1} = O(h^{2}) for EFDM-II with \tau = h^{2} . By Remark III and Theorem 4.2, we can derive \max_{-n_{1}\le k \le n}\{\|u^{k}-(U_{E})^{k}\|_{1}\} = O(h^{4}) for REM-II with \tau = h^{2} . Table 5 and Table 6 clearly show that the numerical solutions obtained by EFDM-II and REM-II possess convergence orders of O(h^{2}) and O(h^{4}) in the L^{2} -, H^{1} - and L^{\infty} -norms, respectively.

    Table 5.  Numerical results obtained by EFDM-II for Example 6.1 , s = 0.01 , T = 1 , (\tau = h^2) .
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/10 2.7125\times10^{-3} 1.467\times10^{-3} 6.751\times10^{-3} 0.0311
    1/20 6.8927\times10^{-4} 1.977 3.6976\times10^{-4} 1.988 1.7092\times10^{-3} 1.982 0.061
    1/30 3.0704\times10^{-4} 1.994 1.6457\times10^{-4} 1.997 7.6142\times10^{-4} 1.994 0.232
    1/40 1.7280\times10^{-4} 1.998 9.2619\times10^{-5} 1.998 4.2865\times10^{-4} 1.997 0.695

    Table 6.  Numerical results obtained by REM-II for Example 6.1 , s = 0.01 , T = 1 , (\tau = h^2) .
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/10 1.5748\times10^{-6} 1.0746\times10^{-6} 5.559\times10^{-6} 0.1500
    1/20 9.6861\times10^{-8} 4.023 6.4358\times10^{-8} 4.062 3.4267\times10^{-7} 4.020 1.292
    1/30 1.8432\times10^{-8} 4.092 1.2358\times10^{-8} 4.070 6.6824\times10^{-8} 4.032 6.175
    1/40 3.6389\times 10^{-9} 5.640 2.6048\times10^{-9} 5.412 1.6076\times10^{-8} 4.953 19.494


    From Table 7, we can find that to obtain much more accurate solutions, the REMs often cost much less time. For example, to achieve ME \approx 1.0\times 10^{-8} , the CPU times of REM-I and REM-II are both much less than that of EFDM-I. Clearly, REM-I is the most efficient method, while EFDM-II is the least efficient.

    Table 7.  Comparisons of numerical results obtained by the current algorithms for Example 6.1 , s = 0.01 , T = 1 .
    Methods EFDM-I REM-I EFDM-II REM-II
    (\tau, h) (\frac{1}{800}, \frac{1}{400}) (\frac{1}{900}, \frac{1}{30}) (\frac{1}{140^{2}}, \frac{1}{140}) (\frac{1}{900}, \frac{1}{30})
    ME 5.1880\times10^{-8} 1.1705\times 10^{-8} 1.9212\times10^{-5} 1.8432 \times 10^{-8}
    CPU 31.872 1.025 48.804 6.175


    Figure 1 displays that the numerical solutions provided by EFDM-I and EFDM-II coincide with the exact solution very well, which illustrates that our methods are accurate and efficient.

    Figure 1.  Example 6.1 : Numerical solution at T = 1 obtained by EFDM-I with (\tau, h) = (1.0\times 10^{-04}, \frac{1}{16}) (left), numerical solution at T = 1 obtained by EFDM-II with (\tau, h) = (1.0\times 10^{-04}, \frac{1}{16}) (middle) and exact solution at T = 1 (right).

    In a word, Tables 1–7 and Figure 1 confirm that the theoretical findings agree with the numerical findings very well.

    Example 6.2. Let \Omega = (0, 1)\times(0, 1) . On \Omega\times [-s, T] , we consider numerical solutions of the following wave equation with three constant delays

    \begin{eqnarray} u_{tt}- \Delta{u(\mathbf{x}, t)} = g(u(\mathbf{x}, t), u(\mathbf{x}, t-s_1), u(\mathbf{x}, t-s_2), u(\mathbf{x}, t-s_3))+f(\mathbf{x}, t), \end{eqnarray} (6.1)

    where s_1 = 0.1875 , s_2 = 0.3125 , s_3 = 0.5625 , s = \max(s_{1}, s_{2}, s_{3}) ,

    \begin{equation} \begin{array}{ll} g(u(\mathbf{x}, t), u(\mathbf{x}, t-s_1), u(\mathbf{x}, t-s_2), u(\mathbf{x}, t-s_3))\\ \quad = u^2(\mathbf{x}, t)+[u(\mathbf{x}, t-s_1)-1] u(\mathbf{x}, t-s_2)[u(\mathbf{x}, t-s_3)+1], \\ f(\mathbf{x}, t) = [-\exp(x+y)\sin(t)-3] \exp(x+y)\sin(t)\\ \quad -[\exp(x+y)\sin(t-s_1)-1] \exp(x+y)\sin(t-s_2)[\exp(x+y)\sin(t-s_3)+1]. \nonumber \end{array} \end{equation}

    IBVCs are determined by the exact solution u(\mathbf{x}, t) = \exp(x+y)\sin(t) .

    In Table 8, \tau = \frac{h}{2} is used to satisfy the condition r_{x}^{2}+r_{y}^{2} < 1 . Table 8 shows that the generalized EFDM-I is second-order temporally accurate.

    Table 8.  Numerical results obtained by EFDM-I for Example 6.2, T = 1 , ( \tau = \frac{h}{2} ).
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/8 7.1666\times10^{-4} 3.8867\times10^{-4} 1.809\times10^{-3} 0.0192
    1/16 1.8363\times10^{-4} 1.965 9.8195\times10^{-5} 1.985 4.6060\times10^{-4} 1.974 0.024
    1/32 4.6021\times10^{-5} 1.997 2.4611\times10^{-5} 1.996 1.1567\times10^{-4} 1.994 0.055
    1/64 1.1531\times10^{-5} 1.997 6.1565\times10^{-6} 1.999 2.8950\times10^{-5} 1.998 0.319
    1/512 1.8021\times10^{-7} 2.000 9.6215\times10^{-8} 2.0000 4.5251\times10^{-7} 2.0000 256.052


    In Table 9 and Table 10, \tau = 1.0\times 10^{-4} is fixed to test the spatial accuracies of the generalized EFDM-I and generalized REM-I. Table 9 and Table 10 show that the generalized EFDM-I and generalized REM-I are second-order and fourth-order spatially accurate, respectively.

    Table 9.  Numerical results obtained by EFDM-I for Example 6.2, T = 1 , ( \tau = 1\times10^{-4} ).
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/4 3.0678\times10^{-3} 1.6821\times10^{-3} 7.6089\times10^{-3} 0.1363
    1/8 8.1830\times10^{-4} 1.907 4.4351\times10^{-4} 1.923 2.0644\times10^{-3} 1.882 0.406
    1/16 2.0984\times10^{-4} 1.963 1.1218\times10^{-4} 1.983 5.2619\times10^{-4} 1.972 1.417
    1/32 5.2587\times10^{-5} 1.997 2.8120\times10^{-5} 1.996 1.3216\times10^{-4} 1.993 5.499

    Table 10.  Numerical results obtained by REM-I for Example 6.2, T = 1 , ( \tau = 1\times10^{-4} ).
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/4 4.194\times10^{-5} 2.7443\times10^{-5} 1.2633\times10^{-4} 0.520
    1/8 2.773\times10^{-6} 3.919 1.7571\times10^{-6} 3.965 9.2531\times10^{-6} 3.771 1.862
    1/16 1.707\times10^{-7} 4.022 1.0624\times10^{-7} 4.048 6.8458\times10^{-7} 3.757 7.240
    1/32 1.645\times10^{-8} 3.376 9.8264\times10^{-9} 3.435 6.3410\times10^{-8} 3.432 30.986


    By Remark I and Theorem 3.2, we can find that the generalized REM-I possesses a convergence rate of O(h^{4}) in the H^{1} -norm as h_{x} = h_{y} = h and \tau = h^{2} . Table 11 confirms that the numerical solutions are convergent with an order of O(h^{4}) in the L^{2} -, L^{\infty} - and H^{1} -norms as \tau = h^{2} , which satisfies r_{x}^{2}+r_{y}^{2} < 1/4 .

    Table 11.  Numerical solutions provided by REM-I for Example 6.2, T = 1 , ( \tau = h^{2} ).
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/4 6.2221\times 10^{-5} 2.0943\times 10^{-5} 1.5250\times 10^{-4} 0.038
    1/8 4.0593\times 10^{-6} 3.938 1.3489\times 10^{-6} 3.957 1.0813\times 10^{-5} 3.818 0.048
    1/16 2.3929\times 10^{-7} 4.084 8.5942\times 10^{-8} 3.972 7.4360\times 10^{-7} 3.862 0.252
    1/32 1.5077\times 10^{-8} 3.988 5.4254\times 10^{-9} 3.986 5.6416\times 10^{-8} 3.720 4.051


    By Remark II and Theorem 4.1, we can derive that the generalized EFDM-II possesses a convergence rate of O(h^{2}) in the H^{1} -norm as h_{x} = h_{y} = h and \tau = h^{2} . By Remark III and Theorem 4.2, we can deduce that the generalized REM-II attains a convergence order of O(h^{4}) in the H^{1} -norm as h_{x} = h_{y} = h and \tau = h^{2} . Table 12 and Table 13 confirm that the generalized EFDM-II and the generalized REM-II have convergence rates of O(h^{2}) and O(h^{4}) with respect to the L^{2} -, L^{\infty} - and H^{1} -norms, respectively.

    Table 12.  Numerical results provided by EFDM-II for Example 6.2, T = 1 , ( \tau = h^2 ).
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/4 3.8157\times 10^{-2} - 2.0972\times 10^{-2} - 9.4933\times 10^{-2} - 0.024
    1/8 1.0545\times 10^{-2} 1.855 5.7124\times 10^{-3} 1.876 2.6592\times 10^{-2} 1.836 0.021
    1/16 2.7222\times 10^{-3} 1.954 1.4551\times 10^{-3} 1.973 6.8253\times 10^{-3} 1.962 0.065
    1/32 6.8337\times 10^{-4} 1.994 3.6541\times 10^{-4} 1.994 1.7174\times 10^{-3} 1.991 0.734

    Table 13.  Numerical results obtained by REM-II for Example 6.2, T = 1 , ( \tau = h^2 ).
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/4 4.8076\times 10^{-4} 1.4869\times 10^{-4} 1.0973\times 10^{-3} 0.080
    1/8 2.9946\times 10^{-5} 4.005 8.8740\times 10^{-6} 4.067 7.1303\times 10^{-5} 3.944 0.156
    1/16 1.7963\times 10^{-6} 4.059 5.4497\times 10^{-7} 4.025 4.5193\times 10^{-6} 3.980 1.147
    1/32 1.1331\times 10^{-7} 3.987 3.3910\times 10^{-8} 4.006 2.8498\times 10^{-7} 3.987 17.646


    Table 14 confirms that REM-I is much more efficient than EFDM-I, and REM-II is much more efficient than EFDM-II; that is, the REMs can indeed greatly improve computational efficiency. Also, Table 14 shows that REM-I is the most efficient algorithm, whereas EFDM-II is the least efficient.

    Table 14.  Comparisons of numerical results obtained by the current algorithms for Example 6.2 , T = 1 .
    Methods EFDM-I REM-I EFDM-II REM-II
    (\tau, h) (\frac{1}{1024}, \frac{1}{512}) (\frac{1}{1024}, \frac{1}{32}) (\frac{1}{140^{2}}, \frac{1}{140}) (\frac{1}{1024}, \frac{1}{32})
    ME 1.8021\times 10^{-7} 2.3929\times 10^{-7} 3.5806\times 10^{-5} 1.1331 \times 10^{-7}
    CPU 256.05 0.252 252.273 17.646


    Figure 2 exhibits that the numerical solutions given by the generalized REM-I and the generalized REM-II both agree with the exact solution very well. This confirms that the current REMs are accurate and efficient.

    Figure 2.  Example 6.2 : Numerical solutions at T = 1 obtained by REM-I with (\tau, h) = (\frac{1}{32^{2}}, \frac{1}{32}) (left), numerical solution at T = 1 obtained by REM-II with (\tau, h) = (\frac{1}{32^{2}}, \frac{1}{32}) (middle) and exact solution at T = 1 (right).

    In a word, Tables 8–14 and Figure 2 illustrate that the extensions of EFDM-I, REM-I, EFDM-II and REM-II to wave equations with several constant delays are practicable and preserve the accuracies of the original algorithms.

    Example 6.3. Let \Omega = (0, 1)\times(0, 1) . To further compare EFDM-I with EFDM-II, they are used to solve the following wave equation with two variable delays

    \begin{eqnarray} u_{tt}-a^2 \Delta{u(\mathbf{x}, t)} = g(u(\mathbf{x}, t), u(\mathbf{x}, t-\nu_1(t)), u(\mathbf{x}, t-\nu_2(t)))+f(\mathbf{x}, t), \quad ({\mathbf x}, t) \in \Omega\times(0, T] \end{eqnarray}

    where

    \begin{equation} \begin{array}{ll} g(u(\mathbf{x}, t), u(\mathbf{x}, t-\nu_1(t)), u(\mathbf{x}, t-\nu_2(t))) = \frac{\beta\theta^d u(\mathbf{x }, t)}{\theta^d+u^d( \mathbf{x}, t)} +\frac{b u(\mathbf{x}, t-\nu_1(t))}{1+u^2(\mathbf{x}, t-\nu_2(t))}, \nonumber \end{array} \end{equation}
    \begin{equation} \begin{array}{ll} f(\mathbf{x}, t) = (1-2a^2)\exp(x+y+t)- \frac{\exp(x+y+t-\frac{1}{2}|\sin(t)|)}{1+\exp(2x+2y+2t-\frac{1}{2}(1-\cos(t)))} \\ \quad -\frac{\frac{9}{2}\exp(x+y+t)}{9+\exp(2x+2y+2t)}. \nonumber \end{array} \end{equation}

    The exact solution of this problem is u(\mathbf{x}, t) = \exp(x+y+t) , from which the IBVCs are obtained. Here, the parameter values are \beta = 0.5 , \theta = 3 , d = 2 , b = 1 , \nu_1(t) = \frac{1}{2}|\sin(t)| , and \nu_2(t) = \frac{1}{4}[1-\cos(t)] . Numerical results are exhibited in Table 15 and Table 16, from which several conclusions can be drawn. (1) When the stability condition r_{x}^{2}+r_{y}^{2} < 1 is satisfied, the numerical solutions obtained by the generalized EFDM-I with linear Lagrangian interpolation are convergent with an order of O(h^{2}) in the L^{2} -, H^{1} - and L^{\infty} -norms in the case of a = 1 and \tau = h^{2} . Besides, for \tau = h^{2} , the numerical solutions obtained by the generalized EFDM-II with linear Lagrangian interpolation converge to the exact solution with an order of O(h^{2}) in the cases of a = 1 , a = 10 and a = 50 . These results show that, by introducing linear Lagrangian interpolation, the extensions of EFDM-I and EFDM-II to wave equations with variable delays are workable and efficient, with the same accuracies as the original algorithms. (2) By Theorem 3.1 and Theorem 4.1, EFDM-II has no restriction on r_{x} and r_{y} , whereas r_{x}^{2}+r_{y}^{2} < 1 is necessary for EFDM-I. This finding is fully confirmed by Table 15 and Table 16: for larger a , much finer grids are needed to obtain acceptable solutions by EFDM-I, while EFDM-II has no grid requirement whatever a is. (3) Using the same meshsizes, which fulfill r_{x}^{2}+r_{y}^{2} < 1 , the numerical solutions yielded by EFDM-I are more accurate than those provided by EFDM-II. Thus, EFDM-I and EFDM-II have their respective advantages, and one can choose between them according to practical demands.
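The grid restriction for EFDM-I can be made quantitative: with h_{x} = h_{y} = h and \tau = h^{2} , the ratios are r_{x} = r_{y} = a\tau/h = ah , so the stability condition r_{x}^{2}+r_{y}^{2} < 1 reduces to h < 1/(\sqrt{2}\, a) . The following small sketch states this bound (our derivation from the stated condition, for illustration only):

```python
import math

def efdm1_max_h(a):
    """Mesh-size bound for EFDM-I when tau = h^2 and h_x = h_y = h:
    r_x = r_y = a*tau/h = a*h, so r_x^2 + r_y^2 = 2*a^2*h^2 < 1
    gives h < 1/(sqrt(2)*a)."""
    return 1.0 / (math.sqrt(2.0) * a)

# a = 10: bound ~ 0.0707, so h = 1/8 lies outside it while h = 1/16 lies inside;
# a = 50: bound ~ 0.0141, so h = 1/64 still violates it but h = 1/72 satisfies it,
# in agreement with the divergent and convergent rows of Table 15.
```

This matches Table 15, where for a = 10 the scheme blows up at h = 1/8 but converges from h = 1/16 on, and for a = 50 it diverges up to h = 1/64 yet converges at h = 1/72 .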

    Table 15.  Numerical results at T = 1 provided by EFDM-I for Example 6.3, ( \tau = h^2 ).
    a=1
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/4 6.4324\times 10^{-3} 3.3163\times 10^{-3} 1.5167\times 10^{-2} 0.022
    1/8 1.7193\times 10^{-3} 1.904 9.0800\times 10^{-4} 1.869 4.2917\times 10^{-3} 1.821 0.024
    1/16 4.2600\times 10^{-4} 2.013 2.3171\times 10^{-4} 1.970 1.1086\times 10^{-3} 1.953 0.064
    1/32 1.0658\times 10^{-4} 1.999 5.8214\times 10^{-5} 1.993 2.7952\times 10^{-4} 1.988 0.681
    a=10
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/4 7.7627\times 10^{20} 3.8921\times 10^{20} 1.7310\times 10^{21} 0.021
    1/8 3.0460\times10^{63} 1.5232\times10^{63} 6.8955\times 10^{63} 0.024
    1/16 4.8082\times 10^{-4} - 1.4334\times 10^{-4} 1.2615\times 10^{-3} 0.064
    1/32 1.2264\times 10^{-4} 1.971 3.5381 \times 10^{-5} 2.018 3.1772\times 10^{-4} 1.989 0.677
    1/64 3.0881\times 10^{-5} 1.990 8.8345\times 10^{-6} 2.002 7.9602\times 10^{-5} 1.997 10.822
    a=50
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/4 3.7278\times 10^{43} 1.8725\times 10^{43} 8.3312\times 10^{43} 0.021
    1/8 1.1456\times 10^{151} 5.7339\times 10^{150} 2.5974\times 10^{151} 0.024
    1/16 inf nan inf 0.063
    1/32 inf nan inf 0.713
    1/64 9.8788\times 10^{321} nan inf 10.812
    1/72 2.6889\times 10^{-5} 1.2652\times 10^{-5} 6.4906\times 10^{-5} 17.489
    1/80 2.1797\times 10^{-5} 1.993 1.0233\times 10^{-5} 2.014 5.2583\times 10^{-5} 1.998 26.321

    Table 16.  Numerical results at T = 1 provided by EFDM-II for Example 6.3, ( \tau = h^2 ).
    a=1
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/2 2.6289\times 10^{-1} 1.3144 \times 10^{-1} 5.4195 \times 10^{-1} 0.021
    1/4 7.5543\times 10^{-2} 1.799 4.0166\times 10^{-2} 1.710 1.7985 \times 10^{-1} 1.591 0.023
    1/8 1.9199 \times 10^{-2} 1.976 1.0250 \times 10^{-2} 1.970 4.8114 \times 10^{-2} 1.902 0.026
    1/16 4.7029\times 10^{-3} 2.029 2.5659 \times 10^{-3} 1.998 1.2252 \times 10^{-2} 1.973 0.068
    1/32 1.1735 \times 10^{-3} 2.003 6.4144 \times 10^{-4} 2.000 3.0783 \times 10^{-3} 1.993 0.757
    a=10
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/2 7.7084 \times 10^{-2} 2.1323\times 10^{-2} 1.7722 \times 10^{-1} 0.023
    1/4 2.2280 \times 10^{-2} 1.791 1.2451\times 10^{-2} 0.776 5.8129\times 10^{-2} 1.608 0.026
    1/8 5.3874 \times 10^{-3} 2.048 2.5637\times 10^{-3} 2.28 1.2847\times 10^{-2} 2.178 0.069
    1/16 1.3888 \times 10^{-3} 1.956 7.8055\times 10^{-4} 1.716 3.6596\times 10^{-3} 1.812 0.757
    1/64 3.4522 \times 10^{-4} 2.008 1.0268\times 10^{-4} 2.926 8.8984\times 10^{-4} 2.040 11.981
    a=50
    h ME r_\infty LE r_{L^2} HE r_{H^1} CPU
    1/2 7.5423 \times 10^{-2} 2.1693 \times 10^{-2} 1.7635 \times 10^{-1} 0.022
    1/4 2.0843\times 10^{-2} 1.855 5.8594\times 10^{-3} 1.888 5.3050\times 10^{-2} 1.733 0.026
    1/8 5.6388\times 10^{-3} 1.886 2.0030\times 10^{-3} 1.549 1.4149\times 10^{-2} 1.907 0.070
    1/16 1.4379\times 10^{-3} 1.971 7.5796\times 10^{-4} 1.402 3.6415\times 10^{-3} 1.958 0.756
    1/64 3.5599 \times 10^{-4} 2.014 1.5314\times 10^{-4} 2.307 9.1584\times 10^{-4} 1.991 11.962
    1/72 2.8676\times 10^{-4} 1.836 1.0786\times 10^{-4} 2.976 7.0903\times 10^{-4} 2.173 19.357


    In this paper, two explicit difference schemes have been derived for solving nonlinear wave equations with delays. The first EFDM (3.3a)–(3.3c) has been designed by adapting the standard explicit difference scheme for linear wave equations. It is conditionally convergent with an order of O(\tau^{2}+h^{2}_{x}+h^{2}_{y}) in the H^{1} -norm as r^{2}_{x}+r^{2}_{y} < 1 . Besides, a spatial REM (3.25) has been derived to improve computational efficiency. The REM (3.25) is also conditionally convergent, with a rate of O(\tau^{2}+h^{4}_{x}+h^{4}_{y}) as 4(r^{2}_{x}+r^{2}_{y}) < 1 . Although these two methods are conditionally stable, numerical solutions with optimal convergence rates can be obtained as long as the temporal and spatial grids are chosen according to the stability conditions.

    The second EFDM (4.4a)–(4.4c) has been proposed by generalizing the Du Fort-Frankel scheme, which was initially proposed for parabolic equations with periodic boundary conditions. It is conditionally consistent: for its truncation error (R_{2})^{k}_{i, j} (see Eq (4.3)), as \tau = o(h_{x}) , \tau = o(h_{y}) , and h_{x} and h_{y} tend to zero, the truncation error (R_{2})^{k}_{i, j} tends to zero. Although the numerical solutions obtained by the second EFDM (4.4a)–(4.4c) are convergent with an order of O(\tau^{2}+h^{2}_{x}+h^{2}_{y} +(\frac{\tau}{h_{x}})^{2}+(\frac{\tau}{h_{y}})^{2}) in the H^{1} -norm under the condition \tau = o(h^{\frac{3}{2}}) ( h = \max(h_{x}, h_{y}) ) and other grid conditions, it is suggested that h_{x} = h_{y} = h and \tau = h^{2} be applied to obtain numerical solutions with an optimal convergence rate of O(h^{2}) in the H^{1} -norm. Besides, for the second EFDM (4.4a)–(4.4c), a new REM (4.22) has been derived to improve computational accuracy. As far as we know, there have been no numerical studies of nonlinear wave equations with delay by the Du Fort-Frankel scheme combined with REMs; thus, the derivation and analysis of this new REM is a main contribution of this study. Moreover, the second EFDM (4.4a)–(4.4c) and the corresponding REM (4.22) have no restrictions on r_{x} and r_{y} .

    Finally, the numerical results illustrate the correctness of the theoretical results, the performance of the algorithms and the comparisons among them. Each of the current algorithms has its own advantages, and one can choose among them according to practical demands.

    Recently, although energy-preserving Du Fort-Frankel schemes have been developed for the sine-Gordon equation [47], coupled sine-Gordon equations [47] and the Schrödinger equation with wave operator [48], we note that there are few structure-preserving numerical methods for nonlinear wave equations with delay. In the future, we will consider the development and analysis of structure-preserving Du Fort-Frankel schemes for nonlinear wave equations with delays.

    This work is partially supported by the National Natural Science Foundation of China (Grant No. 11861047), the Natural Science Foundation of Jiangxi Province for Distinguished Young Scientists (No. 20212ACB211006), and the Natural Science Foundation of Jiangxi Province (No. 20202BABL201005).

    The authors are deeply grateful to the Editors, Editorial Assistant Su Meng, and the anonymous referees for their insightful comments and helpful suggestions, which have greatly improved this manuscript.



    [33] K. Lan, G. Li, Y. Jie, R. Tang, L. Liu, S. Fong, Convolutional neural network with group theory and random selection particle swarm optimizer for enhancing cancer image classification, Math. Biosci. Eng., 18 (2021), 5573–5591. https://doi.org/10.3934/mbe.2021281 doi: 10.3934/mbe.2021281
    [34] S. Fong, K. Lan, P. Sun, S. Mohammed, J. Fiaidhi, A time-series pre-processing methodology for biosignal classification using statistical feature extraction, in Proceedings of the 10th IASTED International Conference on Biomedical Engineering (Biomed'13), (2013), 207–214. https://doi.org/10.2316/P.2013.791-100
    [35] K. Lan, J. Zhou, X. Jiang, J. Wang, S. Huang, J. Yang, et al., Group theoretic particle swarm optimization for multi-level threshold lung cancer image segmentation, Quant. Imaging. Med. Surg., 13 (2023), 1312–1322. https://doi.org/10.21037/qims-22-295 doi: 10.21037/qims-22-295
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)