
We consider the following system of \ell\geq 2 coupled singularly perturbed reaction-diffusion boundary value problems. Find \textbf{u}\in (C^2(0, 1)\cap C[0, 1])^\ell such that
\begin{equation} \mathcal{L} \textbf{u} = -\textbf{E}\textbf{u}^{\prime\prime}+\textbf{A}\textbf{u} = \textbf{g} \ \ \mbox{in}\ \ \Omega = (0, 1), \qquad \textbf{u}(0) = \textbf{u}_0, \quad \textbf{u}(1) = \textbf{u}_1, \end{equation} | (1.1) |
where \textbf{E} = \mathrm{diag}(\varepsilon_1^2, \dots, \varepsilon_\ell^2) with small perturbation parameters 0 < \varepsilon_i\ll 1, \; i = 1, 2, \dots, \ell , the vector-valued function \textbf{g} = (g_1, g_2, \dots, g_\ell)^T and the reaction coefficient matrix \textbf{A} = (a_{kl})_{k, l = 1}^\ell are twice continuously differentiable on [0, 1] , and the constant boundary values \textbf{u}_0 and \textbf{u}_1 are given. The exact solution of (1.1) is the vector \textbf{u} = (u_1, u_2, \dots, u_\ell)^T . We assume that \textbf{A}:[0, 1]\to \mathbb{R}^{(\ell, \ell)} and the vector-valued function \textbf{g}:[0, 1]\to \mathbb{R}^\ell are independent of the perturbation parameters and that the reaction matrix \textbf{A} is strongly diagonally dominant with
\begin{equation} \sum\limits_{\substack{j = 1 \\ j\neq i}}^\ell \Big\Vert \cfrac{a_{ij}}{a_{ii}} \Big\Vert_\infty < 1, \quad i = 1, 2, \dots, \ell. \end{equation} | (1.2) |
Then condition (1.2) implies that \textbf{A} is an M -matrix and its inverse is positive definite and bounded in the maximum norm (see e.g., [1]). Under these assumptions, the problem (1.1) has a unique solution \textbf{u} = (u_1, \dots, u_\ell)^T\in (C^2(0, 1)\cap C[0, 1])^\ell . In this paper, without loss of generality, we assume the ordering
\begin{equation} 0 < \varepsilon_1 \leq \varepsilon_2\leq \dots \leq \varepsilon_\ell \ll 1, \end{equation} | (1.3) |
which can always be arranged, if necessary, by renumbering the equations in the system.
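For orientation, a minimal illustrative instance of (1.1) with \ell = 2 (the data below are our own example and are not taken from the numerical experiments of this paper) is
\begin{align*} -\varepsilon_1^2 u_1^{\prime\prime}+2u_1-u_2& = 1, \qquad -\varepsilon_2^2 u_2^{\prime\prime}-u_1+2u_2 = 1 \quad \mbox{in}\quad (0, 1), \\ u_1(0)& = u_1(1) = u_2(0) = u_2(1) = 0, \end{align*}
that is, \textbf{E} = \mathrm{diag}(\varepsilon_1^2, \varepsilon_2^2) , \textbf{A} = \begin{pmatrix} 2 & -1\\ -1 & 2\end{pmatrix} and \textbf{g} = (1, 1)^T . Here \sum_{j\neq i}\Vert a_{ij}/a_{ii}\Vert_\infty = 1/2 < 1 , so (1.2) holds, and (1.3) simply reads 0 < \varepsilon_1\leq \varepsilon_2\ll 1 .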
It is well known that standard numerical methods, including finite difference (FD) methods and finite element methods (FEMs), are inefficient and inaccurate when applied to singularly perturbed problems (SPPs) on uniform meshes. The solutions of SPPs have boundary and/or interior layers, which are very thin regions in which the solution or its derivative changes abruptly. The width of the layers depends on the perturbation parameter, and these layers are not resolved unless a large number of mesh points is used, which is computationally expensive. As a remedy, fitted mesh methods based on layer-adapted meshes have been proposed and studied in recent years for solving boundary layer problems. The construction of these meshes requires a priori knowledge of bounds on the solution and its derivatives. Such meshes are finer inside the boundary layers and coarser outside them. The best-known layer-adapted meshes are piecewise-uniform Shishkin meshes [2] and Bakhvalov-type meshes [3]. We refer the reader to the books [1,2,4] and the references therein for more details.
Unlike uncoupled SPPs, the boundary layer behaviour of the solution to each equation in the system can be dramatically different and complicated. Each solution component of the coupled system may have a sublayer corresponding to each of the perturbation parameters if the perturbation parameters in the individual equations have different magnitudes. This renders the construction of numerical methods very subtle. Shishkin [5] considered a coupled system of two reaction-diffusion equations on an infinite strip and proved that the finite difference method is robust with rate of convergence \mathcal{O}(N^{-1/4}) on piecewise-uniform meshes when the perturbation parameters are small and different from each other. Later, higher-order convergence of the method on piecewise-uniform Shishkin meshes was shown in [6,7,8,9]. The finite element method has been developed and analyzed for SPPs in [7,10,11]. Recently, numerical solutions of systems of reaction-diffusion problems have been presented in [12,13,14] and the references therein.
Although there has been increasing interest in the numerical solution of coupled systems of two singularly perturbed differential equations, few articles discuss the numerical solution of coupled systems of more than two singularly perturbed differential equations. Kellogg et al. [15] considered a system of { \ell\geq 2 } reaction-diffusion equations in two dimensions with the same perturbation parameter in each equation of the system. A system of { \ell\geq 2 } reaction-diffusion equations, each with a different perturbation parameter, has been studied and analyzed in one dimension in [9]. In most papers, the reaction coefficient matrix \textbf{A} is assumed to be diagonally dominant with positive diagonal and nonpositive off-diagonal elements; this condition is weakened in [9]. Usually, error analyses for FEMs or DG methods are carried out in the energy norms derived from the corresponding variational formulations. Unfortunately, these norms are too weak to capture the boundary layers of SPPs of reaction-diffusion type, see, e.g., [14,16,17,18]. Up to now, the only work on balanced error estimates of a finite element method for a system of \ell \geq 2 coupled singularly perturbed reaction-diffusion two-point boundary value problems is the paper of Lin and Stynes [14]. They proved that the classical FEM using quadratic C^1 splines is of order \mathcal{O}(N^{-1} \ln N) in the balanced norm provided that all perturbation parameters are equal to the same small number. A new FEM for SPPs of reaction-diffusion type in weighted and balanced norms is presented in [19]. The convergence analysis of the classical FEM in a balanced norm on Bakhvalov-type rectangular meshes has been studied in [20]. The analysis is much more involved and complicated when the parameters in the system are different. To the best of the authors' knowledge, the error analysis in the balanced norm has not been studied in the literature when the perturbation parameters are different. In this paper, to fill this gap, we derive error estimates of a weak Galerkin method for a system of \ell \geq 2 coupled SPPs of reaction-diffusion type in the energy and balanced norms when each equation in the system has a different parameter. The analysis of the classical FEM in the balanced norm using C^0 elements remains open [21].
Wang and Ye [22] first introduced the weak Galerkin finite element method (WG-FEM) and applied it to second-order elliptic equations. The key idea of the WG finite element scheme is to use weak functions and weak derivatives on completely discontinuous piecewise polynomial spaces. Since then, many papers have been devoted to WG finite element methods, including implementation results in [23], parabolic problems in [24], the Maxwell equations in [25], the Stokes equations in [26], the Helmholtz equation with high wave numbers in [27], and multi-term time-fractional diffusion equations in [28]. In [29], discrete gradient and divergence operators were introduced for convection-dominated problems. A uniformly convergent weak Galerkin finite element method on a Shishkin mesh for a convection-diffusion problem in one dimension was presented in [30]. Uniform convergence of the WG-FEM on Shishkin meshes for SPPs of convection-dominated type has been studied in 2D in [31] and in 1D in [32], and singularly perturbed reaction-convection-diffusion problems with two parameters have been analyzed in [33]. Uniform convergence of a weak Galerkin method on a Bakhvalov-type mesh has been analyzed for singularly perturbed convection-diffusion problems in [34,35] and for nonlinear singularly perturbed reaction-diffusion problems in [36]. Supercloseness in an energy norm of a WG-FEM on a Bakhvalov-type mesh for a singularly perturbed two-point boundary value problem has been demonstrated in [37], and superconvergence results are given in [38]. The WG-FEM for a coupled system of two SPPs of reaction-diffusion type has been analyzed in the energy norm in [39]. We wish to study a robust WG-FEM for coupled systems of SPPs of reaction-diffusion type. Thus, the main aim of this paper is to construct a uniformly convergent WG-FEM for the problem (1.1).
The paper is organized as follows. In Section 2, we present and study a decomposition of the exact solution and a piecewise-uniform Shishkin mesh. We introduce the WG-FEM in Section 3. Stability properties of the proposed method are demonstrated in Section 4. Error analyses in the energy and balanced norms are presented in Sections 5 and 6, respectively. In Section 7, numerical experiments are conducted to confirm the theory of the previous sections. Finally, conclusions are given in Section 8.
In this work, C denotes a generic constant, independent of N and the perturbation parameters \varepsilon_i, \; i = 1, \dots, \ell , which may not be the same at each occurrence. Constants with subscripts, such as C_c , are fixed numbers and also do not depend on \varepsilon_i, \; i = 1, \dots, \ell , or on the mesh parameter N .
In this section, we first give a decomposition of the analytical solution of the linear system (1.1). Then we will derive the bounds for the solution and its derivatives. Next, a piecewise-uniform Shishkin mesh is constructed. Sobolev spaces with the related norms and some basic notations are introduced at the end of this section.
The solution of the system (1.1) can be decomposed as \textbf{u} = \textbf{R}+ \textbf{L} , where \textbf{R} is the regular (smooth) part and \textbf{L} is the layer part. In light of (1.2), there is a constant \rho\in (0, 1) such that
\begin{align} \sum\limits_{\substack{j = 1 \\ j\neq i}}^\ell \Big\Vert \cfrac{a_{ij}}{a_{ii}} \Big\Vert_\infty < \rho, \quad \mbox{for}\quad i = 1, 2, \dots, \ell. \end{align} | (2.1) |
Define \alpha = \alpha(\rho) by
\alpha^2: = (1-\rho)\min\limits_{i = 1, \dots, \ell}\min\limits_{0\leq x\leq1} a_{ii}(x). |
For future reference, we set
\mathcal{B}_{\mu}^\alpha (x): = e^{-\alpha x/\mu}+e^{-\alpha(1-x)/\mu}, |
and define the \ell \times \ell matrix \Gamma = (c_{ij}) by
\begin{align} c_{ii} = 1, \text{ and } c_{ij} = -\Vert\cfrac{a_{ij}}{a_{ii}}\Vert_\infty \text{ for } i\neq j. \end{align} | (2.2) |
We assume that the matrix \Gamma is inverse monotone, that is, \Gamma^{-1} exists and
\begin{equation} \Gamma^{-1}\geq 0. \end{equation} | (2.3) |
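The quantities \rho , \alpha , and \Gamma can be computed explicitly for given data. The following sketch (our own Python illustration; the constant matrix A is an assumed example, and for x -dependent coefficients the norms \Vert a_{ij}/a_{ii}\Vert_\infty would have to be approximated, e.g., by sampling) evaluates (2.1), \alpha , and checks the inverse monotonicity (2.3) entrywise:

```python
import numpy as np

# Hypothetical constant reaction matrix (illustrative assumption only).
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
ell = A.shape[0]

# Ratios |a_ij / a_ii|; for constant coefficients these equal ||a_ij / a_ii||_inf.
ratios = np.abs(A) / np.abs(np.diag(A))[:, None]

# Largest off-diagonal row sum; any rho between this value and 1 satisfies (2.1).
row_sum = max(ratios[i].sum() - 1.0 for i in range(ell))
assert row_sum < 1.0, "the diagonal dominance condition (1.2) is violated"
rho = row_sum

# alpha defined via alpha^2 = (1 - rho) * min_i min_x a_ii(x); here the a_ii are constants.
alpha = np.sqrt((1.0 - rho) * np.diag(A).min())

# Gamma from (2.2): c_ii = 1 and c_ij = -||a_ij / a_ii||_inf for i != j.
Gamma = -ratios
np.fill_diagonal(Gamma, 1.0)

# Inverse monotonicity (2.3): Gamma^{-1} >= 0 entrywise.
Gamma_inv = np.linalg.inv(Gamma)
print("rho =", rho, " alpha =", alpha)
print("Gamma^{-1} >= 0 entrywise:", bool(np.all(Gamma_inv >= -1e-14)))
```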
We first recall the following stability result for the solution of (1.1) from [9].
Lemma 2.1. Assume that \boldsymbol{u} is the solution of (1.1) and the reaction coefficient matrix \boldsymbol{A} has strictly positive diagonal elements a_{ii} > 0 for i = 1, 2, \dots, \ell . Let the matrix \Gamma be inverse monotone. Then the solution to each equation in the system has the following bounds
\begin{align*} |u_i(x)|\leq \sum\limits_{j = 1}^\ell (\Gamma ^{-1})_{ij} \max \Big\{ \Big\Vert \cfrac{g_j}{a_{jj}}\Big\Vert, |u_{0, i}|, |u_{1, i}|\Big\} , \quad i = 1, \dots, \ell. \end{align*} |
Proof. We refer the reader to [9] for the detailed proof.
The following theorem shows that the coercivity of \bf{A} and the inverse monotonicity (2.3) of the matrix \Gamma are related.
Theorem 2.2. [40] Assume that the reaction coefficient matrix \boldsymbol{A} has strictly positive diagonal elements a_{ii} > 0 for i = 1, 2, \dots, \ell and the matrix \Gamma is inverse monotone. Then, there is a constant diagonal matrix \bf{D} with positive elements and a positive constant \beta such that
\boldsymbol{v}^T \boldsymbol{D A}\boldsymbol{v}\geq \beta \boldsymbol{v}^T \boldsymbol{v}, \quad \forall \boldsymbol{v}\in \mathbb{R}^\ell, \quad x\in [0, 1]. |
Remark 2.1.
(1) If the matrix \boldsymbol{A} has the property (1.2), then the matrix \Gamma is a strongly diagonally dominant L_0 matrix, which implies that the matrix \Gamma is inverse monotone.
(2) If \boldsymbol{A} and \boldsymbol{g} are twice continuously differentiable, then the above stability result guarantees the existence of a unique solution \boldsymbol{u}\in C^4[0, 1]^\ell .
(3) The reaction matrix is assumed to be strongly diagonally dominant with positive diagonal elements and nonpositive off-diagonal elements in most of the existing papers on coupled systems of SPPs, with the exceptions [9,41]. This assumption implies that the operator \mathcal{L} is inverse monotone and satisfies the maximum principle, which is a useful tool in finite difference methods. In this paper, the assumptions on \boldsymbol{A} are weakened and we consider problems in a more general setting.
(4) Since the form of system (1.1) and the matrix \Gamma do not change when a constant positive diagonal matrix is applied on the left, Theorem 2.2 implies that we can assume, without loss of generality, the reaction matrix \bf{A} is coercive if it has positive diagonal elements. That means that there exists \eta > 0 such that
\begin{equation} \boldsymbol{v}^T \boldsymbol{ A}\boldsymbol{v}\geq \eta \boldsymbol{v}^T \boldsymbol{v}, \quad \forall \boldsymbol{v}\in \mathbb{R}^\ell. \end{equation} | (2.4) |
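As a quick numerical illustration of (2.4) (a sketch only; the x -dependent matrix below is a hypothetical example, assumed to be already scaled as described in item (4)), an admissible \eta is the minimum over x\in[0, 1] of the smallest eigenvalue of the symmetric part \tfrac{1}{2}(\boldsymbol{A}(x)+\boldsymbol{A}(x)^T) , since \boldsymbol{v}^T \boldsymbol{A}\boldsymbol{v} = \boldsymbol{v}^T \tfrac{1}{2}(\boldsymbol{A}+\boldsymbol{A}^T)\boldsymbol{v} :

```python
import numpy as np

def A_of_x(x):
    # Hypothetical x-dependent reaction matrix (illustrative assumption),
    # assumed already scaled as in Remark 2.1(4).
    return np.array([[2.0 + x, -1.0],
                     [-1.0,    2.0]])

# eta = minimum over x of the smallest eigenvalue of the symmetric part of A(x).
xs = np.linspace(0.0, 1.0, 201)
eta = min(np.linalg.eigvalsh(0.5 * (A_of_x(x) + A_of_x(x).T)).min() for x in xs)
print("admissible eta in (2.4):", eta)   # here eta = 1.0, attained at x = 0
```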
Because of the boundary layers, we have to work with a decomposition of the solution into smooth and layer components. Thus, we will use the following decomposition of \textbf{u} in the forthcoming analysis.
\begin{align*} \textbf{u} = \textbf{R}+ \textbf{L}_L+ \textbf{L}_R, \end{align*} |
where \textbf{R} is the smooth part and \textbf{L}_L and \textbf{L}_R are the boundary layer parts, which satisfy the following boundary value problems, respectively:
\begin{align} \mathcal{L} \textbf{R}& = \textbf{g}\quad \mbox{on}\quad \Omega\quad \mbox{and }\quad \textbf{R}(0) = \textbf{A}^{-1}(0)\textbf{g}(0), \quad \textbf{R}(1) = \textbf{A}^{-1}(1)\textbf{g}(1), \end{align} | (2.5) |
\begin{align} \mathcal{L} \textbf{L}_L& = \textbf{0}\quad \mbox{on}\quad \Omega\quad \mbox{and }\quad \textbf{L}_L(0) = \textbf{u}_0- \textbf{R}(0), \quad \textbf{L}_L(1) = 0, \end{align} | (2.6) |
\begin{align} \mathcal{L} \textbf{L}_R& = \textbf{0}\quad \mbox{on}\quad \Omega\quad \mbox{and }\quad \textbf{L}_R(0) = 0, \quad \textbf{L}_R(1) = \textbf{u}_1- \textbf{R}(1). \end{align} | (2.7) |
Here, the existence of the inverse matrix \textbf{A} ^{-1} is guaranteed by the condition (1.2).
Theorem 2.3. Assume that \boldsymbol{A} and \boldsymbol{g} are twice continuously differentiable. Then the solution \boldsymbol{u} of the system (1.1) can be decomposed as \boldsymbol{u} = \boldsymbol{R}+ \boldsymbol{L}_L+ \boldsymbol{L}_R , where \boldsymbol{R} and \boldsymbol{L} = \boldsymbol{L}_L+ \boldsymbol{L}_R satisfy
\begin{align} \vert R_i^{(k)}(x)\vert &\leq C , && \mathit{\text{for}}\quad k = 0, 1, \dots, 4, \quad i = 1, \dots, \ell \end{align} | (2.8) |
\begin{align} |L_i^{(k)}(x)|&\leq C \sum\limits_{m = i}^\ell \varepsilon_m^{-k}\mathcal{B}_{\varepsilon_m}^\alpha (x), &&\mathit{\mbox{for}}\quad k = 0, 1, 2, \quad i = 1, \dots, \ell \end{align} | (2.9) |
\begin{align} |L_i^{(k)}(x)|&\leq C \varepsilon_i^{2-k}\sum\limits_{m = 1}^\ell \varepsilon_m^{-2}\mathcal{B}_{\varepsilon_m}^\alpha (x), && \mathit{\mbox{for}}\quad k = 3, 4, \quad i = 1, \dots, \ell \end{align} | (2.10) |
Proof. A detailed proof can be found in [42].
Let N be an integer divisible by 2(\ell+1) . We define the transition points
\lambda_{\ell+1} = \dfrac{1}{2}, \quad \lambda_s = \min \bigg\{ \frac{s \lambda_{s+1}}{s+1}, \cfrac{\sigma \varepsilon_s}{\alpha}\ln N\bigg\}, \quad s = \ell, \dots, 1, \quad \mbox{and}\quad \lambda_0 = 0, |
where \sigma is a user-chosen constant with \sigma = \mathcal{O}(1) . In general, this parameter is chosen as \sigma \geq k+1 where k is the order of polynomials in the approximation space. Then we divide each of the intervals \Omega_s: = [\lambda_s, \lambda_{s+1}] and _s \Omega: = [1-\lambda_{s+1}, 1-\lambda_s] , s = 0, \dots, \ell into \cfrac{N}{2(\ell+1)} subintervals of equal mesh size
H_s = H_{2\ell+1-s} = \cfrac{2(\ell+1)(\lambda_{s+1}-\lambda_{s})}{N}, \quad s = 0, \dots, \ell. |
An example of a piecewise-uniform Shishkin mesh with N = 32 elements for a system of \ell = 3 reaction-diffusion equations is shown in Figure 1.
We next define the nodes recursively as
\begin{align*} x_0& = 0, \quad x_n = x_{n-1}+h_n\quad \text{ for } n = 1, \dots, N, \text{ where} \\ h_n& = \begin{cases} H_0, & n = 1, \dots, \cfrac{N}{2(\ell+1)}, \\ H_1, & n = \cfrac{N}{2(\ell+1)}+1, \dots, \cfrac{N}{(\ell+1)}, \\ \vdots\quad &\vdots \\ H_{2\ell+1}, & n = \cfrac{N(2\ell+1)}{2(\ell+1)}+1, \dots, N. \end{cases} \end{align*} |
We denote the mesh and a partition of the domain \varOmega by I_n = [x_{n-1}, x_n], \quad n = 1, \dots, N and \mathcal{T}_N = \{I_n:n = 1, \dots, N\} , respectively. For I_n\in \mathcal{T}_N , the outward unit normal \textbf{n}_{I_n} on I_n is defined as \textbf{n}_{I_n}(x_n) = 1 and \textbf{n}_{I_n}(x_{n-1}) = -1 ; for simplicity, we use \textbf{n} instead of \textbf{n}_{I_n} .
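The mesh construction above translates directly into code. The following sketch (our own; the function name and the parameter values in the usage line are assumptions chosen to mimic the setting of Figure 1, not data from the paper) builds the transition points \lambda_s , the uniform widths H_s , and the nodes x_n :

```python
import numpy as np

def shishkin_mesh(eps, sigma, alpha, N):
    """Piecewise-uniform Shishkin mesh of Section 2.

    eps   : perturbation parameters in increasing order (eps_1 <= ... <= eps_ell), cf. (1.3)
    sigma : user-chosen constant, typically sigma >= k + 1
    alpha : constant from Section 2
    N     : number of mesh elements, divisible by 2*(ell + 1)
    """
    ell = len(eps)
    assert N % (2 * (ell + 1)) == 0, "N must be divisible by 2(ell+1)"

    # Transition points: lambda_{ell+1} = 1/2, lambda_s for s = ell, ..., 1, and lambda_0 = 0.
    lam = np.zeros(ell + 2)
    lam[ell + 1] = 0.5
    for s in range(ell, 0, -1):
        lam[s] = min(s * lam[s + 1] / (s + 1), sigma * eps[s - 1] / alpha * np.log(N))

    # Uniform widths H_s = H_{2*ell+1-s} on [lambda_s, lambda_{s+1}] and its mirror image.
    m = N // (2 * (ell + 1))
    H = [2 * (ell + 1) * (lam[s + 1] - lam[s]) / N for s in range(ell + 1)]

    # Nodes x_0 = 0, x_n = x_{n-1} + h_n: left half [0, 1/2], then the mirrored right half.
    left = np.concatenate([[0.0]] + [lam[s] + H[s] * np.arange(1, m + 1) for s in range(ell + 1)])
    nodes = np.concatenate([left, 1.0 - left[-2::-1]])
    return nodes, lam

# Example usage with assumed data: ell = 3 equations and N = 32 elements as in Figure 1.
nodes, lam = shishkin_mesh(eps=[1e-4, 1e-3, 1e-2], sigma=3, alpha=1.0, N=32)
print(len(nodes) - 1, "elements; transition points lambda_1..lambda_3:", lam[1:4])
```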
We use the following basic notations in the sequel. By L^2(\Omega) , we denote the space of square-integrable functions on \Omega with the norm \Vert u\Vert^2_{L^2(\Omega)} = \int_{\Omega }u^2(x)\, d x , and sometimes we will use the abbreviation \Vert u \Vert = \Vert u\Vert_{L^2(\Omega)} . The standard Sobolev space is denoted by H^k(\Omega) with the norm \Vert\cdot\Vert_{k, \Omega} and semi-norm |\cdot|_{k, \Omega} given as
\Vert u\Vert_{k, \Omega}^2 = \sum\limits_{j = 0}^k\Vert u^{(j)}\Vert^2_{L^2(\Omega)}, \quad |u|_{k, \Omega}^2 = \Vert u^{(k)}\Vert^2_{L^2(\Omega)}. |
We define the norm for a vector-valued function \textbf{u} as
\Vert \textbf{u} \Vert_{k, \Omega}^2 = \sum\limits_{i = 1}^{\ell} \Vert u_i\Vert_{k, \Omega}^2. |
For each interval I_n , the broken Sobolev space is defined by
H^k_N(\Omega) = \{ u\in L^2(\Omega):u\vert_{I_n}\in H^k(I_n), \quad \forall I_n \in \mathcal{T}_N\}, |
and the corresponding norm and semi-norm
\Vert \textbf{u}\Vert_{H^k_N(\Omega)}^2 = \sum\limits_{n = 1}^{N} \sum\limits_{i = 1}^{\ell}\Vert u_i\Vert_{k, I_n}^2, \quad | \textbf{u}|_{H^k_N(\Omega)}^2 = \sum\limits_{n = 1}^{N} \sum\limits_{i = 1}^{\ell}|u_i|_{k, I_n}^2. |
For future reference, we use the following notations:
\begin{align*} \big(u, v\big)& = \sum\limits_{I_n\in \mathcal{T}_N}\big(u, v\big)_{I_n} = \sum\limits_{I_n\in \mathcal{T}_N}\int_{I_n} u(x) v(x) \, d x, \\ \big\langle u, v \big\rangle & = \sum\limits_{I_n\in \mathcal{T}_N}\big\langle u, v\big\rangle_{\partial I_n} = \sum\limits_{I_n\in \mathcal{T}_N}\Big( u(x_n)v(x_n)+u(x_{n-1})v(x_{n-1})\Big), \\ \Vert u\Vert^2& = \sum\limits_{n = 1}^{N}\Vert u\Vert_{I_n}^2 = \sum\limits_{n = 1}^{N}\big(u, u\big)_{I_n}. \end{align*} |
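In computations, these element-wise sums can be evaluated, for instance, as in the following sketch (our own; it assumes u and v are given as vectorized callables and uses element-wise Gauss-Legendre quadrature):

```python
import numpy as np

def broken_inner_products(u, v, nodes, quad_order=5):
    """Approximate (u, v) = sum_n (u, v)_{I_n} and <u, v> = sum_n <u, v>_{partial I_n}."""
    xg, wg = np.polynomial.legendre.leggauss(quad_order)     # Gauss points/weights on [-1, 1]
    vol, bdry = 0.0, 0.0
    for xl, xr in zip(nodes[:-1], nodes[1:]):
        xm, hh = 0.5 * (xl + xr), 0.5 * (xr - xl)
        pts = xm + hh * xg                                    # mapped quadrature points on I_n
        vol += hh * np.sum(wg * u(pts) * v(pts))              # (u, v)_{I_n}
        bdry += u(xr) * v(xr) + u(xl) * v(xl)                 # <u, v>_{partial I_n}
    return vol, bdry
```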
This section is devoted to introducing the concepts of weak functions and weak derivatives, from which we define our method for the problem (1.1). For the rest of the paper, we denote by \mathbb{P}_k(I_n) the set of polynomials defined on I_n of degree at most k . The space of weak functions \mathcal{W}(I_n) on I_n is defined by
\begin{align*} \mathcal{W}(I_n) = \{u = \{u_0, u_b\}: u_0\in L^2(I_n), u_b \in L^\infty(\partial I_n)\}. \end{align*} |
Here, a weak function u = \{u_0, u_b\} has two components and the first component u_0 represents the value of u in (x_{n-1}, x_n) and u_b is interpreted as the value of u on \partial I_n = \{x_{n-1}, x_n\} . From now on, we assume that k = 2 unless otherwise mentioned.
Let S_{N}(I_n) be a local weak Galerkin (WG) finite element space given by
\begin{equation} S_{N}(I_n) = \{ u = \{u_0, u_b\}: u_0|_{I_n}\in \mathbb{P}_k(I_n), u_b|_{\partial I_n}\in \mathbb{P}_0(\partial I_n) \quad \forall I_n \in \mathcal{T}_N\}, \end{equation} | (3.1) |
where \mathbb{P}_0(\partial I_n) stands for constant polynomials on \partial I_n . We remark that the results can be extended to \mathbb{P}_k elements for k > 2 . However, in this case some additional compatibility conditions on the data will be required in order to have (2.8).
Next, we define a global WG finite element space S_N that consists of weak functions u = \{u_0, u_b\} such that u_0|_{I_n}\in \mathbb{P}_k(I_n) and u_b|_{x_n} is constant for n = 0, \dots, N .
Let S_N^0 be the subspace of S_N with zero boundary conditions, that is,
\begin{align} S_N^0 = \{ u = \{u_0, u_b\}: u \in S_N, u_b(0) = u_b(1) = 0\}. \end{align} | (3.2) |
The weak derivative d_{w, I_n} u \in \mathbb{P}_{{k-1}}(I_n) of a function u \in S_N(I_n) is defined to be the solution of the following equation
\begin{equation} \big( d_{w, I_n}u, v\big)_{I_n} = -\big( u_0, v^\prime\big)_{I_n}+\big \langle u_b, \textbf{n} v\big \rangle_{\partial I_n}, \qquad \forall v \in \mathbb{P}_{{k-1}}(I_n), \end{equation} | (3.3) |
where
(w, z)_{I_n} = \int_{I_n}w(x)z(x)\, d x, |
and
\big\langle w, z\textbf{n}\big\rangle_{\partial I_n} = w(x_n)z(x_n)-w(x_{n-1})z(x_{n-1}). |
The discrete weak derivative d_{w} u of the weak function u = \{u_0, u_b\} on the finite element space S_{N} is defined by
\left.\left(d_{w} u\right)\right|_{I_{n}} = d_{w, I_{{n}}}\left(\left.u\right|_{I_{n}}\right), \quad \forall u \in S_{N} . |
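The defining relation (3.3) is a small local linear problem: testing against a basis of \mathbb{P}_{k-1}(I_n) yields a k\times k mass-matrix system for the coefficients of d_{w, I_n}u . The sketch below (our own Python illustration with a monomial basis; the data in the usage lines are assumptions) makes this concrete for k = 2 , together with the consistency property used later in the norm-equivalence argument: if u_0\in\mathbb{P}_2 and u_b agrees with u_0 at the endpoints, then d_w u = u_0^\prime .

```python
import numpy as np
from numpy.polynomial import polynomial as P

def weak_derivative(u0_coeffs, ub_left, ub_right, xl, xr, k=2):
    """Monomial coefficients of d_{w,I_n} u in P_{k-1}(I_n) on I_n = [xl, xr], cf. (3.3).

    u0_coeffs          : monomial coefficients of the interior component u_0 (low degree first)
    ub_left, ub_right  : values of the boundary component u_b at x_{n-1} and x_n
    """
    def integrate(c):                       # exact integral of a polynomial over [xl, xr]
        anti = P.polyint(c)
        return P.polyval(xr, anti) - P.polyval(xl, anti)

    # Mass matrix M_ij = (x^i, x^j)_{I_n} for the P_{k-1} monomial test functions.
    M = np.array([[integrate(P.polymul([0.0]*i + [1.0], [0.0]*j + [1.0]))
                   for j in range(k)] for i in range(k)])

    # Right-hand side of (3.3): -(u_0, v') + <u_b, n v>, with the sign convention stated above.
    rhs = np.zeros(k)
    for i in range(k):
        v = [0.0]*i + [1.0]
        rhs[i] = (-integrate(P.polymul(u0_coeffs, P.polyder(v)))
                  + ub_right * P.polyval(xr, v) - ub_left * P.polyval(xl, v))
    return np.linalg.solve(M, rhs)

# Consistency check with assumed data: u_0(x) = 1 + 2x + 3x^2 on [0.25, 0.75] and u_b = u_0
# at the endpoints; the weak derivative then equals the ordinary derivative u_0'(x) = 2 + 6x.
u0 = [1.0, 2.0, 3.0]
print(weak_derivative(u0, P.polyval(0.25, u0), P.polyval(0.75, u0), 0.25, 0.75))  # ~[2., 6.]
```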
Our WG-FEM scheme for the system of singularly perturbed reaction-diffusion problems (1.1) is given as follows.
Algorithm 1 The weak Galerkin scheme for the linear system of singularly perturbed reaction-diffusion problems. |
The WG-FEM for the problem (1.1) is to find \textbf{u}_N = (u_1^N, \dots, u_\ell^N)\in [S_N^0]^\ell which solves the following: \begin{equation} a(\textbf{u}_N, \textbf{v}_N) = L(\textbf{v}_N), \quad \forall \textbf{v}_N = (v_1^N, \dots, v_\ell^N)\in [S_N^0]^\ell. \end{equation} | (3.4) |
Here, the bilinear and the linear forms are defined by, for any u_i^N = \{u_{i0}, u_{ib}\} ,
\begin{align} a( \textbf{u}_N, \textbf{v}_N)& = \sum\limits_{i = 1}^\ell \varepsilon_i^2\big(d_w u_i^N, d_w v_i^N\big)+ \sum\limits_{i = 1}^\ell \sum\limits_{j = 1}^\ell \big( a_{ij} u_{j0}, v_{i0}\big)+\sum\limits_{i = 1}^\ell s(u_i^N, v_i^N) , \\ s(u_i^N, v_i^N)& = \sum\limits_{n = 1}^{N} \big\langle \varrho_n (u_{i0}-u_{ib}), v_{i0}-v_{ib}\big\rangle_{\partial I_n}, \\ L( \textbf{v}_N)& = \sum\limits_{i = 1}^\ell \big(g_i, v_{{i0}}\big), \end{align} | (3.5) |
where \varrho_n \geq 0, \; n = 1, \dots, N , is the penalization parameter associated with the node x_n , defined as follows:
\begin{align} \varrho_n = \begin{cases} {1}, & \text{for} \quad I_n \subset \varOmega_\lambda = [\lambda_\ell, 1-\lambda_\ell], \\ \cfrac{ N}{\ln N}, & \text{for}\quad I_n \subset \varOmega\setminus\varOmega_\lambda. \end{cases} \end{align} | (3.6) |
Choosing the penalty parameter in stabilized numerical methods is an important issue for uniform convergence estimates. Usually, the penalization parameter depends on the perturbation parameters; for example, \varrho_n = \varepsilon_\ell^2 h_n^{-1} is taken in the WG finite element schemes of [22,26,29]. However, a uniform convergence rate cannot be attained with a penalization parameter that depends on the perturbation parameters.
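For concreteness, here is a sketch of the choice (3.6) (our own; it assumes the node array and the transition point \lambda_\ell are available, e.g., from a Shishkin-mesh routine like the one sketched in Section 2, and the names nodes and lam_ell are ours, not the paper's):

```python
import numpy as np

def penalty_parameters(nodes, lam_ell, N):
    """Penalization parameters rho_n for the elements I_n = [x_{n-1}, x_n], following (3.6)."""
    rho = np.empty(N)
    for n in range(1, N + 1):
        xl, xr = nodes[n - 1], nodes[n]
        # rho_n = 1 on the coarse part Omega_lambda = [lambda_ell, 1 - lambda_ell],
        # and rho_n = N / ln N on the layer regions.
        inside = (xl >= lam_ell - 1e-14) and (xr <= 1.0 - lam_ell + 1e-14)
        rho[n - 1] = 1.0 if inside else N / np.log(N)
    return rho
```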
In the following analysis, we will repeatedly use the multiplicative trace inequality and the inverse inequality below.
\begin{align} \Vert v\Vert _{L^2(\partial I_n)}^2&\le C(h_n^{-1}\Vert v\Vert _{L^2(I_n)}^2+\Vert v\Vert _{L^2(I_n)}\Vert v'\Vert _{L^2(I_n)}), \quad \forall v\in H^1(I_n), \end{align} | (4.1) |
\begin{align} \Vert v_N\Vert _{L^p(\partial I_n)}&\le C h_n^{-1/p}\Vert v_N\Vert _{L^p(I_n)}, \quad \forall 1\le p\le \infty , \quad \forall v_N\in \mathbb {P}_k(I_n). \end{align} | (4.2) |
We introduce the \mathcal{E} -weighted energy norm \Vert|\cdot \Vert| in [S_N^0]^\ell as follows: for \textbf{v} = (v_1^N, \dots, v_\ell ^N)^T = (\{v_{10}, v_{1b}\}, \dots, \{v_{\ell 0}, v_{\ell b}\})^T \in [S_N^0]^\ell ,
\begin{align} \Vert| \textbf{v}\Vert|^2& = \sum\limits_{i = 1}^\ell \varepsilon_i^2 \Vert d_w v_i^N\Vert^2+\eta \sum\limits_{i = 1}^\ell \Vert v_{i0}\Vert^2+\sum\limits_{i = 1}^\ell s( v_i^N, v_i^N), \end{align} | (4.3) |
where \eta is the coercivity constant of \textbf{A} .
We also introduce the discrete H^1 energy-like norm \Vert|\cdot\Vert|_{\varepsilon} in S_N^\ell+H^1(\Omega)^\ell defined as
\begin{align} \Vert| \textbf{v}\Vert|_\varepsilon^2& = \sum\limits_{i = 1}^\ell \varepsilon_i^2 \Vert v_{i0}^\prime \Vert^2+\eta \sum\limits_{i = 1}^\ell \Vert v_{i0}\Vert^2+\sum\limits_{i = 1}^\ell s( v_i^N, v_i^N), \end{align} | (4.4) |
where v_{i0}^\prime is the ordinary derivative of the function v_{i0}(x) .
In the next lemma, we show that the norms \Vert|\cdot\Vert| and \Vert|\cdot\Vert|_\varepsilon defined by (4.3) and (4.4), respectively, are equivalent on the weak Galerkin finite element space [S_N^0]^\ell .
Lemma 4.1. Let \textbf{v}_N\in [S_N^0]^\ell . Then there are two positive constants C_l and C_s such that
\begin{equation} C_l \Vert| \boldsymbol{v}_N\Vert|\leq \Vert| \boldsymbol{v}_N\Vert|_\varepsilon \leq C_s \Vert| \boldsymbol{v}_N\Vert|. \end{equation} | (4.5) |
Proof. For any v^N_i = \{ v_{i0}, v_{ib}\}\in S_N^0 , by the definition of weak derivative (3.3) and integration by parts we arrive at
\begin{equation} \big( d_w v_i^N, w\big)_{I_n} = \big( v_{i0}^\prime , w\big)_{I_n}+\big\langle v_{ib}- v_{i0}, w \textbf{n} \big\rangle_{\partial I_n}, \quad \forall w \in \mathbb{P}_{k-1}(I_n). \end{equation} | (4.6) |
Choosing w = d_w v_i^N in the above Eq (4.6) yields
\begin{align*} \Vert d_w v_i^N\Vert^2_{I_n}& = \big( v_{i0}^\prime, d_w v_i^N\big)_{I_n}+\big\langle v_{ib}- v_{i0}, d_w v_i^N \textbf{n}\big\rangle_{\partial I_n}. \end{align*} |
Summing up the above equation over all I_n\in \mathcal{T}_N , and using the inverse inequality (4.2), we obtain
\begin{align*} \Vert d_w v_i^N\Vert^2 \leq C ( \Vert v_{i0}^\prime \Vert^2 + \sum\limits_{n = 1}^N h_n^{-1} \Vert v_{ib}- v_{i0} \Vert _{\partial I_n}^2)^{1/2} \Vert d_w v_i^N\Vert. \end{align*} |
Therefore, we have
\begin{align} \Vert d_w v_i^N\Vert^2 \leq C( \Vert v_{i0}^\prime \Vert^2 + \sum\limits_{n = 1}^N h_n^{-1} \Vert v_{ib}- v_{i0} \Vert _{\partial I_n}^2). \end{align} | (4.7) |
From the penalty parameter (3.6), we have
\begin{align} \cfrac{\varepsilon_i^2 h_n^{-1}}{\varrho_n} \leq \cfrac{\varepsilon_i h_n^{-1}}{\varrho_n}\leq C, \quad \mbox{ for}\; n = 1, \dots, N . \end{align} | (4.8) |
To see this, note that the smallest possible mesh width is h_n = H_0 = \cfrac{2(\ell+1)\lambda_1}{N} while \varrho_n = N/\ln N on the layer part, so that, when \lambda_1 = \cfrac{\sigma \varepsilon_1}{\alpha}\ln N , we get \cfrac{\varepsilon_1 h_n^{-1}}{\varrho_n} = \cfrac{\varepsilon_1 \ln N}{2(\ell+1)\lambda_1} = \cfrac{\alpha}{2(\ell +1) \sigma} = : C . Hence, using (4.8), we obtain
\begin{align*} \sum\limits_{n = 1}^{N}\varepsilon_i^2 h_n^{-1}\Vert v_{ib}- v_{i0} \Vert_{\partial I_n}^2 = \sum\limits_{n = 1}^{N}\cfrac{\varepsilon_i^2 h_n^{-1}}{\varrho_n} \varrho_n\Vert v_{ib}- v_{i0} \Vert_{\partial I_n}^2\leq C s( v_i^N, v_i^N), \end{align*} |
which together with (4.7) implies that
\begin{equation} \varepsilon_i^2\Vert d_w v_i^N\Vert^2\leq 2 \Big(\varepsilon_i^2 \Vert v_{i0}^\prime \Vert^2+ s( v_i^N, v_i^N)\Big). \end{equation} | (4.9) |
On the other hand, taking w = v_{i0}^\prime in the Eq (4.6) yields
\Vert v_{i0}^\prime \Vert^2_{I_n} = \big( v_{i0}^\prime, d_w v_i^N\big)_{I_n}-\big\langle v_{ib}- v_{i0}, v_{i0}^\prime \textbf{n} \big\rangle_{\partial I_n}. |
Summing up the above equation over all I_n\in \mathcal{T}_N , using the inverse inequality (4.2), we have
\begin{align*} \Vert v_{i0}^\prime \Vert^2 \leq C ( \Vert d_w v_i^N \Vert^2 + \sum\limits_{n = 1}^N h_n^{-1} \Vert v_{ib}- v_{i0}\Vert _{\partial I_n}^2)^{1/2} \Vert v_{i0}^\prime\Vert. \end{align*} |
Therefore, we have
\begin{align} \Vert v_{i0}^\prime \Vert^2 \leq C( \Vert d_w v_i^N\Vert^2+\sum\limits_{n = 1}^N h_n^{-1} \Vert v_{ib}- v_{i0}\Vert_{\partial I_n}^2). \end{align} | (4.10) |
With the help of (4.8), we obtain
\begin{equation} \varepsilon_i ^2 \Vert v_{i0}^\prime \Vert^2 \leq C \Big(\varepsilon_i^2 \Vert d_w v_i^N\Vert^2+ s( v_i^N, v_i^N)\Big). \end{equation} | (4.11) |
We obtain the desired result (4.5) in view of the inequalities (4.9) and (4.11) and the definition of the norms \Vert|\cdot\Vert| and \Vert|\cdot \Vert|_\varepsilon . Thus we complete the proof.
We next show the coercivity of the bilinear form a(\cdot, \cdot) on [S_N^0]^\ell in the energy norm \Vert|\cdot\Vert| defined by (4.3).
Lemma 4.2. Let \textbf{v}_N \in [S_N^0]^\ell . Then there holds
\begin{equation} a( \boldsymbol{v}_N, \boldsymbol{v}_N)\geq \Vert| \boldsymbol{v}_N\Vert|^2. \end{equation} | (4.12) |
Proof. Using the coercivity (2.4) of the reaction matrix \bf{A} , we have
\begin{align*} a( \textbf{v}_N, \textbf{v}_N)& = \sum\limits_{i = 1}^\ell \varepsilon_i^2\Vert d_w v_i^N\Vert^2+ \sum\limits_{i = 1}^\ell \sum\limits_{j = 1}^\ell \big(a_{ij}{ v_{j0}, v_{i0}}\big)+ \sum\limits_{i = 1}^\ell s( v_i^N, v_i^N) \\ &\geq \sum\limits_{i = 1}^\ell \varepsilon_i^2\Vert d_w v_i^N\Vert^2+ \eta \sum\limits_{i = 1}^\ell \Vert {v_{i0}}\Vert^2+ \sum\limits_{i = 1}^\ell s( v_i^N, v_i^N)\\ & = \Vert| \textbf{v}_N\Vert|^2. \end{align*} |
The proof is completed.
In light of Lemma 4.2, we deduce that
\Vert| \textbf{u}_N\Vert| \leq \Vert\textbf{g}\Vert, |
which in turn implies that the problem (3.4) has a unique solution. Since (3.4) is a finite-dimensional square linear system, existence follows from uniqueness.
As a result of Lemma 4.1 and Lemma 4.2, we conclude that the bilinear form a(\cdot, \cdot) is also coercive in the energy like norm \Vert|\cdot \Vert|_\varepsilon defined by (4.4).
Lemma 4.3. Let \textbf{v}_N, \textbf{w}_N \in [S_N^0]^\ell . Then there exist positive constants C_c and C_e such that
\begin{align} a( \boldsymbol{v}_N, \boldsymbol{w}_N)&\leq C_c \Vert| \boldsymbol{v}_N\Vert|_\varepsilon \Vert| \boldsymbol{w}_N\Vert|_\varepsilon, \end{align} | (4.13) |
\begin{align} a( \boldsymbol{v}_N, \boldsymbol{v}_N)&\geq C_e \Vert| \boldsymbol{v}_N\Vert|^2_\varepsilon. \end{align} | (4.14) |
In this section, we carry out the error analysis of the proposed numerical scheme applied to the problem (1.1) in the energy norm associated with the bilinear form. We will show that the WG-FEM solution converges uniformly in the energy norm with respect to the perturbation parameters. For the uniform convergence analysis on the Shishkin mesh, we will use a special interpolation operator given in [11]. On each interval I_n , we introduce the set of k+1 nodal functionals N_m, \; m = 0, 1, \dots, k , defined as follows: for any v\in C(I_n) ,
\begin{align*} N_0(v)& = v(x_{n-1}), \quad N_k(v) = v(x_{n}), \\ N_m(v)& = \cfrac{1}{h_n^m}\int^{x_n}_{x_{n-1}}(x-x_{n-1})^{m-1}v(x)\, d x, \quad m = 1, \dots, k-1. \end{align*} |
A local interpolation \mathcal{I} :H^1(I_n)\to \mathbb{P}_{{k}}(I_n) is now defined by
\begin{equation} N_m ( \mathcal{I} v-v) = 0, \quad m = 0, 1, \dots, k. \end{equation} | (5.1) |
The local interpolation operator \mathcal{I} can be used to construct a continuous global interpolation.
Since \mathcal{I} v|_{I_n} is continuous on I_n and belongs to H^1(I_n) , we denote \mathcal{I} v|_{\partial I_n} by \mathcal{I} v|_{I_n} for simplicity. From this fact, we observe that for any v\in H^1(I_n) we have
\begin{align} d_w( \mathcal{I} v) = ( \mathcal{I} v)^\prime. \end{align} | (5.2) |
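For k = 2 , the local interpolation (5.1) amounts to a 3\times 3 linear system: the nodal functionals N_0, N_1, N_2 are enforced on a monomial representation of \mathcal{I}v . A sketch of this construction (our own; the function in the usage line is an assumed example) is given below; by construction it reproduces quadratics exactly.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import quad

def local_interpolant(v, xl, xr):
    """Monomial coefficients of the P_2 interpolant defined by (5.1) on I_n = [xl, xr]."""
    h = xr - xl

    def nodal(m, f, f_is_poly=False):
        # N_0(f) = f(x_{n-1}), N_2(f) = f(x_n), and N_1(f) = (1/h) * integral of f over I_n.
        if m == 0:
            return P.polyval(xl, f) if f_is_poly else f(xl)
        if m == 2:
            return P.polyval(xr, f) if f_is_poly else f(xr)
        if f_is_poly:
            anti = P.polyint(f)
            return (P.polyval(xr, anti) - P.polyval(xl, anti)) / h
        return quad(f, xl, xr)[0] / h

    # Rows: N_m applied to the monomials 1, x, x^2; right-hand side: N_m(v), cf. (5.1).
    A = np.array([[nodal(m, [0.0]*j + [1.0], f_is_poly=True) for j in range(3)]
                  for m in range(3)])
    b = np.array([nodal(m, v) for m in range(3)])
    return np.linalg.solve(A, b)

# Example usage with an assumed function: a quadratic is reproduced exactly.
print(local_interpolant(lambda x: 1 + 2 * x + 3 * x ** 2, 0.25, 0.75))   # ~[1., 2., 3.]
```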
Lemma 5.1. [11] For any w\in H^{k+1}(I_n), I_n \in \mathcal{T}_N , the interpolation \mathcal{I} w defined by (5.1) has the following estimates:
\begin{align} |w-\mathcal {I}w &|_{l, I_n}\le Ch_n^{k+1-l}|w|_{k+1, I_n}, \quad l = 0, 1, \ldots , k+1, \end{align} | (5.3) |
\begin{align} \Vert w-\mathcal {I}w&\Vert _{L_{\infty }(I_n)}\le Ch_n^{{k+1}}|w|_{k+1, \infty , I_n}, \end{align} | (5.4) |
where h_n is the length of element I_n and C is independent of h_n, and { \varepsilon_i, \; i = 1, \dots, \ell. }
Lemma 5.2. Let \mathcal{I} \boldsymbol{R} and \mathcal{I} \boldsymbol{L} be the interpolations of the regular part \boldsymbol{R} and the layer part \boldsymbol{L} of the solution { \boldsymbol{u} \in H^{k+1}(\varOmega) } on the piecewise-uniform Shishkin mesh, respectively. Assume also that \varepsilon_\ell \ln N \leq \ell \alpha /(2(\ell+1)\sigma) and let \varOmega_\lambda = [\lambda_\ell, 1-\lambda_\ell] . Then, we have \mathcal{I} \boldsymbol{u} = \mathcal{I} \boldsymbol{R}+ \mathcal{I} \boldsymbol{L} and the following interpolation estimates are satisfied for i = 1, \dots, \ell
\begin{align} \Vert (R_i-\mathcal {I} R_i)^{(l)}\Vert _{L^2(\varOmega )}&\le CN^{l-{(k+1)}}, \quad l = 0, 1, 2, \end{align} | (5.5) |
\begin{align} \Vert L_i-\mathcal {I}L_i\Vert _{L^2(\varOmega\setminus \varOmega_\lambda)}&\le C\varepsilon ^{1/2}(N^{-1}\ln N)^{{k+1}}, \end{align} | (5.6) |
\begin{align} N^{-1}\Vert (\mathcal {I}L_i)^\prime\Vert _{L^2(\varOmega _\lambda)}+ \Vert \mathcal {I}L_i\Vert _{L^2(\varOmega _\lambda)}&\le C(\varepsilon^{1/2}+N^{-1/2})N^{-\sigma}, \end{align} | (5.7) |
\begin{align} \Vert L_i\Vert _{L^{\infty }(\varOmega _\lambda)}+\varepsilon ^{-1/2}\Vert L_i\Vert _{L^2(\varOmega _\lambda)}&\le C N^{-\sigma}, \end{align} | (5.8) |
\begin{align} \Vert (L_i)^{(l)}\Vert _{L^2(\varOmega _\lambda)}&\le C\varepsilon ^{1/2-l}N^{-\sigma}, \quad l = 1, 2, \end{align} | (5.9) |
\begin{align} \Vert L_i- \mathcal {I}L_i\Vert _{L^2(\varOmega _\lambda)}&\le C(\varepsilon^{1/2}+N^{-1/2})N^{-\sigma}, \end{align} | (5.10) |
where \varepsilon^{1/2}: = \varepsilon_1^{1/2}+\dots+\varepsilon_\ell^{1/2} . Furthermore, the following estimates hold true:
\begin{align} \Vert (L_i-\mathcal {I}L_i)^{(l)}\Vert _{L^2(\varOmega _\lambda)}& \le C\varepsilon ^{1/2-l}N^{-\sigma}, \quad l = 1, 2, \end{align} | (5.11) |
\begin{align} \Vert (L_i-\mathcal {I}L_i)^{(l)}\Vert _{L^2(\varOmega\setminus\varOmega_\lambda)}& \le C\varepsilon ^{1/2-l}(N^{-1}\ln N)^{{k+1-\ell}}, \quad l = 1, 2. \end{align} | (5.12) |
Proof. The linearity of the interpolation implies that \mathcal{I} \textbf{u} = \mathcal{I}(\textbf{R}+ \textbf{L}) = \mathcal{I} \textbf{R}+ \mathcal{I} \textbf{L}. Applying the estimate (5.3) together with the bounds (2.8) for the derivatives of the regular components R_i from Theorem 2.3, we obtain
\begin{align*} \Vert (R_i- \mathcal{I} R_i)^{(l)}\Vert \leq C N^{l-3}\vert R_i\vert_{{k+1}, \varOmega}\leq C N^{l-{(k+1)}}, \quad l = 0, 1, 2, \quad i = 1, \dots, \ell. \end{align*} |
This completes the proof of estimates (5.5).
Using the fact that \mathcal{B}_{\varepsilon_i}^\alpha(x)\leq \mathcal{B}_{\varepsilon_\ell}^\alpha (x) for i = 1, \dots, \ell and \lambda_\ell = \cfrac{\sigma \varepsilon_\ell}{\alpha} \ln N , we have
\begin{align*} \Vert L_i\Vert_{L^\infty(\varOmega_\lambda)}&\leq C \max\limits_{[\lambda_\ell, 1-\lambda_\ell]}\sum\limits_{m = i}^\ell\mathcal{B}_{\varepsilon_{{m}}}^\alpha(x) \\ &\leq C \max\limits_{[\lambda_\ell, 1-\lambda_\ell]} \Big( \exp (-\alpha x/\varepsilon_\ell)+\exp (-\alpha (1-x)/\varepsilon_\ell)\Big)\\ &\leq C N^{-\sigma}. \end{align*} |
The L^2 - norm estimate of the layer part of the solution on the sub-interval \varOmega_\lambda follows from
\begin{align*} \Vert L_i\Vert_{L^2(\varOmega_\lambda)}^2&\leq C \int_{\lambda_\ell}^{1-\lambda_\ell} (\sum\limits_{m = i}^\ell \mathcal{B}_{\varepsilon_{{m}}}^\alpha (x) )^2\, d x \\ &\leq C \sum\limits_{m = i}^\ell \int_{\lambda_\ell}^{1-\lambda_\ell}\Big( \exp (-2 \alpha x/\varepsilon_{{m}})+\exp (-2 \alpha (1-x)/\varepsilon_{{m}})\Big)\, d x\\ &\leq C \varepsilon N^{-2\sigma}. \end{align*} |
Hence, from the above inequalities we have
\begin{align*} \Vert L_i\Vert_{L^\infty(\varOmega_\lambda)}+\varepsilon^{-1/2} \Vert L_i\Vert_{L^2(\varOmega_\lambda)}\leq C N^{-\sigma}. \end{align*} |
Thus, we complete the proof of the estimate (5.8).
We also have
\begin{align*} \Vert(L_i)^{(l)}\Vert_{L^2(\varOmega_\lambda)}^2&\leq C \int_{\lambda_\ell}^{1-\lambda_\ell}\Big((\sum\limits_{m = i}^\ell \mathcal{B}_{\varepsilon_{{m}}}^\alpha (x) )^{(l)}\Big)^2\, d x \\ &\leq C \sum\limits_{m = i}^\ell\varepsilon_{{m}}^{-2l}\int_{\lambda_\ell}^{1-\lambda_\ell}\Big( \exp (-2 \alpha x/\varepsilon_{{m}})+\exp (-2 \alpha (1-x)/\varepsilon_{{m}})\Big)\, d x\\ &\leq C \varepsilon^{1-2l} N^{-2\sigma}. \end{align*} |
This proves the estimate (5.9).
Due to (5.3) of Lemma 5.1 and the bounds for derivatives (2.9), we obtain at once
\begin{align*} &\Vert L_i- \mathcal{I} L_i\Vert_{L^2(\varOmega\setminus\varOmega_\lambda)}^2 = \sum\limits_{I_n \subset \varOmega\setminus\varOmega_\lambda}\Vert L_i- \mathcal{I} L_i\Vert_{L^2(I_n)}^2\leq \sum\limits_{I_n \subset \varOmega\setminus\varOmega_\lambda} h_n^{{2(k+1)}} \Vert L_i^{({k+1})}\Vert_{L^2(I_n)}^2 \\ \leq& C \sum\limits_{s = 0}^{\ell-1} H_s^{{2(k+1)}}\varepsilon_i^{-2}\Big( \int_{\lambda_s}^{\lambda_{s+1}}\Big(\sum\limits_{m = 1}^\ell \varepsilon_m^{-2} \mathcal{B}_{\varepsilon_m}^\alpha (x) \Big)^2\, d x+ \int_{1-\lambda_{\ell-s}}^{1-\lambda_{\ell-1-s}}\Big(\sum\limits_{m = 1}^\ell \varepsilon_m^{-2} \mathcal{B}_{\varepsilon_m}^\alpha (x) \Big)^2\, d x\Big) \\ \leq&C \sum\limits_{m = 1}^\ell\sum\limits_{s = 0}^{\ell-1} \Big[\cfrac{2(\ell+1) (\lambda_{s+1}-\lambda_s)}{N}\Big]^{{2(k+1)}} \varepsilon_m^{-{2(k+1)}} \varepsilon_m \leq C \varepsilon (N^{-1}\ln N)^{{2(k+1)}}. \end{align*} |
Thus, the estimate (5.6) is proved.
For the proof of (5.7) we follow [11]. An inverse estimate yields that
\begin{align*} N^{-1}\Vert( \mathcal{I} L_i)^\prime\Vert_{L^2(\varOmega_\lambda)}\leq C \Vert \mathcal{I} L_i\Vert_{L^2(\varOmega_\lambda)}. \end{align*} |
We will derive a bound for \Vert \mathcal{I} L_i\Vert_{L^2(\varOmega_\lambda)} . For the interval I_n = (x_{n-1}, x_n) , we have the estimate for the local nodal functional N_m(L_i) as
|N_m(L_i)|\leq C \sum\limits_{p = i}^\ell\Big( \exp (-\alpha x_{n-1}/\varepsilon_p)+\exp(-\alpha (1-x_n)/\varepsilon_p)\Big). |
The local representation
\mathcal{I} L_i|_{I_n} = \sum\limits_{m = 0}^k N_m(L_i)\phi_m |
implies that
\begin{align} \begin{split} \Vert \mathcal{I} L_i\Vert_{L^2(I_n)}^2&\leq \sum\limits_{m = 0}^k |N_m(L_i)|^2\Vert\phi_m \Vert_{L^2(I_n)}^2\\ &\leq C N^{-1} \sum\limits_{p = i}^\ell\Big( \exp (-2 \alpha x_{n-1}/\varepsilon_p)+\exp(-2 \alpha (1-x_n)/\varepsilon_p)\Big), \end{split} \end{align} | (5.13) |
where we use the fact that \Vert\phi_m \Vert_{L^2(I_n)}^2\leq CN^{-1} . Summing up over all I_n \subset \varOmega_\lambda yields that
\begin{align*} \sum\limits_{n = \frac{\ell N}{2(\ell+1)}+1}^{\frac{(\ell+2) N}{2(\ell+1)}} \Vert \mathcal{I} L_i\Vert_{L^2(I_n)}^2\leq C N^{-1} \sum\limits_{n = \frac{\ell N}{2(\ell+1)}+1}^{\frac{(\ell+2) N}{2(\ell+1)}} \sum\limits_{p = i}^\ell\Big( \exp (-2 \alpha x_{n-1}/\varepsilon_p)+\exp(-2 \alpha (1-x_n)/\varepsilon_p)\Big). \end{align*} |
Since the mesh size on \varOmega_\lambda is H_\ell = H_{\ell+1} , the term in the parenthesis on the right hand side of the above inequality can be written as
\begin{align*} &\exp (-2\alpha x_{n-1}/\varepsilon_p)+\exp(-2\alpha(1-x_n)/\varepsilon_p)\\ = &\exp ((-2\alpha x_{n-1}+2\alpha x_n-2 \alpha x_n)/\varepsilon_p)+\exp((-2\alpha(1-x_n)+2\alpha x_{n-1}-2 \alpha x_{n-1})/\varepsilon_p) \\ \leq & \exp (2 H_\ell \alpha/\varepsilon_p)\Big( \exp (-2\alpha x/\varepsilon_p)+\exp(-2\alpha(1-x)/\varepsilon_p)\Big)\quad \text{for}\quad x_{n-1} < x < x_n. \end{align*} |
Integrating the above inequality on I_n\subset \varOmega_\lambda and using the fact that H_\ell = \mathcal{O}(N^{-1}) , we have
\begin{align*} & N^{-1} \bigg(\exp (-2\alpha x_{n-1}/\varepsilon_p)+\exp(-2\alpha(1-x_n)/\varepsilon_p)\bigg)\\ \leq & \exp (2H_\ell \alpha/\varepsilon_p)\int_{x_{n-1}}^{x_n} \bigg( \exp (-2\alpha x/\varepsilon_p)+\exp(-2\alpha(1-x)/\varepsilon_p)\bigg)\, d x. \end{align*} |
Summing up the above inequality for n = \frac{\ell N}{2(\ell+1)}+1, \dots, \frac{(\ell+2) N}{2(\ell+1)}-1 leads to
\begin{align*} N^{-1} \sum\limits_{n = \frac{\ell N}{2(\ell+1)}+1}^{\frac{(\ell+2) N}{2(\ell+1)}-1} \sum\limits_{p = i}^\ell\bigg(\exp (-2\alpha x_{n-1}/\varepsilon_p)+\exp(-2\alpha(1-x_n)/\varepsilon_p)\bigg) \leq & C\varepsilon N^{-2\sigma}. \end{align*} |
It remains to bound on the last interval (x_{\frac{(\ell+2) N}{2(\ell+1)}-1}, x_{\frac{(\ell+2) N}{2(\ell+1)}}) . From the inequality (5.13), we have
\begin{align*} \Vert \mathcal{I} L_i\Vert_{L^2\big(I_{\frac{(\ell+2) N}{2(\ell+1)}}\big)}^2 &\leq N^{-1} \sum\limits_{p = i}^\ell\bigg( \exp (-2\alpha x_{\frac{(\ell+2) N}{2(\ell+1)}-1}/\varepsilon_p)+\exp(-2\alpha(1-x_{\frac{(\ell+2) N}{2(\ell+1)}})/\varepsilon_p)\bigg) \\ &\leq C N^{-(1+2\sigma)}. \end{align*} |
These last two estimates give the desired bound. Thus the estimate (5.7) is proved.
From (5.7) and (5.8), we get
\Vert L_i- \mathcal {I}L_i\Vert _{L^2(\varOmega _\lambda)}\leq \Vert L_i \Vert_{L^2(\varOmega _\lambda)} +\Vert \mathcal {I}L_i\Vert _{L^2(\varOmega _\lambda)} \le C(\varepsilon^{1/2}+N^{-1/2})N^{-\sigma}, |
which completes the proof of (5.10).
Using the triangle inequality and (5.7) and (5.9), we have
\begin{align*} \Vert (L_i-\mathcal {I} L_i )'\Vert _{L^2(\varOmega _\lambda)}\le \Vert L_i'\Vert _{L^2(\varOmega _\lambda)}+\Vert (\mathcal {I}L_i)'\Vert _{L^2(\varOmega _\lambda)} \le C\varepsilon ^{-1/2}N^{-\sigma}. \end{align*} |
Similarly, using the inverse estimate, we get
\begin{align*} \Vert (L_i-\mathcal {I}L_i)''\Vert _{L^2(\varOmega _\lambda)}&\le \Vert L_i''\Vert _{L^2(\varOmega _\lambda)}+C N\Vert (\mathcal {I}L_i)'\Vert _{L^2(\varOmega _\lambda)}\\&\le C\varepsilon ^{-3/2}[1+(\varepsilon N)^{3/2}+(\varepsilon N)^{2}]N^{-(k+1)}\\&\le C\varepsilon ^{-3/2}N^{-\sigma}. \end{align*} |
Hence, we complete the proof of (5.11).
By (5.3) and (2.10), we have for l = 1, 2 ,
\begin{align*} &\Vert (L_i-\mathcal {I}L_i)^{(l)}\Vert _{L^2(\varOmega \setminus\varOmega _\lambda)}^2\\ = &\sum _{I_n\subset \varOmega \setminus\varOmega _\lambda}\Vert (L_i-\mathcal {I}L_i)^{(l)}\Vert _{L^2(I_n)}^2\le \sum _{I_n \subset \varOmega\setminus \varOmega _\lambda}Ch_n^{2({k+1}-l)}\Vert L_i^{({k+1})}\Vert _{L^2(I_n)}^2\\ \leq & C \sum\limits_{s = 0}^{\ell-1} H_s^{2({k+1}-l)}\varepsilon_i^{-2}\Big( \int_{\lambda_s}^{\lambda_{s+1}}\Big(\sum\limits_{m = 1}^\ell \varepsilon_m^{-2} \mathcal{B}_{\varepsilon_m}^\alpha (x) \Big)^2\, d x+ \int_{1-\lambda_{\ell-s}}^{1-\lambda_{\ell-1-s}}\Big(\sum\limits_{m = 1}^\ell \varepsilon_m^{-2} \mathcal{B}_{\varepsilon_m}^\alpha (x) \Big)^2\, d x\Big) \\ \leq&C\sum\limits_{m = 1}^\ell\sum\limits_{s = 0}^{\ell-1} \Big[\cfrac{2(\ell+1) (\lambda_{s+1}-\lambda_s)}{N}\Big]^{2({k+1}-l)} \varepsilon_m^{-{2(k+1)}} \varepsilon_m \leq C \varepsilon^{1-2l} (N^{-1}\ln N)^{2({k+1}-l)}, \end{align*} |
which shows (5.12). Thus we complete the proof.
The exact solution of problem (1.1) does not satisfy the WG-FEM scheme (3.4), and hence the WG-FEM lacks consistency. Consequently, the classical Galerkin orthogonality is lost. As a result, we follow techniques different from those used in the standard finite element analysis to derive the error estimates.
Now Strang's second lemma provides a quasi-optimal bound for \Vert| \textbf{u}- \textbf{u}_N\Vert|_\varepsilon .
Theorem 5.3. Let \boldsymbol{u} and \boldsymbol{u}_N be the solutions of problems (1.1) and (3.4) respectively. Then there exists a positive constant C independent of N and \varepsilon_i such that
\begin{align} \Vert| \boldsymbol{u}- \boldsymbol{u}_N \Vert|_\varepsilon \leq C \Big( \inf _{ \boldsymbol{v}_N \in [S_N^0]^\ell} \Vert| \boldsymbol{u} - \boldsymbol{v}_N \Vert|_\varepsilon +\sup\limits_{\boldsymbol{w}_N\in [S_N^0]^\ell } \cfrac{\vert a( \boldsymbol{u}, \boldsymbol{w}_N)-L(\boldsymbol{w}_N)\vert}{\Vert|\boldsymbol{w}_N \Vert|_\varepsilon}\Big), \end{align} | (5.14) |
where a(\cdot, \cdot) is the bilinear form given by (3.5).
First, we will establish some error equations which will be needed in the error analysis below.
Lemma 5.4. Let \boldsymbol{u} = (u_1, \dots, u_\ell) be the solution of the problem (1.1). Then for any \boldsymbol{v}_N = (v_1^N, \dots, v_\ell^N) = (\{v_{10}, v_{1b}\}, \dots, \{v_{\ell 0}, v_{\ell b}\})\in [S_N^0]^\ell , we have
\begin{align} -\varepsilon^2_i \Big( u_i^{\prime \prime}, v_{i0}\Big)& = \varepsilon^2_i\Big(d_w( \mathcal{I} u_i), d_w v_i^N \Big)-T_1( u_i, v_i^N), \quad i = 1, \dots, \ell, \end{align} | (5.15) |
\begin{align} \sum\limits_{i = 1}^\ell \sum\limits_{j = 1}^\ell \big( a_{ij}& u_j, v_{i0}\big) = \sum\limits_{i = 1}^\ell \sum\limits_{j = 1}^\ell \big( a_{ij} \mathcal{I} u_j, v_{i0}\big) -T_2( \boldsymbol{u}, \boldsymbol{v}_N), \end{align} | (5.16) |
where
\begin{align} T_1( u_i, v_i^N)& = \varepsilon^2_i \big\langle ( u_i -\mathcal{I} u_i)^\prime, (v_{i0}-v_{ib}) \boldsymbol{n} \big\rangle, \end{align} | (5.17) |
\begin{align} T_2( \boldsymbol{u}, \boldsymbol{v}_N)& = \sum\limits_{i = 1}^\ell \sum\limits_{j = 1}^\ell \big( a_{ij}( \mathcal{I} u_j- u_j), v_{i0}\big). \end{align} | (5.18) |
Proof. For any \textbf{v}_N\in [S_N^0]^\ell , using the commutative property (5.2) of the interpolation operator we have
\begin{equation} \Big(d_w( \mathcal{I} u_i), d_w v_i^N\Big)_{I_n} = \Big( ( \mathcal{I} u_i)^\prime, d_w v_i^N\Big)_{I_n}, \quad \forall I_n \in \mathcal{T}_N. \end{equation} | (5.19) |
Using the definition of the weak derivative (3.3) and integration by parts, we have
\begin{align} \Big( d_w v_i^N, ( \mathcal{I} u_i)^\prime\Big)_{I_n}& = -\Big( v_{i0}, ( \mathcal{I} u_i)^{\prime \prime}\Big)_{I_n}+\big\langle v_{ib}, \textbf{n} ( \mathcal{I} u_i)^\prime\big\rangle_{\partial I_n} \\ & = \Big( v_{i0}^\prime, ( \mathcal{I} u_i)^{\prime }\Big)_{I_n}-\big\langle v_{i0}-v_{ib}, \textbf{n} ( \mathcal{I} u_i)^\prime\big\rangle_{\partial I_n}. \end{align} | (5.20) |
From the definition of the interpolation and integration by parts, we obtain
\begin{align*} \Big( (u_i- \mathcal{I} u_i)^\prime, v_{i0}^\prime\Big)_{I_n} = -\Big(u_i- \mathcal{I} u_i, v_{i0}^{\prime \prime}\Big)_{I_n}+\big\langle u_i- \mathcal{I} u_i, \textbf{n} v_{i0}^\prime\big\rangle_{\partial I_n} = 0, \end{align*} |
which implies that
\begin{equation} \Big(( \mathcal{I} u_i)^{\prime }, v_{i0}^\prime\Big)_{I_n} = \Big( u_i^{\prime }, v_{i0}^\prime\Big)_{I_n}. \end{equation} | (5.21) |
We infer from the Eqs (5.19)–(5.21) that
\begin{equation} \Big(d_w( \mathcal{I} u_i), d_w v_i^N\Big)_{I_n} = \Big( u_i^{\prime }, v_{i0}^\prime\Big)_{I_n}-\big\langle v_{i0}-v_{ib}, \textbf{n} ( \mathcal{I} u_i)^\prime\big\rangle_{\partial I_n}. \end{equation} | (5.22) |
Summing up the Eq (5.22) over all interval I_n\in \mathcal{T}_N , we find
\begin{equation} \Big(d_w( \mathcal{I} u_i), d_w v_i^N\Big) = \Big( u_i^{\prime }, v_{i0}^\prime\Big)-\big\langle v_{i0}-v_{ib}, \textbf{n} ( \mathcal{I} u_i)^\prime\big\rangle. \end{equation} | (5.23) |
Using integration by parts, one can show that
- \Big( u_i^{\prime \prime}, v_{i0}\Big)_{I_n} = \Big( u_i^{\prime }, v_{i0}^\prime\Big)_{I_n}-\big\langle u_i^{\prime }, \textbf{n} v_{i0} \big\rangle_{\partial I_n}. |
Summing up the above equation over all interval I_n\in \mathcal{T}_N , we get
\begin{equation} \Big( u_i^{\prime }, v_{i0}^\prime\Big) = -\Big( u_i^{\prime \prime}, v_{i0}\Big)+\big\langle u_i^{\prime }, \textbf{n} ( v_{i0}-v_{ib})\big\rangle, \end{equation} | (5.24) |
where we used the fact that \big\langle u_i^{\prime }, \textbf{n} v_{ib}\big\rangle = 0. Finally, by plugging the Eq (5.24) into (5.23), we arrive at the desired result (5.15).
Lastly, the identity (5.16) clearly holds with T_2 defined by (5.18). We complete the proof.
The following lemma will be useful in the error analysis.
Lemma 5.5. Assume that \textbf{u} = (u_1, \dots, u_\ell), \; \mathit{\text{with}}\; u_i \in H^{k+1}(\varOmega) is the solution of the problem (1.1). Then we have the following estimate
\begin{align*} \sum\limits_{I_n \subset \varOmega}\Vert\theta_i^\prime \Vert_{L^2(\partial I_n)}^2\leq \begin{cases} C\varepsilon_i^{-2}(N^{-1}\ln N)^{2k-1}, & I_n \subset \varOmega\setminus \varOmega_\lambda, \\ C\varepsilon_i^{-2}N^{-2(k+1)}, & I_n \subset \varOmega_\lambda, \end{cases} \end{align*} |
where \theta_i = u_i- \mathcal{I} u_i for i = 1, \dots, \ell.
Proof. From the trace inequality (4.1), we can write
\begin{align*} \Vert \theta_i^\prime\Vert _{L^2(\partial I_n)}^2\leq C( h_n^{-1} \Vert \theta_i^\prime \Vert_{L^2( I_n)}^2+\Vert \theta_i^\prime \Vert _{L^2(I_n)}\Vert \theta_i^{\prime \prime} \Vert _{L^2( I_n)}). \end{align*} |
It remains to estimate \Vert \theta_i^\prime \Vert_{L^2(I_n)} and \Vert \theta_i^{\prime \prime} \Vert _{L^2(I_n)} , individually. From the estimate (5.5), one has
\begin{align} \begin{split} \Vert(R_i- \mathcal{I} R_i)^\prime \Vert_{L^2(\varOmega)}&\leq C N ^{-k}, \quad i = 1, 2, \dots, \ell, \\ \Vert(R_i- \mathcal{I} R_i)^{\prime \prime } \Vert_{L^2(\varOmega)}&\leq C N ^{1-k}, \quad i = 1, 2, \dots, \ell. \end{split} \end{align} | (5.25) |
With the help of the estimate (5.11) and (5.12) one can show that
\begin{align} \begin{split} \Vert(L_i - \mathcal{I} L_i)^\prime \Vert_{L^2(\varOmega_\lambda)}&\leq C\varepsilon_i^{-1/2}N^{-\sigma}, \quad i = 1, 2, \dots, \ell, \\ \Vert(L_i - \mathcal{I} L_i)^{\prime \prime} \Vert_{L^2(\varOmega_\lambda)}&\leq C\varepsilon_i^{-3/2}N^{-\sigma}, \quad i = 1, 2, \dots, \ell, \\ \Vert(L_i - \mathcal{I} L_i)^\prime \Vert_{L^2(\varOmega\setminus \varOmega_\lambda)}&\leq C\varepsilon_i^{-1/2}(N^{-1}\ln N)^{k}, \quad i = 1, 2, \dots, \ell, \\ \Vert(L_i - \mathcal{I} L_i)^{\prime \prime} \Vert_{L^2(\varOmega\setminus \varOmega_\lambda)}&\leq C\varepsilon_i^{-3/2}(N^{-1}\ln N)^{k-1}, \quad i = 1, 2, \dots, \ell. \end{split} \end{align} | (5.26) |
With the help of the above estimates, the fact that \sigma\geq k+1 , and the triangle inequality, one can conclude that
\begin{align} \begin{split} \sum\limits_{I_n \subset \varOmega} \Vert\theta_i^\prime \Vert_{L^2(I_n)} & \le\begin{cases} C\varepsilon_i^{-1/2} N^{-k}(\varepsilon_i^{1/2}+ \ln^k N), & I_n\subset \varOmega\setminus\varOmega_\lambda, \\ C\varepsilon_i^{-1/2} N^{-k}(\varepsilon_i^{1/2}+ N^{-1}), & I_n\subset \varOmega_\lambda, \end{cases} \end{split} \end{align} | (5.27) |
and
\begin{align*} \sum\limits_{I_n \subset \varOmega} \Vert \theta_i^{\prime \prime} \Vert_{L^2(I_n)} &\leq \begin{cases} C\varepsilon_i^{-3/2} N^{1-k}(\varepsilon_i^{3/2}+ \ln^{k-1} N), & I_n\subset \varOmega\setminus\varOmega_\lambda, \\ C\varepsilon_i^{-3/2} N^{1-k}(\varepsilon_i^{3/2}+ N^{-2}), & I_n\subset \varOmega_\lambda. \\ \end{cases} \end{align*} |
The desired result follows by combining the above estimates with the trace inequality (4.1) and the local mesh widths h_n . Thus, we complete the proof.
Lemma 5.6. Assume that u_i \in H^{k+1}(\varOmega) and the penalization parameter \varrho_n is given by (3.6). If \sigma \geq k+1 , then we have
\begin{align} T( \boldsymbol{u}, \boldsymbol{v}_N)\leq C ( \varepsilon^{1/2} (N^{-1}\ln N)^k+N^{-(k+1)}) \Vert| \boldsymbol{v}_N\Vert| _{\varepsilon}, \end{align} | (5.28) |
where T(\boldsymbol{u}, \boldsymbol{v}_N) = \sum_{i = 1}^\ell T_1(u_i, v_i^N)+T_2(\boldsymbol{u}, \boldsymbol{v}_N) and C is independent of N and \varepsilon_i, \; i = 1, \dots, \ell .
Proof. It follows from Cauchy-Schwarz inequality, Lemma 5.5 and the penalization parameter (3.6) that
\begin{align*} \begin{split} |T_1(u_i, v_i^N)| &\le \sum _{n = 1}^{N}\varepsilon_i^2 |\big\langle (u_i-\mathcal {I}u_i)', v_{i0}- v_{ib}\big\rangle _{\partial I_n}| \\&\le \sum _{n = 1}^{N}\varepsilon_i^2 \Vert (u_i-\mathcal {I}u_i)^\prime\Vert _{L^2(\partial I_n)}\Vert v_{i0}- v_{ib}\Vert _{L^2(\partial I_n)} \\&\le \left\{ \sum _{n = 1}^{N}\frac{\varepsilon_i^3}{\varrho_n}\Vert (u_i-\mathcal {I}u_i)'\Vert _{L^2(\partial I_n)}^2 \right\} ^{1/2}\left\{ \sum _{n = 1}^{N}\varrho_n\Vert v_{i0}- v_{ib} \Vert _{L^2(\partial I_n)}^2 \right\} ^{1/2} \\&\le \Big\{ \sum _{I_n\in \varOmega \setminus \varOmega_\lambda}\frac{\varepsilon_i^3}{\varrho_n}\Vert (u_i-\mathcal {I}u_i)'\Vert _{L^2(\partial I_n)}^2 \\ & \quad +\sum _{I_n\in \varOmega_\lambda}\frac{\varepsilon_i^3}{\varrho_n}\Vert (u_i-\mathcal {I}u_i)'\Vert _{L^2(\partial I_n)}^2 \Big\} ^{1/2} s^{1/2}( v_i^N, v_i^N)\\ &\le C \varepsilon^{1/2} (N^{-1}\ln N)^k s^{1/2}( v_i^N, v_i^N). \end{split} \end{align*} |
As a result
\begin{align} |T_1( \textbf{u}, \textbf{v}_N)|\leq \sum\limits_{i = 1}^\ell T_1(u_i, v_i^N)\leq C\varepsilon^{1/2} (N^{-1}\ln N)^k \Vert| \textbf{v}_N\Vert| _{\varepsilon}. \end{align} | (5.29) |
We next bound the term T_2(\textbf{u}, \textbf{v}_N) . We need to estimate \Vert u_i- \mathcal{I} u_i \Vert, \quad i = 1, \dots, \ell . Using the estimates (5.5)–(5.8) of Lemma 5.2 and Cauchy-Schwarz inequality, taking \sigma \geq k+1 , we get
\begin{align} \begin{split} \Vert u_i-\mathcal {I}u_i\Vert _{L^2(\varOmega )}&\le \Vert R_i-\mathcal {I} R_i\Vert _{L^2(\varOmega )} +\Vert L_i-\mathcal {I} L_i\Vert _{L^2(\varOmega\setminus \varOmega _\lambda)} \\&\quad +\, \Vert L_i\Vert _{L^2(\varOmega _\lambda)}+\Vert \mathcal {I} L_i\Vert _{L^2(\varOmega _\lambda)}\\&\le CN^{-(k+1)}[1+\varepsilon_\ell ^{1/2}(\ln N)^{k+1} \\&\quad +\, \varepsilon _\ell^{1/2} N^{-(\sigma-3)}+N^{-(\sigma -5/2)}] \\& \le C N^{-(k+1)}(1+\varepsilon_\ell ^{1/2}(\ln N)^{k+1})\\ & \le CN^{-(k+1)}. \end{split} \end{align} | (5.30) |
The above estimate (5.30) and Cauchy-Schwarz inequality yield the following bound
\begin{align*} {\sum\limits_{i = 1}^\ell} \sum\limits_{j = 1}^\ell\big( a_{ij}( \mathcal{I} u_j-u_j), v_{i0}\big) &\leq C {\sum\limits_{i = 1}^\ell}\sum\limits_{j = 1}^\ell \Vert u_j- \mathcal{I} u_j \Vert \Vert v_{i0} \Vert \le C N^{-(k+1)} \Vert v_{i0} \Vert. \end{align*} |
From the above estimate, we have
\begin{align} |T_2( \textbf{u}, \textbf{v}_N) | \le C N^{-(k+1)} \Vert| \textbf{v}_N\Vert|_{\varepsilon}. \end{align} | (5.31) |
From the estimates (5.29) and (5.31), we have the desired result. Thus we complete the proof.
Theorem 5.7. Let \boldsymbol{u} = (u_1, \dots, u_\ell) = \boldsymbol{R}+ \boldsymbol{L}\; \mathit{\text{with}}\; u_i\in H^{k+1}(\varOmega) be the solution of the problem (1.1) and assume that the conditions of Lemma 5.2 hold with \sigma\geq k+1 . Then, the following estimates hold true:
\begin{align} \begin{split} \Vert| \boldsymbol{R}- \mathcal{I} \boldsymbol{R}\Vert|_{\varepsilon}\leq & C N^{-(k+1)} \mathit{\text{and}} \Vert| \boldsymbol{L}- \mathcal{I} \boldsymbol{L}\Vert|_{\varepsilon}\leq C(\varepsilon_\ell^{1/2}(N^{-1}\ln N)^k +N^{-(k+1)}), \end{split} \end{align} | (5.32) |
where C is independent of N and \varepsilon_i, i = 1, \dots, \ell .
Proof. Since \theta_i^R: = R_i- \mathcal{I} R_i and \theta_i^L: = L_i- \mathcal{I} L_i are continuous on \varOmega , we get s(\theta_i^R, \theta_i^R) = s(\theta_i^L, \theta_i^L) = 0 for i = 1, \dots, \ell . Then we have
\begin{align} \Vert| \boldsymbol{R}- \mathcal{I} \boldsymbol{R}\Vert|_{\varepsilon}^2& = \sum\limits_{i = 1}^\ell \varepsilon^2_i \Vert (\theta_i^R)^\prime\Vert^2+\eta\sum\limits_{i = 1}^\ell\Vert \theta_i^R\Vert^2, \end{align} | (5.33) |
\begin{align} \Vert | \boldsymbol{L}- \mathcal{I} \boldsymbol{L}\Vert|_{\varepsilon}^2& = \sum\limits_{i = 1}^\ell \varepsilon^2_i \Vert (\theta_i^L)^\prime\Vert^2+\eta\sum\limits_{i = 1}^\ell\Vert \theta_i^L\Vert^2. \end{align} | (5.34) |
In the light of the interpolation errors (5.5) and (2.8), we obtain for i = 1, \dots, \ell
\begin{align*} \varepsilon^2_i\Vert (\theta_i^R)^\prime\Vert^2& \leq { \varepsilon^2_i ( N^{-k}\vert R_i\vert_{3, \varOmega})^2}\leq C \varepsilon^2_i N^{-2k} , \\ \Vert \theta_i^R\Vert^2 & \le C N^{-2(k+1)}, \end{align*} |
which together with (5.33) yields
\Vert| \textbf{R}- \mathcal{I} \textbf{R}\Vert|_{\varepsilon}\leq C N^{-(k+1)}. |
Using the inequalities (5.11), (5.12) and (5.30), we have
\begin{align*} \sum\limits_{i = 1}^\ell \varepsilon^2_i\Vert (\theta_i^L)^\prime\Vert^2 &\leq \sum\limits_{i = 1}^\ell\varepsilon^2_i \Big( \Vert (\theta_i^L)^\prime\Vert^2_{L^2(\varOmega\setminus \varOmega_\lambda) }+\Vert (\theta_i^L)^\prime\Vert^2_{L^2( \varOmega_\lambda) }\Big)\leq C \varepsilon ( (N^{-1}\ln N)^{2k}+N^{-2\sigma}) , \\ \Vert \theta_i^L\Vert^2 & \le N^{-2(k+1)}, \end{align*} |
which together with (5.34) gives the desired result
\Vert| \textbf{L}- \mathcal{I} \textbf{L}\Vert|_{\varepsilon}\leq C(\varepsilon^{1/2}(N^{-1}\ln N)^k +N^{-(k+1)}), |
where we have used \sigma \geq k+1 . The proof is completed.
We next estimate the consistency error \sup_{\textbf{w}_N\in [S_N^0]^\ell } \cfrac{\vert a(\textbf{u}, \textbf{w}_N)-L(\textbf{w}_N)\vert}{\Vert|\textbf{w}_N \Vert|_\varepsilon} .
Lemma 5.8. Assume that \boldsymbol{u} = (u_1, \dots, u_\ell), u_i \in H^{k+1}(\Omega), i = 1, \dots, \ell is the solution of (1.1). If \sigma \geq k+1 , then the following estimate holds true:
\sup\limits_{\boldsymbol{w}_N\in [S_N^0]^\ell } \cfrac{\vert a( \boldsymbol{u}, \boldsymbol{w}_N)-L(\boldsymbol{w}_N)\vert}{\Vert|\boldsymbol{w}_N \Vert|_\varepsilon} \leq C ( \varepsilon^{1/2} (N^{-1}\ln N)^k+N^{-(k+1)}), |
where C is independent of N , and \varepsilon_i, \; i = 1, \dots, \ell .
Proof. Using the definition of bilinear form (3.4), the fact that s(u_i, w_i^N) = s(u_i- \mathcal{I} u_i, w_i^N) = 0 for i = 1, \dots, \ell and Lemma 5.4, we have
\begin{align*} E_{ \textbf{u}}( \textbf{w}_N): = & a( \textbf{u}, \textbf{w}_N)-L(\textbf{w}_N) = a( \mathcal{I} \textbf{u}, \textbf{w}_N)+a( \textbf{u} - \mathcal{I} \textbf{u}, \textbf{w}_N)-L( \textbf{w}_N)\\ = &\sum\limits_{i = 1}^\ell \big(-\varepsilon^2 u_i^{\prime \prime}+\sum\limits_{j = 1}^\ell a_{ij} u_j-g_i, w_{i0}\big)+ T( \textbf{u}, \textbf{w}_N)+ a( \textbf{u}- \mathcal{I} \textbf{u}, \textbf{w}_N) \\ = &T( \textbf{u}, \textbf{w}_N)+ a( \textbf{u}- \mathcal{I} \textbf{u}, \textbf{w}_N), \end{align*} |
where T(\textbf{u}, \textbf{w}_N) = T_1(\textbf{u}, \textbf{w}_N)+T_2(\textbf{u}, \textbf{w}_N) and T_1(\textbf{u}, \textbf{w}_N) = \sum_{i = 1}^\ell T_1(u_i, w_i^N) , with T_1(u_i, w_i^N) and T_2(\textbf{u}, \textbf{w}_N) given in (5.17) and (5.18), respectively. By Lemma 5.6, if \sigma \geq k+1 , the first term on the right-hand side of the above equation can be estimated as
\begin{equation} T( \textbf{u}, \textbf{w}_N)\leq C ( \varepsilon^{1/2} (N^{-1}\ln N)^k+N^{-(k+1)}) \Vert| \textbf{w}_N \Vert|_\varepsilon. \end{equation} | (5.35) |
For the second term, we use the continuity property (4.14) of the bilinear form a(\cdot, \cdot) and, again, the fact that s(u_i - \mathcal{I} u_i, w_i^N) = 0, \; i = 1, \dots, \ell , to obtain
\begin{align*} a( \textbf{u}- \mathcal{I} \textbf{u}, \textbf{w}_N) \leq & C_c \Vert| \textbf{u}- \mathcal{I} \textbf{u} \Vert|_\varepsilon \Vert| \textbf{w}_N\Vert|_\varepsilon \\ = & C_c\sum\limits_{i = 1}^\ell\Big( \varepsilon_i^2 \Vert (u_i- \mathcal{I} u_i)^\prime \Vert^2 +\eta \Vert u_i- \mathcal{I} u_i \Vert^2 \Big)^{1/2} \Vert| \textbf{w}_N\Vert|_\varepsilon \\ \leq & C \sum\limits_{i = 1}^\ell \Big( \varepsilon_i^2 \Vert (R_i- \mathcal{I} R_i)^\prime \Vert^2 + \varepsilon_i^2 \Vert (L_i- \mathcal{I} L_i)^\prime \Vert_{L^2(\varOmega\setminus \varOmega_\lambda)}^2 \\ &+ \varepsilon_i^2 \Vert (L_i- \mathcal{I} L_i)^\prime \Vert_{L^2(\varOmega_\lambda)}^2+ \eta \Vert u_i- \mathcal{I} u_i \Vert^2 \Big)^{1/2} \Vert| \textbf{w}_N\Vert|_\varepsilon. \end{align*} |
Appealing to the estimates (5.5), (5.11), (5.12), (5.30) and using (2.8), for \sigma \geq k+1 we obtain
\begin{align} \begin{split} a( \textbf{u}- \mathcal{I} \textbf{u}, \textbf{w}_N) \leq & C \Big( \varepsilon ^2 N^{-2k} +\varepsilon ^2\varepsilon^{-1} (N^{-1}\ln N)^{2k} +\varepsilon ^2\varepsilon^{-1} N^{-2(k+1)}+N^{-2(k+1)}\Big)^{1/2}\Vert| \textbf{w}_N\Vert|_\varepsilon\\ \leq & C ( \varepsilon^{1/2} (N^{-1}\ln N)^k+N^{-(k+1)})\Vert| \textbf{w}_N\Vert|_\varepsilon. \end{split} \end{align} | (5.36) |
From (5.35) and (5.36), we arrive at
\sup\limits_{\textbf{w}_N\in [S_N^0]^\ell } \cfrac{\vert E_{ \textbf{u}}( \textbf{w}_N)\vert}{\Vert|\textbf{w}_N \Vert|_\varepsilon} \leq C ( \varepsilon^{1/2} (N^{-1}\ln N)^k+N^{-(k+1)}), |
which is the desired result. This completes the proof.
Theorem 5.9. Assume that \boldsymbol{u} = (u_1, \dots, u_\ell), u_i \in H^{k+1}(\Omega), i = 1, \dots, \ell , is the exact solution of the problem (1.1) and that \boldsymbol{u}_N\in [S_N^0]^\ell is the WG-FEM solution given by (3.4) on the uniform Shishkin mesh. If \sigma \geq k+1 , then we have the following estimate
\Vert| \boldsymbol{u}- \boldsymbol{u}_N\Vert|_\varepsilon \leq C ( \varepsilon^{1/2} (N^{-1}\ln N)^k+N^{-(k+1)}), |
where C is independent of N and \varepsilon_i, i = 1, \dots, \ell .
Proof. Using Theorem 5.7, if \sigma\geq k+1 we have
\begin{align*} \Vert| \textbf{R}- \mathcal{I} \textbf{R} \Vert|_\varepsilon &\leq CN^{-(k+1)}\\ \Vert| \textbf{L}- \mathcal{I} \textbf{L} \Vert|_\varepsilon &\leq C ( \varepsilon^{1/2} (N^{-1}\ln N)^k+N^{-(k+1)}). \end{align*} |
Hence, we obtain
\Vert| \textbf{u}- \mathcal{I} \textbf{u} \Vert|_\varepsilon \leq \Vert| \textbf{R}- \mathcal{I} \textbf{R} \Vert|_\varepsilon+\Vert| \textbf{L}- \mathcal{I} \textbf{L} \Vert|_\varepsilon \leq C ( \varepsilon^{1/2} (N^{-1}\ln N)^k+N^{-(k+1)}). |
Note that \mathcal{I} \textbf{R} and \mathcal{I} \textbf{L} are both in [S_N^0]^\ell , the space of discontinuous piecewise polynomials {of degree at most k }, and hence \mathcal{I} \textbf{R} + \mathcal{I} \textbf{L} \in [S_N^0]^\ell . Take \textbf{v}_N = (\mathcal{I} R_1+ \mathcal{I} L_1, \dots, \mathcal{I} R_\ell + \mathcal{I} L_\ell) \in [S_N^0]^\ell . Invoking Theorem 5.3 and Lemma 5.8, the desired result follows.
As stated in the introduction, error estimates for FEMs in the corresponding energy norm are not adequate. The reason is that the energy norms of the boundary layer functions \exp(\frac{-x}{\varepsilon_\ell}) and \exp(\frac{-(1-x)}{\varepsilon_\ell}) are of order \mathcal{O}(\varepsilon_\ell^{1/2}) . Therefore, an error estimate in the energy norm is not much stronger than one in the L^2 -norm when \varepsilon_\ell \ll 1 . A stronger norm, obtained by rescaling the coefficient of the H^1 -seminorm, correctly captures the boundary layers. This norm, called the balanced norm, is defined as follows. For \textbf{v} = (v_1^N, \dots, v_\ell ^N)^T = (\{v_{10}, v_{1b}\}, \dots, \{v_{\ell 0}, v_{\ell b}\})^T \in [S_N^0]^\ell ,
\begin{align} \Vert \textbf{v}\Vert_b^2& = \sum\limits_{i = 1}^\ell \varepsilon_i \Vert d_w v_i^N\Vert^2+\eta \sum\limits_{i = 1}^\ell \Vert v_i^N\Vert^2+s^b( \textbf{v}_N, \textbf{v}_N), \end{align} | (6.1) |
where s^b(\textbf{u}_N, \textbf{v}_N) is given by
\begin{align} s^b( \textbf{u}_N, \textbf{v}_N)& = \sum\limits_{i = 1}^\ell \big\langle \varrho_n^b( u_{i0}- u_{ib}), v_{i0}-v_{ib}\big\rangle. \end{align} | (6.2) |
Here, the penalization parameter \varrho_n^b is now defined as
\begin{align} \varrho_n^b = \begin{cases} { \varepsilon }, & \text{on } \varOmega_\lambda, \\ \cfrac{ \varepsilon N}{\ln N}, & \text{on } \varOmega\setminus \varOmega_\lambda, \end{cases} \end{align} | (6.3) |
where \varepsilon = \sum_{i = 1}^\ell \varepsilon_i .
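For concreteness, a minimal Python sketch of the cell-wise assignment (6.3) is given below. It assumes that the mesh nodes and the outermost transition point of the Shishkin mesh (so that the fine layer region \varOmega\setminus \varOmega_\lambda consists of the two subintervals adjacent to x = 0 and x = 1 ) are already available from the mesh construction; the names nodes , tau and eps_list are illustrative and not taken from the paper.

```python
import math

def balanced_penalty(nodes, tau, eps_list):
    """Sketch of the cell-wise balanced penalization parameter, cf. (6.3):
    eps on Omega_lambda and eps*N/ln(N) on Omega \\ Omega_lambda."""
    N = len(nodes) - 1                               # number of mesh cells
    eps = sum(eps_list)                              # eps = eps_1 + ... + eps_l
    rho_b = []
    for n in range(1, N + 1):
        mid = 0.5 * (nodes[n - 1] + nodes[n])        # cell midpoint decides the region
        in_layer = mid < tau or mid > 1.0 - tau      # I_n in Omega \\ Omega_lambda (assumed layer region)
        rho_b.append(eps * N / math.log(N) if in_layer else eps)
    return rho_b
```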
We note that the error bound N^{-(k+1)} in Theorem 5.9, which does not carry the factor \varepsilon^{1/2} , comes from the estimate of the L^2 -norm of \textbf{u}-\mathcal{I} \textbf{u} in the energy norm error analysis. These terms can be handled by replacing the special interpolation operator \mathcal{I} defined by (5.1) with a projection operator Q_h:H^1(I_n)\to S_N defined as follows.
Let P_h:L^2(I_n)\to \mathbb{P}_k(I_n) be the local weighted L^2 -projection restricted to interval I_n defined by
\begin{align} (\sum\limits_{i = 1}^\ell a_{ii}( P_h u_i-u_i), v)_{I_n} = 0, \quad \forall v\in \mathbb{P}_{k }(I_{n}), \quad n = 1, 2, \ldots , N. \end{align} | (6.4) |
This weighted L^2 -projection is well-defined because the diagonal elements are assumed to be positive and the reaction coefficient matrix is strongly diagonally dominant. With the aid of the Bramble-Hilbert lemma, one can show that for i = 1, \dots, \ell
\begin{align} \left\|u_i-P_{h} u_i\right\|_{L^2(I_{n})}+h_{n}\left\|(u_i-P_{h}u_i)^{\prime} \right\|_{L^2 (I_{n})} \leq C h_{n}^{s+1}|u_i|_{s+1, I_{n}}, \quad 0 \leq s \leq k. \end{align} | (6.5) |
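The following short Python sketch illustrates one way to realize such a local weighted L^2 -projection on a single cell I_n = (a, b) , under the reading of (6.4) in which a component is projected with a positive scalar weight (e.g., a diagonal entry or the diagonal sum of the reaction matrix). The routine, its arguments, the monomial-type basis, and the quadrature order are choices made here for illustration, not prescriptions of the paper.

```python
import numpy as np

def local_weighted_projection(u, w, a, b, k, nq=5):
    """Coefficients of the weighted L^2-projection of u onto P_k(a, b),
    i.e. the polynomial p with  integral of w*(p - u)*v = 0  for all v in P_k.
    u, w : callables accepting numpy arrays (w > 0 on (a, b) by assumption)."""
    xq, wq = np.polynomial.legendre.leggauss(nq)   # Gauss-Legendre rule on (-1, 1)
    x = 0.5 * (b - a) * xq + 0.5 * (a + b)         # quadrature points on (a, b)
    dx = 0.5 * (b - a) * wq                        # scaled quadrature weights
    t = (x - a) / (b - a)                          # local coordinate in [0, 1]
    V = np.vander(t, k + 1, increasing=True)       # basis 1, t, ..., t^k at the nodes
    W = w(x) * dx
    M = V.T @ (W[:, None] * V)                     # weighted mass matrix
    rhs = V.T @ (W * u(x))                         # weighted moments of u
    return np.linalg.solve(M, rhs)                 # coefficients in the basis {t^p}

# e.g. coeffs = local_weighted_projection(np.exp, lambda x: 2.0 + x, 0.0, 0.1, k=2)
```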
We introduce the projection operator Q_{h}:H^1(I_n)\to S_N such that
\begin{align} Q_{h} u_i|_{{I_{n}}} = \left\{ Q_{0}u_i, Q_{b} u_i\right\} = \left\{ P_{h}u_i, \{u_i(x_{n-1}), u_i(x_{n})\}\right\} , \quad n = 1, 2, \ldots , N. \end{align} | (6.6) |
Clearly, Q_h u_i\in S_N^0 if u_i\in H_0^1(I_n) for i = 1, \dots, \ell . By (6.5), we have
\begin{align} \left\|Q_{0} u_i-u_i\right\|_{L^2(I_{n})} \leq C h_{n}^{s+1}|u_i|_{s+1, I_{n}}, \quad 0 \leq s \leq k, \quad i = 1, \dots, \ell. \end{align} | (6.7) |
The following trace and inverse inequalities will be used in the forthcoming analysis [43]. For any function \phi \in H^1(I_n) , we have
\begin{align} \Vert \phi \Vert_{L^2(\partial I_n)}^2&\le C ( h_n^{-1}\Vert \phi \Vert_{L^2(I_n)}^2+h_n\Vert \phi^\prime \Vert_{L^2(I_n)}^2), \end{align} | (6.8) |
\begin{align} \Vert v_N'\Vert _{L^2(\partial I_n)}&\le Ch_n^{-1}\Vert v_N\Vert _{L^2(I_n)}, \quad \forall v_N\in \mathbb {P}_k(I_n). \end{align} | (6.9) |
We would like to derive estimates similar to those of Lemma 5.2 for the projection operator Q_0 , which is essentially a generalized L^2 -projection. The following lemma serves this purpose.
Lemma 6.1. Assume that the conclusions of Lemma 5.2 hold. Then we have the following error estimates for the operator Q_0 on the uniform Shishkin mesh.
\begin{align} \Vert u_i-Q_0 u_i\Vert_{L^\infty(\varOmega)} &\leq C \Vert u_i- \mathcal{I} u_i\Vert _{L^\infty(\varOmega)}, \end{align} | (6.10) |
\begin{align} \sum\limits_{I_n\subset \varOmega_\lambda} \Vert u_i-Q_0u_i\Vert _{L^2(I_n)}^2 &\leq C N^{-2k-3} \end{align} | (6.11) |
\begin{align} \sum\limits_{I_n\subset\varOmega\setminus \varOmega_\lambda} \Vert u_i-Q_0u_i\Vert _{L^2(I_n)}^2 &\leq C\varepsilon( N^{-1}\ln N)^{2(k+1)}, \end{align} | (6.12) |
\begin{align} \sum\limits_{I_n\subset \varOmega_\lambda} \Vert (u_i-Q_0u_i)^\prime\Vert _{L^2(I_n)}^2 &\leq C \varepsilon^{-1/2} N^{-2k}, \end{align} | (6.13) |
\begin{align} \sum\limits_{I_n\subset\varOmega\setminus \varOmega_\lambda} \Vert (u_i-Q_0u_i)^\prime\Vert _{L^2(I_n)}^2 &\leq C \varepsilon^{-1} N^{-2k}\ln^{2k+1}N. \end{align} | (6.14) |
Proof. It is known that the L^2 -projection Q_0 is L^\infty -stable [44]. Therefore, by the triangle inequality we have
\begin{align*} \Vert u_i-Q_0 u_i\Vert_{L^\infty(\varOmega)}&\leq \Vert u_i- \mathcal{I} u_i\Vert_{L^\infty(\varOmega)}+\Vert Q_0 (u_i- \mathcal{I} u_i)\Vert_{L^\infty(\varOmega)}\\ &\leq C \Vert u_i- \mathcal{I} u_i\Vert_{L^\infty(\varOmega)}, \end{align*} |
which proves (6.10). Using this inequality, we get
\begin{align*} \sum\limits_{I_n\subset \varOmega_\lambda} \Vert u_i-Q_0u_i\Vert _{L^2(I_n)}^2&\leq \sum\limits_{I_n\subset \varOmega_\lambda} h_n\Vert u_i-Q_0u_i\Vert _{L^\infty(I_n)}^2\\ &\leq C \sum\limits_{I_n\subset \varOmega_\lambda}h_n \Vert u_i- \mathcal{I} u_i\Vert _{L^\infty(I_n)}^2\\&\leq C N^{-2k-3}, \end{align*} |
where we used Lemma 5.2 and the fact that h_n = \mathcal{O}(N^{-1}) in \varOmega_\lambda . It follows from (6.7) that
\begin{align*} \sum\limits_{I_n\subset\varOmega\setminus \varOmega_\lambda} \left\|Q_{0} u_i-u_i\right\|_{L^2(I_{n})} & \leq C \sum\limits_{I_n\subset\varOmega\setminus \varOmega_\lambda} h_{n}^{k+1}|u_i|_{k+1, I_{n}}\\ &\le C \sum\limits_{I_n\subset\varOmega\setminus \varOmega_\lambda} h_{n}^{k+1}(\Vert R_i^{(k+1)}\Vert _{L^2( I_{n})}+\Vert L_i^{(k+1)}\Vert _{L^2( I_{n})}). \end{align*} |
The first term on the right side of the above inequality can be bounded as
\begin{align} \begin{split} \sum\limits_{I_n\in \varOmega\setminus\varOmega_\lambda} h_n^{2(k+1)} \Vert R_i^{(k+1)}\Vert _{L^2( I_{n})}^2&\le \sum\limits_{I_n\in \varOmega\setminus\varOmega_\lambda} h_{n}^{2(k+1)}\int _{x_{n-1}}^{x_{n}}|R^{(k+1)}(x)|^2dx \\ & \le {} C \sum\limits_{I_n\in \varOmega\setminus\varOmega_\lambda} h_{n}^{2k+3} \le C \varepsilon (N^{-1}\ln N)^{2k+3}. \end{split} \end{align} | (6.15) |
Next, we estimate the layer parts on \varOmega\setminus\varOmega_\lambda .
\begin{align} \begin{split} \sum\limits_{I_n\in \varOmega\setminus\varOmega_\lambda} h_{n}^{2(k+1)}\Vert L_{i }^{(k+1)}\Vert ^2_{L^2(I_n)} \leq& C \sum\limits_{s = 0}^{\ell-1} H_s^{2(k+1)}\varepsilon_i^{-2(k-1)}\Big( \int_{\lambda_s}^{\lambda_{s+1}}\Big(\sum\limits_{m = 1}^\ell \varepsilon_m^{-2} \mathcal{B}_{\varepsilon_m}^\alpha (x) \Big)^2\, d x \\ &\quad + \int_{1-\lambda_{\ell-s}}^{1-\lambda_{\ell-1-s}}\Big(\sum\limits_{m = 1}^\ell \varepsilon_m^{-2} \mathcal{B}_{\varepsilon_m}^\alpha (x) \Big)^2\, d x\Big) \\ \leq&C\sum\limits_{m = 1}^\ell\sum\limits_{s = 0}^{\ell-1} \Big[\cfrac{2(\ell+1) (\lambda_{s+1}-\lambda_s)}{N}\Big]^{2(k+1)} \varepsilon_m^{-2(k+1)} \varepsilon_m \\ \leq& C \varepsilon (N^{-1}\ln N)^{2(k+1)}. \end{split} \end{align} | (6.16) |
From (6.15) and (6.16), we get
\sum\limits_{I_n\subset\varOmega\setminus \varOmega_\lambda} \left\|Q_{0} u_i-u_i\right\|_{L^2(I_{n})}^2 \leq C \varepsilon (N^{-1}\ln N)^{2(k+1)}, |
which completes the proof of (6.12).
With the help of an inverse inequality on \varOmega_\lambda , we obtain
\begin{align*} \Vert ( \mathcal{I} u_i-Q_0u_i)^\prime\Vert _{L^2(I_n)}&\le C N \Vert \mathcal{I} u_i-Q_0u_i\Vert _{L^2(I_n)}\\ & \le C N \Big( \Vert \mathcal{I} u_i-u_i\Vert _{L^2(I_n)}+\Vert u_i-Q_0u_i\Vert _{L^2(I_n)}\Big) \\ &\le C N^{-k}, \end{align*} |
because \Vert \mathcal{I} u_i-u_i\Vert _{L^2(I_n)} and \Vert Q_0 u_i-u_i\Vert _{L^2(I_n)} are both of order \mathcal{O}(N^{-(k+1)}) on \varOmega_\lambda . By Lemma 5.2 and the above estimate, we arrive at
\begin{align} \sum\limits_{I_n\subset \varOmega_\lambda}\Vert ( \mathcal{I} u_i-Q_0u_i)^\prime\Vert _{L^2(I_n)}^2\leq C N^{-2k}. \end{align} | (6.17) |
On the other hand, from Lemma 5.2, we have
\begin{align} { \varepsilon \Vert ( \mathcal{I} u_i-u_i)^\prime\Vert _{L^2(\varOmega_\lambda)}^2\leq \varepsilon \Vert (R_i - \mathcal{I} R_i)^\prime\Vert _{L^2(\varOmega_\lambda)}^2+ \varepsilon \Vert (L_i - \mathcal{I} L_i)^\prime\Vert _{L^2(\varOmega_\lambda)}^2\leq C \varepsilon^{1/2} N^{-2k}.} \end{align} | (6.18) |
Combining (6.17) and (6.18) gives the desired result (6.13). Finally, using an inverse estimate we obtain at once
\begin{align*} \sum\limits_{I_n\subset \varOmega \setminus\varOmega_\lambda}\Vert ( \mathcal{I} u_i-Q_0u_i)^\prime\Vert _{L^2(I_n)}^2 &\leq C \vert \varOmega \setminus\varOmega_\lambda\vert \sum\limits_{I_n\subset \varOmega \setminus\varOmega_\lambda}\Vert ( \mathcal{I} u_i-Q_0u_i)^\prime\Vert _{L^\infty(I_n)}^2\\ &\le C \sum\limits_{I_n\subset \varOmega \setminus\varOmega_\lambda} \cfrac{\vert \varOmega \setminus\varOmega_\lambda\vert }{h_n^2} \Vert \mathcal{I} u_i-Q_0u_i \Vert _{L^\infty(I_n)}^2\\ &\le C \cfrac{\varepsilon \ln N}{\varepsilon^2 (N^{-1}\ln N)^2}(N^{-1}\ln N)^{2(k+1)}\\ &\le C \varepsilon^{-1}N^{-2k}(\ln N)^{2(k+1)}, \end{align*} |
which proves (6.14). Thus, we complete the proof.
We now derive error equations involving the projection Q_h that are similar to the ones in Lemma 5.4. To this end, we need another special projection operator, defined as follows. We refer interested readers to [45] for details.
Lemma 6.2. [45] For u_i\in H^1(\varOmega) , there is a projection operator \pi_h with \pi_h u_i\in H^1(0, 1) such that, restricted to each element I_n , \pi_h u_i \in \mathbb{P}_{k+1}(I_n) satisfies
\begin{align} &((\pi_{h}u_i)', q) = (u_i', q)_{I_{n}}, \quad \forall q \in P_{k}(I_{n}), \quad i = 1, 2, \ldots , \ell, \end{align} | (6.19) |
\begin{align} &\pi_h u_i(x_n) = u_i(x_n), \quad n = 1, \dots, N, \quad i = 1, \dots, \ell, \\&\Vert u_i-\pi_{h}u_i\Vert _{L^2(I_{n})}+h_{n}\Vert u_i'-(\pi_{h}u_i )'\Vert _{L^2(I_{n})}\le Ch^{s+1}_{n}\Vert u_i\Vert _{s+1}, \, \, \, 0 \le s \le k. \nonumber \end{align} | (6.20) |
Lemma 6.3. Let \boldsymbol{u} = (u_1, \dots, u_\ell) be the solution of the problem (1.1). Then for any \boldsymbol{v}_N = (v_1^N, \dots, v_\ell^N) = (\{v_{10}, v_{1b}\}, \dots, \{v_{\ell 0}, v_{\ell b}\})\in [S_N^0]^\ell , we have
\begin{align} -\varepsilon^2_i \Big( u_i^{\prime \prime}, v_{i0}\Big)& = \varepsilon^2_i\Big(d_w(Q_h u_i), d_w v_i^N \Big)-T_1^b( u_i, v_i^N), \quad i = 1, \dots, \ell, \end{align} | (6.21) |
\begin{align} \sum\limits_{i = 1}^\ell \sum\limits_{j = 1}^\ell \big( a_{ij}& u_j, v_{i0}\big) = \sum\limits_{i = 1}^\ell \sum\limits_{j = 1}^\ell \big( a_{ij}Q_0 u_j, v_{i0}\big) -T_2^b( \boldsymbol{u}, \boldsymbol{v}_N), \end{align} | (6.22) |
where
\begin{align} T_1^b( \boldsymbol{u}, \boldsymbol{v}_N)& = \sum\limits_{i = 1}^\ell \varepsilon^2_i \big\langle (u_i -\pi_h u_i)^\prime, (v_{i0}-v_{ib}) \boldsymbol{n} \big\rangle, \end{align} | (6.23) |
\begin{align} T_2^b( \boldsymbol{u}, \boldsymbol{v}_N)& = \sum\limits_{i = 1}^\ell \sum\limits_{j = 1}^\ell \big( a_{ij}(Q_0 u_j- u_j), v_{i0}\big). \end{align} | (6.24) |
Proof. Using the definition of the operator Q_h , the weak derivative (3.3), integration by parts and (6.19), we have
\begin{align*} \big(d_w(Q_h u_i), q\big)_{I_n} = & {} -\big(Q_0 u_i, q^\prime\big)_{I_n}+(Q_{h}u)_{n}q_{n}-(Q_{h}u)_{n-1}q_{n-1}\nonumber \\ = & {} -\big( u_i, q^\prime\big)_{I_n}+u_{n}q_{n}-u_{n-1}q_{n-1}\nonumber \\ = & {} \big( u_i^\prime, q\big)_{I_n}\\ = & {} \big( (\pi_h u_i)^\prime, q\big)_{I_n}, \quad \forall q \in \mathbb{P}_{2}(I_{n}), \quad \forall I_n \in \mathcal{T}_N, \end{align*} |
where v_{n} = v(x_{n}) and v_{n-1} = v(x_{n-1}) for a function v . This implies that
\begin{equation} \big(d_w(Q_h u_i), d_w v_i^N\big)_{I_n} = \big( (\pi_h u_i)^\prime, d_w v_i^N\big)_{I_n}, \quad \forall I_n \in \mathcal{T}_N. \end{equation} | (6.25) |
Following the same procedure as in the energy norm estimates, we prove (6.21). The identity (6.22) then follows directly from the definition (6.24) of T_2^b( \boldsymbol{u}, \boldsymbol{v}_N) . We complete the proof.
Lemma 6.4. Assume that \textbf{u} = (u_1, \dots, u_\ell), \;\mathit{\text{with}}\; u_i \in H^{k+1}(\varOmega) is the solution of the problem (1.1) and the penalization parameter \varrho_n^b is given by (6.3). Then we have
\begin{align} \vert T^b( \boldsymbol{u}, \boldsymbol{v}_N)\vert &\leq C \varepsilon^{1/2} (N^{-1}\ln N)^{k} \Vert|v_N\Vert|_\varepsilon, \end{align} | (6.26) |
\begin{align} \vert s^b(Q_h u_i, v_N)\vert &\le C \varepsilon^{1/2} N^{-k}(\ln N)^{k+1/2} \Vert|v_N\Vert|_\varepsilon, \end{align} | (6.27) |
where C is independent of N and \varepsilon_i, \; i = 1, \dots, \ell , T^b(\boldsymbol{u}, \boldsymbol{v}_N) = T_1^b(\boldsymbol{u}, \boldsymbol{v}_N)+T_2^b(\boldsymbol{u}, \boldsymbol{v}_N) , and s^b(Q_h u_i, v_N) is given by (6.2).
Proof. Note that T_2^b(\textbf{u}, \textbf{v}_N) = 0 due to the definition of the projection Q_h . By the inverse estimate (6.9), Lemma 5.5 and Lemma 6.2, we obtain at once
\begin{align*} \sum\limits_{I_n \subset \varOmega} \Vert \xi _i^\prime\Vert_{L^2(\partial I_n)}^2&\le \sum\limits_{I_n \subset \varOmega} \Vert \theta_i^\prime\Vert_{L^2(\partial I_n)}^2+ \sum\limits_{I_n \subset \varOmega} \Vert ( \mathcal{I} u_i-\pi_h u_i)^\prime\Vert_{L^2(\partial I_n)}^2\\ & \le \sum\limits_{I_n \subset \varOmega} \Vert \theta_i^\prime\Vert_{L^2(\partial I_n)}^2+C \sum\limits_{I_n \subset \varOmega} h_n^{-2} \Vert \mathcal{I} u_i-\pi_h u_i\Vert_{L^2( I_n)}^2 \\ &\leq \begin{cases} C\varepsilon_i^{-2}(N^{-1}\ln N)^{2k-1}, & I_n \subset \varOmega\setminus \varOmega_\lambda, \\ C\varepsilon_i^{-2}N^{-2(k+1)}, & I_n \subset \varOmega_\lambda, \end{cases} \end{align*} |
where \xi_i = u_i-\pi_h u_i and \theta_i = u_i- \mathcal{I} u_i for i = 1, \dots, \ell.
Imitating the arguments in the energy norm estimates and using the above fact, one can prove that
T^b( \textbf{u}, \textbf{v}_N) = T_1^b( \textbf{u}, \textbf{v}_N)\leq C \varepsilon^{1/2}(N^{-1}\ln N)^k\Vert|v_N\Vert|_\varepsilon . |
It follows from Cauchy–Schwarz inequality, the trace inequality (6.8) and Lemma 6.1 that
\begin{align*} \vert s^b(Q_h u_i, v_N)\vert &\le \sum\limits_{n = 1}^{N}\varrho_n^b \vert \langle Q_0 u_i-Q_b u_i, v_0-v_b\rangle_{\partial I_n} \vert \\ & = \sum\limits_{n = 1}^{N}\varrho_n^b \vert \langle Q_0 u_i- u_i, v_0-v_b\rangle_{\partial I_n} \vert \\ &\le C\Big( \sum\limits_{n = 1}^{N}\varepsilon \varrho_n^b \Vert u_i-Q_0 u_i\Vert_{L^2(\partial I_n)}^2\Big)^{1/2} \Big( \sum\limits_{n = 0}^{N-1} \varepsilon^{-1}\varrho_n^b \Vert v_0-v_b\Vert_{L^2(\partial I_n)}^2\Big)^{1/2}\\ &\le C \Big( \sum\limits_{n = 1}^{N} \varepsilon\varrho_n^b (h_n^{-1} \Vert u_i-Q_0 u_i\Vert_{L^2( I_n)}^2+h_n\Vert (u_i-Q_0 u_i)^\prime\Vert_{L^2( I_n)}^2) \Big)^{1/2}\Vert| v_N \Vert|_{\varepsilon}\\& \le C\Big[ \Big( \sum\limits_{I_n\subset \varOmega_\lambda}\varepsilon \varrho_n^b (h_n^{-1} \Vert u_i-Q_0 u_i\Vert_{L^2( I_n)}^2+h_n\Vert (u_i-Q_0 u_i)^\prime\Vert_{L^2( I_n)}^2) \Big)^{1/2}\Vert| v_N \Vert|_{\varepsilon}\\& \quad + \Big( \sum\limits_{I_n\subset\varOmega\setminus \varOmega_\lambda}\varepsilon \varrho_n^b (h_n^{-1} \Vert u_i-Q_0 u_i\Vert_{L^2( I_n)}^2+h_n\Vert (u_i-Q_0 u_i)^\prime\Vert_{L^2( I_n)}^2) \Big)^{1/2}\Big]\Vert| v_N \Vert|_{\varepsilon}\\ &\le C \Big[ \Big(\varepsilon^2 (N N^{-(2k+3)} +N^{-1}\varepsilon^{-1/2} N^{-2k} )\Big)^{1/2}\\ & \quad + \Big( \frac{\varepsilon^2 N}{\ln N} (\frac{N}{\varepsilon \ln N} \varepsilon (N^{-1}\ln N)^{2(k+1)}+\frac{\varepsilon \ln N}{N}\varepsilon ^{-1} (N^{-2k}\ln^{2k+1} N) \Big)^{1/2}\Big]\Vert| v_N\Vert|_{\varepsilon}\\ &\le C \varepsilon ^{1/2} N^{-k}(\ln N)^{k+1/2}\Vert| v_N\Vert|_{\varepsilon}. \end{align*} |
Here, we used the fact that \varepsilon^{-1}\varrho_n^b = \varrho_n . Therefore, we complete the proof.
The main result of this section is the following theorem.
Theorem 6.5. Assume that \boldsymbol{u} = (u_1, \dots, u_\ell), u_i \in H^{k+1}(\Omega), i = 1, \dots, \ell , is the exact solution of the problem (1.1) and that \boldsymbol{u}_N = \{u_1^N, \dots, u_\ell^N\}\in [S_N^0]^\ell is the WG-FEM solution given by (3.4) on the uniform Shishkin mesh. If \sigma \geq k+1 , then we have the following improved balanced error estimate
\Vert \boldsymbol{u}- \boldsymbol{u}_N\Vert_b \leq C N^{-k}(\ln N)^{k+1/2}, |
where C is independent of N and \varepsilon_i, \; i = 1, \dots, \ell .
Proof. From Lemma 6.1 and Lemma 6.4, we obtain at once
\begin{align} \begin{split} \Vert \textbf{u}-Q_h \textbf{u} \Vert _{b}^2& \le C \Big[ \sum\limits_{i = 1}^\ell \varepsilon_i \Vert (u_i-Q_0 u_i)^\prime \Vert^2+ \sum\limits_{i = 1}^\ell \Vert u_i-Q_0 u_i\Vert^2\\ &\quad + \sum\limits_{i = 1}^\ell s(u_i-Q_h u_i, u_i-Q_h u_i)\Big]\\ & = { \Big[ \sum\limits_{i = 1}^\ell \varepsilon_i \Vert (u_i-Q_0 u_i)^\prime \Vert^2+ \sum\limits_{i = 1}^\ell \Vert u_i-Q_0 u_i\Vert^2}\\ &\quad {+ \sum\limits_{i = 1}^\ell s^b(Q_h u_i, Q_h u_i)\Big]}\\ &\le C \Big[ \varepsilon \varepsilon ^{-1/2} N^{-2k} + \varepsilon \varepsilon ^{-1} N^{-2k} \ln ^{2k+1} N + \varepsilon (N^{-1} \ln N)^{2(k+1)} \\ & \quad + N^{-(2k+3)} +\varepsilon N^{-2k} (\ln N) ^{2k+1} \Big] \\ &\le C N^{-2k} (\ln N) ^{2k+1}, \end{split} \end{align} | (6.28) |
where we have used that s^b(u_i, u_i) = 0 . Imitating the analyses in the energy norm estimates and using again Lemma 6.4, we have
\begin{align*} \Vert| \textbf{u}_N-Q_h \textbf{u}\Vert| _{\varepsilon}^2\le & a( \textbf{u}_N-Q_h \textbf{u}, \textbf{u}_N-Q_h \textbf{u})\\ \le & \sum\limits_{i = 1}^\ell \Big( T^b_1(u_i^N-Q_h u_i, u_i^N-Q_h u_i)+ s^b(u_i^N-Q_h u_i, u_i^N-Q_h u_i)\Big)\\ \le & \varepsilon^{1/2} N^{-k}(\ln N)^{k+1/2} \Vert| \textbf{u}_N-Q_h \textbf{u}\Vert|_{\varepsilon}. \end{align*} |
Therefore, we obtain
\Vert \textbf{u}_N-Q_h \textbf{u}\Vert _{b}^2\le C \varepsilon^{-1} \Vert| \textbf{u}_N-Q_h \textbf{u}\Vert| _{\varepsilon}^2\le C N^{-2k}(\ln N)^{2k+1} . |
Next, using the triangle inequality and combining the above estimate with (6.28) yield
\begin{align*} \Vert \textbf{u}- \textbf{u}_N\Vert _{b}\le C N^{-k}(\ln N)^{k+1/2}. \end{align*} |
The proof is now completed.
In this section, we present various numerical experiments to demonstrate the performance of the WG-FEM. All integrals were computed using the 5 -point Gauss-Legendre quadrature formula.
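A minimal sketch of this quadrature rule (the standard 5 -point Gauss-Legendre rule mapped from (-1, 1) to an arbitrary mesh cell) is as follows; the function name is ours, and the rule is exact for polynomials of degree up to 9 on each cell.

```python
import numpy as np

def gauss_legendre_5(f, a, b):
    """5-point Gauss-Legendre approximation of the integral of f over (a, b)."""
    xq, wq = np.polynomial.legendre.leggauss(5)    # nodes/weights on (-1, 1)
    x = 0.5 * (b - a) * xq + 0.5 * (a + b)         # affine map to (a, b)
    return 0.5 * (b - a) * np.dot(wq, f(x))

# element-wise assembly would apply the rule on every mesh cell, e.g.
# total = sum(gauss_legendre_5(f, nodes[n], nodes[n + 1]) for n in range(N))
```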
Example 7.1. Consider the following coupled system of reaction-diffusion problem with constant coefficients
\begin{align} \left\{ \begin{array}{ll} - \mathcal{E} \boldsymbol{u} ^{\prime \prime }+\boldsymbol{A} \boldsymbol{u} = \boldsymbol{g}\quad in \quad \Omega = (0, 1), \\ \boldsymbol{u}(0) = \mathit{\pmb{0}}, \quad \boldsymbol{u}(1) = \mathit{\pmb{0}}, \end{array}\right. \end{align} | (7.1) |
where \mathcal{E} = diag(\varepsilon_1^2, \varepsilon_2^2) with 0 < \varepsilon_1\leq \varepsilon_2 \ll 1 , \quad \boldsymbol{g} = (g_1, g_2)^T, \quad \boldsymbol{A} = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix} and g_1, g_2 are chosen such that
\begin{align*} u^1(x)& = \cfrac{e^{-x/\varepsilon_1}+e^{-(1-x)/\varepsilon_1}}{1+e^{-1/\varepsilon_1}}+ \cfrac{e^{-x/\varepsilon_2}+e^{-(1-x)/\varepsilon_2}}{1+e^{-1/\varepsilon_2}}-2, \\ u^2(x)& = \cfrac{e^{-x/\varepsilon_2}+e^{-(1-x)/\varepsilon_2}}{1+e^{-1/\varepsilon_2}}-1, \end{align*} |
is the exact solution \boldsymbol{u}(x) = (u^1(x), u^2(x)) of the reaction-diffusion system (7.1). Note that R_i(x), i = 1, 2 , are constant, so that (2.8) holds. The solution has exponential layers of width \mathcal{O}(\varepsilon_2 |\ln \varepsilon_2|) at x = 0 and x = 1 , while only u^1(x) has an additional sublayer of width \mathcal{O}(\varepsilon_1 |\ln \varepsilon_1|) . We take \rho > 1/2 , \alpha = 0.99 and \sigma = 3 for this problem.
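For reference, the exact solution components of Example 7.1 can be evaluated directly from the formulas above; the following short Python sketch is a straightforward transcription (vectorized over x ), with the function name chosen here for illustration.

```python
import numpy as np

def exact_solution(x, eps1, eps2):
    """Exact solution components (u^1, u^2) of Example 7.1."""
    def layer(eps):
        # (e^{-x/eps} + e^{-(1-x)/eps}) / (1 + e^{-1/eps})
        return (np.exp(-x / eps) + np.exp(-(1.0 - x) / eps)) / (1.0 + np.exp(-1.0 / eps))
    u1 = layer(eps1) + layer(eps2) - 2.0
    u2 = layer(eps2) - 1.0
    return u1, u2

# e.g. u1, u2 = exact_solution(np.linspace(0.0, 1.0, 1001), 1e-10, 1e-4)
```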
We applied the WG-FEM (3.4) for solving the problem (7.1). The numerical errors \textbf{e}: = \textbf{u}- \textbf{u}_N are computed in the energy norm by
\textbf{e}_{\varepsilon_1, \varepsilon_2 }^N = |||\textbf{e}|||_{\varepsilon}, \qquad |||\textbf{e}|||^2_{\varepsilon} = \sum\limits_{i = 1}^2 \varepsilon_i^2 \Vert d_w e_i^N \Vert^2+\eta \sum\limits_{i = 1}^2 \Vert e_{i0}\Vert^2+\sum\limits_{i = 1}^2 s( e_i^N, e_i^N), |
for a fixed \varepsilon_1, \varepsilon_2 and N . We report the numerical experiments for the uniform error calculated by
\begin{align*} \textbf{e}^N = \max\limits_{\varepsilon_1, \varepsilon_2 = 1, 10^{-1}, \dots, 10^{-10} }\textbf{e}_{\varepsilon_1, \varepsilon_2 }^N \end{align*} |
in Table 1. The order of convergence r_{\varepsilon} is computed using mesh levels (N_1, ||| \textbf{e}^{N_1}|||_{\varepsilon}) and (N_2, ||| \textbf{e}^{N_2}|||_{\varepsilon}) :
\begin{align} r_{\varepsilon} = \cfrac{\ln (||| \textbf{e}^{N_1}|||_{\varepsilon}/||| \textbf{e}^{N_2}|||_{\varepsilon})}{\ln (N_1^{-1}\ln N_1)-\ln(N_2^{{-1}}\ln N_2)}. \end{align} | (7.2) |
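A small Python helper implementing (7.2) reads as follows (the function name is ours); it takes the errors on two consecutive mesh levels and returns the computed order r_\varepsilon .

```python
import math

def shishkin_rate(e1, N1, e2, N2):
    """Order of convergence r_eps from two levels (N1, e1) and (N2, e2), cf. (7.2),
    measured against the Shishkin-mesh factor N^{-1} ln N."""
    return math.log(e1 / e2) / (math.log(math.log(N1) / N1) - math.log(math.log(N2) / N2))

# e.g., with errors err_coarse on N = 64 cells and err_fine on N = 128 cells:
# r = shishkin_rate(err_coarse, 64, err_fine, 128)
```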
N | k=1 | k=2 | ||
\textbf{e}^N | r_\varepsilon | \textbf{e}^N | r_\varepsilon | |
6 | 1.1284e-01 | - | 4.2924e-02 | - |
12 | 5.6774e-02 | 1.46 | 2.1549e-02 | 1.46 |
24 | 2.8440e-02 | 1.35 | 9.0168e-03 | 1.70 |
48 | 1.4228e-02 | 1.28 | 3.2876e-03 | 1.87 |
96 | 7.1152e-03 | 1.23 | 1.1018e-03 | 1.95 |
192 | 3.5577e-03 | 1.20 | 3.5170e-04 | 1.98 |
384 | 1.7888e-03 | 1.17 | 1.0885e-04 | 1.99 |
768 | 9.4867e-04 | 1.10 | 3.4639e-05 | 1.99 |
Table 1 shows that the energy norm error estimates exhibit k -order convergence which agrees perfectly with the theoretical error estimates.
In order to examine the dependence of the energy norm errors on the parameters, we compute the energy norm estimates for a fixed \varepsilon_1 and different values of \varepsilon_2 . For instance, we first fix \varepsilon_1 = 10^{-10} and take \varepsilon_2 = 10^{-4}, \dots, 10^{-9} . The results are presented in Table 2, Table 4, and Figures 2a and 2b. They verify that the method is robust on the uniform Shishkin mesh and that the order of convergence is \mathcal{O}(\varepsilon^{1/2} (N^{-1}\ln N)^k) , where \varepsilon^{1/2} = \varepsilon_1^{1/2}+\varepsilon_2^{1/2} , for both the linear ( k = 1 ) and quadratic ( k = 2 ) element functions, which is in excellent agreement with the main result of Theorem 5.9. Moreover, we infer from Table 2 that \cfrac{|||u-u_N|||_{\varepsilon_{{j}}}}{|||u-u_N|||_{\varepsilon_{{j+2}}}}\approx \sqrt{\frac{{{10^{-j}}}}{{{10^{-(j+2)}}}}} for \varepsilon_{{j}} = \{{10^{-10}}, {{10^{-j}}}\}, \quad {j = 4, \dots, 9, } where |||u-u_N|||^2_{\varepsilon_{{j}}} = (10^{-10})^2 \Vert d_w e_1^N \Vert^2+(10^{-j})^2 \Vert d_w e_2^N \Vert^2+ \eta \sum_{i = 1}^2 \Vert e_{i0}\Vert^2+\sum_{i = 1}^2 s(e_i^N, e_i^N) .
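For orientation, the observed ratio amounts to the elementary computation \sqrt{10^{-j}/10^{-(j+2)}} = \sqrt{10^{2}} = 10 ; that is, decreasing \varepsilon_2 by two orders of magnitude lowers the energy-norm error by roughly a factor of 10 , which is precisely the \varepsilon_2^{1/2} dependence predicted by Theorem 5.9.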
\varepsilon_2/k=1 | N | |||||||
6 | 12 | 24 | 48 | 96 | 192 | 384 | 768 | |
10^{-4} | 5.2495e-03 | 3.1587e-03 | 1.8429e-03 | 1.0531e-03 | 5.9237e-04 | 3.2910e-04 | 1.8100e-04 | 9.8729e-05 |
r_{10^{-4}} | - | 0.97 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-5} | 1.6076e-03 | 9.6196e-04 | 5.5992e-04 | 3.1967e-04 | 1.7976e-04 | 9.9856e-05 | 5.4919e-05 | 2.9956e-05 |
r_{10^{-5}} | - | 0.99 | 1.01 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-6} | 5.0801e-04 | 3.0395e-04 | 1.7690e-04 | 1.0098e-04 | 5.6778e-05 | 3.1536e-05 | 1.7342e-05 | 9.4732e-06 |
r_{10^{-6}} | - | 0.99 | 1.01 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-7} | 1.5949e-04 | 9.5331e-05 | 5.5419e-05 | 3.1598e-05 | 1.7744e-05 | 9.8436e-06 | 5.4068e-06 | 2.9644e-06 |
r_{10^{-7}} | - | 0.99 | 1.01 | 1.01 | 1.00 | 1.00 | 1.00 | 0.99 |
10^{-8} | 4.6992e-05 | 2.7802e-05 | 1.5980e-05 | 9.0030e-06 | 4.9943e-06 | 2.7368e-06 | 1.4915e-06 | 9.0148e-07 |
r_{10^{-8}} | - | 1.01 | 1.03 | 1.03 | 1.03 | 1.02 | 1.02 | 0.90 |
10^{-9} | 1.5921e-05 | 9.5379e-06 | 5.5415e-06 | 3.1571e-06 | 1.7716e-06 | 9.8469e-07 | 5.4071e-07 | 2.9678e-07 |
r_{10^{-9}} | - | 0.99 | 1.01 | 1.01 | 1.00 | 1.00 | 1.00 | 0.99 |
\varepsilon_2/k=2 |
10^{-4} | 2.0323e-03 | 8.3959e-04 | 3.0403e-04 | 1.0161e-04 | 3.2400e-05 | 1.0025e-05 | 3.0394e-06 | 9.8278e-07 |
r_{10^{-4}} | - | 1.73 | 1.88 | 1.96 | 1.99 | 2.00 | 2.00 | 1.99 |
10^{-5} | 6.4261e-04 | 2.6547e-04 | 9.6128e-05 | 3.2126e-05 | 1.0244e-05 | 3.1701e-06 | 9.7357e-07 | 3.0816e-07 |
r_{10^{-5}} | - | 1.73 | 1.88 | 1.96 | 1.99 | 2.00 | 2.00 | 1.99 |
10^{-6} | 2.0300e-04 | 8.3843e-05 | 3.0354e-05 | 1.0142e-05 | 3.2335e-06 | 1.0005e-06 | 3.2525e-07 | 1.0300e-07 |
r_{10^{-6}} | - | 1.73 | 1.88 | 1.96 | 1.99 | 2.00 | 2.00 | 1.99 |
10^{-7} | 6.3520e-05 | 2.6183e-05 | 9.4607e-06 | 3.1550e-06 | 1.0041e-06 | 3.1010e-07 | 9.6874e-07 | 3.0022e-07 |
r_{10^{-7}} | - | 1.73 | 1.88 | 1.96 | 1.99 | 2.00 | 2.00 | 1.99 |
10^{-8} | 1.8097e-05 | 7.3152e-06 | 2.5923e-06 | 8.4824e-07 | 2.6607e-07 | 9.0074e-08 | 2.9335e-08 | 9.2754e-09 |
r_{10^{-8}} | - | 1.77 | 1.92 | 2.00 | 2.02 | 2.00 | 2.00 | 2.00 |
10^{-9} | 2.7387e-06 | 1.0380e-06 | 3.5300e-07 | 1.1388e-07 | 3.8601e-08 | 1.2545e-08 | 4.0893e-09 | 1.2875e-09 |
r_{10^{-9}} | - | 1.74 | 1.90 | 2.00 | 2.02 | 2.00 | 2.00 | 2.00 |
N | k=1 | k=2 | ||
\textbf{e}^{N, b} | r_\varepsilon | \textbf{e}^{N, b} | r_\varepsilon | |
6 | 6.4414e-01 | - | 2.6892e-01 | - |
12 | 4.1317e-01 | 0.87 | 1.1638e-01 | 1.64 |
24 | 2.4710e-01 | 0.95 | 4.2967e-02 | 1.85 |
48 | 1.4238e-01 | 0.99 | 1.4454e-02 | 1.95 |
96 | 8.0297e-02 | 1.00 | 4.6177e-03 | 1.98 |
192 | 4.4646e-02 | 1.00 | 1.4293e-03 | 2.00 |
384 | 2.4561e-02 | 1.00 | 4.4355e-04 | 1.96 |
768 | 1.3400e-02 | 1.00 | 1.4159e-04 | 1.98 |
\varepsilon_2/k=1 | N | |||||||
6 | 12 | 24 | 48 | 96 | 192 | 384 | 768 | |
10^{-4} | 6.4390e-01 | 4.1311e-01 | 2.4709e-01 | 1.4237e-01 | 8.0296e-02 | 4.4645e-02 | 2.4561e-02 | 1.3398e-02 |
r_{10^{-4}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-5} | 6.4387e-01 | 4.1309e-01 | 2.4707e-01 | 1.4236e-01 | 8.0291e-02 | 4.4643e-02 | 2.4559e-02 | 1.3397e-02 |
r_{10^{-5}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-6} | 6.4366e-01 | 4.1292e-01 | 2.4696e-01 | 1.4229e-01 | 8.0244e-02 | 4.4613e-02 | 2.4542e-02 | 1.3387e-02 |
r_{10^{-6}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-7} | 6.4316e-01 | 4.1272e-01 | 2.4665e-01 | 1.4251e-01 | 8.0228e-02 | 4.4608e-02 | 2.4536e-02 | 1.3376e-02 |
r_{10^{-7}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-8} | 6.4319e-01 | 4.1270e-01 | 2.4566e-01 | 1.4251e-01 | 8.0225e-02 | 4.4607e-02 | 2.4528e-02 | 1.3373e-02 |
r_{10^{-8}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-9} | 6.4317e-01 | 4.1267e-01 | 2.4565e-01 | 1.4247e-01 | 8.0217e-02 | 4.4603e-02 | 2.4524e-02 | 1.3370e-02 |
r_{10^{-9}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
\varepsilon_2/k=2 |
10^{-4} | 2.6883e-01 | 1.1637e-01 | 4.2964e-02 | 1.4453e-02 | 4.6176e-03 | 1.4294e-03 | 4.3370e-04 | 1.3180e-04 |
r_{10^{-4}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
10^{-5} | 2.6881e-01 | 1.1636e-01 | 4.2961e-02 | 1.4452e-02 | 4.6172e-03 | 1.4293e-03 | 4.3366e-04 | 1.3175e-04 |
r_{10^{-5}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
10^{-6} | 2.6880e-01 | 1.1634e-01 | 4.2960e-02 | 1.4449e-02 | 4.6170e-03 | 1.4292e-03 | 4.3365e-04 | 1.3173e-04 |
r_{10^{-6}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
10^{-7} | 2.6878e-01 | 1.1632e-01 | 4.2961e-02 | 1.4449e-02 | 4.6168e-03 | 1.4290e-03 | 4.3362e-04 | 1.3171e-04 |
r_{10^{-7}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
10^{-8} | 2.6879e-01 | 1.1631e-01 | 4.2960e-02 | 1.4450e-02 | 4.6167e-03 | 1.4287e-03 | 4.3360e-04 | 1.3170e-04 |
r_{10^{-8}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
10^{-9} | 2.6877e-01 | 1.1630e-01 | 4.2958e-02 | 1.4448e-02 | 4.6166e-03 | 1.4288e-03 | 4.3359e-04 | 1.3170e-04 |
r_{10^{-9}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
This implies that the errors might be affected by a term involving \sqrt{\varepsilon_2} . We observe almost linear convergence (up to a logarithmic factor) with the linear elements and almost quadratic convergence with the quadratic elements as N increases. Hence, for larger N\geq 64 , the rate of convergence is of order \mathcal{O}(\varepsilon_2^{1/2}(N^{-1}\ln N)^k) , which agrees with the theory indicated by Theorem 5.9. Table 2 shows that the errors and the order of convergence are dominated by the term N^{-(k+1)} when N and \varepsilon are smaller. We also observe that if \varepsilon_2 decreases for a fixed \varepsilon_1 , the energy norm errors get smaller. These observations suggest that the main result of Theorem 5.9 is sharp.
On the other hand, we compute the numerical errors \textbf{e}: = \textbf{u}- \textbf{u}_N with respect to the balanced norm by
\textbf{e}_{\varepsilon_1, \varepsilon_2 }^{N, b} = \Vert \textbf{e}\Vert _{b}, \qquad \Vert \textbf{e}\Vert _{b}^2 = \sum\limits_{i = 1}^2 \varepsilon_i \Vert d_w e_i^N \Vert^2+\eta \sum\limits_{i = 1}^2 \Vert e_{i0}\Vert^2+\sum\limits_{i = 1}^2 s( e_i^N, e_i^N), |
for a fixed \varepsilon_1, \varepsilon_2 and N . We list the uniform balanced error bounds \textbf{e}^{N, b} , calculated as before, in Table 3. We also report the numerical results in the balanced norm in Table 4, and we notice that, unlike the estimates in the energy norm, the error estimates in the balanced norm remain almost unchanged as \varepsilon_2 decreases for a fixed \varepsilon_1 . This confirms the theory stated in Theorem 6.5. We have plotted the balanced norm errors for a fixed \varepsilon_1 and varying \varepsilon_2 on a log-log scale in Figures 2c and 2d for better visibility. Evidently, the errors stay almost constant while the parameters vary and behave like
\Vert \textbf{u}- \textbf{u}_N\Vert_b \leq C (N^{-1}\ln N)^{{k}}. |
This confirms the result of Theorem 6.5 up to a square root of \ln N .
Example 7.2. We next consider the problem (1.1) with variable coefficients
\boldsymbol{A} = \begin{pmatrix} 3 & \quad 1-x &\quad x-1 \\ 2 &\quad 4+x&\quad -1 \\ 2&0 &3 \end{pmatrix} \;{ and }\; \boldsymbol{g} = \begin{pmatrix} 1 \\ x\\ 1+x^2\end{pmatrix}. |
We take \rho = 3/4 , \alpha = 0.80 and \sigma = 3 . The exact solution is unknown; hence, a finer mesh, constructed as described below, is used to estimate the numerical errors.
We compute the errors \textbf{e} = \textbf{u}_N- \textbf{u}_{2N} , where \textbf{u}_{2N} is the numerical solution computed on a mesh consisting of the initial uniform Shishkin mesh together with the midpoints x_{n+1/2} = \frac{x_n+x_{n+1}}{2}, n = 0, \dots, N-1 . Therefore, we calculate
\textbf{e}_{\varepsilon_1, \varepsilon_2, \varepsilon_3 }^N = |||\textbf{e}|||_{\varepsilon}, \qquad |||\textbf{e}|||^2_{\varepsilon} = \sum\limits_{i = 1}^3 \varepsilon_i^2 \Vert d_w e_i^N \Vert^2+\eta \sum\limits_{i = 1}^3 \Vert e_{i0}\Vert^2+\sum\limits_{i = 1}^3 s( e_i^N, e_i^N), |
for a fixed \varepsilon_1, \varepsilon_2, \varepsilon_3 and N . The numerical results are listed in Tables 5 and 6 for the uniform errors in the energy and balanced norms, respectively
\begin{align*} \textbf{e}^N& = \max\limits_{\varepsilon_1, \varepsilon_2, \varepsilon_3 = 1, 10^{-1}, \dots, 10^{-10} }\textbf{e}_{\varepsilon_1, \varepsilon_2, \varepsilon_3 }^N, \\ \textbf{e}^{N, b}& = \max\limits_{\varepsilon_1, \varepsilon_2, \varepsilon_3 = 1, 10^{-1}, \dots, 10^{-10} }\textbf{e}_{\varepsilon_1, \varepsilon_2, \varepsilon_3 }^{N, b}, \end{align*} |
N | k=1 | k=2 | ||
\textbf{e}^N | r_\varepsilon | \textbf{e}^N | r_\varepsilon | |
16 | 4.1486e-01 | - | 2.6801e-01 | - |
32 | 2.9723e-01 | 0.71 | 1.6172e-01 | 1.07 |
64 | 1.9341e-01 | 0.84 | 7.8526e-02 | 1.41 |
128 | 1.1715e-01 | 0.93 | 3.1321e-02 | 1.71 |
256 | 6.7975e-02 | 0.97 | 1.0951e-02 | 1.88 |
512 | 3.8480e-02 | 0.99 | 3.5564e-03 | 1.95 |
1024 | 2.1446e-02 | 0.99 | 1.1086e-03 | 1.98 |
N | k=1 | k=2 | ||
\textbf{e}^{N, b} | r_\varepsilon | \textbf{e}^{N, b} | r_\varepsilon | |
16 | 3.1691e00 | - | 2.5247e00 | - |
32 | 2.8939e00 | 0.19 | 1.8568e00 | 0.65 |
64 | 2.1681e00 | 0.57 | 1.0458e00 | 1.12 |
128 | 1.4063e00 | 0.80 | 4.6494e-01 | 1.50 |
256 | 8.3916e-01 | 0.92 | 1.7412e-01 | 1.76 |
512 | 4.7971e-01 | 0.97 | 5.8437e-02 | 1.90 |
1024 | 2.6815e-01 | 0.99 | 1.8476e-02 | 2.00 |
where \textbf{e}_{\varepsilon_1, \varepsilon_2, \varepsilon_3 }^{N, b} is defined as before and the order of convergence is calculated by (7.2). The results clearly show k -order uniform convergence in the energy norm, in good agreement with the main result of Theorem 5.9. The errors in the balanced norm behave like \mathcal{O}((N^{-1}\ln N)^k) , which agrees with the results of Theorem 6.5 up to a square root of \ln N . As before, we observe from Table 7 and Figures 3a and 3b that the energy norm errors depend on \varepsilon^{1/2} = \varepsilon_1^{1/2}+\varepsilon_2^{1/2}+\varepsilon_3^{1/2} and decrease as \varepsilon\to 0 , while the balanced norm errors do not depend on the parameters and remain almost unchanged, as seen from Table 8 and Figures 3c and 3d.
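A minimal sketch of the two-mesh refinement used above to compute the reference solution \textbf{u}_{2N} for Example 7.2 is given below; nodes stands for the array of initial Shishkin mesh points and is an assumption of the sketch.

```python
import numpy as np

def refine_with_midpoints(nodes):
    """Refined mesh: the original Shishkin nodes plus all cell midpoints
    x_{n+1/2} = (x_n + x_{n+1})/2, giving 2N cells from N cells."""
    nodes = np.asarray(nodes, dtype=float)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    return np.sort(np.concatenate([nodes, mids]))

# e.g. fine_nodes = refine_with_midpoints(nodes)
```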
\varepsilon_2/k=1 | N | ||||||
16 | 32 | 64 | 128 | 256 | 512 | 1024 | |
10^{-3} | 1.3469e-01 | 1.0211e-01 | 6.9269e-02 | 4.2854e-02 | 2.5059e-02 | 1.4210e-02 | 7.9163e-03 |
r_{10^{-3}} | - | 0.59 | 0.76 | 0.89 | 0.96 | 0.99 | 1.00 |
10^{-4} | 4.2826e-02 | 3.2265e-02 | 2.1878e-02 | 1.3535e-02 | 7.9145e-03 | 4.4877e-03 | 2.4997e-03 |
r_{10^{-4}} | - | 0.60 | 0.76 | 0.89 | 0.96 | 0.99 | 1.00 |
10^{-5} | 1.4555e-02 | 1.0287e-02 | 6.9246e-03 | 4.2798e-03 | 2.5021e-03 | 1.4187e-03 | 7.9017e-04 |
r_{10^{-5}} | - | 0.74 | 0.77 | 0.89 | 0.96 | 0.99 | 1.00 |
10^{-6} | 7.0511e-03 | 3.5121e-03 | 2.2119e-03 | 1.3538e-03 | 7.9006e-04 | 4.4773e-04 | 2.4931e-04 |
r_{10^{-6}} | - | 1.48 | 0.91 | 0.91 | 0.96 | 0.99 | 1.00 |
10^{-7} | 5.7885e-03 | 1.7282e-03 | 7.6617e-04 | 4.2944e-04 | 2.4621e-04 | 1.3883e-04 | 7.7093e-05 |
r_{10^{-7}} | - | 2.57 | 1.59 | 1.07 | 0.99 | 1.00 | 1.00 |
10^{-8} | 5.6470e-03 | 1.4335e-03 | 3.9795e-04 | 1.4327e-04 | 6.8296e-05 | 3.6229e-05 | 1.9525e-05 |
r_{10^{-8}} | - | 2.91 | 2.50 | 1.90 | 1.32 | 1.10 | 1.02 |
\varepsilon_2/k=2 |
10^{-3} | 9.1950e-02 | 6.0047e-02 | 3.1457e-02 | 1.3221e-02 | 4.7423e-03 | 1.5542e-03 | 4.8544e-04 |
r_{10^{-3}} | - | 0.91 | 1.27 | 1.61 | 1.83 | 1.94 | 1.98 |
10^{-4} | 2.9024e-02 | 1.8958e-02 | 9.9341e-03 | 4.1758e-03 | 1.4979e-03 | 4.9086e-04 | 1.5329e-04 |
r_{10^{-4}} | - | 0.91 | 1.27 | 1.61 | 1.83 | 1.94 | 1.98 |
10^{-5} | 9.1767e-03 | 5.9932e-03 | 3.1404e-03 | 1.3200e-03 | 4.7348e-04 | 1.5515e-04 | 4.8452e-05 |
r_{10^{-5}} | - | 0.91 | 1.27 | 1.61 | 1.83 | 1.94 | 1.98 |
10^{-6} | 2.9026e-03 | 1.8921e-03 | 9.9100e-04 | 4.1640e-04 | 1.4930e-04 | 4.8908e-05 | 1.5269e-05 |
r_{10^{-6}} | - | 0.91 | 1.27 | 1.61 | 1.83 | 1.94 | 1.98 |
10^{-7} | 9.2029e-04 | 5.8861e-04 | 3.0694e-04 | 1.2848e-04 | 4.5896e-05 | 1.4982e-05 | 4.6637e-06 |
r_{10^{-7}} | - | 0.95 | 1.27 | 1.62 | 1.84 | 1.95 | 1.99 |
10^{-8} | 3.0336e-04 | 1.5840e-04 | 7.8929e-05 | 3.1821e-05 | 1.0972e-05 | 3.4626e-06 | 1.0673e-06 |
r_{10^{-8}} | - | 1.38 | 1.37 | 1.69 | 1.90 | 2.00 | 2.00 |
\varepsilon_2/k=1 | N | ||||||
16 | 32 | 64 | 128 | 256 | 512 | 1024 | |
10^{-3} | 3.1062e00 | 2.8495e00 | 2.1418e00 | 1.3911e00 | 8.3003e-01 | 4.7414e-01 | 2.6477e-01 |
r_{10^{-3}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
10^{-4} | 3.0996e00 | 2.8450e00 | 2.1391e00 | 1.3895e00 | 8.2909e-01 | 4.7356e-01 | 2.6442e-01 |
r_{10^{-4}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
10^{-5} | 3.0987e00 | 2.8445e00 | 2.1390e00 | 1.3892e00 | 8.2907e-01 | 4.7348e-01 | 2.6440e-01 |
r_{10^{-5}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
10^{-6} | 3.0980e00 | 2.8442e00 | 2.1387e00 | 1.3891e00 | 8.2903e-01 | 4.7342e-01 | 2.6436e-01 |
r_{10^{-6}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
10^{-7} | 3.0978e00 | 2.8440e00 | 2.1384e00 | 1.3890e00 | 8.2901e-01 | 4.7340e-01 | 2.6433e-01 |
r_{10^{-7}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
10^{-8} | 3.0975e00 | 2.8438e00 | 2.1382e00 | 1.3888e00 | 8.2888e-01 | 4.7338e-01 | 2.6430e-01 |
r_{10^{-8}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
\varepsilon_2/k=2 |
10^{-3} | 2.4792e00 | 1.8292e00 | 1.0335e00 | 4.6054e-01 | 1.7262e-01 | 5.7921e-02 | 1.8266e-02 |
r_{10^{-3}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
10^{-4} | 2.4790e00 | 1.8291e00 | 1.0333e00 | 4.6052e-01 | 1.7261e-01 | 5.7920e-02 | 1.8263e-02 |
r_{10^{-4}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
10^{-5} | 2.4785e00 | 1.8288e00 | 1.0330e00 | 4.6050e-01 | 1.7258e-01 | 5.7917e-02 | 1.8260e-02 |
r_{10^{-5}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
10^{-6} | 2.4776e00 | 1.8282e00 | 1.0325e00 | 4.6047e-01 | 1.7255e-01 | 5.7913e-02 | 1.8256e-02 |
r_{10^{-6}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
10^{-7} | 2.4765e00 | 1.8277e00 | 1.0322e00 | 4.6043e-01 | 1.7252e-01 | 5.7910e-02 | 1.8252e-02 |
r_{10^{-7}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
10^{-8} | 2.4754e00 | 1.8270e00 | 1.0315e00 | 4.6036e-01 | 1.7247e-01 | 5.7906e-02 | 1.8245e-02 |
r_{10^{-8}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
In this paper, we studied the WG-FEM for a system of SPPs of reaction-diffusion type, in which the equations have diffusion parameters of different magnitudes, on a piecewise uniform Shishkin mesh. With the help of a special interpolation operator, we derived optimal and uniform error bounds in the energy and balanced norms, up to a logarithmic factor. The proposed WG-FEM eliminates the interior unknowns from the discrete linear system, and thus the method is comparable in cost with the classical FEM. In future work, we will investigate sharper error bounds in the balanced norm and extend these results to higher-dimensional problems on tensor product meshes.
The authors would like to express their deep gratitude to the editors and anonymous referees for their valuable comments and suggestions that improve the presentation.
The authors declare that they have no conflict of interest.
N | k=1 | k=2 | ||
\textbf{e}^N | r_\varepsilon | \textbf{e}^N | r_\varepsilon | |
6 | 1.1284e-01 | - | 4.2924e-02 | - |
12 | 5.6774e-02 | 1.46 | 2.1549e-02 | 1.46 |
24 | 2.8440e-02 | 1.35 | 9.0168e-03 | 1.70 |
48 | 1.4228e-02 | 1.28 | 3.2876e-03 | 1.87 |
96 | 7.1152e-03 | 1.23 | 1.1018e-03 | 1.95 |
192 | 3.5577e-03 | 1.20 | 3.5170e-04 | 1.98 |
384 | 1.7888e-03 | 1.17 | 1.0885e-04 | 1.99 |
768 | 9.4867e-04 | 1.10 | 3.4639e-05 | 1.99 |
\varepsilon_2/k=1 | N | |||||||
6 | 12 | 24 | 48 | 96 | 192 | 384 | 768 | |
10^{-4} | 5.2495e-03 | 3.1587e-03 | 1.8429e-03 | 1.0531e-03 | 5.9237e-04 | 3.2910e-04 | 1.8100e-04 | 9.8729e-05 |
r_{10^{-4}} | - | 0.97 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-5} | 1.6076e-03 | 9.6196e-04 | 5.5992e-04 | 3.1967e-04 | 1.7976e-04 | 9.9856e-05 | 5.4919e-05 | 2.9956e-05 |
r_{10^{-5}} | - | 0.99 | 1.01 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-6} | 5.0801e-04 | 3.0395e-04 | 1.7690e-04 | 1.0098e-04 | 5.6778e-05 | 3.1536e-05 | 1.7342e-05 | 9.4732e-06 |
r_{10^{-6}} | - | 0.99 | 1.01 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-7} | 1.5949e-04 | 9.5331e-05 | 5.5419e-05 | 3.1598e-05 | 1.7744e-05 | 9.8436e-06 | 5.4068e-06 | 2.9644e-06 |
r_{10^{-7}} | - | 0.99 | 1.01 | 1.01 | 1.00 | 1.00 | 1.00 | 0.99 |
10^{-8} | 4.6992e-05 | 2.7802e-05 | 1.5980e-05 | 9.0030e-06 | 4.9943e-06 | 2.7368e-06 | 1.4915e-06 | 9.0148e-07 |
r_{10^{-8}} | - | 1.01 | 1.03 | 1.03 | 1.03 | 1.02 | 1.02 | 0.90 |
10^{-9} | 1.5921e-05 | 9.5379e-06 | 5.5415e-06 | 3.1571e-06 | 1.7716e-06 | 9.8469e-07 | 5.4071e-07 | 2.9678e-07 |
r_{10^{-9}} | - | 0.99 | 1.01 | 1.01 | 1.00 | 1.00 | 1.00 | 0.99 |
\varepsilon_2/k=2 10^{-4} |
2.0323e-03 | 8.3959e-04 | 3.0403e-04 | 1.0161e-04 | 3.2400e-05 | 1.0025e-05 | 3.0394e-06 | 9.8278e-07 |
r_{10^{-4}} | - | 1.73 | 1.88 | 1.96 | 1.99 | 2.00 | 2.00 | 1.99 |
10^{-5} | 6.4261e-04 | 2.6547e-04 | 9.6128e-05 | 3.2126e-05 | 1.0244e-05 | 3.1701e-06 | 9.7357e-07 | 3.0816e-07 |
r_{10^{-5}} | - | 1.73 | 1.88 | 1.96 | 1.99 | 2.00 | 2.00 | 1.99 |
10^{-6} | 2.0300e-04 | 8.3843e-05 | 3.0354e-05 | 1.0142e-05 | 3.2335e-06 | 1.0005e-06 | 3.2525e-07 | 1.0300e-07 |
r_{10^{-6}} | - | 1.73 | 1.88 | 1.96 | 1.99 | 2.00 | 2.00 | 1.99 |
10^{-7} | 6.3520e-05 | 2.6183e-05 | 9.4607e-06 | 3.1550e-06 | 1.0041e-06 | 3.1010e-07 | 9.6874e-07 | 3.0022e-07 |
r_{10^{-7}} | - | 1.73 | 1.88 | 1.96 | 1.99 | 2.00 | 2.00 | 1.99 |
10^{-8} | 1.8097e-05 | 7.3152e-06 | 2.5923e-06 | 8.4824e-07 | 2.6607e-07 | 9.0074e-08 | 2.9335e-08 | 9.2754e-09 |
r_{10^{-8}} | - | 1.77 | 1.92 | 2.00 | 2.02 | 2.00 | 2.00 | 2.00 |
10^{-9} | 2.7387e-06 | 1.0380e-06 | 3.5300e-07 | 1.1388e-07 | 3.8601e-08 | 1.2545e-08 | 4.0893e-09 | 1.2875e-09 |
r_{10^{-9}} | - | 1.74 | 1.90 | 2.00 | 2.02 | 2.00 | 2.00 | 2.00 |
N | k=1 | k=2 | ||
\textbf{e}^{N, b} | r_\varepsilon | \textbf{e}^{N, b} | r_\varepsilon | |
6 | 6.4414e-01 | - | 2.6892e-01 | - |
12 | 4.1317e-01 | 0.87 | 1.1638e-01 | 1.64 |
24 | 2.4710e-01 | 0.95 | 4.2967e-02 | 1.85 |
48 | 1.4238e-01 | 0.99 | 1.4454e-02 | 1.95 |
96 | 8.0297e-02 | 1.00 | 4.6177e-03 | 1.98 |
192 | 4.4646e-02 | 1.00 | 1.4293e-03 | 2.00 |
384 | 2.4561e-02 | 1.00 | 4.4355e-04 | 1.96 |
768 | 1.3400e-02 | 1.00 | 1.4159e-04 | 1.98 |
\varepsilon_2/k=1 | N | |||||||
6 | 12 | 24 | 48 | 96 | 192 | 384 | 768 | |
10^{-4} | 6.4390e-01 | 4.1311e-01 | 2.4709e-01 | 1.4237e-01 | 8.0296e-02 | 4.4645e-02 | 2.4561e-02 | 1.3398e-02 |
r_{10^{-4}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-5} | 6.4387e-01 | 4.1309e-01 | 2.4707e-01 | 1.4236e-01 | 8.0291e-02 | 4.4643e-02 | 2.4559e-02 | 1.3397e-02 |
r_{10^{-5}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-6} | 6.4366e-01 | 4.1292e-011 | 2.4696e-01 | 1.4229e-01 | 8.0244e-02 | 4.4613e-02 | 2.4542e-02 | 1.3387e-02 |
r_{10^{-6}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-7} | 6.4316e-01 | 4.1272e-01 | 2.4665e-01 | 1.4251e-01 | 8.0228e-02 | 4.4608e-02 | 2.4536e-02 | 1.3376e-02 |
r_{10^{-7}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-8} | 6.4319e-01 | 4.1270e-01 | 2.4566e-01 | 1.4251e-01 | 8.0225e-02 | 4.4607e-02 | 2.4528e-02 | 1.3373e-02 |
r_{10^{-8}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
10^{-9} | 6.4317e-01 | 4.1267e-01 | 2.4565e-01 | 1.4247e-01 | 8.0217e-02 | 4.4603e-02 | 2.4524e-02 | 1.3370e-02 |
r_{10^{-9}} | - | 0.87 | 0.95 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 |
\varepsilon_2/k=2 10^{-4} |
2.6883e-01 | 1.1637e-01 | 4.2964e-02 | 1.4453e-02 | 4.6176e-03 | 1.4294e-03 | 4.3370e-04 | 1.3180e-04 |
r_{10^{-4}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
10^{-5} | 2.6881e-01 | 1.1636e-01 | 4.2961e-02 | 1.4452e-02 | 4.6172e-03 | 1.4293e-03 | 4.3366e-04 | 1.3175e-04 |
r_{10^{-4}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
10^{-6} | 2.6880e-01 | 1.1634e-01 | 4.2960e-02 | 1.4449e-02 | 4.6170e-03 | 1.4292e-03 | 4.3365e-04 | 1.3173e-04 |
r_{10^{-6}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
10^{-7} | 2.6878e-01 | 1.1632e-01 | 4.2961e-02 | 1.4449e-02 | 4.6168e-03 | 1.4290e-03 | 4.3362e-04 | 1.3171e-04 |
r_{10^{-7}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
10^{-8} | 2.6879e-01 | 1.1631e-01 | 4.2960e-02 | 1.4450e-02 | 4.6167e-03 | 1.4287e-03 | 4.3360e-04 | 1.3170e-04 |
- | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 | |
10^{-9} | 2.6877e-01 | 1.1630e-01 | 4.2958e-02 | 1.4448e-02 | 4.6166e-03 | 1.4288e-03 | 4.3359e-04 | 1.3170e-04 |
r_{10^{-9}} | - | 1.64 | 1.85 | 1.95 | 1.98 | 1.99 | 1.99 | 1.97 |
N | k=1 | k=2 | ||
\textbf{e}^N | r_\varepsilon | \textbf{e}^N | r_\varepsilon | |
16 | 4.1486e-01 | - | 2.6801e-01 | - |
3 2 | 2.9723e-01 | 0.71 | 1.6172e-01 | 1.07 |
64 | 1.9341e-01 | 0.84 | 7.8526e-02 | 1.41 |
128 | 1.1715e-01 | 0.93 | 3.1321e-02 | 1.71 |
256 | 6.7975e-02 | 0.97 | 1.0951e-02 | 1.88 |
512 | 3.8480e-02 | 0.99 | 3.5564e-03 | 1.95 |
1024 | 2.1446e-02 | 0.99 | 1.1086e-03 | 1.98 |
N | k=1 | k=2 | ||
\textbf{e}^{N, b} | r_\varepsilon | \textbf{e}^{N, b} | r_\varepsilon | |
16 | 3.1691e00 | - | 2.5247e00 | - |
3 2 | 2.8939e00 | 0.19 | 1.8568e00 | 0.65 |
64 | 2.1681e00 | 0.57 | 1.0458e00 | 1.12 |
128 | 1.4063e00 | 0.80 | 4.6494e-01 | 1.50 |
256 | 8.3916e-01 | 0.92 | 1.7412e-01 | 1.76 |
512 | 4.7971e-01 | 0.97 | 5.8437e-02 | 1.90 |
1024 | 2.6815e-01 | 0.99 | 1.8476e-02 | 2.00 |
\varepsilon_2/k=1 | N | ||||||
16 | 32 | 64 | 128 | 256 | 512 | 1024 | |
10^{-3} | 1.3469e-01 | 1.0211e-01 | 6.9269e-02 | 4.2854e-02 | 2.5059e-02 | 1.4210e-02 | 7.9163e-03 |
r_{10^{-3}} | - | 0.59 | 0.76 | 0.89 | 0.96 | 0.99 | 1.00 |
10^{-4} | 4.2826e-02 | 3.2265e-02 | 2.1878e-02 | 1.3535e-02 | 7.9145e-03 | 4.4877e-03 | 2.4997e-03 |
r_{10^{-4}} | - | 0.60 | 0.76 | 0.89 | 0.96 | 0.99 | 1.00 |
10^{-5} | 1.4555e-02 | 1.0287e-02 | 6.9246e-03 | 4.2798e-03 | 2.5021e-03 | 1.4187e-03 | 7.9017e-04 |
r_{10^{-5}} | - | 0.74 | 0.77 | 0.89 | 0.96 | 0.99 | 1.00 |
10^{-6} | 7.0511e-03 | 3.5121e-03 | 2.2119e-03 | 1.3538e-03 | 7.9006e-04 | 4.4773e-04 | 2.4931e-04 |
r_{10^{-6}} | - | 1.48 | 0.91 | 0.91 | 0.96 | 0.99 | 1.00 |
10^{-7} | 5.7885e-03 | 1.7282e-03 | 7.6617e-04 | 4.2944e-04 | 2.4621e-04 | 1.3883e-04 | 7.7093e-05 |
r_{10^{-7}} | - | 2.57 | 1.59 | 1.07 | 0.99 | 1.00 | 1.00 |
10^{-8} | 5.6470e-03 | 1.4335e-03 | 3.9795e-04 | 1.4327e-04 | 6.8296e-05 | 3.6229e-05 | 1.9525e-05 |
r_{10^{-8}} | - | 2.91 | 2.50 | 1.90 | 1.32 | 1.10 | 1.02 |
\varepsilon_2/k=2
10^{-3} | 9.1950e-02 | 6.0047e-02 | 3.1457e-02 | 1.3221e-02 | 4.7423e-03 | 1.5542e-03 | 4.8544e-04 |
r_{10^{-3}} | - | 0.91 | 1.27 | 1.61 | 1.83 | 1.94 | 1.98 |
10^{-4} | 2.9024e-02 | 1.8958e-02 | 9.9341e-03 | 4.1758e-03 | 1.4979e-03 | 4.9086e-04 | 1.5329e-04 |
r_{10^{-4}} | - | 0.91 | 1.27 | 1.61 | 1.83 | 1.94 | 1.98 |
10^{-5} | 9.1767e-03 | 5.9932e-03 | 3.1404e-03 | 1.3200e-03 | 4.7348e-04 | 1.5515e-04 | 4.8452e-05 |
r_{10^{-5}} | - | 0.91 | 1.27 | 1.61 | 1.83 | 1.94 | 1.98 |
10^{-6} | 2.9026e-03 | 1.8921e-03 | 9.9100e-04 | 4.1640e-04 | 1.4930e-04 | 4.8908e-05 | 1.5269e-05 |
r_{10^{-6}} | - | 0.91 | 1.27 | 1.61 | 1.83 | 1.94 | 1.98 |
10^{-7} | 9.2029e-04 | 5.8861e-04 | 3.0694e-04 | 1.2848e-04 | 4.5896e-05 | 1.4982e-05 | 4.6637e-06 |
r_{10^{-7}} | - | 0.95 | 1.27 | 1.62 | 1.84 | 1.95 | 1.99 |
10^{-8} | 3.0336e-04 | 1.5840e-04 | 7.8929e-05 | 3.1821e-05 | 1.0972e-05 | 3.4626e-06 | 1.0673e-06 |
r_{10^{-8}} | - | 1.38 | 1.37 | 1.69 | 1.90 | 2.00 | 2.00 |
\varepsilon_2/k=1 | N = 16 | 32 | 64 | 128 | 256 | 512 | 1024
10^{-3} | 3.1062e00 | 2.8495e00 | 2.1418e00 | 1.3911e00 | 8.3003e-01 | 4.7414e-01 | 2.6477e-01 |
r_{10^{-3}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
10^{-4} | 3.0996e00 | 2.8450e00 | 2.1391e00 | 1.3895e00 | 8.2909e-01 | 4.7356e-01 | 2.6442e-01 |
r_{10^{-4}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
10^{-5} | 3.0987e00 | 2.8445e00 | 2.1390e00 | 1.3892e00 | 8.2907e-01 | 4.7348e-01 | 2.6440e-01 |
r_{10^{-5}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
10^{-6} | 3.0980e00 | 2.8442e00 | 2.1387e00 | 1.3891e00 | 8.2903e-01 | 4.7342e-01 | 2.6436e-01 |
r_{10^{-6}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
10^{-7} | 3.0978e00 | 2.8440e00 | 2.1384e00 | 1.3890e00 | 8.2901e-01 | 4.7340e-01 | 2.6433e-01 |
r_{10^{-7}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
10^{-8} | 3.0975e00 | 2.8438e00 | 2.1382e00 | 1.3888e00 | 8.2888e-01 | 4.7338e-01 | 2.6430e-01 |
r_{10^{-8}} | - | 0.18 | 0.56 | 0.80 | 0.92 | 0.97 | 0.99 |
\varepsilon_2/k=2
10^{-3} | 2.4792e00 | 1.8292e00 | 1.0335e00 | 4.6054e-01 | 1.7262e-01 | 5.7921e-02 | 1.8266e-02 |
r_{10^{-3}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
10^{-4} | 2.4790e00 | 1.8291e00 | 1.0333e00 | 4.6052e-01 | 1.7261e-01 | 5.7920e-02 | 1.8263e-02 |
r_{10^{-4}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
10^{-5} | 2.4785e00 | 1.8288e00 | 1.0330e00 | 4.6050e-01 | 1.7258e-01 | 5.7917e-02 | 1.8260e-02 |
r_{10^{-5}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
10^{-6} | 2.4776e00 | 1.8282e00 | 1.0325e00 | 4.6047e-01 | 1.7255e-01 | 5.7913e-02 | 1.8256e-02 |
r_{10^{-6}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
10^{-7} | 2.4765e00 | 1.8277e00 | 1.0322e00 | 4.6043e-01 | 1.7252e-01 | 5.7910e-02 | 1.8252e-02 |
r_{10^{-7}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
10^{-8} | 2.4754e00 | 1.8270e00 | 1.0315e00 | 4.6036e-01 | 1.7247e-01 | 5.7906e-02 | 1.8245e-02 |
r_{10^{-8}} | - | 0.65 | 1.12 | 1.50 | 1.75 | 1.90 | 1.96 |
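The convergence rates r_\varepsilon reported in the N = 16, ..., 1024 tables above are consistent with the rate convention commonly used on Shishkin-type layer-adapted meshes. The formula below is a hedged reconstruction from the tabulated values, not a statement taken from this section (the mesh is not specified here, and the N = 6, ..., 768 tables appear to use a different normalization):

r_\varepsilon^{N} = \frac{\ln\left(\textbf{e}^{N}/\textbf{e}^{2N}\right)}{\ln\left(2\ln N/\ln(2N)\right)}, \qquad \text{e.g.} \qquad r_\varepsilon^{16} = \frac{\ln\left(4.1486\times 10^{-1}/2.9723\times 10^{-1}\right)}{\ln\left(2\ln 16/\ln 32\right)} \approx \frac{0.333}{0.470} \approx 0.71,

which reproduces the k = 1 rate reported in the N = 32 row of the first N = 16, ..., 1024 table.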