
Adaptive Finite Element Methods (AFEM) for self-adjoint coercive problems written in the form
u∈V : B(u,v)=F(v),∀v∈V, |
iterate the sequence
SOLVE→ESTIMATE→MARK→REFINE |
to produce better and better approximations of u. Their practical efficiency is corroborated by sound theoretical results of convergence, complexity, and optimality, which in various cases (such as conforming h-versions) completely explain the behaviour of the adaptive algorithms [11,13,14,15,18].
The standard AFEM realization preserves the conformity of the initial mesh, at the expense of performing a completion step in REFINE: In addition to elements marked for refinement due to their contribution to the global error estimator, other elements are refined. Without this step, one would obtain nonconforming meshes, containing elements with hanging nodes.
In the new perspective opened by the introduction of Virtual Element Methods (VEM) [3,4], elements with hanging nodes can be viewed as polygons with aligned edges, carrying virtual (i.e., non-accessible) functions together with standard polynomial functions. The potential advantage is that all activated degrees of freedom are motivated by error reduction, not just by geometric reasons. On the other hand, in this transformation of an adaptive FEM into an adaptive VEM, one loses the availability of a general convergence theory, which so far is lacking (although results on a posteriori error estimates [8,12] have been obtained, together with efficient practical recipes for refining polytopal meshes [2,9,10]).
Such a shift in perspective inspired the recent papers [5,6], devoted to the analysis of an adaptive VEM generated by the successive newest-vertex bisections of triangular elements without applying completion, in the lowest-order case (polynomial degree k=1). Despite the simple geometric setup, the investigation faced some VEM-specific obstacles in the analysis, giving answers that could prove useful in the study of more general adaptive VEM discretizations. For instance, a VEM solution uT∈VT⊂V, defined by the Galerkin projection
uT∈VT : BT(uT,vT)=FT(vT),∀vT∈VT, |
satisfies an a posteriori error bound of the type
‖u−uT‖2V ≲ η2T(uT)+ST(uT,uT), |
where ηT(uT) is a residual-type error estimator, ST(uT,uT) is the stabilization term that makes the discrete bilinear form BT(uT,vT) coercive in V, and for simplicity we assume piecewise constant data on the mesh T. Unfortunately, the term ST(uT,uT) need not reduce under a mesh refinement, as η2T(uT) does: This makes the convergence analysis problematic. However, one of the key results obtained in [5] states that ST(uT,uT) is dominated by η2T(uT), i.e.,
ST(uT,uT) ≲ η2T(uT), |
provided an assumption of admissibility of the non-conforming meshes generated by successive refinements is fulfilled; such a restriction, which appears to have little practical impact, amounts to requiring the uniform boundedness of the global index of all hanging nodes, a useful concept introduced in [5] to hierarchically organize the set of hanging nodes. Once the a posteriori error bound is reduced to
‖u−uT‖2V ≲ η2T(uT), |
the convergence analysis becomes feasible, and a contraction property is proven to hold for a linear combination of the (squared) energy norm of the error and the (squared) residual estimator.
The purpose of this paper is to extend the results in [5] to the case of VEMs of order k≥2 built on triangular meshes. The problem at hand is again a variable-coefficient, second-order self-adjoint elliptic equation with Dirichlet boundary conditions. The geometric concept of hanging node (a vertex for some elements, contained inside an edge of some other elements) is replaced by a functional one, referring to the degrees of freedom associated with the node; once the meaning of hanging node is clarified, the definition of global index of a node, and its role in the analysis, is similar to the one given in [5].
A significant difference with respect to that paper concerns the control of the stabilization term, which involves not only the residual estimator, but also a new term, called the virtual inconsistency estimator and denoted by ΨT(uT). It measures the projection error, onto local spaces of polynomials, of certain expressions depending on the operator coefficients and the discrete solution; it vanishes when k=1 or when the coefficients are constant. The new stabilization bound, which we derive under an admissibility assumption on the mesh, takes the form
ST(uT,uT) ≲ η2T(uT)+Ψ2T(uT), |
which leads to the a posteriori, stabilization-free error control
‖u−uT‖2V ≲ η2T(uT)+Ψ2T(uT). |
Correspondingly, we obtain the convergence of the adaptive VEM of order k by proving a contraction result for a linear combination of (squared) energy norm of the error, (squared) residual estimator, and (squared) virtual inconsistency estimator.
Similarly to [5], we assume here that the data D of our boundary-value problem are piecewise polynomials of degrees related to k−1, on the initial mesh T0 and consequently on each mesh T derived by newest-vertex bisection. This is not a restriction, since we propose to insert the adaptive VEM procedure just described, which we now consider as a module GALERKIN, into an outer loop AVEM of the form
[T, u_T] = AVEM(T_0, ϵ_0, ω, tol)
  j = 0
  while ϵ_j > tol/2 do
    [ˆT_j, ˆD_j] = DATA(T_j, D, ω ϵ_j)
    [T_{j+1}, u_{j+1}] = GALERKIN(ˆT_j, ˆD_j, ϵ_j)
    ϵ_{j+1} ← ϵ_j/2
    j ← j+1
  end while
  return
where the module DATA produces, via greedy-type iterations, a piecewise polynomial approximation of the input data with prescribed accuracy, defined on a suitable refinement of the input partition. Manifestly, the target accuracy is matched after a finite number of calls to DATA and GALERKIN. Properties of complexity and quasi-optimality of this two-loop algorithm are investigated in [6] in the linear case k=1. We plan to do the same for the case k≥2 in a forthcoming paper.
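For orientation only, the outer loop can be sketched in Python as follows; data_approx and galerkin are hypothetical placeholders standing for the modules DATA and GALERKIN, whose actual implementations are not shown here.

    def avem(T0, D, eps0, omega, tol, data_approx, galerkin):
        # sketch of the two-loop algorithm AVEM (same notation as the pseudocode above);
        # data_approx(T, D, delta): hypothetical DATA module, returning a refined mesh and a
        #   piecewise polynomial approximation of the data D with accuracy delta;
        # galerkin(T_hat, D_hat, eps): hypothetical GALERKIN module, returning a refined mesh
        #   and the discrete solution whose combined estimator is below eps
        T, u, eps = T0, None, eps0
        while eps > 0.5 * tol:
            T_hat, D_hat = data_approx(T, D, omega * eps)   # approximate the data
            T, u = galerkin(T_hat, D_hat, eps)              # adaptive VEM inner loop
            eps *= 0.5                                      # halve the target accuracy
        return T, u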
The outline of this paper is as follows. In Sections 2 and 3, we introduce the model boundary-value problem, and its discretization by an enhanced version of the VEM ([1]). In Section 4 we define the global index of a node, and we formulate the admissibility assumption on the mesh. Two essential properties for bounding the stabilization term are established in Section 5. The a posteriori error estimators are defined in Section 6, whereas stabilization-free a posteriori error estimates are proven in Section 7. In Section 8, we investigate how the a posteriori error estimators are reduced under mesh refinement. These properties are needed to justify the refinement strategy in our adaptive module GALERKIN, which is described in Section 9. In Section 10, we prove the convergence of the loop GALERKIN. The paper ends with some numerical experiments, reported in Section 11.
We consider the following Dirichlet boundary value problem in a polygonal domain Ω,
−∇⋅(A∇u)+cu=f in Ω,    u=0 on ∂Ω, | (2.1) |
where A∈(L∞(Ω))2×2 is symmetric and uniformly positive definite in Ω, c∈L∞(Ω) and non-negative in Ω, f∈L2(Ω). Data will be denoted by D=(A,c,f). The variational formulation of this problem is written as
find u∈V:=H^1_0(Ω) such that B(u,v)=(f,v), ∀v∈V, | (2.2) |
where (⋅,⋅) is the scalar product in L2(Ω) and B(u,v):=a(u,v)+m(u,v) is the bilinear form associated with Problem (2.1), i.e.,
a(u,v):=(A∇u,∇v),m(u,v):=(cu,v). |
We denote the energy norm by |||⋅|||=√B(⋅,⋅), which satisfies
c_B |v|^2_{1,Ω} ≤ |||v|||^2 ≤ c^B |v|^2_{1,Ω}, ∀v∈V, | (2.3) |
for suitable constants 0 < c_B ≤ c^B.
Remark 2.1. For the sake of simplicity, in (2.1) we consider vanishing Dirichlet conditions on the whole boundary of the domain. The extension to generic Dirichlet and/or Neumann/Robin boundary conditions does not pose conceptual difficulties. In the numerical examples, we actually provide experiments with more general Dirichlet boundary conditions.
In order to find a discrete approximation of the solution of Problem (2.2), we first introduce a fixed initial partition T0 of the domain ¯Ω made of triangular elements E. We will denote by T any refinement of T0 obtained by a finite number of newest-vertex element bisections. We underline that we do not require T to be a conforming mesh, since hanging nodes may arise during refinement. The classification of nodes, which will play a crucial role in the proofs presented in this paper, is postponed to Section 4.
According to the Virtual Element theory [3], an element E of the triangulation can be viewed as a polygon with more than three edges, if some hanging nodes sit on its boundary. We can then denote by EE the set of edges e of the element E and set E:=⋃E∈TEE. We finally define the diameter of an element E as hE=|E|^{1/2} and h=maxE∈T hE.
We introduce the functional spaces needed to apply the VEM. We start by defining the space of functions on the boundary of E, V∂E,k, which is constituted by the functions that are continuous on the boundary of E and that, when restricted to any edge of ∂E, are polynomials of degree k>0, i.e.,
V∂E,k:={v∈C0(∂E):v|e∈Pk(e),∀e⊂∂E}. |
Then, as done in [1], we define the "enhanced" VEM space on E as
VE,k:={v∈H1(E):v|∂E∈V∂E,k,Δv∈Pk(E),(v−Π∇Ev,q)E=0∀q∈Pk(E)∖Pk−2(E)}, | (2.4) |
where Pk(E)∖Pk−2(E) is the space spanned by the monomials of degree equal to k and k−1, and Π∇E:H1(E)→Pk(E) is the projector defined by
(∇(v−Π∇Ev),∇q)E=0,∀q∈Pk(E),∫∂E(v−Π∇Ev)=0. |
We remark that VE,k contains the polynomial space of degree k on E and its dimension is
dim(VE,k)=nEe k+k(k−1)/2, | (2.5) |
where nEe is the number of edges of E. We notice that in the case k>1 a function v in VE,k is uniquely defined by
● the set of the values at the vertices of E;
● the set of the values at the k−1 equally-spaced internal points on each edge of ∂E;
● the set of the moments (1/|E|)∫Ev(x)m(x)dx, ∀m∈Mk−2(E),
where the set Mp(E), p≥0, is defined as
Mp(E)={((x−xE)/hE)^s, |s|≤p}. | (2.6) |
We will denote by μp(E,v)=((1/|E|)∫Ev(x)m(x)dx : m∈Mp(E)∖Mp−1(E)) the vector of the moments of v of order p. By |μp(E,v)| we will denote the l2-norm of this vector.
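As a simple illustration of the count (2.5): for k=2, a triangle carrying one hanging node is treated as a polygon with nEe=4 edges, so dim(VE,2)=4·2+2·1/2=9, corresponding to the values at the 4 vertices, one value at the internal point of each of the 4 edges, and the single moment against M0(E).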
We can now introduce the global discrete space as
VT:={v∈V:v|E∈VE,k∀E∈T}. |
On T we need also to give the definition of the space of piecewise polynomial functions on T
WkT:={w∈L2(Ω):w|E∈Pk(E)∀E∈T}, | (2.7) |
and its subspace
V0T:=VT∩WkT, | (2.8) |
which plays a crucial role in the forthcoming analysis.
We now introduce a series of projectors that will be used in the rest of the paper. For any E∈T, we denote by Π0p,E:L2(E)→Pp(E) the L2(E)-orthogonal projector onto the space of polynomial of degree p on E. Thanks to the choice of the enhanced space VE,k (2.4), we remark that Π0k,Ev and Π0k−1,E∇v can be computed for any function v∈VE,k, see [1] for the details. To simplify the notation, in the following we will drop the symbol E from Π0k,E when no confusion arises. The global L2-orthogonal projector is denoted by Π0p,T:L2(Ω)→WpT.
We can also define the Lagrange interpolation operator IE:VE,k→Pk(E) on E, which builds a polynomial of degree k using the 3k degrees of freedom on the boundary of E and the moments of order ≤k−3, since
dim(Pk(E))=3k+(k−1)(k−2)/2. |
Moreover, we will denote by IT:VT→WkT the Lagrange interpolation operator that restricts to IE on each E∈T.
In the rest of this paper, we assume that data D=(A,c,f) are piecewise polynomials of degree k−1 on the initial partition T0, hence on each partition T obtained by newest-vertex refinement. Their values on each element of the triangulation will be denoted by
(AE,cE,fE)∈(Pk−1(E))2×2×Pk−1(E)×Pk−1(E). |
We here define the bilinear forms that we need for the Galerkin discretization problem, starting from aE,mE:VE,k×VE,k→R, such that
aT(v,w):=∑E∈T∫E(AEΠ0k−1∇v)⋅(Π0k−1∇w)=:∑E∈TaE(v,w),
mT(v,w):=∑E∈T∫EcEΠ0kvΠ0kw=:∑E∈TmE(v,w). |
We also introduce the symmetric bilinear form sE:VE×VE→R as
sE(v,w):=∑_{i=1}^{N̄E} v(xi)w(xi), |
where {xi}_{i=1}^{N̄E} indicates the set of the degrees of freedom on the boundary of E. Indeed, we remark that in this case the stabilization term can be built without using the internal degrees of freedom, as shown in [7]. We assume for sE the existence of two positive constants cs and Cs, independent of E, such that
cs|v|2≤sE(v,v)≤Cs|v|2,∀v∈VE∖Pk(E). | (3.1) |
We define the local stabilizing form as
SE(v,w)=sE(v−IEv,w−IEw),∀v,w∈VE, |
and the global stabilization form
ST(v,w):=∑E∈TSE(v,w),∀v,w∈VT. |
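For illustration, the dofi-dofi choice above acts only on vectors of boundary degree-of-freedom values. A minimal Python sketch, assuming those values (and the corresponding values of the interpolants IEv, IEw) are already available; the helper names are hypothetical:

    import numpy as np

    def s_E(v_bdry, w_bdry):
        # dofi-dofi pairing: s_E(v, w) = sum_i v(x_i) w(x_i) over the boundary dofs of E
        return float(np.dot(np.asarray(v_bdry), np.asarray(w_bdry)))

    def S_E(v_bdry, w_bdry, Iv_bdry, Iw_bdry):
        # local stabilization S_E(v, w) = s_E(v - I_E v, w - I_E w)
        return s_E(np.asarray(v_bdry) - np.asarray(Iv_bdry),
                   np.asarray(w_bdry) - np.asarray(Iw_bdry))

    def S_T(local_contributions):
        # global stabilization S_T(v, w): sum of the local terms S_E(v, w) over E in T
        return float(sum(local_contributions))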
From (3.1), we get
ST(v,v)≃|v−ITv|2,∀v∈VT, |
where |⋅| denotes the broken H1-seminorm over T. Thus, we can now define the bilinear form BT(⋅,⋅), BT:VT×VT→R, as
BT(v,w)=aT(v,w)+mT(v,w)+γST(v,w), | (3.2) |
with γ independent of T satisfying γ≥γ0 for some fixed γ0>0. For the loading term we introduce FT:VT→R as
FT(v):=∑E∈T∫EfEΠ0kv=∑E∈T∫EfEv,∀v∈VT, | (3.3) |
since fE has been already approximated with a polynomial of degree k−1. Note that the equality in (3.3) remains true if fE is an approximation of f of degree k on E.
We have now defined all the forms that appear in the discrete formulation of the Problem (2.2). It reads as
{find uT∈VT such thatBT(uT,v)=FT(v),∀v∈VT. | (3.4) |
The bilinear form BT is continuous and coercive; hence, Problem (3.4) admits a unique, stable solution. Furthermore, the following result extends Lemma 2.6 in [5].
Lemma 3.1 (Galerkin quasi-orthogonality). For any v∈VT and w∈V0T, it holds
aT(v,w)=a(v,w)−∑E∈T∫E(AE(I−Π0k−1)∇v)⋅∇w,
mT(v,w)=m(v,w)−∑E∈T∫EcE((I−Π0k)v)w,
ST(v,w)=0. |
Consequently,
|B(u−uT,w)|≲ST(uT,uT)1/2|w|1,Ω, |
where u is the solution of (2.2) and uT the solution of (3.4).
A crucial concept, firstly introduced in [5] for the case k=1, is the global index of a node: It will be used in the proofs of Section 5. In order to extend its definition to the case k>1, we preliminarily introduce some useful definitions.
Let
ˆE:={(x,y)∈R2:x≥0,y≥0,x+y≤1} |
be the reference element and denote by ˆRˆE,k the k-lattice built on ˆE, i.e.,
ˆRˆE,k:={(i/k,j/k)∈R2 : i,j∈Z, i≥0, j≥0, i+j≤k}. |
Considering the affine function FE:ˆE→E mapping the reference element onto an element E∈T, we define the physical lattice on E by
RE,k:=FE(ˆRˆE,k), |
and the set of proper nodes of E as the points of the physical lattice sitting on the boundary of E, i.e.,
PE:=RE,k∩∂E. |
Observe that we implicitly assume that k≥2 is sufficiently small so that interpolation on equally spaced nodes is numerically stable.
Next, we denote by HE the set of hanging nodes of E, i.e., the set of points x∈∂E that are not proper nodes of E, but that are proper nodes of some other element E′, i.e.,
HE:={x∈∂E:∃E′∈T such that x∈PE′}∖PE. |
Finally, let NE:=PE∪HE be the set of all nodes sitting on E.
At the global level, N:=⋃E∈TNE will be the set of all nodes of the triangulation T, which we split into the set P:={x∈N:x∈PE ∀E containing x} of the proper nodes of T, and the set H:=N∖P of the hanging nodes of T.
Next, let us clarify what happens when a hanging node is created. Let S be an element edge that is being refined, i.e., split into two contiguous edges S− and S+. Before the refinement, S contains k+1 equally-spaced nodes ξn, n=1,…,k+1: the endpoints and the k−1 internal ones. After the refinement, S contains 2k+1 nodes, precisely k+1 equally-spaced nodes on each sub-edge S±, with the midpoint in common; see Figure 1. The spacing of the 'old' nodes on S was |S|/k (where |S| denotes the length of S), whereas the spacing of the 'new' nodes is |S|/(2k). Consequently, k+1 of these nodes coincide with those initially on S, and the new nodes introduced in the refinement are only k. We will denote these latter by ζi, i=1,…,k.
This suggests the following definition.
Definition 4.1 (closest neighbors of a node). With the previous notation, if x:=ζi is created as the midpoint of the segment [x′,x″]:=[ξni,ξni+1] for some ni, we define the set B(x):={x′,x″}.
We are ready to give the announced definition of global index of a node of the triangulation T.
Definition 4.2. (global index of a node). Given a node x∈N, we define its global index λ recursively as follows:
● If x is a proper node, then λ(x):=0;
● If x is a hanging node, with x′,x″∈B(x), then set
λ(x):=max{λ(x′),λ(x″)}+1. |
Figure 2 shows the evolution of the global index after three refinements in the cases k=2 (a) and k=3 (b). We remark that, for instance, the midpoint of the horizontal edge is a proper node in case (a), and a hanging node in case (b).
The largest global index in T will be denoted by ΛT:=maxx∈N{λ(x)}. In this paper, as in [5], we will consider sequences of successively refined triangulations {T} whose global index does not blow up.
Assumption 4.3. There exists a constant Λ>0 such that, for any triangulation T generated by successive refinements of T0, it holds
ΛT≤Λ. |
Any such triangulation will be called Λ-admissible.
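Definition 4.2 and Assumption 4.3 translate into a short recursion; the following Python sketch assumes each hanging node stores its pair of closest neighbors B(x) (the data structures are hypothetical):

    def global_index(x, proper_nodes, closest_neighbors):
        # global index lambda(x) of Definition 4.2;
        # proper_nodes: set of proper nodes, closest_neighbors: dict x -> (x1, x2) = B(x)
        if x in proper_nodes:
            return 0
        x1, x2 = closest_neighbors[x]
        return 1 + max(global_index(x1, proper_nodes, closest_neighbors),
                       global_index(x2, proper_nodes, closest_neighbors))

    def is_lambda_admissible(all_nodes, proper_nodes, closest_neighbors, Lambda):
        # Assumption 4.3: every node of the triangulation has global index at most Lambda
        return all(global_index(x, proper_nodes, closest_neighbors) <= Lambda
                   for x in all_nodes)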
In this section we discuss the validity of some results for the degree k>1 that will be used in the rest of the paper. We will highlight in particular the differences from the case k=1.
Proposition 5.1 (scaled Poincaré inequality in VT). There exists a constant CP>0, independent of T, such that
∑E∈Th−2E‖v‖20,E≤CP|v|2, ∀v∈VT such that v(x)=0, ∀x∈P. | (5.1) |
Proof. Let E∈T be an element of the triangulation. If E is an element of the original partition T0, all its vertices are proper nodes. Otherwise, E has been generated after some refinements by splitting an element ˜E into two elements, E and E′. Let L be the common edge shared by E and E′. If L is not further refined, then all the nodes on L are proper because they are shared by E and E′. If L is refined and k is even, then the midpoint of L is a proper node.
So, let us consider the case k odd, and let us assume that L is refined M≥1 times. We focus in particular on the internal node x̄ of L that is at distance |L|/k from one of the endpoints; Figure 3 shows the case k=3. This point belongs to one of the M+1 intervals into which L is subdivided; this interval has width |L|/2^s for some 1≤s≤M. We remark that s depends on how L has been refined (in the case of uniform refinements of L, one has 2^s=M+1). We localize the chosen node x̄ in L by defining m≥0 such that
|L|m/2^s ≤ |L|/k ≤ |L|(m+1)/2^s, |
or, equivalently,
km ≤ 2^s ≤ k(m+1). | (5.2) |
The interval going from |L|m/2^s to |L|(m+1)/2^s is an edge of a smaller element E′, thus it contains k−1 internal nodes. Since they are equi-spaced, their positions are at
(|L|/2^s)(m+n/k) with n=0,…,k. |
By taking n=2^s−mk, which is compatible with conditions (5.2), we conclude that one of the internal nodes of E′ coincides with x̄.
This guarantees that E has at least one proper node x on its boundary. By hypothesis v(x)=0, and so we can apply the classical Poincaré inequality,
h−2E‖v‖20,E ≲ |v|21,E, |
which concludes the proof.
Remark 5.2. The previous proof exploits the fact that, when k>1, each element of the triangulation contains at least one proper node on its boundary. This differs from the case k=1, in which edges do not contain internal nodes, and hence elements whose vertices are all hanging nodes are admissible. As a further difference from the case k=1, we highlight that in Proposition 5.1 the constant CP does not depend on the constant Λ, whose existence has been introduced in Assumption 4.3.
The next result we are going to establish is a hierarchical representation of the interpolation error v−IEv on the boundary ∂E of an element E∈T. Assume that v∈VE,k, and let L be a side of the triangle E; for simplicity, in the sequel the restriction of v to L, which is a piecewise polynomial of degree k, will be still denoted by v. The subsequent bisections of L which generate the nodes in NE∩L allow us to write the difference (v−IEv)|L telescopically as
(v−IEv)|L=JL∑j=1(Ij−Ij−1)v; | (5.3) |
here, I0=IE|L, IJL is the identity operator, whereas Ijv for 1≤j≤JL−1 is the piecewise polynomial of degree k which interpolates v on the partition of L of level j, namely the partition formed by sub-edges of length ≤|L|/2^j.
In order to understand the structure of the detail (Ij−Ij−1)v, assume that S is a sub-edge of L of length |L|/2^{j−1}, which is split into two sub-edges S± of length |L|/2^j (see Figure 1). On S we have two interpolation operators, namely
I:=Ij−1|S:C0(S)→Pk(S) |
and
I∨:=Ij|S:C0(S)→Pk(S−,S+)={v∈C0(S):v|S−∈Pk(S−) and v|S+∈Pk(S+)}, |
which coincides with the interpolation operator I−:C0(S−)→Pk(S−) when restricted to S− and with the analogous operator I+ when restricted to S+. With the notation introduced just before Definition 4.1, we can quantify the discrepancy between the two interpolation operators by defining the k basis functions
ψi∈Pk(S−,S+), 1≤i≤k, such that ψi(ζi)=1, ψi(ζj)=0 for j≠i, and ψi(ξn)=0 for n=1,…,k+1. |
See Figure 4 for a graphical representation of these functions in the cases k=1 (a), k=2 (b), k=3 (c).
Hence, the difference between the two interpolation operators on S can be written as
I∨v−Iv=k∑i=1d(v,ζi)ψi, |
where d is defined as
d(v,ζi):=(I∨v−Iv)(ζi)=(v−Iv)(ζi). | (5.4) |
The values of Iv at the k nodes ζi are linear combinations of the values of Iv at the k+1 nodes ξn, where Iv coincides with v. Thus, there exist coefficients αi,n such that
(Iv)(ζi)=k+1∑n=1αi,nv(ξn),i=1,…,k. | (5.5) |
The explicit values of these coefficients in the case k=2 for the two new nodes ζ1 and ζ2 are given in these expressions:
(Iv)(ζ1)=(3/8)v(ξ1)+(3/4)v(ξ2)−(1/8)v(ξ3),
(Iv)(ζ2)=−(1/8)v(ξ1)+(3/4)v(ξ2)+(3/8)v(ξ3), |
where ξi≤ζi≤ξi+1, i=1,2. Similarly, in the case k=3, we get
(Iv)(ζ1)=(5/16)v(ξ1)+(15/16)v(ξ2)−(5/16)v(ξ3)+(1/16)v(ξ4),
(Iv)(ζ2)=−(1/16)v(ξ1)+(9/16)v(ξ2)+(9/16)v(ξ3)−(1/16)v(ξ4),
(Iv)(ζ3)=(1/16)v(ξ1)−(5/16)v(ξ2)+(15/16)v(ξ3)+(5/16)v(ξ4), |
where again ξi≤ζi≤ξi+1, i=1,2,3. Figure 5 shows both cases. We notice that the coefficients αi,n depend only on the relative positions of the nodes on S, not on the level j of refinement.
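Since the coefficients αi,n are just the values at the new nodes of the Lagrange basis functions attached to the old nodes, they can be checked with a few lines of Python (the normalization of S to the interval [0,1] is an assumption made only for this check):

    import numpy as np

    def lagrange_coefficients(old_nodes, new_node):
        # alpha_{i,n} in (5.5): values at new_node of the Lagrange basis built on old_nodes
        old = np.asarray(old_nodes, dtype=float)
        alpha = np.ones_like(old)
        for n, xn in enumerate(old):
            for m, xm in enumerate(old):
                if m != n:
                    alpha[n] *= (new_node - xm) / (xn - xm)
        return alpha

    k = 2
    xi = np.linspace(0.0, 1.0, k + 1)               # old nodes xi_n on the edge S
    zeta = np.linspace(0.0, 1.0, 2 * k + 1)[1::2]   # new nodes zeta_i created by the bisection
    for z in zeta:
        print(z, lagrange_coefficients(xi, z))      # expect [3/8, 3/4, -1/8] and [-1/8, 3/4, 3/8]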
Summarizing, at the level j of refinement of the edge L, we get
(Ij−Ij−1)v=∑x∈HL,jd(v,x)ψx, |
where HL,j is the set of hanging nodes on L created at the level j of refinement, whereas
d(v,x)=(Ijv−Ij−1v)(x)=(v−Ij−1v)(x). |
Summing-up over the levels and recalling (5.3), we obtain
(v−IEv)|L=∑x∈HLd(v,x)ψx. |
where HL=HE∩L, whence
(v−IEv)|∂E=∑x∈HEd(v,x)ψx. |
We now introduce the subspace of VE,k
XE:={w∈VE,k:w(x)=0∀x∈PE, and μp(w,E)=0,0≤p≤k−3}, |
which contains v−IEv by definition of IE. On XE, we have two norms, namely the seminorm |w|1,E (which is a norm on XE due to the vanishing of w at the three vertices of E) and the norm
[[w]]XE:=(∑x∈HEd2(w,x)+|μk−2(E,w)|2)1/2. |
Note that, due to Assumption 4.3, the dimension of XE is uniformly bounded by a constant depending on Λ; furthermore, the number of possible patterns of hanging nodes on ∂E, which determines the details d(w,x), is also bounded in terms of Λ. As a consequence, the two norms are equivalent, with equivalence constants depending on Λ. Therefore,
∑x∈HEd2(w,x)≤[[w]]2XE≃|w|21,E,∀w∈XE. |
Since v−IEv∈XE and d(v−IEv,x)=d(v,x) for any x∈HE, we obtain
∑x∈HEd2(v,x)≲|v−IEv|21,E. |
Summing-up over all the elements of the triangulation, we arrive at the following result.
Lemma 5.3 (global interpolation error vs hierarchical errors). There exists a constant CD>0 depending on Λ but independent of the triangulation T such that
∑x∈Hd2(v,x)≤CD|v−ITv|2,∀v∈VT. | (5.6) |
Next, we introduce the interpolation operator
I0T:VT→V0T, | (5.7) |
where V0T is defined in (2.8), by the following conditions:
● (I0Tv)(x)=v(x) for all x∈P,
● μp(E,I0Tv)=μp(E,v) for all 0≤p≤k−3 and for all E∈T.
These conditions uniquely identify I0Tv. Indeed, if x∈H is generated by a refinement of level j of an edge L (say, x=ζi with the notation introduced before Definition 4.1), then (I0Tv)(x) can be expressed in terms of the values of I0Tv at the k+1 nodes (say, ξn) created at the previous levels of refinement of L, using the same coefficients as in formula (5.5), i.e.,
(I0Tv)(ζi)=k+1∑n=1αi,n(I0Tv)(ξn),i=1,…,k; | (5.8) |
and so on recursively.
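In practice, the recursion (5.8) mirrors the one defining the global index: proper nodes keep the value of v, while each hanging node combines the values already assigned to the k+1 nodes of the edge it was created on. A minimal sketch (the data structures are hypothetical):

    def eval_I0T(x, v_values, proper_nodes, edge_parents, alpha):
        # value of I^0_T v at the node x, following (5.8);
        # v_values: dict node -> v(node); edge_parents: dict hanging node -> the k+1 nodes
        # xi_n of the edge it was created on; alpha: dict hanging node -> coefficients alpha_{i,n}
        if x in proper_nodes:
            return v_values[x]
        return sum(a * eval_I0T(xn, v_values, proper_nodes, edge_parents, alpha)
                   for a, xn in zip(alpha[x], edge_parents[x]))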
The following result provides a representation of the error ITv−I0Tv.
Lemma 5.4. It holds
|ITv−I0Tv|2≃∑x∈Hδ2(v,x),∀v∈VT, |
where δ(v,x):=v(x)−(I0Tv)(x).
Proof. Consider an element E∈T. Recall that by construction it holds μp(E,IEv)=μp(E,v)=μp(E,I0Tv), whence μp(IEv−I0Tv,E)=0 for all 0≤p≤k−3. Consequently,
|IEv−I0Tv|2≃∑x∈PE|(IEv−I0Tv)(x)|2. |
If x∈PE, (IEv)(x)=v(x), hence
|IEv−I0Tv|2≃∑x∈PE|(v−I0Tv)(x)|2. |
Summing on all the elements of the partition, we get
∑E∈T|IEv−I0Tv|2≃∑x∈N|(v−I0Tv)(x)|2≃∑x∈H|(v−I0Tv)(x)|2, |
since if x∈P, (I0Tv)(x)=v(x). This concludes the proof.
Concatenating Lemmas 5.3 and 5.4, we can prove the second key property of this section.
Proposition 5.5 (comparison between interpolation operators). Let T be Λ-admissible. Then, there exists a constant CI>0, depending on Λ, but independent of T, such that
|v−I0Tv|≤CI|v−ITv|,∀v∈VT. |
Proof. Given a function v∈VT, by the triangle inequality
|v−I0Tv|=|v−I0Tv|1,T≤|v−ITv|1,T+|ITv−I0Tv|1,T, |
so it is enough to bound the last norm on the right-hand side. To this end, considering the vectors
δ=(δ(x))x∈H:=(δ(v,x))x∈H,d=(d(x))x∈H:=(d(v,x))x∈H, |
and recalling the two Lemmas, the proof can be concluded if we show that
‖δ‖l2(H)≲‖d‖l2(H). |
Given x∈H, assume that it is generated by a refinement of level j of an edge L (say, x=ζi with the notation introduced before Definition 4.1). Writing v∗:=I0Tv for short, and exploiting formulas (5.4) and (5.5), we get
δ(ζi)=v(ζi)−v∗(ζi)=v(ζi)−k+1∑n=1αi,nv∗(ξn)
=v(ζi)−k+1∑n=1αi,nv(ξn)−k+1∑n=1αi,n(v∗(ξn)−v(ξn))
=d(ζi)+k+1∑n=1αi,nδ(ξn). | (5.9) |
Thus, we can build a matrix W:l2(H)→l2(H) such that δ=Wd, and we just need to prove that
||W||2≲1. |
We now organize the hanging nodes with respect to the global index λ∈[1,ΛT]. Calling Hλ={x∈H:λ(x)=λ}, and H=⋃1≤λ≤ΛTHλ, the matrix W can be factorized in lower triangular matrices Wλ, that change the nodes of level λ, leaving the others unchanged. In particular,
W=WΛTWΛT−1...W2W1, |
where W1 is just the identity matrix I, whereas each other matrix Wλ differs from the identity only in the rows of block λ. In each of these rows, all entries are zero except the off-diagonal entries αi,n and the entry 1 on the diagonal. In order to estimate Wλ, we use the Hölder inequality ||Wλ||22≤||Wλ||1||Wλ||∞. From the construction of Wλ we have that
||Wλ||∞≤maxn{k+1∑i=1|αi,n|}+1=:β1,||Wλ||1≤5kmaxi,n|αi,n|+1=:β2, |
where in the last inequality we used the fact that a hanging node of global index <λ may appear at most 5 times on the right-hand side of (5.9), since at most five edges meet at a node [5, Proposition 3.2]. This leads to the following bound
||W||2≤∏2≤λ≤ΛT||Wλ||2≤(β1⋅β2)^{(Λ−1)/2}
and the proof is concluded.
With the aim of discussing the a posteriori error analysis, and following [12], we define the a posteriori error estimators, starting from the internal residual over an element E, i.e.,
rT(E;v,D):=fE+∇⋅(AEΠ0k−1∇v)−cEΠ0kv, | (6.1) |
for any v∈VE,k. We highlight that in the case k=1, with piecewise constant data, the diffusion term in the residual vanishes. Furthermore, we define the jump residual over e, where e is an edge shared by two elements E1 and E2 of the partition T, as
jT(e;v,D):=[[AΠ0k−1∇v]]e=(AE1Π0k−1∇v|E1)⋅n1+(AE2Π0k−1∇v|E2)⋅n2, |
where ni denotes the unit normal vector to e pointing outward with respect to Ei; we set jT(e;v,D)=0 if e⊂∂Ω. Then, let the local residual estimator associated with E be
η2T(E;v,D):=h2E||rT(E;v,D)||20,E+(1/2)∑e∈EEhE||jT(e;v,D)||20,e, | (6.2) |
and the global residual estimator as the sum of the local residuals
η2T(v,D):=∑E∈Tη2T(E;v,D). |
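For orientation, the elementwise quantities entering (6.2) combine as in the following sketch, where the residual and jump norms are assumed to have been computed beforehand by quadrature (the helper signatures are hypothetical):

    import numpy as np

    def edge_jump(flux_E1, flux_E2, n1):
        # jump residual on an interior edge e shared by E1 and E2:
        # j_T(e; v) = (A_E1 Pi grad v)|E1 . n1 + (A_E2 Pi grad v)|E2 . n2, with n2 = -n1;
        # flux_E1, flux_E2: (nq, 2) arrays of the two numerical fluxes at quadrature points on e
        return (np.asarray(flux_E1) - np.asarray(flux_E2)) @ np.asarray(n1)

    def local_eta_squared(h_E, r_norm2, jump_norm2_per_edge):
        # local estimator (6.2): h_E^2 ||r_T(E)||^2_{0,E} + (1/2) sum_e h_E ||j_T(e)||^2_{0,e};
        # jump_norm2_per_edge: one squared L2(e) norm per edge of E (zero on boundary edges)
        return h_E**2 * r_norm2 + 0.5 * h_E * sum(jump_norm2_per_edge)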
In contrast to what has been done for the case k=1, we also need to introduce the virtual inconsistency terms, defined by
Ψ2T,A(E;v,D):=||(I−Π0k−1)(AEΠ0k−1∇v)||20,E,
Ψ2T,c(E;v,D):=h2E||(I−Π0k)(cEΠ0kv)||20,E, | (6.3) |
as well as their sum
Ψ2T(v,D):=∑E∈TΨ2T(E;v,D):=∑E∈T(Ψ2T,A(E;v,D)+Ψ2T,c(E;v,D)). | (6.4) |
In this section we present one of the main results of this paper, a stabilization-free a posteriori error bound. To this end, we start by introducing the classical Clément operator onto the space V0T, ˜I0T:V→V0T; it is defined at the proper nodes on the skeleton of T as the average of the target function over the support of the associated basis functions, whereas the internal moments (if any) coincide with those of the target function.
The scaled Poincaré inequality (Proposition 5.1) and Proposition 5.5 guarantee the validity of the following error estimate for ˜I0T. Given these propositions, its proof does not involve the polynomial degree k and hence does not change with respect to the one presented in [5].
Lemma 7.1 (Clément interpolation estimate). ∀v∈V, it holds
∑E∈Th−2E‖v−˜I0Tv‖2≲|v|2, |
where the hidden constant depends on Λ but not on T.
We can now prove the following result, which is similar to Theorem 13 in [12], but with a slightly modified proof.
Proposition 7.2 (upper bound). There exists a constant Capost>0, independent of u, T, uT and γ, such that
|u−uT|21,Ω≤Capost(η2T(uT,D)+ST(uT,uT)). | (7.1) |
Proof. For any v∈V, using the definition of Problem (2.2), we have that
B(u−uT,v)=B(u,v)−B(uT,v)−(f,vT)+B(u,vT)=(f,v−vT)−B(uT,v)+B(u,vT)−B(uT,vT)+B(uT,vT)=((f,v−vT)−B(uT,v−vT))+B(u−uT,vT)=:I+II, |
where vT:=˜I0Tv∈V0T. The first term can be written as
I=∑E∈T{∫EfE(v−vT)−∫EAE∇uT⋅∇(v−vT)−∫EcEuT(v−vT)}=∑E∈T{∫EfE(v−vT)−∫E(AEΠ0k−1∇uT)⋅∇(v−vT)−∫E(cEΠ0kuT)(v−vT)}+∑E∈T{∫E(AE(Π0k−1−I)∇uT)⋅∇(v−vT)+∫E(cE(Π0k−I)uT)(v−vT)}=:I1+I2. |
The addend I1 can be expressed as
I1=∑E∈T{∫E(fE+∇⋅(AEΠ0k−1∇uT)−cEΠ0kuT)(v−vT)}+∑E∈T∫∂En⋅(AEΠ0k−1∇uT)(v−vT), |
which can be bounded by using Lemma 7.1,
|I1|≲ηT(uT,D)|v|1,Ω. |
On the other hand, noting that
‖(I−Π0k−1)∇uT‖=‖(I−Π0k−1)∇(I−Π0k)uT‖≤‖∇(I−Π0k)uT‖ | (7.2) |
and applying again Lemma 7.1 and the stability of the Clément operator in the H1 norm, the addend I2 can be bounded as follows:
|I2|≤(∑E∈T||AE(I−Π0k−1)∇uT||20,E||∇(v−vT)||20,E+∑E∈Th2E||cE(I−Π0k)uT||20,Eh−2E||v−vT||20,E)1/2≲(∑E∈T||AE(I−Π0k−1)∇uT||20,E+h2E||cE(I−Π0k)uT||20,E)1/2|v|1,Ω≲(∑E∈T||∇(uT−Π0kuT)||20,E+h2E||(uT−Π0kuT)||20,E)1/2|v|1,Ω≲(ST(uT,uT))1/2|v|1,Ω. |
Looking now at the term II, we have by Lemma 3.1
|B(u−uT,v)|≲ST(uT,uT)1/2|v|1,Ω. |
Finally, by taking v:=u−uT∈V, we get
B(u−uT,u−uT)≲(ηT(uT,D)+ST(uT,uT)1/2)|u−uT|1,Ω, |
which, using the coercivity of B, concludes the proof.
We now report a bound for the local residual estimator, proved in [12, Theorem 16].
Proposition 7.3 (local lower bound). There holds
η2T(E;uT,D)≲∑E′∈wE(|u−uT|21,E′+SE′(uT,uT)) |
where wE:={E′:|∂E∩∂E′|≠0}. The hidden constant is independent of γ, h, u and uT.
Summing on all the elements of the partition, we get the following corollary.
Corollary 7.4 (global lower bound). There exists a constant capost>0, independent of u, T, uT and γ, such that
capostη2T(uT,D)≤|u−uT|21,Ω+ST(uT,uT). |
In the following proposition we present a bound of the stabilization term. We remark that in the case k=1 the inconsistency term does not appear.
Proposition 7.5 (bound of the stabilization term). There exists a constant CB>0 independent of T, uT and γ, such that
γ2ST(uT,uT)≤CB(η2T(uT,D)+Ψ2T(uT,D)). | (7.3) |
Proof. From the Definition (3.2) of the form BT and from (3.4), ∀w∈V0T it holds
γST(uT,uT)=γST(uT,uT−w)=BT(uT,uT−w)−aT(uT,uT−w)−mT(uT,uT−w)=F(uT−w)−aT(uT,uT−w)−mT(uT,uT−w). |
Defining eT:=uT−w, we get
γST(uT,eT)=∑E∈T{∫EfeT−∫E(AEΠ0k−1(∇uT))⋅Π0k−1(∇eT)−∫EcEΠ0kuTΠ0keT}. | (7.4) |
We notice that
∫E(AEΠ0k−1(∇uT))⋅Π0k−1(∇eT)=∫E(Π0k−1(AEΠ0k−1∇uT))⋅∇eT=∫E(Π0k−1−I)(AEΠ0k−1∇uT)⋅∇eT+∫E(AEΠ0k−1∇uT)⋅∇eT | (7.5) |
and
∫EcEΠ0kuTΠ0keT=∫EΠ0k(cEΠ0kuT)eT=∫E(Π0k−I)(cEΠ0kuT)eT+∫EcE(Π0kuT)eT. | (7.6) |
By substituting (7.5) and (7.6) into (7.4), it results
γST(uT,uT)=∑E∈T∫E(f+∇⋅(AEΠ0k−1∇uT)−cEΠ0kuT)eT−∑E∈T∫∂En⋅∇(AEΠ0k−1∇uT)eT+∑E∈T∫E(I−Π0k−1)(AEΠ0k−1∇uT)⋅∇eT+∑E∈T∫E(I−Π0k)(cEΠ0kuT)eT≤∑E∈ThE||rT(E;uT,D)||0,Eh−1E||eT||0,E+12∑e∈Eh1/2e||jT(e;uT,D)||0,eh−1/2e||eT||0,e+∑E∈T||(I−Π0k−1)(AEΠ0k−1∇uT)||0,E||∇eT||0,E+∑E∈ThE||(I−Π0k)cEΠ0kuT||0,Eh−1E||eT||0,E≤∑E∈ThE||rT(E;uT,D)||0,Eh−1E||eT||0,E+12∑e∈Eh1/2e||jT(e;uT,D)||0,eh−1/2e||eT||0,e+Cinv∑E∈TΨA(E;uT,D)h−1E||eT||0,E+∑E∈TΨc(E;uT,D)h−1E||eT||0,E. |
With the same strategy used in [5], for any δ>0, we get
γST(uT,uT)≤(1/(2δ))(η2T(uT,D)+Ψ2T(uT,D))+(δ/2)ΦT(eT), |
where
ΦT(eT)=∑E∈T{max{C2inv,1}h−2E||eT||20,E+(1/2)∑e∈EEh−1E||eT||20,e}. |
Setting now w=I0TuT and applying Proposition 5.1, we get
ΦT(uT−I0TuT)≲|uT−I0TuT|21,Ω, |
whereas Proposition 5.5 yields
|uT−I0TuT|21,Ω≲|uT−ITuT|21,Ω≃ST(uT,uT), |
so we obtain
γ2ST(uT,uT)≤CB(η2T(uT,D)+Ψ2T(uT,D)), |
for a suitable constant CB>0.
Combining Propositions 7.2 and 7.5, we arrive at the following key result.
Corollary 7.6 (stabilization-free a posteriori error upper bound). It holds
|u−uT|21,Ω≤CU1η2T(uT,D)+CU2Ψ2T(uT,D), |
where CU1=Capost(CB/γ2+1) and CU2=Capost CB/γ2.
Remark 7.7. Note that the chosen stabilization affects the value of the constant capost, which in principle may depend on the polynomial degree and the geometry of the mesh. However, this dependence is under control; indeed, (i) we are not proposing a p-method, so the polynomial degree k is fixed, (ii) the refinement procedure is obtained by newest-vertex bisection, which guarantees shape regularity on the refined elements, (iii) Assumption 4.3 enforces an upper bound on the number of hanging nodes on each edge.
In view of the convergence analysis of the adaptive algorithm GALERKIN, in this section we analyse the effect of refining the partition T by applying one or more newest-vertex bisections to some of its elements. Specifically, in Sect. 8.1 we prove that the residual estimator (6.2) is reduced by a fixed fraction (up to an addend proportional to the stabilization term) when the element E is split into two elements by one bisection. We prove a similar result for the inconsistency term estimator (6.4), provided a suitable number of bisections is applied to E. Next, in Sect. 8.2 we establish a quasi-orthogonality property in the energy norm between the solutions on two partitions, one being a refinement of the other.
Let us consider an element E in T which is bisected into elements E1 and E2; the refined partition containing these two elements will be denoted by T∗. Given v∈VT, we notice that v is known on ∂E, and in particular at the new vertex of E1 and E2 produced by the bisection. Denoting by e=E1∩E2 the new edge, we associate a function v∗∈VT∗ to v such that v∗|∂E=v|∂E, v∗|e∈P1(e), and μp(Ei,v∗)=μp(E,v) for all 0≤p≤k−2 and for i=1,2. In the following we will write v instead of v∗ when no confusion arises.
Let ηT(E;v,D) be defined in (6.2) and ηT∗(E;v,D) be the sum of the local residual estimators on the two newly formed elements, defined as follows:
η2T∗(E;v,D):=2∑i=1η2T∗(Ei;v,D)=2∑i=1{h2Ei||rT(Ei;v,D)||20,Ei+12∑e∈EEihEi||jT(e;v,D)||20,e}, |
where we recall that hEi=hE/√2, i=1,2. We notice that, since D does not change under refinement, the functions fEi=fE|Ei, cEi=cE|Ei and AEi=AE|Ei will be denoted again by fE, cE and AE, respectively.
Lemma 8.1 (local residual estimator reduction). There exist constants μr∈(0,1) and cer,1>0 such that for any v∈VT
ηT∗(E;v,D)≤μrηT(E;v,D)+cer,1S1/2T(E)(v,v), |
where ST(E)(v,v):=∑E′∈T(E)SE′(v,v) with T(E):={E′∈T:EE∩EE′≠∅}.
Proof. Recalling the Definition (6.1), we have the following residuals
rE:=fE+∇⋅(AEΠ0k−1,E∇v)−cEΠ0k,Ev,rEi:=fE+∇⋅(AEΠ0k−1,Ei∇v)−cEΠ0k,Eiv. |
Writing
rEi=rE−∇⋅(AEΠ0k−1,E∇v−AEΠ0k−1,Ei∇v)+cEΠ0k,Ev−cEΠ0k,Eiv, |
we get, for any ϵ>0,
2∑i=1h2Ei||rEi||20,Ei≤2∑i=1h2Ei(1+ϵ)||rE||20,Ei+22∑i=1h2Ei(1+1ϵ)||∇⋅(AE(Π0k−1,E∇v−Π0k−1,Ei∇v))||20,Ei+22∑i=1h2Ei(1+1ϵ)||cE(Π0k,Ev−Π0k,Eiv)||20,Ei. |
The second term can be bounded by using the inverse inequality and the minimality of Π0k−1,Ei as follows:
∑_{i=1}^{2}h2Ei||∇⋅(AE(Π0k−1,E∇v−Π0k−1,Ei∇v))||20,Ei ≲ ∑_{i=1}^{2}||Π0k−1,E∇v−Π0k−1,Ei∇v||20,Ei ≤ 2||∇v−Π0k−1,E∇v||20,E+2∑_{i=1}^{2}||∇v−Π0k−1,Ei∇v||20,Ei ≤ 4||∇v−Π0k−1,E∇v||20,E ≲ |v−IEv|21,E ≲ SE(v,v), |
while, for the last term, using the Poincaré inequality we have
2∑i=1h2Ei||cE(Π0k,Ev−Π0k,Eiv)||20,Ei≲h2E2∑i=1||Π0k,Ev−Π0k,Eiv||20,Ei≤h2E||v−Π0k,Ev||20,E≲h2E|v−Π0k,Ev|21,E≲h2ESE(v,v). |
Finally, taking an appropriate value of ϵ and setting μ:=(1+ϵ)/2∈(0,1) (for instance, if ϵ=1/2, μ=3/4) we get
2∑i=1h2Ei||rEi||20,Ei≤μh2E||rE||20,E+C(1+h2E)SE(v,v), |
where C>0 is a constant.
For the jump condition, we will essentially use the proof given in [5, Lemma 5.2]. In particular, we write jT∗(e;v)=jT(e;v)+(jT∗(e;v)−jT(e,v)) and for any ϵ>0
∑_{i=1}^{2}∑e∈EEihEi‖jT∗(e;v)‖2≤(1+ϵ)T1+(1+1/ϵ)T2,
with T1:=∑2i=1∑e∈EEihEi‖jT(e;v)‖2 and T2:=∑2i=1∑e∈EEihEi‖jT∗(e;v)−jT(e;v)‖2. On the new edge we notice that jT(e;v)=0; hence,
T1≤(1/√2)∑e∈EEhE‖jT(e;v)‖2.
We now define T∗(Ei):={E′∈T∗:EEi∩EE′≠∅}; for any edge e∈EEi, we denote by Ei,e∈T∗(Ei) the element such that e=∂Ei∩∂Ei,e. Then,
‖jT∗(e;v)−jT(e;v)‖=‖[[A(Π0T∗−Π0T)∇v]]‖≤‖AE(Π0k−1,Ei−Π0k−1,E)∇v‖+‖AˆEi,e(Π0k−1,Ei,e−Π0k−1,ˆEi,e)∇v‖, |
where ˆEi,e indicates the parent of Ei,e. Using the trace inequality we have
T2≲2∑i=1∑E′∈T∗(Ei)||(Π0k−1,E′−Π0k−1^E′)∇v||20,E′≲2∑i=1∑E′∈T∗(Ei)(||∇v−Π0k−1,E′∇v||20,E′+||∇v−Π0k−1,^E′∇v||20,E′) |
Using now the minimality property of Π0k−1,E′ and Π0k−1,ˆE′, we easily get as above
T2≤∑E′∈T(E)||∇(v−IE′v)||20,E′≲∑E′∈T(E)SE′(v,v), |
which, for a sufficiently small ϵ, concludes the proof.
From this Lemma and the Lipschitz continuity of the residual estimator with respect to the argument v (whose proof is independent of the used polynomial degree, so we refer to [5, Lemma 5.3]), we immediately deduce the following result.
Proposition 8.2 (residual estimator reduction on refined elements). There exist constants μr∈(0,1), cer,1>0 and cer,2>0 independent of T such that for any v∈VT and w∈VT∗, and any element E∈T which is split into two children E1,E2∈T∗, one has
ηT∗(E;w,D)≤μr ηT(E;v,D)+cer,1S1/2T(E)(v,v)+cer,2|v−w|1,T(E). | (8.1) |
Given v∈VT and E∈T, consider the two virtual inconsistency terms ΨT,A(E,v,D) and ΨT,c(E,v,D) introduced in (6.3). When E is bisected into E1 and E2, the term ΨT,c(E,v,D) is reduced by a factor μc<1 up to an addend proportional to the stabilization term, i.e., there exists cvi,c>0 such that
(2∑i=1ΨT∗,c(Ei,v,D)2)1/2≤μcΨT,c(E,v,D)+cvi,cSE(v,v)1/2. | (8.2) |
This stems from the presence of the factor hE in front of the norm ||(I−Π0k)(cEΠ0kv)||0,E, with an argument similar to the one used in the proof of Lemma 8.1.
Due to the lack of the factor hE, a reduction result similar to (8.2) does not hold for ΨT,A(E,v,D). Indeed, since AEΠ0k−1,E∇v∈P2k−2(E), one may ask whether a constant μ<1 exists such that
2∑i=1‖(I−Π0k−1,Ei)q‖20,Ei≤μ2‖(I−Π0k−1,E)q‖20,E∀q∈P2k−2(E). | (8.3) |
Unfortunately, the answer is no, as can be seen numerically by working on the reference element ˆE by affinity and identifying μ2 as the largest eigenvalue of a generalized eigenvalue problem. However, the same numerical computations indicate that if ˆE is split into 2^m triangles of equal area by m successive levels of uniform bisections, then μ2 becomes <1 for m large enough, as seen in Table 1.
Table 1. Numerically computed values of μ2 in (8.3) after m levels of uniform bisection of ˆE.
 | m=1 | m=2 |
k=2 | 1.0000 | 0.3153
k=3 | 1.0000 | 0.6648
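The values in Table 1 can be reproduced by assembling the two quadratic forms in (8.3) (or (8.4)) with respect to a basis of P2k−2(ˆE) and solving a generalized eigenvalue problem. Once the two symmetric matrices A (left-hand side) and B (right-hand side) have been assembled by quadrature on ˆE and on its children (not shown here), the linear-algebra step may be sketched as follows:

    import numpy as np
    from scipy.linalg import eigh

    def largest_mu_squared(A, B, tol=1e-12):
        # largest mu^2 such that q^T A q <= mu^2 q^T B q for all q outside the common kernel
        # (both projection-error forms vanish on P_{k-1}, so B is restricted to its range)
        w, V = np.linalg.eigh(B)
        keep = w > tol * w.max()
        Vr = V[:, keep]
        Ar, Br = Vr.T @ A @ Vr, Vr.T @ B @ Vr
        return eigh(Ar, Br, eigvals_only=True)[-1]   # eigenvalues are returned in ascending order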
This is indeed predicted by the following result.
Lemma 8.3. Let E∈T. For any polynomial degree k≥1 there exist a minimal m∈N and a constant μ=μm<1 independent of E such that, if E is partitioned into 2^m elements Ei of equal area by m levels of uniform newest-vertex bisection, it holds
∑_{i=1}^{2^m}‖(I−Π0k−1,Ei)q‖20,Ei≤μ2‖(I−Π0k−1,E)q‖20,E, ∀q∈P2k−2(E). | (8.4) |
Proof. Since by construction hEi=2^{−m/2}hE, classical approximation results give
∑_{i=1}^{2^m}‖(I−Π0k−1,Ei)q‖20,Ei≤Ck 2^{−m} h2E|q|21,E
for some constant Ck depending on k. Replacing q by q−Π0k−1,Eq leaves the left-hand side unchanged, whereas on the right-hand side an inverse inequality yields
∑_{i=1}^{2^m}‖(I−Π0k−1,Ei)q‖20,Ei≤CkCinv,k 2^{−m}‖q−Π0k−1,Eq‖20,E.
One concludes by taking as m the smallest integer such that μ^2_m:=CkCinv,k 2^{−m}<1.
Based on these results, let T∗m be a refinement of T in which the element E has undergone m levels of uniform refinement by newest-vertex bisection, and has been replaced by 2^m subelements Ei. Given v∈VT, let us set
Ψ2T∗m,A(E;v,D)=∑_{i=1}^{2^m}‖(I−Π0Ei,k−1)(AEΠ0Ei,k−1∇v)‖20,Ei.
Lemma 8.4. There exist constants ρA<1 and cvi,A>0 such that for any v∈VE,k
ΨT∗m,A(E;v,D)≤ρAΨT,A(E;v,D)+cvi,AS1/2E(v,v). |
Proof. Write
‖(I−Π0Ei,k−1)(AEΠ0Ei,k−1∇v)‖0,Ei≤‖(I−Π0Ei,k−1)(AEΠ0E,k−1∇v)‖0,Ei+‖AE(Π0Ei,k−1∇v−Π0E,k−1∇v)‖0,Ei, |
sum over i, and conclude using (8.4) and the usual arguments based on the minimality of the L2-orthogonal projections.
Let us set
Ψ2T∗m(E,v,D):=Ψ2T∗m,A(E;v,D)+Ψ2T∗m,c(E;v,D) |
with
Ψ2T∗m,c(E;v,D)=∑_{i=1}^{2^m}h2Ei‖(I−Π0Ei,k)(cEΠ0Ei,kv)‖20,Ei.
Applying a bound similar to (8.2) to the successive levels of refinement, we arrive at the following result.
Lemma 8.5. There exist constants μvi<1 and cvi,1>0 such that for any v∈VE,k
ΨT∗m(E;v,D)≤μviΨT(E;v,D)+cvi,1S1/2E(v,v). |
Combining this estimate with the Lipschitz continuity property of the virtual inconsistency estimator, we obtain the following result.
Proposition 8.6 (virtual inconsistency estimator reduction on refined elements). There exist constants μvi∈(0,1), cvi,1>0 and cvi,2>0 independent of T such that for any v∈VT and w∈VT∗m, and any element E∈T which is split into 2^m children Ei∈T∗m, one has
ΨT∗m(E;w,D)≤μviΨT(E;v,D)+cvi,1S1/2E(v,v)+cvi,2|v−w|1,E. | (8.5) |
Let uT∗∈VT∗ be the solution of Problem (3.4) on the refined mesh \mathcal{T}_* . Hereafter we establish relations between the two energy errors {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u - u_\mathcal{T}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert} and {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u - u_{\mathcal{T}_*}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert} . The first result follows from Proposition 5.5 and Lemma 3.1; the proof is independent of the used polynomial degree, so we refer to [5, Proposition 5.7].
Proposition 8.7 (comparison of the energy error under refinement). For any \delta \in (0, 1] there exists a constant C_E > 0 independent of \mathcal{T} and \delta such that
\begin{align*} {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u - u_{\mathcal{T}_*}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}^2\le (1 + \delta){\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u - u_{\mathcal{T}}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}^2 - {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u_{\mathcal{T}_*} - u_\mathcal{T}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}^2 + C_E \left( 1+ \frac{1}{\delta}\right)\left(S_\mathcal{T}(u_\mathcal{T},u_\mathcal{T}) + S_{\mathcal{T}_*}(u_{\mathcal{T}_*},u_{\mathcal{T}_*}) \right). \end{align*} |
The next result extends Corollary 5.8 in [5].
Proposition 8.8 (quasi-orthogonality of energy errors without stabilization). Given any \delta \in \left(0, \frac{1}{4}\right] , there exists \gamma_{\delta} > 0 such that for any \gamma > \gamma_\delta , it holds
\begin{align*} {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u- u_{\mathcal{T}_*}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}^2 \le (1 + 4 \delta){\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u- u_{\mathcal{T}}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}^2 - {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u_{\mathcal{T}_*}- u_{\mathcal{T}}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}^2 + 2\delta \left(\Psi^2_\mathcal{T}(u_\mathcal{T}, \mathcal{D}) + \Psi^2_{\mathcal{T}_*}(u_{\mathcal{T}_*}, \mathcal{D}) \right) . \end{align*} |
Proof. Let e: = {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u -u_\mathcal{T}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert} , e_*: = {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u -u_{\mathcal{T}_*}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert} , S: = S_\mathcal{T}(u_\mathcal{T}, u_\mathcal{T}) , S_*: = S_{\mathcal{T}_*}(u_{\mathcal{T}_*}, u_{\mathcal{T}_*}) , \eta: = \eta_\mathcal{T}(u_\mathcal{T}, \mathcal{D}) , \Psi: = \Psi_\mathcal{T}(u_\mathcal{T}, \mathcal{D}) , \Psi_*: = \Psi_{\mathcal{T}_*}(u_{\mathcal{T}_*}, \mathcal{D}) and E: = {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u_\mathcal{T} - u_{\mathcal{T}_*}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert} . From Corollary 7.4 and (2.3), we get \eta^2 \le \frac{S}{c_{apost}} + \frac{e^2}{c_{apost}\; c_\mathcal{B}}, while, from Proposition 7.5, S\le \frac{C_B}{\gamma^2}\left(\eta^2 + \Psi^2\right). Combining them, we have
\begin{align*} \left(1- \frac{C_B}{\gamma^2\; c_{apost}} \right)S\le \frac{C_B}{\gamma^2}\left( \frac{e^2}{c_{apost}\;c_\mathcal{B}} + \Psi^2\right). \end{align*} |
Doing the same on \mathcal{T}_* and defining
\begin{align*} \overline{C}: = \left(1- \frac{C_B}{\; c_{apost}} \right)^{-1} C_B \max\left\{1, \frac{1}{c_{apost}\;c_\mathcal{B}}\right\} \ge \left(1- \frac{C_B}{\gamma^2\; c_{apost}} \right)^{-1} C_B \max\left\{1, \frac{1}{c_{apost}\;c_\mathcal{B}}\right\} \end{align*}
provided \gamma^2\ge1 , we get
\begin{align*} S\le \frac{\overline{C}}{\gamma^2} \left( e^2 + \Psi^2\right), \qquad S_*\le \frac{\overline{C}}{\gamma^2} \left( e_*^2 + \Psi^2_*\right). \end{align*} |
Employing Proposition 8.7, we obtain
\begin{align*} e^2_* \le (1 + \delta)e^2 -E^2 + C_E \left( 1 + \frac{1}{\delta} \right)\frac{\overline{C}}{\gamma^2}(e^2 + e_*^2 + \Psi^2 + \Psi^2_*). \end{align*} |
If we define D: = C_E\left(1 + \frac{1}{\delta} \right)\, \overline{C} ,
\begin{align*} \left( 1 - \frac{D}{\gamma^2}\right) e^2_* \le \left(1 + \delta + \frac{D}{\gamma^2}\right)e^2 -E^2 + \frac{D}{\gamma^2}( \Psi^2 + \Psi^2_*). \end{align*} |
By choosing \gamma such that
\begin{equation} \frac{1}{\gamma^2}\le \frac{\delta}{D} , \end{equation} | (8.6) |
we get
\begin{align*} (1- \delta)e^2_* \le (1+2 \delta)e^2 - E^2 + \delta(\Psi^2 +\Psi^2_*), \end{align*} |
which concludes the proof by observing that \frac{1 + 2 \delta}{1-\delta}\le 1 + 4 \delta and \frac{\delta}{1-\delta}\le 2\delta , when \delta \le \frac{1}{4} .
Let us consider a \Lambda -admissible input mesh \mathcal{T}_0 , a set of approximated data \mathcal{D} which consists of piecewise polynomials of degree k-1 on \mathcal{T}_0 , and a tolerance \epsilon > 0 . The call
\begin{align*} [\mathcal{T}, u_{\mathcal{T}}] = {\sf GALERKIN}(\mathcal{T}_0, \mathcal{D},\epsilon) \end{align*} |
produces a \Lambda -admissible refined mesh \mathcal{T} and the Galerkin approximation u_\mathcal{T}\in \mathbb{V}_\mathcal{T} , such that
\begin{align*} {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u- u_\mathcal{T}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}\le C_G \epsilon, \end{align*} |
where u is the solution of Problem (2.2) and C_G = \sqrt{c^\mathcal{B} \max\left\{C_{U_1}, C_{U_2}\right\}} , with c^\mathcal{B} defined in (2.3) and C_{U_1}, C_{U_2} in Corollary 7.6. We obtain it by iterating the sequence
\begin{align*} {\sf SOLVE} \rightarrow {\sf ESTIMATE} \rightarrow {\sf MARK} \rightarrow {\sf REFINE} . \end{align*} |
At each step, a \Lambda- admissible mesh \mathcal{T}_j and the associated solution u_j of the discrete Problem (3.4) are produced. The process stops when the condition \eta^2_{\mathcal{T}_j}(u_j, \mathcal{D}) + \Psi^2_{ \mathcal{T}_j}(u_j, \mathcal{D}) \le \epsilon^2 is reached.
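Schematically, the module GALERKIN is therefore the loop sketched below; solve, estimate, mark and refine are placeholders for the modules described next, and their implementations are not shown here.

    def galerkin(T0, D, eps, solve, estimate, mark, refine, theta=0.5, Lambda=None):
        # sketch of GALERKIN: iterate SOLVE -> ESTIMATE -> MARK -> REFINE
        # until eta_T^2 + Psi_T^2 <= eps^2
        T = T0
        while True:
            u = solve(T, D)                               # discrete solution of (3.4)
            eta2_loc, psi2_loc = estimate(T, u)           # local estimators, one value per element
            if sum(eta2_loc) + sum(psi2_loc) <= eps**2:   # stopping criterion
                return T, u
            M = mark(T, eta2_loc, psi2_loc, theta)        # Doerfler marking, see (9.1)
            T = refine(T, M, Lambda)                      # bisections + Lambda-admissibility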
In particular, the modules are defined as follows:
● [u_\mathcal{T}] = {\sf SOLVE}(\mathcal{T}, \mathcal{D}) produces the solution of Problem (3.4) with data \mathcal{D} ;
● [\{\eta_\mathcal{T}(\cdot; u_\mathcal{T}, \mathcal{D})\}, \{\Psi_\mathcal{T}(\cdot; u_\mathcal{T}, \mathcal{D})\}] = {\sf ESTIMATE}(\mathcal{T}, u_\mathcal{T}) computes the local estimators on \mathcal{T} ;
● [\mathcal{M}] = {\sf MARK}(\mathcal{T}, \{\eta_{\mathcal{T}}(\cdot; u_\mathcal{T}, \mathcal{D}) \}, \{\Psi_\mathcal{T}(\cdot; u_\mathcal{T}, \mathcal{D})\}, \theta) implements the Dörfler criterion [15] (a greedy realization is sketched after this list) and finds an almost minimal set \mathcal{M} of elements in \mathcal{T} such that
\begin{align} &\theta \left( \eta^2_\mathcal{T}(u_\mathcal{T}, \mathcal{D}) + \Psi^2_\mathcal{T}(u_\mathcal{T}, \mathcal{D}) \right) \le \sum\limits_{E\in \mathcal{M}} \left( \eta^2_\mathcal{T}(E; u_\mathcal{T}, \mathcal{D}) + \Psi^2_\mathcal{T}(E;u_\mathcal{T}, \mathcal{D}) \right), \end{align} | (9.1) |
for a given parameter \theta \in (0, 1) ;
● [\mathcal{T}_*] = {\sf REFINE}(\mathcal{T}, \mathcal{M}, \Lambda) returns a \Lambda -admissible refined mesh obtained from \mathcal{T} by suitable newest-vertex bisections of the elements in \mathcal{M} , and possibly of other elements to fulfil the \Lambda -admissibility condition.
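For completeness, a common greedy realization of the Dörfler criterion (9.1), operating on the per-element values \eta^2_\mathcal{T}(E; u_\mathcal{T}, \mathcal{D}) + \Psi^2_\mathcal{T}(E; u_\mathcal{T}, \mathcal{D}); this is one standard way to obtain an almost minimal marked set, not necessarily the exact procedure used in the experiments below:

    import numpy as np

    def dorfler_mark(local_indicators, theta):
        # local_indicators: array with eta_T^2(E) + Psi_T^2(E) for each element E;
        # returns the indices of a marked set M satisfying (9.1)
        ind = np.asarray(local_indicators, dtype=float)
        order = np.argsort(ind)[::-1]                   # elements sorted by decreasing indicator
        cumulative = np.cumsum(ind[order])
        n_marked = int(np.searchsorted(cumulative, theta * ind.sum())) + 1
        return order[:n_marked]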
It is worth adding some details about the procedure {\sf REFINE}. Let E \in \mathcal{M} be an element marked for refinement. For simplicity, hereafter let us set \eta: = \eta_\mathcal{T}(E; u_\mathcal{T}, \mathcal{D}) and \Psi: = \Psi_\mathcal{T}(E; u_\mathcal{T}, \mathcal{D}) . The refinement of E is performed as follows:
● If \eta \geq \Psi , then E is bisected once;
● If \eta < \Psi , then E is bisected m -times, where m has been introduced in Section 8.1.2 (see Lemma 8.3).
Denote by {\cal P}(E) the partition of E so obtained, and set \eta_*^2 : = \sum_{E' \in {\cal P}(E)} \eta_{{\cal P}(E)}^2(E'; u_ \mathcal{T}, \mathcal{D}) and \Psi_*^2 : = \sum_{E' \in {\cal P}(E)} \Psi_{{\cal P}(E)}^2(E'; u_ \mathcal{T}, \mathcal{D}). Then, recalling Lemmas 8.1 and 8.5, one gets when \eta \geq\Psi
\eta_* + \Psi_* \leq \frac{\mu_r+1}2 (\eta + \Psi) + c \, S^{1/2}_{\mathcal{T}(E)}( u_\mathcal{T}, u_\mathcal{T}) . |
Indeed, \Psi can be written as \Psi = \lambda \eta for a certain \lambda\in[0, 1] and
\begin{align*} \eta_*+\Psi_*&\leq \mu_r\eta +\lambda \eta+ c \, S^{1/2}_{\mathcal{T}(E)}( u_\mathcal{T}, u_\mathcal{T}) \\& = \frac{\mu_r + \lambda}{1+\lambda}(1+\lambda)\eta+ c \, S^{1/2}_{\mathcal{T}(E)}( u_\mathcal{T}, u_\mathcal{T}) \\& = \frac{(\mu_r + \lambda)}{1+\lambda}(\eta +\Psi)+ c \, S^{1/2}_{\mathcal{T}(E)}( u_\mathcal{T}, u_\mathcal{T}). \end{align*} |
In the case \eta < \Psi ,
\eta_* + \Psi_* \leq \max(\mu_r^m, \mu_{vi}) (\eta + \Psi) + c \, S^{1/2}_{\mathcal{T}(E)}( u_\mathcal{T}, u_\mathcal{T}) . |
In all cases, it holds
\begin{equation} \eta_* + \Psi_* \leq \max\Big(\frac{\mu_r+1}2, \mu_{vi}\Big) (\eta + \Psi) + c \, S^{1/2}_{\mathcal{T}(E)}( u_\mathcal{T}, u_\mathcal{T}) , \end{equation} | (9.2) |
which shows that in each marked element the sum of the two estimators is reduced under refinement, up to the stabilization term. Note that for values k = 2 or 3 of the polynomial degree of practical use, two bisections ( m = 2 ) are enough when \eta < \Psi .
This refinement may create non-admissible hanging nodes, i.e., hanging nodes with global index larger than \Lambda . To remove them and guarantee \Lambda -admissibility of {\mathcal{T}_*} , further refinements should be applied. For the realization of this technical part, we refer to Section 11.1 in [6].
The following section proves the convergence of the {\sf GALERKIN} algorithm.
Proposition 10.1 (global estimators reduction). Let u_\mathcal{T} \in \mathbb{V}_\mathcal{T} be the solution of the discrete variational Problem (3.4). There exist constants \rho \in (0, 1) and C_{ger, 1}, C_{ger, 2} > 0 independent of \mathcal{T} such that, if \mathcal{T}_* is the refinement of \mathcal{T} obtained by applying {\sf REFINE}, one has for any w \in \mathbb{V}_{\mathcal{T}_*}
\begin{equation} \begin{split} & \eta_{\mathcal{T}_*}^2(w,\mathcal{D}) + \Psi_{\mathcal{T}_*}^2(w,\mathcal{D}) \\&\le \rho \left( \eta_\mathcal{T}^2(u_\mathcal{T},\mathcal{D})+ \Psi_\mathcal{T}^2(u_\mathcal{T},\mathcal{D}) \right) + \ C_{ger,1} S_{\mathcal{T}}(u_\mathcal{T},u_\mathcal{T}) + C_{ger,2} \left| {{u_\mathcal{T}-w}} \right|_{1, \Omega}^2\, . \end{split} \end{equation} | (10.1) |
Proof. One can reach the conclusion e.g., as in [5, proof of Proposition 5.5], using the bound (9.2) in each element E marked for refinement.
Theorem 10.2 (contraction property of {\sf GALERKIN}). Let \mathcal{M}\subset \mathcal{T} be the set of the marked elements relative to the solution u_\mathcal{T} \in \mathbb{V}_\mathcal{T} of the discrete variational Problem (3.4). If \mathcal{T}_* is the refinement of \mathcal{T} obtained by applying {\sf REFINE}, then for \gamma sufficiently large there exist \alpha \in (0, 1) and \beta > 0 , \zeta > 0 such that
\begin{equation*} {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert { u -u_{\mathcal{T}_*}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}^2 + \beta\, \eta^2_{\mathcal{T}_*}(u_{\mathcal{T}_*}, \mathcal{D}) + \zeta \, \Psi^2_{\mathcal{T}_*}(u_{\mathcal{T}_*}, \mathcal{D}) \le\alpha \left( {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert { u -u_{\mathcal{T}}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}^2 + \beta \eta^2_{\mathcal{T}}(u_{\mathcal{T}}, \mathcal{D}) + \zeta \Psi^2_{\mathcal{T}}(u_\mathcal{T}, \mathcal{D}) \right). \end{equation*}
Proof. To simplify notation, we set again e = {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u -u_\mathcal{T}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert} , e_* = {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u -u_{\mathcal{T}_*}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert} , S = S_\mathcal{T}(u_\mathcal{T}, u_\mathcal{T}) , S_* = S_{\mathcal{T}_*}(u_{\mathcal{T}_*}, u_{\mathcal{T}_*}) , \eta = \eta_\mathcal{T}(u_\mathcal{T}, \mathcal{D}) , \eta_* = \eta_{\mathcal{T}_*}(u_{\mathcal{T}_*}, \mathcal{D}) , \Psi = \Psi_\mathcal{T}(u_\mathcal{T}, \mathcal{D}) , \Psi_* = \Psi_{\mathcal{T}_*}(u_{\mathcal{T}_*}, \mathcal{D}) and E = {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {u_\mathcal{T} - u_{\mathcal{T}_*}} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert} . From Proposition 8.8,
e^2_* \le ( 1+ 4 \delta)e^2 -E^2 +2\delta (\Psi^2 +\Psi^2_*),
whereas using Proposition 10.1 and Proposition 7.5, we get
\begin{align*} \eta^2_* + \Psi^2_* &\le \rho ( \eta^2 + \Psi^2) + C_{ger,1}S + \frac{C_{ger,2}}{c_\mathcal{B}} E^2 \leq \left(\rho + \frac{C_{ger,1} C_B}{\gamma^2} \right) ( \eta^2 + \Psi^2) +\frac{C_{ger,2}}{c_\mathcal{B}} E^2 . \end{align*} |
Combining them, we get
\begin{equation*} \begin{split} e^2_* + \beta \eta^2_* + \left(\beta - 2\delta \right) \Psi^2_* &\leq ( 1+ 4 \delta)e^2 +\left( \frac{\beta C_{ger,2}}{c_\mathcal{B}} -1\right)E^2 \\ & + \beta \left(\rho + \frac{C_{ger,1} C_B}{\gamma^2} \right) \eta^2 + \beta \left(\rho + \frac{C_{ger,1} C_B}{\gamma^2} + \frac{2\delta}{\beta} \right) \Psi^2 , \end{split} \end{equation*} |
which suggests choosing \beta such that
\begin{equation} \frac{\beta C_{ger,2}}{c_\mathcal{B}} = 1 . \end{equation} | (10.2) |
Next, we write
\begin{equation*} \begin{split} e^2_* + \beta \eta^2_* + \left(\beta - 2\delta \right) \Psi^2_* &\leq ( 1- \delta)e^2 + 5\delta \, e^2 \\ & + \beta \left(\rho + \frac{C_{ger,1} C_B}{\gamma^2} \right) \eta^2 + \beta \left(\rho + \frac{C_{ger,1} C_B}{\gamma^2} + \frac{2\delta}{\beta} \right) \Psi^2 , \end{split} \end{equation*} |
and we invoke Corollary 7.6 to write
e^2 \leq {c^ {\mathcal{B}}}C_{apost} \Big(1+\frac{C_B}{\gamma^2}\Big)\eta^2 + {c^ {\mathcal{B}}}C_{apost} \frac{C_B}{\gamma^2} \Psi^2 , |
which gives
\begin{equation*} \begin{split} e^2_* + \beta \eta^2_* + \left(\beta - 2\delta \right) \Psi^2_* &\leq ( 1- \delta)e^2 + \beta \left(\rho + \frac{C_{ger,1} C_B}{\gamma^2} + \frac{5\delta}{\beta} {c^ {\mathcal{B}}}C_{apost} \Big(1+\frac{C_B}{\gamma^2} \Big) \right) \eta^2 \\ & + \beta \left(\rho + \frac{C_{ger,1} C_B}{\gamma^2} + \frac{2\delta}{\beta} + \frac{5\delta}{\beta} {c^ {\mathcal{B}}}C_{apost} \frac{C_B}{\gamma^2} \right) \Psi^2 . \end{split} \end{equation*} |
We now choose \gamma and \delta such that
\rho + \frac{C_{ger,1} C_B}{\gamma^2} + \frac{5\delta}{\beta} {c^ {\mathcal{B}}}C_{apost} \Big(1+\frac{C_B}{\gamma^2} \Big) \leq \frac{1+\rho}2 |
which holds true if
\begin{equation} \frac{C_{ger,1} C_B}{\gamma^2} \leq \frac{1-\rho}4 \qquad \text{and} \qquad \frac{5\delta}{\beta} {c^ {\mathcal{B}}}C_{apost}(1+C_B) \leq \frac{1-\rho}4 \end{equation} | (10.3) |
(recall that we already assumed \gamma^2\geq 1 ). Similarly, we choose \gamma and \delta such that
\beta \left(\rho + \frac{C_{ger,1} C_B}{\gamma^2} + \frac{2\delta}{\beta} + \frac{5\delta}{\beta} {c^ {\mathcal{B}}}C_{apost} \frac{C_B}{\gamma^2} \right) \leq (\beta - 2\delta) \frac{1+\rho}2 , |
which holds true if \gamma satisfies the first condition in (10.3), whereas \delta satisfies
\begin{equation} \left( 2 + 5 {c^ {\mathcal{B}}}C_{apost} {C_B} + \frac{1+\rho}{\beta} \right)\delta \leq \frac{1-\rho}4 . \end{equation} | (10.4) |
This proves the result, if we define \zeta : = \beta - 2\delta , with \beta defined by (10.2) and \delta < \frac{\beta}2 , and
\begin{equation} \alpha : = \min \left(1-\delta, \frac{1+\rho}2 \right) . \end{equation} | (10.5) |
The conditions on \gamma and \delta which lead to the desired estimate are given in (8.6), (10.3) and (10.4).
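Although the constants entering (10.2)–(10.5) are not computable in general, the logic of the parameter selection can be illustrated numerically. The following Python sketch uses purely hypothetical values for \rho , C_{ger,1} , C_{ger,2} , c_\mathcal{B} , c^\mathcal{B} , C_B and C_{apost} (none of them is provided here) and produces one admissible choice of \beta , \gamma , \delta , and hence \alpha and \zeta ; the additional constraint (8.6) on \delta is not reproduced.

```python
# Purely illustrative (hypothetical) values of the constants of the analysis:
# rho from Proposition 10.1, C_ger1/C_ger2 from (10.1), c_B_low/c_B_up the
# coercivity/continuity constants of the bilinear form, C_apost from Corollary 7.6.
rho, C_ger1, C_ger2 = 0.7, 2.0, 3.0
c_B_low, c_B_up, C_B, C_apost = 1.0, 2.0, 2.0, 4.0

# (10.2): beta is fixed so that the coefficient of E^2 vanishes.
beta = c_B_low / C_ger2

# First condition in (10.3): gamma^2 large enough (and >= 1, as assumed earlier).
gamma2 = max(1.0, 4.0 * C_ger1 * C_B / (1.0 - rho))

# Second condition in (10.3) and condition (10.4) are upper bounds on delta;
# the further constraint (8.6) is not reproduced in this sketch.
delta_103 = (1.0 - rho) * beta / (20.0 * c_B_up * C_apost * (1.0 + C_B))
delta_104 = (1.0 - rho) / (4.0 * (2.0 + 5.0 * c_B_up * C_apost * C_B + (1.0 + rho) / beta))
delta = 0.99 * min(delta_103, delta_104, beta / 2.0)   # also enforce delta < beta/2

# (10.5) and zeta := beta - 2*delta.
alpha = min(1.0 - delta, (1.0 + rho) / 2.0)
zeta = beta - 2.0 * delta

print(f"beta = {beta:.4f}, gamma^2 >= {gamma2:.2f}, delta = {delta:.6f}")
print(f"contraction factor alpha = {alpha:.4f}, zeta = {zeta:.4f}")
assert 0.0 < alpha < 1.0 and zeta > 0.0
```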
The aim of this numerical test is to confirm the convergence of our {\sf GALERKIN} algorithm. We consider a classical test on an L-shaped domain \Omega = (-1, 1)^2 \setminus (-1, 0)^2 and the reaction-diffusion problem (2.1), with polynomial coefficients of order one for the case k = 2 , i.e.,
\begin{align*} \mathit{\boldsymbol{A}} = \begin{bmatrix} 2+ y & 0\\ 0 & 2+ x \end{bmatrix}, \qquad c = x+y+3, \end{align*}
and polynomial coefficients of order two for the case k = 3 ,
\begin{align*} \mathit{\boldsymbol{A}} = \begin{bmatrix} 2+ y^2 & 0\\ 0 & 2+ x^2 \end{bmatrix}, \qquad c = x^2+ y^2 . \end{align*}
The forcing term f and the Dirichlet boundary conditions are chosen so that the exact solution of the problem is
\begin{equation} u_{\text {ex}}(r, \beta) = r^{2/3} \sin \left(\frac{2}{3}\left(\beta+ \frac{\pi}{2}\right)\right), \end{equation} | (11.1) |
where r and \beta are the polar coordinates. It is possible to prove that there exists p \in \left(\frac{2}{k+1}, \frac{2}{k+ 2/3}\right) such that u_\text{ex}\in W_p^k(\Omega) when p\ge 1 , and u_\text{ex}\in L_p^k(\Omega) when p\in (0, 1) , where W_p^k(\Omega) and L_p^k(\Omega) denote Sobolev and Lipschitz spaces, respectively. Then, according to the theory of approximation classes [16,17], we expect the maximal rate of convergence, i.e., {\tt Ndofs}^{-k/2} , where {\tt Ndofs} is the number of degrees of freedom. We apply the adaptive algorithm described in Section 9, taking \theta = 0.5 in the marking strategy (9.1) (a possible implementation of the marking step is sketched below). In order to compute the VEM error, we consider the computable quantity:
\begin{align*} {\tt H^1}\text{-}{\tt error}: = \frac{\left(\sum_{E\in \mathcal{T}}\left\| {{\nabla(u_{\text{ex}}- \Pi^\nabla_E u_ \mathcal{T} )}} \right\|^2\right)^{1/2}}{\left\| {{\nabla u_{\text{ex}}}} \right\|_{0, \Omega}}. \end{align*}
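As for the marking step, we do not restate (9.1) here; assuming it is a Dörfler-type bulk criterion (a standard choice in this context, and an assumption on our part), a minimal Python sketch of the selection of \mathcal{M} with \theta = 0.5 could read as follows. The array name eta2 , the grouping of the local contributions of \eta_\mathcal{T} and \Psi_\mathcal{T} into a single indicator, and the convention of applying \theta^2 to squared quantities are illustrative choices, not taken from the paper.

```python
import numpy as np

def dorfler_mark(eta2, theta=0.5):
    """Return indices of a quasi-minimal set M of elements with
    sum(eta2[M]) >= theta**2 * sum(eta2).
    eta2: array of element-wise squared error-indicator contributions."""
    order = np.argsort(eta2)[::-1]           # largest contributions first
    cumulative = np.cumsum(eta2[order])
    target = theta**2 * eta2.sum()
    n_marked = int(np.searchsorted(cumulative, target) + 1)
    return order[:n_marked]

# Toy usage with random indicators (illustrative only, not data from the paper).
rng = np.random.default_rng(0)
eta2 = rng.random(50)**2
marked = dorfler_mark(eta2, theta=0.5)
print(f"marked {len(marked)} of {eta2.size} elements")
```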
In Figure 6, we represent the evolution of the {\tt H^1}\text{-}{\tt error} and of the estimator terms \eta_\mathcal{T}(u_ \mathcal{T}, \mathcal{D}) and \Psi_ \mathcal{T}(u_ \mathcal{T}, \mathcal{D}) , which confirms the results of Corollary 7.6. Furthermore, we notice that, after a transient phase, the error and the estimator terms asymptotically attain the theoretical optimal decay rate {\tt Ndofs}^{-1.0} (for the case k = 2 ) and {\tt Ndofs}^{-1.5} (for the case k = 3 ). In Figure 7, we depict the meshes after 20, 35 and 50 loops of the adaptive algorithm in the case k = 2 . We highlight the presence of hanging nodes in the meshes at the different refinement loops.
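The asymptotic rates quoted above can be extracted from the history of the adaptive loop by a least-squares fit in log–log scale. The Python snippet below is a minimal sketch using made-up ( {\tt Ndofs} , error) pairs that mimic the k = 2 case; it is not the data plotted in Figure 6.

```python
import numpy as np

def empirical_rate(ndofs, errors):
    """Least-squares slope of log(error) vs log(Ndofs):
    error ~ C * Ndofs**rate, so an optimal method gives rate close to -k/2."""
    slope, _ = np.polyfit(np.log(ndofs), np.log(errors), 1)
    return slope

# Made-up history mimicking k = 2 (expected rate -1.0); not the actual data.
ndofs = np.array([200, 800, 3200, 12800, 51200], dtype=float)
errors = 5.0 / ndofs                 # decays like Ndofs^{-1}
print(f"estimated rate: {empirical_rate(ndofs, errors):.2f}")   # about -1.00
```

Equivalently, the rate between two consecutive iterations can be computed as \log(e_i/e_{i+1})/\log(N_{i+1}/N_i) , where e_i and N_i denote the error and the number of degrees of freedom at iteration i .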
In this paper, we presented an adaptive VEM of order k\ge 2 on nonconforming triangular meshes. In the analysis, the space \mathbb{V}^0_ \mathcal{T} of continuous, piecewise polynomial functions of degree k on the triangulation \mathcal{T} plays a fundamental role. Indeed, it is contained in the global VEM space, \mathbb{V}^0_ \mathcal{T} \subseteq \mathbb{V}_ \mathcal{T} , and it guarantees a quasi-orthogonality property for any refinement {\mathcal{T}_*} of \mathcal{T} , since \mathbb{V}^0_ \mathcal{T} \subseteq \mathbb{V}^0_ {\mathcal{T}_*} . By pivoting on this space, we proved an a posteriori error estimate which does not contain the stabilization term appearing in the VEM discrete formulation. Consequently, we established the convergence of the adaptive VEM algorithm by a contraction argument.
Extensions of our work include:
● The complexity and optimality analysis of the two-step algorithm AVEM, mentioned in the Introduction, which accounts for non-polynomial data;
● The study of a variant of the adaptive algorithm in which the polynomial degree k may take large values, in the spirit of a p -version;
● The treatment of more general polygonal meshes which, as remarked in [5], appears non-trivial. The main difficulties lie in the choice of a suitable refinement strategy to replace the newest-vertex bisection used here, and in the lack of a conforming space \mathbb{V}^0_ \mathcal{T} for general polygonal meshes.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors performed this research in the framework of the Italian MIUR Award "Dipartimenti di Eccellenza 2018-2022" granted to the Department of Mathematical Sciences, Politecnico di Torino (CUP: E11G18000350001). CC was partially supported by the Italian MIUR through the PRIN grant 201752HKH8; DF thanks the INdAM-GNCS project "Metodi numerici per lo studio di strutture geometriche parametriche complesse" (CUP: E53C22001930001). The authors are members of the Italian INdAM-GNCS research group.
The authors declare no conflicts of interest.
      | m=1    | m=2
k=2   | 1.0000 | 0.3153
k=3   | 1.0000 | 0.6648