Citation: Tomoyuki Suzuki, Keisuke Takasao, Noriaki Yamazaki. New approximate method for the Allen–Cahn equation with double-obstacle constraint and stability criteria for numerical simulations[J]. AIMS Mathematics, 2016, 1(3): 288-317. doi: 10.3934/Math.2016.3.288
For each ε∈(0,1], we consider the following Allen-Cahn equation with double obstacle constraint:
$$u^\varepsilon_t - \Delta u^\varepsilon + \frac{\partial I_{[-1,1]}(u^\varepsilon)}{\varepsilon^2} \ni \frac{u^\varepsilon}{\varepsilon^2} \quad \text{in } Q:=(0,T)\times\Omega, \tag{1.1}$$
$$\frac{\partial u^\varepsilon}{\partial\nu} = 0 \quad \text{on } \Sigma:=(0,T)\times\Gamma, \tag{1.2}$$
$$u^\varepsilon(0) = u^\varepsilon_0 \quad \text{a.e. in } \Omega, \tag{1.3}$$
where 0<T<∞, Ω is a bounded domain in RN (1≤N<+∞) with Lipschitz boundary Γ:=∂Ω, ν is an outward normal vector on Γ and uε0 is a given initial value. Also, the double obstacle constraint ∂I[−1,1](⋅) is the subdifferential of the indicator function I[−1,1](⋅) on the closed interval [-1, 1] defined by
$$I_{[-1,1]}(z) := \begin{cases} 0, & \text{if } z\in[-1,1],\\ +\infty, & \text{otherwise}. \end{cases} \tag{1.4}$$
More precisely, ∂I[−1,1](⋅) is a set-valued mapping defined by
$$\partial I_{[-1,1]}(z) := \begin{cases} \emptyset, & \text{if } z<-1 \text{ or } z>1,\\ [0,\infty), & \text{if } z=1,\\ \{0\}, & \text{if } -1<z<1,\\ (-\infty,0], & \text{if } z=-1. \end{cases} \tag{1.5}$$
The Allen--Cahn equation was proposed to describe the macroscopic motion of phase boundaries. In this physical context, the function uε=uε(t,x) in (P)ε:={(1.1), (1.2), (1.3)} is a non-conserved order parameter that characterizes the physical structure. Indeed, let v=v(t,x) be the local ratio of the volume of a pure liquid relative to that of a pure solid at time t and position x∈Ω, defined by
$$v(t,x) := \lim_{r\downarrow 0}\frac{\text{volume of pure liquid in } B_r(x) \text{ at time } t}{|B_r(x)|},$$
where Br(x) is the ball in RN with center x and radius r and |Br(x)| denotes its volume. Setting uε(t,x):=2v(t,x)−1 for any (t,x)∈Q, we then observe that uε(t,x) is the non-conserved order parameter that characterizes the physical structure:
$$\begin{cases} u^\varepsilon(t,x)=1 & \text{for the pure liquid region},\\ u^\varepsilon(t,x)=-1 & \text{for the pure solid region},\\ -1<u^\varepsilon(t,x)<1 & \text{for the mixed region}. \end{cases}$$
Therefore, uε has the two threshold values 1 and −1, which is why the constraint ∂I[−1,1](⋅) appears in (1.1).
There is a vast literature on the Allen--Cahn equation with and without constraint ∂I[−1,1](⋅). For these studies, we refer to [1,3,6,7,12,8,9,20,22,26]. In particular, Bronsard and Kohn [6] studied the singular limit of (P)ε as ε→0 with a bistable potential W having both wells of equal depth and without constraint ∂I[−1,1](⋅). Also, Chen and Elliott [7] considered the asymptotic behavior of the solution to (P)ε as ε→0. However, there were no details in [7] about elements of the constraint ∂I[−1,1](uε) as ε→0. Recently, Farshbaf-Shaker et al. [8] provided results concerning properties of elements of ∂I[−1,1](uε) for the problem (P)ε as ε→0.
Also, there is a vast literature on the numerical analysis of the Allen--Cahn equation without constraint ∂I[−1,1](⋅). For these studies, we refer to [10,11,24,27,28]. However, we observe that it is hard to perform a numerical experiment of (P)ε, because the double obstacle constraint ∂I[−1,1](⋅) is a multivalued function (cf. (1.5)). Recently, Blank et al. [3] proposed as a numerical method the primal-dual active set algorithm for the local and nonlocal Allen--Cahn variational inequalities with constraint. Also, Farshbaf-Shaker et al. [8] obtained results for the limit of a solution uε and an element of ∂I[−1,1](uε), called the Lagrange multiplier, to (P)ε as ε→0. Moreover, they provided the numerical experiment to (P)ε via the Lagrange multiplier for a one-dimensional space for sufficiently small ε∈(0,1] in [9].
Approximate methods are used to obtain a priori estimates of the solutions to (P)ε (cf. [18,21,22,23]). Indeed, the Yosida approximation of ∂I[−1,1](⋅) is often used: for δ>0, the Yosida approximation (∂I[−1,1])δ(⋅) of ∂I[−1,1](⋅) is defined by
$$(\partial I_{[-1,1]})_\delta(z) = \frac{[z-1]^+ - [-1-z]^+}{\delta}, \quad \forall z\in\mathbb{R}, \tag{1.6}$$
where [z]+ is the positive part of z. Then, by considering approximate problems of (P)ε and letting δ↓0, we can obtain estimates of solutions to (P)ε.
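For later reference, the Yosida approximation (1.6) can be evaluated in a few lines of code. The following is a minimal Python sketch; the function name `yosida_dI` is ours, not from the paper.

```python
# Minimal sketch of the Yosida approximation (1.6); the function name is ours.
def yosida_dI(z, delta):
    """(∂I_[-1,1])_δ(z) = ([z-1]^+ - [-1-z]^+) / δ."""
    return (max(z - 1.0, 0.0) - max(-1.0 - z, 0.0)) / delta   # [s]^+ = max(s, 0)

print(yosida_dI(0.5, 0.01))   # 0.0   (vanishes on [-1, 1])
print(yosida_dI(1.2, 0.01))   # ~20.0 (linear growth with slope 1/δ outside [-1, 1])
```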
Also, using the Yosida approximation (∂I[−1,1])δ(⋅) of ∂I[−1,1](⋅), numerical experiments for the approximate problem of (P)ε have often been carried out. Recently, in [25], the authors clarified the role of the stability condition in providing stable numerical experiments of the approximate problem of (P)ε via the Yosida approximation in the one-dimensional case. However, we observed that the numerical solution to the approximate equation takes values outside [-1, 1] (cf. [25, Table 1]).
Table 1. Values of ui for (DE)εδ and (YDE)εδ (cf. Remark 3.3 and [25, Table 1]).

number of iterations i | value of ui for (DE)εδ | value of ui for (YDE)εδ
0 | 0.100000 | 0.100000 |
1 | 0.101000 | 0.101000 |
2 | 0.102010 | 0.102010 |
3 | 0.103030 | 0.103030 |
4 | 0.104060 | 0.104060 |
5 | 0.105101 | 0.105101 |
⋮ | ⋮ | ⋮
224 | 0.928940 | 0.928940 |
225 | 0.938230 | 0.938230 |
226 | 0.947612 | 0.947612 |
227 | 0.957088 | 0.957088 |
228 | 0.966659 | 0.966659 |
229 | 0.976325 | 0.976325 |
230 | 0.986089 | 0.986089 |
231 | 0.995950 | 0.995950 |
232 | 0.999959 | 1.005909 |
233 | 1.000000 | 1.010059 |
234 | 1.000000 | 1.010101 |
235 | 1.000000 | 1.010101 |
236 | 1.000000 | 1.010101 |
237 | 1.000000 | 1.010101 |
⋮ | ⋮ | ⋮
In this paper, we propose a new approximate method for ∂I[−1,1](⋅) so that the numerical solutions to the approximate equation take values in [-1, 1]. Indeed, for each δ∈(0,1), we approximate ∂I[−1,1](⋅) by the following function Kδ(⋅), defined by:
$$K_\delta(z) := \begin{cases} \dfrac{z-1+\delta}{\delta}, & \text{if } z>1-\delta,\\[1mm] 0, & \text{if } z\in[-1+\delta,\,1-\delta],\\[1mm] \dfrac{z+1-\delta}{\delta}, & \text{if } z<-1+\delta. \end{cases} \tag{1.7}$$
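A minimal Python sketch of Kδ is given below for comparison with the Yosida approximation above; the function name `K_delta` is ours.

```python
# Minimal sketch of the approximation K_δ in (1.7); the function name is ours.
def K_delta(z, delta):
    """K_δ(z) as defined in (1.7)."""
    if z > 1.0 - delta:
        return (z - 1.0 + delta) / delta
    if z < -1.0 + delta:
        return (z + 1.0 - delta) / delta
    return 0.0

# Unlike the Yosida approximation, K_δ is already nonzero on (1-δ, 1]:
print(K_delta(1.00, 0.01))   # 1.0  (K_δ(1) = 1)
print(K_delta(0.95, 0.01))   # 0.0  (inside [-1+δ, 1-δ])
```

In particular, Kδ(±1) = ±1, so u ≡ ±1 are equilibria of the approximate equation (cf. Remark 3.2 below).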
Then, for each ε∈(0,1] and δ∈(0,1), we study the following approximate problem of (P)ε, denoted by (P)εδ:
$$(\mathrm{P})_{\varepsilon\delta}\quad \begin{cases} (u_{\varepsilon\delta})_t - \Delta u_{\varepsilon\delta} + \dfrac{K_\delta(u_{\varepsilon\delta})}{\varepsilon^2} = \dfrac{u_{\varepsilon\delta}}{\varepsilon^2} & \text{in } Q=(0,T)\times\Omega,\\[1mm] \dfrac{\partial u_{\varepsilon\delta}}{\partial\nu} = 0 & \text{on } \Sigma=(0,T)\times\Gamma,\\[1mm] u_{\varepsilon\delta}(0) = u^\varepsilon_0 & \text{a.e. in } \Omega. \end{cases}$$
The aim is to show the validity of our approximate method defined by (1.7). Moreover, for each ε>0 and δ>0, we present criteria for the standard explicit finite difference scheme to ensure stable numerical experiments of (P)εδ in a two-dimensional (2D) space. To this end, we consider the following ODE problem, denoted by (E)εδ:
$$(\mathrm{E})_{\varepsilon\delta}\quad \begin{cases} (u_{\varepsilon\delta})_t + \dfrac{K_\delta(u_{\varepsilon\delta})}{\varepsilon^2} = \dfrac{u_{\varepsilon\delta}}{\varepsilon^2} & \text{in } \mathbb{R},\ \text{for } t\in(0,T),\\[1mm] u_{\varepsilon\delta}(0) = u^\varepsilon_0 & \text{in } \mathbb{R}. \end{cases}$$
Note that the unique solution uεδ(t) to (E)εδ takes its values in (0,1) (resp. (−1,0)) for all t∈[0,T] if the initial value uε0 lies in (0,1) (resp. (−1,0)) (see (i) of Corollary 2.1). Also, note that uεδ≡0 and uεδ≡±1 are stationary solutions to (E)εδ (see Remark 3.2). We then find criteria that yield stable numerical experiments of (E)εδ, and we perform some numerical simulations of (E)εδ. Finally, taking into account the theoretical results for (E)εδ, we derive criteria ensuring stable numerical experiments of the 2D PDE problem (P)εδ. The main results of this paper are the following:
(a) We show the existence-uniqueness of a solution uεδ to (P)εδ on [0, T] for each ε∈(0,1] and δ∈(0,1). Also, we prove that uεδ converges to the solution uε to (P)ε on [0, T] as δ→0.
(b) We present the criteria that ensure stable numerical simulations of the ODE problem (E)εδ. Also, we provide numerical experiments to (E)εδ for small ε∈(0,1] and δ∈(0,1).
(c) We consider instances when Ω is a bounded domain in R2. Then, we give the criteria yielding stable numerical simulations of the PDE problem (P)εδ. Also, we provide the numerical experiments of (P)εδ for small ε∈(0,1] and δ∈(0,1).
The plan of this paper is as follows. In Section 2, we discuss the validity of our approximate method defined by (1.7); we prove the main result (Theorem 2.1) concerning the solvability and convergence of (P)εδ, corresponding to item (a) above. In Section 3, we consider (E)εδ numerically: we prove the main result (Theorem 3.1) corresponding to item (b) above, and provide several numerical experiments for (E)εδ for small ε∈(0,1] and δ∈(0,1). In the final Section 4, we consider (P)εδ in a 2D space from the viewpoint of numerical analysis: we prove the main result (Theorem 4.1) corresponding to item (c) above and provide numerical experiments of (P)εδ for small ε∈(0,1] and δ∈(0,1).
Notation and basic assumptions
Throughout this paper, we consider the usual real Hilbert space structure denoted by H:=L2(Ω). The inner product in H is denoted by (⋅,⋅)H. We also write V:=H1(Ω).
In Section 2, we use some techniques of proper (that is, not identically equal to infinity), l.s.c. (lower semi-continuous), convex functions and their subdifferentials, which are useful in the systematic study of variational inequalities. Therefore, let us outline some notation and definitions. For a proper, l.s.c., convex function ψ:H→R∪{+∞}, the effective domain D(ψ) is defined by
$$D(\psi) := \{z\in H;\ \psi(z) < \infty\}.$$
The subdifferential of ψ is a possibly multi-valued operator in H and is defined by z∗∈∂ψ(z) if and only if
$$z\in D(\psi) \quad\text{and}\quad (z^*,\,y-z)_H \le \psi(y) - \psi(z) \quad \text{for all } y\in H.$$
Next, we recall the notion of convergence for convex functions developed by Mosco [19].
Definition 1.1 (cf. [19]). Let ψ, ψn (n∈N) be proper, l.s.c., convex functions on H. Then, we say that ψn converges to ψ on H in the sense of Mosco [19] as n→∞, if the following two conditions are satisfied:
(M1) for any subsequence {ψnk}⊂{ψn}, if zk→z weakly in H as k→∞, then
$$\liminf_{k\to\infty}\psi_{n_k}(z_k) \ge \psi(z);$$
(M2) for any z∈D(ψ), there is a sequence {zn} in H such that
$$z_n \to z \ \text{ in } H \ \text{ as } n\to\infty \quad\text{and}\quad \lim_{n\to\infty}\psi_n(z_n) = \psi(z).$$
For various properties and related notions of the proper, l.s.c., convex function ψ and its subdifferential ∂ψ, we refer to the monograph by Brézis [4].
Next we present condition (A) on the initial data, which we assume throughout the paper.
(A) $u^\varepsilon_0 \in K := \{z\in V;\ |z|\le 1 \text{ a.e. in } \Omega\}$.
We begin by giving a rigorous definition of solutions to (P)εδ (ε∈(0,1] and δ∈(0,1)).
Definition 2.1. Let ε∈(0,1], δ∈(0,1) and uε0∈H. Then, a function uεδ:[0,T]→H is called a solution to (P)εδ on [0, T], if the following conditions are satisfied:
(i) uεδ∈W1,2(0,T;H)∩L∞(0,T;V).
(ii) The following variational identity holds:
$$((u_{\varepsilon\delta})_t(t),\,z)_H + \int_\Omega \nabla u_{\varepsilon\delta}(t)\cdot\nabla z\,dx + \Bigl(\frac{K_\delta(u_{\varepsilon\delta}(t))}{\varepsilon^2},\,z\Bigr)_H = \Bigl(\frac{u_{\varepsilon\delta}(t)}{\varepsilon^2},\,z\Bigr)_H \quad \text{for all } z\in V \text{ and a.e. } t\in(0,T).$$
(iii) uεδ(0)=uε0 in H.
Also, we give a rigorous definition of solutions to the problem (P)ε (ε∈(0,1]).
Definition 2.2. Let ε∈(0,1] and uε0∈H. Then, a function uε:[0,T]→H is called a solution to (P)ε on [0, T], if the following conditions are satisfied:
(i) uε∈W1,2(0,T;H)∩L∞(0,T;V), and |uε|≤1 a.e. in Q.
(ii) The following variational inequality holds:
$$\Bigl(u^\varepsilon_t(t) - \frac{1}{\varepsilon^2}u^\varepsilon(t),\ u^\varepsilon(t)-z\Bigr)_H + \int_\Omega \nabla u^\varepsilon(t)\cdot\bigl(\nabla u^\varepsilon(t)-\nabla z\bigr)\,dx \le 0 \quad \text{for all } z\in K \text{ and a.e. } t\in(0,T).$$
(iii) uε(0)=uε0 in H.
Here we mention the result concerning the existence-uniqueness of solutions for (P)ε on [0, T] for each ε∈(0,1].
Proposition 2.1 (cf. [5,15]). Assume (A). Then, for each ε∈(0,1] and uε0∈K, there exists a unique solution uε to (P)ε on [0, T] in the sense of Definition 2.2.
Proof. Applying the abstract theory of nonlinear evolution equations (cf. [5,15]), we can prove this Proposition 2.1. Indeed, for each ε∈(0,1], we define a functional φε on H by
$$\varphi^\varepsilon(z) := \begin{cases} \dfrac{1}{2}\displaystyle\int_\Omega |\nabla z|^2\,dx + \dfrac{1}{\varepsilon^2}\displaystyle\int_\Omega I_{[-1,1]}(z(x))\,dx, & \text{if } z\in V \text{ with } I_{[-1,1]}(z)\in L^1(\Omega),\\[1mm] \infty, & \text{otherwise}. \end{cases} \tag{2.1}$$
Clearly, φε is proper, l.s.c. and convex on H with the effective domain $D(\varphi^\varepsilon)=\{z\in V;\ I_{[-1,1]}(z(\cdot))\in L^1(\Omega)\}$. Then, the problem (P)ε can be rewritten as an abstract evolution equation of the form:
$$(\mathrm{CP})_\varepsilon\quad \begin{cases} \dfrac{d}{dt}u^\varepsilon(t) + \partial\varphi^\varepsilon(u^\varepsilon(t)) - \dfrac{1}{\varepsilon^2}u^\varepsilon(t) \ni 0 & \text{in } H,\ \text{for } t>0,\\[1mm] u^\varepsilon(0) = u^\varepsilon_0 & \text{in } H. \end{cases}$$
Therefore, applying the Lipschitz perturbation theory of abstract evolution equations (cf. [5,15]), we demonstrate the existence-uniqueness of a solution uε to (CP)ε, hence (P)ε, on [0, T] for each ε∈(0,1] in the sense of Definition 2.2. Thus, the proof of Proposition 2.1 is complete.
Now, we mention the first main result concerning the solvability and convergence of solutions to (P)εδ on [0, T].
Theorem 2.1. Assume (A). Then, for each ε∈(0,1], δ∈(0,1) and uε0∈K, there exists a unique solution uεδ to (P)εδ on [0, T] in the sense of Definition 2.1. Moreover, the following statements hold:
(i) If the initial value uε0(x) takes the value in [0,1] (resp. [−1,0]) for a.e. x∈Ω, the solution uεδ(t,x) also takes the value in [0,1] (resp. [−1,0]) for a.e. (t,x)∈Q.
(ii) uεδ converges to the unique solution uε of (P)ε on [0, T] in the sense that
uεδ→uε in C([0,T];H) as δ→0. | (2.2) |
To prove Theorem 2.1, we define a primitive ˆKδ of Kδ by
$$\hat K_\delta(z) := \begin{cases} \dfrac{1}{2\delta}z^2 - \dfrac{1-\delta}{\delta}z + \dfrac{(1-\delta)^2}{2\delta}, & \text{if } z>1-\delta,\\[1mm] 0, & \text{if } z\in[-1+\delta,\,1-\delta],\\[1mm] \dfrac{1}{2\delta}z^2 + \dfrac{1-\delta}{\delta}z + \dfrac{(1-\delta)^2}{2\delta}, & \text{if } z<-1+\delta. \end{cases} \tag{2.3}$$
Clearly, ˆKδ(⋅) is continuous and convex on R with ∂ˆKδ(⋅)=Kδ(⋅) in R, where Kδ(⋅) is the function defined by (1.7). Then, we observe from (1.4) and (2.3) that the following lemma holds.
Lemma 2.1 (cf. [2, Section 5], [4, Chapter 2], [17, Section 2]). Let I[−1,1](⋅) and ˆKδ(⋅) be convex functions defined by (1.4) and (2.3), respectively. Then,
$$\hat K_\delta(\cdot) \to I_{[-1,1]}(\cdot) \ \text{ on } \mathbb{R} \ \text{ in the sense of Mosco [19] as } \delta\to 0. \tag{2.4}$$
Proof. We first check the condition (M1). Let {δk}⊂(0,1), {zk}⊂R and z∈R be such that
$$\delta_k \downarrow 0 \quad\text{and}\quad z_k \to z \ \text{ weakly in } \mathbb{R} \ \text{ as } k\to+\infty.$$
As R is a one-dimensional space, we observe that
$$z_k \to z \ \text{ in } \mathbb{R} \ \text{ as } k\to+\infty. \tag{2.5}$$
If z∈[−1,1], we easily observe from (1.4) and (2.3) that
$$\liminf_{k\to+\infty}\hat K_{\delta_k}(z_k) \ge I_{[-1,1]}(z) = 0.$$
Now we assume that z>1. Then, there exists a small positive constant μ such that
$$z \ge 1+\mu > 1.$$
Then, from (2.5), there exists a number kμ∈N satisfying
$$|z_k - z| < \frac{\mu}{2} \quad \text{for all } k\ge k_\mu.$$
Therefore, we have
$$z_k > z - \frac{\mu}{2} \ge 1 + \frac{\mu}{2} > 1-\delta_k \quad \text{for all } k\ge k_\mu. \tag{2.6}$$
Hence, we infer from (2.3) and (2.6) that
$$\hat K_{\delta_k}(z_k) \ge \hat K_{\delta_k}\Bigl(1+\frac{\mu}{2}\Bigr) = \frac{1}{2\delta_k}\Bigl(1+\frac{\mu}{2}\Bigr)^2 - \frac{1-\delta_k}{\delta_k}\Bigl(1+\frac{\mu}{2}\Bigr) + \frac{(1-\delta_k)^2}{2\delta_k} = \frac{\mu^2}{8\delta_k} + \frac{\mu}{2} + \frac{\delta_k}{2} \to +\infty \ \text{ as } k\to+\infty.$$
Thus, we observe that
$$\liminf_{k\to+\infty}\hat K_{\delta_k}(z_k) = +\infty = I_{[-1,1]}(z).$$
Similarly, if z<−1, we have:
$$\liminf_{k\to+\infty}\hat K_{\delta_k}(z_k) = +\infty = I_{[-1,1]}(z).$$
Thus (M1) holds.
Next we establish (M2). Assume that δn↓0 as n→+∞ and z∈D(I[−1,1]), namely, z∈[−1,1]. Put zn=z for all n∈N. Then, we easily observe from (2.3) that
$$\lim_{n\to+\infty}\hat K_{\delta_n}(z_n) = 0 = I_{[-1,1]}(z).$$
Therefore (M2) holds.
This completes the proof of Lemma 2.1.
We observe from Lemma 2.1 and the general results on Mosco convergence (cf. [2, Theorem 3.66], [17, Theorem 8.1], [14, Proposition.9]) that
$$\partial\hat K_\delta(\cdot) = K_\delta(\cdot) \ \text{ converges to } \ \partial I_{[-1,1]}(\cdot) \ \text{ in the sense of resolvent convergence as } \delta\to 0.$$
Therefore, the function Kδ(⋅) defined by (1.7) is a suitable approximation of ∂I[−1,1](⋅).
Taking Lemma 2.1 into account, we have the following lemma; its detailed proof is omitted here.
Lemma 2.2 (cf. [2, Section 5], [4, Chapter 2], [17, Section 2]). Let ε∈(0,1] and δ∈(0,1), and let ˆKδ(⋅) be the convex function defined by (2.3). Define
$$\varphi^\varepsilon_\delta(z) := \begin{cases} \dfrac{1}{2}\displaystyle\int_\Omega |\nabla z|^2\,dx + \dfrac{1}{\varepsilon^2}\displaystyle\int_\Omega \hat K_\delta(z(x))\,dx, & \text{if } z\in V,\\[1mm] \infty, & \text{otherwise}. \end{cases} \tag{2.7}$$
Then, φεδ is proper, l.s.c. and convex on H with the effective domain D(φεδ)=V. Moreover,
$$\varphi^\varepsilon_\delta(\cdot) \to \varphi^\varepsilon(\cdot) \ \text{ on } H \ \text{ in the sense of Mosco [19] as } \delta\to 0, \tag{2.8}$$
where φε is the proper, l.s.c., convex functional on H defined by (2.1).
Now let us prove Theorem 2.1 considering the solvability and convergence of solutions to (P)εδ on [0, T].
Proof of Theorem 2.1. By an argument similar to that for (P)ε (cf. Proposition 2.1), we can show the existence-uniqueness of solutions to (P)εδ on [0, T] for each ε∈(0,1] and δ∈(0,1). Indeed, we infer from Lemma 2.2 that φεδ is proper, l.s.c. and convex on H with the effective domain D(φεδ)=V. Also, we observe that problem (P)εδ can be rewritten as an abstract evolution equation of the form:
$$(\mathrm{CP})_{\varepsilon\delta}\quad \begin{cases} \dfrac{d}{dt}u_{\varepsilon\delta}(t) + \partial\varphi^\varepsilon_\delta(u_{\varepsilon\delta}(t)) - \dfrac{1}{\varepsilon^2}u_{\varepsilon\delta}(t) = 0 & \text{in } H,\ \text{for } t>0,\\[1mm] u_{\varepsilon\delta}(0) = u^\varepsilon_0 & \text{in } H. \end{cases}$$
Therefore, applying the Lipschitz perturbation theory of abstract evolution equations (cf. [5,15]), we can show the existence-uniqueness of a solution uεδ to (CP)εδ, hence (P)εδ, on [0, T] for each ε∈(0,1] and δ∈(0,1) in the sense of Definition 2.1.
Now we show (i) by the maximum principle arguments (cf. [13]). We present the proof only for initial values uε0(x)∈[0,1] for a.e. x∈Ω, because for uε0(x)∈[−1,0] the same arguments apply.
Assigning [uεδ(τ)−1]+ to z in (ii) of Definition 2.1, we get
$$\frac{1}{2}\frac{d}{d\tau}\bigl|[u_{\varepsilon\delta}(\tau)-1]^+\bigr|_H^2 + \bigl|\nabla[u_{\varepsilon\delta}(\tau)-1]^+\bigr|_H^2 + \Bigl(\frac{K_\delta(u_{\varepsilon\delta}(\tau))}{\varepsilon^2},\,[u_{\varepsilon\delta}(\tau)-1]^+\Bigr)_H = \Bigl(\frac{u_{\varepsilon\delta}(\tau)}{\varepsilon^2},\,[u_{\varepsilon\delta}(\tau)-1]^+\Bigr)_H \quad \text{for a.e. } \tau\in(0,T). \tag{2.9}$$
Adding $(-1/\varepsilon^2,\,[u_{\varepsilon\delta}(\tau)-1]^+)_H$ to both sides of (2.9) and dropping the nonnegative gradient term, we observe that
$$\frac{1}{2}\frac{d}{d\tau}\bigl|[u_{\varepsilon\delta}(\tau)-1]^+\bigr|_H^2 + \frac{1}{\varepsilon^2}\bigl(K_\delta(u_{\varepsilon\delta}(\tau))-1,\,[u_{\varepsilon\delta}(\tau)-1]^+\bigr)_H \le \frac{1}{\varepsilon^2}\bigl|[u_{\varepsilon\delta}(\tau)-1]^+\bigr|_H^2 \quad \text{for a.e. } \tau\in(0,T). \tag{2.10}$$
Because Kδ(⋅) is monotone in R with Kδ(1)=1, we infer from (2.10) that
$$\frac{d}{d\tau}\bigl|[u_{\varepsilon\delta}(\tau)-1]^+\bigr|_H^2 \le \frac{2}{\varepsilon^2}\bigl|[u_{\varepsilon\delta}(\tau)-1]^+\bigr|_H^2 \quad \text{for a.e. } \tau\in(0,T). \tag{2.11}$$
Therefore, applying the Gronwall lemma to (2.11), we observe from uε0(x)∈[0,1] for a.e. x∈Ω that
$$e^{-\frac{2}{\varepsilon^2}t}\bigl|[u_{\varepsilon\delta}(t)-1]^+\bigr|_H^2 \le \bigl|[u^\varepsilon_0-1]^+\bigr|_H^2 = 0 \quad \text{for all } t\in[0,T],$$
which implies that
$$u_{\varepsilon\delta}(t,x) \le 1 \quad \text{for a.e. } (t,x)\in Q. \tag{2.12}$$
Next, assigning [0−uεδ]+ to z in (ii) of Definition 2.1, we infer from similar arguments as above that
$$u_{\varepsilon\delta}(t,x) \ge 0 \quad \text{for a.e. } (t,x)\in Q. \tag{2.13}$$
Thus, we conclude from (2.12) and (2.13) that (i) of Theorem 2.1 holds.
Now we show (ii). Note from Lemma 2.2 that φεδ(⋅)→φε(⋅) on H in the sense of Mosco [19] as δ→0 (cf. (2.8)). Therefore, from the abstract convergence theory of evolution equations (cf. [2,16]), we observe that uεδ converges to the unique solution uε of (CP)ε on [0, T] as δ→0 in the sense of (2.2). As uε (resp. uεδ) is the unique solution to (P)ε (resp. (P)εδ) on [0, T], we conclude that the convergence result (2.2) holds.
Thus, the proof of Theorem 2.1 is complete.
By arguments similar to the proof of Theorem 2.1, the following result of (E)εδ holds. Hence, its detailed proof is omitted.
Corollary 2.1 (cf. [5,15]). Let ε∈(0,1], δ∈(0,1), and uε0∈R with |uε0|≤1. Then, there exists a unique solution uεδ:[0,T]→R to (E)εδ on [0, T] such that uεδ∈W1,2(0,T) and the following equation holds:
$$(\mathrm{E})_{\varepsilon\delta}\quad \begin{cases} (u_{\varepsilon\delta})_t + \dfrac{K_\delta(u_{\varepsilon\delta})}{\varepsilon^2} = \dfrac{u_{\varepsilon\delta}}{\varepsilon^2} & \text{in } \mathbb{R},\ \text{for a.e. } t\in(0,T),\\[1mm] u_{\varepsilon\delta}(0) = u^\varepsilon_0 & \text{in } \mathbb{R}. \end{cases}$$
Moreover, the following statements hold:
(i) If the initial value uε0 takes the value in (0,1) (resp. (−1,0)), the solution uεδ(t) also takes the value in (0,1) (resp. (−1,0)) for all t∈[0,T].
(ii) There exists a unique function uε∈W1,2(0,T) such that
$$u_{\varepsilon\delta} \to u^\varepsilon \ \text{ in } C([0,T]) \ \text{ as } \delta\to 0$$
and uε is the unique solution of the following problem (E)ε on [0,T]:
$$(\mathrm{E})_\varepsilon\quad \begin{cases} u^\varepsilon_t + \dfrac{\partial I_{[-1,1]}(u^\varepsilon)}{\varepsilon^2} \ni \dfrac{u^\varepsilon}{\varepsilon^2} & \text{in } \mathbb{R},\ \text{for a.e. } t\in(0,T),\\[1mm] u^\varepsilon(0) = u^\varepsilon_0 & \text{in } \mathbb{R}. \end{cases}$$
Note from (1.5) that ∂I[−1,1](⋅) is a multivalued function, and it is therefore very hard to investigate (E)ε numerically. However, we observe from Corollary 2.1 that (E)εδ is an approximate problem of (E)ε. Hence, in this section, we consider (E)εδ from the viewpoint of numerical analysis.
We present results concerning numerical experiments of (E)εδ. There are many methods for the numerical simulation of ODE problems (e.g., the backward Euler scheme, Runge--Kutta methods, and so on). In [25], the authors provided numerical experiments for (E)ε via the Yosida approximation using the standard forward Euler method. To clarify the advantage of our new approximate method (1.7), we also provide numerical experiments of (E)εδ using the standard forward Euler method. To this end, we consider the following explicit finite difference scheme for (E)εδ, denoted by (DE)εδ:
$$(\mathrm{DE})_{\varepsilon\delta}\quad \begin{cases} \dfrac{u^{n+1}-u^n}{\Delta t} + \dfrac{K_\delta(u^n)}{\varepsilon^2} = \dfrac{u^n}{\varepsilon^2} & \text{in } \mathbb{R},\ \text{for } n=0,1,2,\dots,N_t,\\[1mm] u^0 = u^\varepsilon_0 & \text{in } \mathbb{R}, \end{cases}$$
where Nt∈N is a given positive integer and △t:=T/Nt is the mesh size for time.
Note that un is the approximate solution of (E)εδ at the time t=n△t. Also, note that the explicit finite difference scheme (DE)εδ converges to (E)εδ as △t→0 because (DE)εδ is the standard time discretization scheme for (E)εδ.
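As a minimal illustration, the iteration (DE)εδ can be written in a few lines of Python, reusing the sketch `K_delta` given after (1.7); the function name `solve_DE` is ours.

```python
def solve_DE(u0, eps, delta, dt, n_steps):
    """Forward Euler iteration u^{n+1} = u^n - (Δt/ε²)(K_δ(u^n) - u^n) for (DE)_εδ."""
    u = [u0]
    for _ in range(n_steps):
        un = u[-1]
        u.append(un - dt / eps**2 * (K_delta(un, delta) - un))
    return u

# Parameters of the unstable experiment reported just below (Figure 1):
trajectory = solve_DE(0.1, eps=0.003, delta=0.01, dt=0.000001, n_steps=2000)
```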
Here, we present the result for an unstable numerical experiment of (DE)εδ for T=0.002, ε=0.003, δ=0.01, the initial data uε0=0.1, and the mesh size for time △t=0.000001:
We observe from Figure 1 that we have to choose suitable values for constants ε, δ, and mesh size of time-step △t to generate stable numerical results for (DE)εδ.
Our second main result of this paper concerns criteria for stable numerical simulations of (DE)εδ.
Theorem 3.1 (cf. [25, Theorem 7]). Let ε∈(0,1], δ∈(0,1) and △t∈(0,1]. Assume uε0∈(0,1) (resp. uε0∈(−1,0)) and T=∞. Let {un;n≥0} be the solution to (DE)εδ. Then, we have:
(i) Assume △t∈(0,δε2/(1−δ)). Then, un∈(0,1) (resp. un∈(−1,0)) for all n≥0. Moreover, un converges to 1 (resp. −1) monotonically as n→+∞.
(ii) Assume △t∈(δε2/(1−δ),2δε2/(1−δ)). Then, there is a positive number n0∈N such that un oscillates for all n≥n0. Moreover, un converges to 1 (resp. −1) as n→+∞.
Proof. We prove this theorem by arguments similar to those for the proof of [25, Theorem 7].
We present the proof only for initial values uε0∈(0,1), because for uε0∈(−1,0) the same arguments apply.
Assuming uε0∈(0,1), we set for simplicity
$$f_\delta(z) := K_\delta(z) - z \quad \text{for } z\in\mathbb{R}. \tag{3.1}$$
Then, we easily observe that
$$f_\delta(z) = \begin{cases} \dfrac{1-\delta}{\delta}(z-1), & \text{if } z>1-\delta,\\[1mm] -z, & \text{if } z\in[-1+\delta,\,1-\delta],\\[1mm] \dfrac{1-\delta}{\delta}(z+1), & \text{if } z<-1+\delta, \end{cases} \tag{3.2}$$
and z=−1,0,1 are zero points of fδ(⋅).
Note that the difference equation (DE)εδ is reformulated to give
$$u^{n+1} = u^n - \frac{\Delta t}{\varepsilon^2}f_\delta(u^n) \quad \text{in } \mathbb{R},\ \text{for } n=0,1,2,\dots. \tag{3.3}$$
Now, we prove (i). To this end, we assume that △t∈(0,δε2/(1−δ)). By mathematical induction, we show:
$$u^i \in (0,1) \quad \text{for all } i\ge 0. \tag{3.4}$$
Clearly (3.4) holds for i=0 because u0=uε0∈(0,1).
We now assume that (3.4) holds for all i=0,1,⋯,n. If un∈(0,1−δ], we observe from (3.2), (3.3), and △t∈(0,δε2/(1−δ)) that
$$u^n \le u^{n+1} = u^n - \frac{\Delta t}{\varepsilon^2}f_\delta(u^n) = u^n + \frac{\Delta t}{\varepsilon^2}u^n \le 1-\delta + \frac{\Delta t}{\varepsilon^2}(1-\delta) < 1-\delta + \frac{\delta}{1-\delta}(1-\delta) = 1,$$
which implies that
$$u^{n+1}\in(0,1), \quad \text{if } u^n\in(0,\,1-\delta]. \tag{3.5}$$
Next, if un∈(1−δ,1), we observe from (3.2), (3.3), and △t∈(0,δε2/(1−δ)) that
$$u^n \le u^{n+1} = u^n - \frac{\Delta t}{\varepsilon^2}f_\delta(u^n) = u^n - \frac{\Delta t}{\varepsilon^2}\cdot\frac{1-\delta}{\delta}(u^n-1) < u^n - \frac{\delta}{1-\delta}\cdot\frac{1-\delta}{\delta}(u^n-1) = 1,$$
which implies that
$$u^{n+1}\in(0,1), \quad \text{if } u^n\in(1-\delta,\,1). \tag{3.6}$$
From (3.5) and (3.6) we infer that (3.4) holds for i=n + 1. Therefore, we conclude by mathematical induction that (3.4) holds, which is the result similar to (i) of Corollary 2.1.
Also, by (3.2) and (3.4), we observe that fδ(un)≤0 for all n≥0. Therefore, we have from (3.3)
$$u^{n+1} = u^n - \frac{\Delta t}{\varepsilon^2}f_\delta(u^n) \ge u^n \quad \text{for all } n\ge 0. \tag{3.7}$$
Therefore, we infer from (3.4) and (3.7) that {un;n≥0} is a bounded and increasing sequence with respect to n. Hence, there exists a point u∞∈R such that
$$u^n \to u^\infty \ \text{ in } \mathbb{R} \ \text{ as } n\to+\infty. \tag{3.8}$$
By taking the limit in (3.3) as n→+∞, we easily observe from the continuity of fδ(⋅) that fδ(u∞)=0, that is, u∞ is a zero point of fδ(⋅); since u∞≥u0>0, we conclude that u∞=1. Hence, the proof of (i) is complete.
Next, we show (ii). To this end, we put
$$\Delta t := \frac{\delta\varepsilon^2}{1-\delta}\,\tau \quad \text{for some } \tau\in(1,2).$$
We assume that the initial value uε0∈(0,1−δ]. We can then find the minimal number n0∈N such that
$$u^{n_0}\in(1-\delta,\,1+\delta) \quad\text{and}\quad u^i\in(0,\,1-\delta] \ \text{ for all } i=0,1,\dots,n_0-1. \tag{3.9}$$
Indeed, if ui∈(0,1−δ] for all i=0,1,⋯,k, we observe from (3.3) that
$$u^{k+1} = u^k - \frac{\Delta t}{\varepsilon^2}f_\delta(u^k) = \Bigl(1+\frac{\Delta t}{\varepsilon^2}\Bigr)u^k = \Bigl(1+\frac{\Delta t}{\varepsilon^2}\Bigr)^2 u^{k-1} = \cdots = \Bigl(1+\frac{\Delta t}{\varepsilon^2}\Bigr)^{k+1} u^0. \tag{3.10}$$
Taking into account (3.10), u0=uε0∈(0,1−δ], and
$$1 + \frac{\Delta t}{\varepsilon^2} > 1 + \frac{\delta}{1-\delta} > 1,$$
we can then find the minimal number n0∈N such that
$$u^{n_0} > 1-\delta \quad\text{and}\quad u^i\in(0,\,1-\delta] \ \text{ for all } i=0,1,\dots,n_0-1.$$
Also, by (3.3), we observe that
$$u^{n_0} = u^{n_0-1} - \frac{\Delta t}{\varepsilon^2}f_\delta(u^{n_0-1}) = u^{n_0-1} + \frac{\Delta t}{\varepsilon^2}u^{n_0-1} < 1-\delta + \frac{2\delta}{1-\delta}\cdot(1-\delta) = 1+\delta,$$
hence (3.9) holds.
Now we show (ii) given the initial value uε0∈(0,1−δ]. We find from (3.2), (3.3) and (3.9) that
$$u^{n_0+1} = u^{n_0} - \frac{\Delta t}{\varepsilon^2}f_\delta(u^{n_0}) = u^{n_0} - \frac{\Delta t}{\varepsilon^2}\cdot\frac{1-\delta}{\delta}(u^{n_0}-1) = (1-\tau)u^{n_0} + \tau, \tag{3.11}$$
which implies that
$$\frac{u^{n_0+1} + (\tau-1)u^{n_0}}{\tau} = 1. \tag{3.12}$$
Therefore, we see from (3.12) and τ∈(1,2) that the zero point 1 of fδ(⋅) is in the interval between un0 and un0+1.
Moreover, since $u^{n_0}\in(1-\delta,1+\delta)$ and $\tau\in(1,2)$, we have
$$u^{n_0+1} = (1-\tau)u^{n_0} + \tau > (1-\tau)(1+\delta) + \tau > 1-\delta$$
and
$$u^{n_0+1} = (1-\tau)u^{n_0} + \tau < (1-\tau)(1-\delta) + \tau < 1+\delta,$$
which implies that
$$u^{n_0+1}\in(1-\delta,\,1+\delta).$$
By (3.11), (3.12), and repeating the above procedure, we obtain
$$u^n\in(1-\delta,\,1+\delta) \quad \text{for all } n\ge n_0 \tag{3.13}$$
and un oscillates around the zero point 1 of fδ(⋅) for all n≥n0. Also, we observe from (3.11) and (3.13) that
$$|u^{n+1}-1| = |1-\tau|\,|u^n-1| \quad \text{for all } n\ge n_0. \tag{3.14}$$
Therefore, since |1−τ|<1 for τ∈(1,2), we infer from (3.14) that |u^n−1| = |1−τ|^{n−n_0}|u^{n_0}−1| → 0 as n→+∞. Hence u^n oscillates around 1 and converges to 1, so (ii) holds for the initial value uε0∈(0,1−δ].
From similar arguments as above, we find that (ii) holds with n0=0 if uε0∈(1−δ,1). Therefore, the proof of (ii) is complete.
This completes the proof of Theorem 3.1.
Remark 3.1. Assume △t∈[2δε2/(1−δ),∞) and put △t:=δε2τ/(1−δ) for some τ≥2. Then, we observe that
$$1 + \frac{\Delta t}{\varepsilon^2} \ge 1 + \frac{2\delta}{1-\delta} > 1 \quad\text{and}\quad |1-\tau| \ge 1.$$
Therefore, we infer from Theorem 3.1 (cf. (3.10), (3.11), (3.14)) that the solution un to (DE)εδ oscillates as n→∞, in general.
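The classification given by Theorem 3.1 and Remark 3.1 is easy to automate; the following Python helper (name ours) reproduces the three regimes used in the experiments below.

```python
def classify_time_step(dt, eps, delta):
    """Rough classification of Δt following Theorem 3.1 and Remark 3.1."""
    thr = delta * eps**2 / (1.0 - delta)
    if dt < thr:
        return "monotone convergence (Theorem 3.1 (i))"
    if dt < 2.0 * thr:
        # the boundary value Δt = δε²/(1-δ) itself is not covered by Theorem 3.1
        return "oscillating but convergent (Theorem 3.1 (ii))"
    return "persistent oscillation in general (Remark 3.1)"

# The three mesh sizes used below, with ε = δ = 0.01:
for dt in (0.000001, 0.000002, 0.000005):
    print(dt, classify_time_step(dt, 0.01, 0.01))
```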
Remark 3.2. We infer from (3.2) and (3.3) that
$$u^n\equiv 1 \ \text{ for all } n\ge 0, \ \text{ if } u^\varepsilon_0 = 1, \qquad u^n\equiv 0 \ \text{ for all } n\ge 0, \ \text{ if } u^\varepsilon_0 = 0$$
and
$$u^n\equiv -1 \ \text{ for all } n\ge 0, \ \text{ if } u^\varepsilon_0 = -1.$$
In comparison, the stationary solutions of the difference equation studied in [25] depend on δ, except for un≡0 (see [25, Remark 9]).
From (ii) of Theorem 3.1, we observe that un oscillates for sufficiently large n and converges to the zero point of fδ(⋅) when △t∈(δε²/(1−δ), 2δε²/(1−δ)). However, for △t=2δε²/(1−δ), we have the following special case, in which the solution to (DE)εδ does not oscillate and coincides with a zero point of fδ(⋅) after a finite number of iterations.
Corollary 3.1 (cf. [25, Corollary 10]). Let ε∈(0,1], δ∈(0,1), △t=2δε2/(1−δ) and n∈N. Assume uε0:=(1−δ)n/(1+δ)n. Then, the solution to (DE)εδ is given by
$$u^i = \begin{cases} \Bigl(\dfrac{1-\delta}{1+\delta}\Bigr)^{n-i}, & \text{if } i=0,1,\dots,n-1,\\[1mm] 1, & \text{if } i\ge n. \end{cases} \tag{3.15}$$
Similarly, if uε0:=−(1−δ)n/(1+δ)n, then the solution to (DE)εδ is given by
$$u^i = \begin{cases} -\Bigl(\dfrac{1-\delta}{1+\delta}\Bigr)^{n-i}, & \text{if } i=0,1,\dots,n-1,\\[1mm] -1, & \text{if } i\ge n. \end{cases}$$
Proof. We present only the proof of (3.15) as similar arguments hold for uε0:=−(1−δ)n/(1+δ)n.
Note that uε0:=(1−δ)n/(1+δ)n∈(0,1−δ). Therefore we infer from (3.2), (3.3), and u0=uε0 that
$$u^1 = u^0 - \frac{\Delta t}{\varepsilon^2}f_\delta(u^0) = u^\varepsilon_0 - \frac{2\delta}{1-\delta}(-u^\varepsilon_0) = \frac{1+\delta}{1-\delta}u^\varepsilon_0 = \Bigl(\frac{1-\delta}{1+\delta}\Bigr)^{n-1}.$$
Similarly, we observe from u1∈(0,1−δ) that
$$u^2 = u^1 - \frac{\Delta t}{\varepsilon^2}f_\delta(u^1) = \frac{1+\delta}{1-\delta}u^1 = \Bigl(\frac{1-\delta}{1+\delta}\Bigr)^{n-2}.$$
Repeating this procedure and recalling Remark 3.2, we find that the solution to (DE)εδ is given by (3.15).
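A quick numerical check of Corollary 3.1 with n = 6 (reusing the sketches `solve_DE` and `K_delta` above) reproduces the values listed in Table 4 below.

```python
eps, delta, n = 0.01, 0.01, 6
dt = 2.0 * delta * eps**2 / (1.0 - delta)      # Δt = 2δε²/(1-δ)
u0 = ((1.0 - delta) / (1.0 + delta)) ** n      # u^ε_0 = (1-δ)^n/(1+δ)^n ≈ 0.886917
trajectory = solve_DE(u0, eps, delta, dt, n_steps=10)
print([round(v, 6) for v in trajectory])
# Expected from (3.15): ((1-δ)/(1+δ))^{6-i} for i = 0,...,5 and then 1 from i = 6 on,
# i.e. 0.886917, 0.904834, ..., 0.980198, 1.0, ... (up to floating-point rounding).
```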
Taking into account Theorem 3.1, we present results of numerical experiments of (DE)εδ. To this end, we take
$$T=0.002,\quad \varepsilon=0.01,\quad \delta=0.01 \quad\text{and the initial data}\quad u^\varepsilon_0=0.1$$
as numerical data. Then, we observe that:
$$\frac{1}{1-\delta} = \frac{1}{1-0.01} = 1.010101\ldots$$
and
$$\frac{\delta\varepsilon^2}{1-\delta} = 0.0000010101\ldots$$
Setting △t=0.000001, we have:
$$\frac{\delta\varepsilon^2}{1-\delta} = 0.0000010101\ldots > \Delta t = 0.000001,$$
which complies with (i) of Theorem 3.1. Hence, we have the following stable numerical result of (DE)εδ. Indeed, we observe from Figure 2 and Table 1 in Remark 3.3 that the solution to (DE)εδ converges to the stationary solution 1 monotonically.
Remark 3.3 (cf. [25, TABLE 1]). In [25], the authors provided numerical results of the following difference equation:
$$(\mathrm{YDE})_{\varepsilon\delta}\quad \begin{cases} \dfrac{u^{n+1}-u^n}{\Delta t} + \dfrac{(\partial I_{[-1,1]})_\delta(u^n)}{\varepsilon^2} = \dfrac{u^n}{\varepsilon^2} & \text{in } \mathbb{R},\ \text{for } n=0,1,2,\dots,N_t,\\[1mm] u^0 = u^\varepsilon_0 & \text{in } \mathbb{R}. \end{cases}$$
The resulting numerical values of the solutions to (YDE)εδ, together with those to (DE)εδ, are given in Table 1 (cf. [25, Table 1]).
Next we set △t=0.000002 where we have
$$\frac{\delta\varepsilon^2}{1-\delta} = 0.0000010101\ldots < \Delta t = 0.000002 < \frac{2\delta\varepsilon^2}{1-\delta},$$
which complies with (ii) of Theorem 3.1. Hence, we observe from Figure 3 and Table 2 that the solution to (DE)εδ oscillates and converges to the stationary solution 1.
Table 2. Values of ui for (DE)εδ with △t=0.000002 (cf. Figure 3).

number of iterations i | value of ui
0 | 0.100000 |
1 | 0.102000 |
2 | 0.104040 |
⋮ | ⋮
120 | 0.994959 |
121 | 1.004940 |
122 | 0.995159 |
123 | 1.004745 |
124 | 0.995350 |
125 | 1.004557 |
126 | 0.995534 |
127 | 1.004376 |
128 | 0.995711 |
129 | 1.004203 |
⋮ | ⋮
574 | 0.999999 |
575 | 1.000001 |
576 | 0.999999 |
577 | 1.000000 |
578 | 1.000000 |
579 | 1.000000 |
580 | 1.000000 |
⋮ | ⋮
Setting △t=2δε2/(1−δ)=0.0000020202020⋯, we note Remark 3.1. Indeed, we observe from Figure 4 and Table 3 that the solution to (DE)εδ oscillates.
Table 3. Values of ui for (DE)εδ with △t=2δε²/(1−δ) (cf. Figure 4).

number of iterations i | value of ui
0 | 0.100000 |
1 | 0.102020 |
2 | 0.104081 |
3 | 0.106184 |
4 | 0.108329 |
5 | 0.110517 |
⋮ | ⋮
111 | 0.920801 |
112 | 0.939403 |
113 | 0.958381 |
114 | 0.977742 |
115 | 0.997495 |
116 | 1.002505 |
117 | 0.997495 |
118 | 1.002505 |
119 | 0.997495 |
120 | 1.002505 |
⋮ | ⋮
Setting △t=0.000005, we have:
$$\frac{2\delta\varepsilon^2}{1-\delta} = 0.0000020202\ldots < \Delta t = 0.000005.$$
Therefore, noting Remark 3.1, we indeed observe from Figure 5 that the solution to (DE)εδ oscillates.
Now we consider △t=15δε2/(1−δ). Recalling Remark 3.1, we indeed observe from Figure 6 that the solution to (DE)εδ oscillates between three zero points of fδ(⋅).
Next, we consider a numerical example of Corollary 3.1. To this end, we use the following initial data:
$$u^0 := \frac{(1-\delta)^6}{(1+\delta)^6} = \frac{(1-0.01)^6}{(1+0.01)^6} = 0.88691688\ldots$$
We observe from Table 4, Figure 7, and Corollary 3.1 that (3.15) holds with n=6:
Table 4. Values of ui for (DE)εδ with initial data uε0=(1−δ)^6/(1+δ)^6 (cf. Figure 7 and Corollary 3.1).

number of iterations i | value of ui
0 | 0.886917 |
1 | 0.904834 |
2 | 0.923114 |
3 | 0.941763 |
4 | 0.960788 |
5 | 0.980198 |
6 | 1.000000 |
7 | 1.000000 |
8 | 1.000000 |
9 | 1.000000 |
10 | 1.000000 |
⋮ | ⋮
From Theorem 3.1, Remark 3.3 and numerical experiments as above, we conclude that
(i) the mesh size for the time step △t needs to be smaller than δε2/(1−δ) to provide a stable numerical solution for (DE)εδ;
(ii) our new approximate method (1.7) is better than the Yosida approximation (1.6) because the solutions to (DE)εδ take values in [-1, 1] (cf. Table 1);
(iii) stable numerical examples of (DE)εδ are obtained with the initial data uε0:=(1−δ)^n/(1+δ)^n, even if the mesh size △t is equal to 2δε²/(1−δ).
Although a numerical study of (P)ε is hard because ∂I[−1,1](⋅) is multivalued (cf. (1.5)), we nevertheless observe from Theorem 2.1 that (P)εδ is an approximation of the problem (P)ε. Therefore, in this section, we consider (P)εδ in a 2D space from a numerical analysis viewpoint.
To extend the result obtained in [25, Theorem 16] and to avoid complicated arguments, we perform numerical experiments using the standard forward Euler method, although there are many methods for the numerical simulation of PDE problems (e.g., the backward Euler scheme, the finite element method, and so on).
For simplicity, assume that Ω:=(0,1)×(0,1) is a square domain in R2. We consider the following difference equation to the Allen--Cahn equation in (P)εδ:
$$\frac{u^{n+1}_{i,j}-u^n_{i,j}}{\Delta t} - \frac{u^n_{i-1,j}-2u^n_{i,j}+u^n_{i+1,j}}{(\Delta x)^2} - \frac{u^n_{i,j-1}-2u^n_{i,j}+u^n_{i,j+1}}{(\Delta y)^2} + \frac{K_\delta(u^n_{i,j})}{\varepsilon^2} = \frac{u^n_{i,j}}{\varepsilon^2} \tag{4.1}$$
for n=0,1,…,Nt−1, i=1,2,…,Nx−1, and j=1,2,…,Ny−1, where Nt,Nx,Ny∈N are given integers, △t:=T/Nt is the mesh size for the time steps, and in the 2D space △x:=1/Nx and △y:=1/Ny are the mesh sizes along the x- and y-axes.
Also, for the homogeneous Neumann boundary and initial conditions, we consider the following explicit situations:
$$\left. \begin{aligned} &u^n_{0,0}=u^n_{1,1},\quad u^n_{N_x,0}=u^n_{N_x-1,1},\quad u^n_{0,N_y}=u^n_{1,N_y-1},\quad u^n_{N_x,N_y}=u^n_{N_x-1,N_y-1},\\ &u^n_{i,0}=u^n_{i,1},\quad u^n_{i,N_y}=u^n_{i,N_y-1} \quad \text{for } i=1,2,\dots,N_x-1,\\ &u^n_{0,j}=u^n_{1,j},\quad u^n_{N_x,j}=u^n_{N_x-1,j} \quad \text{for } j=1,2,\dots,N_y-1 \end{aligned} \right\}\quad (1\le n\le N_t) \tag{4.2}$$
and
$$u^0_{i,j} = u^\varepsilon_0(x_i,y_j) \quad \text{for } i=0,1,\dots,N_x \text{ and } j=0,1,\dots,N_y, \tag{4.3}$$
where xi:=i△x and yj:=j△y.
Considering the explicit finite difference system (DP)εδ := {(4.1), (4.2), (4.3)}, we observe that uni,j is the approximate solution of (P)εδ at time tn:=n△t and position (xi,yj). Also, we observe that (DP)εδ converges to (P)εδ as △t→0, △x→0, and △y→0, because (DP)εδ is the standard time and space discretization of (P)εδ in the 2D space.
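A minimal NumPy sketch of one time step of (DP)εδ is given below; the array layout and the helper name `step_DP` are ours, and the update is written in the equivalent form with fδ = Kδ − id, which reappears as (4.8) in the proof of Theorem 4.1.

```python
import numpy as np

def step_DP(u, eps, delta, dt, dx, dy):
    """One forward Euler step of (4.1) with the Neumann-type conditions (4.2).
    u has shape (Nx+1, Ny+1) with u[i, j] ~ u^n_{i,j}."""
    rx, ry = dt / dx**2, dt / dy**2
    # f_δ(u) = K_δ(u) - u, cf. (3.2)
    f = np.where(u > 1.0 - delta, (1.0 - delta) / delta * (u - 1.0),
        np.where(u < -1.0 + delta, (1.0 - delta) / delta * (u + 1.0), -u))
    unew = u.copy()
    unew[1:-1, 1:-1] = (rx * (u[:-2, 1:-1] + u[2:, 1:-1])
                        + ry * (u[1:-1, :-2] + u[1:-1, 2:])
                        + (1.0 - 2.0 * rx - 2.0 * ry) * u[1:-1, 1:-1]
                        - dt / eps**2 * f[1:-1, 1:-1])
    # boundary values copied from adjacent interior values, as in (4.2)
    unew[0, 1:-1], unew[-1, 1:-1] = unew[1, 1:-1], unew[-2, 1:-1]
    unew[1:-1, 0], unew[1:-1, -1] = unew[1:-1, 1], unew[1:-1, -2]
    unew[0, 0], unew[-1, 0] = unew[1, 1], unew[-2, 1]
    unew[0, -1], unew[-1, -1] = unew[1, -2], unew[-2, -2]
    return unew
```

With the data of the experiments below (Δx = Δy = 0.005, Δt = 0.000005, T = 0.01), one would call `step_DP` 2000 times, starting from the initial array built from the initial data of this section.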
Using Theorem 3.1, we observe that we also have to choose suitable values for the constants ε, δ, and the mesh sizes for time △t and space △x and △y to establish stable numerical results for (DP)εδ. We now announce our final main result concerning the stability of (DP)εδ.
Theorem 4.1. Let ε∈(0,1], δ∈(0,1), T>0, Ω:=(0,1)×(0,1) and uε0∈K∩C(¯Ω), where K is the set of initial data defined in (A). Also, let Nt,Nx,Ny be integers such that △t∈(0,1], △x∈(0,1] and △y∈(0,1], where △t:=T/Nt, △x:=1/Nx and △y:=1/Ny. Let {uni,j; n=0,1,⋯,Nt, i=0,1,⋯,Nx, j=0,1,⋯,Ny} be the solution to (DP)εδ. Also, let c0∈(0,1) and assume that
$$0 < \Delta t \le \frac{c_0\,\delta\varepsilon^2}{1-\delta} \quad\text{and}\quad 0 \le \frac{\Delta t}{(\Delta x)^2} + \frac{\Delta t}{(\Delta y)^2} \le \frac{1-c_0}{2}. \tag{4.4}$$
Then, the solution to (DP)εδ is bounded in the following sense:
$$\max_{\substack{0\le i\le N_x\\ 0\le j\le N_y}} |u^n_{i,j}| \le 1 \quad \text{for all } n\ge 0. \tag{4.5}$$
In particular, if the initial value uε0(x) takes the value in [0,1] (resp. [-1,0]) for a.e. x∈Ω, the following boundedness holds:
$$u^n_{i,j}\in[0,1] \ (\text{resp. } u^n_{i,j}\in[-1,0]) \quad \text{for all } n\ge 0,\ i=0,1,\dots,N_x \text{ and } j=0,1,\dots,N_y. \tag{4.6}$$
Proof. We demonstrate (4.5) by mathematical induction. Clearly (4.5) holds for n=0 because u0=uε0∈K. We next assume that
$$\max_{\substack{0\le i\le N_x\\ 0\le j\le N_y}} |u^\ell_{i,j}| \le 1 \quad \text{for all } \ell=0,1,\dots,n. \tag{4.7}$$
Then, we observe that the explicit finite difference equation (4.1) in (DP)εδ can be reformulated giving
$$u^{n+1}_{i,j} = r_x u^n_{i-1,j} + r_x u^n_{i+1,j} + r_y u^n_{i,j-1} + r_y u^n_{i,j+1} + (1-2r_x-2r_y)u^n_{i,j} - \frac{\Delta t}{\varepsilon^2}f_\delta(u^n_{i,j}) \tag{4.8}$$
for all n=0,1,…,Nt−1, i=1,2,…,Nx−1, and j=1,2,…,Ny−1,
where we put rx:=△t/(△x)2, ry:=△t/(△y)2, and fδ(⋅) is the function defined in (3.2). Note that z=−1,0,1 are the zero points of fδ(z).
We observe from (4.4), (4.7), and (4.8) that
$$1-u^{n+1}_{i,j} = r_x(1-u^n_{i-1,j}) + r_x(1-u^n_{i+1,j}) + r_y(1-u^n_{i,j-1}) + r_y(1-u^n_{i,j+1}) + (1-2r_x-2r_y)(1-u^n_{i,j}) + \frac{\Delta t}{\varepsilon^2}f_\delta(u^n_{i,j}) \ge (1-2r_x-2r_y)(1-u^n_{i,j}) + \frac{\Delta t}{\varepsilon^2}f_\delta(u^n_{i,j}) \tag{4.9}$$
for all i=1,2,…,Nx−1, and j=1,2,…,Ny−1.
Note from (3.2) that the function [−1,1]∋z→(1−2rx−2ry)(1−z)+△t/ε2fδ(z) is continuous. Also, we infer from (3.2), (4.4) and (4.7) that
$$(1-2r_x-2r_y)(1-z) + \frac{\Delta t}{\varepsilon^2}f_\delta(z) \ge 0 \quad \text{for all } z\in[-1,1]. \tag{4.10}$$
Indeed, we observe from (4.4) that
$$1-2r_x-2r_y \ge c_0 > 0. \tag{4.11}$$
Therefore, it follows from (3.2) that the function [−1,1−δ]∋z→(1−2rx−2ry)(1−z)+△t/ε2fδ(z) attains a minimum value at z=1−δ. Therefore we obtain from (3.2) and (4.4) that
$$(1-2r_x-2r_y)(1-z) + \frac{\Delta t}{\varepsilon^2}f_\delta(z) \ge (1-2r_x-2r_y)\bigl(1-(1-\delta)\bigr) + \frac{\Delta t}{\varepsilon^2}f_\delta(1-\delta) = (1-2r_x-2r_y)\delta - \frac{\Delta t}{\varepsilon^2}(1-\delta) \ge c_0\delta - \frac{\Delta t}{\varepsilon^2}(1-\delta) \ge 0 \quad \text{for all } z\in[-1,\,1-\delta]. \tag{4.12}$$
Also, for any z∈[1−δ,1], we observe from (3.2) that
$$(1-2r_x-2r_y)(1-z) + \frac{\Delta t}{\varepsilon^2}f_\delta(z) = (1-2r_x-2r_y)(1-z) + \frac{\Delta t}{\varepsilon^2}\cdot\frac{1-\delta}{\delta}(z-1) = \Bigl[\frac{1-\delta}{\delta\varepsilon^2}\Delta t - (1-2r_x-2r_y)\Bigr]z + (1-2r_x-2r_y) - \frac{1-\delta}{\delta\varepsilon^2}\Delta t. \tag{4.13}$$
Here we note from (4.4) that
$$\frac{1-\delta}{\delta\varepsilon^2}\Delta t - (1-2r_x-2r_y) \le \frac{1-\delta}{\delta\varepsilon^2}\Delta t - c_0 \le 0,$$
which implies from (4.13) that the function [1−δ,1]∋z→(1−2rx−2ry)(1−z)+△t/ε2fδ(z) is non-increasing and attains a minimum value at z=1. Therefore, we have:
$$(1-2r_x-2r_y)(1-z) + \frac{\Delta t}{\varepsilon^2}f_\delta(z) \ge \frac{\Delta t}{\varepsilon^2}f_\delta(1) = 0 \quad \text{for all } z\in[1-\delta,\,1]. \tag{4.14}$$
Hence, from (4.12) and (4.14), (4.10) holds. Therefore we find from (4.7), (4.9) and (4.10) that
$$1-u^{n+1}_{i,j} \ge 0 \quad \text{for all } i=1,2,\dots,N_x-1, \text{ and } j=1,2,\dots,N_y-1. \tag{4.15}$$
Similarly, we observe from (4.4), (4.7), and (4.8) that
$$u^{n+1}_{i,j}+1 = r_x(u^n_{i-1,j}+1) + r_x(u^n_{i+1,j}+1) + r_y(u^n_{i,j-1}+1) + r_y(u^n_{i,j+1}+1) + (1-2r_x-2r_y)(u^n_{i,j}+1) - \frac{\Delta t}{\varepsilon^2}f_\delta(u^n_{i,j}) \ge (1-2r_x-2r_y)(u^n_{i,j}+1) - \frac{\Delta t}{\varepsilon^2}f_\delta(u^n_{i,j}) \tag{4.16}$$
for all i=1,2,…,Nx−1 and j=1,2,…,Ny−1.
Clearly, we have from (3.2) that the function [−1,1]∋z→(1−2rx−2ry)(z+1)−△t/ε2fδ(z) is continuous. Also, using similar arguments as above (cf. (4.10)), we infer that the function [−1,1]∋z→(1−2rx−2ry)(z+1)−△t/ε2fδ(z) is non-negative. Indeed, we observe from (3.2), (4.4), and (4.11) that the function [−1+δ,1]∋z→(1−2rx−2ry)(z+1)−△t/ε2fδ(z) attains a minimum value at z=−1+δ. Therefore, it follows from (3.2) and (4.4) (cf. (4.12)) that
$$(1-2r_x-2r_y)(z+1) - \frac{\Delta t}{\varepsilon^2}f_\delta(z) \ge (1-2r_x-2r_y)\bigl((-1+\delta)+1\bigr) - \frac{\Delta t}{\varepsilon^2}f_\delta(-1+\delta) = (1-2r_x-2r_y)\delta + \frac{\Delta t}{\varepsilon^2}(-1+\delta) \ge c_0\delta - \frac{\Delta t}{\varepsilon^2}(1-\delta) \ge 0 \quad \text{for all } z\in[-1+\delta,\,1]. \tag{4.17}$$
Also, for any z∈[−1,−1+δ], we observe from (3.2) that
$$(1-2r_x-2r_y)(z+1) - \frac{\Delta t}{\varepsilon^2}f_\delta(z) = (1-2r_x-2r_y)(z+1) - \frac{\Delta t}{\varepsilon^2}\cdot\frac{1-\delta}{\delta}(z+1) = \Bigl[(1-2r_x-2r_y) - \frac{1-\delta}{\delta\varepsilon^2}\Delta t\Bigr]z + (1-2r_x-2r_y) - \frac{1-\delta}{\delta\varepsilon^2}\Delta t. \tag{4.18}$$
Here we note from (4.4) that
$$(1-2r_x-2r_y) - \frac{1-\delta}{\delta\varepsilon^2}\Delta t \ge c_0 - \frac{1-\delta}{\delta\varepsilon^2}\Delta t \ge 0.$$
Therefore, we infer from (4.18) that the function [−1,−1+δ]∋z→(1−2rx−2ry)(z+1)−△t/ε2fδ(z) is non-decreasing and attains a minimum value at z=−1. Hence, we have:
$$(1-2r_x-2r_y)(z+1) - \frac{\Delta t}{\varepsilon^2}f_\delta(z) \ge -\frac{\Delta t}{\varepsilon^2}f_\delta(-1) = 0 \quad \text{for all } z\in[-1,\,-1+\delta]. \tag{4.19}$$
From (4.17) and (4.19), we obtain
$$(1-2r_x-2r_y)(z+1) - \frac{\Delta t}{\varepsilon^2}f_\delta(z) \ge 0 \quad \text{for all } z\in[-1,1],$$
which from (4.7) and (4.16) implies that
$$u^{n+1}_{i,j}+1 \ge 0 \quad \text{for all } i=1,2,\dots,N_x-1, \text{ and } j=1,2,\dots,N_y-1. \tag{4.20}$$
Taking into account the Neumann boundary condition, specifically (4.2), we observe from (4.15) and (4.20) that
$$\max_{\substack{0\le i\le N_x\\ 0\le j\le N_y}} |u^{n+1}_{i,j}| \le 1,$$
which implies that (4.7) holds for ℓ=n+1. Therefore, we conclude by mathematical induction that (4.5) holds.
Finally, we show (4.6). We present the proof only for initial values uε0(x)∈[0,1] for a.e. x∈Ω, because for uε0(x)∈[−1,0] the same arguments apply.
We demonstrate (4.6) by arguments similar to the proof of (4.5), namely, by mathematical induction. Clearly (4.6) holds for n=0 because u0(x)=uε0(x)∈[0,1] for a.e. x∈Ω. We next assume that
$$u^\ell_{i,j}\in[0,1] \quad \text{for all } \ell=0,1,\dots,n,\ i=0,1,\dots,N_x \text{ and } j=0,1,\dots,N_y. \tag{4.21}$$
Note from (3.2) and (4.21) that
$$f_\delta(u^n_{i,j}) \le 0 \quad \text{for all } i=0,1,\dots,N_x \text{ and } j=0,1,\dots,N_y.$$
Therefore, we observe from (4.8), (4.11) and (4.21) that
$$u^{n+1}_{i,j} = r_x u^n_{i-1,j} + r_x u^n_{i+1,j} + r_y u^n_{i,j-1} + r_y u^n_{i,j+1} + (1-2r_x-2r_y)u^n_{i,j} - \frac{\Delta t}{\varepsilon^2}f_\delta(u^n_{i,j}) \ge 0 \quad \text{for all } i=1,2,\dots,N_x-1 \text{ and } j=1,2,\dots,N_y-1. \tag{4.22}$$
By arguments similar to the proof of (4.15), we also observe that
$$u^{n+1}_{i,j} \le 1 \quad \text{for all } i=1,2,\dots,N_x-1 \text{ and } j=1,2,\dots,N_y-1. \tag{4.23}$$
Hence, from (4.22), (4.23) and the Neumann boundary condition, specifically (4.2), we observe that
$$u^{n+1}_{i,j}\in[0,1] \quad \text{for all } i=0,1,\dots,N_x \text{ and } j=0,1,\dots,N_y,$$
which implies that (4.21) holds for ℓ=n+1. Therefore, we conclude by mathematical induction that (4.6) holds, which is the result similar to (i) of Theorem 2.1.
This completes the proof of Theorem 4.1.
Remark 4.1. We can set c0=0 in (4.4) for the explicit finite difference scheme applied to the following usual 2D heat equation with a homogeneous Neumann boundary condition:
$$\begin{cases} u_t - \Delta u = 0 & \text{in } Q=(0,T)\times\Omega,\\[1mm] \dfrac{\partial u}{\partial\nu} = 0 & \text{on } \Sigma=(0,T)\times\Gamma,\\[1mm] u(0,x) = u_0(x), & x\in\Omega, \end{cases}$$
where Ω:=(0,1)×(0,1) is a square domain in R2 and Γ:=∂Ω is the boundary of Ω.
Remark 4.2. We note that Theorem 4.1 holds for the homogeneous Dirichlet boundary condition. Also, we can establish stability criteria for a 3D space. Indeed, assume for simplicity that Ω:=(0,1)×(0,1)×(0,1). Let △z denote the mesh size along the z-axis in 3-D space. Also, let c0∈(0,1) and assume that
$$0 < \Delta t \le \frac{c_0\,\delta\varepsilon^2}{1-\delta} \quad\text{and}\quad 0 \le \frac{\Delta t}{(\Delta x)^2} + \frac{\Delta t}{(\Delta y)^2} + \frac{\Delta t}{(\Delta z)^2} \le \frac{1-c_0}{2}.$$
Then, a boundedness result similar to (4.5) holds for Ω:=(0,1)×(0,1)×(0,1)⊂R3.
Guided by Theorem 4.1, we now present stable numerical experiments of (DP)εδ. We set
$$\Omega := (0,1)\times(0,1),\quad T=0.01,\quad \delta=0.01 \quad\text{and}\quad \Delta x=\Delta y=0.005$$
as numerical data. Also, we consider the following initial data uε0(x,y) defined by
$$u^\varepsilon_0(x,y) = \begin{cases} 0.2, & \text{if } (x,y)\in[0.25,0.75]\times[0.25,0.75],\\ -0.7, & \text{if } (x,y)\in\Omega\setminus[0.25,0.75]\times[0.25,0.75]. \end{cases} \tag{4.24}$$
The graph of the initial data uε0(x,y) is as follows (Figure 8):
Now, setting ε=0.08 and △t=0.000005, we take c0=0.1. Then, we observe that:
$$\frac{\Delta t}{(\Delta x)^2} + \frac{\Delta t}{(\Delta y)^2} = \frac{0.000005}{(0.005)^2}\times 2 = 0.4 < 0.45 = \frac{1-c_0}{2}$$
and
$$\frac{c_0\,\delta\varepsilon^2}{1-\delta} = \frac{0.1\times 0.01\times(0.08)^2}{1-0.01} = 0.0000064646\ldots$$
Therefore, we have
$$\frac{c_0\,\delta\varepsilon^2}{1-\delta} = 0.0000064646\ldots > \Delta t,$$
which implies that criterion (4.4) holds. Thus, we obtain a stable numerical experiment of (DP)εδ, yielding Figure 9:
Next, we set ε=0.01, △t=0.000005, and consider c0=0.1. Then, we find that:
$$\frac{\Delta t}{(\Delta x)^2} + \frac{\Delta t}{(\Delta y)^2} = \frac{0.000005}{(0.005)^2}\times 2 = 0.4 < 0.45 = \frac{1-c_0}{2},$$
and
$$\frac{c_0\,\delta\varepsilon^2}{1-\delta} = \frac{0.1\times 0.01\times(0.01)^2}{1-0.01} = 0.00000010101\ldots < \Delta t.$$
Therefore, criterion (4.4) does not hold, and the numerical experiment of (DP)εδ is indeed unstable, as shown in Figure 10:
Finally, we consider the case ε=0.01, △t=0.0000005, and set c0=0.5. We then observe that:
$$\frac{\Delta t}{(\Delta x)^2} + \frac{\Delta t}{(\Delta y)^2} = \frac{0.0000005}{(0.005)^2}\times 2 = 0.04 < 0.25 = \frac{1-c_0}{2},$$
and
$$\frac{c_0\,\delta\varepsilon^2}{1-\delta} = \frac{0.5\times 0.01\times(0.01)^2}{1-0.01} = 0.00000050505\ldots > \Delta t,$$
which implies that the inequalities (4.4) hold. Therefore, we obtain a stable numerical experiment of (DP)εδ that yields Figure 11:
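The three parameter checks above can be reproduced with a short Python helper; the function name `satisfies_44` is ours.

```python
def satisfies_44(dt, dx, dy, eps, delta, c0):
    """Check the stability criterion (4.4) of Theorem 4.1."""
    time_ok  = 0.0 < dt <= c0 * delta * eps**2 / (1.0 - delta)
    space_ok = dt / dx**2 + dt / dy**2 <= (1.0 - c0) / 2.0
    return time_ok and space_ok

# The three experiments of this section (δ = 0.01, Δx = Δy = 0.005):
print(satisfies_44(0.000005,  0.005, 0.005, eps=0.08, delta=0.01, c0=0.1))  # True  (stable, Figure 9)
print(satisfies_44(0.000005,  0.005, 0.005, eps=0.01, delta=0.01, c0=0.1))  # False (criterion violated; Figure 10 is indeed unstable)
print(satisfies_44(0.0000005, 0.005, 0.005, eps=0.01, delta=0.01, c0=0.5))  # True  (stable, Figure 11)
```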
Remark 4.3. From Theorem 4.1, stable numerical results for (DP)εδ are produced if we choose suitable values for the constants ε, δ, and the mesh sizes for time △t and space △x and △y. Therefore, when performing a numerical experiment of (P)ε for sufficiently small ε, we find it preferable to treat the original problem (P)ε directly, for instance by the primal-dual active set method of [3] or the Lagrange multiplier method of [9], since (4.4) then forces a very small time step.
From Theorem 4.1 and the numerical experiments presented above, we conclude that
(i) the mesh size for the time step △t and the spatial mesh sizes △x, △y must satisfy the constraints
$$0 < \Delta t \le \frac{c_0\,\delta\varepsilon^2}{1-\delta}, \qquad 0 \le \frac{\Delta t}{(\Delta x)^2} + \frac{\Delta t}{(\Delta y)^2} \le \frac{1-c_0}{2} \quad \text{for some constant } c_0\in(0,1),$$
to generate stable numerical simulations of (DP)εδ;
(ii) the value δε2/(1−δ) is very important in providing numerical experiments of (DE)εδ and (DP)εδ (cf. Theorems 3.1 and 4.1);
(iii) our new approximate method (1.7) is better than the Yosida approximation (1.6), because the solutions to (DP)εδ also take values in [-1, 1] (cf. (DE)εδ).
This research was supported by JSPS KAKENHI Grant-in-Aid for Young Scientists (B), No. 16K17622 and Scientific Research (C), No. 26400179. The authors express their gratitude to Professor Toyohiko Aiki for his valuable and useful comments. The authors also express their gratitude to an anonymous referee for reviewing the original manuscript and for many valuable comments that helped clarify and refine this paper.
[1] | S. Allen and J. Cahn, A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening. Acta Metall., 27 (1979), 1084-1095. |
[2] | H. Attouch, Variational Convergence for Functions and Operators. Pitman Advanced Publishing Program, Boston-London-Melbourne, 1984. |
[3] | L. Blank, H. Garcke, and L. Sarbu, Primal-dual active set methods for Allen-Cahn variational inequalities with nonlocal constraints. Numer. Methods Partial Differential Equations, 29 (2013), 999-1030. |
[4] | H. Brézis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam, 1973. |
[5] | H. Brézis, M. G. Crandall, and A. Pazy, Perturbations of nonlinear maximal monotone sets in Banach space. Comm. Pure Appl. Math., 23 (1970), 123-144. |
[6] | L. Bronsard and R. V. Kohn, Motion by mean curvature as the singular limit of Ginzburg-Landau dynamics. J. Differential Equations, 90 (1991), 211-237. |
[7] | X. Chen and C. M. Elliott, Asymptotics for a parabolic double obstacle problem. Proc. Roy. Soc. London Ser. A, 444 (1994), 429-445. |
[8] | M. H. Farshbaf-Shaker, T. Fukao, and N. Yamazaki, Singular limit of Allen-Cahn equation with constraints and its Lagrange multiplier. Discrete Contin. Dyn. Syst., AIMS Proceedings (2015), 418-427. |
[9] | M. H. Farshbaf-Shaker, T. Fukao, and N. Yamazaki, Lagrange multiplier and singular limit of double-obstacle problems for the Allen-Cahn equation with constraint. Math. Methods Appl. Sci. (in press). |
[10] | X. Feng and A. Prohl, Numerical analysis of the Allen-Cahn equation and approximation for mean curvature flows. Numer. Math., 94 (2003), 33-65. |
[11] | X. Feng, H. Song, and T. Tang, Nonlinear stability of the implicit-explicit methods for the Allen-Cahn equation. Inverse Probl. Imaging, 7 (2013), 679-695. |
[12] | P. C. Fife, Dynamics of internal layers and diffusive interfaces. CBMS-NSF Regional Conf. Ser. in: Appl. Math., 53 (1988), SIAM, Philadelphia. |
[13] | A. Friedman, Partial Differential Equations of Parabolic Type. Prentice-Hall, Inc., Englewood Cliffs, N. J., 1964. |
[14] | Y. Giga, Y. Kashima, and N. Yamazaki, Local solvability of a constrained gradient system of total variation. Abstr. Appl. Anal., 2004 (2004), 651-682. |
[15] | A. Ito, N. Yamazaki, and N. Kenmochi, Attractors of nonlinear evolution systems generated by time-dependent subdifferentials in Hilbert spaces. Dynamical systems and differential equations, Vol. I (Springfield, MO, 1996), Discrete Contin. Dynam. Systems 1998, Added Volume I, 327-349. |
[16] | N. Kenmochi, Solvability of nonlinear evolution equations with time-dependent constraints and applications. Bull. Fac. Education, Chiba Univ., 30 (1981), 1-87. |
[17] | N. Kenmochi, Monotonicity and compactness methods for nonlinear variational inequalities. Handbook of Differential Equations, Stationary Partial Differential Equations, Vol. 4 (2007) ed. M. Chipot, Chapter 4, 203-298, North Holland, Amsterdam. |
[18] | N. Kenmochi and M. Niezgódka, Systems of nonlinear parabolic equations for phase change problems. Adv. Math. Sci. Appl., 3 (1993/94), Special Issue, 89-117. |
[19] | U. Mosco, Convergence of convex sets and of solutions of variational inequalities. Advances Math., 3 (1969), 510-585. |
[20] | T. Ohtsuka, Motion of interfaces by an Allen-Cahn type equation with multiple-well potentials. Asymptot. Anal., 56 (2008), 87-123. |
[21] | T. Ohtsuka, K. Shirakawa, and N. Yamazaki, Optimal control problems of singular diffusion equation with constraint. Adv. Math. Sci. Appl., 18 (2008), 1-28. |
[22] | T. Ohtsuka, K. Shirakawa, and N. Yamazaki, Convergence of numerical algorithm for optimal control problem of Allen-Cahn type equation with constraint. Proceedings of International Conference on: Nonlinear Phenomena with Energy Dissipation-Mathematical Analysis, Modelling and Simulation, GAKUTO Intern. Ser. Math. Appl., Vol. 29 (2008), 441-462. |
[23] | T. Ohtsuka, K. Shirakawa, and N. Yamazaki, Optimal control problem for Allen-Cahn type equation associated with total variation energy. Discrete Contin. Dyn. Syst. Ser. S, 5 (2012), 159-181. |
[24] | J. Shen and X. Yang, Numerical approximations of Allen-Cahn and Cahn-Hilliard equations. Discret. Contin. Dyn. Syst. Ser. A, 28 (2010), 1669-1691. |
[25] | T. Suzuki, K. Takasao, and N. Yamazaki, Remarks on numerical experiments of Allen-Cahn equations with constraint via Yosida approximation. Adv. Numer. Anal., 2016, Article ID 1492812, 16 pages. |
[26] | Y. Tonegawa, Integrality of varifolds in the singular limit of reaction-diffusion equations. Hiroshima Math. J., 33 (2003), 323-341. |
[27] | X. Yang, Error analysis of stabilized semi-implicit method of Allen-Cahn equation. Discrete Contin. Dyn. Syst. Ser. B, 11 (2009), 1057-1070. |
[28] | J. Zhang and Q. Du, Numerical studies of discrete approximations to the Allen-Cahn equation in the sharp interface limit. SIAM J. Sci. Comput., 31 (2009), 3042-3063. |