Citation: Tomoyuki Suzuki, Keisuke Takasao, Noriaki Yamazaki. New approximate method for the Allen–Cahn equation with double-obstacle constraint and stability criteria for numerical simulations[J]. AIMS Mathematics, 2016, 1(3): 288-317. doi: 10.3934/Math.2016.3.288
For each $\varepsilon \in (0, 1]$, we consider the following Allen-Cahn equation with double obstacle constraint:
$u^\varepsilon_t - \Delta u^\varepsilon +\frac{\partial I_{[-1, 1]}(u^\varepsilon)}{\varepsilon^2} \ni \frac{ u^\varepsilon }{\varepsilon^2} \ \mbox{ in } Q:= (0,T)\times \Omega,$ | (1.1) |
$\frac{\partial u^\varepsilon}{\partial \nu }=0 \ \mbox{ on } \Sigma := (0,T) \times \Gamma,$ | (1.2) |
$u^\varepsilon(0)=u_0^\varepsilon \ \mbox{ a.e. in } \Omega,$ | (1.3) |
where $0 < T < \infty$, $\Omega$ is a bounded domain in ${\mathbb R}^N$ $(1\le N < +\infty)$ with Lipschitz boundary $\Gamma :=\partial \Omega$, $\nu$ is an outward normal vector on $\Gamma$ and $u_0^\varepsilon$ is a given initial value. Also, the double obstacle constraint $\partial I_{[-1, 1]} (\cdot)$ is the subdifferential of the indicator function $ I_{[-1, 1]} (\cdot)$ on the closed interval [-1, 1] defined by
$I_{[-1, 1]} (z) := \left\{ \begin{array}{cl} 0 ,& \mbox{ if } z \in [-1, 1],\\ + \infty,& \mbox{ otherwise. } \end{array} \right.$ | (1.4) |
More precisely, $\partial I_{[-1, 1]} (\cdot)$ is a set-valued mapping defined by
$\partial {I_{[- 1,1]}}(z): = \left\{ {\begin{array}{*{20}{l}} {\emptyset ,}&{{\text{ if }}z < - 1{\text{ or }}z > 1,} \\ {[0,\infty ),\quad }&{{\text{ if }}z = 1,} \\ {\{ 0\} ,}&{{\text{ if }} - 1 < z < 1,} \\ {( - \infty ,0],}&{{\text{ if }}z = - 1.} \end{array}} \right.$ | (1.5) |
The Allen--Cahn equation was proposed to describe the macroscopic motion of phase boundaries. In this physical context, the function $u^\varepsilon=u^\varepsilon (t, x)$ in (P)$^\varepsilon$:={(1.1), (1.2), (1.3)} is a non-conserved order parameter that characterizes the physical structure. Indeed, let $v=v(t, x)$ be the local ratio of the volume of a pure liquid relative to that of a pure solid at time $t$ and position $x \in \Omega $, defined by
$v(t,x):=\lim_{r \downarrow 0} \frac{\mbox{volume of pure liquid in $B_r(x)$ at time $t$}}{|B_r(x)|},$ |
where $B_r(x)$ is the ball in ${\mathbb R}^N$ with center $x$ and radius $r$ and $|B_r(x)|$ denotes its volume. Setting $u^\varepsilon (t, x):=2v(t, x)-1$ for any $(t, x) \in Q$, we then observe that $u^\varepsilon (t, x)$ is the non-conserved order parameter that characterizes the physical structure:
$\left\{ \begin{array}{cl} u^\varepsilon (t,x)=1 & \mbox{for the pure liquid region,}\\ u^\varepsilon (t,x)=-1 & \mbox{for the pure solid region,}\\ -1<u^\varepsilon (t,x) <1 & \mbox{for the mixed region.} \end{array}\right.$ |
Therefore, $u^\varepsilon$ has two threshold values $1$ and $-1$, and hence the constraint $\partial I_{[-1, 1]} (\cdot)$ that appears in (1.1).
There is a vast literature on the Allen--Cahn equation with and without the constraint $\partial I_{[-1, 1]} (\cdot)$; for these studies, we refer to [1,3,6,7,12,8,9,20,22,26]. In particular, Bronsard and Kohn [6] studied the singular limit of (P)$^\varepsilon$ as $\varepsilon \to 0$ with a bistable potential $W$ having two wells of equal depth and without the constraint $\partial I_{[-1, 1]} (\cdot)$. Also, Chen and Elliott [7] considered the asymptotic behavior of the solution to (P)$^\varepsilon$ as $\varepsilon \to 0$. However, [7] gave no details about elements of the constraint $\partial I_{[-1, 1]}(u^\varepsilon)$ as $\varepsilon \to 0$. Recently, Farshbaf-Shaker et al. [8] provided results concerning properties of elements of $\partial I_{[-1, 1]}(u^\varepsilon)$ for the problem (P)$^\varepsilon$ as $\varepsilon \to 0$.
Also, there is a vast literature on the numerical analysis of the Allen--Cahn equation without the constraint $\partial I_{[-1, 1]} (\cdot)$; for these studies, we refer to [10,11,24,27,28]. However, numerical experiments for (P)$^\varepsilon$ are hard to perform, because the double obstacle constraint $\partial I_{[-1, 1]}(\cdot)$ is a multivalued function (cf. (1.5)). Recently, Blank et al. [3] proposed the primal-dual active set algorithm as a numerical method for the local and nonlocal Allen--Cahn variational inequalities with constraint. Also, Farshbaf-Shaker et al. [8] obtained results for the limit, as $\varepsilon \to 0$, of a solution $u^\varepsilon$ to (P)$^\varepsilon$ and of an element of $\partial I_{[-1, 1]}(u^\varepsilon)$, called the Lagrange multiplier. Moreover, in [9] they provided a numerical experiment for (P)$^\varepsilon$ via the Lagrange multiplier in one space dimension for sufficiently small $\varepsilon \in (0, 1]$.
Approximate methods are used to obtain a priori estimates of the solutions to (P)$^\varepsilon$ (cf. [18,21,22,23]). Indeed, the Yosida approximation of $\partial I_{[-1, 1]}(\cdot)$ is often used: for $\delta >0$, the Yosida approximation $(\partial I_{[-1, 1]})_\delta (\cdot)$ of $\partial I_{[-1, 1]}(\cdot)$ is defined by:
$\displaystyle (\partial I_{[-1, 1]})_\delta (z)=\frac{[z-1]^+-[-1-z]^+}{\delta},\quad \forall z\in {\mathbb R},$ | (1.6) |
where $[z]^+$ is the positive part of $z$. Then, by considering approximate problems of (P)$^\varepsilon$ and letting $\delta \downarrow 0$, we can obtain estimates of solutions to (P)$^\varepsilon$.
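For readers who wish to experiment with the schemes considered below, the following is a minimal Python sketch of formula (1.6); the function name `yosida_subdiff` and the use of NumPy are our own choices, not part of the original analysis.

```python
import numpy as np

def yosida_subdiff(z, delta):
    """Yosida approximation (1.6) of the subdifferential of I_{[-1,1]}:
    ([z-1]^+ - [-1-z]^+) / delta, applied componentwise."""
    z = np.asarray(z, dtype=float)
    return (np.maximum(z - 1.0, 0.0) - np.maximum(-1.0 - z, 0.0)) / delta
```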
Also, numerical experiments for the approximate problem of (P)$^\varepsilon$ obtained via the Yosida approximation $(\partial I_{[-1, 1]})_\delta (\cdot)$ of $\partial I_{[-1, 1]}(\cdot)$ have often been provided. Recently, in [25], the authors clarified the role of the stability condition in providing stable numerical experiments for this approximate problem in the one-dimensional case. However, we observed that the numerical solution to the approximate equation takes values outside $[-1, 1]$ (cf. [25, Table 1]).
number of iterations $i$ | the value of $u^i$ for (DE)$^\varepsilon_\delta$ | the value of $u^i$ for (YDE)$^\varepsilon_\delta$
0 | 0.100000 | 0.100000 |
1 | 0.101000 | 0.101000 |
2 | 0.102010 | 0.102010 |
3 | 0.103030 | 0.103030 |
4 | 0.104060 | 0.104060 |
5 | 0.105101 | 0.105101 |
| | |
224 | 0.928940 | 0.928940 |
225 | 0.938230 | 0.938230 |
226 | 0.947612 | 0.947612 |
227 | 0.957088 | 0.957088 |
228 | 0.966659 | 0.966659 |
229 | 0.976325 | 0.976325 |
230 | 0.986089 | 0.986089 |
231 | 0.995950 | 0.995950 |
232 | 0.999959 | 1.005909 |
233 | 1.000000 | 1.010059 |
234 | 1.000000 | 1.010101 |
235 | 1.000000 | 1.010101 |
236 | 1.000000 | 1.010101 |
237 | 1.000000 | 1.010101 |
| | |
In this paper, we propose a new approximate method for $\partial I_{[-1, 1]}(\cdot)$ so that the numerical solutions to the approximate equation take values in [-1, 1]. Indeed, for each $\delta \in (0, 1)$, we approximate $\partial I_{[-1, 1]}(\cdot)$ by the following function $K_\delta (\cdot)$, defined by:
$\displaystyle K_\delta (z):=\left\{ \begin{array}{cl} \displaystyle \frac{z-1+\delta}{\delta},& \mbox{ if } z>1-\delta,\\ 0,& \mbox{ if } z\in [-1+\delta,1-\delta],\\ \displaystyle \frac{z+1-\delta}{\delta},& \mbox{ if } z<-1+\delta . \end{array}\right.$ | (1.7) |
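For comparison with (1.6), the following is a minimal Python sketch of (1.7); the function name `K_delta` and the vectorized NumPy form are our own choices and only one possible implementation.

```python
import numpy as np

def K_delta(z, delta):
    """Approximation (1.7) of the subdifferential of I_{[-1,1]}.

    In contrast with the Yosida approximation (1.6), which vanishes on
    all of [-1, 1], K_delta is already nonzero on (1-delta, 1] and
    [-1, -1+delta)."""
    z = np.asarray(z, dtype=float)
    return np.where(z > 1.0 - delta, (z - 1.0 + delta) / delta,
                    np.where(z < -1.0 + delta, (z + 1.0 - delta) / delta,
                             0.0))
```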
Then, for each $\varepsilon \in (0, 1]$ and $\delta \in (0, 1)$, we study the following approximate problem of (P)$^\varepsilon$, denoted by (P)$^\varepsilon_\delta$:
${\text{(P)}}_\delta ^\varepsilon \left\{ {\begin{array}{*{20}{c}} {{{(u_\delta ^\varepsilon )}_t} - \Delta u_\delta ^\varepsilon + \frac{{{K_\delta }(u_\delta ^\varepsilon )}}{{{\varepsilon ^2}}} = \frac{{u_\delta ^\varepsilon }}{{{\varepsilon ^2}}}\quad {\text{ in }}Q = (0,T) \times \Omega ,} \\ {\frac{{\partial u_\delta ^\varepsilon }}{{\partial \nu }} = 0\;{\text{ on }}\Sigma = (0,T) \times \Gamma ,} \\ {u_\delta ^\varepsilon (0) = u_0^\varepsilon \;{\text{ a}}{\text{.e}}{\text{. in }}\Omega .} \end{array}} \right.$ |
The aim is to show the validity of our approximate method defined by (1.7). Moreover, for each $\varepsilon >0$ and $\delta >0$, we present criteria for the standard explicit finite difference scheme to ensure stable numerical experiments of (P)$^\varepsilon_\delta$ in a two-dimensional (2D) space. To this end, we consider the following ODE problem, denoted by (E)$^\varepsilon_\delta$:
${\text{(E)}}_\delta ^\varepsilon \left\{ \begin{array}{l} (u_\delta ^\varepsilon )_t + \dfrac{K_\delta (u_\delta ^\varepsilon )}{\varepsilon ^2} = \dfrac{u_\delta ^\varepsilon }{\varepsilon ^2} \ \mbox{ in } \mathbb{R}, \ \mbox{ for } t \in (0,T), \\ u_\delta ^\varepsilon (0) = u_0^\varepsilon \ \mbox{ in } \mathbb{R}. \end{array} \right.$ |
Note that the unique solution $ u^\varepsilon_\delta(t) $ to (E)$^\varepsilon_\delta$ takes values in $(0, 1)$ (resp. $(-1, 0)$) for all $t\in [0, T]$ if the initial value $u^\varepsilon_0$ takes values in $(0, 1)$ (resp. $(-1, 0)$) (see (i) of Corollary 2.1). Also, note that $u _\delta ^\varepsilon \equiv 0, \pm 1$ are stationary solutions to (E)$^\varepsilon _\delta$ (see Remark 3.2). We then find criteria that yield stable numerical experiments for (E)$^\varepsilon_\delta$. Also, we perform some numerical simulations of (E)$^\varepsilon_\delta$. Finally, taking into account the theoretical results for (E)$^\varepsilon_\delta$, we derive criteria ensuring stable numerical experiments for the 2D PDE problem (P)$^\varepsilon_\delta$.
(a) We show the existence-uniqueness of a solution $u^\varepsilon_\delta$ to $\mbox{(P)}^\varepsilon_\delta$ on [0, T] for each $\varepsilon\in (0, 1]$ and $\delta \in (0, 1)$. Also, we prove that $u^\varepsilon_\delta$ converges to the solution $u^\varepsilon$ to $\mbox{(P)}^\varepsilon$ on [0, T] as $\delta \to 0$.
(b) We present the criteria that ensure stable numerical simulations of the ODE problem (E)$^\varepsilon_\delta$. Also, we provide numerical experiments to (E)$^\varepsilon_\delta$ for small $\varepsilon \in (0, 1]$ and $\delta \in (0, 1)$.
(c) We consider instances when $\Omega$ is a bounded domain in ${\mathbb R}^2$. Then, we give the criteria yielding stable numerical simulations of the PDE problem (P)$^\varepsilon_\delta$. Also, we provide the numerical experiments of (P)$^\varepsilon_\delta$ for small $\varepsilon \in (0, 1]$ and $\delta \in (0, 1)$.
The plan of this paper is as follows. In Section 2, we discuss the validity of our approximate method defined by (1.7). Indeed, we prove the main result (Theorem 2.1) concerning the solvability and convergence results for (P)$^\varepsilon_\delta$ corresponding to item (a) above. In Section 3, we consider (E)$^\varepsilon_\delta$ numerically. We prove the main result (Theorem 3.1) corresponding to item (b) above and provide several numerical experiments for (E)$^\varepsilon_\delta$ for small $\varepsilon \in (0, 1]$ and $\delta \in (0, 1)$. In the final Section 4, we consider (P)$^\varepsilon_\delta$ in a 2D space from the viewpoint of numerical analysis. We then prove the main result (Theorem 4.1) corresponding to item (c) above and provide numerical experiments for (P)$^\varepsilon_\delta$ for small $\varepsilon \in (0, 1]$ and $\delta \in (0, 1)$.
Notation and basic assumptions
Throughout this paper, we consider the usual real Hilbert space structure denoted by $H :=L^2(\Omega)$. The inner product in $H$ is denoted by $(\cdot, \cdot)_H$. We also write $V:=H^1(\Omega)$.
In Section 2, we use some techniques of proper (that is, not identically equal to infinity), l.s.c. (lower semi-continuous), convex functions and their subdifferentials, which are useful in the systematic study of variational inequalities. Therefore, let us outline some notation and definitions. For a proper, l.s.c., convex function $\psi :H\rightarrow{\mathbb R}\cup\{+ \infty\}$, the effective domain $D(\psi)$ is defined by
$D(\psi):=\{z\in H;\ \psi(z) < \infty\}.$ |
The subdifferential of $\psi$ is a possibly multi-valued operator in $H$ and is defined by $ z^*\in\partial \psi(z)$ if and only if
$z \in D(\psi)\quad \mbox{and}\quad (z^*,y-z)_H \leq \psi(y) - \psi(z)\quad \mbox{for all } y\in H.$ |
Next, we recall the notion of convergence for convex functions developed by Mosco [19].
Definition 1.1 (cf. [19]). Let $\psi $, $\psi_n $ ($ n \in {\mathbb N }$) be proper, l.s.c., convex functions on $H$. Then, we say that $\psi_n$ converges to $\psi$ on $H$ in the sense of Mosco [19] as $n \to \infty$, if the following two conditions are satisfied:
(M1) for any subsequence $\{ \psi_{ n_k } \} \subset \{\psi_n \} $, if $z_k \to z$ weakly in $H$ as $k\to \infty $, then
$\liminf_{ k \to \infty } \psi_{ n_k } ( z_k ) \geq \psi (z);$ |
(M2) for any $z \in D(\psi) $, there is a sequence $\{z_n \}$ in $H$ such that
$z_n \to z \ {\rm in} \ H \ {\rm as} \ n \to \infty \quad \mbox{and } \quad \lim_{ n \to \infty } \psi_n ( z_n ) = \psi (z).$ |
For various properties and related notions of the proper, l.s.c., convex function $\psi$ and its subdifferential $\partial \psi $, we refer to the monograph by Brézis [4].
Next we present condition (A) on the data, which we assume throughout the paper.
(A) $u_0^\varepsilon \in K:=\{z\in V;\ |z|\le 1 \mbox{ a.e. in } \Omega \}$.
We begin by giving a rigorous definition of solutions to (P)$^\varepsilon_\delta$ ($\varepsilon \in (0, 1]$ and $\delta \in (0, 1)$).
Definition 2.1. Let $\varepsilon \in (0, 1]$, $\delta \in (0, 1)$ and $u^\varepsilon_0 \in H$. Then, a function $u^\varepsilon_\delta :[0, T]\to H$ is called a solution to (P)$^\varepsilon_\delta$ on [0, T], if the following conditions are satisfied:
(i) $u^\varepsilon_\delta \in W^{1, 2}(0, T;H)\cap L^\infty (0, T;V)$.
(ii) The following variational identity holds:
$\left( (u_\delta ^\varepsilon )_t(t),z \right)_H + \int_\Omega \nabla u_\delta ^\varepsilon (t) \cdot \nabla z\, dx + \left( \frac{K_\delta (u_\delta ^\varepsilon (t))}{\varepsilon ^2},z \right)_H = \left( \frac{u_\delta ^\varepsilon (t)}{\varepsilon ^2},z \right)_H \ \mbox{ for all } z \in V \mbox{ and a.e. } t \in (0,T).$ |
(iii) $u^\varepsilon_\delta (0)=u^\varepsilon_0$ in $H$.
Also, we give a rigorous definition of solutions to the problem (P)$^\varepsilon$ ($\varepsilon \in (0, 1]$).
Definition 2.2. Let $\varepsilon \in (0, 1]$ and $u^\varepsilon_0 \in H$. Then, a function $u^\varepsilon :[0, T]\to H$ is called a solution to (P)$^\varepsilon$ on [0, T], if the following conditions are satisfied:
(i) $u^\varepsilon \in W^{1, 2}(0, T;H)\cap L^\infty (0, T;V)$, and $|u^\varepsilon|\le 1$ a.e. in $Q$.
(ii) The following variational inequality holds:
$\left( u_t^\varepsilon (t) - \frac{1}{\varepsilon ^2}u^\varepsilon (t),\, u^\varepsilon (t) - z \right)_H + \int_\Omega \nabla u^\varepsilon (t) \cdot (\nabla u^\varepsilon (t) - \nabla z)\,dx \leq 0 \ \mbox{ for all } z \in K \mbox{ and a.e. } t \in (0,T).$ |
(iii) $u^\varepsilon(0)=u^\varepsilon_0 $ in $H$.
Here we mention the result concerning the existence-uniqueness of solutions for (P)$^\varepsilon$ on [0, T] for each $\varepsilon \in (0, 1]$.
Proposition 2.1 (cf. [5,15]). Assume (A). Then, for each $\varepsilon \in (0, 1]$ and $u_0^\varepsilon \in K$, there exists a unique solution $u^\varepsilon$ to (P)$^\varepsilon $ on [0, T] in the sense of Definition 2.2.
Proof. Applying the abstract theory of nonlinear evolution equations (cf. [5,15]), we can prove this Proposition 2.1. Indeed, for each $\varepsilon \in (0, 1]$, we define a functional $\varphi ^\varepsilon$ on $H$ by
$\varphi ^\varepsilon (z)\!:= \!\left\{ \begin{array}{cl} \!\! \displaystyle \frac{1}2 \int_\Omega |\nabla z|^2dx + \frac{1}{\varepsilon^2}\int_\Omega I_{[-1, 1]} (z(x)) dx,& \mbox{if } z \in V \mbox{ with } I_{[-1, 1]} (z)\in L^1(\Omega),\\ \infty,& \mbox{otherwise. } \end{array} \right.$ | (2.1) |
Clearly, $\varphi^\varepsilon$ is proper, l.s.c. and convex on $H$ with the effective domain $D(\varphi^\varepsilon)=\{z\in V;\ I_{[-1, 1]} (z(\cdot))\in L^1(\Omega)\}$. Then, the problem (P)$^\varepsilon$ can be rewritten as an abstract evolution equation of the form:
${{\text{(CP)}}^\varepsilon }\left\{ {\begin{array}{*{20}{l}} {\frac{d}{{dt}}{u^\varepsilon }(t) + \partial {\varphi ^\varepsilon }({u^\varepsilon }(t)) - \frac{1}{{{\varepsilon ^2}}}{u^\varepsilon }(t) \ni 0{\text{ in }}H,{\text{ for }}t > 0,} \\ {{u^\varepsilon }(0) = u_0^\varepsilon \;{\text{ in }}H.} \end{array}} \right.$ |
Therefore, applying the Lipschitz perturbation theory of abstract evolution equations (cf. [5,15]), we demonstrate the existence-uniqueness of a solution $u^\varepsilon$ to (CP)$ ^\varepsilon $, hence (P)$^\varepsilon $, on [0, T] for each $\varepsilon \in (0, 1]$ in the sense of Definition 2.2. Thus, the proof of Proposition 2.1 is complete.
Now, we mention the first main result concerning the solvability and convergence of solutions to (P)$^\varepsilon_\delta$ on [0, T].
Theorem 2.1. Assume (A). Then, for each $\varepsilon \in (0, 1]$, $\delta \in (0, 1)$ and $u_0^\varepsilon \in K$, there exists a unique solution $u^\varepsilon_\delta$ to (P)$^\varepsilon_\delta $ on [0, T] in the sense of Definition 2.1. Moreover, the following statements hold:
(i) If the initial value $ u^\varepsilon_0(x) $ takes the value in $[0,1]$ (resp. $[-1,0]$) for a.e. $x\in \Omega$, the solution $ u^\varepsilon_\delta(t, x) $ also takes the value in $[0,1]$ (resp. $[-1,0]$) for a.e. $(t, x)\in Q$.
(ii) $u^\varepsilon_\delta$ converges to the unique solution $u^\varepsilon$ of (P)$^\varepsilon $ on [0, T] in the sense that
$u^\varepsilon_\delta \to u^\varepsilon \ \mbox{ in } C([0,T];H) \ \mbox{ as } \delta \to 0.$ | (2.2) |
To prove Theorem 2.1, we define a primitive $\hat{K}_\delta$ of $K_\delta$ by
${\hat K_\delta }(z): = \left\{ {\begin{array}{*{20}{l}} {\frac{1}{{2\delta }}{z^2} - \frac{{1 - \delta }}{\delta }z + \frac{{{{(1 - \delta )}^2}}}{{2\delta }},{\text{ if }}z > 1 - \delta ,} \\ { 0,{\text{ }} {\text{if }}z \in [- 1 + \delta ,1 - \delta],} \\ {\frac{1}{{2\delta }}{z^2} + \frac{{1 - \delta }}{\delta }z + \frac{{{{(1 - \delta )}^2}}}{{2\delta }},{\text{ if }}z < - 1 + \delta .} \end{array}} \right.$ | (2.3) |
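As a side remark (ours, though it is an immediate consequence of (2.3)), completing the square puts $\hat{K}_\delta$ into a form that makes both its convexity and its relation with (1.7) transparent:

$\hat{K}_\delta (z) = \displaystyle\frac{(z-(1-\delta))^2}{2\delta} \ \mbox{ if } z>1-\delta, \qquad \hat{K}_\delta (z) = \displaystyle\frac{(z+(1-\delta))^2}{2\delta} \ \mbox{ if } z<-1+\delta.$ |

In particular, for $z>1-\delta$ we have $\frac{d}{dz}\hat{K}_\delta (z)=\frac{z-(1-\delta)}{\delta}=\frac{z-1+\delta}{\delta}=K_\delta (z)$, and similarly for $z<-1+\delta$; this quadratic form also agrees with the expansion carried out in the proof of Lemma 2.1 below.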
Clearly, $\hat{K}_\delta (\cdot)$ is continuous and convex on ${\mathbb R}$ with $\partial \hat{K}_\delta(\cdot)=K_\delta(\cdot)$ in ${\mathbb R}$, where $K_\delta(\cdot)$ is the function defined by (1.7). Then, we observe from (1.4) and (2.3) that the following lemma holds.
Lemma 2.1 (cf. [2, Section 5], [4, Chapter 2], [17, Section 2]). Let $I_{[-1, 1]}(\cdot)$ and $\hat{K}_\delta (\cdot)$ be convex functions defined by (1.4) and (2.3), respectively. Then,
$\hat K_\delta ( \cdot ) \to I_{[- 1,1]}( \cdot ) \ \mbox{ on } {\mathbb R} \mbox{ in the sense of Mosco [19] as } \delta \to 0.$ | (2.4) |
Proof. We first check condition (M1). Let $\{\delta_k \} \subset (0, 1)$, $\{ z_k \} \subset {\mathbb R}$ and $z\in {\mathbb R}$ be such that
$\delta_k \downarrow 0 \ \mbox{ and } \ z_k \to z \ \mbox{ weakly in } {\mathbb R} \ \mbox{ as } k \to +\infty.$ |
As ${\mathbb R}$ is a one-dimensional space, we observe that
$z_k \to z \ \mbox{ in } {\mathbb R} \mbox{ as } k \to +\infty .$ | (2.5) |
If $z \in [-1, 1]$, we easily observe from (1.4) and (2.3) that
$\liminf_{k\to +\infty} \hat{K}_{\delta_k} (z_k) \geq I_{[-1, 1]} ( z )=0.$ |
Now we assume that $z>1$. Then, there exists a small positive constant $\mu$ such that
$z \geq 1+\mu>1.$ |
Then, from (2.5), there exists a number $k_\mu \in {\mathbb N}$ satisfying
$|z_k-z|<\frac{\mu}{2} \ \mbox{ for all } k \geq k_\mu.$ |
Therefore, we have
$z_k> z-\frac{\mu}{2}\geq 1+\frac{\mu}2>1-\delta_k \ \mbox{ for all } k \geq k_\mu.$ | (2.6) |
Hence, we infer from (2.3) and (2.6) that
$\begin{gathered} {{\hat K}_{{\delta _k}}}({z_k}) \geq {{\hat K}_{{\delta _k}}}\left( {1 + \frac{\mu }{2}} \right) \\ = \frac{1}{{2{\delta _k}}}{\left( {1 + \frac{\mu }{2}} \right)^2} - \frac{{1 - {\delta _k}}}{{{\delta _k}}}\left( {1 + \frac{\mu }{2}} \right) + \frac{{{{(1 - {\delta _k})}^2}}}{{2{\delta _k}}} \\ = \frac{{{\mu ^2}}}{{8{\delta _k}}} + \frac{\mu }{2} + \frac{{{\delta _k}}}{2} \\ \to + \infty \;{\text{ as }}k \to + \infty . \\ \end{gathered} $ |
Thus, we observe that
$\liminf_{k\to +\infty} \hat{K}_{\delta_k} (z_k) =+\infty= I_{[-1, 1]} ( z ).$ |
Similarly, if $z < -1$, we have:
$\liminf_{k\to +\infty} \hat{K}_{\delta_k} (z_k) =+\infty= I_{[-1, 1]} ( z ).$ |
Thus (M1) holds.
Next we establish (M2). Assume that $\delta_n \downarrow 0$ as $n\to + \infty$ and $z \in D(I_{[-1, 1]})$, namely, $z \in [-1, 1]$. Put $z_n=z$ for all $n \in {\mathbb N}$. Then, we easily observe from (2.3) that
$\lim_{n\to +\infty} \hat{K}_{\delta_n} (z_n) =0= I_{[-1, 1]} ( z ).$ |
Therefore (M2) holds.
This completes the proof of Lemma 2.1.
We observe from Lemma 2.1 and the general result of Mosco convergence (cf. [2, Theorem 3.66], [17, Theorem 8.1], [14, Proposition.9]) that
$\partial \hat K_\delta ( \cdot ) = K_\delta ( \cdot ) \ \mbox{ converges to } \partial I_{[- 1,1]}( \cdot ) \ \mbox{ in the sense of resolvent convergence as } \delta \to 0.$ |
In this sense, the function $K_\delta(\cdot)$ defined by (1.7) is a suitable approximation of $\partial I_{[-1, 1]}(\cdot)$.
Taking into account Lemma 2.1, we have the following lemma; we omit the detailed proof here.
Lemma 2.2 (cf. [2, Section 5], [4, Chapter 2], [17, Section 2]). Let $\varepsilon \in (0, 1]$ and $\delta \in (0, 1)$, and let $\hat{K}_\delta (\cdot)$ be the convex function defined by (2.3). Define
$\varphi _\delta ^\varepsilon (z):= \left\{ \begin{array}{cl} \displaystyle \frac{1}{2}\int_\Omega | \nabla z|^2dx + \frac{1}{\varepsilon ^2}\int_\Omega \hat K_\delta (z(x))\,dx, & \mbox{ if } z \in V, \\ \infty , & \mbox{ otherwise. } \end{array} \right.$ | (2.7) |
Then, $\varphi ^\varepsilon_\delta$ is proper, l.s.c. and convex on $H$ with the effective domain $D(\varphi^\varepsilon_\delta)=V $. Moreover,
$\varphi _\delta ^\varepsilon ( \cdot ) \to \varphi ^\varepsilon ( \cdot ) \ \mbox{ on } H \mbox{ in the sense of Mosco [19] as } \delta \to 0,$ | (2.8) |
where $\varphi ^\varepsilon$ is the proper, l.s.c., convex functional on $H$ defined by (2.1).
Now let us prove Theorem 2.1 considering the solvability and convergence of solutions to (P)$^\varepsilon_\delta$ on [0, T].
Proof of Theorem 2.1. By the argument similar to (P)$^\varepsilon$ (cf. Proposition 2.1), we can show the existence-uniqueness of solutions to (P)$_\delta^\varepsilon$ on [0, T] for each $\varepsilon \in (0, 1]$ and $\delta \in (0, 1)$. Indeed, we infer from Lemma 2.2 that $\varphi^\varepsilon_\delta$ is proper, l.s.c. and convex on $H$ with the effective domain $D(\varphi^\varepsilon_\delta)=V$. Also, we observe that problem (P)$_\delta^\varepsilon$ can be rewritten as an abstract evolution equation of the form:
${\text{(CP)}}_\delta ^\varepsilon \left\{ \begin{array}{l} \dfrac{d}{dt}u_\delta ^\varepsilon (t) + \partial \varphi _\delta ^\varepsilon (u_\delta ^\varepsilon (t)) - \dfrac{1}{\varepsilon ^2}u_\delta ^\varepsilon (t) = 0 \ \mbox{ in } H, \ \mbox{ for } t > 0, \\ u_\delta^\varepsilon (0) = u_0^\varepsilon \ \mbox{ in } H. \end{array} \right.$ |
Therefore, applying the Lipschitz perturbation theory of abstract evolution equations (cf. [5,15]), we can show the existence-uniqueness of a solution $u^\varepsilon_\delta$ to (CP)$_\delta ^\varepsilon $, hence (P)$_\delta ^\varepsilon $, on [0, T] for each $\varepsilon \in (0, 1]$ and $\delta \in (0, 1)$ in the sense of Definition 2.1.
Now we show (i) by the maximum principle arguments (cf. [13]). We present the proof only for initial values $u_0^\varepsilon (x) \in [0,1]$ for a.e. $x \in \Omega $, because for $u_0^\varepsilon (x) \in [-1,0]$ the same arguments apply.
Assigning $[u^\varepsilon_\delta (\tau)-1]^+$ to $z$ in (ii) of Definition 2.1, we get
$\begin{array}{*{20}{c}} {\frac{1}{2}\frac{d}{{d\tau }}\left| {{{[u_\delta ^\varepsilon (\tau ) - 1]}^ + }} \right|_H^2 + \left| {\nabla {{[u_\delta ^\varepsilon (\tau ) - 1]}^ + }} \right|_H^2 + {{\left( {\frac{{{K_\delta }(u_\delta ^\varepsilon (\tau ))}}{{{\varepsilon ^2}}},{{[u_\delta ^\varepsilon (\tau ) - 1]}^ + }} \right)}_H} = {{\left( {\frac{{u_\delta ^\varepsilon (\tau )}}{{{\varepsilon ^2}}},{{[u_\delta ^\varepsilon (\tau ) - 1]}^ + }} \right)}_H}} \\ {{\text{for a}}{\text{.e}}. \tau \in (0,T).} \end{array}$ | (2.9) |
Adding $\left(-1/\varepsilon^2, [u^\varepsilon_\delta (\tau)-1]^+ \right)_H$ to both sides of (2.9) and discarding the non-negative gradient term, we observe that
$\begin{array}{*{20}{c}} {\frac{1}{2}\frac{d}{{d\tau }}\left| {{{[u_\delta ^\varepsilon (\tau ) - 1]}^ + }} \right|_H^2 + \frac{1}{{{\varepsilon ^2}}}{{\left( {{K_\delta }(u_\delta ^\varepsilon (\tau )) - 1,{{[u_\delta ^\varepsilon (\tau ) - 1]}^ + }} \right)}_H} \leq \frac{1}{{{\varepsilon ^2}}}\left| {{{[u_\delta ^\varepsilon (\tau ) - 1]}^ + }} \right|_H^2} \\ {{\text{for a}}{\text{.e}}. \tau \in (0,T).} \end{array}$ | (2.10) |
Because $K_\delta(\cdot)$ is monotone in ${\mathbb R}$ with $K_\delta(1)=1$, we infer from (2.10) that
$\frac{d}{{d\tau }}\left| {{{[u_\delta ^\varepsilon (\tau ) - 1]}^ + }} \right|_H^2 \leq \frac{2}{{{\varepsilon ^2}}}\left| {{{[u_\delta ^\varepsilon (\tau ) - 1]}^ + }} \right|_H^2\;{\text{for a}}{\text{.e}}. \tau \in (0,T).$ | (2.11) |
Therefore, applying the Gronwall lemma to (2.11), we observe from $u_0^\varepsilon (x) \in [0,1]$ for a.e. $x \in \Omega $ that
$e^{-\frac{2}{\varepsilon^2}t}\left| [u^\varepsilon_\delta (t)-1]^+\right|_H^2 \le \left| [u^\varepsilon_0-1]^+\right|_H^2=0 \ \mbox{ for all }t \in [0,T],$ |
which implies that
$u^\varepsilon_\delta (t,x)\le 1 \ \mbox{ for a.e. } (t,x)\in Q.$ | (2.12) |
Next, assigning $[0-u^\varepsilon_\delta]^+$ to $z$ in (ii) of Definition 2.1, we infer from similar arguments as above that
$u^\varepsilon_\delta (t,x) \ge 0 \ \mbox{ for a.e. } (t,x)\in Q.$ | (2.13) |
Thus, we conclude from (2.12) and (2.13) that (i) of Theorem 2.1 holds.
Now we show (ii). Note from Lemma 2.2 that $\varphi^\varepsilon_\delta (\cdot) \to \varphi^\varepsilon (\cdot)$ on $H$ in the sense of Mosco [19] as $\delta \to 0 $ (cf. (2.8)). Therefore, from the abstract convergence theory of evolution equations (cf. [2,16]), we observe that $u^\varepsilon_\delta$ converges to the unique solution $u^\varepsilon $ of (CP)$^\varepsilon$ on [0, T] as $\delta \to 0$ in the sense of (2.2). As $u^\varepsilon$ (resp. $u^\varepsilon_\delta$) is the unique solution to (P)$^\varepsilon$ (resp. (P)$^\varepsilon_\delta$) on [0, T], we conclude that the convergence result (2.2) holds.
Thus, the proof of Theorem 2.1 is complete.
By arguments similar to the proof of Theorem 2.1, the following result of (E)$_\delta^\varepsilon$ holds. Hence, its detailed proof is omitted.
Corollary 2.1 (cf. [5,15]). Let $\varepsilon \in (0, 1]$, $\delta \in (0, 1)$, and $u^\varepsilon_0 \in {\mathbb R}$ with $|u^\varepsilon_0| \le 1$. Then, there exists a unique solution $u^\varepsilon_\delta :[0, T]\to \mathbb{R}$ to (E)$^\varepsilon_\delta $ on [0, T] such that $u^\varepsilon_\delta \in W^{1, 2}(0, T)$ and the following equation holds:
$({\text{E}})_\delta ^\varepsilon \left\{ \begin{array}{l} (u_\delta ^\varepsilon )_t + \dfrac{K_\delta (u_\delta ^\varepsilon )}{\varepsilon ^2} = \dfrac{u_\delta ^\varepsilon }{\varepsilon ^2} \ \mbox{ in } \mathbb{R} \mbox{ for a.e. } t \in (0,T), \\ u_\delta ^\varepsilon (0) = u_0^\varepsilon \ \mbox{ in } \mathbb{R}. \end{array} \right.$ |
Moreover, the following statements hold:
(i) If the initial value $ u^\varepsilon_0$ takes the value in $(0, 1)$ (resp. $(-1, 0)$), the solution $ u^\varepsilon_\delta(t) $ also takes the value in $(0, 1)$ (resp. $(-1, 0)$) for all $t\in [0, T]$.
(ii) There exists a unique function $u^\varepsilon \in W^{1, 2}(0, T)$ such that
$u_\delta ^\varepsilon \to {u^\varepsilon }\;{\text{ in }}C([0,T]){\text{ as }}\;\delta \to 0$ |
and $u^\varepsilon$ is the unique solution of the following problem (E)$^\varepsilon$ on $[0, T] $:
${({\text{E}})^\varepsilon }\left\{ \begin{array}{l} u_t^\varepsilon + \dfrac{\partial I_{[- 1,1]}(u^\varepsilon )}{\varepsilon ^2} \ni \dfrac{u^\varepsilon }{\varepsilon ^2} \ \mbox{ in } \mathbb{R} \mbox{ for a.e. } t \in (0,T), \\ u^\varepsilon (0) = u_0^\varepsilon \ \mbox{ in } \mathbb{R}. \end{array} \right.$ |
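As an illustrative calculation (ours, not part of the original statement), the piecewise form (1.7) makes the dynamics of (E)$^\varepsilon_\delta$ explicit for positive data: as long as $u^\varepsilon_\delta (t)\in (0, 1-\delta]$ we have $K_\delta (u^\varepsilon_\delta)=0$, so

$(u^\varepsilon_\delta)_t=\frac{u^\varepsilon_\delta}{\varepsilon^2} \quad \mbox{ and } \quad u^\varepsilon_\delta (t)=u^\varepsilon_0\, e^{t/\varepsilon^2},$ |

while for $u^\varepsilon_\delta (t)\in (1-\delta, 1)$ a direct substitution of (1.7) gives

$(u^\varepsilon_\delta)_t=\frac{u^\varepsilon_\delta}{\varepsilon^2}-\frac{u^\varepsilon_\delta-1+\delta}{\delta\varepsilon^2}=\frac{(1-\delta)(1-u^\varepsilon_\delta)}{\delta\varepsilon^2}>0,$ |

so the solution increases monotonically while remaining below the stationary value $1$. This is the continuous counterpart of the discrete behavior established in Theorem 3.1 below.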
Note from (1.5) that $\partial I_{[-1, 1]} (\cdot)$ is a multivalued function, so it is very hard to investigate (E)$^\varepsilon$ numerically. However, we observe from Corollary 2.1 that (E)$^\varepsilon_\delta$ is an approximate problem for (E)$^\varepsilon $. Hence, in this section, we consider (E)$^\varepsilon_\delta$ from the viewpoint of numerical analysis.
We present results concerning numerical experiments for (E)$^\varepsilon_\delta$. There are many methods for the numerical simulation of ODE problems (e.g., the backward Euler scheme, Runge--Kutta methods, and so on). In [25], the authors provided numerical experiments for (E)$^\varepsilon$ via the Yosida approximation using the standard forward Euler method. To clarify the advantage of our new approximate method (1.7), we also provide numerical experiments for (E)$^\varepsilon_\delta$ using the standard forward Euler method. To this end, we consider the following explicit finite difference scheme for (E)$^\varepsilon_\delta $, denoted by (DE)$^\varepsilon_\delta $:
${\text{(DE)}}_\delta ^\varepsilon \left\{ \begin{array}{l} \dfrac{u^{n + 1} - u^n}{\triangle t} + \dfrac{K_\delta (u^n)}{\varepsilon ^2} = \dfrac{u^n}{\varepsilon ^2} \ \mbox{ in } \mathbb{R}, \ \mbox{ for } n = 0,1,2,\cdots ,N_t, \\ u^0 = u_0^\varepsilon \ \mbox{ in } \mathbb{R}, \end{array} \right.$ |
where $N_t \in {\mathbb N}$ is a given positive integer and $\triangle t:=T/{N_t}$ is the mesh size for time.
Note that $u^n$ is the approximate solution of (E)$^\varepsilon_\delta$ at the time $t=n\triangle t$. Also, note that the explicit finite difference scheme (DE)$^\varepsilon_\delta$ converges to (E)$^\varepsilon_\delta$ as $\triangle t\to 0$ because (DE)$^\varepsilon_\delta$ is the standard time discretization scheme for (E)$^\varepsilon_\delta $.
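The iteration can be reproduced with a few lines of code. The following minimal Python sketch is ours (the function name `run_DE` is not from the original); it simply iterates $u^{n+1}=u^n-\frac{\triangle t}{\varepsilon^2}\left(K_\delta(u^n)-u^n\right)$, which is the update implied by (DE)$^\varepsilon_\delta$.

```python
def run_DE(u0, eps, delta, dt, n_steps):
    """Forward Euler iteration (DE)^eps_delta for the ODE problem (E)^eps_delta."""
    u = float(u0)
    history = [u]
    for _ in range(n_steps):
        # K_delta(u) from (1.7), written out here for self-containedness
        if u > 1.0 - delta:
            K = (u - 1.0 + delta) / delta
        elif u < -1.0 + delta:
            K = (u + 1.0 - delta) / delta
        else:
            K = 0.0
        u = u - dt / eps**2 * (K - u)
        history.append(u)
    return history

# e.g., the unstable setting presented next (T = 0.002, dt = 1e-6, so 2000 steps):
# values = run_DE(u0=0.1, eps=0.003, delta=0.01, dt=1e-6, n_steps=2000)
```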
Here, we present the result for an unstable numerical experiment of (DE)$^\varepsilon_\delta$ for $T=0.002$, $\varepsilon=0.003$, $\delta=0.01$, the initial data $u_0^\varepsilon=0.1$, and the mesh size for time $\triangle t=0.000001$:
We observe from Figure 1 that we have to choose suitable values for constants $\varepsilon$, $\delta $, and mesh size of time-step $\triangle t$ to generate stable numerical results for (DE)$^\varepsilon_\delta$.
Our second main result of this paper concerns criteria for stable numerical simulations of (DE)$^\varepsilon_\delta$.
Theorem 3.1 (cf. [25, Theorem 7]). Let $\varepsilon \in (0, 1]$, $\delta \in (0, 1)$ and $\triangle t \in (0, 1]$. Assume $u_0^\varepsilon \in (0, 1)$ (resp. $u_0^\varepsilon \in (-1, 0) $) and $T=\infty$. Let $\{u^n; n \geq 0 \}$ be the solution to (DE)$^\varepsilon_\delta$. Then, we have:
(i) Assume $\triangle t \in \left(0, \delta\varepsilon^2/(1-\delta)\right)$. Then, $u^n \in (0, 1)$ (resp. $u^n \in (-1, 0) $) for all $n\geq 0$. Moreover, $u^n$ converges to $1$ (resp. $-1$) monotonically as $n \to + \infty$.
(ii) Assume $\triangle t \in \left(\delta\varepsilon^2/(1-\delta), 2\delta\varepsilon^2/(1-\delta)\right)$. Then, there is a positive number $n_0 \in {\mathbb N}$ such that $u^n$ oscillates for all $n \geq n_0$. Moreover, $u^n$ converges to $1$ (resp. $-1$) as $n \to + \infty$.
Proof. We prove this theorem by arguments similar to those for the proof of [25, Theorem 7].
We present the proof only for initial values $u_0^\varepsilon \in (0, 1)$, because for $u_0^\varepsilon \in (-1, 0)$ the same arguments apply.
Assuming $u_0^\varepsilon \in (0, 1)$, we set for simplicity
$f_\delta (z):=K_\delta (z)-z \quad \mbox{ for } z \in {\mathbb R}.$ | (3.1) |
Then, we easily observe that
${f_\delta }(z) = \left\{ {\begin{array}{*{20}{l}} {\frac{{1 - \delta }}{\delta }(z - 1),}&{{\text{ if }}z > 1 - \delta ,} \\ { - z,}&{{\text{ if }}z \in [- 1 + \delta ,1 - \delta],} \\ {\frac{{1 - \delta }}{\delta }(z + 1),}&{{\text{ if }}z < - 1 + \delta } \end{array}} \right.$ | (3.2) |
and $z=-1, 0, 1$ are zero points of $f_\delta (\cdot)$.
Note that the difference equation (DE)$^\varepsilon_\delta$ is reformulated to give
$u^{n+1}= u^{n}-\frac{\triangle t}{\varepsilon^2} f_\delta (u^{n} ) \ \mbox{ in } {\mathbb R}, \ \mbox{ for } n=0,1,2,\cdots .$ | (3.3) |
Now, we prove (i). To this end, we assume that $\triangle t \in \left(0, \delta\varepsilon^2/(1-\delta)\right)$. By mathematical induction, we show:
$u^i \in ( 0,1) \ \mbox{ for all } i \geq 0 .$ | (3.4) |
Clearly (3.4) holds for $i=0$ because $ u^0=u_0^\varepsilon \in (0, 1)$.
We now assume that (3.4) holds for all $i=0, 1, \cdots, n$. If $u^n \in (0, 1-\delta]$, we observe from (3.2), (3.3), and $\triangle t \in \left(0, \delta\varepsilon^2/(1-\delta)\right)$ that
$\begin{gathered} {u^n} \leq {u^{n + 1}} = {u^n} - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }({u^n}) \\ = {u^n} + \frac{{\Delta t}}{{{\varepsilon ^2}}}{u^n} \\ \leq 1 - \delta + \frac{{\Delta t}}{{{\varepsilon ^2}}}(1 - \delta ) \\ < 1 - \delta + \frac{\delta }{{1 - \delta }}(1 - \delta ) = 1,\\ \end{gathered} $ |
which implies that
$u^{n+1} \in (0,1),\ \mbox{ if } u^{n} \in (0,1-\delta].$ | (3.5) |
Next, if $u^n \in (1-\delta, 1)$, we observe from (3.2), (3.3), and $\triangle t \in \left(0, \delta\varepsilon^2/(1-\delta)\right)$ that
$\begin{gathered} {u^n} \leq {u^{n + 1}} = {u^n} - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }({u^n}) \\ = {u^n} - \frac{{\Delta t}}{{{\varepsilon ^2}}} \cdot \frac{{1 - \delta }}{\delta }({u^n} - 1) \\ < {u^n} - \frac{\delta }{{1 - \delta }} \cdot \frac{{1 - \delta }}{\delta }({u^n} - 1) = 1,\\ \end{gathered} $ |
which implies that
$u^{n+1} \in (0,1),\ \mbox{ if } u^n \in (1-\delta,1).$ | (3.6) |
From (3.5) and (3.6) we infer that (3.4) holds for $i=n + 1$. Therefore, we conclude by mathematical induction that (3.4) holds, which is the discrete analogue of (i) of Corollary 2.1.
Also, by (3.2) and (3.4), we observe that $f_\delta (u^n) \le 0$ for all $n\ge 0$. Therefore, we have from (3.3)
$u^{n+1}= u^n-\frac{\triangle t}{\varepsilon^2} f_\delta (u^n )\ge u^n \quad \mbox{ for all } n\geq 0.$ | (3.7) |
Therefore, we infer from (3.4) and (3.7) that $\{ u^n; n\geq 0\}$ is a bounded and increasing sequence with respect to $n$. Hence, there exists a point $u^\infty \in {\mathbb R}$ such that
$u^{n}\to u^\infty \mbox{ in } {\mathbb R} \mbox{ as } n \to + \infty.$ | (3.8) |
By taking the limit in (3.3) as $n\to + \infty$, we observe from the continuity of $f_\delta (\cdot)$ that $f_\delta(u^\infty)=0$; since $u^\infty \geq u^0 > 0$ and $u^\infty \leq 1$, we conclude that $u^\infty=1$. Hence, the proof of (i) is complete.
Next, we show (ii). To this end, we put
$\triangle t :=\frac{\delta\varepsilon^2}{1-\delta}\tau \ \ \mbox{ for some } \tau \in (1,2).$ |
We assume that the initial value $u_0^\varepsilon \in (0, 1-\delta]$. We can then find the minimal number $n_0\in {\mathbb N}$ such that
$u^{n_0} \in (1-\delta,1+\delta ) \mbox{ and } u^i \in (0,1-\delta] \mbox{ for all }i=0,1,\cdots,n_0-1.$ | (3.9) |
Indeed, if $u^i \in (0, 1-\delta]$ for all $i=0, 1, \cdots, k$, we observe from (3.3) that
$\begin{gathered} {u^{k + 1}} = {u^k} - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }({u^k}) = \left( {1 + \frac{{\Delta t}}{{{\varepsilon ^2}}}} \right){u^k} \\ = {\left( {1 + \frac{{\Delta t}}{{{\varepsilon ^2}}}} \right)^2}{u^{k - 1}} \\ = \cdots \\ = {\left( {1 + \frac{{\Delta t}}{{{\varepsilon ^2}}}} \right)^{k + 1}}{u^0}. \\ \end{gathered} $ | (3.10) |
Taking into account (3.10), $u^0=u_0^\varepsilon \in (0, 1-\delta] $, and
$1+\frac{\triangle t}{\varepsilon^2}> 1+\frac{\delta}{1-\delta}>1,$ |
we can then find the minimal number $n_0\in {\mathbb N}$ such that
${u^{{n_0}}} > 1 - \delta {\text{ and }}{u^i} \in (0,1 - \delta]{\text{ for all }}i = 0,1,\cdots ,{n_0} - 1.$ |
Also, by (3.3), we observe that
$u^{n_0}= u^{n_0-1}-\frac{\triangle t}{\varepsilon^2} f_\delta (u^{n_0-1}) =u^{n_0-1}+\frac{\triangle t}{\varepsilon^2} u^{n_0-1} <1-\delta+\frac{2\delta}{1-\delta}\cdot (1-\delta)=1+\delta,$ |
hence (3.9) holds.
Now we show (ii) given the initial value $u_0^\varepsilon \in (0, 1-\delta]$. We find from (3.2), (3.3) and (3.9) that
$\begin{gathered} {u^{{n_0} + 1}} = {u^{{n_0}}} - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }({u^{{n_0}}}) \\ = {u^{{n_0}}} - \frac{{\Delta t}}{{{\varepsilon ^2}}} \cdot \frac{{1 - \delta }}{\delta }({u^{{n_0}}} - 1) \\ = (1 - \tau ){u^{{n_0}}} + \tau ,\\ \end{gathered} $ | (3.11) |
which implies that
$\frac{u^{n_0+1}+(\tau-1) u^{n_0}}\tau= 1.$ | (3.12) |
Therefore, we see from (3.12) and $\tau \in (1, 2)$ that the zero point $1$ of $f_\delta (\cdot)$ lies in the interval between $u^{n_0}$ and $u^{n_0+1}$. Moreover, since $u^{n_0} \in (1-\delta, 1+\delta)$ and $\tau \in (1,2)$, we have
$u^{n_0+1}= (1-\tau)u^{n_0}+\tau>(1-\tau)(1+\delta) +\tau>1-\delta$ |
and
$u^{n_0+1}= (1-\tau)u^{n_0}+\tau < (1-\tau)(1-\delta) +\tau < 1+\delta ,$ |
which implies that
$u^{n_0+1} \in (1-\delta,1+\delta).$ |
By (3.11), (3.12), and repeating the above procedure, we obtain
$u^{n} \in (1-\delta,1+\delta) \ \mbox{ for all }n\geq n_0$ | (3.13) |
and $u^{n}$ oscillates around the zero point $1$ of $f_\delta (\cdot)$ for all $n\geq n_0$. Also, we observe from (3.11) and (3.13) that
$\left|u^{n+1}-1\right|=|1-\tau| \left|u^{n}-1\right| \quad \mbox{ for all } n\geq n_0.$ | (3.14) |
Therefore, by $\tau \in (1, 2)$ and (3.12)-(3.14), there exists a subsequence $\{n_k \}$ of $\{ n\}$ such that $u^{n_k}$ oscillates and converges to $1$ as $k \to \infty$. Hence, taking into account the uniqueness of the limit point, we find that (ii) holds for the initial value $u_0^\varepsilon \in (0, 1-\delta]$.
From similar arguments as above, we find that (ii) holds for $n_0=0$ if $u_0^\varepsilon \in (1-\delta, 1]$. Therefore, the proof of (ii) is complete.
This completes the proof of Theorem 3.1.
Remark 3.1. Assume $\triangle t \in \left[2\delta\varepsilon^2/(1-\delta), \infty \right)$ and put $\triangle t :=\delta\varepsilon^2\tau/(1-\delta)$ for some $\tau \geq 2$. Then, we observe that
$1+\frac{\triangle t}{\varepsilon ^2} \ge 1 + \frac{2\delta }{1-\delta}>1 \quad \mbox{ and } \quad |1-\tau|\ge 1.$ |
Therefore, we infer from Theorem 3.1 (cf. (3.10), (3.11), (3.14)) that the solution $u^n$ to (DE)$^\varepsilon_\delta$ oscillates as $n \to \infty$, in general.
Remark 3.2. We infer from (3.2) and (3.3) that
$\begin{gathered} {u^n} \equiv 1\;{\text{ for all }}n \geq 0,{\text{ if }}u_0^\varepsilon = 1,\hfill \\ {u^n} \equiv 0\;{\text{ for all }}n \geq 0,{\text{ if }}u_0^\varepsilon = 0 \hfill \\ \end{gathered} $ |
and
$u^n \equiv -1 \ \mbox{ for all } n \geq 0,\mbox{ if } u_0^\varepsilon=-1.$ |
In comparison with this, the stationary solutions of the difference equation studied in [25] depend on $\delta$, except for $u^n \equiv 0$ (see [25, Remark 9]).
From (ii) of Theorem 3.1, we observe that $u^n$ oscillates for sufficiently large $n$ and converges to the zero point of $f_\delta(\cdot)$ for $\triangle t \in \left(\delta\varepsilon^2/(1-\delta), 2\delta\varepsilon^2/(1-\delta)\right)$. However, for $\triangle t=2\delta\varepsilon^2/(1-\delta)$, we have the following special case that the solution to (DE)$^\varepsilon_\delta$ does not oscillate and coincides with the zero point of $f_\delta (\cdot)$ after some finite number of iterations.
Corollary 3.1 (cf. [25, Corollary 10]). Let $\varepsilon \in (0, 1]$, $\delta \in (0, 1)$, $\triangle t=2\delta\varepsilon^2/(1-\delta)$ and $n\in{\mathbb N}$. Assume $u_0^\varepsilon :=(1-\delta)^{n}/(1+\delta)^{n}$. Then, the solution to (DE)$^\varepsilon_\delta$ is given by
$u^i = \left\{ \begin{array}{ll} \left( \dfrac{1 - \delta }{1 + \delta } \right)^{n - i}, & \mbox{ if } i = 0,1,\cdots ,n - 1, \\ 1, & \mbox{ if } i \geq n. \end{array} \right.$ | (3.15) |
Similarly, if $u_0^\varepsilon :=-(1-\delta)^{n}/(1+\delta)^{n}$, then the solution to (DE)$^\varepsilon_\delta$ is given by
$u^i = \left\{ \begin{array}{ll} -\left( \dfrac{1 - \delta }{1 + \delta } \right)^{n - i}, & \mbox{ if } i = 0,1,\cdots ,n - 1, \\ -1, & \mbox{ if } i \geq n. \end{array} \right.$ |
Proof. We present only the proof of (3.15) as similar arguments hold for $u_0^\varepsilon :=-(1-\delta)^{n}/(1+\delta)^{n}$.
Note that $u_0^\varepsilon :=(1-\delta)^{n}/(1+\delta)^{n} \in (0, 1-\delta)$. Therefore we infer from (3.2), (3.3), and $u^0=u_0^\varepsilon$ that
$u^1= u^0 -\frac{\triangle t}{\varepsilon^2} f_\delta (u^0 ) =u^\varepsilon_0 -\frac{2\delta}{1-\delta}(-u^\varepsilon_0 ) = \frac{1+\delta}{1-\delta}u^\varepsilon_0= \left( \frac{1-\delta}{1+\delta}\right)^{n-1}.$ |
Similarly, we observe from $u^1\in (0, 1-\delta)$ that
$u^2= u^1 -\frac{\triangle t}{\varepsilon^2} f_\delta (u^1 )= \frac{1+\delta}{1-\delta}u^1=\left( \frac{1-\delta}{1+\delta}\right)^{n-2}.$ |
Repeating this procedure, we obtain $u^{n}=1$, and then, by Remark 3.2, the solution to (DE)$^\varepsilon_\delta$ is given by (3.15).
Taking into account Theorem 3.1, we present results of numerical experiments of (DE)$^\varepsilon_\delta$. To this end, we take
$T = 0.002, \ \varepsilon = 0.01, \ \delta = 0.01 \ \mbox{ and the initial data } u_0^\varepsilon = 0.1$ |
as numerical data. Then, we observe that:
$\displaystyle\frac1{1-\delta}=\frac1{1-0.01}=1.010101010\cdots$ |
and
$\displaystyle \frac{\delta\varepsilon^2}{1-\delta}= 0.0000010101010\cdots.$ |
Setting $\triangle t=0.000001$, we have:
$\displaystyle \frac{\delta\varepsilon^2}{1-\delta}= 0.0000010101010\cdots>\triangle t=0.000001,$ |
which complies with (i) of Theorem 3.1. Hence, we have the following stable numerical result of (DE)$^\varepsilon_\delta$. Indeed, we observe from Figure 2 and Table 1 in Remark 3.3 that the solution to (DE)$^\varepsilon_\delta$ converges to the stationary solution 1 monotonically.
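The thresholds of Theorem 3.1 are easy to tabulate for given data; the short helper below is our own sketch (the boundary cases $\triangle t=\delta\varepsilon^2/(1-\delta)$ and $\triangle t=2\delta\varepsilon^2/(1-\delta)$ are not covered by the theorem and are lumped into the neighbouring regimes here).

```python
def classify_dt(eps, delta, dt):
    """Compare dt with the stability thresholds of Theorem 3.1 / Remark 3.1."""
    t1 = delta * eps**2 / (1.0 - delta)   # threshold of Theorem 3.1 (i)
    t2 = 2.0 * t1                         # threshold of Remark 3.1
    if dt < t1:
        return "monotone convergence (Theorem 3.1 (i))"
    elif dt < t2:
        return "oscillation with convergence (Theorem 3.1 (ii))"
    else:
        return "oscillation in general (Remark 3.1)"

# For eps = 0.01, delta = 0.01: t1 = 1.0101e-06..., t2 = 2.0202e-06...,
# so dt = 1e-6 falls into case (i), as observed above.
print(classify_dt(0.01, 0.01, 1e-6))
```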
Remark 3.3 (cf. [25, TABLE 1]). In [25], the authors provided numerical results of the following difference equation:
$({\text{YDE}})_\delta ^\varepsilon \left\{ \begin{array}{l} \dfrac{u^{n + 1} - u^n}{\triangle t} + \dfrac{(\partial I_{[- 1,1]})_\delta (u^n)}{\varepsilon ^2} = \dfrac{u^n}{\varepsilon ^2} \ \mbox{ in } \mathbb{R}, \ \mbox{ for } n = 0,1,2,\cdots ,N_t, \\ u^0 = u_0^\varepsilon \ \mbox{ in } \mathbb{R}. \end{array} \right.$ |
Then, Table 1 compares the resulting numerical solutions to (YDE)$^\varepsilon_\delta$ with those to (DE)$^\varepsilon_\delta$ (cf. [25, TABLE 1]).
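The value $1.010101\cdots$ attained by the solution to (YDE)$^\varepsilon_\delta$ in Table 1 can be explained by a short calculation (ours, for illustration): a stationary point $u>1$ of (YDE)$^\varepsilon_\delta$ must satisfy $(\partial I_{[-1, 1]})_\delta (u)=u$, that is, by (1.6),

$\frac{u-1}{\delta}=u, \quad \mbox{ i.e., } \quad u=\frac{1}{1-\delta}=\frac{1}{0.99}=1.010101\cdots,$ |

which is exactly the constant $1/(1-\delta)$ computed above and the value outside $[-1, 1]$ observed in Table 1. By contrast, $K_\delta (1)=1$, so $u=1$ remains a stationary point of (DE)$^\varepsilon_\delta$ (cf. Remark 3.2).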
Next we set $\triangle t=0.000002$ where we have
$\displaystyle \frac{\delta\varepsilon^2}{1-\delta}= 0.0000010101010\cdots<\triangle t=0.000002< \frac{2\delta\varepsilon^2}{1-\delta},$ |
which complies with (ii) of Theorem 3.1. Hence, we observe from Figure 3 and Table 2 that the solution to (DE)$^\varepsilon_\delta$ oscillates and converges to the stationary solution 1.
number of iterations $i$ | the value of $u^i$ |
0 | 0.100000 |
1 | 0.102000 |
2 | 0.104040 |
| |
120 | 0.994959 |
121 | 1.004940 |
122 | 0.995159 |
123 | 1.004745 |
124 | 0.995350 |
125 | 1.004557 |
126 | 0.995534 |
127 | 1.004376 |
128 | 0.995711 |
129 | 1.004203 |
| |
574 | 0.999999 |
575 | 1.000001 |
576 | 0.999999 |
577 | 1.000000 |
578 | 1.000000 |
579 | 1.000000 |
580 | 1.000000 |
| |
Setting $\triangle t=2\delta\varepsilon^2/(1-\delta)=0.0000020202020\cdots$, we note Remark 3.1. Indeed, we observe from Figure 4 and Table 3 that the solution to (DE)$^\varepsilon_\delta$ oscillates.
number of iterations $i$ | the value of $u^i$ |
0 | 0.100000 |
1 | 0.102020 |
2 | 0.104081 |
3 | 0.106184 |
4 | 0.108329 |
5 | 0.110517 |
| |
111 | 0.920801 |
112 | 0.939403 |
113 | 0.958381 |
114 | 0.977742 |
115 | 0.997495 |
116 | 1.002505 |
117 | 0.997495 |
118 | 1.002505 |
119 | 0.997495 |
120 | 1.002505 |
| |
Setting $\triangle t=0.000005$, we have:
$2\frac{\delta\varepsilon^2}{1-\delta}= 0.0000020202020\cdots<\triangle t=0.000005.$ |
Therefore, noting Remark 3.1, we indeed observe from Figure 5 that the solution to (DE)$^\varepsilon_\delta$ oscillates.
Now we consider $\triangle t=15\delta\varepsilon^2/(1-\delta)$. Recalling Remark 3.1, we indeed observe from Figure 6 that the solution to (DE)$^\varepsilon_\delta$ oscillates between three zero points of $f_\delta (\cdot)$.
Next, we consider a numerical example of Corollary 3.1. To this end, we use the following initial data:
$u_0:=\frac{(1-\delta)^6}{(1+\delta)^6}=\frac{(1-0.01)^6}{(1+0.01)^6}=0.88691688\cdots.$ |
We observe from Table 4, Figure 7, and Corollary 3.1 that (3.15) holds with $n=6$:
number of iterations $i$ | the value of $u^i$ |
0 | 0.886917 |
1 | 0.904834 |
2 | 0.923114 |
3 | 0.941763 |
4 | 0.960788 |
5 | 0.980198 |
6 | 1.000000 |
7 | 1.000000 |
8 | 1.000000 |
9 | 1.000000 |
10 | 1.000000 |
| |
From Theorem 3.1, Remark 3.3 and numerical experiments as above, we conclude that
(i) the mesh size for the time step $\triangle t$ needs to be smaller than $\delta\varepsilon^2/(1-\delta)$ to provide a stable numerical solution for (DE)$^\varepsilon_\delta$;
(ii) our new approximate method (1.7) is better than the Yosida approximation (1.6) because the solutions to (DE)$^\varepsilon_\delta$ take values in [-1, 1] (cf. Table 1);
(iii) we obtain a stable numerical example of (DE)$^\varepsilon_\delta$ with the initial data $u_0^\varepsilon :=(1-\delta)^{n}/(1+\delta)^{n}$, even if the mesh size $\triangle t$ is equal to $2\delta\varepsilon^2/(1-\delta) $.
Although a numerical study of (P)$^\varepsilon$ is hard because $\partial I_{[-1, 1]} (\cdot)$ is multivalued (cf. (1.5)), we nevertheless observe from Theorem 2.1 that (P)$^\varepsilon_\delta$ is an approximation of the problem (P)$^\varepsilon $. Therefore, in this section, we consider (P)$^\varepsilon_\delta$ in a 2D space from the viewpoint of numerical analysis.
To extend the result obtained in [25, Theorem 16] and to avoid complicated arguments, we perform numerical experiments using the standard forward Euler method, although there are many methods for the numerical simulation of PDE problems (e.g., the backward Euler scheme, the finite element method, and so on).
For simplicity, assume that $\Omega:=(0, 1)\times (0, 1)$ is a square domain in ${\mathbb R^2}$. We consider the following difference equation to the Allen--Cahn equation in (P)$^\varepsilon_\delta $:
$\begin{gathered} \frac{{u_{i,j}^{n + 1} - u_{i,j}^n}}{{\Delta t}} - \frac{{u_{i - 1,j}^n - 2u_{i,j}^n + u_{i + 1,j}^n}}{{{{(\Delta x)}^2}}} - \frac{{u_{i,j - 1}^n - 2u_{i,j}^n + u_{i,j + 1}^n}}{{{{(\Delta y)}^2}}} + \frac{{{K_\delta }(u_{i,j}^n)}}{{{\varepsilon ^2}}} = \frac{{u_{i,j}^n}}{{{\varepsilon ^2}}} \hfill \\ {\text{for }}n = 0,1,\cdots ,{N_t} - 1,\;i = 1,2,\cdots ,{N_x} - 1,{\text{ and }}j = 1,2,\cdots ,{N_y} - 1,\hfill \\ \end{gathered} $ | (4.1) |
where $N_t, N_x, N_y \in {\mathbb N}$ are given integers, $\triangle t:=T/N_t$ is the mesh size for the time steps, and $\triangle x:=1/N_x$ and $\triangle y:=1/N_y$ are the mesh sizes along the $x$- and $y$-axes in the 2D space.
Also, for the homogeneous Neumann boundary condition and the initial condition, we impose the following discrete conditions:
$\begin{array}{*{20}{c}} {\left. {\begin{array}{*{20}{c}} {u_{0,0}^n = u_{1,1}^n,\quad u_{{N_x},0}^n = u_{{N_x} - 1,1}^n,} \\ {u_{0,{N_y}}^n = u_{1,{N_y} - 1}^n,\quad u_{{N_x},{N_y}}^n = u_{{N_x} - 1,{N_y} - 1}^n,} \\ {u_{i,0}^n = u_{i,1}^n,\quad u_{i,{N_y}}^n = u_{i,{N_y} - 1}^n\;{\text{ for }}i = 1,2,\cdots ,{N_x} - 1,} \\ {u_{0,j}^n = u_{1,j}^n,\quad u_{{N_x},j}^n = u_{{N_x} - 1,j}^n\;{\text{ for }}j = 1,2,\cdots ,{N_y} - 1} \end{array}} \right\}{\text{ }}(1 \leq n \leq {N_t})} \end{array}$ | (4.2) |
and
$\displaystyle u_{i,j}^0=u_0^\varepsilon (x_i,y_j)\ \mbox{ for } i=0,1,\cdots,N_x,\mbox{ and } j=0,1,\cdots,N_y,$ | (4.3) |
where $x_i:=i\triangle x$ and $y_j:=j\triangle y$.
In considering the explicit finite difference system (DP)$^\varepsilon_\delta :=\{(4.1), (4.2), (4.3)\}$, we observe that $u_{i, j}^n$ is the approximate solution of (P)$^\varepsilon_\delta$ at time $t_n:=n\triangle t$ and position $(x_i, y_j)$. Also, we observe that (DP)$^\varepsilon_\delta$ converges to (P)$^\varepsilon_\delta$ as $\triangle t\to 0$, $\triangle x \to 0$, and $\triangle y \to 0$, because (DP)$^\varepsilon_\delta$ is the standard time and space discretized form of (P)$^\varepsilon_\delta$ in the 2D space.
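A compact way to read (4.1)--(4.3) is as a single vectorized update; the following minimal Python/NumPy sketch is ours (the function names and array layout are assumptions, not from the original) and performs one forward Euler step of (4.1) together with the boundary copies (4.2). The helper `K_delta` repeats the sketch of (1.7) from Section 1 for self-containedness.

```python
import numpy as np

def K_delta(z, delta):
    """Approximation (1.7) of the double-obstacle subdifferential."""
    return np.where(z > 1.0 - delta, (z - 1.0 + delta) / delta,
                    np.where(z < -1.0 + delta, (z + 1.0 - delta) / delta, 0.0))

def step_DP(u, eps, delta, dt, dx, dy):
    """One step of (DP)^eps_delta: u holds u^n_{i,j} on an (Nx+1) x (Ny+1) grid."""
    new = u.copy()
    # Five-point Laplacian at the interior nodes i = 1..Nx-1, j = 1..Ny-1, cf. (4.1)
    lap = ((u[:-2, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[2:, 1:-1]) / dx**2
           + (u[1:-1, :-2] - 2.0 * u[1:-1, 1:-1] + u[1:-1, 2:]) / dy**2)
    inner = u[1:-1, 1:-1]
    new[1:-1, 1:-1] = inner + dt * (lap + (inner - K_delta(inner, delta)) / eps**2)
    # Homogeneous Neumann boundary condition via the copies (4.2)
    new[0, 1:-1], new[-1, 1:-1] = new[1, 1:-1], new[-2, 1:-1]
    new[1:-1, 0], new[1:-1, -1] = new[1:-1, 1], new[1:-1, -2]
    new[0, 0], new[-1, 0] = new[1, 1], new[-2, 1]
    new[0, -1], new[-1, -1] = new[1, -2], new[-2, -2]
    return new
```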
Using Theorem 3.1, we observe that we also have to choose suitable values for the constants $\varepsilon$, $\delta $, and the mesh sizes for time $\triangle t$ and space $\triangle x$ and $\triangle y$ to establish stable numerical results for (DP)$^\varepsilon_\delta$. We now announce our final main result concerning the stability of (DP)$^\varepsilon_\delta$.
Theorem 4.1. Let $\varepsilon \in (0, 1]$, $\delta \in (0, 1)$, $T>0$, $\Omega:=(0, 1)\times (0, 1)$ and $u_0^\varepsilon \in K\cap C(\overline{\Omega}) $, where $K$ is the set of initial data defined in (A). Also, let $N_t, N_x, N_y$ be the integers so that $\triangle t \in (0, 1]$, $\triangle x \in (0, 1]$ and $\triangle y \in (0, 1]$, where $\triangle t:=T/N_t$, $\triangle x:=1/N_x$ and $\triangle y:=1/N_y$. Let $\{u^n_{i, j}; n=0, 1, \cdots, N_t, \ i=0, 1, \cdots, N_x, \ j=0, 1, \cdots, N_y \}$ be the solution to (DP)$^\varepsilon_\delta$. Also, let $c_0 \in (0, 1)$ and assume that
$0 < \triangle t \leq \frac{c_0\delta \varepsilon ^2}{1 - \delta }\quad \mbox{ and } \quad 0 \leq \frac{\triangle t}{(\triangle x)^2} + \frac{\triangle t}{(\triangle y)^2} \leq \frac{1 - c_0}{2}.$ | (4.4) |
Then, the solution to (DP)$^\varepsilon_\delta$ is bounded in the following sense:
$\mathop {\max }\limits_{\begin{array}{c} 0 \leq i \leq N_x \\ 0 \leq j \leq N_y \end{array}} \left| u_{i,j}^n \right| \leq 1\quad \mbox{ for all } n \geq 0.$ | (4.5) |
In particular, if the initial value $ u^\varepsilon_0(x) $ takes the value in [0,1] (resp. [-1,0]) for a.e. $x\in \Omega$, the following boundedness holds:
$u_{i,j}^n \in [0,1]\ (\mbox{resp. } u_{i,j}^n \in [- 1,0]) \ \mbox{ for all } n \geq 0, \ i = 0,1,\cdots ,N_x \ \mbox{ and } \ j = 0,1,\cdots ,N_y.$ | (4.6) |
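Before turning to the proof, we note that condition (4.4) is straightforward to verify for concrete mesh data; the helper below is a hedged sketch of such a check (the name `satisfies_44` is ours).

```python
def satisfies_44(eps, delta, dt, dx, dy, c0):
    """Check the two inequalities of condition (4.4) in Theorem 4.1."""
    assert 0.0 < c0 < 1.0
    cond_time = 0.0 < dt <= c0 * delta * eps**2 / (1.0 - delta)
    cond_space = dt / dx**2 + dt / dy**2 <= (1.0 - c0) / 2.0
    return cond_time and cond_space
```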
Proof. We demonstrate (4.5) by mathematical induction. Clearly (4.5) holds for $n=0$ because $u^0=u_0^\varepsilon \in K$. We next assume that
$\mathop {\max }\limits_{\begin{array}{*{20}{c}} {0 \leq i \leq {N_x}} \\ {0 \leq j \leq {N_y}} \end{array}} \left| {u_{i,j}^\ell } \right| \leq 1\quad {\text{ for all }}\ell = 0,1,\cdots ,n.$ | (4.7) |
Then, we observe that the explicit finite difference equation (4.1) in (DP)$^\varepsilon_\delta$ can be reformulated giving
$\begin{array}{*{20}{c}} {u_{i,j}^{n + 1} = {r_x}u_{i - 1,j}^n + {r_x}u_{i + 1,j}^n + {r_y}u_{i,j - 1}^n + {r_y}u_{i,j + 1}^n} \\ { + (1 - 2{r_x} - 2{r_y})u_{i,j}^n - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(u_{i,j}^n)} \\ {{\text{for all }}n = 0,1,\cdots ,{N_t} - 1,\;i = 1,2,\cdots ,{N_x} - 1,{\text{ and }}j = 1,2,\cdots ,{N_y} - 1,} \end{array}$ | (4.8) |
where we put $r_x:=\triangle t/(\triangle x)^2$, $r_y:=\triangle t/(\triangle y)^2$, and $f_\delta (\cdot)$ is the function defined in (3.2). Note that $z=-1, 0, 1$ are the zero points of $f_\delta (z)$.
We observe from (4.4), (4.7), and (4.8) that
$\begin{gathered} 1 - u_{i,j}^{n + 1} = {r_x}\left( {1 - u_{i - 1,j}^n} \right) + {r_x}\left( {1 - u_{i + 1,j}^n} \right) \\ + {r_y}\left( {1 - u_{i,j - 1}^n} \right) + {r_y}\left( {1 - u_{i,j + 1}^n} \right) \\ + (1 - 2{r_x} - 2{r_y})\left( {1 - u_{i,j}^n} \right) + \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(u_{i,j}^n) \\ \geq (1 - 2{r_x} - 2{r_y})\left( {1 - u_{i,j}^n} \right) + \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(u_{i,j}^n) \\ {\text{for all }}i = 1,2,\cdots ,{N_x} - 1,{\text{ and }}j = 1,2,\cdots ,{N_y} - 1. \\ \end{gathered} $ | (4.9) |
Note from (3.2) that the function $[-1, 1]\ni z\to (1-2r_x-2r_y)(1-z)+ \triangle t/\varepsilon ^2 f_\delta(z)$ is continuous. Also, we infer from (3.2), (4.4) and (4.7) that
$(1-2r_x-2r_y)(1-z)+ \frac{\triangle t}{\varepsilon ^2} f_\delta(z) \geq 0\ \mbox{ for all } z \in [-1, 1].$ | (4.10) |
Indeed, we observe from (4.4) that
$1-2r_x-2r_y\geq c_0 >0.$ | (4.11) |
Therefore, it follows from (3.2) that the function $[-1, 1-\delta]\ni z\to (1-2r_x-2r_y)(1-z)+ \triangle t/\varepsilon ^2f_\delta(z)$ attains a minimum value at $z=1-\delta$. Therefore we obtain from (3.2) and (4.4) that
$\begin{gathered} (1 - 2{r_x} - 2{r_y})(1 - z) + \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(z) \hfill \\ \geq (1 - 2{r_x} - 2{r_y})\left( {1 - (1 - \delta )} \right) + \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(1 - \delta ) \hfill \\ = (1 - 2{r_x} - 2{r_y})\delta - \frac{{\Delta t}}{{{\varepsilon ^2}}}(1 - \delta ) \hfill \\ \geq {c_0}\delta - \frac{{\Delta t}}{{{\varepsilon ^2}}}(1 - \delta ) \hfill \\ \geq 0\qquad {\text{for all }}z \in [- 1,1 - \delta]. \hfill \\ \end{gathered} $ | (4.12) |
Also, for any $z \in [1-\delta, 1]$, we observe from (3.2) that
$\begin{gathered} (1 - 2{r_x} - 2{r_y})(1 - z) + \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(z) \hfill \\ = (1 - 2{r_x} - 2{r_y})(1 - z) + \frac{{\Delta t}}{{{\varepsilon ^2}}} \cdot \frac{{1 - \delta }}{\delta }(z - 1) \hfill \\ = \left[{\frac{{1 - \delta }}{{\delta {\varepsilon ^2}}}\Delta t - (1 - 2{r_x} - 2{r_y})} \right]z + (1 - 2{r_x} - 2{r_y}) - \frac{{1 - \delta }}{{\delta {\varepsilon ^2}}}\Delta t. \hfill \\ \end{gathered} $ | (4.13) |
Here we note from (4.4) that
$\frac{1-\delta}{\delta\varepsilon ^2} \triangle t-(1-2r_x-2r_y) \le \frac{1-\delta}{\delta\varepsilon ^2} \triangle t-c_0 \le 0,$ |
which implies from (4.13) that the function $[1-\delta, 1]\ni z\to (1-2r_x-2r_y)(1 -z)+ \triangle t/\varepsilon ^2 f_\delta(z)$ is non-increasing and attains a minimum value at $z=1$. Therefore, we have:
$\displaystyle (1-2r_x-2r_y)(1-z)+ \frac{\triangle t}{\varepsilon ^2} f_\delta(z) \geq\frac{\triangle t}{\varepsilon ^2} f_\delta(1) =0 \ \mbox{ for all } z \in [1-\delta,1].$ | (4.14) |
Hence, from (4.12) and (4.14), (4.10) holds. Therefore we find from (4.7), (4.9) and (4.10) that
$\begin{gathered} 1 - u_{i,j}^{n + 1} \geq 0 \\ {\text{for all }}i = 1,2,\cdots ,{N_x} - 1,{\text{ and }}j = 1,2,\cdots ,{N_y} - 1. \\ \end{gathered} $ | (4.15) |
Similarly, we observe from (4.4), (4.7), and (4.8) that
$\begin{gathered} u_{i,j}^{n + 1} + 1 = {r_x}\left( {u_{i - 1,j}^n + 1} \right) + {r_x}\left( {u_{i + 1,j}^n + 1} \right) \\ + {r_y}\left( {u_{i,j - 1}^n + 1} \right) + {r_y}\left( {u_{i,j + 1}^n + 1} \right) \\ + (1 - 2{r_x} - 2{r_y})\left( {u_{i,j}^n + 1} \right) - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(u_{i,j}^n) \\ \geq (1 - 2{r_x} - 2{r_y})\left( {u_{i,j}^n + 1} \right) - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(u_{i,j}^n) \\ {\text{for all }}i = 1,2,\cdots ,{N_x} - 1{\text{ and }}j = 1,2,\cdots ,{N_y} - 1. \\ \end{gathered} $ | (4.16) |
Clearly, we have from (3.2) that the function $[-1, 1]\ni z\to (1-2r_x-2r_y)(z+1)-\triangle t/\varepsilon ^2 f_\delta(z)$ is continuous, and, by arguments similar to those leading to (4.10), it is non-negative on $[-1, 1]$. Indeed, we observe from (3.2), (4.4), and (4.11) that this function attains its minimum on $[-1+\delta, 1]$ at $z=-1+\delta$. Therefore, it follows from (3.2) and (4.4) (cf. (4.12)) that
$\begin{gathered} (1 - 2{r_x} - 2{r_y})\left( {z + 1} \right) - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(z) \hfill \\ \geq (1 - 2{r_x} - 2{r_y})\left( {( - 1 + \delta ) + 1} \right) - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }( - 1 + \delta ) \hfill \\ = (1 - 2{r_x} - 2{r_y})\delta + \frac{{\Delta t}}{{{\varepsilon ^2}}}( - 1 + \delta ) \hfill \\ \geq {c_0}\delta - \frac{{\Delta t}}{{{\varepsilon ^2}}}(1 - \delta ) \hfill \\ \geq 0\quad {\text{ for all }}z \in \left[{ - 1 + \delta ,1} \right]. \hfill \\ \end{gathered} $ | (4.17) |
Also, for any $z \in [-1, -1+\delta]$, we observe from (3.2) that
$\begin{gathered} (1 - 2{r_x} - 2{r_y})\left( {z + 1} \right) - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(z) \hfill \\ = (1 - 2{r_x} - 2{r_y})\left( {z + 1} \right) - \frac{{\Delta t}}{{{\varepsilon ^2}}} \cdot \frac{{1 - \delta }}{\delta }(z + 1) \hfill \\ = \left[{(1 - 2{r_x} - 2{r_y}) - \frac{{1 - \delta }}{{\delta {\varepsilon ^2}}}\Delta t} \right]z + (1 - 2{r_x} - 2{r_y}) - \frac{{1 - \delta }}{{\delta {\varepsilon ^2}}}\Delta t. \hfill \\ \end{gathered} $ | (4.18) |
Here we note from (4.4) that
$(1-2r_x-2r_y)-\frac{1-\delta}{\delta\varepsilon ^2} \triangle t\ge c_0-\frac{1-\delta}{\delta\varepsilon ^2} \triangle t \ge 0.$ |
Therefore, we infer from (4.18) that the function $[-1, -1+\delta]\ni z\to (1-2r_x-2r_y)(z+1)-\triangle t/\varepsilon ^2 f_\delta(z)$ is non-decreasing and attains a minimum value at $z=-1$. Hence, we have:
$\displaystyle (1-2r_x-2r_y)(z+1)- \frac{\triangle t}{\varepsilon ^2} f_\delta(z) \geq - \frac{\triangle t}{\varepsilon ^2} f_\delta(-1) =0 \ \mbox{ for all } z \in [-1,-1+\delta].$ | (4.19) |
From (4.17) and (4.19), we obtain
$(1-2r_x-2r_y)(z+1)- \frac{\triangle t}{\varepsilon ^2} f_\delta(z) \geq 0 \ \mbox{ for all } z \in [-1, 1],$ |
which from (4.7) and (4.16) implies that
$\begin{gathered} u_{i,j}^{n + 1} + 1 \geq 0 \\ {\text{for all }}i = 1,2,\cdots ,{N_x} - 1,{\text{ and }}j = 1,2,\cdots ,{N_y} - 1. \\ \end{gathered} $ | (4.20) |
Taking into account the Neumann boundary condition, specifically (4.2), we observe from (4.15) and (4.20) that
$\mathop {\max }\limits_{\begin{array}{*{20}{c}} {0 \leq i \leq {N_x}} \\ {0 \leq j \leq {N_y}} \end{array}} \left| {u_{i,j}^{n + 1}} \right| \leq 1,$ |
which implies that (4.7) holds for $\ell = n+1$. Therefore, we conclude by mathematical induction that (4.5) holds.
Finally, we show (4.6). We present the proof only for initial values $u_0^\varepsilon (x)\in [0,1]$ for a.e. $x\in \Omega$, because for $u_0^\varepsilon (x)\in [-1,0]$ the same arguments apply.
We demonstrate (4.6) by arguments similar to the proof of (4.5), namely, by mathematical induction. Clearly (4.6) holds for $n=0$ because $u^0(x)=u_0^\varepsilon (x) \in [0,1]$ for a.e. $x\in \Omega$. We next assume that
$u^\ell _{i,j} \in [0,1] \ \mbox{ for all } \ell = 0,1,\cdots,n,i=0,1,\cdots,N_x \mbox{ and } j=0,1,\cdots,N_y .$ | (4.21) |
Note from (3.2) and (4.21) that
$f_\delta(u_{i,j}^{n})\le 0 \ \mbox{ for all } i=0,1,\cdots,N_x \mbox{ and } j=0,1,\cdots,N_y.$ |
Therefore, we observe from (4.8), (4.11) and (4.21) that
$\begin{gathered} u_{i,j}^{n + 1} = {r_x}u_{i - 1,j}^n + {r_x}u_{i + 1,j}^n + {r_y}u_{i,j - 1}^n + {r_y}u_{i,j + 1}^n + (1 - 2{r_x} - 2{r_y})u_{i,j}^n - \frac{{\Delta t}}{{{\varepsilon ^2}}}{f_\delta }(u_{i,j}^n) \\ \geq 0\;\;{\text{ for all }}i = 1,2,\cdots ,{N_x} - 1{\text{ and }}j = 1,2,\cdots ,{N_y} - 1. \\ \end{gathered} $ | (4.22) |
By arguments similar to the proof of (4.15), we also observe that
$\displaystyle u_{i,j}^{n+1}\le 1 \ \ \mbox{ for all } i=1,2,\cdots,N_x -1 \mbox{ and } j=1,2,\cdots,N_y -1.$ | (4.23) |
Hence, from (4.22), (4.23) and the Neumann boundary condition, specifically (4.2), we observe that
$u^{n+1}_{i,j} \in [0,1] \ \mbox{ for all } i=0,1,\cdots,N_x \mbox{ and } j=0,1,\cdots,N_y ,$ |
which implies that (4.21) holds for $\ell = n+1$. Therefore, we conclude by mathematical induction that (4.6) holds, which is analogous to (i) of Theorem 2.1.
This completes the proof of Theorem 4.1.
Remark 4.1. We can set $c_0=0$ in (4.4) for the explicit finite difference scheme applied to the following standard 2D heat equation with a homogeneous Neumann boundary condition:
$\left\{ \begin{gathered} u_t - \Delta u = 0 \ \mbox{ in } Q = (0,T) \times \Omega ,\\ \frac{\partial u}{\partial \nu } = 0 \ \mbox{ on } \Sigma = (0,T) \times \Gamma ,\\ u(0,x) = u_0(x), \quad x \in \Omega ,\\ \end{gathered} \right.$ |
where $\Omega:=(0, 1)\times(0, 1)$ is a square domain in ${\mathbb R}^2$ and $\Gamma :=\partial \Omega$ is the boundary of $\Omega$.
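We read this remark as follows: for the heat equation the reaction term $\triangle t\, f_\delta/\varepsilon^2$ is absent, so the first inequality of (4.4), which controls precisely that term, is no longer needed, and the second inequality with $c_0=0$ reduces to the classical stability condition for the explicit scheme,
$0\le \frac{\triangle t}{(\triangle x)^2}+\frac{\triangle t}{(\triangle y)^2}\le \frac{1}{2}.$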
Remark 4.2. We note that Theorem 4.1 also holds for the homogeneous Dirichlet boundary condition. Moreover, we can establish stability criteria in three space dimensions. Indeed, assume for simplicity that $\Omega:=(0, 1)\times (0, 1) \times (0, 1)$, and let $\triangle z$ denote the mesh size along the $z$-axis. Also, let $c_0 \in (0, 1)$ and assume that
$0 < \triangle t \le \frac{c_0\delta\varepsilon^2}{1-\delta} \quad \mbox{ and } \quad 0\le \frac{\triangle t}{(\triangle x)^2}+\frac{\triangle t}{(\triangle y)^2}+\frac{\triangle t}{(\triangle z)^2}\le \frac{1-c_0}2 .$ |
Then, a boundedness result similar to (4.5) holds for $\Omega:=(0, 1)\times (0, 1) \times (0, 1) \subset {\mathbb R}^3$.
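Before turning to the experiments, the following minimal Python/NumPy sketch illustrates one explicit update of the form appearing in (4.8)-(4.9), together with a check of the criteria (4.4). It is only an illustration under our reading of the scheme: the homogeneous Neumann condition (4.2) is imposed by copying the neighbouring interior values, and $f_\delta$ is taken as the piecewise-linear cut-off suggested by (4.13) and (4.18), with the assumed value $f_\delta(z)=-z$ on $[-1+\delta, 1-\delta]$; all function and variable names are ours, not the paper's.

```python
import numpy as np

def f_delta(z, delta):
    """Piecewise-linear cut-off (our reading of (3.2)): f_delta(z) = -z on
    [-1+delta, 1-delta] (assumed), (1-delta)/delta*(z-1) on [1-delta, 1],
    and (1-delta)/delta*(z+1) on [-1, -1+delta], cf. (4.13) and (4.18)."""
    out = -z
    out = np.where(z >= 1.0 - delta, (1.0 - delta) / delta * (z - 1.0), out)
    out = np.where(z <= -1.0 + delta, (1.0 - delta) / delta * (z + 1.0), out)
    return out

def explicit_step(u, dt, dx, dy, eps, delta):
    """One explicit update, cf. (4.8): interior points first, then the
    homogeneous Neumann boundary condition (cf. (4.2)) by copying neighbours."""
    rx, ry = dt / dx**2, dt / dy**2
    v = u.copy()
    v[1:-1, 1:-1] = (rx * (u[:-2, 1:-1] + u[2:, 1:-1])
                     + ry * (u[1:-1, :-2] + u[1:-1, 2:])
                     + (1.0 - 2.0 * rx - 2.0 * ry) * u[1:-1, 1:-1]
                     - dt / eps**2 * f_delta(u[1:-1, 1:-1], delta))
    v[0, :], v[-1, :] = v[1, :], v[-2, :]
    v[:, 0], v[:, -1] = v[:, 1], v[:, -2]
    return v

def satisfies_criteria(dt, dx, dy, eps, delta, c0):
    """Check the stability criteria (4.4) for a given c0 in (0, 1); assumes dt > 0."""
    return (dt <= c0 * delta * eps**2 / (1.0 - delta)
            and dt / dx**2 + dt / dy**2 <= (1.0 - c0) / 2.0)
```

If this sketch matches the scheme (4.8), then Theorem 4.1 says that, under (4.4), repeated calls to explicit_step keep all grid values in $[-1, 1]$.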
Using Theorem 4.1, we now set up stable numerical experiments for (DP)$^\varepsilon_\delta$ as follows. We set
$\Omega := (0,1) \times (0,1), \quad T = 0.01, \quad \delta = 0.01, \quad \mbox{ and } \quad \Delta x = \Delta y = 0.005$ |
as numerical data. Also, we consider the following initial data $u_{0}^\varepsilon (x, y)$ defined by
$u_0^\varepsilon (x,y) = \left\{ {\begin{array}{*{20}{c}} {0.2,}&{{\text{ if }}(x,y) \in [0.25,0.75] \times [0.25,0.75],} \\ { - 0.7,}&{{\text{ if }}(x,y) \in \Omega \backslash [0.25,0.75] \times [0.25,0.75].} \end{array}} \right.$ | (4.24) |
The graph of the initial data $u_0^\varepsilon(x, y)$ is shown in Figure 8.
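For concreteness, here is a minimal NumPy sketch of the initial data (4.24) on the grid above, assuming grid points $x_i = i\triangle x$ and $y_j = j\triangle y$ on $[0, 1]$ (the variable names are ours):

```python
import numpy as np

dx = dy = 0.005
x = np.arange(0.0, 1.0 + dx / 2.0, dx)   # x_i = i*dx, i = 0, ..., Nx
y = np.arange(0.0, 1.0 + dy / 2.0, dy)   # y_j = j*dy, j = 0, ..., Ny
X, Y = np.meshgrid(x, y, indexing="ij")

# Initial data (4.24): 0.2 on [0.25, 0.75] x [0.25, 0.75], -0.7 elsewhere in Omega.
inside = (X >= 0.25) & (X <= 0.75) & (Y >= 0.25) & (Y <= 0.75)
u0 = np.where(inside, 0.2, -0.7)
```

The array u0 can then be advanced in time with the explicit_step sketch above.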
Now, setting $\varepsilon=0.08$ and $\triangle t=0.000005$, we take $c_0=0.1$. Then, we observe that:
$\frac{\triangle t}{(\triangle x)^2}+\frac{\triangle t}{(\triangle y)^2}=\frac{0.000005}{(0.005)^2}\times 2=0.4<0.45=\frac{1-c_0}{2}$ |
and
$\frac{c_0\delta\varepsilon^2}{1-\delta}=\frac{0.1\times0.01\times(0.08)^2}{1-0.01}=0.0000064646464\cdots .$ |
Therefore, we have
$\frac{c_0\delta\varepsilon^2}{1-\delta}=0.0000064646464\cdots >\triangle t,$ |
which implies that the stability criteria (4.4) hold. Thus, we obtain a stable numerical experiment for (DP)$^\varepsilon_\delta$, shown in Figure 9.
Next, we set $\varepsilon=0.01$, $\triangle t=0.000005$, and consider $c_0=0.1$. Then, we find that:
$\frac{\triangle t}{(\triangle x)^2}+\frac{\triangle t}{(\triangle y)^2} =\frac{0.000005}{(0.005)^2}\times 2=0.4<0.45=\frac{1-c_0}{2},$ |
and
$\frac{c_0\delta\varepsilon^2}{1-\delta}=\frac{0.1\times0.01\times(0.01)^2}{1-0.01} =0.00000010101010 \cdots <\triangle t.$ |
Therefore, the criteria (4.4) do not hold for this choice of $\triangle t$; indeed, the first inequality of (4.4) would require $c_0 \ge \triangle t(1-\delta)/(\delta\varepsilon^2) = 4.95 > 1$. Hence, the numerical experiment for (DP)$^\varepsilon_\delta$ is unstable, as shown in Figure 10.
Finally, we consider the case $\varepsilon=0.01$, $\triangle t=0.0000005$, and set $c_0=0.5$. We then observe that:
$\frac{\triangle t}{(\triangle x)^2}+\frac{\triangle t}{(\triangle y)^2}=\frac{0.0000005}{(0.005)^2}\times 2=0.04<0.25=\frac{1-c_0}{2},$ |
and
$\frac{c_0\delta\varepsilon^2}{1-\delta}=\frac{0.5\times0.01\times(0.01)^2}{1-0.01}=0.00000050505050 \cdots >\triangle t,$ |
which implies that the inequalities (4.4) hold. Therefore, we obtain a stable numerical experiment for (DP)$^\varepsilon_\delta$, shown in Figure 11.
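The three arithmetic checks above can be reproduced with the satisfies_criteria helper from the sketch following Remark 4.2 (again, a helper of our own, not part of the paper):

```python
dx = dy = 0.005
delta = 0.01

# (eps, dt, c0) for the three experiments reported in Figures 9, 10 and 11.
cases = [(0.08, 5.0e-6, 0.1), (0.01, 5.0e-6, 0.1), (0.01, 5.0e-7, 0.5)]
for eps, dt, c0 in cases:
    print(eps, dt, c0, satisfies_criteria(dt, dx, dy, eps, delta, c0))
# Prints True, False, True, in agreement with the checks above.
```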
Remark 4.3. By Theorem 4.1, stable numerical results for (DP)$_\delta^\varepsilon$ are obtained if we choose suitable values of the constants $\varepsilon$ and $\delta$ and of the mesh sizes $\triangle t$, $\triangle x$, and $\triangle y$. Since (4.4) forces $\triangle t \le c_0\delta\varepsilon^2/(1-\delta)$, the admissible time step becomes very small for small $\varepsilon$; therefore, if we wish to perform numerical experiments for (P)$^\varepsilon$ with sufficiently small $\varepsilon$, we find it preferable to treat the original problem (P)$^\varepsilon$ directly, for instance by the primal-dual active set method of [3], the Lagrange multiplier method of [9], and so on.
From Theorem 4.1 and the numerical experiments presented above, we conclude that
(i) the time step $\triangle t$ and the spatial mesh sizes $\triangle x$, $\triangle y$ must satisfy the constraints
$0 < \triangle t \le \frac{c_0\delta\varepsilon^2}{1-\delta},\quad 0\le \frac{\triangle t}{(\triangle x)^2}+\frac{\triangle t}{(\triangle y)^2}\le \frac{1-c_0}2 \ \ \mbox{ for some constant } c_0\in (0,1),$ |
to generate stable numerical simulations of (DP)$^\varepsilon_\delta$;
(ii) the quantity $\delta\varepsilon^2/(1-\delta)$ plays a key role in the numerical experiments for (DE)$^\varepsilon_\delta$ and (DP)$^\varepsilon_\delta$ (cf. Theorems 3.1 and 4.1);
(iii) our new approximate method (1.7) is preferable to the Yosida approximation (1.6), because the solutions of (DP)$^\varepsilon_\delta$ also take values in $[-1, 1]$ (cf. (DE)$^\varepsilon_\delta$).
This research was supported by JSPS KAKENHI Grant-in-Aid for Young Scientists (B), No. 16K17622 and Scientific Research (C), No. 26400179. The authors express their gratitude to Professor Toyohiko Aiki for his valuable and useful comments. The authors also express their gratitude to an anonymous referee for reviewing the original manuscript and for many valuable comments that helped clarify and refine this paper.
[1] | S. Allen and J. Cahn, A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening. Acta Metall., 27 (1979), 1084-1095. |
[2] | H. Attouch, Variational Convergence for Functions and Operators. Pitman Advanced Publishing Program, Boston-London-Melbourne, 1984. |
[3] | L. Blank, H. Garcke, and L. Sarbu, Primal-dual active set methods for Allen-Cahn variational inequalities with nonlocal constraints. Numer. Methods Partial Differential Equations, 29 (2013), 999-1030. |
[4] | H. Brézis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam, 1973. |
[5] | H. Brézis, M. G. Crandall, and A. Pazy, Perturbations of nonlinear maximal monotone sets in Banach space. Comm. Pure Appl. Math., 23 (1970), 123-144. |
[6] | L. Bronsard and R. V. Kohn, Motion by mean curvature as the singular limit of Ginzburg-Landau dynamics. J. Differential Equations, 90 (1991), 211-237. |
[7] | X. Chen and C. M. Elliott, Asymptotics for a parabolic double obstacle problem. Proc. Roy. Soc. London Ser. A, 444 (1994), 429-445. |
[8] | M. H. Farshbaf-Shaker, T. Fukao, and N. Yamazaki, Singular limit of Allen-Cahn equation with constraints and its Lagrange multiplier. Discrete Contin. Dyn. Syst., AIMS Proceedings (2015), 418-427. |
[9] | M. H. Farshbaf-Shaker, T. Fukao, and N. Yamazaki, (In Press) Lagrange multiplier and singular limit of double-obstacle problems for the Allen-Cahn equation with constraint. Math. Methods Appl. Sci.. |
[10] | X. Feng and A. Prohl, Numerical analysis of the Allen-Cahn equation and approximation for mean curvature flows. Numer. Math., 94 (2003), 33-65. |
[11] | X. Feng, H. Song, and T. Tang, Nonlinear stability of the implicit-explicit methods for the Allen-Cahn equation. Inverse Probl. Imaging, 7 (2013), 679-695. |
[12] | P. C. Fife, Dynamics of internal layers and diffusive interfaces. CBMS-NSF Regional Conf. Ser. in: Appl. Math., 53 (1988), SIAM, Philadelphia. |
[13] | A. Friedman, Partial Differential Equations of Parabolic Type, Prentice-Hall, Inc., Englewood Cliffs, N. J., 1964. |
[14] | Y. Giga, Y. Kashima, and N. Yamazaki, Local solvability of a constrained gradient system of total variation. Abstr. Appl. Anal., 2004 (2004), 651-682. |
[15] | A. Ito, N. Yamazaki, and N. Kenmochi, Attractors of nonlinear evolution systems generated by time-dependent subdifferentials in Hilbert spaces. Dynamical systems and differential equations, Vol. I (Springfield, MO, 1996), Discrete Contin. Dynam. Systems 1998, Added Volume I, 327-349. |
[16] | N. Kenmochi, Solvability of nonlinear evolution equations with time-dependent constraints and applications. Bull. Fac. Education, Chiba Univ., 30 (1981), 1-87. |
[17] | N. Kenmochi, Monotonicity and compactness methods for nonlinear variational inequalities. Handbook of Differential Equations, Stationary Partial Differential Equations, Vol. 4 (2007) ed. M. Chipot, Chapter 4, 203-298, North Holland, Amsterdam. |
[18] | N. Kenmochi and M. Niezgódka, Systems of nonlinear parabolic equations for phase change problems. Adv. Math. Sci. Appl., 3 (1993/94), Special Issue, 89-117. |
[19] | U. Mosco, Convergence of convex sets and of solutions of variational inequalities. Adv. Math., 3 (1969), 510-585. |
[20] | T. Ohtsuka, Motion of interfaces by an Allen-Cahn type equation with multiple-well potentials. Asymptot. Anal., 56 (2008), 87-123. |
[21] | T. Ohtsuka, K. Shirakawa, and N. Yamazaki, Optimal control problems of singular diffusion equation with constraint. Adv. Math. Sci. Appl., 18 (2008), 1-28. |
[22] | T. Ohtsuka, K. Shirakawa, and N. Yamazaki, Convergence of numerical algorithm for optimal control problem of Allen-Cahn type equation with constraint. Proceedings of International Conference on: Nonlinear Phenomena with Energy Dissipation-Mathematical Analysis, Modelling and Simulation, GAKUTO Intern. Ser. Math. Appl., Vol. 29 (2008), 441-462. |
[23] | T. Ohtsuka, K. Shirakawa, and N. Yamazaki, Optimal control problem for Allen-Cahn type equation associated with total variation energy. Discrete Contin. Dyn. Syst. Ser. S, 5 (2012), 159-181. |
[24] | J. Shen and X. Yang, Numerical approximations of Allen-Cahn and Cahn-Hilliard equations. Discrete Contin. Dyn. Syst. Ser. A, 28 (2010), 1669-1691. |
[25] | T. Suzuki, K. Takasao, and N. Yamazaki, Remarks on numerical experiments of Allen-Cahn equations with constraint via Yosida approximation. Adv. Numer. Anal., 2016, Article ID 1492812, 16 pages. |
[26] | Y. Tonegawa, Integrality of varifolds in the singular limit of reaction-diffusion equations. Hiroshima Math. J., 33 (2003), 323-341. |
[27] | X. Yang, Error analysis of stabilized semi-implicit method of Allen-Cahn equation. Discrete Contin. Dyn. Syst. Ser. B, 11 (2009), 1057-1070. |
[28] | J. Zhang and Q. Du, Numerical studies of discrete approximations to the Allen-Cahn equation in the sharp interface limit. SIAM J. Sci. Comput., 31 (2009), 3042-3063. |
number of iterations $i$ | the value of $u^i$ to (DE)$^\varepsilon_\delta$ | the value of $u^i$ to (YDE)$^\varepsilon_\delta$
--- | --- | ---
0 | 0.100000 | 0.100000
1 | 0.101000 | 0.101000
2 | 0.102010 | 0.102010
3 | 0.103030 | 0.103030
4 | 0.104060 | 0.104060
5 | 0.105101 | 0.105101
⋮ | ⋮ | ⋮
224 | 0.928940 | 0.928940
225 | 0.938230 | 0.938230
226 | 0.947612 | 0.947612
227 | 0.957088 | 0.957088
228 | 0.966659 | 0.966659
229 | 0.976325 | 0.976325
230 | 0.986089 | 0.986089
231 | 0.995950 | 0.995950
232 | 0.999959 | 1.005909
233 | 1.000000 | 1.010059
234 | 1.000000 | 1.010101
235 | 1.000000 | 1.010101
236 | 1.000000 | 1.010101
237 | 1.000000 | 1.010101

number of iterations $i$ | the value of $u^i$
--- | ---
0 | 0.100000
1 | 0.102000
2 | 0.104040
⋮ | ⋮
120 | 0.994959
121 | 1.004940
122 | 0.995159
123 | 1.004745
124 | 0.995350
125 | 1.004557
126 | 0.995534
127 | 1.004376
128 | 0.995711
129 | 1.004203
⋮ | ⋮
574 | 0.999999
575 | 1.000001
576 | 0.999999
577 | 1.000000
578 | 1.000000
579 | 1.000000
580 | 1.000000

number of iterations $i$ | the value of $u^i$
--- | ---
0 | 0.100000
1 | 0.102020
2 | 0.104081
3 | 0.106184
4 | 0.108329
5 | 0.110517
⋮ | ⋮
111 | 0.920801
112 | 0.939403
113 | 0.958381
114 | 0.977742
115 | 0.997495
116 | 1.002505
117 | 0.997495
118 | 1.002505
119 | 0.997495
120 | 1.002505

number of iterations $i$ | the value of $u^i$
--- | ---
0 | 0.886917
1 | 0.904834
2 | 0.923114
3 | 0.941763
4 | 0.960788
5 | 0.980198
6 | 1.000000
7 | 1.000000
8 | 1.000000
9 | 1.000000
10 | 1.000000