
In this paper, we study the numerical solution of a parabolic complementarity problem, a widely used model in many fields such as option pricing and risk measures. Using a power penalty method, we reformulate the complementarity problem as a nonlinear parabolic partial differential equation (PDE). We then use the trapezoidal rule for the time discretization, which requires solving a nonlinear equation at each time step. We solve this nonlinear equation by a fixed-point iteration, in which solving a tridiagonal linear system is the major computation. We present an efficient backward substitution algorithm to handle this linear system. Numerical results illustrate the advantage of the proposed algorithm (compared to the built-in backslash command in Matlab) in terms of CPU time.
Citation: Haiyan Song, Fei Sun. A numerical method for parabolic complementarity problem[J]. Electronic Research Archive, 2023, 31(2): 1048-1064. doi: 10.3934/era.2023052
We are interested in the following parabolic complementarity problem
\[
\begin{cases}
\dfrac{\partial u}{\partial t}-\dfrac{\partial}{\partial x}\left(a(x)\dfrac{\partial u}{\partial x}\right)\le g(x,t),\\[2pt]
u(x,t)-u^*(x,t)\le0,\\[2pt]
\left(\dfrac{\partial u}{\partial t}-\dfrac{\partial}{\partial x}\left(a(x)\dfrac{\partial u}{\partial x}\right)-g(x,t)\right)\cdot\left(u(x,t)-u^*(x,t)\right)=0,
\end{cases}
\tag{1.1}
\]
where \(x\in\Omega\subset\mathbb{R}\), \(t\in(0,T)\) and \(a(x)>0\). The functions \(u^*\) and \(g\) are known, and the initial and boundary conditions for the unknown solution \(u\) are
\[
u(x,t)=0\ \text{for}\ (x,t)\in\partial\Omega\times(0,T),\qquad u(x,0)=u_0(x)\ \text{for}\ x\in\Omega. \tag{1.2}
\]
This kind of dynamic problem arises in many real-world applications, such as engineering, stochastic control, mechanics, economics, and risk measures (see, e.g., [1,2,3,4,5,6,7,8]). In general, an analytical solution of (1.1) is available only in some special cases, and hence we have to rely on numerical methods in practice.
Numerical methods for problem (1.1) have been studied by various authors. A popular approach is to reformulate the problem as a parabolic PDE using a linear penalty term of the form \(\mu[p-u^*]_+\), where \(\mu\gg1\) is a constant, \(p\) denotes the solution of the penalized equation and \([a]_+:=\max\{a,0\}\). This method has been discussed in [2,3,9,10,11,12]. It was proved in [2] that the solution of the penalized equation converges to that of the original problem at a rate of \(O(\mu^{-1/2})\). In [10,13,14], power penalty methods have been used for solving constrained optimization problems via the improved penalty term
\[
\mu\left[p(x,t)-u^*(x,t)\right]_+^{1/k}, \tag{1.3}
\]
where \(k\ge1\) is a constant. (This includes the linear penalty \(k=1\) mentioned above as a special case.) The authors proved in [10] that \(p\) converges to the exact solution \(u\) in a proper Sobolev norm at a rate of \(O(\mu^{-k/2})\). However, the penalty term (1.3) has an unbounded derivative as \(p-u^*\to0^+\), and therefore it needs to be smoothed locally [10]. In [11], the authors generalized the penalty term (1.3) to the form
\[
\mu\left(\left[p(x,t)-u^*(x,t)\right]_++\epsilon\right)^{1/k}, \tag{1.4}
\]
where \(0<\epsilon\ll1\) is a smoothing parameter. The authors proved that the penalized solution converges to the solution of the original problem at a rate of \(O\big(\left[\mu^{-k}+\epsilon(1+\mu\epsilon^{1/k})\right]^{1/2}\big)\).
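To make the smoothing concrete, the penalty term (1.4) translates into a single Matlab expression. The following is a minimal sketch in our own notation (the handle name penalty and the argument names are our choices):

```matlab
% Smoothed power penalty term (1.4): mu*([p - u*]_+ + epsilon)^(1/k).
% p and ustar are arrays of matching size; mu, k, epsilon are scalars.
penalty = @(p, ustar, mu, k, epsilon) mu*(max(p - ustar, 0) + epsilon).^(1/k);
```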
The aim of this paper is to study the numerical solution of the following penalized equation
\[
\begin{cases}
\dfrac{\partial p}{\partial t}-\dfrac{\partial}{\partial x}\left(a(x)\dfrac{\partial p}{\partial x}\right)+\mu\left([p-u^*]_++\epsilon\right)^{1/k}=g+\mu\epsilon^{1/k},&(x,t)\in\Omega\times(0,T),\\
p(x,t)=0,&(x,t)\in\partial\Omega\times(0,T),\\
p(x,0)=u_0(x),&x\in\Omega.
\end{cases}
\tag{1.5}
\]
In the following, we let \(f=g+\mu\epsilon^{1/k}\). We first discretize (1.5) in space by the centered finite difference method [15]. This leads to a large-scale system of nonlinear ordinary differential equations, which we discretize in time by the trapezoidal rule [16,17]. At each time step, this implicit discretization requires solving a nonlinear system; we handle it by a fixed-point iteration, in which the major computation is a large-scale tridiagonal linear system. By exploiting the structure of the coefficient matrix we propose a backward substitution algorithm for this linear system. Numerically, we find that this algorithm is more efficient, in terms of CPU time, than the widely used built-in command 'backslash' in Matlab.
The rest of this paper is organized as follows. In Section 2 we recall some results concerning the unique solvability of (1.5) and convergence of p to u. In Section 3 we present the space and time discretization for (1.5). We present the backward substitution algorithm in Section 4 for solving the large-scale tridiagonal linear system at each time step. Numerical results are given in Section 5 and we conclude this paper in Section 6.
In this section, we recall some results for the penalized parabolic equation (1.5) at the continuous level. To this end, we introduce some notation. For \(1\le s\le\infty\), we let \(L^s(\Omega)=\{v:(\int_\Omega|v(x)|^s\,dx)^{1/s}<\infty\}\) denote the space of all \(s\)-power integrable functions on \(\Omega\). The inner product on \(L^2(\Omega)\) is denoted by \((\cdot,\cdot)_\Omega\). We use \(\|\cdot\|_{L^s(\Omega)}\) to denote the norm on \(L^s(\Omega)\) and \(H^s(\Omega)\) to denote the Sobolev space with norm \(\|\cdot\|_{s,\Omega}\). Let \(C^s(\Omega)\) (respectively, \(C^s(\bar\Omega)\)) be the set of functions whose derivatives up to order \(s\) are continuous on \(\Omega\) (respectively, \(\bar\Omega\)). Let \(H_0^s(\Omega)=\{v\in H^s(\Omega):v(x)=0,\ x\in\partial\Omega\}\). For any Hilbert space \(H(\Omega)\), we use \(L^s(0,T;H(\Omega))\) to denote the space
\[
L^s(0,T;H(\Omega))=\left\{v(\cdot,t):v(\cdot,t)\in H(\Omega)\ \text{a.e. in}\ (0,T);\ \|v(\cdot,t)\|_H\in L^s(0,T)\right\},
\]
where \(1\le s\le\infty\) and \(\|\cdot\|_H\) denotes the natural norm on \(H(\Omega)\). The norm on this space is denoted by \(\|\cdot\|_{L^s(0,T;H)}\), i.e.,
\[
\|v\|_{L^s(0,T;H(\Omega))}=\left(\int_0^T\|v(\cdot,t)\|_H^s\,dt\right)^{1/s}.
\]
We use \(H^{-1}(\Omega)\) to denote the dual space of \(H_0^1(\Omega)\) and \(\langle\cdot,\cdot\rangle\) to denote the duality pairing between a Hilbert space and its dual.
Integration by parts shows that the variational problem corresponding to (1.5) reads: find \(p(t)\in H_0^1(\Omega)\) such that for all \(v\in H_0^1(\Omega)\) it holds
\[
\left(\frac{\partial p(t)}{\partial t},v\right)+\left(a(x)\,\partial_xp(t),\partial_xv\right)+\mu\left(([p(t)-u^*]_++\epsilon)^{1/k},v\right)=(f,v), \tag{2.1}
\]
with the initial condition \(p(x,0)=u_0(x)\) in \(\Omega\) and \(f=g+\mu\epsilon^{1/k}\).
Theorem 1. For \(a(x)>0\), problem (2.1) has a unique solution for fixed \(\epsilon\ge0\) and \(\mu\ge1\).
Proof. Clearly, problem (2.1) is equivalent to
\[
\left(\frac{\partial p(t)}{\partial t},v\right)+\left(a(x)\,\partial_xp(t),\partial_xv\right)+\mu\left(\Phi_\epsilon(p(t)),v\right)=\left(f-\mu(\phi(0)+\epsilon)^{1/k},v\right),
\]
where \(\phi(w):=[w-u^*]_+\) and \(\Phi_\epsilon(w)=(\phi(w)+\epsilon)^{1/k}-(\phi(0)+\epsilon)^{1/k}\). Clearly,
\[
\Phi_\epsilon(0)=(\phi(0)+\epsilon)^{1/k}-(\phi(0)+\epsilon)^{1/k}=0,
\]
and \(\Phi_\epsilon(w)\) is a monotonically increasing function of \(w\) because of the monotonicity of \(([w-u^*]_++\epsilon)^{1/k}\) in \(w\). This implies
\[
\langle\Phi_\epsilon(v)-\Phi_\epsilon(w),v-w\rangle\ge0,\quad\forall v,w\in H_0^1(\Omega). \tag{2.2a}
\]
Since \(a(x)\ge a_0\) for some constant \(a_0>0\), it holds
\[
(a(x)\nabla v,\nabla v)\ge a_0(\nabla v,\nabla v)=a_0\|\nabla v\|_0^2\ge C\|v\|_1^2, \tag{2.2b}
\]
where we have used the Poincaré–Friedrichs inequality \(\|\nabla v\|_0\ge C\|v\|_1\) for all \(v\in H_0^1(\Omega)\). By integration by parts, we therefore have
\[
\langle(-\partial_x(a(x)\partial_xv))-(-\partial_x(a(x)\partial_xw)),v-w\rangle=\langle a(x)(\partial_xv-\partial_xw),\partial_x(v-w)\rangle\ge0, \tag{2.2c}
\]
which holds for any \(v,w\in H_0^1(\Omega)\). From (2.2a) and (2.2c) we have
\[
\langle(\mu\Phi_\epsilon(v)-\partial_x(a(x)\partial_xv))-(\mu\Phi_\epsilon(w)-\partial_x(a(x)\partial_xw)),v-w\rangle\ge0. \tag{2.2d}
\]
Similarly, by using (2.2b) it holds
\[
\langle\mu\Phi_\epsilon(v)-\partial_x(a(x)\partial_xv),v\rangle\ge C\|v\|_1^2+\mu(\Phi_\epsilon(v),v)\ge C\|v\|_1^2, \tag{2.3}
\]
because \((\Phi_\epsilon(v),v)=(\Phi_\epsilon(v)-\Phi_\epsilon(0),v-0)\ge0\).
From the definition of \(\Phi_\epsilon\) we have
\[
\begin{aligned}
(\Phi_\epsilon(v),w)&=\int_\Omega([v-u^*]_++\epsilon)^{1/k}w\,dx-\int_\Omega([-u^*]_++\epsilon)^{1/k}w\,dx\\
&\le\left[\left(\int_\Omega([v-u^*]_++\epsilon)^{2/k}\,dx\right)^{1/2}+C_1(t)\right]\|w\|_0\\
&\le C\left[\left(\int_\Omega([v-u^*]_++\epsilon)^{2}\,dx\right)^{1/(2k)}+C_1(t)\right]\|w\|_0\\
&=C\left(\|[v-u^*]_++\epsilon\|_0^{1/k}+C_1(t)\right)\|w\|_0\\
&\le\left(C_2\|v\|_0+C_1(t)\right)\|w\|_0,
\end{aligned}
\]
where we have used \(\|v\|_0^{1/k}\le\max\{1,\|v\|_0\}\) for \(k\ge1\). Here, \(C_1\in L^2(0,T)\) is a generic positive function of \(t\) and \(C_2>0\) is a generic constant independent of \(\epsilon\) and \(k\). This gives
\[
\langle\mu\Phi_\epsilon(v)-\partial_x(a(x)\partial_xv),w\rangle\le\|w\|_1\left(C_1(t)+C_2\|v\|_1\right),\quad\forall v,w\in H_0^1(\Omega).
\]
Dividing both sides of this inequality by \(\|w\|_1\) and taking the supremum with respect to \(w\), we obtain
\[
\|\mu\Phi_\epsilon(v)-\partial_x(a(x)\partial_xv)\|_{H^{-1}(\Omega)}\le C_1(t)+C_2\|v\|_1. \tag{2.4}
\]
Now, by using (2.2d), (2.3) and (2.4), the existence and uniqueness of the solution of (2.1) are guaranteed by the theory established in [4,18].
We now present the convergence of the approximate solution p of the penalized problem (1.5) to that of the original problem (1.1).
Theorem 2. Let \(a(x)>0\) and \(\frac{\partial u}{\partial t}\in L^{k+1}(\Omega)\). Then there exists a constant \(C>0\), independent of \(u\), \(p\) and \(\mu\), such that
\[
\|u-p\|_{L^\infty(0,T;L^2(\Omega))}+\|u-p\|_{L^2(0,T;H_0^1(\Omega))}\le C\left[\frac{1}{\mu^k}+\epsilon\left(\mu\epsilon^{1/k}+1\right)\right]^{1/2}, \tag{2.5}
\]
where \(\mu\), \(k\) and \(\epsilon\) are the parameters used in (1.5).
Proof. In the asymptotic sense (i.e., ϵ→0) it is clear that (1.5) is equivalent to
\[
\frac{\partial p}{\partial t}-\frac{\partial}{\partial x}\left(a(x)\frac{\partial p}{\partial x}\right)+\mu\left([p-u^*]_++\epsilon\right)^{1/k}=g. \tag{2.6}
\]
(Here we have used \(f=g+\mu\epsilon^{1/k}\).) In [3,11,12] it was proved that the solution of (2.6) and the solution of the original problem (1.1) satisfy (2.5). Since
\[
\mu\epsilon^{1/k}\le\epsilon^{1/2}\left(\mu\epsilon^{1/k}+1\right)^{1/2},
\]
it is clear that the solution of (1.5) and the solution of the original problem (1.1) satisfy (2.5) as well.
We now present the space and time discretizations of the penalized equation (1.5). To this end, we partition the computational domain \(\Omega\times[0,T]\) with mesh sizes \(h\) and \(\tau\) and denote an arbitrary grid point by \((jh,n\tau)\), where \(j=0,1,\dots,J\) and \(n=0,1,\dots,N\). We first discretize the spatial derivative by the centered finite difference formula
\[
\left.\frac{\partial}{\partial x}\left(a(x)\frac{\partial p}{\partial x}\right)\right|_{x=x_j}
=\frac{\frac{a_{j+1}p_{j+1}(t)-a_jp_j(t)}{h}-\frac{a_jp_j(t)-a_{j-1}p_{j-1}(t)}{h}}{h}+r_j
=\frac{a_{j+1}p_{j+1}(t)-2a_jp_j(t)+a_{j-1}p_{j-1}(t)}{h^2}+r_j,
\]
where \(a_l=a(x_l)\), \(p_l(t)\approx p(x_l,t)\) (with \(l=j-1,j,j+1\)), \(r_j=O(h^2)\) is the truncation error and \(j=1,2,\dots,J-1\). For \(j=0\) and \(j=J\) we have \(p_j(t)=0\) according to the boundary condition in (1.5).
Dropping the truncation error in the above formula gives the approximation
\[
\left.\frac{\partial}{\partial x}\left(a(x)\frac{\partial p}{\partial x}\right)\right|_{x=x_j}\approx\frac{a_{j+1}p_{j+1}(t)-2a_jp_j(t)+a_{j-1}p_{j-1}(t)}{h^2}.
\]
Substituting this into (1.5) gives the semi-discrete system
\[
p'(t)+Ap(t)+\mu\left([p(t)-u^*(t)]_++\epsilon\right)^{1/k}=f(t),\quad t\in(0,T), \tag{3.1}
\]
with initial condition \(p(0)=u_0\), where
\[
p(t)=\begin{bmatrix}p_1(t)\\p_2(t)\\\vdots\\p_{J-1}(t)\end{bmatrix},\quad
u^*(t)=\begin{bmatrix}u^*(x_1,t)\\u^*(x_2,t)\\\vdots\\u^*(x_{J-1},t)\end{bmatrix},\quad
f(t)=\begin{bmatrix}f(x_1,t)\\f(x_2,t)\\\vdots\\f(x_{J-1},t)\end{bmatrix},\quad
\epsilon=\begin{bmatrix}\epsilon\\\epsilon\\\vdots\\\epsilon\end{bmatrix}. \tag{3.2a}
\]
The matrix \(A\) is tridiagonal:
\[
A=\frac{1}{h^2}\begin{bmatrix}
2a_1&-a_2&&&\\
-a_1&2a_2&-a_3&&\\
&\ddots&\ddots&\ddots&\\
&&-a_{J-3}&2a_{J-2}&-a_{J-1}\\
&&&-a_{J-2}&2a_{J-1}
\end{bmatrix}. \tag{3.2b}
\]
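For illustration, the matrix in (3.2b) can be assembled in a few lines of Matlab. This is a sketch under our own naming; the coefficient handle afun is a placeholder for any positive \(a(x)\):

```matlab
% Assemble the tridiagonal matrix A of (3.2b) on the interior grid.
J    = 100;  h = 1/J;           % illustrative mesh size
x    = (1:J-1)'*h;              % interior grid points x_j = j*h
afun = @(x) 1 + 0.5*x.^2;       % placeholder positive coefficient a(x)
av   = afun(x);                 % a_j = a(x_j)
A    = (diag(2*av) - diag(av(2:end), 1) - diag(av(1:end-1), -1))/h^2;
```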
We now introduce the time discretization of (3.1). To match the second-order accuracy of the space discretization, we need a second-order time discretization. To this end, we rewrite (3.1) as an integral equation on the interval \([t_n,t_{n+1}]\):
\[
\int_{t_n}^{t_{n+1}}p'(s)\,ds=\int_{t_n}^{t_{n+1}}\left(f(s)-Ap(s)-\mu([p(s)-u^*(s)]_++\epsilon)^{1/k}\right)ds,
\]
i.e.,
\[
p(t_{n+1})-p(t_n)=\int_{t_n}^{t_{n+1}}\left(f(s)-Ap(s)-\mu([p(s)-u^*(s)]_++\epsilon)^{1/k}\right)ds. \tag{3.3}
\]
Let \(f(t)-Ap(t)-\mu([p(t)-u^*(t)]_++\epsilon)^{1/k}=(q_1(t),q_2(t),\dots,q_{J-1}(t))^\top\). Then,
\[
\int_{t_n}^{t_{n+1}}\left(f(s)-Ap(s)-\mu([p(s)-u^*(s)]_++\epsilon)^{1/k}\right)ds=
\begin{bmatrix}\int_{t_n}^{t_{n+1}}q_1(s)\,ds\\\int_{t_n}^{t_{n+1}}q_2(s)\,ds\\\vdots\\\int_{t_n}^{t_{n+1}}q_{J-1}(s)\,ds\end{bmatrix}.
\]
Let \(q(t)\) be an arbitrary component of the vector function \((q_1(t),q_2(t),\dots,q_{J-1}(t))^\top\). The integral \(\int_{t_n}^{t_{n+1}}q(s)\,ds\) is the area of a trapezoid with one curved edge, as shown in Figure 1 on the left. In general, the precise value of this area is not available. To approximate it, we consider a regular trapezoid with the curved edge replaced by a straight edge; see Figure 1 on the right. Then, we have
\[
\int_{t_n}^{t_{n+1}}q(s)\,ds=\frac{q(t_n)+q(t_{n+1})}{2}(t_{n+1}-t_n)+\eta_n=\frac{q(t_n)+q(t_{n+1})}{2}\tau+\eta_n,
\]
where \(\eta_n=O(\tau^2)\) denotes the approximation error [15, Chapter 2].
Dropping the approximation error and letting \(p_n\approx p(t_n)\), we obtain an approximation of the right-hand side of (3.3):
\[
\int_{t_n}^{t_{n+1}}\left(f(s)-Ap(s)-\mu([p(s)-u^*(s)]_++\epsilon)^{1/k}\right)ds\approx\tau\tilde f_n-\frac{\tau A}{2}(p_n+p_{n+1})-\frac{\tau\mu}{2}\left(([p_n-u^*_n]_++\epsilon)^{1/k}+([p_{n+1}-u^*_{n+1}]_++\epsilon)^{1/k}\right),
\]
where \(\tilde f_n=\frac{f(t_n)+f(t_{n+1})}{2}\). Substituting this into (3.3) gives the full discretization of the penalized equation (1.5):
\[
p_{n+1}-p_n=\tau\tilde f_n-\frac{\tau A}{2}(p_n+p_{n+1})-\frac{\tau\mu}{2}\left(([p_n-u^*_n]_++\epsilon)^{1/k}+([p_{n+1}-u^*_{n+1}]_++\epsilon)^{1/k}\right),
\]
where \(n=0,1,\dots,N-1\). This is a nonlinear system due to the term
\[
\left([p_{n+1}-u^*_{n+1}]_++\epsilon\right)^{1/k}.
\]
To handle this nonlinear system we propose a fixed-point iteration. (One could also use Newton's method [19], but then the Jacobian matrix of the whole nonlinear term has to be computed.) First, we rewrite the system as
\[
p_{n+1}+\frac{\tau A}{2}p_{n+1}=\frac{\tau\mu}{2}\left([p_{n+1}-u^*_{n+1}]_++\epsilon\right)^{1/k}+b_n,\qquad
b_n:=\tau\tilde f_n+p_n-\frac{\tau A}{2}p_n-\frac{\tau\mu}{2}\left([p_n-u^*_n]_++\epsilon\right)^{1/k}. \tag{3.4}
\]
(The vector \(b_n\) is known.) Then, we solve for \(p_{n+1}\) via the iterations
\[
p_{n+1}^{[l+1]}+\frac{\tau A}{2}p_{n+1}^{[l+1]}=\frac{\tau\mu}{2}\left([p_{n+1}^{[l]}-u^*_{n+1}]_++\epsilon\right)^{1/k}+b_n,\quad l=0,1,\dots,l_{\max}-1,\qquad p_{n+1}=p_{n+1}^{[l_{\max}]}, \tag{3.5}
\]
where \(l\) is the iteration index and \(l_{\max}\) is determined by the prescribed tolerance.
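For readability, one time step of (3.4)–(3.5) can be sketched in Matlab as follows. All names (pn, ustar_n, ustar_np1, fn_tilde, tol, lmax) are our own choices, and the backslash solve below may be replaced by the backward substitution algorithm of Section 4:

```matlab
% One time step of (3.4)-(3.5): advance pn to pnext by fixed-point iteration.
% Assumed given: A (cf. (3.2b)), tau, mu, k, epsilon, the vectors pn,
% ustar_n, ustar_np1, fn_tilde = (f(t_n) + f(t_{n+1}))/2, and tol, lmax.
M  = eye(size(A)) + (tau/2)*A;                   % I + tau*A/2
bn = tau*fn_tilde + pn - (tau/2)*(A*pn) ...
     - (tau*mu/2)*(max(pn - ustar_n, 0) + epsilon).^(1/k);   % b_n in (3.4)
p  = pn;                                         % initial guess p^{[0]} = p_n
for l = 1:lmax
    rhs  = (tau*mu/2)*(max(p - ustar_np1, 0) + epsilon).^(1/k) + bn;
    pnew = M \ rhs;                              % the tridiagonal solve (4.1)
    if norm(pnew - p, inf) < tol, p = pnew; break; end
    p = pnew;
end
pnext = p;                                       % accepted p_{n+1}
```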
The following is the convergence analysis for the above fixed-point iteration.
Theorem 3. Let \(a(x)>0\) for \(x\in\Omega\), \(\epsilon>0\) and \(k>1\). Then the fixed-point iteration (3.5) converges to the unique solution at a rate of at least
\[
\rho=\frac{\tau\mu a_{\max}h^2}{\left[2a_{\max}h^2+(2-2\cos(h\pi))\tau\right]k\,\epsilon^{\frac{k-1}{k}}}, \tag{3.6}
\]
provided \(\rho<1\), where \(a_{\max}=\max_{x\in\Omega}a(x)\).
Proof. From (3.5) we have
\[
p_{n+1}^{[l+1]}=\frac{\tau\mu}{2}\left(I_x+\frac{\tau A}{2}\right)^{-1}\left([p_{n+1}^{[l]}-u^*_{n+1}]_++\epsilon\right)^{1/k}+\tilde b_n,
\]
where \(\tilde b_n=(I_x+\frac{\tau A}{2})^{-1}b_n\) and \(I_x\in\mathbb{R}^{(J-1)\times(J-1)}\) is the identity matrix. Let
\[
Y(x)=\left([x-u^*_{n+1}]_++\epsilon\right)^{1/k},\qquad F(x)=\frac{\tau\mu}{2}\left(I_x+\frac{\tau A}{2}\right)^{-1}Y(x)+\tilde b_n. \tag{3.7}
\]
Then, it is sufficient to study the contraction property of the function \(F(x)\), i.e.,
\[
\|F(x_1)-F(x_2)\|_\infty\le\rho\|x_1-x_2\|_\infty, \tag{3.8}
\]
where \(\rho\) is the quantity we need to estimate. It holds
\[
F(x_1)-F(x_2)=\frac{\tau\mu}{2}\left(I_x+\frac{\tau A}{2}\right)^{-1}\left[Y(x_1)-Y(x_2)\right],
\]
which implies
\[
\|F(x_1)-F(x_2)\|_\infty\le\frac{\tau\mu}{2}\left\|\left(I_x+\frac{\tau A}{2}\right)^{-1}\right\|_\infty\|Y(x_1)-Y(x_2)\|_\infty. \tag{3.9}
\]
From the structure of the matrix \(A\) (cf. (3.2b)) we have
\[
I_x+\frac{\tau A}{2}=I_x+\frac{\tau}{2}\Lambda D_a,
\]
where
\[
\Lambda=\frac{1}{h^2}\begin{bmatrix}
2&-1&&&\\
-1&2&-1&&\\
&\ddots&\ddots&\ddots&\\
&&-1&2&-1\\
&&&-1&2
\end{bmatrix},\qquad
D_a=\begin{bmatrix}a_1&&&&\\&a_2&&&\\&&\ddots&&\\&&&a_{J-2}&\\&&&&a_{J-1}\end{bmatrix}.
\]
Let \(\lambda(\cdot)\) denote an arbitrary eigenvalue of the matrix in question. It is clear that
\[
\lambda\left(I_x+\frac{\tau A}{2}\right)=1+\frac{\tau}{2}\lambda(\Lambda D_a).
\]
Since \(a(x)>0\), we know that \(D_a\) is an invertible diagonal matrix with positive diagonal elements. Hence, by a similarity transform it holds
\[
\lambda(\Lambda D_a)=\lambda\left(D_a^{1/2}(\Lambda D_a)D_a^{-1/2}\right)=\lambda\left(D_a^{1/2}\Lambda D_a^{1/2}\right), \tag{3.10a}
\]
where \(D_a^{\pm1/2}=\mathrm{diag}(a_1^{\pm1/2},a_2^{\pm1/2},\dots,a_{J-1}^{\pm1/2})\). The matrix \(D_a^{1/2}\Lambda D_a^{1/2}\) is symmetric positive definite and thus it holds (see, e.g., [20, Chapter 5])
\[
\min_{z\in\mathbb{R}^{J-1}}\frac{z^\top D_a^{1/2}\Lambda D_a^{1/2}z}{z^\top z}\le\lambda\left(D_a^{1/2}\Lambda D_a^{1/2}\right)\le\max_{z\in\mathbb{R}^{J-1}}\frac{z^\top D_a^{1/2}\Lambda D_a^{1/2}z}{z^\top z}. \tag{3.10b}
\]
By letting \(\tilde z=D_a^{1/2}z\) we have
\[
\frac{z^\top D_a^{1/2}\Lambda D_a^{1/2}z}{z^\top z}=\frac{\tilde z^\top\Lambda\tilde z}{\tilde z^\top D_a^{-1}\tilde z}. \tag{3.11}
\]
Since \(\Lambda\) is a symmetric positive definite matrix, from [20, Chapter 5] we have
\[
\lambda_{\min}(\Lambda)\,\tilde z^\top\tilde z\le\tilde z^\top\Lambda\tilde z\le\lambda_{\max}(\Lambda)\,\tilde z^\top\tilde z,\quad\forall\tilde z\in\mathbb{R}^{J-1}.
\]
From [21,22] we know that the eigenvalues of the tridiagonal matrix Λ are given by
\[
\lambda_j(\Lambda)=\frac{2-2\cos\left(\frac{j\pi}{J}\right)}{h^2},\quad j=1,2,\dots,J-1.
\]
Since \(J=\frac1h\), it holds
\[
\lambda_{\min}(\Lambda)=\frac{2-2\cos(h\pi)}{h^2},\qquad\lambda_{\max}(\Lambda)=\frac{2+2\cos(h\pi)}{h^2}.
\]
This gives
\[
\frac{2-2\cos(h\pi)}{h^2}\,\tilde z^\top\tilde z\le\tilde z^\top\Lambda\tilde z\le\frac{2+2\cos(h\pi)}{h^2}\,\tilde z^\top\tilde z,\quad\forall\tilde z\in\mathbb{R}^{J-1}. \tag{3.12a}
\]
Let \(a_{\max}=\max_{x\in\Omega}a(x)\) and \(a_{\min}=\min_{x\in\Omega}a(x)\). Then, it holds
\[
a_{\max}^{-1}\,\tilde z^\top\tilde z\le\tilde z^\top D_a^{-1}\tilde z\le a_{\min}^{-1}\,\tilde z^\top\tilde z,\quad\forall\tilde z\in\mathbb{R}^{J-1}. \tag{3.12b}
\]
Substituting (3.12a) and (3.12b) into (3.11) gives
\[
\frac{2-2\cos(h\pi)}{h^2a_{\max}}=\frac{\lambda_{\min}(\Lambda)}{a_{\max}}\le\frac{z^\top D_a^{1/2}\Lambda D_a^{1/2}z}{z^\top z}\le\frac{\lambda_{\max}(\Lambda)}{a_{\min}}=\frac{2+2\cos(h\pi)}{h^2a_{\min}}.
\]
This together with (3.10a) and (3.10b) gives
\[
\lambda\left(I_x+\frac{\tau A}{2}\right)\in\left[1+\frac{(2-2\cos(h\pi))\tau}{2a_{\max}h^2},\,1+\frac{(2+2\cos(h\pi))\tau}{2a_{\min}h^2}\right]. \tag{3.13}
\]
This indicates that \(I_x+\frac{\tau A}{2}\) is an invertible matrix, since \(\lambda_{\min}(I_x+\frac{\tau A}{2})>0\).
We next estimate the infinity norm of the inverse matrix \((I_x+\frac{\tau A}{2})^{-1}\). By the definition of the infinity norm of a square matrix, it holds
\[
\left\|\left(I_x+\frac{\tau A}{2}\right)^{-1}\right\|_\infty=\max_{z\in\mathbb{R}^{J-1},\,\|z\|_\infty=1}\left\|\left(I_x+\frac{\tau A}{2}\right)^{-1}z\right\|_\infty.
\]
Since \(I_x+\frac{\tau A}{2}\) is similar to a symmetric positive definite matrix (cf. (3.10a)), its eigenvectors form a complete basis of the space \(\mathbb{R}^{J-1}\). Then, without loss of generality, we assume that \(z\) is an arbitrary normalized eigenvector (with corresponding eigenvalue \(\lambda\)), i.e., \((I_x+\frac{\tau A}{2})z=\lambda z\). Hence,
\[
\left(I_x+\frac{\tau A}{2}\right)^{-1}z=\lambda^{-1}z.
\]
This implies
\[
\left\|\left(I_x+\frac{\tau A}{2}\right)^{-1}z\right\|_\infty=|\lambda|^{-1}\in\left[\frac{2a_{\min}h^2}{2a_{\min}h^2+(2+2\cos(h\pi))\tau},\,\frac{2a_{\max}h^2}{2a_{\max}h^2+(2-2\cos(h\pi))\tau}\right].
\]
In summary, we have
\[
\left\|\left(I_x+\frac{\tau A}{2}\right)^{-1}\right\|_\infty\in\left[\frac{2a_{\min}h^2}{2a_{\min}h^2+(2+2\cos(h\pi))\tau},\,\frac{2a_{\max}h^2}{2a_{\max}h^2+(2-2\cos(h\pi))\tau}\right]. \tag{3.14}
\]
We next explore the relationship between \(\|Y(x_1)-Y(x_2)\|_\infty\) and \(\|x_1-x_2\|_\infty\), with \(Y(x)\) the function defined in (3.7). Since
\[
Y(x)=\left(([x_1-u^*_1]_++\epsilon)^{1/k},([x_2-u^*_2]_++\epsilon)^{1/k},\dots,([x_{J-1}-u^*_{J-1}]_++\epsilon)^{1/k}\right)^\top,
\]
it is sufficient to study the contraction property of \(y(x):=([x]_++\epsilon)^{1/k}\) with \(x\in\mathbb{R}\). We claim
\[
|y(x_1)-y(x_2)|\le\frac{1}{k\,\epsilon^{\frac{k-1}{k}}}|x_1-x_2|. \tag{3.15}
\]
We consider the three cases shown in Figure 2. For \(x_1\le0\) and \(x_2\le0\), (3.15) clearly holds. For \(x_1\le0\) and \(x_2>0\) (i.e., the middle case in Figure 2), it holds
\[
|y(x_1)-y(x_2)|=y(x_2)-y(0)=y'(\xi)x_2=\frac{x_2}{k(\xi+\epsilon)^{\frac{k-1}{k}}}\le\frac{x_2-x_1}{k\,\epsilon^{\frac{k-1}{k}}}=\frac{|x_2-x_1|}{k\,\epsilon^{\frac{k-1}{k}}},
\]
where \(\xi\in(0,x_2)\). It remains to consider \(x_2>x_1>0\) (cf. Figure 2 on the right). We have
\[
|y(x_1)-y(x_2)|=y(x_2)-y(x_1)=y'(\xi)(x_2-x_1)=\frac{x_2-x_1}{k(\xi+\epsilon)^{\frac{k-1}{k}}}\le\frac{x_2-x_1}{k\,\epsilon^{\frac{k-1}{k}}}=\frac{|x_2-x_1|}{k\,\epsilon^{\frac{k-1}{k}}}.
\]
In summary, the estimate (3.15) holds in all three cases shown in Figure 2. Hence,
\[
\|Y(x_1)-Y(x_2)\|_\infty\le\frac{1}{k\,\epsilon^{\frac{k-1}{k}}}\|x_1-x_2\|_\infty. \tag{3.16}
\]
Now, substituting (3.14) and (3.16) into (3.9) gives
\[
\|F(x_1)-F(x_2)\|_\infty\le\frac{\tau\mu a_{\max}h^2}{\left[2a_{\max}h^2+(2-2\cos(h\pi))\tau\right]k\,\epsilon^{\frac{k-1}{k}}}\|x_1-x_2\|_\infty. \tag{3.17}
\]
This proves the result stated in Theorem 3.
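In practice, the contraction factor (3.6) is cheap to evaluate before running the iteration. The following one-line Matlab helper (our own convenience function) returns \(\rho\) for given parameters, so one can check whether the sufficient condition \(\rho<1\) of Theorem 3 is met:

```matlab
% Contraction factor rho of (3.6); all arguments are scalars and
% amax = max of a(x) over Omega. rho < 1 guarantees convergence of (3.5).
rho = @(h, tau, mu, k, epsilon, amax) (tau*mu*amax*h^2) ...
      / ((2*amax*h^2 + (2 - 2*cos(h*pi))*tau) * k * epsilon^((k-1)/k));
```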
In the implementation of (3.5) we have to solve a linear system
\[
\left(I+\frac{\tau A}{2}\right)p_{n+1}^{[l+1]}=b_n^{[l]},\qquad b_n^{[l]}:=\frac{\tau\mu}{2}\left([p_{n+1}^{[l]}-u^*_{n+1}]_++\epsilon\right)^{1/k}+b_n, \tag{4.1}
\]
for each iteration index \(l\ge0\). This is the major computation in solving the penalized equation (1.5), and in this section we propose an algorithm to handle it. (There are also many other efficient algorithms for such banded linear systems, such as the hybrid algorithm in [23].)
To clearly describe our idea, we let
\[
\begin{aligned}
&p_{n+1}^{[l+1]}=(y_1,y_2,\dots,y_{J-1})^\top,\qquad b_n^{[l]}=(b_1,b_2,\dots,b_{J-1})^\top,\qquad\gamma=\frac{\tau}{2h^2},\\
&\tilde a_1=\frac{\gamma a_2}{1+2\gamma a_1},\qquad \tilde b_1=\frac{b_1}{1+2\gamma a_1},\\
&\tilde a_j=\frac{\gamma a_{j+1}}{1+2\gamma a_j-\gamma a_{j-1}\tilde a_{j-1}},\qquad \tilde b_j=\frac{b_j+\gamma a_{j-1}\tilde b_{j-1}}{1+2\gamma a_j-\gamma a_{j-1}\tilde a_{j-1}},\quad j=2,3,\dots,J-2.
\end{aligned}
\tag{4.2}
\]
Then, it holds
\[
\begin{cases}
(1+2\gamma a_1)y_1-\gamma a_2y_2=b_1,\\
-\gamma a_{j-1}y_{j-1}+(1+2\gamma a_j)y_j-\gamma a_{j+1}y_{j+1}=b_j,\quad j=2,3,\dots,J-2,\\
-\gamma a_{J-2}y_{J-2}+(1+2\gamma a_{J-1})y_{J-1}=b_{J-1}.
\end{cases}
\tag{4.3}
\]
From the first equation we have
\[
y_1=\frac{\gamma a_2}{1+2\gamma a_1}y_2+\frac{b_1}{1+2\gamma a_1}=\tilde a_1y_2+\tilde b_1. \tag{4.4a}
\]
Substituting this into the middle equation in (4.3) with \(j=2\) gives
\[
-\gamma a_1(\tilde a_1y_2+\tilde b_1)+(1+2\gamma a_2)y_2-\gamma a_3y_3=b_2,
\]
i.e.,
\[
y_2=\frac{\gamma a_3}{1+2\gamma a_2-\gamma a_1\tilde a_1}y_3+\frac{b_2+\gamma a_1\tilde b_1}{1+2\gamma a_2-\gamma a_1\tilde a_1}=\tilde a_2y_3+\tilde b_2.
\]
In general we have
\[
b_j=-\gamma a_{j-1}y_{j-1}+(1+2\gamma a_j)y_j-\gamma a_{j+1}y_{j+1}=-\gamma a_{j-1}(\tilde a_{j-1}y_j+\tilde b_{j-1})+(1+2\gamma a_j)y_j-\gamma a_{j+1}y_{j+1},
\]
i.e.,
\[
y_j=\frac{\gamma a_{j+1}}{1+2\gamma a_j-\gamma a_{j-1}\tilde a_{j-1}}y_{j+1}+\frac{b_j+\gamma a_{j-1}\tilde b_{j-1}}{1+2\gamma a_j-\gamma a_{j-1}\tilde a_{j-1}}=\tilde a_jy_{j+1}+\tilde b_j, \tag{4.4b}
\]
where \(j=2,3,\dots,J-2\). In particular, for \(j=J-2\) we have \(y_{J-2}=\tilde a_{J-2}y_{J-1}+\tilde b_{J-2}\). This together with the last equation in (4.3) gives
\[
\begin{cases}
y_{J-2}=\tilde a_{J-2}y_{J-1}+\tilde b_{J-2},\\
-\gamma a_{J-2}y_{J-2}+(1+2\gamma a_{J-1})y_{J-1}=b_{J-1}.
\end{cases}
\]
Substituting the first equation into the second one gives
\[
y_{J-1}=\frac{b_{J-1}+\gamma a_{J-2}\tilde b_{J-2}}{1+2\gamma a_{J-1}-\gamma a_{J-2}\tilde a_{J-2}}. \tag{4.4c}
\]
With \(y_{J-1}\) at hand, we can get \(y_{J-2},y_{J-3},\dots,y_1\) via backward substitution according to (4.4b) and (4.4a).
In summary, we can solve the linear system (4.1) as follows.
Algorithm 1 Backward Substitution Algorithm.
Initialization: Define \(\{\tilde a_j,\tilde b_j\}_{j=1}^{J-2}\) according to (4.2).
Step 1. Compute \(y_{J-1}\) according to (4.4c).
Step 2. Compute \(\{y_{J-2},y_{J-3},\dots,y_2,y_1\}\) according to (4.4b) and (4.4a).
Step 3. Let \(p_{n+1}^{[l+1]}=(y_1,y_2,\dots,y_{J-1})^\top\).
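A direct Matlab transcription of Algorithm 1 reads as follows. This is a sketch in our own naming (the function name tridiag_bs is ours): a holds the coefficients \(a_1,\dots,a_{J-1}\), b the right-hand side \(b_n^{[l]}\) of (4.1), and gamma \(=\tau/(2h^2)\):

```matlab
function y = tridiag_bs(a, b, gamma)
% Solve (I + tau*A/2) y = b by Algorithm 1, with A as in (3.2b).
% Assumes at least two unknowns, i.e., numel(b) >= 2.
m  = numel(b);                       % m = J-1 unknowns
at = zeros(m-1, 1);  bt = zeros(m-1, 1);   % \tilde{a}_j, \tilde{b}_j of (4.2)
at(1) = gamma*a(2)/(1 + 2*gamma*a(1));
bt(1) = b(1)/(1 + 2*gamma*a(1));
for j = 2:m-1                        % forward sweep: recursions in (4.2)
    d     = 1 + 2*gamma*a(j) - gamma*a(j-1)*at(j-1);
    at(j) = gamma*a(j+1)/d;
    bt(j) = (b(j) + gamma*a(j-1)*bt(j-1))/d;
end
y = zeros(m, 1);                     % last unknown from (4.4c)
y(m) = (b(m) + gamma*a(m-1)*bt(m-1)) ...
       / (1 + 2*gamma*a(m) - gamma*a(m-1)*at(m-1));
for j = m-1:-1:1                     % backward substitution: (4.4b), (4.4a)
    y(j) = at(j)*y(j+1) + bt(j);
end
end
```

The forward sweep and the backward substitution each cost \(O(J)\) operations, so the solver is linear in the number of unknowns, in contrast to a general-purpose factorization.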
In this section, we present numerical results to study the proposed fixed-point iteration (3.5) and the backward substitution algorithm in Section 4. We use the following data
\[
\begin{aligned}
&\Omega=(0,1),\quad t\in(0,T)\ \text{with}\ T=4,\quad a(x)=\min\left\{0.8,\,\max\left\{0.2,\,\sin\left(\tfrac{\pi x}{2}\right)\right\}\right\},\\
&u^*=\sin\left(\sqrt{t}\,x\,|0.5-x|\,\pi\right),\qquad g=2(1-x)\sin\left(2\pi t(T-t)x\right).
\end{aligned}
\tag{5.1}
\]
For the penalty parameters, we use \(\epsilon=10^{-4}\), \(k=4\) and several values of \(\mu\). All numerical results are obtained with Matlab R2017a on a desktop computer running Mac OS with a 2.7 GHz Intel Core i5. The tolerance for the fixed-point iteration (3.5) is set to \(tol=\frac{\min\{h^2,\tau^2\}}{10}\), which is sufficient to match the discretization errors.
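For reproducibility, the data (5.1) can be coded as the following Matlab function handles; this is a sketch under our reading of the formulas in (5.1):

```matlab
% Problem data (5.1) with T = 4 (our reading of the formulas).
T     = 4;
afun  = @(x) min(0.8, max(0.2, sin(pi*x/2)));
ustar = @(x, t) sin(sqrt(t).*x.*abs(0.5 - x)*pi);
gfun  = @(x, t) 2*(1 - x).*sin(2*pi*t.*(T - t).*x);
mu = 15;  k = 4;  epsilon = 1e-4;    % penalty parameters used below
```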
Let \(h=\tau=\frac{1}{100}\). In Figure 3 we plot the profiles of \(p(x,t)\) and \(u^*(x,t)\) for two values of \(\mu\): \(\mu=3\) (top row) and \(\mu=15\) (bottom row). We see that the parameter \(\mu\) has a remarkable influence on the solution \(p\). In particular, for small \(\mu\) (e.g., \(\mu=3\)) the constraint \(p\le u^*\) does not hold for all \((x,t)\in\Omega\times(0,T)\), as we can see in the top-right subfigure. For large \(\mu\), as we can see in the bottom-right subfigure, \(p\le u^*\) holds uniformly.
In Figure 4 we show local details of the constraint \(u^*\) and the computed solution \(p\) at two different time points: \(t=2\) (left) and \(t=4\) (right). Here we consider three values of the penalty parameter: \(\mu=3,7,15\). The results in Figure 4 clearly indicate that as \(\mu\) grows the condition \(p\le u^*\) comes to hold uniformly on the space and time domains. This observation confirms the conclusions in [3,10,11,12].
In Figure 5 we show the measured iteration numbers of the fixed-point iteration (3.5) for three values of the penalty parameter \(\mu\). (The iteration number is the quantity \(l_{\max}\) for which the infinity norm of the residual reaches the prescribed tolerance \(tol=\frac{\min\{h^2,\tau^2\}}{10}\).) We see that as \(\mu\) grows, the required number of fixed-point iterations increases as well. This is an undesirable phenomenon and needs further study.
Let
\[
\mathrm{It}_{\mathrm{fp}}^{\max}=\max_{n=1,2,\dots,N}\mathrm{It}_{\mathrm{fp}}^{n},
\]
where the subscript 'fp' denotes the fixed-point iteration and \(\mathrm{It}_{\mathrm{fp}}^{n}\) is the iteration number at the \(n\)-th time point. The quantity \(\mathrm{It}_{\mathrm{fp}}^{\max}\) is the maximal iteration number over all \(N\) time points. In Figure 6 we show \(\mathrm{It}_{\mathrm{fp}}^{\max}\) as we refine the mesh sizes \(h\) (and \(\tau\)) from \(2^{-6}\) to \(2^{-11}\). Clearly, the results in Figure 6 indicate that the convergence rate of the fixed-point iteration is robust with respect to the mesh sizes. Such robustness is a very important property when high accuracy (i.e., small \(h\) and \(\tau\)) is pursued.
We next study the backward substitution algorithm proposed in Section 4, comparing it with the built-in backslash command in Matlab. Let \(p\) be the solution obtained with the backward substitution algorithm and \(p_{\mathrm{inv}}\) the solution obtained with the backslash command. The two linear solvers are applied to the same linear system in (3.5) at each fixed-point iteration. In Figure 7 we show the error between \(p\) and \(p_{\mathrm{inv}}\) over the space and time domains. We see that both solvers lead to the same numerical solution up to the roundoff error of floating-point operations. This implies that the backward substitution algorithm indeed produces a reliable numerical solution.
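This roundoff-level agreement is easy to reproduce on a single system. The sketch below (sizes and random coefficients are illustrative choices of ours) solves one instance of (4.1) with both solvers, using the tridiag_bs sketch from Section 4, and reports the difference and timings:

```matlab
% Compare Algorithm 1 (tridiag_bs) with backslash on one system (4.1).
J  = 2^11;  h = 1/J;  tau = h;  gamma = tau/(2*h^2);
av = 0.2 + 0.6*rand(J-1, 1);               % illustrative positive a_j
A  = (diag(2*av) - diag(av(2:end), 1) - diag(av(1:end-1), -1))/h^2;
M  = speye(J-1) + (tau/2)*sparse(A);       % I + tau*A/2, stored sparsely
b  = rand(J-1, 1);
tic;  y1 = M \ b;                    t_bs  = toc;   % Matlab backslash
tic;  y2 = tridiag_bs(av, b, gamma); t_alg = toc;   % Algorithm 1
fprintf('diff = %.2e, backslash: %.2e s, Algorithm 1: %.2e s\n', ...
        norm(y1 - y2, inf), t_bs, t_alg);
```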
Using the two linear solvers to implement the fixed-point iteration (3.5), we show in Figure 8 the measured CPU time for solving the penalized equation (1.5). Clearly, the computation using the backward substitution algorithm needs much less CPU time, which indicates a great advantage for using it in practice.
We have presented a numerical study of parabolic complementarity problems. We first reformulate the problem via a penalty method following the ideas in [2,3,9,10,11,12], and then discretize the penalized equation by a Crank–Nicolson-type method consisting of a centered finite difference formula for the space derivative and the trapezoidal rule for the time discretization. At each time step, we have to solve a nonlinear system due to the penalty term, and we proposed a fixed-point iteration to handle it. The major computation of the fixed-point iteration is a tridiagonal linear solve, for which we proposed a backward substitution algorithm, a direct (i.e., non-iterative) linear solver. Extensive numerical results are given, which indicate how the discretization mesh sizes and the penalty parameters affect the convergence rate of the fixed-point iteration. In terms of CPU time, the numerical results also indicate an obvious advantage of the backward substitution algorithm over the built-in backslash command in Matlab. This is a very important advantage for dealing with large-scale nonlinear/linear systems, such as those arising from discretizing backward stochastic differential equations (BSDEs) [6,8]. BSDEs in risk measures are currently a hot research topic, which provides a direction for further study of the relation between numerical methods for parabolic complementarity problems and risk measures. In forthcoming work, we will discuss how to use the numerical method proposed in this paper to handle risk measures.
The authors are very grateful to the anonymous referees for their careful reading of a preliminary version of the manuscript and their valuable suggestions, which greatly improved the quality of this paper. This work is supported by the Guangdong Basic and Applied Basic Research Foundation (2020A1515110671) and the Jiangmen Basic and Applied Basic Research Foundation (2021030100070004859).
The authors declare that there is no conflict of interest.
[1] L. Angermann, S. Wang, Convergence of a fitted finite volume method for European and American option valuation, Numer. Math., 106 (2007), 1–40. https://doi.org/10.1007/s00211-006-0057-7
[2] A. Bensoussan, J. L. Lions, Applications of Variational Inequalities in Stochastic Control, North-Holland, Amsterdam, New York, Oxford, 1978.
[3] T. B. Gyulov, M. N. Koleva, Penalty method for indifference pricing of American option in a liquidity switching market, Appl. Numer. Math., 172 (2022), 525–545. https://doi.org/10.1016/j.apnum.2021.11.002
[4] J. Haslinger, M. Miettinen, Finite Element Method for Hemivariational Inequalities, Kluwer Academic Publishers, 1999. https://doi.org/10.1007/978-1-4757-5233-5
[5] L. Scurria, D. Fauconnier, P. Jiranek, T. Tamarozzi, A Galerkin/hyper-reduction technique to reduce steady-state elastohydrodynamic line contact problems, Comput. Methods Appl. Mech. Eng., 386 (2021), 114132. https://doi.org/10.1016/j.cma.2021.114132
[6] W. D. Zhao, Numerical methods for forward backward stochastic differential equations, Math. Numer. Sin., 37 (2015), 337–373. https://doi.org/10.12286/jssx.2015.4.337
[7] L. Jiang, Convexity, translation invariance and subadditivity for g-expectations and related risk measures, Ann. Appl. Probab., 18 (2008), 245–258. https://doi.org/10.1214/105051607000000294
[8] I. Penner, A. Reveillac, Risk measures for processes and BSDEs, Finance Stochastics, 19 (2015), 23–66. https://doi.org/10.1007/s00780-014-0243-x
[9] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer-Verlag, 1984. https://doi.org/10.1007/978-3-662-12613-4
[10] S. Wang, X. Q. Yang, K. L. Teo, Power penalty method for a linear complementarity problem arising from American option valuation, J. Optim. Theory Appl., 129 (2006), 227–254. https://doi.org/10.1007/s10957-006-9062-3
[11] S. Wang, C. S. Huang, A power penalty method for solving a nonlinear parabolic complementarity problem, Nonlinear Anal. Theory Methods Appl., 69 (2008), 1125–1137. https://doi.org/10.1016/j.na.2007.06.014
[12] M. Chen, C. Huang, A power penalty method for a class of linearly constrained variational inequality, J. Ind. Manage. Optim., 14 (2018), 1381–1396. https://doi.org/10.3934/jimo.2018012
[13] A. M. Rubinov, X. Q. Yang, Lagrange-type Functions in Constrained Non-convex Optimization, Kluwer Academic Publishers, Dordrecht, Holland, 2003.
[14] X. Q. Yang, X. X. Huang, Nonlinear Lagrangian approach to constrained optimization problems, SIAM J. Optim., 11 (2001), 1119–1144. https://doi.org/10.1137/S1052623400371806
[15] M. H. Holmes, Introduction to Numerical Methods in Differential Equations, Springer, New York, 2009.
[16] O. T. Hanna, New explicit and implicit "improved Euler" methods for the integration of ordinary differential equations, Comput. Chem. Eng., 12 (1988), 1083–1086. https://doi.org/10.1016/0098-1354(88)87030-3
[17] R. Santos, L. Alves, A comparative analysis of explicit, IMEX and implicit strong stability preserving Runge-Kutta schemes, Appl. Numer. Math., 159 (2021), 204–220. https://doi.org/10.1016/j.apnum.2020.09.007
[18] E. Zeidler, Nonlinear Functional Analysis and Its Applications II/B: Nonlinear Monotone Operators, Springer-Verlag, New York, 1990.
[19] Y. L. Zhao, P. Y. Zhu, X. M. Gu, X. L. Zhao, H. Y. Jian, A preconditioning technique for all-at-once system from the nonlinear tempered fractional diffusion equation, J. Sci. Comput., 83 (2020), 10. https://doi.org/10.1007/s10915-020-01193-1
[20] G. H. Golub, C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, 2012.
[21] C. M. da Fonseca, On the eigenvalues of some tridiagonal matrices, J. Comput. Appl. Math., 200 (2007), 283–286. https://doi.org/10.1016/j.cam.2005.08.047
[22] A. R. Willms, Analytic results for the eigenvalues of certain tridiagonal matrices, SIAM J. Matrix Anal. Appl., 30 (2008), 639–656. https://doi.org/10.1137/070695411
[23] W. Luo, X. M. Gu, B. Carpentieri, A hybrid triangulation method for banded linear systems, Math. Comput. Simul., 194 (2022), 97–108. https://doi.org/10.1016/j.matcom.2021.11.012