
The objective of this paper is to describe the problem of boundary optimal control of the reaction-advection-diffusion equation for non-smooth Dirichlet data and to establish its qualitative properties. We check that the state equation is well-posed and introduce a penalization technique to take the Dirichlet-type boundary condition into account. Then, we consider the corresponding boundary optimal control problem and give the optimality conditions. Finally, we conduct a numerical investigation of the convergence of the solution of the penalized problem to the solution of the non-penalized one when the penalty parameter tends to zero, in both regular and non-regular domains.
Citation: Bader Saad Alshammari, Daoud Suleiman Mashat, Fouad Othman Mallawi. Numerical investigation for a boundary optimal control of reaction-advection-diffusion equation using penalization technique[J]. Mathematical Modelling and Control, 2024, 4(3): 336-349. doi: 10.3934/mmc.2024027
In this paper, we aim to study the optimal control problem for the reaction-advection-diffusion equation with a non-smooth Dirichlet condition on the boundary of a regular bounded domain of \mathbb{R}^2 . We start by proving the existence and uniqueness of the solution of the state equation. Since the use of a Dirichlet boundary condition poses practical difficulties, we apply a penalization technique that approximates the state equation by another one with a Robin-type boundary condition, which is easier to implement. This method was introduced by Babuška [1] for elliptic equations and considered later in several works; see for example [2,3,4], where several authors obtained the expected convergence rate, and [5], where an asymptotic expansion of the penalized solution is studied. For boundary (Dirichlet) control problems of partial differential equations, where the control variable is sought in L^2 , the Robin penalization thus presents a simple and efficient way to solve such problems (see [6,7,8,9,10,11,12,13]). We therefore propose in this paper to apply this approach to the boundary control problem of the reaction-advection-diffusion equation.
Let \Omega be an open, regular, bounded domain of \mathbb{R}^2 with a Lipschitz-continuous boundary \Gamma , and set
\Upsilon = \Omega \times ]0, T[
and
\Pi = \Gamma \times ]0, T[,
where T is a positive constant. The optimal control problem consists of finding optimal Dirichlet data g \in L^2(\Pi) which minimizes a cost function depending on the solution u of the state Eq (2.1) given hereafter and on the control g . Therefore, it is necessary to define a suitable functional framework which allows us to obtain results of existence, uniqueness, and stability.
Our goal in this work is to mathematically analyze the penalization technique applied to a Dirichlet condition. More precisely, we will replace the Dirichlet condition u = g on \Pi by a Robin-like condition, as follows:
\varepsilon \dfrac{\partial u_\varepsilon}{\partial n} + u_\varepsilon = g \;\; \mbox{on} \;\; \Pi,
where \varepsilon is a small positive parameter tending to zero. The advantage of this technique is that it defines the state variable u_\varepsilon in L^2(0, T, L^2(\Omega)) for every boundary datum g in L^2(\Pi) . Therefore, we will consider an optimal control problem defined on L^2(\Pi) . Furthermore, at the numerical level, the Dirichlet boundary condition is not imposed exactly, but rather approximated through a penalization term. Since the state variable appears in the expression of the cost function for the optimal control, the convergence of the penalized variable u_\varepsilon towards the state variable u should be proved. However, due to some technical difficulties, this task is still in progress, and we give only a numerical investigation of the convergence in this paper.
Let f \in L^2(\Upsilon) , u_0 \in L^2(\Omega) , and g \in L^2(\Pi) . We aim to find u , the solution of
\left\{ \begin{array}{l} \dfrac{\partial u}{\partial t} - \nabla(\mu \nabla u) + {\mathit{\boldsymbol{v}}}\cdot\nabla u + \sigma u = f \;\; \mbox{in} \;\; \Upsilon, \\ u = g \;\; \mbox{on} \;\; \Pi, \\ u(0, \cdot) = u_0 \;\; \mbox{in} \;\; \Omega, \end{array} \right. | (2.1)
where \mu is the diffusion coefficient, \sigma is the reaction coefficient, and
{\mathit{\boldsymbol{v}}} = ({\mathit{\boldsymbol{v}}}_1, {\mathit{\boldsymbol{v}}}_2)^T
represents a vector field in (L^\infty(\Upsilon))^2 describing the advection coefficient. The initial condition is given by u_0 \in L^2(\Omega) . The coefficients \mu and \sigma are assumed to be positive. Equation (2.1) is called a reaction-advection-diffusion equation and has several applications in science and engineering. Transport of heat, momentum, and energy, solid mechanics, CO2 sequestration, computational fluid dynamics, and biophysics are a few research fields that need to solve (2.1) numerically as part of the solution strategy. For the sake of simplification, we suppose that:
\mbox{div}\, {\mathit{\boldsymbol{v}}} = 0 \;\; \mbox{in} \;\; \Omega
and
{\mathit{\boldsymbol{v}}} \cdot n = 0 \;\; \mbox{on} \;\; \Pi.
The resolution of this problem relies on the duality technique; to explain it, we need to recall the essential properties of Eq (2.1) with homogeneous Dirichlet data. Problem (2.1) admits a weak formulation as follows: Find u \in L^2(0, T, H^1_0(\Omega)) such that, for all v \in H^1_0(\Omega) , we have:
\left\{ \begin{array}{l} \left(\dfrac{d}{dt}u(t), v\right)_{L^2(\Omega)} + a(u(t), v) = l(v), \\ u(0) = u_0, \end{array} \right. | (2.2)
where
a(u, v) = \int_\Omega \mu \nabla u \cdot \nabla v \, dx + \int_\Omega ({\mathit{\boldsymbol{v}}}\cdot\nabla u)\, v \, dx + \int_\Omega \sigma u v \, dx, \quad l(v) = \int_\Omega f v \, dx.
Problem (2.2) admits (according to [14]) a unique solution in L^2(0, T, H^1_0(\Omega)) with
\|u\|_{L^2(0, T, H^1_0(\Omega))} \leq C \left( \|f\|_{L^2(\Upsilon)} + \|u_0\|_{L^2(\Omega)} \right).
To write the weak formulation of system (2.1) with g \in L^2(\Pi) , we apply the duality technique, which involves looking for u \in L^2(\Upsilon) such that, for all functions \phi in
L^2(0, T, H^2(\Omega)\cap H^1_0(\Omega)) \cap H^1(0, T, L^2(\Omega))
with \phi(T) = 0 , we have
-\left(u, \dfrac{\partial \phi}{\partial t}\right)_{L^2(\Upsilon)} - (u, \nabla(\mu \nabla \phi))_{L^2(\Upsilon)} - (u, {\mathit{\boldsymbol{v}}}\cdot\nabla\phi)_{L^2(\Upsilon)} + (u, \sigma\phi)_{L^2(\Upsilon)} = (f, \phi)_{L^2(\Upsilon)} - \left(g, \dfrac{\partial \phi}{\partial n}\right)_{L^2(\Pi)} + (u(0), \phi(0))_{L^2(\Omega)}. | (2.3)
In order to prove that problem (2.3) is well-posed, we need to use the Lax-Milgram lemma. It suffices to observe that (2.3) is equivalent to: finding u \in L^2(\Upsilon) such that:
(u, \psi)_{L^2(\Upsilon)} = (f, \phi_\psi)_{L^2(\Upsilon)} - \left(g, \dfrac{\partial \phi_\psi}{\partial n}\right)_{L^2(\Pi)} + (u(0), \phi_\psi(0))_{L^2(\Omega)}, \quad \forall \psi \in L^2(\Upsilon), | (2.4)
where
\phi_\psi \in L^2(0, T, H^2(\Omega)\cap H^1_0(\Omega)) \cap H^1(0, T, L^2(\Omega))
is the unique solution of the following problem:
\left\{ \begin{array}{l} -\dfrac{\partial \phi_\psi}{\partial t} - \nabla(\mu \nabla \phi_\psi) - {\mathit{\boldsymbol{v}}}\cdot\nabla\phi_\psi + \sigma\phi_\psi = \psi \;\; \mbox{in} \;\; \Upsilon, \\ \phi_\psi(T) = 0 \;\; \mbox{in} \;\; \Omega, \end{array} \right.
satisfying
\|\phi_\psi\|_{L^2(0, T, H^1_0(\Omega))} \leq C \|\psi\|_{L^2(\Upsilon)}.
Therefore, the operator
\psi \mapsto (f, \phi_\psi)_{L^2(\Upsilon)} - \left(g, \dfrac{\partial \phi_\psi}{\partial n}\right)_{L^2(\Pi)} + (u(0), \phi_\psi(0))_{L^2(\Omega)}
is bounded since
\left\|\dfrac{\partial \phi_\psi}{\partial n}\right\|_{L^2(\Pi)} \leq \|\phi_\psi\|_{L^2(0, T, H^1_0(\Omega))} \leq C \|\psi\|_{L^2(\Upsilon)}.
Therefore, problem (2.4) (and hence problem (2.3)) admits a unique solution u \in L^2(\Upsilon) with the stability estimate
\|u\|_{L^2(\Upsilon)} \leq C \left( \|g\|_{L^2(\Pi)} + \|f\|_{L^2(\Upsilon)} + \|u_0\|_{L^2(\Omega)} \right).
To overcome the difficulties of the numerical implementation of the Dirichlet condition, we use the penalization technique with a small positive parameter \varepsilon and introduce the family of problems
\left\{ \begin{array}{l} \dfrac{\partial u_\varepsilon}{\partial t} - \nabla(\mu \nabla u_\varepsilon) + {\mathit{\boldsymbol{v}}}\cdot\nabla u_\varepsilon + \sigma u_\varepsilon = f \;\; \mbox{in} \;\; \Upsilon, \\ \varepsilon \dfrac{\partial u_\varepsilon}{\partial n} + u_\varepsilon = g \;\; \mbox{on} \;\; \Pi, \\ u_\varepsilon(0, \cdot) = u_0 \;\; \mbox{in} \;\; \Omega, \end{array} \right. | (2.5)
to approximate problem (2.1). The penalization technique allows us to work in the Sobolev space H^1(\Omega) . The weak formulation associated with (2.5) consists of finding u_\varepsilon \in L^2(0, T, H^1(\Omega)) such that, for all functions v \in H^1(\Omega) , we have
\left\{ \begin{array}{l} \left(\dfrac{d}{dt}u_\varepsilon(t), v\right)_{L^2(\Omega)} + a(u_\varepsilon(t), v) = l(v), \\ u_\varepsilon(0) = u_0, \end{array} \right. | (2.6)
with
a(u, v) = \int_\Omega \mu \nabla u \cdot \nabla v \, dx + \int_\Omega ({\mathit{\boldsymbol{v}}}\cdot\nabla u)\, v \, dx + \int_\Omega \sigma u v \, dx + \dfrac{1}{\varepsilon}\int_\Gamma u v \, d\Gamma, \quad l(v) = \int_\Omega f v \, dx + \dfrac{1}{\varepsilon}\int_\Gamma g v \, d\Gamma.
Problem (2.5) admits a unique solution u_\varepsilon \in L^2(0, T, H^1(\Omega)) , which satisfies the following estimate:
\|u_\varepsilon\|_{L^2(\Upsilon)} \leq C \left( \|f\|_{L^2(\Upsilon)} + \|g\|_{L^2(\Pi)} + \|u_0\|_{L^2(\Omega)} \right),
where the constant C is independent of \varepsilon .
Concerning the convergence of u_\varepsilon to u , some tools developed in [4] and a fixed point technique used in [15] could make it possible to establish a convergence rate comparable to that demonstrated in the case of the pure diffusion equation. This conjecture is supported by the numerical experiments presented below.
The goal is to examine the problem of optimal boundary control of the reaction-advection-diffusion equation. This involves minimizing a functional criterion
J=J(u,g) |
which depends on g as well as on u , the solution of (2.1). We show the existence and uniqueness of the solution of the control problem, which allows us to write the first-order optimality conditions. Then, we introduce the sequence of control problems obtained by penalizing the Dirichlet condition and give an existence result for the optimal solution.
We consider the cost functional given by
J(u, g) = \dfrac{1}{2}\int_\Upsilon (u - u_d)^2 \, dx\, dt + \dfrac{\beta}{2}\int_\Pi g^2 \, d\Pi,
where u_d \in L^2(\Upsilon) is the predefined state (desired profile). The term depending on g represents the marginal cost.
Consider (2.1) where g \in L^2(\Pi) . The boundary optimal control problem is thus expressed by
({\cal{P}}): \min\limits_{g \in L^2(\Pi)} J(u, g); \; u \in L^2(\Upsilon) \mbox{ solution of } (2.1). | (3.1)
Let
F(g) = J(u, g). |
The function F then verifies the following conditions:
● F is strictly convex;
● F is coercive ( \lim_{\|g\|_{L^2(\Pi)} \rightarrow +\infty} F(g) = +\infty );
● F is continuous, and this follows from the continuous dependence of u on g .
Thus, the minimization problem (3.1) admits a unique solution \bar g \in L^2(\Pi) .
We want to act on a parabolic partial differential equation, through a Dirichlet-type boundary condition, so that the solution is close to a profile fixed beforehand, at the best possible cost. For our purpose, it is a question of minimizing a criterion functional which depends on the solution of the problem and on the boundary data representing the control. The difficulty of analyzing this problem strongly depends on the space of admissible controls. Indeed, if we consider the space L^2(0, T, H^{1\over2} (\Gamma)) , we can have a well-posed state equation on L^ 2(0, T, H^{1}(\Omega)) that is easy to deal with, while identifying the solution to the optimal control problem presents serious difficulties inherent in the complex form of the norm of L^2(0, T, H^{1\over2}(\Gamma)) . The alternative that we propose to study therefore amounts to working in L^2(\Pi) .
Looking for the necessary conditions of optimality, the triple
(\bar g, \bar u, \bar p)\in L^2(\Pi)\times L^2(\Upsilon)\times L^2(\Upsilon) |
formed by the optimal control, optimal state, and associated adjoint state is then the unique solution of the following system:
\left\{ \begin{array}{l}\begin{align} \begin{split} \quad \dfrac {\partial \bar u} { \partial t} -\nabla(\mu \nabla \bar u) + {\mathit{\boldsymbol{v}}}\cdot\nabla\bar u + \sigma \bar u & = f, \;\; \mbox {in} \;\; \Upsilon, \\ \quad \bar u & = \bar g , \;\;\mbox {on} \;\; \Pi, \\ \quad \bar u(0, .) & = u_0, \;\; \mbox{in} \;\; \Omega, \\ - \dfrac {\partial\bar p} { \partial t} -\nabla(\mu \nabla \bar p)-{\mathit{\boldsymbol{v}}}\cdot\nabla \bar p + \sigma \bar p & = \bar u-u_d, \;\;\mbox {in} \;\; \Upsilon, \\ \quad \bar p & = 0, \;\;\mbox {on}\; \; \Pi, \\ \quad \bar p(T, .) & = 0, \;\;\mbox{in} \;\; \Omega, \\ \quad \beta \bar g -\dfrac {\partial \bar p}{\partial n} & = 0, \;\;\mbox {on} \;\; \Pi. \\\end{split} \end{align} \end{array} \right. | (3.2) |
Conversely, if (\bar u, \bar p) satisfies the previous system, then (\bar u, \dfrac{1}{\beta} \dfrac{\partial \bar p}{\partial n}) is the solution of ({\cal{P}}) .
Remark 3.1. Strictly speaking, the advection term in the equation of the adjoint state is of the form (div({\mathit{\boldsymbol{v}}} p)) , but thanks to the assumptions made on {\mathit{\boldsymbol{v}}} (namely div\, {\mathit{\boldsymbol{v}}} = 0 ) in \Omega and {\mathit{\boldsymbol{v}}}\cdot n = 0 on \Pi , we obtain the form given above.
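As a brief justification of this remark (a standard integration by parts, sketched here under the stated assumptions on {\mathit{\boldsymbol{v}}} ), for smooth enough u and p we have
\int_\Omega ({\mathit{\boldsymbol{v}}}\cdot\nabla u)\, p \, dx = \int_\Gamma ({\mathit{\boldsymbol{v}}}\cdot n)\, u\, p \, d\Gamma - \int_\Omega u \, \mbox{div}({\mathit{\boldsymbol{v}}} p)\, dx = - \int_\Omega u \, ({\mathit{\boldsymbol{v}}}\cdot\nabla p)\, dx,
since \mbox{div}({\mathit{\boldsymbol{v}}} p) = p\, \mbox{div}\, {\mathit{\boldsymbol{v}}} + {\mathit{\boldsymbol{v}}}\cdot\nabla p = {\mathit{\boldsymbol{v}}}\cdot\nabla p and {\mathit{\boldsymbol{v}}}\cdot n = 0 on \Gamma . Hence the adjoint advection term -\mbox{div}({\mathit{\boldsymbol{v}}} \bar p) reduces to -{\mathit{\boldsymbol{v}}}\cdot\nabla \bar p , as written in (3.2).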
We are interested in this part in the penalization of the control problem ({ \cal P }) given by (3.1). The sequence of penalized control problems, where the Dirichlet condition is taken into account by the penalization technique, is of the form
\begin{equation} ({\cal P}_\varepsilon): \min\limits_{g \in L^2(\Pi)} J(u_\varepsilon, g); u_\varepsilon \in L^2(0, T, H^1(\Omega)) \mbox{ solution of } (2.5) \end{equation} | (3.3) |
which represents an approximation of problem ({\cal{P}}) . Under the same conditions satisfied by the functional J , we show that the penalized control problem ({\cal{P}} _{\varepsilon}) has a unique solution
(\bar u_{ \varepsilon }, \bar g_\varepsilon) \in L^2(0, T, H^1(\Omega)) \times L^2(\Pi). |
It will then constitute an approximation of the optimal solution (\bar u, \bar g) of ({\cal{P}}) . The triple (\bar g_{\varepsilon}, \bar u_{\varepsilon}, \bar p_{\varepsilon}) formed by the optimal control, optimal state, and associated adjoint state in the space
L^2(\Pi)\times L^2(0, T, H^1(\Omega))\times L^2(0, T, H^1(\Omega)) |
is the unique solution of the system of the following first-order optimality conditions:
\left\{ \begin{array}{lllll} \begin{align} \begin{split}\quad \dfrac {\partial\bar u_{\varepsilon}} { \partial t} -\nabla(\mu \nabla \bar u_{\varepsilon}) +{\mathit{\boldsymbol{v}}}\cdot\nabla \bar u_{\varepsilon} +\sigma \bar u_{\varepsilon} & = f, \;\; \mbox {in} \;\; \Upsilon, \\ \quad \varepsilon \dfrac { \partial\bar u_{\varepsilon}}{ \partial n} +\bar u_{\varepsilon} & = \bar g_{\varepsilon} \;\; \mbox {on}\;\; \Pi, \\ \quad \bar u_{\varepsilon}(0, \cdot) & = u_0, \;\; \mbox{in} \;\; \Omega, \\ - \dfrac {\partial\bar p_{\varepsilon}} { \partial t} -\nabla(\mu \nabla \bar p_{\varepsilon}) - {\mathit{\boldsymbol{v}}}.\nabla \bar p_{\varepsilon} +\sigma \bar p_{\varepsilon} & = \bar u_{\varepsilon}-u_d, \;\;\mbox {in}\;\; \Upsilon, \\ \quad \varepsilon \; \dfrac { \partial\bar p_{\varepsilon}}{ \partial n} +\bar p_{\varepsilon} & = 0, \;\; \mbox {on} \;\; \Pi, \\ \quad \bar p_{\varepsilon}(T, \cdot) & = 0, \;\; \mbox{in} \;\; \Omega, \\ \quad \beta \bar g_{\varepsilon} +\dfrac{1} {\varepsilon} \bar p_{\varepsilon} & = 0, \;\; \mbox {on} \;\; \Pi. \end{split} \end{align} \end{array} \right. | (3.4) |
Reciprocally, if (\bar u_{\varepsilon}, \bar p_{\varepsilon}) satisfies the previous system, then (\bar u_{\varepsilon}, - \dfrac{1}{\varepsilon\beta} \bar p_{\varepsilon}) is the solution of ({\cal{P}}_{\varepsilon}) . The proof of the convergence of the optimal solution (\bar u_{\varepsilon}, \bar g_{\varepsilon}) towards the optimal solution of the Dirichlet problem (\bar u, \bar g) is still in progress. We will present in the following section some numerical experiments illustrating the convergence results.
In this section, we present the numerical approximation, using the finite element method, of both the direct and the optimal control problems as well as of the penalized problems. We recall that an effective and simple way to deal with the Dirichlet condition consists of using the penalization technique presented above. Based on this procedure, the solution of Eq (2.1) and the associated control problem are approximated, respectively, by those of the penalized Eq (2.5) and of the penalized control problem (3.3). For the approximation of the state equations, we adopt the finite element method for the discretization in space and the implicit Euler scheme for the discretization in time. For the control problems, we use the conjugate gradient method. Finally, relying on numerical tests, we confirm the expected convergence results.
Before numerically solving the penalized control problem (3.3), we describe in this section the numerical resolution of the reaction-advection-diffusion Eq (2.1) using the penalization technique in order to take the Dirichlet condition into account. For the discretization in space, we consider for all h > 0 a triangulation \tau_h of \Omega . This triangulation of step h is a partition into a finite number of triangles \kappa , called elements, such that two neighboring triangles have in common either a vertex or an entire side. The size h_\kappa of each \kappa\in\tau_h is less than or equal to h (the size being the length of the longest side of \kappa ). The vertices of the triangle \kappa are denoted {(s_i)}_{1 \leq i \leq 3} . The triangulation \tau_h is assumed to be regular, i.e., there exists a real \alpha > 0 such that \sigma_\kappa\geq \alpha h_\kappa for all \kappa \in \tau_h , where \sigma_\kappa is the diameter of the circle inscribed in \kappa , and h_\kappa is the size of \kappa . The finite element discrete space intended to approximate H^1(\Omega) is made up of piecewise affine functions:
V_h = \{v_h\in C(\bar \Omega ), \, v_h/\kappa \in {\mathbb {P}}_{1}(\kappa), \forall \kappa \in \tau_h \}, |
where \mathbb {P}_1 denotes the set of polynomials of degree one. We introduce the space
V_h^\Gamma = \Bigl \{ v_h^\Gamma \in C(\Gamma), \, {v_h}_{/S} \in \mathbb {P}_1(S), \, \forall S \in \tau_h^\Gamma \Bigr \}, |
where {\tau_h}^{\Gamma} denotes the triangulation of \Gamma and S is a segment of \Gamma . The space V_h has finite dimension m and admits a basis {(\phi_i)}_{1 \leq i \leq m} . The one commonly used is composed of shape functions, defined as follows:
{\phi_i}({\bf{s_j}}) = {\delta}_{ij} = \left\{ \begin{array}{ll} 1, \quad {\rm{if}} \ i = j , \\ 0 , \quad {\rm{else}}. \end{array} \right. |
The basis of V_h^{\Gamma} is formed by the set of functions
{( \phi_i^\Gamma = {\phi_i}_{/\Gamma})}_{1\leq i \leq m_\Gamma } \quad ( \dim V_h^{\Gamma} = m_\Gamma),
which are the images under the trace operator of the basis functions {\phi_i} of V_h .
By inserting into the variational formulation (2.6) the decomposition of the solution u_{\varepsilon, h} in the space V_h , given by:
u_{\varepsilon, h}(x, t) = \sum\limits_{j = 1}^m u_j(t) \phi_j(x) |
with
u_j(t) = u_{\varepsilon, h}(s_j, t), |
and choosing as test functions the functions
v_h = \phi_i \; (1\leq i \leq m) , |
we get the semi-discrete system
\begin{align} \left\{ \begin{array}{lllll} M U^\prime(t)+A U(t) = M F(t)+ \dfrac{1}{\varepsilon} M^\Gamma G(t) - G_{{\mathit{\boldsymbol{v}}}}(t), \\ U(0) = U_0, \\ \end{array} \right. \end{align} | (4.1) |
with U(t) the unknown vector of components (u_i(t))_{1 \leq i \leq m} , G_{{\mathit{\boldsymbol{v}}}}(t) the vector of components
( \int_\Omega ({{\mathit{\boldsymbol{v}}}}(t) \cdot \nabla u(t))\phi_i \, dx)_{1 \leq i \leq m}, |
G(t) the vector of components
g_i(t) = g_h(s_i, t) |
if s_i is a vertex on \Gamma , and g_i(t) = 0 otherwise, where g_h denotes the spatial approximation of g in V_h^\Gamma , and F(t) the vector of components
f_i(t) = f(s_i, t), \ \ 1 \leq i \leq m |
and
U_0 = (u_0(s_i))_{1 \leq i \leq m}. |
Matrices M , A , and M^\Gamma are defined by:
\begin{align*} M & = (M_{ij})_{1 \leq i, j \leq m}, \ \ \ M_{ij} = (\phi_i, \phi_j)_{L^2(\Omega)}\ \ A = (A_{ij})_{1 \leq i, j \leq m}, \\ A_{ij}& = \mu (\nabla\phi_i, \nabla \phi_j)_{L^2(\Omega)} \hfill + \sigma (\phi_i, \phi_j)_{L^2(\Omega)} + \dfrac{1}{ \varepsilon} (\phi_i^\Gamma, \phi_j^\Gamma)_{L^2(\Gamma)}, \\ M^\Gamma & = ( M^\Gamma_{ij})_{1 \leq i, j \leq m}, \\ M^\Gamma_{ij}& = (\phi_i^\Gamma, \phi_j^\Gamma)_{L^2(\Gamma)}, \mbox{ if }1 \leq i, j \leq m_\Gamma, \ \ \hfill M^\Gamma_{ij} = 0 \mbox{ else.} \\ \end{align*} |
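To make the assembly of these matrices concrete, the following is a minimal Python/NumPy sketch for P1 elements. It is an illustrative reconstruction, not the authors' MATLAB code; the routine name and its inputs points (nodal coordinates), triangles (connectivity), and boundary_edges (pairs of boundary node indices) are assumptions made for this example.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix

def assemble_p1(points, triangles, boundary_edges, mu, sigma, eps):
    """Assemble M, M_Gamma, and A = mu*K + sigma*M + (1/eps)*M_Gamma for P1 elements."""
    m = points.shape[0]
    M, K, MG = lil_matrix((m, m)), lil_matrix((m, m)), lil_matrix((m, m))
    grads_ref = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])  # gradients of the hat functions on the reference triangle
    for tri in triangles:
        p = points[tri]                                  # 3x2 vertex coordinates
        B = np.array([p[1] - p[0], p[2] - p[0]]).T       # 2x2 Jacobian of the affine map
        area = 0.5 * abs(np.linalg.det(B))
        grads = grads_ref @ np.linalg.inv(B)             # physical gradients (one per row)
        Kloc = area * grads @ grads.T                    # local stiffness: int grad(phi_i).grad(phi_j)
        Mloc = (area / 12.0) * (np.ones((3, 3)) + np.eye(3))  # local mass: int phi_i phi_j
        for a in range(3):
            for b in range(3):
                K[tri[a], tri[b]] += Kloc[a, b]
                M[tri[a], tri[b]] += Mloc[a, b]
    for e in boundary_edges:                             # boundary mass matrix on Gamma (P1 on segments)
        h = np.linalg.norm(points[e[1]] - points[e[0]])
        MGloc = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        for a in range(2):
            for b in range(2):
                MG[e[a], e[b]] += MGloc[a, b]
    M, K, MG = csr_matrix(M), csr_matrix(K), csr_matrix(MG)
    A = mu * K + sigma * M + (1.0 / eps) * MG            # the advection term is treated explicitly, so it is not in A
    return M, A, MG
```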
For the temporal discretization, we consider the subdivision of the time interval
[0, T] = \bigcup\limits_{n = 0}^{N_T-1} [t_n, t_{n+1}], \quad t_n = n dt , \quad dt = T/N_T, |
and we use the implicit Euler scheme. By denoting U^{(n)} as an approximation of U(t_n) , we obtain the total discrete system associated with the reaction-advection-diffusion equation:
\begin{align} \begin{aligned} \begin{split} \left\{ \begin{array}{lllll} \left(\dfrac{M}{dt }+ A\right)U^{(n+1)} = \dfrac{M}{dt } U^{(n)}+M F^{(n+1)}+\dfrac{1}{ \varepsilon} M^\Gamma G^{(n+1)}- G_{{\mathit{\boldsymbol{v}}}}^{(n)}, \\ U^0 = U_0, \end{array} \right. \end{split}\end{aligned} \end{align} | (4.2) |
where
F^{(n)} = F(t_n) |
and
G^{(n)} = G(t_n). |
The term G_{{\mathit{\boldsymbol{v}}}}^{(n)} , coming from the discretization of the advection term, is the vector of components
(\sum\limits_{j = 1}^m U_j^{(n)} \int_\Omega ({{\mathit{\boldsymbol{v}}}}^{(n+1)} \cdot \nabla \phi_j)\phi_i \, dx)_{1 \leq i \leq m} |
with
{\mathit{\boldsymbol{v}}}^{(n)} = ({\mathit{\boldsymbol{v}}}_1^{(n)}, {\mathit{\boldsymbol{v}}}_2^{(n)}) |
and
{\mathit{\boldsymbol{v}}}_k^{(n)} = ({\mathit{\boldsymbol{v}}}_k(s_i, t_n))_{1 \leq i \leq m}, \ \ 1 \leq k \leq 2. |
Note that, for reasons of computational simplification, the "advection" term is treated explicitly (see [16,17]), which yields a symmetric system matrix. One could, in principle, also discretize the advection term implicitly, but this would complicate the resolution of the problem and make it more expensive, even though such a scheme has better stability.
Solving the algebraic system (4.2) requires the inversion of the matrix \left(\dfrac{M}{dt }+ A\right) , which is symmetric positive definite. Among the different algorithms that can be adopted, we have chosen the one that uses the Cholesky factorization. In addition, for the calculation of the integrals appearing in the expressions of the matrices intervening in (4.2), quadrature formulas are used. More precisely, the integrals on the boundary are approximated by Simpson's formula, and the surface integrals are calculated either by the degree 1 formula using the vertices of the triangles or by the degree 2 formula using the midpoints of the edges.
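The time-marching loop (4.2) can then be sketched as follows. This is a schematic illustration: the callables F, G, and Gv (returning the vectors F^{(n)} , G^{(n)} , and the explicit advection vector G_{{\mathit{\boldsymbol{v}}}}^{(n)} ) are assumed to be provided, and SciPy's prefactorized sparse LU is used here as a stand-in for the Cholesky factorization mentioned above.

```python
import numpy as np
from scipy.sparse.linalg import factorized

def solve_penalized_state(M, A, MG, U0, dt, NT, eps, F, G, Gv):
    """March the scheme (4.2): (M/dt + A) U^{n+1} = (M/dt) U^n + M F^{n+1} + (1/eps) MG G^{n+1} - Gv^n."""
    solve = factorized((M / dt + A).tocsc())   # factorize the system matrix once, reuse at every step
    U = U0.copy()
    trajectory = [U.copy()]
    for n in range(NT):
        t_next = (n + 1) * dt
        rhs = (M / dt) @ U + M @ F(t_next) + (1.0 / eps) * (MG @ G(t_next)) - Gv(n * dt, U)
        U = solve(rhs)                         # the advection contribution is lagged (explicit) inside Gv
        trajectory.append(U.copy())
    return trajectory
```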
In this subsection, we study the discretization of the control problem associated with the reaction-advection-diffusion equation. The optimal solution (\bar u, \bar g) of the control problem (3.1) is approximated by the optimal solution (\bar u_\varepsilon, \bar g_\varepsilon) of the penalized problem (3.3). Let (u_{d, h}, g_{d, h}) be an approximation of the predefined state (desired profile) on the triangulation \tau_h . The discrete equivalent of the control problem (3.3) is given by:
\begin{eqnarray*} \begin{array}{rll} \min\limits_{g_h} \; \left\{ \;J_h(u_{\varepsilon, h}, g_h), \, u_{\varepsilon, h} \mbox{ approximate solution of } (4.2)\right\}, \end{array} \end{eqnarray*}
where the discrete version J_h(., .) of the objective function J(., .) is given by:
J_h(u_{\varepsilon, h}, g_h) = \dfrac{1}{2}\int_\Upsilon(u_{\varepsilon, h}-u_{d, h})^2 dx dt +\dfrac {\beta}{2} \int_\Pi(g_h-g_{d, h})^2 d\Pi.
Using the right rectangle method for time integration, the discrete control problem can be written as:
\begin{equation} \begin{array}{rll} \min\limits_{G} \; \{ \;J_{h, dt }(U, G), \, U \mbox{ solution of (4.2)} \}, \end{array} \end{equation} | (4.3) |
where the expression of J_{h, dt } is given by:
\begin{align*} J_{h, dt }(U, G) & = \dfrac{dt }{2} \Bigg( \sum^{N_T}_{n = 1}(M(U^{(n)}-U^{(n)}_d), U^{(n)}-U^{(n)}_d) \hfill\\& + \beta \sum^{N_T}_{n = 1} (M^\Gamma (G^{(n)}-G^{(n)}_d), G^{(n)}-G^{(n)}_d) \Bigg) \end{align*} |
with
U = (U^{(n)})_{n = 1}^{N_T}, \; \; G = (G^{(n)})_{n = 1}^{N_T}, \; \; U_d^{(n)} = (u_d(s_i, t^n))_{i = 1}^m
and
G_d^{(n)} = (g_d(s_i, t^n))_{i = 1}^m. |
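With these notations, the discrete cost J_{h, dt } can be evaluated directly from the trajectories; the following short sketch uses placeholder argument names for the lists of nodal vectors U^{(n)} , G^{(n)} , U_d^{(n)} , and G_d^{(n)} defined above.

```python
def discrete_cost(U, G, Ud, Gd, M, MG, beta, dt):
    """Evaluate J_{h,dt}(U, G) = (dt/2) * sum_n [ (M dU^n, dU^n) + beta (MG dG^n, dG^n) ]."""
    J = 0.0
    for Un, Gn, Udn, Gdn in zip(U, G, Ud, Gd):   # n = 1, ..., N_T
        dU, dG = Un - Udn, Gn - Gdn
        J += dU @ (M @ dU) + beta * (dG @ (MG @ dG))
    return 0.5 * dt * J
```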
The calculation of the optimality conditions associated with the discrete control problem (4.3) involves the discrete adjoint state P which leads to a determination of the optimal discrete control G . The triple (G, U, P) formed by the discrete optimal control, the discrete optimal state, and the discrete adjoint state is the unique solution of the system:
\left\{ \begin{array}{llll} \begin{eqnarray*} \begin{aligned} \quad \left(\dfrac{M}{dt }+ A\right)P^{(n-1)} & = \dfrac{M}{dt } P^{(n)}+M (U^{(n)}-U_d^{(n)})- D_V^{(n)}, \\ \quad P^{N^T} & = 0, \quad n = N_T, \cdots, 1, \\ \left(\dfrac{M}{dt }+ A\right)U^{(n+1)} & = \dfrac{M}{dt } U^{(n)}+M F^{(n+1)}+\dfrac{1}{ \varepsilon} M^\Gamma G^{(n+1)} - G_V^{(n)}, \\ \quad U^0 & = U_0, \quad n = 0, \cdots, N_T-1, \\ \quad G^{(n)} & = - \dfrac{1}{\varepsilon \beta}P^{(n-1)} + G_d^{(n)}, \quad n = 1, \cdots, N_T, \end{aligned} \end{eqnarray*}\end{array} \right. |
where
D_V^{(n)} = (\sum\limits_{j = 1}^m P_j^{(n)} \int_\Omega ({\mathit{\boldsymbol{v}}}^{(n)}.\nabla \phi_j)\phi_i \, dx)_{1 \leq i \leq m}. |
In order to use the conjugate gradient method for solving the discrete control problem (4.3), we need to calculate the gradient of the functional J_{h, dt } , which is given, thanks to the adjoint state technique, by:
\begin{eqnarray*} \begin{array}{lllll} \nabla J_{h, dt }(G, U) = \sum\limits_{n = 1}^{N^T} M^\Gamma (\dfrac{1}{\varepsilon}P^{(n-1)} + \beta (G^{(n)}-G_d^{(n)})). \end{array} \end{eqnarray*} |
A problem like (4.3) is usually solved by iterative methods that construct a sequence {\{u_i\}}^k_{i = 0} \; (k \in \mathbb {N}) in order to gradually move towards a better approximation of the "solution", i.e., the point satisfying the optimality condition. Now that we have given a discretization, or an approximation, of the cost function and of the adjoint state, we use the gradient method to evaluate the minimum of problem (4.3). This method belongs to the family of iterative methods called descent methods, since the sequence {\{u_i\}}^k_{i = 0} is constructed such that F(u_{i+1}) < F(u_i) for all i \in \mathbb {N} , where F denotes the function to be minimized, which, in our case, is given by J_{h, dt } . The following outlines the gradient method:
(1) Available data:
● Initialization u_0 , a tolerance \gamma > 0 .
● The function F to be minimized.
(2) Construction:
● A sequence {({u_i})}_{1\leq i \leq n } such that F(u_{i+1}) < F(u_i) \forall i .
● If {u_i} satisfies \|\nabla F (u_i) \| \leq \gamma then {u_i} is an approximation of the minimum sought.
Algorithm 1. Optimization algorithm.
1: Initialization u_0 .
2: Solving the state equation corresponding to u_0 and calculation of F .
3: Calculation of the adjoint state and the \nabla F .
4: If the stop condition is satisfied, then go to step 9. Otherwise, continue with the following steps:
5: Calculation of the direction of descent d_n .
6: Exact line search for the step {\alpha}_n (the criterion is quadratic).
7: Construction of u_{n+1} = u_n+ {\alpha}_n d_n .
8: Repeat steps 2 and 3 for u_{n+1} then go to step 4.
9: u_{n+1} is an approximation of the desired minimum.
The descent step {\alpha}_n plays a very important role in various optimization methods. The search for this coefficient requires the resolution of a one-dimensional minimization problem. We consider the merit function \alpha \longmapsto q(\alpha) = F(g_n+{\alpha} d_n) , which is strictly convex. Thus, the optimal step {\alpha}_n is the unique minimizer of the merit function q ; it satisfies the following equation:
\begin{equation} \begin{array}{ll} q'({\alpha}_n) = (\nabla F(g_n+{\alpha}_n d_n), d_n) = 0. \end{array} \end{equation} | (4.4) |
In the case of the criterion functional used in this work, we show that \alpha_n is given by:
\begin{align*} \alpha_n & = - \dfrac { { \int_\Upsilon } (u_g-u_d) \; z_{d_n}\; + \beta { \int_\Pi } (g-g_d)d_n }{ \int_\Upsilon z_{d_n}^2 + \beta { \int_\Pi } d_n^2 }\\ & = - \dfrac { \nabla F(g_n)\cdot d_n } { { \int_\Upsilon } z_{d_n}^2 + \beta { \int_\Pi } d_n^2 }, \end{align*}
where z_{d_n} is the solution of the following problem:
\begin{align*} \dfrac { \partial z_{d_n} } {\partial t} -\Delta z_{d_n}+{\mathit{\boldsymbol{v}}}\cdot\nabla z_{d_n} +\sigma z_{d_n}& = 0 \ \ \mbox {in} \ \ \Upsilon, \\ \varepsilon \dfrac { \partial { z_{d_n} }} {\partial n} + z_{d_n} & = d_n \ \ \mbox {on} \ \ \Pi, \\ z_{d_n}(0, \cdot)& = 0 \ \ \mbox{in} \ \ \Omega. \end{align*} |
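One iteration of the descent method with this exact step can be sketched as follows; the helpers gradient , solve_sensitivity (returning the discrete trajectory of z_{d_n} ), and the discrete inner products ip_Q and ip_Pi on \Upsilon and \Pi are hypothetical placeholders, not routines from the authors' code.

```python
def descent_step(g, d, beta, gradient, solve_sensitivity, ip_Q, ip_Pi):
    """One descent iteration with the exact line search (4.4):
    alpha_n = - (grad F(g_n), d_n) / ( ||z_{d_n}||_Q^2 + beta ||d_n||_Pi^2 )."""
    z = solve_sensitivity(d)                    # z_{d_n}: auxiliary state problem driven by the boundary datum d_n
    denom = ip_Q(z, z) + beta * ip_Pi(d, d)     # quadratic coefficient of the merit function q
    alpha = -ip_Pi(gradient(g), d) / denom      # optimal step along the direction d_n
    return g + alpha * d, alpha
```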
We present here a series of numerical tests to illustrate the convergence results of the penalized solutions (state equations and associated control problems) to the solutions of the non-penalized problems when the penalty parameter \varepsilon tends to zero. For this purpose, a computer code has been implemented, which solves the penalized control problem ({\cal P} _\varepsilon) . We recall that, for the numerical resolution of the partial differential equations solved by the state and the adjoint state, the system matrix is factorized by the Cholesky process, and the resulting triangular systems are solved by forward and backward substitution. The boundary condition considered in the problems treated here represents a penalization of the Dirichlet condition. Since the functional considered here is quadratic, the optimization method used is the conjugate gradient method combined with an exact line search. Let us recall that, for the numerical resolution, the discretization in space is given by the finite element method (FEM) and the discretization in time by the implicit Euler scheme. In the following, we present some numerical tests illustrating the convergence of the penalized solution towards the non-penalized solution when the penalization parameter \varepsilon tends to zero, first for the state equation and then for the associated control problem.
The parameter values are used to illustrate the theoretical results, but they have no physical significance. The simulations are done using MATLAB software.
In the first example, we take the domain as the square
\Omega = [0, 1]^2 |
(see Figure 1) and consider the function
\phi(x_1, x_2, t) = 4 \pi e^{(x_1+x_2+2 t)}. |
Let
{\mathit{\boldsymbol{v}}} = ({\mathit{\boldsymbol{v}}}_1, {\mathit{\boldsymbol{v}}}_2), |
where
{\mathit{\boldsymbol{v}}}_1 = -x_1(x_1-1)(2x_2-1) |
and
{\mathit{\boldsymbol{v}}}_2 = x_2(x_2-1)(2x_1-1)
and {\mathit{\boldsymbol{v}}} verifies the assumptions
\mbox{div} \, {\mathit{\boldsymbol{v}}} = 0\; \; \text{in}\; \; \Upsilon |
and
{\mathit{\boldsymbol{v}}} \cdot n = 0 \; \; \text{on}\; \; \Pi. |
One can easily verify that
u = \phi = 4\pi e^{(x_1+x_2+2 t)} |
is a solution (Figure 2) of:
\dfrac {\partial u} { \partial t} -\nabla(\mu \nabla u) + {\mathit{\boldsymbol{v}}}\cdot \nabla u + \sigma u = f |
in \Upsilon, u = \phi on \Pi , u(0) = \phi(0) in \Omega , where \mu = 1 , and
f = ({\mathit{\boldsymbol{v}}}_1 + {\mathit{\boldsymbol{v}}}_2 + \sigma)\phi . |
The associated penalized problem is given by:
\begin{align*} \dfrac {\partial u_\varepsilon} { \partial t} -\nabla(\mu \nabla u_\varepsilon) + {\mathit{\boldsymbol{v}}}\cdot \nabla u_\varepsilon + \sigma u_\varepsilon& = f \text{ in } \Upsilon, \\ \varepsilon \dfrac { \partial u_\varepsilon}{ \partial n} + u_\varepsilon & = \phi \text{ on } \Pi, \\ u_\varepsilon(0)& = \phi(0) \text{ in } \Omega. \end{align*} |
In Figure 3, and in order to evaluate the rate of convergence, we represent the error (u_\varepsilon -u) in the L^2(\Upsilon) norm as a function of \varepsilon for time steps dt \in \{0.01, 0.001\} . We can observe that the error converges linearly to zero when the penalty parameter \varepsilon tends to zero. This is confirmed by the calculation of the slope in the region far from the plateau (Figure 4). The slope is 0.92 when dt = 0.01 and 0.90 when dt = 0.001 . In Figure 3, we also display the curve of the error (u_\varepsilon -u) in the L^2(0, T, H^1(\Omega)) norm for the two time steps dt \in \{ 0.01, 0.001 \} .
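The slopes quoted here can be obtained by a least-squares fit of the logarithm of the error against \log \varepsilon over the values of \varepsilon away from the plateau; a small sketch of this computation follows (the arrays below are illustrative placeholders, not the measured data of Figures 3 and 4).

```python
import numpy as np

# illustrative placeholders for the penalty parameters and the corresponding L2(Upsilon) errors
eps_values = np.array([1e-1, 5e-2, 1e-2, 5e-3, 1e-3])
errors = np.array([2.1e-1, 1.1e-1, 2.5e-2, 1.3e-2, 3.0e-3])

# slope of log(error) versus log(eps), i.e., the observed convergence rate in eps
slope, intercept = np.polyfit(np.log(eps_values), np.log(errors), 1)
print(f"estimated convergence rate: {slope:.2f}")
```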
In the second example, we get a non-regular exact solution by considering the non-convex domain
\Omega = [0, 1]^2\setminus[0, \dfrac{1}{2}]^2 |
as shown in Figure 5. The exact solution given here is
u = \phi : = 4 \pi e^{(x_1+x_2+2 t)} + r^{2/3} \sin(\dfrac{2\theta}{3}),
where (r, \theta) are the polar coordinates of a point M(x_1, x_2) , r being the distance between M(x_1, x_2) and O({1 \over 2}, {1 \over 2}) , and \theta the angle between (OA) and (OM) where A({1 \over 2}, 0) .
We use the same numerical setting as in Example 1. In order to better account for the reduced regularity of the solution and to obtain better accuracy, we consider here a mesh twice as fine as that used in the regular case ( h = 0.00625 ). For the temporal discretization, we choose dt \in \{ 0.01, 0.001 \} . For different values of \varepsilon , we calculated the error (u_\varepsilon - u) in the L^2(\Upsilon) and L^2(0, T, H^1(\Omega)) norms, where u and u_\varepsilon are the solutions of Eqs (2.1) and (2.5), respectively (Figure 6).
Note that the theoretical results given in [4] for the pure diffusion equation predict a linear behavior in \varepsilon for the L^2(\Upsilon) error norm and an \varepsilon^{\frac{2}{3}} behavior for the L^2(0, T, H^1(\Omega)) error norm. In Figure 7, and for the same data as above, we represent the error (u_\varepsilon - u) in the L^2(\Upsilon) and L^2(0, T, H^1(\Omega)) norms as a function of \varepsilon , where this time u and u_\varepsilon are the solutions of the reaction-advection-diffusion Eqs (2.1) and (2.5), respectively, and where it is necessary to replace the expression of f by
\begin{align} \begin{split} f & = 4 \pi e^{(x_1+x_2+2 t)} ({\mathit{\boldsymbol{v}}}_1 + {\mathit{\boldsymbol{v}}}_2 + \sigma) \\ & + \dfrac{2}{3} r^{-1/3} ({\mathit{\boldsymbol{v}}}_1\sin(\dfrac{2\theta}{3}) + {\mathit{\boldsymbol{v}}}_2 \cos(\dfrac{2\theta}{3})) +\sigma r^{2/3} sin(\dfrac{2\theta}{3}). \end{split} \end{align} | (4.5) |
We observe the same error behavior as in the regular case, namely a convergence toward zero (Figure 6). We also observe a saturation of the error from \varepsilon = 10^{-3} . The calculated slopes are given by 0.92 and 0.77 for the L^2 and L^2(H^1) norms, respectively. A more advanced numerical study using higher-order time schemes (e.g., the Crank-Nicolson scheme) and more precise quadrature formulas for the integration of the advection term could give better precision in the calculation of the slopes.
In this subsection, we present some numerical results on the convergence of the optimal solution of the control problem associated with the reaction-advection-diffusion equation. For this, we consider two examples, the first concerns a very regular exact optimal solution and the second an exact solution containing a singular part. We then compare the calculated optimal solution, using the penalty technique, to the exact solution in order to evaluate the convergence rates as a function of the penalty parameter \varepsilon intervening in the Robin condition penalizing the Dirichlet condition.
We start first with the case where the exact optimal solution to be approximated is given by:
\overline u = 4\pi e^{(x_1+x_2+2 t)} , \quad \overline g = \overline u_{|\Pi} |
on the domain
\Omega = ]0, 1[^2. |
Then, let
u_d = \overline u \; \; \text{and}\; \; g_d = {u_d}_{|\Pi} |
and consider the optimal control problem
\min \dfrac{1}{2} \int_{\Upsilon} (u-u_d)^2 \, dx dt + \dfrac{1}{2} \int_\Pi (g-g_d)^2 \, d\Pi, |
where u is the solution of:
\begin{equation} \dfrac {\partial u} { \partial t} -\nabla(\mu \nabla u) + {\mathit{\boldsymbol{v}}}\cdot \nabla u + \sigma u = f \, \mbox { in } \Upsilon, \, u = g \mbox { on } \Pi \end{equation} | (4.6) |
with
u(0) = u_d(0) |
in \Omega, \mu = 1 ,
f = ({\mathit{\boldsymbol{v}}}_1 + {\mathit{\boldsymbol{v}}}_2+ \sigma)u_d |
and {\mathit{\boldsymbol{v}}} the function used previously. One can easily verify that the minimum of J is reached at (u_d, g_d) and is equal to zero. The discretization of the control problem is the one described in Section 4.3 with a mesh step h = 0.0125 and a time step dt = 0.01 . The minimum of the cost function is calculated by the gradient method with a tolerance of 10^{-5} and an exact line search. At each iteration, the discrete algebraic systems associated with the Robin condition are solved by the Cholesky method, knowing that the factorization is performed only once in pre-processing. For several values of the penalty parameter \varepsilon , the discrete optimal state \overline u_{\varepsilon, h}^{dt } and the discrete optimal control \overline g_{\varepsilon, h}^{dt } were calculated. In Figure 8, we represent, on a logarithmic scale, the relative errors:
\varepsilon \mapsto \dfrac{ \|\overline u_{\varepsilon, h}^{dt } -\overline u\|}{\|\overline u\|} \quad \text{and} \quad \dfrac{ \|\overline g_{\varepsilon, h}^{dt } -\overline g\|}{\|\overline g\|} |
in norms L^2(\Upsilon) and L^2(0, T, H^1(\Omega)) for the optimal state and in L^2(\Pi) norm for the optimal control.
One can observe in Figure 8 a decrease of the calculated errors towards zero, which confirms the convergence of the penalized (Robin) optimal solution to the exact (Dirichlet) optimal solution when the penalty parameter \varepsilon tends to zero.
In addition, we calculated the slopes (convergence rates) corresponding to the different curves, for values of \varepsilon far from the saturation zone, which, as explained in the preceding numerical examples concerning the state equation, is due to the fact that the error in h and dt outweighs that coming from the penalty parameter \varepsilon . The calculated slopes are given by 0.92 , 0.73 , and 0.94 for the error on the optimal state in the L^2(\Upsilon) and L^2(0, T, H^1(\Omega)) norms and for the error in the L^2(\Pi) norm on the optimal control, respectively.
The last example concerns an exact optimal solution containing a singular part. For this, we take the non-convex domain
\Omega = ]0, 1[^2 \setminus ]0, 1/2[^2 |
used previously (see Figure 5). We define the functions:
u_d = 4\pi e^{(x_1+x_2+2 t)} + r^{2/3} \sin \dfrac{2\theta}{3}
and
g_d = {u_d}_{|\Pi},
and we consider the same objective function J(u, g) as in the previous example, where u is the solution of (4.6) and the right-hand side f is given by (4.5). The optimal solution is given by (\overline u = u_d, \overline g = g_d) . By carrying out the same numerical experiment as that of the preceding example, we show the various errors associated with the control of the reaction-advection-diffusion equation in Figure 9.
The evaluation of the slope for the different errors leads to the following results: the slopes for the norms L^2(L^2) and L^2(H^1) on the optimal state \overline u and for the norm L^2 on the optimal control \overline g are 0.92 , 0.76 , and 0.91 , respectively. The results obtained here allow us to conclude that the penalized optimal solution (Robin) converges to the optimal solution associated with the Dirichlet condition when the penalization parameter \varepsilon tends to 0 .
We discussed the boundary optimal control problem for the reaction-advection-diffusion equation with non-smooth Dirichlet boundary data. We checked the well-posedness of the state equation. Then, we introduced the penalization technique for the boundary condition, considered the corresponding boundary optimal control problem, and gave the optimality conditions. Finally, we conducted a numerical analysis of a time scheme/finite element discretization of the penalized control system and investigated the convergence through numerical experiments. This work should be completed by theoretical results on the convergence of the penalized optimal solution towards the optimal solution of the Dirichlet problem, which is work in progress.
The authors extend their appreciation to the Deanship of Scientific Research at Northern Border University, Arar, KSA, for funding this research work through the project number "NBU-SAFIR-2024". The authors are also grateful to the anonymous referees for the many constructive suggestions, which helped to improve the presentation of the paper.
All authors declare no conflicts of interest in this paper.
[1] I. Babuška, The finite element method with penalty, Math. Comput., 27 (1973), 221–228. https://doi.org/10.2307/2005611
[2] A. Kirsch, The Robin problem for the Helmholtz equation as a singular perturbation problem, Numer. Funct. Anal. Optimiz., 8 (1985), 1–20. https://doi.org/10.1080/01630568508816201
[3] I. Lasiecka, J. Sokolowski, Semidiscrete approximation of hyperbolic boundary value problem with nonhomogeneous Dirichlet boundary conditions, SIAM J. Math. Anal., 20 (1989), 1366–1387. https://doi.org/10.1137/0520090
[4] F. B. Belgacem, H. E. Fekih, J. Raymond, A penalized Robin approach for solving a parabolic equation with nonsmooth Dirichlet boundary conditions, Asymptotic Anal., 34 (2003), 121–136.
[5] M. Costabel, M. Dauge, A singularly perturbed mixed boundary value problem, Commun. PDE, 21 (1996), 1919–1949.
[6] E. Casas, M. Mateos, J. P. Raymond, Penalization of Dirichlet optimal control problems, ESAIM Control Optim. Calc. Var., 15 (2009), 782–809. https://doi.org/10.1051/cocv:2008049
[7] L. Hou, S. S. Ravindran, A penalized Neumann control approach for solving an optimal Dirichlet control problem for the Navier-Stokes equations, SIAM J. Control Optim., 36 (1998), 1795–1814. https://doi.org/10.1137/S0363012996304870
[8] L. Hou, S. S. Ravindran, Numerical approximation of optimal flow control problems by a penalty method: error estimates and numerical results, SIAM J. Sci. Comput., 20 (1999), 1753–1777. https://doi.org/10.1137/S1064827597325153
[9] T. Masrour, Contrôlabilité et observabilité des systèmes distribués, problèmes et méthodes, Thesis, Ecole Nationale des Ponts et Chaussées, 1995.
[10] N. Arada, H. E. Fekih, J. Raymond, Asymptotic analysis of some control problems, Asymptotic Anal., 24 (2000), 343–366.
[11] F. B. Belgacem, C. Bernardi, H. E. Fekih, Dirichlet boundary control for a parabolic equation with a final observation I: a space-time mixed formulation and penalization, Asymptotic Anal., 71 (2011), 101–121. https://doi.org/10.3233/ASY-2010-1015
[12] F. B. Belgacem, H. E. Fekih, H. Metoui, Singular perturbation for the Dirichlet boundary control of elliptic problems, Math. Model. Numer. Anal., 37 (2003), 833–850. https://doi.org/10.1051/m2an:2003057
[13] M. E. Hajji, F. Jday, Boundary data completion for a diffusion-reaction equation based on the minimization of an energy error functional using conjugate gradient method, Punjab Univ. J. Math., 51 (2019), 25–43.
[14] J. Lions, E. Magenes, Problèmes aux limites non homogènes et applications, Dunod, 1968.
[15] A. Nguyen, J. Raymond, Control problems for convection-diffusion equations with control localized on manifolds, ESAIM Control Optim. Calc. Var., 6 (2001), 467–488.
[16] R. Glowinski, J. Lions, Exact and approximate controllability for distributed parameter systems, part I, Acta Numer., 3 (1994), 269–378. https://doi.org/10.1017/S0962492900002452
[17] R. Glowinski, J. Lions, Exact and approximate controllability for distributed parameter systems, part II, Acta Numer., 4 (1995), 159–328. https://doi.org/10.1017/S0962492900002543