
Efficient numerical methods for elliptic optimal control problems with random coefficient

  • Efficient numerical methods for solving Poisson-equation-constrained optimal control problems with random coefficient are discussed in this paper. By applying the finite element method and the Monte Carlo approximation, the original optimal control problem is discretized and transformed into an optimization problem. Taking advantage of the separable structure, Algorithm 1, based on the alternating direction method of multipliers (ADMM), is proposed for solving the problem. Both the computational and storage costs of this algorithm are very high. In order to reduce the computational cost, Algorithm 2 is proposed, in which the multi-modes expansion is introduced and applied. Further, to reduce the storage cost, we propose Algorithm 3 based on Algorithm 2. The main idea is that the random term is shifted to the objective functional, which can be computed in advance. Therefore, we only need to solve a deterministic optimization problem, which reduces both costs significantly. Moreover, convergence analyses of the proposed algorithms are established, and numerical simulations are carried out to test their performance.

    Citation: Xiaowei Pang, Haiming Song, Xiaoshen Wang, Jiachuan Zhang. Efficient numerical methods for elliptic optimal control problems with random coefficient[J]. Electronic Research Archive, 2020, 28(2): 1001-1022. doi: 10.3934/era.2020053




    It is well known that many physical and engineering problems can be described by optimal control problems (OCPs) with partial differential equation (PDE) or random/stochastic partial differential equation (RPDE/SPDE) constraints, such as the optimal heat source problem, the optimal design of aircraft, and the weather forecasting problem. For deterministic PDE optimal control problems, there exist plenty of research results, and we refer to [15,21,26] for more details. However, only a few works have been devoted to RPDE/SPDE optimal control problems.

    Generally speaking, the numerical methods for RPDE/SPDE-constrained optimal control problems are divided into two categories: sample methods and projection methods. For the former, Cao et al. proposed an improved Monte Carlo method with a variance reduction technique to deal with optimal control problems with random Burgers equation constraints in 2003 [7]. Based on the gradient error, Kouri et al. combined the trust region method and the stochastic collocation method to propose an adaptive collocation method, which is superior to the traditional Newton conjugate gradient method [17]. For the latter, Xiu developed a polynomial chaos expansion for the target functional, which transforms the original SPDE-constrained optimal control problem into a nonlinear problem with finitely many coefficients [25]. However, no theoretical guarantee was given there. In 2011, Gunzburger and his collaborators transformed the SPDE-constrained optimal control problem into a nonlinear KKT system, and then proved the existence of the solution of the KKT system. By using the Karhunen-Loève (KL) expansion or proper orthogonal decomposition, they obtained a series of deterministic optimization systems numerically [13]. In 2013, Kunoth and Schwab used generalized polynomial chaos (gPC) expansions to handle RPDE/SPDE optimal control problems [18]. Recently, a Taylor approximation and variance reduction method was proposed by Chen et al. for PDE-constrained optimal control under uncertainty [8].

    In this paper, we mainly concentrate on efficient numerical methods for the optimal control problem constrained by the Poisson equation with random coefficient. The major challenges in dealing with the randomness are: (Ⅰ) how to handle the uncertainties appearing in the constraint and the objective functional, and formulate a discretized optimization problem; (Ⅱ) how to solve the resulting optimization problem with an efficient algorithm. We will analyze these challenges one by one, and present the corresponding solutions.

    The Monte Carlo (MC) method [5,7], the stochastic collocation (SC) method [1,2], the stochastic Galerkin (SG) method [22,23], and the polynomial chaos (PC) expansion [16] are some of the methods commonly used to deal with the first issue. Owing to the generality and parallelizability of the MC method, we will apply it to handle the random sample space. Further, using the finite element method (FEM) to approximate the physical space, we obtain a discretized optimization problem. This is the most natural approach, but it faces huge computational and storage difficulties when a high-accuracy solution is required. The KL expansion is a spectral representation of the covariance function in terms of its eigenvalues and eigenfunctions. It is widely used in the RPDE/SPDE field when the eigenvalues decay exponentially, or for non-Gaussian spatially-dependent random fields with large correlation length; otherwise it runs into the curse of dimensionality [3]. As an improvement, the multi-modes expansion (MME) [11] will be considered: the state variable is first approximated by the MME, and the MC and FEM approximations are carried out subsequently. By exploiting the relationships among the terms of the MME, we can transform the discretized optimization problem into a quadratic programming problem with deterministic linear constraints. In this paper, we will follow this approach to resolve the first challenge.

    The second challenge is closely related to the first one. If the first challenge is handled well, the second becomes much less difficult. For solving the resulting optimization problem, there exist numerous algorithms, such as the gradient descent method, the conjugate gradient method, Newton's method, the regularization method, and the alternating direction method of multipliers (ADMM); the interested reader is referred to [13,21] and the references therein for the rich literature. Because of the separability of the state and control in the resulting optimization problem, the ADMM is applied in this paper; it has already been used for optimal control problems with deterministic PDE constraints, with satisfactory results [14,26]. Moreover, to the best of our knowledge, there is no complete convergence analysis for elliptic RPDE-constrained optimization problems. The existing results merely bound the difference between the exact solution and the discretized solution [13,19], while the error of solving the optimization problem itself is ignored. Here, based on the convergence analysis of the ADMM in [6] and the techniques for the first challenge, a global error estimate in terms of the deviation between objective functionals will be established for our proposed algorithms.

    For clarity, the efficient algorithm proposed in this paper is developed in three steps. Firstly, the MC, the FEM, and the ADMM are used sequentially to formulate a primitive algorithm, Algorithm 1, which needs to compute $M+1$ inverses of $N\times N$ matrices and store $O(M+1)$ matrices (see Section 3), where $M$ and $N$ denote the number of samples in probability space and the number of degrees of freedom in physical space, respectively. To reduce the computational cost, we propose Algorithm 2, in which the MME technique is introduced. Using this algorithm, we only need to compute two matrix inverses, but still need to store $O(M+1)$ matrices (see Section 4). As the last step, by applying the recurrence relation of the MME, the random term in the constraint is shifted to the objective functional. Then, based on Algorithm 2, we propose Algorithm 3, in which we only need to compute two matrix inverses and store $O(1)$ matrices (see Section 5). Naturally, Algorithm 3 reduces the computational and storage costs significantly.

    The outline of this paper is as follows. In Section 2, the model problem, the well-posedness of its solution, and its first order optimal condition are introduced. Algorithms 1, 2, and 3, and their convergence analyses, are presented in Sections 3, 4, and 5, respectively. In Section 6, numerical simulations are carried out to test the performance of the proposed algorithms. Finally, some concluding remarks are given in Section 7.

    The model problem, the well-posedness of the studied model, and its corresponding first order optimal condition will be presented in this section.

    For ease of description, most of the notations and definitions used in this paper are adopted from the reference [9]. We consider the model problem on a polygon $D\subset\mathbb{R}^2$ with boundary $\partial D$. $L^p(D)$, $L^p(\partial D)$, $H^p(D)$, and $H^p(\partial D)$ denote the Lebesgue integrable function spaces and Hilbert spaces on $D$ and $\partial D$, respectively ($1\le p\le +\infty$). In addition, we introduce a standard probability space $(\Theta,\mathcal{F},P)$, and a Bochner space $L^p(\Theta,X)$ with the norms

    $$\|v\|_{L^p(\Theta,X)}=\Big(\int_{\Theta}\|v(\xi,\cdot)\|_X^p\,dP(\xi)\Big)^{\frac1p}<\infty,\quad 1\le p<\infty,\qquad \|v\|_{L^{\infty}(\Theta,X)}=\operatorname*{ess\,sup}_{\xi\in\Theta}\|v(\xi,\cdot)\|_X<\infty,$$

    where $X$ stands for a Hilbert space.

    Using the notations above, the Poisson-equation-constrained optimal control problem with random coefficient and Dirichlet boundary condition can be written as

    $$(\mathrm{OCP})\quad\begin{cases}\min\limits_{y\in Y,\,u\in U} F(y(\xi,x),u(x))\\[4pt] \text{subject to:}\ \begin{cases}-\nabla\cdot\big(\alpha(\xi,x)\nabla y(\xi,x)\big)=u(x), & \text{in } D,\\ y(\xi,x)=g(x), & \text{on } \partial D,\end{cases}\end{cases}$$

    with

    $$F(y(\xi,x),u(x)):=\mathbb{E}\Big[\frac12\int_D |y(\xi,x)-y_d|^2\,dx\Big]+\frac{\gamma}{2}\int_D |u(x)-u_d|^2\,dx, \qquad (1)$$

    where $y_d\in L^2(D)$ and $u_d\in H^2(D)$ stand for the desirable state variable and control variable, respectively, which are given in advance. The random diffusion coefficient $\alpha(\xi,x)$ belongs to $L^2(\Theta,L^{\infty}(D))$, where $\xi$ represents a random sample in $\Theta$. The expectation of $\rho(\xi)$ is defined by $\mathbb{E}[\rho(\xi)]=\int_{\Theta}\rho(\xi)\,dP(\xi)$, and the boundary condition $g\in L^2(\partial D)$ is a known function, which is set to be $0$ in this paper for simplicity.

    The corresponding variational formulation of the Poisson equation with random coefficient is given by: Find $y\in L^2(\Theta,H_0^1(D))$ such that

    $$\mathbb{E}[a(y,v)]=\mathbb{E}\big[(u,v)_{L^2(D)}\big],\quad \forall\, v\in L^2(\Theta,H_0^1(D)), \qquad (2)$$

    with $a(y,v)=(\alpha(\xi,x)\nabla y,\nabla v)_{L^2(D)}$. In order to ensure the well-posedness of the variational formulation (2), the following assumptions on $\alpha$ are needed.

    Assumption 1. ([3]) The random diffusion coefficient $\alpha(\xi,x)$ in (OCP) is assumed to be a strongly measurable mapping from $\Theta$ to $L^{\infty}(D)$, and there exist constants $0<\alpha_1<\alpha_2<\infty$ such that the random coefficient $\alpha(\xi,x)$ is uniformly elliptic, i.e., for every $\xi\in\Theta$, there holds

    $$0<\alpha_1\le \operatorname*{ess\,inf}_{x\in D}\alpha(\xi,x)\le \|\alpha(\xi,\cdot)\|_{L^{\infty}(D)}\le \alpha_2<\infty.$$

    Lemma 2.1. ([2,4,12]) If Assumption 1 holds, then for any given right-hand-side function $u\in L^2(D)$, the variational equation (2) has a unique solution $y(\xi,x)\in L^2(\Theta,H_0^1(D))$. Furthermore, we have the following pointwise $H^1$-norm estimate

    $$\|y(\xi,\cdot)\|_{H^1(D)}\le \frac{1}{\alpha_1}\|u\|_{H^{-1}(D)}\le \frac{1}{\alpha_1}\|u\|_{L^2(D)}, \qquad (3)$$

    and the global $L^2$-error estimate

    $$\mathbb{E}\big[\|y\|^2_{L^2(D)}\big]\le C(\alpha)\|u\|^2_{L^2(D)}, \qquad (4)$$

    where $C(\alpha)$ is a positive constant depending on $\alpha_1$ and $\alpha_2$.

    For simplicity, the diffusion coefficient $\alpha(\xi,x)$ in (OCP) is set to be

    $$\alpha(\xi,x)=1+\varepsilon\,\sigma(\xi,x), \qquad (5)$$

    other cases can be considered similarly. Here, $\sigma(\xi,x)\in L^2(\Theta,L^{\infty}(D))$ denotes a random process satisfying

    $$P\big\{\xi\in\Theta\,;\ \|\sigma(\xi,\cdot)\|_{L^{\infty}(D)}\le 1\big\}=1,$$

    and $\varepsilon$ stands for the magnitude of the perturbation. It is not difficult to verify that the random coefficient $\alpha$ defined in (5) satisfies Assumption 1. Therefore, according to Lemma 2.1, there exists a linear operator $T: L^2(D)\to L^2(\Theta,H^1(D))$ such that

    $$y=T(u), \qquad (6)$$

    where $u$ and $y$ satisfy the variational equation (2). Further, it follows from the inequality (3) that the following estimate holds:

    $$\mathbb{E}\big[\|T(u)\|^j_{H^1(D)}\big]=\mathbb{E}\big[\|y(u)\|^j_{H^1(D)}\big]\le C_1\|u\|^j_{L^2(D)}, \qquad (7)$$

    where $j=2$ or $4$, and $C_1$ is a constant depending on the bounds of $\alpha$.

    Therefore, if the state variable space is $Y=L^2(\Theta,H_0^1(D))$ and the control variable space is $U=L^2(D)$, the optimal control problem (OCP) can be rewritten as

    $$(\mathrm{P})\quad\begin{cases}\min\limits_{y\in Y,\,u\in U} F(y(\xi,x),u(x))\\[4pt] \text{subject to}\quad y(\xi,x)=T(u(x)),\end{cases}$$

    which can also be reformulated as

    $$\min_{u\in U} J(u)=\min_{u\in U} F(T(u),u)=\min_{u\in U}\Big\{\mathbb{E}\Big[\frac12\int_D|T(u)-y_d|^2\,dx\Big]+\frac{\gamma}{2}\int_D|u-u_d|^2\,dx\Big\}. \qquad (8)$$

    Here, $J$ is a functional from $L^2(D)$ to $\mathbb{R}$, and the existence and uniqueness results for the solution of (8) can be found in [16,19].

    Finally, let $u$ be the optimal solution of (8). By applying the definition of the Fréchet derivative [9], the linearity of the expectation operator $\mathbb{E}$, and the solution operator $T$ defined in (6), the first order optimal condition of (P) can be written as

    $$J'(u)=\mathbb{E}\big[T^{*}(T(u)-y_d)\big]+\gamma(u-u_d)=0, \qquad (9)$$

    where $T^{*}$ is the adjoint operator of $T$.

    Now, the optimal control problem (P) and its first order optimal condition have been presented. In the remainder of the paper, we will propose three algorithms for solving the optimal control problem (P) incrementally, and compare them with each other.

    The first algorithm studied in this paper is presented in this section. It is built from the finite element method (FEM), the Monte Carlo approximation (MC), and the alternating direction method of multipliers (ADMM). We will apply these techniques sequentially to solve the optimal control problem (P), and present a global error estimate in the form of the deviation between the objective functionals.

    Let $\mathcal{K}_h$ be a regular triangulation of the spatial domain $D$; then the linear finite element space can be defined by

    $$V_h=\big\{v_h\in H_0^1(D)\ \big|\ v_h|_K\in P_1(K),\ \forall K\in\mathcal{K}_h\big\},$$

    where $P_1$ is the space of polynomials of degree less than or equal to $1$. Further, let $R=R(x)=(\phi_1(x),\dots,\phi_N(x))$ denote the vector valued function consisting of all the basis functions of $V_h$; then the state variable $y(\xi,x)\in L^2(\Theta,H^1(D))$ can be approximated by

    $$y_h(\xi,x)=\sum_{i=1}^{N} y_i(\xi)\phi_i(x)=R\,\mathbf{y}(\xi), \qquad (10)$$

    which belongs to $Y_h=L^2(\Theta,V_h)$. Here, the vector $\mathbf{y}(\xi)=(y_1(\xi),\dots,y_N(\xi))^T\in\mathbb{R}^N$ for any fixed $\xi\in\Theta$.

    Based on the above notations, for a fixed sample $\xi\in\Theta$ and a given control variable $u\in L^2(D)$, the FEM approximation of the constraint equation in (OCP) reads: Find $y_h(\xi,\cdot)\in V_h$ such that

    $$a(y_h(\xi,\cdot),v_h)=(u,v_h)_{L^2(D)},\quad \forall\, v_h\in V_h. \qquad (11)$$

    This means there exists an operator $T_h: L^2(D)\to L^2(\Theta,V_h)\subset L^2(\Theta,H^1(D))$ such that $y_h=T_h(u)$. It is easy to see that $T_h$ is a linear operator.

    Lemma 3.1. The pathwise FEM discretization (11) satisfies the following stability estimate: for any fixed sample $\xi\in\Theta$,

    $$\|T_h(u)\|_{H^1(D)}=\|y_h(\xi,\cdot)\|_{H^1(D)}\le c(\alpha)\|u\|_{L^2(D)}, \qquad (12)$$

    where $c(\alpha)$ depends on $\alpha$.

    The proof, which is omitted here, follows from the coercivity and continuity of the bilinear form $a(\cdot,\cdot)$. The interested reader is referred to [15].

    Now, we generalize the stability estimate from the deterministic problem to the random coefficient case. It is well known that the FEM approximation of (2) is: Find $y_h\in L^2(\Theta,V_h)$ such that

    $$\mathbb{E}[a(y_h,v_h)]=\mathbb{E}\big[(u,v_h)_{L^2(D)}\big],\quad \forall\, v_h\in L^2(\Theta,V_h). \qquad (13)$$

    By applying Assumption 1 and the estimate (12), we can derive:

    Lemma 3.2. For any control variable $u\in L^2(D)$, if $y_h$ is the FEM solution of (13), then we have

    $$\mathbb{E}\big[\|T_h u\|^j_{H^1(D)}\big]=\mathbb{E}\big[\|y_h\|^j_{H^1(D)}\big]\le C_2\|u\|^j_{L^2(D)}, \qquad (14)$$

    where $j=2,4$, and $C_2$ is a positive constant depending on $\alpha$ and $j$.

    By using the estimates (4), (7), and (14), Theorem 4.3 in [10], and the triangle inequality, we can obtain the following lemma.

    Lemma 3.3. ([11,19]) If the control variable $u\in L^2(D)$, then it holds that

    $$\Big(\mathbb{E}\big[\|(T-T_h)u\|^j_{L^2(D)}\big]\Big)^{\frac1j}\le C_3 h^2\|u\|_{L^2(D)}, \qquad (15)$$

    where $j=2,4$, and $C_3$ is a positive constant independent of the mesh size $h$.

    Similar to (10), the control variable $u$ can be approximated by

    $$u_h=R\,\mathbf{u}\in U_h=V_h, \qquad (16)$$

    where $\mathbf{u}=(u_1,\dots,u_N)^T\in\mathbb{R}^N$. Substituting the state variable $y$ and the control variable $u$ in (P) by $y_h$ and $u_h$, respectively, we get the stochastic optimization problem

    $$(\mathrm{P}_{1h})\quad\begin{cases}\min\limits_{y_h\in Y_h,\,u_h\in U_h} F(y_h(\xi,x),u_h(x))\\[4pt] \text{subject to}\quad y_h(\xi,x)=T_h(u_h(x)).\end{cases}$$

    In this subsection, we mainly apply the standard MC method to approximate the expectation $\mathbb{E}$ in $(\mathrm{P}_{1h})$. Let $\{\xi_j\}_{j=1}^{M}$ ($M\ge 1$) be independent identically distributed samples drawn from the probability space $(\Theta,\mathcal{F},P)$. For any state variable $y\in L^2(\Theta,H^1(D))$, by the law of large numbers and the central limit theorem, we can approximate the expectation $\mathbb{E}[y_h(\xi,x)]$ by

    $$\mathbb{E}_M[y_h(\xi,x)]=\frac1M\sum_{j=1}^{M} y_h(\xi_j,x), \qquad (17)$$

    which satisfies the error estimate (cf. [4])

    $$\Big(\mathbb{E}\big[\|\mathbb{E}[y_h]-\mathbb{E}_M[y_h]\|^2_{L^2(D)}\big]\Big)^{\frac12}\le \frac{1}{\sqrt M}\Big(\mathbb{E}\big[\|y_h\|^2_{L^2(D)}\big]\Big)^{\frac12}. \qquad (18)$$
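The estimator (17) and the rate in (18) can be seen in a scalar toy computation. The sketch below uses a hypothetical uniform random variable as a stand-in for $y_h$ (an assumption for illustration, not the paper's state variable) and only demonstrates the $1/\sqrt{M}$ decay of the sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_mean(sample_fn, M):
    """Monte Carlo estimator E_M[y] = (1/M) sum_j y(xi_j), as in (17)."""
    return np.mean([sample_fn() for _ in range(M)])

# Hypothetical random quantity: uniform on [0, 1], exact mean 0.5.
errors = [abs(mc_mean(lambda: rng.uniform(0.0, 1.0), M) - 0.5)
          for M in (100, 10_000)]
# By (18), the root-mean-square error decays like 1/sqrt(M):
# 100x more samples buys roughly one extra decimal digit of accuracy.
```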

    Substituting the MC approximation (17) into $(\mathrm{P}_{1h})$, we can derive the following discretized optimization problem:

    $$(\mathrm{P}^M_{1h})\quad\begin{cases}\min\limits_{y_h\in Y_h,\,u_h\in U_h} F_{M,h}(y_h(\xi,x),u_h(x))\\[4pt] \text{subject to}\quad y_h(\xi_j,x)=T_h(\xi_j,u_h(x)),\quad j=1,\dots,M,\end{cases}$$

    where

    $$F_{M,h}(y_h,u_h)=\mathbb{E}_M\Big[\frac12\int_D|y_h-y_d|^2\,dx\Big]+\frac{\gamma}{2}\int_D|u_h-u_d|^2\,dx. \qquad (19)$$

    Similar to (P), the optimization problem $(\mathrm{P}^M_{1h})$ has a unique solution $u_{M,h}\in V_h$, which satisfies the first order optimal condition

    $$\mathbb{E}\big[\mathbb{E}_M[T_h^{*}(T_h(u_{M,h})-y_d)]\big]+\gamma\big(u_{M,h}-Q_h u_d\big)=0, \qquad (20)$$

    where $T_h^{*}$ is the adjoint operator of $T_h$, and $Q_h$ is the $L^2$ projection operator from $L^2(D)$ onto $V_h$.

    Using the first order optimal conditions (9) and (20), we can get

    Lemma 3.4. ([19]) Let $u$ and $u_{M,h}$ be the optimal controls of (P) and $(\mathrm{P}^M_{1h})$, respectively. Then we have the following estimate

    $$\|u-u_{M,h}\|_{L^2(D)}\le \frac{K_1}{\sqrt M}+K_2 h^2 \qquad (21)$$

    with

    $$K_1=\frac{C}{\gamma}\sqrt{48}\,\big(\|y_d\|_{L^2(D)}+\|u\|_{L^2(D)}\big),\qquad K_2=\frac{C}{\gamma}\big(\|y_d\|_{L^2(D)}+\|u\|_{L^2(D)}+\|u_d\|_{L^2(D)}\big),$$

    where the constant $C$ is independent of $M$ and $h$.

    Further, applying the definitions of the functionals $F(y,u)$ and $F_{M,h}(y_h,u_h)$, the operators $T$ and $T_h$, and the error estimate (21) in Lemma 3.4, we can get

    Theorem 3.5. Let $(y,u)$ and $(y_{M,h},u_{M,h})$ be the optimal solutions of the optimal control problem (P) and its approximation $(\mathrm{P}^M_{1h})$, respectively. Then we have

    $$\mathbb{E}\big[|F(y,u)-F_{M,h}(y_{M,h},u_{M,h})|\big]\le \frac{\bar K_1}{\sqrt M}+\bar K_2 h^2, \qquad (22)$$

    where

    $$\begin{aligned}\bar K_1={}&8\sqrt2\big(C_1\|u\|_{L^2(D)}+\|y_d\|_{L^2(D)}\big)^2+\gamma\big(\|u\|_{L^2(D)}+\|u_d\|_{L^2(D)}\big)K_1\\ &+\big(K_1C_1C_2\|u\|_{L^2(D)}+K_1C_2\|y_d\|_{L^2(D)}\big)+2\sqrt2\,C_2K_1\big(C_1\|u\|_{L^2(D)}+\|y_d\|_{L^2(D)}\big)\\ &+2\sqrt2\Big(\big(C_1C_3\|u\|_{L^2(D)}+C_3\|y_d\|_{L^2(D)}+C_1C_2K_2\big)\|u\|_{L^2(D)}+C_2K_2\|y_d\|_{L^2(D)}\Big)\\ &+4\sqrt2\big(C_3^2\|u\|^4+C_2^2K_2^2+2C_2^2K_1^2\big),\\[4pt] \bar K_2={}&\Big(\big(C_1C_3\|u\|_{L^2(D)}+C_3\|y_d\|_{L^2(D)}+C_1C_2K_2\big)\|u\|_{L^2(D)}+C_2\|y_d\|_{L^2(D)}K_2\Big)\\ &+\gamma\big(\|u\|_{L^2(D)}+\|u_d\|_{L^2(D)}\big)K_2+4\sqrt2\big(C_3^2\|u\|^4_{L^2(D)}+C_2^2K_2^2\big).\end{aligned}$$

    For the sake of structure, we defer the proof to the Appendix.

    In this subsection, we introduce the ADMM to solve the optimization problem $(\mathrm{P}^M_{1h})$ and present the error estimate of this algorithm. For these purposes, we first rewrite the optimization problem $(\mathrm{P}^M_{1h})$ in matrix-vector form. Replacing $y_h$ and $u_h$ in (19) by $R\mathbf{y}(\xi)$ and $R\mathbf{u}$, which have been defined in (10) and (16), the optimization problem $(\mathrm{P}^M_{1h})$ can be rewritten as

    $$(\widetilde{\mathrm{P}}^M_{1h})\quad\begin{cases}\min\limits_{\mathbf{y}(\xi_j),\,\mathbf{u}\in\mathbb{R}^N} F_{M,h}(\mathbf{y}(\xi),\mathbf{u})\\[4pt] \text{subject to}\quad A_1(\xi_j)\mathbf{y}(\xi_j)=B_1\mathbf{u},\quad j=1,\dots,M,\end{cases}$$

    where $F_{M,h}(\mathbf{y}(\xi),\mathbf{u})=F_{M,h}(R\mathbf{y}(\xi),R\mathbf{u})=F_{M,h}(y_h,u_h)$, and

    $$A_1(\xi_j)=\big((\alpha(\xi_j,x)\nabla\phi_m,\nabla\phi_n)_{L^2(D)}\big)\in\mathbb{R}^{N\times N},\qquad B_1=\big((\phi_m,\phi_n)_{L^2(D)}\big)\in\mathbb{R}^{N\times N}.$$

    The augmented Lagrangian functional of the optimization problem $(\widetilde{\mathrm{P}}^M_{1h})$ is

    $$L^1_\beta(\mathbf{y}(\xi),\mathbf{u},\boldsymbol\lambda(\xi))=F_{M,h}(\mathbf{y}(\xi),\mathbf{u})-\mathbb{E}_M\big[(\boldsymbol\lambda(\xi),A_1(\xi)\mathbf{y}(\xi)-B_1\mathbf{u})\big]+\mathbb{E}_M\Big[\frac{\beta}{2}\|A_1(\xi)\mathbf{y}(\xi)-B_1\mathbf{u}\|^2\Big],$$

    where $\boldsymbol\lambda(\xi)\in\mathbb{R}^N$ is the vector of Lagrange multipliers and $\beta$ denotes the penalty parameter. Here and hereafter, the notation $(\cdot,\cdot)$ represents the inner product of vectors, and $\|\cdot\|$ stands for the $l^2$ norm of a vector. It is clear that the optimization problem $(\widetilde{\mathrm{P}}^M_{1h})$ is equivalent to the saddle-point problem (cf. [21])

    $$\min_{\mathbf{y}(\xi),\,\mathbf{u}}\ \max_{\boldsymbol\lambda(\xi)}\ L^1_\beta(\mathbf{y}(\xi),\mathbf{u},\boldsymbol\lambda(\xi)). \qquad (23)$$

    For the saddle-point problem (23), the ADMM is an efficient algorithm, and we will apply it here. Every iteration of the ADMM has the closed-form solution

    $$\begin{aligned}\mathbf{y}^{t+1}(\xi_j)&=\big(B_1+\beta A_1(\xi_j)^T A_1(\xi_j)\big)^{-1}\big(c+\beta A_1(\xi_j)^T B_1\mathbf{u}^t+A_1(\xi_j)^T\boldsymbol\lambda^t(\xi_j)\big),\\ \mathbf{u}^{t+1}&=\big(\gamma B_1+\beta B_1^T B_1\big)^{-1}\,\mathbb{E}_M\big[\gamma d+\beta B_1^T A_1(\xi)\mathbf{y}^{t+1}(\xi)-B_1^T\boldsymbol\lambda^t(\xi)\big],\\ \boldsymbol\lambda^{t+1}(\xi_j)&=\boldsymbol\lambda^t(\xi_j)-\beta\big(A_1(\xi_j)\mathbf{y}^{t+1}(\xi_j)-B_1\mathbf{u}^{t+1}\big), \qquad (24)\end{aligned}$$

    with $j=1,\dots,M$.
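A minimal numerical sketch of the iteration (24) is given below. The matrices are small random symmetric positive definite stand-ins for $A_1(\xi_j)$ and $B_1$ (assumed data, not from the paper), and $c=B_1\mathbf{y}_d$, $d=B_1\mathbf{u}_d$ are taken as the linear terms of the quadratic objective, which is one natural reading of (24); it is a sketch under these assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, beta, gamma = 8, 5, 1.0, 1.0

B1 = np.eye(N) + 0.1 * np.ones((N, N)) / N       # SPD mass-matrix stand-in
A1 = []                                          # SPD stiffness stand-ins A_1(xi_j)
for _ in range(M):
    S = rng.standard_normal((N, N))
    A1.append(np.eye(N) + 0.05 * (S + S.T) / 2)
yd, ud = rng.standard_normal(N), rng.standard_normal(N)
c, d = B1 @ yd, B1 @ ud                          # linear terms of F_{M,h}

u = np.zeros(N)
lam = [np.zeros(N) for _ in range(M)]
for t in range(200):
    # y-update: per-sample closed form (first line of (24))
    y = [np.linalg.solve(B1 + beta * A.T @ A,
                         c + beta * A.T @ B1 @ u + A.T @ l)
         for A, l in zip(A1, lam)]
    # u-update: closed form with the sample average E_M (second line)
    rhs = np.mean([gamma * d + beta * B1.T @ A @ yj - B1.T @ l
                   for A, yj, l in zip(A1, y, lam)], axis=0)
    u = np.linalg.solve(gamma * B1 + beta * B1.T @ B1, rhs)
    # multiplier update (third line)
    lam = [l - beta * (A @ yj - B1 @ u) for A, yj, l in zip(A1, y, lam)]

# The constraint residual A_1(xi_j) y(xi_j) - B_1 u shrinks as t grows.
residual = max(np.linalg.norm(A @ yj - B1 @ u) for A, yj in zip(A1, y))
```

Note how the y-update decouples across the samples, which is the parallelizability mentioned earlier, while the u-update couples them only through the sample average.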

    Now, we consider the convergence of the ADMM. It is easy to see that the stiffness matrices $A_1(\xi_j)$, $j=1,\dots,M$, and the mass matrix $B_1$ are all symmetric positive definite, and $F_{M,h}(\mathbf{y}(\xi),\mathbf{u})$ is a convex functional. For the deterministic case under the above conditions, the convergence result has been established in [6]. Applying Assumption 1, it is not hard to obtain the following convergence result for the random case (cf. [20,24]).


    Lemma 3.6. For the problem (23), after $t$ iterations, the iterate generated by the ADMM converges in objective value at the rate $O(\frac1t)$; that is,

    $$\mathbb{E}\big[|F_{M,h}(\mathbf{y}^t,\mathbf{u}^t)-F_{M,h}(\mathbf{y}_M,\mathbf{u}_M)|\big]\le \frac{\bar K_3}{t}, \qquad (25)$$

    where $(\mathbf{y}_M,\mathbf{u}_M)$ is the theoretical solution of (23), and

    $$\bar K_3=\frac12\,\mathbb{E}\bigg[\mathbb{E}_M\bigg[\bigg\|\begin{pmatrix}\mathbf{u}^0-\mathbf{u}_M\\ \boldsymbol\lambda^0-\boldsymbol\lambda\end{pmatrix}\bigg\|_{\widetilde H_\beta}+\frac1\beta\|\boldsymbol\lambda^0\|\bigg]\bigg],\qquad \widetilde H_\beta=\begin{pmatrix}\beta B_1^T B_1 & 0\\ 0 & \frac{1}{2\beta}I\end{pmatrix}.$$

    Further, applying Theorem 3.5, the identity

    $$F_{M,h}(y_{M,h},u_{M,h})-F_{M,h}(R\mathbf{y}^t,R\mathbf{u}^t)=F_{M,h}(\mathbf{y}_M,\mathbf{u}_M)-F_{M,h}(\mathbf{y}^t,\mathbf{u}^t),$$

    Lemma 3.6, and the triangle inequality, we can get the global error estimate.

    Theorem 3.7. A global error estimate for Algorithm 1 is as follows:

    $$\mathbb{E}\big[|F(y,u)-F_{M,h}(R\mathbf{y}^t,R\mathbf{u}^t)|\big]\le \frac{\bar K_1}{\sqrt M}+\bar K_2 h^2+\frac{\bar K_3}{t}. \qquad (26)$$

    From Algorithm 1, we can see that the stiffness matrix $A_1(\xi)$ is random, and needs to be computed and stored for every sample $\xi_j$, $j=1,\dots,M$. The cost to compute $A_1(\xi)$ and the mass matrix $B_1$ is $O((M+1)N^2)$. In addition, from the closed-form solutions (24), we also need to calculate and save the inverse matrix $(\gamma B_1+\beta B_1^TB_1)^{-1}$, and the inverse matrices $(B_1+\beta A_1(\xi_j)^TA_1(\xi_j))^{-1}$ for all samples. Thus, we must compute $M+1$ inverses of $N\times N$ matrices, which costs a large amount of memory and computation time.

    To reduce the computational cost of Algorithm 1, we shall introduce the multi-modes expansion (MME) and propose a new algorithm. The main idea is to approximate the state variable by the MME technique, thereby transforming the original constraint in Algorithm 1 so that the randomness is shifted to the right-hand-side mass matrix. The advantage is that the expectation of the right-hand-side random matrix can be calculated in advance, so the inverse of the matrix needs to be computed and stored only once, which reduces the computational cost significantly.

    The multi-modes expansion of the state variable $y$ can be expressed as (cf. [11])

    $$y=\sum_{q=0}^{\infty}\varepsilon^q y_q, \qquad (27)$$

    where $\varepsilon$ is the magnitude of the perturbation defined in (5), and $y_q\in L^2(\Theta,H_0^1(D))$. Substituting (27) into the state equation of the optimal control problem (OCP) and comparing the coefficients of the powers of $\varepsilon$, we get the following Poisson equations in recursive form:

    $$\begin{cases}-\Delta y_0=u, & \text{in } D,\\ -\Delta y_q=\nabla\cdot\big(\sigma(\xi,\cdot)\nabla y_{q-1}\big), & \text{in } D,\ q\ge 1,\\ y_q=0, & \text{on } \partial D,\ q\ge 0. \end{cases} \qquad (28)$$

    It is apparent that the relationship between $y_0$ and $u$ is deterministic, so we can solve the equations (28) recursively for $q\ge 1$. The well-posedness of the MME solutions has been established in Theorem 3.1 of the reference [11].
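The comparison of coefficients behind (28) can be made explicit. Substituting the expansion (27) into $-\nabla\cdot(\alpha\nabla y)=u$ with $\alpha=1+\varepsilon\sigma$ and collecting powers of $\varepsilon$ gives

    $$-\nabla\cdot\Big((1+\varepsilon\sigma)\sum_{q\ge 0}\varepsilon^{q}\nabla y_q\Big)=-\sum_{q\ge 0}\varepsilon^{q}\,\Delta y_q-\sum_{q\ge 1}\varepsilon^{q}\,\nabla\cdot\big(\sigma\nabla y_{q-1}\big)=u,$$

so the $O(1)$ terms give $-\Delta y_0=u$, while matching the $O(\varepsilon^q)$ terms for each $q\ge 1$ gives $-\Delta y_q=\nabla\cdot(\sigma\nabla y_{q-1})$.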

    Lemma 4.1. The equations (28) have a unique solution $y_q\in L^2(\Theta,H_0^1(D))$. Moreover, for any $q\ge 0$, if $y_q\in L^2(\Theta,H^2(D))$, then the following energy estimate holds:

    $$\mathbb{E}\big[\|y_q\|^2_{H^2(D)}\big]\le c(\alpha)^{q+1}\|u\|^2_{L^2(D)}, \qquad (29)$$

    where $c(\alpha)$ is given by (12) and is independent of $q$ and $\varepsilon$.

    For the purpose of numerical solution, we truncate the expansion (27) after the first $Q$ terms, and denote it by

    $$y^Q=\sum_{q=0}^{Q-1}\varepsilon^q y_q. \qquad (30)$$

    Lemma 4.2. ([11]) If the perturbation magnitude $\varepsilon<\min\{1,\, c(\alpha)^{-\frac12}\}$, then we have

    $$\lim_{Q\to\infty}\mathbb{E}\big\|y-y^Q\big\|_{L^2(D)}=0.$$

    Lemma 4.1 shows that the problem (28) is well-posed, and Lemma 4.2 shows that the expression (30) is an approximation of the state variable $y$. Based on these facts, there must exist a linear operator $T^Q: L^2(D)\to L^2(\Theta,H^1(D))$ such that

    $$y^Q=T^Q(u). \qquad (31)$$

    Feng et al. [11] showed that the truncation error of the MME after $Q$ terms is $O(\varepsilon^Q)$.
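A scalar analogue (an assumed illustration, not the paper's PDE) makes the $O(\varepsilon^Q)$ rate concrete: if the elliptic operator is replaced by multiplication, then $(1+\varepsilon\sigma)\,y=u$ has modes $y_0=u$, $y_q=-\sigma y_{q-1}$, and the expansion becomes a geometric series.

```python
# Scalar MME analogue: (1 + eps*sigma) * y = u, exact y = u / (1 + eps*sigma).
# Modes: y_0 = u, y_q = -sigma * y_{q-1}, so sum_q eps^q y_q is geometric.
u, sigma, eps = 1.0, 0.7, 0.1
y_exact = u / (1.0 + eps * sigma)

def mme_truncate(Q):
    y, mode = 0.0, u              # mode holds the current y_q
    for q in range(Q):
        y += eps**q * mode
        mode *= -sigma            # recursion y_q = -sigma * y_{q-1}
    return y

errors = [abs(mme_truncate(Q) - y_exact) for Q in (1, 2, 3)]
# Each additional mode shrinks the error by the factor eps*sigma = 0.07,
# consistent with the O(eps^Q) truncation estimate.
```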

    Lemma 4.3. Under the assumptions of Lemma 4.1 and Lemma 4.2, it holds that

    $$\Big(\mathbb{E}\big[\|(T-T^Q)u\|^j_{L^2(D)}\big]\Big)^{\frac1j}=\Big(\mathbb{E}\big[\|y-y^Q\|^j_{L^2(D)}\big]\Big)^{\frac1j}\le C_4\,\varepsilon^Q\|u\|_{L^2(D)}, \qquad (32)$$

    where $j=2$ or $4$, and $C_4$ is a positive constant depending on $\alpha$, $Q$ and $j$.

    In this subsection, we apply the FEM to discretize $y_q$ ($q\ge 0$) in physical space, and the MC method to approximate the expectation $\mathbb{E}$ in probability space. Then the discretized optimization problem, its first order optimal condition, and the convergence analysis for the proposed approach will be presented sequentially.

    For any fixed $\xi\in\Theta$, the FEM approximation of $y_q(\xi,x)$ in (28) can be written as

    $$y_{q,h}(\xi,x)=\sum_{i=1}^{N}y_{q,i}(\xi)\phi_i(x)=R\,\mathbf{y}_q(\xi),\quad q\ge 0, \qquad (33)$$

    where $\mathbf{y}_q(\xi)\in\mathbb{R}^N$. Naturally, the truncated MME state $y^Q(\xi,x)$ can be approximated by

    $$y_h^Q(\xi,x)=\sum_{q=0}^{Q-1}\varepsilon^q y_{q,h}(\xi,x)=R\sum_{q=0}^{Q-1}\varepsilon^q\mathbf{y}_q(\xi)=R\,\mathbf{y}^Q(\xi). \qquad (34)$$

    The variational formulation of the constraint equation in the truncated MME form is given by: For each $\xi\in\Theta$, find $y_h^Q(\xi,x)\in L^2(\Theta,V_h)$ such that

    $$\begin{aligned}(\nabla y_{0,h}(x),\nabla v_h(x))_{L^2(D)}&=(u(x),v_h(x))_{L^2(D)},\\ (\nabla y_{q,h}(\xi,x),\nabla v_h(x))_{L^2(D)}&=-(\sigma(\xi,x)\nabla y_{q-1,h}(\xi,x),\nabla v_h(x))_{L^2(D)}, \end{aligned} \qquad (35)$$

    for any $v_h(x)\in V_h$, where $q=1,\dots,Q-1$. Similar to (31), there exists a linear operator $T_h^Q: L^2(D)\to L^2(\Theta,V_h)\subset L^2(\Theta,H^1(D))$ such that

    $$y_h^Q=T_h^Q(u). \qquad (36)$$

    Lemma 4.4. (cf. [11]) Under the assumption of Lemma 4.2, for $j=2,4$ we have the following stability estimate

    $$\mathbb{E}\big[\|T_h^Q u\|^j_{L^2(D)}\big]=\mathbb{E}\big[\|y_h^Q(u)\|^j_{L^2(D)}\big]\le C_5\|u\|^j_{L^2(D)}, \qquad (37)$$

    and the error estimate

    $$\Big(\mathbb{E}\big[\|(T^Q-T_h^Q)u\|^j_{L^2(D)}\big]\Big)^{\frac1j}=\Big(\mathbb{E}\big[\|y^Q-y_h^Q\|^j_{L^2(D)}\big]\Big)^{\frac1j}\le C_6 h^2\|u\|_{L^2(D)}, \qquad (38)$$

    where $C_5$ and $C_6$ are constants depending on $\varepsilon$ and $Q$.

    Further, applying the triangle inequality and the estimates (32) and (38), we have the following error estimate.

    Lemma 4.5. Under the conditions of Lemma 4.1 and Lemma 4.4, we have the following estimate

    $$\Big(\mathbb{E}\big[\|(T-T_h^Q)u\|^j_{L^2(D)}\big]\Big)^{\frac1j}\le C\big(\varepsilon^Q+h^2\big)\|u\|_{L^2(D)}, \qquad (39)$$

    where $j=2$ or $4$.

    By arguments similar to those used to obtain the optimization problem $(\mathrm{P}^M_{1h})$, applying the expressions (16), (34) and (36), the original problem (P) can be approximated by

    $$(\mathrm{P}^Q_{2h})\quad\begin{cases}\min\limits_{y_h^Q(\xi)\in Y_h,\,u_h\in U_h} F_{M,h}(y_h^Q(\xi,x),u_h)\\[4pt] \text{subject to}\quad y_h^Q(\xi_j,x)=\big(T_h^Q(u_h)\big)(\xi_j,x),\quad j=1,\dots,M,\end{cases}$$

    where $F_{M,h}$ has been given in (19). Similar to $(\mathrm{P}^M_{1h})$, the optimization problem $(\mathrm{P}^Q_{2h})$ has a unique solution $u^Q_{M,h}\in V_h$, which satisfies the first order optimal condition

    $$\mathbb{E}\big[\mathbb{E}_M[(T_h^Q)^{*}(T_h^Q u^Q_{M,h}-y_d)]\big]+\gamma\big(u^Q_{M,h}-Q_h u_d\big)=0. \qquad (40)$$

    Using the first order optimal conditions (9) and (40), we have

    Lemma 4.6. (cf. [19]) Under the conditions of Lemmas 4.1-4.5, the following error estimate holds:

    $$\|u-u^Q_{M,h}\|_{L^2(D)}\le \frac{\tilde C_1}{\sqrt M}+\tilde C_2\,\varepsilon^Q+\tilde C_3 h^2, \qquad (41)$$

    where

    $$\tilde C_1=\frac{\sqrt{48}}{\gamma}C_5\big(C_5\|u\|_{L^2(D)}+\|y_d\|_{L^2(D)}\big),\qquad \tilde C_2=\frac{CC_4}{\gamma}\big(\|y_d\|_{L^2(D)}+(C_1+C_5)\|u\|_{L^2(D)}\big),$$
    $$\tilde C_3=C\|u_0\|_{L^2(D)}+\frac{1}{\gamma}CC_5\big(\|y_d\|_{L^2(D)}+(C_1+C_5)\|u\|_{L^2(D)}\big).$$

    Similar to Theorem 3.5, we can obtain the error estimate for the difference of the objective functionals, as follows.

    Theorem 4.7. Let $(y,u)$ and $(y^Q_{M,h},u^Q_{M,h})$ be the optimal solutions of the optimal control problem (P) and its approximation $(\mathrm{P}^Q_{2h})$, respectively. Then we have

    $$\mathbb{E}\big[|F(y,u)-F_{M,h}(y^Q_{M,h},u^Q_{M,h})|\big]\le \frac{\bar C_1}{\sqrt M}+\bar C_2\,\varepsilon^Q+\bar C_3 h^2, \qquad (42)$$

    where

    $$\begin{aligned}\bar C_1={}&\gamma\big(\|u\|_{L^2(D)}+\|u_d\|_{L^2(D)}\big)\tilde C_1+8\sqrt2\big(C_1\|u\|_{L^2(D)}+\|y_d\|_{L^2(D)}\big)^2\\ &+\big(C_1C_5\|u\|_{L^2(D)}+C_5\|y_d\|_{L^2(D)}\big)\tilde C_1+2\sqrt2\big(C_1\|u\|_{L^2(D)}+\|y_d\|_{L^2(D)}\big)C_2\tilde C_1,\\[4pt] \bar C_2={}&\gamma\big(\|u\|_{L^2(D)}+\|u_d\|_{L^2(D)}\big)\tilde C_2+\Big(\big(C_1C\|u\|_{L^2(D)}+C_3\|y_d\|_{L^2(D)}\big)\|u\|_{L^2(D)}\\ &+2\sqrt2\big(C_1\|u\|_{L^2(D)}+\|y_d\|_{L^2(D)}\big)C_2\tilde C_2+\big(C_1C_5\|u\|_{L^2(D)}+C_5\|y_d\|_{L^2(D)}\big)\tilde C_2\Big),\\[4pt] \bar C_3={}&\gamma\big(\|u\|_{L^2(D)}+\|u_d\|_{L^2(D)}\big)\tilde C_3+\Big(\big(C_1C\|u\|_{L^2(D)}+C_3\|y_d\|_{L^2(D)}\big)\|u\|_{L^2(D)}\\ &+\big(C_1C_5\|u\|_{L^2(D)}+C_5\|y_d\|_{L^2(D)}\big)\tilde C_3\Big)+4\sqrt2\,\tilde C_3^2.\end{aligned}$$

    The proof is similar to that of Theorem 3.5, and is omitted here.

    In this subsection, the ADMM for solving the optimization problem $(\mathrm{P}^Q_{2h})$ will be presented. In order to describe the algorithm clearly, we first simplify $(\mathrm{P}^Q_{2h})$.

    Substituting the control variable $u$ in the variational formulation (35) by $u_h$ defined in (16), and using the definition (33) of $y_{q,h}$, the constraint of the optimization problem $(\mathrm{P}^Q_{2h})$ can be rewritten as

    $$A_2\mathbf{y}_0=B_1\mathbf{u},\qquad A_2\mathbf{y}_q(\xi)=B_0(\xi)\mathbf{y}_{q-1}(\xi),\quad q\ge 1, \qquad (43)$$

    with

    $$A_2=\big((\nabla\phi_m,\nabla\phi_n)_{L^2(D)}\big),\qquad B_1=\big((\phi_m,\phi_n)_{L^2(D)}\big),\qquad B_0(\xi)=-\big((\sigma(\xi)\nabla\phi_m,\nabla\phi_n)_{L^2(D)}\big).$$

    Applying the definition of $y_h^Q$ in (34), we know that

    $$A_2\mathbf{y}^Q(\xi)=\sum_{q=0}^{Q-1}\varepsilon^q A_2\mathbf{y}_q(\xi).$$

    Further, combining the above equality with the equations (43), the relation between the state variable $\mathbf{y}^Q(\xi)$ and the control variable $\mathbf{u}$ is given by

    $$A_2\mathbf{y}^Q(\xi)=B_2(\xi)\mathbf{u},$$

    where

    $$B_2(\xi)=\begin{cases}B_1, & Q=1,\\ \big(I+\varepsilon B_0(\xi)A_2^{-1}\big)B_1, & Q=2,\\ \big(I+\varepsilon B_0(\xi)\big(I+\varepsilon A_2^{-1}B_0(\xi)\big)A_2^{-1}\big)B_1, & Q=3.\end{cases}$$
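The three cases above follow one recursion: $B_2^{(1)}=B_1$ and $B_2^{(Q)}=B_1+\varepsilon B_0(\xi)A_2^{-1}B_2^{(Q-1)}$. The sketch below is a hypothetical implementation assuming this recursion (the text only spells out the cases up to $Q=3$), checked against the closed form for $Q=3$ on small synthetic matrices.

```python
import numpy as np

def build_B2(A2, B0, B1, eps, Q):
    """B2^(1) = B1;  B2^(Q) = B1 + eps * B0 @ inv(A2) @ B2^(Q-1)."""
    A2_inv = np.linalg.inv(A2)
    B2 = B1.copy()
    for _ in range(Q - 1):
        B2 = B1 + eps * B0 @ A2_inv @ B2
    return B2

# Small synthetic matrices (illustrative stand-ins, not FEM matrices).
rng = np.random.default_rng(2)
N, eps = 4, 0.1
A2 = np.eye(N) + np.diag(0.1 * rng.random(N))   # invertible stiffness stand-in
B0 = rng.standard_normal((N, N))
B1 = np.eye(N)

Ai = np.linalg.inv(A2)
B2_closed = (np.eye(N) + eps * B0 @ (np.eye(N) + eps * Ai @ B0) @ Ai) @ B1
B2_recursive = build_B2(A2, B0, B1, eps, Q=3)   # agrees with the Q = 3 case
```

Expanding the recursion reproduces the listed forms term by term: one step gives $B_1+\varepsilon B_0A_2^{-1}B_1$, and a second step gives $B_1+\varepsilon B_0A_2^{-1}B_1+\varepsilon^2 B_0A_2^{-1}B_0A_2^{-1}B_1$.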

    Therefore, the optimization problem $(\mathrm{P}^Q_{2h})$ can be reformulated as:

    $$(\widetilde{\mathrm{P}}^Q_{2h})\quad\begin{cases}\min\limits_{\mathbf{y}^Q(\xi),\,\mathbf{u}\in\mathbb{R}^N} F_{M,h}(\mathbf{y}^Q(\xi),\mathbf{u})\\[4pt] \text{subject to}\quad A_2\mathbf{y}^Q(\xi_j)=B_2(\xi_j)\mathbf{u},\quad j=1,\dots,M,\end{cases}$$

    where the functional $F_{M,h}$ is as defined in Section 3.

    The optimization problem $(\widetilde{\mathrm{P}}^Q_{2h})$ is equivalent to the saddle point problem

    $$\min_{\mathbf{y}^Q(\xi),\,\mathbf{u}}\ \max_{\boldsymbol\lambda(\xi)}\ L^2_\beta(\mathbf{y}^Q(\xi),\mathbf{u},\boldsymbol\lambda(\xi)), \qquad (44)$$

    with the augmented Lagrangian functional

    $$L^2_\beta(\mathbf{y}^Q(\xi),\mathbf{u},\boldsymbol\lambda(\xi))=F_{M,h}(\mathbf{y}^Q(\xi),\mathbf{u})-\mathbb{E}_M\big[(\boldsymbol\lambda(\xi),A_2\mathbf{y}^Q(\xi)-B_2(\xi)\mathbf{u})\big]+\mathbb{E}_M\Big[\frac{\beta}{2}\|A_2\mathbf{y}^Q(\xi)-B_2(\xi)\mathbf{u}\|^2\Big],$$

    where $\boldsymbol\lambda(\xi)\in\mathbb{R}^N$ is the vector of Lagrange multipliers and $\beta$ denotes the penalty parameter.

    Now we apply the ADMM to solve the saddle point problem (44). Every iteration has the closed-form solution

    $$\begin{aligned}\mathbf{y}^{Q,t+1}(\xi_j)&=\big(B_1+\beta A_2^T A_2\big)^{-1}\big(c+\beta A_2^T B_2(\xi_j)\mathbf{u}^t+A_2^T\boldsymbol\lambda^t(\xi_j)\big),\\ \mathbf{u}^{t+1}&=\big(\gamma B_1+\beta\,\mathbb{E}_M[B_2(\xi)^TB_2(\xi)]\big)^{-1}\,\mathbb{E}_M\big[\gamma d+B_2(\xi)^T\big(\beta A_2\mathbf{y}^{Q,t+1}(\xi)-\boldsymbol\lambda^t(\xi)\big)\big],\\ \boldsymbol\lambda^{t+1}(\xi_j)&=\boldsymbol\lambda^t(\xi_j)-\beta\big(A_2\mathbf{y}^{Q,t+1}(\xi_j)-B_2(\xi_j)\mathbf{u}^{t+1}\big), \qquad (45)\end{aligned}$$

with $j=1,2,\dots,M$. Similar to Lemma 3.6, let $(y_{Q,M},u_{Q,M})$ be the exact solution of the problem $(\tilde P^Q_{2h})$, which satisfies $(y^Q_{M,h},u^Q_{M,h})=(Ry_{Q,M},Ru_{Q,M})$; then there exists a constant $\tilde K_3$ such that

$$
E\big[\big|F_{M,h}(y^Q_{M,h},u^Q_{M,h})-F_{M,h}(Ry_{Q,t},Ru_t)\big|\big]
=E\big[\big|F_{M,h}(y_{Q,M},u_{Q,M})-F_{M,h}(y_{Q,t},u_t)\big|\big]\le \tilde K_3\,\frac{1}{t}.
$$

Furthermore, applying the estimate of Theorem 4.7 and the triangle inequality, we obtain the following estimate:

Theorem 4.8. Let $(y_{Q,t},u_t)$ denote the solution obtained after applying the FEM, the MME, and the MC approximation, and let $(y,u)$ be the solution of (1). Then we have

$$
E\big[\big|F(y,u)-F_{M,h}(Ry_{Q,t},Ru_t)\big|\big]\le \bar C_1\frac{1}{\sqrt M}+\bar C_2\,\varepsilon^{Q}+\bar C_3\,h^2+\tilde K_3\,\frac{1}{t}, \tag{46}
$$

where $F(\cdot,\cdot)$ and $F_{M,h}(\cdot,\cdot)$ are the functionals defined in (1) and (19), respectively.

From the iterations (45), we can see that all the matrices are deterministic except $B_2(\xi)$. Fortunately, the matrix $E_M[B_2(\xi)^TB_2(\xi)]$ can be assembled in advance for the given samples. Therefore, only two matrix inverses need to be computed, which greatly reduces the cost of matrix inversion compared with Algorithm 1. However, in addition to the cost of computing and storing $A_2$, we still need to compute and store the matrices $B_2(\xi_j)$ for all samples. Thus, the storage requirement does not decrease; it is still $O((M+1)N^2)$.
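For concreteness, the iterations (45) can be sketched as follows, with random stand-in matrices and vectors in place of the FEM data ($c$, $d$, and the per-sample $B_2(\xi_j)$ here are hypothetical). Note that the two inverse factors are formed once, outside the loop, while all $M$ matrices $B_2(\xi_j)$ must be kept in memory:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M, beta, gamma = 5, 8, 1.0, 1.0
A2 = np.eye(n) + 0.05 * rng.standard_normal((n, n))
B1 = np.eye(n)                                    # stand-in mass matrix
B2 = rng.standard_normal((M, n, n))               # one B2(xi_j) per sample
c = rng.standard_normal(n)
d = rng.standard_normal(n)

yQ = np.zeros((M, n)); u = np.zeros(n); lam = np.zeros((M, n))

# only two inverse factors, formed once and reused over all iterations
EB2tB2 = np.mean(np.einsum('jki,jkl->jil', B2, B2), axis=0)  # E_M[B2^T B2]
S_y = np.linalg.inv(B1 + beta * A2.T @ A2)
S_u = np.linalg.inv(gamma * B1 + beta * EB2tB2)

for _ in range(50):
    for j in range(M):                            # state update, per sample
        yQ[j] = S_y @ (c + beta * A2.T @ B2[j] @ u + A2.T @ lam[j])
    rhs = np.mean([gamma * d + B2[j].T @ (beta * A2 @ yQ[j] - lam[j])
                   for j in range(M)], axis=0)
    u = S_u @ rhs                                 # control update
    for j in range(M):                            # multiplier update
        lam[j] -= beta * (A2 @ yQ[j] - B2[j] @ u)
```

The per-sample arrays `yQ`, `lam`, and above all `B2` are what drive the $O((M+1)N^2)$ storage of this variant.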

In this section, a new algorithm is proposed to overcome the disadvantages of Algorithm 2; it reduces the storage requirement significantly.

From the variational formulation (35), we know that $y_q$ depends on the preceding term $y_{q-1}$ for $q\ge1$. Therefore, we can establish a relationship between $y_0$ and $y_q$. Furthermore, applying the definition (30) of $y_Q$, there must exist a relation between $y_0$ and $y_Q$. Based on these facts, using equations (34) and (43), we obtain

$$
y_Q(\xi)=\sum_{q=0}^{Q-1}\varepsilon^{q}y_q(\xi)=P(\xi)\,y_0, \tag{47}
$$

where $P(\xi)$ is a random matrix defined as follows:

$$
P(\xi)=
\begin{cases}
I, & Q=1,\\
I+\varepsilon A_2^{-1}B_0(\xi), & Q=2,\\
I+\varepsilon A_2^{-1}B_0(\xi)+\varepsilon^{2}\big(A_2^{-1}B_0(\xi)\big)^{2}, & Q=3.
\end{cases}
$$
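Relation (47) can be sanity-checked numerically: generate $y_0,y_1,y_2$ from the recursion (43) and compare $\sum_q\varepsilon^q y_q$ with $P(\xi)y_0$. The matrices below are random stand-ins, not the FEM data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, eps, Q = 5, 1e-2, 3
A2 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B0 = rng.standard_normal((n, n))      # stand-in for B0(xi)
B1 = np.eye(n)
u = rng.standard_normal(n)
A2inv = np.linalg.inv(A2)

# mode recursion (43): A2 y0 = B1 u,  A2 y_q = B0 y_{q-1}
y = [A2inv @ B1 @ u]
for q in range(1, Q):
    y.append(A2inv @ B0 @ y[-1])
yQ = sum(eps**q * y[q] for q in range(Q))

# closed form (47) for Q = 3
K = A2inv @ B0
P = np.eye(n) + eps * K + eps**2 * K @ K
assert np.allclose(yQ, P @ y[0])
```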

The relationship (47) implies that $y_Q(\xi)$ is determined by $y_0$, so only the constraint on $y_0$ needs to be considered. The optimization problem $(\tilde P^Q_{2h})$ can thus be simplified to

$$
(\tilde P^Q_{3h})\qquad
\begin{cases}
\min\limits_{y_0,\,u\in\mathbb R^N} F_{M,h}(P(\xi)y_0,u)\\[2pt]
\text{subject to } A_3y_0-B_3u=0,
\end{cases}
$$

where the functional $F_{M,h}$ is defined in (23), and $A_3=A_2$ and $B_3=B_1$ are the two deterministic matrices defined in (43). This is a classical optimization problem, which is equivalent to

$$
\min_{y_0,\,u}\;\max_{\lambda}\;L^3_\beta(y_0,u,\lambda), \tag{48}
$$

    with the augmented Lagrangian function

$$
L^3_\beta(y_0,u,\lambda)=F_{M,h}(P(\xi)y_0,u)-(\lambda,\,A_3y_0-B_3u)+\frac{\beta}{2}\|A_3y_0-B_3u\|^2,
$$

where $\lambda\in\mathbb R^N$ is the Lagrange multiplier, independent of the samples, and $\beta$ is the penalty parameter.

    Now, we present the ADMM algorithm to solve the saddle point problem (48).

Algorithm 3 SMME-FEM-MC-ADMM.
Begin
  Let t = 0, and set the tolerance $\varepsilon_3^0$ and the initial values $u^0$, $\lambda^0$;
  Compute the vectors $c$ and $d$, and the matrices $A_3$ and $B_3$;
  for j = 1, 2, ..., M do
    Generate the matrices $B_0(\xi_j)$ and $P(\xi_j)$;
  end for
  while $\varepsilon_3^t > \varepsilon_3^0$ do
    $y_0^{t+1} = \arg\min_{y_0} L_\beta^3(y_0, u^t, \lambda^t)$;
    $u^{t+1} = \arg\min_{u} L_\beta^3(y_0^{t+1}, u, \lambda^t)$;
    $\lambda^{t+1} = \lambda^t - \beta(A_3 y_0^{t+1} - B_3 u^{t+1})$;
    Increment $t = t + 1$;
    Update $\varepsilon_3^t = \|u^{t+1} - u^t\| + \|\lambda^{t+1} - \lambda^t\|$;
  end while
End


    By straightforward calculations, we can get

$$
\begin{aligned}
y_0^{t+1}&=\big(E_M[P(\xi)^TB_3P(\xi)]+\beta A_3^TA_3\big)^{-1}\big(E_M[P(\xi)^Tc]+\beta A_3^TB_3u^t+A_3^T\lambda^t\big),\\
u^{t+1}&=\big(\gamma B_3+\beta B_3^TB_3\big)^{-1}\big(\gamma d+\beta B_3^TA_3y_0^{t+1}-B_3^T\lambda^t\big),\\
\lambda^{t+1}&=\lambda^t-\beta\big(A_3y_0^{t+1}-B_3u^{t+1}\big).
\end{aligned} \tag{49}
$$

It is not hard to see that Algorithm 3 has the same convergence property as Algorithm 2 (see Theorem 4.8). From the iterations (49), we can see that we do not need to store information for all the samples: the only quantities that must be kept are the averages $E_M[P(\xi)^TB_3P(\xi)]$ and $E_M[P(\xi)^Tc]$ together with $A_3$ and $B_3$, so the memory cost is $O(N^2)$. Moreover, only two matrix inverses need to be computed. These advantages stand in sharp contrast to Algorithms 1 and 2. Hence, Algorithm 3 is the most efficient one in both computation and storage.
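The storage saving can be made concrete with a small sketch of (49), again with hypothetical stand-in matrices: the sample loop runs once before the iteration, each $P(\xi_j)$ is discarded as soon as its contribution to the two averages is accumulated, and the ADMM sweep itself is fully deterministic:

```python
import numpy as np

rng = np.random.default_rng(3)
n, M, beta, gamma, eps = 5, 8, 1.0, 1.0, 1e-2
A3 = np.eye(n) + 0.05 * rng.standard_normal((n, n))
B3 = np.eye(n)                                # stand-in mass matrix
c = rng.standard_normal(n)
d = rng.standard_normal(n)

# sample loop runs ONCE: only the two averaged quantities are kept,
# so each P(xi_j) can be discarded immediately after use
A3inv = np.linalg.inv(A3)
PtBP = np.zeros((n, n))
Ptc = np.zeros(n)
for _ in range(M):
    B0 = rng.standard_normal((n, n))          # fresh sample of B0(xi_j)
    P = np.eye(n) + eps * A3inv @ B0          # Q = 2 case of P(xi)
    PtBP += P.T @ B3 @ P / M                  # accumulates E_M[P^T B3 P]
    Ptc += P.T @ c / M                        # accumulates E_M[P^T c]

S_y = np.linalg.inv(PtBP + beta * A3.T @ A3)  # the only two inverses needed
S_u = np.linalg.inv(gamma * B3 + beta * B3.T @ B3)

y0 = np.zeros(n); u = np.zeros(n); lam = np.zeros(n)
for _ in range(100):                          # deterministic ADMM sweep (49)
    y0 = S_y @ (Ptc + beta * A3.T @ B3 @ u + A3.T @ lam)
    u = S_u @ (gamma * d + beta * B3.T @ A3 @ y0 - B3.T @ lam)
    lam = lam - beta * (A3 @ y0 - B3 @ u)
```

Unlike the Algorithm 2 sketch, nothing sample-indexed survives the precomputation phase, which is exactly the $O(N^2)$ memory footprint claimed above.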

In this section, we report the results of numerical experiments carried out to test the performance of the proposed algorithms. First, the convergence order of the FEM is verified for all algorithms. Then, the ADMM convergence order, measured via the objective functional, is shown. Finally, to illustrate the advantage of Algorithm 3, we present the storage costs of the three algorithms. All simulations are conducted in Matlab on a high-performance computer with 16 cores and 256 GB of RAM.

We consider the optimal control problem on the spatial domain $D=[-1,1]^2$, and assume that the random variable $\sigma(x,\cdot)$ associated with the random coefficient $\alpha(\xi,x)$ obeys the uniform distribution on $[-1,1]$ for any fixed $x\in D$. The weight parameter $\gamma$ of the objective functional is set to 1, and the desired state and control are set to

$$
y_d=\sin(\pi x)\sin(\pi y),\qquad u_d=2\pi^2\sin(\pi x)\sin(\pi y).
$$

It is not difficult to verify that $y_d$ and $u_d$ are the optimal state and control of the original problem (P) in the expected sense. In addition, the parameters associated with the ADMM are set to $\beta=1$ and $\varepsilon^0_i=10^{-4}$, $i=1,2,3$, respectively.
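Indeed, this pair is consistent with the Poisson state equation in the expected sense, since $-\Delta\big(\sin(\pi x)\sin(\pi y)\big)=2\pi^2\sin(\pi x)\sin(\pi y)$. A quick finite-difference check at a hypothetical interior test point (not part of the experiments):

```python
import numpy as np

# five-point finite-difference check that -Laplace(y_d) = u_d at an
# interior point of D = [-1, 1]^2
h = 1e-3
x, y = 0.3, -0.4
yd = lambda x, y: np.sin(np.pi * x) * np.sin(np.pi * y)
lap = (yd(x + h, y) + yd(x - h, y) + yd(x, y + h) + yd(x, y - h)
       - 4 * yd(x, y)) / h**2
ud = 2 * np.pi**2 * yd(x, y)
assert abs(-lap - ud) < 1e-4   # matches up to the O(h^2) discretization error
```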

To verify the convergence order of the FEM, we first fix the other parameters. For simplicity, the sample number of the MC, the iteration steps of the ADMM, the magnitude of the perturbation defined in (5), and the truncation length of the MME are set to $M=1000$, $t=20$, $\varepsilon=10^{-4}$, and $Q=1$, respectively. Other cases can be discussed in a similar manner. We then run our algorithms on the nested meshes with $h=(1/2)^i$, $i=1,2,3,4,5$, and present the results in Figure 1. We emphasize that Algorithm 1 runs out of memory already at the spatial mesh size $h=1/32$; for readability, its line in Figure 1 is extended to the fifth point. From Figure 1, we can see that the FEM convergence orders with respect to the control variable $u$ and the state variable $y$ are all 2 for the three proposed algorithms, which coincides with the theoretical analysis in Sections 3-5. To illustrate the correctness of the algorithms intuitively, the theoretical optimal control and the numerical solutions of the proposed algorithms with $h=1/32$ are shown in Figure 2, where the numerical solution for Algorithm 1 is obtained with $h=1/16$ due to the memory limit.

    Figure 1.  The FEM convergence order on the state and the control for three algorithms.
    Figure 2.  The theoretical solution and numerical solutions on the control for three algorithms.
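The observed orders in Figure 1 are read off in the usual way: with nested meshes $h_i=(1/2)^i$, the rate between successive meshes is $\log_2(e_h/e_{h/2})$. A small sketch with synthetic second-order errors (the numbers are illustrative, not the measured ones):

```python
import numpy as np

h = np.array([1/2, 1/4, 1/8, 1/16, 1/32])   # the nested meshes used above
err = 3.0 * h**2                            # synthetic errors of exact order 2
rates = np.log2(err[:-1] / err[1:])         # observed order: log2(e_h / e_{h/2})
```

For the synthetic data every entry of `rates` equals 2; measured FEM errors give rates that only approach 2 as $h$ decreases.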

Now we test the performance of the ADMM. For this purpose, the other parameters in the proposed algorithms need to be fixed in advance. Without loss of generality, we set $M=1000$, $h=1/32$, and $Q=1$ for the latter two algorithms, and $h=1/16$ for Algorithm 1. The ADMM errors for the proposed algorithms, measured by the deviation between the functionals, are shown in Figure 3. From the results, we can see that the convergence rate is faster than $O(\frac{1}{t})$, which means that the numerical performance of the ADMM is better than the theoretical prediction.

    Figure 3.  The ADMM convergence rate for three algorithms.

Finally, we analyze the CPU memory costs of the proposed algorithms, which is one of the main concerns of this paper. For the fixed MC parameter $M=1000$, MME perturbation magnitude $\varepsilon=10^{-4}$, and iteration number $t=20$, the variation of the CPU memory cost with respect to the FEM parameter $h$ and the MME parameter $Q$ is given in Table 1. From Table 1, we can observe three facts: (1) for each of the three algorithms, the memory cost increases as the parameter $h$ decreases; (2) for Algorithms 2 and 3, the memory cost does not change noticeably with respect to the parameter $Q$; (3) for any fixed $h$ and $Q$, the memory cost of Algorithm 3 is significantly lower than that of the other two algorithms. These facts agree well with the theoretical analysis in Sections 3, 4, and 5.

Table 1.  The CPU memory costs for different h and Q (unit: GB).

    h     | Algorithm 1   | Algorithm 2                                  | Algorithm 3
          |               | Q=1           Q=2            Q=3             | Q=1          Q=2          Q=3
    1/4   | 1.271E-1      | 1.009E-1      1.195E-1       1.221E-1        | 1.967E-3     1.007E-3     2.006E-3
    1/8   | 1.655E0       | 1.277E0       1.676E0        1.685E0         | 8.136E-3     2.723E-3     5.859E-3
    1/16  | 1.807E1       | 1.792E1       2.481E1        2.487E1         | 4.856E-2     8.097E-2     7.375E-2
    1/32  | out of memory | 2.677E2       out of memory  out of memory   | 7.546E-1     1.161E0      1.626E0


To observe how the storage of the three proposed algorithms changes with respect to the sample number $M$, we fix the FEM parameter $h=1/16$, the stochastic perturbation parameter $\varepsilon=10^{-4}$, and the ADMM iteration number $t=20$. The CPU memory costs for the different cases are presented in Table 2. From Table 2, we can see that (1) the memory costs of Algorithms 1 and 2 increase linearly with the sample number $M$, which roughly matches $O((M+1)N^2)$; (2) the storage cost of Algorithm 3 remains essentially unchanged as the sample number increases, which is approximately $O(N^2)$. Based on the results in Tables 1 and 2, we conclude that Algorithm 3 is an efficient method for the optimal control problem with random coefficient.
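The leading-order storage counts can be made concrete: with double precision, keeping $A_2$ plus one dense $N\times N$ matrix per sample costs about $8(M+1)N^2$ bytes, versus roughly $8kN^2$ bytes ($k$ a small constant) for Algorithm 3. A back-of-the-envelope sketch, where $N$ is a hypothetical degree-of-freedom count rather than the one used in the experiments:

```python
# back-of-the-envelope storage model: dense N x N matrices of doubles (8 bytes)
N, M = 4000, 1000                   # hypothetical DoF and sample counts
gb = lambda nbytes: nbytes / 2**30  # bytes -> GiB

alg12 = 8 * (M + 1) * N**2          # Algorithms 1-2: A2 plus one matrix per sample
alg3 = 8 * 3 * N**2                 # Algorithm 3: the few averaged matrices only
print(f"Alg. 1/2: {gb(alg12):.1f} GB,  Alg. 3: {gb(alg3):.3f} GB")
```

The ratio is about $(M+1)/3$, i.e. a few hundredfold for $M=1000$, which is the order of the gap seen in the tables.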

Table 2.  The CPU memory costs for different M and Q (unit: GB).

    M     | Algorithm 1   | Algorithm 2                                  | Algorithm 3
          |               | Q=1           Q=2            Q=3             | Q=1          Q=2          Q=3
    1     | 6.143E-2      | 5.456E-2      8.195E-2       7.839E-2        | 4.514E-2     6.814E-2     5.881E-2
    10    | 2.498E-1      | 2.511E-1      2.964E-1       2.976E-1        | 4.568E-2     6.785E-2     5.901E-2
    10^2  | 1.927E0       | 1.929E0       2.511E0        2.513E0         | 4.603E-2     6.887E-2     6.074E-2
    10^3  | 1.807E1       | 1.792E1       2.481E1        2.487E1         | 4.856E-2     6.931E-2     7.375E-2


In this paper, three algorithms are proposed for the optimal control problem constrained by an elliptic equation with random coefficient. Algorithm 1 is based on the FEM discretization, the MC approximation, and the ADMM iteration, and possesses global convergence; however, both its computational and storage costs are very high. As an improvement, Algorithm 2 is proposed, which reduces the computational cost by using the MME technique, but its memory cost remains high. To overcome this disadvantage, Algorithm 3 is proposed, which reduces the memory cost significantly by transforming the random constraint into a deterministic one. Numerical simulations are carried out to verify the theoretical analysis and further illustrate the efficiency of Algorithm 3. Applications of the proposed algorithms to other random optimal control problems will be discussed in our future work.

The work of H. Song was supported by the NSF of China under the grant No. 11701210, the science and technology department project of Jilin Province under the grants No. 20190103029JH and 20200201269JC, the education department project of Jilin Province under the grant No. JJKH20180113KJ, and the fundamental research funds for the Central Universities. The work of X. Pang was supported by the NSF of China under the grants No. 11871245, 11771179, and 11726102. The work of J. Zhang was supported by the NSF of China under the grants No. 11971221 and 11731006, the Shenzhen Sci-Tech Fund under the grants No. JCYJ20170818153840322 and JCYJ20190809150413261, and the Guangdong Provincial Key Laboratory of Computational Science and Material Design under the grant No. 2019B030301001. The authors also wish to thank the High Performance Computing Center of Jilin University, the Computing Center of Jilin Province, and the Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education for essential computing support.

Proof. Using the definitions of $F(y,u)$ and $F_{M,h}(y_{M,h},u_{M,h})$, and subtracting (19) from (1), we can get

$$
\begin{aligned}
&E\big[\big|F(y,u)-F_{M,h}(y_{M,h},u_{M,h})\big|\big]\\
&\quad\le E\Big[\Big|E\Big[\frac12\int_D|y-y_d|^2\,dx\Big]-E_M\Big[\frac12\int_D|y_{M,h}-y_d|^2\,dx\Big]\Big|\Big]
+\Big|\frac{\gamma}{2}\int_D|u-u_d|^2\,dx-\frac{\gamma}{2}\int_D|u_{M,h}-u_d|^2\,dx\Big|\\
&\quad=\frac12\big(\mathrm{I}+\mathrm{II}\big). \tag{50}
\end{aligned}
$$

For the first term $\mathrm{I}$, we have

$$
\begin{aligned}
\mathrm{I}&=E\big[\big|E[\|y-y_d\|^2_{L^2(D)}]-E_M[\|y_{M,h}-y_d\|^2_{L^2(D)}]\big|\big]\\
&\le E\big[\big|(E-E_M)[\|Tu-y_d\|^2_{L^2(D)}]\big|\big]
+E\big[\big|E_M[\|Tu-y_d\|^2_{L^2(D)}-\|T_hu_{M,h}-y_d\|^2_{L^2(D)}]\big|\big]\\
&=\mathrm{I}_1+\mathrm{I}_2.
\end{aligned}
$$

By using the MC error estimate (18), the triangle inequality, the estimate (7) with $j=4$, and the inequality (52), we obtain

$$
\begin{aligned}
\mathrm{I}_1&=E\big[\big|(E-E_M)[\|Tu-y_d\|^2_{L^2(D)}]\big|\big]
\le\Big(E\big[\big((E-E_M)[\|Tu-y_d\|^2_{L^2(D)}]\big)^2\big]\Big)^{\frac12}\\
&\le\Big(E\big[(E-E_M)[\|Tu-y_d\|^4_{L^2(D)}]\big]\Big)^{\frac12}
\le\frac{1}{\sqrt M}\Big(\big(E[\|Tu-y_d\|^4_{L^2(D)}]\big)^{\frac14}\Big)^{2}\\
&\le\frac{1}{\sqrt M}\Big(\big(E[(\|Tu\|_{L^2(D)}+\|y_d\|_{L^2(D)})^4]\big)^{\frac14}\Big)^{2}
\le\frac{1}{\sqrt M}\Big(\big(E[8\|Tu\|^4_{L^2(D)}]\big)^{\frac14}+\big(E[8\|y_d\|^4_{L^2(D)}]\big)^{\frac14}\Big)^{2}\\
&\le\frac{\sqrt 8}{\sqrt M}\big(C_1\|u\|_{L^2(D)}+\|y_d\|_{L^2(D)}\big)^{2}. \tag{51}
\end{aligned}
$$

    Applying the Cauchy-Schwarz inequality, the triangle inequality, the MC error estimate (18), and the inequality

$$
\big(E[(m+n)^4]\big)^{\frac14}\le\big(E[8(m^4+n^4)]\big)^{\frac14}\le\big(E[8m^4]\big)^{\frac14}+\big(E[8n^4]\big)^{\frac14},\qquad \forall\, m,n\ge0, \tag{52}
$$

    we can get

$$
\begin{aligned}
\mathrm{I}_2&=E\big[\big|E_M[\|T_hu_{M,h}-Tu\|^2_{L^2(D)}+2(Tu-T_hu_{M,h},\,Tu-y_d)]\big|\big]\\
&\le 2E\big[E_M[\|Tu-T_hu_{M,h}\|_{L^2(D)}\|Tu-y_d\|_{L^2(D)}]\big]
+E\big[E_M[\|T_hu_{M,h}-Tu\|^2_{L^2(D)}]\big]\\
&=\mathrm{I}_{21}+\mathrm{I}_{22},
\end{aligned}
$$

    and

$$
\begin{aligned}
\mathrm{I}_{21}&\le 2E\big[(E_M-E)[\|Tu-T_hu_{M,h}\|_{L^2(D)}(\|Tu\|_{L^2(D)}+\|y_d\|_{L^2(D)})]\big]
+2E\big[\|Tu-T_hu_{M,h}\|_{L^2(D)}(\|Tu\|_{L^2(D)}+\|y_d\|_{L^2(D)})\big]\\
&\le\frac{2}{\sqrt M}\Big(E\big[(\|Tu\|_{L^2(D)}+\|y_d\|_{L^2(D)})^2\|Tu-T_hu_{M,h}\|^2_{L^2(D)}\big]\Big)^{\frac12}\\
&\quad+2\big(E[\|Tu-T_hu\|_{L^2(D)}\|Tu\|_{L^2(D)}]+E[\|Tu-T_hu\|_{L^2(D)}\|y_d\|_{L^2(D)}]\big)\\
&\quad+2\big(E[\|T_h(u-u_{M,h})\|_{L^2(D)}\|Tu\|_{L^2(D)}]+E[\|T_h(u-u_{M,h})\|_{L^2(D)}\|y_d\|_{L^2(D)}]\big)\\
&\le\frac{2}{\sqrt M}\big(E[(\|Tu\|_{L^2(D)}+\|y_d\|_{L^2(D)})^4]\big)^{\frac14}\big(E[\|Tu-T_hu_{M,h}\|^4_{L^2(D)}]\big)^{\frac14}\\
&\quad+2\big(E[\|Tu\|^2_{L^2(D)}]\big)^{\frac12}\big(E[\|(T-T_h)u\|^2_{L^2(D)}]\big)^{\frac12}
+2\big(E[\|y_d\|^2_{L^2(D)}]\big)^{\frac12}\big(E[\|(T-T_h)u\|^2_{L^2(D)}]\big)^{\frac12}\\
&\quad+2\big(E[\|Tu\|^2_{L^2(D)}]\big)^{\frac12}\big(E[\|T_h(u-u_{M,h})\|^2_{L^2(D)}]\big)^{\frac12}
+2\big(E[\|y_d\|^2_{L^2(D)}]\big)^{\frac12}\big(E[\|T_h(u-u_{M,h})\|^2_{L^2(D)}]\big)^{\frac12}.
\end{aligned}
$$

    Taking j=2,4 in (7), (14), and (15), it follows from (52) that

$$
\begin{aligned}
\mathrm{I}_{21}\le{}&\frac{2}{\sqrt M}\Big(\big(E[8\|Tu\|^4_{L^2(D)}]\big)^{\frac14}+\big(E[8\|y_d\|^4_{L^2(D)}]\big)^{\frac14}\Big)\Big(\big(E[8\|(T-T_h)u\|^4_{L^2(D)}]\big)^{\frac14}+\big(E[8\|T_h(u-u_{M,h})\|^4_{L^2(D)}]\big)^{\frac14}\Big)\\
&+2\big(C_1C_3h^2\|u\|^2_{L^2(D)}+C_3h^2\|y_d\|_{L^2(D)}\|u\|_{L^2(D)}\big)
+2\big(C_1C_2\|u\|_{L^2(D)}+C_2\|y_d\|_{L^2(D)}\big)\|u-u_{M,h}\|_{L^2(D)}\\
\le{}&\frac{2}{\sqrt M}\,\sqrt[4]{8}\big(C_1\|u\|_{L^2(D)}+\|y_d\|_{L^2(D)}\big)\,\sqrt[4]{8}\big(C_3h^2\|u\|_{L^2(D)}+C_2\|u-u_{M,h}\|_{L^2(D)}\big)\\
&+2\big(C_1C_3\|u\|^2_{L^2(D)}+C_3\|y_d\|_{L^2(D)}\|u\|_{L^2(D)}\big)h^2
+2\big(C_1C_2\|u\|_{L^2(D)}+C_2\|y_d\|_{L^2(D)}\big)\|u-u_{M,h}\|_{L^2(D)}. \tag{53}
\end{aligned}
$$
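The elementary inequality (52), used repeatedly above, follows from the pointwise bound $(m+n)^4\le8(m^4+n^4)$ and the subadditivity of $x\mapsto x^{1/4}$, so it holds for any nonnegative sample. A quick numerical spot check (illustrative random data only):

```python
import numpy as np

# spot-check (52) on a random nonnegative sample
rng = np.random.default_rng(4)
m = np.abs(rng.standard_normal(10_000))     # m, n >= 0 as (52) requires
n = np.abs(rng.standard_normal(10_000))
lhs = np.mean((m + n) ** 4) ** 0.25
mid = np.mean(8 * (m ** 4 + n ** 4)) ** 0.25
rhs = np.mean(8 * m ** 4) ** 0.25 + np.mean(8 * n ** 4) ** 0.25
assert lhs <= mid <= rhs
```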

    Similarly, we obtain

    (54)

    When and , we have

    (55)

Next, we deal with the second term $\mathrm{II}$. Applying the Cauchy-Schwarz inequality, the triangle inequality, and the estimate (21), we obtain

    Substituting , , and into (50), the conclusion is confirmed naturally.



  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)