
Accelerating the Bayesian inference of inverse problems by using data-driven compressive sensing method based on proper orthogonal decomposition

  • In Bayesian inverse problems, sampling the posterior distribution of the unknown parameters with the Markov chain Monte Carlo method is a formidable challenge because it requires a large number of forward model evaluations. To accelerate the inference of Bayesian inverse problems, in this work we present a proper orthogonal decomposition (POD) based data-driven compressive sensing (DCS) method and construct a low-dimensional approximation to the stochastic surrogate model on the prior support. Specifically, we first use POD to generate a reduced order model. Then we construct a compressed polynomial approximation by using a stochastic collocation method based on the generalized polynomial chaos expansion and solving an $l_1$-minimization problem. A rigorous error analysis and coefficient estimation are provided. Numerical experiments on a stochastic elliptic inverse problem verify the effectiveness of our POD-DCS method.

    Citation: Meixin Xiong, Liuhong Chen, Ju Ming, Jaemin Shin. Accelerating the Bayesian inference of inverse problems by using data-driven compressive sensing method based on proper orthogonal decomposition[J]. Electronic Research Archive, 2021, 29(5): 3383-3403. doi: 10.3934/era.2021044




    Inverse problems aim to explore the inherent nature of things based on observable phenomena. Many parameters of interest in science and engineering cannot be directly observed; we often study these quantities indirectly through known data, that is, by formulating such problems as inverse problems [14,7]. Inverse problems have thus grown with the rapid development of science and engineering, and have been widely applied in fields such as geological engineering [25,10], medicine [4], environment [31], telemetry [28], control [2] and so on.

    Because the parameters are sensitive to the observation data, which are usually noisy, inverse problems are ill-posed; this can be addressed with regularization [9,27]. However, regularization methods only provide point estimates of the unknown parameters without quantifying the uncertainty of the solution. Statistical inference methods [14,26] therefore entered the researchers' vision. In Bayesian inverse problems, the unknown parameters are treated as random vectors with a known prior distribution, and we need to infer their conditional distribution given the observation data, namely the posterior distribution. In general, it is difficult to obtain the analytical form of the posterior distribution, so the Markov chain Monte Carlo (MCMC) method [3] is often used to explore the posterior space. In these sampling algorithms, the forward model must be solved for each candidate sample. Since evaluating the forward model is expensive in many practical problems, direct sampling algorithms are prohibitive. To overcome this difficulty, many scholars have proposed surrogate models and efficient sampling algorithms to reduce the computational cost of Bayesian inverse problems. The former reduce the cost of a single forward model evaluation: for example, generalized polynomial chaos (gPC) expansions of the forward solution are proposed in [21,22], projection-based model reduction techniques appear in [19,15], and adaptive constructions of surrogate models are presented in [32,17]. The latter reduce the number of samples required, as studied in [23,20].

    Here, we focus on the construction of effective surrogate models. Since the unknown parameters are treated as a random vector $\xi(\omega)$, the forward problem can be regarded as a stochastic problem on the prior support $\Gamma$. Although the stochastic problem is more complicated than the original one, once its solution expression is obtained we can compute the approximate posterior probability of every candidate sample at negligible cost. On the physical domain $D$ and the image probability space of the unknown parameters, the solutions of stochastic problems can be represented with deterministic and stochastic basis functions [8]. For instance, with the stochastic finite element method, an approximation of the solution $u(x,\xi):D\times\Gamma\to\mathbb{R}$ has the form:

    $\check{u}(x,\xi)=\sum_{i=1}^{n}\sum_{j=1}^{n_u}\check{c}_{ij}\,\varphi_j(x)\,\psi_i(\xi)$,  (1)

    where the coefficient $\check{c}_{ij}$ is with respect to (w.r.t.) a finite element basis $\varphi_j$ and a multivariate polynomial chaos (PC) basis $\psi_i$.

    In the current work, we propose a data-driven compressive sensing method based on proper orthogonal decomposition to construct accurate approximate solutions of stochastic forward problems and thereby accelerate the calculation of Bayesian inverse problems. The POD-DCS method is derived from the data-driven compressive sensing method proposed in [18], but here we use proper orthogonal decomposition (POD) basis functions instead of a Karhunen-Loève basis, so as to generate a reduced order model (ROM) and avoid recovering the covariance. Following the idea of compressive sensing (CS), an accurate approximate solution of the ROM can be obtained by solving a basis pursuit (BP) problem [18,30] with a small amount of data. The advantage of our method is that it first constructs the reduced order model from snapshots, which reduces the degrees of freedom (DoF) of the forward problem from the cardinality of the finite element basis to that of the POD basis; we then only need to reconstruct the low-dimensional ROM at a lower cost. The numerical results show that, when using the same number of fully discrete finite element solutions, our scheme improves accuracy and sparsity compared to the conventional CS method based on a finite element basis. Moreover, the cost of evaluating a forward model with our method is only a fraction of that with the finite element method (FEM), so it can speed up the calculation of Bayesian inverse problems effectively. As is well known, many practical problems can be described by partial differential equations (PDE). Here, we concentrate only on an elliptic PDE, which is widely used in studies of oil reservoirs and groundwater [25], and whose ill-posedness has been discussed in [16]. Of course, our method extends naturally to other partial differential equations. All computations were performed using MATLAB R2014b on a personal computer with a 1.60 GHz CPU and 4 GB RAM.

    The rest of this paper is organized as follows. In Section 2, we describe the model problem used as the background of our study, then introduce the framework of Bayesian inference and the stochastic surrogate model. In Section 3, we discuss how to construct the POD-DCS approximate solution of a stochastic forward problem, and provide some direct simulation results and the corresponding algorithm. The error and sparsity analyses of our scheme are conducted in Section 4. In Section 5, we compare our POD-DCS scheme with other methods, and use it to solve an elliptic inverse problem. Finally, we give some conclusions and indicate possible future work in Section 6.

    Consider an underground steady-state aquifer modelled by the two-dimensional elliptic partial differential equation with Dirichlet boundary condition

    $\begin{cases}-\nabla\cdot\big(a(x)\nabla u(x)\big)=f, & x\in D,\\ u(x)=0, & x\in\partial D,\end{cases}$  (2)

    where the domain is $D=[0,1]^2$. Let $u(x)$ be the hydraulic head and $a(x)$ the permeability field, and apply a constant source term $f\equiv 2$ over the whole domain. The goal of this elliptic inverse problem is to estimate the unknown permeability field $a(x)$ from a set of observations $z=[u(x_1,\xi),\dots,u(x_{n_z},\xi)]^T\in\mathbb{R}^{n_z}$ of the solution $u$ at points $\{x_s\}_{s=1}^{n_z}$. In this work, we consider a permeability field defined by $n_p=4$ radial basis functions with centers $x_1,\dots,x_{n_p}$:

    $a(x)=0.01+\sum_{i=1}^{n_p}\xi_i\exp\left(-\dfrac{\|x-x_i\|_2^2}{0.02}\right)$,  (3)

    where $\|\cdot\|_2$ is the Euclidean norm. These radial basis functions are shown in Figure 1; our task is to determine the weights $\{\xi_i\}_{i=1}^{n_p}$.
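    For concreteness, the field (3) is simple to evaluate pointwise. The following Python sketch does so with vectorized NumPy; the four centers are an assumption made for illustration, since the paper displays the bumps in Figure 1 but does not list their coordinates.

        import numpy as np

        # Assumed centers of the n_p = 4 radial basis functions (illustrative only).
        CENTERS = np.array([[0.25, 0.25], [0.25, 0.75], [0.75, 0.25], [0.75, 0.75]])

        def permeability(x, xi, centers=CENTERS):
            """Evaluate Eq. (3): x has shape (..., 2) in D = [0,1]^2, xi has shape (4,)."""
            d2 = np.sum((x[..., None, :] - centers) ** 2, axis=-1)  # ||x - x_i||_2^2
            return 0.01 + np.sum(xi * np.exp(-d2 / 0.02), axis=-1)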

    Figure 1.  Radial basis functions used to define the permeability field.

    Let $\xi=[\xi_1,\dots,\xi_{n_p}]^T$ be the weight vector, and denote the permeability field and model output depending on $\xi$ by $a(x;\xi)$ and $u(x;\xi)$, respectively. Without loss of generality, we consider additive error, i.e.,

    $z_s=u(x_s;\xi)+e_s$,  $s=1,\dots,n_z$,  (4)

    where $\{x_s\}_{s=1}^{n_z}$ are the sensor locations, $\xi\in\mathbb{R}^{n_p}$ is the weight vector corresponding to the data $z=[z_1,\dots,z_{n_z}]^T\in\mathbb{R}^{n_z}$, and the additive error $e=[e_1,\dots,e_{n_z}]^T\in\mathbb{R}^{n_z}$ comes from measurement error, etc. Then the forward model can be written as

    $z=G(\xi)+e$,  (5)

    where $G:\mathbb{R}^{n_p}\to\mathbb{R}^{n_z}$ maps the input parameters $\xi$ to the noise-free data. Assume that the noise $e$ and the parameters $\xi$ are mutually independent. Suppose further that the components of $e$ are independent and identically distributed (i.i.d.) with $e_i\sim\mathcal{N}(0,\sigma_e^2)$ for $i=1,\dots,n_z$.

    In Bayesian inverse problems, the unknown parameter vector $\xi$ is regarded as a random vector in a properly defined probability space $(\Omega,\mathcal{F},P)$, whose components are i.i.d., and the prior probability density function of $\xi$ is known. In this work, we only consider the uniform prior on $[0,1]^{n_p}$.

    By Bayes' rule, we can infer the posterior probability density function $\pi(\xi|z)$ of the unknown parameters $\xi$ by combining the prior density $\pi(\xi)$, the likelihood function $\pi(z|\xi)$ and the observations $z$, i.e.,

    $\pi(\xi|z)\propto\pi(z|\xi)\,\pi(\xi)$.  (6)

    According to the forward model (5) and the noise-independence assumption, the likelihood function $\pi(z|\xi)$ has the form

    $\pi(z|\xi)=\prod_{i=1}^{n_z}\pi_{e_i}\big(z_i-G_i(\xi)\big)\propto\exp\left(-\dfrac{\|G(\xi)-z\|_2^2}{2\sigma_e^2}\right)$.  (7)
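    Once a forward map is available (the full model or, later, the POD-DCS surrogate), the Gaussian log-likelihood in (7) is cheap to evaluate. A minimal Python sketch, where `forward` is an assumed callable $\xi\mapsto G(\xi)$:

        import numpy as np

        def log_likelihood(xi, z, forward, sigma_e):
            """Log of Eq. (7) up to an additive constant."""
            r = forward(xi) - z                      # residual G(xi) - z
            return -0.5 * np.dot(r, r) / sigma_e**2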

    In many practical problems, the posterior distribution (6) is analytically intractable. Consequently, sampling algorithms such as MCMC are used to explore the posterior space. The framework of Bayesian inverse problems with a direct sampling algorithm is shown in Figure 2. The form of the likelihood function (7) implies that for each candidate sample $\xi$ we must evaluate the corresponding forward model $G(\xi)$, which is generally expensive; naive sampling methods are therefore prohibitive. A series of reduced order models and efficient sampling algorithms have been developed to deal with this problem: the former reduce the total computational cost by cheapening each forward model evaluation, and the latter by reducing the number of samples required. In the current work, we concentrate on constructing a surrogate model to reduce the cost of a single forward model evaluation.

    Figure 2.  Bayesian inverse problems framework with direct sampling algorithm.

    In the probability space $(\Omega,\mathcal{F},P)$, we denote the probability density function of the random variable $\xi_i(\omega):\Omega\to\mathbb{R}$ by $\pi_i(\xi_i)$, with range $\Gamma_i=\xi_i(\Omega)\subseteq\mathbb{R}$. Because the components of $\xi$ are assumed independent, the joint probability density function of the random vector $\xi$ has the form

    $\pi(\xi)=\prod_{i=1}^{n_p}\pi_i(\xi_i)$  (8)

    with support $\Gamma=\prod_{i=1}^{n_p}\Gamma_i=[0,1]^{n_p}$. Here, the joint probability density function $\pi(\xi)$ coincides with the prior density in Bayes' formula (6).

    For any $\omega\in\Omega$, the realization of the random vector $\xi$ belongs to $\Gamma$. Therefore, in the Bayesian framework, we can restate the boundary value problem (2) as the following stochastic forward problem on the image probability space $(\Gamma,\mathcal{B}(\Gamma),\pi(\xi)d\xi)$ rather than the abstract probability space $(\Omega,\mathcal{F},P)$:

    $\begin{cases}-\nabla\cdot\big(a(x,\xi)\nabla u(x,\xi)\big)=f, & (x,\xi)\in D\times\Gamma,\\ u(x,\xi)=0, & x\in\partial D.\end{cases}$  (9)

    Here, $\mathcal{B}(\Gamma)$ is the Borel $\sigma$-algebra on $\Gamma$, and $\pi(\xi)d\xi$ is the distribution measure of the random vector $\xi$. In fact, we can construct an explicit expression for the solution $u(x,\xi)$ of the stochastic problem (9). Then, for any $\omega\in\Omega$, substituting the realization $\xi(\omega)$ into $u(x,\xi)$ gives a value consistent with the solution $u(x;\xi)$ of the deterministic problem (2) with the same input $\xi$. Using this solution expression, we can repeatedly evaluate the forward model (2) at negligible cost. For this reason, the model (9) is called the stochastic surrogate model of problem (2).

    In this section, we propose a data-driven compressive sensing method based on proper orthogonal decomposition, which can be used to construct an efficient and sparse solution for a stochastic forward model.

    It is well known that the solution $u(x,\xi):D\times\Gamma\to\mathbb{R}$ of the stochastic forward model (9) can be represented by stochastic and deterministic basis functions. Here, we propose the POD-DCS method, which chooses a set of data-driven POD basis functions $\{\phi_j(x)\}_{j=1}^{m}$ for the domain $D$ and a set of gPC basis functions $\{\psi_i(\xi)\}_{i=1}^{n}$ for the prior support $\Gamma$, with the coefficients determined by solving a BP problem. Next, we discuss how to construct the POD-DCS approximate solution of a stochastic forward problem.

    Let $n_u$ denote the number of degrees of freedom of the finite element method, and let $S\in\mathbb{R}^{n_u\times K_s}$ be the snapshot matrix:

    $S=\big[u^h(x,\xi_1),\dots,u^h(x,\xi_{K_s})\big]:=\big[u^h_1(x),\dots,u^h_{K_s}(x)\big]$,  (10)

    where $u^h(x,\xi_i)\in\mathbb{R}^{n_u}$ is the finite element solution of problem (9) with input parameter $\xi_i\in\Gamma$. The random inputs $\{\xi_i\}_{i=1}^{K_s}$ are sampled randomly from $\Gamma$. By solving the eigenvalue problem

    $RV=V\Lambda$  (11)

    with

    $R_{ij}=\dfrac{1}{K_s}\big(u^h_i(x),u^h_j(x)\big)$,  $i,j=1,\dots,K_s$,  (12)

    we obtain the diagonal eigenvalue matrix $\Lambda$ with $\Lambda_{jj}=\lambda_j$, and the matrix of eigenvectors $V=[v_1,\dots,v_{K_s}]$ whose $j$-th column is

    $v_j=\big[v^{(j)}_1,\dots,v^{(j)}_{K_s}\big]^T$,  $j=1,\dots,K_s$.  (13)

    Since $R$ is symmetric positive semidefinite, the eigenvectors $\{v_j\}_{j=1}^{K_s}$ can be chosen orthogonal and the eigenvalues sorted in descending order. After normalization, the POD basis can be represented as

    $\phi_j(x)=\dfrac{1}{\sqrt{K_s\lambda_j}}\sum_{i=1}^{K_s}v^{(j)}_i u^h_i(x)$,  $j=1,\dots,m$.  (14)

    Here, the dimension $m$ of the POD basis is determined by the energy ratio $\nu_m$ defined as

    $\nu_m=\sum_{j=1}^{m}\lambda_j\Big/\sum_{j=1}^{K_s}\lambda_j$.  (15)
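    Computationally, the construction (10)-(15) reduces to an eigendecomposition of the small $K_s\times K_s$ correlation matrix. A minimal Python sketch, using plain Euclidean dot products as a stand-in for the $L^2(D)$ inner products in (12) (i.e., a lumped mass matrix):

        import numpy as np

        def pod_basis(S, nu_target):
            """S: (n_u, K_s) snapshot matrix; returns (Phi, eigenvalues, m)."""
            Ks = S.shape[1]
            R = (S.T @ S) / Ks                     # Eq. (12) with discrete inner products
            lam, V = np.linalg.eigh(R)             # Eq. (11); eigh sorts ascending
            lam, V = lam[::-1], V[:, ::-1]         # re-sort in descending order
            energy = np.cumsum(lam) / np.sum(lam)  # cumulative ratio nu_m of Eq. (15)
            m = int(np.searchsorted(energy, nu_target) + 1)  # smallest m with nu_m >= target
            Phi = S @ V[:, :m] / np.sqrt(Ks * lam[:m])       # Eq. (14)
            return Phi, lam, m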

    Once the basis functions are computed, the solution u(x,ξ) can be approximated as

    $\hat{u}(x,\xi)=\sum_{j=1}^{m}\alpha_j(\xi)\phi_j(x)$.  (16)

    The coefficients $\{\alpha_j\}_{j=1}^{m}$ are called reduced states and satisfy the following algebraic system

    $A\alpha=F$,  (17)

    where

    $\alpha=[\alpha_1(\xi),\dots,\alpha_m(\xi)]^T$,
    $F_i=\int_D f(x)\phi_i(x)\,dx$,  $i=1,\dots,m$,
    $A_{ij}=\int_D a(x,\xi)\nabla\phi_j(x)\cdot\nabla\phi_i(x)\,dx$,  $i,j=1,\dots,m$.
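    In discrete form, the reduced system (17) is the Galerkin projection of the full finite element system onto the POD basis. A minimal Python sketch, assuming the assembled FEM stiffness matrix $K(\xi)$ and load vector $F_h$ of problem (9) are available:

        import numpy as np

        def reduced_states(K_xi, F_h, Phi):
            """Project the full system onto the POD basis Phi and solve for alpha(xi)."""
            A = Phi.T @ (K_xi @ Phi)      # discrete analogue of A_ij in (17)
            F = Phi.T @ F_h               # discrete analogue of F_i
            return np.linalg.solve(A, F)  # reduced states alpha_1(xi), ..., alpha_m(xi)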

    From the definition of the POD basis functions we know that $\phi_i(x)$ and $\phi_j(x)$ are orthogonal for $i,j=1,\dots,m$, and that $\{\phi_j(x)\}_{j=1}^{m}$ minimizes the error measure

    $M=\dfrac{1}{K_s}\sum_{i=1}^{K_s}\Big\|u^h_i(x)-\sum_{j=1}^{m}\big(u^h_i(x),\phi_j(x)\big)\phi_j(x)\Big\|^2_{L^2(D)}$  (18)

    with minimum $\sum_{j=m+1}^{K_s}\lambda_j$ [12]. When $K_s$ is large enough, $M$ can be regarded as an approximation of the mean square error in the $L^2(D)$-norm

    $\bar{M}=E\Big[\big\|u^h(x,\xi)-\sum_{j=1}^{m}\big(u^h(x,\xi),\phi_j(x)\big)\phi_j(x)\big\|^2_{L^2(D)}\Big]$.  (19)

    For the 4-dimensional stochastic problem (9), we draw $K_s$ samples of the input parameters $\xi$ randomly from the prior support $\Gamma$, and evaluate the corresponding forward model (9) with the finite element method at mesh size $h=2^{-6}$ (i.e., $n_u=4225$). These solutions compose the snapshot matrix $S$. For $K_s=50,100,200$ and $300$, the first 15 eigenvalues of the correlation matrix $R$ are shown on the left of Figure 3. The eigenvalues differ very little for $K_s\ge100$, but there are large gaps between $K_s=50$ and $K_s=100$.

    Figure 3.  (Left): The first 15 eigenvalues of the correlation matrix $R$ for $K_s=50,100,200$ and $300$. (Right): The cumulative energy ratios of the first 15 POD basis functions with $K_s=100,200$ and $300$.

    In order to explore the prior space fully and ensure the accuracy of the POD-based reduced order model, we need to use $K_s\ge100$ realizations to construct the snapshot matrix. The right side of Figure 3 shows the cumulative energy ratios of the first 15 POD basis functions for $K_s=100,200$ and $300$. The energy ratios of the first two POD basis functions differ only slightly across the different $K_s$. In each case, the first 3 basis functions capture 99% of the energy of the snapshot set, and the cumulative energy ratio of the first 9 basis functions is very close to 1.

    We define the expectation and variance of the $L^2(D)$-norm error between the full finite element solution $u^h(x,\xi)$ and the POD-based approximate solution $\hat{u}(x,\xi)$ as

    $\hat{E}=E\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]$,  (20)

    and

    $\hat{V}=\mathrm{Var}\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]$,  (21)

    where $E[\cdot]$ denotes the expectation and $\mathrm{Var}[\cdot]$ the variance. Here, we approximate these statistics with $K_r=500$ samples.

    Figure 4 displays these error estimates of the POD-based approximate solution for different $K_s$ and different reduced dimensions $m$. For a given $m$, the expectations and variances in the three cases differ only slightly. For $m\le4$ the errors decrease very quickly, while they vary little for $m\ge9$. The results for $m=3,6,9$ and $12$ are shown in Table 1. The change in energy from 9 to 12 POD basis functions is almost negligible, and the corresponding improvements in the error statistics are also small. Moreover, for $m=9$ the differences in $\hat{E}$ among the three cases are of order $10^{-4}$ and those in $\hat{V}$ of order $10^{-5}$, while they are larger for $m=3$ and $m=6$. Therefore, in the following experiments we use $K_s=100$ realizations to generate the POD basis functions and choose $m=9$, which ensures accuracy without consuming too much computing resource.

    Figure 4.  The statistics estimation of L2(D)-norm error between the full finite element solution and the POD-based ROM solution with Ks=100,200 and 300. (Left): Expectation. (Right): Variance.
    Table 1.  Statistics of the $L^2(D)$-norm error between the finite element solution and the POD-based approximate solution for $K_s=100,200,300$ and $m=3,6,9,12$.

                                      m=3      m=6      m=9      m=12
    $K_s=100$  $\nu_m$                0.9906   0.9988   0.9998   0.9999
               $\hat{E}\times10^2$    5.2424   0.9679   0.2840   0.1267
               $\hat{V}\times10^3$    8.1367   0.8915   0.1475   0.0282
    $K_s=200$  $\nu_m$                0.9908   0.9988   0.9998   0.9999
               $\hat{E}\times10^2$    4.9558   1.0836   0.2422   0.0907
               $\hat{V}\times10^3$    6.7935   0.9507   0.1023   0.0126
    $K_s=300$  $\nu_m$                0.9900   0.9986   0.9997   0.9999
               $\hat{E}\times10^2$    4.6537   0.8260   0.2229   0.0622
               $\hat{V}\times10^3$    5.5292   0.4418   0.0898   0.0037


    In our POD-DCS algorithm, applying the POD method first greatly reduces the DoF of the problem at hand and simplifies the subsequent processing. Next, we use the idea of compressive sensing to express the reduced states $\{\alpha_j(\xi)\}_{j=1}^{m}$ with gPC basis functions, and determine the expansion coefficients by using a small amount of data and solving an $l_1$-minimization problem.

    Clearly, once the POD basis functions $\{\phi_j(x)\}_{j=1}^{m}$ are obtained, we only need to compute the reduced states $\alpha_j(\xi):\Gamma\to\mathbb{R}$ in the expression (16) for $j=1,\dots,m$. Here, we use an $N$th-order generalized polynomial chaos to represent the reduced states in the form

    $\alpha_j(\xi)\approx\sum_{i=1}^{n}c_{ij}\psi_i(\xi)$,  $j=1,\dots,m$,  (22)

    where $\{\psi_i(\xi)\}_{i=1}^{n}$ is a set of multivariate Legendre polynomials of order up to $N$, orthonormal w.r.t. the joint probability density function $\pi(\xi)$, with $\psi_1(\xi)=1$. The cardinality of the stochastic polynomial basis is

    $n=\dfrac{(N+n_p)!}{N!\,n_p!}$.  (23)

    The stochastic collocation method [29] based on polynomial interpolation can be used to determine the coefficients $\{c_{ij}\}_{i=1,j=1}^{n,m}$. With collocation points $\{\xi_i\}_{i=1}^{K_c}$, we have the linear system

    $\begin{pmatrix}\alpha_1(\xi_1)&\cdots&\alpha_m(\xi_1)\\ \vdots&&\vdots\\ \alpha_1(\xi_{K_c})&\cdots&\alpha_m(\xi_{K_c})\end{pmatrix}=\begin{pmatrix}\psi_1(\xi_1)&\cdots&\psi_n(\xi_1)\\ \vdots&&\vdots\\ \psi_1(\xi_{K_c})&\cdots&\psi_n(\xi_{K_c})\end{pmatrix}\begin{pmatrix}c_{11}&\cdots&c_{1m}\\ \vdots&&\vdots\\ c_{n1}&\cdots&c_{nm}\end{pmatrix}.$

    In compact matrix form,

    $\hat{A}=\Psi c$,  (24)

    where $\hat{A}=(\alpha_j(\xi_i))\in\mathbb{R}^{K_c\times m}$ is the matrix of reduced states generated by solving system (17), $c=(c_{ij})\in\mathbb{R}^{n\times m}$ is the coefficient matrix to be determined, $\Psi=(\psi_j(\xi_i))\in\mathbb{R}^{K_c\times n}$ is the stochastic information matrix, and $K_c$ is the number of stochastic collocation points. As shown in Equation (23), the cardinality $n$ of the polynomial basis grows very quickly with the random input dimension $n_p$, which causes the linear system (24) to become underdetermined, i.e., $K_c<n$.
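    For the uniform prior used here, $\Psi$ can be assembled from shifted, normalized Legendre polynomials: $\psi_k(\xi)=\sqrt{2k+1}\,P_k(2\xi-1)$ is orthonormal on $[0,1]$, and total-degree multi-indices realize the count (23). A Python sketch of this assembly:

        import itertools
        import numpy as np
        from numpy.polynomial.legendre import legval

        def multi_indices(np_dim, N):
            """All multi-indices of total degree <= N; count is (N+np)!/(N! np!)."""
            return [a for a in itertools.product(range(N + 1), repeat=np_dim)
                    if sum(a) <= N]

        def legendre_1d(k, xi):
            """Degree-k Legendre polynomial, orthonormal w.r.t. U(0,1)."""
            coeff = np.zeros(k + 1)
            coeff[k] = 1.0
            return np.sqrt(2 * k + 1) * legval(2 * xi - 1, coeff)

        def info_matrix(pts, N):
            """pts: (K_c, n_p) collocation points; returns Psi of shape (K_c, n)."""
            cols = [np.prod([legendre_1d(k, pts[:, d]) for d, k in enumerate(a)], axis=0)
                    for a in multi_indices(pts.shape[1], N)]
            return np.column_stack(cols)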

    Based on the idea of compressive sensing, given a highly incomplete set of reduced states obtained by solving the algebraic system (17), an accurate approximate solution of the linear system (24) can be obtained by solving the BP problem

    $\mathrm{vec}(\tilde{c})=\arg\min_{\mathrm{vec}(c)}\|\mathrm{vec}(c)\|_1$,  subject to  $\mathrm{vec}(\hat{A})=\Theta\,\mathrm{vec}(c)$,  (25)

    where the dictionary matrix $\Theta=I_m\otimes\Psi$ is the Kronecker product of the $m\times m$ identity matrix $I_m$ with the stochastic information matrix $\Psi$, $\mathrm{vec}(\hat{A})$ denotes the column-wise vectorization of the matrix $\hat{A}$, and $\tilde{c}=(\tilde{c}_{ij})$ is the computed coefficient matrix. Then the reduced states can be approximated by

    $\tilde{\alpha}_j(\xi)=\sum_{i=1}^{n}\tilde{c}_{ij}\psi_i(\xi)$,  $j=1,\dots,m$.  (26)
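    The BP problem (25) can be solved as a linear program via the standard split $\mathrm{vec}(c)=p-q$ with $p,q\ge0$; note also that, since $\Theta=I_m\otimes\Psi$ is block diagonal, (25) decouples into $m$ independent column-wise problems sharing the same $\Psi$. A Python sketch using SciPy's linear-programming routine (a dedicated $l_1$ solver would scale better for large dictionaries):

        import numpy as np
        from scipy.optimize import linprog

        def basis_pursuit(Theta, b):
            """Solve min ||c||_1 subject to Theta @ c = b."""
            n = Theta.shape[1]
            cost = np.ones(2 * n)              # minimize sum(p) + sum(q) = ||c||_1
            A_eq = np.hstack([Theta, -Theta])  # Theta @ (p - q) = b
            res = linprog(cost, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
            return res.x[:n] - res.x[n:]       # recover c = p - q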

    Combining equations (16) and (26), we obtain the POD-DCS approximate solution of the stochastic elliptic problem (9):

    $\tilde{u}(x,\xi)=\sum_{i=1}^{n}\sum_{j=1}^{m}\tilde{c}_{ij}\psi_i(\xi)\phi_j(x)$.  (27)

    Remark 1. By using the orthogonality of multivariate Legendre polynomials, we have

    $E[\tilde{u}(x,\xi)]=\sum_{j=1}^{m}\tilde{c}_{1j}\phi_j(x)$,  (28)
    $E[\tilde{u}^2(x,\xi)]=\sum_{i=1}^{n}\Big(\sum_{j=1}^{m}\tilde{c}_{ij}\phi_j(x)\Big)^2$.  (29)
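    These formulas are convenient in practice: with an orthonormal gPC basis, the mean uses only the first chaos row of $\tilde{c}$, and the second moment is a sum of squares over all rows. A Python sketch evaluating (28)-(29) on the finite element grid:

        import numpy as np

        def pod_dcs_moments(c_tilde, Phi):
            """c_tilde: (n, m) coefficients; Phi: (n_u, m) POD basis."""
            U = Phi @ c_tilde.T                   # column i holds sum_j c_ij phi_j
            mean = U[:, 0]                        # Eq. (28): psi_1 = 1 carries the mean
            second_moment = np.sum(U**2, axis=1)  # Eq. (29)
            return mean, second_moment - mean**2  # mean and variance fields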

    The details of our POD-DCS method for the stochastic forward problem are presented in Algorithm 1.

    Algorithm 1 The POD-DCS algorithm for the stochastic forward problem
    Input: grid parameter $h$, positive integer $K_s$, energy ratio $\nu$, collocation points $\{\xi_i\}_{i=1}^{K_c}$, polynomial order $N$.
    Output: POD basis functions $\{\phi_j(x)\}_{j=1}^{m}$ and the coefficient matrix $\tilde{c}$.
    1: Draw a set of random inputs $\{\xi_k\}_{k=1}^{K_s}$ from the prior support $\Gamma$ independently, compute the corresponding solutions $\{u^h(x,\xi_k)\}_{k=1}^{K_s}$ of problem (9) by FEM with mesh size $h$, and compose the snapshot matrix $S$.
    2: Generate the POD basis functions $\{\phi_j(x)\}_{j=1}^{m}$ as in (14) by solving the eigenvalue problem (11), where $m$ is the smallest positive integer such that $\nu_m\ge\nu$.
    3: Construct the reduced state matrix $\hat{A}$ of (24) by solving the algebraic system (17) at the collocation points $\{\xi_i\}_{i=1}^{K_c}$.
    4: Select appropriate stochastic basis functions $\{\psi_i(\xi)\}_{i=1}^{n}$ of order $N$.
    5: Obtain the coefficient matrix $\tilde{c}$ by solving the BP problem (25).
    6: Generate the approximate reduced states $\tilde{\alpha}_j(\xi)$ and the POD-DCS approximate solution $\tilde{u}(x,\xi)$ of system (9) as
    $\tilde{\alpha}_j(\xi)=\sum_{i=1}^{n}\tilde{c}_{ij}\psi_i(\xi)$,  $j=1,\dots,m$,  $\tilde{u}(x,\xi)=\sum_{i=1}^{n}\sum_{j=1}^{m}\tilde{c}_{ij}\psi_i(\xi)\phi_j(x)$.


    Table 1 shows that using 100 realizations to generate a POD-based approximate solution of dimension 9 guarantees the accuracy of the model. Thus, we use 100 snapshots to construct the ROM and retain only the first 9 basis functions. In this case, $\hat{E}=2.8400\times10^{-3}$ and $\hat{V}=1.4750\times10^{-4}$.

    Now we determine the coefficients $\{\tilde{c}_{ij}\}_{i=1,j=1}^{n,m}$ in the expression (26) by solving the $l_1$-minimization problem (25). Here, 7th-order Legendre PC basis functions $\psi(\xi)$ are used to represent the reduced states $\alpha_j(\xi)$ for $j=1,\dots,m$, so the total DoF of the matrix $\tilde{c}$ is $n\times m=2970$. Without loss of generality, we only retain coefficients whose absolute values exceed a fixed threshold $\tau$, i.e., entries of $\tilde{c}$ with absolute value smaller than $\tau$ are set to 0. Let $\tilde{R}_\tau$ be the proportion of non-zero coefficients in the matrix $\tilde{c}$, that is,

    $\tilde{R}_\tau=\dfrac{\#\{|\tilde{c}_{ij}|>\tau\}}{\#\{\tilde{c}_{ij}\}}$.  (30)
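    The thresholding step and the ratio (30) are one-liners in practice; a small Python sketch:

        import numpy as np

        def threshold_ratio(c_tilde, tau=0.01):
            """Zero out small entries and return (thresholded c, R_tau of Eq. (30))."""
            c = np.where(np.abs(c_tilde) > tau, c_tilde, 0.0)
            return c, np.count_nonzero(c) / c.size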

    Analogously to $\hat{E}$ and $\hat{V}$, we denote the expectation and variance of the $L^2(D)$-norm error between the full finite element solution $u^h(x,\xi)$ and the POD-DCS approximate solution $\tilde{u}(x,\xi)$ by $\tilde{E}$ and $\tilde{V}$, respectively, namely

    $\tilde{E}=E\big[\|u^h-\tilde{u}\|^2_{L^2(D)}\big]$,  (31)

    and

    $\tilde{V}=\mathrm{Var}\big[\|u^h-\tilde{u}\|^2_{L^2(D)}\big]$.  (32)

    The sparsity and error estimates of the POD-DCS method for different numbers of collocation points (sampled randomly from the prior space) and different thresholds are drawn in Figure 5. Compared with $\tau=0$, for each $K_c$ the ratio of non-zero coefficients is less than 5% with $\tau=0.1$, but the mean of the $L^2(D)$-norm error is much larger; the threshold is so large that some influential basis functions are ignored. For $\tau=0.01$, the highest proportion of non-zero coefficients in $\tilde{c}$ is about 33%, and the error statistics are almost identical to those of $\tau=0$. Therefore, we take the threshold $\tau=0.01$ to ensure accuracy. Through regression, the expectation of our approach converges with order 1.58 and the variance with order 2.97, that is, $\tilde{E}\sim O(K_c^{-3/2})$ and $\tilde{V}\sim O(K_c^{-3})$, as seen in Figure 6. Such a highly "incomplete" set of reduced states does contain enough information to accurately recover the reduced order model. When $K_c\ge200$, the expectation $\tilde{E}$ and variance $\tilde{V}$ of the $L^2(D)$-norm error change little as $K_c$ increases. So here we use 200 collocation points to determine the coefficients $(\tilde{c}_{ij})$ of expression (27), that is, we need to solve 200 algebraic systems (17). In this case, $\tilde{R}_{0.01}=20.24\%$, $\tilde{E}=4.1758\times10^{-3}$, and $\tilde{V}=2.4931\times10^{-4}$.

    Figure 5.  Sparsity and error estimates of the POD-DCS method for different $K_c$ and different thresholds $\tau$. (Left): The proportion $\tilde{R}_\tau$ of non-zero coefficients in the matrix $\tilde{c}$; (Middle): The expectation $\tilde{E}$ of the $L^2(D)$-norm error; (Right): The variance $\tilde{V}$ of the $L^2(D)$-norm error.

    The POD-DCS method was described in the previous section. Before presenting the error analysis and coefficient estimation of our scheme, we first recall several relevant properties of the compressive sensing method.

    Definition 4.1 (see [11]). A vector $b\in\mathbb{R}^n$ is called $k$-sparse if $\|b\|_0\le k$, and its best $k$-term approximation error in the $l_p$-norm is defined as

    $\sigma_{k,p}(b)=\inf_{\|x\|_0\le k}\|x-b\|_p$,  (33)

    where $\|b\|_0$ is the number of non-zero entries of the vector $b$.

    Note that for $0<q<p$ and $s=1/q-1/p>0$, the a priori estimate holds:

    $\sigma_{k,p}(b)\le k^{-s}\|b\|_q$.  (34)

    In order to ensure that the matrix $c$ can be reconstructed exactly by $l_1$ minimization, we introduce the restricted isometry property (RIP).

    Lemma 4.2 (see [5,24]). Suppose that for every $k$-sparse vector $b\in\mathbb{R}^n$ there exists a constant $\delta\in(0,1)$ such that the inequality

    $(1-\delta)\|b\|_2^2\le\|Ab\|_2^2\le(1+\delta)\|b\|_2^2$  (35)

    holds. Then $\delta_k(A):=\min(\delta)$ is called the restricted isometry constant (RIC) of the matrix $A\in\mathbb{R}^{m\times n}$, and $A$ is said to satisfy the RIP of order $k$ with RIC $\delta_k(A)$. Similarly, denoting the RIC of a matrix $B\in\mathbb{R}^{p\times q}$ by $\delta_k(B)$, we have the inequality

    $\max\{\delta_k(A),\delta_k(B)\}\le\delta_k(A\otimes B)\le\delta_k(A)+\delta_k(B)+\delta_k(A)\delta_k(B)$.  (36)

    By Lemma 4.2, the RIC of the dictionary matrix $\Theta$ in (25) satisfies $\delta_k(\Theta)=\delta_k(\Psi)$.

    Lemma 4.3 (see [11]). Assume that $\delta_{3k}(\Theta)<1/3$. Then the solution of the $l_1$-minimization problem (25) satisfies

    $\|\mathrm{vec}(c)-\mathrm{vec}(\tilde{c})\|_2\le C_\delta\,\dfrac{\sigma_{k,1}(\mathrm{vec}(c))}{\sqrt{k}}$,  (37)

    where the constant $C_\delta>0$ depends only on $\delta_{3k}(\Theta)$.

    By Lemma 4.3, we obtain the following error estimate for our POD-DCS method.

    Theorem 4.4. There exist constants $C_1,C_2,C_3,C_4,\theta>0$ and $0<q<1$ such that, if the dictionary matrix $\Theta$ in (25) satisfies $\delta_{3k}(\Theta)<1/3$, then with probability close to one, the expectation (31) of the $L^2(D)$-norm error between the finite element solution and the POD-DCS approximate solution satisfies

    $\tilde{E}\le C_1\sqrt{\dfrac{\hat{V}}{K_s}}+C_2\sum_{j=m+1}^{K_s}\lambda_j+C_3N^{-\theta}+C_4k^{1-2/q}\|\mathrm{vec}(c)\|_q^2$,  (38)

    where $C_3,C_4$ depend on the smoothness of the reduced states and on $\delta_k(\Theta)$, respectively, while the constants $C_1,C_2$ are universal.

    Proof. Using the inequality $2ab\le a^2+b^2$, the error (31) satisfies

    $\tilde{E}=E\big[\|u^h-\hat{u}+\hat{u}-\tilde{u}\|^2_{L^2(D)}\big]\le 2\Big\{E\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]+E\big[\|\hat{u}-\tilde{u}\|^2_{L^2(D)}\big]\Big\}.$

    Denote

    $I_1=E\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]$,  $I_2=E\big[\|\hat{u}-\tilde{u}\|^2_{L^2(D)}\big]$.

    We first estimate $I_1$. The standard Monte Carlo finite element method (MCFEM) approximates the expectation $I_1$ by a sample average. Note that the random inputs $\{\xi_j\}_{j=1}^{K_s}$ used to generate the snapshot set are sampled randomly from the prior support $\Gamma$. Thus, we can use these samples to approximate $I_1$ by

    $\bar{I}_1=\dfrac{1}{K_s}\sum_{j=1}^{K_s}\|u^h_j-\hat{u}_j\|^2_{L^2(D)},$

    where $u^h_j$ and $\hat{u}_j$ denote the finite element solution and the POD-based solution with random input $\xi_j$ for $j=1,\dots,K_s$, respectively. The number of realizations $K_s$ controls the statistical error $E_S=I_1-\bar{I}_1$ [1], and

    $E_S=E\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]-\dfrac{1}{K_s}\sum_{j=1}^{K_s}\|u^h_j-\hat{u}_j\|^2_{L^2(D)}=\sqrt{\dfrac{\mathrm{Var}\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]}{K_s}}\times\dfrac{K_sE\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]-\sum_{j=1}^{K_s}\|u^h_j-\hat{u}_j\|^2_{L^2(D)}}{\sqrt{K_s\,\mathrm{Var}\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]}}.$

    According to the central limit theorem we have

    $E_S\sim\mathcal{N}(0,\hat{V}/K_s),$

    then we can choose a constant $C_q\approx1.65$ such that

    $|E_S|\le C_q\sqrt{\hat{V}/K_s}$

    holds with probability close to one, where $C_q$ is the corresponding quantile. With the error measure (18), $\bar{I}_1$ satisfies

    $\bar{I}_1=\dfrac{1}{K_s}\sum_{j=1}^{K_s}\|u^h_j-\hat{u}_j\|^2_{L^2(D)}=\sum_{j=m+1}^{K_s}\lambda_j.$

    As in (22), we use multivariate polynomials of order $N$ to approximate the reduced states $\alpha_j(\xi)$ for $j=1,\dots,m$; then there exist constants $C_N,\theta>0$ such that

    $\int_\Gamma\Big(\alpha_j(\xi)-\sum_{i=1}^{n}c_{ij}\psi_i(\xi)\Big)^2\pi(\xi)\,d\xi\le C_N N^{-\theta}$,  $j=1,\dots,m$,

    holds, where the constant $C_N$ depends on the smoothness of $\{\alpha_j(\xi)\}_{j=1}^{m}$. Using the biorthogonality of the POD-DCS solution and Lemma 4.3, we have

    $I_2=\int_\Gamma\int_D\Big(\sum_{j=1}^{m}\big(\alpha_j(\xi)-\tilde{\alpha}_j(\xi)\big)\phi_j(x)\Big)^2\pi(\xi)\,dx\,d\xi=\sum_{j=1}^{m}\int_\Gamma\big(\alpha_j(\xi)-\tilde{\alpha}_j(\xi)\big)^2\pi(\xi)\,d\xi$
    $=\sum_{j=1}^{m}\int_\Gamma\Big(\alpha_j(\xi)-\sum_{i=1}^{n}c_{ij}\psi_i(\xi)+\sum_{i=1}^{n}(c_{ij}-\tilde{c}_{ij})\psi_i(\xi)\Big)^2\pi(\xi)\,d\xi$
    $\le 2\sum_{j=1}^{m}\int_\Gamma\Big(\alpha_j(\xi)-\sum_{i=1}^{n}c_{ij}\psi_i(\xi)\Big)^2\pi(\xi)\,d\xi+2\sum_{j=1}^{m}\sum_{i=1}^{n}(c_{ij}-\tilde{c}_{ij})^2$
    $\le 2mC_NN^{-\theta}+2C_\delta^2\,\dfrac{\sigma^2_{k,1}(\mathrm{vec}(c))}{k}.$

    By the a priori estimate (34), we arrive at the error estimate with $0<q<1$:

    $\tilde{E}\le 2\Big(C_q\sqrt{\dfrac{\hat{V}}{K_s}}+\sum_{j=m+1}^{K_s}\lambda_j+2mC_NN^{-\theta}+2C_\delta^2\dfrac{\sigma^2_{k,1}(\mathrm{vec}(c))}{k}\Big)\le C_1\sqrt{\dfrac{\hat{V}}{K_s}}+C_2\sum_{j=m+1}^{K_s}\lambda_j+C_3N^{-\theta}+C_4k^{1-2/q}\|\mathrm{vec}(c)\|_q^2,$

    which completes the proof.

    This theorem shows that the mean square error $\tilde{E}$ consists of four parts: statistical error, truncation error, polynomial approximation error and sparse reconstruction error. The first two terms come from the POD-based reduced order model, and the last two are caused by the sparse reconstruction of that reduced order model. Therefore, we can balance these four errors against the discretization error to save computing resources for a desired accuracy.

    With the error analysis complete, we now show the sparsity of the POD-DCS solution.

    Theorem 4.5. The coefficient matrix $\tilde{c}$ in the POD-DCS solution expression (27) satisfies the estimate

    $\sum_{i=1}^{n}\tilde{c}_{ij}^2\approx\lambda_j$,  $j=1,\dots,m$.  (39)

    Proof. From the eigenvalue problem (11) and the POD basis functions (14) we obtain

    $\lambda_j\phi_j(x)=\dfrac{1}{\sqrt{K_s\lambda_j}}\sum_{i=1}^{K_s}\lambda_jv^{(j)}_iu^h_i(x)=\dfrac{1}{\sqrt{K_s\lambda_j}}\sum_{i=1}^{K_s}\Big(\dfrac{1}{K_s}\sum_{r=1}^{K_s}\big(u^h_i(x),u^h_r(x)\big)v^{(j)}_r\Big)u^h_i(x)=\dfrac{1}{K_s}\sum_{i=1}^{K_s}\big(u^h_i(x),\phi_j(x)\big)u^h_i(x).$

    Thus, $\lambda_j\phi_j(x)$ can be regarded as an approximation of the expectation of $\big(u^h(x,\xi),\phi_j(x)\big)u^h(x,\xi)$. Using the POD-DCS solution to approximate the finite element solution, and combining the biorthogonality of the POD-DCS solution, $\lambda_j\phi_j(x)$ can be approximated by

    $E\big[\big(\tilde{u}(x,\xi),\phi_j(x)\big)\tilde{u}(x,\xi)\big]=\int_\Gamma\Big(\int_D\sum_{i=1}^{n}\sum_{k=1}^{m}\tilde{c}_{ik}\psi_i(\xi)\phi_k(x)\phi_j(x)\,dx\Big)\sum_{i=1}^{n}\sum_{k=1}^{m}\tilde{c}_{ik}\psi_i(\xi)\phi_k(x)\,\pi(\xi)\,d\xi$
    $=\int_\Gamma\Big(\sum_{i=1}^{n}\tilde{c}_{ij}\psi_i(\xi)\Big)\sum_{i=1}^{n}\sum_{k=1}^{m}\tilde{c}_{ik}\psi_i(\xi)\phi_k(x)\,\pi(\xi)\,d\xi=\sum_{k=1}^{m}\Big(\sum_{i=1}^{n}\tilde{c}_{ij}\tilde{c}_{ik}\Big)\phi_k(x),$

    which implies that (39) holds true.

    Figure 7 confirms the conclusion of Theorem 4.5 numerically. The discrepancy in this figure arises because we approximate the expectation by the average of only 100 samples, and because of the error of the POD-DCS solution itself. Since the eigenvalues decay rapidly in many practical problems, Theorem 4.5 implies that the coefficients are compressible, which justifies applying the POD-based ROM first.

    Figure 6.  Error estimates of the POD-DCS method for different collocation point sizes and $\tau=0.01$.
    Figure 7.  The eigenvalues and coefficients in the POD-DCS method with $K_s=100$, $m=9$ and $K_c=200$.

    In this section, we compare the POD-DCS scheme with the POD-based ROM and the conventional CS method, and use our method to solve the 4-dimensional elliptic inverse problem (2)-(3) to further demonstrate its feasibility and advantages.

    The error estimates and sparsity of our POD-DCS scheme and two other methods are shown in Table 2. All three methods are constructed from 100 fully discrete finite element solutions. The POD-DCS method is constructed as discussed above with $K_s=100$, $m=9$, $K_c=200$; the POD-based ROM has the same dimension $m$; and CS denotes the conventional compressive sensing method.

    Table 2.  Error estimates and sparsity for different methods.

                              POD-DCS   POD      CS
    $\tilde{E}\times10^3$     4.1758    2.8399   7.3896
    $\tilde{V}\times10^4$     2.4931    1.4750   11.1484
    $\tilde{R}_{0.01}$        0.2024    -        0.2182


    Clearly, the POD method has the highest accuracy; the reconstruction error of the reduced states makes our POD-DCS method slightly less accurate than POD, and the relatively small amount of information leads to the worst accuracy for the conventional CS method. In terms of sparsity, the proposed POD-DCS scheme is slightly better than the CS method. However, the total DoF of our method is $0.2024\times n\times m\approx601$, which is only about 0.2% of that of the CS method, whose DoF is $0.2182\times n\times n_u\approx3.04\times10^5$. This sharp decrease in DoF is due to the reduced order model, which lowers the DoF in physical space from $n_u$ to $m$. Therefore, compared to the CS method, the proposed method not only improves accuracy and sparsity but also greatly reduces the degrees of freedom.

    From the discussion in Section 3 and Theorem 4.4, the accuracy of the POD-DCS method can be improved by increasing the number of snapshots, the dimension of the POD basis, the number of collocation points and the polynomial order. These quantities can be chosen to meet the required accuracy.

    Here we compare the efficiency of the different methods. Table 3 summarizes the cost of each stage in the construction of each model. The number of full finite element solutions used is the same for all three methods. The POD-based ROM has high accuracy and does not need to solve a BP problem, but evaluating it once takes about 5 times as long as for the other two methods, which hampers the implementation of the sampling algorithm. Evaluating the solution expression obtained by the CS method takes only 0.2095 s, but the dictionary matrix $\Theta$ of the corresponding BP problem (25) has size $(100\times n_u)\times(n\times n_u)$, which is usually so large that it exceeds the available computing resources or makes the calculation very slow. Exploiting the nature of the finite element basis functions, we can decompose that BP problem by column of the coefficient matrix $c$, that is, transform it into $n_u$ $l_1$-minimization problems of size $100\times n$; this process takes about 1.4 hours. Since the sparse reconstruction in our POD-DCS scheme concerns only the reduced states, the dictionary matrix has size $(m\times K_c)\times(m\times n)$, and solving this BP problem takes only 21 s, about 0.42% of the time required by the CS method. The POD-DCS method also needs to evaluate the reduced order model at the $K_c$ collocation points, but this is usually much cheaper than the full finite element method.

    Table 3.  Computational times, in seconds, for the different methods.

               # full FE    Time per      # POD       Time per       Time     Time per
               solutions    FE solution   solutions   POD solution   for BP   model output
    POD-DCS    100          2.1970        200         1.0085         21       0.2002
    POD        100          2.1970        -           -              -        1.0085
    CS         100          2.1970        -           -              5045     0.2095


    Therefore, from an efficiency perspective, the offline cost of the POD-based ROM is relatively small, but its online cost is larger than that of the other two methods. Compared with the CS method, the online time of the POD-DCS method differs only slightly, but its offline time has an obvious advantage. Moreover, our method achieves an 11-fold acceleration when evaluating a forward model, so the time required to construct the solution expression of the stochastic surrogate model is offset by the repeated evaluations of the forward model. It is well known that both POD and stochastic collocation methods can deal with highly nonlinear problems, so for such complex problems our method is all the more attractive because of its efficiency.

    Having compared the accuracy and efficiency of the POD-DCS method with the other two methods, we now use it to solve the elliptic inverse problem (2)-(3). We use the finite element method with mesh size $h=2^{-8}$ to generate noise-free data associated with the input $\xi_o=[0.3,0.3,0.3,0.3]^T$. Note that this mesh is finer than the one used for inversion, in order to avoid an "inverse crime". The true permeability field $a(x;\xi_o)$ and the noise-free output are shown in Figure 8. The $n_z=49$ measurement sensors are uniformly distributed over $D$ with grid spacing $2^{-3}$.

    Figure 8.  (Left): The true permeability field $a(x)$ used for generating the synthetic data $z$; (Right): The model outputs associated with the true permeability field, where black dots are the measurement sensors.

    In this Bayesian inverse problem, the components of the weight vector $\xi$ are assumed i.i.d. with prior $\xi_i\sim U(0,1)$ for $i=1,\dots,n_p$, where $n_p=4$. As described in Section 3, we first construct the POD-DCS approximate solution $\tilde{u}(x,\xi)$ of the stochastic problem (9) on the prior support $\Gamma=[0,1]^{n_p}$ with $K_s=100$, $m=9$, $K_c=200$ and $\tau=0.01$. Substituting this approximate solution expression into (7) gives the approximate likelihood function, and Bayes' rule (6) then gives the approximate posterior probability density function. The framework for solving Bayesian inverse problems with the POD-DCS approximate solution of the stochastic surrogate model is shown in Figure 9. Denote the approximate likelihood function and the approximate posterior density associated with the POD-DCS solution by $\tilde{\pi}(z|\xi)$ and $\tilde{\pi}(\xi|z)$, respectively. Bounds on the Hellinger and Kullback-Leibler distances between the exact posterior $\pi(\xi|z)$ and its approximation $\tilde{\pi}(\xi|z)$ can be found in [26] and [30]. Here, we use the delayed-rejection adaptive Metropolis (DRAM) algorithm to explore the approximate posterior $\tilde{\pi}(\xi|z)$, whose computational cost is only a fraction of that of exploring the true posterior $\pi(\xi|z)$: as Table 3 shows, evaluating a finite element solution costs about 11 times as much as evaluating a POD-DCS solution. A minimal sampler sketch follows.
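    To illustrate how the surrogate enters the sampler, here is a minimal Python sketch with a plain random-walk Metropolis step in place of DRAM; `surrogate`, the step size and the chain length are illustrative assumptions.

        import numpy as np

        def metropolis(z, surrogate, sigma_e, n_steps=30000, step=0.02, seed=0):
            """Random-walk Metropolis for the approximate posterior (6)-(7)."""
            rng = np.random.default_rng(seed)

            def logpost(x):
                if np.any(x < 0.0) or np.any(x > 1.0):  # uniform prior on [0,1]^4
                    return -np.inf
                r = surrogate(x) - z                    # cheap POD-DCS forward evaluation
                return -0.5 * np.dot(r, r) / sigma_e**2

            xi = np.full(4, 0.5)                        # start at the prior mid-point
            lp = logpost(xi)
            chain = np.empty((n_steps, 4))
            for t in range(n_steps):
                prop = xi + step * rng.standard_normal(4)
                lp_prop = logpost(prop)
                if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
                    xi, lp = prop, lp_prop
                chain[t] = xi
            return chain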

    Figure 9.  Bayesian inverse problems framework with POD-DCS approximate solution.

    Here we consider observation data $z$ contaminated by noise $e$ following the zero-mean Gaussian distribution $\mathcal{N}(0,\sigma_e^2I_{n_z})$ with $\sigma_e=0.05$. For the finite element method, the conventional CS method and the POD-DCS method, we use the DRAM algorithm to draw $3\times10^4$ samples each; the first 5000 samples are discarded as burn-in and the remaining 25000 samples are used to estimate the posterior densities. The 95% confidence intervals and posterior marginal probability densities of the components of the unknown input parameters $\xi=[\xi_1,\xi_2,\xi_3,\xi_4]$ are shown in Table 4 and Figure 10. The 95% confidence intervals of all three methods contain the true input parameters $\xi_o$, and none of these intervals is longer than 0.03. Compared with the conventional CS method, the posterior marginal densities obtained by our scheme are closer to those obtained by FEM. In Figure 11, we display the estimated permeability field and its error, corresponding to the maximum a posteriori (MAP) parameters generated by the POD-DCS method. The error between the estimated and true permeability fields is small. Therefore, the proposed method can deal with Bayesian inverse problems efficiently.

    Table 4.  The 95% confidence intervals of each input parameter obtained by different methods.

                FEM                POD-DCS            CS
    $\xi_1$     [0.2898, 0.3173]   [0.2883, 0.3164]   [0.2963, 0.3248]
    $\xi_2$     [0.2858, 0.3134]   [0.2844, 0.3126]   [0.2913, 0.3175]
    $\xi_3$     [0.2913, 0.3191]   [0.2927, 0.3200]   [0.2906, 0.3208]
    $\xi_4$     [0.2750, 0.3016]   [0.2762, 0.3047]   [0.2736, 0.3022]

    Figure 10.  The posterior marginal densities of the unknown parameters with $\sigma_e=0.05$ noise in the observations.
    Figure 11.  The estimated permeability field and error corresponding to the MAP parameters generated by the POD-DCS method. (Left): Estimated permeability field; (Right): Error.

    From the previous discussion, it is clear that the POD-DCS surrogate model speeds up model evaluation while maintaining accuracy. This method can therefore be considered for Bayesian inverse problems, optimal control and other problems requiring repeated evaluation of a forward model. Note that the BP problem (25) is solvable in polynomial time [6], while the dictionary matrix $\Theta$ depends on the numbers of POD and stochastic basis functions, i.e., $m$ and $n$; hence, for a specific problem, the dimension of the POD-based ROM and the degree of the polynomial interpolation should be chosen appropriately. The method has limitations for problems whose eigenvalues decay slowly, which is an inherent shortcoming of the POD method.

    In summary, for statistical inverse problems we can regard the deterministic forward problem as a stochastic forward problem on the prior support of the unknown parameters, and the solutions of the two problems with the same input coincide. Therefore, in this work we propose a data-driven compressive sensing method based on proper orthogonal decomposition to construct the solution expression of the stochastic surrogate model and thereby accelerate the Bayesian inference of an inverse problem. The snapshot-based POD method is first used to construct the ROM of the stochastic problem; the stochastic collocation method based on gPC basis functions is then adopted to represent the reduced states, with the coefficients determined by solving an $l_1$-minimization problem. Substituting this approximate solution into the likelihood function yields an approximate likelihood and hence an approximate posterior, whose space can be explored much faster than the original one. We carry out an error analysis and coefficient estimation for this method, and prove that the mean square error of the approximate solution decreases as the number of snapshots, the POD cardinality, the polynomial order and the number of collocation points increase. In addition, the expansion coefficients are related to the eigenvalues. A series of numerical experiments shows that, using the same number of full finite element solutions, the POD-DCS method achieves better accuracy and sparsity than the conventional CS method. Although our method must solve the ROM at the collocation points, the sparse reconstruction of the low-dimensional reduced states is much cheaper than in the CS method. Moreover, in inferring the permeability field for the elliptic PDE, our method not only gives an accurate result but is also much faster than direct calculation.

    Accelerating the decay of the eigenvalues in the POD method is a problem worth considering. Moreover, for complex problems, evaluating the forward model is costly, and constructing an accurate reduced order model usually requires many realizations, which can be prohibitive. We therefore plan to use a multi-fidelity scheme, or to select snapshots judiciously, to overcome this difficulty. This is the subject of future work.



    [1] Solving elliptic boundary value problems with uncertain coefficients by the finite element method: The stochastic formulation. Comput. Methods Appl. Mech. Engrg. (2005) 194: 1251-1294.
    [2] Inverse problems of optimal control in creep theory. J. Appl. Ind. Math. (2012) 6: 421-430.
    [3] Markov Chain Monte Carlo method and its application. Journal of the Royal Statistical Society (1998) 47: 69-100.
    [4] One-dimensional inverse scattering problem for optical coherence tomography. Inverse Problems (2005) 21: 499-524.
    [5] The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris (2008) 346: 589-592.
    [6] S. S. Chen, D. L. Donoho and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM Rev., 43 (2001), 129-159. doi: 10.1137/S003614450037906X
    [7] D. Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory, Springer-Verlag, Berlin, 1998. doi: 10.1007/978-3-662-03537-5
    [8] A non-adapted sparse approximation of PDEs with stochastic inputs. J. Comput. Phys. (2011) 230: 3015-3034.
    [9] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problem, Kluwer Academic Publishers, 1996.
    [10] An inverse problem arising from the displacement of oil by water in porous media. Appl. Numer. Math. (2009) 59: 2452-2466.
    [11] M. Fornasier and H. Rauhut, Compressive sensing, in: O. Scherzer (Ed.), Handbook of mathematical methods in imaging, Springer New York, (2015), 205-256.
    [12] Optimal control of stochastic flow over a backward-facing step using reduced-order modeling. SIAM J. Sci. Comput. (2011) 33: 2641-2663.
    [13] Stochastic finite element methods for partial differential equations with random input data. Acta Numer. (2014) 23: 521-650.
    [14] J. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, Springer-Verlag, New York, 2005.
    [15] Statistical inverse problems: Discretization, model reduction and inverse crimes. J. Comput. Appl. Math. (2007) 198: 493-504.
    [16] Conditional well-posedness for an elliptic inverse problem. SIAM J. Appl. Math. (2011) 71: 952-971.
    [17] J. Li and Y. M. Marzouk, Adaptive construction of surrogates for the Bayesian solution of inverse problems, SIAM J. Sci. Comput., 36 (2014), A1163-A1186. doi: 10.1137/130938189
    [18] Data-driven compressive sensing and applications in uncertainty quantification. J. Comput. Phys. (2018) 374: 787-802.
    [19] Parameter and state model reduction for large- scale statistical inverse problems. SIAM J. Sci. Comput. (2010) 32: 2523-2542.
    [20] J. Martin, L. C. Wilcox, C. Burstedde and O. Ghattas, A stochastic newton MCMC method for large-scale statistical inverse problems with application to seismic inversion, SIAM J. Sci. Comput., 34 (2012), A1460-A1487. doi: 10.1137/110845598
    [21] Stochastic spectral methods for efficient Bayesian solution of inverse problems. J. Comput. Phys. (2007) 224: 560-586.
    [22] Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems. J. Comput. Phys. (2009) 228: 1862-1902.
    [23] N. Petra, J. Martin, G. Stadler and O. Ghattas, A computational framework for infinite-dimensional Bayesian inverse problems, Part II: Stochastic Newton MCMC with application to ice sheet flow inverse problems, SIAM J. Sci. Comput., 36 (2014), A1525-A1555. doi: 10.1137/130934805
    [24] Sparse Legendre expansions via l1-minimization. J. Approx. Theory (2012) 164: 517-533.
    [25] An inverse problem for the steady state diffusion equation. SIAM J. Appl. Math. (1981) 41: 210-221.
    [26] Inverse problems: A Bayesian perspective. Acta Numer. (2010) 19: 451-559.
    A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, Halsted Press, 1977.
    [28] Model reduction and pollution source identification from remote sensing data. Inverse Probl. Imaging (2009) 3: 711-730.
    D. Xiu, Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press, 2010.
    [30] L. Yan and L. Guo, Stochastic collocation algorithms using l1-minimization for Bayesian solution of inverse problems, SIAM J. Sci. Comput., 37 (2015), A1410-A1435. doi: 10.1137/140965144
    [31] Review of parameter identification procedures in groundwater hydrology: The inverse problem. Water Resources Research (1986) 22: 95-108.
    [32] A wavelet adaptive-homotopy method for inverse problem in the fluid-saturated porous media. Appl. Math. Comput. (2009) 208: 189-196.
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)