
In Bayesian inverse problems, sampling the posterior distribution of the unknown parameters with the Markov chain Monte Carlo method is a formidable challenge because the forward model must be evaluated a large number of times. To accelerate the Bayesian inference, in this work we present a proper orthogonal decomposition (POD) based data-driven compressive sensing (DCS) method and construct a low-dimensional approximation to the stochastic surrogate model on the prior support. Specifically, we first use POD to generate a reduced-order model. Then we construct a compressed polynomial approximation by using a stochastic collocation method based on the generalized polynomial chaos expansion and solving an $\ell_1$-minimization problem. Rigorous error analysis and coefficient estimation are provided. Numerical experiments on a stochastic elliptic inverse problem verify the effectiveness of our POD-DCS method.
Citation: Meixin Xiong, Liuhong Chen, Ju Ming, Jaemin Shin. Accelerating the Bayesian inference of inverse problems by using data-driven compressive sensing method based on proper orthogonal decomposition[J]. Electronic Research Archive, 2021, 29(5): 3383-3403. doi: 10.3934/era.2021044
Inverse problems concern inferring the inherent nature of things from observable phenomena. Many parameters of interest in science and engineering cannot be observed directly; instead, we study such quantities indirectly through known data, that is, we formulate these problems as inverse problems [14,7]. Inverse problems have grown together with the rapid development of science and engineering and are widely used in fields such as geological engineering [25,10], medicine [4], environmental science [31], telemetry [28], control [2], and so on.
Because the unknown parameters are sensitive to the observation data, which are usually contaminated by noise, inverse problems are ill-posed; this can be addressed with regularization [9,27]. However, regularization methods provide only point estimates of the unknown parameters and do not quantify the uncertainty of the solution. This motivates the statistical inference approach [14,26]. In Bayesian inverse problems, the unknown parameters are treated as random vectors with a known prior distribution, and we need to infer their conditional distribution given the observation data, namely the posterior distribution. In general, the analytical form of the posterior distribution is unavailable, so the Markov chain Monte Carlo (MCMC) method [3] is often used to explore the posterior space. In these sampling algorithms, the forward model must be solved for each candidate sample. Since evaluating forward models is expensive in many practical problems, direct sampling algorithms are prohibitive. To overcome this difficulty, many scholars have proposed surrogate models and efficient sampling algorithms to reduce the computational cost of Bayesian inverse problems. The former reduce the cost of a single forward model evaluation: for example, expressing the solutions of forward problems in generalized polynomial chaos (gPC) basis functions is proposed in [21,22], projection-based model reduction techniques appear in [19,15], and adaptive methods for constructing surrogate models are presented in [32,17]. The latter reduce the number of samples required, as studied in [23,20].
Here, we focus on the construction of effective surrogate models. Since the unknown parameters are treated as a random vector $\xi$, the forward model can be regarded as a stochastic model on the prior support, and its solution admits an expansion of the form
$$\check{u}(x,\xi)=\sum_{i=1}^{n}\sum_{j=1}^{n_u}\check{c}_{ij}\,\varphi_j(x)\,\psi_i(\xi),\tag{1}$$
where the coefficients $\check{c}_{ij}$ couple the spatial basis functions $\varphi_j(x)$, e.g., a finite element basis with $n_u$ degrees of freedom, and the stochastic basis functions $\psi_i(\xi)$.
In the current work, we propose a data-driven compressive sensing method based on proper orthogonal decomposition for constructing accurate approximate solutions of stochastic forward problems, in order to accelerate the computation of Bayesian inverse problems. The POD-DCS method is derived from the data-driven compressive sensing method proposed in [18], but here we use proper orthogonal decomposition (POD) basis functions instead of the Karhunen-Loève basis, so as to generate a reduced-order model (ROM) and avoid recovering the covariance. Following the idea of compressive sensing (CS), an accurate approximate solution of the ROM can be obtained by solving a basis pursuit (BP) problem [18,30] with a small amount of data. The advantage of our method is that it first constructs the reduced-order model from snapshots, which reduces the degrees of freedom (DoF) of the forward problem from the cardinality of the finite element basis to that of the POD basis, so that only the low-dimensional ROM needs to be reconstructed, at a much lower cost. The numerical results show that, when using the same number of fully discrete finite element solutions, our scheme improves both accuracy and sparsity compared to the conventional CS method based on the finite element basis. Moreover, the cost of evaluating a forward model with our method is only a fraction of that with the finite element method (FEM), so it effectively speeds up the computation of Bayesian inverse problems. Many practical problems can be described by partial differential equations (PDEs). Here we concentrate on an elliptic PDE, which is widely used in studies of oil reservoirs and groundwater [25], and whose ill-posedness has been discussed in [16]. Of course, our method extends naturally to other partial differential equations. All computations were performed in MATLAB R2014b on a personal computer with a 1.60 GHz CPU and 4 GB RAM.
The rest of this paper is organized as follows. In Section 2, we describe the model problem used as the background of our study, then introduce the framework of Bayesian inference and the stochastic surrogate model. In Section 3, we discuss how to construct the POD-DCS approximate solution of a stochastic forward problem, and provide some direct simulation results and the corresponding algorithm. The error and sparsity analyses of our scheme are conducted in Section 4. In Section 5, we compare our POD-DCS scheme with other methods and use it to solve an elliptic inverse problem. Finally, we give some conclusions and indicate possible future work in Section 6.
Consider an underground steady-state aquifer modelled by the two-dimensional elliptic partial differential equation with a homogeneous Dirichlet boundary condition,
$$\begin{cases}-\nabla\cdot\big(a(x)\nabla u(x)\big)=f, & x\in D,\\[2pt] u(x)=0, & x\in\partial D,\end{cases}\tag{2}$$
where $D$ denotes the spatial domain and $f$ the source term. The diffusion coefficient $a(x)$ is parameterized as
$$a(x)=0.01+\sum_{i=1}^{n_p}\xi_i\exp\!\Big(-\frac{\|x-x_i\|_2^2}{0.02}\Big),\tag{3}$$
where the $x_i$ are fixed centers and $\xi=(\xi_1,\dots,\xi_{n_p})$ is the vector of unknown weights.
Let $z=(z_1,\dots,z_{n_z})$ denote noisy observations of the solution $u$ at the measurement locations $x_1,\dots,x_{n_z}$, that is,
$$z_s=u(x_s;\xi)+e_s,\qquad s=1,\dots,n_z,\tag{4}$$
where $e_s$ denotes the measurement noise. In compact form,
$$z=G(\xi)+e,\tag{5}$$
where $G(\xi)=\big(u(x_1;\xi),\dots,u(x_{n_z};\xi)\big)^{T}$ is the forward operator and $e$ is the noise vector.
In Bayesian inverse problems, the unknown parameters $\xi$ are treated as a random vector with a known prior density $\pi(\xi)$.
By Bayes' rule, the posterior probability density function satisfies
$$\pi(\xi|z)\propto\pi(z|\xi)\,\pi(\xi).\tag{6}$$
According to the forward model (5) and the assumption of independent Gaussian noise with variance $\sigma_e^2$, the likelihood function takes the form
$$\pi(z|\xi)=\prod_{i=1}^{n_z}\pi_{e_i}\big(z_i-G_i(\xi)\big)\propto\exp\!\Big(-\frac{\|G(\xi)-z\|_2^2}{2\sigma_e^2}\Big).\tag{7}$$
In many practical problems, the posterior distribution (6) is analytically intractable. Consequently, sampling algorithms such as MCMC are used to explore the posterior space. The framework of Bayesian inverse problems with a direct sampling algorithm is shown in Figure 2. The form of the likelihood function (7) implies that for each candidate sample $\xi$, the forward model $G(\xi)$ must be evaluated, which makes direct sampling prohibitively expensive when each forward solve is costly.
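To make the bottleneck concrete, the following MATLAB sketch implements a random-walk Metropolis sampler for the posterior (6) with the Gaussian likelihood (7). It is a minimal illustration, not the authors' exact sampler: `forward_model` is a placeholder handle for $G(\xi)$ (the FEM solver, or later the POD-DCS surrogate), and the prior bounds `lo`, `hi`, the step size, and the noise level `sigma_e` are assumed inputs.

```matlab
% Random-walk Metropolis for the posterior (6)-(7); minimal sketch.
% forward_model(xi) stands for G(xi); each accept/reject test costs one
% forward evaluation, which is where a cheap surrogate pays off.
function samples = rw_metropolis(forward_model, z, sigma_e, lo, hi, np, nsamp)
    beta = 0.05*(hi - lo);                 % proposal step size (tuning parameter)
    xi   = lo + (hi - lo).*rand(np, 1);    % initial state drawn from the prior
    loglik = @(x) -norm(forward_model(x) - z)^2/(2*sigma_e^2);  % log of (7)
    ll = loglik(xi);                       % one forward evaluation
    samples = zeros(np, nsamp);
    for s = 1:nsamp
        cand = xi + beta.*randn(np, 1);    % symmetric random-walk proposal
        if all(cand >= lo) && all(cand <= hi)    % stay in the prior support
            ll_cand = loglik(cand);        % one forward evaluation per candidate
            if log(rand) < ll_cand - ll    % Metropolis accept/reject
                xi = cand; ll = ll_cand;
            end
        end
        samples(:, s) = xi;                % rejected proposals repeat the state
    end
end
```

With a uniform prior, the prior ratio cancels, so only the likelihood enters the acceptance test; swapping `forward_model` from the FEM solver to a surrogate leaves the sampler unchanged.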
We assume that the prior density of $\xi$ has the product form
$$\pi(\xi)=\prod_{i=1}^{n_p}\pi_i(\xi_i)\tag{8}$$
with support $\Gamma=\Gamma_1\times\cdots\times\Gamma_{n_p}$.
For any realization $\xi\in\Gamma$, the deterministic problem (2) can be embedded in the stochastic forward problem
$$\begin{cases}-\nabla\cdot\big(a(x,\xi)\nabla u(x,\xi)\big)=f, & (x,\xi)\in D\times\Gamma,\\[2pt] u(x,\xi)=0, & x\in\partial D.\end{cases}\tag{9}$$
Here, the solution $u(x,\xi)$ of (9) at a fixed $\xi$ coincides with the solution of the deterministic problem (2) with the same parameter value, so a surrogate for (9) can replace the forward model in the Bayesian inference.
In this section, we propose a data-driven compressive sensing method based on proper orthogonal decomposition, which can be used to construct an efficient and sparse approximate solution of the stochastic forward model.
It is well known that the solution of (9) for each fixed $\xi$ can be computed by the finite element method; we denote the finite element solution by $u^h(x,\xi)$ and the corresponding number of degrees of freedom by $n_u$. To construct the POD basis, we draw $K_s$ samples $\{\xi_i\}_{i=1}^{K_s}$ from the prior and collect the snapshot matrix
$$S=\big[u^h(x,\xi_1),\cdots,u^h(x,\xi_{K_s})\big]:=\big[u^h_1(x),\cdots,u^h_{K_s}(x)\big],\tag{10}$$
where $u^h_i(x):=u^h(x,\xi_i)$.
Consider the eigenvalue problem
$$RV=V\Lambda\tag{11}$$
with the correlation matrix $R\in\mathbb{R}^{K_s\times K_s}$ defined by
$$R_{ij}=\frac{1}{K_s}\big(u^h_i(x),u^h_j(x)\big),\qquad i,j=1,\cdots,K_s,\tag{12}$$
where $(\cdot,\cdot)$ denotes the $L^2(D)$ inner product. Solving (11), we obtain the diagonal eigenvalue matrix $\Lambda=\mathrm{diag}(\lambda_1,\cdots,\lambda_{K_s})$ with $\lambda_1\ge\cdots\ge\lambda_{K_s}\ge 0$ and the orthonormal eigenvectors
$$v_j=\big[v^{(j)}_1,\cdots,v^{(j)}_{K_s}\big]^{T},\qquad j=1,\cdots,K_s.\tag{13}$$
Since $R$ is symmetric and positive semi-definite, the eigenvalues are real and non-negative, and the POD basis functions are defined as
$$\phi_j(x)=\frac{1}{\sqrt{K_s\lambda_j}}\sum_{i=1}^{K_s}v^{(j)}_i u^h_i(x),\qquad j=1,\cdots,m.\tag{14}$$
Here, the dimension $m$ of the POD basis is chosen according to the energy ratio
$$\nu_m=\sum_{j=1}^{m}\lambda_j\Big/\sum_{j=1}^{K_s}\lambda_j.\tag{15}$$
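The construction (10)-(15) is a few lines of linear algebra once the snapshots are available. The sketch below assumes a snapshot matrix `S` of FEM coefficient vectors and a FEM mass matrix `Mh` realizing the $L^2(D)$ inner products in (12); both names are placeholders for an existing FEM code.

```matlab
% Snapshot POD via the correlation-matrix eigenvalue problem (11)-(14).
% S  : nu-by-Ks matrix of FEM snapshot coefficient vectors (assumed given)
% Mh : nu-by-nu FEM mass matrix, so (u_i,u_j)_{L2} = S(:,i)'*Mh*S(:,j)
Ks = size(S, 2);
R  = (S' * (Mh * S)) / Ks;            % correlation matrix, entries (12)
R  = (R + R') / 2;                    % symmetrize against round-off
[V, lam] = eig(R, 'vector');          % eigenpairs of R*V = V*Lambda, eq. (11)
[lam, idx] = sort(lam, 'descend');    % order eigenvalues decreasingly
V = V(:, idx);

nu_target = 0.9998;                   % energy threshold for (15)
m = find(cumsum(lam)/sum(lam) >= nu_target, 1);

% POD basis (14): Phi(:,j) = sum_i V(i,j)*S(:,i) / sqrt(Ks*lam(j))
Phi = S * V(:, 1:m) * diag(1 ./ sqrt(Ks * lam(1:m)));
```

The columns of `Phi` are then orthonormal in the `Mh`-weighted inner product, the discrete counterpart of the $L^2(D)$-orthonormality used throughout Section 4.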
Once the basis functions are computed, the solution of (9) can be approximated in the reduced space as
$$\hat{u}(x,\xi)=\sum_{j=1}^{m}\alpha_j(\xi)\,\phi_j(x).\tag{16}$$
The coefficients $\alpha_j(\xi)$ are determined by the Galerkin projection of (9) onto the POD space, which yields the $m\times m$ linear system
$$A\alpha=F,\tag{17}$$
where
$$\alpha=\big[\alpha_1(\xi),\cdots,\alpha_m(\xi)\big]^{T},\qquad F_i=\int_D f(x)\,\phi_i(x)\,dx,\qquad A_{ij}=\int_D a(x,\xi)\,\nabla\phi_j(x)\cdot\nabla\phi_i(x)\,dx,\qquad i,j=1,\cdots,m.$$
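Because the coefficient field (3) is affine in $\xi$, the reduced matrix in (17) can be precomputed blockwise as $A(\xi)=A^{(0)}+\sum_{i=1}^{n_p}\xi_i A^{(i)}$, which is one plausible way to obtain the fast online solves reported in Section 5. The sketch assumes FEM stiffness matrices `K0` (for the constant part $0.01$) and `Ki{i}` (for the $i$-th Gaussian bump in (3)), a load vector `fh`, and the POD basis matrix `Phi` from above; all are hypothetical names.

```matlab
% Offline: project the affine FEM stiffness terms onto the POD basis once.
A0 = Phi' * K0 * Phi;                                   % m-by-m reduced block
Ai = cellfun(@(K) Phi' * K * Phi, Ki, 'UniformOutput', false);
F  = Phi' * fh;                                         % reduced load in (17)

% Online: assemble and solve the m-by-m system (17) for a given xi (np = 4).
solve_rom = @(xi) (A0 + xi(1)*Ai{1} + xi(2)*Ai{2} ...
                      + xi(3)*Ai{3} + xi(4)*Ai{4}) \ F; % reduced states alpha
```

The online stage never touches the $n_u\times n_u$ FEM matrices, which is why each ROM solve is so much cheaper than a full FEM solve.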
From the definition of the POD basis functions we know that they are orthonormal in $L^2(D)$ and minimize the mean squared projection error of the snapshots,
$$M=\frac{1}{K_s}\sum_{i=1}^{K_s}\Big\|u^h_i(x)-\sum_{j=1}^{m}\big(u^h_i(x),\phi_j(x)\big)\phi_j(x)\Big\|^2_{L^2(D)},\tag{18}$$
with minimum value $M=\sum_{j=m+1}^{K_s}\lambda_j$. The corresponding expected projection error over the prior is
$$\bar{M}=\mathbb{E}\Big[\Big\|u^h(x,\xi)-\sum_{j=1}^{m}\big(u^h(x,\xi),\phi_j(x)\big)\phi_j(x)\Big\|^2_{L^2(D)}\Big].\tag{19}$$
For the 4-dimensional stochastic problem (9), we draw $K_s$ realizations from the prior to generate the snapshots; the eigenvalues $\lambda_j$ of the correlation matrix decay rapidly, so a small POD basis already captures most of the snapshot energy.
In order to explore the prior space fully and ensure the accuracy of the POD-based reduced-order model, we need to use a sufficiently large number of snapshots.
We define the expectation and variance of the squared $L^2(D)$ error of the POD-based approximation as
$$\hat{E}=\mathbb{E}\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]\tag{20}$$
and
$$\hat{V}=\mathrm{Var}\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big],\tag{21}$$
where the expectation and variance are taken with respect to the prior and are estimated by Monte Carlo sampling.
Figure 4 displays these error estimates of the POD-based approximate solution for different numbers of snapshots and different POD dimensions; representative values are reported in Table 1.
Table 1. Energy ratio $\nu_m$ and error estimates $\hat{E}$, $\hat{V}$ of the POD-based approximation for POD dimensions $m=3,6,9,12$; the three row groups correspond to three different snapshot-set sizes $K_s$.

| | $m=3$ | $m=6$ | $m=9$ | $m=12$ |
|---|---|---|---|---|
| $\nu_m$ | 0.9906 | 0.9988 | 0.9998 | 0.9999 |
| $\hat{E}$ | 5.2424 | 0.9679 | 0.2840 | 0.1267 |
| $\hat{V}$ | 8.1367 | 0.8915 | 0.1475 | 0.0282 |
| $\nu_m$ | 0.9908 | 0.9988 | 0.9998 | 0.9999 |
| $\hat{E}$ | 4.9558 | 1.0836 | 0.2422 | 0.0907 |
| $\hat{V}$ | 6.7935 | 0.9507 | 0.1023 | 0.0126 |
| $\nu_m$ | 0.9900 | 0.9986 | 0.9997 | 0.9999 |
| $\hat{E}$ | 4.6537 | 0.8260 | 0.2229 | 0.0622 |
| $\hat{V}$ | 5.5292 | 0.4418 | 0.0898 | 0.0037 |
In our POD-DCS algorithm, applying the POD method first greatly reduces the DoF of the problem at hand and simplifies the subsequent processing. Next, we utilize the idea of compressive sensing to express the reduced states $\alpha_j(\xi)$ sparsely in a polynomial basis.
Clearly, once the POD basis functions $\phi_j(x)$ are fixed, it remains to approximate the maps $\xi\mapsto\alpha_j(\xi)$. We expand each of them in a gPC basis as
$$\alpha_j(\xi)\approx\sum_{i=1}^{n}c_{ij}\,\psi_i(\xi),\qquad j=1,\cdots,m,\tag{22}$$
where the $\psi_i(\xi)$ are multivariate Legendre polynomials of total degree at most $N$, orthonormal with respect to the prior, and the total number of basis functions is
$$n=\frac{(N+n_p)!}{N!\,n_p!}.\tag{23}$$
The stochastic collocation method [29] based on polynomial interpolation can be utilized to determine the coefficients $c_{ij}$: sampling $K_c$ collocation points $\{\xi_k\}_{k=1}^{K_c}$ from the prior and enforcing (22) at these points gives
$$\begin{pmatrix}\alpha_1(\xi_1)&\cdots&\alpha_m(\xi_1)\\ \vdots& &\vdots\\ \alpha_1(\xi_{K_c})&\cdots&\alpha_m(\xi_{K_c})\end{pmatrix}=\begin{pmatrix}\psi_1(\xi_1)&\cdots&\psi_n(\xi_1)\\ \vdots& &\vdots\\ \psi_1(\xi_{K_c})&\cdots&\psi_n(\xi_{K_c})\end{pmatrix}\begin{pmatrix}c_{11}&\cdots&c_{1m}\\ \vdots& &\vdots\\ c_{n1}&\cdots&c_{nm}\end{pmatrix},$$
written compactly as
$$\hat{A}=\Psi c,\tag{24}$$
where $\hat{A}\in\mathbb{R}^{K_c\times m}$ collects the reduced states, $\Psi\in\mathbb{R}^{K_c\times n}$ is the measurement matrix, and $c\in\mathbb{R}^{n\times m}$ is the coefficient matrix. When $K_c<n$, system (24) is underdetermined.
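A sketch of assembling the measurement matrix $\Psi$ in (24) for a total-degree Legendre basis on $\xi\in[-1,1]^{n_p}$ (a uniform prior is assumed so that the scaled polynomials are orthonormal); the multi-index enumeration and the three-term recurrence are standard, but the function below is our illustration rather than code from the paper.

```matlab
function Psi = legendre_measurement(Xi, N)
% Psi(k,i) = psi_i(xi_k) for multivariate Legendre polynomials of total
% degree <= N evaluated at the rows of Xi (Kc-by-np, entries in [-1,1]).
[Kc, np] = size(Xi);
% Enumerate all multi-indices alpha with |alpha| <= N.
g = cell(1, np);
[g{:}] = ndgrid(0:N);
alpha = reshape(cat(np+1, g{:}), [], np);
alpha = alpha(sum(alpha, 2) <= N, :);       % n = nchoosek(N+np,np) rows, eq. (23)
% Univariate orthonormal Legendre values: P{d}(:,k+1) = sqrt(2k+1)*L_k(x_d).
P = cell(1, np);
for d = 1:np
    x = Xi(:, d); L = ones(Kc, N+1);
    if N >= 1, L(:, 2) = x; end
    for k = 2:N                             % three-term recurrence for L_k
        L(:, k+1) = ((2*k-1)*x.*L(:, k) - (k-1)*L(:, k-1)) / k;
    end
    P{d} = L .* repmat(sqrt(2*(0:N) + 1), Kc, 1);  % normalize w.r.t. uniform prior
end
% Tensorize: psi_i(xi) = prod_d P{d}(:, alpha_i(d)+1).
Psi = ones(Kc, size(alpha, 1));
for i = 1:size(alpha, 1)
    for d = 1:np
        Psi(:, i) = Psi(:, i) .* P{d}(:, alpha(i, d) + 1);
    end
end
end
```

For example, with $n_p=4$ and $N=4$ this yields $n=\binom{8}{4}=70$ columns, so any $K_c<70$ collocation points leave (24) underdetermined.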
Based on the idea of compressive sensing, given only a small, highly incomplete set of reduced states obtained by solving the algebraic system (17), an accurate sparse solution of (24) can be recovered by solving the BP problem
$$\mathrm{vec}(\tilde{c})=\underset{\mathrm{vec}(c)}{\arg\min}\ \|\mathrm{vec}(c)\|_1,\qquad\text{subject to}\quad\mathrm{vec}(\hat{A})=\Theta\,\mathrm{vec}(c),\tag{25}$$
where the dictionary matrix is $\Theta=I_m\otimes\Psi$, so that the constraint is the vectorized form of (24). The recovered reduced states are then
$$\tilde{\alpha}_j(\xi)=\sum_{i=1}^{n}\tilde{c}_{ij}\,\psi_i(\xi),\qquad j=1,\cdots,m.\tag{26}$$
Combining (16) and (26), we obtain the POD-DCS approximate solution of the stochastic elliptic problem (9) as
$$\tilde{u}(x,\xi)=\sum_{i=1}^{n}\sum_{j=1}^{m}\tilde{c}_{ij}\,\psi_i(\xi)\,\phi_j(x).\tag{27}$$
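The equality-constrained problem (25) can be solved with any BP solver; one elementary route is to recast it as a linear program via the split $\mathrm{vec}(c)=c^{+}-c^{-}$ with $c^{\pm}\ge 0$. The sketch uses MATLAB's `linprog` (Optimization Toolbox); the paper does not specify its solver, so this is an illustrative choice.

```matlab
% Basis pursuit (25): min ||vec(c)||_1  s.t.  Theta*vec(c) = vec(Ahat).
% Psi : Kc-by-n measurement matrix;  Ahat : Kc-by-m reduced-state matrix.
n = size(Psi, 2);  m = size(Ahat, 2);
Theta = kron(eye(m), Psi);            % vec(Psi*c) = (I_m kron Psi)*vec(c)
b   = Ahat(:);                        % vec(Ahat)
f   = ones(2*n*m, 1);                 % sum(cplus) + sum(cminus) = ||vec(c)||_1
Aeq = [Theta, -Theta];                % Theta*(cplus - cminus) = b
lb  = zeros(2*n*m, 1);
y   = linprog(f, [], [], Aeq, b, lb); % LP in the stacked vector [cplus; cminus]
ctil = reshape(y(1:n*m) - y(n*m+1:end), n, m);   % recovered coefficients
```

Since $\Theta=I_m\otimes\Psi$ is block diagonal, (25) actually decouples into $m$ independent basis pursuit problems, one per column of $c$; solving them separately is equivalent and reduces memory for large $m$.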
Remark 1. By using the orthogonality of multivariate Legendre polynomials, we have
$$\mathbb{E}\big[\tilde{u}(x,\xi)\big]=\sum_{j=1}^{m}\tilde{c}_{1j}\,\phi_j(x),\tag{28}$$
$$\mathbb{E}\big[\tilde{u}^2(x,\xi)\big]=\sum_{i=1}^{n}\Big(\sum_{j=1}^{m}\tilde{c}_{ij}\,\phi_j(x)\Big)\Big(\sum_{j'=1}^{m}\tilde{c}_{ij'}\,\phi_{j'}(x)\Big),\tag{29}$$
where $\psi_1\equiv 1$ denotes the constant basis function.
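Given $\tilde{c}$ and the POD basis, the moments (28)-(29) reduce to a few matrix products. A minimal sketch, assuming $\psi_1\equiv 1$ and a nodal basis matrix `Phi` whose columns are the $\phi_j$ at the FE nodes:

```matlab
% Moments of the POD-DCS solution from its coefficients, eqs. (28)-(29).
Eu   = Phi * ctil(1, :)';         % mean field, eq. (28)
U    = Phi * ctil';               % column i holds sum_j ctil(i,j)*phi_j(x)
Eu2  = sum(U.^2, 2);              % second moment at the FE nodes, eq. (29)
Varu = Eu2 - Eu.^2;               % pointwise variance of u-tilde
```

No sampling is required: the surrogate's mean and variance fields come directly from the recovered coefficient matrix.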
The details of our POD-DCS method for the stochastic forward problem are presented in Algorithm 1.
Algorithm 1 The POD-DCS algorithm for the stochastic forward problem
Input: Grid parameter $h$, number of snapshots $K_s$, POD dimension $m$ (or energy threshold $\nu_m$), polynomial order $N$, number of collocation points $K_c$.
Output: POD basis functions $\{\phi_j\}_{j=1}^{m}$ and coefficient matrix $\tilde{c}$.
1: Draw a set of random inputs $\{\xi_i\}_{i=1}^{K_s}$ from the prior and compute the snapshot matrix $S$ in (10).
2: Generate the POD basis functions $\{\phi_j\}_{j=1}^{m}$ by (11)-(14).
3: Construct the reduced state matrix $\hat{A}$ by solving the reduced system (17) at $K_c$ collocation points.
4: Select appropriate stochastic basis functions $\{\psi_i\}_{i=1}^{n}$ and form the measurement matrix $\Psi$.
5: Obtain the coefficient matrix $\tilde{c}$ by solving the BP problem (25).
6: Generate the approximate reduced states $\tilde{\alpha}_j(\xi)$ by (26) and the POD-DCS solution $\tilde{u}(x,\xi)$ by (27).
Table 1 illustrates that using 100 realizations to generate the POD-based approximate solution with dimension 9 guarantees the accuracy of the model. Thus, we utilize 100 snapshots to construct the ROM and retain only the first 9 basis functions. In this case, the DoF of the forward problem is reduced from $n_u$ to $m=9$.
Now, we need to determine the coefficient matrix $\tilde{c}$ in (26) by solving the BP problem (25). To measure the sparsity of $\tilde{c}$, we define the sparsity ratio with threshold $\tau$ as
$$\tilde{R}_\tau=\frac{\#\{|\tilde{c}|>\tau\}}{\#\{\tilde{c}\}}.\tag{30}$$
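In MATLAB, the sparsity ratio (30) is a one-liner for a given threshold `tau`:

```matlab
% Sparsity ratio (30): fraction of coefficients above threshold tau.
Rtau = nnz(abs(ctil) > tau) / numel(ctil);
```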
Analogously to the definitions (20) and (21), we define
$$\tilde{E}=\mathbb{E}\big[\|u^h-\tilde{u}\|^2_{L^2(D)}\big]\tag{31}$$
and
$$\tilde{V}=\mathrm{Var}\big[\|u^h-\tilde{u}\|^2_{L^2(D)}\big].\tag{32}$$
The sparsity and error estimates of the POD-DCS method with respect to different numbers of collocation points, sampled randomly from the prior space, and different thresholds are drawn in Figure 5. Compared with the POD-based ROM, the POD-DCS solution attains comparable accuracy once the number of collocation points is sufficiently large, while most of the recovered coefficients are negligibly small.
The POD-DCS method has been described in the previous section. In order to present the error analysis and coefficient estimation of our scheme, we first recall several relevant properties of the compressive sensing method.
Definition 4.1 (see [11]). A vector $b$ is called $k$-sparse if it has at most $k$ nonzero entries. For a general vector $b$, the best $k$-term approximation error in the $\ell_p$-norm is
$$\sigma_{k,p}(b)=\inf_{\|x\|_0\le k}\|x-b\|_p,\tag{33}$$
where $\|x\|_0$ denotes the number of nonzero entries of $x$.
Note that for $0<q<p$ and $s=1/q-1/p$, every vector $b$ satisfies
$$\sigma_{k,p}(b)\le k^{-s}\|b\|_q.\tag{34}$$
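For completeness, (34) follows from the standard rearrangement argument (see [11]). Let $b^{*}$ denote the non-increasing rearrangement of $|b|$; since $k(b^{*}_k)^q\le\sum_{j\le k}(b^{*}_j)^q\le\|b\|_q^q$, we have $b^{*}_k\le k^{-1/q}\|b\|_q$, and therefore
$$\sigma_{k,p}(b)^p=\sum_{j>k}(b^{*}_j)^p\le(b^{*}_k)^{p-q}\sum_{j>k}(b^{*}_j)^q\le\big(k^{-1/q}\|b\|_q\big)^{p-q}\|b\|_q^{q}=k^{-(p-q)/q}\,\|b\|_q^{p},$$
so taking $p$-th roots gives $\sigma_{k,p}(b)\le k^{-s}\|b\|_q$ with $s=1/q-1/p$.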
In order to ensure that the measurement matrix allows stable sparse recovery, we recall the restricted isometry property (RIP) and the behavior of the restricted isometry constant (RIC) under Kronecker products.
Lemma 4.2 (see [5,24]). If for any $k$-sparse vector $b$ the inequality
$$(1-\delta)\|b\|_2^2\le\|Ab\|_2^2\le(1+\delta)\|b\|_2^2\tag{35}$$
holds, then the smallest such constant $\delta$ is called the restricted isometry constant $\delta_k(A)$. Moreover, for Kronecker products,
$$\max\{\delta_k(A),\delta_k(B)\}\le\delta_k(A\otimes B)\le\delta_k(A)+\delta_k(B)+\delta_k(A)\delta_k(B).\tag{36}$$
Based on Lemma 4.2 and the fact that $\delta_k(I_m)=0$, the RIC of the dictionary matrix $\Theta=I_m\otimes\Psi$ equals that of $\Psi$, so the recovery properties of (25) are governed by the measurement matrix $\Psi$ alone.
Lemma 4.3 (see [11]). Assuming that the dictionary matrix $\Theta$ satisfies a suitable RIP condition (i.e., $\delta_{2k}(\Theta)$ is sufficiently small), the solution $\tilde{c}$ of the BP problem (25) satisfies
$$\|\mathrm{vec}(c)-\mathrm{vec}(\tilde{c})\|_2\le C_\delta\,\frac{\sigma_{k,1}(\mathrm{vec}(c))}{\sqrt{k}},\tag{37}$$
where the constant $C_\delta$ depends only on the RIC of $\Theta$.
By Lemma 4.3, we give the error estimate of our POD-DCS method as follows.
Theorem 4.4. There exist constants $C_1,C_2,C_3,C_4>0$ such that, with probability close to one,
$$\tilde{E}\le C_1\sqrt{\frac{\hat{V}}{K_s}}+C_2\sum_{j=m+1}^{K_s}\lambda_j+C_3N^{-\theta}+C_4\,k^{1-2/q}\,\|\mathrm{vec}(c)\|_q^2,\tag{38}$$
where $0<q<1$, $\theta>0$ is the convergence rate of the gPC approximation of the reduced states, and $k$ is the sparsity level used in the recovery.
Proof. By using the inequality $(a+b)^2\le 2(a^2+b^2)$, we have
$$\tilde{E}=\mathbb{E}\big[\|u^h-\hat{u}+\hat{u}-\tilde{u}\|^2_{L^2(D)}\big]\le 2\Big\{\mathbb{E}\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]+\mathbb{E}\big[\|\hat{u}-\tilde{u}\|^2_{L^2(D)}\big]\Big\}.$$
Denote
$$I_1=\mathbb{E}\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big],\qquad I_2=\mathbb{E}\big[\|\hat{u}-\tilde{u}\|^2_{L^2(D)}\big].$$
We first estimate $I_1$. Its Monte Carlo estimator based on the snapshots is
$$\bar{I}_1=\frac{1}{K_s}\sum_{j=1}^{K_s}\|u^h_j-\hat{u}_j\|^2_{L^2(D)},$$
where $\hat{u}_j$ denotes the POD projection of the snapshot $u^h_j$. The sampling error is
$$E_S=\mathbb{E}\big[\|u^h-\hat{u}\|^2_{L^2(D)}\big]-\frac{1}{K_s}\sum_{j=1}^{K_s}\|u^h_j-\hat{u}_j\|^2_{L^2(D)}=\sqrt{\frac{\hat{V}}{K_s}}\cdot\frac{K_s\hat{E}-\sum_{j=1}^{K_s}\|u^h_j-\hat{u}_j\|^2_{L^2(D)}}{\sqrt{K_s\hat{V}}}.$$
According to the central limit theorem, we have asymptotically
$$E_S\sim\mathcal{N}\big(0,\hat{V}/K_s\big);$$
then, choosing a constant $C_q$ corresponding to a prescribed confidence level,
$$|E_S|\le C_q\sqrt{\hat{V}/K_s}$$
holds with probability close to one, where $C_q$ is the associated quantile of the standard normal distribution. Furthermore, by the optimality property (18) of the POD basis,
$$\bar{I}_1=\frac{1}{K_s}\sum_{j=1}^{K_s}\|u^h_j-\hat{u}_j\|^2_{L^2(D)}=\sum_{j=m+1}^{K_s}\lambda_j.$$
As in (22), we use multivariate polynomials of total degree at most $N$ to approximate the reduced states; standard gPC approximation theory (see, e.g., [13,29]) provides a constant $C_N$ and a rate $\theta>0$, depending on the regularity of the $\alpha_j$, such that
$$\int_\Gamma\Big(\alpha_j(\xi)-\sum_{i=1}^{n}c_{ij}\psi_i(\xi)\Big)^2\pi(\xi)\,d\xi\le C_N N^{-\theta},\qquad j=1,\cdots,m,$$
holds. We now estimate $I_2$. Using the $L^2(D)$-orthonormality of the POD basis, the orthonormality of the $\psi_i$, and Lemma 4.3,
$$I_2=\int_\Gamma\int_D\Big(\sum_{j=1}^{m}\big(\alpha_j(\xi)-\tilde{\alpha}_j(\xi)\big)\phi_j(x)\Big)^2\pi(\xi)\,dx\,d\xi=\sum_{j=1}^{m}\int_\Gamma\big(\alpha_j(\xi)-\tilde{\alpha}_j(\xi)\big)^2\pi(\xi)\,d\xi$$
$$=\sum_{j=1}^{m}\int_\Gamma\Big(\alpha_j(\xi)-\sum_{i=1}^{n}c_{ij}\psi_i(\xi)+\sum_{i=1}^{n}(c_{ij}-\tilde{c}_{ij})\psi_i(\xi)\Big)^2\pi(\xi)\,d\xi$$
$$\le 2\sum_{j=1}^{m}\int_\Gamma\Big(\alpha_j(\xi)-\sum_{i=1}^{n}c_{ij}\psi_i(\xi)\Big)^2\pi(\xi)\,d\xi+2\sum_{j=1}^{m}\sum_{i=1}^{n}(c_{ij}-\tilde{c}_{ij})^2\le 2mC_N N^{-\theta}+2C_\delta^2\,\frac{\sigma^2_{k,1}(\mathrm{vec}(c))}{k}.$$
By the a priori estimate (34) with $p=1$, so that $s=1/q-1$, we arrive at the error estimate
$$\tilde{E}\le 2\Big(C_q\sqrt{\frac{\hat{V}}{K_s}}+\sum_{j=m+1}^{K_s}\lambda_j+2mC_N N^{-\theta}+2C_\delta^2\,\frac{\sigma^2_{k,1}(\mathrm{vec}(c))}{k}\Big)\le C_1\sqrt{\frac{\hat{V}}{K_s}}+C_2\sum_{j=m+1}^{K_s}\lambda_j+C_3N^{-\theta}+C_4\,k^{1-2/q}\,\|\mathrm{vec}(c)\|_q^2,$$
which completes the proof.
This theorem implies that the mean square error $\tilde{E}$ can be reduced by increasing the number of snapshots $K_s$, the POD dimension $m$, the polynomial order $N$, and the sparsity level $k$ achievable in the recovery.
The error analysis has been completed and we are now ready to show the sparsity of our POD-DCS solution.
Theorem 4.5. The coefficient matrix $\tilde{c}$ of the POD-DCS solution (27) satisfies
$$\sum_{i=1}^{n}\tilde{c}^2_{ij}=\lambda_j,\qquad j=1,\cdots,m.\tag{39}$$
Proof. According to the eigenvalue problem (11) and the definition (14) of the POD basis functions, we obtain
$$\lambda_j\phi_j(x)=\frac{1}{\sqrt{K_s\lambda_j}}\sum_{i=1}^{K_s}\lambda_j v^{(j)}_i u^h_i(x)=\frac{1}{\sqrt{K_s\lambda_j}}\sum_{i=1}^{K_s}\Big(\frac{1}{K_s}\sum_{r=1}^{K_s}\big(u^h_i(x),u^h_r(x)\big)v^{(j)}_r\Big)u^h_i(x)=\frac{1}{K_s}\sum_{i=1}^{K_s}\big(u^h_i(x),\phi_j(x)\big)u^h_i(x).$$
Thus, $\lambda_j\phi_j(x)$ is the snapshot average of $\big(u^h(x,\xi),\phi_j(x)\big)u^h(x,\xi)$, i.e., an approximation of $\mathbb{E}\big[(u^h(x,\xi),\phi_j(x))\,u^h(x,\xi)\big]$. On the other hand, for the POD-DCS solution,
$$\mathbb{E}\big[\big(\tilde{u}(x,\xi),\phi_j(x)\big)\tilde{u}(x,\xi)\big]=\int_\Gamma\Big(\int_D\sum_{i=1}^{n}\sum_{k=1}^{m}\tilde{c}_{ik}\psi_i(\xi)\phi_k(x)\phi_j(x)\,dx\Big)\sum_{i'=1}^{n}\sum_{k'=1}^{m}\tilde{c}_{i'k'}\psi_{i'}(\xi)\phi_{k'}(x)\,\pi(\xi)\,d\xi$$
$$=\int_\Gamma\Big(\sum_{i=1}^{n}\tilde{c}_{ij}\psi_i(\xi)\Big)\sum_{i'=1}^{n}\sum_{k'=1}^{m}\tilde{c}_{i'k'}\psi_{i'}(\xi)\phi_{k'}(x)\,\pi(\xi)\,d\xi=\sum_{k'=1}^{m}\Big(\sum_{i=1}^{n}\tilde{c}_{ij}\tilde{c}_{ik'}\Big)\phi_{k'}(x).$$
Matching the coefficient of $\phi_j(x)$ in the two expressions implies that (39) holds.
Figure 7 confirms the conclusion of Theorem 4.5 numerically. The discrepancy in this figure is due to the fact that we only use the average of 100 samples to approximate the expectation, together with the error of the POD-DCS solution itself. Since the eigenvalues decay rapidly in many practical problems, Theorem 4.5 implies that the coefficients are compressible, which justifies applying the POD-based ROM first.
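Theorem 4.5 also provides a cheap consistency check on any computed coefficient matrix: the squared column sums of $\tilde{c}$ should reproduce the POD eigenvalues. With `ctil` and the sorted eigenvalues `lam` from the earlier sketches:

```matlab
% Consistency check of (39): column energies of ctil vs. POD eigenvalues.
col_energy = sum(ctil.^2, 1);              % sum_i ctil(i,j)^2, j = 1..m
disp(max(abs(col_energy - lam(1:m)')));    % should be small, cf. Figure 7
```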
In this section, we compare the POD-DCS scheme with the POD-based ROM and the conventional CS method, and use our method to solve the 4-dimensional elliptic inverse problem (2)-(3) to further demonstrate its feasibility and advantages.
The error estimates and sparsity of our POD-DCS scheme and the other two methods are shown in Table 2. All three methods are constructed from 100 fully discrete finite element solutions. Among them, the POD-DCS method is constructed as discussed above, with 100 snapshots, 9 POD basis functions, and 200 collocation points at which only the ROM has to be solved, while the conventional CS method uses the finite element solutions directly as measurements.
| | POD-DCS | POD | CS |
|---|---|---|---|
| $\tilde{E}$ | 4.1758 | 2.8399 | 7.3896 |
| $\tilde{V}$ | 2.4931 | 1.4750 | 11.1484 |
| $\tilde{R}_\tau$ | 0.2024 | - | 0.2182 |
Obviously, the POD method has the highest accuracy; the reconstruction error of the reduced states makes the accuracy of our POD-DCS method slightly inferior to POD, and the relatively small amount of information available to the conventional CS method leads to its worst accuracy. In terms of sparsity, the proposed POD-DCS scheme is slightly better than the CS method. However, the total DoF of our method is $n\times m$, which is far smaller than the $n\times n_u$ DoF of the conventional CS method based on the finite element basis.
From the discussions in Section 3 and Theorem 4.4, we know that the accuracy of the POD-DCS method can be improved by increasing the number of snapshots, the dimension of the POD basis, the number of collocation points, and the order of the polynomials. These quantities can be chosen according to the required accuracy.
Here we compare the efficiency of the different methods. Table 3 summarizes the cost of each stage in the construction of the different models. From the perspective of model construction, the number of full finite element solutions used in the three methods is the same. For the POD-based ROM, although it has high accuracy and does not need to solve a BP problem, the time required to evaluate the model once is about 5 times that of the other two methods, which is not conducive to the implementation of sampling algorithms. It takes only 0.2095 s to evaluate the solution expression obtained by the CS method, but its dictionary matrix is much larger than ours, so its BP problem takes 5045 s to solve, compared with 21 s for the POD-DCS method.
| | FE solutions | Time per FE solution (s) | ROM solutions | Time per ROM solution (s) | Time for BP (s) | Time per model output (s) |
|---|---|---|---|---|---|---|
| POD-DCS | 100 | 2.1970 | 200 | 1.0085 | 21 | 0.2002 |
| POD | 100 | 2.1970 | - | - | - | 1.0085 |
| CS | 100 | 2.1970 | - | - | 5045 | 0.2095 |
Therefore, from an efficiency perspective, the offline cost of the POD-based ROM is relatively small, but its online cost is larger than that of the other two methods. Compared with the CS method, the online time of the POD-DCS method differs only slightly, but its offline time has an obvious advantage. Moreover, our method achieves an 11-fold acceleration when evaluating a forward model, so the time required to construct the solution expression of the stochastic surrogate model is quickly offset by the repeated evaluations of the forward model. It is well known that both POD and stochastic collocation methods can deal with highly nonlinear problems, so for such complex problems our method will be even more attractive due to its high efficiency.
The accuracy and efficiency comparison of the POD-DCS method with the other two methods has been completed. Now, we utilize this method to deal with the elliptic inverse problem (2)-(3). We use the finite element method to generate the synthetic observation data, and compare the posterior results obtained with the FEM, POD-DCS, and CS forward models.
In this Bayesian inverse problem, the components of the weight vector $\xi$ are a priori independent and uniformly distributed, in accordance with the product-form prior (8) and the Legendre basis used for the surrogate.
Here we consider observation data contaminated by independent Gaussian noise. Table 4 lists the posterior intervals of the four components of $\xi$ obtained with the three different forward models.
| | FEM | POD-DCS | CS |
|---|---|---|---|
| $\xi_1$ | [0.2898, 0.3173] | [0.2883, 0.3164] | [0.2963, 0.3248] |
| $\xi_2$ | [0.2858, 0.3134] | [0.2844, 0.3126] | [0.2913, 0.3175] |
| $\xi_3$ | [0.2913, 0.3191] | [0.2927, 0.3200] | [0.2906, 0.3208] |
| $\xi_4$ | [0.2750, 0.3016] | [0.2762, 0.3047] | [0.2736, 0.3022] |
From the previous discussion, it is clear that the POD-DCS surrogate model speeds up model evaluation while maintaining accuracy. Therefore, this method can be considered for Bayesian inverse problems, optimal control, and other problems requiring repeated evaluation of the forward model. Note that the BP problem (25) is solvable in polynomial time [6], and since the dictionary matrix of our method is small, the offline cost of constructing the surrogate remains moderate.
In summary, for statistical inverse problems we can regard the deterministic forward problem as a stochastic forward problem on the prior support of the unknown parameters, since the solutions of these two problems with the same input are equal. Therefore, in this work, we propose a data-driven compressive sensing method based on proper orthogonal decomposition to construct the solution expression of the stochastic surrogate model and thereby accelerate the Bayesian inference of an inverse problem. The snapshot-based POD method is first used to construct the ROM of the stochastic problem, then the stochastic collocation method based on gPC basis functions is adopted to represent the reduced states, and the coefficients are determined by solving an $\ell_1$-minimization problem.
Accelerating the decay of the eigenvalues in the POD method is a problem worth considering. Moreover, for complex problems, evaluating the forward model is costly; in order to construct an accurate reduced-order model, we usually need many realizations, which can be prohibitive. Therefore, we plan to use a multi-fidelity scheme or to select appropriate snapshots to overcome this difficulty. This is the subject of future work.
[1] Solving elliptic boundary value problems with uncertain coefficients by the finite element method: The stochastic formulation, Comput. Methods Appl. Mech. Engrg., 194 (2005), 1251-1294.
[2] Inverse problems of optimal control in creep theory, J. Appl. Ind. Math., 6 (2012), 421-430.
[3] Markov Chain Monte Carlo method and its application, Journal of the Royal Statistical Society, 47 (1998), 69-100.
[4] One-dimensional inverse scattering problem for optical coherence tomography, Inverse Problems, 21 (2005), 499-524.
[5] The restricted isometry property and its implications for compressed sensing, C. R. Math. Acad. Sci. Paris, 346 (2008), 589-592.
[6] S. S. Chen, D. L. Donoho and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM Rev., 43 (2001), 129-159. doi: 10.1137/S003614450037906X
[7] D. Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory, Springer-Verlag, Berlin, 1998. doi: 10.1007/978-3-662-03537-5
[8] A non-adapted sparse approximation of PDEs with stochastic inputs, J. Comput. Phys., 230 (2011), 3015-3034.
[9] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, 1996.
[10] An inverse problem arising from the displacement of oil by water in porous media, Appl. Numer. Math., 59 (2009), 2452-2466.
[11] M. Fornasier and H. Rauhut, Compressive sensing, in Handbook of Mathematical Methods in Imaging (ed. O. Scherzer), Springer, New York, (2015), 205-256.
[12] Optimal control of stochastic flow over a backward-facing step using reduced-order modeling, SIAM J. Sci. Comput., 33 (2011), 2641-2663.
[13] Stochastic finite element methods for partial differential equations with random input data, Acta Numer., 23 (2014), 521-650.
[14] J. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, Springer-Verlag, New York, 2005.
[15] Statistical inverse problems: Discretization, model reduction and inverse crimes, J. Comput. Appl. Math., 198 (2007), 493-504.
[16] Conditional well-posedness for an elliptic inverse problem, SIAM J. Appl. Math., 71 (2011), 952-971.
[17] J. Li and Y. M. Marzouk, Adaptive construction of surrogates for the Bayesian solution of inverse problems, SIAM J. Sci. Comput., 36 (2014), A1163-A1186. doi: 10.1137/130938189
[18] Data-driven compressive sensing and applications in uncertainty quantification, J. Comput. Phys., 374 (2018), 787-802.
[19] Parameter and state model reduction for large-scale statistical inverse problems, SIAM J. Sci. Comput., 32 (2010), 2523-2542.
[20] J. Martin, L. C. Wilcox, C. Burstedde and O. Ghattas, A stochastic Newton MCMC method for large-scale statistical inverse problems with application to seismic inversion, SIAM J. Sci. Comput., 34 (2012), A1460-A1487. doi: 10.1137/110845598
[21] Stochastic spectral methods for efficient Bayesian solution of inverse problems, J. Comput. Phys., 224 (2007), 560-586.
[22] Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems, J. Comput. Phys., 228 (2009), 1862-1902.
[23] N. Petra, J. Martin, G. Stadler and O. Ghattas, A computational framework for infinite-dimensional Bayesian inverse problems, Part II: Stochastic Newton MCMC with application to ice sheet flow inverse problems, SIAM J. Sci. Comput., 36 (2014), A1525-A1555. doi: 10.1137/130934805
[24] Sparse Legendre expansions via l1-minimization, J. Approx. Theory, 164 (2012), 517-533.
[25] An inverse problem for the steady state diffusion equation, SIAM J. Appl. Math., 41 (1981), 210-221.
[26] Inverse problems: A Bayesian perspective, Acta Numer., 19 (2010), 451-559.
[27] Solutions of Ill-Posed Problems, Halsted Press, 1977.
[28] Model reduction and pollution source identification from remote sensing data, Inverse Probl. Imaging, 3 (2009), 711-730.
[29] Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press, 2010.
[30] L. Yan and L. Guo, Stochastic collocation algorithms using l1-minimization for Bayesian solution of inverse problems, SIAM J. Sci. Comput., 37 (2015), A1410-A1435. doi: 10.1137/140965144
[31] Review of parameter identification procedures in groundwater hydrology: The inverse problem, Water Resources Research, 22 (1986), 95-108.
[32] A wavelet adaptive-homotopy method for inverse problem in the fluid-saturated porous media, Appl. Math. Comput., 208 (2009), 189-196.