1. Introduction
The heart is a hollow, muscular organ that receives blood from the veins and pumps it out through the arteries, back into the circulatory system. Its activity between two consecutive heartbeats is referred to as the cardiac cycle, which consists of a sequence of alternating contractions (systole) and relaxations (diastole) of the cardiac chambers. The active left ventricular systole is the main agent of blood ejection in the circulatory system and results, at the macroscopic level, in a longitudinal shortening, a wall thickening and a torsion around the longitudinal axis. The latter is due to the particular fiber distribution of the cardiac muscle cells, the cardiomyocytes, which strongly influences the mechanical response of the heart. Although several physical phenomena concur to the cardiac function - namely, the propagation of the electrical potential, ion dynamics, the active contraction of cardiomyocytes, tissue mechanics and blood circulation, see, e.g., [1,2] - in this article we focus on the mechanical activity of the cardiac tissue. In particular, we take into account both the passive response of the tissue and the active shortening of the muscular fibers, adopting finite elasticity models to describe the heart contraction and relaxation.
Describing the mechanics of the cardiac tissue requires complex constitutive laws, characterized by an exponential strain energy function and by the presence of muscular fibers and sheets. For this reason, the passive myocardium is modeled as a hyperelastic orthotropic material, ultimately yielding highly nonlinear mechanical problems to be solved. The numerical approximation of these problems through high-fidelity, full order models (FOMs), such as those built on the finite element method, entails severe computational costs, because of the need to account for complex geometrical configurations and suitable spatial and temporal discretizations. Computational complexity is further exacerbated if one is interested in going beyond a single, direct simulation. Indeed, when simulating cardiac mechanics, several input data – ultimately depending on a set of parameters – affect the problem under investigation, often varying within a broad range, and possibly affected by uncertainty.
Addressing the impact of input parameter variations on mechanical outputs of clinical interest thus plays a key role in exploring the tissue response under many different conditions [3,4], in calibrating the numerical solver and, ultimately, in obtaining personalized models taking into account inter-patient variability [5]. From a numerical perspective, those tasks yield the solution of parameter estimation, sensitivity analysis, and uncertainty quantification (UQ) problems; all these tasks share the need for multiple queries of the parameter-to-solution map, in an accurate (and possibly very fast) way. In practice, the construction of a surrogate model for the input-output map, which directly approximates the quantity of interest as a function of the input (possibly, random) parameters, is often preferred to a reduced order approximation of the field variables. For instance, the effect of uncertainty in both global myocardial material properties and the local variability of the microstructure orientation on the left ventricular function has been considered in [6] using meta-models built through a polynomial chaos expansion; similarly, in [4] both sensitivity analysis and forward UQ methods have been applied to investigate the left ventricular function during the full cardiac cycle, also involving a circulatory model, to assess the effect of regional wall thickness, fiber orientation, passive material parameters, active stress and the circulatory model, again using polynomial chaos expansion surrogate models. However, despite being more intrusive to implement, ROMs obtained through a projection process on the equations governing the FOM yield more accurate approximations than data fits, and usually generate more significant computational gains than lower-fidelity models, requiring less training data [7].
In this paper we adapt ROMs developed in the last decade [8,9] for parameterized PDEs to the case of time-dependent nonlinear problems arising in cardiac mechanics. We consider projection-based ROMs built through the reduced basis (RB) method, suitably equipped with hyper-reduction strategies to assemble nonlinear terms efficiently. Exploiting proper orthogonal decomposition (POD) for the construction of the reduced spaces, we can rely on completely physics-based ROMs, rather than on data-driven strategies such as those recently proposed, which exploit, e.g., a fully-automatic deep learning (DL) workflow to generate both volumetric parameters and strain measures from cine-MRI data [10], Gaussian process interpolations [11,12,13], or physics-informed neural networks [14]. Indeed, despite their nonlinear nature, parameterized problems in elastodynamics such as those arising from cardiac modeling do not pose serious issues regarding the dimensionality of the reduced spaces, the slow decay of singular values when considering POD, or the linear superposition of modes assumed by the RB method.
The construction of projection-based reduced order models addressing nonlinear unsteady problems related to cardiac mechanics has been considered in very few contributions so far. A quasi-static model was employed in [15] to describe the contraction of an idealized left ventricle geometry with few degrees of freedom, where a POD-Galerkin ROM was used at each time step, also exploiting further hyper-reduction techniques such as the discrete empirical interpolation method (DEIM) and its matrix version (MDEIM) to efficiently assemble the residual vectors and the Jacobian matrices, respectively, required by Newton iterations at the ROM level. In this case, the activation of the heart contraction was given by the solution of a high-fidelity electrophysiology model, whilst the pressure caused by the presence of blood in the ventricular chamber was neglected. An extension to quasi-Newton methods can be found in [16]. A POD-Galerkin ROM for a patient-specific biventricular cardiac model has been introduced and analyzed in [17], for the sake of parameter estimation based on medical images. More recently, the inverse analysis of large nonlinear cardiac simulations has been considered in [18], where POD-Galerkin ROMs have been built for the contraction of a four-chamber cardiac model defined over a patient-specific cardiac geometry, involving a monolithic coupling between a POD-reduced three-dimensional structural model – however treating the passive myocardium as isotropic – and a zero-dimensional Windkessel model for the blood circulation. In this latter case, a speed-up ranging from 5 to 13 times compared to the FOM is achieved by the ROM when evaluating cardiac outputs as functions of the contractility, the myofiber activation/deactivation rate, and the onset of ventricular systole/diastole. No hyper-reduction strategies are considered, in this latter example, to enhance the construction of the ROM.
In this paper we build POD-Galerkin-DEIM reduced order models, relying on (i) POD for finding a low-dimensional trial subspace, (ii) Galerkin projection to generate, in a physically consistent way, a reduced order model, and (iii) hyper-reduction techniques, through the discrete empirical interpolation method (DEIM), to accelerate the assembling of terms that are nonlinear with respect to the solution, the input parameters, or both. Such hyper-reduced ROMs are then exploited for the efficient and accurate solution of the time-dependent cardiac mechanics problem, on idealized and patient-specific left ventricle geometries, albeit using a relatively low number of degrees of freedom for practical reasons. Our mechanical problem is uncoupled from both cardiac electrophysiology and blood dynamics. However, we employ suitable analytical time-dependent functions to mimic the influence of both the active stress and the blood circulation, to assess the performance of the proposed methodology when the whole cardiac cycle is taken into account. Treating the passive myocardium as transversely isotropic makes the problem much more challenging than the case where the material is treated as isotropic, concerning the efficiency of the hyper-reduction stage. Indeed, while the construction of a reduced subspace to approximate the problem solution does not pose serious issues, resulting in extremely low dimensional spaces even for complex material laws, the bottleneck is represented by the assembling of the reduced operators, that is, the projection onto the RB space of the operators approximated through DEIM.
The structure of this paper is as follows. After briefly recalling (Section 2) the mathematical modeling of cardiac mechanics, we describe the high-fidelity finite element (FE) approximation we start from (Section 3). We then introduce the key tools of the proposed ROM technique (Section 4): the POD-Galerkin method and suitable hyper-reduction strategies to tackle nonlinear time-dependent problems. Numerical results dealing with a benchmark prolate spheroid geometry – concerning the passive inflation, without and with the active contraction – and the full cardiac cycle of a patient-specific left ventricle geometry are then shown (Section 5). Finally, open critical issues and future perspectives are outlined (Section 6).
2. Mathematical models for cardiac mechanics
After introducing a basic setting for solid mechanics, in this section we provide the formulation of the mathematical problems we are interested in. Given a continuum body B embedded in a three-dimensional Euclidean space, let us denote by Ω_0 ⊂ R³ its reference configuration at time t=0 and by Ω_t ⊂ R³ its current configuration at time t>0. The motion of the body χ : Ω_0 × R⁺ → R³ is defined as a function which takes a generic material point X ∈ Ω_0 and maps it onto the corresponding spatial point x = χ(X,t) ∈ Ω_t, for all times t>0. The displacement field is defined as

u(X, t) = χ(X, t) − X,

for all times t ≥ 0, and represents the unknown of the problem we are interested in. A central quantity in finite deformation theory is the so-called deformation gradient,

F(X, t) = ∂χ(X, t)/∂X,

which describes the relationship between quantities in the undeformed and the deformed configurations of the body, and can be expressed in terms of the displacement field as

F = I + ∇_0 u,

where I is the identity matrix and ∇_0 denotes the material gradient. The change in volume between the reference and the current configurations at time t>0 is given by the determinant of the deformation gradient,

J = det F,
also known as volume ratio. If there is no motion, i.e., x=X, we obtain the consistency condition J=1. In general, a motion for which this condition holds is said to be isochoric.
Another measure of deformation is the right Cauchy-Green deformation tensor C = FᵀF, whilst a common measure of the strain is the Green-Lagrange strain tensor

E = (1/2)(C − I),

which is defined in the reference configuration and is zero in the absence of motion. All these kinematic quantities are commonly used to express constitutive equations describing the relationship between mechanical forces and the material motion. We will restrict ourselves to the case of hyperelastic materials, for which we can assume that there exists a strain energy density function W = W(F) such that

P = ∂W/∂F
gives the functional stress-strain relationships, where P is the first Piola-Kirchhoff stress tensor.
2.1. Passive and active mechanics
We assume the myocardium to be a hyperelastic, orthotropic material with nonlinear passive behavior, as experiments have shown higher material stiffness and mechanical response along the cardiac fibers. Many constitutive laws have been derived for both its passive and active description – see, e.g., [19,20,21,22,23,24] – taking into account varying material symmetries. In this work, we adopt the relation proposed in [19], commonly referred to as the Guccione law, which assumes the material to be transversely isotropic, with its primary material axis oriented along the local fiber direction. The corresponding strain-energy density function is given by

W = (C/2) (exp(Q) − 1),     (2.2)

where the three-dimensional transverse isotropy with respect to the fiber coordinate system is accounted for by choosing

Q = b_f E_ff² + b_s E_ss² + b_n E_nn² + b_fs (E_fs² + E_sf²) + b_fn (E_fn² + E_nf²) + b_sn (E_sn² + E_ns²).

Here, E_ij, i,j ∈ {f,s,n}, are the components of the Green-Lagrange strain tensor E in the fiber, sheet and cross-fiber (normal) coordinate system, the material constant C scales the stresses, and the coefficients b_f, b_s, b_n (together with the shear coefficients b_fs, b_fn, b_sn) are related to the material stiffness in the fiber, sheet and cross-fiber directions, respectively.
Since biological tissues are mostly composed of water, the material density ρ_0 is often taken constant in time [23], so that the conservation of mass implies J = 1 and an incompressible formulation is adopted. However, as often done in cardiac mechanics [25,26], incompressibility is weakly imposed through an isochoric-volumetric decoupling of the strain energy function, obtained by adding to the passive contribution (2.2) a volumetric term W_vol(J). The volumetric term W_vol(J) is chosen to penalize large volume variations, so that it is usually a convex function with global minimum at J = 1, e.g.,

W_vol(J) = (K/2) (J − 1)²,

where K > 0 denotes the bulk modulus.
Another crucial aspect is the inclusion of the active contractile forces in the constitutive equation. Active properties are time-dependent and anisotropic, with more active stress generated along the local muscle fiber direction [27]. Active tension can be integrated into the passive stress tensor in different ways, among which the active stress and the active strain approaches represent the most common ones [28]. Here, we adopt the former, and add to the passive first Piola-Kirchhoff stress tensor a time-dependent active tension T_a(t;μ), which is assumed to act only in the fiber direction,

P = ∂W/∂F + T_a(t; μ) (F f_0 ⊗ f_0)/√(f_0 · C f_0),

where f_0 ∈ R³ denotes the reference unit vector in the fiber direction. Since F can be written in terms of the displacement field u, in the following we will write P = P(u).
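To fix ideas, the constitutive model above can be evaluated in a few lines of NumPy. The sketch below computes the first Piola-Kirchhoff stress for the nearly-incompressible Guccione law with the active term, assuming the six-coefficient exponent Q reported above, strains expressed in the fiber/sheet/normal frame, and the quadratic volumetric penalization; the function name, the default coefficients and the normalization of the active term are illustrative choices, not taken from the paper's code.

```python
import numpy as np

def guccione_piola(F, f0, Ta, C=2e3, b=None, K=50e3):
    """Sketch: first Piola-Kirchhoff stress for the (nearly incompressible)
    Guccione law plus an active-stress term along the fiber direction.
    Assumes the strain is expressed in the fiber/sheet/normal frame."""
    if b is None:
        # exponent weights [[b_f, b_fs, b_fn], [b_fs, b_s, b_sn], [b_fn, b_sn, b_n]]
        b = np.array([[8., 4., 4.], [4., 2., 2.], [4., 2., 2.]])
    I = np.eye(3)
    E = 0.5 * (F.T @ F - I)                  # Green-Lagrange strain
    Q = np.sum(b * E**2)                     # quadratic exponent of the Guccione law
    S_pas = C * np.exp(Q) * (b * E)          # S = dW/dE (second Piola-Kirchhoff)
    J = np.linalg.det(F)
    # volumetric penalization W_vol = K/2 (J-1)^2  =>  P_vol = K (J-1) J F^{-T}
    P_vol = K * (J - 1.0) * J * np.linalg.inv(F).T
    Ff0 = F @ f0
    I4f = Ff0 @ Ff0                          # f0 . C f0
    P_act = Ta * np.outer(Ff0, f0) / np.sqrt(I4f)  # active stress along fibers
    return F @ S_pas + P_vol + P_act
```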
2.2. Parameterized problems in cardiac mechanics
We now have all the ingredients required to formulate the mathematical model describing the motion of a body, specifically the cardiac left ventricle, in terms of an initial boundary-value problem (IBVP) for the elastodynamics equation, that is, the equation of motion of continuum mechanics.
To account for inter-patient variability, different scenarios can be described by assuming the dependence of the mechanical displacement on a set of input parameters, collectively denoted as μ ∈ P ⊂ R^P, where P is a suitable compact set. For the application at hand, possible parameters of interest are the coefficients of the constitutive law, such as those related to the stiffness of the cardiac muscle in different directions, or the bulk modulus K related to the material incompressibility; the fiber orientation, which is highly patient-specific and has a crucial impact on the torsion and shortening of the ventricle; moreover, the pressure acting on the endocardium and the active tension greatly affect the mechanical behavior; see Section 5.
The parameterized IBVPs of interest in cardiac mechanics can be stated in general form as follows: given μ ∈ P, a body force field b_0 = b_0(X,t;μ), a prescribed displacement ū = ū(X,t;μ) and a surface traction T̄ = T̄(X,t,N;μ), N being the outward unit normal to ∂Ω_0, find the displacement field u(μ) : Ω_0 × [0,T) → R³ such that

ρ_0 ∂²u/∂t² − ∇_0 · P(u) = b_0        in Ω_0 × (0,T),
u = ū                                 on Γ_0^D × (0,T),
P(u) N = T̄                            on Γ_0^N × (0,T),
P(u) N + α u + β ∂u/∂t = 0             on Γ_0^R × (0,T),
u = u_0,  ∂u/∂t = u̇_0                 in Ω_0 × {0},        (2.3)

where α, β ∈ R, Γ_0^D ∪ Γ_0^N ∪ Γ_0^R = ∂Ω_0 and Γ_0^i ∩ Γ_0^j = ∅ for i ≠ j, i,j ∈ {D,N,R}. These equations are inherently nonlinear, and an additional source of (exponential) nonlinearity is introduced by the material law (2.2). Hence, suitable discretization techniques are required for the solution of the IBVP.
3. High-fidelity approximation
Before addressing the construction of ROMs for the mechanical problem (2.3), we introduce its full-order approximation, upon which the ROMs will be built. In this work we rely on the finite element method (FEM) for the space discretization, and on backward differentiation formula (BDF) schemes for the time discretization. In particular, we neglect the external body forces b_0, assume homogeneous Dirichlet boundary conditions, i.e., ū = 0, and set α = β = 0, thus obtaining homogeneous Neumann boundary conditions on Γ_0^R. Moreover, the boundaries Γ_0^D, Γ_0^N and Γ_0^R represent the base (that is, the artificial boundary resulting from the truncation of the heart below the valves in a short-axis plane), the endocardium and the epicardium, respectively.
First of all, let us introduce the weak formulation of (2.3) in material description [29]. By setting

V(Ω_0) = {η ∈ [H¹(Ω_0)]³ : η|_{Γ_0^D} = 0},

multiplying the equation of motion (2.3)₁ by a test function η ∈ V(Ω_0) and integrating over Ω_0, we obtain the weak formulation of (2.3) as follows: for each t ∈ (0,T), find u(t) ∈ V(Ω_0) such that

∫_{Ω_0} ρ_0 (∂²u/∂t²) · η dΩ_0 + ∫_{Ω_0} P(u) : ∇_0 η dΩ_0 = ∫_{Γ_0^N} T̄ · η dΓ_0    ∀ η ∈ V(Ω_0).    (3.1)
From (3.1), we can set the finite element (FE) discretization of the problem. Let us denote by T_h a hexahedral mesh of Ω_0 such that ⋃_{τ∈T_h} τ is an approximation of the domain Ω_0, where h > 0 denotes the grid size, and define, for a degree r ≥ 1, the FE space

X_h^r = {v ∈ C⁰(Ω̄_0) : v|_τ ∈ Q_r(τ) ∀ τ ∈ T_h},

where Q_r(τ) is the set of polynomials of degree smaller than or equal to r in each variable on the element τ, and dim(X_h^r) = N_{h,r}^{dof} denotes the total number of degrees of freedom (dofs). The FE space of vector-valued functions is defined as

V_h = [X_h^r]³ = span{φ_i : i = 1,…,N_h},

whose dimension N_h = 3 N_{h,r}^{dof} corresponds to the total number of structural dofs, including those associated with the Dirichlet boundary conditions, and φ_i : Ω_0 → R³, for i = 1,…,N_h, are the vector-valued basis functions. Given w(t) ∈ V(Ω_0), its approximation w̃_h(t) ≈ w(t) in V_h can be expressed as a linear combination of the basis functions {φ_i}_{i=1}^{N_h} as

w̃_h(t) = Σ_{i=1}^{N_h} w_{h,i}(t) φ_i,

where w_h(t) = [w_{h,i}(t)]_{i=1}^{N_h} ∈ R^{N_h} denotes the corresponding vector of coefficients in the expansion with respect to the FE basis, that is, the unknown of our high-fidelity FOM.
We can now introduce the semi-discrete Galerkin-FE approximation of the problem, which takes the form of a second-order dynamical system: for each t ∈ (0,T), find u_h(t;μ) ∈ R^{N_h} such that

M ∂²u_h/∂t²(t;μ) + S(u_h(t;μ); t, μ) = 0,   t ∈ (0,T),
u_h(0;μ) = u_{h,0},   ∂_t u_h(0;μ) = ∂_t u_{h,0},        (3.2)

where u_{h,0} = [(u_0, φ_i)_{[L²(Ω_0)]³}]_{i=1}^{N_h}, ∂_t u_{h,0} = [(u̇_0, φ_i)_{[L²(Ω_0)]³}]_{i=1}^{N_h} and

M_ij = ∫_{Ω_0} ρ_0 φ_j · φ_i dΩ_0,    S_i(u_h; t, μ) = ∫_{Ω_0} P(w̃_h) : ∇_0 φ_i dΩ_0 − ∫_{Γ_0^N} T̄ · φ_i dΓ_0,    i,j = 1,…,N_h,

M ∈ R^{N_h×N_h} being the mass matrix and S the (nonlinear) vector collecting the internal stresses and the surface traction contributions.
For the time discretization of the dynamical system (3.2), we consider a uniform partition {t^n = nΔt, n = 0,…,N_t} of the interval (0,T), where Δt = T/N_t is the time step length. To approximate the first and second derivatives at time t^n, we employ the following BDF schemes of order 1,

∂u_h/∂t(t^n) ≈ (u_h^n − u_h^{n−1})/Δt,    ∂²u_h/∂t²(t^n) ≈ (u_h^n − 2u_h^{n−1} + u_h^{n−2})/Δt²,

respectively, where the superscripts n, n−1 and n−2 denote the solution u_h computed at times t^n, t^{n−1} and t^{n−2}, e.g., u_h(t^n) = u_h^n. From now on, we indicate all the quantities computed at time t^n with the superscript n, for n = 0,…,N_t. Note that we employ implicit time integration schemes in order to avoid restrictions on the time step due to the highly nonlinear terms of the strain energy density function considered in (2.2).
Finally, we obtain the following fully-discrete approximation of (2.3): for each n = 1,…,N_t, find u_h^n ∈ R^{N_h} such that

R(u_h^n, t^n; μ) = 0,        (3.3)

where u_h^{−1} and u_h^0 are given, and

R(u_h^n, t^n; μ) := M (u_h^n − 2u_h^{n−1} + u_h^{n−2})/Δt² + S(u_h^n; t^n, μ)

denotes the high-fidelity residual vector.
For the solution of the nonlinear problems arising at each time step, we use the Newton method, turning the nonlinear problem (3.3) into a sequence of linear problems of the following form: given μ ∈ P, for each n = 1,…,N_t, given an initial guess u_h^{n,(0)}(μ), for k ≥ 0, find δu_h^{(k)}(μ) ∈ R^{N_h} such that

J(u_h^{n,(k)}(μ), t^n; μ) δu_h^{(k)}(μ) = −R(u_h^{n,(k)}(μ), t^n; μ),
u_h^{n,(k+1)}(μ) = u_h^{n,(k)}(μ) + δu_h^{(k)}(μ),        (3.4)

until ‖R(u_h^{n,(k+1)}(μ), t^n; μ)‖₂ / ‖R(u_h^{n,(0)}(μ), t^n; μ)‖₂ < ε, where ε > 0 is a prescribed tolerance. Here, J ∈ R^{N_h×N_h} denotes the Jacobian matrix, whose components are given by

J_ij(u_h, t; μ) = ∂R_i(u_h, t; μ)/∂u_{h,j},    i,j = 1,…,N_h.

The initial guess u_h^{n,(0)}(μ) is set equal to the initial datum u_{h,0}(μ) when n = 1, and to the solution at the previous time step u_h^{n−1}(μ) for n = 2,…,N_t. From now on, we will refer to (3.4) as the FOM for our problem.
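The resulting time-marching procedure can be condensed into the following minimal Python sketch of the BDF-Newton loop (3.3)-(3.4); `residual` and `jacobian` stand for the FE assembly routines of R and J, which are assumed to be provided (e.g., by a binding to the FE library), and all names are illustrative.

```python
import numpy as np

def solve_fom(residual, jacobian, u0, um1, Nt, dt, tol=1e-6, max_iter=20):
    """Sketch of the fully-discrete FOM time loop (3.3)-(3.4).
    residual(u, u_prev, u_prev2, t) embeds the BDF acceleration term;
    u0, um1 are the given initial states u_h^0 and u_h^{-1}."""
    u_prev2, u_prev = um1, u0
    history = [u0]
    for n in range(1, Nt + 1):
        t = n * dt
        u = u_prev.copy()                     # initial guess: previous time step
        r0 = np.linalg.norm(residual(u, u_prev, u_prev2, t))
        for _ in range(max_iter):
            r = residual(u, u_prev, u_prev2, t)
            if np.linalg.norm(r) < tol * r0:  # relative stopping criterion
                break
            du = np.linalg.solve(jacobian(u, t), -r)   # Newton step
            u = u + du
        u_prev2, u_prev = u_prev, u
        history.append(u)
    return history
```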
Remark 1. An alternative to the Newton iterative scheme is, e.g., Broyden's quasi-Newton method [30], as used, e.g., in [16,31] for the construction of ROMs for nonlinear elasticity problems. In this way, the computation of the Jacobian matrix at each iteration k ≥ 0 can be avoided, by replacing it with rank-one updates based on residuals computed at previous iterations.
4. Projection-based model order reduction
To mitigate the computational cost associated with the FOM solution, we introduce a projection-based ROM, by relying on the reduced basis (RB) method [8]. To make this paper self-contained, in this section we describe the construction of a POD-Galerkin-DEIM ROM for nonlinear time-dependent problems arising in cardiac mechanics, generalizing the strategies proposed in [15]. Details related to both POD and DEIM methods are reported in the Appendix.
4.1. POD-Galerkin method for solution-space reduction
The goal of the RB method for parameterized PDEs is to approximate the solution manifold

M_h = {u_h(t; μ) : t ∈ [0,T], μ ∈ P} ⊂ R^{N_h}

with a low number N (less than a few dozens, or hundreds at most) of basis functions, forming the so-called reduced basis, whose nodal values are collected column-wise in the RB matrix V ∈ R^{N_h×N}. This is usually done by performing a Galerkin projection of the high-fidelity problem onto the N-dimensional subspace spanned by the reduced basis functions, obtaining a reduced problem of much lower computational complexity, which still respects the structure of the underlying PDE and retains the essential features of the parameter-to-solution map. More precisely, the reduction procedure consists of:
1) the construction of a reduced basis V = [ξ_1 | … | ξ_N] ∈ R^{N_h×N} from FOM solution snapshots

S_u = [u_h^1(μ_1) | … | u_h^{N_t}(μ_1) | … | u_h^1(μ_{n_s}) | … | u_h^{N_t}(μ_{n_s})],        (4.1)

obtained for different parameter values μ_ℓ ∈ P, ℓ = 1,…,n_s, suitably sampled over P;
2) the definition of the ROM by forcing the high-fidelity residual vector R ∈ R^{N_h}, computed over the RB solution, to be orthogonal to the subspace spanned by the columns of V, that is,

V^T R(V u_N^n(μ), t^n; μ) = 0,   n = 1,…,N_t;        (4.2)
3) the solution of (4.2) for a given μ ∈ P, so that, at the end, u_h^n(μ) ≈ V u_N^n(μ), n = 1,…,N_t.
Two popular methods for the construction of the RB basis are the greedy algorithm [32], based on an a posteriori error estimator, and the proper orthogonal decomposition (POD), based on a singular value decomposition (SVD) of the snapshots matrix. Since for time-dependent, nonlinear problems error bounds are usually extremely difficult to obtain, in this work we rely on the POD method, which Section 4.2 is devoted to.
The solution to (4.2) can be found by employing the Newton method, obtaining a sequence of N-dimensional linear systems: given μ ∈ P and, for n = 1,…,N_t, the initial guess u_N^{n,(0)}(μ) = u_N^{n−1}(μ), find δu_N^{n,(k)}(μ) ∈ R^N such that, for k ≥ 0,

V^T J(V u_N^{n,(k)}(μ), t^n; μ) V δu_N^{n,(k)}(μ) = −V^T R(V u_N^{n,(k)}(μ), t^n; μ),
u_N^{n,(k+1)}(μ) = u_N^{n,(k)}(μ) + δu_N^{n,(k)}(μ),        (4.3)

until ‖V^T R(V u_N^{n,(k+1)}(μ), t^n; μ)‖₂ / ‖V^T R(V u_N^{n,(0)}(μ), t^n; μ)‖₂ < ε_nwt, where ε_nwt > 0 is a chosen tolerance.
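In code, a single reduced Newton update (4.3) might look as follows — a sketch assuming the FOM assembly routines are available as callables; note that both arrays are still assembled at the full-order dimension before being projected, which is precisely the bottleneck discussed next.

```python
import numpy as np

def rom_newton_step(residual, jacobian, V, uN, t, mu):
    """One Galerkin-ROM Newton update (4.3): assemble the FOM arrays at the
    reconstructed state V @ uN, then project them onto the reduced space."""
    u_full = V @ uN                          # lift the reduced state to R^{Nh}
    RN = V.T @ residual(u_full, t, mu)       # projected residual, size N
    JN = V.T @ jacobian(u_full, t, mu) @ V   # projected Jacobian, N x N
    return uN + np.linalg.solve(JN, -RN)
```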
The efficiency of the RB method relies, mainly, on two assumptions: first, that the solution manifold M_h has low dimension; second, that the reduction procedure can be split into offline and online stages, where the latter is completely independent of the high-fidelity dimension [33]. The first hypothesis concerns the approximability of the solution set, and is verified in the case of the problems we are focusing on, as shown in Section 5; the second assumption, unfortunately, does not hold for nonlinear problems such as the ones we are considering. In fact, since both the residual vector R and the Jacobian matrix J in (4.3) depend on the solution at the previous iteration V u_N^{n,(k)}(μ), they need to be assembled at each Newton step. This means that, in order to set up the reduced system (4.3)₁, we need to assemble the high-dimensional arrays before projecting them onto the reduced space spanned by the columns of V, entailing a computational complexity that still depends on (suitable powers of) N_h. To overcome this issue, a further level of approximation, known as hyper-reduction, must be introduced, thus pursuing an approximate-then-reduce strategy.
4.2. Reduced basis construction
For the construction of the reduced basis V ∈ R^{N_h×N}, introduced in Section 4.1, we use POD. Due to its ease of implementation and its deep mathematical roots (related to the analysis of compact operators, to the matrix SVD, and to dimensionality reduction in data analysis, just to mention a few links), POD has been applied in a broad range of engineering fields to reduce the dimension of a given data set in an optimal sense. In this work, POD is used to build the RB basis V, as well as for the construction of the DEIM basis Φ_R for the nonlinear terms (see Section 4.3).
Given the snapshot matrix S_u defined in (4.1), with n_s < N_h, POD aims at approximating the solution manifold with a low-dimensional linear subspace, retaining as much as possible of the information gathered in the snapshots. In particular, the N-dimensional POD basis is obtained by computing the SVD S_u = U Σ Z^T of S_u, and then collecting the first N columns of U ∈ R^{N_h×N_h}, corresponding to the left singular vectors associated with the N largest singular values, i.e.,

V = [ξ_1 | … | ξ_N].

The singular values σ_1 ≥ ⋯ ≥ σ_r > 0, where r ≤ min(N_h, n_s) is the rank of S_u, provide a heuristic criterion for choosing the basis dimension N, namely

N = min{ n : Σ_{i=1}^n σ_i² / Σ_{i=1}^r σ_i² ≥ 1 − ε_POD² },

where ε_POD > 0 is a given tolerance (see also Appendix A). A rapid decay of the singular values means that a limited number of POD modes is potentially sufficient to represent the entire manifold, so that the problem is reducible. This is exactly the kind of situation faced when dealing with the reduction of the solution space related to a problem in nonlinear elastodynamics. Moreover, an efficient, non-deterministic version of POD can be obtained by relying on the so-called randomized SVD (see Appendix B), which offers a powerful tool for performing low-rank matrix approximations, especially when dealing with massive data sets, as in the cases we have considered.
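As a reference, the energy-based truncation criterion above takes only a few lines of NumPy; this is a plain sketch (names are illustrative), with the randomized variant of the SVD deferred to Appendix B.

```python
import numpy as np

def pod_basis(S, eps_pod=1e-4):
    """Sketch of POD: truncated left singular vectors of the snapshots
    matrix, with N selected by the energy criterion driven by eps_pod."""
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(sigma**2) / np.sum(sigma**2)
    N = int(np.searchsorted(energy, 1.0 - eps_pod**2) + 1)  # smallest admissible N
    return U[:, :N], sigma
```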
4.3. DEIM for hyper-reduction/system approximation
In the case of PDEs featuring a nonaffine dependence on the parameters and/or a nonlinear (high-order polynomial, or non-polynomial) dependence on the field variable, a further level of reduction must be introduced to guarantee the offline-online decoupling in the ROM construction [33]. To recover the ROM efficiency, state-of-the-art methods, such as the empirical interpolation method (EIM) [34,35,36], the discrete empirical interpolation method (DEIM) [37], its matrix variant MDEIM [38,39,40], the missing point estimation [41] and the Gauss-Newton with approximated tensors [42], aim at recovering an affine expansion of the nonlinear operators by computing only a few entries of the nonlinear terms.
Although originally developed in the context of nonaffine operators, DEIM represents a valid hyper-reduction technique also for nonlinear parameterized PDEs (see, e.g., [15,31,43,44,45,46]), employing an interpolation scheme for the approximation of the nonlinear function. The key idea of DEIM is to replace the nonlinear arrays in (4.3) with a collateral reduced-basis expansion, computed through a (hopefully inexpensive) interpolation procedure. In the case of DEIM, the construction of the interpolation points, commonly referred to as magic points, is based on a greedy algorithm, while the (prior) construction of the reduced basis is obtained by performing POD (or randomized SVD) on a set of proper snapshots; in the case of EIM, both tasks are performed at the same time, exploiting a greedy algorithm. The DEIM algorithm, as originally proposed in [37], is outlined in Appendix C.
For the case at hand, the high-dimensional residual R is projected onto a reduced subspace of dimension m < N_h spanned by a basis Φ_R ∈ R^{N_h×m}, that is,

R(V u_N^{n,(k)}(μ), t^n; μ) ≈ Φ_R c(t^n; μ),

where c ∈ R^m is the vector of the unknown amplitudes. The matrix Φ_R can be pre-computed offline by performing POD on a set of high-fidelity residuals collected when solving (4.3) for n_s′ training input parameters (different from the ones used for the RB basis construction),

S_R = [R(V u_N^{n,(k)}(μ_ℓ′), t^n; μ_ℓ′)],   ℓ′ = 1,…,n_s′,  n = 1,…,N_t,  k ≥ 0.        (4.4)
Remark 2. In addition to the set of snapshots (4.4), we can consider the FOM residuals collected when solving (3.4) during the RB-basis construction, i.e.,

[R(u_h^{n,(k)}(μ_ℓ), t^n; μ_ℓ)],   ℓ = 1,…,n_s,  n = 1,…,N_t,  k ≥ 0.
Taking into account both FOM and ROM residuals entails no extra computational cost and usually improves the overall accuracy, assuming n_s′ > n_s. On the contrary, collecting FOM residuals only gave inaccurate results in all the performed test cases, as shown, for the stationary case, in [15]. In fact, DEIM aims at approximating the nonlinear operators evaluated at the ROM solution, rather than at the FOM solution. Therefore, the construction of the reduced space and the system approximation must be performed sequentially.
For every new instance of the parameter, the μ-dependent coefficient vector c is efficiently evaluated online by collocating the approximation at the m components selected by a greedy procedure, that is,

c(t^n; μ) = (Φ_R|_I)^{−1} R(V u_N^{n,(k)}(μ), t^n; μ)|_I,

where Φ_R|_I and R(⋅)|_I are the restrictions of Φ_R and R(⋅) to the subset of indices I, respectively. We thus define the hyper-reduced residual vector approximating V^T R(V u_N^{n,(k)}(μ), t^n; μ) as

R_{N,m}(V u_N^{n,(k)}, t^n; μ) := V^T Φ_R (Φ_R|_I)^{−1} R(V u_N^{n,(k)}, t^n; μ)|_I.

During the online phase we only need to assemble the μ-dependent quantities R(V u_N^{n,(k)}, t^n; μ)|_I, which are vectors of (possibly small) dimension m. All other quantities are constant (in fact, Φ_R depends neither on t > 0 nor on μ ∈ P) and can be pre-computed and stored offline. Finally, the approximation to V^T J(V u_N^{n,(k)}, t^n; μ) V can be computed as the derivative of R_{N,m}(V u_N^{n,(k)}, t^n; μ) with respect to the reduced displacement, that is,

J_{N,m}(V u_N^{n,(k)}, t^n; μ) := V^T Φ_R (Φ_R|_I)^{−1} J(V u_N^{n,(k)}, t^n; μ)|_I V,

or by relying on the MDEIM algorithm, as done in [15]. However, since we employ automatic differentiation to get (an approximation of) the Jacobian matrices, we adopt the former approximation. As before, the only quantity that must be computed online is the restriction of the Jacobian matrix to the rows corresponding to the magic points, i.e., J(V u_N^{n,(k)}, t^n; μ)|_I ∈ R^{m×N_h}. Note that employing DEIM can be regarded as the use of an exact Newton method on the reduced problem R_{N,m}(V u_N^{n,(k)}(μ), t^n; μ) = 0, such that the k-th Newton iteration for its solution reads

J_{N,m}(V u_N^{n,(k)}, t^n; μ) δu_N^{n,(k)}(μ) = −R_{N,m}(V u_N^{n,(k)}, t^n; μ).

DEIM thus avoids any full-order evaluation, highly decreasing the computational effort, provided its dimension m is not too large [9]. Unfortunately, as we will see in the following section, when dealing with problems arising in nonlinear elastodynamics characterized by highly nonlinear constitutive laws, the DEIM dimension cannot be kept small if a sufficient accuracy level is to be ensured.
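The online DEIM machinery then amounts to a handful of small matrix products. The sketch below illustrates one hyper-reduced Newton update, assuming FE routines able to assemble only the m rows indexed by the magic points (cf. Remark 3 on the reduced mesh); names and signatures are illustrative.

```python
import numpy as np

def build_deim_operators(V, Phi, idx):
    """Offline: precompute the parameter-independent DEIM projection
    matrix V^T Phi (Phi|_I)^{-1}, of size N x m."""
    return V.T @ Phi @ np.linalg.inv(Phi[idx, :])

def hyper_reduced_newton_step(residual_at, jacobian_at, P_deim, V, idx, uN, t, mu):
    """One hyper-reduced Newton update: only the m rows of R and J indexed
    by the magic points are assembled (on the reduced mesh)."""
    u_full = V @ uN
    RNm = P_deim @ residual_at(u_full, t, mu, idx)        # size N
    JNm = P_deim @ jacobian_at(u_full, t, mu, idx) @ V    # size N x N
    return uN + np.linalg.solve(JNm, -RNm)
```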
In Algorithms 1 and 2 we report the offline and online stages of the POD-Galerkin-DEIM ROM, respectively.
Remark 3. The m points selected by the DEIM algorithm correspond to a subset of nodes of the computational mesh, which, together with the neighboring nodes (i.e., those sharing the same cell), form the so-called reduced mesh (see Figure 1). Since the entries of any FE vector are associated with the dofs of the problem, R_{N,m} and J_{N,m} can be computed by integrating the corresponding FOM residual and Jacobian only on the quadrature points belonging to the reduced mesh.
Remark 4. We recall that both the FOM and the ROM arrays, such as the solution and the residual vectors, are column vectors whose elements are the values of the associated quantities evaluated at the dofs of the physical mesh. Therefore, when assembling the snapshots matrices used to compute the RB and DEIM bases, the corresponding arrays are stacked column-wise.
5. Numerical results
In this section we investigate the performance of the POD-Galerkin-DEIM method on the solution of the parameterized nonlinear time-dependent mechanical problem (2.3), focusing on cardiac applications, namely:
(i) two test cases on an idealized left ventricle geometry, simulating cardiac relaxation and contraction, respectively; the steady-state versions of these test cases have been introduced in [47] as benchmarks for the validation of cardiac mechanics software;
(ii) an idealized full cycle of a patient-specific left ventricle, where both pressure and active stress are imposed.
In all these scenarios, the traction vector T̄ is given by
where g(t;μ) represents the blood pressure exerted on the endocardium and will be further specified according to the application at hand. As a measure of the accuracy of the ROM with respect to the FOM, for a given parameter instance, we consider the time-averaged absolute and relative L²-errors of the displacement vector, that is,

ϵ_abs(μ) = (1/N_t) Σ_{n=1}^{N_t} ‖u_h^n(μ) − V u_N^n(μ)‖₂,    ϵ_rel(μ) = (1/N_t) Σ_{n=1}^{N_t} ‖u_h^n(μ) − V u_N^n(μ)‖₂ / ‖u_h^n(μ)‖₂.
The CPU time ratio, i.e., the ratio between the FOM and the ROM computational times, is used to measure the ROM efficiency, since it represents the speed-up of the ROM with respect to the FOM. The code is implemented in Python in our software package pyfex, which provides a Python binding to the in-house finite element library lifex (https://lifex.gitlab.io/lifex), a high-performance C++ library developed within the iHEART project* and based on the deal.II (https://www.dealii.org) finite element core [48]. All the computations have been performed on a desktop computer with a 3.70 GHz Intel Core i5-9600K CPU and 16 GB RAM.
*iHEART - An Integrated Heart Model for the simulation of the cardiac function, European Research Council (ERC) grant agreement No 740132, P.I. Prof. A. Quarteroni.
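For completeness, the accuracy and efficiency metrics used in the following can be computed as in this short sketch, which assumes the time-averaged error definitions given above (array layouts and names are illustrative):

```python
import numpy as np

def rom_accuracy(U_fom, U_rom, t_fom, t_rom):
    """Time-averaged L2 errors and speed-up of the ROM vs. the FOM.
    U_fom, U_rom: arrays of shape (Nt, Nh) with the FOM and reconstructed
    ROM displacements; t_fom, t_rom: wall-clock times in seconds."""
    diff = np.linalg.norm(U_fom - U_rom, axis=1)
    eps_abs = diff.mean()
    eps_rel = (diff / np.linalg.norm(U_fom, axis=1)).mean()
    return eps_abs, eps_rel, t_fom / t_rom   # speed-up = CPU time ratio
```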
5.1. Benchmark problems with a prolate spheroid geometry
We first investigate the performance of POD-Galerkin-DEIM reduced order models on benchmark problems in cardiac mechanics. In both cases, the reference geometry Ω_0 ⊂ R³ is that of a truncated ellipsoid, and the material law adopted is the nearly-incompressible Guccione relation (2.2), although different parameter values are taken into account, according to [47]. For what concerns the boundary conditions of problem (2.3), we apply a linearly increasing external pressure

g(t; μ) = p̃ t / T

at the endocardium (i.e., Γ_0^N), with p̃ > 0, simulating the presence of blood inside the cardiac chamber, and consider homogeneous Neumann and Dirichlet conditions at the epicardium (i.e., Γ_0^R with α = β = 0) and on the base (i.e., Γ_0^D), respectively, the latter representing the artificial boundary resulting from the truncation of the heart below the valves in a short-axis plane.
The FOM is built on a hexahedral mesh with 4804 elements and 6455 vertices, depicted in Figure 2, resulting in a high-fidelity dimension N_h = 19365, since Q₁-FE (that is, trilinear FE on hexahedral meshes) are used. We point out that the choice of a rather coarse computational mesh results in moderate speed-ups of the ROM with respect to the FOM in the considered test cases. However, taking into account finer meshes, and thus larger high-fidelity dimensions N_h, higher speed-ups can be achieved by the POD-Galerkin-DEIM ROMs, since the reduced basis dimension N remains small (see Section 5.1.2).
5.1.1. Passive inflation of a ventricle
In this first case, we do not account for anisotropy and only consider the passive contribution of the Piola tensor, so that the obtained deformation pattern simulates passive ventricular diastole. As parameters, we consider
● the material stiffness in the fiber and cross-fiber directions b_f, b_n ∈ [0.8, 1.2],
● the multiplicative factor C ∈ [8·10³, 12·10³] Pa.
All the other parameters are fixed to their reference values: b_s = b_fs = b_fn = b_sn = 1, K = 50·10³ Pa and p̃ = 10·10³ Pa. For the time setting, we choose t ∈ (0, 0.25) s and a uniform time step Δt = 5·10⁻³ s, resulting in a total number of 50 time instances. In Figure 3 we report the high-fidelity displacement at different time instants. Computing the FOM solution requires almost 340 s on average.
For n_s = 50 points sampled in the parameter space P through latin hypercube sampling (LHS), we collect the solution snapshots in order to compute the RB basis V ∈ R^{N_h×N}. The singular values arising from the SVD of the snapshots matrix S_u are reported in Figure 4, where a rapid decay is observed: we thus expect a small number of basis functions to be sufficient for the ROM to guarantee a good approximation of the high-fidelity solution manifold. To assess the performance of the method with respect to the dimension of the reduced subspace, we consider decreasing POD tolerances ε_POD, corresponding to RB dimensions N = 5, 7, 12, 16, 26, 32 and 55, respectively.
First, we investigate the performance of the ROM without hyper-reduction. Figure 5 shows the error and the CPU time ratio as functions of ε_POD, where the average is computed over a testing set of 20 parameters, different from those used for the generation of the FOM snapshots. The plot confirms that few basis functions are required to accurately approximate the high-fidelity displacements.
In particular, we observe that the relative error ϵ_rel associated with the ROM approximation decreases by almost three orders of magnitude when going from N = 5 to N = 55. However, since at each Newton step we need to assemble the high-dimensional arrays before projecting them onto the reduced space, the CPU time required by the ROM, despite decreasing as N becomes smaller, remains almost comparable to that of the FOM for all RB dimensions.
Then, we investigate the impact of hyper-reduction on the ROM solution error and efficiency. In order to build the POD-Galerkin-DEIM hyper-ROM, we consider N = 16 basis functions for the state reduction, obtained for ε_POD = 10⁻⁵. For the construction of the residual basis Φ_R ∈ R^{N_h×m}, we rely on a snapshots set computed for n_s′ = 50 points in P and apply POD on S_R using decreasing DEIM tolerances ε_DEIM, corresponding to m = 84, 127, 148, 207, 239 and 368, and compare these options.
As shown in Table 1 and Figure 6, no loss of accuracy is experienced when decreasing the number of DEIM interpolation points m. However, larger tolerances ε_DEIM on the DEIM error would not ensure the convergence of the reduced Newton system for all testing parameters, so that no higher speed-up can be achieved by further reducing the size of the residual basis. Furthermore, we point out that m > N, meaning that the residual vector shows a higher variability than the displacement. The POD-Galerkin-DEIM hyper-ROM with m = 84 is able to achieve a speed-up of 9.3× compared to the FOM, still achieving an approximation error equal to the projection error, that is, ϵ_rel ≈ 10⁻³.
5.1.2. Passive inflation and active contraction of a ventricle
The second benchmark takes into account both a varying fiber distribution and contractile forces, dealing with the inflation and the active contraction of an idealized left ventricle with transversely isotropic material properties. Although this is still an idealized test case, the displacement field reproduces the typical twisting motion of the left ventricle during systole, caused by the distribution of the muscular fibers. To compute the fiber orientation in cardiac geometries, suitable rule-based methods have been developed [49,50,51,52], which usually depend on a set of parameter angles. In this particular case, we consider the method proposed in [53], where α_epi and α_endo represent the rotation angles of the fibers on the epicardium and the endocardium, respectively.
Moreover, we surrogate the active force generation driving the contraction mechanics by exploiting an active stress approach, considering an anisotropic active tension applied in the fiber direction only; see Section 2.1. In particular, the parameterized active tension T_a = T_a(t;μ) in the fiber direction is modeled as a linear ramp of the form

T_a(t; μ) = T̃_a t / T.
To assess the performance of the POD-Galerkin-DEIM hyper-ROM in reducing the myocardium contraction problem, we consider as unknown parameters:
● the maximum value of the active tension T̃_a ∈ [49.5·10³, 70.5·10³] Pa,
● the fiber angles α_epi ∈ [−105.5, −74.5]° and α_endo ∈ [74.5, 105.5]°,
all related to the active component of the strain energy function. In particular, higher values of the active tension correspond to a larger contraction of the cardiac muscle, whilst the fiber angles influence the twisting motion. All the other parameters are fixed to their reference values, namely b_f = 8, b_s = b_n = b_sn = 2, b_fs = b_fn = 4, C = 2·10³ Pa, K = 50·10³ Pa and p̃ = 15·10³ Pa. Regarding the time discretization, we choose t ∈ (0, 0.25) s and a uniform time step Δt = 5·10⁻³ s. Given a training set of n_s = 50 points obtained by sampling the parameter space P, we perform a convergence analysis of the ROM without hyper-reduction, by constructing the reduced basis V ∈ R^{N_h×N} for different values of N and computing the associated approximation errors.
From Figure 7, we observe a slower decay of the singular values of S_u with respect to the previous test case of passive inflation (note the different values on the y-axis). In fact, using the same POD tolerances as before, we obtain the RB dimensions N = 16, 22, 39, 50, 87, 109 and 178, respectively, much larger than the ones obtained for similar tolerances in the previous test case. This behavior is somehow expected, as the underlying system dynamics, simulating the ventricular contraction and the associated torsion, is more involved than the idealized diastole, in which tissue anisotropy is neglected.
As shown by the results obtained so far, the construction of a ROM is highly problem-dependent, since the parameters considered, e.g., in the constitutive relation, strongly influence the shape of the solution manifold, and thus the RB dimension necessary to obtain comparable accuracy among different ROMs. The error and the CPU speed-up, averaged over a testing set of 20 parameters, are both shown in Figure 8 as functions of the POD tolerance ε_POD. As already discussed, although we observe lower online CPU times when smaller RB dimensions N are employed, the speed-up achieved by the ROM without hyper-reduction is negligible. As for the approximation error, we observe a reduction of almost two orders of magnitude when going from N = 16 to N = 178.
Given V ∈ R^{N_h×N} with N = 16, we construct the POD-Galerkin-DEIM approximation by considering n_s′ = 200 parameter samples. Figure 9 shows the decay of the singular values of S_R, that is, the snapshots matrix of the residual vectors R(V u_N^{n,(k)}(μ_ℓ′), t^n; μ_ℓ′). We observe that the reported curve decreases very slowly, so that we expect a large number of basis functions to be required to correctly approximate the nonlinear operators. In fact, by computing Φ_R ∈ R^{N_h×m} using decreasing DEIM tolerances ε_DEIM, we obtain m = 303, 456, 543, 776, 902 and 1233, respectively. Higher values of ε_DEIM (related to hopefully smaller dimensions m) were not sufficient to guarantee the convergence of the reduced Newton problem for all the parameter combinations considered. The average relative error over a set of 20 parameters and the computational speed-up are both reported in Figure 10. In particular, we observe that the relative error lies between 4·10⁻³ and 8·10⁻³, indeed quite close to the values reported in Figure 8. In this respect, provided a sufficiently large DEIM dimension is considered, hyper-reduction does not significantly impact the ROM accuracy. Moreover, the DEIM performance is not influenced by the size of the sample considered for the residual arrays: indeed, collecting a smaller number n_s′ of samples would only have small effects on the approximation error, as shown in Figure 11.
Similar results are obtained when using a finer mesh with 13025 vertices, corresponding to N_h = 39075 dofs for the FOM. In this case, a reduced basis of dimension N = 16 is computed using ε_POD = 10⁻³, while the POD tolerance ε_DEIM for the approximation of the nonlinear arrays has to be chosen no larger than 10⁻⁴ to ensure Newton convergence, yielding m = 385 DEIM terms and a corresponding approximation error ϵ_rel ≈ 5·10⁻². In this case, the highest speed-up achieved by the ROM is greater than that obtained for the coarser mesh (8× against 6×), suggesting that the convenience of POD-Galerkin-DEIM grows as the dimension of the underlying FOM increases.
Indeed, when a larger reduced basis is considered, such as in the case of N = 39 basis functions obtained for ε_POD = 10⁻⁴ with a FOM of dimension N_h = 19365, a remarkable improvement in accuracy is achieved, as shown in Figure 12 and Table 2. However, a higher number of DEIM magic points is then required for the solution of the reduced nonlinear system, ultimately doubling the online CPU time required to solve the hyper-ROM for each parameter instance.
As a matter of fact, the residual basis dimension m is highly influenced by the size N of the reduced subspace for the solution: a larger basis V requires a greater number of interpolation points to correctly approximate the reduced nonlinear operators, thus reducing the overall speed-up of the ROM. Therefore, choosing N = 16 represents a good trade-off between accuracy and efficiency. The FOM and the POD-Galerkin-DEIM ROM solutions computed using m = 303 at different time instants are shown in Figures 13 and 14 for two parameter values, together with their point-wise difference. We observe a good agreement between the high-fidelity and the reduced dynamics, with the greatest pointwise approximation error located near the apex, that is, where the greatest displacement is experienced. About 6 minutes are required on average to solve the FOM for each instance of the parameter, while the solution of the hyper-ROM requires less than a minute, thus achieving a speed-up equal to 6.2×.
At last, we consider a higher-dimensional parameter space, taking P = 12 input parameters, as follows:
● the material stiffness in different directions b_f ∈ [6.6, 9.4], b_s, b_n, b_sn ∈ [1.65, 2.35], b_fs, b_fn ∈ [3.3, 4.7],
● the bulk modulus K ∈ [4·10⁴, 6·10⁴] Pa,
● the multiplicative constant C ∈ [1.5·10³, 2.5·10³] Pa,
● the maximum active tension T̃_a ∈ [49.5·10³, 70.5·10³] Pa,
● the fiber angles α_epi ∈ [−105.5, −74.5]° and α_endo ∈ [74.5, 105.5]°,
● the steepness of the pressure ramp p̃ ∈ [14·10³, 16·10³] Pa.
We consider n_s = 50 and n_s′ = 75 training parameters, sampled in the parameter space P through LHS, for the construction of the FOM solution and of the ROM residual snapshots matrices, respectively. Further details are summarized in Table 3.
Despite increasing the number of unknown parameters from 3 to 12, the hyper-ROM dimension N is almost unaffected by the choice of a higher-dimensional parameter space, as the mechanical behavior during systole is mostly influenced by the active stress and the fiber orientation. To better appreciate this, we perform a sensitivity analysis in order to quantify the effects of parameter variations on some quantities of interest, that is, outputs y(μ) associated with the PDE solution u(X,t;μ).
A variance-based measure of sensitivity can be obtained, e.g., through the Sobol' indices. These can be computed by evaluating the underlying model for multiple parameter values and analyzing the statistical properties of the associated input-output samples. In particular, we define the main and total effect indices

S_i = Var_{μ(i)}( E_{μ∼i}[ y | μ(i) ] ) / Var(y),        (5.1)

S_{T_i} = E_{μ∼i}[ Var_{μ(i)}( y | μ∼i ) ] / Var(y),        (5.2)

for i = 1,…,P, where the main effect indices (5.1) represent the contribution of each input factor alone to the variance of the output, whereas the total effect indices (5.2) also include the interactions with the other parameters. Above, E_{μ∼i}[⋅] and Var_{μ∼i}(⋅) denote the conditional expectation and variance taken over μ∼i (all parameter components but μ(i)); similarly, E_{μ(i)}[⋅] and Var_{μ(i)}(⋅) denote the conditional expectation and variance taken over μ(i).
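A minimal NumPy sketch of these variance-based estimators is reported below, following Saltelli's sampling scheme [54] with Jansen-type estimators; the `model` callable abstracts the (hyper-reduced) parameter-to-output map, plain Monte Carlo sampling is used in place of more structured sequences, and all names and defaults are illustrative. Its cost, n_base·(P+2) model evaluations, matches the evaluation count reported below.

```python
import numpy as np

def sobol_indices(model, lo, hi, n_base=50, rng=None):
    """Monte Carlo estimate of main/total Sobol' indices via Saltelli's
    scheme; model maps a parameter vector mu in [lo, hi] to a scalar y(mu).
    Total cost: n_base * (P + 2) model evaluations."""
    rng = np.random.default_rng(rng)
    P = len(lo)
    A = lo + (hi - lo) * rng.random((n_base, P))
    B = lo + (hi - lo) * rng.random((n_base, P))
    yA = np.array([model(a) for a in A])
    yB = np.array([model(b) for b in B])
    var = np.var(np.concatenate([yA, yB]))
    S, ST = np.empty(P), np.empty(P)
    for i in range(P):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # swap in B's i-th column
        yABi = np.array([model(x) for x in ABi])
        S[i] = np.mean(yB * (yABi - yA)) / var         # main effect
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var  # total effect (Jansen)
    return S, ST
```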
In our case, we consider as quantities of interest the displacement of the apex A_0 and the displacement of a point P_0 located on the epicardium between the apex and the base, both measured in the direction perpendicular to the basal plane and evaluated every 10 time steps (see Figure 15).
We consider N_base = 50 base samples for the Saltelli method [54], corresponding to N_base(P+2) = 700 input-output evaluations, and rely on the POD-Galerkin-DEIM ROM with N = 19 and m = 574 (see Table 3) to speed up the computations. The time averages of the sensitivity indices related to the displacements of the apex A_0 and of the point P_0 are reported in Figure 16. From these results, we can conclude that the most influential parameters are the maximum value T̃_a of the active tension and the angle α_epi related to the fiber orientation at the epicardium.
5.2. Idealized full cycle of the left ventricle with a patient-specific geometry
Finally, we analyze the performance of the POD-Galerkin-DEIM method when the whole cardiac cycle is taken into account. For the sake of simplicity, we employ suitable analytical time-dependent functions to represent the influence of both the active stress and the blood circulation. In particular, the active tension T_a(t;μ) and the pressure g(t;μ) are computed as follows:
1) for a fixed set of physiological parameters, we solve the cardiac electromechanics (EM) problem coupled with a lumped-parameter model for the hemodynamics, as done in [55];
2) by performing a cubic spline interpolation of the space-averaged active tension T_a^{EM}(t) and of the pressure g^{EM}(t) coming from the EM simulation, we obtain the corresponding analytical surrogate functions T_a^M(t) and g^M(t), reported in Figure 17;
3) to take into account a parameter dependence, we define the rescaled functions

T_a(t; μ) = (T̃_a / max_t T_a^M(t)) T_a^M(t),    g(t; μ) = (p̃ / max_t g^M(t)) g^M(t),

in order to model different maximum active tensions and loading conditions.
The reference geometry Ω_0 ⊂ R³ is a patient-specific left ventricle, pre-processed from the Zygote Solid 3D heart model [56], reconstructed from a high-resolution computed tomography scan, whose associated hexahedral mesh with 6167 vertices is reported in Figure 18. The FOM is built using Q₁-FE, so that the high-fidelity dimension is N_h = 18501. Differently from all the other test cases considered so far, we assume the following boundary conditions, according to [57]:
● generalized Robin boundary conditions at the epicardium Γ_0^epi (or Γ_0^R),

P(u) N + (K_⊥ N⊗N + K_∥ (I − N⊗N)) u + (C_⊥ N⊗N + C_∥ (I − N⊗N)) ∂u/∂t = 0,

with K_⊥ = 2·10⁵, K_∥ = C_⊥ = 2·10⁴ and C_∥ = 2·10³;
● Neumann boundary conditions at the endocardium Γ_0^endo (or Γ_0^N);
● energy-consistent boundary conditions [58] at the base Γ_0^base.
This choice of boundary conditions does not affect the construction of the ROM, since no assumptions about the form of the FOM are made in the reduction strategy. As in the previous test cases, we consider the nearly-incompressible Guccione law and adopt the active stress approach. Regarding the fiber distribution, we employ the rule-based method proposed in [49], depending on the parameter angles α_epi, α_endo, β_epi and β_endo. For the time discretization, we consider a uniform time step Δt = 5·10⁻³ s and set T = 0.8 s, corresponding to the duration of a single heartbeat, resulting in a total number of 160 time iterations. We choose, as unknown parameters, those most affecting the cardiac deformation during systole and diastole, that is,
● the multiplicative factor of the constitutive law C ∈ [0.44·10³, 1.32·10³] Pa;
● the active tension parameter T̃_a ∈ [49.5·10³, 70.5·10³] Pa;
● the fiber angle at the epicardium α_epi ∈ [−75, −45]°.
The active tension and the fiber angle influence the contraction and the twisting motion typical of the ventricular systole, whereas C is related to the stiffness of the tissue. All the other parameters are fixed to their reference values, namely b_f = 8, b_s = 6, b_n = b_fn = b_sn = 3, b_fs = 12, K = 50·10³ Pa, p̃ = 15·10³ Pa, α_endo = 60°, β_epi = 20° and β_endo = −20°. Computing the high-fidelity solution for each instance of the parameters requires almost 15 minutes.
Remark 5. Since we are neglecting the coupling between mechanics and circulation, non-physiological pressure-volume loops are obtained, especially in the first time steps of the heartbeat, corresponding to the ventricular systole, which lacks the isovolumetric contraction phase. In addition, the end-diastolic configuration of the left ventricle should be recovered before starting the mechanical simulation. However, we point out that our aim is to test and analyze the reduction methodology on parameterized time-dependent nonlinear PDEs. Moreover, the simulated cardiac cycle can be regarded as a sufficiently accurate reproduction of both the systolic and diastolic deformations for the purpose at hand.
For the construction of the reduced basis V, we collect n_s = 20 FOM solutions and perform POD with ε_POD = 10⁻³, obtaining a reduced subspace of dimension N = 28. To build the residual basis, we perform n_s′ = 50 ROM simulations and collect the residual snapshots. Since we are using N_t = 160 and n_s′ = 50, and at least two Newton iterations are performed at each time step, we end up with a residual snapshots matrix S_R of more than 16000 columns. For this reason, we rely on the randomized SVD to speed up the computation of Φ_R, by choosing a priori the number of DEIM basis functions, rather than the POD tolerance ε_DEIM (see Algorithm 4). Table 4 summarizes the average errors computed over a testing set of 20 random parameters and the CPU times obtained using m = 850, 1000 and 1200, while the volume and the pressure-volume loop obtained for m = 850 are reported in Figure 19. Here, the POD-Galerkin-DEIM outputs are compared to those of the FOM for two different values of the parameter vector. We point out that no convergence of the reduced Newton system was obtained for all testing parameters when using smaller residual bases.
In Figure 20 we finally report the POD-Galerkin-DEIM solutions at four time instants, for two different values of the parameter vector μ = [C, T̃_a, α_epi]. Moreover, the corresponding pointwise difference between the high-fidelity solution and its reduced-order approximation is also reported, showing that the error does not increase over time.
6. Conclusions
In this paper we have introduced and investigated a projection-based reduced order modeling framework to deal with problems arising in cardiac mechanics, which present great challenges from a numerical standpoint due to the complex material behavior. The POD-Galerkin ROM, equipped with DEIM hyper-reduction, has enabled the accurate approximation of the high-fidelity solution of parameterized nonlinear, time-dependent problems in elastodynamics. Examples have been shown for test cases reproducing different phases of the cardiac cycle, on both idealized and realistic left ventricle geometries. The proposed hyper-ROMs rely on (ⅰ) POD for the construction of a low-dimensional trial subspace to represent the problem solution, (ⅱ) a Galerkin projection to generate, in a physically consistent way, the reduced order model, and (ⅲ) suitable hyper-reduction techniques, such as the discrete empirical interpolation method, to enhance the assembling of the nonlinear terms (with respect to the solution, to the input parameters, or to both). In the considered test cases, POD provides low-dimensional subspaces, still preserving a sufficient fidelity: despite their highly nonlinear nature, elastodynamics problems can be effectively reduced by exploiting projection-based strategies, with POD-Galerkin ROMs showing very good accuracy even in the presence of a handful of basis functions. In this respect, mechanical problems do not pose the same issues as transport, wave, or convection-dominated phenomena, such as those related to cardiac electrophysiology, where coherent structures propagate over time and the RB method may yield inefficient ROMs, requiring a high number of basis functions.
Performing hyper-reduction by means of the DEIM technique allows us to achieve good results in terms of computational speed-up of the ROMs with respect to the FOM, without affecting the overall approximation error. In particular, since only a handful of basis functions is required for the solution-space reduction, we expect even higher gains when finer computational meshes are used. Furthermore, we point out that relying on the randomized SVD allows us to deal with massive datasets of nonlinear arrays, which would otherwise be difficult to handle with deterministic techniques such as POD.
A serious issue is instead represented by the assembling of the reduced operators in this framework, and by the projection of the DEIM-approximated operators onto the RB space. In fact, even if the nonlinear quantities are assembled on a reduced mesh, a large residual basis may be needed to ensure the convergence of the Newton method for complex applications, overall compromising the ROM efficiency, as most of the online CPU time is spent assembling the reduced residual vector. In addition to the fact that hyper-reduction aims at approximating vanishing terms with a global basis, the highly nonlinear nature of the constitutive laws makes the residual terms orders of magnitude more expensive to approximate than the parameter-to-solution map, regarding both the number of basis functions involved in their expansions and the computational time required at the offline and online stages.
This observation suggests the idea of relying on surrogate models to perform the operator approximation, overcoming the need to assemble the nonlinear terms on the computational mesh. Pursuing this strategy will allow us to rely on physics-based (thus, consistent) ROMs built through a POD-Galerkin RB method, while avoiding the computational burden entailed by classical hyper-reduction strategies. The analysis and application of deep learning-based methods to perform hyper-reduction will be the object of forthcoming publications.
Author contributions
● Conceptualization: all authors.
● Formal analysis: Ludovica Cicci.
● Funding acquisition: Alfio Quarteroni.
● Investigation: Ludovica Cicci.
● Methodology: Ludovica Cicci, Stefano Pagani, Andrea Manzoni.
● Software: Ludovica Cicci, Stefano Pagani.
● Supervision: Stefania Fresca, Stefano Pagani, Andrea Manzoni, Alfio Quarteroni.
● Validation: Ludovica Cicci.
● Visualization: Ludovica Cicci.
● Writing - original draft: Ludovica Cicci, Andrea Manzoni.
● Writing - review & editing: Stefania Fresca, Stefano Pagani, Andrea Manzoni, Alfio Quarteroni.
Acknowledgments
The authors have been supported by the ERC Advanced Grant iHEART, "An integrated heart model for the simulation of the cardiac function'', 2017-2022, P. I. A. Quarteroni (ERC2016AdG, project ID: 740132).
Conflict of interest
The authors declare no conflict of interest.
Appendix
A. Proper orthogonal decomposition
Let s_h : P → V_h be a parameter-dependent function and s_h(μ) ∈ R^{N_h} the corresponding vector of dofs, where N_h = dim(V_h). For a given set of training parameters S = {μ_1,…,μ_{n_s}} ⊂ P, with n_s < N_h, we define the snapshots matrix

S = [s_1 | … | s_{n_s}] ∈ R^{N_h×n_s},

where s_i = s_h(μ_i), i = 1,…,n_s. Proper orthogonal decomposition (POD) aims at approximating the manifold M_s = {s_h(μ) ∈ V_h : μ ∈ P}, identified by the image of s_h, with a low-dimensional linear subspace, retaining as much as possible of the information gathered in the snapshots. To achieve this goal, the singular value decomposition (SVD) of S,

S = U Σ Z^T,

is computed, where U = [ξ_1 | … | ξ_{N_h}] ∈ R^{N_h×N_h} and Z = [ζ_1 | … | ζ_{n_s}] ∈ R^{n_s×n_s} are orthogonal matrices collecting column-wise the left and the right singular vectors, respectively. The diagonal matrix Σ = diag(σ_1,…,σ_r) ∈ R^{N_h×n_s} contains all the singular values of S, sorted in descending order σ_1 ≥ ⋯ ≥ σ_r > 0, where r ≤ min(N_h, n_s) is the rank of S, so that we can write

S = Σ_{i=1}^r σ_i ξ_i ζ_i^T.

The N-dimensional POD basis V = [ξ_1 | … | ξ_N] is obtained by collecting the first N columns of U, corresponding to the N largest singular values (see Algorithm 3). At the basis of POD, the Schmidt-Eckart-Young theorem (originally introduced for integral operators) provides a criterion for the best low-rank approximation of a given matrix of rank r. In fact, the basis V ∈ R^{N_h×N} is such that

S_N = V V^T S = Σ_{i=1}^N σ_i ξ_i ζ_i^T

has rank N ≤ r and satisfies the following optimality property:

‖S − S_N‖_F = min_{B ∈ R^{N_h×n_s}, rank(B) ≤ N} ‖S − B‖_F = ( Σ_{i=N+1}^r σ_i² )^{1/2},        (A.1)

where ‖⋅‖_F is the Frobenius norm. Furthermore, thanks to (A.1), the singular values of the snapshots matrix provide a heuristic criterion for choosing the basis dimension N, which can be computed as the minimum integer satisfying the condition

Σ_{i=1}^N σ_i² / Σ_{i=1}^r σ_i² ≥ 1 − ε_POD²,        (A.2)

where ε_POD > 0 is a given tolerance.
B. Randomized singular value decomposition
The randomized singular value decomposition (randomized SVD) provides a non-deterministic alternative to POD. Randomization offers, in fact, a powerful tool for performing low-rank matrix approximations, especially when dealing with massive data sets. The randomized approach usually beats its classical competitors in terms of computational speed, accuracy and robustness [59]. The key idea of the randomized SVD is to split the task of computing an approximated singular value decomposition of a given matrix into a first random stage and a second deterministic one, see Algorithm 4. The former exploits random sampling to construct a low-dimensional subspace that captures most of the action of the input matrix; the latter restricts the given matrix to this subspace and then manipulates the associated reduced matrix with classical deterministic algorithms, to obtain the desired low-rank approximation.
This randomized approach is particularly convenient when the snapshots matrix is high-dimensional, i.e., when N_h and n_s are large. In fact, finding the first k dominant singular values of a dense input matrix of dimension N_h × n_s requires O(N_h n_s log(k)) floating-point operations for a randomized algorithm, in contrast with O(N_h n_s k) flops for a classical one. Finally, Algorithm 4 can be adapted to solve the following problem: given a target error tolerance ε > 0, find k = k(ε) and Q ∈ R^{N_h×k} satisfying

‖S − Q Q^T S‖ ≤ ε.
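A compact NumPy sketch of the two-stage procedure (in the spirit of Algorithm 4, with a fixed target rank k) reads as follows; the oversampling and power-iteration parameters are illustrative defaults.

```python
import numpy as np

def randomized_svd(S, k, n_oversample=10, n_power=2, rng=None):
    """Sketch of the randomized SVD: a random range finder followed by a
    deterministic SVD of the reduced matrix B = Q^T S."""
    rng = np.random.default_rng(rng)
    # Stage 1 (random): sample the range of S with a Gaussian test matrix
    Omega = rng.standard_normal((S.shape[1], k + n_oversample))
    Y = S @ Omega
    for _ in range(n_power):            # power iterations sharpen the subspace
        Y = S @ (S.T @ Y)
    Q, _ = np.linalg.qr(Y)              # orthonormal basis of the sampled range
    # Stage 2 (deterministic): SVD of the small matrix B
    B = Q.T @ S
    Ub, sigma, Zt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], sigma[:k], Zt[:k, :]
```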
C. Discrete empirical interpolation method (DEIM)
The DEIM algorithm for the approximation of a generic nonlinear function f:P→RNh, f=f(τ) (where τ=t and/or μ), as originally proposed in [37], is outlined as follows†:
†For simplicity, we consider vector representations of functions, assuming that all quantities have been discretized on a FE mesh.
1) construct a set of snapshots obtained by sampling f(τ) at random values τ_1,…,τ_{n_s} and apply POD to extract a basis Φ_F = [φ_1 | … | φ_m] ∈ R^{N_h×m} from these snapshots, i.e.,

Φ_F = POD( [f(τ_1) | … | f(τ_{n_s})], ε_DEIM ),

where ε_DEIM > 0 is a given tolerance such that a condition similar to (A.2) holds;
2) iteratively select m ≪ N_h indices I ⊂ {1,…,N_h}, corresponding to a subset of the rows of Φ_F, using a greedy procedure which minimizes the interpolation error over the snapshots set;
3) given τ ∉ {τ_1,…,τ_{n_s}}, impose the interpolation conditions at the selected entries I,

Φ_F|_I c(τ) = f(τ)|_I.

Here Φ_F|_I ∈ R^{m×m} is the matrix formed by the I rows of Φ_F; as a result, we obtain

f_m(τ) = Φ_F (Φ_F|_I)^{−1} f(τ)|_I.        (C.1)
Algorithm 5 reports the greedy procedure to determine the DEIM interpolation points. At each step k ∈ {2,…,m}, the k-th magic point is selected as the quadrature point maximizing the (absolute value of the) difference between the basis vector φ_k and its current approximation Φ_F(:,1:k−1)ψ relying on the k−1 previously computed magic points. Magic points are hierarchical and non-repeated by construction.
Condition (C.1) can be generalized to the case where more sample indices than basis functions are considered, leading to a gappy-POD reconstruction. The solution to the least-squares problem

min_{c ∈ R^m} ‖ f(τ)|_I − Φ_F|_I c ‖₂

would yield f_m(τ) = Φ_F (Φ_F|_I)^† f(τ)|_I, where the Moore-Penrose pseudo-inverse of a full-column-rank matrix A ∈ R^{n×m} is A^† := (A^T A)^{−1} A^T.
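For illustration, the greedy selection of the magic points can be sketched in NumPy as follows (an analog of Algorithm 5; names are illustrative):

```python
import numpy as np

def deim_indices(Phi):
    """Sketch of the DEIM greedy selection of magic points: at step k, pick
    the entry where the k-th basis vector is worst approximated by the
    previously selected columns and points."""
    Nh, m = Phi.shape
    idx = [int(np.argmax(np.abs(Phi[:, 0])))]     # first point: largest entry
    for k in range(1, m):
        # interpolate phi_k at the current points using the first k columns
        psi = np.linalg.solve(Phi[np.ix_(idx, range(k))], Phi[idx, k])
        res = Phi[:, k] - Phi[:, :k] @ psi        # interpolation residual
        idx.append(int(np.argmax(np.abs(res))))   # new, non-repeated index
    return np.array(idx)
```

Given the returned indices, the online reconstruction f_m(τ) = Φ_F (Φ_F|_I)^{−1} f(τ)|_I only requires evaluating the m selected entries of f, with Φ_F|_I = Phi[idx, :].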