
This paper proposes a new method for computing spherical area-preserving mappings of topological spheres using a Riemannian optimization framework.
Area-preserving mappings can serve as parameterizations of surfaces and have been applied in computer graphics and medical imaging [6]. State-of-the-art methods for the computation of area-preserving parameterizations include diffusion-based methods [8], optimal mass transportation-based methods [18], and nonlinear optimization methods [20]. It is worth noting that most previous works consider area-preserving parameterizations from simply connected open surfaces to planar regions. For the spherical area-preserving mapping of genus-zero closed surfaces, recently developed methods include the adaptive area-preserving parameterization method [7], the spherical optimal transportation mapping [10], and the stretch energy minimization method [21].
Riemannian optimization generalizes unconstrained, continuous optimization (defined on Euclidean space) following the same principle by which Riemannian geometry generalizes Euclidean geometry. Traditional optimization methods rely on the Euclidean vector space structure and do not carry over directly to nonlinear manifolds. In the Riemannian optimization framework, by contrast, algorithms and convergence analyses are formulated in the language of differential geometry, and numerical linear algebra techniques are then used for the implementation. Riemannian optimization focuses on matrix manifolds, that is, manifolds in the sense of classical Riemannian geometry that admit a natural representation of their elements in the form of matrices. Retraction mappings play a vital role in this framework: they map tangent vectors to points on the manifold, thereby turning an objective function defined on the manifold into a function defined on the tangent space, so that the constraints are taken into account explicitly. A vast body of literature on the theory and implementation of Riemannian optimization now exists. Some of the earliest references for this field, especially for line-search methods, go back to Luenberger [15], Gabay [14], and Udrişte [19]. More recent literature that encompasses the findings and developments of the last three decades is [2,5,12].
Our Riemannian gradient descent (RGD) method involves classical components from computational geometry (simplicial surfaces and mappings) and Riemannian optimization (line search, retractions, and Riemannian gradients). To the best of the authors' knowledge, this is the first time this approach has been used in computational geometry. In this paper, we minimize the normalized stretch energy [20] for simplicial mappings subject to the constraint that the image of the mapping belongs to a power manifold of $ n $ unit spheres embedded in $ \mathbb{R}^{3} $.
In this paper, we propose an RGD method to compute spherical area-preserving mappings on a power manifold of unit spheres. In particular, the main contributions are as follows:
(i) We combine the tools from the Riemannian optimization framework and the components from computational geometry to propose an RGD method for computing spherical area-preserving mappings of topological spheres.
(ii) We explore two different line-search strategies: one using MATLAB's $\texttt{fminbnd}$, and the other using the quadratic/cubic interpolant approximation from [11,§ 6.3.2].
(iii) We conduct extensive numerical experiments on several mesh models to demonstrate the accuracy and efficiency of the algorithm.
(iv) We demonstrate the competitiveness and efficiency of our algorithm over three state-of-the-art methods for computing area-preserving mappings.
(v) We show that our algorithm is stable with respect to small perturbations in the initial mesh model.
(vi) We apply the algorithm to the concrete application of landmark-aligned surface registration between two human brain models.
The remaining part of this paper is organized as follows. Section 2 introduces the main concepts on simplicial surfaces and mappings and presents the formulation of the objective function. Section 3 provides some preliminaries on the Riemannian optimization framework, briefly recalls the geometry of the unit sphere, and then describes the tools needed to perform optimization on the power manifold. Section 4 is about the RGD method and reports known convergence results. Section 5 discusses the extensive numerical experiments to evaluate our algorithm in terms of accuracy and efficiency, and provides a concrete application. Finally, we wrap our paper with conclusions and a future outlook in Section 6. The appendices contain other details about the calculations of the area of the simplicial mapping, the Hessian of the objective function, and the line-search procedure used in the RGD method.
In this section, we list the notations and symbols adopted in order of appearance in the paper. Symbols specific to a particular section are usually not included in this list.
$ \tau $ | A triangular face |
$ \left\vert \tau \right\vert $ | The area of the triangle $ \tau $ |
$ \mathcal{M} $ | Simplicial surface |
$ \mathcal{V}(\mathcal{M}) $ | Set of vertices of $ \mathcal{M} $ |
$ \mathcal{F}(\mathcal{M}) $ | Set of faces of $ \mathcal{M} $ |
$ \mathcal{E}(\mathcal{M}) $ | Set of edges of $ \mathcal{M} $ |
$ v_{i}, \ v_{j}, \ v_{k} $ | Vertices of a triangular face |
$ f $ | Simplicial mapping |
$ {\bf{f}} $ | Representative matrix of $ f $ |
${\bf{f}} _{\ell} $ | Coordinates of a vertex $ f(v_{\ell}) $ |
$ {\rm{vec}} $ | Column-stacking vectorization operator |
$ S^2 $ | Unit sphere in $ \mathbb{R}^{3} $ |
$ (S^2)^n $ | Power manifold of $ n $ unit spheres in $ \mathbb{R}^{3} $ |
$ E_{A}(f) $ | The authalic energy |
$ E_{S}(f) $ | The stretch energy |
$ L_{S}(f) $ | Weighted Laplacian matrix |
$ \omega_{S} $ | Modified cotangent weights |
$ \mathcal{A}(f) $ | Area of the image of $ f $ |
$ E(f) $ | The normalized stretch energy |
$ \mathrm{T}_{{\bf{x}}} S^2 $ | Tangent space to $ S^2 $ at $ {\bf{x}} $ |
$ \text{P}_{\mathrm{T}_{{\bf{x}}} S^2} $ | Orthogonal projector onto the tangent space to $ S^2 $ at $ {\bf{x}} $ |
$ \text{P}_{\mathrm{T}_{{\bf{f}}_{\ell}} (S^2)^n} $ | Orthogonal projector onto the tangent space to $ (S^2)^n $ at $ {\bf{f}}_{\ell} $ |
$ \text{R} $ | Retraction mapping |
$ \nabla E(f) $ | Euclidean gradient of $ E(f) $ |
$ \text{grad } E(f) $ | Riemannian gradient of $ E(f) $ |
We introduce the simplicial surfaces and mappings in subsection 2.1 and the objective function in subsection 2.2.
A simplicial surface $ {\mathcal M} $ is the underlying set of a simplicial $ 2 $-complex $ \mathcal{K}({\mathcal M}) = {\mathcal F}({\mathcal M})\cup {\mathcal E}({\mathcal M})\cup\mathcal{V}({\mathcal M}) $ composed of vertices
$ \mathcal{V}( {\mathcal M}) = \left\{v_\ell = \left( v_\ell^1, v_\ell^2, v_\ell^3 \right) ^{\top} \in \mathbb{R}^3 \right\}_{\ell = 1}^n, $ |
oriented triangular faces
$ {\mathcal F}( {\mathcal M}) = \left\{ \tau_\ell = [v_{i_\ell}, v_{j_\ell}, v_{k_\ell}] \mid v_{i_\ell}, v_{j_\ell}, v_{k_\ell} \in\mathcal{V}( {\mathcal M}) \right\}_{\ell = 1}^m, $ |
and undirected edges
$ {\mathcal E}( {\mathcal M}) = \left\{ [v_i, v_j] \mid [v_i, v_j, v_k]\in {\mathcal F}( {\mathcal M}) \text{ for some } v_k\in\mathcal{V}( {\mathcal M}) \right\}. $ |
A simplicial mapping $ f\colon {\mathcal M}\to \mathbb{R}^3 $ is a particular type of piecewise affine mapping with the restriction mapping $ f|_\tau $ being affine, for every $ \tau\in {\mathcal F}({\mathcal M}) $. We denote
$ \mathbf{f}_\ell := f(v_\ell) = \left( f_\ell^1, f_\ell^2, f_\ell^3 \right) ^{\top}, \; \text{ for every }v_\ell\in\mathcal{V}( {\mathcal M}) . $ |
The mapping $ f $ can be represented as a matrix
$ \mathbf{f} = \begin{bmatrix} \mathbf{f}_1^{\top} \\ \vdots \\ \mathbf{f}_n^{\top} \end{bmatrix} = \begin{bmatrix} f_1^1 & f_1^2 & f_1^3 \\ \vdots & \vdots & \vdots \\ f_n^1 & f_n^2 & f_n^3 \end{bmatrix} =: \begin{bmatrix} \mathbf{f}^1 & \mathbf{f}^2 & \mathbf{f}^3 \end{bmatrix}, $

(2.1)
or a vector
$ {\rm{vec}}( \mathbf{f}) = \begin{bmatrix} \mathbf{f}^1 \\ \mathbf{f}^2 \\ \mathbf{f}^3 \end{bmatrix}. $
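For concreteness, here is a one-line MATLAB sketch (the variable name F is ours) of the column-stacking vectorization:

```matlab
% F is the n-by-3 representative matrix of f; vec(F) stacks its columns.
vecF = reshape(F, [], 1);   % identical to F(:), i.e., [F(:,1); F(:,2); F(:,3)]
```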
The authalic energy for simplicial mappings $ f\colon {\mathcal M}\to \mathbb{R}^{3} $ is defined as [20]
$ E_A(f) = E_S(f) - {\mathcal A}(f), $ |
where $ E_{S} $ is the stretch energy defined as
$ E_{S}(f) = \frac{1}{2} \, {\rm{vec}}( \mathbf{f}) ^{\top} (I_3\otimes L_S(f)) {\rm{vec}}( \mathbf{f}), $ |
where $ I_{3} $ is the identity matrix of size 3-by-3, $ \otimes $ denotes the Kronecker product, and $ L_S(f) $ is the weighted Laplacian matrix, defined by
$ [L_S(f)]_{i,j} = \begin{cases} -\sum_{[v_i, v_j, v_k]\in {\mathcal F}( {\mathcal M})} [\omega_S(f)]_{i,j,k} & \text{if } [v_i, v_j]\in {\mathcal E}( {\mathcal M}), \\ -\sum_{\ell\neq i} [L_S(f)]_{i,\ell} & \text{if } j = i, \\ 0 & \text{otherwise}, \end{cases} $

(2.2)
in which $ \omega_S(f) $ is the modified cotangent weight defined as
$ [\omega_S(f)]_{i,j,k} = \frac{\cot(\theta_{i,j}^k(f)) \, |f([v_i, v_j, v_k])|}{2 \, |[v_i, v_j, v_k]|}, $

(2.3)
with $ \theta_{i, j}^k(f) $ being the angle opposite to the edge $ f([v_i, v_j]) $ at the point $ f(v_k) $ on the image $ f({\mathcal M}) $, as illustrated in Figure 1.
It is proved in [20,Corollary 3.4] that $ E_A(f)\geq 0 $, with equality if and only if $ f $ is area-preserving. During the optimization process, the image area $ {\mathcal A}(f) $ is not constant; hence, we introduce the prefactor $ | {\mathcal M}|/ {\mathcal A}(f) $ and define the normalized stretch energy as
$ E(f) = \frac{| {\mathcal M}|}{ {\mathcal A}(f)} \, E_S(f). $

(2.4)
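As a hedged illustration (the helper quantities LS, areaM, and areaF are our own names, not from the authors' code), the identity $ {\rm{vec}}(\mathbf{f})^{\top} (I_3\otimes L_S(f)) \, {\rm{vec}}(\mathbf{f}) = \mathrm{trace}(\mathbf{f}^{\top} L_S(f) \, \mathbf{f}) $ allows one to evaluate (2.4) without forming the Kronecker product:

```matlab
% Hedged sketch: evaluating the normalized stretch energy (2.4).
% F     : n-by-3 representative matrix of f
% LS    : sparse n-by-n weighted Laplacian L_S(f), as in (2.2)-(2.3)
% areaM : total area |M| of the source mesh
% areaF : image area A(f), computed as in Appendix 7
ES = 0.5 * trace(F' * (LS * F));   % stretch energy E_S(f)
E  = (areaM / areaF) * ES;         % normalized stretch energy (2.4)
```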
To perform numerical optimization via the RGD method, we need to compute the Euclidean gradient. By applying the formula $ \nabla E_S(f) = 2\, (I_3 \otimes L_S(f)) {\rm{vec}}(\mathbf{f}) $ from [20,(3.6)], the gradient of $ E(f) $ can be formulated as
$ \nabla E(f) = \nabla \left( \frac{| {\mathcal M}|}{ {\mathcal A}(f)} E_S(f) \right) = \frac{| {\mathcal M}|}{ {\mathcal A}(f)} \nabla E_S(f) + E_S(f) \, \nabla \frac{| {\mathcal M}|}{ {\mathcal A}(f)} = \frac{2 | {\mathcal M}|}{ {\mathcal A}(f)} (I_3\otimes L_S(f)) \, {\rm{vec}}( \mathbf{f}) - \frac{| {\mathcal M}| \, E_S(f)}{ {\mathcal A}(f)^2} \nabla {\mathcal A}(f). $

(2.5)
The following proposition gives an explicit formula for calculating $ \nabla {\mathcal A}(f) $.
Proposition 2.1 (Formula for $ \nabla {\mathcal A} $). The gradient of $ {\mathcal A} $ can be explicitly formulated as
$ \nabla {\mathcal A}(f|_\tau) = \frac{|\tau|}{ {\mathcal A}(f|_\tau)} \, {\rm{vec}}\big(L_S(f|_\tau) \, \mathbf{f}_\tau\big). $

(2.6)
Proof. By applying the explicit formulas $ E_S(f|_\tau) = \frac{ {\mathcal A}(f|_\tau)^2}{|\tau|} $ and $ \nabla E_S(f|_\tau) = 2 \, {\rm{vec}}(L_S(f|_\tau) \, \mathbf{f}_\tau) $ from [20], the chain rule yields
$ 2 \, {\rm{vec}}(L_S(f|_\tau) \, \mathbf{f}_\tau) = \nabla E_S(f|_\tau) = \frac{2 {\mathcal A}(f|_\tau)}{|\tau|} \nabla {\mathcal A}(f|_\tau), $ |
which is equivalent to (2.6).
Other details about the calculation of $ {\mathcal A}(f) $ are reported in Appendix 7.
The Riemannian optimization framework [2,5,12] solves constrained optimization problems where the constraints have a geometric nature. This approach utilizes the underlying geometric structure of the problems, which allows the constraints to be taken explicitly into account. The optimization variables are constrained to a smooth manifold, and the optimization is performed on that manifold. Typically, the manifolds considered are matrix manifolds, meaning there is a natural representation of their elements in matrix form. In particular, in this paper, the problem is formulated on a power manifold of $ n $ unit spheres embedded in $ \mathbb{R}^{3} $, and we use the RGD method for minimizing the cost function (2.4) on this power manifold.
Traditional optimization methods rely on the Euclidean vector space structure. For instance, the steepest descent method for minimizing a function $ g\colon \mathbb{R}^{n} \to \mathbb{R} $ updates the current iterate $ \mathbf{x}_{k} $ by moving in the direction $ \mathbf{d}_{k} $ of the anti-gradient of $ g $, with a step size $ \alpha_{k} $ chosen according to an appropriate line-search rule. On a nonlinear manifold, however, the vector addition $ \mathbf{x}_{k} + \alpha_{k} \mathbf{d}_{k} $ does not make sense in general because of the curvature of the manifold. We need the notion of tangent vectors and of their length to generalize the steepest descent direction to a Riemannian manifold. The retraction mapping is another vital tool in the Riemannian optimization framework. It is a mapping from the tangent space to the manifold, used to turn tangent vectors into points on the manifold, thereby transforming an objective function defined on the manifold into a function defined on the tangent space while explicitly taking the constraints into account.
Similarly to line-search methods in Euclidean space, a line-search method in the Riemannian framework determines, at the current iterate $ \mathbf{x}_{k} $ on a manifold $ M $, a search direction $ \boldsymbol{\xi} $ in the tangent space $ \mathrm{T}_{\mathbf{x}_{k}}M $. The next iterate $ \mathbf{x}_{k+1} $ is then determined by a line search along the curve $ \alpha \mapsto \text{R}_{ \mathbf{x}_{k}}(\alpha \boldsymbol{\xi}) $, where $ \text{R}_{ \mathbf{x}_{k}} \colon \mathrm{T}_{\mathbf{x}_{k}}M \to M $ is the retraction mapping. The procedure is then repeated with $ \mathbf{x}_{k+1} $ taking the role of $ \mathbf{x}_{k} $. As in Euclidean space, the search direction can be the negative of the Riemannian gradient, leading to the Riemannian steepest descent method. Other choices of search directions lead to other methods, e.g., Riemannian versions of the trust-region method [1] or of BFGS [17].
In what follows, we introduce some fundamental geometry concepts used in Riemannian optimization, which are necessary to formulate the RGD method for our problems. We first briefly recall the geometry of the unit sphere $ S^{2} $ embedded in $ \mathbb{R}^{3} $, and then we switch to the power manifold of $ n $ unit spheres, denoted by $ \big(S^{2}\big)^{n} $.
The unit sphere $ S^{2} $ is a Riemannian submanifold of $ \mathbb{R}^{3} $ defined as
$ S^{2} = \lbrace \mathbf{x} \in \mathbb{R}^{3} \colon \mathbf{x} ^{\top} \mathbf{x} = 1 \rbrace. $ |
The Riemannian metric (inner product) on the sphere is inherited from the embedding space $ \mathbb{R}^{3} $, i.e.,
$ \langle \boldsymbol{\xi}, \boldsymbol{\eta} \rangle_{ \mathbf{x}} = \boldsymbol{\xi} ^{\top} \boldsymbol{\eta}, \quad \boldsymbol{\xi}, \, \boldsymbol{\eta} \in \mathrm{T}_{\mathbf{x}}S^{2}, $ |
where $ \mathrm{T}_{\mathbf{x}}S^{2} $ is the tangent space to $ S^{2} $ at $ \mathbf{x} \in S^{2} $, defined as the set of all vectors orthogonal to $ \mathbf{x} $ in $ \mathbb{R}^{3} $, i.e.,
$ \mathrm{T}_{\mathbf{x}}S^{2} = \lbrace \mathbf{z} \in \mathbb{R}^{3} \colon \mathbf{x} ^{\top} \mathbf{z} = 0 \rbrace. $ |
The projector $ \text{P}_{ \mathrm{T}_{\mathbf{x}}S^{2}} $ from $ \mathbb{R}^{3} $ onto the tangent space $ \mathrm{T}_{\mathbf{x}}S^{2} $ is a mapping $ \text{P}_{ \mathrm{T}_{\mathbf{x}}S^{2}} \colon \mathbb{R}^{3} \to \mathrm{T}_{\mathbf{x}}S^{2} $, defined by
$ \text{P}_{ \mathrm{T}_{\mathbf{x}}S^{2}}( \mathbf{z}) = (I_{3} - \mathbf{x} \mathbf{x} ^{\top}) \, \mathbf{z}. $

(3.1)
In the following, points on the unit sphere are denoted by $ \mathbf{f}_{\ell} $ (the vertices of the simplicial mapping $ f $), and tangent vectors are represented by $ \boldsymbol{\xi}_{\ell} $.
We aim to minimize the function $ E(f) = E(\mathbf{f}_{1}, \dots, \mathbf{f}_{n}) $ (2.4), where each $ \mathbf{f}_{\ell} $, $ \ell = 1, \dots, n $, lives on the same manifold $ S^{2} $. This leads us to consider the power manifold of $ n $ unit spheres
$ \big(S^{2}\big)^{n} = \underbrace{ S^{2} \times S^{2} \times \cdots \times S^{2}}_{n \ \text{times}}, $ |
with the metric of $ S^{2} $ extended elementwise. In the remaining part of this section, we present the tools from Riemannian geometry needed to generalize gradient descent to this manifold. The projector onto the tangent space is used to compute the Riemannian gradient, as will be explained in Section 4. The projection onto the power manifold turns points of $ \mathbb{R}^{n\times 3} $ into points of $ \big(S^{2}\big)^{n} $. Finally, the retraction turns tangent vectors into points on the manifold, thereby transforming an objective function defined on $ \mathbb{R}^{n \times 3} $ into a function defined on the tangent spaces of $ \big(S^{2}\big)^{n} $.
As seen above, the projector onto the tangent space to the unit sphere at the point $ \mathbf{x} $ is given by (3.1). Here, the points are denoted by $ \mathbf{f}_{\ell} \in \mathbb{R}^{3} $, $ \ell = 1, \dots, n $, so we write
$ \text{P}_{\mathrm{T}_{ \mathbf{f}_{\ell}} S^{2}} = I_{3} - \mathbf{f}_{\ell} \mathbf{f}_{\ell} ^{\top}. $ |
It clearly changes for every vertex $ \mathbf{f}_{\ell} $. The projector from $ \mathbb{R}^{n\times 3} $ onto the tangent space at $ \mathbf{f} $ to the power manifold $ \big(S^{2}\big)^{n} $ is a mapping
$ \text{P}_{\mathrm{T}_{\mathbf{f}}\big(S^{2}\big)^{n}} \colon \mathbb{R}^{n \times 3} \to \mathrm{T}_{ \mathbf{f}} \big(S^{2}\big)^{n}, $ |
and can be represented by a block diagonal matrix of size $ 3n \times 3n $, i.e.,
$ \text{P}_{\mathrm{T}_{\mathbf{f}}\big(S^{2}\big)^{n}} := \mathrm{blkdiag}\big( \text{P}_{\mathrm{T}_{ \mathbf{f}_{1}} S^{2}}, \text{P}_{\mathrm{T}_{ \mathbf{f}_{2}} S^{2}}, \ldots, \text{P}_{\mathrm{T}_{ \mathbf{f}_{n}} S^{2}} \big) = \begin{bmatrix} \text{P}_{\mathrm{T}_{ \mathbf{f}_{1}} S^{2}} & & & \\ & \text{P}_{\mathrm{T}_{ \mathbf{f}_{2}} S^{2}} & & \\ & & \ddots & \\ & & & \text{P}_{\mathrm{T}_{ \mathbf{f}_{n}} S^{2}} \end{bmatrix}. $

(3.2)
In practice, we never actually create this matrix. Instead, we implement an efficient version using vectorized operations (MATLAB's $\texttt{bsxfun} $).
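A minimal sketch of such a vectorized projection (our notation: F and Z are $ n\times 3 $ arrays whose rows are the points $ \mathbf{f}_{\ell} $ and the vectors to be projected, respectively):

```matlab
% Row-wise tangent-space projection on (S^2)^n: for each row,
% z - (x'*z)*x removes the component of z along the base point x.
projTangent = @(F, Z) Z - bsxfun(@times, sum(F .* Z, 2), F);
```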
The projection of a single vertex $ \mathbf{f}_{\ell} $ from $ \mathbb{R}^{3} $ to the unit sphere $ S^{2} $ is given by the normalization
$ \widetilde{ \mathbf{f}}_{\ell} = \dfrac{ \mathbf{f}_{\ell}}{\| \mathbf{f}_{\ell}\|_{2}}. $ |
Hence, the projection of the whole of $ \mathbf{f} $ onto the power manifold $ \big(S^{2}\big)^{n} $ is given by
$ \text{P}_{ \big(S^{2}\big)^{n}} \colon \mathbb{R}^{n \times 3} \to \big(S^{2}\big)^{n}, $ |
defined by
$ \mathbf{f} \mapsto \widetilde{ \mathbf{f}} := \text{diag}\!\left( \dfrac{1}{\| \mathbf{f}_{1}\|_{2}}, \dfrac{1}{\| \mathbf{f}_{2}\|_{2}}, \ldots, \dfrac{1}{\| \mathbf{f}_{n}\|_{2}} \right) \begin{bmatrix} \mathbf{f}_{1} & \mathbf{f}_{2} & \cdots & \mathbf{f}_{n} \end{bmatrix} ^{\top}. $
Again, this representative matrix is only shown for illustrative purposes; in the actual implementation, we use row-wise normalization of $ \mathbf{f} $.
A retraction is a mapping from the tangent space to the manifold used to turn tangent vectors into points on the manifold and functions defined on the manifold into functions defined on the tangent space; see [2,3] for more details.
The retraction of a single tangent vector $ \boldsymbol{\xi}_{\ell} $ from $ \mathrm{T}_{ \mathbf{f}_{\ell}} S^{2} $ to $ S^{2} $ is a mapping $ \text{R}_{ \mathbf{f}_{\ell}} \colon \mathrm{T}_{ \mathbf{f}_{\ell}} S^{2} \to S^{2} $, defined by [2,Example 4.1.1]
$ \text{R}_{ \mathbf{f}_{\ell}}( \boldsymbol{\xi}_{\ell}) = \dfrac{ \mathbf{f}_{\ell} + \boldsymbol{\xi}_{\ell}}{\| \mathbf{f}_{\ell} + \boldsymbol{\xi}_{\ell} \|}. $ |
Figure 2 provides an illustration of the retraction from $ \mathrm{T}_{ \mathbf{f}_{\ell}} S^{2} $ to $ S^{2} $.
For the power manifold $ \big(S^{2}\big)^{n} $, the retraction of all the tangent vectors $ \boldsymbol{\xi}_{\ell} $, $ \ell = 1, \dots, n $, is a mapping
$ \text{R}_{ \mathbf{f}} \colon \mathrm{T}_{ \mathbf{f}} \big(S^{2}\big)^{n} \to \big(S^{2}\big)^{n}, $ |
defined by
$ \begin{bmatrix} \boldsymbol{\xi}_{1} & \cdots & \boldsymbol{\xi}_{n} \end{bmatrix} ^{\top} \mapsto \text{diag}\left( \frac{1}{\| \mathbf{f}_{1} + \boldsymbol{\xi}_{1}\|_{2}}, \ldots, \frac{1}{\| \mathbf{f}_{n} + \boldsymbol{\xi}_{n}\|_{2}} \right) \begin{bmatrix} \mathbf{f}_{1} + \boldsymbol{\xi}_{1} & \cdots & \mathbf{f}_{n} + \boldsymbol{\xi}_{n} \end{bmatrix} ^{\top}. $

(3.3)
Again, this retraction is implemented by row-wise normalization of $ \mathbf{f} + \boldsymbol{\xi} $.
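In MATLAB, a minimal sketch of this retraction (our notation) is a single row-wise normalization:

```matlab
% Retraction (3.3) on (S^2)^n: normalize each row of F + Xi back to S^2.
retract = @(F, Xi) (F + Xi) ./ vecnorm(F + Xi, 2, 2);
```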
We are now in the position of introducing the RGD method. In this section, we will first provide the formula for the Riemannian gradient on the power manifold $ \big(S^{2}\big)^{n} $. Then, we will explain the RGD method by providing its pseudocode, and finally, we will recall the known theoretical results that ensure the convergence of RGD.
The Riemannian gradient of the objective function $ E $ in (2.4) is given by the projection onto $ \mathrm{T}_{ \mathbf{f}} \big(S^{2}\big)^{n} $ of the Euclidean gradient of $ E $, namely,
$ \text{grad } E(f) = \text{P}_{\mathrm{T}_{\mathbf{f}}\big(S^{2}\big)^{n}}\big( \nabla E(f) \big), $

(4.1)
where $ \nabla E $ is explicitly formulated in (2.5). This is always the case for embedded submanifolds; see [2,§ 3.6.1]. Figure 3 illustrates the difference between the Euclidean and the Riemannian gradient for one point on the unit sphere.
Given an initial iterate $ f^{(0)} \in \big(S^{2}\big)^{n} $, the RGD algorithm generates a sequence of iterates $ \lbrace f^{(k)} \rbrace $ as follows. At each iteration $ k = 0, 1, 2, \ldots $, it chooses a search direction $ \mathbf{d}^{(k)} = -\text{grad } E(f^{(k)}) $ in the tangent space $ \mathrm{T}_{ \mathbf{f}^{(k)}} \big(S^{2}\big)^{n} $ such that the sequence $ \lbrace \mathbf{d}^{(k)} \rbrace $ is gradient related. Then, the new point $ f^{(k+1)} $ is chosen such that it satisfies the sufficient decrease condition
$ E(f^{(k)}) - E(f^{(k+1)}) \geq c \left( E(f^{(k)}) - E\big( \text{R}_{ \mathbf{f}^{(k)}}(\alpha_{k} \mathbf{d}^{(k)}) \big) \right), $

(4.2)
where $ c > 0 $ and $ \alpha_{k} $ is the step size for the given $ \mathbf{d}^{(k)} $.
The RGD method on the power manifold $ \big(S^{2}\big)^{n} $ is summarized in Algorithm 1. It has been adapted from [2,p. 63]. In practice, in our numerical experiments of Section 5, the initial mapping $ \mathbf{f}^{(0)}\in \big(S^{2}\big)^{n} $ is computed by applying the fixed-point iteration (FPI) method of [21] until the first increase in energy is detected; see subsection 5.3, Algorithm 2. The line-search procedure used in line 7 is described in detail in Appendix 9.
Algorithm 1: The RGD method on $ \big(S^{2}\big)^{n} $.
1 Given objective function $ E $, power manifold $ \big(S^{2}\big)^{n} $, initial iterate $ \mathbf{f}^{(0)}\in \big(S^{2}\big)^{n} $, projector $ \text{P}_{\mathrm{T}_{ \mathbf{f}} \big(S^{2}\big)^{n}} $ from $ \mathbb{R}^{n\times 3} $ to $ \mathrm{T}_{ \mathbf{f}} \big(S^{2}\big)^{n} $, retraction $ \text{R}_{ \mathbf{f}} $ from $ \mathrm{T}_{ \mathbf{f}} \big(S^{2}\big)^{n} $ to $ \big(S^{2}\big)^{n} $;
Result: Sequence of iterates $ \lbrace f^{(k)} \rbrace $.
2 $ k \leftarrow 0 $;
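Since the loop body of Algorithm 1 is not reproduced here, the following MATLAB sketch (ours, not the authors' code) illustrates the iteration it describes; the helpers egrad and linesearch are assumptions standing in for the Euclidean gradient (2.5) and for a step-size rule satisfying (4.2):

```matlab
% Hedged sketch of the RGD iteration on (S^2)^n.
function F = rgd(F, egrad, linesearch, maxit, tol)
    for k = 1:maxit
        G = egrad(F);                           % Euclidean gradient (2.5), n-by-3
        RG = G - sum(F .* G, 2) .* F;           % Riemannian gradient (4.1)
        if norm(RG, 'fro') < tol, break; end    % stationarity test
        D = -RG;                                % steepest descent direction
        alpha = linesearch(F, D);               % step size satisfying (4.2)
        V = F + alpha * D;                      % step in the tangent space
        F = V ./ vecnorm(V, 2, 2);              % retraction (3.3)
    end
end
```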
Known convergence results
The RGD method has theoretically guaranteed convergence. For the reader's convenience, we report the two main results on the convergence of RGD. The first result is about the convergence of Algorithm 1 to critical points of the objective function. The second result is about the convergence of RGD to a local minimizer with a line-search technique.
Theorem 4.1 ([2,Theorem 4.3.1]). Let $ \lbrace f^{(k)} \rbrace $ be an infinite sequence of iterates generated by Algorithm 1. Then, every accumulation point $ f^{(\ast)} $ of $ \lbrace f^{(k)} \rbrace $ is a critical point of the cost function $ E $.
Remark 4.1. We are implicitly saying that a sequence can have more than one accumulation point; for example, from a sequence $ \lbrace f^{(k)} \rbrace $, we may extract two subsequences such that they have two distinct accumulation points.
The proof of Theorem 4.1 proceeds by contradiction but is rather technical, so we refer the interested reader to [2,p. 65]. It should be pointed out that Theorem 4.1 only guarantees convergence to critical points; it does not tell us anything about their nature, i.e., whether the critical points are local minimizers, local maximizers, or saddle points. However, if the smallest eigenvalue $ \lambda_{H, \min} $ of the Hessian of $ E $ at $ f^{(\ast)} $ is positive, then the critical point $ f^{(\ast)} $ is a local minimizer of $ E $. Under this assumption (i.e., that $ \lambda_{H, \min} > 0 $), [2,§ 4.5.2] gives an asymptotic convergence bound for Algorithm 1. Indeed, the following result uses the smallest and largest eigenvalues of the Hessian of the objective function at a critical point $ f^{(\ast)} $.
Theorem 4.2 ([2,Theorem 4.5.6]). Let $ \lbrace f^{(k)} \rbrace $ be an infinite sequence of iterates generated by Algorithm 1, converging to a point $ f^{(\ast)} $. (By Theorem 4.1, $ f^{(\ast)} $ is a critical point of $ E $.) Let $ \lambda_{H, \min} $ and $ \lambda_{H, \max} $ be the smallest and largest eigenvalues of the Hessian of $ E $ at $ f^{(\ast)} $. Assume that $ \lambda_{H, \min} > 0 $ (hence $ f^{(\ast)} $ is a local minimizer of $ E $). Then, given $ r $ in the interval $ (r_{\ast}, 1) $ with $ r_{\ast} = 1-\min\left(2\sigma \bar{\alpha}\lambda_{H, \min}, 4\sigma(1-\sigma)\beta \frac{\lambda_{H, \min}}{\lambda_{H, \max}} \right) $, there exists an integer $ K \geq 0 $ such that
$ E(f^{(k+1)}) - E(f^{(\ast)}) \leq \big( r + (1-r)(1-c) \big) \big( E(f^{(k)}) - E(f^{(\ast)}) \big), $

(4.3)
for all $ k\geq K $, where $ c $ is the parameter in (4.2) of Algorithm 1. Note that $ 0 < r_{\ast} < 1 $ since $ \beta, \sigma \in (0, 1) $.
As noted in [2,p. 71], in typical cases of Algorithm 1, the constant $ c $ in the descent condition (4.2) equals 1, hence (4.3) reduces to $ E(f^{(k+1)}) - E(f^{(\ast)}) \leq r\left(E(f^{(k)}) - E(f^{(\ast)}) \right) $.
In this section, we demonstrate the convergence behavior of the RGD method using twelve mesh models and two line-search techniques. We present numerical results in tables and provide convergence plots. The computational time cost of the RGD in every table does not include the time cost for the initial mapping by the FPI method. In subsection 5.2, we introduce a correction for bijectivity, which helps to unfold the folding triangles. We then compare RGD to the FPI method of [21] in subsection 5.3 and the adaptive area-preserving parameterization method of [7] in subsection 5.4. In subsection 5.5, we compare our RGD to the spherical optimal transportation mapping proposed by Cui et al. [10]. Then, in subsection 5.6, we show that the algorithm is robust to noise by starting the algorithm from an initial guess with small perturbations. Finally, in subsection 5.7, we apply our algorithm to the concrete application of surface registration of two brain models.
We conducted all our experiments on a Lenovo ThinkPad T460s laptop running Windows 10 Pro and MATLAB R2021a, with an Intel Core i7-6600 CPU, 20 GB of RAM, and Mesa Intel HD Graphics 520. The benchmark triangular mesh models used in our numerical experiments are shown in Figure 4, arranged in increasing order of the number of vertices and faces, from top left to bottom right.
To provide the RGD method with a good initial mapping, we first apply the FPI method of [21], described in subsection 5.3. Specifically, we apply the FPI until the first increase in energy occurs. We adopted two different line-search strategies: one that uses MATLAB's $\texttt{fminbnd}$ and another that uses the quadratic/cubic interpolant of [11,§ 6.3.2], described in Appendix 9.
In all the experiments, we monitor the authalic energy $ E_A(f) := E_S(f) - {\mathcal A}(f) $ defined in subsection 2.2 instead of the normalized stretch energy (2.4), because the theory tells us that $ f $ is an area-preserving mapping when $ E_{A} = 0 $. Strictly speaking, in practice we never obtain a mapping that exactly preserves the area, since $ E_{A} $ never vanishes exactly; rather, we obtain an area-distortion-minimizing mapping. We also monitor the ratio between the standard deviation and the mean of the area ratio; this quantity was also considered in [8]. Finally, the computational time is always reported in seconds, and #Fs denotes the number of folding triangles.
Tables 1 and 2 report the numerical results for RGD for minimizing the normalized stretch energy $ E $, when run for a fixed maximum number of iterations (10 on the left and 100 on the right). Table 1 is for the RGD that uses the $\texttt{fminbnd}$ line-search strategy, while Table 2 is for the RGD that uses the quadratic/cubic interpolant from [11,§ 6.3.2]. Similarly to the tables, Figures 5 and 6 illustrate the convergence behavior of RGD in the same setting when run for 100 iterations.
$ 10 $ Iterations | $ 100 $ Iterations | |||||||||
Model Name | SD/Mean | $ E_{A}(f) $ | Time | #Fs | SD/Mean | $ E_{A}(f) $ | Time | #Fs | ||
Right Hand | 0.1950 | $ 2.17 \times 10^{-1} $ | 0.85 | 4 | 0.1459 | $ 1.28 \times 10^{-1} $ | 9.37 | 2 | ||
David Head | 0.0178 | $ 4.04 \times 10^{-3} $ | 1.66 | 0 | 0.0156 | $ 3.05 \times 10^{-3} $ | 14.95 | 0 | ||
Cortical Surface | 0.0214 | $ 4.68 \times 10^{-3} $ | 3.62 | 0 | 0.0200 | $ 4.15 \times 10^{-3} $ | 32.40 | 0 | ||
Bull | 0.1492 | $ 2.58 \times 10^{-1} $ | 4.99 | 4 | 0.1380 | $ 2.31 \times 10^{-1} $ | 43.27 | 1 | ||
Bulldog | 0.0369 | $ 1.41 \times 10^{-2} $ | 13.59 | 0 | 0.0343 | $ 1.27 \times 10^{-2} $ | 136.63 | 0 | ||
Lion Statue | 0.1935 | $ 4.77 \times 10^{-1} $ | 16.83 | 0 | 0.1922 | $ 4.69 \times 10^{-1} $ | 160.20 | 0 | ||
Gargoyle | 0.0690 | $ 5.28 \times 10^{-2} $ | 19.55 | 0 | 0.0653 | $ 4.84 \times 10^{-2} $ | 192.62 | 0 | ||
Max Planck | 0.0537 | $ 3.54 \times 10^{-2} $ | 16.52 | 0 | 0.0525 | $ 3.38 \times 10^{-2} $ | 161.01 | 0 | ||
Bunny | 0.0417 | $ 2.18 \times 10^{-2} $ | 20.59 | 0 | 0.0404 | $ 2.04 \times 10^{-2} $ | 226.99 | 0 | ||
Chess King | 0.0687 | $ 6.07 \times 10^{-2} $ | 52.36 | 21 | 0.0639 | $ 5.14 \times 10^{-2} $ | 518.18 | 17 | ||
Art Statuette | 0.0408 | $ 2.14 \times 10^{-2} $ | 140.59 | 0 | 0.0405 | $ 2.10 \times 10^{-2} $ | 1 111.67 | 0 | ||
Bimba Statue | 0.0514 | $ 3.31 \times 10^{-2} $ | 270.63 | 1 | 0.0511 | $ 3.29 \times 10^{-2} $ | 2 320.19 | 1 |
$ 10 $ Iterations | $ 100 $ Iterations | |||||||||
Model Name | SD/Mean | $ E_{A}(f) $ | Time | #Fs | SD/Mean | $ E_{A}(f) $ | Time | #Fs | ||
Right Hand | 0.1936 | $ 2.16 \times 10^{-1} $ | 0.36 | 4 | 0.1204 | $ 9.40 \times 10^{-2} $ | 4.07 | 1 | ||
David Head | 0.0178 | $ 4.04 \times 10^{-3} $ | 0.99 | 0 | 0.0156 | $ 3.04 \times 10^{-3} $ | 9.16 | 0 | ||
Cortical Surface | 0.0216 | $ 4.75 \times 10^{-3} $ | 1.40 | 0 | 0.0200 | $ 3.72 \times 10^{-3} $ | 16.01 | 0 | ||
Bull | 0.1492 | $ 2.59 \times 10^{-1} $ | 1.77 | 4 | 0.1348 | $ 2.19 \times 10^{-1} $ | 18.89 | 1 | ||
Bulldog | 0.0369 | $ 1.41 \times 10^{-2} $ | 6.60 | 0 | 0.0343 | $ 1.27 \times 10^{-2} $ | 61.93 | 0 | ||
Lion Statue | 0.1935 | $ 4.77 \times 10^{-1} $ | 7.75 | 0 | 0.1894 | $ 4.54 \times 10^{-1} $ | 76.76 | 0 | ||
Gargoyle | 0.0688 | $ 5.26 \times 10^{-2} $ | 7.81 | 0 | 0.0646 | $ 4.76 \times 10^{-2} $ | 80.52 | 0 | ||
Max Planck | 0.0537 | $ 3.54 \times 10^{-2} $ | 7.18 | 0 | 0.0525 | $ 3.39 \times 10^{-2} $ | 75.60 | 0 | ||
Bunny | 0.0417 | $ 2.18 \times 10^{-2} $ | 8.30 | 0 | 0.0390 | $ 1.91 \times 10^{-2} $ | 89.62 | 0 | ||
Chess King | 0.0692 | $ 6.07 \times 10^{-2} $ | 20.06 | 21 | 0.0647 | $ 5.23 \times 10^{-2} $ | 207.47 | 17 | ||
Art Statuette | 0.0408 | $ 2.14 \times 10^{-2} $ | 57.71 | 0 | 0.0405 | $ 2.10 \times 10^{-2} $ | 654.57 | 0 | ||
Bimba Statue | 0.0514 | $ 3.31 \times 10^{-2} $ | 70.83 | 1 | 0.0512 | $ 3.29 \times 10^{-2} $ | 775.36 | 1 |
A comparison of the computational times between the two line-search strategies shows that the line-search strategy with a quadratic/cubic interpolant (Table 2) is much more efficient than the line-search strategy that uses MATLAB's $\texttt{fminbnd}$ (Table 1). In many cases, the former even yields more accurate results than the latter. This is particularly evident for the Art Statuette (1 111.67 s versus 654.57 s) and the Bimba Statue (2 320.19 s versus 775.36 s) mesh models. The numerical results show that, in general, RGD can still decrease energy as the number of iterations increases.
Table 3 presents the results using a different stopping criterion instead of a fixed maximum number of iterations. Specifically, we keep track of the following quantity
$ \Delta E_{A}^{(k)} := \frac{E_{A}(f^{(k-1)})-E_{A}(f^{(k)})}{E_{A}(f^{(k-1)})}, $ |
Before bijectivity correction | After bijectivity correction | ||||||||||
Model Name | SD/Mean | $ E_{A}\big(f\big) $ | #Fs | SD/Mean | $ E_{A}\big(f\big) $ | #Fs | Time | #Its | |||
Right Hand | 0.2063 | $ 8.01 \times 10^{-2} $ | 3 | 0.1071 | $ 8.12 \times 10^{-2} $ | 0 | 6.67 | 127 | |||
David Head | 0.0123 | $ 1.87 \times 10^{-3} $ | 0 | --------------------- | 83.66 | 432 | |||||
Cortical Surface | 0.0187 | $ 4.23 \times 10^{-3} $ | 0 | --------------------- | 9.22 | 52 | |||||
Bull | 0.1520 | $ 2.37 \times 10^{-1} $ | 11 | 0.1403 | $ 2.40 \times 10^{-1} $ | 3 | 12.20 | 50 | |||
Bulldog | 0.0351 | $ 1.31 \times 10^{-2} $ | 0 | --------------------- | 72.64 | 60 | |||||
Lion Statue | 0.1940 | $ 4.79 \times 10^{-1} $ | 2 | 0.1938 | $ 4.79 \times 10^{-1} $ | 0 | 2.63 | 2 | |||
Gargoyle | 0.0704 | $ 5.17 \times 10^{-2} $ | 3 | 0.0682 | $ 5.14 \times 10^{-2} $ | 0 | 26.71 | 17 | |||
Max Planck | 0.0464 | $ 3.54 \times 10^{-2} $ | 0 | --------------------- | 10.58 | 8 | |||||
Bunny | 0.0417 | $ 2.17 \times 10^{-2} $ | 0 | --------------------- | 20.77 | 13 | |||||
Chess King | 0.0715 | $ 5.17 \times 10^{-2} $ | 43 | 0.0641 | $ 5.38 \times 10^{-2} $ | 17 | 370.68 | 107 | |||
Art Statuette | 0.0409 | $ 2.14 \times 10^{-2} $ | 0 | --------------------- | 31.53 | 3 | |||||
Bimba Statue | 0.0515 | $ 3.32 \times 10^{-2} $ | 2 | 0.0514 | $ 3.32 \times 10^{-2} $ | 0 | 19.24 | 2 |
The quantity $ \Delta E_{A}^{(k)} $ represents the relative improvement in the authalic energy $ E_{A}(f) $ between two successive iterates; we stop the RGD method when $ \Delta E_{A}^{(k)} $ falls below $ 10^{-3} $.
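A minimal sketch of this stopping rule (the variable names EA_prev and EA_curr are ours):

```matlab
% Relative improvement of the authalic energy between successive iterates.
dEA = (EA_prev - EA_curr) / EA_prev;
stop = dEA < 1e-3;   % terminate RGD once the improvement drops below 10^-3
```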
It is apparent from Table 3 that RGD stops after a different number of iterations for different mesh models. A comparison with Table 2 suggests that, for a few mesh models, especially Lion Statue, Art Statuette, and Bimba Statue, the improvement from performing more iterations is not significant. Table 3 also shows the values of the monitored quantities SD/Mean, $ E_{A}(f) $, and #Fs before and after the bijectivity correction described in subsection 5.2. In all cases where the bijectivity correction is applied, the number of folding triangles #Fs decreases, often to zero. Moreover, the post-processing for bijectivity correction improves the SD/Mean value in all cases; for the Right Hand mesh model, it even improves markedly, from 0.2063 to 0.1071. In one case (Gargoyle), the value of the authalic energy $ E_{A}(f) $ also improves.
Figure 7 shows the convergence behavior for the three smallest mesh models considered, namely Right Hand, David Head, and Cortical Surface, when the algorithm is run for many more iterations; here, 10 000. It shows that RGD keeps decreasing the authalic energy $ E_{A} $, albeit very slowly. The values of SD/Mean also improve for all three mesh models. The corresponding numerical results are reported in Table 4; the results for 100 iterations are also reported for easier comparison.
$ 100 $ Iterations | $ 10 000 $ Iterations | |||||||||
Model Name | SD/Mean | $ E_{A}(f) $ | Time | #Fs | SD/Mean | $ E_{A}(f) $ | Time | #Fs | ||
Right Hand | 0.1204 | $ 9.40 \times 10^{-2} $ | 4.07 | 1 | 0.0545 | $ 2.07 \times 10^{-2} $ | 431.28 | 0 | ||
David Head | 0.0156 | $ 3.04 \times 10^{-3} $ | 9.16 | 0 | 0.0029 | $ 1.01 \times 10^{-4} $ | 1 018.87 | 0 | ||
Cortical Surface | 0.0200 | $ 3.72 \times 10^{-3} $ | 16.01 | 0 | 0.0045 | $ 2.06 \times 10^{-4} $ | 1 328.56 | 0 |
We compute with MATLAB the eigenvalues of the Hessian matrix at the minimizer. Table 5 reports the smallest eigenvalue of the Hessian at the initial mapping (produced by the FPI method) and at the mapping obtained when the prescribed maximum number of RGD iterations is reached. The eigenvalues of the Hessian are computed by the MATLAB built-in function $\texttt{eigs}$ with the option $\texttt{smallestabs}$, requesting 2 000 eigenvalues. We observe that the Hessian eigenvalues are significantly closer to zero after running the RGD method, as expected.
Smallest eigenvalue of the Hessian | ||
Model Name | After FPI | After RGD |
Right Hand | $ -2.4429 \times 10^{0} $ | $ -1.9013 \times 10^{-1} $ |
David Head | $ -1.8481 \times 10^{0} $ | $ -2.8445 \times 10^{-6} $ |
Cortical Surface | $ -4.4940 \times 10^{-1} $ | $ -3.6768 \times 10^{-3} $ |
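A hedged sketch of this computation (assuming a variable H holding the sparse $ 3n \times 3n $ Hessian assembled as in Appendix 8):

```matlab
% 2000 eigenvalues of smallest magnitude, as in the reported experiments.
lambda = eigs(H, 2000, 'smallestabs');
lambda_min = min(lambda);   % smallest (most negative) eigenvalue found
```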
Figure 8 displays the spherical mappings resulting from applying our RGD algorithm to the benchmark mesh models considered, offering a qualitative insight into the goodness of the mappings obtained.
The proposed RGD method does not guarantee the produced mapping is bijective. To remedy this drawback, we introduce a post-processing method to ensure bijectivity in the mapping. This is achieved by employing a modified version of the mean value coordinates, as described in [13].
Suppose we have a spherical mapping $ f\colon {\mathcal M}\to S^{2} $ that may not be bijective. First, we map the spherical image to the extended complex plane $ \overline{\mathbb{C}} = \mathbb{C}\cup\{\infty\} $ by the stereographic projection
$ \varPi_{ S^{2}}(x, y, z) = \frac{x}{1-z} + \mathrm{i} \, \frac{y}{1-z}. $

(5.1)
Denote the mapping $ h = \varPi_{ S^{2}}\circ f $ and the complex-valued vector $ \mathbf{h}_i = h(v_i) $, for $ v_i\in\mathcal{V}({\mathcal M}) $. The folding triangular faces in the southern hemisphere are now mapped into $ \mathbb{D}\subset\overline{\mathbb{C}} $ and can be unfolded by solving the linear system
$ [L_M(h)]_{\mathtt{I}, \mathtt{I}} \, \widetilde{\mathbf{h}}_{\mathtt{I}} = -[L_M(h)]_{\mathtt{I}, \mathtt{B}} \, \mathbf{h}_{\mathtt{B}}, $

(5.2)
where $ \mathtt{I} = \{ i \mid | h(v_i) | < r \} $ denotes the vertex index set with $ r $ being a value slightly larger than $ 1 $, e.g., $ r = 1.2 $, $ \mathtt{B} = \{1, \ldots, n\} \backslash \mathtt{I} $, and $ L_{M} $ is the Laplacian matrix defined as
$ [L_M(h)]_{i,j} = \begin{cases} -\sum_{[v_i, v_j, v_k]\in {\mathcal F}( {\mathcal M})} [\omega_M(h)]_{i,j,k} & \text{if } [v_i, v_j]\in {\mathcal E}( {\mathcal M}), \\ -\sum_{\ell\neq i} [L_M(h)]_{i,\ell} & \text{if } j = i, \\ 0 & \text{otherwise}, \end{cases} $

(5.3a)
with $ \omega_M(h) $ being a variant of the mean value weight [13] defined as
$ [\omega_M(h)]_{i,j,k} = \frac{1}{\| \mathbf{h}_i - \mathbf{h}_j \|} \tan\frac{\varphi_{i,j}^k(h)}{2}, $

(5.3b)
in which $ \varphi_{i, j}^k(h) $ is the angle opposite to the edge $ [h(v_j), h(v_k)] $ at the point $ h(v_i) $ on $ h({\mathcal M}) $, as illustrated in Figure 9. Then, the mapping is updated by replacing $ \mathbf{h}_{\mathtt{I}} $ with $ \widetilde{\mathbf{h}_{\mathtt{I}}} $. Next, an inversion
$ \mathrm{Inv}(z) = \frac{1}{\bar{z}} $

(5.4)
is performed to reverse the positions of the southern and northern hemispheres. Then, the linear system (5.2) is solved again to unfold the triangular faces originally located in the northern hemisphere. We denote the updated mapping as $ \widetilde{h} $. Ultimately, the corrected mapping is given by $ \varPi_{ S^2}^{-1}\circ \widetilde{h} $, where $ \varPi_{ S^2}^{-1} $ denotes the inverse stereographic projection
$ \varPi_{ S^2}^{-1}(u + \mathrm{i} v) = \left( \frac{2u}{u^2+v^2+1}, \ \frac{2v}{u^2+v^2+1}, \ \frac{u^2+v^2-1}{u^2+v^2+1} \right). $

(5.5)
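A compact MATLAB sketch of (5.1) and (5.5) (our notation: F is an $ n\times 3 $ array with rows on $ S^{2} $, h a complex $ n $-vector):

```matlab
stereo    = @(F) (F(:,1) + 1i*F(:,2)) ./ (1 - F(:,3));                      % (5.1)
invstereo = @(h) [2*real(h), 2*imag(h), abs(h).^2 - 1] ./ (abs(h).^2 + 1);  % (5.5)
```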
In our numerical experiments, we perform the bijectivity correction both after the FPI method and after the RGD method, if needed.
To validate the algorithm, we compare it with the numerical results on the same models obtained using the FPI method. This method was proposed by Yueh et al. [21] to compute area-preserving boundary mappings in the parameterization of 3-manifolds.
The FPI method of stretch energy minimization (SEM) is performed as follows. First, a spherical harmonic mapping $ f^{(0)}\colon {\mathcal M}\to S^2 $ is computed by the Laplacian linearization method [4]. Suppose a spherical mapping $ f^{(k)} $ has been computed. We map the spherical image to the extended complex plane by the stereographic projection (5.1) together with an inversion (5.4), and define the mapping $ h^{(k)}\colon {\mathcal M}\to\mathbb{C}\cup\{\infty\} $ as
$ h^{(k)}(v) = \mathrm{Inv} \circ\varPi_{ S^2}\circ f^{(k)}(v). $ |
The mapping $ h^{(k)} $ is represented as a complex-valued vector with entries $ \mathbf{h}^{(k)}_i = h^{(k)}(v_i) $. Next, we define the interior vertex index set as
$ \mathtt{I}^{(k)} = \{ i \mid | h^{(k)}(v_i) | < r \}, $ |
which collects the indices of vertices in the circle centered at the origin of radius $ r $. Other vertices are defined as boundary vertices and the associated index set is defined as
$ \mathtt{B}^{(k)} = \{1, \ldots, n\} \backslash \mathtt{I}^{(k)}. $ |
Then, the interior mapping is updated by solving the linear system
$ [L_S(f^{(k)})]_{\mathtt{I}^{(k)}, \mathtt{I}^{(k)}} \mathbf{h}^{(k+1)}_{\mathtt{I}^{(k)}} = -[L_S(f^{(k)})]_{\mathtt{I}^{(k)}, \mathtt{B}^{(k)}} \mathbf{h}^{(k)}_{\mathtt{B}^{(k)}}, $ |
while the boundary mapping remains the same, i.e., $ \mathbf{h}^{(k+1)}_{\mathtt{B}^{(k)}} = \mathbf{h}^{(k)}_{\mathtt{B}^{(k)}} $. The updated spherical mapping $ f^{(k+1)}\colon {\mathcal M}\to S^2 $ is computed by
$ f^{(k+1)}(v_i) = \varPi_{ S^2}^{-1}(\mathbf{h}^{(k+1)}_i), $ |
where $ \varPi_{ S^2}^{-1} $ is the inverse stereographic projection (5.5). This procedure is summarized by Algorithm 2.
Algorithm 2: FPI method of the SEM [21].
1 Given a genus-zero closed mesh $ {\mathcal M} $, a tolerance $ \varepsilon $, and a radius $ r $ (e.g., $ \varepsilon = 10^{-6} $, $ r = 1.2 $);
Result: A spherical area-preserving parameterization $ \mathbf{f} $.
2 Compute a spherical conformal map $ \mathbf{g} $ using the Laplacian linearization method [4];
3 Perform the stereographic projection $ \mathbf{h}_\ell = \varPi_{ S^2}(\mathbf{g}_\ell) $, $ \ell = 1, \ldots, n $, as in (5.1);
4 Let $ \delta = \infty $;
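The interior update in the loop of Algorithm 2 amounts to one sparse linear solve per iteration; a hedged sketch (our variable names) reads:

```matlab
% h : complex n-vector of planar vertex images; LS : L_S(f^(k)); r = 1.2.
I = abs(h) < r;                       % interior vertex indices
B = ~I;                               % boundary vertex indices
h(I) = -LS(I,I) \ (LS(I,B) * h(B));   % update interior, keep boundary fixed
```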
Table 6 reports the numerical results for the FPI method applied to the twelve benchmark mesh models considered. Two different stopping criteria are considered: increase in authalic energy (columns 2 to 6) and maximum number of iterations (columns 7 to 10). From Table 6, it appears that performing more iterations of the FPI method does not necessarily improve the mapping $ f $. In most cases, except for David Head and Cortical Surface, the authalic energy and the values of SD/Mean increase. This motivates us to use the FPI method only to calculate a good initial mapping and then switch to RGD.
Energy $ E_{A} $ Increased | $ 100 $ Iterations | ||||||||||
Model Name | SD/Mean | $ E_{A}(f) $ | Time | #Fs | #Its | SD/Mean | $ E_{A}(f) $ | Time | #Fs | ||
Right Hand | 0.2050 | $ 2.86 \times 10^{-1} $ | 0.08 | 12 | 6 | 0.4598 | $ 2.92 \times 10^{0} $ | 1.35 | 67 | ||
David Head | 0.0191 | $ 4.66 \times 10^{-3} $ | 0.35 | 0 | 8 | 0.0169 | $ 3.58 \times 10^{-3} $ | 4.30 | 0 | ||
Cortical Surface | 0.0220 | $ 4.93 \times 10^{-3} $ | 0.85 | 0 | 15 | 0.0174 | $ 3.21 \times 10^{-3} $ | 5.62 | 0 | ||
Bull | 0.1504 | $ 2.74 \times 10^{-1} $ | 1.29 | 8 | 18 | 0.1876 | $ 4.59 \times 10^{-1} $ | 6.90 | 40 | ||
Bulldog | 0.0381 | $ 1.49 \times 10^{-2} $ | 2.61 | 0 | 10 | 0.1833 | $ 3.99 \times 10^{-1} $ | 22.22 | 53 | ||
Lion Statue | 0.1940 | $ 5.10 \times 10^{-1} $ | 1.12 | 1 | 4 | 0.2064 | $ 5.28 \times 10^{-1} $ | 23.67 | 38 | ||
Gargoyle | 0.0704 | $ 5.47 \times 10^{-2} $ | 2.64 | 0 | 11 | 4.1020 | $ 4.85 \times 10^{2} $ | 36.10 | 1955 | ||
Max Planck | 0.0544 | $ 3.67 \times 10^{-2} $ | 1.35 | 0 | 5 | 0.1844 | $ 1.67 \times 10^{1} $ | 25.99 | 144 | ||
Bunny | 0.0423 | $ 2.24 \times 10^{-2} $ | 6.40 | 0 | 20 | 0.0394 | $ 3.96 \times 10^{-2} $ | 35.78 | 2 | ||
Chess King | 0.0713 | $ 6.91 \times 10^{-2} $ | 6.35 | 9 | 8 | 1.0903 | $ 1.79 \times 10^{1} $ | 88.04 | 1655 | ||
Art Statuette | 0.0411 | $ 2.15 \times 10^{-2} $ | 23.27 | 0 | 7 | 0.0908 | $ 1.07 \times 10^{-1} $ | 342.95 | 126 | ||
Bimba Statue | 0.0515 | $ 3.32 \times 10^{-2} $ | 29.94 | 6 | 9 | 0.0932 | $ 7.42 \times 10^{-2} $ | 305.00 | 144 |
In this section, we compare the numerical results of our RGD method with the adaptive area-preserving parameterization for genus-zero closed surfaces proposed by Choi et al. [7]. The computational procedure is summarized as follows. First, the mesh is punctured by removing two triangular faces $ \tau_1 $ and $ \tau_2 $ that share a common edge. Then, the FPI of the SEM [22] is applied to compute an area-preserving initial mapping $ {g}_0\colon {\mathcal M}\backslash\{\tau_1, \tau_2\}\to\mathbb{D} := \{\mathbf{x}\in\mathbb{R}^2 \mid \|\mathbf{x}\|_2\leq 1\} $. The Beltrami coefficient of the mapping $ {g}_0 $ is denoted by $ \mu_{g_0} $. Next, a quasi-conformal map $ g\colon {\mathcal M}\backslash\{\tau_1, \tau_2\}\to\mathbb{D} $ with $ \|\mu_g\|_\infty = \|\lambda\mu_{g_0}\|_\infty < 1 $ is constructed [9]. The scaling factor $ \lambda\in[0, 1] $ is chosen to be $ 0.2 $ in practice. Then, the optimal mass transport mapping $ h_r\colon {\mathcal M}\backslash\{\tau_1, \tau_2\}\to r\mathbb{D} $ is computed by the method proposed by Zhao et al. [23], with the optimal radius $ r $ satisfying
$ r = \mathop{\mathrm{argmin}}\limits_{r} \int \big| \mu_{(h_r\circ rg)^{-1}}(z) \big|^2 \, \mathrm{d}z. $
The final spherical area-preserving parameterization is obtained by the composition mapping $ f = \varPi_{ S^{2}}^{-1}\circ h_r \circ rg $, where $ \varPi_{ S^{2}}^{-1} $ denotes the inverse stereographic projection (5.5).
Table 7 reports on the numerical results. The algorithm of Choi et al. [7] was run with the default parameters found in the code package*. The only satisfactory results are those for the David Head mesh model. However, even in this case, the values of the monitored quantities (SD/Mean, $ E_{A}(f) $, and computation time) do not compete with those obtained by running our RGD method for only ten iterations; compare with the fourth column of Table 2. Our method is much more efficient and accurate and provides mappings with better bijectivity properties.
*Available at https://www.math.cuhk.edu.hk/ptchoi/files/adaptiveareapreservingmap.zip.
Model Name | SD/Mean | $ E_{A}(f) $ | Time | #Foldings |
Right Hand | 18.3283 | $ 4.84 \times 10^{3} $ | 218.03 | 672 |
David Head | 0.0426 | $ 2.27 \times 10^{-2} $ | 298.76 | 0 |
Cortical Surface | 0.6320 | $ 1.14 \times 10^{0} $ | 420.20 | 10 |
Bull | 8.5565 | $ 1.82 \times 10^{3} $ | 34.42 | 335 |
Bulldog | 9.2379 | $ 1.22 \times 10^{3} $ | 183.94 | 338 |
Lion Statue | 0.2626 | $ 8.96 \times 10^{-1} $ | 1498.91 | 540 |
Gargoyle | 0.3558 | $ 1.30 \times 10^{0} $ | 1483.35 | 571 |
Max Planck | 11.6875 | $ 1.49 \times 10^{3} $ | 195.39 | 575 |
Bunny | 27.6014 | $ 8.94 \times 10^{3} $ | 157.87 | 208 |
Chess King | 11.8300 | $ 1.65 \times 10^{3} $ | 608.55 | 948 |
Art Statuette | 394.4414 | $ 9.93 \times 10^{0} $ | 2284.79 | 2242 |
Bimba Statue | 0.5110 | $ 2.01 \times 10^{0} $ | 16 773.34 | 11 821 |
In this section, we compare the numerical results of our RGD method with the spherical optimal transportation mapping proposed by Cui et al. [10]. The computational procedure can be summarized as follows. First, the algorithm calculates a conformal map from a genus-zero closed surface $ {\mathcal M} $ to the unit sphere and then normalizes the area of $ {\mathcal M} $. Iteratively, a power diagram is computed from the sphere and the power radii. The gradient of the energy is calculated from the vertices, followed by the Hessian matrix, which depends on the source measure. A linear system is solved at every step to update the power radii, and a line-search strategy is used. The step length parameter is chosen so that all cells of the power diagram are non-degenerate. If some cells are degenerate, the step length is halved and the power diagram is recomputed until all cells are non-degenerate. After the loop, the centroid of each power cell in the power diagram is computed, and the mapping from each vertex to the centroid of its power cell is returned. The algorithmic details can be found in [10,Algorithm 1].
Here, we run the algorithm of [10] with the default parameters found in the code package†. For each mesh model, the target mesh was taken to be its spherical harmonic mapping, as suggested in [10].
†Available at https://www3.cs.stonybrook.edu/gu/software/SOT/index.html
From Table 8, it appears that the method of [10], at least for the models on which it ran successfully, is capable of computing bijective spherical mappings within a relatively moderate number of iterations. However, the area-preserving properties, as measured by SD/Mean and $ E_{A}(f) $, are inferior to those obtained with our RGD method for all four mesh models considered.
Model Name | SD/Mean | $ E_{A}(f) $ | #Foldings | # Iterations |
David Head | 0.4189 | $ 2.25 \times 10^{0} $ | 0 | 27 |
Cortical Surface | 0.5113 | $ 3.11 \times 10^{0} $ | 0 | 27 |
Bulldog | 0.8665 | $ 1.00 \times 10^{1} $ | 0 | 33 |
Max Planck | 0.5619 | $ 4.38 \times 10^{0} $ | 0 | 25 |
In this section, we investigate the numerical stability of our scheme. To this end, we perturb each vertex of every mesh model along its vertex normal by Gaussian random noise with a given noise variance $ \sigma_{\mathrm{noise}} $. We then re-run the entire algorithm, i.e., we first perform a few iterations of the FPI method (Algorithm 2) to obtain a mapping that is used as an initial mapping for the RGD method (Algorithm 1). We then calculate a relative error on the authalic energy, defined below in (5.6). We repeat this procedure for different values of the noise variance $ \sigma_{\mathrm{noise}} $.
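A hedged sketch of the perturbation step (assuming arrays V and T holding the vertex coordinates and the face list):

```matlab
% Perturb each vertex along its unit vertex normal by Gaussian noise.
TR = triangulation(T, V);
N  = vertexNormal(TR);                            % n-by-3 unit vertex normals
Vn = V + sigma_noise * randn(size(V,1), 1) .* N;  % noisy vertex coordinates
```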
Figure 10 shows the original mesh model of the Lion Statue (panel (a)) and two noisy versions (panels (b) and (c)).
We compute the following relative error on the authalic energy
$ \text{err-}E_{A}(f, \tilde{f}) := \frac{\#\text{Vertices} \times \big| E_{A}(\tilde{f}) - E_{A}(f) \big|}{\sum_{v\in\mathcal{V}( {\mathcal M})} \| \tilde{v} - v \|_{2}}, $

(5.6)
where $ {E}_{A}\big(\tilde{f}\big) $ and $ E_{A}(f) $ are the authalic energies after 100 iterations of RGD for the mesh model with and without noise, respectively, and $ \widetilde{v} $ and $ v $ denote the coordinates of the vertices of the mesh model with and without noise, respectively.
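In MATLAB, (5.6) reads as follows (our variable names):

```matlab
% V, Vt : n-by-3 vertex coordinates without/with noise;
% EA, EAt : authalic energies of the corresponding mappings.
err = size(V,1) * abs(EAt - EA) / sum(vecnorm(Vt - V, 2, 2));
```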
Tables 9 and 10 report the numerical results for all the noisy mesh models, with $ \sigma_{\mathrm{noise}} = 1 \times 10^{-3} $ and $ 5 \times 10^{-3} $, respectively. The values of the authalic energy for the noise-free mesh models from Table 2 are reported in the second-to-last column for easier comparison. We observe that the values of SD/Mean and $ E_{A}\big(\tilde{f}\big) $ remain bounded and reasonable with respect to the original mesh models, demonstrating that our method is stable under noise.
$ \sigma_{\mathrm{noise}} = 1 \times 10^{-3} $ | |||||||||
Model Name | SD/Mean | $ E_{A}\big(\tilde{f}\big) $ | Time | #Fs | #Fs after b.c. | $ E_{A}(f) $ | err-$ E_{A}\big(f, \tilde{f}\big) $ | ||
Right Hand | 0.1254 | $ 1.03 \times 10^{-1} $ | 3.13 | 14 | 1 | $ 9.40 \times 10^{-2} $ | $ 5.84 \times 10^{-2} $ | ||
David Head | 0.0108 | $ 1.44 \times 10^{-3} $ | 9.98 | 0 | 0 | $ 3.04 \times 10^{-3} $ | $ 4.47 \times 10^{-3} $ | ||
Cortical Surface | 0.0187 | $ 3.82 \times 10^{-3} $ | 12.78 | 0 | 0 | $ 3.72 \times 10^{-3} $ | $ 3.45 \times 10^{-3} $ | ||
Bull | 0.2528 | $ 9.23 \times 10^{-1} $ | 16.74 | 19 | 6 | $ 2.19 \times 10^{-1} $ | $ 1.12 \times 10^{0} $ | ||
Bulldog | 0.0273 | $ 6.94 \times 10^{-3} $ | 59.26 | 0 | 0 | $ 1.27 \times 10^{-2} $ | $ 7.19 \times 10^{-6} $ | ||
Lion Statue | 0.1630 | $ 3.27 \times 10^{-1} $ | 66.78 | 1 | 0 | $ 4.54 \times 10^{-1} $ | $ 1.36 \times 10^{-4} $ | ||
Gargoyle | 0.1677 | $ 3.66 \times 10^{-1} $ | 65.72 | 0 | 0 | $ 4.76 \times 10^{-2} $ | $ 4.45 \times 10^{-3} $ | ||
Max Planck | 0.0464 | $ 2.67 \times 10^{-2} $ | 64.30 | 0 | 0 | $ 3.39 \times 10^{-2} $ | $ 5.27 \times 10^{-5} $ | ||
Bunny | 0.0297 | $ 1.11 \times 10^{-2} $ | 81.31 | 0 | 0 | $ 1.91 \times 10^{-2} $ | $ 5.32 \times 10^{-3} $ | ||
Chess King | 0.0747 | $ 6.08 \times 10^{-2} $ | 193.04 | 119 | 38 | $ 5.23 \times 10^{-2} $ | $ 1.03 \times 10^{-3} $ | ||
Art Statuette | 0.0235 | $ 6.88 \times 10^{-3} $ | 636.30 | 0 | 0 | $ 2.10 \times 10^{-2} $ | $ 2.54 \times 10^{-2} $ | ||
Bimba Statue | 0.0541 | $ 3.33 \times 10^{-2} $ | 759.36 | 2 | 0 | $ 3.29 \times 10^{-2} $ | $ 5.81 \times 10^{-4} $ |
$ \sigma_{\mathrm{noise}} = 5 \times 10^{-3} $ | |||||||||
Model Name | SD/Mean | $ E_{A}\big(\tilde{f}\big) $ | Time | #Fs | #Fs after b.c. | $ E_{A}(f) $ | err-$ E_{A}\big(f, \tilde{f}\big) $ | ||
Right Hand | 0.2304 | $ 5.77 \times 10^{-1} $ | 3.12 | 10 | 4 | $ 9.40 \times 10^{-2} $ | $ 1.45 \times 10^{0} $ | ||
David Head | 0.0211 | $ 5.51 \times 10^{-3} $ | 9.43 | 0 | 0 | $ 3.04 \times 10^{-3} $ | $ 6.84 \times 10^{-3} $ | ||
Cortical Surface | 0.0301 | $ 1.02 \times 10^{-2} $ | 12.72 | 0 | 0 | $ 3.72 \times 10^{-3} $ | $ 2.28 \times 10^{-1} $ | ||
Bull | 0.1723 | $ 2.98 \times 10^{-1} $ | 17.40 | 18 | 4 | $ 2.19 \times 10^{-1} $ | $ 1.19 \times 10^{-1} $ | ||
Bulldog | 0.0629 | $ 3.86 \times 10^{-2} $ | 61.75 | 2 | 0 | $ 1.27 \times 10^{-2} $ | $ 3.23 \times 10^{-5} $ | ||
Lion Statue | 0.1044 | $ 1.36 \times 10^{-1} $ | 69.55 | 0 | 0 | $ 4.54 \times 10^{-1} $ | $ 3.36 \times 10^{-4} $ | ||
Gargoyle | 0.0974 | $ 9.68 \times 10^{-2} $ | 69.32 | 1 | 0 | $ 4.76 \times 10^{-2} $ | $ 6.87 \times 10^{-4} $ | ||
Max Planck | 0.0782 | $ 7.11 \times 10^{-2} $ | 67.30 | 0 | 0 | $ 3.39 \times 10^{-2} $ | $ 2.75 \times 10^{-4} $ | ||
Bunny | 0.0455 | $ 2.57 \times 10^{-2} $ | 80.93 | 0 | 0 | $ 1.91 \times 10^{-2} $ | $ 4.40 \times 10^{-3} $ | ||
Chess King | 0.1521 | $ 2.72 \times 10^{-1} $ | 187.57 | 95 | 35 | $ 5.23 \times 10^{-2} $ | $ 2.64 \times 10^{-2} $ | ||
Art Statuette | 0.0981 | $ 5.41 \times 10^{-2} $ | 617.75 | 0 | 0 | $ 2.10 \times 10^{-2} $ | $ 5.92 \times 10^{-2} $ | ||
Bimba Statue | 0.1083 | $ 1.25 \times 10^{-1} $ | 743.00 | 76 | 0 | $ 3.29 \times 10^{-2} $ | $ 1.68 \times 10^{-1} $ |
A registration mapping between surfaces $ {\mathcal M}_0 $ and $ {\mathcal M}_1 $ is a bijective mapping $ g \colon {\mathcal M}_0\to {\mathcal M}_1 $. An ideal registration mapping keeps important landmarks aligned while preserving specified geometric properties. In this section, we demonstrate a framework for the computation of landmark-aligned area-preserving parameterizations of genus-zero closed surfaces.
Suppose a set of landmark pairs $ \{(p_i, q_i) \mid p_i\in {\mathcal M}_0, \ q_i\in {\mathcal M}_1\}_{i = 1}^m $ is given. The goal is to compute an area-preserving simplicial mapping $ g\colon {\mathcal M}_0\to {\mathcal M}_1 $ that satisfies $ g(p_i) \approx q_i $, for $ i = 1, \ldots, m $. First, we compute area-preserving parameterizations $ f_0\colon {\mathcal M}_0\to S^{2} $ and $ f_1\colon {\mathcal M}_1\to S^{2} $ of surfaces $ {\mathcal M}_0 $ and $ {\mathcal M}_1 $, respectively. The simplicial registration mapping $ h \colon S^{2} \to S^{2} $ that satisfies $ h\circ f_0(p_i) = f_1(q_i) $, for $ i = 1, \ldots, m $, can be carried out by minimizing the registration energy
$ E_R(h) = E_S(h) + \sum\limits_{i = 1}^m \lambda_i \| h\circ f_0(p_i) - f_1(q_i) \|^2. $ |
Let
$ \mathbf{h} = \begin{bmatrix} (h\circ f_0(v_1)) ^{\top} \\ \vdots \\ (h\circ f_0(v_n)) ^{\top} \end{bmatrix} = \begin{bmatrix} \mathbf{h}^1 & \mathbf{h}^2 & \mathbf{h}^3 \end{bmatrix} $
be the matrix representation of $ h $. The gradient of $ E_R $ with respect to $ \mathbf{h} $ can be formulated as
$ \nabla E_R(h) = 2 \, (I_3\otimes L_S(h)) {\rm{vec}}(\mathbf{h}) + {\rm{vec}}(\mathbf{r}), $ |
where $ \mathbf{r} $ is the matrix of the same size as $ \mathbf{h} $ given by
$ \mathbf{r}(i, :) = \begin{cases} 2\lambda_i \big( \mathbf{h}(i,:) - (f_1(q_i)) ^{\top} \big) & \text{if } p_i \text{ is a landmark}, \\ (0, 0, 0) & \text{otherwise}. \end{cases} $
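A hedged sketch of this gradient computation (our variable names: H is the $ n\times 3 $ matrix $ \mathbf{h} $, LSH is $ L_S(h) $, lambda is an $ n $-vector that is zero off the landmark rows, and F1Q holds $ f_1(q_i) $ on the landmark rows):

```matlab
R = 2 * lambda .* (H - F1Q);   % rows of r; (0,0,0) for non-landmarks
gradER = 2 * LSH * H + R;      % n-by-3 form of the gradient; vec with gradER(:)
```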
In practice, we define the midpoint $ c_i $ of each landmark pair on $ S^{2} $ as
$ c_i = \frac{1}{2}( f_0(p_i) + f_1(q_i)), $ |
for $ i = 1, \ldots, m $, and compute $ h_0 $ and $ h_1 $ on $ S^{2} $ that satisfy $ h_0\circ f_0(p_i) = c_i $ and $ h_1\circ f_1(q_i) = c_i $, respectively. Ultimately, the registration mapping $ g\colon {\mathcal M}_0\to {\mathcal M}_1 $ is obtained by the composition mapping $ g = f_1^{-1} \circ h_1^{-1} \circ h_0 \circ f_0 $. Figure 11 schematizes this composition of functions for the landmark-aligned morphing process from one brain to another.
A landmark-aligned morphing process from $ {\mathcal M}_0 $ to $ {\mathcal M}_1 $ can be constructed by the linear homotopy $ H\colon {\mathcal M}_0\times[0, 1]\to\mathbb{R}^3 $ defined as
$ H(v, t) = (1-t) \, v + t \, g(v). $

(5.7)
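A one-line sketch of (5.7) for producing morph snapshots (assuming V0 holds the vertices of $ {\mathcal M}_0 $ and GV their images $ g(v) $):

```matlab
H = @(t) (1 - t) * V0 + t * GV;   % linear homotopy (5.7); H(0) = V0, H(1) = GV
```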
In Figure 12, we demonstrate the morphing process from one brain to another via four snapshots at different values of $ t $. The brain surfaces are obtained from the source code package in [9].
In this paper, we introduced an RGD method for computing spherical area-preserving mappings of genus-zero closed surfaces. Our approach combines tools from Riemannian optimization and computational geometry to minimize the normalized stretch energy. The proposed algorithm has theoretically guaranteed convergence and is accurate, efficient, and robust. We tested two different line-search strategies and conducted extensive numerical experiments on various mesh models to demonstrate the algorithm's stability and effectiveness. By comparing with three existing methods for computing area-preserving mappings, we demonstrated that our algorithm is more efficient than these state-of-the-art methods. Moreover, we showed that our approach is stable and robust even when the mesh model undergoes small perturbations. Finally, we applied our algorithm to the practical problem of landmark-aligned surface registration between two human brain models.
Several directions remain for further research. In particular, we would like to enhance the convergence speed of the proposed algorithm while keeping the computational cost low. One potential improvement would be to use appropriate Riemannian generalizations of the conjugate gradient method or the limited-memory BFGS (L-BFGS) method, as suggested in [17]. Another research direction may target genus-one or higher-genus closed surfaces.
In this appendix, we describe the calculation of the image area of a simplicial mapping $ f $ and its partial derivatives. Let $ \tau = [v_i, v_j, v_k] $ and denote $ f_{ij}^\ell = f_{i}^\ell - f_{j}^\ell $, $ f_{ik}^\ell = f_{i}^\ell - f_{k}^\ell $, and $ f_{jk}^\ell = f_{j}^\ell - f_{k}^\ell $, for $ \ell = 1, 2, 3 $. The image area of $ f $ can be formulated as
$ {\mathcal A}(f) = \sum\limits_{\tau\in {\mathcal F}( {\mathcal M})} |f(\tau)| = \sum\limits_{\tau\in {\mathcal F}( {\mathcal M})} \frac{1}{2} \sqrt{ {\mathcal A}_{12}(f|_\tau)^2 + {\mathcal A}_{13}(f|_\tau)^2 + {\mathcal A}_{23}(f|_\tau)^2}, $ |
where
$ {\mathcal A}_{12}(f|_{\tau}) = f_{ij}^1 f_{ik}^2 - f_{ij}^2 f_{ik}^1, \quad {\mathcal A}_{13}(f|_{\tau}) = f_{ij}^1 f_{ik}^3 - f_{ij}^3 f_{ik}^1, \quad {\mathcal A}_{23}(f|_{\tau}) = f_{ij}^2 f_{ik}^3 - f_{ij}^3 f_{ik}^2. $ |
The functionals $ {\mathcal A}_{12} $, $ {\mathcal A}_{13} $ and $ {\mathcal A}_{23} $ measure the image area of mappings $ \varPi_{P_{xy}}\circ f $, $ \varPi_{P_{xz}}\circ f $ and $ \varPi_{P_{yz}}\circ f $, respectively. The partial derivatives of $ {\mathcal A}(f|_{\tau}) $ can be formulated as
$ \begin{aligned}
\frac{\partial}{\partial f_{i}^{1}} {\mathcal A}(f|_{\tau}) &= \frac{1}{4 {\mathcal A}(f|_{\tau})} \left( {\mathcal A}_{12}(f|_{\tau})\, f_{jk}^{2} + {\mathcal A}_{13}(f|_{\tau})\, f_{jk}^{3} \right), \\
\frac{\partial}{\partial f_{i}^{2}} {\mathcal A}(f|_{\tau}) &= -\frac{1}{4 {\mathcal A}(f|_{\tau})} \left( {\mathcal A}_{12}(f|_{\tau})\, f_{jk}^{1} - {\mathcal A}_{23}(f|_{\tau})\, f_{jk}^{3} \right), \\
\frac{\partial}{\partial f_{i}^{3}} {\mathcal A}(f|_{\tau}) &= -\frac{1}{4 {\mathcal A}(f|_{\tau})} \left( {\mathcal A}_{13}(f|_{\tau})\, f_{jk}^{1} - {\mathcal A}_{23}(f|_{\tau})\, f_{jk}^{2} \right), \\
\frac{\partial}{\partial f_{j}^{1}} {\mathcal A}(f|_{\tau}) &= -\frac{1}{4 {\mathcal A}(f|_{\tau})} \left( {\mathcal A}_{12}(f|_{\tau})\, f_{ik}^{2} + {\mathcal A}_{13}(f|_{\tau})\, f_{ik}^{3} \right), \\
\frac{\partial}{\partial f_{j}^{2}} {\mathcal A}(f|_{\tau}) &= \frac{1}{4 {\mathcal A}(f|_{\tau})} \left( {\mathcal A}_{12}(f|_{\tau})\, f_{ik}^{1} - {\mathcal A}_{23}(f|_{\tau})\, f_{ik}^{3} \right), \\
\frac{\partial}{\partial f_{j}^{3}} {\mathcal A}(f|_{\tau}) &= \frac{1}{4 {\mathcal A}(f|_{\tau})} \left( {\mathcal A}_{13}(f|_{\tau})\, f_{ik}^{1} + {\mathcal A}_{23}(f|_{\tau})\, f_{ik}^{2} \right), \\
\frac{\partial}{\partial f_{k}^{1}} {\mathcal A}(f|_{\tau}) &= \frac{1}{4 {\mathcal A}(f|_{\tau})} \left( {\mathcal A}_{12}(f|_{\tau})\, f_{ij}^{2} + {\mathcal A}_{13}(f|_{\tau})\, f_{ij}^{3} \right), \\
\frac{\partial}{\partial f_{k}^{2}} {\mathcal A}(f|_{\tau}) &= -\frac{1}{4 {\mathcal A}(f|_{\tau})} \left( {\mathcal A}_{12}(f|_{\tau})\, f_{ij}^{1} - {\mathcal A}_{23}(f|_{\tau})\, f_{ij}^{3} \right), \\
\frac{\partial}{\partial f_{k}^{3}} {\mathcal A}(f|_{\tau}) &= -\frac{1}{4 {\mathcal A}(f|_{\tau})} \left( {\mathcal A}_{13}(f|_{\tau})\, f_{ij}^{1} + {\mathcal A}_{23}(f|_{\tau})\, f_{ij}^{2} \right).
\end{aligned} $ |
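These per-triangle formulas translate directly into code. The following Python sketch (an illustration under the stated conventions, not the authors' implementation) returns $ |f(\tau)| $ and the $ 3\times 3 $ array of partial derivatives for a single triangle, with rows indexed by the vertices $ v_i, v_j, v_k $ and columns by $ \ell = 1, 2, 3 $.

```python
import numpy as np

def triangle_area_and_grad(fi, fj, fk):
    """Image area |f(tau)| of tau = [v_i, v_j, v_k] and its nine partial
    derivatives, following the formulas above; fi, fj, fk are length-3
    arrays (the rows of f). The full area A(f) sums |f(tau)| over faces."""
    fij, fik, fjk = fi - fj, fi - fk, fj - fk
    # A_{12}, A_{13}, A_{23}: signed areas of the projections onto the
    # xy-, xz-, and yz-planes, respectively
    A12 = fij[0] * fik[1] - fij[1] * fik[0]
    A13 = fij[0] * fik[2] - fij[2] * fik[0]
    A23 = fij[1] * fik[2] - fij[2] * fik[1]
    area = 0.5 * np.sqrt(A12**2 + A13**2 + A23**2)
    c = 1.0 / (4.0 * area)
    grad = np.array([
        [ c * (A12 * fjk[1] + A13 * fjk[2]),   # d/df_i^1
         -c * (A12 * fjk[0] - A23 * fjk[2]),   # d/df_i^2
         -c * (A13 * fjk[0] - A23 * fjk[1])],  # d/df_i^3
        [-c * (A12 * fik[1] + A13 * fik[2]),   # d/df_j^1
          c * (A12 * fik[0] - A23 * fik[2]),   # d/df_j^2
          c * (A13 * fik[0] + A23 * fik[1])],  # d/df_j^3
        [ c * (A12 * fij[1] + A13 * fij[2]),   # d/df_k^1
         -c * (A12 * fij[0] - A23 * fij[2]),   # d/df_k^2
         -c * (A13 * fij[0] + A23 * fij[1])]]) # d/df_k^3
    return area, grad
```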
In this appendix, we describe the calculation of the Hessian of the stretch energy functional. Let $ \tau = [v_i, v_j, v_k] $. From [20,Theorem 3.5 and (3.1)], we know that
$ \nabla_{ \mathbf{f}^\ell} E_S(f|_\tau) = 2\, L_S(f|_\tau)\, \mathbf{f}|_\tau^\ell, \quad \ell = 1, 2, 3, $ |
where
$ L_S(f|_\tau) = \frac{1}{4|\tau|} \begin{bmatrix}
(f_i-f_k)^\top(f_j-f_k) + (f_i-f_j)^\top(f_k-f_j) & -(f_i-f_k)^\top(f_j-f_k) & -(f_i-f_j)^\top(f_k-f_j) \\
-(f_i-f_k)^\top(f_j-f_k) & (f_i-f_k)^\top(f_j-f_k) + (f_j-f_i)^\top(f_k-f_i) & -(f_j-f_i)^\top(f_k-f_i) \\
-(f_i-f_j)^\top(f_k-f_j) & -(f_j-f_i)^\top(f_k-f_i) & (f_i-f_j)^\top(f_k-f_j) + (f_j-f_i)^\top(f_k-f_i)
\end{bmatrix}, $ |
and $ \mathbf{f}|_\tau^\ell = (\mathbf{f}_i^\ell, \mathbf{f}_j^\ell, \mathbf{f}_k^\ell)^\top $, $ \ell = 1, 2, 3 $. A direct calculation yields that the Hessian matrix
$ \mathrm{Hess}(E_S(f|_{\tau})) = \left[ \begin{matrix}
\frac{\partial E_S(f|_{\tau})}{\partial f_i^s \,\partial f_i^t} & \frac{\partial E_S(f|_{\tau})}{\partial f_i^s \,\partial f_j^t} & \frac{\partial E_S(f|_{\tau})}{\partial f_i^s \,\partial f_k^t} \\
\frac{\partial E_S(f|_{\tau})}{\partial f_j^s \,\partial f_i^t} & \frac{\partial E_S(f|_{\tau})}{\partial f_j^s \,\partial f_j^t} & \frac{\partial E_S(f|_{\tau})}{\partial f_j^s \,\partial f_k^t} \\
\frac{\partial E_S(f|_{\tau})}{\partial f_k^s \,\partial f_i^t} & \frac{\partial E_S(f|_{\tau})}{\partial f_k^s \,\partial f_j^t} & \frac{\partial E_S(f|_{\tau})}{\partial f_k^s \,\partial f_k^t}
\end{matrix} \right]_{s, t = 1}^{3} $ |
can be formulated as
$ \mathrm{Hess}(E_S(f|_{\tau})) = \frac{1}{2|\tau|} \begin{bmatrix}
\mathbf{h}_{ijk}^2 ({\mathbf{h}_{ijk}^2})^\top + \mathbf{h}_{ijk}^3 ({\mathbf{h}_{ijk}^3})^\top & \mathbf{h}_{ijk}^1 ({\mathbf{h}_{ijk}^2})^\top - 2\, \mathbf{h}_{ijk}^2 ({\mathbf{h}_{ijk}^1})^\top & \mathbf{h}_{ijk}^1 ({\mathbf{h}_{ijk}^3})^\top - 2\, \mathbf{h}_{ijk}^3 ({\mathbf{h}_{ijk}^1})^\top \\
\mathbf{h}_{ijk}^2 ({\mathbf{h}_{ijk}^1})^\top - 2\, \mathbf{h}_{ijk}^1 ({\mathbf{h}_{ijk}^2})^\top & \mathbf{h}_{ijk}^1 ({\mathbf{h}_{ijk}^1})^\top + \mathbf{h}_{ijk}^3 ({\mathbf{h}_{ijk}^3})^\top & \mathbf{h}_{ijk}^2 ({\mathbf{h}_{ijk}^3})^\top - 2\, \mathbf{h}_{ijk}^3 ({\mathbf{h}_{ijk}^2})^\top \\
\mathbf{h}_{ijk}^3 ({\mathbf{h}_{ijk}^1})^\top - 2\, \mathbf{h}_{ijk}^1 ({\mathbf{h}_{ijk}^3})^\top & \mathbf{h}_{ijk}^3 ({\mathbf{h}_{ijk}^2})^\top - 2\, \mathbf{h}_{ijk}^2 ({\mathbf{h}_{ijk}^3})^\top & \mathbf{h}_{ijk}^1 ({\mathbf{h}_{ijk}^1})^\top + \mathbf{h}_{ijk}^2 ({\mathbf{h}_{ijk}^2})^\top
\end{bmatrix}, $ |
where $ \mathbf{h}_{ijk}^\ell = (f_j^\ell-f_k^\ell, f_k^\ell-f_i^\ell, f_i^\ell-f_j^\ell) ^{\top} $, $ \ell = 1, 2, 3 $.
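For reference, this closed-form expression can be assembled as follows. The Python sketch below is our illustration, under the assumption that the vertex coordinates of the triangle are given as the rows of a $ 3 \times 3 $ array `f_tau`; it builds the $ 9\times 9 $ per-triangle Hessian from the vectors $ \mathbf{h}_{ijk}^\ell $.

```python
import numpy as np

def stretch_energy_hessian(f_tau, area_tau):
    """9x9 Hessian of E_S on one triangle from the rank-one blocks above.
    f_tau is (3, 3) with rows f_i, f_j, f_k; area_tau is |tau|.
    Row/column blocks are ordered by the coordinate index l = 1, 2, 3."""
    fi, fj, fk = f_tau
    # h[l] is h_{ijk}^{l+1} = (f_j^l - f_k^l, f_k^l - f_i^l, f_i^l - f_j^l)
    h = np.array([[fj[l] - fk[l], fk[l] - fi[l], fi[l] - fj[l]]
                  for l in range(3)])
    B = lambda a, b: np.outer(h[a], h[b])  # 3x3 block h^a (h^b)^T
    H = np.empty((9, 9))
    H[0:3, 0:3] = B(1, 1) + B(2, 2)
    H[0:3, 3:6] = B(0, 1) - 2 * B(1, 0)
    H[0:3, 6:9] = B(0, 2) - 2 * B(2, 0)
    H[3:6, 0:3] = B(1, 0) - 2 * B(0, 1)
    H[3:6, 3:6] = B(0, 0) + B(2, 2)
    H[3:6, 6:9] = B(1, 2) - 2 * B(2, 1)
    H[6:9, 0:3] = B(2, 0) - 2 * B(0, 2)
    H[6:9, 3:6] = B(2, 1) - 2 * B(1, 2)
    H[6:9, 6:9] = B(0, 0) + B(1, 1)
    return H / (2.0 * area_tau)
```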
In this appendix, we describe the line-search procedure used in our RGD method. We start by detailing how to compute the derivative appearing in the sufficient decrease condition, and then we describe the interpolant line-search strategy from [11,§ 6.3.2].
In Euclidean space $ \mathbb{R}^{n} $, the steepest descent method updates a current iterate by moving in the direction of the anti-gradient, by a step size chosen according to an appropriate line-search rule. The step size is related to the sufficient decrease condition [16]
$ \phi(\alpha_{k}) \leq \phi(0) + c_{1}\, \alpha_{k}\, \phi'(0). $ (C.1) |
In the Euclidean setting, the univariate function $ \phi(\alpha) $ in the line-search procedure is
$ \phi(\alpha) = E\big( \mathbf{f}^{(k)} + \alpha \, \mathbf{d}^{(k)}\big), $ |
where $ E $ is the objective function considered, and $ \mathbf{f}^{(k)} $ is the current iterate. However, this function changes in the Riemannian optimization framework because we cannot directly perform the vector addition $ \mathbf{f}^{(k)} + \alpha \, \mathbf{d}^{(k)} $ without leaving the manifold.
Let $ \mathbf{f}^{(k)} $ be a point of $ \big(S^{2}\big)^{n} $ at the $ k $th iteration of our RGD algorithm, and $ \text{R}_{ \mathbf{f}^{(k)}} $ the retraction at $ \mathbf{f}^{(k)} $, as defined in subsection 3.2.3. Let the objective function be $ E \colon \big(S^{2}\big)^{n} \to \mathbb{R} $. For a fixed point $ \mathbf{f}^{(k)} $ and a fixed tangent vector $ \mathbf{d}^{(k)} $, we introduce the vector-valued function $ \boldsymbol{\psi} \colon \mathbb{R} \to \big(S^{2}\big)^{n} $, defined by $ \alpha \mapsto \text{R}_{ \mathbf{f}^{(k)}}(\alpha\, \mathbf{d}^{(k)}) $. Hence, by composition of functions, we have $ \phi \colon \mathbb{R} \to \mathbb{R} $, defined by
$ \phi(\alpha) := E(\boldsymbol{\psi} (\alpha)). $ |
Note that $ \phi(0) = E(\boldsymbol{\psi}(0)) = E(\mathbf{f}^{(k)}) =: E^{(k)} $, due to the definition of retraction.
To evaluate the sufficient decrease condition (C.1), we need to calculate $ \phi'(0) $, which in turn requires the derivative of the retraction $ \text{R}_{ \mathbf{f}^{(k)}} (\alpha \, \mathbf{d}^{(k)}) $ with respect to $ \alpha $. The derivative of $ \phi(\alpha) $ is given by the chain rule
$ \phi'(\alpha) = \nabla E(\boldsymbol{\psi}(\alpha)) ^{\top} \boldsymbol{\psi}'(\alpha), $ |
and then we evaluate it at $ \alpha = 0 $, i.e.,
$ \phi'(0) = \nabla E( \mathbf{f}^{(k)}) ^{\top} \boldsymbol{\psi}'(0). $ |
In general, the retraction and the derivative $ \boldsymbol{\psi}'(\alpha) $ depend on the choice of the manifold. The differentials of a retraction provide the so-called vector transports, and the derivative of the retraction on the unit sphere appears in [2,§ 8.1.2].
In the following formulas, we omit the superscript $ ^{(k)} $ referring to the current iteration of RGD and assume that the line-search direction $ \mathbf{d} $ is partitioned in the same way as the matrix $ \mathbf{f} $; see (2.1). Recalling the retraction on the power manifold $ \big(S^{2}\big)^{n} $ from (3.3), we can write $ \boldsymbol{\psi}(\alpha) $ as
$ \boldsymbol{\psi}(\alpha) = \text{R}_{ \mathbf{f}}(\alpha \, \mathbf{d}) = \begin{bmatrix} \frac{1}{\| \mathbf{f}_1 + \alpha \mathbf{d}_1\|_2} & & \\ & \ddots & \\ & & \frac{1}{\| \mathbf{f}_n + \alpha \mathbf{d}_n\|_2} \end{bmatrix} \begin{bmatrix} \mathbf{f}_1^\top + \alpha \mathbf{d}_1^\top \\ \vdots \\ \mathbf{f}_n^\top + \alpha \mathbf{d}_n^\top \end{bmatrix}. $ |
The derivative $ \boldsymbol{\psi}'(\alpha) $ can be computed via the formula (cf. [2,Example 8.1.4])
$ \boldsymbol{\psi}'(\alpha) = \begin{bmatrix} -\frac{( \mathbf{f}_1 + \alpha \mathbf{d}_1)^\top \mathbf{d}_1}{\| \mathbf{f}_1 + \alpha \mathbf{d}_1\|_2^3} \, ( \mathbf{f}_1 + \alpha \mathbf{d}_1)^\top \\ \vdots \\ -\frac{( \mathbf{f}_n + \alpha \mathbf{d}_n)^\top \mathbf{d}_n}{\| \mathbf{f}_n + \alpha \mathbf{d}_n\|_2^3} \, ( \mathbf{f}_n + \alpha \mathbf{d}_n)^\top \end{bmatrix} + \begin{bmatrix} \frac{ \mathbf{d}_1^\top}{\| \mathbf{f}_1 + \alpha \mathbf{d}_1\|_2} \\ \vdots \\ \frac{ \mathbf{d}_n^\top}{\| \mathbf{f}_n + \alpha \mathbf{d}_n\|_2} \end{bmatrix}. $ |
At $ \alpha = 0 $, this simplifies to
$ \boldsymbol{\psi}'(0) = \begin{bmatrix} \frac{ \mathbf{d}_1^\top}{\| \mathbf{f}_1\|_2} \\ \vdots \\ \frac{ \mathbf{d}_n^\top}{\| \mathbf{f}_n\|_2} \end{bmatrix} - \begin{bmatrix} \frac{ \mathbf{f}_1^\top \mathbf{d}_1}{\| \mathbf{f}_1\|_2^3} \, \mathbf{f}_1^\top \\ \vdots \\ \frac{ \mathbf{f}_n^\top \mathbf{d}_n}{\| \mathbf{f}_n\|_2^3} \, \mathbf{f}_n^\top \end{bmatrix}. $ |
This value is needed to evaluate the sufficient decrease condition (C.1).
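A small Python sketch of these formulas on the power manifold $ \big(S^{2}\big)^{n} $ may be helpful; the names `F`, `D`, and `grad_E` are illustrative, with the rows $ \mathbf{f}_i $, $ \mathbf{d}_i $, and the Euclidean gradient stored as $ n\times 3 $ arrays.

```python
import numpy as np

def psi(F, D, alpha):
    """Retraction-based curve psi(alpha) = R_F(alpha D) on (S^2)^n:
    each row of F + alpha D is normalized back onto the sphere."""
    W = F + alpha * D
    return W / np.linalg.norm(W, axis=1, keepdims=True)

def dpsi0(F, D):
    """psi'(0): rows d_i / ||f_i|| - (f_i^T d_i / ||f_i||^3) f_i."""
    nrm = np.linalg.norm(F, axis=1, keepdims=True)
    coef = np.sum(F * D, axis=1, keepdims=True) / nrm**3
    return D / nrm - coef * F

def phi_prime0(grad_E, F, D):
    """phi'(0) = <grad E(f), psi'(0)>, the slope used in (C.1)."""
    return np.sum(grad_E * dpsi0(F, D))
```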
Quadratic/cubic approximation
At every step of our RGD algorithm, we want to satisfy the sufficient decrease condition (C.1). Here, we adopt the safeguarded quadratic/cubic approximation strategy described in [11,§ 6.3.2].
In the following, we let $ \alpha_{k} $ and $ \alpha_{k-1} $ denote the step lengths used at iterations $ k $ and $ k-1 $ of the optimization algorithm, respectively, and denote the initial guess by $ \alpha_{0} $. We suppose that the initial guess is given; alternatively, one can use [16,(3.60)] as the initial guess, i.e.,
$ \alpha_{0} = \frac{2(E^{(k)} - E^{(k-1)})}{\phi'(0)}. $ |
If $ \alpha_{0} $ satisfies the sufficient decrease condition (C.1), i.e.,
$ \phi(\alpha_{0}) \leq \phi(0) + c_{1} \alpha_{0} \phi'(0), $ |
then $ \alpha_{0} $ is accepted as step length, and we terminate the search. Otherwise, we build a quadratic approximation $ \phi_{\mathrm{q}}(\alpha) $ of $ \phi(\alpha) $ using the information we have, that is, $ \phi(0) $, $ \phi'(0) $, and $ \phi(\alpha_{0}) $. The quadratic model is
$ \phi_{\mathrm{q}}(\alpha) = \frac{\phi(\alpha_{0}) - \phi(0) -\phi'(0) \, \alpha_{0}}{\alpha_{0}^{2}}\, \alpha^{2} + \phi'(0)\, \alpha + \phi(0). $ |
The new trial value $ \alpha_{1} $ is defined as the minimizer of this quadratic, i.e.,
$ \alpha_{1} = - \frac{\phi'(0) \, \alpha_{0}^{2}}{2 \left[\phi(\alpha_{0}) - \phi(0) -\phi'(0) \, \alpha_{0}\right]}. $ |
We terminate the search if the sufficient decrease condition is satisfied at $ \alpha_{1} $, i.e.,
$ \phi(\alpha_{1}) \leq \phi(0) + c_{1} \alpha_{1} \phi'(0). $ |
Otherwise, we need to backtrack again. We now have four pieces of information about $ \phi(\alpha) $, so it is desirable to use all of them. Hence, at this and any subsequent backtrack steps during the current iteration of RGD, we use a cubic model $ \phi_{\mathrm{c}}(\alpha) $ that interpolates the four pieces of information $ \phi(0) $, $ \phi'(0) $, and the last two values of $ \phi(\alpha) $, and set $ \alpha_{k} $ to the value of $ \alpha $ at which $ \phi_{\mathrm{c}}(\alpha) $ has its local minimizer.
Let $ \alpha_{\mathrm{prev}} $ and $ \alpha_{\mathrm{2prev}} $ be the two most recent values of $ \alpha $ tried in the backtracking procedure. The cubic that fits $ \phi(0) $, $ \phi'(0) $, $ \phi(\alpha_{\mathrm{prev}}) $, and $ \phi(\alpha_{\mathrm{2prev}}) $ is
$ \phi_{\mathrm{c}}(\alpha) = a\alpha^{3} + b\alpha^{2} + \phi'(0)\, \alpha + \phi(0), $ |
where
$ \begin{bmatrix} a \\ b \end{bmatrix} = \frac{1}{\alpha_{\mathrm{prev}} - \alpha_{\mathrm{2prev}}} \begin{bmatrix} \frac{1}{\alpha_{\mathrm{prev}}^{2}} & -\frac{1}{\alpha_{\mathrm{2prev}}^{2}} \\ -\frac{\alpha_{\mathrm{2prev}}}{\alpha_{\mathrm{prev}}^{2}} & \frac{\alpha_{\mathrm{prev}}}{\alpha_{\mathrm{2prev}}^{2}} \end{bmatrix} \begin{bmatrix} \phi(\alpha_{\mathrm{prev}}) - \phi(0) - \phi'(0)\, \alpha_{\mathrm{prev}} \\ \phi(\alpha_{\mathrm{2prev}}) - \phi(0) - \phi'(0)\, \alpha_{\mathrm{2prev}} \end{bmatrix}. $ |
The local minimizer of this cubic is given by [11,(6.3.18)]
$ \frac{-b+\sqrt{b^{2}-3a\phi'(0)}}{3a}, $ |
and we set $ \alpha_{k} $ equal to this value. If necessary, this process is repeated, using a cubic interpolant of $ \phi(0) $, $ \phi'(0) $, and the two most recent values of $ \phi $, namely, $ \phi(\alpha_{\mathrm{prev}}) $ and $ \phi(\alpha_{\mathrm{2prev}}) $, until a step size that satisfies the sufficient decrease condition is located. Numerical experiments in Section 5 demonstrate the usage and effectiveness of this line-search technique.
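The following Python sketch summarizes the procedure. It is a simplified illustration of the quadratic/cubic interpolation strategy without the safeguards of [11,§ 6.3.2] (which additionally clamp each new trial step to a fixed fraction of the previous one); the callable `phi` is assumed to evaluate $ \alpha \mapsto E(\boldsymbol{\psi}(\alpha)) $.

```python
import numpy as np

def backtrack_quad_cubic(phi, phi0, dphi0, alpha0, c1=1e-4, max_iter=20):
    """Backtracking with quadratic, then cubic, interpolation.
    phi0 = phi(0), dphi0 = phi'(0) < 0, alpha0 = initial trial step.
    Returns a step satisfying phi(a) <= phi(0) + c1 * a * phi'(0)."""
    a_prev, p_prev = alpha0, phi(alpha0)
    if p_prev <= phi0 + c1 * a_prev * dphi0:
        return a_prev
    # First backtrack: minimizer of the quadratic interpolant
    alpha = -dphi0 * a_prev**2 / (2.0 * (p_prev - phi0 - dphi0 * a_prev))
    a_2prev, p_2prev = a_prev, p_prev
    a_prev, p_prev = alpha, phi(alpha)
    for _ in range(max_iter):
        if p_prev <= phi0 + c1 * a_prev * dphi0:
            return a_prev
        # Cubic interpolant of phi(0), phi'(0), phi(a_prev), phi(a_2prev)
        r1 = p_prev - phi0 - dphi0 * a_prev
        r2 = p_2prev - phi0 - dphi0 * a_2prev
        denom = a_prev - a_2prev
        a = ( r1 / a_prev**2 - r2 / a_2prev**2) / denom
        b = (-a_2prev * r1 / a_prev**2 + a_prev * r2 / a_2prev**2) / denom
        # Local minimizer (-b + sqrt(b^2 - 3 a phi'(0))) / (3 a);
        # a practical code also guards against a ~ 0 and clamps alpha
        alpha = (-b + np.sqrt(b**2 - 3.0 * a * dphi0)) / (3.0 * a)
        a_2prev, p_2prev = a_prev, p_prev
        a_prev, p_prev = alpha, phi(alpha)
    return a_prev  # fall back to the last trial step
```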
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Marco Sutti and Mei-Heng Yueh: conceptualization, data curation, formal analysis, funding acquisition, investigation, methodology development, project administration, resource provision, software development, supervision, validation, visualization, writing of the original draft, review, and editing. All authors contributed equally to this article. All authors have read and approved the final version of the manuscript for publication.
The work of the first author was supported by the National Center for Theoretical Sciences in Taiwan (R.O.C.) under the NSTC grant 112-2124-M-002-009-. The work of the second author was supported by the National Science and Technology Council and the National Center for Theoretical Sciences in Taiwan.
The authors declare that they have no conflict of interest.
$ 10 $ Iterations | $ 100 $ Iterations | |||||||||
Model Name | SD/Mean | $ E_{A}(f) $ | Time | #Fs | SD/Mean | $ E_{A}(f) $ | Time | #Fs | ||
Right Hand | 0.1950 | $ 2.17 \times 10^{-1} $ | 0.85 | 4 | 0.1459 | $ 1.28 \times 10^{-1} $ | 9.37 | 2 | ||
David Head | 0.0178 | $ 4.04 \times 10^{-3} $ | 1.66 | 0 | 0.0156 | $ 3.05 \times 10^{-3} $ | 14.95 | 0 | ||
Cortical Surface | 0.0214 | $ 4.68 \times 10^{-3} $ | 3.62 | 0 | 0.0200 | $ 4.15 \times 10^{-3} $ | 32.40 | 0 | ||
Bull | 0.1492 | $ 2.58 \times 10^{-1} $ | 4.99 | 4 | 0.1380 | $ 2.31 \times 10^{-1} $ | 43.27 | 1 | ||
Bulldog | 0.0369 | $ 1.41 \times 10^{-2} $ | 13.59 | 0 | 0.0343 | $ 1.27 \times 10^{-2} $ | 136.63 | 0 | ||
Lion Statue | 0.1935 | $ 4.77 \times 10^{-1} $ | 16.83 | 0 | 0.1922 | $ 4.69 \times 10^{-1} $ | 160.20 | 0 | ||
Gargoyle | 0.0690 | $ 5.28 \times 10^{-2} $ | 19.55 | 0 | 0.0653 | $ 4.84 \times 10^{-2} $ | 192.62 | 0 | ||
Max Planck | 0.0537 | $ 3.54 \times 10^{-2} $ | 16.52 | 0 | 0.0525 | $ 3.38 \times 10^{-2} $ | 161.01 | 0 | ||
Bunny | 0.0417 | $ 2.18 \times 10^{-2} $ | 20.59 | 0 | 0.0404 | $ 2.04 \times 10^{-2} $ | 226.99 | 0 | ||
Chess King | 0.0687 | $ 6.07 \times 10^{-2} $ | 52.36 | 21 | 0.0639 | $ 5.14 \times 10^{-2} $ | 518.18 | 17 | ||
Art Statuette | 0.0408 | $ 2.14 \times 10^{-2} $ | 140.59 | 0 | 0.0405 | $ 2.10 \times 10^{-2} $ | 1 111.67 | 0 | ||
Bimba Statue | 0.0514 | $ 3.31 \times 10^{-2} $ | 270.63 | 1 | 0.0511 | $ 3.29 \times 10^{-2} $ | 2 320.19 | 1 |
$ 10 $ Iterations | $ 100 $ Iterations | |||||||||
Model Name | SD/Mean | $ E_{A}(f) $ | Time | #Fs | SD/Mean | $ E_{A}(f) $ | Time | #Fs | ||
Right Hand | 0.1936 | $ 2.16 \times 10^{-1} $ | 0.36 | 4 | 0.1204 | $ 9.40 \times 10^{-2} $ | 4.07 | 1 | ||
David Head | 0.0178 | $ 4.04 \times 10^{-3} $ | 0.99 | 0 | 0.0156 | $ 3.04 \times 10^{-3} $ | 9.16 | 0 | ||
Cortical Surface | 0.0216 | $ 4.75 \times 10^{-3} $ | 1.40 | 0 | 0.0200 | $ 3.72 \times 10^{-3} $ | 16.01 | 0 | ||
Bull | 0.1492 | $ 2.59 \times 10^{-1} $ | 1.77 | 4 | 0.1348 | $ 2.19 \times 10^{-1} $ | 18.89 | 1 | ||
Bulldog | 0.0369 | $ 1.41 \times 10^{-2} $ | 6.60 | 0 | 0.0343 | $ 1.27 \times 10^{-2} $ | 61.93 | 0 | ||
Lion Statue | 0.1935 | $ 4.77 \times 10^{-1} $ | 7.75 | 0 | 0.1894 | $ 4.54 \times 10^{-1} $ | 76.76 | 0 | ||
Gargoyle | 0.0688 | $ 5.26 \times 10^{-2} $ | 7.81 | 0 | 0.0646 | $ 4.76 \times 10^{-2} $ | 80.52 | 0 | ||
Max Planck | 0.0537 | $ 3.54 \times 10^{-2} $ | 7.18 | 0 | 0.0525 | $ 3.39 \times 10^{-2} $ | 75.60 | 0 | ||
Bunny | 0.0417 | $ 2.18 \times 10^{-2} $ | 8.30 | 0 | 0.0390 | $ 1.91 \times 10^{-2} $ | 89.62 | 0 | ||
Chess King | 0.0692 | $ 6.07 \times 10^{-2} $ | 20.06 | 21 | 0.0647 | $ 5.23 \times 10^{-2} $ | 207.47 | 17 | ||
Art Statuette | 0.0408 | $ 2.14 \times 10^{-2} $ | 57.71 | 0 | 0.0405 | $ 2.10 \times 10^{-2} $ | 654.57 | 0 | ||
Bimba Statue | 0.0514 | $ 3.31 \times 10^{-2} $ | 70.83 | 1 | 0.0512 | $ 3.29 \times 10^{-2} $ | 775.36 | 1 |
Before bijectivity correction | After bijectivity correction | ||||||||||
Model Name | SD/Mean | $ E_{A}\big(f\big) $ | #Fs | SD/Mean | $ E_{A}\big(f\big) $ | #Fs | Time | #Its | |||
Right Hand | 0.2063 | $ 8.01 \times 10^{-2} $ | 3 | 0.1071 | $ 8.12 \times 10^{-2} $ | 0 | 6.67 | 127 | |||
David Head | 0.0123 | $ 1.87 \times 10^{-3} $ | 0 | -- | -- | -- | 83.66 | 432 |
Cortical Surface | 0.0187 | $ 4.23 \times 10^{-3} $ | 0 | -- | -- | -- | 9.22 | 52 |
Bull | 0.1520 | $ 2.37 \times 10^{-1} $ | 11 | 0.1403 | $ 2.40 \times 10^{-1} $ | 3 | 12.20 | 50 | |||
Bulldog | 0.0351 | $ 1.31 \times 10^{-2} $ | 0 | -- | -- | -- | 72.64 | 60 |
Lion Statue | 0.1940 | $ 4.79 \times 10^{-1} $ | 2 | 0.1938 | $ 4.79 \times 10^{-1} $ | 0 | 2.63 | 2 | |||
Gargoyle | 0.0704 | $ 5.17 \times 10^{-2} $ | 3 | 0.0682 | $ 5.14 \times 10^{-2} $ | 0 | 26.71 | 17 | |||
Max Planck | 0.0464 | $ 3.54 \times 10^{-2} $ | 0 | -- | -- | -- | 10.58 | 8 |
Bunny | 0.0417 | $ 2.17 \times 10^{-2} $ | 0 | -- | -- | -- | 20.77 | 13 |
Chess King | 0.0715 | $ 5.17 \times 10^{-2} $ | 43 | 0.0641 | $ 5.38 \times 10^{-2} $ | 17 | 370.68 | 107 | |||
Art Statuette | 0.0409 | $ 2.14 \times 10^{-2} $ | 0 | -- | -- | -- | 31.53 | 3 |
Bimba Statue | 0.0515 | $ 3.32 \times 10^{-2} $ | 2 | 0.0514 | $ 3.32 \times 10^{-2} $ | 0 | 19.24 | 2 |
$ 100 $ Iterations | $ 10 000 $ Iterations | |||||||||
Model Name | SD/Mean | $ E_{A}(f) $ | Time | #Fs | SD/Mean | $ E_{A}(f) $ | Time | #Fs | ||
Right Hand | 0.1204 | $ 9.40 \times 10^{-2} $ | 4.07 | 1 | 0.0545 | $ 2.07 \times 10^{-2} $ | 431.28 | 0 | ||
David Head | 0.0156 | $ 3.04 \times 10^{-3} $ | 9.16 | 0 | 0.0029 | $ 1.01 \times 10^{-4} $ | 1 018.87 | 0 | ||
Cortical Surface | 0.0200 | $ 3.72 \times 10^{-3} $ | 16.01 | 0 | 0.0045 | $ 2.06 \times 10^{-4} $ | 1 328.56 | 0 |
Smallest eigenvalue of the Hessian | ||
Model Name | After FPI | After RGD |
Right Hand | $ -2.4429 \times 10^{0} $ | $ -1.9013 \times 10^{-1} $ |
David Head | $ -1.8481 \times 10^{0} $ | $ -2.8445 \times 10^{-6} $ |
Cortical Surface | $ -4.4940 \times 10^{-1} $ | $ -3.6768 \times 10^{-3} $ |
Energy $ E_{A} $ Increased | $ 100 $ Iterations | ||||||||||
Model Name | SD/Mean | $ E_{A}(f) $ | Time | #Fs | #Its | SD/Mean | $ E_{A}(f) $ | Time | #Fs | ||
Right Hand | 0.2050 | $ 2.86 \times 10^{-1} $ | 0.08 | 12 | 6 | 0.4598 | $ 2.92 \times 10^{0} $ | 1.35 | 67 | ||
David Head | 0.0191 | $ 4.66 \times 10^{-3} $ | 0.35 | 0 | 8 | 0.0169 | $ 3.58 \times 10^{-3} $ | 4.30 | 0 | ||
Cortical Surface | 0.0220 | $ 4.93 \times 10^{-3} $ | 0.85 | 0 | 15 | 0.0174 | $ 3.21 \times 10^{-3} $ | 5.62 | 0 | ||
Bull | 0.1504 | $ 2.74 \times 10^{-1} $ | 1.29 | 8 | 18 | 0.1876 | $ 4.59 \times 10^{-1} $ | 6.90 | 40 | ||
Bulldog | 0.0381 | $ 1.49 \times 10^{-2} $ | 2.61 | 0 | 10 | 0.1833 | $ 3.99 \times 10^{-1} $ | 22.22 | 53 | ||
Lion Statue | 0.1940 | $ 5.10 \times 10^{-1} $ | 1.12 | 1 | 4 | 0.2064 | $ 5.28 \times 10^{-1} $ | 23.67 | 38 | ||
Gargoyle | 0.0704 | $ 5.47 \times 10^{-2} $ | 2.64 | 0 | 11 | 4.1020 | $ 4.85 \times 10^{2} $ | 36.10 | 1955 | ||
Max Planck | 0.0544 | $ 3.67 \times 10^{-2} $ | 1.35 | 0 | 5 | 0.1844 | $ 1.67 \times 10^{1} $ | 25.99 | 144 | ||
Bunny | 0.0423 | $ 2.24 \times 10^{-2} $ | 6.40 | 0 | 20 | 0.0394 | $ 3.96 \times 10^{-2} $ | 35.78 | 2 | ||
Chess King | 0.0713 | $ 6.91 \times 10^{-2} $ | 6.35 | 9 | 8 | 1.0903 | $ 1.79 \times 10^{1} $ | 88.04 | 1655 | ||
Art Statuette | 0.0411 | $ 2.15 \times 10^{-2} $ | 23.27 | 0 | 7 | 0.0908 | $ 1.07 \times 10^{-1} $ | 342.95 | 126 | ||
Bimba Statue | 0.0515 | $ 3.32 \times 10^{-2} $ | 29.94 | 6 | 9 | 0.0932 | $ 7.42 \times 10^{-2} $ | 305.00 | 144 |
Model Name | SD/Mean | $ E_{A}(f) $ | Time | #Foldings |
Right Hand | 18.3283 | $ 4.84 \times 10^{3} $ | 218.03 | 672 |
David Head | 0.0426 | $ 2.27 \times 10^{-2} $ | 298.76 | 0 |
Cortical Surface | 0.6320 | $ 1.14 \times 10^{0} $ | 420.20 | 10 |
Bull | 8.5565 | $ 1.82 \times 10^{3} $ | 34.42 | 335 |
Bulldog | 9.2379 | $ 1.22 \times 10^{3} $ | 183.94 | 338 |
Lion Statue | 0.2626 | $ 8.96 \times 10^{-1} $ | 1498.91 | 540 |
Gargoyle | 0.3558 | $ 1.30 \times 10^{0} $ | 1483.35 | 571 |
Max Planck | 11.6875 | $ 1.49 \times 10^{3} $ | 195.39 | 575 |
Bunny | 27.6014 | $ 8.94 \times 10^{3} $ | 157.87 | 208 |
Chess King | 11.8300 | $ 1.65 \times 10^{3} $ | 608.55 | 948 |
Art Statuette | 394.4414 | $ 9.93 \times 10^{0} $ | 2284.79 | 2242 |
Bimba Statue | 0.5110 | $ 2.01 \times 10^{0} $ | 16 773.34 | 11 821 |
Model Name | SD/Mean | $ E_{A}(f) $ | #Foldings | # Iterations |
David Head | 0.4189 | $ 2.25 \times 10^{0} $ | 0 | 27 |
Cortical Surface | 0.5113 | $ 3.11 \times 10^{0} $ | 0 | 27 |
Bulldog | 0.8665 | $ 1.00 \times 10^{1} $ | 0 | 33 |
Max Planck | 0.5619 | $ 4.38 \times 10^{0} $ | 0 | 25 |
$ \sigma_{\mathrm{noise}} = 1 \times 10^{-3} $ | |||||||||
Model Name | SD/Mean | $ E_{A}\big(\tilde{f}\big) $ | Time | #Fs | #Fs after b.c. | $ E_{A}(f) $ | err-$ E_{A}\big(f, \tilde{f}\big) $ | ||
Right Hand | 0.1254 | $ 1.03 \times 10^{-1} $ | 3.13 | 14 | 1 | $ 9.40 \times 10^{-2} $ | $ 5.84 \times 10^{-2} $ | ||
David Head | 0.0108 | $ 1.44 \times 10^{-3} $ | 9.98 | 0 | 0 | $ 3.04 \times 10^{-3} $ | $ 4.47 \times 10^{-3} $ | ||
Cortical Surface | 0.0187 | $ 3.82 \times 10^{-3} $ | 12.78 | 0 | 0 | $ 3.72 \times 10^{-3} $ | $ 3.45 \times 10^{-3} $ | ||
Bull | 0.2528 | $ 9.23 \times 10^{-1} $ | 16.74 | 19 | 6 | $ 2.19 \times 10^{-1} $ | $ 1.12 \times 10^{0} $ | ||
Bulldog | 0.0273 | $ 6.94 \times 10^{-3} $ | 59.26 | 0 | 0 | $ 1.27 \times 10^{-2} $ | $ 7.19 \times 10^{-6} $ | ||
Lion Statue | 0.1630 | $ 3.27 \times 10^{-1} $ | 66.78 | 1 | 0 | $ 4.54 \times 10^{-1} $ | $ 1.36 \times 10^{-4} $ | ||
Gargoyle | 0.1677 | $ 3.66 \times 10^{-1} $ | 65.72 | 0 | 0 | $ 4.76 \times 10^{-2} $ | $ 4.45 \times 10^{-3} $ | ||
Max Planck | 0.0464 | $ 2.67 \times 10^{-2} $ | 64.30 | 0 | 0 | $ 3.39 \times 10^{-2} $ | $ 5.27 \times 10^{-5} $ | ||
Bunny | 0.0297 | $ 1.11 \times 10^{-2} $ | 81.31 | 0 | 0 | $ 1.91 \times 10^{-2} $ | $ 5.32 \times 10^{-3} $ | ||
Chess King | 0.0747 | $ 6.08 \times 10^{-2} $ | 193.04 | 119 | 38 | $ 5.23 \times 10^{-2} $ | $ 1.03 \times 10^{-3} $ | ||
Art Statuette | 0.0235 | $ 6.88 \times 10^{-3} $ | 636.30 | 0 | 0 | $ 2.10 \times 10^{-2} $ | $ 2.54 \times 10^{-2} $ | ||
Bimba Statue | 0.0541 | $ 3.33 \times 10^{-2} $ | 759.36 | 2 | 0 | $ 3.29 \times 10^{-2} $ | $ 5.81 \times 10^{-4} $ |
$ \sigma_{\mathrm{noise}} = 5 \times 10^{-3} $ | |||||||||
Model Name | SD/Mean | $ E_{A}\big(\tilde{f}\big) $ | Time | #Fs | #Fs after b.c. | $ E_{A}(f) $ | err-$ E_{A}\big(f, \tilde{f}\big) $ | ||
Right Hand | 0.2304 | $ 5.77 \times 10^{-1} $ | 3.12 | 10 | 4 | $ 9.40 \times 10^{-2} $ | $ 1.45 \times 10^{0} $ | ||
David Head | 0.0211 | $ 5.51 \times 10^{-3} $ | 9.43 | 0 | 0 | $ 3.04 \times 10^{-3} $ | $ 6.84 \times 10^{-3} $ | ||
Cortical Surface | 0.0301 | $ 1.02 \times 10^{-2} $ | 12.72 | 0 | 0 | $ 3.72 \times 10^{-3} $ | $ 2.28 \times 10^{-1} $ | ||
Bull | 0.1723 | $ 2.98 \times 10^{-1} $ | 17.40 | 18 | 4 | $ 2.19 \times 10^{-1} $ | $ 1.19 \times 10^{-1} $ | ||
Bulldog | 0.0629 | $ 3.86 \times 10^{-2} $ | 61.75 | 2 | 0 | $ 1.27 \times 10^{-2} $ | $ 3.23 \times 10^{-5} $ | ||
Lion Statue | 0.1044 | $ 1.36 \times 10^{-1} $ | 69.55 | 0 | 0 | $ 4.54 \times 10^{-1} $ | $ 3.36 \times 10^{-4} $ | ||
Gargoyle | 0.0974 | $ 9.68 \times 10^{-2} $ | 69.32 | 1 | 0 | $ 4.76 \times 10^{-2} $ | $ 6.87 \times 10^{-4} $ | ||
Max Planck | 0.0782 | $ 7.11 \times 10^{-2} $ | 67.30 | 0 | 0 | $ 3.39 \times 10^{-2} $ | $ 2.75 \times 10^{-4} $ | ||
Bunny | 0.0455 | $ 2.57 \times 10^{-2} $ | 80.93 | 0 | 0 | $ 1.91 \times 10^{-2} $ | $ 4.40 \times 10^{-3} $ | ||
Chess King | 0.1521 | $ 2.72 \times 10^{-1} $ | 187.57 | 95 | 35 | $ 5.23 \times 10^{-2} $ | $ 2.64 \times 10^{-2} $ | ||
Art Statuette | 0.0981 | $ 5.41 \times 10^{-2} $ | 617.75 | 0 | 0 | $ 2.10 \times 10^{-2} $ | $ 5.92 \times 10^{-2} $ | ||
Bimba Statue | 0.1083 | $ 1.25 \times 10^{-1} $ | 743.00 | 76 | 0 | $ 3.29 \times 10^{-2} $ | $ 1.68 \times 10^{-1} $ |