    During the last several decades, computational advances, together with new insights that reduce the computational cost of numerical methods, have made many previously intractable problems feasible to solve. This has had a great impact on uncertainty and sensitivity analysis as tools of reliability engineering, where a huge number of model evaluations must be carried out to thoroughly investigate model behavior and to identify critical model parameters. In recent years, substantial development has advanced the current landscape of stochastic computational problems. Surrogate modeling has become a very popular method to effectively map complex input-output relationships and to tremendously reduce the computational cost of successive uncertainty and sensitivity evaluations. The general Polynomial Chaos Expansion (GPCE) [1,2,3] has proven to be a versatile method to predict model behavior for systems where direct evaluation can be cumbersome and very time consuming [4]. GPCE allows physical and engineering systems to be investigated with various Quantities Of Interest (QOI) as functions of various uncertain model parameters. The goal is to characterize how sensitively the output depends on the uncertain system inputs, which can be done by performing a Sobol decomposition [5] or by computing gradient-based measures [6]. A lot of research can be found on this process of uncertainty quantification (UQ) using the GPCE model [2,3,7]. It has found applications in non-destructive material testing [8], neuroscience [9,10,11,12], mechanical engineering [13], aerospace engineering [14], electrical engineering [15,16], fluid dynamics [17], and various other fields. However, compared to engineering, there is a relative lack of targeted applications of GPCE in the life sciences [18,19,20,21,22,23]. In this branch of science, strong assumptions are often made about the parameter values to be chosen. Moreover, analyzing individual subjects requires studying model behavior with stochastic parameter definitions instead of deterministic approaches.

    The majority of modern research on GPCE has focused on non-intrusive approaches, where the problem at hand can be treated as a black-box system. There exist a number of possibilities to further improve the basic GPCE approach. On one hand, it is possible to modify the assembly process of the basis functions by identifying and choosing the most suitable order [24], splitting the GPCE problem into a multi-element GPCE (ME-GPCE) [25], or applying an adaptive algorithm to extend the number of samples and basis functions iteratively [26,27]. This reduces the number of unknown GPCE coefficients and the number of required model evaluations.

    On the other hand, the potential of a more efficient GPCE approximation lies in the selection of the sampling locations prior to any GPCE approximation. To take a closer look at this topic, we thoroughly investigate GPCE-optimized sampling schemes compared to standard random sampling. We will (i) show the performance of standard Monte Carlo methods in the framework of polynomial chaos, (ii) improve them with space-filling sampling designs using state-of-the-art Latin Hypercube Sampling (LHS) schemes [28,29], and (iii) apply principles of Compressed Sensing (CS) and optimal sampling for L1-minimization [30,31,32]. We investigate the performance and reliability of the sampling schemes in a comprehensive numerical study, which consists of three representative test cases and one practical example. On one hand, LHS has seen a lot of usage in the fields of reliability engineering and uncertainty analysis since it is equipped with good space-filling properties [33,34]. The basic principle was improved by Jin et al. (2003) [35], who proposed an optimization scheme based on the Enhanced Stochastic Evolutionary algorithm (ESE), which maximizes the Maximum-Minimal Distance criterion [36] to reliably construct sample sets with a very even spread. It can be reasonably assumed that GPCE benefits significantly from this, since it ensures to some extent that the parameter space is scanned evenly and thus all features of the transfer function can be found. On the other hand, CS emerged in the field of efficient data recovery to reconstruct signals with a much smaller number of samples than the Shannon-Nyquist criterion would suggest [30,37,38,39]. It has been applied in a number of cases where the number of samples available is limited [40,41,42]. Because it is not possible to select the required basis functions in advance, most GPCE dictionaries are over-complete, which leads to sparse coefficient vectors. Using these properties, compressive sampling recently became popular in the framework of UQ. Another appealing fact is that in computational UQ, it is possible to freely draw additional samples, thus enabling a multitude of new possibilities compared to real data acquisition, where the number of measurements is limited and possibly even restricted. This led to a new subcategory of sparse Polynomial Chaos Expansions [26,43,44,45], where solvers like Least Angle Regression (LARS) [46] or Orthogonal Matching Pursuit (OMP) [47] are generally used to determine the sparse coefficient vectors. Better GPCE recoverability has also been shown by designing a unique sampling method for Legendre polynomials using the Chebyshev measure [31] or by extending this with a distinct coherence parameter and sampling from an altered input variable distribution using a Markov Chain Monte Carlo algorithm [32]. Those methods, however, are restricted to problems with a low number of random variables employing a high polynomial order. Progress has also been made in defining criteria such as the mutual coherence $ \mu $, the RIP constant [48], and a number of correlation constants to categorize measurement matrices and quantify their recovery potential. In this paper, we focus on evaluating the mutual coherence parameter as a global measure for a minimization objective and on a combination of different local criteria. We adopted the proposed "near optimal" sampling method of Alemazkoor and Meidani [49], which uses a greedy algorithm to ensure a more stable recovery.
We additionally use the same framework to create mutual-coherence-optimal GPCE matrices, which serve as a comparative example. Additionally, we propose a global approach to create an L1-optimal design by using an iterative algorithm to optimize the local and global optimality criteria. Given those two approaches to construct the set of sampling points, we propose a hybrid design that is partially created using LHS and then expanded according to a chosen L1-optimality criterion, or vice versa. This aims to give a broad overview of not only the effectiveness of the two branches, but also of possible enhancements or coupling effects arising from their interaction. In this paper, we compare the aforementioned sampling strategies on a set of test problems with varying order and dimension. We investigate their error convergence over different sample sizes. We also test their applicability on a practical example, which consists of an electrode-impedance model used to characterize the impedance of brain tissue. All algorithms are implemented in the open-source python package "pygpc" [50], and the scripts to run the presented benchmarks are provided in the Supplemental Material. The remainder of the paper is structured as follows.

    The theoretical background of GPCE is revisited in Section 2. It is followed by an introduction of the different sampling schemes, namely standard random sampling in Section 3.1, LHS in Section 3.4, and CS-optimal sampling in Section 3.6. An overview of the test problems to which the sampling schemes are applied is given in Section 4, together with the benchmark results. Finally, the results are discussed in Section 5.

    In GPCE, the $ d $ parameters of interest, which are assumed to underlie a distinct level of uncertainty, are modeled as a $ d $-variate random vector denoted by $ {\boldsymbol{\xi}} = (\xi_1, \, \xi_2, \, ... \xi_d) $ following some probability density function (pdf) $ p_i(\xi_i) $, with $ i = 1, ..., d $. The random parameters are defined in the probability space $ (\Theta, \Sigma, P) $. The event or random space $ \Theta $ contains all possible events, $ \Sigma $ is a $ \sigma $-algebra over $ \Theta $ containing sets of events, and $ P $ is a function assigning probabilities of occurrence to the events. The number of random variables $ d $ determines the dimension of the uncertainty problem. The parameters are assumed to be statistically independent of each other. In order to perform a GPCE, the random variables must have finite variance, which defines the problem in the $ L_2 $-Hilbert space.

    The quantity of interest (QOI), which will be analyzed in terms of the random variables $ {\boldsymbol{\xi}} $, is $ y({\bf{r}}) $. It may depend on some external parameters $ {\bf{r}} = (r_{0}, \, ..., \, r_{R-1}) $ such as space, where $ R = 3 $, or any other dependent parameters. These are treated as deterministic and are not considered in the uncertainty analysis.

    The basic concept of GPCE is to find a functional dependence between the random variables $ {\boldsymbol{\xi}} $ and the solutions $ y({\bf{r}}, {\boldsymbol{\xi}}) $ by means of an orthogonal polynomial basis $ \Psi({\boldsymbol{\xi}}) $. In its general form, it is given by:

    $ y({\bf{r}}, {\boldsymbol{\xi}}) = \sum\limits_{{\boldsymbol{\alpha}} \in \mathcal{A}} c_{{\boldsymbol{\alpha}}}({\bf{r}}) \, \Psi_{{\boldsymbol{\alpha}}}({\boldsymbol{\xi}}). $    (2.1)

    A separate GPCE expansion must be performed for every considered parameter set $ {\bf{r}} $. The discrete number of QOIs is denoted as $ N_y $.

    The terms are indexed by the multi-index $ {\boldsymbol{\alpha}} = (\alpha_0, ..., \alpha_{d-1}) $, which is a $ d $-tuple of non-negative integers $ {\boldsymbol{\alpha}}\in\mathbb{N}_0^d $. The sum is carried out over the multi-indices, contained in the set $ \mathcal{A} $.

    The functions $ \Psi_{{\boldsymbol{\alpha}}}({\boldsymbol{\xi}}) $ are the polynomial basis functions of the GPCE. They are composed of univariate polynomials $ \psi_{\alpha_i}(\xi_i) $:

    $ \Psi_{{\boldsymbol{\alpha}}}({\boldsymbol{\xi}}) = \prod\limits_{i=1}^{d} \psi_{\alpha_i}(\xi_i) $    (2.2)

    The polynomials $ \psi_{\alpha_i}(\xi_i) $ are defined for each random variable separately according to the corresponding input pdf $ p_i(\xi_i) $. They must be chosen to be orthogonal with respect to the pdfs of the random variables, e.g. Jacobi polynomials for beta-distributed random parameters or Hermite polynomials for normally distributed random variables.

    The family of polynomials for an optimal basis of continuous probability distributions is given by the Askey scheme [51]. The index of the polynomials denotes its order (or degree). In this way, the multi-index $ {\boldsymbol{\alpha}} $ corresponds to the order of the individual basis functions forming the joint basis function.

    In general, the set $ \mathcal{A} $ of multi-indices can be freely chosen according to the problem under investigation. In practical applications, the maximum order GPCE is frequently used. In this case, the set $ \mathcal{A} $ includes all polynomials whose total order does not exceed a predefined order $ p $. In the present work, the concept of maximum order GPCE is extended by introducing the interaction order $ p_i $. An interaction order $ p_i({\boldsymbol{\alpha}}) $ can be assigned to each multi-index $ {\boldsymbol{\alpha}} $. The multi-index reflects the respective powers of the polynomial basis functions of random variables, $ \Psi_{{\boldsymbol{\alpha}}}({\boldsymbol{\xi}}) $:

    $ p_i({\boldsymbol{\alpha}}) = \lVert {\boldsymbol{\alpha}} \rVert_0, $    (2.3)

    where $ \lVert{\boldsymbol{\alpha}}\rVert_0 = \#(i:\alpha_i > 0) $ is the zero (semi)-norm, quantifying the number of non-zero index entries. The reduced set of multi-indices is then constructed by the following rule:

    $ \mathcal{A}(p, p_i) := \{ {\boldsymbol{\alpha}} \in \mathbb{N}_0^d : \lVert {\boldsymbol{\alpha}} \rVert_1 \leq p \, \wedge \, \lVert {\boldsymbol{\alpha}} \rVert_0 \leq p_i \} $    (2.4)

    It includes all elements from a total order GPCE with the restriction of the interaction order $ p_i $. Reducing the number of basis functions is advantageous especially in case of high-dimensional problems. This is supported by observations in a number of studies, where the magnitude of the coefficients decreases with increasing order and interaction [52]. Besides that, no hyperbolic truncation was applied to the basis functions [26].
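
    As an illustration of the truncation rule in (2.4), the following sketch enumerates the reduced multi-index set for a given total order $ p $ and interaction order $ p_i $ by brute force (a minimal example for small $ d $; it is not the pygpc implementation):

        from itertools import product

        def multi_index_set(d, p, p_i):
            # Enumerate A(p, p_i): d-tuples of non-negative integers whose total (L1) order
            # is at most p and which have at most p_i non-zero entries, cf. eq. (2.4)
            A = []
            for alpha in product(range(p + 1), repeat=d):
                if sum(alpha) <= p and sum(a > 0 for a in alpha) <= p_i:
                    A.append(alpha)
            return A

        # Example: d = 3, total order p = 3, interaction order p_i = 2;
        # (1, 1, 1) is excluded because three variables interact, although its total order is 3 <= p
        print(multi_index_set(3, 3, 2))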

    After constructing the polynomial basis, the corresponding GPCE coefficients $ c_{{\boldsymbol{\alpha}}}({\bf{r}}) $ must be determined for each output quantity. In this regard, the output variables are projected from the $ d $-dimensional probability space $ \Theta $ into the $ N_c $-dimensional polynomial space $ \mathcal{P}_{N_c} $. This way, an analytical approximation of the solutions $ y({\bf{r}}, {\boldsymbol{\xi}}) $ as a function of the random input parameters $ {\boldsymbol{\xi}} $ is derived, and a very computationally efficient investigation of its statistics becomes possible.

    The GPCE from (2.1) can be written in matrix form as:

    $ {\bf{Y}} = {\bf{\Psi}} {\bf{C}} $    (2.5)

    Depending on the sampling strategy, one may define a diagonal positive-definite matrix $ {\bf{W}} $ whose diagonal elements $ {\bf{W}}_{i, i} $ are given by a function of sampling points $ w({\boldsymbol{\xi}}^{(i)}) $.

    $ {\bf{W}} {\bf{Y}} = {\bf{W}} {\bf{\Psi}} {\bf{C}} $    (2.6)

    The GPCE coefficients for each QOI (columns of $ {\bf{C}} $) can then be found by using solvers that minimize either the L1 or the L2 norm of the residual, depending on the expected sparsity of the coefficient vectors. Each row in (2.6) corresponds to a distinct sampling point $ {\boldsymbol{\xi}}_i $. For this reason, the choice of the sampling points has a considerable influence on the characteristics and solvability of the equation system.
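
    As an illustration of (2.5), the sketch below assembles the GPCE matrix for orthonormal Legendre polynomials (uniform inputs on $ [-1, 1] $) at a set of random sampling points and determines the coefficients of one QOI by ordinary least squares; in the under-determined case an L1 solver such as LARS-Lasso would be used instead. The toy model and function names are purely illustrative and not taken from pygpc:

        import numpy as np
        from numpy.polynomial import legendre as leg

        def psi_1d(order, x):
            # Orthonormal Legendre polynomial of given order w.r.t. the uniform pdf 1/2 on [-1, 1]
            c = np.zeros(order + 1)
            c[order] = 1.0
            return np.sqrt(2 * order + 1) * leg.legval(x, c)

        def gpce_matrix(xi, alphas):
            # Rows correspond to sampling points xi (shape M x d), columns to multi-indices alphas
            M, d = xi.shape
            Psi = np.ones((M, len(alphas)))
            for k, alpha in enumerate(alphas):
                for i in range(d):
                    Psi[:, k] *= psi_1d(alpha[i], xi[:, i])
            return Psi

        rng = np.random.default_rng(1)
        xi = rng.uniform(-1.0, 1.0, size=(50, 2))         # 50 random sampling points, d = 2
        y = xi[:, 0]**2 + 0.5 * xi[:, 0] * xi[:, 1]       # toy QOI (illustrative only)
        alphas = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
        Psi = gpce_matrix(xi, alphas)
        coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)  # L2 solution of (2.5)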

    Complex numerical models can be very computationally intensive. To enable uncertainty and sensitivity analysis of such models, the number of sampling points must be reduced to a minimum. Minimizing the number of sampling points may lead to a situation where there are fewer observations than unknowns, i.e. $ M \leq K $, resulting in an under-determined system of equations with infinitely many solutions for $ {\bf{c}} $. Considering compressive sampling, we want $ {\bf{c}} $ to be the sparsest solution, formulating the recovery problem as:

    $ \min\limits_{{\bf{c}}} ||{\bf{c}}||_0 \quad \mathrm{subject\ to} \quad {\bf{\Psi}} {\bf{c}} = {\bf{u}} $    (2.7)

    where $ ||.||_0 $ indicates the $ \ell_0 $-norm, the number of non-zero entries in $ {\bf{c}} $. This optimization problem is NP-hard and not convex. The latter property can be overcome by reformulating it using the L1-norm:

    $ \min\limits_{{\bf{c}}} ||{\bf{c}}||_1 \quad \mathrm{subject\ to} \quad {\bf{\Psi}} {\bf{c}} = {\bf{u}} $    (2.8)

    It has been shown that if $ [{\bf{\Psi}}] $ is sufficiently incoherent and $ {\bf{c}} $ is sufficiently sparse, the solution of the $ \ell_0 $ minimization is unique and equal to the solution of the L1 minimization [53]. The minimization in equation 2.8 is called basis pursuit [54] and can be solved using linear programming.
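
    A minimal sketch of the basis pursuit problem (2.8), reformulated as a linear program by splitting $ {\bf{c}} = {\bf{c}}^+ - {\bf{c}}^- $ into non-negative parts; it uses scipy.optimize.linprog for illustration, whereas the benchmarks below employ dedicated solvers such as LARS-Lasso:

        import numpy as np
        from scipy.optimize import linprog

        def basis_pursuit(Psi, u):
            # Solve min ||c||_1 subject to Psi @ c = u as a linear program
            # with c = c_plus - c_minus and c_plus, c_minus >= 0
            M, K = Psi.shape
            cost = np.ones(2 * K)                  # sum(c_plus) + sum(c_minus) = ||c||_1
            A_eq = np.hstack([Psi, -Psi])          # Psi @ (c_plus - c_minus) = u
            res = linprog(cost, A_eq=A_eq, b_eq=u, bounds=(0, None), method="highs")
            return res.x[:K] - res.x[K:]

        # Example: recover a 3-sparse coefficient vector from 15 measurements (K = 30 unknowns)
        rng = np.random.default_rng(0)
        Psi = rng.standard_normal((15, 30))
        c_true = np.zeros(30)
        c_true[[2, 7, 19]] = [1.0, -2.0, 0.5]
        c_rec = basis_pursuit(Psi, Psi @ c_true)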

    The most straightforward sampling method is to draw samples according to the input distributions. In this case, one proceeds with a Monte Carlo method to sample the random domain without any sophisticated process for choosing the sampling locations. The random samples must be chosen independently and should be uncorrelated, but a simple sampling process may inadvertently violate this requirement, especially when the number of sampling points is small (a situation we are targeting). For instance, the sampling points can be concentrated in certain regions that do not reveal some important features of the model's behavior, thus significantly degrading the overall quality of the GPCE approximation.

    Coherence-optimal (CO) sampling aims to improve the stability of the coefficients when solving (2.6). It was introduced by Hampton and Doostan in the framework of GPCE in [52]. The Gram matrix (also referred to as the gramian or information matrix) defined in eq. (3.1) and its properties play a central role when determining the GPCE coefficients. Coherence-optimal sampling has been the building block for a number of sampling strategies that aim for an efficient sparse recovery of the PC [49,55]. It generally outperforms random sampling by a large margin on problems with higher order than dimensionality $ p \geq d $ and has been claimed to perform well on any given problem when incorporated in compressive sampling approaches [32]. It is defined by:

    $ {\bf{G}}_{{\bf{\Psi}}} = \frac{1}{N_g} {\bf{\Psi}}^T {\bf{\Psi}} $    (3.1)

    CO sampling seeks to minimize the spectral matrix norm between the Gram matrix and the identity matrix, i.e. $ ||{\bf{G_\Psi}}-{\bf{I}}|| $, by minimizing the coherence parameter $ \mu $:

    $ \mu = \sup\limits_{{\boldsymbol{\xi}} \in \Omega} \sum\limits_{j=1}^{P} |w({\boldsymbol{\xi}}) \, \psi_j({\boldsymbol{\xi}})|^2 $    (3.2)

    This can be done by sampling the input parameters with an alternative distribution:

    $ P_{{\bf{Y}}}({\boldsymbol{\xi}}) := c^2 P({\boldsymbol{\xi}}) B^2({\boldsymbol{\xi}}), $    (3.3)

    where $ c $ is a normalization constant, $ P({\boldsymbol{\xi}}) $ is the joint probability density function of the original input distributions, and $ B({\boldsymbol{\xi}}) $ is an upper bound of the PC basis:

    $ B({\boldsymbol{\xi}}) := \sqrt{\sum\limits_{j=1}^{P} |\psi_j({\boldsymbol{\xi}})|^2} $    (3.4)

    To avoid defining the normalization constant $ c $, a Markov Chain Monte Carlo approach using a Metropolis-Hastings sampler [56] is used to draw samples from $ P_{{\bf{Y}}}({\boldsymbol{\xi}}) $ in (3.3). For the Metropolis-Hastings sampler, it is necessary to define a suitable candidate distribution. For coherence-optimal sampling according to (3.2), this is realized by a proposal distribution $ g(\xi) $ [52]. By sampling from a distribution other than $ P(\xi) $, however, it is not possible to guarantee that $ {\bf{\Psi}} $ is a matrix of orthonormal polynomials. Therefore, $ {\bf{W}} $ needs to be a diagonal positive-definite matrix of weight functions $ w(\xi) $. In practice, $ {\bf{W}} $ can be computed with:

    $ w_i({\boldsymbol{\xi}}) = \frac{1}{B_i({\boldsymbol{\xi}})} $    (3.5)

    A detailed description about the technique can be found in [52].
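
    To illustrate the idea, the sketch below draws samples from a density proportional to $ P({\boldsymbol{\xi}}) B^2({\boldsymbol{\xi}}) $ for a one-dimensional Legendre basis with a plain random-walk Metropolis-Hastings sampler. Hampton and Doostan [52] use tailored proposal distributions $ g(\xi) $, so this is only a simplified stand-in for the actual coherence-optimal sampler:

        import numpy as np
        from numpy.polynomial import legendre as leg

        def B_squared(x, order):
            # Sum of squared orthonormal Legendre polynomials up to the given order, cf. (3.4)
            total = np.zeros_like(np.atleast_1d(x), dtype=float)
            for n in range(order + 1):
                c = np.zeros(n + 1)
                c[n] = 1.0
                total += (2 * n + 1) * leg.legval(x, c) ** 2
            return total

        def coherence_optimal_1d(n_samples, order, step=0.2, rng=None):
            # Random-walk Metropolis-Hastings targeting the unnormalized density 1_[-1,1](x) * B^2(x)
            rng = rng or np.random.default_rng()
            target = lambda x: B_squared(x, order)[0] if -1.0 <= x <= 1.0 else 0.0
            x, chain = 0.0, []
            while len(chain) < n_samples:
                x_new = x + step * rng.standard_normal()
                if rng.uniform() < target(x_new) / target(x):   # symmetric proposal
                    x = x_new
                chain.append(x)
            return np.array(chain)

        xi = coherence_optimal_1d(200, order=12)
        w = 1.0 / np.sqrt(B_squared(xi, 12))   # weights w(xi) = 1/B(xi) for the weighted system (2.6)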

    A judicious choice of sampling points $ \{{\boldsymbol{\xi}}^{(i)}\}_{i=1}^{N_g} $ allows us to improve the properties of the Gramian without any prior knowledge about the model under investigation. The selection of an appropriate optimization criterion derived from $ {\bf{G_\Psi}} $ and the identification of the corresponding optimal sampling locations is the core concept of optimal design of experiments (ODE). The most popular criterion is $ D $-optimality, where the goal is to increase the information content from a given number of sampling points by minimizing the determinant of the inverse of the Gramian:

    $ \phi_D = |{\bf{G}}_{{\bf{\Psi}}}^{-1}|^{1/N_c}, $    (3.6)

    $ D $-optimal designs are focused on precise estimation of the coefficients. Besides $ D $-optimal designs, there exist many other alphabetic optimal designs, such as $ A $-, $ E $-, $ I $-, or $ V $- optimal designs with different goals and criteria. A nice overview of these designs can be found in [57,58].

    Hadigol and Doostan investigated the convergence behavior of $ A $-, $ D $- and $ E $-optimal designs [55] in combination with coherence-optimal sampling in the framework of least squares GPCE. They found that those designs clearly outperform standard random sampling. Their analysis was restricted to cases where the number of sampling points is larger than the number of unknown coefficients ($ N_g > N_c $). Based on the current state of knowledge, our analysis focuses on investigating the convergence properties of $ D $-optimal and $ D $-coherence-optimal designs in combination with L1 minimization where $ N_g < N_c $.

    In order to overcome the disadvantages of standard random sampling for low sample sizes, one may use sampling schemes that improve the coverage of the random space. Early work on this topic focused on pseudo-random sampling while optimizing distinct distance and correlation criteria between the sampling points. Designs optimizing the Maximum Minimal distance [35,36,59] or Audze-Eglais Designs [60,61] proved to be both more efficient and more reliable than standard random sampling schemes. Space-filling optimal sampling such as Latin Hypercube Sampling (LHS) is nowadays frequently being used in the framework of GPCE [14,55,62]. In the following, we briefly introduce two prominent distance criteria we used in our space-filling optimal sampling approaches.

    Maximum-Minimal distance criterion: The maximum-minimal distance criterion is a space-filling optimality criterion. A design can be called maximum-minimum distance optimal if it maximizes the minimum inter-site distance [36]:

    $ \min\limits_{1 \leq i, j \leq n, \, i \neq j} d({\bf{x}}_{i}, {\bf{x}}_{j}) \quad \mathrm{subject\ to} \quad d({\bf{x}}_{i}, {\bf{x}}_{j}) = d_{ij} = \left( \sum\limits_{k=1}^{m} |x_{ik} - x_{jk}|^{t} \right)^{\frac{1}{t}} $    (3.7)

    where $ d({\bf{x}}_{i}, {\bf{x}}_{j}) $ is the distance between two sampling points $ {\bf{x}}_{i} $ and $ {\bf{x}}_{j} $, and $ t = 1 $ or $ 2 $. A design optimized in its minimum inter-site distance is able to create well-distributed sampling points. For a low number of sampling points, however, the sampling points may be heavily biased towards the edges of the sampling space because the distance criterion pushes the sampling points relentlessly outwards and away from possible features close to the center of the sampling space [59].

    The $ \varphi_{p} $ criterion: To counteract the shortcomings of the plain inter-site distance in (3.7), the equivalent $ \varphi_{p} $ criterion has been proposed [59]. A $ \varphi_{p} $-optimal design can be constructed by setting up a distance list $ (d_{1}, ..., d_{s}) $ obtained by sorting the inter-site distances $ d_{ij} $, together with a corresponding index list $ (J_{1}, ..., J_{s}) $. The $ d_{i} $ are the distinct distance values, $ d_{1} < d_{2} < ... < d_{s} $, and $ J_{i} $ is the number of pairs of sites in the design separated by $ d_{i} $. A design can then be called $ \varphi_{p} $-optimal if it minimizes:

    $ \varphi_p = \left( \sum\limits_{i=1}^{s} J_i \, d_i^{-p} \right)^{\frac{1}{p}} $    (3.8)

    We empirically choose $ p $ as 10 in the numerical construction of LHS designs.

    Functions with a high number of variables occur very commonly in the sparse reconstruction setting considered below, and the goal of a sparse reconstruction is to keep the sample size small. An important caveat of optimizing criteria based on the distances $ d_i $ and $ d_{ij} $, as in the maximum-minimal distance and $ \varphi_p $ criteria, is a systematic bias that breaks the uniformity sought by the optimization algorithm in Section 3.5.3. This problem has been identified recently by Vořechovský and Eliáš [63,64] and becomes apparent in efficient optimization. They introduced a new distance criterion called the periodic distance:

    $ \bar{d}_{ij} = \left( \sum\limits_{k=1}^{m} \left( \min\left( |x_{ik} - x_{jk}|, \, 1 - |x_{ik} - x_{jk}| \right) \right)^{t} \right)^{\frac{1}{t}} $    (3.9)

    With $ \bar{d}_{ij} $, it is possible to calculate the periodic maximum-minimum distance and the periodic $ \varphi_p $ criterion as $ \min_{1 \leq i, j\leq n, i\neq j} \bar{d}_{ij} $ and $ \varphi_p(\bar{d}_{ij}) $, respectively, by using the periodic distance instead of the conventional Euclidean distance metric. Furthermore, for the $ \varphi_p $ criterion, the same authors and Mašek showed that specifying the $ p $-exponent based on investigations of the potential energy of the design can enhance the space-filling and projection properties as well as decrease the discrepancy of the design. For successful application of LHS, they recommend $ p = N_{var} + 1 $, where $ N_{var} $ is the number of variables in the given function [65].
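
    The three distance-based criteria can be evaluated directly from a candidate design $ {\bf{X}} $ with rows in the unit hypercube; a small sketch follows (note that summing $ d_{ij}^{-p} $ over all pairs is equivalent to grouping the distances into distinct values $ d_i $ with multiplicities $ J_i $ as in (3.8)):

        import numpy as np
        from scipy.spatial.distance import pdist

        def maximin(X, t=2):
            # Minimum inter-site distance (3.7); a maximin design maximizes this value
            return pdist(X, metric="minkowski", p=t).min()

        def phi_p(X, p=10, t=2):
            # phi_p criterion (3.8), to be minimized
            d = pdist(X, metric="minkowski", p=t)
            return np.sum(d ** (-float(p))) ** (1.0 / p)

        def phi_p_periodic(X, p=10, t=2):
            # phi_p evaluated with the periodic distance (3.9) on the unit hypercube
            n = X.shape[0]
            d = []
            for i in range(n):
                for j in range(i + 1, n):
                    diff = np.abs(X[i] - X[j])
                    diff = np.minimum(diff, 1.0 - diff)   # wrap-around distance per dimension
                    d.append(np.sum(diff ** t) ** (1.0 / t))
            return np.sum(np.array(d) ** (-float(p))) ** (1.0 / p)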

    In LHS, the $ d $-dimensional sampling domain is segmented into $ n $ subregions corresponding to the $ n $ sampling points to be drawn. LHS designs ensure that every subregion is sampled only once. This method can be mathematically expressed by creating a matrix of sampling points $ {\bf{\Pi}} $ with:

    $ \pi_{i,j} = \frac{p_{i,j} - u_{i,j}}{n}, $    (3.10)

    where $ {\bf{P}} $ is a matrix of column-wise randomly permuted indices of its rows and $ {\bf{U}} $ is a matrix of independent uniformly distributed random numbers $ u \in [0, 1] $.

    The space-filling properties of LHS designs can be improved by optimizing the $ \varphi_{p} $ criterion. A pseudo-optimal design can be determined by creating a pool of $ n_i $ standard LHS designs and choosing the one with the best $ \varphi_{p} $ criterion. As $ n_i $ reaches infinity, the design will become space-filling optimal. In this study we used $ n_i = 100 $ iterations, which was found to be an efficient trade-off between computational cost and $ \varphi_{p} $-optimality.
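
    A minimal sketch of standard LHS according to (3.10) and of the pseudo-optimal variant that keeps the $ \varphi_p $-best design out of $ n_i $ random LHS designs (helper names are illustrative; the paper's implementation resides in pygpc):

        import numpy as np
        from scipy.spatial.distance import pdist

        def lhs_standard(n, d, rng):
            # Standard LHS (3.10): one sample per subregion, randomly placed and column-wise permuted
            P = np.column_stack([rng.permutation(n) + 1 for _ in range(d)])
            U = rng.uniform(size=(n, d))
            return (P - U) / n          # samples in [0, 1]^d

        def lhs_best_phi_p(n, d, n_iter=100, p=10, rng=None):
            # Pseudo-optimal LHS: draw n_iter standard designs and keep the phi_p-best one
            rng = rng or np.random.default_rng()
            best, best_phi = None, np.inf
            for _ in range(n_iter):
                X = lhs_standard(n, d, rng)
                phi = np.sum(pdist(X) ** (-float(p))) ** (1.0 / p)
                if phi < best_phi:
                    best, best_phi = X, phi
            return best

        X = lhs_best_phi_p(n=25, d=2)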

    The Enhanced Stochastic Evolutionary Algorithm Latin Hypercube Sampling (LHS-ESE) is a very stable space-filling optimal algorithm designed by Jin et al. (2003) [35]. The resulting designs aim for a specified $ \varphi_p $ value and achieve it by multiple element-wise exchanges of an initial LHS design in an inner loop, while storing the respective $ \varphi_p $ values in an outer loop. This process shows a far smaller variance in the space-filling criteria of the created sample sets.

    However, we observed that the LHS-ESE scheme often undersamples the boundaries of the random domain, which is disadvantageous for transfer functions with high gradients close to the parameter boundaries. This is less apparent for a high number of samples but becomes a serious drawback when the number of sampling points is low; this effect can be linked to the systematic bias pointed out in [63]. In order to overcome this problem, we modified the LHS-ESE algorithm by shrinking the first and last subregion of the interval to a fraction of their original sizes while keeping the remaining intermediate $ n-2 $ subregions equally spaced. The procedure is illustrated in Figure 1. The initial matrix for the Latin Hypercube design $ {\bf{\Pi}} $ will then be changed as $ {\bf{P}} $ is randomly permuted according to:

    $ \pi_{i,1} = \pi_{i,n} = \alpha \, \frac{p_{i,j} - u_{i,j}}{n}, $    (3.11)
    Figure 1.  Schematic representation of the SC-ESE, where the outer area is cut to an $ \alpha $ fraction of its original size and the center is stretched outward.

    where $ \alpha $ is the fraction to which the size of the border interval is decreased, and $ i \in [1, d] $ and $ j \in [1, n] $ are used to cover the size reduction of the intervals at the edges. Our empirical studies showed that a reduction of $ \alpha = \frac{1}{4} $ counteracts the aforementioned undersampling close to the border. We pick the index $ j $ to target these edges, since after the normalization by dividing by $ n $, the indices $ 1 $ and $ n $ for $ j $ are expected for the values closest to $ 0 $ and $ 1 $ respectively, which occur at the borders of the sampled section. The centre is then stretched by:

    $ \pi_{i, j_c} = \begin{cases} \dfrac{p_{i, j_c} - u_{i, j_c}}{n} - (1-\alpha) \dfrac{p_{i, j_c}}{n^2} & \text{for } j \leq \frac{n}{2} \\ \dfrac{p_{i, j_c} - u_{i, j_c}}{n} + (1-\alpha) \dfrac{p_{i, j_c}}{n^2} & \text{else}, \end{cases} $    (3.12)

    with $ j_c $ being the indices of $ j $ without the border domains $ 1 $ and $ n $, $ j_c = j \setminus \{1, n\} $. After that alteration, the elements of each column in $ {\bf{\Pi}} $ can be randomly permuted to proceed with the construction of the Latin Hypercube design just like in 3.10. If $ \alpha $ is made smaller, then the size of the guaranteed sampling region at the border also becomes smaller, thus forcing the sampling point to be chosen closer to the border as illustrated in Figure 1. To the best of our knowledge, the Enhanced Stochastic Evolutionary LHS algorithm has not yet been studied in the context of GPCE.

    Compressive Sampling is a novel method, first introduced in the field of signal processing, that allows the recovery of signals with significantly fewer samples assuming that the signals are sparse: i.e. a certain portion of the coefficients are zero, meaning that the coefficient vector $ {\bf{c}} $ can be well-approximated with only a small number of non-vanishing terms. A coefficient vector $ {\bf{c}} $ that is $ s $-sparse obeys:

    $ ||{\bf{c}}||_0 \leq s, \quad s \in \mathbb{N} $    (3.13)

    The locations of the sampling points have a profound impact on the reconstruction quality because they determine the properties of the GPCE matrix. There are several criteria that can be evaluated exclusively on the basis of the GPCE matrix and that may favor the reconstruction. It has been shown that optimizing those criteria leads to designs that promote successful reconstruction [49,66]. In the following, we give a brief overview of the different criteria considered in this study.

    The mutual coherence (MC) of a matrix measures the cross-correlations between its columns by evaluating the largest absolute and normalized inner product between different columns. It can be evaluated by:

    $ \mu({\bf{\Psi}}) = \max\limits_{1 \leq i, j \leq N_c, \, j \neq i} \frac{|{\boldsymbol{\psi}}_i^T {\boldsymbol{\psi}}_j|}{||{\boldsymbol{\psi}}_i||_2 \, ||{\boldsymbol{\psi}}_j||_2} $    (3.14)

    The objective is to select sampling points that minimize $ \mu({\bf{\Psi}}) $ for a desired L1-optimal design. It is noted that minimizing the mutual-coherence considers only the worst-case scenario and does not necessarily improve compressive sampling performance in general [39].

    It is shown in [39,45,46,47] that the robustness and accuracy of signal recovery can be increased by minimizing the distance between the Gram matrix $ {\bf{G_\bf{\Psi}}} $ and the identity matrix $ {\bf{I}}_{N_c} $:

    $ \gamma({\bf{\Psi}}) = \frac{1}{N} \min\limits_{{\bf{\Psi}} \in \mathbb{R}^{M \times N_c}} ||{\bf{I}}_{N_c} - {\bf{G}}_{{\bf{\Psi}}}||_F^2 $    (3.15)

    where $ ||\cdot||_F $ denotes the Frobenius norm and $ N := K \times (K - 1) $ is the total number of column pairs. Note that optimizing only the average cross-correlation can result in large mutual coherence and is regularly prone to inaccurate recovery. In this context, Alemazkoor and Meidani (2018) [49] proposed a hybrid optimization criterion $ f({\bf{\Psi}}) $, which minimizes both the average cross-correlation $ \gamma({\bf{\Psi}}) $ and the mutual coherence $ \mu({\bf{\Psi}}) $:

    $ \arg\min f({\bf{\Psi}}) = \arg\min \left( \frac{\mu_i - \min({\boldsymbol{\mu}})}{\max({\boldsymbol{\mu}}) - \min({\boldsymbol{\mu}})} \right)^2 + \left( \frac{\gamma_i - \min({\boldsymbol{\gamma}})}{\max({\boldsymbol{\gamma}}) - \min({\boldsymbol{\gamma}})} \right)^2 $    (3.16)

    with $ \boldsymbol\mu = (\mu_{1}, \mu_{2}, ..., \mu_{i}) $ and $ \boldsymbol\gamma = (\gamma_1, \gamma_2, ..., \gamma_i) $.
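
    Both matrix criteria can be evaluated directly from a candidate GPCE matrix; a short sketch of (3.14) and (3.15) follows (the hybrid criterion (3.16) then normalizes $ \mu_i $ and $ \gamma_i $ over all candidates of one greedy iteration):

        import numpy as np

        def mutual_coherence(Psi):
            # Largest absolute normalized inner product between different columns, eq. (3.14)
            Psi_n = Psi / np.linalg.norm(Psi, axis=0)
            G = np.abs(Psi_n.T @ Psi_n)
            np.fill_diagonal(G, 0.0)
            return G.max()

        def avg_cross_correlation(Psi):
            # Squared Frobenius distance between the Gramian and the identity,
            # divided by the number of column pairs, cf. eq. (3.15)
            M, Nc = Psi.shape
            G = Psi.T @ Psi / M
            return np.linalg.norm(np.eye(Nc) - G, "fro") ** 2 / (Nc * (Nc - 1))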

    We used a greedy algorithm as shown in Algorithm 1 to determine L1-optimal sets of sampling points. In this algorithm, we generate a pool of $ M_p $ samples and randomly pick an initial sample. In the next iteration, we successively add a sampling point and calculate the respective optimization criteria. After evaluating all possible candidates, we select the sampling point yielding the best criterion and append it to the existing set. This process is repeated until the sampling set has the desired size $ M $.

    Algorithm 1 Greedy algorithm to determine L1-optimal sets of sampling points
    1:  create a random pool of $M_p$ samples
    2:  create the measurement matrix ${\bf{\Psi_{pool}}}$ of the samples
    3:  initiate ${\bf{\Psi_{opt}}}$ with a random row $r$ of ${\bf{\Psi_{pool}}}$
    4:  add row $ r $ to the set of selected rows $r_{added}$
    5:  for $ i $ in $(2, M)$ do
    6:      for $ j $ in $(1, M_p)$ without $r_{added}$ do
    7:          $ {\bf{\Psi_j}} $ = row-concatenate$({\bf{\Psi_{opt}}}, r_j)$
    8:          evaluate $ f_j = f({\bf{\Psi_j}}) $
    9:      end for
    10:     save $ f_i = \min_j(f_j) $ and the corresponding index $ j_{best} $
    11:     add $ r_{j_{best}} $ to $ r_{added} $
    12:     $ {\bf{\Psi_{opt}}} $ = row-concatenate$({\bf{\Psi_{opt}}}, r_{j_{best}})$
    13: end for
    14: return $ {\bf{\Psi_{opt}}} $ and $ r_{added} = X_{best} $

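
    A compact, runnable version of Algorithm 1 (a sketch that uses the mutual coherence as the selection criterion $ f $; the hybrid criterion (3.16) or a D-optimality criterion can be substituted without changing the structure):

        import numpy as np

        def mutual_coherence(Psi):
            # Selection criterion f, as in the previous sketch (eq. (3.14))
            Psi_n = Psi / np.linalg.norm(Psi, axis=0)
            G = np.abs(Psi_n.T @ Psi_n)
            np.fill_diagonal(G, 0.0)
            return G.max()

        def greedy_l1_optimal(Psi_pool, M, criterion=mutual_coherence, rng=None):
            # Greedily select M rows of Psi_pool such that the growing measurement matrix
            # minimizes the chosen criterion in every iteration (Algorithm 1)
            rng = rng or np.random.default_rng()
            added = [int(rng.integers(Psi_pool.shape[0]))]            # random initial row
            while len(added) < M:
                candidates = [j for j in range(Psi_pool.shape[0]) if j not in added]
                scores = [criterion(Psi_pool[added + [j]]) for j in candidates]
                added.append(candidates[int(np.argmin(scores))])
            return np.array(added)   # indices of the selected sampling points in the pool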

    The respective performances of the sampling schemes are thoroughly investigated based on four different scenarios. We compare the accuracy of the resulting GPCE approximation with respect to the original model and investigate the convergence properties and recoverability of the different sampling schemes. Following comparable studies [49], we used uniformly distributed random variables in all examples and constructed GPCE bases using Legendre polynomials. This is the most general case, as any other input distribution can (in principle) be emulated by modifying the post-processing stage of the GPCE. The examples compare the sampling schemes on three theoretical test functions: (i) The Ishigami Function representing a low-dimensional problem that requires a high approximation order; (ii) the six-dimensional Rosenbrock Function representing a problem of medium dimension and approximation order; and (iii) the 30-dimensional Linear Paired Product (LPP) Function [49] using a low-order approximation. Finally, we consider a practical example, which consists of an electrode model used to measure the impedance of biological tissues for different frequencies. All sampling schemes were implemented in the open-source python package pygpc [50], and the corresponding scripts to run the benchmarks are provided in the supplemental material. The sparse coefficient vectors were determined using the LARS-Lasso solver from scipy [67].

    A summary of the GPCE parameters for each test case is given in Table 1. For each test function, we successively increased the approximation order until an NRMSD of $ \varepsilon < 10^{-5} $ was reached, assuming a very high number of sampling points. In this regard, we wanted to eliminate approximation-order effects in order to focus on the convergence with respect to the number of sampling points.

    Table 1.  Overview of numerical examples.
    Function | Problem | Dim. | Order | Int. order | Basis functions
    Ishigami | Low dim., high order | $ 2 $ | $ 12 $ | $ 2 $ | $ 91 $
    Rosenbrock | Med. dim., med. order | $ 6 $ | $ 5 $ | $ 2 $ | $ 181 $
    LPP | High dim., low order | $ 30 $ | $ 2 $ | $ 2 $ | $ 496 $
    Electrode | Application example | $ 7 $ | $ 5 $ | $ 3 $ | $ 596 $


    For each sampling scheme and test case, we created a large set of sampling points that can be segmented into subsets of different sizes. For each subset size, we computed the associated GPCE approximation and calculated the normalized root mean square deviation (NRMSD) between the GPCE approximation $ \tilde{y} $ and the solution of the original model $ y $ using an independent test set containing $ N_t = 10,000 $ random sampling points. The NRMSD is given by:

    $ \varepsilon = \frac{\sqrt{\frac{1}{N_t}\sum\limits_{i=1}^{N_t}\left(y_i - \tilde{y}_i\right)^2}}{\max(y) - \min(y)} $    (4.1)
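
    A one-to-one implementation of (4.1), assuming y and y_tilde hold the original model and GPCE evaluations on the test set:

        import numpy as np

        def nrmsd(y, y_tilde):
            # Normalized root mean square deviation, eq. (4.1)
            y, y_tilde = np.asarray(y), np.asarray(y_tilde)
            rmsd = np.sqrt(np.mean((y - y_tilde) ** 2))
            return rmsd / (np.max(y) - np.min(y))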

    We evaluated the average convergence of the NRMSD together with the success rate of each sampling scheme by considering $ 30 $ repetitions. In addition, we quantified the convergence of the first two statistical moments, i.e. the mean and the standard deviation. The results are presented in the supplemental material. Reference values for the mean and standard deviation were obtained for each test function from $ N = 10^7 $ evaluations from the original model functions.

    As a first test case, we investigate the performance of the different sampling schemes considering the Ishigami function [68]. It is often used as an example for uncertainty and sensitivity analysis because it exhibits strong nonlinearity and nonmonotonicity. It is given by:

    $ y = \sin(x_1) + a \sin^2(x_2) + b \, x_3^4 \sin(x_1) $    (4.2)

    This example represents a low-dimensional problem requiring a high polynomial order to provide an accurate surrogate model. We defined $ x_1 $ and $ x_2 $ as uniformly distributed random variables and set $ x_3 = 1 $. The remaining constants are $ a = 7 $ and $ b = 0.1 $ according to [69] and [70]. The approximation order was set to $ p = 12 $, resulting in $ N_c = 91 $ basis functions. We investigated the function in the interval $ (-\pi, \pi)^2 $ as shown in Figure 2.

    Figure 2.  Two-dimensional Ishigami function used to investigate the performance of different sampling schemes.
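
    For reference, a direct implementation of (4.2) with the parameter choices used in this test case:

        import numpy as np

        def ishigami(x1, x2, x3=1.0, a=7.0, b=0.1):
            # Ishigami function (4.2); x3 is fixed to 1 in the present test case
            return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

        rng = np.random.default_rng(0)
        x1, x2 = rng.uniform(-np.pi, np.pi, size=(2, 1000))   # uniform inputs on (-pi, pi)
        y = ishigami(x1, x2)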

    The convergence results for the different sampling schemes are shown in Figure 3. It shows the dependence of the NRMSD $ \varepsilon $ on the number of sampling points $ N $ for the best sampling schemes from the LHS (Figure 3(a)) and L1-optimal (Figure 3(b)) categories. The error convergence is shown as box-plots with whiskers over different sampling sizes, connected by lines representing the median; standard random sampling, which has the largest boxes, is overlaid as a black line in each plot for reference. We defined a target error level of $ 10^{-3} $, indicated by a horizontal red line, which corresponds to a relative error between the GPCE approximation and the original model function of 0.1%. Additionally, the mutual coherence of the sampling sets is shown in Figure 3(c) and (d). The success rate of the best sampling schemes from both categories and of standard random sampling is shown in Figure 3(e). The table shown in Figure 3(f) lists the median number of grid points required to reach the target error, $ \hat{N}_{\varepsilon} $, together with its standard deviation. We also evaluated the median number of sampling points required by the random sampling scheme to determine the GPCE coefficients considering the L2 norm using the Moore-Penrose pseudo-inverse. All other evaluations were performed using the LARS-Lasso solver. The success rates of the algorithms with the lowest 99% recovery sampling size $ \hat{N}_{\varepsilon}^{(99\%)} $ of each category of sampling schemes are marked in bold.

    Figure 3.  (a) and (b) Convergence of the NRMSD $ \varepsilon $ with respect to the number of sampling points $ N $ considering the Ishigami function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) mutual coherence of the gPC matrix; (e) success rate of the best converging grids for error thresholds of $ 0.1\% $, $ 1\% $, and $ 10\% $; (f) average number of sampling points needed to reach the error threshold of $ 0.1\% $, a success rate of $ 95\% $, $ 99\% $, and the associated p-value comparing if the grids require significantly lower number of sampling points than standard random sampling.

    The sparsity of the model function greatly influences the reconstruction properties and hence the effectiveness of the sampling schemes. A GPCE approximation of the Ishigami function with an accuracy of $ < 10^{-5} $ requires $ k = 12 $ out of the available $ N_c = 91 $ coefficients ($ 13\% $). Considering standard random sampling, the use of a conventional L2 solver requires $ 127 $ sampling points to achieve a GPCE approximation with an error of less than $ 10^{-3} $ (see first row of Table in Figure 3(f)). In contrast, by using the L1 based LARS-Lasso solver, the number of required sampling points reduces to $ 31 $, which serves as a baseline to compare the performance of the investigated sampling schemes. By using the ESE enhanced LHS sampling scheme, the number of sampling points could be reduced to $ 25 $, a substantial relative saving of $ 13 $% compared to standard random sampling. In the category of L1-optimal sampling, D-coherence optimal sampling schemes performed best; these schemes showed a slight increase in samples for the convergence.

    In addition to the average convergence of the sampling schemes, their reliability was calculated to evaluate their practical applicability. We quantified reliability by calculating the number of sampling points required to achieve success rates of $ 95\% $ and $ 99\% $ in reaching the target error of $ 10^{-3} $; the success rate is determined by the relative number of repetitions that reach the target error. Finally, we tested the hypothesis that the number of sampling points needed to reach the target error is significantly lower than for standard random sampling. The Shapiro-Wilk test indicated that the error threshold distributions are not normally distributed. For this reason, we used the one-tailed Mann-Whitney U-test to compute the corresponding p-values. The generally good performance of LHS (SC-ESE) sampling is underpinned by a p-value of $ 4.3\cdot10^{-5} $. D-Coherence-optimal grids show a similar success rate and outperform standard random sampling (as measured by the numbers of sampling points required to achieve success rates of 95% and 99%) by factors of 9 and 8 respectively, signifying higher stability compared to standard random sampling on the Ishigami function.

    Alongside the NRMSD, we calculated the mutual coherence of the GPCE matrix for each sampling scheme. It is shown in Figure 3(c) and (d). It can be seen that the mutual coherence is very stable around 0.7 for all LHS schemes considering a sampling size of 25. In contrast, L1-optimal grids show large variation. The coherence-optimal designs yield GPCE matrices with much higher coherences as defined in (3.14) compared to standard random sampling. D-Coherence-optimal sampling manages to reduce the mutual coherence the most after 25 samples; however, it shows very non-linear behavior for larger sampling sets, where it increases strongly above the level of random sampling.

    As a second test case, we used the $ d $-dimensional generalized Rosenbrock function, also referred to as the Valley or Banana function [71]. It is given by:

    $ y = \sum\limits_{i=1}^{d-1} 100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2 $    (4.3)

    The Rosenbrock function is a popular test problem for gradient-based optimization algorithms [72,73]. In Figure 4, the function is shown in its two-dimensional form. The function is unimodal, and the global minimum lies in a narrow, parabolic valley. However, even though this valley is easy to approximate, the function has more complex behavior close to the boundaries. To be consistent with our definitions of "low", "medium", and "high" dimensions and approximation orders in the current work, we classify this problem as medium-dimensional and as requiring a moderate approximation order to yield an accurate surrogate model. Accordingly, we defined the number of dimensions to be $ d = 6 $ and set the approximation order to $ p = 5 $ to ensure an approximation with an NRMSD of $ \varepsilon < 10^{-5} $ when using a high number of samples. This results in $ N_c = 181 $ basis functions. The generalized Rosenbrock function is also used by Alemazkoor and Meidani [49] to compare the performance of different L1-optimal sampling strategies. We used the same test function to make the results comparable and to better integrate our study into the previous literature.

    Figure 4.  Rosenbrock function in its two-dimensional form. In the present analysis, the Rosenbrock function of dimension $ d = 6 $ is investigated.

    The results of the error convergence for the investigated sampling schemes are shown in Figure 5(a) and (b). The mutual coherence of these algorithms is shown in Figure 5(c) and (d). The success rates of the best-performing sampling schemes from each category are shown in Figure 5(e), and the statistics are summarized in Table 5(f). The Rosenbrock function can be exactly replicated by the polynomial basis functions of the GPCE using $ k = 23 $ out of the $ N_c = 181 $ available coefficients ($ 13\% $). Random sampling in combination with L2-minimization requires $ 190 $ sampling points to reach the target error of $ 10^{-3} $. In contrast, only $ 76 $ samples are required when using the L1 based LARS-Lasso solver, which again serves as a baseline for comparison. The LHS (SC-ESE) algorithm is substantially less efficient than the other two LHS designs (STD and MM) in this test case. With LHS (MM) it is possible to achieve a reduction of sampling points by roughly 8% compared to standard random sampling. From all investigated sampling schemes, MC-CC optimal grids performed best and required 13% fewer sampling points than standard random grids. It is followed by Coherence Optimal sampling, which reduces the required samples by 4.5%.

    Figure 5.  (a) and (b) Convergence of the NRMSD $ \varepsilon $ with respect to the number of sampling points $ N $ considering the Rosenbrock function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) mutual coherence of the gPC matrix; (e) success rate of the best converging grids for error thresholds of $ 0.1\% $, $ 1\% $, and $ 10\% $; (f) average number of sampling points needed to reach the error threshold of $ 0.1\% $, a success rate of $ 95\% $, $ 99\% $, and the associated p-value comparing if the grids require significantly lower number of sampling points than standard random sampling.

    In terms of success rate, the sampling schemes differ considerably. Standard random sampling requires $ N_{sr}^{95\%} = 92.5 $ and $ N_{sr}^{99\%} = 112.5 $ sampling points to achieve success rates of $ 95\% $ and $ 99\% $ respectively. Standard LHS designs are more stable and require only $ N_{sr}^{95\%} = 73.8 $ and $ N_{sr}^{99\%} = 75.4 $ sampling points, respectively. MC-CC grids are able to surpass all other L1-optimal grids by achieving $ N_{sr}^{95\%} = 73 $ and $ N_{sr}^{99\%} = 75.4 $.

    The mutual coherence of the measurement matrices for each algorithm is shown in Figure 5(c) and (d). It shows the same behavior for the LHS sampling schemes as in the case of the Ishigami function. This time, the sampling size of interest is larger, at about $ 80 $ samples for the random sampling convergence. In this region, LHS (SC-ESE) shows the lowest mutual coherence at about $ 0.45 $. The L1-optimal sampling schemes are all able to reduce the mutual coherence below the level of random sampling. This time, MC-CC sampling emerges as the leading design in that regard, followed by MC and D-Coherence optimal designs.

    As a third test case, we used the $ d $-dimensional Linear Paired Product (LPP) function [49] assuming a linear combination between two consecutive dimensions:

    $ y = \sum\limits_{i=1}^{d-1} x_i x_{i+1} $    (4.4)

    It has $ d $ local minima except for the global one. It is continuous, convex and unimodal. In the present context, it is investigated having $ d = 30 $ dimensions with an approximation order of $ p = 2 $, resulting in $ N_c = 496 $ basis functions. This test case represents high-dimensional problems requiring a low approximation order. This test function is also used by Alemazkoor and Meidani (2018) [49] but considering $ d = 20 $ random variables.

    Figure 6.  Linear Paired Product (LPP) function in its two-dimensional form. In the present analysis, we investigated it with $ d = 30 $ dimensions.

    The error convergence for the different sampling schemes is shown in Figure 7(a) and (b). The mutual coherence is visualized in Figure 7(c) and (d), the success rates of the best-performing sampling schemes from each category are shown in Figure 7(e), and the statistics are summarized in the table in Figure 7(f). This test function can be exactly replicated by the polynomial basis functions of the GPCE using $ k = 29 $ out of $ N_c = 496 $ available coefficients ($ 6\% $). In this case, random sampling requires $ 496 $ sampling points using L2-minimization and $ 110 $ samples using L1-minimization. LHS designs showed similar convergence behavior to standard random sampling, with no improvement for LHS (MM) and LHS (STD) sampling and an increase in the required samples for LHS (SC-ESE). The L1-optimal designs likewise did not manage to improve on the sampling count, with MC-CC and Coherence-Optimal sampling showing the best convergence rates. However, only D-optimal and D-Coherence optimal sampling increased the required samples by more than $ 4\% $, indicating very little variability between the sampling schemes. It can be observed that the variance in the range between $ 110 $ and $ 120 $ sampling points is very high for all sampling methods. The reason for this is that the LPP function is very sparse, and the convergence is mainly determined by the L1 solver. An additional sample point can lead to an abrupt reduction of the approximation error and a "perfect" recovery. This is often observed with L1 minimization.

    Figure 7.  (a) and (b) Convergence of the NRMSD $ \varepsilon $ with respect to the number of sampling points $ N $ considering the Linear Paired Product (LPP) function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) mutual coherence of the gPC matrix; (e) success rate of the best converging grids for error thresholds of $ 0.1\% $, $ 1\% $, and $ 10\% $; (f) average number of sampling points needed to reach the error threshold of $ 0.1\% $, a success rate of $ 95\% $, $ 99\% $, and the associated p-value comparing if the grids require significantly lower number of sampling points than standard random sampling.

    To achieve the $ 95\% $ and $ 99\% $ success rates, standard random sampling requires $ N_{sr}^{(95\%)} = 117 $ and $ N_{sr}^{(99\%)} = 119.4 $ samples. The LHS (MM) algorithm performs slightly better and requires $ N_{sr}^{(95\%)} = 113.8 $ and $ N_{sr}^{(99\%)} = 115.4 $ samples. L1-optimal sampling schemes show tremendously weaker stability for this test-function. Here, only D-optimal designs reach the range of standard random sampling and the LHS variations, with $ N_{sr}^{(95\%)} = 128.1 $ and $ N_{sr}^{(99\%)} = 129.6 $.

    The mutual coherence of the measurement matrices for this test case are shown in Figure 7(c) and (d). The LHS (SC-ESE) shows the lowest coherence for the category of LHS grids, very similar to the previous example. L1 (MC) and L1 (MC-CC) designs display the lowest mutual coherence for L1-optimal designs, while the remaining L1-optimal sampling schemes are densely packed around the region slightly below random sampling.

    The last test case is an application example from electrical engineering. The aim is to estimate the sensitivity of the intrinsic impedance of a probe used to measure the impedance of biological tissues for different frequencies. The model is shown in Figure 8(a) and consists of a Randles circuit that was modified according to the coaxial geometry of the electrode. The lumped parameters model the different contributions of the physical phenomena. The resistance $ R_s $ models the contribution of the serial resistance of the electrolyte into which the electrode is dipped. The constant phase element $ Q_{dl} $ models the distributed double layer capacitance of the electrode. The resistance $ R_{ct} $ models the charge transfer resistance between the electrode and the electrolyte. The elements $ Q_d $ and $ R_d $ model the diffusion of charge carriers and other particles towards the electrode surface. The constant phase elements $ Q_{dl} $ and $ Q_d $ have impedances of $ 1/\left(Q_{dl}(j\omega)^{\alpha_{dl}}\right) $ and $ 1/\left(Q_{d}(j\omega)^{\alpha_{d}}\right) $, respectively. The electrode impedance, according to the Randles circuit shown in Figure 8(a), is given by:

    $ \bar{Z}(\omega) = R_s + \left( Q_{dl}(j\omega)^{\alpha_{dl}} + \frac{1}{R_{ct} + \frac{R_d}{1 + R_d Q_d (j\omega)^{\alpha_d}}} \right)^{-1} $    (4.5)
    Figure 8.  Electrode impedance model: (a) Randles circuit; (b) Real part and (c) imaginary part of the electrode impedance as a function of $ R_s $ and $ \alpha_{dl} $. The remaining parameters are set to their respective mean values.

    The impedance is complex-valued and depends on the angular frequency $ \omega = 2 \pi f $, which acts as an equivalent to the deterministic parameter $ {\bf{r}} $ from eq. (2.1). A separate GPCE is constructed for each frequency. In this analysis, the frequency is varied between $ 1 $ Hz and $ 1 $ GHz with $ 1000 $ logarithmically spaced points. The real part and the imaginary part are treated independently. The application example thus consists of $ 2000 $ QOIs, for each of which a separate GPCE is performed. The approximation error is estimated by averaging the NRMSD over all QOIs. Accordingly, the impedance of the equivalent circuit depends on seven parameters, which are treated as uncertain: $ (R_s, R_{ct}, R_d, Q_d, \alpha_d, Q_{dl}, \alpha_{dl}) $. They are modeled as uniformly distributed random variables with a deviation of $ \pm10 $% from their estimated mean values, with the exception of $ R_s $, which was defined between $ 0\ \Omega $ and $ 1 $ k$ \Omega $. The parameters were estimated by fitting the model to impedance measurements from a serial dilution experiment of KCl with different concentrations. The parameter limits are summarized in Table 2. In preliminary investigations, we successively increased the approximation order until we reached an accurate surrogate model with an NRMSD of $ \varepsilon < 10^{-5} $. It was found that the parameters in this test problem strongly interact with each other, which explains the rather high approximation order compared to the smooth progression of the real and imaginary parts in the cross sections shown in Figure 8. This means that (for example) when five parameters of first order interact with each other, the maximum GPCE order is reached, and this coefficient is significant compared to (for example) a fifth-order approximation of a single parameter.

    Table 2.  Estimated mean values of the electrode impedance model, determined from calibration experiments, and limits of parameters.
    Parameter Min. Mean Max.
    $ R_s $ $ 0 $ $ \Omega $ $ 0.5 $ k$ \Omega $ $ 1 $ k$ \Omega $
$ R_{ct} $ $ 9 $ k$ \Omega $ $ 10 $ k$ \Omega $ $ 11 $ k$ \Omega $
    $ R_d $ $ 108 $ k$ \Omega $ $ 120 $ k$ \Omega $ $ 132 $ k$ \Omega $
    $ Q_{d} $ $ 3.6 \cdot 10^{-10} $ F $ 4.0 \cdot 10^{-10} $ F $ 4.4 \cdot 10^{-10} $ F
    $ Q_{dl} $ $ 5.4 \cdot 10^{-7} $ F $ 6 \cdot 10^{-7} $ F $ 6.6 \cdot 10^{-7} $ F
    $ \alpha_d $ $ 0.855 $ $ 0.95 $ $ 1.0 $
    $ \alpha_{dl} $ $ 0.603 $ $ 0.67 $ $ 0.737 $

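For illustration, the following Python sketch evaluates eq. (4.5) over the frequency range used in this analysis with the mean parameter values from Table 2. It is a minimal re-implementation for clarity only; the function and variable names are ours, and the actual implementation used in the study may differ.

```python
import numpy as np

# Mean parameter values taken from Table 2
params = dict(R_s=0.5e3, R_ct=10e3, R_d=120e3,
              Q_d=4.0e-10, alpha_d=0.95,
              Q_dl=6.0e-7, alpha_dl=0.67)

def electrode_impedance(f, R_s, R_ct, R_d, Q_d, alpha_d, Q_dl, alpha_dl):
    """Complex electrode impedance of the modified Randles circuit, eq. (4.5)."""
    w = 2 * np.pi * f                                        # angular frequency
    Z_diff = R_d / (1 + R_d * Q_d * (1j * w) ** alpha_d)     # diffusion branch (R_d, Q_d)
    Y = Q_dl * (1j * w) ** alpha_dl + 1.0 / (R_ct + Z_diff)  # admittance of the parallel part
    return R_s + 1.0 / Y

# 1000 logarithmically spaced frequencies between 1 Hz and 1 GHz
f = np.logspace(0, 9, 1000)
Z = electrode_impedance(f, **params)

# Real and imaginary parts are treated as separate quantities of interest (2000 QOIs)
qois = np.concatenate([Z.real, Z.imag])
```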

The results of the error convergence are shown in Figure 9(a) and (b), and the corresponding mutual coherences are shown in Figure 9(c) and (d). The success rates of the best-performing sampling schemes from each category are shown in Figure 9(e), and the statistics are summarized in the table in Figure 9(f). The practical example consisting of the probe impedance model can be considered non-sparse: it requires $ k = 500 $ out of $ N_c = 596 $ coefficients ($ 84\% $) to reach an accuracy of $ < 10^{-5} $. Random sampling requires $ 269 $ samples to construct an accurate surrogate model using conventional L2 minimization. By using L1 minimization, the number of samples reduces to $ 82 $. The LHS (SC-ESE) sampling scheme shows very good performance, requiring only $ 70.2 $ samples on average to reach the target error, which corresponds to a decrease of $ 14 $%. Among the L1-optimal sampling schemes, only MC-CC sampling managed to improve the median by $ 2 $ samples in this test case, yet these schemes display a severe lack of stability, as discussed in the next part.

Figure 9.  (a) and (b) Convergence of the NRMSD $ \varepsilon $ with respect to the number of sampling points $ N $ considering the probe impedance model. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) mutual coherence of the gPC matrix; (e) success rate of the best converging grids for error thresholds of $ 0.1\% $, $ 1\% $, and $ 10\% $; (f) average number of sampling points needed to reach the error threshold of $ 0.1\% $, a success rate of $ 95\% $, $ 99\% $, and the associated p-value testing whether the grids require a significantly lower number of sampling points than standard random sampling.

Random grids require $ N_{sr}^{(95\%)} = 90.7 $ and $ N_{sr}^{(99\%)} = 93.8 $ samples to reach the desired success rates. Besides their good average convergence, LHS (SC-ESE) grids show significantly better stability, requiring only $ N_{sr}^{(95\%)} = 71.9 $ and $ N_{sr}^{(99\%)} = 84.4 $ samples, which corresponds to decreases of about $ 21 $% and $ 10 $%, respectively. A general lack of robust recovery is found for the pure L1-optimal sampling schemes. The sampling sizes they require to reach the success-rate targets exceed those of random sampling, much like their medians, rendering them inefficient on this test case.
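As a side note, the success-rate measures $ N_{sr} $ can, in principle, be evaluated from the repeated convergence runs as sketched below. The data layout and names are our assumption, and the interpolation between tested grid sizes that yields fractional values such as $ 90.7 $ is omitted here for brevity.

```python
import numpy as np

def n_success_rate(errors, n_samples, threshold=1e-3, rate=0.95):
    """Smallest tested sampling size at which at least `rate` of the repetitions
    reach an NRMSD below `threshold`.

    errors    : array of shape (n_repetitions, len(n_samples)) with NRMSD values
    n_samples : 1-D array of tested sampling sizes in ascending order
    """
    success = np.mean(errors < threshold, axis=0)  # fraction of successful runs per size
    idx = np.argmax(success >= rate)               # first size meeting the target rate
    return np.nan if success[idx] < rate else n_samples[idx]

# usage with hypothetical data: N95 = n_success_rate(errors, n_samples, 1e-3, 0.95)
```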

Figure 9(c) and (d) show the mutual coherences of the electrode impedance model. LHS (SC-ESE) still has the lowest mutual coherence in its category and may even show a lower mutual coherence than any L1-optimal design in single test runs. As seen previously, the lowest coherence among the L1-optimal schemes can be observed for MC-CC and MC sampling. Both of them form the bottom line of L1-optimal sampling in the region of $ 80 $ samples; MC sampling rises above D and D-coherence optimal sampling for larger sampling sizes.

The examined sampling schemes showed different strengths and weaknesses depending on the test problem. In order to make general statements regarding their performance, we normalized the error crossing $ \hat{N}_{\varepsilon} $ and the success-rate measures $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ by the corresponding values of random sampling and averaged the results over all investigated test problems. The results are shown in Table 3. It can be observed that LHS (SC-ESE) grids outperform random sampling in terms of average convergence and success rate for two of the test functions while maintaining a very competitive $ N_{sr}^{(99\%)} $ on all test functions. They share this quality with the other LHS grids; however, the SC-ESE variant scores the largest sample reduction of $ 14.6 $% regarding the $ \hat{N}_{\varepsilon} $ measure and about $ 35 $% on the two success-rate measures, closely trailed by the other two LHS sampling schemes. For the two higher-dimensional examples, however, LHS (STD) and LHS (MM) clearly show the most stable recovery success, as seen in the $ N_{sr}^{(99\%)} $ values, with a sample reduction of $ 15 $% on average. L1-optimal sampling schemes managed to achieve a significant sample reduction for the Rosenbrock function; specifically, MC-CC sampling is unrivalled regarding the decrease in $ \hat{N}_{\varepsilon} $. However, its success-rate measures are paralleled by LHS sampling. In terms of stability, only D-coherence optimal sampling shows some robustness, as its largest increase in samples for the $ N_{sr}^{(99\%)} $ is $ 52 $% for the electrode model. Surprisingly, the remaining L1-optimal grids, namely D-optimal, MC, MC-CC and coherence-optimal sampling, all require increased sampling sizes for the $ N_{sr}^{(99\%)} $ for the LPP function and the electrode model. D-optimal sampling generally shows a slower convergence compared to the other L1-optimal schemes; however, it still proved to be comparable to them for the success rate of $ 99\% $. For the test functions that were investigated in the context of a high-order GPCE approximation, no sampling scheme except MC-CC showed a reduction of samples, and all of them showed large increases in the number of samples needed to reach the success-rate targets.
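The relative measures in Table 3 (and in the corresponding appendix tables) are obtained by dividing each scheme's sample count by that of random sampling for every test problem and averaging the resulting ratios. A minimal numpy sketch of this normalization, using placeholder numbers rather than the study's data:

```python
import numpy as np

# rows = sampling schemes (row 0 = random), columns = test problems;
# entries = absolute sample counts to reach the target NRMSD (placeholder values)
N_hat = np.array([[20.0, 50.0, 75.0, 40.0],
                  [15.0, 51.0, 70.0, 39.0]])

rel = N_hat / N_hat[0]      # normalize each test problem by random sampling
avg = rel.mean(axis=1)      # average over all test problems ("Average" columns)
```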

    Table 3.  Relative and average number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach an NRMSD of $ 10^{-3} $ with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach success rates of $ 95\% $ and $ 99\% $, respectively.
Ishigami Rosenbrock LPP Electrode Average (all test functions)
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $
    Random (L1) 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
    LHS (SC-ESE) 0.869 0.615 0.674 1.246 1.068 0.887 1.018 1 1 0.856 0.898 0.900 0.997 0.895 0.865
    LHS (MM) 0.872 0.660 0.712 0.919 0.827 0.691 1 0.972 0.966 1.027 1.019 0.999 0.9545 0.870 0.842
    LHS (STD) 0.872 0.649 0.706 0.921 0.798 0.670 1 0.987 0.983 1.006 1.030 1.017 0.950 0.866 0.844
    L1 (MC) 1.131 0.887 0.956 0.961 1.049 1.378 1.037 1.282 - 1.039 2.226 - 1.042 1.361 1.167
    L1 (MC-CC) 1.257 0.969 1.000 0.868 0.789 0.670 1.028 1.235 1.449 0.972 1.223 2.187 1.031 1.054 1.327
    L1 (D) 1.503 1.026 1.048 1.038 0.953 0.797 1.091 1.079 1.082 1.429 1.427 1.433 1.265 1.121 1.089
    L1 (D-COH) 1.076 0.820 0.847 0.999 0.941 0.797 1.082 1.154 1.164 1.414 1.526 1.518 1.143 1.110 1.082
    L1 (CO) 1.882 2.010 2.183 0.955 0.892 0.751 1.009 1.132 1.160 1.05 1.289 1.351 1.224 1.331 1.361


We thoroughly investigated the convergence properties of space-filling sampling schemes, L1-optimal sampling schemes minimizing the mutual coherence of the GPCE matrix, hybrid versions of both, and optimal designs of experiment by considering different classes of problems. We compared their performance against standard random sampling and found substantial differences between the sampling schemes. To the best of our knowledge, this is currently the most comprehensive study comparing sampling schemes for GPCE.

A great reduction in the number of sampling points was consistently observed for all test cases when using L1 minimization compared to least-squares approximations. The reason for this is that the GPCE basis, in its traditional form of construction, is almost always over-complete, and oftentimes not all basis functions and parameter interactions are required or appropriate for modeling the underlying transfer function. For this reason, the use of L1 minimization algorithms in the context of GPCE is strongly recommended, as long as the approximation error is verified by an independent test set or by leave-one-out cross-validation.
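The following toy example illustrates this point in one dimension: a sparse coefficient vector in an over-complete Legendre basis is recovered from fewer samples than basis functions, and the error is verified on an independent test set as recommended above. This is a deliberately simplified sketch using scikit-learn's LassoLars rather than the solver configuration of the study, and the NRMSD normalization shown is our own choice; the L1 solution typically recovers the coefficients far more accurately than the minimum-norm least-squares solution.

```python
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(1)

# Over-complete 1-D Legendre basis (31 basis functions) with a sparse "true" coefficient vector
N_c, k = 31, 5
c_true = np.zeros(N_c)
c_true[rng.choice(N_c, size=k, replace=False)] = rng.normal(size=k)

def basis(x):
    return np.polynomial.legendre.legvander(x, N_c - 1)

# N < N_c sampling points drawn uniformly in [-1, 1]
x_train = rng.uniform(-1, 1, 20)
Psi = basis(x_train)
y_train = Psi @ c_true

c_l2, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)                       # least-squares solution
c_l1 = LassoLars(alpha=1e-6, fit_intercept=False).fit(Psi, y_train).coef_  # sparse L1 solution

# Verify on an independent test set
x_test = rng.uniform(-1, 1, 1000)
y_test = basis(x_test) @ c_true
for name, c in [("L2", c_l2), ("L1", c_l1)]:
    err = np.sqrt(np.mean((basis(x_test) @ c - y_test) ** 2)) / np.ptp(y_test)
    print(name, err)
```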

The first three test cases can be considered "sparse" in the framework of GPCE. It is noted that their ratios of non-zero coefficients are still high in comparison to typical values in signal processing and do not fully meet the definition of sparse signals, where $ ||{\bf{c}}||_0 \ll N_c $. The sparse character is reflected in the shape of the convergence curves, which show a steep drop in the approximation error after a certain number of sampling points is reached. Non-sparse solutions (as in the case of the probe impedance model) show a gradual exponential decrease of the approximation error.

    The Ishigami function represents a class of problems where the QOI exhibits comparatively complex behavior within the parameter space. LHS designs, and especially LHS (SC-ESE), outperform all other investigated sampling schemes by taking advantage of their regular and space-filling properties, which ensures that model characteristics are covered over the whole sampling space. Similar benefits were observed for the probe impedance model.
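For readers unfamiliar with LHS, a standard Latin hypercube design (not the SC-ESE variant used here) can be generated with scipy and compared with plain random sampling in terms of a simple space-filling metric, the minimum pairwise distance. The sketch below is illustrative only.

```python
import numpy as np
from scipy.stats import qmc

d, n = 3, 64
lhs = qmc.LatinHypercube(d=d, seed=0).random(n)   # standard LHS design in [0, 1)^d
rnd = np.random.default_rng(0).random((n, d))     # plain random sampling

def min_pairwise_distance(x):
    diff = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist[np.triu_indices(len(x), k=1)].min()

print("LHS   :", min_pairwise_distance(lhs))      # typically larger, i.e., more space-filling
print("random:", min_pairwise_distance(rnd))
```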

D-optimal and D-coherence optimal designs were investigated in [55] by comparing their performance to standard random sampling using L2 minimization. In this context, the number of chosen sampling points had to be considerably larger than the number of basis functions, i.e., $ N_g \gg N_c $. In the present analysis, we loosened this constraint and decreased the number of sampling points below the number of basis functions ($ N_g < N_c $). We observed improved performance for the first three test cases (Ishigami function, Rosenbrock function, linear paired product function), where the relative number of non-zero coefficients is between $ 6 $% and $ 13 $%. In the case of the non-sparse probe impedance model, with a ratio of non-zero coefficients of $ 84\% $, D-optimal and D-coherence optimal designs were less efficient than standard random sampling.

In our analysis, we did not find a relationship between mutual coherence and error convergence. A good example is the excellent convergence of the LHS algorithms despite their comparatively high mutual coherence in the case of the Ishigami function (Figure 3), or the comparatively late convergence of mutual-coherence-optimized sampling schemes in the case of the Rosenbrock function (Figure 5). This is in accordance with the observations reported by Alemazkoor et al. (2018) [49]. Likewise, it has been observed that minimizing the maximum cross-correlation does not necessarily improve the recovery accuracy of compressive sampling. It can be concluded that both the properties of the transfer function and the sparsity of the model function greatly influence the reconstruction properties and hence the effectiveness of the sampling schemes.
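For reference, the mutual coherence reported in the figures is the largest absolute normalized inner product between two distinct columns of the measurement matrix. A minimal implementation (the function name is ours):

```python
import numpy as np

def mutual_coherence(Psi):
    """mu(Psi) = max_{i != j} |psi_i . psi_j| / (||psi_i|| ||psi_j||)."""
    G = Psi / np.linalg.norm(Psi, axis=0)   # normalize columns
    C = np.abs(G.T @ G)                     # absolute normalized Gram matrix
    np.fill_diagonal(C, 0.0)                # exclude self-correlations
    return C.max()

# e.g., mu = mutual_coherence(Psi) for a GPCE matrix Psi of shape (N_g, N_c)
```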


All numerical examples used uniformly distributed random inputs and Legendre polynomials, while in many real-world applications, different distributions may have to be assigned to each random variable. This requires the use of different polynomial basis functions, which changes the properties of the GPCE matrix and can have a major influence on the performance of L1-optimal sampling schemes. In contrast, we expect fewer differences for LHS-based grids because they depend only on the shape of the input pdfs and not additionally on the basis functions.
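As an illustration of this dependence, the Wiener-Askey scheme pairs uniform inputs with Legendre polynomials, whereas normally distributed inputs would call for (probabilists') Hermite polynomials, which changes the columns of the GPCE matrix. A short sketch of evaluating both one-dimensional bases with numpy (illustrative only; not the basis construction used in the study):

```python
import numpy as np
from numpy.polynomial import legendre, hermite_e

order = 4
x_uniform = np.linspace(-1, 1, 5)    # uniform input  -> Legendre basis on [-1, 1]
x_normal = np.linspace(-3, 3, 5)     # Gaussian input -> probabilists' Hermite basis

Psi_legendre = legendre.legvander(x_uniform, order)   # shape (5, order + 1)
Psi_hermite = hermite_e.hermevander(x_normal, order)  # shape (5, order + 1)
```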

A major interest in the field of compressed sensing lies in identifying a lower bound on the sampling size needed for an accurate reconstruction. This bound depends on two factors: the properties of the measurement matrix and the sparsity of the solution. Formulating general statements for GPCE matrices, which are constructed dynamically and depend on the number of random variables and the type of input pdfs, is only possible with many preliminary considerations about their structure and type. Moreover, it requires additional work on the topic of sparsity estimation. Both topics are very recent and subject to current and future research.

The sampling methods analyzed here were based on the premise that the entire grid is created prior to the calculations. The importance of covering the relevant features of the quantity of interest in the sampling space suggests the development of adaptive sampling methods. An iterative construction of the set of sampling points would benefit from the information of already calculated function values. For this purpose, for example, the gradients at the sampling points could be used to refine regions with high spatial frequencies. We believe that such methods would have great potential to further reduce the number of sampling points.

In summary, the convergence rate as well as the reliability could be increased considerably when using LHS or D-coherence optimal sampling schemes. Even though LHS (SC-ESE) was enhanced in this paper to perform better in corner regions, it still performs less than optimally in cases where function features are close to the borders of the sampling region. In further investigations, it may become crucial to address the systematic bias in the optimization criterion used for the ESE algorithm, e.g., by using the periodic distance shown in section 3.5, to fully remedy this shortcoming.

    The advantages of the more advanced sampling methods over standard random sampling were even more pronounced when considering the first two statistical moments, i.e., the mean and standard deviation (see supplemental material).

We minimized the maximum cross-correlation of the measurement matrix $ {\bf{\Psi}} $, but this yielded few benefits in reducing the number of sampling points needed to determine a GPCE approximation [39,74]. We could not observe that L1-optimal sampling schemes are superior to their competitors. It has also been repeatedly shown that $ \mu $ may only be able to optimize the recovery in a worst-case scenario and therefore acts as a lower bound on the expected error [39]. In this sense, we also could not observe a direct relationship between mutual coherence and error convergence. From our results, we conclude that the sampling points should be chosen to capture all properties of the model function under investigation rather than to optimize the properties of the GPCE matrix in order to yield an accurate surrogate model more efficiently. This is in contrast to the results reported by Alemazkoor et al. (2018) [49] in the case of the LPP function. We could reproduce their results for mutual-coherence-optimal sampling; however, random sampling performed much better in our case than they reported. They investigated the LPP function considering $ 20 $ dimensions, whereas we considered $ 30 $ random variables; nevertheless, we could not reproduce their results for random sampling in this test case either. Based on their results, they argued for an application of L1-optimized sampling schemes in the case of high-dimensional sparse functions. A possible reason for the discrepancy could be the use of a different L1 solver, which could change the effectiveness of certain algorithms and should therefore be taken into consideration when comparing results.

This work was supported by the German Science Foundation (DFG) (WE 59851/2) and by the NVIDIA Corporation (donation of one Titan Xp graphics card to KW). We acknowledge support for the publication costs by the Open Access Publication Fund of the Technische Universität Ilmenau.

    The authors declare there is no conflict of interest.

In the following, the convergence of the NRMSD and the success rates are examined considering an error threshold of $ 10\% $.

Table 4.  Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of $10\%$ considering the Ishigami function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$ p-value
    Random 20.5$\pm$7.1 40.5 41.7 1
    LHS (SC-ESE) 14.9$\pm$1.8 18.2 18.9 $1.3\cdot 10^{-08}$
    LHS (MM) 16.8$\pm$3.0 20.8 23.1 $2.6\cdot 10^{-05}$
    LHS (STD) 17.6$\pm$3.3 24.0 25.7 $2.9\cdot 10^{-03}$
    L1 (MC) 21.2$\pm$4.7 30.0 33.4 0.72
    L1 (MC-CC) 21.9$\pm$4.7 30.5 33.8 0.82
    L1 (D) 25.3$\pm$4.0 34.3 34.9 1.0
    L1 (D-COH) 20.7$\pm$2.5 23.2 23.8 0.21
    L1 (CO) 41.4$\pm$9.9 46.5 65.2 1

    Table 5.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a NRMSD of $ 10 \% $ considering the Rosenbrock function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 51.3$ \pm $8.9 62.0 64.7 1.0
    LHS (SC-ESE) 50.7$ \pm $5.3 58.5 60.4 0.36
    LHS (MM) 46.1$ \pm $4.7 54.5 58.8 $ 1.2\cdot 10^{-02} $
    LHS (STD) 47.3$ \pm $4.3 52.8 55.1 $ 2.9\cdot 10^{-02} $
    L1 (MC) 49.7$ \pm $11.0 65.3 67.7 0.31
    L1 (MC-CC) 41.5$ \pm $6.4 50.8 51.7 $ 1.4\cdot 10^{-04} $
    L1 (D) 41.0$ \pm $5.4 50.8 51.7 $ 6.6\cdot 10^{-05} $
    L1 (D-COH) 37.6$ \pm $5.1 44.0 49.9 $ 9.3\cdot 10^{-07} $
    L1 (CO) 45.9$ \pm $7.7 56.5 61.2 $ 2.1\cdot 10^{-02} $

    Table 6.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a NRMSD of $ 10\% $ considering the LPP function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 75.4$ \pm $8.8 82.5 87.9 1.0
    LHS (SC-ESE) 70.0$ \pm $10.4 88.5 89.7 0.49
    LHS (MM) 72.3$ \pm $11.5 86.0 90.8 0.17
    LHS (STD) 73.8$ \pm $9.6 86.3 89.3 0.30
    L1 (MC) 74.4$ \pm $12.3 90.2 96.6 0.62
    L1 (MC-CC) 71.5$ \pm $12.2 90.5 100.4 0.21
    L1 (D) 67.4$ \pm $10.1 84.8 89.2 $ 3.4\cdot 10^{-02} $
    L1 (D-COH) 67.3$ \pm $12.0 90.5 95.1 $ 5.6\cdot 10^{-02} $
    L1 (CO) 73.7$ \pm $11.9 90.5 92.4 0.52

Table 7.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a NRMSD of $ 10\% $ considering the Electrode Model. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 19.4$ \pm $1.2 21.0 24.1 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 20.0$ \pm $1.7 23.8 24.7 $ 1.0\cdot 10^{0} $
    LHS (MM) 18.8$ \pm $0.3 19.0 19.7 $ 4.3\cdot 10^{-04} $
    LHS (STD) 19.0$ \pm $0.3 19.5 19.9 $ 8.1\cdot 10^{-03} $
    L1 (MC) 19.8$ \pm $4.6 31.5 33.4 $ 9.9\cdot 10^{-01} $
    L1 (MC-CC) 19.5$ \pm $7.5 39.5 49.1 $ 8.8\cdot 10^{-01} $
    L1 (D) 19.7$ \pm $3.9 29.8 31.4 $ 1.0\cdot 10^{0} $
    L1 (D-COH) 19.8$ \pm $4.1 31.0 31.8 $ 1.0\cdot 10^{0} $
    L1 (CO) 19.5$ \pm $4.3 31.5 33.4 $ 8.7\cdot 10^{-01} $

    Table 8.  Relative and average number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of $10\%$ with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively.
Ishigami Rosenbrock LPP Electrode Average (all test functions)
    Grid $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$ $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$ $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$ $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$ $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$
    Random (L1) 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
    LHS (SC-ESE) 0.728 0.451 0.452 0.987 0.944 0.934 0.928 1.073 1.020 1.029 1.135 1.025 0.932 0.901 0.858
    LHS (MM) 0.818 0.514 0.554 0.898 0.879 0.909 0.959 1.042 1.033 0.971 0.904 0.817 0.912 0.835 0.828
    LHS (STD) 0.859 0.593 0.616 0.921 0.851 0.851 0.979 1.045 1.015 0.981 0.929 0.826 0.935 0.855 0.844
    L1 (MC) 1.030 0.741 0.801 0.968 1.05 1.046 0.987 1.094 1.099 1.018 1.500 1.386 1.001 1.096 1.083
    L1 (MC-CC) 1.065 0.753 0.811 0.809 0.819 0.799 0.949 1.097 1.142 1.006 1.881 2.037 0.957 1.138 1.215
    L1 (D) 1.232 0.846 0.836 0.798 0.819 0.799 0.894 1.027 1.015 1.016 1.417 1.303 0.985 1.027 0.988
    L1 (D-COH) 1.007 0.574 0.572 0.733 0.710 0.771 0.893 1.097 1.082 1.022 1.476 1.320 0.914 0.964 0.921
    L1 (CO) 2.015 1.148 1.564 0.894 0.911 0.946 0.978 1.097 1.051 1.005 1.500 1.386 1.223 1.164 1.237


In the following, the convergence of the NRMSD and the success rates are examined considering an error threshold of $ 1\% $.

Table 9.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a NRMSD of $ 1\% $ considering the Ishigami function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 24.7$ \pm $8.3 44.0 48.0 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 23.1$ \pm $2.8 24.8 27.8 $ 7.1\cdot 10^{-06} $
    LHS (MM) 24.1$ \pm $4.0 28.6 29.7 $ 2.9\cdot 10^{-03} $
    LHS (STD) 24.5$ \pm $3.0 29.3 29.9 $ 1.3\cdot 10^{-02} $
    L1 (MC) 27.5$ \pm $5.9 38.5 41.8 $ 8.9\cdot 10^{-01} $
    L1 (MC-CC) 29.7$ \pm $6.2 39.5 44.2 $ 1.0\cdot 10^{0} $
    L1 (D) 39.3$ \pm $5.6 43.5 45.7 $ 1.0\cdot 10^{0} $
    L1 (D-COH) 24.7$ \pm $4.4 35.0 37.4 $ 6.5\cdot 10^{-01} $
    L1 (CO) 49.3$ \pm $12.2 73.0 86.1 $ 1.0\cdot 10^{0} $

Table 10.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a NRMSD of $ 1\% $ considering the Rosenbrock function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 72.2$ \pm $10.7 90.0 108.1 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 91.4$ \pm $4.9 97.6 98.7 $ 1.0\cdot 10^{0} $
    LHS (MM) 65.8$ \pm $4.5 75.5 76.7 $ 2.1\cdot 10^{-04} $
    LHS (STD) 69.3$ \pm $3.8 72.8 74.4 $ 4.7\cdot 10^{-03} $
    L1 (MC) 71.7$ \pm $18.3 93.0 143.7 $ 4.7\cdot 10^{-01} $
    L1 (MC-CC) 65.4$ \pm $3.9 69.2 69.8 $ 2.8\cdot 10^{-07} $
    L1 (D) 61.3$ \pm $6.2 68.9 75.3 $ 1.7\cdot 10^{-06} $
    L1 (D-COH) 74.9$ \pm $5.9 82.5 85.8 $ 6.5\cdot 10^{-01} $
    L1 (CO) 69.6$ \pm $6.6 81.5 83.4 $ 7.3\cdot 10^{-02} $

Table 11.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a NRMSD of $ 1\% $ considering the LPP function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 109.4$ \pm $4.8 117.0 119.4 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 111.7$ \pm $5.1 117.0 119.4 $ 9.5\cdot 10^{-01} $
    LHS (MM) 109.4$ \pm $3.9 113.8 115.4 $ 3.8\cdot 10^{-01} $
    LHS (STD) 109.5$ \pm $4.2 115.5 117.4 $ 8.0\cdot 10^{-01} $
    L1 (MC) 113.7$ \pm $12.0 147.5 - $ 9.9\cdot 10^{-01} $
    L1 (MC-CC) 112.7$ \pm $15.8 141.0 167.0 $ 9.3\cdot 10^{-01} $
    L1 (D) 119.1$ \pm $6.7 125.2 128.2 $ 1.0\cdot 10^{0} $
    L1 (D-COH) 118.7$ \pm $10.0 133.5 137.1 $ 1.0\cdot 10^{0} $
    L1 (CO) 110.5$ \pm $11.1 133.5 138.7 $ 7.6\cdot 10^{-01} $

Table 12.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a NRMSD of $ 1\% $ considering the Electrode Model. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 42.0$ \pm $4.0 48.8 50.4 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 39.4$ \pm $1.5 42.7 43.7 $ 3.2\cdot 10^{-03} $
    LHS (MM) 39.3$ \pm $3.4 46.5 46.9 $ 3.6\cdot 10^{-02} $
    LHS (STD) 39.4$ \pm $3.1 46.2 46.8 $ 3.4\cdot 10^{-02} $
    L1 (MC) 43.8$ \pm $7.2 60.0 66.1 $ 8.5\cdot 10^{-01} $
    L1 (MC-CC) 39.6$ \pm $26.0 72.0 149.0 $ 2.8\cdot 10^{-01} $
    L1 (D) 44.5$ \pm $8.1 61.0 66.2 $ 9.9\cdot 10^{-01} $
    L1 (D-COH) 48.6$ \pm $17.4 90.0 91.7 $ 1.0\cdot 10^{0} $
    L1 (CO) 44.6$ \pm $7.8 60.0 69.7 $ 8.6\cdot 10^{-01} $

    Table 13.  Relative and average number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of $1\%$ with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively.
Ishigami Rosenbrock LPP Electrode Average (all test functions)
    Grid $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$ $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$ $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$ $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$ $\hat{N}_{\varepsilon}$ $N_{sr}^{(95\%)}$ $N_{sr}^{(99\%)}$
    Random (L1) 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
    LHS (SC-ESE) 0.934 0.564 0.579 1.266 1.084 0.913 1.021 1 1 0.938 0.875 0.867 1.040 0.881 0.840
    LHS (MM) 0.976 0.650 0.619 0.911 0.839 0.710 1 0.972 0.966 0.935 0.954 0.931 0.956 0.854 0.806
    LHS (STD) 0.992 0.665 0.622 0.961 0.809 0.688 1.001 0.987 0.983 0.938 0.949 0.930 0.973 0.866 0.844
    L1 (MC) 1.114 0.875 0.871 0.993 1.033 1.329 1.039 1.261 - 1.043 1.231 1.312 1.032 1.100 1.171
    L1 (MC-CC) 1.201 0.898 0.921 0.906 0.769 0.646 1.030 1.205 1.399 0.942 1.477 2.956 1.020 1.087 1.481
    L1 (D) 1.590 0.989 0.952 0.849 0.766 0.697 1.088 1.071 1.074 1.059 1.251 1.313 1.147 1.019 1.009
    L1 (D-COH) 1.002 0.795 0.779 1.038 0.917 0.794 1.085 1.141 1.148 1.156 1.846 1.819 1.070 1.175 1.127
    L1 (CO) 1.998 1.659 1.794 0.964 0.906 0.772 1.010 1.141 1.162 1.060 1.231 1.383 1.258 1.234 1.278

     | Show Table
    DownLoad: CSV

Besides the NRMSD, which quantifies the overall agreement of the GPCE surrogates with the original model function, we also quantified the convergence of the first two statistical moments, i.e., the mean and the standard deviation. Reference values were obtained for each test function by calculating both the mean and the standard deviation from $N = 10^7$ model evaluations.
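A minimal sketch of such a reference computation for the Ishigami function, assuming the commonly used constants $a = 7$ and $b = 0.1$ and inputs uniform on $[-\pi, \pi]^3$ (the exact constants used in the study are defined in its earlier sections and may differ):

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # common parameterization; the study's constants may differ
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(10**7, 3))   # N = 1e7 model evaluations
y = ishigami(x)
mean_ref, std_ref = y.mean(), y.std(ddof=1)       # reference mean and standard deviation
```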

    Figure 10.  (a) and (b) Convergence of the relative error of the mean with respect to the number of sampling points $N$ considering the Ishigami function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) convergence of the relative error of the standard deviation; (e) and (f) success rates of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$ for the mean and the standard deviation, respectively.
    Table 14.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 0.1\% $ considering the Ishigami function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 28.3$ \pm $10.4 54.0 59.4 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 24.8$ \pm $4.0 29.4 29.9 $ 5.9\cdot 10^{-04} $
    LHS (MM) 24.4$ \pm $3.5 29.5 29.9 $ 4.9\cdot 10^{-05} $
    LHS (STD) 24.8$ \pm $4.1 29.9 32.8 $ 7.6\cdot 10^{-04} $
    L1 (MC) 38.7$ \pm $9.3 52.0 64.2 $ 1.0\cdot 10^{0} $
    L1 (MC-CC) 37.3$ \pm $7.5 48.0 49.7 $ 1.0\cdot 10^{0} $
    L1 (D) 28.2$ \pm $6.6 39.8 40.7 $ 6.2\cdot 10^{-01} $
    L1 (D-COH) 25.8$ \pm $5.3 39.2 39.9 $ 3.5\cdot 10^{-01} $
    L1 (CO) 67.9$ \pm $38.6 308.8 348.2 $ 1.0\cdot 10^{0} $

    Table 15.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 1\% $ considering the Ishigami function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 24.4$ \pm $7.5 39.5 45.6 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 18.2$ \pm $5.2 26.5 28.7 $ 7.0\cdot 10^{-05} $
    LHS (MM) 19.0$ \pm $2.7 23.3 23.9 $ 1.4\cdot 10^{-05} $
    LHS (STD) 19.3$ \pm $5.1 26.5 28.5 $ 2.2\cdot 10^{-04} $
    L1 (MC) 28.5$ \pm $7.0 38.9 39.7 $ 9.9\cdot 10^{-01} $
    L1 (MC-CC) 29.8$ \pm $6.6 39.8 44.9 $ 1.0\cdot 10^{0} $
    L1 (D) 24.6$ \pm $4.0 31.0 36.5 $ 6.0\cdot 10^{-01} $
    L1 (D-COH) 24.6$ \pm $4.0 32.0 37.1 $ 6.9\cdot 10^{-01} $
    L1 (CO) 49.4$ \pm $15.0 68.0 88.6 $ 1.0\cdot 10^{0} $

    Table 16.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 10\% $ considering the Ishigami function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 14.4$ \pm $5.3 22.3 25.8 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 9.7$ \pm $2.4 13.6 13.9 $ 5.0\cdot 10^{-04} $
    LHS (MM) 10.7$ \pm $3.2 14.8 15.7 $ 6.6\cdot 10^{-03} $
    LHS (STD) 12.6$ \pm $2.3 14.9 17.1 $ 2.3\cdot 10^{-02} $
    L1 (MC) 21.5$ \pm $7.2 32.5 34.4 $ 1.0\cdot 10^{0} $
    L1 (MC-CC) 21.6$ \pm $10.5 33.0 35.4 $ 9.0\cdot 10^{-01} $
    L1 (D) 18.6$ \pm $4.0 22.2 22.9 $ 9.8\cdot 10^{-01} $
    L1 (D-COH) 19.5$ \pm $5.1 22.8 23.7 $ 1.0\cdot 10^{0} $
    L1 (CO) 36.8$ \pm $13.6 47.5 73.9 $ 1.0\cdot 10^{0} $

    Table 17.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 0.1\% $ considering the Ishigami function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 29.9$ \pm $11.1 54.5 61.8 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 29.1$ \pm $3.8 35.5 36.7 $ 2.2\cdot 10^{-02} $
    LHS (MM) 28.1$ \pm $2.8 29.9 33.5 $ 9.8\cdot 10^{-04} $
    LHS (STD) 28.3$ \pm $4.0 35.2 37.7 $ 1.4\cdot 10^{-02} $
    L1 (MC) 48.2$ \pm $22.2 162.5 198.1 $ 1.0\cdot 10^{0} $
    L1 (MC-CC) 37.3$ \pm $7.5 48.0 49.7 $ 9.7\cdot 10^{-01} $
    L1 (D) 39.6$ \pm $12.2 59.5 65.1 $ 9.9\cdot 10^{-01} $
    L1 (D-COH) 39.5$ \pm $9.4 48.5 67.2 $ 1.0\cdot 10^{0} $
    L1 (CO) 86.0$ \pm $46.0 292.2 297.6 $ 1.0\cdot 10^{0} $

    Table 18.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 1\% $ considering the Ishigami function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 24.9$ \pm $11.0 49.0 55.9 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 24.6$ \pm $4.3 29.5 29.9 $ 5.3\cdot 10^{-02} $
    LHS (MM) 23.8$ \pm $6.2 27.0 28.6 $ 4.8\cdot 10^{-04} $
    LHS (STD) 24.1$ \pm $6.8 28.9 29.7 $ 1.4\cdot 10^{-02} $
    L1 (MC) 29.3$ \pm $11.5 39.6 39.9 $ 6.5\cdot 10^{-01} $
    L1 (MC-CC) 29.8$ \pm $6.6 39.8 44.9 $ 9.8\cdot 10^{-01} $
    L1 (D) 24.7$ \pm $7.8 37.5 38.7 $ 7.1\cdot 10^{-02} $
    L1 (D-COH) 24.8$ \pm $7.0 37.5 38.7 $ 3.1\cdot 10^{-01} $
    L1 (CO) 49.6$ \pm $17.8 91.2 94.2 $ 1.0\cdot 10^{0} $

    Table 19.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 10\% $ considering the Ishigami function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 15.3$ \pm $7.6 25.5 27.4 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 9.5$ \pm $7.2 22.6 26.5 $ 2.0\cdot 10^{-01} $
    LHS (MM) 12.8$ \pm $5.8 18.5 22.5 $ 8.1\cdot 10^{-02} $
    LHS (STD) 11.3$ \pm $6.9 22.5 23.7 $ 1.3\cdot 10^{-01} $
    L1 (MC) 9.5$ \pm $11.3 37.2 37.8 $ 5.8\cdot 10^{-01} $
    L1 (MC-CC) 21.6$ \pm $10.5 33.0 35.4 $ 9.6\cdot 10^{-01} $
    L1 (D) 15.8$ \pm $7.6 23.5 31.0 $ 6.1\cdot 10^{-01} $
    L1 (D-COH) 15.3$ \pm $7.1 25.5 29.1 $ 7.9\cdot 10^{-01} $
    L1 (CO) 36.2$ \pm $18.4 51.5 70.4 $ 1.0\cdot 10^{0} $

    Table 20.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 0.1\% $ considering the Rosenbrock function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 76.0$ \pm $10.9 92.5 112.5 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 94.8$ \pm $5.0 99.1 99.8 $ 1.0\cdot 10^{0} $
    LHS (MM) 70.0$ \pm $5.1 75.8 77.4 $ 8.8\cdot 10^{-04} $
    LHS (STD) 69.9$ \pm $3.3 73.5 75.4 $ 1.8\cdot 10^{-04} $
    L1 (MC) 72.0$ \pm $9.0 97.0 125.6 $ 2.6\cdot 10^{-01} $
    L1 (MC-CC) 65.8$ \pm $6.2 71.5 73.7 $ 3.3\cdot 10^{-09} $
    L1 (D) 78.9$ \pm $7.5 88.1 89.6 $ 7.5\cdot 10^{-01} $
    L1 (D-COH) 76.0$ \pm $6.8 86.3 89.3 $ 6.9\cdot 10^{-01} $
    L1 (CO) 72.4$ \pm $6.4 84.2 84.8 $ 4.4\cdot 10^{-02} $

    Table 21.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 1\% $ considering the Rosenbrock function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 72.7$ \pm $12.7 93.5 110.1 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 89.1$ \pm $24.1 97.2 97.8 $ 9.9\cdot 10^{-01} $
    LHS (MM) 68.6$ \pm $5.3 75.5 75.9 $ 1.8\cdot 10^{-02} $
    LHS (STD) 69.4$ \pm $3.4 73.0 74.7 $ 4.3\cdot 10^{-03} $
    L1 (MC) 71.6$ \pm $10.5 94.0 134.2 $ 5.6\cdot 10^{-01} $
    L1 (MC-CC) 59.8$ \pm $9.3 66.7 67.7 $ 3.0\cdot 10^{-08} $
    L1 (D) 77.9$ \pm $7.7 86.5 87.7 $ 8.1\cdot 10^{-01} $
    L1 (D-COH) 75.7$ \pm $6.6 85.5 88.4 $ 8.5\cdot 10^{-01} $
    L1 (CO) 69.3$ \pm $8.4 81.0 83.7 $ 5.3\cdot 10^{-02} $

    Figure 11.  (a) and (b) Convergence of the relative error of the mean with respect to the number of sampling points $N$ considering the Rosenbrock function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) convergence of the relative error of the standard deviation; (e) and (f) success rates of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$ for the mean and the standard deviation, respectively.
    Table 22.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 10\% $ considering the Rosenbrock function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 59.5$ \pm $13.4 67.5 72.9 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 22.2$ \pm $7.6 37.5 38.7 $ 7.7\cdot 10^{-1} $
    LHS (MM) 54.9$ \pm $10.0 60.5 61.7 $ 1.2\cdot 10^{-02} $
    LHS (STD) 55.3$ \pm $9.7 62.5 62.9 $ 4.0\cdot 10^{-02} $
    L1 (MC) 49.3$ \pm $26.6 78.0 131.6 $ 6.4\cdot 10^{-03} $
    L1 (MC-CC) 34.6$ \pm $12.4 53.0 56.1 $ 1.6\cdot 10^{-07} $
    L1 (D) 42.4$ \pm $7.8 50.0 52.4 $ 1.3\cdot 10^{-06} $
    L1 (D-COH) 53.9$ \pm $21.9 66.5 68.7 $ 2.2\cdot 10^{-02} $
    L1 (CO) 51.6$ \pm $14.0 64.0 70.2 $ 3.6\cdot 10^{-03} $

    Table 23.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 0.1\% $ considering the Rosenbrock function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 76.0$ \pm $10.9 92.5 112.5 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 95.0$ \pm $5.1 99.1 99.8 $ 1.0\cdot 10^{0} $
    LHS (MM) 70.0$ \pm $5.2 75.8 77.4 $ 1.0\cdot 10^{-03} $
    LHS (STD) 70.0$ \pm $3.3 73.5 75.4 $ 2.6\cdot 10^{-04} $
    L1 (MC) 72.0$ \pm $9.2 97.5 126.6 $ 4.0\cdot 10^{-01} $
    L1 (MC-CC) 66.0$ \pm $4.2 73.0 75.4 $ 5.8\cdot 10^{-08} $
    L1 (D) 79.0$ \pm $7.5 88.1 89.6 $ 8.1\cdot 10^{-01} $
    L1 (D-COH) 76.0$ \pm $6.8 86.3 89.3 $ 4.8\cdot 10^{-01} $
    L1 (CO) 73.6$ \pm $6.7 84.0 84.8 $ 1.3\cdot 10^{-01} $

    Table 24.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 1\% $ considering the Rosenbrock function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 75.7$ \pm $16.8 91.8 110.8 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 94.6$ \pm $5.0 99.8 100.0 $ 1.0\cdot 10^{0} $
    LHS (MM) 69.9$ \pm $11.9 75.8 76.7 $ 2.1\cdot 10^{-03} $
    LHS (STD) 69.8$ \pm $15.6 73.5 75.4 $ 1.8\cdot 10^{-04} $
    L1 (MC) 71.8$ \pm $22.8 92.0 96.8 $ 1.3\cdot 10^{-01} $
    L1 (MC-CC) 65.8$ \pm $17.3 71.5 74.4 $ 1.2\cdot 10^{-07} $
    L1 (D) 78.5$ \pm $7.3 87.1 88.6 $ 7.8\cdot 10^{-01} $
    L1 (D-COH) 75.7$ \pm $6.6 85.5 88.4 $ 5.1\cdot 10^{-01} $
    L1 (CO) 71.8$ \pm $6.5 83.0 84.7 $ 8.1\cdot 10^{-02} $

    Table 25.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 10\% $ considering the Rosenbrock function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 66.3$ \pm $32.5 85.5 98.2 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 6.3$ \pm $42.8 96.2 96.8 $ 6.0\cdot 10^{-01} $
    LHS (MM) 62.4$ \pm $29.4 71.5 72.7 $ 1.0\cdot 10^{-01} $
    LHS (STD) 62.0$ \pm $29.4 69.8 72.1 $ 1.1\cdot 10^{-01} $
    L1 (MC) 34.1$ \pm $34.5 84.0 90.5 $ 2.7\cdot 10^{-01} $
    L1 (MC-CC) 52.4$ \pm $27.7 65.5 66.7 $ 8.8\cdot 10^{-03} $
    L1 (D) 56.5$ \pm $17.2 67.2 70.8 $ 6.3\cdot 10^{-02} $
    L1 (D-COH) 53.9$ \pm $21.9 66.5 68.7 $ 1.6\cdot 10^{-02} $
    L1 (CO) 35.7$ \pm $31.0 75.5 80.2 $ 8.8\cdot 10^{-02} $

    Table 26.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 0.1\% $ considering the LPP function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 5.0$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 5.0$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    LHS (MM) 5.0$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    LHS (STD) 5.0$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    L1 (MC) 5.0$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    L1 (MC-CC) 5.0$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    L1 (D) 5.0$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    L1 (D-COH) 5.0$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    L1 (CO) 5.0$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $

    Table 27.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 1\% $ considering the LPP function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 4.9$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 4.9$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    LHS (MM) 4.9$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    LHS (STD) 4.9$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    L1 (MC) 4.9$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    L1 (MC-CC) 4.9$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    L1 (D) 4.9$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    L1 (D-COH) 4.9$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $
    L1 (CO) 4.9$ \pm $0.0 5.0 5.0 $ 1.0\cdot 10^{0} $

    Figure 12.  (a) and (b) Convergence of the relative error of the mean with respect to the number of sampling points $N$ considering the linear paired product function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) convergence of the relative error of the standard deviation; (e) and (f) success rates of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$ for the mean and the standard deviation, respectively.
    Table 28.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 10\% $ considering the LPP function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 4.5$ \pm $0.0 4.0 4.0 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 4.5$ \pm $0.0 4.0 4.0 $ 1.0\cdot 10^{0} $
    LHS (MM) 4.5$ \pm $0.0 4.0 4.0 $ 1.0\cdot 10^{0} $
    LHS (STD) 4.5$ \pm $0.0 4.0 4.0 $ 1.0\cdot 10^{0} $
    L1 (MC) 4.5$ \pm $0.0 4.0 4.0 $ 1.0\cdot 10^{0} $
    L1 (MC-CC) 4.5$ \pm $0.0 4.0 4.0 $ 1.0\cdot 10^{0} $
    L1 (D) 4.5$ \pm $0.0 4.0 4.0 $ 1.0\cdot 10^{0} $
    L1 (D-COH) 4.5$ \pm $0.0 4.0 4.0 $ 1.0\cdot 10^{0} $
    L1 (CO) 4.5$ \pm $0.0 4.0 4.0 $ 1.0\cdot 10^{0} $

    Table 29.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 0.1\% $ considering the LPP function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 114.0$ \pm $9.4 127.5 137.0 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 110.0$ \pm $5.1 117.0 119.4 $ 1.5\cdot 10^{-02} $
    LHS (MM) 110.0$ \pm $5.8 117.0 119.4 $ 3.8\cdot 10^{-03} $
    LHS (STD) 110.0$ \pm $5.2 115.8 118.8 $ 2.2\cdot 10^{-02} $
    L1 (MC) -$ \pm $- - - -
    L1 (MC-CC) 113.0$ \pm $17.0 145.0 174.0 $ 3.2\cdot 10^{-01} $
    L1 (D) -$ \pm $- - - -
    L1 (D-COH) -$ \pm $- - - -
    L1 (CO) -$ \pm $- - - -

    Table 30.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 1\% $ considering the LPP function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 113.9$ \pm $9.4 127.5 137.0 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 109.9$ \pm $5.1 117.0 119.4 $ 2.8\cdot 10^{-02} $
    LHS (MM) 109.8$ \pm $5.8 117.0 119.4 $ 6.6\cdot 10^{-03} $
    LHS (STD) 109.9$ \pm $5.3 115.8 118.8 $ 4.1\cdot 10^{-02} $
    L1 (MC) -$ \pm $- - - -
    L1 (MC-CC) 112.9$ \pm $16.5 144.0 171.3 $ 3.9\cdot 10^{-01} $
    L1 (D) -$ \pm $- - - -
    L1 (D-COH) -$ \pm $- - - -
    L1 (CO) -$ \pm $- - - -

    Table 31.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 10\% $ considering the LPP function. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 113.3$ \pm $9.5 128.5 134.6 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 108.6$ \pm $19.5 116.0 118.4 $ 1.8\cdot 10^{-02} $
    LHS (MM) 108.0$ \pm $6.1 116.0 118.4 $ 6.9\cdot 10^{-03} $
    LHS (STD) 108.5$ \pm $5.5 114.8 117.1 $ 4.4\cdot 10^{-02} $
    L1 (MC) 114.2$ \pm $12.6 138.0 150.9 $ 9.2\cdot 10^{-01} $
    L1 (MC-CC) 111.2$ \pm $9.8 123.5 127.5 $ 2.6\cdot 10^{-01} $
    L1 (D) 119.2$ \pm $6.9 126.8 128.6 $ 1.0\cdot 10^{0} $
    L1 (D-COH) 120.2$ \pm $10.7 137.5 147.1 $ 1.0\cdot 10^{0} $
    L1 (CO) 112.5$ \pm $11.9 133.8 138.0 $ 6.6\cdot 10^{-01} $

    Table 32.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 0.1\% $ considering the Electrode Probe Model. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 48.0$ \pm $6.0 56.0 59.1 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 39.7$ \pm $1.8 43.8 45.4 $ 1.1\cdot 10^{-05} $
    LHS (MM) 39.8$ \pm $3.8 49.0 51.4 $ 1.4\cdot 10^{-04} $
    LHS (STD) 40.0$ \pm $3.9 47.8 50.8 $ 1.9\cdot 10^{-04} $
    L1 (MC) 48.6$ \pm $6.8 64.2 64.8 $ 8.4\cdot 10^{-01} $
    L1 (MC-CC) 47.6$ \pm $24.1 70.0 147.2 $ 4.0\cdot 10^{-01} $
    L1 (D) 39.8$ \pm $5.1 53.0 57.7 $ 7.2\cdot 10^{-04} $
    L1 (D-COH) 39.9$ \pm $11.1 66.0 79.2 $ 5.4\cdot 10^{-02} $
    L1 (CO) 51.6$ \pm $6.7 60.8 63.1 $ 9.9\cdot 10^{-01} $

    Table 33.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 1\% $ considering the Electrode Probe Model. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 26.4$ \pm $5.4 34.5 36.4 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 26.2$ \pm $3.8 31.5 32.7 $ 4.0\cdot 10^{-01} $
    LHS (MM) 19.9$ \pm $2.3 23.5 29.9 $ 8.6\cdot 10^{-07} $
    LHS (STD) 19.9$ \pm $1.9 25.5 26.7 $ 2.9\cdot 10^{-06} $
    L1 (MC) 32.1$ \pm $6.4 36.5 37.7 $ 9.9\cdot 10^{-01} $
    L1 (MC-CC) 30.1$ \pm $9.5 48.0 59.7 $ 9.6\cdot 10^{-01} $
    L1 (D) 24.5$ \pm $6.2 35.5 36.7 $ 3.9\cdot 10^{-01} $
    L1 (D-COH) 20.0$ \pm $5.1 32.3 35.8 $ 9.2\cdot 10^{-03} $
    L1 (CO) 33.0$ \pm $6.2 37.7 38.7 $ 1.0\cdot 10^{0} $

    Figure 13.  (a) and (b) Convergence of the relative error of the mean with respect to the number of sampling points $N$ considering the electrode impedance model. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) convergence of the relative error of the standard deviation; (e) and (f) success rates of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$ for the mean and the standard deviation, respectively.
    Table 34.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 10\% $ considering the Electrode Probe Model. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 18.2$ \pm $0.2 18.5 18.9 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 18.3$ \pm $0.1 18.0 18.0 $ 5.6\cdot 10^{-01} $
    LHS (MM) 18.1$ \pm $0.1 18.0 18.0 $ 1.0\cdot 10^{-06} $
    LHS (STD) 18.1$ \pm $0.1 18.0 18.0 $ 3.6\cdot 10^{-06} $
    L1 (MC) 18.4$ \pm $0.3 18.9 19.0 $ 9.8\cdot 10^{-01} $
    L1 (MC-CC) 18.3$ \pm $5.1 24.0 36.0 $ 9.6\cdot 10^{-01} $
    L1 (D) 18.2$ \pm $0.3 18.8 18.9 $ 5.1\cdot 10^{-01} $
    L1 (D-COH) 18.2$ \pm $0.2 18.0 18.7 $ 1.5\cdot 10^{-02} $
    L1 (CO) 18.4$ \pm $0.3 18.9 19.7 $ 1.0\cdot 10^{0} $

    Table 35.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 0.1\% $ considering the Electrode Probe Model. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 97.4$ \pm $12.5 117.5 131.3 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 77.4$ \pm $4.8 82.5 83.7 $ 5.0\cdot 10^{-11} $
    LHS (MM) 88.1$ \pm $8.5 97.0 107.7 $ 5.2\cdot 10^{-05} $
    LHS (STD) 89.4$ \pm $6.4 95.5 103.0 $ 7.5\cdot 10^{-05} $
    L1 (MC) 99.6$ \pm $20.4 158.0 253.8 $ 6.7\cdot 10^{-01} $
    L1 (MC-CC) 86.7$ \pm $28.4 117.5 202.3 $ 5.6\cdot 10^{-04} $
    L1 (D) 115.0$ \pm $5.3 120.0 128.7 $ 1.0\cdot 10^{0} $
    L1 (D-COH) 114.2$ \pm $12.3 132.5 135.8 $ 1.0\cdot 10^{0} $
    L1 (CO) 95.5$ \pm $10.5 114.0 117.1 $ 2.6\cdot 10^{-01} $

    Table 36.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 1\% $ considering the Electrode Probe Model. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
    Grid $ \hat{N}_{\varepsilon} $ $ N_{sr}^{(95\%)} $ $ N_{sr}^{(99\%)} $ p-value
    Random 54.7$ \pm $10.0 75.0 79.8 $ 1.0\cdot 10^{0} $
    LHS (SC-ESE) 45.9$ \pm $3.4 48.8 49.7 $ 1.2\cdot 10^{-06} $
    LHS (MM) 49.2$ \pm $3.9 53.9 55.4 $ 3.0\cdot 10^{-03} $
    LHS (STD) 49.0$ \pm $3.5 55.0 57.4 $ 1.3\cdot 10^{-02} $
    L1 (MC) 57.1$ \pm $8.2 72.5 77.5 $ 9.5\cdot 10^{-01} $
    L1 (MC-CC) 52.5$ \pm $11.2 78.5 94.1 $ 2.9\cdot 10^{-01} $
    L1 (D) 46.0$ \pm $5.5 56.0 58.4 $ 2.6\cdot 10^{-05} $
    L1 (D-COH) 47.3$ \pm $12.7 79.0 86.5 $ 2.4\cdot 10^{-03} $
    L1 (CO) 59.0$ \pm $7.1 67.5 74.3 $ 9.2\cdot 10^{-01} $

    Table 37.  Estimated number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 10\% $ considering the Electrode Probe Model. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively. The bold rows highlight the best performing sampling schemes in each category.
| Grid | $ \hat{N}_{\varepsilon} $ | $ N_{sr}^{(95\%)} $ | $ N_{sr}^{(99\%)} $ | p-value |
| --- | --- | --- | --- | --- |
| Random | 32.7 $\pm$ 5.0 | 36.8 | 37.7 | $ 1.0\cdot 10^{0} $ |
| LHS (SC-ESE) | 29.6 $\pm$ 2.6 | 32.5 | 33.7 | $ 2.7\cdot 10^{-02} $ |
| LHS (MM) | 23.5 $\pm$ 4.6 | 33.5 | 34.7 | $ 1.0\cdot 10^{-05} $ |
| LHS (STD) | 28.2 $\pm$ 4.0 | 32.2 | 32.9 | $ 8.8\cdot 10^{-04} $ |
| L1 (MC) | 33.6 $\pm$ 5.5 | 37.7 | 39.4 | $ 9.1\cdot 10^{-01} $ |
| L1 (MC-CC) | 33.4 $\pm$ 7.0 | 47.5 | 50.4 | $ 8.4\cdot 10^{-01} $ |
| L1 (D) | 32.7 $\pm$ 4.6 | 35.2 | 35.9 | $ 2.7\cdot 10^{-01} $ |
| L1 (D-COH) | 32.9 $\pm$ 4.5 | 36.8 | 37.7 | $ 6.8\cdot 10^{-01} $ |
| L1 (CO) | 34.3 $\pm$ 3.9 | 39.0 | 40.7 | $ 9.8\cdot 10^{-01} $ |

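The p-values listed in Tables 34–37 compare a scheme's distribution of per-repetition sample counts against the random-sampling baseline; small values indicate that the scheme reaches the error threshold with significantly fewer samples. The statistical test itself is not restated in this part of the text, so the following snippet is only a hedged illustration using a one-sided Mann–Whitney U test from SciPy; the function name and data layout are assumptions.

```python
from scipy.stats import mannwhitneyu

def p_value_vs_random(n_eps_scheme, n_eps_random):
    """One-sided test of whether a sampling scheme needs fewer samples than
    random sampling (illustrative choice of test, not necessarily the one
    used to produce the tables).

    n_eps_scheme, n_eps_random : arrays of per-repetition sample counts
    needed to reach the error threshold for the scheme and for the random
    baseline, respectively.
    """
    result = mannwhitneyu(n_eps_scheme, n_eps_random, alternative="less")
    return result.pvalue
```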
    Table 38.  Relative and average number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 0.1\% $ with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively.
| Grid | Ishigami $ \hat{N}_{\varepsilon} $ | Ishigami $ N_{sr}^{(95\%)} $ | Ishigami $ N_{sr}^{(99\%)} $ | Rosenbrock $ \hat{N}_{\varepsilon} $ | Rosenbrock $ N_{sr}^{(95\%)} $ | Rosenbrock $ N_{sr}^{(99\%)} $ | LPP $ \hat{N}_{\varepsilon} $ | LPP $ N_{sr}^{(95\%)} $ | LPP $ N_{sr}^{(99\%)} $ | Electrode $ \hat{N}_{\varepsilon} $ | Electrode $ N_{sr}^{(95\%)} $ | Electrode $ N_{sr}^{(99\%)} $ | Average (all test functions) $ \hat{N}_{\varepsilon} $ | Average (all test functions) $ N_{sr}^{(95\%)} $ | Average (all test functions) $ N_{sr}^{(99\%)} $ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random (L1) | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| LHS (SC-ESE) | 0.877 | 0.544 | 0.503 | 1.248 | 1.071 | 0.887 | 1.000 | 1.000 | 1.000 | 0.828 | 0.781 | 0.768 | 0.988 | 0.849 | 0.790 |
| LHS (MM) | 0.864 | 0.546 | 0.503 | 0.921 | 0.819 | 0.688 | 1.000 | 1.000 | 1.000 | 0.829 | 0.875 | 0.870 | 0.904 | 0.810 | 0.765 |
| LHS (STD) | 0.877 | 0.554 | 0.552 | 0.921 | 0.795 | 0.670 | 1.000 | 1.000 | 1.000 | 0.835 | 0.854 | 0.860 | 0.908 | 0.801 | 0.770 |
| L1 (MC) | 1.367 | 0.963 | 1.081 | 0.947 | 1.049 | 1.116 | 1.000 | 1.000 | 1.000 | 1.013 | 1.147 | 1.097 | 1.082 | 1.040 | 1.074 |
| L1 (MC-CC) | 1.321 | 0.889 | 0.837 | 0.867 | 0.773 | 0.655 | 1.000 | 1.000 | 1.000 | 0.991 | 1.250 | 2.491 | 1.045 | 1.978 | 1.246 |
| L1 (D) | 0.996 | 0.738 | 0.685 | 1.039 | 0.953 | 0.797 | 1.000 | 1.000 | 1.000 | 0.829 | 0.946 | 0.976 | 0.966 | 1.896 | 0.905 |
| L1 (D-COH) | 0.912 | 0.727 | 0.671 | 1.000 | 0.932 | 0.793 | 1.000 | 1.000 | 1.000 | 0.832 | 1.179 | 1.340 | 0.936 | 0.960 | 0.955 |
| L1 (CO) | 2.403 | 5.719 | 5.861 | 0.953 | 0.911 | 0.754 | 1.000 | 1.000 | 1.000 | 1.076 | 1.085 | 1.068 | 1.358 | 2.179 | 2.171 |

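Tables 38–43 report the same convergence measures normalized by the random-sampling (L1) result of the respective test function, and the last three columns average these ratios over all test functions; for example, the average $ \hat{N}_{\varepsilon} $ ratio of LHS (SC-ESE) in Table 38 is $ (0.877 + 1.248 + 1.000 + 0.828)/4 \approx 0.988 $. A minimal sketch of this normalization is given below; the dictionary layout, the baseline label, and the function name are hypothetical.

```python
import numpy as np

def relative_to_baseline(counts, baseline="Random (L1)"):
    """Normalize sample counts by a baseline sampling scheme.

    counts : dict mapping scheme name -> {test function: sample count}
             (hypothetical layout).
    Returns, per scheme, the ratio to the baseline for every test function
    plus the average ratio over all test functions.
    """
    rel = {}
    for scheme, per_fun in counts.items():
        ratios = {f: per_fun[f] / counts[baseline][f] for f in per_fun}
        ratios["Average"] = float(np.mean(list(ratios.values())))
        rel[scheme] = ratios
    return rel
```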
    Table 39.  Relative and average number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 1\% $ with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively.
| Grid | Ishigami $ \hat{N}_{\varepsilon} $ | Ishigami $ N_{sr}^{(95\%)} $ | Ishigami $ N_{sr}^{(99\%)} $ | Rosenbrock $ \hat{N}_{\varepsilon} $ | Rosenbrock $ N_{sr}^{(95\%)} $ | Rosenbrock $ N_{sr}^{(99\%)} $ | LPP $ \hat{N}_{\varepsilon} $ | LPP $ N_{sr}^{(95\%)} $ | LPP $ N_{sr}^{(99\%)} $ | Electrode $ \hat{N}_{\varepsilon} $ | Electrode $ N_{sr}^{(95\%)} $ | Electrode $ N_{sr}^{(99\%)} $ | Average (all test functions) $ \hat{N}_{\varepsilon} $ | Average (all test functions) $ N_{sr}^{(95\%)} $ | Average (all test functions) $ N_{sr}^{(99\%)} $ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random (L1) | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| LHS (SC-ESE) | 0.745 | 0.671 | 0.629 | 1.226 | 1.040 | 0.889 | 1.000 | 1.000 | 1.000 | 0.992 | 0.913 | 0.898 | 0.991 | 0.906 | 0.854 |
| LHS (MM) | 0.779 | 0.589 | 0.523 | 0.944 | 0.807 | 0.689 | 1.000 | 1.000 | 1.000 | 0.756 | 0.681 | 0.821 | 0.870 | 0.769 | 0.758 |
| LHS (STD) | 0.790 | 0.671 | 0.625 | 0.955 | 0.781 | 0.678 | 1.000 | 1.000 | 1.000 | 0.756 | 0.739 | 0.734 | 0.875 | 0.798 | 0.759 |
| L1 (MC) | 1.171 | 0.984 | 0.871 | 0.985 | 1.005 | 1.219 | 1.000 | 1.000 | 1.000 | 1.219 | 1.058 | 1.036 | 1.094 | 1.012 | 1.031 |
| L1 (MC-CC) | 1.223 | 1.008 | 0.985 | 0.824 | 0.714 | 0.615 | 1.000 | 1.000 | 1.000 | 1.143 | 1.391 | 1.640 | 1.048 | 1.028 | 1.060 |
| L1 (D) | 1.009 | 0.785 | 0.800 | 1.073 | 0.925 | 0.797 | 1.000 | 1.000 | 1.000 | 0.931 | 1.029 | 1.008 | 1.003 | 0.935 | 0.901 |
| L1 (D-COH) | 1.008 | 0.810 | 0.814 | 1.042 | 0.914 | 0.803 | 1.000 | 1.000 | 1.000 | 0.758 | 0.935 | 0.984 | 0.952 | 0.915 | 0.900 |
| L1 (CO) | 2.027 | 1.722 | 1.943 | 0.954 | 1.252 | 1.094 | 1.063 | 1.000 | 1.000 | 1.076 | 1.085 | 1.068 | 1.308 | 1.170 | 1.192 |

    Table 40.  Relative and average number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the mean of $ 10\% $ with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively.
| Grid | Ishigami $ \hat{N}_{\varepsilon} $ | Ishigami $ N_{sr}^{(95\%)} $ | Ishigami $ N_{sr}^{(99\%)} $ | Rosenbrock $ \hat{N}_{\varepsilon} $ | Rosenbrock $ N_{sr}^{(95\%)} $ | Rosenbrock $ N_{sr}^{(99\%)} $ | LPP $ \hat{N}_{\varepsilon} $ | LPP $ N_{sr}^{(95\%)} $ | LPP $ N_{sr}^{(99\%)} $ | Electrode $ \hat{N}_{\varepsilon} $ | Electrode $ N_{sr}^{(95\%)} $ | Electrode $ N_{sr}^{(99\%)} $ | Average (all test functions) $ \hat{N}_{\varepsilon} $ | Average (all test functions) $ N_{sr}^{(95\%)} $ | Average (all test functions) $ N_{sr}^{(99\%)} $ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random (L1) | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| LHS (SC-ESE) | 0.670 | 0.610 | 0.539 | 0.373 | 0.556 | 0.531 | 1.000 | 1.000 | 1.000 | 1.001 | 0.973 | 0.952 | 0.761 | 0.785 | 0.756 |
| LHS (MM) | 0.739 | 0.667 | 0.609 | 0.923 | 0.896 | 0.846 | 1.000 | 1.000 | 1.000 | 0.993 | 0.973 | 0.952 | 0.914 | 0.884 | 0.852 |
| LHS (STD) | 0.873 | 0.669 | 0.663 | 0.928 | 0.926 | 0.863 | 1.000 | 1.000 | 1.000 | 0.993 | 0.973 | 0.952 | 0.948 | 0.892 | 0.869 |
| L1 (MC) | 1.493 | 1.461 | 1.333 | 0.829 | 1.156 | 1.805 | 1.000 | 1.000 | 1.000 | 1.006 | 1.020 | 1.004 | 1.082 | 1.159 | 1.285 |
| L1 (MC-CC) | 1.497 | 1.483 | 1.372 | 0.581 | 0.785 | 0.770 | 1.000 | 1.000 | 1.000 | 1.005 | 1.297 | 1.905 | 1.021 | 1.145 | 1.262 |
| L1 (D) | 1.293 | 1.000 | 0.886 | 0.712 | 0.741 | 0.719 | 1.000 | 1.000 | 1.000 | 0.999 | 1.014 | 1.003 | 1.001 | 0.939 | 0.902 |
| L1 (D-COH) | 1.354 | 1.026 | 0.919 | 0.906 | 0.985 | 0.942 | 1.000 | 1.000 | 1.000 | 0.996 | 0.973 | 0.989 | 1.064 | 0.996 | 0.973 |
| L1 (CO) | 2.553 | 2.135 | 2.864 | 0.867 | 0.948 | 0.963 | 1.000 | 1.000 | 1.000 | 1.010 | 1.024 | 1.042 | 1.358 | 1.277 | 1.467 |

    Table 41.  Relative and average number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 0.1\% $ with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively.
| Grid | Ishigami $ \hat{N}_{\varepsilon} $ | Ishigami $ N_{sr}^{(95\%)} $ | Ishigami $ N_{sr}^{(99\%)} $ | Rosenbrock $ \hat{N}_{\varepsilon} $ | Rosenbrock $ N_{sr}^{(95\%)} $ | Rosenbrock $ N_{sr}^{(99\%)} $ | LPP $ \hat{N}_{\varepsilon} $ | LPP $ N_{sr}^{(95\%)} $ | LPP $ N_{sr}^{(99\%)} $ | Electrode $ \hat{N}_{\varepsilon} $ | Electrode $ N_{sr}^{(95\%)} $ | Electrode $ N_{sr}^{(99\%)} $ | Average (all test functions) $ \hat{N}_{\varepsilon} $ | Average (all test functions) $ N_{sr}^{(95\%)} $ | Average (all test functions) $ N_{sr}^{(99\%)} $ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random (L1) | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| LHS (SC-ESE) | 0.970 | 0.651 | 0.594 | 1.250 | 1.071 | 0.887 | 0.965 | 0.918 | 0.872 | 0.794 | 0.702 | 0.637 | 0.988 | 0.849 | 0.790 |
| LHS (MM) | 0.937 | 0.549 | 0.542 | 0.921 | 0.819 | 0.688 | 0.965 | 0.918 | 0.872 | 0.904 | 0.826 | 0.820 | 0.904 | 0.810 | 0.765 |
| LHS (STD) | 0.946 | 0.647 | 0.610 | 0.921 | 0.795 | 0.670 | 0.965 | 0.908 | 0.867 | 0.918 | 0.813 | 0.784 | 0.908 | 0.801 | 0.770 |
| L1 (MC) | 1.609 | 2.982 | 3.206 | 0.947 | 1.054 | 1.125 | - | - | - | 1.022 | 1.345 | 1.933 | 1.082 | 1.040 | 1.074 |
| L1 (MC-CC) | 1.247 | 0.881 | 0.804 | 0.868 | 0.789 | 0.670 | - | - | - | 0.889 | 1.000 | 1.541 | 1.045 | 1.978 | 1.246 |
| L1 (D) | 1.324 | 1.092 | 1.053 | 1.039 | 0.953 | 0.797 | - | - | - | 1.180 | 1.021 | 0.980 | 0.966 | 1.896 | 0.905 |
| L1 (D-COH) | 1.318 | 0.890 | 1.087 | 1.000 | 0.932 | 0.793 | - | - | - | 1.172 | 1.128 | 1.034 | 0.936 | 0.960 | 0.955 |
| L1 (CO) | 2.871 | 5.362 | 4.816 | 0.969 | 0.908 | 0.754 | - | - | - | 0.980 | 0.970 | 0.892 | 1.358 | 2.179 | 2.171 |

    Table 42.  Relative and average number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 1\% $ with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively.
| Grid | Ishigami $ \hat{N}_{\varepsilon} $ | Ishigami $ N_{sr}^{(95\%)} $ | Ishigami $ N_{sr}^{(99\%)} $ | Rosenbrock $ \hat{N}_{\varepsilon} $ | Rosenbrock $ N_{sr}^{(95\%)} $ | Rosenbrock $ N_{sr}^{(99\%)} $ | LPP $ \hat{N}_{\varepsilon} $ | LPP $ N_{sr}^{(95\%)} $ | LPP $ N_{sr}^{(99\%)} $ | Electrode $ \hat{N}_{\varepsilon} $ | Electrode $ N_{sr}^{(95\%)} $ | Electrode $ N_{sr}^{(99\%)} $ | Average (all test functions) $ \hat{N}_{\varepsilon} $ | Average (all test functions) $ N_{sr}^{(95\%)} $ | Average (all test functions) $ N_{sr}^{(99\%)} $ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random (L1) | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| LHS (SC-ESE) | 0.989 | 0.602 | 0.535 | 1.250 | 1.088 | 0.902 | 0.964 | 0.918 | 0.872 | 0.839 | 0.650 | 0.623 | 1.010 | 0.814 | 0.733 |
| LHS (MM) | 0.956 | 0.551 | 0.512 | 0.923 | 0.826 | 0.692 | 0.964 | 0.918 | 0.872 | 0.898 | 0.718 | 0.694 | 0.935 | 0.753 | 0.692 |
| LHS (STD) | 0.968 | 0.590 | 0.531 | 0.922 | 0.801 | 0.681 | 0.964 | 0.908 | 0.867 | 0.895 | 0.733 | 0.719 | 0.937 | 0.758 | 0.733 |
| L1 (MC) | 1.175 | 0.809 | 0.714 | 0.948 | 1.003 | 0.874 | - | - | - | 1.042 | 0.967 | 0.971 | 1.041 | 0.945 | 0.890 |
| L1 (MC-CC) | 1.197 | 0.813 | 0.803 | 0.869 | 0.779 | 0.671 | - | - | - | 0.959 | 1.047 | 1.179 | 1.006 | 0.910 | 0.913 |
| L1 (D) | 0.992 | 0.765 | 0.692 | 1.037 | 0.950 | 0.800 | - | - | - | 0.839 | 0.747 | 0.732 | 0.967 | 0.865 | 0.806 |
| L1 (D-COH) | 0.997 | 0.765 | 0.692 | 1.000 | 0.932 | 0.798 | - | - | - | 0.863 | 1.053 | 1.084 | 0.965 | 0.938 | 0.894 |
| L1 (CO) | 1.989 | 1.862 | 1.686 | 0.948 | 0.905 | 0.764 | - | - | - | 1.077 | 0.900 | 0.931 | 1.254 | 1.135 | 1.095 |

    Table 43.  Relative and average number of grid points $ \hat{N}_{\varepsilon} $ of different sampling schemes to reach a relative error of the std of $ 10\% $ with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $ N_{sr}^{(95\%)} $ and $ N_{sr}^{(99\%)} $ show the number of samples needed to reach a success rate of $ 95\% $ and $ 99\% $, respectively.
| Grid | Ishigami $ \hat{N}_{\varepsilon} $ | Ishigami $ N_{sr}^{(95\%)} $ | Ishigami $ N_{sr}^{(99\%)} $ | Rosenbrock $ \hat{N}_{\varepsilon} $ | Rosenbrock $ N_{sr}^{(95\%)} $ | Rosenbrock $ N_{sr}^{(99\%)} $ | LPP $ \hat{N}_{\varepsilon} $ | LPP $ N_{sr}^{(95\%)} $ | LPP $ N_{sr}^{(99\%)} $ | Electrode $ \hat{N}_{\varepsilon} $ | Electrode $ N_{sr}^{(95\%)} $ | Electrode $ N_{sr}^{(99\%)} $ | Average (all test functions) $ \hat{N}_{\varepsilon} $ | Average (all test functions) $ N_{sr}^{(95\%)} $ | Average (all test functions) $ N_{sr}^{(99\%)} $ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random (L1) | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| LHS (SC-ESE) | 0.620 | 0.887 | 0.967 | 0.096 | 1.126 | 0.986 | 0.959 | 0.903 | 0.880 | 0.905 | 0.884 | 0.894 | 0.645 | 0.950 | 0.932 |
| LHS (MM) | 0.841 | 0.725 | 0.821 | 0.942 | 0.836 | 0.740 | 0.954 | 0.903 | 0.880 | 0.721 | 0.912 | 0.920 | 0.865 | 0.868 | 0.840 |
| LHS (STD) | 0.740 | 0.882 | 0.865 | 0.936 | 0.816 | 0.734 | 0.958 | 0.893 | 0.870 | 0.863 | 0.878 | 0.871 | 0.874 | 0.867 | 0.835 |
| L1 (MC) | 0.624 | 1.461 | 1.381 | 0.515 | 0.982 | 0.922 | 1.008 | 1.074 | 1.121 | 1.027 | 1.027 | 1.045 | 0.794 | 1.136 | 1.117 |
| L1 (MC-CC) | 1.412 | 1.294 | 1.292 | 0.790 | 0.766 | 0.679 | 0.982 | 0.961 | 0.947 | 1.022 | 1.293 | 1.337 | 1.052 | 1.078 | 1.064 |
| L1 (D) | 1.032 | 0.922 | 1.131 | 0.853 | 0.787 | 0.721 | 1.052 | 0.986 | 0.955 | 1.001 | 0.959 | 0.951 | 0.984 | 0.966 | 0.940 |
| L1 (D-COH) | 1.001 | 1.000 | 1.062 | 0.814 | 0.778 | 0.700 | 1.061 | 1.070 | 1.093 | 1.008 | 1.000 | 1.000 | 0.971 | 0.962 | 0.964 |
| L1 (CO) | 2.366 | 2.020 | 2.569 | 0.540 | 0.883 | 0.817 | 0.993 | 1.041 | 1.025 | 1.049 | 1.061 | 1.080 | 1.237 | 1.256 | 1.373 |
