



    From a very general perspective, solving an inverse problem involves the integration of methodologies aimed at determining the causes given the effects. This can be exemplified by scenarios such as diagnosing a disease based on symptoms or designing engineering solutions within certain constraints. Common inverse problems in geophysics are the determination of the density distribution in the subsoil using gravity observations at the terrain surface (the gravimetric inverse problem [1]), the use of magnetic observations for mineral prospecting [2], the problem of finding the velocity distribution of a geological medium [3], the seismic reflection inverse problem (which maps the Earth's sedimentary crust from transient near-surface recordings of echoes stimulated by explosions or other controlled seismic sources positioned near the surface) [4], the determination of fault parameters using ground deformation [5], or the monitoring of magma volumes and their supply rates in volcanic areas, also using ground deformation [6]. Some of these problems are ill-posed, i.e., either the inverse problem does not admit a solution, the solution exists but is not unique, or, finally, the solution exists and is unique but is unstable, that is, a small change in the data produces a big change in the solution [7, pp. 7-8].

    Many inverse problems in geophysics and other fields involve a high number of parameters due to the complexity of the problem and/or the need to model phenomena with a high degree of detail. Despite this complexity, physical models are often simplified for ease of formulation and because it is impossible to account for all real-world details in a model. There are also problems, such as those based on potential fields (gravity, magnetics), whose solutions have inherent non-uniqueness (see, for example, [8, pp. 214–217] or [9, pp. 20–22]). Finally, the observed data set is always incomplete and noisy. All these circumstances contribute to the ill-posedness of the problem, leading to many solutions (called equivalent), all of them compatible with the observed data set and, in the case of nonlinear inverse problems, probably located in disconnected curvilinear valleys of the cost function topography [10]. As Parker stated in [11], "the goal of inverse theory is to determine the parameters from the observations or, in the face of the inevitable limitations of actual measurement, to find out as much as possible about them".

    Traditionally, the ill-posed nature of inverse problems has been addressed (selection of the best model and estimation of its uncertainty) through the combined use of local optimization methods and regularization techniques (see, for example, [12]). Although this approach is fast, for some nonlinear inverse problems it has the drawback that the solution depends strongly on the initial model, so the result may be a local minimum of the cost function rather than the global one. Uncertainty appraisal is as important a task as the estimation of the global minimum in any inverse problem, but results based on local optimization methods are, in the case of nonlinear problems, only valid in the neighborhood of the solution that has been found [10].

    Global search methods such as genetic algorithms [13], simulated annealing [14], or particle swarm optimization [15] (among others) offer advantages over local optimization methods, as they do not rely on an initial model and explore the entire parameter space, thereby enabling estimation of the global minimum of the cost function. After their execution they provide a collection of samples (models) from the parameter space together with their misfit values, from which the user can select the subset of models whose misfits are below a predefined tolerance. The solution of the inverse problem can then be expressed, as stated by Albert Tarantola in [16], as "the collection of all the models that have not been falsified" (those that are compatible with all the a priori information and have a misfit below a predefined tolerance). This is of particular importance in the case of nonlinear inverse problems, where, using these compatible models, the portion of the cost function topography below the prescribed tolerance can be depicted, making the uncertainty appraisal more realistic than with local optimization methods [10].

    In some inverse problems, especially highly dimensional ones, performing posterior analysis and uncertainty appraisal can be challenging. Statistics computed for each individual parameter alone may not be adequate when a global analysis of the entire set of parameters is needed. This is the case with gravity inversion in sedimentary basins, and in this paper we introduce a methodology designed to make this posterior analysis straightforward.

    In its most general form, a discrete nonlinear inverse problem can be written as

    $ \mathbf{F}(\mathbf{m}) = \mathbf{d} + \boldsymbol{\varepsilon}, $     (2.1)

    where vector $ \mathbf{m} $ belongs to the parameter space $ \mathcal{M} $, $ \mathbf{m}\in\mathcal{M} $, and is composed of a finite number of parameters, $ \mathbf{m} = \{m_1, m_2, \dots, m_n\} $; vector $ \mathbf{d}\in\mathcal{D} $ represents the observed data, $ \mathbf{d} = \{d_1, d_2, \dots, d_l\} $, belonging to the data space $ \mathcal{D} $; and $ \boldsymbol{\varepsilon} $ is the observational noise, always present. The direct problem functional $ \mathbf{F}:\mathcal{M}\rightarrow \mathcal{D} $ is considered in this paper as nonlinear.

    An inverse problem, linear or not, has many equivalent solutions, i.e., models that predict the observed data to within some misfit tolerance and are compatible with some prior knowledge. Then, uncertainty estimation in discrete inverse problems involves finding the family $ M_\text{tol} $ of models compatible (below a predefined tolerance value, $ \text{tol} $) with the observations and other a priori information, i.e., finding all $ \mathbf{m}\in M_\text{tol} $ subject to $ \| \mathbf{F}(\mathbf{m})- \mathbf{d}\|_p\leq\text{tol} $. By selecting $ p = 2 $, a least squares problem is posed, i.e., the model $ \mathbf{m} $ which minimizes the cost function

    $ c(\mathbf{m}) = \left\| \mathbf{F}(\mathbf{m}) - \mathbf{d} \right\|_2 $     (2.2)

    is determined. The cost function (2.2) is in most cases complemented with a regularization term (see [12] for example) in order to stabilize the inversion, but as this does not affect our discussion about the existence of disconnected equivalence regions in the cost function landscape of nonlinear inverse problems, we will ignore it in order to simplify the notation.

    One common method for solving a nonlinear inverse problem using local optimization is Newton's method, which implies the linearization of the forward operator $ \mathbf{F} $ around an initial model $ \mathbf{m}_0 $, so

    $ \mathbf{F}(\mathbf{m}) = \mathbf{F}(\mathbf{m}_0) + \mathbf{JF}_{\mathbf{m}_0}(\mathbf{m} - \mathbf{m}_0) + o(\| \mathbf{m} - \mathbf{m}_0\|_2), $     (2.3)

    where $ \mathbf{JF}_{ \mathbf{m}_0} $ is the Jacobian of $ \mathbf{F} $ in $ \mathbf{m}_0 $, and $ o(\| \mathbf{m}- \mathbf{m}_0\|_2) $ represents the terms of order greater than $ 1 $ in the Taylor expansion. Taking into account Eqs (2.1)–(2.3), the solution of a discrete nonlinear inverse problem consists of finding the model $ \mathbf{m} $ which minimizes the cost function

    $ c(\mathbf{m}) = \left\| \mathbf{JF}_{\mathbf{m}_0}(\mathbf{m} - \mathbf{m}_0) - \mathbf{d} + \mathbf{F}(\mathbf{m}_0) \right\|_2. $     (2.4)

    As the problem is now linear, well-known linear algebra techniques can be employed to solve it through an iterative procedure in which the initial model $ \mathbf{m}_0 $ is updated at each step (see [12]).

    Once the model $ \mathbf{m} $ that minimizes $ c(\mathbf{m}) $ is determined, the error analysis can be performed. The boundary of the region $ \| \mathbf{JF}_{ \mathbf{m}_0}(\mathbf{m}- \mathbf{m}_0)- \mathbf{d}+ \mathbf{F}(\mathbf{m}_0)\|_2\leq\text{tol} $ is a hyper-quadric whose axes can be determined in order to provide the uncertainty appraisal (see [10]). However, in the Taylor expansion of the forward operator (Eq (2.3)), the terms of order greater than $ 1 $ have been neglected, so this truncation introduces an error in the approximation of the cost function landscape of the original nonlinear problem. For a deeper understanding of the cost function landscape in linear and nonlinear inverse problems, readers can consult [10,17].

    We will illustrate the preceding statements with a synthetic example. Let a nonlinear function be $ y = e^{\alpha x}+2e^{\beta x} $. For the values $ (\alpha, \beta) = (10, 1) $, a set of $ 51 $ equally spaced values of $ y $ was generated for $ x\in[-0.25, 0.25] $ using $ \Delta x = 0.01 $. The computed $ y $ values were then contaminated with white noise of distribution $ N(\mu, \sigma^2) = N(0, 5\%^2) $, where $ 5\% $ means a standard deviation of $ 5\% $ of the value of $ y $. With the noisy $ y $ values, we applied Newton's method in order to estimate $ \mathbf{m} $, obtaining the results presented in Figure 1. The upper pictures represent the same solution, but the upper left one shows as background the real nonlinear cost function topography, while the upper right one shows the linearized cost function; the lower picture shows the linearized cost function for another starting guess $ \mathbf{m}_0 $, as will be explained. The equivalence regions of $ 10\% $ relative misfit ($ 100\cdot\| \mathbf{F}(\mathbf{m})- \mathbf{d}\|_2/\| \mathbf{d}\|_2 $) are shown, and it can be seen that in the case of the original nonlinear problem two regions are present (green lines).
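    For readers who wish to reproduce this kind of experiment, the following sketch (in Python, which is not the environment used by the authors) generates synthetic data of the type just described and applies a local, derivative-based least squares fit from two different starting guesses; the solver choice, random seed, and variable names are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: synthetic two-parameter example y = exp(a*x) + 2*exp(b*x) with ~5%
# relative Gaussian noise, fitted by a local (Levenberg-Marquardt) least squares solver.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def forward(m, x):
    alpha, beta = m
    return np.exp(alpha * x) + 2.0 * np.exp(beta * x)

# Synthetic data: 51 points in [-0.25, 0.25], true model (alpha, beta) = (10, 1)
x = np.linspace(-0.25, 0.25, 51)
m_true = np.array([10.0, 1.0])
y_obs = forward(m_true, x) * (1.0 + 0.05 * rng.standard_normal(x.size))

def residuals(m):
    return forward(m, x) - y_obs

# Two initial guesses: one near the global minimum, one near a local minimum
for m0 in (np.array([6.0, 2.0]), np.array([-2.0, 6.0])):
    sol = least_squares(residuals, m0, method="lm")   # local, derivative-based solver
    rel_misfit = 100 * np.linalg.norm(residuals(sol.x)) / np.linalg.norm(y_obs)
    print(f"start {m0} -> estimate {sol.x}, relative misfit {rel_misfit:.2f}%")
```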

    Figure 1.  Least squares relative cost function topography ($ 100\cdot\| \mathbf{F}(\mathbf{m})- \mathbf{d}\|_2/\| \mathbf{d}\|_2 $) of the synthetic example $ y = e^{\alpha x}+2e^{\beta x}+N(0, 5\%^2) $. Upper left: real nonlinear cost function topography. Upper right: linearized cost function topography for a solution starting with point $ \mathbf{m}_0 = (6, 2) $. Lower: linearized cost function topography for a solution starting with point $ \mathbf{m}_0 = (-2, 6) $. Color bars represent the relative misfit as percentages.

    However, for this problem the estimation obtained with a local optimization method depends strongly on the location of the initial guess $ \mathbf{m}_0 $. The result corresponding to the upper right plot in Figure 1 was obtained using $ \mathbf{m}_0 = (\alpha_0, \beta_0) = (6, 2) $, and the solution is enclosed in the equivalence region containing the global minimum of the cost function. It can also be seen that the estimated solution (black circle) does not match the true solution (black dot), which is a consequence of the presence of noise in the data (see [10] for details). However, if the initial guess is $ \mathbf{m}_0 = (\alpha_0, \beta_0) = (-2, 6) $, as can be seen in the lower picture of Figure 1, the solution is enclosed in the equivalence region in the upper part of the figure, which corresponds to a local minimum far from the true solution $ \mathbf{m} = (\alpha, \beta) = (10, 1) $.

    Regarding the cost function topography, we can see in Figure 1 (upper left) its main characteristic in the case of nonlinear problems: isolines of equal misfit are arbitrarily bent curves. In linear problems, equal-misfit isolines are in the general case hyper-ellipsoids (tending to elliptic hyper-cylinders for ill-posed problems), but the landscape is totally different in the nonlinear case (see [10,17] for details). It can be seen (Figure 1, upper left) how the two $ 10\% $ nonlinear equivalence regions have different sizes and shapes, and the problem solution does not have to lie at their centers. On the contrary, when a local optimization method is employed, we linearize the problem around the best estimation, and the cost function topography corresponds to a linear problem, as can be seen in Figure 1 (upper right and lower). In this particular example the isolines of equal misfit are ellipses centered at the estimated solution, as we are dealing with a two-dimensional problem. When we apply a local optimization method to a nonlinear inverse problem, two important and general conclusions related to error analysis can be derived:

    ● Only one equivalence region (around the estimated solution) will be determined, ignoring other possible regions included in the cost function topography. Furthermore, if the initial guess $ \mathbf{m}_0 $ is far from the region containing the global minimum, the algorithm could be trapped in a local one with its own linearized equivalence region, making both the estimation and the posterior error analysis useless (this is the case with the example in the lower part of Figure 1).

    ● Even when the estimated solution corresponds to the global minimum of the cost function, the linearized equivalence region differs in shape from the nonlinear one, as can be seen in Figure 1. All models inside an equivalence region of prescribed tolerance must be taken into account, as they can explain the observations (see [16,18,19]). However, when we compare the linear and nonlinear regions, we can see that (1) the linearized equivalence region contains models that are outside the true nonlinear equivalence region, and (2) models belonging to the nonlinear equivalence region are located outside the linearized one, so they will be ignored.

    These facts are much more pronounced in highly nonlinear problems, so the error analysis based on the linearized cost function topography is valid only in the very near surroundings of the estimated solution, making, in many cases, the uncertainty appraisal useless.

    Global optimization methods offer significant advantages over local optimization ones in certain cases. First of all, it is not necessary to provide an initial guess for the solution, as these methods explore the entire parameter space (actually a predefined subset of it, based on knowledge of the problem), so it is difficult for them to get trapped in local minima of the cost function. This is excellent for nonlinear inverse problems and their complex cost function topographies. Also, these methods are based only on the evaluation of the forward problem, so neither linearization nor the assembly and factorization of large systems of equations is needed. On the other hand, when the problem is highly dimensional and/or the forward problem is computationally expensive, global optimization methods can be slow. Some techniques can be applied in order to mitigate this drawback [20], although this is not always possible.

    Nevertheless, it is in the uncertainty analysis where global optimization methods have their principal strength. During the execution, a set of models is generated covering the parameter space following the rules of each method (genetic algorithms, simulated annealing, particle swarm optimization, etc.), i.e., the cost function topography is sampled following certain rules and focusing on the equivalence regions containing the minima. Then, by selecting the models with a misfit below a predefined tolerance, the real nonlinear equivalence regions can be depicted. In the following sections, we present a methodology designed to select the corresponding models for a correct uncertainty representation in a gravity inversion method applied to sedimentary basins.
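    As an illustration of this idea, the sketch below implements a minimal, generic particle swarm optimizer that stores every evaluated model together with its misfit, so that the models below a given tolerance can be extracted afterwards. It is not the CP-PSO variant used later in the paper; all parameter values and names are illustrative assumptions.

```python
# Hedged sketch: a generic PSO that both optimizes and samples the cost function,
# keeping every evaluated model and its misfit for posterior uncertainty analysis.
import numpy as np

def pso_sample(cost, lower, upper, n_particles=50, n_iter=100,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = lower.size
    pos = rng.uniform(lower, upper, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    models, misfits = [pos.copy()], [pbest_cost.copy()]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lower, upper)          # keep particles in the search space
        cst = np.array([cost(p) for p in pos])
        improved = cst < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cst[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
        models.append(pos.copy()); misfits.append(cst.copy())
    return np.vstack(models), np.concatenate(misfits), gbest

# After the run, the equivalent models are simply those below the chosen tolerance:
# sel = models[misfits <= tol]
```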

    In [21], GRAVPSO2D, a software package for the solution of the two-dimensional gravity inversion problem in sedimentary basins based on the particle swarm optimization (PSO) algorithm, was presented (other studies on this topic have been published in recent years, such as [22,23,24,25]). The 2D approximation can be used in gravimetry when one dimension of the anomalous body is much larger than the other two, which is common in sedimentary basins, where the depth is generally less than the horizontal dimensions (Nettleton suggested in [26, p. 206] that a $ 2\times $ or $ 3\times $ factor is enough). The basin is then modeled as a set of rectangles [27,28,29,30], and after the prescription of density values, the rectangles' thicknesses (i.e., the sediment depths) are determined. The observations are gravity anomalies derived from gravity measurements at the terrain surface.

    Figure 2 shows a schematic representation of the model. The basin, as stated in the previous paragraph, is composed of a set of $ M $ rectangles with known density contrast (it can be constant or variable in depth), whose lower sides represent the interface between the basement and the sediments, and their upper sides are coincident with the terrain surface (which does not need to be planar as in the example of Figure 2). The observed points form a set of $ N $ elements located at the terrain surface level. The gravitational anomaly $ \Delta g $ generated by a set of $ M $ rectangles over each observation point is

    $ \Delta g_i = \sum\limits_{j=1}^{M} F\left(\Delta\rho_j, \mathbf{r}_{ij}, z_j\right), \quad i = 1, 2, \dots, N, $     (3.1)
    Figure 2.  Two-dimensional model of a sedimentary basin as a set of rectangles.

    where $ \Delta\rho_j $ is the density contrast ($ \rho_\text{sediments}-\rho_\text{basement} $) of each rectangle $ j $, $ \mathbf{r}_{ij} $ is the position of the top side of each rectangle $ j $ with respect to each point $ i $, and $ z_j $ is the $ z $ coordinate of the lower side of each rectangle $ j $ (the problem unknowns). The expression for $ F\left(\Delta\rho_j, \mathbf{r}_{ij}, z_j\right) $ can be seen in [31, pp. 524-525] or [32] and is nonlinear with respect to $ \mathbf{r} $ and $ z $. Equation (3.1) can be extended with a term that takes into account a regional trend in the gravity anomaly, and, in order to reduce the non-uniqueness of the problem, several kinds of constraints can be applied to the prisms, such as fixed depths (data from boreholes, for example) [21]. In this paper we work with Eq (3.1) for the sake of simplicity, since the inclusion of other terms does not affect the discussion. Also, a filtering step is needed in order to produce geologically meaningful results (see [21] for details).
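    A sketch of a forward model of this type is given below: it computes the vertical attraction of a set of 2D rectangles using the standard closed-form expression for a two-dimensional prism. It is equivalent in purpose to the function $ F $ cited from [31,32], although it is not necessarily the exact expression implemented in GRAVPSO2D; coordinates, units, and names are assumptions made for the example.

```python
# Hedged sketch of an Eq (3.1)-style forward model: vertical gravity attraction of M
# two-dimensional rectangles (infinite strike length) with constant density contrast.
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_rectangle(xo, zo, x1, x2, z1, z2, drho):
    """Vertical attraction (mGal) at (xo, zo) of a 2D rectangle spanning [x1, x2]
    horizontally and [z1, z2] in depth (z positive downwards)."""
    def f(x, z):
        # double antiderivative of z / (x^2 + z^2); arctan2 handles z -> 0 safely
        r2 = x * x + z * z + 1e-300
        return z * np.arctan2(x, z) + 0.5 * x * np.log(r2)
    xa, xb = x1 - xo, x2 - xo          # horizontal offsets w.r.t. the observation point
    za, zb = z1 - zo, z2 - zo          # depth offsets (observation usually at zo = 0)
    gz = 2.0 * G * drho * (f(xb, zb) - f(xb, za) - f(xa, zb) + f(xa, za))
    return gz * 1e5                    # m/s^2 -> mGal

def anomaly(x_obs, x_edges, depths, drho):
    """Sum over M rectangles with tops at z = 0 and bottoms at `depths` (Eq (3.1))."""
    dg = np.zeros_like(x_obs, dtype=float)
    for j in range(len(depths)):
        dg += gz_rectangle(x_obs, 0.0, x_edges[j], x_edges[j + 1], 0.0, depths[j], drho)
    return dg
```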

    The first step before the PSO execution is the selection of the search space for the parameters. In our case, this means the definition of upper and lower limits for the prisms' $ z $ coordinates in Figure 2. During the execution of PSO, a set of models is generated, all enclosed inside the search space. Among these models, the one with the lowest misfit (Eq (2.2)) is taken as the estimate of the global minimum of the cost function. All the other models are also at our disposal, so posterior statistics can be computed.

    Figure 3 shows a synthetic example where a basin is modeled as a set of $ 80 $ rectangles of 1000 m horizontal width, with upper sides at height 0 m and density contrast $ \Delta\rho = -500 \mathrm{~kg} \mathrm{~m}^{-3} $. Their true depths are shown as the solid black line in Figure 3 (left). The horizontal positions of a set of $ 70 $ points at height 0 m were randomly generated, and the gravity anomaly induced by the basin was computed at each one. These observations were then contaminated with random white noise. In order to recover the true model, a population of $ 400 $ models with density contrast $ \Delta\rho = -500 \mathrm{~kg} \mathrm{~m}^{-3} $ was generated inside the search bounds, selected wide enough to contain all geologically plausible models (red dotted lines in Figure 3, left), and $ 200 $ iterations were performed, so a total of 80 000 models was created. The CP-PSO family member was used in its cloud version, i.e., no parameter tuning needs to be configured, as each particle has its own automatically chosen PSO parameters [33].
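    Purely as an illustration, the following sketch ties the two previous snippets together for a synthetic test of this kind; the true depth profile, the noise level, and the search bounds used here are invented for the example and do not reproduce the exact configuration of Figure 3.

```python
# Hedged usage sketch combining the forward model (`anomaly`) and the sampler
# (`pso_sample`) from the previous snippets. Values below are assumptions.
import numpy as np

rng = np.random.default_rng(1)
M, drho = 80, -500.0                                   # 80 rectangles, -500 kg/m^3
x_edges = np.arange(M + 1) * 1000.0                    # 1000 m wide rectangles
true_depths = 1000.0 + 500.0 * np.sin(np.linspace(0, np.pi, M))   # hypothetical true basin
x_obs = np.sort(rng.uniform(x_edges[0], x_edges[-1], 70))

d_obs = anomaly(x_obs, x_edges, true_depths, drho)
d_obs *= 1.0 + 0.05 * rng.standard_normal(x_obs.size)  # assumed 5% relative white noise

def misfit(depths):   # relative misfit (%) used as the PSO cost, in the sense of Eq (2.2)
    r = anomaly(x_obs, x_edges, depths, drho) - d_obs
    return 100 * np.linalg.norm(r) / np.linalg.norm(d_obs)

lower, upper = np.zeros(M), np.full(M, 3000.0)         # assumed search bounds for depths
# Note: a pure-Python run of this size is slow; the real code vectorizes the forward step.
models, misfits, best = pso_sample(misfit, lower, upper, n_particles=400, n_iter=200)
```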

    Figure 3.  Synthetic example of a sedimentary basin. Left: general overview of the solution containing the best estimated model together with the true model and residuals (red dots for the best and black dots for the true model), and the uncertainty region composed of all generated models with relative misfit below 10% (51 612). Right: empirical cumulative distribution function and histogram for the depth of rectangle number $ 54 $ in the profile.

    The inversion took about 35 s to complete, * and the best generated model, shown in Figure 3 (left) as a blue line, has a relative misfit of 4.19%, which is an estimation of the global minimum of the cost function. This misfit value is smaller than that corresponding to the true model (black line, labeled as reference model in the figure, misfit of 5.44%), which is an effect of the presence of noise in the data [10]. Still, we are interested in all models with a relative misfit below a predefined tolerance, 10% for example, so all of them are extracted from the whole set of 80 000 generated models. In our case the number of models belonging to the equivalence regions with relative misfit below 10% is 51 612, and Figure 3 (right) shows the statistics that can be computed for each element (rectangle number $ 54 $ as an example in this case). With the empirical cumulative distribution function, ECDF (upper right panel), the probability of a depth range can be easily computed, while the histogram (lower right panel), which is actually another way to represent the ECDF, allows us to analyze the frequencies of the different depths more directly. In this example we can see in the histogram the depth of the best model together with the depths of the mean, median, and mode models for rectangle $ 54 $. Such models do not have to be coincident, and, as can be seen, the best model does not have to be composed of the ensemble of the most frequent depths in each rectangle.

    *The computations were performed on a laptop equipped with an Intel Core i7-4800MQ CPU with four cores at 2.70 GHz, running Debian GNU/Linux and MATLAB R2017a. The function for the direct problem solution was written in C and parallelized with OpenMP, so all four cores of the CPU were used. After several runs we can conclude that the time spent in the uncertainty analysis is about $ 10\% $ of the total time, showing that the forward modeling step is the bottleneck of any global search algorithm.

    The range of variation of each rectangle depth is shown in Figure 3 (left) as yellow bars. These bars could be seen as the limits of an equiprobable range, but this is not true in the general case, as shown in the lower right panel of Figure 3. Therefore, ECDFs and histograms must be carefully inspected in order to reveal the detailed behavior of the problem solution, which highlights the importance of both as a complement to the global view.
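    The per-parameter statistics described above can be sketched as follows; array layouts and names are assumptions (one row per generated model, one column per rectangle depth).

```python
# Hedged sketch: from the models below the tolerance, build the empirical CDF and a
# central probability range for the depth of one rectangle.
import numpy as np

def equivalent_models(models, misfits, tol):
    """Keep only the models whose relative misfit (%) is below `tol`."""
    return models[misfits <= tol]

def ecdf(depths):
    """Sorted depths and their empirical cumulative probabilities."""
    x = np.sort(depths)
    p = np.arange(1, x.size + 1) / x.size
    return x, p

def depth_range(depths, prob=0.8):
    """Depth interval containing the central `prob` fraction of the equivalent models."""
    lo, hi = np.quantile(depths, [(1 - prob) / 2, (1 + prob) / 2])
    return lo, hi

# Example for one rectangle (column index 53 for "rectangle number 54"):
# sel = equivalent_models(models, misfits, tol=10.0)
# x, p = ecdf(sel[:, 53])
# print(depth_range(sel[:, 53], prob=0.8))
```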

    The uncertainty analysis for this kind of problem is sometimes absent in the literature or, in most cases, restricted to the comparison of the inversion results with other information such as previous models, data from boreholes, and/or other geological information. In other cases (see, for example, [22,23]) it is performed by computing the standard deviation for each rectangle, taking into account all models with misfit below the prescribed tolerance, i.e., using the range of variation depicted in Figure 3 (left). On the other hand, [25] makes the uncertainty assessment via principal component analysis, building the 2D cost function topography corresponding to the two principal components of the solutions. Other authors make cross-plots between different unknowns in order to see their correlation, such as in [34], although in problems with a high number of parameters it may not be practical due to the high number of combinations between them (in problems involving few unknowns it can be a valid option, such as in [35]).

    As stated in the previous section, the yellow bars in Figure 3 (left) show the limits between which each rectangle depth varies over all the models below the 10% relative misfit tolerance. Nevertheless, this set can contain models belonging to several disjoint equivalence regions, as we are dealing with a nonlinear problem. This is proven by the fact that, in our example, the models composed of the ensembles of all upper and lower limits of the yellow bars have relative misfits of 31.73% and 33.64%, greater than the tolerance of 10%. However, in order to correctly depict an estimation of the equivalence region of 10% relative misfit in which the global minimum is contained, we must extract only the models belonging to this region. We have developed an algorithm to do so, based on the information provided by the empirical cumulative distribution functions previously computed for each rectangle.

    The algorithm works on the ECDFs computed for each rectangle using the models below the predefined tolerance (51 612 models in our exercise). It can be summarized in the following steps (we search for the upper bound depths; for the lower bound the framework is the same except for minor changes in some comparisons):

    $ (1) $ Select all models of relative misfit around the objective equivalence region: for example, between 9% and 11% if we are looking for the 10% region. From these models, select those completely above the best estimated model.

    $ (2) $ Compute the Euclidean distance from each of the previously selected models to the estimated best model and select a small set ($ 10 $ models, for example) of the closest ones. Compute the mean of these models in order to obtain a first approximation to the upper bound of the equivalence region, $ \texttt{upper_bound} $. The misfit of this model will not be exactly the searched value. We could use the closest model as the first approximation, but experience has demonstrated that the algorithm is more efficient when the mean is used.

    $ (3) $ Extract from ECDFs the probability of depth for each rectangle in the previous upper bound candidate model.

    $ (4) $ Add a small probability increment, $ \texttt{IncProb} $ (experience has shown that 0.01 is a good value), to the extracted probabilities and interpolate in the ECDFs their corresponding depths. We have a new $ \texttt{upper_bound} $ candidate. Compute its new misfit.

    $ (5) $ Iterate while the misfit of the upper bound differs from the misfit of the equivalence region to search (in practice, a small tolerance must be employed, as it is almost impossible to get an upper bound misfit identical to any prescribed value).

    (a) Update the value of $ \texttt{IncProb} $ to $ \texttt{IncProb/1.25} $ (1.25 is a good value, again estimated from experience after performing many inversions in very different environments, with very different numbers of observations and basin models) in order to gradually approach convergence. If $ \texttt{IncProb} $ reaches a very small value ($ 10^{-8} $, for example), its original value is restored.

    (b) If the misfit of $ \texttt{upper_bound} $ is lower than that of the equivalence region to search, we add the $ \texttt{IncProb} $ value to the candidate probabilities again. If it is greater, we subtract the $ \texttt{IncProb} $ value. We then interpolate new depth values with the updated probability values. Increments and decrements of probability do not imply increments or decrements of the same magnitude in depth, as each rectangle has its own empirical cumulative distribution function.

    (c) The $ \texttt{upper_bound} $ model must have for each rectangle a depth value lower than that corresponding to the same rectangle in the best model, so if any of them is deeper, the value of the best model is assigned.

    (d) Compute the new misfit of $ \texttt{upper_bound} $.

    It must be stressed that taking values different from those presented in the previous algorithm description for the $ \texttt{IncProb} $ parameter and its updating factor does not mean that the algorithm fails, but rather that it will execute less efficiently. These values, as explained, come from experience and should be understood as good empirical, though not necessarily optimal, choices. The pseudocode presented in Algorithm 1 shows the described steps in order to determine the depths of the upper bound.

    Algorithm 1: Upper bound of equivalence region containing the best model.
    Data: equivalent models, ECDFs, equivalence region to search
    Extract first candidate for upper bound and the probabilities of its depths;
    Add probability increment IncProb and compute new upper bound depths;
    Compute misfit for new depths;
    while misfit of upper_bound differs from the target misfit (within a small tolerance) do
        Update IncProb to IncProb/1.25; restore its original value if it becomes too small;
        If the misfit is lower than the target, add IncProb to the probabilities; otherwise subtract it;
        Interpolate new depths from the ECDFs, assign the best model depth to any rectangle that becomes deeper than it, and compute the new misfit of upper_bound;
    end
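    A possible realization of the described procedure is sketched below; it follows steps (1)–(5) as stated in the text, but the names, the interpolation details, and the stopping tolerance are illustrative assumptions rather than the actual GRAVPSO2D (MATLAB) implementation.

```python
# Hedged sketch of the upper-bound search of Algorithm 1. `misfit_fn` is assumed to
# return the relative misfit (%) of a depth model; ecdf_x[j], ecdf_p[j] are the sorted
# depths and cumulative probabilities of rectangle j.
import numpy as np

def upper_bound(ecdf_x, ecdf_p, first_candidate, best_model, misfit_fn,
                target=10.0, inc_prob0=0.01, factor=1.25, tol=1e-3,
                min_inc=1e-8, max_iter=10000):
    M = len(ecdf_x)
    # Step (3): probability of each rectangle depth in the first candidate model
    prob = np.array([np.interp(first_candidate[j], ecdf_x[j], ecdf_p[j]) for j in range(M)])
    inc, model = inc_prob0, first_candidate.copy()
    for _ in range(max_iter):
        misfit = misfit_fn(model)
        if abs(misfit - target) < tol:                 # step (5): stop near the target misfit
            break
        inc /= factor                                  # step (5a): shrink the increment ...
        if inc < min_inc:                              # ... and restore it if it gets too small
            inc = inc_prob0
        prob = np.clip(prob + inc if misfit < target else prob - inc, 0.0, 1.0)  # step (5b)
        # read the ECDFs "backwards" (probability -> depth) to get the new candidate
        model = np.array([np.interp(prob[j], ecdf_p[j], ecdf_x[j]) for j in range(M)])
        model = np.minimum(model, best_model)          # step (5c): stay shallower than best model
    return model
```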

    Figure 4 shows the same synthetic example as in Figure 3 after applying the described algorithm. We can see (left panel) how the 10% equivalence region is now narrower than the one in the previous example, as it corresponds to only one region, the one containing the best model. Its upper and lower bounds now have a relative misfit of 10%, and any other model inside these bounds has a misfit lower than this value. Of the original 51 612 models with relative misfit lower than 10%, only 16 194 are totally inside the bounds, which means that the remaining 35 418 models belong to other equivalence regions containing local minima of the cost function, as they are not fully contained in the bounds (this demonstrates the excellent exploratory character of the CP-PSO family member employed in the inversion). In the lower right panel of Figure 4, we can see the histogram of rectangle $ 54 $, where the range in depths has diminished from approximately $ [925\text{ m}, 1650\text{ m}] $ in Figure 3 to approximately $ [1220\text{ m}, 1400\text{ m}] $ (if we take into account the 80% most frequent depths, the ranges change from $ [1190\text{ m}, 1450\text{ m}] $ to $ [1290\text{ m}, 1340\text{ m}] $). Also, the histogram is not as symmetrical as the one in Figure 3, showing that in nonlinear inverse problems the minimum of the cost function is not, in general, placed at the geometrical center of the equivalence region (see also Figure 1).

    Figure 4.  Synthetic example of a sedimentary basin. Left: general overview of the solution containing the best estimated model together with the true model and residuals (red dots for the best and black dots for the true model), and the equivalence region of 10% relative misfit, which entirely encloses 16 194 of the 51 612 models below the 10% tolerance. Right: empirical cumulative distribution function and histogram for the depth of rectangle number $ 54 $.

    In this section, we apply our methodology to a gravity profile in the Argelès-Gazost basin, an ancient glacial valley located in the French Pyrenees that is now occupied by the Gave de Pau River. In this area, formed by a glacier that belonged to a larger ice tongue overlying this Pyrenean region in the Quaternary period, various gravity surveys and inversions have been carried out in order to determine the geometry of the basement relief [36,37,38]. A 3D inversion using PSO was also carried out previously in [39], and in this work we have selected a profile crossing the deepest area in order to check the presented methodology, as well as to compare the 3D and 2D approaches.

    The Argelès-Gazost basin is a flat and elongated valley located at an elevation of about 450 m above mean sea level, filled by Quaternary deposits and surrounded by limestones and flysches [38,40]. It has a length of about $ 7\text{ km} $ along a NNW direction and a width varying between $ 1\text{ km} $ and $ 2\text{ km} $ (see Figure 5, left). Its deepest area is located in the southern part, between the towns of Argelès-Gazost and Pierrefitte-Nestalas. In this area, there are $ 9 $ gravity observations almost aligned forming a profile about 1600 m long, which we have used in the inversion (Figure 5, left, red line). The profile is located in such a way that the longest dimension of the basin is perpendicular to it, so this situation is optimal for a 2D inversion. In order to perform the inversion as close as possible to [39], we have used the same value for density contrast, $ \Delta\rho = -600 \mathrm{~kg} \mathrm{~m}^{-3} $ (also the same value as the one employed by Moussirou in [36] and Perrouty in [37,38]). As in [39], a regional trend for the gravity anomalies was modeled using a plane, and we have applied the same model for our profile in this experiment in order to obtain the same residual anomaly as for the 3D inversion. The basin was modeled as a set of $ 12 $ rectangles of 150 m width, and the CP-PSO family member was employed in the inversion using a population of $ 600 $ models and $ 400 $ iterations, so a total of 240 000 models was generated.

    Figure 5.  Left: Argelès-Gazost basin localization and gravity observations (black dots) employed in [39]. In this study a profile of $ 9 $ points crossing the deepest area (red line), between Argelès-Gazost and Pierrefitte-Nestalas, is used in the inversion. Right: best model and 15% equivalence region (upper), and ECDF and histogram for rectangle number $ 4 $ (lower).

    Figure 5 (right) shows the results of the inversion for the profile. The best model reached a relative misfit of 1.00%, which is lower than the misfit obtained in [39] for the 3D inversion (9.72%). This was expected, as the 3D inversion comprised the whole basin, while our profile is small and has a very good concentration of observations. The deepest point in the profile corresponds to rectangle number $ 4 $ and reaches a depth of 335 m for the best model, with a range of $ [280\text{ m}, 370\text{ m}] $ for the 15% equivalence region (the range $ [315\text{ m}, 350\text{ m}] $ comprises 80% of the models). These values are almost the same as those obtained in the 3D inversion, where the maximum estimated depth was 333 m with a range of $ [288\text{ m}, 379\text{ m}] $ for the same 15% equivalence region. A total of 91 565 models lie inside the 15% equivalence region containing the best model, indicating the good explorative character of the inversion, as the remaining 148 435 models explored the rest of the parameter space.

    Moussirou obtained a maximum depth value of 300 m in [36] using 2.5D modeling based on [41], while Perrouty in [38] obtained a value of 232 m using VPmg (Virtual Prism magnetic and gravity) inversion [42]. The value of 300 m by Moussirou lies within our depth range estimation for the 15% equivalence region, while the depth computed by Perrouty is outside this region. This is not due to the inversion method but to the regional trend elimination in the gravity signal, as in [38] a $ 3^\text{rd} $ degree surface was applied to model the trend, absorbing part of the residual Bouguer signal. As a consequence, the amplitude of the residual anomaly obtained in [38] reaches only −4.2 mGal, while in this work and in [39], where a plane was used to model the regional trend, the amplitude reaches −5.98 mGal, reflecting a deeper sediments-basement interface.

    Uncertainty analysis is a crucial task in any type of inverse problem. Due to the nature of the cost function topography in nonlinear problems, where multiple minima exist (in cases involving potential fields, there are infinitely many global minima) and the equivalence regions have arbitrarily curved shapes, local optimization methods based on the linearization of the forward operator lack the ability to produce precise uncertainty analysis of the results. Global optimization methods such as particle swarm optimization offer an excellent alternative to local ones, as they explore the entire parameter space and are not constrained by any initial model or by the linearization of the problem. They provide a collection of models, which is valuable information that must be carefully examined to perform uncertainty analysis.

    These models generally belong to various equivalence regions of the cost function, each of them with its local minimum, but our interest lies in the one containing the global minimum. Conducting statistics with the entire set of equivalent solutions can lead to erroneous results as models from different equivalence regions may be mixed. Therefore, a methodology for selecting the correct models, regardless of the inverse problem being addressed, must be developed. We have proposed an algorithm tailored for a gravity inverse problem to perform this selection based on the analysis of the empirical cumulative distribution function (ECDF) of each parameter. The proposed algorithm works for the gravity inversion problem in sedimentary basins and improves the efficiency of equivalence region detection compared to our previous version of the software. However, this does not imply that it is suitable for other problems, which must be carefully analyzed to develop the most appropriate procedure. We also emphasized the importance of analyzing not only the overall view of the solution but also the ECDF and the histogram of each individual parameter to illustrate its particular distribution.

    Finally, a real example was presented where a profile in the Argelès-Gazost basin was inverted, analyzed using the proposed methodology, and compared with previous results. The 2D inversion produces a result fully compatible with the 3D inversion previously performed, demonstrating that, under suitable conditions, the 2D approach to the gravity inverse problem applied to sedimentary basins is as feasible as 3D inversion.

    The proposed algorithm for uncertainty analysis in the two-dimensional gravity inverse problem applied to sedimentary basins has been implemented in versions $ \geq2 $ of GRAVPSO2D, which can be downloaded, together with a detailed 54-page user manual, from the web page of the Bureau Gravimétrique International: https://bgi.obs-mip.fr/catalogue/?uuid=9ce23226-bb75-462e-bd60-5997a768a359.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    J. L. G. Pallero acknowledges the support of the Universidad Politécnica de Madrid through a Programa Propio de Movilidad grant for a research stay in Toulouse in 2021. He also acknowledges the support of Laboratoire GET (Université de Toulouse, CNRS, IRD, CNES), the Bureau Gravimétrique International (BGI), and CNES, which allowed him to develop part of this research in Toulouse during a research stay in 2022 (work supported by CNES, CNRS, and IRD).

    Prof. M. Z. Fernández-Muñiz is the Guest Editor of special issue "New insights of the Application of Inverse Problems and Machine Learning in Science and Technology" for AIMS Mathematics. Prof. M. Z. Fernández-Muñiz was not involved in the editorial review and the decision to publish this article.

    The authors declare no conflicts of interest.
