    Notation:

    g: Gain around a feedback loop (dimensionless);

    G: Green's function (matrix) for atmospheric transport;

    M(t): Mass of carbon in the atmosphere at time t;

    p: Laplace transform variable (in years^{-1});

    p: Vector of parameters in terrestrial carbon model;

    R(t): Response function for carbon fluxes (dimensionless);

    S(t): Anthropogenic CO2 flux into the atmosphere (expressed in terms of mass of carbon);

    t: Time (in years);

    x: Vector of CO2 fluxes;

    X: Covariance matrix for CO2 observations;

    Y: Covariance matrix for prior estimates of fluxes;

    z: Vector of observations of concentrations;

    ψ: Airborne fraction. The proportion of CO2 emissions that remain in the atmosphere (dimensionless);

    Λ: Lagrange multiplier;

    Θ: Objective function.


    1. Introduction

    Studies of the global carbon cycle involve a broad range of sciences including oceanography, chemistry and economics and draw on mathematics, statistics and computer science [1]. Falkowski et al. [2] have noted that the complexity of the carbon cycle stands as a challenge to our understanding of the earth as a system. Canadell et al. [3] have illustrated the complexity of data and techniques involved in studying just one part of the system. Raupach et al. [4] reviewed the data requirements and uncertainties in terrestrial carbon studies. They argued that the data uncertainties are just as important as the actual data themselves. Enting et al. [5] gave an overview of uncertainties associated with combining multiple data streams to produce a regional decomposition of the global carbon cycle.

    In trying to extend these overviews, the present paper considers carbon cycle models within a modeling spectrum, and focuses on three canonical problems of estimation, described here as "projection", "calibration" and "deconvolution". The modeling frameworks are described in terms of a spectrum [6] defined as running from empirical "black box" models to reductionist "white box" models ("glass box" would have been a better analogy). Representative points on the spectrum are:

    ● curve fitting, including fits based on assuming constant airborne fraction;

    ● response functions as a representation of general linear systems;

    ● lumped models based on empirical and phenomenological considerations;

    ● highly resolved models, based on a reductionist analysis of processes.

    The application of the various types of model to carbon cycle studies is described in the following section.

    The three global scale problems can be defined in terms of a response function relation

    M(t) = M_0 + \int^t S(t')\, R(t-t')\, dt' \qquad (1)

    where M(t) is the mass of carbon in the atmosphere (typically in GtC, i.e. Pg Carbon), R(t) is a response function that gives the proportion of anthropogenic CO2 emissions remaining in the atmosphere after time t and S(t) is the anthropogenic emissions (and any other fluxes not included in the response R(t)). For relation (1), the three problems become:

    ● projection: determine M(t) given R(t), S(t);

    ● calibration: determine R(t) given M(t), S(t);

    ● deconvolution: determine S(t) given M(t), R(t).
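    Of the three, "projection" is a direct computation. As a concrete sketch, relation (1) can be evaluated as a discrete convolution; the response, emissions and initial mass below are illustrative toy values, not calibrated carbon cycle quantities:

```python
import numpy as np

# Illustrative sketch of the "projection" problem: given a response
# function R(t) and emissions S(t), relation (1) becomes a discrete
# convolution.  R, S and M0 are toy stand-ins, not calibrated values.

dt = 1.0                               # time step (years)
t = np.arange(0.0, 100.0, dt)
R = 0.25 + 0.75 * np.exp(-t / 30.0)    # toy response: 25% remains indefinitely
S = 2.0 * np.exp(0.02 * t)             # toy emissions growing at 2%/yr
M0 = 600.0                             # toy initial atmospheric mass (GtC)

# M(t) = M0 + sum_{t' <= t} S(t') R(t - t') dt
M = M0 + np.convolve(S, R)[: t.size] * dt
```

    The same discretization underlies the two inverse problems: "calibration" and "deconvolution" amount to solving this convolution for R or S, which is where the ill-conditioning enters.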

    Each of these can be cast as a problem in estimation. For "calibration" and "deconvolution", the estimation problem is commonly ill-conditioned. This simple description has glossed over the necessity of defining the bounds (and boundary conditions) of the system in question. This issue is addressed in Section 3, in connection with feedbacks.

    The "calibration" problem is discussed in Section 4 and "deconvolution" in Section 5, considering the type of temporal deconvolution arising from relation (1). The analogous problem of spatial deconvolution is discussed in Section 6. The "projection" problem is left until last (Section 7), since much of the uncertainty arises from uncertainties propagated from the calibration, and possibly from deconvolution.


    2. The spectrum


    2.1. Characteristics

    The concept of a spectrum of carbon cycle modeling [7] [see also 8] was based on a more general description by Karplus [6] of a spectrum of modeling running from statistical to mechanistic models. Examples from carbon cycle studies, running from "black box" models to "white box" models, are given in the following subsections. These regions of the spectrum are identified as "curve fitting", "response functions", "phenomenological" models and "mechanistic reductionist" models. The general principle [9] is that whatever cannot be modeled deterministically must be modeled statistically.

    Moving towards the "white box" end of the spectrum involves a number of tradeoffs. The "whiter" models incorporate greater input, often from universal laws such as conservation of mass. The use of more information should lead to a reduction in uncertainty. However, this implies a greater dependence on the correctness of how such input is used [10]. Application of standard statistical techniques of inference is often easier at the "black box" statistical modeling end of the spectrum. Apart from direct analysis of uncertainties, statistical analyses can be valuable in determining the information content of various data [11] and providing a tool for experimental design [12]. However, just as whiter box modeling will be conditional on the validity of structural information, purely statistical analyses involving parameter fitting can be equally susceptible to inappropriate parameterisation. It is for this sort of reason that Evans and Stark [13] recommend the use of non-parametric estimation in inverse problems.

    An intercomparison study [14] compared a range of approaches to calibrating an idealised non-linear terrestrial model. Unsurprisingly, the results showed little sensitivity to the fitting technique used in the calibration. The important differences arose from the statistical characteristics that were assumed in the statistical model. This confirmed the importance of having soundly-based statistical characterisations [4]. Any statistical analysis has to assume some statistical model [see for example 9]. Such models can be rejected (i.e. the assumption can be falsified), but never proved to be correct.


    2.2. Curve fitting

    An illustrative example is fitting the CO2 data, over multi-decadal timescales, with a relation of the form:

    M(t) = M_0 + A \exp(\alpha t) \qquad (2)

    estimating M_0, A and possibly α. This is a straightforward regression analysis.
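    As a sketch, the fit of equation (2) can be carried out with any nonlinear least-squares routine; here synthetic "observations" are generated and refitted, with all parameter values purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the regression in equation (2): fit M(t) = M0 + A exp(alpha t)
# to synthetic "observations".  All values are illustrative only.

def model(t, M0, A, alpha):
    return M0 + A * np.exp(alpha * t)

rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0)
true = (280.0, 15.0, 0.02)                       # toy M0, A, alpha
obs = model(t, *true) + rng.normal(0.0, 0.3, t.size)

params, cov = curve_fit(model, t, obs, p0=(300.0, 10.0, 0.01))
M0_hat = params[0]        # estimate of the pre-industrial level
```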

    This approach could give an estimate, M̂_0, of the pre-industrial level of atmospheric CO2. This was relevant before ice-core data provided direct measurements of CO2 in pre-industrial air [see for example 15].

    Additional detail of the carbon cycle is obtained by explicit consideration of the emissions that drive the CO2 changes. The simplest form is to characterize the relation in terms of an "airborne fraction", ψ, which is the proportion of anthropogenic emissions that remain in the atmosphere. If ψ is taken as a constant:

    M(t) = M_0 + \psi \int^t S(t')\, dt' \qquad (3)

    Fitting concentrations to integrated emissions is effectively using a regression analysis to "calibrate" a minimal model.
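    A minimal sketch of this regression: with ψ constant, M(t) is linear in the integrated emissions, so ψ and M_0 fall out of an ordinary least-squares fit. The value ψ = 0.55, the emissions series and the noise level below are purely illustrative:

```python
import numpy as np

# "Calibrating" the minimal model (3) by regression: M(t) is linear in
# cumulative emissions, so the slope estimates psi and the intercept M0.
# All numbers are synthetic stand-ins.

rng = np.random.default_rng(5)
t = np.arange(0.0, 60.0)
S = 2.0 * np.exp(0.02 * t)                    # toy emissions
cumS = np.cumsum(S)                           # integrated emissions
M = 600.0 + 0.55 * cumS + rng.normal(0.0, 0.5, t.size)   # toy "observations"

psi_hat, M0_hat = np.polyfit(cumS, M, 1)      # slope = psi, intercept = M0
```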

    Laurmann and Spreiter [16] argued that assuming a constant airborne fraction gave a good approximation for most 20th century carbon cycle calculations. This remains true almost 30 years later, to the extent that it remains debatable whether or not the airborne fraction has changed [17,and references therein]. The main exception was that assuming a constant airborne fraction was inadequate for the "deconvolution" calculation of unknown source contributions. This reflects the general characteristic of inverse calculations being sensitive to errors and inaccuracies in model as well as data.


    2.3. Response functions

    The most general (causal) linear relation between emissions and concentrations is in terms of a response function, R(t), that gives the proportion of a CO2 emission that remains in the atmosphere at time t after emission. The integrated effect of emissions is thus:

    M(t) = M_0 + \int_0^t S(t')\, R(t-t')\, dt' \qquad (4)

    More generally, a response function is the Green's function for a linear system with no intrinsic time-dependence. Response function modeling of the carbon cycle was introduced by Oeschger and Heimann [18] in order to emphasise how interpretation of the present state of the carbon cycle depends on the past history.

    For emissions growth exp(λ_f t) the airborne fraction is given by

    \psi = \lambda_f \int_0^{\infty} R(t) \exp(-\lambda_f t)\, dt \qquad (5)

    apart from the effects of initial conditions.
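    Equation (5) is easy to check numerically. For an assumed two-term response R(t) = a + (1 − a)exp(−t/τ), the integral has the closed form ψ = a + (1 − a)λ_f/(λ_f + 1/τ); the parameter values below are illustrative only:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of equation (5) for a toy response function
# R(t) = a + (1 - a) exp(-t/tau).  Parameters are illustrative.

a, tau, lam = 0.25, 30.0, 0.02        # lam is the emissions growth rate
R = lambda t: a + (1 - a) * np.exp(-t / tau)

psi, _ = quad(lambda t: lam * R(t) * np.exp(-lam * t), 0.0, np.inf)
psi_exact = a + (1 - a) * lam / (lam + 1.0 / tau)   # closed form
```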

    The response function for CO2 emissions plays a central role in the definition of the "global warming potential" that is used to define the relative importance of emissions of different greenhouse gases [19,and references therein].

    Response functions can also be used to represent sub-systems within more complex models [20,21,22,23].

    The analysis of how different components of the carbon cycle interact, in terms of how their response functions combine, can be greatly simplified by the use of Laplace transforms [24] (see also Section 3 below).

    A range of statistical analyses of models expressed in terms of response functions can be undertaken using the formalism of the Kalman filter [25].


    2.4. Lumped models

    Models that reflect multiple aspects of the carbon cycle, together with their interactions, provide scope for incorporating more diverse data into carbon cycle analyses. Such models are often termed empirical or phenomenological.

    Geochemical box models use a compartmental structure to represent mass transfers between (notionally) well-mixed reservoirs. For the carbon cycle, this allows model calibration using 14C (and to a lesser extent 13C). However, 20th century CO2 data mainly constrain one specific statistical moment of the CO2 response (expressed as r(p = 0.02) in terms of the Laplace transform) [11].

    The ill-conditioned nature of model calibration in response functions extends to the calibration of lumped models. The potential improvement from having more types of data can be offset by the increased uncertainty from having more model unknowns. Enting and Pearman [26] addressed this difficulty by using a "constrained inversion" calibration, adding a Bayesian prior constraint on parameter values.

    A study by Young et al. [27] gives an example of the utility of comparing modeling at different points on the model spectrum. That study, in spite of using fewer data than corresponding lumped model studies, claimed, with high statistical significance, to have excluded any long-term residual contribution from CO2 emissions identified in other modeling. The discrepancy was subsequently identified as due to the neglect of data covariance: Young et al. treated annual values from a spline fit to more widely spaced observations as independent data.


    2.5. Highly Resolved Earth System Models

    Models of the full earth system have been developed by successive coupling of models of particular subsystems such as atmospheric dynamics, ocean dynamics, land surface interactions, geochemical cycles, etc. The aim of such modeling is to make the models fully reductionist, with parameters derived from universal physical, chemical and biological principles. To the extent that this can be achieved, the calibration problem disappears. In reality, some degree of empirical parameterization is usually needed, although this may be based on empirical representations of many parts of the earth system and not just the carbon cycle. A related class of models is "Earth System Models of Intermediate Complexity" (EMICs). These achieve computational efficiency at the expense of detailed resolution.


    2.6. Hybrid

    Along the path to developing full earth system models, an important approach was to model one part of the system with a detailed mechanistic model and use observed data as a boundary condition representing other components of the earth system. An alternative was to use a "black box" representation of one part of the system, coupled to more detailed representations of other components. An example is the Bern-CC model which had a simple ocean model and a detailed dynamical vegetation model, forced by detailed climate forcing [22].

    MAGICC is a simple climate model that has evolved over many years. The current version is version 6 [23]. The simple representation of the climate is driven by simple models of forcing by CO2 and other radiatively active trace atmospheric constituents. Some components use response functions based on other models. MAGICC does not model carbon isotopes directly, but isotopic constraints can be incorporated by emulating other models. Bodman [28] [see also 29] has recently undertaken new calibration studies, mainly drawing on recent ocean temperature data.


    3. Feedbacks

    The starting point for modeling any system is to identify the components of the system, defining its boundaries and, where appropriate, the applicable boundary conditions. For the carbon cycle, Sundquist [30] describes a hierarchy of systems, distinguished by time-scales. For each particular timescale, the faster components are taken as being in equilibrium and treated as a single component, while the slower components are treated as fixed boundary conditions.

    Feedback refers to the situation where the output, y, of a system is added to the input, x. In the linear case, it can be characterized by a scale factor, g, called the gain. Thus

    y = A x \;\longrightarrow\; y = A(x + g y) \qquad (6)

    so that

    y = \frac{A}{1-g}\, x \qquad (7)

    giving an amplification of the input by a factor of 1/(1−g). If an additional feedback g′y occurs, the result is

    y = A(x + g y + g' y) = \frac{A}{1-g-g'}\, x \qquad (8)

    For multiple linear feedbacks, the gains add.
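    A small numerical illustration of equations (6)-(8): fixed-point iteration of y = A(x + gy) reproduces the closed-form amplification, and two feedbacks combine by adding their gains in the denominator (all values illustrative):

```python
# Numerical illustration of equations (6)-(8).  Iterating y <- A(x + g*y)
# converges to the closed form A*x/(1 - g); with two feedbacks the gains
# simply add in the denominator.  Values are illustrative.

def amplification(*gains):
    """Closed-form amplification factor 1/(1 - sum of gains)."""
    return 1.0 / (1.0 - sum(gains))

A, x, g, g2 = 1.0, 1.0, 0.2, 0.1

y = 0.0
for _ in range(200):                  # fixed-point iteration of y = A(x + g*y)
    y = A * (x + g * y)

y_closed = A * x * amplification(g)        # = A*x/(1 - g)
y_two = A * x * amplification(g, g2)       # = A*x/(1 - g - g2)
```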

    Since feedbacks are defined in terms of inputs and outputs, whether a process is or is not called a feedback depends on how the system boundaries are defined.

    For general linear systems, such instantaneous relations need to be replaced by relations between convolution operators defined by response functions. As noted above, the Laplace transform formalism provides a convenient and compact way of describing such relations. Following Enting [24], we use uppercase letters to denote functions in the time domain and the corresponding lowercase letters to denote their Laplace transforms. The perturbation to the atmospheric carbon content is Q(t) = M(t) − M_0. Mass balance gives Q̇ as a sum of fluxes

    \dot{Q} = \sum_j \Phi_j \qquad (9)

    which has the Laplace transform

    p\, q(p) = \sum_j \phi_j(p) \qquad (10)

    This can be written in the form of a response function relation as

    q(p) = r(p) \Big[ \sum_j \phi_j(p) \Big] \quad \text{with} \quad r(p) = \frac{1}{p} \qquad (11)

    where 1/p is the Laplace transform of the integration operator.

    If we split off the net atmosphere-ocean flux as

    q(p) = r(p) \Big[ \phi_O(p) + \sum_j \phi_j(p) \Big] \qquad (12)

    and write the integrated atmosphere-ocean flux as a response to atmospheric change

    \phi_O(p)/p = -\beta_O(p)\, q(p) \qquad (13)

    then we obtain

    q(p) = \frac{r(p)}{1 + p\, r(p)\, \beta_O(p)} \Big[ \sum_j \phi_j(p) \Big] = r_O(p) \Big[ \sum_j \phi_j(p) \Big] \qquad (14)

    implying a response function RO with Laplace transform rO given by

    r_O(p) = \frac{1}{p + p\, \beta_O(p)} \qquad (15)

    This form of ocean response, R_O, was used by Oeschger and Heimann [18], introducing the use of response functions into carbon cycle studies.

    Similarly if atmosphere-land fluxes ΦL are treated as distinct, and modeled as a response to atmospheric concentration,

    q(p) = r_O(p) \Big[ -p\, \beta_L(p)\, q(p) + \sum_j \phi_j(p) \Big] = r_{O+L}(p) \Big[ \sum_j \phi_j(p) \Big] \qquad (16)

    with

    r_{O+L}(p) = \frac{r_O(p)}{1 + p\, r_O(p)\, \beta_L(p)} = \frac{1}{p + p\, \beta_O(p) + p\, \beta_L(p)} \qquad (17)

    The response βL(p) corresponds to the so-called 'CO2-fertilization'.
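    The algebra of equations (14)-(17) can be checked numerically. For constant (illustrative) responses β_O and β_L, eliminating the two feedback loops reproduces the combined transforms:

```python
import numpy as np

# Numerical check of the combination rules (14)-(17): for constant toy
# responses beta_O and beta_L, eliminating the feedback loops reproduces
# r_O = 1/(p + p*beta_O) and r_{O+L} = 1/(p + p*beta_O + p*beta_L).

p = np.logspace(-3, 1, 50)        # sample Laplace variables (yr^-1)
beta_O, beta_L = 1.5, 0.4         # illustrative constant responses

r = 1.0 / p                                  # transform of integration
rO = r / (1.0 + p * r * beta_O)              # equation (14)
rOL = rO / (1.0 + p * rO * beta_L)           # equation (17), first form
```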

    As emphasized above, the question of which, if any, of these processes is designated as a feedback depends on the definition of the system. Wigley and Raper [31] describe terrestrial uptake as a feedback, modifying the behavior of the atmosphere-ocean carbon system, while Willeit et al. [32] regard both the oceanic and terrestrial uptakes as negative feedbacks reducing the atmospheric accumulation of carbon.

    The Laplace transform formalism can be extended to the case where there is a flux driven by temperature change [8]. This is only a "feedback" when one models the combined carbon climate system with temperature responding to CO2 – otherwise temperature is just a prescribed forcing. The formalism is equivalent to that of Friedlingstein [33], with constants β and γ generalized to become Laplace transforms. This approach was used by Rubino et al. [34] to formulate a statistical regression analysis for analysing CO2 changes over the Little Ice Age. In diagnostic calculations, assuming a constant γ may be subject to limitations similar to that of assuming constant ψ (i.e. constant βO and βL in equation (17)).


    4. Calibration

    As described in the introduction, the calibration problem corresponds to estimating R(t). At the extreme "black box" end of the spectrum, estimating an airborne fraction is essentially only estimating a single weighted moment of R, as described by equation (5). Equation (3) was in terms of a cumulative airborne fraction ψ. It is also possible to define an instantaneous airborne fraction by

    \dot{M}(t) = \psi(t)\, S(t) \qquad (18)

    If S(t) increases exponentially, the two definitions are equivalent. In practice, ψ(t) needs to be determined as an average over some time interval. Recently the possibility of changing ψ(t) has been raised [17,and references therein]. As discussed by Gloor et al. [17], one must first ask whether the estimated changes are statistically significant, and then whether the changing estimates of ψ are consistent with S(t) departing from exponential. Only when these two aspects have been addressed, can changes in ψ(t) be interpreted as reflecting changes in the behavior of the carbon cycle, i.e. departures from the behavior given by a time-invariant response function.

    The problem of estimating the response function R(t) from observed data is extremely ill-conditioned. Enting [11] has noted that, since the emission growth has been close to exponential over the 20th century, the available information essentially only constrains a single moment of the response function. (This moment happens to correspond to the Laplace transform of R(t) at a single inverse time-scale p ≈ 0.02 yr^{-1}.) A small amount of additional information can be obtained by adding the constraints R(t) ≥ 0 and Ṙ(t) ≤ 0 [35,36]. This corresponds to assuming that the carbon cycle can be represented as a compartmental model, but not making any further assumptions about the model structure.

    Given the limited scope for estimating R(t) directly from observations, estimates required for use in defining global warming potentials have been obtained by analysing the behaviors of more reductionist models [19]. Note however, that the results reported by Joos et al. [19] are, for the most part, calculated from models that include the climate-to-carbon feedback.

    Moving away from the black box end of the spectrum, there is a class of phenomenological models that represent the carbon cycle in terms of discrete reservoirs. As noted earlier, this can be done in terms of a range of timescales [30]; the present discussion addresses the decadal to millennial aspects in terms of the atmosphere, oceans and biosphere. For these timescales, isotopic data are particularly valuable. Broecker et al. [37] emphasized the importance of 14C, especially before ice-core CO2 data were available [15]. The 14C data provide information both on the long timescales of 14C decay (with a 5730 year half-life) and the shorter timescales associated with the movement through the system of a pulse of 14C from nuclear weapons testing. The 14C data have been modeled in several ways. Often the atmospheric 14C level is treated as an observed boundary condition. More detailed analyses treat the 14C as a response to production: naturally from cosmic rays and artificially from nuclear weapons testing and power production.

    Experience with the calibration of such reservoir models shows that the calibration is ill-conditioned. This difficulty was addressed by Enting and Pearman [26] by adopting empirical estimates of the various model parameters. These were used in Bayesian analysis with numerical least squares fitting of an objective function, Θ, that included both these prior parameter estimates and the available observations of carbon cycle behavior.

    More fully reductionist models can be used to represent one or more components of the system.

    Typically such models are tuned to agree with observations, but generally formal statistical calibration techniques are not used. Comparison studies, termed "intercomparisons", are commonly undertaken to compare alternative models in greater detail than simply comparing their behavior to available observations.

    In carbon cycle studies, atmospheric models are mainly used in the form of transport models for spatial deconvolution (see Section 6).

    Ocean General Circulation Models are used to determine ocean carbon uptake, but, as noted above, they are typically tuned but not subject to formal statistical calibration. The tuning can, and usually does, incorporate data from tracers other than those involved in the carbon cycle. These reductionist models are thus drawing on a wider observational base than is possible for models further towards the black box end of the spectrum.

    Models of the carbon balance of the terrestrial biota span a wide range of complexity. This runs from a simple response to atmospheric CO2 (the so-called CO2-fertilization) through to models that incorporate responses to climate and even ecosystem succession in response to climate change.

    A "calibration" problem that is of great current interest is that of determining the extent to which changing climate affects the carbon cycle. As noted in Section 3 this is only a "feedback", within a model that includes both carbon and climate; within models that are confined to the carbon cycle, climate forcing appears as a specified boundary condition as in the analysis by Rubino et al. [34].

    These distinctions are particularly important when considering changes over the 20th century and to the present. In this time period, temperatures have changed significantly and so the observed behavior of the carbon cycle will include effects from climate-to-carbon feedback. CO2 response functions used by the IPCC, and in the intercomparison by Joos et al. [19], include this feedback contribution.


    5. Deconvolution

    The deconvolution problem was defined in the introduction as determining an external CO2 source, S(t), given observations, M(t), and a carbon cycle model that can, for linear analysis, be characterized by a response, R(t). As with feedbacks, this description can be applied to a range of different analyses, depending on which CO2 fluxes are classed as "external" and which are "internal" fluxes characterized in terms of the response function.

    Enting and Mansbridge [38], using a Laplace transform description, noted that the deconvolution should have the same degree of numerical sensitivity as numerical differentiation where high frequency noise is amplified more than lower frequencies. For models expressed in terms of response functions, a deconvolution calculation including associated uncertainties can be performed using the formalism of the Kalman filter [25].

    For numerical models, the deconvolution can be implemented using what are termed "mass balance inversions". In a model defined by equations:

    \dot{x}_j = f_j(\{x_k\}, t) + s_j(t) \qquad (19)

    and, for cases where x_k(t) is known for some k and the corresponding s_k(t) is unknown, one can write

    s_j(t) = \dot{x}_j - f_j(\{x_k\}, t) \qquad (20)

    to determine s_j(t) as the model is integrated forward from initial conditions. This dependence, at time t, on the past integration over times t′ < t is a generalisation of the characteristics of the response function formalism.

    This calculation does not require linearity in the model and is easily implemented if the model is modularized to calculate the f_j. (Such modularization can also aid in finding an initial equilibrium state by numerical solution of f_j({x_k}, t_0) = 0 [26].)

    The ill-conditioning arises through the need to determine ẋ_j from observations of x_j. This type of deconvolution was performed by Siegenthaler and Oeschger [39], tracking the atmospheric CO2 history obtained from ice cores. Later work has extended the technique to what is called "double deconvolution" where the model tracks the atmospheric levels of both CO2 and 13CO2.
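    A sketch of a mass-balance inversion for a single reservoir with linear uptake, f(x) = −kx, so that equation (20) becomes s(t) = ẋ + kx. All parameter values are illustrative; the point is that the numerical differentiation step amplifies observational noise:

```python
import numpy as np

# Sketch of a mass-balance inversion, equations (19)-(20), for one
# reservoir with linear uptake f(x) = -k*x, so s(t) = xdot + k*x.
# The numerical differentiation of the observed record is what makes
# the inversion noise-sensitive.  All values are illustrative.

rng = np.random.default_rng(1)
dt, k = 1.0, 0.05
t = np.arange(0.0, 80.0, dt)
s_true = 1.0 + 0.05 * t                        # toy source history

x = np.zeros_like(t)                           # forward (Euler) integration
for i in range(1, t.size):
    x[i] = x[i - 1] + dt * (-k * x[i - 1] + s_true[i - 1])

x_obs = x + rng.normal(0.0, 0.05, x.size)      # add observational noise

s_clean = np.gradient(x, dt) + k * x           # inversion of exact record
s_noisy = np.gradient(x_obs, dt) + k * x_obs   # inversion of noisy record
```

    The clean record recovers s(t) closely, while the noisy record gives a visibly degraded estimate, illustrating the sensitivity noted above.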


    6. Spatial deconvolution


    6.1. The problem

    The space-time distribution of concentrations of long-lived tracers such as CO2 contains considerable information about sources and sinks. However atmospheric transport and mixing makes the estimation of source-sink strengths a challenging mathematical problem. The present author has described the problem and applications in a monograph [9] and a recent review, on which the present section is based [40].


    6.2. Techniques

    The initial CO2 inversion study by Bolin and Keeling [41] modeled atmospheric transport as a one-dimensional diffusion process, by taking averages over height and longitude. Early mass-balance inversions used two-dimensional models (averaged over longitude). Most recent inversions, particularly for CO2, have used full three-dimensional transport models.

    Very early in the development of trace gas inversions, two main types of calculation emerged:

    Mass-balance inversion In this technique, an atmospheric transport model is run with boundary conditions specified in terms of concentrations, and surface fluxes are deduced from the requirement of local conservation of mass. It involves applying the same substitution as going from equation (19) to equation (20) at all surface grid points. The technique was developed using two-dimensional models [42,43,44], but there has been some later work with 3-D models e.g. [45] requiring extensive interpolation of concentration data. Dargaville and Simmonds [46] developed a related "data-assimilation" technique, using multiple model runs to interpolate the concentration data.

    Synthesis inversion In this approach the source is broken into blocks with specified space-time distributions of fluxes multiplied by unknown scale-factors. For each block, concentrations resulting from unit sources are calculated using a transport model. The fluxes are then estimated via the scale factors x, by fitting the observed concentrations z with a linear combination of the calculated concentrations Gx. Early applications [47,48] used ad hoc fits. The term "synthesis" was apparently first used for this type of inversion in a study of methane [49]. The process was formalized as one of (Bayesian) statistical estimation by [50], so that the formalism calculates uncertainties for the estimated fluxes. The flux estimates, x̂, were the minimizers of

    \Theta = [G x - z]^T X\, [G x - z] + [x - x_{\rm prior}]^T Y\, [x - x_{\rm prior}] \qquad (21)

    where X is the covariance matrix of observational uncertainty and Y is the covariance matrix for prior uncertainties on the fluxes. (In this paper the superscript T denotes the matrix transpose (or adjoint in the case of operators) and the notation does not distinguish between row and column vectors, relying on the context to indicate the appropriate case.)
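    Setting the gradient of Θ to zero gives the usual normal equations, (G^T X G + Y) x̂ = G^T X z + Y x_prior. A minimal numerical sketch, with a small synthetic G standing in for the transport responses and X, Y acting as weight matrices, following the form of (21):

```python
import numpy as np

# Minimizer of the objective (21).  Setting its gradient to zero gives
#   (G^T X G + Y) x_hat = G^T X z + Y x_prior.
# G, z, X, Y and the prior are small synthetic stand-ins; X and Y are
# used here as weight matrices, following the form of (21).

rng = np.random.default_rng(2)
G = rng.normal(size=(12, 4))                  # toy "transport" responses
x_true = np.array([1.0, -0.5, 2.0, 0.3])      # toy fluxes to be recovered
z = G @ x_true + rng.normal(0.0, 0.01, 12)    # synthetic observations

X = np.eye(12) / 0.01**2                      # observational weights
Y = np.eye(4)                                 # prior weights
x_prior = np.zeros(4)

A = G.T @ X @ G + Y
b = G.T @ X @ z + Y @ x_prior
x_hat = np.linalg.solve(A, b)                 # Bayesian flux estimates
post_cov = np.linalg.inv(A)                   # posterior covariance of x_hat
```

    The inverse of the same matrix supplies the uncertainties on the estimated fluxes, which is the point of the Bayesian formalization.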

    Such Bayesian synthesis inversions have been applied to carbon cycle studies in two main cases: the so-called cyclostationary case, where each year has the same fluxes, and the fully time-dependent case, which can capture interannual variability. The latter case is very sensitive to the form of time-correlation in the input statistics. Studies of the role of statistical models used in Kalman filter formalisms [51] have helped illustrate how such sensitivity arises.


    6.3. Resolution

    The ill-conditioning in trace gas inversions has been recognized from the very first of such calculations [41] where Bolin and Keeling noted the problem and stated that "no details of the sources and sinks are reliable".

    The difficulty of the inversion can be quantified, in part, by determining how rapidly the error amplification grows as the resolution increases. This has been analysed for the dependence on latitudinal wave-number n. If the forward problem attenuates fluxes of wave-number n with an n^(-α) decay, then the inverse problem involves an error amplification which grows as n^α. Numerical modelling in 2-D shows n^1 growth in the extent to which errors with latitudinal wave-number n are amplified. In contrast, the Bolin and Keeling analysis implied n^2 growth, fully justifying the pessimistic assessment quoted above. The reason for the difference between the n^2 and n^1 growth was identified by finding that the n^2 growth was an artifact of approximating the boundary value problem as a vertically-averaged problem [52].
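    The wave-number argument can be made concrete with an idealized diagonal forward operator (an assumption for illustration, not a model from the literature): if mode n of the flux is attenuated as n^(-α), inverting mode-by-mode multiplies any observational error by n^α:

```python
import numpy as np

alpha = 1.0                            # attenuation exponent (illustrative)
n = np.arange(1, 33)                   # latitudinal wave-numbers 1..32

forward = n ** (-alpha)                # per-mode attenuation of fluxes
amplification = 1.0 / forward          # per-mode error growth on inversion

# A fixed observational error thus maps to a flux error that grows
# without bound as resolution (maximum wave-number) increases.
obs_error = 0.01
flux_error = obs_error * amplification
```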


    6.4. Adjoint methods

    Two difficulties are concealed by the mathematical elegance of the linear estimation equations that come from multi-variate normal distributions:

    ● the formalism does not apply if either the linear relation, z ≈ Gx, or the normality assumption is invalid;

    ● even if these assumptions apply, using the linear equations may not be the best way to find the minimum of the cost function.

    An alternative approach to data fitting is to use gradient techniques that aim to minimize the cost function directly, using generic minimization techniques, based on the gradient. For the linear case derived from (21),

    (1/2) ∇_x Θ = G^T X [Gx - z] + Y [x - x_prior]    (22)

    In terms of computational efficiency, Gx - z is easy to evaluate, since Gx is obtained by a single model integration with sources x. Multiplying this vector by X is a simple matrix-times-vector operation in general, and specific cases are often much simpler if X has a block-diagonal (or even diagonal) structure due to independence of various subsets of data. The difficulty comes from multiplying this vector by G^T. The direct approach, used in synthesis inversion, is to calculate the full matrix G by integrating the model with a set of basis functions. The number of such integrations is the dimensionality of x. For high-resolution inversions this becomes computationally infeasible.

    The alternative to such "brute force" is to use what is known as an adjoint model. This is a model whose operation corresponds to the effect of G^T. This adjoint model is then run with X[Gx - z] as its input. There are software tools [53] that take the computer code (which formally computes Gx for arbitrary x) and automatically generate "adjoint" code that calculates G^T z for arbitrary z. Adjoint techniques apply to linearized calculations as well as the fully-linear case described here. A recent review [54] relates the different perspectives of pure mathematics, computer science and various application areas. The term "adjoint" refers to the adjoint of a linear sensitivity operator. The computational process of constructing an "adjoint model" corresponds to a sparse matrix factorization of this adjoint operator.
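    As a sketch of the matrix-free evaluation of (22), with a random stand-in for the transport operator and trivially simple weights (all assumptions for illustration), the gradient needs only one forward run and one adjoint run, never forming G column-by-column:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_flux = 50, 200               # toy high-dimensional flux field

G = rng.normal(size=(n_obs, n_flux)) / n_obs

def forward(x):
    """One transport-model run: fluxes -> concentrations (here just G @ x)."""
    return G @ x

def adjoint(w):
    """One adjoint-model run: applies G^T without ever forming G explicitly."""
    return G.T @ w

X = np.eye(n_obs)                     # data weights (identity for simplicity)
Y = 1e-3 * np.eye(n_flux)             # weak prior standing in for smoothness
x_prior = np.zeros(n_flux)
z = rng.normal(size=n_obs)

def half_grad(x):
    # Equation (22): (1/2) grad Theta = G^T X [Gx - z] + Y [x - x_prior]
    return adjoint(X @ (forward(x) - z)) + Y @ (x - x_prior)
```

In a real application forward and adjoint would each be a model integration, and half_grad would be handed to a generic gradient-based minimizer.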

    These adjoint techniques have been the basis for obtaining high-resolution inversions, regularized, at least in part, by smoothness constraints rather than relying on fixing the spatial structure of low-resolution basis functions [55].


    6.5. Process inversion

    A greater understanding of the carbon cycle requires going beyond estimates of fluxes and investigating the underlying processes. Earlier reviews [9,40] have noted calculations that could be regarded as precursors of such process inversions.

    In the last few years, there have been several studies that could be classed as "genuine" process inversions. These have taken the form of estimating parameters in terrestrial models [56,57]. If one has fluxes as functions x(p) of parameters, p, then the gradient of the cost function is given by

    (1/2) ∇_p Θ = (∇_p x) G^T X [G x(p) - z]    (23)

    These calculations have been performed using the adjoint of the combined terrestrial+transport model, i.e. (∇_p x) G^T, to obtain the gradients used in the iterative minimization of the cost function.
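    A minimal sketch of (23), with a hypothetical terrestrial model whose fluxes are linear in the parameters, so that a Jacobian matrix J stands in for ∇_p x (all quantities below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_flux, n_par = 10, 40, 3

G = rng.normal(size=(n_obs, n_flux))  # stand-in transport responses
X = np.eye(n_obs)                     # data weights
z = rng.normal(size=n_obs)            # observations

# Hypothetical terrestrial model: fluxes linear in the parameters p,
# so its adjoint is simply J^T.
J = rng.normal(size=(n_flux, n_par))

def x_of_p(p):
    return J @ p

def half_grad_p(p):
    # Equation (23): (1/2) grad_p Theta = (grad_p x) G^T X [G x(p) - z],
    # evaluated adjoint-fashion: transport adjoint, then terrestrial adjoint.
    return J.T @ (G.T @ (X @ (G @ x_of_p(p) - z)))
```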


    7. Projections

    The main use of carbon cycle modeling of future times is in estimating future atmospheric concentrations, either as an end in itself or as a component of a broader earth system model. These calculations often take the form of "projections", i.e. conditional predictions, given a set of prescribed future emissions. Boschetti et al. [10] have argued that this conditionality is an essential aspect of modeling. Model results are dependent on such things as the validity of a reductionist analysis or the appropriateness of a statistical model. Projections into the future carry the assumption of the absence of unexpected changes in forcing, from such things as nuclear war, asteroid impact or global pandemics.

    Within these assumptions (the absence of unknown unknowns), it is still important to characterise the uncertainties (the known unknowns). One important aspect of model uncertainty comes from uncertainty in the calibration. Enting and Pearman [26] showed how a numerical implementation of the Lagrange multiplier formalism could be used to characterise the range of future projections (e.g. the CO2 concentration in 2100) consistent with a specified degree of calibration mismatch. The objective Θ was replaced by

    Θ′ = Θ - Λ C(2100)    (24)

    and the minimal values of Θ′ were explored for various values of Λ.
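    The mechanics of (24) can be sketched on a toy quadratic calibration problem; the Hessian A, best-fit x_best and sensitivity vector c below are illustrative assumptions, with the projected quantity taken as C(2100) = c·x:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 2.0]])   # Hessian of the calibration cost
x_best = np.array([1.0, -0.5])           # best-fit calibration
c = np.array([3.0, 1.0])                 # sensitivity of C(2100) to x

def theta(x):
    d = x - x_best
    return d @ A @ d

# Minimizing Theta' = Theta - Lam*C(2100) for each multiplier Lam:
# stationarity 2A(x - x_best) = Lam*c gives x = x_best + (Lam/2) A^{-1} c.
pairs = []
for lam in (-1.0, 0.0, 1.0):
    x = x_best + 0.5 * lam * np.linalg.solve(A, c)
    pairs.append((theta(x), c @ x))      # (calibration mismatch, projection)
```

Sweeping Λ traces out the range of the projected quantity consistent with any specified calibration mismatch.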

    As well as the need to propagate calibration uncertainty into the future, calculations where the calibration stage includes deconvolution of past sources need to ensure consistency in how the past is matched to the future [58].


    8. Conclusions

    The role of estimation and inversion in carbon cycle studies has evolved over time in response to new data and new understanding. Some key steps (updated from [9,40]) have been:

    ● before 1958: the growth in CO2 was not well established;

    ● post 1958: about half of the fossil CO2 was found to remain in the atmosphere;

    ● circa 1975: models calibrated using 14C indicated an imbalance which became known as the "missing sink";

    ● circa 1980: estimates of large emissions from deforestation (subsequently revised downwards) exacerbated these discrepancies;

    ● 1983: response function analysis [18] pointed out that the present atmospheric budget depends, in part, on past imbalances: the budget for a particular time cannot be considered in isolation from previous changes;

    ● 1985 onwards: ice-core data provided the requisite information about past changes, allowing a deconvolution of past emissions [39];

    ● circa 1990: conflicting views of the carbon budget: low ocean uptake vs. high ocean uptake from both inversion [48,47] and isotopic [59,60] studies;

    ● 1992: much of the discrepancy resolved by Sarmiento and Sundquist [61] clarifying distinctions between different types of atmospheric budget; a consistent analysis of the CO2/13CO2 budget was produced by Heimann and Maier-Reimer [62];

    ● circa 1995: 13C records indicated large interannual variability [63,64];

    ● mid 1990s: additional constraints were provided by measurements of trends in O2/N2 obtained by a variety of techniques [65,66,67];

    ● circa 1995: concern about the "rectifier effect", i.e. mean gradients induced by diurnal and seasonal covariance between fluxes and transport, and the extent to which inversions were biased due to models under-estimating the effect [68];

    ● circa 2000: increasing focus on processes rather than budgets;

    ● concern about feedbacks arising from disruption of the carbon cycle by anthropogenic climate change [33,69];

    ● the large-scale cooperative project RECCAP [70];

    ● annual reports on the carbon budget [71] and, more recently, [72].

    As emphasised above, the additional information captured by reductionist modeling (as compared to empirical statistical analysis) has its accuracy conditional on the validity of the underlying reductionist analysis. In particular, modeling that combines multiple components needs to pay particular attention to consistency of definitions [58]. However, equal importance must be given to the consistency of the statistical characterisation, as emphasised by Raupach et al. [4] and illustrated by examples noted above (e.g. [51]). The underlying assumptions need to be made explicit and, to the extent possible, tested. This applies particularly to the ill-conditioned inverse problems of model calibration and estimation of forcing. The use of the modeling spectrum allows for multiple perspectives and comparisons that can provide such checks for consistency.


    Acknowledgements

    This paper draws on the work of many collaborators, both from CSIRO and beyond. Of particular importance are the contributions of Mike Raupach whose untimely death sadly diminished our field. Nathan Clisby provided helpful comments on the manuscript.

    Section 6 draws on a talk at the Transcom meeting in Tsukuba and an expanded version in Lectures on Turbulence, Air Pollution Modelling and Applications, Ed. D. Moriera and M. T. Vilhena.


    Conflict of interest

    All authors declare no conflicts of interest in this paper.



