
The study of host-virus interactions using dynamical models (within-host models) has improved our understanding of the mechanistic interactions that govern chronic infections caused by pathogens such as human immunodeficiency virus [1,2] and hepatitis B virus [4], as well as the mechanistic interactions that govern acute infections caused by pathogens such as influenza virus [5,6], dengue virus [7,8], Zika virus [10], and severe acute respiratory syndrome coronavirus 2 [11,12]. Regardless of the virus considered, the most basic within-host model has a general structure that includes the interaction between the cells susceptible to the virus, the cells infected by the virus, and the virus itself at short (acute) and long (chronic) timescales. The emergence of unexpected dynamics in the virus data, new information about the virus' life-cycle, data describing host immunity to the infection, or a combination of some or all of the above may require adding complexity to the within-host model (see [13,14] for a review).
Data fitting techniques for simple or complex within-host models use (normalized) least-squares approaches, in which the Euclidean distance between the data and the mathematical model is minimized with respect to the unknown parameters. The first step in the parameter estimation algorithm is to provide an initial guess for each parameter based on prior biological knowledge, such as the duration of the eclipse stage, the life-span of an infected cell and/or virus in vitro, and knowledge from modeling the dynamics of related viruses. When no prior knowledge is available, the user assumes that any positive parameter guess is acceptable. Then, an optimization search algorithm is run until a termination criterion is reached. For many within-host mathematical models and corresponding datasets, the optimization is ill-posed due to the structure of the model and/or the frequency of the data [15]. As a result, some parameters may be difficult or impossible to quantify. To determine whether the uncertainty in parameter estimates is due to the model or the data, both structural and practical identifiability questions need to be addressed.
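As a toy illustration of this fitting loop, the sketch below generates synthetic log-scale titers from a hypothetical one-parameter clearance model, $ V(t) = V_0 e^{-ct} $, and recovers the parameters with SciPy's `least_squares`; the model, sampling times, and noise level are illustrative assumptions, not values taken from this study.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical toy model: exponential viral clearance V(t) = V0 * exp(-c t).
def model(t, V0, c):
    return V0 * np.exp(-c * t)

# Synthetic "data": daily log10 titers with measurement noise.
t_data = np.arange(1.0, 10.0)
true_V0, true_c = 1e6, 1.2
log_data = np.log10(model(t_data, true_V0, true_c)) + rng.normal(0, 0.1, t_data.size)

# Residuals on the log10 scale, since titers span several orders of magnitude;
# note log10(V0 * exp(-c t)) = log10(V0) - c * t * log10(e).
def residuals(p):
    log_V0, c = p
    return (log_V0 - c * t_data * np.log10(np.e)) - log_data

# Initial guess (e.g., from prior biological knowledge); positivity via bounds.
fit = least_squares(residuals, x0=[5.0, 0.5], bounds=([0.0, 0.0], [10.0, 10.0]))
log_V0_hat, c_hat = fit.x
```

Fitting on the log scale weights all phases of the infection comparably; fitting raw titers would let the peak dominate the residuals.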
Structural identifiability investigates the ability of a model to reveal its unknown parameters from an infinite amount of noise-free data [16,17,18]. When parameters are structurally non-identifiable, it is important to find the source of the non-identifiability, such as correlations between model parameters. This allows the user to propose the additional assumptions needed to make the model structurally identifiable. Only after the structural identifiability of the unknown parameters is guaranteed can one conduct data fitting schemes to estimate parameter values.
Practical identifiability investigates the ability of a model to reveal unknown, structurally identifiable parameters under scarce and noisy (among subjects) data. It is often examined using Monte Carlo simulations [18,19,20], the Fisher information matrix (FIM) or correlation matrix [16,21,22,23], Bayesian methods [24], and the profile likelihood method [25,26,27]. As with structural identifiability, it is important to identify whether practical identifiability issues are due to model structure. Additionally, it is important to determine whether increased data frequency, the availability of data measurements for more than one model variable, and/or relaxing restrictions imposed on the unknown parameters can resolve practical identifiability issues.
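A Monte Carlo check of practical identifiability can be sketched in a few lines: simulate noise-corrupted replicates of the data at a given noise level, refit each replicate, and report the average relative error (ARE) of each estimate. The toy model, noise levels, and replicate count below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Toy observable (hypothetical): log10 V(t) for V(t) = V0 * exp(-c t).
t = np.arange(1.0, 10.0)
true_p = np.array([6.0, 1.2])                       # [log10 V0, c]
clean = true_p[0] - true_p[1] * t * np.log10(np.e)  # noise-free log10 titers

def refit(noisy):
    # Refit both parameters to one noisy replicate.
    res = least_squares(lambda p: (p[0] - p[1] * t * np.log10(np.e)) - noisy,
                        x0=[4.0, 0.5])
    return res.x

# Monte Carlo: 200 replicates per noise level; ARE in percent.
are = {}
for sigma in (0.05, 0.2):
    est = np.array([refit(clean + rng.normal(0, sigma, t.size))
                    for _ in range(200)])
    are[sigma] = 100 * np.mean(np.abs(est - true_p) / np.abs(true_p), axis=0)
```

A parameter is deemed practically identifiable when its ARE stays low as the noise level grows; in this well-posed toy problem both estimates degrade gracefully, whereas a practically non-identifiable parameter would show AREs exploding with noise.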
To address these important considerations in model validation, one needs to compare a set of models for the same virus infection system against the same empirical data. Here, we accomplish that by investigating four previously developed models of influenza A virus (IAV) infection in mice [28]. The first three models, all validated with the same virus titer dataset, range from the basic within-host model to models with increased complexity through the addition of nonlinear terms and/or the inclusion of additional variables for the host cell populations infected by the influenza virus. The fourth model is the most complex, due to the addition of both nonlinear terms and variables for the host immune system, which results in a large number of unknown parameters. To compensate for the added complexity, this model is validated with two datasets: the same virus titer data and additional immune cell population data.
The goal of this study is to determine how model complexity and data availability induce uncertainty in parameter estimates. Using the proposed models as proof of concept, we aim to provide a framework for model validation, from structural to practical identifiability, that can be generalized to other models of virus infections.
We consider four within-host models of acute infection used to describe influenza A virus infection in mice [5]. They all describe the same influenza A virus titer data, but they account for increasing biological complexity, as follows. Model 1 assumes that influenza A virus infects all available susceptible target cells before being cleared, with first-order infected cell death and viral clearance rates (target-cell limitation); Model 2 explains an observed biphasic viral decay in the data by assuming a second-order (density-dependent) infected cell killing rate; Model 3 explains an observed delay in viral expansion in the data by assuming the presence of an eclipse phase; and Model 4 utilizes a secondary immune cell dataset by adding a model population that describes immune-mediated antiviral responses. With each model, we include biological realism that describes the dynamics of virus expansion and decay in more detail, while at the same time increasing model complexity through the addition of nonlinearities and larger numbers of model parameters. The flow charts of the four models are presented in Figure 1. Below, we describe all models in detail and address whether the added complexity can be supported by the available data by investigating the structural and practical identifiability of each model considered.
Model 1 is the classical target-cell limitation model of viral infection, which considers the interaction between target cells, infected cells, and virus as follows [5,28]. Target cells, $ T $, interact with the virus, $ V $, at rate $ \beta $ to become infected cells $ I $. Infected cells die at per capita rate $ \delta $ and produce virus at rate $ \pi $. Virus is cleared at rate $ c $. Model 1 is described by the system of ordinary differential equations (ODE) Eq (2.1) below,
$ \text{Model 1:} \qquad \frac{dT}{dt} = -\beta T V, \qquad \frac{dI}{dt} = \beta T V - \delta I, \qquad \frac{dV}{dt} = \pi I - c V, $ | (2.1) |
with initial conditions $ T(0) = T_0, $ $ I(0) = I_0, $ and $ V(0) = 0 $.
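Model 1 can be integrated numerically in a few lines; the sketch below uses SciPy's `solve_ivp` with illustrative parameter values and initial conditions (not the fitted values from [5,28]).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Right-hand side of Model 1, Eq (2.1): target cells T, infected cells I, virus V.
def model1(t, x, beta, delta, pi, c):
    T, I, V = x
    return [-beta * T * V,
            beta * T * V - delta * I,
            pi * I - c * V]

# Illustrative (hypothetical) parameter values and initial conditions.
beta, delta, pi, c = 1e-6, 2.0, 50.0, 5.0
T0, I0, V0 = 1e7, 75.0, 0.0

sol = solve_ivp(model1, (0.0, 12.0), [T0, I0, V0],
                args=(beta, delta, pi, c), rtol=1e-8, atol=1e-8)
T, I, V = sol.y  # trajectories; V is the observable compared to titer data
```

With these values the basic reproduction number $ \beta\pi T_0/(\delta c) $ exceeds one, so the virus expands, depletes the target cells, and then decays, which is the qualitative behavior the target-cell limitation mechanism encodes.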
Experimental data have shown that, following peak expansion, virus decays in a biphasic manner. To capture the dynamics of viral decay, a modified death rate has been considered. It assumes that the per capita rate of infected cell clearance increases as the density of infected cells decreases, as described by $ \delta/(K_{\delta}+I) $, where $ \delta $ is the maximum per capita death rate and $ K_{\delta} $ is the infected cell population at which the death rate is half-maximal [28]. This leads to the modified target-cell limitation Model 2 given by the ODE system Eq (2.2) below,
$ \text{Model 2:} \qquad \frac{dT}{dt} = -\beta T V, \qquad \frac{dI}{dt} = \beta T V - \frac{\delta}{K_\delta + I} I, \qquad \frac{dV}{dt} = \pi I - c V, $ | (2.2) |
with initial conditions $ T(0) = T_0, $ $ I(0) = I_0, $ and $ V(0) = 0 $.
It was observed experimentally that, following influenza A virus exposure, there is a delay between infection of target cells and viral production by infected cells [29]. The delay was accounted for by assuming that, upon infection, cells enter an eclipse phase $ I_1 $, where cells are infected but do not produce virus. They become productively infected $ I_2 $ after $ 1/k $ days [6], where $ 1/k $ is the average time spent in eclipse phase. This leads to the target-cell limitation model with eclipse phase Model 3 given by the ODE system Eq (2.3) below,
$ \text{Model 3:} \qquad \frac{dT}{dt} = -\beta T V, \qquad \frac{dI_1}{dt} = \beta T V - k I_1, \qquad \frac{dI_2}{dt} = k I_1 - \frac{\delta}{K_\delta + I_2} I_2, \qquad \frac{dV}{dt} = \pi I_2 - c V, $ | (2.3) |
with initial conditions $ T(0) = T_0, $ $ I_1(0) = I_0 $, $ I_2(0) = 0, $ and $ V(0) = 0 $.
The first three models do not explicitly account for any immune responses, but indirectly assume infected cell loss at a nonspecific rate $ \delta $ (or $ \delta/(K_\delta +I_2) $) and viral clearance at a nonspecific rate $ c $. The observed biphasic viral decay captured by Models 2 and 3, given by Eqs (2.2) and (2.3), however, has the additional feature that the timing of the second phase of viral decay coincides with the development of adaptive immune cells in the form of $ \mathrm{CD8}^+ $ T cells, which are responsible for killing infected cells and resolving the infection [5]. To account for adaptive immunity (especially in the presence of immune cell data), an additional variable $ E $ is considered. It accounts only for the effector $ \mathrm{CD8}^+ $ T cell population (and ignores the memory $ \mathrm{CD8}^+ $ T cell population), as follows. In the absence of infection, a baseline population of influenza A virus-specific effector $ \mathrm{CD8}^+ $ T cells is present, $ E(0) = E_0 $. Infection results in the recruitment of additional effector $ \mathrm{CD8}^+ $ T cells at a rate proportional to the number of productively infected cells $ I_2 $. This is modeled in a density-dependent manner at rate $ \lambda/(K_E+E) $, where $ \lambda $ is the maximum influx and $ K_E $ is the effector $ \mathrm{CD8}^+ $ T cell population at which the influx is half-maximal. Effector $ \mathrm{CD8}^+ $ T cells proliferate in the presence of infection. This is modeled by the delayed term $ \eta I_2(t-\tau_I) E $, which assumes that expansion follows the interaction between effector $ \mathrm{CD8}^+ $ T cells and cells that became productively infected $ \tau_I $ days ago.
To account for effector $ \mathrm{CD8}^+ $ T cell function, the model assumes that effector $ \mathrm{CD8}^+ $ T cells kill infected cells in a density-dependent manner modeled by the term $ \delta_E/(K_{\delta}+I_2) $, where $ \delta_E $ is the maximum per capita killing rate and $ K_{\delta} $ is the $ I_2 $ concentration at which the killing is half-maximal. A nonspecific infected cell killing rate $ \delta $ is still considered. The resulting delay differential equation (DDE) immune model is described by the DDE system Eq (2.4) below,
$ \frac{dT}{dt} = -\beta T V, \qquad \frac{dI_1}{dt} = \beta T V - k I_1, \qquad \frac{dI_2}{dt} = k I_1 - \delta I_2 - \frac{\delta_E}{K_\delta + I_2} E I_2, \qquad \frac{dV}{dt} = \pi I_2 - c V, \qquad \frac{dE}{dt} = \frac{\lambda}{K_E + E} I_2 + \eta E I_2(t-\tau_I) - d_E E, $ | (2.4) |
with initial conditions $ T(0) = T_0, $ $ I_1(0) = I_0 $, $ V(0) = 0, $ $ E(0) = E_0 $, and $ I_{2}(t) = 0 $ for $ -\tau_I\leq t\leq 0 $.
To unify the goal of investigating uncertainty in parameter estimates when fitting ODE systems of virus dynamics to data, we first approximate the DDE system given by Eq (2.4) with an ODE system, as follows [30]. For a delay of $ \tau_I $ days, we introduce $ n $ dummy variables, each spanning $ \tau_I/n $ days, into the dynamics of the variable $ I_2 $. Briefly, we let $ y_i $ be the productively infected cell population at time $ t-\frac{i}{n}\tau_I $ days post infection, for $ i = 1, ..., n $, and consider the following ODE system for the dummy variables $ y_i(t) $,
$ \frac{dy_1}{dt} = I_2 - \frac{n}{\tau_I} y_1, \qquad \cdots \qquad \frac{dy_i}{dt} = \frac{n}{\tau_I} y_{i-1} - \frac{n}{\tau_I} y_i, \qquad \cdots \qquad \frac{dy_n}{dt} = \frac{n}{\tau_I} y_{n-1} - \frac{n}{\tau_I} y_n, $ | (2.5) |
with $ y_i(0) = 0 $ for $ i = 1, ..., n $. Then, the delayed productively infected cell population is given by
$ I_2(t-\tau_I) \approx y_n(t). $ | (2.6) |
Without loss of generality, we assume $ n = 3 $. The corresponding immune Model 4 is given by the ODE system Eq (2.7) below,
$ \text{Model 4:} \qquad \frac{dT}{dt} = -\beta T V, \qquad \frac{dI_1}{dt} = \beta T V - k I_1, \qquad \frac{dI_2}{dt} = k I_1 - \delta I_2 - \frac{\delta_E}{K_\delta + I_2} E I_2, \qquad \frac{dV}{dt} = \pi I_2 - c V, \qquad \frac{dE}{dt} = \frac{\lambda}{K_E + E} I_2 + \eta E y_3 - d_E E, \qquad \frac{dy_1}{dt} = I_2 - \frac{3}{\tau_I} y_1, \qquad \frac{dy_2}{dt} = \frac{3}{\tau_I} y_1 - \frac{3}{\tau_I} y_2, \qquad \frac{dy_3}{dt} = \frac{3}{\tau_I} y_2 - \frac{3}{\tau_I} y_3, $ | (2.7) |
with initial conditions $ T(0) = T_0, $ $ I_1(0) = I_0 $, $ I_2(0) = 0 $, $ V(0) = 0, $ $ E(0) = E_0 $, and $ y_i(0) = 0 $ for $ i = 1, 2, 3 $.
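The accuracy of a chain approximation such as Eq (2.6) can be checked on a stand-alone test input. The sketch below uses the standard form of the linear chain trick, in which each of the $ n $ stages has rate $ n/\tau_I $ and the input enters scaled by that same rate, so that the last stage approximates $ u(t-\tau_I) $; the sinusoidal input $ u(t) = \sin t $ is a hypothetical stand-in for $ I_2(t) $.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, T_end = 1.0, 10.0
u = np.sin  # hypothetical test input standing in for I2(t)

def chain_error(n):
    # n first-order stages, each with rate n/tau (standard linear chain trick);
    # the last stage y_n(t) approximates the delayed input u(t - tau).
    def rhs(t, y):
        inputs = np.concatenate(([u(t)], y[:-1]))
        return (n / tau) * (inputs - y)
    sol = solve_ivp(rhs, (0.0, T_end), np.zeros(n), rtol=1e-9, atol=1e-12)
    return abs(sol.y[-1, -1] - u(T_end - tau))

err_3, err_30 = chain_error(3), chain_error(30)
# The approximation error shrinks as the number of stages n grows.
```

With $ n = 3 $, as in Eq (2.7), the delayed signal is reproduced only approximately (the chain both attenuates and slightly mistimes the input); increasing $ n $ trades accuracy for additional ODEs.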
To study the structural identifiability of Models 1–4, we rewrite them in the following general form
$ x'(t) = f(x, {\boldsymbol{p}}), $ | (3.1) |
and the observations as
$ y(t) = g(x, {\boldsymbol{p}}). $ | (3.2) |
Here, $ x $ denotes the state variables, $\boldsymbol{p}$ is the parameter vector, and $ y $ is the output (given by the empirical data), also called the observations. The generic model given by Eq (3.1) is called structurally identifiable if the parameter vector $\boldsymbol{p}$ can be determined uniquely from the observations given by the smooth curve $ y(t) $. Otherwise, it is said to be unidentifiable. The formal definition of structural identifiability is provided below.
Definition 1. Let $\boldsymbol{p}$ and $ \hat{{\boldsymbol{ p }}} $ be two distinct parameter vectors. Model Eq (3.1) is said to be globally (uniquely) structurally identifiable if and only if,
$ g(x, {\boldsymbol{p}}) = g(x, \hat{{\boldsymbol{p}}}) \quad implies \quad {\boldsymbol{p}} = \hat{{\boldsymbol{p}}}. $ |
Definition 2. Model Eq (3.1) is said to be locally structurally identifiable if, for any $\boldsymbol{p}$ within an open neighborhood of $ \hat{{\boldsymbol{ p }}} $ in the parameter space,
$ g(x, {\boldsymbol{p}}) = g(x, \hat{{\boldsymbol{p}}}) \quad implies \quad {\boldsymbol{p}} = \hat{{\boldsymbol{p}}}. $ |
Various methods have been proposed for analyzing the structural identifiability of ODE models [16,17,31]. In this study, we employ the differential algebra approach. It performs the elimination of unobserved state variables, resulting in equations expressed as functions of model parameters and observed state variables. These are referred to as the input-output equations, and are differential-algebraic polynomials consisting of the outputs, $ y(t) $, with model parameters, $\boldsymbol{p}$, as coefficients. The formal definition of structural identifiability within the differential algebra approach for model Eq (3.1) is provided below.
Definition 3. Let $ c({\boldsymbol{p}}) $ denote the coefficients of the input-output equation corresponding to model Eq (3.1). We say that model Eq (3.1) is structurally identifiable from unlimited observations $ y(t) $ if and only if,
$ c({\boldsymbol{p}}) = c({\hat{{\boldsymbol{p}}}}) \quad implies \quad {\boldsymbol{p}} = {\hat{{\boldsymbol{p}}}} . $ |
Studying the structural identifiability of ODE models using differential algebra methods can be accomplished using several platforms and available open-source software. Here, we present three such platforms: the differential algebra for identifiability of systems (DAISY) [32], the identifiable combinations web application (COMBOS) [33], and the StructuralIdentifiability.jl package in JULIA [34].
There are many similarities among the three platforms. All of them offer insight into the structural identifiability status of each parameter by categorizing it as locally identifiable, globally identifiable, or non-identifiable. They employ a differential elimination method to calculate the input-output equations of the considered system, and they test whether the map between the coefficients of the input-output equations and the model parameters is one-to-one. COMBOS and the StructuralIdentifiability.jl package in JULIA have an advantage over DAISY, as they provide the globally identifiable parameter correlations in an otherwise non-identifiable system. Even though DAISY does not print parameter correlations, the correlations can be derived from the coefficients of the input-output equations through algebraic manipulation in software such as MATHEMATICA. Of the three platforms, COMBOS does not print the input-output equations, making it a faster (yet more opaque) platform. Previous studies have shown that COMBOS works best for small to medium-sized models, and its performance is not assured for models with large parameter vectors [33,35,36]. While the platforms are highly similar, it is up to the user to determine which is best suited for studying the identifiability of the models considered.
To determine whether the considered models can reveal their parameters, we examine the structural identifiability of Models 1–3, given by Eqs (2.1)–(2.3), under unlimited observations of viral load and the structural identifiability of Model 4, given by Eq (2.7), under unlimited combined observations of viral load and effector $ \mathrm{CD8}^+ $ T cell concentrations. We use the differential algebra software DAISY.
We assume that all Model 1's parameters $ {\boldsymbol{p}} = \{\beta, \delta, \pi, c\} $ are unknown and that we have unlimited empirical observations of the viral load, $ y(t) = V(t) $. Using DAISY [32], we obtain the following input-output equation in variable $ V $ and model parameters $\boldsymbol{p}$,
$ 0 = V'''V - V''V' + V''V^2\beta + V''V(c+\delta) - V'^2(c+\delta) + V'V^2\beta(c+\delta) + V^3\beta c\delta. $ | (3.3) |
By Definition 3, we need to examine whether another set of parameters, $ \hat{ {\boldsymbol{p}}} = \{\hat{\beta}, \hat{\delta}, \hat{\pi}, \hat{c}\} $ can produce the same empirical observation $ V(t) $, making the map from the parameter space $\boldsymbol{p}$ to the coefficients of input-output equation Eq (3.3) one-to-one. The coefficients of input-output equation Eq (3.3) are $ c({\boldsymbol{p}}) = \{ \beta, c+\delta, c \delta\} $. To determine whether the map from the parameter space $ {\boldsymbol{p}} $ to the coefficients $ c({\boldsymbol{p}}) $ is one-to-one, we set $ c({\boldsymbol{p}}) = c(\hat{{\boldsymbol{p}}}) $, which is the following system:
$ \{\beta = \hat{\beta}, \quad c + \delta = \hat{c} + \hat{\delta}, \quad c\delta = \hat{c}\hat{\delta}\}. $ | (3.4) |
Solving Eq (3.4) results in the following two sets of solutions:
$ S_1: \{\beta = \hat{\beta}, \quad c = \hat{c}, \quad \delta = \hat{\delta}\}, \qquad S_2: \{\beta = \hat{\beta}, \quad c = \hat{\delta}, \quad \delta = \hat{c}\}. $ |
Hence, only the infection rate $ \beta $ is globally structurally identifiable, while the infected cell killing rate $ \delta $ and the virus clearance rate $ c $ are only locally identifiable. Lastly, the virus production rate $ \pi $ does not appear in the input-output equation Eq (3.3); therefore, it is not structurally identifiable. We summarize the results for Model 1 below (see Table 1).
Table 1. Structural identifiability results for Models 1–4 obtained with DAISY, StructuralIdentifiability.jl (JULIA), and COMBOS.

| Model | Observed states | DAISY | JULIA | COMBOS |
| --- | --- | --- | --- | --- |
| Model 1 (unknown initial conditions) | $ V(t) $ | $ \beta $ globally identifiable; $ \{c, \delta\} $ locally identifiable; correlations: $ c+\delta =\hat{c}+\hat{\delta} $, $ c\delta =\hat{c}\hat{\delta} $ | $ \beta $ globally identifiable; $ \{c, \delta\} $ locally identifiable; $ \pi $ nonidentifiable; correlations: $ c+\delta =\hat{c}+\hat{\delta} $, $ c\delta =\hat{c}\hat{\delta} $ | $ \beta $ globally identifiable; $ \{c, \delta\} $ locally identifiable with 2 solutions |
| Model 2 (unknown initial conditions) | $ V(t) $ | $ \{\beta, c\} $ globally identifiable; correlations: $ \delta\pi=\hat{\delta}\hat{\pi} $, $ \pi K_\delta =\hat{\pi}\hat{K}_\delta $ | $ \{\beta, c\} $ globally identifiable; $ \{\pi, K_\delta, \delta\} $ nonidentifiable; correlations: $ \delta\pi=\hat{\delta}\hat{\pi} $, $ \pi K_\delta =\hat{\pi}\hat{K}_\delta $ | $ \{\beta, c, K_\delta \pi, \delta \pi\} $ globally identifiable |
| Model 3 (unknown initial conditions) | $ V(t) $ | $ \{\beta, c, k\} $ globally identifiable; correlations: $ \frac{\delta}{K_\delta}= \frac{\hat{\delta}}{\hat{K}_\delta} $, $ \pi K_\delta =\hat{\pi}\hat{K}_\delta $ | $ \{\beta, c, k\} $ globally identifiable; $ \{\pi, K_\delta, \delta\} $ nonidentifiable; correlations: $ \delta \pi= \hat{\delta} \hat{\pi} $, $ \pi K_\delta =\hat{\pi}\hat{K}_\delta $ | $ \{\beta, c, K_\delta \pi, \delta \pi\} $ globally identifiable |
| Model 4 (unknown initial conditions) | $ V(t) $, $ E(t) $ | $ \{\beta, c, d_E, \delta, k, K_E, \tau_I\} $ globally identifiable; correlations: $ \delta_E\eta =\hat{\delta}_E\hat{\eta} $, $ K_\delta \eta =\hat{K}_\delta\hat{\eta} $, $ \frac{\lambda}{\eta}= \frac{\hat{\lambda}}{\hat{\eta}} $, $ \frac{\pi}{\eta}= \frac{\hat{\pi}}{\hat{\eta}} $ | $ \{\beta, c, d_E, \delta, k, K_E, \tau_I\} $ globally identifiable; $ \{\delta_E, K_\delta, \pi, \lambda, \eta\} $ nonidentifiable; correlations: $ K_{\delta} \pi = \hat{K}_{\delta} \hat{\pi} $, $ K_{\delta} \lambda = \hat{K}_{\delta} \hat{\lambda} $, $ \eta K_{\delta} =\hat{\eta}\hat{K}_{\delta} $, $ \delta_E \pi = \hat{\delta}_E \hat{\pi} $ | identifiability unknown |
| All models (known initial conditions except $ E_0 $) | $ V(t) $; $ V(t) $, $ E(t) $ | Models 1–3 globally identifiable; Model 4 identifiability unknown | Models 1–4 globally identifiable | Models 1–4 identifiability unknown |
Proposition 1. Model 1 given by Eq (2.1) is not structured to identify all of its parameters from unlimited viral load observations, $ V(t) $. More precisely, parameter $ \beta $ is globally structurally identifiable, parameters $ c $ and $ \delta $ are locally structurally identifiable, and parameter $ \pi $ is not structurally identifiable. Moreover, Model 1 is globally structurally identifiable under known initial conditions.
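The two solution branches $ S_1 $ and $ S_2 $ can be recovered symbolically; a minimal sketch with SymPy, matching the coefficients $ c+\delta $ and $ c\delta $ from Eq (3.4) (the coefficient $ \beta $ matches trivially and is omitted):

```python
import sympy as sp

c, d, c_hat, d_hat = sp.symbols('c delta c_hat delta_hat', positive=True)

# Coefficient matching c(p) = c(p_hat) for Model 1: the sum and product of
# c and delta must agree with those of the hatted parameters.
solutions = sp.solve([sp.Eq(c + d, c_hat + d_hat),
                      sp.Eq(c * d, c_hat * d_hat)], [c, d])
# Expect two branches: the identity and the pair with c and delta exchanged,
# which is why c and delta are only locally identifiable.
```

This is the familiar fact that a sum and product determine a pair of numbers only up to ordering.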
We assume that all parameters $ {\boldsymbol{p}} = \{\beta, \delta, K_\delta, \pi, c\} $ of Model 2, given by Eq (2.2), are unknown and that we have unlimited empirical observations of the viral load, $ y(t) = V(t) $. Using DAISY, we obtain the following input-output equation,
$ \begin{aligned} 0 = {} & V'''V'^2V + 2V'''V'V^2c + 2V'''V'VK_\delta\pi + V'''V^3c^2 + 2V'''V^2cK_\delta\pi + V'''VK_\delta^2\pi^2 \\ & - V''V'^3 + V''V'^2V^2\beta - V''V'^2Vc - 2V''V'^2K_\delta\pi + 2V''V'V^3\beta c + V''V'V^2(2\beta K_\delta\pi + c^2) \\ & - V''V'K_\delta^2\pi^2 + V''V^4\beta c^2 + V''V^3c(2\beta K_\delta\pi + c^2) + V''V^2K_\delta\pi(\beta K_\delta\pi + 2c^2) + V''VK_\delta\pi^2(cK_\delta + \delta) \\ & - V'^4c + V'^3V^2\beta c - 2V'^3Vc^2 - V'^3\pi(2cK_\delta + \delta) + 2V'^2V^3\beta c^2 + V'^2V^2(2\beta cK_\delta\pi + \beta\delta\pi - c^3) \\ & - 2V'^2Vc\pi(cK_\delta + \delta) - V'^2K_\delta\pi^2(cK_\delta + \delta) + V'V^4\beta c^3 + 2V'V^3\beta c\pi(cK_\delta + \delta) \\ & + V'V^2\pi(\beta cK_\delta^2\pi + \beta\delta K_\delta\pi - c^2\delta) + V^4\beta c^2\delta\pi + V^3\beta c\delta K_\delta\pi^2. \end{aligned} $ | (3.5) |
As before, we examine whether another set of parameters, $ \hat{ {\boldsymbol{p}}} $, can produce the same empirical observation $ V(t) $, making the map from the parameter space $\boldsymbol{p}$ to the coefficients of input-output equation Eq (3.5) one-to-one. If we set $ c({\boldsymbol{p}}) = c(\hat{{\boldsymbol{p}}}) $, we obtain
$ \{c = \hat{c}, \quad \beta = \hat{\beta}, \quad K_\delta\pi = \hat{K}_\delta\hat{\pi}, \quad c\pi(cK_\delta + \delta) = \hat{c}\hat{\pi}(\hat{c}\hat{K}_\delta + \hat{\delta}), \quad \beta c^2\delta\pi = \hat{\beta}\hat{c}^2\hat{\delta}\hat{\pi}, \quad \beta cK_\delta\pi + \beta\delta\pi - c^3 = \hat{\beta}\hat{c}\hat{K}_\delta\hat{\pi} + \hat{\beta}\hat{\delta}\hat{\pi} - \hat{c}^3, \quad K_\delta\pi^2(cK_\delta + \delta) = \hat{K}_\delta\hat{\pi}^2(\hat{c}\hat{K}_\delta + \hat{\delta})\}, $ |
with solutions
$ \{c = \hat{c}, \quad \beta = \hat{\beta}, \quad \delta\pi = \hat{\delta}\hat{\pi}, \quad \pi K_\delta = \hat{\pi}\hat{K}_\delta\}. $ |
Hence, Model 2 is not structurally identifiable. In particular, the infection rate $ \beta $, the viral clearance rate $ c $, and the products $ \delta \pi $ and $ K_\delta \pi $ (but not the individual parameters $ \delta $, $ \pi $, and $ K_\delta $) are globally identifiable. Since the correlations $ \delta \pi $ and $ K_\delta \pi $ are known, fixing one of the parameters $ \delta $, $ \pi $, or $ K_\delta $ makes model Eq (2.2) structurally identifiable. We summarize the structural identifiability results for Model 2 below (see Table 1).
Proposition 2. Model 2 given by Eq (2.2) is not structured to identify all of its parameters from unlimited viral load observations, $ V(t) $. More precisely, parameters $ \beta $ and $ c $ are globally structurally identifiable. Moreover, the parameter products $ \delta \pi $ and $ K_\delta \pi $ are globally identifiable. Since the correlations are known, fixing $ \delta $, $ \pi $, or $ K_\delta $ makes Model 2 globally structurally identifiable from unlimited observations $ V(t) $. Furthermore, Model 2 is globally structurally identifiable under known initial conditions.
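Proposition 2 can be illustrated numerically. For any $ a > 0 $, the rescaling $ (\delta, K_\delta, \pi) \to (a\delta, aK_\delta, \pi/a) $ preserves $ \beta $, $ c $, $ \delta\pi $, and $ K_\delta\pi $; together with the rescaled cell compartments $ T \to aT $, $ I \to aI $ (possible because the initial conditions are unknown), it leaves the observable $ V(t) $ unchanged. A sketch with illustrative (hypothetical, not fitted) parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Right-hand side of Model 2, Eq (2.2).
def model2(t, x, beta, delta, Kd, pi, c):
    T, I, V = x
    return [-beta * T * V,
            beta * T * V - delta * I / (Kd + I),
            pi * I - c * V]

def viral_load(beta, delta, Kd, pi, c, T0, I0, t_eval):
    sol = solve_ivp(model2, (t_eval[0], t_eval[-1]), [T0, I0, 0.0],
                    args=(beta, delta, Kd, pi, c),
                    t_eval=t_eval, rtol=1e-10, atol=1e-10)
    return sol.y[2]

# Illustrative values and a rescaling factor a.
beta, delta, Kd, pi, c = 1e-6, 4e5, 1e5, 50.0, 5.0
T0, I0 = 1e7, 75.0
t_eval = np.linspace(0.0, 12.0, 100)
a = 10.0

V1 = viral_load(beta, delta, Kd, pi, c, T0, I0, t_eval)
V2 = viral_load(beta, a * delta, a * Kd, pi / a, c, a * T0, a * I0, t_eval)
# V1 and V2 coincide: viral load data alone cannot separate the two parameter sets.
```

Substituting $ \hat{T} = aT $, $ \hat{I} = aI $ into Eq (2.2) with the rescaled parameters reproduces the original equations exactly, so the agreement is exact up to integration tolerance.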
We assume that all parameters $ {\boldsymbol{p}} = \{\beta, \delta, k, K_\delta, \pi, c\} $ of Model 3, given by Eq (2.3), are unknown and that we have unlimited empirical observations of the viral load, $ y(t) = V(t) $. Using DAISY, we can derive the input-output equations (they are too lengthy to be presented here). As before, we examine whether another set of parameters, $ \hat{ {\boldsymbol{p}}} $, can produce the same empirical observation $ V(t) $, making the map from the parameter space $\boldsymbol{p}$ to the coefficients of the input-output equations (not shown) one-to-one. If we set $ c({\boldsymbol{p}}) = c(\hat{{\boldsymbol{p}}}) $, we obtain
$ \{c = \hat{c}, \quad \beta = \hat{\beta}, \quad k = \hat{k}, \quad\frac{\delta}{K_\delta} = \frac{\hat{\delta}}{\hat{K_\delta}}, \quad \pi K_\delta = \hat{\pi}\hat{K_\delta}\}. $ |
Hence, Model 3 is not structurally identifiable. In particular, the infection rate $ \beta $, the eclipse parameter $ k $, the viral clearance rate $ c $, the ratio $ \delta/ K_\delta $, and the product $ K_\delta \pi $ (but not the individual parameters $ \delta $, $ \pi $, and $ K_\delta $) are globally identifiable. Since the correlations are known, fixing one of the parameters $ \delta $, $ \pi $, or $ K_\delta $ makes model Eq (2.3) identifiable. We summarize the results for Model 3 below (see Table 1).
Proposition 3. Model 3 given by Eq (2.3) is not structured to identify all of its parameters from unlimited viral load observations, $ V(t) $. More precisely, parameters $ \beta $, $ k $, and $ c $ are globally structurally identifiable. Moreover, the parameter ratio $ \delta/K_\delta $ and the parameter product $ K_\delta \pi $ are globally identifiable. Since the correlations are known, fixing the parameter $ \delta $, $ \pi $, or $ K_\delta $ makes Model 3 globally structurally identifiable from unlimited observations $ V(t) $. Furthermore, Model 3 is globally structurally identifiable under known initial conditions.
To study the structural identifiability of Model 4 (given by Eq (2.7)), we assume that all parameters, $ {\boldsymbol{p}} = \{\beta, \delta, k, \delta_E, K_\delta, \pi, c, \lambda, K_E, \eta, d_E, \tau_I, E_0\} $, are unknown and that we have unlimited empirical observations of the viral load, $ y_1(t) = V(t) $, and of the effector $ \mathrm{CD8}^+ $ T cell concentration, $ y_2(t) = E(t) $. Using DAISY, we can obtain the input-output equations (they are too lengthy to be presented here). As before, we examine whether another set of parameters, $ \hat{ {\boldsymbol{p}}} $, can produce the same empirical observations $ V(t) $ and $ E(t) $, making the map from the parameter space $\boldsymbol{p}$ to the coefficients of the input-output equations (not shown) one-to-one. If we set $ c({\boldsymbol{p}}) = c(\hat{{\boldsymbol{p}}}) $, we obtain
$ \{c = \hat{c}, \quad \beta = \hat{\beta}, \quad k = \hat{k}, \quad d_E = \hat{d}_E, \quad \delta = \hat{\delta}, \quad K_E = \hat{K}_E, \quad \tau_I = \hat{\tau}_I, \quad \delta_E\eta = \hat{\delta}_E\hat{\eta}, \quad K_\delta\eta = \hat{K}_\delta\hat{\eta}, \quad \frac{\lambda}{\eta} = \frac{\hat{\lambda}}{\hat{\eta}}, \quad \frac{\pi}{\eta} = \frac{\hat{\pi}}{\hat{\eta}}\}. $ |
Hence, Model 4 is not structurally identifiable. In particular, the infection rate $ \beta $, the eclipse parameter $ k $, the viral clearance rate $ c $, the effector cell death rate $ d_E $, the generic killing rate $ \delta $, the half-maximal level $ K_E $, the delay $ \tau_I $, the ratios $ \lambda/ \eta $ and $ \pi/\eta $, and the products $ \delta_E \eta $ and $ K_\delta \eta $ (but not the individual parameters $ \delta_E, \eta, K_\delta, \pi, \lambda $) are globally identifiable. If the parameter $ \eta $ is fixed, then model Eq (2.7) becomes identifiable. We summarize the results for Model 4 below (see Table 1).
Proposition 4. Model 4 given by Eq (2.7) is not structured to identify all of its parameters from unlimited viral load and effector cell observations, $ V(t) $ and $ E(t) $. More precisely, parameters $ \beta $, $ k $, $ c $, $ d_E $, $ \delta $, $ K_E $, and $ \tau_I $ are globally structurally identifiable. Moreover, the parameter ratios $ \lambda/ \eta $ and $ \pi/\eta $ and the parameter products $ \delta_E \eta $ and $ K_\delta \eta $ are globally identifiable. If the parameter $ \eta $ is fixed, then Model 4 becomes globally structurally identifiable from unlimited observations $ V(t) $ and $ E(t) $.
We could not determine (using DAISY) whether knowledge of the initial conditions guarantees global identifiability of Model 4 (see Table 1).
The structural identifiability of ODE models can also be studied using software other than DAISY. To determine how these methods compare, results from three platforms, DAISY, COMBOS [33], and StructuralIdentifiability.jl in JULIA [34], for Models 1–4 are presented side by side in Table 1.
We find that all three software packages uncover the same structural identifiability results for Models 1–3. For Model 4 under unknown initial conditions, DAISY and StructuralIdentifiability.jl in JULIA uncover the same identifiability results, while COMBOS fails to return results. Although the platforms report the parameter correlations for Models 3 and 4 in different forms, simple algebraic manipulations show that the obtained correlations are equivalent. Given the similarity of the results for Models 1–3, the user may choose whichever of the three packages best suits their analysis; the same holds for DAISY and StructuralIdentifiability.jl in JULIA for Model 4 with unknown initial conditions. However, only StructuralIdentifiability.jl in JULIA can determine the structural identifiability of Model 4 with unknown $ E_0 $ and the other initial conditions known. Hence, for larger systems with nonlinear interaction terms, this package should be employed.
We use previously published longitudinal influenza A infectious virus titer and $ \text{CD8}^+ $ T cell data in mice from Smith et al. [5]. Adult mice were inoculated intranasally with 75 $ \text{TCID}_{50} $ of mouse adapted influenza A/Puerto Rico/8/34 (H1N1) (PR8) virus.
Total infectious virus ($ \log_{10} \text{TCID}_{50} $ per lung) was measured for ten mice each day. Nine days after inoculation, infectious virus was no longer detectable in any of the mice. Therefore, we only consider infectious virus titer data from the first nine days post inoculation in our analyses. We let $ \mathbb{E}(V_{data}(i)) $ be the mean infectious virus titer data at day $ i = \{1, ..., 9\} $ and $ \mathbb{V}\textbf{ar}(V_{data}(i)) $ be the infectious virus titer variance at day $ i $ among the ten mice.
Moreover, total effector $ \text{CD8}^+ $ T cells (cells per lung) were measured daily for five mice. Since influenza A-specific effector $ \text{CD8}^+ $ T cells were detectable for all twelve days of the study, we consider effector $ \text{CD8}^+ $ T cell data from the first twelve days post inoculation in our analyses. We let $ \mathbb{E}(E_{data}(j)) $ be the mean $ \text{CD8}^+ $ T cell data (per lung) at day $ j = \{1, ..., 12\} $ and $ \mathbb{V}\textbf{ar}(E_{data}(j)) $ be the $ \text{CD8}^+ $ T cell data variance at day $ j $ among the five mice.
For all models, we assume known initial conditions $ T(0) = 10^7 $ cells/ml, $ I(0) = 75 $ cells/ml, and $ V(0) = 0 $ virus/ml as in [5]. For Models 3 and 4, we additionally assume that $ I_2(0) = 0 $, and for Model 4, we assume $ y_i(0) = 0 $, for $ i = 1, 2, 3 $. For Model 4, we assume $ E(0) = E_0 $ is unknown, therefore adding $ E_0 $ to the parameter vector to be estimated from the data. Lastly, we assume all parameters are unknown. When parameters are either very large or very small, we estimate their values on a natural logarithmic scale. In particular, we estimate $ \bf{p_1} $$ = \{ \ln(\beta), \delta, \pi, c \} $ for Model 1, $ \bf{p_2} $$ = \{\ln(\beta), \ln(\delta), \ln(K_{\delta}), \pi, c\} $ for Model 2, $ \bf{p_3} $$ = \{\ln(\beta), \ln(\delta), \ln(K_{\delta}), \pi, c, k\} $ for Model 3, and $ \bf{p_4} $$ = \{\ln(\beta), \delta, \ln(K_{\delta}), \pi, c, k, \delta_E, \ln(\eta), \ln(\lambda), d_E, \ln(K_E), \tau_I, \ln(E_0) \} $ for Model 4.
To estimate parameters $ \bf{p_1} $–$ \bf{p_3} $, we fit the predicted viral load $ \log_{10} V^w_{model}(i) $ given by Models 1–3 to the longitudinal mean (among the ten mice) infectious virus ($ \log_{10} \text{TCID}_{50} $ per lung) $ \mathbb{E}(V_{data}(i)) $, knowing that the variance in the data at day $ i $ is $ \mathbb{V}\textbf{ar}(V_{data}(i)) $, for days $ i = \{1, ..., 9\} $. We assume that the data satisfy the following statistical model [19,37]
$ \mathbb{E}(V_{data}(i)) = \log_{10} V^w_{model}(i, \bf{p_w}) + \epsilon_i \sqrt{\mathbb{V}\textbf{ar}(V_{data}(i))}, $ | (4.1) |
where $ V^w_{model}(i) $ is the predicted virus trajectory given by Model $ w $ at days $ i = \{1, ..., 9\} $ post infection; $ \bf{p_1} $$ = \{ \ln(\beta), \delta, \pi, c \} $, $ \bf{p_2} $$ = \{ \ln(\beta), \ln(\delta), \ln(K_{\delta}), \pi, c \} $, and $ \bf{p_3} $$ = \{\ln(\beta), \ln(\delta), \ln(K_{\delta}), \pi, c, k\} $; and $ \epsilon_i $ are independent and identically distributed with mean zero and standard deviation $ \sigma $. Given the statistical model Eq (4.1), we assume that the measured data, $ \mathbb{E}(V_{data}(i)) $, follows a normal distribution with mean equal to the model prediction $ \log_{10} V^w_{model}(i) $ and variance equal to $ \sigma^{2}\mathbb{V}\textbf{ar}(V_{data}(i)) $. Moreover, the availability of measurements from several animals over time allows us to account for the change in data variance over time, $ \mathbb{V}\textbf{ar}(V_{data}(i)) $. Therefore, we consider the following functional (weighted residual sum of squares) to estimate the model parameters,
$ \text{RSS}_w(\bf{p_w}) = \sum\limits_{i = 1}^{9} \frac{(\log_{10} V^w_{model}(i, \bf{p_w}) - \mathbb{E}(V_{data}(i)))^2}{\mathbb{V}\textbf{ar}(V_{data}(i))}. $ | (4.2) |
Consequently, parameters of Models 1–3 are estimated by minimizing the weighted least-squares given by
$ J_w(\bf{p_w}) = \min\limits_{\bf{p_w}} \text{RSS}_w(\bf{p_w}). $ | (4.3) |
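As a concrete sketch of Eqs (4.2) and (4.3), the weighted objective can be written in a few lines. The numbers below are illustrative placeholders, not the mouse data from [5], and while the paper performs the outer minimization with MATLAB's fmincon, here we only evaluate the objective.

```python
def weighted_rss(log10_model, data_mean, data_var):
    """Weighted residual sum of squares, Eq (4.2): squared residuals
    between log10 model predictions and the mean data, each scaled by
    that day's variance across animals."""
    return sum((m - d) ** 2 / v
               for m, d, v in zip(log10_model, data_mean, data_var))

# illustrative placeholders for days 1-9 (NOT the empirical data)
data_mean = [2.1, 4.5, 5.9, 6.2, 5.8, 5.0, 3.9, 2.5, 1.1]  # log10 TCID50/lung
data_var  = [0.3, 0.4, 0.2, 0.2, 0.3, 0.5, 0.6, 0.8, 0.9]

# a model matching the means exactly gives RSS = 0; deviations are
# penalized more heavily on days where the animals agreed (small variance)
assert weighted_rss(data_mean, data_mean, data_var) == 0.0
```

Minimizing this functional over the parameter ranges of Table 2, as in Eq (4.3), would then be delegated to a constrained optimizer.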
Moreover, to estimate parameters $ \bf{p_4} $, we fit both the predicted viral load $ \log_{10} V_{model}^4(i) $, given by Model 4, to the longitudinal mean (among ten mice) infectious virus $ \mathbb{E}(V_{data}(i)) $ (knowing that the variance in the data at days $ i = \{1, ..., 9\} $ is $ \mathbb{V}\textbf{ar}(V_{data}(i)) $) and the predicted effector cell population $ \log_{10} E_{model}^4(j) $, given by Model 4, to the longitudinal mean (among five mice) $ \mathrm{CD8}^+ $ T cell data $ \mathbb{E}(E_{data}(j)) $ (knowing that the variance in the data at days $ j = \{1, ..., 12\} $ is $ \mathbb{V}\textbf{ar}(E_{data}(j)) $). We assume that the data satisfy the following statistical model [19,37]
$ \mathbb{E}(V_{data}(i)) = \log_{10} V^4_{model}(i, \bf{p_4}) + \epsilon_i \sqrt{\mathbb{V}\textbf{ar}(V_{data}(i))}, $ | (4.4) |
$ \mathbb{E}(E_{data}(j)) = \log_{10} E^4_{model}(j, \bf{p_4}) + \eta_j \sqrt{\mathbb{V}\textbf{ar}(E_{data}(j))}, $ | (4.5) |
where $ V^4_{model}(i) $ is the predicted virus trajectory given by Model 4 at days $ i = \{1, ..., 9\} $ post infection, $ E^4_{model}(j) $ is the predicted $ \mathrm{CD8}^+ $ T cell population given by Model 4 at days $ j = \{1, ..., 12\} $ post infection, and $ \bf{p_4} $$ = \{\ln(\beta), \delta, \ln(K_{\delta}), \pi, c, k, \delta_E, \ln(\eta), \ln(\lambda), d_E, $ $ \ln(K_E), \tau_I, \ln(E_0) \} $. Here, $ \epsilon_i $ and $ \eta_j $ are independent and identically distributed with mean zero and standard deviations $ \sigma_V $ and $ \sigma_E $, respectively. As before, the measured data $ \mathbb{E}(E_{data}(j)) $ follows a normal distribution whose mean is the model prediction $ \log_{10} E^4_{model}(j) $ and whose variance is $ \sigma_E^{2}\mathbb{V}\textbf{ar}(E_{data}(j)) $. We consider the following functional (weighted residual sum of squares),
$ \text{RSS}_4(\bf{p_4}) = u_1 \sum\limits_{i = 1}^{9} \frac{(\log_{10} V^4_{model}(i, \bf{p_4}) - \mathbb{E}(V_{data}(i)))^2}{\mathbb{V}\textbf{ar}(V_{data}(i))} + u_2 \sum\limits_{j = 1}^{12} \frac{(\log_{10} E^4_{model}(j, \bf{p_4}) - \mathbb{E}(E_{data}(j)))^2}{\mathbb{V}\textbf{ar}(E_{data}(j))}. $ | (4.6) |
Consequently, parameters of Model 4 are estimated by minimizing the weighted least-squares given by,
$ J_4(\bf{p_4}) = \min\limits_{\bf{p_4}} \text{RSS}_4(\bf{p_4}). $ | (4.7) |
Note that we weighted the virus and effector cell contributions with weights $ u_1 = 1 $ and $ u_2 = \max_j \mathbb{V}\textbf{ar}(E_{data}(j))/\max_i \mathbb{V}\textbf{ar}(V_{data}(i)) $. We minimize all least-squares functionals $ \text{RSS}_w $ using the fmincon function in MATLAB with ranges for parameters $ \mathbf{p_w} $ given in Table 2.
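The choice of the weight $ u_2 $ can be sketched as follows; the per-day variances below are hypothetical placeholders, not the empirical variances from [5].

```python
# Illustrative per-day variances (NOT the empirical values)
var_V = [0.3, 0.4, 0.2, 0.2, 0.3, 0.5, 0.6, 0.8, 0.9]                 # virus, days 1-9
var_E = [0.2, 0.3, 0.5, 0.7, 1.1, 1.4, 1.2, 0.9, 0.8, 0.6, 0.5, 0.4]  # CD8+, days 1-12

# u2 rescales the effector cell term of Eq (4.6) by the ratio of the
# largest day-to-day variances, so neither dataset dominates the objective
u1 = 1.0
u2 = max(var_E) / max(var_V)

def rss4(rss_virus, rss_cells):
    """Combined weighted objective of Eq (4.6), given the two
    already variance-weighted residual sums."""
    return u1 * rss_virus + u2 * rss_cells
```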
| Parameter | Model 1 | Model 2 | Model 3 | Model 4 |
| --- | --- | --- | --- | --- |
| $ \beta\times 10^{-5} $ | $ 0.1 $–$ 10 $ | $ 0.1 $–$ 10 $ | $ 0.1 $–$ 10 $ | $ 0.1 $–$ 10 $ |
| $ \delta $ | 0–25 | $ 10^2 $–$ 10^8 $ | $ 10^2 $–$ 10^8 $ | $ 0 $–$ 15 $ |
| $ K_{\delta} $ | - | $ 10^3 $–$ 10^7 $ | $ 10^3 $–$ 10^7 $ | $ 10^1 $–$ 10^5 $ |
| $ \pi $ | 0–$ 10^2 $ | 0–$ 10^2 $ | 0–$ 10^2 $ | 0–$ 10^2 $ |
| $ c $ | 0–25 | 0–25 | 0–25 | 0–25 |
| $ k $ | - | - | $ 4 $–$ 6 $ | $ 4 $–$ 6 $ |
| $ \delta_E $ | - | - | - | $ 0 $–$ 25 $ |
| $ \eta\times 10^{-7} $ | - | - | - | $ 10^{-2} $–$ 100 $ |
| $ \lambda\times 10^3 $ | - | - | - | $ 10^{-2} $–$ 10^2 $ |
| $ d_E $ | - | - | - | $ 0 $–$ 25 $ |
| $ K_E\times 10^6 $ | - | - | - | $ 0.1 $–$ 10^3 $ |
| $ \tau_I $ | - | - | - | 0–10 |
| $ E_0\times 10^3 $ | - | - | - | $ 0.1 $–$ 10 $ |
To compare Models 1–4, we calculate the corrected Akaike information criterion (AICc), given below
$ \text{AIC}_c = n \ln\left(\frac{J_w}{n}\right) + 2(M+1) + \frac{2(M+1)(M+2)}{n-M}, $ | (4.8) |
where $ n $ is the number of data points used to estimate parameters $ \mathbf{p_w} $ and $ M $ is the number of estimated parameters. In Models 1–3, $ n = 9 $ and $ M = 4, 5 $, and $ 6 $, respectively. In Model 4, $ n = 21 $ and $ M = 13 $.
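Eq (4.8) translates directly into code. This is a sketch of the formula only; we do not attempt to reproduce the values reported in Table 3.

```python
import math

def aicc(J_w, n, M):
    """Corrected Akaike information criterion, Eq (4.8), where J_w is the
    minimized objective value, n the number of data points, and M the
    number of estimated parameters."""
    return n * math.log(J_w / n) + 2 * (M + 1) + 2 * (M + 1) * (M + 2) / (n - M)

# at fixed fit quality, a model with more parameters pays a larger penalty
assert aicc(2.0, 9, 5) > aicc(2.0, 9, 4)
```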
To quantify the uncertainty associated with the predicted solutions of each model, we perform parametric bootstrapping, a simulation-based method that assumes the data come from a known distribution with unknown parameters. For Models 1–3, we assume that the predicted viral population at the best parameter estimates, $ \log_{10} V^w_{model}(i, \mathbf{p_w}) $, is the mean of the data's normal distribution and $ \sigma^2 \mathbb{V}\textbf{ar}(V_{data}(i)) $ is its variance (see Eq (4.1)). Then, $ \sigma $ can be approximated as follows,
$ \sigma^2 \approx \frac{1}{n -M} \sum\limits_{i = 1}^n \frac{(\log_{10} V^w_{model}(i, \mathbf{p_w})-\mathbb{E}(V_{data}(i)))^2}{\mathbb{V}\textbf{ar}(V_{data}(i))}, $ |
(see Banks et al. [37] for a full derivation). Here, $ n = 9 $ is the number of viral samples and $ M $ is the number of parameters ($ M = 4 $, $ M = 5 $, and $ M = 6 $ for Models 1–3, respectively). To find a confidence region for our model predictions, we generate $ 1000 $ simulated datasets using the distribution given by Eq (4.1) and fit Models 1–3 to each dataset.
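A minimal sketch of this parametric bootstrap, in standard-library Python with toy numbers standing in for the fitted trajectory; each simulated dataset would then be refit to the model, and the confidence region read off from the spread of the resulting fits.

```python
import math
import random

def sigma2_hat(log10_model, data_mean, data_var, n, M):
    """Variance scale estimate: weighted RSS divided by n - M
    (following Banks et al. [37])."""
    rss = sum((m - d) ** 2 / v
              for m, d, v in zip(log10_model, data_mean, data_var))
    return rss / (n - M)

def simulate_dataset(log10_model, data_var, sigma2, rng):
    """Draw one synthetic dataset from the statistical model Eq (4.1):
    model prediction plus N(0, sigma^2 * Var(i)) noise at each day."""
    return [m + rng.gauss(0.0, math.sqrt(sigma2 * v))
            for m, v in zip(log10_model, data_var)]

rng = random.Random(0)                                   # fixed seed, reproducible
model = [2.0, 4.0, 6.0, 6.5, 6.0, 5.0, 4.0, 2.5, 1.0]   # toy log10 predictions
var   = [0.3, 0.4, 0.2, 0.2, 0.3, 0.5, 0.6, 0.8, 0.9]   # toy variances
datasets = [simulate_dataset(model, var, 0.8, rng) for _ in range(1000)]
```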
Similarly, for Model 4, we assume that the viral data and effector cell data come from distributions with means $ \log_{10} V^4_{model}(i, \mathbf{p_4}) $ and $ \log_{10} E^4_{model}(j, \mathbf{p_4}) $ (the predicted variables at the best parameter fits) and variances $ \sigma_V^2 \mathbb{V}\textbf{ar}(V_{data}(i)) $ and $ \sigma_E^2 \mathbb{V}\textbf{ar}(E_{data}(j)) $, so that
$ \sigma_V^2 \approx \frac{1}{n_{tot}- M} \sum\limits_{i = 1}^{n_V} \frac{(\log_{10} V^4_{model}(i, \mathbf{p_4})-\mathbb{E}(V_{data}(i)))^2}{\mathbb{V}\textbf{ar}(V_{data}(i))}, $ |
and
$ \sigma_E^2 \approx \frac{1}{n_{tot}- M} \sum\limits_{j = 1}^{n_E} \frac{(\log_{10} E^4_{model}(j, \mathbf{p_4})-\mathbb{E}(E_{data}(j)))^2}{\mathbb{V}\textbf{ar}(E_{data}(j))}, $ |
as before. Here, $ n_V = 9 $ is the number of viral samples, $ n_E = 12 $ is the number of $ \mathrm{CD8}^+ $ T cell samples, $ n_{tot} = n_V+n_E = 21 $ is the number of total data samples, and $ M = 13 $ is the number of parameters fitted.
We fitted Models 1–3 to previously published longitudinal influenza A infectious virus titer data, and we fitted Model 4 to both the longitudinal influenza A infectious virus titer and the longitudinal $ \text{CD8}^+ $ T cell data in infected mice [5], using a normalized least-squares optimization algorithm (see Section 4). The results from fitting $ V(t) $ given by Models 1–3 to viral load data are shown in Figure 2A–C, and the best parameter fits are given in Table 3. The results from fitting both $ V(t) $ and $ E(t) $ given by Model 4 to viral titer and effector cell data are shown in Figure 2D, and the best parameter fits are given in Table 3. Model selection, using the corrected $ AIC_c $, predicts that Model 4 best describes the data (see Table 3). To quantify the uncertainty associated with the predicted solutions of each model, we find a $ 90\% $ confidence region in our model predictions (see Section 4.5), illustrated by shaded gray areas in Figure 2A–C for Models 1–3 and by gray and blue shaded regions in Figure 2D for Model 4. We see large error regions in the virus population predictions for all models during the decay phase (gray regions in Figure 2A–D). Moreover, Models 2–4 better capture the virus population expansion phase compared to Model 1 (gray regions in Figure 2B–D versus gray region in Figure 2A). Lastly, the largest error in the $ \mathrm{CD8}^+ $ T cell prediction in Model 4 occurs in the second week of infection (blue region in Figure 2D).
| Parameter | Model 1 | Model 2 | Model 3 | Model 4 |
| --- | --- | --- | --- | --- |
| $ \beta \times 10^{-5} $ | 0.48 | 6.88 | 6.20 | 8.39 |
| $ \delta $ | 1.47 | $ 1.59\times 10^6 $ | $ 1.72\times 10^6 $ | 0.196 |
| $ K_{\delta} $ | - | $ 4.69\times 10^4 $ | $ 2.37\times 10^5 $ | $ 5.49\times10^3 $ |
| $ \pi $ | 1.18 | 0.86 | 1.69 | 0.69 |
| $ c $ | 1.48 | 6.49 | 12.34 | 6.19 |
| $ k $ | - | - | 4.82 | 5.02 |
| $ \delta_E $ | - | - | - | 14.20 |
| $ \eta\times 10^{-7} $ | - | - | - | 3.59 |
| $ \lambda\times 10^3 $ | - | - | - | 1.31 |
| $ d_E $ | - | - | - | 0.20 |
| $ K_E\times 10^6 $ | - | - | - | 9.68 |
| $ \tau_I $ | - | - | - | 1.69 |
| $ E_{0}\times 10^3 $ | - | - | - | 1.21 |
| $ J_w $ | 21.12 | 2.09 | 2.39 | 2.14 |
| $ \text{AIC}_\text{c} $ | 37.67 | 40.88 | 114.1 | 36.78 |
While structural identifiability investigates whether parameters can be uniquely determined from a model given unlimited data in the absence of measurement error or noise, practical identifiability determines whether parameters can be accurately identified in real-world scenarios, where the observed data are discrete, vary among subjects, and are contaminated with measurement error. We and others have employed several methods to study the practical identifiability of within-host and infectious disease models, such as Monte Carlo simulations [18,19,20], the Fisher information matrix or correlation matrix [16,21,22,23], and Bayesian methods [24]. In this study, we use the profile likelihood method [25,38], described in detail below.
Consider that the vector of parameters $ \textbf{p} $ is partitioned into $ \textbf{p} $ = ($ r $, $ \textbf{s} $), where $ r $ is the parameter whose practical identifiability we are investigating and $ \textbf{s} $ is the vector of the remaining parameters. The profile likelihood of $ r $ is given by,
$ \text{PL}(r) = \min\limits_{\textbf{s}} \text{RSS}(r, \textbf{s}), $ | (6.1) |
where $ \text{RSS} $ is the objective functional used for data fitting (in our case, Eq (4.2) for Models 1–3 and Eq (4.6) for Model 4). In other words, PL$ (r) $ finds the minimum of the objective functional $ \text{RSS}(r, \textbf{s}) $ for an array of fixed $ r $ values over the space of the remaining parameters $ \textbf{s} $. The shape of PL$ (r) $ informs the identifiability of $ r $: a U-shaped PL$ (r) $ that crosses a threshold (corresponding to a chosen confidence level) on both sides of its minimum indicates that $ r $ is practically identifiable, while a flat PL$ (r) $ indicates that it is not.
We estimate PL($ r $) over a mesh of equally spaced values of $ r $, centered at the best-fit estimate $ \hat{r} $, with the number of mesh points chosen large enough to generate a confidence interval for $ \hat{r} $, as follows. If we consider a model with parameters $ \textbf{p} $ unknown and a model with parameters $ \textbf{s} $ unknown, we obtain two nested models that differ by the parameter $ r $. It has been shown that the likelihood ratio of the nested models converges to a $ \chi^2 $ distribution with one degree of freedom, $ df = 1 $ (see Corollary 2 in [38] for more detail). This allows us to define the $ \Delta $-level likelihood-based confidence interval for parameter $ r $ to be
$ \text{CI} = \{r \mid \text{PL}(r) < J + \Delta\}, $ | (6.2) |
where $ \Delta $ is the corresponding percentile of the $ \chi^2 $ distribution with $ df = 1 $, and $ J $ is the weighted least-squares functional at the best parameter estimate [38]. This can be summarized as follows.
Definition 4. Let $ \text{CI} = \{r|\text{PL}(r) < J+\Delta\} $ be the likelihood-based confidence interval for parameter $ r $.
i. If $ \text{CI}\subset[r_1, r_2] $, where $ r_1 $ and $ r_2 $ are finite, then parameter $ r $ is practically identifiable.
ii. If either $ r_1 $ or $ r_2 $ is infinite, then parameter $ r $ is not practically identifiable.
A model is practically identifiable if all parameters are practically identifiable.
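The procedure of Eqs (6.1) and (6.2) and Definition 4 can be illustrated on a hypothetical two-parameter objective, for which the inner minimization over the nuisance parameter $ \textbf{s} $ is available in closed form; in practice each profile point requires a full model fit. We take $ \Delta = 2.706 $, the 90th percentile of the $ \chi^2 $ distribution with one degree of freedom.

```python
def toy_rss(r, s):
    # hypothetical quadratic objective with best fit (r, s) = (2.0, 1.0), J = 0.5
    return 0.5 + 3.0 * (r - 2.0) ** 2 + (s - 1.0) ** 2

def profile_likelihood(r):
    # Eq (6.1): minimum over s; for this toy objective it sits at s = 1.0
    return toy_rss(r, 1.0)

J, delta = 0.5, 2.706        # best-fit objective value and chi2_90 with df = 1
grid = [2.0 + 0.01 * k for k in range(-300, 301)]     # mesh centered at r-hat
ci = [r for r in grid if profile_likelihood(r) < J + delta]        # Eq (6.2)

# a U-shaped PL crossing the threshold on both sides gives a finite CI,
# so r is practically identifiable in the sense of Definition 4(i)
assert grid[0] < min(ci) and max(ci) < grid[-1]
```

A flat profile (e.g., an objective independent of $ r $) would instead yield a CI spanning the whole mesh, signaling Definition 4(ii).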
For Models 1–3, we generated the profile likelihoods $ \text{PL}(r) $ for each parameter $ r \in \{\mathbf{p_w} \}$ for $ \mathbf{w} = \{1, 2, 3\} $ by fitting the functional $ \text{RSS}_w(\mathbf{p_w}) $ given by Eq (4.2) and Model $ \mathbf{w} $ to mean population viral load data. We obtained best estimates for the remaining parameters $ \textbf{s} $ over a mesh of equally spaced, known $ r $ values. Similarly, for Model 4, we generated the profile likelihood $ \text{PL}(r) $ for unknown parameters $ r \in \{\mathbf{p_4} \}$ by fitting functional $ \text{RSS}_4(\mathbf{p_4}) $ given by Eq (4.6) simultaneously to the mean population viral titer and the mean population effector cell data. We obtained best estimates for the remaining parameters $ \textbf{s} $ over a mesh of equally spaced, known $ r $ values.
Additionally, to further explore the relationship between data availability and practical identifiability, we generated profile likelihoods for parameters $ \mathbf{p_w} $ and Model $ w $ using simulated, noise-free, high frequency datasets. In particular, we assumed that the virus titer data was collected every fourteen minutes (for a total of 899 evenly spaced points) for Models 1–3 and that both the virus titer data and the effector cell data were collected every fourteen minutes (for a total of 899 evenly spaced points of each data type) for Model 4. We fitted each model to this high frequency data and generated profile likelihoods of the resulting parameter values. In all of our models, we chose $ \Delta $ to be the 90th percentile of the $ \chi^2 $-distribution, $ \chi^2_{90} $ [38], yielding a $ 90\% $ confidence interval for each estimated parameter $ r $.
Since we determined that Models 1–4 are structurally identifiable under known initial conditions and unlimited data, we could search for best estimates of all model parameters. The resulting fitting routine may still be ill-posed, given that the data consist of discrete (daily) measurements that vary among the infected mice, rather than the unlimited data required by the structural identifiability theory. Therefore, we examined the practical identifiability of Models 1–4 under the discrete subject data in [5]. We used the profile likelihood practical identifiability method [23,25,26,27], which has the advantage of not only determining whether a parameter is practically identifiable, but also of providing a $ 90\% $ confidence interval for the parameter space where practical identifiability occurs.
When using the empirical population mean virus titer data in [5] for Model 1, we found that $ \beta $ is practically identifiable with $ 90\% $ confidence interval $ CI\subset[1.64\times10^{-6}, 2.94\times10^{-5}] $ and $ \pi $ is practically identifiable with $ 90\% $ confidence interval $ CI\subset[0.31, 3.18] $, respectively. On the other hand, both $ \delta $ and $ c $ are not practically identifiable, with identical $ 90\% $ confidence intervals $ CI\subset[0.87, \infty] $ (see Figure 3A). Adding high frequency data and rerunning the profile likelihood analysis resulted in practical identifiability of all four parameters, consistent with the structural identifiability results for Model 1 (see Figure 3B).
Similar to Model 1, when using the empirical population mean virus titer data in [5] for Model 2, we found that $ \delta $, $ K_\delta $, and $ \pi $ are practically identifiable with $ 90\% $ confidence intervals $ CI\subset[1.41\times10^6, 2.06\times10^6] $, $ CI\subset[1.74\times10^4, 4.19\times10^5] $, and $ CI\subset[0.44, 5.24] $, respectively. On the other hand, both $ \beta $ and $ c $ are not practically identifiable, with $ 90\% $ confidence intervals $ CI\subset[6.62\times10^{-6}, \infty] $ and $ CI\subset[3.98, \infty] $, respectively (see Figure 3C). Adding high frequency data and rerunning the profile likelihood analysis resulted in practical identifiability of all five parameters, consistent with the structural identifiability results for Model 2 (see Figure 3D).
For Model 3, when using the empirical population mean virus titer data in [5], we found that $ \beta $, $ \delta $, $ K_\delta $, and $ \pi $ are practically identifiable with $ 90\% $ confidence intervals $ CI\subset[2.17\times10^{-5}, 1.60\times10^{-4}] $, $ CI\subset[1.47\times10^6, 2.03\times10^6] $, $ CI\subset[1.59\times10^5, 4.32\times10^5] $, and $ CI\subset[1.07, 4.49] $, respectively. On the other hand, both $ c $ and $ k $ are not practically identifiable, with $ 90\% $ confidence intervals $ CI\subset[3.98, \infty] $ and $ CI\subset[-\infty, \infty] $, respectively (see Figure 3E). Adding high frequency data and rerunning the profile likelihood analysis did not result in practical identifiability of all six parameters (see Figure 3F). However, when we additionally relaxed the constraints on parameters $ c $ and $ k $ to range over the intervals $ [0, 1000] $ and $ [0, 50] $, compared to the constraints chosen in Table 2, we observed practical identifiability of all six parameters, consistent with the structural identifiability results for Model 3 (see Figure 4).
For Model 4, when using the discrete empirical population mean virus titer data and the empirical population mean effector cell data in [5] simultaneously, we found that $ \pi $ and $ E_0 $ are practically identifiable with $ 90\% $ confidence intervals $ CI\subset[0.29, 2.49] $ and $ CI\subset[66, 5.41\times10^{4}] $, respectively. Parameters $ k $, $ \lambda $, and $ K_E $ are not practically identifiable, with the same $ 90\% $ confidence interval $ CI\subset[-\infty, \infty] $. Parameters $ \beta $, $ \delta_E $, $ K_\delta $, $ \eta $, and $ c $ are also not practically identifiable, with $ 90\% $ confidence intervals $ CI\subset[8.00\times10^{-6}, \infty] $, $ CI\subset[1.10, \infty] $, $ CI\subset[42.98, \infty] $, $ CI\subset[5.37\times10^{-8}, \infty] $, and $ CI\subset[3.89, \infty] $, respectively. Lastly, parameters $ \delta $, $ d_E $, and $ \tau_I $ are not practically identifiable on the positive domain, with an undefined lower bound (ULB) for the $ 90\% $ confidence interval and a finite upper bound. In particular, $ CI\subset[ULB, 0.63] $, $ CI\subset[ULB, 3.20] $, and $ CI\subset[ULB, 6.99] $, for $ \delta $, $ d_E $, and $ \tau_I $, respectively (see Figure 5A). Adding high frequency data and rerunning the profile likelihood analysis resulted in practical identifiability of all thirteen parameters, consistent with the structural identifiability results for Model 4 (see Figure 5B).
In this study, we investigated the conditions needed to ensure model identifiability in four models of influenza A virus dynamics in infected mice. To apply the same methodology and software, all considered models were either modeled by systems of ODEs (Models 1–3, given by Eqs (2.1)–(2.3)) or approximated by a system of ODEs (Model 4, given by Eq (2.7)). The considered models differ in the number of equations (corresponding to the number of variables), from three in Models 1 and 2, to four in Model 3, to eight in Model 4. Consequently, the number of unknown parameters is at most four in Model 1, five in Model 2, six in Model 3, and thirteen in Model 4. Lastly, the interaction terms include only mass-action and linear terms for Model 1, and mass-action, linear, and density-dependent terms for Models 2–4.
We found that the increased complexity needed to capture biological realism comes with a cost: it results in increased uncertainty in parameter estimates not only when discrete and noisy virus and immune cell empirical data are used for validation, but also when we assume (hypothetically) that unlimited data are available. This means that data fitting should not be conducted until it is established that the parameters can be recovered from unlimited data under the considered model structure. In other words, the first step in model validation is determining whether all unknown parameters are structurally identifiable (see Figure 6).
When it comes to investigating the structural identifiability of systems of ODEs, several software platforms are available. Here, we compared results from three of them: DAISY [32], COMBOS [33], and StructuralIdentifiability.jl in JULIA [34]. For Models 1–3 and unlimited virus titer data, we found the same classification of the structurally identifiable parameters and the same (or equivalent) correlations between the nonstructurally identifiable parameters, regardless of the software used (Table 1). For the more complex Model 4 and unlimited virus titer and effector CD8$ ^+ $ T cell data, however, only StructuralIdentifiability.jl in JULIA found that the model is structurally identifiable under known (with the exception of the initial effector population, $ E_0 $) initial conditions (Table 1). When initial conditions are unknown, we found identical classifications of the structurally identifiable parameters and equivalent correlations between the nonstructurally identifiable parameters for StructuralIdentifiability.jl in JULIA and DAISY (Table 1). COMBOS cannot handle the structural identifiability analysis of Model 4, regardless of whether the initial conditions are known (Table 1). While the increased difficulty in analyzing the structural identifiability of Model 4 is not surprising given its increased dimensionality (eight equations), many parameters (thirteen), and complex interaction terms, this model is validated with two datasets (virus titer and effector CD8$ ^+ $ T cells). Our analysis showed that the addition of data for another model variable did not compensate for the size of the model and the number of unknown parameters.
Interestingly, we found that all parameters (for all models) are structurally identifiable under known (with the exception of initial effector population, $ E_0 $) initial conditions. Given that this is an inoculation study, the assumption of known viral inoculum (set by the experiment) and of known initial target cell population (approximated based on the animal body weight) is not unreasonable. When Models 1–4 are used to model natural infection, however, such initial conditions would be unknown due to differences in individual routes of infection, heterogeneity in individual immune responses, and variability in patient susceptibility. Under such unknowns, Models 1–4 would become structurally unidentifiable. Hence, it would be impossible to estimate all parameters even in the presence of unlimited data. A reduction of the unknown parameter space (based on the reported parameter correlations) would be needed before model validation can be attempted.
We next validated Models 1–3 with discrete (daily) virus titer data (for the first nine days) and validated Model 4 with discrete (daily) virus titer data (for the first nine days) and discrete (daily) CD8$ ^+ $ T cell data (for the first twelve days). Model selection (based on $ AIC_c $) favored Model 4 as the best model (Table 3). Interestingly, Model 1 was the second best model, even though it had the largest $ 90\% $ error region around the predicted mean virus fit (Figure 2, gray shaded regions). All models performed worst during the contraction interval (Figure 2, gray and blue shaded regions), suggesting uncertainty in the death rate estimates (for the virus and infected cells).
We used the best parameter estimates obtained through data fitting for Models 1–4 to further investigate their practical identifiability. Knowing that the data used for validation were collected daily and that there was variability among the subjects at each time point, we wanted to determine whether there is uncertainty in the estimated parameters. When it comes to practical identifiability, several methods are available, such as Monte Carlo simulations [18,19,20], the Fisher information matrix or correlation matrix [16,21,22,23], and Bayesian methods [24]. In this study, we used the profile likelihood method [25,38] for two main reasons. First, it allowed us to not just classify the models as practically or non-practically identifiable, but to determine a $ 90\% $ confidence interval for each practically identifiable parameter. Second, it allowed us to determine the assumptions needed to improve practical identifiability while maintaining biological realism (for example, by imposing positivity for all parameters).
We found that none of the models are practically identifiable for the daily empirical data collected in [5] and the parameter range restrictions imposed in Table 2 (see Figures 3A, C, E, and 5A). While Model 1, Model 2, and Model 4 become practically identifiable if we assume data is collected every fourteen minutes (see Figures 3B, D, and 5B), Model 3 does not (see Figure 3F). For this model, we can achieve practical identifiability only when we assume that the viral clearance rate can reach values as high as $ c = 500 $ per day (corresponding to a life-span for the influenza virus of $ 2.9 $ minutes) and that the epithelial cells spend $ 1/k = 1.2 $ hours in the eclipse phase before they become productively infected (see Figure 4). While large influenza clearance rates have been reported before [28], the eclipse phase $ 1/k $ is assumed to be conserved in a tight interval of 4–6 hours in most respiratory infections [6,11,28]. Therefore, this parameter is not practically identifiable from Model 3 even when data is collected at high frequency. This is a situation where a parameter should be removed from the data fitting routine in order to reduce the uncertainty in the estimates of the remaining parameters (see Figure 6).
Our study has several limitations. First, Model 4 was originally expressed as a fifth-order system of DDEs. Given the lack of methods for determining the structural identifiability of DDEs, we approximated it with an eighth-order system of ODEs. More work is needed to determine whether we maintain (or improve) the practical identifiability results when the DDE system is used in place of the ODE system. Second, we assumed that all model parameters are unknown. It is feasible that the practical identifiability would be improved if certain parameters (such as the eclipse phase) were assumed known. Lastly, all our practical identifiability results come in the context of daily data collection. It would be interesting to see how data collected with random frequency (especially the unavailability of measurements before peak viral load) changes the results. In conclusion, we investigated the structural and practical identifiability of four nested ODE models of influenza A virus infection in mice. We determined the tradeoff between model complexity (defined as the combined system dimension, number of unknown parameters, and nonlinearity of model interactions), data availability, and our ability to reliably estimate model parameters. We presented solutions for improving model identifiability. While our results depend on the structure of the models considered and on the available data, the methods are generalizable, and their use is needed to improve the reliability and reproducibility of parameter estimates in other systems of ODEs applied to discrete biological data.
Identifiability analysis has critical implications for experimental design, particularly for ensuring that the data obtained in an experiment will support accurate estimation of parameters. For instance, if a model is not structurally identifiable even with noise-free unlimited data, then no experimental design will allow for the estimation of certain parameters. Furthermore, structural identifiability analysis informs us which variables need to be measured in order to obtain reliable parameter estimates; hence, experiments can be designed with structurally identifiable models in mind. On the other hand, practical identifiability reveals the optimal data sampling frequency, i.e., the times at which the data are most informative for certain parameters. These results can refine the experimental design to collect data at those times and reduce the uncertainty in parameter estimates.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Conceptualization: NT and SMC; analysis: YRL and NHB; funding acquisition SMC and NT; writing - original draft preparation: YRL, NHB, NT and SMC; writing - review and editing: YRL, NHB, NT and SMC.
SMC and NHB acknowledge partial support from National Science Foundation grant No. 2051820 and NIH NIGMS 1R01GM152743-01. This research was enabled in part through the Virginia Tech Center for the Mathematics of Biosystems (VTCMB-033500). NT acknowledges partial support from National Science Foundation grant DMS 1951626. NT and YRL acknowledge support from NIH NIGMS 1R01GM152743-01.
Necibe Tuncer is one of the special issue editors for Mathematical Biosciences and Engineering (MBE) and was not involved in the editorial review or the decision to publish this article. All authors declare no conflict of interest.
| Model | Observed states | DAISY | JULIA | COMBOS |
|---|---|---|---|---|
| Model 1 (unknown initial conditions) | $V(t)$ | $\beta$ globally identifiable; $\{c, \delta\}$ locally identifiable; correlations: $c+\delta = \hat{c}+\hat{\delta}$, $c\delta = \hat{c}\hat{\delta}$ | $\beta$ globally identifiable; $\{c, \delta\}$ locally identifiable; $\pi$ nonidentifiable; correlations: $c+\delta = \hat{c}+\hat{\delta}$, $c\delta = \hat{c}\hat{\delta}$ | $\beta$ globally identifiable; $\{c, \delta\}$ locally identifiable with 2 solutions |
| Model 2 (unknown initial conditions) | $V(t)$ | $\{\beta, c\}$ globally identifiable; correlations: $\delta\pi = \hat{\delta}\hat{\pi}$, $\pi K_\delta = \hat{\pi}\hat{K}_\delta$ | $\{\beta, c\}$ globally identifiable; $\{\pi, K_\delta, \delta\}$ nonidentifiable; correlations: $\delta\pi = \hat{\delta}\hat{\pi}$, $\pi K_\delta = \hat{\pi}\hat{K}_\delta$ | $\{\beta, c, K_\delta\pi, \delta\pi\}$ globally identifiable |
| Model 3 (unknown initial conditions) | $V(t)$ | $\{\beta, c, k\}$ globally identifiable; correlations: $\frac{\delta}{K_\delta} = \frac{\hat{\delta}}{\hat{K}_\delta}$, $\pi K_\delta = \hat{\pi}\hat{K}_\delta$ | $\{\beta, c, k\}$ globally identifiable; $\{\pi, K_\delta, \delta\}$ nonidentifiable; correlations: $\delta\pi = \hat{\delta}\hat{\pi}$, $\pi K_\delta = \hat{\pi}\hat{K}_\delta$ | $\{\beta, c, K_\delta\pi, \delta\pi\}$ globally identifiable |
| Model 4 (unknown initial conditions) | $V(t)$, $E(t)$ | $\{\beta, c, d_E, \delta, k, K_E, \tau_I\}$ globally identifiable; correlations: $\delta_E\eta = \hat{\delta}_E\hat{\eta}$, $K_\delta\eta = \hat{K}_\delta\hat{\eta}$, $\frac{\lambda}{\eta} = \frac{\hat{\lambda}}{\hat{\eta}}$, $\frac{\pi}{\eta} = \frac{\hat{\pi}}{\hat{\eta}}$ | $\{\beta, c, d_E, \delta, k, K_E, \tau_I\}$ globally identifiable; $\{\delta_E, K_\delta, \pi, \lambda, \eta\}$ nonidentifiable; correlations: $K_\delta\pi = \hat{K}_\delta\hat{\pi}$, $K_\delta\lambda = \hat{K}_\delta\hat{\lambda}$, $\eta K_\delta = \hat{\eta}\hat{K}_\delta$, $\delta_E\pi = \hat{\delta}_E\hat{\pi}$ | identifiability unknown |
| All models (known initial conditions except $E_0$) | $V(t)$; $V(t)$, $E(t)$ | Models 1–3 globally identifiable; Model 4 identifiability unknown | Models 1–4 globally identifiable | Models 1–4 identifiability unknown |
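The "locally identifiable with 2 solutions" entry for Model 1 follows directly from the parameter correlations: the viral-load data determine only the symmetric combinations $c+\delta$ and $c\delta$, so $c$ and $\delta$ are the two roots of a quadratic and can be interchanged without changing the output,

$$
x^{2}-(c+\delta)\,x+c\delta=0
\quad\Longrightarrow\quad
x=\frac{(c+\delta)\pm\sqrt{(c+\delta)^{2}-4c\delta}}{2},
$$

so the parameter pairs $(c,\delta)$ and $(\delta,c)$ yield identical trajectories of $V(t)$.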
| Parameter | Model 1 | Model 2 | Model 3 | Model 4 |
|---|---|---|---|---|
| $\beta \times 10^{-5}$ | 0.1–10 | 0.1–10 | 0.1–10 | 0.1–10 |
| $\delta$ | 0–25 | $10^2$–$10^8$ | $10^2$–$10^8$ | 0–15 |
| $K_{\delta}$ | - | $10^3$–$10^7$ | $10^3$–$10^7$ | $10^1$–$10^5$ |
| $\pi$ | 0–$10^2$ | 0–$10^2$ | 0–$10^2$ | 0–$10^2$ |
| $c$ | 0–25 | 0–25 | 0–25 | 0–25 |
| $k$ | - | - | 4–6 | 4–6 |
| $\delta_E$ | - | - | - | 0–25 |
| $\eta \times 10^{-7}$ | - | - | - | $10^{-2}$–$100$ |
| $\lambda \times 10^{3}$ | - | - | - | $10^{-2}$–$10^2$ |
| $d_E$ | - | - | - | 0–25 |
| $K_E \times 10^{6}$ | - | - | - | 0.1–$10^3$ |
| $\tau_I$ | - | - | - | 0–10 |
| $E_0 \times 10^{3}$ | - | - | - | 0.1–10 |
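Bound tables like the one above are typically consumed as box constraints that seed a multistart fit. A minimal sketch that draws random starting points inside the Model 1 bounds follows; the fitting step itself is omitted, and the code is illustrative rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model 1 search bounds copied from the table: (low, high) per parameter.
bounds = {
    "beta":  (0.1e-5, 10e-5),
    "delta": (0.0, 25.0),
    "pi":    (0.0, 1e2),
    "c":     (0.0, 25.0),
}

def sample_starts(bounds, m):
    # Draw m uniform random starting points inside the box constraints.
    lows = np.array([lo for lo, _ in bounds.values()])
    highs = np.array([hi for _, hi in bounds.values()])
    return rng.uniform(lows, highs, size=(m, len(bounds)))

starts = sample_starts(bounds, 50)
print(starts.shape)  # each row is one candidate (beta, delta, pi, c)
```

Launching a local optimizer from each row and keeping the best result guards against the cost-function landscape traps that make practical identifiability hard to diagnose from a single fit.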
| Parameter | Model 1 | Model 2 | Model 3 | Model 4 |
|---|---|---|---|---|
| $\beta \times 10^{-5}$ | 0.48 | 6.88 | 6.20 | 8.39 |
| $\delta$ | 1.47 | $1.59\times 10^6$ | $1.72\times 10^6$ | 0.196 |
| $K_{\delta}$ | - | $4.69\times 10^4$ | $2.37\times 10^5$ | $5.49\times 10^3$ |
| $\pi$ | 1.18 | 0.86 | 1.69 | 0.69 |
| $c$ | 1.48 | 6.49 | 12.34 | 6.19 |
| $k$ | - | - | 4.82 | 5.02 |
| $\delta_E$ | - | - | - | 14.20 |
| $\eta \times 10^{-7}$ | - | - | - | 3.59 |
| $\lambda \times 10^{3}$ | - | - | - | 1.31 |
| $d_E$ | - | - | - | 0.20 |
| $K_E \times 10^{6}$ | - | - | - | 9.68 |
| $\tau_I$ | - | - | - | 1.69 |
| $E_0 \times 10^{3}$ | - | - | - | 1.21 |
| $J_w$ | 21.12 | 2.09 | 2.39 | 2.14 |
| $\text{AIC}_\text{c}$ | 37.67 | 40.88 | 114.1 | 36.78 |
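The $\text{AIC}_\text{c}$ row ranks the models by penalizing the number of estimated parameters on top of the fit cost $J_w$. One commonly used least-squares form of the corrected criterion is sketched below; the paper's exact convention may differ, and the numbers in the comparison are made up, not taken from the table.

```python
import math

def aicc(ssr, n, k):
    # Corrected AIC for a least-squares fit with n data points and
    # k estimated parameters; the correction term grows as k -> n - 1.
    aic = n * math.log(ssr / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Illustrative only: a slightly better fit can still lose once the
# parameter count is penalized (values are hypothetical).
simple = aicc(2.1, 14, 5)
complex_ = aicc(1.9, 14, 9)
print(simple, complex_)
```

Because the small-sample correction diverges as the parameter count approaches the number of data points, heavily parameterized models need proportionally better fits to win, which is consistent with the way $\text{AIC}_\text{c}$ separates the four models above.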
| Model | Observed states | DAISY | JULIA | COMBOS |
| --- | --- | --- | --- | --- |
| Model 1, unknown initial conditions | $V(t)$ | $\beta$ globally identifiable; $\{c, \delta\}$ locally identifiable; correlations: $c+\delta=\hat{c}+\hat{\delta}$, $c\delta=\hat{c}\hat{\delta}$ | $\beta$ globally identifiable; $\{c, \delta\}$ locally identifiable; $\pi$ nonidentifiable; correlations: $c+\delta=\hat{c}+\hat{\delta}$, $c\delta=\hat{c}\hat{\delta}$ | $\beta$ globally identifiable; $\{c, \delta\}$ locally identifiable with 2 solutions |
| Model 2, unknown initial conditions | $V(t)$ | $\{\beta, c\}$ globally identifiable; correlations: $\delta\pi=\hat{\delta}\hat{\pi}$, $\pi K_\delta=\hat{\pi}\hat{K}_\delta$ | $\{\beta, c\}$ globally identifiable; $\{\pi, K_\delta, \delta\}$ nonidentifiable; correlations: $\delta\pi=\hat{\delta}\hat{\pi}$, $\pi K_\delta=\hat{\pi}\hat{K}_\delta$ | $\{\beta, c, K_\delta\pi, \delta\pi\}$ globally identifiable |
| Model 3, unknown initial conditions | $V(t)$ | $\{\beta, c, k\}$ globally identifiable; correlations: $\frac{\delta}{K_\delta}=\frac{\hat{\delta}}{\hat{K}_\delta}$, $\pi K_\delta=\hat{\pi}\hat{K}_\delta$ | $\{\beta, c, k\}$ globally identifiable; $\{\pi, K_\delta, \delta\}$ nonidentifiable; correlations: $\delta\pi=\hat{\delta}\hat{\pi}$, $\pi K_\delta=\hat{\pi}\hat{K}_\delta$ | $\{\beta, c, K_\delta\pi, \delta\pi\}$ globally identifiable |
| Model 4, unknown initial conditions | $V(t)$, $E(t)$ | $\{\beta, c, d_E, \delta, k, K_E, \tau_I\}$ globally identifiable; correlations: $\delta_E\eta=\hat{\delta}_E\hat{\eta}$, $K_\delta\eta=\hat{K}_\delta\hat{\eta}$, $\frac{\lambda}{\eta}=\frac{\hat{\lambda}}{\hat{\eta}}$, $\frac{\pi}{\eta}=\frac{\hat{\pi}}{\hat{\eta}}$ | $\{\beta, c, d_E, \delta, k, K_E, \tau_I\}$ globally identifiable; $\{\delta_E, K_\delta, \pi, \lambda, \eta\}$ nonidentifiable; correlations: $K_\delta\pi=\hat{K}_\delta\hat{\pi}$, $K_\delta\lambda=\hat{K}_\delta\hat{\lambda}$, $\eta K_\delta=\hat{\eta}\hat{K}_\delta$, $\delta_E\pi=\hat{\delta}_E\hat{\pi}$ | identifiability unknown |
| All models, known initial conditions (except $E_0$) | $V(t)$ (Models 1–3); $V(t)$, $E(t)$ (Model 4) | Models 1–3 globally identifiable; Model 4 identifiability unknown | Models 1–4 globally identifiable | Models 1–4 identifiability unknown |
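The Model 1 correlations above ($c+\delta$ and $c\delta$ preserved) mean that swapping $c$ and $\delta$, together with a compensating change to the unobserved initial condition $I_0$, yields exactly the same observed output $V(t)$ — the "2 solutions" COMBOS reports. This can be checked numerically; the sketch below assumes the standard target-cell-limited form $T' = -\beta T V$, $I' = \beta T V - \delta I$, $V' = \pi I - cV$ (the model equations are not written out in this excerpt) and uses hypothetical parameter values.

```python
# Sketch: check the c <-> delta output symmetry of the assumed Model 1 form
#   T' = -beta*T*V,  I' = beta*T*V - delta*I,  V' = pi*I - c*V
# Swapping c and delta while resetting I0 so that V'(0) = pi*I0 - c*V0 is
# unchanged should leave the observed trajectory V(t) identical.

def simulate_V(beta, delta, pi, c, T0, I0, V0, t_end=10.0, dt=1e-3):
    """Integrate the ODE system with classical RK4; return sampled V(t)."""
    def f(y):
        T, I, V = y
        return (-beta * T * V, beta * T * V - delta * I, pi * I - c * V)

    y = (T0, I0, V0)
    out = [V0]
    for _ in range(round(t_end / dt)):
        k1 = f(y)
        k2 = f(tuple(y[i] + 0.5 * dt * k1[i] for i in range(3)))
        k3 = f(tuple(y[i] + 0.5 * dt * k2[i] for i in range(3)))
        k4 = f(tuple(y[i] + dt * k3[i] for i in range(3)))
        y = tuple(y[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
        out.append(y[2])
    return out

# Hypothetical parameter values and initial conditions.
beta, delta, pi, c = 2e-5, 0.8, 12.0, 3.0
T0, I0, V0 = 1e4, 1.0, 5.0

V_a = simulate_V(beta, delta, pi, c, T0, I0, V0)

# Swap c and delta; choose I0_new so that pi*I0_new - c_new*V0 equals
# pi*I0 - c*V0 (the new clearance rate c_new is the old delta).
I0_new = I0 + (delta - c) * V0 / pi
V_b = simulate_V(beta, c, pi, delta, T0, I0_new, V0)

max_rel = max(abs(a - b) / max(abs(a), 1e-12) for a, b in zip(V_a, V_b))
print(max_rel)  # the two parameter sets produce the same V(t)
```

Eliminating $I$ from the output dynamics gives $V'' = \pi\beta T V - (c+\delta)V' - c\delta V$, which depends on $c$ and $\delta$ only through $c+\delta$ and $c\delta$; that is why the two simulations agree to integration accuracy.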
| Parameter | Model 1 | Model 2 | Model 3 | Model 4 |
| --- | --- | --- | --- | --- |
| $\beta\times 10^{-5}$ | $0.1$–$10$ | $0.1$–$10$ | $0.1$–$10$ | $0.1$–$10$ |
| $\delta$ | $0$–$25$ | $10^2$–$10^8$ | $10^2$–$10^8$ | $0$–$15$ |
| $K_\delta$ | - | $10^3$–$10^7$ | $10^3$–$10^7$ | $10^1$–$10^5$ |
| $\pi$ | $0$–$10^2$ | $0$–$10^2$ | $0$–$10^2$ | $0$–$10^2$ |
| $c$ | $0$–$25$ | $0$–$25$ | $0$–$25$ | $0$–$25$ |
| $k$ | - | - | $4$–$6$ | $4$–$6$ |
| $\delta_E$ | - | - | - | $0$–$25$ |
| $\eta\times 10^{-7}$ | - | - | - | $10^{-2}$–$10^2$ |
| $\lambda\times 10^3$ | - | - | - | $10^{-2}$–$10^2$ |
| $d_E$ | - | - | - | $0$–$25$ |
| $K_E\times 10^6$ | - | - | - | $0.1$–$10^3$ |
| $\tau_I$ | - | - | - | $0$–$10$ |
| $E_0\times 10^3$ | - | - | - | $0.1$–$10$ |
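Several of these search ranges, such as $K_\delta \in [10^3, 10^7]$, span many orders of magnitude, so initial guesses for a multistart fit are typically drawn log-uniformly rather than uniformly. The optimizer settings used in the paper are not shown in this excerpt; the snippet below is only a sketch of that common practice, using the Model 2 ranges from the table above.

```python
import math
import random

def sample_initial_guesses(bounds, n_starts, seed=0):
    """Draw multistart initial guesses, log-uniform within each parameter's
    (lower, upper) bounds. Bounds with a zero lower limit fall back to
    uniform sampling, since log10(0) is undefined."""
    rng = random.Random(seed)
    guesses = []
    for _ in range(n_starts):
        g = {}
        for name, (lo, hi) in bounds.items():
            if lo > 0:
                g[name] = 10 ** rng.uniform(math.log10(lo), math.log10(hi))
            else:
                g[name] = rng.uniform(lo, hi)
        guesses.append(g)
    return guesses

# Model 2 search ranges from the table above (beta carries the 1e-5 scale).
model2_bounds = {
    "beta": (0.1e-5, 10e-5),
    "delta": (1e2, 1e8),
    "K_delta": (1e3, 1e7),
    "pi": (0.0, 1e2),
    "c": (0.0, 25.0),
}

starts = sample_initial_guesses(model2_bounds, n_starts=20)
```

Each dictionary in `starts` can then seed one run of a bounded least-squares optimizer, keeping the best of the resulting local minima.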
| Parameter | Model 1 | Model 2 | Model 3 | Model 4 |
| --- | --- | --- | --- | --- |
| $\beta\times 10^{-5}$ | 0.48 | 6.88 | 6.20 | 8.39 |
| $\delta$ | 1.47 | $1.59\times 10^6$ | $1.72\times 10^6$ | 0.196 |
| $K_\delta$ | - | $4.69\times 10^4$ | $2.37\times 10^5$ | $5.49\times 10^3$ |
| $\pi$ | 1.18 | 0.86 | 1.69 | 0.69 |
| $c$ | 1.48 | 6.49 | 12.34 | 6.19 |
| $k$ | - | - | 4.82 | 5.02 |
| $\delta_E$ | - | - | - | 14.20 |
| $\eta\times 10^{-7}$ | - | - | - | 3.59 |
| $\lambda\times 10^3$ | - | - | - | 1.31 |
| $d_E$ | - | - | - | 0.20 |
| $K_E\times 10^6$ | - | - | - | 9.68 |
| $\tau_I$ | - | - | - | 1.69 |
| $E_0\times 10^3$ | - | - | - | 1.21 |
| $J_w$ | 21.12 | 2.09 | 2.39 | 2.14 |
| $\text{AIC}_\text{c}$ | 37.67 | 40.88 | 114.1 | 36.78 |
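The $\text{AIC}_\text{c}$ row compares models with different numbers of fitted parameters while penalizing complexity. For a least-squares fit with $N$ data points, $k$ parameters, and residual cost $J$, the small-sample corrected criterion commonly used is $\text{AIC}_\text{c} = N\ln(J/N) + 2k + \frac{2k(k+1)}{N-k-1}$. Whether the paper uses exactly this form, and how the weighted cost $J_w$ enters it, is not shown in this excerpt, so the following is a sketch under that standard assumption.

```python
import math

def aicc(rss, n_data, n_params):
    """Small-sample corrected AIC for a least-squares fit:
    AICc = N*ln(RSS/N) + 2k + 2k(k+1)/(N-k-1).
    Assumes Gaussian residuals; requires N > k + 1."""
    n, k = n_data, n_params
    if n <= k + 1:
        raise ValueError("AICc needs more data points than parameters + 1")
    return n * math.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical example: a 4-parameter fit (e.g., Model 1's beta, delta,
# pi, c) to an assumed set of 15 measurements with residual cost 21.12.
print(round(aicc(21.12, 15, 4), 2))
```

Lower values indicate better support, which is why Model 4 ($\text{AIC}_\text{c} = 36.78$) edges out Model 1 (37.67) despite carrying many more parameters.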