
Identifiability investigation of within-host models of acute virus infection


  • Received: 29 July 2024 Revised: 24 September 2024 Accepted: 09 October 2024 Published: 28 October 2024
  • Uncertainty in parameter estimates from fitting within-host models to empirical data limits the model's ability to uncover mechanisms of infection, disease progression, and to guide pharmaceutical interventions. Understanding the effect of model structure and data availability on model predictions is important for informing model development and experimental design. To address sources of uncertainty in parameter estimation, we used four mathematical models of influenza A infection with increased degrees of biological realism. We tested the ability of each model to reveal its parameters in the presence of unlimited data by performing structural identifiability analyses. We then refined the results by predicting practical identifiability of parameters under daily influenza A virus titers alone or together with daily adaptive immune cell data. Using these approaches, we presented insight into the sources of uncertainty in parameter estimation and provided guidelines for the types of model assumptions, optimal experimental design, and biological information needed for improved predictions.

    Citation: Yuganthi R. Liyanage, Nora Heitzman-Breen, Necibe Tuncer, Stanca M. Ciupe. Identifiability investigation of within-host models of acute virus infection[J]. Mathematical Biosciences and Engineering, 2024, 21(10): 7394-7420. doi: 10.3934/mbe.2024325




    Water and gas movement toward the oil production zone is a relevant issue in the oil industry. In fact, coning and cresting phenomena [1,2,3] can significantly decrease well productivity and recovery factors. In extreme cases, the unpredicted arrival of undesired fluids at the production well can force the premature shut-in of the well itself. Furthermore, the production of large quantities of water, especially in the depletion stage of a reservoir, commonly entails high additional costs for treatment and disposal [4]. Early water breakthrough can be a serious problem when waterflooding is applied in the phase of secondary recovery. In this technique, water is injected into the reservoir formation to displace residual oil; the water from injection wells sweeps the displaced oil toward adjacent production wells, causing significant fluid movements in the reservoir.

    In order to mitigate the problems associated with undesired fluid displacement, it is crucial to map the fluids' distribution in the reservoir zone, possibly in real time. Furthermore, it is important to make probabilistic predictions about when and where phenomena like coning or cresting can occur at the production wells [5]. Such key information can be used to tune the production rate optimally, so as to avoid or delay the water breakthrough time as far as possible.

    A different category of problem associated with the displacement of fluids in the reservoir is how to optimize the storage of CO2 in deep geological formations. CO2 capture and storage (CCS) represents nowadays one of the viable options for reducing the emissions of anthropogenic CO2 into the atmosphere [6,7]. An open question associated with the CCS strategy is how to maximize the sweep efficiency and storage capacity of CO2 in geological formations. This goal can be achieved by adopting a controlled injection scenario. As in the case of the water displacement problem during oil production, a crucial objective is mapping and predicting the movements of the CO2 plume during and after injection.

    In both cases (mapping and predicting water and CO2 displacement), several geophysical techniques can be applied, including seismic as well as non-seismic methods. All of these methods rely on the principle that the different fluid types (oil, water, and CO2) have different physical properties and, consequently, can be detected with geophysical methods sensitive to the variations of these properties. However, detecting the fluids, estimating their saturation in the reservoir, and predicting future displacements are only partial aspects of the general problem of reservoir management optimization. An additional pragmatic aspect is how to use all the available information in order to take the best actions for avoiding premature water breakthrough and for optimizing CO2 storage.

    Despite obvious intrinsic differences, both types of problems share many common features. Therefore, they can be addressed through a similar, comprehensive approach. The workflow proposed in this paper includes the following main steps.

    1) Fluid mapping: defining a reliable model of fluid distribution and saturation in the reservoir before, during, and after production/injection;

    2) Fluid prediction: using the available historical information acquired through a sequence of monitor surveys to predict the space-time evolution of the fluid distribution;

    3) Action prescription: using the entire information about past, present, and predicted fluid distribution for decision/action prescription.

    The final goal of these steps is management optimization, for hydrocarbon production as well as for CO2 storage operations.

    Some authors have proposed interesting solutions for managing both oil production and CO2 storage optimization problems [8,9,10,11].

    Liao et al. [12] have discussed ensemble-based optimization techniques of the water-alternating-gas (WAG) injection process.

    Shamshiri and Jafarpour [13] have developed a framework for equalizing and delaying (under a constant total injected volume of water) the breakthrough time of the injected fluid at the production wells. Furthermore, they have shown how to apply the same approach to optimize CO2 sequestration. In that case, producer wells are not present, so the authors introduced the concept of pseudo production wells: virtual wells used to indicate the arrival of CO2 (and its fractional production rate) at selected locations. The final aim is to equalize production rates and to delay CO2 breakthrough. The authors supported their approach with synthetic tests, showing its theoretical effectiveness both for water breakthrough risk mitigation and for CO2 storage optimization. However, they did not discuss any method for effectively mapping the fluid displacement in the reservoir zone over time.

    In this paper, I apply an approach similar to that of Shamshiri and Jafarpour, but I introduce additional ideas and methods for real-time fluid mapping. Furthermore, I introduce a Machine Learning approach for predicting future fluid displacements and, finally, for defining a prescriptive policy for optimal reservoir management, both in the case of oil production and in the case of CO2 injection and storage.

    The structure of this paper follows the workflow shown in the conceptual scheme of Figure 1. First, I recall the basics of Electric Resistivity Tomography (ERT). This is aimed at defining the fluid model and its continuous update (in almost real time). Then, I discuss how to apply effective techniques based on Artificial Neural Networks, and Recurrent Neural Networks in particular, for the probabilistic prediction of fluid movements. Finally, I explain how Reinforcement Learning algorithms can support the crucial phase of action prescription, using the entire set of information about the fluid distribution.

    Figure 1.  The main blocks of the workflow for predictive and prescriptive management, in case of oil production as well as in case of CO2 injection and storage.

    After the methodological section, I discuss several synthetic 2D and 3D tests in order to clarify the details of each step and to highlight the benefits and limitations of my approach.

    In this section, I discuss the three main methodological aspects of the workflow. These concern, respectively, the techniques to map the distribution of fluids over time, the methods to predict their future displacement, and the approaches to optimize the reservoir management policy.

    Electric Resistivity Tomography (ERT) is a consolidated approach to retrieve, through appropriate inversion algorithms, a resistivity model from surface and/or borehole measurements of electric potentials taken along multichannel arrays of electrodes. Different rock formations and materials generally show different resistivity values; therefore, the retrieved ERT models are commonly used in the geosciences as well as for supporting engineering studies. Resistivity tomography is often successfully applied in hydrogeological studies for monitoring fluid movements over time [14,15,16]. In a time-lapse ERT survey (sometimes also called a "DC time-lapse survey", where DC means "Direct Current"), electrodes are installed at fixed locations during the monitoring period. First, a "base resistivity data set" is collected. The inversion of this initial data set produces a "base resistivity model" to be used as a "reference model". Then, one or more "monitor surveys" are repeated during the monitoring period. The purpose of time-lapse ERT is to detect possible variations of the resistivity distribution caused by fluid displacements from one survey to another. These variations can be retrieved using various inversion approaches. The simplest one is subtracting the inverted monitor model from the inverted base model. A more sophisticated inversion strategy is known as "difference inversion", in which we invert the difference between the monitor and the base data sets. This approach minimizes the coherent inversion artifacts, and its output is a "difference image": the spatial distribution of the resistivity differences, generally expressed in percent, between the base and monitor models.
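As a minimal numerical illustration of the simplest strategy, the percent difference between two already-inverted resistivity models can be computed cell by cell. The toy arrays and the sign convention below are illustrative assumptions, not values from a real survey:

```python
import numpy as np

# Hypothetical inverted resistivity models (ohm*m) on a tiny 2x2 grid.
rho_base = np.array([[100.0, 120.0],
                     [ 80.0,  95.0]])     # base ("reference") model
rho_monitor = np.array([[100.0, 108.0],
                        [ 60.0,  95.0]])  # later monitor model

# Percent resistivity change of the monitor model relative to the base
# model (sign convention: negative = resistivity decrease, e.g., from
# water invasion).
diff_percent = 100.0 * (rho_monitor - rho_base) / rho_base
print(diff_percent)
```

Cells where resistivity dropped (here, the lower-left cell, a 25% decrease) would flag a candidate zone of fluid displacement between the two surveys.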

    With the ERT method, it is easy to acquire multiple data sets over time (monitor surveys), even on a daily basis. Inverting the differences between consecutive data sets yields a dynamic model of the variations of electric resistivity caused by changes in fluid saturation and hydraulic permeability. These can be seasonal changes of the water table, or salt-water intrusion in coastal areas. The same approach can also be applied to monitor fluid distribution in hydrocarbon reservoirs [17,18], to monitor the process of CO2 storage, and for other purposes [19,20,21,22].

    Furthermore, the ERT approach can also be used in conjunction with other methods based on electric and electromagnetic measurements. For instance, Dell'Aversana et al. [23,24] and Bottazzi et al. [25] proposed combining electric dipoles and magnetic coils permanently installed on the well completion to detect reservoir fluid movements in real time during oil production and water injection.

    In this second methodological sub-section, I summarize the methods that I use for predicting the ERT response from data acquired through a sequence of time-lapse surveys. For that purpose, I apply several time series forecasting techniques. A time series is a sequence of data points (such as experimental measurements) ordered in time. An important objective of time series analysis is to forecast future values at a given time t. Under certain assumptions, we can make probabilistic predictions using the information contained in the past observations.

    The time-lapse ERT method can produce large volumes of data in the form of time series. In the part of this paper dedicated to synthetic examples, I will show how, and to what extent, electric resistivity changes as fluids move in the subsurface over time. These changes can show different trends depending on the spatial distribution of fluids, rock types, permeability, and many other factors. If we take multiple measurements of electric potentials through many monitor surveys (possibly combined with other types of measurements, such as seismic data, pressure, and temperature), we can create a "historical data set". This can be used for predicting the evolution of fluids in the subsurface by applying forecasting algorithms.

    There are many approaches to time series forecasting, with variable levels of complexity and effectiveness. For example, a simple class of methods is Exponential Smoothing, which is based on a weighted sum of past observations with exponentially decreasing weights [26]. Single Exponential Smoothing (SES) is effective for univariate data without trend or seasonality. Double Exponential Smoothing extends SES by explicitly adding support for trends in the univariate time series. Triple Exponential Smoothing is a further extension that explicitly accounts for seasonality in the univariate time series.
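The SES recursion can be sketched in a few lines of Python; the observation values and the smoothing factor alpha below are hypothetical:

```python
# Hedged sketch of Single Exponential Smoothing (SES): the forecast is a
# weighted sum of past observations with exponentially decreasing weights.
def ses_forecast(series, alpha=0.5):
    """Return the one-step-ahead SES forecast after consuming the series."""
    level = series[0]                            # initialize with first value
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level  # recursive smoothing update
    return level

observations = [10.0, 12.0, 11.0, 13.0]          # illustrative measurements
print(ses_forecast(observations, alpha=0.5))     # prints 12.0
```

A larger alpha weights recent observations more heavily; alpha close to zero yields a forecast dominated by the older history.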

    More complex approaches are based on Multivariate Analysis (MVA). This is useful in problems where multiple measurements are made on each experimental unit and the relations among these measurements and their structure play an important role. In my workflow, I apply both univariate methods (including Exponential Smoothing) and MVA-based approaches. In fact, ERT data sets are internally correlated, for reasons that I explain below.

    Among the various predictive approaches, I use Neural Networks [27,28], which predict the trend of a time series by learning from the historical trends measured in the data. In that frame, a particularly effective approach for predicting the time evolution of a series is Recurrent Neural Networks (RNNs) [29]. The basic idea of RNNs is to add a time delay unit and a feedback connection, so that information from a previous state is used for inferring the model in the subsequent state. We can imagine the simplest RNN as consisting of just three layers: an input layer with d input neurons, a hidden layer with c hidden neurons, and an output layer with q output neurons. At each time point t, the network applies three weight matrices, Wih, Whh, and Who, that connect, respectively, the input to the hidden layer, the hidden layer to itself (the recurrence), and the hidden layer to the output. These weight matrices represent the parameters of the RNN. The hidden layer captures information about what happened in the previous time steps; it can be considered the "memory" of the network. In such a way, RNNs make use of sequential information. In traditional Neural Networks, all inputs are assumed independent of each other; in RNNs, instead, we assume that there is significant correlation across the sequence. For instance, suppose we want to predict the next word in a sentence: we cannot ignore the words that came before it. RNNs were developed for purposes like this (and indeed they are widely applied in computational linguistics).
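A minimal sketch of such a three-layer RNN forward pass could look like the code below; the layer sizes and the random (untrained) weight matrices are illustrative assumptions, since a real network would learn its weights from the historical data:

```python
import numpy as np

d, c, q = 3, 4, 2                          # input, hidden, output sizes
rng = np.random.default_rng(0)
W_ih = rng.standard_normal((c, d)) * 0.1   # input -> hidden
W_hh = rng.standard_normal((c, c)) * 0.1   # hidden -> hidden (recurrence)
W_ho = rng.standard_normal((q, c)) * 0.1   # hidden -> output

def rnn_forward(inputs):
    """Run a sequence through the RNN; the hidden state h is the 'memory'."""
    h = np.zeros(c)                        # memory starts empty
    outputs = []
    for x in inputs:
        h = np.tanh(W_ih @ x + W_hh @ h)   # update memory from input + past
        outputs.append(W_ho @ h)           # read out a prediction
    return outputs

sequence = [rng.standard_normal(d) for _ in range(5)]  # toy input series
outputs = rnn_forward(sequence)
print(len(outputs), outputs[0].shape)
```

The key point is the `W_hh @ h` term: each output depends not only on the current input but on the whole history accumulated in `h`.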

    We can think of RNNs as having a sort of memory state that is used for predicting a model of future states. For that reason, we can expect RNNs to be useful when we want to retrieve a dynamic model based on time-lapse geophysical measurements. This is the case when we wish to predict the fluid movements within a reservoir (hydrocarbons, water, or CO2) using the data acquired in the past through multiple monitor ERT surveys. Indeed, my method applies RNNs to time-lapse ERT data to predict how the resistivity distribution changes over time because of fluid movements. In the section dedicated to synthetic tests, I will discuss illustrative examples of this predictive approach.

    In the previous sub-sections, I have discussed how we can take electric measurements to map fluid displacement in the reservoir, and how we can apply various time-series analysis techniques to predict the evolution of the fluid distribution over time. The next step is the prescriptive part of the workflow. My goal is to use all the measured and predicted data to optimize the decision flow and the future actions, for various purposes. As discussed in the introduction, monitoring and predicting water movements in the reservoir is an important aspect of production management optimization. Furthermore, in the CCS frame, monitoring and predicting the movements of the CO2 plume during and after injection is crucial for maximizing the sweep efficiency and storage capacity of CO2 in geologic formations. Both types of prescriptive problems (oil production and CO2 storage optimization) can be supported by a Reinforcement Learning approach.

    The basic idea of Reinforcement Learning is to emulate natural learning behavior in machines and automata, generally called "agents", operating in their "environment" [30,31,32]. In simple words, in every particular state of its environment, the agent performs some action and receives some type of positive or negative feedback about the appropriateness of its behavior. Let us suppose that the agent and the environment are in a certain state st. The agent selects an action at to take at time t in its environment. That action will affect some aspects of the environment in which the agent itself is operating. The effects of the agent's action are returned by the modified environment in the form of a reward (or a punishment) and a new state st+1. The objective of the agent is to maximize the total reward accumulated during its lifetime (the cumulative reward). In other words, in the Reinforcement Learning framework, the agent has to learn how to optimize its actions over a long-term horizon. The cumulative reward achievable from a given state s is called the value function, V(s). Commonly, a "discount factor" γ is applied to the rewards in order to weight rewards received sooner more heavily than rewards received further in the future. Based on the rewards and/or punishments received from its environment, the agent progressively learns a "policy" for maximizing the cumulative long-term reward. The policy, often denoted by π, is a function that takes the current environment state s, belonging to the set S of all possible states, and returns an action a, belonging to the set A of all possible actions:

    π: S → A. (1)

    Finding the best policy can be a very difficult problem when the agent acts in complex environments with a wide range of possible discrete or continuous states and actions. Moreover, complexity increases further if the rewards are sparse and delayed. There are many different approaches in the class of Reinforcement Learning methods. Among the various algorithms, I selected the Q-Learning method, because its simplicity can be combined with the effectiveness of deep neural networks (Deep Q-Learning). The method takes its name from the so-called Q-function, which represents the "quality" of a certain action given a state. It is a function of a state-action pair and returns a real value:

    Q: S × A → ℝ. (2)

    The optimal Q-function, Q*(s, a), represents the expected total reward received by an agent that starts in state s, picks action a, and then behaves optimally afterwards. The importance of finding the optimal Q-function is that we can use it to extract the optimal policy π*, by choosing for each state s the action a that gives the maximum value Q*(s, a). We have:

    π*(s) = argmaxa Q*(s, a), ∀s ∈ S. (3)

    The Bellman equation tells us that the maximum future reward is the reward r the agent receives for entering the current state s, plus the maximum (discounted) future reward for the next state s':

    Q(s, a) = r + γ maxa' Q(s', a'), (4)

    where γ is the discount factor that reduces the contribution of future rewards with respect to the immediate reward. We can use a recursive formula for finding Q(s, a):

    Qt+1(st, at) = Qt(st, at) + α [rt+1 + γ maxa Qt(st+1, a) − Qt(st, at)]. (5)

    The factor α, called the "learning rate", controls how much we take into account the difference between the previous Q value and the new one. With this approach, we start from random values (or any guess values) for the Q-function. Then, as the agent explores its environment, the initial Q values progressively converge to the optimal ones, based on the positive and/or negative feedback the agent receives after its actions.
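A single recursive update of Eq. (5) can be sketched as follows; the table size, learning rate, discount factor, and reward value are illustrative assumptions:

```python
import numpy as np

n_states, n_actions = 4, 4
Q = np.zeros((n_states, n_actions))    # initial guess (here: all zeros)
alpha, gamma = 0.5, 0.9                # learning rate and discount factor

def q_update(s, a, r, s_next):
    """One update of Eq. (5): move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# Example: from state 0, action 2 led to state 1 with reward 1.0.
q_update(s=0, a=2, r=1.0, s_next=1)
print(Q[0, 2])                         # prints 0.5
```

With alpha = 0.5, the entry moves halfway from its old value (0) toward the target (1.0); repeated visits drive it toward the optimal value.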

    The Q-Learning approach fits well our purpose of finding an efficient action policy for maximizing oil production, as well as for optimizing CO2 storage. In order to apply this method, in both cases I need to clarify what I mean by an agent, what the environment in which it acts is, what a state and an action are, and how we estimate the reward. Finally, I need to state quantitatively what I mean by the best policy: I need to formalize some type of objective function whose optimization leads to an optimal policy for oil reservoir management and/or CO2 storage.

    Let us start from the case of oil production. As said, an important aspect of oil recovery maximization is mitigating the risk of undesired phenomena of water invasion into the production wells, such as coning and cresting. Our goal is to properly regulate the production and/or injection parameters (pressure, production rate, number and location of injection wells, and so forth) in order to obtain high oil production and, at the same time, a low risk of water breakthrough. This goal can be achieved if we can continuously monitor the distance of the waterfront from the production well(s). In fact, it is of primary importance to be able to track the complex dynamics of fluid movements (using the ERT method, in my approach) in order to prevent the invasion of water into the well(s). Consequently, we can learn how to regulate the production parameters so as to reach an optimal balance between production rates, water injection rates, and water breakthrough risk. This is a typical optimization problem, and it can be addressed with the Q-Learning method. In this context, every production or injection well (or every production compartment of an individual well) can represent an agent. The state of each agent is defined by its production parameters, together with the corresponding water breakthrough risk. The action of each agent consists of regulating the production/injection variables. A positive reward corresponds to high oil production and, at the same time, a low risk of water breakthrough; this risk is inversely proportional to the distance of the waterfront from the production well (estimable through ERT).

    Having these basic concepts in mind, in order to apply the Q-Learning method we start by constructing an M × M action-state matrix (Table 1). This matrix represents the transition rules and the corresponding rewards in our environment. It consists of M possible states (rows) that can change into another state (action, or transition). Hence, in each state, the agent can choose among M potential actions (columns). Depending on the particular state the agent is in and the action it then takes, there is a unique distribution for the transition to the next state. This means that the transition probabilities to any next state depend only on the previous state and the action then taken (i.e., we are assuming the conditions of a Markov Decision Process, briefly MDP).

    Table 1.  Example of R-Matrix (or R-Table) with assigned rewards for each state transition. Each reward value Rij is assigned as explained in the text.


    After each of the agent's actions, the agent gets a reward Rij (i.e., the reward obtained when moving to state j from state i). As said earlier, we need to define a reward function that represents, for each state in our process, the way to estimate the expected value of the reward for an action taken in that state. I set this function to be proportional to the oil production rate and to its increment when moving from one state to another. In our case, the state transition consists of regulating the production parameters (for instance, increasing the oil production rate in a certain well, or in a certain sector of a well). Furthermore, the reward must be decreased by a penalization term when the oil rate increment causes a water displacement that increases the risk of water breakthrough in the production well. Such a risk is related to the waterfront distance from the well: a large distance (based on ERT models) corresponds to a low water breakthrough risk, and vice versa. I set the reward function using the following simple formula:

    R(s, s', a) = Pcur(s) + ΔP(s, s', a) − r(s, s', a), (6)

    where Pcur(s) is the oil production in the current state s, ΔP(s, s', a) is the variation of oil production when moving to the next state s' (through the action a), and r(s, s', a) is the corresponding penalization term for the variation of water breakthrough risk. That risk term is inversely proportional to the (observed or predicted) average distance variation, Δdwater, of the waterfront from the production well (or, better, from the sector of the well equipped with the electrodes). That distance variation is determined through the ERT survey performed at state s, together with the predictive method discussed above applied to state s'. We have:

    r(s, s', a) ∝ 1/Δdwater(s, s', a). (7)

    For example, if the agent wi (a certain production well) is in a state si (i.e., it is characterized by certain production parameters and a certain water breakthrough risk) and it takes an action ak (for instance, it increases the production rate without significantly affecting the waterfront distance), then we assign a positive reward to that state transition, without any risk penalization term. The reason is that, after moving into the new state, our well has improved its oil production without significantly increasing the risk of water breakthrough. Instead, if the agent wi is in the state si and takes another action al that increases the production rate, but with the undesired effect of significantly increasing the risk of water breakthrough, then we include the penalization term (7) in the total reward. After assigning a total reward to each action-state transition, we obtain the reward matrix or, briefly, the R-Matrix. Table 1 shows an example of this matrix obtained from a synthetic test (discussed in the following), where I simulated some schematic transitions between states characterized by different production and risk values. I simulated a scenario in which the production increment causes the displacement of water toward the production well. Consequently, the reward of each transition is penalized every time the waterfront advances toward the well itself.

    For instance, the transition from the "Low Production—Low Risk" state (LP-LR in the matrix of Table 1) to the "High Production—Medium Risk" state (HP-MR) has a Reward of 0.6 (highlighted in red). The normalized production and risk values used for the rewards computation are shown above and on the left of the table, for each row and each column.
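The reward logic of Eqs. (6) and (7) can be sketched as follows; the proportionality constant k and all numerical values are hypothetical, since Eq. (7) defines the risk term only up to such a constant:

```python
def reward(p_cur, delta_p, delta_d_water, k=1.0):
    """R = Pcur + dP - r, where the risk penalization r is inversely
    proportional to the waterfront distance variation (from ERT models).
    k is a hypothetical proportionality constant."""
    risk = k / delta_d_water            # Eq. (7): r proportional to 1/Δd_water
    return p_cur + delta_p - risk       # Eq. (6): penalized total reward

# Action that raises production while the waterfront stays far away:
print(reward(p_cur=10.0, delta_p=2.0, delta_d_water=100.0))  # small penalty
# Same production gain, but the waterfront is drawn close to the well:
print(reward(p_cur=10.0, delta_p=2.0, delta_d_water=2.0))    # large penalty
```

The two calls illustrate the trade-off: the same production increment earns a lower reward when it brings the waterfront closer to the well.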

    Creating an R-Matrix is just the starting point of the Q-Learning optimization process. In fact, the R-Matrix is a schematic representation of the rewards that we receive from the environment in which our agent acts. I recall that our objective is to optimize the entire process of action-state transitions in the agent's environment; in our case, the final goal is to reach, as soon as possible, a stable condition of maximum oil production with minimum (or, at least, controlled) risk of water breakthrough. Such an optimum condition is obtained through the iterative process of updating the Q values through the Bellman equation (4) and the associated recursive formula (5). As said earlier, we initially set random (or null) values for the Q-function. Then, the agent explores the environment and acts in it, getting positive and/or negative feedback from it. In this process, the agent always tries to optimize its rewards by taking the best possible action that it has learned. In such a way, the initial Q values are progressively updated through the recursive Bellman formula and converge to the optimal ones. The result is an updated Q-Table (or Q-Matrix). Such a matrix allows defining the optimal policy in our environment, in terms of the i, j steps with the maximum long-term reward. For illustrative purposes, I prepared a tutorial Python code showing in detail all the Q-Learning steps mentioned above [33].
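The iterative convergence from an R-Matrix to a Q-Table and an optimal policy can be sketched as follows; the 3 × 3 R-matrix, the hyperparameters, and the random exploration scheme are illustrative toy assumptions, not the values of Table 1 or of the tutorial code [33]:

```python
import numpy as np

# Hypothetical R-Matrix: R[i, j] is the reward for moving from state i
# to state j; transitions toward state 2 pay best.
R = np.array([[0.0, 0.2, 0.1],
              [0.1, 0.0, 0.8],
              [0.2, 0.3, 1.0]])
n = R.shape[0]
Q = np.zeros((n, n))                    # null initial Q-Table
alpha, gamma = 0.8, 0.9                 # learning rate, discount factor
rng = np.random.default_rng(1)

for _ in range(2000):                   # exploration episodes
    s = rng.integers(n)                 # random starting state
    a = rng.integers(n)                 # random exploratory action
    s_next = a                          # action a deterministically moves to state a
    # Recursive Bellman update, Eq. (5):
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])

policy = Q.argmax(axis=1)               # Eq. (3): best action for each state
print(policy)
```

After enough episodes, the Q values stabilize and the row-wise argmax of the Q-Table yields the long-term-optimal transition from every state.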

    A crucial question is how to implement these theoretical concepts in a pragmatic approach. To allow practical applications, the Q-Learning optimization process must be translated into an "automatic and iterative tuning mechanism" for the production/injection variables. Such a tuning procedure is possible if it can exploit the continuous information available from our agents (the production/injection wells) about their dynamic environment. This is precisely the information that we can obtain, in terms of oil production rate and water-table distance, using ERT data. In fact, assuming that a continuous and permanent ERT monitoring/mapping system is installed in each well, we can obtain real-time updates on the movements of fluids over a wide range of distances around and between the wells. Furthermore, we have seen in section 2.2 (Predictive methods) that we can also forecast the fluid movements. This means that we can associate a value of actual and future water-breakthrough risk with each state/action during the process of oil production. In other words, we can construct our R-Matrix and update it when necessary during the production history of our wells. Finally, the agents (our wells) learn, through the Q-Learning algorithm, how to set an optimal policy for maximizing production and minimizing the water-breakthrough risk at each moment of the production history. Consequently, all the production/injection parameters are regulated based on that optimal policy. The scheme of Figure 2 summarizes the entire approach described above.

    Figure 2.  Schematic block diagram for an optimal policy definition supported by ERT permanent fluid mapping system and Q-Learning iterative method.

    An analogous workflow, with some necessary modifications, can be applied to define an optimal policy for the management of CO2 geological storage. Following Shamshiri and Jafarpour [13], in this case the goal is to maximize the sweep efficiency and storage capacity of CO2 in geologic formations through controlled injection. That goal can be reached by adjusting injection-rate allocations while accounting for aquifer heterogeneity. In this case, "the objective is to maximize the sweep efficiency by mobilizing the injected CO2 uniformly in all directions and exposing a larger volume of fresh brine to the CO2 plume, leading to an enhanced solubility and residual trapping".

    To formalize this idea, the authors cited above introduce the concept of pseudo production wells. These are virtual producers with negligible total fluid production rate, introduced in the simulation solely to indicate the arrival of CO2 (and its fractional production rate). That information (the arrival time of CO2) is used to equalize production rates at all pseudo-producers and to delay CO2 breakthrough. Finally, Shamshiri and Jafarpour formalize their optimization problem through an objective function whose first term penalizes the difference between CO2 production rates at the different pseudo production wells. A second term is introduced in the objective function to delay the CO2 breakthrough at the same pseudo-production wells.
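    The two-term objective can be sketched schematically as follows. The exact functional form used by Shamshiri and Jafarpour is not reproduced here, so both terms and their weights are illustrative assumptions written in their spirit.

```python
import numpy as np

def co2_sweep_objective(rates, arrival_times, w_eq=1.0, w_bt=1.0):
    """Schematic two-term objective in the spirit of Shamshiri and
    Jafarpour: the first term penalizes unequal CO2 fractional rates
    at the pseudo-producers, the second rewards late breakthrough.
    Functional form and weights are illustrative, not from the paper."""
    rates = np.asarray(rates, dtype=float)
    arrival_times = np.asarray(arrival_times, dtype=float)
    equalization = np.sum((rates - rates.mean()) ** 2)  # rate-difference penalty
    breakthrough = -np.sum(arrival_times)               # minimizing delays arrivals
    return w_eq * equalization + w_bt * breakthrough
```

    Minimizing this quantity favors equalized CO2 rates at the pseudo-producers and late breakthrough times, which is the intent of the two terms described above.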

    I use the same approach in my problem formalization. However, unlike Shamshiri and Jafarpour, I do not need to introduce the concept of pseudo-production wells. In fact, as explained earlier, we can monitor the movements of the injected CO2 through real measurements. Indeed, we can use permanent ERT layouts installed in the injection wells (as well as in additional monitoring wells) to control (and to predict) the CO2 movements in all directions.

    An additional fundamental difference with respect to the approach of Shamshiri and Jafarpour lies in my technique for optimizing the injection control parameters. Unlike Shamshiri and Jafarpour, my idea is to adjust these parameters through the Q-Learning approach. In other words, I search for an optimal policy through the iterative application of the Bellman equation, as explained in the previous sub-section. This time, the goal is to make the CO2 movement in the reservoir as uniform as possible, despite the existing heterogeneities and preferential flow paths. Figure 3 shows the block diagram aimed at optimizing CO2 geological storage. It is conceptually similar to the workflow for oil-production optimization (Figure 2), but with the necessary adaptations to the problem of CO2 storage (highlighted in red).

    Figure 3.  Schematic block diagram for an optimal policy definition for CO2 geological storage, supported by ERT permanent CO2 mapping system and Q-Learning iterative method.

    In this section, I discuss 2D and 3D ERT time-lapse synthetic tests. The discussion proceeds in three parts. First, I show how we can use the ERT method for monitoring the CO2 plume during injection. Next, I show how we can map oil displacement by water during production. Finally, I show synthetic tests concerning the prediction of future fluid displacements, using the predictive methods introduced in section 2.2.

    The Carbon Capture and Storage (CCS) workflow includes CO2 capture from large stationary sources, transport to selected locations and, finally, injection and storage into porous geological formations, with the final goal of isolating CO2 from the atmosphere [34]. Unfortunately, the possibility of leakage from the storage reservoirs cannot be excluded, and leakage can have serious consequences. Leaked fluids can migrate along faults or other discontinuities, accumulating in formations overlying the storage formation. Consequently, they can interfere with other subsurface activities (such as natural gas storage or deep-well injection of wastes), or can cause groundwater contamination. Finally, CO2 can reach the surface and escape into the atmosphere. To mitigate the leakage risk, it is crucial to have one or more effective methods for monitoring the diffusion of the CO2 plume in the subsoil over time. As I have said earlier, the ERT method offers a significant opportunity to map the movements of CO2 in the subsurface, if we assume that the CO2 significantly affects the bulk resistivity after injection into the storage formation. That assumption is reasonable, supported by modelling, and confirmed by real experiments of CO2 storage and monitoring. An effective way to apply ERT for this purpose is to use borehole arrays of densely spaced electrodes in multiple injection/monitoring wells. With this approach, we can expect to perform permanent geo-electric monitoring of the CO2 movements with relatively high spatial resolution in the rock volume between the wells.

    For illustrative purposes, I start with a simple 2D time-lapse inversion test of synthetic cross-well ERT data. My goal is to show the benefits and limitations (in terms of spatial accuracy and model uncertainties) that we can expect when we acquire and invert ERT data using borehole layouts [35]. I simulate a simplified scenario of CO2 injection, with just one injection well and one monitoring well at a distance of 250 m. I assume that the wells cross an aquifer in a geological formation. In this test, the inversion algorithm inverts the electric potentials and tries to retrieve the resistivity distribution directly. Fluid saturation can be estimated in a separate step, using, for instance, Archie's second law. The uncertainty propagation from the electric potential data to the resistivity model space is estimated by assigning, in the input data file, an error vector describing the standard deviation of reciprocal measurements for each electric quadrupole [36]. The uncertainty propagation from the resistivity model to the saturation model depends on the uncertainties in porosity and in the parameters of the petro-physical relationship(s) used for the resistivity-saturation conversion, as discussed in Dell'Aversana et al. [37]. In these synthetic tests, I assume that the porosity and the parameters of Archie's law are perfectly known.
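    The separate saturation-estimation step can be sketched by inverting Archie's law for the water saturation Sw. The parameter defaults below (a = 1, m = n = 2) are common textbook values, here assumed perfectly known, as stated above.

```python
def water_saturation(rho_t, rho_w, phi, a=1.0, m=2.0, n=2.0):
    """Invert Archie's law  rho_t = a * rho_w * phi**(-m) * Sw**(-n)
    for the water saturation Sw, given the bulk resistivity rho_t,
    the brine resistivity rho_w and the porosity phi. The defaults
    a, m, n are common textbook values (illustrative assumptions)."""
    return (a * rho_w / (rho_t * phi ** m)) ** (1.0 / n)
```

    Applying this cell by cell to an inverted resistivity model yields a saturation model, with uncertainties inherited from the porosity and from a, m, n.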

    For each inversion, I set the average background resistivity to 1 Ωm. The depth scale in the reference figures is a relative depth, ranging from 0 m to 400 m. I assume that the CO2 injection into this aquifer formation creates a significant resistivity contrast (two orders of magnitude) between the background geology and the rock formation affected by the CO2 plume.

    For this specific test, I have used the open-source "R2" software package based on Python libraries [38,39,40,41]. Furthermore, I have used the "pyres" Python wrapper package for R2, which creates a powerful and flexible programmatic interface for modeling and inversion of ERT datasets using the NumPy, SciPy, and Matplotlib Python packages. R2/pyres computes the forward/inverse solution for 3D or 2D current flow on a finite-element mesh, which can be quadrilateral (structured or unstructured) or triangular (unstructured). Electrodes are specified at node points corresponding to the corners of the elements. Node points on its perimeter define each element, and the voltage is computed at each node point given a dipole current source applied at a pair of nodes. The boundary conditions along all four boundaries of the mesh are Neumann conditions (zero flux). As we will see in the following test results (shown in Figure 4), the elements of the mesh gradually increase in size laterally and vertically outside the region of investigation. The inverse solution is based on a regularized objective function combined with weighted least squares (an Occam's-type solution), as defined in Binley and Kemna [39].

    Figure 4.  Example of 2D sequential inversions of synthetic cross-well ERT data.

    In the test described here, I put electrodes in both wells at a constant spacing of 5 m over a thickness of 120 m, assuming that this is the total thickness of the aquifer. Furthermore, I added some electrodes below the aquifer zone (from 120 m to 250 m, with a larger spacing than above). I simulated the acquisition of a base survey at time T0 (first row in the figure) and of two monitor surveys at times T1 and T2 (rows 2 and 3). I inverted the synthetic response for each simulated scenario, using a uniform starting model of 1 Ωm. The first column of panels shows the "true models" at each time. We can see a high-resistivity anomaly growing over time, caused by the expanding CO2 plume and progressively moving from the injection well towards the monitoring well. The white dotted line highlights the boundaries of the plume, although in reality we can expect a smoothed transition zone in terms of resistivity values. The second column of panels shows the inverted model of the synthetic data associated with each individual survey from T0 to T2. Finally, the third column of panels shows the same inverted models together with the inversion mesh. A zoomed image is shown in Figure 5.

    Figure 5.  Zoomed image of the inverted model at time step T0 showing also the unstructured finite-element mesh.

    The first general observation is that the "true models" are reproduced quite well by the inversion, even though I added 5% noise to the input data. The evolution of the CO2 plume (the resistivity anomaly) over time appears clearly, in terms of both growing size and direction of migration. Of course, there are unavoidable approximations and uncertainties in the inverted models. The "true shape" is not reproduced accurately everywhere, and it generally appears overestimated in size because of the smoothing operator included in the inversion algorithm. Consequently, the resistivity of the CO2 plume is underestimated. However, these limitations can be mitigated by a better choice of the mesh/inversion parameters, which can be set in the input files of the R2 software. For instance, we can improve the mesh using different parameters for the element number and size. In my test, I tried to find an optimal compromise between accuracy and computation time. The right panels of Figure 4 clearly show that I have used a detailed mesh in the area where the electrodes are densely spaced, whereas the mesh elements become larger in the area below. Of course, the mesh and the smoothing parameters can be better tuned based on the user's experience. Finally, the inversion result can be improved depending on the available computational resources and on the electrode layout.

    The key message of this test is that the ERT method can provide, in principle, an effective approach for mapping the space-time evolution of the CO2 plume during and after injection. The problem of resistivity underestimation is mitigated by the fact that, in time-lapse applications, we are interested mainly in the relative variations of resistivity rather than in its absolute values. This point will be better illustrated in the next 3D simulation test. The main limitation of this approach is that the wells in which we put the electrodes cannot be too far apart. Based on modelling and real experiments, we can say that the spacing between wells should not exceed 300-400 m, depending also on the length of the borehole layout. In fact, longer layouts allow longer investigation distances. However, the investigated volume can be expanded if the borehole ERT layout is complemented by a 3D surface layout.

    The following 3D test simulates a scenario of a significant resistivity change in an oil reservoir and below it, caused by the water table approaching four horizontal production wells. This test is described with additional details in a previous paper. Here, I recall just the key aspects of this simulation.

    In the production scenario of this test, I simulated the response of a DC cross-hole acquisition survey in which a multi-channel electrode system is deployed in four parallel horizontal wells. These are located at a constant mutual distance of 250 m. The wells are at a constant depth of 2340 m below the surface, at the top of the reservoir. In each well, I simulated an acquisition layout consisting of 15 electrodes spaced 25 m apart.

    Figure 6 shows the grid used for the modelling and the four horizontal wells where the electrodes (black dots) are installed for acquisition. The grid consists of irregular rectangular cells, whose size varies as a function of the electrode spacing. The maximum expected spatial resolution of the inverted model parameter (resistivity, in this case) corresponds to the minimum half-spacing between the electrodes.

    Figure 6.  Modelling grid and electrodes' layout in the four horizontal wells.

    I used the "PUNQ-S3 reservoir model" developed at Imperial College London [42]. It represents a small industrial reservoir scenario consisting of 19 × 28 × 5 grid blocks. A fault system bounds the modelled hydrocarbon field to the South and East, while an aquifer bounds the reservoir to the North and West. The porosity/permeability fields were generated with a geostatistical model based on Gaussian Random Fields, and a synthetic production history was simulated using the reservoir simulator ECLIPSE. I selected a few significant scenarios of variation in fluid saturation in order to define a base model and a few monitor models. Then, I applied Archie's second law [43] to transform the porosity and saturation distributions into the corresponding resistivity model. Finally, I simulated the DC acquisition using the derived resistivity model, which consists of five levels (each 10 m thick) with variable resistivity. The left panel of Figure 7 shows an example view of the base resistivity model of the reservoir; the orange/red colors indicate resistivities of the order of 100 Ωm, while the blue color indicates about 3-4 Ωm. The resistivity of the background (transparent in the figure) is set to 1 Ωm.

    Figure 7.  Maps of the base reservoir model (left) and monitor reservoir model (right), when the waterfront is approaching the wells.

    The reservoir is discretized by square cells of 100 m × 100 m with a thickness of 10 m. The right panel of the same figure shows the monitor reservoir model. The acquisition layout, with the electrodes installed in the four wells, is highlighted in red. The monitor model shows a significant variation in conductivity (or, equivalently, in resistivity). In this simulation, I assumed that the waterfront approaches the wells from the South-West side (bottom left in Figure 7) and from below. This waterfront is responsible for the decrease in resistivity observed below the wells. I simulated the acquisition on both the base and the monitor models, before and after the movement of water, using a "Mixed-dipole gradient" array and recording 2145 electric potentials in total. To simulate noisy data, I contaminated the synthetic response with 5% Gaussian noise. Finally, I inverted the data using a Robust inversion algorithm that minimizes the absolute difference between the measured and calculated apparent resistivity [44].
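    The noise-contamination step can be sketched as follows: a minimal illustration applying multiplicative Gaussian noise at the 5% level to each of the 2145 simulated potentials (the unit-amplitude test array is a placeholder, not the actual simulated response).

```python
import numpy as np

def add_gaussian_noise(data, level=0.05, seed=0):
    """Contaminate a synthetic response with multiplicative Gaussian
    noise (5% of each value's amplitude, as in the test)."""
    rng = np.random.default_rng(seed)
    return data * (1.0 + level * rng.standard_normal(data.shape))

# placeholder for the 2145 simulated electric potentials
clean = np.ones(2145)
noisy = add_gaussian_noise(clean)
```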

    I subtracted the simulated response in the two scenarios (base and monitor models). The result represents the Difference data vector, i.e. the input for the time-lapse Difference inversion. The inverted model represents the Difference conductivity model. This is the 3D spatial distribution of the changes of conductivity from the base to the monitor model. As an example of the inversion results, Figure 8 shows three displays of the anomaly retrieved through Difference inversion at three different depths. It is expressed in terms of percentage variation of electric conductivity, where the green color is the reference 0% of variation.
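    The construction of the Difference data vector, and the percentage conductivity variation displayed in Figure 8, can be sketched as follows (array names are illustrative):

```python
import numpy as np

def difference_data(d_base, d_monitor):
    """Input vector for time-lapse Difference inversion:
    monitor response minus base response."""
    return np.asarray(d_monitor, dtype=float) - np.asarray(d_base, dtype=float)

def conductivity_change_percent(sigma_base, sigma_monitor):
    """Percent variation of electric conductivity between the base and
    monitor models (0% = no change, the green reference color)."""
    sigma_base = np.asarray(sigma_base, dtype=float)
    sigma_monitor = np.asarray(sigma_monitor, dtype=float)
    return 100.0 * (sigma_monitor - sigma_base) / sigma_base
```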

    Figure 8.  Variation of the conductivity distribution retrieved by Difference inversion at 60, 90 and 160 m below the wells (from left to right, respectively). Color scale: Green = 0%; Blue = 15-20%; Violet = 30%. Yellow, Orange and Red indicate negative variations in the color scale.

    This test confirms that we can detect, in principle, significant variations of electric conductivity associated with waterfront movements, using a cross-well electrode layout permanently installed in production wells and performing time-lapse DC tomography. As already remarked, one of the main benefits of DC tomography is the simplicity and speed with which we can acquire repeated surveys and invert time-lapse data sets (for instance, on a daily basis). This allows quasi-real-time reservoir monitoring, which is an effective approach for continuous control of how the waterfront responds to oil production. Figure 8 shows just one example of the geo-electrical response that we can expect from time-lapse DC tomography in the case of significant movements of the waterfront below the production wells. Finally, the time-space correlation of water movements in response to oil production is fundamental information for optimizing decisions and actions during production.

    Of course, before moving from modelling results to real applications we need to solve practical, operational and engineering issues. In particular, it is crucial to define appropriate construction solutions for the well completion. We discussed those important issues and the possible solutions in dedicated papers, including significant results of laboratory experiments.

    Figure 9 shows a schematic 2D scenario where the water table (resistivity = 1 Ωm) is moving upward (dashed lines), following an irregular dynamic trend. We can imagine that this is a typical situation where oil over-production or a waterflooding technique is causing an undesired "cresting" phenomenon. In this simulation, I deployed a permanent ERT layout (electrode spacing: 10 m) in a horizontal well (relative elevation = 0 m). My goal is to monitor the water movements in order to avoid a possible invasion of the producing well. By running many such monitor surveys, I created a dynamic database of electric potential measurements over time. Finally, I used these measurements to train my RNN predictive algorithm.

    Figure 9.  Simulation scenario. Black dashed curve: waterfront at time step T0. Red dashed line: waterfront at time step T1. Vertical axis represents a relative depth with respect to the horizontal well (thick black line).

    Figure 10 shows the apparent resistivity pseudo section obtained through the reference ERT survey at time T0, when the water table is marked by the black dashed curve in the previous figure.

    Figure 10.  Apparent resistivity pseudo-section at reference time step T0.

    Figure 11 shows the ERT section obtained by inversion of the apparent resistivity pseudo-section of Figure 10. Looking at this resistivity model, it is clear that fluid movements in a certain area will affect the response of a cluster of observation points. Consequently, each prediction should be based on the history of a cluster of correlated measurements collected over time in the same "zone". For example, looking at Figure 11, we expect the measurements at Q1 and Q3 to be correlated with the measurements at Q2 (where Q1, Q2 and Q3 indicate three examples of theoretical image points illuminated by the ERT response). Such a spatial correlation between clusters of measurements in the data set justifies my multivariate approach.

    Figure 11.  Example of ERT model (obtained through inversion of the pseudo-section of Figure 10), in a scenario of detection of fluids below the measurement points.

    Using a large database of apparent resistivity pseudo-sections (like that of Figure 10) obtained through a sequence of ERT monitor surveys, we can try to predict the water table position at the next time step. For that purpose, I applied a well-known predictive approach: the Multivariate LSTM Forecast Model [45]. LSTM stands for "Long Short-Term Memory"; it is a type of Recurrent Neural Network that can retain long-term dependencies from many time steps before. My objective is to fit the LSTM predictive algorithm to the ERT time-lapse data.

    Let us review the key steps of the workflow. First, I split the historical dataset of multiple monitor surveys (apparent-resistivity or, equivalently, electric-potential pseudo-sections) into train and test sets. Next, I run the training phase on these past data. It is possible to monitor the training results by plotting the loss function vs. the number of epochs (Figure 12).
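    The data-preparation step can be sketched with NumPy: the sequence of monitor surveys (time steps × measurement points) is framed as supervised samples, where a window of past surveys predicts the next one, followed by a chronological train/test split. The window length, array sizes and split ratio are illustrative assumptions; the resulting 3D arrays have the (samples, time steps, features) shape that an LSTM implementation would consume.

```python
import numpy as np

def make_supervised(series, n_lag=3):
    """Frame a multivariate time series (time steps x channels, e.g.
    monitor surveys x measurement points) as supervised samples:
    n_lag past surveys predict the next one."""
    X, y = [], []
    for t in range(n_lag, len(series)):
        X.append(series[t - n_lag:t])   # window of past surveys
        y.append(series[t])             # next survey to predict
    return np.array(X), np.array(y)

# illustrative dataset: 20 monitor surveys at 5 measurement points
data = np.random.default_rng(0).normal(size=(20, 5))
X, y = make_supervised(data, n_lag=3)
split = int(0.8 * len(X))               # chronological train/test split
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```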

    Figure 12.  Trend of training and validation loss functions vs. number of iterations.

    After the LSTM model is fit, I use it to forecast the future data. This means predicting the future electric potentials or, equivalently, the apparent resistivity pseudo-section, in a changed scenario of the water table (such as the one represented by the red dashed line in Figure 9).

    Figure 13 shows the comparison between the "true" future and the predicted future data, two time steps ahead, at a single measurement point. This is, for instance, one measurement point of a pseudo-section like the one of Figure 10. In this example, I know the "true" future in advance, because this is a synthetic test. This allows me to check the reliability of my LSTM Forecast Model. We see that the forecasts are very accurate. Such accuracy is possible because the prediction is based on the past measurements using a multivariate approach. In other words, the LSTM Forecast Model, if trained properly on correlated past data, can provide robust predictions.
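    In a synthetic test, this reliability check reduces to comparing predicted and "true" future values; a simple metric for this is the root-mean-square error (a generic choice for illustration, not necessarily the metric used here):

```python
import numpy as np

def rmse(true, pred):
    """Root-mean-square error between 'true' future and predicted
    future values at a measurement point."""
    true = np.asarray(true, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return float(np.sqrt(np.mean((true - pred) ** 2)))
```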

    Figure 13.  Example of multiple step predictions. Time step 0 corresponds to the first future monitor survey, negative time steps correspond to the past surveys.

    Combining the predictions of electric potentials at all measurement points, I create a table of electric potentials (Table 2). It includes the data for all the measurement points along the entire acquisition layout, observed at the past time steps (i.e., past monitor surveys) and predicted for future time steps (i.e., future monitor surveys).

    Table 2.  Electric potentials at the various measurement points (here indicated as "Quadrupole" mid points), acquired through past monitor surveys (data columns for negative time steps) and used for predictions (last column).


    Finally, the predicted potentials are used for creating a predicted apparent resistivity pseudo-section (Figure 14). Inverting it, I obtain a predicted resistivity model (Figure 15).
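    The conversion from predicted potentials to apparent resistivity follows the standard relation rho_a = K·V/I, where K is the geometric factor of each quadrupole (the numerical values in the test below are illustrative):

```python
def apparent_resistivity(voltage, current, geometric_factor):
    """Apparent resistivity of a quadrupole measurement:
    rho_a = K * V / I, where K depends only on the electrode
    geometry of the quadrupole."""
    return geometric_factor * voltage / current
```

    Applying this relation to every predicted potential yields a predicted pseudo-section like the one of Figure 14, which is then inverted as usual.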

    Figure 14.  Predicted apparent resistivity section.
    Figure 15.  Predicted ERT model at time step T1.

    As an additional illustrative example, I applied the same predictive approach to the 2D data set discussed in section 3.1 (2D borehole ERT for CO2 monitoring). Figure 16 shows the model inverted from the data predicted just one time step ahead. The inverted resistivity model represents the predicted CO2 plume at the "future" time step T3, where the synthetic ERT data at time T3 have been predicted through the LSTM Forecast Model.

    Figure 16.  CO2 plume prediction at time T3 (compare with Figure 4).

    Despite the obvious differences between optimizing oil production and optimizing CO2 storage, there are common issues and possible similar solutions. Many crucial problems are associated with the fact that fluids generally show high mobility in the subsoil. Hence, possible solutions depend on our ability to map in real time, and possibly to predict, the movements of these fluids in the reservoir. In this paper, I have introduced a complete workflow to optimize the key steps in both processes: oil production and safe CO2 storage and monitoring. I have discussed the benefits (and the limitations) offered by electric (and electromagnetic) methods in the crucial phase of mapping fluids over time, where the fluids can be brine, hydrocarbons or CO2, depending on the context.

    The main conclusions can be summarized as follows: 1) Electric Resistivity Tomography (ERT) is a valid tool for mapping fluid displacement over time, during oil production as well as CO2 injection. Synthetic tests show that this method is effective for mapping reservoir fluids. Resolution increases if densely spaced electrodes are deployed in the production/injection wells. These wells must be spaced at distances no larger than a few hundred meters. 2) Future displacement of fluids can be predicted using Recurrent Neural Networks applied to the historical data recorded by Electric Resistivity Tomography layouts through multiple monitor surveys. 3) Q-Learning represents a valid approach for setting an optimal prescriptive policy for managing oil production as well as CO2 injection. Such an optimal policy is based on the historical time-lapse Electric Resistivity data combined with the Recurrent Neural Network predictions.

    The author declares no conflicts of interest in this paper.



    [1] M. A. Stafford, L. Corey, Y. Cao, E. S. Daar, D. D. Ho, A. S. Perelson, Modeling plasma virus concentration during primary HIV infection, J. Theor. Biol., 203 (2000), 285–301. https://doi.org/10.1006/jtbi.2000.1076 doi: 10.1006/jtbi.2000.1076
    [2] A. S. Perelson, A. U. Neumann, M. Markowitz, J. M. Leonard, D. D. Ho, HIV-1 dynamics in vivo: virion clearance rate, infected cell life-span, and viral generation time, Science, 271 (1996), 1582–1586. https://doi.org/10.1126/science.271.5255.1582 doi: 10.1126/science.271.5255.1582
    [3] S. M. Ciupe, R. M. Ribeiro, P. W. Nelson, A. S. Perelson, Modeling the mechanisms of acute hepatitis B virus infection, J. Theor. Biol., 247 (2007), 23–35. https://doi.org/10.1016/j.jtbi.2007.02.017 doi: 10.1016/j.jtbi.2007.02.017
    [4] S. M. Ciupe, H. Dahari, A. Ploss, Mathematical models of early hepatitis B virus dynamics in humanized mice, Bull. Math. Biol., 86 (2024), 53. https://doi.org/10.1007/s11538-024-01284-2 doi: 10.1007/s11538-024-01284-2
    [5] M. A. Myers, A. P. Smith, L. C. Lane, D. J. Moquin, R. Aogo, S. Woolard, et al., Dynamically linking influenza virus infection kinetics, lung injury, inflammation, and disease severity, eLife, 10 (2021), e68864. https://doi.org/10.7554/eLife.68864 doi: 10.7554/eLife.68864
    [6] P. Baccam, C. Beauchemin, C. A. Macken, F. G. Hayden, A. S. Perelson, Kinetics of influenza A virus infection in humans, J. Virol., 80 (2006), 7590–7599. https://doi.org/10.1128/jvi.01623-05 doi: 10.1128/jvi.01623-05
    [7] R. Ben-Shachar, K. Koelle, Minimal within-host dengue models highlight the specific roles of the immune response in primary and secondary dengue infections, J. R. Soc. Interface, 12 (2015), 20140886. https://doi.org/10.1098/rsif.2014.0886 doi: 10.1098/rsif.2014.0886
    [8] R. Nikin-Beers, S. M. Ciupe, Modelling original antigenic sin in dengue viral infection, Math. Med. Biol., 35 (2018), 257–272. https://doi.org/10.1093/imammb/dqx002 doi: 10.1093/imammb/dqx002
    [9] R. Nikin-Beers, S. M. Ciupe, The role of antibody in enhancing dengue virus infection, Math. Biosci., 263 (2015), 83–92. https://doi.org/10.1016/j.mbs.2015.02.004 doi: 10.1016/j.mbs.2015.02.004
    [10] K. Best, J. Guedj, V. Madelain, X. de Lamballerie, S. Lim, C. E. Osuna, et al., Zika plasma viral dynamics in nonhuman primates provides insights into early infection and antiviral strategies, Proc. Natl. Acad. Sci., 114 (2017), 8847–8852. https://doi.org/10.1073/pnas.1704011114 doi: 10.1073/pnas.1704011114
    [11] R. Ke, C. Zitzmann, D. D. Ho, R. M. Ribeiro, A. S. Perelson, In vivo kinetics of SARS-CoV-2 infection and its relationship with a person's infectiousness, Proc. Natl. Acad. Sci., 118 (2021), e2111477118. https://doi.org/10.1073/pnas.2111477118 doi: 10.1073/pnas.2111477118
    [12] N. Heitzman-Breen, S. M. Ciupe, Modeling within-host and aerosol dynamics of SARS-CoV-2: The relationship with infectiousness, PLoS Comput. Biol., 18 (2022), e1009997. https://doi.org/10.1371/journal.pcbi.1009997 doi: 10.1371/journal.pcbi.1009997
    [13] S. M. Ciupe, J. M. Heffernan, In-host modeling, Infect. Dis. Model., 2 (2017), 188–202. https://doi.org/10.1016/j.idm.2017.04.002
    [14] S. M. Ciupe, J. M. Conway, Incorporating intracellular processes in virus dynamics models, Microorganisms, 12 (2024), 900. https://doi.org/10.3390/microorganisms12050900
    [15] M. Chung, M. Binois, R. B. Gramacy, J. M. Bardsley, D. J. Moquin, A. P. Smith, et al., Parameter and uncertainty estimation for dynamical systems using surrogate stochastic processes, SIAM J. Sci. Comput., 41 (2019), A2212–A2238. https://doi.org/10.1137/18M1213403
    [16] H. Miao, C. Dykes, L. M. Demeter, J. Cavenaugh, S. Y. Park, A. S. Perelson, et al., Modeling and estimation of kinetic parameters and replicative fitness of HIV-1 from flow-cytometry-based growth competition experiments, Bull. Math. Biol., 70 (2008), 1749–1771. https://doi.org/10.1007/s11538-008-9323-4
    [17] M. C. Eisenberg, S. L. Robertson, J. H. Tien, Identifiability and estimation of multiple transmission pathways in Cholera and waterborne disease, J. Theor. Biol., 324 (2013), 84–102. https://doi.org/10.1016/j.jtbi.2012.12.021
    [18] N. Tuncer, M. Martcheva, Determining reliable parameter estimates for within-host and within-vector models of Zika virus, J. Biol. Dyn., 15 (2021), 430–454. https://doi.org/10.1080/17513758.2021.1970261
    [19] N. Tuncer, H. Gulbudak, V. L. Cannataro, M. Martcheva, Structural and practical identifiability issues of immuno-epidemiological vector–host models with application to rift valley fever, Bull. Math. Biol., 78 (2016), 1796–1827. https://doi.org/10.1007/s11538-016-0200-2
    [20] N. Heitzman-Breen, Y. R. Liyanage, N. Duggal, N. Tuncer, S. M. Ciupe, The effect of model structure and data availability on Usutu virus dynamics at three biological scales, R. Soc. Open Sci., 11 (2024), 231146. https://doi.org/10.1098/rsos.231146
    [21] N. Tuncer, T. T. Le, Structural and practical identifiability analysis of outbreak models, Math. Biosci., 299 (2018), 1–18. https://doi.org/10.1016/j.mbs.2018.02.004
    [22] P. Das, M. Igoe, A. Lacy, T. Farthing, A. Timsina, C. Lanzas, et al., Modeling county level COVID-19 transmission in the greater St. Louis area: Challenges of uncertainty and identifiability when fitting mechanistic models to time-varying processes, Math. Biosci., 371 (2024), 109181. https://doi.org/10.1016/j.mbs.2024.109181
    [23] Y. Kao, M. C. Eisenberg, Practical unidentifiability of a simple vector-borne disease model: Implications for parameter estimation and intervention assessment, Epidemics, 25 (2018), 89–100. https://doi.org/10.1016/j.epidem.2018.05.010
    [24] S. M. Ciupe, N. Tuncer, Identifiability of parameters in mathematical models of SARS-CoV-2 infections in humans, Sci. Rep., 12 (2022), 14637. https://doi.org/10.1038/s41598-022-18683-x
    [25] A. Raue, C. Kreutz, T. Maiwald, J. Bachmann, M. Schilling, U. Klingmüller, et al., Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood, Bioinformatics, 25 (2009), 1923–1929. https://doi.org/10.1093/bioinformatics/btp358
    [26] A. Boianelli, V. K. Nguyen, T. Ebensen, K. Schulze, E. Wilk, N. Sharma, et al., Modeling influenza virus infection: A roadmap for influenza research, Viruses, 7 (2015), 5274–5304. https://doi.org/10.3390/v7102875
    [27] M. J. Simpson, A. P. Browning, D. J. Warne, O. J. Maclaren, R. E. Baker, Parameter identifiability and model selection for sigmoid population growth models, J. Theor. Biol., 535 (2022), 110998. https://doi.org/10.1016/j.jtbi.2021.110998
    [28] A. P. Smith, D. J. Moquin, V. Bernhauerova, A. M. Smith, Influenza virus infection model with density dependence supports biphasic viral decay, Front. Microbiol., 9 (2018), 1554. https://doi.org/10.3389/fmicb.2018.01554
    [29] J. J. Sedmak, S. E. Grossberg, Interferon bioassay: Reduction in yield of myxovirus neuraminidases, J. Gen. Virol., 21 (1973), 1–7. https://doi.org/10.1099/0022-1317-21-1-1
    [30] Y. Sun, W. J. Jusko, Transit compartments versus gamma distribution function to model signal transduction processes in pharmacodynamics, J. Pharm. Sci., 87 (1998), 732–737. https://doi.org/10.1021/js970414z
    [31] A. S. Perelson, P. W. Nelson, Modeling viral infections, in Proceedings of Symposia in Applied Mathematics, 59 (2002), 139–172.
    [32] G. Bellu, M. P. Saccomani, S. Audoly, L. D'Angiò, DAISY: A new software tool to test global identifiability of biological and physiological systems, Comput. Methods Programs Biomed., 88 (2007), 52–61. https://doi.org/10.1016/j.cmpb.2007.07.002
    [33] N. Meshkat, C. E. Kuo, J. DiStefano III, On finding and using identifiable parameter combinations in nonlinear dynamic systems biology models and COMBOS: A novel web implementation, PLoS One, 9 (2014), e110261. https://doi.org/10.1371/journal.pone.0110261
    [34] R. Dong, C. Goodbrake, H. Harrington, G. Pogudin, Differential elimination for dynamical models via projections with applications to structural identifiability, SIAM J. Appl. Algebra Geom., 7 (2023), 194–235. https://doi.org/10.1137/22M1469067
    [35] H. Hong, A. Ovchinnikov, G. Pogudin, C. Yap, SIAN: Software for structural identifiability analysis of ODE models, Bioinformatics, 35 (2019), 2873–2874. https://doi.org/10.1093/bioinformatics/bty1069
    [36] X. R. Barreiro, A. F. Villaverde, Benchmarking tools for a priori identifiability analysis, Bioinformatics, 39 (2023), btad065. https://doi.org/10.1093/bioinformatics/btad065
    [37] H. T. Banks, S. Hu, W. C. Thompson, Modeling and Inverse Problems in the Presence of Uncertainty, Chapman and Hall/CRC, 2014. https://doi.org/10.1201/b16760
    [38] S. A. Murphy, A. W. Van der Vaart, On profile likelihood, J. Am. Stat. Assoc., 95 (2000), 449–465. https://doi.org/10.1080/01621459.2000.10474219
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)