
Citation: Alessia Civallero, Cristina Zucca. The Inverse First Passage time method for a two dimensional Ornstein Uhlenbeck process with neuronal application[J]. Mathematical Biosciences and Engineering, 2019, 16(6): 8162-8178. doi: 10.3934/mbe.2019412
In many situations arising from applications (e.g., neuroscience, finance, reliability), the quantity of interest is the first time that a random quantity crosses a given fixed level. In a mathematical framework, this corresponds to the first passage time (FPT) of a stochastic process through a possibly time-dependent boundary. However, it can happen that both the process and the FPT distribution are known, while one is interested in determining the corresponding time-dependent boundary. This is the so-called Inverse FPT problem. In the one-dimensional case this problem has been investigated both from a theoretical [1,2] and an empirical point of view [3,4,5,6]. In [1] the existence and uniqueness of the solution of the Inverse FPT problem are studied. In [2] the problem is interpreted in terms of an optimal stopping problem. A numerical algorithm has been proposed in [6] for the Wiener process. In [5] the Inverse FPT problem for an Ornstein-Uhlenbeck (OU) process has been studied and applied to a classification method with applications to neuroscience. The same framework appears in [4], where the thresholds corresponding to Gamma distributed FPTs of an OU process have been investigated for modelling purposes.
Here, we resort again to the Inverse FPT method: we generalize the algorithm to a two-dimensional OU process and we study the possibility of obtaining Inverse Gaussian (IG) or Gamma distributed FPTs. The choice of these two distributions is grounded in their role in neuroscience [7,8,9,10,11,12,13] and reliability theory [14,15].
In Section 2 we introduce the two-dimensional Gauss-Markov process of interest, highlighting some properties that we will use in the Inverse FPT algorithm. In Section 3 we introduce the Inverse FPT method for a two-dimensional process, postponing to a future work the mathematical discussion of its convergence. In Section 4 we apply the algorithm to two choices of the FPT distribution, determining the thresholds corresponding to IG or Gamma FPT probability density functions (pdfs). We underline the differences between the two models and we explain how heavy or light tails influence the boundary behavior. The last section discusses the obtained results in a neuroscience context. The two-compartment model of Leaky Integrate and Fire type presented in [9] and studied in [16] describes the membrane potential evolution of a neuron as a two-dimensional OU process. Hence, in Section 5 we reinterpret the Inverse FPT results in this framework.
Let us consider a stochastic process X = {(X1(t), X2(t)), t ≥ 0} that is the solution of the following stochastic differential system

$$
\begin{cases}
dX_1(t) = \{-\alpha X_1(t) + \beta[X_2(t) - X_1(t)]\}\,dt \\
dX_2(t) = \{-\alpha X_2(t) + \beta[X_1(t) - X_2(t)] + \mu\}\,dt + \sigma\,dB_t
\end{cases} \tag{2.1}
$$

with X(0) = 0, where B is a one-dimensional standard Brownian motion. Here, α > 0, β, μ and σ > 0 are constants.
To solve the stochastic differential system (2.1), we rewrite it in matrix form
$$
dX(t) = [AX(t) + M(t)]\,dt + G\,dB(t), \tag{2.2}
$$

where

$$
A = \begin{pmatrix} -\alpha-\beta & \beta \\ \beta & -\alpha-\beta \end{pmatrix}, \qquad
M(t) = M = \begin{pmatrix} 0 \\ \mu \end{pmatrix} \qquad \text{and} \qquad
G = \begin{pmatrix} 0 & 0 \\ 0 & \sigma \end{pmatrix}.
$$
It is an autonomous linear stochastic differential equation; in particular, X is a two-dimensional Ornstein-Uhlenbeck process, a special case of a Gauss-Markov diffusion process [17]. The solution of (2.2) is
$$
\begin{cases}
X_1(t) = \dfrac{\mu}{2}\left(\dfrac{1-e^{-\alpha t}}{\alpha} - \dfrac{1-e^{-(\alpha+2\beta)t}}{\alpha+2\beta}\right) + \dfrac{\sigma}{2}\displaystyle\int_0^t \left(e^{-\alpha(t-s)} - e^{-(\alpha+2\beta)(t-s)}\right) dB(s) \\[2ex]
X_2(t) = \dfrac{\mu}{2}\left(\dfrac{1-e^{-\alpha t}}{\alpha} + \dfrac{1-e^{-(\alpha+2\beta)t}}{\alpha+2\beta}\right) + \dfrac{\sigma}{2}\displaystyle\int_0^t \left(e^{-\alpha(t-s)} + e^{-(\alpha+2\beta)(t-s)}\right) dB(s).
\end{cases} \tag{2.3}
$$
It is a Gaussian vector with mean
$$
m(t) = E(X(t)) = \begin{bmatrix}
\dfrac{\mu}{2}\left(\dfrac{1-e^{-\alpha t}}{\alpha} - \dfrac{1-e^{-(\alpha+2\beta)t}}{\alpha+2\beta}\right) \\[2ex]
\dfrac{\mu}{2}\left(\dfrac{1-e^{-\alpha t}}{\alpha} + \dfrac{1-e^{-(\alpha+2\beta)t}}{\alpha+2\beta}\right)
\end{bmatrix} \tag{2.4}
$$

and variance-covariance matrix Q(t−s), where
$$
Q(t) = \begin{bmatrix} Q^{(11)} & Q^{(12)} \\ Q^{(12)} & Q^{(22)} \end{bmatrix}(t) \tag{2.5}
$$
and
$$
\begin{aligned}
Q^{(11)}(t) &= \frac{1}{2}\left(\frac{1}{\alpha} - \frac{2}{\alpha+\beta} + \frac{1}{\alpha+2\beta} - e^{-2\alpha t}\left(\frac{1}{\alpha} - \frac{2e^{-2\beta t}}{\alpha+\beta} + \frac{e^{-4\beta t}}{\alpha+2\beta}\right)\right) \\
Q^{(12)}(t) &= \frac{1}{2}\left(\frac{1-e^{-2\alpha t}}{\alpha} - \frac{1-e^{-2(\alpha+2\beta)t}}{\alpha+2\beta}\right) \\
Q^{(22)}(t) &= \frac{1}{2}\left(\frac{1}{\alpha} + \frac{2}{\alpha+\beta} + \frac{1}{\alpha+2\beta} - e^{-2\alpha t}\left(\frac{1}{\alpha} + \frac{2e^{-2\beta t}}{\alpha+\beta} + \frac{e^{-4\beta t}}{\alpha+2\beta}\right)\right).
\end{aligned}
$$
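As a sanity check, the entries of Q(t) can be recovered numerically from the Itô isometry applied to the two stochastic integrals in (2.3) (the σ/2 prefactor is left out, as in the formulas above). A short Python sketch (function names are ours):

```python
import numpy as np

def Q_closed(t, alpha, beta):
    """Closed-form entries Q11, Q12, Q22 of Q(t) as given above."""
    e2a = np.exp(-2 * alpha * t)
    q11 = 0.5 * (1/alpha - 2/(alpha + beta) + 1/(alpha + 2*beta)
                 - e2a * (1/alpha - 2*np.exp(-2*beta*t)/(alpha + beta)
                          + np.exp(-4*beta*t)/(alpha + 2*beta)))
    q12 = 0.5 * ((1 - e2a)/alpha
                 - (1 - np.exp(-2*(alpha + 2*beta)*t))/(alpha + 2*beta))
    q22 = 0.5 * (1/alpha + 2/(alpha + beta) + 1/(alpha + 2*beta)
                 - e2a * (1/alpha + 2*np.exp(-2*beta*t)/(alpha + beta)
                          + np.exp(-4*beta*t)/(alpha + 2*beta)))
    return q11, q12, q22

def Q_isometry(t, alpha, beta, n=200001):
    """Same entries by the Ito isometry: trapezoidal integration of the
    products of the two kernels appearing in (2.3) over [0, t]."""
    s = np.linspace(0.0, t, n)
    f = np.exp(-alpha * (t - s)) - np.exp(-(alpha + 2*beta) * (t - s))
    g = np.exp(-alpha * (t - s)) + np.exp(-(alpha + 2*beta) * (t - s))
    w = np.diff(s)
    trap = lambda y: float(np.sum((y[1:] + y[:-1]) * w) / 2.0)
    return trap(f*f), trap(f*g), trap(g*g)
```

The two routines agree to high accuracy for the parameter values used in Section 4.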
Trajectories of the process are plotted in Figure 1. The different behavior of the two components is evident: the noise is prevalent in X2, while the first component X1 is smoother. Indeed, as shown in (2.3), the kernel multiplying the random term in the first component reduces the noise effect.
We consider the first passage time of the first component of the process (2.1)
$$
T = \inf\{t > 0 : X_1(t) > S(t)\} \tag{2.6}
$$
where S(t) is a continuous function with S(0)≥X1(0)=0.
Note that it is possible to rewrite (2.3) in an iterative form. This version is useful for simulation purposes, since it generates the trajectories exactly. Discretizing the time interval [0, Θ] with the partition π: 0 = t0 < t1 < ⋯ < tN = Θ into N subintervals of constant length h = Θ/N, we can express the position of the process at time tk+1 in terms of its position at time tk
$$
X(t_{k+1}) = \frac{1}{2}\begin{bmatrix} e^{-\alpha h}+e^{-(\alpha+2\beta)h} & e^{-\alpha h}-e^{-(\alpha+2\beta)h} \\ e^{-\alpha h}-e^{-(\alpha+2\beta)h} & e^{-\alpha h}+e^{-(\alpha+2\beta)h} \end{bmatrix} X(t_k)
+ \frac{\mu}{2}\begin{bmatrix} \dfrac{1-e^{-\alpha h}}{\alpha} - \dfrac{1-e^{-(\alpha+2\beta)h}}{\alpha+2\beta} \\[1.5ex] \dfrac{1-e^{-\alpha h}}{\alpha} + \dfrac{1-e^{-(\alpha+2\beta)h}}{\alpha+2\beta} \end{bmatrix}
+ \frac{\sigma}{2} I_k \tag{2.7}
$$
where the term
$$
I_k = \begin{bmatrix}
\displaystyle\int_{t_k}^{t_{k+1}} \left(e^{-\alpha(t_{k+1}-s)} - e^{-(\alpha+2\beta)(t_{k+1}-s)}\right) dB(s) \\[1.5ex]
\displaystyle\int_{t_k}^{t_{k+1}} \left(e^{-\alpha(t_{k+1}-s)} + e^{-(\alpha+2\beta)(t_{k+1}-s)}\right) dB(s)
\end{bmatrix}, \tag{2.8}
$$
known as innovation, is a Gaussian vector with zero mean and variance-covariance matrix Q(h).
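The exact scheme (2.7) is straightforward to implement: each step applies the transition matrix and drift term and adds a correlated Gaussian innovation with covariance Q(h), scaled by σ/2. A minimal Python sketch (function names and parameter values are ours; the covariance entries are the closed forms given above):

```python
import numpy as np

def transition(alpha, beta, mu, sigma, h):
    """One-step quantities of the exact scheme (2.7) for step size h:
    transition matrix F, drift vector, and covariance of (sigma/2) I_k."""
    ea, eb = np.exp(-alpha * h), np.exp(-(alpha + 2*beta) * h)
    F = 0.5 * np.array([[ea + eb, ea - eb],
                        [ea - eb, ea + eb]])
    d1 = (1 - ea) / alpha
    d2 = (1 - eb) / (alpha + 2*beta)
    drift = 0.5 * mu * np.array([d1 - d2, d1 + d2])
    e2a = np.exp(-2 * alpha * h)
    q11 = 0.5 * (1/alpha - 2/(alpha + beta) + 1/(alpha + 2*beta)
                 - e2a * (1/alpha - 2*np.exp(-2*beta*h)/(alpha + beta)
                          + np.exp(-4*beta*h)/(alpha + 2*beta)))
    q12 = 0.5 * ((1 - e2a)/alpha
                 - (1 - np.exp(-2*(alpha + 2*beta)*h))/(alpha + 2*beta))
    q22 = 0.5 * (1/alpha + 2/(alpha + beta) + 1/(alpha + 2*beta)
                 - e2a * (1/alpha + 2*np.exp(-2*beta*h)/(alpha + beta)
                          + np.exp(-4*beta*h)/(alpha + 2*beta)))
    cov = (sigma**2 / 4) * np.array([[q11, q12], [q12, q22]])
    return F, drift, cov

def simulate(alpha, beta, mu, sigma, T, N, rng):
    """Exact trajectory of (X1, X2) on [0, T] with N steps, X(0) = 0."""
    h = T / N
    F, drift, cov = transition(alpha, beta, mu, sigma, h)
    L = np.linalg.cholesky(cov)          # sample the correlated innovation
    X = np.zeros((N + 1, 2))
    for k in range(N):
        X[k + 1] = F @ X[k] + drift + L @ rng.standard_normal(2)
    return X
```

Because the scheme is exact, one step of size h must coincide with two steps of size h/2; this semigroup property gives a deterministic consistency check on the implementation.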
In the following we will also need the conditional mean of the first component

$$
\begin{aligned}
m^{(1)}(t \mid (X_1(\theta), X_2(\theta)), \theta) &= E\big(X_1(t) \mid X(\theta) = (X_1(\theta), X_2(\theta))\big) \\
&= \frac{\mu}{2}\,\frac{2\beta - \alpha e^{-\alpha(t-\theta)} - 2\beta e^{-\alpha(t-\theta)} + \alpha e^{-(\alpha+2\beta)(t-\theta)}}{\alpha(\alpha+2\beta)} \\
&\quad + X_1(\theta)\,\frac{e^{-\alpha(t-\theta)}}{2}\left(1 + e^{-2\beta(t-\theta)}\right) + X_2(\theta)\,\frac{e^{-\alpha(t-\theta)}}{2}\left(1 - e^{-2\beta(t-\theta)}\right)
\end{aligned} \tag{2.9}
$$
and the conditional variance of the first component

$$
\begin{aligned}
Q^{(11)}(t \mid (X_1(\theta), X_2(\theta)), \theta) &= \mathrm{Var}\big(X_1(t) \mid X(\theta) = (X_1(\theta), X_2(\theta))\big) \\
&= \frac{\sigma^2 e^{-2\alpha(t-\theta)}\left[2\alpha e^{-2\beta(t-\theta)}(\alpha+2\beta) - \alpha e^{-4\beta(t-\theta)}(\alpha+\beta) + 2\beta^2 e^{2\alpha(t-\theta)} - \alpha^2 - 3\alpha\beta - 2\beta^2\right]}{8\alpha(\alpha+\beta)(\alpha+2\beta)}.
\end{aligned} \tag{2.10}
$$
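Both conditional moments can be checked mechanically: at θ = t they collapse to the conditioning point and to zero variance, and for θ = 0 with X(0) = 0 the conditional mean reduces to the first component of (2.4). A Python sketch (function names are ours; the homogeneous terms of (2.9) enter with positive sign, as the one-step form (2.7) dictates):

```python
import numpy as np

def cond_mean1(t, theta, x1, x2, alpha, beta, mu):
    """Conditional mean (2.9) of X1(t) given X(theta) = (x1, x2)."""
    u = t - theta
    ea, eb = np.exp(-alpha * u), np.exp(-2 * beta * u)
    drift = mu / 2 * (2*beta - alpha*ea - 2*beta*ea
                      + alpha*np.exp(-(alpha + 2*beta)*u)) / (alpha*(alpha + 2*beta))
    return drift + x1 * ea * (1 + eb) / 2 + x2 * ea * (1 - eb) / 2

def cond_var1(t, theta, alpha, beta, sigma):
    """Conditional variance (2.10) of X1(t) given X(theta)."""
    u = t - theta
    num = (2*alpha*np.exp(-2*beta*u)*(alpha + 2*beta)
           - alpha*np.exp(-4*beta*u)*(alpha + beta)
           + 2*beta**2*np.exp(2*alpha*u)
           - alpha**2 - 3*alpha*beta - 2*beta**2)
    return sigma**2 * np.exp(-2*alpha*u) * num / (8*alpha*(alpha + beta)*(alpha + 2*beta))
```

These two functions are the only process-specific ingredients of the Inverse FPT algorithm of Section 3.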
In some instances it can be useful to transfer the time dependency from the boundary shape S(t) to an input M(t). Mathematically it is possible to relate these two situations through a simple space transformation. Indeed, the transformation

$$
Y_1(t) = X_1(t) - S(t) + \Sigma \tag{2.11}
$$
changes our process X given by (2.1), starting at X(0) = x0 in the presence of a time-dependent boundary S(t),

$$
\begin{cases}
dX_1(t) = \{-\alpha X_1(t) + \beta[X_2(t) - X_1(t)]\}\,dt \\
dX_2(t) = \{-\alpha X_2(t) + \beta[X_1(t) - X_2(t)] + \mu\}\,dt + \sigma\,dB_t \\
X(0) = x_0, \qquad \text{boundary } S(t),
\end{cases} \tag{2.12}
$$
into a two-dimensional process characterized by a time-dependent input M(t) and a constant threshold Σ,

$$
\begin{cases}
dY_1(t) = \{-\alpha Y_1(t) + \beta[X_2(t) - Y_1(t)] + \mu_1(t)\}\,dt \\
dX_2(t) = \{-\alpha X_2(t) + \beta[Y_1(t) - X_2(t)] + \mu_2(t)\}\,dt + \sigma\,dB_t \\
Y_1(0) = x_{0,1} - S(0) + \Sigma, \quad X_2(0) = x_{0,2}, \qquad \text{boundary } \Sigma.
\end{cases} \tag{2.13}
$$
Here, the term

$$
M(t) = \begin{bmatrix} \mu_1(t) \\ \mu_2(t) \end{bmatrix} = \begin{bmatrix} -(\alpha+\beta)(S(t) - \Sigma) - S'(t) \\ \mu + \beta(S(t) - \Sigma) \end{bmatrix} \tag{2.14}
$$
can be interpreted as an external input acting with different weights on the two compartments. Note that in (2.14) S(t) should be interpreted as a function of time and not as a boundary of a FPT problem. Indeed, in this case the boundary of the model is constant.
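The mapping (2.14) can be applied directly: given a boundary and its derivative, it returns the equivalent input pair. A small sketch (S and Sprime are user-supplied callables; the function name is ours):

```python
def equivalent_input(S, Sprime, t, alpha, beta, mu, Sigma):
    """Time-dependent input M(t) = (mu1(t), mu2(t)) of (2.14), equivalent to
    the boundary S(t) for the process observed against the constant level Sigma."""
    mu1 = -(alpha + beta) * (S(t) - Sigma) - Sprime(t)
    mu2 = mu + beta * (S(t) - Sigma)
    return mu1, mu2
```

As expected, a boundary that is already the constant Σ requires no extra input on the first compartment and leaves the original drift μ on the second.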
The stochastic process (2.1) can be used to describe a system whose behavior depends on two components that are strictly coupled through the parameter β. The second component is driven by a Gaussian noise, and the evolution is stopped when the first component reaches a given fixed level. This model, known as the two-compartment model, has many interesting applications, for example in neuroscience, reliability and finance.
The inverse FPT problem consists of finding the unknown boundary S(t) when the FPT density fT(t) is known. We work under the assumption that the boundary S(t) exists, is unique and is sufficiently regular.
Let us consider the diffusion process X = {(X1(t), X2(t)), t ≥ 0}, solution of the stochastic differential equation (2.2). The proposed method is based on the numerical approximation of the following Volterra integral equation [16]

$$
1 - \mathrm{Erf}\left(\frac{S(t) - m^{(1)}(t)}{\sqrt{2Q^{(11)}(t)}}\right) = \int_0^t f_T(\theta)\, E_{Z(\theta)}\!\left[1 - \mathrm{Erf}\left(\frac{S(t) - m^{(1)}(t \mid (S(\theta), X_2(\theta)), \theta)}{\sqrt{2Q^{(11)}(t \mid (S(\theta), X_2(\theta)), \theta)}}\right)\right] d\theta \tag{3.1}
$$
where Z(t) is a random variable that represents the position of the second component X2 of the process when the first component X1 hits the boundary at time t, i.e.
$$
P(Z(t) < z) = P(X_2(T) < z \mid T = t, X(t_0) = y). \tag{3.2}
$$
Let us fix a time interval [0, Θ] and a partition π: 0 = t0 < t1 < ⋯ < tN = Θ into N subintervals of constant length h = Θ/N. Using the Euler formula for integrals [18], equation (3.1) can be approximated as
$$
1 - \mathrm{Erf}\left(\frac{S^*(t_i) - m^{(1)}(t_i)}{\sqrt{2Q^{(11)}(t_i)}}\right) = h \sum_{j=1}^{i} f_T(t_j)\, E_{Z(t_j)}\!\left[1 - \mathrm{Erf}\left(\frac{S^*(t_i) - m^{(1)}(t_i \mid (S^*(t_j), X_2(t_j)), t_j)}{\sqrt{2Q^{(11)}(t_i \mid (S^*(t_j), X_2(t_j)), t_j)}}\right)\right] \tag{3.3}
$$

for every i = 1, …, N.
Equation (3.3) represents a nonlinear system of N equations in the N unknowns S*(t1), …, S*(tN) that can be solved by means of iterative root-finding algorithms [19]. Its solution gives an approximation S*(t) of the boundary S(t) at the partition points of π. Note that at step i the only unknown quantity is S(ti), and it is estimated using the boundary approximations S*(t1), …, S*(ti−1) computed in the previous steps.
The quantity
$$
\theta_{i,k} = E_{Z(t_k)}\!\left[1 - \mathrm{Erf}\left(\frac{S^*(t_i) - m^{(1)}(t_i \mid (S^*(t_k), X_2(t_k)), t_k)}{\sqrt{2Q^{(11)}(t_i \mid (S^*(t_k), X_2(t_k)), t_k)}}\right)\right] \tag{3.4}
$$
is not easily handled because it depends on the unknown time-dependent boundary. In general, the computation of θk,k is not trivial but, performing a suitable limit on the considered process, we can show that θk,k = 2 for each value of k. To compute (3.4) when k ≠ i, we use a Monte Carlo method: we simulate the process X until the first component exceeds the threshold and we save the corresponding value of Z. At step i, we need to compute θi,k for k = 1, …, i−1. The presence of an expectation with respect to Z(tk) makes the estimation of (3.4) through Monte Carlo difficult, because we need the value Z(tk) = X2(tk) at time T = tk. To circumvent this problem we introduce the following approximate approach. At step i we approximate the threshold with a piecewise linear curve with knots at the already computed boundary values. Hence, for τ ∈ [tj−1, tj], j = 1, …, i−1, we substitute the exact boundary with
$$
\hat{S}(\tau) = \frac{S^*(t_j) - S^*(t_{j-1})}{t_j - t_{j-1}}\,\tau + \frac{t_j S^*(t_{j-1}) - t_{j-1} S^*(t_j)}{t_j - t_{j-1}} \tag{3.5}
$$
and we simulate the process up to ti−1 or until it reaches the threshold. To compute θi,k, k = 1, …, i−1, we use only the trajectories that crossed the approximated boundary (3.5) in a neighbourhood of tk. Then, in correspondence to each of these sample paths, we denote by {Zk, k = 1, …, M} the values of the second component of the process X at the moment when the first component exceeds the threshold. In this way, the Monte Carlo estimate of θi,j is
$$
\tilde{\theta}_{i,j} = 1 - \frac{1}{M}\sum_{k=1}^{M} \mathrm{Erf}\left(\frac{S^*(t_i) - m^{(1)}(t_i \mid (S^*(t_j), Z_k), t_j)}{\sqrt{2Q^{(11)}(t_i \mid (S^*(t_j), Z_k), t_j)}}\right).
$$
It is possible to prove that this further approximation does not seriously influence the reliability of the algorithm.
In this Section we illustrate the use of the Inverse FPT method through two examples. The first situation concerns FPTs with Inverse Gaussian distribution. The second one deals with the Gamma distribution. Lastly, a comparison between boundaries and drift terms arising in the two examples is developed.
The IG random variable T has pdf
$$
f_T(t) = \left[\frac{\lambda}{2\pi t^3}\right]^{1/2} \exp\left[-\frac{\lambda(t-\rho)^2}{2\rho^2 t}\right], \qquad t \geq 0, \tag{4.1}
$$
where ρ>0 is the mean and λ>0 is the shape parameter. Mean, variance and coefficient of variation are given by
$$
E(T) = \rho, \qquad \mathrm{Var}(T) = \frac{\rho^3}{\lambda}, \qquad CV = CV(T) = \frac{\sqrt{\mathrm{Var}(T)}}{E(T)} = \sqrt{\frac{\rho}{\lambda}}. \tag{4.2}
$$
Throughout this paper, when not differently specified, we fix the parameters of the two-compartment model as follows: α = 0.33, β = 0.2 and σ = 1. We choose the shape of the IG distribution by fixing its mean E[T] = 4 and different values of CV (see Figure 2, panel (a)). From (4.2) we see that changes of CV imply changes of the shape parameter λ. Moreover, as CV increases, the density becomes more peaked. The corresponding shapes of the time-varying thresholds are illustrated in panel (b), where the boundaries present a maximum that tends to disappear as CV grows to higher values.
When ρ is finite, the IG distribution has light tails, but in the limit ρ → ∞ the IG density becomes

$$
f_T(t) = \left[\frac{\lambda}{2\pi t^3}\right]^{1/2} \exp\left[-\frac{\lambda}{2t}\right], \qquad t \geq 0, \tag{4.3}
$$

and it captures the heavy-tail feature of interest for the analysis of some data [7,8,12]. The density (4.3) is known to be the density of the FPT of a Brownian motion with zero drift and diffusion coefficient ν through a constant boundary b, with

$$
\lambda = \frac{b^2}{\nu^2}. \tag{4.4}
$$
Since E(T) = ∞, it makes no sense to compute the CV; however, in order to compare light- and heavy-tailed distributions, we use the same values of λ in Figures 2 and 3. In Figure 3, different shapes of the pdfs (panel (a)) and of the corresponding boundaries (panel (b)) are shown. The heavy tails of this distribution determine a new shape for the threshold: a maximum, which decreases as λ increases, followed by a minimum and then by an increasing stretch of the boundary. The maximum tends to disappear for large values of λ and the values of the boundary are essentially positive. For larger values of t the growth of the boundary slows down, allowing the crossings that determine the tail of the distribution. Figure 3 refers to the time interval [0,20], corresponding to a low probability mass. A check on longer intervals does not change the results from a qualitative viewpoint, while higher probability masses are reached (figure not shown).
A random variable T is Gamma distributed if its pdf is
$$
f_T(t) = \frac{\gamma^{\kappa}}{\Gamma(\kappa)}\, t^{\kappa-1} e^{-\gamma t}, \qquad t \geq 0. \tag{4.5}
$$
Here, γ>0 is the rate parameter and κ>0 is the shape parameter. Such a random variable is characterized by the following mean, variance and coefficient of variation
$$
E(T) = \frac{\kappa}{\gamma}, \qquad \mathrm{Var}(T) = \frac{\kappa}{\gamma^2}, \qquad CV = CV(T) = \frac{1}{\sqrt{\kappa}}. \tag{4.6}
$$
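In the comparisons below, IG and Gamma densities are matched through their mean and CV; inverting (4.2) and (4.6) gives the parameters directly. A small helper (function names are ours):

```python
from math import sqrt

def ig_params(mean, cv):
    """Invert (4.2): rho is the mean itself, and CV = sqrt(rho/lambda)
    gives lambda = mean / CV**2."""
    return mean, mean / cv**2

def gamma_params(mean, cv):
    """Invert (4.6): CV = 1/sqrt(kappa) gives kappa = 1/CV**2,
    then E(T) = kappa/gamma gives gamma = kappa / mean."""
    kappa = 1.0 / cv**2
    return kappa, kappa / mean
```

For instance, the IG curves with E[T] = 4 and CV = 0.5 correspond to (ρ, λ) = (4, 16), while a Gamma with CV = 1 reduces to an exponential, κ = 1.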
The shapes of the Gamma pdf for different values of the parameters γ and κ can be seen in Figure 4, panel (a); they change strongly with the value of CV. We recall that CV = 1 corresponds to the exponential distribution. The tails of the Gamma distribution are light, decaying to zero exponentially. The corresponding shapes of the time-varying thresholds are shown in panel (b), where the parameters of the two-compartment model are α = 0.33, β = 0.2 and σ = 1. Here, as CV increases, the maximum of the boundary disappears and the time-varying threshold becomes flat or, when CV = 2, even increasing. This is the main difference in the boundary behavior with respect to the IG case.
Often, IG and Gamma distributions appear as outputs of models of the same phenomenon for different choices of the diffusion parameters. Hence, it seems useful to compare these distributions in terms of the corresponding boundaries, using the Inverse FPT method. In this subsection we therefore compare the boundaries corresponding to IG and Gamma distributions with the same mean and the same CV.
Figure 5 shows the time-varying boundaries corresponding to IG (dashed) and Gamma-distributed (solid) interspike intervals (ISIs) with the same mean value and the same CV. In this example E[T] = 10 while CV ∈ {0.5, 1, 1.5}. Densities and corresponding boundaries become more and more different as the CV value increases (cf. inset of Figure 5). The different spreading of the probability mass of the two classes of distributions is reflected in different shapes of the corresponding boundaries. Since the IG density has heavier tails, its probability mass should not be consumed at short times. For this reason the boundary increases, allowing crossings at large times.
To help a physical interpretation of the results in terms of the input of a two-compartment model, in Figure 6 we compare the behavior of the two components (2.14) of M(t) when the boundary is transformed into a constant through (2.11). We ask what the input to the two compartments would be if the output distribution is fixed and the threshold is constant. We illustrate the cases of FPTs distributed as IG (a-b), IG with heavy tails (c-d) and Gamma (e-f), respectively. Reinterpreting the time-dependent boundary as a modification of the drift allows us to interpret our results in terms of increasing or decreasing drift. We note that a positive drift on the first component is always necessary to obtain the prescribed FPT distribution. When CV ≤ 1, Gamma- and IG-distributed FPTs imply similar inputs. On the contrary, when the tails of the IG are heavy (panel (c)) or CV is large enough (panels (a) and (e)), the drift term of the first component changes strongly, becoming decreasing. Interestingly, the behavior of the second component μ2(t) (panels (b), (d) and (f)) is opposite to that of μ1(t).
Lastly, we change the comparison criterion and apply the Inverse FPT method varying the value of the parameter μ in (2.1). We fix the parameters of the model as follows: α = 0.02, β = 0.02 and σ = 0.4. We consider examples of boundaries corresponding to IG or Gamma spiking densities for CV = 0.5 (Figure 7) or CV = 1 (Figure 8).
In the figures we also compare the boundaries (thicker line) with the mean of the first component E[X1(t)]. We note that the boundary always intersects the function E[X1(t)]. Interestingly, if we fix the firing FPT distribution and its CV, the intersection value is the same for different values of the parameter μ. This fact can be easily understood by noting that a change in μ determines the same shift both in the mean value of X1(t) and in the boundary. However, this value changes for different CVs.
We give here an example of application of the Inverse FPT method to neuroscience.
The simplest neuronal models resort to one-dimensional processes to describe the membrane potential evolution. This choice implies a strong simplification of the neuronal structure, which is collapsed to a single point. More complex models introduce bivariate stochastic processes to distinguish the membrane potential dynamics in the dendritic zone from those in the trigger zone [9]. Neurophysiological reasons suggest the existence of an interaction between the membrane potential dynamics in the two zones and, when the first component (the trigger one) attains a boundary value, the neuron releases a spike. A reasonable simplification is to add a noisy term only to the dendritic component. The reset after the spike can include both components or only the trigger zone. Here, we will consider only the case of total reset of both components to a resting value that we fix, for simplicity, equal to zero.
In this framework, the stochastic process (2.1) describes the depolarization of the trigger zone and the dendritic one, respectively [9]. The model assumes that external inputs, with intensity μ and variability σ, influence the second compartment and a weight β takes into account the interconnection between the parts of the neuron. Moreover, the constant α>0 accounts for the spontaneous membrane potential decay (cf. Figure 9). Then, the FPT T mimics the ISI of the neuron and the boundary S(t) corresponds to the spiking threshold for the neuron.
Often, IG and Gamma distributions fit neuronal data, and the FPT framework may help to interpret the presence of these distributions. Here, we reinterpret Figures 2-8 in the neuronal model framework. Hence, the constants α and β will be measured in $\mathrm{ms}^{-1}$, while μ will be measured in $\mathrm{mV\,ms}^{-1}$ and σ in $\mathrm{mV\,ms}^{-1/2}$.
Figures 2, 3, 4 and 5 reinterpret the heaviness of the tails of the ISI distributions in terms of the threshold shapes. As we increase the CV value, the IG and Gamma densities and the corresponding boundaries become more and more different. This means that CV plays an important role in the formulation of the model. Moreover, in the case of the Gamma ISI distribution, the threshold becomes increasing when CV is large enough. A similar increasing behavior of the boundary can be obtained with IG-distributed ISIs with heavy tails (cf. Figure 3).
In Figures 7 and 8 we investigate not only the behavior of the time-varying firing threshold but also the dynamics of the underlying two-compartment neuronal model. The mean membrane potentials and the corresponding boundaries are plotted for different values of the mean input μ and for different values of CV. For low input μ, the curves exhibit a maximum after which the firing threshold starts to decrease. As the input μ increases, the firing threshold changes from concave and decreasing to convex and increasing. Indeed, a large input μ facilitates spiking; therefore, to obtain the assigned distribution, the threshold must move away from the process, becoming increasing.
Lastly, in Figure 10 we study the role of the parameter β in the model, applying the Inverse FPT method to IG (panel (a)) and Gamma (panel (b)) FPT distributions and varying the value of the parameter β. As β decreases, the boundary becomes almost constant and equal to zero. This is consistent with the fact that, for β = 0, the two components of the process become independent and X1 is deterministic and equal to zero, since X(0) = 0. Then, in order to have a crossing and to obtain the prescribed distribution, the threshold must approach zero.
The extension of the Inverse FPT method to two-dimensional OU diffusion processes allows us to study the shape of the boundaries for a given FPT pdf. We applied the algorithm to FPTs distributed as Inverse Gaussian and Gamma random variables. Differences in the boundary shapes corresponding to FPTs with heavy or light tails highlight different features of the corresponding two-compartment model.
Lastly, we reinterpreted the obtained results in a neuroscience framework. The shape of the boundaries corresponding to different firing distributions may highlight features of the model, possibly revealing instances of scarce physiological significance such as diverging thresholds.
The authors are grateful to Professor L. Sacerdote and Professor L. Kostal for their interesting and useful comments and suggestions. This work is partially supported by INDAM-GNCS.
The authors declare there is no conflict of interest.
[1] | X. Chen, L. Cheng, J. Chadam, et al., Existence and uniqueness of solutions to the inverse boundary crossing problem for diffusions, Ann. Appl. Probab., 21(2011), 1663–1693. |
[2] | E. Ekström and S. Janson, The inverse first-passage problem and optimal stopping, Ann. Appl. Probab., 26(2016), 3154–3177. |
[3] | M. Abundo, An inverse first-passage problem for one-dimensional diffusions with random starting point, Stat. Probab. Lett., 82(2012), 7–14. |
[4] | P. Lansky, L. Sacerdote and C. Zucca, The Gamma renewal process as an output of the diffusion leaky integrate-and-fire neuronal model, Biol. Cybern., 110(2016), 193–200. |
[5] | L. Sacerdote, A. E. P. Villa and C. Zucca, On the classification of experimental data modeled via a stochastic leaky integrate and fire model through boundary values, Bull. Math. Biol., 68(2006), 1257–1274. |
[6] | C. Zucca and L. Sacerdote, On the inverse first-passage-time problem for a Wiener process, Ann. Appl. Prob., 19(2009), 1319–1346. |
[7] | G. L. Gerstein and B. Mandelbrot, Random walk models for the spike activity of a single neuron, Biophys. J., 4(1964), 41–68. |
[8] | A. Klaus, S. Yu and D. Plenz, Statistical Analyses Support Power Law Distributions Found in Neuronal Avalanches, PLoS ONE, 6(2011), e19779. |
[9] | P. Lansky and R. Rodriguez, Two-compartment stochastic model of a neuron, Physica D, 132(1999), 267–286. |
[10] | L. M. Ricciardi and L. Sacerdote, The Ornstein-Uhlenbeck process as a model for neuronal activity. I. Mean and variance of the firing time, Biol. Cybern., 35(1979), 1–9. |
[11] | L. Sacerdote and M. T. Giraudo, Stochastic Integrate and Fire Models: a Review on Mathematical Methods and their Applications, in Stochastic Biomathematical Models with Applications to Neuronal Modeling, Lecture Notes in Mathematics series (Biosciences subseries) (eds. Bachar, Batzel and Ditlevsen), Springer, 2058(2013), 99–142. |
[12] | Y. Tsubo, Y. Isomura and T. Fukai, Power-Law Inter-Spike Interval Distributions Infer a Conditional Maximization of Entropy in Cortical Neurons, PLoS Comput. Biol., 8(2012), e1002461. |
[13] | N. Yannaros, On Cox processes and Gamma-renewal processes, J. Appl. Probab., 25(1988), 423–427. |
[14] | J. L. Folks and R. S. Chhikara, The Inverse Gaussian Distribution and Its Statistical Application–A Review, J. Royal Statist. Soc. Series B Methodol., 40(1978), 263–289. |
[15] | S. Iyengar and G. Patwardhan, Recent developments in the inverse Gaussian distribution, Handbook Statist., 7(1988), 479–490. |
[16] | E. Benedetto, L. Sacerdote and C. Zucca, A first passage problem for a bivariate diffusion process: numerical solution with an application to neuroscience when the process is Gauss-Markov, J. Comp. Appl. Math., 242(2013), 41–52. |
[17] | L. Arnold, Stochastic Differential Equations: Theory and Applications, Krieger Publishing Company, Malabar, Florida, 1974. |
[18] | M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions With Formulas, Graphs, and Mathematical Tables, Dover, New York, 1964. |
[19] | K. E. Atkinson, An introduction to numerical analysis, Wiley, New York, 1989. |