Research article

A two-state neuronal model with alternating exponential excitation

  • Received: 13 February 2019 Accepted: 04 April 2019 Published: 18 April 2019
  • We develop a stochastic neural model based on point excitatory inputs. The nerve cell depolarisation is determined by a two-state point process corresponding to the two states of the cell. The model presumes state-dependent amplitudes of the excitatory stimuli and state-dependent decay rates of the membrane potential. The state switches at each stimulus time. We analyse the neural firing time distribution and the mean firing time. The limit of the firing time under a certain scaling condition is also obtained. The results are based on an analysis of the first crossing time of the depolarisation process through the firing threshold. The Laplace transform technique is widely used.

    Citation: Nikita Ratanov. A two-state neuronal model with alternating exponential excitation[J]. Mathematical Biosciences and Engineering, 2019, 16(5): 3411-3434. doi: 10.3934/mbe.2019171



    A neuron is surrounded by a membrane with selective conductivity depending on its current state. The membrane potential V=V(t) undergoes a sudden change, which is called the spike (impulse). The spike is generated in response to an external influence and only when it exceeds a certain threshold. Since most cells of the central nervous system are characterised by spontaneously emitted trains of impulses, stochastic modelling is of primary interest in the field. It is assumed that the excitatory stimuli, which occur at random times, are random and depend on the current state of the neuron. Each stimulus is followed by a refractory period, which corresponds to an exponential decay of the potential. In view of this, the first-passage-time problem for the underlying stochastic processes is important for the description of the neuronal firing.

    A large number of models of single neurons have been developed: from simple threshold models to biologically plausible "portrait" models. The best results were achieved when complicated experimental features were combined with a rather simple mathematical model; see, for example, the model of the squid giant axon by Nobel Prize winner Hodgkin [1]. The first simple threshold neuron model is considered to be the model proposed by Louis Lapicque in 1907 [2], which is usually called the "leaky integrator" or the "forgetful integrate-and-fire" model. In modern terms, the simplest version of this model with an external forcing term described by a Brownian motion W=W(t) yields the following stochastic differential equation:

    $$\mathrm{d}V(t)=\Big(\frac{I}{C}-\frac{V(t)}{CR}\Big)\mathrm{d}t+\sigma\,\mathrm{d}W(t),\qquad V(0)=V_0.$$

    A spike is generated, once the process V(t) hits the firing threshold H.

    Since excitatory stimuli are intermittent, synaptic input should be modelled by means of point processes. This approach was proposed by R. Stein [3], see also [4]. In [5], Stein's model is briefly presented in a rigorous manner. See also the recent paper [6] on this subject.

    Stein's model presumes a time evolution of the potential described by a stochastic equation based on two independent Poisson processes $N_+$ and $N_-$,

    $$\mathrm{d}V(t)=-\frac{1}{\tau}V(t)\,\mathrm{d}t+a_+\,\mathrm{d}N_+(t)-a_-\,\mathrm{d}N_-(t),$$

    where $a_\pm$ denote the amplitudes of the excitatory/inhibitory currents.

    An excitatory model with an exponentially decaying membrane potential and a non-homogeneous Poisson process driving the consecutive neuronal stimuli is studied in [7]. Methods for assessing how well this model describes neural spikes are based on the time-rescaling technique [8].

    A detailed review of the existing models can be found in [9].

    The model proposed in this paper suggests that neurons take one of two states, alternating at each stimulus time. Similar ideas are widely used in neural modelling. For example, in the recent monograph [10], the three-phase Stein model was presented: the standard Stein model is supplemented by an additional 0-phase, which starts at the end of the refractory period and lasts until depolarisation occurs. In [11], a two-phase model is studied, based on neuronal oscillations interrupted by stochastic behaviour. The authors claim that this can be explained by a bistability in the ensemble dynamics of coupled integrate-and-fire neurons. See also the paper [12], where some practical observations are presented that can serve as the basis for such an approach.

    To describe the model, consider the right-continuous process $\varepsilon=\varepsilon(t)$, $t\ge0$, with state space $\{0,1\}$ and independent consecutive (random) holding times $\{T_n\}_{n\ge1}$. Let $N=N(t)$ be the process counting the switchings of $\varepsilon$ up to time $t$,

    $$N(t)=\max\Big\{n:\ \sum_{k\le n}T_k\le t\Big\},\qquad t>0.$$

    To construct a model of neural activity, we will use a well-studied class of jump-telegraph stochastic processes; see, for example, the reviews in [13] and [14]. The recent paper [15] develops methods for studying the distributions of first passage times for such processes.

    The jump-telegraph process X=X(t),t0, with additive jumps, is defined by the stochastic equation

    $$\mathrm{d}X(t)=c_{\varepsilon(t)}\,\mathrm{d}t+Y_{N(t)}\,\mathrm{d}N(t),\qquad t>0,\quad X(0)=0.\tag{1.1}$$

    which by integration yields

    $$X(t)=\int_0^t c_{\varepsilon(\tau)}\,\mathrm{d}\tau+\sum_{n=1}^{N(t)}Y_n.\tag{1.2}$$

    Here the velocities $c_0,c_1$ are two real constants, and $Y_n$, $n\ge1$, are independent random variables, independent of $\varepsilon$, corresponding to the jumps which accompany each velocity switching. By $X_i(t)$, $t\ge0$, we denote the solution of (1.1) supplied with the additional initial condition $\varepsilon(0)=i$, $i\in\{0,1\}$.

    We propose a neural potential model that includes state-dependent decay rates of the membrane potential, along with multiplicative state-dependent stimuli. The model presumes a change of state at each stimulus time.

    This model presumes the nerve cell depolarisation $V=V(t)$, $t\ge0$, to be determined by the stochastic exponential of $X$, see (1.1)-(1.2). In other words, the process $V=V(t)$ is the solution of the stochastic equation

    $$\mathrm{d}V(t)=V(t)\big(c_{\varepsilon(t)}\,\mathrm{d}t+Y_{N(t)}\,\mathrm{d}N(t)\big),\qquad t>0,\quad V\big|_{t=0}=V_0.\tag{1.3}$$

    The counting process $N=N(t)$ corresponds to the number of excitatory stimuli received by the neuron up to time $t$, $Y_n$ determines the voltage increment at the stimulus time, and the negative constants $c_0,c_1$ correspond to the rates of exponential decay of the membrane potential in states 0 and 1, respectively.

    The solution V=V(t) of (1.3) is given by stochastic exponential of X,

    $$V(t)=V_0\,\mathcal{E}_t(X)=V_0\exp\Big(\int_0^t c_{\varepsilon(s)}\,\mathrm{d}s\Big)\prod_{n=1}^{N(t)}(1+Y_n)=V_0\exp\Big(\int_0^t c_{\varepsilon(s)}\,\mathrm{d}s+\sum_{n=1}^{N(t)}\log(1+Y_n)\Big).\tag{1.4}$$

    Let the holding times $T_n$, $n\ge1$, have alternating distributions $\pi_0$ and $\pi_1$; that is, $\pi_i(\mathrm{d}t)=\mathbb{P}\{T_1\in\mathrm{d}t\,|\,\varepsilon(0)=i\}$, $i\in\{0,1\}$, and

    $$\pi_0(\mathrm{d}t):=\mathbb{P}\{T_n\in\mathrm{d}t\,|\,\varepsilon(T_1+\dots+T_{n-1})=0\},\qquad \pi_1(\mathrm{d}t):=\mathbb{P}\{T_n\in\mathrm{d}t\,|\,\varepsilon(T_1+\dots+T_{n-1})=1\},\qquad n\ge2.$$

    The jumps $Y_n$ have the distributions $g_0$ and $g_1$, alternating together with the alternating distributions $\pi_0$ and $\pi_1$ of the holding times.

    For each $t>0$, the distribution of $V(t)$ can be expressed by means of the given distributions $\pi_0,\pi_1,g_0,g_1$, see e.g. [15].

    The main interest of the neural modelling lies in the properties of the first passage time of the depolarisation process $V=V(t)$ through the firing threshold $H$, $H>0$. We are interested in the distribution of the stopping time $T_x$, $x=\log(H/V_0)$, of the jump-telegraph process,

    $$T_x=\inf\Big\{t>0\ \Big|\ \int_0^t c_{\varepsilon(s)}\,\mathrm{d}s+\sum_{n=1}^{N(t)}\log(1+Y_n)>x\Big\}=\inf\{t>0\ |\ V(t)>H\}.$$
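The dynamics (1.4) and the firing time $T_x$ are straightforward to simulate. The following minimal sketch assumes exponential holding times and exponentially distributed $\log(1+Y_n)$ (the distributions adopted later in the paper); all parameter values are illustrative assumptions, not quantities from the paper:

```python
import math
import random

def fire_time(H=math.e, V0=1.0, c=(-1.0, -2.0), lam=(5.0, 5.0), b=(2.0, 4.0), rng=random):
    """First time the depolarisation V(t) of (1.4) exceeds the threshold H.
    Holding times are Exp(lam[i]); stimuli satisfy log(1+Y_n) ~ Exp(b[i]);
    the state switches at every stimulus.  Since c_i <= 0, V only decays
    between stimuli, so the threshold can be crossed only at a stimulus."""
    t, v, state = 0.0, V0, 0
    while True:
        tau = rng.expovariate(lam[state])          # holding time in current state
        t += tau
        v *= math.exp(c[state] * tau)              # exponential decay of the potential
        v *= math.exp(rng.expovariate(b[state]))   # multiplicative stimulus 1 + Y_n
        state = 1 - state                          # state switches at each stimulus
        if v > H:                                  # spike: firing threshold crossed
            return t

rng = random.Random(1)
times = [fire_time(rng=rng) for _ in range(2000)]
print(sum(times) / len(times))                     # Monte Carlo mean firing time
```

With $H=e$ and $V_0=1$, the threshold for the log-process is $x=\log(H/V_0)=1$, matching the stopping time above.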

    For the continuous version of such processes, $Y_n\equiv0$, this mathematical problem is well known and well studied, see e.g. [16,17,18]. See also the recent paper [19], where arbitrary sequences of velocities and jump intensities were applied. The number of level crossings for the telegraph process has been analysed in [20].

    Properties of $T_x$ with nontrivial jumps are less known, see [7,21,22], where a single-state model with independent exponentially distributed jumps, $\log(1+Y_n)\sim\mathrm{Exp}(b)$, is studied. Some solutions for a jump-diffusion process are presented in [23,24,25]; martingale methods are developed in [26] (see the review in [27]). For applications of jump-diffusion processes to models of neuronal activity see [28].

    In this paper, we generalise the results of [7] to a two-state model with positive independent random jumps $Z_n=\log(1+Y_n)$, $n\ge1$, having the alternating exponential distributions $\mathrm{Exp}(b_0)$ and $\mathrm{Exp}(b_1)$, $b_0,b_1>0$. This corresponds to (alternating) Pareto distributions of the second kind (Lomax distributions) for $Y_n$, used in economics and actuarial science, see [29,30],

    $$\mathbb{P}\{Y_n>y\}=(1+y)^{-b_i},\qquad y>0,\quad i\in\{0,1\}.$$

    In Section 2, we treat the problem in a general setting. For the case of exponentially distributed jumps, explicit formulae for the moment generating function of Tx,x>0, are obtained. The firing probabilities and the mean values of the firing time are also studied.

    Section 3 concerns the limit behaviour of Tx under small frequent stimuli. In Section 4 we give an overview of the single-state case, including the limit behaviour under the parameters' scaling similar to that described in Section 3.

    Let $T^{(i)}_x$ be the first passage time of $X_i(t)$, (1.1)-(1.2), through the positive threshold $x$,

    $$T^{(i)}_x=\inf\{t>0:\ X_i(t)>x\},\qquad i\in\{0,1\}.$$

    By definition, we set $T^{(i)}_x\big|_{x<0}\equiv0$, $i\in\{0,1\}$.

    Since $c_0,c_1\le0$, the process $X$ can exceed the threshold $x$, $x>0$, only by jumping. Conditioning on the first switching, we have the following identities in law:

    $$T^{(0)}_x\overset{D}{=}T^{(0)}+T^{(1)}_{x-c_0T^{(0)}-Y^{(0)}},\qquad T^{(1)}_x\overset{D}{=}T^{(1)}+T^{(0)}_{x-c_1T^{(1)}-Y^{(1)}},\tag{2.1}$$

    see the definition of $X(t)$, (1.2). Here $T^{(0)}$ and $Y^{(0)}$ ($T^{(1)}$ and $Y^{(1)}$) are the first holding time and the first stimulus amplitude in the state $\varepsilon(0)=0$ (respectively, $\varepsilon(0)=1$).

    Denote by $\phi_i(x)=\phi_i(x;q)$ the Laplace transform of $T^{(i)}_x$,

    $$\phi_i(x):=\mathbb{E}\big[\exp(-qT^{(i)}_x)\big],\qquad q>0.\tag{2.2}$$

    By definition, $0\le\phi_i(x)\le1$ for all $x$, and $\phi_0(x)\equiv1$, $\phi_1(x)\equiv1$ if $x<0$. Integrating by parts in (2.2), we have

    $$\phi_i(x)=\mathbb{E}\big[\exp(-qT^{(i)}_x)\big]=\int_0^\infty e^{-qt}\,\mathrm{d}\mathbb{P}\{T^{(i)}_x<t\}=\int_0^\infty qe^{-qt}\,\mathbb{P}\{T^{(i)}_x<t\}\,\mathrm{d}t=\mathbb{P}\{T^{(i)}_x<e_q\}=\mathbb{P}\Big\{\sup_{0<t<e_q}X_i(t)>x\Big\},$$

    where $e_q$ is an exponentially distributed random variable, $e_q\sim\mathrm{Exp}(q)$, independent of $\varepsilon$ and $\{Y_n\}_{n\ge1}$.

    Assuming that the sequential jumps $Y_n$ have the distributions $g_0$ and $g_1$, alternating together with the alternating distributions $\pi_0$ and $\pi_1$ of the holding times, identity (2.1) can be written as

    $$\phi_0(x)=\mathbb{E}\big[e^{-qT}\exp\big(-qT^{(1)}_{x-c_0T-Y}\big)\,\big|\,T\sim\pi_0,\ Y\sim g_0\big],\qquad \phi_1(x)=\mathbb{E}\big[e^{-qT}\exp\big(-qT^{(0)}_{x-c_1T-Y}\big)\,\big|\,T\sim\pi_1,\ Y\sim g_1\big].\tag{2.3}$$

    Equations (2.3) are equivalent to

    $$\begin{cases}\phi_0(x)=\displaystyle\int_0^\infty\pi_0(\tau)e^{-q\tau}\Big[\overline{G}_0(x-c_0\tau)+\int_0^{x-c_0\tau}\phi_1(x-c_0\tau-y)\,g_0(\mathrm{d}y)\Big]\mathrm{d}\tau,\\[1.5ex]\phi_1(x)=\displaystyle\int_0^\infty\pi_1(\tau)e^{-q\tau}\Big[\overline{G}_1(x-c_1\tau)+\int_0^{x-c_1\tau}\phi_0(x-c_1\tau-y)\,g_1(\mathrm{d}y)\Big]\mathrm{d}\tau.\end{cases}\tag{2.4}$$

    Here

    $$\overline{G}_i(y)=\mathbb{P}\{Y\ge y\,|\,\varepsilon=i\}=\int_y^\infty g_i(\mathrm{d}y')$$

    denotes the conditional survival function of the stimulus amplitude in the state $\varepsilon=i$, $i\in\{0,1\}$.

    In what follows, assume excitatory inputs to be positive and exponentially distributed, that is

    $$\overline{G}_0(y)=e^{-b_0y}\wedge1,\qquad \overline{G}_1(y)=e^{-b_1y}\wedge1,\tag{2.5}$$

    with the corresponding density functions

    $$g_0(y)=b_0e^{-b_0y}\,\mathbf{1}_{\{y>0\}},\qquad g_1(y)=b_1e^{-b_1y}\,\mathbf{1}_{\{y>0\}},\qquad b_0,b_1>0.\tag{2.6}$$

    We try to find the solution $\phi=(\phi_0,\phi_1)$ of (2.4) in the form

    $$\phi(x)=\sum_{k=1}^{N}e^{-\xi_kx}A_k,\qquad x>0,\tag{2.7}$$

    with the undetermined coefficients $A_k=(A_k^0,A_k^1)$, $A_k\ne0$, and $\xi_k$, $\xi_k\ne b_0,b_1$, $k=1,\dots,N$. Substituting (2.7) and (2.5)-(2.6) into (2.4), we obtain the following algebraic system:

    $$\begin{cases}\displaystyle\sum_{k=1}^{N}A_k^0e^{-\xi_kx}=e^{-b_0x}\hat\pi_0(q-c_0b_0)+b_0\sum_{k=1}^{N}\frac{A_k^1}{b_0-\xi_k}\Big[e^{-\xi_kx}\hat\pi_0(q-c_0\xi_k)-e^{-b_0x}\hat\pi_0(q-c_0b_0)\Big],\\[2ex]\displaystyle\sum_{k=1}^{N}A_k^1e^{-\xi_kx}=e^{-b_1x}\hat\pi_1(q-c_1b_1)+b_1\sum_{k=1}^{N}\frac{A_k^0}{b_1-\xi_k}\Big[e^{-\xi_kx}\hat\pi_1(q-c_1\xi_k)-e^{-b_1x}\hat\pi_1(q-c_1b_1)\Big],\end{cases}\tag{2.8}$$

    where

    $$\hat\pi_i(p)=\int_0^\infty e^{-pt}\pi_i(t)\,\mathrm{d}t,\qquad i\in\{0,1\},$$

    is the Laplace transform of the distribution of the holding time.

    From (2.8), we get the following linear equations for the undetermined coefficients $\xi_k$ and $A_k^i$, $i\in\{0,1\}$, $k=1,\dots,N$:

    $$b_0\sum_{k=1}^{N}\frac{A_k^1}{b_0-\xi_k}=1,\qquad b_1\sum_{k=1}^{N}\frac{A_k^0}{b_1-\xi_k}=1\tag{2.9}$$

    and

    $$A_k^1=\frac{b_0-\xi_k}{b_0}\cdot\frac{A_k^0}{\hat\pi_0(q-c_0\xi_k)},\qquad A_k^0=\frac{b_1-\xi_k}{b_1}\cdot\frac{A_k^1}{\hat\pi_1(q-c_1\xi_k)},\qquad k=1,\dots,N.\tag{2.10}$$

    The $k$-th system of (2.10) has a nontrivial ($A_k^i\ne0$) solution if and only if $\xi_k=\xi_k(q)$ is a root of the equation

    $$\hat\pi_0(q-c_0\xi)\,\hat\pi_1(q-c_1\xi)=\Big(1-\frac{\xi}{b_0}\Big)\Big(1-\frac{\xi}{b_1}\Big).\tag{2.11$_q$}$$

    Since the mappings $\xi\mapsto\hat\pi_0(q-c_0\xi)$ and $\xi\mapsto\hat\pi_1(q-c_1\xi)$, $\xi>0$ (with negative $c_0$ and $c_1$), are positive decreasing functions and $\hat\pi_0(q)\hat\pi_1(q)<1$ for all $q>0$, equation $(2.11)_q$ has exactly two real positive roots: $\xi_1=\xi_1(q)$, $\xi_1<b_0\wedge b_1$, and $\xi_2=\xi_2(q)$, $\xi_2>b_0\vee b_1$, see Fig. 1.

    Figure 1.  Positive roots $\xi_1$ and $\xi_2$ of equation $(2.11)_q$.

    Hence, the moment generating function $\phi(x;q)$ is defined by (2.7) with $N=2$,

    $$\phi(x;q)=e^{-\xi_1(q)x}A_1+e^{-\xi_2(q)x}A_2.\tag{2.12}$$

    The corresponding coefficients $A_1=(A_1^0,A_1^1)$ and $A_2=(A_2^0,A_2^1)$ are determined by the system (2.9)-(2.10), which splits into two dual linear systems,

    $$\begin{cases}b_1\Big(\dfrac{A_1^0}{b_1-\xi_1}+\dfrac{A_2^0}{b_1-\xi_2}\Big)=1,\\[1.5ex]\dfrac{A_1^0}{\hat\pi_0(q-c_0\xi_1)}+\dfrac{A_2^0}{\hat\pi_0(q-c_0\xi_2)}=1,\end{cases}$$

    and

    $$\begin{cases}b_0\Big(\dfrac{A_1^1}{b_0-\xi_1}+\dfrac{A_2^1}{b_0-\xi_2}\Big)=1,\\[1.5ex]\dfrac{A_1^1}{\hat\pi_1(q-c_1\xi_1)}+\dfrac{A_2^1}{\hat\pi_1(q-c_1\xi_2)}=1.\end{cases}$$

    After easy algebra, one can obtain the following explicit formulae

    $$A_1^0=\frac{b_1-\xi_1}{b_1}\cdot\frac{f_0(\xi_2)-b_1}{f_0(\xi_2)-f_0(\xi_1)}=\frac{b_1-\xi_1}{b_1}\Big(1-\frac{b_1-f_0(\xi_1)}{f_0(\xi_2)-f_0(\xi_1)}\Big),\tag{2.13}$$
    $$A_2^0=\frac{b_1-\xi_2}{b_1}\cdot\frac{b_1-f_0(\xi_1)}{f_0(\xi_2)-f_0(\xi_1)}=\frac{b_1-\xi_2}{b_1}\Big(1-\frac{f_0(\xi_2)-b_1}{f_0(\xi_2)-f_0(\xi_1)}\Big),\tag{2.14}$$
    $$A_1^1=\frac{b_0-\xi_1}{b_0}\cdot\frac{f_1(\xi_2)-b_0}{f_1(\xi_2)-f_1(\xi_1)}=\frac{b_0-\xi_1}{b_0}\Big(1-\frac{b_0-f_1(\xi_1)}{f_1(\xi_2)-f_1(\xi_1)}\Big),\tag{2.15}$$
    $$A_2^1=\frac{b_0-\xi_2}{b_0}\cdot\frac{b_0-f_1(\xi_1)}{f_1(\xi_2)-f_1(\xi_1)}=\frac{b_0-\xi_2}{b_0}\Big(1-\frac{f_1(\xi_2)-b_0}{f_1(\xi_2)-f_1(\xi_1)}\Big),\tag{2.16}$$

    where the notations

    $$f_0(\xi)=f_0(\xi;q)=\frac{b_1-\xi}{\hat\pi_0(q-c_0\xi)},\qquad f_1(\xi)=f_1(\xi;q)=\frac{b_0-\xi}{\hat\pi_1(q-c_1\xi)},\qquad \xi\ge0,\tag{2.17}$$

    are used.

    To study the firing probabilities, $\mathbb{P}\{T^{(0)}_x<\infty\}$, $\mathbb{P}\{T^{(1)}_x<\infty\}$, and the mean values of the firing time, $\mathbb{E}[T^{(0)}_x]$, $\mathbb{E}[T^{(1)}_x]$, we analyse the limits of the moment generating function and of its derivative as $q\downarrow0$:

    $$\lim_{q\downarrow0}\phi(x;q),\qquad \lim_{q\downarrow0}\frac{\mathrm{d}\phi(x;q)}{\mathrm{d}q}.$$

    To do this, keeping (2.12) in mind, we need $\xi_k(q)\big|_{q\downarrow0}$ and $\frac{\mathrm{d}\xi_k(q)}{\mathrm{d}q}\big|_{q\downarrow0}$, $k=1,2$, where $\xi_1=\xi_1(q)$, $\xi_1(q)<b_0\wedge b_1$, and $\xi_2=\xi_2(q)$, $\xi_2(q)>b_0\vee b_1$, $q>0$, are the two branches of (positive) roots of $(2.11)_q$.

    The firing probabilities $\mathbb{P}\{T^{(i)}_x<\infty\}=\lim_{q\downarrow0}\phi_i(x;q)$, $i\in\{0,1\}$, are presented in the following proposition.

    Proposition 2.1. Let the mean values of the holding times $T_n$, $n\ge1$, exist, that is,

    $$E_0[T]:=\mathbb{E}[T\,|\,\varepsilon=0]=-\frac{\mathrm{d}\hat\pi_0(q)}{\mathrm{d}q}\Big|_{q=0}<\infty,\qquad E_1[T]:=\mathbb{E}[T\,|\,\varepsilon=1]=-\frac{\mathrm{d}\hat\pi_1(q)}{\mathrm{d}q}\Big|_{q=0}<\infty.\tag{2.18}$$

    ● If

    $$c_0E_0[T]+c_1E_1[T]+b_0^{-1}+b_1^{-1}<0,\tag{2.19}$$

    then the limits $\xi^*=\lim_{q\downarrow0}\xi_1(q)$ and $\xi^{**}=\lim_{q\downarrow0}\xi_2(q)$ exist and are positive.

    The firing probabilities are given by

    $$\mathbb{P}\{T^{(0)}_x<\infty\}=A_1^0e^{-\xi^*x}+A_2^0e^{-\xi^{**}x},\qquad \mathbb{P}\{T^{(1)}_x<\infty\}=A_1^1e^{-\xi^*x}+A_2^1e^{-\xi^{**}x},\tag{2.20}$$

    where $A_k^i=A_k^i(\xi^*,\xi^{**})$, $i\in\{0,1\}$, $k=1,2$, are defined by (2.13)-(2.16) with $\xi_1=\xi^*$, $\xi_2=\xi^{**}$.

    ● Otherwise, if

    $$c_0E_0[T]+c_1E_1[T]+b_0^{-1}+b_1^{-1}\ge0,\tag{2.21}$$

    then firing occurs a.s.,

    $$\mathbb{P}\{T^{(0)}_x<\infty\}=\mathbb{P}\{T^{(1)}_x<\infty\}=1.\tag{2.22}$$

    Proof. Note that $\hat\pi_0(q-c_0\xi)$ and $\hat\pi_1(q-c_1\xi)$, $c_0,c_1<0$, are positive decreasing convex functions of $q>0$ and of $\xi>0$.

    Since $\hat\pi_0(0)=\hat\pi_1(0)=1$, $\xi=0$ is a root of equation $(2.11)_0$. The other roots of equation $(2.11)_0$ depend on the relation between the values of the derivatives in $\xi$ at the point $\xi=0$ of the two sides of this equation. The derivatives of the RHS and of the LHS of $(2.11)_q$ are given by

    $$\frac{\mathrm{d}}{\mathrm{d}\xi}\Big[\Big(1-\frac{\xi}{b_0}\Big)\Big(1-\frac{\xi}{b_1}\Big)\Big]\Big|_{\xi=0}=-\Big(\frac{1}{b_0}+\frac{1}{b_1}\Big),\tag{2.23}$$
    $$\frac{\mathrm{d}}{\mathrm{d}\xi}\big[\hat\pi_0(q-c_0\xi)\hat\pi_1(q-c_1\xi)\big]\Big|_{\xi=0,\,q=0}=c_0E_0[T]+c_1E_1[T],\tag{2.24}$$

    respectively, see (2.18).

    We have two distinct situations.

    ● Let (2.19) hold, that is, $-\big(\tfrac{1}{b_0}+\tfrac{1}{b_1}\big)>c_0E_0[T]+c_1E_1[T]$.

    Since $\hat\pi_0(-c_0\xi)\hat\pi_1(-c_1\xi)$, $\xi>0$, is a positive decreasing convex function, $\xi\mapsto(1-\xi/b_0)(1-\xi/b_1)$ is convex, and

    $$\frac{\mathrm{d}}{\mathrm{d}\xi}\big[\hat\pi_0(-c_0\xi)\hat\pi_1(-c_1\xi)\big]\Big|_{\xi=0}<\frac{\mathrm{d}}{\mathrm{d}\xi}\Big[\Big(1-\frac{\xi}{b_0}\Big)\Big(1-\frac{\xi}{b_1}\Big)\Big]\Big|_{\xi=0},$$

    equation $(2.11)_0$ has exactly two positive roots, $\xi^*<b_0\wedge b_1$ and $\xi^{**}>b_0\vee b_1$, see Fig. 2 (left). The functions $q\mapsto\hat\pi_0(q-c_0\xi)$ and $q\mapsto\hat\pi_1(q-c_1\xi)$ are monotone decreasing; hence, the roots $\xi_1(q),\xi_2(q)$ of $(2.11)_q$, $q>0$, are monotone functions of $q$. Further, for any positive $q$ we have

    Figure 2.  Two positive roots $\xi^*$ and $\xi^{**}$ of equation $(2.11)_0$ in the case (2.19) (left); the unique positive root $\xi^{**}$ of $(2.11)_0$ in the case (2.21) (right).

    $$\xi^*<\xi_1(q)<b_0\wedge b_1\le b_0\vee b_1<\xi_2(q)<\xi^{**},$$

    and

    $$\lim_{q\downarrow0}\xi_1(q)=\xi^*,\qquad \lim_{q\downarrow0}\xi_2(q)=\xi^{**}.$$

    Equalities (2.20) follow by passing to the limit in (2.12)-(2.17).

    ● On the contrary, let (2.21) hold, that is, $-\big(\tfrac{1}{b_0}+\tfrac{1}{b_1}\big)\le c_0E_0[T]+c_1E_1[T]$.

    In the case of the strict inequality,

    $$\frac{\mathrm{d}}{\mathrm{d}\xi}\big[\hat\pi_0(-c_0\xi)\hat\pi_1(-c_1\xi)\big]\Big|_{\xi=0}>\frac{\mathrm{d}}{\mathrm{d}\xi}\Big[\Big(1-\frac{\xi}{b_0}\Big)\Big(1-\frac{\xi}{b_1}\Big)\Big]\Big|_{\xi=0},$$

    equation $(2.11)_0$ has only one positive root $\xi^{**}$, $\xi^{**}>b_0\vee b_1$, see Fig. 2 (right), and

    $$\lim_{q\downarrow0}\xi_1(q)=0,\qquad \lim_{q\downarrow0}\xi_2(q)=\xi^{**}.\tag{2.25}$$

    In this case, see (2.17),

    $$\lim_{q\downarrow0}f_0(\xi_1(q);q)=b_1,\qquad \lim_{q\downarrow0}f_1(\xi_1(q);q)=b_0.$$

    Hence, by (2.13)-(2.16) we have

    $$\lim_{q\downarrow0}A_1^0=\lim_{q\downarrow0}A_1^1=1,\qquad \lim_{q\downarrow0}A_2^0=\lim_{q\downarrow0}A_2^1=0.\tag{2.26}$$

    Therefore,

    $$\lim_{q\downarrow0}\phi_0(x;q)=\lim_{q\downarrow0}\phi_1(x;q)=1$$

    and

    $$\mathbb{P}\{T^{(0)}_x<\infty\}=\mathbb{P}\{T^{(1)}_x<\infty\}=1.$$

    If the equality holds, $-\big(\tfrac{1}{b_0}+\tfrac{1}{b_1}\big)=c_0E_0[T]+c_1E_1[T]$, then we have

    $$2c_0c_1E_0[T]E_1[T]=\big(b_0^{-1}+b_1^{-1}\big)^2-c_0^2\big(E_0[T]\big)^2-c_1^2\big(E_1[T]\big)^2,$$

    and

    $$\frac{\mathrm{d}^2}{\mathrm{d}\xi^2}\big[\hat\pi_0(-c_0\xi)\hat\pi_1(-c_1\xi)\big]\Big|_{\xi=0}=c_0^2E_0[T^2]+c_1^2E_1[T^2]+2c_0c_1E_0[T]E_1[T]=\big(b_0^{-1}+b_1^{-1}\big)^2+c_0^2\mathrm{Var}_0[T]+c_1^2\mathrm{Var}_1[T]>\frac{2}{b_0b_1}=\frac{\mathrm{d}^2}{\mathrm{d}\xi^2}\Big[\Big(1-\frac{\xi}{b_0}\Big)\Big(1-\frac{\xi}{b_1}\Big)\Big]\Big|_{\xi=0}.$$

    Therefore, since $\hat\pi_0(-c_0\xi)\hat\pi_1(-c_1\xi)$ and $(1-\xi/b_0)(1-\xi/b_1)$ are convex functions, the same result occurs: (2.25)-(2.26), and then (2.22).

    The mean value of the firing time can be obtained in a similar way. We need some auxiliary results.

    Let $\xi_1(q),\xi_2(q)$ be the two branches of positive roots of $(2.11)_q$, $q>0$, and let condition (2.21) hold. By Proposition 2.1, $0<\xi_1(q)<b_0\wedge b_1<\xi_2(q)$ and

    $$\lim_{q\downarrow0}\xi_1(q)=0,\qquad \lim_{q\downarrow0}\xi_2(q)=\xi^{**}.$$

    Let coefficients Aik=Aik(ξ1,ξ2;q) be defined by (2.13)-(2.17).

    Lemma 2.2. Let (2.21) be satisfied. The following limit relations hold:

    $$\frac{\mathrm{d}\xi_1(q)}{\mathrm{d}q}\Big|_{q\downarrow0}=\frac{E_0[T]+E_1[T]}{b_0^{-1}+b_1^{-1}+c_0E_0[T]+c_1E_1[T]}=:\sigma>0\tag{2.27}$$

    and

    $$\frac{\mathrm{d}}{\mathrm{d}q}\big[A_1^0(\xi_1(q),\xi_2(q);q)\big]\Big|_{q\downarrow0}=B_0(\xi^{**})-\frac{\sigma}{b_1},\tag{2.28}$$
    $$\frac{\mathrm{d}}{\mathrm{d}q}\big[A_2^0(\xi_1(q),\xi_2(q);q)\big]\Big|_{q\downarrow0}=\Big(\frac{\xi^{**}}{b_1}-1\Big)B_0(\xi^{**}),\tag{2.29}$$
    $$\frac{\mathrm{d}}{\mathrm{d}q}\big[A_1^1(\xi_1(q),\xi_2(q);q)\big]\Big|_{q\downarrow0}=B_1(\xi^{**})-\frac{\sigma}{b_0},\tag{2.30}$$
    $$\frac{\mathrm{d}}{\mathrm{d}q}\big[A_2^1(\xi_1(q),\xi_2(q);q)\big]\Big|_{q\downarrow0}=\Big(\frac{\xi^{**}}{b_0}-1\Big)B_1(\xi^{**}),\tag{2.31}$$

    where

    $$B_0(\xi^{**})=\frac{b_1E_0[T]-\sigma\big(1+b_1c_0E_0[T]\big)}{f_0(\xi^{**};0)-b_1},\qquad B_1(\xi^{**})=\frac{b_0E_1[T]-\sigma\big(1+b_0c_1E_1[T]\big)}{f_1(\xi^{**};0)-b_0}.\tag{2.32}$$

    Proof. Substitute $\xi_1=\xi_1(q)$ into $(2.11)_q$. Formula (2.27) follows from (2.23)-(2.24) by differentiating $(2.11)_q$; the derivative in $q$ at $q\downarrow0$ gives

    $$-\big(E_0[T]+E_1[T]\big)+\frac{\mathrm{d}\xi_1(q)}{\mathrm{d}q}\Big|_{q\downarrow0}\big(c_0E_0[T]+c_1E_1[T]\big)=-\frac{\mathrm{d}\xi_1(q)}{\mathrm{d}q}\Big|_{q\downarrow0}\Big(\frac{1}{b_0}+\frac{1}{b_1}\Big).$$

    Under condition (2.21), $\lim_{q\downarrow0}\xi_1(q)=0$. By definition (2.17), it follows that

    $$\lim_{q\downarrow0}f_0(\xi_1(q);q)=b_1,\qquad \lim_{q\downarrow0}f_1(\xi_1(q);q)=b_0,$$

    and

    $$\lim_{q\downarrow0}\Big[\frac{\partial f_0(\xi;q)}{\partial q}\Big|_{\xi=\xi_1(q)}\Big]=b_1E_0[T],\qquad \lim_{q\downarrow0}\Big[\frac{\partial f_1(\xi;q)}{\partial q}\Big|_{\xi=\xi_1(q)}\Big]=b_0E_1[T],$$
    $$\lim_{q\downarrow0}\Big[\frac{\partial f_0(\xi;q)}{\partial\xi}\Big|_{\xi=\xi_1(q)}\Big]=-1-b_1c_0E_0[T],\qquad \lim_{q\downarrow0}\Big[\frac{\partial f_1(\xi;q)}{\partial\xi}\Big|_{\xi=\xi_1(q)}\Big]=-1-b_0c_1E_1[T].$$

    With this in mind, by (2.13)-(2.14) one can get

    $$\lim_{q\downarrow0}\Big[\frac{\partial A_1^0(\xi_1,\xi_2;q)}{\partial\xi_1}\Big|_{\xi_1=\xi_1(q),\,\xi_2=\xi_2(q)}\Big]=-\frac{1}{b_1}+\frac{\lim_{q\downarrow0}\big[\partial f_0(\xi;q)/\partial\xi\,\big|_{\xi=\xi_1(q)}\big]}{f_0(\xi^{**};0)-b_1}=-\frac{1}{b_1}-\frac{1+b_1c_0E_0[T]}{f_0(\xi^{**};0)-b_1},\tag{2.33}$$
    $$\lim_{q\downarrow0}\Big[\frac{\partial A_2^0(\xi_1,\xi_2;q)}{\partial\xi_1}\Big|_{\xi_1=\xi_1(q),\,\xi_2=\xi_2(q)}\Big]=-\frac{b_1-\xi^{**}}{b_1}\cdot\frac{\lim_{q\downarrow0}\big[\partial f_0(\xi;q)/\partial\xi\,\big|_{\xi=\xi_1(q)}\big]}{f_0(\xi^{**};0)-b_1}=\frac{b_1-\xi^{**}}{b_1}\cdot\frac{1+b_1c_0E_0[T]}{f_0(\xi^{**};0)-b_1},\tag{2.34}$$
    $$\lim_{q\downarrow0}\Big[\frac{\partial A_1^0(\xi_1,\xi_2;q)}{\partial q}\Big|_{\xi_1=\xi_1(q),\,\xi_2=\xi_2(q)}\Big]=\frac{\lim_{q\downarrow0}\big[\partial f_0(\xi;q)/\partial q\,\big|_{\xi=\xi_1(q)}\big]}{f_0(\xi^{**};0)-b_1}=\frac{b_1E_0[T]}{f_0(\xi^{**};0)-b_1},\tag{2.35}$$
    $$\lim_{q\downarrow0}\Big[\frac{\partial A_2^0(\xi_1,\xi_2;q)}{\partial q}\Big|_{\xi_1=\xi_1(q),\,\xi_2=\xi_2(q)}\Big]=-\frac{b_1-\xi^{**}}{b_1}\cdot\frac{\lim_{q\downarrow0}\big[\partial f_0(\xi;q)/\partial q\,\big|_{\xi=\xi_1(q)}\big]}{f_0(\xi^{**};0)-b_1}=-\frac{b_1-\xi^{**}}{b_1}\cdot\frac{b_1E_0[T]}{f_0(\xi^{**};0)-b_1}.\tag{2.36}$$

    and

    $$\lim_{q\downarrow0}\Big[\frac{\partial A_1^0(\xi_1,\xi_2;q)}{\partial\xi_2}\Big|_{\xi_1=\xi_1(q),\,\xi_2=\xi_2(q)}\Big]=0,\qquad \lim_{q\downarrow0}\Big[\frac{\partial A_2^0(\xi_1,\xi_2;q)}{\partial\xi_2}\Big|_{\xi_1=\xi_1(q),\,\xi_2=\xi_2(q)}\Big]=0.\tag{2.37}$$

    Substituting (2.33)-(2.37) into

    $$\frac{\mathrm{d}A_1^0}{\mathrm{d}q}\Big|_{q\downarrow0}=\frac{\partial A_1^0}{\partial\xi_1}\Big|_{q\downarrow0}\,\xi_1'(0)+\frac{\partial A_1^0}{\partial\xi_2}\Big|_{q\downarrow0}\,\xi_2'(0)+\frac{\partial A_1^0}{\partial q}\Big|_{q\downarrow0},\qquad \frac{\mathrm{d}A_2^0}{\mathrm{d}q}\Big|_{q\downarrow0}=\frac{\partial A_2^0}{\partial\xi_1}\Big|_{q\downarrow0}\,\xi_1'(0)+\frac{\partial A_2^0}{\partial\xi_2}\Big|_{q\downarrow0}\,\xi_2'(0)+\frac{\partial A_2^0}{\partial q}\Big|_{q\downarrow0},$$

    where $\xi_k'(0):=\mathrm{d}\xi_k(q)/\mathrm{d}q\,\big|_{q\downarrow0}$, $k=1,2$, one can get Eqns (2.28)-(2.29). Eqns (2.30)-(2.31) follow similarly.

    Proposition 2.3. If (2.21) holds, then the mean firing time $M(x)=\big(\mathbb{E}[T^{(0)}_x],\mathbb{E}[T^{(1)}_x]\big)$ is finite and is given componentwise by

    $$\mathbb{E}\big[T^{(0)}_x\big]=\sigma\Big(x+\frac{1}{b_1}\Big)-B_0(\xi^{**})\Big[1+\Big(\frac{\xi^{**}}{b_1}-1\Big)e^{-\xi^{**}x}\Big],\tag{2.38}$$
    $$\mathbb{E}\big[T^{(1)}_x\big]=\sigma\Big(x+\frac{1}{b_0}\Big)-B_1(\xi^{**})\Big[1+\Big(\frac{\xi^{**}}{b_0}-1\Big)e^{-\xi^{**}x}\Big],\tag{2.39}$$

    where $\sigma$, $B_0(\xi^{**})$, $B_1(\xi^{**})$ and $\xi^{**}=\lim_{q\downarrow0}\xi_2(q)$ are defined in Lemma 2.2.

    A numerical example with various values of the jump amplitudes is depicted in Fig. 3: the average firing time increases as the upward jumps vary from large to small.

    Figure 3.  Mean values of the firing times $\mathbb{E}T^{(0)}_x$, $x=1$, $c_0=-1$, $c_1=-2$, (2.38), with exponentially distributed holding times, $\mathrm{Exp}(\lambda)$, depending on $\lambda=\lambda_0=\lambda_1$. From left to right: $b_0=0.1$, $b_1=0.5$; $b_0=1$, $b_1=5$; $b_0=2$, $b_1=10$; $b_0=4$, $b_1=20$.

    Proof. Since $M(x)=-\lim_{q\downarrow0}\frac{\mathrm{d}\phi(x;q)}{\mathrm{d}q}$, by (2.12)

    $$M(x)=-e^{-x\xi_1|_{q\downarrow0}}\,\frac{\mathrm{d}A_1}{\mathrm{d}q}\Big|_{q\downarrow0}-e^{-x\xi_2|_{q\downarrow0}}\,\frac{\mathrm{d}A_2}{\mathrm{d}q}\Big|_{q\downarrow0}+x\,e^{-x\xi_1|_{q\downarrow0}}\big(\xi_1'A_1\big)\big|_{q\downarrow0}+x\,e^{-x\xi_2|_{q\downarrow0}}\big(\xi_2'A_2\big)\big|_{q\downarrow0}.$$

    By (2.25) and (2.26) we have $A_1|_{q\downarrow0}=\mathbf{1}$, $A_2|_{q\downarrow0}=0$, $\xi_1|_{q\downarrow0}=0$, $\xi_2|_{q\downarrow0}=\xi^{**}$ and $\xi_1'|_{q\downarrow0}=\sigma$. Therefore,

    $$M(x)=-\frac{\mathrm{d}A_1}{\mathrm{d}q}\Big|_{q\downarrow0}-e^{-\xi^{**}x}\,\frac{\mathrm{d}A_2}{\mathrm{d}q}\Big|_{q\downarrow0}+x\sigma\mathbf{1},$$

    which by Lemma 2.2 gives (2.38)-(2.39).

    Remark 2.1. Firing time distribution in the Markovian case. Let the holding times be exponentially distributed with the alternating mean values $\lambda_0^{-1},\lambda_1^{-1}$. In this case, the pair $\big(X(t),\varepsilon(t)\big)$, $t\ge0$, is a Markov process.

    Since

    $$\hat\pi_0(p)=\frac{\lambda_0}{\lambda_0+p},\qquad \hat\pi_1(p)=\frac{\lambda_1}{\lambda_1+p},$$

    the functions $f_0$ and $f_1$, see (2.17), are given by

    $$f_0(\xi)=\lambda_0^{-1}(b_1-\xi)(\lambda_0+q-c_0\xi),\qquad f_1(\xi)=\lambda_1^{-1}(b_0-\xi)(\lambda_1+q-c_1\xi),$$

    and equation $(2.11)_q$ becomes

    $$\frac{\lambda_0\lambda_1}{(\lambda_0+q-c_0\xi)(\lambda_1+q-c_1\xi)}=\Big(1-\frac{\xi}{b_0}\Big)\Big(1-\frac{\xi}{b_1}\Big).$$

    In this case formulae (2.38)-(2.39) for mean firing times hold with

    $$B_0(\xi^{**})=\frac{b_1-\sigma(\lambda_0+b_1c_0)}{\xi^{**}\big(c_0\xi^{**}-\lambda_0-b_1c_0\big)},\qquad B_1(\xi^{**})=\frac{b_0-\sigma(\lambda_1+b_0c_1)}{\xi^{**}\big(c_1\xi^{**}-\lambda_1-b_0c_1\big)},$$

    where $\xi^{**}=\lim_{q\downarrow0}\xi_2(q)$.
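In the Markovian case everything above is computable numerically. The following sketch finds the root $\xi^{**}$ by plain bisection, evaluates the mean firing time via (2.38), and cross-checks it by Monte Carlo simulation; all parameter values are illustrative assumptions, not quantities taken from the paper:

```python
import math
import random

# Illustrative parameters: decay rates c_i < 0, switching rates lam_i,
# jump parameters b_i, threshold x = log(H / V0).
c0, c1 = -1.0, -2.0
lam0, lam1 = 5.0, 5.0
b0, b1 = 2.0, 4.0
x = 1.0

E0T, E1T = 1.0 / lam0, 1.0 / lam1
denom = 1.0 / b0 + 1.0 / b1 + c0 * E0T + c1 * E1T
assert denom > 0           # strict form of (2.21): firing occurs a.s.
sigma = (E0T + E1T) / denom

def g(xi):
    """Difference of the two sides of (2.11)_0 in the Markovian case."""
    return (lam0 * lam1 / ((lam0 - c0 * xi) * (lam1 - c1 * xi))
            - (1 - xi / b0) * (1 - xi / b1))

# xi** is the root exceeding max(b0, b1); locate it by bisection.
lo, hi = max(b0, b1) + 1e-9, 1000.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
xi2 = 0.5 * (lo + hi)

# B_0(xi**) in the Markovian case and the mean firing time (2.38).
B0 = (b1 - sigma * (lam0 + b1 * c0)) / (xi2 * (c0 * xi2 - lam0 - b1 * c0))
mean_theory = sigma * (x + 1 / b1) - B0 * (1 + (xi2 / b1 - 1) * math.exp(-xi2 * x))

# Monte Carlo estimate of E[T_x^{(0)}] for comparison.
rng = random.Random(7)
def sample():
    t, level, state = 0.0, 0.0, 0
    cs, lams, bs = (c0, c1), (lam0, lam1), (b0, b1)
    while level <= x:
        tau = rng.expovariate(lams[state])   # holding time
        t += tau
        level += cs[state] * tau + rng.expovariate(bs[state])  # decay + jump
        state = 1 - state
    return t

mean_mc = sum(sample() for _ in range(50000)) / 50000
print(mean_theory, mean_mc)
```

The two printed values should agree up to Monte Carlo error, providing a sanity check of the closed-form expressions.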

    Formulae (2.38)-(2.39) for the mean firing times can be simplified to an explicit form also in the case of an alternating compound Poisson process, that is, if $c_0=c_1=0$, which gives a nice additional result.

    Proposition 2.4. Let the telegraph component vanish, c0=c1=0.

    In this case condition (2.21) always holds. The mean firing times of Tx are given by

    $$\mathbb{E}\big[T^{(0)}_x\big]=\sigma\Big(x+\frac{1}{b_1}\Big)+\frac{b_1E_0[T]-\sigma}{2b}\Big[1+\frac{b_0}{b_1}\,e^{-2bx}\Big],\tag{2.40}$$

    and

    $$\mathbb{E}\big[T^{(1)}_x\big]=\sigma\Big(x+\frac{1}{b_0}\Big)+\frac{b_0E_1[T]-\sigma}{2b}\Big[1+\frac{b_1}{b_0}\,e^{-2bx}\Big],\tag{2.41}$$

    where $\sigma$, see (2.27), simplifies to

    $$\sigma=\frac{b_0b_1}{2b}\big(E_0[T]+E_1[T]\big),\qquad 2b=b_0+b_1.$$

    Proof. The moment generating function $\phi$ is given by (2.12), where $\xi_1(q),\xi_2(q)$ are the two (positive) roots of equation $(2.11)_q$ with $c_0=c_1=0$:

    $$(b_0-\xi)(b_1-\xi)=C_q,\qquad C_q=b_0b_1\hat\pi_0(q)\hat\pi_1(q),\quad q\ge0.$$

    Explicitly,

    $$\xi_1=b-\tfrac{1}{2}D,\qquad \xi_2=b+\tfrac{1}{2}D,\qquad\text{where}\quad D=\sqrt{(b_0-b_1)^2+4C_q}.\tag{2.42}$$

    Formulae for the coefficients Aik,i{0,1},k=1,2, can be simplified. First, by (2.17) we have

    $$f_0(\xi_1)=\frac{b_1-b_0+D}{2\hat\pi_0(q)},\qquad f_0(\xi_2)=\frac{b_1-b_0-D}{2\hat\pi_0(q)}.\tag{2.43}$$

    Further, by (2.13) and (2.14)

    $$A_1^0=\frac{2C_q+b_1\hat\pi_0(q)(b_1-b_0+D)}{2b_1D}=\frac{\hat\pi_0(q)\,(1+\Delta_0)}{2}$$

    and

    $$A_2^0=-\frac{2C_q+b_1\hat\pi_0(q)(b_1-b_0-D)}{2b_1D}=\frac{\hat\pi_0(q)\,(1-\Delta_0)}{2},$$

    where $\Delta_0=\dfrac{b_1-b_0+2b_0\hat\pi_1(q)}{D}$.

    Similarly, by (2.15) and (2.16)

    $$A_1^1=\frac{\hat\pi_1(q)\,(1+\Delta_1)}{2},\qquad A_2^1=\frac{\hat\pi_1(q)\,(1-\Delta_1)}{2},$$

    where $\Delta_1=\dfrac{b_0-b_1+2b_1\hat\pi_0(q)}{D}$.

    After easy algebra one can obtain the explicit formulae for the moment generating functions of Tx:

    $$\phi_0(x;q)=\hat\pi_0(q)\,e^{-bx}\big[\cosh(Dx/2)+\Delta_0\sinh(Dx/2)\big],\qquad \phi_1(x;q)=\hat\pi_1(q)\,e^{-bx}\big[\cosh(Dx/2)+\Delta_1\sinh(Dx/2)\big].$$

    Further, by (2.42), $\xi^{**}=\lim_{q\downarrow0}\xi_2(q)=2b$; by (2.43), $f_0(\xi^{**};0)=-b_0$. Similarly, $f_1(\xi^{**};0)=-b_1$, and by (2.32)

    $$B_0(\xi^{**})=\frac{\sigma-b_1E_0[T]}{2b},\qquad B_1(\xi^{**})=\frac{\sigma-b_0E_1[T]}{2b}.$$

    Under these simplifications, formulae (2.38)-(2.39), Proposition 2.3, become (2.40)-(2.41).
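Since (2.40)-(2.41) involve the holding time distributions only through their means, Proposition 2.4 can be checked with non-exponential holding times. A small Monte Carlo sketch assuming uniformly distributed holding times (all parameter values are illustrative assumptions):

```python
import math
import random

b0, b1 = 1.0, 2.0
theta0, theta1 = 0.4, 0.6          # holding times ~ Uniform(0, theta_i)
E0T, E1T = theta0 / 2, theta1 / 2
x = 1.0

two_b = b0 + b1
sigma = b0 * b1 / two_b * (E0T + E1T)

# Mean firing time (2.40) for the alternating compound Poisson case c0 = c1 = 0.
mean_theory = (sigma * (x + 1 / b1)
               + (b1 * E0T - sigma) / two_b * (1 + (b0 / b1) * math.exp(-two_b * x)))

rng = random.Random(3)
def sample():
    t, level, state = 0.0, 0.0, 0
    while level <= x:
        t += rng.uniform(0.0, (theta0, theta1)[state])   # holding time
        level += rng.expovariate((b0, b1)[state])        # jump Z_n ~ Exp(b_i)
        state = 1 - state
    return t

n = 50000
mean_mc = sum(sample() for _ in range(n)) / n
print(mean_theory, mean_mc)
```

A quick consistency check of (2.40) at the boundary: as $x\to0+$ the formula collapses to $E_0[T]$, as it should, since the very first jump crosses the zero threshold.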

    Consider the case of small frequent stimuli. We assume that the mean values of the jumps, $b_0^{-1},b_1^{-1}$, and the holding times $T_n$ consistently tend to zero, and we study the asymptotic behaviour of the firing time under these circumstances. Let us state the exact assumptions.

    Let the parameters of stochastic stimulation of a neuron be scaled as follows: first, the mean stimuli amplitudes consistently tend to zero,

    $$E_0[Y]=b_0^{-1}\to0,\qquad E_1[Y]=b_1^{-1}\to0;\tag{3.1}$$

    second, the holding time intervals tend to zero. More precisely, assume that for any fixed positive q

    $$\hat\pi_0(q)\to1,\qquad \hat\pi_1(q)\to1,\tag{3.2}$$

    such that the derivatives of the moment generating functions ˆπ0 and ˆπ1 exist and vanish:

    $$m_0(q):=-\hat\pi_0'(q)=E_0\big[Te^{-qT}\big]\to0,\qquad m_1(q):=-\hat\pi_1'(q)=E_1\big[Te^{-qT}\big]\to0,\tag{3.3}$$
    $$\hat\pi_0''(q)=E_0\big[T^2e^{-qT}\big]\to0,\qquad \hat\pi_1''(q)=E_1\big[T^2e^{-qT}\big]\to0.\tag{3.4}$$

    Assume the convergence rates in (3.1) to be comparable,

    $$\frac{b_0}{b_1}=\frac{E_1[T]}{E_0[T]}\to\beta;\tag{3.5}$$

    and the convergence rates in (3.1) and (3.3)-(3.4) to be consistent as follows:

    $$\frac{E_0[Y]}{m_0(q)}=\frac{1}{b_0m_0(q)}\to v_0,\qquad \frac{E_1[Y]}{m_1(q)}=\frac{1}{b_1m_1(q)}\to v_1,\qquad v_0,v_1\ge0,\tag{3.6}$$

    and

    $$b_0E_0\big[T^2e^{-qT}\big]\to0,\qquad b_1E_1\big[T^2e^{-qT}\big]\to0.\tag{3.7}$$

    Coefficients v0,v1 describe an additional positive trend arising in the jump-telegraph process due to small frequent positive jumps.

    Let

    $$\kappa:=1+c_0v_0^{-1}+\beta\big(1+c_1v_1^{-1}\big).\tag{3.8}$$

    To analyse the behaviour of the roots $\xi_1$ and $\xi_2$ of $(2.11)_q$ we will use the following expansion of the moment generating functions of the holding time distributions:

    $$\hat\pi_0(q-c_0\xi)=\hat\pi_0(q)+c_0\xi\,m_0(q)+R_0(\xi;q),\qquad \hat\pi_1(q-c_1\xi)=\hat\pi_1(q)+c_1\xi\,m_1(q)+R_1(\xi;q),\tag{3.9}$$

    where, by (3.4),

    $$R_0(\xi;q)=c_0^2\xi^2\,E_0\Big[T^2e^{-qT}\sum_{n\ge0}\frac{(c_0\xi T)^n}{(n+2)!}\Big],\qquad |R_0(\xi;q)|\le c_0^2\xi^2\,E_0\big[T^2e^{-qT}\big]\to0,$$
    $$R_1(\xi;q)=c_1^2\xi^2\,E_1\Big[T^2e^{-qT}\sum_{n\ge0}\frac{(c_1\xi T)^n}{(n+2)!}\Big],\qquad |R_1(\xi;q)|\le c_1^2\xi^2\,E_1\big[T^2e^{-qT}\big]\to0.$$

    Condition (3.7) provides convergence that is uniform in $\xi$:

    $$b_0\,\frac{R_0(\xi;q)}{\xi^2}=b_0\,\frac{\hat\pi_0(q-c_0\xi)-\hat\pi_0(q)-c_0\xi m_0(q)}{\xi^2}\to0,\qquad b_1\,\frac{R_1(\xi;q)}{\xi^2}=b_1\,\frac{\hat\pi_1(q-c_1\xi)-\hat\pi_1(q)-c_1\xi m_1(q)}{\xi^2}\to0.\tag{3.10}$$

    Theorem 3.1. Let the mean jump amplitudes $b_0^{-1},b_1^{-1}$ and the holding times be asymptotically zero, such that conditions (3.1)-(3.6) and (3.7) are met.

    If $\kappa\in(0,+\infty]$, then for any $x>0$,

    $$T_x\to\gamma x\quad\text{a.s.},$$

    where

    $$\gamma=\frac{v_0^{-1}+\beta v_1^{-1}}{\kappa}=\frac{v_0^{-1}+\beta v_1^{-1}}{1+c_0v_0^{-1}+\beta(1+c_1v_1^{-1})},\qquad \gamma>0.$$

    If $\kappa\le0$, then

    $$T_x\to+\infty\quad\text{a.s.}$$

    Proof. To analyse the asymptotic behaviour of $T_x$ we need to evaluate the behaviour of the positive roots $\xi_1(q),\xi_2(q)$, $\xi_1(q)<\xi_2(q)$, of equation $(2.11)_q$.

    It turns out that the behaviour of the smaller root $\xi_1$, $\xi_1<b_0\wedge b_1$, depends on the sign of $\kappa$, (3.8).

    Due to (3.9), equation $(2.11)_q$ takes the form

    $$\big(\hat\pi_0(q)+c_0\xi m_0(q)+R_0(\xi;q)\big)\big(\hat\pi_1(q)+c_1\xi m_1(q)+R_1(\xi;q)\big)=\Big(1-\frac{\xi}{b_0}\Big)\Big(1-\frac{\xi}{b_1}\Big),$$

    which can be rewritten as

    $$A_0-A_1\xi+A_2\xi^2=0.\tag{3.11}$$

    Here A0 and A1 are constants (depending only on q),

    $$A_0=1-\hat\pi_0(q)\hat\pi_1(q),\qquad A_1=\frac{1}{b_0}+\frac{1}{b_1}+c_0m_0(q)\hat\pi_1(q)+c_1m_1(q)\hat\pi_0(q),$$

    and A2=A2(ξ) is given by

    $$A_2=A_2(\xi)=\frac{1}{b_0b_1}-c_0c_1m_0(q)m_1(q)-\xi^{-2}\Big[R_0\big(\hat\pi_1(q)+c_1\xi m_1(q)\big)+R_1\big(\hat\pi_0(q)+c_0\xi m_0(q)\big)+R_0R_1\Big].$$

    By (3.10) and (3.6),

    $$\lim(b_0A_0)=\lim b_0\big(1-\hat\pi_0(q)\hat\pi_1(q)\big)=q\lim b_0\big(E_0[T]+E_1[T]\big)=q\lim b_0\big(m_0(0)+m_1(0)\big)=q\big(v_0^{-1}+\beta v_1^{-1}\big)\ge0.\tag{3.12}$$

    By (3.6) and (3.1)-(3.2), $b_0A_1$ converges to $\kappa$, (3.8):

    $$\lim[b_0A_1]=\lim\Big[1+\frac{b_0}{b_1}+c_0\,b_0m_0(q)\hat\pi_1(q)+c_1\,b_0m_1(q)\hat\pi_0(q)\Big]=1+\beta+c_0v_0^{-1}+\beta c_1v_1^{-1}=\kappa.\tag{3.13}$$

    Next,

    $$b_0A_2=b_1^{-1}-c_0c_1m_1(q)\,b_0m_0(q)-\frac{b_0R_0}{\xi^2}\big(\hat\pi_1(q)+c_1\xi m_1(q)\big)-\frac{b_0R_1}{\xi^2}\big(\hat\pi_0(q)+c_0\xi m_0(q)\big)-\frac{b_0R_0}{\xi^2}\,R_1.$$

    Since $\xi=\xi_1<b_0\wedge b_1$, the terms $\xi m_0(q)$ and $\xi m_1(q)$ are, by (3.6), uniformly bounded. Therefore, by (3.10),

    $$\lim\big[b_0A_2(\xi_1)\big]=0.\tag{3.14}$$

    If $\kappa>0$, the smaller root $\xi_1$ of (3.11) has the following limit:

    $$\lim\xi_1(q)=\lim\frac{2A_0}{A_1+\sqrt{A_1^2-4A_0A_2}}.\tag{3.15}$$

    By (3.12), (3.13) and (3.14) this limit is positive and finite:

    $$\lim\xi_1(q)=\lim\frac{A_0}{A_1}=\frac{q\big(v_0^{-1}+\beta v_1^{-1}\big)}{\kappa}=q\gamma\ge0.$$

    Further, under this scaling,

    $$\lim\frac{f_0(\xi_1)}{b_1}=\lim\frac{1-\xi_1/b_1}{\hat\pi_0(q-c_0\xi_1)}=1,\qquad \lim\frac{f_1(\xi_1)}{b_0}=\lim\frac{1-\xi_1/b_0}{\hat\pi_1(q-c_1\xi_1)}=1.$$

    Under the scaling (3.1), the greater root $\xi_2$, $\xi_2>b_0\vee b_1$, always goes to infinity, $\xi_2\to\infty$; moreover, $f_0(\xi_2)/b_1$ and $f_1(\xi_2)/b_0$ remain finite.

    As a consequence, $A_1^0$ and $A_1^1$, which are defined by (2.13) and (2.15), converge to 1, and $A_2^0$ and $A_2^1$ are bounded.

    Summarising, we obtain

    $$\lim\phi(x)=\exp(-q\gamma x),$$

    which means

    $$T_x\to\frac{v_0^{-1}+\beta v_1^{-1}}{\kappa}\,x=\gamma x\quad\text{a.s.}$$

    If the limit $\lim[b_0A_1]=\kappa$ in (3.13) is not positive, $\kappa\le0$, then (see (3.15))

    $$\lim\xi_1(q)=+\infty,$$

    which corresponds to

    $$\lim\phi(x)=0$$

    and

    $$T_x\to+\infty\quad\text{a.s.}$$

    Remark 3.1. The result of Theorem 3.1 can be interpreted in terms of two types of neurons: if $\kappa>0$, the scaled model corresponds to a so-called tonically discharging cell; a phasic cell appears when $\kappa\le0$, that is, $T_x\to\infty$ and the firing rate drops to zero, see [31].

    Remark 3.2. In the Markovian case, that is, if $\hat\pi_0(q)=\lambda_0/(q+\lambda_0)$, $\hat\pi_1(q)=\lambda_1/(q+\lambda_1)$, conditions (3.2)-(3.7) hold when

    $$\lambda_0,\lambda_1\to+\infty,\qquad \lambda_0/b_0\to v_0,\qquad \lambda_1/b_1\to v_1.$$

    The crucial parameter κ becomes

    $$\kappa=1+c_0v_0^{-1}+\beta\big(1+c_1v_1^{-1}\big)=1+\lim\Big[b_0\big(b_1^{-1}+c_0\lambda_0^{-1}+c_1\lambda_1^{-1}\big)\Big].$$
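In the Markovian case the scaling limit of Theorem 3.1 is easy to observe numerically. A sketch with illustrative (assumed) values $v_0=v_1=1$, $\beta=1$, $c_0=-0.5$, $c_1=-1$, for which $\kappa=0.5$ and $T_x\to\gamma x=4x$:

```python
import random

c0, c1 = -0.5, -1.0     # decay rates (illustrative)
x = 1.0
rng = random.Random(5)

def sample_Tx(b, lam):
    # Two-state Markov model: holding times Exp(lam), jumps Z_n ~ Exp(b).
    t, level, state = 0.0, 0.0, 0
    cs = (c0, c1)
    while level <= x:
        tau = rng.expovariate(lam)
        t += tau
        level += cs[state] * tau + rng.expovariate(b)
        state = 1 - state
    return t

# Scaling: b0 = b1 = lam0 = lam1 = scale, so v0 = v1 = 1 and beta = 1,
# hence kappa = 2 + c0 + c1 = 0.5 and gamma = (1 + 1) / kappa = 4.
estimates = {}
for scale in (10.0, 100.0):
    n = 2000
    estimates[scale] = sum(sample_Tx(scale, scale) for _ in range(n)) / n
print(estimates)        # the estimates approach gamma * x = 4 as scale grows
```

As the scale grows, the Monte Carlo averages concentrate near the deterministic limit $\gamma x$, illustrating the almost sure convergence.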

    The model of neural activity based on a jump-telegraph process, see (1.1), (1.4), which is studied in Sections 2 and 3, can be simplified by restricting it to the case of one state. Consider the particular case of the neural model (1.4) based on the single-state symmetric process

    $$X(t)=ct+\sum_{n=1}^{N(t)}Y_n,\qquad c\le0,\tag{4.1}$$

    with independent, positive, exponentially distributed stimulus amplitudes $Y_n\sim\mathrm{Exp}(b)$, $b>0$; the independent inter-arrival times $\{T_n\}_{n\ge1}$ are identically distributed with density function $\pi(t)$. When the $T_n$ are exponentially distributed, such a model has been studied in detail in [7] and [21].

    The moment generating function $\phi(x)=\mathbb{E}[\exp(-qT_x)]$ of the first passage time $T_x$ is given by (2.12),

    $$\phi(x)=e^{-\xi_1x}A_1+e^{-\xi_2x}A_2,$$

    where $A_1=A_1^0=A_1^1$ and $A_2=A_2^0=A_2^1$ are defined by (2.13)-(2.17), and $\xi_1=\xi_1(q)$, $\xi_2=\xi_2(q)$ are the two branches of positive roots of $(2.11)_q$.

    For the single-state process $X(t)$ defined by (4.1), equation $(2.11)_q$ simplifies to

    $$\hat\pi(q-c\xi)=\Big|1-\frac{\xi}{b}\Big|.$$

    More precisely, $\xi_1$, $\xi_1<b$, is the positive root of $\hat\pi(q-c\xi)=1-\xi/b$, and $\xi_2$, $\xi_2>b$, is the root of $\hat\pi(q-c\xi)=\xi/b-1$. In this case, the functions $f_0$ and $f_1$ defined by (2.17) coincide:

    $$f_0(\xi)\equiv f_1(\xi)=\frac{b-\xi}{\hat\pi(q-c\xi)}=:f(\xi),$$

    that is,

    $$f(\xi_1)=\frac{b-\xi_1}{1-\xi_1/b}=b,\qquad f(\xi_2)=\frac{b-\xi_2}{\xi_2/b-1}=-b.\tag{4.2}$$

    By (4.2) and (2.13)-(2.16),

    $$A_1=A_1^0=A_1^1=\frac{b-\xi_1}{b}=\hat\pi(q-c\xi_1),\qquad A_2=A_2^0=A_2^1=0,$$

    and the moment generating function ϕ (depending only on ξ1) is given by

    $$\phi(x;q)=\frac{b-\xi}{b}\,e^{-\xi x},\tag{4.3}$$

    where $\xi=\xi_1(q)$, $0<\xi_1(q)<b$, $q>0$, is the root of

    $$\hat\pi(q-c\xi)=1-\xi/b.\tag{4.4}$$

    The function $\hat\pi=\hat\pi(q)$, $q\ge0$, is positive, convex and decreasing; hence, the root of (4.4) exists, and the function $\xi_1(q)$, $q\ge0$, increases:

    $$0<q_1<q_2\ \Longrightarrow\ 0<\xi_1(q_1)<\xi_1(q_2)<b.$$

    Therefore, $\lim_{q\downarrow0}\xi_1(q)$ exists and, by Proposition 2.1,

    $$\lim_{q\downarrow0}\xi_1(q)=\begin{cases}\xi^*>0,&\text{if } cE[T]+b^{-1}<0,\\[0.5ex]0,&\text{if } cE[T]+b^{-1}\ge0.\end{cases}$$

    Further, the firing probability $\mathbb{P}\{T_x<\infty\}=\lim_{q\downarrow0}\phi(x;q)$ is given by

    $$\mathbb{P}\{T_x<\infty\}=\begin{cases}\dfrac{b-\xi^*}{b}\,e^{-\xi^*x},&\text{if } cE[T]+b^{-1}<0,\\[1ex]1,&\text{if } cE[T]+b^{-1}\ge0.\end{cases}$$

    In the case $cE[T]+b^{-1}>0$, by differentiating (4.4) (see also (2.27)) one can obtain

    $$\lim_{q\downarrow0}\frac{\mathrm{d}\xi_1(q)}{\mathrm{d}q}=\frac{E[T]}{b^{-1}+cE[T]}>0.\tag{4.5}$$

    The mean firing time is finite: by (4.3) and (4.5) one can obtain

    $$\mathbb{E}[T_x]=-\frac{\mathrm{d}\phi(x;q)}{\mathrm{d}q}\Big|_{q\downarrow0}=\big(x+b^{-1}\big)\lim_{q\downarrow0}\frac{\mathrm{d}\xi_1(q)}{\mathrm{d}q}=\frac{(1+bx)\,E[T]}{1+bc\,E[T]}.\tag{4.6}$$

    This result is in concordance with Proposition 2.3. In this case, by (2.27),

    $$\sigma=\frac{E[T]}{b^{-1}+cE[T]},$$

    and by (2.32),

    $$B_0(\xi^{**})=B_1(\xi^{**})=\frac{bE[T]-\sigma\big(1+bc\,E[T]\big)}{f(\xi^{**};0)-b}=0,$$

    which by (2.38)-(2.39) gives (4.6).

    In particular, consider the model (4.1) defined by the compound Poisson process with (negative) drift $c$; that is, let the inter-arrival times of model (4.1) be exponentially distributed, $\pi(t)=\lambda e^{-\lambda t}$. Now formula (4.3) holds with $\xi$, $0<\xi<b$, the unique positive root of equation (4.4):

    $$\frac{\lambda}{\lambda+q-c\xi}=1-\frac{\xi}{b}.$$

    We have

    $$0<\xi=\xi(q)=\frac{\lambda+q+bc-\sqrt{(\lambda+q+bc)^2-4bcq}}{2c}=b+\frac{\tilde q-\sqrt{\tilde q^2+4bc\lambda}}{2c}<b, \qquad (4.7)$$

    where $\tilde q:=q+\lambda-bc$. Due to (4.3), the moment generating function $\phi$ is given by

    $$\phi(x;q)=\mathrm{E}\,e^{-qT_x}=e^{-\xi x}+b^{-1}\frac{d}{dx}e^{-\xi x}=\frac{2\lambda}{\lambda+q-bc+\sqrt{q^2+(\lambda+bc)^2+2q(\lambda-bc)}}\times\exp\left(-\frac{\lambda+q+bc-\sqrt{q^2+(\lambda+bc)^2+2q(\lambda-bc)}}{2c}\,x\right), \qquad (4.8)$$

    which coincides with [7, (39)]. By applying the inverse Laplace transform $L^{-1}_{q\to t}$ to (4.8) one can obtain the firing density $f_{T_x}(t)$. Formula [32, 2.2.5-18] applied to $\exp(-\xi(q)x)$, with $\xi(q)$ defined by (4.7), shows

    $$L^{-1}_{q\to t}\left[\exp(-\xi(q)x)\right]=\frac{az}{\sqrt{t^2+2at}}\,I_1\!\left(z\sqrt{t^2+2at}\right)\exp\bigl(-bx-(\lambda-bc)t\bigr),$$

    where $a=-x/(2c)$, $z=2\sqrt{-bc\lambda}$ (both positive, since $c<0$). After easy algebra, from (4.8) we obtain

    $$f_{T_x}(t)=\frac{\lambda}{x-ct}\left[x\,I_0(w)-2ct\,\frac{I_1(w)}{w}\right]\exp\bigl(-bx-(\lambda-bc)t\bigr) \qquad (4.9)$$

    with $w:=2\sqrt{\lambda b\,t(x-ct)}$. Formula (4.9) was derived in [7, Theorem 3.1] using another technique.
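    Both the closed-form root (4.7) and the density (4.9) can be sanity-checked numerically: the root must satisfy (4.4), and when $\lambda+bc>0$ the density must integrate to one. The sketch below uses hypothetical parameter values (not from the paper) and the exponentially scaled Bessel functions of scipy to avoid overflow at large $t$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e, i1e

# Hypothetical parameters (not from the paper); note c < 0 (negative drift)
lam, b, c, x = 2.0, 1.0, -0.5, 1.0

def xi(q):
    # closed-form root (4.7)
    A = lam + q + b * c
    return (A - np.sqrt(A * A - 4 * b * c * q)) / (2 * c)

# the closed form solves equation (4.4): lam/(lam+q-c*xi) = 1 - xi/b
q = 0.3
assert abs(lam / (lam + q - c * xi(q)) - (1 - xi(q) / b)) < 1e-12
assert 0 < xi(q) < b

def firing_density(t):
    # firing density (4.9); i0e(w) = exp(-w)*I0(w), i1e(w) = exp(-w)*I1(w)
    w = 2.0 * np.sqrt(lam * b * t * (x - c * t))
    i1_over_w = 0.5 if w < 1e-8 else i1e(w) / w    # I1(w)/w -> 1/2 as w -> 0
    return ((lam / (x - c * t)) * (x * i0e(w) - 2 * c * t * i1_over_w)
            * np.exp(w - b * x - (lam - b * c) * t))

# lam + b*c = 1.5 > 0: firing is certain, so the density integrates to 1
total, _ = quad(firing_density, 0.0, np.inf, limit=200)
assert abs(total - 1.0) < 1e-6

# ... and its mean matches E[T_x] = (1+b*x)/(lam+b*c)
mean, _ = quad(lambda t: t * firing_density(t), 0.0, np.inf, limit=200)
assert abs(mean - (1 + b * x) / (lam + b * c)) < 1e-6
```

    A quick analytic check of (4.9): as $t\downarrow 0$, $w\to 0$ and the density tends to $\lambda e^{-bx}$, which is the stimulus rate times the probability that the first $\mathrm{Exp}(b)$ jump already exceeds the threshold $x$.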

    The firing probability $P\{T_x<\infty\}$ and the moments of $T_x$ can also be obtained: if $\lambda+bc>0$, then (2.21) holds and

    $$\mathrm{E}[T_x]=\frac{1+bx}{\lambda+bc}.$$

    If $\lambda+bc\leq 0$, then $\mathrm{E}[T_x]=+\infty$. This coincides with the known result, see [7, Proposition 4.2].
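    A direct Monte Carlo simulation of the compound Poisson model confirms the mean firing time. The sketch below assumes, for illustration, $\mathrm{Exp}(b)$ stimulus amplitudes and linear decay at rate $|c|$ between stimuli, with hypothetical parameter values such that $\lambda+bc>0$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters (not from the paper): stimuli arrive at rate lam,
# amplitudes are Exp(b) (mean 1/b), and the potential decays linearly
# between stimuli since c < 0.
lam, b, c, x = 2.0, 1.0, -0.5, 1.0
n_paths = 100_000

times = np.zeros(n_paths)
for i in range(n_paths):
    t, X = 0.0, 0.0
    while True:
        dt = rng.exponential(1.0 / lam)    # waiting time to next stimulus
        t += dt
        X += c * dt                        # linear decay between stimuli
        X += rng.exponential(1.0 / b)      # excitatory jump
        if X >= x:                         # threshold crossing -> firing
            times[i] = t
            break

# lam + b*c = 1.5 > 0: firing is certain and E[T_x] = (1+b*x)/(lam+b*c)
assert abs(times.mean() - (1 + b * x) / (lam + b * c)) < 0.05
```

    Because $c<0$, the threshold can be crossed only at a jump time, so it suffices to inspect the path immediately after each stimulus.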

    The limit behaviour under small frequent stimuli, Section 3, also looks simple in the case of the single-state model.

    In this case, condition (3.1) corresponds to $b\to\infty$ and $\beta=1$, see (3.5); (3.6)-(3.7) follow if $\lambda\to+\infty$ and $\lambda/b\to v$.

    By (4.3),

    $$\mathrm{E}\exp(-qT_x)=\phi(x)=\frac{b-\xi_1}{b}\exp(-\xi_1 x),$$

    where $\xi_1=\xi_1(q)$ is the positive root, $0<\xi_1(q)<b$, of

    $$\frac{\lambda}{\lambda+q-c\xi}=1-\frac{\xi}{b}.$$

    One can see that, for any $q>0$, under this scaling

    $$\xi_1(q)=\frac{2q}{v+c+q/b+\sqrt{(v+c+q/b)^2-4cq/b}}\to\begin{cases}\dfrac{q}{v+c}, & \text{if } v+c>0,\\ +\infty, & \text{if } v+c\leq 0,\end{cases}$$

    and for any $x>0$

    $$\phi(x)\to\begin{cases}\exp\bigl(-qx/(v+c)\bigr), & \text{if } v+c>0,\\ 0, & \text{if } v+c\leq 0.\end{cases}$$

    Therefore,

    $$T_x\to\begin{cases}\dfrac{x}{v+c}, & \text{if } v+c>0,\\ +\infty, & \text{if } v+c\leq 0.\end{cases}$$
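    The scaling limit is easy to observe from the closed-form root (4.7). The sketch below sets, hypothetically, $\lambda=vb$ and lets $b$ grow, checking that $\xi_1(q)\to q/(v+c)$ when $v+c>0$; the values of $v$, $c$, $q$ are illustrative:

```python
import numpy as np

# Under the scaling of Section 3 (b -> inf, lam -> inf, lam/b -> v), the
# closed-form root (4.7) should approach q/(v+c) when v + c > 0.
# Hypothetical values (not from the paper): v = 1, c = -0.5, q = 1.
v, c, q = 1.0, -0.5, 1.0

def xi1(b):
    # root (4.7) with lam = v*b
    lam = v * b
    A = lam + q + b * c
    return (A - np.sqrt(A * A - 4 * b * c * q)) / (2 * c)

assert abs(xi1(1e4) - q / (v + c)) < 1e-2
assert abs(xi1(1e6) - q / (v + c)) < 1e-4   # converges as b grows
```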

    The main goal of this paper has been to study a stochastic model based on two states/phases of the nerve cell, alternating at the random times of exponential excitatory inputs. The corresponding single-phase Stein's model is well known and presented in detail in [3,5,10]. Our model generalises and modifies the two-phase cell cycle model presented in [10,11]. We have obtained explicit formulae for the firing probability and the mean firing time under a certain necessary condition. The asymptotic behaviour of the firing time under small frequent stimuli has also been presented. The known results for the single-state homogeneous model [7] follow as a special case.

    Since the real activity of neurons depends on the current state/phase of the organism, the proposed model, based on two alternating patterns, fits well with an intuitive understanding of this issue. The structure of the model can serve as a guide for practitioners: it would be interesting to detect this two-phase behaviour of neurons experimentally.

    The author thanks two anonymous referees for their helpful comments that improved the paper.

    The author declares no conflicts of interest in this paper.



    [1] A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J. Physiol., 117 (1952), 500–544.
    [2] M. L. Lapicque, Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation, (French) [Quantitative research on electrical excitation of nerves treated as a polarization], J. De Physiol. Pathol. Gen., 9 (1907), 620–635.
    [3] R. B. Stein, A theoretical analysis of neuronal variability, Biophys. J., 5 (1965), 173–194.
    [4] P. Lánský and V. Lánská, Diffusion approximation of the neuronal model with synaptic reversal potentials, Biol. Cybern., 56 (1987), 19–26.
    [5] V. Capasso and D. Bakstein, An Introduction to Continuous-Time Stochastic Processes. Theory, Models, and Applications to Finance, Biology, and Medicine, Springer-Verlag, New York, Heidelberg, Dordrecht, London, 2012.
    [6] S. Olmi, D. Angulo-Garcia, A. Imparato, et al., Exact firing time statistics of neurons driven by discrete inhibitory noise, Sci. Rep., 7 (2017), 1577.
    [7] A. Di Crescenzo and B. Martinucci, Analysis of a stochastic neuronal model with excitatory inputs and state-dependent effects, Math. Biosci., 209 (2007), 547–563.
    [8] E. N. Brown, R. Barbieri, V. Ventura, et al., The time-rescaling theorem and its application to neural spike train data analysis, Neural Comput., 14 (2001), 325–346.
    [9] H. C. Tuckwell, Introduction to Theoretical Neurobiology, Volume 2, Nonlinear and Stochastic Theories, Cambridge University Press, 1988.
    [10] R. Rudnicki and M. Tyran-Kamińska, Piecewise Deterministic Processes in Biological Models, Springer-Verlag, 2017.
    [11] H. U. Bauer and K. Pawelzik, Alternating oscillatory and stochastic dynamics in a model for a neuronal assembly, Phys. D Nonlin. Phenom., 69 (1993), 380–393.
    [12] M. Walczak and T. Błasiak, Midbrain dopaminergic neuron activity across alternating brain states of urethane anaesthetized rat, Eur. J. Neurosci., 45 (2017), 1068–1077.
    [13] A. D. Kolesnik and N. Ratanov, Telegraph Processes and Option Pricing, Springer-Verlag, Heidelberg-New York-Dordrecht-London, 2013.
    [14] S. Zacks, Sample Path Analysis and Distributions of Boundary Crossing Times, Lecture Notes in Mathematics, vol. 2203, Springer-Verlag, 2017.
    [15] N. Ratanov, First crossing times of telegraph processes with jumps, Methodol. Comput. Appl. Probab., (2019). https://doi.org/10.1007/s11009-019-09709-5
    [16] L. Beghin, L. Nieddu and E. Orsingher, Probabilistic analysis of the telegrapher's process with drift by means of relativistic transformations, J. Appl. Math. Stoch. Anal., 14 (2001), 11–25.
    [17] L. Bogachev and N. Ratanov, Occupation time distributions for the telegraph process, Stoch. Process. Appl., 121 (2011), 1816–1844.
    [18] O. López and N. Ratanov, On the asymmetric telegraph processes, J. Appl. Prob. 51 (2014), 569–589.
    [19] N. Ratanov, Self-exciting piecewise linear processes, ALEA Lat. Am. J. Probab. Math. Stat. 14 (2017), 445–471.
    [20] A. A. Pogorui, R. M. Rodríguez-Dagnino and T. Kolomiets, The first passage time and estimation of the number of level-crossings for a telegraph process, Ukr. Math. J., 67 (2015), 998–1007.
    [21] G. D'Onofrio, C. Macci and E. Pirozzi, Asymptotic results for first-passage times of some exponential processes, Methodol. Comput. Appl. Probab., 20 (2018), 1453–1476.
    [22] A. Di Crescenzo and A. Meoli, On a jump-telegraph process driven by an alternating fractional Poisson process, J. Appl. Probab., 55 (2018), 94–111.
    [23] M. Abundo, On the first hitting time of a one-dimensional diffusion and a compound Poisson process, Methodol. Comput. Appl. Probab., 12 (2010), 473–490.
    [24] N. Ratanov, Option pricing model based on a Markov-modulated diffusion with jumps, Braz. J. Probab. Stat. 24 (2010), 413–431.
    [25] L. Breuer, First passage times for Markov-additive processes with positive jumps of phase type, J. Appl. Prob. 45 (2008), 779–799.
    [26] V. Srivastava, S. F. Feng, J. D. Cohen, et al., A martingale analysis of first passage times of time-dependent Wiener diffusion models, J. Math. Psychol., 77 (2017), 94–110.
    [27] A. N. Shiryaev, On martingale methods in the boundary crossing problems for Brownian motion, Sovrem. Probl. Mat., 8 (2007), 3–78.
    [28] M. T. Giraudo and L. Sacerdote, Jump-diffusion processes as models for neuronal activity, BioSystems, 40 (1997), 75–82.
    [29] N. L. Johnson, S. Kotz and N. Balakrishnan, Continuous Univariate Distributions, Vol. 1 (Wiley Series in Probability and Statistics) Wiley-Interscience, 1994.
    [30] K. S. Lomax, Business failures; another example of the analysis of failure data, J. Amer. Stat. Assoc., 49 (1954), 847–852.
    [31] H. C. Tuckwell, Introduction to theoretical neurobiology: Volume 1, Linear Cable Theory and Dendritic Structure, Cambridge University Press, 1988.
    [32] A. P. Prudnikov, Y. A. Brychkov and O. I. Marichev, Integrals and Series, Vol. 5. Inverse Laplace Transforms, Gordon and Breach Science Publ. 1992.
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)