Research article

Edge detection of remote sensing image based on Grünwald-Letnikov fractional difference and Otsu threshold


  • As remote sensing technology develops, the resolution of remote sensing images is improving and the presentation of geomorphic information is becoming richer; consequently, the difficulty of identifying and extracting edge information is also increasing. This paper demonstrates an algorithm to detect the edges of remote sensing images based on Grünwald–Letnikov fractional difference and Otsu threshold. First, a convolution difference mask with two parameters in four directions is constructed by using the definition of the Grünwald–Letnikov fractional derivative. Then, the mask is convolved with the gray image of the remote sensing image, and the edge detection image is obtained by binarization with the Otsu threshold. Finally, the influence of the two parameters and of the threshold values on the detection results is discussed. Compared with the results of other detectors on the NWPU VHR-10 dataset, it is found that the algorithm not only has a good visual effect but also shows good performance in quantitative evaluation indicators (binary graph similarity and edge pixel ratio).

    Citation: Chao Chen, Hua Kong, Bin Wu. Edge detection of remote sensing image based on Grünwald-Letnikov fractional difference and Otsu threshold[J]. Electronic Research Archive, 2023, 31(3): 1287-1302. doi: 10.3934/era.2023066




    Among stochastic neuronal models, the Leaky Integrate-and-Fire (LIF) model with colored noise ([1,2]) is a refined version of the classical one driven by white noise ([3,4]). The standard LIF model cannot explain the high variability in the neuronal response to stimulations and the adaptation phenomena ([5,6,7,8]), hence more accurate models are needed. The introduction of memory can be achieved in various different ways. For instance, in [9] two different methods of inclusion of memory are presented: Langevin equations with colored noise and fractional Langevin equations. The latter have also been studied in [10,11], but in this work we will focus on the first one, which can be used to introduce correlated inputs in the model.

    Indeed, correlated inputs have been widely considered to explain adaptation phenomena. One way to introduce correlated inputs is to consider dependence between the excitatory and inhibitory stimuli, as done in [12] and [13], in which IF and Hodgkin-Huxley models with mutually correlated inputs are considered. Another way to introduce correlated inputs in neuronal models is to consider time-correlated stochastic inputs. This kind of approach is first used in [14] and then in [1]. Time-correlated inputs are also described in [15] with the aid of a binary correlated stochastic process. Finally, let us also recall that time-correlated inputs have also been used to model stochastic currents in spatial models, as done in [16].

    We mainly refer to the model presented in [1], in which time-correlated stochastic inputs are modelled by using a colored noise, that is to say a noise induced by an Ornstein-Uhlenbeck process, in contrast with the white one, which is induced by a Brownian motion. In particular, such colored noise could arise from a diffusive approximation of the synaptic current (see [17]). A specific investigation of a colored noise LIF model has been conducted, for instance, in [2]. In particular, the processes involved in the model are strictly linked to integrated Gaussian processes. Indeed, we will see that the colored noise induces an integrated Gaussian term in the membrane potential process. This term is non-Markov, hence the membrane potential process is not a Gauss-Markov process. However, approximating Gauss-Markov processes can be used to estimate the firing rate of the neuron (see [2]). Moreover, the integrated Gaussian term could be further investigated by using some recent techniques on integrated Gaussian processes (see [18,19]), which could lead to another estimate of the firing rate of this model. Such an investigation will be carried out in future works.

    Here, in Section 2, we consider a stochastic model based on a couple of stochastic equations including a correlated input (the colored noise) and a current term. It is derived by applying a linear approximation as in [20] to the stochastic model obtained by combining those of [21] and [22]. We give specific justifications for the choice of the involved parameters and, differently from the previous works, we study the model by varying the correlation time of the colored noise.

    Furthermore, in our investigation we distinguish two different kinds of colored noise, depending on two different reset mechanisms. The first one is linked to the cases in which the noise is induced by some inner mechanism of the neuron (for instance, the activation of a gate), and then it has to be reset together with the membrane potential process. For this reason we will call it endogenous noise. Moreover, in such a case we will consider a non-stationary noise and, to model it, we adopt a non-stationary Ornstein-Uhlenbeck process.

    The second one is instead linked with an external noise: it could be seen as the correlated noise of synaptic or injected inputs. For this reason, even if the membrane potential is reset, the inputs are not affected by this reset, and neither is the noise. Thus we will call it exogenous noise. Finally, in such a case, since it does not depend on the dynamics of the neuron, we could assume that the noise is in its stationary state. Then, we will consider a stationary Ornstein-Uhlenbeck (OU) process to model this noise.

    We remark that these two cases allow us to specialize the proposed model in the attempt to make it closer to neurophysiological evidence. Moreover, from the mathematical point of view, our aim is also to investigate the model when either a non-stationary or a stationary OU process is used to model the noise.

    Furthermore, in the model we also include a current input term. We give motivations for the choice of the kind of current term; then, we show the equivalence between two different ways of including the current term in the model. Indeed, it is possible to consider the current term as an additive term directly in the differential dynamics of the membrane potential process (as in the model investigated here), or, differently, it is possible to consider the current term included in the dynamics of the noise, in such a way that an encoded current is embedded in the noise itself (see, for instance, [2] and [23]). The latter case can be considered to model the dynamics of a neuron embedded in a neuronal network ([24,25]). Specifically, in Appendix A the link between the model considered in [2] and [23] and the model proposed here is explained.

    In Section 3, for the general case, we give the expressions for the mean and covariance functions of the membrane potential process and of the colored noise process. Then, in Sections 4 and 5 we provide the specialized formulae for the mean and covariance of the membrane potential process in the endogenous and in the exogenous noise case, respectively. By numerical evaluations we plot the surfaces of the covariance functions in order to also compare the results graphically.

    Finally, we consider the difficulty that arises in such models in relation to simulation. Indeed, models involving colored noise also involve differential equations with non-regular terms, hence the classical Euler scheme may not converge. In Section 6, we propose a simulation algorithm to overcome this difficulty, based on a random Euler scheme given in [26], and use it to obtain some numerical approximations of the probability density functions of the first firing time. In particular, we give some measurements of the approximation error in terms of maximum relative error with respect to the mean and the variance function of the process. We also use these measures to compare our algorithm with the classical Euler one. Finally, with these simulations, we also show in a qualitative way the role of the correlation of the noise in the firing activity.

    We first consider the model based on the following pair of stochastic equations:

    $$dV(t) = -\frac{g_L}{C_m}\left[V(t)-V_L\right]dt - \frac{\eta(t)}{C_m}\,dt + \frac{I(t)}{C_m}\,dt, \qquad V(0)=V_0 \tag{2.1}$$
    $$d\eta(t) = -\frac{\eta(t)-\eta_\infty}{\tau}\,dt + \frac{\sigma}{\tau}\,dW(t), \qquad \eta(0)=\eta_0 \tag{2.2}$$

    where $\eta(t)\,dt$ is the colored noise: in (2.1) it takes the place of the white noise $dW(t)$ that is usual in the stochastic differential equation (SDE) of a LIF model. Specifically, the colored noise stands for a correlated input process, $C_m$ is the membrane capacitance, $g_L$ is the leak conductance, $V_L$ is the resting level potential, $I(t)$ is the input current, $W(t)$ is a standard Brownian motion, $\eta_\infty$ is the resting value of $\eta(t)$, $\tau$ is the correlation time of $\eta(t)$ and $\sigma$ is the intensity of the noise. A spike is emitted when $V(t)$ attains a threshold $V_{th}$. The reset mechanism after a spike is fundamental to realize a neuronal model of the firing activity. However, different mechanisms of reset of the noise term can be used to describe different physiological phenomena. We will consider:

    (a) the case of an endogenous noise, i.e. if $V(t) > V_{th}$ then $V(t^+) \leftarrow V_0$ and $\eta(t^+) \leftarrow \eta_0$;

    (b) the case of an exogenous noise, i.e. if $V(t) > V_{th}$ then $V(t^+) \leftarrow V_0$ and $\eta(t^+) \leftarrow \eta(t)$ (no reset of the noise).

    Case (a): some modeling justifications.

    To better understand the nature of the reset mechanism given in (a), let us consider a conductance-based model that takes into consideration the leak, synaptic and muscarinic K⁺ currents. Following the lines of [21], in particular Equation 1 and Section 2.2.2 for the expression of the muscarinic K⁺ current, we have

    $$dV(t) = -\frac{g_L}{C_m}\left(V(t)-V_L\right)dt - \frac{g_M}{C_m}\,p(t)\left(V(t)-V_K\right)dt + \frac{I_{syn}(t)}{C_m}\,dt \tag{2.3}$$

    where $p(t)$ is a gating variable and $I_{syn}(t)$ is the synaptic current. For the gating variable $p(t)$, we refer to [22, Equation 11], thus we have

    $$dp(t) = F(p(t),V)\,dt + \sigma_p\,dW(t). \tag{2.4}$$

    To obtain a more tractable model, we proceed to linearize the gating variable equation, following the reduction method presented in [20, Section 2.4 and Appendix A]. In particular, from [20, Equation 37] we have

    $$dp(t) \approx -\frac{p(t)-p_\infty}{\tau_p}\,dt + \sigma_p\,dW(t) \tag{2.5}$$

    where $\tau_p$ is the (mean) time constant of $p$ and $p_\infty$ is the (mean) equilibrium state of the gating variable, and then, by approximating $V(t) - V_K \approx \Delta V$ with $\Delta V$ constant, we also have

    $$dV(t) \approx -\frac{g_L}{C_m}\left(V(t)-V_L\right)dt - \frac{g_M\,\Delta V}{C_m}\,p(t)\,dt + \frac{I_{syn}(t)}{C_m}\,dt. \tag{2.6}$$

    Moreover, since we are using a fixed-threshold model instead of an adaptive-threshold one, the gating variable has to be reset together with the membrane potential. In particular, if $V(t) > V_{th}$, then $V(t^+) = V_{reset} = V_0 + \delta V$ and $p(t^+) = p_{reset} = p_\infty + \delta p$, where $\delta V$ and $\delta p$ are small perturbations on the initial values that are due to the low spiking activity of a neuron subject to a slow muscarinic K⁺ current (see [20]). By simple algebraic transformations, one can recognize Equations (2.1) and (2.2) in this model. Indeed, one can set

    $$\eta_\infty = g_M\,\Delta V\,p_\infty, \qquad \eta_0 = g_M\,\Delta V\,p_0, \qquad \tau = \tau_p, \qquad \sigma = g_M\,\sigma_p\,\tau_p\,\Delta V \tag{2.7}$$

    and, finally, we obtain $\eta(t) = g_M\,p(t)\,\Delta V$.

    Specifications of the parameters as functions of the correlation time of the noise.

    In order to specify the above parameters $\eta_\infty, \eta_0, \tau$ in the model (2.1)-(2.2), we need to know $\Delta V$, $p_\infty$ and $\tau_p$. These quantities can be estimated by considering an average $\bar V$ of the process $V$ and using the formulas (see [20, Table 1])

    $$\tau_p(\bar V) = \frac{\tau_{max}}{3.3\,e^{\frac{\bar V+35}{20}} + e^{-\frac{\bar V+35}{20}}} \tag{2.8}$$
    $$p_\infty(\bar V) = \frac{1}{1+e^{-\frac{\bar V+35}{10}}}. \tag{2.9}$$
    Table 1.  Values of parameters for simulations.
    Membrane capacitance: $C_m = 1\,\mu F/cm^2$
    Resting membrane potential: $V_L = -70\,mV$
    Initial membrane potential: $V_0 = -70\,mV$
    Threshold membrane potential: $V_{th} = -50\,mV$
    Muscarinic membrane potential: $V_K = -90\,mV$
    Leak conductance: $g_L = 0.1\,mS/cm^2$
    Muscarinic conductance: $g_M = 0.1\,mS/cm^2$
    Characteristic time of the membrane: $\theta = 10\,ms$
    Maximal characteristic time: $\tau_{max} = 20\,ms$
    Initial current: $I_0 = 3\,nA$


    Typically, one could assume $\bar V \approx -60\,mV$ (mean membrane voltage in vivo) and then obtain $\tau_p \approx 200\,ms$ and $p_\infty \approx 0.08$. However, we will vary these values, studying the model for different values of $\tau_p$ and determining $\bar V$ by inverting formula (2.8) and then $p_\infty$ from formula (2.9).
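As a concrete illustration of this inversion procedure, the following Python sketch (our own; the function names, the bisection bracket and the treatment of $\tau_{max}$ as a free argument are choices made here, not taken from the paper) evaluates (2.8) and (2.9) and recovers $\bar V$ from a target value of $\tau_p$ by bisection on the branch $\bar V < -47\,mV$, where $\tau_p$ is monotonically increasing:

```python
import numpy as np

def tau_p(V, tau_max):
    # Eq. (2.8): voltage-dependent time constant of the gating variable (ms)
    x = V + 35.0
    return tau_max / (3.3 * np.exp(x / 20.0) + np.exp(-x / 20.0))

def p_inf(V):
    # Eq. (2.9): equilibrium value of the gating variable
    return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def invert_tau_p(target, tau_max, lo=-100.0, hi=-47.0):
    # Bisection for the V solving tau_p(V) = target; tau_p is increasing in V
    # on [-100, -47] mV (its maximum lies near V = -47 mV), so the root is
    # unique on this bracket.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tau_p(mid, tau_max) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, `p_inf(-60.0)` gives about 0.076, in agreement with the value $p_\infty \approx 0.08$ quoted above; with the choice $\tau_{max} = 1000\,ms$ (an assumption made here for illustration) one gets $\tau_p(-60) \approx 225\,ms$, of the same order as the $200\,ms$ quoted above.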

    Moreover, depending on possible modeling needs, one could decide to also reset the input current term $I(t)$. Finally, if we assume that the last spike is far away in time from the initial time of the simulation, we can set $\delta p \approx 0$, since $p(t)$ stabilizes near $p_\infty$ before a spike.

    Case (b): some modeling justifications.

    Case (b) is quite different. Indeed, it can be used to describe phenomena in which the synaptic current is itself noisy and correlated. Time-correlated synaptic currents have already been considered in LIF models in [14] and in [1], while they can be used in more complicated models, such as spatial models with depolarization (see [16]). In particular, colored noise is preferred to white noise in the synaptic currents when the synapses exhibit a finite relaxation time (see, for instance, [17], in particular Equations 3.5 and 3.6).

    Another important difference between the two cases (a) and (b) will be the choice of $\eta_\infty$ and $\eta_0$. In the endogenous noise case (a), the values of $\eta_0$ and $\eta_\infty$ have their effects on the dynamics of the model. Following [20], one could choose $\eta_\infty \in \mathbb{R}$, while $\eta_0 = \eta_\infty + \delta\eta$, where $\delta\eta \in \mathbb{R}$ is a perturbation on the resting value $\eta_\infty$. In the exogenous case (b) we do not have any reset mechanism on $\eta$, hence one could consider $\eta_0$ as a free parameter. However, if we want $\eta$ to be a baseline noise, it is convenient for it to be stationary. To obtain the stationarity of $\eta$, one can consider $\eta_\infty = 0$ and then

    $$\eta_0 = \frac{\sigma}{\tau}\int_{-\infty}^{0} e^{\frac{s}{\tau}}\,dW(s)$$

    (cf., for instance, the initial value of Eq. (1.3) in [27]).

    The current term.

    For the input current term I(t), let us consider

    $$I(t) = \begin{cases} I_0\,e^{-\frac{t}{\beta}}, & t \geq 0 \\ 0, & t < 0. \end{cases} \tag{2.10}$$

    This kind of current term mimics the behavior of a synaptic current with exponential decay (see [17, Section 2.2.2], and [24,25]). Exponential currents have also been used to describe synaptic activity in non-linear models such as the Quadratic Integrate-and-Fire model (see [28]) or the Exponential Integrate-and-Fire model (see [29]). However, our current exhibits a zero baseline, justified by the assumption of a zero-mean synaptic input (see Equation 3.1 of [17]). Alternatively, such a current can also be justified by considering a single synaptic input and an exponential synaptic kernel (see Equation 7.4 of [30]). Moreover, this current is deterministic. Indeed, in the endogenous noise case (a) the noisy part of the current is neglected in favor of the endogenous noise of the gating variable, while in the exogenous noise case (b) the noisy part of the current is explicitly written as the coupled process $\eta(t)$, after using the diffusion approximation given in [17, Equation 3.1]. One could also use periodic currents instead of exponential ones (see, for instance, [31,32]). However, we will focus on such currents in future works, together with a deeper analysis of the firing time probability densities and the construction of an adaptive-threshold model (to substitute the reset mechanism) as done in [33].

    Here, at first we consider the current in (2.10) as a synaptic current with $I_0 = 0.5\,nA$, which increases the voltage by a few mV (see the left side of Figures 1 and 2), and $\beta = 5\,ms$ ([17,30]). Then, we also consider the current in (2.10) as an injected current (as done, for instance, in [10]) that can have a greater initial value than that of a single synaptic input. In particular, to perform simulations of firing times, we set its initial value to $I_0 = 3\,nA$, since a strong excitatory input is required to observe a spike in an admissible time. Indeed, consider that in (2.1) the stochastic input $\eta(t)$ can act like an inhibitory current whose modulus can reach values up to $2.5\,nA$ within its 99% confidence interval. Furthermore, in (2.10) we set $\beta = 200\,ms$ in order to have a decay time comparable to the value of the correlation time $\tau$ ($\tau \approx 200\,ms$, according to [20]).

    Figure 1.  Mean (on the left) and variance (on the right) functions of $V$ in the endogenous noise case for $I_0 = 0.5\,nA$, $\beta = 5\,ms$, $\sigma = 1$ and different values of $\tau$ (the other values are in Table 1). To simplify the plots, we set $\delta\eta = 0$. (We do not take into account the reset mechanism.)
    Figure 2.  Mean (on the left) and variance (on the right) functions of $V$ in the exogenous noise case for $I_0 = 0.5\,nA$, $\beta = 5\,ms$, $\sigma = 1$ and different values of $\tau$ (the other values are in Table 1). (We do not take into account the reset mechanism.)

    Regarding the probability density function of firing times, a Gauss-Markov (GM) process can be used to approximate the solution process $V(t)$ (see [2]); hence a Volterra integral equation (see [34]) can be exploited to obtain some approximations of the firing densities.

    Here, we determine the mean and covariance functions of the V(t) processes (obtained by solving Equations (2.1) and (2.2)) in order to investigate the different behaviors of the neuronal potential when it is subject to the two different stimulations of cases (a) and (b). Moreover, by simulation procedures we also provide estimates of firing densities in order to give further useful comparisons.

    Before working with a specific case, let us show what we can say about a generic solution of Eq. (2.1), where $\eta$ is a generic GM process with almost surely continuous paths. We have the following proposition.

    Proposition 1. Let $V$ be a strong solution of (2.1) with $V_0 \in \mathbb{R}$, $\eta$ a GM process with almost surely continuous paths adapted to the filtration generated by $\eta_0$ and the Brownian motion $W$, $I(t)$ a continuous function and $\theta := C_m/g_L$. Then we have

    $$V(t) = V_0\,e^{-\frac{t}{\theta}} + V_L\left[1-e^{-\frac{t}{\theta}}\right] - \frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{s}{\theta}}\,\eta(s)\,ds + \frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{s}{\theta}}\,I(s)\,ds. \tag{3.1}$$

    In particular, we also have

    $$E[V(t)] = V_0\,e^{-\frac{t}{\theta}} + V_L\left[1-e^{-\frac{t}{\theta}}\right] - \frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{s}{\theta}}\,E[\eta(s)]\,ds + \frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{s}{\theta}}\,I(s)\,ds \tag{3.2}$$

    and

    $$\mathrm{Cov}(V(t),V(s)) = \frac{e^{-\frac{t+s}{\theta}}}{C_m^2}\int_{[0,t]\times[0,s]} e^{\frac{u+v}{\theta}}\,\mathrm{Cov}(\eta(u),\eta(v))\,du\,dv. \tag{3.3}$$

    Proof. To obtain (3.1), one has just to observe that, for fixed $\omega \in \Omega$, Eq. (2.1) is a linear non-autonomous and non-homogeneous ordinary differential equation (ODE), hence it is easy to find a solution (which exists almost surely, since $\eta(t)$ has continuous paths for almost every $\omega \in \Omega$, hence $s \mapsto e^{\frac{s}{\theta}}\eta(s)$ is integrable on $[0,t]$).

    For Eq. (3.2), we just need to observe that $t \mapsto V_0\,e^{-\frac{t}{\theta}} + V_L\left[1-e^{-\frac{t}{\theta}}\right]$ is a deterministic function, hence it coincides with its own expectation, while

    $$E\left[\frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{s}{\theta}}\eta(s)\,ds\right] = \frac{e^{-\frac{t}{\theta}}}{C_m}\,E\left[\int_0^t e^{\frac{s}{\theta}}\eta(s)\,ds\right] = \frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{s}{\theta}}\,E[\eta(s)]\,ds$$

    where the second equality follows from Fubini's theorem, since $\eta(s)$ is in $L^2(P)$ for any $s \in [0,t]$, thus $E[|\eta(s)|] < +\infty$ and, being $\eta$ a GM process, $s \mapsto E[\eta(s)]$ is continuous, hence integrable on $[0,t]$.

    Finally, Eq. (3.3) follows from the fact that

    $$\frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{s}{\theta}}\eta(s)\,ds - E\left[\frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{s}{\theta}}\eta(s)\,ds\right] = \frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{s}{\theta}}\left(\eta(s)-E[\eta(s)]\right)ds$$

    hence

    $$\begin{aligned} \mathrm{Cov}(V(t),V(s)) &= E\left[\left(\frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{u}{\theta}}\eta(u)\,du - E\left[\frac{e^{-\frac{t}{\theta}}}{C_m}\int_0^t e^{\frac{u}{\theta}}\eta(u)\,du\right]\right)\right. \\ &\qquad \times \left.\left(\frac{e^{-\frac{s}{\theta}}}{C_m}\int_0^s e^{\frac{v}{\theta}}\eta(v)\,dv - E\left[\frac{e^{-\frac{s}{\theta}}}{C_m}\int_0^s e^{\frac{v}{\theta}}\eta(v)\,dv\right]\right)\right] \\ &= \frac{e^{-\frac{t+s}{\theta}}}{C_m^2}\,E\left[\int_{[0,t]\times[0,s]} e^{\frac{u+v}{\theta}}\left(\eta(u)-E[\eta(u)]\right)\left(\eta(v)-E[\eta(v)]\right)du\,dv\right] \\ &= \frac{e^{-\frac{t+s}{\theta}}}{C_m^2}\int_{[0,t]\times[0,s]} e^{\frac{u+v}{\theta}}\,\mathrm{Cov}(\eta(u),\eta(v))\,du\,dv \end{aligned}$$

    where the last equality follows from Fubini's theorem, since $\eta(u) \in L^2(P)$ for any $u \in [0,\max\{t,s\}]$, hence $\eta(u)\eta(v) \in L^1(P)$ for any $(u,v) \in [0,t]\times[0,s]$, and, being $\eta$ a GM process, $\mathrm{Cov}(\eta(u),\eta(v))$ is continuous and then integrable on $[0,t]\times[0,s]$.

    From this proposition we also have that the solution process V involves the integral of a GM process. Such integrals are widely studied in [18,19].

    Now let us consider the process $\eta$ solution of (2.2). We know that the process $\eta$ can be written on $[0,+\infty)$ as

    $$\eta(t) = \eta_\infty\left(1-e^{-\frac{t}{\tau}}\right) + e^{-\frac{t}{\tau}}\left[\eta_0 + \frac{\sigma}{\tau}\int_0^t e^{\frac{s}{\tau}}\,dW(s)\right] \tag{3.4}$$

    and then

    $$m_\eta(t) := E[\eta(t)] = \eta_\infty\left(1-e^{-\frac{t}{\tau}}\right) + e^{-\frac{t}{\tau}}\,E[\eta_0].$$

    Hence we know that $\lim_{t\to+\infty} m_\eta(t) = \eta_\infty$, independently of the choice of $\eta_0$. In the endogenous noise case (a), we have $\eta_0 = \eta_\infty + \delta\eta$, so

    $$\eta(t) = \eta_\infty + e^{-\frac{t}{\tau}}\left[\delta\eta + \frac{\sigma}{\tau}\int_0^t e^{\frac{s}{\tau}}\,dW(s)\right]$$

    and

    $$E[\eta(t)] = \eta_\infty + e^{-\frac{t}{\tau}}\,\delta\eta. \tag{3.5}$$

    In the exogenous noise case (b), we set $\eta_\infty = 0$ and $\eta_0 = \frac{\sigma}{\tau}\int_{-\infty}^0 e^{\frac{s}{\tau}}\,dW(s)$, which is a zero-mean Gaussian random variable with variance $\frac{\sigma^2}{2\tau}$; this guarantees the stationarity. In particular, in this case we have

    $$\eta(t) = e^{-\frac{t}{\tau}}\left[\eta_0 + \frac{\sigma}{\tau}\int_0^t e^{\frac{s}{\tau}}\,dW(s)\right] = e^{-\frac{t}{\tau}}\,\frac{\sigma}{\tau}\int_{-\infty}^t e^{\frac{s}{\tau}}\,dW(s)$$

    and $E[\eta(t)] = 0$. From now on let us denote $m_\eta(t) = E[\eta(t)]$ and $\mathrm{Cov}(\eta(t),\eta(s)) = c_\eta(t,s)$. Moreover, since $\eta$ is a GM process, we can define the covariance factors $h_1$ and $h_2$ such that, for $s < t$,

    $$c_\eta(t,s) = h_1(s)\,h_2(t)$$

    and the ratio given by

    $$r(t) = \frac{h_1(t)}{h_2(t)}.$$

    In the exogenous case, it will be useful to define the function $\rho(t) = r(t) - r(0)$, since $r(0) \neq 0$ in that case.

    Let us consider the endogenous noise case (a). Writing Eq. (2.1) in its integral form, we have

    $$V(t) = V_0 - \int_0^t \frac{g_L}{C_m}\left[V(s)-V_L\right]ds - \int_0^t \frac{\eta(s)}{C_m}\,ds + \int_0^t \frac{I(s)}{C_m}\,ds. \tag{4.1}$$

    In the endogenous noise case, the covariance function of η is given by

    $$c_\eta(s,t) = \frac{\sigma^2}{2\tau}\,e^{-\frac{t}{\tau}}\left[e^{\frac{s}{\tau}} - e^{-\frac{s}{\tau}}\right], \qquad 0 \leq s \leq t. \tag{4.2}$$

    Hence we can choose the covariance factors h1 and h2 to be

    $$h_1(t) = \frac{\sigma}{2}\left(e^{\frac{t}{\tau}} - e^{-\frac{t}{\tau}}\right), \qquad h_2(t) = \frac{\sigma}{\tau}\,e^{-\frac{t}{\tau}}$$

    to obtain a ratio r given by

    $$r(t) = \frac{\tau}{2}\left(e^{\frac{2t}{\tau}} - 1\right).$$
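As a quick numerical sanity check (an illustrative Python fragment of ours, not part of the original derivation), one can verify on a grid that these factors reproduce the covariance (4.2) and the ratio $r(t)$:

```python
import numpy as np

# Assumed illustrative values of the noise intensity and correlation time
sigma, tau = 1.0, 100.0

def c_eta(s, t):
    # Endogenous (non-stationary OU) covariance, Eq. (4.2), for 0 <= s <= t
    return sigma**2 / (2.0 * tau) * np.exp(-t / tau) * (np.exp(s / tau) - np.exp(-s / tau))

def h1(t):
    return sigma / 2.0 * (np.exp(t / tau) - np.exp(-t / tau))

def h2(t):
    return sigma / tau * np.exp(-t / tau)

def r(t):
    # Ratio of the covariance factors; should equal (tau/2)(e^{2t/tau} - 1)
    return h1(t) / h2(t)
```

The factorization $c_\eta(t,s) = h_1(s)h_2(t)$ holds exactly (up to floating-point round-off) for all $s \le t$ on any grid.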

    By using Doob's formula (see [35])

    $$\eta(t) = m_\eta(t) + h_2(t)\,\tilde W(r(t)) \tag{4.3}$$

    for some Brownian motion $\tilde W$ (despite being equal in law, it could be different from the process $W$), we can rewrite Eq. (4.1) as

    $$V(t) = V_0 - \frac{g_L}{C_m}\int_0^t\left[V(s)-V_L\right]ds + \frac{1}{C_m}\int_0^t I(s)\,ds - \frac{1}{C_m}\left[\int_0^t m_\eta(s)\,ds + \int_0^t h_2(s)\,\tilde W(r(s))\,ds\right].$$

    Moreover, recall that in this case the process η has to be reset after a spike.

    Note that if in Eq. (4.1) $g_L = 0$ and $I(t) \equiv 0$ for $t > 0$, the process $V(t)$ is the time-integral of a non-stationary Ornstein-Uhlenbeck process. Specifically, in [19] a detailed analysis of a similar process (in terms of integrals of GM processes) was carried out. Those results can also be used for the above model when $g_L \neq 0$ and the function $I(t) \not\equiv 0$ for $t > 0$.

    Here, by using Proposition 1, we obtain the mean and covariance functions for the special model considered in Eqs. (2.1)-(2.2) with the current I(t) as in Eq. (2.10).

    Corollary 1. The stochastic process $V(t)$ of the model based on Eqs. (2.1)-(2.2) with $V_0$ a real number, $\eta_0 = \eta_\infty + \delta\eta$ and $I(t)$ as in Eq. (2.10) has the following mean function (for $\beta, \tau \neq \theta$)

    $$E[V(t)] = V_0\,e^{-\frac{t}{\theta}} + V_L\left(1-e^{-\frac{t}{\theta}}\right) - \frac{\theta\,\eta_\infty}{C_m}\left(1-e^{-\frac{t}{\theta}}\right) - \frac{\tau\theta}{C_m\,\alpha_2}\,\delta\eta\left(e^{-\frac{t}{\tau}} - e^{-\frac{t}{\theta}}\right) + \frac{\beta\theta}{C_m\,\alpha_1}\,I_0\left(e^{-\frac{t}{\beta}} - e^{-\frac{t}{\theta}}\right)$$

    and covariance function, for $t \geq s$,

    $$\mathrm{Cov}[V(t),V(s)] = \frac{\sigma^2\theta^2}{C_m^2\,\alpha_2}\left[-\frac{\theta}{2\alpha_3}\,e^{-\frac{t-s}{\theta}} + \frac{\tau\theta}{\alpha_2\alpha_3}\,e^{-\frac{t}{\theta}-\frac{s}{\tau}} - \frac{\theta}{2\alpha_2}\,e^{-\frac{t+s}{\theta}} + \frac{\tau}{2\alpha_3}\,e^{-\frac{t-s}{\tau}} + \frac{\tau\theta}{\alpha_2\alpha_3}\,e^{-\frac{s}{\theta}-\frac{t}{\tau}} - \frac{\tau}{2\alpha_2}\,e^{-\frac{t+s}{\tau}}\right] \tag{4.4}$$

    where we set $\theta = C_m/g_L$, $\alpha_1 = \beta - \theta$, $\alpha_2 = \tau - \theta$ and $\alpha_3 = \tau + \theta$.

    Proof. These two expressions follow respectively from (3.2) and (3.3) with $m_\eta$ and $c_\eta$ given in (3.5) and (4.2) and $I$ given in (2.10), after some straightforward but cumbersome calculations.

    From the expression of the covariance in (4.4), we can extract the variance function

    $$D[V(t)] = \frac{\sigma^2\theta^2}{C_m^2\,\alpha_2}\left[\frac{\alpha_2}{2\alpha_3} + \frac{2\tau\theta}{\alpha_2\alpha_3}\,e^{-\frac{\alpha_3}{\tau\theta}t} - \frac{\theta}{2\alpha_2}\,e^{-\frac{2t}{\theta}} - \frac{\tau}{2\alpha_2}\,e^{-\frac{2t}{\tau}}\right]$$

    where $\theta, \alpha_2, \alpha_3$ are defined in Corollary 1.

    Plots of the mean and the variance functions are shown in Figure 1. Here we can see that the mean function admits the horizontal asymptote $\lim_{t\to+\infty} E[V(t)] = V_L - \eta_\infty\theta/C_m$ (see Figure 1, left). Note that the initial excitatory effect of the current $I(t)$ is tempered by the inhibitory one of the endogenous noise $\eta(t)$, the latter being definitely visible for large times. For the variance function, we have $\lim_{t\to+\infty} D[V(t)] = \frac{\sigma^2\theta^2}{2C_m^2(\tau+\theta)}$, hence its horizontal asymptote decreases as $\tau$ increases, as we can see in Figure 1 (right). Indeed, as $\tau$ increases, the process has a lower variance, but the covariance decreases more slowly, as we can see in Figure 3.

    Figure 3.  Contour map of the covariance function of $V$ in the endogenous case for $\sigma = 1$ and different values of $\tau$: 100 ms on the left, 150 ms in the middle and 200 ms on the right (the other values are in Table 1).
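Since the closed-form expressions of Corollary 1 follow from cumbersome calculations, a numerical cross-check against the integral representations of Proposition 1 is reassuring. The Python sketch below is our own (the values $\sigma = 1$, $\tau = 100\,ms$, $\eta_\infty = 0.5$, $\delta\eta = 0.1$ are illustrative assumptions, not taken from the paper): it compares the mean (3.2) and the variance obtained from (3.3) with the endogenous covariance (4.2), both computed by trapezoidal quadrature, against the formulas above.

```python
import numpy as np

# Cm, gL, V0, VL, I0, beta follow Table 1; sigma, tau, eta_inf, d_eta are
# assumed values for this check only.
Cm, gL, V0, VL, I0, beta = 1.0, 0.1, -70.0, -70.0, 3.0, 200.0
sigma, tau, eta_inf, d_eta = 1.0, 100.0, 0.5, 0.1
theta = Cm / gL
a2, a3 = tau - theta, tau + theta

def trap(y, x):
    # Composite trapezoidal rule along the last axis
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

def mean_formula(t):
    # Mean of V, Corollary 1
    return (V0 * np.exp(-t / theta) + VL * (1.0 - np.exp(-t / theta))
            - theta * eta_inf / Cm * (1.0 - np.exp(-t / theta))
            - tau * theta / (Cm * a2) * d_eta * (np.exp(-t / tau) - np.exp(-t / theta))
            + beta * theta / (Cm * (beta - theta)) * I0 * (np.exp(-t / beta) - np.exp(-t / theta)))

def var_formula(t):
    # Variance of V in the endogenous case
    return sigma**2 * theta**2 / (Cm**2 * a2) * (a2 / (2.0 * a3)
            + 2.0 * tau * theta / (a2 * a3) * np.exp(-a3 / (tau * theta) * t)
            - theta / (2.0 * a2) * np.exp(-2.0 * t / theta)
            - tau / (2.0 * a2) * np.exp(-2.0 * t / tau))

def mean_numeric(t, n=20001):
    # Eq. (3.2) with m_eta(s) = eta_inf + d_eta e^{-s/tau} and I(s) = I0 e^{-s/beta}
    s = np.linspace(0.0, t, n)
    w = np.exp((s - t) / theta)          # e^{-t/theta} e^{s/theta}, combined for stability
    m_eta = eta_inf + d_eta * np.exp(-s / tau)
    return (V0 * np.exp(-t / theta) + VL * (1.0 - np.exp(-t / theta))
            - trap(w * m_eta, s) / Cm + trap(w * I0 * np.exp(-s / beta), s) / Cm)

def var_numeric(t, n=1201):
    # Eq. (3.3) with the endogenous covariance (4.2), written via |u - v|
    u = np.linspace(0.0, t, n)
    U, W = np.meshgrid(u, u, indexing="ij")
    c = sigma**2 / (2.0 * tau) * (np.exp(-np.abs(U - W) / tau) - np.exp(-(U + W) / tau))
    kern = np.exp((U - t) / theta) * np.exp((W - t) / theta)
    return trap(trap(kern * c, u), u) / Cm**2
```

With these grid sizes, the quadrature and the closed forms should agree to well within 1% at moderate times, which is a useful guard against sign or coefficient slips in the reconstruction.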

    Also, simulations can be performed. Approximations of the First Passage Time (FPT) probability density function (pdf) are obtained in Section 6. Indeed, the FPT pdf is useful to model the firing density of the neuron.

    In the exogenous noise case (b), the covariance function of η is given by

    $$c_\eta(s,t) = \frac{\sigma^2}{2\tau}\,e^{-\frac{|t-s|}{\tau}}. \tag{5.1}$$

    As we specified before, in this case the process $\eta$ is stationary and, as we can see from Eq. (5.1), the function $c_\eta(s,t)$ depends only on $|t-s|$. Moreover, the covariance factors can be chosen as

    $$h_1(t) = \frac{\sigma}{2}\,e^{\frac{t}{\tau}}, \qquad h_2(t) = \frac{\sigma}{\tau}\,e^{-\frac{t}{\tau}},$$

    while the ratio r and the function ρ(t):=r(t)r(0) are given by

    $$r(t) = \frac{h_1(t)}{h_2(t)} = \frac{\tau}{2}\,e^{\frac{2t}{\tau}}, \qquad \rho(t) = r(t) - r(0) = \frac{\tau}{2}\left(e^{\frac{2t}{\tau}} - 1\right).$$

    Hence, by using the representation of $\eta(t)$ given in Eq. (4.3) and the fact that $\tilde W(r(t)) \stackrel{d}{=} \tilde W(\rho(t)) + \tilde W(r(0))$, we obtain from (4.1):

    $$V(t) \stackrel{d}{=} V_0 - \frac{g_L}{C_m}\int_0^t\left[V(s)-V_L\right]ds + \frac{1}{C_m}\int_0^t I(s)\,ds - \frac{1}{C_m}\left[\int_0^t m_\eta(s)\,ds + \int_0^t h_2(s)\,\tilde W(\rho(s))\,ds + \tilde W(r(0))\int_0^t h_2(s)\,ds\right]$$

    where $\tilde W(r(0))$ is not almost surely $0$ and it is correlated with $\int_0^t h_2(s)\,\tilde W(\rho(s))\,ds$. Let us also recall that this time, even if we use a different Brownian motion $\tilde W$, we only obtain equality in law.

    Here, by still using Proposition 1, we can also obtain the mean and the covariance function of V.

    Corollary 2. The stochastic process $V(t)$ of the model based on Eqs. (2.1)-(2.2) with $V_0$ a real number, $\eta_0 = \frac{\sigma}{\tau}\int_{-\infty}^0 e^{\frac{s}{\tau}}\,dW(s)$, $\eta_\infty = 0$ and $I(t)$ as in Equation (2.10) has the following mean function (for $\beta, \tau \neq \theta$)

    $$E[V(t)] = V_0\,e^{-\frac{t}{\theta}} + V_L\left(1-e^{-\frac{t}{\theta}}\right) + \frac{\beta\theta}{\alpha_1 C_m}\,I_0\left(e^{-\frac{t}{\beta}} - e^{-\frac{t}{\theta}}\right) \tag{5.2}$$

    and covariance function, for $t \geq s$,

    $$\mathrm{Cov}[V(t),V(s)] = \frac{\sigma^2\theta^2}{2C_m^2\,\alpha_2}\left[-\frac{\theta}{\alpha_3}\,e^{-\frac{t-s}{\theta}} - \frac{\tau}{\alpha_3}\,e^{-\frac{t}{\theta}-\frac{s}{\tau}} - \frac{\tau}{\alpha_3}\,e^{-\frac{s}{\theta}-\frac{t}{\tau}} + \frac{\tau}{\alpha_3}\,e^{-\frac{t-s}{\tau}} + e^{-\frac{t+s}{\theta}}\right] \tag{5.3}$$

    where we set $\theta = C_m/g_L$, $\alpha_1 = \beta - \theta$, $\alpha_2 = \tau - \theta$ and $\alpha_3 = \tau + \theta$.

    Proof. To obtain Eq. (5.2), we just have to use (3.2), recalling that $I$ is given in Eq. (2.10) and $E[\eta(t)] = 0$ for any $t \geq 0$.

    For Eq. (5.3), one just has to use (3.3), where $c_\eta$ is given in (5.1).

    From the expression of the covariance in (5.3), we can extract the variance function

    $$D[V(t)] = \frac{\sigma^2\theta^2}{2C_m^2\,\alpha_2}\left[\frac{\alpha_2}{\alpha_3} - \frac{2\tau}{\alpha_3}\,e^{-\frac{\alpha_3}{\tau\theta}t} + e^{-\frac{2t}{\theta}}\right]$$

    where $\theta, \alpha_2, \alpha_3$ are defined in Corollary 2.

    It is important to observe that, since $E[\eta(t)] = 0$, the mean function of $V$ does not depend on $\tau$. Moreover, it admits a horizontal asymptote given by $\lim_{t\to+\infty} E[V(t)] = V_L$. A plot of the mean function is given in Figure 2 (left). Moreover, since also in this case $\lim_{t\to+\infty} D[V(t)] = \frac{\sigma^2\theta^2}{2C_m^2(\tau+\theta)}$, the horizontal asymptote of the variance function decreases as $\tau$ increases, as can be seen in Figure 2 (right). From Figure 4, we can conclude that even in this case, while the variance is lower, the covariance decreases more slowly as $\tau$ increases.

    Figure 4.  Contour map of the covariance function of $V$ in the exogenous case for $\sigma = 1$ and different values of $\tau$: 100 ms on the left, 150 ms in the middle and 200 ms on the right (the other values are in Table 1).
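An analogous cross-check can be run for the exogenous case (again our own illustrative Python, with assumed values $\sigma = 1$ and $\tau = 100\,ms$): the variance obtained by inserting the stationary covariance (5.1) into the double integral (3.3) is compared with the closed-form variance above and with its asymptote $\frac{\sigma^2\theta^2}{2C_m^2(\tau+\theta)}$.

```python
import numpy as np

Cm, gL, sigma, tau = 1.0, 0.1, 1.0, 100.0   # sigma, tau assumed for this check
theta = Cm / gL
a2, a3 = tau - theta, tau + theta

def var_formula(t):
    # Variance of V in the exogenous case (from Corollary 2)
    return sigma**2 * theta**2 / (2.0 * Cm**2 * a2) * (
        a2 / a3 - 2.0 * tau / a3 * np.exp(-a3 / (tau * theta) * t) + np.exp(-2.0 * t / theta))

def var_numeric(t, n=1201):
    # Eq. (3.3) with the stationary covariance (5.1), trapezoidal quadrature
    u = np.linspace(0.0, t, n)
    U, W = np.meshgrid(u, u, indexing="ij")
    c = sigma**2 / (2.0 * tau) * np.exp(-np.abs(U - W) / tau)
    f = np.exp((U - t) / theta) * np.exp((W - t) / theta) * c
    inner = np.sum(0.5 * (f[:, 1:] + f[:, :-1]) * np.diff(u), axis=1)
    return np.sum(0.5 * (inner[1:] + inner[:-1]) * np.diff(u)) / Cm**2
```

For large $t$, `var_formula` reduces exactly to the asymptotic value, since both exponential terms vanish; this mirrors the limit stated in the text.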

    Comparing the plots in Figures 1 and 2, it is possible to see how the values of $\tau$ affect the behaviour of the variance functions in both noise cases. Furthermore, in Figures 3 and 4, the effects of varying $\tau$ on the surfaces of the covariance functions are also highlighted.

    Suppose we want to simulate the process V(t) which is part of the solution of the equations

    $$\begin{cases} dV(t) = -\dfrac{g_L}{C_m}\left[V(t)-V_L\right]dt - \dfrac{\eta(t)}{C_m}\,dt + \dfrac{I(t)}{C_m}\,dt, & V(0)=V_0 \\[2mm] d\eta(t) = -\dfrac{\eta(t)-\eta_\infty}{\tau}\,dt + \dfrac{\sigma}{\tau}\,dW(t), & \eta(0)=\eta_0 \end{cases}$$

    where $I(t)$ is a sufficiently regular function (in our case $I \in C^\infty$) and $\eta(t)$ is an Ornstein-Uhlenbeck process, hence its paths are $\beta$-Hölder continuous almost surely for any $\beta < \frac{1}{2}$.

    Specifically, the noise $\eta(t)$ is defined on some filtered probability space $(\Omega,\mathcal{F},\mathcal{F}_t,P)$, hence for $\omega \in \Omega$, $V(\cdot,\omega)$ is the realization of the process $V$ related to the random event $\omega$. If we fix $\omega \in \Omega$ (hence if we consider a single realization of $V$), the equation

    $$dV(t,\omega) = -\frac{g_L}{C_m}\left[V(t,\omega)-V_L\right]dt - \frac{\eta(t,\omega)}{C_m}\,dt + \frac{I(t)}{C_m}\,dt$$

    is a linear non-homogeneous and non-autonomous Ordinary Differential Equation (ODE) that could be written in the form

    $$\dot V(t,\omega) = -\frac{g_L}{C_m}\left[V(t,\omega)-V_L\right] - \frac{\eta(t,\omega)}{C_m} + \frac{I(t)}{C_m}.$$

    Let us then fix

    $$f(t,x,\omega) = -\frac{g_L}{C_m}\left[x-V_L\right] - \frac{\eta(t,\omega)}{C_m} + \frac{I(t)}{C_m}$$

    to write the ODE as

    $$\dot V(t,\omega) = f(t,V(t,\omega),\omega).$$

    However, $f(t,x,\omega)$ is only $\beta$-Hölder continuous in $t$ (except for a null set with respect to $\omega$) for $\beta < \frac{1}{2}$, hence the classical Euler scheme may not converge. Indeed, to ensure convergence of a classical numerical scheme, one needs some regularity assumptions on $f$, not only with respect to $x$ (which are needed to ensure the existence of the solution), but also with respect to $t$. In this case, the $\beta$-Hölder continuity of $f$ may not be enough to guarantee the convergence of a classical Euler scheme, so we decided to adopt a Monte Carlo based scheme. Indeed, since $f$ is a Carathéodory function, we can still use a random Euler scheme (as done in [26]). However, to use this scheme we should know exactly the values of $\eta(t,\omega)$. Indeed, the random Euler scheme is based on a Monte Carlo quadrature algorithm (see [26] for details), in which the integrand function has to be evaluated at uniformly distributed random times inside the integration domain. Thus, since we also need to simulate $\eta$ numerically (with an Euler scheme, as exploited in [36]), we have to consider approximately uniformly distributed random times. The idea of the algorithm is the following: we first simulate $\eta(t)$ with a classical Euler scheme on a finer lattice; then we simulate $V(t)$ by using the random Euler scheme given in [26], where the random times for the Monte Carlo quadrature procedure are chosen on the simulation lattice of $\eta(t)$.

    The simulation scheme.

    Hence, to simulate the process, let us fix three natural numbers δ, ε, m with δ < ε and define $\Delta t = 10^{-\delta}$ and $\Delta s = 10^{-\varepsilon}$ (in particular, Δs is a finer time step than Δt, with $\Delta t/\Delta s \in \mathbb{N}$, while the choice of m determines the precision of the Monte-Carlo quadrature scheme). Moreover, let us denote $V_n = V(n\Delta t)$ and $\eta_l = \eta(l\Delta s)$.

    Step 1 Initialize $V_0$ and $\eta_0$;

    Step 2 Suppose we have simulated $V_n$ and $\eta_l$ such that $n\Delta t = l\Delta s$. We want to simulate $V_{n+1}$;

    Step 3 Simulate $\eta_h$ until $h\Delta s = (n+1)\Delta t$: to do that, suppose we have already simulated $\eta_{h-1}$ and simulate $\xi \sim N(0,1)$, then set

    $$\eta_h = \eta_{h-1} - \frac{\Delta s}{\tau}\left(\eta_{h-1} - \bar\eta\right) + \frac{\sigma}{\tau}\sqrt{\Delta s}\,\xi;$$

    Step 4 Simulate a vector $U = (U_1, \dots, U_m)$ of i.i.d. uniform random variables on $[0,\Delta t]$;

    Step 5 Consider the values $\tilde U_i$ obtained from $U_i$ by rounding up at the ε-th decimal digit;

    Step 6 Simulate the value $V_{n+1}$ as

    $$V_{n+1} = V_n - \frac{g_L}{C_m}\left[V_n - V_L\right]\Delta t + \frac{\Delta t}{m\,C_m}\sum_{h=1}^{m}\left(I(l(h)\Delta s) - \eta_{l(h)}\right)$$

    where l(h) is defined in such a way that $l(h)\Delta s = n\Delta t + \tilde U_h$ for $h = 1, \dots, m$;

    Step 7 Return to Step 2.

    In particular, observe that the quantity $l(h) \in \mathbb{N}$ is well-defined. Indeed, since $\tilde U_h$ is rounded up at the ε-th decimal digit, it can be written as $\tilde U_h = u_h\,10^{-\varepsilon}$ with $u_h \in \mathbb{N}$. Moreover, $\Delta t = 10^{-\delta} = 10^{-\varepsilon}\,10^{\varepsilon-\delta}$ with $\varepsilon - \delta > 0$, and $\Delta s = 10^{-\varepsilon}$; finally, $l(h) = n\,10^{\varepsilon-\delta} + u_h$.
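    For instance, with δ = 1 and ε = 3 (the values used in Table 2), the index arithmetic can be checked numerically as follows; the coarse index n and the draw U are arbitrary illustrative values, not quantities from the paper:

```python
import math

delta, eps = 1, 3
dt, ds = 10.0 ** -delta, 10.0 ** -eps          # dt = 0.1, ds = 0.001
n, U = 7, 0.0314159                            # a coarse step index and a draw in [0, dt]
u = math.ceil(U / ds)                          # round U up at the eps-th decimal digit
U_tilde = u * ds                               # tilde-U_h = u_h * 10**-eps
l = n * 10 ** (eps - delta) + u                # l(h) = n * 10**(eps - delta) + u_h
assert math.isclose(l * ds, n * dt + U_tilde)  # l(h) * ds = n * dt + tilde-U_h
```

so the rounded random times always land exactly on the fine simulation lattice of η.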

    Finally, let us remark that the reset mechanism can be easily implemented by checking whether the simulated process $V_n$ exceeds the threshold $V_{th}$.
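    The seven steps above can be sketched in Python as follows. This is a minimal sketch: the parameter values (gL, Cm, VL, tau, sigma, the constant input I) are placeholders rather than the values of Table 1, the OU step for η uses drift $-(\eta-\bar\eta)/\tau$ and diffusion σ/τ as in the rest of the text, and the reset mechanism is omitted:

```python
import numpy as np

def simulate_V(T, delta=1, eps=3, m=10, gL=0.1, Cm=1.0, VL=-70.0,
               eta_bar=0.0, tau=150.0, sigma=15.0, I=lambda t: 1.0,
               V0=-70.0, eta0=0.0, rng=None):
    """Random Euler scheme for V on the coarse lattice dt = 10**-delta,
    with eta simulated by a classical Euler scheme on the finer lattice
    ds = 10**-eps (Steps 1-7 of the text)."""
    rng = np.random.default_rng() if rng is None else rng
    dt, ds = 10.0 ** -delta, 10.0 ** -eps
    ratio = 10 ** (eps - delta)           # dt/ds, an integer since delta < eps
    n_steps = int(round(T / dt))
    # Step 3 (performed once up front): OU path eta_l on the fine lattice
    eta = np.empty(n_steps * ratio + 1)
    eta[0] = eta0
    for l in range(n_steps * ratio):
        xi = rng.standard_normal()
        eta[l + 1] = (eta[l] - ds / tau * (eta[l] - eta_bar)
                      + sigma / tau * np.sqrt(ds) * xi)
    V = np.empty(n_steps + 1)
    V[0] = V0                              # Step 1
    for n in range(n_steps):               # Steps 2 and 7
        U = rng.uniform(0.0, dt, size=m)   # Step 4
        u = np.ceil(U / ds).astype(int)    # Step 5: round up at the eps-th digit
        l_h = n * ratio + u                # l(h) = n * 10**(eps - delta) + u_h
        # Step 6: Monte-Carlo quadrature of (I - eta) on the fine lattice
        quad = np.mean(np.array([I(l * ds) for l in l_h]) - eta[l_h])
        V[n + 1] = V[n] - gL / Cm * (V[n] - VL) * dt + dt / Cm * quad
    return V, eta
```

As a sanity check, with σ = 0 and a constant input I the scheme is deterministic and V relaxes towards $V_L + I/g_L$.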

    Some simulation results

    As an example, we produced some sample paths of V using this algorithm in the endogenous noise case, without considering the reset mechanism. These paths are shown in Figure 5, together with their mean function and a confidence interval determined from the variance function.

    Figure 5.  Simulated sample paths of V in the endogenous noise case for β = 200 ms, σ/τ = 0.1 and different values of τ: 150 ms on the left and 200 ms on the right (the other values are in Table 1). The mean function is plotted as a blue line, while the dashed lines represent a 99% confidence interval. Since we want to simulate only the process, without focusing on the reset condition, we set δη = 0.

    To give an estimate of the error, let us denote by $\bar V_n$ the sample mean of a certain number of simulated processes at time $t_n$ and by $D_n$ their sample variance. We measure the error in terms of the maximum relative errors, that is to say the quantities

    $$E_M = \max_{n\ge 1}\frac{\left|\bar V_n - E[V(t_n)]\right|}{\left|E[V(t_n)]\right|}, \qquad E_D = \max_{n\ge 1}\frac{\left|D_n - D[V(t_n)]\right|}{D[V(t_n)]}.$$

    In particular we will compare our simulation algorithm with the classical Euler scheme, which is given by

    $$V_{n+1} = V_n - \frac{g_L}{C_m}(V_n - V_L)\Delta t - \frac{\eta_n}{C_m}\Delta t + \frac{I(t_n)}{C_m}\Delta t; \qquad \eta_{n+1} = \eta_n - \frac{\eta_n - \bar\eta}{\tau}\Delta t + \frac{\sigma}{\tau}\,\xi_n\sqrt{\Delta t}$$

    where $\xi_n$ is a standard normal random variable. It is interesting to note that, since the first step $V_1$ of the Euler method is given by

    $$V_1 = V_0 - \frac{g_L}{C_m}(V_0 - V_L)\Delta t - \frac{\eta_0}{C_m}\Delta t + \frac{I(t_0)}{C_m}\Delta t$$

    where $V_0$ and $\eta_0$ are fixed real numbers, $V_1$ is always deterministic; thus $D_1 = 0$ and we have $E_D \simeq 1$. As we can see from Table 2, the maximum relative error with respect to the mean, $E_M$, remains stable at about 0.1% for our simulation algorithm, while it is about 1% for the classical Euler scheme. Moreover, it is interesting to see that even for m = 1 the maximum relative error with respect to the variance, $E_D$, is lower than that of the Euler scheme. Such error decreases as m increases up to $m \simeq 50$, reaching $E_D \simeq 1.6\%$.
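    For comparison, the classical Euler scheme above and the error indices $E_M$, $E_D$ can be sketched as follows. The parameter values are illustrative placeholders (not those of Table 1), and `mean_exact`, `var_exact` stand for the theoretical mean and variance functions of V evaluated on the time grid, which are derived elsewhere in the paper:

```python
import numpy as np

def euler_paths(n_paths, T, dt, gL=0.1, Cm=1.0, VL=-70.0, eta_bar=0.0,
                tau=150.0, sigma=15.0, I=lambda t: 1.0,
                V0=-70.0, eta0=0.0, rng=None):
    """Classical Euler scheme for the coupled pair (V_n, eta_n)."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(round(T / dt))
    V = np.full((n_paths, n + 1), float(V0))
    eta = np.full((n_paths, n + 1), float(eta0))
    for k in range(n):
        xi = rng.standard_normal(n_paths)
        V[:, k + 1] = (V[:, k] - gL / Cm * (V[:, k] - VL) * dt
                       - eta[:, k] / Cm * dt + I(k * dt) / Cm * dt)
        eta[:, k + 1] = (eta[:, k] - (eta[:, k] - eta_bar) / tau * dt
                         + sigma / tau * xi * np.sqrt(dt))
    return V

def max_relative_errors(V, mean_exact, var_exact):
    """E_M and E_D: maximum relative errors of the sample mean and sample
    variance over the time grid (t_0 excluded, where both errors vanish
    or are undefined)."""
    Vbar = V.mean(axis=0)
    D = V.var(axis=0, ddof=1)
    EM = np.max(np.abs(Vbar[1:] - mean_exact[1:]) / np.abs(mean_exact[1:]))
    ED = np.max(np.abs(D[1:] - var_exact[1:]) / var_exact[1:])
    return EM, ED
```

Note that `V[:, 1]` is identical across paths ($V_1$ is deterministic), which is why the sample variance at $t_1$ is zero and the Euler row of Table 2 has $E_D \simeq 1$.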

    Table 2.  Maximum relative errors for δ = 1, ε = 3 and different values of m, determined on 10000 simulated trajectories. For the classical Euler scheme we set Δt = 0.1 ms. Moreover, β = 200 ms and σ/τ = 0.1, while the other values are in Table 1.

    m        E_M             E_D
    1        0.001107844     0.499877
    5        0.001048744     0.1214291
    10       0.00126541      0.06886903
    50       0.0009804761    0.0162239
    100      0.0009565757    0.0274757
    Euler    0.01117287      1


    Such a simulation algorithm can be useful to estimate the FPT of V. Indeed, in Figure 6 we show some simulated pdfs of these FPTs for three different values of τ, and we can see how different values of τ produce different pdfs. In particular, as τ increases, the mode of the FPT shifts to the right. However, a more careful and appropriate analysis of the FPT densities has to be carried out to study how τ affects these functions.

    Figure 6.  Simulated probability density function of the first passage time of V in the endogenous noise case for β=200ms, σ/τ=0.1 and different values of τ (the other values are in Table 1).

    We have considered a modification of the LIF model with a colored noise which could represent different neuronal mechanisms (such as gating variables or correlated noise for an injected current). This model is described in Section 2 and consists of two coupled equations: a linear SDE and a linear ODE with an irregular term. Moreover, in that section we described the different reset mechanisms for the modelling process. In Section 3 we study some general properties of the solutions of these coupled equations. In particular, we distinguish between an endogenous noise (Section 4) and an exogenous noise (Section 5), and we determine the mean, covariance and variance functions of the process V(t) in these cases, obtained by adopting a non-stationary and a stationary Ornstein-Uhlenbeck process, respectively. Finally, in Section 6 we provide a simulation algorithm for the process, in which we had to take into account the irregularity of the terms of the ODE. This simulation algorithm can be used to obtain numerical approximations of the FPT of V(t) through a threshold. This last simulation also validates the usability of the model. Indeed, the parameters described in Table 1 and β are intrinsic to the problem we are studying, while σ and τ should be estimable parameters (σ is linked to the variability of the process, hence it can be estimated from the spread of the pdf of the FPT). The simulations of the FPT cast light on the effect of τ: as τ grows, the mode of the pdf of the FPT shifts to the right. Hence τ is an observable parameter and it could be estimated from the observed FPTs.

    This work is partially supported by GNCS. We acknowledge the constructive criticism of anonymous reviewers on an earlier version of this paper.

    The authors declare there is no conflict of interest.

    Another interesting choice for the input current is given by

    $$I(t) = \frac{e^{-t/\tau}}{\tau}\int_0^t e^{s/\tau}\,S(s)\,ds \qquad (A.1)$$

    for some stimulus $s \mapsto S(s)$. In this case, supposing $\bar\eta = \eta_0 = 0$, we could consider the process $\xi(t) = I(t) - \eta(t)$, which solves the equation

    $$d\xi(t) = \frac{S(t)-\xi(t)}{\tau}\,dt + \frac{\sigma}{\tau}\,dW(t), \qquad \xi(0) = I(0) - \eta(0).$$

    This kind of stimulus has been considered in [23] to describe the case in which the input current received by the neuron has already been codified by other neurons in the network. In particular, we have the following equivalence.

    Proposition 2. If I(t) is a $C^1$ function, then the model given by Equations (2.1)-(2.2) is equivalent to the one given by

    $$dV(t) = -\frac{g_L}{C_m}\left[V(t)-V_L\right]dt + \frac{\xi(t)}{C_m}\,dt, \qquad V(0)=V_0, \qquad (A.2)$$
    $$d\xi(t) = \frac{S(t)-\xi(t)}{\tau}\,dt + \frac{\sigma}{\tau}\,dW(t), \qquad \xi(0)=\xi_0, \qquad (A.3)$$

    that is to say:

    i. For any $I \in C^1$ there exists a continuous function $t \mapsto S(t)$ such that if V(t) is a solution of (2.1) then V(t) is also a solution of (A.2);

    ii. For any $S \in C^0$ there exists a function $I \in C^1$ such that if V(t) is a solution of (A.2) then V(t) is also a solution of (2.1).

    Proof. We gave a closed form for the solution of (2.1) in Equation (3.1). Moreover, we have a closed form for the solution of (A.2) given by

    $$V(t) = V_0\,e^{-t/\theta} + V_L\left[1 - e^{-t/\theta}\right] + \frac{e^{-t/\theta}}{C_m}\int_0^t e^{s/\theta}\,\xi(s)\,ds. \qquad (A.4)$$

    Hence, if we want V to be a solution of (2.1) and (A.2) simultaneously, we need

    $$\frac{e^{-t/\theta}}{C_m}\int_0^t e^{s/\theta}\,\xi(s)\,ds = -\frac{e^{-t/\theta}}{C_m}\int_0^t e^{s/\theta}\,\eta(s)\,ds + \frac{e^{-t/\theta}}{C_m}\int_0^t e^{s/\theta}\,I(s)\,ds \qquad (A.5)$$

    which is implied by $\xi(t) = I(t) - \eta(t)$. Moreover, we gave a closed form for a solution of (2.2) in Equation (3.4). We can also give a closed form for a solution of (A.3):

    $$\xi(t) = e^{-t/\tau}\left[\xi_0 + \frac{1}{\tau}\int_0^t e^{s/\tau}\,S(s)\,ds - \frac{\sigma}{\tau}\int_0^t e^{s/\tau}\,dW(s)\right] \qquad (A.6)$$

    where the − sign can be substituted by a + thanks to the symmetry of the Itô integral term. Hence, to obtain the relation $\xi(t) = I(t) - \eta(t)$, one has to set $\xi_0 = -\eta_0$ and

    $$I(t) = \bar\eta\left(1 - e^{-t/\tau}\right) + \frac{1}{\tau}\,e^{-t/\tau}\int_0^t e^{s/\tau}\,S(s)\,ds. \qquad (A.7)$$

    We have thus shown ii, since Equation (A.7) gives the function I we have to choose in order to ensure that a solution of (A.2) is also a solution of (2.1).

    To show i, suppose $I \in C^1$ and observe that, from (A.7),

    $$\int_0^t e^{s/\tau}\,S(s)\,ds = \tau\,e^{t/\tau}\,I(t) - \tau\bar\eta\left(e^{t/\tau} - 1\right).$$

    Applying the time derivative operator to this equation we have

    $$e^{t/\tau}\,S(t) = e^{t/\tau}\,I(t) + \tau\,e^{t/\tau}\,I'(t) - e^{t/\tau}\,\bar\eta$$

    thus we finally have

    $$S(t) = I(t) + \tau\,I'(t) - \bar\eta$$

    concluding the proof of i.
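    The chain of identities in the proof can be checked symbolically. The sketch below (using sympy, with the arbitrary test stimulus S(s) = s, an illustrative choice not taken from the paper) verifies that the I(t) of (A.7) indeed satisfies the final relation $S(t) = I(t) + \tau I'(t) - \bar\eta$:

```python
import sympy as sp

t, s, tau, eta_bar = sp.symbols('t s tau eta_bar', positive=True)
S = s  # arbitrary continuous test stimulus

# I(t) defined as in (A.7)
I = eta_bar * (1 - sp.exp(-t / tau)) \
    + sp.exp(-t / tau) / tau * sp.integrate(sp.exp(s / tau) * S, (s, 0, t))

# relation obtained at the end of the proof of i: should reproduce S(t) = t
recovered = sp.simplify(I + tau * sp.diff(I, t) - eta_bar)
```

The same check goes through for any smooth choice of S, since the cancellation of the $\bar\eta$ and exponential terms is structural rather than specific to this stimulus.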



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
