Research article

Firing rate distributions in a feedforward network of neural oscillators with intrinsic and network heterogeneity

  • Received: 12 September 2018 Accepted: 28 January 2019 Published: 08 March 2019
  • The level of firing rate heterogeneity in a population of cortical neurons has consequences for how stimuli are processed. Recent studies have shown that the right amount of firing rate heterogeneity (not too much or too little) is a signature of efficient coding; thus, quantifying the relative amount of firing rate heterogeneity is important. In a feedforward network of stochastic neural oscillators, we study the firing rate heterogeneity stemming from two sources: intrinsic (different individual cells) and network (different effects from presynaptic inputs). We find that the relationship between these two forms of heterogeneity can lead to significant changes in firing rate heterogeneity. We consider several networks, including noisy excitatory synaptic inputs, and noisy inputs with both excitatory and inhibitory inputs. To mathematically explain these results, we apply a phase reduction and derive asymptotic approximations of the firing rate statistics assuming weak noise and coupling. Our analytic calculations reveal how the interaction between intrinsic and network heterogeneity results in different firing rate distributions. Our work shows the importance of the phase-resetting curve (and various transformations of it revealed by our analytic calculations) in controlling firing rate statistics.

    Citation: Kyle Wendling, Cheng Ly. Firing rate distributions in a feedforward network of neural oscillators with intrinsic and network heterogeneity[J]. Mathematical Biosciences and Engineering, 2019, 16(4): 2023-2048. doi: 10.3934/mbe.2019099



    The underlying mechanisms of how cortical neural networks process stimuli are an important and complex topic. In a population of neurons, understanding how sensory signals are efficiently encoded and transmitted to higher areas of the brain is especially challenging given that neurons are known to be stochastic and have heterogeneous attributes. For this reason, and due to the nature of quantifying information content and efficient coding, many advances stem from a combination of experiments and computational modeling [1,2,3,4,5,6]. Firing rate heterogeneity is the most commonly studied heterogeneous attribute; it has been shown to have consequences for neural coding in the olfactory bulb [6,59], in the visual system (i.e., orientation tuning curves [7,53]), and in a variety of other systems including the auditory [18] and electrosensory systems [4,29]. In general, the population firing rate and the level of its heterogeneity directly affect important information-theoretic measures of coding such as the Fisher information and mutual information [35]. Firing rate heterogeneity is a fundamental entity for systems that code signals based on rate, or the total number of spikes. Although we do not focus on the potential impact of higher order spiking statistics (i.e., covariance [2]) or spike timing in this paper, firing rate heterogeneity has deep implications for the way cells encode sensory signals.

    Several studies have shown that increased firing rate heterogeneity is a signature of more efficient coding (e.g., Fisher Information [6,53], discrimination error of distinct trial-averaged firing rates [4]). However, the relative level of heterogeneity is important because other studies have shown that too much or too little heterogeneity can be detrimental. This has been shown, for example, in spiking model networks [8], in a generalized linear model (GLM) fit to olfactory bulb neurons [59], and in orientation tuning curves [9]. Thus, the actual level of heterogeneity, not just whether neurons are heterogeneous, is important to quantify. Analyzing firing rate heterogeneity in model networks has the potential benefit of enabling predictions of how neural attributes relate to efficient coding.

    An understudied aspect of firing rate heterogeneity is the genesis of this network statistic from underlying neural attributes. Many theoretical studies specify the firing rate heterogeneity a priori, but interactions of cellular and network attributes are nonlinear and can lead to unexpected spiking dynamics [3,10,11,12,13]. Others have focused on how intrinsic attributes alone affect neural coding [1,14]. Importantly, experimental measurements have shown that both intrinsic and network heterogeneity matter. At the cellular level, many intrinsic factors influence the firing rate of a cell, such as ion channel composition or cell morphology. Our modeling centers on neural oscillators, a class of cells whose intrinsic properties lead to repetitive action potentials. We primarily consider intrinsic heterogeneity via a neuron's phase-response curve (PRC) [15], an experimentally measurable entity that quantifies how inputs advance or delay the time to the next spike in oscillating neurons. Experiments have shown that PRCs are heterogeneous in olfactory bulb cells [16] as well as in mouse visual cortex [55]; the researchers in [55] demonstrated that the PRC type can be modulated with acetylcholine (carbachol) rather than by altering the stimulus input. At the network level, the same presynaptic input strength can result in heterogeneous postsynaptic responses. Effective synaptic input strength is determined by many physiological parameters: the presynaptic firing rate, the postsynaptic potential size for each presynaptic spike [23,38], and the total number of inputs [38,48,49] (in addition to these cortical examples, experiments in the cerebellum show postsynaptic targets are heterogeneous [25,65]). We do not distinguish here between these different aspects of network heterogeneity; rather, we use a single parameter for input strength to model network heterogeneity (see [7,12], who modeled network heterogeneity in a similar way).

    In this study, we focus on the nonlinear interactions of intrinsic and network heterogeneity because such theoretical studies are relatively rare (e.g., [6,8,16,59,66] considered intrinsic heterogeneity alone, and [7,33,49] considered network heterogeneity alone). Some exceptions are Marder and colleagues [11,12], who have considered many heterogeneous attributes in detail, including intrinsic and network, with a focus on the detailed structure of voltage traces in relatively small numbers of neurons. Both intrinsic and synaptic diversity in two coupled oscillators have been considered in dynamic clamp experiments combined with theoretical modeling in [32,54] (see [30] for a more thorough review). At least in the context of altering the spiking statistics of large networks of neurons with noise, the only other studies that we are aware of that consider the effects of both intrinsic and network heterogeneity are [3,10].

    In [10], we analyzed how the relationship between intrinsic (spike threshold) and network heterogeneity, along with the operating regime, alters firing rate distributions in a generic recurrent network model of leaky integrate-and-fire (LIF) neurons. In [3], we made stimulus-dependent predictions about the relationship between threshold and synaptic input, as well as effective network connectivity, with modeling and in vivo recordings of the hindbrain in weakly electric fish. However, these works do not include realistic intrinsic dynamics when modeling the relationship between intrinsic and network heterogeneity. Simple spiking models such as the LIF do not account for ion channel dynamics, which can have significant and counter-intuitive effects on firing rate statistics (i.e., blocking a potassium current does not always increase firing rates [41,43]). Higher dimensional models that incorporate these features can better characterize the intrinsic dynamics of neurons; the LIF and many other spiking models simply integrate network input without properly accounting for how the current state affects responses.

    Here we consider a feedforward network of heterogeneous Morris-Lecar cells [45], where "feedforward" indicates that target neurons do not affect the presynaptic population. We find the relationship between intrinsic (PRC) and network heterogeneity can dramatically alter firing rate heterogeneity or distribution. We use a phase reduction method to calculate asymptotic firing rates assuming weak noise and coupling. Our first order analytic formulas qualitatively and succinctly capture the observed firing rate statistics of the high-dimensional model. We also demonstrate the value of the second order analytic formulas in capturing how firing rates change as the noise level varies. Our work reveals which aspects of the PRC control firing rate dynamics in the weak coupling and noise regime.

    Consider a population of $N$ distinct neural oscillators receiving independent noise, coupled to a presynaptic population providing feedforward input. Let $X_j \in \mathbb{R}^n$ denote the state variables of the $j$th ($j \in \{1,2,\ldots,N\}$) oscillator. The uncoupled and noise-free oscillatory system would be characterized as $\frac{dX_j}{dt} = F_j(X_j)$ with a period $T_j$. With the addition of noise and coupling, the equation for $X_j$ is of the form:

    $$\frac{dX_j}{dt} = F_j(X_j) + q_j\sum_k w_{j,k}\,G(X_j, Y_k) + \varsigma\,\xi_j(t) \tag{2.1}$$

    where $\varsigma \ll 1$ is the power of the noise, $q_j w_{j,k} G \ll 1$ is the coupling, and $\xi_j(t)$ are independent white noise processes with zero mean and covariance $\langle \xi(t)\xi(t')\rangle = \delta(t-t')$. Combining the noise and coupling perturbations, we define the system as $\frac{dX_j}{dt} =: F_j(X_j) + \epsilon_j$ and assume the models are Itô stochastic differential equations (SDEs). We also assume that the unperturbed system $\frac{dX_j}{dt} = F_j(X_j)$ has an asymptotically stable limit cycle, $X_0(t) = X_0(t+T_j)$.

    We use a phase reduction method applied to the model above (Eq. 2.1) to obtain a network of phase oscillator models [17,26]. The phase reduction employed here is standard and has been described previously by many authors (see Chapter 8 of [28] and Chapter 1 of [15]). Note that there are other phase reduction methods; for example, see [56], which addresses how the time-scale of the noise and the return time to the limit cycle could yield a different phase reduction; see [52,57] for phase descriptions with noise and without a stable limit cycle; and see [21,46,47] for moderate to strong perturbations. For our purposes, we will focus on the case where the oscillators return to the limit cycle very quickly with weak perturbations.

    Near the limit cycle, there is a function $\phi: \mathbb{R}^n \to S^1$ mapping a neighborhood of the limit cycle to the one-dimensional phase along the limit cycle, $\Theta_j \in [0, T_j)$, where $T_j$ is the period of the uncoupled oscillator. Defining $\Theta_j = \phi(X_j)$, the variable $\Theta_j$ satisfies

    $$\frac{d\Theta_j}{dt} = 1 + \nabla_X\phi(X_j)\cdot\epsilon_j. \tag{2.2}$$

    This analytically exact equation has to be further approximated because $X_j$ is not captured by the phase variables. However, since $\epsilon_j$ is small, we can approximate $X_j$ by $X_0(\Theta_j)$ to get a simpler equation for $\Theta_j$:

    $$\frac{d\Theta_j}{dt} \approx 1 + Z(\Theta_j)\cdot\epsilon_j(\Theta_1, \ldots, \Theta_N), \tag{2.3}$$

    where $Z(\Theta) := \nabla_X\phi(X_0(\Theta))$. The function $Z(\Theta)$ is called the adjoint and satisfies the linear equation

    $$Z'(\Theta) = -D_X F_j(X_0(\Theta))^T Z(\Theta). \tag{2.4}$$

    A normalization condition uniquely determines the solution to this equation (see [15]). We assume that the noise and coupling only affect the voltage component (i.e., voltage is the first component, so $\epsilon_j$ is zero except in its first entry), which holds for a wide class of neuron models. Thus, for the vector product (i.e., the last term on the RHS of Eq. 2.3), the only relevant quantity is the first component of $Z(\Theta)$, which we call $\Delta_j$, the (infinitesimal) phase-resetting curve or PRC of the neuron. The PRC is a periodic function with negligible value at the end points because, in neurons, perturbations have negligible effect on the dynamics at the moment of a spike; it is also an experimentally measurable entity. The PRCs here are calculated with the program XPPAUT [27].

    The result of applying the phase reduction and scaling Eq. 2.3 by $T_j$ is the following set of stochastic differential equations:

    $$\frac{d\Theta_j}{dt} = \omega_j + q_j\sum_k P_{j,k} + \frac{\sigma^2}{2}\Delta_j(\Theta_j)\Delta_j'(\Theta_j) + \sigma\Delta_j(\Theta_j)\,\xi_j(t) \tag{2.5}$$

    where again the noise is independent, the frequency is $\omega_j = \frac{1}{T_j}$ because the phase variables $\Theta_j$ have been scaled to take on values in $[0, 1)$, and

    $$P_{j,k} = \frac{1}{T_j}\,w_{j,k}\,\frac{1}{T_j}\int_0^{T_j}\Delta_j(t)\,G\big(X_0(t), \bar{Y}_k(t)\big)\,dt. \tag{2.6}$$

    The first factor $\frac{1}{T_j}$ comes from the scaling, and $\bar{Y}_k(t)$ is the average value of the presynaptic input. Here $\Delta_j(t)$ is the time-scaled version of the PRC, $\Delta_j(t) := T_j\,\Delta_j(t\,T_j)$, and the noise level $\varsigma$ is simply scaled by the period: $\sigma = \varsigma\,T_j$. The term $\frac{\sigma^2}{2}\Delta_j(\Theta_j)\Delta_j'(\Theta_j)$ is a result of Itô's Lemma because the original model (Eq. 2.1) is an Itô SDE. We model intrinsic heterogeneity with different $\Delta_j$ (although there is minuscule variation in $\omega_j$ in the models we consider), and network heterogeneity is captured by $q_j$, which scales the presynaptic input.

    The specific multi-dimensional model we focus on is the Morris-Lecar model [45]. Although this model has only 2 state variables (the voltage trace and potassium gating), it is an ideal model for our purposes because our central focus is on intrinsic heterogeneity manifested through different PRCs. Indeed, a wide variety of PRCs result from varying 4 parameters (Figure 3C, D). The model dynamics consist of multiple time-scales, three intrinsic ionic currents, and spike generation. At its core, the rate of change of the voltage is the difference between an applied background current and the intrinsic ionic currents, $\frac{dV_j}{dt} \propto I_{app} - I_{ion}$. Meanwhile, the potassium dynamics are driven by the difference between the steady-state fraction of open potassium channels at the current voltage level and the actual fraction of open channels, $\frac{dW_j}{dt} \propto W_\infty(V_j) - W_j$. The model is:

    $$\begin{aligned} C_m\frac{dV_j}{dt} &= I_{app} - g_L(V_j - E_L) - g_K W_j(t)(V_j - E_K) - g_{Ca}\,m_\infty(V_j)(V_j - E_{Ca})\\ &\quad - q_j\sum_{k=1}^{M} w_{j,k}\,s_k(t)\,(V_j - E_{syn_k}) + \varsigma\,\xi_j(t)\\ \frac{dW_j}{dt} &= \phi\,\frac{W_\infty(V_j) - W_j}{\tau_W(V_j)} \end{aligned} \tag{2.7}$$

    $\xi_j(t)$ is an independent white noise process with strength $\varsigma$, and $C_m$ is the membrane capacitance. The parameters $g_L$, $g_K$, and $g_{Ca}$ are the maximal conductances for the leak current, potassium channels, and calcium channels, respectively; $E_L$, $E_K$, and $E_{Ca}$ are the corresponding reversal potentials. The functions in the model are:

    $$\begin{aligned} m_\infty(V) &= \tfrac{1}{2}\left[1 + \tanh\big((V - V_1)/V_2\big)\right],\\ \tau_W(V) &= 1/\cosh\big((V - V_3)/(2V_4)\big),\\ W_\infty(V) &= \tfrac{1}{2}\left[1 + \tanh\big((V - V_3)/V_4\big)\right], \end{aligned} \tag{2.8}$$

    where $m_\infty$ and $W_\infty$ model the steady-state fractions of open calcium and potassium channels, respectively, and $\tau_W$ represents the time scale of the potassium gating; $V_1$ through $V_4$ are the parameters that affect the intrinsic ion channel dynamics. All of the intrinsic parameters are fixed except $V_3$, $V_4$, $\phi$, and the background current $I_{app}$ (see Table 1). These four parameters are varied to model intrinsic heterogeneity, resulting in different PRCs (see Figure 3C). In some circumstances, $\phi$ is calculated via the temperature [28], but here we assume it can vary within the population.
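The dynamics above can be made concrete with a short numerical sketch. The following Euler-Maruyama simulation of a single uncoupled, noisy cell (Eqs. 2.7–2.8) is not the authors' code: the fixed parameters follow Table 1 and the varied parameters use the set from Figure 1A, while the spike threshold (0 mV upward crossing), initial state, and time step are illustrative assumptions.

```python
import numpy as np

# Sketch (not the authors' code) of one uncoupled, noisy Morris-Lecar cell,
# Eqs. 2.7-2.8. Fixed parameters follow Table 1; V3, V4, phi, Iapp use the
# set from Figure 1A. Spike threshold, initial state, and dt are assumptions.
Cm, gL, gK, gCa = 20.0, 2.0, 8.0, 4.0          # uF/cm^2, mS/cm^2
EL, EK, ECa = -60.0, -84.0, 120.0              # mV
V1, V2, V3, V4 = -1.2, 18.0, 12.0, 17.0        # mV
phi, Iapp, noise = 0.06667, 47.0, 3.0          # 1/ms, uA/cm^2, noise power

m_inf = lambda V: 0.5 * (1.0 + np.tanh((V - V1) / V2))   # Ca gating (Eq. 2.8)
W_inf = lambda V: 0.5 * (1.0 + np.tanh((V - V3) / V4))   # K gating
tau_W = lambda V: 1.0 / np.cosh((V - V3) / (2.0 * V4))   # K time scale

def simulate(T=2000.0, dt=0.01, seed=0):
    """Euler-Maruyama; returns firing rate in Hz (0 mV upward crossings)."""
    rng = np.random.default_rng(seed)
    V, W, spikes, above = -40.0, 0.0, 0, False
    for _ in range(int(T / dt)):
        Iion = gL*(V - EL) + gK*W*(V - EK) + gCa*m_inf(V)*(V - ECa)
        V_new = V + dt*(Iapp - Iion)/Cm \
                + (noise/Cm)*np.sqrt(dt)*rng.standard_normal()
        W += dt * phi * (W_inf(V) - W) / tau_W(V)
        V = V_new
        if V > 0.0 and not above:    # upward threshold crossing = spike
            spikes, above = spikes + 1, True
        elif V < -10.0:
            above = False
    return spikes / (T / 1000.0)

rate = simulate()  # Iapp = 47 is past the firing onset, so rate > 0
```

The 0 mV threshold and hysteresis reset at -10 mV are a simple spike-detection choice; the paper itself does not specify a detection rule.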

    Table 1.  Morris-Lecar parameter values. The fixed parameters are from [45]; the synapse parameters ($\tau_r$, $\tau_d$, $A$; see Eqs. 2.9–2.10) appear in the last rows, with subscripts denoting whether the presynaptic input is excitatory (e.g., $\tau_{rE}$) or inhibitory (e.g., $\tau_{rI}$). In the second table, the ranges give the extreme values, obtained from [27] files (mlecar.ode), that we vary to determine how periodic firing arises.

    Fixed values:
    $C_m$ = 20 μF/cm²    $g_L$ = 2 mS/cm²     $g_K$ = 8 mS/cm²
    $g_{Ca}$ = 4 mS/cm²  $E_L$ = −60 mV       $E_K$ = −84 mV
    $E_{Ca}$ = 120 mV    $V_1$ = −1.2 mV      $V_2$ = 18 mV
    $\tau_{rE}$ = 1 ms   $\tau_{dE}$ = 5 ms   $A_E$ = 2   $E_{synE}$ = 80 mV
    $\tau_{rI}$ = 2 ms   $\tau_{dI}$ = 10 ms  $A_I$ = 2   $E_{synI}$ = −60 mV

    Parameters for intrinsic heterogeneity:
    $V_3$: 12 mV to 1 mV
    $V_4$: 17 mV to 31 mV
    $\phi$: 0.06667 ms⁻¹ to 0.04 ms⁻¹
    $I_{app}$: 47 μA/cm² to 109 μA/cm²


    Finally, the sum $q_j\sum_{k=1}^{M} w_{j,k}\,s_k(t)\,(V_j - E_{syn_k})$ represents the total presynaptic input, assumed to be feedforward from an unmodeled population. The parameter $q_j$ models network heterogeneity; the values are chosen independently from a uniform distribution of positive values, $q_j \sim U(q_{min}, q_{max})$, with either $[q_{min}, q_{max}] = [0.005, 0.015]$ or $[q_{min}, q_{max}] = [0.001, 0.003]$. The same presynaptic input strength can thus result in heterogeneous postsynaptic responses (see [7,12], who modeled network heterogeneity in a similar way). The coupling matrix $w_{j,k}$ is an $N \times M$ matrix, with $N = 730$ neurons (the population of interest) and $M = 1000$ synapses, $k \in \{1, \ldots, M\}$, consisting entirely of 0's and 1's. The coupling matrix is an Erdős–Rényi graph, with a connection probability of 0.3 for all $w_{j,k}$ (independently chosen). The random number of presynaptic inputs to the $N = 730$ neurons is a minor source of variability but not one of our significant sources of heterogeneity. The synapse variable $s_k(t)$ is modeled by the following ODE system:
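As a quick illustration of this network structure (a sketch; the sizes, probability, and uniform range are the ones quoted above), the random coupling matrix and heterogeneity factors can be generated as follows:

```python
import numpy as np

# Sketch of the feedforward network structure described above: an
# Erdos-Renyi 0/1 coupling matrix (N x M, connection probability 0.3) and
# independent network-heterogeneity factors q_j ~ U(q_min, q_max).
rng = np.random.default_rng(1)
N, M, p_conn = 730, 1000, 0.3
q_min, q_max = 0.005, 0.015

W = (rng.random((N, M)) < p_conn).astype(int)  # w_{j,k} in {0, 1}
q = rng.uniform(q_min, q_max, size=N)          # q_j scales presynaptic input

# Each target cell receives ~ p_conn * M = 300 inputs on average; this small
# spread in in-degree is a minor source of variability, as noted above.
in_degree = W.sum(axis=1)
mean_degree = in_degree.mean()
```
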

    $$\frac{ds_k}{dt} = \frac{-s_k + a_k}{\tau_d} \tag{2.9}$$
    $$\frac{da_k}{dt} = -\frac{a_k}{\tau_r} + A\,\delta(t - t^*) \tag{2.10}$$

    The times $t^*$ at which $a$ instantaneously jumps, $a(t^*) \to a(t^*) + A$, are random, governed by a homogeneous Poisson process with rate $\lambda$.

    The synaptic kernel associated with the synapse model is:

    $$s(t - t^*) = H(t - t^*)\,\frac{\tau_r A}{\tau_d - \tau_r}\left(e^{-(t - t^*)/\tau_d} - e^{-(t - t^*)/\tau_r}\right) \tag{2.11}$$

    where $H(x)$ is the Heaviside step function ($1$ if $x > 0$, and $0$ otherwise). Here we assume $\tau_d > \tau_r$, as is commonly observed in cortical synapses.
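As a numerical sanity check (a sketch; the forward-Euler step and the single presynaptic spike at $t^* = 0$ are assumptions), one can verify that the kernel in Eq. 2.11 matches the solution of Eqs. 2.9–2.10, using the excitatory values from Table 1:

```python
import numpy as np

# Verify numerically that the double-exponential kernel (Eq. 2.11) solves
# the synapse ODEs (Eqs. 2.9-2.10) for a single presynaptic spike at t* = 0.
# Excitatory values tau_r = 1 ms, tau_d = 5 ms, A = 2 are from Table 1.
tau_r, tau_d, A = 1.0, 5.0, 2.0
dt, T = 1e-3, 30.0
t = np.arange(0.0, T, dt)

# Forward-Euler integration of Eqs. 2.9-2.10; the delta kick sets a(0) = A.
s_num = np.empty_like(t)
a, s = A, 0.0
for i in range(t.size):
    s_num[i] = s
    a, s = a + dt * (-a / tau_r), s + dt * (-s + a) / tau_d

# Closed-form kernel, Eq. 2.11 (the Heaviside factor is 1 for t > t* = 0):
s_kernel = (tau_r * A / (tau_d - tau_r)) * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

max_err = np.max(np.abs(s_num - s_kernel))  # O(dt) Euler discretization error
```

The small residual is pure discretization error; shrinking `dt` shrinks it proportionally.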

    The result of applying the general phase reduction method in section 2.1 to the feedforward Morris-Lecar network with conductance-based synaptic input is described here.

    The noise level $\varsigma$ in the Morris-Lecar model is simply scaled by the period so that:

    $$\sigma = \varsigma\,T_j.$$

    Recall that the time-scaled version of the PRC, $\Delta_j(t)$, is defined as $\Delta_j(t) := T_j\,\Delta_j(t\,T_j)$. As described previously for the general model (Eq. 2.5), we scale time by $T_j$ and use the dimensionless version of the PRC, $\Delta_j$. The feedforward coupling term (Eq. 2.6) is approximated by:

    $$P_{j,k} = \Delta_j(\theta)\,w_{j,k}\,\frac{1}{C_m}\frac{1}{T_j}\int_0^{T_j}s_k(t)\left(E_{syn_k} - V_{LC_j}(t)\right)dt. \tag{2.12}$$

    where $V_{LC_j}(t)$ is the unperturbed voltage over one cycle. The PRC is taken out of the integral as an ad-hoc approximation; this is partially justified because the feedforward input is very noisy (the synapses are driven by a Poisson process) and is effectively pulse-coupled when $E_{syn_k}$ is much larger/smaller than the possible values of $V_j(t)$ (see [31,40], who use phase oscillator models of a similar form). Finally, the synaptic input $s_k(t)$ is approximated by applying the synaptic filter (Eq. 2.11) at all discretized points in the period and averaging, because the synapses are driven by a Poisson process. Thus,

    $$\frac{d\Theta_j}{dt} = \omega_j + \frac{\sigma^2}{2}\Delta_j\Delta_j' + q_j\bar{P}_j\Delta_j + \sigma\Delta_j\,\xi_j(t) \tag{2.13}$$
    $$\Theta_j(t^-) \to 1, \quad \Theta_j(t^+) = 0 \tag{2.14}$$
    $$\bar{P}_j := \sum_k w_{j,k}\,\frac{1}{C_m}\frac{1}{T_j}\left\langle \int_0^{T_j}s(t - t^*)\left(E_{syn_k} - V_{LC_j}(t)\right)dt \right\rangle_{t^*} \tag{2.15}$$

    where $\langle\cdot\rangle_{t^*}$ denotes the average over all $t^* \in [0, T_j)$.
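To make the reduced model concrete, here is a minimal Euler-Maruyama simulation of Eqs. 2.13–2.14 for a single oscillator. This is a sketch: the PRC shape $\Delta(\theta) = \sin(\pi\theta)$ (chosen only because it vanishes at $\theta = 0, 1$) and the values of $\omega$, $q\bar{P}$, and $\sigma$ are illustrative assumptions, not fit to the Morris-Lecar network.

```python
import numpy as np

# Euler-Maruyama sketch of the reduced phase model (Eqs. 2.13-2.14) for one
# oscillator. The toy PRC Delta(theta) = sin(pi*theta) vanishes at theta = 0, 1
# as required; omega, qP, sigma are illustrative values (assumptions).
omega, qP, sigma = 10.0, 0.5, 0.2  # theta in [0,1), so omega is in spikes/s

Delta  = lambda th: np.sin(np.pi * th)          # toy PRC
dDelta = lambda th: np.pi * np.cos(np.pi * th)  # its derivative

def simulate_rate(T=50.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    th, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        # drift includes the Ito correction term from Eq. 2.13
        drift = omega + 0.5 * sigma**2 * Delta(th) * dDelta(th) + qP * Delta(th)
        th += drift * dt + sigma * Delta(th) * np.sqrt(dt) * rng.standard_normal()
        if th >= 1.0:          # reset (Eq. 2.14): each reset is one spike
            th, spikes = 0.0, spikes + 1
    return spikes / T

rate = simulate_rate()
# For this PRC, the 1st order theory (Eq. 2.29) predicts
# rate ~ omega + qP * (2/pi), slightly above omega.
```
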

    Here we describe the phase reduction theory used to capture the firing statistics of the full Morris-Lecar feedforward network. The phase reduction (Eqs. 2.13–2.15) makes the subsequent asymptotic calculations feasible. Stochastic systems are often characterized by a probability density function whose evolution is governed by a Fokker-Planck equation [51]. Let

    $$\Pr\big(\Theta_j(t) \in (\theta, \theta + d\theta)\big) = \varrho_j(\theta, t)\,d\theta. \tag{2.16}$$

    The corresponding Fokker-Planck equation of the jth neuron is:

    $$\frac{\partial \varrho_j}{\partial t} = -\frac{\partial}{\partial\theta}\left\{\left[\omega_j + q_j\bar{P}_j\Delta_j(\theta) + \frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta)\right]\varrho_j - \frac{\sigma^2}{2}\frac{\partial}{\partial\theta}\left\{\Delta_j^2(\theta)\varrho_j\right\}\right\} =: -\frac{\partial}{\partial\theta}\left\{J_j(\theta, t)\right\} \tag{2.17}$$

    with periodic boundary conditions $\varrho_j(\theta = 0, t) = \varrho_j(\theta = 1, t)$ and normalization $\int_0^1 \varrho_j(\theta, t)\,d\theta = 1$. The firing rate of action potentials is:

    $$r_j(t) := \frac{\Pr(\Theta_j(t) \to 1)}{\text{unit time}} = J_j(\theta = 1, t) \tag{2.18}$$

    The probability flux $J_j(\theta, t)$ can be rewritten by carrying out the differentiation:

    $$\frac{\partial \varrho_j}{\partial t} = -\frac{\partial}{\partial\theta}\left\{\left[\omega_j - \frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta) + q_j\bar{P}_j\Delta_j(\theta)\right]\varrho_j(\theta) - \frac{\sigma^2}{2}\Delta_j^2(\theta)\frac{\partial\varrho_j}{\partial\theta}\right\}$$

    We are only interested in the steady-state solution ($\frac{\partial\varrho_j}{\partial t} = 0$):

    $$0 = -\frac{\partial}{\partial\theta}\left\{\left[\omega_j - \frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta) + q_j\bar{P}_j\Delta_j(\theta)\right]\varrho_j(\theta) - \frac{\sigma^2}{2}\Delta_j^2(\theta)\varrho_j'(\theta)\right\} \tag{2.19}$$
    $$0 = -\frac{\partial}{\partial\theta}\left\{J_j(\theta)\right\}$$

    Since $J_j(\theta)$ is constant, we pick a convenient $\theta$, namely $\theta = 1$, which gives the firing rate of the jth neuron. This value of $\theta$ has the added benefit that the PRC vanishes there, at the moment of firing: $\Delta(1) = 0$. Thus, the firing rate is simply:

    $$r_j = \omega_j\,\varrho_j(1) \tag{2.20}$$

    We exploit the weak coupling and noise assumption by applying a standard asymptotic approximation to the P.D.F. $\varrho_j$ to obtain an approximation for the firing rate of the jth neuron, and consequently the firing rate heterogeneity (i.e., the standard deviation across all $r_j$):

    $$\varrho_j(\theta) = \varrho_0(\theta) + \epsilon\varrho_1(\theta) + \epsilon^2\varrho_2(\theta) + O(\epsilon^3) \tag{2.21}$$

    The P.D.F. has to integrate to 1 (i.e., $\int_0^1 \varrho_j(\theta)\,d\theta = 1$); we assume the following normalization conditions for the asymptotic approximation of $\varrho$:

    $$\int_0^1 \varrho_0(\theta)\,d\theta = 1, \qquad \int_0^1 \varrho_l(\theta)\,d\theta = 0 \quad \forall\, l \geq 1 \tag{2.22}$$

    so that any truncation from the 0th order onward results in the correct normalization. Substituting Eq. 2.21 into Eq. 2.19 while assuming $\sigma^2$ and $q_j\bar{P}_j$ are order $\epsilon$ gives:

    $$J_j(\theta) = \left(\omega_j - \epsilon\frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta) + \epsilon\, q_j\bar{P}_j\Delta_j(\theta)\right)\left[\varrho_0(\theta) + \epsilon\varrho_1(\theta) + \epsilon^2\varrho_2(\theta)\right] - \epsilon\frac{\sigma^2}{2}\Delta_j^2(\theta)\varrho_j'(\theta) \tag{2.23}$$

    This results in a hierarchy of asymptotic equations (after dividing by $\epsilon^l$). We impose the condition that each order of the approximation for $J_j$ has zero derivative with respect to $\theta$, equivalent to assuming the steady-state Fokker-Planck equation (Eq. 2.19) is satisfied at every order. The 0th order equation is simply:

    $$J_j(\theta) = \omega_j\,\varrho_0(\theta)$$

    The normalization condition $\int_0^1 \varrho_0(\theta)\,d\theta = 1$ and the fact that $J_j \equiv K_0$ is constant (i.e., $J_j$ has zero derivative) together give $\varrho_0 \equiv 1$, yielding the following 0th order approximation to $r_j$:

    $$r_j \approx \omega_j \tag{2.24}$$

    First Order Formula

    The 1st order formula is derived via the $O(\epsilon)$ terms in Eq. 2.23. Note that, since $\varrho_0 \equiv 1$,

    $$\varrho_j'(\theta) = \epsilon\varrho_1'(\theta) + \epsilon^2\varrho_2'(\theta) + O(\epsilon^3) \tag{2.25}$$

    Since $\varrho_j'(\theta)$ is multiplied by an order $\epsilon$ term ($\frac{\sigma^2}{2}$), the resulting 1st order approximation for $J_j$ does not contain derivatives of $\varrho_l$:

    $$J_j(\theta) = \omega_j\varrho_1(\theta) - \frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta)\varrho_0(\theta) + q_j\bar{P}_j\Delta_j(\theta)\varrho_0(\theta) \tag{2.26}$$

    Setting $J_j$ to a constant $K_1$ via $0 = -\frac{\partial}{\partial\theta}\{J_j(\theta)\}$ and substituting $\varrho_0 \equiv 1$ results in a simple equation for $\varrho_1$:

    $$K_1 = \omega_j\varrho_1(\theta) - \frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta) + q_j\bar{P}_j\Delta_j(\theta)$$
    $$\varrho_1(\theta) = \frac{1}{\omega_j}\left[\frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta) - q_j\bar{P}_j\Delta_j(\theta) + K_1\right] \tag{2.27}$$

    The constant $K_1$ is determined by integrating both sides from $\theta = 0$ to $1$ and using the normalization condition $\int_0^1 \varrho_1(\theta)\,d\theta = 0$:

    $$0 = \frac{1}{\omega_j}\left[\frac{\sigma^2}{4}\Delta_j^2(\theta)\Big|_0^1 - q_j\bar{P}_j\int_0^1\Delta_j(\theta)\,d\theta + K_1\,\theta\Big|_0^1\right]$$
    $$0 = \frac{1}{\omega_j}\left(-q_j\bar{P}_j\bar{\Delta}_j + K_1\right)$$
    $$K_1 = q_j\bar{P}_j\bar{\Delta}_j.$$

    Thus we have:

    $$\varrho_1(\theta) = \frac{1}{\omega_j}\left[\frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta) + q_j\bar{P}_j\left(\bar{\Delta}_j - \Delta_j(\theta)\right)\right], \tag{2.28}$$

    where $\bar{\Delta} := \int_0^1 \Delta(\theta)\,d\theta$. The resulting 1st order approximation is:

    $$r_j \approx \omega_j + q_j\bar{P}_j\bar{\Delta}_j \tag{2.29}$$
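The first-order formula can be explored with simple quadrature. In this sketch (hypothetical PRC shapes and illustrative $\omega$, $q\bar{P}$ values, not the network's), only the mean $\bar{\Delta}$ enters, so a mostly positive PRC shifts the rate substantially while a PRC with balanced positive and negative lobes barely shifts it at all:

```python
import numpy as np

# Illustration of Eq. 2.29 with two hypothetical PRC shapes (both vanish at
# theta = 0 and 1). Only the mean of the PRC, Delta_bar, enters at 1st order,
# so a mostly positive PRC shifts the rate while a balanced one does not.
theta = np.linspace(0.0, 1.0, 10001)
prcs = {
    "mostly positive": np.sin(np.pi * theta),        # Delta_bar = 2/pi
    "balanced lobes":  np.sin(2.0 * np.pi * theta),  # Delta_bar = 0
}

omega, qP = 10.0, 0.5  # illustrative baseline frequency and input strength
rates = {}
for name, prc in prcs.items():
    Delta_bar = prc.mean()               # quadrature for the mean over [0, 1)
    rates[name] = omega + qP * Delta_bar # Eq. 2.29
```

This presages the SNIC vs. Hopf comparison in the Results: the shape of the PRC controls how strongly input shifts the mean rate.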

    Second Order Formula

    We derive a 2nd order approximation to the P.D.F. and firing rate analogously. Again we use Eq. 2.23, but this time we consider the $O(\epsilon^2)$ terms.

    $$J_j(\theta) = \omega_j\varrho_2(\theta) - \frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta)\varrho_1(\theta) + q_j\bar{P}_j\Delta_j(\theta)\varrho_1(\theta) - \frac{\sigma^2}{2}\Delta_j^2(\theta)\varrho_1'(\theta) \tag{2.30}$$

    From the equation for $\varrho_1$ (Eq. 2.28), we have:

    $$\varrho_1'(\theta) = \frac{1}{\omega_j}\left[\frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j''(\theta) + \frac{\sigma^2}{2}\left(\Delta_j'(\theta)\right)^2 - q_j\bar{P}_j\Delta_j'(\theta)\right] \tag{2.31}$$

    Substituting $\varrho_1(\theta)$ and $\varrho_1'(\theta)$ into the RHS of Eq. 2.30:

    $$\begin{aligned} J_j(\theta) ={}& \omega_j\varrho_2(\theta) - \frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta)\left[\frac{1}{\omega_j}\left(\frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta) - q_j\bar{P}_j\Delta_j(\theta) + q_j\bar{P}_j\bar{\Delta}_j\right)\right]\\ &+ q_j\bar{P}_j\Delta_j(\theta)\left[\frac{1}{\omega_j}\left(\frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta) - q_j\bar{P}_j\Delta_j(\theta) + q_j\bar{P}_j\bar{\Delta}_j\right)\right]\\ &- \frac{\sigma^2}{2}\Delta_j^2(\theta)\left[\frac{1}{\omega_j}\left(\frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j''(\theta) + \frac{\sigma^2}{2}\left(\Delta_j'(\theta)\right)^2 - q_j\bar{P}_j\Delta_j'(\theta)\right)\right] \end{aligned}$$

    We skip some of the straightforward yet tedious calculations. Assuming all orders of $\varrho_j$ satisfy $0 = -\frac{\partial}{\partial\theta}\{J_j(\theta)\}$ as before, the equation for $\varrho_2$ is:

    $$\begin{aligned} K_2 ={}& \omega_j\varrho_2(\theta) - \frac{1}{\omega_j}\left[\left(\frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta)\right)^2 + \frac{\sigma^2}{2}q_j\bar{P}_j\left(\bar{\Delta}_j\Delta_j(\theta)\Delta_j'(\theta) - \Delta_j^2(\theta)\Delta_j'(\theta)\right)\right]\\ &+ \frac{1}{\omega_j}\left[\frac{\sigma^2}{2}q_j\bar{P}_j\Delta_j^2(\theta)\Delta_j'(\theta) + \left(q_j\bar{P}_j\right)^2\left(\bar{\Delta}_j\Delta_j(\theta) - \Delta_j^2(\theta)\right)\right]\\ &- \frac{1}{\omega_j}\left[\left(\frac{\sigma^2}{2}\right)^2\Delta_j^3(\theta)\Delta_j''(\theta) + \left(\frac{\sigma^2}{2}\Delta_j(\theta)\Delta_j'(\theta)\right)^2 - \frac{\sigma^2}{2}q_j\bar{P}_j\Delta_j^2(\theta)\Delta_j'(\theta)\right] \end{aligned}$$

    To distinguish the contributions of the noise, the network inputs, and their interaction, the terms are grouped by $\left(\frac{\sigma^2}{2}\right)^2$, $\left(q_j\bar{P}_j\right)^2$, and the cross term $\frac{\sigma^2}{2}q_j\bar{P}_j$:

    $$\begin{aligned} K_2 ={}& \omega_j\varrho_2(\theta) - \frac{1}{\omega_j}\left[\left(\frac{\sigma^2}{2}\right)^2\left(2\left(\Delta_j(\theta)\Delta_j'(\theta)\right)^2 + \Delta_j^3(\theta)\Delta_j''(\theta)\right)\right]\\ &- \frac{1}{\omega_j}\left[\frac{\sigma^2}{2}q_j\bar{P}_j\left(\bar{\Delta}_j\Delta_j(\theta)\Delta_j'(\theta) - \Delta_j^2(\theta)\Delta_j'(\theta)\right)\right]\\ &- \frac{1}{\omega_j}\left[\left(q_j\bar{P}_j\right)^2\left(\Delta_j^2(\theta) - \bar{\Delta}_j\Delta_j(\theta)\right)\right] \end{aligned} \tag{2.32}$$

    We solve for $\varrho_2(\theta)$ and $K_2$ by integrating both sides in $\theta$ from 0 to 1, using the normalization condition (Eq. 2.22) and recalling that $\Delta_j(0) = \Delta_j(1) = 0$. Then we have:

    $$K_2 = -\frac{1}{\omega_j}\left[\left(\frac{\sigma^2}{2}\right)^2\left(2\int_0^1\Delta^2(\theta)\left(\Delta'(\theta)\right)^2 d\theta + \int_0^1\Delta^3(\theta)\Delta''(\theta)\,d\theta\right) + \left(q_j\bar{P}_j\right)^2\left(\overline{\Delta^2} - \bar{\Delta}^2\right)\right] \tag{2.33}$$

    where $\overline{\Delta^2} := \int_0^1\Delta^2(\theta)\,d\theta$. This equation can be further simplified with integration by parts:

    $$\int_0^1\Delta^3(\theta)\Delta''(\theta)\,d\theta = \Delta^3(\theta)\Delta'(\theta)\Big|_0^1 - 3\int_0^1\Delta^2(\theta)\left(\Delta'(\theta)\right)^2 d\theta = -3\int_0^1\Delta^2(\theta)\left(\Delta'(\theta)\right)^2 d\theta$$

    so that Eq. 2.33 becomes:

    $$K_2 = \frac{1}{\omega_j}\left[\left(\frac{\sigma^2}{2}\right)^2\int_0^1\Delta_j^2(\theta)\left(\Delta_j'(\theta)\right)^2 d\theta + \left(q_j\bar{P}_j\right)^2\left(\bar{\Delta}_j^2 - \overline{\Delta_j^2}\right)\right] \tag{2.34}$$
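The integration-by-parts step can be double-checked numerically. This sketch uses the hypothetical PRC $\Delta(\theta) = \sin(\pi\theta)$, which satisfies $\Delta(0) = \Delta(1) = 0$ so the boundary term vanishes:

```python
import numpy as np

# Quadrature check of the integration-by-parts identity used above, with a
# toy PRC Delta(theta) = sin(pi*theta) so the boundary term Delta^3 * Delta'
# vanishes at theta = 0, 1.
theta = np.linspace(0.0, 1.0, 20001)
dx = theta[1] - theta[0]
D   = np.sin(np.pi * theta)
Dp  = np.pi * np.cos(np.pi * theta)      # Delta'
Dpp = -np.pi**2 * np.sin(np.pi * theta)  # Delta''

lhs = np.sum(D**3 * Dpp) * dx            # int Delta^3 Delta'' dtheta
rhs = -3.0 * np.sum(D**2 * Dp**2) * dx   # -3 int Delta^2 (Delta')^2 dtheta
# For this particular PRC both sides equal -3*pi^2/8.
```
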

    The final result for $\varrho_2$, using Eqs. 2.32 and 2.34, is:

    $$\begin{aligned} \varrho_2(\theta) ={}& \frac{1}{\omega_j^2}\left[\left(\frac{\sigma^2}{2}\right)^2\int_0^1\Delta_j^2(\theta)\left(\Delta_j'(\theta)\right)^2 d\theta + \left(q_j\bar{P}_j\right)^2\left(\bar{\Delta}_j^2 - \overline{\Delta_j^2}\right)\right]\\ &+ \frac{1}{\omega_j^2}\left[\left(\frac{\sigma^2}{2}\right)^2\left(2\left(\Delta_j(\theta)\Delta_j'(\theta)\right)^2 + \Delta_j^3(\theta)\Delta_j''(\theta)\right)\right]\\ &+ \frac{1}{\omega_j^2}\left[\frac{\sigma^2}{2}q_j\bar{P}_j\left(\bar{\Delta}_j\Delta_j(\theta)\Delta_j'(\theta) - \Delta_j^2(\theta)\Delta_j'(\theta)\right)\right]\\ &+ \frac{1}{\omega_j^2}\left[\left(q_j\bar{P}_j\right)^2\left(\Delta_j^2(\theta) - \bar{\Delta}_j\Delta_j(\theta)\right)\right] \end{aligned} \tag{2.35}$$

    Evaluating at $\theta = 1$ greatly simplifies $\varrho_2$; this is the only value that matters for the firing rate.

    $$\varrho_2(1) = \frac{1}{\omega_j^2}\left[\left(\frac{\sigma^2}{2}\right)^2\int_0^1\Delta_j^2(\theta)\left(\Delta_j'(\theta)\right)^2 d\theta + \left(q_j\bar{P}_j\right)^2\left(\bar{\Delta}_j^2 - \overline{\Delta_j^2}\right)\right] \tag{2.36}$$

    The 2nd order approximation to the firing rate then is:

    $$r_j \approx \omega_j + q_j\bar{P}_j\bar{\Delta}_j + \frac{1}{\omega_j}\left[\left(\frac{\sigma^2}{2}\right)^2\int_0^1\Delta_j^2(\theta)\left(\Delta_j'(\theta)\right)^2 d\theta + \left(q_j\bar{P}_j\right)^2\left(\bar{\Delta}_j^2 - \overline{\Delta_j^2}\right)\right] \tag{2.37}$$
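To give a feel for the size of the second-order terms, this sketch evaluates Eqs. 2.29 and 2.37 by quadrature for the hypothetical PRC $\Delta(\theta) = \sin(\pi\theta)$ with illustrative parameter values (assumptions, not the network's fitted values):

```python
import numpy as np

# Quadrature evaluation of the 1st (Eq. 2.29) and 2nd (Eq. 2.37) order firing
# rate formulas for a toy PRC Delta(theta) = sin(pi*theta); omega, qP, sigma
# are illustrative. Closed forms: Delta_bar = 2/pi, mean(Delta^2) = 1/2.
omega, qP, sigma = 10.0, 0.5, 0.2
theta = np.linspace(0.0, 1.0, 20001)
dx = theta[1] - theta[0]
D  = np.sin(np.pi * theta)
Dp = np.pi * np.cos(np.pi * theta)

Delta_bar = np.sum(D) * dx          # int Delta
Delta_sq  = np.sum(D**2) * dx       # int Delta^2
I_noise   = np.sum(D**2 * Dp**2) * dx

r1 = omega + qP * Delta_bar                              # Eq. 2.29
r2 = r1 + ((sigma**2 / 2.0)**2 * I_noise
           + qP**2 * (Delta_bar**2 - Delta_sq)) / omega  # Eq. 2.37
# Here Delta_bar^2 < mean(Delta^2), so the 2nd order network term slightly
# lowers the rate; the sigma^4 noise term is positive but much smaller.
```
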

    Finally, we consider the standard deviation across the population predicted by our analytic theory: $\sigma(r) = \sqrt{\frac{1}{N}\sum_{j=1}^N\left(r_j - \mu_r\right)^2}$, where $\mu_r$ is the population mean rate. We only use the 1st order approximation (Eq. 2.29) in the firing rate heterogeneity approximation; as shown in the Results section, the 2nd order formula is not useful for the large feedforward network.

    $$\sigma(r) \approx \sigma\left(\omega + q\bar{P}\bar{\Delta}\right). \tag{2.38}$$

    (All variables inside the standard deviation $\sigma(\cdot)$ are vectors across the population.)
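A quick Monte Carlo sketch of Eq. 2.38 for a homogeneous population (illustrative $\omega$, $\bar{P}$, $\bar{\Delta}$ values; $q$ sampled as in the Methods): since $\omega$ and the PRC are identical across cells, $\sigma(r)$ reduces to $|\bar{P}\bar{\Delta}|$ times the spread of $q$.

```python
import numpy as np

# Sketch of the 1st order heterogeneity estimate (Eq. 2.38) for a homogeneous
# population: identical omega and PRC, spread coming only from q_j ~ U(a, b).
# omega, Pbar, Delta_bar are illustrative values (assumptions).
rng = np.random.default_rng(2)
N = 730
omega, Pbar, Delta_bar = 10.0, 50.0, 2.0 / np.pi
a, b = 0.005, 0.015

q = rng.uniform(a, b, size=N)
r = omega + q * Pbar * Delta_bar   # per-neuron 1st order rate (Eq. 2.29)

sigma_r = r.std()
# Uniform(a, b) has standard deviation (b - a)/sqrt(12), so the theory gives:
sigma_theory = abs(Pbar * Delta_bar) * (b - a) / np.sqrt(12.0)
```
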

    We start with an illustrative example that highlights the importance of properly accounting for intrinsic dynamics. Consider the Morris-Lecar (ML) model with two different parameter sets (Figure 1). The voltage trajectories are similar and the phase planes have minor discrepancies that are most pronounced in the W variable, so one would naively think that the population responses would be similar. However, we will show that the population responses can be quite different with two simple homogeneous networks in Figure 2. Although experimentalists interested in systems neuroscience and coding tend to focus primarily on network coupling structure and synaptic dynamics, we show that cellular attributes alone can shape response statistics, and we subsequently provide a mathematical explanation of these observations.

    Figure 1.  The Morris-Lecar model with two different parameter sets. A) Voltage and potassium gating variable trajectories with no coupling and weak noise, $\varsigma = 3$ (see Eqs. 2.7–2.8 with $V_3 = 12$, $V_4 = 17$, $\phi = 0.06667$, $I_{app} = 47$). B) Corresponding phase plane for the variables in A, with an asymptotically stable limit cycle. C) Similar to A with no coupling and weak noise, $\varsigma = 3$, but with a different parameter set ($V_3 = 1$, $V_4 = 28$, $\phi = 0.042$, $I_{app} = 97.5$). D) Corresponding phase plane for the variables in C, again with an asymptotically stable limit cycle. The parameters $V_3$, $V_4$ and $\phi$ control the dynamics of the W variable.
    Figure 2.  Population firing rate for homogeneous Morris-Lecar neurons receiving (heterogeneous) feedforward synaptic input. A) The resulting population firing rates for two homogeneous populations (SNIC is from Figure 1A, B; Hopf is from Figure 1C, D). The two populations have vastly different mean firing rates and standard deviations (i.e., heterogeneity) with the same synaptic input despite similar trajectories (Figure 1). The gray whisker bars represent 3 standard deviations for visual purposes. B) Same data as in A except showing all $N = 730$ cells' firing rates. C) The bifurcation diagram of the SNIC model and of the Hopf model (D) indicate that the intrinsic dynamics are vastly different despite similar trajectories (the red star indicates the precise $I_{app}$ value used). The excitatory and inhibitory synaptic inputs are statistically identical for each of the two homogeneous populations, with a Poisson process input rate of $\lambda$ = 20 Hz and network heterogeneity $q_j \sim U(0.001, 0.003)$ (uniform distribution, mean = 0.002); see Table 1 and Eqs. 2.9–2.10.

    Consider two homogeneous populations of ML cells, each having different intrinsic dynamics (Figure 1 and Eqs. 2.7–2.8), which we call SNIC (saddle-node bifurcation on an invariant circle) and Hopf. We use these names because the stable limit cycles emerge from these respective types of bifurcations. Each homogeneous population, of size $N = 730$, receives feedforward input that is heterogeneous, modeled by a factor $q_j$ (i.e., network heterogeneity) that scales input from a feedforward population of $M = 1000$: $q_j\sum_{k=1}^{1000} w_{j,k}\,s_k(t)\,(V_j - E_{syn_k})$; cells in the target population are uncoupled. The coupling between the presynaptic and postsynaptic populations is randomly and independently chosen (Erdős–Rényi graph), with a connection probability of 0.3 for all $w_{j,k}$; all feedforward networks in this paper have this structure (except Figure 6). These two network models have vastly different population firing rate statistics with statistically identical input (Figure 2). Not only are the mean firing rates different, but the standard deviation of the population firing rate varies more with network type (SNIC or Hopf) than with the actual type of input (excitatory versus inhibitory, Figure 2A, B). The SNIC network has consistently larger firing rate heterogeneity than the Hopf network. The mean firing rate also changes more significantly between excitatory and inhibitory inputs for SNIC cells than it does for Hopf cells. This surprising result can be explained with our theoretical calculations (see below). Note that the bifurcation diagrams for the two networks (Figure 2C, D) are very different, providing a hint at a possible explanation: one network type has cells that undergo a SNIC bifurcation and the other a Hopf bifurcation with increased current injection.

    Biophysical neuron models often have many dimensions, and in a coupled network, analysis is often intractable because of the dimensionality and the nonlinear processing of presynaptic inputs. The assumption that both coupling and noise are weak, however, allows us to employ the phase reduction while still incorporating biophysically realistic dynamics. The phase-resetting curve (PRC) is a key entity in the phase reduction. We use the common convention for the PRC that an excitatory input at phase θ will delay the time until the next spike if $\Delta(\theta) < 0$ but will advance the time until the next spike if $\Delta(\theta) > 0$ (see Figure 3A). These advances or delays naturally impact the firing rate. We derive asymptotic approximations to the individual firing rates (to 1st order, Eq. 2.29). Specifically, the approximation to firing rate heterogeneity (standard deviation across the population) is:

    $\sigma(r) \approx |\bar{\Delta}|\,\sigma(\bar{P}q)$
    Figure 3.  A) Schematic showing the phase reduction method and the importance of the PRC (Δ) for characterizing how small positive perturbations in voltage alter the time to the next spike. B) The phase plane showing the (uncoupled) asymptotically stable limit cycles for 68 different Morris-Lecar parameter sets, modeling intrinsic heterogeneity. C) The corresponding 68 PRCs; note that the periods $T_j$ are similar. D) The integrals of the PRCs, $\bar{\Delta}_j := \int_0^{T_j} \Delta_j(\theta)\,d\theta$, have a wide range of values, with colors corresponding to C. The inset shows 2 extreme PRCs.
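The sign convention for the PRC can be illustrated with the "direct method": perturb the voltage at a chosen phase and record the shift of the next spike time. The sketch below uses a quadratic integrate-and-fire cell (a standard Type I caricature, not the Morris-Lecar model used in the paper; all parameter values are illustrative). Since a positive kick always shortens the time to threshold here, the measured PRC is positive at every phase, as expected for a Type I cell.

```python
import numpy as np

def qif_period(I=1.0, v_reset=-10.0, v_th=10.0, dt=1e-4, kick_at=None, kick=0.0):
    """Time from reset to threshold for dv/dt = I + v^2, with an optional
    voltage kick applied at time `kick_at` (direct-method PRC probe)."""
    v, t = v_reset, 0.0
    kicked = kick_at is None
    while v < v_th:
        if not kicked and t >= kick_at:
            v += kick
            kicked = True
        v += (I + v * v) * dt
        t += dt
    return t

T0 = qif_period()                       # unperturbed period
phases = np.linspace(0.05, 0.95, 10)
# Phase advance per unit perturbation; positive = next spike comes earlier
prc = [(T0 - qif_period(kick_at=p * T0, kick=0.01)) / 0.01 for p in phases]
print(round(T0, 3), np.round(prc, 3))
```

A Hopf (Type II) oscillator probed the same way would instead show a region of negative values, i.e., phases where the same excitatory kick delays the next spike.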

    This is derived from Eq. 2.38, noting that the frequencies ($\omega_j$), integrated PRCs ($\bar{\Delta} := \int_0^1 \Delta_j(\theta)\,d\theta$), and average input strength times noise ($\sigma(\bar{P}q)$) are all the same within a population. Thus, independent of input type, the consistently lower firing rate heterogeneity in the Hopf population occurs because the integral of the PRC, $\bar{\Delta}$, is smaller than in the SNIC network; recall that all other components of the prior equation are fixed. The $\bar{\Delta}$ are shown in Figure 3D – the smallest value (dark blue) is the Hopf and the largest value (dark red) is the SNIC. Note that how limit cycles emerge has deep implications for the shape of the PRC: a Hopf PRC generally has a large negative region whereas a SNIC PRC is mostly positive [26]. SNIC is often called Type I because the firing rate vs. input current curve is continuous, and Hopf is often called Type II because the firing rate vs. input current curve has a discontinuous jump.
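The scaling relation above can be checked in two lines. In the sketch below the $\bar{\Delta}$ values (3.0 vs. 0.5) and the range of $\bar{P}q_j$ are hypothetical stand-ins, not fitted quantities; within each homogeneous population only the input strength varies across cells, so the population standard deviation is exactly $|\bar{\Delta}|$ times the spread of $\bar{P}q_j$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Within one homogeneous population, omega and Delta-bar are fixed across
# cells; only the effective input strength P-bar * q_j varies.
omega = 1.0 / 85.0                                  # intrinsic frequency (period ~85 ms)
Pq = 2.0 * rng.uniform(0.001, 0.003, size=730)      # stand-in for P-bar * q_j

sigma_r = {}
for name, Dbar in [("SNIC-like", 3.0), ("Hopf-like", 0.5)]:  # hypothetical PRC integrals
    r = omega + Dbar * Pq           # 1st order approximation to each cell's rate
    sigma_r[name] = np.std(r)
    print(name, "std(r) =", round(sigma_r[name], 6),
          "|Dbar|*std(Pq) =", round(abs(Dbar) * np.std(Pq), 6))
```

Since $\mathrm{std}(\omega + \bar{\Delta}\,x) = |\bar{\Delta}|\,\mathrm{std}(x)$ for a constant $\omega$, the larger-$|\bar{\Delta}|$ population has proportionally larger firing rate heterogeneity for identical input statistics.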

    Next we consider an ML network with both intrinsic heterogeneity via the PRC and network heterogeneity in the same manner as before with a parameter $q_j$. Experimentalists often focus on either measuring intrinsic cellular attributes (uncoupled) or synaptic dynamics and connectivity structure because of feasibility in recordings and precise control. However, we show here that not only do both cellular and circuit attributes matter, but they have to be taken together and not separately because they interact nonlinearly to affect firing rate statistics. To this end, we developed 68 parameter sets to model intrinsic heterogeneity by systematically varying four parameters: the effective input current and three others related to the recovery variable dynamics (see Table 1). The effective input current is varied to ensure all cells have an approximate period of 85 ms when varying the other three parameters (see all limit cycles in Figure 3B and PRCs in Figure 3C). The PRC of a prototypical SNIC or Type I oscillator [26] would result from parameters similar to the following: $V_3 = 12$ mV, $V_4 = 17$ mV, and $\phi = 0.06667\ \mathrm{ms}^{-1}$. Meanwhile, the PRC of a prototypical Hopf or Type II oscillator [26] would result from parameters on the opposite extremes: $V_3 = 1$ mV, $V_4 = 31$ mV, and $\phi = 0.04\ \mathrm{ms}^{-1}$. We produced intermediate PRCs by uniformly selecting the three parameter values from the ranges bracketed by these prototypical values and then adjusting the current so that the periods remain approximately uniform. Figure 3 also shows the integrals of the 68 PRCs, $\bar{\Delta}$ (Figure 3D), which, as we have already seen, are important for firing rate heterogeneity. We remark that there are notable differences in both the PRCs and their integrals $\bar{\Delta}$.
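The sampling scheme can be sketched as follows, using only the prototypical extreme values quoted above (the per-cell re-tuning of the applied current to match the ~85 ms period requires integrating the full ML model and is only noted as a placeholder):

```python
import numpy as np

rng = np.random.default_rng(2)

# Prototypical extremes quoted in the text (SNIC/Type I vs. Hopf/Type II)
snic = dict(V3=12.0, V4=17.0, phi=0.06667)   # mV, mV, 1/ms
hopf = dict(V3=1.0,  V4=31.0, phi=0.04)

n_sets = 68
params = {k: rng.uniform(min(snic[k], hopf[k]), max(snic[k], hopf[k]), size=n_sets)
          for k in snic}

# The paper then re-tunes the applied current of each parameter set so every
# period is near 85 ms; that step needs the full ML model and is omitted here.
for k in params:
    print(k, round(params[k].min(), 3), round(params[k].max(), 3))
```

Each of the 68 draws interpolates independently in each parameter between the two prototypes, which is why the resulting PRCs span the Type I to Type II continuum.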

    The network in Figure 4 is constructed by duplicating the 68 unique parameter sets to obtain a larger network of N=730 with a wide range of intrinsic dynamics; the network is feedforward, with unmodeled 'presynaptic' neurons that provide only excitatory presynaptic input. This network has both intrinsic heterogeneity via the PRCs (Figure 3C) and network heterogeneity via the $q_j$ parameter that scales the network input.

    Figure 4.  A) Cell-to-cell comparison of the firing rates of the population (N=730) for Monte Carlo simulation of the ML model versus phase reduction theory, with $q_j$ and $\bar{\Delta}_j$ uncorrelated. Note that the 2nd order approximation closely follows the 1st order approximation. B) Cell-to-cell comparison of firing rates by Monte Carlo simulation of the ML model versus 1st order phase reduction theory at various correlations between $q_j$ and $\bar{\Delta}_j$. The range of firing rates grows larger as the correlation is increased from ρ=−0.7 to ρ=0.7. C) Mean absolute error of the approximations across five correlation levels. The 1st order approximation significantly reduces error compared to the 0th order approximation and slightly outperforms the 2nd order approximation. D) Firing rate heterogeneity (standard deviation) at five correlation levels. The phase reduction theory qualitatively captures the increase in firing rate heterogeneity as the correlation between $q_j$ and $\bar{\Delta}_j$ increases from ρ=−0.7 to ρ=0.7. Here $q_j \sim U(0.005, 0.015)$ (uniform distribution, mean = 0.01). Using the Brown-Forsythe test for equal variance on the Monte Carlo firing rates, we find that the standard deviation of the firing rates differs significantly between all levels of correlation (significance level α=0.01).

    We perform Monte Carlo simulations to accurately capture the firing rate heterogeneity. With both sources of heterogeneity chosen independently, we see significant heterogeneity (measured by the standard deviation across the entire population; Figure 4A) even though the intrinsic periods are roughly equal, $\omega_j \approx \omega$ (Figure 3C). Recall the firing rate heterogeneity approximation Eq. 2.38, $\sigma(\omega + q\bar{P}\bar{\Delta})$, which contains the term:

    $q\,\bar{P}\,\bar{\Delta}.$

    Our calculations predict that the relative level of firing rate heterogeneity can be modulated by changing the relationship, or correlation, between components of the intrinsic heterogeneity $\bar{\Delta}$ and the network heterogeneity $q$ (neglecting $\bar{P}$, the average synaptic input). Specifically, if $q_j$ is negatively correlated with $\bar{\Delta}_j$, the firing rate heterogeneity will be relatively smaller since the product of large and small numbers will not deviate much across the population $j = 1, \dots, N$. Contrast this with positive correlation: the product of large with large and small with small numbers will result in larger heterogeneity across the population. Note that given a vector of $\bar{\Delta}_j$ values, we can generate $q_j$ so that the vectors $q$ and $\bar{\Delta}$ have any specified (Pearson's) correlation coefficient; see [3,10]. Indeed, this prediction is precisely what is shown in the Monte Carlo simulations (Figure 4B on a cell-by-cell basis and Figure 4D); these trends are captured by both the 1st order (Eq. 2.29) and 2nd order (Eq. 2.37) approximations.
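One simple way to generate a vector with a prescribed sample correlation is a Gaussian-residual construction: orthogonalize a random vector against $\bar{\Delta}$ and mix the two components. This is a sketch, not the method of [3,10] — it matches the target mean and standard deviation of $U(0.005, 0.015)$ but not the uniform marginal, and the $\bar{\Delta}_j$ values here are hypothetical stand-ins. It nonetheless reproduces the predicted trend: $\sigma(q\bar{\Delta})$ grows as the correlation goes from negative to positive.

```python
import numpy as np

rng = np.random.default_rng(3)

def correlated_to(x, rho, rng):
    """Zero-mean, unit-norm vector whose sample Pearson correlation with x is exactly rho."""
    x_c = x - x.mean()
    u = rng.standard_normal(len(x))
    u_c = u - u.mean()
    u_c -= (u_c @ x_c) / (x_c @ x_c) * x_c       # Gram-Schmidt: remove the component along x
    return rho * x_c / np.linalg.norm(x_c) + np.sqrt(1 - rho**2) * u_c / np.linalg.norm(u_c)

Dbar = rng.uniform(-0.5, 2.0, size=730)          # hypothetical stand-in for the PRC integrals
qs, het = {}, {}
for rho in (-0.7, 0.0, 0.7):
    y = correlated_to(Dbar, rho, rng)
    q = 0.01 + (0.01 / np.sqrt(12)) * y / y.std()  # mean 0.01, std of U(0.005, 0.015)
    qs[rho], het[rho] = q, np.std(q * Dbar)
    print(rho, round(het[rho], 5))
```

Because an affine rescaling preserves Pearson correlation, the returned $q$ has exactly the requested sample correlation with $\bar{\Delta}$, and the cross term $2\mu_q\mu_{\bar{\Delta}}\,\mathrm{cov}(q,\bar{\Delta})$ in the variance of the product drives the ordering of the heterogeneity.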

    Our 1st order theory is valuable, producing significantly less error than the 0th order theory that only incorporates the intrinsic frequency (Figure 4C). We can see from Figure 4A, B that the error of the cell-to-cell firing rate approximations increases with the actual firing rate, which is not surprising since the deviations from the uncoupled system (of size $O(\varepsilon)$) are relatively large there. However, the trends are still captured qualitatively by the phase reduction theory, and the cell-to-cell error appears to be largely independent of correlation (Figure 4B). From Figure 4C, we see that the firing rate approximation is better with $\rho(\bar{\Delta}, q) < 0$, and the error increases with $\rho$. The 2nd order approximation (Eq. 2.37) does not improve the predictive power; in fact, it is slightly less accurate yet more complicated than the 1st order approximation.

    We next applied our theory to the same heterogeneous ML neurons but with presynaptic input from both excitatory and inhibitory conductance-based inputs. The Poisson process input rates are scaled to three levels: $\lambda_E = 0.2$ and $\lambda_I = 0.1$; $\lambda_E = 0.4$ and $\lambda_I = 0.2$; and $\lambda_E = 0.8$ and $\lambda_I = 0.4$. In Figure 5A and B, we show the cell-to-cell comparisons for the 1st order approximation (Eq. 2.29) against Monte Carlo simulations, including just the two extreme correlation values, ρ=−0.7 and ρ=0.7. We see from Figure 5C, D that the asymptotic approximation becomes very inaccurate as firing rates increase, but it still qualitatively captures the mean firing rate trends (Figure 5C) and the firing rate heterogeneity trends (Figure 5D). In Figure 5C, D, with a fixed correlation, our approximation captures how the firing rate changes with the presynaptic input rate (i.e., varying 1X, 2X, 3X with a fixed color); the approximation also captures how the mean and standard deviation both increase with correlation (fixing the synaptic input rate; gray lines connecting stars and squares).

    Figure 5.  A) Cell-to-cell comparison of firing rates for ρ=−0.7, now for a balanced network of E and I inputs with three different levels of presynaptic firing rates. The asymptotic theory becomes less accurate as the presynaptic firing rate increases but still qualitatively follows the ML model (Monte Carlo). B) Cell-to-cell comparison for ρ=0.7. The asymptotic theory qualitatively captures the increasing firing rate and firing rate variability. C) Comparison of mean firing rates at five correlation levels. Though the error increases as the synaptic input rates increase (1X to 3X), the relationship between correlation structure and mean firing rate is still captured by the approximation. D) Comparison of firing rate standard deviations at five correlation levels. Again, the error increases as the synaptic input rates increase, but the approximation still qualitatively predicts the increase in heterogeneity as the correlation increases from negative to positive. Here $q_j \sim U(0.005, 0.015)$ (uniform distribution, mean = 0.01). We again use the Brown-Forsythe test for equal variance to evaluate the Monte Carlo firing rates for all three balanced networks. At significance level α=0.01, we find statistically significant differences in firing rate standard deviation between all pair-wise comparisons of the five correlation levels for each network.
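The Brown-Forsythe test used here is Levene's test computed about the group medians, which is available in SciPy. The sketch below applies it to surrogate firing-rate samples (Gaussian stand-ins with different spreads, not the paper's Monte Carlo data), assuming `scipy` is installed:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(4)

# Surrogate firing rates for two correlation levels: same mean, different
# spread (stand-ins for the Monte Carlo rates, not the paper's data)
rates_lo = rng.normal(20.0, 2.0, size=730)   # e.g., a rho = -0.7 population
rates_hi = rng.normal(20.0, 5.0, size=730)   # e.g., a rho = +0.7 population

# Brown-Forsythe test = Levene's test with center='median'
stat, pval = levene(rates_lo, rates_hi, center="median")
print(f"W = {stat:.1f}, p = {pval:.2g}, reject equal variance at alpha=0.01: {pval < 0.01}")
```

Using the median as the center makes the test robust to the skewed, non-Gaussian firing-rate distributions typical of such populations, which is presumably why it is preferred here over the classical mean-centered Levene test.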

    Thus far, the 2nd order approximation to the firing rate has not been any more informative than the 1st order approximation. This result was initially quite puzzling because one would expect the 2nd order approximation to be more accurate than the 1st order, at least for small, perhaps minuscule, parameters. However, even with very small parameters, the 2nd order approximation does no better in any of the networks we considered. Upon further investigation, we find that both terms in the 2nd order approximation (see Eq. 2.37 or Eq. 2.40) are positive for all of the networks considered thus far, so no matter how small the parameters are, this observation will hold since the 1st order approximation already overestimates all firing rates. It would seem that the 2nd order approximation yields no information; however, note that it can likely account for differences in how the noise parameter $\sigma$ or other aspects of the PRC ($\Delta^2$, $\Delta'$, $\overline{\Delta^2} := \int \Delta^2(\theta)\,d\theta$) affect the firing rate. Thus, we demonstrate the value of this calculation on single phase oscillator SDEs for two cells, where we have more precise control over the components (i.e., $\omega_j$, $q_j\bar{P}_j$, $\sigma$) rather than having them endowed from the full coupled network. Consider the simpler equation (based on Eq. 2.13):

    $\dfrac{d\Theta_j}{dt} = 1 + \dfrac{\sigma^2}{2}\,\Delta_j\,\Delta_j' + Q\,\Delta_j + \sigma\,\Delta_j\,\xi_j(t)$ (3.1)

    where we set the intrinsic frequency to 1 and treat the network heterogeneity (formerly $q_j\bar{P}_j$) as a specified parameter $Q$. The 2nd order approximation is then (see Eq. 2.37)

    $r_j \approx 1 + Q\,\bar{\Delta}_j + \left[ \dfrac{\sigma^4}{4} \displaystyle\int_0^1 \Delta_j^2(\theta)\,\big(\Delta_j'(\theta)\big)^2\,d\theta + Q^2\left(\bar{\Delta}_j^2 - \overline{\Delta_j^2}\right) \right]$ (3.2)

    We select two cells whose PRC integrals ($\bar{\Delta}$) are similar but whose PRC dynamics are drastically different (Figure 6A). These two PRCs are obtained from the full ML network but scaled to have (arbitrary) units of phase. Consequently, while our 1st order theory (Eq. 2.29) would predict similar firing rates, our 2nd order theory (Eq. 2.40) might capture the discrepancies between the two cells via the additional terms. In the bar charts of Figure 6B, note that the $Q$ and $Q^2$ terms in Eq. 2.40 do not vary much, while there is a relatively large discrepancy in the $\sigma^4$ term. Thus, our theory indicates that the variation in the noise/coupling term $\sigma$, rather than the network strength $Q$, drives the difference in firing rate between the cells. As $\sigma$ increases, our 2nd order calculation predicts that Cell II will fire at a faster rate than Cell I (Figure 6D). Our Monte Carlo simulation corroborates the theory (Figure 6C), with Cell I's firing rate only mildly increasing with the increase in noise and coupling while Cell II's firing rate increases far more significantly. The theory qualitatively matches the results for both cells up to approximately $\sigma = 25$, at which point our assumption of weak noise and coupling is violated because $\sigma$ is no longer $O(\varepsilon)$. Therefore, the 2nd order firing rate approximation is valuable in capturing firing rate dynamics despite quantitative inaccuracies in our models.
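Eq. 3.1 is straightforward to integrate numerically. The Euler-Maruyama sketch below uses a hypothetical Type-I-like PRC, $\Delta(\theta) = a(1 - \cos 2\pi\theta)$ (not one of the two cells from Figure 6; all parameter values are illustrative), so that the integrals in Eq. 3.2 can be done in closed form and the 2nd order prediction checked against a simulated rate.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical Type-I-like PRC on the unit-period phase
a = 0.2
D  = lambda th: a * (1.0 - np.cos(2 * np.pi * th))        # Delta(theta) >= 0
Dp = lambda th: 2 * np.pi * a * np.sin(2 * np.pi * th)    # Delta'(theta)

Q, sigma = 0.1, 0.1
dt, T = 1e-3, 300.0
n = int(T / dt)

# Euler-Maruyama for Eq. 3.1; the (sigma^2/2) Delta Delta' drift is the
# noise-induced correction in the phase-reduced SDE
th = 0.0
dW = rng.standard_normal(n) * np.sqrt(dt)
for i in range(n):
    th += (1.0 + 0.5 * sigma**2 * D(th) * Dp(th) + Q * D(th)) * dt + sigma * D(th) * dW[i]
r_sim = th / T            # mean firing rate = phase accumulated per unit time

# Eq. 3.2 with the integrals evaluated analytically for this PRC:
# Dbar = a,  int_0^1 D^2 (D')^2 = (5/2) pi^2 a^4,  int_0^1 D^2 = (3/2) a^2
r_2nd = 1.0 + Q * a + (sigma**4 / 4) * 2.5 * np.pi**2 * a**4 + Q**2 * (a**2 - 1.5 * a**2)
print(round(r_sim, 4), round(r_2nd, 4))
```

At these weak-noise, weak-coupling values the correction terms are tiny and the 1st and 2nd order predictions nearly coincide; the $\sigma^4$ term only becomes visible at larger $\sigma$, mirroring the Cell I / Cell II comparison in Figure 6.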

    Figure 6.  A) The PRCs for both cells have similar integrals but demonstrate distinct behavior captured by the 2nd order theory. B) Only the term associated with $\sigma^4$ differs noticeably between cells (here $\sigma = 10$ and $Q = 0.5$); thus, noise drives the difference in firing rate. C) Monte Carlo simulation of firing rate as $\sigma$ varies. This difference in firing rate is not predicted by the 1st order theory. D) 2nd order theory for both cells, which qualitatively captures the faster firing rate of Cell II as noise increases.

    Analyzing the modulation of firing rate heterogeneity in model neural networks could potentially link neural attributes (cellular and circuit) to efficient coding. As we have seen in Figure 2, seemingly similar populations of neurons receiving the same synaptic input can have significantly different firing rate means and standard deviations. This statistical discrepancy demonstrates that we cannot ignore intrinsic heterogeneity. We then considered a feedforward network with combined intrinsic and network heterogeneity in order to evaluate their nonlinear interaction. Our asymptotic calculations, developed from phase reduction analysis, succinctly capture how these sources of heterogeneity lead to firing rate heterogeneity in several feedforward networks: with excitatory input only and with different levels of balanced excitatory and inhibitory input.

    Using our theory, we can effectively modulate the level of firing rate heterogeneity by altering the correlation structure between sources of intrinsic and network heterogeneity in a biophysical (Morris-Lecar) model. The elegant 1st order theory demonstrates the impact of phase-resetting curves on firing rate statistics and their nonlinear interaction with network input. Although the calculations are less accurate if the weak noise and coupling assumptions are violated, they still capture the overarching trends of intrinsically oscillating neurons in these feedforward networks. The 2nd order theory is not any more informative than the 1st order in our large feedforward networks. However, the 2nd order theory can account for the impact of noise on firing rate dynamics in single cell models (Figure 6), giving an overall more complete asymptotic theory. Unlike many common spiking models (e.g., leaky integrate-and-fire), we have incorporated biophysically realistic intrinsic dynamics with significant heterogeneity.

    The spike threshold of the cell (or equivalently, the intrinsic frequency of the phase oscillator) is a crucial attribute that controls output firing rates, since a low threshold (high intrinsic frequency) will directly cause high firing rates and vice-versa. Thus, a variable spike threshold or frequency is a common way to induce intrinsic heterogeneity [3,8,10,44]. Threshold heterogeneity has been shown experimentally in cortical cells [22] and has crucial effects in the electrosensory system [44] and others [50]. We did not focus on spike threshold as an intrinsic heterogeneous attribute because the interaction between network heterogeneity and heterogeneity in these variables is unsurprising and has been analyzed before [10]. A low effective network strength correlated with a high spike threshold would of course lead to lower firing rate heterogeneity, while the opposite would lead to higher firing rate heterogeneity. These results are fairly intuitive and consistent with our findings, so we focused on the relationship between effective network strength and the intrinsic phase-dependent response to inputs because those results are not obvious. Thus, accounting for a cell's response over an entire cycle, beyond just the spike frequency, should be a consideration for experimentalists seeking to explain different neural responses.

    Although the oscillating regime is not always applicable for neural dynamics, there are many times when neural networks can at least be well-modeled by oscillators (also see [64]). Neural oscillator models have already been successfully used to model the olfactory system [16,66] as well as breathing [24] and locomotion [58]. There are other mathematical frameworks that account for oscillatory dynamics [21,46,57] generated from excitable cells [39,48] that we have not used here because of our phase oscillator assumption, but may be effective in analyzing (intrinsic) firing rate heterogeneity.

    Neural networks were the underlying motivation for this work, but the general framework of using phase reduction methods to study output heterogeneity from nonlinear interaction of heterogeneous components might generalize to other fields. Indeed, there is a long history of using phase reduction methods in biology and other fields. Winfree pioneered the approach for biological systems of synchronized oscillators, with a particular focus on circadian rhythms [60,61]. Kuramoto's model of coupled oscillators applies to phenomena such as the Belousov-Zhabotinsky chemical reaction [17,36,37]. The call and response of two tree frogs, an anti-phase synchronized system, can also be modeled via phase oscillators [19]. In engineering, the leg movements of passively walking robots [42], electric circuits powering periodically flashing LEDs [20], and injection locking of neighboring electric circuits and lasers [63] all have limit-cycle oscillations.

    The exact relationship of various heterogeneous components, such as the correlation of $q$ and functions of $\Delta$, is not precisely known, so we systematically varied these entities in this modeling study. However, [34] labeled and classified 1600 neurons and their connectivity profiles in the mouse visual cortex in vitro, and some other experimental studies suggest that heterogeneous attributes are not random but have structure [11,12,62]. Thus, technological advances may lead to opportunities to experimentally measure these relationships. Our theoretical study explores the consequences of idealized relationships on spiking statistics to provide a better sense of how heterogeneity, and the relationships between forms of heterogeneity, shape neural responses. This theoretical work complements studies of how entities that are difficult to measure experimentally (i.e., intrinsic and network diversity) affect firing rate activity, which is easier to measure. Such theories are necessary to ultimately understand how the relationship of neural attributes affects neural coding.

    Cheng Ly is supported by a grant from the Simons Foundation # 355173.

    All authors declare no conflict of interest in any capacity.



    [1] J. Gjorgjieva, R. A. Mease, W. J. Moody, et al., Intrinsic neuronal properties switch the mode of information transmission in networks, PLoS Comput. Biol., 10 (2014), e1003962.
    [2] A. Kohn, R. Coen-Cagli, I. Kanitscheider, et al., Correlations and neuronal population information, Annu. Rev. Neurosci., 39 (2016), 237–256.
    [3] C. Ly and G. Marsat, Variable synaptic strengths controls the firing rate distribution in feedforward neural networks, J. Comput. Neurosci., 44 (2018), 75–95.
    [4] G. Marsat and L. Maler, Neural heterogeneity and efficient population codes for communication signals, J. Neurophysiol., 104 (2010), 2543–2555.
    [5] R. A. Mease, M. Famulare, J. Gjorgjieva, et al., Emergence of adaptive computation by single neurons in the developing cortex, J. Neurosci., 33 (2013), 12154–12170.
    [6] K. Padmanabhan and N. N. Urban, Intrinsic biophysical diversity decorrelates neuronal firing while increasing information content, Nature Neurosci., 13 (2010), 1276–1282.
    [7] M. I. Chelaru and V. Dragoi, Efficient coding in heterogeneous neuronal populations, P. Nat. Acad. Sci., 105 (2008), 16344–16349.
    [8] J. Mejias and A. Longtin, Optimal heterogeneity for coding in spiking neural networks, Phys. Rev. Lett., 108 (2012), 228102.
    [9] A. S. Ecker, P. Berens, A. S. Tolias, et al., The effect of noise correlations in populations of diversely tuned neurons, J. Neurosci., 31 (2011), 14272–14283.
    [10] C. Ly, Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity, J. Comput. Neurosci., 39 (2015), 311–327.
    [11] E. Marder, Variability, compensation, and modulation in neurons and circuits, P. Nat. Acad. Sci., 108 (2011), 15542–15548.
    [12] E. Marder and J.-M. Goaillard, Variability, compensation and homeostasis in neuron and network function, Nature Rev. Neurosci., 7 (2006), 563.
    [13] A. Roxin, N. Brunel, D. Hansel, et al., On the distribution of firing rates in networks of cortical neurons, J. Neurosci., 31 (2011), 16217–16226.
    [14] S. Ratté, S. Hong, E. De Schutter, et al., Impact of neuronal properties on network coding: roles of spike initiation dynamics and robust synchrony transfer, Neuron, 78 (2013), 758–772.
    [15] N.W. Schultheiss, A. A. Prinz and R. J. Butera, Phase Response Curves in Neuroscience: Theory, Experiment, and Analysis, vol. 6 of Springer Series in Computational Neuroscience, Springer, 2012.
    [16] S. D. Burton, G. B. Ermentrout and N. N. Urban, Intrinsic heterogeneity in oscillatory dynamics limits correlation-induced neural synchronization, J. Neurophysiol., 108 (2012), 2115–2133.
    [17] J. A. Acebrón, L. Bonilla, C. J. P. Vincente, et al., The Kuramoto model: A simple paradigm for synchronization phenomena, Rev. Mod. Phys., 77 (2005), 137–185.
    [18] J. Ahn, L. Kreeger, S. Lubejko, et al., Heterogeneity of intrinsic biophysical properties among cochlear nucleus neurons improves the population coding of temporal information, J. Neurophysiol., 111 (2014), 2320–2331.
    [19] I. Aihara, Modeling synchronized calling behavior of Japanese tree frogs, Phys. Rev. E, 80 (2009), 011918.
    [20] K. Arai and H. Nakao, Phase coherence in an ensemble of uncoupled limit-cycle oscillators receiving common Poisson impulses, Phys. Rev. E, 77 (2008), 036218.
    [21] P. Ashwin, S. Coombes and R. Nicks, Mathematical frameworks for oscillatory network dynamics in neuroscience, J. Math. Neurosci., 6 (2016), 2.
    [22] R. Azouz and C. M. Gray, Dynamic spike threshold reveals a mechanism for synaptic coincidence detection in cortical neurons in vivo, P. Nat. Acad. Sci., 97 (2000), 8110–8115.
    [23] A. Bremaud, D. West and A. Thomson, Binomial parameters differ across neocortical layers and with different classes of connections in adult rat and cat neocortex, P. Nat. Acad. Sci., 104 (2007), 14134–14139.
    [24] R. J. Butera Jr, J. Rinzel and J. C. Smith, Models of respiratory rhythm generation in the pre-Bötzinger complex. II. Populations of coupled pacemaker neurons, J. Neurophysiol., 82 (1999), 398–415.
    [25] F. P. Chabrol, A. Arenz, M. T. Wiechert, et al., Synaptic diversity enables temporal coding of coincident multisensory inputs in single neurons, Nature Neurosci., 18 (2015), 718.
    [26] B. Ermentrout, Type I membranes, phase resetting curves, and synchrony, Neural Comput., 8 (1996), 979–1001.
    [27] B. Ermentrout, Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students, vol. 14, SIAM, 2002.
    [28] G. B. Ermentrout and D. H. Terman, Mathematical Foundations of Neuroscience, vol. 35 of Interdisciplinary Applied Mathematics, Springer, 2010.
    [29] A. Georgopoulos, A. Schwartz and R. Kettner, Neuronal population coding of movement direction, Science, 233 (1986), 1416–1419.
    [30] J. Gjorgjieva, G. Drion and E. Marder, Computational implications of biophysical diversity and multiple timescales in neurons and synapses for circuit performance, Curr. Opin. Neurobiol., 37 (2016), 44–52.
    [31] P. Goel and B. Ermentrout, Synchrony, stability, and firing patterns in pulse-coupled oscillators, Physica D: Nonlinear Phenomena, 163 (2002), 191–216.
    [32] R. Grashow, T. Brookings and E. Marder, Compensation for variable intrinsic neuronal excitability by circuit-synaptic interactions, J. Neurosci., 30 (2010), 9145–9156.
    [33] G. Hermann and J. Touboul, Heterogeneous connections induce oscillations in large-scale networks, Phys. Rev. Lett., 109 (2012), 018702.
    [34] X. Jiang, S. Shen, C. R. Cadwell, et al., Principles of connectivity among morphologically defined cell types in adult neocortex, Science, 350 (2015), aac9462.
    [35] S. Kay, Fundamentals of Statistical Signal Processing, Volume 1: Estimation Theory, Prentice Hall PTR, 1993.
    [36] Y. Kuramoto, Chemical oscillations, waves and turbulence, Berlin: Springer, 1984.
    [37] Y. Kuramoto, Self-entrainment of a population of coupled non-linear oscillators, in International Symposium on Mathematical Problems in Theoretical Physics (ed. H. Araki), Springer Berlin Heidelberg, Berlin, Heidelberg, 1975, 420–422.
    [38] R. B. Levy and A. D. Reyes, Spatial profile of excitatory and inhibitory synaptic connectivity in mouse primary auditory cortex, J. Neurosci., 32 (2012), 5609–5619.
    [39] C. Ly and B. Doiron, Noise-enhanced coding in phasic neuron spike trains, PLoS ONE, 4 (2017), e0176963.
    [40] C. Ly and G. B. Ermentrout, Analysis of recurrent networks of pulse-coupled noisy neural oscillators, SIAM J. Appl. Dyn. Syst., 9 (2010), 113–137.
    [41] C. Ly, T. Melman, A. L. Barth, et al., Phase-resetting curve determines how bk currents affect neuronal firing, J. Comput. Neurosci., 30 (2011), 211–223.
    [42] T. McGeer et al., Passive dynamic walking, I. J. Robotic Res., 9 (1990), 62–82.
    [43] A. L. Meredith, S. W. Wiler, B. H. Miller, et al., BK calcium-activated potassium channels regulate circadian behavioral rhythms and pacemaker output, Nature Neurosci., 9 (2006), 1041.
    [44] J. Middleton, A. Longtin, J. Benda, et al., Postsynaptic receptive field size and spike threshold determine encoding of high-frequency information via sensitivity to synchronous presynaptic activity, J. Neurophysiol., 101 (2009), 1160–1170.
    [45] C. Morris and H. Lecar, Voltage oscillations in the barnacle giant muscle fiber, Biophys. J., 35 (1981), 193–213.
    [46] J. M. Newby and M. A. Schwemmer, Effects of moderate noise on a limit cycle oscillator: Counterrotation and bistability, Phys. Rev. Lett., 112 (2014), 114101.
    [47] S. Oprisan, A. Prinz and C. Canavier, Phase resetting and phase locking in hybrid circuits of one model and one biological neuron, Biophys. J., 87 (2004), 2283–2298.
    [48] A. Oswald, B. Doiron, J. Rinzel, et al., Spatial profile and differential recruitment of GABAB modulate oscillatory activity in auditory cortex, J. Neurosci., 29 (2009), 10321–10334.
    [49] D. Parker, Variable properties in a single class of excitatory spinal synapse, J. Neurosci., 23 (2003), 3154–3163.
    [50] N. Priebe and D. Ferster, Inhibition, spike threshold, and stimulus selectivity in primary visual cortex, Neuron, 57 (2008), 482–497.
    [51] H. Risken, The Fokker-Planck Equation: Methods of Solution and Applications, vol. 18 of Springer Series in Synergetics, Springer, 1989.
    [52] J. T. Schwabedal and A. Pikovsky, Phase description of stochastic oscillations, Phys. Rev. Lett., 110 (2013), 204102.
    [53] M. Shamir and H. Sompolinsky, Implications of neuronal diversity on population coding, Neural Comput., 18 (2006), 1951–1986.
    [54] A. A. Sharp, F. K. Skinner and E. Marder, Mechanisms of oscillation in dynamic clamp constructed two-cell half-center circuits, J. Neurophysiol., 76 (1996), 867–883.
    [55] K. M. Stiefel, B. S. Gutkin and T. J. Sejnowski, Cholinergic neuromodulation changes phase response curve shape and type in cortical pyramidal neurons, PLoS ONE, 3 (2008), e3947.
    [56] J.-N. Teramae, H. Nakao and G. B. Ermentrout, Stochastic phase reduction for a general class of noisy limit cycle oscillators, Phys. Rev. Lett., 102 (2009), 194102.
    [57] P. J. Thomas and B. Lindner, Asymptotic phase for stochastic oscillators, Phys. Rev. Lett., 113 (2014), 254101.
    [58] T. I. Tóth, M. Grabowska, N. Rosjat, et al., Investigating inter-segmental connections between thoracic ganglia in the stick insect by means of experimental and simulated phase response curves, Biol. Cybern., 109 (2015), 349–362.
    [59] S. J. Tripathy, K. Padmanabhan, R. C. Gerkin, et al., Intermediate intrinsic diversity enhances neural population coding, P. Natl. Acad. Sci. USA, 110 (2013), 8248–8253.
    [60] A. T. Winfree, Biological rhythms and the behavior of populations of coupled oscillators, J. Theor. Biol., 16 (1967), 15–42.
    [61] A. T. Winfree, The geometry of biological time, vol. 12, Springer Science & Business Media, 2001.
    [62] L. Yassin, B. L. Benedetti, J.-S. Jouhanneau, et al., An embedded subnetwork of highly active neurons in the neocortex, Neuron, 68 (2010), 1043–1050.
    [63] R. A. York and T. Itoh, Injection-and phase-locking techniques for beam control [antenna arrays], IEEE T. Microw. Theory, 46 (1998), 1920–1929.
    [64] R. Yuste, J. N. MacLean, J. Smith, et al., The cortex as a central pattern generator, Nature Rev. Neurosci., 6 (2005), 477.
    [65] V. Zampini, J. K. Liu, M. A. Diana, et al., Mechanisms and functional roles of glutamatergic synapse diversity in a cerebellar circuit, eLife, 5 (2016), e15872.
    [66] P. Zhou, S. Burton, N. Urban, et al., Impact of neuronal heterogeneity on correlated colored noise-induced synchronization, Front. Comput. Neurosci., 7 (2013), 113.
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
