This paper shows how recordings of gamma oscillations—under different experimental conditions or from different subjects—can be combined with a class of population models called neural fields and dynamic causal modeling (DCM) to distinguish among alternative hypotheses regarding cortical structure and function. This approach exploits inter-subject variability and trial-specific effects associated with modulations in the peak frequency of gamma oscillations. It draws on the computational power of Bayesian model inversion, when applied to neural field models of cortical dynamics. Bayesian model comparison allows one to adjudicate among different mechanistic hypotheses about cortical excitability, synaptic kinetics and the cardinal topographic features of local cortical circuits. It also provides optimal parameter estimates that quantify neuromodulation and the spatial dispersion of axonal connections or summation of receptive fields in the visual cortex.
This paper provides an overview of a family of neural field models that have been recently implemented using the DCM toolbox of the academic freeware Statistical Parametric Mapping (SPM). The SPM software is a popular platform for analyzing neuroimaging data, used by several neuroscience communities worldwide. DCM allows for a formal (Bayesian) statistical analysis of cortical network connectivity, based upon realistic biophysical models of brain responses. It is this particular feature of DCM—the unique combination of generative models with optimization techniques based upon (variational) Bayesian principles—that furnishes a novel way to characterize functional brain architectures. In particular, it provides answers to questions about how the brain is wired and how it responds to different experimental manipulations. The role of neural fields in this general framework has been discussed elsewhere, see [1]. Neural fields have a long and illustrious history in mathematical neuroscience, see e.g. [2] for a review. These models include horizontal intrinsic connections within layers or laminae of the cortical sheet and prescribe the time evolution of cell activity—such as mean depolarization or (average) action potential density.
Our overview comprises two parts: in the first part, we use neural fields to simulate neural activity and distinguish the effects of postsynaptic filtering on predicted responses in terms of synaptic rate constants that correspond to different timescales and distinct neurotransmitters. This application of neural fields follows the tradition of many studies, in which neural fields (and mean field models in general) have been used to explain cortical activity based on qualitative changes in model activity induced by changes in model parameters, like synaptic efficacy and connection strengths, see e.g. [3,4,5,6,7,8]. We will focus on the links between beta and gamma oscillations (mediated by the lateral propagation of neuronal spiking activity), using a field model that incorporates canonical cortical microcircuitry, where each population or layer has a receptor complement based on findings in cellular neuroscience.
In the second part of this paper, we follow a different route and use neural fields quantitatively—that is, to fit empirical data recorded during visual stimulation, see e.g. [9,10,11,12]. We focus on neuromodulatory effects and discuss particular applications of DCMs with neural fields to explain invasive and non-invasive data. We present two studies of spectral responses obtained from the visual cortex during visual perception experiments: in the first study, MEG data were acquired during a task designed to show how activity in the gamma band is related to visual perception. This experiment tried to determine the spectral properties of an individual's gamma response, and how this relates to underlying visual cortex microcircuitry. In the second study, we exploited high density—spatially resolved—data from multi-electrode electrocorticographic (ECoG) arrays to study the effect of varying stimulus contrast on cortical excitability and gamma peak frequency. These data were acquired at the Ernst Strüngmann Institute for Neuroscience, in collaboration with the Max Planck Society in Frankfurt.
Below, we first discuss the anatomy of the visual cortex in relation to gamma oscillations and the synaptic kinetics of underlying neuronal sources. We then turn to simulations of neural field models that can yield predictions of observed gamma activity.
The functional specialization of visual (and auditory) cortex is reflected in its patchy or modular organization—in which local cortical structures share common response properties. This organization may be mediated by a patchy distribution of horizontal intrinsic connections that can extend up to 8 mm, linking neurons with similar receptive fields: see e.g. [13,14,15]. The existence of patchy connections in different cortical areas (and species) has been established with tracer studies in man, macaque and cat: [14], [16] and [15] respectively. It has been shown that such connections can have profound implications for neural field dynamics: see [17]. The precise form of such connections may be explained by self-organization under functional and structural constraints; for example, minimizing the length of myelinated axons to offset the cost of transmitting action potentials [18,19]. Generic constraints of this sort have been used to motivate general principles of connectivity; namely, that evolution attempts to optimize a trade-off between metabolic cost and topological complexity [20]. In short, sensory cortices can be characterized by a patchy organization that is conserved over the cortex and which allows for both convergence and divergence of cortical connections. Given the conservation of this horizontal connectivity, synaptic densities can be approximated by isotropic distributions with an exponential decay over the cortical surface. These can be combined to form patchy distributions, using connectivity kernels with non-central peaks to model sparse intrinsic connections in cortical circuits that mediate horizontal interactions [21]. In this case, neurons are assumed to receive signals both from their immediate neighbors and remote populations that share the same functional selectivity [22]. 
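As an illustration of this construction, the sketch below builds a one-dimensional patchy kernel by superimposing isotropic, exponentially decaying components: a central component for connections to immediate neighbors, plus two symmetric non-central peaks for the sparse horizontal connections to remote populations with the same functional selectivity. The amplitudes, decay rate and patch offset are illustrative values, not estimates from the studies cited above.

```python
import numpy as np

def exp_kernel(x, amplitude, decay, center=0.0):
    """Isotropic synaptic density with exponential decay over the cortical
    surface, centred at `center` (distances in mm)."""
    return amplitude * np.exp(-decay * np.abs(x - center))

def patchy_kernel(x, a_local=1.0, a_patch=0.4, decay=1.0, offset=3.0):
    """Patchy kernel: a local (central) component plus symmetric non-central
    peaks modelling sparse horizontal connections to remote populations
    that share the same functional selectivity."""
    return (exp_kernel(x, a_local, decay)
            + exp_kernel(x, a_patch, decay, +offset)
            + exp_kernel(x, a_patch, decay, -offset))

# One-dimensional patch of radius 7 mm (cf. Table 1)
x = np.linspace(-7.0, 7.0, 281)
K = patchy_kernel(x)
```

The resulting kernel has a dominant central peak and secondary (non-central) peaks at the patch offset, so each population receives input both from its neighbors and from remote, like-tuned populations.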
We will see below that using such connectivity configurations, dynamic causal modelling of non-invasive MEG data can identify the parameters of lateral interactions within a bounded cortical patch or manifold in the primary visual cortex V1.
Lateral connections in V1 mediate the receptive fields of local populations, whose form is usually assumed to comprise an excitatory centre and an inhibitory surround. The particular configuration of these fields has been shown to be contrast-sensitive, see [23,24]. Also in [25], the authors recorded single units in V1 when monkeys fixated a point on the screen and were shown patches of drifting gratings at various contrasts and sizes. Using a difference of Gaussians model—to fit the spatial extent of contributions of the excitatory centre and inhibitory surround—they showed that at higher contrasts, the excitatory centre of receptive fields in V1 had a smaller stimulus summation field. The authors found that—at lower contrasts—V1 receptive fields were 2.3 times larger than at higher contrasts. These results suggest that receptive fields are not invariant to stimulus properties. Similarly, [26] recorded from superficial cells in monkey V1 while they presented oriented bars of varying lengths. They found that V1 receptive fields were on average about 4 times larger at low contrast compared to high contrast, when they were presented in isolation; and about twice as large when they were presented in the context of a textured background. The authors conclude that the excitatory-inhibitory balance between the classical and non-classical receptive field is not static but can be modulated by stimulus context. At high contrast, neurons are strongly inhibited when a stimulus falls outside the classical receptive field and encroaches on the non-classical receptive field. At lower contrast, V1 receptive fields have enhanced spatial summation, indicating that inhibition relative to excitation may be reduced.
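The difference-of-Gaussians area-summation model used in such analyses can be sketched as follows: the response to a centred stimulus is the integrated drive of an excitatory centre minus that of a broader inhibitory surround, so the stimulus radius maximizing the response (the summation field) shrinks as surround inhibition strengthens. The parameter values below are illustrative stand-ins for high- and low-contrast regimes, not fits to the data of [25] or [26].

```python
import math

def dog_response(radius, k_e, a, k_i, b):
    """Difference-of-Gaussians area-summation model: response to a centred
    stimulus of a given radius is excitatory-centre drive minus
    inhibitory-surround suppression (arbitrary units; a < b)."""
    return k_e * math.erf(radius / a) - k_i * math.erf(radius / b)

def summation_radius(k_e, a, k_i, b):
    """Stimulus radius maximizing the response (the summation field size),
    found on a simple grid over (0, 20]."""
    radii = [i * 0.01 for i in range(1, 2001)]
    return max(radii, key=lambda r: dog_response(r, k_e, a, k_i, b))

# Stronger surround inhibition (a stand-in for high contrast) shrinks the
# summation field; weaker relative inhibition (low contrast) enlarges it.
r_high = summation_radius(k_e=1.0, a=1.0, k_i=0.8, b=3.0)
r_low = summation_radius(k_e=1.0, a=1.0, k_i=0.3, b=3.0)
```

Reducing the surround gain enlarges the optimal summation radius, in qualitative agreement with the contrast-dependent receptive field sizes reported above.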
It is also known that gamma band oscillations (30-100 Hz) in V1 are sensitive to stimulus properties like contrast and stimulus size [27]. Crucially, with increasing contrast, gamma oscillations increase in peak frequency. In summary, experimental studies show that gamma peak frequency, stimulus contrast and the excitatory-inhibitory balance between the classical and non-classical receptive field are intimately related. Below, we will address these relationships using Bayesian model comparison of dynamic causal models that embody different hypotheses about contrast-specific changes in the connectivity architectures that underlie receptive fields and induced responses.
Neural fields are based on partial differential equations that embody two operations: the first pertains to a description of propagating afferent input—often in the form of a wave equation. The second transforms the input to postsynaptic depolarization and may come in two flavors: one is based upon convolution operators, while the second entails nonlinear equations, in which changes in postsynaptic potential involve the product of synaptic conductances and potential differences associated with different channel types. In the first (convolution) case, postsynaptic depolarization is modeled as a (generally linear) convolution of presynaptic spiking input that can be formulated in terms of linear differential equations. In the second (conductance) case, the equations of motion for neuronal states are necessarily nonlinear and second-order (with respect to hidden neuronal states), in accord with electromagnetic laws and are reminiscent of single cell conductance models [28].
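A minimal single-population sketch makes the distinction concrete: the convolution operator is a linear second-order filter of presynaptic rate, whereas the conductance operator multiplies a channel conductance by a potential difference and is therefore nonlinear in the hidden states. Rate constants, capacitance and reversal potentials follow Tables 1 and 2 (λ = 1/4 ms-1, C = 8, gL = 1, vL = -70 mV, vE = 60 mV); the constant scalar drive and single-population setting are simplifying assumptions for illustration.

```python
# Units: time in ms, voltage in mV.
dt = 0.1  # integration step (ms)

def convolution_step(v, vdot, m_in, kappa=0.25, H=8.0):
    """Alpha-kernel (convolution) synapse: depolarization is a linear,
    second-order filtering of the presynaptic rate m_in."""
    vddot = kappa * H * m_in - 2.0 * kappa * vdot - kappa**2 * v
    return v + dt * vdot, vdot + dt * vddot

def conductance_step(v, g, m_in, lam=0.25, v_E=60.0, v_L=-70.0,
                     g_L=1.0, C=8.0):
    """Conductance synapse: channels open in proportion to presynaptic
    input and close in proportion to the number already open; current is
    the product of conductance and a potential difference."""
    gdot = lam * (m_in - g)
    vdot = (g * (v_E - v) + g_L * (v_L - v)) / C
    return v + dt * vdot, g + dt * gdot

# Drive both kinds of synapse with a constant presynaptic rate for 400 ms
v1 = v2 = vdot1 = vdot2 = 0.0
vc, g = -70.0, 0.0
for _ in range(4000):
    v1, vdot1 = convolution_step(v1, vdot1, m_in=0.1)
    v2, vdot2 = convolution_step(v2, vdot2, m_in=0.2)
    vc, g = conductance_step(vc, g, m_in=1.0)
```

Doubling the drive exactly doubles the convolution response (steady state H·m/κ), whereas the conductance response saturates toward the excitatory reversal potential: its fixed point (g·vE + gL·vL)/(g + gL) is -5 mV for m_in = 1 but about 16.7 mV (not -10 mV) for m_in = 2.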
Gamma and beta band oscillations have been found to depend on neuronal interactions mediated by both gap junctions and synaptic transmission [29]. In the case of convolution models, these effects are modeled collectively by intrinsic connectivity constants. In the context of DCM, these quantities have been shown to accurately reflect neurotransmitter density and synaptic efficacy [30]. In [31], a third ligand-gated ion channel was included to model conductances controlled by the NMDA receptor. This is a nice example of a more biologically realistic description of neuronal interactions that considers neurotransmitter and other conductance-specific mechanisms. This biological realism entails the use of conductance based models, at the cost of increased computational demands relative to convolution models.
Here, we consider both classes of neural fields described above in a unified setting. These models describe spatially extended sources occupying bounded manifolds (patches) in different layers that lie beneath the cortical surface. The dynamics of cortical sources conform to integrodifferential equations, such as the Wilson-Cowan or Amari equations, where coupling is parameterised by matrix-valued coupling kernels—namely, smooth (analytic) connectivity matrices that also depend on time and space. Neural field models can be written in the following general form:
$$\dot V(x,t) = h\big(V(x,t),\, U(x,t),\, M(x,t),\, \theta\big), \qquad M(x,t) = G\int_{\Omega} K(x - x')\, F\big(V(x',\, t - u\,|x - x'|)\big)\, dx' \tag{1}$$
where interactions among populations—within and across macrocolumns—are described by the connectivity kernel K(x) (we will see examples of this kernel later). We also use θ to denote the parameters of the model.
Here, V(x, t) = [V1(x, t), …, Vn(x, t)]T ∈ Rn is an n × 1 vector of hidden neuronal states (one per layer) in Equation (1); both this vector and the input U(x, t) ∈ Rn are explicit functions of both space and time. M(x, t) ∈ Rn is an n × 1 vector of presynaptic firing rates, F: Rn → Rn is a nonlinear mapping from postsynaptic depolarization to presynaptic firing rates at each point on the cortical manifold and G is a column vector.
The function h in Equation (1) may involve synaptic conductances (nonlinear filtering) or describe a linear convolution of afferent firing rate M(x, t). This firing rate obeys the integrodifferential equation (second equality in Equation 1) and results from convolving afferent firing rate in space and time, while allowing for local conduction delays (parameterized by u, the inverse speed at which spikes propagate along connections). We now consider the alternative forms of models denoted by the function h (see Figures 1 and 2):
When h = h(V, U, M, gk , θ ) depends on synaptic conductances gk(x, t) modeling distinct membrane channel types, the resulting neural field model described by Equation (1) is formally related to conductance models that consider the geometry and topography of neuronal interactions [32,33,34,35]. In this case, the populations comprising the local cortical network described by Equation 1 can be viewed as a set of coupled RC circuits, where channels open in proportion to presynaptic input and close in proportion to the number already open (see Figure 1). Changes in conductance produce changes in depolarization in proportion to the potential difference between transmembrane potential and a reversal potential vk that depends upon the channel type. Open channels result in hyperpolarizing or depolarizing currents depending on whether the transmembrane potential is above or below the reversal potential. These currents are supplemented with exogenous current to produce changes in the transmembrane potential (scaled by the membrane capacitance C).
For a Jansen and Rit cortical source [36], the neuronal states at a location x on a cortical patch evolve according to the system of differential equations in Figure 1 [37], where we have assumed that the density distributions of axonal arbors decay in a simple exponential manner and the parameters aij and cij encode the strength (analogous to the total number of synaptic connections) and extent (spatial precision) of intrinsic connections between the cortical layers. Also, the rate constants λk characterize the response of each channel type to afferent input. The parameters of this model are provided in Table 1.
Parameter | Physiological interpretation | Value |
gL | Leakage conductance | 1 (mV) |
α13, α23, α31, α12, α32 | Amplitude of intrinsic connectivity kernels | (1/10,1,1/2,1/10,1)*3/10 |
cij | Intrinsic connectivity decay constant | 1 (mm-1) |
vL, vE, vI | Reversal potential | -70,60,-90 (mV) |
vR | Threshold potential | -40 (mV) |
C | Membrane capacitance | 8 (pFnS-1) |
S | Conduction speed | 0.3 (m/s)
λE, λI | Postsynaptic rate constants | 1/4, 1/16 (ms-1)
l | Radius of cortical patch | 7 (mm) |
Alternatively, one can assume that the function h does not depend on synaptic conductances, that is, h = h(V, U, M, θ). In this case, h prescribes a linear convolution of presynaptic activity to produce postsynaptic depolarization (see Figure 2). Firing rates within each sub-population are then transformed through the nonlinear voltage-firing rate function to provide inputs to other populations. These inputs are weighted by connection strengths. When h is given by an alpha kernel, the equations analogous to those in Figure 1 are shown in Figure 2, see also [38]. We used the same parameters for both models (see Table 1); additional parameters for the convolution based model are provided in Table 2 below.
(Other parameters as in Table 1) |
Parameter | Physiological interpretation | Prior mean |
HE, HI | Maximum postsynaptic depolarisations | 8 (mV) |
α13, α23, α31, α12, α32 | Amplitude of intrinsic connectivity kernels | (1/2, 1, 1/2,1,1)*3/10 |
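The convolution operator above can be checked numerically: the second-order equation v̈ = κHm − 2κv̇ − κ²v has the alpha kernel κHt·e^(−κt) as its impulse response, which peaks at t = 1/κ with value H/e. The sketch below verifies this with κ = 1/4 ms-1 and H = 8 mV (cf. Tables 1 and 2); the simple Euler scheme is an illustrative choice.

```python
import math

kappa, H = 0.25, 8.0       # rate constant (1/ms) and max depolarization (mV)
dt, n_steps = 0.01, 4000   # 40 ms of simulation

v, vdot = 0.0, 0.0
trace = []
for i in range(n_steps):
    m = 1.0 / dt if i == 0 else 0.0   # unit impulse of presynaptic rate
    vddot = kappa * H * m - 2.0 * kappa * vdot - kappa**2 * v
    v, vdot = v + dt * vdot, vdot + dt * vddot
    trace.append(v)

# Analytic alpha kernel: h(t) = kappa * H * t * exp(-kappa * t),
# with maximum H / e at t = 1 / kappa = 4 ms.
peak = max(trace)
t_peak = (trace.index(peak) + 1) * dt
```

The numerical impulse response reproduces the analytic peak (H/e ≈ 2.94 mV at 4 ms), confirming that the second-order state-space form and the alpha-kernel convolution are equivalent descriptions.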
The crucial difference between conductance and convolution models is that in conductance models, the parameters characterize the response of each population to distinct excitatory and inhibitory inputs: in other words, there is a set of synaptic rate constants (each corresponding to a distinct channel) associated with each population. The corresponding dynamics are defined over timescales that result from the parameters used and rest on the nonlinear interaction between membrane potential and conductance. These timescales are crucial in pharmacological manipulations that selectively affect one sort of current in a receptor specific fashion. This means that conductance-based models allow one to perform a detailed study of synaptic function at the level of specific neurotransmitter systems [39,40]. In contrast, in convolution models, passive membrane dynamics and dendritic effects are summarized by lumped parameters that model, respectively, the rate at which depolarization rises to its maximum and the synaptic efficacy (or maximum postsynaptic potential). However, this sort of description neglects the timescales of synaptic currents that are implicit in conductance based models: in the equations of Figure 1, these timescales are characterized in terms of the rate constants λ and the capacitance C; namely, channel response and membrane capacitance.
In the remainder of this section, we consider simulations of the conductance and convolution-based models above. Our aim here was to illustrate changes in responses with changes in the model parameters. A range of anaesthetics has been shown to increase inhibitory neurotransmission. This effect has been attributed to allosteric activators that sensitize GABAA receptors. In the context of our models, these effects correspond to an increase in the strength a32 of inhibitory input to pyramidal cells. We focused on spectral responses in the alpha and beta range, as this is the frequency range of interest for many applications involving drug effects. Spectral responses can be summarized in terms of transfer functions, as shown in Figure 3 for a range of physiological parameters.
These transfer functions can be obtained in a straightforward manner from the dynamical equations describing cortical activity (like the equations of Figures 1 or 2) and can be regarded as a representation of cortical dynamics in the Fourier domain. They represent the spectral density that would be seen if the models were driven by independent fluctuations. These characterizations assume a linearization around the fixed point and therefore do not capture the nonlinear behavior of these models. Under this linearization, a change in the parameters changes the system's flow (and implicitly the Jacobian and associated transfer functions); in the case of conductance-based models, it also changes the expansion point itself.
Transfer functions specify predicted cross-spectral densities associated with the neural field models. These functions depend on spatial and synaptic parameters that determine the spectral properties of observed activity (like peak frequency and power). The predicted responses become therefore functions of the model parameters and can be written in the following generic form:
$$g_{lm}(\omega, \theta) = \sum_{k} T_l(k, \omega)\, g_u(\omega, \theta)\, T_m(k, \omega)^{*} \tag{2}$$
where the indices l and m denote different sensors and * denotes the conjugate transpose matrix. The predicted cross spectra between two sensors are a function of the power of underlying neuronal fluctuations gu(ω, θ) and transfer functions Tl(k, ω) that depend upon model parameters θ. These parameters quantify the neuronal architecture mediating responses and the appropriate observation function. In the empirical studies below, we will use Equation (2) to generate model predictions of observed spectra of neuronal fluctuations that are modeled as a mixture of white and colored components. Here, we will consider a single sensor and cortical source driven by white noise input; in this case, the cross spectral density of Equation (2) reduces to a transfer function Tl(k, ω), which is the Fourier transform of the impulse response or first-order Volterra kernel associated with the equations in Figures (1) and (2). We used the transfer function to characterize LFP responses under different values of the inhibitory intrinsic connectivity, a32, and the excitatory time constant, 1/λ, of the inhibitory populations. These parameters were varied between 10% and 36% and between 10% and 270% respectively [37]. The resulting transfer functions are shown in Figure 3, which also reports the peak frequency of the spectral response as a function of the two model parameters (the peak frequency corresponds to maximum system response and is shown in image format).
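The following sketch illustrates how such transfer functions arise from a linearization: for a toy two-population (excitatory-inhibitory) system with Jacobian J, the first-order Volterra kernel in the Fourier domain is T(ω) = c(iωI − J)⁻¹b, and the spectrum under white-noise input is |T(ω)|². The coupling strengths and rate constant are illustrative choices that place the resonance in the gamma band; they are not estimates from the models of Figures 1 or 2.

```python
import numpy as np

# Linearized two-population dynamics around a fixed point, dx/dt = J x + b u;
# all numbers are illustrative.
kappa = 0.02          # leak / damping rate (per ms)
a_ei = a_ie = 0.25    # reciprocal E-I coupling strengths (per ms)
J = np.array([[-kappa, -a_ei],
              [a_ie, -kappa]])
b = np.array([1.0, 0.0])   # fluctuating input enters the E population
c = np.array([1.0, 0.0])   # the E population is observed

def transfer(omega):
    """Transfer function T(w) = c (iw I - J)^-1 b (omega in rad per ms)."""
    return c @ np.linalg.solve(1j * omega * np.eye(2) - J, b)

freqs_hz = np.linspace(1.0, 100.0, 991)
omegas = 2.0 * np.pi * freqs_hz / 1000.0
spectrum = np.array([abs(transfer(w)) ** 2 for w in omegas])
peak_hz = freqs_hz[np.argmax(spectrum)]   # spectral peak under white noise
```

The resonance sits near √(a_ei·a_ie)·1000/2π ≈ 40 Hz, so strengthening the reciprocal excitatory-inhibitory coupling shifts the peak to higher frequencies, mirroring the parameter-dependent peak shifts reported in Figure 3.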
A common profile of responses can be seen in both conductance and convolution models, with an increase in peak frequency for smaller inhibitory time constants. In other words, as the strength of inhibition increases, activity becomes progressively faster (power shifts to higher frequencies). Some of the responses obtained for the convolution model seem to suggest that the bandwidth changes substantially with centre frequency, which is not observed for gamma frequencies. Conversely, convolution and conductance mass models showed quantitatively different changes in power, with convolution models showing decreases with increasing inhibition, while conductance models show the opposite effect. In the next section, we use neural field models of this sort to predict empirical data, and use those data to provide estimates of the intrinsic connectivity generating observed spectral peaks.
In this section, we focus on convolution models and discuss two applications in the context of Bayesian modelling of neuroimaging data obtained via magnetoencephalography and electrocorticography. We first provide a brief overview of neural fields as probabilistic generative models of cortical responses:
The modeling of electrophysiological signals depends upon models of how they are generated in source space and how the resulting (hidden) neuronal states are detected by sensors. Following [41], we use a likelihood model relating hidden neuronal states to observed cross spectra gy over sensors that sample from the cortical surface. This likelihood model assumes the measured signal is a mixture of predicted spectra, channel noise and Gaussian observation noise.
$$g_y(\omega) = \hat g(\omega, \mu) + g_n(\omega, \mu) + \varepsilon_y, \qquad \varepsilon_y \sim \mathcal N\big(0, \Sigma(\omega, \lambda)\big) \tag{3}$$
The first equality expresses the data features gy(ω) as a mixture of predictions and prediction errors εy with covariance ∑(ω, λ). The predictions are a mixture of predicted cross spectra ĝ(ω, μ) and channel noise gn(ω, μ). Equation (3) provides the basis for our generative model and entails free parameters controlling the spectra of the inputs and channel noise {an, au, bn, bu} ⊂ θ. Gaussian assumptions about the observation error mean that we have a probabilistic mapping from the unknown parameters to observed (spectral) data features. Inversion of this model means estimating, probabilistically, the free parameters from the data.
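The structure of this likelihood model can be sketched as follows. The specific white-plus-1/f parameterization of the channel noise and the toy Lorentzian gamma peak standing in for the predicted cross spectra are assumptions for illustration only; in the DCM setting, the prediction would come from the transfer functions of Equation (2).

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.arange(4.0, 101.0)   # frequencies of interest (Hz)

# Toy predicted cross spectrum g_hat: a Lorentzian gamma peak at 40 Hz
g_hat = 1.0 / (1.0 + ((freqs - 40.0) / 10.0) ** 2)

# Channel noise g_n as a white-plus-1/f mixture (parameters {a_n, b_n}
# play the role of the free spectral parameters in the text; values are
# illustrative assumptions)
g_n = 0.05 + 0.5 / freqs

# Gaussian observation error eps_y and the resulting data features (Eq. 3)
eps = 0.01 * rng.standard_normal(freqs.size)
g_y = g_hat + g_n + eps
```

Model inversion would run this mapping in reverse: given observed g_y, estimate (probabilistically) the neuronal and noise parameters that best predict it.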
At the neuronal level, we consider a neural field model based on the canonical microcircuit. This model differs slightly from the Jansen and Rit model considered above [9,42], in that the pyramidal cell population is split into superficial and deep subpopulations. In the original model (not shown), only one pyramidal cell population was considered; however, there are important differences between superficial and deep pyramidal cells—in particular, they are thought to be the sources of forward and backward connections in cortical hierarchies. This distinction effectively requires the introduction of a fourth population and associated synaptic parameters. The modelling of distinct (sources of) forward and backward connections in cortical hierarchies has proved useful when trying to explain several aspects of distributed cortical computations in theoretical neurobiology [42,43]. As above, this model provides the particular form of the predicted spectra by prescribing the associated transfer functions, cf. [9].
In general, when mapping neuronal circuitry to a tractable model, we are bound by computational efficiency and numerical stability. This usually calls for a balanced co-activation of inhibitory and excitatory inputs to neuronal populations (this cortical gain control is thought to underlie real cortical function, see e.g. [44]). The anatomical motivation for the Jansen and Rit microcircuitry is described in [45]. Furthermore, the canonical microcircuit is a parsimonious (and dynamically stable) model that also draws from the theoretical constraints implied by the message passing of predictive coding. The precise form of this model rests on combining different populations to simplify the model and ensure efficient model inversion, see [42].
In brief, we assume that the measured signal is a mixture of predicted spectra, channel and observation noise. Then, Equation (3) prescribes a probabilistic mapping between model parameters (effective connectivity) and spectral characterizations (functional connectivity) that provides a useful link between the generative modeling of biophysical time series and dynamical systems theory. In this setting, neuronal field models prescribe a likelihood function; this function—taken together with the priors over the model parameters—specifies a dynamic causal model that can be inverted using standard variational procedures, such as Variational Laplace [46]. Variational Laplace approximates model evidence with a variational free energy and optimizes the posterior density over model parameters with respect to this free energy. The resulting (approximate) posterior density and (approximate) log-evidence are used for inference on parameters and models respectively. In other words, one can compare different models (e.g., neural field and mass models) using their log-evidence and make inferences about (changes in) model parameters, under the model selected.
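Given (approximate) log-evidences, Bayesian model comparison reduces to a softmax under uniform priors over models. The free-energy values below are purely illustrative; in practice they would be the variational free energies returned by model inversion.

```python
import math

def model_posteriors(log_evidences):
    """Posterior model probabilities from (approximate) log-evidences,
    assuming uniform priors over models (a softmax of free energies).
    Subtracting the maximum first avoids numerical underflow."""
    m = max(log_evidences)
    weights = [math.exp(f - m) for f in log_evidences]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative free energies for a field model vs. a mass model: a log
# Bayes factor of 3 favours the field model with probability ~0.95
F_field, F_mass = -1200.0, -1203.0
p_field, p_mass = model_posteriors([F_field, F_mass])
```

A log-evidence difference of about three is conventionally read as strong evidence; the corresponding posterior probability for the winning model is roughly 0.95.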
In motivating neural field models it is generally assumed that the visual cortex is tiled with macrocolumns and that the response of each local source or patch can be described in terms of a receptive field with rotational symmetry. This receptive field depends upon the topography of the neuronal connections and its orientation axis coincides with the coordinates of the field model. Below, we use parameters encoding the range of inhibitory and excitatory source components to characterize the spatial summation of receptive fields based on gamma activity. Furthermore, we will examine estimates of neuronal parameters to characterize the excitatory and inhibitory post-synaptic potentials (EPSPs, IPSPs) elicited by horizontal connections under different levels of visual contrast. The underlying assumption here is that the cortex acts as a dynamic filter of visual stimuli that shows rapid nonlinear adaptation, and in which local centre-surround interactions determine the frequency of gamma oscillations [26]. In a second empirical study, we will consider contrast-specific effects and simultaneously optimize responses obtained from all conditions across sites that show stimulus induced responses. The exact locations of each site on the cortical patch are also optimized during model inversion or fitting of the cross spectral data. These data inform the spatial sensitivity of the recording sites by allowing for conduction delays and other spatial effects, including the relative location of the sensors.
Our approach follows standard treatments of neural field models (see e.g. [47,48,49,50,51]). Our aim here is not to provide a systematic characterisation of the neural field model; for this, we refer the reader to earlier work, e.g. [52,53]. Instead, we discuss below two studies that use recordings of gamma oscillations to disclose the neurobiological underpinnings of visual processing. Both studies exploit insights from similar models like PING or ING (see e.g. [29,54,55,56]) to enable hypothesis testing and parameter inference: in our first study, we use the excitatory drive to inhibitory interneurons as a useful marker of cortical excitability; while in our second study, we look at trial-specific modulations of intrinsic connection strengths among pyramidal and spiny stellate cells and inhibitory interneurons under different levels of stimulus contrast.
The first study we review [9] used the generative model of spectral densities described above and recordings of gamma responses obtained from the visual cortex during a perception experiment [57]. Our focus was on mechanistic accounts of intersubject variability in peak gamma frequency. Data were obtained during a study of how activity in the gamma band is related to visual perception [58]. In this earlier work, the surface of the primary visual cortex was estimated using retinotopic mapping and was found to correlate with individual gamma peak frequencies. Interestingly, a similar visual MEG experiment found a correlation between gamma peak frequency and resting GABA concentration, as measured with MR spectroscopy [59].
In this study, we wanted to identify the origins of individual gamma peak variability and understand whether this variability can be attributed to cortical structure or function (the level of cortical inhibition as expressed by resting GABA concentration). The generative model we used (the connectivity kernel K(x) in Equation (1)) embodied a canonical microcircuit model [9,42] allowing for patchy horizontal connectivity, see [21,60]. This neural field DCM was particularly useful to discern the roles of cortical anatomy and function as it parameterizes both structure and functional excitation-inhibition balance. Our model included parameters describing the dispersion of lateral or horizontal connections—which we associate with columnar width—and kinetic parameters describing the synaptic drive that various populations are exposed to. These two attributes of the field model stand in for structural and functional explanations for observed spectral responses.
We first considered the relationship between individual differences in GABA concentration or peak gamma frequency [61,62] and parameters describing synaptic transmission: Muthukumaraswamy and colleagues have shown that individual differences in gamma oscillation frequency are positively correlated with resting GABA concentration in visual cortex—as measured with magnetic resonance spectroscopy. Furthermore, they showed that fMRI responses are inversely correlated with resting GABA and that gamma oscillation frequency is inversely correlated with the magnitude of the BOLD response. These results were taken to suggest that the excitation/inhibition balance in visual cortex is reflected in peak gamma frequencies at rest. We addressed this hypothesis by looking for correlations involving the connections to inhibitory interneurons. We found that posterior estimates of the excitatory connections between deep pyramidal cells and inhibitory interneurons correlated negatively with gamma peak (Pearson r = -0.37, p = 0.03, 30 d.f., two-tailed test). This confirms the hypothesis that inter-subject variation in inhibitory drive in visual cortex is associated with characteristic differences in the peak frequency of gamma oscillations—and provides a mechanistic link from a synaptic level description to spectral behavior that can be measured noninvasively.
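The correlation analysis reported here can be reproduced in outline as follows. The data below are synthetic stand-ins (the actual per-subject posterior estimates come from model inversion), but the statistics mirror the reported two-tailed Pearson test with 30 degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32  # e.g. 32 subjects -> n - 2 = 30 degrees of freedom

# Synthetic stand-ins for per-subject quantities (illustrative only):
# posterior estimates of the excitatory drive to inhibitory interneurons
drive = rng.normal(size=n)
# peak gamma frequency, negatively related to the drive, plus noise
gamma_peak = 55.0 - 3.0 * drive + rng.normal(scale=2.0, size=n)

# Pearson correlation, and the t statistic used for a two-tailed test
r = np.corrcoef(drive, gamma_peak)[0, 1]
t = r * np.sqrt((n - 2) / (1 - r**2))
print(f"r = {r:.2f}, t({n - 2}) = {t:.2f}")
```

In the synthetic example the correlation is negative by construction; in the study the sign and significance were, of course, an empirical finding.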
We then turned to the interplay between various mechanisms underlying inter-subject variability in gamma peak frequency. Previous studies suggested two possible causes: as described above, Muthukumaraswamy et al. (2009) suggested that peak gamma frequency is determined by the level of inhibition in V1, while in the study of [57] the authors found a correlation between V1 size and peak gamma frequency and suggested that the size of V1, and associated differences in microanatomy, could be determinants of peak gamma frequency. This suggests that both GABA concentration and V1 size can influence gamma frequency; however, these factors may or may not be causally linked.
Biophysical parameters estimated using our neural field DCM provide an opportunity to investigate alternative explanations of phenotypic variation, like gamma peak frequency: both the excitatory drive to inhibitory neurons (a23) and macrocolumn width could mediate differences in peak gamma frequency. We therefore looked at the correlations, over subjects, between peak gamma frequency (f), V1 surface area and the posterior estimates of these parameters. These correlations are summarized in Table 3:
Table 3. Pearson correlations between macrocolumn width, V1 surface area, the excitatory drive to inhibitory interneurons (a23) and peak gamma frequency (f); significance values in parentheses.
| | Width | V1 size | a23 | f |
| Width | 1 | | | |
| V1 size | 0.364 (p = 0.02) | 1 | | |
| a23 | -0.32 (p = 0.037) | -0.099 (p = 0.295) | 1 | |
| f | 0.271 (p = 0.06) | 0.286 (p = 0.056) | -0.379 (p = 0.016) | 1 |
Interestingly, the partial correlation between a23 and gamma peak remained significant when controlling for V1 size and macrocolumn width (r = -0.332, p = 0.037). This suggests that the correlation between gamma peak and V1 inhibition cannot be accounted for completely by the spatial (structural) parameters, at either the microscopic or macroscopic level.
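A partial correlation of this kind can be computed by correlating the residuals left after regressing the control variables out of both quantities. The sketch below uses synthetic data with a shared confounder, purely to illustrate the computation; it shows the opposite pattern to the study's finding (here the raw correlation vanishes once the confounder is controlled, whereas above the correlation survived):

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation between x and y after regressing the control
    variables out of both (least-squares residuals)."""
    Z = np.column_stack([np.ones(len(x))] + list(controls))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Illustrative check: x and y share a confounder z, so the raw
# correlation is high, but the partial correlation (controlling z) is ~0.
rng = np.random.default_rng(1)
z = rng.normal(size=2000)
x = z + 0.3 * rng.normal(size=2000)
y = z + 0.3 * rng.normal(size=2000)
raw = np.corrcoef(x, y)[0, 1]
part = partial_corr(x, y, [z])
print(f"raw r = {raw:.2f}, partial r = {part:.2f}")
```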
Some of the correlations reported in Table 3 are weak and only reached trend significance. This might relate to the fact that MEG data have low spatial resolution, and that Bayesian inference fails to find more evidence for field models relative to their neural mass counterparts, see [9]. The lead fields inherent in non-invasive MEG recordings are necessarily broad and suppress temporal dynamics that are expressed in high spatial frequencies; see [9] for further discussion.
In our second study [10], neuronal signals were recorded while a monkey first fixated its gaze (in a window with 1° radius) and later released a lever when it detected a color change at fixation. During the fixation period, a stimulus was presented at 4° eccentricity. Crucially, the contrast of the stimulus varied between trials. Details of the task and the surgical procedures are described elsewhere [63,64]. We analyzed the activity before the fixation color change and the subsequent behavioral response. We obtained model predictions using a canonical microcircuit field model with local exponentially decaying synaptic densities, see also [42]. Gamma peak frequency fell between 48 and 60 Hz, and contrast levels varied between 0 and 82% of the maximum contrast that could be used. There were nine contrast conditions, of which the first two elicited no prominent gamma peak. Bipolar differences were extracted from ECoG sensors covering a large part of the primate brain.
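Extracting a gamma peak from a recording reduces to locating the maximum of the power spectrum within the gamma band. A minimal sketch on synthetic data (a 55 Hz rhythm in noise; the actual analyses used ECoG cross-spectra):

```python
import numpy as np

# Synthetic 'recording': a 55 Hz gamma rhythm buried in noise
# (illustrative only; stands in for a bipolar ECoG signal).
fs = 1000.0                      # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)     # 10 s of data
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 55.0 * t) + 0.5 * rng.normal(size=t.size)

# Periodogram via the FFT; find the peak within the gamma band (30-80 Hz)
freqs = np.fft.rfftfreq(x.size, 1 / fs)
power = np.abs(np.fft.rfft(x))**2
band = (freqs >= 30) & (freqs <= 80)
peak_hz = freqs[band][np.argmax(power[band])]
print(f"gamma peak: {peak_hz:.1f} Hz")
```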
Our goal in this study was to delineate the mechanisms underlying contrast specific effects. We tested these effects by allowing for trial-specific changes in model parameters. In particular, we considered a family of DCMs that allowed for contrast dependent changes in the strength of recurrent connections, the strength of intrinsic connections or the extent of intrinsic connections (or combinations thereof) and asked which model best explains spectral responses across contrast conditions, see Figure 4. The ability of each model to explain induced responses was evaluated in terms of its Bayesian model evidence, which provides a principled way to evaluate competing or complementary models or hypotheses.
The first set of parameters comprised the gains of neural populations that are thought to encode precision errors. These gain parameters correspond to the precision (inverse variance or reliability) of prediction errors in predictive coding. This fits comfortably with neurobiological theories of attention under predictive coding and the hypothesis that contrast manipulation amounts to changes in the precision of sensory input, as reviewed in [65]. These changes affect hierarchical processing in the brain and control the interaction between bottom up sensory information and top down modulation: here, we focus on the sensory level of the visual hierarchy.
The second set of parameters comprised intrinsic connection strengths among pyramidal cells, spiny stellate cells and inhibitory interneurons. This speaks to variations in cortical excitability—modulating the contributions of different neuronal populations under varying contrast conditions—with a fixed dispersion of lateral connections. This hypothesis fits well with studies focusing on the activation of reciprocally connected networks of excitatory and inhibitory neurons, including PING/ING models [66,67,68].
The last set included the spatial extent of excitatory and inhibitory connections. From single cell recordings, it is known that the boundary between the classical and non-classical receptive field depends crucially upon contrast [26]. As stimulus contrast is decreased, the excitatory region becomes larger and the inhibitory flanks become smaller. The hypothesis here rests on a modulation of the effective spatial extent of lateral connections; effective extent refers to the extent of the neuronal populations that subserve stimulus integration—and that are differentially engaged depending on stimulus properties—as opposed to the anatomical extent of connections.
Model comparison uses the evidence for each model given data from all conditions simultaneously. The models we compared allowed only a subset of parameters to vary with contrast level, where each model corresponds to a hypothesis about contrast-specific effects on cortical responses. Specifically, the log of each contrast dependent parameter was allowed to change linearly with the contrast level, such that the effect of contrast was parameterized by the sensitivity of contrast dependent parameters to contrast level.
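This log-linear parameterization can be written compactly: each contrast-dependent parameter is scaled by the exponential of a sensitivity term times the contrast level. The values below are illustrative, not estimates from the study:

```python
import numpy as np

# Log-linear contrast parameterization: log(theta) changes linearly with
# contrast, i.e. theta(c) = theta0 * exp(beta * c), where beta plays the
# role of the parameter's sensitivity to contrast. Values illustrative.
def contrast_modulated(theta0, beta, contrast):
    return theta0 * np.exp(beta * np.asarray(contrast))

contrasts = np.linspace(0.0, 0.82, 9)           # nine contrast levels (0-82%)
theta = contrast_modulated(1.0, 0.5, contrasts)  # assumed theta0 and beta
print(np.round(theta, 3))
```

A positive sensitivity yields a parameter that grows multiplicatively with contrast; a sensitivity of zero recovers the contrast-independent model.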
In summary, we considered three putative mechanisms of gain control, corresponding to models with and without contrast dependent effects on: (i) recurrent connections of neuronal populations, (ii) horizontal connections between excitatory and inhibitory pools of neurons and (iii) the spatial dispersion of horizontal connections. These three model factors lead to eight candidate models (or seven, excluding the model with no contrast dependent effects). Figure 4 shows three of the models that we considered. The seven (non-null) candidate models include the models depicted in Figure 4 and their combinations. The model with contrast dependent effects on all parameters (model 7) had the highest evidence, with a relative log evidence difference of 17 with respect to the model that allows for modulations of all but the extent parameters (model 6). In these plots, the first three models correspond to hypotheses (i), (ii) and (iii), while models 4 and 5 correspond to the combinations (i)+(iii) and (ii)+(iii). In the right panel, we see the corresponding model posterior probabilities (assuming uniform priors over all models considered). This suggests that we can be almost certain that all three synaptic mechanisms contribute to the formation of the cross spectral density features observed under different levels of visual contrast (given the models evaluated).
Note that Figure 4 shows the relative log-evidence for each model, normalised with respect to the lowest evidence; in this instance, the evidence of model (iii). A relative log evidence of three is usually taken to indicate a winning model; this corresponds to a Bayes factor of approximately 20:1 (since e³ ≈ 20). This is based on a free energy bound on the model evidence; this quantity can be written as the difference of a term quantifying accuracy minus a term quantifying complexity (see e.g. [46]). This characterization of model evidence therefore results in a winning model that is both accurate and parsimonious.
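Under uniform priors, relative log evidences convert to posterior model probabilities via a softmax; a log evidence difference of three then yields the roughly 20:1 Bayes factor mentioned above. A minimal sketch:

```python
import numpy as np

def model_posteriors(log_evidences):
    """Posterior model probabilities under uniform priors, computed as a
    numerically stable softmax of the log evidences."""
    le = np.asarray(log_evidences, dtype=float)
    w = np.exp(le - le.max())   # subtract max for numerical stability
    return w / w.sum()

# Two models whose log evidences differ by 3 (the conventional threshold)
p = model_posteriors([0.0, 3.0])
bayes_factor = np.exp(3.0)      # the corresponding evidence ratio, ~20:1
print(np.round(p, 3), round(bayes_factor, 1))
```

The same function applied to all seven (or eight) candidate models gives the posterior probabilities of the kind shown in the right panel of Figure 4.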
Having identified the best model, we then examined its parameter estimates: the maximum a posteriori estimates are shown in Figure 5 (top), while estimates of their contrast sensitivity are shown in the lower panel. These results suggest that the largest contrast modulations are observed in (log scale parameter) estimates of connections to and from the superficial pyramidal cells. In particular, the largest variation over contrast is observed for the parameter that corresponds to the gain associated with superficial pyramidal cells. Here, an increase in contrast reduces the inhibitory self connection leading to a disinhibitory increase in gain. This result is in accord with the predictive coding formulation above, where the gain of superficial pyramidal cells is thought to encode the precision of prediction errors. As contrast increases, confidence in (precision of) sensory information rises. In predictive coding this is thought to be accompanied by an increase in the weighting of sensory prediction errors that are generally thought to be reported by superficial pyramidal cells.
The contrast dependent changes in the extent of horizontal connectivity suggest an effective shrinking of excitatory horizontal influences and an increase in inhibitory effects. This is precisely what would be expected on the basis of contrast dependent changes in receptive field size. As contrast increases, receptive field sizes shrink—effectively passing higher frequency information to higher levels. In other words, the neural field has a more compact spatial summation that depends upon gain control and the balance of horizontal excitatory and inhibitory connections. For more details, we refer the reader to [10].
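The shrinkage of spatial summation described above can be caricatured with a difference-of-Gaussians profile whose excitatory width narrows at high contrast. The parameters below are illustrative, not estimates from the study:

```python
import numpy as np

def summation_half_width(sigma_e, sigma_i, x=np.linspace(0, 10, 10001)):
    """Distance at which a difference-of-Gaussians profile first drops
    below half its central value (a crude 'summation field' size)."""
    rf = np.exp(-x**2 / (2 * sigma_e**2)) - 0.5 * np.exp(-x**2 / (2 * sigma_i**2))
    below = np.nonzero(rf < 0.5 * rf[0])[0]
    return x[below[0]]

# Illustrative widths (mm): shrinking the excitatory extent at high
# contrast narrows the summation field, with inhibitory extent fixed.
w_low = summation_half_width(sigma_e=2.0, sigma_i=4.0)   # low contrast
w_high = summation_half_width(sigma_e=1.0, sigma_i=4.0)  # high contrast
print(f"half-width: low contrast {w_low:.2f} mm, high contrast {w_high:.2f} mm")
```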
In this paper, we have considered neural field models in the light of a Bayesian framework for evaluating model evidence and obtaining parameter estimates using invasive and non-invasive recordings of gamma oscillations. We first focused on model predictions of conductance and convolution based field models and showed that these can yield spectral responses that are sensitive to biophysical properties of local cortical circuits, like cortical excitability and synaptic filtering; we also considered two different mechanisms for this filtering: a nonlinear mechanism involving specific conductances, and a linear convolution of afferent firing rates producing postsynaptic potentials.
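The linear convolution mechanism amounts to low-pass filtering of afferent firing rates by an alpha-function kernel, whose transfer function can be evaluated directly. The amplitude and rate constant below are illustrative (loosely in the range of the prior values tabulated for the model), not fitted estimates:

```python
import numpy as np

# Convolution-based synaptic filtering: afferent firing is convolved with
# an alpha-function kernel h(t) = H * k * t * exp(-k * t), whose Laplace
# transform is H(s) = H * k / (s + k)^2. Values below are illustrative.
H = 8.0          # maximum postsynaptic depolarisation (mV)
k = 1.0 / 0.004  # rate constant (1/s), i.e. a 4 ms time constant

freqs = np.array([0.0, 10.0, 40.0, 100.0])   # Hz
omega = 2 * np.pi * freqs
gain = np.abs(H * k / (1j * omega + k)**2)   # |transfer function|

# Low-pass behavior: the gain falls monotonically with frequency
print(np.round(gain, 4))
```

This makes explicit why the kernel's rate constant shapes the spectral response: slower synapses (smaller k) attenuate gamma-band input more strongly.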
We then turned to empirical MEG data and looked for potential determinants of the spectral properties of an individual's gamma response, and how they relate to underlying visual cortex microcircuitry and excitation/inhibition balance. We found correlations between peak gamma frequency and cortical inhibition (parameterized by the excitatory drive to inhibitory cell populations) over subjects. This constitutes a compelling illustration of how non-invasive data can provide quantitative estimates of the spatial properties of neural sources and explain systematic variations in the dynamics those sources generate. Furthermore, the conclusions fitted comfortably with studies of contextual interactions and orientation discrimination suggesting that local contextual interactions in V1 are weaker in individuals with a large V1 area [69,70].
Finally, we used dynamic causal modeling and neural fields to test specific hypotheses about precision and gain control based on predictive coding formulations of neuronal processing. We exploited finely sampled electrophysiological responses from awake-behaving monkeys and an experimental manipulation (the contrast of visual stimuli) to look at changes in the gain and balance of excitatory and inhibitory influences. Our results suggest that increasing contrast effectively increases the sensitivity or gain of superficial pyramidal cells to inputs from spiny stellate populations. Furthermore, they are consistent with intriguing results showing that the receptive fields of V1 units shrink with increasing visual contrast.
The approach we have illustrated in this paper rests on neural field models that are optimized in relation to observed gamma responses from the visual cortex and are - crucially - compared in terms of their evidence. This provides a principled way to address questions about cortical structure, function and the architectures that underlie neuronal computations.
The Wellcome Trust funded this work.
All authors declare no conflicts of interest in this paper.
Parameter | Physiological interpretation | Value |
gL | Leakage conductance | 1 (mV) |
α13, α23, α31, α12, α32 | Amplitude of intrinsic connectivity kernels | (1/10, 1, 1/2, 1/10, 1) × 3/10 |
cij | Intrinsic connectivity decay constant | 1 (mm-1) |
vL, vE, vI | Reversal potentials | -70, 60, -90 (mV) |
vR | Threshold potential | -40 (mV) |
C | Membrane capacitance | 8 (pFnS-1) |
S | Conduction speed | 0.3 (m/s) |
λE, λI | Postsynaptic rate constants | 1/4, 1/16 (ms-1) |
l | Radius of cortical patch | 7 (mm) |
(Other parameters as in Table 1) | ||
Parameter | Physiological interpretation | Prior mean |
HE, HI | Maximum postsynaptic depolarisations | 8 (mV) |
α13, α23, α31, α12, α32 | Amplitude of intrinsic connectivity kernels | (1/2, 1, 1/2, 1, 1) × 3/10 |