Poster Abstracts

Poster 1: Signal-to-noise ratio in V1 population activity

Andreea Lazar (1,2,3), Wolf Singer (1,2) and Danko Nikolic (1,2)

1. Ernst Strungmann Institute (ESI) for Neuroscience in Cooperation with Max-Planck Society, 2. Frankfurt Institute for Advanced Studies, 3. J.W. Goethe University, Frankfurt

A hypothetical decision unit in a higher visual area should be able to decode relevant information about a stimulus from the input it receives from V1 neurons. The V1 population response to a visual stimulus varies from trial to trial around a mean value. For efficient coding, this trial-to-trial variability should be sufficiently low, i.e. lower than the differences between the mean responses to different stimuli. We assessed the ability of a linear classifier to discriminate between different stimuli based on short segments of population data recorded simultaneously from area 17 of anesthetized cats. Additionally, we employed a decoding technique to reconstruct the visual stimuli (white letters on a black background) from ensemble responses. Interestingly, stimulus exposure increased the amount of stimulus-specific information conveyed by V1 populations: late trials in a session had a higher signal-to-noise ratio than early trials. This effect could not be trivially explained by fluctuations in reduced population measures such as mean spike rate or mean variability. The results suggest that, in addition to the well-studied parametric features of single cells in primary visual cortex, it is important to study the rich dynamics of simultaneously recorded neuronal populations, which may encode stimulus properties in a distributed fashion that is difficult to infer from the responses of individual neurons.
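As an illustration of the decoding logic described above, the following sketch uses a nearest-centroid linear readout on simulated two-stimulus population data (all parameters and the classifier choice are hypothetical, not the authors' actual setup). It shows how lower trial-to-trial variability relative to the difference in mean responses yields better linear discrimination:

```python
import random

random.seed(0)

N_UNITS = 20    # hypothetical number of recorded V1 units
N_TRIALS = 100  # trials per stimulus

# Mean population response ("signal") for two hypothetical stimuli
mu_a = [random.gauss(5.0, 1.0) for _ in range(N_UNITS)]
mu_b = [random.gauss(5.0, 1.0) for _ in range(N_UNITS)]

def noisy_trial(mu, noise_sd):
    """Single-trial response: mean rate plus trial-to-trial variability."""
    return [m + random.gauss(0.0, noise_sd) for m in mu]

def decoding_accuracy(noise_sd):
    """Fraction of trials correctly classified by a nearest-centroid
    (linear) readout that knows the true mean responses."""
    correct = 0
    for _ in range(N_TRIALS):
        for mu, label in ((mu_a, 0), (mu_b, 1)):
            r = noisy_trial(mu, noise_sd)
            d_a = sum((x - m) ** 2 for x, m in zip(r, mu_a))
            d_b = sum((x - m) ** 2 for x, m in zip(r, mu_b))
            correct += ((0 if d_a < d_b else 1) == label)
    return correct / (2 * N_TRIALS)

# Lower trial-to-trial variability -> higher signal-to-noise -> better decoding
acc_noisy = decoding_accuracy(noise_sd=4.0)
acc_quiet = decoding_accuracy(noise_sd=1.0)
print(acc_noisy, acc_quiet)
```

The nearest-centroid rule is equivalent to a linear decision boundary between the two mean responses, which is why it stands in here for the (unspecified) linear classifier of the study.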

Poster 2: Self-organized learning and inference explain key properties of neural variability

Christoph Hartmann, Andreea Lazar, Jochen Triesch

Spontaneous activity and trial-to-trial variability in neocortical recordings have long been assumed to reflect intrinsic cortical noise. However, it has been shown that spontaneous activity is highly structured in space and time (1,2), contributes to the trial-to-trial variability of subsequent evoked sensory responses (3), and predicts subsequent perceptual decisions (4). Furthermore, it has been demonstrated that variability in neural activity decreases at stimulus onset (5). These findings are consistent with the "sampling hypothesis" (6), according to which instantaneous population activity corresponds to samples from a posterior distribution over the represented variables.

In this study, we demonstrate that many of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN; 7). The network consists of excitatory and inhibitory McCulloch-Pitts neurons that learn a model of structured input sequences through a combination of STDP and homeostatic forms of plasticity. When confronted with previously unseen ambiguous inputs, the network approximates a Bayesian inference operation by appropriately combining the sensory input with prior knowledge encoded in the learnt network parameters. Furthermore, the network matches many of the key findings on spontaneous activity and neural variability outlined above, and samples frequent input sequences during phases of spontaneous activity. While doing so, it also reproduces the lognormal statistics of synaptic connection strengths observed in cortex. The simplicity of our model suggests that these features are canonical properties of recurrent networks learning with a combination of STDP and homeostatic plasticity mechanisms. Overall, our results demonstrate that the complex dynamics of a deterministic self-organizing recurrent neural network can approximate sampling-like inference while capturing key features of neural variability at the single-unit and network levels.
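A heavily simplified sketch of the network class described above (binary McCulloch-Pitts units with a minimal STDP rule and threshold-based intrinsic plasticity; all parameters are hypothetical and much smaller than in the actual SORN work) illustrates the homeostatic side of the story, namely how intrinsic plasticity holds the network near a target firing rate:

```python
import random

random.seed(1)

N = 30            # hypothetical number of excitatory McCulloch-Pitts units
H_TARGET = 0.1    # homeostatic target firing rate per unit
ETA_STDP = 0.001  # STDP learning rate
ETA_IP = 0.01     # intrinsic-plasticity (threshold) learning rate

# Sparse random recurrent weights and per-unit thresholds
W = [[random.uniform(0.0, 0.5) if (i != j and random.random() < 0.1) else 0.0
      for j in range(N)] for i in range(N)]
T = [random.uniform(0.0, 0.5) for _ in range(N)]

def step(x_prev):
    """One network update with STDP and intrinsic plasticity."""
    drive = [random.gauss(0.0, 0.1) for _ in range(N)]  # unspecific input
    x = [1 if sum(W[i][j] * x_prev[j] for j in range(N)) + drive[i] > T[i] else 0
         for i in range(N)]
    # STDP: strengthen pre-before-post pairings, weaken the reverse order
    for i in range(N):
        for j in range(N):
            if W[i][j] > 0.0:
                W[i][j] = max(0.0, W[i][j] +
                              ETA_STDP * (x[i] * x_prev[j] - x_prev[i] * x[j]))
    # Intrinsic plasticity: each threshold tracks the target rate
    for i in range(N):
        T[i] += ETA_IP * (x[i] - H_TARGET)
    return x

x_prev = [1 if random.random() < H_TARGET else 0 for _ in range(N)]
rates = []
for _ in range(2000):
    x_prev = step(x_prev)
    rates.append(sum(x_prev) / N)

mean_rate = sum(rates[-500:]) / 500  # converges toward H_TARGET
print(round(mean_rate, 3))
```

The SORN additionally uses synaptic normalization and structured input sequences; this sketch keeps only the two plasticity mechanisms needed to show rate homeostasis.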

References:

  • 1. Kenet, T., Bibitchkov, D., Tsodyks, M., Grinvald, A., & Arieli, A. (2003). Spontaneously emerging cortical representations of visual attributes. Nature, 425(6961), 954–956. doi:10.1038/nature02078
  • 2. Luczak, A., Barthó, P., & Harris, K. D. (2009). Spontaneous events outline the realm of possible sensory responses in neocortical populations. Neuron, 62(3), 413–425. doi:10.1016/j.neuron.2009.03.014
  • 3. Arieli, A., Sterkin, A., Grinvald, A., & Aertsen, A. (1996). Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science, 273(5283), 1868–1871. doi:10.1126/science.273.5283.1868
  • 4. Hesselmann, G., Kell, C. A., Eger, E., & Kleinschmidt, A. (2008). Spontaneous local variations in ongoing neural activity bias perceptual decisions. Proceedings of the National Academy of Sciences, 105(31), 10984–10989. doi:10.1073/pnas.0712043105
  • 5. Churchland, M. M., Yu, B. M., Cunningham, J. P., Sugrue, L. P., Cohen, M. R., Corrado, G. S., … Shenoy, K. V. (2010). Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature Neuroscience, 13(3), 369–378. doi:10.1038/nn.2501
  • 6. Fiser, J., Berkes, P., Orbán, G., & Lengyel, M. (2010). Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences, 14(3), 119–130. doi:10.1016/j.tics.2010.01.003
  • 7. Lazar, A., Pipa, G., & Triesch, J. (2011). Emerging Bayesian priors in a self-organizing recurrent network. In Artificial Neural Networks and Machine Learning – ICANN 2011 (pp. 127–134). doi:10.1007/978-3-642-21738-8_17

Poster 3: Noise & brain plasticity?

J. Rouat+*, L. Bachatène*, S. Cattan*, M. Adeli+, V. Bahmauria*, N. Chanauria* and  S. Molotchnikoff*
+ NECOTIS, Dép. GEGI., Univ. de Sherbrooke
* Dept Sciences Biologiques, Univ. de Montréal

What is noise in the brain? Certainly not what is commonly called "white noise", since it is part of brain activity itself. Our electrophysiological recordings show that spontaneously firing neurons can have correlated activity. It is also known that noise can act as a facilitator that gates a neuron to respond to a weak stimulus. Without noise, the neuron's firing rate would be too low to elicit learning and plasticity. In this sense, noise is a useful catalyst that helps neurons respond to weak signals.

Plasticity replaces one equilibrium with a new one. In principle, noise alone cannot induce plasticity, as it produces no differential effect across stimuli. To some extent, then, noise seems to conflict with learning. On the other hand, it is also known that noise, when combined with stimuli, can facilitate convergence towards (or divergence from) attractors. We observe that a broadly tuned cell can be stimulated and "awakened" so as to become sensitive and sharply tuned to a specific weak stimulation coming from a "master" cell. Does noise take part in that process? How is noise involved in this plasticity?

Plasticity occurs when a specific stimulus is more frequent, longer-lasting, more powerful, or more unusual than others. Brain plasticity requires a release from inhibition together with stimuli of appropriate duration and intensity that drive cellular excitation. In general, plasticity ends when inhibition returns. Can noise interact with this critical cycle to modulate or impact plasticity?

The poster will use recent results in adaptation and plasticity (ours and others') to discuss various scenarios and hypotheses concerning the potential impacts of noise on plasticity and learning. These results could form the basis of models of plasticity, which we welcome discussing at the poster.

Poster 4: Physical limits to learning: Axonal noise as a source of synaptic variability

Ali Neishabouri (1,2) and A. Aldo Faisal (1,2,3)

1 Department of Computing, Imperial College London, London, United Kingdom; 2 Department of Bioengineering, Imperial College London, London, United Kingdom; 3 MRC Medical Sciences Centre, London, United Kingdom

Poster abstract.pdf

Poster 5: INEX – a simple but effective stochastic model to simulate neuronal-astrocytic activity

Kerstin Lenk, Inkeri Vornanen, Eero Räisänen, Jari AK Hyttinen

Tampere University of Technology and BioMediTech, Finland

Poster abstract.pdf

Poster 6: Dynamic stimulus representations in adapting neuronal networks

Throughout our everyday experience, we are continuously exposed to dynamic and highly complex streams of multimodal sensory information, which we tend to perceive as a series of discrete and coherently bounded sub-sequences [1]. While these 'perceptual events' [2] are unfolding, active representations of the relevant stimulus features (such as identity, duration, or intensity) are maintained, and ought to be sufficiently discernible from the distributed responses of specifically tuned neuronal populations, transiently associated into coherent ensembles [3]. A primary function of cortical microcircuits, and a necessary first step toward more specialized information processing, thus lies in their ability to acquire and maintain appropriate and reliable representations of time-varying, sequentially patterned stimuli in a self-organized and experience-dependent manner, through targeted functional modifications of various intrinsic and synaptic properties. In this work, we numerically investigate the relations between several important principles of functional neurodynamics, involving distributed processing in inhibition-dominated, sparsely and randomly coupled recurrent networks of leaky integrate-and-fire (LIF) neurons endowed with spike-timing-dependent adaptation at excitatory and inhibitory synapses. Because the dynamical features of active stimulus representations are necessarily bound to the current state of the circuit, we start by assessing the impact of plasticity on the characteristics of the different activity regimes exhibited by networks driven by stochastic and unspecific background input, showing that plasticity actively maintains a robust state of asynchronous irregular (AI) activity, closely matching in vivo recordings from awake, behaving animals. This activity regime is highly robust to large variations in the control parameters, namely the E/I balance and the rate of external input.
The noisy and stochastic nature of firing activity in the AI regime turns out to be fundamental for the development of stable and reproducible dynamic stimulus representations, owing to a better exploration of the network's state space. Networks that can re-balance following the perturbation caused by the input stimulus, and thereby actively maintain the AI regime, display stimulus-driven responses that are less variable from trial to trial yet higher-dimensional. In contrast, static networks cannot recover from repeated perturbations of the E/I balance caused by the patterned external stimulus, leading to an increasingly synchronous network state and, consequently, an increasingly constrained and redundant dynamical space, which is detrimental to an adequate stimulus representation.
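The asynchronous irregular regime invoked above can be illustrated at the single-neuron level. In the sketch below, a leaky integrate-and-fire neuron receives balanced excitatory/inhibitory background modeled by a standard diffusion approximation (all parameters are hypothetical, not those of the study); its mean input stays below threshold, so spikes are fluctuation-driven and the inter-spike-interval coefficient of variation (CV) is far above that of a regular spike train:

```python
import random
from math import sqrt

random.seed(2)

# Hypothetical parameters for a current-based LIF neuron
DT = 0.1        # time step (ms)
TAU_M = 20.0    # membrane time constant (ms)
V_TH = 20.0     # spike threshold (mV)
V_RESET = 0.0   # reset potential (mV)
J_E, J_I = 0.5, -0.5             # PSP amplitudes (mV)
RATE_E, RATE_I = 8000.0, 7000.0  # summed E/I input rates (Hz), nearly balanced

dt_s = DT / 1000.0
mu = (J_E * RATE_E + J_I * RATE_I) * dt_s                  # mean drive per step
sigma = sqrt((J_E**2 * RATE_E + J_I**2 * RATE_I) * dt_s)   # fluctuation per step

v, spike_times, t = 0.0, [], 0.0
for _ in range(int(20000.0 / DT)):            # simulate 20 s
    t += DT
    v += DT * (-v / TAU_M) + random.gauss(mu, sigma)
    if v >= V_TH:                             # threshold crossing -> spike
        spike_times.append(t)
        v = V_RESET

isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
mean_isi = sum(isis) / len(isis)
cv = sqrt(sum((i - mean_isi) ** 2 for i in isis) / len(isis)) / mean_isi
print(len(spike_times), round(cv, 2))         # CV near 1 = irregular firing
```

A perfectly regular spike train would have CV = 0; CV near 1 is the Poisson-like irregularity characteristic of the AI state described in the abstract.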

REFERENCES:

  1. Schapiro, A. C., Rogers, T. T., Cordova, N. I., Turk-Browne, N. B., and Botvinick, M. M. (2013), Neural representations of events arise from temporal community structure, Nature Neuroscience, 16, 4, 486–492, doi:10.1038/nn.3331
  2. Zacks, J. M., Speer, N. K., Swallow, K. M., Braver, T. S., and Reynolds, J. R. (2007), Event perception: a mind-brain perspective, Psychological Bulletin, 133, 2, 273–293, doi:10.1037/0033-2909.133.2.273
  3. Singer, W. (2013), Cortical dynamics revisited, Trends in Cognitive Sciences, 17, 12, 616–626, doi:10.1016/j.tics.2013.09.006

Poster 7: Effect of connectivity on noise correlations and coding accuracy in coupled neural populations

Volker Pernice and Rava Azeredo da Silveira

Department of Physics, Ecole Normale Supérieure

Poster abstract.pdf

Poster 8: Bayesian inference in single neuron: a normative model of spike rate adaptation

Alessandro Ticchi (1,4), A. Aldo Faisal (1,2,3)

1 Dept of Computing; 2 Dept of Bioengineering, Imperial College London, UK; 3 MRC Clinical Sciences Centre, London, UK; 4 Dept. of Physics, University of Bologna, Italy

Poster abstract.pdf

Poster 9: Bayesian Inference in Spiking Neurons: A Neural Circuit Model of MCMC Exploiting Neuronal Noise

Alessandro Ticchi (1,4), A. Aldo Faisal (1,2,3)

1 Dept of Computing; 2 Dept of Bioengineering, Imperial College London, UK; 3 MRC Clinical Sciences Centre, London, UK; 4 Dept. of Physics, University of Bologna, Italy

Poster abstract.pdf

Poster 10: Cortical multi-layered, multi-area networks as a substrate for stochastic computing

Hannah Bos, Maximilian Schmidt, Jakob Jordan, Jannis Schuecker, Sacha van Albada, Rembrandt Bakker, Markus Diesmann, Moritz Helias and Tom Tetzlaff

Poster abstract.pdf

Poster 11: Stochastic Sampling without Noise

Bernhard Nessler

Frankfurt Institute for Advanced Studies (FIAS)

The "stochastic view" of the brain has received much attention recently, and models based on the sampling hypothesis promise simpler explanations for complex behavior than traditional deterministic approaches. However, the success of these stochastic models, despite the rather deterministic spike generation mechanism of single neurons, raises a question: where does the noise come from?

In contrast to previous approaches, which resort either to biochemical fluctuations in synapses or to input from other brain regions, we propose a new solution: we use a simple deterministic activity regulation as a model of neuronal intrinsic plasticity (IP) and show that the interplay of these individual IP processes across all neurons of a recurrently connected network leads to a chaotic jitter in the neuronal activity that acts as an intrinsic pseudo-random noise source. In the framework of a self-organizing recurrent neural network (SORN), we demonstrate the learning and pseudo-stochastic replay of distributions of symbol sequences, and quantify the quality of the resulting sampler at the level of individual neurons, at the level of the whole network, and from the reduced perspective of a number of read-out neurons.
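The core idea, that fully deterministic dynamics can serve as a pseudo-random noise source, can be illustrated with a toy example far simpler than the SORN itself. The chaotic logistic map below (the map and seed are illustrative only, not part of the model) is deterministic, yet thresholding its orbit behaves statistically like a fair coin:

```python
# Deterministic chaotic map as a pseudo-random source (toy illustration,
# not the SORN mechanism itself)
x = 0.123456789               # arbitrary deterministic seed
samples = []
for _ in range(10_000):
    x = 4.0 * x * (1.0 - x)   # logistic map in its fully chaotic regime
    samples.append(x)

# Thresholding the orbit yields an approximately fair deterministic "coin"
bits = [1 if s > 0.5 else 0 for s in samples]
p_one = sum(bits) / len(bits)
print(round(p_one, 3))        # close to 0.5, though nothing here is random
```

In the abstract's proposal, the analogous chaotic jitter emerges from the coupled IP processes of the network rather than from an explicit map, but the computational role of deterministic chaos as usable pseudo-noise is the same.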

Our results have implications both for a possible new understanding of the implementation of stochastic computations in the brain through chaotic dynamics and for the construction of neuromorphic hardware for probabilistic inference, where costly noise generators could be replaced by simple deterministic homeostatic control mechanisms.


 

13 Aug 2014