NICE 2021 - Agenda
|Tuesday, 16 March 2021|
We have a chat server for the workshop (please use the username / initial password from your meeting's 'personal page'; the URL is in your email). Please ask talk-related questions in the respective talk channel (linked here in the agenda).
The proceedings for NICE 2021 are the papers from the postponed NICE 2020. Papers for many of the presentations are available here: https://dl.acm.org/doi/proceedings/10.1145/3381755
Welcome to NICE #8
|Johannes Schemmel (Heidelberg University)|
Keynote: Lessons from Loihi for the Future of Neuromorphic Computing
The past three years have seen significant progress in neuromorphic computing. The availability of Loihi has enabled a community of over 100 research groups around the world to evaluate a wide range of neuro-inspired algorithms and applications with a neuromorphic chip and toolchain that is sufficiently mature to support meaningful benchmarking. So far these efforts have yielded a number of compelling results, for example in the domains of combinatorial optimization and event-based sensing, control, and learning, while highlighting the opportunities and challenges the field faces for delivering real-world technological value over both the near and long term. This talk surveys the most important results and perspectives we've obtained with Loihi to date.
|Mike Davies (Intel)|
Why is Neuromorphic Event-based Engineering the future of AI?
While neuromorphic vision sensors and processors are becoming more available and usable by non-experts, and although they outperform existing devices, especially in sensing, there are still no successful commercial applications that have allowed them to overtake conventional computation and sensing. In this presentation, I will provide insights into the key missing steps that are preventing this new computational revolution from happening. I will give an overview of neuromorphic, event-based approaches for image sensing and processing and how these have the potential to radically change current AI technologies and open new frontiers in building intelligent machines. I will focus on what is meant by event-based computation and the need to process information in the time domain rather than recycling old concepts such as images, backpropagation, and any form of frame-based approach. I will introduce new models of machine learning based on spike timings and show the importance of being compatible with neuroscience findings and recorded data. Finally, I will provide new insights on how to build neuromorphic neural processors able to run these new AI models, and the need to move to new architectural concepts.
|Ryad Benjamin Benosman (UPITT/CMU/SORBONNE)|
Exploring the possibilities of analog neuromorphic computing with BrainScaleS
BrainScaleS is an analog accelerated neuromorphic hardware architecture. Originally devised to emulate learning in the brain using spike-based models, its research scope has significantly broadened. The most recent addition to the BrainScaleS architecture extends the analog neuron operation to include rate-based modeling. Using analog vector-matrix multiplications, the BrainScaleS-ASIC has successfully demonstrated its capability to process real-world data sets. The ASIC still keeps the full functionality of its analog event-based core, including hybrid plasticity. This talk will present results from the current BrainScaleS system, highlighting the flexibility that is possible with an analog neuromorphic substrate. The BrainScaleS system will serve as the analog neuromorphic platform in the upcoming EBRAINS research infrastructure of the European Community.
|Johannes Schemmel (Heidelberg University)|
Group photo (zoom screenshots)
Note: we want to publish the group photo publicly, so please switch on your camera if you agree to appear in the photo.
Evaluating complexity and resilience trade-offs in emerging memory inference machines
|Christopher Bennett (Sandia National Labs)|
Lightning talk: From clean room to machine room: towards accelerated cortical simulations on the BrainScaleS wafer-scale system
The BrainScaleS system follows the principle of so-called "physical modeling", wherein the dynamics of VLSI circuits are designed to emulate the dynamics of their biological archetypes. Neurons and synapses are implemented by analog circuits that operate in continuous time, governed by time constants which arise from the properties of the transistors and capacitors on the microelectronic substrate. This defines our intrinsic hardware acceleration factor of 10000 with respect to biological real time. The system evolved over more than ten years from a lab prototype to a larger installation of several wafer modules. The talk reflects on the development process and the lessons learned, and summarizes the recent progress in commissioning and operating the BrainScaleS system. The current state of the endeavor is demonstrated on the example of wafer-scale emulations of functional neural networks.
|Sebastian Schmitt (Heidelberg University)|
Closed-loop experiments on the BrainScaleS-2 architecture
The evolution of biological brains has always been contingent on their embodiment within their respective environments, in which survival required appropriate navigation and manipulation skills. Studying such interactions thus represents an important aspect of computational neuroscience and, by extension, a topic of interest for neuromorphic engineering. In the talk, we present three examples of embodiment on the BrainScaleS-2 architecture, in which dynamical timescales of both agents and environment are accelerated by several orders of magnitude with respect to their biological archetypes.
|Korbinian Schreiber (Heidelberg University)|
Batch << 1: Why Neuromorphic Computing Architectures Suit Real-Time Workloads
As predicted by John Hennessy, there has been a “Cambrian explosion” of computing architectures as Moore’s Law scaling has broken down. This is most obvious in the new field of AI hardware, where the competition to develop and commercialize chips for deep learning training and inference is particularly strong. There is no consensus as to whether the same architectures will be appropriate for data-center computation and edge computation, although some practitioners are starting to differentiate architectures on the basis of whether inputs (typically, images or video frames) can be accumulated before processing (allowing for very large memory read and write blocks and large matrix multiplications); or whether the task demands that each frame must be processed in real time (so-called “Batch = 1” processing).
In this presentation we show that many real-world tasks are in fact “Batch << 1” operations. For example, in the case of a forward-facing video camera in a self-driving car application, the similarity between successive frames is very high, and increases as the frame rate and resolution of the video increase; a 240fps 1080p camera will typically have well over 99% of pixels unchanged between successive frames. The same high correlation between successive samples applies in other real-world workloads such as conversational audio processing.
Exploiting the correlation of input streams can lead to very efficient processing (as shown in video compression techniques such as H.264 / MPEG-4). However, it requires significantly different processing architectures, chief among which is the necessity to maintain system state in memory between inputs.
We will show that neuromorphic architectures intrinsically implement the most important features of a “Batch << 1” architecture, and are very well suited to edge processing. We will describe a new architecture, NeuronFlow, which is optimized for this purpose, and present results from GrAIOne, the first chip manufactured to implement this architecture. Early results show a significant processing advantage in terms of both latency and power consumption.
|Jonathan Tapson (School of Electrical and Data Engineering, University of Technology Sydney)|
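The frame-to-frame redundancy argument above can be made concrete with a toy calculation (an illustrative sketch only, not the NeuronFlow pipeline; the synthetic frames and the `changed_fraction` helper are our own):

```python
import numpy as np

def changed_fraction(prev, curr, threshold=2):
    """Fraction of pixels whose intensity changed by more than `threshold`."""
    return np.mean(np.abs(curr.astype(int) - prev.astype(int)) > threshold)

# Two synthetic 1080p frames that differ only in a small moving patch.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
curr = prev.copy()
curr[500:520, 900:920] += 50  # a 20x20 patch changed between frames

# Event-style processing touches only the changed pixels.
events = np.argwhere(np.abs(curr.astype(int) - prev.astype(int)) > 2)
print(changed_fraction(prev, curr))  # tiny fraction of the frame
print(len(events))                   # only these pixels need processing
```

In this "Batch << 1" regime the work per frame scales with the number of events, not with the frame size.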
Neuromorphic and AI research at BCAI (Bosch Center for Artificial Intelligence)
We will give an overview of current challenges and activities at Bosch Center for Artificial Intelligence regarding neuromorphic computing, spiking neural networks, in-memory computation and deep learning. This includes a short introduction to the publicly funded project ULPEC addressing ultra-low power vision systems. In addition, we will give a summary of selected academic contributions in the field of spiking neural networks, in-memory computation and hardware-aware compression of deep neural networks.
|Thomas Pfeil (Bosch Center for Artificial Intelligence)|
Mapping Deep Neural Networks on SpiNNaker2
Florian Kelber, Binyi Wu, Bernhard Vogginger, Johannes Partzsch, Chen Liu, Marco Stolba and Christian Mayr
SpiNNaker is an efficient many-core architecture for the real-time simulation of spiking neural networks. To also speed up deep neural networks (DNNs), the 2nd generation SpiNNaker2 will contain dedicated DNN accelerators in each processing element. When realizing large CNNs on SpiNNaker2, layers have to be split, mapped and scheduled onto 144 processing elements. We describe the underlying mapping procedure with optimized data reuse to achieve inference of VGG-16 and ResNet-50 models in tens of milliseconds.
|Florian Kelber (TU Dresden)|
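One ingredient of such a mapping is partitioning a layer's work across the 144 processing elements. A minimal sketch of one even-partitioning scheme (our own hypothetical helper, not the actual SpiNNaker2 mapping tool, which also optimizes data reuse and scheduling):

```python
def split_channels(n_channels, n_pes):
    """Partition output channels as evenly as possible across processing elements.

    Returns a list of (start, end) channel ranges, one per PE that gets work.
    """
    base, extra = divmod(n_channels, n_pes)
    sizes = [base + (1 if i < extra else 0) for i in range(n_pes)]
    bounds, start = [], 0
    for s in sizes:
        bounds.append((start, start + s))
        start += s
    return [b for b in bounds if b[0] < b[1]]

# Example: a 256-output-channel conv layer mapped across 144 PEs.
parts = split_channels(256, 144)
print(len(parts))           # 144 PEs used
print(parts[0], parts[-1])  # the first PEs get 2 channels, the rest 1
```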
Open mic / discussion
End of day I
Tutorials: BrainScaleS and DYNAP-SE
Two tutorials/hands on in parallel:
For a description please see the tutorials page.
|Wednesday, 17 March 2021|
Tutorials: SpiNNaker and DYNAP-SE
Two tutorials in parallel:
For a description please see the tutorials page.
SpiNNaker tutorial introduction
|Andrew Rowley (University of Manchester)|
NICE - day II
Keynote: From Brains to Silicon -- Applying lessons from neuroscience to machine learning
In this talk we will review some of the latest neuroscience discoveries and suggest how they describe a roadmap to achieving true machine intelligence. We will then describe our progress of applying one neuroscience principle, sparsity, to existing deep learning networks. We show that sparse networks are significantly more resilient and robust than traditional dense networks. With the right hardware substrate, sparsity can also lead to significant performance improvements. On an FPGA platform our sparse convolutional network runs inference 50X faster than the equivalent dense network on a speech dataset. In addition, we show that sparse networks can run efficiently on small power-constrained embedded chips that cannot run equivalent dense networks. We conclude our talk by proposing that neuroscience principles implemented on the right hardware substrate offer the only feasible path to scalable intelligent systems.
|Jeff Hawkins (Numenta)|
Subutai Ahmad (Numenta)
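The two kinds of sparsity discussed in the keynote, sparse weights and sparse activations, can be sketched in a few lines (an illustrative toy under our own assumptions, not Numenta's implementation; `sparse_weights` and `k_winners` are hypothetical helpers):

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_weights(shape, density=0.1):
    """Keep only a random fraction of weights; the rest are fixed at zero."""
    w = rng.normal(size=shape)
    mask = rng.random(shape) < density
    return w * mask

def k_winners(x, k):
    """Keep the k largest activations, zero the rest (activation sparsity)."""
    out = np.zeros_like(x)
    idx = np.argsort(x)[-k:]
    out[idx] = x[idx]
    return out

w = sparse_weights((64, 128), density=0.1)  # ~90% of weights are zero
x = rng.normal(size=64)
h = k_winners(w.T @ x, k=16)                # at most 16 of 128 units active
print(np.mean(w != 0))
print(np.count_nonzero(h))
```

On hardware that can skip zeros, both forms of sparsity translate directly into fewer memory reads and multiplies.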
A Neuromorphic Future for Classic Computing Tasks
The obvious promise of neuromorphic hardware is to enable efficient implementations of brain-derived algorithms. However, to be successful, it is essential that the community demonstrates that neuromorphic systems can be broadly impactful beyond a few narrow tasks. While more advanced post-deep-learning brain-derived algorithms would be ideal, it is helpful to look beyond cognitive algorithms as well for potential market impact.
In this talk, I will highlight one such opportunity: the application of neuromorphic hardware for large-scale scientific computing applications. Specifically, I will present a perspective on neuromorphic hardware that enables us to use large spiking architectures for solving stochastic differential equations and graph analytics. Our general approach treats neuromorphic architectures as a large computational graph onto which we can map sophisticated algorithmic tasks. We have demonstrated how this approach can be used to efficiently model Monte Carlo approximations to a class of partial differential equations that challenge the high-performance computing community, and we can further illustrate how this approach is well-suited for performing general dynamic programming tasks.
Finally, the talk will include some concrete examples of this approach on different spiking neuromorphic platforms, such as Loihi, TrueNorth, and SpiNNaker.
|Brad Aimone (Sandia National Laboratories)|
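The underlying Monte Carlo idea, random walkers approximating the solution of a diffusion-type PDE, can be sketched in conventional software (our own minimal example for the 1-D heat equation, not the spiking implementation described in the talk):

```python
import math
import random

def mc_heat(x0, t, n_walkers=2000, dt=0.01):
    """Monte Carlo estimate of u(x0, t) for the 1-D heat equation u_t = u_xx / 2
    with initial condition u(x, 0) = 1 if x > 0 else 0.

    Each walker performs Brownian motion started at x0;
    u(x0, t) is the probability that a walker ends at x > 0 (Feynman-Kac).
    """
    hits = 0
    steps = round(t / dt)
    for _ in range(n_walkers):
        x = x0
        for _ in range(steps):
            x += random.gauss(0.0, math.sqrt(dt))
        hits += (x > 0)
    return hits / n_walkers

random.seed(0)
print(mc_heat(0.0, t=0.5))  # close to 0.5, the exact value at x0 = 0
```

On a spiking architecture, each walker's position update becomes local event-driven state, which is what makes such workloads a natural fit.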
Lightning talk: Benchmarking of Neuromorphic Hardware Systems
With more and more neuromorphic hardware systems for the acceleration of spiking neural networks available in science and industry, there is a demand for platform comparison and performance estimation of such systems. This work describes selected benchmarks implemented in a framework with exactly this target: independent black-box benchmarking and comparison of platforms suitable for the simulation/emulation of spiking neural networks.
|Christoph Ostrau (Bielefeld University)|
Natural density cortical models as benchmarks for universal neuromorphic computers
Throughout evolution, the cortex has increased in volume from mouse to man by three orders of magnitude, while the architecture at the local scale of a cubic millimeter has largely been conserved in terms of the multi-layered structure and the density of synapses. Furthermore, local cortical networks are similar, independent of whether an area processes visual, auditory, or tactile information. This dual universality raises hope that fundamental principles of cortical computation can be discovered. Although a coherent view of these principles is still missing, this universality motivated researchers more than a decade ago to start developing neuromorphic computing systems based on the interaction between neurons by delayed point events and on basic parameters of cortical architecture.
These systems need to be verified in the sense of accurately representing cortical dynamics, and validated in the sense of simulating faster or with less energy than software solutions on conventional computers. Such comparisons are only meaningful if they refer to implementations of the same neuronal network model. The role of models thus changes from mere demonstrations of functionality to quantitative benchmarks. In fields of computer science like computer vision and machine learning, the definition of benchmarks helps to quantify progress and drives a constructive competition between research groups. The talk argues that neuromorphic computing needs to advance the development of benchmarks of increasing size and complexity.
A model of the cortical microcircuit exemplifies the recent interplay and co-design of alternative hardware architectures enabled by a common benchmark. The model represents neurons with their natural number of synapses and at the same time captures the natural connection probability between neurons in the local volume. Consequently, all questions on the proper scaling of network parameters become irrelevant. The model constitutes a milestone for neuromorphic hardware systems as larger cortical models are necessarily less densely connected.
As metrics we discuss the energy consumed per synaptic event and the real-time factor. We illustrate the progress in the past few years and show that a single conventional compute node still keeps up with neuromorphic hardware and achieves sub-real-time performance. Finally, the talk exposes the limitations of the microcircuit model as a benchmark and positions cortical multi-area models as a biologically meaningful way of upscaling benchmarks to the next problem size.
This work is partially supported by the European Union's Horizon 2020 (H2020) funding framework under grant agreement no. 945539 (Human Brain Project SGA3) and the Helmholtz Association Initiative and Networking Fund under project number SO-092 (Advanced Computing Architectures, ACA).
|Markus Diesmann (Forschungszentrum Jülich GmbH)|
Poster Lightning Talks
1 min - 1 slide poster appetizers
|Alberto Vergani (AMU)|
Pau Vilimelis Aceituno (ETH Zürich)
Benjamin Cramer (Heidelberg University)
Garibaldi Pineda Garcia (University of Sussex)
Ioannis Polykretis (Rutgers University)
Tobias Thommes (Heidelberg University)
Andrew Fountain (Rochester Institute of Technology)
Ayon Borthakur (Cornell University)
Akos Ferenc Kungl (Heidelberg University)
Poster session A (and break)
Platform-Agnostic Neural Algorithm Composition using Fugu
Spiking neural networks and corresponding neuromorphic hardware are undergoing an uptick in interest as key milestones are accomplished by industry, academic and government research groups. Unfortunately, from an end-user’s perspective, testing or deploying applications on a neuromorphic platform is very challenging and often infeasible. We hope to address two common and key challenges, portability and composition, by the creation of an overarching software framework called Fugu. Fugu allows for spiking neural algorithms, created by independent designers, to be combined seamlessly in a scalable and target-platform-agnostic manner. The resulting intermediate representation is then translatable to multiple neuromorphic hardware backends.
Acknowledgements: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
|William Severa (Sandia National Laboratories)|
Lightning talk: Implementing Backpropagation for Learning on Neuromorphic Spiking Hardware
Many contemporary advances in the theory and practice of neural networks are inspired by our understanding of how information is processed by natural neural systems. However, the basis of modern deep neural networks remains the error backpropagation algorithm, which, though founded in rigorous mathematical optimization theory, has not been successfully demonstrated in a neurophysiologically inspired (neuromorphic) circuit. In a recent study, we proposed a neuromorphic architecture for learning that tunes the propagation of information forward and backwards through network layers using a timing mechanism controlled by a synfire-gated synfire chain (SGSC). This architecture was demonstrated in simulation of firing rates in a current-based neuronal network. In this follow-on study, we present a spiking backpropagation algorithm based on this architecture, but including several new mechanisms that enable implementation of the backpropagation algorithm using neuromorphic spiking units. We demonstrate the function of this architecture by learning an XOR logic circuit and performing numerical character recognition with the MNIST dataset on Intel's Loihi neuromorphic chip.
|Andrew Sornborger (Los Alamos National Laboratory)|
Inductive bias transfer between brains and machines
Machine learning, in particular computer vision, has made tremendous progress in recent years. On standardized datasets, deep networks now frequently achieve close-to-human or superhuman performance. However, despite this enormous progress, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called “inductive bias”, determines how well any learning algorithm, or brain, generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. I will give an overview of some conceptual ideas and preliminary results on how the rapid increase of neuroscientific data could be used to transfer low-level inductive biases from the brain to learning machines.
|Fabian Sinz (University Tübingen)|
Lightning talk: Spike Latency Reduction generates Efficient Predictive Coding
Latency reduction in postsynaptic spikes is a well-known effect of spike-timing-dependent plasticity (STDP). We expand this notion to long postsynaptic spike trains on single neurons, showing that, for a fixed input spike train, STDP reduces the number of postsynaptic spikes and concentrates the remaining ones. Then, we study the consequences of this phenomenon in terms of coding, finding that this mechanism improves the neural code by increasing the signal-to-noise ratio and lowering the metabolic costs of frequent stimuli. Finally, we illustrate that the reduction in postsynaptic latencies can lead to the emergence of predictions.
|Pau Vilimelis Aceituno (ETH Zürich)|
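The latency-reduction effect can be reproduced in a deliberately simplified model (our own toy: instantaneous input integration and additive STDP with hard bounds, not the model from the talk): inputs arriving before the postsynaptic spike are potentiated, so the neuron reaches threshold earlier on each trial.

```python
import numpy as np

def run_trial(w, spike_times, theta=3.0):
    """Integrate weighted input spikes in time order; return the post-spike time."""
    v = 0.0
    for t, wi in sorted(zip(spike_times, w)):
        v += wi
        if v >= theta:
            return t
    return None

def stdp(w, spike_times, t_post, a_plus=0.1, a_minus=0.05):
    """Potentiate inputs arriving before the post spike, depress later ones."""
    for i, t in enumerate(spike_times):
        if t_post is not None and t <= t_post:
            w[i] = min(w[i] + a_plus, 1.0)
        else:
            w[i] = max(w[i] - a_minus, 0.0)
    return w

spike_times = np.linspace(0.0, 10.0, 11)  # one input spike per ms
w = np.full(11, 0.5)
latencies = []
for _ in range(30):
    t_post = run_trial(w, spike_times)
    latencies.append(t_post)
    w = stdp(w, spike_times, t_post)

print(latencies[0], latencies[-1])  # the postsynaptic latency shrinks
```

With these parameters the first trial fires at t = 5.0 and, after the early weights saturate, the neuron fires at t = 2.0: the response concentrates at the earliest inputs, as the abstract describes.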
Lightning talk: Cognitive Domain Ontologies: HPCs to Ultra Low Power Neuromorphic Platforms
Cognitively Enhanced Complex Event Processing (CECEP) is an agent-based decision-making architecture. Within this agent, the Cognitive Domain Ontology (CDO) component is the slowest for most applications of the agent. We show that even after acceleration on a high-performance server enhanced with a high-end graphics processing unit (GPU), the CDO component does not scale well for real-time use on large problem sizes. Thus, to enable real-time use of the agent, particularly in power-constrained environments (such as autonomous air vehicles), alternative implementations of the agent logic are needed. The objective of this work was to carry out an initial design-space search of algorithms and hardware for decision making through the domain-knowledge component of CECEP. Several algorithmic and circuit approaches are proposed that span six hardware options of varying power consumption and weight (ranging from over 1000 W to less than 1 W). The algorithms range from exact solvers optimized for running on a cluster of high-performance computing systems to approximate solvers running fast on low-power neuromorphic hardware. The loss in accuracy for the approximate approaches is minimal, making them well suited to SWaP-constrained systems such as UAVs. The exact approach on an HPC gives confidence that the best answer has been evaluated (although this may take some time to generate).
|Chris Yakopcic (University of Dayton)|
Open mic / discussion
We’ll cover a few new “advanced” topics of interest to the community:
We’ll present on these topics and show some code running. Anyone who has access to Loihi will be able to find and run the code themselves, but we’d like to clarify that access to Loihi is not required for attending the tutorials. (If you want to get access to Loihi, please email firstname.lastname@example.org to get legal access to the Loihi cloud systems.)
Intel Loihi's NxSDK: Introduction and overview
|Andreas Wild (Intel Corporation)|
A Fast and Efficient Constraint Satisfaction Solver on Loihi
|Gabriel Andres Fonseca Guerra (Intel Corporation)|
SLAYER for Loihi
|Sumit Bam Shrestha (Intel Corporation)|
Performance Characterization on Loihi
|Garrick Orchard (Intel Corporation)|
|Thursday, 18 March 2021|
Tutorial: BrainScaleS hands-on
Please use the "Join_Main" dial in on your personal page to attend.
For a description of the pre-requirements, please see the tutorials page.
BrainScaleS-2 hands-on introduction, part I: Spiking mode
|Sebastian Billaudelle (Kirchhoff Institute for Physics, Heidelberg University)|
BrainScaleS-2 hands-on introduction, part II: Matrix multiplication mode
|Johannes Weis (Kirchhoff-Institute for Physics, Heidelberg University)|
BrainScaleS-2 hands-on introduction, part III: Matrix multiplication mode II
|Arne Emmel (Universität Heidelberg)|
(hands-on work with the BSS-2 single chip system)
NICE - day III
Please use the "Join_Main" dial in on your personal page to attend.
Keynote: Biological inspiration for improving computing and learning in spiking neural networks
The talk will address three new methods:
For details see:
|Wolfgang Maass (Graz University of Technology)|
On the computational power and complexity of Spiking Neural Networks
The last decade has seen the rise of neuromorphic architectures based on artificial spiking neural networks, such as the SpiNNaker, TrueNorth, and Loihi systems. The massive parallelism and colocating of computation and memory in these architectures potentially allows for an energy usage that is orders of magnitude lower compared to traditional Von Neumann architectures. However, to date a comparison with more traditional computational architectures (particularly with respect to energy usage) is hampered by the lack of a formal machine model and a computational complexity theory for neuromorphic computation. I will demonstrate the first steps towards such a theory, including canonical problems, hierarchies of complexity classes and some first completeness results.
|Johan Kwisthout (Radboud Universiteit Nijmegen)|
Evolutionary Optimization for Neuromorphic Systems
Designing and training an appropriate spiking neural network for neuromorphic deployment remains an open challenge in neuromorphic computing. In 2016, we introduced an approach for utilizing evolutionary optimization to address this challenge called Evolutionary Optimization for Neuromorphic Systems (EONS). In this work, we present an improvement to this approach that enables rapid prototyping of new applications of spiking neural networks in neuromorphic systems. We discuss the overall EONS framework and its improvements over the previous implementation. We present several case studies of how EONS can be used, including to train spiking neural networks for classification and control tasks, to train under hardware constraints, to evolve a reservoir for a liquid state machine, and to evolve smaller networks using multi-objective optimization.
|Catherine Schuman (Oak Ridge National Laboratory)|
An event-based gas sensing device that resolves fast transients in a turbulent environment
Electronic olfaction can help detect and localize harmful gases and pollutants, but the turbulence of the natural environment presents a particular challenge: odor encounters are intermittent, and an effective electronic nose must therefore be able to resolve short odor pulses. The slow responses of the widely used metal oxide (MOX) gas sensors complicate the task. Here, we combine high-resolution data acquisition with a processing method based on Kalman filtering and event-driven, level-crossing sampling to extract fast onset events. We find that our system can resolve the onset time of odor encounters with enough precision for source direction estimation with a pair of MOX sensors in a stereo-osmic configuration. Our work demonstrates how neuromorphic principles help improve the performance of electronic gas sensors.
|Michael Schmuker (University of Hertfordshire)|
Damien Drix (University of Hertfordshire)
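The event-driven, level-crossing (send-on-delta) sampling mentioned in the abstract can be sketched as follows (a minimal toy on a synthetic signal, omitting the Kalman filtering stage; `level_crossing_events` is our own illustrative helper):

```python
import numpy as np

def level_crossing_events(signal, times, delta=0.1):
    """Emit (time, direction) events whenever the signal moves by `delta`
    from the last event level (send-on-delta / level-crossing sampling)."""
    events, last = [], signal[0]
    for t, s in zip(times, signal):
        while s - last >= delta:
            last += delta
            events.append((t, +1))
        while last - s >= delta:
            last -= delta
            events.append((t, -1))
    return events

# Synthetic MOX-like response: sharp odor onset at t = 2 s, slow decay.
t = np.linspace(0, 10, 1000)
sig = np.where(t < 2, 0.0, np.exp(-(t - 2) / 3.0))

events = level_crossing_events(sig, t, delta=0.1)
onset = events[0][0]
print(onset)  # the first event lands right at the 2 s onset
```

A fast onset produces a dense burst of up-events whose first timestamp localizes the odor encounter, which is what enables stereo onset-time comparison between two sensors.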
Sequence learning, prediction, and generation in networks of spiking neurons
Sequence learning, prediction and generation have been proposed to be the universal computation performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes this form of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context-specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms.
Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns non-Markovian sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific feedforward subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context-specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences.
By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay. We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.
|Younes Bouhadjar (Forschungszentrum Juelich)|
Poster session B (and break)
And for talk posters by:
Lessons from neurobiology and physics: pseudobackprop & more
Spike-based computation is inspired by neurobiology and is implemented in physical devices. Spike-based computation can also be inspired by methods from theoretical physics, where overarching principles are formulated to capture the dynamics of charges, masses and more abstract variables. We consider the principle of Least Action and show how this can be applied to the neurobiology of cognition. The key notion is that of an error. Errors can be defined at the level of the behaviour, the microcircuits and the single neurons. I will show how a rigorous application of this Neural Least Action (NLA) principle leads to a cortical version of error-backpropagation, namely pseudobackprop. Pseudobackprop naturally emerges when error representations are learned by cortical microcircuits and made available at the dendritic sites. Performance-wise, pseudobackprop outperforms feedback alignment, is comparable with backpropagation, and has distinct advantages. The NLA principle potentially offers generalizations to spike-based computation, conductance-based computation, and natural gradients (covered by other talks at NICE 2021). As a physical theory that deals with continuous time, it may give hints on how to actually implement it on real-time physical devices.
|Walter Senn (Universität Bern)|
Conductance-based dendrites perform reliability-weighted opinion poolingshow presentation.pdf (public accessible), show talk video (YouTube) (local version)
Cue integration, the combination of different sources of information to reduce uncertainty, is a fundamental computational principle of brain function. Starting from a normative model we show that the dynamics of multi-compartment neurons with conductance-based dendrites naturally implement the required probabilistic computations. The associated error-driven plasticity rule allows neurons to learn the relative reliability of different pathways from data samples, approximating Bayes-optimal observers in multisensory integration tasks. Additionally, the model provides a functional interpretation of neural recordings from multisensory integration experiments and makes specific predictions for membrane potential and conductance dynamics of individual neurons.
|Jakob Jordan (Institute of Physiology, University of Bern)|
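The normative pooling rule the abstract refers to can be written down directly: a Bayes-optimal observer combines independent Gaussian cues by weighting each with its inverse variance. The following is a minimal numerical sketch of that rule only, not of the authors' conductance-based multi-compartment model.

```python
import numpy as np

def pool_cues(means, variances):
    """Reliability-weighted pooling of independent Gaussian cues:
    each cue is weighted by its inverse variance (its reliability)."""
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    fused_mean = float(np.dot(weights, means))
    fused_var = float(1.0 / reliabilities.sum())   # fused estimate is more reliable
    return fused_mean, fused_var

# A reliable visual cue (variance 1.0) and a noisy auditory cue
# (variance 4.0) estimating the same quantity:
mean, var = pool_cues([10.0, 14.0], [1.0, 4.0])   # approximately (10.8, 0.8)
```

The fused variance is always smaller than either cue's variance, which is the sense in which cue integration reduces uncertainty.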
Lightning talk: Natural gradient learning for spiking neuronsshow presentation.pdf (public accessible), show talk video (YouTube) (local version)
In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean gradient descent can easily lead to inconsistencies due to such parametrization dependence. The issues are solved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural gradient descent.
|Elena Kreutzer (University of Bern)|
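The parametrization problem described above can be made concrete with a toy example (constructed for illustration, not taken from the talk): a quadratic loss optimized in two parametrizations of the same weight. Euclidean gradient descent gives different functional updates, while rescaling by the metric G = (dw/dtheta)^2 — the essence of natural gradient descent — restores consistency to first order in the learning rate.

```python
import numpy as np

def dloss(w):                # gradient of the loss 0.5 * (w - 2)^2
    return w - 2.0

eta, w0 = 0.1, 1.0

# Parametrization A: optimize the functional weight w directly.
w_a = w0 - eta * dloss(w0)

# Parametrization B: w = theta**2 (think spine size vs. synaptic efficacy).
theta = np.sqrt(w0)
grad_theta = dloss(theta ** 2) * 2 * theta       # chain rule: dL/dtheta

# Euclidean descent in theta lands on a different functional weight.
w_b_euclid = (theta - eta * grad_theta) ** 2

# Natural gradient rescales by the metric G = (dw/dtheta)**2 and recovers
# (to first order in eta) the same functional update as parametrization A.
G = (2 * theta) ** 2
w_b_nat = (theta - eta * grad_theta / G) ** 2
```

Here `w_b_euclid` overshoots relative to `w_a`, while `w_b_nat` stays close to it: the natural-gradient update depends on the function computed, not on how the weight happens to be parametrized.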
Making spiking neurons more succinct with multi-compartment modelsshow presentation.pdf (public accessible), show talk video (YouTube) (local version)
Spiking neurons consume energy for each spike they emit. Reducing the firing rate of each neuron — without sacrificing relevant information content — is therefore a critical constraint for energy efficient networks of spiking neurons in biology and neuromorphic hardware alike. The inherent complexity of biological neurons provides a possible mechanism to realize a good trade-off between these two conflicting objectives: multi-compartment neuron models can become selective to highly specific input patterns, and can thus produce informative yet sparse spiking codes. I'll present a model of this mechanism and discuss its potential utility for spiking neural networks and neuromorphic hardware.
|Johannes Leugering (Fraunhofer IIS)|
Lightning talk: The Computational Capacity of Mem-LRC Reservoirs(the presentation .pdf is accessible for meeting attendants from their 'personal page')
Forrest Sheldon and Francesco Caravelli
Reservoir computing has emerged as a powerful tool in data-driven time series analysis. The possibility of utilizing hardware reservoirs as specialized co-processors has generated interest in the properties of electronic reservoirs, especially those based on memristors, as the nonlinearity of these devices should translate into an improved nonlinear computational capacity of the reservoir. However, designing these reservoirs requires a detailed understanding of how memristive networks process information, which has thus far been lacking. In this work, we derive an equation for general memristor-inductor-resistor-capacitor (MEM-LRC) reservoirs that includes all network and dynamical constraints explicitly. Utilizing this, we undertake a detailed study of the computational capacity of these reservoirs. We demonstrate that hardware reservoirs may be constructed with extensive memory capacity and that the presence of memristors enacts a tradeoff between memory capacity and nonlinear computational capacity. Using these principles we design reservoirs to tackle problems in signal processing, paving the way for applying hardware reservoirs to high-dimensional spatiotemporal systems.
|Forrest Sheldon (Los Alamos National Lab - T-4/CNLS)|
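The linear memory capacity studied in this line of work can be estimated for any reservoir by training delay-reconstruction readouts and summing the squared correlations. The sketch below does this for a generic software echo state network as a stand-in for the hardware Mem-LRC reservoirs of the paper; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, washout, max_delay = 50, 2000, 100, 30

# A generic random echo state network as a software stand-in for a
# hardware reservoir.
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(0.0, 1.0, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

u = rng.uniform(-1.0, 1.0, size=T)                # random input stream
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Linear memory capacity: for each delay k, train a linear readout to
# reconstruct u[t-k] from the state and accumulate the squared correlation.
X = states[washout:]
capacity = 0.0
for k in range(1, max_delay + 1):
    target = u[washout - k:T - k]
    w_out, *_ = np.linalg.lstsq(X, target, rcond=None)
    capacity += np.corrcoef(X @ w_out, target)[0, 1] ** 2
```

The resulting `capacity` is bounded by the number of delays probed (and ultimately by the reservoir size), which is the quantity the memory/nonlinearity tradeoff in the abstract is expressed in.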
Open mic / discussion
Tutorial: SpiNNaker hands-on
For a recording of the introduction, please see the agenda entry for 17 March in the morning.
For a description please see the tutorials page.
|Friday, 19 March 2021|
Please use the "Join_Main" dial in on your personal page to attend.
Dynap-SE1 Demo Sessionshow talk video (YouTube) (local version)
|Yigit Demirag (The Institute of Neuroinformatics, UZH and ETH Zürich)|
Remote demo of the Dynap-SE board(the talk video is accessible for meeting attendants via their personal page)
|Dmitrii Zendrikov (Institute of Neuroinformatics, UZH and ETH Zurich)|
DYNAP-SE tutorial session 2: Simulating Dynap-SE1show talk video (YouTube) (local version)
|Yigit Demirag (The Institute of Neuroinformatics, UZH and ETH Zürich)|
NICE - day IV
Please use the "Join_Main" dial in on your personal page to attend.
Keynote: Bottom-up and top-down neuromorphic processor design: Unveiling roads to embedded cognitionshow presentation.pdf (public accessible), show talk video (YouTube) (local version)
While Moore’s law has driven exponential computing power expectations, its nearing end calls for new roads to embedded cognition. The field of neuromorphic computing aims at a paradigm shift compared to conventional von-Neumann computers, both for the architecture (i.e. memory and processing co-location) and for the data representation (i.e. spike-based event-driven encoding). However, it is unclear which of the bottom-up (neuroscience-driven) or top-down (application-driven) design approaches could unveil the most promising roads to embedded cognition. In order to clarify this question, this talk is divided into two parts.
The first part focuses on the bottom-up approach. From the building-block level to the silicon integration, we design two bottom-up neuromorphic processors: ODIN and MorphIC. We demonstrate with silicon measurement results that hardware-aware neuroscience model design and selection allows reaching record neuron and synapse densities with low-power operation. However, the inherent difficulty for bottom-up designs lies in applying them to real-world problems beyond the scope of neuroscience-oriented applications.
The second part investigates the top-down approach. By starting from the applicative problem of adaptive edge computing, we derive the direct random target projection (DRTP) algorithm for low-cost neural network training and design a top-down DRTP-enabled neuromorphic processor: SPOON. We demonstrate with silicon measurement results that combining event-driven and frame-based processing with weight-transport-free update-unlocked training supports low-cost adaptive edge computing with spike-based sensors. However, defining a suitable target for bio-inspiration in top-down designs is difficult, as it underlies both the efficiency and the relevance of the resulting neuromorphic device.
Therefore, we claim that each of these two design approaches can act as a guide to address the shortcomings of the other.
|Charlotte Frenkel (Institute of Neuroinformatics, Zürich, Switzerland)|
Lightning talk: Subspace Locally Competitive Algorithmsshow presentation.pdf (public accessible), show talk video (YouTube) (local version)
We introduce subspace locally competitive algorithms (SLCAs), a family of novel network architectures for modeling latent representations of natural signals with group sparse structure. SLCA first layer neurons are derived from locally competitive algorithms, which produce responses and learn representations that are well matched to both the linear and non-linear properties observed in simple cells in layer 4 of primary visual cortex (area V1). SLCA incorporates a second layer of neurons which produce approximately invariant responses to signal variations that are linear in their corresponding subspaces, such as phase shifts, resembling a hallmark characteristic of complex cells in V1. We provide a practical analysis of training parameter settings, explore the features and invariances learned, and finally compare the model to single-layer sparse coding and to independent subspace analysis.
|Dylan Paiton (University of Tübingen)|
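For readers unfamiliar with the first-layer dynamics that SLCA builds on, here is a minimal single-layer locally competitive algorithm: membrane potentials integrate feedforward drive, lateral inhibition implements competition, and a soft threshold yields a sparse code. This is an illustrative sketch, not the two-layer SLCA model itself.

```python
import numpy as np

def soft_threshold(u, lam):
    return np.where(np.abs(u) > lam, u - lam * np.sign(u), 0.0)

def lca(x, Phi, lam=0.1, dt=0.1, steps=200):
    """Single-layer LCA: membrane potentials u integrate feedforward
    drive b and lateral inhibition G @ a; the active coefficients a are
    a soft-thresholded copy of u."""
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral competition
    b = Phi.T @ x                            # feedforward drive
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += dt * (b - u - G @ a)
    return soft_threshold(u, lam)

# With an orthonormal dictionary and an input aligned to one element,
# the resulting code is maximally sparse.
Phi = np.eye(4)[:, :3]
a = lca(2.0 * Phi[:, 0], Phi)
```

SLCA's second layer would then pool such first-layer responses within subspaces to obtain approximately invariant units.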
Programming neuromorphic computers: PyNN and beyondshow presentation.pdf (public accessible)
PyNN is a Python API for describing spiking neuronal networks consisting of point neurons, with synaptic plasticity. The API is intended to be independent of the underlying simulator or hardware platform: PyNN models can run on traditional simulators such as NEST, NEURON and Brian, GPU-based simulators such as GeNN, and neuromorphic hardware systems such as BrainScaleS and SpiNNaker. In this talk I will present the current state of PyNN and forthcoming extensions, in particular support for multicompartmental models, intracellular calcium dynamics, and structural plasticity.
|Andrew Davison (CNRS)|
Lightning talk: Neuromorphic Graph Algorithms: Cycle Detection, Odd Cycle Detection, and Max Flowshow talk video (YouTube) (local version)
Recently, neuromorphic systems have been applied outside of the arena of machine learning, primarily in the field of graph algorithms. Neuromorphic systems have been shown to perform graph algorithms faster and with lower power consumption than their traditional (GPU/CPU) counterparts, and are hence an attractive option for a co-processing unit in future high performance computing systems, where graph algorithms play a critical role. In this talk, I present a primer on several graph algorithms (cycle detection, odd cycle detection, and the Ford-Fulkerson max-flow algorithm) along with their neuromorphic implementations.
|William Kay (Oak Ridge National Laboratory)|
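The flavor of these neuromorphic graph algorithms can be conveyed with a conventional simulation of the spike-wave idea: one neuron per vertex, a spike injected at a source vertex propagating one hop per time step, and a returning wavefront signaling a cycle. This is a hypothetical CPU sketch of the principle, not the Oak Ridge neuromorphic implementation.

```python
# One "neuron" per vertex: a spike injected at a source vertex propagates
# along directed edges, one hop per time step. If the wavefront ever
# returns to the source, the source lies on a directed cycle.
def on_cycle(adj, source, n_vertices):
    active = {source}                     # vertices spiking this step
    for _ in range(n_vertices):           # any cycle closes within |V| steps
        active = {v for u in active for v in adj.get(u, [])}
        if source in active:
            return True
        if not active:                    # wavefront died out
            return False
    return False

def has_cycle(adj, n_vertices):
    return any(on_cycle(adj, v, n_vertices) for v in range(n_vertices))

# 0 -> 1 -> 2 -> 0 forms a cycle; the second graph is acyclic.
cyclic = {0: [1], 1: [2], 2: [0], 3: [4]}
acyclic = {0: [1], 1: [2], 3: [4]}
```

On a spiking substrate the set operations above become parallel, event-driven spike deliveries, which is where the speed and energy advantages cited in the abstract come from.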
BrainScaleS: Development Methodologies and Operating Systemshow presentation.pdf (public accessible), show talk video (YouTube) (local version)
The BrainScaleS (BSS) neuromorphic architectures are based on the analog emulation of neuro-synaptic behavior. Neuronal membrane voltages are represented as voltages, and model dynamics evolve in a time-continuous manner. Compared to biology, the systems run at a typical speedup factor of 1000–10000. This enables the evaluation of effects on long timescales and experiments with many trials. Simultaneously, BSS focuses on model configurability and flexibility in plasticity, experiment control and data handling. On BSS-2, this flexibility is facilitated by an embedded SIMD microprocessor located next to the analog neural network core.
The extended configurability, the inclusion of embedded programmability, and the horizontal scalability of the systems induce additional complexity. Challenges arise in areas such as initial experiment configuration and runtime control, reproducibility and robustness. We present operation and development methodologies implemented for the BSS neuromorphic architectures and walk through the individual components constituting the software stack for BSS platform operation.
|Eric Müller (Heidelberg University)|
Lightning talk: Evolving Spiking Neural Networks for Robot Sensory-motor Decision Tasks of Varying Difficultyshow presentation.pdf (public accessible), show talk video (YouTube) (local version)
While there is considerable enthusiasm for the potential of spiking neural network (SNN) computing, there remains the fundamental issue of designing the topologies and parameters for these networks. We say the topology IS the algorithm. Here, we describe experiments using evolutionary computation (genetic algorithms, GAs) on a simple robotic sensory-motor decision task using a gene-driven topology growth algorithm and letting the GA set all the SNN's parameters. We highlight lessons learned from early experiments where evolution failed to produce designs beyond what we called "cheap-tricksters". These were simple topologies implementing decision strategies that could not satisfactorily solve tasks beyond the simplest, but were nonetheless able to outcompete more complex designs in the course of evolution. The solution involved alterations to the fitness function so as to reduce the inherent noise in the assessment of performance, adding gene-driven control of the symmetry of the topology, and improving the robot sensors to provide more detailed information about its environment. We show how some subtle variations in the topology and parameters can affect behaviors. We discuss an approach to gradually increasing the complexity of the task that can induce evolution to discover more complex designs. We conjecture that this type of approach will be important as a way to discover cognitive design principles.
|J. David Schaffer (Binghamton University)|
Relational Neurogenesis for Lifelong Learning Agentsshow presentation.pdf (public accessible), show talk video (YouTube) (local version)
Tej Pandit and Dhireesha Kudithipudi
Reinforcement learning systems have shown tremendous potential in being able to model meritorious behavior in virtual agents and robots. The ability to learn through continuous reinforcement and interaction with an environment negates the requirement of painstakingly curated datasets and hand-crafted features. However, the ability to learn multiple tasks in a sequential manner, referred to as lifelong or continual learning, remains unresolved. Current implementations either concentrate on preserving information in fixed-capacity networks, or propose incrementally growing networks which randomly search through an unconstrained solution space. This presentation discusses a novel algorithm for continual learning using neurogenesis in reinforcement learning agents. It builds upon existing neuroevolutionary techniques, and incorporates several new mechanisms for limiting the memory resources while expanding neural network learning capacity. The algorithm is tested on a custom set of sequential virtual environments which emulate meaningful real-world scenarios, such as forest fires.
|Tej Pandit (University of Texas at San Antonio)|
Lightning talk: Fast and deep neuromorphic learning with first-spike coding(the presentation .pdf is accessible for meeting attendants from their 'personal page'), show talk video (YouTube) (local version)
For a biological agent operating under environmental pressure, energy consumption and reaction times are of critical importance. Engineered systems likewise strive for short time-to-solution and low energy-to-solution characteristics, but current machine learning solutions struggle to meet especially the latter goal. In biology, at the level of the neuronal implementation, the two goals imply achieving the desired results with as few and as early spikes as possible. In the time-to-first-spike coding framework, both of these goals are inherently emerging features of learning. We describe a rigorous derivation of learning such first-spike times in networks of leaky integrate-and-fire neurons, relying solely on input and output spike times, and show how it can implement error backpropagation in hierarchical spiking networks. Furthermore, we emulate our framework on the BrainScaleS-2 neuromorphic system and demonstrate its capability of harnessing the chip's speed and energy characteristics to solve the typical machine learning problem of image classification. Finally, we examine how our approach generalizes to other neuromorphic platforms by studying how its performance is affected by typical distortive effects induced by neuromorphic substrates.
|Julian Goeltz (Kirchhoff Institut fuer Physik, Universitaet Heidelberg)|
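The coding scheme itself is easy to state: stronger inputs spike earlier, and the earliest-spiking output neuron determines the class. The snippet below sketches time-to-first-spike encoding and decoding in plain NumPy; the linear latency map is an illustrative assumption, not the paper's exact leaky integrate-and-fire derivation or its BrainScaleS-2 implementation.

```python
import numpy as np

def ttfs_encode(intensities, t_max=10.0, eps=1e-6):
    """Map intensities in [0, 1] to first-spike times in (0, t_max]:
    stronger inputs spike earlier (linear latency map, an illustrative
    choice)."""
    x = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x) + eps

def decode_label(first_spike_times):
    """Classification readout: the earliest-spiking output neuron wins."""
    return int(np.argmin(first_spike_times))

times = ttfs_encode([0.9, 0.1])   # the bright pixel spikes before the dim one
```

Because only the first spike of each neuron carries information, inference can terminate as soon as one output neuron fires, which is the source of the time and energy benefits described above.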
Lightning talk: Neuromorphic Computing for Spacecraft’s Terrain Relative Navigation: A Case of Event-Based Crater Classification Taskshow presentation.pdf (public accessible), (the talk video is accessible for meeting attendants via their personal page)
Terrain relative navigation is a key technology to enhance conventional spacecraft navigation systems for accurate landing on a planetary body. Since the navigation task is self-localization based on terrain information, computer vision tasks using terrain images are often used for feature extraction and matching. Although the navigation system requires real-time, onboard processing capability due to the high-speed descent and the communication propagation delay, the processing performance of space-grade computers is about two orders of magnitude lower than that of commercial ones. This decline in performance is caused by power constraints and by the radiation hardening required in the space environment. Neuromorphic computing architectures may meet this need in terms of power consumption and processing speed.
In this study, we investigate the applicability of neuromorphic computing systems for a crater classification as a function of terrain relative navigation. The navigation system consists of a spiking neural network that processes the classification task and an event-based camera that provides terrain information as input to the network. Results show that the system can classify craters with very low power consumption while maintaining performance comparable to existing computing architectures.
|Kazuki Kariya (The Graduate University for Advanced Studies, SOKENDAI)|
Beyond Backprop: Different Approaches to Credit Assignment in Neural Netsshow presentation.pdf (public accessible), show talk video (YouTube) (local version)
The backpropagation algorithm (backprop) has been the workhorse of neural net learning for several decades, and its practical effectiveness is demonstrated by the recent successes of deep learning in a wide range of applications. This approach uses chain-rule differentiation to compute gradients in state-of-the-art learning algorithms such as stochastic gradient descent (SGD) and its variations. However, backprop has several drawbacks as well, including the vanishing and exploding gradients issue, the inability to handle non-differentiable nonlinearities and to parallelize weight updates across layers, and biological implausibility. These limitations continue to motivate the exploration of alternative training algorithms, including several recently proposed auxiliary-variable methods which break the complex nested objective function into local subproblems. However, those techniques are mainly offline (batch), which limits their applicability to extremely large datasets, as well as to online, continual or reinforcement learning. The main contribution of our work is a novel online (stochastic/mini-batch) alternating minimization (AM) approach for training deep neural networks, together with the first theoretical convergence guarantees for AM in stochastic settings and promising empirical results on a variety of architectures and datasets.
|Irina Rish (MILA / Université de Montréal )|
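The local-subproblem structure that alternating minimization exploits can be seen in its simplest instance, alternating least squares for matrix factorization: fixing one block of variables makes the subproblem for the other block exactly solvable, so the objective decreases monotonically. This toy is chosen for illustration only; the talk concerns online AM for deep networks.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 15))     # data to factorize as X ~ W @ H
k = 5
W = rng.normal(size=(20, k))
H = rng.normal(size=(k, 15))

def residual(W, H):
    return np.linalg.norm(X - W @ H)

err_start = residual(W, H)
for _ in range(50):
    # Each half-step is an exactly solvable local least-squares subproblem.
    W = np.linalg.lstsq(H.T, X.T, rcond=None)[0].T   # fix H, solve for W
    H = np.linalg.lstsq(W, X, rcond=None)[0]         # fix W, solve for H
err_end = residual(W, H)
```

Deep-net AM replaces these closed-form block updates with per-layer subproblems over auxiliary activation variables; the online variant in the talk further replaces full-batch solves with stochastic mini-batch updates.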
Lightning talk: Comparing Neural Accelerators & Neuromorphic Architectures: The False Idol of Operationsshow presentation.pdf (public accessible), (the talk video is accessible for meeting attendants via their personal page)
Accompanying the advanced computing capabilities neural networks are enabling across a suite of application domains, there is a resurgence of interest in understanding what architectures can efficiently serve these computational demands. Both neural accelerators and neuromorphic approaches are emerging at different scales, resource requirements, and enabling capabilities. Beyond the similarity of executing neural network workloads, these two paradigms exhibit significant differences. Accordingly, here we compare neural accelerators and neuromorphic architectures, highlighting that operation counts alone are an insufficient measure of performance.
|Craig Vineyard (Sandia National Laboratories )|
Real-time Mapping on a Neuromorphic Processorshow talk video (YouTube) (local version)
Navigation is so crucial for our survival that the brain hosts a dedicated network of neurons to map our surroundings. Place cells, grid cells, border cells, head direction cells and other specialized neurons in the hippocampus and the cortex work together in planning and learning maps of the environment [1]. When faced with similar navigation challenges, robots have an equally important need for generating a stable and accurate map. In our ongoing effort to translate the biological network for spatial navigation into a spiking neural network (SNN) that controls mobile robots in real-time, we first focused on simultaneous localization and mapping (SLAM), one of the critical problems in robotics that relies heavily on the accuracy of the map representation [2]. Our approach allows us to leverage the asynchronous computing paradigm commonly found across brain areas and has already been demonstrated to be a significantly energy-efficient solution for 1D SLAM [3], one that can spur the adoption of new neuromorphic processors such as Intel's Loihi [4] and IBM's TrueNorth [5]. In this paper, we expand our previous work by proposing a SNN that forms a cognitive map of an unknown environment and is seamlessly integrated with Loihi.
[1] S. Poulter, T. Hartley, and C. Lever, "The neurobiology of mammalian navigation," Current Biology, vol. 28, no. 17, pp. R1023-R1042, 2018.
[2] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, no. 1, p. 34, 2007.
[3] G. Tang, A. Shah, and K. P. Michmizos, "Spiking neural network on neuromorphic hardware for energy-efficient unidimensional SLAM," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 2019, pp. 1-6.
[4] M. Davies et al., "Loihi: A neuromorphic manycore processor with on-chip learning," IEEE Micro, vol. 38, no. 1, pp. 82-99, 2018.
[5] P. A. Merolla et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science, vol. 345, no. 6197, pp. 668-673, 2014.
|Konstantinos Michmizos (Rutgers University)|
Open mic / Wrap up
Farewell .... and See you next year...
End of NICE 2021