NICE 2021 - Agenda
|Tuesday, 16 March 2021|
(The NICE#8 was initially planned to take place in 2020, but had to be postponed to 2021 due to COVID-19.)
Note: the exact order and timing of the talks are still preliminary!
NEUROTECH Forum (15 March 2021)
Info: just before NICE, on 15 March 2021, the NEUROTECH Forum II will take place online (free of charge; registration for the forum event is required). The topic is "Neuromorphic Computing Technologies: Opportunities, Challenges and Applications Roadmap".
Welcome to NICE #8
Keynote: Lessons from Loihi for the Future of Neuromorphic Computing
The past three years have seen significant progress in neuromorphic computing. The availability of Loihi has enabled a community of over 100 research groups around the world to evaluate a wide range of neuro-inspired algorithms and applications with a neuromorphic chip and toolchain that is sufficiently mature to support meaningful benchmarking. So far these efforts have yielded a number of compelling results, for example in the domains of combinatorial optimization and event-based sensing, control, and learning, while highlighting the opportunities and challenges the field faces for delivering real-world technological value over both the near and long term. This talk surveys the most important results and perspectives we've obtained with Loihi to date.
|Mike Davies (Intel)|
Why is Neuromorphic Event-based Engineering the future of AI?
|Ryad Benjamin Benosman (UPITT/CMU/SORBONNE)|
The BrainScaleS mobile platform
BrainScaleS is an analog, accelerated neuromorphic hardware architecture. Originally devised to emulate learning in the brain using spike-based models, its research scope has significantly broadened. The most recent addition to the BrainScaleS architecture extends the analog neuron operation to include rate-based modeling. In this talk we will present the results from the nationwide competition "Energieeffizientes KI-System" (energy-efficient AI system), organized by the German federal ministry of education and research (BMBF) during 2020. The ASIC developed during this competition successfully demonstrated the implementation of an energy-efficient rate-based DCNN using analog vector-matrix multiplications. To the best of our knowledge, this is the first time analog computing has been benchmarked in silico by an independent entity with real-world data unknown to the researchers before the evaluation. This talk will present the different technical solutions that made the successful conclusion of the task possible, including software and training aspects.
|Johannes Schemmel (Heidelberg University)|
Group photo (zoom screenshots)
Evaluating complexity and resilience trade-offs in emerging memory inference machines
|Christopher Bennett (Sandia National Labs)|
Lightning talk: From clean room to machine room: towards accelerated cortical simulations on the BrainScaleS wafer-scale system
The BrainScaleS system follows the principle of so-called "physical modeling", wherein the dynamics of VLSI circuits are designed to emulate the dynamics of their biological archetypes. Neurons and synapses are implemented by analog circuits that operate in continuous time, governed by time constants which arise from the properties of the transistors and capacitors on the microelectronic substrate. This defines our intrinsic hardware acceleration factor of 10000 with respect to biological real-time. The system evolved over more than ten years from a lab prototype to a larger installation of several wafer modules. The talk reflects on the development process and the lessons learned, and summarizes the recent progress in commissioning and operating the BrainScaleS system. The current state of the endeavor is demonstrated on the example of wafer-scale emulations of functional neural networks.
|Sebastian Schmitt (Heidelberg University)|
Closed-loop experiments on the BrainScaleS-2 architecture
The evolution of biological brains has always been contingent on their embodiment within their respective environments, in which survival required appropriate navigation and manipulation skills. Studying such interactions thus represents an important aspect of computational neuroscience and, by extension, a topic of interest for neuromorphic engineering. In the talk, we present three examples of embodiment on the BrainScaleS-2 architecture, in which dynamical timescales of both agents and environment are accelerated by several orders of magnitude with respect to their biological archetypes.
|Korbinian Schreiber (Heidelberg University)|
Batch << 1: Why Neuromorphic Computing Architectures Suit Real-Time Workloads
|Jonathan Tapson (GrAI Matter Labs)|
Neuromorphic and AI research at BCAI (Bosch Center for Artificial Intelligence)
|Thomas Pfeil (Bosch Center for Artificial Intelligence)|
Mapping Deep Neural Networks on SpiNNaker2
|Florian Kelber (TU Dresden)|
Open mic / discussion
End of day I
Tutorials: BrainScaleS and DYNAP-SE
Two tutorials/hands-on sessions in parallel:
For a description please see the tutorials page.
|Wednesday, 17 March 2021|
Tutorial: SpiNNaker hands-on
(Note: the same SpiNNaker hands-on tutorial is also offered on Thursday, 21:00 - 22:30h CET)
For a description please see the tutorials page.
|Andrew Rowley (UMAN)|
NICE - day II
Keynote: From Brains to Silicon -- Applying lessons from neuroscience to machine learning
In this talk we will review some of the latest neuroscience discoveries and suggest how they describe a roadmap to achieving true machine intelligence. We will then describe our progress of applying one neuroscience principle, sparsity, to existing deep learning networks. We show that sparse networks are significantly more resilient and robust than traditional dense networks. With the right hardware substrate, sparsity can also lead to significant performance improvements. On an FPGA platform our sparse convolutional network runs inference 50X faster than the equivalent dense network on a speech dataset. In addition, we show that sparse networks can run efficiently on small power-constrained embedded chips that cannot run equivalent dense networks. We conclude our talk by proposing that neuroscience principles implemented on the right hardware substrate offer the only feasible path to scalable intelligent systems.
|Jeff Hawkins and Subutai Ahmad (Numenta)|
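As an illustration of the sparsity principle described in the abstract (a minimal NumPy sketch under assumed details, not Numenta's actual implementation): a layer keeps only a fixed random fraction of its weights and, at inference time, only the k largest activations per sample. All function names and parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_layer(n_in, n_out, weight_density=0.2):
    """Dense weight matrix multiplied by a fixed random sparsity mask."""
    w = rng.standard_normal((n_in, n_out)) * 0.1
    mask = rng.random((n_in, n_out)) < weight_density
    return w * mask, mask

def k_winners(x, k):
    """Keep the k largest activations per row, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(x, axis=1)[:, -k:]          # indices of top-k per row
    np.put_along_axis(out, idx, np.take_along_axis(x, idx, axis=1), axis=1)
    return out

w, mask = sparse_layer(64, 32, weight_density=0.2)
x = rng.standard_normal((4, 64))
a = k_winners(x @ w, k=8)   # at most 8 active units per sample
```

On hardware that can skip the zeroed weights and activations, this weight-and-activation sparsity is what enables the kind of speedups the talk reports.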
A Neuromorphic Future for Classic Computing Tasks
|Brad Aimone (Sandia National Laboratories)|
Lightning talk: Benchmarking of Neuromorphic Hardware Systems
With more and more neuromorphic hardware systems for the acceleration of spiking neural networks available in science and industry, there is a demand for platform comparison and performance estimation of such systems. This work describes selected benchmarks implemented in a framework with exactly this target: independent black-box benchmarking and comparison of platforms suitable for the simulation/emulation of spiking neural networks.
|Christoph Ostrau (Bielefeld University)|
Natural density cortical models as benchmarks for universal neuromorphic computers
Throughout evolution, the cortex has increased in volume from mouse to man by three orders of magnitude, while the architecture at the local scale of a cubic millimeter has largely been conserved in terms of the multi-layered structure and the density of synapses. Furthermore, local cortical networks are similar, independent of whether an area processes visual, auditory, or tactile information. This dual universality raises hope that fundamental principles of cortical computation can be discovered. Although a coherent view of these principles still remains missing, the universality motivated researchers more than a decade ago to start developing neuromorphic computing systems based on the interaction between neurons by delayed point events and basic parameters of cortical architecture.
These systems need to be verified in the sense of accurately representing cortical dynamics, and validated in the sense of simulating faster or with less energy than software solutions on conventional computers. Such comparisons are only meaningful if they refer to implementations of the same neuronal network model. The role of models thus changes from mere demonstrations of functionality to quantitative benchmarks. In fields of computer science like computer vision and machine learning, the definition of benchmarks helps to quantify progress and drives a constructive competition between research groups. The talk argues that neuromorphic computing needs to advance the development of benchmarks of increasing size and complexity.
A model of the cortical microcircuit exemplifies the recent interplay and co-design of alternative hardware architectures enabled by a common benchmark. The model represents neurons with their natural number of synapses and at the same time captures the natural connection probability between neurons in the local volume. Consequently, all questions on the proper scaling of network parameters become irrelevant. The model constitutes a milestone for neuromorphic hardware systems, as larger cortical models are necessarily less densely connected.
As metrics we discuss the energy consumed per synaptic event and the real-time factor. We illustrate the progress in the past few years and show that a single conventional compute node still keeps up with neuromorphic hardware and achieves sub real-time performance. Finally, the talk exposes the limitations of the microcircuit model as a benchmark and positions cortical multi-area models as a biologically meaningful way of upscaling benchmarks to the next problem size.
This work is partially supported by the European Union's Horizon 2020 (H2020) funding framework under grant agreement no. 945539 (Human Brain Project SGA3) and the Helmholtz Association Initiative and Networking Fund under project number SO-092 (Advanced Computing Architectures, ACA).
|Markus Diesmann (Forschungszentrum Jülich GmbH)|
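The two metrics the talk discusses can be written down directly (a sketch; the run figures below are hypothetical, not results from the talk):

```python
def realtime_factor(wall_clock_seconds, modeled_biological_seconds):
    """Wall-clock time divided by modeled biological time.
    A value < 1 means sub-real-time (faster than biology)."""
    return wall_clock_seconds / modeled_biological_seconds

def energy_per_synaptic_event(total_energy_joules, synaptic_events):
    """Average energy cost of delivering one spike across one synapse."""
    return total_energy_joules / synaptic_events

# hypothetical run: 10 s of biological time simulated in 8 s,
# consuming 200 J over 1e9 synaptic events
rtf = realtime_factor(8.0, 10.0)                       # 0.8 -> sub real time
epe = energy_per_synaptic_event(200.0, 1_000_000_000)  # 2e-7 J per event
```

Both metrics only permit fair comparison when the competing platforms implement the same network model, which is the central argument of the talk.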
Poster Lightning Talks
1 min - 1 slide poster appetizers
Poster session A and coffee
Platform-Agnostic Neural Algorithm Composition using Fugu
|William Severa (Sandia National Laboratories)|
Lightning talk: Implementing Backpropagation for Learning on Neuromorphic Spiking Hardware
Many contemporary advances in the theory and practice of neural networks are inspired by our understanding of how information is processed by natural neural systems. However, the basis of modern deep neural networks remains the error backpropagation algorithm, which, though founded in rigorous mathematical optimization theory, has not been successfully demonstrated in a neurophysiologically inspired (neuromorphic) circuit. In a recent study, we proposed a neuromorphic architecture for learning that tunes the propagation of information forward and backwards through network layers using a timing mechanism controlled by a synfire-gated synfire chain (SGSC). This architecture was demonstrated in simulation of firing rates in a current-based neuronal network. In this follow-on study, we present a spiking backpropagation algorithm based on this architecture, but including several new mechanisms that enable implementation of the backpropagation algorithm using neuromorphic spiking units. We demonstrate the function of this architecture learning an XOR logic circuit and numerical character recognition with the MNIST dataset on Intel's Loihi neuromorphic chip.
|Andrew Sornborger (Los Alamos National Laboratory)|
Inductive bias transfer between brains and machines
Machine learning, in particular computer vision, has made tremendous progress in recent years. On standardized datasets, deep networks now frequently achieve close-to-human or super-human performance. However, despite this enormous progress, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called "inductive bias", determines how well any learning algorithm, or brain, generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. I will give an overview of some conceptual ideas and preliminary results on how the rapid increase of neuroscientific data could be used to transfer low-level inductive biases from the brain to learning machines.
|Fabian Sinz (University Tübingen)|
Lightning talk: Spike Latency Reduction generates Efficient Predictive Coding
Latency reduction in postsynaptic spikes is a well-known effect of spike-timing-dependent plasticity (STDP). We expand this notion to long postsynaptic spike trains on single neurons, showing that, for a fixed input spike train, STDP reduces the number of postsynaptic spikes and concentrates the remaining ones. Then, we study the consequences of this phenomenon in terms of coding, finding that this mechanism improves the neural code by increasing the signal-to-noise ratio and lowering the metabolic costs of frequent stimuli. Finally, we illustrate that the reduction in postsynaptic latencies can lead to the emergence of predictions.
|Pau Vilimelis Aceituno (ETH Zürich)|
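For readers unfamiliar with the mechanism the abstract builds on, here is a sketch of a generic pair-based STDP window (illustrative only; the parameter values are arbitrary and not those of the talk): causal pre-before-post pairings strengthen a synapse, so the neuron reaches threshold earlier on repeated presentations, which is the latency reduction in question.

```python
import math

def stdp_update(w, dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0, w_max=1.0):
    """Pair-based STDP. dt_ms = t_post - t_pre:
    pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    if dt_ms > 0:
        w += a_plus * math.exp(-dt_ms / tau_ms) * (w_max - w)   # LTP
    else:
        w -= a_minus * math.exp(dt_ms / tau_ms) * w             # LTD
    return min(max(w, 0.0), w_max)

w_pot = stdp_update(0.5, dt_ms=5.0)    # causal pairing -> stronger synapse
w_dep = stdp_update(0.5, dt_ms=-5.0)   # acausal pairing -> weaker synapse
```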
Lightning talk: Cognitive Domain Ontologies: HPCs to Ultra Low Power Neuromorphic Platforms
Cognitively Enhanced Complex Event Processing (CECEP) is an agent-based decision-making architecture. Within this agent, the Cognitive Domain Ontology (CDO) component is the slowest for most applications. We show that even after acceleration on a high-performance server-based computing system enhanced with a high-end graphics processing unit (GPU), the CDO component does not scale well for real-time use on large problem sizes. Thus, to enable real-time use of the agent, particularly in power-constrained environments (such as autonomous air vehicles), alternative implementations of the agent logic are needed. The objective of this work was to carry out an initial design-space search of algorithms and hardware for decision making through the domain-knowledge component of CECEP. Several algorithmic and circuit approaches are proposed that span six hardware options of varying power consumption and weight (ranging from over 1000 W to less than 1 W). The algorithms range from exact solution producers optimized for running on a cluster of high-performance computing systems to approximate solution producers running fast on low-power neuromorphic hardware. The loss in accuracy for the approximate approaches is minimal, making them well suited to SWaP-constrained (size, weight, and power) systems, such as UAVs. The exact solution approach on an HPC gives confidence that the best answer has been evaluated (although this may take some time to generate).
|Chris Yakopcic (University of Dayton)|
Open mic / discussion
|Thursday, 18 March 2021|
Tutorial: BrainScaleS hands-on
(Note: the same BrainScaleS hands-on tutorial is also offered on Tuesday evening)
For a description of the pre-requirements, please see the tutorials page.
NICE - day III
Keynote: Biological inspiration for improving computing and learning in spiking neural networks
The talk will address three new methods:
For details see:
|Wolfgang Maass (Graz University of Technology)|
On the computational power and complexity of Spiking Neural Networks
|Johan Kwisthout (Radboud Universiteit Nijmegen)|
Evolutionary Optimization for Neuromorphic Systems
Designing and training an appropriate spiking neural network for neuromorphic deployment remains an open challenge in neuromorphic computing. In 2016, we introduced an approach for utilizing evolutionary optimization to address this challenge called Evolutionary Optimization for Neuromorphic Systems (EONS). In this work, we present an improvement to this approach that enables rapid prototyping of new applications of spiking neural networks in neuromorphic systems. We discuss the overall EONS framework and its improvements over the previous implementation. We present several case studies of how EONS can be used, including to train spiking neural networks for classification and control tasks, to train under hardware constraints, to evolve a reservoir for a liquid state machine, and to evolve smaller networks using multi-objective optimization.
|Catherine Schuman (Oak Ridge National Laboratory)|
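The general shape of such an evolutionary loop can be sketched in a few lines (a generic mutation-and-selection scheme for intuition only; this is not the actual EONS framework or its API, and the fitness function is a toy stand-in for a spiking-network evaluation):

```python
import random

random.seed(0)  # deterministic for this sketch

def evolve(fitness, genome_len=8, pop_size=20, generations=40, sigma=0.2):
    """Keep the fittest quarter as parents, refill the population
    with mutated copies, repeat for a fixed number of generations."""
    pop = [[random.gauss(0.0, 1.0) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 4]
        pop = parents + [
            [g + random.gauss(0.0, sigma) for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

def toy_fitness(genome):
    # toy objective: drive every gene towards 0.5
    return -sum((g - 0.5) ** 2 for g in genome)

best = evolve(toy_fitness)
```

In EONS the genome encodes network structure and parameters rather than a flat vector, which is what allows the same loop to respect hardware constraints or optimize multiple objectives.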
An event-based gas sensing device that resolves fast transients in a turbulent environment
|Michael Schmuker (University of Hertfordshire)|
Sequence learning, prediction, and generation in networks of spiking neurons
Sequence learning, prediction and generation have been proposed to be the universal computation performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes this form of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context-specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms.
Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns non-Markovian sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific feedforward subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context-specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences.
By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay. We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.
|Younes Bouhadjar (Forschungszentrum Juelich)|
Poster session B and coffee
|Walter Senn (Universität Bern)|
Conductance-based dendrites perform reliability-weighted opinion pooling
Cue integration, the combination of different sources of information to reduce uncertainty, is a fundamental computational principle of brain function. Starting from a normative model we show that the dynamics of multi-compartment neurons with conductance-based dendrites naturally implement the required probabilistic computations. The associated error-driven plasticity rule allows neurons to learn the relative reliability of different pathways from data samples, approximating Bayes-optimal observers in multisensory integration tasks. Additionally, the model provides a functional interpretation of neural recordings from multisensory integration experiments and makes specific predictions for membrane potential and conductance dynamics of individual neurons.
|Jakob Jordan (Institute of Physiology, University of Bern)|
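The normative model the abstract starts from is classical Bayes-optimal cue combination: each Gaussian cue is weighted by its reliability, i.e. its inverse variance. A minimal sketch (function names are ours, not from the talk):

```python
def fuse_cues(estimates, variances):
    """Bayes-optimal fusion of independent Gaussian cues.
    Each cue's weight is its reliability 1/sigma^2; the fused
    estimate is the reliability-weighted mean."""
    weights = [1.0 / v for v in variances]
    mu = sum(w * m for w, m in zip(weights, estimates)) / sum(weights)
    var = 1.0 / sum(weights)   # fused variance is always <= the best single cue
    return mu, var

# two equally reliable cues -> fused estimate is their mean, variance halved
mu, var = fuse_cues([2.0, 4.0], [1.0, 1.0])
```

The talk's claim is that conductance-based dendritic compartments implement exactly this reliability weighting in their membrane dynamics, with plasticity learning the weights from data.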
Lightning talk: Natural gradient learning for spiking neurons
In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean gradient descent can easily lead to inconsistencies due to such parametrization dependence. The issues are solved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural gradient descent.
|Elena Kreutzer (University of Bern)|
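The contrast between Euclidean and natural gradient descent can be made concrete in a toy sketch (the Fisher metric below is given explicitly for illustration; it is not the neuronal derivation presented in the talk):

```python
import numpy as np

def natural_gradient_step(w, grad, fisher, lr=0.1, eps=1e-9):
    """One natural-gradient step: the Euclidean gradient is
    preconditioned by the inverse Fisher metric, which makes the
    update invariant to smooth reparametrizations of the weights."""
    return w - lr * np.linalg.solve(fisher + eps * np.eye(len(w)), grad)

w = np.array([1.0, 2.0])
g = np.array([0.5, -0.5])

# identity metric: reduces to plain (Euclidean) gradient descent
step_euclid = natural_gradient_step(w, g, np.eye(2))

# anisotropic metric: the step shrinks along the direction the
# metric deems "large", mimicking e.g. position along the dendrite
step_natural = natural_gradient_step(w, g, np.diag([4.0, 1.0]))
```

The biological phenomena mentioned in the abstract (dendritic democracy, multiplicative scaling) correspond, in this picture, to the metric correcting for where a synapse sits on the tree.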
Making spiking neurons more succinct with multi-compartment models
|Johannes Leugering (Fraunhofer IIS)|
Lightning talk: The Computational Capacity of Mem-LRC Reservoirs
|Forrest Sheldon (Los Alamos National Lab - T-4/CNLS)|
Open mic / discussion
Tutorial: SpiNNaker hands-on
(Note: the same SpiNNaker hands-on tutorial is also offered on Wednesday, 10:30 - 12:00h CET)
For a description please see the tutorials page.