NICE 2021 - Agenda

(Agenda as of 2024/04/25-16:38 CEST)
Tuesday, 16 March 2021

NICE #8


Talk videos

  • Talk videos that the speakers made available for public viewing are linked in the agenda. Some videos are accessible only to meeting attendees (from their personal page)
  • Public videos are also available on the NICE YouTube channel, in the NICE 2021 playlist

Chatserver

We have a chat server for the workshop: please use the username and initial password from your meeting's 'personal page' (URL in your email). Please ask talk-related questions in the respective talk channel (linked here in the agenda).

"Proceedings"

The proceedings for NICE 2021 are in fact the papers for the postponed NICE 2020. You can find papers for many of the presentations here: https://dl.acm.org/doi/proceedings/10.1145/3381755

CET: 14:00‑14:10
EDT: 09:00‑09:10
PDT: 06:00‑06:10
UTC: 13:00‑13:10
(10+5 min)
 
Welcome to NICE #8

show presentation.pdf (publicly accessible)

Link to chat channel

Johannes Schemmel (Heidelberg University)
CET: 14:15‑14:40
EDT: 09:15‑09:40
PDT: 06:15‑06:40
UTC: 13:15‑13:40
(25 min)
 
Organizer Round

Link to chat channel

CET: 14:40‑15:20
EDT: 09:40‑10:20
PDT: 06:40‑07:20
UTC: 13:40‑14:20
(40+5 min)
 
Keynote: Lessons from Loihi for the Future of Neuromorphic Computing

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

The past three years have seen significant progress in neuromorphic computing. The availability of Loihi has enabled a community of over 100 research groups around the world to evaluate a wide range of neuro-inspired algorithms and applications with a neuromorphic chip and toolchain that is sufficiently mature to support meaningful benchmarking. So far these efforts have yielded a number of compelling results, for example in the domains of combinatorial optimization and event-based sensing, control, and learning, while highlighting the opportunities and challenges the field faces for delivering real-world technological value over both the near and long term. This talk surveys the most important results and perspectives we've obtained with Loihi to date.

Link to chat channel

Mike Davies (Intel)
CET: 15:25‑15:45
EDT: 10:25‑10:45
PDT: 07:25‑07:45
UTC: 14:25‑14:45
(20+5 min)
 
Why is Neuromorphic Event-based Engineering the future of AI?

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

While neuromorphic vision sensors and processors are becoming more available and usable by non-experts, and although they outperform existing devices, especially in the case of sensing, there are still no successful commercial applications that have allowed them to overtake conventional computation and sensing. In this presentation, I will provide insights into the missing key steps that are preventing this new computational revolution from happening. I will give an overview of neuromorphic, event-based approaches to image sensing and processing, and how these have the potential to radically change current AI technologies and open new frontiers in building intelligent machines. I will focus on what is meant by event-based computation and the need to process information in the time domain rather than recycling old concepts such as images, backpropagation, and any form of frame-based approach. I will introduce new models of machine learning based on spike timings and show the importance of being compatible with neuroscience findings and recorded data. Finally, I will provide new insights on how to build neuromorphic neural processors able to run these new forms of AI, and the need to move to new architectural concepts.

Link to chat channel

Ryad Benjamin Benosman (UPITT/CMU/SORBONNE)
CET: 15:50‑16:10
EDT: 10:50‑11:10
PDT: 07:50‑08:10
UTC: 14:50‑15:10
(20+5 min)
 
Exploring the possibilities of analog neuromorphic computing with BrainScaleS

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

BrainScaleS is an analog accelerated neuromorphic hardware architecture. Originally devised to emulate learning in the brain using spike-based models, its research scope has significantly broadened. The most recent addition to the BrainScaleS architecture extends the analog neuron operation to include rate-based modeling. Using analog vector-matrix multiplications, the BrainScaleS-ASIC has successfully demonstrated its capability to process real-world data sets. The ASIC still keeps the full functionality of its analog event-based core, including hybrid plasticity. This talk will present results from the current BrainScaleS system, highlighting the flexibility that is possible with an analog neuromorphic substrate. The BrainScaleS system will serve as the analog neuromorphic platform in the upcoming EBRAINS research infrastructure of the European Community.
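
To make the rate-based mode concrete, here is a minimal NumPy sketch of a vector-matrix multiplication with the weight quantization, read-out noise, and saturation typical of an analog substrate. All bit widths and noise figures are invented for illustration; they are not BrainScaleS-2 specifications.

    import numpy as np

    rng = np.random.default_rng(0)

    def analog_mac(x, w, w_bits=6, noise_sd=0.02):
        """Toy analog multiply-accumulate: quantized weights, output noise,
        and a saturating output range (all parameters illustrative)."""
        levels = 2 ** (w_bits - 1)
        w_q = np.round(np.clip(w, -1, 1) * levels) / levels  # weight quantization
        y = (x @ w_q) / x.size                               # normalized analog MAC
        y += rng.normal(0.0, noise_sd, size=y.shape)         # analog read-out noise
        return np.clip(y, -1.0, 1.0)                         # range saturation

    x = rng.uniform(0, 1, 128)           # input activations
    w = rng.normal(0, 0.5, (128, 32))    # synaptic weights
    print(analog_mac(x, w)[:4])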

Link to chat channel

Johannes Schemmel (Heidelberg University)
CET: 16:15‑16:45
EDT: 11:15‑11:45
PDT: 08:15‑08:45
UTC: 15:15‑15:45
(30 min)
 
(break)
CET: 16:45‑16:55
EDT: 11:45‑11:55
PDT: 08:45‑08:55
UTC: 15:45‑15:55
(10 min)
 
Group photo (zoom screenshots)

Note: we want to publish the group photo publicly, so please switch on your camera if you agree to appear in the photo.

CET: 16:55‑17:15
EDT: 11:55‑12:15
PDT: 08:55‑09:15
UTC: 15:55‑16:15
(20+5 min)
 
Evaluating complexity and resilience trade-offs in emerging memory inference machines

show presentation.pdf (publicly accessible)

Link to chat channel

Christopher Bennett (Sandia National Labs)
CET: 17:20‑17:30
EDT: 12:20‑12:30
PDT: 09:20‑09:30
UTC: 16:20‑16:30
(10+5 min)
 
Lightning talk: From clean room to machine room: towards accelerated cortical simulations on the BrainScaleS wafer-scale system

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

The BrainScaleS system follows the principle of so-called "physical modeling", wherein the dynamics of VLSI circuits are designed to emulate the dynamics of their biological archetypes. Neurons and synapses are implemented by analog circuits that operate in continuous time, governed by time constants which arise from the properties of the transistors and capacitors on the microelectronic substrate. This defines our intrinsic hardware acceleration factor of 10000 with respect to biological real-time. The system evolved over more than ten years from a lab prototype to a larger installation of several wafer modules. The talk reflects on the development process and the lessons learned, and summarizes the recent progress in commissioning and operating the BrainScaleS system. The current state of the endeavor is demonstrated on the example of wafer-scale emulations of functional neural networks.

Link to chat channel

Sebastian Schmitt (Heidelberg University)
CET: 17:35‑17:55
EDT: 12:35‑12:55
PDT: 09:35‑09:55
UTC: 16:35‑16:55
(20+5 min)
 
Closed-loop experiments on the BrainScaleS-2 architecture
(the presentation .pdf is accessible to meeting attendees from their 'personal page'), show talk video (YouTube) (local version)

The evolution of biological brains has always been contingent on their embodiment within their respective environments, in which survival required appropriate navigation and manipulation skills. Studying such interactions thus represents an important aspect of computational neuroscience and, by extension, a topic of interest for neuromorphic engineering. In the talk, we present three examples of embodiment on the BrainScaleS-2 architecture, in which dynamical timescales of both agents and environment are accelerated by several orders of magnitude with respect to their biological archetypes.

Link to chat channel

Korbinian Schreiber (Heidelberg University)
CET: 18:00‑18:20
EDT: 13:00‑13:20
PDT: 10:00‑10:20
UTC: 17:00‑17:20
(20+5 min)
 
Batch << 1: Why Neuromorphic Computing Architectures Suit Real-Time Workloads

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

As predicted by John Hennessy, there has been a “Cambrian explosion” of computing architectures as Moore’s Law scaling has broken down. This is most obvious in the new field of AI hardware, where the competition to develop and commercialize chips for deep learning training and inference is particularly strong. There is no consensus as to whether the same architectures will be appropriate for data-center computation and edge computation, although some practitioners are starting to differentiate architectures on the basis of whether inputs (typically, images or video frames) can be accumulated before processing (allowing for very large memory read and write blocks and large matrix multiplications); or whether the task demands that each frame must be processed in real time (so-called “Batch = 1” processing).

In this presentation we show that many real-world tasks are in fact “Batch << 1” operations. For example, in the case of a forward-facing video camera in a self-driving car application, the similarity between successive frames is very high, and increases as the frame rate and resolution of the video increase; a 240fps 1080p camera will typically have well over 99% of pixels unchanged between successive frames. The same high correlation between successive samples applies in other real-world workloads such as conversational audio processing.

Exploiting the correlation of input streams can lead to very efficient processing (as shown in video compression techniques such as H.264 / MPEG-4). However, it requires significantly different processing architectures, chief among which is the necessity to maintain system state in memory between inputs.

We will show that neuromorphic architectures intrinsically implement the most important features of a ‘Batch << 1” architecture, and are very well suited to edge processing. We will describe a new architecture – NeuronFlow - which is optimized for this purpose, and present results from GrAIOne, the first chip manufactured to implement this architecture. Early results show a significant processing advantage in terms of both latency and power consumption.
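
The camera example above can be checked with a quick back-of-the-envelope calculation (the 240 fps, 1080p, and sub-1% changed-pixel figures are taken from the abstract; everything else follows from them):

    # Pixel-change arithmetic for a forward-facing 240 fps 1080p camera.
    width, height, fps = 1920, 1080, 240
    pixels_per_frame = width * height              # 2,073,600 pixels
    changed_fraction = 0.01                        # "well over 99% unchanged"

    dense_rate = pixels_per_frame * fps            # pixel updates/s, frame-based
    event_rate = dense_rate * changed_fraction     # pixel updates/s, change-driven

    print(f"frame-based : {dense_rate / 1e6:.0f} M pixel updates/s")
    print(f"event-driven: {event_rate / 1e6:.1f} M pixel updates/s "
          f"(~{dense_rate / event_rate:.0f}x fewer)")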

Link to chat channel

Jonathan Tapson (School of Electrical and Data Engineering, University of Technology Sydney)
CET: 18:25‑18:45
EDT: 13:25‑13:45
PDT: 10:25‑10:45
UTC: 17:25‑17:45
(20+5 min)
 
Neuromorphic and AI research at BCAI (Bosch Center for Artificial Intelligence)

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

We will give an overview of current challenges and activities at Bosch Center for Artificial Intelligence regarding neuromorphic computing, spiking neural networks, in-memory computation and deep learning. This includes a short introduction to the publicly funded project ULPEC addressing ultra-low power vision systems. In addition, we will give a summary of selected academic contributions in the field of spiking neural networks, in-memory computation and hardware-aware compression of deep neural networks.

Link to chat channel

Thomas Pfeil (Bosch Center for Artificial Intelligence)
CET: 18:50‑19:10
EDT: 13:50‑14:10
PDT: 10:50‑11:10
UTC: 17:50‑18:10
(20+5 min)
 
Mapping Deep Neural Networks on SpiNNaker2

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

Florian Kelber, Binyi Wu, Bernhard Vogginger, Johannes Partzsch, Chen Liu, Marco Stolba and Christian Mayr

SpiNNaker is an efficient many-core architecture for the real-time simulation of spiking neural networks. To also speed up deep neural networks (DNNs), the 2nd generation SpiNNaker2 will contain dedicated DNN accelerators in each processing element. When realizing large CNNs on SpiNNaker2, layers have to be split, mapped and scheduled onto 144 processing elements. We describe the underlying mapping procedure with optimized data reuse to achieve inference of VGG-16 and ResNet-50 models in tens of milliseconds.
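
To give a feel for the numbers involved, the toy calculation below splits one VGG-16-sized convolution layer across 144 processing elements. It illustrates the tiling arithmetic only, with an invented tiling policy; it is not the authors' mapping tool.

    import math

    def tile_layer(h, w, c_out, k, c_in, n_pe=144):
        """Split one conv layer's (h x w) output map over n_pe processing
        elements and report the multiply-accumulates per PE. Illustrative
        only; a real mapper also optimizes data reuse and on-chip memory."""
        rows = int(math.sqrt(n_pe))                  # 12 x 12 grid of PEs
        cols = n_pe // rows
        tile_h, tile_w = math.ceil(h / rows), math.ceil(w / cols)
        total_macs = h * w * c_out * k * k * c_in
        macs_per_pe = tile_h * tile_w * c_out * k * k * c_in
        return tile_h, tile_w, total_macs, macs_per_pe

    # A mid-network VGG-16 layer: 56x56 output, 256 channels, 3x3 kernels.
    th, tw, total, per_pe = tile_layer(56, 56, 256, 3, 256)
    print(f"tile {th}x{tw}, {total / 1e9:.2f} GMAC total, "
          f"{per_pe / 1e6:.1f} MMAC per PE")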

Link to chat channel

Florian Kelber (TU Dresden)
CET: 19:15‑19:45
EDT: 14:15‑14:45
PDT: 11:15‑11:45
UTC: 18:15‑18:45
(30 min)
 
Open mic / discussion

Link to chat channel

CET: 19:45
EDT: 14:45
PDT: 11:45
UTC: 18:45
End of day I
CET: 19:45‑20:45
EDT: 14:45‑15:45
PDT: 11:45‑12:45
UTC: 18:45‑19:45
(60 min)
(break)
CET: 21:00‑00:00
EDT: 16:00‑19:00
PDT: 13:00‑16:00
UTC: 20:00‑23:00
(180 min)
Tutorials: BrainScaleS and DYNAP-SE

Two tutorials/hands on in parallel:

  • BrainScaleS (note: the same BrainScaleS hands-on tutorial is also offered on Thursday, 10:00-13:00h CET). Please use the "Join Main" dial-in from your personal info page to access this tutorial.
  • DYNAP-SE (note: the same DYNAP-SE hands-on tutorial is also offered on Wednesday and Friday morning). Please use the "Join Parallel meeting" dial-in from your personal info page to access this tutorial.
    • 1-hour live/interactive Dynapse demo: a demo on a real Dynapse, taking questions and implementing small changes from the audience. (This part of the tutorial is accessible to all registered NICE attendees.)
    • 2-hour guided session where participants run a Jupyter notebook with simulations modelling Dynapse. This part is limited to 15 people per session.
      Link to Tutorial DYNAP-SE chat channel
      For talk videos see further down in the agenda (19 March)

For a description please see the tutorials page.

Wednesday, 17 March 2021
CET: 10:00
EDT: 05:00
PDT: 02:00
UTC: 09:00
Tutorials: SpiNNaker and DYNAP-SE

Two tutorials in parallel:

  • Starting at 10:30h CET: SpiNNaker (note: the same SpiNNaker hands-on tutorial is also offered on Thursday, 21:00 - 22:30h CET). Please use the "Join Main" dial-in from your personal info page to access this tutorial.
  • Starting at 10:00h CET: DYNAP-SE (note: the same DYNAP-SE hands-on tutorial is also offered on Tuesday evening and Friday morning). Please use the "Join Parallel meeting" dial-in from your personal info page to access this tutorial.

For a description please see the tutorials page.

CET: 10:30‑11:15
EDT: 05:30‑06:15
PDT: 02:30‑03:15
UTC: 09:30‑10:15
(45 min)
 
SpiNNaker tutorial introduction

show talk video (YouTube) (local version)
Andrew Rowley (University of Manchester)
CET: 13:00‑14:00
EDT: 08:00‑09:00
PDT: 05:00‑06:00
UTC: 12:00‑13:00
(60 min)
(break)
CET: 14:00
EDT: 09:00
PDT: 06:00
UTC: 13:00
NICE - day II
CET: 14:00‑14:40
EDT: 09:00‑09:40
PDT: 06:00‑06:40
UTC: 13:00‑13:40
(40+5 min)
Keynote: From Brains to Silicon -- Applying lessons from neuroscience to machine learning

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

In this talk we will review some of the latest neuroscience discoveries and suggest how they describe a roadmap to achieving true machine intelligence. We will then describe our progress of applying one neuroscience principle, sparsity, to existing deep learning networks. We show that sparse networks are significantly more resilient and robust than traditional dense networks. With the right hardware substrate, sparsity can also lead to significant performance improvements. On an FPGA platform our sparse convolutional network runs inference 50X faster than the equivalent dense network on a speech dataset. In addition, we show that sparse networks can run efficiently on small power-constrained embedded chips that cannot run equivalent dense networks. We conclude our talk by proposing that neuroscience principles implemented on the right hardware substrate offer the only feasible path to scalable intelligent systems.
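
As a flavor of the sparsity idea, the sketch below prunes most weights of a layer and keeps only the top k activations (k-winner-take-all). It is a generic illustration of sparse weights plus sparse activations, not Numenta's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def k_winners(x, density=0.1):
        """Keep the top-k activations, zero the rest (k-winner-take-all)."""
        k = max(1, int(density * x.size))
        thresh = np.partition(x, -k)[-k]
        return np.where(x >= thresh, x, 0.0)

    def sparse_layer(x, w, weight_density=0.2, activation_density=0.1):
        """Linear layer with a static sparse weight mask and k-WTA output."""
        mask = rng.random(w.shape) < weight_density
        y = np.maximum(x @ (w * mask), 0.0)          # ReLU on sparse weights
        return k_winners(y, activation_density)

    x = rng.uniform(0, 1, 256)
    w = rng.normal(0, 0.1, (256, 128))
    y = sparse_layer(x, w)
    print(f"{np.count_nonzero(y)} of {y.size} units active")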

Link to chat channel

Jeff Hawkins (Numenta), Subutai Ahmad (Numenta)
CET: 14:45‑15:05
EDT: 09:45‑10:05
PDT: 06:45‑07:05
UTC: 13:45‑14:05
(20+5 min)
A Neuromorphic Future for Classic Computing Tasks

show talk video (YouTube) (local version)

The obvious promise of neuromorphic hardware is to enable efficient implementations of brain-derived algorithms. However, to be successful, it is essential that the community demonstrates that neuromorphic systems can be broadly impactful beyond a few narrow tasks. While more advanced post-deep-learning brain-derived algorithms would be ideal, it is helpful to look beyond cognitive algorithms as well for potential market impact.

In this talk, I will highlight one such opportunity: the application of neuromorphic hardware for large-scale scientific computing applications. Specifically, I will present a perspective on neuromorphic hardware that enables us to use large spiking architectures for solving stochastic differential equations and graph analytics. Our general approach treats neuromorphic architectures as a large computational graph onto which we can map sophisticated algorithmic tasks. We have demonstrated how this approach can be used to efficiently model Monte Carlo approximations to a class of partial differential equations that challenge the high-performance computing community, and we can further illustrate how this approach is well-suited for performing general dynamic programming tasks.

Finally, the talk will include some concrete examples of this approach on different spiking neuromorphic platforms, such as Loihi, TrueNorth, and SpiNNaker.
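
The random-walk idea behind the Monte Carlo PDE example above can be shown in a few lines. The sketch below estimates the solution of a 1-D diffusion problem by averaging the initial condition over walker end points; it is a plain-Python stand-in for the spiking implementation described in the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    def diffusion_mc(x0, t_steps, n_walkers=20000):
        """Estimate u(x0, t) for the diffusion equation on a lattice with
        initial condition u(x, 0) = 1 for x < 0, else 0, by releasing
        random walkers from x0 (simplest discrete Feynman-Kac)."""
        steps = rng.choice([-1, 1], size=(n_walkers, t_steps)).sum(axis=1)
        return np.mean(x0 + steps < 0)

    for x0 in (-4, 0, 4):
        print(f"u({x0:+d}, t=400) ~ {diffusion_mc(x0, 400):.3f}")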

Link to chat channel

Brad Aimone (Sandia National Laboratories)
CET: 15:10‑15:20
EDT: 10:10‑10:20
PDT: 07:10‑07:20
UTC: 14:10‑14:20
(10+5 min)
Lightning talk: Benchmarking of Neuromorphic Hardware Systems
(the presentation .pdf is accessible to meeting attendees from their 'personal page'), show talk video (YouTube) (local version)

With more and more neuromorphic hardware systems for the acceleration of spiking neural networks available in science and industry, there is a demand for platform comparison and performance estimation of such systems. This work describes selected benchmarks implemented in a framework with exactly this target: independent black-box benchmarking and comparison of platforms suitable for the simulation/emulation of spiking neural networks.
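
Black-box benchmarking of the kind described rests on a minimal, platform-independent interface. The sketch below shows one hypothetical form such an interface could take (names invented, not the authors' framework):

    import time
    from abc import ABC, abstractmethod

    class SNNBackend(ABC):
        """The minimal surface a platform must expose to be benchmarked."""
        @abstractmethod
        def build(self, network_description): ...
        @abstractmethod
        def run(self, duration_ms): ...

    def benchmark(backend: SNNBackend, net, duration_ms=1000.0):
        """Time network setup and simulation separately and report the
        real-time factor (wall-clock time / simulated biological time)."""
        t0 = time.perf_counter()
        backend.build(net)
        t1 = time.perf_counter()
        backend.run(duration_ms)
        t2 = time.perf_counter()
        return {"setup_s": t1 - t0, "run_s": t2 - t1,
                "real_time_factor": (t2 - t1) / (duration_ms / 1000.0)}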

Link to chat channel

Christoph Ostrau (Bielefeld University)
CET: 15:25‑15:45
EDT: 10:25‑10:45
PDT: 07:25‑07:45
UTC: 14:25‑14:45
(20+5 min)
Natural density cortical models as benchmarks for universal neuromorphic computers
(the presentation .pdf is accessible to meeting attendees from their 'personal page')

Throughout evolution, the cortex has increased in volume from mouse to man by three orders of magnitude, while the architecture at the local scale of a cubic millimeter has largely been conserved in terms of the multi-layered structure and the density of synapses. Furthermore, local cortical networks are similar, independent of whether an area processes visual, auditory, or tactile information. This dual universality raises hope that fundamental principles of cortical computation can be discovered. Although a coherent view of these principles is still missing, this universality motivated researchers more than a decade ago to start developing neuromorphic computing systems based on the interaction between neurons by delayed point events and basic parameters of cortical architecture.

These systems need to be verified in the sense of accurately representing cortical dynamics and validated in the sense of simulating faster or with less energy than software solutions on conventional computers. Such comparisons are only meaningful if they refer to implementations of the same neuronal network model. The role of models thus changes from mere demonstrations of functionality to quantitative benchmarks. In fields of computer science like computer vision and machine learning, the definition of benchmarks helps to quantify progress and drives a constructive competition between research groups. The talk argues that neuromorphic computing needs to advance the development of benchmarks of increasing size and complexity.

A model of the cortical microcircuit [1] exemplifies the recent interplay and co-design of alternative hardware architectures enabled by a common benchmark. The model represents neurons with their natural number of synapses and at the same time captures the natural connection probability between neurons in the local volume. Consequently, all questions on the proper scaling of network parameters become irrelevant. The model constitutes a milestone for neuromorphic hardware systems as larger cortical models are necessarily less densely connected.

As metrics we discuss the energy consumed per synaptic event and the real-time factor. We illustrate the progress in the past few years and show that a single conventional compute node still keeps up with neuromorphic hardware and achieves sub real-time performance. Finally, the talk exposes the limitations of the microcircuit model as a benchmark and positions cortical multi-area models [2] as a biologically meaningful way of upscaling benchmarks to the next problem size.

  • [1] Potjans & Diesmann (2014), Cerebral Cortex 24:785–806
  • [2] Schmidt, Bakker, Shen, Bezgin, Diesmann, van Albada (2018) PLOS Comput Biol 14(10):e1006359

This work is partially supported by the European Union's Horizon 2020 (H2020) funding framework under grant agreement no. 945539 (Human Brain Project SGA3) and the Helmholtz Association Initiative and Networking Fund under project number SO-092 (Advanced Computing Architectures, ACA).
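
The two metrics discussed above reduce to simple ratios. A worked example with invented numbers, purely to fix the definitions (these are not measured values):

    # Invented example numbers, only to illustrate the metric definitions.
    wall_clock_s = 30.0         # time the simulation took to run
    biological_s = 10.0         # model time that was simulated
    total_energy_J = 600.0      # energy drawn by the system during the run
    synaptic_events = 2.0e10    # spikes delivered across all synapses

    real_time_factor = wall_clock_s / biological_s      # < 1: faster than real time
    energy_per_event = total_energy_J / synaptic_events

    print(f"real-time factor: {real_time_factor:.1f}")
    print(f"energy per synaptic event: {energy_per_event * 1e9:.0f} nJ")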

Link to chat channel

Markus Diesmann (Forschungszentrum Jülich GmbH)
CET: 15:50‑16:15
EDT: 10:50‑11:15
PDT: 07:50‑08:15
UTC: 14:50‑15:15
(25 min)
Poster Lightning Talks

1 min - 1 slide poster appetizers

  • Astrocyte-modulated neuromorphic Central Pattern Generator for Robot Locomotion on Intel's Loihi (Ioannis Polykretis)
  • Bio-inspired few-shot learning with spiking neural networks (Garibaldi Pineda Garcia)
  • BrainScaleS Large Scale Spike Communication using Extoll (Tobias Thommes)
  • Critical Limits in a Bump Attractor Network of Spiking Neurons (Alberto Vergani)
  • Deep reinforcement learning for time-continuous substrates (Akos Ferenc Kungl)
  • Energy Constraints Improve Liquid State Machine Performance (Andrew Fountain)
  • Machine Perception: Similarity Representation while Learning in the Wild (Ayon Borthakur)
  • Structural plasticity on spiking neuromorphic hardware (Benjamin Cramer)
  • Training Delays in Spiking Neural Networks (Pau Vilimelis Aceituno)
  • IOP Publishing: new open access journal Neuromorphic Computing and Engineering

Talk-posters:

  • Neuroevolution with Scaleup (J. David Schaffer)

Alberto Vergani (AMU), Pau Vilimelis Aceituno (ETH Zürich), Benjamin Cramer (Heidelberg University), Garibaldi Pineda Garcia (University of Sussex), Ioannis Polykretis (Rutgers University), Tobias Thommes (Heidelberg University), Andrew Fountain (Rochester Institute of Technology), Ayon Borthakur (Cornell University), Akos Ferenc Kungl (Heidelberg University)
CET: 16:15‑17:15
EDT: 11:15‑12:15
PDT: 08:15‑09:15
UTC: 15:15‑16:15
(60 min)
Poster session A (and break)

CET: 17:15‑17:35
EDT: 12:15‑12:35
PDT: 09:15‑09:35
UTC: 16:15‑16:35
(20+5 min)
Platform-Agnostic Neural Algorithm Composition using Fugu
(the presentation .pdf is accessible to meeting attendees from their 'personal page'), (a video of this talk is available to meeting attendees. Please check your personal meeting page (for EBRAINS account owners: personal meeting page and show video))

Spiking neural networks and corresponding neuromorphic hardware are undergoing an uptick in interest as key milestones are accomplished by industry, academic, and government research groups. Unfortunately, from an end-user's perspective, testing or deploying applications on a neuromorphic platform is very challenging and often infeasible. We hope to address two common and key challenges, portability and composition, by the creation of an overarching software framework called Fugu. Fugu allows spiking neural algorithms, created by independent designers, to be combined seamlessly in a scalable and target-platform-agnostic manner. The resulting intermediate representation is then translatable to multiple neuromorphic hardware backends.

Acknowledgements: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.

Link to chat channel

William Severa (Sandia National Laboratories)
CET: 17:40‑17:50
EDT: 12:40‑12:50
PDT: 09:40‑09:50
UTC: 16:40‑16:50
(10+5 min)
Lightning talk: Implementing Backpropagation for Learning on Neuromorphic Spiking Hardware

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

Many contemporary advances in the theory and practice of neural networks are inspired by our understanding of how information is processed by natural neural systems. However, the basis of modern deep neural networks remains the error backpropagation algorithm, which, though founded in rigorous mathematical optimization theory, has not been successfully demonstrated in a neurophysiologically inspired (neuromorphic) circuit. In a recent study, we proposed a neuromorphic architecture for learning that tunes the propagation of information forward and backwards through network layers using a timing mechanism controlled by a synfire-gated synfire chain (SGSC). This architecture was demonstrated in simulation of firing rates in a current-based neuronal network. In this follow-on study, we present a spiking backpropagation algorithm based on this architecture, now including several new mechanisms that enable implementation of the backpropagation algorithm using neuromorphic spiking units. We demonstrate the function of this architecture by learning an XOR logic circuit and performing numerical character recognition on the MNIST dataset with Intel's Loihi neuromorphic chip.

Link to chat channel

Andrew Sornborger (Los Alamos National Laboratory)
CET: 17:55‑18:15
EDT: 12:55‑13:15
PDT: 09:55‑10:15
UTC: 16:55‑17:15
(20+5 min)
Inductive bias transfer between brains and machines

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

Machine learning, in particular computer vision, has made tremendous progress in recent years. On standardized datasets, deep networks now frequently achieve close-to-human or superhuman performance. However, despite this enormous progress, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called "inductive bias," determines how well any learning algorithm, or brain, generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. I will give an overview of some conceptual ideas and preliminary results on how the rapid increase of neuroscientific data could be used to transfer low-level inductive biases from the brain to learning machines.

Link to chat channel

Fabian Sinz (University Tübingen)
CET: 18:20‑18:30
EDT: 13:20‑13:30
PDT: 10:20‑10:30
UTC: 17:20‑17:30
(10+5 min)
Lightning talk: Spike Latency Reduction generates Efficient Predictive Coding

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

Latency reduction of postsynaptic spikes is a well-known effect of spike-timing-dependent plasticity (STDP). We expand this notion to long postsynaptic spike trains on single neurons, showing that, for a fixed input spike train, STDP reduces the number of postsynaptic spikes and concentrates the remaining ones. We then study the consequences of this phenomenon in terms of coding, finding that this mechanism improves the neural code by increasing the signal-to-noise ratio and lowering the metabolic costs of frequent stimuli. Finally, we illustrate that the reduction in postsynaptic latencies can lead to the emergence of predictions.
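
A toy simulation makes the first effect visible. Below, a leaky integrate-and-fire neuron receives the same input spike train on every trial; potentiating the synapses that fired before the postsynaptic spike (and depressing the rest) moves that spike earlier trial by trial. All constants are invented and the neuron model is deliberately minimal:

    import numpy as np

    rng = np.random.default_rng(0)
    n_syn, t_max, thresh, tau = 50, 100, 2.0, 20.0
    pre_times = rng.integers(0, t_max, n_syn)    # one input spike per synapse
    w = np.full(n_syn, 0.25)

    for trial in range(6):
        v, t_post = 0.0, None
        for t in range(t_max):
            v = v * np.exp(-1.0 / tau) + w[pre_times == t].sum()
            if v >= thresh:
                t_post = t                        # first postsynaptic spike
                break
        if t_post is not None:
            causal = pre_times <= t_post
            w[causal] += 0.02                     # LTP for inputs before the spike
            w[~causal] -= 0.01                    # LTD for the rest
            w = np.clip(w, 0.0, 1.0)
        print(f"trial {trial}: first postsynaptic spike at t = {t_post}")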

Link to chat channel

Pau Vilimelis Aceituno (ETH Zürich)
CET: 18:35‑18:45
EDT: 13:35‑13:45
PDT: 10:35‑10:45
UTC: 17:35‑17:45
(10+5 min)
Lightning talk: Cognitive Domain Ontologies: HPCs to Ultra Low Power Neuromorphic Platforms
(the presentation .pdf is accessible to meeting attendees from their 'personal page')

Cognitively Enhanced Complex Event Processing (CECEP) is an agent-based decision-making architecture. Within this agent, the Cognitive Domain Ontology (CDO) component is the slowest for most applications of the agent. We show that even after acceleration on a high-performance server-based computing system enhanced with a high-end graphics processing unit (GPU), the CDO component does not scale well for real-time use on large problem sizes. Thus, to enable real-time use of the agent, particularly in power-constrained environments (such as autonomous air vehicles), alternative implementations of the agent logic are needed. The objective of this work was to carry out an initial design-space search of algorithms and hardware for decision making through the domain knowledge component of CECEP. Several algorithmic and circuit approaches are proposed that span six hardware options of varying power consumption and weight (ranging from over 1000 W to less than 1 W). The algorithms range from exact solution producers optimized for running on a cluster of high-performance computing systems to approximate solution producers running fast on low-power neuromorphic hardware. The loss in accuracy for the approximate approaches is minimal, making them well suited to SWaP-constrained systems such as UAVs. The exact-solution approach on an HPC gives confidence that the best answer has been evaluated (although this may take some time to generate).

Link to chat channel

Chris Yakopcic (University of Dayton)
CET: 18:50‑19:20
EDT: 13:50‑14:20
PDT: 10:50‑11:20
UTC: 17:50‑18:20
(30 min)
Open mic / discussion

Link to chat channel

CET: 19:20‑20:20
EDT: 14:20‑15:20
PDT: 11:20‑12:20
UTC: 18:20‑19:20
(60 min)
(break)
CET: 20:30
EDT: 15:30
PDT: 12:30
UTC: 19:30
Tutorial: Loihi

We’ll cover a few new “advanced” topics of interest to the community. These are:

  • Characterizing energy and performance of Loihi workloads
  • Solving constraint satisfaction problems on Loihi and benchmarking them against a state-of-the-art CPU solver
  • Training deep SNNs for Loihi with SLAYER

We’ll present on these topics and show some code running. Anyone who has access to Loihi will be able to find and run the code themselves, but we’d like to clarify that access to Loihi is not required for attending the tutorials. (If you want to get access to Loihi, please email inrc_interest@intel.com to get legal access to the Loihi cloud systems.)

Link to chat channel

CET: 20:30‑20:37
EDT: 15:30‑15:37
PDT: 12:30‑12:37
UTC: 19:30‑19:37
(7 min)
 
Intel Loihi's NxSDK: Introduction and overview

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)
Andreas Wild (Intel Corporation)
CET: 20:37‑21:18
EDT: 15:37‑16:18
PDT: 12:37‑13:18
UTC: 19:37‑20:18
(41 min)
 
A Fast and Efficient Constraint Satisfaction Solver on Loihi

show talk video (YouTube) (local version)
Gabriel Andres Fonseca Guerra (Intel Corporation)
CET: 21:18‑21:52
EDT: 16:18‑16:52
PDT: 13:18‑13:52
UTC: 20:18‑20:52
(34 min)
 
SLAYER for Loihi

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)
Sumit Bam Shrestha (Intel Corporation)
CET: 21:52‑22:49
EDT: 16:52‑17:49
PDT: 13:52‑14:49
UTC: 20:52‑21:49
(57 min)
 
Performance Characterization on Loihi

show talk video (YouTube) (local version)
Garrick Orchard (Intel Corporation)

Thursday, 18 March 2021
CET: 10:00
EDT: 05:00
PDT: 02:00
UTC: 09:00
Tutorial: BrainScaleS hands-on

Please use the "Join_Main" dial-in on your personal page to attend.

  • about 30 min introduction
  • hands-on usage of the BrainScaleS system (via web browser). Limited number of participants.

For a description of the pre-requirements, please see the tutorials page.

Link to chat channel

CET: 10:00‑11:00
EDT: 05:00‑06:00
PDT: 02:00‑03:00
UTC: 09:00‑10:00
(60 min)
 
BrainScaleS-2 hands-on introduction, part I: Spiking mode

show talk video (YouTube) (local version)
Sebastian Billaudelle (Kirchhoff Institute for Physics, Heidelberg University)
CET: 11:00‑11:15
EDT: 06:00‑06:15
PDT: 03:00‑03:15
UTC: 10:00‑10:15
(15 min)
 
BrainScaleS-2 hands-on introduction, part II: Matrix multiplication mode

show talk video (YouTube) (local version)
Johannes Weis (Kirchhoff-Institute for Physics, Heidelberg University)
CET: 11:15‑11:50
EDT: 06:15‑06:50
PDT: 03:15‑03:50
UTC: 10:15‑10:50
(35 min)
 
BrainScaleS-2 hands-on introduction, part III: Matrix multiplication mode II

show talk video (YouTube) (local version)
Arne Emmel (Universität Heidelberg)
CET: 11:50‑12:50
EDT: 06:50‑07:50
PDT: 03:50‑04:50
UTC: 10:50‑11:50
(60 min)
 
(hands-on work with the BSS-2 single chip system)
CET: 13:00‑14:00
EDT: 08:00‑09:00
PDT: 05:00‑06:00
UTC: 12:00‑13:00
(60 min)
(break)
CET: 14:00
EDT: 09:00
PDT: 06:00
UTC: 13:00
NICE - day III

Please use the "Join_Main" dial-in on your personal page to attend.

CET: 14:00‑14:40
EDT: 09:00‑09:40
PDT: 06:00‑06:40
UTC: 13:00‑13:40
(40+5 min)
Keynote: Biological inspiration for improving computing and learning in spiking neural networks

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

The talk will address three new methods:

  1. Many types of biological neurons transmit analog values neither through the time of a single spike, nor through their firing rate, but through temporal patterns of a very small number of spikes. If one optimizes spiking neuron models for such communication, one arrives at a new and more efficient method for emulating ANNs by SNNs. In particular, this yields the best known performance of SNNs for image classification (on ImageNet), with an average of just two spikes per neuron.

  2. Local eligibility traces of synapses that are gated by global learning signals are well-known ingredients of synaptic plasticity in brains. We show that these two ingredients enable a principled approximation of BPTT for recurrent SNNs, called e-prop, that is suitable for on-chip learning on neuromorphic hardware (a minimal code sketch follows the references below).

  3. Brains emit learning signals, for example dopamine, via special brain structures such as the VTA, which evolution is likely to have optimized for enabling the learning of important new tasks from very few examples. This observation gives rise to a variation of e-prop, called natural e-prop, that enables one-shot and few-shot learning for RSNNs.

For details see:

  • C. Stoeckl and W. Maass. Optimized spiking neurons can classify images with high accuracy through temporal coding with two spikes. arXiv:2002.00860v4, 2020; in press at Nature Machine Intelligence
  • G. Bellec, F. Scherr, A. Subramoney, E. Hajek, D. Salaj, R. Legenstein, and W. Maass. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11:3625, 2020
  • F. Scherr, C. Stoeckl, and W. Maass. One-shot learning with spiking neural networks. bioRxiv, 2020.
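
Item 2 above, e-prop, can be caricatured in a few lines: every synapse maintains a purely local eligibility trace, and a broadcast error signal converts those traces into weight updates online, with no backpropagation through time. The sketch below is a heavily simplified toy with invented sizes and constants; see Bellec et al. (2020) above for the actual algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_rec, T = 20, 10, 100
    w_in = rng.normal(0, 0.3, (n_in, n_rec))
    w_out = rng.normal(0, 0.3, n_rec)
    alpha, thr, eta = 0.9, 1.0, 1e-3
    target = np.sin(np.linspace(0, 2 * np.pi, T))     # toy regression target

    for epoch in range(3):
        v = np.zeros(n_rec)
        x_bar = np.zeros(n_in)                        # filtered presynaptic trace
        loss = 0.0
        for t in range(T):
            x = (rng.random(n_in) < 0.1).astype(float)    # input spikes
            x_bar = alpha * x_bar + x
            z = (v >= thr).astype(float)                  # layer spikes
            v = alpha * v * (1 - z) + x @ w_in            # LIF update with reset
            psi = np.maximum(0.0, 1.0 - abs(v - thr))     # surrogate derivative
            elig = x_bar[:, None] * psi[None, :]          # local eligibility trace
            err = z @ w_out - target[t]                   # broadcast learning signal
            w_in -= eta * err * w_out[None, :] * elig     # e-prop weight update
            w_out -= eta * err * z
            loss += err ** 2
        print(f"epoch {epoch}: mean squared error {loss / T:.3f}")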

Link to chat channel

Wolfgang Maass (Graz University of Technology)
CET: 14:45‑15:05
EDT: 09:45‑10:05
PDT: 06:45‑07:05
UTC: 13:45‑14:05
(20+5 min)
On the computational power and complexity of Spiking Neural Networks

show presentation.pdf (publicly accessible)

The last decade has seen the rise of neuromorphic architectures based on artificial spiking neural networks, such as the SpiNNaker, TrueNorth, and Loihi systems. The massive parallelism and co-location of computation and memory in these architectures potentially allow for an energy usage that is orders of magnitude lower compared to traditional von Neumann architectures. However, to date a comparison with more traditional computational architectures (particularly with respect to energy usage) is hampered by the lack of a formal machine model and a computational complexity theory for neuromorphic computation. I will demonstrate the first steps towards such a theory, including canonical problems, hierarchies of complexity classes, and some first completeness results.

Link to chat channel

Johan Kwisthout (Radboud Universiteit Nijmegen)
CET: 15:10‑15:30
EDT: 10:10‑10:30
PDT: 07:10‑07:30
UTC: 14:10‑14:30
(20+5 min)
Evolutionary Optimization for Neuromorphic Systems
(the presentation .pdf is accessible to meeting attendees from their 'personal page'), (a video of this talk is available to meeting attendees. Please check your personal meeting page (for EBRAINS account owners: personal meeting page and show video))

Designing and training an appropriate spiking neural network for neuromorphic deployment remains an open challenge in neuromorphic computing. In 2016, we introduced an approach for utilizing evolutionary optimization to address this challenge called Evolutionary Optimization for Neuromorphic Systems (EONS). In this work, we present an improvement to this approach that enables rapid prototyping of new applications of spiking neural networks in neuromorphic systems. We discuss the overall EONS framework and its improvements over the previous implementation. We present several case studies of how EONS can be used, including to train spiking neural networks for classification and control tasks, to train under hardware constraints, to evolve a reservoir for a liquid state machine, and to evolve smaller networks using multi-objective optimization.

Link to chat channel

Catherine Schuman (Oak Ridge)
CET: 15:35‑15:55
EDT: 10:35‑10:55
PDT: 07:35‑07:55
UTC: 14:35‑14:55
(20+5 min)
An event-based gas sensing device that resolves fast transients in a turbulent environment
(the presentation .pdf is accessible to meeting attendees from their 'personal page'), (a video of this talk is available to meeting attendees. Please check your personal meeting page (for EBRAINS account owners: personal meeting page and show video))

Electronic olfaction can help detect and localize harmful gases and pollutants, but the turbulence of the natural environment presents a particular challenge: odor encounters are intermittent, and an effective electronic nose must therefore be able to resolve short odor pulses. The slow responses of the widely used metal oxide (MOX) gas sensors complicate the task. Here, we combine high-resolution data acquisition with a processing method based on Kalman filtering and event-driven, level-crossing sampling to extract fast onset events. We find that our system can resolve the onset time of odor encounters with enough precision for source direction estimation with a pair of MOX sensors in a stereo-osmic configuration. Our work demonstrates how neuromorphic principles help improve the performance of electronic gas sensors.
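
The processing chain described above can be sketched compactly: a two-state Kalman filter tracks the sensor signal and its rate of change, and an onset event is emitted when the estimated derivative crosses a level (with hysteresis). All constants below are invented, and the filter is the textbook scalar form, not the authors' implementation.

    import numpy as np

    dt, q, r, level = 0.01, 2.0, 0.05 ** 2, 2.0
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state: [value, derivative]
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])

    def onset_events(z):
        x, P, above, events = np.zeros(2), np.eye(2), False, []
        for k, zk in enumerate(z):
            x, P = F @ x, F @ P @ F.T + Q                    # predict
            S = H @ P @ H.T + r
            K = (P @ H.T) / S                                # Kalman gain
            x = x + (K * (zk - H @ x)).ravel()               # update
            P = (np.eye(2) - K @ H) @ P
            if not above and x[1] > level:                   # level-crossing
                events.append(round(k * dt, 3))
                above = True
            elif x[1] < level / 2:                           # hysteresis reset
                above = False
        return events

    t = np.arange(0.0, 2.0, dt)
    clean = np.where(t > 0.5, 1.0 - np.exp(-(t - 0.5) / 0.2), 0.0)  # odor pulse
    noisy = clean + np.random.default_rng(0).normal(0, 0.05, t.size)
    print("onset events at t =", onset_events(noisy))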

Link to chat channel

Michael Schmuker (University of Hertfordshire), Damien Drix (University of Hertfordshire)
CET: 16:00‑16:20
EDT: 11:00‑11:20
PDT: 08:00‑08:20
UTC: 15:00‑15:20
(20+5 min)
Sequence learning, prediction, and generation in networks of spiking neurons
(the presentation .pdf is accessible to meeting attendees from their 'personal page'), (a video of this talk is available to meeting attendees. Please check your personal meeting page (for EBRAINS account owners: personal meeting page and show video))

Sequence learning, prediction, and generation have been proposed to be the universal computation performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes this form of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context-specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms.

Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns non-Markovian sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific feedforward subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context-specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences.

By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay. We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.

Link to chat channel

Younes Bouhadjar (Forschungszentrum Juelich)
CET: 16:25‑17:25
EDT: 11:25‑12:25
PDT: 08:25‑09:25
UTC: 15:25‑16:25
(60 min)
Poster session B (and break)

Posters:

  • Energy Constraints Improve Liquid State Machine Performance (Andrew Fountain)
    poster.pdf and poster chat channel
  • Machine Perception: Similarity Representation while Learning in the Wild (Ayon Borthakur)
    poster.pdf and poster chat channel
  • Structural plasticity on spiking neuromorphic hardware (Benjamin Cramer)
    poster.pdf and poster chat channel
  • Training Delays in Spiking Neural Networks (Pau Vilimelis Aceituno)
    poster.pdf and poster chat channel

Talk posters:

  • Younes Bouhadjar: "Sequence learning, prediction, and generation in networks of spiking neurons"
    talk chat channel
  • Dylan Paiton: "Subspace Locally Competitive Algorithms"
    poster.pdf and the talk chat channel
  • J. David Schaffer: Evolving Spiking Neural Networks for Robot Sensory-motor Decision Tasks of Varying Difficulty
    poster.pdf and chat channel
  • Michael Schmuker / Damien Drix: An event-based gas sensing device that resolves fast transients in a turbulent environment
    poster.pdf and talk chat channel
CET: 17:25‑17:45
EDT: 12:25‑12:45
PDT: 09:25‑09:45
UTC: 16:25‑16:45
(20+5 min)
Lessons from neurobiology and physics: pseudobackprop & more

Spike-based computation is inspired by neurobiology and is implemented in physical devices. Spike-based computation can also be inspired by methods from theoretical physics, where overarching principles are formulated to capture the dynamics of charges, masses, and more abstract variables. We consider the principle of Least Action and show how it can be applied to the neurobiology of cognition. The key notion is that of an error. Errors can be defined at the level of behaviour, of microcircuits, and of single neurons. I will show how a rigorous application of this Neural Least Action (NLA) principle leads to a cortical version of error backpropagation, namely pseudobackprop. Pseudobackprop naturally emerges when error representations are learned by cortical microcircuits and made available at the dendritic sites. Performance-wise, pseudobackprop outperforms feedback alignment, is comparable with backpropagation, and has distinct advantages. The NLA principle potentially offers generalizations to spike-based computation, conductance-based computation, and natural gradients (covered by other talks at NICE 2021). As a physical theory that deals with continuous time, it may give hints on how to implement it in real-time physical devices.

Link to chat channel

Walter Senn (Universität Bern)
CET: 17:50‑18:10
EDT: 12:50‑13:10
PDT: 09:50‑10:10
UTC: 16:50‑17:10
(20+5 min)
Conductance-based dendrites perform reliability-weighted opinion pooling

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

Cue integration, the combination of different sources of information to reduce uncertainty, is a fundamental computational principle of brain function. Starting from a normative model we show that the dynamics of multi-compartment neurons with conductance-based dendrites naturally implement the required probabilistic computations. The associated error-driven plasticity rule allows neurons to learn the relative reliability of different pathways from data samples, approximating Bayes-optimal observers in multisensory integration tasks. Additionally, the model provides a functional interpretation of neural recordings from multisensory integration experiments and makes specific predictions for membrane potential and conductance dynamics of individual neurons.
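
The normative core of the model is the standard reliability-weighted (inverse-variance) fusion rule for Gaussian cues, which the paper shows is realized by conductance-based dendritic dynamics. In the usual textbook notation (mine, not the paper's):

    % Bayes-optimal combination of cues with means \mu_i and variances \sigma_i^2:
    \hat{s} = \frac{\sum_i \mu_i / \sigma_i^2}{\sum_i 1 / \sigma_i^2},
    \qquad
    \frac{1}{\sigma_{\hat{s}}^2} = \sum_i \frac{1}{\sigma_i^2}
    % Each cue is weighted by its reliability 1/\sigma_i^2; unreliable
    % pathways contribute proportionally less to the pooled estimate.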

Link to chat channel

Jakob Jordan (Institute of Physiology, University of Bern)
CET: 18:15‑18:25
EDT: 13:15‑13:25
PDT: 10:15‑10:25
UTC: 17:15‑17:25
(10+5 min)
Lightning talk: Natural gradient learning for spiking neurons

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean gradient descent can easily lead to inconsistencies due to such parametrization dependence. The issues are solved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural gradient descent.
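
For reference, the distinction at the heart of the abstract, in standard notation (cf. Amari's natural gradient; the neuron-specific metric proposed in the talk differs in detail):

    % Euclidean gradient descent on a loss E(w) versus natural gradient descent,
    % which rescales the update by the inverse Fisher information G(w) and is
    % thereby invariant under reparametrization of the weights:
    \Delta w_{\mathrm{eucl}} = -\eta \, \nabla_w E,
    \qquad
    \Delta w_{\mathrm{nat}} = -\eta \, G(w)^{-1} \nabla_w E,
    \quad
    G(w) = \mathbb{E}\left[\nabla_w \log p(y \mid x; w)\,
                           \nabla_w \log p(y \mid x; w)^{\top}\right]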

Link to chat channel

Elena Kreutzer (University of Bern)
CET: 18:30‑18:50
EDT: 13:30‑13:50
PDT: 10:30‑10:50
UTC: 17:30‑17:50
(20+5 min)
Making spiking neurons more succinct with multi-compartment models

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

Spiking neurons consume energy for each spike they emit. Reducing the firing rate of each neuron — without sacrificing relevant information content — is therefore a critical constraint for energy efficient networks of spiking neurons in biology and neuromorphic hardware alike. The inherent complexity of biological neurons provides a possible mechanism to realize a good trade-off between these two conflicting objectives: multi-compartment neuron models can become selective to highly specific input patterns, and can thus produce informative yet sparse spiking codes. I'll present a model of this mechanism and discuss its potential utility for spiking neural networks and neuromorphic hardware.

Link to chat channel

Johannes Leugering (Fraunhofer IIS)
CET: 18:55‑19:05
EDT: 13:55‑14:05
PDT: 10:55‑11:05
UTC: 17:55‑18:05
(10+5 min)
Lightning talk: The Computational Capacity of Mem-LRC Reservoirs
(the presentation .pdf is accessible to meeting attendees from their 'personal page')

Forrest Sheldon and Francesco Caravelli

Reservoir computing has emerged as a powerful tool in data-driven time series analysis. The possibility of utilizing hardware reservoirs as specialized co-processors has generated interest in the properties of electronic reservoirs, especially those based on memristors, as the nonlinearity of these devices should translate into an improved nonlinear computational capacity of the reservoir. However, designing these reservoirs requires a detailed understanding of how memristive networks process information, which has thus far been lacking. In this work, we derive an equation for general memristor-inductor-resistor-capacitor (MEM-LRC) reservoirs that includes all network and dynamical constraints explicitly. Utilizing this, we undertake a detailed study of the computational capacity of these reservoirs. We demonstrate that hardware reservoirs may be constructed with extensive memory capacity and that the presence of memristors enacts a trade-off between memory capacity and nonlinear computational capacity. Using these principles we design reservoirs to tackle problems in signal processing, paving the way for applying hardware reservoirs to high-dimensional spatiotemporal systems.
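
For readers new to reservoir computing, the sketch below shows the generic scheme the talk builds on: a fixed random dynamical system plus a trained linear readout. It uses a conventional echo-state network as a stand-in for the memristive (MEM-LRC) circuits analyzed in the work.

    import numpy as np

    rng = np.random.default_rng(0)
    N, T, delay = 200, 2000, 5
    W = rng.normal(0, 1.0, (N, N))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # echo-state property
    w_in = rng.normal(0, 1.0, N)
    u = rng.uniform(-1, 1, T)                      # random input signal

    x, states = np.zeros(N), np.empty((T, N))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])           # fixed, untrained dynamics
        states[t] = x

    X, y = states[delay:], u[:-delay]              # task: recall input 5 steps back
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)  # ridge readout
    print(f"delay-memory correlation: {np.corrcoef(X @ w_out, y)[0, 1]:.3f}")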

Link to chat channel

Forrest Sheldon (Los Alamos National Lab - T-4/CNLS)
CET: 19:10‑19:40
EDT: 14:10‑14:40
PDT: 11:10‑11:40
UTC: 18:10‑18:40
(30 min)
Open mic / discussion

Link to chat channel

CET: 19:40‑20:50
EDT: 14:40‑15:50
PDT: 11:40‑12:50
UTC: 18:40‑19:50
(70 min)
(break)
CET: 21:00‑22:30
EDT: 16:00‑17:30
PDT: 13:00‑14:30
UTC: 20:00‑21:30
(90 min)
Tutorial: SpiNNaker hands-on

For a recording of the introduction, please see the agenda entry for 17 March in the morning.

For a description please see the tutorials page.

Link to chat channel

Friday, 19 March 2021
CET: 10:00
EDT: 05:00
PDT: 02:00
UTC: 09:00
Tutorial: DYNAP-SE

Please use the "Join_Main" dial-in on your personal page to attend.

  • 1-hour live/interactive Dynapse demo: a demo on a real Dynapse, taking questions and implementing small changes from the audience. (This part of the tutorial is accessible to all registered NICE attendees.)
  • 2-hour guided session where participants run a Jupyter notebook with simulations modelling Dynapse. This part is limited to 15 people per session.

Link to chat channel

CET: 10:00‑10:20
EDT: 05:00‑05:20
PDT: 02:00‑02:20
UTC: 09:00‑09:20
(20 min)
 
Dynap-SE1 Demo Session

show talk video (YouTube) (local version)
Yigit Demirag (The Institute of Neuroinformatics, UZH and ETH Zürich)
CET: 10:20‑10:35
EDT: 05:20‑05:35
PDT: 02:20‑02:35
UTC: 09:20‑09:35
(15 min)
 
Remote demo of the Dynap-SE board

(a video of this talk is available to meeting attendees. Please check your personal meeting page (for EBRAINS account owners: personal meeting page and show video))
Dmitrii Zendrikov (Institute of Neuroinformatics, UZH and ETH Zurich)
CET: 10:35‑10:38
EDT: 05:35‑05:38
PDT: 02:35‑02:38
UTC: 09:35‑09:38
(3 min)
 
DYNAP-SE tutorial session 2: Simulating Dynap-SE1

show talk video (YouTube) (local version)
Yigit Demirag (The Institute of Neuroinformatics, UZH and ETH Zürich)
CET: 13:00‑14:00
EDT: 08:00‑09:00
PDT: 05:00‑06:00
UTC: 12:00‑13:00
(60 min)
(break)
CET: 14:00
EDT: 09:00
PDT: 06:00
UTC: 13:00
NICE - day IV

Please use the "Join_Main" dial-in on your personal page to attend.

CET: 14:00‑14:40
EDT: 09:00‑09:40
PDT: 06:00‑06:40
UTC: 13:00‑13:40
(40+5 min)
Keynote: Bottom-up and top-down neuromorphic processor design: Unveiling roads to embedded cognition

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

While Moore’s law has driven exponential computing power expectations, its nearing end calls for new roads to embedded cognition. The field of neuromorphic computing aims at a paradigm shift compared to conventional von-Neumann computers, both for the architecture (i.e. memory and processing co-location) and for the data representation (i.e. spike-based event-driven encoding). However, it is unclear which of the bottom-up (neuroscience-driven) or top-down (application-driven) design approaches could unveil the most promising roads to embedded cognition. In order to clarify this question, this talk is divided into two parts.

The first part focuses on the bottom-up approach. From the building-block level to the silicon integration, we design two bottom-up neuromorphic processors: ODIN and MorphIC. We demonstrate with silicon measurement results that hardware-aware neuroscience model design and selection allows reaching record neuron and synapse densities with low-power operation. However, the inherent difficulty for bottom-up designs lies in applying them to real-world problems beyond the scope of neuroscience-oriented applications.

The second part investigates the top-down approach. By starting from the applicative problem of adaptive edge computing, we derive the direct random target projection (DRTP) algorithm for low-cost neural network training and design a top-down DRTP-enabled neuromorphic processor: SPOON. We demonstrate with silicon measurement results that combining event-driven and frame-based processing with weight-transport-free update-unlocked training supports low-cost adaptive edge computing with spike-based sensors. However, defining a suitable target for bio-inspiration in top-down designs is difficult, as it underlies both the efficiency and the relevance of the resulting neuromorphic device.

Therefore, we claim that each of these two design approaches can act as a guide to address the shortcomings of the other.

Link to chat channel

Charlotte Frenkel (Institute of Neuroinformatics, Zürich, Switzerland)
CET: 14:45‑14:55
EDT: 09:45‑09:55
PDT: 06:45‑06:55
UTC: 13:45‑13:55
(10+5 min)
Lightning talk: Subspace Locally Competitive Algorithms

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

We introduce subspace locally competitive algorithms (SLCAs), a family of novel network architectures for modeling latent representations of natural signals with group sparse structure. SLCA first layer neurons are derived from locally competitive algorithms, which produce responses and learn representations that are well matched to both the linear and non-linear properties observed in simple cells in layer 4 of primary visual cortex (area V1). SLCA incorporates a second layer of neurons which produce approximately invariant responses to signal variations that are linear in their corresponding subspaces, such as phase shifts, resembling a hallmark characteristic of complex cells in V1. We provide a practical analysis of training parameter settings, explore the features and invariances learned, and finally compare the model to single-layer sparse coding and to independent subspace analysis.
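
The first-layer dynamics that SLCA builds on are the standard locally competitive algorithm. A minimal single-layer version (Rozell-style soft-threshold dynamics, random dictionary, invented parameters) looks like this:

    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_neur, lam, tau, steps = 64, 128, 0.2, 10.0, 200
    Phi = rng.normal(0, 1.0, (n_pix, n_neur))
    Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm dictionary elements
    x = rng.normal(0, 1.0, n_pix)                 # input signal

    b = Phi.T @ x                                 # feedforward drive
    G = Phi.T @ Phi - np.eye(n_neur)              # lateral competition weights
    u = np.zeros(n_neur)                          # membrane states
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold
        u += (b - u - G @ a) / tau                # leaky competitive dynamics
    print(f"{np.count_nonzero(a)} of {n_neur} coefficients active")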

Link to chat channel

Dylan Paiton (University of Tübingen)
CET: 15:00‑15:20
EDT: 10:00‑10:20
PDT: 07:00‑07:20
UTC: 14:00‑14:20
(20+5 min)
Programming neuromorphic computers: PyNN and beyond

show presentation.pdf (publicly accessible), show talk video

PyNN is a Python API for describing spiking neuronal networks consisting of point neurons, with synaptic plasticity. The API is intended to be independent of the underlying simulator or hardware platform: PyNN models can run on traditional simulators such as NEST, NEURON, and Brian, GPU-based simulators such as GeNN, and neuromorphic hardware systems such as BrainScaleS and SpiNNaker. In this talk I will present the current state of PyNN and forthcoming extensions, in particular support for multicompartmental models, intracellular calcium dynamics, and structural plasticity.
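
For readers unfamiliar with the API, a minimal PyNN script looks like the following (standard public PyNN constructs; the NEST backend is chosen here, and swapping the import line is what retargets the model to another simulator or platform):

    # Minimal PyNN network: Poisson input driving conductance-based LIF cells.
    import pyNN.nest as sim          # e.g. import pyNN.spiNNaker as sim instead

    sim.setup(timestep=0.1)          # ms

    stim = sim.Population(20, sim.SpikeSourcePoisson(rate=50.0))
    cells = sim.Population(10, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))
    cells.record(["spikes", "v"])

    sim.Projection(stim, cells,
                   sim.FixedProbabilityConnector(p_connect=0.5),
                   synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0),
                   receptor_type="excitatory")

    sim.run(1000.0)                  # ms
    spikes = cells.get_data().segments[0].spiketrains
    print([len(st) for st in spikes])
    sim.end()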

Link to chat channel

Andrew Davison (CNRS)
CET: 15:25‑15:35
EDT: 10:25‑10:35
PDT: 07:25‑07:35
UTC: 14:25‑14:35
(10+5 min)
Lightning talk: Neuromorphic Graph Algorithms: Cycle Detection, Odd Cycle Detection, and Max Flow

show talk video (YouTube) (local version)

Recently, neuromorphic systems have been applied outside of the arena of machine learning, primarily in the field of graph algorithms. Neuromorphic systems have been shown to perform graph algorithms faster and with lower power consumption than their traditional (GPU/CPU) counterparts, and are hence an attractive option for a co-processing unit in future high performance computing systems, where graph algorithms play a critical role. In this talk, I present a primer on several graph algorithms (cycle detection, odd cycle detection, and the Ford-Fulkerson max-flow algorithm) along with their neuromorphic implementations.
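
The spiking view of cycle detection is easy to caricature: each vertex becomes a neuron, each directed edge a unit-delay synapse, and a spike injected at a source vertex that ever arrives back at that vertex reveals a cycle. The sketch below is my own toy event propagation, not the ORNL implementation.

    from collections import deque

    def cycle_through(graph, source):
        """graph: dict vertex -> list of successors. Propagate a spike wave
        from `source`; if it re-reaches `source`, a cycle through it exists
        (every other neuron fires at most once). Returns (found, length)."""
        frontier, fired = deque([(source, 0)]), set()
        while frontier:
            v, t = frontier.popleft()            # spike reaches v at time t
            for u in graph[v]:
                if u == source:
                    return True, t + 1           # wave returned: cycle length t+1
                if u not in fired:
                    fired.add(u)
                    frontier.append((u, t + 1))
        return False, None

    g = {0: [1], 1: [2], 2: [0, 3], 3: []}
    print(cycle_through(g, 0))                   # (True, 3): 0 -> 1 -> 2 -> 0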

Link to chat channel

William Kay (Oak Ridge National Laboratory)
CET: 15:40‑16:00
EDT: 10:40‑11:00
PDT: 07:40‑08:00
UTC: 14:40‑15:00
(20+5 min)
BrainScaleS: Development Methodologies and Operating System

show presentation.pdf (publicly accessible), show talk video (YouTube) (local version)

The BrainScaleS (BSS) neuromorphic architectures are based on the analog emulation of neuro-synaptic behavior. Neuronal membrane voltages are represented as voltages, and model dynamics evolve in a time-continuous manner. Compared to biology, the systems run at a typical speedup factor of 1000–10000. This enables the evaluation of effects on long timescales and experiments with many trials. At the same time, BSS focuses on model configurability and flexibility in plasticity, experiment control, and data handling. On BSS-2, this flexibility is facilitated by an embedded SIMD microprocessor located next to the analog neural network core.

The extended configurability, the inclusion of embedded programmability, as well as the horizontal scalability of the systems induce additional complexity. Challenges arise in areas such as initial experiment configuration and runtime control, reproducibility and robustness. We present operation and development methodologies implemented for the BSS neuromorphic architectures and walk through the individual components constituting the software stack for BSS platform operation.

Link to chat channel

Eric Müller (Heidelberg University)
CET: 16:05‑16:15
EDT: 11:05‑11:15
PDT: 08:05‑08:15
UTC: 15:05‑15:15
(10+5 min)
Lightning talk: Evolving Spiking Neural Networks for Robot Sensory-motor Decision Tasks of Varying Difficulty

show presentation.pdf (public accessible), show talk video (YouTube) (local version)

While there is considerable enthusiasm for the potential of spiking neural network (SNN) computing, there remains the fundamental issue of designing the topologies and parameters for these networks. We say the topology IS the algorithm. Here, we describe experiments that apply evolutionary computation (genetic algorithms, GAs) to a simple robotic sensory-motor decision task, using a gene-driven topology growth algorithm and letting the GA set all the SNN's parameters. We highlight lessons learned from early experiments where evolution failed to produce designs beyond what we called "cheap-tricksters". These were simple topologies implementing decision strategies that could not satisfactorily solve tasks beyond the simplest, but were nonetheless able to outcompete more complex designs in the course of evolution. The solution involved alterations to the fitness function so as to reduce the inherent noise in the assessment of performance, adding gene-driven control of the symmetry of the topology, and improving the robot sensors to provide more detailed information about its environment. We show how some subtle variations in the topology and parameters can affect behaviors. We discuss an approach of gradually increasing the complexity of the task that can induce evolution to discover more complex designs. We conjecture that this type of approach will be important as a way to discover cognitive design principles.
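
A heavily simplified sketch of the kind of evolutionary loop described above (the callbacks and selection scheme are assumptions, not the speaker's code); the lesson about noisy fitness appears as averaging each genome's score over several independent trials:

```python
import random

def evolve(make_genome, mutate, crossover, evaluate,
           pop_size=50, generations=100, n_trials=5):
    """make_genome/mutate/crossover/evaluate are caller-supplied callbacks."""
    pop = [make_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # Task performance is noisy: average over repeated trials
        scored = [(sum(evaluate(g) for _ in range(n_trials)) / n_trials, g)
                  for g in pop]
        scored.sort(key=lambda t: t[0], reverse=True)
        elite = [g for _, g in scored[:pop_size // 5]]   # truncation selection
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return scored[0]   # best (mean fitness, genome) of the last generation
```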

Link to chat channel

J. David Schaffer (Binghamton University)
CET: 16:20‑16:50
EDT: 11:20‑11:50
PDT: 08:20‑08:50
UTC: 15:20‑15:50
(30 min)
(break)
CET: 16:50‑17:10
EDT: 11:50‑12:10
PDT: 08:50‑09:10
UTC: 15:50‑16:10
(20+5 min)
Relational Neurogenesis for Lifelong Learning Agents

show presentation.pdf (public accessible), show talk video (YouTube) (local version)

Tej Pandit and Dhireesha Kudithipudi

Reinforcement learning systems have shown tremendous potential in being able to model meritorious behavior in virtual agents and robots. The ability to learn through continuous reinforcement and interaction with an environment negates the requirement of painstakingly curated datasets and hand-crafted features. However, the ability to learn multiple tasks in a sequential manner, referred to as lifelong or continual learning, remains unresolved. Current implementations either concentrate on preserving information in fixed-capacity networks, or propose incrementally growing networks which randomly search through an unconstrained solution space. This presentation discusses a novel algorithm for continual learning using neurogenesis in reinforcement learning agents. It builds upon existing neuroevolutionary techniques, and incorporates several new mechanisms for limiting memory resources while expanding neural network learning capacity. The algorithm is tested on a custom set of sequential virtual environments which emulate meaningful real-world scenarios, such as forest fires.
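
As a toy sketch of the growth-versus-resources trade-off described above (all names hypothetical; the paper's neurogenesis mechanisms are considerably richer):

```python
class GrowableNet:
    """Hypothetical stand-in for the agent's policy network."""
    def __init__(self, n_hidden):
        self.n_hidden = n_hidden
    def add_units(self, k):
        self.n_hidden += k          # real system: new neurons + connections

def maybe_grow(net, task_reward, reward_target, max_units, grow_step=8):
    # Grow only if the new task is under-served AND the resource cap allows
    if task_reward < reward_target and net.n_hidden + grow_step <= max_units:
        net.add_units(grow_step)
        return True
    return False

net = GrowableNet(n_hidden=32)
print(maybe_grow(net, task_reward=0.2, reward_target=0.8, max_units=64))  # True
```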

Link to chat channel

Tej Pandit (University of Texas at San Antonio)
CET: 17:15‑17:25
EDT: 12:15‑12:25
PDT: 09:15‑09:25
UTC: 16:15‑16:25
(10+5 min)
Lightning talk: Fast and deep neuromorphic learning with first-spike coding
(the presentation .pdf is accessible for meeting attendants from their 'personal page'), show talk video (YouTube) (local version)

For a biological agent operating under environmental pressure, energy consumption and reaction times are of critical importance. Similarly, engineered systems also strive for short time-to-solution and low energy-to-solution characteristics, but current machine learning solutions struggle to meet the latter goal in particular. Back in biology, at the level of neuronal implementation, the two goals imply achieving the desired results with as few and as early spikes as possible. In the time-to-first-spike-coding framework, both of these goals are inherently emerging features of learning. We describe a rigorous derivation of learning such first-spike times in networks of leaky integrate-and-fire neurons, relying solely on input and output spike times, and show how it can implement error backpropagation in hierarchical spiking networks. Furthermore, we emulate our framework on the BrainScaleS-2 neuromorphic system and demonstrate its capability of harnessing the chip's speed and energy characteristics to solve the typical machine learning problem of image classification. Finally, we examine how our approach generalizes to other neuromorphic platforms by studying how its performance is affected by typical distortive effects induced by neuromorphic substrates.
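
As a minimal numerical illustration of the quantity being learned (dynamics and parameters assumed for the example; the talk derives exact, differentiable expressions for this spike time instead of simulating it):

```python
def first_spike_time(in_times, weights, tau_m=10.0, tau_s=5.0,
                     v_th=1.0, dt=0.01, t_max=100.0):
    """First output-spike time of a LIF neuron with exponential synapses."""
    v, i_syn, t, k = 0.0, 0.0, 0.0, 0
    spikes = sorted(zip(in_times, weights))      # (time, weight) pairs
    while t < t_max:
        while k < len(spikes) and spikes[k][0] <= t:
            i_syn += spikes[k][1]                # current kick per input spike
            k += 1
        v += dt * (-v / tau_m + i_syn)           # leaky membrane integration
        i_syn -= dt * i_syn / tau_s              # exponential synaptic decay
        if v >= v_th:
            return t                             # first threshold crossing
        t += dt
    return None                                  # no output spike

print(first_spike_time([1.0, 2.0, 3.0], [0.3, 0.4, 0.5]))
```

Having this mapping from input spike times and weights to the first output-spike time in closed form, rather than via simulation, is what enables exact error backpropagation through spike times.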

Link to chat channel

Julian Goeltz (Kirchhoff Institut fuer Physik, Universitaet Heidelberg)
CET: 17:30‑17:40
EDT: 12:30‑12:40
PDT: 09:30‑09:40
UTC: 16:30‑16:40
(10+5 min)
Lightning talk: Neuromorphic Computing for Spacecraft’s Terrain Relative Navigation: A Case of Event-Based Crater Classification Task

show presentation.pdf (public accessible), (a video of this talk is available for meeting attendants. Please check your personal meeting page (for EBRAINS account owners: personal meeting page and show video))

Terrain relative navigation is a key technology to enhance conventional spacecraft navigation systems for accurate landing on a planetary body. Since the navigation task is self-localization based on terrain information, computer vision tasks using terrain images are often used for feature extraction and matching. Although the navigation system requires real-time, onboard processing capability due to the high-speed descent and the communication propagation delay, the processing performance of space-grade computers is about two orders of magnitude lower than that of commercial ones. This decline in performance is caused by power constraints and the radiation hardening required by the space environment. Neuromorphic computing architectures may meet this need in terms of power consumption and processing speed.

In this study, we investigate the applicability of neuromorphic computing systems to crater classification as a component of terrain relative navigation. The navigation system consists of a spiking neural network that processes the classification task and an event-based camera that provides terrain information as input to the network. Results show that the system can classify craters with very low power consumption while maintaining performance comparable to existing computing architectures.
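
A sketch of the input side of such a pipeline (assumed representation, not the authors' code): camera events, given as (x, y, t, polarity) tuples, are binned into short time windows to form the spike input of the classifier.

```python
import numpy as np

def events_to_spike_frames(events, width, height,
                           t_window=10.0, n_windows=10):
    """Bin (x, y, t, polarity) events into binary spike frames."""
    frames = np.zeros((n_windows, 2, height, width), dtype=np.uint8)
    for x, y, t, pol in events:                 # pol assumed in {0, 1}
        w = int(t // t_window)
        if 0 <= w < n_windows:
            frames[w, int(pol), int(y), int(x)] = 1   # at most 1 spike/bin
    return frames

# e.g. three events within the first 10 ms window
print(events_to_spike_frames([(3, 4, 1.2, 1), (3, 5, 2.0, 0), (7, 1, 9.9, 1)],
                             width=8, height=8).sum())  # 3
```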

Link to chat channel

Kazuki Kariya (The Graduate University for Advanced Studies, SOKENDAI)
CET: 17:45‑18:05
EDT: 12:45‑13:05
PDT: 09:45‑10:05
UTC: 16:45‑17:05
(20+5 min)
Beyond Backprop: Different Approaches to Credit Assignment in Neural Nets

show presentation.pdf (public accessible), show talk video (YouTube) (local version)

The backpropagation algorithm (backprop) has been the workhorse of neural net learning for several decades, and its practical effectiveness is demonstrated by recent successes of deep learning in a wide range of applications. This approach uses chain rule differentiation to compute gradients in state-of-the-art learning algorithms such as stochastic gradient descent (SGD) and its variations. However, backprop has several drawbacks as well, including the vanishing and exploding gradients issue, inability to handle non-differentiable nonlinearities and to parallelize weight updates across layers, and biological implausibility. These limitations continue to motivate exploration of alternative training algorithms, including several recently proposed auxiliary-variable methods which break the complex nested objective function into local subproblems. However, those techniques are mainly offline (batch), which limits their applicability to extremely large datasets, as well as to online, continual or reinforcement learning. The main contribution of our work is a novel online (stochastic/mini-batch) alternating minimization (AM) approach for training deep neural networks, together with the first theoretical convergence guarantees for AM in stochastic settings and promising empirical results on a variety of architectures and datasets.
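
To make the auxiliary-variable idea concrete, here is a toy online AM sketch for a two-layer regression network (update rules assumed for illustration; the talk's algorithm and its convergence guarantees are more general). Per mini-batch, the auxiliary pre-activations a are updated first, then each layer solves its own local objective, with no end-to-end chain rule:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

d, h, o, lr = 4, 16, 1, 0.05
W1 = rng.normal(size=(h, d)) * 0.3
W2 = rng.normal(size=(o, h)) * 0.3

for _ in range(200):
    x = rng.normal(size=(d, 32))                  # mini-batch (online setting)
    y = np.sin(x.sum(axis=0, keepdims=True))      # toy regression target
    a = W1 @ x                                    # auxiliary pre-activations
    for _ in range(5):                            # (1) local steps on a
        grad_a = (a - W1 @ x) + (a > 0) * (W2.T @ (W2 @ relu(a) - y))
        a -= lr * grad_a
    # (2) each layer minimizes its own quadratic coupling term
    W1 -= lr * (W1 @ x - a) @ x.T / x.shape[1]
    W2 -= lr * (W2 @ relu(a) - y) @ relu(a).T / x.shape[1]
```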

Biography: https://sites.google.com/site/irinarish/

Link to chat channel

Irina Rish (MILA / Université de Montréal)
CET: 18:10‑18:20
EDT: 13:10‑13:20
PDT: 10:10‑10:20
UTC: 17:10‑17:20
(10+5 min)
Lightning talk: Comparing Neural Accelerators & Neuromorphic Architectures: The False Idol of Operations

show presentation.pdf (public accessible), (a video of this talk is available for meeting attendants. Please check your personal meeting page (for EBRAINS account owners: personal meeting page and show video))

Accompanying the advanced computing capabilities that neural networks are enabling across a suite of application domains, there is a resurgence of interest in understanding which architectures can efficiently support these computational demands. Both neural accelerators and neuromorphic approaches are emerging at different scales, resource requirements, and enabling capabilities. Beyond the similarity of executing neural network workloads, the two paradigms exhibit significant differences. Accordingly, here we compare neural accelerators and neuromorphic architectures, highlighting that operation counts alone are an inadequate singular measure of performance.
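
A purely hypothetical numerical illustration of that point (all figures invented for the example): an architecture that dominates on peak operations per second can still lose badly on energy per inference.

```python
#                        peak GOP/s   power (W)   inferences/s (batch 1)
archs = {
    "neural_accelerator": (2000.0,      75.0,       900.0),
    "neuromorphic_chip":  (  50.0,       0.3,       400.0),
}
for name, (gops, watts, inf_per_s) in archs.items():
    energy_mj = 1e3 * watts / inf_per_s      # energy per inference (mJ)
    print(f"{name:19s} {gops:7.1f} GOP/s   {energy_mj:7.2f} mJ/inference")
```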

Link to chat channel

Craig Vineyard (Sandia National Laboratories)
CET: 18:25‑18:45
EDT: 13:25‑13:45
PDT: 10:25‑10:45
UTC: 17:25‑17:45
(20+5 min)
Real-time Mapping on a Neuromorphic Processor

show talk video (YouTube) (local version)

Navigation is so crucial for our survival that the brain hosts a dedicated network of neurons to map our surroundings. Place cells, grid cells, border cells, head direction cells and other specialized neurons in the hippocampus and the cortex work together in planning and learning maps of the environment [1]. When faced with similar navigation challenges, robots have an equally important need for generating a stable and accurate map. In our ongoing effort to translate the biological network for spatial navigation into a spiking neural network (SNN) that controls mobile robots in real-time, we first focused on simultaneous localization and mapping (SLAM), one of the critical problems in robotics that relies highly on the accuracy of the map representation [2]. Our approach leverages the asynchronous computing paradigm commonly found across brain areas, the same paradigm that spurred the emergence of new neuromorphic processors such as Intel's Loihi [4] and IBM's TrueNorth [5], and has already been demonstrated to be a significantly energy-efficient solution for 1D SLAM [3]. In this paper, we expand our previous work by proposing an SNN that forms a cognitive map of an unknown environment and integrates seamlessly with Loihi.

[1] S. Poulter, T. Hartley, and C. Lever, "The neurobiology of mammalian navigation," Current Biology, vol. 28, no. 17, pp. R1023-R1042, 2018.

[2] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, no. 1, p. 34, 2007.

[3] G. Tang, A. Shah, and K. P. Michmizos, "Spiking neural network on neuromorphic hardware for energy-efficient unidimensional SLAM," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 2019, pp. 1-6.

[4] M. Davies et al., "Loihi: A neuromorphic manycore processor with on-chip learning," IEEE Micro, vol. 38, no. 1, pp. 82-99, 2018.

[5] P. A. Merolla et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science, vol. 345, no. 6197, pp. 668-673, 2014.
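
As a toy illustration of the place-cell mapping idea in the abstract above (assumed dynamics, not the authors' network): a ring of place neurons carries the pose estimate as an activity bump, and a Hebbian-style update strengthens map weights where obstacle events coincide with active place cells.

```python
import numpy as np

n = 100                                     # place neurons on a 1D ring
positions = np.arange(n)

def place_activity(pose, sigma=2.0):
    """Gaussian activity bump centred on the current pose estimate."""
    d = np.minimum(np.abs(positions - pose), n - np.abs(positions - pose))
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def observe(map_w, pose, obstacle_event, lr=0.2):
    if obstacle_event:                      # coincidence -> Hebbian update
        map_w += lr * place_activity(pose) * (1.0 - map_w)

map_w = np.zeros(n)                         # learned map (obstacle belief)
for pose in [10, 10, 50, 90, 90, 90]:       # simulated trajectory
    observe(map_w, pose, obstacle_event=pose in (10, 90))
print(np.flatnonzero(map_w > 0.1))          # obstacles mapped near 10 and 90
```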

Link to chat channel

Konstantinos Michmizos (Rutgers University)
CET: 18:50‑19:19
EDT: 13:50‑14:19
PDT: 10:50‑11:19
UTC: 17:50‑18:19
(29 min)
Open mic / Wrap up

  • Best talk awards (by the NEUROTECH project):
    • Johannes Leugering (Fraunhofer)
    • Julian Göltz (Heidelberg University)
    • Charlotte Frenkel (Institute of Neuroinformatics)
    • Jakob Jordan (University of Bern)
CET: 19:19
EDT: 14:19
PDT: 11:19
UTC: 18:19
Farewell .... and See you next year...
CET: 19:20
EDT: 14:20
PDT: 11:20
UTC: 18:20
End of NICE 2021
Contact: bjoern.kindler@kip.uni-heidelberg.de