NICE 2022 - Agenda

Monday, 28 March 2022
CEST: 14:00‑18:30
EDT: 08:00‑12:30
CDT: 07:00‑11:30
MDT: 06:00‑10:30
PDT: 05:00‑09:30
UTC: 12:00‑16:30
(270 min)
Pre-NICE day -- Monday, 28 March 2022

A pre-NICE event: The NEUROTECH project offers (free of charge) a NEUROTECH WorkGroup Day: Bridging materials to neuromorphic systems and applications. more information and registration


Tuesday, 29 March 2022
CEST: 15:30
EDT: 09:30
CDT: 08:30
MDT: 07:30
PDT: 06:30
UTC: 13:30

NICE 2022


The 9th Annual Neuro-Inspired Computational Elements (NICE) workshop

Agenda

(Please note that the agenda is not completely final yet, so talks may still "move around a bit".)

Times in the agenda are in CEST (Europe, Berlin), EDT (New York), CDT (Central time), MDT (Denver), PDT (Los Angeles) and UTC. (Some other time zones: Australia, Japan, China, India, ... or only CEST ... )

Meeting venue / Dial-in

  • online as a Zoom video conference (live talks and Q&A). The Zoom video conference client software (free of charge, available for Windows, Mac and Linux at zoom.us; also available for iOS and Android in the respective app stores) is required.
  • The dial-in link will be shown on the 'personal page' of registered attendees:
    • For people who registered with an EBRAINS account: the personal page is here.
    • For people who registered with email only: please find the link to the personal page in your e-mail

Registration

Please register here (50 Euro regular, 25 Euro student)

Chatserver

We have a chat server for the workshop; please use the username and initial password from your meeting's 'personal page' (URL in your e-mail). Please ask talk-related questions in the respective talk channel (linked here in the agenda).

Calendar export

The ".ics" links in the agenda provide calendar entries for the individual events. If you would like to get all events into your calendar, please use the whole-meeting calendar export .ics.
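For reference, an .ics entry is just plain text in the iCalendar format. A minimal sketch of generating one in Python (the UID and times below are hypothetical placeholders, not the official entries):

```python
# Minimal sketch: build an iCalendar (.ics) entry for one agenda item.
# The UID and times below are hypothetical placeholders.
def make_ics_event(uid, start_utc, end_utc, summary):
    """Return a minimal VCALENDAR string containing a single VEVENT (UTC times)."""
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//NICE agenda//EN",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTART:{start_utc}",  # e.g. 20220329T133000Z
        f"DTEND:{end_utc}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(make_ics_event("nice2022-opening@example.org",
                     "20220329T133000Z", "20220329T133500Z",
                     "Opening of NICE"))
```

Importing such a file into any standard calendar application creates the corresponding event.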

Help

For any technical questions:

  • if you have already registered for the meeting (and thus have access to the chat server), please use the Helpdesk chat channel.
  • otherwise, please send an e-mail to brainscales_admin@kip.uni-heidelberg.de

Session chair for the first set of talks: Dhireesha Kudithipudi

CEST: 15:30‑15:35
EDT: 09:30‑09:35
CDT: 08:30‑08:35
MDT: 07:30‑07:35
PDT: 06:30‑06:35
UTC: 13:30‑13:35
(5 min)
Opening of NICE

.ics

CEST: 15:35‑15:50
EDT: 09:35‑09:50
CDT: 08:35‑08:50
MDT: 07:35‑07:50
PDT: 06:35‑06:50
UTC: 13:35‑13:50
(15 min)
Welcome to NICE

.ics

Taylor Eighmy (President of UTSA)
CEST: 15:50‑16:15
EDT: 09:50‑10:15
CDT: 08:50‑09:15
MDT: 07:50‑08:15
PDT: 06:50‑07:15
UTC: 13:50‑14:15
(25 min)
Organizer Round

.ics

CEST: 16:15‑16:55
EDT: 10:15‑10:55
CDT: 09:15‑09:55
MDT: 08:15‑08:55
PDT: 07:15‑07:55
UTC: 14:15‑14:55
(40+5 min)
Keynote I: The pervasiveness of disentangled representational geometries in the brain

show talk video

talk slides (60 MB pptx)

Link to chat channel

.ics

Stefano Fusi (Columbia)
CEST: 17:00‑17:25
EDT: 11:00‑11:25
CDT: 10:00‑10:25
MDT: 09:00‑09:25
PDT: 08:00‑08:25
UTC: 15:00‑15:25
(25+5 min)
Computing on Functions Using Randomized Vector Representations
show presentation.pdf (publicly accessible)

Link to chat channel

.ics

Christopher Kymn (University of California, Berkeley)
CEST: 17:30‑17:35
EDT: 11:30‑11:35
CDT: 10:30‑10:35
MDT: 09:30‑09:35
PDT: 08:30‑08:35
UTC: 15:30‑15:35
(5 min)
Group photo

We will take screenshots of the Zoom session for a publishable NICE group photo. Please:

  • turn your camera ON if you are fine with appearing in the publicly published group photo
  • otherwise, turn your camera OFF while the group photo is being taken

.ics

CEST: 17:35‑18:05
EDT: 11:35‑12:05
CDT: 10:35‑11:05
MDT: 09:35‑10:05
PDT: 08:35‑09:05
UTC: 15:35‑16:05
(30 min)
Break

Session chair for the next set of talks: Steve Furber

CEST: 18:05‑18:30
EDT: 12:05‑12:30
CDT: 11:05‑11:30
MDT: 10:05‑10:30
PDT: 09:05‑09:30
UTC: 16:05‑16:30
(25+5 min)
Integer Factorization with Compositional Distributed Representations
show presentation.pdf (publicly accessible), show talk video

Link to chat channel

.ics

Denis Kleyko (UC Berkeley)
CEST: 18:35‑18:45
EDT: 12:35‑12:45
CDT: 11:35‑11:45
MDT: 10:35‑10:45
PDT: 09:35‑09:45
UTC: 16:35‑16:45
(10+5 min)
Lightning talk: Efficient Optimized Spike Encoding of Multivariate Time-series
(the presentation .pdf is accessible for meeting attendees from their 'personal page'), video (restricted access)

Link to chat channel

.ics

Dighanchal Banerjee (Tata Consultancy Services)
CEST: 18:50‑19:15
EDT: 12:50‑13:15
CDT: 11:50‑12:15
MDT: 10:50‑11:15
PDT: 09:50‑10:15
UTC: 16:50‑17:15
(25+5 min)
Quantum many-body states: A novel neuromorphic application
show presentation.pdf (publicly accessible), show talk video

Emergent phenomena in condensed matter physics, such as superconductivity, are rooted in the interaction of many quantum particles. These phenomena remain poorly understood, in part due to the computational demands of their simulation. In recent years, variational representations based on artificial neural networks, so-called neural quantum states (NQS), have been shown to be efficient, i.e. sub-exponentially scaling, representations. However, the computational complexity of such representations scales not only with the size of the physical system, but also with the size of the neural network. In this work, we use the analog neuromorphic BrainScaleS-2 platform to implement probabilistic representations of two particular types of quantum states.
The physical nature of the neuromorphic system enforces an inherent parallelism of the computation, rendering the emulation time independent of the network size used. We show the effectiveness of our scheme in two settings:

  • First, we consider a hallmark test for "quantumness" by representing a quantum state that violates the classical bounds of the Bell inequality.
  • Second, we show that we can represent the large class of stoquastic quantum states with fidelities above 98% for moderate system sizes. This offers a novel application for spike-based neuromorphic hardware which departs from the more traditional neuroscience-inspired use cases.
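The Bell-inequality test in the first bullet can be illustrated with a short calculation: for the CHSH combination of correlations, any classical (local hidden variable) model obeys |S| ≤ 2, while the singlet-state correlations E(a, b) = -cos(a - b) reach 2√2. A sketch of that arithmetic (illustrative only, not the authors' code):

```python
import math

# CHSH illustration: singlet-state correlations violate the classical bound 2.
def E(a, b):
    """Quantum correlation of a spin singlet for analyser angles a and b."""
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2           # Alice's two measurement settings
b, b2 = math.pi / 4, -math.pi / 4  # Bob's two measurement settings

# Standard CHSH combination of the four correlations.
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2), exceeding the classical bound of 2
```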

Link to chat channel

.ics

Andreas Baumbach (Ruprecht-Karls-Universitaet Heidelberg)
CEST: 19:20‑19:30
EDT: 13:20‑13:30
CDT: 12:20‑12:30
MDT: 11:20‑11:30
PDT: 10:20‑10:30
UTC: 17:20‑17:30
(10+5 min)
Lightning talk: CMOS-Free Multilayer Perceptron Enabled by Four-Terminal MTJ Device
(the presentation .pdf is accessible for meeting attendees from their 'personal page')

Artificial intelligence promises considerable improvements over conventional von Neumann architectures for applications that process real-world, unstructured information. To fully realize this potential, neuromorphic systems should exploit the biomimetic behavior of emerging nanodevices. In particular, exceptional opportunities are provided by the non-volatility and analog capabilities of spintronic devices. While a variety of spintronic devices have been proposed that exhibit characteristics similar to neurons and synapses, they necessitate the use of complementary metal-oxide-semiconductor (CMOS) devices to implement multilayer perceptron crossbars. This work therefore proposes a new spintronic neuron that enables purely spintronic multilayer perceptrons, eliminating the need for CMOS circuitry and simplifying fabrication.

Link to chat channel

.ics

Wesley Brigner (The University of Texas at Dallas)
CEST: 19:35‑19:50
EDT: 13:35‑13:50
CDT: 12:35‑12:50
MDT: 11:35‑11:50
PDT: 10:35‑10:50
UTC: 17:35‑17:50
(15 min)
Break

Session chair for the next set of talks: Winfried Wilcke

CEST: 19:50‑20:00
EDT: 13:50‑14:00
CDT: 12:50‑13:00
MDT: 11:50‑12:00
PDT: 10:50‑11:00
UTC: 17:50‑18:00
(10+5 min)
Lightning talk: Graph Embedding Using Cortical Like Sparse Distributed Representations
show presentation.pdf (publicly accessible)

Dan W. Hammerstrom[1], Dmitri Nikonov[2], and Mohamed Abidalrekab[3]

The goal of the research discussed in this talk is to explore the mapping of graphs to basic cortical-like arrays using sparse distributed data representations, and to understand which characteristics of the cortical array, such as connectivity, allow the efficient embedding of complex graphs. In other words, do cortical-like networks generate useful graph embeddings for a range of applications (graph queries)? Most real-world graphs (such as semantic or knowledge graphs) often display scale-free / small-world characteristics, which lead to small-diameter graphs and graphs with lognormal distributions of node connectivity. Such characteristics have also been observed in neural circuitry.

This work represents a first step in understanding how complex, structured knowledge may be represented by simplified cortical arrays. The next step is to adopt increasingly more realistic cortical models that concurrently improve the quality of graph embeddings, and to investigate their performance over a wide range of graph queries.

[1] Portland State University [2] Intel Corporation [3] Portland State University
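To make the notion of sparse distributed representations concrete, here is a toy sketch (an illustration only, not the authors' model): each node gets a random sparse binary code, a node's embedding is the union of its own code and its neighbours' codes, and similarity is measured by overlap, so adjacent nodes come out more similar than distant ones.

```python
import random

# Toy sketch of graph embedding with sparse distributed representations.
random.seed(0)
DIM, ACTIVE = 1000, 20  # code dimensionality and number of active bits

def sparse_code():
    """A random sparse binary code, stored as the set of active bit indices."""
    return set(random.sample(range(DIM), ACTIVE))

# A path graph A - B - C - D.
nodes = ["A", "B", "C", "D"]
edges = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
code = {n: sparse_code() for n in nodes}

def embed(n):
    """Embedding = own code OR'ed with the codes of direct neighbours."""
    e = set(code[n])
    for m in edges[n]:
        e |= code[m]
    return e

overlap = lambda x, y: len(x & y)
print(overlap(embed("A"), embed("B")))  # adjacent nodes: large overlap
print(overlap(embed("A"), embed("D")))  # distant nodes: near-chance overlap
```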

Link to chat channel

.ics

Dan Hammerstrom (Portland State University)
CEST: 20:05‑20:30
EDT: 14:05‑14:30
CDT: 13:05‑13:30
MDT: 12:05‑12:30
PDT: 11:05‑11:30
UTC: 18:05‑18:30
(25+5 min)
Accelerating Deep Neural Networks with Analog-memory-based Hardware Accelerators
(the presentation .pdf is accessible for meeting attendees from their 'personal page')

Link to chat channel

.ics

An Chen (IBM Research)
CEST: 20:35‑21:00
EDT: 14:35‑15:00
CDT: 13:35‑14:00
MDT: 12:35‑13:00
PDT: 11:35‑12:00
UTC: 18:35‑19:00
(25 min)
Open mic / discussion

.ics

CEST: 21:00
EDT: 15:00
CDT: 14:00
MDT: 13:00
PDT: 12:00
UTC: 19:00
End of day I
CEST: 21:00‑21:30
EDT: 15:00‑15:30
CDT: 14:00‑14:30
MDT: 13:00‑13:30
PDT: 12:00‑12:30
UTC: 19:00‑19:30
(30 min)
 
Break
CEST: 21:30
EDT: 15:30
CDT: 14:30
MDT: 13:30
PDT: 12:30
UTC: 19:30
Hands-on tutorial: BrainScaleS-2 interactive via EBRAINS

Notes:

  • This tutorial is offered twice with identical content (to allow easy access for America- and Europe-based attendees)
  • For using the BrainScaleS system during this hands-on tutorial session, please create (ahead of time) an EBRAINS account (free of charge) at: https://ebrains.eu/register/

Link to chat channel

CEST: 21:30‑22:00
EDT: 15:30‑16:00
CDT: 14:30‑15:00
MDT: 13:30‑14:00
PDT: 12:30‑13:00
UTC: 19:30‑20:00
(30 min)
 
Hands-on tutorial: BrainScaleS-2 introduction
show presentation.pdf (publicly accessible), show talk video

.ics

Eric Müller (Ruprecht-Karls-Universitaet Heidelberg)
CEST: 22:00‑22:10
EDT: 16:00‑16:10
CDT: 15:00‑15:10
MDT: 14:00‑14:10
PDT: 13:00‑13:10
UTC: 20:00‑20:10
(10 min)
 
Hands-on tutorial: BrainScaleS-2 first steps

.ics

Christian Pehle (Ruprecht-Karls-Universitaet Heidelberg)
CEST: 22:10‑22:25
EDT: 16:10‑16:25
CDT: 15:10‑15:25
MDT: 14:10‑14:25
PDT: 13:10‑13:25
UTC: 20:10‑20:25
(15 min)
 
Hands-on tutorial: BrainScaleS-2 Learning with the SuperSpike rule

video (restricted access)

.ics

Elias Arnold (Ruprecht-Karls-Universitaet Heidelberg)
CEST: 22:25‑22:35
EDT: 16:25‑16:35
CDT: 15:25‑15:35
MDT: 14:25‑14:35
PDT: 13:25‑13:35
UTC: 20:25‑20:35
(10 min)
 
Hands-on tutorial: BrainScaleS-2 Structured Neurons

show talk video

.ics

Jakob Kaiser (Ruprecht-Karls-Universitaet Heidelberg)

Wednesday, 30 March 2022
CEST: 12:00‑14:00
EDT: 06:00‑08:00
CDT: 05:00‑07:00
MDT: 04:00‑06:00
PDT: 03:00‑05:00
UTC: 10:00‑12:00
(120 min)
Hands-on tutorial: SpiNNaker
show presentation.pdf (publicly accessible), show talk video

Notes:

  • This tutorial is offered twice today with identical content (to allow easy access for America- and Europe-based attendees) - the other tutorial is just after today's talk program
  • For using the SpiNNaker system during this hands-on tutorial session, please create (ahead of time) an EBRAINS account (free of charge) at: https://ebrains.eu/register/

Link to chat channel

.ics

Andrew Rowley (The University of Manchester)
CEST: 14:00‑15:00
EDT: 08:00‑09:00
CDT: 07:00‑08:00
MDT: 06:00‑07:00
PDT: 05:00‑06:00
UTC: 12:00‑13:00
(60 min)
Break

Session chair for the first set of talks: Brad Aimone

CEST: 15:00
EDT: 09:00
CDT: 08:00
MDT: 07:00
PDT: 06:00
UTC: 13:00

NICE 2022 - day II -- Wednesday, 30 March 2022

CEST: 15:00‑15:05
EDT: 09:00‑09:05
CDT: 08:00‑08:05
MDT: 07:00‑07:05
PDT: 06:00‑06:05
UTC: 13:00‑13:05
(5 min)
Welcome, day II

.ics

CEST: 15:05‑15:30
EDT: 09:05‑09:30
CDT: 08:05‑08:30
MDT: 07:05‑07:30
PDT: 06:05‑06:30
UTC: 13:05‑13:30
(25+5 min)
Coinflips: CO-designed Improved Neural Foundations Leveraging Inherent Physics Stochasticity
show presentation.pdf (publicly accessible), show talk video

Link to chat channel

.ics

Brad Aimone (Sandia National Laboratories)
CEST: 15:35‑15:45
EDT: 09:35‑09:45
CDT: 08:35‑08:45
MDT: 07:35‑07:45
PDT: 06:35‑06:45
UTC: 13:35‑13:45
(10+5 min)
Lightning talk: Temporal and Spatio-temporal domains for Neuromorphic Tactile Texture Classification
show presentation.pdf (publicly accessible)

Link to chat channel

.ics

George Brayshaw (University of Bristol)
CEST: 15:50‑16:15
EDT: 09:50‑10:15
CDT: 08:50‑09:15
MDT: 07:50‑08:15
PDT: 06:50‑07:15
UTC: 13:50‑14:15
(25+5 min)
Abisko: Deep Codesign of an Energy-Optimized, High Performance Neuromorphic Accelerator
show presentation.pdf (publicly accessible)

Link to chat channel

.ics

Jeffrey Vetter (Oak Ridge National Laboratory)
CEST: 16:20‑16:45
EDT: 10:20‑10:45
CDT: 09:20‑09:45
MDT: 08:20‑08:45
PDT: 07:20‑07:45
UTC: 14:20‑14:45
(25+5 min)
BrainScaleS via EBRAINS
show presentation.pdf (publicly accessible), show talk video

Link to chat channel

.ics

Johannes Schemmel (Ruprecht-Karls-Universitaet Heidelberg)
CEST: 16:50‑17:05
EDT: 10:50‑11:05
CDT: 09:50‑10:05
MDT: 08:50‑09:05
PDT: 07:50‑08:05
UTC: 14:50‑15:05
(15 min)
Break

Session chair for the next set of talks: Hemanth Jagannathan

CEST: 17:05‑17:30
EDT: 11:05‑11:30
CDT: 10:05‑10:30
MDT: 09:05‑09:30
PDT: 08:05‑08:30
UTC: 15:05‑15:30
(25+5 min)
New Tools for a New Era of Neuromorphic Computing
show presentation.pdf (publicly accessible)

Link to chat channel

.ics

Mike Davies (Intel)
CEST: 17:35‑17:45
EDT: 11:35‑11:45
CDT: 10:35‑10:45
MDT: 09:35‑09:45
PDT: 08:35‑08:45
UTC: 15:35‑15:45
(10+5 min)
Lightning talk: Demonstrating BrainScaleS 2 Inter-Chip Pulse Communication using Extoll
show presentation.pdf (publicly accessible), show talk video

The BrainScaleS-2 (BSS-2) neuromorphic computing system currently consists of multiple single-chip setups, which are connected to a compute cluster via Gigabit Ethernet network technology. This is convenient for small experiments, where the neural networks fit into a single chip. When modeling networks of larger size, neurons have to be connected across chip boundaries. We implement these connections for BSS-2 using the EXTOLL networking technology. This provides high bandwidth and low latency, as well as high message rates. Here, we describe the targeted pulse-routing implementation and the required extensions to the BSS-2 software stack. We also demonstrate feed-forward pulse routing on BSS-2 using a scaled-down version without temporal merging.

Link to chat channel

.ics

Tobias Thommes (Ruprecht-Karls-Universitaet Heidelberg)
CEST: 17:50‑18:15
EDT: 11:50‑12:15
CDT: 10:50‑11:15
MDT: 09:50‑10:15
PDT: 08:50‑09:15
UTC: 15:50‑16:15
(25+5 min)
Record Simulation of the Full-Density Spiking Potjans-Diesmann-Microcircuit Model on the IBM Neural Supercomputer INC 3000
show presentation.pdf (publicly accessible)

The full abstract is available here

Its discussion section reads: For future very large neuroscience networks with sophisticated plasticity models, a significant improvement of simulation times will be required. At first glance, there are three major challenges in this: first of all, to “bring the spikes to the FLOPS” and to synchronize the massively parallel processes, i.e., ultra-low-latency communication. Secondly, to access the synapse states (including plasticity information) in huge data structures, i.e., ultra-low-latency memory accesses. And, last but not least, to allow for quick network generation in the preparation of the simulation itself, avoiding hours-long set-up times. Neither today’s HPC systems nor dedicated neuromorphic computing systems are well suited to these requirements, and new, network-centric system concepts and architectures are needed to overcome the current limitations. At the very least, the simulation of cortical intra-area spike communication via electrical interconnects sets stringent limits on the spatial distance between the compute nodes. Multi-area models will enforce the use of improved hierarchical communication and synchronization infrastructures. Very large on-chip, or at least on-package, memories will become a must. The simulation of ultra-large unscaled cortical areas may require new concepts and hardware support for spatial mapping. Moreover, sophisticated plasticity models with compartmental dendrite models may also challenge arithmetic performance and make the use of efficient accelerators a must. Although area and energy efficiency do not initially appear to be a major goal for such systems, the stringent communication-latency constraints, as in biology, enforce a high integration density, at least at the level of cortical areas, and consequently the need for low power consumption. It does not appear reasonable to expect that future development of general-purpose HPC systems will follow these directions.

Link to chat channel

.ics

Arne Heittmann (PGI-10 Forschungszentrum Jülich)
CEST: 18:20‑18:30
EDT: 12:20‑12:30
CDT: 11:20‑11:30
MDT: 10:20‑10:30
PDT: 09:20‑09:30
UTC: 16:20‑16:30
(10+5 min)
Lightning talk: Frustrated Arrays of Nanomagnets for Efficient Reservoir Computing
(the presentation .pdf is accessible for meeting attendees from their 'personal page')

Link to chat channel

.ics

Alexander Edwards (The University of Texas at Dallas)
CEST: 18:35‑19:05
EDT: 12:35‑13:05
CDT: 11:35‑12:05
MDT: 10:35‑11:05
PDT: 09:35‑10:05
UTC: 16:35‑17:05
(30 min)
Break

Session chair for the next set of talks: Johannes Schemmel

CEST: 19:05‑19:30
EDT: 13:05‑13:30
CDT: 12:05‑12:30
MDT: 11:05‑11:30
PDT: 10:05‑10:30
UTC: 17:05‑17:30
(25+5 min)
SpiNNaker 2 results: A Platform for Real-Time Bio-Inspired AI and Cognition
show presentation.pdf (publicly accessible), show talk video

SpiNNaker (“Spiking Neural Network Architecture”) is a computer architecture designed for the efficient simulation of spiking neural networks. It integrates a huge number of ARM processors into a scalable system architecture with optimized memory access and communication infrastructure. This enables the energy-efficient simulation of neural networks in biological real time. Since 2013, TU Dresden and the University of Manchester have been jointly developing the 2nd-generation SpiNNaker system within the EU flagship Human Brain Project (HBP). The goal is to approach a simulation capacity on the order of a human brain in real time. Besides its original use for brain simulation, SpiNNaker2 has great potential as a research tool in the areas of the tactile internet, Industry 4.0 and automotive artificial intelligence. SpiNNcloud targets the following key figures: 10 million processors distributed across 10 server racks, 5 PFLOPS of CPU performance and 0.6 ExaOPS for machine learning through integrated MAC accelerators, at 400 kW power consumption. High energy efficiency is achieved by employing adaptive body biasing in 22 nm FDSOI technology, which allows operation at very low supply voltages. By combining high-performance machine learning, sensor/actuator processing at millisecond latency, and high energy efficiency, SpiNNcloud will represent a breakthrough in the area of real-time human-machine interaction at cloud scale. This talk will thus delve into SpiNNaker2 applications and the current development state of the machine.
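As a back-of-the-envelope check on the key figures quoted above, the targeted machine-learning efficiency works out to 1.5 TOPS/W:

```python
# Back-of-the-envelope arithmetic from the SpiNNcloud target figures above.
ml_ops_per_s = 0.6e18       # 0.6 ExaOPS of machine-learning throughput
power_w      = 400e3        # 400 kW total power consumption
processors   = 10_000_000   # 10 million processors across 10 server racks

efficiency_tops_per_w = ml_ops_per_s / power_w / 1e12
print(efficiency_tops_per_w)   # 1.5 TOPS/W
print(processors // 10)        # 1000000 processors per rack
```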

Link to chat channel

.ics

Christian Mayr (TU Dresden)
CEST: 19:35‑20:00
EDT: 13:35‑14:00
CDT: 12:35‑13:00
MDT: 11:35‑12:00
PDT: 10:35‑11:00
UTC: 17:35‑18:00
(25+5 min)
Photonic neuromorphic processing
show presentation.pdf (publicly accessible), show talk video

Link to chat channel

.ics

Wolfram Pernice (Heidelberg University)
CEST: 20:05‑20:30
EDT: 14:05‑14:30
CDT: 13:05‑13:30
MDT: 12:05‑12:30
PDT: 11:05‑11:30
UTC: 18:05‑18:30
(25+5 min)
Encoding Event-Based Data With a Hybrid SNN Guided Variational Auto-encoder in Neuromorphic Hardware
show presentation.pdf (publicly accessible), show talk video

Link to chat channel

.ics

Kenneth Stewart (University of California Irvine, and Accenture)
CEST: 20:35‑21:00
EDT: 14:35‑15:00
CDT: 13:35‑14:00
MDT: 12:35‑13:00
PDT: 11:35‑12:00
UTC: 18:35‑19:00
(25 min)
Open mic / discussion

.ics

CEST: 21:00
EDT: 15:00
CDT: 14:00
MDT: 13:00
PDT: 12:00
UTC: 19:00
End of day II
CEST: 21:00‑21:30
EDT: 15:00‑15:30
CDT: 14:00‑14:30
MDT: 13:00‑13:30
PDT: 12:00‑12:30
UTC: 19:00‑19:30
(30 min)
 
Break
CEST: 21:30‑23:30
EDT: 15:30‑17:30
CDT: 14:30‑16:30
MDT: 13:30‑15:30
PDT: 12:30‑14:30
UTC: 19:30‑21:30
(120 min)
 
Hands-on tutorial: SpiNNaker

Notes:

  • Please find the slides linked at the first instance of the tutorial (30 March in the morning)
  • This tutorial is offered twice today with identical content (to allow easy access for America- and Europe-based attendees) - the other tutorial was just before today's talk program
  • For using the SpiNNaker system during this hands-on tutorial session, please create (ahead of time) an EBRAINS account (free of charge) at: https://ebrains.eu/register/

Link to chat channel

.ics

Andrew Rowley (The University of Manchester)

Thursday, 31 March 2022
CEST: 12:00‑14:00
EDT: 06:00‑08:00
CDT: 05:00‑07:00
MDT: 04:00‑06:00
PDT: 03:00‑05:00
UTC: 10:00‑12:00
(120 min)
Hands-on tutorial: BrainScaleS-2 interactive via EBRAINS

Notes:

  • This tutorial is offered twice with identical content (to allow easy access for America- and Europe-based attendees)
  • For using the BrainScaleS system during this hands-on tutorial session, please create (ahead of time) an EBRAINS account (free of charge) at: https://ebrains.eu/register/

Link to chat channel

.ics

Eric Müller (Ruprecht-Karls-Universitaet Heidelberg)
CEST: 14:00‑15:00
EDT: 08:00‑09:00
CDT: 07:00‑08:00
MDT: 06:00‑07:00
PDT: 05:00‑06:00
UTC: 12:00‑13:00
(60 min)
Break

Session chair for the first set of talks: Johannes Schemmel

CEST: 15:00
EDT: 09:00
CDT: 08:00
MDT: 07:00
PDT: 06:00
UTC: 13:00

NICE 2022 - day III -- Thursday, 31 March 2022

CEST: 15:00‑15:25
EDT: 09:00‑09:25
CDT: 08:00‑08:25
MDT: 07:00‑07:25
PDT: 06:00‑06:25
UTC: 13:00‑13:25
(25+5 min)
Continual learning

show talk video

Link to chat channel

.ics

Dhireesha Kudithipudi (UTSA)
CEST: 15:30‑15:55
EDT: 09:30‑09:55
CDT: 08:30‑08:55
MDT: 07:30‑07:55
PDT: 06:30‑06:55
UTC: 13:30‑13:55
(25+5 min)
BitBrain and Sparse Binary Coincidence memories
show presentation.pdf (publicly accessible), show talk video

We present an innovative working mechanism (the SBC memory) and surrounding infrastructure (BitBrain) - based upon a novel synthesis of ideas from sparse coding, computational neuroscience and information theory - which support single-pass learning, accurate and robust inference, and the potential for continuous adaptive learning with and without forgetting. They are designed to be implemented efficiently on current and future neuromorphic devices as well as on more conventional CPU and memory architectures.

Link to chat channel

.ics

Michael Hopkins (The University of Manchester)
CEST: 16:00
EDT: 10:00
CDT: 09:00
MDT: 08:00
PDT: 07:00
UTC: 14:00
Three data talks
CEST: 16:00‑16:10
EDT: 10:00‑10:10
CDT: 09:00‑09:10
MDT: 08:00‑08:10
PDT: 07:00‑07:10
UTC: 14:00‑14:10
(10 min)
 
Lightning talk: Rapid Inference of Geographical Location with an Event-based Electronic Nose
show presentation.pdf (publicly accessible)

Link to chat channel

.ics

Nik Dennler (University of Hertfordshire)
CEST: 16:10‑16:20
EDT: 10:10‑10:20
CDT: 09:10‑09:20
MDT: 08:10‑08:20
PDT: 07:10‑07:20
UTC: 14:10‑14:20
(10 min)
 
Lightning talk: Event-based dataset for classification and pose estimation
show presentation.pdf (publicly accessible), show talk video

Link to chat channel

.ics

James Turner (University of Sussex)
CEST: 16:20‑16:30
EDT: 10:20‑10:30
CDT: 09:20‑09:30
MDT: 08:20‑08:30
PDT: 07:20‑07:30
UTC: 14:20‑14:30
(10 min)
 
Lightning talk: The Yin-Yang Dataset
(the presentation .pdf is accessible for meeting attendees from their 'personal page')

Link to chat channel

.ics

Laura Kriener (Universitaet Bern)
CEST: 16:30‑16:35
EDT: 10:30‑10:35
CDT: 09:30‑09:35
MDT: 08:30‑08:35
PDT: 07:30‑07:35
UTC: 14:30‑14:35
(5 min)
 
Q&A for the data talks

.ics

CEST: 16:35‑17:00
EDT: 10:35‑11:00
CDT: 09:35‑10:00
MDT: 08:35‑09:00
PDT: 07:35‑08:00
UTC: 14:35‑15:00
(25+5 min)
Optimal Oscillator Memory Networks
show presentation.pdf (publicly accessible), show talk video

Link to chat channel

.ics

Connor Bybee (UC Berkeley)
CEST: 17:05‑17:20
EDT: 11:05‑11:20
CDT: 10:05‑10:20
MDT: 09:05‑09:20
PDT: 08:05‑08:20
UTC: 15:05‑15:20
(15 min)
Break

Session chair for the next set of talks: Hemanth Jagannathan

CEST: 17:20‑17:30
EDT: 11:20‑11:30
CDT: 10:20‑10:30
MDT: 09:20‑09:30
PDT: 08:20‑08:30
UTC: 15:20‑15:30
(10+5 min)
Lightning talk: Stable Lifelong Learning: Spiking neurons as a solution to instability in plastic neural networks
show presentation.pdf (publicly accessible), show talk video

Link to chat channel

.ics

Samuel Schmidgall (George Mason University)
CEST: 17:35‑18:00
EDT: 11:35‑12:00
CDT: 10:35‑11:00
MDT: 09:35‑10:00
PDT: 08:35‑09:00
UTC: 15:35‑16:00
(25+5 min)
Towards the Neuromorphic Implementation of the Auditory Perception in the iCub Robotic Platform
show presentation.pdf (publicly accessible), show talk video

Link to chat channel

.ics

Daniel Gutierrez-Galan (University of Seville)
CEST: 18:05‑18:15
EDT: 12:05‑12:15
CDT: 11:05‑11:15
MDT: 10:05‑10:15
PDT: 09:05‑09:15
UTC: 16:05‑16:15
(10+5 min)
Lightning talk: Oscillatory Neural Network as Hetero-Associative Memory for Image Edge Detection
(the presentation .pdf is accessible for meeting attendees from their 'personal page'), show talk video

Typical image-processing methods performed at the edge use convolutional filters, which are energy-, computation- and memory-hungry algorithms. But edge devices and cameras have scarce computational resources, bandwidth and power, and privacy constraints limit sending data over to the cloud. Thus, there is a need to process image data at the edge. This need has incited a lot of interest in implementing neuromorphic computing at the edge. Recently, Oscillatory Neural Networks (ONNs) have emerged as a novel brain-inspired computing approach that emulates brain oscillations to perform auto-associative-memory types of applications. To speed up image edge detection and reduce its power consumption, we perform an in-depth investigation with ONNs. We propose a novel image processing method by using ONNs as a Heterogeneous Associative Memory (HAM) for image edge detection.
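For contrast, the conventional convolution-based baseline the abstract refers to can be as simple as a Sobel filter. A minimal pure-Python sketch (illustrative only, not the authors' ONN method):

```python
# The conventional baseline alluded to above: edge detection with a Sobel-x
# convolution kernel (illustrative contrast, not the authors' ONN approach).
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def conv2d_valid(img, k):
    """'Valid' 2-D cross-correlation of img with a 3x3 kernel k."""
    h, w = len(img) - 2, len(img[0]) - 2
    return [[sum(k[i][j] * img[r + i][c + j]
                 for i in range(3) for j in range(3))
             for c in range(w)] for r in range(h)]

# A 4x6 test image: dark left half, bright right half -> one vertical edge.
img = [[0, 0, 0, 1, 1, 1] for _ in range(4)]
resp = conv2d_valid(img, SOBEL_X)
print(resp[0])   # [0, 4, 4, 0]: strong response at the boundary, zero elsewhere
```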

Link to chat channel

.ics

Madeleine Abernot (LIRMM-CNRS)
CEST: 18:20‑19:00
EDT: 12:20‑13:00
CDT: 11:20‑12:00
MDT: 10:20‑11:00
PDT: 09:20‑10:00
UTC: 16:20‑17:00
(40+5 min)
Keynote II: Materials Matter: How biologically inspired alternatives to conventional neural networks improve meta-learning and continual learning
show presentation.pdf (publicly accessible), show talk video

I will describe how alternatives to conventional neural networks that are very loosely biologically inspired can improve meta-learning, including continual learning. First I will summarize differentiable Hebbian learning and differentiable neuromodulated Hebbian learning (aka “backpropamine”). Both are techniques for training deep neural networks with synaptic plasticity, meaning the weights can change during meta-testing/inference. Whereas meta-learned RNNs can only store within-episode information in their activations, such plastic Hebbian networks can store information in their weights in addition to their activations, improving performance on some classes of problems. Second, I will describe my view that we will make the fastest progress on our grand ambitions in AI research by trying to create AI-generating Algorithms (AI-GAs), which on their own learn to solve the hardest AI problems. I will describe one example of this paradigm: Learning to Continually Learn. Catastrophic forgetting is a longstanding Achilles' heel of machine learning, wherein our systems learn new tasks by overwriting their knowledge of how to solve previous tasks. To produce agents that can continually learn, we must prevent catastrophic forgetting. I will describe a Neuromodulated Meta-Learning algorithm (ANML), which uses meta-learning to try to solve catastrophic forgetting, producing state-of-the-art results.
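The plastic-weight idea in the first part of the abstract can be caricatured in a few lines: alongside a fixed weight, a connection carries a Hebbian trace updated from pre- and post-synaptic activity, optionally gated by a neuromodulatory signal. A schematic sketch under those assumptions (not the speaker's implementation):

```python
# Schematic neuromodulated Hebbian synapse (a caricature, not the talk's code):
# effective weight = fixed weight + alpha * hebb_trace, where the trace is
# updated from pre/post activity, gated by a scalar modulatory signal m.
def step(w, alpha, hebb, pre, post, m, eta=0.1):
    """One forward step through a single plastic synapse."""
    out = (w + alpha * hebb) * pre       # contribution to post-synaptic input
    hebb = hebb + eta * m * pre * post   # modulated Hebbian trace update
    return out, hebb

hebb = 0.0
for t in range(5):
    pre, post, m = 1.0, 0.5, 1.0         # toy activities and modulator
    out, hebb = step(w=0.2, alpha=1.0, hebb=hebb, pre=pre, post=post, m=m)
print(round(hebb, 3))   # trace has grown: 5 * 0.1 * 1.0 * 1.0 * 0.5 = 0.25
```

Setting m = 0 freezes the trace, which is the sense in which neuromodulation gates what gets stored in the weights.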

Link to chat channel

.ics

Jeff Clune (University of British Columbia)
CEST: 19:05‑19:35
EDT: 13:05‑13:35
CDT: 12:05‑12:35
MDT: 11:05‑11:35
PDT: 10:05‑10:35
UTC: 17:05‑17:35
(30 min)
Break

Session chair for the next set of talks: Tim Shea (Intel)

CEST: 19:35‑19:45
EDT: 13:35‑13:45
CDT: 12:35‑12:45
MDT: 11:35‑11:45
PDT: 10:35‑10:45
UTC: 17:35‑17:45
(10+5 min)
Lightning talk: Sequence Learning and Consolidation on Loihi using On-chip Plasticity
(the presentation .pdf is accessible for meeting attendees from their 'personal page')

Link to chat channel

.ics

Jack Lindsey (Columbia University)
CEST: 19:50‑20:15
EDT: 13:50‑14:15
CDT: 12:50‑13:15
MDT: 11:50‑12:15
PDT: 10:50‑11:15
UTC: 17:50‑18:15
(25+5 min)
Online learning in SNNs with e-prop and Neuromorphic Hardware
show presentation.pdf (publicly accessible), show talk video

Link to chat channel

.ics

Adam Perrett (University of Manchester)
CEST: 20:20‑20:30
EDT: 14:20‑14:30
CDT: 13:20‑13:30
MDT: 12:20‑12:30
PDT: 11:20‑11:30
UTC: 18:20‑18:30
(10+5 min)
Lightning talk: A Neuromorphic Normalization Algorithm for Stabilizing Synaptic Weights with Application to Dictionary Learning in LCA

Link to chat channel

.ics

Diego Chavez Arana (New Mexico State University)
CEST: 20:35‑21:00
EDT: 14:35‑15:00
CDT: 13:35‑14:00
MDT: 12:35‑13:00
PDT: 11:35‑12:00
UTC: 18:35‑19:00
(25 min)
Open mic / discussion

.ics

CEST: 21:00
EDT: 15:00
CDT: 14:00
MDT: 13:00
PDT: 12:00
UTC: 19:00
End of day III
CEST: 21:00‑22:00
EDT: 15:00‑16:00
CDT: 14:00‑15:00
MDT: 13:00‑14:00
PDT: 12:00‑13:00
UTC: 19:00‑20:00
(60 min)
 
Break
CEST: 22:00‑00:00
EDT: 16:00‑18:00
CDT: 15:00‑17:00
MDT: 14:00‑16:00
PDT: 13:00‑15:00
UTC: 20:00‑22:00
(120 min)
Loihi tutorial: Accelerating neuromorphic application development with Loihi 2 and Lava

This tutorial provides a deeper dive into the Loihi 2 architecture and latest results, as well as an overview of the open-source Lava Software Framework for neuromorphic computing including its user interface and algorithm libraries.

The tutorial video is accessible for attendees from their "personal page" (for those registered with an EBRAINS account, that is this page)

Link to chat channel

.ics

Andreas Wild (Intel Corporation)
Garrick Orchard (Intel Corporation)
Timothy Shea (Intel)
Sumit Bam Shrestha (Intel Labs)
Mathis Richter (Intel Deutschland GmbH)

Friday, 1 April 2022
CEST: 15:00
EDT: 09:00
CDT: 08:00
MDT: 07:00
PDT: 06:00
UTC: 13:00

NICE 2022 - day IV -- Friday, 1 April 2022

Session chair for the first set of talks: Yulia Sandamirskaya

CEST: 15:00‑15:45
EDT: 09:00‑09:45
CDT: 08:00‑08:45
MDT: 07:00‑07:45
PDT: 06:00‑06:45
UTC: 13:00‑13:45
(45 min)
Programme managers panel

Panelists:

  • David A. Markowitz, Program Manager, IARPA
  • Hal Greenwald
  • Robinson Pino

Moderators: Dhireesha Kudithipudi, Brad Aimone

Format: brief position statements (5 min each) by the panelists, followed by questions from the moderators and the audience.

Chat channel

.ics

Hal Greenwald (Air Force Office of Scientific Research)
David Markowitz (IARPA)
Robinson Pino (DOE)
CEST: 15:45‑16:10
EDT: 09:45‑10:10
CDT: 08:45‑09:10
MDT: 07:45‑08:10
PDT: 06:45‑07:10
UTC: 13:45‑14:10
(25+5 min)
Efficient GPU training of LSNNs using eProp
show presentation.pdf (public accessible), show talk video

Taking inspiration from machine learning libraries, where techniques such as parallel batch training minimise latency and maximise GPU occupancy, as well as from our previous research on efficiently simulating Spiking Neural Networks (SNNs) on GPUs for computational neuroscience, we have extended our GeNN SNN simulator to enable spike-based machine learning research on general-purpose hardware. We demonstrate that SNN classifiers implemented using GeNN and trained using the eProp learning rule can provide performance comparable to those trained using Back Propagation Through Time, and show that the latency and energy usage of our SNN classifiers are substantially lower than those of an LSTM running on the same GPU hardware.
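As a rough illustration of the idea behind eProp (not GeNN's implementation; the constants, the constant learning signal, and all variable names are assumptions for this sketch), each synapse keeps a local eligibility trace that is combined with a surrogate spike derivative, so the gradient can be accumulated online rather than by backpropagating through time:

```python
import numpy as np

# Toy single-LIF-neuron sketch of an eProp-style update (illustrative only).
alpha = 0.9    # membrane / trace decay per time step (assumed)
v_th = 1.0     # spike threshold (assumed)
gamma = 0.3    # surrogate-derivative scale (assumed)

rng = np.random.default_rng(0)
x = rng.random((100, 4))          # 100 time steps, 4 input channels
w = rng.normal(0.0, 0.5, 4)      # input weights
v = 0.0                           # membrane potential
trace = np.zeros(4)               # per-synapse eligibility trace
grad = np.zeros(4)                # accumulated weight gradient

for t in range(x.shape[0]):
    v = alpha * v + w @ x[t]      # leaky integration of weighted input
    if v > v_th:                  # spike and soft reset
        v -= v_th
    # surrogate (pseudo-)derivative of the spike w.r.t. the membrane potential
    psi = gamma * max(0.0, 1.0 - abs(v - v_th) / v_th)
    # low-pass filtered presynaptic activity drives the eligibility trace
    trace = alpha * trace + x[t]
    # eProp: gradient = learning signal (here a dummy error of 1.0) * psi * trace
    grad += 1.0 * psi * trace
```

Because every quantity in the loop is local in time, many such neurons can be updated in parallel across a batch, which is what makes the rule a good fit for GPU simulation.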

Link to chat channel

.ics

James Knight (University of Sussex)
CEST: 16:15‑16:25
EDT: 10:15‑10:25
CDT: 09:15‑09:25
MDT: 08:15‑08:25
PDT: 07:15‑07:25
UTC: 14:15‑14:25
(10+5 min)
Lightning talk: Evaluating parameter tuning and real-time closed loop simulation of large scale spiking networks before mapping to neuromorphic hardware: Comparing GeNN and NEST
show presentation.pdf (public accessible)

Link to chat channel

.ics

Felix Schmitt (University of Cologne)
CEST: 16:30‑16:55
EDT: 10:30‑10:55
CDT: 09:30‑09:55
MDT: 08:30‑08:55
PDT: 07:30‑07:55
UTC: 14:30‑14:55
(25+5 min)
Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons
(the presentation .pdf is accessible to meeting attendees from their 'personal page'), show talk video

Link to chat channel

.ics

Paul Haider (Universitaet Bern)
CEST: 17:00‑17:30
EDT: 11:00‑11:30
CDT: 10:00‑10:30
MDT: 09:00‑09:30
PDT: 08:00‑08:30
UTC: 15:00‑15:30
(30 min)
Break

Session chair for the next set of talks: Steve Furber

CEST: 17:30‑17:40
EDT: 11:30‑11:40
CDT: 10:30‑10:40
MDT: 09:30‑09:40
PDT: 08:30‑08:40
UTC: 15:30‑15:40
(10+5 min)
Lightning talk: Information Theory Limits of Neuromorphic Energy Efficiency
show presentation.pdf (public accessible), show talk video

Energy efficiency is one of the key motivations for neuromorphic computing. This raises the question of whether there are fundamental trade-offs between energy, computational capacity and reliability that can help guide future neuromorphic design. In this talk, I will present a theoretical approach to this problem based on information theory. I will use simple combinatorial calculations to show how we can derive a basic bound relating energy consumption and representational capacity, and show how it can be extended to account for stochastic noise.
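One simple combinatorial bound of the kind the abstract alludes to (the notation here is an assumption, not the speaker's): if a code word is $N$ binary neurons of which at most $k$ spike, and each spike costs $e_{\mathrm{spike}}$ so the energy budget is $E = k\,e_{\mathrm{spike}}$, then the representational capacity $\mathcal{C}$ in bits satisfies

```latex
% N neurons, at most k spiking, energy budget E = k * e_spike;
% the e inside the logarithm is Euler's number, from the
% standard bound on a partial binomial sum.
\[
  \mathcal{C}
  \;\le\; \log_2 \sum_{i=0}^{k} \binom{N}{i}
  \;\le\; k \log_2\!\left(\frac{eN}{k}\right)
  \;=\; \frac{E}{e_{\mathrm{spike}}}\,\log_2\!\left(\frac{eN}{k}\right).
\]
```

That is, capacity grows only linearly in the energy budget and logarithmically in network size, before any correction for stochastic noise.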

Link to chat channel

.ics

Pau Vilimelis Aceituno (ETH Zürich)
CEST: 17:45‑17:55
EDT: 11:45‑11:55
CDT: 10:45‑10:55
MDT: 09:45‑09:55
PDT: 08:45‑08:55
UTC: 15:45‑15:55
(10+5 min)
Lightning talk: Modeling and analyzing neuromorphic SNNs as discrete event systems

show talk video

Link to chat channel

.ics

Johannes Leugering (Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.)
CEST: 18:00‑18:25
EDT: 12:00‑12:25
CDT: 11:00‑11:25
MDT: 10:00‑10:25
PDT: 09:00‑09:25
UTC: 16:00‑16:25
(25+5 min)
A Framework to Enable Top-Down Co-Design of Neuromorphic Systems for Real-World Applications
(the presentation .pdf is accessible to meeting attendees from their 'personal page'), show talk video

Neuromorphic computing systems offer many attractive qualities for real-world applications, including extremely low power consumption and relatively fast computation. Many new and emerging devices and materials are being evaluated by the research community for neuromorphic deployment. However, as this is still an active research field, relatively few application-ready, full-scale neuromorphic systems are available to the community. This lack of complete neuromorphic solutions, including algorithms and software as well as full-scale neuromorphic implementations, prevents the deployment and evaluation of neuromorphic computing solutions for real-world applications. To address this issue, we have developed a complete neuromorphic framework, including algorithms for designing and training spiking neural networks (SNNs) for neuromorphic deployment, a common software interface to a variety of neuromorphic backends, and a field-programmable gate array (FPGA)-based neuromorphic implementation for studying the architectural requirements of neuromorphic systems. This workflow can be applied to different applications, including data classification, event detection, and real-time control, and the associated FPGA-based neuromorphic implementation can be deployed into the physical environment for real-world evaluation.

Link to chat channel

.ics

Catherine Schuman (University of Tennessee)
CEST: 18:30‑18:40
EDT: 12:30‑12:40
CDT: 11:30‑11:40
MDT: 10:30‑10:40
PDT: 09:30‑09:40
UTC: 16:30‑16:40
(10+5 min)
Lightning talk: Localization through Grid-based Encodings on Digital Elevation Models
show presentation.pdf (public accessible), show talk video

It has been demonstrated that grid cells encode physical locations using hexagonally spaced, periodic phase-space representations. Theories of how the brain decodes this phase-space representation have been developed based on neuroscience data; theories of how sensory information is encoded into this phase space are less certain. Here we show how a navigation-relevant input space, such as elevation trajectories, may be mapped into a phase-space coordinate system that can be decoded using previously developed theories. Just as animals can tell where they are in a local region based on where they have been, our encoding algorithm enables localization to a position in space by integrating measurements from a trajectory over a map. In this extended abstract, we walk through our approach with simulations using a digital elevation model.
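The core idea of a periodic phase-space code can be sketched in a few lines. This is a deliberately simplified 1-D analogue, not the authors' method (which uses hexagonal 2-D grid modules and elevation trajectories); the periods, search range, and function names are assumptions for illustration:

```python
import numpy as np

# Grid-cell-like encoding of a 1-D position: the phase of the position
# within several spatial periods ("grid modules"). With coprime periods,
# the combined phases identify the position uniquely up to their LCM.
periods = np.array([3.0, 5.0, 7.0])   # assumed module spacings; LCM = 105

def encode(pos):
    """Phase of `pos` within each grid module, in [0, 1)."""
    return (pos % periods) / periods

def decode(phases, max_pos=100.0, step=0.01):
    """Brute-force search for the position whose phases best match."""
    candidates = np.arange(0.0, max_pos, step)
    # circular distance between observed and candidate phases
    diff = np.abs(encode(candidates[:, None]) - phases)
    err = np.minimum(diff, 1.0 - diff).sum(axis=1)
    return candidates[np.argmin(err)]

position = decode(encode(42.0))   # recovers a position close to 42.0
```

Each module alone is ambiguous (42.0 looks like 45.0 modulo period 3), but the joint phase vector is unique over the whole range, which is what previously developed grid-cell decoding theories exploit.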

Link to chat channel

.ics

Felix Wang (Sandia National Laboratories)
CEST: 18:45‑19:10
EDT: 12:45‑13:10
CDT: 11:45‑12:10
MDT: 10:45‑11:10
PDT: 09:45‑10:10
UTC: 16:45‑17:10
(25+5 min)
Neural Mini-Apps as a Tool for Neuromorphic Computing Insight
show presentation.pdf (public accessible), show talk video

Assessing the merits of neuromorphic computing (NMC) is more nuanced than simply comparing singular, historical performance metrics of traditional approaches against those of NMC. Novel computational architectures require new algorithms to make use of their differing computational approaches, and neural algorithms themselves are emerging across an increasing range of application domains. We therefore propose following the example of high-performance computing, which uses context-capturing mini-apps and abstraction tools to explore the merits of computational architectures. Here we present Neural Mini-Apps in a neural circuit tool called Fugu as a means of NMC insight.

Link to chat channel

.ics

Craig Vineyard (Sandia National Laboratories)
CEST: 19:15‑19:25
EDT: 13:15‑13:25
CDT: 12:15‑12:25
MDT: 11:15‑11:25
PDT: 10:15‑10:25
UTC: 17:15‑17:25
(10+5 min)
Lightning talk: Benchmarking a Bio-inspired SNN on a Neuromorphic System

Neuromorphic devices present an opportunity to place densely connected networks on architectures that more closely resemble biological neural systems. To that end, this work introduces initial findings comparing the computational efficiency of a traditional and a neuromorphic platform when implementing a bio-inspired SNN. These findings contribute to the growing body of benchmark literature highlighting the performance benefits of using neuromorphic devices for bio-inspired neural network designs.

Link to chat channel

.ics

Luke Parker (Sandia National Labs)
CEST: 19:30‑19:45
EDT: 13:30‑13:45
CDT: 12:30‑12:45
MDT: 11:30‑11:45
PDT: 10:30‑10:45
UTC: 17:30‑17:45
(15 min)
Break

Session chair for the last set of talks: Dhireesha Kudithipudi

CEST: 19:45‑20:00
EDT: 13:45‑14:00
CDT: 12:45‑13:00
MDT: 11:45‑12:00
PDT: 10:45‑11:00
UTC: 17:45‑18:00
(15 min)
Best presentation awards

The EU-funded NEUROTECH project sponsors two prizes:

  • Best student presentation (250 Euro)
  • Best early researcher presentation (250 Euro) (“early researcher” = up to 10 years after finishing their PhD or similar)

A jury with members from the NEUROTECH project and one of the NICE organizers selects the winners.

.ics

CEST: 20:00‑20:25
EDT: 14:00‑14:25
CDT: 13:00‑13:25
MDT: 12:00‑12:25
PDT: 11:00‑11:25
UTC: 18:00‑18:25
(25 min)
Open mic / discussion

.ics

CEST: 20:25‑20:30
EDT: 14:25‑14:30
CDT: 13:25‑13:30
MDT: 12:25‑12:30
PDT: 11:25‑11:30
UTC: 18:25‑18:30
(5 min)
Farewell. See you at NICE in 2023, hopefully on-site at UTSA, USA

.ics

CEST: 20:30
EDT: 14:30
CDT: 13:30
MDT: 12:30
PDT: 11:30
UTC: 18:30
End of NICE 2022
Contact: bjoern.kindler@kip.uni-heidelberg.de