PRELIMINARY list of talks and tutorials
Timing of the days:
- Tuesday, 25 March 2025: start at 8:30h with registration. First talk at 9:00h. End of the day (includes the poster dinner) at 20:15h
- Wednesday, 26 March 2025: 9:00 - 20:30h (includes the conference dinner)
- Thursday, 27 March 2025: 9:00h - 18:00h (no dinner provided)
- Friday, 28 March 2025 (Tutorial day): 9:00h - 16:15h (no dinner provided)
- Please note: this list is NOT final: not all speakers have confirmed yet, the late-breaking news talks have not yet been selected, etc. This list is for early information about what the conference content will roughly look like.
- The list below is sorted as follows: invited talks, selected talks, tutorials
Keynote: Toward a formal semantics for neuromorphic computing theory. What does it mean when a brain-like system 'computes'? This is the question of the semantics of neuromorphic computing. In classical digital computing, several mutually connected approaches to formalizing the 'meaning' of a computational process have been worked out to textbook level. These formal frameworks allow one to characterize, analyse and prove, for instance, whether a computer program actually does what the user meant it to achieve; whether two different programs actually compute 'the same' task; which tasks can be 'programmed' at all; or what hardware requirements must be met to implement a given program. In brief, semantic theory allows one to analyse how abstract models of computational processes interface with reality - both at the bottom level of the physical reality of hardware, and at the top level of user tasks. Neuromorphic computing theory can learn a lot about these things from looking at the digital world, but also needs to find its very own view on semantics. | Herbert Jäger (RUG) |
Invited talk (on neuromorphic intelligence: mostly hardware and algorithmic aspects, from low-latency event-based computing to on-device learning and in-memory computing) | Charlotte Frenkel (Delft University of Technology) |
Invited talk: 28nm Embedded RRAM for Consumer and Industrial Products: Enabling, Design, and Reliability. After a long period of research and development, Infineon Technologies has recently started to sell RRAM-based products. For these new products we follow the maxim "RRAM is the new (embedded) Flash". In the presentation we will discuss design aspects of an embedded RRAM macro in a 28nm advanced logic foundry process employed for consumer and industrial products. We present high-statistics reliability data of the embedded RRAM, from both test devices and products, to demonstrate the maturity and usability of this embedded emerging (digital) memory. We compare RRAM failure modes with those of embedded flash from previous generations and discuss countermeasures. Overall, we show that 28nm and 22nm embedded RRAM are adequate, and now finally available, successors to embedded flash from previous generations: today, RRAM is not an "emerging memory" anymore; it has actually "emerged". Biography: Jan Otterstedt received the Dr.-Ing. degree in electrical engineering from the University of Hannover, Germany, in 1997. He then joined the Semiconductor Group of Siemens, which later became Infineon Technologies AG. For more than 15 years he has been responsible for concept engineering for embedded non-volatile memories, mostly covering consumer and industrial applications. He is now a Senior Principal. Since 2006, Jan has lectured on "Testing Digital Circuits" at the Technical University of Munich (TUM). | Jan Otterstedt (Infineon Technologies AG) |
Invited talk: A new direction for continual learning: ask not just where to go, also how to get there Continually learning from a stream of non-stationary data is challenging for deep neural networks. When these networks are trained on something new, they tend to quickly forget what was learned before. In recent years, considerable progress has been made towards overcoming such "catastrophic forgetting", largely thanks to methods such as replay or regularization that add extra terms to the loss function to approximate the joint loss over all tasks so far. However, I will show that even in the best-case scenario (i.e., with a perfectly approximated joint loss), these current methods still suffer from temporary but substantial forgetting when starting to learn something new (the stability gap) and fail to re-organize the network appropriately when relevant new information comes in (lack of knowledge restructuring). I therefore argue that continual learning should focus not only on the optimization objective (“where to go”), but also on the optimization trajectory (“how to get there”). | Gido van de Ven (KU Leuven) |
Invited talk: Memristive valence change memory cross-bar arrays for neuro-inspired data processing. Memristive cross-bar arrays are highly promising for overcoming the limits of von Neumann architectures with respect to the latency and power consumption of training and inference in deep neural networks. Moreover, the rich dynamics of memristive devices offer the possibility to capture spatio-temporal information in brain-inspired information processing. We report on the use of cross-bar arrays of valence change memory (VCM) cells co-integrated with CMOS transistors. We examined 1T1R structures, where one transistor (1T) is paired with one resistive memory cell (1R), focusing on three different transistor width-to-length (W/L) ratios. From this we obtained valuable guidance for designing devices that meet required resistance windows and ensure compatibility with various application needs. To validate the practical applicability of 1T1R arrays, functional testing was conducted for vector-matrix multiplication, a key operation during the training and inference of deep neural networks. The characterization of different types of transistors revealed that the interference between adjacent cells was negligible, confirming the feasibility of using such arrays for high-density, low-power computing. VCM cells show a strong non-linearity in their switching kinetics, which is induced by a temperature increase. In this respect, thermal crosstalk can be observed in highly integrated passive crossbar arrays, which may impact the resistance state of adjacent devices. Additionally, due to its thermal capacitance, a VCM cell can remain thermally active after a pulse and thus influence the temperature conditions for a possible subsequent pulse. We have shown that spatio-temporal thermal correlations can be observed for device spacings as small as a few hundred nanometers and pulse trains with pauses on the order of the thermal time constant of the memristive device. Based on this effect, novel learning rules can potentially be derived for future neuromorphic computing applications. These findings are likely not limited to crossbar arrays with single VCM devices and can be applied to other temperature-sensitive memristive devices as well, in particular also in 1T1R structures. Authors: R. Dittmann, S. Wiefels, S. Hoffmann-Eifert, V. Rana, S. Menzel, Peter Grünberg Institute, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany | Regina Dittmann (Forschungszentrum Jülich GmbH) |
Invited talk: Neuromorphic Principles and Synaptic Plasticity for Self-Attention Hardware The causal decoder transformer is the workhorse of state-of-the-art LLMs and sequence modeling. However, causal transformers are inefficient on conventional hardware, mainly due to the self-attention operation. This talk explains how synaptic plasticity can assume the role of self-attention and enable more efficient inference in transformer-like models. | Emre Neftci (Forschungszentrum Juelich) |
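As background for this connection (our own illustration, not the speaker's method): unnormalized causal linear attention can be computed recurrently by accumulating outer products of keys and values in a fast-weight matrix, which is exactly a Hebbian-style synaptic update. A minimal NumPy sketch:

```python
import numpy as np

def causal_linear_attention(Q, K, V):
    """Causal linear attention as a recurrent fast-weight update: W
    accumulates Hebbian-style outer products of keys and values, turning
    self-attention into a per-step plasticity rule instead of a T x T
    attention map (unnormalized, illustrative sketch)."""
    T, d_v = V.shape
    W = np.zeros((K.shape[1], d_v))   # plastic "synaptic" weights
    out = np.zeros((T, d_v))
    for t in range(T):
        W += np.outer(K[t], V[t])     # plasticity: outer-product update
        out[t] = Q[t] @ W             # read-out through the plastic weights
    return out
```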
Invited talk: Robust Computation with Neuronal Heterogeneity | Christian Tetzlaff (University Medical Center Göttingen) |
Invited talk: The Spiking Neural Processor: a mixed-signal MCU for power-constrained TinyML applications. Ambient intelligence imposes strict requirements on the area and power dissipation of edge devices and sensors. Innatera's Spiking Neural Processor (SNP) is a microcontroller featuring heterogeneous accelerators, including mixed-signal SNN, DSP, and CNN accelerators alongside an efficient RISC core, designed to support a wide array of complex TinyML / Edge AI workloads. The SNP is accompanied by the Talamo software tool, which enables building, gradient-based optimisation, and deployment of entire sensor processing pipelines and applications onto the chip. This session will explore the architecture of the SNP, the advantages of SNNs for temporal and event-based processing, and practical insights into building and deploying SNN-based applications on the platform. | Petrut Bogdan (Innatera) |
Invited talk: What can AI learn from the brain? Past, Present and Future. There have been incredible advances in AI systems over the past few years, with deep-learning-trained AI systems rivalling and even outperforming humans on many challenging tasks, including image and video analysis, speech processing and text generation. Such systems avoid temporal constraints imposed by the brain's neural hardware, which include the slow axonal conduction velocities of real neurons and the relatively slow integration of individual neurons. As a result, such systems can perform tasks much faster than humans. However, many of the brain's key computational tricks are still missing from state-of-the-art AI. In this talk, Simon Thorpe will discuss a range of features that are missing from current systems. He will argue that using ultra-sparse spike-based coding schemes is critical for explaining why the brain only needs 20 watts of power, orders of magnitude less than current neuromorphic solutions. He will also propose that the brain uses efficient learning mechanisms that allow neurons to become selective to repeating activity patterns in just a few presentations, much more efficiently than the back-propagation learning schemes used in most systems. Such features could allow the development of new types of brain-inspired AI systems that could transform the state of the art. | Simon Thorpe (CNRS) |
A Diagonal Structured State Space Model on Loihi 2 for Efficient Streaming Sequence Processing "Svea Marie Meyer, Philipp Weidel, Philipp Plank, Leobardo Campos-Macias, Sumit Bam Shrestha, Philipp Stratmann, Jonathan Timcheck and Mathis Richter" | Philipp Weidel (Intel Labs) |
A LIF-based Legendre Memory Unit as neuromorphic State Space Model benchmarked on a second-long spatio-temporal task "Benedetto Leto, Gianvito Urgese, Enrico Macii and Vittorio Fra" Chasing energy efficiency through biologically inspired computing has produced significant interest in neuromorphic computing as a new approach to overcome some of the limitations of conventional Machine Learning (ML) solutions. In this context, State Space Models (SSMs) are arising as a powerful tool to model the temporal evolution of a system through differential equations. Their established ability to process time-dependent information and their compact mathematical framework make them attractive for integration with neuromorphic principles, leveraging the time-driven basis on which the latter are inherently based. To further explore such integration and cooperation, we investigated the adoption of a neuromorphic SSM, based on a redesign of the Legendre Memory Unit (LMU) through populations of Leaky Integrate-and-Fire (LIF) neurons, for a spatio-temporal task. Our LIF-based LMU (L2MU) turned out to outperform recurrent Spiking Neural Networks (SNNs) on the event-based Braille letter reading task, providing additional hints on the feasibility of SSMs as an upcoming alternative in the neuromorphic domain. | Vittorio Fra (Politecnico di Torino) |
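For reference (standard background, not the authors' code): the underlying LMU keeps a Legendre-polynomial approximation of a sliding input window via a fixed linear system; the L2MU replaces this continuous state with populations of LIF neurons. A sketch of the textbook LMU matrices:

```python
import numpy as np

def lmu_state_space(order, theta):
    """Continuous-time LMU system x' = A x + B u (Voelker et al., 2019):
    the state x holds Legendre coefficients that approximate a sliding
    window of length theta over the scalar input u."""
    A = np.empty((order, order))
    for i in range(order):
        for j in range(order):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    i = np.arange(order)
    B = ((2 * i + 1) * (-1.0) ** i).reshape(-1, 1)
    return A / theta, B / theta
```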
A Milling Swarm of Ground Robots using Spiking Neural Networks "Kevin Zhu, Shay Snyder, Ricardo Vega, Maryam Parsa and Cameron Nowzari" | Kevin Zhu (George Mason University) |
A Truly Sparse and General Implementation of Gradient-Based Synaptic Plasticity Jamie Lohoff, Anil Kaya, Florian Assmuth and Emre Neftci Online synaptic plasticity rules derived from gradient descent achieve high accuracy on a wide range of practical tasks. However, their software implementation often requires tediously hand-derived gradients or the use of gradient backpropagation, which sacrifices the online capability of the rules. In this work, we present a custom automatic differentiation (AD) pipeline for sparse and online implementation of gradient-based synaptic plasticity rules that generalizes to arbitrary neuron models. Our work combines the programming ease of backpropagation-type methods with the memory efficiency of forward AD. To achieve this, we exploit the advantageous compute and memory scaling of online synaptic plasticity by providing an inherently sparse implementation of AD where expensive tensor contractions are replaced with simple element-wise multiplications if the tensors are diagonal. Gradient-based synaptic plasticity rules such as eligibility propagation (e-prop) have exactly this property and thus profit immensely from this feature. We demonstrate the alignment of our gradients with gradient backpropagation on a synthetic task where e-prop gradients are exact, as well as on audio speech classification benchmarks. We also demonstrate how memory utilization scales with network size without dependence on the sequence length, as expected from forward AD methods. | Jamie Lohoff (Forschungszentrum Jülich) |
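To make the diagonality argument concrete (a toy sketch under our own simplifications, not the authors' pipeline): for a leaky membrane, the state-to-state Jacobian is diagonal, so forward-mode gradient propagation reduces to element-wise updates of eligibility traces:

```python
import numpy as np

def input_weight_traces(x, alpha=0.9):
    """Forward-mode gradient of v[t] = alpha * v[t-1] + W @ x[t] w.r.t. W.
    Since dv[t]/dv[t-1] = diag(alpha), the trace update e = alpha*e + x[t]
    is element-wise and no dense Jacobian is ever materialized.
    x: (T, n_in); returns one eligibility trace per input dimension."""
    e = np.zeros(x.shape[1])
    for t in range(x.shape[0]):
        e = alpha * e + x[t]   # element-wise multiply replaces a contraction
    return e
```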
Biologically-Inspired Representations for Adaptive Control with Spatial Semantic Pointers "Graeme Damberger, Kathryn Simone, Chandan Datta, Ram Eshwar Kaundinya, Juan Escareno and Chris Eliasmith" We explore and evaluate biologically-inspired representations for an adaptive controller using Spatial Semantic Pointers (SSPs). Specifically, we show that place-cell-like SSP representations outperform past methods. Using this representation, we efficiently learn the dynamics of a given plant over its state space. We implement this adaptive controller in a spiking neural network along with a classical sliding mode controller and prove the stability of the overall system with non-linear plant dynamics. We then simulate the controller on a 3-link arm and demonstrate that the proposed adaptive controller gives a simpler and more systematic way of designing the neural representation of the state space. Compared to previous methods, we show an increase of 1.23-1.25x in tracking accuracy. | Graeme Damberger (University of Waterloo) |
Deep activity propagation via weight initialization in spiking neural networks Aurora Micheli, Olaf Booij, Jan van Gemert and Nergis Tömen Spiking Neural Networks (SNNs) and neuromorphic computing offer bio-inspired advantages such as sparsity and ultra-low power consumption, providing a promising alternative to conventional artificial neural networks (ANNs). However, training deep SNNs from scratch remains a challenge, as SNNs process and transmit information by quantizing the real-valued membrane potentials into binary spikes. This can lead to information loss and vanishing spikes in deeper layers, impeding effective training. While weight initialization is known to be critical for training deep neural networks, what constitutes an effective initial state for a deep SNN is not well-understood. Existing weight initialization methods designed for ANNs are often applied to SNNs without accounting for their distinct computational properties. In this work we derive an optimal weight initialization method specifically tailored for SNNs, taking into account the quantization operation. We show theoretically that, unlike standard approaches, our method enables the propagation of activity in deep SNNs without loss of spikes. We demonstrate this behavior in numerical simulations of SNNs with up to 100 layers across multiple time steps. We present an in-depth analysis of the numerical conditions, regarding layer width and neuron hyperparameters, which are necessary to accurately apply our theoretical findings. Furthermore, we present extensive comparisons of our method with previously established baseline initializations for deep ANNs and SNNs. Our experiments on four different datasets demonstrate higher accuracy and faster convergence when using our proposed weight initialization scheme. Finally, we show that our method is robust against variations in several network and neuron hyperparameters. | Aurora Micheli (TU Delft) |
Demonstrating the Advantages of Analog Wafer-Scale Neuromorphic Hardware "Hartmut Schmidt, Andreas Grübl, José Montes, Eric Müller, Sebastian Schmitt and Johannes Schemmel" | Hartmut Schmidt (Kirchhoff-Institute for Physics, Heidelberg University) |
Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation "Sirine Arfa, Bernhard Vogginger, Chen Liu, Johannes Partzsch, Mark Schöne and Christian Mayr" Spiking Neural Networks (SNNs) are highly energy-efficient during inference, making them particularly suitable for deployment on neuromorphic hardware. Their ability to process event-driven inputs, such as data from dynamic vision sensors (DVS), further enhances their applicability to edge computing tasks. However, the resource constraints of edge hardware necessitate techniques like weight quantization, which reduce the memory footprint of SNNs while preserving accuracy. Despite its importance, existing quantization methods typically focus on quantizing synaptic weights without taking into account other critical parameters, such as the scaling of neuron firing thresholds. To address this limitation, we present the first benchmark for the DVS gesture recognition task using SNNs optimized for the many-core neuromorphic chip SpiNNaker2. Our study evaluates two quantization pipelines for fixed-point computations. The first approach employs post-training quantization (PTQ) with percentile-based threshold scaling, while the second uses quantization-aware training (QAT) with adaptive threshold scaling. Both methods achieve accurate 8-bit on-chip inference, closely approximating 32-bit floating-point performance. Additionally, our baseline SNNs perform competitively against previously reported results without specialized techniques. These models are deployed on SpiNNaker2 using the Neuromorphic Intermediate Representation (NIR). Ultimately, we achieve 94.13% classification accuracy on-chip, demonstrating SpiNNaker2's potential for efficient, low-energy neuromorphic computing. | Sirine Arfa (Technical University of Dresden - TU Dresden) |
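As a rough illustration of the first pipeline (all names and choices below are our own assumptions, not the authors' code): in percentile-based threshold scaling, the weight scale comes from a percentile of the weight distribution rather than the maximum, and the firing threshold is rescaled by the same factor so the fixed-point model tracks the floating-point one:

```python
import numpy as np

def ptq_with_threshold_scaling(w, v_thresh, percentile=99.9):
    """Quantize weights to int8 with a percentile-based scale (robust to
    outliers) and rescale the firing threshold by the same factor, keeping
    weighted input sums and threshold in the same fixed-point units."""
    scale = np.percentile(np.abs(w), percentile) / 127.0
    w_q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    thresh_q = int(round(v_thresh / scale))
    return w_q, thresh_q, scale
```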
Event-based backpropagation on the neuromorphic platform SpiNNaker2 "Gabriel Béna, Timo Wunderlich, Mahmoud Akl, Bernhard Vogginger, Christian Mayr and Hector Andres Gonzalez" Neuromorphic computing aims to replicate the brain's capabilities for energy-efficient and parallel information processing, promising a solution to the increasing demand for faster and more efficient computational systems. Efficient training of neural networks on neuromorphic hardware requires the development of training algorithms that retain the sparsity of spike-based communication during training. Here, we report on the first implementation of event-based backpropagation on the SpiNNaker2 neuromorphic hardware platform. We use EventProp, an algorithm for event-based backpropagation in spiking neural networks (SNNs), to compute exact gradients using sparse communication of error signals between neurons. Our implementation computes multi-layer networks of leaky integrate-and-fire neurons using discretized versions of the differential equations and their adjoints, and uses event packets to transmit spikes and error signals between network layers. We demonstrate a proof-of-concept of batch-parallelized, on-chip training of SNNs using the Yin Yang dataset, and provide an off-chip implementation for efficient prototyping, hyper-parameter search, and hybrid training methods. | Gabriel Béna (Imperial College London) |
Eventprop training for efficient neuromorphic applications "Thomas Shoesmith, James Knight, Balazs Meszaros, Jonathan Timcheck and Thomas Nowotny" Neuromorphic computing can reduce the energy requirements of neural networks and holds the promise to 'repatriate' AI workloads back from the cloud to the edge. However, training neural networks on neuromorphic hardware has remained elusive. Here, we instead present a pipeline for training spiking neural networks on GPUs, using the efficient event-driven Eventprop algorithm implemented in mlGeNN, and deploying them on Intel's Loihi 2 neuromorphic chip. Our benchmarking on keyword spotting tasks indicates that there is almost no loss in accuracy between GPU and Loihi 2 implementations and that classifying a sample on Loihi 2 is up to 10× faster and uses 200× less energy than on an NVIDIA Jetson Orin Nano. | Thomas Shoesmith (University of Sussex) |
Evolution at the Edge: Real-Time Evolution for Neuromorphic Engine Control "Karan Patel, Ethan Maness, Tyler Nitzsche, Emma Brown, Brett Witherspoon, Aaron Young, Bryan Maldonado, Brian Kaul and Catherine Schuman" Neuromorphic computing systems are attractive for real-time control at the edge because of their low-power operation, real-time processing capabilities, and their potential ability to do online learning. In this work, we describe an approach for performing real-time evolution of spiking neural networks for neuromorphic systems at the edge, called Neuromorphic Optimization using Dynamic Evolutionary Systems (NODES). We apply this approach to real-time combustion engine control and develop an engine-specific hardware platform for NODES called FireBox. We demonstrate how the real-time evolution approach works in simulation, and the performance of networks trained in simulation on the physical engine. | Karan Patel (University of Tennessee Knoxville) |
Exploring Spike Encoder Designs for Near-Sensor Edge Computing Jingang Jin, Zhenhang Zhang and Qinru Qiu | Jingang Jin (Syracuse University) |
FeNN: A RISC-V vector processor for Spiking Neural Network acceleration Zainab Aizaz, James Knight and Thomas Nowotny Spiking Neural Networks (SNNs) have the potential to drastically reduce the energy requirements of AI systems. However, mainstream accelerators like GPUs and TPUs are designed for the high arithmetic intensity of standard ANNs and so are not well suited to SNN simulation. FPGAs, by contrast, are well suited to applications with low arithmetic intensity, as they have high off-chip memory bandwidth and large amounts of on-chip memory. In this talk, James Knight and Zainab Aizaz will present a novel RISC-V-based soft vector processor (FeNN), tailored to simulating SNNs on FPGAs. Unlike most dedicated neuromorphic hardware, FeNN is fully programmable and designed to be integrated with applications running on standard computers from the edge to the cloud. By using stochastic rounding and saturation, FeNN achieves high numerical precision with low hardware utilisation, and a single FeNN core can simulate an SNN classifier faster than both an embedded GPU and the Loihi neuromorphic system. | Zainab Aizaz (University of Sussex), James Knight (University of Sussex) |
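For readers unfamiliar with stochastic rounding (a generic sketch of the technique, not FeNN's hardware circuit): a value is rounded up with probability equal to its fractional remainder, so the rounding error is zero in expectation and small updates are not systematically lost:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round_fixed_point(x, frac_bits=8):
    """Round x onto a fixed-point grid with `frac_bits` fractional bits,
    rounding up with probability equal to the fractional remainder so that
    E[result] == x (within the representable range)."""
    scaled = np.asarray(x, dtype=float) * (1 << frac_bits)
    floor = np.floor(scaled)
    round_up = rng.random(scaled.shape) < (scaled - floor)
    return (floor + round_up) / (1 << frac_bits)
```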
Hardware architecture and routing-aware training for optimal memory usage: a case study Jimmy Weber, Theo Ballet and Melika Payvand Efficient deployment of neural networks on resource-constrained hardware demands optimal use of on-chip memory. In event-based processors, this is particularly critical for routing architectures, where substantial memory is dedicated to managing network connectivity. While prior work has focused on optimizing event routing during hardware design, optimizing memory utilization for routing during network training remains underexplored. Key challenges include: (i) integrating routing into the loss function, which often introduces non-differentiability, and (ii) the computational expense of evaluating network mappability to hardware. We propose a hardware-algorithm co-design approach to train routing-aware neural networks. To address challenge (i), we extend the DeepR training algorithm, leveraging dynamic pruning and random re-assignment to optimize memory use. For challenge (ii), we introduce a proxy-based approximation of the mapping function to incorporate placement and routing constraints efficiently. We demonstrate our approach by optimizing a network for the Spiking Heidelberg Digits (SHD) dataset using a small-world connectivity-based hardware architecture as a case study. The resulting network, trained with our routing-aware methodology, is fully mappable to the hardware, achieving 5% more accuracy using the same number of parameters, and iso-accuracy with 10x less memory usage, compared to non-routing-aware training methods. This work highlights the critical role of co-optimizing algorithms and hardware to enable efficient and scalable solutions for constrained environments. | Jimmy Weber (Institute of Neuroinformatics, University of Zurich and ETH Zurich) |
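The DeepR mechanism that the authors extend can be sketched as follows (a simplified toy version; the routing-aware loss terms and proxy mapping function are not shown): each potential synapse carries a signed parameter, synapses whose parameter crosses zero are pruned, and the same number of dormant synapses is re-activated at random, so the number of stored connections, and hence the routing memory, stays constant:

```python
import numpy as np

rng = np.random.default_rng(0)

def deepr_rewiring_step(theta, grad, lr=1e-3, noise_std=1e-4):
    """One simplified DeepR step: noisy gradient descent on active synapses
    (theta > 0); synapses driven to or below zero are pruned and replaced
    by randomly chosen dormant ones, keeping the connection count fixed."""
    was_active = theta > 0
    theta = theta - was_active * (lr * grad + noise_std * rng.standard_normal(theta.shape))
    n_pruned = int(np.sum(was_active & (theta <= 0)))
    dormant = np.flatnonzero(~was_active)
    if n_pruned and dormant.size:
        revive = rng.choice(dormant, size=min(n_pruned, dormant.size), replace=False)
        theta[revive] = 1e-6   # re-activate with a tiny positive weight
    return theta
```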
Heterogeneous Population Encoding for Multi-joint Regression using sEMG signals Farah Baracat, Luca Manneschi and Elisa Donati Proportional and simultaneous decoding of individual fingers is essential for human-machine interface (HMI) applications, such as myoelectric prostheses, which restore motor function by decoding motor intentions from electromyography (EMG) signals. These closed-loop systems require high real-time decoding accuracy and low-power operation, making spiking neural networks (SNNs) on neuromorphic hardware a promising solution. To fully leverage SNNs, continuous EMG signals must be encoded into the spiking domain while preserving key information. Most existing methods use a single-neuron approach, where each input channel is fed into a single neuron. However, we hypothesize that this limits representation richness and requires per-subject tuning. This talk explores how variability in neuronal populations affects decoding performance, using it as a proxy for information content. We examine how membrane time constants, thresholds, and population size influence finger kinematics decoding. Our results demonstrate that encoding EMG with a heterogeneous neuron population enhances decoding performance and generalizes across subjects without additional tuning or training of the encoding layer parameters. | Farah Baracat (Institute of Neuroinformatics, University of Zurich and ETH Zurich) |
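A minimal sketch of the encoding idea (toy parameters of our own choosing, not the authors' setup): one continuous EMG channel drives a LIF population whose time constants and thresholds are drawn from distributions, so different neurons emphasize different temporal features of the same signal:

```python
import numpy as np

rng = np.random.default_rng(1)

def heterogeneous_lif_encode(signal, n_neurons=32, dt=1e-3):
    """Encode one continuous channel with a heterogeneous LIF population.
    signal: 1-D array; returns a boolean spike raster of shape (T, n_neurons)."""
    tau = rng.uniform(5e-3, 50e-3, n_neurons)   # per-neuron time constants
    thr = rng.uniform(0.5, 1.5, n_neurons)      # per-neuron thresholds
    v = np.zeros(n_neurons)
    spikes = np.zeros((len(signal), n_neurons), dtype=bool)
    for t, u in enumerate(signal):
        v += (dt / tau) * (u - v)               # leaky integration of input
        spikes[t] = v >= thr
        v[spikes[t]] = 0.0                      # reset after a spike
    return spikes
```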
Integrating programmable plasticity in experiment descriptions for analog neuromorphic hardware Philipp Spilger, Eric Müller and Johannes Schemmel The study of plasticity in spiking neural networks is an active area of research. However, simulations that involve complex plasticity rules, dense connectivity/high synapse counts, complex neuron morphologies, or extended simulation times can be computationally demanding. The BrainScaleS-2 neuromorphic architecture has been designed to address this challenge by supporting “hybrid” plasticity, which combines the concepts of programmability and inherently parallel emulation. In particular, observables that are expensive in numerical simulation, such as per-synapse correlation measurements, are implemented directly in the synapse circuits. The evaluation of the observables, the decision to perform an update, and the magnitude of an update, are all conducted in a conventional program that runs simultaneously with the analog neural network. Consequently, these systems can offer a scalable and flexible solution in such cases. While previous work on the platform has already reported on the use of different kinds of plasticity, the descriptions for the spiking neural network experiment topology and protocol, and the plasticity algorithm have not been connected. In this work, we introduce an integrated framework for describing spiking neural network experiments and plasticity rules in a unified high-level experiment description language for the BrainScaleS-2 platform and demonstrate its use. | Philipp Spilger (Kirchhoff Institute for Physics, Heidelberg University) |
OctopuScheduler: On-Chip Multi-Core Scheduling of Deep Neural Networks on SpiNNaker2 "Tim Langer, Matthias Jobst, Chen Liu, Florian Kelber, Bernhard Vogginger and Christian Mayr" We present OctopuScheduler, the first generalized on-chip scheduling framework for the accelerated inference of non-spiking deep neural networks (DNNs) on the neuromorphic hardware platform SpiNNaker2. The goal of OctopuScheduler is to flexibly support a wide variety of state-of-the-art DNN architectures for different domains, moving from application-specific custom implementations to a generally applicable framework and simplifying access to the SpiNNaker2 platform. The on-chip scheduling approach minimizes communication latencies with the host by completely controlling the execution of layers for convolutional neural networks (CNNs) and transformer architectures within a single chip. As a scheduling framework for classical deep neural networks, OctopuScheduler has the potential to unlock experimentation with large-scale hybrid deep and spiking neural network (SNN) architectures, event-based computing, and neuromorphic modifications of classical state-of-the-art DNN architectures on the neuromorphic multi-processor system-on-chip (MPSoC) SpiNNaker2. | Tim Langer (TU Dresden) |
Realtime-Capable Hybrid Spiking Neural Networks for Neural Decoding of Cortical Activity "Jann Krausse, Alexandru Vasilache, Klaus Knobloch and Juergen Becker" Intra-cortical brain-machine interfaces (iBMIs) present a promising solution for restoring and decoding brain activity lost due to injury. However, patients with such neuroprosthetics suffer from the permanent skull opening resulting from the devices' bulky wiring. This drives the development of wireless iBMIs, which in turn demand low power consumption and a small device footprint. Most recently, spiking neural networks (SNNs) have been researched as potential candidates for low-power neural decoding. In this work, we present the next step in utilizing SNNs for such tasks, building on the recently published results of the 2024 Grand Challenge on Neural Decoding for Motor Control of Non-Human Primates. We optimize our model architecture to exceed the existing state of the art on the Primate Reaching dataset while maintaining similar resource demands through the use of various compression techniques. We further focus on the implementation of a realtime-capable version of the model and discuss the implications of this architecture. With this, we move one step closer to latency-free decoding of cortical spike trains using neuromorphic technology, which would ultimately improve the lives of millions of paralyzed patients. | Jann Krausse (Infineon Technologies) |
Retina-Inspired Object Motion Segmentation for Event-Cameras "Victoria Clerico, Shay Snyder, Arya Lohia, Md Abdullah-Al Kaiser, Gregory Schwartz, Akhilesh Jaiswal and Maryam Parsa" Event-cameras have emerged as a revolutionary technology with a high temporal resolution that far surpasses standard active pixel cameras. This technology draws biological inspiration from photoreceptors and the initial retinal synapse. This research showcases the potential of additional retinal functionalities to extract visual features. We provide a domain-agnostic and efficient algorithm for ego-motion compensation based on Object Motion Sensitivity (OMS), one of the multiple features computed within the mammalian retina. We develop a method, based on experimental neuroscience, that translates OMS' biological circuitry into a low-overhead algorithm that suppresses camera motion, bypassing the need for deep networks and learning. Our system processes event data from dynamic scenes to perform pixel-wise object motion segmentation on real and synthetic datasets. This paper introduces a bio-inspired computer vision method that dramatically reduces the number of parameters, by three to six orders of magnitude, compared to previous approaches. Our work paves the way for robust, high-speed, and low-bandwidth decision-making for in-sensor computations. | Victoria Clerico (IBM Research Zürich) |
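The OMS principle can be caricatured in a few lines (a toy frame-based sketch under our own assumptions; the paper operates on event streams with a retina-derived circuit): activity that is coherent between a small center and a wide surround is attributed to ego-motion and suppressed, while locally distinct activity is kept as object motion:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def oms_toy_mask(event_counts, center=3, surround=15, ratio=1.5):
    """Keep pixels whose local (center) event activity clearly exceeds the
    wide-surround average; globally coherent activity, e.g. from camera
    ego-motion, raises the surround everywhere and is suppressed."""
    f = event_counts.astype(float)
    c = uniform_filter(f, size=center)      # center activity estimate
    s = uniform_filter(f, size=surround)    # surround activity estimate
    return c > ratio * (s + 1e-9)           # boolean object-motion mask
```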
Short-reach Optical Communication: A Real-world Task for Neuromorphic Hardware "Elias Arnold, Eike-Manuel Edelmann, Alexander von Bank, Eric Müller, Laurent Schmalen and Johannes Schemmel" SNNs emulated on dedicated neuromorphic accelerators promise to offer energy-efficient signal processing. However, the neuromorphic advantage over traditional algorithms still remains to be demonstrated in real-world applications. In this talk we outline an intensity modulation / direct detection (IM/DD) task that is relevant to high-speed optical communication systems used in data centers. Compared to other machine-learning-inspired benchmarks, the task offers several advantages. First, the dataset is inherently time-dependent, i.e., there is a time dimension that can be natively mapped to the dynamic evolution of SNNs. Second, small-scale SNNs can achieve the target accuracy required by technical communication standards. Third, due to the small scale and the defined target accuracy, the task facilitates optimization for real-world aspects such as energy efficiency, resource requirements, and system complexity. | Elias Arnold (Kirchhoff Institute for Physics, Heidelberg University, Germany) |
State-Space Model Inspired Multiple-Input Multiple-Output Spiking Neurons Sanja Karilanova, Subhrakanti Dey and Ayça Özçelikkale In spiking neural networks (SNNs), the main unit of information processing is the neuron with an internal state. The internal state generates an output spike based on its component associated with the membrane potential. This spike is then communicated to other neurons in the network. Here, we propose a general multiple-input multiple-output (MIMO) spiking neuron model that goes beyond the traditional single-input single-output (SISO) model in the SNN literature. Our proposed framework is based on interpreting neurons as state-space models (SSMs) with linear state evolution and non-linear spiking activation functions. We illustrate the trade-offs among various parameters of the proposed SSM-inspired neuron model, such as the number of hidden neuron states and the number of input and output channels, including single-input multiple-output (SIMO) and multiple-input single-output (MISO) models. We show that for SNNs with a small number of neurons with large internal state spaces, significant performance gains may be obtained by increasing the number of output channels of a neuron. In particular, a network of spiking neurons with multiple output channels may achieve the same level of accuracy as a baseline with continuous-valued communication on the same reference network architecture. | Sanja Karilanova (Uppsala University) |
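In symbols (our own notation, following the abstract's description of linear state evolution with a non-linear spiking activation), a MIMO spiking neuron with hidden state x, p input channels u and q output spike channels s can be written as:

```latex
\begin{aligned}
  \mathbf{x}[t] &= A\,\mathbf{x}[t-1] + B\,\mathbf{u}[t], && \mathbf{u}[t] \in \mathbb{R}^{p},\\
  \mathbf{s}[t] &= \Theta\!\bigl(C\,\mathbf{x}[t] - \boldsymbol{\vartheta}\bigr), && \mathbf{s}[t] \in \{0,1\}^{q},
\end{aligned}
```

where Theta is the elementwise Heaviside step; the classical SISO LIF neuron is recovered for p = q = 1 with a scalar state.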
The Spatial Effect of the Pinna for Neuromorphic Speech Denoising Ranganath Selagamsetty, Joshua San Miguel and Mikko Lipasti Humans are capable of complex communication in the form of speech, which fundamentally relies on the ability to parse and distinguish sounds in noisy environments. Advances in computing hardware have made artificial neural networks ideal for imitating human speech recognition. Such models have achieved near human-like performance in isolating speech from noisy audio, at the cost of enormous model sizes and power consumption orders of magnitude greater than the brain's. Spiking neural networks have been proposed as an alternative, attaining model efficiency by prioritizing biological fidelity. Inspired by the biological pinna, our model encodes noisy speech input with spatial cues to aid in speech denoising. We show that denoising performance improves when a spiking neural network consumes audio encoded with spatial cues from pinna transforms. Against comparable models, our fixed networks achieve up to +0.15 dB improvement, and our generalized pinna networks up to +1.04 dB. We present a neuroscience-inspired, shallow, spiking neural network architecture with just 525K weights that may be used as a starting model to explain neuronal observations. | Ranganath Selagamsetty (University of Wisconsin - Madison) |
Tutorial: Running SNNs on SpiNNaker SpiNNaker is a highly programmable neuromorphic platform, designed to simulate large spiking neural networks in real time. It uses many conventional low-power ARM processors executing customizable software in parallel, coupled with a specialized multicast network enabling the transmission of many spikes to multiple target neurons. This tutorial will give an introduction to running SNNs on SpiNNaker using the PyNN language. Users will have a chance to run SNNs on the SpiNNaker hardware directly through a Jupyter notebook interface. | Andrew Rowley (U Manchester) |
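To give a flavour of the interface (a minimal sketch in the style of the sPyNNaker documentation; the actual notebooks may differ):

```python
import pyNN.spiNNaker as sim  # sPyNNaker backend; the PyNN API is portable

sim.setup(timestep=1.0)  # 1 ms simulation timestep

# A spike source driving a single current-based LIF neuron.
stim = sim.Population(1, sim.SpikeSourceArray(spike_times=[10.0, 30.0, 50.0]))
neuron = sim.Population(1, sim.IF_curr_exp(), label="lif")
sim.Projection(stim, neuron, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=5.0, delay=1.0))

neuron.record(["spikes", "v"])
sim.run(100.0)  # simulate 100 ms
print(neuron.get_data("spikes").segments[0].spiketrains)
sim.end()
```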
Tutorial: Accelerated Neuromorphic Computing on BrainScaleS In this tutorial, participants will have the chance to explore BrainScaleS-2, one of the world’s most advanced analog platforms for neuromorphic computing. BrainScaleS-2 has primarily been designed to serve as a versatile computational substrate for the emulation of spiking neural networks. As such, each ASIC integrates 512 analog neuron circuits implementing the rich dynamics of the adaptive exponential leaky integrate-and-fire (AdEx) model. Each neuron receives input from 256 current- or conductance-based synapses with configurable sign and weight. Multi-compartment extensions allow the formation of complex, spatially distributed dendritic trees with active processing elements. Integrating thousands of ADC and DAC channels as well as two custom microprocessors with SIMD extensions, each ASIC represents a software-controlled analog computer that can be configured and probed at will. For the tutorial, participants will use a web browser on their own laptop for remote access to BrainScaleS-2 systems via the EBRAINS Research Infrastructure. After a short introduction to neuromorphic computing and spiking neural networks, they will learn how to express and run experiments on the neuromorphic platform through either the PyTorch-based (machine-learning-targeting) or the PyNN-based (neuroscience-targeting) software interfaces. This will allow them to gain insights into the unique properties and challenges of analog computing and to exploit the versatility of the system by exploring user-defined learning rules. Each participant will have the opportunity to follow a prepared tutorial or branch off and implement their own project on the systems. Participants can use their EBRAINS account (available free of charge at https://ebrains.eu/register) or a guest account during the tutorial. With their own account, participants can continue using the neuromorphic compute systems after the tutorial ends. | |
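For a taste of the PyNN-based route (module and model names as in the public BrainScaleS-2 demo notebooks; treat the details as indicative, the tutorial material is authoritative):

```python
import pynn_brainscales.brainscales2 as pynn

pynn.setup()  # connect to a BrainScaleS-2 system, e.g. via EBRAINS

# Two hardware AdEx neurons driven by a deterministic spike source.
neurons = pynn.Population(2, pynn.cells.HXNeuron())
stim = pynn.Population(2, pynn.cells.SpikeSourceArray(spike_times=[0.01, 0.05]))
pynn.Projection(stim, neurons, pynn.OneToOneConnector(),
                synapse_type=pynn.synapses.StaticSynapse(weight=63))

neurons.record(["spikes"])
pynn.run(0.2)  # hardware runs ~1000x faster than biological real time
print(neurons.get_data("spikes").segments[0].spiketrains)
pynn.end()
```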
Tutorial: Development and Deployment of SNNs on FPGA for Embedded Applications This tutorial presents an in-depth introduction to a many-core near-memory-computing Spiking Neural Network (SNN) FPGA accelerator developed at the FZI Research Center for Information Technology. The accelerator is designed for embedded sensor processing applications in medical, industrial, and automotive contexts, with a focus on dataset evaluation and real-time processing of high-data-rate neuromorphic sensors. The hardware architecture is based on a pipelined SNN processing core, and the tutorial will delve into the numerous co-design decisions made to optimize its performance and versatility. Participants will gain insights into critical concepts such as quantization, the mapping of logical neurons onto physical processing elements (PEs), and the accelerator's integration within a System-on-Chip (SoC) FPGA context running Linux on classical processors. The tutorial will also cover the current (work-in-progress) feature set of the accelerator and provide hands-on experience in developing and deploying SNNs using our toolchain. The accelerator is intended to be open-sourced to the neuromorphic community upon reaching maturity in its development and deployment framework. In the interim, this tutorial aims to gather valuable feedback from potential users, researchers, and experts in neuromorphic hardware implementation to refine and enhance the accelerator's capabilities. | Brian Pachideh and Sven Nitzsche (FZI) |
Tutorial: NEST Simulator as a neuromorphic prototyping platform In the design of neuromorphic systems, it is vital to have a flexible and highly performant way of exploring system parameters. Using NEST Simulator [1] and the NESTML modeling language [2], spiking neural network models can be quickly prototyped and subjected to design constraints that mirror those of the intended neuromorphic platform. NEST has a proven track record on a large and diverse set of use cases and runs anywhere from laptops to supercomputers, making it an ideal prototyping and research platform for neuromorphic systems. It also supports reproducibility (obtaining the same numerical results across platforms), highlighting its value in the verification and validation of neuromorphic systems. In this tutorial, participants will get hands-on experience creating neuron and synapse models in NESTML and using them to build networks in NEST that perform various tasks, such as sequence learning and reinforcement learning. We will introduce several tools and front-ends to implement modeling ideas most effectively, such as the graphical user interface NEST Desktop [3]. Through the use of target-specific code generation options in NESTML, the same model can even be run directly on neuromorphic platforms. Participants do not have to install software, as all tools are accessible via the cloud. All parts of the tutorial are hands-on and take place via Jupyter notebooks. | Dennis Terhorst and Charl Linssen (Jülich Research Centre) |
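A minimal NEST example of the kind the hands-on part builds on (standard NEST 3 API; NESTML-generated models plug in the same way):

```python
import nest

nest.ResetKernel()

# A DC-driven LIF neuron and a spike recorder.
neuron = nest.Create("iaf_psc_alpha", params={"I_e": 376.0})  # drive in pA
recorder = nest.Create("spike_recorder")
nest.Connect(neuron, recorder)

nest.Simulate(1000.0)  # milliseconds
print(recorder.get("events")["times"])  # recorded spike times
```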
Tutorial: NeuroBench Benchmarking is an essential component of research which involves measuring and comparing approaches in order to evaluate improvements and demonstrate objective benefits. Essentially, it aims to answer the questions: “How much better are my approaches now, and how can I make them even better next?” NeuroBench is a community-driven initiative towards providing a standardized framework for benchmarking neuromorphic solutions, unifying the field with straightforward, well-defined, and reproducible benchmark measurement. NeuroBench offers common tools and a methodology that apply broadly across different models, tasks, and scenarios, allowing for comprehensive insights into the correctness and costs of execution. Recently, it was used to compare and score accurate, tiny-compute sequence models in the BioCAS 2024 Neural Decoding Grand Challenge. In this tutorial, we provide a hands-on guide to using the open-source NeuroBench harness for profiling neuromorphic models, such as spiking neural networks and other efficiency-focused models. Participants will learn how to benchmark models, extracting meaningful metrics in order to gain a comprehensive understanding of the cost profile associated with model execution. We will show how the harness interfaces can be used to connect with other popular software libraries and how users can easily extend the harness with their own custom tasks and metrics of interest, which will provide the most relevant information for their research. The hands-on examples will be offered through Python notebooks. Please bring your own laptop. | Jason Yik (Harvard) |
Tutorial: Neuromorphic Control for Autonomous Driving This tutorial is based on three of our recent publications.
Autonomous driving is one of the hallmarks of artificial intelligence. Neuromorphic control is poised to contribute significantly to autonomous behavior by leveraging energy-efficient computational frameworks based on spiking neural networks. In this tutorial, we will explore neuromorphic implementations of four prominent controllers for autonomous driving: pure pursuit, Stanley, PID, and MPC, using a physics-aware simulation framework (CARLA). We will showcase these controllers with various vehicle models (from a Tesla Model 3 to an ambulance) and compare their performance with conventional CPU-based implementations. While they are neural approximations, we will demonstrate how neuromorphic models can perform competitively with their conventional counterparts. In particular, we will show that neuromorphic models can converge to their optimal performance with merely 100-1,000 neurons while providing state-of-the-art response dynamics to unforeseen situations. For example, we will showcase realistic driving scenarios in which vehicles experience malfunctions and swift steering maneuvers. We will demonstrate significant improvements in dynamic error rate compared with traditional control implementations, with up to 89.15% median prediction error reduction with 5 spiking neurons and up to 96.08% with 5,000 neurons. In this tutorial, we will provide guidelines for building neuromorphic architectures for control and describe the importance of their underlying tuning parameters and neuronal resources. We will also highlight the importance of hybrid (conventional and neuromorphic) designs, as well as the limitations of neuromorphic implementations, particularly at higher speeds, where they tend to degrade faster than conventional designs. | Elishai Ezra Tsur (The Open University of Israel) |
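As a reference point for one of the four controllers (our own minimal sketch of the conventional baseline, not the tutorial's spiking implementation): the discrete PID law that the neuromorphic version approximates with neuron populations:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: steering correction from cross-track error at 100 Hz.
controller = PID(kp=1.0, ki=0.1, kd=0.05, dt=0.01)
steering = controller.step(error=0.3)
```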
Tutorial: SpiNNaker2: Beyond Neural Simulation SpiNNaker2 is a scalable many-core architecture for flexible neuromorphic computing. It combines low-power ARM cores and dedicated accelerators for deep neural networks with a scalable, event-based communication infrastructure. This unique combination makes it possible to explore a wide range of applications on SpiNNaker2, including spiking neural network simulation, deep neural networks, hybrid neural networks, as well as other event-based algorithms. This tutorial complements the planned PyNN tutorial for SpiNNaker by the University of Manchester and focuses on applications that go beyond neural simulation and make use of SpiNNaker2’s features. We will bring single-chip SpiNNaker2 boards and offer remote access to 48-chip server boards. The first part of the tutorial will focus on deploying deep SNNs on SpiNNaker2 using the Neuromorphic Intermediate Representation (NIR). In the second part we will showcase examples of our generic compute and/or deep learning software stacks. | Bernhard Vogginger, Florian Feiler and Mahmoud Akl (TU Dresden / SpiNNcloud Systems GmbH) |