NICE 2025 - Agenda
Tuesday, 25 March 2025 | |||
08:30 | NICE 2025 venue: European Institute for Neuromorphic Computing (EINC), Heidelberg. Arrival by air via the nearby airports, or by train via Heidelberg Main Station (Heidelberg Hauptbahnhof; use bahn.de for timetable information) and onward to the institute.
Time zone: all times in the agenda are Central European Time, CET (e.g. Europe/Berlin or Europe/Paris). | ||
08:30‑09:00 (30 min) | Registration (with some coffee but no breakfast) | ||
Session chair: Johannes Schemmel | |||
09:00‑09:15 (15+5 min) | NICE 2025: opening and welcome | Markus Oberthaler (Kirchhoff-Institute for Physics) | |
09:20‑10:05 (45+5 min) | Organisers' round | NICE organising committee members | |
10:10‑10:55 (45+5 min) | Keynote: Toward a formal semantics for neuromorphic computing theory (presentation.pdf publicly accessible) What does it mean when a brain-like system 'computes'? This is the question of the semantics of neuromorphic computing. In classical digital computing, several mutually connected approaches to formalizing the 'meaning' of a computational process have been worked out to textbook level. These formal frameworks allow one to characterize, analyse and prove, for instance, whether a computer program actually does what the user meant it to achieve; whether two different programs actually compute 'the same' task; which tasks can be 'programmed' at all; or what hardware requirements must be met to implement a given program. In brief, semantic theory allows one to analyse how abstract models of computational processes interface with reality, both at the bottom level of the physical reality of hardware and at the top level of user tasks. Neuromorphic computing theory can learn a lot about these things from the digital world, but it also needs to find its very own view on semantics. | Herbert Jäger (Rijksuniversiteit Groningen) | |
11:00‑11:30 (30 min) | Coffee break | ||
Session chair: Mihai Petrovici | |||
11:30‑11:55 (25+5 min) | Exploring Spike Encoder Designs for Near-Sensor Edge Computing (presentation.pdf publicly accessible) Jingang Jin, Zhenhang Zhang and Qinru Qiu. Robust sensing and detection require energy- and cost-efficient hardware and software capable of operating reliably in dynamic environments with wide variations in operating conditions. Spiking Neural Networks (SNNs), widely recognized as biologically inspired computing models, offer significant potential for near-sensor signal processing due to their energy efficiency and adaptability. A critical step toward broader adoption of this novel computing paradigm is the development of efficient frontend designs capable of encoding multichannel time-series data from sensors into sparse spike trains. This work introduces two spike-encoder architectures: a population-coding-based encoder and a reservoir-computing-based encoder. These architectures convert multivariate time series into multichannel spike sequences, performing sparse coding that effectively projects the input temporal sequences into a high-dimensional binary feature space in both the spatial and temporal domains. When combined with an SNN-based backend classifier, the encoded spike sequences enable effective classification. Furthermore, our proposed reservoir encoder achieves lower implementation complexity than conventional reservoir models while maintaining effective sparse coding capabilities. Finally, we demonstrate that the in-hardware online learning capability of SNN models can relax stringent requirements on encoder performance and precision, allowing cost reduction and design simplification. | Qinru Qiu (Syracuse University) | |
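The population-coding idea in the abstract above can be illustrated with a minimal sketch (all names and parameters here are hypothetical, not the authors' design): an input value is fanned out over neurons with overlapping Gaussian tuning curves, and each neuron emits a Bernoulli spike train at a rate set by its activation.

```python
import numpy as np

def population_encode(x, n_neurons=8, t_steps=20, sigma=0.15, seed=0):
    """Encode a scalar x in [0, 1] as a (t_steps, n_neurons) binary spike train.

    Each neuron has a Gaussian tuning curve centred on a preferred value;
    its activation sets the per-timestep spike probability (Bernoulli coding).
    """
    rng = np.random.default_rng(seed)
    centres = np.linspace(0.0, 1.0, n_neurons)         # preferred input values
    act = np.exp(-0.5 * ((x - centres) / sigma) ** 2)  # tuning-curve response
    return (rng.random((t_steps, n_neurons)) < act).astype(np.uint8)

spikes = population_encode(0.3)
print(spikes.shape)        # (20, 8)
print(spikes.sum(axis=0))  # neurons tuned near 0.3 spike most often
```

The result is a sparse spatio-temporal binary code of the analog input, which is the property the encoder architectures in the talk exploit.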
12:00‑12:10 (10+5 min) | A LIF-based Legendre Memory Unit as neuromorphic State Space Model benchmarked on a second-long spatio-temporal task (presentation.pdf publicly accessible) Benedetto Leto, Gianvito Urgese, Enrico Macii and Vittorio Fra. Chasing energy efficiency through biologically inspired computing has generated significant interest in neuromorphic computing as a new approach to overcome some of the limitations of conventional Machine Learning (ML) solutions. In this context, State Space Models (SSMs) are emerging as a powerful tool to model the temporal evolution of a system through differential equations. Their established ability to process time-dependent information and their compact mathematical framework make them attractive for integration with neuromorphic principles, leveraging the time-driven basis on which the latter are inherently built. To further explore such integration and cooperation, we investigated the adoption of a neuromorphic SSM, based on a redesign of the Legendre Memory Unit (LMU) using populations of Leaky Integrate-and-Fire (LIF) neurons, for a spatio-temporal task. Our LIF-based LMU (L2MU) turned out to outperform recurrent Spiking Neural Networks (SNNs) on the event-based Braille letter reading task, providing additional evidence for the feasibility of SSMs as an upcoming alternative in the neuromorphic domain. | Vittorio Fra (Politecnico di Torino) | |
12:15‑12:40 (25+5 min) | Demonstrating the Advantages of Analog Wafer-Scale Neuromorphic Hardware (presentation.pdf publicly accessible) As numerical simulations grow in size and complexity, they become increasingly resource-intensive in terms of time and energy. While specialized hardware accelerators often provide order-of-magnitude gains and are state of the art in other scientific fields, their availability and applicability in computational neuroscience is still limited. In this field, neuromorphic accelerators, particularly mixed-signal architectures like the BrainScaleS systems, offer the most significant performance benefits. These systems maintain a constant, accelerated emulation speed independent of network model and size. This is especially beneficial when traditional simulators reach their limits, such as when modeling complex neuron dynamics, incorporating plasticity mechanisms, or running long or repetitive experiments. Here, we demonstrate the capabilities of the BrainScaleS-1 system: we report the emulation time and energy consumption for two biologically inspired networks adapted to the neuromorphic hardware substrate, a balanced random network after Brunel and the cortical microcircuit of Potjans and Diesmann. | Eric Müller (Heidelberg University) | |
12:45 | Poster teasers: 1-minute teasers for 10 selected posters. | ||
12:45 (1+1 min) | Poster: Improved Cleanup and Decoding of Fractional Power Encodings | Alicia Bremer (University of Waterloo) | |
12:47 (1+1 min) | Poster: Comply: Learning Sentences with Complex Weights inspired by Fruit Fly Olfaction | Alexei Gustavo Figueroa Rosero (Berliner Hochschule Fuer Technik) | |
12:49 (1+1 min) | Poster: Multi-timescale synaptic plasticity on analog neuromorphic hardware | Amani Atoui (Heidelberg University) | |
12:51 (1+1 min) | Poster: Threshold Adaptation in Spiking Networks Enables Shortest Path Finding and Place Disambiguation | Robin Dietrich (Technical University of Munich) | |
12:53 (1+1 min) | Poster: Never Reset Again: A Mathematical Framework for Continual Inference in Recurrent Neural Networks | Bojian Yin (TUE) | |
12:55 (1+1 min) | Poster: A feedback control optimizer for online and hardware-aware training of Spiking Neural Networks | Matteo Saponati (Institute of Neuroinformatics (ETH/UZH)) | |
12:57 (1+1 min) | Poster: Work in Progress: 3D hand tracking for Extended Reality | Zhen Xu (Leiden University) | |
12:59 (1+1 min) | Poster: Dedicated Class Sub-networks for SNN Class-Incremental Learning | Katy Warr (University of Southampton) | |
13:01 (1+1 min) | Poster: VIBE: Enhancing Unsupervised Continual Learning with Autonomous Novelty Detection | Balachandran Swaminathan (Pennsylvania State University) | |
13:03 (1 min) | Poster: A Grid-Cell-Inspired Structured Vector Algebra for Cognitive Maps | Sven Krauße (Forschungszentrum Jülich GmbH) | |
(1 min) | Info: There is more, much more -- see the full list of posters, including the late-breaking-news posters and the late-breaking-news talks. | ||
13:05‑14:05 (60 min) | Poster-lunch | ||
Session chair: Sebastian Billaudelle | |||
14:05‑14:30 (25+5 min) | Invited talk: 28nm Embedded RRAM for Consumer and Industrial Products: Enabling, Design, and Reliability (presentation.pdf publicly accessible) After a long period of research and development, Infineon Technologies has recently started to sell RRAM-based products. For these new products we follow the maxim "RRAM is the new (embedded) Flash". In the presentation we will discuss design aspects of an embedded RRAM macro in a 28nm advanced logic foundry process employed for consumer and industrial products. We present high-statistics reliability data of the embedded RRAM from test devices and products to demonstrate the maturity and usability of this embedded emerging (digital) memory. We compare RRAM failure modes with those of embedded flash from previous generations and discuss countermeasures. Overall, we show that 28nm and 22nm embedded RRAM are adequate and now finally available successors to embedded flash from previous generations: today, RRAM is not an "emerging memory" anymore; it has actually "emerged". Biography: Jan Otterstedt received the Dr.-Ing. degree in electrical engineering from the University of Hannover, Germany, in 1997. He then joined the Semiconductor Group of Siemens, which later became Infineon Technologies AG. For more than 15 years he has been responsible for concept engineering for embedded non-volatile memories, mostly covering consumer and industrial applications, and he is now a Senior Principal. Since 2006, Jan has lectured on "Testing Digital Circuits" at the Technical University of Munich (TUM). | Jan Otterstedt (Infineon Technologies AG) | |
14:35 | Special session: Late breaking news App/HW | ||
14:35‑14:40 (5 min) | Late breaking news: Event-based Delay Learning and Cross-platform In-the-loop Training for Neuromorphic Hardware Neuromorphic hardware architectures leverage event-driven computation, where asynchronous events, such as spikes, trigger localized processing within synapses and neurons. In recent years, benchmarking these neuromorphic architectures has gained importance, alongside a growing interest in machine-learning-inspired training of spiking neural networks. In this work, we extend a modern event-based spiking neural network training framework to support arbitrary recurrent and delayed topologies. Additionally, we propose an extension of the Neuromorphic Intermediate Representation (NIR) to enable event-based in-the-loop training, simplifying the porting of benchmarks across neuromorphic hardware backends. | Florian Fischer | |
14:40‑14:45 (5 min) | Late breaking news: Visual coding of SNNs - with Norse and NEST Desktop | Sebastian Spreizer (University of Trier) | |
14:45‑14:50 (5 min) | Late breaking news: Neuromorphic Computing through a Heterogeneous Photonic-Electronic Architecture | Matej Hejda (Hewlett Packard Enterprise) | |
14:50‑14:55 (5 min) | Late breaking news: Solving sparse finite element problems on neuromorphic hardware | Bradley Theilman (Sandia National Laboratories) | |
14:55‑15:10 (15 min) | Q & A to the four late-breaking news talks | ||
15:10‑15:35 (25+5 min) | Integrating programmable plasticity in experiment descriptions for analog neuromorphic hardware (presentation.pdf publicly accessible) Philipp Spilger, Eric Müller and Johannes Schemmel. The study of plasticity in spiking neural networks is an active area of research. However, simulations that involve complex plasticity rules, dense connectivity/high synapse counts, complex neuron morphologies, or extended simulation times can be computationally demanding. The BrainScaleS-2 neuromorphic architecture has been designed to address this challenge by supporting “hybrid” plasticity, which combines the concepts of programmability and inherently parallel emulation. In particular, observables that are expensive in numerical simulation, such as per-synapse correlation measurements, are implemented directly in the synapse circuits. The evaluation of the observables, the decision to perform an update, and the magnitude of an update are all determined by a conventional program that runs concurrently with the analog neural network. Consequently, these systems can offer a scalable and flexible solution in such cases. While previous work on the platform has already reported on the use of different kinds of plasticity, the descriptions of the spiking neural network experiment topology and protocol on the one hand, and of the plasticity algorithm on the other, have not been connected. In this work, we introduce an integrated framework for describing spiking neural network experiments and plasticity rules in a unified high-level experiment description language for the BrainScaleS-2 platform and demonstrate its use. | Philipp Spilger (Kirchhoff Institute for Physics, Heidelberg University) | |
15:40‑16:10 (30 min) | Coffee break | ||
Session chair and open-mic moderator: Sunny Bains | |||
16:10‑16:35 (25+5 min) | Invited talk: Robust Computation with Neuronal Heterogeneity | Christian Tetzlaff (University Medical Center Göttingen) | |
16:40‑17:05 (25+5 min) | Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation (presentation.pdf publicly accessible) Sirine Arfa, Bernhard Vogginger, Chen Liu, Johannes Partzsch, Mark Schöne and Christian Mayr. Spiking Neural Networks (SNNs) are highly energy-efficient during inference, making them particularly suitable for deployment on neuromorphic hardware. Their ability to process event-driven inputs, such as data from dynamic vision sensors (DVS), further enhances their applicability to edge computing tasks. However, the resource constraints of edge hardware necessitate techniques like weight quantization, which reduce the memory footprint of SNNs while preserving accuracy. Despite its importance, existing quantization methods typically focus on synaptic weights without taking into account other critical parameters, such as scaling neuron firing thresholds. To address this limitation, we present the first benchmark for the DVS gesture recognition task using SNNs optimized for the many-core neuromorphic chip SpiNNaker2. Our study evaluates two quantization pipelines for fixed-point computations. The first approach employs post-training quantization (PTQ) with percentile-based threshold scaling, while the second uses quantization-aware training (QAT) with adaptive threshold scaling. Both methods achieve accurate 8-bit on-chip inference, closely approximating 32-bit floating-point performance. Additionally, our baseline SNNs perform competitively against previously reported results without specialized techniques. These models are deployed on SpiNNaker2 using the Neuromorphic Intermediate Representation (NIR). Ultimately, we achieve 94.13% classification accuracy on-chip, demonstrating SpiNNaker2's potential for efficient, low-energy neuromorphic computing. | Sirine Arfa (Technical University of Dresden - TU Dresden) | |
17:10‑18:10 (60 min) | Open mic / discussion -- day I speakers | ||
18:15‑20:15 (120 min) | Dinner |
Wednesday, 26 March 2025 | |||
09:00 | NICE 2025 - day 2 | ||
Session chair: Eric Müller | |||
09:00‑09:25 (25+5 min) | Invited talk: What can AI learn from the brain? Past, Present and Future (presentation.pdf publicly accessible) There have been incredible advances in AI systems over the past few years, with deep-learning-trained AI systems rivalling and even outperforming humans on many challenging tasks, including image and video analysis, speech processing and text generation. Such systems avoid temporal constraints imposed by the brain’s neural hardware, which include the slow axonal conduction velocities of real neurons and the relatively slow integration of individual neurons. As a result, they can perform tasks much faster than humans. However, many of the brain’s key computational tricks are still missing from state-of-the-art AI. In this talk, Simon Thorpe will discuss a range of features that are missing from current systems. He will argue that using ultra-sparse spike-based coding schemes is critical for explaining why the brain only needs 20 watts of power, orders of magnitude less than current neuromorphic solutions. He will also propose that the brain uses efficient learning mechanisms that allow neurons to become selective to repeating activity patterns in just a few presentations, much more efficiently than the back-propagation learning schemes used in most systems. Such features could allow the development of new types of brain-inspired AI systems that could transform the state of the art. | Simon Thorpe (CNRS) | |
09:30‑09:55 (25+5 min) | State-Space Model Inspired Multiple-Input Multiple-Output Spiking Neurons (presentation.pdf publicly accessible) Sanja Karilanova, Subhrakanti Dey and Ayça Özçelikkale. In spiking neural networks (SNNs), the main unit of information processing is the neuron with an internal state. The internal state generates an output spike based on its component associated with the membrane potential. This spike is then communicated to other neurons in the network. Here, we propose a general multiple-input multiple-output (MIMO) spiking neuron model that goes beyond the traditional single-input single-output (SISO) model in the SNN literature. Our proposed framework is based on interpreting neurons as state-space models (SSMs) with linear state evolution and non-linear spiking activation functions. We illustrate the trade-offs among various parameters of the proposed SSM-inspired neuron model, such as the number of hidden neuron states and the number of input and output channels, including single-input multiple-output (SIMO) and multiple-input single-output (MISO) variants. We show that for SNNs with a small number of neurons with large internal state spaces, significant performance gains may be obtained by increasing the number of output channels of a neuron. In particular, a network of spiking neurons with multiple output channels may achieve the same level of accuracy as a baseline with continuous-valued communication on the same reference network architecture. | Sanja Karilanova (Uppsala University) | |
10:00‑10:25 (25+5 min) | Invited talk: Neuromorphic Principles for Self-Attention The causal decoder transformer is the workhorse of state-of-the-art large language models and sequence modeling. Its key enabling building block is self-attention, which acts as a history-dependent weighting of sequence elements. Self-attention can take a form strikingly similar to synaptic plasticity, which can be efficiently implemented in neuromorphic hardware. So far, challenges in deep credit assignment have limited the use of synaptic plasticity to relatively shallow networks and simple tasks. By leveraging the equivalence between self-attention and plasticity, we explain how transformer inference is essentially a learning problem that can be addressed with local synaptic plasticity, thereby circumventing the online credit assignment problem. With this understanding, self-attention can be further improved using concepts inspired by computational neuroscience, such as continual learning and metaplasticity. Since causal transformers are notoriously inefficient on conventional hardware, neuromorphic principles for self-attention could hold the key to more efficient inference with transformer-like models. | Emre Neftci (Forschungszentrum Juelich) | |
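The equivalence between self-attention and synaptic plasticity referred to above is easiest to see for linear (kernelized) attention, where the causal attention output can be computed recurrently by accumulating outer products in a "fast weight" matrix, a Hebbian-style local update. A minimal NumPy sketch (illustrative only, not code from the talk; the feature map is an arbitrary choice):

```python
import numpy as np

def causal_linear_attention(Q, K, V):
    """Causal linear attention computed as a recurrent fast-weight update.

    S accumulates outer products of values and (featurized) keys -- a
    Hebbian-style synaptic update -- and each query reads the memory out.
    Mathematically equivalent to masked kernelized attention, but computed
    step by step, like plasticity unfolding over a sequence.
    """
    phi = lambda x: np.maximum(x, 0.0) + 1e-6   # simple positive feature map
    S = np.zeros((V.shape[1], Q.shape[1]))      # fast-weight ('synapse') matrix
    z = np.zeros(Q.shape[1])                    # normalization accumulator
    out = np.zeros_like(V)
    for t in range(Q.shape[0]):
        k, q = phi(K[t]), phi(Q[t])
        S += np.outer(V[t], k)                  # plasticity: value x key
        z += k
        out[t] = (S @ q) / (z @ q)              # attention read-out at step t
    return out

rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((5, 4)) for _ in range(3))
print(causal_linear_attention(Q, K, V).shape)  # (5, 4)
```

Because the update to S is local and incremental, inference never needs to revisit past tokens, which is the property that makes this view attractive for neuromorphic implementations.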
10:30 | Special session: late-breaking news: ML Theory Session | ||
10:30‑10:35 (5 min) | Late breaking news: Weight transport through spike timing for robust local gradients (presentation.pdf publicly accessible) | Timo Gierlich (University of Bern) | |
10:35‑10:40 (5 min) | Late breaking news: Backpropagation through space, time, and the brain | Benjamin Ellenberger (University of Bern) | |
10:40‑10:45 (5 min) | Late breaking news: Sparse Convolutional Recurrent Learning for Efficient Event-based Neuromorphic Object Detection (presentation.pdf publicly accessible) | Guangzhi Tang (Maastricht University) | |
10:45‑10:55 (10 min) | Q & A to the three late-breaking news talks | ||
10:55‑11:25 (30 min) | Coffee break | ||
Session chair: Steve Furber | |||
11:25‑11:50 (25+5 min) | Invited talk: A new direction for continual learning: ask not just where to go, but also how to get there (presentation.pdf publicly accessible) Continually learning from a stream of non-stationary data is challenging for deep neural networks. When these networks are trained on something new, they tend to quickly forget what was learned before. In recent years, considerable progress has been made towards overcoming such "catastrophic forgetting", largely thanks to methods such as replay or regularization that add extra terms to the loss function to approximate the joint loss over all tasks so far. However, I will show that even in the best-case scenario (i.e., with a perfectly approximated joint loss), these current methods still suffer from temporary but substantial forgetting when starting to learn something new (the stability gap) and fail to re-organize the network appropriately when relevant new information comes in (lack of knowledge restructuring). I therefore argue that continual learning should focus not only on the optimization objective (“where to go”), but also on the optimization trajectory (“how to get there”). | Gido van de Ven (KU Leuven) | |
11:55‑12:05 (10+5 min) | Deep activity propagation via weight initialization in spiking neural networks Aurora Micheli, Olaf Booij, Jan van Gemert and Nergis Tömen Spiking Neural Networks (SNNs) and neuromorphic computing offer bio-inspired advantages such as sparsity and ultra-low power consumption, providing a promising alternative to conventional artificial neural networks (ANNs). However, training deep SNNs from scratch remains a challenge, as SNNs process and transmit information by quantizing the real-valued membrane potentials into binary spikes. This can lead to information loss and vanishing spikes in deeper layers, impeding effective training. While weight initialization is known to be critical for training deep neural networks, what constitutes an effective initial state for a deep SNN is not well-understood. Existing weight initialization methods designed for ANNs are often applied to SNNs without accounting for their distinct computational properties. In this work we derive an optimal weight initialization method specifically tailored for SNNs, taking into account the quantization operation. We show theoretically that, unlike standard approaches, our method enables the propagation of activity in deep SNNs without loss of spikes. We demonstrate this behavior in numerical simulations of SNNs with up to 100 layers across multiple time steps. We present an in-depth analysis of the numerical conditions, regarding layer width and neuron hyperparameters, which are necessary to accurately apply our theoretical findings. Furthermore, we present extensive comparisons of our method with previously established baseline initializations for deep ANNs and SNNs. Our experiments on four different datasets demonstrate higher accuracy and faster convergence when using our proposed weight initialization scheme. Finally, we show that our method is robust against variations in several network and neuron hyperparameters. | Aurora Micheli (TU Delft) | |
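The vanishing-spikes problem described in the abstract above can be reproduced in a toy simulation (this illustrates the phenomenon only; it is not the initialization derived in the paper, and the scaling gain here is chosen empirically):

```python
import numpy as np

def layer_rates(scale, depth=20, width=200, t_steps=10, p_in=0.2, seed=0):
    """Mean firing rate per layer of a deep stack of integrate-and-fire layers.

    'naive': a fixed small weight std -- spike activity dies out with depth.
    'scaled': std set to ~1/sqrt(expected active inputs) with an empirically
    chosen gain, so spiking activity survives to the last layer.
    """
    rng = np.random.default_rng(seed)
    spikes = (rng.random((t_steps, width)) < p_in).astype(float)  # input spikes
    std = 0.01 if scale == "naive" else 1.5 / np.sqrt(width * p_in)
    rates = []
    for _ in range(depth):
        W = rng.normal(0.0, std, (width, width))
        v = np.zeros(width)
        out = np.zeros_like(spikes)
        for t in range(t_steps):          # integrate-and-fire with hard reset
            v += spikes[t] @ W
            fired = v >= 1.0
            v[fired] = 0.0
            out[t] = fired
        spikes = out
        rates.append(spikes.mean())
    return rates

print(layer_rates("naive")[-1])   # spike activity collapses in deep layers
print(layer_rates("scaled")[-1])  # activity still present at depth 20
```

Because spikes are binary, a layer that falls silent starves every layer after it, which is why initialization matters even more for deep SNNs than for ANNs.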
12:10‑12:20 (10+5 min) | Eventprop training for efficient neuromorphic applications Thomas Shoesmith, James Knight, Balazs Meszaros, Jonathan Timcheck and Thomas Nowotny. Neuromorphic computing can reduce the energy requirements of neural networks and holds the promise of ‘repatriating’ AI workloads back from the cloud to the edge. However, training neural networks on neuromorphic hardware has remained elusive. Here, we instead present a pipeline for training spiking neural networks on GPUs, using the efficient event-driven Eventprop algorithm implemented in mlGeNN, and deploying them on Intel’s Loihi 2 neuromorphic chip. Our benchmarking on keyword spotting tasks indicates that there is almost no loss in accuracy between GPU and Loihi 2 implementations and that classifying a sample on Loihi 2 is up to 10× faster and uses 200× less energy than on an NVIDIA Jetson Orin Nano. | Thomas Shoesmith (University of Sussex) | |
12:25‑12:35 (10+5 min) | Event-based backpropagation on the neuromorphic platform SpiNNaker2 (presentation.pdf publicly accessible) Gabriel Béna, Timo Wunderlich, Mahmoud Akl, Bernhard Vogginger, Christian Mayr and Hector Andres Gonzalez. Neuromorphic computing aims to replicate the brain's capabilities for energy-efficient and parallel information processing, promising a solution to the increasing demand for faster and more efficient computational systems. Efficient training of neural networks on neuromorphic hardware requires the development of training algorithms that retain the sparsity of spike-based communication during training. Here, we report on the first implementation of event-based backpropagation on the SpiNNaker2 neuromorphic hardware platform. We use EventProp, an algorithm for event-based backpropagation in spiking neural networks (SNNs), to compute exact gradients using sparse communication of error signals between neurons. Our implementation computes multi-layer networks of leaky integrate-and-fire neurons using discretized versions of the differential equations and their adjoints, and uses event packets to transmit spikes and error signals between network layers. We demonstrate a proof-of-concept of batch-parallelized, on-chip training of SNNs using the Yin Yang dataset, and provide an off-chip implementation for efficient prototyping, hyper-parameter search, and hybrid training methods. | Gabriel Béna (Imperial College London) | |
12:40 | Special session: late-breaking news: Delay Session | ||
12:40‑12:45 (5 min) | Late breaking news: Three Factor Delay Learning Rules for Spiking Neural Networks (presentation.pdf publicly accessible) | Luke Vassallo (Institut für Technische Informatik (ZITI)) | |
12:45‑12:50 (5 min) | Late breaking news: DelGrad: Exact event-based gradients in spiking networks for training delays and weights | Jimmy Weber (Institute of Neuroinformatics) | |
12:50‑12:55 (5 min) | Late breaking news: Efficient Event-based Delay Learning in Spiking Neural Networks (presentation.pdf publicly accessible) | Balázs Mészáros (University of Sussex) | |
12:55‑13:05 (10 min) | Q & A to the three late-breaking news talks | ||
13:05‑14:00 (55 min) | Poster-lunch | ||
14:00 | Session chair: Suma Cardwell | ||
14:00‑14:25 (25+5 min) | Hardware architecture and routing-aware training for optimal memory usage: a case study Jimmy Weber, Theo Ballet and Melika Payvand. Efficient deployment of neural networks on resource-constrained hardware demands optimal use of on-chip memory. In event-based processors, this is particularly critical for routing architectures, where substantial memory is dedicated to managing network connectivity. While prior work has focused on optimizing event routing during hardware design, optimizing memory utilization for routing during network training remains underexplored. Key challenges include: (i) integrating routing into the loss function, which often introduces non-differentiability, and (ii) the computational expense of evaluating network mappability to hardware. We propose a hardware-algorithm co-design approach to train routing-aware neural networks. To address challenge (i), we extend the DeepR training algorithm, leveraging dynamic pruning and random re-assignment to optimize memory use. For challenge (ii), we introduce a proxy-based approximation of the mapping function to incorporate placement and routing constraints efficiently. We demonstrate our approach by optimizing a network for the Spiking Heidelberg Digits (SHD) dataset using a small-world connectivity-based hardware architecture as a case study. The resulting network, trained with our routing-aware methodology, is fully mappable to the hardware, achieving 5% more accuracy using the same number of parameters, and iso-accuracy with 10× less memory usage, compared to non-routing-aware training methods. This work highlights the critical role of co-optimizing algorithms and hardware to enable efficient and scalable solutions for constrained environments. | Jimmy Weber (Institute of Neuroinformatics, University of Zurich and ETH Zurich) | |
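The dynamic pruning with random re-assignment that the abstract borrows from DeepR can be sketched in a few lines (a toy single-step version under stated assumptions, not the authors' implementation; the sign-flip pruning criterion follows the original DeepR idea):

```python
import numpy as np

def deepr_step(W, mask, grad, lr=0.1, rng=None):
    """One DeepR-style rewiring step on a weight matrix W with active mask.

    Active weights (mask == 1) take a gradient step; weights whose sign
    flips are pruned (set dormant), and an equal number of dormant
    connections are re-activated at random, so the total number of active
    connections -- and hence the routing-memory budget -- stays fixed.
    """
    if rng is None:
        rng = np.random.default_rng()
    W, mask = W.astype(float).copy(), mask.copy()
    signs = np.sign(W) + (W == 0)            # treat zero weights as positive
    W -= lr * grad * mask                    # update active connections only
    flipped = mask.astype(bool) & (np.sign(W) != signs)
    W[flipped] = 0.0
    mask[flipped] = 0
    n_prune = int(flipped.sum())
    if n_prune:                              # re-activate dormant connections
        dormant = np.flatnonzero(mask.ravel() == 0)
        new = rng.choice(dormant, size=n_prune, replace=False)
        mask.ravel()[new] = 1
        W.ravel()[new] = 0.01 * rng.choice([-1.0, 1.0], size=n_prune)
    return W, mask

rng = np.random.default_rng(0)
W = np.array([[0.05, -0.05], [0.0, 0.3]])
mask = np.array([[1, 1], [0, 1]])
W, mask = deepr_step(W, mask, grad=np.ones((2, 2)), rng=rng)
print(int(mask.sum()))  # 3 -- the connection count is conserved
```

Keeping the active-connection count constant is what makes such training compatible with a fixed on-chip routing-table size.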
14:30‑14:40 (10+5 min) | Short-reach Optical Communication: A Real-world Task for Neuromorphic Hardware Elias Arnold, Eike-Manuel Edelmann, Alexander von Bank, Eric Müller, Laurent Schmalen and Johannes Schemmel. SNNs emulated on dedicated neuromorphic accelerators promise to offer energy-efficient signal processing. However, the neuromorphic advantage over traditional algorithms remains to be demonstrated in real-world applications. In this talk we outline an intensity-modulation/direct-detection (IMDD) task that is relevant to high-speed optical communication systems used in data centers. Compared to other machine-learning-inspired benchmarks, the task offers several advantages. First, the dataset is inherently time-dependent, i.e., there is a time dimension that can be natively mapped to the dynamic evolution of SNNs. Second, small-scale SNNs can achieve the target accuracy required by technical communication standards. Third, due to the small scale and the defined target accuracy, the task facilitates optimization for real-world aspects such as energy efficiency, resource requirements, and system complexity. | Eike-Manuel Edelmann (Karlsruhe Institute of Technology (KIT), Communications Engineering Lab (CEL)) | |
14:45‑14:55 (10+5 min) | Retina-Inspired Object Motion Segmentation for Event-Cameras (presentation.pdf publicly accessible) Victoria Clerico, Shay Snyder, Arya Lohia, Md Abdullah-Al Kaiser, Gregory Schwartz, Akhilesh Jaiswal and Maryam Parsa. Event-cameras have emerged as a revolutionary technology with a high temporal resolution that far surpasses standard active pixel cameras. This technology draws biological inspiration from photoreceptors and the initial retinal synapse. This research showcases the potential of additional retinal functionalities to extract visual features. We provide a domain-agnostic and efficient algorithm for ego-motion compensation based on Object Motion Sensitivity (OMS), one of the multiple features computed within the mammalian retina. We develop a method based on experimental neuroscience that translates OMS's biological circuitry into a low-overhead algorithm to suppress camera motion, bypassing the need for deep networks and learning. Our system processes event data from dynamic scenes to perform pixel-wise object motion segmentation using real and synthetic datasets. This paper introduces a bio-inspired computer vision method that reduces the number of parameters by three to six orders of magnitude (10^3 to 10^6×) compared to previous approaches. Our work paves the way for robust, high-speed, and low-bandwidth decision-making for in-sensor computations. | Victoria Clerico (IBM Research Zürich) | |
15:00‑15:25 (25+5 min) | Invited talk: The Spiking Neural Processor: mixed-signal MCU for power-constrained TinyML applications Ambient intelligence imposes strict requirements on the area and power dissipation of edge devices and sensors. Innatera's Spiking Neural Processor (SNP) is a microcontroller featuring heterogeneous accelerators, including mixed-signal SNN, DSP, and CNN accelerators alongside an efficient RISC core, designed to support a wide array of complex TinyML / Edge AI workloads. The SNP is accompanied by the Talamo software tool, which enables building, gradient-based optimisation, and deployment of entire sensor-processing pipelines and applications onto the chip. This session will explore the architecture of the SNP, the advantages of SNNs for temporal and event-based processing, and practical insights into building and deploying SNN-based applications on the platform. | Rui Teixeira (Innatera) | |
15:30‑15:35 (5 min) | Group photo | ||
15:35‑16:05 (30 min) | Coffee break | ||
Session chair and open-mic moderator: Brad Aimone | |||
16:05‑16:30 (25+5 min) | FeNN: A RISC-V vector processor for Spiking Neural Network acceleration Zainab Aizaz, James Knight and Thomas Nowotny Spiking Neural Networks (SNNs) have the potential to drastically reduce the energy requirements of AI systems. However, mainstream accelerators like GPUs and TPUs are designed for the high arithmetic intensity of standard ANNs and so are not well suited to SNN simulation. FPGAs are well suited to applications with low arithmetic intensity as they have high off-chip memory bandwidth and large amounts of on-chip memory. In this talk, James Knight and Zainab Aizaz will present a novel RISC-V-based soft vector processor (FeNN), tailored to simulating SNNs on FPGAs. Unlike most dedicated neuromorphic hardware, FeNN is fully programmable and designed to be integrated with applications running on standard computers from the edge to the cloud. They will show that, by using stochastic rounding and saturation, FeNN can achieve high numerical precision with low hardware utilisation, and that a single FeNN core can simulate an SNN classifier faster than both an embedded GPU and the Loihi neuromorphic system. | Zainab Aizaz (University of Sussex), James Knight (University of Sussex) | |
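As a rough illustration of the stochastic rounding and saturation mentioned in the FeNN abstract above, the following sketch quantizes a value to signed fixed point. This is a toy model: the bit widths, function name, and parameters are assumptions for illustration, not FeNN's actual number format.

```python
import random

def to_fixed_stochastic(x, frac_bits=8, int_bits=7, rng=random.random):
    """Quantize x to signed fixed point with stochastic rounding and saturation.

    The value is scaled by 2**frac_bits; the fractional remainder becomes the
    probability of rounding up, so rounding errors average to zero over many
    operations instead of accumulating as a systematic bias.
    """
    scale = 1 << frac_bits
    scaled = x * scale
    lower = int(scaled // 1)                  # floor of the scaled value
    frac = scaled - lower
    q = lower + (1 if rng() < frac else 0)    # round up with probability frac
    # Saturate to the representable range instead of wrapping around.
    lo, hi = -(1 << (int_bits + frac_bits)), (1 << (int_bits + frac_bits)) - 1
    return max(lo, min(hi, q))

# The mean of many stochastic roundings approaches the true scaled value.
vals = [to_fixed_stochastic(0.3) for _ in range(10000)]
mean = sum(vals) / len(vals)                  # close to 0.3 * 256 = 76.8
```

Because the round-up probability equals the fractional remainder, quantization noise stays zero-mean, which is one reason low-bit fixed-point arithmetic can retain high effective precision.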
16:35‑16:45 (10+5 min) | Recent Nature paper on NC at scale, with THOr as an example. | Catherine Schuman (University of Tennessee) | |
16:50‑17:15 (25+5 min) | Invited talk: Memristive valence change memory cross-bar arrays for neuro-inspired data processing show presentation.pdf (public accessible) Memristive cross-bar arrays are highly promising for overcoming the limits of von Neumann architectures with respect to latency and power consumption in the training and inference of deep neural networks. Moreover, the rich dynamics of memristive devices offer the possibility to obtain spatio-temporal information in brain-inspired information processing. We report on the use of cross-bar arrays of valence change memory (VCM) cells co-integrated with CMOS transistors. We examined 1T1R structures, where one transistor (1T) is paired with one resistive memory cell (1R), focusing on three different transistor width-to-length (W/L) ratios. Based on this, we obtained valuable guidance for designing devices that meet the required resistance windows and ensure compatibility with various application needs. To validate the practical applicability of 1T1R arrays, functional testing was conducted for vector-matrix multiplication, a key operation during the training and inference of deep neural networks. The characterization of different types of transistors revealed that the interference between adjacent cells was negligible, confirming the feasibility of using such arrays for high-density, low-power computing. VCM cells show a strong non-linearity in the switching kinetics which is induced by a temperature increase. In this respect, thermal crosstalk can be observed in highly integrated passive crossbar arrays, which may impact the resistance state of adjacent devices. Additionally, due to the thermal capacitance, a VCM cell can remain thermally active after a pulse and thus influence the temperature conditions for a possible subsequent pulse.
We have shown that spatio-temporal thermal correlations can be observed for device spacings as small as a few hundred nanometers and pulse trains with pauses in the order of the thermal time constant of the memristive device. Based on this effect, novel learning rules can potentially be derived for future neuromorphic computing applications. These findings are likely not limited to crossbar arrays with single VCM devices and can be applied to other temperature-sensitive memristive devices as well, in particular also in 1T1R structures. Authors: R. Dittmann, S. Wiefels, S. Hoffmann-Eifert, V. Rana, S. Menzel Peter Grünberg Institute, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany | Regina Dittmann (Forschungszentrum Jülich GmbH) | |
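The vector-matrix multiplication that such 1T1R crossbar arrays accelerate reduces to Ohm's and Kirchhoff's laws: each column current is I_j = Σ_i V_i · G_ij. Below is a minimal idealized sketch with invented conductance values; real arrays must additionally account for wire resistance, sneak paths, and the device variability and thermal effects discussed in the talk.

```python
def crossbar_vmm(voltages, conductances):
    """Ideal crossbar read-out: the current flowing out of each column is the
    sum of the row voltages weighted by the cell conductances,
    I_j = sum_i V_i * G[i][j]. Non-idealities of real devices are ignored."""
    cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(cols)]

# Two input rows, three output columns (conductances in arbitrary units).
G = [[0.1, 0.2, 0.0],
     [0.3, 0.0, 0.4]]
currents = crossbar_vmm([1.0, 0.5], G)   # one analog step computes the whole product
```

The appeal for neural-network workloads is that the entire matrix-vector product happens in one analog read step, in place, rather than through repeated memory fetches.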
17:20‑18:20 (60 min) | Open mic / discussion - day II speakers | ||
18:20‑20:20 (120 min) | Conference dinner |
Thursday, 27 March 2025 | |||
08:59 | NICE 2025 - day 3 | ||
08:59 (1 min) | Announcement: Questionnaire on the neuromorphic field | Matteo Saponati (Institute of Neuroinformatics (ETH/UZH)) | |
09:00 | Session chair: Andreas Grübl | ||
09:00‑09:10 (10+5 min) | The state of NeuroBench show presentation.pdf (public accessible) Engaging researchers across academia and industry from around the globe, NeuroBench is a community-driven initiative towards providing a standardized framework for benchmarking neuromorphic solutions. This short talk will present the latest developments in NeuroBench, including benchmark competition events, open-source tooling, and progress on next steps towards expanding neuromorphic research benchmarking. Further discussion of next steps is welcome at the NeuroBench tutorial on Friday. | Jason Yik (Harvard University) | |
09:15‑09:25 (10+5 min) | OctopuScheduler: On-Chip DNN Scheduling on the SpiNNaker2 Neuromorphic MPSoC Tim Langer, Matthias Jobst, Chen Liu, Florian Kelber, Bernhard Vogginger and Christian Mayr We present OctopuScheduler, the first generalized on-chip scheduling framework for the accelerated inference of non-spiking deep neural networks (DNNs) on the neuromorphic hardware platform SpiNNaker2. The goal of OctopuScheduler is to flexibly support a wide variety of state-of-the-art DNN architectures for different domains, moving from application-specific custom implementations to a generally applicable framework and simplifying access to the SpiNNaker2 platform. The on-chip scheduling approach minimizes communication latencies with the host by controlling the execution of layers for convolutional neural networks (CNNs) and transformer architectures entirely within a single chip. As a scheduling framework for classical deep neural networks, OctopuScheduler has the potential to unlock experimentation with large-scale hybrid deep and spiking neural network (SNN) architectures, event-based computing, and neuromorphic modifications of classical state-of-the-art DNN architectures on the neuromorphic multi-processor system-on-chip (MPSoC) SpiNNaker2. | Tim Langer (TU Dresden) | |
09:30 | Special session: late-breaking news (Bio / Theory) | ||
09:30‑09:35 (5 min) | Late breaking news ELiSe: Efficient Learning of Sequences in Structured Recurrent Networks show presentation.pdf (public accessible) | Ben von Hünerbein (University of Bern) | |
09:35‑09:40 (5 min) | Late breaking news Co-Designed Neuromorphic Circuits for Local Dendritic learning | Maryada (Institute of Neuroinformatics) | |
09:40‑09:45 (5 min) | Late breaking news Switching dynamics of working memory. show presentation.pdf (public accessible) | Ghanendra Singh (TU Graz, Austria) | |
09:45‑09:55 (10 min) | Q & A to the three late-breaking news talks | ||
09:55‑10:05 (10+5 min) | Heterogeneous Population Encoding for Multi-joint Regression using sEMG signals show presentation.pdf (public accessible) Farah Baracat, Luca Manneschi and Elisa Donati Proportional and simultaneous decoding of individual fingers is essential for human-machine interface (HMI) applications, such as myoelectric prostheses, which restore motor function by decoding motor intentions from electromyography (EMG) signals. These closed-loop systems require high real-time decoding accuracy and low-power operation, making spiking neural networks (SNNs) on neuromorphic hardware a promising solution. To fully leverage SNNs, continuous EMG signals must be encoded into the spiking domain while preserving key information. Most existing methods use a single-neuron approach, where each input channel is fed into a single neuron. However, we hypothesize that this limits representation richness and requires per-subject tuning. This talk explores how variability in neuronal populations affects decoding performance, using it as a proxy for information content. We examine how membrane time constants, thresholds, and population size influence finger kinematics decoding. Our results demonstrate that encoding EMG with a heterogeneous neuron population enhances decoding performance and generalizes across subjects without additional tuning or training of the encoding layer parameters. | Farah Baracat (Institute of Neuroinformatics, University of Zurich and ETH Zurich) | |
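The heterogeneous-population idea above can be pictured with a toy leaky integrate-and-fire encoder: each analog channel drives a population of neurons whose diverse membrane time constants and thresholds produce diverse spike patterns. The time constants, thresholds, and step input below are invented for illustration and are not the paper's parameters.

```python
def lif_population_encode(signal, taus, thresholds, dt=1.0):
    """Encode one analog channel with a population of LIF neurons that differ
    in membrane time constant and firing threshold. Returns one spike train
    (a list of 0/1 per time step) per neuron."""
    trains = []
    for tau, theta in zip(taus, thresholds):
        v, train = 0.0, []
        for x in signal:
            v += dt / tau * (x - v)       # leaky integration toward the input
            if v >= theta:
                train.append(1)
                v = 0.0                   # reset after a spike
            else:
                train.append(0)
        trains.append(train)
    return trains

sig = [0.0] * 10 + [1.0] * 40             # a step input on one channel
spikes = lif_population_encode(sig,
                               taus=[5.0, 20.0, 50.0],
                               thresholds=[0.3, 0.5, 0.8])
rates = [sum(t) for t in spikes]           # fast, low-threshold neurons fire most
```

A single-neuron encoder would collapse this to one of the three spike trains; the population preserves more of the signal's temporal structure without per-subject threshold tuning.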
10:10‑10:35 (25+5 min) | Realtime-Capable Hybrid Spiking Neural Networks for Neural Decoding of Cortical Activity show presentation.pdf (public accessible) Jann Krausse, Alexandru Vasilache, Klaus Knobloch and Juergen Becker Intra-cortical brain-machine interfaces (iBMIs) present a promising solution to restoring and decoding brain activity that is lost due to injury. However, patients with such neuroprosthetics suffer from the permanent skull opening resulting from the devices’ bulky wiring. This drives the development of wireless iBMIs, which in turn demand low power consumption and a small device footprint. Most recently, spiking neural networks (SNNs) have been researched as potential candidates for low-power neural decoding. In this work, we present the next step in utilizing SNNs for such tasks, building on the recently published results of the 2024 Grand Challenge on Neural Decoding for Motor Control of Non-Human Primates. We optimize our model architecture to exceed the existing state of the art on the Primate Reaching dataset while maintaining similar resource demand through the use of various compression techniques. We further focus on the implementation of a realtime-capable version of the model and discuss the implications of this architecture. With this, we move one step closer to latency-free decoding of cortical spike trains using neuromorphic technology, which would ultimately improve the lives of millions of paralyzed patients. | Jann Krausse (Infineon Technologies) | |
10:40‑11:10 (30 min) | Coffee break | ||
Session chair: Sunny Bains | |||
11:10‑11:35 (25+5 min) | A Milling Swarm of Ground Robots using Spiking Neural Networks show presentation.pdf (public accessible) Kevin Zhu, Shay Snyder, Ricardo Vega, Maryam Parsa and Cameron Nowzari Spiking Neural Networks, or SNNs, have the potential to enable ultra-cheap, ultra-small neural controllers for robots, as the bio-plausibility of SNNs implies the possibility of microscopic, bio-scale control circuits. We seek to explore the viability of using SNNs to form agent-local policies on robots with low-fidelity sensing and actuation: each robot has only simple 1-bit detection of peers within its field-of-view, and no wheel encoders or absolute positioning information. Using simulations, we evolve a compact (14 neuron) SNN which controls robots to move in a circular milling formation. Network structure and parameters are evolved to optimize a global Circliness metric. To enable sim2real transfer, the simulated agent is matched to the robotic embodiment through an iterative process of characterization, the RSRS process. The resulting emergent behavior is decentralized and robust, requires little to no perception, localization, or planning, and achieves comparable milling performance to existing controllers. Yet, the learned control policy is distinct from that of the state-of-the-art. We hope this motivates further exploration of the viability of using SNNs as agent-level decision-making controllers in swarms of robots. | Kevin Zhu (George Mason University) | |
11:40‑12:05 (25+5 min) | Invited talk: A Neuroscience Perspective on Dendrites for Neuromorphic Computing Dendrites do much more for biological neurons than provide complex structures for receiving massive quantities of synaptic inputs. I will discuss recent neuroscience developments in our understanding of dendrite processing and how these advancements are translating to new neuromorphic models. | Frances Chance (Sandia National Laboratories) | |
12:10‑12:20 (10+5 min) | Biologically-Inspired Representations for Adaptive Control with Spatial Semantic Pointers Graeme Damberger, Kathryn Simone, Chandan Datta, Ram Eshwar Kaundinya, Juan Escareno and Chris Eliasmith We explore and evaluate biologically-inspired representations for an adaptive controller using Spatial Semantic Pointers (SSPs). Specifically, we show that Place-cell like SSP representations outperform past methods. Using this representation, we efficiently learn the dynamics of a given plant over its state space. We implement this adaptive controller in a spiking neural network along with a classical sliding mode controller and prove the stability of the overall system with non-linear plant dynamics. We then simulate the controller on a 3-link arm and demonstrate that the proposed adaptive controller gives a simpler and more systematic way of designing the neural representation of the state space. Compared to previous methods, we show an increase of 1.23-1.25x in tracking accuracy. | Graeme Damberger (University of Waterloo) | |
12:25‑12:50 (25+5 min) | A Truly Sparse and General Implementation of Gradient-Based Synaptic Plasticity Jamie Lohoff, Anil Kaya, Florian Assmuth and Emre Neftci Online synaptic plasticity rules derived from gradient descent achieve high accuracy on a wide range of practical tasks. However, their software implementation often requires tediously hand-derived gradients or the use of gradient backpropagation, which sacrifices the online capability of the rules. In this work, we present a custom automatic differentiation (AD) pipeline for sparse and online implementation of gradient-based synaptic plasticity rules that generalizes to arbitrary neuron models. Our work combines the programming ease of backpropagation-type methods with the memory efficiency of forward AD. To achieve this, we exploit the advantageous compute and memory scaling of online synaptic plasticity by providing an inherently sparse implementation of AD in which expensive tensor contractions are replaced with simple element-wise multiplications when the tensors are diagonal. Gradient-based synaptic plasticity rules such as eligibility propagation (e-prop) have exactly this property and thus profit immensely from this feature. We demonstrate the alignment of our gradients with gradient backpropagation on a synthetic task where e-prop gradients are exact, as well as on audio speech classification benchmarks. We also demonstrate how memory utilization scales with network size without dependence on the sequence length, as expected from forward AD methods. | Jamie Lohoff (Forschungszentrum Jülich) | |
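The diagonal-tensor property this abstract exploits can be illustrated with a toy leaky-integrator layer (invented dynamics, not the paper's pipeline): in forward-mode AD, each state's sensitivity to its own weight is the only non-zero Jacobian entry, so the eligibility trace updates with an element-wise multiply-accumulate instead of a tensor contraction.

```python
def run_with_eligibility(inputs, weights, alpha=0.9):
    """Forward pass of independent leaky integrators v_i <- alpha*v_i + w_i*x_i,
    carrying per-weight eligibility traces e_i = dv_i/dw_i alongside.
    Because dv_i/dw_j = 0 for i != j, the forward-AD trace update is
    element-wise, e_i <- alpha*e_i + x_i, and no full Jacobian is ever built."""
    n = len(weights)
    v = [0.0] * n   # neuron states
    e = [0.0] * n   # eligibility traces, one scalar per weight
    for x in inputs:
        v = [alpha * vi + wi * xi for vi, wi, xi in zip(v, weights, x)]
        e = [alpha * ei + xi for ei, xi in zip(e, x)]
    return v, e

# Two time steps, two neurons: analytically e_i = alpha*x_i(0) + x_i(1).
v, e = run_with_eligibility([[1.0, 0.0], [0.0, 1.0]], [0.5, -0.5])
```

Memory here scales with the number of weights, independent of sequence length, which is the forward-AD scaling behaviour the abstract refers to.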
13:00‑14:00 (60 min) | Lunch | ||
Session chair: Catherine Schuman | |||
14:00‑14:25 (25+5 min) | Evolution at the Edge: Real-Time Evolution for Neuromorphic Engine Control show presentation.pdf (public accessible) Karan Patel, Ethan Maness, Tyler Nitzsche, Emma Brown, Brett Witherspoon, Aaron Young, Bryan Maldonado, Brian Kaul and Catherine Schuman Neuromorphic computing systems are attractive for real-time control at the edge because of their low-power operation, real-time processing capabilities, and potential for online learning. In this work, we describe an approach for performing real-time evolution of spiking neural networks for neuromorphic systems at the edge, called Neuromorphic Optimization using Dynamic Evolutionary Systems (NODES). We apply this approach to real-time combustion engine control and develop an engine-specific hardware platform for NODES called FireBox. We demonstrate how the real-time evolution approach works in simulation and the performance of networks trained in simulation on the physical engine. | Karan Patel (University of Tennessee Knoxville) | |
14:30‑14:40 (10+5 min) | The Spatial Effect of the Pinna for Neuromorphic Speech Denoising show presentation.pdf (public accessible) Ranganath Selagamsetty, Joshua San Miguel and Mikko Lipasti Humans are capable of complex communication in the form of speech, which fundamentally relies on the ability to parse and distinguish sounds in noisy environments. Advances in computing hardware have made artificial neural networks ideal for imitating human speech recognition. Such models have achieved near human-like performance in isolating speech from noisy audio, at the cost of enormous model sizes and power consumption orders of magnitude greater than the brain's. Spiking neural networks have been proposed as an alternative, attaining model efficiency by prioritizing biological fidelity. Inspired by the biological pinna, our model encodes noisy speech input with spatial cues to aid in speech denoising. We show that denoising performance improves when a spiking neural network consumes audio encoded with spatial cues from pinna transforms. Against comparable models, our fixed networks achieve up to +0.15 dB improvement, and our generalized pinna networks up to +1.04 dB. We present a neuroscience-inspired, shallow, spiking neural network architecture with just 525K weights that may be used as a starting model to explain neuronal observations. | Ranganath Selagamsetty (University of Wisconsin - Madison) | |
14:45‑15:10 (25+5 min) | Invited talk: Merging insights from artificial and biological neural networks for neuromorphic edge intelligence The development of efficient bio-inspired training algorithms and adaptive hardware is currently missing a clear framework. Should we start from the brain computational primitives and figure out how to apply them to real-world problems (bottom-up approach), or should we build on working AI solutions and fine-tune them to increase their biological plausibility (top-down approach)? In this talk, we will see why biological plausibility and hardware efficiency are often two sides of the same coin, and how neuroscience- and AI-driven insights can cross-feed each other toward low-cost on-device processing and learning. | Charlotte Frenkel (Delft University of Technology) | |
15:15‑15:45 (30 min) | Coffee break | ||
Session chair and open-mic moderator: Brad Aimone | |||
15:45‑15:55 (10+5 min) | The Young Neuromorphs initiative (https://www.linkedin.com/company/young-neuromorphs) | Nassim Beladel (Delft University of Technology) | |
16:00‑16:25 (25+5 min) | A Diagonal Structured State Space Model on Loihi 2 for Efficient Streaming Sequence Processing Svea Marie Meyer, Philipp Weidel, Philipp Plank, Leobardo Campos-Macias, Sumit Bam Shrestha, Philipp Stratmann, Jonathan Timcheck and Mathis Richter Deep State-Space Models (SSMs) demonstrate state-of-the-art performance on long-range sequence modeling tasks. While the recurrent structure of SSMs can be efficiently implemented as a convolution or as a parallel scan during training, recurrent token-by-token processing cannot currently be implemented efficiently on GPUs. Here, we demonstrate efficient token-by-token inference of the SSM S4D on Intel’s state-of-the-art Loihi 2 neuromorphic processor. We compare this first-ever neuromorphic-hardware implementation of an SSM on sMNIST, psMNIST, and sCIFAR to a recurrent and a convolutional implementation of S4D on Jetson Orin Nano (Jetson). While we find Jetson to perform better in an offline sample-by-sample batched processing mode, Loihi 2 outperforms during token-by-token processing, where it consumes 1000 times less energy with a 75 times lower latency and a 75 times higher throughput compared to the recurrent implementation of S4D on Jetson. This opens up new avenues towards efficient real-time streaming applications of SSMs. | Philipp Weidel (Intel Labs) | |
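The token-by-token recurrence that makes diagonal SSMs such as S4D attractive for streaming can be written down directly: with a diagonal state matrix, x_k = a ⊙ x_{k-1} + b·u_k and y_k = Re(Σ_i c_i x_{k,i}), so each token costs only element-wise work. The parameters below are made up for illustration; real S4D uses carefully initialized complex modes and learned projections.

```python
def ssm_step(state, u, a, b, c):
    """One streaming step of a diagonal state-space model.
    state, a, b, c hold one complex number per mode; u is the input token.
    A diagonal state matrix makes the update O(n) per token."""
    state = [ai * xi + bi * u for ai, xi, bi in zip(a, state, b)]
    y = sum(ci * xi for ci, xi in zip(c, state)).real
    return state, y

# Two decaying complex modes (illustrative values only).
a = [0.9 + 0.1j, 0.8 - 0.2j]
b = [1.0 + 0.0j, 1.0 + 0.0j]
c = [0.5 + 0.0j, 0.5 + 0.0j]

state = [0.0 + 0.0j, 0.0 + 0.0j]
outputs = []
for u in [1.0, 0.0, 0.0]:          # an impulse followed by silence
    state, y = ssm_step(state, u, a, b, c)
    outputs.append(y)
# The outputs trace the impulse response as the two modes decay.
```

During training the same recurrence can be unrolled as a convolution or a parallel scan; the element-wise streaming form is what maps naturally onto event-driven hardware.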
16:30‑17:30 (60 min) | Open mic / discussion - day III speakers | ||
17:30‑17:35 (5+5 min) | Goodbye | Johannes Schemmel (Heidelberg University) | |
18:00 | End of day 3 and of the talk-days of NICE 2025 |
Friday, 28 March 2025 | |||
09:00 | NICE 2025 - tutorial day Tutorials will be offered in three slots, with several tutorials running in parallel. Please see below for descriptions of the offered tutorials. | ||
Tutorial: Accelerated Neuromorphic Computing on BrainScaleS show presentation.pdf (public accessible) In this tutorial, participants have the chance to explore BrainScaleS-2, one of the world’s most advanced analog platforms for neuromorphic computing. BrainScaleS-2 has primarily been designed to serve as a versatile computational substrate for the emulation of spiking neural networks. As such, each ASIC integrates 512 analog neuron circuits implementing the rich dynamics of the adaptive exponential leaky integrate-and-fire (AdEx) model. Each neuron receives input from 256 current- or conductance-based synapses with configurable sign and weight. Multi-compartment extensions allow the formation of complex, spatially distributed, dendritic trees with active processing elements. Integrating thousands of ADC and DAC channels as well as two custom microprocessors with SIMD extensions, each ASIC represents a software-controlled analog computer that can be configured and probed at will. For the tutorial, participants will use a web browser on their own laptop for remote access to BrainScaleS-2 systems via the EBRAINS Research Infrastructure. After a short introduction to neuromorphic computing and spiking neural networks, they will learn how to express and run experiments on the neuromorphic platform through either the (machine-learning targeting) PyTorch- or the (neuroscience targeting) PyNN-based software interfaces. This will allow them to gain insights into the unique properties and challenges of analog computing and to exploit the versatility of the system by exploring user-defined learning rules. Each participant will have the opportunity to follow a prepared tutorial or branch off and implement their own project on the systems. Participants can use their EBRAINS account (available free of charge at https://ebrains.eu/register) or use a guest account during the tutorial.
With their own account, participants can continue using the neuromorphic compute systems after the end of the tutorial. | Amani Atoui (Heidelberg University) | |
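The AdEx dynamics emulated by each BrainScaleS-2 neuron circuit can be prototyped numerically in a few lines. This is a forward-Euler sketch with generic textbook-style parameters, unrelated to the hardware's calibration or its PyNN/PyTorch software interfaces.

```python
import math

def simulate_adex(I, t_end=200.0, dt=0.01):
    """Forward-Euler integration of the adaptive exponential integrate-and-fire
    (AdEx) model:
        C dV/dt   = -gL*(V - EL) + gL*dT*exp((V - VT)/dT) - w + I
        tau_w dw/dt = a*(V - EL) - w
    with reset V -> Vr and w -> w + b on each spike.
    Units: mV, ms, nS, pF, pA. Parameter values are generic examples."""
    C, gL, EL, VT, dT = 200.0, 10.0, -70.0, -50.0, 2.0
    a, tau_w, b, Vr, Vpeak = 2.0, 30.0, 60.0, -58.0, 0.0
    V, w, spikes, t = EL, 0.0, [], 0.0
    while t < t_end:
        dV = (-gL * (V - EL) + gL * dT * math.exp((V - VT) / dT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:                 # spike: reset membrane, bump adaptation
            spikes.append(t)
            V, w = Vr, w + b
        t += dt
    return spikes

spikes = simulate_adex(I=500.0)        # a constant suprathreshold current
```

Sweeping `I`, `a`, and `b` in such a prototype reproduces the classic AdEx firing patterns (tonic, adapting, bursting) that the analog circuits emulate in accelerated time.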
Tutorial: Development and Deployment of SNNs on FPGA for Embedded Applications Update to our attendees: To follow and execute the tutorial on your own machine, Docker is required. You can now pull the image ahead of time from Docker Hub:
This tutorial presents an in-depth introduction to a many-core near-memory-computing Spiking Neural Network (SNN) FPGA accelerator developed at the FZI Research Center for Information Technology. The accelerator is designed for embedded sensor processing applications in medical, industrial, and automotive contexts, with a focus on dataset evaluation and real-time processing of high data rate neuromorphic sensors. The hardware architecture is based on a pipelined SNN processing core, and the tutorial will delve into the numerous co-design decisions made to optimize its performance and versatility. Participants will gain insights into critical concepts such as quantization, the mapping of logical neurons onto physical processing elements (PEs), and the accelerator’s integration within a System-on-Chip (SoC) FPGA context running Linux on classical processors. The tutorial will also cover the current (work-in-progress) feature set of the accelerator and provide hands-on experience in developing and deploying SNNs using our toolchain. The accelerator is intended to be open-sourced to the neuromorphic community upon reaching maturity in its development and deployment framework. In the interim, this tutorial aims to gather valuable feedback from potential users, researchers, and experts in neuromorphic hardware implementation to refine and enhance the accelerator's capabilities. Necessary Background
Tutorial Materials
Tutorial Content
Project Contributors: The Neuromorphs of Group Becker at Karlsruhe Institute of Technology:
| Brian Pachideh (FZI Research Center for Information Technology), Sven Nitzsche (FZI Forschungszentrum Informatik) | ||
Tutorial: NEST Simulator as a neuromorphic prototyping platform In the design of neuromorphic systems, it is vital to have a flexible and highly performant way of exploring system parameters. Using NEST Simulator [1] and the NESTML modeling language [2], spiking neural network models can be quickly prototyped and subjected to design constraints that mirror those of the intended neuromorphic platform. NEST has a proven track record on a large and diverse set of use cases and can run anywhere from laptops to supercomputers, making it an ideal prototyping and research platform for neuromorphic systems. This benefits reproducibility (obtaining the same numerical results across platforms), highlighting the value of NEST in verification and validation of neuromorphic systems. In this tutorial, participants will get hands-on experience creating neuron and synapse models in NESTML, and using them to build networks in NEST that perform various tasks, such as sequence learning and reinforcement learning. We will introduce several tools and front-ends to implement modeling ideas most effectively, such as the graphical user interface NEST Desktop [3]. Through the use of target-specific code generation options in NESTML, the same model can even be directly run on neuromorphic platforms. Participants do not have to install software as all tools are accessible via the cloud. All parts of the tutorial are hands-on, and take place via Jupyter notebooks.
| Dennis Terhorst (IAS-6, Forschungszentrum Jülich), Charl Linssen (Jülich Research Centre) | ||
Tutorial: NeuroBench show presentation.pdf (public accessible) Benchmarking is an essential component of research which involves measuring and comparing approaches in order to evaluate improvements and demonstrate objective benefits. Essentially, it aims to answer the questions - “How much better are my approaches now, and how can I make them even better next?” NeuroBench is a community-driven initiative towards providing a standardized framework for benchmarking neuromorphic solutions, unifying the field with straightforward, well-defined, and reproducible benchmark measurement. NeuroBench offers common tools and methodology that apply broadly across different models, tasks, and scenarios, allowing for comprehensive insights into the correctness and costs of execution. Recently, it was used to compare and score accurate, tiny-compute sequence models in the BioCAS 2024 Neural Decoding Grand Challenge. In this tutorial, we provide a hands-on guide to using the open-source NeuroBench harness for profiling neuromorphic models, such as spiking neural networks and other efficiency-focused models. Participants will learn how to benchmark models, extracting meaningful metrics in order to have a comprehensive understanding of the cost profile associated with model execution. We will show how the harness interfaces can be used to connect with other popular software libraries and how users can easily extend the harness with their own custom tasks and metrics of interest, which will provide the most relevant information for their research. The hands-on examples will be offered through Python notebooks. Please bring your own laptop. | Jason Yik (Harvard) | ||
Tutorial: Neuromorphic Control for Autonomous Driving This tutorial is based on three of our recent publications:
Autonomous driving is one of the hallmarks of artificial intelligence. Neuromorphic control is poised to significantly contribute to autonomous behavior by leveraging spiking neural network-based energy-efficient computational frameworks. In this tutorial, we will explore neuromorphic implementations of four prominent controllers for autonomous driving: pure-pursuit, Stanley, PID, and MPC, using a physics-aware simulation framework (CARLA). We will showcase these controllers with various vehicle models (from a Tesla Model 3 to an ambulance) and compare their performance with conventional CPU-based implementations. While being neural approximations, we will demonstrate how neuromorphic models can perform competitively with their conventional counterparts. In particular, we will show that neuromorphic models can converge to their optimal performance with merely 100–1,000 neurons while providing state-of-the-art response dynamics to unforeseen situations. For example, we will showcase realistic driving scenarios in which vehicles experience malfunctioning and swift steering scenarios. We will demonstrate significant improvements in dynamic error rate compared with traditional control implementations, with up to 89.15% median prediction error reduction with 5 spiking neurons and up to 96.08% with 5,000 neurons. In this tutorial, we will provide guidelines for building neuromorphic architectures for control and describe the importance of their underlying tuning parameters and neuronal resources. We will also highlight the importance of hybrid - conventional and neuromorphic - designs, as well as highlight the limitations of neuromorphic implementations, particularly at higher speeds where they tend to degrade faster than conventional designs.
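Of the four controllers covered, PID is the simplest conventional reference point. The sketch below is a generic textbook discrete PID loop; the gains and the toy plant are invented and unrelated to the CARLA experiments, and a neuromorphic version would approximate the same input-output mapping with populations of spiking neurons.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy first-order plant (e.g. vehicle speed with drag) to 10.0.
pid = PID(kp=1.2, ki=0.4, kd=0.05, dt=0.1)
speed, dt = 0.0, 0.1
for _ in range(200):
    u = pid.step(10.0, speed)
    speed += dt * (u - 0.5 * speed)   # invented dynamics: throttle minus drag
# After 20 simulated seconds, speed has settled near the setpoint.
```

The integral term is what removes the steady-state error the drag would otherwise cause; a pure proportional controller would settle below the setpoint.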
Tutorial: Running SNNs on SpiNNaker show presentation.pdf (public accessible) SpiNNaker is a highly programmable neuromorphic platform, designed to simulate large spiking neural networks in real-time. It uses many conventional low-power ARM processors executing customizable software in parallel, coupled with a specialized multicast network enabling the transmission of many spikes to multiple target neurons. This tutorial will give an introduction on running SNNs on SpiNNaker using the PyNN language. Users will have a chance to run SNNs on the SpiNNaker hardware directly through a Jupyter notebook interface. | Andrew Rowley (U Manchester) | ||
Tutorial: SpiNNaker2 Tutorial: Beyond Neural Simulation show presentation.pdf (public accessible) SpiNNaker2 is a scalable many-core architecture for flexible neuromorphic computing. It combines low-power ARM cores and dedicated accelerators for deep neural networks with a scalable, event-based communication infrastructure. This unique combination makes it possible to explore a wide range of applications on SpiNNaker2, including spiking neural network simulation, deep neural networks, hybrid neural networks, as well as other event-based algorithms. This tutorial complements the planned PyNN tutorial for SpiNNaker by the University of Manchester and focuses on applications that go beyond neural simulation and make use of SpiNNaker2’s features. We will bring single-chip SpiNNaker2 boards and offer remote access to 48-chip server boards. The first part of the tutorial will focus on deploying deep SNNs on SpiNNaker2 using the neuromorphic intermediate representation. In the second part, we will showcase examples of our generic compute and/or deep learning software stacks. | Bernhard Vogginger (TU Dresden), Florian Feiler (SpiNNcloud Systems GmbH), Mahmoud Akl (SpiNNcloud Systems) | |
09:00 | Tutorials (The tutorials described above will be distributed into the three available tutorial slots - with several of the tutorials running in parallel) | ||
09:00‑11:00 (120 min) | Tutorial slot I | ||
11:00‑11:30 (30 min) | Coffee break | ||
11:30‑13:30 (120 min) | Tutorial slot II | ||
13:30‑14:15 (45 min) | Lunch | ||
14:15‑16:15 (120 min) | Tutorial slot III | ||
16:15 | End of NICE 2025 |