PRELIMINARY list of talks and tutorials at NICE 2026

Timing of the days:
  • Start at 8:30h
  • End at 17:00h on Tuesday (24 March), 16:30h on Wednesday (25 March) – Friday (27 March)
  • The tutorial day (Friday) will take place in a different venue than the conference (in the same general area)

Invited talk: Infrastructure for Neuromorphic System Design

There are many intriguing proposals for next-generation efficient computing substrates, and biologically inspired neuromorphic platforms are one such alternative. We have previously participated in the development of asynchronous hardware for a number of neuromorphic substrates, but compelling evidence for neuromorphic systems is still absent. We discuss the current state of our effort in creating both software and hardware infrastructure for the co-design of algorithms and hardware for neuromorphic systems.

Rajit Manohar (Yale) – 25 min
Invited talk: REM-like Consolidation: Same Performance, Sparser Representations

Maxim Bazhenov (University of California, San Diego) – 25 min
Invited talk: Training SNNs with exact gradients: Progress and Challenges

In 2021 Wunderlich and Pehle published the “Eventprop” method of using the adjoint method to calculate exact gradients in spiking neural networks (SNNs) of leaky integrate-and-fire (LIF) neurons, and used the method to solve diagnostic machine learning benchmarks such as the YinYang dataset and latency MNIST. Soon after, we were able to show that this work could be extended to problems such as keyword recognition on the SHD and SSC benchmarks, and to networks with delays, including learning the delays themselves. The trained networks run on Intel’s Loihi with minimal accuracy loss and considerable energy savings.

Pehle’s PhD thesis also contains more general equations for any hybrid SNN models. Based on this, we have used Python’s sympy symbolic math package to implement an “auto-adjoint” method in our mlGeNN software, not unlike the powerful “autodiff” methods in PyTorch, TensorFlow or JAX. This allows users to define neuron and synapse models of their choice, which mlGeNN automatically turns into equations and code for gradient descent on the chosen model. However, there are issues that mean that adjoint learning in SNNs is not yet as plug and play as autodiff-based error backpropagation in time in artificial neural networks.
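
The auto-adjoint idea can be sketched symbolically. The snippet below is an illustration only, not the mlGeNN implementation: it derives the adjoint dynamics for a single LIF membrane equation with sympy, using the fact that the costate obeys dλ/dt = −(∂f/∂V)·λ for free dynamics dV/dt = f(V).

```python
import sympy as sp

# Free (between-spike) LIF dynamics: dV/dt = f(V) = (-V + I) / tau
t = sp.symbols('t')
V = sp.Function('V')(t)
tau, I = sp.symbols('tau I', positive=True)
f = (-V + I) / tau

# The adjoint (costate) lam satisfies d(lam)/dt = -(df/dV) * lam,
# integrated backwards in time between spike events.
lam = sp.Function('lam')(t)
adjoint_rhs = -sp.diff(f, V) * lam

print(sp.simplify(adjoint_rhs))  # -> lam(t)/tau
```

For more complex hybrid models, the same symbolic differentiation extends to each state variable and to the jump conditions at spike times, which is where the practical challenges discussed in the talk arise.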

In this talk, I will briefly introduce the overall method and will then discuss the main advances and the challenges for a wider adoption of this technology, drawing on examples from our recent works.

Thomas Nowotny (University of Sussex) – 25 min
A Compute and Communication Runtime Model for Loihi 2

Jonathan Timcheck, Alessandro Pierro and Sumit Bam Shrestha

Neuromorphic hardware promises major gains in speed and energy efficiency, but designing fast kernels is hindered by the lack of simple, predictive performance models. In this talk, we present the first max‑affine (multi‑dimensional roofline) runtime model for Intel’s Loihi 2 that captures both compute and on‑chip communication costs. Calibrated using microbenchmarks, the model shows a tight match to measured runtimes (≥ 0.97 Pearson correlation) for matrix‑vector multiplication and a QUBO solver. We further use the model to analyze communication bottlenecks and scalability, revealing clear area–runtime tradeoffs. This work enables principled, performance‑driven algorithm design on Loihi 2.
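
As a toy illustration of the max-affine idea (the coefficients below are made up, not Loihi 2 calibration data): predicted runtime is the maximum over several affine cost planes, and whichever plane attains the maximum names the binding bottleneck.

```python
import numpy as np

def max_affine_runtime(features, planes):
    """Predict runtime as the max over affine cost planes.

    features: (n_samples, d) workload descriptors, e.g. [ops, bytes_moved]
    planes:   list of (weights, intercept) pairs, one per bottleneck
    """
    costs = np.stack([features @ w + b for w, b in planes], axis=0)
    return costs.max(axis=0)  # the binding bottleneck dominates

# Hypothetical calibration: a compute plane and a communication plane.
planes = [
    (np.array([2e-9, 0.0]), 1e-6),   # compute: 2 ns per op + 1 us overhead
    (np.array([0.0, 5e-10]), 2e-6),  # communication: 0.5 ns per byte + 2 us
]
X = np.array([[1e6, 1e3],   # compute-bound workload
              [1e3, 1e7]])  # communication-bound workload
print(max_affine_runtime(X, planes))
```

Calibrating such a model against microbenchmarks then reduces to fitting the plane coefficients to measured runtimes.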

Jonathan Timcheck (Intel) – 25 min
A principled procedure for designing brain-derived SWaP-optimized neuronal units for low-power neuromorphic analog computation and digital communication


Chad Harper (UC Berkeley) – 10 min
Amortized Inference of Neuron Parameters on Analog Neuromorphic Hardware

Jakob Kaiser, Eric Müller and Johannes Schemmel

Mapping experimental observations to parameters of analog neuromorphic hardware is often labor-intensive. We apply amortized simulation-based inference to infer parameters of an adaptive exponential integrate-and-fire neuron implemented on the BrainScaleS-2 system. Using membrane potential responses to step current stimulation, we estimate posterior distributions for seven model parameters. We compare different approaches for extracting summary statistics from membrane traces and evaluate their impact on inference performance. Our results demonstrate that amortized SBI is a promising tool for parameterizing analog neuron circuits.

Jakob Kaiser (Institute of Computer Engineering, Heidelberg University, Germany) – 10 min
Critical Spike Attribution for Feature Importance in Spiking Neural Networks

Jack Klawitter and Konstantinos P. Michmizos

Spiking neural networks (SNNs) have emerged as a compelling alternative to standard artificial neural networks (ANNs) due to their biological plausibility and energy efficiency. However, while feature attribution methods are well established for ANNs, comparable techniques for SNNs remain underdeveloped. Here, I introduce Critical Spike Attribution (CSA), a causal attribution method that identifies which presynaptic spikes were functionally necessary for producing downstream firing. CSA provides temporally resolved explanations and is compatible with any training method. CSA is evaluated on both the Spiking Heidelberg Digits dataset and a real-world neural decoding task. Across both settings, CSA more accurately identifies causal features while producing sparser explanations with minimal trade-off in completeness.

Jack Klawitter (Rutgers University) – 25 min
DARWIN: Hardware Efficient Analog In-Memory Computing Using Dendritic ARborized Weights In Neural Networks

Ming-Jay Yang, Dimitrios Spithouris, Johannes Hellwig, Pascal Nieters, Regina Dittmann, Gordon Pipa and John Paul Strachan

In-memory computing (IMC) with memristive cross-bar arrays offers a promising solution to the energy and latency limitations of conventional von Neumann architectures. However, most existing IMC systems rely on perceptron-based neurons, limiting computational flexibility and requiring large, dense networks to achieve competitive performance. Inspired by the nonlinear processing of biological dendrites, this work introduces a hardware-efficient IMC architecture, “DARWIN”, that integrates nonvolatile memristive devices as synaptic weights arranged as a tree with volatile memristive devices functioning as integration filters. To further optimize hardware utilization, we incorporate learnable sparsity, enabling the network to automatically discover a compact synaptic pattern within a dendritic tree tailored to the underlying device characteristics. Experimental results demonstrate that the proposed architecture achieves more than two orders of magnitude lower memory footprint and at least an order of magnitude lower power consumption than other in-memory computing architectures. These findings highlight the potential of combining dendritic computation, heterogeneous memristive technologies, and sparsity-aware learning to advance scalable and bio-inspired in-memory computing hardware.

Ming-Jay Yang (Forschungszentrum Jülich, Germany) – 25 min
Dynamic Heuristic Neuromorphic Solver for the Edge User Allocation Problem with Bayesian Confidence Propagation Neural Network

Kecheng Zhang, Anders Lansner, Ahsan Javed Awan, Naresh Balaji Ravichandran and Pawel Herman

We propose a neuromorphic solver for the NP-hard Edge User Allocation problem using an attractor network with Winner-Takes-All (WTA) mechanism implemented with the Bayesian Confidence Propagation Neural Network (BCPNN) framework. Unlike previous energy-based attractor networks, our solver uses dynamic heuristic biasing to guide allocations in real time and introduces a “no allocation” state to each WTA motif, achieving near-optimal performance with an empirically upper-bounded number of time steps. The approach is compatible with neuromorphic architectures and may offer improvements in energy efficiency.
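
A minimal sketch of a WTA motif extended with a "no allocation" state (illustrative only; the actual solver uses BCPNN attractor dynamics, and the bias values here are hypothetical):

```python
import numpy as np

def wta_allocate(server_utilities, no_alloc_bias=0.0):
    """Pick the winning edge server for one user, or decline allocation.

    server_utilities: 1-D array of support values for each edge server.
    A virtual last unit represents the 'no allocation' state; if it wins,
    the user stays unallocated rather than overloading a server.
    """
    support = np.append(server_utilities, no_alloc_bias)
    winner = int(np.argmax(support))
    return None if winner == len(server_utilities) else winner

# A user with weak support everywhere falls into the no-allocation state.
print(wta_allocate(np.array([0.1, 0.2]), no_alloc_bias=0.5))  # -> None
print(wta_allocate(np.array([0.9, 0.2]), no_alloc_bias=0.5))  # -> 0
```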

Ahsan Javed Awan (Ericsson) – 10 min
Fully Spiking Linear Quadratic Regulator Control via a Neuromorphic Solver for the Continuous Algebraic Riccati Equation

Graeme Damberger, Omar Alejandro Garcia Alcantara, Eduardo S. Espinoza, Luis Rodolfo Garcia Carrillo, Terrence C. Stewart and Chris Eliasmith

Graeme Damberger (University of Waterloo) – 25 min
Graph Reservoir Networks for Prediction of Spatiotemporal Systems

William Chapman, Darby Smith, Corinne Teeter and Nicole Jackson

Predicting large-scale spatiotemporal data poses significant challenges for traditional machine learning methods, particularly in high-performance and edge-based computing environments. Graph neural networks (GNNs) have emerged as powerful tools for modeling structured data, while reservoir computing (RC) offers efficient and state-of-the-art solutions for predicting low-rank dynamical systems. In this work, we propose a novel approach that integrates the global structured modeling capabilities of GNNs with the efficient training and predictive strengths of RC. By combining local random connectivity with global domain-informed structure, the proposed graph reservoir networks leverage both random and model-based information to accurately predict large-scale spatiotemporal dynamics. Furthermore, we demonstrate the robustness of these networks to neuromorphic hardware constraints, including temporally sparse spiking activity, quantized weights, limited fan-in, and local learning rules. These results highlight the utility of integrating reservoir computing with domain-informed inductive biases, paving the way for scalable and efficient solutions to complex dynamical systems in diverse applications.
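
The training efficiency that reservoir computing contributes can be sketched with a miniature echo-state network, where only the linear readout is fit by ridge regression. Random connectivity here stands in for the paper's graph-informed structure; all sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500

# Random sparse reservoir, rescaled to spectral radius < 1 for stability.
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=N)

u = np.sin(np.linspace(0, 20, T + 1))   # toy input signal
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):                       # leaky-tanh reservoir update
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Only the linear readout is trained (ridge regression), here to predict
# the next input sample -- the key efficiency of reservoir computing.
target = u[1:T + 1]
ridge = 1e-6
w_out = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                        states.T @ target)
pred = states @ w_out
print(np.mean((pred - target) ** 2))     # small training error
```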

William Chapman (Sandia National Laboratories) – 25 min
Hardware-Algorithm Co-design for On-Chip SNN: SOLO(Spatial Online Learning at Once) with analog flash device

Sungmin Lee

Spiking Neural Networks (SNNs) have emerged as a promising energy-efficient alternative to conventional deep learning models. However, realizing truly energy-efficient artificial intelligence requires a synergistic approach that optimizes both the learning algorithm and the underlying hardware architecture. In this talk, we present a hardware-algorithm co-design that integrates the SOLO (Spatial Online Learning at Once) algorithm with a novel analog flash device. Specifically, SOLO overcomes the high latency and memory costs of traditional BPTT, enabling highly efficient, noise-robust on-chip training of deep SNNs. By leveraging co-integrated flash devices to model synapses, neurons, and surrogate gradients, we demonstrate a highly efficient platform for analog in-memory computing (AIMC).

Sungmin Lee (Seoul National University) – 10 min
Intrinsic Numerical Robustness and Fault Tolerance in a Neuromorphic Algorithm for Scientific Computing

Bradley H. Theilman and James B. Aimone

The potential for neuromorphic computing to provide intrinsic fault tolerance has long been speculated, but demonstrations of brain-like robustness in neuromorphic applications are rare. Here, we show that a previously described, natively spiking neuromorphic algorithm for solving partial differential equations is intrinsically tolerant to structural perturbations in the form of ablated neurons and dropped spikes. The tolerance band for these perturbations is large: we find that as many as 32% of the neurons and up to 90% of the spikes may be entirely dropped before a significant degradation in accuracy results. Furthermore, this robustness is tunable through structural hyperparameters. This work demonstrates that the brain-like inspiration behind the algorithm contributes a significant degree of robustness, as expected of brain-like neuromorphic algorithms.

Bradley Theilman (Sandia National Laboratories) – 25 min
Late breaking news: Bridging Neuromorphic and Traditional Computing Performance: An Information-Theoretic Approach

Max Hawkins, Richard Vuduc

Neuromorphic computers challenge traditional definitions of computing performance by prioritizing characteristics like sparsity and noise tolerance, which are often ignored by conventional metrics. As a result, the community lacks a fair method of comparing computational work performed by sparse, noisy neurons against dense, deterministic arithmetic logic units (ALUs). We propose a performance framework grounded in information theory that models any computational unit as a channel, transforming and retaining information from inputs (X) to outputs (Y). By defining computing ‘work’ as the mutual information, I(X; Y), between input and output states, we provide an implementation-agnostic framework for computational performance. This framework resolves conflicting definitions of work in the presence of sparsity by distinguishing between structural sparsity and activation sparsity. It also naturally accounts for noise and accommodates diverse encodings (e.g., fixed-width binary or spike trains) to offer a ‘common currency’ for evaluating innovation across diverse computing paradigms.
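
The proposed 'work' metric can be made concrete on the simplest noisy compute unit, a binary symmetric channel. The sketch below is a generic mutual-information calculation, not the authors' framework:

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def bsc_joint(flip_prob):
    """Joint distribution of a binary symmetric channel, uniform input."""
    p = flip_prob
    return 0.5 * np.array([[1 - p, p], [p, 1 - p]])

# A noiseless unit retains 1 bit per symbol; noise erodes the 'work' done.
print(mutual_information(bsc_joint(0.0)))   # -> 1.0
print(mutual_information(bsc_joint(0.5)))   # -> 0.0 (pure noise)
print(mutual_information(bsc_joint(0.1)))   # ~0.53 bits
```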

Max Hawkins (Georgia Institute of Technology) – 10 min
Late breaking news: NOVA: Real-Time Visualization and Streaming for Neuromorphic Event Cameras

Andrew Lin, Daniel Querrey, Eric McGonagle, John Langs, Nai Yun Wu, John Ho, Nick Almeter, Matthew Fisher, Utsawb Lamichhane, Praket Desai, David Mascarenas, Tracy Hammond; Texas A&M University & Los Alamos

Neuromorphic (event-based) cameras capture visual information as asynchronous streams of per-pixel intensity changes, offering high temporal resolution and reduced redundancy compared to frame-based sensors. Despite these advantages, the sparse and asynchronous nature of event data makes real-time inspection and interpretation challenging, limiting effective use during development, debugging, and onboarding. This paper presents NOVA (Neuromorphic Optics and Visualization Application), an integrated software system for real-time visualization, streaming, and interaction with neuromorphic event data. NOVA provides a GPU-accelerated pipeline supporting both live camera input and offline datasets, with native compatibility for modern formats and sustained high-throughput operation. A central feature is a 3D spatiotemporal visualization that represents spatial position and time simultaneously, enabling users to observe motion patterns and temporal structure that are difficult to perceive in conventional 2D views. The system also incorporates Digital Coded Exposure reconstruction to generate frame-like representations alongside raw event streams, supporting intuitive interpretation. Designed with a modular architecture and task-oriented interface, NOVA emphasizes performance, extensibility, and usability across research, development, and educational contexts. Deployment experiences and early user observations suggest that enhanced visualization and interactive temporal controls improve users’ ability to reason about event data and accelerate exploratory workflows. NOVA demonstrates how human-centered tooling can reduce barriers to working with neuromorphic sensors and provides a foundation for future research on visualization and interpretation of asynchronous visual data.

David Mascarenas (Los Alamos) – 10 min
Memory Trade-Offs in Neuromorphic Communication Strategies of the FlyWire Connectome on Loihi 2

Felix Wang, Bradley Theilman, Fred Rothganger, William Severa, Craig Vineyard and James Aimone

Felix Wang (Sandia National Laboratories) – 25 min
Multi-Timescale Conductance Spiking Networks: A Sparse, Gradient-Trainable Framework with Rich Firing Dynamics for Enhanced Temporal Processing

Alex Fulleda-Garcia, Saray Soldado-Magraner and Josep Maria Margarit-Taulé

Spiking neural networks (SNNs) enable low-power, event-driven processing, but common neuron models often trade off gradient-based training, dynamical richness, and spike sparsity, limiting their quality-energy performance on temporal tasks. We present a gradient-trainable multi-timescale spiking-neuron framework that provides controllable firing behavior within a single model and is amenable to efficient neuromorphic realization. Feedforward networks are evaluated on long-horizon chaotic Mackey–Glass time-series regression against leaky integrate-and-fire (LIF) and adaptive LIF baselines. Our approach outperforms both LIF and adaptive LIF accuracies while yielding substantially sparser spiking from both communication and compute perspectives.

Alex Fulleda-Garcia (IMB-CNM-CSIC, Spain) – 10 min
NeuroCoreX: An Open-Source FPGA-Based Spiking Neural Network Emulator with On-Chip Learning

Ashish Gautam, Prasanna Date, Shruti Kulkarni, Ian Mulet, Kevin Zhu, Robert Patton and Thomas Potok

Spiking Neural Networks (SNNs) are computational models inspired by the event-driven communication and connectivity patterns of biological neural circuits. They enable high energy efficiency and natural support for diverse architectures ranging from layered networks to small-world and graph-structured topologies. In this work, we introduce NeuroCoreX, an open-source, FPGA-based spiking neural network emulator that provides real-time, on-chip learning and flexible network organization. NeuroCoreX supports both feedforward sensory inputs streamed directly from sensors or PCs via UART and recurrent on-chip connectivity, enabling simultaneous processing and learning from external stimuli and internal network dynamics—capabilities rarely available in existing FPGA SNN platforms. The system implements a Leaky Integrate-and-Fire (LIF) neuron model with current-based synapses and supports pair-based STDP learning on both feedforward and recurrent synapses. A lightweight Python interface enables interactive configuration, live monitoring, weight read-back, and experiment control. Importantly, NeuroCoreX is tightly integrated with the SuperNeuroMAT simulator, allowing SNN models to be transferred seamlessly from software to hardware for hardware-in-the-loop development. By combining real-time plasticity, flexible connectivity, and an open-source VHDL implementation, NeuroCoreX provides an extensible and accessible platform for neuromorphic research, algorithm–hardware co-design, and energy-efficient edge intelligence.
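
The pair-based STDP rule mentioned above has a standard exponential form; a sketch with hypothetical amplitudes and time constants (the NeuroCoreX parameters are not given in the abstract):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for spike-time difference dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

print(stdp_dw(5.0))    # potentiation, pre leads post by 5 ms
print(stdp_dw(-5.0))   # depression, post leads pre by 5 ms
```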

Ashish Gautam (Oak Ridge National Lab) – 10 min
Neuromorphic Eye Tracking for Low-Latency Pupil Detection

Paul Hueber, Luca Peres, Florian Pitters, Alejandro Gloriani and Oliver Rhodes

Oliver Rhodes (University of Manchester, United Kingdom) – 10 min
On the Status, Requirements, and Expectations of Neuro-Inspired High-Performance Computing

Johannes Gebert, Qifeng Pan, Lukas Stockmann, Hartwig Anzt and Christian Mayr

As of 2026, brain-inspired integrated circuits are an emerging high-performance computing (HPC) technology, seemingly ready for industrialization.

They have demonstrated support for AI through dedicated hardware, as well as for traditional algorithms commonly used in HPC. However, the community needs to determine whether these proofs of concept translate into readily usable HPC software. On the hardware side, we may have to cherry-pick the best-suited brain-inspired features and combine them with, e.g., sufficiently fast interconnects and programmability to actually outperform current HPC software and hardware stacks.

Performance and energy efficiency are two significant concerns driving capital and operational expenses, as well as returns on investment, and hence the deployment of HPC systems. However, they are not the only concerns: IT security, multi-tenancy, and spare parts are just a few examples that may seem mundane from a neuromorphic perspective, but are mandatory for HPC systems.

This paper presents the perspective of a high-performance computing center eager to adopt promising brain-inspired technologies and to serve the academic and industrial communities.

Johannes Gebert (High-Performance Computing Center Stuttgart (HLRS), Germany) – 10 min
Predicting Price Movements in High-Frequency Financial Data with Spiking Neural Networks

Brian Ezinwoke and Oliver Rhodes

Modern high-frequency trading environments are characterized by sudden price spikes that present both risk and opportunity, yet conventional financial models often struggle to capture the required fine temporal structure. Spiking Neural Networks (SNNs) offer a biologically inspired framework well-suited to these challenges due to their natural ability to process discrete events and preserve millisecond-scale timing. This work investigates the application of SNNs to high-frequency price-spike forecasting, enhancing performance via robust hyperparameter tuning with Bayesian Optimization. To address the inherent difficulty of tuning unsupervised Spike-Timing-Dependent Plasticity (STDP) for non-stationary data, we introduce a novel objective, Penalized Spike Accuracy (PSA), designed to align a network's predicted spike rate with empirical market event frequencies. By evaluating an extended architecture featuring explicit inhibitory competition and multiple temporal lags on microsecond-precision stock data, we demonstrate that PSA-optimized models achieve superior risk-adjusted returns when compared to both supervised backpropagation alternatives and standard accuracy-tuned benchmarks. These results suggest that task-specific tuning of spiking dynamics offers a viable methodology for training event-driven algorithms for financial time-series, inviting further exploration into the utility of unsupervised plasticity for high-dimensional temporal data.
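
The abstract does not spell out the PSA formula; one plausible shape for such a rate-alignment objective, stated purely as an assumption, is raw accuracy minus a penalty on the gap between the network's spike rate and the empirical event frequency:

```python
def penalized_spike_accuracy(accuracy, predicted_rate, market_rate, penalty=1.0):
    """Illustrative rate-aligned objective (assumed form, not the paper's).

    Rewards raw spike-prediction accuracy but penalizes networks whose
    overall spike rate drifts from the empirical market event frequency.
    """
    return accuracy - penalty * abs(predicted_rate - market_rate)

# A model that matches the market's event frequency scores higher than a
# slightly more accurate one that spikes far too often.
print(penalized_spike_accuracy(0.80, predicted_rate=0.05, market_rate=0.05))
print(penalized_spike_accuracy(0.83, predicted_rate=0.40, market_rate=0.05))
```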

Brian Ezinwoke (University College London) – 10 min
Predicting band-gap of Inorganic Materials using Neuromorphic Graph Learning

Ian Mulet, Derek Gobin, Ashish Gautam, Kevin Zhu, Prasanna Date, Shruti Kulkarni, Seung-Hwan Lim, Guojing Cong, Maryam Parsa, Thomas Potok and Catherine Schuman

Predicting properties of inorganic materials is a heavily researched topic, with several new prediction approaches emerging as competitors. One such competitor is graph neural networks, which leverage the structure of the graph to aid in the prediction process. In this work, we propose integration of neuromorphic computation into the graph neural network pipeline. We call this approach Neuromorphic Graph Learning (NGL). We utilize the NGL approach to leverage evolutionary algorithms and a novel Spike Pipeline for Raster Analysis (SPIRE) for the prediction of band gap in inorganic materials.

Ian Mulet (University of Tennessee Knoxville) – 10 min
Privacy-preserving fall detection at the edge using Sony IMX636 event-based vision sensor and Intel Loihi 2 neuromorphic processor

Lyes Khacef, Philipp Weidel, Susumu Hogyoku, Harry Liu, Claire Alexandra Bräuer, Shunsuke Koshino, Takeshi Oyakawa, Vincent Parret, Yoshitaka Miyatani, Mike Davies and Mathis Richter

Fall detection for elderly care using non-invasive vision-based systems remains an important yet unsolved problem. Driven by strict privacy requirements, inference must run at the edge of the vision sensor, demanding robust, real-time, and always-on perception under tight hardware constraints. To address these challenges, we propose a neuromorphic fall detection system that integrates the Sony IMX636 event-based vision sensor with the Intel Loihi 2 neuromorphic processor via a dedicated FPGA-based interface, leveraging the sparsity of event data together with near-memory asynchronous processing. Using a newly recorded dataset under diverse environmental conditions, we explore the design space of sparse neural networks deployable on a single Loihi 2 chip and analyze the tradeoffs between detection F1 score and computational cost. Notably, on the Pareto front, our LIF-based convolutional SNN with graded spikes achieves the highest computational efficiency, reaching a 55x synaptic operations sparsity for an F1 score of 58%. The LIF with graded spikes shows a gain of 6% in F1 score with 5x less operations compared to binary spikes. Furthermore, our MCUNet feature extractor with patched inference, combined with the S4D state space model, achieves the highest F1 score of 84% with a synaptic operations sparsity of 2x and a total power consumption of 90 mW on Loihi 2. Overall, our smart security camera proof-of-concept highlights the potential of integrating neuromorphic sensing and processing for edge AI applications where latency, energy consumption, and privacy are critical.

Lyes Khacef (Sony Advanced Visual Sensing) – 25 min
Quadratic Integrate-and-Fire Neurons as Differentiable Units for Scientific Machine Learning

Ruyin Wan, George Em Karniadakis, and Panos Stinis

Spiking neural networks (SNNs) offer energy-efficient computation but face fundamental challenges in continuous regression due to non-differentiable spike dynamics. In this talk, we present a fully differentiable spiking framework based on quadratic integrate-and-fire (QIF) neurons, where spikes are represented as phase-based transitions rather than discrete events. This formulation enables exact gradient computation via standard backpropagation and eliminates surrogate gradients. We demonstrate improved accuracy and stability performance on spiking regression and physics-informed learning tasks, highlighting the potential of differentiable SNNs for scientific machine learning.
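
The phase-based view corresponds to the well-known theta-neuron change of variables for the QIF model, under which a spike is a smooth crossing of θ = π rather than a hard reset. A small numerical sketch (illustrative, not the authors' framework):

```python
import numpy as np

def theta_neuron(I, theta0=0.0, dt=1e-3, steps=5000):
    """Euler-integrate the theta-neuron (phase) form of the QIF model.

    d(theta)/dt = (1 - cos theta) + (1 + cos theta) * I
    A 'spike' is the smooth passage of theta through pi (mod 2*pi):
    no reset, no discontinuity, so ordinary gradients flow through it.
    """
    spike_index = lambda th: int(np.floor((th - np.pi) / (2 * np.pi)))
    theta, spikes = theta0, 0
    for _ in range(steps):
        new = theta + dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * I)
        spikes += spike_index(new) - spike_index(theta)
        theta = new
    return theta, spikes

# Supra-threshold drive rotates the phase (spikes); sub-threshold drive
# settles to a fixed point and never crosses pi.
print(theta_neuron(I=1.0)[1])   # -> 2 spikes over the simulated window
print(theta_neuron(I=-0.5)[1])  # -> 0
```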

Ruyin Wan (Brown University) – 10 min
Real-time processing of analog signals on accelerated neuromorphic hardware

Yannik Stradmann, Johannes Schemmel, Mihai A. Petrovici and Laura Kriener

Sensory processing on neuromorphic hardware typically relies on event-based sensors or prior conversion of signals into spikes. In this talk, we present an alternative approach using the BrainScaleS-2 mixed-signal system: direct analog signal injection into on-chip neuron circuits enables efficient near-sensor processing without prior analog-to-digital or digital-to-analog conversion. Leveraging the platform’s 1000-fold acceleration factor, we implement a spiking neural network that transforms interaural time differences into a spatial representation to predict sound source location. We demonstrate a fully on-chip pipeline—from continuous microphone input, through accelerated neural processing, to actuator control—by localizing transient noise sources and aligning a servo motor in real time.

Yannik Stradmann (Institute of Computer Engineering, Heidelberg University, Germany) – 25 min
THOR Competition

Dhireesha Kudithipudi (UT San Antonio) – 20 min
The BrainScaleS-2 multi-chip system: Interconnecting continuous-time neuromorphic compute substrates

Joscha Ilmberger and Johannes Schemmel

The BrainScaleS-2 SoC integrates analog neuron and synapse circuits with digital periphery, including two CPUs with SIMD extensions. Each ASIC is connected to a Node-FPGA, providing experiment control and Ethernet connectivity. This work details the scaling of the compute substrate through FPGA-based interconnection via an additional Aggregator unit. The Aggregator provides up to 12 transceiver links to a backplane of Node-FPGAs, as well as 4 transceiver lanes for further extension. Two such interconnected backplanes are integrated into a standard 19-inch, 4U rack case together with an Ethernet switch, system controller and power supplies. Within each backplane, chip-to-chip latencies below 1.3 μs, consisting of four hops across three FPGAs, are achieved at all spike rates.

Joscha Ilmberger (Heidelberg University) – 25 min
Training Spiking Neural Networks on Multi-chip Analog Neuromorphic Hardware

Elias Arnold, Yannik Stradmann, Joscha Ilmberger, Eric Müller and Johannes Schemmel

We present the first demonstration of a four-chip BrainScaleS-2 neuromorphic system showcasing spiking neural network model inference and hardware-in-the-loop training. Building on several years of single-chip BrainScaleS-2 availability, the newly commissioned system uses a hierarchical communication fabric linking an initial set of four BrainScaleS-2 ASICs. We evaluate this architecture using a spiking neural network whose hidden layer is distributed across three chips while placing the output layer on a fourth chip, thus exceeding the resource limits of any previously available BrainScaleS-2 substrate. We report stable experiment execution and gradient-based training using surrogate gradients on the MNIST dataset. This talk introduces the BrainScaleS-2 multi-chip system as a platform for scalable analog neuromorphic computing.

Yannik Stradmann (Ruprecht-Karls-Universitaet Heidelberg), Joscha Ilmberger (Ruprecht-Karls-Universitaet Heidelberg) – 10 min
Training event-based neural networks with exact gradients via Differentiable ODE Solving in JAX

Lukas König, Manuel Kuhn, David Kappel and Anand Subramoney

Lukas König (University of Bielefeld) – 25 min
VS-Graph: Scalable and Efficient Graph Classification Using Hyperdimensional Computing

Hamed Poursiami, Shay Snyder, Guojing Cong, Thomas Potok and Maryam Parsa

Graph Neural Networks (GNNs) offer high accuracy but incur computational costs that limit scalability on edge devices. We present VS-Graph, a vector-symbolic framework that bridges the gap between the efficiency of Hyperdimensional Computing (HDC) and the expressive power of message passing. By introducing Spike Diffusion for topology-driven identification and Associative Message Passing in high-dimensional space, VS-Graph achieves accuracy competitive with modern GNNs without gradient-based optimization. Our results demonstrate a training speedup of up to 450× over GNNs and a 4–5% accuracy improvement over prior HDC baselines, while maintaining robustness even under aggressive dimensionality reduction.
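
For readers new to HDC, the primitive operations underlying frameworks like this one (random hypervectors, binding, bundling, similarity) can be sketched in a few lines. This is generic HDC, not the VS-Graph pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10000  # hypervector dimensionality

def hv():                       # random bipolar hypervector
    return rng.choice([-1, 1], size=D)

def bind(a, b):                 # binding: elementwise multiply
    return a * b

def bundle(*vs):                # bundling: majority vote (sign of sum)
    s = np.sum(vs, axis=0)
    return np.where(s >= 0, 1, -1)

def sim(a, b):                  # normalized dot-product similarity
    return float(a @ b) / D

node, role = hv(), hv()
edge = bind(node, role)             # bound pair is dissimilar to its parts
memory = bundle(node, role, hv())   # bundle stays similar to its parts

print(abs(sim(edge, node)) < 0.05)  # True: binding destroys similarity
print(sim(memory, node) > 0.3)      # True: bundling preserves it
```

The appeal for edge devices is that all three operations are elementwise and training-free, which is what enables the large speedups over gradient-trained GNNs.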

Hamed Poursiami (George Mason University) – 25 min
Vertical Floating-Gate Memristor for Highly Integrated and Energy-Efficient Two-Terminal Neuromorphic Computing

So Hyeon Park and Woo Jong Yu

So Hyeon Park (Department of Electrical and Computer Engineering, Sungkyunkwan University, Republic of Korea) – 10 min
δ Multiplexed Gradient Descent: Perturbative Learning with Astrocytes

Ryan O'Loughlin, Bakhrom Oripov, Nick Skuda, Noah Chongsiriwatana, Ian Whitehouse, Wolfgang Losert, Bradley Hayes, Adam McCaughan and Sonia Buckley

Ryan O'Loughlin (National Institute of Standards and Technology, and University of Colorado Boulder, United States) – 10 min
1st tutorial session

Tutorial: BrainScaleS hands-on tutorial
Jakob Kaiser (Ruprecht-Karls-Universitaet Heidelberg), Joscha Ilmberger (Ruprecht-Karls-Universitaet Heidelberg), Yannik Stradmann (Ruprecht-Karls-Universitaet Heidelberg)

Johannes Schemmel, Yannik Stradmann, Joscha Ilmberger, Jakob Kaiser and Björn Kindler from Heidelberg University (Germany)

In this tutorial, participants have the chance to explore BrainScaleS-2, one of the world’s most advanced analog platforms for neuromorphic computing. For the tutorial, participants will use a web browser on their own laptop for remote access to BrainScaleS-2 systems via the EBRAINS Research Infrastructure. After a short introduction to neuromorphic computing and spiking neural networks, they will learn how to express and run experiments on the neuromorphic platform through either the (machine-learning targeting) PyTorch- or the (neuroscience targeting) PyNN-based software interfaces. This will allow them to gain insights into the unique properties and challenges of analog computing and to exploit the versatility of the system by exploring user-defined learning rules. Each participant will have the opportunity to follow a prepared tutorial or branch off and implement their own project on the systems. Participants can use their EBRAINS account (available at https://ebrains.eu/register free of charge) or a guest account during the tutorial. With their own account, participants can continue using the neuromorphic compute systems after the tutorial ends.


Tutorial: N2A – neural programming language and workbench
Fred Rothganger (Sandia National Labs)

This tutorial will introduce the user to the N2A programming language and its associated IDE. Upon completion, the user will be able to create new neuron types, new applications, and run them on their local machine (or on SpiNNaker-2 if hardware is available). This will be a hands-on tutorial. N2A may be downloaded from https://github.com/sandialabs/n2a and run on your personal laptop.

Typically, neuromorphic machines execute a simple dynamical system called the Leaky Integrate-and-Fire (LIF) model, analogous to a logic gate in conventional machines, and communicate between these using single-bit events called “spikes”. However, neuromorphic device makers are moving away from simple LIF dynamics toward more general neurons. The SpiNNaker system has always been general-purpose programmable due to its use of ARM cores, and SpiNNaker-2 has the capacity to send up to four 32-bit floats with each event. Intel’s second-generation Loihi also supports graded spikes and assembly-level programming of neuron models. The future of neuromorphic computing will likely be neurons with complex dynamics combined with high-volume short-packet communication. Several frameworks exist for programming neuromorphic systems. The challenge is to enable general-purpose programming of neuron types while maintaining cross-platform portability. Remarkably, these are complementary goals: with an appropriate level of abstraction, it is possible to “write once, run anywhere”. N2A’s unique approach allows the user to specify the dynamics of each class of neuron by simply listing its equations. The tool then compiles these for a given target platform. The structure of the network and the interactions between neurons are specified in the same equation language. Network structures can be arbitrarily deep and complex. The language supports component creation, extension, reuse and sharing. The tool comes with a base library that supplies common neuroscience models as well as components for specific neuromorphic devices.
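The LIF dynamics mentioned above can be written in a few lines. Below is a minimal discrete-time sketch in plain Python; the function name and parameter values are illustrative only, not tied to N2A or to any particular platform:

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Discrete-time leaky integrate-and-fire: integrate input, leak
    toward rest, and emit a spike plus reset when threshold is crossed."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Euler step of dv/dt = -(v - v_rest)/tau + i_in
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:
            spikes.append(t)   # record spike time
            v = v_rest         # reset membrane potential
    return spikes

# Constant suprathreshold drive yields a regular spike train.
print(simulate_lif([0.1] * 100))
```

In an equation-oriented language like N2A, only the differential equation and the threshold/reset condition would be stated; the time-stepping loop is what the tool generates for the target platform.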

120 min
2nd tutorial session

Tutorial: Introduction to Fugu: A Framework for Composing Neural Algorithms
Tutorial: Simulation Tool for Asynchronous Cortical Streams (STACS)
Michael Krygier (Sandia National Laboratories)

In this tutorial, we will begin with an overview of the basic algorithm design in Fugu and typical workflows.

Fugu is an open-source, high-level Python programming framework designed for developing spiking algorithms in terms of computational graphs. It provides a hardware-independent mechanism for linking a variety of scalable spiking neural algorithms from different sources. Fugu is intended to be suitable for a wide range of neuromorphic applications, including machine learning, scientific computing, and more brain-inspired neural algorithms. Unlike other tools, Fugu separates the task of programming applications that may leverage neuromorphic hardware from the design of spiking neural algorithms and the specific details of neuromorphic hardware platforms. This allows users to focus on developing their applications without needing to be experts in neural computation or neuromorphic hardware. To design Fugu bricks and run these algorithms on hardware, users first construct a computational graph of their application using Fugu’s API. The API provides a simple and intuitive way to define the graph, and Fugu takes care of the rest, including automating the construction of a graphical intermediate representation of the spiking neural algorithm. The output of Fugu is a single NetworkX graph that fully describes the spiking neural algorithm, which can then be compiled down to platform-specific code using one of Fugu’s hardware backends or run on Fugu’s reference simulator. By providing a high-level abstraction and automating the process of constructing and compiling spiking neural algorithms, Fugu makes it easier for users to develop and deploy neuromorphic applications, and enables the exploration of new and innovative uses for neuromorphic computing.
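The brick-composition idea can be illustrated without the Fugu API itself. The sketch below is hypothetical: `Brick` and `compose` are stand-in names, not Fugu classes, and a plain dictionary stands in for the NetworkX graph that Fugu actually produces:

```python
class Brick:
    """Hypothetical stand-in for a Fugu brick: a reusable subgraph of
    neurons (nodes) and synapses (weighted edges)."""
    def __init__(self, name, neurons, synapses):
        self.name = name
        self.neurons = neurons    # list of neuron labels
        self.synapses = synapses  # (pre, post, weight) triples

def compose(bricks, links):
    """Merge bricks into one graph, namespacing neurons by brick and
    wiring bricks together via the inter-brick `links`."""
    nodes, edges = [], []
    for b in bricks:
        nodes += [f"{b.name}/{n}" for n in b.neurons]
        edges += [(f"{b.name}/{p}", f"{b.name}/{q}", w)
                  for p, q, w in b.synapses]
    edges += links  # inter-brick connections use fully qualified names
    return {"nodes": nodes, "edges": edges}

source = Brick("input", ["in0", "in1"], [])
logic = Brick("and", ["a", "b", "out"],
              [("a", "out", 0.5), ("b", "out", 0.5)])
graph = compose([source, logic],
                [("input/in0", "and/a", 1.0), ("input/in1", "and/b", 1.0)])
print(len(graph["nodes"]), len(graph["edges"]))
```

In Fugu proper, the composed graph would then be handed to a hardware backend or to the reference simulator; here it is only inspected.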

Felix Wang (Sandia National Laboratories)

In this tutorial, we will explore how to define networks and take advantage of the parallel capabilities of the spiking neural network (SNN) simulator STACS (Simulation Tool for Asynchronous Cortical Streams) 〈https://github.com/sandialabs/STACS〉. Developed to be parallel from the ground up, STACS leverages the highly portable Charm++ parallel programming framework, which expresses a paradigm of asynchronous, message-driven parallel objects, and supports both large-scale and long-running simulations on high-performance computing systems. In addition to the parallel runtime, STACS implements a memory-efficient distributed network data structure for network construction, simulation, and serialization. In particular, STACS uses a distributed intermediate representation, an SNN extension of the distributed compressed sparse row format, which supports interoperability with graph partitioners to facilitate optimizing communication costs across compute resources. Within the neuromorphic computing software ecosystem, this has enabled toolchains for mapping networks onto neuromorphic platforms such as Loihi 2 and SpiNNaker 2.
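The compressed sparse row (CSR) layout underlying STACS's network representation can be shown on a single process. This is a generic CSR sketch in plain Python, not STACS's actual (distributed) data structure:

```python
def to_csr(n, synapses):
    """Pack (pre, post, weight) triples into CSR arrays:
    row_ptr[i]..row_ptr[i+1] delimits the outgoing synapses of neuron i."""
    row_ptr, col, weight = [0] * (n + 1), [], []
    for pre, post, w in sorted(synapses):  # group by presynaptic neuron
        col.append(post)
        weight.append(w)
        row_ptr[pre + 1] += 1
    for i in range(n):
        row_ptr[i + 1] += row_ptr[i]       # prefix sum -> row offsets
    return row_ptr, col, weight

# 4 neurons; neuron 0 projects to 1 and 2, neuron 2 projects to 3.
row_ptr, col, weight = to_csr(4, [(0, 1, 0.5), (0, 2, 0.25), (2, 3, 1.0)])
print(row_ptr)  # row offsets
print(col)      # postsynaptic targets
```

In a distributed setting, each process would hold such arrays only for its own partition of the neurons, which is what makes the format amenable to graph partitioners.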

More info about the tutorial

120 min
3rd tutorial session

Tutorial: LiteVLA at the Edge: CPU-Only Vision–Language–Action Control as a Testbed for Neuro-Inspired Robotics
Tutorial: SuperNeuro + NeuroCoreX: Running Fast and Scalable Neuromorphic Simulations

Kishor Datta Gupta, Justin Williams and Mohd. Ariful Haque from Clark Atlanta University

This 2-hour tutorial presents LiteVLA, a lightweight vision–language–action pipeline that runs fully on a Raspberry Pi 4 / TurtleBot-class robot using only CPU resources. We show how its LoRA-adapted SmolVLM backbone and 4-bit NF4 quantization create a constrained, low-power control loop that mirrors many of the design pressures in neuromorphic and non-von-Neumann systems. Participants will walk through the full stack—RGB+action data collection, parameter-efficient fine-tuning, hybrid-precision quantization, and ROS 2 integration with asynchronous Action Chunking—then discuss how such edge VLA controllers can benchmark algorithms, latency, and robustness before porting them to neuromorphic hardware or event-driven sensors. The tutorial targets NICE attendees interested in robotics and edge AI applications of neuro-inspired computing.
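The 4-bit quantization step can be illustrated generically: per block of weights, store one scale factor plus a 4-bit index into a fixed table of 16 levels. The sketch below uses uniformly spaced levels for simplicity; real NF4 spaces its levels according to a normal distribution, so this is an approximation of the idea rather than the actual scheme:

```python
def quantize_4bit(weights, block=4):
    """Blockwise 4-bit quantization: per block, store an absmax scale
    and a 4-bit index into 16 uniform levels spanning [-1, 1]."""
    levels = [i / 7.5 - 1.0 for i in range(16)]  # 16 code levels
    out = []
    for start in range(0, len(weights), block):
        chunk = weights[start:start + block]
        scale = max(abs(w) for w in chunk) or 1.0
        idx = [min(range(16), key=lambda i: abs(levels[i] - w / scale))
               for w in chunk]                   # nearest-level index
        out.append((scale, idx))
    return out, levels

def dequantize(blocks, levels):
    """Reconstruct approximate weights from (scale, index) blocks."""
    return [scale * levels[i] for scale, idx in blocks for i in idx]

blocks, levels = quantize_4bit([0.9, -0.3, 0.05, -0.7])
print([round(x, 3) for x in dequantize(blocks, levels)])
```

The storage saving is the point: each weight shrinks to 4 bits plus a small per-block overhead, which is what makes CPU-only inference on a Raspberry Pi-class device plausible.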

Ashish Gautam (Oak Ridge National Lab), Xi Zhang (Oak Ridge National Laboratory), Prasanna Date (Oak Ridge National Laboratory)

Shruti R. Kulkarni, Ashish Gautam, Xi Zhang and Prasanna Date from Oak Ridge National Laboratory and Kevin Zhu from George Mason University Washington

A tutorial for two neuromorphic computing tools: SuperNeuro and NeuroCoreX.

SuperNeuro is a fast and scalable Python-based simulator for neuromorphic computing. It supports homogeneous and heterogeneous neuromorphic simulations as well as GPU acceleration. We will present a brief overview of SuperNeuro's two evaluation modes: the Matrix Computation (MAT) mode and the Agent-Based Modeling (ABM) mode. Users will be guided through the process of installing SuperNeuro, setting up their networks using the SuperNeuro API, defining connectivity within spiking neural networks (SNNs), and leveraging different hardware backends to accelerate the simulations. SuperNeuro is tightly integrated with NeuroCoreX, an FPGA-based neuromorphic hardware platform that enables seamless translation of simulated SNNs from software to hardware execution. This integration allows users to validate algorithmic concepts, learning rules, and timing dynamics in both simulated and physical environments, thereby promoting a unified neuromorphic co-design workflow. We will demonstrate how a network written in SuperNeuro can be run on NeuroCoreX seamlessly.
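The matrix-computation style of simulation can be illustrated with a single synchronous update step. This pure-Python sketch (not SuperNeuro's actual API) computes synaptic input as a weight-matrix product with the previous time step's spike vector:

```python
def step(v, spiked, weights, leak=0.9, v_thresh=1.0):
    """One synchronous update in matrix form: all membrane potentials
    advance together, driven by weights @ previous-spike-vector."""
    n = len(v)
    # synaptic input: matrix-vector product of weights with spike vector
    i_syn = [sum(weights[i][j] * spiked[j] for j in range(n))
             for i in range(n)]
    v_new = [leak * v[i] + i_syn[i] for i in range(n)]
    out = [1 if vi >= v_thresh else 0 for vi in v_new]     # threshold
    v_new = [0.0 if s else vi for vi, s in zip(v_new, out)]  # reset
    return v_new, out

# Two neurons: neuron 0 drives neuron 1 strongly enough to fire it.
w = [[0.0, 0.0],
     [1.2, 0.0]]
v, spikes = step([0.0, 0.0], [1, 0], w)
print(spikes)
```

Phrasing the update this way is what lets a MAT-style simulator hand the whole population step to dense or sparse linear-algebra kernels, including GPU ones, instead of iterating over neurons as individual agents.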

More info about the tutorial

120 min