
NICE 2026 - Agenda

Tuesday, 24 March 2026
08:30
NICE 2026 - day I

NICE 2026 takes place at the Historic Academy of Medicine, 875 W Peachtree Street NW, Atlanta, GA 30309, United States of America. See the venue page for details.

08:30‑09:00
(30 min)
 Registration and coffee
09:00
Session chair: Brad Aimone
09:00‑09:10
(10 min)
 Welcome to NICE 2026
Jennifer Hasler (Georgia Institute of Technology)
09:10‑09:55
(45 min)
 NICE organisers round (The NICE organisers)
09:55‑10:20
(25+5 min)
 Keynote: Infrastructure for Neuromorphic System Design
show presentation.pdf (publicly accessible)

There are many intriguing proposals for next-generation efficient computing substrates, and biologically inspired neuromorphic platforms are one such alternative. We have previously participated in the development of asynchronous hardware for a number of neuromorphic substrates, but compelling evidence for neuromorphic systems is still absent. We discuss the current state of our effort in creating both software and hardware infrastructure for the co-design of algorithms and hardware for neuromorphic systems.

Rajit Manohar (Yale)
10:25‑10:55
(30 min)
 Coffee break
10:55‑11:20
(25+5 min)
 Memory Trade-Offs in Neuromorphic Communication Strategies of the FlyWire Connectome on Loihi 2
show presentation.pdf (publicly accessible)

Felix Wang, Bradley Theilman, Fred Rothganger, William Severa, Craig Vineyard and James Aimone

Felix Wang (Sandia National Laboratories)
11:25‑11:50
(25+5 min)
 Local host talk: Advancing Neuromorphic Hardware using Recent Advancements in Analog Computing & Tools
show presentation.pdf (publicly accessible)
show talk video
Jennifer Hasler (Georgia Institute of Technology)
11:55
Poster teasers

1-minute-1-slide teasers for 9 selected posters

11:55
(1+1 min)
 Poster: Fuzzy Encoding-Decoding to Improve Spiking Q-Learning Performance in Autonomous Driving

Aref Ghoreishee, Abhishek Mishra, Lifeng Zhou, John MacLaren Walsh, Anup Das and Nagarajan Kandasamy

Aref Ghoreishee (Drexel University)
11:57
(1+1 min)
 Poster: SimScore: A Similarity Score for Spiking Neurons

Barnali Basak, Sounak Dey and Arpan Pal

Sounak Dey (Tata Consultancy Services Ltd)
11:59
(1+1 min)
 Poster: interneurOn: A Collapsing-Bound Decision Neuron for Neuromorphic Early Exit

Esra Genc, Zoran Utkovski, Johannes Dommel and Sławomir Stańczak

Zoran Utkovski (Fraunhofer HHI)
12:01
(1+1 min)
 Poster: NeuroHex: Highly-Efficient Hex Coordinate System for Creating World Models to Enable Adaptive AI

Quinn Jacobson, Joe Luo, Jingfei Xu, Shanmuga Venkatachalam, Kevin Wang, Josh Rong and John Paul Shen

Quinn Jacobson (Carnegie Mellon University)
12:03
(1+1 min)
 Poster: From Spikes to Swarms: Evolving Spiking Neural Networks to Create Emergent Swarm Behaviors

Kevin Zhu, Ricardo Vega, Maryam Parsa and Cameron Nowzari

Shay Snyder (George Mason University)
12:05
(1+1 min)
 Poster: Spiking Value Iteration for Solving Markov Decision Processes on Neuromorphic Hardware

Sarah Luca and Felix Wang

Sarah Luca (Sandia National Laboratories)
12:07
(1+1 min)
 Poster: Memory-Augmented Spiking Networks: Synergistic Integration of Complementary Mechanisms for Neuromorphic Vision

Blessing Effiong, Chiung-Yi Tseng, Isaac Nkrumah and Junaid Rehman

Blessing Effiong (Project Phasor / Saint Louis University)
12:09
(1+1 min)
 Poster: Generalized multi-object classification and tracking with sparse feature resonator networks.

Lazar Supic, Alec Mulen and Paxon Frady

Paxon Frady (Intel)
12:11
(1+1 min)
 Poster: NEUKRAG: NEUROMORPHIC KG–RAG WITH SMALL LLMS for Hardware–Algorithm Co-Design

Ramakrishnan Kannan, Ashish Gautam, Robert Patton, Nicholas Haas, Todd Thomas, James Aimone and Thomas Potok

Ramakrishnan Kannan (Oak Ridge National Laboratory)
12:15‑13:45
(90 min)
 Poster lunch

The first 19:

  • All On-Board: Fully On-Chip Neuromorphic Q-Learning with Embedded CartPole Simulation
  • Analyzing the Impact of Numerical Methods in Deep Spiking Neural Network
  • Assessing Procedurally Generated Spiking Networks for Large-Scale Simulations
  • AstroNet: Self-Modulation of Deep Feature Space Via Bottom-Up Saliency Descriptors
  • Autonomous Learning of Attractors for Neuromorphic Computing with Wien Bridge Oscillator Networks
  • Autonomous Reinforcement Learning Robot Control with Intel's Loihi 2 Neuromorphic Hardware
  • Biologically-plausible Shortest Path Algorithm on Real-World Graphs
  • Charge-Domain Leaky-Integrate-and-Fire Neuron with Tunable Parameters Using Ferroelectric Non-Volatile Capacitors
  • Deep Spiking Q-Networks for Turn-based Game Environments: Encoding Choices and Energy Trade-offs
  • Delays in Spiking Neural Networks: A State Space Model Approach
  • DERC: Distributed Edge Reservoir computation
  • Energy-Aware Spike Budgeting for Continual Learning in Spiking Neural Networks for Neuromorphic Vision
  • Evaluating CNN vs. CNN-to-SNN Execution on Neuromorphic Hardware: A Benchmark
  • From Astrocytes to novel algorithms: Multiplexed Gradient Descent and Rhythmic Sharing
  • From Spikes to Swarms: Evolving Spiking Neural Networks to Create Emergent Swarm Behaviors
  • FuseSNN : Spiking Attention-driven Multi-Sensor Fusion for Energy Efficient Fall Detection at Edge
  • Fuzzy Encoding-Decoding to Improve Spiking Q-Learning Performance in Autonomous Driving
  • Generalized multi-object classification and tracking with sparse feature resonator networks.
  • High-Speed Vision-Based Control with Neuromorphic Imagers
13:45
Session chair: Luke Hanks, Johannes Schemmel
13:45‑14:10
(25+5 min)
 A Compute and Communication Runtime Model for Loihi 2

Jonathan Timcheck, Alessandro Pierro and Sumit Bam Shrestha

Neuromorphic hardware promises major gains in speed and energy efficiency, but designing fast kernels is hindered by the lack of simple, predictive performance models. In this talk, we present the first max‑affine (multi‑dimensional roofline) runtime model for Intel’s Loihi 2 that captures both compute and on‑chip communication costs. Calibrated using microbenchmarks, the model shows a tight match to measured runtimes (≥ 0.97 Pearson correlation) for matrix‑vector multiplication and a QUBO solver. We further use the model to analyze communication bottlenecks and scalability, revealing clear area–runtime tradeoffs. This work enables principled, performance‑driven algorithm design on Loihi 2.

Jonathan Timcheck (Intel)
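The max-affine idea in the abstract can be sketched in a few lines (a toy model with made-up coefficients, not Intel's calibrated Loihi 2 model): runtime is predicted as the maximum over several affine cost terms, one per potential bottleneck.

```python
import numpy as np

def max_affine(theta, x):
    """Max-affine predictor: max over planes of (a_k . x + b_k).

    theta: (K, D+1) array, row k = [a_k (D slopes), b_k (offset)].
    x:     (N, D) array of workload features (e.g. ops, bytes moved).
    """
    return (x @ theta[:, :-1].T + theta[:, -1]).max(axis=1)

# Hypothetical two-term model: whichever cost dominates sets the runtime.
ops = np.linspace(1, 100, 50).reshape(-1, 1)
theta = np.array([[0.5, 10.0],   # compute term: 0.5 us/op + 10 us overhead
                  [2.0,  0.0]])  # communication term: 2.0 us/op
predicted = max_affine(theta, ops)

# Calibration quality would be judged by correlation with measurements.
rng = np.random.default_rng(0)
measured = predicted * rng.normal(1.0, 0.02, size=predicted.shape)
pearson = np.corrcoef(predicted, measured)[0, 1]
assert pearson > 0.97
```

The max of affine terms generalizes the classic roofline plot to several dimensions: each plane is one resource bound, and the binding constraint determines the predicted runtime.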
14:15‑14:25
(10+5 min)
 Quadratic Integrate-and-Fire Neurons as Differentiable Units for Scientific Machine Learning
show presentation.pdf (publicly accessible)
show talk video

Ruyin Wan, George Em Karniadakis, and Panos Stinis

Spiking neural networks (SNNs) offer energy-efficient computation but face fundamental challenges in continuous regression due to non-differentiable spike dynamics. In this talk, we present a fully differentiable spiking framework based on quadratic integrate-and-fire (QIF) neurons, where spikes are represented as phase-based transitions rather than discrete events. This formulation enables exact gradient computation via standard backpropagation and eliminates surrogate gradients. We demonstrate improved accuracy and stability performance on spiking regression and physics-informed learning tasks, highlighting the potential of differentiable SNNs for scientific machine learning.

Ruyin Wan (Brown University)
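The phase-based spike representation can be illustrated with the standard theta-neuron change of variables for the QIF model (a generic textbook form, not necessarily the authors' exact formulation):

```python
import numpy as np

def theta_step(theta, i_in, dt=0.01):
    """One Euler step of the theta-neuron form of the QIF model:
    dtheta/dt = (1 - cos(theta)) + (1 + cos(theta)) * I.
    The spike is the smooth crossing of theta through pi, so the
    trajectory stays differentiable with respect to the input I."""
    return theta + dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * i_in)

theta, crossings = 0.0, 0
for _ in range(5000):
    theta = theta_step(theta, i_in=0.5)  # constant positive drive
    if theta >= np.pi:                   # phase reached pi: one spike
        crossings += 1
        theta -= 2 * np.pi               # wrap phase for the next cycle
assert crossings >= 1  # positive drive makes the QIF neuron fire repeatedly
```

Because the spike is a phase crossing rather than a discrete reset of a discontinuous variable, gradients of a loss with respect to the input current can flow through the whole trajectory without surrogate functions.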
14:30‑14:45
(15 min)
 Group photo

Please note: the group photo will be published publicly. By appearing in the photo, you consent to being included in this public photo.


Followed by a "Women in Neuromorphic" group photo.

14:45‑15:15
(30 min)
 Break
15:15‑15:35
(20+5 min)
 The Neuromorphic Commons (THOR) Goes Live: Phase 1 Challenge Launch
Dhireesha Kudithipudi (UT San Antonio)
15:40‑16:25
(45 min)
 Open mic session
16:30
End of day I

Afterwards: possibility to visit Jennifer's lab.


Wednesday, 25 March 2026
08:30
NICE 2026 - day II
08:30
Session chair: Catherine Lacy, Suma Cardwell
08:30‑08:45
(15 min)
 Delayed start
08:45‑09:10
(25+5 min)
 Keynote: Training SNNs with exact gradients: Progress and Challenges
show presentation.pdf (publicly accessible)
show talk video

In 2021, Wunderlich and Pehle published the “EventProp” method, which uses the adjoint method to calculate exact gradients in spiking neural networks (SNNs) of LIF neurons, and applied it to diagnostic machine learning benchmarks such as the YinYang dataset and latency MNIST. Soon after, we were able to show that this work could be extended to problems such as keyword recognition on the SHD and SSC benchmarks, and to networks with delays and to learning delays. The trained networks run on Intel’s Loihi with minimal accuracy loss and considerable energy savings.

Pehle’s PhD thesis also contains more general equations for any hybrid SNN models. Based on this, we have used Python’s sympy symbolic math package to implement an “auto-adjoint” method in our mlGeNN software, not unlike the powerful “autodiff” methods in PyTorch, TensorFlow or JAX. This allows users to define neuron and synapse models of their choice, which mlGeNN automatically turns into equations and code for gradient descent on the chosen model. However, there are issues that mean that adjoint learning in SNNs is not yet as plug-and-play as autodiff-based error backpropagation through time in artificial neural networks.

In this talk, I will briefly introduce the overall method and will then discuss the main advances and the challenges for a wider adoption of this technology, drawing on examples from our recent works.

Thomas Nowotny (University of Sussex)
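The derivation step that an "auto-adjoint" tool automates can be sketched with sympy (a schematic of the idea, not mlGeNN's actual implementation): from the free dynamics of a LIF neuron with an exponential current synapse, the between-spike adjoint system follows from the transposed Jacobian.

```python
import sympy as sp

v, i = sp.symbols("v i")
tau_m, tau_s = sp.symbols("tau_m tau_s", positive=True)
lam_v, lam_i = sp.symbols("lambda_v lambda_i")

# Free (between-spike) LIF dynamics with an exponential current synapse.
y = sp.Matrix([v, i])
f = sp.Matrix([(-v + i) / tau_m, -i / tau_s])

# Adjoint dynamics dlambda/ds = -J^T lambda (one common convention),
# derived symbolically -- the step an auto-adjoint tool automates.
J = f.jacobian(y)
adjoint_rhs = -J.T * sp.Matrix([lam_v, lam_i])
assert sp.simplify(adjoint_rhs[0] - lam_v / tau_m) == 0
```

The harder part, which this sketch omits, is handling the jump conditions at spike times; those are where the plug-and-play issues mentioned in the abstract arise.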
09:15‑09:25
(10+5 min)
 On the Status, Requirements, and Expectations of Neuro-Inspired High-Performance Computing
show presentation.pdf (publicly accessible)

Johannes Gebert, Qifeng Pan, Lukas Stockmann, Hartwig Anzt and Christian Mayr

Brain-inspired integrated circuits in 2026 are a future high-performance computing (HPC) technology, seemingly ready for industrialization.

They have demonstrated hardware support for AI as well as for traditional algorithms commonly used in HPC. However, the community needs to determine whether these proofs of concept translate into readily usable HPC software. On the hardware side, we may have to cherry-pick the most suitable brain-inspired features and combine them with, e.g., sufficiently fast interconnects and programmability to actually outperform current HPC software and hardware stacks.

Performance and energy efficiency are two significant concerns driving capital and operational expenses, as well as returns on investment, and hence the deployment of HPC systems. However, they are not the only concerns: IT security, multi-tenancy, and spare parts are just a few examples that may seem mundane from a neuromorphic perspective, but are mandatory for HPC systems.

This paper presents the perspective of a high-performance computing center eager to adopt promising brain-inspired technologies and to serve the academic and industrial communities.

Johannes Gebert (High-Performance Computing Center Stuttgart (HLRS), Germany)
09:30‑09:55
(25+5 min)
 The BrainScaleS-2 multi-chip system: Interconnecting continuous-time neuromorphic compute substrates

Joscha Ilmberger and Johannes Schemmel

The BrainScaleS-2 SoC integrates analog neuron and synapse circuits with digital periphery, including two CPUs with SIMD extensions. Each ASIC is connected to a Node-FPGA, providing experiment control and Ethernet connectivity. This work details the scaling of the compute substrate through FPGA-based interconnection via an additional Aggregator unit. The Aggregator provides up to 12 transceiver links to a backplane of Node-FPGAs, as well as 4 transceiver lanes for further extension. Two such interconnected backplanes are integrated into a standard 19-inch, 4U rack case together with an Ethernet switch, system controller and power supplies. For all spike rates, chip-to-chip latencies below 1.3 μs, spanning four hops across three FPGAs, are achieved within each backplane.

Joscha Ilmberger (Heidelberg University)
10:00‑10:10
(10+5 min)
 Multi-Timescale Conductance Spiking Networks: A Sparse, Gradient-Trainable Framework with Rich Firing Dynamics for Enhanced Temporal Processing
show presentation.pdf (publicly accessible)

Alex Fulleda-Garcia, Saray Soldado-Magraner and Josep Maria Margarit-Taulé

Spiking neural networks (SNNs) enable low-power, event-driven processing, but common neuron models often trade off gradient-based training, dynamical richness, and spike sparsity, limiting their quality-energy performance on temporal tasks. We present a gradient-trainable multi-timescale spiking-neuron framework that provides controllable firing behavior within a single model, and is amenable to efficient neuromorphic realization. Feedforward networks are evaluated on long-horizon chaotic Mackey–Glass time-series regression against leaky integrate-and-fire (LIF) and adaptive LIF baselines. Our approach outperforms both LIF and adaptive LIF accuracies while yielding substantially sparser spiking from both communication and compute perspectives.

Alex Fulleda-Garcia (IMB-CNM-CSIC, Spain)
10:15‑10:20
(5 min)
 Message from a program manager from Army Research Labs
show presentation.pdf (publicly accessible)
Chou Hung (Army Research Office (ARO))
10:20‑10:45
(25 min)
 Coffee break
10:45‑11:10
(25+5 min)
 Privacy-preserving fall detection at the edge using Sony IMX636 event-based vision sensor and Intel Loihi 2 neuromorphic processor
show presentation.pdf (publicly accessible)
show talk video

Lyes Khacef, Philipp Weidel, Susumu Hogyoku, Harry Liu, Claire Alexandra Bräuer, Shunsuke Koshino, Takeshi Oyakawa, Vincent Parret, Yoshitaka Miyatani, Mike Davies and Mathis Richter

Fall detection for elderly care using non-invasive vision-based systems remains an important yet unsolved problem. Driven by strict privacy requirements, inference must run at the edge of the vision sensor, demanding robust, real-time, and always-on perception under tight hardware constraints. To address these challenges, we propose a neuromorphic fall detection system that integrates the Sony IMX636 event-based vision sensor with the Intel Loihi 2 neuromorphic processor via a dedicated FPGA-based interface, leveraging the sparsity of event data together with near-memory asynchronous processing. Using a newly recorded dataset under diverse environmental conditions, we explore the design space of sparse neural networks deployable on a single Loihi 2 chip and analyze the tradeoffs between detection F1 score and computational cost. Notably, on the Pareto front, our LIF-based convolutional SNN with graded spikes achieves the highest computational efficiency, reaching a 55x synaptic operations sparsity for an F1 score of 58%. The LIF with graded spikes shows a gain of 6% in F1 score with 5x less operations compared to binary spikes. Furthermore, our MCUNet feature extractor with patched inference, combined with the S4D state space model, achieves the highest F1 score of 84% with a synaptic operations sparsity of 2x and a total power consumption of 90 mW on Loihi 2. Overall, our smart security camera proof-of-concept highlights the potential of integrating neuromorphic sensing and processing for edge AI applications where latency, energy consumption, and privacy are critical.

Lyes Khacef (Sony Advanced Visual Sensing)
11:15‑11:25
(10+5 min)
 Training Spiking Neural Networks on Multi-chip Analog Neuromorphic Hardware

Elias Arnold, Yannik Stradmann, Joscha Ilmberger, Eric Müller and Johannes Schemmel

We present the first demonstration of a four-chip BrainScaleS-2 neuromorphic system showcasing spiking neural network model inference and hardware-in-the-loop training. Building on several years of single-chip BrainScaleS-2 availability, the newly commissioned system uses a hierarchical communication fabric linking an initial set of four BrainScaleS-2 ASICs. We evaluate this architecture using a spiking neural network whose hidden layer is distributed across three chips while placing the output layer on a fourth chip, thus exceeding the resource limits of any previously available BrainScaleS-2 substrate. We report stable experiment execution and gradient-based training using surrogate gradients with the MNIST dataset. This talk introduces the BrainScaleS-2 multi-chip system as a platform for scalable analog neuromorphic computing.

Yannik Stradmann (Ruprecht-Karls-Universitaet Heidelberg), Joscha Ilmberger (Ruprecht-Karls-Universitaet Heidelberg)
11:30‑11:55
(25+5 min)
 Critical Spike Attribution for Feature Importance in Spiking Neural Networks
show presentation.pdf (publicly accessible)
show talk video

Jack Klawitter and Konstantinos P. Michmizos

Spiking neural networks (SNNs) have emerged as a compelling alternative to standard artificial neural networks (ANNs) due to their biological plausibility and energy efficiency. However, while feature attribution methods are well established for ANNs, comparable techniques for SNNs remain underdeveloped. Here, I introduce Critical Spike Attribution (CSA), a causal attribution method that identifies which presynaptic spikes were functionally necessary for producing downstream firing. CSA provides temporally resolved explanations and is compatible with any training method. CSA is evaluated on both the Spiking Heidelberg Digits dataset and a real-world neural decoding task. Across both settings, CSA more accurately identifies causal features while producing sparser explanations with minimal trade-off in completeness.

Jack Klawitter (Rutgers University)
12:00‑12:10
(10+5 min)
 Predicting Price Movements in High-Frequency Financial Data with Spiking Neural Networks
show presentation.pdf (publicly accessible)
show talk video

Brian Ezinwoke and Oliver Rhodes

Modern high-frequency trading environments are characterized by sudden price spikes that present both risk and opportunity, yet conventional financial models often struggle to capture the required fine temporal structure. Spiking Neural Networks (SNNs) offer a biologically inspired framework well-suited to these challenges due to their natural ability to process discrete events and preserve millisecond-scale timing. This work investigates the application of SNNs to high-frequency price-spike forecasting, enhancing performance via robust hyperparameter tuning with Bayesian Optimization. To address the inherent difficulty of tuning unsupervised Spike-Timing-Dependent Plasticity (STDP) for non-stationary data, we introduce a novel objective, Penalized Spike Accuracy (PSA), designed to align a network's predicted spike rate with empirical market event frequencies. By evaluating an extended architecture featuring explicit inhibitory competition and multiple temporal lags on microsecond-precision stock data, we demonstrate that PSA-optimized models achieve superior risk-adjusted returns when compared to both supervised backpropagation alternatives and standard accuracy-tuned benchmarks. These results suggest that task-specific tuning of spiking dynamics offers a viable methodology for training event-driven algorithms for financial time-series, inviting further exploration into the utility of unsupervised plasticity for high-dimensional temporal data.

Brian Ezinwoke (University College London)
12:15‑13:30
(75 min)
 Poster lunch
  • In-hardware learning of multilayer spiking neural networks on FPGA
  • interneurOn: A Collapsing-Bound Decision Neuron for Neuromorphic Early Exit
  • Memory-Augmented Spiking Networks: Synergistic Integration of Complementary Mechanisms for Neuromorphic Vision
  • NEUKRAG: NEUROMORPHIC KG–RAG WITH SMALL LLMS for Hardware–Algorithm Co-Design
  • NeuroAI Temporal Neural Networks (NeuTNNs): Microarchitecture and Design Framework for Specialized Neuromorphic Processing Units
  • NeuroHex: Highly-Efficient Hex Coordinate System for Creating World Models to Enable Adaptive AI
  • NeuroMatrix: Hardware-Realistic 300-Neuron Cortical Simulation Framework for Neuromorphic System Design
  • Real-Time Neuromorphic Control of an Underactuated Ball-and-Plate System using Intel Loihi
  • Scenario-Aware Control of Segmented Ladder Bus: Design and FPGA Implementation
  • Self-Supervised Spiking Neural Networks via Dual-Path Temporal Alignment
  • SimScore: A Similarity Score for Spiking Neurons
  • SpikeViT: A Memory-Efficient Mobile Spiking Vision Transformer
  • Spiking Value Iteration for Solving Markov Decision Processes on Neuromorphic Hardware
  • Sugar, Serenades, Swarms, and Synapses: Understanding Neural Pathways Associated with Feeding and Courtship via Swarm Optimization
  • SuperNeuroABM: A GPU-based multi-agent co-design simulation framework for neuromorphic computing
  • Temporal-ASL-DVS: A Temporally Rich PseudoDVS American Sign Language Dataset
  • The Scalability of Spatial Perception and Learning with Biologically Inspired Mapping Algorithms
  • Tracking Neural Plasticity Through Incremental Spiking Neural Networks
  • Hardware–Software co-design for on-chip IED detection with graph-based network analysis
13:30
Session chair: Connor White, Ashish Gautam
13:30‑13:55
(25+5 min)
 Real-time processing of analog signals on accelerated neuromorphic hardware

Yannik Stradmann, Johannes Schemmel, Mihai A. Petrovici and Laura Kriener

Sensory processing on neuromorphic hardware typically relies on event-based sensors or prior conversion of signals into spikes. In this talk, we present an alternative approach using the BrainScaleS-2 mixed-signal system: direct analog signal injection into on-chip neuron circuits enables efficient near-sensor processing without prior analog-to-digital or digital-to-analog conversion. Leveraging the platform’s 1000-fold acceleration factor, we implement a spiking neural network that transforms interaural time differences into a spatial representation to predict sound source location. We demonstrate a fully on-chip pipeline—from continuous microphone input, through accelerated neural processing, to actuator control—by localizing transient noise sources and aligning a servo motor in real time.

Yannik Stradmann (Institute of Computer Engineering, Heidelberg University, Germany)
14:00‑14:25
(25+5 min)
 DARWIN: Hardware Efficient Analog In-Memory Computing Using Dendritic ARborized Weights In Neural Networks
show presentation.pdf (publicly accessible)
show talk video

Ming-Jay Yang, Dimitrios Spithouris, Johannes Hellwig, Pascal Nieters, Regina Dittmann, Gordon Pipa and John Paul Strachan

In-memory computing (IMC) with memristive cross-bar arrays offers a promising solution to the energy and latency limitations of conventional von Neumann architectures. However, most existing IMC systems rely on perceptron-based neurons, limiting computational flexibility and requiring large, dense networks to achieve competitive performance. Inspired by the nonlinear processing of biological dendrites, this work introduces a hardware-efficient IMC architecture, “DARWIN”, that integrates nonvolatile memristive devices as synaptic weights arranged as a tree with volatile memristive devices functioning as integration filters. To further optimize hardware utilization, we incorporate learnable sparsity, enabling the network to automatically discover a compact synaptic pattern within a dendritic tree tailored to the underlying device characteristics. Experimental results demonstrate that the proposed architecture achieves more than two orders of magnitude lower memory footprint and at least an order of magnitude lower power consumption than other in-memory computing architectures. These findings highlight the potential of combining dendritic computation, heterogeneous memristive technologies, and sparsity-aware learning to advance scalable and bio-inspired in-memory computing hardware.

Ming-Jay Yang (Forschungszentrum Jülich, Germany)
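The core in-memory computing primitive behind architectures like this can be sketched numerically (illustrative device values, not DARWIN's): weights are stored as conductances, inputs are applied as read voltages, and column currents physically sum the products.

```python
import numpy as np

# Sketch of analog in-memory matrix-vector multiplication: signed
# weights are realized as a differential pair of conductance columns.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))                # signed weights to realize

g_max = 1e-4                               # hypothetical max conductance (S)
scale = g_max / np.abs(W).max()
G_pos = np.clip(W, 0, None) * scale        # positive parts on one column
G_neg = np.clip(-W, 0, None) * scale       # negative parts on the paired column

V = rng.uniform(0.0, 0.2, size=4)          # read voltages (V)
I_out = V @ G_pos - V @ G_neg              # differential column currents

# Ohm's and Kirchhoff's laws perform W^T V in one analog step,
# instead of O(n*m) sequential digital multiply-accumulates.
assert np.allclose(I_out, scale * (V @ W))
```

A dendritic-tree architecture goes further than this flat crossbar by inserting nonlinear (volatile-device) filtering between stages, which this sketch does not model.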
14:30
(1 min)
 ICONS announcement

ICONS website https://iconsneuromorphic.cc/

Prasanna Date (Oak Ridge National Laboratory)
14:31‑15:00
(29 min)
 Coffee break
15:00‑15:25
(25+5 min)
 Intrinsic Numerical Robustness and Fault Tolerance in a Neuromorphic Algorithm for Scientific Computing

Bradley H. Theilman and James B. Aimone

The potential for neuromorphic computing to provide intrinsic fault tolerance has long been speculated, but demonstrations of brain-like robustness in neuromorphic applications are rare. Here, we show that a previously described, natively spiking neuromorphic algorithm for solving partial differential equations is intrinsically tolerant to structural perturbations in the form of ablated neurons and dropped spikes. The tolerance band for these perturbations is large: we find that as many as 32% of the neurons and up to 90% of the spikes may be entirely dropped before accuracy degrades significantly. Furthermore, this robustness is tunable through structural hyperparameters. This work demonstrates that the specific brain-like inspiration behind the algorithm contributes a significant degree of robustness, as expected of brain-like neuromorphic algorithms.

Bradley Theilman (Sandia National Laboratories)
15:30‑15:40
(10+5 min)
 Neuromorphic Eye Tracking for Low-Latency Pupil Detection
show presentation.pdf (publicly accessible)
video (restricted access)

Paul Hueber, Luca Peres, Florian Pitters, Alejandro Gloriani and Oliver Rhodes

Wearable eye-tracking devices demand milliwatt-level power and millisecond latency, which conventional frame-based pipelines struggle to deliver due to motion blur and heavy compute requirements. In this work, we redesign top-performing event-based eye-tracking architectures from the AIS 2024 Challenge as neuromorphic models by replacing their recurrent and attention modules with lightweight leaky integrate-and-fire (LIF) layers and depth-wise separable convolutions. Evaluated in a continuous 1 kHz setting on the 3ET+ dataset, these models achieve 3.7–4.1 pixel mean pupil-localization error, approaching the accuracy of the specialized neuromorphic Retina system (3.24 px). Compared to their closest ANN counterparts, the proposed SNN variants reduce model size by 22–45× and theoretical compute by 30–1000×, while maintaining accuracy suitable for real-time pupil detection. Using the SENeCA neuromorphic energy model, operational power is estimated at 3.9–4.9 mW with approximately 3 ms end-to-end latency, establishing a key building block for always-on, event-driven gaze estimation in resource-constrained wearable devices.

Oliver Rhodes (University of Manchester, United Kingdom)
15:45‑16:30
(45 min)
 Open mic session
16:30
End of day II

Thursday, 26 March 2026
08:00
NICE 2026 - day III

Food should be available from 8:00h onwards

08:30
Session chair: Pranav Mathews, Felix Wang
08:30‑08:45
(15 min)
 Delayed start
08:45‑09:10
(25+5 min)
 Keynote: REM-like Consolidation: Same Performance, Sparser Representations
show presentation.pdf (publicly accessible)

Sleep plays a fundamental role in memory and learning. While the conventional view of sleep emphasizes replay and strengthening of recent memory traces for long-term storage, often coupled with the dual-memory framework of a fast-learning hippocampus and a slow-learning cortex, here we propose a more general role for sleep as a mechanism for optimizing memory representations by reducing overlap between memories and increasing representational sparseness. Using a spike-based sleep replay algorithm for ANNs, we show that sleep rescues recently learned memories from interference by enhancing synaptic inhibition and isolating memory traces in activation space. These results further suggest that sleep-like reactivation algorithms may serve a regularizing function by increasing the sparseness of memory representations.

Maxim Bazhenov (University of California, San Diego)
09:15‑09:25
(10+5 min)
 Predicting band-gap of Inorganic Materials using Neuromorphic Graph Learning
show presentation.pdf (publicly accessible)
show talk video

Ian Mulet, Derek Gobin, Ashish Gautam, Kevin Zhu, Prasanna Date, Shruti Kulkarni, Seung-Hwan Lim, Guojing Cong, Maryam Parsa, Thomas Potok and Catherine Schuman

Predicting properties of inorganic materials is a heavily researched topic, with several new prediction approaches emerging as competitors. Graph neural networks are one such competitor, leveraging the structure of the graph to aid in the prediction process. In this work, we propose integrating neuromorphic computation into the graph neural network pipeline. We call this approach Neuromorphic Graph Learning (NGL). We utilize the NGL approach to leverage evolutionary algorithms and a novel Spike Pipeline for Raster Analysis (SPIRE) for the prediction of band gap in inorganic materials.

Ian Mulet (University of Tennessee Knoxville)
09:30‑09:40
(10+5 min)
 A principled procedure for designing brain-derived SWaP-optimized neuronal units for low-power neuromorphic analog computation and digital communication
show presentation.pdf (publicly accessible)
Chad Harper (UC Berkeley)
09:45‑10:15
(30 min)
 Coffee break
10:15‑10:25
(10+5 min)
 δ Multiplexed Gradient Descent: Perturbative Learning with Astrocytes
show presentation.pdf (publicly accessible)

Ryan O'Loughlin, Bakhrom Oripov, Nick Skuda, Noah Chongsiriwatana, Ian Whitehouse, Wolfgang Losert, Bradley Hayes, Adam McCaughan and Sonia Buckley

Nicholas Skuda (NIST Boulder)
10:30‑10:40
(10+5 min)
 NeuroCoreX: An Open-Source FPGA-Based Spiking Neural Network Emulator with On-Chip Learning

Ashish Gautam, Prasanna Date, Shruti Kulkarni, Ian Mulet, Kevin Zhu, Robert Patton and Thomas Potok

Spiking Neural Networks (SNNs) are computational models inspired by the event-driven communication and connectivity patterns of biological neural circuits. They enable high energy efficiency and natural support for diverse architectures ranging from layered networks to small-world and graph-structured topologies. In this work, we introduce NeuroCoreX, an open-source, FPGA-based spiking neural network emulator that provides real-time, on-chip learning and flexible network organization. NeuroCoreX supports both feedforward sensory inputs streamed directly from sensors or PCs via UART and recurrent on-chip connectivity, enabling simultaneous processing and learning from external stimuli and internal network dynamics—capabilities rarely available in existing FPGA SNN platforms. The system implements a Leaky Integrate-and-Fire (LIF) neuron model with current-based synapses and supports pair-based STDP learning on both feedforward and recurrent synapses. A lightweight Python interface enables interactive configuration, live monitoring, weight read-back, and experiment control. Importantly, NeuroCoreX is tightly integrated with the SuperNeuroMAT simulator, allowing SNN models to be transferred seamlessly from software to hardware for hardware-in-the-loop development. By combining real-time plasticity, flexible connectivity, and an open-source VHDL implementation, NeuroCoreX provides an extensible and accessible platform for neuromorphic research, algorithm–hardware co-design, and energy-efficient edge intelligence.

Ashish Gautam (Oak Ridge National Lab)
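A minimal sketch of the neuron model NeuroCoreX emulates (a generic LIF with current-based input; parameters here are arbitrary, not the emulator's defaults):

```python
import numpy as np

def lif_step(v, i_syn, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron with a
    current-based synaptic drive; spike and reset at threshold."""
    v = v + dt * (-v + i_syn) / tau
    spiked = v >= v_th
    return np.where(spiked, v_reset, v), spiked

v = np.zeros(3)
drive = np.array([0.5, 1.2, 2.0])  # constant input current per neuron
counts = np.zeros(3)
for _ in range(500):
    v, s = lif_step(v, drive)
    counts += s

assert counts[0] == 0          # subthreshold drive (0.5 < v_th) never fires
assert counts[1] < counts[2]   # stronger drive fires more often
```

On the FPGA this update runs in fixed-point per clock tick, and the resulting spikes additionally drive the pair-based STDP weight updates described in the abstract.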
10:45‑11:10
(25+5 min)
 Fully Spiking Linear Quadratic Regulator Control via a Neuromorphic Solver for the Continuous Algebraic Riccati Equation
show presentation.pdf (publicly accessible)
show talk video

Graeme Damberger, Omar Alejandro Garcia Alcantara, Eduardo S. Espinoza, Luis Rodolfo Garcia Carrillo, Terrence C. Stewart and Chris Eliasmith

We present a fully spiking implementation of the Linear Quadratic Regulator (LQR) by introducing a recurrent spiking neural network that dynamically solves the Continuous Algebraic Riccati Equation (CARE). Using the Neural Engineering Framework, the CARE dynamics are embedded directly within a recurrent spiking network, enabling an online controller that computes the optimal LQR controller gains in real time. We derive Lyapunov-based conditions ensuring the stability of both the spiking CARE solver and the resulting closed-loop system as a function of the neuron population size. Through simulation of the yaw dynamics of a constrained quadrotor, we show that the proposed controller achieves tracking performance comparable to an ideal non-spiking LQR while preserving convergence of the CARE solution within the spiking network. These results demonstrate that optimal control algorithms can be implemented entirely within neuromorphic systems.

Graeme Damberger (University of Waterloo)
11:15‑11:25
(10+5 min)
 Amortized Inference of Neuron Parameters on Analog Neuromorphic Hardware
show presentation.pdf (public accessible)

Jakob Kaiser, Eric Müller and Johannes Schemmel

Mapping experimental observations to parameters of analog neuromorphic hardware is often labor-intensive. We apply amortized simulation-based inference to infer parameters of an adaptive exponential integrate-and-fire neuron implemented on the BrainScaleS-2 system. Using membrane potential responses to step current stimulation, we estimate posterior distributions for seven model parameters. We compare different approaches for extracting summary statistics from membrane traces and evaluate their impact on inference performance. Our results demonstrate that amortized SBI is a promising tool for parameterizing analog neuron circuits.

Jakob Kaiser (Institute of Computer Engineering, Heidelberg University, Germany)
11:30‑11:55
(25+5 min)
 Training event-based neural networks with exact gradients via Differentiable ODE Solving in JAX
show presentation.pdf (public accessible)

Lukas König, Manuel Kuhn, David Kappel and Anand Subramoney

Lukas König (University of Bielefeld)
12:00‑13:00
(60 min)
 Break for lunch (food provided)
13:00
Session chair: Praveen Raj, Brad Theilman
13:00‑13:10
(10+5 min)
 Late breaking news: Bridging Neuromorphic and Traditional Computing Performance: An Information-Theoretic Approach
show presentation.pdf (public accessible)

Max Hawkins, Richard Vuduc

Neuromorphic computers challenge traditional definitions of computing performance by prioritizing characteristics like sparsity and noise tolerance, which are often ignored by conventional metrics. As a result, the community lacks a fair method of comparing computational work performed by sparse, noisy neurons against dense, deterministic arithmetic logic units (ALUs). We propose a performance framework grounded in information theory that models any computational unit as a channel, transforming and retaining information from inputs (X) to outputs (Y). By defining computing ‘work’ as the mutual information, I(X; Y), between input and output states, we provide an implementation-agnostic framework for computational performance. This framework resolves conflicting definitions of work in the presence of sparsity by distinguishing between structural sparsity and activation sparsity. It also naturally accounts for noise and accommodates diverse encodings (e.g. fixed-width binary, spike trains, etc.) to offer a ‘common currency’ for evaluating innovation across diverse computing paradigms.
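The proposed 'work' metric can be illustrated on the simplest noisy channel: a binary symmetric channel with flip probability p standing in for a noisy computational unit. This is a textbook mutual-information calculation, not code from the presented framework; a noiseless unit (p = 0) does one full bit of 'work', and noise reduces it.

```python
import numpy as np

# I(X; Y) through a binary symmetric channel with flip probability p,
# as a stand-in for a noisy computational unit (illustrative only).
def mutual_information_bsc(p, px1=0.5):
    px = np.array([1 - px1, px1])        # input distribution P(X)
    pyx = np.array([[1 - p, p],          # channel matrix P(Y | X)
                    [p, 1 - p]])
    pxy = px[:, None] * pyx              # joint P(X, Y)
    py = pxy.sum(axis=0)                 # output marginal P(Y)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = pxy * np.log2(pxy / (px[:, None] * py[None, :]))
    return np.nansum(terms)              # zero-probability cells contribute 0
```

With a uniform input, `mutual_information_bsc(0.0)` is 1 bit, `mutual_information_bsc(0.5)` is 0 bits, and intermediate noise levels fall in between.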

Max Hawkins (Georgia Institute of Technology)
13:15‑13:25
(10+5 min)
 Late breaking news: NOVA: Real-Time Visualization and Streaming for Neuromorphic Event Cameras

Andrew Lin, Daniel Querrey, Eric McGonagle, John Langs, Nai Yun Wu, John Ho, Nick Almeter, Matthew Fisher, Utsawb Lamichhane, Praket Desai, David Mascarenas, Tracy Hammond; Texas A&M University & Los Alamos

Neuromorphic (event-based) cameras capture visual information as asynchronous streams of per-pixel intensity changes, offering high temporal resolution and reduced redundancy compared to frame-based sensors. Despite these advantages, the sparse and asynchronous nature of event data makes real-time inspection and interpretation challenging, limiting effective use during development, debugging, and onboarding. This paper presents NOVA (Neuromorphic Optics and Visualization Application), an integrated software system for real-time visualization, streaming, and interaction with neuromorphic event data. NOVA provides a GPU-accelerated pipeline supporting both live camera input and offline datasets, with native compatibility for modern formats and sustained high-throughput operation. A central feature is a 3D spatiotemporal visualization that represents spatial position and time simultaneously, enabling users to observe motion patterns and temporal structure that are difficult to perceive in conventional 2D views. The system also incorporates Digital Coded Exposure reconstruction to generate frame-like representations alongside raw event streams, supporting intuitive interpretation. Designed with a modular architecture and task-oriented interface, NOVA emphasizes performance, extensibility, and usability across research, development, and educational contexts. Deployment experiences and early user observations suggest that enhanced visualization and interactive temporal controls improve users’ ability to reason about event data and accelerate exploratory workflows. NOVA demonstrates how human-centered tooling can reduce barriers to working with neuromorphic sensors and provides a foundation for future research on visualization and interpretation of asynchronous visual data.

David Mascarenas (Los Alamos National Laboratory)
13:30‑13:40
(10+5 min)
 Hardware-Algorithm Co-design for On-Chip SNN: SOLO (Spatial Online Learning at Once) with analog flash device

Sungmin Lee

Spiking Neural Networks (SNNs) have emerged as a promising energy-efficient alternative to conventional deep learning models. However, realizing truly energy-efficient artificial intelligence requires a synergistic approach that optimizes both the learning algorithm and the underlying hardware architecture. In this talk, we present a novel hardware-algorithm co-design that integrates the SOLO (Spatial Online Learning at Once) algorithm with an analog flash device. Specifically, SOLO overcomes the high latency and memory costs of traditional BPTT, enabling highly efficient, noise-robust on-chip training of deep SNNs. By leveraging co-integrated flash devices to model synapses, neurons, and surrogate gradients, we demonstrate a highly efficient and superior platform for analog in-memory computing (AIMC).

Sungmin Lee (Seoul National University)
13:45‑14:10
(25+5 min)
 Graph Reservoir Networks for Prediction of Spatiotemporal Systems

William Chapman, Darby Smith, Corinne Teeter and Nicole Jackson

Predicting large-scale spatiotemporal data poses significant challenges for traditional machine learning methods, particularly in high-performance and edge-based computing environments. Graph neural networks (GNNs) have emerged as powerful tools for modeling structured data, while reservoir computing (RC) offers efficient and state-of-the-art solutions for predicting low-rank dynamical systems. In this work, we propose a novel approach that integrates the global structured modeling capabilities of GNNs with the efficient training and predictive strengths of RC. By combining local random connectivity with global domain-informed structure, the proposed graph reservoir networks leverage both random and model-based information to accurately predict large-scale spatiotemporal dynamics. Furthermore, we demonstrate the robustness of these networks to neuromorphic hardware constraints, including temporally sparse spiking activity, quantized weights, limited fan-in, and local learning rules. These results highlight the utility of integrating reservoir computing with domain-informed inductive biases, paving the way for scalable and efficient solutions to complex dynamical systems in diverse applications.
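The reservoir-computing component described above can be illustrated with a plain echo state network: fixed random recurrent weights, with only a linear readout trained by ridge regression. This is a generic reservoir sketch under standard assumptions, not the graph-structured variant proposed in the talk.

```python
import numpy as np

# Generic echo state network: fixed random reservoir, trained linear readout.
rng = np.random.default_rng(0)
n_res, n_in = 200, 1

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        states.append(x.copy())
    return np.array(states)

# Train the readout for one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 1000))[:, None]
X = run_reservoir(u[:-1])
y = u[1:, 0]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)  # ridge regression
pred = X @ W_out
```

Only `W_out` is learned; the random reservoir stays fixed, which is what makes RC training cheap relative to backpropagation through time.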

William Chapman (Sandia National Laboratories)
14:15‑14:25
(10+5 min)
 Dynamic Heuristic Neuromorphic Solver for the Edge User Allocation Problem with Bayesian Confidence Propagation Neural Network
show presentation.pdf (public accessible)
show talk video

Kecheng Zhang, Anders Lansner, Ahsan Javed Awan, Naresh Balaji Ravichandran and Pawel Herman

We propose a neuromorphic solver for the NP-hard Edge User Allocation problem using an attractor network with Winner-Takes-All (WTA) mechanism implemented with the Bayesian Confidence Propagation Neural Network (BCPNN) framework. Unlike previous energy-based attractor networks, our solver uses dynamic heuristic biasing to guide allocations in real time and introduces a “no allocation” state to each WTA motif, achieving near-optimal performance with an empirically upper-bounded number of time steps. The approach is compatible with neuromorphic architectures and may offer improvements in energy efficiency.

Ahsan Javed Awan (Ericsson)
14:30‑15:00
(30 min)
 Coffee break
15:00‑15:25
(25+5 min)
 VS-Graph: Scalable and Efficient Graph Classification Using Hyperdimensional Computing
show presentation.pdf (public accessible)

Hamed Poursiami, Shay Snyder, Guojing Cong, Thomas Potok and Maryam Parsa

Graph Neural Networks (GNNs) offer high accuracy but incur computational costs that limit scalability on edge devices. We present VS-Graph, a vector-symbolic framework that bridges the gap between the efficiency of Hyperdimensional Computing (HDC) and the expressive power of message passing. By introducing Spike Diffusion for topology-driven identification and Associative Message Passing in high-dimensional space, VS-Graph achieves accuracy competitive with modern GNNs without gradient-based optimization. Our results demonstrate a training speedup of up to 450× over GNNs and a 4–5% accuracy improvement over prior HDC baselines, while maintaining robustness even under aggressive dimensionality reduction.

Hamed Poursiami (George Mason University)
15:30‑15:40
(10 min)
 Tutorial day information and messages from the local host

The tutorials will take place at the TSRB (Technology Square Research Building), 85 5th St NW, Atlanta. (The TSRB is the building where Jennifer Hasler's lab is located; most attendees visited it at the end of day I of NICE.) Openstreetmap link

Rooms 118 and 132 (ground floor)

Jennifer Hasler (Georgia Institute of Technology)
15:40‑15:45
(5 min)
 NICE 2027Thomas Nowotny (University of Sussex)
15:45‑16:30
(45 min)
 Open mic session
16:30
End of the NICE 2026 conference part

Friday, 27 March 2026
08:30
NICE 2026 tutorial day

Venue Openstreetmap link

  • TSRB (Technology Square Research Building, 85 5th St NW, Atlanta) (The TSRB is the building where Jennifer Hasler's lab is located; most attendees visited it at the end of day I of NICE)
  • Rooms 118 and 132 (ground floor)
08:30‑10:30
(120 min)
 1st tutorial session
Tutorial: BrainScaleS hands-on - room 118
Tutorial: N2A – neural programming language and workbench - room 132
Jakob Kaiser (Ruprecht-Karls-Universitaet Heidelberg), Joscha Ilmberger (Ruprecht-Karls-Universitaet Heidelberg), Yannik Stradmann (Ruprecht-Karls-Universitaet Heidelberg)

Johannes Schemmel, Yannik Stradmann, Joscha Ilmberger, Jakob Kaiser and Björn Kindler from Heidelberg University (Germany)

In this tutorial, participants have the chance to explore BrainScaleS-2, one of the world’s most advanced analog platforms for neuromorphic computing. For the tutorial, participants will use a web browser on their own laptop for remote access to BrainScaleS-2 systems via the EBRAINS Research Infrastructure. After a short introduction to neuromorphic computing and spiking neural networks, they will learn how to express and run experiments on the neuromorphic platform through either the (machine-learning targeting) PyTorch- or the (neuroscience targeting) PyNN-based software interfaces. This will allow them to gain insights into the unique properties and challenges of analog computing and to exploit the versatility of the system by exploring user-defined learning rules. Each participant will have the opportunity to follow a prepared tutorial or branch off and implement their own project on the systems.

  • Participants can use their EBRAINS account (available free of charge at https://ebrains.eu/register) or a guest account during the tutorial. With their own account, participants can continue using the neuromorphic compute systems after the tutorial ends.
  • For the tutorial we will use the non-persistent quick start on EBRAINS.

More Info about the tutorial

Fred Rothganger (Sandia National Labs)

This tutorial will introduce the user to the N2A programming language and its associated IDE. Upon completion, the user will be able to create new neuron types, new applications, and run them on their local machine (or on SpiNNaker-2 if hardware is available). This will be a hands-on tutorial. N2A may be downloaded from https://github.com/sandialabs/n2a and run on your personal laptop.

Typically, neuromorphic machines execute a simple dynamical system called the Leaky Integrate-and-Fire (LIF) model, analogous to a logic gate in conventional machines, and communicate between these using single-bit events called “spikes”. However, neuromorphic device makers are moving away from simple LIF dynamics toward more general neurons. The SpiNNaker system has always been general-purpose programmable due to its use of ARM cores, and SpiNNaker-2 has the capacity to send up to four 32-bit floats with each event. Intel’s second-generation Loihi also supports graded spikes and assembly-level programming of neuron models. The future of neuromorphic computing will likely be neurons with complex dynamics combined with high-volume short-packet communication. Several frameworks exist for programming neuromorphic systems. The challenge is to enable general-purpose programming of neuron types while maintaining cross-platform portability. Remarkably, these are complementary goals. With an appropriate level of abstraction, it is possible to “write once, run anywhere”. N2A’s unique approach allows the user to specify the dynamics for each class of neuron by simply listing its equations. The tool then compiles these for a given target platform. The structure of the network and interactions between neurons are specified in the same equation language. Network structures can be arbitrarily deep and complex. The language supports component creation, extension, reuse and sharing. The tool comes with a base library that supplies common neuroscience models as well as components for specific neuromorphic devices.

10:30‑11:00
(30 min)
 Break / next tutorial setup
11:00‑13:00
(120 min)
 2nd tutorial session
Tutorial: Introduction to Fugu: A Framework for Composing Neural Algorithms - room 118
Tutorial: Simulation Tool for Asynchronous Cortical Streams (STACS) Tutorial - room 132
Michael Krygier (Sandia National Laboratories)

In this tutorial, we will begin with an overview of the basic algorithm design in Fugu and typical workflows.

Fugu is an open-source, high-level Python programming framework designed for developing spiking algorithms in terms of computational graphs. It provides a hardware-independent mechanism for linking a variety of scalable spiking neural algorithms from different sources. Fugu is intended to be suitable for a wide range of neuromorphic applications, including machine learning, scientific computing, and more brain-inspired neural algorithms. Unlike other tools, Fugu separates the task of programming applications that may leverage neuromorphic hardware from the design of spiking neural algorithms and the specific details of neuromorphic hardware platforms. This allows users to focus on developing their applications without needing to be experts in neural computation or neuromorphic hardware. To design Fugu bricks and run these algorithms on hardware, users first construct a computational graph of their application using Fugu’s API. The API provides a simple and intuitive way to define the graph, and Fugu takes care of the rest, including automating the construction of a graphical intermediate representation of the spiking neural algorithm. The output of Fugu is a single NetworkX graph that fully describes the spiking neural algorithm, which can then be compiled down to platform-specific code using one of Fugu’s hardware backends or run on Fugu’s reference simulator. By providing a high-level abstraction and automating the process of constructing and compiling spiking neural algorithms, Fugu makes it easier for users to develop and deploy neuromorphic applications, and enables the exploration of new and innovative uses for neuromorphic computing.

Felix Wang (Sandia National Laboratories)

In this tutorial, we will explore how to define networks and take advantage of the parallel capabilities of the spiking neural network (SNN) simulator STACS (Simulation Tool for Asynchronous Cortical Streams) (https://github.com/sandialabs/STACS). Developed to be parallel from the ground up, STACS leverages the highly portable Charm++ parallel programming framework, which expresses a paradigm of asynchronous message-driven parallel objects, and supports both large-scale and long-running simulations on high performance computing systems. In addition to the parallel runtime, STACS also implements a memory-efficient distributed network data structure for network construction, simulation, and serialization. In particular, STACS uses a distributed intermediate representation, an SNN extension to the distributed compressed sparse row format, which supports interoperability with graph partitioners to facilitate optimizing communication costs across compute resources. With respect to the neuromorphic computing software ecosystem, this has enabled toolchains for mapping networks onto neuromorphic platforms such as Loihi 2 and SpiNNaker 2.

More info about the tutorial

13:00‑14:30
(90 min)
 Time for lunch (NO food provided - use the nearby options)
14:30‑16:30
(120 min)
 3rd tutorial session
Tutorial: LiteVLA at the Edge: CPU-Only Vision–Language–Action Control as a Testbed for Neuro-Inspired Robotics - room 118
Tutorial: SuperNeuro + NeuroCoreX Tutorial: Running Fast and Scalable Neuromorphic Simulations - room 132
Mohd Ariful Haque (Clark Atlanta University)

Kishor Datta Gupta, Justin Williams and Mohd. Ariful Haque from Clark Atlanta University

This 2-hour tutorial presents LiteVLA, a lightweight vision–language–action pipeline that runs fully on a Raspberry Pi 4 / TurtleBot-class robot using only CPU resources. We show how its LoRA-adapted SmolVLM backbone and 4-bit NF4 quantization create a constrained, low-power control loop that mirrors many of the design pressures in neuromorphic and non-von-Neumann systems. Participants will walk through the full stack—RGB+action data collection, parameter-efficient fine-tuning, hybrid-precision quantization, and ROS 2 integration with asynchronous Action Chunking—then discuss how such edge VLA controllers can benchmark algorithms, latency, and robustness before porting them to neuromorphic hardware or event-driven sensors. The tutorial targets NICE attendees interested in robotics and edge AI applications of neuro-inspired computing.


show presentation.pdf (public accessible)
Ashish Gautam (Oak Ridge National Lab), Xi Zhang (Oak Ridge National Laboratory), Prasanna Date (Oak Ridge National Laboratory)

Shruti R. Kulkarni, Ashish Gautam, Xi Zhang and Prasanna Date from Oak Ridge National Laboratory and Kevin Zhu from George Mason University Washington

A tutorial for two neuromorphic computing tools: SuperNeuro and NeuroCoreX.

SuperNeuro is a fast and scalable Python-based simulator for neuromorphic computing. It supports homogeneous and heterogeneous neuromorphic simulations as well as GPU acceleration. We will present a brief overview of the different evaluation modes within SuperNeuro: the Matrix Computation (MAT) mode and the Agent-Based Modeling (ABM) mode. The users will be guided through the process of installing SuperNeuro, setting up their networks using the SuperNeuro API, defining connectivity within the spiking neural networks (SNNs), and leveraging different hardware backends for accelerating the simulations. SuperNeuro is tightly integrated with NeuroCoreX, which is an FPGA-based neuromorphic hardware platform that enables seamless translation of simulated SNNs from software to hardware execution. This integration allows users to validate algorithmic concepts, learning rules, and timing dynamics in both simulated and physical environments, thereby promoting a unified neuromorphic co-design workflow. We will demonstrate how a network written in SuperNeuro can be run on NeuroCoreX seamlessly.

More info about the tutorial

16:30
END of NICE 2026
Contact: kindler@kip.uni-heidelberg.de