NICE 2026 - Agenda

Tuesday, 24 March 2026
08:30
NICE 2026 - day I

NICE 2026 takes place at the Historic Academy of Medicine, 875 W Peachtree Street NW, Atlanta, GA 30309, United States of America. See the venue page for details.

08:30‑09:00
(30 min)
 Registration and coffee
09:00
Session chair: Brad Aimone
09:00‑09:10
(10 min)
 Welcome to NICE 2026
Jennifer Hasler (Georgia Institute of Technology)
09:10‑09:55
(45 min)
 NICE organisers round
(The NICE organisers)
09:55‑10:20
(25+5 min)
 Keynote: Infrastructure for Neuromorphic System Design
show presentation.pdf (public accessible)

Rajit Manohar (Yale)
10:25‑10:55
(30 min)
 Coffee break
10:55‑11:20
(25+5 min)
 Memory Trade-Offs in Neuromorphic Communication Strategies of the FlyWire Connectome on Loihi 2
show presentation.pdf (public accessible)

Felix Wang, Bradley Theilman, Fred Rothganger, William Severa, Craig Vineyard and James Aimone

Felix Wang (Sandia National Laboratories)
11:25‑11:50
(25+5 min)
 Local host talk: Advancing Neuromorphic Hardware using Recent Advancements in Analog Computing & Tools
show presentation.pdf (public accessible)
show talk video
Jennifer Hasler (Georgia Institute of Technology)
11:55
Poster teasers

1-minute-1-slide teasers for 9 selected posters

11:55
(1+1 min)
 Poster: Fuzzy Encoding-Decoding to Improve Spiking Q-Learning Performance in Autonomous Driving

Aref Ghoreishee, Abhishek Mishra, Lifeng Zhou, John MacLaren Walsh, Anup Das and Nagarajan Kandasamy

Aref Ghoreishee (Drexel University)
11:57
(1+1 min)
 Poster: SimScore: A Similarity Score for Spiking Neurons

Barnali Basak, Sounak Dey and Arpan Pal

Sounak Dey (Tata Consultancy Services Ltd)
11:59
(1+1 min)
 Poster: interneurOn: A Collapsing-Bound Decision Neuron for Neuromorphic Early Exit

Esra Genc, Zoran Utkovski, Johannes Dommel and Sławomir Stańczak

Zoran Utkovski (Fraunhofer HHI)
12:01
(1+1 min)
 Poster: NeuroHex: Highly-Efficient Hex Coordinate System for Creating World Models to Enable Adaptive AI

Quinn Jacobson, Joe Luo, Jingfei Xu, Shanmuga Venkatachalam, Kevin Wang, Josh Rong and John Paul Shen

Quinn Jacobson (Carnegie Mellon University)
12:03
(1+1 min)
 Poster: From Spikes to Swarms: Evolving Spiking Neural Networks to Create Emergent Swarm Behaviors

Kevin Zhu, Ricardo Vega, Maryam Parsa and Cameron Nowzari

Shay Snyder (George Mason University)
12:05
(1+1 min)
 Poster: Spiking Value Iteration for Solving Markov Decision Processes on Neuromorphic Hardware

Sarah Luca and Felix Wang

Sarah Luca (Sandia National Laboratories)
12:07
(1+1 min)
 Poster: Memory-Augmented Spiking Networks: Synergistic Integration of Complementary Mechanisms for Neuromorphic Vision

Blessing Effiong, Chiung-Yi Tseng, Isaac Nkrumah and Junaid Rehman

Blessing Effiong (Project Phasor / Saint Louis University)
12:09
(1+1 min)
 Poster: Generalized multi-object classification and tracking with sparse feature resonator networks.

Lazar Supic, Alec Mulen and Paxon Frady

Paxon Frady (Intel)
12:11
(1+1 min)
 Poster: NEUKRAG: NEUROMORPHIC KG–RAG WITH SMALL LLMS for Hardware–Algorithm Co-Design

Ramakrishnan Kannan, Ashish Gautam, Robert Patton, Nicholas Haas, Todd Thomas, James Aimone and Thomas Potok

Ramakrishnan Kannan (Oak Ridge National Laboratory)
12:15‑13:45
(90 min)
 Poster lunch

The first 19:

  • All On-Board: Fully On-Chip Neuromorphic Q-Learning with Embedded CartPole Simulation
  • Analyzing the Impact of Numerical Methods in Deep Spiking Neural Network
  • Assessing Procedurally Generated Spiking Networks for Large-Scale Simulations
  • AstroNet: Self-Modulation of Deep Feature Space Via Bottom-Up Saliency Descriptors
  • Autonomous Learning of Attractors for Neuromorphic Computing with Wien Bridge Oscillator Networks
  • Autonomous Reinforcement Learning Robot Control with Intel's Loihi 2 Neuromorphic Hardware
  • Biologically-plausible Shortest Path Algorithm on Real-World Graphs
  • Charge-Domain Leaky-Integrate-and-Fire Neuron with Tunable Parameters Using Ferroelectric Non-Volatile Capacitors
  • Deep Spiking Q-Networks for Turn-based Game Environments: Encoding Choices and Energy Trade-offs
  • Delays in Spiking Neural Networks: A State Space Model Approach
  • DERC: Distributed Edge Reservoir computation
  • Energy-Aware Spike Budgeting for Continual Learning in Spiking Neural Networks for Neuromorphic Vision
  • Evaluating CNN vs. CNN-to-SNN Execution on Neuromorphic Hardware: A Benchmark
  • From Astrocytes to novel algorithms: Multiplexed Gradient Descent and Rhythmic Sharing
  • From Spikes to Swarms: Evolving Spiking Neural Networks to Create Emergent Swarm Behaviors
  • FuseSNN : Spiking Attention-driven Multi-Sensor Fusion for Energy Efficient Fall Detection at Edge
  • Fuzzy Encoding-Decoding to Improve Spiking Q-Learning Performance in Autonomous Driving
  • Generalized multi-object classification and tracking with sparse feature resonator networks.
  • High-Speed Vision-Based Control with Neuromorphic Imagers
13:45
Session chair: Luke Hanks, Johannes Schemmel
13:45‑14:10
(25+5 min)
 A Compute and Communication Runtime Model for Loihi 2

Jonathan Timcheck, Alessandro Pierro and Sumit Bam Shrestha

Jonathan Timcheck (Intel)
14:15‑14:25
(10+5 min)
 Quadratic Integrate-and-Fire Neurons as Differentiable Units for Scientific Machine Learning
show presentation.pdf (public accessible)
show talk video

Ruyin Wan, George Em Karniadakis and Panos Stinis

Ruyin Wan (Brown University)
14:30‑14:45
(15 min)
 Group photo

Please note: the group photo will be published publicly. By appearing in the photo, you grant permission for its public release.


And a "Women in Neuromorphic" group photo:

14:45‑15:15
(30 min)
 Break
15:15‑15:35
(20+5 min)
 The Neuromorphic Commons (THOR) Goes Live: Phase 1 Challenge Launch
Dhireesha Kudithipudi (UT San Antonio)
15:40‑16:25
(45 min)
 Open mic session
16:30
End of day I

Afterwards: possibility to visit Jennifer Hasler's lab.


Wednesday, 25 March 2026
08:30
NICE 2026 - day II
08:30
Session chair: Catherine Lacy, Suma Cardwell
08:30‑08:45
(15 min)
 Delayed start
08:45‑09:10
(25+5 min)
 Keynote: Training SNNs with exact gradients: Progress and Challenges
show presentation.pdf (public accessible)
show talk video

Thomas Nowotny (University of Sussex)
09:15‑09:25
(10+5 min)
 On the Status, Requirements, and Expectations of Neuro-Inspired High-Performance Computing
show presentation.pdf (public accessible)

Johannes Gebert, Qifeng Pan, Lukas Stockmann, Hartwig Anzt and Christian Mayr

Johannes Gebert (High-Performance Computing Center Stuttgart (HLRS), Germany)
09:30‑09:55
(25+5 min)
 The BrainScaleS-2 multi-chip system: Interconnecting continuous-time neuromorphic compute substrates

Joscha Ilmberger and Johannes Schemmel

Joscha Ilmberger (Heidelberg University)
10:00‑10:10
(10+5 min)
 Multi-Timescale Conductance Spiking Networks: A Sparse, Gradient-Trainable Framework with Rich Firing Dynamics for Enhanced Temporal Processing
show presentation.pdf (public accessible)

Alex Fulleda-Garcia, Saray Soldado-Magraner and Josep Maria Margarit-Taulé

Alex Fulleda-Garcia (IMB-CNM-CSIC, Spain)
10:15‑10:20
(5 min)
 Message from an Army Research Labs program manager
show presentation.pdf (public accessible)
Chou Hung (Army Research Office (ARO))
10:20‑10:45
(25 min)
 Coffee break
10:45‑11:10
(25+5 min)
 Privacy-preserving fall detection at the edge using Sony IMX636 event-based vision sensor and Intel Loihi 2 neuromorphic processor
show presentation.pdf (public accessible)
show talk video

Lyes Khacef, Philipp Weidel, Susumu Hogyoku, Harry Liu, Claire Alexandra Bräuer, Shunsuke Koshino, Takeshi Oyakawa, Vincent Parret, Yoshitaka Miyatani, Mike Davies and Mathis Richter

Lyes Khacef (Sony Advanced Visual Sensing)
11:15‑11:25
(10+5 min)
 Training Spiking Neural Networks on Multi-chip Analog Neuromorphic Hardware

Elias Arnold, Yannik Stradmann, Joscha Ilmberger, Eric Müller and Johannes Schemmel

Yannik Stradmann (Ruprecht-Karls-Universitaet Heidelberg), Joscha Ilmberger (Ruprecht-Karls-Universitaet Heidelberg)
11:30‑11:55
(25+5 min)
 Critical Spike Attribution for Feature Importance in Spiking Neural Networks
show presentation.pdf (public accessible)
show talk video

Jack Klawitter and Konstantinos P. Michmizos

Jack Klawitter (Rutgers University)
12:00‑12:10
(10+5 min)
 Predicting Price Movements in High-Frequency Financial Data with Spiking Neural Networks
show presentation.pdf (public accessible)
show talk video

Brian Ezinwoke and Oliver Rhodes

Brian Ezinwoke (University College London)
12:15‑13:30
(75 min)
 Poster lunch
  • In-hardware learning of multilayer spiking neural networks on FPGA
  • interneurOn: A Collapsing-Bound Decision Neuron for Neuromorphic Early Exit
  • Memory-Augmented Spiking Networks: Synergistic Integration of Complementary Mechanisms for Neuromorphic Vision
  • NEUKRAG: NEUROMORPHIC KG–RAG WITH SMALL LLMS for Hardware–Algorithm Co-Design
  • NeuroAI Temporal Neural Networks (NeuTNNs): Microarchitecture and Design Framework for Specialized Neuromorphic Processing Units
  • NeuroHex: Highly-Efficient Hex Coordinate System for Creating World Models to Enable Adaptive AI
  • NeuroMatrix: Hardware-Realistic 300-Neuron Cortical Simulation Framework for Neuromorphic System Design
  • Real-Time Neuromorphic Control of an Underactuated Ball-and-Plate System using Intel Loihi
  • Scenario-Aware Control of Segmented Ladder Bus: Design and FPGA Implementation
  • Self-Supervised Spiking Neural Networks via Dual-Path Temporal Alignment
  • SimScore: A Similarity Score for Spiking Neurons
  • SpikeViT: A Memory-Efficient Mobile Spiking Vision Transformer
  • Spiking Value Iteration for Solving Markov Decision Processes on Neuromorphic Hardware
  • Sugar, Serenades, Swarms, and Synapses: Understanding Neural Pathways Associated with Feeding and Courtship via Swarm Optimization
  • SuperNeuroABM: A GPU-based multi-agent co-design simulation framework for neuromorphic computing
  • Temporal-ASL-DVS: A Temporally Rich PseudoDVS American Sign Language Dataset
  • The Scalability of Spatial Perception and Learning with Biologically Inspired Mapping Algorithms
  • Tracking Neural Plasticity Through Incremental Spiking Neural Networks
  • Hardware-Software co-design for on-chip IED detection with graph-based network analysis
13:30
Session chair: Connor White, Ashish Gautam
13:30‑13:55
(25+5 min)
 Real-time processing of analog signals on accelerated neuromorphic hardware

Yannik Stradmann, Johannes Schemmel, Mihai A. Petrovici and Laura Kriener

Yannik Stradmann (Institute of Computer Engineering, Heidelberg University, Germany)
14:00‑14:25
(25+5 min)
 DARWIN: Hardware Efficient Analog In-Memory Computing Using Dendritic ARborized Weights In Neural Networks
show presentation.pdf (public accessible)
show talk video

Ming-Jay Yang, Dimitrios Spithouris, Johannes Hellwig, Pascal Nieters, Regina Dittmann, Gordon Pipa and John Paul Strachan

Ming-Jay Yang (Forschungszentrum Jülich, Germany)
14:30
(1 min)
 ICONS announcement

ICONS website https://iconsneuromorphic.cc/

Prasanna Date (Oak Ridge National Laboratory)
14:31‑15:00
(29 min)
 Coffee break
15:00‑15:25
(25+5 min)
 Intrinsic Numerical Robustness and Fault Tolerance in a Neuromorphic Algorithm for Scientific Computing

Bradley H. Theilman and James B. Aimone

Bradley Theilman (Sandia National Laboratories)
15:30‑15:40
(10+5 min)
 Neuromorphic Eye Tracking for Low-Latency Pupil Detection
show presentation.pdf (public accessible)
video (restricted access)

Paul Hueber, Luca Peres, Florian Pitters, Alejandro Gloriani and Oliver Rhodes

Oliver Rhodes (University of Manchester, United Kingdom)
15:45‑16:30
(45 min)
 Open mic session
16:30
End of day II

Thursday, 26 March 2026
08:00
NICE 2026 - day III

Food should be available from 08:00 onwards

08:30
Session chair: Pranav Mathews, Felix Wang
08:30‑08:45
(15 min)
 Delayed start
08:45‑09:10
(25+5 min)
 Keynote: REM-like Consolidation: Same Performance, Sparser Representations
show presentation.pdf (public accessible)

Maxim Bazhenov (University of California, San Diego)
09:15‑09:25
(10+5 min)
 Predicting band-gap of Inorganic Materials using Neuromorphic Graph Learning
show presentation.pdf (public accessible)
show talk video

Ian Mulet, Derek Gobin, Ashish Gautam, Kevin Zhu, Prasanna Date, Shruti Kulkarni, Seung-Hwan Lim, Guojing Cong, Maryam Parsa, Thomas Potok and Catherine Schuman

Ian Mulet (University of Tennessee Knoxville)
09:30‑09:40
(10+5 min)
 A principled procedure for designing brain-derived SWaP optimized neuronal units for low-power neuromorphic analog computation and digital communication.
show presentation.pdf (public accessible)
Chad Harper (UC Berkeley)
09:45‑10:15
(30 min)
 Coffee break
10:15‑10:25
(10+5 min)
 δ Multiplexed Gradient Descent: Perturbative Learning with Astrocytes
show presentation.pdf (public accessible)

Ryan O'Loughlin, Bakhrom Oripov, Nicholas Skuda, Noah Chongsiriwatana, Ian Whitehouse, Wolfgang Losert, Bradley Hayes, Adam McCaughan and Sonia Buckley

Nicholas Skuda (NIST Boulder)
10:30‑10:40
(10+5 min)
 NeuroCoreX: An Open-Source FPGA-Based Spiking Neural Network Emulator with On-Chip Learning

Ashish Gautam, Prasanna Date, Shruti Kulkarni, Ian Mulet, Kevin Zhu, Robert Patton and Thomas Potok

Ashish Gautam (Oak Ridge National Lab)
10:45‑11:10
(25+5 min)
 Fully Spiking Linear Quadratic Regulator Control via a Neuromorphic Solver for the Continuous Algebraic Riccati Equation
show presentation.pdf (public accessible)
show talk video

Graeme Damberger, Omar Alejandro Garcia Alcantara, Eduardo S. Espinoza, Luis Rodolfo Garcia Carrillo, Terrence C. Stewart and Chris Eliasmith

Graeme Damberger (University of Waterloo)
11:15‑11:25
(10+5 min)
 Amortized Inference of Neuron Parameters on Analog Neuromorphic Hardware
show presentation.pdf (public accessible)

Jakob Kaiser, Eric Müller and Johannes Schemmel

Jakob Kaiser (Institute of Computer Engineering, Heidelberg University, Germany)
11:30‑11:55
(25+5 min)
 Training event-based neural networks with exact gradients via Differentiable ODE Solving in JAX
show presentation.pdf (public accessible)

Lukas König, Manuel Kuhn, David Kappel and Anand Subramoney

Lukas König (University of Bielefeld)
12:00‑13:00
(60 min)
 Break for lunch (food provided)
13:00
Session chair: Praveen Raj, Brad Theilman
13:00‑13:10
(10+5 min)
 Late breaking news: Bridging Neuromorphic and Traditional Computing Performance: An Information-Theoretic Approach
show presentation.pdf (public accessible)

Max Hawkins and Richard Vuduc

Max Hawkins (Georgia Institute of Technology)
13:15‑13:25
(10+5 min)
 Late breaking news: NOVA: Real-Time Visualization and Streaming for Neuromorphic Event Cameras

Andrew Lin, Daniel Querrey, Eric McGonagle, John Langs, Nai Yun Wu, John Ho, Nick Almeter, Matthew Fisher, Utsawb Lamichhane, Praket Desai, David Mascarenas and Tracy Hammond (Texas A&M University and Los Alamos National Laboratory)

David Mascarenas (Los Alamos National Laboratory)
13:30‑13:40
(10+5 min)
 Hardware-Algorithm Co-design for On-Chip SNN: SOLO (Spatial Online Learning at Once) with analog flash device

Sungmin Lee

Sungmin Lee (Seoul National University)
13:45‑14:10
(25+5 min)
 Graph Reservoir Networks for Prediction of Spatiotemporal Systems

William Chapman, Darby Smith, Corinne Teeter and Nicole Jackson

William Chapman (Sandia National Laboratories)
14:15‑14:25
(10+5 min)
 Dynamic Heuristic Neuromorphic Solver for the Edge User Allocation Problem with Bayesian Confidence Propagation Neural Network
show presentation.pdf (public accessible)
show talk video

Kecheng Zhang, Anders Lansner, Ahsan Javed Awan, Naresh Balaji Ravichandran and Pawel Herman

We propose a neuromorphic solver for the NP-hard Edge User Allocation problem using an attractor network with Winner-Takes-All (WTA) mechanism implemented with the Bayesian Confidence Propagation Neural Network (BCPNN) framework. Unlike previous energy-based attractor networks, our solver uses dynamic heuristic biasing to guide allocations in real time and introduces a “no allocation” state to each WTA motif, achieving near-optimal performance with an empirically upper-bounded number of time steps. The approach is compatible with neuromorphic architectures and may offer improvements in energy efficiency.
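The core idea of a WTA motif with a "no allocation" state can be illustrated with a small sketch. This is not the authors' BCPNN implementation; the scoring, the capacity-based bias, and all parameter values are hypothetical stand-ins for the dynamic heuristic biasing the abstract describes:

```python
# Illustrative sketch (NOT the authors' BCPNN solver): each user is a
# winner-takes-all (WTA) motif choosing among edge servers plus an
# explicit "no allocation" state; a heuristic bias favors servers that
# still have capacity.

NO_ALLOC = -1  # index reported for the "no allocation" state

def wta_allocate(utilities, capacities, bias=1.0):
    """Greedy WTA pass. utilities[u][s] scores user u on server s."""
    remaining = list(capacities)
    allocation = []
    for scores in utilities:
        # Servers with no capacity left cannot win; others get a bias.
        biased = [s + bias if remaining[i] > 0 else float("-inf")
                  for i, s in enumerate(scores)]
        biased.append(0.0)  # baseline score of the "no allocation" state
        winner = max(range(len(biased)), key=lambda i: biased[i])
        if winner == len(scores):
            allocation.append(NO_ALLOC)
        else:
            allocation.append(winner)
            remaining[winner] -= 1
    return allocation

alloc = wta_allocate(utilities=[[0.9, 0.2], [0.8, 0.1], [-2.0, -3.0]],
                     capacities=[1, 1])
```

With these made-up utilities, the third user's scores fall below the "no allocation" baseline, so that motif settles into the no-allocation state.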

Ahsan Javed Awan (Ericsson)
14:30‑15:00
(30 min)
 Coffee break
15:00‑15:25
(25+5 min)
 VS-Graph: Scalable and Efficient Graph Classification Using Hyperdimensional Computing
show presentation.pdf (public accessible)

Hamed Poursiami, Shay Snyder, Guojing Cong, Thomas Potok and Maryam Parsa

Hamed Poursiami (George Mason University)
15:30‑15:40
(10 min)
 Tutorial day information and messages from the local host

The tutorials will take place at the TSRB (Technology Square Research Building), 85 5th St NW, Atlanta. The TSRB is the building where Jennifer Hasler's group has its lab; most attendees visited it at the end of day I of NICE. (Openstreetmap link)

Rooms 118 and 132 (ground floor)

Jennifer Hasler (Georgia Institute of Technology)
15:40‑15:45
(5 min)
 NICE 2027
Thomas Nowotny (University of Sussex)
15:45‑16:30
(45 min)
 Open mic session
16:30
End of the NICE 2026 conference part

Friday, 27 March 2026
08:30
NICE 2026 tutorial day

Venue: Openstreetmap link

  • TSRB (Technology Square Research Building, 85 5th St NW, Atlanta) - the building where Jennifer Hasler's group has its lab; most attendees visited it at the end of day I of NICE
  • Rooms 118 and 132 (ground floor)
08:30‑10:30
(120 min)
 1st tutorial session
Tutorial: BrainScaleS hands-on - room 118
Tutorial: N2A – neural programming language and workbench - room 132
Jakob Kaiser (Ruprecht-Karls-Universitaet Heidelberg), Joscha Ilmberger (Ruprecht-Karls-Universitaet Heidelberg), Yannik Stradmann (Ruprecht-Karls-Universitaet Heidelberg)

Johannes Schemmel, Yannik Stradmann, Joscha Ilmberger, Jakob Kaiser and Björn Kindler from Heidelberg University (Germany)

In this tutorial, participants have the chance to explore BrainScaleS-2, one of the world’s most advanced analog platforms for neuromorphic computing. For the tutorial, participants will use a web browser on their own laptop for remote access to BrainScaleS-2 systems via the EBRAINS Research Infrastructure. After a short introduction to neuromorphic computing and spiking neural networks, they will learn how to express and run experiments on the neuromorphic platform through either the (machine-learning targeting) PyTorch- or the (neuroscience targeting) PyNN-based software interfaces. This will allow them to gain insights into the unique properties and challenges of analog computing and to exploit the versatility of the system by exploring user-defined learning rules. Each participant will have the opportunity to follow a prepared tutorial or branch-off and implement their own project on the systems.

  • Participants can use their EBRAINS account (available free of charge at https://ebrains.eu/register) or a guest account during the tutorial. With their own account, participants can continue using the neuromorphic compute systems after the tutorial ends.
  • For the tutorial we will use the non-persistent quick start on EBRAINS.

More Info about the tutorial

Fred Rothganger (Sandia National Labs)

This tutorial will introduce the N2A programming language and its associated IDE. Upon completion, users will be able to create new neuron types and new applications, and run them on their local machine (or on SpiNNaker-2 if hardware is available). This will be a hands-on tutorial. N2A may be downloaded from https://github.com/sandialabs/n2a and run on your personal laptop.

Typically, neuromorphic machines execute a simple dynamical system called the Leaky Integrate and Fire (LIF) model, analogous to a logic gate in conventional machines, and communicate between these units using single-bit events called “spikes”. However, neuromorphic device makers are moving away from simple LIF dynamics toward more general neurons. The SpiNNaker system has always been general-purpose programmable due to its use of ARM cores, and SpiNNaker-2 has the capacity to send up to four 32-bit floats with each event. Intel’s second-generation Loihi also supports graded spikes and assembly-level programming of neuron models. The future of neuromorphic computing will likely be neurons with complex dynamics combined with high-volume short-packet communication.

Several frameworks exist for programming neuromorphic systems. The challenge is to enable general-purpose programming of neuron types while maintaining cross-platform portability. Remarkably, these are complementary goals: with an appropriate level of abstraction, it is possible to “write once, run anywhere”. N2A’s unique approach allows the user to specify the dynamics for each class of neuron by simply listing its equations. The tool then compiles these for a given target platform. The structure of the network and the interactions between neurons are specified in the same equation language. Network structures can be arbitrarily deep and complex. The language supports component creation, extension, reuse and sharing. The tool comes with a base library that supplies common neuroscience models as well as components for specific neuromorphic devices.
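The LIF dynamics mentioned above are compact enough to write down directly. The following is a plain-Python forward-Euler sketch of a textbook LIF neuron (not N2A's equation syntax, and all parameter values are illustrative):

```python
# Minimal leaky integrate-and-fire (LIF) neuron - the "logic gate" of
# most neuromorphic machines. Forward-Euler integration in plain Python;
# parameters are illustrative, not taken from any particular platform.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Integrate dv/dt = (v_rest - v + I) / tau; emit a spike (True)
    and reset to v_rest whenever v crosses v_thresh."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += dt * (v_rest - v + i_t) / tau
        if v >= v_thresh:
            spikes.append(True)
            v = v_rest  # reset after the spike
        else:
            spikes.append(False)
    return spikes

# A constant supra-threshold current makes the neuron fire periodically.
spikes = simulate_lif([1.5] * 40)
```

A constant drive of 1.5 pushes the membrane toward a fixed point above threshold, so the neuron spikes at a regular interval set by tau.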

10:30‑11:00
(30 min)
 Break / next tutorial setup
11:00‑13:00
(120 min)
 2nd tutorial session
Tutorial: Introduction to Fugu: A Framework for Composing Neural Algorithms - room 118
Tutorial: Simulation Tool for Asynchronous Cortical Streams (STACS) - room 132
Michael Krygier (Sandia National Laboratories)

In this tutorial, we will begin with an overview of the basic algorithm design in Fugu and typical workflows.

Fugu is an open-source, high-level Python programming framework designed for developing spiking algorithms in terms of computational graphs. It provides a hardware-independent mechanism for linking a variety of scalable spiking neural algorithms from different sources. Fugu is intended to be suitable for a wide range of neuromorphic applications, including machine learning, scientific computing, and more brain-inspired neural algorithms. Unlike other tools, Fugu separates the task of programming applications that may leverage neuromorphic hardware from the design of spiking neural algorithms and the specific details of neuromorphic hardware platforms. This allows users to focus on developing their applications without needing to be experts in neural computation or neuromorphic hardware.

To design Fugu bricks and run these algorithms on hardware, users first construct a computational graph of their application using Fugu’s API. The API provides a simple and intuitive way to define the graph, and Fugu takes care of the rest, including automating the construction of a graphical intermediate representation of the spiking neural algorithm. The output of Fugu is a single NetworkX graph that fully describes the spiking neural algorithm, which can then be compiled down to platform-specific code using one of Fugu’s hardware backends or run on Fugu’s reference simulator. By providing a high-level abstraction and automating the process of constructing and compiling spiking neural algorithms, Fugu makes it easier for users to develop and deploy neuromorphic applications, and enables the exploration of new and innovative uses for neuromorphic computing.
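The brick-composition idea can be sketched in a few lines. This is a hypothetical stand-in, not Fugu's actual API (Fugu emits a NetworkX graph and supports arbitrary graph topologies; this toy restricts itself to a linear chain of bricks for brevity):

```python
# Hypothetical sketch of composing spiking "bricks" into a computational
# graph (NOT Fugu's real API). Each brick is a callable; the graph wires
# one brick's output to the next and a tiny reference "simulator"
# evaluates the chain.

class BrickGraph:
    def __init__(self):
        self.bricks = {}   # name -> callable operating on spike data
        self.chain = []    # wiring order (linear chain for simplicity)

    def add_brick(self, name, fn):
        self.bricks[name] = fn
        self.chain.append(name)

    def run(self, inputs):
        """Stand-in for a reference simulator: evaluate bricks in order."""
        data = inputs
        for name in self.chain:
            data = self.bricks[name](data)
        return data

g = BrickGraph()
g.add_brick("encode", lambda xs: [x > 0 for x in xs])  # threshold encoder brick
g.add_brick("count", lambda spikes: sum(spikes))       # spike-counting brick
result = g.run([0.5, -1.0, 2.0])
```

The point mirrored here is the separation of concerns: the application author composes bricks, and a backend (here the trivial `run` loop) decides how the resulting graph is executed.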

Felix Wang (Sandia National Laboratories)

In this tutorial, we will explore how to define networks and take advantage of the parallel capabilities of the spiking neural network (SNN) simulator STACS (Simulation Tool for Asynchronous Cortical Streams, https://github.com/sandialabs/STACS). Developed to be parallel from the ground up, STACS leverages the highly portable Charm++ parallel programming framework, which expresses a paradigm of asynchronous message-driven parallel objects, and supports both large-scale and long-running simulations on high performance computing systems. In addition to the parallel runtime, STACS also implements a memory-efficient distributed network data structure for network construction, simulation, and serialization. In particular, STACS uses a distributed intermediate representation, an SNN extension to the distributed compressed sparse row format, which supports interoperability with graph partitioners to facilitate optimizing communication costs across compute resources. With respect to the neuromorphic computing software ecosystem, this has enabled toolchains for mapping networks onto neuromorphic platforms such as Loihi 2 and SpiNNaker 2.
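For readers unfamiliar with compressed sparse row (CSR) storage, which STACS's distributed representation extends, here is a single-process textbook sketch of packing a synapse list into CSR arrays (STACS itself partitions this structure across parallel objects; none of the names below are from STACS):

```python
# Textbook CSR construction for a synapse list (pre, post, weight).
# row_ptr[i]:row_ptr[i+1] delimits the outgoing synapses of neuron i
# within the parallel col/val arrays.

def to_csr(num_neurons, synapses):
    """synapses: iterable of (pre, post, weight) tuples."""
    counts = [0] * num_neurons
    for pre, _, _ in synapses:
        counts[pre] += 1
    row_ptr = [0]
    for c in counts:                 # prefix sum of out-degrees
        row_ptr.append(row_ptr[-1] + c)
    col = [0] * len(synapses)
    val = [0.0] * len(synapses)
    cursor = list(row_ptr[:-1])      # next free slot in each row
    for pre, post, w in synapses:
        col[cursor[pre]] = post
        val[cursor[pre]] = w
        cursor[pre] += 1
    return row_ptr, col, val

row_ptr, col, val = to_csr(3, [(0, 1, 0.5), (0, 2, 0.25), (2, 0, -1.0)])
```

The appeal for SNNs is that a neuron's entire fan-out is one contiguous slice, which is what makes per-partition storage and graph partitioning straightforward.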

More info about the tutorial

13:00‑14:30
(90 min)
 Time for lunch (NO food provided - use the nearby options)
14:30‑16:30
(120 min)
 3rd tutorial session
Tutorial: LiteVLA at the Edge: CPU-Only Vision–Language–Action Control as a Testbed for Neuro-Inspired Robotics - room 118
Tutorial: SuperNeuro + NeuroCoreX Tutorial: Running Fast and Scalable Neuromorphic Simulations - room 132
Mohd Ariful Haque (Clark Atlanta University)

Kishor Datta Gupta, Justin Williams and Mohd. Ariful Haque from Clark Atlanta University

This 2-hour tutorial presents LiteVLA, a lightweight vision–language–action pipeline that runs fully on a Raspberry Pi 4 / TurtleBot-class robot using only CPU resources. We show how its LoRA-adapted SmolVLM backbone and 4-bit NF4 quantization create a constrained, low-power control loop that mirrors many of the design pressures in neuromorphic and non-von-Neumann systems. Participants will walk through the full stack—RGB+action data collection, parameter-efficient fine-tuning, hybrid-precision quantization, and ROS 2 integration with asynchronous Action Chunking—then discuss how such edge VLA controllers can benchmark algorithms, latency, and robustness before porting them to neuromorphic hardware or event-driven sensors. The tutorial targets NICE attendees interested in robotics and edge AI applications of neuro-inspired computing.


show presentation.pdf (public accessible)
Ashish Gautam (Oak Ridge National Lab), Xi Zhang (Oak Ridge National Laboratory), Prasanna Date (Oak Ridge National Laboratory)

Shruti R. Kulkarni, Ashish Gautam, Xi Zhang and Prasanna Date from Oak Ridge National Laboratory and Kevin Zhu from George Mason University

A tutorial for two neuromorphic computing tools: SuperNeuro and NeuroCoreX.

SuperNeuro is a fast and scalable Python-based simulator for neuromorphic computing. It supports homogeneous and heterogeneous neuromorphic simulations as well as GPU acceleration. We will present a brief overview of the different evaluation modes within SuperNeuro: the Matrix Computation (MAT) mode and the Agent-Based Modeling (ABM) mode. Users will be guided through the process of installing SuperNeuro, setting up their networks using the SuperNeuro API, defining connectivity within the spiking neural networks (SNNs), and leveraging different hardware backends for accelerating the simulations. SuperNeuro is tightly integrated with NeuroCoreX, an FPGA-based neuromorphic hardware platform that enables seamless translation of simulated SNNs from software to hardware execution. This integration allows users to validate algorithmic concepts, learning rules, and timing dynamics in both simulated and physical environments, thereby promoting a unified neuromorphic co-design workflow. We will demonstrate how a network written in SuperNeuro can be run on NeuroCoreX seamlessly.
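The matrix-computation style of evaluation can be illustrated with a toy update step. This is not SuperNeuro's API; it only shows the general idea behind a MAT-mode simulator, where the whole network state advances via one matrix-style combination per time step rather than per-agent message passing, and all parameters are illustrative:

```python
# Illustrative matrix-computation (MAT) style network update (NOT
# SuperNeuro's API): synaptic input for every neuron is formed in one
# pass from the weight matrix and the current spike vector.

def mat_step(weights, v, spikes, threshold=1.0, leak=0.9):
    """weights[i][j]: synapse from neuron i to neuron j.
    Returns (new membrane potentials, output spike vector)."""
    n = len(v)
    # Synaptic input: sum the rows of all neurons that spiked.
    syn = [sum(weights[i][j] for i in range(n) if spikes[i])
           for j in range(n)]
    v_new = [leak * v[j] + syn[j] for j in range(n)]
    out = [vj >= threshold for vj in v_new]
    # Reset neurons that fired.
    v_new = [0.0 if fired else vj for vj, fired in zip(v_new, out)]
    return v_new, out

W = [[0.0, 1.2],   # neuron 0 drives neuron 1 with weight 1.2
     [0.0, 0.0]]
v, out = mat_step(W, [0.0, 0.0], spikes=[True, False])
```

In a real MAT-mode simulator the list comprehensions would be dense or sparse matrix operations, which is what makes GPU acceleration natural.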

More info about the tutorial

16:30
End of NICE 2026
Contact: kindler@kip.uni-heidelberg.de