
NICE 2023 - Agenda

Tuesday, 11 April 2023
08:00
NICE 2023 - day 1

("Theory day")

Agenda as .pdf download

The agenda as of 11 April 2023 can be downloaded here as .pdf.

Venue

UTSA Student Union, H-E-B University Center, 1 UTSA Circle, San Antonio, TX 78249, United States of America.

Show a map of the venue.

Also available: a schematic view as .pdf

Registration

Please follow the link on the registration page to register for the workshop.

08:00‑08:30
(30 min)
 Registration, coffee
08:30
Session chair: Dhireesha Kudithipudi
08:30‑08:35
(5+5 min)
 Welcome
08:40‑08:45
(5+5 min)
 Opening by Dr. Taylor Eighmy, President, University of Texas at San Antonio
08:50‑09:35
(45+5 min)
 Organisers round
  • Dr. Dhireesha Kudithipudi, UT San Antonio
  • Dr. Brad Aimone, Sandia National Laboratories
  • Dr. Johannes Schemmel, Kirchhoff-Institute for Physics, Heidelberg University
  • Dr. Suma George Cardwell, Sandia National Laboratories
09:40‑10:25
(45+5 min)
 Keynote: Neuroevolution: Beyond human design of neural networks

Neuroevolution, or design of neural networks through evolutionary algorithms, has long been used to solve tasks where gradients are not available, such as partially observable decision tasks. Recently it has also proven useful in designing complex deep learning architectures. I will outline how the approach can result in complex architectures beyond human designs, complex behavior beyond human expectations, and solutions that combine human expertise and evolutionary discovery synergistically, with examples in vision, language, robotics, game playing, and decision making.

Risto Miikkulainen (UT Austin)
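To make the idea concrete, here is a minimal, hypothetical neuroevolution sketch (an editor's illustration, not Miikkulainen's method): a simple truncation-selection evolution strategy evolving the weights of a tiny feed-forward network on XOR, a task solvable without gradients. All names and hyperparameters are assumptions.

```python
import math
import random

random.seed(0)

def forward(w, x):
    # Tiny 2-2-1 tanh network; w is a flat list of 9 weights (incl. biases).
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # Negative mean squared error on XOR: higher is better, 0 is perfect.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR) / len(XOR)

def evolve(pop_size=40, n_parents=10, gens=200, sigma=0.3):
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:n_parents]
        # Refill the population with mutated copies of the fittest individuals.
        pop = parents + [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                         for _ in range(pop_size - n_parents)]
    return max(pop, key=fitness)

best = evolve()
```

No gradient is ever computed; selection pressure alone drives the weights toward a solution.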
10:30‑11:00
(30 min)
 Break
11:00‑11:25
(25+5 min)
 How Unsupervised Learning During Sleep Could Contribute to Temporal Pattern Recognition and The Gain of Insight
Itamar Lerner (University of Texas at San Antonio)
11:30‑11:40
(10+5 min)
 AEStream: Accelerated event-based processing with coroutines

Authors: Jens Egholm Pedersen and Jörg Conradt.

We present a novel method to efficiently process event streams on conventional hardware, along with a freely available implementation: AEStream. AEStream provides at least 2x throughput compared to conventional parallelization mechanisms and at least 5x faster memory management on GPUs. Our method operates directly on event-address representations, allowing us to (1) freely combine input-output pairs and (2) directly interface event-based peripherals, such as neuromorphic hardware and event cameras. github.com/norse/aestream

Jens Egholm Pedersen (KTH Royal Institute of Technology)
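For readers unfamiliar with event-address representations, the sketch below shows the basic idea of accumulating address-event tuples into a frame. This is a generic illustration with assumed names, not the AEStream API (see github.com/norse/aestream for that).

```python
# Hypothetical AER-style events: (timestamp_us, x, y, polarity).
# Accumulate them into a signed 2D count frame.

def events_to_frame(events, width, height):
    frame = [[0] * width for _ in range(height)]
    for t, x, y, p in events:
        frame[y][x] += 1 if p else -1  # signed event count per pixel
    return frame

events = [(10, 0, 0, 1), (12, 0, 0, 1), (15, 1, 1, 0)]
frame = events_to_frame(events, 2, 2)
# frame[0][0] == 2 (two ON events), frame[1][1] == -1 (one OFF event)
```

Because each event carries its own address, streams from different sources can be merged or routed freely, which is the property the abstract's input-output pairing exploits.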
11:45‑12:10
(25+5 min)
 Goemans-Williamson MAXCUT approximation algorithm on Loihi

Authors: Bradley Theilman and James B. Aimone

(Unfortunately no slides were captured in the talk video, so no video of this talk is available here)

Bradley Theilman (Sandia National Laboratories)
12:15‑12:25
(10+5 min)
 Work in Progress: A Network of Sigma–Pi Units producing Higher-order Interactions for Reservoir Computing

Authors: Denis Kleyko, Christopher Kymn, Bruno A. Olshausen, Friedrich T. Sommer and E. Paxon Frady.

(Unfortunately no slides were captured in the talk video, so no video of this talk is available here)

This presentation will introduce a way of computing higher-order features, which have recently been proposed for use within reservoir computing, via compositional distributed representations formed by the framework of hyperdimensional computing. At the implementation level, the proposed mechanism can be realized as a network of Sigma-Pi neurons.

Denis Kleyko (RISE)
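The compositional mechanism hinted at above can be sketched with standard hyperdimensional-computing primitives: binding two bipolar hypervectors by elementwise multiplication is exactly a "Pi" (product) stage. This is a generic HDC illustration with assumed names, not the authors' implementation.

```python
import random

random.seed(1)

D = 2048  # hypervector dimensionality

def rand_hv():
    # Random bipolar hypervector.
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    # Elementwise product: each output component is a product of inputs,
    # i.e. the "Pi" part of a Sigma-Pi unit. Self-inverse for bipolar vectors.
    return [ai * bi for ai, bi in zip(a, b)]

def sim(a, b):
    # Normalised dot product (cosine similarity for bipolar vectors).
    return sum(ai * bi for ai, bi in zip(a, b)) / D

x, y = rand_hv(), rand_hv()
xy = bind(x, y)  # a higher-order feature of x and y
# xy is quasi-orthogonal to x and y, yet binding with y recovers x exactly.
```

The bound vector behaves as a new, distinct feature, which is what makes such products usable as higher-order reservoir features.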
12:30‑13:30
(60 min)
lunch
13:30
Session chair: Johannes Schemmel
13:30‑13:55
(25+5 min)
 Full-stack Co-Design for Neuromorphic Systems
show talk video

(this talk video has a gap)

We present major design issues for large-scale neuromorphic computing systems, and some of the trade-offs in designing hardware and software for such systems. Many of the detailed hardware trade-offs that have significant impact on overall energy efficiency depend strongly on the networks being mapped to the hardware. We describe ongoing work on creating a quantitative, full-stack approach to evaluating the trade-offs in neuromorphic system design, enabled by recently developed open-source tools for the design and implementation of asynchronous digital systems.

Rajit Manohar (Yale University)
14:00‑14:25
(25+5 min)
 Modeling Coordinate Transformations in the Dragonfly Nervous System

Authors: Claire Plunkett and Frances Chance.

Coordinate transformations are a fundamental operation that must be performed by any animal relying upon sensory information to interact with the external world. We present a neural network model that performs a coordinate transformation from the dragonfly eye's frame of reference to the body's frame of reference while hunting. We demonstrate that the model successfully calculates turns required for interception, and discuss how future work will compare our model with biological dragonfly neural circuitry and guide neural-inspired neuromorphic implementations of coordinate transformations.

Claire Plunkett (Sandia National Laboratories)
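Stripped of all biological detail, the operation at the heart of this talk is a change of reference frame, i.e. a rotation. The sketch below is an editor's illustration of that underlying linear map only, not the authors' network model.

```python
import math

def rotate(point, angle):
    # Rotate a 2D point by `angle` radians: the basic linear map underlying
    # a frame-of-reference change, e.g. from eye-centred to body-centred
    # coordinates when the head is yawed by `angle`.
    x, y = point
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

# A target straight ahead in the eye frame, with the head yawed 90 degrees,
# appears rotated by 90 degrees in the body frame:
bx, by = rotate((1.0, 0.0), math.pi / 2)
# (bx, by) is approximately (0.0, 1.0)
```

A neural implementation must realise this map with populations of neurons rather than an explicit matrix, which is the modeling question the talk addresses.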
14:30‑14:55
(25+5 min)
 Beyond Neuromorphics: Non-Cognitive Applications of SpiNNaker2
show talk video
Christian Mayr (TU Dresden)
15:00‑15:30
(30 min)
 break
15:30‑15:40
(10+5 min)
 Online training of quantized weights on neuromorphic hardware with multiplexed gradient descent
show talk video

Authors: Adam McCaughan, Cory Merkel, Bakhrom Oripov, Andrew Dienstfrey, Sae Woo Nam and Sonia Buckley.

Adam McCaughan (NIST)
15:45‑16:10
(25+5 min)
 NEO: Neuron State Dependent Mechanisms for Efficient Continual Learning
show talk video

Authors: Anurag Daram and Dhireesha Kudithipudi.

Continual learning is challenging for deep neural networks, mainly because of catastrophic forgetting, the tendency for accuracy on previously trained tasks to drop when new tasks are learned. Although several biologically-inspired techniques have been proposed for mitigating catastrophic forgetting, they typically require additional memory and/or computational overhead. Here, we propose a novel regularization approach that combines neuronal activation-based importance measurement with neuron state-dependent learning mechanisms to alleviate catastrophic forgetting in both task-aware and task-agnostic scenarios. We introduce a neuronal state-dependent mechanism driven by neuronal activity traces and selective learning rules, with storage requirements for regularization parameters that grow asymptotically slower with network size - compared to schemes that calculate weight importance, whose storage grows quadratically. The proposed model, NEO, is able to achieve performance comparable to other state-of-the-art regularization based approaches to catastrophic forgetting, while operating with a reduced memory overhead.

Anurag Daram (UTSA)
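The storage argument in the abstract (per-neuron vs per-weight importance) can be made concrete with a small counting sketch. The function names and the trace rule below are assumptions for illustration, not NEO's actual code.

```python
# Hypothetical sketch of the storage argument, not the NEO implementation.

def per_neuron_storage(widths):
    # One importance value per neuron: grows linearly with network size.
    return sum(widths)

def per_weight_storage(widths):
    # One importance value per weight (as in weight-importance schemes):
    # grows with the product of adjacent layer widths, i.e. quadratically.
    return sum(a * b for a, b in zip(widths, widths[1:]))

def update_trace(trace, activation, decay=0.9):
    # Exponential per-neuron activity trace, one possible importance signal.
    return [decay * t + (1 - decay) * abs(a) for t, a in zip(trace, activation)]

widths = [784, 256, 10]
# per-neuron: 1050 regularization parameters
# per-weight: 784*256 + 256*10 = 203264 regularization parameters
```

For this small MLP the per-neuron scheme already stores roughly 200x fewer regularization parameters, and the gap widens with network size.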
16:15‑16:25
(10+5 min)
 Impact of Noisy Input on Evolved Spiking Neural Networks for Neuromorphic Systems
show talk video

Authors: Karan Patel and Catherine Schuman.

Karan Patel (University of Tennessee Knoxville)
16:30‑16:35
(5 min)
 Spotlight: Intel Neuromorphic Deep Noise Suppression Challenge
16:35‑17:30
(55 min)
 Open mic / discussions
17:30
End of the first day of NICE
17:30‑18:00
(30 min)
 (break)
18:00‑18:30
(30 min)
 Shuttle service to downtown area

Shuttle leaves at 18:00h from the meeting place and goes to "UTSA, San Pedro 1" (place of the welcome reception)

18:30‑20:00
(90 min)
 Welcome reception in San Antonio downtown, at UTSA, San Pedro 1, 1st floor lobby

Address of the place: 506 Dolorosa St, San Antonio, TX 78204

(For people using their own car: parking space should likely be available at "Dolorosa Lot")

20:00‑21:00
(60 min)
 1h to explore San Antonio downtown (self guided)
21:00‑21:30
(30 min)
 Shuttle back to UTSA

Shuttle leaves at 21:00h (9:00 pm) and returns to "UTSA main campus" (conference venue)


Wednesday, 12 April 2023
08:00
NICE 2023 - day 2

("Hardware day")

08:00‑08:30
(30 min)
 Breakfast
08:30
Session chair: Suma George Cardwell
08:30‑09:15
(45+5 min)
 Keynote: Versatility, Efficiency, and Resilience in Large-Scale Neuromorphic Intelligence at the Edge
show talk video

We present neuromorphic cognitive computing systems-on-chip implemented in custom silicon compute-in-memory neural and memristive synaptic crossbar array architectures that combine the efficiency of local interconnects with flexibility and sparsity in global interconnects, and that realize a wide class of deeply layered and recurrent neural network topologies with embedded local plasticity for on-line learning, at a fraction of the computational and energy cost of implementation on CPU and GPGPU platforms. Co-optimization across the abstraction layers of hardware and algorithms leverage inherent stochasticity in the physics of synaptic memory devices and neural interface circuits with plasticity in reconfigurable massively parallel architecture towards high system-level accuracy, resilience, and efficiency for natural intelligence at the edge. Adiabatic energy recycling in charge-mode crossbar arrays permit extreme scaling in energy efficiency, approaching that of synaptic transmission in the mammalian brain.

Gert Cauwenberghs (UC San Diego)
09:20‑09:45
(25+5 min)
 All Aboard the Open-Source Neuromorphic Hardware Hype Train
show talk video

Deep learning gained an insane amount of traction due to the availability of miniaturized sensors, data availability, and the broad open-source culture. The chip design flow from RTL to bring-up is amongst the most closed and secretive processes known to man. This means that there is a very high barrier to designing and building integrated circuits. This presentation explores how neuromorphic computing is the perfect battle ground for harnessing recently open-sourced design flows, process design kits, and memory compilers. I will talk about several neuromorphic chips we have taped out in the SkyWater process, including OpenSpike, and how these can promote reproducibility of both software and silicon. After all, open sourcing is what enabled the deep learning hype train. The next station is silicon.

Jason Eshraghian (University of California, Santa Cruz)
09:50‑10:15
(25+5 min)
 Exploring Information-Theoretic Criteria to Accelerate the Tuning of Neuromorphic Level-Crossing ADCs
video (restricted access)

Authors: Ali Safa, Jonah Van Assche, Charlotte Frenkel, André Bourdoux, Francky Catthoor and Georges Gielen.

Ali Safa (Katholieke Universiteit Leuven)
10:20‑10:50
(30 min)
 break
10:50‑11:00
(10+5 min)
 Easy and efficient spike-based Machine Learning with mlGeNN

Authors: James Knight and Thomas Nowotny.

Intuitive and easy to use application programming interfaces such as Keras have contributed majorly to the rapid acceleration of machine learning with artificial neural networks. Building on our recent works on translating ANNs to SNNs and training classifiers with eProp, we here present the mlGeNN interface as an easy way to define, train and test spiking neural networks on our efficient GPU based GeNN framework. We illustrate the use of mlGeNN by investigating the performance of a number of shallow spiking neural networks trained with the e-prop learning rule to recognise hand gestures from the DVS gesture dataset. We find that not only is mlGeNN vastly more convenient to use than the lower level PyGeNN interface, the new freedom to effortlessly and rapidly prototype different network architectures also gave us an unprecedented overview over how e-prop compares to other recently published results on the DVS gesture dataset across architectural details.

James Knight (University of Sussex)
11:05‑11:30
(25+5 min)
 Structure-function duality in memristive intelligent systems
show talk video

Brain’s functionality has long been postulated to be related to its structure. This structure-function relationship is present at different spatial and hierarchical scales, going from neurons and synapses, to dendritic arbors and of course to its connectome. Inspired by this, I will present neuromorphic circuits and architectures, incorporating resistive memory into different levels of this computational hierarchy. The resistive memory can be used as the knob that changes the form, and thus function, of the synapses, neurons, dendritic arbors and connectivity for hierarchical sensory signal processing and on-chip learning.

Melika Payvand (Institute of Neuroinformatics, ETH Zurich and University of Zurich)
11:35‑11:45
(10+5 min)
 Additive manufacture of polymeric organometallic ferroelectric diodes (POMFeDs) for structural neuromorphic hardware

Author: Davin Browner.

Hardware design for application of online machine learning is complicated by a number of facets of conventional ANN frameworks, e.g. deep neural networks (DNNs), such as reliance on non-temporally local offline learning, potential difficulties in transfer from model to substrates, and issues with processing of noisy sensory data using energy-efficient and asynchronous information processing modalities. Analog or mixed-signal spiking neural networks (SNNs) have promise for lower power, temporally localised, and stimuli selective online sensing and inference but are difficult to design and fabricate at low cost. Investigation of beyond-CMOS alternative substrates including organic and organometallic compounds may be worthwhile for development of unconventional neuromorphic hardware with pseudo-spiking dynamics. Here, polymeric organometallic ferroelectric diodes (POMFeDs) are introduced as a hardware platform for development of printable ferroelectric in-sensor SNNs.

Davin Browner (Robotics, Royal College of Art, UK)
11:50‑12:15
(25 min)
 Poster flash talks: 1 min appetizer for posters
12:15‑12:20
(5 min)
 Group photo

12:20‑13:45
(85 min)
Poster-Lunch (posters + finger food)
13:45
Session chair: Catherine Schuman
13:45‑14:10
(25+5 min)
 hxtorch.snn: Machine-learning-inspired Spiking Neural Network Modeling on BrainScaleS-2
show talk video

Authors: Philipp Spilger, Elias Arnold, Luca Blessing, Christian Mauch, Christian Pehle, Eric Müller and Johannes Schemmel.

Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables the hardware-in-the-loop training of spiking neural networks within PyTorch, including support for auto differentiation in a fully-automated hardware experiment workflow. In addition, hxtorch.snn facilitates seamless transitions between emulating on hardware and simulating in software. We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset employing a gradient-based approach with surrogate gradients and densely sampled membrane observations from the BrainScaleS-2 hardware system.

Philipp Spilger (Heidelberg University)
14:15‑14:40
(25+5 min)
 SupportHDC: Hyperdimensional Computing with Scalable Hypervector Sparsity
show talk video

Authors: Ali Safa, Ilja Ocket, Francky Catthoor and Georges Gielen.

In this talk, we introduce SupportHDC, a novel HDC design framework that can jointly optimize system accuracy and sparsity in an automated manner, in order to trade off classification performance and hardware implementation overheads. We illustrate the inner working of the framework on two bio-signal classification tasks: cancer detection and arrhythmia detection. We show how SupportHDC enables the system designer to choose the final design solution from the accuracy-sparsity trade-off curve produced by the framework. The python code for reproducing our experiments is released as open-source with the hope of being beneficial to future research.

Ali Safa (Katholieke Universiteit Leuven)
14:45‑15:15
(30 min)
 break
15:15‑15:40
(25+5 min)
 Accelerating AI with analog in-memory computing

Artificial Intelligence, or AI, has become pervasive in a wide variety of domains, from image and video classification to speech recognition, translation and text generation, to cite just a few examples. The accuracy of these models improves year after year, however at the cost of huge training and inference computational effort, in terms of both energy and time. For this reason, research groups from both academia and industry are developing novel approaches to accelerate computation. We have recently developed an in-memory analog computing chip with more than 35 million Phase-Change Memory devices, analog peripheral circuitry and massive parallel routing to accelerate communication between inputs, outputs and analog cores. We demonstrate that analog computing shows significant advantages in terms of power and speed, while retaining high accuracy on neural networks taken from both image classification and language processing tasks.

Stefano Ambrogio (IBM)
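The core operation an analog crossbar accelerates is a matrix-vector product computed physically: input voltages on the rows, device conductances as matrix entries, and output currents summed by Kirchhoff's current law. The sketch below is a textbook idealisation of that principle, not a model of IBM's chip.

```python
def crossbar_matvec(G, V):
    # Idealised analog crossbar: G is the conductance matrix (one row per
    # output line), V the vector of input voltages. By Ohm's and Kirchhoff's
    # laws each output current is a dot product: I_i = sum_j G[i][j] * V[j].
    return [sum(g * v for g, v in zip(row, V)) for row in G]

# Two outputs, two inputs, unit voltages on both input lines:
I = crossbar_matvec([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0])
# I == [3.0, 7.0]
```

Because all multiply-accumulates happen in parallel in the device physics, the energy cost per operation is far below a digital implementation, at the price of analog noise and limited precision.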
15:45‑15:55
(10+5 min)
 Configurable Activation Functions based on DW-MTJ LIF Neurons

Authors: Wesley Brigner, Naimul Hassan, Xuan Hu, Christopher Bennett, Felipe Garcia-Sanchez, Can Cui, Alvaro Velasquez, Matthew Marinella, Jean Anne Incorvia and Joseph S. Friedman.

Wesley Brigner (University of Texas Dallas)
16:00‑16:25
(25+5 min)
 Shunting Inhibition as a Neural-Inspired Mechanism for Multiplication in Neuromorphic Architectures

Authors: Frances Chance and Suma Cardwell.

Shunting inhibition is a potential mechanism by which biological systems multiply two time-varying signals, most recently demonstrated in single neurons in the fly visual system. Our work demonstrates this effect in a biological neuron model and also models the equivalent circuit in neuromorphic hardware. Here we demonstrate how this mechanism can be leveraged in neuromorphic dendrites.

Frances Chance (Sandia National Lab)
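A minimal conductance-based sketch shows why shunting inhibition acts divisively, which is the mechanism the talk leverages for multiplication. This is a standard textbook idealisation with assumed parameter names, not the authors' neuron model.

```python
def steady_state_v(g_exc, g_shunt, g_leak=1.0, E_exc=1.0):
    # Steady-state membrane potential of a conductance-based neuron where the
    # leak and shunting reversal potentials both sit at rest (0):
    #   V* = g_exc * E_exc / (g_leak + g_exc + g_shunt)
    # The shunting conductance appears only in the denominator, so it
    # divides (rather than subtracts from) the excitatory drive.
    return g_exc * E_exc / (g_leak + g_exc + g_shunt)

# For small g_exc, V* is approximately g_exc / (g_leak + g_shunt):
# scaling g_shunt with a second signal multiplies the response by its inverse.
v_no_shunt = steady_state_v(0.1, 0.0)
v_shunted = steady_state_v(0.1, 1.0)
```

Modulating `g_shunt` with one time-varying signal while `g_exc` carries another therefore approximates a product of the two signals, the operation the authors map onto neuromorphic dendrites.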
16:30‑17:30
(60 min)
 Open mic / discussions
17:30‑19:00
(90 min)
 Break
19:00‑21:00
(120 min)
Conference dinner

Location: H-E-B Ballroom 1.104


Thursday, 13 April 2023
08:00
NICE 2023 - day 3

("Applications day")

08:00‑08:30
(30 min)
 Breakfast
08:30
Session chair: Craig Michael Vineyard
08:30‑09:15
(45+5 min)
 Exciting Opportunities at the Intersection of Spatial Neuroscience, Robot Navigation, and Neuromorphic Compute and Sensing
show talk video
Michael Milford (QUT Robotics Centre)
09:20‑09:30
(10+5 min)
 Demonstration of neuromorphic sequence learning on a memristive array
show talk video

Authors: Sebastian Siegel, Tobias Ziegler, Younes Bouhadjar, Tom Tetzlaff, Rainer Waser, Regina Dittmann and Dirk Wouters.

We present measurement results, obtained on a chip co-integrating memristive devices with 130 nm CMOS, of high-order sequence learning experiments with the MemSpikingTM algorithm, which was developed as a hardware-friendly version of SpikingTM, a biologically plausible version of the Hierarchical Temporal Memory (HTM) Temporal Memory.

Sebastian Siegel (Peter Grünberg Institute, Forschungszentrum Jülich)
09:35‑10:35
(60 min)
 Funders panel - with the funders attending via video
  • Joe Hays (U.S. Naval Research Lab)
  • Andrey Kanaev (NSF)
  • Tina Kaarsberg (DOE)
  • Jano Costard (SPRIN-D, Germany)
  • Clare Thiem (AFRL)
10:35‑11:05
(30 min)
 Break
11:05‑11:30
(25+5 min)
 Speech2Spikes: Efficient Audio Encoding Pipeline for Real-time Neuromorphic Systems

Authors: Kenneth Stewart, Timothy Shea, Noah Pacik-Nelson, Eric Gallo and Andreea Danielescu.

(unfortunately no talk video was captured)

Despite the maturity and availability of speech recognition systems, there are few available spiking speech recognition tasks that can be implemented with current neuromorphic systems. The methods used previously to generate spiking speech data are not capable of encoding speech in real-time or encoding very large modern speech datasets efficiently for input to neuromorphic processors. The ability to efficiently encode audio data to spikes will enable a wider variety of spiking audio datasets to be available and can also enable algorithmic development of real-time neuromorphic automatic speech recognition systems. Therefore, we developed speech2spikes, a simple and efficient audio processing pipeline that encodes recorded audio into spikes and is suitable for real-time operation with low-power neuromorphic processors. To demonstrate the efficacy of our method for audio-to-spike encoding, we show that a small feed-forward spiking neural network trained on data generated with the pipeline achieves 88.5% accuracy on the Google Speech Commands recognition task, exceeding the state-of-the-art set by Spiking Speech Commands, a prior spiking encoding of the Google Speech Commands dataset, by over 10%. We also demonstrate a proof-of-concept real-time neuromorphic automatic speech recognition system using audio encoded with speech2spikes streamed to an Intel Loihi neuromorphic research processor.

Kenneth Stewart (University of California, Irvine)
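A generic delta-modulation encoder illustrates the basic idea of turning a continuous signal into spikes: emit a signed event each time the signal moves one threshold step away from the last reference level. This is an editor's sketch of the general technique, not the speech2spikes pipeline.

```python
def delta_encode(signal, threshold=0.1):
    # Generic delta-modulation encoder: emit a (time, +1/-1) spike whenever
    # the signal moves by `threshold` relative to the running reference level.
    spikes, ref = [], signal[0]
    for t, s in enumerate(signal[1:], start=1):
        while s - ref >= threshold:   # signal rose by one step
            spikes.append((t, +1))
            ref += threshold
        while ref - s >= threshold:   # signal fell by one step
            spikes.append((t, -1))
            ref -= threshold
    return spikes

spikes = delta_encode([0.0, 0.25, 0.2, -0.05], threshold=0.1)
# spikes == [(1, 1), (1, 1), (3, -1), (3, -1)]
```

Only changes generate events, so silent or constant stretches of audio produce no spikes at all, which is what makes event encodings sparse.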
11:35‑11:45
(10+5 min)
 Spiking LCA in a Neural Circuit with Dictionary Learning and Synaptic Normalization

Authors: Diego Chavez Arana, Alpha Renner and Andrew Sornborger.

(unfortunately no talk video was captured)

Diego Chavez Arana (talk presented by Andrew Sornborger) (Los Alamos National Lab)
11:50‑12:15
(25+5 min)
 Neuromorphic Downsampling of Event-Based Camera Output

Authors: Charles Rizzo, Catherine Schuman and James Plank.

(unfortunately no talk video was captured)

In this work, we address the problem of training a neuromorphic agent to work on data from event-based cameras. Although event-based camera data is much sparser than standard video frames, the sheer number of events can make the observation space too complex to effectively train an agent. We construct multiple neuromorphic networks that downsample the camera data so as to make training more effective. We then perform a case study of training an agent to play the Atari Pong game by converting each frame to events and downsampling them. The final network combines both the downsampling and the agent. We discuss some practical considerations as well.

Charles Rizzo (University of Tennessee Knoxville)
12:20‑12:30
(10+5 min)
 A Neuromorphic System for Real-time Tactile Texture Classification

Authors: George Brayshaw, Martin Pearson and Benjamin Ward-Cherrier.

(unfortunately no talk video was captured)

George Brayshaw (University of Bristol)
12:35‑14:05
(90 min)
Poster-Lunch (posters + finger food)
14:05
Session chair: Felix Wang
14:05‑14:15
(10+5 min)
 SIFT-ONN: SIFT Feature Detection Algorithm Employing ONNs for Edge Detection

Authors: Madeleine Abernot, Sylvain Gauthier, Théophile Gonos and Aida Todri-Sanial.

Madeleine Abernot (University of Montpellier)
14:20‑14:45
(25+5 min)
 Translation and Scale Invariance for Event-Based Object tracking
show talk video

Authors: Jens Egholm Pedersen, Raghav Singhal and Jörg Conradt.

We propose a new method to accurately predict spatial coordinates of objects from event data using spiking neurons without temporal averaging. Our method achieves accuracies comparable to artificial neural networks, demonstrates faster convergence, and is directly portable to neuromorphic hardware. In this talk, we will present our model, along with unpublished experimental data, and discuss its generalization to real-life settings. github.com/jegp/coordinate-regression/

Jens Pedersen (KTH Royal Institute of Technology)
14:50‑15:15
(25+5 min)
 NeuroBench: Advancing Neuromorphic Computing through Collaborative and Rigorous Benchmarking
Vijay Janapa Reddi (Harvard University)
15:20‑15:50
(30 min)
 break
15:50‑16:15
(25+5 min)
 Sigma-Delta networks for Robot Arm Control
show talk video

Authors: Wallace Lawson, Anthony Harrison and Greg Trafton.

Our autonomous robot, Bight, can be a reliable teammate that is capable of assisting in performing routine maintenance tasks on a Naval vessel. In this paper, we consider the task of maintaining the electrical panel. A vital first step is putting the robot into the correct position to view all of the parts of the electrical panel. The robot can get close, but the arm of the robot will need to move to where it can see everything. Here, we propose to solve this using a sigma delta spiking network that is trained using deep Q learning. Our approach is able to successfully solve this problem at varying distances. While we show how this works on this specific problem, we believe this approach to be general enough to be applied to any similar problem.

Ed Lawson (U.S. Naval Research Lab)
16:20‑16:45
(25+5 min)
 Towards Neuromorphic Edge Intelligence
show talk video
Joseph Hays (U.S. Naval Research Lab)
16:50‑17:00
(10 min)
 Best paper award! (Sponsored by IOP neuroscience and APL machine learning)
17:00‑17:30
(30 min)
 Open mic / discussions
17:30
End of day 3 and of the talk-days of NICE 2023

Friday, 14 April 2023
08:00
NICE 2023: hands-on tutorials day

Likely three slots in parallel

Confirmed tutorials:

  • An Introduction to a Simulator for Superconducting Optoelectronic Networks (Sim-SOENs)
  • Sandia – Fugu Introductory Tutorial (offered twice with the same content)
  • N2A -- An IDE for neural modeling
  • Hands-on BrainScaleS - analog accelerated neuromorphic compute hardware (The hands-on session is offered twice with the same content). The BrainScaleS hardware systems are available for use online
  • Intel Loihi 2: Build more impactful neuromorphic applications with Intel Loihi 2 and the open-source Lava framework

An Introduction to a Simulator for Superconducting Optoelectronic Networks (Sim-SOENs)

This tutorial will impart a functional understanding of Sim-SOENs. Starting with the computational building blocks of SOEN neurons, we will cover the nuances and processing power of single dendrites before building up to dendritic arbors within complex neuron structures. We will find it is straightforward to implement arbitrary neuron structures and even dendrite-based logic operations. Even at the single-neuron level, we will demonstrate efficacy on basic computational tasks. From there we will scale to network simulations of many-neuron systems, again with demonstrative use cases. By the end of the tutorial, participants should be able to easily generate custom SOEN neuron structures and networks. These lessons apply directly to research in the computational paradigm that will be instantiated on the emerging SOEN hardware.
Format: Examples and instructions will be given as Jupyter notebook tutorials (already well into development). If conducive to the conference environment, these notebooks may be available for download and use in real time, in which case practice exercises can be provided for active learning.

N2A -- An IDE for neural modeling

N2A is a tool for editing and simulating large-scale/complex neural models. These are written in a simple equation language with object-oriented features that support component creation and reuse. The tool compiles these models for various hardware targets ranging from neuromorphic devices to supercomputers.
Format: The first hour will provide a general introduction to the integrated development environment (IDE) and cover basic use cases: model editing, running a simulation, sharing models via Git, and running parameter sweeps. The second hour will cover the basic LIF class hierarchy, techniques for designing your own component set, and integration with Sandia's Fugu tool.
Special Requirements: This will be a hands-on tutorial. N2A may be downloaded from https://github.com/frothga/n2a and run on your personal laptop.

BrainScaleS

A hands-on tutorial for online interactive use of the BrainScaleS neuromorphic compute system: from the first log-in via the EBRAINS Collaboratory to interactive emulation of small spiking neural networks. This hands-on tutorial is especially suitable for beginners (more advanced attendants are welcome as well). We are going to use the BrainScaleS tutorial notebooks for this event.
For using the BrainScaleS system during the tutorial (and also independently of the tutorial for your own research, free of charge for evaluation), an EBRAINS account (also free of charge) is needed (get an EBRAINS account here).
More info on how to get started using BrainScaleS.
Format: Introductory presentation, followed by interactive hands-on tutorials. Attendees can use a web browser on their own laptops to execute and change the provided tutorials and explore on their own. Attendees will be able to continue accessing the systems with a generous test quota after the event.
Preparation: Best to get an EBRAINS account ahead of time (get an EBRAINS account here). We can also create a guest account on the spot.

Fugu Introductory Tutorial

The tutorial will cover the basic design and practice of Fugu, a software package for composing spiking neural algorithms. We will begin with an introductory presentation on the motivation, design, and limitations of Fugu. Then we will do two deep-dive, interactive tutorials using Jupyter notebooks. The first will cover how to use Fugu with pre-existing components, which we call Bricks. The second will cover how to build a custom Brick to perform a particular algorithm; in this case, the algorithm we choose will be an 80-20 network.
Format: Interactive
Preparation: Please clone and install: https://github.com/snl-nerl/fugu

Intel Loihi 2: Build more impactful neuromorphic applications with Intel Loihi 2 and the open-source Lava framework

Tim Shea from Intel Labs will demonstrate how you can program applications using the open-source Lava framework for neuromorphic computing and how to compile and run those applications on Intel Loihi 2 hardware. Lava is an excellent platform for neuromorphic researchers seeking more real-world impact because the high-level, modular API makes it easy for other labs to replicate your work, while the flexible compiler architecture makes it easy to distribute your models across conventional and neuromorphic hardware. In this tutorial, you will learn how to build and run several example applications in Lava, including a deep learning model, a Dynamic Neural Field algorithm, and a mathematical optimizer.

Format: This tutorial will introduce application programming in Lava through a series of Jupyter notebook tutorials. Attendees can follow along building the applications on their own laptops or using any free cloud-based notebook (e.g. Google Colab). Each application can be run locally on a standard CPU and the presenter will demonstrate how to run the examples on an Intel Kapoho Point neuromorphic system. All the necessary code and instructions are available at github.com/lava-nc.

08:00‑08:30
(30 min)
 Breakfast
08:30‑10:30
(120 min)
 Tutorial session 1 (tutorials in parallel)
  • Sim-SOENs (Room: Harris)
  • Fugu -- Introduction (Room: Mesquite)
  • BrainScaleS (Room: Travis)

10:30‑11:00
(30 min)
 Break
11:00‑13:00
(120 min)
 Tutorial session 2 (tutorials in parallel)
  • N2A (Room: Harris)
  • Fugu - Intermediate (Room: Mesquite)
  • Intel Loihi 2 (Room: Travis)

13:00‑14:00
(60 min)
 Lunch
14:00‑16:00
(120 min)
 Tutorial session 3 (tutorials in parallel)
  • N2A (Room: Harris)
  • BrainScaleS (Room: Mesquite)
  • Intel Loihi 2 (Room: Travis)

16:00‑16:30
(30 min)
 Farewell coffee
16:30
End of the tutorial day