NICE 2023 - Agenda
|Tuesday, 11 April 2023|
|08:00||NICE 2023 -- day 1|
Agenda as .pdf download
The agenda as of 11 April 2023 can be downloaded here as .pdf.
UTSA Student Union, H-E-B University Center, 1 UTSA Circle, San Antonio, TX 78249, United States of America.
Also available: a schematic view as .pdf
Please follow the link on the registration page to register for the workshop.
|08:30||Session chair: Dhireesha Kudithipudi|
|Opening by Dr. Taylor Eighmy, President, University of Texas at San Antonio|
|Keynote: Neuroevolution: Beyond human design of neural networks||Risto Miikkulainen (UT Austin)|
|How Unsupervised Learning During Sleep Could Contribute to Temporal Pattern Recognition and The Gain of Insight||Itamar Lerner (University of Texas at San Antonio)|
|AEStream: Accelerated event-based processing with coroutines|
Authors: Jens Egholm Pedersen and Jörg Conradt.
|Jens Egholm Pedersen (KTH Royal Institute of Technology)|
|Goemans-Williamson MAXCUT approximation algorithm on Loihi|
Authors: Bradley Theilman and James B. Aimone
|Bradley Theilman (Sandia National Laboratories)|
|Work in Progress: A Network of Sigma–Pi Units producing Higher-order Interactions for Reservoir Computing|
Authors: Denis Kleyko, Christopher Kymn, Bruno A. Olshausen, Friedrich T. Sommer and E. Paxon Frady.
|Denis Kleyko (RISE)|
|13:30||Session chair: Johannes Schemmel|
|Full-stack Co-Design for Neuromorphic Systems|
We present major design issues for large-scale neuromorphic computing systems, and some of the trade-offs in designing hardware and software for such systems. Many of the detailed hardware trade-offs that have significant impact on overall energy efficiency depend strongly on the networks being mapped to the hardware. We describe ongoing work on creating a quantitative, full-stack approach to evaluating the trade-offs in neuromorphic system design, enabled by recently developed open-source tools for the design and implementation of asynchronous digital systems.
|Rajit Manohar (Yale University)|
|Modeling Coordinate Transformations in the Dragonfly Nervous System|
Authors: Claire Plunkett and Frances Chance.
|Claire Plunkett (Sandia National Laboratories)|
|Beyond Neuromorphics: Non-Cognitive Applications of SpiNNaker2||Christian Mayr (TU Dresden)|
|Online training of quantized weights on neuromorphic hardware with multiplexed gradient descent|
Authors: Adam McCaughan, Cory Merkel, Bakhrom Oripov, Andrew Dienstfrey, Sae Woo Nam and Sonia Buckley.
|Adam McCaughan (NIST)|
|NEO: Neuron State Dependent Mechanisms for Efficient Continual Learning|
Authors: Anurag Daram and Dhireesha Kudithipudi.
|Anurag Daram (UTSA)|
|Impact of Noisy Input on Evolved Spiking Neural Networks for Neuromorphic Systems|
Authors: Karan Patel and Catherine Schuman.
|Karan Patel (University of Tennessee Knoxville)|
|Spotlight: Intel Neuromorphic Deep Noise Suppression Challenge|
|Open mic / discussions|
|17:30||End of the first day of NICE|
|Shuttle service to downtown area|
Shuttle leaves at 18:00h from the meeting place and goes to "UTSA, San Pedro 1" (place of the welcome reception)
|Welcome reception in San Antonio downtown, at UTSA, San Pedro 1, 1st floor lobby|
Address of the place: 506 Dolorosa St, San Antonio, TX 78204
(For people using their own car: parking space should likely be available at "Dolorosa Lot")
|1h to explore San Antonio downtown (self guided)|
|Shuttle back to UTSA|
Shuttle leaves at 21:00h (9:00 pm) and returns to "UTSA main campus" (conference venue)
|Friday, 14 April 2023|
|08:00||NICE 2023: hands-on tutorials day|
Likely three slots in parallel
An Introduction to a Simulator for Superconducting Optoelectronic Networks (Sim-SOENs)
This tutorial is intended to impart a functional understanding of Sim-SOENs. Starting with the computational building blocks of SOEN neurons, we will cover the nuances and processing power of single dendrites, before building up to dendritic arbors within complex neuron structures. We will find it is straightforward to implement arbitrary neuron structures and even dendrite-based logic operations. Even at this single-neuron level, we will demonstrate efficacy on basic computational tasks. From there we will scale to network simulations of many-neuron systems, again with demonstrative use cases. By the end of the tutorial, participants should be able to easily generate custom SOEN neuron structures and networks. These lessons will apply directly to research in the computational paradigm to be instantiated on the burgeoning hardware of SOENs.
N2A -- An IDE for neural modeling
N2A is a tool for editing and simulating large-scale/complex neural models. These are written in a simple equation language with object-oriented features that support component creation and reuse. The tool compiles these models for various hardware targets ranging from neuromorphic devices to supercomputers.
A hands-on tutorial for online interactive use of the BrainScaleS neuromorphic compute system: from the first log-in via the EBRAINS Collaboratory to interactive emulation of small spiking neural networks. This hands-on tutorial is especially suitable for beginners (more advanced attendees are welcome as well). We are going to use the BrainScaleS tutorial notebooks for this event.
Fugu Introductory Tutorial
The tutorial will cover the basic design and practice of Fugu, a software package for composing spiking neural algorithms. We will begin with an introductory presentation on the motivation, design, and limitations of Fugu. Then, we will do two deep-dive interactive tutorials using Jupyter notebooks. The first will cover how to use Fugu with pre-existing components, which we call Bricks. The second will cover how to build a custom Brick to perform a particular algorithm; in this case, the algorithm we choose will be an 80-20 network.
Intel Loihi 2: Build more impactful neuromorphic applications with Intel Loihi 2 and the open-source Lava framework
Tim Shea from Intel Labs will demonstrate how you can program applications using the open-source Lava framework for neuromorphic computing and how to compile and run those applications on Intel Loihi 2 hardware. Lava is an excellent platform for neuromorphic researchers seeking more real-world impact because the high-level, modular API makes it easy for other labs to replicate your work, while the flexible compiler architecture makes it easy to distribute your models across conventional and neuromorphic hardware. In this tutorial, you will learn how to build and run several example applications in Lava, including a deep learning model, a Dynamic Neural Field algorithm, and a mathematical optimizer.
Format: This tutorial will introduce application programming in Lava through a series of Jupyter notebook tutorials. Attendees can follow along building the applications on their own laptops or using any free cloud-based notebook (e.g. Google Colab). Each application can be run locally on a standard CPU and the presenter will demonstrate how to run the examples on an Intel Kapoho Point neuromorphic system. All the necessary code and instructions are available at github.com/lava-nc.
|Tutorial session 1 (tutorials in parallel)|
|Tutorial session 2 (tutorials in parallel)|
|Tutorial session 3 (tutorials in parallel)|
|16:30||End of the tutorial day|