Monday, 16 March 2020

NICE 2020 in Heidelberg

6th March 2020: We are sorry to announce that NICE 2020, scheduled to be held on 17-20 March 2020, will be postponed to a later date. Please see here for the new date in March 2021.




Meeting venue: Kirchhoff-Institute for Physics, Im Neuenheimer Feld 227, D-69117 Heidelberg, Germany

Pre-NICE events on Monday, 16 March 2020 (both POSTPONED as well)

Travel info:

  • Getting to the venue:
    • the nearest tram stop to the meeting venue is "Heidelberg Bunsengymnasium" (marked in the map linked above); an [online timetable](https://reiseauskunft.bahn.de//bin/query.exe/en?Z=Neuenheim+Bunsengymnasium,+Heidelberg) is provided by German Railway, where you can also buy tickets online.
  • Getting to Heidelberg

Hotels: These hotels are relatively close to the meeting venue (Kirchhoff-Institute for Physics, see the map above). Many more hotels are listed on online hotel booking sites (e.g. booking.com).


19:30‑21:30
(120 min)

NICE 2020 Welcome reception

at the meeting venue.

The reception is also open to the participants of the NEUROTECH event “Future Application Directions for Neuromorphic Computing Technologies”.


Tuesday, 17 March 2020
08:45
NICE 2020, workshop day I -- NOTE: NICE will be POSTPONED!

(Registration booth opens at 8:30h)

09:00‑09:10
(10+5 min)
 Welcome to NICE 2020
09:15‑09:45
(30 min)
 Organizer Round
09:45‑10:25
(40+5 min)
 Keynote I: Mike Davies (Intel)
10:30‑10:50
(20+5 min)
 Luping Shi (Tsinghua University)
11:00‑11:30
(30 min)
 Coffee break
11:30‑11:50
(20+5 min)
 Evaluating complexity and resilience trade-offs in emerging memory inference machines

Christopher H. Bennett, Ryan Dellana, Tianyo Patrick Xiao, Ben Feinberg, Sapan Agarwal, Suma Cardwell, Matthew Marinella, William Severa and Brad Aimone

Neuromorphic engineering only works well if limited hardware resources, e.g. memory and computational elements, are used efficiently as the number of parameters scales relative to potential disturbance. In this work, we use realistic crossbar simulations to highlight a significant trade-off between the complexity of deep neural networks and their susceptibility to collapse from internal system disturbances. Although the simplest models are the most resilient, they cannot achieve competitive results. Our work proposes a middle path towards high performance and moderate resilience utilizing the Mosaics framework, by re-using synaptic connections in a recurrent neural network implementation.

11:55‑12:15
(20+5 min)
 Johannes Schemmel (Heidelberg University)
12:20‑12:30
(10+5 min)
 Lightning talk: From clean room to machine room: towards accelerated cortical simulations on the BrainScaleS wafer-scale system

The BrainScaleS system follows the principle of so-called “physical modeling”, wherein the dynamics of VLSI circuits are designed to emulate the dynamics of their biological archetypes: neurons and synapses are implemented by analog circuits that operate in continuous time, governed by time constants which arise from the properties of the transistors and capacitors on the microelectronic substrate. This defines our intrinsic hardware acceleration factor of 10000 with respect to biological real-time. The system is based on the ideas described in [Schemmel et al. 2010]; over the last ten years it was developed from a lab prototype to a larger installation comprising 20 wafer modules. The talk will reflect on the development process and the lessons learned, and summarize the recent progress in commissioning and operating the BrainScaleS system. The success of the endeavor is demonstrated on the example of a wafer-scale emulation of a cortical microcolumn network.

Schemmel et al. 2010: J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner. 2010. A Wafer-Scale Neuromorphic Hardware System for Large-Scale Neural Modeling. In IEEE Int Symp Circuits Syst Proc. 1947–1950, http://dx.doi.org/10.1109/ISCAS.2010.5536970

Sebastian Schmitt (Heidelberg University)
12:35‑13:00
(25 min)
 Poster Lightning Talks

1 min - 1 slide poster appetizers

all Poster Presenters
13:00‑14:30
(90 min)
 Lunch, poster setup, demonstrators setup
14:30‑14:40
(10+5 min)
 Group photo at NICE

(The group photo will be placed on the internet. By showing up for the photo you grant your permission for the publication of the photo)

14:45‑15:05
(20+5 min)
 Why is Neuromorphic Event-based Engineering the future of AI?

While neuromorphic vision sensors and processors are becoming more available and usable by laymen, and although they outperform existing devices especially in the case of sensing, there are still no successful commercial applications that have allowed them to overtake conventional computation and sensing. In this presentation, I will provide insights on the missing key steps that are preventing this new computational revolution from happening. I will give an overview of neuromorphic, event-based approaches for image sensing and processing and how these have the potential to radically change current AI technologies and open new frontiers in building intelligent machines. I will focus on what is meant by event-based computation and the need to process information in the time domain rather than recycling old concepts such as images, backpropagation and any form of frame-based approach. I will introduce new models of machine learning based on spike timings and show the importance of being compatible with neuroscience findings and recorded data. Finally, I will provide new insights on how to build neuromorphic neural processors able to operate these new AI models and the need to move to new architectural concepts.

15:10‑15:30
(20+5 min)
 Neuromorphic and AI research at BCAI (Bosch Center for Artificial Intelligence)

We will give an overview of current challenges and activities at the Bosch Center for Artificial Intelligence regarding neuromorphic computing, spiking neural networks and deep learning. This includes a short introduction to the publicly funded project ULPEC addressing ultra-low power vision systems. In addition, we will give a summary of selected academic contributions in the field of spiking neural networks and hardware-aware compression of deep neural networks.

Thomas Pfeil
15:35‑15:55
(20+5 min)
 Mapping Deep Neural Networks on SpiNNaker2

Florian Kelber, Binyi Wu, Bernhard Vogginger, Johannes Partzsch, Chen Liu, Marco Stolba and Christian Mayr

SpiNNaker is an efficient many-core architecture for the real-time simulation of spiking neural networks. To also speed up deep neural networks (DNNs), the 2nd generation SpiNNaker2 will contain dedicated DNN accelerators in each processing element. When realizing large CNNs on SpiNNaker2, layers have to be split, mapped and scheduled onto 144 processing elements. We describe the underlying mapping procedure with optimized data reuse to achieve inference of VGG-16 and ResNet-50 models in tens of milliseconds.

Florian Kelber et al.
16:00‑16:30
(30 min)
 Coffee break
16:30‑16:50
(20+5 min)
 Closed-loop experiments on the BrainScaleS-2 architecture

Korbinian Schreiber, Timo Wunderlich, Christian Pehle, Mihai Alexandru Petrovici, Johannes Schemmel and Karlheinz Meier

The evolution of biological brains has always been contingent on their embodiment within their respective environments, in which survival required appropriate navigation and manipulation skills. Studying such interactions thus represents an important aspect of computational neuroscience and, by extension, a topic of interest for neuromorphic engineering. Here, we present three examples of embodiment on the BrainScaleS-2 architecture, in which dynamical timescales of both agents and environment are accelerated by several orders of magnitude with respect to their biological archetypes.

Korbinian Schreiber et al.
16:55‑17:05
(10+5 min)
 Lightning talk: Adaptive control for hindlimb locomotion in a simulated mouse through temporal cerebellar learning

Thomas Passer Jensen, Shravan Tata, Auke Jan Ijspeert and Silvia Tolu

Human beings and other vertebrates show remarkable performance and efficiency in locomotion, but the functioning of their biological control systems for locomotion is still only partially understood. The basic patterns and timing for locomotion are provided by a central pattern generator (CPG) in the spinal cord. The cerebellum is known to play an important role in adaptive locomotion. Recent studies have given insights into the error signals responsible for driving the cerebellar adaptation in locomotion. However, the question of how the cerebellar output influences the gait remains unanswered. We hypothesize that the cerebellar correction is applied to the pattern formation part of the CPG. Here, a bio-inspired control system for adaptive locomotion of the musculoskeletal system of the mouse is presented, where a cerebellar-like module adapts the step time by using the double support interlimb asymmetry as a temporal teaching signal. The control system is tested on a simulated mouse in a split-belt treadmill setup similar to those used in experiments with real mice. The results show adaptive locomotion behavior in the interlimb parameters similar to that seen in humans and mice. The control system adaptively decreases the double support asymmetry that occurs due to environmental perturbations in the split-belt protocol.

Thomas Passer Jensen (Technical University of Denmark)
17:10‑17:55
(45 min)
 Open mic / discussions
19:00‑21:30
(150 min)
 Poster dinner

The max. poster size is A0, orientation PORTRAIT (841 mm wide x 1189 mm high)


Wednesday, 18 March 2020
08:45
NICE 2020, workshop day II -- NOTE: NICE will be postponed!
09:00‑09:15
(15 min)
 Welcome / overview
09:15‑09:55
(40+5 min)
 Keynote: Wolfgang Maass
10:00‑10:20
(20+5 min)
 On the computational power and complexity of Spiking Neural Networks

Johan Kwisthout and Nils Donselaar

The last decade has seen the rise of neuromorphic architectures based on artificial spiking neural networks, such as the SpiNNaker, TrueNorth, and Loihi systems. The massive parallelism and co-locating of computation and memory in these architectures potentially allows for an energy usage that is orders of magnitude lower compared to traditional Von Neumann architectures. However, to date a comparison with more traditional computational architectures (particularly with respect to energy usage) is hampered by the lack of a formal machine model and a computational complexity theory for neuromorphic computation. In this paper we take the first steps towards such a theory. We introduce spiking neural networks as a machine model where — in contrast to the familiar Turing machine — information and the manipulation thereof are co-located in the machine. We introduce canonical problems, define hierarchies of complexity classes and provide some first completeness results.

(Nils Donselaar)
10:25‑10:45
(20+5 min)
 The speed of sequence processing in biological neuronal networks

Younes Bouhadjar, Markus Diesmann, Dirk J. Wouters and Tom Tetzlaff

Sequence processing has been proposed to be the universal computation performed by the neocortex. The Hierarchical Temporal Memory (HTM) model provides a mechanistic implementation of this form of processing. While the model accounts for a number of neocortical features, it is based on networks of highly abstract neuron and synapse models updated in discrete time. Here, we reformulate the model in terms of a network of spiking neurons with continuous-time dynamics to investigate how neuronal and synaptic parameters constrain the sequence-processing speed.

Younes Bouhadjar et al.
11:00‑11:30
(30 min)
 Coffee break
11:30‑11:50
(20+5 min)
 Walter Senn
11:55‑12:15
(20+5 min)
 Conductance-based dendrites perform reliability-weighted opinion pooling

Jakob Jordan, João Sacramento, Mihai A. Petrovici and Walter Senn

Cue integration, the combination of different sources of information to reduce uncertainty, is a fundamental computational principle of brain function. Starting from a normative model we show that the dynamics of multi-compartment neurons with conductance-based dendrites naturally implement the required probabilistic computations. The associated error-driven plasticity rule allows neurons to learn the relative reliability of different pathways from data samples, approximating Bayes-optimal observers in multisensory integration tasks. Additionally, the model provides a functional interpretation of neural recordings from multisensory integration experiments and makes specific predictions for membrane potential and conductance dynamics of individual neurons.

Jakob Jordan et al.
12:20‑12:30
(10+5 min)
 Lightning talk: Natural gradient learning for spiking neurons

Elena Kreutzer, Mihai Alexandru Petrovici and Walter Senn

Due to their simplicity and success in machine learning, gradient-based learning rules represent a popular choice for synaptic plasticity models. While they have been linked to biological observations, it is often ignored that their predictions generally depend on a specific representation of the synaptic strength. In a neuron, the impact of a synapse can be described using the state of many different observables such as neurotransmitter release rates or membrane potential changes. Which one of these is chosen when deriving a learning rule can drastically change the predictions of the model. This is doubly unsatisfactory, both with respect to optimality and from a conceptual point of view. By following the gradient on the manifold of the neuron’s firing distributions instead of one that is relative to some arbitrary synaptic weight parametrization, natural gradient descent provides a solution to both these problems. While the computational advantages of natural gradient are well-studied in ANNs, its predictive power as a model for in-vivo synaptic plasticity has not yet been assessed. By formulating natural gradient learning in the context of spiking interactions, we demonstrate how it can improve the convergence speed of spiking networks. Furthermore, our approach provides a unified, normative framework for both homo- and heterosynaptic plasticity in structured neurons and predicts a number of related biological phenomena.

Elena Kreutzer et al.
12:35‑12:45
(10+5 min)
 Lightning talk: The Computational Capacity of Mem-LRC Reservoirs

Forrest Sheldon and Francesco Caravelli

Reservoir computing has emerged as a powerful tool in data-driven time series analysis. The possibility of utilizing hardware reservoirs as specialized co-processors has generated interest in the properties of electronic reservoirs, especially those based on memristors, as the nonlinearity of these devices should translate to an improved nonlinear computational capacity of the reservoir. However, designing these reservoirs requires a detailed understanding of how memristive networks process information, which has thus far been lacking. In this work, we derive an equation for general memristor-inductor-resistor-capacitor (mem-LRC) reservoirs that includes all network and dynamical constraints explicitly. Utilizing this, we undertake a detailed study of the computational capacity of these reservoirs. We demonstrate that hardware reservoirs may be constructed with extensive memory capacity and that the presence of memristors enacts a tradeoff between memory capacity and nonlinear computational capacity. Using these principles, we design reservoirs to tackle problems in signal processing, paving the way for applying hardware reservoirs to high-dimensional spatiotemporal systems.

Forrest Sheldon et al.
13:00‑14:00
(60 min)
 Lunch
14:00‑14:20
(20+5 min)
 Making spiking neurons more succinct with multi-compartment models

Spiking neurons consume energy for each spike they emit. Reducing the firing rate of each neuron — without sacrificing relevant information content — is therefore a critical constraint for energy-efficient networks of spiking neurons in biology and neuromorphic hardware alike. The inherent complexity of biological neurons provides a possible mechanism to realize a good trade-off between these two conflicting objectives: multi-compartment neuron models can become selective to highly specific input patterns, and thus learn to produce informative yet sparse spiking codes. In this paper, I motivate the operation of a simplistic hierarchical neuron model by analogy to decision trees, show how it can be optimized using a modified version of the greedy decision tree learning rule, and analyze the results for a simple illustrative binary classification problem.

Johannes Leugering
14:25‑14:45
(20+5 min)
 Evolutionary Optimization for Neuromorphic Systems

Catherine Schuman, J. Parker Mitchell, Robert Patton, Thomas Potok and James Plank

Designing and training an appropriate spiking neural network for neuromorphic deployment remains an open challenge in neuromorphic computing. In 2016, we introduced an approach for utilizing evolutionary optimization to address this challenge called Evolutionary Optimization for Neuromorphic Systems (EONS). In this work, we present an improvement to this approach that enables rapid prototyping of new applications of spiking neural networks in neuromorphic systems. We discuss the overall EONS framework and its improvements over the previous implementation. We present several case studies of how EONS can be used, including to train spiking neural networks for classification and control tasks, to train under hardware constraints, to evolve a reservoir for a liquid state machine, and to evolve smaller networks using multi-objective optimization.

Catherine Schuman et al.
14:50‑15:00
(10+5 min)
 Lightning talk: Implementing Backpropagation for Learning on Neuromorphic Spiking Hardware

Andrew Sornborger, Alpha Renner, Forrest Sheldon, Anatoly Zlotnik and Louis Tao

Many contemporary advances in the theory and practice of neural networks are inspired by our understanding of how information is processed by natural neural systems. However, the basis of modern deep neural networks remains the error backpropagation algorithm [1], which, though founded in rigorous mathematical optimization theory, has not been successfully demonstrated in a neurophysiologically realistic circuit. In a recent study, we proposed a neuromorphic architecture for learning that tunes the propagation of information forward and backwards through network layers using an endogenous timing mechanism controlled by thresholding of intensities [2]. This mechanism was demonstrated in simulations of analog currents, which represent the mean fields of spiking neuron populations. In this follow-on study, we present a modified architecture that includes several new mechanisms that enable implementation of the backpropagation algorithm using neuromorphic spiking units. We demonstrate the function of this architecture in learning mapping examples, both in event-based simulation as well as in a true hardware implementation.

[1] D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning representations by back-propagating errors. Nature, pages 533–536, 1986.

[2] Andrew Sornborger, Louis Tao, Jordan Snyder, and Anatoly Zlotnik. A pulse-gated, neural implementation of the backpropagation algorithm. In Proceedings of the 7th Annual Neuro-inspired Computational Elements Workshop, page 10. ACM, 2019.

Andrew Sornborger et al.
15:05‑15:15
(10+5 min)
 Lightning talk: Spike Latency Reduction generates Efficient Predictive Coding

Laura State and Pau Vilimelis Aceituno

Latency reduction of postsynaptic spikes is a well-known effect of spike-timing-dependent plasticity (STDP). We expand this notion to long postsynaptic spike trains, showing that, for a fixed input spike train, STDP reduces the number of postsynaptic spikes and concentrates the remaining ones. We then study the consequences of this phenomenon in terms of coding, finding that this mechanism improves the neural code by increasing the signal-to-noise ratio and lowering the metabolic costs of frequent stimuli. Finally, we illustrate that the reduction of postsynaptic latencies can lead to the emergence of predictions.

Pau Vilimelis Aceituno et al.
15:20‑16:10
(50 min)
 Special coffee break: EINC & BrainScaleS 1
16:10‑16:30
(20+5 min)
 Real-time Mapping on a Neuromorphic Processor

Guangzhi Tang and Konstantinos Michmizos

Navigation is so crucial for our survival that the brain hosts a dedicated network of neurons to map our surroundings. Place cells, grid cells, border cells, head direction cells and other specialized neurons in the hippocampus and the cortex work together in planning and learning maps of the environment [1]. When faced with similar navigation challenges, robots have an equally important need for generating a stable and accurate map. In our ongoing effort to translate the biological network for spatial navigation into a spiking neural network (SNN) that controls mobile robots in real-time, we first focused on simultaneous localization and mapping (SLAM), one of the critical problems in robotics that relies heavily on the accuracy of map representation [2]. Our approach allows us to leverage the asynchronous computing paradigm commonly found across brain areas and has already been demonstrated to be a significantly energy-efficient solution for 1D SLAM [3], which can spur the emergence of new neuromorphic processors, such as Intel’s Loihi [4] and IBM’s TrueNorth [5]. In this paper, we expand our previous work by proposing an SNN that forms a cognitive map of an unknown environment and is seamlessly integrated with Loihi.

[1] S. Poulter, T. Hartley, and C. Lever, "The neurobiology of mammalian navigation," Current Biology, vol. 28, no. 17, pp. R1023-R1042, 2018.

[2] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with rao-blackwellized particle filters," IEEE transactions on Robotics, vol. 23, no. 1, p. 34, 2007.

[3] G. Tang, A. Shah, and K. P. Michmizos, "Spiking neural network on neuromorphic hardware for energy-efficient unidimensional SLAM," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 2019, pp. 1-6.

[4] M. Davies et al., "Loihi: A neuromorphic manycore processor with on-chip learning," IEEE Micro, vol. 38, no. 1, pp. 82-99, 2018.

[5] P. A. Merolla et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science, vol. 345, no. 6197, pp. 668-673, 2014.

(Konstantinos Michmizos)
16:35‑16:55
(20+5 min)
 Inductive bias transfer between brains and machines

Machine learning, in particular computer vision, has made tremendous progress in recent years. On standardized datasets, deep networks now frequently achieve close-to-human or superhuman performance. However, despite this enormous progress, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called “inductive bias,” determines how well any learning algorithm—or brain—generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. I will give an overview of some conceptual ideas and preliminary results on how the rapid increase of neuroscientific data could be used to transfer low-level inductive biases from the brain to learning machines.

Fabian Sinz
17:00‑17:45
(45 min)
 Open mic / discussion
18:00‑21:00
(180 min)
 Conference dinner

Thursday, 19 March 2020
08:45
NICE 2020, workshop day III -- NOTE: NICE will be postponed!
09:00‑09:15
(15 min)
 Welcome / overview
09:15‑09:55
(40+5 min)
 Keynote: Bottom-up and top-down neuromorphic processor design: Unveiling roads to embedded cognition

While Moore’s law has driven exponential computing power expectations, its nearing end calls for new roads to embedded cognition. The field of neuromorphic computing aims at a paradigm shift compared to conventional von-Neumann computers, both for the architecture (i.e. memory and processing co-location) and for the data representation (i.e. spike-based event-driven encoding). However, it is unclear which of the bottom-up (neuroscience-driven) or top-down (application-driven) design approaches could unveil the most promising roads to embedded cognition. In order to clarify this question, this talk is divided into two parts.

The first part focuses on the bottom-up approach. From the building-block level to the silicon integration, we design two bottom-up neuromorphic processors: ODIN and MorphIC. We demonstrate with measurement results that hardware-aware neuroscience model design and selection allows reaching record neuron and synapse densities with low-power operation. However, the inherent difficulty for bottom-up designs lies in applying them to real-world problems beyond the scope of neuroscience applications.

The second part investigates the top-down approach. By starting from the applicative problem of adaptive edge computing, we derive the direct random target projection (DRTP) algorithm for low-cost neural network training and design a top-down DRTP-enabled neuromorphic processor: SPOON. We demonstrate with pre-silicon implementation results that combining event-driven and frame-based processing with weight-transport-free update-unlocked training supports low-cost adaptive edge computing with spike-based sensors. However, defining a suitable target for bio-inspiration in top-down designs is difficult, as it should ensure both the efficiency and the relevance of the resulting neuromorphic device.

Therefore, we claim that each of these two design approaches can act as a guide to address the shortcomings of the other.

Charlotte Frenkel
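To make the second part concrete, here is a rough NumPy sketch of the DRTP idea as summarized in the abstract; layer sizes, the tanh nonlinearity and the learning rate are illustrative assumptions, not material from the talk. The point is that each hidden layer is trained with the one-hot target projected through a fixed random matrix, so no error is transported backwards through the network.

    import numpy as np

    # Illustrative sketch of direct random target projection (DRTP); not the speaker's code.
    rng = np.random.default_rng(0)
    lr = 0.01
    W0 = rng.standard_normal((256, 784)) * 0.01   # hidden layer weights (assumed sizes)
    W1 = rng.standard_normal((10, 256)) * 0.01    # output layer weights
    B = rng.standard_normal((256, 10))            # fixed random projection, never trained

    def drtp_step(x, y_onehot):
        global W0, W1
        h = np.tanh(W0 @ x)                       # forward pass, hidden layer
        y = W1 @ h                                # forward pass, output layer
        W1 -= lr * np.outer(y - y_onehot, h)      # output layer: true error, as usual
        delta = (B @ y_onehot) * (1.0 - h**2)     # hidden layer: random *target* projection
        W0 -= lr * np.outer(delta, x)             # no weight transport, no update locking

    drtp_step(rng.standard_normal(784), np.eye(10)[3])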
10:00‑10:20
(20+5 min)
 A Neuromorphic Future for Classic Computing Tasks

The obvious promise of neuromorphic hardware is to enable efficient implementations of brain-derived algorithms. However, to be successful, it is essential that the community demonstrates that neuromorphic systems can be broadly impactful beyond a few narrow tasks. While more advanced post-deep-learning brain-derived algorithms would be ideal, it is helpful to look beyond cognitive algorithms as well for potential market impact.

In this talk, I will highlight one such opportunity: the application of neuromorphic hardware for large-scale scientific computing applications. Specifically, I will present a perspective on neuromorphic hardware that enables us to use large spiking architectures for solving stochastic differential equations and graph analytics. Our general approach treats neuromorphic architectures as a large computational graph onto which we can map sophisticated algorithmic tasks. We have demonstrated how this approach can be used to efficiently model Monte Carlo approximations to a class of partial differential equations that challenge the high-performance computing community, and we can further illustrate how this approach is well-suited for performing general dynamic programming tasks.

Finally, the talk will include some concrete examples of this approach on different spiking neuromorphic platforms, such as Loihi, TrueNorth, and SpiNNaker.

Brad Aimone
10:25‑10:35
(10+5 min)
 Lightning talk: Benchmarking of Neuromorphic Hardware Systems

Christoph Ostrau, Christian Klarhorst, Michael Thies and Ulrich Rueckert

With more and more neuromorphic hardware systems for the acceleration of spiking neural networks available in science and industry, there is a demand for platform comparison and performance estimation of such systems. This work describes selected benchmarks implemented in a framework with exactly this target: independent black-box benchmarking and comparison of platforms suitable for the simulation/emulation of spiking neural networks.

Christoph Ostrau et al.
10:40‑10:50
(10+5 min)
 Lightning talk: Evolving Spiking Neural Networks for Robot Sensory-motor Decision Tasks of Varying Difficulty

While there is considerable enthusiasm for the potential of spiking neural network (SNN) computing, there remains the fundamental issue of designing the topologies and parameters for these networks. We say the topology IS the algorithm. Here, we describe experiments using evolutionary computation (genetic algorithms, GAs) on a simple robotic sensory-motor decision task, using a gene-driven topology growth algorithm and letting the GA set all the SNN’s parameters.

We highlight lessons learned from early experiments where evolution failed to produce designs beyond what we called “cheap-tricksters”. These were simple topologies implementing decision strategies that could not satisfactorily solve tasks beyond the simplest, but were nonetheless able to outcompete more complex designs in the course of evolution. The solution involved alterations to the fitness function so as to reduce the inherent noise in the assessment of performance, adding gene-driven control of the symmetry of the topology, and improving the robot sensors to provide more detailed information about its environment.

We show how some subtle variations in the topology and parameters can affect behaviors. We discuss an approach to gradually increasing the complexity of the task that can induce evolution to discover more complex designs. We conjecture that this type of approach will be important as a way to discover cognitive design principles.

(J David Schaffer)
11:00‑11:30
(30 min)
 Coffee break
11:30‑11:50
(20+5 min)
 Natural density cortical models as benchmarks for universal neuromorphic computers

Markus Diesmann
11:55‑12:15
(20+5 min)
 Platform-Agnostic Neural Algorithm Composition using Fugu

Spiking neural networks and corresponding neuromorphic hardware are undergoing an uptick in interest as key milestones are accomplished by industry, academic and government research groups. Unfortunately, from an end-user’s perspective, testing or deploying applications on a neuromorphic platform is very challenging and often infeasible. We hope to address two common and key challenges, portability and composition, by the creation of an overarching software framework called Fugu. Fugu allows for spiking neural algorithms, created by independent designers, to be combined seamlessly in a scalable and target-platform-agnostic manner. This resulting intermediate representation is then translatable to multiple neuromorphic hardware backends.

Acknowledgements: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.

William Severa
12:20‑12:40
(20+5 min)
 Programming neuromorphic computers: PyNN and beyond

PyNN is a Python API for describing spiking neuronal networks consisting of point neurons, with synaptic plasticity. The API is intended to be independent of the underlying simulator or hardware platform: PyNN models can run on traditional simulators such as NEST, NEURON and Brian, GPU-based simulators such as GeNN, and neuromorphic hardware systems such as BrainScaleS and SpiNNaker. In this talk I will present the current state of PyNN and forthcoming extensions, in particular support for multicompartmental models, intracellular calcium dynamics, and structural plasticity. I will also briefly discuss ideas for higher-level APIs/component libraries, built on PyNN, to support cognitive modelling and machine-learning-inspired networks.

Andrew Davison
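For orientation, a minimal PyNN script of the kind the talk describes; the backend, neuron model and all parameter values below are illustrative choices, not material from the talk:

    import pyNN.nest as sim   # swap the backend module (e.g. pyNN.spiNNaker) to retarget

    sim.setup(timestep=0.1)   # ms

    # 100 Poisson spike sources driving 100 conductance-based integrate-and-fire neurons
    stim = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
    cells = sim.Population(100, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))

    sim.Projection(stim, cells, sim.AllToAllConnector(),
                   sim.StaticSynapse(weight=0.002, delay=1.0),
                   receptor_type="excitatory")

    cells.record("spikes")
    sim.run(1000.0)           # ms
    spikes = cells.get_data("spikes")
    sim.end()

The same script is intended to run unchanged on the simulators and hardware platforms listed above, which is the portability argument of the talk.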
12:45‑12:55
(10+5 min)
 Lightning talk: Caspian: A Neuromorphic Development Platform

John Mitchell, Catherine Schuman, Robert Patton and Thomas Potok

Current neuromorphic systems are often difficult to use and costly to deploy. There is a need for a simple yet flexible neuromorphic development platform that allows researchers to quickly prototype ideas and applications. Caspian offers a high-level API along with a fast spiking simulator to enable the rapid development of neuromorphic solutions. It further offers an FPGA architecture that allows for simplified deployment – particularly in SWaP (size, weight, and power) constrained environments. Leveraging both software and hardware, Caspian aims to accelerate development and deployment while enabling new researchers to quickly become productive with a spiking neural network system.

(John (Parker) Mitchell)
13:00‑14:00
(60 min)
 Lunch
14:00‑14:20
(20+5 min)
 BrainScaleS: Development Methodologies and Operating System

The BrainScaleS (BSS) neuromorphic architectures are based on the analog emulation of neuro-synaptic behavior. Neuronal membrane voltages are represented as voltages, and model dynamics evolve in a time-continuous manner. Compared to biology, the systems run at a typical speedup factor of 1000–10000. This enables the evaluation of effects on long timescales and experiments with many trials. Simultaneously, BSS focuses on model configurability and flexibility in plasticity, experiment control and data handling. On BSS-2, this flexibility is facilitated by an embedded SIMD microprocessor located next to the analog neural network core.

The extended configurability, the inclusion of embedded programmability, as well as the horizontal scalability of the systems induce additional complexity. Challenges arise in areas such as initial experiment configuration and runtime control, reproducibility and robustness. We present operation and development methodologies implemented for the BSS neuromorphic architectures and walk through the individual components constituting the software stack for BSS platform operation.

Eric Müller
14:25‑14:35
(10+5 min)
 Lightning talk: Cognitive Domain Ontologies: HPCs to Ultra Low Power Neuromorphic Platforms

Tarek Taha, Chris Yakopcic, Nayim Rahman, Tanvir Atahary and Scott Douglass

The Cognitively Enhanced Complex Event Processing (CECEP) agent-based decision-making architecture is being developed at AFRL/RHCI [1]. Within this agent, the Cognitive Domain Ontology (CDO) component is the slowest for most applications of the agent. We show that even after acceleration on a high-performance server computing system enhanced with a high-end graphics processing unit (GPU), the CDO component does not scale well for real-time use on large problem sizes. Thus, to enable real-time use of the agent, particularly in power-constrained environments (such as autonomous air vehicles), alternative implementations of the agent logic are needed. These alternative implementations need to utilize different algorithms that implement the CDO logic and need to be targeted to much lower-power (and weight) computing systems than GPU-enabled servers (which can consume over 500W and weigh over 50lbs).

The objective of this work was to carry out an initial design space search of algorithms and hardware for decision making through the domain knowledge component of CECEP (the CDO) [2-5]. Several algorithmic and circuit approaches are proposed that span across six hardware options of varying power consumption and weight (ranging from over 1000W to less than 1W). The algorithms range from exact solution producers optimized for running on a cluster of high performance computing systems [1] to approximate solution producers running fast on low power neuromorphic hardware [6-9]. The loss in accuracy for the approximate approaches is minimal, making them well suited to SWaP-constrained systems, such as UAVs. The exact solution approach on an HPC will give confidence that the best answer has been evaluated (although this may take some time to generate).

[1] T. Atahary, T. Taha, F. Webber, and S. Douglass, “Knowledge mining for cognitive agents through path based forward checking,” 16th IEEE/ACIS SNPD, pp. 1-8, June, 2015.

[2] C. Yakopcic, N. Rahman, T. Atahary, T. M. Taha, and S. Douglass, “Cognitive Domain Ontologies in a Memristor Crossbar Architecture,” IEEE National Aerospace and Electronics Conference (NAECON), pp. 76-83, Dayton, OH, June 2017.

[3] N. Rahman, T. Atahary, T. Taha, S. Douglass, "A pattern matching approach to map cognitive domain ontologies to the IBM TrueNorth Neurosynaptic System." 2017 Cognitive Communications for Aerospace Applications Workshop (CCAA). IEEE, 2017.

[4] N. Rahman, C. Yakopcic, T. Atahary, R. Hasan, T. M. Taha, and S. Douglass, “Cognitive Domain Ontologies in Lookup Tables Stored in a Memristor String Matching Architecture,” 30th annual IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1-4, Windsor, Ontario, April 2017.

[5] N. Rahman, T. Atahary, C. Yakopcic, T. M. Taha, Scott Douglass, “Task Allocation Performance Comparison for Low Power Devices,” IEEE National Aerospace and Electronics Conference (NAECON), pp. 247-253, Dayton, OH, July, 2018.

[6] C. Yakopcic, T. Atahary, T. M. Taha, A. Beigh, and S. Douglass, “High Speed Approximate Cognitive Domain Ontologies for Asset Allocation based on Isolated Spiking Neurons,” IEEE National Aerospace and Electronics Conference (NAECON), pp. 241-246, Dayton, OH, July, 2018.

[7] C. Yakopcic, N. Rahman, T. Atahary, T. M. Taha, A. Beigh, and S. Douglass, “High Speed Approximate Cognitive Domain Ontologies for Constrained Asset Allocation based on Spiking Neurons,” IEEE National Aerospace and Electronics Conference (NAECON), 2019.

[8] C. Yakopcic, T. Atahary, N. Rahman, T. M. Taha, A. Beigh, and S. Douglass, “High Speed Approximate Cognitive Domain Ontologies for Asset Allocation Using Loihi Spiking Neurons,” IEEE/INNS International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, July, 2019.

[9] C. Yakopcic, J. Freeman, T. M. Taha, S. Douglass, and Q. Wu, “Cognitive Domain Ontologies Based on Loihi Spiking Neurons Implemented Using a Confabulation Inspired Network,” IEEE Cognitive Communications for Aerospace Applications Workshop, June, 2019.

Tarek Taha et al.
14:40‑14:50
(10+5 min)
 Lightning talk: Comparing Neural Accelerators & Neuromorphic Architectures: The False Idol of Operations

Craig Vineyard, Sam Green and Mark Plagge

Accompanying the advanced computing capabilities that neural networks are enabling across a suite of application domains, there is a resurgence of interest in understanding which architectures can efficiently support these advanced computational demands. Both neural accelerators and neuromorphic approaches are emerging at different scales, resource requirements, and enabling capabilities. Beyond the similarity of executing neural network workloads, these two paradigms exhibit significant differences. As processing, memory, and communication are the core tenets of computing, here we compare neural accelerator and neuromorphic architectures in these terms. Specifically, we show that operations alone are a lacking singular measure of performance due to contrasting computational goals. These differing computational paradigms, maximizing the amount of computation performed versus computing only as needed, are analogous to maximin and minimax reasoning in decision theory. The differing objectives make neural accelerator and neuromorphic architectural choices suited to different computational demands.

Craig Vineyard et al.
14:55‑15:05
(10+5 min)
 Lightning talk: Subspace Locally Competitive Algorithms

Dylan Paiton, Steven Shepard, Kwan Ho Ryan Chan and Bruno Olshausen

We introduce the subspace locally competitive algorithms (SLCAs), a family of novel network architectures for modeling latent representations of natural signals with group sparse structure. SLCA first layer neurons are derived from locally competitive algorithms, which produce responses and learn representations that are well matched to both the linear and non-linear properties observed in simple cells in layer 4 of the primary visual cortical area V1. SLCA incorporates a second layer of neurons which produce approximately invariant responses to signal variations that are linear in their corresponding subspaces, such as phase shifts, resembling a hallmark characteristic of complex cells in V1. We describe the model, give practical analysis of training parameter settings, explore the features and invariances learned, and finally compare it to single-layer sparse coding and to independent subspace analysis.

Dylan Paiton et al.
15:10‑15:20
(10+5 min)
 Lightning talk: Fast and deep neuromorphic learning with first-spike coding

Julian Göltz, Andreas Baumbach, Sebastian Billaudelle, Oliver Breitwieser, Laura Kriener, Akos Ferenc Kungl, Karlheinz Meier, Johannes Schemmel and Mihai Alexandru Petrovici

For a biological agent operating under environmental pressure, energy consumption and reaction times are of critical importance. Similarly, engineered systems also strive for short time-to-solution and low energy-to-solution characteristics. At the level of neuronal implementation, this implies achieving the desired results with as few and as early spikes as possible. In the time-to-first-spike coding framework, both of these goals are inherently emerging features of learning. Here, we describe a rigorous derivation of error-backpropagation-based learning for hierarchical networks of leaky integrate-and-fire neurons. This narrows the gap between previously existing models of first-spike-time learning and biological neuronal dynamics, thereby also enabling fast and energy-efficient inference on analog neuromorphic devices that inherit these dynamics from their biological archetypes.

Julian Göltz et al.
15:25‑15:35
(10+5 min)
 Lightning talk: Neuromorphic Graph Algorithms: Extracting Longest Shortest Paths and Minimum Spanning Trees

Bill Kay, Prasanna Date and Catherine Schuman

Neuromorphic computing is poised to become a promising computing paradigm in the post-Moore's-law era due to its extremely low power usage and inherent parallelism. Traditionally speaking, a majority of the use cases for neuromorphic systems have been in the field of machine learning. In order to expand their usability, it is imperative that neuromorphic systems be used for non-machine-learning tasks as well. The structural aspects of neuromorphic systems (i.e., neurons and synapses) are similar to those of graphs (i.e., nodes and edges). However, it is not obvious how graph algorithms would translate to their neuromorphic counterparts. In this work, we propose a preprocessing technique that introduces fractional offsets on the synaptic delays of neuromorphic graphs in order to break ties. This technique, in turn, enables two graph algorithms: longest shortest path extraction and minimum spanning trees.
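As a toy illustration of the tie-breaking idea (my sketch; the paper's actual offset scheme may differ): giving edge i the fractional offset 2^i / 2^(n+1) makes every subset of edges sum to a distinct fractional part below 1, so comparisons between integer path lengths are preserved while all ties become strict.

    # Toy sketch of fractional-offset tie-breaking on integer edge delays (illustrative).
    edges = [("a", "b", 2), ("b", "c", 2), ("a", "c", 4), ("c", "d", 1)]

    def break_ties(edges):
        n = len(edges)
        # powers-of-two offsets: distinct subset sums, total stays below 1
        return [(u, v, w + 2**i / 2**(n + 1)) for i, (u, v, w) in enumerate(edges)]

    print(break_ties(edges))
    # path a-b-c now costs 4.09375 while edge a-c costs 4.125: the former tie is broken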

15:40‑15:50
(10+5 min)
 Lightning talk: Neuromorphic Computing for Spacecraft’s Terrain Relative Navigation: A Case of Event-Based Crater Classification Task

Kazuki Kariya and Seisuke Fukuda

Terrain relative navigation is a key technology for enhancing conventional spacecraft navigation systems for accurate landing on a planetary body. Since the navigation task is self-localization based on terrain information, computer vision tasks using terrain images are often used for feature extraction and matching. Although the navigation system requires real-time and onboard processing capability due to the high-speed descent and the communication propagation delay, the processing performance of space-grade computers is about two orders of magnitude below that of commercial ones. This decline in performance is caused by the power constraints and the radiation hardening inherent to the space environment. Neuromorphic computing architectures may meet this need in terms of power consumption and processing speed.

In this study, we investigate the applicability of neuromorphic computing systems for a crater classification as a function of terrain relative navigation. The navigation system consists of a spiking neural network that processes the classification task and an event-based camera that provides terrain information as input to the network. Results show that the system can classify craters with very low power consumption while maintaining performance comparable to existing computing architectures.

Kazuki Kariya et al.
15:55‑16:25
(30 min)
 Coffee break
16:25‑16:45
(20+5 min)
 Beyond Backprop: Different Approaches to Credit Assignment in Neural Nets

The backpropagation algorithm (backprop) has been the workhorse of neural net learning for several decades, and its practical effectiveness is demonstrated by recent successes of deep learning in a wide range of applications. This approach uses chain-rule differentiation to compute gradients in state-of-the-art learning algorithms such as stochastic gradient descent (SGD) and its variations. However, backprop has several drawbacks as well, including the vanishing and exploding gradients issue, the inability to handle non-differentiable nonlinearities or to parallelize weight updates across layers, and biological implausibility. These limitations continue to motivate exploration of alternative training algorithms, including several recently proposed auxiliary-variable methods which break the complex nested objective function into local subproblems. However, those techniques are mainly offline (batch), which limits their applicability to extremely large datasets, as well as to online, continual or reinforcement learning. The main contribution of our work is a novel online (stochastic/mini-batch) alternating minimization (AM) approach for training deep neural networks, together with the first theoretical convergence guarantees for AM in stochastic settings and promising empirical results on a variety of architectures and datasets.
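One hedged way to write the auxiliary-variable formulation the abstract alludes to (notation mine, not necessarily the speaker's): introduce the per-layer activations a_l as free variables and relax the nested objective with quadratic penalties,

    \min_{\{W_l\},\{a_l\}} \; \mathcal{L}(a_L, y) + \lambda \sum_{l=1}^{L} \left\| a_l - f_l(W_l a_{l-1}) \right\|_2^2, \qquad a_0 = x,

so that alternating minimization over the weights W_l (activations fixed) and the activations a_l (weights fixed) yields local subproblems that need no end-to-end chain rule; an online variant of the kind described would apply such alternating updates per mini-batch.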

Irina Rish

Biography: https://sites.google.com/site/irinarish/

16:50‑17:10
(20+5 min)
 Batch << 1: Why Neuromorphic Computing Architectures Suit Real-Time Workloads

As predicted by John Hennessy, there has been a “Cambrian explosion” of computing architectures as Moore’s Law scaling has broken down. This is most obvious in the new field of AI hardware, where the competition to develop and commercialize chips for deep learning training and inference is particularly strong. There is no consensus as to whether the same architectures will be appropriate for data-center computation and edge computation, although some practitioners are starting to differentiate architectures on the basis of whether inputs (typically, images or video frames) can be accumulated before processing (allowing for very large memory read and write blocks and large matrix multiplications); or whether the task demands that each frame must be processed in real time (so-called “Batch = 1” processing).

In this presentation we show that many real-world tasks are in fact “Batch << 1” operations. For example, in the case of a forward-facing video camera in a self-driving car application, the similarity between successive frames is very high, and increases as the frame rate and resolution of the video increase; a 240fps 1080p camera will typically have well over 99% of pixels unchanged between successive frames. The same high correlation between successive samples applies in other real-world workloads such as conversational audio processing.
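A back-of-the-envelope sketch of that measurement (illustrative code, not the speaker's; the noise threshold is an assumption):

    import numpy as np

    # Fraction of pixels changing between two successive 8-bit grayscale frames;
    # thresh is a hypothetical noise floor in intensity units.
    def changed_fraction(prev, curr, thresh=8):
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        return float(np.mean(diff > thresh))

    # e.g. prev, curr = consecutive (1080, 1920) uint8 frames from a 240 fps camera;
    # the >99%-unchanged claim above corresponds to changed_fraction(prev, curr) < 0.01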

Exploiting the correlation of input streams can lead to very efficient processing (as shown in video compression techniques such as H.264 / MPEG-4). However, it requires significantly different processing architectures, chief among which is the necessity to maintain system state in memory between inputs.

We will show that neuromorphic architectures intrinsically implement the most important features of a “Batch << 1” architecture, and are very well suited to edge processing. We will describe a new architecture, NeuronFlow, which is optimized for this purpose, and present results from GrAIOne, the first chip manufactured to implement this architecture. Early results show a significant processing advantage in terms of both latency and power consumption.

Jonathan Tapson (GrAI Matter Labs)
17:15‑17:35
(20+5 min)
 Relational Neurogenesis for Lifelong Learning Agents

Tej Pandit and Dhireesha Kudithipudi

Reinforcement learning systems have shown tremendous potential in being able to model meritorious behavior in virtual agents and robots. The ability to learn through continuous reinforcement and interaction with an environment negates the requirement of painstakingly curated datasets and hand-crafted features. However, the ability to learn multiple tasks in a sequential manner, referred to as lifelong or continual learning, remains unresolved. Current implementations either concentrate on preserving information in fixed-capacity networks, or propose incrementally growing networks which randomly search through an unconstrained solution space. This work proposes a novel algorithm for continual learning using neurogenesis in reinforcement learning agents. It builds upon existing neuroevolutionary techniques, and incorporates several new mechanisms for limiting memory resources while expanding neural network learning capacity. The algorithm is tested on a custom set of sequential virtual environments which emulate meaningful scenarios.

Tej Pandit et al.
17:40‑18:10
(30 min)
 open mic / discussions
18:20‑18:30
(10 min)
 Wrap-up / adjourn
18:30
End of NICE 2020 for non-tutorial attendees
19:00‑20:30
(90 min)
 Dinner (only for tutorial attendants)

Friday, 20 March 2020
09:00
NICE 2020, tutorial day -- NOTE: NICE will be POSTPONED!

The tutorial day can be booked as one of the registration options. On the tutorial day, hands-on interactive tutorials with several different neuromorphic compute systems will be offered:

SpiNNaker tutorial

Title: Running Spiking Neural Network Simulations on SpiNNaker

Description: This workshop will describe how to access the SpiNNaker platform, via both Jupyter Notebooks and the HBP Collaboratory. It will then discuss how to write spiking neural networks using the PyNN language to be executed on SpiNNaker, and introduce the integration with the HBP Neurorobotics environment. Participants will be given access to the Jupyter Notebook system from which they will be able to follow some lab examples, and then go on to create their own networks running on the platform, as well as create co-simulations with the robotics environment.

Structure:

  • How to access SpiNNaker using Jupyter and the HBP Collaboratory
  • How to use the NRP through the SpiNNaker Jupyter Service
  • Running PyNN Simulations on SpiNNaker
  • Run lab examples and write your own networks

Timing: The tutorial will run twice with roughly identical content, once in the morning and once in the afternoon, so it can be combined with another morning or afternoon tutorial.

Access: the SpiNNaker system in Manchester is available remotely via the HBP Collaboratory. Please find the access procedure here.

Intel Loihi tutorial

Title: Intel Corporation Loihi and Nx SDK

Description: The tutorial will provide an introduction to the Loihi Neuromorphic Computing Platform and its Nx SDK development toolkit. The Loihi chip features a unique programmable microcode learning engine for on-chip spiking neural networks. The chip contains 128 neuromorphic cores and is fabricated in Intel’s 14nm process.

  • A morning session will provide an overview of the Loihi hardware architecture and SDK basics, followed by
  • an afternoon session sharing live examples and step-by-step IPython-based tutorials of a wide variety of algorithmic examples.

Note that participants will not be able to follow along from their own laptops unless they engage with Intel’s Neuromorphic Research Community beforehand (email inrc_interest@intel.com for more information).

Timing: The morning and the afternoon sessions are fairly self-contained, so people can pick and choose and also attend the other tutorials, as they wish.

BrainScaleS tutorial

Title: Experiments on BrainScaleS

Description: The tutorial will provide an introduction to, and hands-on experiments with, the BrainScaleS accelerated analog neuromorphic hardware system. BrainScaleS is a mixed analog-digital design operating 1,000 times faster than real-time. BrainScaleS-2 features programmable on-chip learning capabilities and a new concept called dendritic computing, developed in close collaboration with neuroscientists. Participants will gain familiarity with biologically inspired spiking neural networks and novel computation.

Timing: The tutorial will run twice with roughly identical content, once in the morning and once in the afternoon, so it can be combined with another morning or afternoon tutorial.

Access: the BrainScaleS-1 system in Heidelberg is available remotely via the HBP Collaboratory. Please find the access procedure here.

09:00‑11:30
(150 min)
 Tutorial (and coffee)

In parallel ("choose one"):

  • BrainScaleS
  • SpiNNaker
  • Loihi: an overview of the Loihi hardware architecture and SDK basics
11:30‑12:30
(60 min)
 Lunch (only for tutorial participants)
12:30‑15:00
(150 min)
 Tutorial (and coffee)

In parallel ("choose one"):

  • BrainScaleS
  • SpiNNaker
  • Loihi: live examples and step-by-step IPython-based tutorials of a wide variety of algorithmic examples
15:00
End of the NICE 2020 tutorial day
Contact: bjoern.kindler@kip.uni-heidelberg.de