Modeling Neuromorphic and Advanced Computing Architectures

Navy SBIR 20.2 - Topic N202-108

Naval Air Systems Command (NAVAIR) - Ms. Donna Attick navairsbir@navy.mil

Opens: June 3, 2020 - Closes: July 2, 2020 (12:00 pm ET)

 

 

N202-108       TITLE: Modeling Neuromorphic and Advanced Computing Architectures

 

RT&L FOCUS AREA(S): General Warfighting Requirements (GWR)

TECHNOLOGY AREA(S): Air Platform, Information Systems

 

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with section 3.5 of the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.

 

OBJECTIVE: Develop a software tool to optimize the signal processing chain across various sensors and systems, e.g., radar, electronic warfare (EW), electro-optical/infrared (EO/IR), and communications, that consists of functional models that can be assembled to produce an integrated network model used to predict overall detection/classification, power, and throughput performance to make design trade-off decisions.

 

DESCRIPTION: Conventional computing architectures are running up against a quantum limit in terms of transistor size and efficiency, sometimes referred to as the end of Moore’s Law. To regain our competitive edge, we need to find a way around this limit. This is especially relevant for small size, weight, and power (SWaP)-constrained platforms. For these systems, scaling Von Neumann computing becomes prohibitively expensive in terms of power and/or SWaP.

 

Biologically inspired neural networks provide the basis for modern signal processing and classification algorithms. Implementing these algorithms on conventional computing hardware requires significant compromises in efficiency and latency due to fundamental design differences. A new class of hardware is emerging that more closely resembles the biological neuron model, also known as the spiking neuron model, which mathematically describes the systems found in nature and may overcome some of these limitations and bottlenecks. Recent work has demonstrated performance gains on these new hardware architectures while converging on solutions of equivalent accuracy [Ref 1].

 

The most promising of the new class are based on Spiking Neural Networks (SNNs) and analog Processing in Memory (PiM), in which information is spatially and temporally encoded onto the network. It can be shown that a simple spiking network can reproduce the complex behavior found in the neural cortex with a significant reduction in complexity and power requirements [Ref 2]. Fundamentally, algorithms based on neural networks should not depend on the underlying hardware; in fact, they can be readily transferred between hardware architectures [Ref 4]. These performance gains and the relative ease of transitioning current algorithms to the new hardware motivate consideration of this SBIR topic.
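The simple spiking model referenced above [Ref 2] illustrates how little computation a single model neuron requires. The sketch below integrates the two coupled Izhikevich equations with a forward-Euler step; the parameter values are the regular-spiking defaults from that paper, while the constant drive current and simulation length are illustrative choices, not values from this topic.

```python
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, steps=1000, dt=1.0):
    """Simulate one Izhikevich spiking neuron [Ref 2] and return spike times.

    v is the membrane potential (mV), u the recovery variable; the
    regular-spiking parameters (a, b, c, d) are from the original paper.
    """
    v, u = c, b * c
    spikes = []
    for t in range(steps):
        # Euler integration of the two coupled ODEs from Ref 2
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: record time, then reset v and bump u
            spikes.append(t)
            v, u = c, u + d
    return spikes

spikes = izhikevich()
```

With a constant suprathreshold input, the neuron fires tonically; the spike-time list is the temporal code that SNN hardware propagates instead of dense activations.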

 

Hardware based on SNNs is currently under development at various stages of maturity. Two prominent examples are the IBM True North and Intel Loihi chips, both implemented in conventional Complementary Metal-Oxide Semiconductor (CMOS) technology; less mature memristor-based architectures are also under development. Estimated efficiency gains exceed three orders of magnitude over state-of-the-art graphics processing units (GPUs) or field-programmable gate arrays (FPGAs). More advanced architectures based on all-optical or photonic SNNs show even greater promise: nanophotonic-based systems are estimated to achieve a six-order-of-magnitude increase in efficiency and computational density, approaching the performance of the human neural cortex. Modeling these systems to inform design and acquisition decisions is of great interest and importance. Validating these performance estimates and providing a modeling tool is the basis for this SBIR topic.

 

The primary goal of this effort is to create a software tool that captures the non-linear physics of these SNNs, and potentially other neuromorphic and related low-SWaP architectures, and functionally models their behavior. The use of open-source languages, software, and hardware is recommended when possible. A similar approach [Ref 6] should be considered as a starting point, with the ultimate goal of producing a viable and flexible product for capturing, modeling, and understanding the behaviors of a composite system built on these adaptive learning systems, spanning implementations from CMOS to photonics. Additionally, the model should be able to take an algorithm developed in a conventional neural network framework such as Caffe, PyTorch, or TensorFlow and run it through the functional model to predict performance criteria such as latency and throughput. The secondary goal is to build up a network framework to model multi-step processing chains. For example, a hypothetical processing chain for a communications system might be: filter, in-phase/quadrature (IQ) demodulation, frequency decomposition, symbol detection, interference mitigation, filter, and decryption.
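To make the secondary goal concrete, a serial chain of functional models could be composed and queried for aggregate performance. The sketch below is a minimal illustration, not a required design: stage names follow the hypothetical communications chain above, every number is a placeholder rather than a measured value, and it assumes a fully pipelined chain so latency is additive, throughput is set by the slowest stage, and power is additive.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """Functional model of one processing step (all figures are placeholders)."""
    name: str
    latency_ms: float        # per-sample processing latency
    throughput_msps: float   # sustainable rate, mega-samples per second
    power_w: float           # average power draw

def chain_metrics(stages):
    """Aggregate end-to-end metrics for a serial, pipelined processing chain."""
    latency = sum(s.latency_ms for s in stages)            # latencies add
    throughput = min(s.throughput_msps for s in stages)    # slowest stage limits
    power = sum(s.power_w for s in stages)                 # power is additive
    return latency, throughput, power

# Hypothetical communications chain from the description; numbers are illustrative.
comms = [
    Stage("filter", 0.05, 200.0, 0.5),
    Stage("IQ demodulation", 0.10, 150.0, 0.8),
    Stage("frequency decomposition", 0.20, 100.0, 1.2),
    Stage("symbol detection", 0.15, 120.0, 0.9),
    Stage("interference mitigation", 0.30, 80.0, 1.5),
    Stage("filter", 0.05, 200.0, 0.5),
    Stage("decryption", 0.25, 90.0, 1.1),
]
latency, throughput, power = chain_metrics(comms)
```

A real tool would replace each `Stage` with a physics-level or calibrated functional model of the target hardware (CMOS, memristor, or photonic), but the composition pattern, lower-level models assembled into an integrated network model, is the one the objective describes.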

 

Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA). The selected contractor and/or subcontractor must be able to acquire and maintain a secret level facility and Personnel Security Clearances. This will allow contractor personnel to perform on advanced phases of this project as set forth by DCSA and NAVAIR in order to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract.

 

PHASE I: Design and develop the modeling approach and demonstrate feasibility to capture the relevant physics and computational complexity. Demonstrate a functional model of a SNN. The Phase I effort will include prototype plans to be developed under Phase II.

 

PHASE II: Validate the functional model using test cases from the literature. Model validation with hardware is strongly encouraged; however, due to the limited availability of hardware, it is not a requirement. The model will need to contain a network framework for various processing steps across multiple sensor areas using lower-level functional models. Priority sensor/functional areas are EW, radar, communications, and EO/IR.

Work in Phase II may become classified. Please see note in Description section.

 

PHASE III DUAL USE APPLICATIONS: Refine algorithms and test with hardware. Validate models with data provided by Naval Air Warfare Center (NAWC) Aircraft Division (AD)/Weapons Division (WD). Transition model to the warfare centers. Development of documentation, training manuals, and software maintenance may be required.

 

Heavy commercial investment in machine learning and artificial intelligence will likely continue for the foreseeable future. Adoption of hardware that can deliver orders-of-magnitude SWaP performance gains for intelligent mobile machine applications is estimated to be worth $10^9 to $10^12 globally per year. This effort would provide the software tools needed to optimize algorithm and hardware integration, a significant contribution toward meeting that demand. Industries that would benefit from successful technology development include automotive (self-driving vehicles), personal robotics, and a variety of intelligent sensors.

 

REFERENCES:

1. Ambrogio, S., Narayanan, P., Tsai, H., Shelby, R.M., Boybat, I., Nolfo, C.D., Sidler, S., Giordano, M., Bodini, M., Farinha, N.C., Killeen, B., Cheng, C., Jaoudi, Y. & Burr, G.W. “Equivalent-accuracy accelerated neural-network training using analogue memory.” Nature, 558, 2018, pp. 60-67   DOI:10.1038/s41586-018-0180-5

 

2. Izhikevich, E.M. “Simple model of spiking neurons.” IEEE Transactions on Neural Networks, Volume: 14, Issue: 6, 2003. https://ieeexplore.ieee.org/document/1257420

 

3. Diehl, P. U., Zarrella, G., Cassidy, A., Pedroni, B. U. & Neftci, E. “Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware.” ArXiv:1601.04187 [cs.NE], 2016. https://arxiv.org/pdf/1601.04187.pdf

 

4. Esser, S.K., Merolla, P., Arthur, J.V., Cassidy, A.S., Appuswamy, R., Andreopoulos, A., Berg, D.J., McKinstry, J.L., Melano, T., Barch, D., Nolfo, C.D., Datta, P., Amir, A., Taba, B., Flickner, M. & Modha, D.S. “Convolutional networks for fast, energy-efficient neuromorphic computing.” Proceedings of the National Academy of Sciences of the United States of America, 113(41), 2016, pp. 11441-11446. https://arxiv.org/pdf/1603.08270.pdf

 

5. “Summary of the 2018 National Defense Strategy.” U.S. Department of Defense. https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf

 

6. Rajendran, B., Sebastian, A., Schmuker, M., Srinivasa, N. & Eleftheriou, E. “Low-Power Neuromorphic Hardware for Signal Processing Applications.” ArXiv:1901.03690, 2019. https://arxiv.org/pdf/1901.03690.pdf

 

7. Wolfe, N., Plagge, M., Carothers, C. D., Mubarak M. and Ross, R. B. "Evaluating the Impact of Spiking Neural Network Traffic on Extreme-Scale Hybrid Systems." 2018 IEEE/ACM Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS), Dallas, TX, USA, 2018, pp. 108-120. doi: 10.1109/PMBS.2018.8641660

 

KEYWORDS: Spiking Neural Network, Neuromorphic Computing, Modeling, Convolutional Neural Network, Analog Memory, Processing in Memory

 

TPOC-1:   Josef Schaff

Phone:   (301)757-2467

 

TPOC-2:   Ari Goodman

Phone:   (732)323-4601
