Deleted member 118
Guest
It was only a matter of time before more information became available from NASA.
Deep Neural Networks (DNNs) have become a critical component of tactical applications, assisting the warfighter in interpreting and making decisions from vast and disparate sources of data. Whether image, signal or text data, remotely sensed or scraped from the web, cooperatively collected or intercepted, DNNs are the go-to tool for rapidly processing this information to extract relevant features and enable the automated execution of downstream applications. Deployment of DNNs in data centers, ground stations and other locations with extensive power infrastructure has become commonplace, but deployment at the edge, where the tactical user operates, remains very difficult. Secure, reliable, high-bandwidth communications are a constrained resource for tactical applications, which limits the ability to route data collected at the edge back to a centralized processing location. Data must therefore be processed in real time at the point of ingest, which presents its own challenges, as almost all DNNs are developed to run on power-hungry GPUs at wattages exceeding the practical capacity of the solar power sources typically available at the edge. So what, then, is the future of advanced AI for the tactical end user, where power and communications are in limited supply? Neuromorphic processors may provide the answer.

Blue Ridge Envisioneering, Inc. (BRE) proposes the development of a systematic and methodical approach to deploying Deep Neural Network (DNN) architectures on neuromorphic hardware and evaluating their performance relative to a traditional GPU-based deployment. BRE will develop and document a process for benchmarking a DNN's performance on a standard GPU, converting it to run on commercially available neuromorphic hardware, training and evaluating model accuracy for a range of available bit quantizations, characterizing the trade between power consumption and the various bit quantizations, and characterizing the trade between throughput/latency and the various bit quantizations.

This process will be demonstrated on a Deep Convolutional Neural Network trained to classify Electronic Warfare (EW) emitters in data collected by AFRL in 2011. The BrainChip Akida Event Domain Neural Processor development environment will be utilized for the demonstration, as it provides a simulated execution environment for running converted models under the discrete, low-quantization constraints of neuromorphic hardware. In the option effort we will pursue a direct Spiking Neural Network (SNN) implementation and compare performance on the Akida hardware, and potentially other vendors' hardware as well. We will demonstrate the capability operating on real hardware in a relevant environment by conducting a data collection and demonstration activity at a U.S. test range with relevant EW emitters.
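For anyone wondering what the "range of available bit quantizations" trade study in the abstract might look like in practice, here is a minimal sketch, assuming a trained Keras CNN and a labelled validation set. It is purely illustrative and not from the proposal: the hand-rolled uniform weight quantization below is a stand-in for whatever BrainChip's Akida development environment (e.g. its CNN-to-Akida conversion tooling) actually provides, and the file and variable names in the usage comment are hypothetical.

```python
# Illustrative bit-width sweep for a trained Keras classifier.
# Assumes y_val contains integer class labels.
import numpy as np
import tensorflow as tf

def quantize_weights(model, bits):
    """Return a copy of `model` with weights uniformly quantized to `bits` bits
    (symmetric, per-tensor) -- a rough stand-in for the low-precision
    constraints imposed by neuromorphic hardware."""
    clone = tf.keras.models.clone_model(model)
    clone.set_weights(model.get_weights())
    levels = max(2 ** (bits - 1) - 1, 1)
    quantized = []
    for w in clone.get_weights():
        w_max = np.max(np.abs(w))
        scale = w_max / levels if w_max > 0 else 1.0
        quantized.append(np.round(w / scale) * scale)
    clone.set_weights(quantized)
    clone.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return clone

def sweep_bit_widths(model, x_val, y_val, bit_widths=(8, 4, 2, 1)):
    """Evaluate classification accuracy at each candidate weight bit width."""
    results = {}
    for bits in bit_widths:
        q_model = quantize_weights(model, bits)
        _, acc = q_model.evaluate(x_val, y_val, verbose=0)
        results[bits] = acc
    return results

# Hypothetical usage (names are placeholders, not from the proposal):
# model = tf.keras.models.load_model("ew_emitter_cnn.h5")
# print(sweep_bit_widths(model, x_val, y_val))
```

The real effort would also fold quantization into training (as the abstract implies) and measure power and throughput on the target hardware; this sketch only shows the accuracy side of the trade.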
Source: MENTAT | SBIR.gov (www.sbir.gov)