Northrop Grumman + Neuromorphic

uiux

https://arxiv.org/pdf/1611.01235.pdf

A Self-Driving Robot Using Deep Convolutional Neural Networks on Neuromorphic Hardware

Tiffany Hwu∗†, Jacob Isbell‡, Nicolas Oros§, and Jeffrey Krichmar∗¶

∗Department of Cognitive Sciences
University of California, Irvine

Irvine, California, USA, 92697

†Northrop Grumman
Redondo Beach, California, USA, 90278
‡Department of Electrical and Computer Engineering
University of Maryland
College Park, Maryland, USA, 20742

§BrainChip LLC
Aliso Viejo, California, USA, 92656

¶Department of Computer Sciences
University of California, Irvine

Irvine, California, USA, 92697

Abstract
Neuromorphic computing is a promising solution for reducing the size, weight and power of mobile embedded systems. In this paper, we introduce a realization of such a system by creating the first closed-loop battery-powered communication system between an IBM TrueNorth NS1e and an autonomous Android-Based Robotics platform. Using this system, we constructed a dataset of path following behavior by manually driving the Android-Based robot along steep mountain trails and recording video frames from the camera mounted on the robot along with the corresponding motor commands. We used this dataset to train a deep convolutional neural network implemented on the TrueNorth NS1e. The NS1e, which was mounted on the robot and powered by the robot’s battery, resulted in a self-driving robot that could successfully traverse a steep mountain path in real time. To our knowledge, this represents the first time the TrueNorth NS1e neuromorphic chip has been embedded on a mobile platform under closed-loop control.
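For a concrete picture of the training setup the abstract describes, here is a minimal PyTorch sketch of the general approach (behavioral cloning: camera frames in, a discrete motor command out). It is not the Eedn/TrueNorth toolchain the authors actually used, and the network shape, input size, and three-way command split are illustrative assumptions.

```python
# Minimal behavioral-cloning sketch (assumed shapes and names,
# not the authors' Eedn/TrueNorth code).
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    """Maps a camera frame to one of three motor commands: left, straight, right."""
    def __init__(self, n_commands: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_commands)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SteeringCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch standing in for
# (video frame, recorded motor command) pairs from manual driving.
frames = torch.randn(8, 3, 64, 64)       # camera frames
commands = torch.randint(0, 3, (8,))     # motor commands logged alongside
optimizer.zero_grad()
loss = loss_fn(model(frames), commands)
loss.backward()
optimizer.step()
```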



ACKNOWLEDGMENT
The authors would like to thank Andrew Cassidy and Rodrigo Alvarez-Icaza of IBM for their support. This work was supported by the National Science Foundation Award number 1302125 and Northrop Grumman Aerospace Systems. We also would like to thank the Telluride Neuromorphic Cognition Engineering Workshop, The Institute of Neuromorphic Engineering, and their National Science Foundation, DoD and Industrial Sponsors.




---


Superconducting neuromorphic core

NORTHROP GRUMMAN SYSTEMS CORPORATION

Abstract
A superconducting neuromorphic pipelined processor core can be used to build neural networks in hardware by providing the functionality of somas, axons, dendrites and synaptic connections. Each instance of the superconducting neuromorphic pipelined processor core can implement a programmable and scalable model of one or more biological neurons in superconducting hardware that is more efficient and biologically suggestive than existing designs. This core can be used to build a wide variety of large-scale neural networks in hardware. The biologically suggestive operation of the neuron core provides additional capabilities to the network that are difficult to implement in software-based neural networks and would be impractical using room-temperature semiconductor electronics. The superconductive electronics that make up the core enable it to perform more operations per second per watt than is possible in comparable state-of-the-art semiconductor-based designs.
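To make the terminology concrete, the snippet below is a plain software analogue of the neuron functionality such a core provides: synapses weight incoming spikes, dendrites sum them, and the soma integrates with leak and fires an axonal spike at threshold. All constants are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Illustrative leaky integrate-and-fire neuron (assumed constants; not from the patent).
def lif_step(v, spikes_in, weights, leak=0.95, threshold=1.0):
    """One timestep: dendrites sum weighted synaptic inputs, the soma
    integrates with leak and fires when the threshold is crossed."""
    dendritic_input = weights @ spikes_in   # synaptic weighting + dendritic summation
    v = leak * v + dendritic_input          # soma membrane update
    fired = v >= threshold                  # axonal spike output
    v = np.where(fired, 0.0, v)             # reset after firing
    return v, fired

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.3, size=(4, 10))   # 4 neurons, 10 input synapses each
v = np.zeros(4)
for t in range(100):
    v, fired = lif_step(v, rng.integers(0, 2, 10).astype(float), weights)
```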



BrainFreeze: Expanding the Capabilities of Neuromorphic Systems Using Mixed-Signal Superconducting Electronics

Superconducting electronics (SCE) is uniquely suited to implement neuromorphic systems. As a result, SCE has the potential to enable a new generation of neuromorphic architectures that can simultaneously provide scalability, programmability, biological fidelity, on-line learning support, efficiency and speed. Supporting all of these capabilities simultaneously has thus far proven to be difficult using existing semiconductor technologies. However, as the fields of computational neuroscience and artificial intelligence (AI) continue to advance, the need for architectures that can provide combinations of these capabilities will grow. In this paper, we will explain how superconducting electronics could be used to address this need by combining analog and digital SCE circuits to build large-scale neuromorphic systems. In particular, we will show through detailed analysis that the available SCE technology is suitable for near-term neuromorphic demonstrations. Furthermore, this analysis will establish that neuromorphic architectures built using SCE will have the potential to be significantly faster and more efficient than current approaches, all while supporting capabilities such as biologically suggestive neuron models and on-line learning. In the future, SCE-based neuromorphic systems could serve as experimental platforms supporting investigations that are not feasible with current approaches. Ultimately, these systems and the experiments that they support would enable the advancement of neuroscience and the development of more sophisticated AI.

The main contributions of this paper are:
• A description of a novel mixed-signal superconducting neuromorphic architecture
• A detailed analysis of the design trade-offs and feasibility of mixed-signal superconducting neuromorphic architectures
• A comparison of the proposed system with other state-of-the-art neuromorphic architectures
• A discussion of the potential of superconducting neuromorphic systems in terms of on-line learning support and large system scaling (a minimal software sketch of one such on-line learning rule follows this list).
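As a concrete reference for the on-line learning capability named above, here is a minimal pair-based spike-timing-dependent plasticity (STDP) update in plain Python. The rule, time constants, and learning rates are textbook-style assumptions for illustration; they are not the paper's circuit-level mechanism.

```python
import numpy as np

# Pair-based STDP sketch (illustrative constants; not the paper's circuit-level rule).
def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Potentiate when the presynaptic spike precedes the postsynaptic spike,
    depress when it follows; the magnitude decays with the timing gap."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)    # pre before post: strengthen
    else:
        w -= a_minus * np.exp(dt / tau)    # post before pre: weaken
    return float(np.clip(w, 0.0, 1.0))

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)   # causal pairing -> w increases
w = stdp_update(w, t_pre=30.0, t_post=22.0)   # anti-causal pairing -> w decreases
```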



---


System and method for automating observe-orient-decide-act (ooda) loop enabling cognitive autonomous agent systems

NORTHROP GRUMMAN SYSTEMS CORPORATION

Abstract
The disclosed invention provides a system and method for providing autonomous actions that are consistently applied to evolving missions in physical and virtual domains. The system includes an autonomy engine that is implemented in computing devices. The autonomy engine includes a sense component including one or more sensor drivers that are coupled to the one or more sensors, a model component including a world model, a decide component reacting to changes in the world model and generating a task based on those changes, and an act component receiving the task from the decide component and invoking actions based on the task. The sense component acquires data from the sensors and extracts knowledge from the acquired data. The model component receives the knowledge from the sense component and creates or updates the world model based on that knowledge. The act component includes one or more actuator drivers to apply the invoked actions to the physical and virtual domains.





The sense component 102 includes sensor drivers 121 that control sensors 202 to monitor and acquire environmental data. The model component 103 includes domain-specific tools; tools for the Edge that are lightweight, streaming, and neuromorphic; common analytic toolkits; and a common open architecture. The model component 103 is configured to implement human interfaces and application programming interfaces. The model component 103 builds knowledge-based models (e.g., the World Model) that may include a mission model covering goals, tasks, and plans; an environment model covering adversaries; a self-model covering the agent's capabilities; and a history. The model component 103 may also build machine learning-based models that learn from experience.
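Purely as a structural illustration of the sense-model-decide-act cycle described above, a minimal sketch follows; every class and method name is an assumption chosen for readability, not an identifier from the patent.

```python
# Skeleton of the sense -> model -> decide -> act cycle
# (all names are illustrative assumptions, not identifiers from the patent).
class AutonomyEngine:
    def __init__(self, sensor_drivers, actuator_drivers):
        self.sensor_drivers = sensor_drivers
        self.actuator_drivers = actuator_drivers
        self.world_model = {"history": []}   # mission, environment, self-model, history

    def sense(self):
        """Acquire data from each sensor driver (knowledge extraction elided)."""
        return [driver.read() for driver in self.sensor_drivers]

    def model(self, knowledge):
        """Create or update the world model from the extracted knowledge."""
        self.world_model["latest"] = knowledge
        self.world_model["history"].append(knowledge)

    def decide(self):
        """React to changes in the world model and generate a task."""
        return {"task": "investigate" if self.world_model["latest"] else "hold_position"}

    def act(self, task):
        """Invoke actions on the physical/virtual domains via actuator drivers."""
        for driver in self.actuator_drivers:
            driver.apply(task)

    def run_once(self):
        self.model(self.sense())
        self.act(self.decide())

class StubSensor:
    def read(self):
        return {"event": "contact"}

class StubActuator:
    def apply(self, task):
        print("executing", task)

AutonomyEngine([StubSensor()], [StubActuator()]).run_once()
```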

---


Optical information collection system

NORTHROP GRUMMAN SYSTEMS CORPORATION

Abstract
An optical information collection system includes a neuromorphic sensor to collect optical information from a scene in response to changes in photon flux detected at a plurality of photoreceptors of the neuromorphic sensor. A sensor stimulator stimulates a subset of the plurality of photoreceptors according to an eye movement pattern in response to a control command. A controller generates the control command that includes instructions to execute the eye movement pattern.
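A toy sketch of the idea, under heavy assumptions: a "saccade" pattern selects a photoreceptor subset, and only flux changes within that subset are reported. The pattern, threshold, and every name below are illustrative, not from the patent.

```python
import numpy as np

# Illustrative stimulate-and-collect sketch (names, pattern, and threshold are assumptions).
def saccade_pattern(center, radius=2):
    """A toy 'eye movement pattern': the photoreceptor subset around a fixation point."""
    cy, cx = center
    return [(cy + dy, cx + dx)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)]

def collect_events(prev_flux, flux, subset, threshold=0.1):
    """Report only photoreceptors in the stimulated subset whose flux changed."""
    events = []
    for y, x in subset:
        delta = flux[y, x] - prev_flux[y, x]
        if abs(delta) > threshold:
            events.append((y, x, np.sign(delta)))
    return events

rng = np.random.default_rng(1)
prev_flux = rng.random((32, 32))
flux = prev_flux.copy()
flux[15, 16] += 0.5                                  # a change in the scene
events = collect_events(prev_flux, flux, saccade_pattern((15, 15)))
```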




---


DARPA Announces Research Teams to Develop Intelligent Event-Based Imagers

DARPA today announced that three teams of researchers led by Raytheon, BAE Systems, and Northrop Grumman have been selected to develop event-based infrared (IR) camera technologies under the Fast Event-based Neuromorphic Camera and Electronics (FENCE) program. Event-based – or neuromorphic – cameras are an emerging class of sensors with demonstrated advantages relative to traditional imagers. These advanced models operate asynchronously and only transmit information about pixels that have changed. This means they produce significantly less data and operate with much lower latency and power.
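To make "significantly less data" concrete, a back-of-the-envelope comparison under assumed numbers follows; the resolution, frame rate, event size, and 1% activity fraction are illustrative assumptions, not FENCE program figures.

```python
# Back-of-the-envelope data-rate comparison (all numbers are illustrative assumptions).
width, height = 1024, 768
frame_rate = 60                     # frames per second
bytes_per_pixel = 2                 # e.g., 16-bit IR intensity
frame_bandwidth = width * height * bytes_per_pixel * frame_rate   # bytes/s

active_fraction = 0.01              # assumed: 1% of pixels change per frame interval
bytes_per_event = 8                 # assumed: pixel address + timestamp + polarity
event_bandwidth = width * height * active_fraction * frame_rate * bytes_per_event

print(f"frame-based: {frame_bandwidth / 1e6:.1f} MB/s")   # ~94.4 MB/s
print(f"event-based: {event_bandwidth / 1e6:.1f} MB/s")   # ~3.8 MB/s
```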

---


Neuromorphic Cameras Provide a Vision of the Future

“Cameras we use today have an array of pixels: 1024 by 768,” explained Isidoros Doxas, an AI Systems Architect at Northrop Grumman. “And each pixel essentially measures the amount of light or number of photons falling on it. That number is called the flux. Now, if you display the same numbers on a screen, you will see the same image that fell on your camera.”

By contrast, neuromorphic cameras only report changes in flux. If the rate of photons falling on a pixel doesn’t change, they report nothing.

“If a constant 1,000 photons per second is falling on a pixel, it basically says, ‘I’m good, nothing happened.’ But, if at some point there are now 1,100 photons per second falling on the pixel, it will report that change in flux,” Doxas said.

“Surprisingly, this is exactly how the human eye works,” he added. “You may think that your eye reports the image that you see. But it doesn’t. All that stuff is in your head. All the eye reports are little blips saying ‘up’ or ‘down.’ The image we perceive is built by our brains.”
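Doxas's description maps onto a very simple per-pixel rule: remember the last reported flux and emit an "up" or "down" blip only when the new flux deviates by more than some tolerance. A minimal sketch, assuming a 5% tolerance:

```python
# Per-pixel change reporting as described above (the 5% tolerance is an assumption).
def report(last_flux, flux, tolerance=0.05):
    """Return 'up', 'down', or None, mimicking a neuromorphic pixel's blips."""
    if flux > last_flux * (1 + tolerance):
        return "up"
    if flux < last_flux * (1 - tolerance):
        return "down"
    return None                       # "I'm good, nothing happened."

print(report(1000, 1000))             # None: constant 1,000 photons/s
print(report(1000, 1100))             # 'up': flux rose to 1,100 photons/s
```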


uiux


Speaker
Dr. Jamal Molin, Senior Principal FPGA and ASIC Design Engineer, Northrop Grumman

Enhancing Edge Processing: Imagers with In-pixel Processors

By 2025, the amount of data generated each day is expected to reach 463 exabytes globally. To be useful, this data has to be stored and processed, which requires large amounts of both time and power and is typically done in huge data centers that can become a communications bottleneck, particularly when processing sensor data. A promising alternative goes beyond enhancing near-sensor computing capability (which will never be enough) and brings processing into the sensor itself. This paradigm shift not only improves latency and power at the sensor, but also makes back-end processing faster and more accurate, and further reduces the amount of information that must be communicated to the external world.
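As one way to picture in-pixel processing (not a description of any Northrop Grumman device), imagine each pixel computing a local feature and the sensor reading out only the few results that cross a threshold:

```python
import numpy as np

# Illustrative in-pixel processing: each pixel computes a local feature and
# only above-threshold results leave the sensor (all parameters are assumptions).
def in_pixel_readout(frame, threshold=0.3):
    """Per-pixel horizontal gradient; the readout is the sparse set of strong edges."""
    grad = np.abs(np.diff(frame, axis=1))            # computed 'inside' the array
    ys, xs = np.nonzero(grad > threshold)
    return list(zip(ys.tolist(), xs.tolist(), grad[ys, xs].tolist()))

rng = np.random.default_rng(2)
frame = rng.random((8, 8))
sparse = in_pixel_readout(frame)
print(f"read out {len(sparse)} of {frame.size} values")
```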

---

Jamal Molin received his B.S. from University of Maryland, Baltimore County in 2011 as a Meyerhoff Scholar and then went on to receive his M.S.E. and Ph.D. in Electrical and Computer Engineering at Johns Hopkins University in 2015 and 2017, respectively. His research was in the field of neuromorphic engineering where he emulated systems on FPGA and designed neuromorphic ASICs for low SWaP visual processing. The objective was to design systems inspired by the human brain. He currently works at Northrop Grumman in the Advanced Electronics group where he continues to work on applied R&D projects related to neuromorphic and low SWaP imaging and processing.
 

Wags

Good evening uiux and chippers,
Just reading the agenda gives me shivers at the knowledge to be assembled, presented, and discussed. Applications and use cases will just continue to grow, booming the market that we shareholders might just get a few percent of.
It will be worth the cost of admission and a few sleepless nights for me, though 95% of the technical content will be well over my head.
Exciting times, cheers
 