BRN - NASA


Deleted member 118

Guest
This looks interesting


The broader impact/commercial potential of this Small Business Technology Transfer (STTR) Phase I project lies in its ability to enable reconfigurable hardware acceleration in the Internet of Things (IoT). Power-constrained users at the edge will gain a new solution that brings hardware acceleration and datacenter-like capabilities, helping deliver on the IoT's long-sought promise. Using Resistive Random-Access Memory (RRAM), it is possible to develop a next-generation Field Programmable Gate Array (FPGA) that increases performance while reducing energy consumption at the sensor-node level. As our data-driven world has already shown, it is critical to improve computing capability at the edge in order to gain responsiveness and increase energy efficiency. With ever more data and performance required to meet consumer demands, innovative uses of emerging memories are showing great promise and providing capabilities that will help fulfill the IoT's potential. If fulfilled, this technology has the potential to enable a whole set of data-driven applications at the edge, such as low-energy image recognition and learning in drones so they operate longer and more effectively, or long-lasting medical implants with leading data aggregation and responsiveness.

This STTR Phase I project aims to commercialize a patented technology to realize an ultra-low-power FPGA based on RRAM. To handle the data explosion in Internet-of-Things (IoT) networks, the industry is moving towards increasing the intelligent analysis capability of individual IoT devices. A tight power budget has become a critical roadblock: high-end solutions, such as multicore CPUs and GPUs, can provide enough computing capability but fail to meet the power budget, while low-power commercial products, such as microcontrollers and low-power FPGAs, can satisfy the power constraints but can hardly keep up with the increasing complexity of data analysis algorithms. This project aims to develop an ultra-low-power FPGA that offers high-performance data analysis capability under IoT-level power limits. The project will prototype an FPGA chip built around an innovative RRAM-based routing multiplexer design, and will also release an associated software tool suite to support the implementation of customers' applications on the technology. Compared to existing commercial solutions, the proposed FPGA product is expected to demonstrate computing capability similar to high-end FPGA solutions while satisfying an IoT power budget (<1 W). This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
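For context on the building block mentioned at the end of the abstract, here is a minimal behavioural sketch of an FPGA routing multiplexer whose configuration lives in nonvolatile RRAM cells instead of SRAM. This is only an illustration of the general idea under assumed details; the class name, one-hot encoding, and purely digital model are my own simplifications, not the proposer's design.

```python
# Minimal behavioural sketch of an RRAM-configured FPGA routing multiplexer.
# Real designs are analog circuits; this model only illustrates the idea that
# the routing choice lives in nonvolatile resistive cells (low resistance =
# "on" path) instead of SRAM configuration bits. Names are illustrative.

LOW_RES, HIGH_RES = "LRS", "HRS"  # low/high resistance states of an RRAM cell

class RramRoutingMux:
    def __init__(self, num_inputs):
        # One RRAM cell per input; exactly one is programmed to LRS (one-hot).
        self.cells = [HIGH_RES] * num_inputs

    def program(self, selected_input):
        """Nonvolatile configuration step: set one cell to the low-resistance state."""
        self.cells = [LOW_RES if i == selected_input else HIGH_RES
                      for i in range(len(self.cells))]

    def route(self, input_signals):
        """Pass through the signal whose RRAM cell is in the low-resistance state."""
        for cell, signal in zip(self.cells, input_signals):
            if cell == LOW_RES:
                return signal
        raise RuntimeError("multiplexer not programmed")

mux = RramRoutingMux(num_inputs=4)
mux.program(selected_input=2)      # configuration survives power-off in RRAM
print(mux.route([0, 1, 1, 0]))     # -> 1 (the signal on input 2)
```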
 
  • Like
  • Love
  • Fire
Reactions: 4 users

Deleted member 118

Guest


Due to the complexity of tactical missions, as well as the uncertainty of practical environments, more and more Unmanned Aerial Vehicle (UAV) programs have been seeking innovative and reliable sensing, navigation, path planning, and real-time control techniques that can be used even in harsh environments, such as GPS-denied environments. With the rapid development of machine learning, emerging artificial intelligence (AI) on chip is a promising technique to fully enable autonomous UAV swarming capability in practice, even under harsh conditions. In this project, we aim to develop and verify a new type of Smart Unmanned Aerial Vehicle (Smart UAV) with emerging artificial intelligence on-chip that possesses four prominent properties, i.e. scalability, adaptability, resiliency, and autonomy, and can be used in various tactical environments. The novel Hierarchical Hybrid Artificial Intelligence (H2AI) framework designs a set of appropriate AI techniques and implements them in the different Smart UAV layers, i.e. the sensing/perception layer, the path planning layer, and the flight control layer. The hybrid Convolutional Neural Network / Long Short-Term Memory (CNN-LSTM) based online dead reckoning navigation for the Smart UAV will significantly improve dead reckoning navigation accuracy, while the designed multiscale switching reinforcement-learning-based UAV path planning will enable complex UAV missions, e.g. swarming in harsh, GPS-denied environments.
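For anyone curious, here is a minimal sketch of what a CNN-LSTM dead-reckoning model of this general kind might look like. It is only an illustration under assumed details: the 6-channel IMU input, layer sizes, and the displacement output are my own placeholders, not anything from the abstract.

```python
# Hypothetical sketch of a CNN-LSTM dead-reckoning model: a 1-D CNN extracts
# features from a window of IMU samples and an LSTM integrates them over time
# to regress a position delta. Layer sizes and the 6-channel IMU input are
# illustrative assumptions, not details from the abstract.
import torch
import torch.nn as nn

class CnnLstmDeadReckoner(nn.Module):
    def __init__(self, imu_channels=6, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(imu_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # predicted (dx, dy, dz) per window

    def forward(self, imu_window):
        # imu_window: (batch, time, channels)
        x = self.cnn(imu_window.transpose(1, 2))   # -> (batch, 64, time/2)
        x, _ = self.lstm(x.transpose(1, 2))        # -> (batch, time/2, hidden)
        return self.head(x[:, -1])                 # last step -> displacement

# Example: a batch of 8 windows of 200 IMU samples (accel + gyro)
model = CnnLstmDeadReckoner()
delta = model(torch.randn(8, 200, 6))
print(delta.shape)  # torch.Size([8, 3])
```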
 
  • Like
  • Fire
  • Love
Reactions: 9 users

Deleted member 118

Guest

 
  • Like
  • Fire
Reactions: 7 users
Hi Rocket

Brainchip-style technology has come a long way over at DARPA and NASA. The following extract is basically telling sensor manufacturers that if you want to win this contract, you had best use SNN technology to make your sensors smart.

“PHASE III DUAL USE APPLICATIONS: Phase III efforts will demonstrate a fully packaged camera with a neuromorphic processing chip. Spiking neural network device technology is preferable. Efforts will leverage emerging event-based sensing algorithms to demonstrate.”

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 23 users

Deleted member 118

Guest


Precisely what I was thinking

 
  • Like
  • Fire
  • Haha
Reactions: 5 users

Deleted member 118

Guest
  • Haha
  • Like
  • Fire
Reactions: 8 users

Deleted member 118

Guest
Last edited by a moderator:
  • Like
Reactions: 2 users
This is the last one today, Rocket, otherwise it's double time and one half, as I have not had a full eight hours between shifts.

Having said this, the following is pretty amazing stuff really when you consider they mention Loihi and TrueNorth in the full knowledge that they cannot meet the specified requirements under COTS and SWaP. Somewhere I read something about Brainchip and open-source hardware recently:

DESCRIPTION: Conventional computing architectures are running up against a quantum limit in terms of transistor size and efficiency, sometimes referred to as the end of Moore’s Law. To regain our competitive edge, we need to find a way around this limit. This is especially relevant for small size, weight, and power (SWaP)-constrained platforms. For these systems, scaling Von Neumann computing becomes prohibitively expensive in terms of power and/or SWaP. Biologically inspired neural networks provide the basis for modern signal processing and classification algorithms.

Implementation of these algorithms on conventional computing hardware requires significant compromises in efficiency and latency due to fundamental design differences. A new class of hardware is emerging that more closely resembles the biological neuron model, also known as a spiking neuron model, which mathematically describes the systems found in nature and may resolve some of these limitations and bottlenecks. Recent work has demonstrated performance gains using these new hardware architectures and has shown that they converge on solutions of equivalent accuracy [Ref 1]. The most promising of the new class are based on Spiking Neural Networks (SNN) and analog Processing in Memory (PiM), where information is spatially and temporally encoded onto the network. It can be shown that a simple spiking network can reproduce the complex behavior found in the neural cortex with a significant reduction in complexity and power requirements [Ref 2]…
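As a rough illustration of the "spiking neuron model" referred to above, here is a minimal leaky integrate-and-fire simulation. The threshold, time constant, and input values are arbitrary illustrative choices, not parameters from any referenced hardware.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, integrates incoming current, and emits a spike (then resets)
# when it crosses a threshold. Parameters are arbitrary illustrative values.
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, r_m=1.0):
    v = v_rest
    spikes = []
    for i_t in input_current:
        dv = (-(v - v_rest) + r_m * i_t) * (dt / tau)   # leak + integration
        v += dv
        if v >= v_thresh:                                # threshold crossing
            spikes.append(1)
            v = v_reset                                  # reset after spike
        else:
            spikes.append(0)
    return np.array(spikes)

# 500 ms of constant input produces a regular spike train
current = np.full(500, 1.5)
print(lif_simulate(current).sum(), "spikes in 500 ms")
```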


“It is recommended to use open source languages, software, and hardware when possible.”…


PHASE III:
Refine algorithms and test with hardware. Validate models with data provided by Naval Air Warfare Center (NAWC) Aircraft Division (AD)/Weapons Division (WD). Transition the model to the warfare centers. Development of documentation, training manuals, and software maintenance may be required. Heavy commercial investment in machine learning and artificial intelligence will likely continue for the foreseeable future. Adoption of hardware that can deliver orders-of-magnitude gains in SWaP performance for intelligent mobile machine applications is estimated to be worth 10^9-10^12 dollars globally per year. Providing the software tools needed to optimize the algorithms and hardware integration would be a significant contribution to this requirement. Industries that would benefit from successful technology development include automotive (self-driving vehicles), personal robots, and a variety of intelligent sensors.

KEYWORDS: Spiking Neural Network, Neuromorphic Computing, Modeling, Convolutional Neural Network, Analog Memory, Processing in Memory

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 13 users

Deleted member 118

Guest

 
  • Like
Reactions: 3 users

Deleted member 118

Guest
Is this Akida?

A recently developed radiation-hardened processor provides 5.6 GOPS (giga operations per second) performance with a power dissipation of 17 W.





It looks like NASA started this back in 2020.



State of the Art and Critical Gaps:

Most NASA missions utilize processors with in-space-qualifiable high-performance computing that has high power dissipation (approximately 18 W), and the current state-of-practice Technology Readiness Level 9 (TRL-9) space computing solutions have relatively low performance (between 2 and 200 DMIPS (Dhrystone million instructions per second) at 100 MHz). A recently developed radiation-hardened processor provides 5.6 GOPS (giga operations per second) performance with a power dissipation of 17 W. Neither of these systems provides the performance, the power-to-performance ratio, or the flexibility in configuration, performance, power management, fault tolerance, or extensibility with respect to heterogeneous processor elements. Onboard network standards exist that can provide >10 Gbps bandwidth, but not everything is available to fully implement them.
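Putting the quoted figures on a common footing, here is a quick back-of-the-envelope performance-per-watt calculation using only the numbers given above (note that DMIPS and GOPS measure different things, so this is indicative only):

```python
# Back-of-the-envelope performance per watt, using only the figures quoted above.
rad_hard_gops = 5.6        # GOPS, recently developed radiation-hardened processor
rad_hard_watts = 17.0      # W
print(f"{rad_hard_gops / rad_hard_watts:.2f} GOPS/W")   # ~0.33 GOPS/W

# TRL-9 state of practice: 2-200 DMIPS at 100 MHz, ~18 W. DMIPS is not directly
# comparable to GOPS, but it shows how low the absolute numbers are.
trl9_dmips_high = 200
trl9_watts = 18.0
print(f"{trl9_dmips_high / trl9_watts:.1f} DMIPS/W")     # ~11 DMIPS/W
```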
 
Last edited by a moderator:
  • Like
  • Fire
Reactions: 3 users

Deleted member 118

Guest
  • Like
  • Fire
Reactions: 4 users

Deleted member 118

Guest
Or maybe not

 
  • Fire
  • Like
Reactions: 2 users

Deleted member 118

Guest
  • Like
  • Love
Reactions: 8 users


@Fact Finder I was researching more and decided to delete the 1st post
Thanks for replying, but I had worked out I was not hallucinating, well, after I checked that I had taken my meds this morning. LOL
This could just be a freakish coincidence, but if it walks like a duck etc., then to quote (I think it was) @uiux, it has to be a duck. Another great find and reveal. I encourage all and sundry to read the link you have put up.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Haha
Reactions: 11 users


@Fact Finder I was researching more and decided to delete the 1st post
The dates, the disclosure by SiFive that AKIDA is hardened: it all aligns so completely that sometimes you just have to go with what appears to be obvious and not overthink the situation. SiFive is RISC-V, Vorago is hardening RISC-V, and SiFive is partnering up with Brainchip. Brainchip is partnered with Vorago and NASA. SiFive is partnered with NASA. What's that? OK, Blind Freddie is convinced.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 18 users

Deleted member 118

Guest
Last edited by a moderator:
  • Like
  • Fire
Reactions: 7 users

Deleted member 118

Guest
So is this company helping NASA implement Akida? I noticed they were awarded some of these contracts a few days before we heard about NASA, and they concluded on the same day. I did see something regarding Loihi, so I'm not too sure.








The objective of this SBIR proposal is to develop a design space exploration tool to rapidly evaluate different neuromorphic processing options for a given application. The neuromorphic options to be examined include deep learning accelerators, spiking neuron accelerators, memristive systems, and photonic accelerators. The tool will work with both individual algorithms and with chains of algorithms, where it will help with mapping individual algorithms within a chain to multiple neuromorphic processing options in a heterogeneous compute system. The tool will be able to take inputs from traditional neural frameworks and applications.
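As a hedged sketch of what such a design-space-exploration loop could look like, the toy example below maps each algorithm in a chain onto candidate accelerators and keeps the lowest-latency mapping that fits a power budget. The accelerator names, cost numbers, and scoring are made-up placeholders, not data from the award.

```python
# Toy design-space-exploration loop: for each algorithm in a processing chain,
# try every candidate neuromorphic accelerator, estimate latency and power from
# a lookup table, and keep the cheapest feasible mapping. All numbers and
# accelerator names are hypothetical placeholders.
from itertools import product

# (algorithm, accelerator) -> (latency_ms, power_mw); placeholder estimates
COST_MODEL = {
    ("detect", "dl_accel"):   (4.0, 300.0),
    ("detect", "snn_accel"):  (6.0, 120.0),
    ("track",  "dl_accel"):   (2.0, 250.0),
    ("track",  "snn_accel"):  (3.0,  90.0),
}

def explore(chain, accelerators, power_budget_mw):
    best = None
    for mapping in product(accelerators, repeat=len(chain)):
        latency = sum(COST_MODEL[(alg, acc)][0] for alg, acc in zip(chain, mapping))
        power   = sum(COST_MODEL[(alg, acc)][1] for alg, acc in zip(chain, mapping))
        if power <= power_budget_mw and (best is None or latency < best[1]):
            best = (mapping, latency, power)
    return best

print(explore(["detect", "track"], ["dl_accel", "snn_accel"], power_budget_mw=300))
# -> (('snn_accel', 'snn_accel'), 9.0, 210.0)
```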





Artificial intelligence (AI) algorithms have many applications in satellites and are generally quite compute intensive. The objective of this work is to develop highly Size, Weight, and Power (SWaP) efficient neuromorphic processors that can run AI algorithms. We will develop resistive crossbar neuromorphic processors, with the primary target being deep learning algorithms. We plan to process various types of signals very efficiently; these include sensor and cognitive communication applications. The key outcomes of the work will be the processor design, processor performance metrics on various applications, and software for the processor.







The provision of high performing and efficient data communications is crucial for the success of most space exploration missions. Many challenges affect this goal as data transmissions have to occur over unreliable channels that span very large distances and that may not be available continuously over time. As a result, communication opportunities must be exploited optimally to achieve the reliable, high volume and low latency data transfers that will be demanded by future space missions. This goal is hindered by the use of a communication management approach that is mainly centralized. Such practice creates limitations to what can be optimized, not only because of the need for expert human assistance but also because certain system updates could not be communicated to the required network devices within a reasonable time to be effective, given the physical dimensions and nature of the network. We propose to develop a software-defined networking method that exploits cognitive networking methods to optimize the transmission of data flows in a space network. We propose to utilize the Intel Loihi spiking neural network processor and develop learning algorithms for it to achieve very low SWaP processing. The key benefit of this approach will be novel scheduling capabilities that are also implemented on an ultra-low-SWaP system, making it very suitable for power constrained systems, such as cubesats. This work is being carried out jointly with the University of Houston.






Artificial intelligence (AI) algorithms have many applications in satellites and are generally quite compute intensive. The objective of this work is to develop highly Size, Weight, and Power (SWaP) efficient neuromorphic processors that can run AI algorithms. We will develop resistive crossbar neuromorphic processors, with the primary target being deep learning algorithms. We plan to process various types of signals very efficiently; these include sensor and cognitive communication applications. The key outcomes of the work will be the processor design, processor performance metrics on various applications, and software for the processor.
 

Last edited by a moderator:
  • Like
  • Fire
Reactions: 10 users

Deleted member 118

Guest
Might be a bit jumbled above as I was trying to copy and paste everything from my phone.
 
  • Like
  • Fire
Reactions: 5 users

Dhm

Regular
From @Rocket577's post above, which mentions Intel's Loihi (still in the design phase??). Why would they go in that direction?

The provision of high performing and efficient data communications is crucial for the success of most space exploration missions. Many challenges affect this goal as data transmissions have to occur over unreliable channels that span very large distances and that may not be available continuously over time. As a result, communication opportunities must be exploited optimally to achieve the reliable, high volume and low latency data transfers that will be demanded by future space missions. This goal is hindered by the use of a communication management approach that is mainly centralized. Such practice creates limitations to what can be optimized, not only because of the need for expert human assistance but also because certain system updates could not be communicated to the required network devices within a reasonable time to be effective, given the physical dimensions and nature of the network. We propose to develop a software-defined networking method that exploits cognitive networking methods to optimize the transmission of data flows in a space network. We propose to utilize the Intel Loihi spiking neural network processor and develop learning algorithms for it to achieve very low SWaP processing. The key benefit of this approach will be novel scheduling capabilities that are also implemented on an ultra-low-SWaP system, making it very suitable for power constrained systems, such as cubesats. This work is being carried out jointly with the University of Houston.
 
Last edited:
  • Like
  • Thinking
Reactions: 3 users
From @Rocket577's post above is mention of Intel's Loihi, which is still in the design phase?? Why would they go in that direction?

NASA has been working with Intel for at least ten years. They are using Loihi to build algorithms that meet SWaP. These algorithms, built to meet SWaP, will need to be compatible with other neuromorphic hardware.

The US Defence SBIR that @uiux has discovered involving Intellisense and Brainchip is doing something similar.

I see no issue; there are literally hundreds if not thousands of NASA and DARPA projects running at any one time, and Brainchip only 'has so many hands', as my mother used to say.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 11 users