8 nodes is obviously a sweet spot for Akida.
Both Akida 1500 and Akida 2S offer 8 nodes.
SiFive suggests that Akida 2S is not for on-sensor applications, but rather for integrating into a CPU, so Akida 2S can do a lot of the heavy lifting:
"Through our collaboration with BrainChip, we are enabling the combination of SiFive's RISC-V processor IP portfolio and BrainChip's 2nd generation Akida neuromorphic IP to provide a power-efficient, high capability solution for AI processing on the Edge," said Phil Dworsky, Global Head of Strategic Alliances at SiFive. "Deeply embedded applications can benefit from the combination of compact SiFive Essential™ processors with BrainChip's Akida-E, efficient processors; more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip's Akida-S or Akida-P neural processors."
Phil Dworsky, Global Head of Strategic Alliances, SiFive
SiFive has several "Essential" processor cores. SiFive E20 looks like a good match for Akida 2E:
https://www.riscfive.com/2022/12/07/sifive-essential-family/
SiFive E20
The SiFive E20 Standard Core is an extremely efficient implementation of the E2 Series configured for very low area and power. The E20 brings the power of the RISC-V software ecosystem to efficiently address traditional 8-bit and 32-bit microcontroller applications such as IoT, Analog Mixed Signal, and Programmable Finite State Machines.
The SiFive X280 Intelligence is more powerful and AI/ML capable:
https://www.sifive.com/cores/intelligence-x280
The SiFive Intelligence™ X280 is a multi-core capable RISC-V processor with vector extensions and SiFive Intelligence Extensions and is optimized for AI/ML compute at the edge.
In addition to ML inferencing, it is ideal for applications requiring high-throughput, single-thread performance while under power constraints (e.g., AR, VR, sensor hubs, IVI systems, IP cameras, digital cameras, gaming devices).
...
SiFive X280 Intelligence data sheet:
https://sifive.cdn.prismic.io/sifive/70445cba-0549-475e-a538-5c09a402efbc_x280-datasheet-22G1.pdf
Here's a coincidence:
The smallest data element the X280 Intelligence can accept is 8 bits ...
If only Akida had an 8 bit version it could be tightly integrated with SiFive's X280 Intelligence processor.
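For what it's worth, the "8 bit" question is about data element width rather than a separate chip variant. A minimal NumPy sketch of symmetric int8 quantization, the kind of conversion that would let floating-point activations feed an 8-bit datapath; the values are purely illustrative:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of float values to int8.

    Returns the int8 tensor and the scale needed to dequantize.
    """
    peak = np.max(np.abs(x))
    scale = peak / 127.0 if peak > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the int8 tensor."""
    return q.astype(np.float32) * scale

# Illustrative activations: quantize, then check the round-trip error.
x = np.array([-1.0, -0.5, 0.0, 0.25, 1.0], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
```

The round-trip error is bounded by half the scale step, which is why 8-bit inference usually costs little accuracy relative to the memory and bandwidth it saves.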
@Diogenese
Thoughts on this latest NASA SBIR paper, and on the section relevant to BRN highlighted towards the end of the post?
In your thinking, do our recent discussions around 22nm FDSOI, NASA's use of the X280, and the recent Akida platform upgrade have any applicability to NASA's comments about critical gaps possibly being filled?
TIA
Couldn't attach the full paper as the extension isn't compatible, and I'm just on moby at the mo.
National Aeronautics and Space Administration
Small Business
Innovation Research (SBIR)
Phase I
Fiscal Year 2023 Solicitation
Scope Title: Neuromorphic Software for Cognition and Learning for Space Missions
Scope Description:
This scope seeks integrated neuromorphic software systems that together achieve a space mission capability. Such capabilities include but are not limited to:
• Cognitive communications for constellations of spacecraft.
• Spacecraft health and maintenance from anomaly detection through diagnosis; prognosis; and fault detection, isolation, and recovery (FDIR).
• Visual odometry, path planning, and navigation for autonomous rovers.
• Science data processing, from sensor denoising through sensor fusion and super resolution to the generation of science information products such as planetary digital elevation maps.
In this scope, it is expected that a provider will pipeline together a number of neural nets from different sources to achieve a space capability. The first challenge is to achieve the pipelining in a manner that achieves high overall throughput and is energy efficient. The second challenge is to put together a demonstration breadboard integrated hardware/software system that achieves the throughput, incorporating neuromorphic or neural net accelerators, perhaps in combination with conventional processors such as CPUs, GPUs, and FPGAs. A system on a chip (SoC) could be another demonstration hardware platform. In either case, the neural cores should do the heavy computational lifting, and the CPUs, GPUs, and FPGAs should play a supportive role. The total power requirements shall be commensurate with the space domain, for example, 10 W maximum for systems expected to operate on CubeSats 24/7 and even less wattage for lunar systems that need to operate on battery power over the 2-week-long lunar night.
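The pipelining-plus-power-budget requirement above reduces to back-of-envelope arithmetic that is worth seeing concretely. A minimal Python sketch: the stage names, latencies, and power figures are invented for illustration; only the 10 W CubeSat ceiling comes from the solicitation.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    device: str        # "npu", "cpu", "fpga" -- where the stage runs
    latency_ms: float  # per-frame processing time
    power_w: float     # average draw while active

# Hypothetical rover-vision pipeline; all numbers are illustrative only.
pipeline = [
    Stage("denoise",       "npu",  4.0, 0.8),
    Stage("sensor_fusion", "npu",  6.0, 1.2),
    Stage("terrain_class", "npu",  9.0, 1.5),
    Stage("path_planner",  "cpu", 12.0, 2.5),
]

# With stages overlapped, steady-state frame rate is set by the slowest stage.
bottleneck = max(pipeline, key=lambda s: s.latency_ms)
fps = 1000.0 / bottleneck.latency_ms
total_power = sum(s.power_w for s in pipeline)
npu_share = sum(s.power_w for s in pipeline if s.device == "npu") / total_power

print(f"throughput: {fps:.1f} fps (bottleneck: {bottleneck.name})")
print(f"total power: {total_power:.1f} W "
      f"({'OK' if total_power <= 10.0 else 'OVER'} vs 10 W budget)")
print(f"neural-core share of power: {npu_share:.0%}")
```

Note how the structure mirrors the solicitation's intent: the neural cores carry most of the load, and the single CPU stage is both the latency bottleneck and the largest power consumer, which is exactly the imbalance the scope asks providers to minimise.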
The third optional challenge is to evolve the neural net individual applications and pipeline through adaptive learning over the course of a simulated mission.
Radiation tolerance and space environment robustness are not addressed directly through this scope. Rather, a provider is expected to use terrestrial grade processors and only after Phase II target radiation tolerant neuromorphic processors potentially developed under Scopes 1 or 2 or from another source. The goal is to achieve space mission capabilities that require system integration of individual neural nets together with minimal overhead conventional software. The continuous mission-long learning complements the capability of Earth operations to adapt software over the course of a mission.
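The power ceiling and the 2-week lunar night mentioned in the scope translate directly into a battery-sizing calculation. A quick worked example; the 10 W ceiling and 14-day night come from the solicitation, while the duty-cycled scenario is an invented assumption:

```python
# Energy needed to survive the two-week lunar night on battery alone.
NIGHT_HOURS = 14 * 24  # 336 hours

def night_energy_wh(avg_power_w: float, duty_cycle: float = 1.0) -> float:
    """Watt-hours consumed over the lunar night at a given duty cycle."""
    return avg_power_w * duty_cycle * NIGHT_HOURS

continuous_10w = night_energy_wh(10.0)       # running flat out at the ceiling
duty_cycled_2w = night_energy_wh(2.0, 0.25)  # assumed 2 W at 25% duty cycle

print(f"10 W continuous: {continuous_10w:.0f} Wh")
print(f"2 W at 25% duty: {duty_cycled_2w:.0f} Wh")
```

Running continuously at the 10 W ceiling demands 3,360 Wh, an implausibly large battery for a small lander, which is why the solicitation stresses "even less wattage" for lunar systems and why aggressive duty cycling (168 Wh in the assumed scenario) matters.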
As background, development of individual neural net software is now state of the practice, and a large number of neural net applications can be downloaded in standard formats such as pseudo-assembly level or programming languages such as TensorFlow™ (Google Inc), PyTorch™ (Linux Foundation), Nengo™ (Applied Brain Research), Lava™ (Intel Corporation), and others. Published neural nets for aerospace applications can be found, ranging from telescope fine-pointing control to adaptive flight control to medical support for astronaut health. In addition, there are many published neural nets for analogous terrestrial capabilities, such as autonomous driving. Transfer learning and other state-of-practice techniques enable adaptation of neural nets from terrestrial domains, such as image processing for the ImageNet challenge, to space domains such as Mars terrain classification for predicting rover traction.
Expected TRL or TRL Range at completion of the Project: 2 to 4
Primary Technology Taxonomy:
• Level 1: TX 10 Autonomous Systems
• Level 2: TX 10.2 Reasoning and Acting
Desired Deliverables of Phase I and Phase II:
• Analysis
• Prototype
• Hardware
• Software
Desired Deliverables Description:
The deliverables for Phase I should include at minimum the concept definition of a space capability that could be achieved through a dataflow pipeline/graph of neural nets and identification of at least a portion of the pipeline that can be achieved with existing neural nets that are either already suited for the space domain or provide an analogous capability from an Earth application. The pipeline should at a minimum be mocked up and characterized by parameterized throughput requirements for the individual neural nets, a description of the dataflow and control flow integration of the system of neural nets, an assignment and mapping from the individual software components to the hardware elements, and an energy/power/throughput estimate for the entire pipeline. Enhanced deliverables for Phase I would include a partial demonstration of the pipeline on some terrestrial hardware platform. A report that illustrates a conceptual pipeline of neural nets for autonomous rovers can be found in the reference authored by Eric Barszcz.
The deliverables for Phase II should include at minimum a demonstration hardware system, using terrestrial grade processors and sensors, that performs a significant portion of the overall pipeline needed for the chosen space capability, together with filling in at least some of the neural net applications that needed to be customized, adapted, or developed from scratch.
It is expected that the hardware system would include one or more terrestrial grade neuromorphic processors that do the primary processing, with support from CPUs, GPUs, and FPGAs. An alternative would be an SOC that incorporates a substantial number of neural cores. The demonstration shall include empirical measurement and validation of throughput and power. Enhanced deliverables for Phase II would be a simulation of continuous in situ mission-long adaptation and learning that exhibits significant evolution.
State of the Art and Critical Gaps:
Neuromorphic and deep neural net software for point applications has become widespread and is state of the art. Integrated solutions that achieve space-relevant mission capabilities with high throughput and energy efficiency are a critical gap.
For example, terrestrial neuromorphic processors such as Intel Corporation's Loihi™, BrainChip's Akida™, and Google Inc's Tensor Processing Unit (TPU™) require full host processors for integration of their software development kits (SDKs), which are power hungry or limit throughput. This by itself is inhibiting the use of neuromorphic processors for low SWaP space missions. The system integration principles for integrated combinations of neuromorphic software are a critical gap that requires R&D, as well as the efficient mapping of integrated software to integrated avionics hardware.
Challenges include translating the throughput and energy efficiency of neuromorphic processors from the component level to the system level, which means minimizing the utilization and processing done by supportive CPUs and GPUs.
Relevance / Science Traceability:
• 03-09a (Autonomous self-sensing)
• 04-15 (Collision avoidance maneuver design)
• 04-16 (Consolidated advanced sensors for relative navigation and autonomous robotics)
• 04-23 (Robotic actuators, sensors, and interfaces)
• 04-77 (Low SWaP, "end of arm" proximity range sensors)
• 04-89 (Autonomous Rover GNC for mating)
• 10-04 (Integrated system fault/anomaly detection, diagnosis, prognostics)
• 10-05 (On-board "thinking" autonomy)
• 10-06 (Creation, scheduling, and execution of activities by autonomous systems)
• 10-16 (Fail-operational robotic manipulation)