BrainChip on LinkedIn: #edgeai #lowpowerai #aiaccelerators
News: BrainChip brings Neuromorphic Capabilities to M.2 Form Factor. BrainChip's event-based neural processor IP delivers incremental learning and high-speed… (www.linkedin.com)
View attachment 75504
BrainChip Brings Neuromorphic Capabilities to M.2 Form Factor
Laguna Hills, Calif. – January 8th, 2025 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world's first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced the... (brainchip.com)
@Frangipani and I have posted about Nimble AI before. I've noticed that their recent content no longer mentions BrainChip and the AKIDA 1500. It appears we've been overshadowed by IMEC, a multi-billion-dollar company and research partner on the Nimble project. IMEC is heavily involved in nearly every EU-sponsored neuromorphic project and has been developing its own SNN for several years. What is news is that in Q1 2025, IMEC plans to do a foundry run of its SNN-based neuromorphic processor called SENeCA (Scalable Energy-efficient Neuromorphic Computer Architecture).
View attachment 66880
View attachment 66881
Some details on SENeCA are in the paper below (a few years old now).
(PDF) SENeCA: Scalable Energy-efficient Neuromorphic Computer Architecture
SENeCA is our first RISC-V-based digital neuromorphic processor to accelerate bio-inspired Spiking Neural Networks for extreme edge applications... (www.researchgate.net)
Are they developing the hardware/processor themselves, even though the IP may not be in-house? It's hard to tell from the information online about SENeCA. Other aspects that make me question whether Akida is the IP being used include the references to digital IP, a RISC-V-based architecture, and a design targeting GF 22nm.
I thought this was worth mentioning, as IMEC could be a customer or a potential rival. If they're doing a foundry run in Q1 2025 and we're involved, I would expect some kind of IP licence or arrangement beforehand. That would line up with Sean's comments about deals before the end of 2024.
I reached out directly to the project director for Nimble AI and asked whether SENeCA has replaced the use of Akida 1500; reply below:
View attachment 66909
Reading between the lines, it seems they have been forced to sub out Akida for IMEC's SENeCA (which does not include our IP) due to their partnership. This means there is another confirmed competitor in SNN processors, with a chip planned for tape-out in January 2025. We need to pick up the pace. What happened to the patent fortress?
Not sure if it's been posted here today, but did anyone see what Nimble AI are up to with our 1500 and the Hailo-8, courtesy of @Rayz on the other site?
Full credit to Rayz, who is a great poster over there for finding info, like many others over here. If you still frequent that site, it's worth giving him a like and a follow.
View attachment 74968
Perceiving a 3D world from a 3D silicon architecture
100x energy-efficiency improvement · 50x latency reduction · ≈10s of mW energy budget
Expected outcomes:
- World's first light-field dynamic vision sensor and SDK for monocular-image-based depth perception.
- Silicon-proven implementations for use in next-generation commercial neuromorphic chips.
- EDA tools to advance 3D silicon integration and exceed the pace of Moore's Law.
- World's first event-driven full perception stack that runs industry-standard convolutional neural networks.
- Prototyping platform and programming tools to test new AI and computer vision algorithms.
- Applications that showcase the competitive advantage of NimbleAI technology.
World's first Light-field Dynamic Vision Sensor Prototype

In NimbleAI, we are designing a 3D integrated sensing-processing neuromorphic chip that mimics the efficient way our eyes and brains capture and process visual information. NimbleAI also advances towards new vision modalities not present in humans, such as insect-inspired light-field vision, for instantaneous 3D perception.
Key features of our chip are:
- Sense light and depth: ONLY changing light is sensed, inspired by the retina. Depth perception is inspired by the insect compound eye.
- Ignore or recognise: Our chip ONLY processes feature-rich and/or critical sensor regions.
- Process efficiently: ONLY significant neuron state changes are propagated and processed by other neurons.
- Adaptive visual pathways: Sensing and processing are adjusted at runtime to operate jointly at the optimal temporal and data resolution.
- 3D integrated silicon: Sensing, memory, and processing components are physically fused in a 3D silicon volume to boost the communication bandwidth.

The top layer in the architecture senses light and delivers meaningful visual information to processing and inference engines in the interior layers to achieve efficient end-to-end perception. NimbleAI adopts the biological data economy principle systematically across the chip layers, starting in the light-electrical sensing interface.
How it works

Sensing:
Sensor pixels generate visual events ONLY if/when significant light changes are detected. Pixels can be dynamically grouped and ungrouped to allocate different resolution levels across sensor regions. This mimics the foveation mechanism in eyes, which allows foveated regions to be seen in greater detail than peripheral regions.
The NimbleAI sensing layer enables depth perception in the sub-ms range by capturing directional information of incoming light by means of light-field micro-lenses by Raytrix. This is the world's first light-field DVS sensor, which estimates the origin of light rays by triangulating disparities from neighbouring views formed by the micro-lenses. 3D visual scenes are thus encoded in the form of sparse visual event flows.
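If you're trying to picture what "pixels generate events only on change" and "foveation by pixel grouping" mean in practice, here's a rough toy sketch in plain NumPy. It's my own simplification for illustration only (the threshold, block size and function names are made up), not NimbleAI or Prophesee code:

```python
import numpy as np

def dvs_events(prev_log_I, curr_log_I, threshold=0.2):
    """Emit (y, x, polarity) events only where log-intensity changed enough."""
    diff = curr_log_I - prev_log_I
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return list(zip(ys.tolist(), xs.tolist(), polarity.tolist()))

def foveate(events, fovea_box, block=4):
    """Keep full resolution inside the fovea; elsewhere, snap events to
    block-sized super-pixels (a crude stand-in for pixel grouping)."""
    y0, y1, x0, x1 = fovea_box
    out = []
    for y, x, p in events:
        if y0 <= y < y1 and x0 <= x < x1:
            out.append((y, x, p))                                    # foveated region
        else:
            out.append((y // block * block, x // block * block, p))  # grouped region
    return out

# Toy usage: a local brightness change between two frames produces a burst of events.
rng = np.random.default_rng(0)
frame0 = rng.random((64, 64))
frame1 = frame0.copy()
frame1[30:34, 30:34] += 0.5
events = dvs_events(np.log1p(frame0), np.log1p(frame1))
print(len(events), "events;", foveate(events, (24, 40, 24, 40))[:3])
```

The point of the toy is simply that static parts of the scene produce no data at all, which is where the "biological data economy" claim comes from.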
Early Perception:
Our always-on early perception engine continuously analyzes the sensed visual events in a spatio-temporal mode to extract the optical flow and identify and select ONLY salient regions of interest (ROIs) for further processing in high resolution (foveated regions). This engine is powered by Spiking Neural Networks (SNNs), which process incoming visual events and adjust foveation settings in the DVS sensor with ultra-low latency and minimal energy consumption.
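The actual early-perception engine is an SNN on dedicated silicon, but the underlying idea, integrate events over time with leak and only react where activity crosses a threshold, can be sketched very crudely. The class and parameters below are hypothetical, just to show the principle of picking salient ROIs from event activity:

```python
import numpy as np

class LeakySaliencyMap:
    """Toy 'early perception': integrate events into a leaky 2D map and
    report the tiles whose accumulated activity crosses a firing threshold."""

    def __init__(self, shape=(64, 64), tile=16, decay=0.9, threshold=5.0):
        self.v = np.zeros(shape)          # membrane-like accumulator per pixel
        self.tile = tile
        self.decay = decay
        self.threshold = threshold

    def step(self, events):
        self.v *= self.decay              # leak between time steps
        for y, x, _polarity in events:
            self.v[y, x] += 1.0           # integrate each incoming event
        t = self.tile
        h, w = self.v.shape
        # Sum activity per tile and keep only tiles above threshold (the ROIs).
        tiles = self.v.reshape(h // t, t, w // t, t).sum(axis=(1, 3))
        return [
            (ty * t, tx * t, t, t)        # (y, x, height, width) of each salient tile
            for ty, tx in zip(*np.nonzero(tiles >= self.threshold))
        ]

# Usage with toy events: a busy spot fires an ROI, an isolated event does not.
sal = LeakySaliencyMap()
rois = sal.step([(31, 31, 1)] * 10 + [(5, 60, -1)])
print("salient ROIs:", rois)
```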
Processing:
The format and properties of visual event flows from salient regions are adapted in the processing engine to match the data structures of user AI models (e.g., Convolutional Neural Networks - CNNs) and to best exploit optimization mechanisms implemented in the inference engine (e.g., sparsity). Processing kernels are tailored to each salient region's properties, including the size, shape and movement patterns of objects in those regions. The processing engine uses in-memory computing blocks by CEA and a Menta eFPGA fabric, both tightly coupled to a Codasip RISC-V CPU.
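A common way to bridge sparse event flows and frame-based CNNs is to histogram the events of one ROI into a fixed-size tensor, one channel per polarity. The sketch below is a generic illustration of that adaptation step (function name and sizes are mine), not the actual CEA/Menta/Codasip processing engine:

```python
import numpy as np

def events_to_tensor(events, roi, out_size=32):
    """Convert events inside an ROI into a 2-channel (polarity) event-count
    frame, resampled to a fixed CNN input size."""
    y0, x0, h, w = roi
    frame = np.zeros((2, out_size, out_size), dtype=np.float32)
    for y, x, p in events:
        if y0 <= y < y0 + h and x0 <= x < x0 + w:
            # Map ROI coordinates onto the fixed-size grid.
            gy = int((y - y0) * out_size / h)
            gx = int((x - x0) * out_size / w)
            frame[0 if p > 0 else 1, gy, gx] += 1.0
    if frame.max() > 0:
        frame /= frame.max()              # simple normalisation before inference
    return frame

tensor = events_to_tensor([(31, 31, 1), (32, 33, -1)], roi=(24, 24, 16, 16))
print(tensor.shape, tensor.sum())         # (2, 32, 32), ready for an NCHW CNN input
```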
Inference with user AI models:
We are exploring the use of event-driven dataflow architectures that exploit the sparsity properties of incoming visual data. For practical use in real-world applications, size-limited CNNs can be run on-chip using the NimbleAI processing engine above, while industry-standard AI models can be run on mainstream commercial architectures, including GPUs and NPUs.
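To make the "exploit sparsity" point concrete: an event-driven dataflow only spends compute where inputs are non-zero. A naive software analogue of that trick (purely illustrative, not any vendor's implementation) looks like this:

```python
import numpy as np

def sparse_conv2d(active_sites, values, kernel, out_shape):
    """Scatter each non-zero input into the output through the kernel,
    so cost scales with the number of events, not the number of pixels."""
    kh, kw = kernel.shape
    out = np.zeros(out_shape, dtype=np.float32)
    for (y, x), v in zip(active_sites, values):
        for dy in range(kh):
            for dx in range(kw):
                oy, ox = y + dy - kh // 2, x + dx - kw // 2
                if 0 <= oy < out_shape[0] and 0 <= ox < out_shape[1]:
                    out[oy, ox] += v * kernel[dy, dx]
    return out

kernel = np.ones((3, 3), dtype=np.float32) / 9.0
out = sparse_conv2d([(10, 10), (10, 11)], [1.0, 1.0], kernel, (32, 32))
print(out[9:12, 9:13])   # only a handful of multiply-adds were performed
```

The work here scales with the number of events rather than the full frame size, which is the property event-driven architectures are built around.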
Prototype building blocks (from the project diagram):
- Light-field DVS using Prophesee IMX 636
- Foveated DVS testchip
- Prototyping MPSoC XCZU15EG
- HAILO-8 / Akida 1500 (ROI inference)
- SNN testchip (ROI selection)
- Digital foveation settings
Harness the biological advantage in your vision pipelines

NimbleAI will deliver a functional prototype of the 3D integrated sensing-processing neuromorphic chip along with the corresponding programming tools and OS drivers (i.e., Linux/ROS) to enable users to run their AI models on it. The prototype will be flexible enough to accommodate user RTL IP in a Xilinx MPSoC and combines commercial neuromorphic and AI chips (e.g., HAILO, BrainChip, Prophesee) and NimbleAI 2D testchips (e.g., foveated DVS sensor and SNN engine).
Raytrix is advancing its light-field SDK to support event-based inputs, making it easy for researchers and early adopters to seamlessly integrate NimbleAI's groundbreaking vision modality – 3D perception DVS – and evolve this technology with their projects, prior to deployment on the NimbleAI functional prototype. The NimbleAI light-field SDK by Raytrix will be compatible with Prophesee's Metavision DVS SDK.
[Diagram: Sensing → Early perception (SNN models) → Processing → Inference (user CNN models), with user RTL IP and NimbleAI RTL IP hosted on PCIe M.2 modules.]
Reach out to test the combined use of your vision pipelines and NimbleAI technology.
Use cases:
- Hand-held medical imaging device by ULMA
- Smart monitors with 3D perception for highly automated and autonomous cars by AVL
- Human attention for worm-inspired neural networks by TU Wien
- Eye-tracking sensors for smart glasses by Viewpointsystem

Follow our journey! @NimbleAI_EU · nimbleai.eu
NimbleAI coordinator: Xabier Iturbe (xiturbe@ikerlan.es)
View attachment 74969
Or was it.....
I gotta protect my billable (wish) DD IP hours... I'll happily take any effective SP rise as payment though
Thankfully through our collective DD efforts info is generally found on this site first most of the time.
We can 100% rule out the possibility that the 2020 NASA SBIR proposal that featured Akida has anything to do with NASA's Mars 2020 mission and the Perseverance Mars Rover, given that the rover embarked on its voyage to the Red Planet on July 30, 2020 (hence the mission name!) and landed on the Martian surface on February 18, 2021…
View attachment 75445
Apart from the fact that the timelines simply don't match - Perseverance left Planet Earth 4.5 years ago, the same year the SBIR proposal was published, while BrainChip celebrated Akida first being launched into space on March 4, 2024 (in ANT61's Brain) - the 2020 SBIR proposal itself makes it clear that it cannot have anything to do with the Perseverance Mars Rover's autonomous navigation system: the research project relates to TRL (Technology Readiness Level) 1-2, which covers very basic and speculative research. I'll leave it up to you to figure out what TRL would be required for any mission-critical technology destined for Mars…
View attachment 75448
Technology Readiness Levels - NASA
Technology Readiness Levels (TRL) are a type of measurement system used to assess the maturity level of a particular technology. Each technology project is… (www.nasa.gov)
View attachment 75446
View attachment 75447
I bet Nimble AI’s project coordinator Xabier Iturbe, Senior Research Engineer at IKERLAN (Basque Country, Spain), will be very pleased to hear about this new offering by BrainChip and will keep his fingers crossed that the same form factor option will be made available for the AKD1500 soon.
Today’s announcement of AKD1000 now being offered on the M.2 form factor reminded me of a post (whose author sadly made up his mind to leave the forum months ago) I had meant to reply to for ages…
View attachment 75505
View attachment 75514
Hi @AI_Inquirer,
what a shame you decided to leave TSE last August - miss your contributions!
Maybe you still happen to hang around, though, reading in stealth - that’s why I am addressing you anyway.
Thanks for reaching out to Xabier Iturbe, whose reply you seem to have misunderstood at the time: the way I see it, we haven't been overshadowed or replaced by imec's SENeCA chip, which was always going to be used alongside us or the Hailo Edge AI accelerator, respectively.
Have a look at the slightly updated illustration and project description of the Nimble AI neuromorphic 3D vision prototype I had posted in May 2024:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-424893
View attachment 75508
The Nimble AI researchers were always planning to produce two different neuromorphic 3D vision prototypes based on the Prophesee IMX636 sensor manufactured by Sony, and both of them were going to use imec’s neuromorphic SENeCA chip for early perception: One will additionally have the AKD1500 as a neuromorphic processor to perform 3D perception inference. This will be benchmarked against another prototype utilising a non-neuromorphic Edge AI processor by Hailo (on an M.2 form factor).
This latter prototype has apparently been progressing well (not sure, however, whether Prophesee's financial difficulties will now delay the three-year EU-funded project, which started in November 2022), as can be seen on their website (https://www.nimbleai.eu/technology/)…
View attachment 75515
…as well as in this October 7, 2024 video:
As for the second prototype slated to utilise our technology, the Nimble AI researchers are hoping that BrainChip will ideally be offering the AKD1500 on an M.2 form factor - just like Hailo does and just like BrainChip does now (as of today) for the AKD1000.
I believe that’s what Xabier Iturbe was trying to tell you:
View attachment 75513
Regards,
Frangipani
Have a whisper to your little birdies, Tech and see if you can work out what's going on..

Question... The 10-to-12-week time lag in shipping out the Edge Box suggests, to me at least, that the time period is linked to the wafer process. I strongly suspect we are holding no AKD1000 SoCs... If you listen carefully to what Sean said in the recent podcast when talking about VVDN, he said we supply AKD1000 SoCs to them to fulfil any orders they receive (in large volumes)... Yes, it's a guess, but 10-12 weeks isn't good enough in my opinion... We are obviously not holding any stock whatsoever, or VVDN have us way down the food chain as far as production of said boxes goes.
We all know the AI Edge Box is just a vehicle to get people to discover what the Akida suite of products can currently offer, and it's not an earner as such, but promoting something and then in the same breath saying "wait 10-12 weeks" doesn't sound very practical to my business brain... Purely my opinion; neither company appears to be holding any stock??
Tech.
"The AKD1000-powered boards can be plugged into the M.2 slot – around the size of a stick of gum, with a power budget of about 1 watt"Have a whisper to your little birdies, Tech and see if you can work out what's going on..
I remember an email from Tony Dawe which said it was hoped that demand would be such that VVDN would place an order for chips (hey, just my recollection)..
You'd think we would have had to get TSMC to produce another run by now?.. (we still have production slots allocated?)..
There are 2 chips per Edge Box, 5 in the Bascom Hunter thingo (although genuinely low volume), and now this M.2 form factor that's supposed to be the size of a chewy? (Isn't the AKD1000 physically bigger than that to begin with? Maybe it's using the AKD1500, which is still AKIDA 1.0 IP, or they've already produced the AKD1000 in a smaller process size @Diogenese?)..
As long as it's not like this, 7...

Me, imagining what the stock price will look like after another day at CES without any major announcements, while telling myself, "It won't be that bad."
View attachment 75529
No mention of Pico...... which is less than a watt.

"The AKD1000-powered boards can be plugged into the M.2 slot – around the size of a stick of gum, with a power budget of about 1 watt"

Unless the size reference is just that of "the slot" it goes in, but that's a weird thing to mention..
Pico is much, much, much, much, much less than a watt and will be going straight into a product.

No mention of Pico...... which is less than a watt.
It depends on how it will look on the 10th of January.

"It depends how it will look like on the 8th January"

Hi 7, today is the 9th.
Nice to see we are back in the target range. What!