BRN Discussion Ongoing

yogi

Regular
Just a thought: looks like we won't see any wow effect from CES25 this time.
 
  • Like
  • Wow
  • Sad
Reactions: 9 users

IMG_3532.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Frangipani

Regular

View attachment 75504



I bet Nimble AI’s project coordinator Xabier Iturbe, Senior Research Engineer at IKERLAN (Basque Country, Spain), will be very pleased to hear about this new offering by BrainChip and will keep his fingers crossed that the same form factor option will be made available for the AKD1500 soon.

Today’s announcement of AKD1000 now being offered on the M.2 form factor reminded me of a post (whose author sadly made up his mind to leave the forum months ago) I had meant to reply to for ages…

@Frangipani and I have posted about Nimble AI before. I've noticed that their recent content no longer mentions BrainChip and the AKIDA 1500. It appears we've been overshadowed by IMEC, a multi-billion-dollar company and research partner on the Nimble project. IMEC is heavily involved in nearly every EU-sponsored neuromorphic project and has been developing their own SNN for several years. What is news is that, in Q1 2025, IMEC plans to do a foundry run of their SNN-based neuromorphic processor called SENeCA (Scalable Energy-efficient Neuromorphic Computer Architecture).

View attachment 66880

View attachment 66881

Some details on SENeCA are in the paper below (a few years old now).


Are they developing the hardware/processor themselves, though the IP may not be in-house? Hard to tell from the info online around SENeCA. Other aspects that make me wonder about the use of Akida as the IP include references to digital IP, a RISC-V-based architecture, and a design targeting GF 22nm.

I thought this was worth mentioning as IMEC could be a customer or a potential rival. If they're doing a foundry run in Q1 2025 and we're involved, I'd expect some kind of IP licence or arrangement prior. That would line up with Sean's comments around deals before the end of 2024.

8AD5935D-3B31-4A4E-B70B-2EE03EF554B4.jpeg


I reached out directly to the project director for Nimble AI and asked whether SENeCA has replaced the use of Akida 1500; reply below:

View attachment 66909
Reading between the lines, it seems they have been forced to sub out Akida for IMEC’s SENeCA (which does not include our IP) due to their partnership. This means there is another confirmed competitor for SNN processors, with a chip planned for tape-out in January 2025. We need to pick up the pace. What happened to the patent fortress?

90803B0E-9389-414C-B178-F9C1E7B09AC8.jpeg



Hi @AI_Inquirer,

what a shame you decided to leave TSE last August - miss your contributions!
Maybe you still happen to hang around, though, reading in stealth - that’s why I am addressing you anyway.

Thanks for reaching out to Xabier Iturbe, whose reply you seem to have misunderstood at the time: the way I see it, we haven't been overshadowed or replaced by imec's SENeCA chip, which was always going to be used alongside either us or the Hailo Edge AI accelerator.

Have a look at the slightly updated illustration and project description of the Nimble AI neuromorphic 3D vision prototype I had posted in May 2024:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-424893



C9C3D46A-E409-40D4-ABE4-2F1A53195DB3.jpeg


The Nimble AI researchers were always planning to produce two different neuromorphic 3D vision prototypes based on the Prophesee IMX636 sensor manufactured by Sony, and both of them were going to use imec’s neuromorphic SENeCA chip for early perception: One will additionally have the AKD1500 as a neuromorphic processor to perform 3D perception inference. This will be benchmarked against another prototype utilising a non-neuromorphic Edge AI processor by Hailo (on an M.2 form factor).

This latter prototype has apparently been progressing well (not sure, however, whether Prophesee's financial difficulties will now delay the 3-year EU-funded project, which started in November 2022), as can be seen on their website (https://www.nimbleai.eu/technology/)…

6CEBFC46-AC74-4B2B-A395-B12847EC146D.jpeg


…as well as in this October 7, 2024 video:





As for the second prototype slated to utilise our technology, the Nimble AI researchers are hoping that BrainChip will ideally be offering the AKD1500 on an M.2 form factor - just like Hailo does and just like BrainChip does now (as of today) for the AKD1000.
I believe that’s what Xabier Iturbe was trying to tell you:

25400C8A-3B32-4B60-AFEC-C5676360DC1E.jpeg


Regards,
Frangipani
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Frangipani

Regular
Not sure if posted here today at all, but did anyone see what Nimble AI are up to with our 1500 and Hailo-8, courtesy of @Rayz on the other site.

Full credit to Rayz, who is a great poster over there for finding info, like many others over here. If you still frequent over there, worth giving a like and a follow (y)



View attachment 74968

Perceiving a 3D world from a 3D silicon architecture

100x energy-efficiency improvement · 50x latency reduction · energy budget of ≈10s of mW

Expected outcomes
  • World's first light-field dynamic vision sensor and SDK for monocular-image-based depth perception.
  • Silicon-proven implementations for use in next-generation commercial neuromorphic chips.
  • EDA tools to advance 3D silicon integration and exceed the pace of Moore's Law.
  • World's first event-driven full perception stack that runs industry-standard convolutional neural networks.
  • Prototypic platform and programming tools to test new AI and computer vision algorithms.
  • Applications that showcase the competitive advantage of NimbleAI technology.

World's first Light-field Dynamic Vision Sensor Prototype

In NimbleAI, we are designing a 3D integrated sensing-processing neuromorphic chip that mimics the efficient way our eyes and brains capture and process visual information. NimbleAI also advances towards new vision modalities not present in humans, such as insect-inspired light-field vision, for instantaneous 3D perception.

Key features of our chip are: the top layer in the architecture senses light and delivers meaningful visual information to processing and inference engines in the interior layers to achieve efficient end-to-end perception. NimbleAI adopts the biological data economy principle systematically across the chip layers, starting in the light-electrical sensing interface.
  • Sense light and depth: ONLY changing light is sensed, inspired by the retina. Depth perception is inspired by the insect compound eye.
  • Ignore or recognise: our chip ONLY processes feature-rich and/or critical sensor regions.
  • Process efficiently: ONLY significant neuron state changes are propagated and processed by other neurons.
  • Adaptive visual pathways: sensing and processing are adjusted at runtime to operate jointly at the optimal temporal and data resolution.
  • 3D integrated silicon: sensing, memory, and processing components are physically fused in a 3D silicon volume to boost the communication bandwidth.

How it works

Sensing:
Sensor pixels generate visual events ONLY if/when significant light changes are detected. Pixels can be dynamically grouped and ungrouped to allocate different resolution levels across sensor regions. This mimics the foveation mechanism in eyes, which allows foveated regions to be seen in greater detail than peripheral regions.

The NimbleAI sensing layer enables depth perception in the sub-ms range by capturing directional information of incoming light by means of light-field micro-lenses by Raytrix. This is the world's first light-field DVS sensor, which estimates the origin of light rays by triangulating disparities from neighbour views formed by the micro-lenses. 3D visual scenes are thus encoded in the form of sparse visual event flows.
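For anyone curious how such a DVS pixel differs from a normal camera pixel, here is a minimal sketch of the retina-inspired rule (my own illustrative code, not from NimbleAI): a pixel emits a +1/-1 event only when its log-intensity changes by more than a contrast threshold, so a static scene produces no data at all.

```python
import numpy as np

def dvs_events(prev_log_i, new_log_i, threshold=0.2):
    """Emit +1/-1 events only for pixels whose log-intensity change
    exceeds the contrast threshold; unchanged pixels stay silent."""
    diff = new_log_i - prev_log_i
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff >= threshold] = 1    # brightness increased
    events[diff <= -threshold] = -1  # brightness decreased
    return events

# A static 4x4 scene in which a single pixel gets brighter:
frame_a = np.log(np.full((4, 4), 100.0))
frame_b = frame_a.copy()
frame_b[1, 2] += 0.5
print(np.count_nonzero(dvs_events(frame_a, frame_b)))  # -> 1
```

Only the one changed pixel fires; the other fifteen contribute nothing, which is where the "data economy" savings come from.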
Early Perception:
Our always-on early perception engine continuously analyzes the sensed visual events in a spatio-temporal mode to extract the optical flow and identify and select ONLY salient regions of interest (ROIs) for further processing in high resolution (foveated regions). This engine is powered by Spiking Neural Networks (SNNs), which process incoming visual events and adjust foveation settings in the DVS sensor with ultra-low latency and minimal energy consumption.
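A toy stand-in for that ROI-selection step (illustrative only - the real engine is an SNN; this simply ranks sensor tiles by event density):

```python
import numpy as np

def select_rois(event_frame, tile=4, top_k=2):
    """Split an event frame into tile x tile blocks and return the
    (row, col) indices of the top_k blocks with the most events."""
    h, w = event_frame.shape
    counts = (np.abs(event_frame)
              .reshape(h // tile, tile, w // tile, tile)
              .sum(axis=(1, 3)))
    order = np.argsort(counts.ravel())[::-1][:top_k]
    return [divmod(int(i), counts.shape[1]) for i in order]

events = np.zeros((8, 8), dtype=np.int8)
events[0:4, 4:8] = 1                          # all activity in one corner
print(select_rois(events, tile=4, top_k=1))   # -> [(0, 1)]
```

Everything outside the selected tiles is simply never forwarded downstream, which is the point of foveation.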
Processing:
Format and properties of visual event flows from salient regions are adapted in the processing engine to match data structures of user AI models (e.g., Convolutional Neural Networks - CNNs) and to best exploit optimization mechanisms implemented in the inference engine (e.g., sparsity). Processing kernels are tailored to each salient region's properties, including size, shape and movement patterns of objects in those regions. The processing engine uses in-memory computing blocks by CEA and a Menta eFPGA fabric, both tightly coupled to a Codasip RISC-V CPU.
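One common way to do that format adaptation - a generic technique, not necessarily what NimbleAI's engine does internally - is to accumulate the sparse (t, x, y, polarity) event stream into a dense, time-binned tensor that a standard CNN can ingest:

```python
import numpy as np

def events_to_voxel_grid(events, bins, h, w):
    """Accumulate (t, x, y, polarity) events into a dense
    (bins, h, w) tensor, binning events by timestamp."""
    grid = np.zeros((bins, h, w), dtype=np.float32)
    if not events:
        return grid
    t0, t1 = events[0][0], events[-1][0]
    span = max(t1 - t0, 1e-9)
    for t, x, y, p in events:
        b = min(int((t - t0) / span * bins), bins - 1)
        grid[b, y, x] += p
    return grid

stream = [(0.0, 1, 1, +1), (0.9, 2, 2, -1)]
grid = events_to_voxel_grid(stream, bins=2, h=4, w=4)
print(grid[0, 1, 1], grid[1, 2, 2])  # -> 1.0 -1.0
```

Sparsity is preserved in the sense that most cells stay zero, which sparsity-aware inference engines can then skip.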
Inference with user AI models:
We are exploring the use of event-driven dataflow architectures that exploit sparsity properties of incoming visual data. For practical use in real-world applications, size-limited CNNs can be run on-chip using the NimbleAI processing engine above, while industry standard AI models can be run in mainstream commercial architectures, including GPUs and NPUs.

Prototype building blocks: light-field DVS using Prophesee IMX 636 · foveated DVS testchip · prototyping MPSoC XCZU15EG · HAILO-8 / Akida 1500 (ROI inference) · SNN testchip (ROI selection) · digital foveation settings

Harness the biological advantage in your vision pipelines
NimbleAI will deliver a functional prototype of the 3D integrated sensing-processing neuromorphic chip along with the corresponding programming tools and OS drivers (i.e., Linux/ROS) to enable users to run their AI models on it. The prototype will be flexible to accommodate user RTL IP in a Xilinx MPSoC and combines commercial neuromorphic and AI chips (e.g., HAILO, BrainChip, Prophesee) and NimbleAI 2D testchips (e.g., foveated DVS sensor and SNN engine).
Raytrix is advancing its light-field SDK to support event-based inputs, making it easy for researchers and early adopters to seamlessly integrate NimbleAI's groundbreaking vision modality – 3D perception DVS – and evolve this technology with their projects, prior to deployment on the NimbleAI functional prototype. The NimbleAI light-field SDK by Raytrix will be compatible with Prophesee's Metavision DVS SDK.
[Diagram: user RTL IP, user CNN models and SNN models mapped onto the sensing, early perception, processing and inference stages, delivered as PCIe M.2 modules]

Reach out to test combined use of your vision pipelines and NimbleAI technology.

Use cases
  • Hand-held medical imaging device by ULMA
  • Smart monitors with 3D perception for highly automated and autonomous cars by AVL
  • Human attention for worm-inspired neural networks by TU Wien
  • Eye-tracking sensors for smart glasses by Viewpointsystem

Follow our journey! @NimbleAI_EU · nimbleai.eu
NimbleAI coordinator: Xabier Iturbe (xiturbe@ikerlan.es)



View attachment 74969

Wait a minute, @Fullmoonfever! 🤣
Don’t you remember? 👇🏻

Or was it..... :unsure:

I gotta protect my billable (wish) DD IP hours... I'll happily take any effective SP rise as payment though :ROFLMAO::LOL::ROFLMAO:

Thankfully through our collective DD efforts info is generally found on this site first most of the time.

It shouldn’t come as a surprise to you, then, that this also holds true for info on Nimble AI and our connection to them, which both @AI_Inquirer and I had posted about several times in the past…
We’ve actually known about the Nimble AI researchers’ intention to use AKD1500 for almost a year here on TSE! 🥳

Happy to receive some free shares in lieu of credit, though, in case you don’t have the heart to ask Rayz to return some of the “full credit” you so generously gave him… 🤣
 
  • Like
Reactions: 5 users

manny100

Regular
Thanks Frangipani, looks like some sales there already for M2.
Your reference to Prophesee and its financial woes in connection with Nimble certainly makes our LDA financing decision look very smart.
Can this be our product that really starts to move? It's cheap and allows others to do their own thing, and that is its beauty.
 
  • Like
  • Fire
  • Wow
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
We can 100% exclude the possibility that the 2020 NASA SBIR proposal which featured Akida has anything to do with NASA's Mars 2020 mission and the Perseverance Mars Rover, given that the rover embarked on its voyage to the Red Planet on July 30, 2020 (hence the mission name!) and landed on the Martian surface on February 18, 2021…

View attachment 75445


Apart from the fact that the timelines just don’t match - Perseverance left Planet Earth 4.5 years ago, the same year the SBIR proposal was published, while BrainChip celebrated Akida being first launched into space on March 4, 2024 (in ANT61’s Brain) - the 2020 SBIR proposal itself clearly indicates it is out of the question that it could have anything to do with the Perseverance Mars Rover’s autonomous navigation system: the research project relates to TRL (Technology Readiness Level) 1-2, which is considered very basic and speculative research. I’ll leave it up to you to figure out what TRL would be required for any mission-critical technology destined for Mars…

View attachment 75448




View attachment 75446
View attachment 75447


Well, something must be ongoing in relation to "Mars2020 Rover" because it is listed on NASA's 2024 updated inventory, which is the reason why I posted it.

The article includes a link to the full inventory which describes the Stage of System as "in production".

If you feel this information is incorrect, you can contact NASA's Chief Artificial Intelligence Officer, David Salvagnini, who put the list together.



Screenshot 2025-01-09 at 8.37.50 am.png


Screenshot 2025-01-09 at 8.45.26 am.png
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Xray1

Regular
I bet Nimble AI’s project coordinator Xabier Iturbe, Senior Research Engineer at IKERLAN (Basque Country, Spain), will be very pleased to hear about this new offering by BrainChip and will keep his fingers crossed that the same form factor option will be made available for the AKD1500 soon.

Today’s announcement of AKD1000 now being offered on the M.2 form factor reminded me of a post (whose author sadly made up his mind to leave the forum months ago) I had meant to reply to for ages…



View attachment 75505



View attachment 75514


Hi @AI_Inquirer,

what a shame you decided to leave TSE last August - miss your contributions!
Maybe you still happen to hang around, though, reading in stealth - that’s why I am addressing you anyway.

Thanks for reaching out to Xabier Iturbe, whose reply you seem to have misunderstood at the time: the way I see it, we haven't been overshadowed or replaced by imec's SENeCA chip, which was always going to be used alongside either us or the Hailo Edge AI accelerator.

Have a look at the slightly updated illustration and project description of the Nimble AI neuromorphic 3D vision prototype I had posted in May 2024:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-424893



View attachment 75508

The Nimble AI researchers were always planning to produce two different neuromorphic 3D vision prototypes based on the Prophesee IMX636 sensor manufactured by Sony, and both of them were going to use imec’s neuromorphic SENeCA chip for early perception: One will additionally have the AKD1500 as a neuromorphic processor to perform 3D perception inference. This will be benchmarked against another prototype utilising a non-neuromorphic Edge AI processor by Hailo (on an M.2 form factor).

This latter prototype has apparently been progressing well (not sure, however, whether Prophesee's financial difficulties will now delay the 3-year EU-funded project, which started in November 2022), as can be seen on their website (https://www.nimbleai.eu/technology/)…

View attachment 75515

…as well as in this October 7, 2024 video:





As for the second prototype slated to utilise our technology, the Nimble AI researchers are hoping that BrainChip will ideally be offering the AKD1500 on an M.2 form factor - just like Hailo does and just like BrainChip does now (as of today) for the AKD1000.
I believe that’s what Xabier Iturbe was trying to tell you:

View attachment 75513

Regards,
Frangipani

Frangipani ............ I totally agree with you that it was a real loss for TSE to lose a very technically minded and informative poster like AI_Inquirer, who was more than happy to share his ongoing research and discussions with various other organisations that BrainChip had some kind of connection / affiliation with, but he was eventually pushed out from posting here, due mainly imo to the poor form and lack of respect shown by other questionable posters.
 
  • Like
  • Love
Reactions: 12 users

TECH

Regular
Question... The 10-to-12-week time lag in shipping out the edge box tends to suggest, to me at least, that the time period
is linked to the wafer process. I strongly suspect that we have no AKD1000 SoCs... if you listen carefully to what Sean
stated in the recent podcast when talking about VVDN, he said we supply AKD1000 SoCs to them to fulfil any orders
they receive (in large volumes)... yes, it's a guess, but 10/12 weeks isn't good enough in my opinion... we are obviously not
holding any stock whatsoever, or VVDN have us way down the food chain as far as production of said boxes goes.

We all know the AI EDGE Box is just a vehicle to get people into discovering what the AKIDA suite of products can currently
offer, and it's not an earner as such, but promoting something and then in the same breath saying "wait for 10/12 weeks" doesn't
sound very practical to my business brain... purely my opinion, but neither company appears to be holding any stock??

Tech.
 
  • Like
  • Thinking
  • Fire
Reactions: 22 users
Question... The 10-to-12-week time lag in shipping out the edge box tends to suggest, to me at least, that the time period
is linked to the wafer process. I strongly suspect that we have no AKD1000 SoCs... if you listen carefully to what Sean
stated in the recent podcast when talking about VVDN, he said we supply AKD1000 SoCs to them to fulfil any orders
they receive (in large volumes)... yes, it's a guess, but 10/12 weeks isn't good enough in my opinion... we are obviously not
holding any stock whatsoever, or VVDN have us way down the food chain as far as production of said boxes goes.

We all know the AI EDGE Box is just a vehicle to get people into discovering what the AKIDA suite of products can currently
offer, and it's not an earner as such, but promoting something and then in the same breath saying "wait for 10/12 weeks" doesn't
sound very practical to my business brain... purely my opinion, but neither company appears to be holding any stock??

Tech.
Have a whisper to your little birdies, Tech and see if you can work out what's going on.. 😛

I remember an email of Tony Dawe's which said it was hoped that demand was such that VVDN would place an order for chips (hey, just my recollection)..

You'd think that we would've had to have gotten TSMC to produce another run by now?.. (we still have production slots allocated?)..

There's 2 chips per Edge Box, 5 in the Bascom Hunter thingo (although genuinely low volume) and now this M.2 form factor, that's supposed to be the size of a chewy? (Isn't the AKD1000 physically bigger than that to begin with? Maybe it's using the AKD1500, which is still AKIDA 1.0 IP, or they've already produced the AKD1000 in a smaller process size @Diogenese?)..
Edit: M.2 2260 is 22 mm wide; AKD1000 in 28 nm is ~15 mm².
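A quick back-of-the-envelope check on those numbers (rough figures only, assuming a roughly square die):

```python
import math

die_area_mm2 = 15.0            # AKD1000 in 28 nm, per the figure above
module_w, module_l = 22, 60    # M.2 2260 card dimensions in mm

die_side = math.sqrt(die_area_mm2)  # side length if the die were square
print(round(die_side, 1))           # -> 3.9
# So the bare die (~3.9 mm per side) is far smaller than the 22 x 60 mm
# card; the module footprint is set by the connector, package and
# supporting components, not by the silicon itself.
```

In other words, the die size is no obstacle to the gum-stick form factor either way.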
 
Last edited:
  • Like
  • Thinking
Reactions: 12 users

7für7

Top 20
Me, imagining what the stock price will look like after another day at CES without any major announcements, while telling myself, “It won’t be that bad.”


1736379763962.gif
 
  • Haha
  • Like
Reactions: 6 users
Have a whisper to your little birdies, Tech and see if you can work out what's going on.. 😛

I remember an email of Tony Dawe's which said it was hoped that demand was such that VVDN would place an order for chips (hey, just my recollection)..

You'd think that we would've had to have gotten TSMC to produce another run by now?.. (we still have production slots allocated?)..

There's 2 chips per Edge Box, 5 in the Bascom Hunter thingo (although genuinely low volume) and now this M.2 form factor, that's supposed to be the size of a chewy? (Isn't the AKD1000 physically bigger than that to begin with? Maybe it's using the AKD1500, which is still AKIDA 1.0 IP, or they've already produced the AKD1000 in a smaller process size @Diogenese?)..
"The AKD1000-powered boards can be plugged into the M.2 slot – around the size of a stick of gum, with a power budget of about 1 watt"

Unless the size reference is just that of "the slot" it goes in, but that's a weird thing to mention..
 
  • Like
Reactions: 2 users
  • Haha
  • Like
Reactions: 3 users

Boab

I wish I could paint like Vincent
"The AKD1000-powered boards can be plugged into the M.2 slot – around the size of a stick of gum, with a power budget of about 1 watt"

Unless the size reference is just that of "the slot" it goes in, but that's a weird thing to mention..
No mention of Pico......which is less than a watt.
 
  • Like
Reactions: 6 users
No mention of Pico......which is less than a watt.
Pico is much, much, much, much, much less than a watt and will be going straight into a product. 😉

Not "much" to "play with" there...
 
  • Like
Reactions: 4 users

7für7

Top 20
Last edited:
  • Thinking
Reactions: 2 users

Shadow59

Regular
  • Like
Reactions: 1 users

7für7

Top 20
  • Haha
Reactions: 1 users

IloveLamp

Top 20
1000021042.jpg
1000021045.gif
 
  • Haha
  • Fire
Reactions: 5 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 5 users