BRN Discussion Ongoing

TopCat

Regular
Ranting on about Megachips, but see the connection below from the Megachips website between neuromorphic AI and robotics.


"The robotic projects undertaken by MegaChips (see the press release dated Sep. 17, 2025) aim to promote the use of scalable, AI-based robotic systems that are independent of specific hardware.
Building on the expertise cultivated through joint research – such as AI-based robotic motion planning and high-speed control, as well as initiatives to enhance the safety and operational efficiency of robotic systems – we seek to develop these efforts into a competitive business."

" Through this collaboration with the Nara Institute of Science and Technology (NAIST), we will build on the experience gained from the joint research conducted through FY2024, “High-speed robotic control using a Spiking Neural Network (SNN) applied to a high-rate 3D sensor,” and continue activities to deepen our technical expertise toward commercialization."
My bold above.
Megachips likely have early access to the 1500 M.2 card (Andes has access).

As discussed earlier, the robotics software used is Acumino, which is hardware agnostic, so it's likely that they will use both AKIDA and Quadric in order to get the highly human-like dexterity they aim for in robotics.
 

manny100

Top 20
Your article you posted doesn’t mention Neuromorphic, it actually says “AI-based robotic systems that are independent of specific hardware.”
"" Through this collaboration with the Nara Institute of Science and Technology (NAIST), we will build on the experience gained from the joint research conducted through FY2024, “High-speed robotic control using a Spiking Neural Network (SNN) applied to a high-rate 3D sensor,” and continue activities to deepen our technical expertise toward commercialization.""
It says Spiking Neural Network (SNN), which is the core of neuromorphic computing.
Quadric is neural only, which means on-chip, but traditional AI in nature.
There is room for both in Megachips/Acumino robots depending on the tasks allocated to each. Quadric would take the high-volume, compute-intensive calculations that AKIDA could struggle with.
That is why IMO they have chosen both as partners.
See Section:

Collaboration with the Nara Institute of Science and Technology (NAIST), then Purpose.​

 
  • Like
  • Fire
  • Love
Reactions: 19 users

Diogenese

Top 20
"" Through this collaboration with the Nara Institute of Science and Technology (NAIST), we will build on the experience gained from the joint research conducted through FY2024, “High-speed robotic control using a Spiking Neural Network (SNN) applied to a high-rate 3D sensor,” and continue activities to deepen our technical expertise toward commercialization.""
It says Spiking Neural Network (SNN), which is the core of neuromorphic computing.
Quadric is neural only, which means on-chip, but traditional AI in nature.
There is room for both in Megachips/Acumino robots depending on the tasks allocated to each. Quadric would take the high-volume, compute-intensive calculations that AKIDA could struggle with.
That is why IMO they have chosen both as partners.
See Section:

Collaboration with the Nara Institute of Science and Technology (NAIST), then Purpose.​

Hi Manny,

"High rate 3D sensor" sounds like an ideal application for see-in-the-dark radar.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

manny100

Top 20
Hi Manny,

"High rate 3D sensor" sounds like an ideal application for see-in-the-dark radar.
Yes, I think a lot of what AKIDA does will be used elsewhere. Steve Brightfield said that he has had 'early' discussions with robotics companies concerning the RTX/AKIDA radar project.
I bet Megachips cannot wait for the 1500 to arrive. It chews up less than 300 milliwatts (way less than AKIDA 1000) and goes some way to addressing the issues of heat and battery life, which are crucial for robotics.
It's almost as if Megachips made some very pertinent suggestions after their experience developing AKIDA 1000 under license.
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Guzzi62

Regular
Our IBM friend Kevin Johansen has maybe taken inspiration from the Swedish university white paper below, titled:

Comparison of Akida Neuromorphic Processor and NVIDIA Graphics Processor Unit for Spiking Neural Networks

CARL CHEMNITZ
MALIK ERMIS

Degree Project in Computer Science and Engineering, First Cycle, 15 credits
Date: June 9, 2025
Supervisor: Jörg Conradt
Examiner: Pawel Herman
Swedish title: Jämförelse av neuromorfisk processor Akida och NVIDIA grafikkort för Spiking Neural Networks
School of Electrical Engineering and Computer Science


Some snippets:

Key Observations

• Neuromorphic Akida demonstrates 99.520 % (MNIST) and 95.956-99.699 % (YOLO) energy reduction compared to GPU for the same or similar networks.

• For simpler networks, Akida processes 76.733 % faster (1.622 ms vs 7.014 ms), proving its suitability for latency-critical real-time processing tasks. However, for more complex models, the AKD1000 is outperformed by the GTX 1080, 73.769 ms to 160.872 ms.

• Akida’s adaptive clocking (35-179 MHz) reduces clock speeds by an average of 86.610 % versus GPUs’ relatively fixed 1746 MHz operation, reflecting its dynamic power engagement through sparse computing.

• The sparse input patterns, utilizing Akida’s neuromorphic architecture, achieve up to 58 times fewer clock cycles through spike-based, asynchronous processing, demonstrating significant improvements in computational efficiency and suitability for edge AI systems.

• For the MNIST model, the correlation between clock cycles and both energy and time was essentially zero (0.0153, with a p-value of 0.497). In contrast, the YOLOv2 model shows a small but definite correlation between clock cycles and both energy consumption (0.2119) and inference time (0.2022), with p-values below 0.05 (making the correlation plausible).

• Quantization has a major effect on all key metrics except for the clock speed on the GPU, reducing energy consumption, throughput, and latency by 85.9-99.9 %. This demonstrates the suitability and importance of quantization for deploying neural networks on resource-limited hardware. However, for more complex neural networks, this comes at the cost of reduced accuracy, highlighting the critical trade-off between computational efficiency and predictive performance.
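
For anyone who wants to sanity-check how figures like these are typically derived, here is a minimal sketch of my own (not code from the thesis): it applies the standard percentage-reduction formula to the latency averages quoted above and computes a Pearson correlation with its p-value on made-up placeholder data.

Python:
# Rough sketch, not from the thesis: how headline comparison figures of this
# kind are usually computed from per-inference measurements.
import numpy as np
from scipy.stats import pearsonr

# Latency comparison using the averages quoted in the snippet (milliseconds).
akida_ms, gpu_ms = 1.622, 7.014
improvement = (gpu_ms - akida_ms) / gpu_ms * 100
print(f"Akida latency improvement vs GPU: {improvement:.3f} %")  # ~76.9 %, close to the quoted 76.733 %

# Energy reduction uses the same formula: (E_gpu - E_akida) / E_gpu * 100.

# Correlation between clock cycles and energy, reported per model in the thesis.
# The arrays below are placeholder data, not the authors' measurements.
rng = np.random.default_rng(0)
clock_cycles = rng.integers(1_000, 5_000, size=200)
energy_mj = 0.001 * clock_cycles + rng.normal(0.0, 2.0, size=200)
r, p = pearsonr(clock_cycles, energy_mj)
print(f"Pearson r = {r:.4f}, p-value = {p:.3g}")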

Whole white paper:

 
  • Like
  • Fire
  • Love
Reactions: 16 users

Bravo

Meow Meow 🐾
I'm trying to ascertain whether the information in this Robocloud article (below) is correct. It says that this quadruped inspection robot called ANYmal D by ANYbotics is due for shipping in Q3 2026 utilizing Intel's Loihi 3.

The only credible information I can find from Intel re Loihi 3 is Mike Davies' LinkedIn post from 8 months ago, in which he said Loihi 3 was still under development.

Perhaps Intel is close to completion???...

Screenshot 2026-02-15 at 3.32.52 pm.png

Excerpt from Robocloud article describing ANYmal shipping in Q3 2026 with Intel's Loihi 3.
Screenshot 2026-02-15 at 3.24.16 pm.png

This June 2025 research paper discusses ANYmal running on Loihi 2.
Screenshot 2026-02-15 at 3.24.00 pm.png

ANYmal works without an internet connection, as shown in this video.
 
Last edited:
  • Like
  • Thinking
  • Wow
Reactions: 12 users

7für7

Top 20
I'm trying to ascertain whether the information in this Robocloud article (below) is correct. It says that this quadruped inspection robot called ANYmal D by ANYbotics is due for shipping in Q3 2026 utilizing Intel's Loihi 3.

The only credible information I can find from Intel re Loihi 3 is Mike Davies' LinkedIn post from 8 months ago, in which he said Loihi 3 was still under development.

Maybe by now they're close to completion???...

View attachment 95110

Excerpt from Robocloud article describing ANYmal shipping in Q3 2026 with Intel's Loihi 3.
View attachment 95104

This June 2025 research paper discusses ANYmal running on Loihi 2.
View attachment 95108

ANYmal works without an internet connection, as shown in this video.

They can go ahead and strap Loihi onto their dogs… BrainChip is already in position. Competition keeps the business alive… especially when the “competition” looks like it’s been studying a lot of BrainChip via Intel Foundry.

I’m genuinely curious whether any patent issues could pop up. I said it a while ago: Intel can be pretty sly. With their foundry, they lure small innovative companies under the banner of a “good cause”… then absorb a ton of information to push their own tech forward. Of course many startups bite …partnering with Intel looks like a golden ticket.

… this is going to be interesting.

DYOR no financial advice
 
  • Like
Reactions: 6 users

Diogenese

Top 20
Hi Bravo,

Interesting the reference is to Loihi 3 plus Prophesee, when BRN just put this up:

https://brainchip.com/prophesee-lp/

Arquimea has deployed Akida with a Prophesee camera on a drone to detect distressed swimmers and surfers in the ocean helping lifeguards scale their services for large beach areas, opting for an event-based computing solution for its superior efficiency and consistently high-quality results.

The interesting thing about using a DVS camera in a mobile application is whether there must be some means of adjusting for the variation in pixel output from received light intensity due to motion. DVS generates a spike when the light impinging on a pixel changes by more than a predetermined amount, so moving the camera will change the light impinging on a lot of pixels, except in areas of uniform light intensity. Depending on the threshold level, that may only involve fairly abrupt edges.
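
As a toy illustration of that thresholding rule (my own sketch, not BrainChip's or Prophesee's pipeline), the snippet below fires an event wherever the log-intensity change between two frames exceeds a contrast threshold; a one-pixel 'camera pan' produces events over the textured half of a scene but none over the uniform half.

Python:
# Toy DVS model: a pixel emits an event when its log intensity changes by more
# than a fixed contrast threshold between two frames. Illustrative only.
import numpy as np

def dvs_events(prev_frame, curr_frame, threshold=0.15):
    """Return +1 / -1 / 0 per pixel depending on log-intensity change vs threshold."""
    dlog = np.log1p(curr_frame.astype(float)) - np.log1p(prev_frame.astype(float))
    return np.sign(dlog) * (np.abs(dlog) > threshold)

# A scene with a uniform "sky" and a textured "ground", then the same scene
# shifted by one pixel to mimic camera motion.
rng = np.random.default_rng(1)
scene = np.vstack([np.full((32, 64), 200.0),               # uniform region
                   rng.uniform(20, 230, size=(32, 64))])   # textured region
panned = np.roll(scene, shift=1, axis=1)                   # 1-pixel camera pan

events = dvs_events(scene, panned)
print("events in uniform half :", int(np.count_nonzero(events[:32])))   # 0
print("events in textured half:", int(np.count_nonzero(events[32:])))   # most pixels fire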
 
  • Like
  • Wow
Reactions: 8 users

Guzzi62

Regular
I'm trying to ascertain whether the information in this Robocloud article (below) is correct. It says that this quadruped inspection robot called ANYmal D by ANYbotics is due for shipping in Q3 2026 utilizing Intel's Loihi 3.

The only credible information I can find from Intel re Loihi 3 is Mike Davies' LinkedIn post from 8 months ago, in which he said Loihi 3 was still under development.

Maybe by now Intel is close to completion???...

View attachment 95110

Excerpt from Robocloud article describing ANYmal shipping in Q3 2026 with Intel's Loihi 3.
View attachment 95104

This June 2025 research paper discusses ANYmal running on Loihi 2.
View attachment 95108

ANYmal works without an internet connection, as shown in this video.


It's an interesting question: how far along are they?

On Intel's web page you can find absolutely nothing about Loihi 3, only Loihi 2, and they call it a research chip!

There are two reasons I can think of why Loihi 3 isn't mentioned:

1: They keep a tight lid on it?

2: They are moving very slowly forward, maybe hit a snag, and are years away from launch?

As a BRN investor, I hope it's the second reason, LOL.


 
  • Like
  • Love
Reactions: 7 users

HopalongPetrovski

I'm Spartacus!
I'm trying to ascertain whether the information in this Robocloud article (below) is correct. It says that this quadruped inspection robot called ANYmal D by ANYbotics is due for shipping in Q3 2026 utilizing Intel's Loihi 3.

The only credible information I can find from Intel re Loihi 3 is Mike Davies' LinkedIn post from 8 months ago, in which he said Loihi 3 was still under development.

Perhaps Intel is close to completion???...

View attachment 95110

Excerpt from Robocloud article describing ANYmal shipping in Q3 2026 with Intel's Loihi 3.
View attachment 95104

This June 2025 research paper discusses ANYmal running on Loihi 2.
View attachment 95108

ANYmal works without an internet connection, as shown in this video.


Hi Bravo.

That poster Flectional who has been showing up on the crapper's BRN threads lately sure seems to think Loihi 3 is real.
He comes across as a bit of a tech head having seen him post on WBT and 4DS over the past few years.
He posted on 8/2/26 quoting something called "SapienFusion - Feb 4 - 2026" as a source.

See excerpt of that post below...............

SapienFusion - Feb 4 - 2026

Intel Loihi 3

Intel’s neuromorphic journey began in 2017 with Loihi 1, continued through Loihi 2’s 2021 release, and culminates in Loihi 3’s January 2026 commercial availability, a processor that represents the most significant architectural departure from conventional computing since GPUs themselves emerged. This isn’t an incremental improvement. This is brain-inspired computing that finally delivers on decades-old promises.
Graded Spikes: Bridging Two Worlds
'Loihi 3’s critical innovation introduces 32-bit graded spikes—a bridge between traditional deep neural networks operating on continuous values and spiking neural networks communicating through discrete events. Earlier neuromorphic generations used binary on/off signaling. A neuron either fired or didn’t. This forced algorithms designed for conventional architectures to undergo complete rewriting. Converting a PyTorch model to a binary spiking neural network required redesigning activation functions, adjusting learning algorithms, tuning temporal dynamics, and accepting accuracy degradation. The result created high barriers to adoption, and most developers stayed with GPUs. Graded spikes solve this problem by encoding information into spike amplitudes across a 32-bit range. Each spike carries nuanced information—not just fire or don’t fire, but fire with this specific intensity. This enables mainstream AI workloads to run on neuromorphic hardware with dramatically reduced power while requiring minimal algorithmic adaptation. Developers can convert existing models with automated tools currently in development, maintain accuracy within 1-2% of original performance, and achieve neuromorphic efficiency without complete redesign. This technical bridge makes commercial viability possible.'

Event-Driven Computation at Scale

'The power efficiency advantage comes from temporal sparsity—the principle that most neurons remain inactive most of the time, processing only when relevant events occur. GPU processing a video stream at 30 frames per second processes all pixels with full computation for every frame, regardless of whether the scene changes. Frame 2 might be 95% identical to Frame 1, but the GPU performs full computation anyway. Frame 3 might be 97% identical to Frame 2, but again receives full computation. The result delivers massive redundant processing, consuming constant power.'

'Loihi 3 processing the same video stream activates neurons to establish a baseline during the initial scene, then only fires 5% of neurons to detect the changes in Frame 2 when 95% remains unchanged. Frame 3 triggers only 3% of neurons, when 97% stays static. Power consumption becomes proportional to actual information content rather than frame rate. For event-driven sensory data from neuromorphic cameras and event-based audio, Loihi 3 achieves theoretical 1,000× efficiency versus GPUs. This isn’t marketing hyperbole—it’s architectural mathematics. Temporal sparsity with 99% of neurons inactive delivers a 100× reduction. Spatial sparsity through local processing without global synchronization provides a 10× reduction. Combined, these factors multiply to 1,000× efficiency. Real-world performance varies by workload, but event-based applications routinely achieve 500-1,000× improvements.'




good to know where the competition is at - dyor
could always be wrong of course - all freely available in the public domain
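
To put the quoted "1,000×" arithmetic and the graded-spike idea into concrete terms, here is a rough sketch of my own, based only on the figures in that excerpt and not on anything Intel has published:

Python:
# Back-of-the-envelope reading of the quoted claims; none of this is Intel data.
import numpy as np

# Claimed sparsity factors from the excerpt.
temporal_factor = 100   # "99% of neurons inactive" -> ~100x fewer activations
spatial_factor = 10     # "local processing without global synchronization" -> ~10x
print("combined theoretical factor:", temporal_factor * spatial_factor)   # 1000x

# Binary vs "graded" spiking of the same activation vector: a binary spike only
# says fire / don't fire, a graded spike also carries an amplitude.
activations = np.array([0.02, 0.7, 0.0, 1.3, 0.4])
binary_spikes = (activations > 0.5).astype(int)                  # [0 1 0 1 0]
graded_spikes = np.where(activations > 0.05, activations, 0.0)   # [0.  0.7 0.  1.3 0.4]
print("binary:", binary_spikes)
print("graded:", graded_spikes)

Whether Loihi 3 actually behaves like that is, of course, the open question in this thread.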
 
  • Like
  • Thinking
Reactions: 3 users

Slade

Top 20
It's an interesting question: how far along are they?

On Intel's web page you can find absolutely nothing about Loihi 3, only Loihi 2, and they call it a research chip!

There are two reasons I can think of why Loihi 3 isn't mentioned:

1: They keep a tight lid on it?

2: They are moving very slowly forward, maybe hit a snag, and are years away from launch?

As a BRN investor, I hope it's the second reason, LOL.


The launch of Loihi 3 could well be a good thing for BrainChip. It could push the industry to speed up the adoption of neuromorphic processing, with various industries turning to Akida specifically for wearables, defense, cybersecurity and space, where we seem to be suited and are making inroads. We will potentially take a large slice of the pie.
 
  • Like
  • Love
Reactions: 9 users

Diogenese

Top 20
Hi Bravo.

That poster Flectional who has been showing up on the crappers BRN threads lately sure seems to think Loihi 3 is real.
He comes across as a bit of a tech head having seen him post on WBT and 4DS over the past few years.
He posted on 8/2/26 quoting something called "SapienFusion - Feb 4 - 2026" as a source.

See excerpt of that post below...............

SapienFusion - Feb 4 - 2026

Intel Loihi 3​

Intel’s neuromorphic journey began in 2017 with Loihi 1, continued through Loihi 2’s 2021 release, and culminates in Loihi 3’s January 2026 commercial availability, a processor that represents the most significant architectural departure from conventional computing since GPUs themselves emerged. This isn’t an incremental improvement. This is brain-inspired computing that finally delivers on decades-old promises.
Graded Spikes: Bridging Two Worlds
'Loihi 3’s critical innovation introduces 32-bit graded spikes—a bridge between traditional deep neural networks operating on continuous values and spiking neural networks communicating through discrete events.Earlier neuromorphic generations used binary on/off signaling. A neuron either fired or didn’t. This forced algorithms designed for conventional architectures to undergo complete rewriting. Converting a PyTorch model to a binary spiking neural network required redesigning activation functions, adjusting learning algorithms, tuning temporal dynamics, and accepting accuracy degradation. The result created high barriers to adoption, and most developers stayed with GPUs.Graded spikes solve this problem by encoding information into spike amplitudes across a 32-bit range. Each spike carries nuanced information—not just fire or don’t fire, but fire with this specific intensity. This enables mainstream AI workloads to run on neuromorphic hardware with dramatically reduced power while requiring minimal algorithmic adaptation. Developers can convert existing models with automated tools currently in development, maintain accuracy within 1-2% of original performance, and achieve neuromorphic efficiency without complete redesign. This technical bridge makes commercial viability possible.'

Event-Driven Computation at Scale​

'The power efficiency advantage comes from temporal sparsity—the principle that most neurons remain inactive most of the time, processing only when relevant events occur.GPU processing a video stream at 30 frames per second processes all pixels with full computation for every frame, regardless of whether the scene changes. Frame 2 might be 95% identical to Frame 1, but the GPU performs full computation anyway. Frame 3 might be 97% identical to Frame 2, but again receives full computation. The result delivers massive redundant processing, consuming constant power.'

'Loihi 3 processing the same video stream activates neurons to establish a baseline during the initial scene, then only fires 5% of neurons to detect the changes in Frame 2 when 95% remains unchanged. Frame 3 triggers only 3% of neurons, when 97% stays static. Power consumption becomes proportional to actual information content rather than frame rate.For event-driven sensory data from neuromorphic cameras and event-based audio, Loihi 3 achieves theoretical 1,000× efficiency versus GPUs. This isn’t marketing hyperbole—it’s architectural mathematics. Temporal sparsity with 99% of neurons inactive delivers a 100× reduction. Spatial sparsity through local processing without global synchronization provides a 10× reduction. Combined, these factors multiply to 1,000× efficiency. Real-world performance varies by workload, but event-based applications routinely achieve 500-1,000× improvements.'




good to know where the competition is at - dyor
could always be wrong of course - all freely available in the public domain
Is the second letter of that poster's name capitalized?
 

FMF

Hi Bravo,

Interesting the reference is to Loihi 3 plus Prophesee, when BRN just put this up:

https://brainchip.com/prophesee-lp/

Arquimea has deployed Akida with a Prophesee camera on a drone to detect distressed swimmers and surfers in the ocean helping lifeguards scale their services for large beach areas, opting for an event-based computing solution for its superior efficiency and consistently high-quality results.

The interesting thing about using a DVS camera in a mobile application is whether there must be some means of adjusting for the variation in pixel output from received light intensity due to motion. DVS generates a spike when the light impinging on a pixel changes by more than a predetermined amount, so moving the camera will change the light impinging on a lot of pixels, except in areas of uniform light intensity. Depending on the threshold level, that may only involve fairly abrupt edges.
Hey @Diogenese

Does any of the below from our mate Olivier answer any of the questions / provide solutions?

Posted a couple months ago to a post by someone else on event based cameras.



Olivier Coenen2mo

The Austrian Institute of Technology (AIT) where Christoph Posch developed the ATIS sensor developed a linear event-based visual array of 3(?) pixel wide to scan objects on a conveyor belt moving really fast. We used the technique presented here to categorize moving grocery items and read their barcodes when put in a shopping basket. The difficulty was the effective temporal resolution, only about 0.1 ms at best, far from the ns resolution of the timestamps. So we needed better techniques, dynamical NNs to extract all the info and compensate for the time-dependent response of a pixel: if fired recently, less likely to fire again for same contrast. We didn’t have that NN then and I think I have it now with PLEIADES and successors.
Like
Reply

Olivier Coenen2mo

The increased resolution brings out another main advantage of event-based vision sensors: one can eventually see/resolve objects that one couldn’t with the same frame-based resolution. We could still track drones flying in the sky that were 1/10 the size of a single pixel, for example. Try that with a frame-based camera. We could generate Gigapixel resolution images with a DAVIS240 (240 x 180) of our surroundings while driving a vehicle offroad on a bumpy ride.
Like
Reply

Olivier Coenen2mo

This brings out another point: single cones/rods in our retinas don’t have the angular resolution to clearly see stars in the night sky, our eyes just don’t have the necessary resolution, yet we can clearly see stars. How is that then possible? Eye movement and spike timing. Within a certain spatial resolution, spikes fire at the same spatial location when our eyes move. The temporal resolution of the spike translates to the spatial resolution that we can achieve when the spikes accumulate around the same spatial location where the star is. Be drunk and the stars should appear thicker or will appear to move because the temporal spike alignment is off. Thus, what we see is clearly a result of the neural processing, not just the inputs, the neural networks processing the spike firing. That’s the neural processing we need to take full advantage of event-based (vision) sensors, without having to resort to the processing of periodic sampling of frames that the engineering world is stuck in today.
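
Olivier's star-gazing point can be made concrete with a small simulation (my own toy model, assuming the sensor motion is known): every individual event is quantised to a whole pixel, yet re-aligning the events by the motion that produced them and accumulating recovers the source position to a small fraction of a pixel.

Python:
# Toy model of spike-timing super-resolution: a sub-pixel point source, a
# jittering sensor with known shifts, and accumulation of motion-corrected events.
import numpy as np

rng = np.random.default_rng(2)
true_pos = 10.37          # source position in pixel units (sub-pixel)
n_events = 2000

# Each "eye movement" shifts the scene by a known sub-pixel amount; the pixel
# that fires is simply the integer cell the source lands in at that instant.
shifts = rng.uniform(-0.5, 0.5, n_events)
fired_pixel = np.floor(true_pos + shifts)

# Undo the known motion before accumulating: each event votes with its pixel centre.
estimates = (fired_pixel + 0.5) - shifts
print("quantisation limit of a single event: +/-0.5 px")
print("accumulated estimate:", round(estimates.mean(), 3))   # ~10.37, well inside one pixel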
 
  • Like
Reactions: 3 users

manny100

Top 20
Hi Bravo.

That poster Flectional who has been showing up on the crappers BRN threads lately sure seems to think Loihi 3 is real.
He comes across as a bit of a tech head having seen him post on WBT and 4DS over the past few years.
He posted on 8/2/26 quoting something called "SapienFusion - Feb 4 - 2026" as a source.

See excerpt of that post below...............

SapienFusion - Feb 4 - 2026

Intel Loihi 3​

Intel’s neuromorphic journey began in 2017 with Loihi 1, continued through Loihi 2’s 2021 release, and culminates in Loihi 3’s January 2026 commercial availability, a processor that represents the most significant architectural departure from conventional computing since GPUs themselves emerged. This isn’t an incremental improvement. This is brain-inspired computing that finally delivers on decades-old promises.
Graded Spikes: Bridging Two Worlds
'Loihi 3’s critical innovation introduces 32-bit graded spikes—a bridge between traditional deep neural networks operating on continuous values and spiking neural networks communicating through discrete events.Earlier neuromorphic generations used binary on/off signaling. A neuron either fired or didn’t. This forced algorithms designed for conventional architectures to undergo complete rewriting. Converting a PyTorch model to a binary spiking neural network required redesigning activation functions, adjusting learning algorithms, tuning temporal dynamics, and accepting accuracy degradation. The result created high barriers to adoption, and most developers stayed with GPUs.Graded spikes solve this problem by encoding information into spike amplitudes across a 32-bit range. Each spike carries nuanced information—not just fire or don’t fire, but fire with this specific intensity. This enables mainstream AI workloads to run on neuromorphic hardware with dramatically reduced power while requiring minimal algorithmic adaptation. Developers can convert existing models with automated tools currently in development, maintain accuracy within 1-2% of original performance, and achieve neuromorphic efficiency without complete redesign. This technical bridge makes commercial viability possible.'

Event-Driven Computation at Scale​

'The power efficiency advantage comes from temporal sparsity—the principle that most neurons remain inactive most of the time, processing only when relevant events occur.GPU processing a video stream at 30 frames per second processes all pixels with full computation for every frame, regardless of whether the scene changes. Frame 2 might be 95% identical to Frame 1, but the GPU performs full computation anyway. Frame 3 might be 97% identical to Frame 2, but again receives full computation. The result delivers massive redundant processing, consuming constant power.'

'Loihi 3 processing the same video stream activates neurons to establish a baseline during the initial scene, then only fires 5% of neurons to detect the changes in Frame 2 when 95% remains unchanged. Frame 3 triggers only 3% of neurons, when 97% stays static. Power consumption becomes proportional to actual information content rather than frame rate.For event-driven sensory data from neuromorphic cameras and event-based audio, Loihi 3 achieves theoretical 1,000× efficiency versus GPUs. This isn’t marketing hyperbole—it’s architectural mathematics. Temporal sparsity with 99% of neurons inactive delivers a 100× reduction. Spatial sparsity through local processing without global synchronization provides a 10× reduction. Combined, these factors multiply to 1,000× efficiency. Real-world performance varies by workload, but event-based applications routinely achieve 500-1,000× improvements.'




good to know where the competition is at - dyor
could always be wrong of course - all freely available in the public domain
I think that when Loihi 3 is actually commercial there will be a formal announcement, a celebration and self-congratulation.
The first to know certainly will not be down rampers from over on the crapper.
 
  • Like
  • Haha
Reactions: 7 users

Diogenese

Top 20
Back in 2024, Anybotics thought machine learning was a software function:

US2025328146A1 LEGGED ROBOT LOCOMOTION CONTROL TRAINED IN REINFORCEMENT LEARNING BASED ON HETEROGENEOUS ENVIRONMENTAL REPRESENTATIONS 20240423

[0043] Models. Various models are involved. Such models are computational models, which include learned models (i.e., models trained using machine learning, as with the first model) and may further include other types of models, e.g., models relying on model predictive control (MPC) and/or geometric obstacle detection. Such models are implemented by computerised modules, executing on same or distinct processing means, this depending on the processing capability of the underlying processing means 600 and the safety level desired.
 
  • Like
Reactions: 2 users

Diogenese

Top 20
Hey @Diogenese

Does any of the below from our mate Olivier answer any of the questions / provide solutions?

Posted a couple months ago to a post by someone else on event based cameras.



Olivier Coenen2mo

The Austrian Institute of Technology (AIT) where Christoph Posch developed the ATIS sensor developed a linear event-based visual array of 3(?) pixel wide to scan objects on a conveyor belt moving really fast. We used the technique presented here to categorize moving grocery items and read their barcodes when put in a shopping basket. The difficulty was the effective temporal resolution, only about 0.1 ms at best, far from the ns resolution of the timestamps. So we needed better techniques, dynamical NNs to extract all the info and compensate for the time-dependent response of a pixel: if fired recently, less likely to fire again for same contrast. We didn’t have that NN then and I think I have it now with PLEIADES and successors.
Like
Reply

Olivier Coenen2mo

The increase resolution brings out the another main advantage of event-based vision sensors, one can eventually see/resolve objects that one couldn’t with same frame-based resolution. We could still track drones flying in the sky that were 1/10 the size of a single pixel for example. Try that with a frame-based camera. We could generate Gigapixel resolution images with a DAVIS240 (240 x 180) of our surroundings while driving a vehicle offroad on a bumpy ride.
Like
Reply

Olivier Coenen2mo

This brings out another point: single cone/rod in our retinas don’t have the angular resolution to clearly see stars in the night sky, our eyes just don’t have the necessary resolution, yet we can clearly stars. How is that then possible? Eye movement and spike timing. Within a certain spatial resolution, spikes fire at the same spatial location when our eyes move. The temporal resolution of the spike translates to the spatial resolution that we can achieve when the spikes accumulates around the same spatial location where the star is. Be drunk and the stars should appear thicker or will appear to move because the temporal spike alignment is off. Thus, what we see is clearly a result of the neural processing, not just the inputs, the neural networks processing the spike firing. That’s the neural processing we need to take full advantage of event-based (vision) sensors, without having to resort to the processing of periodic sampling of frames that the engineering world is stuck in today.
Hi FMF,

It does not explain it to Ella's satisfaction. It's all very well to say you can track something from a vehicle offroad on a bumpy road, but that does not explain the way that you do it, although I do like the idea of drunken star gazing offroad on a bumpy road while the temporal alignment is skewwiff.
 
  • Haha
  • Like
Reactions: 5 users

7für7

Top 20
It’s always an emotional rollercoaster reading along here… one post is “we’ll be rich tomorrow,” the next is “sh*t… we’re all screwed” …. and that’s happening even without the usual bashers being around…

Hahahaha
Confused Kevin James GIF by TV Land
 
  • Like
Reactions: 1 users

TopCat

Regular
With all the information on robotics lately, it jogged my memory of our partnership with MagikEye which, I think, would appeal to many key players.

From our website:
BrainChip Holdings (BRN) has teamed up with MagikEye for a total 3D vision solution.

MagikEye’s patent-protected technology is based on its Invertible Light system, which enables the “smallest, fastest & most power-efficient 3D depth sensing.”

“By combining the strengths of BrainChip’s Neural Network capabilities with MagikEye’s Invertible Light, we are excited about the game-changing benefits that customers will experience, in terms of a total 3D vision solution for robotics, machine vision and many other new applications.”
 
  • Like
  • Love
Reactions: 6 users
Hmmm



BRAIN INTERFACES AND ON-DEVICE AI (BRAINEULINK + BRAINCHIP)

Promotional poster for sBCI Glasses by BraineuLink showing two styles of smart glasses and caps, listing features like mind-interaction, AR vision, and AI object recognition, with branding and product descriptions.

BraineuLink describes work on non-invasive EEG brain-computer interfaces for decoding user intentions and interfacing with digital devices, while BrainChip describes its Akida neuromorphic processor platform for low-power, real-time edge AI.

A comparison chart showing Akida Neural Processor and TENNs Models using less energy than standard AI systems, with a note that Akida consumes less than 1% of the power of typical AI systems.

MIDI relevance: these are enabling technologies: lower-power on-device perception and alternate input methods are exactly what’s needed for future accessible instruments, adaptive controllers, and context-aware performance rigs.

Apart from the fact that they are from Taiwan and started in 2021, there ain’t much online about the company.

IMG_4560.png
IMG_4562.png

At BraineuLink Technology Inc, they are pioneering the development of non-invasive EEG brain-computer interfaces (BCI). Their advanced systems include cutting-edge algorithms for decoding user intentions and innovative BCI chips. They are focused on creating intelligent solutions that allow seamless brain interaction with digital devices such as mobile applications and AR glasses, as well as translating EEG signals into text. Their mission is to make neural interfacing technology accessible and transformative for everyday use.
 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 12 users