BRN Discussion Ongoing

I don't know why you want to compare BRN with Qualcomm.
Qualcomm is a big, established player, while BRN has been a start-up for the last 12 years. We all know Qualcomm uses Arm IP for most of their products, and they may use BRN IP as well. Arm is our technology partner, which means our technology may bring some benefits to Arm's technology. That, in turn, may mean Qualcomm could be using us in their future endeavours, or may need to. Qualcomm is not a competitor of BrainChip because we are not even in the picture.
The only way for BRN to grow is to sell its technology to Qualcomm, rather than compete with Qualcomm or any big player.
Dyor
So you have watched the Qualcomm investor presentation then 😵‍💫
 

rgupta

Regular
So you have watched the Qualcomm investor presentation then 😵‍💫
I am neither invested in Qualcomm, nor do I have any motivation to invest in them in the near future, so no, I did not see their presentation.
I am invested in BrainChip and want to follow them instead. And I don't want to compare BRN with anyone like Apple, Google, Nvidia, Qualcomm, Arm, etc. They are all established players and world leaders, while we have yet to receive our first cent of royalties.
To me, BrainChip has a technology and a future for that technology. Whether they will compete with the existing players or partner with them, only our company's statements can tell.
Qualcomm is a very good company, around 400 times bigger than BRN, with around 10,000 times BrainChip's revenue, but there is no comparison here. Once we get into the mainstream and Qualcomm or one of their competitors starts using our IP, only then would I like to compare. Unnecessary comparison can create either hype or doom for BrainChip. Yes, I want team BrainChip to sell their IP to Qualcomm, but that is work for team BrainChip; I cannot do much about it.
Dyor
 
  • Like
Reactions: 10 users

7für7

Top 20
My Xmas present has already arrived in the form of my XRP, and now I don't know what to do: cash some out and buy more BRN, or let it ride. I could buy a shitload more BRN 😂

View attachment 73218
Same here but I’ll just let it run. I don’t see the point in exchanging a currency that might later be worth more than other currencies for a cheaper one. I think if everything goes well, some people will be glad they held on to it. Just my opinion.
 
  • Like
Reactions: 2 users

manny100

Regular
I don't know why you want to compare BRN with Qualcomm.
Qualcomm is a big, established player, while BRN has been a start-up for the last 12 years. We all know Qualcomm uses Arm IP for most of their products, and they may use BRN IP as well. Arm is our technology partner, which means our technology may bring some benefits to Arm's technology. That, in turn, may mean Qualcomm could be using us in their future endeavours, or may need to. Qualcomm is not a competitor of BrainChip because we are not even in the picture.
The only way for BRN to grow is to sell its technology to Qualcomm, rather than compete with Qualcomm or any big player.
Dyor
One of our partners Prophesee is a supplier to Qualcomm.
We make Prophesee products smarter.
 
  • Like
  • Love
Reactions: 18 users

manny100

Regular
 
  • Like
  • Fire
  • Love
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
One of our partners Prophesee is a supplier to Qualcomm.
We make Prophesee products smarter.


That's a great point @manny100.

It might be a simplistic way of looking at things, but for me it boils down to two particular areas:
  • performance and efficiency calculated by TOPS/watt, and
  • the unique processing capabilities that BrainChip's technology can bring to the table
If BrainChip's technology can deliver on both of these areas in a meaningful way, then there should be no reason for Qualcomm not to want to integrate it into their products. As has been mentioned numerous times, we are trying to position ourselves as a partner rather than a competitor to behemoths like Qualcomm and NVIDIA.

In terms of determining the TOPS/watt capabilities of Qualcomm's chips, it's a bit tricky because there don't seem to be any manufacturer-specified values for power consumption. In this instance, the authors of one article (published 20 June 2024, linked below) estimated via their own power consumption measurements that "the most efficient power range of the Snapdragon X Elite chips seems to be 20-30 Watts".

Akida Pico, on the other hand, operates in the microwatt (μW) to milliwatt (mW) range.
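To put those power figures in perspective, here's a quick back-of-envelope TOPS/watt calculation. This is a sketch only: the 45 TOPS figure for the Snapdragon X Elite NPU is Qualcomm's marketing number, the 20-30 W range is the article's estimate, and the milliwatt-class workload at the end is a purely hypothetical illustration.

```python
def tops_per_watt(tops: float, watts: float) -> float:
    """Efficiency metric: tera-operations per second per watt."""
    return tops / watts

# Snapdragon X Elite: Qualcomm quotes a 45 TOPS NPU; the article
# estimated 20-30 W as its most efficient measured power range.
worst = tops_per_watt(45, 30)   # -> 1.5 TOPS/W
best = tops_per_watt(45, 20)    # -> 2.25 TOPS/W
print(f"Snapdragon X Elite: {worst:.2f}-{best:.2f} TOPS/W (estimated)")

# Hypothetical milliwatt-class workload: a device doing only
# 1 GOPS (0.001 TOPS) at 1 mW already matches 1 TOPS/W, and every
# further drop into the microwatt range multiplies the ratio.
print(f"1 GOPS @ 1 mW: {tops_per_watt(0.001, 0.001):.1f} TOPS/W")
```

The point isn't the absolute numbers, which are rough, but that the metric scales with the inverse of power: microwatt-range silicon can compete on TOPS/watt without competing on raw TOPS.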

When it comes to the processing that BrainChip's technology allows for, we can look to Max Maxfield's latest article "Taking the Size and Power of Extreme Edge AI/ML to the Extreme Minimum", dated 21 Nov 2024. The obvious benefit is that you can "feed the event-based data from the camera directly to the event-based Akida processor, thereby cutting latency and power consumption to a minimum", as compared to other available techniques.

The big question is whether Qualcomm would see any value in adopting this type of technology into their own products, and I think that Judd Heape, VP for product management of camera, computer vision and video at Qualcomm Technologies, might have actually answered that question in an EE Times article dated 22 March 2023 when he stated the following.


EXTRACT - EE Times article dated 22 March 2023 (Interview with Judd Heape, VP for product management of camera, computer vision and video at Qualcomm Technologies).

Screenshot 2024-11-24 at 1.51.53 pm.png






EXTRACT - Notebook Check 20 June 2024
Screenshot 2024-11-24 at 1.21.10 pm.png



EXTRACT - Notebook Check 20 June 2024
TOPS  pm.png



EXTRACT - EE Journal "Taking the Size and Power of Extreme Edge AI/ML to the Extreme Minimum", 21 Nov 2024.
Screenshot 2024-11-22 at 10.24.31 am (1).png



Links
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 25 users

rgupta

Regular
That's a great point @manny100.

It might be a simplistic way of looking at things, but for me it boils down to two particular areas:
  • performance and efficiency calculated by TOPS/watt, and
  • the unique processing capabilities that BrainChip's technology can bring to the table
If BrainChip's technology can deliver on both of these areas in a meaningful way, then there should be no reason for Qualcomm not to want to integrate it into their products. As has been mentioned numerous times, we are trying to position ourselves as a partner rather than a competitor to behemoths like Qualcomm and NVIDIA.

In terms of determining the TOPS/watt capabilities of Qualcomm's chips, it's a bit tricky because there don't seem to be any manufacturer-specified values for power consumption. In this instance, the authors of one article (published 20 June 2024, linked below) estimated via their own power consumption measurements that "the most efficient power range of the Snapdragon X Elite chips seems to be 20-30 Watts".

Akida Pico, on the other hand, operates in the microwatt (μW) to milliwatt (mW) range.

When it comes to the processing that BrainChip's technology allows for, we can look to Max Maxfield's latest article "Taking the Size and Power of Extreme Edge AI/ML to the Extreme Minimum", dated 21 Nov 2024. The obvious benefit is that you can "feed the event-based data from the camera directly to the event-based Akida processor, thereby cutting latency and power consumption to a minimum", as compared to other available techniques.

The big question is whether Qualcomm would see any value in adopting this type of technology into their own products, and I think that Judd Heape, VP for product management of camera, computer vision and video at Qualcomm Technologies, might have actually answered that question in an EE Times article dated 22 March 2023 when he stated the following.


EXTRACT - EE Times article dated 22 March 2023 (Interview with Judd Heape, VP for product management of camera, computer vision and video at Qualcomm Technologies).

View attachment 73247





EXTRACT - Notebook Check 20 June 2024
View attachment 73241


EXTRACT - Notebook Check 20 June 2024
View attachment 73244


EXTRACT - EE Journal "Taking the Size and Power of Extreme Edge AI/ML to the Extreme Minimum", 21 Nov 2024.
View attachment 73246


Links
You are right, Bravo, we need not compete with the tech behemoths.
As a matter of fact, no one can hide the sun. So if our solutions are better than others', that means we are going to get our share sooner or later.
Event-based sensors are new to the marketplace, and similarly, no one had thought of an event-based processor a few years ago.
So let us see where we are heading.
 
  • Like
Reactions: 6 users
Today, numerous articles on some of the promising future technologies Mercedes-Benz is exploring were published online, after the carmaker had recently invited journalists to its Future Technologies Lab in Sindelfingen.

And of course - you guessed it - neuromorphic computing was one of them.
(I also find solar coating another interesting concept).

There was also a press release by MB itself:

View attachment 73221



View attachment 73222

View attachment 73225
View attachment 73226



Playmobil figures in action… 😀

View attachment 73230
View attachment 73231



German magazine auto, motor und sport published both an online article and a video on MB & neuromorphic computing earlier today (both in German) that literally confirm what I’ve been suspecting: that Mercedes-Benz is nowhere near implementing neuromorphic technology at scale in their series-production cars…




“Der Weg in die Serie ist noch weit.” - “It’s still a long way to series production.”

“Bis neuromorphe Chips ihren Weg ins Auto finden, wird es wohl noch einige Jahre dauern.” - “It will probably take a few more years until neuromorphic chips find their way into (series-production) cars.”


View attachment 73227




“Diese Technologie steht jedoch noch am Anfang und erfordert umfangreiche Tests und Zertifizierungen, bevor sie in Autos eingesetzt werden kann.”

“However, this technology is still in its infancy and requires extensive testing and certification before it can be used in cars.”

Something similar is said in the video itself around the 5 min mark.


Other articles are behind a paywall, but maybe one of you happens to be a subscriber and could check out whether there are any additional snippets of interest worth sharing?





My guess is that the next Mercedes-Benz LinkedIn post regarding NC will give us some more details about the research collaboration with HKA (Hochschule Karlsruhe) on neuromorphic cameras. Or it might be a post announcing the collaboration between Mercedes and Neurobus that I had spotted on Nov 12 (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-441454).
Hi @Frangipani

Thanks for the info.

The comments on certification etc. tie in with a role advertised by BRN earlier this year, where the requirements included a working knowledge of automotive / ISO design processes.

It is something I believe Akida would need to meet either as part of a package or stand alone.

 
  • Like
Reactions: 9 users
This gentleman popped up in a general search. He has been with us for 5 months but is also a researcher at UC San Diego.

He has worked on TENNs with us and is also "working" on ICA for State Space Models for audio denoising.

At UC San Diego he is working on video / LLMs for captioning. I wonder if that's a fit for us too?



Machine Learning @ BrainChip | MS Machine Learning & Data Science @ UC San Diego​

BrainChip UC San Diego​

San Diego, California, United States

  • Machine Learning Engineer​

    BrainChip

    Jul 2024 - Present 5 months
    - Working on implementing Independent Component Analysis (ICA) for effective basis extraction for State Space Models (SSM) to enhance audio denoising performance.
    - Worked on integrating sparsity techniques with Temporal Event-based Neural Network (TENN).
  • UC San Diego Graphic

    Graduate Student Researcher​

    UC San Diego

    Jan 2024 - Present 11 months
    San Diego, California, United States
    - Developed a video captioning pipeline leveraging Large Language Models (LLMs) to capture high-level dynamics/actions in a video.
    - Augmented the captioning pipeline to handle 1M videos on Nvidia H100 cluster.
    - Developed a video filtering pipeline to extract suitable videos from Panda-70M dataset for our captioning pipeline.
 
  • Like
  • Fire
  • Love
Reactions: 20 users

7für7

Top 20
82ED1FF3-7559-42BA-94E7-136C87BFB232.jpeg


Sign the contract big boy… sign the contract….
 
  • Like
  • Haha
  • Fire
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Here's a Neurobus Internship starting January 2025. It states that "knowledge of artificial intelligence, neuromorphic computing and autonomous systems a plus".

Good to see major players like Airbus, ESA, and Mercedes-Benz listed here - all of whom are our partners with whom Neurobus also has strategic partnerships.

It makes me wonder if our work with the likes of Neurobus, ESA, Frontgrade Gaisler and Airbus on next-generation microprocessors will eventually see them utilised not only in space operations but also incorporated into Mercedes-Benz vehicles. The next-generation microprocessors are supposed to set a new standard for modern space-grade computing devices. It wouldn't be unheard of for them to be used in automobiles and smartphones IMO, as has been the case with other space-grade microprocessors, because of their demonstrated robustness.

Food for thought.





Screenshot 2024-11-24 at 3.20.01 pm.png




Screenshot 2024-11-24 at 3.20.17 pm.png





 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 40 users

Diogenese

Top 20
Hi @Frangipani

Thanks for the info.

The comments of certification etc tie in with a role advertised by BRN earlier this year where part of the requirements needed a working knowledge of Auto / ISO design processes etc.

It is something I believe Akida would need to meet either as part of a package or stand alone.

Some years ago, the company said that obtaining ISO certification would be done by the vehicle maker. However, that does not relieve the company of responsibility for ensuring the designs are ISO compliant.

Then there is the continuous development of the tech which inhibits customers from committing to silicon - hence, SDV.

I wonder how long it takes to get ISO certification for software, and for software updates. It would require documentation of the software and of every code change, as well as exhaustive testing. I would think that software testing would not take as long as hardware testing.

Hopefully TENNs/Akida2 will be part of the SDV software in the near term. TENNs 01 software has been around for a couple of years and would have undergone extensive testing, but we don't know what the latest version is or when it was available, let alone what's still in the pipeline.
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Getupthere

Regular
Congratulations and Well done to Sean!!

Managed to go through the whole 2024 without achieving anything

Apart from getting his well deserved shares this week.

Great to see management followed through with their commitment to communicate better to their shareholders.

The 3-year lead has well and truly gone.
 
  • Like
  • Haha
  • Love
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
A TEMPORAL convolutional neural network was used... Tell me more!



Down, But Not Out​

This low-cost wrist-worn fall detector uses a Photon 2, accelerometer, and Edge Impulse to instantly alert first responders to an emergency.​


Nick Bild
5 hours ago • Wearables


EXTRACT

Screenshot 2024-11-24 at 5.25.07 pm.png




EXTRACT


Screenshot 2024-11-24 at 5.32.36 pm.png



EXTRACT

Screenshot 2024-11-24 at 5.34.53 pm.png





 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 31 users

manny100

Regular
That's a great point @manny100.

It might be a simplistic way of looking at things, but for me it boils down to two particular areas:
  • performance and efficiency calculated by TOPS/watt, and
  • the unique processing capabilities that BrainChip's technology can bring to the table
If BrainChip's technology can deliver on both of these areas in a meaningful way, then there should be no reason for Qualcomm not to want to integrate it into their products. As has been mentioned numerous times, we are trying to position ourselves as a partner rather than a competitor to behemoths like Qualcomm and NVIDIA.

In terms of determining the TOPS/watt capabilities of Qualcomm's chips, it's a bit tricky because there don't seem to be any manufacturer-specified values for power consumption. In this instance, the authors of one article (published 20 June 2024, linked below) estimated via their own power consumption measurements that "the most efficient power range of the Snapdragon X Elite chips seems to be 20-30 Watts".

Akida Pico, on the other hand, operates in the microwatt (μW) to milliwatt (mW) range.

When it comes to the processing that BrainChip's technology allows for, we can look to Max Maxfield's latest article "Taking the Size and Power of Extreme Edge AI/ML to the Extreme Minimum", dated 21 Nov 2024. The obvious benefit is that you can "feed the event-based data from the camera directly to the event-based Akida processor, thereby cutting latency and power consumption to a minimum", as compared to other available techniques.

The big question is whether Qualcomm would see any value in adopting this type of technology into their own products, and I think that Judd Heape, VP for product management of camera, computer vision and video at Qualcomm Technologies, might have actually answered that question in an EE Times article dated 22 March 2023 when he stated the following.


EXTRACT - EE Times article dated 22 March 2023 (Interview with Judd Heape, VP for product management of camera, computer vision and video at Qualcomm Technologies).

View attachment 73247





EXTRACT - Notebook Check 20 June 2024
View attachment 73241


EXTRACT - Notebook Check 20 June 2024
View attachment 73244


EXTRACT - EE Journal "Taking the Size and Power of Extreme Edge AI/ML to the Extreme Minimum", 21 Nov 2024.
View attachment 73246


Links
“By combining our Metavision solution with Akida-based IP, we are better able to deliver a complete high-performance and ultra-low power solution to OEMs looking to leverage edge-based visual technologies as part of their product offerings,” said Luca Verre, CEO and co-founder of Prophesee.
If Prophesee sells Qualcomm a product containing Akida, there would likely be an NDA. Qualcomm may not release the product containing Akida until????
There may well be a wait for the cash to start rolling in. Not sure about the license situation in these cases.
We would not know until then.
 
  • Like
  • Fire
  • Thinking
Reactions: 14 users

Diogenese

Top 20
A TEMPORAL convolutional neural network was used... Tell me more!



Down, But Not Out​

This low-cost wrist-worn fall detector uses a Photon 2, accelerometer, and Edge Impulse to instantly alert first responders to an emergency.​


Nick Bild
5 hours ago • Wearables


EXTRACT

View attachment 73262



EXTRACT


View attachment 73263


EXTRACT

View attachment 73264





Temporal Convolutional Networks can be software:



The seminal work of Lea et al. (2016) first proposed a Temporal Convolutional Networks (TCNs) for video-based action segmentation. The two steps of this conventional process include: firstly, computing low-level features using (usually) CNN that encode spatial-temporal information and secondly, input these low-level features into a classifier that captures high-level temporal information using (usually) RNN. The main disadvantage of such an approach is that it requires two separate models. TCN provides a unified approach to capture all two levels of information hierarchically.
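For anyone curious what the "temporal" part actually computes: the core building block of a TCN is a causal (and usually dilated) 1-D convolution, where the output at time t only ever sees inputs at or before t. A minimal NumPy sketch of that building block (my own illustration of the general technique, not BrainChip's or Edge Impulse's code):

```python
import numpy as np

def causal_dilated_conv1d(x, kernel, dilation=1):
    """Causal 1-D convolution: y[t] = sum_i kernel[i] * x[t - i*dilation],
    with x treated as zero before t = 0, so no future samples leak in."""
    k = len(kernel)
    pad = (k - 1) * dilation
    # left-pad with zeros so every tap lands on a valid (past) sample
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    y = np.zeros(len(x))
    for t in range(len(x)):
        for i in range(k):
            y[t] += kernel[i] * xp[pad + t - i * dilation]
    return y

# An impulse at t=0 through a 2-tap kernel with dilation 2 produces
# responses at t=0 and t=2 only - never at any "future-looking" tap.
print(causal_dilated_conv1d([1, 0, 0, 0, 0, 0], [1.0, 1.0], dilation=2))
# -> [1. 0. 1. 0. 0. 0.]
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially, which is how a TCN captures long-range temporal context without needing an RNN.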
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Temporal Convolutional Networks can be software:



The seminal work of Lea et al. (2016) first proposed a Temporal Convolutional Networks (TCNs) for video-based action segmentation. The two steps of this conventional process include: firstly, computing low-level features using (usually) CNN that encode spatial-temporal information and secondly, input these low-level features into a classifier that captures high-level temporal information using (usually) RNN. The main disadvantage of such an approach is that it requires two separate models. TCN provides a unified approach to capture all two levels of information hierarchically.


Ahhhhhh! Very interesting...

Is that good or bad? Just asking for a friend!


e8efd22e67f1a803ab7953ee0614c861.gif
 
  • Haha
  • Love
  • Like
Reactions: 20 users

Diogenese

Top 20
Ahhhhhh! Very interesting...

Is that good or bad? Just asking for a friend!


View attachment 73265
Well, it's a 50/50 bet, but, given I've been cheering on Valeo and Mercedes SDV for TENNs, and given we're such good mates with EI, I hope it is TENNs software.

There are a number of different Temporal Convolutional Networks. As Ella reminds us,

Tain't what you do.
It's the way that you do it
.

TENNs does it with Convergent Orthogonal Polynomials.*

EI automates the classification of objects when building NN models. The models then have to be converted to N-of-M code format for Akida, like we're doing with our other mates BeEmotion (née Nviso).

Automating model formation expedites the process of implementing an NN. Converting the new models to spike/event format is necessary to adapt them for Akida, and applying N-of-M coding compresses the size of the model, thus improving latency and power consumption.

Akida 2 uses 4-bit events for the highest precision applications, but it can use 1-bit or 2-bit events where speed/power consumption are more important. I assume that TENNs is adapted or is being adapted to work with 1, 2, or 4 bits and N-of-M coding.

* Don't ask me.
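For readers unfamiliar with N-of-M coding: the idea is that out of M candidate activations, only the N strongest fire as events, and each surviving event carries a few-bit magnitude. A toy sketch of the general concept (my own illustration; the function name, scaling scheme, and numbers are all assumptions, not BrainChip's actual encoding):

```python
import numpy as np

def n_of_m_encode(activations, n, bits=4):
    """Keep the n largest-magnitude activations out of M and quantise
    their magnitudes to `bits`-bit event values; the rest produce no
    event at all (and hence no downstream computation)."""
    a = np.asarray(activations, dtype=float)
    keep = np.argsort(np.abs(a))[-n:]         # indices of the top-n values
    levels = (1 << bits) - 1                  # 15 levels for 4-bit events
    scale = float(np.abs(a).max()) or 1.0     # avoid divide-by-zero
    events = {}
    for i in sorted(int(j) for j in keep):
        q = int(round(abs(a[i]) / scale * levels))
        if q > 0:                             # zero magnitude emits no event
            events[i] = q
    return events

# Six activations in, two 4-bit events out - the other four stay silent.
print(n_of_m_encode([0.1, -0.9, 0.05, 0.6, 0.0, 0.3], n=2))
# -> {1: 15, 3: 10}
```

Dropping the silent activations is where the compression comes from: fewer events means less memory traffic and less compute, hence the latency and power savings.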
 
  • Like
  • Fire
  • Love
Reactions: 31 users
It appears Tech Mahindra, like TCS, also sees the future including a ramp-up in neuromorphic.

It would be great if they're talking to us as well.

If BRN can't make some sort of inroads into what, on the surface, appears to be an increasing need, recognition and catalyst for neuromorphic, then we have some issues.

Snipped a couple of bits from link below.




Tech Mahindra

The Engine Behind AI: AI Accelerator​


November 19, 2024
The last few years have witnessed a profound shift in operational models across global industries. The reason? The exponential evolution of AI. Foundational models of AI in 2024 encompass various capabilities, including, but not limited to, computer code, images, video, and audio. As a direct impact, we can see a plethora of AI-assisted innovations in healthcare, automotive, finance, and more. A McKinsey study (2023) reveals that the new generations of AI can inject $2.6 – $4.4 trillion across all industries combined.

..........

The Future of AI Accelerators: What Lies Ahead?​

AI accelerators are still evolving, and their future promises even more powerful capabilities. Here are a few trends and innovations expected in the coming years:


  • Accelerators in Consumer Electronics
    Even more consumer devices will undoubtedly feature AI accelerators in the coming months. Whether a wearable device that is equipped with real-time health diagnostics or AR glasses analyzing surroundings in real-time, AI-based consumer electronics will change how we use technology in our lives. This trend dictates more AI accelerators in consumer electronics.
In addition to this, energy-efficient AI models are also gaining popularity. The high demand for the AI accelerator will lead to a substantial focus on the development of an energy-efficient model. These models can save operational costs while reducing the negative carbon footprint issues related to massive AI deployment. The major hardware companies will further improvise in developing architectures related to more power efficiency such as the use of neuromorphic computing and neural structures mimicking the human brain along with AI accelerators. So only one question remains. Are you ready for the future?
 
  • Like
  • Fire
  • Love
Reactions: 29 users