BRN Discussion Ongoing

IloveLamp

Top 20
🤔


 
Reactions: 3 users (Fire, Like, Wow)

IloveLamp

Top 20
Ready for open

 
Reactions: 8 users (Haha, Love)
Maybe; I don't think we will ever know who it was. Although Prophesee already had a working solution with their current tech partner, so it could be one of our early adopters.
Fingers crossed that is true. I am sure the discussion was based around blurred images that BRN helped resolve.
 
Reactions: 3 users (Like, Thinking)
Without a signed IP contract with us, how can Prophesee be using our IP in their products? Is it correct to say that we cannot expect any royalties from any partner if they haven't signed an IP contract? @Fact Finder could you please shed some light on this? Sean also said in the webinar that it was a high priority for the sales team to get more IP contracts. I hope it is possible that our IP can be used by partners without an IP agreement; then hopefully revenue will appear suddenly. I just don't understand how anyone can pay us without any formally signed contract.
Hi Cand2it

I agree with the other comments. Where I think your confusion may have begun is the fact that Brainchip has an interesting interpretation of the ASX Continuous Disclosure Rules.

Their interpretation is that unless they can calculate a precise dollar amount on an agreement, they cannot announce an engagement on the ASX.

So, starting from this point and taking Tata Elxsi as an example, it is most certainly the case that there is in place a written document containing the terms under which Brainchip and Tata Elxsi engage.

I don't know what these terms are, but it logically would cover the release of the IP and any joint intellectual property development.

I would also suggest it would cover the financial terms; otherwise, how would Tata Elxsi be able to approach a medical or industrial customer to sell an AKIDA-based product without knowing what it would cost the customer?

This agreement/contract/NDA, or whatever name one wants to apply, not being for a defined sum of money (i.e. one million units at $25.00 each), will not be an agreement that, under Brainchip's interpretation of the ASX Rules, can be announced on the ASX.

I personally believe there are many such agreements of this type in place between Brainchip and its many partners.

My opinion only DYOR
Fact Finder
 
Reactions: 44 users (Like, Love, Thinking)
Maybe this is the deal Sean was referring to that didn't go through, is my thinking.
No.

The deal did not proceed because (Sean's words) the company laid off thousands of employees.

My best bet is Ericsson, as they laid off thousands of employees at about the right time.


At least some of the researchers in the Ericsson paper I posted worked at the North American branch, still do, and have published since these layoffs. Their paper relates to the area of commercial operations targeted in the layoffs, and on LinkedIn they appeared to be still employed up to a couple of weeks ago:


Towards 6G Zero-Energy Internet of Things: Standards, Trends, and Recent Results

T Khan, SNK Veedu, A Rácz, M Afshang… - Authorea …, 2023 - techrxiv.org
… a neuromorphic AI chip (the Akida neural chip from BrainChip) … @ericsson.com) is a senior researcher at Ericsson Research in … She joined Ericsson in 2018 following her postdoctoral


I believe Sean Hehir has subsequently said that the project is not dead. I note also that Nandan Nayampally has mentioned energy harvesting as a use case for AKIDA.

My opinion only DYOR
Fact Finder
 
Reactions: 29 users (Like, Fire, Love)

Bloodsy

Regular
Shared on ASX Bets; not sure of its validity, but interestingly this was allegedly posted and quickly deleted.

Wonder if they could be toying with AKIDA? Who was it playing with neuromorphic SDRs? It always pricks the ears (or eyes) when we see and hear "without referencing the cloud".


[screenshot of the allegedly deleted post]
 
Reactions: 15 users (Like, Fire, Love)
(Quoted from the Ericsson post above.)
My wild theory for the day.

What if Ericsson, Brainchip and Prophesee were working together to substitute Prophesee's event-based neuromorphic vision sensor for the camera used in the above paper?

My technophobe understanding is this would further reduce power consumption, making it a world-beating best fit for being powered by energy harvesting from normal internal light sources.

My opinion only DYOR
Fact Finder
 
Reactions: 24 users (Like, Love, Thinking)

charles2

Regular
Hi @charles2

The only things I know about Prophesee and its neuromorphic sensor technology are:

1. It can work with other technology including Von Neumann;

2. It partnered with Brainchip because AKIDA technology natively runs SNNs and as such is directly compatible with Prophesee's sensor; there is no loss of functionality, as there is when it is teamed with other technology such as that from SynSense, Intel, Qualcomm and Sony.

3. Brainchip and Prophesee were demonstrating their joint technology at CES 2024.

4. In my opinion Brainchip is not part of the current Prophesee product offerings from SynSense, Qualcomm and/or Sony.

My opinion only DYOR
Fact Finder
Likely then that Akida is the "killer app" in Prophesee's next iteration. That is an easy concept to keep us grounded... as prophecy relates to future events.

Thanks FF for clarifying.
 
Reactions: 6 users (Like, Haha)

AARONASX

Holding onto what I've got
(Quoted from Bloodsy's post above.)

Nice find! I put these guys (DRO) on my watch list the other day. CommSec data says they are overvalued at the moment; maybe one to watch, as Akida would benefit their tech... might pick up a few in a month or two.
 
Reactions: 8 users (Like)

Bravo

If ARM was an arm, BRN would be its biceps 💪!
A MUST WATCH INTERVIEW IMO!!!


Ziad Asghar, Qualcomm | MWC Barcelona 2024 - YouTube

5 hours ago





Sustainability and AI: Qualcomm aims to reshape the mobile landscape



The future of artificial intelligence is not confined to boardrooms and laboratories; it's unfolding in real time, shaping the way we live, work and interact with the world around us. The journey is one marked by ingenuity, collaboration and a steadfast commitment to harnessing the transformative power of AI for improved sustainability and the betterment of humanity.
Diving deep into the implications of AI's migration to the edge, Ziad Asghar (pictured), senior vice president of product management and head of AI Technology at Qualcomm Technologies Inc., gave a glimpse into the transformative power of on-device processing, the burgeoning realm of sustainability and the democratization of AI.
"Generative AI may have started on the cloud, but it's transitioning to the edge, it's coming to the device and that brings in with it [an] amazing amount of benefits," Asghar said.
Asghar spoke with theCUBE Research analysts John Furrier, Dave Vellante and Shelly Kramer at MWC Barcelona, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed the evolving landscape of AI and its integration into everyday devices. (* Disclosure below.)

The future of AI: A move toward efficiency and sustainability

AI's trajectory is swiftly shifting toward on-device processing, marking a paradigm shift in how we interact with technology. Gone are the days of heavy reliance on cloud infrastructure; the future lies in the empowerment of devices to perform complex AI tasks locally. The cornerstone of this shift is the evolution of AI models: becoming not only more capable but also more compact, according to Asghar.
"That allows us to be able to do more AI processing than anybody else out there from a performance per watt perspective," he said.
Qualcomm's unveiling of Stable Diffusion, a text-to-image model running on a device with a staggering seven billion parameters, underscores the strides made in on-device AI processing. This progress not only enhances efficiency but also aligns with sustainability goals by significantly reducing energy consumption. The age-old debate of proprietary versus open-source models is also addressed, with Qualcomm embracing the democratization of AI through initiatives such as the AI Hub, empowering developers to leverage curated models for diverse applications.
"People can take these generative AI model like Stable Diffusion, we can pick it up from there and readily create an application," Asghar said. "And you know what I'm excited about is there are many applications that we might not even thought about when we were designing these chips. But I hope that developers are able to come up with those amazing ideas."

The shift toward edge AI​

Amid the excitement, challenges loom on the horizon. The optimization of power consumption, interoperability of AI models and maintaining a delicate balance between innovation and ethical considerations are just a few hurdles to overcome.
These challenges are opportunities in disguise: opportunities to push boundaries, solve meaningful problems and shape a future where human-machine interaction transcends boundaries, according to Asghar.
"What if you have a virtual assistant sitting on your device that you say, 'Reserve a restaurant for me tomorrow,'" he said. "It basically figures out your calendar, it figures out Yelp, it finds a place close to you, goes to OpenTable, makes the reservation for you and you're done. This is the kind of human-machine interaction that I think we can enable in the future."
Here's the complete video interview, part of SiliconANGLE's and theCUBE Research's coverage of MWC Barcelona:

 
Reactions: 18 users (Like, Fire, Love)

ndefries

Regular
(Quoted from charles2's reply above.)
The Prophesee, Sony and Akida combination is going to take the smartphone to a new level. I just wish it would hurry up already.
 
Reactions: 11 users (Like, Fire, Thinking)

rgupta

Regular
(Quoted from Fact Finder's Prophesee post above.)
Just one question on point 4: why did Qualcomm have to wait until Prophesee partnered with BRN before declaring their partnership with Prophesee?
They must have been working on a product beforehand.
 
Reactions: 4 users (Like)
Hi rgupta

I don't know the answer. Just like the share price, Qualcomm will do what Qualcomm will do.

If I were running Qualcomm, building an anti-blur mobile phone camera using a product from Prophesee without an exclusive license, I would announce as late as possible to make it hard for Apple to catch up. I would probably wait at least until the first new product launch.

My opinion only DYOR
Fact Finder
 
Reactions: 21 users (Like, Fire, Love)

Bravo

If ARM was an arm, BRN would be its biceps 💪!
(Quoted from Bravo's Ziad Asghar interview post above.)
At 20:30 Ziad Asghar talks about future use cases in the medical arena where AI at the edge will be able to predict things before they happen. I didn't think that Qualcomm's NPU had predictive capabilities??...

Also, it sounds like Ziad Asghar took notes from Peter van der Made's "4 Bits Are Enough" paper. At 22:50, Ziad discusses quantization and how they pre-empted the trend by getting LLMs to run on 4-bit integers.
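For anyone wondering what running on 4-bit integers actually involves, here is a minimal sketch of symmetric 4-bit weight quantization (my own toy illustration, not Qualcomm's or BrainChip's actual pipeline): floats are mapped to the 16 integer levels in [-8, 7] with a per-tensor scale, and dequantized by multiplying back.

import numpy as np

def quantize_4bit(weights):
    # Symmetric 4-bit quantization: map floats to integers in [-8, 7]
    # using a single per-tensor scale factor (assumes a non-zero tensor).
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the 4-bit integer codes.
    return q.astype(np.float32) * scale

w = np.array([0.81, -0.33, 0.05, -0.92, 0.47])
q, s = quantize_4bit(w)
print(q)                 # integer codes, e.g. [ 6 -3  0 -7  4]
print(dequantize(q, s))  # close to the original weights at a fraction of the bits

The point is simply that 4 bits buys a large memory and bandwidth saving for a small accuracy cost, which is the trade-off both van der Made's paper and Asghar are talking about.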

Ziad also says they are preparing for their next gen, which will have a lot more techniques, models, capabilities and use cases ("hello" AKIDA 😁). He says they're just getting started.
 
Reactions: 25 users (Like, Fire, Love)

Bravo

If ARM was an arm, BRN would be its biceps 💪!
Does anyone else reckon that Competitor 1 on the slide is Qualcomm?

Surely they're not going to bypass the opportunity to add "on-chip learning" to their suite of capabilities, particularly if they want to rule the edge.


[screenshot of competitor comparison slide]
 
Reactions: 17 users (Like, Haha, Fire)

Diogenese

Top 20
Hey @Diogenese, you got your eyes on?
Yes. I saw that and wondered, as you did, if it referred to Thorpe's N-of-M coding.

N-of-M coding is directly tied to the arrival time of spikes from different "pixel" neurons, although I would guess it also applies to auditory and other sensory neurons.

To get to N-of-M coding, there was a two-step process. The first step was realizing that the then-"orthodox" spike rate coding was inefficient and error-prone. When a nerve fires, it transmits a burst of decreasing pulses, the pulse rate being a measure of the strength of the signal. Thorpe realized that measuring the rate was inefficient mainly because it involved using a redundant secondary source of the spike information, and it was error-prone because estimating a firing rate over less than 1/10th of a second is unreliable. As I've mentioned before, Thorpe found that the initial spike of a spike burst from an optical nerve carried all the necessary information.

Thorpe also noticed that the firing time was inverse to the amplitude of the excitatory energy - the stronger the signal, the sooner the nerve fired.

Hence the switch from rate coding to spike time coding.

So that leaves all M spikes from M neurons still being processed, which brings us to the second step.

The next step was the realization that the later-arriving spikes added little to the accuracy of the visual detection. In other words, accurate detection could be carried out from the first N pulses to arrive - N-of-M coding.

As you can see, Thorpe's N-of-M coding is all about spikes which arrive asynchronously.
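To make that concrete, here is a minimal sketch of the idea (my own illustration of latency coding plus N-of-M selection, not Thorpe's or BrainChip's actual code): each "pixel" neuron fires once, earlier for stronger input, and the decoder keeps only the first N spikes to arrive.

import numpy as np

def first_spike_latencies(intensities, t_max=1.0):
    # Latency coding: the stronger the stimulus, the sooner the neuron
    # fires, so latency falls as intensity rises (the exact curve in
    # biology is more complex; this inverse mapping is an assumption).
    intensities = np.asarray(intensities, dtype=float)
    return t_max / (1.0 + intensities)

def n_of_m_code(intensities, n):
    # N-of-M coding: of the M neurons, keep only the first n to fire;
    # the later arrivals add little to detection accuracy.
    latencies = first_spike_latencies(intensities)
    order = np.argsort(latencies)  # earliest spikes first
    return order[:n]

# Toy example: 8 "pixel" neurons, keep the first 3 spikes.
pixels = [0.1, 5.0, 0.3, 9.0, 2.0, 0.0, 7.5, 1.0]
print(n_of_m_code(pixels, n=3))  # indices of the strongest (earliest) inputs

Because the information is carried by arrival order rather than spike counts, the result is available as soon as the Nth spike lands, which is where the speed and power advantage comes from.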

I could find nothing in the Renesas article or in their patents suggesting the use of spikes.


This recent Renesas patent application uses MACs.

US2024054083A1 SEMICONDUCTOR DEVICE 20220808


[figure from US2024054083A1]


A semiconductor device capable of shortening processing time of a neural network is provided. The memory stores a compressed weight parameter. A plurality of multiply accumulators perform a multiply-accumulation operation to a plurality of pixel data and a plurality of weight parameters. A decompressor restores the compressed weight parameter stored in the memory to a plurality of weight parameters. A memory for weight parameter stores the plurality of weight parameters restored by the decompressor. The DMA controller transfers the plurality of weight parameters from the memory to the memory for weight parameter via the decompressor. A sequence controller writes down the plurality of weight parameters stored in the memory for weight parameter to a weight parameter buffer at write timing.
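To illustrate the data flow that abstract describes, here is a toy software model (my own sketch, not Renesas's design; zlib stands in for whatever compression scheme they actually use): weights sit compressed in memory, a decompressor restores them into a weight buffer, and the multiply-accumulators then combine them with pixel data.

import zlib
import numpy as np

def compress_weights(weights):
    # Stand-in for the compressed weight parameter held in memory.
    return zlib.compress(weights.astype(np.float32).tobytes())

def decompress_weights(blob, n):
    # Stand-in for the decompressor restoring weights into the
    # weight parameter buffer (moved by the DMA controller).
    return np.frombuffer(zlib.decompress(blob), dtype=np.float32, count=n)

def multiply_accumulate(pixels, weights):
    # The MAC array: accumulate the element-wise products.
    return float(np.dot(pixels, weights))

pixels = np.array([0.2, 0.5, 0.9, 0.1], dtype=np.float32)
blob = compress_weights(np.array([1.0, -2.0, 0.5, 4.0]))
weights = decompress_weights(blob, n=4)
print(multiply_accumulate(pixels, weights))  # 0.2*1 - 0.5*2 + 0.9*0.5 + 0.1*4

Note this is a conventional MAC pipeline throughout: nothing in it (or in the abstract) involves spikes, which is the point being made here.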

That said, I haven't found out what their N:M refers to, and there are still 18 months of unpublished patent applications.
 
Reactions: 26 users (Like, Fire, Wow)

Bravo

If ARM was an arm, BRN would be its biceps 💪!
(Quoted from Bravo's Competitor 1 post above.)


Actually it looks like Competitor 1 is Intel, if the previous chart is anything to go by. How come Qualcomm has never been on any of our competitor charts?


[screenshot of the earlier competitor chart]
 
Reactions: 36 users (Like, Fire, Love)

manny100

Regular
Shots fired... I've given up guessing/predicting who might be using Akida, but it's great to see that the competition is heating up. This can only be good for Akida's time to market at scale???


We are partners with Prophesee. Any chance AKIDA is involved with the new Qualcomm Snapdragon, or maybe the one after?
Prophesee is supplying a chip for Qualcomm: "Vision firm Prophesee has announced a new collaboration that will see its neuromorphic Metavision sensors optimised for use with Qualcomm's Snapdragon mobile platforms."
BRN partnership, June 14th 2022:
"We've successfully ported the data from Prophesee's neuromorphic-based camera sensor to process inference on Akida with impressive performance," said Anil Mankar, Co-Founder and CDO of BrainChip. "This combination of intelligent vision sensors with Akida's ability to process data with unparalleled efficiency, precision and economy of energy at the point of acquisition truly advances state-of-the-art AI enablement and offers manufacturers a ready-to-implement solution."
Is AKIDA in the new Snapdragon described in your post?
At a presentation last year, when talking about tech partnerships and how we need to show that AKIDA works well with whatever processors our clients use (that is why we embedded with ARM), Sean said:
"a customer buys a Prophesee camera, they want to know Brainchip works well with them"
Not sure whether this was just a quip or an unintentional slip of the tongue given NDAs??
In any case, Prophesee has been working with the AKIDA chip for almost 2 years, and it works.
I am not sure any other event-based/power-saving neuromorphic AI companies are advanced enough to have gone this far.
According to Sean at the recent presentation we are streets ahead of the competition.
 
Reactions: 28 users (Like, Love, Fire)