BRN Discussion Ongoing

This is another example of how AI will be a game changer in the medical field:



I can't see any evidence of Brainchip's involvement but this would seem to be a perfect use case for Akida tech.

The company is Fujifilm Sonosite.
 

cosors

👀
Bless me father for I have sinned, I was meant to buy a 2ltr milk but couldn't resist buying 6 more BRN shares at this ridiculous low price.
I bought a euro pallet of milk.
 

Boab

I wish I could paint like Vincent
So say if BRN were to capture 1% of market, then US $800 million. That would be US $444, so say PE of 20 = US $8888 then convert to AUS =$13226 per share. Just saying.....
1,000 million is a billion, so the article is confusing, from what it says in the headline to what it says within the article.

The Global Neuromorphic Computing Market Size is to grow from USD 31.2 million in 2021 to USD 8,275.9 million by 2030, at a Compound Annual Growth Rate (CAGR) of 85.73%
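As a quick sanity check on the quoted market figures (a minimal sketch; reading the quote as a 9-year horizon from 2021 to 2030 is my assumption):

```python
# Compound growth: value_2030 = value_2021 * (1 + CAGR)^years
start_musd = 31.2        # USD million, 2021
cagr = 0.8573            # 85.73% per year
years = 2030 - 2021      # 9 years

implied_musd = start_musd * (1 + cagr) ** years
# Lands within ~1% of the article's USD 8,275.9 million
# (the published CAGR is rounded).
assert abs(implied_musd - 8275.9) / 8275.9 < 0.02
```

Either way the 2030 figure is in the billions of dollars, not trillions, which is where the confusion above comes from.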
 

Rskiff

Regular
So say if BRN were to capture 1% of market, then US $800 million. That would be US $444, so say PE of 20 = US $8888 then convert to AUS =$13226 per share. Just saying.....
As @FrederikSchack pointed out, 1% is 80 billion not 800 million, so I guess we have to multiply by 10 times more, so $132,260 per share, sorry to everyone for undervaluing the worth :ROFLMAO: Edit: as @Boab has pointed out, it's in millions :( Recalculating puts it in more realistic perspective!
 
(quoting @Boab's post above pointing out that 1,000 million is a billion)
Thanks, I was too quick there, embarrassing error.
 

jtardif999

Regular
I guess the key to the Renesas deal is who the chip was taped out for.
Was it produced for BRN to warehouse and sell?
Did Renesas produce it for themselves to sell?
Were the chips produced for a genuine third party who have clients to sell to? They would not get Renesas to tape the chip out on a wing and a prayer. They would have done their research and have clients to sell to.
So I guess who the chips were for is the key to revenue this quarter. It might not be much but it would be a start.
One thing is a certainty - the chips would not have been taped out for the office rubbish bin.
Renesas took a licence from BrainChip so they could sell a chip they have created with Akida inside to their customers. I don’t think they would have taped out the chip unless they had at least one customer in line. Further to this in my previous post - the chip they’ve taped out is stated to be ISO compliant, so there’s a fair chance the at least one customer is from the automobile industry. AIMO.
 

IloveLamp

Top 20
Interesting like by Anil.......Apologies if already posted......

Screenshot_20230406_085225_LinkedIn.jpg
 
Renesas took a licence from BrainChip so they could sell a chip they have created with Akida inside to their customers. I don’t think they would have taped out the chip unless they had at least one customer in line. Further to this in my previous post - the chip they’ve taped out is stated to be ISO compliant, so there’s a fair chance the at least one customer is from the automobile industry. AIMO.
That should get the scarlet pimpernel or whatever his name is on LinkedIn happy when that becomes available.
 
This just popped up, and may mean something to someone......


1680736641919.png

1680736684971.png
 
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm.

How we are pushing the boundaries of integer quantization and making it possible at scale.

Artificial intelligence (AI) is enhancing our daily lives, whether it’s improving collaboration, providing entertainment, or transforming industries through autonomous vehicles, digital health, smart factories, and more. However, these AI-driven benefits come at a high energy consumption cost. To put things in perspective, it is expected that by 2025 deep neural networks will have reached 100 trillion weight parameters, which is similar to the capacity of the human brain, or at least the number of synapses in the brain.

Why quantization continues to be important for edge AI

To make our vision of intelligent devices possible, AI inference needs to run efficiently on the device — whether a smartphone, car, robot, drone, or any other machine. That is why Qualcomm AI Research continues to work on holistic model efficiency research, which covers areas such as quantization, neural architecture search, compilation, and conditional compute. Quantization is particularly important because it allows for automated reduction of weights and activations to improve power efficiency and performance while maintaining accuracy. Based on how power is consumed in silicon, we see an up to 16X increase in performance per watt and a 4X decrease in memory bandwidth by going from 32-bit floating point (FP32) to 8-bit integer quantization (INT8), which equates to a lot of power savings and/or increased performance. Our webinar explores the latest findings in quantization.

AI engineers and developers can opt for post-training quantization (PTQ) and/or quantization-aware training (QAT) to optimize and then deploy their models efficiently on the device. PTQ consists of taking a pre-trained FP32 model and optimizing it directly into a fixed-point network. This is a straightforward process but may yield lower accuracy than QAT. QAT requires training/fine-tuning the network with the simulated quantization operations in place, but yields higher accuracy, especially for lower bit-widths and certain AI models.
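The PTQ path can be illustrated with a toy symmetric, per-tensor INT8 quantizer (a minimal NumPy sketch of the general technique, not Qualcomm's actual tooling):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.2, size=1024).astype(np.float32)  # stand-in FP32 weights

# Post-training quantization: derive a scale from the observed range,
# snap weights to the signed 8-bit grid, then dequantize for use.
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq = w_int8.astype(np.float32) * scale

# Rounding error is bounded by half a quantization step.
assert np.abs(w - w_deq).max() <= scale / 2 + 1e-6
```

QAT inserts the same round-trip into the training graph so the network learns to compensate for it, which is why it recovers more accuracy at low bit-widths.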

ModelEffPicture2.jpg


Integer quantization is the way to do AI inference

For the same numbers of bits, integers and floating-point formats have the same number of values but different distributions. Many assume that floating point is more accurate than integer, but it really depends on the distribution of the model. For example, FP8 is better at representing distributions with outliers than INT8, but it is less practical as it consumes more power — also note that for distributions without outliers, INT8 provides a better representation. Furthermore, there are multiple FP8 formats in existence, rather than a one-size-fits-all. Finding the best FP8 format and supporting it in hardware comes at a cost. Crucially, with QAT the benefits offered by FP8 in outlier-heavy distributions are mitigated, and the accuracy of FP8 and INT8 becomes similar — refer to our NeurIPS 2022 paper for more details. In the case of INT16 and FP16, PTQ mitigates the accuracy differences, and INT16 outperforms FP16.
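The outlier point can be made concrete: a single extreme value stretches a per-tensor INT8 scale and inflates the error on the bulk of the distribution (a toy NumPy illustration, assuming simple max-based scaling):

```python
import numpy as np

def int8_mse(x):
    """Mean squared error after a symmetric per-tensor INT8 round-trip."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127) * scale
    return float(np.mean((x - q) ** 2))

rng = np.random.default_rng(1)
bulk = rng.normal(0.0, 0.05, size=10_000)
with_outlier = np.concatenate([bulk, [4.0]])  # one extreme weight

# The outlier forces a coarse grid, so the bulk quantizes far worse.
assert int8_mse(with_outlier) > 10 * int8_mse(bulk)
```

This is the trade-off the text describes: formats (or training schemes) that tame outliers matter more than the float-vs-integer distinction itself.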

ModelEffPicture3.jpg


Making 4-bit integer quantization possible

The latest findings by Qualcomm AI Research go one step further and address another issue harming the model accuracy: oscillating weights when training a model. In our ICML 2022 accepted paper, “Overcoming Oscillations in Quantization-Aware Training,” we show that higher oscillation frequencies during QAT negatively impact accuracy, but it turns out that this issue can be solved with two state-of-the-art (SOTA) methods called oscillation dampening and iterative freezing. SOTA accuracy results are achieved for 4-bit and 3-bit quantization thanks to the two novel methods.
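The oscillation effect is easy to reproduce with a one-parameter fake-quantized "training" loop and a straight-through estimator (a toy sketch of the phenomenon the paper studies, not of its dampening or freezing methods):

```python
import numpy as np

def fake_quant(w, step=0.1):
    # Simulated quantization used during QAT: snap to the integer grid.
    return np.round(w / step) * step

target, w, lr = 0.35, 0.30, 0.05  # target sits between grid points 0.3 and 0.4
history = []
for _ in range(30):
    q = fake_quant(w)
    grad = 2.0 * (q - target)     # straight-through estimator: dq/dw ~= 1
    w -= lr * grad
    history.append(q)

# The latent weight never settles: it keeps flipping between the two
# neighbouring quantization levels around the target.
assert len(set(np.round(history[-8:], 1))) == 2
```

Dampening penalizes this flip-flopping and iterative freezing pins oscillating weights to one level, which is how the paper reaches SOTA 4-bit and 3-bit accuracy.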

Enabling the ecosystem with open-source quantization tools

Now, AI engineers and developers can efficiently implement their machine learning models on-device with our open-source tools AI Model Efficiency Toolkit (AIMET) and AIMET Model Zoo. AIMET features and APIs are easy to use and can be seamlessly integrated into the AI model development workflow. What does that mean?

  • SOTA quantization and compression tools
  • Support for both TensorFlow and PyTorch
  • Benchmarks and tests for many models
AIMET enables accurate integer quantization for a wide range of use cases and model types, pushing weight bit-width precision all the way down to 4-bit.

ModelEffPicture4.jpg


In addition to AIMET, Qualcomm Innovation Center has also made the AIMET Model Zoo available: a broad range of popular pre-trained models optimized for 8-bit inference across a broad range of categories from image classification, object detection, and semantic segmentation to pose estimation and super resolution. AIMET Model Zoo’s quantized models are created using the same optimization techniques as found in AIMET. Together with the models, AIMET Model Zoo also provides the recipe for quantizing popular 32-bit floating point (FP32) models to 8-bit integer (INT8) models with little loss in accuracy. In the future, AIMET Model Zoo will provide more models covering new use cases and architectures while also enabling models with 4-bit weights quantization via further optimizations. AIMET Model Zoo makes it as simple as possible for an AI developer to grab an accurate quantized model for enhanced performance, latency, performance per watt, and more without investing a lot of time.

ModelEffPicture5.jpg


Model efficiency is key for enabling on-device AI and accelerating the growth of the connected intelligent edge. For this purpose, integer quantization is the optimal solution. Qualcomm AI Research is enabling 4-bit quantization without loss in accuracy, and AIMET makes this technology available at scale.

Tijmen Blankevoort
Director, Engineering, Qualcomm Technologies Netherlands B.V.

Chirag Patel
Engineer, Principal/Mgr., Qualcomm Technologies
Can you repost that in English🤯
 

Xray1

Regular
In Dec'20 BRN and Renesas struck a deal.
  • Firstly, the company has signed an intellectual property licensing agreement with major semiconductor manufacturer Renesas Electronics America
  • Under the agreement, Renesas will gain a single-use, royalty-bearing, worldwide design license to use the Akida IP

On Dec 2nd 2022 BRN reported that Renesas was Taping out a Chip using BRN Tech.

Renesas had basically 2 years to work with AKIDA testing, talking to clients etc.
There is no way Renesas would have produced the Chip just for it to gather dust.
It's a certainty they would have done their tech research and worked with clients over the 2-year period.
No Company would produce a product unless it was certain of demand and/or advance orders.
IMO the chip was produced for sale simply because that is what Renesas do. No different than Ford making cars to sell.
No dot joining there.
I expect to see revenue start in the 1st quarter. The product sales will appear in Renesas accounts. The Royalty in BRN Accounts.
When and how much depends on the timing of Renesas sales and Royalty payments.
As the sales belong to Renesas BRN cannot make an announcement.
I doubt when the 1st Royalty $ comes in that the ASX will allow an announcement. Simply because the Royalty flows from Renesas sales of which BRN has no control over and therefore would not be able to make a reliable estimate.
Does this make sense or am I missing something??
Happy to be corrected.
From memory ... I think Renesas purchased only the "2 nodes" so most likely imo possibly for whitegoods sales ??!!!
 

Violin1

Regular
(quoting @manny100's post above on the Dec '20 Renesas IP licence and the expected flow of royalties)
Hi @manny100
I'm not sure this will mean revenue in the first quarter though. If they started taping out in Dec 2022 I think one of the things we've learnt over the past few years is that taping out, testing, production, testing etc all takes time. It therefore depends on when the royalty becomes due to BRN and if invoices are then prepared and terms of trade - none of which we are privy to. So I'm thinking last half of calendar 2023 rather than the past or next few months.

I'd rather you be correct though!

Just my thoughts.
 

Steve10

Regular
(quoting @Violin1's reply above suggesting royalty revenue in the last half of calendar 2023)

Renesas mentioned on December 2, 2022 announcement:

"We are working with a third party taping out a device in December on 22nm CMOS,” said Chittipeddi.

Brainchip mentioned on January 29, 2023 announcement:

today announced that it has achieved tape out of its AKD1500 reference design on GlobalFoundries’ 22nm fully depleted silicon-on-insulator (FD-SOI) technology.

Neither has announced completion of chips yet, and testing is still required.

Wafer fabrication commenced in April 2020 for AKD1000 & was completed in July 2020 = 3 months. Then required time for testing/validation which was completed by September 2020 = 2 months.

From tape out completed to tested/validated chip = 5 months.

So if tape out for AKD1500 was completed late January 2023 + 3 months = late April 2023 news of chip manufactured.

Then another 2 months for testing/validation = late June 2023 news.

No news as to when Renesas completed their tape out.
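The timeline above can be sketched as simple month arithmetic (assuming the AKD1000 precedent of roughly 3 months fabrication plus 2 months validation carries over):

```python
def add_months(year, month, n):
    """Advance a (year, month) pair by n calendar months."""
    total = (month - 1) + n
    return year + total // 12, total % 12 + 1

tapeout = (2023, 1)                   # AKD1500 tape-out, late January 2023
fab_done = add_months(*tapeout, 3)    # ~3 months wafer fabrication
validated = add_months(*fab_done, 2)  # ~2 months testing/validation

assert fab_done == (2023, 4)    # late April: chips manufactured
assert validated == (2023, 6)   # late June: tested/validated
```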
 

Steve10

Regular
By end of April, another 14 trading days after today, news is due on AKD1500 chip production completion.

Just in time for AGM on 23rd May.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!

Some key points that stand out to me. Apologies if you already posted this info @Bravo , I must have missed it!

"While the consumer experience driven by our technology is simple, there’s a great amount of complexity behind the scenes. The experience is facilitated by state-of-the-art computer vision (CV), sensor fusion, and deep learning algorithms made possible by several pieces of hardware and a system of in-store and cloud microservices that we’ve designed."

What hardware powers the Just Walk Out technology shopping experience?


"Once consumers enter the store, our technology relies on in-house-designed cameras to identify what products consumers take off or put back on the shelves. Each of our cameras has high resolution and a wide field of view, allowing us to install the fewest number of cameras possible. The reduced number of cameras makes the technology cost effective. We run CV algorithms directly on the camera to process data locally to reduce the bandwidth needed to send data to other devices or to the cloud. To provide security, our cameras also incorporate hardware-backed security capabilities and end-to-end encryption of data both locally and while being sent between our services."

Bringing the power of the cloud to the store

"While Amazon Web Services (AWS) helps us to elastically scale our resources to process data, stores can be a long distance from a data center, and there can often be a large amount of data to process. Our initial prototypes and installations in our own store formats started with all of our processing done in the cloud. As we scaled to different locations and larger store formats, we quickly needed to iterate on an architecture to allow us to run our algorithms where it makes the most sense—either in the cloud with elastic compute or in the store where the data is.

To manage these bandwidth issues, we built an edge computing architecture to process sensor data and compute receipts locally without going back and forth to the cloud. Placing compute close to our data helps us to improve reliability by sending less data over the internet.

Ultimately, all our cameras, sensors, and scanners create a significant amount of data to process. To make the whole system more robust, the data streams are processed as independently as possible, resulting in a highly concurrent and asynchronous architecture."

Nice @DollazAndSense! One can only assume we are or will be working with Amazon (IMO DYOR).


Amit Mate am.png
 

JDelekto

Regular
This just popped up, and may mean something to someone......


View attachment 33755
View attachment 33756
These are the Python packages that appear to have been recently updated for the Akida Execution Engine (sorry for re-stating the obvious), which contain the Akida emulation layer, and if you have the hardware available (such as the PCIe card), it can be used for training and inference.

Pypi.org also has the CNN to SNN converter package, the various Akida models from their model zoo, as well as a library for quantizing models.

The homepage link will take you to BrainChip's Meta TF documentation, to which their main Web site also links from the "Try Meta TF" button under the "Meta TF" option on the Products menu. I installed version 2.3.2 a couple of weeks ago, so I'll need to take a look and see what has changed.

It's also worth mentioning that BrainChip has its own Github Repositories, with the source code for its PCIe driver (currently Linux only), as well as their samples. Interestingly, their release notes only seem to go up to 2.3.0. Maybe they have yet to push the changes.

A bonus search for Akida on Github will also unearth an excellent repository with @uiux 's projects that were created to take advantage of the Akida hardware.
 

Deleted member 118

Guest
A forecast of an 8 trillion neuromorphic computing market by 2030:
------
Update. Embarrassing error, 8 billion of course.


1% of that is a small $80 billion
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Sorry forum I just got dreadbotted for the first time 🫨 I'll keep it clearly on topic from now on 😅

Lucky you didn't get dreddb0tted twice @skutza, because your reply was actually off-topic too, truth be told. ❤️ from Bra-b0tt.:ROFLMAO::LOL:

I'm just going to squeeze this here so that I don't get in twuble too. I've been wondering about the possibility of our integration into Qualcomm's Flex SoC, which is due to enter the market in early 2024. I personally think there's a pretty reasonable prospect of that happening IMO.

Screen Shot 2023-04-05 at 12.22.59 pm.png
 

rgupta

Regular
(quoting @Bravo's post above on possible integration into Qualcomm's Flex SoC)
Qualcomm is promoting similar technology to ours.
Only time will tell how it is related.
The first time it was discussed here without involving Qualcomm was when Francois from Merc made a tweet that he had just authenticated something, without the involvement of BRN. The properties quoted were the same as BRN's.
Then Qualcomm promoted on-chip processing, on-device optimisation without the cloud, processing at the edge, etc.
So there is no doubt Qualcomm will be either our friend number 1 or our competitor number 1.
But everything is hidden under NDAs, or will be revealed with time.
 