BRN Discussion Ongoing

rgupta

Regular
I note that Sean H will give his Investor Presentation next Tuesday, 27/2/24, so I wonder when the Annual Report and 4E will be released on the ASX. Will it be before or after his presentation? Personally, I think that if the Annual R

I'd prefer no announcement. We might get an SP sugar hit, but outlets like the Crapper and Motley Fool would wrongly credit our recent run to the edge box, claiming either a shift to hardware sales or that we failed with IP, to intentionally spread confusion.

In the latest podcast, Sean stresses that BRN operates as an IP business. The introduction of edge devices is geared towards getting some SNN market penetration, not revenue (this was mentioned in a prior podcast or release when the VVDN box first got mentioned). IP remains our ultimate goal; Sean really emphasises that in the most recent podcast.

We should avoid being perceived as a hardware supplier or valued based on edge box sales.

The IFS event is tomorrow; it wouldn't be a bad time for IFS and BRN to announce the successful tape-out of 2.0 🤞
I imagine Sean is a very patient and clever CEO.
This is the first product we are launching with Sean on board. Yes, Akida 1000 and Akida 1500 were there, but they were in a different category.
I assume Sean made that decision only after getting some assurances from someone.
I believe the company should be able to sell a few thousand of them this quarter.
Dyor
 
  • Like
Reactions: 6 users
Yes, I had my eye on it when it was 32 cents; it was buy that or keep buying BrainChip.
I don't have to tell you which way I went.
BRN

 
  • Haha
  • Fire
  • Like
Reactions: 8 users
Anyone know what time our demo is scheduled for? There are a total of 5 demos during the day, so hopefully ours is the first :).

IFS Direct Connect 2024

February 21, 2024 | San Jose McEnery Convention Center

Actually, looking at the day's calendar, I'm expecting it to be during one of the demo showcase slots:

7:30 to 8:30 am
10:00 to 10:30 am
12:00 to 1:00 pm
3:00 to 3:20 pm
4:45 to 6:15 pm


What to Expect


All times in PST. Wednesday, February 21, 2024

7:30 - 8:30 am: Registration, Breakfast, and Demo Showcase
8:30 - 9:30 am: Pat Gelsinger and Stu Pann: A Systems Foundry for the AI Era, with special appearances by:
  • Gina M. Raimondo, United States Secretary of Commerce
  • Satya Nadella, Chairman and CEO, Microsoft
  • Rene Haas, CEO, Arm
9:30 - 10:00 am: Customer Fireside Chat featuring:
  • Yuan Xing Lee, VP of Central Engineering, Broadcom
  • Eric Fisher, President, MediaTek North America
10:00 - 10:30 am: AM Break and Demo Showcase
10:30 am - 12:00 pm: Ecosystem Spotlight featuring:
  • Aart de Geus, Executive Chair and Founder, Synopsys
  • Mike Ellow, Executive Vice President, Siemens Digital Industries Software
  • John Lee, General Manager and Vice President for Electronics, Semiconductors and Optics BU, Ansys
  • Anirudh Devgan, President and CEO, Cadence Design Systems
12:00 - 1:00 pm: Networking Lunch and Demo Showcase
1:00 - 2:15 pm: Dr. Ann Kelleher and Dr. Gary Patton: Delivering the Present and Inventing the Future: A Look Beyond 5N4Y
2:15 - 3:00 pm: Dr. Choon Lee: Advanced Packaging and Test Solutions
3:00 - 3:20 pm: PM Break and Demo Showcase
3:20 - 3:50 pm: Keyvan Esfarjani: Transforming Intel Manufacturing, featuring:
  • Jason Wang, President, UMC
3:50 - 4:00 pm: Stu Pann: Wrap Up
4:00 - 4:45 pm: Fireside Chat featuring:
  • Sam Altman, Co-founder and CEO, OpenAI
4:45 - 6:15 pm: Networking Reception and Demo Showcase
 
  • Like
  • Fire
Reactions: 14 users

Tothemoon24

Top 20
This is a beautiful part of the Gold Coast; I live just 10 km up the road and had many good waves there and at Greenmount when I was young. They were great days.
Oh, to be young again. :cry:
Hi sb, so true, it's a cracking spot.
Wouldn't think there'd be too many better beaches in Australia.
Going to check out the Greenmount surf 🏄‍♀️ club tomorrow.

I'm hoping BRN can pay for my early retirement shack at Rainbow.
 
  • Like
  • Fire
  • Love
Reactions: 13 users
So, as it's Friday tomorrow and we've had a few flat days of trading recently, I'm guessing we could be green tomorrow. Edit for @Space Cadet
Well, actually, it will definitely be Thursday tomorrow, as that usually follows Wednesday where I'm from, but sometimes it's best just to miss a day and move forward and up!!!!
lol
 
  • Haha
Reactions: 7 users
Well, actually, it will definitely be Thursday tomorrow, as that usually follows Wednesday where I'm from, but sometimes it's best just to miss a day and move forward and up!!!!
lol
I'm going to bed, I'm tired.

 
  • Haha
Reactions: 2 users

Teach22

Regular
If anyone is expecting to see anything at IFS that we don't already know, then I'm guessing you should be prepared to be disappointed; that's my gut feeling. But just being invited to the event is proof of how things are heading for the company.

Has any webinar, podcast, presentation, quarterly, half-yearly, yearly, AGM, etc. in the history of any company on the ASX ever revealed anything groundbreaking, ever??
Not for any company I've ever had a stake in.
 
  • Like
  • Thinking
Reactions: 4 users
I'm in the UK, we are 2 days in front.



And yes I’ve completely lost track of time and where I live.
Mmm, that's got me thinking.
How can you be two days in front?
And what are you in front of?
Maybe you're in the twilight zone!

I have heard Queensland is like 5 years behind, but that was some time ago; maybe they caught up and overtook the rest of the world, lol.
 
  • Haha
Reactions: 7 users
Hi sb, so true, it's a cracking spot.
Wouldn't think there'd be too many better beaches in Australia.
Going to check out the Greenmount surf 🏄‍♀️ club tomorrow.

I'm hoping BRN can pay for my early retirement shack at Rainbow.
I'm currently working in Ballina and Kirra, and yes, I'm surprised how nice it is, especially if you take a drive down the road next to the coast around 7 am to check out all the surf. No, sorry, I mean the hot totty either running or going for a surf.

 
  • Haha
  • Love
  • Fire
Reactions: 8 users
Mmm, that's got me thinking.
How can you be two days in front?
And what are you in front of?
Maybe you're in the twilight zone!

I have heard Queensland is like 5 years behind, but that was some time ago; maybe they caught up and overtook the rest of the world, lol.
 
  • Like
  • Haha
Reactions: 6 users
Yes, I had my eye on it when it was 32 cents; it was buy that or keep buying BrainChip.
I don't have to tell you which way I went.
Same here 😁 At $1.87, $1.63, $1.13, 19.5c, 23c, 36.5c. That part of the game is over. At peace now. Happy to watch the games being played from a distance. Good luck to us all. We deserve it.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 22 users

Galaxycar

Regular
Has anyone considered that the buying is quite possibly someone buying to control the vote at the next AGM? Hmmmmm, just a thought.
 
  • Like
  • Wow
  • Thinking
Reactions: 6 users

wilzy123

Founding Member
Has anyone considered that the buying is quite possibly someone buying to control the vote at the next AGM? Hmmmmm, just a thought.

Welcome back! 🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡

 
  • Haha
Reactions: 17 users
Has anyone considered that the buying is quite possibly someone buying to control the vote at the next AGM? Hmmmmm, just a thought.
Unless there are some better financials $$$ or another ASX price-sensitive announcement, then it could be a possible chance after the last AGM. But it would be a big gamble if you ask me, and I forgot your a
 
  • Haha
  • Fire
  • Like
Reactions: 15 users

Diogenese

Top 20
SW-F put me onto Roger Levinson's analog adventure:

https://www.eetimes.com/blumind-harnesses-analog-for-ultra-low-power-intelligence/

Canadian startup Blumind recently developed an analog computing architecture for ultra-low power AI acceleration of sensor data, Blumind CEO Roger Levinson told EE Times. The company hopes to enable widespread intelligence in Internet of Things (IoT) devices.

Advanced process nodes aren't cost effective for tiny chips used in tens to hundreds of millions of units in the IoT. Combine this with the fragmentation of the IoT market, the need for application-specific silicon, and the requirement for zero additional power consumption, and it's easy to see why the IoT has been slow to adopt AI, Levinson said.

“The request from our customer was: I need to lower my existing power, add no cost, and add intelligence to my system,” he said. “That isn’t possible, but how close to zero [power consumption] can you get? We’re adding a piece to the system, so we have to add zero to the system power, and cost has to be negligible, and then we have a shot. Otherwise, they’re not going to add intelligence to the devices, they’re going to wait, and that’s what’s happening. People are waiting.”

This is the problem Blumind is taking on. Initial development work on efficient machine learning (ML) at ultra-low power by Blumind cofounder and CTO John Gosson forms the basis for the startup’s architecture today.


“John said, ‘What you need to do is move charge around as the information carrier, and not let it go between power supplies’,” Levinson said. “Energy [consumption] happens when charge moves from the power supply to ground, and heat is generated. So he built an architecture [around that idea] which is elegant in its simplicity and robustness.”

Like some of its competitors in the ultra-low power space, Blumind is focusing on analog computing.

“We’ve solved the system-level always-on problem by making it all analog,” he said. “We look like a compute in memory architecture because we use a single transistor to store coefficients for the network, and that device also does the multiplication.”

The transistor’s output is the product of the input and the stored weight; the signal integrates for a certain amount of time, which generates a charge proportional to that product. This charge is then accumulated on a capacitor. A proprietary scheme measures the resulting charge and generates an output proportional to it which represents the activation.
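To put numbers on that description, here is a minimal behavioural model of a charge-domain multiply-accumulate in Python. It is purely an illustration of the Q = I·t idea described above; the pulse-width input encoding and the unit current/time constants are my assumptions, not Blumind's published design.

```python
# Behavioural sketch of a charge-domain MAC (illustration only).
# Assumption: inputs are encoded as pulse widths and weights as
# per-transistor drain currents; neither is a published Blumind spec.

def charge_mac(inputs, weights, i_unit=1e-9, t_unit=1e-6):
    """Accumulate Q = sum_i (w_i * i_unit) * (x_i * t_unit) on one capacitor."""
    q = 0.0
    for x, w in zip(inputs, weights):
        current = w * i_unit     # drain current set by the stored weight
        duration = x * t_unit    # input encoded as an integration time
        q += current * duration  # Q = I * t, so charge is proportional to x * w
    return q  # total charge (coulombs) on the summing capacitor

# Example: a three-input neuron. (Negative weights would need a
# differential signal path in real silicon; omitted for simplicity.)
print(charge_mac([0.2, 0.9, 0.5], [1.0, 0.4, 0.7]))  # ~9.1e-16 C
```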

“Everything is time based, so we are not looking at absolute voltages or currents,” he said. “All our calculations are ratiometric, which makes us insensitive to process, voltage and temperature. To maintain analog dynamic range, we do have to compensate for temperature, so even though the ratios remain stable, the magnitudes of signals can change.”
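The ratiometric claim can be written out in one line: if a process/voltage/temperature shift multiplies both the measured time and the timing reference by the same factor k, the factor cancels in the ratio. This is a sketch of the principle only; Blumind's actual readout scheme is proprietary.

```latex
% First-order PVT insensitivity of a ratiometric, time-based readout:
% a common scale factor k multiplies signal and reference alike.
\[
  \mathrm{activation}
  = \frac{t_{\mathrm{signal}}}{t_{\mathrm{ref}}}
  = \frac{k\, t^{0}_{\mathrm{signal}}}{k\, t^{0}_{\mathrm{ref}}}
  = \frac{t^{0}_{\mathrm{signal}}}{t^{0}_{\mathrm{ref}}}
\]
% The individual magnitudes still drift with temperature, which is why
% dynamic range has to be compensated even though the ratios hold.
```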

Levinson said Blumind has chosen to focus on “use cases that are relevant to the world today”—keyword spotting and vision—partly in an effort to prove to the market analog implementations of neural networks are viable in selected use cases.


Blumind test silicon. (Source: Blumind)
“One of the biggest challenges has been: does it have to be software configurable, or not?” he said. “Our first architecture is not configurable in terms of the network—we build a model in silicon, which happens to be robust for the class of applications we’re going after, and is orders of magnitude more power and area efficient.”

Model weights are adjustable, but everything else is fixed. However, this is enough flexibility to cater for a class of problems, Levinson said.

“The layers are fixed, the neurons and synapses are fixed,” he said. “We’re starting with audio because our [customer] wants an always-on voice system. However, our silicon is capable of doing anything that can utilize a recurrent neural network.”

Blumind’s software stack supports customer training of the recurrent neural network (RNN) its silicon is designed for with customers’ own data.

This strategy helps minimize power consumption, but it means a separate tapeout for every new class of application Blumind wants to target. Levinson said that at legacy 22-nm nodes, an analog/mixed-signal tapeout costs a little over $1 million and requires a team of just five to eight people.

In tinyML today, the performance difference from changing models is minor, he argues.

“There is a hard limit at the edge, especially in sensors,” he said. “I have X amount of memory and X amount of compute power, and a battery. The data scientist has to fit the model within these constraints.”

Blumind has test chips for its first product, the RNN accelerator designed for keyword spotting, voice activity detection and similar time-series data applications. This silicon achieves 10 nJ per inference; combined with feature extraction, it consumes a few microwatts during always-on operation. The chip also includes an audio data buffer (required for the Amazon Echo specification) within “single digit microwatts,” Levinson said.
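For a sense of scale, the quoted 10 nJ per inference converts directly to average power once you pick an inference rate. A back-of-envelope check (quoted figure only; feature extraction and the audio buffer are excluded, so real always-on power is higher):

```python
# Back-of-envelope: average power = energy per inference x inference rate.
E_INF = 10e-9  # joules per inference (quoted figure)

for rate_hz in (10, 100, 1000):
    power_uw = E_INF * rate_hz * 1e6
    print(f"{rate_hz:>4} inferences/s -> {power_uw:.2f} uW")
# 10/s -> 0.10 uW, 100/s -> 1.00 uW, 1000/s -> 10.00 uW; adding feature
# extraction lands in the "few microwatts" always-on range quoted above.
```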

Blumind’s chip connects directly to an analog microphone for input, and sends a wake up signal to an MCU when it detects a keyword. The current generation requires weight storage in external non-volatile memory, but Blumind plans to incorporate that in future devices.
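On the MCU side, servicing that wake-up signal could look something like the MicroPython-flavoured sketch below. This is entirely hypothetical: Blumind has not published an interface, and the pin number, pull configuration and sleep mode are all my assumptions.

```python
# Hypothetical MCU-side wake handling (MicroPython-flavoured sketch;
# the GPIO number, pull-down and light sleep are assumptions, since
# Blumind has not published an MCU interface).
import machine
from machine import Pin

wake = Pin(4, Pin.IN, Pin.PULL_DOWN)  # assumed pin wired to the accelerator

def on_keyword(pin):
    # The analog NN has flagged a keyword: spin up the full voice pipeline.
    print("keyword detected -> waking main voice pipeline")

wake.irq(trigger=Pin.IRQ_RISING, handler=on_keyword)

while True:
    machine.lightsleep()  # MCU sleeps; only the analog front end stays on
```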




Tapeout for the commercial version of the RNN accelerator is underway.

Blumind is also currently bringing up, in its lab, test silicon of a convolutional neural network (CNN) accelerator designed for vision applications, which it plans to demonstrate this summer. The target is object detection, such as person detection, at up to 10 fps using 5-20 µW, depending on configuration, Levinson said.
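Running the same arithmetic on those vision figures: 5-20 µW at 10 fps implies roughly 0.5-2 µJ per frame (my calculation from the quoted numbers, not a Blumind spec):

```python
# Implied energy per frame from the quoted vision power figures.
FPS = 10
for power_uw in (5, 20):
    print(f"{power_uw} uW @ {FPS} fps -> {power_uw / FPS:.1f} uJ/frame")
# 5 uW -> 0.5 uJ/frame, 20 uW -> 2.0 uJ/frame
```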

The company is also working with an academic partner on a software-definable version of its analog architecture for future technology generations.

First samples of Blumind’s RNN accelerator are due in Q3.


Having fixed layers and synapses designed according to each customer's data means designing a new tapeout for each customer - a mere bagatelle, according to Levinson, at $1M a pop.

I wonder about accuracy. This is for low hanging fruit which is not safety-critical, so there may be a market for ultra-low power near-enuf-is-good-enuf NNs.

PS: Roger's looking pretty ripped, so don't tell him I said this.
 
Last edited:
  • Like
  • Haha
  • Wow
Reactions: 14 users

Diogenese

Top 20
Yes, I believe it's still coming soon. I saw a conference (can't find it now) where the Renesas speaker blurbed 2023, then said 2024, so anytime now, as it's 2024. Date: no idea. This was probably of no help to you, lol 🤪
Hi MD,

A perfect example of information v knowledge :)
 
  • Like
  • Haha
  • Fire
Reactions: 5 users
SW-F put me onto Roger Levinson's analog adventure:

https://www.eetimes.com/blumind-harnesses-analog-for-ultra-low-power-intelligence/
Sounds like they only have "one" customer, at the moment, from what they say?...

Might be a "good" one though..

AKIDA could do all that, but if they are custom-designing each chip for their customers' needs, that is an advantage with development time/costs etc., if my read is correct?
 
  • Like
Reactions: 4 users

wilzy123

Founding Member
AKIDA could do all that, but if they are custom-designing each chip for their customers' needs, that is an advantage with development time/costs etc., if my read is correct?

Don't ask if your read is correct.

Start by asking yourself if you even understand what it is you are saying. If you cannot answer that, maybe @Galaxycar can help.
 
  • Haha
Reactions: 1 users
Don't ask if your read is correct.

Start by asking yourself if you even understand what it is you are saying. If you cannot answer that, maybe @Galaxycar can help.
Not enough antagonists back on the forum for you yet, Wilzy?
 
  • Like
  • Haha
  • Love
Reactions: 8 users