BRN Discussion Ongoing

Tothemoon24

Top 20
This is a beautiful part of the Gold Coast. I live just 10 km up the road and had many good waves there and at Greenmount
when I was young; they were great days.
Oh, to be young again. :cry:
Hi sb, so true, it’s a cracking spot.
Wouldn’t think there’d be too many better beaches in Australia.
Going to check out the Greenmount surf 🏄‍♀️ club tomorrow

I’m hoping BRN can pay for my early retirement shack at Rainbow
 
  • Like
  • Fire
  • Love
Reactions: 13 users
So, as it’s Friday tomorrow and there have been a few flat days of trading recently, I’m guessing we could be green tomorrow. Edit for @Space Cadet
Well, actually, it will definitely be Thursday tomorrow, as that usually follows Wednesday where I’m from, but sometimes it’s best just to miss a day and move forward and up!
lol
 
  • Haha
Reactions: 7 users
Well, actually, it will definitely be Thursday tomorrow, as that usually follows Wednesday where I’m from, but sometimes it’s best just to miss a day and move forward and up!
lol
I’m going to bed, I’m tired.

 
  • Haha
Reactions: 2 users

Teach22

Regular
If anyone is expecting to see anything at IFS that we don’t already know, then I’m guessing you should be prepared to be disappointed; that’s my gut feeling. But just being invited to the event is proof of how things are heading for the company.

Has any webinar, podcast, presentation, quarterly, half-yearly, yearly, AGM, etc. in the history of any company on the ASX ever revealed anything groundbreaking, ever?
Not for any company I’ve ever had a stake in.
 
  • Like
  • Thinking
Reactions: 4 users
I’m in the UK, we are 2 days in front.



And yes I’ve completely lost track of time and where I live.
Mmm, that’s got me thinking.
How can you be two days in front?
And what are you in front of?
Maybe you’re in the twilight zone!

I had heard Queensland is like 5 years behind, but that was some time ago; maybe they caught up and overtook the rest of the world lol.
 
  • Haha
Reactions: 7 users
Hi sb, so true, it’s a cracking spot.
Wouldn’t think there’d be too many better beaches in Australia.
Going to check out the Greenmount surf 🏄‍♀️ club tomorrow

I’m hoping BRN can pay for my early retirement shack at Rainbow
I’m currently working in Bilinga and Kirra and yes, I’m surprised how nice it is, especially if you take a drive down the road next to the coast around 7am to check out all the surf. No, sorry, I mean the hot totty either running or going for a surf.

 
  • Haha
  • Love
  • Fire
Reactions: 8 users
Mmm, that’s got me thinking.
How can you be two days in front?
And what are you in front of?
Maybe you’re in the twilight zone!

I had heard Queensland is like 5 years behind, but that was some time ago; maybe they caught up and overtook the rest of the world lol.
 
  • Like
  • Haha
Reactions: 6 users
Yes, I had my eye on it when it was 32 cents; it was buy or keep buying BrainChip.
I don't have to tell you which way I went.
Same here. 😁 At $1.87, $1.63, $1.13, 19.5c, 23c, 36.5c. That part of the game is over. At peace now. Happy to watch the games being played from a distance. Good luck to us all. We deserve it.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 22 users

Galaxycar

Regular
Has anyone considered that the buying is quite possibly someone buying to control the vote at the next AGM? Hmmmmmm, just a thought.
 
  • Like
  • Wow
  • Thinking
Reactions: 6 users

wilzy123

Founding Member
Has anyone considered that the buying is quite possibly someone buying to control the vote at the next AGM? Hmmmmmm, just a thought.

Welcome back! 🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡

 
  • Haha
Reactions: 17 users
Has anyone considered that the buying is quite possibly someone buying to control the vote at the next AGM? Hmmmmmm, just a thought.
Unless there are some better financials $$$ or another ASX price-sensitive announcement, then it could be a possible chance after the last AGM. But it would be a big gamble if you ask me, and I forgot you're a
 
  • Haha
  • Fire
  • Like
Reactions: 15 users

Diogenese

Top 20
SW-F put me onto Roger Levinson's analog adventure:

https://www.eetimes.com/blumind-harnesses-analog-for-ultra-low-power-intelligence/

Canadian startup Blumind recently developed an analog computing architecture for ultra-low power AI acceleration of sensor data, Blumind CEO Roger Levinson told EE Times. The company hopes to enable widespread intelligence in Internet of Things (IoT) devices.
Advanced process nodes aren’t cost effective for tiny chips used in tens or hundreds of millions of units in the IoT. Combine this with the fragmentation of the IoT market, the need for application-specific silicon, and the requirement for zero additional power consumption, and it’s easy to see why the IoT has been slow to adopt AI, Levinson said.

“The request from our customer was: I need to lower my existing power, add no cost, and add intelligence to my system,” he said. “That isn’t possible, but how close to zero [power consumption] can you get? We’re adding a piece to the system, so we have to add zero to the system power, and cost has to be negligible, and then we have a shot. Otherwise, they’re not going to add intelligence to the devices, they’re going to wait, and that’s what’s happening. People are waiting.”

This is the problem Blumind is taking on. Initial development work on efficient machine learning (ML) at ultra-low power by Blumind cofounder and CTO John Gosson forms the basis for the startup’s architecture today.


“John said, ‘What you need to do is move charge around as the information carrier, and not let it go between power supplies’,” Levinson said. “Energy [consumption] happens when charge moves from the power supply to ground, and heat is generated. So he built an architecture [around that idea] which is elegant in its simplicity and robustness.”

Like some of its competitors in the ultra-low power space, Blumind is focusing on analog computing.

“We’ve solved the system-level always-on problem by making it all analog,” he said. “We look like a compute in memory architecture because we use a single transistor to store coefficients for the network, and that device also does the multiplication.”

The transistor’s output is the product of the input and the stored weight; the signal integrates for a certain amount of time, which generates a charge proportional to that product. This charge is then accumulated on a capacitor. A proprietary scheme measures the resulting charge and generates an output proportional to it which represents the activation.
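The charge-domain multiply-accumulate described above can be sketched numerically. This is an illustrative toy model of the idea, not Blumind's actual circuit; the function name and values are invented for the example:

```python
# Toy numerical model of a charge-domain multiply-accumulate (MAC).
# Each "transistor" sources a current proportional to input * stored weight;
# integrating that current for a fixed time deposits a charge Q = i * w * t
# on a shared capacitor, so the accumulated charge is the dot product,
# i.e. one neuron's pre-activation.

def charge_mac(inputs, weights, t_integrate=1.0):
    """Sum the charge each cell deposits on the capacitor."""
    total_charge = 0.0
    for x, w in zip(inputs, weights):
        current = x * w                         # transistor output ~ input x weight
        total_charge += current * t_integrate   # charge accumulated on the cap
    return total_charge

# A 3-input neuron: dot([0.2, 0.5, 0.1], [1.0, -0.4, 2.0])
print(charge_mac([0.2, 0.5, 0.1], [1.0, -0.4, 2.0]))
```

Because the article says all readout is ratiometric and time-based, the absolute scale of the currents would cancel out in a real implementation; this sketch only shows the multiply-and-integrate structure.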

“Everything is time based, so we are not looking at absolute voltages or currents,” he said. “All our calculations are ratiometric, which makes us insensitive to process, voltage and temperature. To maintain analog dynamic range, we do have to compensate for temperature, so even though the ratios remain stable, the magnitudes of signals can change.”

Levinson said Blumind has chosen to focus on “use cases that are relevant to the world today”—keyword spotting and vision—partly in an effort to prove to the market analog implementations of neural networks are viable in selected use cases.


Blumind test silicon. (Source: Blumind)
“One of the biggest challenges has been: does it have to be software configurable, or not?” he said. “Our first architecture is not configurable in terms of the network—we build a model in silicon, which happens to be robust for the class of applications we’re going after, and is orders of magnitude more power and area efficient.”

Model weights are adjustable, but everything else is fixed. However, this is enough flexibility to cater for a class of problems, Levinson said.

“The layers are fixed, the neurons and synapses are fixed,” he said. “We’re starting with audio because our [customer] wants an always-on voice system. However, our silicon is capable of doing anything that can utilize a recurrent neural network.”

Blumind’s software stack supports customer training of the recurrent neural network (RNN) its silicon is designed for with customers’ own data.

This strategy helps minimize power consumption, but it means a separate tapeout for every new class of application Blumind wants to target. Levinson said that at legacy 22-nm nodes, the cost of an analog/mixed-signal tapeout is a little over $1 million, and requires a team of just five to eight people.

In tinyML today, the performance difference from changing models is minor, he argues.

“There is a hard limit at the edge, especially in sensors,” he said. “I have X amount of memory and X amount of compute power, and a battery. The data scientist has to fit the model within these constraints.”
Blumind has test chips for its first product, the RNN accelerator designed for keyword spotting, voice activity detection and similar time series data applications. This silicon achieves 10 nJ per inference; combined with feature extraction, it consumes a few microwatts during always-on operation. The chip also includes an audio data buffer (required for the Amazon Echo specification) within “single digit microwatts,” Levinson said.
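As a sanity check on those figures, per-inference energy translates to always-on power as energy times inference rate. The inference rate below is an assumed example value, not a number from the article:

```python
# Back-of-envelope always-on power from the per-inference figure quoted above.
ENERGY_PER_INFERENCE_J = 10e-9   # 10 nJ per inference (from the article)
INFERENCES_PER_SECOND = 100      # assumed: 100 audio-frame classifications per second

power_w = ENERGY_PER_INFERENCE_J * INFERENCES_PER_SECOND
print(f"{power_w * 1e6:.1f} uW")  # core accelerator power, excluding feature extraction
```

At 100 inferences per second this works out to 1 µW for the accelerator core alone, which is consistent with the "few microwatts" total the article quotes once feature extraction and the audio buffer are included.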

Blumind’s chip connects directly to an analog microphone for input, and sends a wake up signal to an MCU when it detects a keyword. The current generation requires weight storage in external non-volatile memory, but Blumind plans to incorporate that in future devices.




Tapeout for the commercial version of the RNN accelerator is underway.

Blumind is also currently bringing up test silicon of a convolutional neural network (CNN) accelerator designed for vision applications in its lab, which it plans to demonstrate this summer. The target is object detection, such as person detection, at up to 10 fps using 5-20 µW, depending on configuration, Levinson said.

The company’s also working with an academic partner on a software-definable version of its analog architecture for future technology generations.

First samples of Blumind’s RNN accelerator are due in Q3.


Having fixed layers and synapses designed according to each customer's data means a new tapeout for each customer - a mere bagatelle, according to Levinson, at $1M a pop.

I wonder about accuracy. This is low-hanging fruit which is not safety-critical, so there may be a market for ultra-low power near-enuf-is-good-enuf NNs.

PS: Roger's looking pretty ripped, so don't tell him I said this.
 
Last edited:
  • Like
  • Haha
  • Wow
Reactions: 14 users

Diogenese

Top 20
Yes, I believe it's still coming soon. I saw a conference, can't find it now, but the Renesas speaker blurbed 2023, then said 2024, so anytime now as it's 2024. Date, no idea. This was probably of no help to you lol 🤪
Hi MD,

A perfect example of information v knowledge :)
 
  • Like
  • Haha
  • Fire
Reactions: 5 users
SW-F put me onto Roger Levinson's analog adventure:

https://www.eetimes.com/blumind-harnesses-analog-for-ultra-low-power-intelligence/

Sounds like they only have "one" customer at the moment, from what they say?...

Might be a "good" one though..

AKIDA could do all that, but if they are custom designing each chip for their customer/s needs, that is an advantage with development time/costs etc., if my read is correct?
 
  • Like
Reactions: 4 users

wilzy123

Founding Member
AKIDA could do all that, but if they are custom designing each chip for their customer/s needs, that is an advantage with development time/costs etc., if my read is correct?

Don't ask if your read is correct.

Start with asking yourself if you even understand what it is you are saying. If you cannot answer that, maybe @Galaxycar can help.
 
  • Haha
Reactions: 1 users
Don't ask if your read is correct.

Start with asking yourself if you even understand what it is you are saying. If you cannot answer that, maybe @Galaxycar can help.
Not enough antagonists back on the forum for you yet, Wilzy?
 
  • Like
  • Haha
  • Love
Reactions: 8 users

Galaxycar

Regular
Just sayin, the against vote was 160 million last time. This time it may be 200-plus million they own. Think it may be on the cards that some consortium may want a seat on the board, and that may be what all this price rise is about. I admit BrainChip has made leaps and bounds behind the scenes lately, and at some stage that will turn to fruit. But you have to look at all possibilities for the current buying.
 
  • Like
  • Thinking
Reactions: 5 users

Earlyrelease

Regular
Edge boxes.
My following post will demonstrate my lack of memory and retrieval skills on this site, but here goes anyway.

My understanding was that when Akida 1000 was made, we paid TSMC for the wafers to be made, and then various samples were used to validate the run, and various demo kits were provided to unis etc. This was part of the requirement to prove the idea worked in silicon, which it surpassed most expectations on. There was obviously a quantity of chips left over (unknown number of wafers made), so the exact number of chips remaining after that first run remains a mystery. We also sold a very limited number of devices on our web page, with even some TSE members posting their purchase photos.

So my belief is that the initial lull in take-up of our IP-only model approach resulted in a desire to show the product in a more user-friendly environment; the edge box opportunity presented itself and allowed the company to put the remaining 1000-model chips to use.

Now, I have not seen anything in the financial records or company releases showing that we have gone back to TSMC for another run of the 1000-model chips.

Thus my expectations of edge box sales are not huge.

What is huge is the addition of a tool, a cheap one too, that will give customers who are interested but maybe on the fence, or cautious with funds, a way to try the tech and play before they commit.

OK, those with better research skills and memory, shoot me down in flames. (Be kind ❤️)
 
  • Like
  • Fire
  • Love
Reactions: 18 users

skutza

Regular
Just think though. In all of this, BRN is like a tightly wound spring. Here is the list of investors/watchers IMO.
  1. Some totally understand and are waiting for the confirmation.
  2. Some mostly understand the company and what it can achieve, but are still a little cautious.
  3. Some get that the company has something, they don't exactly know what, but with all the other companies talking and partnering with BRN - well....
  4. Some don't get the whole thing but invest for the hype.
  5. Some don't get it and call it a meme stock, but keep a close eye, just in case.
The one thing this group of people all have in common is that if/when Intel, Nvidia, Mercedes or another really big name takes a bite and the link is confirmed, that spring will bust, the shorters will run for the hills, the investors will want more, and the ones sitting on the sidelines will smash it hard.

$2.50 in the first 2 days will be blown away and we'll still be trying to close our mouths at $3. Step by step.
 
  • Like
  • Fire
  • Wow
Reactions: 26 users