BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
I just noticed that the Arm blog "Multimodal Marvels: How Advanced AI is Revolutionizing Autonomous Robots", dated 12 June 2024, has a diagram including an NPU, which I've circled below.

If you read the blog, it also refers to Boston Dynamics' Spot, the dog, which coincidentally we recently discussed here on TSEx, by virtue of it having been featured in a video produced by the Fraunhofer Institute running on the AKIDA Dev Kit.


[Extract: screenshot of the Arm blog diagram, NPU circled]
 
  • Like
  • Fire
  • Love
Reactions: 49 users

Slade

Top 20
Still here, and ready to buy back in when it gets lower.
So you guys are going to continue the down ramping hoping to lower the share price so you can get back in?
 
  • Like
  • Haha
Reactions: 8 users

DK6161

Regular
So you guys are going to continue the down ramping hoping to lower the share price so you can get back in?
Not down ramping at all.
I was annoyed because of empty promises made and lack of real commercial updates from the company.
Sold some weeks ago, which has saved me from a further 30-40% decline.
"Watching the financials" and ready to buy when things actually look positive on paper, but if there is SFA happening for the rest of the year, then I think this will go down to 10-15 cents. In that case I might buy back in depending on the "financials" that we were told to keep an eye on.

As always, not advice. DYOR and of course "watch the financials".

So yeah, "watch the financials"
 
  • Like
  • Thinking
  • Haha
Reactions: 8 users

Guzzi62

Regular
Not down ramping at all.
I was annoyed because of empty promises made and lack of real commercial updates from the company.
Sold some weeks ago, which has saved me from a further 30-40% decline.
"Watching the financials" and ready to buy when things actually look positive on paper, but if there is SFA happening for the rest of the year, then I think this will go down to 10-15 cents. In that case I might buy back in depending on the "financials" that we were told to keep an eye on.

As always, not advice. DYOR and of course "watch the financials".

So yeah, "watch the financials"
I am also getting very frustrated I must admit.
They have to deliver this year or they will have lost all trustworthiness, in my opinion.
 
  • Like
  • Haha
  • Fire
Reactions: 12 users

Slade

Top 20
Dog stock. Time to trade these puppies. I’m waiting for mum to get home with more weed so I can pull another bong.
 
  • Haha
Reactions: 13 users
Dog stock. Time to trade these puppies. I’m waiting for mum to get home with more weed so I can pull another bong.
That should help you with your financial decision making 😂
 
  • Haha
  • Like
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Isn't this similar to what we're trying to achieve to slash power consumption @Diogenese?

SNNs eliminating the need for matrix multiplication without affecting performance?



Researchers claim new approach to LLMs could slash AI power consumption​

Work was conducted as concerns about power demands of AI have risen​

Graeme Burton
27 June 2024 • 2 min read

ChatGPT uses 10 times the electricity of a Google search – but cutting out ‘matrix multiplication’ could cut this number without affecting performance​

New research suggests that eliminating the 'matrix multiplication' stage of large language models (LLMs) used in AI could slash power consumption without affecting performance.
The research comes as increasing concerns are raised over the power demands that AI will make over the course of the decade.
Matrix multiplication – abbreviated to MatMul in the research paper – performs large numbers of multiplication operations in parallel, becoming the dominant operation in the neural networks that drive AI primarily due to GPUs being optimised for such operations.
By leveraging Nvidia CUDA [Compute Unified Device Architecture] technology, along with optimised linear algebra libraries, matrix multiplication can be efficiently parallelised and accelerated on GPUs – which is exactly the dependence the new approach aims to remove, cutting power consumption without penalising performance.
In the process, the researchers claim to have "demonstrated the feasibility and effectiveness of the first scalable MatMul-free language model", in a paper entitled Scalable MatMul-free Language Modelling.
The researchers created a custom 2.7 billion parameter model without using matrix multiplication, and found that performance was on a par with state-of-the-art deep learning models (called 'transformers', a fundamental element of natural language processing). The performance gap between their model and conventional approaches narrows as the MatMul-free model scales, they claim.
They add: "We also provide a GPU-efficient implementation of this model, which reduces memory usage by up to 61% over an unoptimised baseline during training. By utilising an optimised kernel during inference, our model's memory consumption can be reduced by more than 10 times compared to unoptimised models."
Less hardware-heavy AI models could also enable more pervasive AI, freeing the technology from its dependence on the data centre and the cloud. In addition, both OpenAI and Meta are set to unveil new models that, they claim, will be capable of reasoning and planning.
However, they cautioned, "one limitation of our work is that the MatMul-free language model has not been tested on extremely large-scale models (for example, 100+ billion parameters) due to computational constraints". They called for well-resourced institutions and organisations to build LLMs utilising lightweight models, "prioritising the development and deployment of matrix multiplication-free architectures".
Moreover, the researchers' work has not yet been subject to peer review. Indeed, power consumption on its own matters less than energy usage per unit of 'output' – information that is not provided in the research paper.
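
The article doesn't spell out how the multiplication actually disappears. In the underlying paper, dense weight matrices are constrained to the ternary values {-1, 0, +1}, so every "multiply" in a matrix product collapses into an add, a subtract, or a skip. A minimal NumPy sketch of that idea (my own illustration – the function names, quantisation threshold and shapes are assumptions, not code from the paper):

```python
import numpy as np

def ternary_quantise(w: np.ndarray) -> np.ndarray:
    """Round a float weight matrix to {-1, 0, +1}: keep the sign, zero small values."""
    threshold = 0.7 * np.abs(w).mean()  # illustrative heuristic, not the paper's exact rule
    return np.where(np.abs(w) < threshold, 0, np.sign(w)).astype(np.int8)

def matmul_free_linear(x: np.ndarray, w_ternary: np.ndarray) -> np.ndarray:
    """y = x @ W with no multiplications: add inputs where w=+1, subtract where w=-1."""
    out = np.zeros((x.shape[0], w_ternary.shape[1]), dtype=x.dtype)
    for j in range(w_ternary.shape[1]):
        col = w_ternary[:, j]
        out[:, j] = x[:, col == 1].sum(axis=1) - x[:, col == -1].sum(axis=1)
    return out

# Sanity check against the ordinary dense product
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16)).astype(np.float32)
w = ternary_quantise(rng.standard_normal((16, 8)))
assert np.allclose(matmul_free_linear(x, w), x @ w.astype(np.float32), atol=1e-5)
```

Real implementations fuse this into custom GPU kernels rather than Python loops, but the arithmetic saving is the same: accumulations instead of multiply-accumulates.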

 
  • Like
  • Thinking
  • Fire
Reactions: 11 users

MrNick

Regular
Isn't this similar to what we're trying to achieve to slash power consumption @Diogenese?

SNNs eliminating the need for matrix multiplication without affecting performance?



[quoted article snipped – see Bravo's post above]

Correct.
 
  • Like
Reactions: 3 users
Fantastic PR and advice on how to increase the Brainchip share price :)
Someone can send this to Sean & the team :)

Have a great weekend everyone!


 
  • Haha
  • Like
Reactions: 16 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 3 users

Diogenese

Top 20
Isn't this similar to what we're trying to achieve to slash power consumption @Diogenese?

SNNs eliminating the need for matrix multiplication without affecting performance?



[quoted article snipped – see Bravo's post above]

Hi Bravo,

This reads like a von Neumann computer software implementation, not an SNN. They run it on a GPU.

The software makes it slow and power hungry compared to Akida.

It may be ok for cloud processing, but may not be suitable for real-time applications.
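
For what it's worth, the distinction Diogenese draws – clocked GPU software versus event-driven silicon – can be sketched in a few lines. In a frame-based pipeline every input is processed every cycle; in an event-driven SNN, work is only triggered by the neurons that actually spike, and because spikes are binary the multiplication by an input value disappears entirely. A toy Python comparison (purely illustrative – the sizes and spike rate are assumptions, not Akida's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 1024, 256
weights = rng.standard_normal((n_in, n_out)).astype(np.float32)

# Frame-based (GPU-style): every input contributes every cycle, active or not.
dense_input = rng.standard_normal(n_in).astype(np.float32)
dense_out = dense_input @ weights                  # n_in * n_out multiply-accumulates

# Event-driven (SNN-style): only the neurons that spiked trigger any work,
# and binary spikes turn multiply-accumulates into plain accumulates.
spikes = np.flatnonzero(rng.random(n_in) < 0.05)   # ~5% of inputs fire this timestep
event_out = weights[spikes].sum(axis=0)            # len(spikes) * n_out additions

print(f"dense MACs:        {n_in * n_out}")
print(f"event accumulates: {len(spikes) * n_out} "
      f"({100 * len(spikes) / n_in:.1f}% of inputs active)")
```

The point of the comparison: on event-driven hardware, compute (and hence power) scales with activity rather than with layer size, which is hard to replicate in a clocked software loop on a GPU.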
 
  • Like
  • Fire
  • Thinking
Reactions: 15 users

CHIPS

Regular
  • Like
  • Love
Reactions: 5 users


Pedro Machado, a Senior Lecturer in Computer Science at Nottingham Trent University, must have been a little giddy and trembling with excitement while typing this LinkedIn post about his uni joining "the race of Neuromorphic Computing by being [one] of the pioneers accelerating SNNs on the Alkida's Brainchip" [sic]. 🤣 You gotta love his enthusiasm, though!

[Screenshots of the LinkedIn post]



Akida will be useful for their ongoing project on “Position Aware Activity Recognition”:

[Screenshot of the project description]
Looks like Pedro is enjoying his new toy so far :)




 
  • Like
  • Fire
  • Love
Reactions: 22 users

DK6161

Regular
I think Kuchenbuch urgently needs to work on his presentation skills. There are courses for that.

Already his first slide and confusing presentation made me shut it off.
Agree. 25 years of experience as a leading salesperson. I think shareholders can demand a better presentation than that.
Bring Rob Telson back!
 
  • Like
  • Love
Reactions: 4 users

Slade

Top 20
Agree. 25 years of experience as a leading salesperson. I think shareholders can demand a better presentation than that.
Bring Rob Telson back!
We want the share price to go lower so we can buy back in. Hehehe 😜
 
  • Haha
Reactions: 2 users


CHIPS

Regular
We want the share price to go lower so we can buy back in. Hehehe 😜

You had so many chances to buy low and did not use them? And now you want the SP to fall again after it has recovered a bit?
How stupid can one be?
 
  • Haha
  • Fire
  • Love
Reactions: 5 users

miaeffect

Oat latte lover
You had so many chances to buy low and did not use them? And now you want the SP to fall again after it has recovered a bit?
How stupid can one be?
sarcasm...
 
  • Haha
  • Fire
Reactions: 3 users