BRN Discussion Ongoing

IloveLamp

Top 20
1000016705.jpg
 
  • Like
  • Love
  • Fire
Reactions: 36 users

Frangipani

Top 20



UPDATED 09:00 EDT / JUNE 27 2024

Innatera books $21M in funding for its ultra-low-power AI chips


BY PAUL GILLIN


Netherlands-based microprocessor maker Innatera Nanosystems B.V. said it closed an oversubscribed $21 million Series A funding round, which included a $16 million investment the company announced in March and an additional $5 million from new investors.

The company’s Spiking Neural Processor T1, unveiled in January, is an energy-efficient artificial intelligence chip for sensor-edge applications. It incorporates a proprietary event-driven computing engine, a convolutional neural network accelerator and a RISC-V central processing unit (pictured) for running ultra-low-power AI applications on battery-powered devices.

Innatera, which was spun out of the Delft University of Technology in 2018, says it’s filling a gap in the market for AI-powered devices that require human-machine interaction. Its chip is “basically a brain-inspired processor that enables turnkey intelligence in applications where power is limited,” Chief Executive Officer Sumeet Kumar told SiliconANGLE in an interview. “It essentially allows you to analyze sensor data in real time by simulating how your brain recognizes patterns of interest.”

The company says the Spiking Neural Processor enables high-performance pattern recognition of images and spoken words at the sensor edge with submilliwatt power consumption and submillisecond latency. It claims its chips consume 500 times less energy and are 100 times faster than conventional microprocessors.

Always-on operation

The analog mixed-signal neuromorphic architecture allows for the always-on operation needed in applications like security cameras and listening devices within a narrow power envelope. The processors can be used as a dedicated sensor-handling engine that allows functions such as conditioning, filtering and classification to be offloaded from a central processor or sent to the cloud.
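
To picture what offloading "conditioning, filtering and classification" from the host looks like, here is a minimal illustrative sketch in Python. The SensorEdgeProcessor class is invented for this example and is not Innatera's API; it just stands in for the always-on work the sensor-side chip would do so the main CPU only wakes on events of interest.

```python
# Illustrative only: the sensor-side processor handles the always-on work and
# the host CPU reacts only when an event is flagged. SensorEdgeProcessor is a
# made-up stand-in, not Innatera's API.
import random
from typing import Optional


class SensorEdgeProcessor:
    """Stand-in for an always-on sensor-edge chip: conditions/filters raw
    samples and reports a classification only when a pattern is detected."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold

    def process(self, sample: float) -> Optional[str]:
        # In a real system this is where conditioning, filtering and
        # classification would run, off the host CPU.
        return "pattern_of_interest" if sample > self.threshold else None


def host_loop(num_samples: int = 20) -> None:
    edge = SensorEdgeProcessor()
    for _ in range(num_samples):
        sample = random.random()       # raw sensor reading
        event = edge.process(sample)   # host does nothing unless this fires
        if event is not None:
            print(f"Host woken: {event} (sample={sample:.2f})")


if __name__ == "__main__":
    host_loop()
```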

“We tend not to focus on applications that require large format image processing,” Kumar said. “We are most useful when there is event data inside the data stream, or there is something temporal such as radar, low-resolution images, cameras and sensors.” A typical use case, he said, is a video doorbell that needs to be constantly awake but run on a rechargeable battery.

“It’s basically a neural network that understands time,” Kumar said. “By implementing this sort of computation using analog circuits and mimicking the brain’s algorithms for pattern recognition, we came up with a solution that is about 10,000 times more efficient at detecting patterns in sensor data compared to traditional microcontrollers.”

Similar to a field programmable gate array, he said, “it consists of computational elements whose connectivity and parameters can be programmed at runtime. It can flexibly implement any neural network that you can train on your desktop.”
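
To get a feel for what "a neural network that understands time" means, here is a toy leaky integrate-and-fire neuron in plain Python. It is a digital simulation for intuition only; the T1 implements this kind of dynamics in analog circuitry, and the parameter values below are made up rather than taken from Innatera.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential decays over
# time, so a spike fires only when enough input arrives close together.
# Purely illustrative; Innatera's analog implementation is not shown here.


def lif_neuron(input_spikes, leak=0.9, weight=0.4, threshold=1.0):
    """Simulate one LIF neuron over a binary input spike train."""
    potential = 0.0
    output = []
    for s in input_spikes:
        potential = potential * leak + weight * s   # leak, then integrate
        if potential >= threshold:                  # fire and reset
            output.append(1)
            potential = 0.0
        else:
            output.append(0)
    return output


# Spikes arriving close together in time fire the neuron; spread out, they don't.
print(lif_neuron([1, 1, 1, 0, 0, 0, 1, 0, 0, 1]))   # [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
```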

AI framework support

As a microcontroller, the processor has no operating system, but Innatera has a software development kit and firmware that runs applications built in PyTorch, with support for additional AI frameworks planned. “You build a new training model in a familiar framework, and then once that model is trained, you can map it onto the chip without having to understand any of what goes on inside the chip,” Kumar said.
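
As a rough sketch of the workflow Kumar describes, the snippet below trains an ordinary PyTorch model and then hands it to a deployment step. The training part is standard PyTorch; the innatera_sdk.map_to_chip call at the end is a hypothetical placeholder, since the article does not name the SDK's actual functions.

```python
# Sketch of the "train in a familiar framework, then map to the chip" flow.
# Everything above the comment marked HYPOTHETICAL is plain PyTorch.
import torch
import torch.nn as nn

# A tiny classifier for, say, a 64-sample audio/sensor window with 4 classes.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy training data standing in for labelled sensor windows.
x = torch.randn(256, 64)
y = torch.randint(0, 4, (256,))

for _ in range(10):                      # short training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# HYPOTHETICAL: the vendor SDK call that would compile the trained network
# for the chip. The real API is not described in the article.
# import innatera_sdk
# binary = innatera_sdk.map_to_chip(model)
```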

The processors, the result of six generations of silicon design, are expected to ship in limited volume by the end of the year and at full volume in 2025.
Innatera employs about 75 people today and built its first processors with less than $5 million of investment. “We’ve been tremendously capital-efficient,” Kumar said.

The company plans to use the funding to get its first product into large-scale production in 2025 and expand marketing and sales. A Series B funding round is planned within the next year.


The Series A extension was led by Innavest and Invest-NL N.V., who joined existing Series A investors, which included the European Commission’s EIC Fund, MIG Capital LLC, Matterwave Ventures Management GmbH and Delft Enterprises B.V.

Image: Innatera

 
  • Thinking
  • Like
  • Wow
Reactions: 12 users

Labsy

Regular
Good morning everyone.... It's going to be a cracker next couple of weeks .... Buckle up.
Just a blatant up-ramp... Or is it? ;)
Time will tell of course.
 
  • Like
  • Haha
  • Fire
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!



Innatera's T1 has no self-learning capabilities probably because it doesn't have enough neurons and synapses.





Screenshot 2024-06-28 at 9.16.01 am.png





Screenshot 2024-06-28 at 9.14.57 am.png
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 21 users

TheDrooben

Pretty Pretty Pretty Pretty Good

"Qualcomm's end goal for its AI technology is what it calls "embodied AI," which involves the complete integration of machine learning, multimodal AI, and LLM technology into a hybrid, always-on AI that is infused into every aspect of a device's capabilities."


We are absolutely in the right space at the right time

Happy as Larry
 
  • Like
  • Love
Reactions: 22 users

FiveBucks

Regular
BRCHF up 10% overnight :unsure:
 
  • Like
  • Thinking
  • Love
Reactions: 15 users

DK6161

Regular
  • Haha
Reactions: 1 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I just noticed that the Arm blog "Multimodal Marvels: How Advanced AI is Revolutionizing Autonomous Robots", dated 12 June 2024, has a diagram including an NPU, which I've circled below.

If you read the blog, it also refers to Boston Dynamics' Spot, the robot dog, which coincidentally we recently discussed here on TSEx by virtue of it having been featured in a Fraunhofer Institute video running on the AKIDA Dev Kit.


EXTRACT ONLY
Screenshot 2024-06-28 at 10.29.54 am.png
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 49 users

Slade

Top 20
Still here, and ready to buy back in when it gets lower.
So you guys are going to continue the down ramping hoping to lower the share price so you can get back in?
 
  • Like
  • Haha
Reactions: 8 users

DK6161

Regular
So you guys are going to continue the down ramping hoping to lower the share price so you can get back in?
Not down ramping at all.
I was annoyed because of empty promises made and lack of real commercial updates from the company.
Sold some weeks ago which has saved me a further 30-40% decline.
"Watching the financials" and ready to buy when things actually look positive on paper, but if there is SFA happening for the rest of the year, then I think this will go down to 10-15 cents. In that case I might buy back in depending on the "financials" that we were told to keep an eye on.

As always, Not advice. DYOR and of course "watch the financials".

So yeah, "watch the financials"
 
  • Like
  • Thinking
  • Haha
Reactions: 8 users
So you guys are going to continue the down ramping hoping to lower the share price so you can get back in?
Announcements usually help a share price.
 

Guzzi62

Regular
Not down ramping at all.
I was annoyed because of empty promises made and lack of real commercial updates from the company.
Sold some weeks ago which has saved me a further 30-40% decline.
"Watching the financials" and ready to buy when things actually look positive on paper, but if there is SFA happening for the rest of the year, then I think this will go down to 10-15 cents. In that case I might buy back in depending on the "financials" that we were told to keep an eye on.

As always, Not advice. DYOR and of course "watch the financials".

So yeah, "watch the financials"
I am also getting very frustrated, I must admit.
They have to deliver this year or they will have lost all trustworthiness, in my opinion.
 
  • Like
  • Haha
  • Fire
Reactions: 12 users

Slade

Top 20
Dog stock. Time to trade these puppies. I’m waiting for mum to get home with more weed so I can pull another bong.
 
  • Haha
Reactions: 13 users
Dog stock. Time to trade these puppies. I’m waiting for mum to get home with more weed so I can pull another bong.
That should help you with your financial decision making 😂
 
  • Haha
  • Like
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Isn't this similar to what we're trying to achieve to slash power consumption @Diogenese?

SNNs eliminating the need for matrix multiplication without affecting performance?



Researchers claim new approach to LLMs could slash AI power consumption

Work was conducted as concerns about power demands of AI have risen

Graeme Burton
27 June 2024 • 2 min read

ChatGPT uses 10 times the electricity of a Google search – but cutting out ‘matrix multiplication’ could cut this number without affecting performance

New research suggests that eliminating the ‘matrix multiplication' stage of large-language models (LLMs) used in AI could slash power consumption without affecting performance.
The research was published at the same time that increasing concerns are being raised over the power demands that AI will make over the course of the decade.
Matrix multiplication – abbreviated to MatMul in the research paper – performs large numbers of multiplication operations in parallel, becoming the dominant operation in the neural networks that drive AI primarily due to GPUs being optimised for such operations.
But by leveraging Nvidia CUDA [Compute Unified Device Architecture] technology instead, along with optimised linear algebra libraries, this process can be efficiently parallelised and accelerated – cutting power consumption without penalising performance.
In the process, the researchers claim to have "demonstrated the feasibility and effectiveness of the first scalable MatMul-free language model", in a paper entitled Scalable MatMul-free Language Modelling.
The researchers created a custom 2.7 billion parameter model without using matrix multiplication, and found that performance was on a par with state-of-the-art deep learning models (called ‘transformers’, a fundamental element of natural language processing). The performance gap between their model and conventional approaches narrows as the MatMul-free model scales, they claim.
They add: "We also provide a GPU-efficient implementation of this model, which reduces memory usage by up to 61% over an unoptimised baseline during training. By utilising an optimised kernel during inference, our model's memory consumption can be reduced by more than 10 times compared to unoptimised models."
Less hardware-heavy AI models could also enable more pervasive AI, freeing the technology from its dependence on the data centre and the cloud. In addition, both OpenAI and Meta are set to unveil new models that, they claim, will be capable of reasoning and planning.
However, they cautioned, "one limitation of our work is that the MatMul-free language model has not been tested on extremely large-scale models (for example, 100+ billion parameters) due to computational constraints". They called for well-resourced institutions and organisations to build LLMs utilising lightweight models, "prioritising the development and deployment of matrix multiplication-free architectures".
Moreover, the researchers' work has not yet been subject to peer review. Indeed, power consumption on its own matters less than energy usage per unit of ‘output' – information that is not provided in the research paper.
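
For intuition on how a network can avoid matrix multiplication, here is a toy sketch of the ternary-weight idea used in this line of work: the paper and related "BitNet-style" models constrain weights to -1, 0 or +1, so each dot product collapses into additions and subtractions. This is a simplification for illustration, not the paper's exact architecture.

```python
# Ternary-weight "dot product" with no multiplications: weights in {-1, 0, +1}
# turn y = W @ x into selective additions and subtractions of x's entries.
# Illustrative sketch only; the full MatMul-free model also reworks the
# attention/token-mixing layers, which are not reproduced here.


def matmul_free_layer(x, W_ternary):
    """x: list of floats; W_ternary: list of rows with entries -1, 0 or +1."""
    out = []
    for row in W_ternary:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:        # add instead of multiply
                acc += xi
            elif w == -1:     # subtract instead of multiply
                acc -= xi
            # w == 0 contributes nothing, so that work (and energy) is skipped
        out.append(acc)
    return out


x = [0.5, -1.0, 2.0]
W = [[1, 0, -1],
     [-1, 1, 0]]
print(matmul_free_layer(x, W))   # [-1.5, -1.5]
```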

 
  • Like
  • Thinking
  • Fire
Reactions: 11 users

MrNick

Regular
Isn't this similar to what we're trying to achieve to slash power consumption @Diogenese?

SNNs eliminating the need for matrix multiplication without affecting performance?




Correct.
 
  • Like
Reactions: 3 users
Fantastic PR and advice on how to increase the Brainchip share price :)
Someone can send this to Sean & the team :)

Have a great weekend everyone!


1719548841569.png
 
  • Haha
  • Like
Reactions: 16 users
Top Bottom