Pedro Machado on LinkedIn: #brainchip #ntu #neuromorphic #dcs #circ
I am truly impressed, it was so easy to integrate the RPI5 with the PCIe Slot for Raspberry Pi 5 (P02) and the Akida Brainchip. I just had to follow the…
Still here, and ready to buy back in when it gets lower.
Innatera books $21M in funding for its ultra-low-power AI chips - SiliconANGLE (siliconangle.com)
UPDATED 09:00 EDT / JUNE 27 2024
AI
Innatera books $21M in funding for its ultra-low-power AI chips
BY PAUL GILLIN
Netherlands-based microprocessor maker Innatera Nanosystems B.V. said it closed an oversubscribed $21 million Series A funding round, which included a $16 million investment the company announced in March and an additional $5 million from new investors.
The company’s Spiking Neural Processor T1, unveiled in January, is an energy-efficient artificial intelligence chip for sensor-edge applications. It incorporates a proprietary event-driven computing engine, a convolutional neural network accelerator and a RISC-V central processing unit (pictured) for running ultra-low-power AI applications on battery-powered devices.
Innatera, which was spun out of the Delft University of Technology in 2018, says it’s filling a gap in the market for AI-powered devices that require human-machine interaction. Its chip is “basically a brain-inspired processor that enables turnkey intelligence in applications where power is limited,” Chief Executive Officer Sumeet Kumar told SiliconANGLE in an interview. “It essentially allows you to analyze sensor data in real time by simulating how your brain recognizes patterns of interest.”
The company says the Spiking Neural Processor enables high-performance pattern recognition of images and spoken words at the sensor edge with submilliwatt power consumption and submillisecond latency. It claims its chips consume 500 times less energy and are 100 times faster than conventional microprocessors.
Always-on operation
The analog mixed-signal neuromorphic architecture allows for the always-on operation needed in applications like security cameras and listening devices within a narrow power envelope. The processors can be used as a dedicated sensor-handling engine that allows functions such as conditioning, filtering and classification to be offloaded from a central processor or sent to the cloud.
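As a toy illustration of that offload pattern (not Innatera's actual pipeline), a tiny always-on stage might smooth the raw sensor stream and wake the host only when something stands out. Everything here, including the exponential filter and the threshold, is an illustrative assumption:

```python
import numpy as np

def always_on_stage(sample, state, alpha=0.1, threshold=3.0):
    """Condition/filter the raw stream with an exponential moving average and
    'classify' by flagging samples that sit far from the running estimate."""
    state = (1 - alpha) * state + alpha * sample   # conditioning / filtering
    wake_host = abs(sample - state) > threshold    # crude classification
    return state, wake_host

state = 0.0
stream = np.concatenate([np.random.randn(50), [12.0], np.random.randn(50)])
for t, s in enumerate(stream):
    state, wake = always_on_stage(s, state)
    if wake:
        print(f"t={t}: event detected -> wake host CPU / send to cloud")
```

The point of the pattern is that the cheap always-on stage runs continuously while the power-hungry host sleeps until something is actually worth its attention.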
“We tend not to focus on applications that require large format image processing,” Kumar said. “We are most useful when there is event data inside the data stream, or there is something temporal such as radar, low-resolution images, cameras and sensors.” A typical use case, he said, is a video doorbell that needs to be constantly awake but run on a rechargeable battery.
“It’s basically a neural network that understands time,” Kumar said. “By implementing this sort of computation using analog circuits and mimicking the brain’s algorithms for pattern recognition, we came up with a solution that is about 10,000 times more efficient at detecting patterns and sensor data compared to traditional microcontrollers.”
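For readers unfamiliar with spiking networks, the textbook building block such processors are modelled on, a leaky integrate-and-fire (LIF) neuron, can be simulated in a few lines. This is a generic sketch of the dynamics, not Innatera's analog circuit:

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward rest,
    integrates input current, and emits a spike when it crosses threshold."""
    v = v_reset
    spikes = np.zeros(len(input_current), dtype=int)
    for t, i_t in enumerate(input_current):
        v += (dt / tau) * (i_t - v)   # leaky integration of the input
        if v >= v_thresh:             # threshold crossing -> spike
            spikes[t] = 1
            v = v_reset               # reset the membrane after firing
    return spikes

# A constant drive yields a regular spike train: information lives in the
# timing and rate of spikes rather than in dense multiply-accumulate values.
print(lif_neuron(np.full(200, 2.0)).sum(), "spikes in 200 steps")
```

This is what "a neural network that understands time" means in practice: the neuron's output depends on when inputs arrive, not just on their magnitude.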
Similar to a field programmable gate array, he said, “it consists of computational elements whose connectivity and parameters can be programmed at runtime. It can flexibly implement any neural network that you can train on your desktop.”
AI framework support
As a microcontroller, the processor has no operating system, but Innatera has a software development kit and firmware that runs applications built in PyTorch, with support for additional AI frameworks planned. “You build a new training model in a familiar framework, and then once that model is trained, you can map it onto the chip without having to understand any of what goes on inside the chip,” Kumar said.
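The workflow Kumar describes would look roughly like the sketch below: a standard PyTorch training loop followed by a vendor mapping step. The PyTorch portion is ordinary runnable code; the `innatera_sdk.map_to_chip` call is a hypothetical placeholder, since the article does not name the SDK's actual API:

```python
import torch
import torch.nn as nn

# 1. Build and train a small model in PyTorch, exactly as for any other target.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(256, 16), torch.randint(0, 4, (256,))
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

# 2. Hand the trained model to the vendor toolchain. The calls below are
#    HYPOTHETICAL placeholders -- the article does not document the real API.
# compiled = innatera_sdk.map_to_chip(model)   # hypothetical name
# compiled.flash(device)                       # hypothetical name
```

The pitch, per Kumar, is step 2: the developer never needs to understand what happens inside the chip.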
The processors, the result of six generations of silicon design, are expected to ship in limited volume by the end of the year and at full volume in 2025.
Innatera employs about 75 people today and built its first processors with less than $5 million of investment. “We’ve been tremendously capital-efficient,” Kumar said.
The company plans to use the funding to get its first product into large-scale production in 2025 and expand marketing and sales. A Series B funding round is planned within the next year.
The Series A extension was led by Innavest and Invest-NL N.V., which joined existing Series A investors including the European Commission’s EIC Fund, MIG Capital LLC, Matterwave Ventures Management GmbH and Delft Enterprises B.V.
Image: Innatera
Which is equivalent to 22 ozzy cents mate. [in reply to: "BRCHF up 10% overnight"]
So you guys are going to continue the down ramping hoping to lower the share price so you can get back in? [in reply to: "Still here, and ready to buy back in when it gets lower."]
Not down ramping at all. [in reply to: "So you guys are going to continue the down ramping hoping to lower the share price so you can get back in?"]
Announcements usually help a share price. [in reply to: "So you guys are going to continue the down ramping hoping to lower the share price so you can get back in?"]
I am also getting very frustrated, I must admit. [in reply to: "Not down ramping at all."]
I was annoyed because of empty promises made and lack of real commercial updates from the company.
Sold some weeks ago, which has saved me from a further 30-40% decline.
"Watching the financials" and ready to buy when things actually look positive on paper, but if there is SFA happening for the rest of the year, then I think this will go down to 10-15 cents. In that case I might buy back in depending on the "financials" that we were told to keep an eye on.
As always, Not advice. DYOR and of course "watch the financials".
So yeah, "watch the financials"
That should help you with your financial decision making. [in reply to: "Dog stock. Time to trade these puppies. I’m waiting for mum to get home with more weed so I can pull another bong."]
Correct. [in reply to: "Isn't this similar to what we're trying to achieve to slash power consumption @Diogenese? SNNs eliminating the need for matrix multiplication without affecting performance?"]
Researchers claim new approach to LLMs could slash AI power consumption
Work was conducted as concerns about power demands of AI have risen
Graeme Burton
27 June 2024 • 2 min read
ChatGPT uses 10 times the electricity of a Google search – but cutting out ‘matrix multiplication’ could cut this number without affecting performance
New research suggests that eliminating the ‘matrix multiplication' stage of large language models (LLMs) used in AI could slash power consumption without affecting performance.
The research was published at the same time that increasing concerns are being raised over the power demands that AI will make over the course of the decade.
Matrix multiplication – abbreviated to MatMul in the research paper – performs large numbers of multiplication operations in parallel, becoming the dominant operation in the neural networks that drive AI primarily due to GPUs being optimised for such operations.
By leveraging Nvidia CUDA [Compute Unified Device Architecture] technology, along with optimised linear algebra libraries, MatMul can be parallelised and accelerated very efficiently on GPUs – which is precisely why the operation has become so entrenched, and why removing it offers a route to cutting power consumption without penalising performance.
In the process, the researchers claim to have "demonstrated the feasibility and effectiveness of the first scalable MatMul-free language model", in a paper entitled Scalable MatMul-free Language Modelling.
The researchers created a custom 2.7 billion parameter model without using matrix multiplication, and found that performance was on a par with state-of-the-art deep learning models (called ‘transformers', a fundamental element of natural language processing). The performance gap between their model and conventional approaches narrows as the MatMul-free model scales, they claim.
They add: "We also provide a GPU-efficient implementation of this model, which reduces memory usage by up to 61% over an unoptimised baseline during training. By utilising an optimised kernel during inference, our model's memory consumption can be reduced by more than 10 times compared to unoptimised models."
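For context on how a "MatMul-free" layer can work at all: the paper constrains weights to ternary values {-1, 0, +1}, so every product in y = xW collapses into an addition, a subtraction, or nothing. Below is a rough sketch of that idea (not the authors' released code; the quantisation rule here is a simplified stand-in):

```python
import torch

def ternary_linear(x, w_ternary):
    """A 'MatMul-free' linear layer: with weights in {-1, 0, +1}, y = x @ W
    needs no multiplications -- each output is a sum and difference of input
    elements. The two matmuls below involve only 0/1 masks and stand in for
    the pure add/subtract datapath usable on real hardware."""
    pos = (w_ternary == 1).float()    # which inputs to add
    neg = (w_ternary == -1).float()   # which inputs to subtract
    return x @ pos - x @ neg

# Simplified ternary quantisation (sign, gated by mean absolute magnitude),
# a stand-in for the absmean-style rule used in ternary-weight papers.
w = torch.randn(16, 8)
w_t = torch.sign(w) * (w.abs() > w.abs().mean()).float()

x = torch.randn(4, 16)
print(ternary_linear(x, w_t).shape)   # torch.Size([4, 8])
```

Removing the multipliers is what opens the door to cheaper, lower-power hardware than GPUs for running such models.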
Less hardware-heavy AI models could also enable more pervasive AI, freeing the technology from its dependence on the data centre and the cloud. In addition, both OpenAI and Meta are set to unveil new models that, they claim, will be capable of reasoning and planning.
However, they cautioned, "one limitation of our work is that the MatMul-free language model has not been tested on extremely large-scale models (for example, 100+ billion parameters) due to computational constraints". They called for well-resourced institutions and organisations to build LLMs utilising lightweight models, "prioritising the development and deployment of matrix multiplication-free architectures".
Moreover, the researchers' work has not yet been subject to peer review. Indeed, power consumption on its own matters less than energy usage per unit of ‘output' – information that is not provided in the research paper.
Researchers claim new approach to LLMs could slash AI power consumption – www.computing.co.uk
Scalable MatMul-free Language Modeling (arxiv.org)
Matrix multiplication (MatMul) typically dominates the overall computational cost of large language models (LLMs). This cost only grows as LLMs scale to larger embedding dimensions and context lengths. In this work, we show that MatMul operations can be completely eliminated from LLMs while…