Breaking News - new BRN articles/research

windfall

Member
Something I've been thinking about is Neuralink. They made one of their rare updates on YT late last year. If ever there was a need for a cool-running, low-power edge chip, that would be it. Brainchip powering a human-computer interface device would make sense. I've heard Musk's Neuralink team develops their own hardware, but is there any inkling of interest from those quarters?
 
  • Like
Reactions: 2 users

JDelekto

Regular

I was very interested in Neuralink; however, they seem to be getting a lot of negative press lately regarding the monkeys they were using in their experiments: Elon Musk's brain chip firm denies animal cruelty claims.

While I haven't given much thought to any type of Neuralink/BrainChip collaboration, what I have been interested in is research using BrainChip with existing BCI (brain-computer interface) devices, like the Emotiv Insight, in conjunction with BrainChip's on-chip learning to identify various commands that the wearer could issue to control IoT devices locally.

The BCI device is trained much like a speech-recognition (speech-to-text) engine: you "think" about certain commands, and the resulting EEG (electroencephalogram) activity is recorded and used to "train" the software that does the control. One can then use the device to read brain activity and play back those commands. It sounds like science fiction, but it's very impressive technology.

Imagine being able to put on a lightweight Bluetooth headset when you get home and use your Smart Home devices, just by thinking about it.
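Purely as a hypothetical sketch of that idea (the channel count, frequency bands and synthetic data below are my own assumptions for illustration, not Emotiv's SDK or BrainChip's on-chip learning), the train-then-command loop might look roughly like this in Python:

```python
# Hypothetical sketch only: classify short EEG windows into "thought commands".
# The data is synthetic; a real setup would stream windows from a headset SDK.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
FS = 128                 # sample rate in Hz, typical of consumer EEG headsets
WINDOW_S = 2             # window length in seconds
COMMANDS = ["lights_on", "lights_off", "play_music"]

def band_power(channel, fs, lo, hi):
    """Average spectral power of one channel in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(channel), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(channel)) ** 2
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def features(window, fs=FS):
    """Theta/alpha/beta band-power features for each EEG channel."""
    bands = [(4, 8), (8, 13), (13, 30)]
    return np.array([band_power(ch, fs, lo, hi) for ch in window for lo, hi in bands])

# Synthetic "training session": 60 windows of 5-channel EEG per imagined command,
# each command given a different fake spectral signature so the classes separate.
t = np.arange(FS * WINDOW_S) / FS
X, y = [], []
for label, _ in enumerate(COMMANDS):
    signature = np.sin(2 * np.pi * (6 + 6 * label) * t)      # 6, 12, 18 Hz
    for _ in range(60):
        window = rng.normal(0, 1, size=(5, FS * WINDOW_S)) + signature
        X.append(features(window))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# At runtime a predicted label would be mapped to a local smart-home action.
live_window = rng.normal(0, 1, size=(5, FS * WINDOW_S)) + np.sin(2 * np.pi * 12 * t)
print("predicted command:", COMMANDS[int(clf.predict([features(live_window)])[0])])
```

In a real headset pipeline the windows would come from the device SDK rather than a random generator, and on-device learning would take the place of the offline classifier fit, but the basic flow of window, features, label, local action is the same.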

I think in the future, BrainChip will be very symbiotic with technologies like Neuralink once they are more advanced and proven safer, as it could be a great solution for training artificial limbs, eyes, and ears while personalizing the experience for that particular individual's brain and nervous system.
 
  • Like
  • Fire
Reactions: 10 users

Jasonk

Regular
Sorry if this has been touched on already.
The patent is a good read if you have half an hour. I know it's from 2018... but Kimberly from BrainChip seems a little more excited than just giving the usual thumbs up. Anyone know if Knightscope is working with BrainChip?

Knightscope is an advanced security technology company based in Silicon Valley that builds fully autonomous security robots that deter, detect and report. It listed on the Nasdaq three weeks ago.

Autonomous data machines and systems

Jan 19, 2018 - Knightscope, Inc.

Posted a couple of weeks ago; LinkedIn
[Screenshots of the LinkedIn post attached]
 
  • Like
  • Fire
Reactions: 18 users

Perhaps

Regular
Don't know if this has been published here before. A great overview of the AI market with a special interest in BrainChip:
 
  • Like
Reactions: 10 users

Quatrojos

Regular

'...on the Weekly Briefing podcast: We talk with Renesas EVP Sailesh Chittipeddi about the distinct requirements of industrial internet of things (IIoT) applications, and about new technologies that are enabling end-users to push the edge of the IIoT further and further out. Also, the biggest beneficiaries of recent maneuvers by Intel and Nvidia are probably advocates of the RISC-V architecture – a conversation with Kevin Krewell and Steve Leibson of Tirias Research. This episode sponsored by Renesas.'

[FULL TRANSCRIPT WILL BE AVAILABLE SOON]
 
  • Like
Reactions: 6 users

Moonshot

Regular
For those who are interested, this article provides a simple overview of the IP business model:


“The diagram above shows you the ideal relationship between investment and revenue for IP; notice how companies must make a significant financial investment if they want to see a jump in revenue once that piece of IP reaches maturity.

The revenue curve assumes that mature IP eventually ships in extremely high volume, generating a significant return on the initial investment.”
 

[Attachment: the investment vs. revenue diagram referenced above]
  • Like
Reactions: 15 users

Quatrojos

Regular


BRIAN SANTO: Interesting, interesting. If you're an enormous company, like one of the hyperscalers, you've probably got the expertise to do this in-house. If you're a smaller company, AI's kind of new, you might not have that expertise in-house; you might not want to have that expertise in-house, because you might not need it consistently. I imagine that for companies in that class, going out and finding somebody who has the combination of expertise in IP and design expertise is at a premium.



MATT GUTIERREZ: It is. And don't forget the tools part of it as well. So there's the hardware piece, which you just described accurately. Customers are, depending on the size of those customers, making trade-offs. It's the classic build versus buy. If I can find off-the-shelf components — IP, for example — that meet my requirements, my PPA — my Performance, Power and Area requirements — then that's probably the best path for me to go.

And I can take my precious resources and devote them to doing things that are more differentiated.



And so customers are making those choices among the vendors that they talk to. The more that they can get from one vendor, usually the better. But the tools aspect is also important. And I don’t only mean the kinds of tools that it takes to implement an SOC, I’m also talking about the tools that go along with the processor architecture. Processors by themselves, no matter how cleverly architected, are not valuable if you can’t program them. It’s not a revolutionary thought to understand that most of the effort goes into developing software, and you need to be able to take that investment in software from one processor generation to the next. So the tools are hugely important for programmers to be able to create in an efficient way and also interoperability amongst standards, optimizing their software so that it runs in an efficient way on the hardware resources available.



And also, in the case of edge devices, you've got to worry about memory. How much memory is on the device? So, code density. Nobody in the cloud worries about code, about how big their program is. Memory is free for the most part in those kinds of applications.



In constrained devices, it's not. So you're even looking at things like, do my compilers create very dense programs? And so some of the compilers are more optimized and better at doing that than others. And so you take all of these things into account — the tools that are required, the operating systems that are available, the library building blocks for doing machine learning applications, and how well they allow me to map onto the different hardware resources I have — all of those things are taken into account by developers of SoCs when they're looking at making vendor choices.
 
  • Like
Reactions: 11 users

Twing

Akida’s gambit

A great find @Quatrojos.
This is a very interesting interview with Sailesh Chittipeddi of Renesas; although he doesn't mention Akida at all, everything he spoke about is Akida!
It certainly indicates to me that Akida will be in everything, and it won't be too long before we start seeing it in the market.
 
  • Like
Reactions: 19 users

Quatrojos

Regular
  • Like
  • Wow
Reactions: 14 users

Quatrojos

Regular

Presentation

Intelligent autonomy: Enabling endpoint devices to self-govern

Advancements in traditional compute combined with inclusion of power-efficient AI acceleration fabrics at the edge and within endpoints open up exciting new possibilities for managing the intelligence life-cycle of a system. There is a shift from a cloud-centric intelligence model to a more distributed intelligence architecture. While big-data workloads continue to be cloud centric there is a lot of demand for efficient small-data workload management right at the source.
Being able to run AI/ML workloads within tiny machines (TinyML) combined with how we are re-thinking our lives post COVID has led to some interesting market dynamics. Some of the areas and use-cases that are seeing disruption are using Voice as a User Interface for human-to-machine communication, environmental sensing and predictive analytics and maintenance.
Inference engines running on tiny computers within endpoints now enable far more efficient data handling and analytics right at the source, improving data gravity. Embedded intelligence within end points also means improved response times, reduced network data transport requirements and removal of the need to be persistently connected to the edge or cloud.
 
  • Like
  • Love
  • Fire
Reactions: 9 users

The following words echo those of the former CEO, Mr. Dinardo, who, filled with frustration in a 2020 webinar, said words to the effect of: what we are trying to get across is that AKIDA is not an accelerator, it is a processor, a tiny computer on a chip. Now Renesas are saying this exact same thing:

"Inference engines running on tiny computers within endpoints now enable far more efficient data handling and analytics right at the source, improving data gravity".

The former CEO continued to state this until Rob Telson came on board and they reverted to using the term 'accelerator' as a descriptor for AKIDA. At the time I noted this and posted in the 'black hole' that this was probably because Rob Telson was having trouble getting through the door at prospective customers' offices, as they likely had some idea what an accelerator was but absolutely no idea what a neuromorphic chip was (LOL). AKIDA can of course pretend to be an accelerator, as it does speed up the sending of data to the cloud, not by compressing it but by processing it and sending only the relevant metadata.

What does not get mentioned is that normal accelerators, by compressing the data, can still cause log jams because of how the data is being sent; it can end up queuing, whereas the metadata sent by AKIDA does not suffer this problem.
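To put rough numbers on that point, here is a toy sketch (the payload sizes, message fields and sensor choice are my own assumptions for illustration, not anything published by BrainChip) comparing one second of raw sensor data with the kind of small metadata message a device doing inference at the sensor could send instead:

```python
# Toy comparison (illustrative sizes only): shipping a second of raw sensor
# data to the cloud vs. shipping just the local inference result as metadata.
import json
import zlib
import numpy as np

t = np.arange(16_000) / 16_000                          # 1 s of audio at 16 kHz
signal = (3000 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)
raw = signal.tobytes()
compressed = zlib.compress(raw)                         # stand-in for a generic codec

# What a device doing inference at the sensor might send instead
# (hypothetical fields, purely for illustration):
metadata = json.dumps({
    "device": "sensor-42",
    "timestamp": 1_650_000_000,
    "label": "glass_break",
    "confidence": 0.97,
}).encode()

print(f"raw audio : {len(raw):6d} bytes")
print(f"compressed: {len(compressed):6d} bytes")
print(f"metadata  : {len(metadata):6d} bytes")
```

Even after compression, the raw stream remains a couple of orders of magnitude larger than a metadata message of around a hundred bytes, which is the queuing point being made above.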

My opinion and reminiscing only so DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Wow
Reactions: 28 users

Goldphish

Emerged
  • Like
Reactions: 10 users

Bobbydog

Emerged
Awesome, not that I understood one word of it, LOL, but good to hear Akida mentioned positively.

That university is huge!! 17,000 staff and 100,000 students. I never thought a university could be that big, so I learned something today. I've given up on the research paper.
🤣 Big ditto here
 
  • Like
  • Haha
Reactions: 4 users

TheFunkMachine

seeds have the potential to become trees.
https://news.financial/comments/infineon-brainchip-amd-chip-industry-with-enormous-potential

https://news.financial/comments/infineon-brainchip-nvidia-the-chip-market-remains-hot


Two recent articles that point out the potential of BrainChip alongside Nvidia, AMD and Infineon.

I especially like the first article, with the Akida chip obviously pictured in the top banner of the article ;)

I have my spider senses tingling in regards to a BrainChip/Infineon partnership. I will do more in-depth research to find some possible connections here. I do know they are both presenting at tinyML on power savings. But then again, BrainChip could benefit every chip in the world in power savings, so it's a bit of a rabbit hole.
 
  • Like
  • Fire
Reactions: 15 users

Worker122

Regular
  • Like
Reactions: 3 users
  • Like
Reactions: 2 users

cosors

👀

You probably all know that already. But I'm still happy to read it. Here we go!

"BrainChip Achieves Full Commercialization of its AKD1000 AIoT Chip with Availability of Mini PCIe Boards in High Volume​


February 22, 2022

PRESS RELEASE



BrainChip Holdings Ltd, a leading provider of ultra-low power high performance artificial intelligence technology and the world's first commercial producer of neuromorphic AI chips and IP, today announced that it has begun taking orders for the first commercially available Mini PCIe board leveraging its Akida™ advanced neural networking processor, rounding out its suite of AKD1000 offerings.

The AKD1000-powered Mini PCIe boards can be plugged into a developer's existing system to unlock capabilities for a wide array of edge AI applications, including Smart City, Smart Health, Smart Home and Smart Transportation. BrainChip will also offer the full PCIe design layout files and the bill of materials (BOM) to system integrators and developers to enable them to build their own boards and implement AKD1000 chips in volume as a stand-alone embedded accelerator or as a co-processor.

The new boards help usher in a new era of AI at the edge due to their performance, security, low power requirements, and the ability to perform AI training and learning on the device itself, without dependency on the cloud. The production-ready chips provide high-speed neuromorphic processing of sensor data at a low cost, high speed and very low power consumption. The PCIe boards are immediately available for pre-order on the BrainChip website. Pricing starts at $499.

"I am excited that people will finally be able to enjoy a world where AI meets the Internet of Things," said Sean Hehir, BrainChip CEO. "We have been working on developing our Akida technology for more than a decade and with the full commercial availability of our AKD1000, we are ready to fully execute on our vision. Other technologies are simply not capable of the autonomous, incremental learning at ultra-low power consumption that BrainChip's solutions can provide. Getting these chips into as many hands as possible is how the next generation of AI becomes reality."

The launch of BrainChip's new PCIe board closely follows the company's development kit offerings introduced in October. The two development kits - an x86 Shuttle PC development kit, as well as an ARM-based Raspberry Pi development kit - both include the AKD1000 chip on a Mini PCIe board and are available to partners, large enterprises and OEMs. BrainChip's AKD1000 chips and PCIe board can be purchased at shop.brainchipinc.com or via the Buy Now button at www.brainchip.com/.

Additional information is available at https://www.brainchipinc.com"
 
  • Like
Reactions: 12 users
This article may have already been posted, but I just came across it on Edge Impulse's Twitter.


It looks very much like they're talking about Akida here

"The good news is that a new breed of chipset technology has finally hit the market designed specifically with the smart home in mind. These chips provide simple AI capabilities at a fraction of the cost of traditional AI chips, are flexible enough to embed within any smart home device and program to almost any smart home AI use case easily."
 
  • Like
  • Fire
  • Love
Reactions: 30 users
The author, who is from XMOS, is referring to his own product. Has anyone looked into XMOS? He is claiming similar functionality to AKIDA. Maybe an IP deal? He talks CNN and DNN but no SNN. I'm not technical enough to analyse it all.
Anyone else care to comment?
 
  • Like
Reactions: 5 users