BRN Discussion Ongoing

chapman89

Founding Member
From the EE news journal posted earlier:


“Even though it’s been around for only one year, the Akida 1.0 platform has enjoyed tremendous success, having been used by the chaps and chapesses at a major automobile manufacturer to demonstrate a next-generation human interaction in-cabin experience in one of their concept cars; also by the folks at NASA, who are on a mission to incorporate neuromorphic learning into their space programs; also by a major microcontroller manufacturer, which is scheduled to tape-out an MCU augmented by Akida neuromorphic technology in the December 2023 timeframe. And this excludes all of the secret squirrel projects that we are not allowed to talk about.”
 
  • Like
  • Love
  • Fire
Reactions: 79 users

charles2

Regular

Impressive list!


Come Find Edge Impulse at Embedded World 2023​

EMBEDDED DEVICES
Mike Senese
8 March 2023

Each year, all the big names in embedded computing gather at Embedded World in Nuremberg, Germany to show off their latest innovations and developments, to meet with partners and customers, and to learn about new advancements in their fields. This year, Embedded World is happening from March 14–16, and Edge Impulse is excited to once again be participating with a range of activities.

embedded world | …it's a smarter world

First held in 2003, Embedded World is known as possibly the largest show in the world for the embedded industry. The exhibition focuses on products and services related to embedded systems, including hardware, software, and tools for developing and testing. The conference portion of the event features presentations and workshops from industry experts on a variety of topics, such as security, connectivity, and real-time operating systems. There’s a lot there for everyone.
With our machine learning toolkit that is ideally optimized for embedded applications, Edge Impulse and Embedded World are a perfect match. Here are some of the different places you will be able to find us and what we’ll be getting up to in each spot.

Edge Impulse Booth
Hall 2, Booth 2-238
This year we will be hosting our own space in the TinyML area of Embedded World. Our booth will have a demo from BrainChip, showing off our FOMO visual object-detection algorithm running on the BrainChip Akida AKD1000, featuring their neuromorphic IP.
Also at the booth: Meet BrickML, the first product based on the Edge Impulse “Industrial Monitoring” reference design, focused on providing machine learning processing for industrial/machine monitoring applications. Built in collaboration with Reloc and Zalmotek, BrickML can be used to track numerous aspects of industrial machinery performance via its multitude of embedded sensors. We’ll be showing it in a motor-monitoring demonstration. BrickML is fully integrated into the Edge Impulse platform which makes everything from data logging from the device, to ML inference model deployment on to the device a real snap. (Our Industrial Monitoring reference design includes hardware and software source code to rapidly design your own product, available for Edge Impulse enterprise customers.)
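
For readers curious what that pipeline looks like in practice, here is a minimal, hypothetical sketch of a motor-monitoring loop: window raw vibration samples, extract spectral features, and score each window against a healthy baseline. It uses plain NumPy with made-up sample rates and thresholds; it is not the BrickML firmware or the Edge Impulse API.

```python
# Hypothetical sketch of the kind of motor-monitoring pipeline BrickML targets:
# window raw vibration samples, extract spectral features, and flag anomalies.
# This is NOT the Edge Impulse or BrickML API -- just a minimal illustration.
import numpy as np

SAMPLE_RATE_HZ = 1000      # assumed accelerometer sample rate
WINDOW_SIZE = 256          # samples per inference window

def spectral_features(window: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of one vibration window (simple FFT features)."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def fit_baseline(normal_windows: list[np.ndarray]) -> np.ndarray:
    """Average feature vector of known-good motor behaviour."""
    return np.mean([spectral_features(w) for w in normal_windows], axis=0)

def anomaly_score(window: np.ndarray, baseline: np.ndarray) -> float:
    """Distance from the healthy baseline; larger means more anomalous."""
    return float(np.linalg.norm(spectral_features(window) - baseline))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(WINDOW_SIZE) / SAMPLE_RATE_HZ
    healthy = [np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(WINDOW_SIZE)
               for _ in range(20)]
    faulty = np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 180 * t)

    baseline = fit_baseline(healthy)
    print("healthy score:", anomaly_score(healthy[0], baseline))
    print("faulty score: ", anomaly_score(faulty, baseline))
```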

We’ll additionally be showing off devices from companies we work with, including Oura, the health-monitoring wearable that is discreetly embedded in a ring you wear on your finger, and NOWATCH, a wrist-based wearable that tracks your stress levels and mental well-being.

Texas Instruments
Hall 3A, Booth 3A-215
In the TI booth you’ll find our Edge Impulse/Texas Instruments demo, showing TI’s YOLOX-nano-lite model. The model was trained on a Kaggle dataset to detect weeds and crops: the dataset was loaded into Edge Impulse, the YOLOX model was trained via the “Bring Your Own Model” extensions to Edge Impulse Studio, and the trained model was then deployed to run on the TI Deep Learning framework.
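
As an illustration of how a detector's output might be consumed in a weed/crop demo like this, here is a hypothetical sketch that filters raw detections by confidence and counts each class. The detection format, class IDs and threshold are assumptions for illustration, not part of TI's or Edge Impulse's actual tooling.

```python
# Hypothetical post-processing of object-detection results for a weed/crop
# demo: keep confident detections and count them per class. The label mapping,
# threshold, and Detection structure are made up for illustration.
from dataclasses import dataclass

CLASS_NAMES = {0: "crop", 1: "weed"}   # assumed label mapping
SCORE_THRESHOLD = 0.5                  # assumed confidence cut-off

@dataclass
class Detection:
    class_id: int
    score: float
    box: tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def summarize(detections: list[Detection]) -> dict[str, int]:
    """Count confident detections per class."""
    counts = {name: 0 for name in CLASS_NAMES.values()}
    for det in detections:
        if det.score >= SCORE_THRESHOLD:
            counts[CLASS_NAMES[det.class_id]] += 1
    return counts

raw = [Detection(1, 0.91, (10, 10, 40, 40)),
       Detection(0, 0.74, (60, 15, 95, 70)),
       Detection(1, 0.32, (120, 80, 150, 110))]   # below threshold, ignored
print(summarize(raw))                              # {'crop': 1, 'weed': 1}
```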

Advantech
Hall 3, Booth 3-339
Scailable will be demonstrating their Edge Impulse FOMO-driven object detection implementation at the Advantech booth. It uses the Advantech ICAM camera to distinguish small washers, screws, and other items on several different trays. They’ll rotate through different trays and models during the demo, and show how to train new models at the booth.

AVSystem
Demo at the Zephyr booth: Hall 4, Booth 4-170
AVSystem’s Coiote is a LwM2M-based IoT device-management platform, providing support for constrained IoT devices at scale. It integrates with a tinyML-based vibration sensor and can detect and report anomalies in vibrations. The demo is based on the Nordic Thingy:91, which runs the Zephyr OS, and uses the Edge Impulse platform.

Arduino
Hall 2, Booth 2-238
Check out the “vineyard pest monitoring” vision demo, running on the Arduino Nicla Vision and MKR WAN 1310, built by Zalmotek and using Edge Impulse for machine learning.

Alif
Hall 4, Booth 4-544
Alif will also be bringing an Edge Impulse-powered demo to the show. It is viewable in their private conference room by appointment; contact kirtana@edgeimpulse.com to set up a meeting.

Synaptics panel, featuring Edge Impulse
Tuesday, 3/14 @ 3PM (local time)
Hall 1, Booth 500
Edge Impulse co-founder/CEO Zach Shelby will be a participant in the “Rapid Development of AI Applications on the Katana SoC” panel, brought to you by one of our partner companies, Synaptics, and moderated by Rich Nass from Embedded Computing Design.
Come find us!
In addition to these locations and scheduled events, we’ll have numerous staff members from Edge Impulse on site and ready to answer any questions you may have about our tools and use cases. Be sure to stop by to say hi.
(And if you can’t make it in person, you can always drop us a note: hello@edgeimpulse.com)
To emphasize:

Our booth will have a demo from BrainChip, showing off our FOMO visual object-detection algorithm running on the BrainChip Akida AKD1000, featuring their neuromorphic IP.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Tothemoon24

Regular
The mighty chip is getting some much deserved media attention.




BrainChip Unveils Its Second-Generation Akida Platform, Now Boasting Vision Transformer Acceleration​

Brainchip's Akida 2.0 gains some impressive new features, along with a three-tier launch strategy scaling up to 128 nodes and 50 TOPS.​







BrainChip has announced the launch of its second-generation Akida processor family, designed for high-efficiency artificial intelligence at the edge, adding Temporal Event-Based Neural Net (TENN) support and optional vision transformer acceleration on top of the company's existing spiking neural network capabilities.
"Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device," claims BrainChip's chief executive officer Sean Hehir of the next-generation design. "By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience."
BrainChip has announced Akida 2.0, its second-generation edge-AI accelerator — now offering TENN and vision transformer support. (📷: BrainChip)

BrainChip began offering development kits for its first-generation Akida AKD1000 neural network processors in October 2021, building two kits around the user's choice of a Shuttle x86 PC or a Raspberry Pi. Ease of use took a leap earlier this year when the company announced the fruit of its partnership with Edge Impulse to bring Akida support to the latter's machine learning platform — offering what Edge Impulse co-founder and chief executive officer Zach Shelby described as a "powerful and easy-to-use solution for building and deploying machine learning models on the edge."
The promise of the Akida platform, which was developed based on the operation of the human brain, is high performance at a far greater efficiency than its rivals — when, at least, the problem to be solved can be defined as a spiking neural network. It's this efficiency which has seen BrainChip primarily position its Akida hardware for use at the edge, accelerating on-device machine learning in power-sensitive applications.
The company has confirmed plans to launch Akida 2.0 in three tiers, topping out at the Akida-P family with up to 50 TOPS of compute. (📷: BrainChip)

The second-generation Akida platform brings with it high-efficiency eight-bit processing and support for Temporal Event-Based Neural Nets (TENNs), giving it the ability to consume raw real-time streaming data from sensors, including video sensors. This, the company claims, provides "radically simpler implementations" for tasks including video analytics, target tracking, audio classification, and even vital sign prediction in medical imaging analysis.
BrainChip’s Akida refresh also brings with it support for accelerating vision transformers, an optional component that can be omitted if not required, primarily used for image classification, object detection, and semantic segmentation. Combined with Akida’s ability to process multiple layers at once, the company claims the new parts will allow for complete self-management and execution of even relatively complex networks like ResNet-50 — without the host device’s processor having to get involved at all.

The new features come alongside BrainChip's earlier promises of dramatic efficiency gains through the use of spiking neural networks. (📹: BrainChip)
The company has confirmed that it will be licensing the Akida IP in three product classes: Akida-E will focus on high energy efficiency with a view to being embedded alongside, or as close as possible, to sensors and offering up to 200 giga-operations per second (GOPS) across one to four nodes; Akida-S will be for integration into microcontroller units and systems-on-chip (SoCs), hitting up to 1 tera-operations per second (TOPS) across two to eight nodes; and Akida-P will target the mid- to high-end, and will be the only tier to offer the optional vision transformer acceleration, scaling between eight and 128 nodes with a total performance of up to 50 TOPS.
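
The tier structure is easier to compare side by side in code. The sketch below simply captures the peak figures quoted above as a lookup table, with a toy helper for picking a tier; the numbers are the article's, the helper is illustrative only.

```python
# The three announced Akida 2.0 tiers as a small lookup table, plus a helper
# that picks the lightest tier meeting a performance target. "peak_tops" is
# the peak tera-operations per second quoted in the article (0.2 TOPS = 200 GOPS).
AKIDA_TIERS = {
    "Akida-E": {"nodes": (1, 4),   "peak_tops": 0.2,  "vision_transformer": False},
    "Akida-S": {"nodes": (2, 8),   "peak_tops": 1.0,  "vision_transformer": False},
    "Akida-P": {"nodes": (8, 128), "peak_tops": 50.0, "vision_transformer": True},
}

def pick_tier(required_tops: float, needs_vit: bool = False) -> str:
    """Return the smallest tier whose quoted peak meets the requirement."""
    for name, spec in AKIDA_TIERS.items():   # dict preserves insertion order
        if spec["peak_tops"] >= required_tops and (spec["vision_transformer"] or not needs_vit):
            return name
    raise ValueError("No tier in the announced line-up meets this requirement")

print(pick_tier(0.1))                  # Akida-E
print(pick_tier(5.0, needs_vit=True))  # Akida-P
```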
While the part launches to unnamed “early adopters” today, BrainChip isn’t quite ready to start selling it to the public — promising instead that second-generation Akida processors will be available in the third quarter of 2023 at as-yet unannounced pricing. More information is available on the BrainChip website.
 
  • Like
  • Love
  • Fire
Reactions: 39 users

Tothemoon24

Regular
 
  • Like
  • Fire
Reactions: 22 users

Baisyet

Regular
Why is Renesas missing from the partnership page on the BRN website?
 
  • Like
Reactions: 2 users

cassip

Regular
  • Like
  • Fire
  • Love
Reactions: 15 users

cosors

👀
You are absolutely right. It's cold as shit here and it's snowing and I'm out with my phone and I was lazy. I deleted my post. Still interesting or not?
Sorry for that, and thanks for reading. With frozen fingers and feet it looked too complicated for me.
Bravo, now I would like to be...

🥶🤣
 
  • Haha
Reactions: 6 users

BaconLover

Founding Member



Luckily Nicobo's farts don't cause a climate crisis.
 
  • Haha
  • Like
Reactions: 18 users

Learning

Learning to the Top 🕵‍♂️
Here is an interesting research blog from Google Research on the Vision Transformer (ViT).


Transformers for Image Recognition at Scale
THURSDAY, DECEMBER 03, 2020
Posted by Neil Houlsby and Dirk Weissenborn, Research Scientists, Google Research

Extract:
As a first step in this direction, we present the Vision Transformer (ViT), a vision model based as closely as possible on the Transformer architecture originally designed for text-based tasks. ViT represents an input image as a sequence of image patches, similar to the sequence of word embeddings used when applying Transformers to text, and directly predicts class labels for the image. ViT demonstrates excellent performance when trained on sufficient data, outperforming a comparable state-of-the-art CNN with four times fewer computational resources. To foster additional research in this area, we have open-sourced both the code and models.


The Vision Transformer treats an input image as a sequence of patches, akin to a series of word embeddings generated by a natural language processing (NLP) Transformer.

The Vision Transformer
The original text Transformer takes as input a sequence of words, which it then uses for classification, translation, or other NLP tasks. For ViT, we make the fewest possible modifications to the Transformer design to make it operate directly on images instead of words, and observe how much about image structure the model can learn on its own.

ViT divides an image into a grid of square patches. Each patch is flattened into a single vector by concatenating the channels of all pixels in a patch and then linearly projecting it to the desired input dimension. Because Transformers are agnostic to the structure of the input elements we add learnable position embeddings to each patch, which allow the model to learn about the structure of the images. A priori, ViT does not know about the relative location of patches in the image, or even that the image has a 2D structure — it must learn such relevant information from the training data and encode structural information in the position embeddings.
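
For anyone who wants to see the patch-embedding step concretely, here is a minimal NumPy sketch of the input pipeline the blog describes: split the image into 16x16 patches, flatten and linearly project each one, and add position embeddings. The weights are random placeholders, so this shows shapes and data flow only, not a trained ViT.

```python
# Minimal sketch of the ViT input pipeline described above: split an image into
# square patches, flatten each patch, linearly project it, and add a learnable
# position embedding. Weights here are random placeholders, not trained parameters.
import numpy as np

def patchify(image: np.ndarray, patch_size: int) -> np.ndarray:
    """(H, W, C) image -> (num_patches, patch_size*patch_size*C) matrix."""
    h, w, c = image.shape
    patches = []
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size, :].reshape(-1))
    return np.stack(patches)

rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))          # toy input image
patch_size, embed_dim = 16, 768            # ViT-Base settings

patches = patchify(image, patch_size)      # (196, 768) for 224x224, 16x16 patches
projection = rng.standard_normal((patches.shape[1], embed_dim)) * 0.02
pos_embed = rng.standard_normal((patches.shape[0], embed_dim)) * 0.02

tokens = patches @ projection + pos_embed  # sequence fed to the Transformer encoder
print(tokens.shape)                        # (196, 768)
```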

Full blog here.

Learning 🏖
 
Last edited:
  • Like
  • Fire
Reactions: 20 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 12 users

TheDrooben

Pretty Pretty Pretty Pretty Good
  • Like
  • Thinking
  • Fire
Reactions: 29 users

Townyj

Ermahgerd
Last edited:
  • Like
  • Fire
Reactions: 7 users

Sirod69

bavarian girl ;-)

BrainChip
@BrainChip_inc


In this Digital CxO Leadership Insights series video, Mike Vizard talks with Nandan Nayampally, CMO BrainChip, about how a new class of processors will advance artificial intelligence (AI) at the edge https://digitalcxo.com/video/leadership-insights-ai-at-the-edge/…
@DigCxO

@mvizard

 
  • Like
  • Fire
  • Love
Reactions: 42 users

Damo4

Regular

https://digitalcxo.com/video/leadership-insights-ai-at-the-edge/

Transcript​

Mike Vizard: Hello, and welcome to the latest edition of the Digital CxO Leadership Insights series. I’m your host Mike Vizard. Today we’re with Nandan Nayampally, CMO for BrainChip, and they’ve created a processor that mimics the way the brain works. And it’s going to be used in a lot of interesting use cases that we’re going to jump into. Dan, welcome to the show.

Nandan Nayampally: Thanks, Michael.

Mike Vizard: A lot of people are a little dubious of how the brain works. So why is it a good thing to mimic the way our brain works and what went into building this process?

Nandan Nayampally: Well, firstly, the brain is probably the most efficient cognitive processor known to man, right? So naturally, there are a lot of good things that come with understanding study of the brain, especially how to learn more efficiently, which is the critical part of artificial intelligence. And what’s generally done is, there’s a lot of very parallel compute. And that’s why GPUs and new accelerators have been created that do a lot of things in parallel. Now, the problem with that is that they’re often computations that aren’t fully used and get thrown away. So it becomes very, very inefficient as you keep getting more and more complex models, right? So things like ChatGPT 3, for example, just to train it on the cloud takes four weeks, $6 million. There are better ways to kind of achieve those kinds of things. And that comes from the study of the brain.

Mike Vizard: So what exactly did you guys do to create this? I mean, how does that processor architecture work? And how long have you been building this thing?

Nandan Nayampally: It’s a very good question. So obviously, there’s a lot of study going on about how neurons work, and how they compute only when needed, right? And trigger forward computation only when needed. The founders of BrainChip, Peter van der Made and Anil Mankar, had been doing this research over the last 15 years. They actually built a lot of neuron models. And then realized that pure neuron models, which there are a number of other companies like IBM, Intel, also doing neuromorphic computing, as it’s called, but applying it to real world problems is still far away if you truly build it exactly like the brain functions. So what brain chip did was, about five years ago, they started applying it to today’s problems. So we have a hybrid approach from a traditional neuromorphic neuron driven model, applying a layer that does very well with today’s conventional neural networks, such as the convolutional one, the deep learning neural networks and transformers. So applying the principles of only executing what is needed, and only executing when it’s needed, improves the efficiency substantially, while still delivering the kind of performance that you need. And when you think about AI in general, everybody thinks that AI is in the cloud. It’s only going to scale when you actually have more intelligent computation at the edge. Otherwise, you’re just going to clog up the network, you’re just going to explode the compute on the cloud.

Mike Vizard: To that end, am I replacing certain classes of processors that exist today? Or is this an entirely new use case and this processor will be used alongside other processors, and we’ll have more of this kind of hybrid processor architecture?

Nandan Nayampally: So yes, so AI is a problem that – it’s a computational problem, right? So you can do it on CPUs, you can do it on GPUs, you can do it on different kinds of accelerators. If you think about it, the AI computation use cases are all growing. So what we’ll see is more and more use cases at the edge that are smarter, that can learn. So for example, today, you have a ring doorbell that recognizes faces, or at least recognizes there is a face, but it keeps reminding you that somebody showed up that you knew already. You don’t want to be disturbed. If your neighbor walks past it, and it says, oh, “Somebody’s at the door.” They are naturally going to walk past. Now if you can train it to say, okay, this is my neighbor, don’t bother me if they’re not, you know, showing up at the door – that’s a use case that is new, right? You could do it on the CPU. You can do it on the GPU. But I think a lot of these use cases become more and more cost effective and efficient if they’re done with specialized computation. So I believe that we will have a strong growth in the types of use cases that this enables. The Price Waterhouse Coopers view is that’s about by 2030, the annual impact GDP from AI is about $15 trillion. And out of that, they look at a Iot or the artificial intelligence of things industry, that is hardware software services, is going to be over a trillion dollars. So there’s a huge market that’s developing, whether it is healthcare, monitoring vital signs and predicting, right? You can’t do that today, just because the competition, or the technology is not there to do it in a portable, cost effective way, you can start doing that on devices that you can embed or where you have, maybe hearing devices that could be a lot more efficient, that can help you either filter noise automatically and learn from your environment. There are customizations that you could do on your device saying, “hey, your car learns how you drive and helps you drive better, for example. So there are lots of new use cases that will emerge that drive new computation paradigms like what we’re proposing.

Mike Vizard: How did you solve the training problem at the edge? Because, at least in my understanding, it takes a lot of compute power to train these models. And so how did you get that down to a footprint that’s acceptable from a energy and heat perspective?

Nandan Nayampally: That’s, that’s a great question. So I want to be very clear, we’re not training on the edge. Okay? At this point, the benefits of neuromorphics are in being able to learn at the edge, but it still starts from a trained model. Right? So what what we do is we take the trained model, and it’s already got features extracted; we use that to learn and extend the classes. So for example, if there’s a model that is recognizing faces, so it’s on the device, but you can then teach it to recognize Mike’s face. Okay, so it’s still a face. But now, you know, it’s Mike’s face. And you can add that to the similar things. There are applications like pet doors, where they have cameras to allow the pet door to open or not, depending on the type of pet today recognizes between cats and dogs and other pets; you can now customize it to say, “okay, this is my cat and don’t let in the neighbor’s cat,” for example.
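
To make the idea concrete, here is a generic sketch of last-layer class extension: a frozen feature extractor supplies embeddings, and a new class is added on-device by storing only a prototype per class. This illustrates the principle Nayampally describes; it is not BrainChip's actual on-chip learning rule.

```python
# Rough illustration of last-layer, on-device learning: keep the trained
# feature extractor frozen and add new classes by updating only the final
# classification stage. Here the "last layer" is a set of class prototypes
# (one mean embedding per class) -- a stand-in, not Akida's learning algorithm.
import numpy as np

class LastLayerLearner:
    def __init__(self, embed_dim: int):
        self.embed_dim = embed_dim
        self.prototypes: dict[str, np.ndarray] = {}   # class name -> mean embedding

    def add_class(self, name: str, embeddings: np.ndarray) -> None:
        """Learn a new class from a handful of example embeddings."""
        self.prototypes[name] = embeddings.mean(axis=0)

    def classify(self, embedding: np.ndarray) -> str:
        """Nearest-prototype decision over all classes learned so far."""
        return min(self.prototypes,
                   key=lambda name: np.linalg.norm(embedding - self.prototypes[name]))

rng = np.random.default_rng(0)
learner = LastLayerLearner(embed_dim=64)

# Pretend these embeddings came from a frozen, pre-trained feature extractor.
generic_faces = rng.normal(0.0, 1.0, size=(10, 64))
mikes_face = rng.normal(2.0, 1.0, size=(5, 64))

learner.add_class("face", generic_faces)
learner.add_class("mike", mikes_face)                     # new class added on-device
print(learner.classify(rng.normal(2.0, 1.0, size=64)))    # likely "mike"
```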

Mike Vizard: So to that point, will this narrow the amount of drift that we see in AI models over time? When somebody deploys these and we start to collect new data, there seems to be a need to update those models more regularly. So can we narrow that a little bit and kind of get more life out of the model before we need to replace it?

Nandan Nayampally: Yeah, that’s, that’s actually I think you’ve hit the nail on the head. Every time you create a model, it’s expensive. Sending it to cloud to retrain or customize is expensive. So this gives you a path for it. To some extent, there’ll be more drift, but then you can actually pull it back together the next generation. And it’s – the reality is, some of these drifts, you don’t even want to go back to the cloud. Because if I’m training it to recognize my pet, or my face, I don’t want it to go in my cloud. That’s my privacy. That’s my security associated with it. So there’ll be some things that are relevant that need to go back to cloud, some things that are personalized, that may not.

Mike Vizard: As we go along, how will you expose this to developers and to the data scientists that are out there? Is there some sort of SDK that they’ll invoke or set API’s? Or how will we build the software stack for this?

Nandan Nayampally: Yeah, this is an excellent question, right? We can have the most elegant hardware. If it’s not usable in a developer friendly fashion, it doesn’t mean anything. So I’ll make one comment as well on our learning, which is that one of the key things about our learning, because it’s on device and last layer only – we don’t actually save even the data on the device. It’s only stored as the weights in the network as an adjust. So it adds to the security because even if the device is compromised, they only have weights and doesn’t really change it. So there’s a security privacy layer that goes with it. So we do have a very intelligent runtime that goes on top of our hardware. And that has an API for developers to utilize. We also plug into a lot of the frameworks that we have today and partners like Edge Impulse who provide a developer environment extension environment, that that can help people tune what they need to do for our platform.

Mike Vizard: So how long before we started to see these use cases? You guys just launched the processors; it usually takes some time for all this to come together. And for it to manifest itself somewhere, what’s your kind of timeline?

Nandan Nayampally: So I think the way to think about it is, you’re the real kind of growth in more radical innovative use cases, probably, you know, a few months out, a year out. But what I think what we’re saying is there are use cases that exist on more high powered devices today, that actually can now migrate to much more efficient edge devices, right? And so I do want to make sure people understand when when we talk about edge, it’s not kind of the brick that’s sitting next to your network and still driven by a fan, right? It’s smaller than the bigger bricks, but it is still a brick. What we’re talking about is literally at-sensor, always on intelligence, let’s say whether it’s a heart rate monitor, for example, or, you know, respiratory rate monitor – you could actually have a very, very compact device of that kind. And so one of the big benefits that we see is, let’s say video object detection today needs quite a bit of high power compute, to do HD video object detection, target tracking. Now imagine you could do that in a battery operated or have very low form factor, cost effective device, right? So suddenly, your dash cam, with additional capabilities built into that, could become much more cost effective or more capable. So we see a lot of the use cases that exists today, coming in. And then we see a number of use cases like vital signs predictions much closer, or remote healthcare, now getting cheaper, because you don’t have to send everything to cloud., You can get a really good idea before you have to send anything to cloud and then you’re sending less data you’re sending, it’s already pre-qualified before you send it rather than finding out through the cycle that it’s taken a lot more time. Does that make sense?

Mike Vizard: Sure. Are you at all concerned that the Intels and the Nvidias of the world will go build something similar? You mentioned IBM, but what ultimately makes your approach unique in, you know, something that is sustainable as a platform that people should build on today?

Nandan Nayampally: That’s, that’s an excellent question. And so the Intel’s, the IBM’s are building their platforms. More often than not, they are building their platforms for their needs. Right? Nvidia is selling platforms that are much more scalable. But again, they they tend towards going towards a much higher end of the market, rather than the very sensor level, which is a different cost structure, a different set of requirements. So we are geared towards building for embedded solutions. And so both our business model, as well as our design, is much more geared from the ground up for being very, very low-resource requirements, whether it’s memory, whether it’s power, whether it’s, you know, silicon, right? So we are focused on building cost effective solutions and enabling, and because we are an IP model – so we license our technology to customers, customers could actually build their own specialized systems on chip, or ASICs, as they call it, write application specific ICs. That tune to their requirement. We’re not trying to sell chips into that market. We’re licensing technology that enables people to build their own specialized solutions. So a washing machine manufacturer that knows what they need to do intelligently may use microcontrollers today and say, “Okay, I’ve got this done. But, but in a year’s time or two years time, and I’ve perfected this, I’m actually going to build my own chip because the volumes and the scale require it.” Same thing with camera manufacturers; they may choose to have their own specialized IC design because it cuts their overall costs when they strive to scale.

Mike Vizard: Alright, folks, you heard it here. AI is coming to the edge, but you should not assume it’s going to be running on a processor that looks like anything we have today. Hey, Dan, thanks for being on the show.

Nandan Nayampally: Thanks, Michael. Thanks for having us. All right, and thank you all for watching the latest edition of the Digital CxO Insights series. I’m your host Mike Vizard. You can find this episode and others on the digitalcxo.com website and we invite you to check them all out. Once again, thanks for watching
 
  • Like
  • Fire
  • Love
Reactions: 55 users

Damo4

Regular

I think the segment below is important for those who have been expecting things ahead of BrainChip's timeline:

Mike Vizard: So how long before we started to see these use cases? You guys just launched the processors; it usually takes some time for all this to come together. And for it to manifest itself somewhere, what’s your kind of timeline?

Nandan Nayampally: So I think the way to think about it is, you’re the real kind of growth in more radical innovative use cases, probably, you know, a few months out, a year out. But what I think what we’re saying is there are use cases that exist on more high powered devices today, that actually can now migrate to much more efficient edge devices, right? And so I do want to make sure people understand when when we talk about edge, it’s not kind of the brick that’s sitting next to your network and still driven by a fan, right? It’s smaller than the bigger bricks, but it is still a brick. What we’re talking about is literally at-sensor, always on intelligence, let’s say whether it’s a heart rate monitor, for example, or, you know, respiratory rate monitor – you could actually have a very, very compact device of that kind. And so one of the big benefits that we see is, let’s say video object detection today needs quite a bit of high power compute, to do HD video object detection, target tracking. Now imagine you could do that in a battery operated or have very low form factor, cost effective device, right? So suddenly, your dash cam, with additional capabilities built into that, could become much more cost effective or more capable. So we see a lot of the use cases that exists today, coming in. And then we see a number of use cases like vital signs predictions much closer, or remote healthcare, now getting cheaper, because you don’t have to send everything to cloud., You can get a really good idea before you have to send anything to cloud and then you’re sending less data you’re sending, it’s already pre-qualified before you send it rather than finding out through the cycle that it’s taken a lot more time. Does that make sense?
 
  • Like
  • Fire
  • Love
Reactions: 28 users
Great reminder of why.
Do your own research.
Ignore the manipulators of markets.

Logic says that if you want to buy a great company at a low price, so will institutions; but they have the ability and resources to influence the market and to profit from lending to short traders.

BrainChip Introduces Second-Generation Akida Platform​

Introduces Vision Transformers and Spatial-Temporal Convolution for radically fast, hyper-efficient and secure Edge AIoT products, untethered from the cloud
Laguna Hills, Calif. – March 6, 2023
BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, neuromorphic AI IP, today announced the second generation of its Akida™ platform that drives extremely efficient and intelligent edge devices for the Artificial Intelligence of Things (AIoT) solutions and services market that is expected to be $1T+ by 2030. This hyper-efficient yet powerful neural processing system, architected for embedded Edge AI applications, now adds efficient 8-bit processing to go with advanced capabilities such as time domain convolutions and vision transformer acceleration, for an unprecedented level of performance in sub-watt devices, taking them from perception towards cognition.
The second generation of Akida now includes Temporal Event Based Neural Net (TENN) spatial-temporal convolutions that supercharge the processing of raw time-continuous streaming data, such as video analytics, target tracking, audio classification, analysis of MRI and CT scans for vital signs prediction, and time series analytics used in forecasting and predictive maintenance. These capabilities are critically needed in industrial, automotive, digital health, smart home and smart city applications. TENNs allow for radically simpler implementations by consuming raw data directly from sensors, which drastically reduces model size and the number of operations performed while maintaining very high accuracy. This can shrink design cycles and dramatically lower the cost of development.
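
BrainChip has not published TENN internals in this release, so the following is only a generic sketch of the underlying idea: a causal temporal convolution that consumes a raw sample stream one value at a time, keeping a short rolling buffer rather than a pre-processed frame.

```python
# Generic illustration of temporal convolution over raw streaming data (not
# BrainChip's TENN implementation): a causal filter fed one sample at a time,
# maintaining only a small rolling buffer of recent samples.
import collections
import numpy as np

class CausalTemporalConv:
    def __init__(self, kernel: np.ndarray):
        self.kernel = kernel
        self.buffer = collections.deque([0.0] * len(kernel), maxlen=len(kernel))

    def push(self, sample: float) -> float:
        """Consume one raw sensor sample and emit one filtered output."""
        self.buffer.append(sample)
        return float(np.dot(self.kernel, np.asarray(self.buffer)))

# Toy "raw sensor" stream: a slow signal with an abrupt event at t = 300.
t = np.arange(1000)
stream = np.sin(2 * np.pi * t / 200.0)
stream[300:] += 1.5

conv = CausalTemporalConv(kernel=np.array([1.0, -1.0, -1.0, 1.0]))  # edge-like filter
outputs = [conv.push(x) for x in stream]
print("max response index:", int(np.argmax(np.abs(outputs))))        # near 300
```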
Another addition to the second generation of Akida is Vision Transformers (ViT) acceleration, a leading edge neural network that has been shown to perform extremely well on various computer vision tasks, such as image classification, object detection, and semantic segmentation. This powerful acceleration, combined with Akida’s ability to process multiple layers simultaneously and hardware support for skip connections, allows it to self-manage the execution of complex networks like RESNET-50 completely in the neural processor without CPU intervention and minimizes system load.
The Akida IP platform has a unique ability to learn on the device for continuous improvement and data-less customization that improves security and privacy. This, combined with the efficiency and performance available, enable very differentiated solutions that until now have not been possible. These include secure, small form factor devices like hearable and wearable devices, that take raw audio input, medical devices for monitoring heart and respiratory rates and other vitals that consume only microwatts of power. This can scale up to HD-resolution vision solutions delivered through high-value, battery-operated or fanless devices enabling a wide variety of applications from surveillance systems to factory management and augmented reality to scale effectively.
“We see an increasing demand for real-time, on-device, intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”
“Advancements in AI require parallel advancements in on-device learning capabilities while simultaneously overcoming the challenges of efficiency, scalability, and latency,” said Richard Wawrzyniak, principal analyst at Semico Research. “BrainChip has demonstrated the ability to create a truly intelligent edge with Akida and moves the needle even more in terms of how Edge AI solutions are developed and deployed. The benefits of on-chip AI from a performance and cost perspective are hard to deny.”
“Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device,” said Sean Hehir, BrainChip CEO. “By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience.”
Akida’s software and tooling further simplifies the development and deployment of solutions and services with these features:
  • An efficient runtime engine that autonomously manages model accelerations completely transparent to the developer
  • MetaTF™ software that developers can use with their preferred framework, like TensorFlow/Keras, or development platform, like Edge Impulse, to easily develop, tune, and deploy AI solutions (a minimal sketch of the Keras starting point follows this list).
  • Supports all types of Convolutional Neural Networks (CNN), Deep Learning Networks (DNN), Vision Transformer Networks (ViT) as well as Spiking Neural Networks (SNNs), future-proofing designs as the models get more advanced.
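
As a rough idea of where the MetaTF flow starts, here is a small, standard Keras model of the sort that would be trained and then converted for Akida. The Keras code is ordinary TensorFlow; the conversion step is left as a placeholder comment because the exact MetaTF calls are not spelled out in this announcement.

```python
# A minimal Keras model of the kind the MetaTF flow starts from. The Keras part
# is standard TensorFlow; the final conversion to Akida is shown only as a
# placeholder comment (consult BrainChip's MetaTF documentation for the real API).
import tensorflow as tf

def build_keyword_spotter(num_classes: int = 10) -> tf.keras.Model:
    """Small CNN for audio keyword spotting on 49x10 MFCC-style inputs."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(49, 10, 1)),
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_keyword_spotter()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()

# Placeholder for the MetaTF step (quantize the trained Keras model and convert
# it for the Akida runtime) -- hypothetical names, not the actual API:
# akida_model = metatf_convert(quantize(model))
```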
Akida comes with a Models Zoo and a burgeoning ecosystem of software, tools, and model vendors, as well as IP, SoC, foundry and system integrator partners. BrainChip is engaged with early adopters on the second-generation IP platform. General availability will follow in Q3 2023.
See what they’re saying:
“At Prophesee, we are driven by the pursuit of groundbreaking innovation addressing event-based vision solutions. Combining our highly efficient neuromorphic-enabled Metavision sensing approach with Brainchip’s Akida neuromorphic processor holds great potential for developers of high-performance, low-power Edge AI applications. We value our partnership with BrainChip and look forward to getting started with their 2nd generation Akida platform, supporting vision transformers and TENNs,” said Luca Verre, Co-Founder and CEO at Prophesee.
Luca Verre, Co-Founder and CEO, Prophesee
“BrainChip and its unique digital neuromorphic IP have been part of IFS’ Accelerator IP Alliance ecosystem since 2022,” said Suk Lee, Vice President of Design Ecosystem Development at IFS. “We are keen to see how the capabilities in Akida’s latest generation offerings enable more compelling AI use cases at the edge”
Suk Lee, VP Design Ecosystem Development, Intel Foundry Services
“Edge Impulse is thrilled to collaborate with BrainChip and harness their groundbreaking neuromorphic technology. Akida’s 2nd generation platform adds TENNs and Vision Transformers to a strong neuromorphic foundation. That’s going to accelerate the demand for intelligent solutions. Our growing partnership is a testament to the immense potential of combining Edge Impulse’s advanced machine learning capabilities with BrainChip’s innovative approach to computing. Together, we’re forging a path toward a more intelligent and efficient future,” said Zach Shelby, Co-Founder and CEO at Edge Impulse.
Zach Shelby, Co-Founder and CEO, Edge Impulse
“BrainChip has some exciting upcoming news and developments underway,” said Daniel Mandell, Director at VDC Research. “Their 2nd generation Akida platform provides direct support for the intelligence chip market, which is exploding. IoT market opportunities are driving rapid change in our global technology ecosystem, and BrainChip will help us get there.”
Daniel Mandell, Director, VDC Research
“Integration of AI Accelerators, such as BrainChip’s Akida technology, has application for high-performance RF, including spectrum monitoring, low-latency links, distributed networking, AESA radar, and 5G base stations,” said John Shanton, CEO of Ipsolon Research, a leader in small form factor, low power SDR technology.
John Shanton, CEO, Ipsolon Research
“Through our collaboration with BrainChip, we are enabling the combination of SiFive’s RISC-V processor IP portfolio and BrainChip’s 2nd generation Akida neuromorphic IP to provide a power-efficient, high capability solution for AI processing on the Edge,” said Phil Dworsky, Global Head of Strategic Alliances at SiFive. “Deeply embedded applications can benefit from the combination of compact SiFive Essential™ processors with BrainChip’s Akida-E efficient processors; more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip’s Akida-S or Akida-P neural processors.”
Phil Dworsky, Global Head of Strategic Alliances, SiFive
“Ai Labs is excited about the introduction of BrainChip’s 2nd generation Akida neuromorphic IP, which will support vision transformers and TENNs. This will enable high-end vision and multi-sensory capability devices to scale rapidly. Together, Ai Labs and BrainChip will support our customers’ needs to address complex problems,” said Bhasker Rao, Founder of Ai Labs. “Improving development and deployment for industries such as manufacturing, oil and gas, power generation, and water treatment, preventing costly failures and reducing machine downtime.”
Bhasker Rao, Founder, Ai Labs
“We see an increasing demand for real-time, on-device, intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”
Roger Wendelken, Senior Vice President IoT and Infrastructure Business Unit, Renesas
“We see a growing number of predictive industrial (including HVAC and motor control), automotive (including fleet maintenance), building automation, remote digital health equipment and other AIoT applications that use complex models with minimal impact to product BOM and need faster real-time performance at the edge,” said Nalin Balan, Head of Business Development at Reality AI, a Renesas company. “BrainChip’s ability to efficiently handle streaming high frequency signal data, vision, and other advanced models at the edge can radically improve scale and timely delivery of intelligent services.”
Nalin Balan, Head of Business Development, Reality.ai, a Renesas Company
“Advancements in AI require parallel advancements in on-device learning capabilities while simultaneously overcoming the challenges of efficiency, scalability, and latency,” said Richard Wawrzyniak, Principal Analyst at Semico Research. “BrainChip has demonstrated the ability to create a truly intelligent edge with Akida and moves the needle even more, in terms of how Edge AI solutions are developed and deployed. The benefits of on-chip AI from a performance and cost perspective are hard to deny.”
Richard Wawrzyniak, Principal Analyst, Semico Research
“BrainChip’s cutting-edge neuromorphic technology is paving the way for the future of artificial intelligence, and Drexel University recognizes its immense potential to revolutionize numerous industries. We have experienced that neuromorphic compute is easy to use and addresses real-world applications today. We are proud to partner with BrainChip and advancing their groundbreaking technology, including TENNS and how it handles time series data, which is the basis to address a lot of complex problems and unlocking its full potential for the betterment of society,” said Anup Das, Associate Professor and Nagarajan Kandasamy, Interim Department Head of Electrical and Computer Engineering, Drexel University.
Anup Das, Associate Professor, Drexel University
“Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device,” said Sean Hehir, BrainChip CEO. “By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience.”
Sean Hehir, CEO, BrainChip

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 50 users

Learning

Learning to the Top 🕵‍♂️

BrainChip
@BrainChip_inc


In this Digital CxO Leadership Insights series video, Mike Vizard (@mvizard) of @DigCxO talks with Nandan Nayampally, CMO BrainChip, about how a new class of processors will advance artificial intelligence (AI) at the edge: https://digitalcxo.com/video/leadership-insights-ai-at-the-edge/

Transcript

Mike Vizard: Hello, and welcome to the latest edition of the Digital CxO Leadership Insights series. I’m your host Mike Vizard. Today we’re with Nandan Nayampally, CMO for BrainChip, and they’ve created a processor that mimics the way the brain works. And it’s going to be used in a lot of interesting use cases that we’re going to jump into. Dan, welcome to the show.

Nandan Nayampally: Thanks, Michael.

Mike Vizard: A lot of people are a little dubious of how the brain works. So why is it a good thing to mimic the way our brain works, and what went into building this processor?

Nandan Nayampally: Well, firstly, the brain is probably the most efficient cognitive processor known to man, right? So naturally, there are a lot of good things that come with understanding study of the brain, especially how to learn more efficiently, which is the critical part of artificial intelligence. And what’s generally done is, there’s a lot of very parallel compute. And that’s why GPUs and new accelerators have been created that do a lot of things in parallel. Now, the problem with that is that they’re often computations that aren’t fully used and get thrown away. So it becomes very, very inefficient as you keep getting more and more complex models, right? So things like ChatGPT 3, for example, just to train it on the cloud takes four weeks, $6 million. There are better ways to kind of achieve those kinds of things. And that comes from the study of the brain.

Mike Vizard: So what exactly did you guys do to create this? I mean, how does that processor architecture work? And how long have you been building this thing?

Nandan Nayampally: It’s a very good question. So obviously, there’s a lot of study going on about how neurons work, and how they compute only when needed, right? And trigger forward computation only when needed. The founders of BrainChip, Peter van der Made and Anil Mankar, had been doing this research over the last 15 years. They actually built a lot of neuron models. And then realized that with pure neuron models (there are a number of other companies like IBM and Intel also doing neuromorphic computing, as it’s called), applying it to real-world problems is still far away if you truly build it exactly like the brain functions. So what BrainChip did was, about five years ago, they started applying it to today’s problems. So we have a hybrid approach from a traditional neuromorphic, neuron-driven model, applying a layer that does very well with today’s conventional neural networks, such as the convolutional ones, the deep learning neural networks and transformers. So applying the principles of only executing what is needed, and only executing when it’s needed, improves the efficiency substantially, while still delivering the kind of performance that you need. And when you think about AI in general, everybody thinks that AI is in the cloud. It’s only going to scale when you actually have more intelligent computation at the edge. Otherwise, you’re just going to clog up the network, you’re just going to explode the compute on the cloud.
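To make the “only execute when it’s needed” idea a bit more concrete, here is a toy NumPy sketch (my own illustration under simplifying assumptions, not BrainChip’s implementation): a dense layer that only does work for the non-zero activations, which is essentially why event-driven execution saves computation on sparse, spiky data.

```python
# Toy sketch of event-driven computation (an illustration, not Akida's design):
# a dense layer that skips all work for zero ("silent") activations.
import numpy as np

def dense_event_driven(weights, activations):
    """Compute weights @ activations, touching only non-zero inputs."""
    out = np.zeros(weights.shape[0])
    active = np.nonzero(activations)[0]          # indices of "events"
    for j in active:                             # silent inputs cost nothing
        out += weights[:, j] * activations[j]
    return out, len(active) * weights.shape[0]   # result and MACs actually done

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 1024))
x = np.maximum(rng.standard_normal(1024), 0.0)   # ReLU-style activations
x[rng.random(1024) < 0.4] = 0.0                  # make them sparser still
y, macs_done = dense_event_driven(w, x)
print(f"MACs done: {macs_done} of {w.size} possible ({100 * macs_done / w.size:.0f}%)")
```

The sparser the activations, the less work gets done; that is the same principle the hardware exploits at much finer granularity.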

Mike Vizard: To that end, am I replacing certain classes of processors that exist today? Or is this an entirely new use case where this processor will be used alongside other processors, and we’ll have more of this kind of hybrid processor architecture?

Nandan Nayampally: So yes, so AI is a problem that – it’s a computational problem, right? So you can do it on CPUs, you can do it on GPUs, you can do it on different kinds of accelerators. If you think about it, the AI computation use cases are all growing. So what we’ll see is more and more use cases at the edge that are smarter, that can learn. So for example, today, you have a ring doorbell that recognizes faces, or at least recognizes there is a face, but it keeps reminding you that somebody showed up that you knew already. You don’t want to be disturbed. If your neighbor walks past it, and it says, oh, “Somebody’s at the door,” they are naturally going to walk past. Now if you can train it to say, okay, this is my neighbor, don’t bother me if they’re not, you know, showing up at the door – that’s a use case that is new, right? You could do it on the CPU. You can do it on the GPU. But I think a lot of these use cases become more and more cost-effective and efficient if they’re done with specialized computation. So I believe that we will have strong growth in the types of use cases that this enables. The PricewaterhouseCoopers view is that by 2030, the annual GDP impact from AI is about $15 trillion. And out of that, the AIoT, or artificial intelligence of things industry (that is, hardware, software, services), is going to be over a trillion dollars. So there’s a huge market that’s developing, whether it is healthcare, monitoring vital signs and predicting, right? You can’t do that today, just because the compute, or the technology, is not there to do it in a portable, cost-effective way. You can start doing that on devices that you can embed, or maybe hearing devices that could be a lot more efficient, that can help you either filter noise automatically or learn from your environment. There are customizations that you could do on your device, saying, “hey, your car learns how you drive and helps you drive better,” for example. So there are lots of new use cases that will emerge that drive new computation paradigms like what we’re proposing.

Mike Vizard: How did you solve the training problem at the edge? Because, at least in my understanding, it takes a lot of compute power to train these models. And so how did you get that down to a footprint that’s acceptable from an energy and heat perspective?

Nandan Nayampally: That’s a great question. So I want to be very clear, we’re not training on the edge. Okay? At this point, the benefits of neuromorphics are in being able to learn at the edge, but it still starts from a trained model. Right? So what we do is we take the trained model, and it’s already got features extracted; we use that to learn and extend the classes. So for example, if there’s a model that is recognizing faces, it’s on the device, but you can then teach it to recognize Mike’s face. Okay, so it’s still a face. But now, you know, it’s Mike’s face. And you can add that to similar things. There are applications like pet doors, where they have cameras to allow the pet door to open or not, depending on the type of pet. Today it recognizes cats, dogs and other pets; you can now customize it to say, “okay, this is my cat and don’t let in the neighbor’s cat,” for example.
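As a rough picture of the last-layer, on-device learning being described here (only a sketch with made-up names, not the Akida SDK or its API), you can think of a frozen feature extractor plus a lightweight classifier whose classes can be extended from a handful of samples:

```python
# Sketch of extending classes at the edge: frozen features + class prototypes.
# EdgeClassifier, learn_class and predict are hypothetical names for illustration.
import numpy as np

class EdgeClassifier:
    def __init__(self, feature_extractor):
        self.extract = feature_extractor    # pre-trained and frozen on device
        self.prototypes = {}                # class name -> mean feature vector

    def learn_class(self, name, samples):
        """Add or update a class from a few raw samples, entirely on device."""
        feats = np.stack([self.extract(s) for s in samples])
        self.prototypes[name] = feats.mean(axis=0)   # only weights are kept

    def predict(self, sample):
        f = self.extract(sample)
        scores = {n: float(f @ p) / (np.linalg.norm(f) * np.linalg.norm(p) + 1e-9)
                  for n, p in self.prototypes.items()}   # cosine similarity
        return max(scores, key=scores.get)

# Hypothetical usage: teach a generic face model one specific person.
stand_in_features = lambda img: np.asarray(img, dtype=float)
clf = EdgeClassifier(stand_in_features)
clf.learn_class("generic_face", [np.random.rand(64) for _ in range(5)])
clf.learn_class("mike", [np.random.rand(64) + 1.0 for _ in range(3)])
print(clf.predict(np.random.rand(64) + 1.0))    # expected to print "mike"
```

The point is that the heavy training already happened offline; adding “Mike’s face” only touches the final classification step.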

Mike Vizard: So to that point, will this narrow the amount of drift that we see in AI models over time? When somebody deploys these and we start to collect new data, there seems to be a need to update those models more regularly. So can we narrow that a little bit and kind of get more life out of the model before we need to replace it?

Nandan Nayampally: Yeah, I think you’ve hit the nail on the head. Every time you create a model, it’s expensive. Sending it to the cloud to retrain or customize is expensive. So this gives you a path for it. To some extent, there’ll be more drift, but then you can actually pull it back together in the next generation. And the reality is, with some of these drifts, you don’t even want to go back to the cloud. Because if I’m training it to recognize my pet, or my face, I don’t want it to go to the cloud. That’s my privacy. That’s my security associated with it. So there’ll be some things that are relevant that need to go back to the cloud, and some things that are personalized, that may not.

Mike Vizard: As we go along, how will you expose this to developers and to the data scientists that are out there? Is there some sort of SDK that they’ll invoke, or a set of APIs? Or how will we build the software stack for this?

Nandan Nayampally: Yeah, this is an excellent question, right? We can have the most elegant hardware; if it’s not usable in a developer-friendly fashion, it doesn’t mean anything. So I’ll make one comment as well on our learning, which is that one of the key things about it, because it’s on-device and last-layer only, is that we don’t actually even save the data on the device. It’s only stored as weight adjustments in the network. So it adds to the security, because even if the device is compromised, they only have weights, and that doesn’t really give them anything. So there’s a security and privacy layer that goes with it. So we do have a very intelligent runtime that goes on top of our hardware, and that has an API for developers to utilize. We also plug into a lot of the frameworks that exist today, and partners like Edge Impulse, who provide a developer extension environment that can help people tune what they need to do for our platform.
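The privacy point about only weights being stored can also be sketched (again, an assumption-laden illustration rather than the actual BrainChip runtime): whatever is learned on device persists as weight values, never as the raw images or audio used to learn them.

```python
# Sketch: persist only the learned last-layer weights, never the raw samples.
# save_personalization / load_personalization are hypothetical helper names.
import json
import numpy as np

def save_personalization(path, prototypes):
    """Store learned class weights; the raw training samples are discarded."""
    payload = {name: vec.tolist() for name, vec in prototypes.items()}
    with open(path, "w") as f:
        json.dump(payload, f)

def load_personalization(path):
    with open(path) as f:
        return {name: np.asarray(vec) for name, vec in json.load(f).items()}

# Continuing the hypothetical EdgeClassifier sketch above:
#   save_personalization("profile.json", clf.prototypes)
#   clf.prototypes = load_personalization("profile.json")
# Someone who reads profile.json gets weight vectors, not the user's photos.
```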

Mike Vizard: So how long before we started to see these use cases? You guys just launched the processors; it usually takes some time for all this to come together. And for it to manifest itself somewhere, what’s your kind of timeline?

Nandan Nayampally: So I think the way to think about it is, the real kind of growth in more radical, innovative use cases is probably, you know, a few months out, a year out. But what we’re saying is there are use cases that exist on more high-powered devices today that actually can now migrate to much more efficient edge devices, right? And so I do want to make sure people understand, when we talk about edge, it’s not the kind of brick that’s sitting next to your network and still driven by a fan, right? It’s smaller than the bigger bricks, but it is still a brick. What we’re talking about is literally at-sensor, always-on intelligence, let’s say whether it’s a heart rate monitor, for example, or, you know, a respiratory rate monitor – you could actually have a very, very compact device of that kind. And so one of the big benefits that we see is, let’s say, video object detection: today it needs quite a bit of high-power compute to do HD video object detection and target tracking. Now imagine you could do that in a battery-operated, very low form factor, cost-effective device, right? So suddenly, your dash cam, with additional capabilities built into it, could become much more cost-effective or more capable. So we see a lot of the use cases that exist today coming in. And then we see a number of use cases, like vital signs prediction or remote healthcare, now getting much closer and cheaper, because you don’t have to send everything to the cloud. You can get a really good idea before you have to send anything to the cloud, and then you’re sending less data; it’s already pre-qualified before you send it, rather than finding out through a much longer cycle. Does that make sense?

Mike Vizard: Sure. Are you at all concerned that the Intels and the Nvidias of the world will go build something similar? You mentioned IBM, but what ultimately makes your approach unique and, you know, something that is sustainable as a platform that people should build on today?

Nandan Nayampally: That’s an excellent question. And so the Intels, the IBMs are building their platforms, but more often than not, they are building their platforms for their needs. Right? Nvidia is selling platforms that are much more scalable, but again, they tend towards the much higher end of the market, rather than the very sensor level, which is a different cost structure, a different set of requirements. So we are geared towards building for embedded solutions. And so both our business model, as well as our design, is much more geared from the ground up for very, very low resource requirements, whether it’s memory, whether it’s power, whether it’s, you know, silicon, right? So we are focused on building cost-effective solutions and enabling others. And because we are an IP model – we license our technology to customers – customers could actually build their own specialized systems on chip, or ASICs as they call it, application-specific ICs, that are tuned to their requirements. We’re not trying to sell chips into that market. We’re licensing technology that enables people to build their own specialized solutions. So a washing machine manufacturer that knows what they need to do intelligently may use microcontrollers today and say, “Okay, I’ve got this done. But in a year’s time or two years’ time, once I’ve perfected this, I’m actually going to build my own chip because the volumes and the scale require it.” Same thing with camera manufacturers; they may choose to have their own specialized IC design because it cuts their overall costs when they strive to scale.

Mike Vizard: Alright, folks, you heard it here. AI is coming to the edge, but you should not assume it’s going to be running on a processor that looks like anything we have today. Hey, Dan, thanks for being on the show.

Nandan Nayampally: Thanks, Michael. Thanks for having us.

Mike Vizard: All right, and thank you all for watching the latest edition of the Digital CxO Leadership Insights series. I’m your host Mike Vizard. You can find this episode and others on the digitalcxo.com website, and we invite you to check them all out. Once again, thanks for watching.
Wow.

It's a must-watch video, very informative. Fantastic to have Nandan as CMO.

It's great to be a shareholder 🏖
 
  • Like
  • Love
  • Fire
Reactions: 38 users

buena suerte :-)

BOB Bank of Brainchip
View attachment 31526

I assume the 22 is a typo and meant to be 2023
I'm wondering if this means the samples are now available!??.... nice post @IloveLamp

“The baby is alive and doing well,” he said. “We will have more on this in the coming weeks, but so far so good, we are quite happy with where we are.”
 
  • Like
  • Fire
Reactions: 17 users

Cardpro

Regular
No offence, but it doesn't sit well with me when people say "Don't worry. Just day traders doing blah blah blah. It's short-term blah blah blah".

Our SP jumping for one day then slowly getting eroded away seems to be a common thing. Wouldn't surprise me if yesterday's gain will be gone by end of next week. Then it will be back to square one. Waiting for the next 4C. Getting over-excited about partnerships.
The sight of a blue sky seems to be the only "short-term" thing that's happening right now.

Frustrated, but still holding.
Not advice.
Although I hate to say this, unless we see multiple IP contracts or significant revenue on our financial statements, the share price will continue to get manipulated hard... (it gets manipulated even with strong financials lol)

I also get frustrated from time to time and have complained a lot about the lack of updates to shareholders, but I'm glad to see that management's main focus was to build partnerships and join ecosystems to be the key player and industry standard for the Edge AI sector.

As they stated in the Annual Report and 2nd Gen Platform announcement, they will be focusing on executing the IP agreements and generating revenue growth. The rerate should be coming soon with new IP agreements IMO :)

Mr Hehir added, “The development of the second generation of Akida was strongly influenced by our customers’ feedback and driven by our extensive market engagement. We have recently expanded our sales organisation to become truly global and we are focused on executing more IP licence agreements and generating revenue growth over coming years.”
 
  • Like
  • Fire
  • Wow
Reactions: 12 users