BRN Discussion Ongoing

Boab

I wish I could paint like Vincent

https://digitalcxo.com/video/leadership-insights-ai-at-the-edge/

Transcript​

Mike Vizard: Hello, and welcome to the latest edition of the Digital CxO Leadership Insights series. I’m your host Mike Vizard. Today we’re with Nandan Nayampally, CMO for BrainChip, and they’ve created a processor that mimics the way the brain works. And it’s going to be used in a lot of interesting use cases that we’re going to jump into. Dan, welcome to the show.

Nandan Nayampally: Thanks, Michael.

Mike Vizard: A lot of people are a little dubious of how the brain works. So why is it a good thing to mimic the way our brain works, and what went into building this processor?

Nandan Nayampally: Well, firstly, the brain is probably the most efficient cognitive processor known to man, right? So naturally, there are a lot of good things that come with the study of the brain, especially how to learn more efficiently, which is the critical part of artificial intelligence. And what’s generally done is a lot of very parallel compute. And that’s why GPUs and new accelerators have been created that do a lot of things in parallel. Now, the problem with that is there are often computations that aren’t fully used and get thrown away. So it becomes very, very inefficient as you keep getting more and more complex models, right? So things like GPT-3, for example: just to train it on the cloud takes four weeks, $6 million. There are better ways to achieve those kinds of things. And that comes from the study of the brain.
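
To make the “only compute what and when needed” point concrete, here is a toy NumPy sketch (an illustration of the general principle only, not BrainChip’s implementation). A dense layer performs every multiply regardless of input activity, while an event-driven layer touches only the weights for inputs that actually fired:

import numpy as np

# Toy comparison: dense vs. event-driven computation of one layer.
rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 64))                       # 256 inputs -> 64 neurons
inputs = rng.normal(size=256) * (rng.random(256) < 0.05)   # ~5% of inputs "spike"

# Dense approach: all 256 * 64 = 16,384 multiplies happen regardless of activity.
dense_out = inputs @ weights

# Event-driven approach: only touch weight rows for inputs that fired.
active = np.flatnonzero(inputs)
event_out = weights[active].T @ inputs[active]             # ~a few hundred multiplies

assert np.allclose(dense_out, event_out)
print(f"active inputs: {active.size}/256, multiplies saved: {1 - active.size / 256:.0%}")

The two results are identical; the event-driven path simply skips the work the dense path throws away, which is the efficiency argument being made here.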

Mike Vizard: So what exactly did you guys do to create this? I mean, how does that processor architecture work? And how long have you been building this thing?

Nandan Nayampally: It’s a very good question. So obviously, there’s a lot of study going on about how neurons work, and how they compute only when needed, right? And trigger forward computation only when needed. The founders of BrainChip, Peter van der Made and Anil Mankar, had been doing this research over the last 15 years. They actually built a lot of neuron models. And then they realized that with pure neuron models – and there are a number of other companies, like IBM and Intel, also doing neuromorphic computing, as it’s called – applying them to real-world problems is still far away if you truly build exactly like the brain functions. So what BrainChip did was, about five years ago, they started applying it to today’s problems. So we have a hybrid approach: starting from a traditional neuromorphic, neuron-driven model, we apply a layer that does very well with today’s conventional neural networks, such as convolutional networks, deep learning networks and transformers. So applying the principles of only executing what is needed, and only executing when it’s needed, improves the efficiency substantially, while still delivering the kind of performance that you need. And when you think about AI in general, everybody thinks that AI is in the cloud. It’s only going to scale when you actually have more intelligent computation at the edge. Otherwise, you’re just going to clog up the network, you’re just going to explode the compute on the cloud.

Mike Vizard: To that end, am I replacing certain classes of processors that exist today? Or is this an entirely new use case where this processor will be used alongside other processors, and we’ll have more of a hybrid processor architecture?

Nandan Nayampally: So yes, AI is a computational problem, right? So you can do it on CPUs, you can do it on GPUs, you can do it on different kinds of accelerators. If you think about it, the AI computation use cases are all growing. So what we’ll see is more and more use cases at the edge that are smarter, that can learn. So for example, today you have a Ring doorbell that recognizes faces, or at least recognizes there is a face, but it keeps reminding you that somebody showed up that you knew already, when you don’t want to be disturbed. If your neighbor walks past it, it says, “Somebody’s at the door,” when they were naturally just going to walk past. Now if you can train it to say, okay, this is my neighbor, don’t bother me if they’re not, you know, showing up at the door – that’s a use case that is new, right? You could do it on the CPU. You could do it on the GPU. But I think a lot of these use cases become more cost-effective and efficient if they’re done with specialized computation. So I believe that we will have strong growth in the types of use cases that this enables. The PricewaterhouseCoopers view is that by 2030 the annual GDP impact from AI will be about $15 trillion. And out of that, they see the AIoT, or artificial intelligence of things, industry – that is, hardware, software and services – being over a trillion dollars. So there’s a huge market developing. Take healthcare, monitoring vital signs and predicting, right? You can’t do that today because the technology is not there to do it in a portable, cost-effective way; you can start doing that on devices that you can embed. Or maybe hearing devices that could be a lot more efficient, that can help you filter noise automatically and learn from your environment. There are customizations that you could do on your device, saying, “hey, your car learns how you drive and helps you drive better,” for example. So there are lots of new use cases that will emerge that drive new computation paradigms like what we’re proposing.

Mike Vizard: How did you solve the training problem at the edge? Because, at least in my understanding, it takes a lot of compute power to train these models. So how did you get that down to a footprint that’s acceptable from an energy and heat perspective?

Nandan Nayampally: That’s a great question. So I want to be very clear: we’re not training on the edge, okay? At this point, the benefit of neuromorphics is in being able to learn at the edge, but it still starts from a trained model, right? So what we do is we take the trained model, which has already got features extracted; we use that to learn and extend the classes. So for example, there’s a model that recognizes faces, and it’s on the device, but you can then teach it to recognize Mike’s face. Okay, so it’s still a face, but now, you know, it’s Mike’s face. And you can extend it with similar things. There are applications like pet doors, which have cameras that decide whether the pet door opens or not depending on the type of pet; today they distinguish between cats and dogs and other pets, and you can now customize one to say, “okay, this is my cat, and don’t let in the neighbor’s cat,” for example.
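
The pattern described here – a frozen, pre-trained feature extractor whose classes are extended on-device – can be sketched as a nearest-prototype classifier. This is a hypothetical illustration of the general idea, not BrainChip’s actual algorithm; extract_features() below stands in for the pre-trained backbone:

import numpy as np

class FewShotHead:
    """Adds new classes on-device by storing one weight vector (prototype)
    per class; raw examples are discarded after each update."""
    def __init__(self, feature_dim):
        self.feature_dim = feature_dim
        self.prototypes = {}   # class name -> (running-mean feature, count)

    def learn(self, name, feature):
        proto, count = self.prototypes.get(name, (np.zeros(self.feature_dim), 0))
        self.prototypes[name] = ((proto * count + feature) / (count + 1), count + 1)

    def classify(self, feature):
        if not self.prototypes:
            return None
        return min(self.prototypes,
                   key=lambda n: np.linalg.norm(self.prototypes[n][0] - feature))

head = FewShotHead(feature_dim=128)
# head.learn("mike", extract_features(frame))    # teach it Mike's face in a few shots
# head.classify(extract_features(new_frame))     # -> "mike", or another learned class

Only the per-class weight vectors persist; the raw frames are thrown away after each update, which also matches the privacy point made later in the interview.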

Mike Vizard: So to that point, will this narrow the amount of drift that we see in AI models over time? When somebody deploys these and we start to collect new data, there seems to be a need to update those models more regularly. So can we narrow that a little bit and kind of get more life out of the model before we need to replace it?

Nandan Nayampally: Yeah, actually, I think you’ve hit the nail on the head. Every time you create a model, it’s expensive. Sending it to the cloud to retrain or customize is expensive. So this gives you a path for it. To some extent there’ll be more drift, but then you can actually pull it back together in the next generation. And the reality is, some of these drifts you don’t even want to go back to the cloud. Because if I’m training it to recognize my pet, or my face, I don’t want that to go into the cloud. That’s my privacy. That’s my security associated with it. So there’ll be some things that are relevant that need to go back to the cloud, and some things that are personalized that may not.

Mike Vizard: As we go along, how will you expose this to developers and to the data scientists that are out there? Is there some sort of SDK that they’ll invoke, or a set of APIs? How will we build the software stack for this?

Nandan Nayampally: Yeah, this is an excellent question, right? We can have the most elegant hardware, but if it’s not usable in a developer-friendly fashion, it doesn’t mean anything. I’ll make one comment as well on our learning, which is that one of the key things about it – because it’s on-device and last-layer only – is that we don’t actually save even the data on the device. It’s only stored as weight adjustments in the network. So it adds to the security, because even if the device is compromised, an attacker only gets weights, which don’t reveal the underlying data. So there’s a security and privacy layer that goes with it. We do have a very intelligent runtime that goes on top of our hardware, and that has an API for developers to utilize. We also plug into a lot of the frameworks that exist today, and partners like Edge Impulse provide a developer environment that can help people tune what they need to do for our platform.
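
From a developer’s point of view, the runtime-plus-API layer described here might be shaped something like the stub below. Every name in it is invented for illustration; it is not BrainChip’s actual SDK (the real tooling is the MetaTF stack mentioned elsewhere in this thread):

class EdgeRuntime:
    """Hypothetical interface sketch for an edge-AI runtime; all names invented."""

    def load(self, model_path):
        """Load a pre-trained, converted model onto the accelerator."""
        ...

    def infer(self, sensor_frame):
        """Run one inference pass; acceleration is managed transparently."""
        ...

    def learn(self, sensor_frame, label):
        """On-device, last-layer learning: persist only the weight updates,
        never the raw frame -- the privacy property described above."""
        ...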

Mike Vizard: So how long before we start to see these use cases? You guys just launched the processors, and it usually takes some time for all this to come together and manifest itself somewhere. What’s your kind of timeline?

Nandan Nayampally: So I think the way to think about it is, the real growth in more radical, innovative use cases is probably, you know, a few months out, a year out. But what we’re saying is there are use cases that exist on more high-powered devices today that can now migrate to much more efficient edge devices, right? And I do want to make sure people understand: when we talk about edge, it’s not the brick that’s sitting next to your network, still driven by a fan. It’s smaller than the bigger bricks, but it is still a brick. What we’re talking about is literally at-sensor, always-on intelligence – let’s say a heart rate monitor, for example, or, you know, a respiratory rate monitor; you could actually have a very, very compact device of that kind. And one of the big benefits that we see is, let’s say, video object detection today needs quite a bit of high-power compute to do HD video object detection and target tracking. Now imagine you could do that in a battery-operated, very low form factor, cost-effective device, right? So suddenly your dash cam, with additional capabilities built in, could become much more cost-effective or more capable. So we see a lot of the use cases that exist today coming in. And then we see a number of use cases, like vital signs prediction or remote healthcare, getting much closer and cheaper, because you don’t have to send everything to the cloud. You can get a really good idea before you have to send anything to the cloud; you’re sending less data, and it’s already pre-qualified before you send it, rather than finding out through a cycle that takes a lot more time. Does that make sense?

Mike Vizard: Sure. Are you at all concerned that the Intels and the Nvidias of the world will go build something similar? You mentioned IBM, but what ultimately makes your approach unique and, you know, sustainable as a platform that people should build on today?

Nandan Nayampally: That’s an excellent question. The Intels and the IBMs are building their platforms, but more often than not, they are building them for their own needs, right? Nvidia is selling platforms that are much more scalable, but again, they tend to go toward the much higher end of the market rather than down at the sensor level, which is a different cost structure, a different set of requirements. So we are geared towards building for embedded solutions. Both our business model and our design are geared from the ground up for very, very low resource requirements, whether it’s memory, whether it’s power, whether it’s, you know, silicon, right? So we are focused on building cost-effective solutions. And because we are an IP model – we license our technology to customers – customers can actually build their own specialized systems-on-chip, or ASICs as they’re called, application-specific ICs, tuned to their requirements. We’re not trying to sell chips into that market. We’re licensing technology that enables people to build their own specialized solutions. So a washing machine manufacturer that knows what it needs to do intelligently may use microcontrollers today and say, “Okay, I’ve got this done. But in a year’s or two years’ time, when I’ve perfected this, I’m actually going to build my own chip, because the volumes and the scale require it.” Same thing with camera manufacturers; they may choose to have their own specialized IC design because it cuts their overall costs as they scale.

Mike Vizard: Alright, folks, you heard it here. AI is coming to the edge, but you should not assume it’s going to be running on a processor that looks like anything we have today. Hey, Dan, thanks for being on the show.

Nandan Nayampally: Thanks, Michael. Thanks for having us.

Mike Vizard: All right, and thank you all for watching the latest edition of the Digital CxO Leadership Insights series. I’m your host Mike Vizard. You can find this episode and others on the digitalcxo.com website, and we invite you to check them all out. Once again, thanks for watching.
Nandan really knows his stuff. Sensational❤️
 
  • Like
  • Love
  • Fire
Reactions: 20 users

Damo4

Regular
Although I hate to say this, unless we see multiple IP contracts or significant revenue on our financial statements, the share price will continue to get manipulated hard... (it gets manipulated even with strong financials lol)

I also get frustrated from time to time and have complained a lot about the lack of updates to shareholders, but I'm glad to see that management's main focus was to build partnerships & join ecosystems to become the key player & industry standard for the Edge AI sector.

As they stated in the Annual Report and 2nd Gen Platform announcement, they will be focusing on executing the IP agreements and generating revenue growth. The rerate should be coming soon with new IP agreements IMO :)

Mr Hehir added, “The development of the second generation of Akida was strongly influenced by our customers’ feedback and driven by our extensive market engagement. We have recently expanded our sales organisation to become truly global and we are focused on executing more IP licence agreements and generating revenue growth over coming years.”

The way I see it is customer acquisition, sales and then revenue and growth take time.
It's not like we have a Theranos box; this is proven tech.
Unfortunately, if anyone wants a sharp increase in the SP in the next year, they are likely to be disappointed.
With an investment horizon of over a year, which really isn't that long, I personally don't care about the SP attacks, and as I posted in the other thread, this is a great buying opportunity. Given the progress this company has achieved in the last 12-24 months while the SP has stayed just as low, it's easy money IMO.
 
  • Like
  • Fire
Reactions: 15 users

Mccabe84

Regular
The way I see it is customer acquisition, sales and then revenue and growth take time.
It's not like we have a Theranos box; this is proven tech.
Unfortunately, if anyone wants a sharp increase in the SP in the next year, they are likely to be disappointed.
With an investment horizon of over a year, which really isn't that long, I personally don't care about the SP attacks, and as I posted in the other thread, this is a great buying opportunity. Given the progress this company has achieved in the last 12-24 months while the SP has stayed just as low, it's easy money IMO.
I would be happy with another 2 IP deals signed by the end of the year. Surely that’s not asking too much.
 
  • Like
  • Fire
Reactions: 15 users

IloveLamp

Top 20
[Two LinkedIn screenshots attached]
 
  • Like
  • Fire
  • Love
Reactions: 32 users

TechGirl

Founding Member
Morning All,

So much exciting news going on.

Our new Integration Partner Teksun seems like a great fit for us, with endless use cases...

Teksun recent blog post below





IoT | AI | Embedded Product Development Company

How AI and IoT are Transforming Smart Homes?​


Feb 23 2023

Artificial Intelligence IoT Product Engineering


AI and the Internet of Things are driving the expansion of smart home markets. As home automation solutions have become more affordable, smart living with automation and integrated AI-IoT is no longer considered a luxury. Smart home control can be provided by local hardware or cloud-based intelligence.


According to a recent study, the smart home market is expected to grow at a 27.01% annual rate and reach a value of $537 billion by 2030. AI is one of the driving forces behind this expansion.

As AI continues to expand automation’s capabilities, such as replicating human decision-making and anticipating human behavior, it offers huge benefits in terms of convenience and smart support.

AI in Smart Homes​

The application of AI in managing smart home infrastructure helps gather data from home automation devices, anticipate user behavior, provide maintenance data, and improve data security and privacy. Because it can carry out certain activities automatically, its presence in home automation lets us control our home appliances, safeguard our houses, and so on, while reducing the need for human intervention.
This automation relies heavily on the data acquired by the devices, which is used to train a range of machine learning and deep learning models. Smart-home-linked devices provide the data, and the AI learns from the data to perform particular activities without human interaction.
For example, Teksun thermostats learn automatically from their customers’ behavior, then use that information to adjust the temperature when someone is home or go energy-efficient when no one is home.
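
A minimal sketch of that “learning thermostat” idea: record the user’s manual setpoints per hour of day, then predict the preferred temperature from the running history, with an energy-saving setback when nobody is home. A toy illustration only, not any vendor’s algorithm:

from collections import defaultdict

class LearningThermostat:
    def __init__(self, default=20.0):
        self.history = defaultdict(list)   # hour of day -> observed setpoints
        self.default = default

    def observe(self, hour, setpoint):
        """Record a manual adjustment -- the 'behavior' the device learns from."""
        self.history[hour].append(setpoint)

    def predict(self, hour, occupied):
        if not occupied:
            return self.default - 4.0      # energy-saving setback when empty
        temps = self.history.get(hour)
        return sum(temps) / len(temps) if temps else self.default

t = LearningThermostat()
t.observe(7, 21.5); t.observe(7, 22.0)
print(t.predict(7, occupied=True))    # -> 21.75
print(t.predict(7, occupied=False))   # -> 16.0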

The Internet of Things in Smart Homes​

IoT allows connected devices, vehicles, buildings, and other items embedded with software, sensors, and internet connectivity to communicate with one another; they may either be operated remotely or relay data to a distant user. With the help of AI, these linked devices can monitor the status of every device connected to the same network and offer real-time data.

Important Considerations for Any Smart Home System​


1. Data security and privacy are the two most important issues that any AI and IoT-enabled smart home should solve. Every connected device leaves digital traces of personal data that must be kept safe and secure.

2. Proper AI and IoT integration enables devices to perform more automatically and with expanded features. Security cameras, for example, often warn of threats automatically, but with correct AI integration, they will proactively alert humans to take charge of the situation when something goes wrong.

3. Interoperability is a critical issue that must be addressed by any home automation tool. Smart home devices should be made interoperable so that new use cases such as energy saving, appliance diagnostics, disaster damage prevention, and so on can be applied to the same smart devices.

4. Better customer service is an essential component of any organization. People living in smart homes may face issues inside their IoT environment, ranging from minor troubleshooting to major data protection concerns. Companies that deliver superior customer service will always be ahead of the competition.

5. Incorporating voice commands allows the user to save time and money and alleviates certain laborious activities. Voice control of devices and home appliances should be prioritized, because providing user-friendly services always benefits the business.

How will the convergence of AI and IoT affect smart homes?​

AI in smart homes can translate raw sensor data from connected smart devices into beneficial behavior patterns in our daily lives. AI-enabled gadgets learn the occupants’ patterns and anticipate the best experience. They will not turn on the heating, fan, or lights when nobody is home, and will automatically lock the doors.
A perfect scenario would be for a user to prepare meals in a smart oven or stove while AI checks the meal’s internal temperature. If the meal reaches the ideal temperature, the AI can lower the cooking temperature to prevent it from burning. The AI would notify the user when the meal was ready to be taken from the oven or burner.
Artificial intelligence (AI) may be able to learn and anticipate a user’s desires. For example, a smart kitchen may be set up to begin cooking before the user reaches home.
The promise of IoT and AI isn’t restricted to new homes; there are a variety of options that allow current devices, such as switches, to be converted to Smart Switches and old air conditioners to be updated to provide remote access via Smart Apps or AI-based on cloud servers, among other things.
Wireless solutions facilitate deployment, requiring no major electrical or common labor to get the user to Smart Living. Almost any present switch, air conditioner, or light can be converted to IoT-enabled via various brand-agnostic retrofit methods.
The combination of AI and IoT in the smart home is a winning combo for tech-savvy households. AI-enabled personalization, rather than historical usage, can assist your home in keeping track of how you go about your everyday routine. AI and smart home automation have come to a crossroads. Significant gains will be realized as technology progresses and more device integration becomes available.

In a Nutshell

Technology is changing and combining Smart Home requirements on a large scale. As connected devices outnumber humans, the concept of a smarter, more convenient home is gaining traction. Home automation has virtually endless applications.
Smart Homes, which blend AI and IoT, appeal to the technologically savvy while cutting energy expenditures and enhancing security. As a result, smart homes enable and safeguard the next level of technological existence.
The Internet of Things (IoT) and artificial intelligence (AI) are here to stay and will dramatically improve Smart Home automation.
 
  • Like
  • Fire
  • Love
Reactions: 39 users

Cardpro

Regular
The way I see it is customer acquisition, sales and then revenue and growth take time.
It's not like we have a Theranos box; this is proven tech.
Unfortunately, if anyone wants a sharp increase in the SP in the next year, they are likely to be disappointed.
With an investment horizon of over a year, which really isn't that long, I personally don't care about the SP attacks, and as I posted in the other thread, this is a great buying opportunity. Given the progress this company has achieved in the last 12-24 months while the SP has stayed just as low, it's easy money IMO.
IMO, the biggest risk for BrainChip is that the technology might not be adopted by the industry (though that's significantly de-risked now IMO).

Having proven technology & being the best solution doesn't necessarily mean it will be adopted by the industry which is the reason why our management was focused on establishing multiple partnerships with industry leaders & joining key ecosystems.

I disagree that a sharp increase won't happen in the next year; if we land multiple IP agreements, this will further validate that our technology will be adopted by the industry, and it will be reflected in the SP both in the short term and the long term.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 18 users

TechGirl

Founding Member
Teksun's Machine Learning section on their website, again right up our alley, & they do Natural Language Processing amongst others.


MACHINE LEARNING


We assist you in developing and deploying personalized and data-intensive solutions based on Machine Learning Services, to let you counter business challenges.

Instilling Intelligence​


Teksun delivers you the new-age apps empowered with pattern recognition, artificial intelligence, and mathematical predictability, which collectively provide you higher scalability. Our technical developers are experts in optimally utilizing and placing machine learning in anomaly detection, algorithm design, future forecasting, data modeling, spam filtering, predictive analytics, product recommendations, etc.

Get Your First Consultation for FREE

Our Offerings

The offerings that we present here are just a gist of options and alternatives that we have for you inside the box. Catch sight of these to know the scope of our services:

• Deep Learning
• Predictive Analytics
• Image Analytics
• Video Analytics
• Natural Language Processing



We also provide Neural Network Development and Machine Learning Solutions. Looking for a better start for your project? Partner with our expert consultants to draft out the premier ways of undertaking it.

Get Started​

It’s an apt time to take off with us!



What makes us unique

Our uniqueness is our ability to serve you ceaselessly, with real-time updates at every project phase.

1. We provide Machine Learning Consulting, assisting you all the way from project initiation to deployment.
2. We furnish you with Supervised/Unsupervised ML services on both structured and unstructured data.
3. Our experts employ different algorithms and models to deliver the required service, such as NLP, Decision Trees, etc.
4. The tools and technologies we use are the best in the market, a few of which are MongoDB, Cassandra, and so on.
5. Our constantly updated and wide range of AI models imparts your business with high performance & scalability.
6. Our experts take a personalized approach while delivering you the finest of Machine Learning Services.






Hire Developer​

Develop with the industry masters!
It’s the selection of technologies that carves out a solution's full potential. Our top developers assure your Machine Learning solutions of the finest tools as per project and budget needs.


Industries we serve

We bring across a broad gamut of services, along with a versatile approach. Hence we are also able to serve a wide range of industries, whether it be Forensic, Financial, Healthcare, Defence, or any other.
• Consumer Electronics
• Wearables
• Industry 4.0
• Biotech
• Home Automation
• Agritech
• Security & Surveillance
• Health Care
• Drones & Autonomy
• Automotive



Every project needs a different kind of attention and service. Our highly experienced consultants and technicians arrange tailor-made plans and strategies to manage your varied projects.

Kick-Off Project​

Surge on your success journey!
 
  • Like
  • Fire
  • Love
Reactions: 35 users

hotty4040

Regular
Great reminder of why.
Do your own research.
Ignore the manipulators of markets.

Logic says if you want to buy a great company at a low price, so will institutions – but they have the ability and resources to influence the market and profit from lending to short traders.

BrainChip Introduces Second-Generation Akida Platform​

Introduces Vision Transformers and Spatial-Temporal Convolution for radically fast, hyper-efficient and secure Edge AIoT products, untethered from the cloud
Laguna Hills, Calif. – March 6, 2023
BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, neuromorphic AI IP, today announced the second generation of its Akida™ platform that drives extremely efficient and intelligent edge devices for the Artificial Intelligence of Things (AIoT) solutions and services market that is expected to be $1T+ by 2030. This hyper-efficient yet powerful neural processing system, architected for embedded Edge AI applications, now adds efficient 8-bit processing to go with advanced capabilities such as time domain convolutions and vision transformer acceleration, for an unprecedented level of performance in sub-watt devices, taking them from perception towards cognition.
The second generation of Akida now includes Temporal Event Based Neural Net (TENN) spatial-temporal convolutions that supercharge the processing of raw time-continuous streaming data, such as video analytics, target tracking, audio classification, analysis of MRI and CT scans for vital signs prediction, and time series analytics used in forecasting and predictive maintenance. These capabilities are critically needed in industrial, automotive, digital health, smart home and smart city applications. TENNs allow for radically simpler implementations by consuming raw data directly from sensors – drastically reducing model size and the operations performed, while maintaining very high accuracy. This can shrink design cycles and dramatically lower the cost of development.
Another addition to the second generation of Akida is Vision Transformer (ViT) acceleration, a leading-edge neural network architecture that has been shown to perform extremely well on various computer vision tasks, such as image classification, object detection, and semantic segmentation. This powerful acceleration, combined with Akida’s ability to process multiple layers simultaneously and hardware support for skip connections, allows it to self-manage the execution of complex networks like ResNet-50 completely in the neural processor, without CPU intervention, and minimizes system load.
The Akida IP platform has a unique ability to learn on the device for continuous improvement and data-less customization that improves security and privacy. This, combined with the efficiency and performance available, enables very differentiated solutions that until now have not been possible. These include secure, small-form-factor devices like hearables and wearables that take raw audio input, and medical devices for monitoring heart and respiratory rates and other vitals that consume only microwatts of power. This can scale up to HD-resolution vision solutions delivered through high-value, battery-operated or fanless devices, enabling a wide variety of applications, from surveillance systems to factory management and augmented reality, to scale effectively.
“We see an increasing demand for real-time, on-device intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”
“Advancements in AI require parallel advancements in on-device learning capabilities while simultaneously overcoming the challenges of efficiency, scalability, and latency,” said Richard Wawrzyniak, principal analyst at Semico Research. “BrainChip has demonstrated the ability to create a truly intelligent edge with Akida and moves the needle even more in terms of how Edge AI solutions are developed and deployed. The benefits of on-chip AI from a performance and cost perspective are hard to deny.”
“Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device,” said Sean Hehir, BrainChip CEO. “By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience.”
Akida’s software and tooling further simplifies the development and deployment of solutions and services with these features:
  • An efficient runtime engine that autonomously manages model accelerations completely transparent to the developer
  • MetaTF™ software that developers can use with their preferred framework, like TensorFlow/Keras, or development platform, like Edge Impulse, to easily develop, tune, and deploy AI solutions (see the sketch after this list).
  • Supports all types of Convolutional Neural Networks (CNN), Deep Learning Networks (DNN), Vision Transformer Networks (ViT) as well as Spiking Neural Networks (SNNs), future-proofing designs as the models get more advanced.
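
For a sense of what that looks like in practice, here is a hedged sketch of the MetaTF-style workflow: build and train in Keras, quantize, then convert for the Akida runtime. The quantize/convert entry points follow the publicly documented cnn2snn package, but exact signatures vary by MetaTF release, so treat the calls as assumptions rather than a definitive recipe:

import tensorflow as tf
from cnn2snn import quantize, convert   # BrainChip's MetaTF conversion package

# A small Keras model, trained as usual with model.fit(...).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Quantize weights/activations to low bit-width, then convert to an Akida model
# that runs on hardware or the bundled simulator. Signatures are assumptions
# based on cnn2snn documentation and may differ in the current release.
quantized = quantize(model, weight_quantization=4, activ_quantization=4)
akida_model = convert(quantized)
# akida_model.predict(...) for inference; on-device class learning is exposed
# through the Akida model's learning APIs.
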
Akida comes with a Models Zoo and a burgeoning ecosystem of software, tools, and model vendors, as well as IP, SoC, foundry and system integrator partners. BrainChip is engaged with early adopters on the second-generation IP platform. General availability will follow in Q3 2023.
See what they’re saying:
“At Prophesee, we are driven by the pursuit of groundbreaking innovation addressing event-based vision solutions. Combining our highly efficient neuromorphic-enabled Metavision sensing approach with Brainchip’s Akida neuromorphic processor holds great potential for developers of high-performance, low-power Edge AI applications. We value our partnership with BrainChip and look forward to getting started with their 2nd generation Akida platform, supporting vision transformers and TENNs,” said Luca Verre, Co-Founder and CEO at Prophesee.
Luca Verre, Co-Founder and CEO, Prophesee
“BrainChip and its unique digital neuromorphic IP have been part of IFS’ Accelerator IP Alliance ecosystem since 2022,” said Suk Lee, Vice President of Design Ecosystem Development at IFS. “We are keen to see how the capabilities in Akida’s latest generation offerings enable more compelling AI use cases at the edge.”
Suk Lee, VP Design Ecosystem Development, Intel Foundry Services
“Edge Impulse is thrilled to collaborate with BrainChip and harness their groundbreaking neuromorphic technology. Akida’s 2nd generation platform adds TENNs and Vision Transformers to a strong neuromorphic foundation. That’s going to accelerate the demand for intelligent solutions. Our growing partnership is a testament to the immense potential of combining Edge Impulse’s advanced machine learning capabilities with BrainChip’s innovative approach to computing. Together, we’re forging a path toward a more intelligent and efficient future,” said Zach Shelby, Co-Founder and CEO at Edge Impulse.
Zach Shelby, Co-Founder and CEO, Edge Impulse
“BrainChip has some exciting upcoming news and developments underway,” said Daniel Mandell, Director at VDC Research. “Their 2nd generation Akida platform provides direct support for the intelligence chip market, which is exploding. IoT market opportunities are driving rapid change in our global technology ecosystem, and BrainChip will help us get there.”
Daniel Mandell, Director, VDC Research
“Integration of AI Accelerators, such as BrainChip’s Akida technology, has application for high-performance RF, including spectrum monitoring, low-latency links, distributed networking, AESA radar, and 5G base stations,” said John Shanton, CEO of Ipsolon Research, a leader in small form factor, low power SDR technology.
John Shanton, CEO, Ipsolon Research
“Through our collaboration with BrainChip, we are enabling the combination of SiFive’s RISC-V processor IP portfolio and BrainChip’s 2nd generation Akida neuromorphic IP to provide a power-efficient, high-capability solution for AI processing on the Edge,” said Phil Dworsky, Global Head of Strategic Alliances at SiFive. “Deeply embedded applications can benefit from the combination of compact SiFive Essential™ processors with BrainChip’s Akida-E efficient processors; more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip’s Akida-S or Akida-P neural processors.”
Phil Dworsky, Global Head of Strategic Alliances, SiFive
“Ai Labs is excited about the introduction of BrainChip’s 2nd generation Akida neuromorphic IP, which will support vision transformers and TENNs. This will enable high-end vision and multi-sensory capability devices to scale rapidly. Together, Ai Labs and BrainChip will support our customers’ needs to address complex problems,” said Bhasker Rao, Founder of Ai Labs. “Improving development and deployment for industries such as manufacturing, oil and gas, power generation, and water treatment, preventing costly failures and reducing machine downtime.”
Bhasker Rao, Founder, Ai Labs
“We see an increasing demand for real-time, on-device intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”
Roger Wendelken, Senior Vice President IoT and Infrastructure Business Unit, Renesas
“We see a growing number of predictive industrial (including HVAC, motor control), automotive (including fleet maintenance), building automation, remote digital health equipment and other AIoT applications that use complex models with minimal impact to product BOM and need faster real-time performance at the Edge,” said Nalin Balan, Head of Business Development at Reality AI, a Renesas company. “BrainChip’s ability to efficiently handle streaming high-frequency signal data, vision, and other advanced models at the edge can radically improve scale and timely delivery of intelligent services.”
Nalin Balan, Head of Business Development, Reality.ai, a Renesas Company
“Advancements in AI require parallel advancements in on-device learning capabilities while simultaneously overcoming the challenges of efficiency, scalability, and latency,” said Richard Wawrzyniak, Principal Analyst at Semico Research. “BrainChip has demonstrated the ability to create a truly intelligent edge with Akida and moves the needle even more, in terms of how Edge AI solutions are developed and deployed. The benefits of on-chip AI from a performance and cost perspective are hard to deny.”
Richard Wawrzyniak, Principal Analyst, Semico Research
“BrainChip’s cutting-edge neuromorphic technology is paving the way for the future of artificial intelligence, and Drexel University recognizes its immense potential to revolutionize numerous industries. We have experienced that neuromorphic compute is easy to use and addresses real-world applications today. We are proud to partner with BrainChip in advancing their groundbreaking technology, including TENNs and how they handle time series data, which is the basis for addressing a lot of complex problems, and in unlocking its full potential for the betterment of society,” said Anup Das, Associate Professor, and Nagarajan Kandasamy, Interim Department Head of Electrical and Computer Engineering, Drexel University.
Anup Das, Associate Professor, Drexel University
“Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device,” said Sean Hehir, BrainChip CEO. “By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience.”
Sean Hehir, CEO, BrainChip

My opinion only DYOR
FF

AKIDA BALLISTA
Thanks FF, for putting some real-life perspective back into the conversation (SO MUCH HAPPENING).

AKIDA BALLISTA



hotty...
 
  • Like
  • Fire
Reactions: 9 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 36 users

TECH

Regular

BrainChip
@BrainChip_inc


In this Digital CxO Leadership Insights series video, Mike Vizard talks with Nandan Nayampally, CMO BrainChip, about how a new class of processors will advance artificial intelligence (AI) at the edge https://digitalcxo.com/video/leadership-insights-ai-at-the-edge/…
@DigCxO

@mvizard

Thanks Sirod69, that was an excellent interview and once again Nandan spoke very well. I personally think he could have mentioned the word
sparsity in regards to low power, running extremely cool, and learning on the fly – as in, continually learning to become even more efficient on its own... but very happy having Nandan on our team.

Have a nice evening....Chris (Tech) ;)
 
  • Like
  • Love
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Rise and shine



Thanks for the wake-up call @Rocket577 but unfortunately I slept like a log through the Cerence Conference and they haven't put a webcast or transcript of it up on their website yet. But don't worry, I'll be keeping my eyes peeled for it.

While I'm at it, I thought I might use this opportunity to remind everyone why I'm completely obsessed with Cerence and why I'm 99.999999999999999999999999999999999999999999999999999% convinced that we'll be incorporated in the "Cerence Immersive Companion" due in FY23/24. Aside from the other zillion odd posts I've managed to devote to Cerence, of which this one is a pretty good example #43,639, here is yet another post to add to the pile.

For some context, Nils Schanz is the Chief Product Officer at Cerence. But prior to joining Cerence he was at Mercedes. And it was Nils who was responsible for user interaction and voice control on the Vision EQXX voice control system (the one that incorporated BrainChip’s technology to make the wake word detection 5-10 times faster than conventional voice control systems).

Check out this LinkedIn post from Nils when he was at Mercedes. It says "this is a demo to show the performance of our voice assistant in the #EQS: no Wake-up word needed to start a conversation & plenty of use-cases in less than 45 seconds". You can click the link below to watch the demo. But you can also see that there is a comment from Holger Quast (Product Strategy and Innovation at Cerence).

The other is a screen-shot of a testimonial from Daimler on Cerence's website.

As I say, just add this post to the list until we get proof irrefutable, which won't be too far away IMO.

[Screenshots attached: the Mercedes EQS voice assistant post and the Daimler testimonial from Cerence's website]
 
  • Like
  • Fire
  • Love
Reactions: 45 users

Steve10

Regular
View attachment 31526

I assume the 22 is a typo and meant to be 2023

STMicroelectronics & Lacroix Group were on my radar the other day.

Akida could be in ovens monitoring the fan or in people flow detectors.

Case study 1: AI solution for people counting sensor​

Making buildings smarter is one of the big challenges for today's companies looking to improve their efficiency. The people flow counting sensor developed by Schneider Electric in partnership with STMicroelectronics counts the number of people and detects whether they are crossing a virtual line in either direction, using a large field of view and a low-resolution thermal sensor.

This prototype can count a restaurant's attendance in real time and with a high level of accuracy, while running on a standard STM32 microcontroller. This is achieved thanks to the artificial intelligence algorithm embedded on the STM32 microcontroller and the use of thermal infrared technology.
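
The virtual-line idea is simple enough to sketch. Below is a toy, host-side version of the counting logic: find the warm blob's centroid row in successive low-resolution thermal frames and count crossings of a virtual line. Illustrative only – the real product runs a trained neural network on the STM32 itself, and this is not ST's or Schneider's algorithm:

import numpy as np

LINE_ROW = 4        # virtual line across an 8x8 thermal frame
THRESHOLD = 30.0    # degrees C above which a pixel counts as "person"

def blob_row(frame):
    """Centroid row of warm pixels, or None if nobody is in view."""
    hot = np.argwhere(frame > THRESHOLD)
    return hot[:, 0].mean() if hot.size else None

def count_crossings(frames):
    entries = exits = 0
    prev = None
    for frame in frames:
        row = blob_row(frame)
        if prev is not None and row is not None:
            if prev < LINE_ROW <= row:      # moved down across the line
                entries += 1
            elif row <= LINE_ROW < prev:    # moved up across the line
                exits += 1
        prev = row
    return entries, exits

# Simulate one person walking "down" through the field of view.
frames = []
for r in range(2, 7):
    f = np.full((8, 8), 22.0)   # ambient temperature
    f[r, 3:5] = 34.0            # warm blob at row r
    frames.append(f)
print(count_crossings(frames))  # -> (1, 0)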


Case study 2: Low-power predictive maintenance + AI at the Edge​


Lacroix Group and its ecosystem are building the future of industrial electronics, in the design and production of industrial embedded systems and connected objects. At the heart of its smart industry strategy, Lacroix Electronics is now experimenting with predictive maintenance on its own assembly lines with the help of STMicroelectronics and its AI ecosystem.

The first trial of the condition monitoring technology is being done on the reflow oven of an automated line that solders components onto PCBs.




Artificial Intelligence @ ST​



STM32Cube function pack for high performance STM32 with artificial intelligence (AI) application for Computer Vision.




Artificial Intelligence (AI) condition monitoring function pack for STM32Cube.


STM32Cube function pack for ultra-low power IoT node with artificial intelligence (AI) application based on audio and motion sensing.





Give your product an Edge​

Simple, fast, optimized. Our extensive solutions for embedded AI.


A set of tools to enable Edge AI on STM32 MCUs, MPUs and smart sensors.

Embedded AI can improve many solutions in a simple, fast, and cost-effective way.
Predictive maintenance, IoT products, smart buildings, asset tracking, people counting and more.
Learn how these applications can become smarter by making data meaningful with machine learning!




Customers: [customer logos]

 

  • Like
  • Fire
  • Love
Reactions: 22 users

stuart888

Regular
The mighty chip is getting some much deserved media attention.




BrainChip Unveils Its Second-Generation Akida Platform, Now Boasting Vision Transformer Acceleration​

Brainchip's Akida 2.0 gains some impressive new features, along with a three-tier launch strategy scaling up to 128 nodes and 50 TOPS.​







BrainChip has announced the launch of its second-generation Akida processor family, designed for high-efficiency artificial intelligence at the edge, adding Temporal Event-Based Neural Net (TENN) support and optional vision transformer acceleration on top of the company's existing spiking neural network capabilities.
"Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device," claims BrainChip's chief executive officer Sean Hehir of the next-generation design. "By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience."
BrainChip has announced Akida 2.0, its second-generation edge-AI accelerator — now offering TENN and vision transformer support. (📷: BrainChip)

BrainChip began offering development kits for its first-generation Akida AKD1000 neural network processors in October 2021, building two kits around the user's choice of a Shuttle x86 PC or a Raspberry Pi. Ease of use took a leap earlier this year when the company announced the fruit of its partnership with Edge Impulse to bring Akida support to the latter's machine learning platform — offering what Edge Impulse co-founder and chief executive officer Zach Shelby described as a "powerful and easy-to-use solution for building and deploying machine learning models on the edge."
The promise of the Akida platform, which was developed based on the operation of the human brain, is high performance at a far greater efficiency than its rivals — when, at least, the problem to be solved can be defined as a spiking neural network. It's this efficiency which has seen BrainChip primarily position its Akida hardware for use at the edge, accelerating on-device machine learning in power-sensitive applications.
The company has confirmed plans to launch Akida 2.0 in three tiers, topping out at the Akida-P family with up to 50 TOPS of compute. (📷: BrainChip)

The second-generation Akida platform brings with it high-efficiency eight-bit processing and support for Temporal Event-Based Neural Nets (TENNs), giving it the ability to consume raw real-time streaming data from sensors, including video sensors. This, the company claims, provides "radically simpler implementations" for tasks including video analytics, target tracking, audio classification, and even vital sign prediction in medical imaging analysis.
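
The streaming property is the interesting part: instead of buffering a whole window and reprocessing it, a temporal convolution can be applied one sample at a time over a short ring buffer. The toy below shows that generic causal-convolution pattern for intuition; it is not BrainChip's TENN formulation:

from collections import deque

class StreamingTemporalConv:
    """One causal temporal filter applied sample-by-sample to a raw stream."""
    def __init__(self, kernel):
        self.kernel = kernel                                  # learned weights
        self.buffer = deque([0.0] * len(kernel), maxlen=len(kernel))

    def push(self, sample):
        """Consume one raw sensor sample and emit one output: O(len(kernel))
        work per time step, with no batch pre-processing of the stream."""
        self.buffer.append(sample)
        return sum(w * x for w, x in zip(self.kernel, self.buffer))

conv = StreamingTemporalConv(kernel=[0.5, 0.3, 0.2])   # toy smoothing filter
stream = [0.0, 1.0, 1.0, 0.0, 0.0]                     # e.g. raw audio or PPG samples
print([round(conv.push(s), 2) for s in stream])        # -> [0.0, 0.2, 0.5, 0.8, 0.5]
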
BrainChip's Akida refresh also brings with it support for accelerating vision transformers, an optional component that can be discarded if not required, primarily used for image classification, object detection, and semantic segmentation. Combined with Akida's ability to process multiple layers at once, the company claims the new parts will allow for complete self-management and execution of even relatively complex networks like ResNet-50 — without the host device's processor having to get involved at all.

The new features come alongside BrainChip's earlier promises of dramatic efficiency gains through the use of spiking neural networks. (📹: BrainChip)
The company has confirmed that it will be licensing the Akida IP in three product classes: Akida-E will focus on high energy efficiency with a view to being embedded alongside, or as close as possible, to sensors and offering up to 200 giga-operations per second (GOPS) across one to four nodes; Akida-S will be for integration into microcontroller units and systems-on-chip (SoCs), hitting up to 1 tera-operations per second (TOPS) across two to eight nodes; and Akida-P will target the mid- to high-end, and will be the only tier to offer the optional vision transformer acceleration, scaling between eight and 128 nodes with a total performance of up to 50 TOPS.
While the part launches to unnamed "early adopters" today, though, BrainChip isn't quite ready to start selling them to the public — promising instead that second-generation Akida processors will be available in the third quarter of 2023 with as-yet unannounced pricing. More information is available on the BrainChip website.
Yeah-Yeah to Brainchip employees making this happen! 🍹

Akida speaks AXI 4.0, interesting. There could be clues in the interface data transmission. Standardization is part of the complexity/slowness to implement, so this is good news.

The whole Edge IoT is going to explode. I assume AXI 4.0 is the key to the device connecting kingdom, easing adoption. Just trying to learn.

https://www.xilinx.com/products/intellectual-property/axi.html

 
  • Like
  • Fire
  • Love
Reactions: 23 users

stuart888

Regular
Wow.

It's a must watch Video, very informative. Fantastic to have Nandan as CMO.

It's great to be a shareholder 🏖
Only when needed is the key! Fantastic video, thanks a bunch Learning.

Energy efficient SNN spiking smarts!

 
  • Like
  • Fire
Reactions: 21 users

Diogenese

Top 20
Will Renesas do an Oliver Twist?

“We see an increasing demand for real-time, on-device intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”

... even better than DRP-AI.
 
  • Like
  • Love
  • Fire
Reactions: 66 users

Evermont

Stealth Mode
Will Renesas do an Oliver Twist?

“We see an increasing demand for real-time, on-device intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”

... even better than DRP-AI.

Wouldn't that be a nice message to the market.
 
  • Like
  • Fire
  • Love
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Interesting write up


I don't think I've ever seen this map before.

View attachment 31498

Yes @MadMayHam, and I thought it was very interesting that SiFive specifies that they want their X280 Intelligence Series to be tightly integrated with either Akida-S or Akida-P neural processors.




[Two screenshots attached]
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 33 users

Diogenese

Top 20
Thanks for the wake-up call @Rocket577 but unfortunately I slept like a log through the Cerence Conference and they haven't put a webcast or transcript of it up on their website yet. But don't worry, I'll be keeping my eyes peeled for it.

While I'm at it, I thought I might use this opportunity to remind everyone why I'm completely obsessed with Cerence and why I'm 99.999999999999999999999999999999999999999999999999999% convinced that we'll be incorporated in the "Cerence Immersive Companion" due in FY23/24. Aside from the other zillion odd posts I've managed to devote to Cerence, of which this one is a pretty good example #43,639, here is yet another post to add to the pile.

For some context, Nils Schanz is the Chief Product Officer at Cerence. But prior to joining Cerence he was at Mercedes. And it was Nils who was responsible for user interaction and voice control on the Vision EQXX voice control system (the one that incorporated BrainChip’s technology to make the wake word detection 5-10 times faster than conventional voice control systems).

Check out this LinkedIn post from Nils when he was at Mercedes. It says "this is a demo to show the performance of our voice assistant in the #EQS: no Wake-up word needed to start a conversation & plenty of use-cases in less than 45 seconds". You can click the link below to watch the demo. But you can also see that there is a comment from Holger Quast (Product Strategy and Innovation at Cerence).

The other is a screen-shot of a testimonial from Daimler on Cerence's website.

As I say, just add this post to the list until we get proof irrefutable, which won't be too far away IMO.

View attachment 31550

View attachment 31551





Hi @Bravo ,

Here are a couple of Cerence patent applications:

US2022415318A1 VOICE ASSISTANT ACTIVATION SYSTEM WITH CONTEXT DETERMINATION BASED ON MULTIMODAL DATA


A vehicle system for classifying spoken utterance within a vehicle cabin as one of system-directed and non-system directed may include at least one microphone to detect at least one acoustic utterance from at least one occupant of the vehicle, at least one camera to detect occupant data indicative of occupant behavior within the vehicle corresponding to the acoustic utterance, and a processor programmed to receive the acoustic utterance, receive the occupant data, determine whether the occupant data is indicative of a vehicle feature, classify the acoustic utterance as a system-directed utterance in response to the occupant data being indicative of a vehicle feature, and process the acoustic utterance.



WO2020142717A1 METHODS AND SYSTEMS FOR INCREASING AUTONOMOUS VEHICLE SAFETY AND FLEXIBILITY USING VOICE INTERACTION




The specifications seem oblivious of SNNs.
 
  • Like
  • Sad
  • Fire
Reactions: 10 users
Has somebody already posted the NVISO newsletter? Just noticed an email that was received about 10 hrs ago.
Edit... Just seen @Tothemoon24's post
 
Last edited:
  • Like
Reactions: 3 users
Some bigger buying in the market has just started again. Buyers are now double the sellers! Some line wiping just occurred at $0.55.

The sneaky mass accumulation is continuing as they know BRN is going to fly and these are giveaway share prices.
 
  • Like
  • Fire
  • Love
Reactions: 33 users