BRN Discussion Ongoing

Diogenese

Top 20
I already addressed the fact that TrueNorth is digital and not analog in an earlier post today, and there is something else I would disagree with you on:

While NorthPole may not pose an imminent threat to Akida (IBM’s chief scientist for brain-inspired computing and the project’s technical lead, Dr Dharmendra Modha, states in yesterday’s blog post on his personal website: “Please note that NorthPole chip, software, systems are research prototypes, in IBM Research, and that NorthPole is designed for inference (not training).”), and our resident hardware expert @Diogenese even reckons that NorthPole is a toothless tiger anyway and in fact sees Brainchip as having extended its technology lead, I think you are vastly overestimating the work and time IBM actually “wasted” over the past couple of years, since after all they didn’t have to start from scratch. Have a look at NorthPole’s development timeline. All in stealth mode. The veil of secrecy was lifted only very recently, at the Hot Chips 2023 conference in late August. Do you really think that DARPA, the Office of the Under Secretary of Defense for Research and Engineering and AFRL would have continued with this partnership for fourteen (!) long years if they had felt at some point over the past couple of years that it had been pretty much a waste of time and money?

The fact that their collaboration with IBM has been ongoing for almost one and a half decades, however, also signifies that said US government agencies must have seen added benefit in experimenting with Akida as well, which of course is superb validation for both former Chief Scientist of IBM Internet Security Systems Peter van der Made and IIT Bombay alumnus Anil Mankar (the same alma mater as IBM’s Dharmendra Modha)!

And yet - despite Akida clearly being the superior tech according to our resident forum experts - the DoD didn’t cut the contract and stop the collaboration with IBM after getting their hands on Akida 1000, so they have apparently ascertained distinct use cases for both NorthPole and Akida (on-chip learning!).

Unless of course this long-term partnership were to serve as a prime example of the sunk cost fallacy…



NorthPole: Neural Inference at the Frontier of Energy, Space, and Time
October 19, 2023 By dmodha


§A. Breaking News:
Today in Science Magazine, a major new article from IBM Research introducing a new brain-inspired, silicon-optimized chip architecture suitable for neural inference. The chip, NorthPole, is the result of nearly two decades of work by scientists (see illustration in §O below) at IBM Research and has been an outgrowth of a 14 year partnership with United States Department of Defense (Defense Advanced Research Projects Agency (DARPA), Office of the Under Secretary of Defense for Research and Engineering, and Air Force Research Laboratory).
NorthPole, implemented using a 12-nm process on-shore in US, outperforms all of the existing specialized chips for running neural networks on ResNet50 and Yolov4, even those using more advanced technology processes. Additional results on BERT-base were presented at Hot Chips Symposium.




(…)


§F. History & Context:

In 2004, nineteen years ago, I had a stark realization that I was going to die — not imminently, but eventually. Therefore, 7,034 days ago, on July 16, 2004, I decided to focus my life’s limited energy on brain-inspired computing — a career wager against improbable odds. Along the way, we carried out simulations at the scale of mouse, rat, cat, monkey, and, eventually, human brains — winning ACM’s Gordon Bell Prize along the way. We mapped (Link: https://www.pnas.org/doi/10.1073/pnas.1008054107) the long-distance wiring diagram of the primate brain. TrueNorth won the inaugural Misha Mahowald Prize and is in the Computer History Museum. I was named R&D Magazine’s Scientist of the Year, became an IBM Fellow, was named Distinguished Alumni of IIT Bombay, and was named Distinguished Alumni of UCSD ECE Department. The project has been featured on the covers of Scientific American, Science (twice), and Communications of the ACM.

The first idea for NorthPole was conceived on May 30, 2015, in a flash of meditative insight, at Crissy Field in San Francisco. The main motivation was to dramatically reduce capital cost of TrueNorth. Over the next few years, collaboratively and creatively, we pushed boundaries of innovation along all aspects of computation, memory, communication, control, and IO. Starting in 2018, we went under stealth mode. To focus fully on NorthPole, we turned down all talk invitations and we stopped the flow of publications, taking a huge risk. Along the way, the project encountered many technical, economic, and political obstacles and nearly died many times — not even counting the pandemic. The unique combination of the environment of IBM Research and the long-term support of DoD was the key to forward progress. So, TrueNorth and NorthPole are a story of seeking long-term rewards, a tale of epic collaborations, an example of team creativity (Link: https://www.nytimes.com/2013/10/13/jobs/when-debate-stalls-try-your-paintbrush.html), an account of perseverance and steadfastness of purpose, and a chronicle of a vision realized. To quote General Leslie Groves, who directed the Manhattan project, “now it can be told.”

TrueNorth was a direction, NorthPole is a destination.


Although we have been working on it for 19 years, this moment can be best described by quoting Winston Churchill: “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.”


(…)

§O. A continuing 19-year journey

Fig. 19. An info-graphic illustrating major milestones of a 19-year journey.
Note that since 2015, NorthPole has been in the stealth mode.



Although NorthPole is a destination, it does not do learning. I wonder if going into stealth mode, huddled down with the spooks from DARPA, inhibited the development team's openness to external ideas. Compare the major changes that Akida 1 underwent based on EAP feedback, and more so Akida 2.

IBM continues to research analog NNs. This may have to do with DARPA, as analog is said to be inherently rad-hard - NASA still holds a candle for analog.

US2023206964A1 DIGITAL PHASE CHANGE MEMORY (PCM) ARRAY FOR ANALOG COMPUTING



A plurality of bit lines corresponding to elements of an input vector intersect a plurality of word lines and a plurality of memristive cells are located at the intersections. At least three cells are grouped together to represent a single matrix element. At least three word lines correspond to each element of an output vector. An A/D converter is coupled to each of the word lines, and for each line, except a first, in each group, a shifter has an input coupled to one of the A/D converters. For each group, an addition-subtraction block adds the output of the A/D converter coupled to the first one of the word lines to outputs of each of the shifters except that for a last one of the word lines, subtracts the output of the last shifter, and outputs a corresponding element of an output vector.
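For readers who want to see the arithmetic the abstract is describing, here is a minimal C sketch of that grouped-cell, shift-and-add readout. The array sizes, the base-2 digit weighting between word lines, and the treatment of the last word line in each group as a subtractive term are assumptions made for illustration; they are not taken from the patent claims.

```c
/* Minimal sketch of the shift-and-add readout described in the patent
 * abstract (US2023206964A1). Sizes, the base-2 digit weighting and the
 * "last digit is subtractive" sign convention are illustrative
 * assumptions, not details taken from the filing. */
#include <stdio.h>

#define IN_DIM   4          /* bit lines  = input vector length        */
#define OUT_DIM  2          /* output vector length                    */
#define DIGITS   3          /* >= 3 cells (word lines) per element     */

/* Per-element digit cells: cells[out][digit][in] holds the small
 * conductance level (0..3 here) programmed into each PCM cell.        */
static const int cells[OUT_DIM][DIGITS][IN_DIM] = {
    { {1, 0, 2, 1}, {0, 3, 1, 0}, {0, 0, 1, 0} },
    { {2, 1, 0, 0}, {1, 0, 0, 2}, {0, 1, 0, 0} },
};

/* One word line: the analog array sums cell current * input along the
 * line; the per-line A/D converter then digitizes that partial sum.   */
static int word_line_adc(const int *line_cells, const int *x)
{
    int acc = 0;
    for (int i = 0; i < IN_DIM; i++)
        acc += line_cells[i] * x[i];
    return acc;                      /* ideal ADC: no quantization     */
}

int main(void)
{
    const int x[IN_DIM] = {1, 2, 0, 3};   /* input vector on bit lines */
    int y[OUT_DIM];

    for (int o = 0; o < OUT_DIM; o++) {
        /* First word line of the group: added without shifting.       */
        int sum = word_line_adc(cells[o][0], x);

        /* Middle word lines: shifted (weighted by digit position) and
         * added by the addition-subtraction block.                     */
        for (int d = 1; d < DIGITS - 1; d++)
            sum += word_line_adc(cells[o][d], x) << d;

        /* Last word line: shifted and subtracted, giving the group a
         * signed / offset contribution.                                */
        sum -= word_line_adc(cells[o][DIGITS - 1], x) << (DIGITS - 1);

        y[o] = sum;
    }

    for (int o = 0; o < OUT_DIM; o++)
        printf("y[%d] = %d\n", o, y[o]);
    return 0;
}
```

The point of the grouping is that each low-precision PCM cell only has to hold one "digit" of a weight; the digital shifters and the add/subtract block reassemble the full-precision, signed matrix element after the per-line ADCs.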
 
  • Like
  • Love
  • Fire
Reactions: 31 users

Diogenese

Top 20
  • Like
Reactions: 6 users

Slade

Top 20
Intel announced their Loihi chip in 2017 and yet it is still only a research chip. The years and money poured into it have not guaranteed commercial success. I think we need to look at technical comparisons when assessing potential competition. Akida is commercially available and in the hands of some very big companies. That hasn’t happened overnight.
 
  • Like
  • Fire
  • Love
Reactions: 32 users

Labsy

Regular
Disney getting into robotics and reinforcement learning, hardware agnostic.
Just a matter of time. Right place right time.
Edge AI is going to be 🔥 and we are in the box seat big time!

 
  • Like
  • Love
  • Fire
Reactions: 19 users

Slade

Top 20
Although NorthPole is a destination, it does not do learning. I wonder if going into stealth mode, huddled down with the spooks from DARPA, inhibited the development team's openness to external ideas. Compare the major changes that Akida 1 underwent based on EAP feedback, and more so Akida 2.

(…)
“Compare the major changes that Akida 1 underwent based on EAP feedback, and more so Akida 2.”

And there is the strength in being a small company that did not sell out to the Tech giants. PVDM was burnt once before and wasn’t going to let that happen again. Now he has the neuromorphic solution that edge AI needs. The guy is a genius and visionary. Brainchip has worked in stealth several times, lodging patents before announcing different technological game changers. BrainChip is working with dozens of companies under NDAs. Renesas announced that they taped out an Akida-infused chip 9 months ago but have gone into stealth mode while it gets fabricated. This is the industry that Brainchip operates within.
 
  • Like
  • Love
  • Fire
Reactions: 44 users

Damo4

Regular
Intel announced their Loihi chip in 2017 and yet it is still only a research chip. The years and money poured into it have not guaranteed commercial success. I think we need to look at technical comparisons when assessing potential competition. Akida is commercially available and in the hands of some very big companies. That hasn’t happened overnight.

Patents.

No explanation necessary.
 
  • Like
  • Love
Reactions: 14 users

Vladsblood

Regular
At a US$40 billion market cap we would still be at only four percent of Nvidia’s current MC… $$$$ Lotta room to move upwards and beyond.
Vlad.
 
  • Like
  • Fire
Reactions: 16 users

MrRomper

Regular
  • Haha
  • Like
Reactions: 13 users

TechGirl

Founding Member
New article; it doesn’t have a date, but my Google search says it’s from 17 minutes ago.

Only IBM, Intel and Brainchip mentioned


WHO ARE THE PRODUCERS OF NEUROMORPHIC CHIPS?

Neuromorphic Chips: Unveiling the Masterminds Behind the Technology

In the ever-evolving landscape of technology, one concept that has gained significant attention is neuromorphic chips. These advanced microchips, inspired by the human brain’s neural architecture, have the potential to revolutionize computing by enabling machines to process information in a manner similar to our own cognitive abilities. But who are the masterminds behind these groundbreaking innovations?
Neuromorphic chips are the brainchild of several prominent companies and research institutions that have dedicated their efforts to push the boundaries of artificial intelligence (AI) and machine learning. One of the pioneers in this field is IBM, a global technology giant renowned for its cutting-edge research and development. IBM’s TrueNorth chip, introduced in 2014, was a significant breakthrough in neuromorphic computing, featuring a network of one million programmable neurons.

Another key player in the neuromorphic chip arena is Intel. The tech behemoth has been actively involved in developing neuromorphic architectures, aiming to create processors that can mimic the brain’s parallel processing capabilities. Intel’s Loihi chip, unveiled in 2017, is designed to accelerate AI workloads and has gained recognition for its ability to learn and adapt in real-time.
Apart from industry giants, academic institutions have also made substantial contributions to the development of neuromorphic chips. The University of Manchester’s SpiNNaker project, for instance, has been at the forefront of research in this domain. SpiNNaker, short for Spiking Neural Network Architecture, is a massively parallel computing system that emulates the brain’s neural networks. It has been instrumental in advancing our understanding of neuromorphic computing and has paved the way for further innovations.
Additionally, BrainChip Holdings, a company specializing in neuromorphic computing, has emerged as a significant player in the market. BrainChip’s Akida chip, introduced in 2019, is designed to provide ultra-low power AI processing for edge devices. With its unique architecture, the Akida chip enables real-time learning and inference capabilities, making it a promising solution for applications such as autonomous vehicles and surveillance systems.

To shed light on the advancements in neuromorphic chip technology, experts in the field have contributed valuable insights. Dr. Dharmendra S. Modha, the Chief Scientist of IBM Research, has been a driving force behind IBM’s TrueNorth chip and has extensively researched neuromorphic computing. His expertise has been pivotal in unraveling the potential of these chips and their applications in various domains.
Dr. Chris Eliasmith, a renowned neuroscientist and computer engineer, has also made significant contributions to the field. His work on the Neural Engineering Framework has laid the foundation for building large-scale brain models and has influenced the development of neuromorphic chips.

As the demand for AI and machine learning continues to surge, the development of neuromorphic chips is expected to gain further momentum. With companies like IBM, Intel, BrainChip Holdings, and academic institutions like the University of Manchester leading the charge, the future of computing appears to be heading towards a more brain-inspired approach.
Sources:
– IBM Research
– Intel
– University of Manchester
– BrainChip Holdings
– Dr. Dharmendra S. Modha
– Dr. Chris Eliasmith
 
  • Like
  • Fire
  • Love
Reactions: 73 users

Getupthere

Regular
  • Like
  • Love
Reactions: 7 users

Xray1

Regular
I already addressed the fact that TrueNorth is digital and not analog in an earlier post today, and there is something else I would disagree with you on:

(…)
 
  • Like
Reactions: 1 users

Xray1

Regular
New article; it doesn’t have a date, but my Google search says it’s from 17 minutes ago.

Only IBM, Intel and Brainchip mentioned


WHO ARE THE PRODUCERS OF NEUROMORPHIC CHIPS?

(…)
I wonder what IBM’s NorthPole chip will cost once they end their research / commercialisation phases in due course, especially given that, from my own memory of some time ago, Intel’s Loihi chip was costing around the ~US$30K mark per chip.

IMO..... both Intel and IBM should seriously re-evaluate their current funding programs / research / ongoing financial expenditure positions and take up what BRN has to offer at this present time: Akida 1500 & 2.0 (E, S & P), its future generations of upcoming Akida Gen 3.0 chips, and most likely our ongoing development / bench testing of cortical columns.
 
  • Like
  • Love
  • Fire
Reactions: 29 users
We know what Renesas and their Reality AI think of Akida.

Now we are seeing them push out some PR and discussion on endpoint, edge and real-time AI, plus use cases that we would fit perfectly, imo.

They don’t mention us, unfortunately, but I like to suspect we are in the background of what they are discussing.

Podcast last 24 hrs.




AI at the edge is no longer something down the road, or even leading edge technology. Some consider it to be mainstream. But that doesn’t make it any less complex. We’re joined today by Kaushal Vora and Mo Dogar to discuss hardware and software components that are required to implement AI at the edge and how those various components get pieced together. We’ll also discuss some very real use cases spanning computer vision, voice, and real time analytics, or non-visual sensing. Kaushal is senior director for business acceleration and global ecosystem at Renesas Electronics. With over 15 years of experience in the semiconductor industry, Kaushal’s worked in several technology areas, including healthcare, telecom infrastructure, and solid state lighting. At Renesas, he leads a global team responsible for defining and developing IOT solutions for the company’s microcontroller and microprocessor product lines, with a focus on AI and ML, cybersecurity, functional safety, and connectivity, among other areas. Kaushal has an MSEE from the University of Southern California. Kaushal, thanks for joining us.


KV: Pleasure is mine. Always happy to be in such good company.

ES: And welcome to Mo Dogar, who is head of global business development and technology ecosystem for Renesas, responsible for promotion and business expansion of the complete microcontroller portfolio and other key products. He’s instrumental in driving global business growth and alignment of marketing, sales, and development strategies, particularly in the field of IOT, EAI, security, smart embedded electronics, and communications. In addition, Mr. Dogar helps provide the vision and thought leadership behind product and solution development, smart society, and the evolving IOT economy. Mo, thanks so much for joining us.

MD: It’s great to be here. Thank you for having us.

ES: So we are super excited to have you both on today because of obviously the tremendous hype around AI right now. It’s everywhere. Can you demystify some of that? And if you would, I’d love to start by talking about the difference between generative AI, things like ChatGPT that almost everyone is familiar with these days, and predictive AI.

MD: Yeah. I’ll kick it off. What a great time to be in technology right now, or actually in the world we live in, right? So AI is certainly everywhere, and I would say, actually it’s a bit more than a hype in some cases. So your question’s great. How do we differentiate? What is the real distinguish between generative AI and predictive AI? So the generative AI is all about creating new content, right? It’s about adding value for people in a way to save time. And on the other hand, if you look at predictive AI, it’s about analyzing and making predictions. And most of the case time you’re talking about those intelligent end point or edge devices, whether they’re in our homes or factories, or wherever they happen to be. We’re collecting data all the time. You know, we live in this world that’s full of sensors. So really, there are two different types that really face, then creating huge opportunity for us. When you talk about generative AI, you’re talking about text or audio or video, and it’s really helping to accelerate some of these content that is needed. And it kind of leverages on foundation models and transformers. But on the other hand, if you look at the edge AIs, typically running on these resource constrained devices that’s collecting data, and they have to make decisions on real time in some cases, and give feedback to the system or the network. And literally, you can imagine billions of these endpoint devices out there are actually collecting data and making decisions as well. What we are also seeing, if you look at it from a market perspective, I mean, huge opportunity out there. If you look at generative AI, some of the market researchers predict anywhere, if you look at around 2030, 190 billion dollar worth of market being predicted. On the edge AI, it’s closer to maybe 6 to 100 billion. So it’s really significant. What I would also add is that probably the edge AI is a bit more mature compared to generative AI, but really, the scale of acceleration and adoption is phenomenal. So I think exciting times to be in the world of technology, and seeing AI to really add value to our lives and the technology at large.

KV: Yeah, excellent points. Just to add on what Mo said, right, the two types of AI are very different technically. Generative AI tends to be running in data centers and in the cloud. It uses tera-ops of performance, and it uses gigabytes of memory storage. These are extremely large models that have been generalized to solve more general purpose problems, like understanding language, understanding video, understanding images, right? One of the challenges we see with generative AI, although there’s a lot of hype around it because of how consumerized it has become, is, you know, can we scale generative AI sustainably? Because it’s extremely power hungry, and it’s extremely resource hungry. As Mo said, edge AI is definitely more mature. There is a lot of use cases on the edge that tend to be a lot more real time, that tend to be a lot more constrained, that can leverage predictive AI. And I think a balance between both the generative AI and predictive AI would eventually be where people settle a few years from now that’s just a prediction based on some of the things that we’re seeing. But absolutely, as Mo mentioned, from an edge and end point perspective, we’re seeing tremendous traction. We’re seeing traction across, whether it’s [?7:28] interfacing with voice, whether it’s environmental sensing, you know, whether it’s predictive analytics and maintenance of machines. Anywhere you have sensors and anywhere you have engineering problems and wave forms and things like that, there’s applications for predictive analytics and predictive AI.

ES: One of the things that you mentioned there at the end was voice interfaces. And that gets me thinking about all those devices at the edge, most of which have no traditional text based data entry capabilities. We don’t interact with these devices in that way typically. So let’s talk about the convergence of IOT and AI. What is artificial intelligence of things, AIOT, and what factors have made it possible today?

MD: Okay, great question, actually. So exciting times. You know, we have IOT, which has been around for a while and somehow matured, and we’ve got EAI coming in and adding value. And the two powerful technologies are coming together and giving us a very high value when it comes to connected systems. And I think the other thing that’s happening also is that in this exciting time of technology, we have, you know, the IOT and 5G and AI kind of coming together and accelerating in terms of the maturity at a similar time, and really providing us a big value. and to me, I always say this. These sensors that are out there are generating lots and lots of data. And we need to turn that data into revenue, Right? This is where the AI coming together with IOT is able to solve real world problems. You know, whether you’re thinking about a sensor on the factory floor which is looking at some vibrations with your machine maybe running a motor control, motor drive system, perhaps indicating there is some sort of a failure, it can help us on that prediction to maybe save a big downtime down in the factory. Real problem solving, where you’re able to generate new revenue stream, but at the same time make a big saving as well. And the other thing I would say also is, when it comes to IOT, you talk about the products and sensors and devices are connected. But when we talk about the combination of AI, a lot of the cases, these sensors or the end point devices may not be always on, may not be connected to the cloud. So there’s a significant need to be able to develop very optimized models and an algorithm that’s going to be able to run all those constrained devices and make those real time decision making. So together, AI and IOT is adding a significant value and bringing a big opportunity for everybody involved.

KV: Yeah. And I think what’s going to really happen is a drastic shift in the way we’ve thought about architecting intelligence in the network. If you think about the IOT, traditionally, all of the intelligence was concentrated in the cloud or in the data center or in the core of the network. And any machine learning or AI that had to be run at the other layers of the network, like the edge or even in the endpoints, a query had to be sent up to the cloud and then the round trip time latency was something that the application would have to tolerate. If you think about scaling AI at the more resource constrained layers of the network, which is the edge, the endpoint, and the continuum of the edge, this is where for AIOT to be successfully adopted, the intelligence model needs to be shifting to a more decent [?11:02] intelligence model. This is where you’re going to see a lot more capabilities or intelligence being embedded right at the edge and within the endpoints to do local inferencing, local classification, local regression models, and things like that, for a broad range of application. And that is where a drastic shift in terms of decentralizing the intelligence is kind of a need for the day, and already something that the ecosystem overall is working on.

ES: You mentioned that travel time, sending data from these devices into the cloud or wherever else the processing is happening. Can you talk a little more about that and some of the other advantages that you see of that decentralized intelligence model?

KV: Absolutely. Traditionally, when we’ve thought about AI, we’ve thought about things like computer learning or natural language understanding. When we talk to our Alexas and Siris, these are all backed by powerful cloud based intelligence. For a human based query, waiting a few seconds is okay. But say you have an application that’s time critical or even mission critical, something that’s running a motor in, say, a multi-million dollar industrial equipment. And the failure of that model can be catastrophic. In order for machine learning to classify that particular type of anomaly, you just cannot expect the inferencing to go back to the cloud and come back, and there just is not room for tolerating that kind of round trip latency to the cloud. And that’s where we’re seeing a lot of interest in backing these applications into these devices. Now what kind of devices are we talking about? We’re talking about devices that have significantly reduced compute capacity, right? We’re talking about, in most cases, hundreds of megahertz, in some cases low gigahertz type of compute range. We’re talking about significantly low memory capacity. We’re talking about megabytes of memory in some cases, maybe a little bit more than that, and then significantly constrained RAM capacity as well, which is the real time memory that’s required to actually run the model now and run the inferencing. So we’re talking very different constraints here from a system level, and therefore, the AI and machine learning models that have to be deployed and trained for these kind of applications have to be working within these constraints and doing all of the inferencing locally. A lot of these applications may never even connect to the cloud. I mean, a classic example I’ll give you, right? We were working with a customer that was trying to deploy machine learning into a metallurgy and mineral processing application. Now these are multi-million dollar metallurgy equipment that is sitting in a very remote location often even not accessed by humans, and in some cases not even connected. There’s not even an infrastructure to connect that piece of equipment to the cloud. So we’ve been able to deploy light weight machine learning in the form of, say, a couple examples I can think of is regression models, to detect the thickness of a shield that is used to filter the ore, and that vibrates when the ore is basically shaking, right? And with certain types of vibrations, that shield can be compromised. So we’ve implemented machine learning in the form of a regression model to basically detect that anomaly. Another example is where we’ve implemented a classification model to detect a harmful [tramp] in the overall mix, and [tramps] are very important to detect, because harmful [tramps] can cause a lot of nightmare and disruption in mining and metallurgy overall. So being able to detect those kind of foreign components through a classification engine, again, is all done remotely, and it’s all done at that endpoint running either on a microcontroller or a light weight microprocessor. So these are classic examples, and there’s hundreds of other examples in the industrial space, where local inferencing is the only way to practically implement machine learning, because the applications are just so time sensitive.
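To make the local-inferencing point above concrete, here is a rough C sketch of the kind of classifier being described: a handful of features computed from one window of vibration samples, scored by a tiny logistic-regression model whose weights were fitted offline. The window size, features, weights and threshold are all invented for illustration; this is not the actual metallurgy application.

```c
/* Rough sketch of endpoint inference on a resource-constrained device:
 * a small feature vector computed from one window of vibration samples,
 * classified by a logistic-regression model trained offline. All
 * numbers here are made up for illustration. */
#include <math.h>
#include <stdio.h>

#define WIN 128                          /* samples per analysis window */

/* Offline-trained model: weights for [mean_abs, rms, peak] + bias.     */
static const float w[3] = { 0.8f, 1.9f, 0.6f };
static const float bias = -2.5f;

static void extract_features(const float *x, float *f)
{
    float sum_abs = 0.0f, sum_sq = 0.0f, peak = 0.0f;
    for (int i = 0; i < WIN; i++) {
        float a = fabsf(x[i]);
        sum_abs += a;
        sum_sq  += x[i] * x[i];
        if (a > peak) peak = a;
    }
    f[0] = sum_abs / WIN;                /* mean absolute amplitude      */
    f[1] = sqrtf(sum_sq / WIN);          /* RMS                          */
    f[2] = peak;                         /* peak amplitude               */
}

/* Returns probability of "anomaly" for one window; runs entirely on
 * the device, no cloud round trip.                                     */
static float classify(const float *x)
{
    float f[3], z = bias;
    extract_features(x, f);
    for (int i = 0; i < 3; i++)
        z += w[i] * f[i];
    return 1.0f / (1.0f + expf(-z));     /* sigmoid                      */
}

int main(void)
{
    float window[WIN];
    for (int i = 0; i < WIN; i++)        /* stand-in for sensor samples  */
        window[i] = 0.4f * sinf(0.3f * i);

    float p = classify(window);
    printf("anomaly probability: %.3f -> %s\n",
           p, p > 0.5f ? "flag for maintenance" : "normal");
    return 0;
}
```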

ES: And not to mention the security concerns with transmitting information.

KV: Absolutely. The other inherent advantage of running inference locally is, it takes away the need for transporting data back and forth in the network. Because you have such a controlled transport and flow of data through the network, your security posture is significantly simplified. And a lot of these endpoint devices today, if you look at microcontrollers, microprocessors from Renesas, we have built-in root of trust capabilities in hardware. So your machine learning algorithm could be tightly coupled to the root of trust in hardware, and therefore significantly reduce any threats from a malicious attacker or any sort of hacker or whatever. So not only security, but even data privacy concerns are significantly alleviated when we look at the local inferencing and running things at the edge locally.

ES: Looking at this broadly, how do you think about AI from a systems perspective?

MD: So I think if you think about from a system, think about typical IOT system, which is doing more than one thing. So you think about, for example, a connected system on a factory floor, thus collecting some data from, let’s say, a manufacturing line that’s sending it back to the central control panel. You have interaction of human machine interface technologies. You need to have connectivity, whether it’s wired or wireless. And security, as Kaushal also mentioned earlier. So really, when we look at a system, AI has to now take into account all of this different diverse set of technology to be able to then add value and do predictions. One of the areas which is really important is power consumption. Now think about it. One of the reasons why the endpoint or edge AI is really accelerating at a phenomenal growth is that a lot of these devices are able to make decisions on the device itself. What this means is that they are not turning on the radios in terms of transmitting data or sending data through real time ethernet around the factory floor. This means that the device on time is much lower. So the overall power consumption is very low, actually creating a very sustainable AI solution. From a system perspective, if you then go deeper into the system of AI, then you’re looking at technologies in the vision space, for example, you’re able to, whether it’s a security system, to be able to detect a person, validate, make sure that’s the right person when you’re giving access to it, whether it’s through the image itself, the facial recognition, voice signatures, and then as well as sort of real time analytics. So really, it’s a very diverse and wide range of technologies that needs to work seamlessly together in a system where AI can be applied to do prediction and improve the overall system, and give back, basically, system level efficiencies to the segment where it’s operating in.

ES: Yeah. Efficiency really feels like the key word there, cutting down on network traffic and power consumption.

MD: Absolutely.

ES: So let’s talk a little more about what each of you are doing in your roles at Renesas, and how the organization as a whole is addressing AI as we move into the AIOT.

MD: We are really at the forefront, trying to enable our customers across industrial consumer infrastructure, automotive, really a very diverse set of industry that we’re operating, is providing them a solution all the way from data movement, connectivity, sensing, analog and power capability, really providing a complete chain of solutions that ultimately power the edge as well. And on top of that, our real vision that Renesas has is, how do we give back time to the developers in order for them to spend more time on their systems? One thing we need to also consider is that the embedded engineers, and that we are empowering [?19:21] AI, may not be data scientists, or may not even be a connectivity expert. So how can we enable them through the right set of tools, right set of consultancy and training, to be able to add AI to their applications? So with that in mind, we have a significant investment in terms of software tools solutions in [?] ecosystem really accelerate AI which [he] designs. We’re leading the world, especially when it comes to time series or real time analytics, where we’re basically taking high frequency data from different sensors and able to do prediction at the end point, and with that, we made an acquisition of a company called Reality AI in 2022, who are a pioneer when it comes to solve real optimized AI on the endpoint. And through this, we’re providing automated tool chain, which is a [?20:14] ML capability, to be able to collect the data, really optimize and classify the data, in a way that is able to fit within those resource constrained devices that really helps. For example, one of the appliance customers who had a major issue where they wanted to detect an [out of] balance situation or status within a washer or a dishwasher application. Now, they were using an accelerometer in that design, a real hardware sensor, to be able to detect the out of balance condition. That was adding around three dollars to their [?20:54] material. And what we did was, we basically took that data. We looked at the current and voltage fluctuation from the motor itself using our reality AI tool chain, and developed a model where we’re running more than 95 percent of accuracy doing that prediction.
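As a toy illustration of the out-of-balance example Mo mentions (the sensor-less approach that replaced a roughly three-dollar accelerometer), the sketch below estimates imbalance from the modulation of instantaneous motor power computed from current and voltage samples the drive already measures. The waveforms, feature and threshold are made up for this sketch; the real Reality AI model is certainly more sophisticated.

```c
/* Toy sketch of "sensorless" out-of-balance detection: instead of an
 * accelerometer, look at how strongly the instantaneous motor power
 * (v * i) is modulated over one drum revolution. Window size, feature
 * and threshold are invented for illustration only. */
#include <math.h>
#include <stdio.h>

#define N 256                    /* samples per drum revolution (assumed) */

/* Modulation depth of instantaneous power over the window: an
 * unbalanced load loads the motor unevenly once per revolution.        */
static float power_modulation(const float *v, const float *i)
{
    float p[N], mean = 0.0f, var = 0.0f;
    for (int k = 0; k < N; k++) {
        p[k] = v[k] * i[k];
        mean += p[k];
    }
    mean /= N;
    for (int k = 0; k < N; k++)
        var += (p[k] - mean) * (p[k] - mean);
    var /= N;
    return sqrtf(var) / (mean + 1e-6f);   /* relative ripple             */
}

int main(void)
{
    float v[N], i[N];
    for (int k = 0; k < N; k++) {         /* synthetic drive waveforms   */
        v[k] = 320.0f;                                  /* DC-link volts */
        i[k] = 2.0f + 0.6f * sinf(2.0f * 3.14159f * k / N); /* uneven load */
    }

    float m = power_modulation(v, i);
    /* Threshold learned offline from labelled balanced/unbalanced runs. */
    printf("power modulation %.3f -> %s\n",
           m, m > 0.15f ? "OUT OF BALANCE" : "balanced");
    return 0;
}
```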

ES: Wow. So without the hardware sensor at all in the picture anymore, yeah?

MD: Absolutely. You know, what we talk about is, from Renesas, how we can make AI a reality for customers. See, the people who are adding AI or ML capability are not going to add to their hardware. What we’re saying is, you cannot just add to the hardware you have, but remove. So ultimately, these customers are happy. They have a significant cost reduction for their material. They have a real intelligent application doing that prediction. And there are many other applications where this is really adding value. [?21:48] there was a customer recently where we basically, they wanted to predict the temperature difference in terms of the components that’s used in a battery management and a power tool. So the issue is that stopping the battery from discharging when it overheats during this charge time, that means the battery has a much longer life as well. And in this case, the thermal model is critical, as it helps to protect at the same time it gets an accurate current measurement as well. So traditionally, a customer was using in this case, they was using a simple [MATLAB] approach, not providing very sufficient accuracy. Again, we brought in our reality AI solution running on our 16 bit, actually, a 16 bit RS78 microcontrollers, and then we were able to really provide very low power solutions where they can actually predict these temperature changes and really keeping the health of that battery for that power tool. So yeah, it’s really amazing what’s happening in that world. So that’s just one area. And then of course we have the other area of voice and vision. Perhaps my colleague can add more to that.
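The battery thermal example boils down to a small regression evaluated on a 16-bit microcontroller. A minimal sketch of that shape, with invented coefficients and limits (the actual model fitted with the Reality AI tools is not public), might look like this:

```c
/* Tiny regression sketch of the battery thermal-prediction idea:
 * estimate cell temperature from discharge current and ambient
 * temperature using coefficients fitted offline, then gate charging or
 * discharging on the prediction. Coefficients and limits are invented
 * for illustration. */
#include <stdio.h>

/* Offline-fitted model: T_pred = b0 + b1*I + b2*I^2 + b3*T_ambient     */
static const float b[4] = { 1.5f, 0.8f, 0.12f, 0.95f };

static float predict_temp_c(float current_a, float ambient_c)
{
    return b[0] + b[1] * current_a
                + b[2] * current_a * current_a
                + b[3] * ambient_c;
}

int main(void)
{
    const float limit_c = 60.0f;              /* protection threshold    */
    float current_a = 18.0f, ambient_c = 30.0f;

    float t = predict_temp_c(current_a, ambient_c);
    printf("predicted cell temperature: %.1f C -> %s\n",
           t, t > limit_c ? "pause discharge" : "continue");
    return 0;
}
```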

KV: Yeah. To expand beyond the real time analytics and time series applications, Renesas has made significant investments in the areas of computer vision and voice as well. If you look at the computer vision space, most of the applications rely on complex deep learning models, as well as they require very intensive data sets for models to be trained for commercial applications. And a lot of our customers today struggle with that as a design challenge, as first of all, where do we get the data set from, and secondly, it’s a very compute intensive process to train those models to meet certain performance criterias. So the approach that we’re taking at Renesas is, we’re building a library of pretrained models. These are models that have been trained to, say, 80, 90 percent accuracy. And then you can take these models and use them as a foundation to retrain them based on incremental data. And this is where, if you go to renesas.com/ai, you will see a library of 30 plus pretrained models that cover a range of different applications of computer vision, and Renesas continues to invest and grow that library of models. On the voice side, we have applications all the way from voice command recognition that run on, I would say resource constrained 16 32 bit microcontrollers, all the way to natural language understanding by applications that are running on slightly higher end microcontrollers and microprocessors. And we’re seeing tremendous traction for voice being used as human to machine interface in a broad spectrum of applications. And COVID exacerbated that trend, because people are now reluctant to touch things in public access spaces, and voice just seems to be that natural medium to be able to control something. So across the broad spectrum of AI segments, whether it’s real time analytics or vision or voice, Renesas has taken a very holistic approach of building relevant tools, building a strong set of application libraries and reference designs, and then also building support models to make sure that our customers are successful, and that they are getting started and working with us and being able to successfully deploy AI.
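The "take a pretrained model and retrain it on incremental data" flow Kaushal describes is essentially transfer learning: the pretrained backbone is frozen and only a new classifier head is fitted to the customer's data. The C sketch below fits such a head by gradient descent on feature vectors assumed to have come from a frozen backbone; it is a generic illustration, not the Renesas model zoo or its tooling.

```c
/* Minimal illustration of the "retrain a pretrained model on your own
 * data" step: the backbone is frozen, so only a new classifier head is
 * fitted. Here the head is a logistic regression trained by gradient
 * descent on feature vectors assumed to have been produced by the
 * frozen backbone. Data and sizes are made up for illustration. */
#include <math.h>
#include <stdio.h>

#define FEAT 4      /* backbone feature dimension (assumed)  */
#define N    6      /* number of new labelled samples        */

int main(void)
{
    /* Backbone features for the customer's incremental data.           */
    const float x[N][FEAT] = {
        {0.9f, 0.1f, 0.8f, 0.2f}, {0.8f, 0.2f, 0.7f, 0.1f},
        {0.7f, 0.3f, 0.9f, 0.3f}, {0.1f, 0.9f, 0.2f, 0.8f},
        {0.2f, 0.8f, 0.1f, 0.9f}, {0.3f, 0.7f, 0.2f, 0.7f},
    };
    const float y[N] = {1, 1, 1, 0, 0, 0};   /* new class labels         */

    float w[FEAT] = {0}, bias = 0.0f, lr = 0.5f;

    /* Gradient descent on the head only; the backbone never changes.    */
    for (int epoch = 0; epoch < 200; epoch++) {
        for (int n = 0; n < N; n++) {
            float z = bias;
            for (int j = 0; j < FEAT; j++)
                z += w[j] * x[n][j];
            float p = 1.0f / (1.0f + expf(-z));
            float err = p - y[n];             /* dLoss/dz for log-loss   */
            for (int j = 0; j < FEAT; j++)
                w[j] -= lr * err * x[n][j];
            bias -= lr * err;
        }
    }

    printf("head weights:");
    for (int j = 0; j < FEAT; j++)
        printf(" %.2f", w[j]);
    printf("  bias %.2f\n", bias);
    return 0;
}
```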

ES: So talk to us a little more about bridging this divide between the AI domain and the developer domain.

MD: That’s really where kind of the rubber meets the road, right? So we have engineers who are experts in [rhythmatics] into developing these complex embedded systems, but they may not have the time or the expertise on the data collection side, for example, how to build those models. And with that in mind, remember what I said about Renesas wants to give back time to those developers. We have made a huge step forward where we’re bringing the AI domain and the embedded domain together. So if you think about it today, a customer has to develop an AI model using some sort of tool on one of his screens or one PC, and he has to do his embedded development for, whether it happens to be a healthcare product or [?26:10] product or industrial product, when he’s developing that core. How do we bring the two worlds together, right? So with that in mind, what we have done is, we have put together, we have actually reality AI tools. We did a workflow integration with our [?] studio, which is our [?] development environment. And with this, it will enable the designer to seamlessly share their data and projects and AI code module between the two projects. What [?] done is that you basically, in the [?] studio, we put our connective projects, configure the support package, and you can collect the data through this data collection module, and then put it through the reality AI tools, where you can train the model, optimize it. Remember we talked, what is constrained devices, make it really reoptimize efficient, and then export that inference
code into an embedded project through an API, context aware, where it goes into the embedded project. You then add it as a kind of C file that goes into the embedded project, and then you can actually develop the code and deploy it into an end application. What this really means ultimately is that you have a faster design cycle for AI applications at the endpoint and for IOT networks. And we are providing with this a lot of support, application notes, training modules, to get those embedded engineers to start developing AI models seamlessly and quickly.
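For a sense of what that exported inference module looks like from the embedded side, here is a skeleton of the integration Mo describes: collect a window of sensor data, call the generated C inference function, act on the result. Every identifier below (rai_inference, read_sensor_sample, the buffer sizes) is a placeholder invented for this sketch, not the actual generated API.

```c
/* Shape of the integration described above: a window of sensor data is
 * collected, passed to the inference function exported as a C file by
 * the ML tool, and the result drives the application. The names
 * rai_inference(), read_sensor_sample() and the sizes are placeholders
 * invented for this sketch, not the real generated API. */
#include <stdio.h>

#define WINDOW    64
#define N_CLASSES 2

/* Stub standing in for the tool-generated inference module.            */
static void rai_inference(const float *window, float *scores)
{
    float energy = 0.0f;
    for (int i = 0; i < WINDOW; i++)
        energy += window[i] * window[i];
    scores[0] = energy < 10.0f ? 1.0f : 0.0f;   /* "normal"              */
    scores[1] = 1.0f - scores[0];               /* "fault"               */
}

/* Stub standing in for an ADC / sensor driver read.                     */
static float read_sensor_sample(int i)
{
    return 0.1f * (float)(i % 8);
}

int main(void)
{
    float window[WINDOW], scores[N_CLASSES];

    for (;;) {                          /* embedded-style super loop     */
        for (int i = 0; i < WINDOW; i++)
            window[i] = read_sensor_sample(i);

        rai_inference(window, scores);  /* generated code runs locally   */

        if (scores[1] > 0.5f)
            printf("fault predicted: raise maintenance flag\n");

        break;                          /* single pass for this sketch   */
    }
    return 0;
}
```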

ES: You’re enabling your customers, it sounds to me, to focus on what they know best, and you’re providing these tools that are just so completely out of the normal expertise of these organizations. That’s got to be tremendously valuable.

MD: Yeah, absolutely. I think, just to add to this, it’s all about creating an opportunity for a more sustainable future as well. AI’s great, whether it’s generative AI or predictive AI. We have to make sure it’s sustainable, and it’s able to add real value to consumers or developers of those embedded products. Ultimately, it has to be good for the wider world and the humanity. And I think that’s where it really all boils down to. And that’s the core of Renesas, really making this world more smarter and more efficient for a more sustainable future.

ES: Exciting stuff for the folks working in that space to get to have such a powerful set of tools at their disposal. With that, I think we are out of time. And I just want to thank both of you so much for joining us today, sharing your insights on the industry at large, as well as clueing us all in to some of the tools that Renesas is making available to the marketplace. Thank you, Mo, for being here.

MD: Thank you, appreciate it.
 
  • Like
  • Fire
  • Love
Reactions: 30 users
We know what Renesas and their Reality AI think of Akida.

Now seeing them pushing out some PR and discussions on endpoint, edge and real time AI plus cases that we would fit perfectly for imo.

They don't mention us unfortunately but like to suspect we are in the background about what they are discussing.

Podcast last 24 hrs.




AI at the edge is no longer something down the road, or even leading edge technology. Some consider it to be mainstream. But that doesn’t make it any less complex. We’re joined today by Kaushal Vora and Mo Dogar to discuss hardware and software components that are required to implement AI at the edge and how those various components get pieced together. We’ll also discuss some very real use cases spanning computer vision, voice, and real time analytics, or non-visual sensing. Kaushal is senior director for business acceleration and global ecosystem at Renesas Electronics. With over 15 years of experience in the semiconductor industry, Kaushal’s worked in several technology areas, including healthcare, telecom infrastructure, and solid state lighting. At Renesas, he leads a global team responsible for defining and developing IOT solutions for the company’s microcontroller and microprocessor product lines, with a focus on AI and ML, cybersecurity, functional safety, and connectivity, among other areas. Kaushal has an MSEE from the University of Southern California. Kaushal, thanks for joining us.


KV: Pleasure is mine. Always happy to be in such good company.

ES: And welcome to Mo Dogar, who is head of global business development and technology ecosystem for Renesas, responsible for promotion and business expansion of the complete microcontroller portfolio and other key products. He’s instrumental in driving global business growth and alignment of marketing, sales, and development strategies, particularly in the field of IOT, EAI, security, smart embedded electronics, and communications. In addition, Mr. Dogar helps provide the vision and thought leadership behind product and solution development, smart society, and the evolving IOT economy. Mo, thanks so much for joining us.

MD: It’s great to be here. Thank you for having us.

ES: So we are super excited to have you both on today because of obviously the tremendous hype around AI right now. It’s everywhere. Can you demystify some of that? And if you would, I’d love to start by talking about the difference between generative AI, things like ChatGPT that almost everyone is familiar with these days, and predictive AI.

MD: Yeah. I’ll kick it off. What a great time to be in technology right now, or actually in the world we live in, right? So AI is certainly everywhere, and I would say, actually it’s a bit more than a hype in some cases. So your question’s great. How do we differentiate? What is the real distinguish between generative AI and predictive AI? So the generative AI is all about creating new content, right? It’s about adding value for people in a way to save time. And on the other hand, if you look at predictive AI, it’s about analyzing and making predictions. And most of the case time you’re talking about those intelligent end point or edge devices, whether they’re in our homes or factories, or wherever they happen to be. We’re collecting data all the time. You know, we live in this world that’s full of sensors. So really, there are two different types that really face, then creating huge opportunity for us. When you talk about generative AI, you’re talking about text or audio or video, and it’s really helping to accelerate some of these content that is needed. And it kind of leverages on foundation models and transformers. But on the other hand, if you look at the edge AIs, typically running on these resource constrained devices that’s collecting data, and they have to make decisions on real time in some cases, and give feedback to the system or the network. And literally, you can imagine billions of these endpoint devices out there are actually collecting data and making decisions as well. What we are also seeing, if you look at it from a market perspective, I mean, huge opportunity out there. If you look at generative AI, some of the market researchers predict anywhere, if you look at around 2030, 190 billion dollar worth of market being predicted. On the edge AI, it’s closer to maybe 6 to 100 billion. So it’s really significant. What I would also add is that probably the edge AI is a bit more mature compared to generative AI, but really, the scale of acceleration and adoption is phenomenal. So I think exciting times to be in the world of technology, and seeing AI to really add value to our lives and the technology at large.

KV: Yeah, excellent points. Just to add on what Mo said, right, the two types of AI are very different technically. Generative AI tends to be running in data centers and in the cloud. It uses tera-ops of performance, and it uses gigabytes of memory storage. These are extremely large models that have been generalized to solve more general purpose problems, like understanding language, understanding video, understanding images, right? One of the challenges we see with generative AI, although there’s a lot of hype around it because of how consumerized it has become, is, you know, can we scale generative AI sustainably? Because it’s extremely power hungry, and it’s extremely resource hungry. As Mo said, edge AI is definitely more mature. There is a lot of use cases on the edge that tend to be a lot more real time, that tend to be a lot more constrained, that can leverage predictive AI. And I think a balance between both the generative AI and predictive AI would eventually be where people settle a few years from now that’s just a prediction based on some of the things that we’re seeing. But absolutely, as Mo mentioned, from an edge and end point perspective, we’re seeing tremendous traction. We’re seeing traction across, whether it’s [?7:28] interfacing with voice, whether it’s environmental sensing, you know, whether it’s predictive analytics and maintenance of machines. Anywhere you have sensors and anywhere you have engineering problems and wave forms and things like that, there’s applications for predictive analytics and predictive AI.

ES: One of the things that you mentioned there at the end was voice interfaces. And that gets me thinking about all those devices at the edge, most of which have no traditional text based data entry capabilities. We don’t interact with these devices in that way typically. So let’s talk about the convergence of IOT and AI. What is artificial intelligence of things, AIOT, and what factors have made it possible today?

MD: Okay, great question, actually. Exciting times. We have IOT, which has been around for a while and has matured somewhat, and we’ve got edge AI coming in and adding value. These two powerful technologies are coming together and delivering very high value when it comes to connected systems. The other thing that’s happening in this exciting time is that IOT, 5G and AI are all maturing and accelerating at a similar time, which really provides big value. And I always say this: the sensors out there are generating lots and lots of data, and we need to turn that data into revenue, right? This is where AI coming together with IOT is able to solve real-world problems. Think about a sensor on the factory floor that’s looking at the vibrations of a machine, maybe running a motor control or motor drive system, perhaps indicating there is some sort of failure; AI can help with that prediction and maybe avoid major downtime in the factory. That’s real problem solving, where you’re able to generate a new revenue stream and at the same time make big savings. The other thing I would say is that IOT is about products, sensors and devices being connected, but when you add AI, in a lot of cases these sensors or endpoint devices may not be always on or connected to the cloud. So there’s a significant need to develop very optimized models and algorithms that can run on those constrained devices and make those real-time decisions. Together, AI and IOT add significant value and bring a big opportunity for everybody involved.

KV: Yeah. And I think what’s really going to happen is a drastic shift in the way we’ve thought about architecting intelligence in the network. In the traditional IOT, all of the intelligence was concentrated in the cloud, in the data center, or in the core of the network, and for any machine learning or AI that had to run at the other layers of the network, like the edge or even the endpoints, a query had to be sent up to the cloud, and the round-trip latency was something the application had to tolerate. If you think about scaling AI at the more resource-constrained layers of the network, which is the edge, the endpoint, and the continuum of the edge, then for AIOT to be successfully adopted the intelligence model needs to shift to a more decentralized one. This is where you’re going to see a lot more capability or intelligence embedded right at the edge and within the endpoints to do local inferencing, local classification, local regression models, and things like that, for a broad range of applications. That drastic shift towards decentralizing intelligence is the need of the day, and something the ecosystem overall is already working on.

ES: You mentioned that travel time, sending data from these devices into the cloud or wherever else the processing is happening. Can you talk a little more about that and some of the other advantages that you see of that decentralized intelligence model?

KV: Absolutely. Traditionally, when we’ve thought about AI, we’ve thought about things like machine learning or natural language understanding. When we talk to our Alexas and Siris, these are all backed by powerful cloud-based intelligence, and for a human query, waiting a few seconds is okay. But say you have an application that’s time critical or even mission critical, something that’s running a motor in, say, a multi-million-dollar piece of industrial equipment, where the failure of that motor can be catastrophic. For machine learning to classify that particular type of anomaly, you just cannot expect the inferencing to go up to the cloud and come back; there simply is no room to tolerate that kind of round-trip latency. And that’s where we’re seeing a lot of interest in baking these applications into the devices themselves. Now, what kind of devices are we talking about? Devices with significantly reduced compute capacity: in most cases hundreds of megahertz, in some cases in the low-gigahertz range. Significantly lower memory capacity too: megabytes of storage in some cases, maybe a little more, and significantly constrained RAM as well, which is the working memory required to actually run the model and the inferencing. So we’re talking about very different constraints at the system level, and therefore the AI and machine learning models that are trained and deployed for these kinds of applications have to work within those constraints and do all of the inferencing locally. A lot of these applications may never even connect to the cloud. A classic example: we were working with a customer that was trying to deploy machine learning in a metallurgy and mineral processing application. This is multi-million-dollar metallurgy equipment sitting in a very remote location, often not even accessible to humans and in some cases not even connected; there isn’t even an infrastructure to connect that piece of equipment to the cloud. So we’ve been able to deploy lightweight machine learning there. A couple of examples I can think of: a regression model to estimate the thickness of a shield that is used to filter the ore and that vibrates as the ore is shaken, because with certain types of vibration that shield can be compromised, so we implemented machine learning in the form of a regression model to detect that anomaly. Another example is a classification model to detect harmful tramp material in the overall mix; tramp is very important to detect, because harmful tramp can cause a lot of disruption in mining and metallurgy overall. Detecting those kinds of foreign components through a classification engine is, again, all done remotely at the endpoint, running on either a microcontroller or a lightweight microprocessor. These are classic examples, and there are hundreds of other examples in the industrial space where local inferencing is the only practical way to implement machine learning, because the applications are just so time sensitive.
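Neither speaker names the algorithms or libraries behind these endpoint models, so purely as an illustration of the kind of lightweight regression-based anomaly check KV describes (the ore-shield example), here is a minimal Python sketch using scikit-learn. The features, data and threshold are invented; a production model would be trained offline and exported to fixed-point code for the microcontroller.

```python
# Minimal sketch (hypothetical): regression-based anomaly detection on
# vibration features, of the kind described for the ore-shield example.
# Feature names, data and threshold are invented for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Pretend training data: RMS amplitude and dominant frequency of the
# vibration signal, with the shield thickness measured during commissioning.
X_train = rng.normal(size=(500, 2))              # [rms_amplitude, dominant_freq]
y_train = 3.0 - 0.4 * X_train[:, 0] + 0.1 * X_train[:, 1] + rng.normal(0, 0.05, 500)

model = Ridge(alpha=1.0).fit(X_train, y_train)   # tiny model, fits in a few KB

def shield_ok(features, min_thickness_mm=2.2):
    """Flag an anomaly when the predicted shield thickness drops too low."""
    predicted = model.predict(np.atleast_2d(features))[0]
    return predicted >= min_thickness_mm

print(shield_ok([0.1, -0.3]))   # healthy reading, expected True
print(shield_ok([2.5,  0.0]))   # heavy vibration, likely False
```

On the target hardware only the handful of fitted coefficients and the threshold would ship, which is why models like this fit comfortably on a microcontroller.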

ES: And not to mention the security concerns with transmitting information.

KV: Absolutely. The other inherent advantage of running inference locally is that it takes away the need to transport data back and forth across the network. Because you have such a controlled transport and flow of data through the network, your security posture is significantly simplified. And a lot of these endpoint devices today, if you look at microcontrollers and microprocessors from Renesas, have built-in root-of-trust capabilities in hardware. So your machine learning algorithm can be tightly coupled to the root of trust in hardware, which significantly reduces any threat from a malicious attacker or any sort of hacker. So not only security but also data privacy concerns are significantly alleviated when we run inference locally at the edge.

ES: Looking at this broadly, how do you think about AI from a systems perspective?

MD: If you think about it from a system perspective, think about a typical IOT system, which is doing more than one thing. Take, for example, a connected system on a factory floor that’s collecting data from a manufacturing line and sending it back to a central control panel. You have human-machine interface technologies in the mix. You need connectivity, whether wired or wireless. And security, as Kaushal mentioned earlier. So when we look at a system, AI now has to take this whole diverse set of technologies into account in order to add value and make predictions. One area that is really important is power consumption. Think about it: one of the reasons endpoint and edge AI is growing at such a phenomenal rate is that these devices can make decisions on the device itself. That means they are not turning on radios to transmit data or sending data over real-time Ethernet around the factory floor, so the device’s on-time is much lower and the overall power consumption is very low, which actually makes for a very sustainable AI solution. If you then go deeper into the system side of AI, you’re looking at technologies in the vision space, for example a security system that can detect a person and validate that it’s the right person before granting access, whether through the image itself, facial recognition or voice signatures, as well as real-time analytics. So it’s a very diverse and wide range of technologies that need to work seamlessly together in a system where AI can be applied to make predictions, improve the overall system, and give system-level efficiencies back to the segment it is operating in.

ES: Yeah. Efficiency really feels like the key word there, cutting down on network traffic and power consumption.

MD: Absolutely.

ES: So let’s talk a little more about what each of you are doing in your roles at Renesas, and how the organization as a whole is addressing AI as we move into the AIOT.

MD: We are really at the forefront, enabling our customers across industrial, consumer, infrastructure and automotive, a very diverse set of industries, by providing them solutions all the way from data movement, connectivity and sensing to analog and power capability: a complete chain of solutions that ultimately powers the edge as well. On top of that, the real vision Renesas has is: how do we give time back to developers so they can spend more of it on their systems? One thing we also need to consider is that the embedded engineers we are empowering [?19:21] with AI may not be data scientists, and may not even be connectivity experts. So how can we enable them, through the right set of tools, consultancy and training, to add AI to their applications? With that in mind, we have made significant investments in software, tools, solutions and the [?] ecosystem to really accelerate AI [he] designs. We’re leading the world especially when it comes to time series, or real-time analytics, where we take high-frequency data from different sensors and do prediction at the endpoint. With that in mind we acquired a company called Reality AI in 2022, a pioneer in highly optimized AI on the endpoint. Through this we provide an automated tool chain, which is an [?20:14] ML capability, to collect the data and really optimize and classify it in a way that fits within those resource-constrained devices. For example, one of our appliance customers had a major issue: they wanted to detect an out-of-balance condition within a washer or dishwasher application. They were using an accelerometer in that design, a real hardware sensor, to detect the out-of-balance condition, and that was adding around three dollars to their bill of materials. What we did was take the current and voltage fluctuations from the motor itself, put that data through our Reality AI tool chain, and develop a model that makes that prediction with more than 95 percent accuracy.
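Mo doesn’t go into the signal processing behind the washer example, and the Reality AI tool chain is proprietary, so the following is only a rough sketch of the general idea: derive a few spectral features from the motor’s current waveform and train a small classifier, instead of fitting a physical accelerometer. The sampling rate, features and classifier are all assumptions for illustration.

```python
# Rough sketch (assumptions throughout): classify washer out-of-balance from
# motor current samples instead of a dedicated accelerometer.
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 2000  # assumed sampling rate of the motor current sensor, in Hz

def current_features(window):
    """Turn one window of raw current samples into a few spectral features."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return np.array([
        spectrum[1:50].sum(),                   # low-frequency energy (drum wobble)
        spectrum[50:200].sum(),                 # mid-band energy
        spectrum.argmax() * FS / len(window),   # dominant frequency in Hz
    ])

# Hypothetical labelled windows collected during balanced / unbalanced runs.
rng = np.random.default_rng(1)
balanced   = [current_features(rng.normal(0, 1.0, FS)) for _ in range(100)]
unbalanced = [current_features(rng.normal(0, 1.0, FS) +
                               0.8 * np.sin(2 * np.pi * 3 * np.arange(FS) / FS))
              for _ in range(100)]
X = np.vstack(balanced + unbalanced)
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression(max_iter=1000).fit(X, y)  # small enough to port to an MCU
print("training accuracy:", clf.score(X, y))
```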

ES: Wow. So without the hardware sensor at all in the picture anymore, yeah?

MD: Absolutely. What we talk about at Renesas is how we can make AI a reality for customers. The assumption is often that people adding AI or ML capability will have to add hardware. What we’re saying is that you don’t necessarily have to add to the hardware you have; you may even be able to remove some. So ultimately these customers are happy: they get a significant reduction in their bill of materials, and they get a genuinely intelligent application doing that prediction. And there are many other applications where this really adds value. [?21:48] There was a customer recently who wanted to predict the temperature of the components used in the battery management of a power tool. The issue is stopping the battery from discharging when it overheats, which also means the battery has a much longer life. In this case the thermal model is critical, as it helps protect the battery while at the same time giving an accurate current measurement. Traditionally the customer had been using a simple [MATLAB] approach, which did not provide sufficient accuracy. Again, we brought in our Reality AI solution, running on a 16-bit RL78 microcontroller, and were able to provide a very low-power solution that predicts those temperature changes and keeps that power tool’s battery healthy. So yeah, it’s really amazing what’s happening in that world. And that’s just one area; then of course we have the other areas of voice and vision. Perhaps my colleague can add more on that.

KV: Yeah. To expand beyond the real-time analytics and time-series applications, Renesas has made significant investments in the areas of computer vision and voice as well. In the computer vision space, most applications rely on complex deep learning models and require very large data sets to train models for commercial use. A lot of our customers struggle with that as a design challenge: first, where do we get the data set from, and second, it’s a very compute-intensive process to train those models to meet certain performance criteria. So the approach we’re taking at Renesas is to build a library of pretrained models. These are models that have been trained to, say, 80 or 90 percent accuracy, and you can take them and use them as a foundation, retraining them with incremental data. If you go to renesas.com/ai, you will see a library of 30-plus pretrained models covering a range of computer vision applications, and Renesas continues to invest in and grow that library. On the voice side, we have applications all the way from voice command recognition, running on resource-constrained 16- and 32-bit microcontrollers, up to natural language understanding applications running on higher-end microcontrollers and microprocessors. We’re seeing tremendous traction for voice as the human-machine interface in a broad spectrum of applications, and COVID accelerated that trend, because people are now reluctant to touch things in public spaces and voice is a natural medium for controlling something. So across the broad spectrum of AI segments, whether it’s real-time analytics, vision or voice, Renesas has taken a very holistic approach: building the relevant tools, building a strong set of application libraries and reference designs, and also building support models to make sure our customers are successful, get started easily, and can deploy AI with us.
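Kaushal doesn’t describe how the pretrained models are retrained, but the usual pattern for this kind of “start from 80-90 percent accuracy and fine-tune on your own incremental data” workflow is transfer learning. Here is a minimal, generic Keras sketch of that pattern; the backbone choice, shapes and placeholder data are assumptions, not Renesas’s actual model zoo.

```python
# Generic transfer-learning sketch (assumed shapes and dataset), illustrating
# the "pretrained model + incremental data" workflow described above.
import tensorflow as tf

NUM_CLASSES = 3  # e.g. three defect types on a production line (hypothetical)

# Start from a small ImageNet-pretrained backbone and freeze its weights.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 'train_ds' would be the customer's own small, incremental dataset, e.g. built
# with tf.keras.utils.image_dataset_from_directory(...). Placeholder data is
# used here only so the sketch runs end to end.
train_ds = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform((32, 96, 96, 3)), tf.zeros((32,), dtype=tf.int32))
).batch(8)

model.fit(train_ds, epochs=1)
```

Only the small classification head is trained here, which is why a modest amount of customer data can be enough to specialize a generic pretrained model.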

ES: So talk to us a little more about bridging this divide between the AI domain and the developer domain.

MD: That’s really where the rubber meets the road, right? We have engineers who are experts in [rhythmatics] and in developing these complex embedded systems, but they may not have the time or the expertise on the data collection side, or in how to build those models. With that in mind, and remembering what I said about Renesas wanting to give time back to developers, we have made a huge step forward in bringing the AI domain and the embedded domain together. Today, a customer typically has to develop an AI model using some sort of tool on one screen or one PC, and separately do the embedded development for whatever it happens to be, a healthcare product or [?26:10] product or industrial product, when developing the core code. So how do we bring the two worlds together? What we have done is take our Reality AI tools and integrate their workflow with e² studio, which is our integrated development environment. This enables the designer to seamlessly share data, projects and AI code modules between the two projects. Basically, in e² studio you create your connected project, configure the support package, and collect the data through the data collection module; then you put it through the Reality AI tools, where you train and optimize the model. Remember what we said about constrained devices: you make it really optimized and efficient, and then export that inference code into the embedded project through a context-aware API. It comes in as a C file that goes into the embedded project, and then you develop the rest of the code and deploy it into the end application. What this ultimately means is a faster design cycle for AI applications at the endpoint and for IOT networks. And we are providing a lot of support alongside this, application notes and training modules, to get those embedded engineers developing AI models seamlessly and quickly.
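The Reality AI exporter itself is proprietary and not shown in the transcript, so as a generic stand-in for the “export the inference as a C file” step, here is a sketch that converts a small Keras model to a TensorFlow Lite flatbuffer and writes it out as a C array, the way many embedded workflows embed a model in firmware. The model, file names and array names are hypothetical; this is not the Renesas/Reality AI tool chain.

```python
# Generic stand-in (NOT the Reality AI exporter): convert a trained Keras model
# to a TensorFlow Lite flatbuffer and emit it as a C source file that an
# embedded project can compile in. Model and file names are hypothetical.
import tensorflow as tf

# A tiny placeholder model standing in for the trained/optimized endpoint model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_bytes = converter.convert()

# Emit the model as a C array, the usual way a flatbuffer lands in firmware.
with open("model_data.c", "w") as f:
    f.write("#include <stdint.h>\n\n")
    f.write(f"const unsigned int g_model_len = {len(tflite_bytes)};\n")
    f.write("const uint8_t g_model_data[] = {\n")
    for i in range(0, len(tflite_bytes), 12):
        chunk = ", ".join(f"0x{b:02x}" for b in tflite_bytes[i:i + 12])
        f.write(f"  {chunk},\n")
    f.write("};\n")
print("wrote model_data.c")
```

The firmware would then reference g_model_data/g_model_len from its inference runtime, which is conceptually the step the e² studio workflow integration automates.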

ES: You’re enabling your customers, it sounds to me, to focus on what they know best, and you’re providing these tools that are just so completely out of the normal expertise of these organizations. That’s got to be tremendously valuable.

MD: Yeah, absolutely. And just to add to this, it’s all about creating an opportunity for a more sustainable future as well. AI is great, whether it’s generative AI or predictive AI, but we have to make sure it’s sustainable and that it adds real value to the consumers or developers of those embedded products. Ultimately it has to be good for the wider world and for humanity, and I think that’s what it really all boils down to. That’s at the core of Renesas: making this world smarter and more efficient for a more sustainable future.

ES: Exciting stuff for the folks working in that space to get to have such a powerful set of tools at their disposal. With that, I think we are out of time. And I just want to thank both of you so much for joining us today, sharing your insights on the industry at large, as well as clueing us all in to some of the tools that Renesas is making available to the marketplace. Thank you, Mo, for being here.

MD: Thank you, appreciate it.
Reactions: 23 users

MrRomper

Regular
Tata patent / application
Brainchip mention




Conventional gesture detection approaches demand large memory and computation power to run efficiently, thus limiting their use in power and memory constrained edge devices. The present application/disclosure provides a Spiking Neural Network based system which is a robust, low power, edge compatible, ultrasound-based gesture detection system. The system uses a plurality of speakers and microphones that mimics a Multi Input Multi Output (MIMO) setup, thus providing the requisite diversity to effectively address fading. The system also makes use of a distinctive Channel Impulse Response (CIR) estimated by imposing a sparsity prior for robust gesture detection. A multi-layer Convolutional Neural Network (CNN) has been trained on these distinctive CIR images and the trained CNN model is converted into an equivalent Spiking Neural Network (SNN) via an ANN (Artificial Neural Network)-to-SNN conversion mechanism. The SNN is further configured to detect/classify gestures performed by user(s).
A separate Tata patent was published on the same day at the USPTO.
https://image-ppubs.uspto.gov/dirsearch-public/print/downloadPdf/20230334300
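The abstract doesn’t spell out how the CIR “images” are obtained beyond the sparsity prior. Purely as an illustration of the basic idea, here is a minimal sketch that estimates a channel impulse response from an ultrasound probe signal by cross-correlation (a matched filter); the sparsity prior and the CNN/SNN stages are not shown, and every signal parameter below is invented.

```python
# Illustration only: estimating a Channel Impulse Response (CIR) frame from an
# ultrasound probe signal by cross-correlation (a matched filter). The patent
# additionally imposes a sparsity prior on the CIR, which is not shown here.
import numpy as np

FS = 48_000          # audio sampling rate (Hz), typical for speaker/mic hardware
PROBE_LEN = 256      # samples in the transmitted probe sequence
CIR_TAPS = 64        # number of impulse-response taps kept per frame

rng = np.random.default_rng(42)
probe = np.sign(rng.standard_normal(PROBE_LEN))      # pseudo-random +/-1 probe

# Simulate a sparse 2-path channel (direct path + one hand reflection) + noise.
channel = np.zeros(CIR_TAPS)
channel[[5, 23]] = [1.0, 0.35]
received = np.convolve(probe, channel)[:PROBE_LEN] + 0.05 * rng.standard_normal(PROBE_LEN)

# Matched-filter estimate of the CIR: correlate received signal with the probe.
corr = np.correlate(received, probe, mode="full")
cir = corr[PROBE_LEN - 1:PROBE_LEN - 1 + CIR_TAPS] / PROBE_LEN

print("strongest taps:", np.argsort(np.abs(cir))[-2:])   # expect taps 5 and 23
```

Stacking successive CIR frames like this over time is what produces the 2-D “CIR image” that the patent’s CNN is trained on before the ANN-to-SNN conversion.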
 
Reactions: 20 users
Latest GitHub info & updates from early Sept. Lots of nice 2.0 additions :) (For anyone wanting to see how the new pieces fit together, there’s a quick usage sketch after the release notes.)


Sep 5 · @ktsiknos-brainchip · tag 2.4.0-doc-1 · commit cb33735 · Latest release
Upgrade to QuantizeML 0.5.3, Akida/CNN2SNN 2.4.0 and Akida models 1.2.0


Update QuantizeML to version 0.5.3

  • "quantize" (both method and CLI) will now also perform calibration and cross-layer equalization
  • Changed default quantization scheme to 8 bits (from 4) for both weights and activations

Update Akida and CNN2SNN to version 2.4.0

New features​

  • [Akida] Updated compatibility with python 3.11, dropped support for python 3.7
  • [Akida] Support for unbounded ReLU activation by default
  • [Akida] C++ helper added on CLI to allow testing Akida engine from a host PC
  • [Akida] Prevent user from mixing V1 and V2 Layers
  • [Akida] Add fixtures for the DepthwiseConv2D
  • [Akida] Add AKD1500 virtual device
  • [Akida] Default buffer_bitwidth for all layers is now 32.
  • [Akida] InputConv2D parameters and Stem convolution parameters take the same parameters
  • [Akida] Estimated bit width of variables added to json serialised model
  • [Akida] Added Akida 1500 PCIe driver support
  • [Akida] Shifts are now uint8 instead of uint4
  • [Akida] Bias variables are now int8
  • [Akida] Support of Vision Transformer inference
  • [Akida] Model.predict now supports Akida 2.0 models
  • [Akida] Add an Akida 2.0 ExtractToken layer
  • [Akida] Add an Akida 2.0 Conv2D layer
  • [Akida] Add an Akida 2.0 Dense1D layer
  • [Akida] Add an Akida 2.0 DepthwiseConv2D layer
  • [Akida] Add an Akida 2.0 DepthwiseConv2DTranspose layer
  • [Akida] Add an Akida Dequantizer layer
  • [Akida] Support the conversion of QuantizeML CNN models into Akida 1.0 models
  • [Akida] Support the conversion of QuantizeML CNN models into Akida 2.0 models
  • [Akida] Support Dequantizer and Softmax on conversion of a QuantizeML model
  • [Akida] Model metrics now include configuration clocks
  • [Akida] Pretty-print serialized JSON model
  • [Akida] Include AKD1000 tests when deploying engine
  • [Akida/infra] Add first official AKD1500 PCIe driver support
  • [CNN2SNN] Updated dependency to QuantizeML 0.5.0
  • [CNN2SNN] Updated compatibility with tensorflow 2.12
  • [CNN2SNN] Provide a better solution to match the block pattern with the right conversion function
  • [CNN2SNN] Implement DenseBlockConverterVX
  • [CNN2SNN] GAP output quantizer can be signed
  • [CNN2SNN] removed input_is_image from convert API, now deduced by input channels

Bug fixes:​

  • [Akida] Fixed wrong buffer size in update_learn_mem, leading to handling of bigger buffers than required
  • [Akida] Fixed issue in matmul operation leading to an overflow in corner cases
  • [Akida] Akida models could not be created by a list of layers starting from InputConv2D
  • [Akida] Increasing batch size between two forward did not work
  • [Akida] Fix variables shape check failure
  • [engine] Optimize output potentials parsing
  • [CNN2SNN] Fixed conversion issue when converting QuantizeML model with Reshape + Dense
  • [CNN2SNN] Convert with input_is_image=False raises an exception if the first layer is a Stem or InputConv2D
Note that version 2.3.7 is the last Akida and CNN2SNN drop supporting Python 3.7 (EOL end of June 2023).

Update Akida models to 1.2.0

  • Updated CNN2SNN minimal required version to 2.4.0 and QuantizeML to 0.5.2
  • Pruned several models from the zoo: Imagenette, cats_vs_dogs, melanoma classification, both ocular disease models, ECG classification, CWRU fault detection, VGG, face verification
  • Added load_model/save_models utils
  • Added a 'fused' option to separable layer block
  • Added a helper to unfuse SeparableConvolutional2D layers
  • Added a 'post_relu_gap' option to layer blocks
  • Stride 2 is now the default for MobileNet models
  • Training scripts will now always save the model after tuning/calibration/rescaling
  • Reworked GXNOR/MNIST pipeline to get rid of distillation
  • Removed the renaming module
  • Data server with pretrained models reorganized in preparation for Akida 2.0 models
  • Legacy 1.0 models have been updated towards 2.0, providing both a compatible architecture and a pretrained model
  • 2.0 models now also come with a pretrained 8bit helper (ViT, DeiT, CenterNet, AkidaNet18 and AkidaUNet)
  • ReLU max value is now configurable in layer_blocks module
  • It is now possible to build ‘unfused’ separable layer blocks
  • Legacy quantization parameters removed from model creation APIs
  • Added an extract.py module that allows samples extraction for model calibration
  • Dropped pruning tools support
  • Added Conv3D blocks

Bug fixes:​

  • Removed duplicate DVS builders in create CLI
  • Silenced unexpected verbosity in detection models evaluation pipeline

Known issues:​

  • Pretrained helpers will fail downloading models on Windows
  • Edge models are not available for 2.0 yet

Documentation update

  • Large rework of the documentation to integrate changes for 2.0
  • Added QuantizeML user guide, reference API and examples
  • Introduced a segmentation example
  • Introduced a vision transformer example
  • Introduce a tutorial to upgrade 1.0 to 2.0
  • Updated zoo performance page with 2.0 models
  • Aligned overall theme with Brainchip website
  • Fixed a menu display issue in the example section
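
For anyone wanting to see how the 2.0 pieces above fit together, here’s a minimal sketch of the quantize-convert-predict flow the notes describe. The import paths and call shapes are assumptions drawn from the release notes rather than verified against the 2.4.0 release, the toy Keras model is invented, and a real network has to respect Akida’s layer compatibility rules.

```python
# Minimal sketch of the 2.0 flow referenced in the notes above: quantize a
# Keras model with QuantizeML (8-bit by default per the notes), convert it
# with CNN2SNN, then run inference with the Akida model. Import paths and the
# toy model are assumptions for illustration only.
import numpy as np
import tensorflow as tf
from quantizeml.models import quantize   # assumed entry point for "quantize"
from cnn2snn import convert

keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# ... normal Keras training would happen here ...

quantized = quantize(keras_model)   # also runs calibration / CLE per the notes
akida_model = convert(quantized)    # QuantizeML CNN -> Akida model
akida_model.summary()

dummy = np.random.randint(0, 255, (1, 28, 28, 1), dtype=np.uint8)
print(akida_model.predict(dummy).shape)
```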
 
Reactions: 42 users

Frangipani

Regular

This Zoom talk (I assume it will be held in English) sounds intriguing, especially given that it is organised by TU Darmstadt’s Institute of Automotive Engineering (FZD), whose head is Prof. Dr.-Ing. Steven Peters (founder and former Head of AI Research at Mercedes-Benz, responsible for implementing neuromorphic technology in the Vision EQXX concept car that - as the rest of the world found out in January 2022 - used Akida for in-cabin AI).

The paragraph underneath the actual announcement says that this Zoom talk is part of a series of public lectures primarily aimed at students, research associates, professors, and representatives of the automotive and supplier industries, and that the intention of this series of public lectures is to nurture closer cooperation (well, that’s my own free translation, it literally says “deepen the contact”) between industry, university students and research institutions. The university then adds that this series of public lectures does not pursue any economic interests, and that participation is hence free and does not require pre-registration.


[Screenshots of the FZD lecture announcement]



The Zoom talk’s speaker is a lady from TWT GmbH Science & Innovation (www.twt-innovation.de/en/) - the initialism TWT stands for Technisch-Wissenschaftlicher Transfer (Technical Scientific Transfer):


[Screenshots of the speaker’s details from the TWT GmbH website]


Under Topics we are passionate about, the TWT website (https://twt-innovation.de/en/themen/) lists Artificial Intelligence, Future Engineering, Cloud Transformation, Autonomous Driving, Data Science, Virtual Experience, E-Mobility, Model Based System Engineering and Quantum Computing.

Now comes the electrifying part: TWT has quite a number of illustrious partners that are or could be of interest to us (in alphabetical order): Airbus Group, AMG, Audi, aws, BMW, Bosch, CARIAD, Continental, Dassault Systèmes, Daimler Truck, EnBW (German energy supplier), ESA, Here Technologies, Lufthansa, MAN, Mercedes-Benz, MINI, NVIDIA, Porsche, Rolls Royce, Samsung, T-Systems, VW… (TWT is also partnered with IBM, by the way.)

As mentioned above, the Zoom meeting is open to the public and does not require pre-registration. Unfortunately for those of you residing in Oceania and East Asia/SEA, it will be held on Nov 20 at 6 pm Central European Time, hence at ungodly hours for East Coast Australians, but maybe some early bird Kiwis (6 am on Nov 21) or night owl Sandgropers (1 am on Nov 21) would want to listen in and possibly ask some clever questions. For those in the Americas, the Nov 20 call will start at 12pm EST/9 am PST.
 

Reactions: 35 users

Kachoo

Regular
FMF,

Renesas not mentioning BRN or Akida would be the norm if they purchased the IP.

When you buy the right to produce another company's IP, you are not required to mention them; as the owner, your obligation is then just to pay royalties if it goes commercial.

A company I worked for had equipment with patented tech on the tool itself, and they were required to monitor the units used so a royalty could later be paid to the IP owner.

There could be other rules as well, but in the end it comes down to what is and is not allowed to be disclosed.

Cheers
 
Reactions: 16 users