We know what Renesas and their Reality AI think of Akida.
Now I'm seeing them push out some PR and discussions on endpoint, edge and real-time AI, plus use cases that we would fit perfectly, imo.
They don't mention us unfortunately, but I like to suspect we are in the background of what they are discussing.
Podcast last 24 hrs.
AI at the edge is no longer something down the road, or even leading edge technology. Some consider it to be mainstream. But that doesn’t make it any less complex. We’re joined today by Kaushal Vora and Mo Dogar to discuss hardware and software components that are required to implement AI at the edge and how those various components get pieced together. We’ll also discuss some very real use cases spanning computer vision, voice, and real time analytics, or non-visual sensing. Kaushal is senior director for business acceleration and global ecosystem at Renesas Electronics. With over 15 years of experience in the semiconductor industry, Kaushal’s worked in several technology areas, including healthcare, telecom infrastructure, and solid state lighting. At Renesas, he leads a global team responsible for defining and developing IOT solutions for the company’s microcontroller and microprocessor product lines, with a focus on AI and ML, cybersecurity, functional safety, and connectivity, among other areas. Kaushal has an MSEE from the University of Southern California. Kaushal, thanks for joining us.
KV: Pleasure is mine. Always happy to be in such good company.
ES: And welcome to Mo Dogar, who is head of global business development and technology ecosystem for Renesas, responsible for promotion and business expansion of the complete microcontroller portfolio and other key products. He’s instrumental in driving global business growth and alignment of marketing, sales, and development strategies, particularly in the field of IOT, edge AI, security, smart embedded electronics, and communications. In addition, Mr. Dogar helps provide the vision and thought leadership behind product and solution development, smart society, and the evolving IOT economy. Mo, thanks so much for joining us.
MD: It’s great to be here. Thank you for having us.
ES: So we are super excited to have you both on today because of obviously the tremendous hype around AI right now. It’s everywhere. Can you demystify some of that? And if you would, I’d love to start by talking about the difference between generative AI, things like ChatGPT that almost everyone is familiar with these days, and predictive AI.
MD: Yeah. I’ll kick it off. What a great time to be in technology right now, or actually in the world we live in, right? So AI is certainly everywhere, and I would say, actually, it’s a bit more than hype in some cases. So your question’s great. How do we differentiate? What is the real distinction between generative AI and predictive AI? So generative AI is all about creating new content, right? It’s about adding value for people in a way that saves time. And on the other hand, if you look at predictive AI, it’s about analyzing and making predictions. And most of the time you’re talking about those intelligent endpoint or edge devices, whether they’re in our homes or factories, or wherever they happen to be. We’re collecting data all the time. You know, we live in this world that’s full of sensors. So really, these are two different faces of AI, each creating a huge opportunity for us. When you talk about generative AI, you’re talking about text or audio or video, and it’s really helping to accelerate some of the content that is needed. And it leverages foundation models and transformers. But on the other hand, edge AI is typically running on these resource-constrained devices that are collecting data, and they have to make decisions in real time in some cases, and give feedback to the system or the network. And literally, you can imagine billions of these endpoint devices out there actually collecting data and making decisions as well. What we are also seeing, if you look at it from a market perspective, is a huge opportunity out there. If you look at generative AI, some of the market researchers predict, around 2030, a market worth 190 billion dollars. On the edge AI side, it’s closer to maybe 60 to 100 billion. So it’s really significant.
What I would also add is that edge AI is probably a bit more mature compared to generative AI, but really, the scale of acceleration and adoption is phenomenal. So I think it's exciting times to be in the world of technology, and to see AI really add value to our lives and to technology at large.
KV: Yeah, excellent points. Just to add on to what Mo said, right, the two types of AI are very different technically. Generative AI tends to run in data centers and in the cloud. It uses tera-ops of performance, and it uses gigabytes of memory and storage. These are extremely large models that have been generalized to solve more general-purpose problems, like understanding language, understanding video, understanding images, right? One of the challenges we see with generative AI, although there’s a lot of hype around it because of how consumerized it has become, is, you know, can we scale generative AI sustainably? Because it’s extremely power hungry, and it’s extremely resource hungry. As Mo said, edge AI is definitely more mature. There are a lot of use cases on the edge that tend to be a lot more real time, that tend to be a lot more constrained, that can leverage predictive AI. And I think a balance between both generative AI and predictive AI is eventually where people will settle a few years from now; that’s just a prediction based on some of the things that we’re seeing. But absolutely, as Mo mentioned, from an edge and endpoint perspective, we’re seeing tremendous traction. We’re seeing traction across, whether it’s [?7:28] interfacing with voice, whether it’s environmental sensing, you know, whether it’s predictive analytics and maintenance of machines. Anywhere you have sensors and anywhere you have engineering problems and waveforms and things like that, there are applications for predictive analytics and predictive AI.
ES: One of the things that you mentioned there at the end was voice interfaces. And that gets me thinking about all those devices at the edge, most of which have no traditional text based data entry capabilities. We don’t interact with these devices in that way typically. So let’s talk about the convergence of IOT and AI. What is artificial intelligence of things, AIOT, and what factors have made it possible today?
MD: Okay, great question, actually. So exciting times. You know, we have IOT, which has been around for a while and has somewhat matured, and we’ve got edge AI coming in and adding value. And the two powerful technologies are coming together and giving us very high value when it comes to connected systems. And I think the other thing that’s happening also is that in this exciting time of technology, we have, you know, IOT and 5G and AI kind of coming together and maturing at a similar time, and really providing us big value. And to me, I always say this: these sensors that are out there are generating lots and lots of data. And we need to turn that data into revenue, right? This is where AI coming together with IOT is able to solve real-world problems. You know, whether you’re thinking about a sensor on the factory floor which is looking at vibrations in your machine, maybe running a motor control or motor drive system, perhaps indicating there is some sort of a failure, it can help us with that prediction to maybe save a big downtime in the factory. Real problem solving, where you’re able to generate a new revenue stream, but at the same time make a big saving as well. And the other thing I would say also is, when it comes to IOT, you talk about products and sensors and devices being connected. But when we talk about the combination with AI, in a lot of cases, these sensors or endpoint devices may not be always on, and may not be connected to the cloud. So there’s a significant need to be able to develop very optimized models and algorithms that are going to be able to run on all those constrained devices and make those real-time decisions. So together, AI and IOT are adding significant value and bringing a big opportunity for everybody involved.
KV: Yeah. And I think what’s really going to happen is a drastic shift in the way we’ve thought about architecting intelligence in the network. If you think about the IOT, traditionally, all of the intelligence was concentrated in the cloud or in the data center or in the core of the network. And for any machine learning or AI that had to be run at the other layers of the network, like the edge or even in the endpoints, a query had to be sent up to the cloud, and the round-trip latency was something that the application would have to tolerate. If you think about scaling AI at the more resource-constrained layers of the network, which is the edge, the endpoint, and the continuum of the edge, this is where, for AIOT to be successfully adopted, the intelligence model needs to shift to a more decentralized intelligence model. This is where you’re going to see a lot more capabilities or intelligence being embedded right at the edge and within the endpoints to do local inferencing, local classification, local regression models, and things like that, for a broad range of applications. And that drastic shift in terms of decentralizing the intelligence is the need of the day, and already something that the ecosystem overall is working on.
ES: You mentioned that travel time, sending data from these devices into the cloud or wherever else the processing is happening. Can you talk a little more about that and some of the other advantages that you see of that decentralized intelligence model?
KV: Absolutely. Traditionally, when we’ve thought about AI, we’ve thought about things like computer vision or natural language understanding. When we talk to our Alexas and Siris, these are all backed by powerful cloud-based intelligence. For a human-based query, waiting a few seconds is okay. But say you have an application that’s time critical or even mission critical, something that’s running a motor in, say, a multi-million-dollar piece of industrial equipment, and the failure of that motor can be catastrophic. In order for machine learning to classify that particular type of anomaly, you just cannot expect the inferencing to go up to the cloud and come back; there just is not room for tolerating that kind of round-trip latency to the cloud. And that’s where we’re seeing a lot of interest in baking these applications into these devices. Now what kind of devices are we talking about? We’re talking about devices that have significantly reduced compute capacity, right? We’re talking about, in most cases, hundreds of megahertz, in some cases the low-gigahertz compute range. We’re talking about significantly low memory capacity, megabytes of memory in some cases, maybe a little bit more than that, and then significantly constrained RAM capacity as well, which is the real-time memory that’s required to actually run the model and run the inferencing. So we’re talking very different constraints here from a system level, and therefore the AI and machine learning models that have to be trained and deployed for these kinds of applications have to work within these constraints and do all of the inferencing locally. A lot of these applications may never even connect to the cloud. I mean, a classic example I’ll give you, right? We were working with a customer that was trying to deploy machine learning into a metallurgy and mineral processing application.
Now this is multi-million-dollar metallurgy equipment that is sitting in a very remote location, often not even accessed by humans, and in some cases not even connected. There’s not even an infrastructure to connect that piece of equipment to the cloud. So we’ve been able to deploy lightweight machine learning there. A couple of examples I can think of: regression models to detect the thickness of a shield that is used to filter the ore, and that vibrates when the ore is basically shaking, right? With certain types of vibrations, that shield can be compromised. So we’ve implemented machine learning in the form of a regression model to basically detect that anomaly. Another example is where we’ve implemented a classification model to detect harmful tramp material in the overall mix. Tramp is very important to detect, because harmful tramp material can cause a lot of nightmares and disruption in mining and metallurgy overall. So being able to detect those kinds of foreign components through a classification engine, again, is all done remotely, and it’s all done at the endpoint running either on a microcontroller or a lightweight microprocessor. So these are classic examples, and there are hundreds of other examples in the industrial space, where local inferencing is the only way to practically implement machine learning, because the applications are just so time sensitive.
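The kind of endpoint inference described here can be sketched in a few lines. The following is a minimal illustration, not Renesas's actual model: the feature choices, weights, and signals are all invented, and a real deployment would learn the coefficients offline with a tool chain and run the inference in C on the microcontroller.

```python
import math

# Hand-picked coefficients standing in for a model trained offline; a
# real design would learn these with a tool chain rather than by hand.
WEIGHTS = [4.0, -2.5]   # for [rms, zero_crossing_rate]
BIAS = -1.0

def features(samples):
    """Two cheap time-domain features from one vibration window."""
    n = len(samples)
    rms = math.sqrt(sum(x * x for x in samples) / n)
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    return [rms, crossings / (n - 1)]

def classify(samples):
    """Logistic-regression inference: probability the window is anomalous."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features(samples)))
    return 1.0 / (1.0 + math.exp(-z))

# A quiet, smooth vibration window vs. a large, noisy one.
normal = [0.1 * math.sin(2 * math.pi * k / 32) for k in range(256)]
faulty = [2.0 * math.sin(2 * math.pi * k / 32)
          + (0.5 if k % 7 == 0 else -0.2) for k in range(256)]
```

On a real endpoint the same arithmetic would typically be generated as fixed-point C, but the structure, cheap feature extraction followed by a tiny dot product, is what makes it fit in kilobytes of RAM.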
ES: And not to mention the security concerns with transmitting information.
KV: Absolutely. The other inherent advantage of running inference locally is that it takes away the need for transporting data back and forth across the network. Because you have such a controlled transport and flow of data through the network, your security posture is significantly simplified. And a lot of these endpoint devices today, if you look at microcontrollers and microprocessors from Renesas, have built-in root-of-trust capabilities in hardware. So your machine learning algorithm could be tightly coupled to the root of trust in hardware, and therefore significantly reduce any threats from a malicious attacker or any sort of hacker. So not only security, but even data privacy concerns are significantly alleviated when we look at local inferencing and running things at the edge.
ES: Looking at this broadly, how do you think about AI from a systems perspective?
MD: So think about a typical IOT system, which is doing more than one thing. Think about, for example, a connected system on a factory floor that’s collecting data from, let’s say, a manufacturing line and sending it back to the central control panel. You have the interaction of human-machine interface technologies. You need to have connectivity, whether it’s wired or wireless. And security, as Kaushal also mentioned earlier. So really, when we look at a system, AI now has to take into account all of these different, diverse technologies to be able to add value and do predictions. One of the areas which is really important is power consumption. Now think about it. One of the reasons why endpoint or edge AI is accelerating at a phenomenal rate is that a lot of these devices are able to make decisions on the device itself. What this means is that they are not turning on their radios to transmit data or sending data through real-time ethernet around the factory floor. This means that the device on-time is much lower, so the overall power consumption is very low, actually creating a very sustainable AI solution. From a system perspective, if you then go deeper into the system side of AI, you’re looking at technologies in the vision space, for example: whether it’s a security system being able to detect a person and validate that it’s the right person before giving access, whether that’s through the image itself, facial recognition, or voice signatures, and then real-time analytics as well. So really, it’s a very diverse and wide range of technologies that needs to work seamlessly together in a system where AI can be applied to do prediction, improve the overall system, and give back system-level efficiencies to the segment it’s operating in.
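Mo's point about device on-time can be made concrete with back-of-envelope arithmetic. All power and timing figures below are illustrative assumptions, not measured Renesas numbers; the shape of the comparison, milliwatts times milliseconds per decision, is what matters.

```python
# Back-of-envelope energy budget: decide on-device vs. ship raw data to
# the cloud. Every figure is an illustrative assumption.

MCU_ACTIVE_MW = 30.0   # assumed MCU power while running an inference (mW)
INFER_MS = 20.0        # assumed time for one local inference (ms)

RADIO_TX_MW = 250.0    # assumed radio power while transmitting (mW)
TX_MS = 150.0          # assumed time to ship one raw sensor window (ms)

def energy_mj(power_mw, time_ms):
    """Energy in millijoules: mW x ms / 1000."""
    return power_mw * time_ms / 1000.0

local_mj = energy_mj(MCU_ACTIVE_MW, INFER_MS)   # 0.6 mJ per decision
radio_mj = energy_mj(RADIO_TX_MW, TX_MS)        # 37.5 mJ per upload
```

Under these assumed numbers, keeping the radio off makes each decision tens of times cheaper in energy, which is why duty-cycling the radio dominates battery life in this class of device.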
ES: Yeah. Efficiency really feels like the key word there, cutting down on network traffic and power consumption.
MD: Absolutely.
ES: So let’s talk a little more about what each of you are doing in your roles at Renesas, and how the organization as a whole is addressing AI as we move into the AIOT.
MD: We are really at the forefront, trying to enable our customers across industrial, consumer, infrastructure, automotive, really a very diverse set of industries that we’re operating in, providing them solutions all the way from data movement, connectivity, sensing, analog and power capability, really a complete chain of solutions that ultimately powers the edge as well. And on top of that, the real vision that Renesas has is: how do we give time back to developers so they can spend more time on their systems? One thing we also need to consider is that the embedded engineers that we are empowering [?19:21] AI may not be data scientists, and may not even be connectivity experts. So how can we enable them, through the right set of tools and the right consultancy and training, to add AI to their applications? With that in mind, we have a significant investment in software, tools, and solutions in [?] ecosystem to really accelerate AI [?] designs. We’re leading the world especially when it comes to time series or real-time analytics, where we’re basically taking high-frequency data from different sensors and are able to do prediction at the endpoint. With that, we made an acquisition of a company called Reality AI in 2022, who are pioneers when it comes to really optimized AI on the endpoint. And through this, we’re providing an automated tool chain, which is a [?20:14] ML capability, to be able to collect the data and really optimize and classify the data in a way that is able to fit within those resource-constrained devices. That really helps. For example, one of our appliance customers had a major issue where they wanted to detect an out-of-balance situation or status within a washer or a dishwasher application. Now, they were using an accelerometer in that design, a real hardware sensor, to detect the out-of-balance condition. That was adding around three dollars to their [?20:54] material.
And what we did was, we basically took that data. We looked at the current and voltage fluctuations from the motor itself using our Reality AI tool chain, and developed a model that runs at more than 95 percent accuracy doing that prediction.
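The actual Reality AI pipeline is proprietary, but the general idea of detecting drum imbalance from motor current can be sketched as a spectral feature: an out-of-balance drum modulates the load at the drum's rotation rate, which shows up as a low-frequency component alongside the mains component in the current. The bin numbers and the 0.4 modulation depth below are invented for illustration.

```python
import cmath
import math

def dft_magnitude(samples, k):
    """Magnitude of the k-th DFT bin of a real signal, normalized by N."""
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * math.pi * k * i / n)
              for i, x in enumerate(samples))
    return abs(acc) / n

def imbalance_score(current, mains_bin, wobble_bin):
    """Ratio of the drum-wobble component to the mains component."""
    return dft_magnitude(current, wobble_bin) / dft_magnitude(current, mains_bin)

N = 512
# Bin 50 stands in for the mains frequency, bin 5 for the drum-rotation
# wobble; both are invented, bin-aligned numbers for the sketch.
balanced = [math.sin(2 * math.pi * 50 * i / N) for i in range(N)]
unbalanced = [math.sin(2 * math.pi * 50 * i / N)
              + 0.4 * math.sin(2 * math.pi * 5 * i / N) for i in range(N)]

score_ok = imbalance_score(balanced, 50, 5)      # near zero
score_bad = imbalance_score(unbalanced, 50, 5)   # about 0.4
```

A classifier trained on features like this can replace the three-dollar accelerometer because the motor's current waveform already carries the mechanical information.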
ES: Wow. So without the hardware sensor at all in the picture anymore, yeah?
MD: Absolutely. You know, what we talk about at Renesas is how we can make AI a reality for customers. What we’re saying is, you don’t just add AI or ML capability on top of the hardware you have; you can actually remove hardware. So ultimately, these customers are happy. They have a significant cost reduction in their materials, and they have a truly intelligent application doing that prediction. And there are many other applications where this is really adding value. [?21:48] there was a customer recently who wanted to predict the temperature difference across the components used in battery management in a power tool. The issue is stopping the battery from discharging when it overheats during the discharge time, which means the battery has a much longer life as well. And in this case, the thermal model is critical, as it helps to protect the battery while at the same time getting an accurate current measurement. Traditionally, the customer in this case was using a simple [MATLAB] approach, which was not providing sufficient accuracy. Again, we brought in our Reality AI solution running on our 16-bit RL78 microcontrollers, and we were able to provide a very low-power solution where they can actually predict these temperature changes and really keep that power tool’s battery healthy. So yeah, it’s really amazing what’s happening in that world. So that’s just one area. And then of course we have the other areas of voice and vision. Perhaps my colleague can add more to that.
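The thermal model itself isn't public, but the principle, predicting component temperature from electrical measurements rather than a dedicated sensor, can be illustrated with an ordinary least-squares fit. The quadratic form T = a·I² + b (Joule-heating-style) and the synthetic data are assumptions for this sketch; the real model is presumably richer.

```python
# Ordinary least-squares fit of a Joule-heating-style thermal model
# T = a*I^2 + b. The data is synthetic; a real design would use logged
# current/temperature pairs from the battery pack.

def fit_thermal(currents, temps):
    """Least squares for T = a*I^2 + b over paired samples."""
    xs = [i * i for i in currents]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(temps) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, temps))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Synthetic samples generated from a=2.0, b=25.0 (25 C ambient).
currents = [0.5, 1.0, 1.5, 2.0, 2.5]
temps = [2.0 * i * i + 25.0 for i in currents]

a, b = fit_thermal(currents, temps)
predicted = a * 1.8 ** 2 + b   # estimated temperature at 1.8 A
```

Once fitted offline, the deployed inference is just one multiply-add per reading, which is why it fits comfortably on a 16-bit microcontroller.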
KV: Yeah. To expand beyond the real-time analytics and time series applications, Renesas has made significant investments in the areas of computer vision and voice as well. If you look at the computer vision space, most of the applications rely on complex deep learning models, and they require very intensive data sets for the models to be trained for commercial applications. A lot of our customers today struggle with that as a design challenge: first of all, where do we get the data set from, and secondly, it’s a very compute-intensive process to train those models to meet certain performance criteria. So the approach that we’re taking at Renesas is, we’re building a library of pretrained models. These are models that have been trained to, say, 80, 90 percent accuracy. You can then take these models and use them as a foundation to retrain on incremental data. And this is where, if you go to renesas.com/ai, you will see a library of 30-plus pretrained models that cover a range of different computer vision applications, and Renesas continues to invest and grow that library of models. On the voice side, we have applications all the way from voice command recognition that runs on, I would say, resource-constrained 16- and 32-bit microcontrollers, all the way to natural language understanding applications that are running on slightly higher-end microcontrollers and microprocessors. And we’re seeing tremendous traction for voice being used as a human-to-machine interface in a broad spectrum of applications. And COVID accelerated that trend, because people are now reluctant to touch things in public access spaces, and voice just seems to be that natural medium for controlling something.
So across the broad spectrum of AI segments, whether it’s real time analytics or vision or voice, Renesas has taken a very holistic approach of building relevant tools, building a strong set of application libraries and reference designs, and then also building support models to make sure that our customers are successful, and that they are getting started and working with us and being able to successfully deploy AI.
ES: So talk to us a little more about bridging this divide between the AI domain and the developer domain.
MD: That’s really where the rubber meets the road, right? So we have engineers who are experts in [rhythmatics] and in developing these complex embedded systems, but they may not have the time or the expertise on the data collection side, for example, or in how to build those models. And with that in mind, remember what I said about Renesas wanting to give time back to those developers. We have made a huge step forward in bringing the AI domain and the embedded domain together. If you think about it today, a customer has to develop an AI model using some sort of tool on one screen or one PC, and he has to do his embedded development separately, whether it happens to be for a healthcare product or a [?26:10] product or an industrial product, when he’s developing that code. How do we bring the two worlds together, right? So with that in mind, what we have done with our Reality AI tools is a workflow integration with our [?] studio, which is our [?] development environment. This enables the designer to seamlessly share their data, projects, and AI code modules between the two projects. What [?] done is that basically, in the [?] studio, you create your project, configure the support package, and collect the data through the data collection module, and then put it through the Reality AI tools, where you can train the model and optimize it. Remember what we said about constrained devices: you make it really optimized and efficient, and then export that inference
code into an embedded project through a context-aware API. You then add it as a C file that goes into the embedded project, and then you can actually develop the code and deploy it into an end application. What this really means ultimately is that you have a faster design cycle for AI applications at the endpoint and for IOT networks. And we are providing a lot of support with this, application notes and training modules, to get those embedded engineers developing AI models seamlessly and quickly.
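Reality AI's actual export format isn't shown in the discussion, but the "export the inference as a C file" step generally amounts to serializing trained coefficients into a compilable source file. The sketch below is hypothetical: the function and symbol names are invented, and a real exporter would also emit the inference routine itself, not just the constants.

```python
# Hypothetical exporter: serialize trained model coefficients into a C
# source file that an embedded project can compile in. The array and
# symbol names here are invented for illustration.

def export_as_c(weights, bias, name="anomaly_model"):
    """Render a weight vector and bias as C constant definitions."""
    coeffs = ", ".join(f"{w:.6f}f" for w in weights)
    return (
        "/* Auto-generated model coefficients -- do not edit by hand */\n"
        f"const float {name}_weights[{len(weights)}] = {{ {coeffs} }};\n"
        f"const float {name}_bias = {bias:.6f}f;\n"
    )

c_source = export_as_c([4.0, -2.5], -1.0)
```

Generating constants as plain C keeps the embedded build self-contained: the firmware needs no runtime interpreter, only a compiled dot product over the emitted arrays.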
ES: You’re enabling your customers, it sounds to me, to focus on what they know best, and you’re providing these tools that are just so completely out of the normal expertise of these organizations. That’s got to be tremendously valuable.
MD: Yeah, absolutely. I think, just to add to this, it’s all about creating an opportunity for a more sustainable future as well. AI’s great, whether it’s generative AI or predictive AI, but we have to make sure it’s sustainable, and that it’s able to add real value to consumers and to developers of those embedded products. Ultimately, it has to be good for the wider world and for humanity. And I think that’s what it really all boils down to. And that’s the core of Renesas, really making this world smarter and more efficient for a more sustainable future.
ES: Exciting stuff for the folks working in that space to get to have such a powerful set of tools at their disposal. With that, I think we are out of time. And I just want to thank both of you so much for joining us today, sharing your insights on the industry at large, as well as clueing us all in to some of the tools that Renesas is making available to the marketplace. Thank you, Mo, for being here.
MD: Thank you, appreciate it.