Renesas

Hi @Diogenese
When you have a chance, have a look at these people, who came up in one of the Renesas links, and see if you can work out whether they could run their algorithm(s) on AKIDA nodes licensed by Renesas:


I am assuming AKIDA could take their CNN, convert it to an SNN and take advantage of sparsity etc.; but maybe I am completely off track with this one. Thanks in advance.
FF
 

Diogenese

Top 20
Hi FF,

I didn't find any reference to CNN in the website or in their patent application.

Their invention does relate to updating the fault profile on a plurality of machine-fault-detectors with individualized data using a central server database which is continually updated with data from the detectors.


US2020393818A1 System and Method for Predicting Industrial Equipment Motor Behavior



Prior Art Problem
The machine learning algorithm is deployed on the server device where the machine learning model is trained through the ingestion and processing of data. Once the training of the machine learning model is complete on the server side, the model is deployed to edge devices. After deployment of the machine learning model to the edge devices, no further changes are made to the machine learning model itself in the edge devices even though the edge device continues to gather copious amounts of data. This is commonly known as the train-freeze-deploy model.
...
The Invention
The system and method employ artificial intelligence to train machine learning models to predict failure mechanisms within a machine. The system may use a server to train machine learning models to predict machine failure, using performance indicator data. The system may deploy the trained machine learning models onto a one board computer (processor) where the models use operation data to predict motor failure. The models can be further trained using the operation data and continue to improve their machine failure prediction methods as a result of their evolutionary training.

The training (learning) is done at the central server and then distributed to the detectors. Akida does its own learning.

So it's "either/or".

That said, Akida's on-chip learning can be distributed to other Akidas via a central server, but that's a horse of a different colour.
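
To make the contrast concrete, here's a toy sketch (my illustration, not the patent's actual algorithm): under train-freeze-deploy the only learning is the server-side fit, while the patent's contribution is the extra on-device update step that keeps nudging the shipped weights with local operation data.

```python
import numpy as np

TRUE_W = np.array([1.0, -2.0, 0.5, 3.0])  # hidden "machine behaviour" to learn

def server_train(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Server-side training: a toy least-squares fit standing in for real ML
    training. Under train-freeze-deploy this is the ONLY learning that ever
    happens; the weights then ship frozen to every edge detector."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def edge_update(w: np.ndarray, x: np.ndarray, y: float, lr: float = 0.01) -> np.ndarray:
    """What the patent adds: one stochastic-gradient step on locally observed
    operation data, so each detector keeps adapting after deployment."""
    err = float(x @ w) - y
    return w - lr * err * x

X_hist = np.random.rand(200, 4)            # historical performance-indicator data
w = server_train(X_hist, X_hist @ TRUE_W)  # trained centrally, then deployed

x_new = np.random.rand(4)                  # operation data seen only at the edge
w = edge_update(w, x_new, float(x_new @ TRUE_W))
```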
 
Thanks for that. The way they describe what they are doing on their website is very deceptive. Not cricket as it used to be played in the Don’s day. 😎 FF
 
I'll have a few other posts on Renesas that made for interesting reading.

Probs wrong, but what I found odd is down the bottom of the page, where you can click on partner ecosystem links (highlighted) to get to the partners; however, for the RA Family, which is Arm-based (like something else we know), you actually have to log in for that one :unsure:

The RA Family is to do with IoT devices, and if you click on the product link you can see a pic of the partner ecosystem, though I suspect that pic may be outdated; otherwise, why not show it like all the other product partner links?

IoT page at the bottom of the post.



[Attached screenshots: Renesas e-AI pages 1–7 and the Renesas e-AI RA Arm family page]
 
Renesas blog post on the latest version of the e-AI Translator, covering 8-bit quantization with TensorFlow Lite.



e-AI Translator V2.1.0 Released for TensorFlow Lite: 8-bit Quantization Reduces ROM/RAM Usage


Toshiyuki Syo
Principal Engineer



What is e-AI Translator?​

e-AI Translator is a tool that converts trained AI models created with open-source AI frameworks into C source code dedicated to inference.
The latest version, V2.1.0, was released on September 30, 2021.
In this blog, I will introduce the new feature of V2.1.0: 8-bit quantization through TensorFlow Lite support.

Bottleneck when processing AI on the MCU: memory resources​

There is a growing need not only to acquire data but also to make judgments with AI algorithms on systems built from MCUs and sensors.
If real-time judgment is required, the MCU is the best platform for running AI.
On the other hand, the MCU's limited memory resources are an obstacle to using AI algorithms.
An AI algorithm is represented by a polynomial with many variables, so compared with conventional algorithms it has been difficult to fit the many AI parameters into the MCU's small memory.
In recent years, quantization of AI algorithms with TensorFlow Lite has become a widespread solution to this memory resource problem.
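
As a concrete illustration (mine, not the blog's), this is roughly what standard TensorFlow Lite post-training int8 quantization looks like; the toy model and the representative_data_gen calibration generator are placeholders you would replace with your own:

```python
import numpy as np
import tensorflow as tf

# Toy model standing in for a trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

def representative_data_gen():
    # A few samples of realistic input, used to calibrate the int8 ranges.
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_int8 = converter.convert()  # full-int8 flatbuffer, ready for an MCU toolchain
open("model_int8.tflite", "wb").write(tflite_int8)
```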

ROM/RAM usage reduction effect of TensorFlow Lite 8-bit quantization

First, consider how much TensorFlow Lite 8-bit quantization reduces an MCU's ROM/RAM usage.
As the table below shows, the reduction varies with the structure of the model, but the maximum ROM/RAM reduction is 4.5x.
[Table: ROM/RAM usage reduction by 8-bit quantization. Source: survey using a Renesas-owned model]

Mechanism of 8-bit quantization and influence on inference accuracy​

8-bit quantization converts the parameters and operations used in an AI model from 32-bit float format to 8-bit integer format.
This reduces the ROM needed to store parameters and the RAM required for computation.
On the other hand, there is a concern that inference accuracy will decrease, because fewer bits mean less expressive parameters and operations.
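
To make the mechanism concrete (my sketch, not the blog's), the usual affine scheme maps a float x to an int8 value q via q = round(x / scale) + zero_point, and dequantizes back to an approximation:

```python
import numpy as np

x = np.random.uniform(-4.0, 6.0, size=1000).astype(np.float32)

# Affine (asymmetric) quantization parameters derived from the observed range.
qmin, qmax = -128, 127
scale = (x.max() - x.min()) / (qmax - qmin)
zero_point = int(round(qmin - x.min() / scale))

q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
x_hat = (q.astype(np.float32) - zero_point) * scale  # dequantized approximation

print("bytes float32:", x.nbytes, "bytes int8:", q.nbytes)  # 4000 vs 1000: 4x smaller
print("max abs error:", np.abs(x - x_hat).max())            # bounded by ~scale/2
```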
So how much does inference accuracy actually decrease?
As the table below shows, it differs by model structure, but even MobileNet v1, the model with the largest accuracy gap, loses only about 1% of inference accuracy.
[Table: inference accuracy of float vs. 8-bit quantized models. Source: quoted from a blog post about TensorFlow Lite]

8-bit quantization, with its large ROM/RAM savings and minimal loss of accuracy, is an ideal function for MCUs.
As the figure below shows, 8-bit quantization lets a wide range of networks run on MCUs.
Please take advantage of this tool.
[Figure: 8-bit quantized models that can be executed by each MCU]

Tool download link​

APPLICATIONS > Technologies > Embedded Artificial Intelligence (e-AI) > e-AI Development Environment & Downloads

e-AI Home Page​

https://www.renesas.com/e-AI
 
Flyer from last year on moving away from cloud dependency.


 

Attachments

  • Renesas - r01pm0066eu0200-e-ai Flyer.pdf
    1.3 MB
Post from late last year below.

What I liked were the comments I highlighted and bolded in red under "AIoT is Moving Out".

:unsure::)



Where Edge and Endpoint AI Meet the Cloud​

By Dr. Sailesh Chittipeddi

EXECUTIVE VICE PRESIDENT AND GENERAL MANAGER OF THE IOT AND INFRASTRUCTURE BUSINESS UNIT
RENESAS ELECTRONICS CORPORATION
September 22, 2021


The COVID-19 pandemic created new health and safety requirements that transformed how people interact with each other and their direct environments. The skyrocketing demand for touch-free experiences has in turn accelerated the move toward AI-powered systems and voice-based control and other contactless user interfaces – pushing intelligence closer and closer to the endpoint.​

One of the most important trends in the electronics industry today is the incorporation of AI into embedded devices, particularly AI interpreting sensor data such as images and machine learning for alternative user interfaces such as voice.

Embedded Artificial Intelligence of Things (AIoT) is the key to unlocking the seamless, hands-free experience that will help keep users safe in a post-Covid environment. Consider the possibilities: Smart shopping carts that allow you to scan your goods as you drop them in your cart and use mobile payments to bypass the checkout counter, or intelligent video conferencing systems that automatically recognize and switch focus on different speakers during meetings to provide a more ‘in-person’ experience for remote teams.

Why is now the time for an embedded AIoT breakthrough?

AIoT is Moving Out​

Initially, AI sat in the cloud, where it took advantage of computational power, memory, and storage scalability levels that the edge and endpoint just could not match. However, more and more, we are seeing not only machine learning training algorithms move out toward the edge of the network, but also a shift from deep learning training to deep learning inference.

Where “training” typically sits in the network core, “inference” now lives at the endpoint where developers can access AI analytics in real time and then optimize device performance, rather than sifting through the device-to-cloud-to-device loop.

Today, most of the inference process runs at the CPU level. However, this is shifting to a chip architecture that integrates more AI acceleration on chip. Efficient AI inference demands efficient endpoints that can infer, pre-process, and filter data in real time. Embedding AI at the chip level, integrating neural processing and hardware accelerators, and pairing embedded-AI chips with special-purpose processors designed specifically for deep learning, offer developers a trifecta of the performance, bandwidth, and real-time responsiveness needed for next-generation connected systems.
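
As a rough sketch of what that endpoint inference loop looks like in practice (my illustration, not Renesas code; the model filename is a hypothetical int8 TFLite file), the lightweight tflite_runtime interpreter is a common choice on edge Linux devices:

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # pip install tflite-runtime

interpreter = tflite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def infer(sample: np.ndarray) -> np.ndarray:
    # Quantize the float sample into the model's int8 input representation.
    scale, zero_point = inp["quantization"]
    q = np.round(sample / scale + zero_point).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], q)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# Endpoint loop: sense -> infer -> act locally, no device-to-cloud round trip.
reading = np.random.rand(1, 32).astype(np.float32)  # stand-in for sensor data
print(infer(reading))
```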


Figure 1 (Source: Renesas Electronics)

An AIoT Future: At Home and the Workplace​

In addition, a convergence of advancements around AI accelerators, adaptive and predictive control, and hardware and software for voice and vision opens up new user interface capabilities for a wide range of smart devices.

For example, voice activation is quickly becoming the preferred user interface for always-on connected systems for both industrial and consumer markets. We have seen the accessibility advantages that voice-control based systems offer for users navigating visual or other physical disabilities, using spoken commands to activate and achieve tasks. With the rising demand for touchless control as a health and safety countermeasure in shared spaces like kitchens, workspaces, and factory floors, voice recognition – combined with a variety of wireless connectivity options – will bring seamless, non-contact experiences into the home and workspace.

Multimodal architectures offer another path for AIoT. Using multiple input information streams improves safety and ease of use for AI-based systems. For example, a voice + vision processing combination is particularly well suited for hands-free AI-based vision systems. Voice recognition activates object and facial recognition for critical vision-based tasks for applications like smart surveillance or hands-free video conferencing systems. Vision AI recognition then jumps in to track operator behavior, control operations, or manage error or risk detection.

On factory and warehouse floors, multimodal AI powers collaborative robots – or CoBots – as part of the technology grouping serving as the five senses that allow CoBots to safely perform tasks side-by-side with their human counterparts. Voice + gesture recognition allows the two groups to communicate in their shared workspace.

What’s on the Horizon?​

According to IDC Research, there will be 55 billion connected devices worldwide generating 73 zettabytes of data by 2025, and edge AI chips are set to outpace cloud AI chips as deep learning inference continues to relocate out to the edge and device endpoints. This integrated AI will be the foundation that powers a complex combination of “sense” technologies to create smart applications with more natural, “human-like” communication and interaction.


Dr. Sailesh Chittipeddi is the Executive Vice President and General Manager of the IoT and Infrastructure Business Unit at Renesas.
 
Recent article outlining Renesas "honing in on AI" with their new PMIC.

Highlighted red down bottom of article & last line.

Getting closer...n...closer hopefully.




A Glimpse at a New Wave of Smaller PMICs: AI, Automotive, Wearables, and More​

February 26, 2022 by Antonio Anzaldua Jr.

Power management ICs (PMICs) are continually scaling down. Here's how a few companies are packing functionality in these devices in the face of miniaturization.​


By 2027, the market for power management integrated circuits (PMICs) is expected to grow to $51.04 billion, making the most impact in industries including automotive, consumer, industrial, and telecommunications.
PMICs control voltage, current, and heat dissipation in a system, so the device can function efficiently without consuming too much power. The PMIC manages battery charging and monitors sleep modes, DC-to-DC conversions, and voltage and current setting adjustments in real-time.

Renesas' new PMIC complements two of its microprocessors. Image used courtesy of Renesas

In the past six months, a number of developers have designed compact single-chip PMICs to meet the growing demand for smaller, more power-efficient electronics. Here are a few PMICs designed with automotive, wearables/hearables, and artificial intelligence (AI) in mind.


Maxim/ADI's PMIC Integrates Switching Charger for Wearables​

Maxim Integrated (now part of Analog Devices) is known for its analog and mixed-signal integrated circuits for the automotive, industrial, communications, consumer, and computing markets. Maxim has an established portfolio of PMICs for both high-performing and low-power solutions.

Maxim Integrated’s MAX77659 SIMO PMIC comes with a built-in 300 mA switching charger to help extend battery lifespan. Image used courtesy of Maxim Integrated


The high-performance PMICs are said to maximize tasks per watt while increasing system efficiency for complex systems-on-chip (SoCs), FPGAs, and application processors. Maxim says the low-power PMICs pack multiple power rails and power management features in a small footprint.
The company recently released a tiny PMIC designed to charge wearables and listening devices four times faster than conventional chargers. This device, the MAX77659, is a single-inductor multiple-output (SIMO) PMIC equipped with Analog Devices’ switch-mode buck-boost charger. Geared for wearables, hearables, and IoT designs, this device is said to provide over four hours of battery life with one 10-minute charge.
This PMIC includes low-dropout (LDO) regulators that provide noise mitigation and help derive voltage from the battery when lighter loads are occurring. Like a load switch, these LDOs disconnect external blocks not in use to decrease power consumption. Two GPIOs and an analog multiplexer are incorporated in the MAX77659 to allow the PMIC to switch between several internal voltages and current signals to an external node for monitoring.

ROHM’s PMIC Aims to Improve ADAS Camera Modules​

ROHM Semiconductor believes it is addressing the demand for smaller, more compact PMICs in the automotive realm. Now, the company is targeting this market with PMIC solutions for tiny satellite camera modules. To do this, the company plans on combining its existing SerDes ICs with a new PMIC.
This combo was designed to solve issues revolving around the compact footprints and low power consumption of new camera modules. It also features a new element: low electromagnetic interference (EMI) cancellation. These cameras assist with object detection to ensure the driver’s safety from potential risks around the vehicle.

Through a single IC, the voltage settings and sequence controls can be performed to reduce the mounting area by 41%, according to ROHM. Image used courtesy of ROHM Semiconductor


ROHM says the SerDes IC optimizes the transmission rate based on video resolution and reduces power consumption by approximately 27%. The device is also equipped with a spread-spectrum function that mitigates noise by nearly 20 dB while improving the reliability of images taken by the ADAS cameras.
The PMIC is designed to manage the power supply systems of any CMOS image sensor. Using a camera-specific PMIC can help deliver high conversion efficiency that leads to low power consumption. The SerDes IC dissipates heat concentration with an external LDO to supply power to the CMOS image sensor. The close proximity of the image sensor and the LDO decreases the disturbance noise in the power supply.

Renesas Hones in on AI With MPU-focused PMIC

Renesas has also announced a new PMIC in the past few months—this one for MPUs. The company says the new PMIC, RAA215300, complements two of Renesas’ microprocessors (MPUs) (32-bit and 64-bit) built for AI applications.
The new RAA215300 PMIC enables four-layer PCBs, which is a cost-effective route for developers to take. This higher level of integration also increases system reliability because it doesn't need as many external components on the board.


A stacked solution for MPUs and FPGAs with six high-efficiency buck regulators and three LDOs to ensure complete system power. Image used courtesy of Renesas


Key features for Renesas’ PMIC solution are a built-in power sequencer, a real-time clock, a super cap charger, and an ultrasonic mode for eliminating unwanted audible noise interference found in microphones or speakers. A feature worth highlighting is the fact that this PMIC can support different memory power requirements, ranging from DDR3 to LPDDR4.

A PMIC for Any Application​

Some characteristics of PMICs are valuable across the board—for instance, the ability to add separate input current limits and battery current settings. These features simplify monitoring since the current in the charger is independently regulated and not shared among the other system loads.
Each of these PMICs from Maxim, ROHM, and Renesas illustrates the many different use cases for which PMICs may aid designers.
Maxim Integrated's PMIC incorporates a smart power selector charging component for addressing smaller wearable technologies. This SIMO PMIC also has GPIOs that can communicate with the CPU using signals received from switches or sensors. ROHM targets automotive applications. With an increasing demand for solutions that occupy less space within the ADAS system, ROHM’s PMIC may be of use in small cameras. Lastly, Renesas’ PMIC was built for AI and MPUs. This device is essentially built like a microscopic tank. The 9-channel PMIC holds a larger footprint at 8 x 8 mm; however, it is equipped with DDR memory support, a built-in charger, and an RTC, along with six buck regulators and three LDOs.
 


M_C

Founding Member

The Indian design and tech service provider Tata Elxsi aims to build an EV ecosystem in India with the Japanese semiconductor company Renesas. Toward this end, the two companies recently announced plans to set up an innovation center focused on automobile electrification products.

Tata Elxsi has been working with Renesas for over a decade, mainly focusing on the console safety side, the MCAL, the maintenance of MCAL, etc. Speaking to Digitimes Asia recently, Shaju S, VP and head of transportation business unit at Tata Elxsi, explained that the latest initiative took shape from the current global interest in EVs.

"If you look at the market trend today, there is huge investment happening in electrification and a marked push from almost all the governments across the globe to encourage it," Shaju said. "Looking at the kind of investments happening and the way the market is shaping it, we decided it is high time to get into this segment. But rather than do it on our own alone, we decided that it is worthwhile to have a strong partner who can come along and fill some of the gaps that we may have."
 



Makeme 2020

Regular
I subscribed to Renesas a few months ago; this is an email I received from them today.
Imagimob AI Development Platform Simplifies Machine Learning​
Building and deploying Edge AI and machine learning can be very complex and time consuming, but it doesn't need to be. Imagimob can help streamline the development process with simple, easy‑to‑use AI tinyML end‑to‑end platform solutions on edge devices.

Key Features:
  • Collect and annotate high quality data from edge devices
  • Manage data into datasets
  • Build AI models quickly and easily with AutoML
  • Evaluate, verify, and find the best model

Watch the webinar to learn how simple it can be to deploy a touchpad letter writing application using the KT‑CAP1‑MATRIXPAD and the RA2L1 ultra‑low‑power MCU.

To browse all of the RA Family partner solutions, visit renesas.com/ra-partners.​

Imagimob AI - Development Platform for Machine Learning on Edge devices​

AI​

Imagimob AI is a development platform for machine learning on edge devices. It allows users to go from data collection to deployment on an edge device in minutes.

Edge​

Imagimob Edge helps users convert machine learning models to optimised C code at the click of a button, saving months of development time.


Build production grade ML applications for...​

Audio classification - Classify sound events, spot keywords, and recognize your sound environment.
Predictive maintenance - Recognize machine state, detect machine anomalies and act in milliseconds, on device.
Gesture recognition - Detect hand gestures using low-power radars, capacitive touch sensors or accelerometers.
Signal classification - Recognize repeatable signal patterns from any sensor.
Fall detection - Fall detection using IMUs or a single accelerometer.
Material detection - Real-time material detection using low-power radars.

Without any data ever leaving the device without your permission.


Imagimob AI covers the entire machine learning workflow for embedded devices​

1. Collect and annotate high quality data
2. Manage, analyze and process your data
3. Build great models without being an ML expert
4. Evaluate, verify and select the best models
5. Quickly deploy your models on your target hardware


LATEST NEWS

03/23/22
Imagimob Selected as Renesas’ “Partner of the Month” to boost tinyML
Renesas have selected Imagimob as “Partner of the Month” within the Renesas Ready Partner Program. Renesas and Imagimob have worked together...

02/25/22
Imagimob AI the first tinyML platform to support deep learning anomaly detection
Imagimob today announced that its new release of the tinyML platform Imagimob AI supports end-to-end development of deep learning anomaly de...

02/13/22
Imagimob to attend tinyML Summit 2022

01/28/22
Getting started with ML/AI with Imagimob AI and IA...

12/14/21
Imagimob tinyML platform supports quantization of ...

BLOG

03/10/22
Imagimob to present at Arm AI Tech Talks on April 5th
Johan Malm, PhD and product owner at Imagimob, will present at the Arm AI Tech Talks on April 5th. "Quantization of neural networks is crucial to get good performance on MCUs without FPU, e.g., on Arm M0-M3 cores. LSTM layers are especially difficult ...

FEED

03/05/22 - The Future is Touchless: Radical Gesture Control P...
01/31/22 - Quantization of LSTM layers - a Technical White Pa...
12/02/21 - Imagimob @ CES 2022
11/25/21 - Imagimob AI in Agritech

ABOUT US​


Imagimob is a fast growing startup driving innovation at the forefront of Edge AI and tinyML—and enabling the intelligent products of the future. Based in Stockholm, Sweden, the company has been serving global customers within the automotive, manufacturing, healthcare, and lifestyle industries since 2013. In 2020, Imagimob launched their SaaS Imagimob AI for the swift and easy end-to-end development of Edge AI applications for devices with constrained resources. Imagimob AI guides and empowers users throughout the entire development journey, resulting in game-changing productivity and faster time-to-market. Tirelessly dedicated to staying on top of the latest research, the experienced team behind Imagimob is always thinking new, and thinking big.

SOME OF OUR PEOPLE

Anders Hardebring - CEO and Founder
Alexander Samuelsson - CTO and Founder
Åke Wernelind - Business Development, Phone: +46 70 370 97 24
Alina Enciu - TinyML Solutions Manager, Phone: +46 72 031 75 20
 

Learning

Learning to the Top 🕵‍♂️
I find this article very interesting as it talks about new drink-driving tech that will soon be a fixture in all cars (Driver Monitoring Assistance System, DMAS).


Continental are in this space (Driver Monitoring System), where they incorporate the Renesas R-Car V3M into their system.


All that indicates to me (my opinion only) that Akida is a perfect fit for the DMAS once Renesas fully incorporates Akida into their SoCs.

So beware, future intoxicated drivers. (No more drink driving, lol)

It's great to be a shareholder.
 

Makeme 2020

Regular

Irida Labs Partners with Renesas Electronics for AI Vision Sensor​

March 18, 2022
The new Vision AI Sensor is intended for smart cities and spaces.



Working closely with the Renesas team, Irida Labs has integrated its PerCV.ai platform with Renesas’s RZ/V2L family to jointly release the Vision AI Sensor for Smart Cities & Spaces.
This Vision AI Sensor for Smart Cities & Spaces is a plug-and-play Edge AI hardware and software solution that drives the new era in urban area management. Based on the core functionalities of accurate, Vision AI-powered vehicle, object, and citizen detection, the sensor comes with the PerCV.ai intuitive dashboard for data visualization, AI analytics, or connectivity to third-party apps.

Among the supported Vision AI-powered applications are 24/7 traffic monitoring, smart parking space occupancy, vehicle detection and classification, LPR, parking lot zone management as well as pedestrian flow monitoring while safeguarding citizen safety, quality of life, and accessibility to public spaces in full anonymity and privacy.


VISION AI at the EDGE turns a camera into a vision sensor that does all the AI processing at the endpoint using RZ/V2L. No cloud is necessary. Its compact size allows easy integration with an existing software stack via the industry-standard MQTT protocol. 24/7 analytics allow fully automated, unsupervised, and privacy-preserving traffic and pedestrian flow monitoring and analytics around the clock.
Applications include smart cities and spaces operations and analytics, smart grid/energy saving, smart lighting, intelligent transportation, citizen safety, and citizen quality of life & accessibility.
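
Since the sensor publishes its analytics over MQTT, downstream integration amounts to subscribing to its event stream. Here's a minimal subscriber sketch using the paho-mqtt client (the broker address and topic name are hypothetical placeholders, not Irida Labs' actual names):

```python
import json
import paho.mqtt.client as mqtt  # pip install paho-mqtt (1.x callback API shown)

def on_connect(client, userdata, flags, rc):
    print("connected, rc =", rc)
    client.subscribe("city/camera1/events/#")  # hypothetical topic name

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    # React to a detection event published by the vision sensor,
    # e.g. update a parking-occupancy dashboard or log a traffic count.
    print(msg.topic, event)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("192.168.1.50", 1883, keepalive=60)  # hypothetical broker address
client.loop_forever()
```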
 

Not sure what your opinion is, but to me it doesn’t seem like we are involved?
 

Makeme 2020

Regular
Dedicated RZ/V Microprocessors (MPUs) to Implement Power‑Efficient Vision AI​

Experience best‑in‑class thermal performance in Vision‑based AI applications with RZ/V2L and RZ/V2M MPUs. Now, you can implement Vision AI at the endpoint with highly power‑efficient, real‑time Vision AI performance at a lower cost – Renesas’ original DRP‑AI accelerator embedded in these MPUs delivers high‑speed AI inference at low power consumption, eliminating the need for heat sinks and cooling fans, thanks to the excellent processing performance per unit power.

Highest power efficiency is realized with the combination of DRP‑AI hardware and the DRP‑AI Translator tool. Our complimentary DRP‑AI Translator software tool supporting ONNX can be used easily to convert AI models to an executable format. Programmable DRP‑AI architecture allows scaling of more advanced Vision AI models with the same RZ/V MPUs resulting in lower development cost and faster time‑to‑market for Vision AI products.

Choose between simple ISP functions, provided by RZ/V2L, for more flexibility in developing Vision AI systems and hardware ISP functions, equipped in RZ/V2M, that are tuned in advance by experts to provide the best image quality.

Let your creativity flow with the winning combination solutions – HMI SoM with AI Accelerator and AI Camera & Voice Recognition Solution – for your next Vision AI design.​
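
Per the email above, the DRP‑AI Translator takes ONNX as its input format. As a rough illustration (my sketch, not Renesas documentation), exporting a trained PyTorch vision model to ONNX for such a toolchain looks like this:

```python
import torch
import torchvision.models as models

# Stand-in for a trained vision model destined for an ONNX-consuming toolchain.
model = models.mobilenet_v2(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # NCHW input the exporter traces with

torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",  # hypothetical output filename
    input_names=["input"],
    output_names=["output"],
    opset_version=11,     # a conservative opset for embedded toolchains
)
```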
 