BRN Discussion Ongoing

Tothemoon24

Top 20

2023 Edge AI Technology Report​

The guide to understanding the state of the art in hardware & software in Edge AI.​


Edge AI, empowered by the recent advancements in Artificial Intelligence, is driving significant shifts in today's technology landscape. By enabling computation near the data source, Edge AI enhances responsiveness, boosts security and privacy, promotes scalability, enables distributed computing, and improves cost efficiency.
Wevolver has partnered with industry experts, researchers, and tech providers to create a detailed report on the current state of Edge AI. This document covers its technical aspects, applications, challenges, and future trends. It merges practical and technical insights from industry professionals, helping readers understand and navigate the evolving Edge AI landscape.



Report Introduction​

The advent of Artificial Intelligence (AI) over recent years has truly revolutionized our industries and personal lives, offering unprecedented opportunities and capabilities. However, while cloud-based processing and cloud AI took off in the past decade, we have come to experience issues such as latency, bandwidth constraints, and security and privacy concerns, to name a few. That is where the emergence of Edge AI became extremely valuable and transformed the AI landscape.

Edge AI represents a paradigm shift in AI deployment, bringing computational power closer to the data source. It allows for on-device data processing and enables real-time, context-aware decision-making. Instead of relying on cloud-based processing, Edge AI utilizes edge devices such as sensors, cameras, smartphones, and other compact devices to perform AI computations on the device itself. Such an approach offers a multitude of advantages, including reduced latency, improved bandwidth efficiency, enhanced data privacy, and increased reliability in scenarios with limited or intermittent connectivity.

“Even with ubiquitous 5G, connectivity to the cloud isn’t guaranteed, and bandwidth isn’t assured in every case. The move to AIoT increasingly needs that intelligence and computational power at the edge.”
- Nandan Nayampally, CMO, BrainChip
While Cloud AI predominantly performs data processing and analysis in remote servers, Edge AI focuses on enabling AI capabilities directly on the devices. The key distinction here lies in the processing location and the nature of the data being processed. Cloud AI is suitable for processing-intensive applications that can tolerate latency, while Edge AI excels in time-sensitive scenarios where real-time processing is essential. By deploying AI models directly on edge devices, Edge AI minimizes the reliance on cloud connectivity, enabling localized decision-making and response.
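
To make that distinction concrete, here is a minimal sketch of what localized decision-making looks like in code: the model runs entirely on the device through the TensorFlow Lite runtime, with no round trip to a cloud server. The model file name and the 96x96 grayscale input are assumptions for illustration, not details from the report.

```python
# Minimal on-device inference sketch. The model file and input shape are
# assumed for illustration; any edge-ready model follows the same pattern.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight edge runtime

interpreter = Interpreter(model_path="person_detect.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A frame captured locally, e.g. from an on-board camera sensor.
frame = np.zeros((1, 96, 96, 1), dtype=np.uint8)

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                        # inference happens on the device
scores = interpreter.get_tensor(out["index"])
print("local decision:", scores.argmax())   # no cloud round trip required
```

Because the loop never leaves the device, latency is bounded by local compute rather than network conditions, which is exactly the trade-off described above.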

The Edge encompasses the entire spectrum from data centers to IoT endpoints. This includes the data center edge, network edge, embedded edge, and on-prem edge, each with its own use cases. The compute requirements essentially determine where a particular application falls on the spectrum, ranging from data-center edge solutions to small sensors embedded in devices like automobile tires. Vibration-related applications would be positioned towards one end of the spectrum, often implemented on microcontrollers, while more complex video analysis tasks might be closer to the other end, sometimes on more powerful microprocessors.

“Applications are gradually moving towards the edge as these edge platforms enhance their compute power.”
- Ian Bratt, Fellow and Senior Director of Technology, Arm
When it comes to Edge AI, the focus is primarily on sensing systems. This includes camera-based systems, audio sensors, and applications like traffic monitoring in smart cities. Edge AI essentially functions as an extensive sensory system, continuously monitoring and interpreting events in the world. In an integrated-technology approach, the collected information can then be sent to the cloud for further processing.

Edge AI shines in applications where rapid decision-making and immediate response to time-sensitive data are required. For instance, in autonomous driving, Edge AI empowers vehicles to process sensor data onboard and make split-second decisions to ensure safe navigation. Similarly, in healthcare, Edge AI enables real-time patient monitoring, detecting anomalies, and facilitating immediate interventions. The ability to process and analyze data locally empowers healthcare professionals to deliver timely and life-saving interventions.

Edge AI application areas can be distinguished based on specific requirements such as power sensitivity, size limitations, weight constraints, and heat dissipation. Power sensitivity is a crucial consideration, as edge devices are often low-power devices used in smartphones, wearables, or Internet of Things (IoT) systems. AI models deployed on these devices must be optimized for efficient power consumption to preserve battery life and prolong operational duration.

Size limitations and weight constraints also play a significant role in distinguishing Edge AI application areas. Edge devices are typically compact and portable, making it essential for AI models to be lightweight and space-efficient. This consideration is particularly relevant when integrating edge devices into drones, robotics, or wearable devices, where size and weight directly impact performance and usability.
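
One common way to meet both the power and footprint constraints just described is post-training quantization. The sketch below uses the TensorFlow Lite converter's standard optimization flag to produce a smaller integer model; the tiny Keras network is a stand-in for illustration, not a model from the report.

```python
# Post-training quantization sketch: converts a float32 Keras model into a
# smaller TFLite model, cutting size and per-inference energy on the edge.
# The network below is a stand-in; any trained model converts the same way.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables quantization
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
print(f"quantized model size: {len(tflite_model)} bytes")
```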

Nevertheless, edge computing presents significant advantages that weren’t achievable beforehand. Owning the data, for instance, provides a high level of security, as there is no need for the data to be sent to the cloud, thus mitigating the increasing cybersecurity risks. Edge computing also reduces latency and power usage due to less communication back and forth with the cloud, which is particularly important for constrained devices running on low power. And the advantages don’t stop there, as we are seeing more and more interesting developments in real-time performance and decision-making, improved privacy control, and on-device learning, enabling intelligent devices to operate autonomously and adaptively without relying on constant cloud interaction.

“The recent surge in AI has been fueled by a harmonious interplay between cutting-edge algorithms and advanced hardware. As we move forward, the symbiosis of these two elements will become even more crucial, particularly for Edge AI.”
- Dr. Bram Verhoef, Head of Machine Learning at Axelera AI
Edge AI holds immense significance in the current and future technology landscape. With decentralized AI processing, improved responsiveness, enhanced privacy and security, cost-efficiency, scalability, and distributed computing, Edge AI is revolutionizing our world as we speak. And with the rapid developments happening constantly, it may be difficult to follow all the new advancements in the field.

That is why Wevolver has collaborated with several industry experts, researchers, professors, and leading companies to create a comprehensive report on the current state of Edge AI, exploring its history, cutting-edge applications, and future developments. This report will provide you with practical and technical knowledge to help you understand and navigate the evolving landscape of Edge AI.

This report would not have been possible without the esteemed contributions and sponsorship of Alif Semiconductor, Arduino, Arm, Axelera AI, BrainChip, Edge Impulse, GreenWaves Technologies, SparkFun, ST, and Synaptics. Their commitment to objectively sharing knowledge and insights to help inspire innovation and technological evolution aligns perfectly with what Wevolver does and the impact it aims to achieve.

As the world becomes increasingly connected and data-driven, Edge AI is emerging as a vital technology at the core of this transformation, and we hope this comprehensive report provides all the knowledge and inspiration you need to participate in this technological journey.
 
  • Like
  • Fire
  • Love
Reactions: 19 users

CHIPS

Regular
Here you go. It was me.
😍 I love it! I spent a long time looking for it and could not find it anymore. Thanks for posting it again 👍
 
  • Like
Reactions: 7 users

Frangipani

Regular
Not sure why the images in the article I posted earlier today won’t show up on TSE (you can see them once you click on the links, though).

I found this one very helpful for anyone not overly familiar with the SAE classification of autonomous driving levels:

[Image: SAE classification of autonomous driving levels]


I am just a bit worried that this image might have a somewhat unsettling effect on @Bravo, given the following study on feline reactions to bearded men… 😉

 
  • Haha
  • Like
  • Love
Reactions: 9 users

miaeffect

Oat latte lover
Here you go. It was me.
THANK YOU
MB
- EQ CLA CONCEPT IN SEPT 2023
- MB.OS DEBUT IN 2024
- EQ CLA LAUNCH IN LATE 2024
- 2025 WILL BE MASSIVE FOR BRN
[Images: Mercedes-Benz MB.OS press photo and article screenshots]
 
  • Like
  • Fire
  • Love
Reactions: 42 users
Wonder what STM is up to 🤔

Know they have the STM32Cube ecosystem to assist with deploying NN algos/models onto STM32 MCUs, but I hadn't seen anything neuromorphic from them previously, or maybe I missed it if they have.



STMicroelectronics
HW Engineer M/F
Noida
Posted 2 days ago
Position description

Posting title: HW Engineer M/F
Regular/Temporary: Regular

Job description

We are seeking a highly skilled and motivated Neural Processing Hardware Engineer specializing in deep learning hardware to join our dynamic team. As a Neural Processing Hardware Engineer, you will play a pivotal role in designing, developing, and optimizing NPU and IMC NPU hardware solutions specifically tailored for neural processing in deep learning applications. You will work closely with our process, systems and research teams to ensure seamless integration of hardware and software components.
Responsibilities:
  • Collaborate with cross-functional teams to define hardware requirements for neural processing in deep learning applications.
  • Design, develop, and prototype hardware architectures and IMC solutions for accelerators optimized for neural network inference and training.
  • Perform performance evaluation, optimization, and benchmarking of neural processing hardware solutions.
  • Conduct feasibility studies and provide technical recommendations for hardware IP feature selection and deployment.
  • Participate in hardware debugging, testing, and troubleshooting activities.
  • Stay up-to-date with the latest advancements in neural processing hardware and contribute to research and development efforts.

Profile


  • Front end architecture and RTL design exposure
  • Background in SRAM design and characterization is a plus.
  • Exposure to mixed-signal design and verification.
  • Experience with AI-specific hardware architectures (e.g., TPUs, neuromorphic chips).
  • Knowledge of hardware-software co-design and system-level optimization.
  • Familiarity with low-power design techniques and energy-efficient computing.
  • Background in signal processing, computer vision, or natural language processing.
  • Exposure to heterogeneous hardware platforms and architectures.
  • Knowledge of 3D integration techniques for hardware design.

Position location

Job location: Asia-Pacific, India, Greater Noida

Candidate criteria

Education level required: Bachelor degree
Experience level required: Over 10 years
 
  • Like
  • Fire
Reactions: 30 users

Neuromorphia

fact collector

Oculi adds the human eye to Artificial Intelligence | Luminate N.Y. Spotlight​




Oculi, an artificial vision startup, is hard at work creating technology that is transforming computer/machine vision for edge applications, such as smart and interactive devices at home, in the office, and in vehicles, including those used in gaming and the Augmented Reality (AR)/Virtual Reality (VR) market.
[Photo: Dr. Charbel Rizk]
The company’s Real-Time Vision Intelligence (VI) technology was developed by founder and CEO Dr. Charbel Rizk at Johns Hopkins University. It led to the creation of Oculi’s flagship product, OCULI SPU (Sensing and Processing Unit), the world’s first smart programmable vision sensor. The chip combines the efficiency of biology, more specifically the human eye, and the speed of machines to make computer vision faster and more efficient.
We caught up with Rizk to better understand Oculi’s vision technology, the problem it solves and the progress the company is making to transform smart and AR/VR devices.
Solving the challenges of machine vision to enable new possibilities
The way traditional cameras reconstruct images for human consumption works well for electronic and printed media, but it’s inefficient and limiting in computer/machine vision. Imaging sensors continuously transmit data, much of it irrelevant, which creates a data deluge that often leads to latency (the pesky time lags that make gaming far less enjoyable), energy waste, blurring, and inaccurate signal information.
Traditional imaging sensors are also inefficient, requiring a large processor, a wired power supply, and time to process. These requirements add significant expense to manufacturing costs and have prevented mass production of game-changing features and devices, such as gesture control for displays and TVs, people detection for appliances, and effective in-cabin monitoring. As these markets evolve, machine vision needs to be revolutionized to enable such products for mass production.
“Machine vision to date involves a conventional architecture, including a dumb image sensor and a processing platform,” said Rizk. “The current state of the market—including the need for face, eye, and hand tracking; low latency; and reduced power consumption requirements—is too complex and expensive for this old model.”
Oculi provides real-time, more efficient machine imaging
In 2019, with nearly 20 years of research and development under his belt — preceded by stints as a Senior Systems Engineer for Boeing Douglas and Boeing North America/Rockwell Aerospace — Rizk spun Oculi out from Johns Hopkins and launched its revolutionary product.
[Image: Oculi SPU. (Image provided)]
Oculi solves the fundamental problems of traditional computer/machine vision by offering an integrated sensing and processing module, transforming the dumb imaging sensor to a smart programmable vision sensor. Its main product offering is a sensing and processing unit (SPU) that is capable of performing tasks at the pixel level and at the edge. Useless data doesn’t need to be sent back to a central processor or the cloud to be processed. The OCULI SPU knows what data is needed from a scene and ignores the rest, so there’s less data to be analyzed or stored. This reduces power requirements and latency so action can be taken in real time and at the edge.
“By emulating the human eye, Oculi offers a first of its kind vision sensor that enables a natural and immersive user experience under any lighting conditions–indoors and outdoors,” said Rizk. “It does this 30 times faster and at one-tenth the power drain, while also protecting privacy and providing security, and it does it at a fraction of the cost. Our technology is ideal for edge applications, such as always-on gesture/face/people tracking and low-latency eye tracking.”
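
Oculi's in-pixel processing is proprietary, but the underlying idea of transmitting only the pixels that matter can be sketched in a few lines: compare each frame against the previous one and emit only the pixels whose change crosses a threshold, in the spirit of event-based vision. The threshold and frame sizes below are arbitrary illustration values.

```python
# Event-style pixel selection sketch (illustrative only; Oculi's actual
# in-pixel logic is proprietary). Only pixels that changed enough are
# "transmitted", shrinking the data volume sent downstream.
import numpy as np

THRESHOLD = 12  # assumed intensity-change threshold

def select_events(prev_frame: np.ndarray, frame: np.ndarray):
    """Return coordinates and values of pixels that changed meaningfully."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > THRESHOLD)
    return ys, xs, frame[ys, xs]

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (480, 640), dtype=np.uint8)
curr = prev.copy()
curr[100:120, 200:260] += 40  # a small moving object brightens one region

ys, xs, vals = select_events(prev, curr)
print(f"transmitting {len(vals)} of {curr.size} pixels "
      f"({100 * len(vals) / curr.size:.2f}% of the frame)")
```

Doing this selection at the pixel level, before readout, is what lets a sensor cut both bandwidth and latency rather than merely shifting the filtering downstream.
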
Looking Ahead: The Future of Oculi
Rizk estimates that Oculi will achieve significant revenue in the next five years. As an early-stage company, its initial focus is on Original Equipment Manufacturers (OEMs) and Tier One manufacturers that are looking to solve fundamental vision challenges and deliver new smart and interactive products to various markets. It’s currently piloting OCULI SPU with several manufacturers.
To speed the commercialization of its technology and advance its business, Oculi is one of 10 startups from around the world working at the Luminate NY accelerator at NextCorps in downtown Rochester. With support from Luminate and introductions to its network of potential partners and investors, Oculi hopes to secure additional funding and partnerships to scale operations for mass production, expand sales, and develop advanced versions of its SPU products.
“Oculi is working hard to transform artificial vision by improving its quality and efficiency,” said Dr. Sujatha Ramanujan, managing director of Luminate. “We’re confident that the team’s dedication and expertise will help to create products that propel numerous industries into a new era of optics and machine vision.”
For more information, visit www.oculi.ai. To get updates on Luminate and the emerging technologies being developed in Rochester, go to www.luminate.org.
 
  • Like
  • Fire
  • Love
Reactions: 20 users

Ian

Founding Member
  • Like
  • Fire
  • Love
Reactions: 24 users

Tothemoon24

Top 20
New Promo

Vision Processors Bring Scalable Edge AI to Smart Cameras​

July 29, 2023
The new processor family from Texas Instruments can handle vision and AI processing for up to 12 cameras.
Cabe Atwell

AI has evolved over the years and continues to be used in various applications meant to make our lives easier, such as cameras, security systems, and autonomous vehicles. Typically, these systems rely on cloud-based resources to run, at the risk of losing real-time responsiveness.
Texas Instruments aims to address this with its new family of six Arm Cortex-based vision processors. With these processors, you can add more vision and AI processing at a lower cost and with better energy efficiency in low-power applications, like video doorbells, machine vision, and autonomous mobile robots.
The family, which includes the AM62A, AM68A, and AM69A, is meant to help achieve real-time responsiveness “in the electronics that keep our world moving,” according to the company. The processors are supported by open-source evaluation and model development tools, plus common software that’s programmable through industry-standard application programming interfaces (APIs), frameworks, and models.
By eliminating cost and design complexity roadblocks when implementing vision processing and deep-learning capabilities in low-power edge AI applications, the new vision processors can bring intelligence from the cloud to the real world to improve real-time responsiveness, said TI.

Up to 32 TOPS of AI Processing​

The processors feature a system-on-chip (SoC) architecture that includes extensive integration. Among the integrated components are Arm Cortex-A53 or Cortex-A72 central processing units, a third-generation TI image signal processor, internal memory, interfaces, and hardware accelerators that deliver from 1 to 32 teraoperations per second (TOPS) of AI processing for deep-learning algorithms.
AM62A3, AM62A3-Q1, AM62A7 and AM62A7-Q1 processors can support one to two cameras at less than 2 W in applications such as video doorbells and smart retail systems. The AM62A3 is claimed as the industry's lowest-cost 1-TOPS vision processor (US$12 in 1,000-unit quantities). Meanwhile, the AM68A enables one to eight cameras in applications like machine vision. Finally, the AM69A achieves 32 TOPS of AI processing for one to 12 cameras in high-performance applications such as edge AI boxes, autonomous mobile robots, and traffic-monitoring systems.
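
A rough way to read those TOPS figures is to budget them against camera streams, as in the back-of-the-envelope sketch below. The per-frame operation count, frame rate, utilization factor, and the 8-TOPS figure assumed for the AM68A are illustration values, not TI specifications.

```python
# Back-of-the-envelope TOPS budgeting (all workload numbers are assumptions,
# not TI specifications). Estimates how many identical camera streams fit
# into a processor's peak-TOPS budget.
OPS_PER_FRAME = 5e9   # assumed cost of one inference, ~5 GOPs per frame
FPS = 30              # assumed frame rate per camera stream
UTILIZATION = 0.5     # accelerators rarely sustain their peak rating

def streams_supported(peak_tops: float) -> int:
    usable_ops_per_s = peak_tops * 1e12 * UTILIZATION
    ops_per_stream = OPS_PER_FRAME * FPS
    return int(usable_ops_per_s // ops_per_stream)

for name, tops in [("AM62A3 (1 TOPS)", 1),
                   ("AM68A (assumed 8 TOPS)", 8),
                   ("AM69A (32 TOPS)", 32)]:
    print(f"{name}: ~{streams_supported(tops)} streams of this workload")
```

Real camera counts depend heavily on the models and pipelines involved, which is why TI's own figures (up to two, eight, and twelve cameras respectively) reflect heavier per-stream workloads than this toy calculation.
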
In addition to the new processors, Texas Instruments also offers Edge AI Studio, a free open-source tool that helps improve time-to-market for edge AI applications. With the tool, designers are able to quickly develop and test AI models using user-created models and the company’s optimized models, which can also be retrained with custom data.
 
  • Like
  • Fire
  • Love
Reactions: 48 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

Just to state the obvious, looking positive today.

Over $1,000,000.00 AUD transacted within the first 35 minutes of trade, with building buy pressure.

Regards,
Esq.
 
  • Like
  • Love
  • Fire
Reactions: 46 users
Good Morning Chippers,

Just to state the obvious, looking positive today.

Over $1,000,000.00 AUD transacted within the first 35 minutes of trade, with building buy pressure.

Regards,
Esq.
IMO: just another double top to lower lows; just a sad truth with no news and no revenue.
 
  • Like
  • Sad
  • Thinking
Reactions: 9 users

HopalongPetrovski

I'm Spartacus!
Good Morning Chippers,

Just to state the obvious, looking positive today.

Over $1,000,000.00 AUD transacted within the first 35 minutes of trade, with building buy pressure.

Regards,
Esq.
 
  • Like
  • Love
  • Fire
Reactions: 10 users

Foxdog

Regular
IMO: just another double top to lower lows; just a sad truth with no news and no revenue.
Nup, this is the beginning of a sustained run through to $7.55, IMO.
 
  • Like
  • Haha
  • Fire
Reactions: 34 users

HopalongPetrovski

I'm Spartacus!
Nup, this is the beginning of a sustained run through to $7.55, IMO.
And when we hit $7.55 it'll be time to change up to 2nd gear. 🤣
 
  • Haha
  • Like
  • Fire
Reactions: 27 users

IloveLamp

Top 20
Don't think it's personal...




[Image: LinkedIn post screenshot]
 
  • Like
  • Fire
  • Love
Reactions: 25 users

qhaiho

Emerged

Calsco

Regular
Is the MB concept launch an event that people can attend, or is it online only? If so, is there anyone who can attend in person and ask questions about the BrainChip partnership?
 
  • Like
  • Haha
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Is the MB concept launch an event that people can attend, or is it online only? If so, is there anyone who can attend in person and ask questions about the BrainChip partnership?


Are you free to pop on down to the IAA Mobility in Munich, which kicks off on September 5 and runs through September 10?


"Mercedes-Benz experts will be on-hand to offer deep dives into the latest forward-thinking technologies."


[Images: Mercedes-Benz IAA Mobility screenshots]

 
  • Like
  • Love
  • Fire
Reactions: 38 users

Calsco

Regular
Are you free to pop on down to the IAA Mobility in Munich, which kicks off on September 5 and runs through September 10?


"Mercedes-Benz experts will be on-hand to offer deep dives into the latest forward-thinking technologies."


View attachment 41182
View attachment 41183
I have time off work and am happy to go if someone wants to spot me the plane ticket haha
 
  • Haha
  • Like
Reactions: 12 users

equanimous

Norse clairvoyant shapeshifter goddess
IMO: just another double top to lower lows; just a sad truth with no news and no revenue.
I think you're in the wrong forum. We got news and revenue six days ago with a price-sensitive announcement.
 
  • Like
  • Love
  • Fire
Reactions: 26 users

Damo4

Regular
I think you're in the wrong forum. We got news and revenue six days ago with a price-sensitive announcement.
[GIF: Jack Black “Where am I?” from Jumanji]
 
  • Haha
  • Like
Reactions: 12 users