BRN Discussion Ongoing

Home101

Regular

As well as looking back on the achievements of 2023, we are also looking forward with excitement to the start of 2024. As always, it starts with a bang at the CES in Las Vegas.

As I said, our vision of a Mercedes-Benz running on MB.OS is a completely new approach, as it opens up entirely new dimensions through software and AI. It will be a place of entertainment and gaming. Somewhere to be productive. A private spa. A Mercedes-Benz will even be part of a server farm and the energy grid.

And here is the first realisation of this vision:
What better place to present our vision of a hyper-personalised Mercedes-Benz user experience than CES? To show you what that means, we are showcasing our game-changing MBUX Virtual Assistant. Powered by generative AI, it takes the driver-car relationship to a whole new dimension with natural human-like interaction. It is based on our Mercedes-Benz Operating System (MB.OS) and includes empathetic characteristics that sync together with your driving style and mood.

I’ll be at the CES 2024 along with my colleague, our Chief Software Officer Magnus Östberg, to explain the details. And of course, I’ll be posting about it here on LinkedIn as well.

The MBUX Virtual Assistant is just one of several Mercedes-Benz highlights. As well as further digital innovations, there are the North American premieres of the Concept CLA Class and the camouflaged prototype of the electric G-Class. Plus, we have some exciting developments in the field of in-car entertainment.

🙏
Now this could bring another piece of Mercedes news we are all waiting for.
 
  • Like
  • Fire
Reactions: 9 users

IloveLamp

Top 20

 
  • Like
  • Love
  • Fire
Reactions: 14 users

Proga

Regular
Thanks TTM. We'll know then. Interesting that he mentions the North American premiere of the Concept CLA Class. It's supposed to go into production not long after CES 2024.
 
  • Like
Reactions: 9 users

IloveLamp

Top 20

Gelsinger made the comments about Intel's manufacturing business at an event in New York that focused on PC chips with artificial intelligence features.

Running AI applications from faraway data centers is too costly for the likes of Microsoft (MSFT.O), and those workloads will have to run on local PCs, Gelsinger said.

"There isn't any possible way that they can have a billion Windows devices hitting Azure to be running these workloads in real time," Gelsinger said.
 
  • Like
  • Fire
Reactions: 8 users

GazDix

Regular
Having been proven wrong and surprised at how long it has taken BrainChip to achieve traction and much meaningful revenue to date, I am now loath to recommend it to others.
I am happy to talk it up and express my ongoing support for the tech and the Company strategy which makes sense to me and which I endorse.
But, having done so, and having backed it in myself numerous times over the past few years, I always include the caveats of DYOR, only invest what you can afford to lose, have a longish-term time frame (5 years), etc.
Unfortunately I have learnt that most of my friends don't tend to actually do any of the above in any meaningful way and have no real understanding of the machinations of the ASX.
Instead they react much as one would to a tip on a horse at the races.
Expecting relatively instant gratification (a few months to a year or so), and given our share price decline over the past 12-18 months, they are mostly left disappointed.
So these days I do not bring up the topic and only engage if I think the person is both serious and diligent enough to do sufficient research to actually make up their own mind and decide for themselves whether such an investment is right for them.
I'm still in and still a believer but have diversified my portfolio somewhat as the world has changed dramatically over the past few years leading me to adopt a more prudent and defensive posture.
Whilst it has been shown that my expectations of a faster and more generalised uptake of the Akida solution were incorrect, I certainly did not foresee the actuality and consequences of a global pandemic, nor Putin's aggressive push into Ukraine.
So, I have tempered my expectations and exposure somewhat at this time...
Of course, whilst no one expects the Spanish Inquisition, I still remain in expectation of jaw dropping and chart bursting announcements to be revealed any and every day.
But 8 years plus in, I find my enthusiasm at present, somewhat curbed.
Bring It, BrainChip!
GLTAH

I completely agree with your feelings and sentiment, Hops.

Brainchip has actually been my best investment to date, and I am not talking about my average price, which is around the 20-cent mark.

Yourself, Dio, Dingo, FF, McHale, Jessie, previously UIUX, and many others writing previously on hotcrapper and now TSE showed me the power of research, and since 2019 I have developed a real interest in tech-related matters and what the future will look like. I guess I can also attribute that change in thinking to my young kids and how I want them to have the best advantages in life.

This interest, combined with over 15 years of teaching economics, which only reinforced my knowledge of the first principles of money, finally allowed me to open Pandora's box: the world of Web3, or the widely dreaded term 'cryptocurrency'. Brainchip's amazing edge AI and its utility, along with the economics teaching that reinforces basic principles and what money really is, allowed me to go 'balls deep' in Bitcoin and other cryptocurrencies, using skills I learned researching Brainchip and the Aussie share market in general. I am forever grateful, as we are now on the cusp of a massive change, over the next 5-10 years, in economic concepts that haven't shifted in decades, such as employment, efficiency and productivity, and knowledge of AI and blockchain puts us conservatively ahead of 95% of the population.

Disruption will be the norm. Brainchip is in a very good place to take advantage of this disruption.
 
  • Like
  • Love
  • Fire
Reactions: 34 users
American GNC Corporation
888 Easy Street
Simi Valley, CA 93065-1812
United States
Hubzone Owned: No
Socially and Economically Disadvantaged: Yes
Woman Owned: Yes
DUNS: 611466855
Principal Investigator
Name: Francisco Maldonado
Phone: (805) 582-0582
Email: fmald@americangnc.com
Business Contact
Name: Emily Melgarejo
Phone: (805) 582-0582
Email: emelgarejo@americangnc.com
Research Institution
N/A
Abstract
The “On-Board Distributed Autonomous LearnIng for Satellite (ODALIS) Communication System” provides a cognitive approach that senses, detects, adapts, and learns from both experiences and the environment to optimize communications while addressing NASA’s needs to leverage artificial intelligence and machine learning technologies to optimize space communication links, networks, and systems. While CubeSats and Software Defined Radio (SDR) support are the initial target focus, the ODALIS system can also be applied to lunar surface assets (e.g. surface relays, science stations, astronaut communications, and rovers) and relay satellites, Earth ground stations, the International Space Station, and spacecraft communications. The ODALIS system provides an innovative on-board embedded implementation and spectrum availability prognostics based upon an ensemble of learning paradigms that involves: (i) Federated Learning; (ii) local learning by a Long Short Term Memory (LSTM) based recurrent neural network, which is applied to spectrum prediction; and (iii) State-of-the-Art (SOTA) software defined radio with embedded distributed learning. Innovations are aimed at improvement and enhancement of cognitive communications by leveraging machine learning technology and SDR with optimized size, weight, and power (SWaP) to conduct RF spectrum availability detection and prognostics and include: (1) Over-the-Air Federated Learning implementation (a Phase II result); (2) novel hardware implementation based upon a SOTA RFSoC (FPGA) and Machine Learning toolboxes; (3) fully embedded Machine Learning within the hardware core for achieving real time operation in cognitive communications; and (4) design for automated configuration support within Software Defined Radio and Cognitive Radio.
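To make paradigm (ii) a little more concrete, below is a minimal sketch of an LSTM-based spectrum-occupancy predictor of the kind the abstract describes, with a naive federated-averaging step in the spirit of paradigm (i). The channel count, window length, and synthetic data are illustrative assumptions, not details from the award.

```python
# Minimal sketch of LSTM-based spectrum-occupancy prediction (paradigm (ii)).
# All shapes, names, and hyperparameters here are illustrative assumptions.
import numpy as np
import tensorflow as tf

NUM_CHANNELS = 16  # assumed number of monitored RF channels
WINDOW = 32        # assumed history length, in sensing intervals

# Toy data: past occupancy windows -> occupancy at the next interval.
x = np.random.randint(0, 2, size=(1000, WINDOW, NUM_CHANNELS)).astype("float32")
y = np.random.randint(0, 2, size=(1000, NUM_CHANNELS)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, NUM_CHANNELS)),
    tf.keras.layers.LSTM(64),
    # One sigmoid per channel: probability the channel is busy next interval.
    tf.keras.layers.Dense(NUM_CHANNELS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=3, batch_size=32, verbose=0)

# Channels predicted idle in the next interval could guide link selection.
probs = model.predict(x[:1], verbose=0)[0]
print("channels predicted idle:", np.where(probs < 0.5)[0])

# Federated-averaging flavour (paradigm (i)): average weights with a peer's
# locally trained copy instead of shipping raw spectrum data around.
peer = tf.keras.models.clone_model(model)
peer.set_weights(model.get_weights())
averaged = [(a + b) / 2.0 for a, b in zip(model.get_weights(), peer.get_weights())]
model.set_weights(averaged)
```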


 
  • Like
  • Fire
  • Love
Reactions: 12 users
I thought it was comical when Musk had a person in a "robot" suit dancing around, only two years ago..





But this is Tesla Optimus Gen 2 (Gen 1 was March 2023 and was almost as much of a joke as the actor).

It's pretty bloody impressive (if none of it is 3D rendered) and already very close to his vision, in such a short time.



This comment is Gold Star quality though 😂..




Hmmm...



If he wants it to be able to do "stuff" without needing a charge every 30 minutes, it's going to need a neuromorphically principled "brain" of some sort.
 
Last edited:
  • Haha
  • Like
  • Fire
Reactions: 13 users

CHIPS

Regular

BrainChip Previews Industry’s First Edge Box Powered by Neuromorphic AI IP


LAGUNA HILLS, Calif.--(BUSINESS WIRE)-- BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today released previews of the industry’s first Edge box based on neuromorphic technology built in collaboration with VVDN Technologies, a premier electronics and manufacturing company.

The Akida™ Edge Box—expected to begin pre-sales on January 15th, 2024—powers AI applications in challenging environments where performance and efficiency are essential. The device will be demonstrated for the first time at CES 2024, January 9-12 in Las Vegas.

Designed for vision-based AI workloads, the compact Akida Edge Box is intended for video analytics, facial recognition, and object detection, and can extend intelligent processing capabilities that integrate inputs from various other sensors. This device is compact, powerful, and enables cost-effective, scalable AI solutions at the Edge.

BrainChip’s event-based neural processing, which closely mimics the learning ability of the human brain, delivers essential performance within an energy-efficient, portable form factor, while offering cost-effectiveness surpassing market standards for edge AI computing appliances. BrainChip’s Akida neuromorphic processors are capable of on-chip learning that enables customization and personalization on device without support from the cloud, enhancing privacy and security while also reducing training overhead, which is a growing cost for AI services.

“BrainChip’s neuromorphic technology gives the Akida Edge box the ‘edge’ in demanding markets such as industrial, manufacturing, warehouse, high-volume retail, and medical care,” said Sean Hehir, CEO of BrainChip. “We are excited to partner with an industry leader like VVDN technologies to bring groundbreaking technology to the market.”

“There is a strong demand for cost-effective, flexible edge AI computation across many industries,” said Puneet Agarwal, Co-Founder and CEO, VVDN Technologies. “VVDN is excited to offer OEMs its experience and expertise in bringing the advanced, transformative technology integrations that meet market needs and eventually help them with faster time to market.”

BrainChip’s Akida Edge Box is suitable for environments that require cost-effective and low-latency AI processing. In security and surveillance, it can automatically detect and report intrusion, identify individuals in restricted areas, and perform behavior analysis to recognize suspicious behaviors or potential threats.

In retail and warehousing, it can assist in inventory management and loss prevention by identifying when shelves are empty, when restocking is needed, and when merchandise is removed without authorization. Behavior analysis capabilities can also help retailers understand how customers interact with products and store layouts to maximize profitability.

The Akida Edge Box brings AI to industrial settings for visual detection applications such as quality inspection, identifying defects or irregularities in products, and integration with factory robotic systems for precise object manipulation. It can be used to enhance plant and worker safety, identify whether workers are using proper safety gear, following protocols and proper workflows, and identify malfunctions in assembly lines.

Healthcare applications include patient monitoring, such as noting a patient’s physical movements to ensure safety and provide alerts for falls or unusual behavior. In rehabilitation facilities it can track and analyze patient movements to aid in physical therapy. In elder care settings it can be used to detect falls or other situations that require staff assistance or intervention.
 
  • Like
  • Fire
  • Love
Reactions: 32 users

Rskiff

Regular

to add
 
  • Like
  • Haha
  • Love
Reactions: 8 users

Nice partner to have hey!

"We see a future where millions of developers like you are building billions of smart devices, powered by Edge Impulse."
 
  • Like
  • Fire
  • Love
Reactions: 25 users

Esq.111

Fascinatingly Intuitive.
  • Like
  • Fire
  • Wow
Reactions: 25 users

IloveLamp

Top 20
  • Like
  • Love
  • Fire
Reactions: 22 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 15 users

Getupthere

Regular

Tokay Lite no-code Edge AI camera with night vision and TensorFlow Lite support and more​

10:39 am December 14, 2023 By Julian Horsey


A new multifunctional Edge AI camera offering night vision, motion detection, TensorFlow Lite support, no-code setup and an open-source design is now available to purchase from just $89 on the Crowd Supply website. The Tokay Lite is an advanced AI camera platform and ESP32-based development board. This sophisticated piece of hardware is not just a camera but a versatile tool with a myriad of applications that stretch from security to wildlife monitoring, and even agricultural use.
At the heart of the Tokay Lite lies its onboard Edge AI Processing capability. This feature allows real-time image analysis and decision-making without the need for an external computer. It is a testament to the power of edge computing, where data is processed on the device itself, thus reducing latency and increasing speed. Whether it’s detecting an intruder in a security setup or identifying a rare bird species in a wildlife monitoring scenario, the Tokay Lite can process and analyze data in real-time, right at the source.

Tokay Lite no-code Edge AI camera​

The Tokay Lite is powered by open-source firmware, and its no-code web interface makes it user-friendly even for those not versed in programming. This interface allows users to configure settings according to their specific needs. Moreover, the device can be seamlessly integrated with major IoT and AI platforms like AWS, ThingsBoard, and Home Assistant, further enhancing its versatility.




One of the standout features of the Tokay Lite is its facial recognition and detection capabilities. The device comes pre-loaded with a facial recognition model, but users can also reprogram it with their own AI models. This feature can be particularly useful in security applications, where recognizing and identifying individuals is paramount.
Beyond its use in security, the Tokay Lite also finds application in robotics. It can serve as a sensor and decision-making unit, providing real-time visual data and performing visual recognition tasks. This capability can be instrumental in developing robots that can navigate their environment and interact with objects and individuals.
The Tokay Lite is designed for plug-and-play integration, featuring an AI-capable chip with 8 MB FLASH and 8 MB of external RAM. Its adaptable power modes and night vision feature make it suitable for use in various environmental conditions. The device is also equipped with light and motion sensors for environmental monitoring.
Tokay Lite camera components and controls diagram

The open-source nature of the Tokay Lite extends to its SDK, providing developers with the freedom to customize and enhance the device’s capabilities. The device comes with numerous examples, serving as a valuable resource for developers looking to explore its potential.
Technical specifications of the Tokay Lite include an onboard MCU: ESP32-S3, sensor interface: DVP, stock camera sensor: OV2640, image capabilities: 0.3 MP / 2 MP / 3 MP with RGB and JPEG support, frame rate: up to 15 FPS, night vision, sensors: light sensor and passive IR (motion detection), connectivity: Wi-Fi and BLE, memory: 8 MB Flash, 512 kB + 8 MB RAM, software: TF-Lite Micro, esp-dl, interfaces: SPI, UART, battery connector: JST-PH (2 mm pitch), and customizable power features: programmable external RTC.
The Tokay Lite is now available to purchase from Crowd Supply priced at $89 with shipping expected to take place sometime around May 2024. Its versatility, advanced capabilities, and user-friendly design make it a valuable tool in various fields, from security to robotics. As AI continues to evolve, devices like the Tokay Lite serve as a reminder of the exciting possibilities that lie ahead.
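Given the board's stated TF-Lite Micro support and 8 MB flash budget, here is a rough sketch of how a custom model might be prepared for a device like this using full-integer quantisation in plain TensorFlow. The toy network, calibration generator, and file name are hypothetical, not part of the product or its SDK.

```python
# Hedged sketch: converting a custom model for a TF-Lite Micro target such as
# the Tokay Lite. The model, calibration data, and file name are assumptions.
import numpy as np
import tensorflow as tf

# Toy 96x96 grayscale classifier standing in for a user-trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. face / no face
])

def representative_data():
    # Calibration samples for full-integer quantisation (stand-in data here;
    # real frames from the OV2640 sensor would be used in practice).
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # match the MCU's integer pipeline
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"model size: {len(tflite_model) / 1024:.1f} KiB of the 8 MB flash")
```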

Tokay Lite camera features​

  • Designed for Plug-and-Play: Ready for integration with a hassle-free setup.
  • Powerful AI-Capable Chip: Packed with an 8 MB FLASH and 8 MB of external RAM for AI tasks.
  • Adaptable Power Modes: Switch between low power and high-performance modes to match your power needs.
  • Night Vision: Equipped with night vision and onboard IR LED illumination for all-environment use.
  • Environmental Monitoring: Features light and motion sensors for comprehensive environmental awareness.
  • No-Code UI: Easily set up projects and fine-tune sensors without coding.
  • Seamless Integrations: Effortlessly connects to major IoT and AI platforms like AWS IoT, Edge Impulse, and ThingsBoard.
  • Open Source: Explore the possibilities with an open-source SDK and numerous examples.

Specifications​

  • Onboard MCU: ESP32-S3
  • Sensor Interface: DVP
  • Stock Camera Sensor: OV2640
  • Image Capabilities: 0.3 MP / 2 MP / 3 MP with RGB and JPEG support
  • Frame Rate: Up to 15 FPS
  • Night Vision: Yes
  • Sensors: Includes Light Sensor and Passive IR (Motion Detection)
  • Connectivity: Wi-Fi and BLE
  • Memory: 8 MB Flash, 512 kB + 8 MB RAM
  • Software: TF-Lite Micro, esp-dl
  • Interfaces: SPI, UART
  • Battery Connector: JST-PH (2 mm pitch)
 
  • Like
  • Love
  • Wow
Reactions: 15 users

Xray1

Regular

I wonder how much it will be sold for and how the sale proceeds will ultimately be distributed between BRN & VVDN.
 
  • Like
  • Thinking
Reactions: 6 users

Xray1

Regular
Do people here think now is perhaps the time to strongly recommend that those we know and love invest a decent sum in BrainChip?

Perhaps from their Super?

$17,500 will buy you 100,000 shares in this Company, and on the basis that it could be lost, or could profoundly change their future, it could be a good "bet"?...

Many here have already recommended strongly over the years, with good intentions, but ill effect.

I personally encouraged my brother to buy 50,000 shares at 95 cents last year and now he won't listen to me about averaging down 🤔..


Disclosure - I am very heavily invested in BrainChip and bought another 10,000 shares yesterday..

I sleep like a baby at night, but if BrainChip doesn't "make it" my future will seriously look a bit like this..


Provided someone will be kind enough to let me park this beauty on their property 😬..



However, I will probably still look at it like this..

I personally would wait and see what actually eventuates between now and the AGM.
 
  • Like
  • Thinking
Reactions: 6 users

KiKi

Regular
I might be completely mistaken, but I think I read something like 9,900 dollars somewhere. Or was it 990? 🙃
OK, I am not sure. 😂
 
Last edited:
  • Haha
  • Like
Reactions: 4 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 7 users

Bravo

If ARM were an arm, BRN would be its biceps 💪!
Last edited:
  • Like
  • Fire
  • Love
Reactions: 12 users

Bravo


AI News

AI & Big Data Expo: Unlocking the potential of AI on edge devices​


By Ryan Daws | December 15, 2023



In an interview at AI & Big Data Expo, Alessandro Grande, Head of Product at Edge Impulse, discussed issues around developing machine learning models for resource-constrained edge devices and how to overcome them.
During the discussion, Grande provided insightful perspectives on the current challenges, how Edge Impulse is helping address these struggles, and the tremendous promise of on-device AI.

Key hurdles with edge AI adoption​

Grande highlighted three primary pain points companies face when attempting to productise edge machine learning models: difficulties determining optimal data collection strategies, scarce AI expertise, and cross-disciplinary communication barriers between hardware, firmware, and data science teams.
“A lot of the companies building edge devices are not very familiar with machine learning,” says Grande. “Bringing those two worlds together is the third challenge, really, around having teams communicate with each other and being able to share knowledge and work towards the same goals.”

Strategies for lean and efficient models​

When asked how to optimise for edge environments, Grande emphasised first minimising required sensor data.
“We are seeing a lot of companies struggle with the dataset. What data is enough, what data should they collect, what data from which sensors should they collect the data from. And that’s a big struggle,” explains Grande.
Selecting efficient neural network architectures helps, as do compression techniques like quantisation, which reduce precision without substantially impacting accuracy. Sensor and hardware constraints must always be balanced against functionality, connectivity needs, and software requirements.
Edge Impulse aims to enable engineers to validate and verify models themselves pre-deployment using common ML evaluation metrics, ensuring reliability while accelerating time-to-value. The end-to-end development platform seamlessly integrates with all major cloud and ML platforms.
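As a sketch of what such a pre-deployment check can look like outside any particular platform, the snippet below runs a quantised .tflite model through the standard TensorFlow Lite interpreter and computes plain accuracy on held-out data. The model file and test set are stand-in assumptions, not Edge Impulse's actual tooling.

```python
# Hedged sketch of pre-deployment validation with the plain TF Lite
# interpreter; "model.tflite" and the test set are stand-in assumptions.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def predict(sample):
    # Quantise the float sample if the model expects int8 inputs.
    if inp["dtype"] == np.int8:
        scale, zero_point = inp["quantization"]
        sample = (sample / scale + zero_point).astype(np.int8)
    interpreter.set_tensor(inp["index"], sample)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# Stand-in held-out set; real sensor data would be used in practice.
x_test = np.random.rand(200, 96, 96, 1).astype("float32")
y_test = np.random.randint(0, 2, size=200)

preds = [int(np.argmax(predict(x[None, ...]))) for x in x_test]
print(f"held-out accuracy: {np.mean(np.array(preds) == y_test):.2%}")
```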

Transformative potential of on-device intelligence​

Grande highlighted innovative products already leveraging edge intelligence to provide personalised health insights without reliance on the cloud, such as sleep tracking with Oura Ring.
“It’s sold over a million pieces, and it’s something that everybody can experience and everybody can get a sense of really the power of edge AI,” explains Grande.
Other exciting opportunities exist around preventative industrial maintenance via anomaly detection on production lines.
Ultimately, Grande sees massive potential for on-device AI to greatly enhance utility and usability in daily life. Rather than just raw data, edge devices can interpret sensor inputs to provide actionable suggestions and responsive experiences not previously possible—heralding more useful technology and improved quality of life.
Unlocking the potential of AI on edge devices hinges on overcoming current obstacles inhibiting adoption. Grande and other leading experts provided deep insights at this year’s AI & Big Data Expo on how to break down the barriers and unleash the full possibilities of edge AI.
“I’d love to see a world where the devices that we were dealing with were actually more useful to us,” concludes Grande.
 
  • Love
  • Fire
  • Like
Reactions: 8 users