BRN Discussion Ongoing

Evermont

Stealth Mode
BLOG | SEPTEMBER 20, 2023

The Future of Vision: Edge AI Breaks Barriers for Data-Intensive Applications

Transforming industries and enhancing efficiency with cutting-edge AI vision technology.
By Parag Beeraka, Senior Director, Segment Marketing, IoT, Arm

One of the most promising applications of IoT AI vision technology is to capture consumer data inside stores so retailers can more quickly and efficiently optimize product placement, store layout and customer experience based on video data.

But there are two major hurdles to overcome: cost and complexity. A large grocery store that wants to harvest foot-traffic, purchase and other data would need about 15,000 cameras in store. At 30 frames per second of 4K video, those 15,000 cameras would produce 225 gigabits of data per second.
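
That figure is easy to sanity-check. Here is a minimal sketch; the ~15 Mb/s per-camera rate is an assumed compressed 4K@30fps bitrate chosen to be consistent with the article's total, not a number the article states:

```python
# Back-of-the-envelope check of the article's aggregate-bandwidth figure.
# Assumes each camera streams compressed 4K at ~15 Mb/s (a plausible H.264/H.265
# rate for 4K@30fps); raw, uncompressed 4K would be orders of magnitude larger.

CAMERAS = 15_000
BITRATE_MBPS = 15  # assumed compressed bitrate per 4K@30fps camera

aggregate_gbps = CAMERAS * BITRATE_MBPS / 1_000
print(f"Aggregate: {aggregate_gbps:.0f} Gb/s")  # -> 225 Gb/s, matching the article
```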

That happens because video data is enormous compared with other forms of data, and it demands intricate processing, including image recognition, object detection and scene analysis. These AI vision tasks often require advanced algorithms and models, adding to the computational complexity. On top of that, data at this scale must be sent to the cloud for efficient computation and then back out for decision-making.

Clearly, 225 gigabits per second is uneconomical.

But that’s only if you think it’s still 2018, not 2023. Much has changed in the past five years. The combination of improved and more-efficient processing at the edge, coupled with AI and machine learning, now chips away at that big uneconomic roadblock in front of many promising vision applications.

Unlocking edge AI vision innovation

Back then, too many vital technologies sat siloed, each difficult or impossible to integrate with the other puzzle pieces needed for a frictionless innovation ecosystem. In a homogeneous processing world, it was difficult to customize solutions for different vision workloads when one size had to fit all. What’s different about today?

Engineers and developers have attacked cost and complexity, as well as other challenges. Take the processing-complexity challenge, for example. One pathway to driving down the cost and complexity of vision solutions is to offer developers more flexibility in how they implement edge solutions – heterogeneous compute.

Designers are producing increasingly powerful processors that offer higher computational capacity while remaining energy efficient. These processors include CPUs, GPUs, ISPs and accelerators designed to handle complex tasks like AI and machine learning in sometimes resource-constrained environments. In addition, AI accelerators – either as a core on an SoC or as a stand-alone SoC – enable efficient execution of AI algorithms at the edge.

Tackling complexity

Let’s take one piece of the complexity puzzle. In 2022, Arm introduced the Arm Mali-C55, its smallest and highest-performing Image Signal Processor (ISP) to date. It balances image quality, throughput, power efficiency and silicon area for applications like endpoint AI, smart-home cameras, AR/VR and smart displays. With throughput of up to 1.2 Gpix/s, it is a compelling choice for demanding visual processing tasks. And when it comes to the push toward heterogeneous compute, the Mali-C55 is designed for seamless integration into SoC designs with Cortex-A or Cortex-M CPUs.
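
To put 1.2 Gpix/s in context, here is a rough capacity calculation (a sketch only; real stream counts depend on ISP overheads and formats not modeled here):

```python
# Rough capacity check: how many 4K@30fps streams fit in 1.2 Gpix/s of ISP throughput?
ISP_THROUGHPUT_PIX_PER_S = 1.2e9
PIX_PER_4K_FRAME = 3840 * 2160           # ~8.3 Mpix per frame
FPS = 30

pix_per_stream = PIX_PER_4K_FRAME * FPS  # ~249 Mpix/s per stream
streams = ISP_THROUGHPUT_PIX_PER_S / pix_per_stream
print(f"~{streams:.1f} concurrent 4K@30fps streams")  # -> ~4.8
```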

That’s key because in SoCs, ISP output is often fed directly into a machine learning accelerator for further processing with neural networks or similar algorithms. This typically means providing scaled-down images to the machine learning models for tasks like object detection and pose estimation.
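
As a minimal sketch of that hand-off, assuming a small detector with a 300x300 input (an illustrative size, not one from the article), the downscale step might look like this in Python:

```python
import numpy as np
import cv2  # OpenCV, used here only for the resize step

MODEL_INPUT = (300, 300)  # assumed input size for a small detection model

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Scale a full-resolution ISP frame down to the detector's input size."""
    small = cv2.resize(frame, MODEL_INPUT, interpolation=cv2.INTER_AREA)
    return small.astype(np.float32) / 255.0  # normalize to [0, 1]

# Stand-in for the ISP output: a random 4K RGB frame.
frame = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)
model_input = preprocess(frame)
print(model_input.shape)  # (300, 300, 3), ready for an accelerator-backed detector
```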

This synergy, in turn, has given rise to ML-enabled cameras and the concept of “Software Defined Cameras.” This allows OEMs and service providers to deploy cameras globally with evolving capabilities and revenue models tied to dynamic feature enhancements.

Think, for example, of a parking garage with cameras dangling above every slot, determining whether each slot is filled. That was great in 2018 for drivers entering the garage who needed to see the open slots at a glance, but it is not an economical solution in 2023. How about leveraging edge AI, positioning only one or two cameras at the entrance and exit or on each floor, and having AI algorithms figure out the rest? That’s 2023 thinking.
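
A minimal sketch of that entrance/exit idea: a running counter fed by detection events replaces a camera per slot (the event names and garage capacity here are illustrative assumptions, not details from the article):

```python
# Minimal sketch: track garage occupancy from entry/exit detections alone,
# instead of dedicating a camera to every slot. In practice the events would
# come from an edge AI detector watching the entrance and exit lanes.

CAPACITY = 400  # assumed garage size

def free_slots(events, capacity=CAPACITY):
    """Fold a stream of 'enter'/'exit' events into a count of free slots."""
    occupied = 0
    for event in events:
        if event == "enter":
            occupied = min(capacity, occupied + 1)
        elif event == "exit":
            occupied = max(0, occupied - 1)
    return capacity - occupied

print(free_slots(["enter", "enter", "exit", "enter"]))  # -> 398
```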

That brings us back to the retailer’s problem: 15,000 cameras producing 225 gigabits per second of data. You get the picture, right?

Amazon has recognized this problem: in the latest version of its Just Walk Out store technology, it increased the compute capability in the camera module itself. That moves the power of AI to the edge, where computation can happen more efficiently and quickly.

With this powerful, cost-effective vision opportunity, a grocery retailer might, for example, analyze video data from in-store cameras and determine that it needs to restock oranges around noon every day because most people buy them between 9 and 11 a.m. Upon further analysis, the retailer realizes that many of its customers – anonymized in the video data for privacy reasons – also buy peanuts during the same shopping trip. It can use this video data to change its product placement.
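
A toy version of both queries, with fabricated placeholder transactions purely to illustrate the analysis (peak purchase hour for restocking, and frequently co-purchased items for placement):

```python
from collections import Counter

# Toy transactions: (hour_of_day, set of items). In practice these would be
# derived from anonymized in-store video and point-of-sale data.
transactions = [
    (9, {"oranges", "peanuts"}),
    (10, {"oranges"}),
    (10, {"oranges", "peanuts"}),
    (14, {"milk"}),
]

# 1) When do people buy oranges? -> pick the peak hour to schedule restocking.
orange_hours = Counter(h for h, items in transactions if "oranges" in items)
print("Peak orange hour:", orange_hours.most_common(1)[0][0])  # -> 10

# 2) What is bought alongside oranges? -> candidates for nearby placement.
companions = Counter(
    item
    for _, items in transactions if "oranges" in items
    for item in items - {"oranges"}
)
print("Top companion:", companions.most_common(1)[0][0])  # -> peanuts
```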

Right place, right compute

This kind of compute optimization – putting the right type of edge AI computing much closer to the sensors – reduces latency, can tighten security and reduces costs. It also can spark new business models.
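
For a rough feel of the latency difference, here is a back-of-envelope comparison (all round-trip and inference numbers are illustrative assumptions, not measurements from the article):

```python
# Back-of-the-envelope latency comparison for one inference decision.
# All numbers are illustrative assumptions.
CLOUD_RTT_MS = 80    # assumed network round trip to a cloud region
CLOUD_INFER_MS = 10  # assumed server-side inference time
EDGE_INFER_MS = 25   # assumed on-device inference on an edge accelerator

print(f"cloud path: ~{CLOUD_RTT_MS + CLOUD_INFER_MS} ms per decision")  # ~90 ms
print(f"edge path:  ~{EDGE_INFER_MS} ms per decision")                  # ~25 ms
```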

One such business model is video surveillance-as-a-service (VSaaS): the provision of video recording, storage, remote management and cybersecurity across a mix of on-premises cameras and cloud-based video-management systems. The VSaaS market is expected to reach $132 billion by 2027, according to Transparency Market Research.

At a broader level, immense opportunity awaits: many powerful applications have been waiting in the wings because of economics, processing limitations or sheer complexity.

Consider:

  • Smart Cities: Video analytics for traffic management, pedestrian flow analysis, and parking space optimization in smart cities can lead to substantial data generation.
  • Industrial Automation: Quality control, defect detection, and process optimization.
  • Autonomous Vehicles: Sensors and cameras on self-driving cars and drones capture data for navigation and safety systems, perceiving their surroundings in real time.
  • Virtual Reality (VR) and Augmented Reality (AR): Immersive VR and AR experiences require rendering and processing of high-resolution visual content in real time, resulting in significant data generation.

Leading-edge adopters aren’t waiting. In South Korea’s Pyeongtaek City, city leaders plan to build a test bed using smart-city technologies such as artificial intelligence and autonomous driving, to be completed in 2025 and then extended throughout the city.

The city of a half-million people grapples with traffic congestion and pedestrian fatalities. As part of a citywide “smart city” overhaul, experts have deployed Arm partner Nota AI’s NetsPresso platform – an automatic AI model compression solution – in vision devices to create an intelligent transportation system.

At the device level, clever design is helping customers achieve their vision ambitions. Take the Himax WiseEye-II, a smart image-sensing solution that can be deployed in a range of battery-operated consumer and home-security applications, including notebooks, doorbells, door locks, surveillance cameras and smart offices. It marries Arm microcontroller and neural processor cores to drive machine-vision AI more deeply into consumer and smart-home devices.

These examples, and the future innovations being designed today, are happening because of amazing advances in edge AI technology. And in vision, they’re being built on Arm.

In addition to hardware, Arm makes the journey faster and more efficient for developers of image solutions, thanks to software libraries, interconnect standards, security frameworks, and development tools such as Arm Virtual Hardware, which lets them run applications in a virtual environment on their target architecture before committing to hardware.

So remember those hopes of transforming the world with previously untapped amounts of data using vision technology that once seemed a far-off dream because of cost and complexity? They can become reality now.

 
Reactions: 14 users

buena suerte :-)

BOB Bank of Brainchip
Unfortunately, that is only the top 18. You missed a couple. Deena
16!...:love:

Here's the other 4!! :)

[image attachment: the remaining four top-20 holders]
 
Reactions: 4 users

Evermont

Stealth Mode

Interesting. Click the link and it takes you to the Wevolver Edge AI Report.

Reactions: 6 users

Evermont

Stealth Mode
Sorry @Tothemoon24 missed your earlier post!
 
Reactions: 3 users
16.5c was retested and holding. Volume drying up a bit is a good sign in my view also. I'll be watching 16.5c to hold and a push above 20c as a guide to re-enter.

Obviously, if a new IP deal comes out of the blue then anything is possible, depending on who the customer is. But the possibility of that alone isn't enough for me to FOMO buy back in.
Why do people dream? How long ago was the MegaChips deal announced?
 

CWP

Regular
Reactions: 3 users
Glad it's just your opinion, because it's not entirely factual. Or rather, it wasn't. When we had a physical product, like the FPGA board, it could in fact be relatively easily incorporated into an existing system. Yes, it involved some programming and interfacing, but it was far from insurmountable. This was the kind of prospect I presented to management in 2019 but received a very lukewarm response. They preferred to go elephant hunting and came up empty... as often happens when you ignore grassroots opportunities.

When we changed to being an IP-only provider, my potential customer lost interest due to the cost and time involved in creating a custom chip to suit the application. This was such a shame, because the sales model proposed was to charge the end user an ongoing fee, which would have been an early start to a recurring revenue stream for BRN. Isn't this exactly the type of sales model now being introduced by BMW, Mercedes et al. to let customers access certain features in their cars?
“When we changed to being an IP-only provider, my potential customer lost interest due to the cost and time involved in creating a custom chip to suit the application”

Wouldn't AKIDA 1500 now satisfy that requirement?

The Company has now expanded its market introduction strategy and may be more open to such an opportunity.

I think we have enough irons in the fire as it is; they are just not hot enough yet.

No stone should be left unturned, though, if it can help accelerate market adoption.
 
Reactions: 10 users
Please stop... apart from some mentions there is no progress. We just go backwards each quarter.

I, and many here, have been patient and supportive and understanding. But nothing is changing. Other than a few "oh this would be great if we were in this or that or here or there...." Then we watch the financials as our missing-in-action CEO has said... and boom. I make more than that in 3 months. Hell my bonus was bigger than that.
"I make more than that in 3 months. Hell my bonus was bigger than that"

So you're making well over 2 grand a week and your (I'm assuming yearly) bonus was over 27k and you're having a sook about your investment in BrainChip?..

Gee, I wish I could be in such a dire financial predicament..

Just try and batten down the hatches and huddle with your family over a hot peppermint a little longer..
I'm sure you'll make it Champ 😉👍
 
Reactions: 19 users

BigDonger101

Founding Member
Does this indicate the former CEO has sold his shares? Or just no longer holds enough to make the list?
[image attachment: top 20 holders list]

He could definitely still be on the list, just not in the top 20. This is from 27/04/2023.
 
Reactions: 4 users

buena suerte :-)

BOB Bank of Brainchip
Does this indicate the former CEO has sold his shares? Or just no longer holds enough to make the list?
He has dropped out of the top 20!!

From April 2023


[image attachments: April 2023 top 20 holders list]

Updated Oct 2023... PVDM back on top :)

[image attachments: October 2023 top 20 holders list]
 
Reactions: 5 users
"I make more than that in 3 months. Hell my bonus was bigger than that"

So you're making well over 2 grand a week and your (I'm assuming yearly) bonus was over 27k and you're having a sook about your investment in BrainChip?..

Gee, I wish I could be in such a dire financial predicament..

Just try and batten down the hatches and huddle with your family over a hot peppermint a little longer..
I'm sure you'll make it Champ 😉👍
Completely missing the point DB. I never said I was in a dire financial predicament..

I am not annoyed that I am losing money; I am annoyed at the complete lack of traction and any visibility of our CEO... Yes, we all say there is lots happening and yes, things take time etc. But our CEO is not out there reassuring anyone; the SP just dives and everything is hidden behind NDAs while we watch the financials...

And, yes I understand that not everyone is as fortunate in terms of their pay, and trust I work for it, was lucky to be in the right place and it absolutely did not come overnight.
 
Reactions: 13 users

jtardif999

Regular
I'm not usually a conspiracy theorist, but the lack of results and the lack of any meaningful explanation over the last 12 months has me wondering about BRN. Are our people not capable of selling this technology? Are we being set up for a takeover? Was our move to an IP-only company a strategic mistake? The silence is deafening... :rolleyes:

Most of us long-term investors put our faith in PVdM and Anil when we bought in. I for one would like to hear from them directly about commercial progress, without everyone ducking for cover behind the NDAs.
Our move to being an IP company over being just a chip company will imo eventually make us very, very rich. There's around 97% clear profit in it, but to get that we need to be more patient with the scheme of things on the IP path. Through IP, BrainChip's tech will scale the length and breadth of the market, and when it starts, the royalties will very quickly snowball imo. If we were just producing chips, the overheads would be much greater and the profit a lot smaller. Perhaps there's opportunity for both IP and chips, particularly in the space sphere, but I don't think it will ever be extensive.
 
Reactions: 21 users

toasty

Regular
"When we changed to being IP only provider, my potential customer lost interest due to the cost and time involved in creating a custom chip to suit the application"

Wouldn't AKIDA 1500 now satisfy that requirement?

The Company has now expanded its market introduction strategy and may be more open to such an opportunity.

I think we would have enough irons in the fire as it is, they are just not hot enough yet..

No stone, should be left unturned though, if it can help accelerate market adoption.
Unfortunately, like many such scenarios, the opportunity was lost at the time BRN decided not to engage. The client found an alternative system which, although not as efficient and elegant as Akida, could get the job done... in a fashion. I suspect the availability of other "good enough" alternatives is one of the reasons for the delay in uptake of Akida.
 
Reactions: 8 users
Completely missing the point DB. I never said I was in a dire financial predicament..

I am not annoyed that I am losing money; I am annoyed at the complete lack of traction and any visibility of our CEO...

And, yes I understand that not everyone is as fortunate in terms of their pay, and trust I work for it, was lucky to be in the right place and it absolutely did not come overnight.
There is a heap of traction happening, it's just not showing on the books yet.

I think anyone that can't see that has got to be blind or something, man..
And I do understand the frustration of financial advancement not happening as quickly as we would like.

But it is what it is.

I'm sure Sean and all the BrainChip team are working hard, in their various areas of expertise, to make BrainChip a global success story.
 
Reactions: 26 users
There is a heap of traction happening, it's just not showing on the books yet.

I think anyone that can't see that has got to be blind or something, man..
And I do understand the frustration of financial advancement not happening as quickly as we would like.

But it is what it is.

I'm sure Sean and all the BrainChip team are working hard, in their various areas of expertise, to make BrainChip a global success story.
I hope so too... but in the 2 years since Sean joined we have not seen any real tangible results, which I find deeply concerning. I truly hope this is the bottom and all those announcements and comments about how great Akida is start to show in said financials.
 
Reactions: 12 users

jk6199

Regular
4C out, I have more lumps in my gravy and custard :cry:.

Pay day, just put another order in. Still an Akidaholic.

Good luck, we are all in the same boat per the SP.
 
Reactions: 14 users
I hope so too... but in the 2 years since Sean joined we have not seen any real tangible results, which I find deeply concerning. I truly hope this is the bottom and all those announcements and comments about how great Akida is start to show in said financials.
It's a bit like growing truffles FL (which I'm sure you're familiar with 😛)..

The time put into the groundwork and preparation, takes more time than you would think, but the end results are well worth it (apparently 🤔..).
 
Reactions: 6 users