BRN Discussion Ongoing

Townyj

Ermahgerd


Some big names are going to be at this event.

*EXPO MAP and PROGRAM GUIDE BELOW*




 
  • Like
  • Fire
  • Love
Reactions: 31 users

Tothemoon24

Top 20

TinyML computer vision is turning into reality with microNPUs (µNPUs)

June 14, 2023 | Elad Baram
Ubiquitous ML-based vision processing at the edge is advancing as hardware costs decrease, computation capability increases significantly, and new methodologies make it easier to train and deploy models. This leads to fewer barriers to adoption and increased use of computer vision AI at the edge.


Computer vision (CV) technology today is at an inflection point, with major trends converging to enable what has been a cloud technology to become ubiquitous in tiny edge AI devices. Technology advancements are enabling this cloud-centric AI technology to extend to the edge, and new developments will make AI vision at the edge pervasive.
There are three major technological trends enabling this evolution. New, lean neural network algorithms fit the memory space and compute power of tiny devices. New silicon architectures are offering orders of magnitude more efficiency for neural network processing than conventional microcontrollers (MCUs). And AI frameworks for smaller microprocessors are maturing, reducing barriers to developing tiny machine learning (ML) implementations at the edge (tinyML).
As all these elements come together, tiny processors at milliwatt scale can have powerful neural processing units that execute extremely efficient convolutional neural networks (CNNs)—the ML architecture most common for vision processing—leveraging a mature and easy-to-use development tool chain. This will enable exciting new use cases across just about every aspect of our lives.

The promise of CV at the edge

Digital image processing—as it used to be called—is used for applications ranging from semiconductor manufacturing and inspection to advanced driver assistance systems (ADAS) features such as lane-departure warning and blind-spot detection, to image beautification and manipulation on mobile devices. And looking ahead, CV technology at the edge is enabling the next level of human machine interfaces (HMIs).

HMIs have evolved significantly in the last decade. On top of traditional interfaces like the keyboard and mouse, we now have touch displays, fingerprint readers, facial recognition systems, and voice command capabilities. While clearly improving the user experience, these methods have one other attribute in common—they all react to user actions. The next level of HMI will be devices that understand users and their environment via contextual awareness.


Context-aware devices sense not only their users, but also the environment in which they are operating, all in order to make better decisions toward more useful automated interactions. For example, a laptop visually senses when a user is attentive and can adapt its behavior and power policy accordingly. This is already being enabled by Synaptics’ Emza Visual Sense technology, which OEMs can use to optimize power by adaptively dimming the display when a user is not watching it, reducing display energy consumption (figure 1). By tracking on-lookers’ eyeballs (on-looker detect), the technology can also enhance security by alerting the user and hiding the screen content until the coast is clear.
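To make the idea concrete, here is a toy sketch of such a presence-based power policy. The `is_attentive` callable stands in for a hypothetical mW-scale attention detector; the actual Emza Visual Sense implementation is not public, so this is purely illustrative.

```python
import time
from typing import Callable

def adaptive_dimming_loop(
    is_attentive: Callable[[], bool],       # hypothetical tinyML attention detector
    set_backlight: Callable[[float], None],
    full: float = 1.0,
    dimmed: float = 0.2,
    grace_s: float = 10.0,
) -> None:
    """Dim the display when no attentive on-looker has been seen for grace_s seconds."""
    last_seen = time.monotonic()
    while True:
        if is_attentive():
            last_seen = time.monotonic()
            set_backlight(full)             # someone is watching: full brightness
        elif time.monotonic() - last_seen > grace_s:
            set_backlight(dimmed)           # nobody watching: save display power
        time.sleep(1.0)                     # poll at 1 Hz; the detector itself runs at mW scale
```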

There are also endless use cases for visual sensing in industrial fields, ranging from object detection for safety regulation (e.g., restricted zones, safe passages, protective gear enforcement) up to anomaly detection for manufacturing process control. In agritech, crop inspections and status and quality monitoring enabled by CV technologies are all critical.

Whether it’s in laptops, consumer electronics, smart building sensors or industrial environments, this ambient computing capability is enabled when tiny and affordable microprocessors, tiny neural networks, and optimized AI frameworks make devices more intelligent and power efficient.

Neural-network vision processing evolves

2012 marked the turning point when CV started to shift from heuristic CV methods to deep convolutional neural networks (DCNN), with the publication of AlexNet by Alex Krizhevsky and his colleagues. There was no turning back after the DCNN won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) that year.

Since then, teams across the globe have continued to seek higher detection performance, but without much concern about the efficiency of the underlying hardware. So CNNs continued to be data- and compute-hungry. This focus on performance was fine for applications running in the cloud infrastructure.

In 2015, ResNet152 was introduced. It had 60 million parameters, required more than 11 gigaflops for a single inference operation and demonstrated 94% top-5 accuracy for the ImageNet data set. This continued to push the performance and accuracy of CNNs. But it wasn’t until 2017, with the publication of MobileNets by a group of researchers from Google, that we saw a push toward efficiency.

MobileNets—aimed at smartphones—was significantly lighter than existing neural network (NN) architectures at that time. MobileNetV2, as an example, had 3.5 million parameters and required 336 Mflops. This drastic reduction was achieved initially through hard labor—manually identifying layers in the deep-learning network that did not add much to accuracy. Later, automated architecture-search tools allowed further improvement in the number and organization of layers. Roughly 20x “lighter” than ResNet152, both in memory and computational load, MobileNetV2 demonstrated top-5 accuracy of 90%. A new set of mobile-friendly applications could now use AI.
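For a concrete sense of where that reduction comes from, here is a minimal Keras sketch (shapes and names are illustrative only) contrasting a standard 3x3 convolution with the depthwise separable block that MobileNets are built around: a 3x3 depthwise convolution followed by a 1x1 pointwise convolution.

```python
import tensorflow as tf
from tensorflow.keras import layers

def standard_conv_block(x, filters):
    # Standard 3x3 convolution: cost ~ 3*3*C_in*C_out per output pixel.
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def depthwise_separable_block(x, filters):
    # MobileNet-style block: a 3x3 depthwise conv (one filter per input channel)
    # followed by a 1x1 pointwise conv. Cost ~ 3*3*C_in + C_in*C_out per pixel,
    # roughly 8x cheaper than the standard block for typical channel counts.
    x = layers.DepthwiseConv2D(3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)  # pointwise
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

# Quick parameter comparison on a 96x96x32 feature map.
inp = layers.Input((96, 96, 32))
print(tf.keras.Model(inp, standard_conv_block(inp, 64)).count_params())
print(tf.keras.Model(inp, depthwise_separable_block(inp, 64)).count_params())
```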

And hardware evolves

With smaller NNs and with a clear understanding of the workloads involved, developers could now design optimized silicon for tiny AI. This led to the micro neural processing unit (micro NPU). By tightly managing memory organization and data flow, while exploiting massive parallelism, these small, dedicated cores can execute NN inference 10x or 100x faster than the unaided CPU in a typical MCU. An example is the Arm Ethos U55 micro NPU.

Let’s look at a specific example of the impact of microNPUs (µNPUs). One of the fundamental tasks in CV is object detection. Object detection in essence requires two tasks: localization, which determines where an object is located within the image, and classification, which identifies the detected object (figure 2).

Emza has implemented a face detection model on an Ethos U55 µNPU, training an object detection and classification model that is a lightweight version of the single shot detector, optimized by Synaptics for detecting just the class of faces. The results astonished us with model execution times of less than 5 milliseconds: this is comparable to the execution speed on a powerful smartphone application processor, like the Snapdragon 845. When executing this same model on the Raspberry Pi 3B using four Cortex A53 cores, the execution time is six times longer.
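Emza's exact model is not public, but the sketch below shows what timing a quantized SSD-style detector looks like with the standard TFLite Python interpreter (on a Raspberry Pi, the tflite_runtime package exposes the same API). The model file name is a placeholder.

```python
import time
import numpy as np
import tensorflow as tf  # on a Pi, tflite_runtime.interpreter.Interpreter works the same way

# Hypothetical file name; any SSD-style quantized .tflite detector fits this pattern.
interpreter = tf.lite.Interpreter(model_path="face_detector_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

# Dummy frame with the model's expected shape and dtype (e.g. 1x96x96x3 int8).
frame = np.zeros(inp["shape"], dtype=inp["dtype"])

# Time a batch of inferences (localization + classification in one pass).
runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
elapsed_ms = (time.perf_counter() - start) / runs * 1e3
print(f"avg inference: {elapsed_ms:.1f} ms")

# Typical SSD post-processed outputs: boxes [1,N,4], classes, scores, count.
for o in outs:
    print(o["name"], interpreter.get_tensor(o["index"]).shape)
```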

AI frameworks & democratization

Widespread adoption of any technology as complex as ML requires good development tools. TensorFlow Lite for Microcontrollers (TFLM) by Google is a framework designed for easier training and deployment of AI for tinyML. For a subset of the operators covered by the full TensorFlow, TFLM emits microprocessor C code for an interpreter and a model to run on a µNPU. The PyTorch Mobile framework and the Glow compiler from Meta are also targeting this area. In addition, there are today quite a few AI automation platforms (known as AutoML) that can automate some aspects of AI deployment for tiny targets. Examples are Edge Impulse, Deeplite, Qeexo, and SensiML.
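As a rough illustration of that flow, the following sketch converts a Keras model into a fully int8-quantized .tflite flatbuffer of the kind TFLM executes; the random representative-dataset generator is a stand-in for real calibration samples, and deployment details vary by target.

```python
import numpy as np
import tensorflow as tf

def convert_for_tflm(keras_model: tf.keras.Model, out_path: str = "model_int8.tflite") -> str:
    """Convert a Keras model to a fully int8-quantized TFLite flatbuffer for TFLM."""
    def representative_data():
        # Stand-in calibration samples; use real sensor data in practice.
        for _ in range(100):
            sample = np.random.rand(*keras_model.input_shape[1:]).astype(np.float32)
            yield [sample[np.newaxis]]

    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    # Restrict to int8 ops so the model maps onto TFLM / µNPU kernels.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    tflite_model = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_model)
    # On device this flatbuffer is usually embedded as a C array,
    # e.g. `xxd -i model_int8.tflite > model_data.cc`.
    return out_path
```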

But to enable execution on specific hardware and µNPUs, compilers and tool chains must be modified. Arm has developed the Vela compiler that optimizes CNN model execution for the U55 µNPU. The Vela compiler removes the complexities of a system that contains both a CPU and a µNPU by automatically splitting the model execution task between them.
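Driving Vela from Python keeps the example consistent with the ones above. This is a minimal sketch, assuming the commonly documented CLI flags and an Ethos-U55 configured with 128 MACs; check the Vela documentation for your exact part and flag set.

```python
import subprocess

def compile_with_vela(tflite_path: str, out_dir: str = "vela_out") -> None:
    """Run the Vela compiler (pip install ethos-u-vela) on a quantized .tflite model.

    Vela splits the graph between the Ethos-U55 and the host CPU and writes an
    optimized *_vela.tflite into out_dir. Flags below reflect the commonly
    documented CLI; adjust --accelerator-config to your NPU's MAC configuration.
    """
    subprocess.run(
        [
            "vela",
            tflite_path,
            "--accelerator-config", "ethos-u55-128",  # 32/64/128/256 MAC variants exist
            "--output-dir", out_dir,
        ],
        check=True,
    )

# Example (paths are illustrative):
# compile_with_vela("model_int8.tflite")
```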

More broadly, Apache TVM is an open-source, end-to-end ML compiler framework for CPUs, GPUs, NPUs and accelerators, and its microTVM flavor targets microcontrollers with the vision of running any AI model on any hardware. This evolution of AI frameworks, AutoML platforms, and compilers makes it easier for developers to leverage the new µNPUs for their specific needs.

Ubiquitous AI at the edge

The trend toward ubiquitous ML-based vision processing at the edge is clear. Hardware costs are decreasing, computation capability is increasing significantly, and new methodologies make it easier to train and deploy models. All of this is leading to fewer barriers to adoption, and to increased use of CV AI at the edge.

But even as we see increasingly ubiquitous tiny edge AI, there is still work to do. To make ambient computing a reality, we need to serve a long tail of use cases across many segments, which creates a scalability challenge. In consumer products, factories, agriculture, retail and other segments, each new task requires different algorithms and unique data sets for training. The R&D investment and skill set needed to solve each use case continue to be a major barrier today.

This gap can best be filled by AI companies up-levelling the software around their NPU offerings by developing rich sets of model examples—“model zoos”—and application reference code. In doing so, they can enable a wider range of applications for the long tail while ensuring design success by having the right algorithms optimized for the target hardware to solve specific business needs, within the defined cost, size, and power constraints.
 
  • Like
Reactions: 10 users

Sirod69

bavarian girl ;-)
Mercedes-Benz AG


Unlike in a classic Mercedes-Benz 300 SL from the 1950s, the user interface and digital experience inside the technology programme #VISIONEQXX catapult you directly into a highly responsive, intelligent and software-driven future:

The first ever completely seamless display in a Mercedes-Benz acts as a portal connecting the driver and occupants with the car and the world outside.

The intelligence in the VISION EQXX can mine for data based on the car’s route and even help you manage your music library and offer local suggestions.

In addition, there are numerous state-of-the-art driver assistance systems that provide a new level of drivability and comfort.

Several features and developments of the VISION EQXX are already being integrated into series production: take the #EQS and #EQESUV that accompany the VISION EQXX during the #MilleMigliaGreen, for example. Just like our halo car, they come with a system that helps you drive more efficiently: from energy flow to terrain, battery status and even the direction and intensity of the wind and sun – it curates all the available information and suggests the most efficient driving style.

Which task would you like to delegate to your car on a rally or a road trip?
 
  • Like
  • Fire
  • Wow
Reactions: 20 users

Sirod69

bavarian girl ;-)
  • Like
  • Love
  • Fire
Reactions: 49 users

IloveLamp

Top 20
  • Like
Reactions: 19 users

IloveLamp

Top 20
  • Like
  • Wow
Reactions: 16 users

equanimous

Norse clairvoyant shapeshifter goddess
 
  • Like
  • Love
Reactions: 24 users

Damo4

Regular
 
  • Like
  • Love
  • Fire
Reactions: 42 users

Townyj

Ermahgerd
  • Like
  • Thinking
Reactions: 10 users
Hi @manny100

It’s my understanding they were reference chips to prove they worked in silicon.

View attachment 38368
You could buy them to test via Development Kits etc as per below.

View attachment 38369


I imagine the same will happen with the Gen 2 variations, although there shouldn’t be as much doubt over them working, as essentially they are the same tech, just different sizes (large, medium and small) depending on use case.

I’m confident that Brainchip has listened to the customer and is providing them what they have asked for. As Sean said, they will be more easily consumed/used and be a better fit depending on the application.

Edit: in saying that, TENNs sounds like a huge technological advancement which raises the bar significantly, so I shouldn’t ignore that new feature!

:)
I just hope our customers give it Tenns out of Tenns.

I know bit early in morning 😆

SC
 
  • Haha
  • Like
  • Fire
Reactions: 26 users
  • Like
  • Fire
  • Love
Reactions: 17 users
This is a short six-minute interview with Julie Sweet of Accenture, talking about investing $3.8B into AI. BrainChip isn't mentioned, it's more about LLMs, but interesting all the same.

 
  • Like
  • Love
  • Fire
Reactions: 6 users

buena suerte :-)

BOB Bank of Brainchip

Media Alert: BrainChip Presents at CVPR 2023 Workshop

Business Wire
Thu, June 15, 2023 at 12:00 AM GMT+8


LAGUNA HILLS, Calif., June 14, 2023--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, will present at the CVPR 2023 Workshop on Event-Based Vision as part of the IEEE Conference on Computer Vision and Pattern Recognition June 18-22 at the Vancouver Convention Center.
The fourth international workshop on event-based vision is dedicated to event-based cameras, smart cameras and algorithms processing data from these sensors. BrainChip CMO Nandan Nayampally will present "Enabling Ultra-Low Power Edge Inference and On-Device Learning with Akida" as part of the workshop June 19 at 4 p.m. PDT. The session will feature the challenge, approach, delivery, and results of edge AI on-chip event-based processing and learning.

"CVPR is one of the top computer vision events and I am excited to share with attendees how BrainChip's fully digital, neuromorphic, event-based AI address low power vision and detection challenges at the Edge," said Nayampally. "I look forward to a spirited, mutually beneficial discussion with the community on how to utilize the advancements we’ve made with the Akida second generation processor."

Akida processors power next-generation edge AI in a range of industrial, home, automotive, and scientific environments. Akida’s fully digital, customizable, event-based neural processor and IP is ideal for advanced AI/ML devices such as intelligent sensors, medical devices, high-end video-object detection, and ADAS/autonomous systems. Akida’s neuromorphic architecture delivers high performance with extreme energy efficiency that enables partners to deliver AI solutions previously not possible on battery-operated or fan-less embedded, edge devices. Akida also has a unique ability to learn on-device in a secure fashion, without the need for cloud retraining.

Additional information about CVPR and registration for the event is available at https://na.eventscloud.com/ereg/index.php?eventid=722171&

About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida™, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Akida uniquely enables edge learning local to the chip, independent of the cloud, dramatically reducing latency while improving privacy and data security. Akida Neural processor IP, which can be integrated into SoCs on any process technology, has shown substantial benefits on today’s workloads and networks, and offers a platform for developers to create, tune and run their models using standard AI workflows like Tensorflow/Keras. In enabling effective edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products, as well as the planet. Explore the benefits of Essential AI at www.brainchip.com.

Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc
Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006
View source version on businesswire.com: https://www.businesswire.com/news/home/20230614052181/en/
Contacts
Media Contact:

Mark Smith
JPR Communications
818-398-1424
Investor Contact:
Tony Dawe
BrainChip
tdawe@brainchip.com
 
  • Like
  • Love
  • Fire
Reactions: 27 users

IloveLamp

Top 20
 
  • Like
  • Fire
Reactions: 6 users

Diogenese

Top 20
I hadn't previously noticed the change from "SNN" to "neuromorphic AI"

https://brainchip.com/brainchip-presents-at-cvpr-2023-workshop/

"BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP"

Akida 2 is SNN+TeNN/Vision Transformer ... and it captures the AI zeitgeist.
 
  • Like
  • Fire
  • Love
Reactions: 31 users
  • Like
  • Fire
  • Love
Reactions: 14 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

Watching the Pre Market (before open) this morning: interesting, a lot more than usual above & below market orders.
Almost got the feeling the bots ... Algos, were trying to do a small On Market transaction without using all the other trade executions available to them.

Sly F#$%@*S

Got that feeling again we may see some decent UPWARD MOVEMENT today.

Time will tell...... Gamble Responsibly ...... or simply HOLD & make them suffer.

Regards,
Esq.
 
  • Like
  • Fire
  • Love
Reactions: 36 users

Mea culpa

prəmɪskjuəs
Almost got the feeling the bots ... Algos, were trying to do a small On Market transaction without using all the other trade executions available to them.

Sly F#$%@*S


You've put it far more precisely than my thoughts Esq. I just thought there's something 'kn odd going on here.
 
  • Like
  • Fire
Reactions: 9 users

Chris B

Regular
I know we simply can't be in anything on the market without IP contracts being signed, with the exception of the two signatures we already have with Renesas and MegaChips. And I know we want to sell IP exclusively... but the question I would like answered is: is it possible we are selling chips to some companies that might want to develop them into products with their own people??? Maybe no need for an ASX announcement for that??? Maybe we need to wait and see the revenue???
Just thought I would put it out there ;-)
 
  • Like
  • Thinking
Reactions: 2 users

chapman89

Founding Member
BrainChip's first known solution offering, through MegaChips.



 
  • Like
  • Fire
  • Love
Reactions: 138 users
Top Bottom