BRN Discussion Ongoing

wilzy123

Founding Member
 
Reactions: 54 users

charles2

Regular
Reactions: 17 users

Glen

Regular
NVISO and Panasonic will have their first mass consumer product, the Nicobo robot, for sale in Japan in May.
 
Reactions: 13 users

stockduck

Regular
Dio, here is the link to the paper 😶‍🌫️
https://arxiv.org/pdf/2302.13939
Hallelujah... what does this mean?

"...4.1 Datasets

We test two variants of the 45 million parameter model; one where T = 1024 and another where T = 3,072. We used the Enwik8 dataset to conduct both training and testing. The findings of this experiment are presented in Table 1. To explore the efficiency of our 125 million parameter scale, we trained our model using the BookCorpus [47] dataset, and text generated samples are provided in Fig. 3. Our most extensive model with 260 million parameters was trained using the OpenWebText2 [17] dataset. Text samples of this experiment are shown in Fig. 2. At present, we are conducting additional experiments on the larger models and will update this preprint once completed. All experiments were conducted on four NVIDIA V100 graphic cards. For the models of 45M, 120M and 260M, we trained them for 12, 24 and 48 hours respectively.
...."

Can someone help here? That doesn't mean the NVIDIA V100 graphics cards have SNN IP in them, right? Sorry, I'm not a "professional" in this area. :unsure:
 
Reactions: 5 users

Tothemoon24

Top 20

Impressive list!


Come Find Edge Impulse at Embedded World 2023​

EMBEDDED DEVICES
Mike Senese
8 March 2023

Each year, all the big names in embedded computing gather at Embedded World in Nuremberg, Germany to show off their latest innovations and developments, to meet with partners and customers, and to learn about new advancements in their fields. This year, Embedded World is happening from March 14–16, and Edge Impulse is excited to once again be participating with a range of activities.


First held in 2003, Embedded World is known as possibly the largest show in the world for the embedded industry. The exhibition focuses on products and services related to embedded systems, including hardware, software, and tools for developing and testing. The conference portion of the event features presentations and workshops from industry experts on a variety of topics, such as security, connectivity, and real-time operating systems. There’s a lot there for everyone.
With our machine learning toolkit that is ideally optimized for embedded applications, Edge Impulse and Embedded World are a perfect match. Here are some of the different places you will be able to find us and what we’ll be getting up to in each spot.

Edge Impulse Booth
Hall 2, Booth 2-238
This year we will be hosting our own space in the TinyML area of Embedded World. Our booth will have a demo from BrainChip, showing off our FOMO visual object-detection algorithm running on the BrainChip Akida AKD1000, featuring their neuromorphic IP.
Also at the booth: Meet BrickML, the first product based on the Edge Impulse “Industrial Monitoring” reference design, focused on providing machine learning processing for industrial/machine monitoring applications. Built in collaboration with Reloc and Zalmotek, BrickML can be used to track numerous aspects of industrial machinery performance via its multitude of embedded sensors. We’ll be showing it in a motor-monitoring demonstration. BrickML is fully integrated into the Edge Impulse platform, which makes everything from data logging from the device to ML inference model deployment onto the device a real snap. (Our Industrial Monitoring reference design includes hardware and software source code to rapidly design your own product, available for Edge Impulse enterprise customers.)
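As a rough illustration of the kind of on-device processing a motor-monitoring product like this performs (a generic sketch only, not BrickML's or Edge Impulse's actual code; the sample rate, window size, band count, and threshold below are arbitrary assumptions), a vibration anomaly check might look like this:

# Illustrative vibration-anomaly sketch only; not BrickML or Edge Impulse code.
# Assumes a 1 kHz accelerometer stream; all constants are arbitrary.
import numpy as np

SAMPLE_RATE_HZ = 1000
WINDOW = 1024  # samples per analysis window

def band_energies(window, n_bands=8):
    """Split the FFT magnitude spectrum into coarse bands and sum each band."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.sum() for b in bands])

def fit_baseline(healthy_windows):
    """Learn mean/std of band energies from known-good vibration data."""
    feats = np.stack([band_energies(w) for w in healthy_windows])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def is_anomalous(window, mean, std, z_thresh=4.0):
    """Flag a window whose band energies deviate strongly from the baseline."""
    z = np.abs((band_energies(window) - mean) / std)
    return bool(z.max() > z_thresh)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(WINDOW) / SAMPLE_RATE_HZ
    healthy = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(WINDOW)
               for _ in range(20)]
    mean, std = fit_baseline(healthy)
    faulty = np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 180 * t)
    print("healthy window anomalous?", is_anomalous(healthy[0], mean, std))  # expected: False
    print("faulty window anomalous?", is_anomalous(faulty, mean, std))       # expected: True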

We’ll additionally be showing off devices from companies we work with, including Oura, the health-monitoring wearable that is discreetly embedded in a ring you wear on your finger, and NOWATCH, a wrist-based wearable that tracks your stress levels and mental well-being.

Texas Instruments
Hall 3A, Booth 3A-215
In the TI booth you’ll find our Edge Impulse/Texas Instruments demo. This will show TI’s YOLOX-nano-lite model. The model was trained on a Kaggle dataset to detect weeds and crops. The dataset was loaded to Edge Impulse and the YOLOX model was trained via the “Bring Your Own Model” extensions to Edge Impulse Studio. The trained model was then deployed to run on the TI Deep Learning framework.

Advantech
Hall 3, Booth 3-339
Scailable will be demonstrating their Edge Impulse FOMO-driven object detection implementation at the Advantech booth. It uses the Advantech ICAM camera to distinguish small washers, screws, and other items on several different trays. They’ll be demonstrating different trays and different models for the demo, and showing how to train new models at the booth.

AVSystem
Demo at the Zephyr booth: Hall 4, Booth 4-170
AVSystem’s Coiote is an LwM2M-based IoT device-management platform, providing support for constrained IoT devices at scale. It integrates with a tinyML-based vibration sensor and can detect and report anomalies in vibrations. This demo is based on the Nordic Thingy:91, which runs the Zephyr OS, and uses the Edge Impulse platform.

Arduino
Hall 2, Booth 2-238
Check out the “vineyard pest monitoring” vision demo, running on the Arduino Nicla Vision and MKR WAN 1310, built by Zalmotek and using Edge Impulse for machine learning.

Alif
Hall 4, Booth 4-544
Alif will also be hosting an Edge Impulse-powered demo at the show. It is viewable in their private conference room by appointment; contact kirtana@edgeimpulse.com to set up a meeting.

Synaptics panel, featuring Edge Impulse
Tuesday, 3/14 @ 3PM (local time)
Hall 1, Booth 500
Edge Impulse co-founder/CEO Zach Shelby will be a participant in the “Rapid Development of AI Applications on the Katana SoC” panel, brought to you by one of our partner companies, Synaptics, and moderated by Rich Nass from Embedded Computing Design.
Come find us!
In addition to these locations and scheduled events, we’ll have numerous staff members from Edge Impulse on site and ready to answer any questions you may have about our tools and use cases. Be sure to stop by to say hi.
(And if you can’t make it in person, you can always drop us a note: hello@edgeimpulse.com)
 
Reactions: 27 users

cosors

👀
Quoting stockduck's question above about the NVIDIA V100 graphics cards:
You are absolutely right. It's cold as shit here, it's snowing, I'm out with just my phone, and I was lazy, so I deleted my post. Still interesting or not?
Sorry for that, and thanks for reading. With frozen fingers and feet it looked too complicated for me.
 
Reactions: 6 users

Deleted member 118

Guest
Can someone please help me and get up early enough to watch the Cerence presentation? It’s on at about 5:30 am Australian Eastern Standard Time, unless I’m mistaken. Any takers? TIA. 🥰
Rise and shine

 
Reactions: 11 users

chapman89

Founding Member
From the EE news journal posted earlier-


“Even though it’s been around for only one year, the Akida 1.0 platform has enjoyed tremendous success, having been used by the chaps and chapesses at a major automobile manufacturer to demonstrate a next-generation human interaction in-cabin experience in one of their concept cars; also by the folks at NASA, who are on a mission to incorporate neuromorphic learning into their space programs; also by a major microcontroller manufacturer, which is scheduled to tape-out an MCU augmented by Akida neuromorphic technology in the December 2023 timeframe. And this excludes all of the secret squirrel projects that we are not allowed to talk about.”
 
Reactions: 79 users

charles2

Regular

Quoting the Edge Impulse Embedded World 2023 post above, to emphasize:

Our booth will have a demo from BrainChip, showing off our FOMO visual object-detection algorithm running on the BrainChip Akida AKD1000, featuring their neuromorphic IP.
 
Reactions: 22 users

Tothemoon24

Top 20
The mighty chip is getting some much-deserved media attention.




BrainChip Unveils Its Second-Generation Akida Platform, Now Boasting Vision Transformer Acceleration​

Brainchip's Akida 2.0 gains some impressive new features, along with a three-tier launch strategy scaling up to 128 nodes and 50 TOPS.​







BrainChip has announced the launch of its second-generation Akida processor family, designed for high-efficiency artificial intelligence at the edge, adding Temporal Event-Based Neural Net (TENN) support and optional vision transformer acceleration on top of the company's existing spiking neural network capabilities.
"Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device," claims BrainChip's chief executive officer Sean Hehir of the next-generation design. "By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience."
BrainChip has announced Akida 2.0, its second-generation edge-AI accelerator — now offering TENN and vision transformer support. (📷: BrainChip)

BrainChip began offering development kits for its first-generation Akida AKD1000 neural network processors in October 2021, building two kits around the user's choice of a Shuttle x86 PC or a Raspberry Pi. Ease of use took a leap earlier this year when the company announced the fruit of its partnership with Edge Impulse to bring Akida support to the latter's machine learning platform — offering what Edge Impulse co-founder and chief executive officer Zach Shelby described as a "powerful and easy-to-use solution for building and deploying machine learning models on the edge."
The promise of the Akida platform, which was developed based on the operation of the human brain, is high performance at a far greater efficiency than its rivals — when, at least, the problem to be solved can be defined as a spiking neural network. It's this efficiency which has seen BrainChip primarily position its Akida hardware for use at the edge, accelerating on-device machine learning in power-sensitive applications.
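For anyone wondering what "defined as a spiking neural network" means in practice: a spiking network passes information as discrete events (spikes) rather than continuous activations. Below is a minimal leaky integrate-and-fire (LIF) neuron sketch, purely illustrative and in no way BrainChip's actual Akida implementation; all constants are arbitrary assumptions.

# Minimal leaky integrate-and-fire (LIF) neuron; illustrative only,
# not BrainChip's Akida implementation. All constants are arbitrary.
import numpy as np

def lif_neuron(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; returns membrane potentials and the spike train."""
    v = 0.0
    potentials, spikes = [], []
    for i_t in input_current:
        v += (-v / tau) + i_t   # leak toward rest, then integrate the input
        fired = v >= v_thresh
        if fired:
            v = v_reset         # reset membrane potential after emitting a spike
        potentials.append(v)
        spikes.append(int(fired))
    return np.array(potentials), np.array(spikes)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    current = rng.uniform(0.0, 0.12, size=200)  # noisy constant drive
    _, spike_train = lif_neuron(current)
    print("spikes emitted:", int(spike_train.sum()))

Information is then carried by when and how often spikes occur, which is why event-driven hardware can stay idle (and save power) whenever nothing is spiking.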
The company has confirmed plans to launch Akida 2.0 in three tiers, topping out at the Akida-P family with up to 50 TOPS of compute. (📷: BrainChip)

The second-generation Akida platform brings with it high-efficiency eight-bit processing and support for Temporal Event-Based Neural Nets (TENNs), giving it the ability to consume raw real-time streaming data from sensors, including video sensors. This, the company claims, provides "radically simpler implementations" for tasks including video analytics, target tracking, audio classification, and even vital sign prediction in medical imaging analysis.
BrainChip’s Akida refresh also brings with it support for accelerating vision transformers, as an optional component that can be discarded if not required, primarily used for image classification, object detection, and semantic segmentation. Combined with Akida’s ability to process multiple layers at once, the company claims the new parts will allow for complete self-management and execution of even relatively complex networks like ResNet-50 — without the host device’s processor having to get involved at all.

The new features come alongside BrainChip's earlier promises of dramatic efficiency gains through the use of spiking neural networks. (📹: BrainChip)
The company has confirmed that it will be licensing the Akida IP in three product classes: Akida-E will focus on high energy efficiency with a view to being embedded alongside, or as close as possible, to sensors and offering up to 200 giga-operations per second (GOPS) across one to four nodes; Akida-S will be for integration into microcontroller units and systems-on-chip (SoCs), hitting up to 1 tera-operations per second (TOPS) across two to eight nodes; and Akida-P will target the mid- to high-end, and will be the only tier to offer the optional vision transformer acceleration, scaling between eight and 128 nodes with a total performance of up to 50 TOPS.
While the part launches to unnamed “early adopters” today, BrainChip isn’t quite ready to start selling them to the public — promising instead that second-generation Akida processors will be available in the third quarter of 2023 with as-yet unannounced pricing. More information is available on the BrainChip website.
 
Reactions: 39 users

Tothemoon24

Top 20
 
Reactions: 22 users

yogi

Regular
Why is Renesas missing from the partnership page on the BRN website?
 
Reactions: 2 users

cassip

Regular
Reactions: 15 users

cosors

👀
You are absolutely right. It's cold as shit here, it's snowing, I'm out with just my phone, and I was lazy, so I deleted my post. Still interesting or not?
Sorry for that, and thanks for reading. With frozen fingers and feet it looked too complicated for me.
Bravo, now I would like to be...
🥶🤣
 
Reactions: 6 users

BaconLover

Founding Member



Luckily Nicobo farts don't cause a climate crisis.
 
Reactions: 18 users

Learning

Learning to the Top 🕵‍♂️
Here is an interesting research blog post from Google Research on the Vision Transformer (ViT).


Transformers for Image Recognition at Scale
THURSDAY, DECEMBER 03, 2020
Posted by Neil Houlsby and Dirk Weissenborn, Research Scientists, Google Research

Extract:
As a first step in this direction, we present the Vision Transformer (ViT), a vision model based as closely as possible on the Transformer architecture originally designed for text-based tasks. ViT represents an input image as a sequence of image patches, similar to the sequence of word embeddings used when applying Transformers to text, and directly predicts class labels for the image. ViT demonstrates excellent performance when trained on sufficient data, outperforming a comparable state-of-the-art CNN with four times fewer computational resources. To foster additional research in this area, we have open-sourced both the code and models.


The Vision Transformer treats an input image as a sequence of patches, akin to a series of word embeddings generated by a natural language processing (NLP) Transformer.

The Vision Transformer
The original text Transformer takes as input a sequence of words, which it then uses for classification, translation, or other NLP tasks. For ViT, we make the fewest possible modifications to the Transformer design to make it operate directly on images instead of words, and observe how much about image structure the model can learn on its own.

ViT divides an image into a grid of square patches. Each patch is flattened into a single vector by concatenating the channels of all pixels in a patch and then linearly projecting it to the desired input dimension. Because Transformers are agnostic to the structure of the input elements we add learnable position embeddings to each patch, which allow the model to learn about the structure of the images. A priori, ViT does not know about the relative location of patches in the image, or even that the image has a 2D structure — it must learn such relevant information from the training data and encode structural information in the position embeddings.

Full blog here.
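To make the patch-embedding step described in the extract concrete, here is a toy NumPy sketch; the image size, patch size, and embedding dimension are assumptions loosely based on ViT-Base, and a real ViT learns the projection matrix and position embeddings during training rather than drawing them at random:

# Toy ViT-style patch embedding; illustrative only. A real ViT learns the
# projection and position embeddings during training.
import numpy as np

IMG, PATCH, CHANNELS, DIM = 224, 16, 3, 768  # assumed sizes, roughly ViT-Base-like

def patchify(image, patch=PATCH):
    """Split an (H, W, C) image into flattened (N, patch*patch*C) patches."""
    h, w, c = image.shape
    grid = image.reshape(h // patch, patch, w // patch, patch, c)
    grid = grid.transpose(0, 2, 1, 3, 4)        # (gh, gw, patch, patch, c)
    return grid.reshape(-1, patch * patch * c)  # one row per patch

def embed_patches(image, rng=np.random.default_rng(0)):
    """Linearly project flattened patches and add position embeddings."""
    patches = patchify(image)                   # (196, 768) for a 224x224x3 image
    w_proj = rng.standard_normal((patches.shape[1], DIM)) * 0.02
    pos_emb = rng.standard_normal((patches.shape[0], DIM)) * 0.02
    return patches @ w_proj + pos_emb           # token sequence fed to the Transformer

if __name__ == "__main__":
    img = np.random.default_rng(1).random((IMG, IMG, CHANNELS))
    tokens = embed_patches(img)
    print(tokens.shape)  # (196, 768): 14x14 patches, each a 768-d token

From there the token sequence is processed by a standard Transformer encoder, exactly as a sequence of word embeddings would be.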

Learning 🏖
 
Reactions: 20 users

IloveLamp

Top 20
(LinkedIn screenshot, 8 March 2023)


I assume the 22 is a typo and is meant to be 2023.
 
Reactions: 12 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Reactions: 29 users

Townyj

Ermahgerd
Avnet???


Renesas and Syntiant have been working together for a bit now. Avnet have just slapped something together to compete in the market.
 
Reactions: 7 users