BRN Discussion Ongoing

Adam

Regular
Hi All

A tiny detail that some may miss in this EDGX paper is that one of the authors is David Steenari of ESA, The Netherlands.

ESA for those that might not know is the European Space Agency.

This is significant lest there be any doubt circulated as to whether the benefits of using AKD1000 have filtered through from EDGX to those in charge at ESA.

My opinion only DYOR
Fact Finder

David Steenari

On-Board Payload Data Processing Engineer at European Space Agency - ESA | Luleå University of Technology
Noordwijk-Binnen, Zuid-Holland, Netherlands | More than 500 connections

Info

Working at ESA on on-board data handling; high-performance on-board processing; on-board AI/ML; modular data handling systems and architectures. Lead organisation of OBDP2021 and OBDP2019.

Regards
Fact Finder
 

TECH

Regular
Peter and Anil are my personal heroes (has Tech finally lost it?) no, I haven't.
They still hold approximately 243 Million shares between them.
I respect them both, they both are extremely disciplined, hardworking men who have given and still give 100% to Brainchip...without their passion and drive over many years we would never have been in this position today.
Neuromorphic Computing followed by Quantum Computing is where humanity is heading.
Can't you feel what I'm feeling...we are so far ahead of the mob that it's just a matter of time now...not years...but months..all the press headlines point to us...NEUROMORPHIC TECHNOLOGY...are we here by accident..no..that would be an insult to Peter...243 Million shares says that I'm 100% correct !!
Love our company ❤️ Tech.
 
This one wins u a $15 drink of yr choice when i see u. I will write it in my diary

Jchandel

Regular
Hi Cyw
I think first you would need to explain why I need to change your perception of Brainchip as I thought TSEx was for mature individuals capable of making their own investment decisions.

My opinion only DYOR
Fact Finder
And that was in 40 words to be exact 😂
 

Kachoo

Regular
I'm interested to hear peoples thoughts on what they believe the $780,000 of receipts from customers was made up of.

Obviously more than just engineering fees. Is it a licensing fee or royalties?

I noticed in WBT's 4C today they specifically advised the $457K receipt from customers was from a licensing fee. Are we allowed to ask the company whether it came from royalties or a license fee, or is that against ASX rules to reveal that information?
Does it matter where the money came from? I mean, we were told that any revenue from the USA DoD would just be paid in; there would not be any formal announcement.

Could it be:
USA DoD, DoE, NASA,
MegaChips,
Renesas,
Valeo,
or some other company we do not know of?

Let's be happy that we are seeing the revenue. If there are a lot of NDAs we won't know.

Let's hope we have similar revenues next 4C, or more!
 

Boab

I wish I could paint like Vincent
Balance of the paper on Wallet protection:

III. WALLET SNATCHING THROUGH EDGE IMPULSE TECHNOLOGY

This study aimed to investigate practical countermeasures to wallet-snatching incidents in public places. To achieve this, a dataset was collected, annotated, and submitted to the Edge Impulse platform. The model was trained to recognize wallet theft instances, with an impressive 95% accuracy rate. Despite challenges, such as limited data, the researchers used cutting-edge methods and tactics to enhance the training process and improve performance. They considered camera angles, lighting conditions, and the pace of the grab to ensure accurate prediction. The research has immense potential for improving public safety and reducing theft incidences, paving the way for future security protocols.
A. Edge Impulse Technology

Fig. 2. Flowchart of Edge Impulse.

Edge Impulse is a cutting-edge machine learning platform for developing and deploying intelligent applications on edge devices [1], [32]. It provides developers with an easy-to-use interface and a complete range of tools for collecting, processing, and analyzing data in order to create machine learning models. Edge Impulse enables machine learning at the edge, allowing devices to make real-time choices without requiring ongoing access to the cloud [3]. The platform supports a variety of edge devices, such as microcontrollers, development boards, and sensors, allowing developers to harness the potential of machine learning in resource-constrained contexts. Developers may use Edge Impulse to train and deploy models for a variety of applications such as predictive maintenance, anomaly detection, motion identification, and more. The platform also allows for the training and optimization of machine learning models utilizing common techniques such as neural networks, decision trees, and support vector machines. It provides an easy-to-use interface for configuring model parameters, evaluating model performance, and optimizing models for deployment on edge devices.
B. Spiking Neural Networks (SNNs)

SNNs are artificial neural networks that are inspired by biological neural networks found in the brain. SNNs function with discrete-time, event-driven processing, as opposed to standard artificial neural networks, which are based on continuous-valued activations and employ backpropagation for learning.

Fig. 3. Architecture of a Spiking Neural Network (SNN).

SNNs may provide various benefits over typical neural networks, particularly for tasks involving temporal information processing, event-based data, and bio-inspired computing. Low energy consumption, improved temporal precision, and possible appropriateness for neuromorphic hardware implementations are some of the potential benefits.

The key components of SNNs are as follows (see the sketch after this list):
• Spiking Neurons: These are the network’s fundamental building blocks. Based on activation criteria, they integrate input spikes and create output spikes.
• Spike Trains: Instead of continuous activations as in typical neural networks, information in SNNs is represented as discrete spike trains, which are time-varying sequences of spikes.
• Synaptic Weight Updates: To increase performance, SNNs may learn from data and modify their synaptic weights. Learning in SNNs is often characterized by Spike-Timing-Dependent Plasticity (STDP), in which the weight updates are determined by the relative timing of presynaptic and postsynaptic events.
• Spike-Based Learning Rules: Depending on the timing of the pulses, different learning rules are utilized to adjust synaptic strengths.
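To make these components concrete, here is a minimal NumPy sketch of a single leaky integrate-and-fire neuron driven by input spike trains and trained with pair-based STDP. All constants, shapes, and spike rates are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron with pair-based STDP.
# Everything here (constants, shapes) is illustrative, not from the paper.

rng = np.random.default_rng(0)
n_in, T = 8, 200                   # input synapses, timesteps
w = rng.uniform(0.1, 0.5, n_in)    # synaptic weights
v, v_thresh, leak = 0.0, 1.0, 0.9  # membrane potential, threshold, decay
pre_trace = np.zeros(n_in)         # decaying record of presynaptic spikes
post_trace = 0.0                   # decaying record of postsynaptic spikes
a_plus, a_minus, tau = 0.02, 0.021, 0.9

spikes_in = rng.random((T, n_in)) < 0.1   # Bernoulli input spike trains

for t in range(T):
    pre = spikes_in[t].astype(float)
    pre_trace = tau * pre_trace + pre
    post_trace *= tau
    # Depression: a pre-spike arriving after a recent post-spike weakens w.
    w -= a_minus * post_trace * pre
    v = leak * v + w @ pre                # integrate weighted input spikes
    if v >= v_thresh:                     # neuron fires
        v = 0.0
        post_trace += 1.0
        # Potentiation: pre-spikes that preceded this post-spike strengthen w.
        w += a_plus * pre_trace
    w = np.clip(w, 0.0, 1.0)              # keep weights bounded

print(w)
```

The traces implement the "relative timing" rule from the STDP bullet: each trace decays over time, so the size of the update reflects how recently the other side of the synapse fired.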
C. Akida FOMO (Faster Objects, More Objects)
The Akida FOMO (Faster Objects, More Objects) model runs on Akida, a neuromorphic hardware platform created by BrainChip that is inspired by the structure and function of the human brain. It provides real-time neural network inference on edge devices while reducing latency, increasing energy efficiency, and allowing for large-scale parallel computing operations. SNNs are used in the model to effectively handle temporal and spatial input, replicating the behavior of neurons. This enables operations like object recognition, gesture detection, and anomaly detection to be performed on edge devices, removing the need for cloud-based processing. This method is especially beneficial in low-latency and data-privacy scenarios where continuous data transmission to remote servers is not required.
The neural network model is a deep learning architecture for image processing tasks, consisting of 21 layers. It begins with an input layer representing images with shape (None, 96, 96, 3), i.e., 96×96 pixels and 3 color channels. The model then includes 4 Conv2D layers, 4 BatchNormalization layers, and 5 Rectified Linear Unit (ReLU) activation layers, which process the input data and learn complex patterns. The two SeparableConv2D layers enhance feature extraction capabilities. The model’s convolutional nature makes it suitable for image-related tasks like image classification and object detection. Each layer contributes to the overall complexity, capturing important features and patterns from the input images. The combination of deep learning and convolutional layers delivers high performance on image tasks and makes the model versatile for various computer vision applications. A sketch of this layer mix follows.
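Here is a minimal Keras sketch matching the layer mix described above (Conv2D/BatchNormalization/ReLU blocks followed by SeparableConv2D layers and a FOMO-style per-cell head). The paper does not publish its exact 21-layer topology, so the filter counts, strides, and class count are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hedged sketch of a FOMO-style backbone with the layer mix described in
# the paper. The exact topology is not given, so this is an approximation.

def conv_block(x, filters, stride=1):
    x = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

inputs = layers.Input(shape=(96, 96, 3))          # (None, 96, 96, 3)
x = conv_block(inputs, 16, stride=2)              # 96 -> 48
x = conv_block(x, 32, stride=2)                   # 48 -> 24
x = conv_block(x, 64, stride=2)                   # 24 -> 12
x = conv_block(x, 64)
x = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(x)
# FOMO-style head: one logit map per class, one cell per spatial position.
num_classes = 2                                   # e.g. background / theft
outputs = layers.SeparableConv2D(num_classes, 1, padding="same")(x)

model = tf.keras.Model(inputs, outputs)
model.summary()
```

Note the head is itself convolutional: instead of bounding boxes, the model emits a coarse grid of class scores, which is what keeps FOMO cheap enough for constrained hardware.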
Algorithm 1 Akida FOMO Model Inference
Require: Input data
Ensure: Inference result
 1: Load Akida FOMO model parameters
 2: Initialize input data
 3: Preprocess input data
 4: Convert input data to SNN format
 5: Initialize SNN state
 6: while not end of input data do
 7:     for each input spike do
 8:         Propagate spike through SNN
 9:         Update SNN state
10:     end for
11: end while
12: Perform output decoding on SNN state
13: return Inference result
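For readers who prefer running code, the following plain-Python rendering makes the event-driven control flow of Algorithm 1 concrete. It is not the Akida runtime; the rate-coding step (my reading of "Convert input data to SNN format") and the weight shapes are assumptions.

```python
import numpy as np

# Plain-Python rendering of Algorithm 1. This is NOT the Akida runtime API;
# it only makes the pseudocode's event-driven control flow concrete.

def preprocess(frame):
    """Resize/normalize stand-in: scale pixel values to [0, 1]."""
    return frame.astype(np.float32) / 255.0

def to_spikes(x, steps=8):
    """Rate-code an image into `steps` binary spike frames (assumption:
    step 4's 'Convert input data to SNN format' means rate coding)."""
    rng = np.random.default_rng(0)
    return rng.random((steps,) + x.shape) < x

def run_inference(frame, weights):
    x = preprocess(frame)                      # step 3
    spike_frames = to_spikes(x)                # step 4
    state = np.zeros(weights.shape[0])         # step 5: initialize SNN state
    for spikes in spike_frames:                # step 6: until end of input
        for idx in np.flatnonzero(spikes):     # step 7: each input spike
            state += weights[:, idx]           # steps 8-9: propagate, update
    return int(np.argmax(state))               # step 12: output decoding

frame = np.random.randint(0, 256, (96 * 96 * 3,), dtype=np.uint8)
weights = np.random.default_rng(1).normal(size=(2, frame.size))
print(run_inference(frame, weights))
```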
D. Data Collection

Fig. 4. Images of dataset.

The dataset for this research was collected from the internet. It includes videos of chain snatching, wallet snatching, and other forms of snatching. To create the dataset, this study gathered all of the essential videos. After creating the dataset, the researchers began analyzing the videos to identify common patterns and behaviors among the snatchers. We found that most snatchers targeted vulnerable individuals, such as the elderly or those walking alone at night. Additionally, we noticed that snatchers tended to operate in specific areas, such as busy marketplaces or near public transportation hubs. With this information, we were able to develop a more targeted approach to preventing these crimes from occurring [31]. Edge Impulse streamlines the machine learning workflow by offering a step-by-step method that comprises data collection, data preprocessing, model training, and model deployment. It provides a number of data intake methods, including direct sensor integration, data import, and interaction with third-party services. Edge Impulse’s capacity to undertake automatic data preparation is one of its most prominent characteristics. It provides a variety of signal-processing techniques and feature extraction strategies for transforming raw data into relevant features for model training. This streamlines the data preprocessing procedure and saves time for developers.
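The paper does not publish its preparation pipeline, so the snippet below is only a sketch of the kind of step described above: sampling frames from a snatching video and resizing them to the model's 96×96 input. The paths, filenames, and sampling rate are illustrative.

```python
import cv2  # pip install opencv-python

# Hedged sketch of video-to-dataset preparation: sample one frame per
# `every_n` frames and resize to the model input. Paths are placeholders.

def extract_frames(video_path, out_prefix, every_n=30, size=(96, 96)):
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break                           # end of video
        if idx % every_n == 0:              # keep a sparse sample of frames
            frame = cv2.resize(frame, size)
            cv2.imwrite(f"{out_prefix}_{saved:04d}.png", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

n = extract_frames("snatching_clip.mp4", "dataset/frame")
print(f"saved {n} frames")
```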
E. Data Preprocessing

See next post
This looks a lot more promising than last year's discussion around weapon detection with Lassen Peak.
Nice one FF
 
Peter and Anil are my personal heroes … 243 Million shares says that I'm 100% correct !! Love our company ❤️ Tech.
Ok Tech
 
This one wins u a $15 drink of yr choice when i see u. I will write it in my diary
And I’ve got the brain of a goldfish, so I hope you don’t forget



Tothemoon24

Top 20


Virtual reality is revolutionising the way we work and helping us move from design concept to full production much faster.

This is why we have developed our very own Virtual Reality Center (VRC) at the Mercedes-Benz Technology Center in Sindelfingen. And one of the most talked about facilities is the “Holodeck” room.

Measuring under 50 square metres, Holodeck helps our designers and engineers to interact with new prototypes like never before. For example, employees can take a seat inside an interior mock-up just as if it was in the room for real. The 1:1 scale model is a simplified representation of the real interior and, when users put on a pair of VR goggles, they immediately feel like they are sitting in a futuristic Mercedes.

As you can imagine, the VR goggles are incredibly high-tech. They are packed with sensors that monitor the movements of the user’s head and eyes. Special hand-tracking technology detects finger movements and hand gestures in real time and transfers them to the VR world. Our current goggles also use what is known as video-see-through technology. The cameras integrated into the glasses film the real surroundings and show them directly on the displays. This creates the impression that you can look through the glasses. The virtual image components generated by the VR software are then superimposed in the correct position. This is referred to as mixed reality (MR) in the field of view.

MR offers numerous benefits. In the early development phase, mixed reality enables users to assess the size and quality of individual components. They can use the tech to see if different objects fit into the car, such as a cardboard box in the boot. And with Holodeck’s haptic feedback, users can, for instance, feel what it’s like to bang into solid surfaces on the vehicle.

In the coming weeks, I will post more insights into how the VRC is enhancing the R&D processes at #MercedesBenz.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Could be a very productive event coming up at IFS.


Chances are pretty high IMO that Sam Altman already knows about us via Mercedes....
IMO obviously.
 

Diogenese

Top 20

Virtual reality is revolutionising the way we work … I will post more insights into how the VRC is enhancing the R&D processes at #MercedesBenz.
Lots of scope to utilize Akida in eye-tracking in the goggles alone. Then there's gestures, I guess voice also ...
 

Tothemoon24

Top 20
We can't imagine a better way to have kicked off 2024 than by announcing EDGX's first collaboration with the European Space Agency - ESA. Our project, titled 🧠 Onboard Neuromorphic Acceleration 🧠, partially funded by the ESA ARTES programme, marks a milestone in our quest to redefine satellite capabilities.

The entire team is stoked to work on this project, here's why:

🔍 Project Highlights
The project aims to define an onboard neuromorphic, brain-inspired Data Processing Unit (DPU) tailored for satellite communication constellations in Low Earth Orbit (LEO). The DPU will be suitable for small satellites (smallsats) weighing up to 500 kg, with a seven-year planned operational lifetime. Our DPU aims to enhance the capabilities of communication satellites, making them more adaptable, flexible, and intelligent.

The project encompasses three core activities:
1️⃣ Identification of Potential Neuromorphic Satcom Applications and Scenarios
2️⃣ Performance Benchmarking and Demonstration of Neuromorphic Algorithms vs Classical Algorithms
3️⃣ Defining the Product Architecture, User Requirements, and Technical Specifications

🎯 Customer-Centric Innovation
Our commitment to aligning technological advancements with our customers' needs and desires is central to the project. We focus on creating a solution that pushes the boundaries of technology in space and resonates with market demands. Hence, communication with industry stakeholders is vital in this project.

🌍 Shaping the Future
We envision a world where thousands of AI-powered satellites, inspired by the human brain, enable unparalleled real-time connectivity, rapid response to global crises, and direct access to vital earth observation data. Satellite infrastructure will provide enormous societal and economic value to man and machines here on earth. This project is our first step towards building that future.

🤝 A Key Partnership
Our collaboration with ESA symbolises a union of ambition and expertise, emphasising trust in our idea and vision. We're on a mission to transform the satellite communication industry, and together, we can build a product that can achieve that. We can't wait to share more of our journey with you. What would you like to see come out of the project?
 

Tothemoon24

Top 20

Eyes on the Road: Enabling Real-Time Traffic Camera Analysis with BrainChip Akida

EDGE AI
By Nick Bild, Jan 30, 2024

Smart camera systems designed to monitor and optimize traffic flow and enhance public safety are becoming more common components of modern urban infrastructure. These advanced systems leverage cutting-edge technologies, such as artificial intelligence and computer vision to analyze and respond to real-time traffic conditions. By deploying these smart camera systems at strategic locations throughout cities, municipalities can address a variety of challenges and significantly improve overall urban efficiency.

One of the primary issues smart camera systems address is traffic congestion. These systems can monitor traffic patterns, identify bottlenecks, and dynamically adjust traffic signal timings to optimize the flow of vehicles. By intelligently managing traffic signals based on the current demand and congestion levels, these systems can reduce delays, shorten travel times, and minimize fuel consumption, thereby contributing to a more sustainable and eco-friendly transportation system.

In addition to alleviating congestion, smart camera systems also play a crucial role in enhancing public safety. They can be equipped with features such as license plate recognition, facial recognition, and object detection to identify and respond to potential security threats or criminal activities. These capabilities enable law enforcement agencies to quickly and proactively address security concerns, providing a force multiplier for urban safety.

[Image: BrainChip’s Akida Development Kit hardware]
Despite the benefits that they can offer, the widespread adoption of smart traffic camera systems has been hindered by some nagging technical issues. In order to be effective, real-time processing of video streams is required, which means powerful edge computing hardware is needed on-site. Moreover, each system generally needs multiple views of the area which further taxes the onboard processing resources. Considering that many of these processing units are needed throughout a city, some just an intersection away from one another, problems of scale quickly emerge.

The Challenge: Real-Time Traffic Analysis​

Being quite familiar with the latest in edge computing hardware and current developments at Edge Impulse, engineer Naveen Kumar recently had an idea that could solve this problem. By leveraging BrainChip’s Akida Development Kit with a powerful AKD1000 neuromorphic processor, it is possible to efficiently analyze multiple video streams in real-time. Pairing this hardware with Edge Impulse’s ground-breaking FOMO object detection algorithm that allows complex computer vision applications to run on resource-constrained hardware platforms, Kumar reasoned that a scalable smart traffic camera system could be produced. The low cost, energy efficiency, and computational horsepower of this combination should result in a device that could be practically deployed throughout a city.
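To picture how one processor could serve several cameras, here is a hedged sketch that multiplexes frames from multiple streams, round-robin, through a single detection call. The camera URLs and the `detect` stub are placeholders, not details from Kumar's project.

```python
import cv2  # pip install opencv-python

# Hedged sketch: frames from several traffic cameras are fed, round-robin,
# through one shared detection model. URLs and `detect` are placeholders.

def detect(frame):
    """Stand-in for the Akida/FOMO inference call."""
    return []

streams = [cv2.VideoCapture(url) for url in (
    "rtsp://camera-north/stream",   # placeholder camera URLs
    "rtsp://camera-south/stream",
)]

while True:
    for cam_id, cap in enumerate(streams):
        ok, frame = cap.read()
        if not ok:
            continue                       # skip a dropped frame
        frame = cv2.resize(frame, (96, 96))
        detections = detect(frame)         # one model serves all cameras
        print(cam_id, len(detections))
```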


[Image: Preparing the training data]

Implementation​

As a first step, Kumar decided to focus on a critical component of any traffic camera setup — the ability to locate vehicles in real-time. After setting up the hardware for the project, it was time to build the object detection pipeline with Edge Impulse. To train the model, Kumar sought out a dataset captured by cameras at Shibuya Scramble Crossing, a busy intersection in Tokyo, Japan. Still image frames were extracted from videos posted on YouTube, and those images were then uploaded to a project in Edge Impulse using the CLI uploader tool.

From there, Kumar pivoted into the Labeling Queue tool to draw bounding boxes around objects of interest — in this case, vehicles. The object detection algorithm needs this additional information before it can learn to recognize specific objects. This can be a very tedious task for large training datasets, but the Labeling Queue offers an AI-assisted boost that helps to position the bounding boxes. Typically, one only needs to review the suggestions and make an occasional tweak.

Having squared away the training data, the next step involved designing the impulse. The impulse defines exactly how data is processed, all the way from the time it is produced by the sensor until a prediction is made by the machine learning model. In this case, images were first resized, which is important in reducing the computational complexity of downstream processing steps. Following that, the data was forwarded into a FOMO object detection model that has been optimized for use with BrainChip’s Akida neuromorphic processor. Kumar made a few small adjustments to the model’s hyperparameters to optimize it for use with multiple video streams, then the training process was initiated with the click of a button.

[Image: The impulse defines the data processing steps]
After a short time, the training was complete and a set of metrics was presented to help in assessing how well the model was performing. Right off the bat, an average accuracy score of 92.6% was observed. This is certainly more than good enough to prove the concept, but it is important to ensure that the model has not been overfit to the training data. For this reason, Kumar also leveraged the Model Testing tool, which utilizes a dataset that was left out of the training process. This tool revealed an average accuracy rate of 94.85% had been achieved, which added to the confidence given by the training results.

BrainChip’s Akida Development Kit is fully supported by Edge Impulse, which made deployment a snap. By selecting the “BrainChip MetaTF Model” option from the Deployment tool, a compressed archive was automatically prepared that was ready to run on the hardware. This code is aware of how to utilize the Akida PCIe card, which allows it to make the most of the board’s specialized hardware.
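A minimal sketch of what running that deployed archive on the PCIe card might look like with the akida Python package follows. The method names (devices, Model, map, forward) reflect the MetaTF runtime as I understand it, and the model filename is a placeholder; check the docs for your installed version.

```python
import numpy as np
import akida  # BrainChip MetaTF runtime; pip install akida

# Hedged sketch of loading the deployed model onto the Akida PCIe card.
# "fomo_model.fbz" stands in for the archive Edge Impulse produced.

device = akida.devices()[0]            # the AKD1000 on the PCIe card
model = akida.Model("fomo_model.fbz")  # program binary from the deployment
model.map(device)                      # run layers on hardware, not host

frame = np.zeros((1, 96, 96, 3), dtype=np.uint8)  # one camera frame (NHWC)
outputs = model.forward(frame)         # per-cell activation map
print(outputs.shape)
```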

To wrap things up, Kumar used the Edge Impulse Linux C++ SDK to run the deployed model. This made it simple to start up a web-based application that marks the locations of vehicles in real-time video streams from traffic cameras. As expected from the performance metrics, the system accurately detected vehicles in the video streams. It was also demonstrated that the predictions could be made in just a few milliseconds, and while slowly sipping on power.
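Since FOMO predicts a class score per grid cell rather than bounding boxes, marking vehicle locations reduces to thresholding that per-cell map and reporting cell centres. A sketch of that decoding step, with the grid size and threshold as assumptions rather than values from Kumar's write-up:

```python
import numpy as np

# Sketch of turning a FOMO-style per-cell heatmap into vehicle centroids.
# A detection is simply a grid cell whose score clears a threshold.

def decode_centroids(heatmap, input_size=96, threshold=0.5):
    """heatmap: (rows, cols) array of vehicle scores in [0, 1]."""
    rows, cols = heatmap.shape
    cell_h, cell_w = input_size / rows, input_size / cols
    detections = []
    for r, c in zip(*np.where(heatmap >= threshold)):
        # Report the centre of each firing cell in input-pixel coordinates.
        detections.append(((c + 0.5) * cell_w, (r + 0.5) * cell_h,
                           float(heatmap[r, c])))
    return detections

heatmap = np.zeros((12, 12))
heatmap[3, 7] = 0.9                     # pretend a vehicle fired this cell
print(decode_centroids(heatmap))        # [(60.0, 28.0, 0.9)]
```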


Are you interested in building your own multi-camera computer vision application? If so, Kumar has a great write-up that is well worth reading. The insights will help you to get your own idea off the ground quickly. Kumar has also experimented with single-camera setups if that is more what you had in mind.

 

Frangipani

Regular
This guy gets it: Byron Callaghan
Does he really? 🧐

Astrophysicists would strongly disagree… 🤣

But I suppose the reincarnation of Lord Byron (“Ye stars! which are the poetry of heaven!”) could always claim poetic license when confusing black with white holes and those self-luminous celestial bodies of gas we mere mortals behold in the night sky…

In the boundless expanse of technological galaxies, there exists a singular constellation that outshines all - the Akida 2.0 system. It is not just a beacon of brilliance; it's a veritable black hole, drawing in all realms of possibility and spewing out pure innovation. In the theatre of Edge AI, where countless players jostle for the limelight, Akida 2.0 doesn't just steal the show; it is the show.

Now the question of course is: since Akida is already being called science fiction, what shall we call white holes, then?! 😉




Kachoo

Regular
EDGX wins the ESA contract; Akida will be going to space.
 