BRN Discussion Ongoing

HopalongPetrovski

I'm Spartacus!
That's for sure! No doubt Doodle Labs would want to be part of that action too.😝
 
  • Haha
Reactions: 6 users
Well Ok I guess.
Just make sure the Third Eye doesn't enter the Lunar Gateway!
Not that there's anything wrong with that! 🤣
 
  • Haha
  • Love
Reactions: 8 users
Hi All
Sorry, I cannot provide a link, but for those unlike Pom (who should just read the Abstract and Conclusion), the full paper is probably interesting, even a little exciting, when you think what AKIDA with a little Edge Impulse can do. Regards, Fact Finder:

Safeguarding Public Spaces: Unveiling Wallet
Snatching through Edge Impulse Technology

Ujjwal Reddy K S

School of Computer Science and Engineering

VIT-AP University
Andhra Pradesh, India
ujjwal.20bci7203@vitap.ac.in

* Kuppusamy P

School of Computer Science and Engineering

VIT-AP University
Andhra Pradesh, India
drpkscse@gmail.com

Abstract—In contemporary society, public space security and safety are of utmost significance. The theft of wallets, a frequent type of street crime, puts people’s personal items at risk and may result in financial loss and psychological misery. By utilizing Edge Impulse technology to identify and expose wallet-snatching incidents in public areas, this article offers a fresh solution to the problem. To develop a reliable and effective wallet-snatching detection solution, the suggested system blends machine learning techniques with the strength of the Edge Impulse platform. This study used Spiking Neural Networks (SNNs), which are inspired by the biological neural networks found in the brain. Edge Impulse offers a thorough framework for gathering, preprocessing, and examining data, enabling the creation of extremely precise machine learning models. The system can accurately discriminate between legitimate interactions with wallets and suspicious snatching attempts by training these models on a dataset that includes both normal and snatching events. Experimental findings demonstrate the efficiency of the suggested method, showing 95% accuracy and low false positive rates in recognizing wallet-snatching instances. Increasing public safety, giving people a sense of security in public places, and discouraging prospective wallet-snatching criminals are all goals of this research.

Index Terms—wallet snatching, public spaces, Edge Impulse, sensor devices, machine learning, real-time monitoring, security, privacy

I. INTRODUCTION

Public places are critical for societal interactions and community participation. They are places of recreation, socialization, and public meetings. However, these areas are not immune to criminal activity, and one typical threat is wallet snatching. Wallet snatching is the act of forcibly removing someone’s wallet, which frequently results in financial losses, identity theft, and psychological suffering for the victims. Safeguarding public places and combating wallet snatching necessitate new measures that make use of developing technology. In this context, this introduction investigates the potential of Edge Impulse technology in uncovering and preventing wallet-snatching events [1].

Wallet-snatching instances can occur in a variety of public places, including parks, retail malls, congested roadways, and public transit. These attacks are frequently characterized by their speed and stealth, giving victims little time to react or seek aid. Traditional surveillance systems, such as Closed Circuit Television (CCTV) cameras, have difficulties in efficiently identifying and preventing wallet-snatching occurrences owing to variables such as limited coverage, video quality, and human error in monitoring [2]. As a result, more advanced technical solutions that can proactively identify and respond to such situations are required.

Edge Impulse is a new technology that integrates machine learning algorithms, sensor data, and embedded systems to generate smart and efficient solutions [3]. It allows machine learning models to be deployed directly on edge devices such as smartphones, wearable devices, or Internet of Things (IoT) devices, reducing the requirement for ongoing access to a distant server. Edge Impulse is an appropriate solution for tackling the problem of wallet snatching in public places because of its capabilities.

Fig. 1. Edge Impulse Architecture.

It is essential to look into the vast amount of research and studies done in this specific subject in order to properly understand the powers of Edge Impulse technology in revealing instances of wallet theft. Numerous studies have been conducted to examine the use of computer vision and machine learning approaches in detecting and preventing criminal activity in public spaces. The topic of utilizing cutting-edge technologies to improve public safety and security has been explored in a number of academic studies. This research has shown how machine learning algorithms may be used to examine video footage and identify patterns of suspicious
behavior that could be related to wallet-snatching instances. These cutting-edge technologies may recognize people who display suspicious motions or participate in potentially illegal behaviors by utilizing computer vision techniques, such as object identification and tracking, enabling proactive intervention. Edge Impulse technology integration has a lot of potential in this area. It may be trained to recognize certain traits and attributes linked to wallet snatching through its strong machine learning skills, improving its capacity to precisely detect such instances in real-time. Edge Impulse can analyze trends, spot abnormalities, and notify authorities or security people to take immediate action by utilizing the enormous volumes of data gathered from several sources, including surveillance cameras and sensor networks. The possibility of predictive analytics to foresee wallet theft episodes based on previous data and behavioral trends has also been investigated in this field of research. Machine learning algorithms are able to recognize high-risk locations and deploy resources appropriately by examining elements like the time of day, location, and population density. With the use of this proactive strategy, law enforcement organizations may deploy people efficiently and put out preventative measures, which serve to dissuade prospective criminal activity.
Based on these findings, the use of Edge Impulse technology
in the context of wallet snatching can improve the efficiency of
crime prevention systems [4]. The reaction time may be greatly
decreased by implementing machine learning models directly on edge devices, enabling real-time detection and fast intervention. Furthermore, Edge Impulse technology can record

and analyze essential data for recognizing wallet-snatching
instances using numerous sensors included in smartphones
or wearable devices, such as accelerometers, gyroscopes, and
cameras.
For example, accelerometer data may be utilized to detect
abrupt movements or violent behaviors that are suggestive of
wallet-snatching attempts [5]. The gyroscope data can offer
information regarding the direction and speed of the grab,
assisting in the tracking of the culprit. Additionally, camera
footage may be analyzed using computer vision algorithms to
detect suspicious activity, identify possible thieves, or collect
photographs for later identification and proof.
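
To make the accelerometer idea above concrete, here is a minimal sketch of abrupt-movement detection from acceleration magnitude; the sampling rate, jerk threshold, and the simulated "jolt" are my own illustrative assumptions, not values from the paper:

```python
# Flag abrupt movements from 3-axis accelerometer samples (illustrative only;
# the 100 Hz rate and 25 m/s^3 jerk threshold are assumed, not from the paper).
import numpy as np

def abrupt_movement_indices(accel, fs=100.0, jerk_threshold=25.0):
    """accel: (N, 3) array of x/y/z acceleration in m/s^2."""
    magnitude = np.linalg.norm(accel, axis=1)   # overall acceleration
    jerk = np.abs(np.diff(magnitude)) * fs      # rate of change (m/s^3)
    return np.where(jerk > jerk_threshold)[0]   # sample indices of spikes

# Example: quiet signal with one sharp jolt injected at sample 500.
accel = np.random.normal(0.0, 0.05, size=(1000, 3))
accel[:, 2] += 9.81                             # gravity on the z-axis
accel[500] += np.array([8.0, -6.0, 4.0])        # simulated snatch-like jolt
print(abrupt_movement_indices(accel))           # -> indices around 499-500
```

In a real deployment the threshold would be learned from labeled data rather than hand-set, which is essentially what the paper delegates to the trained model.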
The increasing availability of data can further benefit the use
of Edge Impulse technology in wallet snatching prevention.
With the growth of smartphones and wearable devices, there is
an abundance of sensor data that can be gathered and analyzed
in order to create strong machine learning models. This data
may be used to train algorithms to recognize certain patterns or
abnormalities related to wallet-snatching instances, boosting
the system’s accuracy and dependability.
Furthermore, integrating Edge Impulse technology with
current surveillance systems can improve their capabilities.
A complete and intelligent system may be constructed by
integrating the strengths of both technologies, such as the extensive coverage of CCTV cameras and the real-time analysis of edge devices. This integrated strategy would allow for proactive identification and rapid reaction to wallet-snatching occurrences, minimizing the impact on victims and discouraging future perpetrators.

Finally, wallet snatching in public places is a serious danger
to public safety and individual well-being [6]. Innovative
techniques are necessary to overcome this difficulty, and Edge
Impulse technology has intriguing possibilities. Edge Impulse
provides real-time detection and fast action in wallet snatching
occurrences by employing machine learning models installed
directly on edge devices. It captures and analyses pertinent
information using multiple sensors and data sources accessible on smartphones and wearable devices. Integrating Edge Impulse technology with current monitoring systems can improve the efficacy of crime prevention efforts. These developments
can help to protect public places and expose wallet snatching,
resulting in safer and more secure communities.
A. Motivation
This study aims to harness the potential of Edge Impulse technology to make public areas safer for citizens by efficiently combating wallet-snatching events. We hope that by finding a solution, we can contribute to the wider objective of protecting public places and improving the general quality of life in our communities.
B. Contribution
• The study presents an innovative use of Edge Impulse technology for improving public safety.
• The study proposes the use of Spiking Neural Networks (SNNs).
• The created machine learning model detects wallet-snatching episodes in public places with high accuracy and efficiency.

II. RELATED WORK

The study proposes a framework comprised of two major
components: a behavior model and a detection technique [7].
The behavior model captures the software’s valid behavior
by monitoring its execution and gathering information about
its interactions with the system and the user. The detection
method compares the observed behavior of a software instance
to the behavior model to discover any differences that signal
probable theft. The authors conducted trials with real-world
software applications to assess the efficacy of their technique.
They tested their system’s detection accuracy, false positive
rate, and false negative rate. The results indicated promising
performance in detecting software theft occurrences properly
while keeping false alarms to an acceptable level. The study
presents an overview of the many processes involved in
the identification of anomalous behavior, including human detection, feature extraction, and classification [8]. It emphasizes the importance of Convolutional Neural Networks (CNNs) in dealing with the complexities of visual input and
extracting important characteristics for behavioral research.
Furthermore, the authors explore several CNN architectures
used for anomalous behavior identification, such as AlexNet,
Visual Geometry Group Network (VGGNet), and Residual
Neural Network (ResNet) [9]–[11]. They also investigate the use of various datasets and assessment criteria in evaluating
the performance of these models. The survey includes a wide
range of applications where aberrant behavior identification is
critical, such as crowd monitoring, public space surveillance,
and anomaly detection in industrial settings [8]. The authors
assess the merits and limits of existing approaches, as well as
new research avenues and opportunities for development.
The suggested technique consists of two major steps: feature engineering-based preprocessing and energy theft detection using gradient boosting [12]. Various characteristics from the electricity usage data are extracted during the feature engineering-based preprocessing stage. These traits are intended to detect trends and behaviors that may suggest possible energy theft. After preprocessing the data, the authors use gradient boosting, a machine learning approach, to detect energy theft. Gradient boosting is an ensemble learning approach that combines numerous weak predictive models to build a strong predictive model. It constructs decision trees in a sequential manner, with each succeeding tree learning from the mistakes of the preceding ones. The suggested strategy is evaluated by the authors using real-world power use data. They compare their approach’s performance to that of other current approaches for detecting energy theft, such as decision trees, random forests, and support vector machines [13]–[15]. Accuracy, precision, recall, and F1-score are among the assessment criteria employed. The paper’s results show that the suggested technique beats the other methods in terms of energy theft detection accuracy. The authors credit this enhanced performance to the preprocessing stage based on feature engineering and the efficiency of gradient boosting in identifying complicated connections in the data.
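
As a rough, self-contained illustration of the gradient-boosting step described above (not the authors' pipeline; the features and labels below are synthetic placeholders):

```python
# Minimal sketch of gradient-boosted theft detection (illustrative only;
# the features and data here are placeholders, not the paper's dataset).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                  # stand-in usage features
y = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)   # stand-in theft labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Sequential trees, each correcting the errors of its predecessors.
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
clf.fit(X_tr, y_tr)

print(classification_report(y_te, clf.predict(X_te)))  # precision/recall/F1
```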
The study is primarily concerned with analyzing power
use trends and discovering abnormalities that might suggest
theft [16]. The system learns to discern between regular use
patterns and suspicious actions that signal theft by training the
decision tree and Support Vector Machine (SVM) models on
historical data. The attributes chosen are used to categorize
incidents as either theft or non-theft. The suggested technique
is tested using real-world smart grid data. The findings show
that the decision tree and SVM-based methods can identify
theft in smart grids with high accuracy and low false positive
rates. The study focuses on identifying instances of theft by
collecting temporal relationships in energy use data [17]. The
system learns to recognize regular consumption patterns and
detect variations that suggest theft by training the CNN-Long
Short-Term Memory (LSTM) model on historical data. The
suggested method is tested using real-world smart grid data,
and the findings show that it is successful at identifying power
theft [18]. The CNN-LSTM-based technique beats existing
approaches in terms of detection accuracy. Both papers address
the important issue of theft detection in smart grid systems,
but they employ different techniques [16], [17]. The first
paper utilizes decision trees and SVM for feature selection
and classification, while the second paper employs CNNs and
LSTM networks for feature extraction and anomaly detection.
These approaches contribute to the development of effective methods for enhancing the security and reliability of smart
grid systems.
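
For readers unfamiliar with the CNN-LSTM combination mentioned above, a minimal Keras-style sketch could look like the following; the layer sizes and the 96-step, single-channel input shape are assumptions rather than values from [17]:

```python
# Illustrative CNN-LSTM for sequence anomaly detection; layer sizes and the
# 96-step/1-channel input shape are assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(96, 1)),              # e.g., 96 consumption readings
    layers.Conv1D(32, 3, activation="relu"),  # local temporal features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                          # longer-range dependencies
    layers.Dense(1, activation="sigmoid"),    # theft / non-theft score
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

The convolution compresses local patterns before the LSTM, which is the division of labor the two cited papers exploit.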
The study most likely proposes an algorithm or strategy
that employs computer vision and motion analysis techniques
to detect suspicious or illegal behavior in video footage [19].
The suggested approach most likely seeks to discriminate
between routine activities and probable criminal behaviors
by analyzing the motion patterns of humans or items in a
setting [20]. It is difficult to offer a full description of the
methodology, results, or conclusions of the study based on
the information supplied. However, it may be deduced that the
authors suggest a way for developing an automated criminal detection system that combines motion analysis with intelligent information-concealing strategies. The authors suggest

a chain-snatching detection safety system that detects and
prevents chain-snatching accidents by utilizing sophisticated
technologies [21]. However, without complete access to the
article, it is difficult to offer extensive information regarding
the system’s methodology, components, or methods used. To
detect rapid and strong movements associated with chain
snatching attempts, the system is likely to include various
sensors such as motion sensors or accelerometers. Image
processing methods may also be used to identify possible
chain snatchers or to collect photographs of the occurrence
for additional investigation or proof [22]. In addition, when a
chain-snatching incident is identified, the system might contain
an alarm or notification mechanism that warns surrounding
persons or authorities in real time. This quick reaction can
dissuade offenders while also providing urgent support to
victims. The report will most likely offer experimental findings
and assessments to assess the suggested system’s usefulness
in effectively identifying chain-snatching occurrences while
minimizing false alarms [21]. It may also address the system’s
weaknesses, prospective areas for development, and future
research directions in this subject.
The document most likely presents a proposed approach or
algorithm for detecting snatch stealing [23]. It may describe
the selection and extraction of low-level video data elements
such as motion analysis, object tracking, or other relevant
information that can be utilized to detect snatch-stealing
instances. The authors may have also investigated various
strategies for identifying and discriminating between regular and snatch-stealing incidents. Given that the paper was delivered in 2010, it is crucial to highlight that the material provided in it is based on research and technology breakthroughs
accessible at the time [23]. It’s probable that recent advances in
computer vision, machine learning, and surveillance systems
have pushed the area of snatch-steal detection even further.
The authors present an action attribute modeling technique
for automatically recognizing snatch-stealing incidents [24]. To identify possible snatch-steal instances, the technique entails analyzing the activities and characteristics displayed by persons in surveillance recordings. The idea is to create a
system that can send real-time alerts to security workers or
law enforcement organizations in order to help avoid such crimes or respond promptly when they occur. The document most likely outlines the methods and algorithms used to
detect snatch-stealing occurrences, including the extraction of
key characteristics, training a model using labeled data, and
evaluating the suggested solution. It might also go through
the datasets used for training and testing, as well as the
performance measures used to assess the system’s efficacy.
Because the study was published in 2018, it is crucial to
highlight that advances in the area may have occurred since
then, and other methodologies or approaches may have been
created [24].

The study describes the integrated framework’s many components, such as data collecting, preprocessing, feature extraction, and crime detection [25]. In addition, the authors give experimental results based on real-world data to illustrate the
efficacy of their technique. The results show that the suggested
framework may detect petty crimes in a fast and accurate
manner, allowing law enforcement authorities to respond more
efficiently. The research focuses on the use of deep learning
algorithms to reliably detect suspicious human conduct in surveillance videos [26]. By using the capabilities of deep
learning algorithms, scientists hope to increase the accuracy
and reliability of suspicious behavior detection. The study
provides a full description of the suggested technique, which
includes surveillance video preprocessing, feature extraction
with CNNs, and categorization of suspicious actions with
Recurrent Neural Networks (RNNs) [27], [28]. The authors also explore the difficulties connected with detecting suspicious behavior and provide strategies to overcome them.

The research focuses on the cap-snatching mechanism used
by the yeast L-A double-stranded Ribonucleic Acid (RNA)
virus [29]. The cap-snatching mechanism is a technique used
by certain RNA viruses to hijack the host’s messenger RNA
(mRNA) cap structure for viral RNA production. The authors
study the particular cap-snatching method used by the yeast
L-A double-stranded RNA virus and give deep insights into
its molecular processes. They investigate the viral variables
involved in cap-snatching and their interplay with host factors.
The authors’ research contributes to the knowledge of RNA virus replication techniques and sheds light on the complicated mechanisms involved in the reproduction of the yeast L-A double-stranded RNA virus [29]. The findings of
this study are useful for virology research and increase our
understanding of viral replication techniques.

Continued in next post......
 
  • Fire
  • Like
  • Wow
Reactions: 13 users
Balance of the paper on Wallet protection:

III. WALLET SNATCHING THROUGH EDGE IMPULSE TECHNOLOGY

This study aimed to investigate practical countermeasures
to wallet-snatching incidents in public places. To achieve this,
a dataset was collected, annotated, and submitted to the Edge
Impulse platform. The model was trained to recognize wallet
theft instances, with an impressive 95% accuracy rate. Despite challenges, such as limited data, the researchers used cutting-edge methods and tactics to enhance the training process and improve performance. They considered camera angles, lighting conditions, and the pace of the grab to ensure accurate prediction. The research has immense potential for improving public safety and reducing theft incidences, paving the way for future security protocols.
A. Edge Impulse Technology

Fig. 2. Flowchart of Edge Impulse.

Edge Impulse is a cutting-edge machine learning platform for developing and deploying intelligent applications on edge devices [1], [32]. It provides developers with an easy-to-use interface and a complete range of tools for collecting, processing, and analyzing data in order to create machine learning models. Edge Impulse enables machine learning at the edge, allowing devices to make real-time choices without requiring ongoing access to the cloud [3]. The platform supports a variety of edge devices, such as microcontrollers, development boards, and sensors, allowing developers to harness the potential of machine learning in resource-constrained contexts. Developers may use Edge Impulse to train and deploy models for a variety of applications such as predictive maintenance, anomaly detection, motion identification, and more. The platform also allows for the training and optimization of machine learning models utilizing common techniques such as neural networks, decision trees, and support vector machines. It provides an easy-to-use interface for configuring model parameters, evaluating model performance, and optimizing models for deployment on edge devices.
B. Spiking Neural Networks (SNNs)

SNNs are artificial neural networks that are inspired by biological neural networks found in the brain. SNNs function with discrete-time, event-driven processing, as opposed to standard artificial neural networks, which are based on continuous-valued activations and employ backpropagation for learning.

Fig. 3. Architecture of a Spiking Neural Network (SNN).

SNNs may provide various benefits over typical neural networks, particularly for jobs involving temporal information processing, event-based data, and bio-inspired computing. Low energy consumption, improved temporal precision, and possible appropriateness for neuromorphic hardware implementations are some of the potential benefits.

The key components of SNNs are as follows:
• Spiking Neurons: These are the network’s fundamental building components. Based on activation criteria, they integrate input spikes and create output spikes.
• Spike Trains: Instead of continuous activations like in typical neural networks, information in SNNs is represented as discrete spike trains, which are time-varying sequences of spikes.
• Synaptic Weight Updates: To increase performance, SNNs may learn from data and modify their synaptic weights. Learning in SNNs is often characterized by Spike-Timing-Dependent Plasticity (STDP), in which the weight updates are determined by the relative timing of presynaptic and postsynaptic events; see the sketch after this list.
• Spike-Based Learning Rules: Depending on the timing of the pulses, different learning rules are utilized to adjust synaptic strengths.
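
As a toy illustration of the pair-based STDP rule described in the list above (not the paper's learning algorithm; the time constant and learning rates are arbitrary assumptions):

```python
# Toy pair-based STDP rule: potentiate when the presynaptic spike precedes
# the postsynaptic spike, depress otherwise. All constants are illustrative.
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> strengthen (causal pairing)
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fired before pre -> weaken (anti-causal pairing)
        return -a_minus * math.exp(dt / tau)
    return 0.0

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 32.0)]:  # two spike pairs (ms)
    w += stdp_delta_w(t_pre, t_post)
print(f"final weight: {w:.4f}")
```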
C. Akida FOMO (Faster Objects, More Objects)
The Akida FOMO (Faster Objects, More Objects) paradigm
is a neuromorphic hardware platform created by BrainChip
that is inspired by the structure and function of the human
brain. It provides real-time neural network inference on edge
devices while reducing latency, increasing energy efficiency,
and allowing for large-scale parallel computing operations.
SNNs are used in the model to effectively handle temporal and spatial input, replicating the behavior of neurons. This enables operations like object recognition, gesture detection, and anomaly detection to be performed on edge devices, removing
the need for cloud-based processing. This method is especially
beneficial in low latency and data privacy scenarios when
continuous data transmission to remote servers is not required.
The neural network model is a deep learning architecture for image processing tasks, consisting of 21 layers. It begins with an input layer representing images with shape (None, 96, 96, 3), i.e., 96×96 pixels and 3 color channels. The model then includes 4 layers of Conv2D, 4 layers of BatchNormalization, and 5 layers of Rectified Linear Unit (ReLU) activation functions, which process the input data and learn complex patterns. The two layers of SeparableConv2D enhance feature extraction capabilities. The model’s convolutional nature makes it suitable for image-related tasks like image classification and object detection. Each layer contributes to the overall complexity, capturing important features and patterns from the input images. The neural network model demonstrates high performance in image tasks using deep learning and convolutional networks, and is versatile for various computer vision applications.
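
A rough Keras sketch of a backbone matching that description follows; the filter counts, strides, and the two-class per-cell output head are my assumptions for illustration, not the authors' exact 21-layer model:

```python
# Rough FOMO-style backbone sketch in Keras. Filter counts and the two-class
# per-cell output head are assumptions, not the paper's exact architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters, stride):
    x = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

inputs = layers.Input(shape=(96, 96, 3))   # 96x96 RGB frames
x = conv_block(inputs, 16, 2)              # downsample to 48x48
x = conv_block(x, 32, 2)                   # 24x24
x = conv_block(x, 64, 2)                   # 12x12
x = conv_block(x, 64, 1)
x = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(x)
# Per-cell class map (background vs. snatching event), FOMO-style:
outputs = layers.SeparableConv2D(2, 1, padding="same", activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```

Training such a head amounts to classifying each cell of a coarse grid, which is what lets FOMO-style models localize multiple small objects cheaply on edge hardware.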
Algorithm 1 Akida FOMO Model Inference
Require: Input data
Ensure: Inference result
 1: Load Akida FOMO model parameters
 2: Initialize input data
 3: Preprocess input data
 4: Convert input data to SNN format
 5: Initialize SNN state
 6: while not at end of input data do
 7:     for each input spike do
 8:         Propagate spike through SNN
 9:         Update SNN state
10:     end for
11: end while
12: Perform output decoding on SNN state
13: return Inference result
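
A schematic Python rendering of that inference loop, under a simple leaky integrate-and-fire assumption (the weights, leak factor, and threshold are illustrative, not Akida's actual parameters):

```python
# Schematic event-driven SNN inference loop mirroring Algorithm 1.
# Weights, leak factor, and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.5, size=(16, 4))   # 16 inputs -> 4 output neurons
LEAK, THRESHOLD = 0.9, 1.0

def infer(spike_events):
    """spike_events: iterable of (timestep, input_index) spikes."""
    potential = np.zeros(4)                 # SNN state (membrane potentials)
    counts = np.zeros(4)                    # output spike counts for decoding
    last_t = 0
    for t, i in spike_events:
        potential *= LEAK ** (t - last_t)   # passive decay between events
        last_t = t
        potential += W[i]                   # propagate spike through SNN
        fired = potential >= THRESHOLD
        counts += fired                     # record output spikes
        potential[fired] = 0.0              # reset neurons that fired
    return int(np.argmax(counts))           # decode: most active neuron wins

events = sorted((int(t), int(i)) for t, i in
                zip(rng.integers(0, 50, 40), rng.integers(0, 16, 40)))
print("predicted class:", infer(events))
```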
D. Data Collection

Fig. 4. Images of dataset.

The dataset for this research was gathered from the internet. It includes videos of chain snatching, wallet snatching, and other forms of snatching. All of the essential videos were collected into a dataset, which was then analyzed to identify common patterns and behaviors among the snatchers. We found that most snatchers targeted vulnerable individuals, such as the elderly or those walking alone at night. Additionally, we noticed that snatchers tended to operate in specific areas, such as busy marketplaces or near public transportation hubs. With this information, we were able to develop a more targeted approach to preventing these crimes from occurring [31]. Edge Impulse streamlines the machine learning workflow by offering a step-by-step method that comprises data collection, data preprocessing, model training, and model deployment. It provides a number of data intake methods, including direct sensor integration, data import, and interaction with third-party services. Edge Impulse’s capacity to undertake automatic data preparation is one of its most prominent characteristics. It provides a variety of signal-processing techniques and feature extraction strategies for transforming raw data into relevant features for model training. This streamlines the data pre-treatment procedure and saves time for developers. The platform also allows for the training and optimization of machine learning models utilizing common techniques such as neural networks, decision trees, and support vector machines. It provides an easy-to-use interface for configuring model parameters, evaluating model performance, and optimizing models for deployment on edge devices.
E. Data Preprocessing

See next post
 
  • Like
  • Fire
Reactions: 13 users
Hi All
Sorry cannot provide a link but for those unlike Pom who should just read the Abstract and Conclusion the full paper is probably interesting to a little exciting to think what AKIDA with a little Edge Impulse can do. Regards Fact Finder:

Safeguarding Public Spaces: Unveiling Wallet
Snatching through Edge Impulse Technology

Ujjwal Reddy K S

School of Computer Science and Engineering

VIT-AP University
Andhra Pradesh, India
ujjwal.20bci7203@vitap.ac.in

* Kuppusamy P

School of Computer Science and Engineering

VIT-AP University
Andhra Pradesh, India
drpkscse@gmail.com

Abstract—In contemporary society, public space security and
safety are of utmost significance. The theft of wallets, a frequent
type of street crime, puts people’s personal items at risk and
may result in financial loss and psychological misery. By utilizing
Edge Impulse technology to identify and expose wallet-snatching
incidents in public areas, this article offers a fresh solution to
the problem. To develop a reliable and effective wallet-snatching
detection solution, the suggested system blends machine learning
techniques with the strength of the Edge Impulse platform. This
study used Spiking Neural Networks (SNNs) which are inspired
by the biological neural networks found in the brain. Edge

Impulse offers a thorough framework for gathering, prepro-
cessing, and examining data, enabling the creation of extremely

precise machine learning models. The system can accurately
discriminate between legitimate interactions with wallets and
suspicious snatching attempts by training these models on a
dataset that includes both normal and snatching events. The

efficiency of the suggested method is 95% demonstrated by exper-
imental findings, which show high accuracy and low false positive

rates in recognizing wallet snatching instances. Increasing public
safety, giving people a sense of security in public places, and
discouraging prospective wallet-snatching criminals are all goals
of this research.
Index Terms—wallet snatching, public spaces, Edge Impulse,
sensor devices, machine learning, real-time monitoring, security,
privacy

I. INTRODUCTION

Public places are critical for societal interactions and com-
munity participation. They are places of recreation, social-
ization, and public meetings. However, these areas are not

immune to criminal activity, and one typical threat is wallet
snatching. Wallet snatching is the act of forcibly removing
someone’s wallet, which frequently results in financial losses,
identity theft, and psychological suffering for the victims.
Safeguarding public places and combating wallet snatching

necessitate new measures that make use of developing technol-
ogy. In this context, this introduction investigates the potential

of Edge Impulse technology in uncovering and preventing
wallet-snatching events [1].
Wallet-snatching instances can occur in a variety of public
places, including parks, retail malls, congested roadways, and
public transit. These attacks are frequently characterized by
their speed and stealth, giving victims little time to react or

seek aid. Traditional surveillance systems, such as Closed Cir-
cuit Television (CCTV) cameras, have difficulties in efficiently

identifying and preventing wallet-snatching occurrences owing
to variables such as limited coverage, video quality, and human
error in monitoring [2]. As a result, more advanced technical
solutions that can proactively identify and respond to such
situations are required.
Edge Impulse is a new technology that integrates machine
learning algorithms, sensor data, and embedded systems to
generate smart and efficient solutions [3]. It allows machine
learning models to be deployed directly on edge devices such
as smartphones, wearable devices, or Internet of Things (IoT)
devices, reducing the requirement for ongoing access to a
distant server. Edge Impulse is an appropriate solution for
tackling the problem of wallet snatching in public places
because of its capabilities.

Fig. 1. Edge Impulse Architecture.

It is essential to look into the vast amount of research

and studies done in this specific subject in order to prop-
erly understand the powers of Edge Impulse technology in

revealing instances of wallet theft. Numerous studies have
been conducted to examine the use of computer vision and

machine learning approaches in detecting and preventing crim-
inal activity in public spaces. The topic of utilizing cutting-
edge technologies to improve public safety and security has

been explored in a number of academic studies. This research
has shown how machine learning algorithms may be used
to examine video footage and identify patterns of suspicious

2023 International Conference on Research Methodologies in Knowledge Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE) | 979-8-3503-0570-8/23/$31.00 ©2023 IEEE | DOI: 10.1109/RMKMATE59243.2023.10369744

behavior that could be related to wallet-snatching instances.
These cutting-edge technologies may recognize people who
display suspicious motions or participate in potentially illegal
behaviors by utilizing computer vision techniques, such as

object identification and tracking, enabling proactive interven-
tion. Edge Impulse technology integration has a lot of potential

in this area. It may be trained to recognize certain traits
and attributes linked to wallet snatching through its strong
machine learning skills, improving its capacity to precisely
detect such instances in real-time. Edge Impulse can analyze
trends, spot abnormalities, and notify authorities or security
people to take immediate action by utilizing the enormous
volumes of data gathered from several sources, including
surveillance cameras and sensor networks. The possibility of
predictive analytics to foresee wallet theft episodes based on
previous data and behavioral trends has also been investigated
in this field of research. Machine learning algorithms are
able to recognize high-risk locations and deploy resources
appropriately by examining elements like the time of day,
location, and population density. With the use of this proactive
strategy, law enforcement organizations may deploy people
efficiently and put out preventative measures, which serve to
dissuade prospective criminal activity.
Based on these findings, the use of Edge Impulse technology
in the context of wallet snatching can improve the efficiency of
crime prevention systems [4]. The reaction time may be greatly
decreased by implementing machine learning models directly

on edge devices, enabling real-time detection and fast inter-
vention. Furthermore, Edge Impulse technology can record

and analyze essential data for recognizing wallet-snatching
instances using numerous sensors included in smartphones
or wearable devices, such as accelerometers, gyroscopes, and
cameras.
For example, accelerometer data may be utilized to detect
abrupt movements or violent behaviors that are suggestive of
wallet-snatching attempts [5]. The gyroscope data can offer
information regarding the direction and speed of the grab,
assisting in the tracking of the culprit. Additionally, camera
footage may be analyzed using computer vision algorithms to
detect suspicious activity, identify possible thieves, or collect
photographs for later identification and proof.
The increasing availability of data can further benefit the use
of Edge Impulse technology in wallet snatching prevention.
With the growth of smartphones and wearable devices, there is
an abundance of sensor data that can be gathered and analyzed
in order to create strong machine learning models. This data
may be used to train algorithms to recognize certain patterns or
abnormalities related with wallet-snatching instances, boosting
the system’s accuracy and dependability.
Furthermore, integrating Edge Impulse technology with existing surveillance systems can extend their capabilities. A complete and intelligent system can be constructed by combining the strengths of both technologies, such as the extensive coverage of CCTV cameras and the real-time analysis of edge devices. This integrated strategy allows proactive identification of, and rapid reaction to, wallet-snatching occurrences, minimizing the impact on victims and discouraging future perpetrators.

Finally, wallet snatching in public places poses a serious danger to public safety and individual well-being [6]. Overcoming this challenge requires innovative techniques, and Edge Impulse technology offers intriguing possibilities. By running machine learning models directly on edge devices, Edge Impulse provides real-time detection of, and fast response to, wallet-snatching occurrences, capturing and analyzing pertinent information from the sensors and data sources available on smartphones and wearable devices. Integrating Edge Impulse technology with existing monitoring systems can improve the efficacy of crime prevention efforts. Together, these developments can help protect public places and expose wallet snatching, resulting in safer and more secure communities.
A. Motivation
This study aims to harness the potential of Edge Impulse technology to make public areas safer for citizens by efficiently combating wallet-snatching incidents. By developing a working solution, we hope to contribute to the wider objective of protecting public places and improving the general quality of life in our communities.
B. Contribution
• The study presents an innovative use of Edge Impulse technology for improving public safety.
• The study proposes a Spiking Neural Network (SNN) based detection approach.
• The resulting machine learning model detects wallet-snatching episodes in public places with high accuracy and efficiency.

II. RELATED WORK

The first study proposes a framework comprising two major components: a behavior model and a detection technique [7]. The behavior model captures the software's legitimate behavior by monitoring its execution and gathering information about its interactions with the system and the user. The detection technique compares the observed behavior of a software instance to the behavior model to discover any differences that signal probable theft. The authors conducted trials with real-world software applications to assess the efficacy of their technique, measuring detection accuracy, false positive rate, and false negative rate. The results indicated promising performance in detecting software theft while keeping false alarms at an acceptable level. Another study presents an overview of the processes involved in identifying anomalous behavior, including human detection, feature extraction, and classification [8]. It emphasizes the importance of Convolutional Neural Networks (CNNs) in dealing with the complexities of visual input and extracting the characteristics that matter for behavioral analysis. Furthermore, the authors explore several CNN architectures used for anomalous behavior identification, such as AlexNet, Visual Geometry Group Network (VGGNet), and Residual Neural Network (ResNet) [9]–[11], and investigate the datasets and assessment criteria used to evaluate these models. The survey covers a wide range of applications where aberrant behavior identification is critical, such as crowd monitoring, public space surveillance, and anomaly detection in industrial settings [8]. The authors assess the merits and limits of existing approaches, as well as open research avenues and opportunities for improvement.
The next technique consists of two major steps: feature-engineering-based preprocessing and energy theft detection using gradient boosting [12]. During preprocessing, various characteristics are extracted from the electricity usage data, chosen to capture trends and behaviors that may suggest energy theft. The authors then apply gradient boosting, an ensemble learning approach that combines numerous weak predictive models into a strong one: decision trees are constructed sequentially, each tree learning from the mistakes of its predecessors. The strategy is evaluated on real-world power usage data and compared against existing energy theft detectors such as decision trees, random forests, and support vector machines [13]–[15], using accuracy, precision, recall, and F1-score as the assessment criteria. The results show that the suggested technique beats the other methods in energy theft detection accuracy, which the authors credit to the feature-engineering-based preprocessing and to gradient boosting's ability to capture complicated relationships in the data.
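As an illustration of the sequential-trees idea, the following sketch trains a gradient-boosted classifier on synthetic tabular features standing in for engineered consumption statistics; it is a generic scikit-learn example, not the cited paper's pipeline.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for engineered features (e.g. mean usage, variance,
# night/day ratio); label 1 = suspected theft, 0 = normal consumption.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)  # trees are fit sequentially, each on the previous errors
print(classification_report(y_te, clf.predict(X_te)))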
One study analyzes power usage trends and flags anomalies that may indicate theft [16]. By training decision tree and Support Vector Machine (SVM) models on historical data, the system learns to distinguish regular usage patterns from the suspicious activity that signals theft; the selected attributes are used to classify incidents as theft or non-theft. Tested on real-world smart grid data, the decision tree and SVM-based methods identify theft with high accuracy and low false positive rates. A second study identifies theft by capturing temporal relationships in energy usage data [17]. A CNN-Long Short-Term Memory (LSTM) model trained on historical data learns regular consumption patterns and detects the deviations that suggest theft. Evaluated on real-world smart grid data, the CNN-LSTM technique proves successful at identifying power theft and beats existing approaches in detection accuracy [18]. Both papers address theft detection in smart grid systems but with different techniques [16], [17]: the first uses decision trees and SVMs for feature selection and classification, while the second employs CNNs and LSTM networks for feature extraction and anomaly detection. Both contribute to methods for enhancing the security and reliability of smart grid systems.
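A minimal Keras sketch of the CNN-LSTM pattern described in [17] is shown below; the input shape (one week of hourly readings) and layer sizes are assumptions, not the paper's configuration.

import tensorflow as tf

# Assumed input: one week of hourly consumption readings (168 x 1) per sample.
# Conv1D summarizes each day locally; the LSTM models week-long dynamics.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(168, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=24, strides=24, activation="relu"),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # theft probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()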
Another study most likely proposes an algorithm that employs computer vision and motion analysis techniques to detect suspicious or illegal behavior in video footage [19]. The approach presumably seeks to discriminate between routine activities and probable criminal behavior by analyzing the motion patterns of people or objects in a scene [20]. Based on the information available, a full description of the study's methodology, results, or conclusions cannot be given; it may be deduced, however, that the authors propose an automated crime detection system combining motion analysis with intelligent information-concealing strategies. Other authors suggest a chain-snatching detection safety system that detects and prevents chain-snatching incidents using sophisticated technologies [21]. Without full access to the article, extensive details of the system's methodology and components cannot be given, but the system likely includes sensors such as motion sensors or accelerometers to detect the rapid, strong movements associated with chain-snatching attempts, and may use image processing to identify possible chain snatchers or to collect photographs of the occurrence for further investigation or proof [22]. In addition, when a chain-snatching incident is identified, the system may trigger an alarm or notification that warns nearby persons or authorities in real time; this quick reaction can deter offenders while providing urgent support to victims. The report likely offers experimental findings assessing the system's ability to identify chain-snatching occurrences while minimizing false alarms [21], and may also address the system's weaknesses, areas for improvement, and future research directions.
A further document most likely presents a proposed approach for detecting snatch stealing [23]. It may describe the selection and extraction of low-level video features, such as motion analysis, object tracking, or other relevant information, that can be used to detect snatching incidents, and the authors may have investigated various strategies for discriminating between regular and snatch-stealing events. Given that the paper was delivered in 2010, its material reflects the research and technology available at the time [23]; more recent advances in computer vision, machine learning, and surveillance systems have likely pushed snatch-steal detection further. A later work presents an action-attribute modeling technique for automatically recognizing snatch-stealing incidents [24]. The technique analyzes the actions and characteristics displayed by persons in surveillance recordings to identify possible snatch-steal instances, with the goal of sending real-time alerts to security staff or law enforcement so that such crimes can be prevented or answered promptly. The document most likely outlines the methods and algorithms used, including the extraction of key characteristics, training a model on labeled data, and evaluating the proposed solution, and may also cover the datasets used for training and testing and the performance measures used to assess efficacy. Because the study was published in 2018, advances in the area may have occurred since, and other approaches may have been developed [24].

The next study describes the components of an integrated framework, including data collection, preprocessing, feature extraction, and crime detection [25]. The authors give experimental results on real-world data to illustrate the efficacy of their technique, showing that the framework can detect petty crimes quickly and accurately, allowing law enforcement authorities to respond more efficiently. Another work applies deep learning to the reliable detection of suspicious human behavior in surveillance videos [26]. By exploiting the capabilities of deep learning algorithms, the researchers aim to increase the accuracy and reliability of suspicious behavior detection. The study gives a full description of the proposed technique, which includes surveillance video preprocessing, feature extraction with CNNs, and classification of suspicious actions with Recurrent Neural Networks (RNNs) [27], [28]. The authors also discuss the difficulties of detecting suspicious behavior and propose strategies to overcome them.
A final study, from a different domain, examines the cap-snatching mechanism used by the yeast L-A double-stranded Ribonucleic Acid (RNA) virus [29]. Cap-snatching is a technique by which certain RNA viruses hijack the host's messenger RNA (mRNA) cap structure for viral RNA production. The authors characterize the particular cap-snatching method of the yeast L-A virus, giving deep insight into its molecular processes and investigating the viral factors involved and their interplay with host factors. This work contributes to the understanding of RNA virus replication strategies and sheds light on the complicated mechanisms involved in the reproduction of the yeast L-A double-stranded RNA virus [29]; its findings are useful for virology research.

Continued in next post......
 
  • Haha
  • Like
Reactions: 11 users
Balance of the paper on Wallet protection:

III. WALLET SNATCHING THROUGH EDGE IMPULSE TECHNOLOGY

This study aimed to investigate practical countermeasures to wallet-snatching incidents in public places. To achieve this, a dataset was collected, annotated, and submitted to the Edge Impulse platform, and a model was trained to recognize wallet-theft instances with an impressive 95% accuracy rate. Despite challenges such as limited data, the researchers used cutting-edge methods to enhance the training process and improve performance, accounting for camera angles, lighting conditions, and the pace of the grab to ensure accurate prediction. The research has immense potential for improving public safety and reducing theft, paving the way for future security protocols.
A. Edge Impulse Technology

Fig. 2. Flowchart of Edge Impulse.

Edge Impulse is a cutting-edge machine learning platform for developing and deploying intelligent applications on edge devices [1], [32]. It provides developers with an easy-to-use interface and a complete range of tools for collecting, processing, and analyzing data in order to create machine learning models. Edge Impulse enables machine learning at the edge, allowing devices to make real-time choices without requiring ongoing access to the cloud [3]. The platform supports a variety of edge devices, such as microcontrollers, development boards, and sensors, allowing developers to harness the potential of machine learning in resource-constrained contexts. Developers may use Edge Impulse to train and deploy models for a variety of applications such as predictive maintenance, anomaly detection, motion identification, and more. The platform also allows for the training and optimization of machine learning models utilizing common techniques such as neural networks, decision trees, and support vector machines. It provides an easy-to-use interface for configuring model parameters, evaluating model performance, and optimizing models for deployment on edge devices.
B. Spiking Neural Networks (SNNs)

SNNs are artificial neural networks inspired by the biological neural networks found in the brain. Unlike standard artificial neural networks, which operate on continuous-valued activations and learn through backpropagation, SNNs use discrete-time, event-driven processing.

Fig. 3. Architecture of a Spiking Neural Network (SNN).

SNNs can offer several benefits over typical neural networks, particularly for tasks involving temporal information processing, event-based data, and bio-inspired computing. Potential benefits include low energy consumption, improved temporal precision, and suitability for neuromorphic hardware implementations.

The key components of SNNs are as follows:
• Spiking Neurons: the network's fundamental building blocks. They integrate input spikes and emit output spikes based on an activation criterion.
• Spike Trains: instead of the continuous activations of typical neural networks, information in SNNs is represented as discrete spike trains, time-varying sequences of spikes.
• Synaptic Weight Updates: SNNs learn from data by modifying their synaptic weights. Learning in SNNs is often based on Spike-Timing-Dependent Plasticity (STDP), in which weight updates are determined by the relative timing of presynaptic and postsynaptic spikes (see the sketch after this list).
• Spike-Based Learning Rules: depending on the timing of the spikes, different learning rules are used to adjust synaptic strengths.
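The following is a minimal sketch of pair-based STDP in its generic textbook form, not Akida's specific learning rule; the time constants and learning rates are illustrative.

import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    # Pair-based STDP: potentiate when the presynaptic spike precedes the
    # postsynaptic spike (dt > 0), depress otherwise. Times are in ms;
    # the constants are illustrative, not hardware-specific.
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)    # causal pairing -> strengthen
    else:
        w -= a_minus * np.exp(dt / tau)    # anti-causal pairing -> weaken
    return float(np.clip(w, 0.0, 1.0))     # keep the weight bounded

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # pre before post: w rises
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # post before pre: w falls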
C. Akida FOMO (Faster Objects, More Objects)
The Akida FOMO paradigm runs on neuromorphic hardware created by BrainChip and inspired by the structure and function of the human brain. It provides real-time neural network inference on edge devices while reducing latency, increasing energy efficiency, and allowing massively parallel computation. The model uses SNNs to handle temporal and spatial input efficiently, replicating the behavior of biological neurons. This enables operations such as object recognition, gesture detection, and anomaly detection to run on edge devices, removing the need for cloud-based processing. The approach is especially beneficial in low-latency and data-privacy scenarios where continuous data transmission to remote servers is undesirable.
The neural network model is a 21-layer deep learning architecture for image processing tasks. It begins with an input layer of shape (None, 96, 96, 3), i.e., 96 × 96 pixels with 3 color channels. The model then includes 4 Conv2D layers, 4 BatchNormalization layers, and 5 Rectified Linear Unit (ReLU) activations, which process the input data and learn complex patterns, while two SeparableConv2D layers enhance feature extraction. This convolutional structure makes the model well suited to image-related tasks such as image classification and object detection, with each layer capturing important features and patterns from the input images.
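The paper does not reproduce the exact layer stack, but a rough Keras approximation of a FOMO-style model on 96 × 96 RGB input, built from the layer types listed above, might look like this (the filter sizes, strides, and two-class output head are assumptions):

import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, strides=1):
    # Conv2D -> BatchNormalization -> ReLU, the repeating unit described above.
    x = layers.Conv2D(filters, 3, strides=strides, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

inputs = layers.Input(shape=(96, 96, 3))
x = conv_block(inputs, 16, strides=2)   # 48 x 48
x = conv_block(x, 32)                   # 48 x 48
x = conv_block(x, 32, strides=2)        # 24 x 24
x = conv_block(x, 64, strides=2)        # 12 x 12
x = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(x)
logits = layers.SeparableConv2D(2, 1, padding="same")(x)  # per-cell class scores
model = tf.keras.Model(inputs, logits)  # output: coarse 12 x 12 detection grid
model.summary()

A FOMO-style head predicts object presence per grid cell rather than bounding boxes, which is what keeps the model small enough for constrained edge devices.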
Algorithm 1: Akida FOMO Model Inference
Require: Input data
Ensure: Inference result
 1: Load Akida FOMO model parameters
 2: Initialize input data
 3: Preprocess input data
 4: Convert input data to SNN format
 5: Initialize SNN state
 6: while not end of input data do
 7:     for each input spike do
 8:         Propagate spike through SNN
 9:         Update SNN state
10:     end for
11: end while
12: Perform output decoding on SNN state
13: return Inference result
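Rendered as Python, the event-driven loop of Algorithm 1 might look like the sketch below. The spike encoding, the leaky integrate-and-fire update, and the rate-based decoding are generic stand-ins for Akida's internals, not BrainChip's implementation.

import numpy as np

def lif_step(potential, weights, spike_idx, leak=0.9, threshold=1.0):
    # Propagate one input spike: add its synaptic weights, leak, then fire.
    potential = potential * leak + weights[spike_idx]
    fired = potential >= threshold
    potential[fired] = 0.0                    # reset neurons that fired
    return potential, fired

def snn_inference(spike_stream, weights, n_out):
    potential = np.zeros(n_out)               # "Initialize SNN state"
    spike_counts = np.zeros(n_out)
    for spike_idx in spike_stream:             # "while not end of input data"
        potential, fired = lif_step(potential, weights, spike_idx)
        spike_counts += fired                  # "Update SNN state"
    return int(np.argmax(spike_counts))        # output decoding: rate code

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.6, size=(8, 4))         # 8 inputs -> 4 output neurons
print(snn_inference(rng.integers(0, 8, size=50), w, n_out=4))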
D. Data Collection

Fig. 4. Images of dataset.

The dataset for this research was assembled from videos gathered on the internet, covering chain snatching, wallet snatching, and other forms of snatching. After building the dataset, we analyzed the videos to identify common patterns and behaviors among snatchers. We found that most snatchers targeted vulnerable individuals, such as the elderly or people walking alone at night, and tended to operate in specific areas, such as busy marketplaces or near public transportation hubs. With this information, we were able to develop a more targeted approach to preventing these crimes [31]. Edge Impulse streamlines the machine learning workflow with a step-by-step method comprising data collection, data preprocessing, model training, and model deployment, and offers several data intake methods, including direct sensor integration, data import, and integration with third-party services. One of its most prominent features is automatic data preparation: it provides a variety of signal-processing techniques and feature-extraction strategies for transforming raw data into features relevant to model training, which streamlines preprocessing and saves developers time.
Hopefully the final episode.😂🤣😂
E. Data Preprocessing

Fig. 5. Annotated images from the dataset.

In this method, we start by deleting duplicate images and filtering out data that does not fit our criteria; removing irrelevant material focuses the dataset on the images that matter most to the project. The next key step is annotating these images, which is essential for preparing the model for training. During annotation each image is given the proper tags or categories, which organizes the data and lets the machine learning system recognize patterns in the images so it can make precise predictions. The images are then resized to 96 × 96 pixels. Once the images have been annotated, we move on to training: from the annotated data the model learns to recognize and interpret the many patterns and characteristics in the images, and through a series of iterations and adjustments its understanding, and hence its predictions, steadily improve. In short, the process starts with data filtering to remove unwanted data and duplicates; each image is then labeled via annotation, which organizes the data and enables pattern recognition; finally, the model is trained on these annotated images, honing its predictive skill and deepening its comprehension of the visual information.
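The duplicate-removal step could be as simple as hashing file bytes, as in this sketch (it catches exact duplicates only; near-duplicate frames would need perceptual hashing, and the folder name is hypothetical):

import hashlib
from pathlib import Path

def remove_exact_duplicates(image_dir):
    # Keep the first file seen for each content hash; delete the rest.
    seen = {}
    for path in sorted(Path(image_dir).glob("*.jpg")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()                      # byte-identical duplicate
        else:
            seen[digest] = path
    return list(seen.values())

# kept = remove_exact_duplicates("dataset/frames")  # hypothetical folder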
F. Implementation
After data preprocessing, the dataset was passed to three modules for training, discussed below:
1) Creating Impulse: At this point we begin the procedures needed to train the model. First we made a few configuration changes, including resizing the entire dataset to 96 × 96 pixels, using a resize mode that fits the shortest axis of each image to ensure compatibility. We then used the BrainChip Akida model to extract features from the dataset's images, drawing out the important details and useful information in each one. All of these extracted features are preserved and used to train the model later. These steps set up an efficient training procedure that allows the model to learn and produce precise predictions from the supplied information.
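A sketch of the resize step using Pillow is shown below: it center-crops to a square on the shortest axis and scales to 96 × 96, also normalizing color depth to RGB as described in the next step. The "fit shortest axis" behavior mirrors Edge Impulse's resize option, but the code itself is a generic approximation and the file name is hypothetical.

from PIL import Image

def resize_shortest_axis(path, size=96):
    # Center-crop to a square on the shortest axis, then scale to size x size.
    img = Image.open(path).convert("RGB")      # also normalizes color depth
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    return img.crop((left, top, left + side, top + side)).resize((size, size))

# thumb = resize_shortest_axis("frame_0001.jpg")  # hypothetical file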
2) Image: In the next stage the color depth parameter is set to the Red-Green-Blue (RGB) format, enabling further analysis and manipulation. We then ran the feature extraction procedure, identifying the salient characteristics present in the dataset, and visualized them in the feature explorer shown in Fig. 6, which offers a clear depiction of the retrieved characteristics.

Fig. 6. Feature explorer.

3) Object Detection: In this phase we set the training parameters: a validation set of 20%, a learning rate of 0.001, and 100 epochs. We then trained the dataset with the Akida FOMO model and obtained a remarkable training accuracy of 95.0%; Fig. 7 shows the detailed measurements from this training run. In addition, we created a quantized version of the model. Quantization lets the model run in real time without hiccups even on devices with modest Random-Access Memory (RAM) and storage, greatly reducing memory and storage needs while preserving the model's ability to carry out its task, so it can be deployed effectively in resource-constrained contexts.
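The stated hyperparameters correspond to a standard supervised training run followed by post-training quantization. The sketch below mirrors them in Keras with placeholder data and a toy network; the actual pipeline runs through Edge Impulse and BrainChip's Akida tooling, and the TFLite conversion here only illustrates the general quantization step, not Akida's conversion.

import numpy as np
import tensorflow as tf

# Placeholder tensors standing in for the annotated 96 x 96 RGB dataset.
x = np.random.rand(100, 96, 96, 3).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, validation_split=0.2, epochs=100, verbose=0)  # paper's settings

# Post-training quantization shrinks the model for RAM-constrained devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()
print(len(tflite_bytes), "bytes after quantization")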
Fig. 7. Model Training Accuracy.

4) Classification: In this part we evaluated the model on held-out photos from the dataset and obtained a testing accuracy of 90%, shown in Fig. 8. We believe this can improve in the future with a larger training dataset, which would allow the model to be fine-tuned and trained on additional features. Overall, we are satisfied with the model's performance and optimistic about its potential for further improvement.

Fig. 8. Model Testing Accuracy.

Fig. 9. Model output.
IV. CONCLUSION

In conclusion, our study used a sizable dataset covering a variety of incidents, including chain snatching and wallet snatching. The videos in the collection were first broken down into individual frames, and redundant and duplicate images were removed; we then annotated the images and trained our model. The model, based on Akida FOMO, reached an accuracy of 95.0% during training and a testing accuracy of 90% afterwards. We can therefore state with confidence that the Edge Impulse platform and the BrainChip Akida FOMO model contributed significantly to the insights produced by this research, and the accuracy achieved shows that incorporating Akida FOMO was a success. Although preparing the dataset and annotating the images took considerable time and effort, the reliable outcome justified it. We believe future research in computer vision and object detection has a great deal of potential when BrainChip's cutting-edge technology is paired with the Edge Impulse platform; our positive results point to the trajectory of this combination and its potential to open new directions for future developments in the area.

V. FUTURE DIRECTION

There are a number of promising directions in computer vision and object detection that can build on the results of our study with the Edge Impulse platform and the Akida FOMO model.

Improved Model Performance: Although our model achieved strong accuracy, there is still room for improvement. Future work can tune model parameters, enhance training methods, and use larger and more varied datasets to push accuracy further.

Real-time Object Recognition: Combined with Akida FOMO's capabilities, the Edge Impulse platform opens opportunities for real-time object recognition. Broadening our study could yield a system that recognizes chain-snatching or wallet-snatching incidents in real time, with a substantial impact on public safety and crime prevention initiatives.

Generalisation and Transfer Learning: Transfer learning is an attractive area for future study. Using models pre-trained on related object detection tasks and fine-tuning them on our dataset may raise accuracy and speed up training, and investigating the model's capacity to generalize across scenarios and contexts would increase its usefulness and applicability.

Scalability and Deployment: As the work matures, the Akida FOMO model's scalability and deployment must be considered. Optimizing the architecture and training procedure to reduce computing needs and memory footprint would ease deployment on resource-constrained edge devices such as security cameras or smartphones, carrying the research's results beyond the boundaries of the laboratory.

Expansion to Other Applications: Although our study focused on chain-snatching and wallet-snatching incidents, the approaches and techniques developed can be applied to many other fields. Exploring the model's ability to detect other objects or events, such as pedestrians or traffic signs, can advance computer vision more broadly and improve safety protocols in a variety of scenarios.

Finally, the effective integration of Akida FOMO with the Edge Impulse platform opens the door to interesting new research trajectories. By continually improving the model, investigating real-time applications, using transfer learning, ensuring scalability, and extending to new domains, we can advance computer vision and object detection and ultimately help society with increased safety and efficiency.
 
  • Like
  • Fire
  • Love
Reactions: 43 users
Hi All
Here is another paper for which I cannot provide a reference, and it might end up spread over a couple of posts. Relax Pom, I won't be asking questions. My opinion only DYOR Fact Finder:

EDGX-1: A New Frontier in Onboard AI Computing with a Heterogeneous and Neuromorphic Design

1st Nick Destrycker, EDGX, Belgium
2nd Wouter Benoot, EDGX, Belgium
3rd João Mattias, EDGX, Belgium
4th Ivan Rodriguez, BSC, Spain
5th David Steenari, ESA, The Netherlands

Abstract—In recent years, the demand for onboard artificial intelligence (AI) computing in satellites has increased dramatically, as it can enhance the autonomy, efficiency, and capabilities of space missions. However, current data processing units (DPUs) face significant challenges in handling the complex AI algorithms and large datasets required for onboard processing. In this paper, we propose a novel heterogeneous DPU architecture that combines several different processing units in a single design. The proposed DPU leverages the strengths of each processing unit to achieve high performance, flexibility, and energy efficiency, while addressing the power, storage, bandwidth, and environmental constraints of space. We present the design methodology for the DPU, including the hardware architecture and AI capabilities. The product represents a potentially significant advancement in onboard AI computing for satellites, with applications across a wide range of space missions.
Index Terms—Onboard AI processing, GPU, neuromorphic, heterogeneous computing, in-orbit retraining, AI acceleration, low power, high performance, continuous online learning

I. INTRODUCTION

The demand for artificial intelligence (AI) in space applications has grown rapidly in recent years. However, the limited computational resources available onboard spacecraft pose a significant challenge for AI implementation [1] [2]. To address this challenge, data processing units (DPUs) that leverage heterogeneous computing platforms have emerged as a promising solution for high-performance AI computing [3] [4] [5]. These platforms can exploit the strengths of each processing unit to accelerate AI computations and achieve higher energy efficiency than traditional single-processor systems.

In this paper, we introduce EDGX-1, a novel DPU that combines classic computing units and neuromorphic processing capabilities for high-performance AI computing in space applications, designed for class V type missions. EDGX-1 is a heterogeneous computing platform that integrates a CPU, GPU, TPU, FPGA, and NPU. It allows the creation of flexible, dynamic AI pipelines to infer and even retrain AI/ML algorithms onboard. Through the NPU, the EDGX-1 natively supports continual learning in ML models.

The neuromorphic computing capability of the EDGX-1 DPU is a significant innovation, designed to mimic the neural structure of the brain [6] and thereby perform certain computations much faster and more efficiently than traditional computing platforms. The integration of a neuromorphic processing unit in the EDGX-1 DPU enhances the system's ability to handle AI applications that require real-time processing [7], low power consumption, and high data bandwidth.

The EDGX-1 targets the satcom and earth observation markets and is well suited to a variety of space applications, including optical imaging, cognitive radio, and cognitive SAR.

For optical imaging, the heterogeneous computing architecture and neuromorphic capability enhance the system's ability to process high-resolution images and data efficiently. In cognitive radio applications such as dynamic spectrum allocation and interference management, the EDGX-1 can perform low-power spectrum monitoring and analysis and real-time signal classification and characterization. In cognitive SAR applications, the EDGX-1 can leverage its unique processing architecture to provide real-time SAR instrument zooming, object classification, and accelerated data processing in support of environmental monitoring and rapid disaster response. Overall, the EDGX-1 offers a powerful and versatile platform for a wide range of space applications, and its flexible architecture allows swift adaptation to other spacecraft payloads and needs.

The remainder of this paper is structured as follows. Section II addresses the current challenges faced by data processing units and potential solutions. Section III provides an overview of the EDGX-1 DPU's heterogeneous computing architecture and neuromorphic computing capability. Section IV describes the AI operational and retraining environment onboard the EDGX-1. Section V concludes the paper and discusses future work.
II. ONBOARD AI CHALLENGES AND SOLUTIONS
A. Meeting the Power Budget of Microsatellites and CubeSats
As the power budget of microsatellites and CubeSats is often minimal, the EDGX-1 provides several ways to configure its power consumption, so system engineers can fully control and limit its power draw. For the FPGA, the total power consumption is decided during bitstream generation. Similarly, some SoC subsystems can be turned off in the boot configuration, and some can even be changed on the fly, tailoring the device to a dynamic in-orbit power budget. Drastically changing power modes raises the question of whether it would affect the radiation characteristics of the device; however, preliminary tests by Rodriguez et al. [8] indicate that the radiation characteristics remain consistent across power modes.

Since the demand for data acquisition keeps rising, future data processing systems will need to process ever more data within the same power budget. The EDGX-1 addresses this by integrating the Akida processor from BrainChip. This digital neuromorphic chip mimics the human brain to achieve high energy efficiency: it computes in an event-driven fashion, drawing power only when needed, which reduces overall consumption and makes it ideal for satellites with a limited power budget.

Finally, the design of the EDGX-1 is not tied to the NVIDIA Orin embedded SoC alone. It also supports several previous generations of NVIDIA embedded devices, such as the TX2 and the Xavier family, which can lower power consumption further at the cost of performance. Even within the same NVIDIA generation, the EDGX-1 can accommodate the Orin Nano, delivering lower power consumption at reasonable performance. All these SoCs share the same hardware interface and software stack, so the EDGX-1 can meet a wide range of power requirements and adapt to a large spectrum of smallsat, microsat, and CubeSat mission scenarios.
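The paper does not describe the configuration interface. On stock NVIDIA Jetson modules, power profiles are typically switched with NVIDIA's nvpmodel utility; a thin Python wrapper might look like this (the mode IDs are board-specific assumptions):

import subprocess

def set_power_mode(mode_id: int) -> None:
    # Switch the active nvpmodel profile (requires root; IDs vary per board).
    subprocess.run(["sudo", "nvpmodel", "-m", str(mode_id)], check=True)

def current_power_mode() -> str:
    # Query the currently active profile.
    out = subprocess.run(["nvpmodel", "-q"], check=True,
                         capture_output=True, text=True)
    return out.stdout.strip()

# e.g. drop to a hypothetical low-power profile when the budget tightens:
# set_power_mode(1); print(current_power_mode())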
B. Reliability of COTS Systems-on-Chip in the Space Environment
Radiation can impact semiconductors [9] and affect device reliability [10]. High-energy particles can disturb the charge in a transistor's layers, causing unexpected behaviour known as Single-Event Effects (SEEs) [11]. SEEs can be classified as Single-Event Upsets (SEUs) or Single-Event Functional Interrupts (SEFIs) [12]: in the former, the SEE manifests only as a bit-flip in memory, while in the latter, a bit-flip in control logic causes a loss of device functionality. While neither physically or permanently harms the device, a radiation event that holds open a MOSFET in the power-regulation circuitry can result in a destructive Single-Event Latch-up (SEL). Most SEEs, including SELs, are transient errors that can be cleared with a power cycle.

Radiation testing is crucial for assessing the reliability of electronic devices in safety-critical systems. These tests use protons, neutrons, heavy ions, two-photon absorption [13], or gamma radiation (also known as Total Ionizing Dose (TID) testing) to accelerate the appearance of SEEs for analysis. TID testing can also shed light on device ageing by accumulating radiation faster than would occur naturally. Neutrons are commonly used for terrestrial applications, while protons and heavy ions are preferred for space systems.

The EDGX-1 targets the NVIDIA Jetson Orin NX. Although this device has not yet been radiation tested, some of NVIDIA's older modules have been, for example the Jetson TX2 [14] and the Jetson Xavier NX [8]. The latter is relevant for the EDGX-1 since the Xavier directly preceded the Orin. Even though the radiation characteristics cannot simply be extrapolated to the Orin, most of its subsystems and processor architecture are similar to the Xavier's. Radiation tests on the Xavier [8], using the Reliability, Availability, and Serviceability (RAS) features from ARM, identified the cache tags as the leading cause of SEFIs: they are protected only by a single parity bit, forcing complete reboots because errors cannot be recovered. In the Orin family, NVIDIA changed the processor to the Cortex-A78AE [15], whose cache tags are protected by error correction codes with 1-bit correction and 2-bit detection. This change could decrease sensitivity to SEFIs, but further radiation tests of the Orin will be needed to verify this hypothesis.
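The gap between parity-only tags and the Cortex-A78AE's 1-bit-correct/2-bit-detect protection can be illustrated with a toy SECDED (extended Hamming) code. This is a textbook construction for illustration, not NVIDIA's implementation:

def secded_encode(d):
    # d: 4 data bits -> 8-bit extended Hamming codeword (SECDED).
    c = [0] * 8                    # c[1..7] Hamming positions, c[0] overall parity
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    c[0] = c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6] ^ c[7]
    return c

def secded_decode(c):
    c = list(c)
    s = 4 * (c[4]^c[5]^c[6]^c[7]) + 2 * (c[2]^c[3]^c[6]^c[7]) + (c[1]^c[3]^c[5]^c[7])
    overall = c[0]^c[1]^c[2]^c[3]^c[4]^c[5]^c[6]^c[7]
    if s and overall:              # single-bit error: correctable
        c[s] ^= 1
    elif s:                        # double-bit error: detected, not correctable
        return None
    return [c[3], c[5], c[6], c[7]]

word = secded_encode([1, 0, 1, 1])
word[5] ^= 1                       # one upset -> corrected
print(secded_decode(word))         # [1, 0, 1, 1]
word[6] ^= 1                       # a second upset -> detected only
print(secded_decode(word))         # None

A single parity bit, by contrast, can only detect that one of these cases occurred, which is why a parity-protected cache tag forces a reboot instead of an in-place repair.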
C. Availability of System in Space Environment
Devices built with these COTS components have a high chance of encountering SEFIs, which could affect their availability. A lack of availability can cause the loss of data or of mission return. Availability is an open problem that has also been explored in the automotive domain. Possible solutions have been proposed, such as redundant kernels in the GPU section [16] or register protection for the CPU [17]. A more resilient external device could also act as the interface between the payload and the COTS device, ensuring no data loss in the case of a SEFI [18]. Because the EDGX-1 is a very computationally capable device, it can catch up and re-compute at an increased rate after a recovery. This way, the system still processes all data, safeguarded by the resilient supervisor. This concept has been explored by Kritikakou et al. [19] with promising results.
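As a rough illustration of this supervisor pattern, the Python sketch below buffers payload frames in a resilient front end, power-cycles the COTS device when its heartbeat stops (a suspected SEFI), and resubmits the backlog so the device can catch up. The class, callbacks and timeout are hypothetical; the paper does not specify an interface.

```python
import time
from collections import deque

class ResilientSupervisor:
    """Hypothetical front end between payload and COTS device: buffers
    frames, power-cycles on a missed heartbeat, resubmits the backlog."""

    def __init__(self, power_cycle, resubmit, timeout_s: float = 2.0):
        self.power_cycle = power_cycle         # callback to cycle the device
        self.resubmit = resubmit               # callback to re-queue frames
        self.timeout_s = timeout_s
        self.backlog = deque()                 # frames not yet acknowledged
        self.last_heartbeat = time.monotonic()

    def ingest(self, frame) -> None:
        self.backlog.append(frame)             # buffer before processing

    def heartbeat(self) -> None:
        self.last_heartbeat = time.monotonic()

    def acknowledge(self, n: int) -> None:
        for _ in range(min(n, len(self.backlog))):
            self.backlog.popleft()             # drop only confirmed frames

    def check(self) -> None:
        """Call periodically; recovers from a suspected SEFI."""
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            self.power_cycle()                 # clears the transient fault
            self.resubmit(list(self.backlog))  # catch-up recompute
            self.heartbeat()
```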
D. In-orbit flexibility, adaptability and AI retrainability
Space missions are subject to a wide array of factors that
can affect the performance of AI algorithms. Variations in data
distributions, changes in the spacecraft’s position, sensor drift,
unknown new data, unforeseen new environment parameters
and evolving mission objectives can all contribute to making
pre-trained AI models less effective over time. Therefore, the
challenge lies in developing a mechanism to update and adapt
AI models without requiring frequent costly communication
with ground stations. Enabling AI models to adapt and retrain
in orbit autonomously is crucial for maintaining optimal
performance throughout the lifetime of the mission.
Ensuring flexibility and adaptability in the data processing part of the payload is crucial for securing a future-proof satellite design for next-generation missions. The world of AI and processing technology is changing rapidly, rendering today's data processing units obsolete tomorrow. Implementing future technology is critical to allow for changing business models and to keep relevance in today's fast-paced environment.
The EDGX-1 DPU addresses the challenge of in-orbit adaptability through its innovative capability to retrain AI algorithms directly onboard the spacecraft. This capability stems from the heterogeneous architecture, which combines various processing units that can be reprogrammed, enabling dynamic adjustments and updates to AI models without extensive external intervention. Both conventional AI retrainability and continual online learning through neuromorphic technology are available onboard the EDGX-1 DPU, as further explained in Section IV.
Incorporating onboard AI retraining within the EDGX-1
DPU enhances the overall capabilities of space missions by
ensuring that AI models remain effective and accurate in
ever-changing conditions. This dynamic adaptability not only
optimizes decision-making but also reduces the dependence
on ground-based intervention for model updates.

Fig. 1. Hardware Architecture.
III. EDGX-1 HARDWARE ARCHITECTURE

A low-power heterogeneous computing design that combines a GPU, FPGA, CPU, TPU, and NPU can potentially solve the problems with current data processing units and boost onboard AI computing performance for satellites.
The general hardware architecture consists of multiple printed circuit boards (PCBs), designed according to the PCIe/104 form factor specification, which brings a wide range of benefits for embedded computing applications, including high-speed connectivity, compact size, rugged design, scalability, and interoperability.
Internal power and data transmission occur through the PCIe/104 triple-branch stacking connector, which acts as the structural and electrical backbone of the full DPU. This data bus provides power at 3.3 V, 5 V and 12 V and carries both high-speed (PCIe x1, PCIe x4, USB 2.0/3.0) and low-speed (CAN, I2C) data links.

The overall DPU is therefore arranged as a modular stack of these PCB modules, as seen in Figure 1, where each PCB provides specialized computing capabilities and can be included or omitted according to the needs and constraints of the given mission. Multiples of the same type of unit can be stacked to provide enhanced computational power; a configuration sketch follows below.
Every processing board provides two high-speed external interfaces (ETH, USB 2.0/3.0) for data transfer as well as two low-speed interfaces (CAN, UART) for command and control. Additional GPIO pins are available for customisation. The power supply and distribution on the EDGX-1 includes overvoltage, overcurrent and latch-up protection.
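The following Python sketch illustrates how such a mission-specific stack might be composed against a power budget. The module names and wattages are invented placeholders, not EDGX datasheet values.

```python
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    power_w: float

def build_stack(budget_w: float, *modules: Module) -> list[Module]:
    """Greedily assemble a DPU stack that fits the mission power budget."""
    stack, used = [], 0.0
    for m in modules:
        if used + m.power_w <= budget_w:
            stack.append(m)
            used += m.power_w
    return stack

stack = build_stack(
    15.0,
    Module("core", 2.0),          # mandatory core module, listed first
    Module("soc-orin-nx", 10.0),
    Module("fpga-kintex", 6.0),   # skipped here: would exceed 15 W
    Module("npu-akd1000", 1.0),
)
print([m.name for m in stack])    # ['core', 'soc-orin-nx', 'npu-akd1000']
```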

Fig. 2. Core module schematic.

A. Core Module
This module serves as the core of the DPU stack and is the only essential module. It provides the primary interface between the EDGX-1 DPU and the spacecraft OBC. A ruggedized, radiation-hardened-by-design (RHBD) CPU is responsible for command and data handling dispatching for the DPU stack, as well as acting as a watchdog over the different DPU modules. The CPU controls the PCIe switch and is responsible for establishing and managing the PCIe network set up within the PCIe/104 stacking connectors.
As seen in Figure 2, the Core module power supply includes redundant power connectors and latch-up protection; power management of the DPU modules is handled by the CPU. The board contains boot drives in hot redundancy. The SSD storage on the Core board can be accessed by other DPU modules in the stack to serve as cold storage. In the event that additional cold storage is necessary, data storage expansion modules can be integrated into the DPU stack and accessed via PCIe links.
B. System-on-Chip Module
The System-on-Chip (SoC) module is designed to serve as a general AI workhorse, supporting a wide variety of ML tasks via its powerful and diverse computing architecture. The processing capabilities of CPUs, GPUs and Tensor Processing Units (TPUs) are embedded into the NVIDIA Jetson Orin NX. It combines the Cortex-A78AE, the highest-performing CPU from ARM to date with built-in safety features, with the high performance offered by the Ampere GPU and the two NVDLA 2.0 AI inference cores. The SoC runs a custom operating system built on Jetson Linux to fully leverage the capabilities of the hardware and provide a development- and integration-friendly environment while minimising computational overhead.

Fig. 3. System-on-Chip module schematic.

Several features of this SoC make it a suitable choice for onboard AI acceleration in a space mission context. The SoC is designed, from a thermal and mechanical perspective, for edge processing in a harsh industrial environment. On the software side, CPU safety features include the Reliability, Availability and Serviceability (RAS) protocols, which allow for error tracking and correction and thus enable the SoC to handle single-event effects (SEEs) due to radiation exposure. Variable power modes can be configured to attain optimal power budgets for different applications and mission/spacecraft constraints; a sketch of switching these modes follows below.
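On Jetson Linux, power modes are typically switched with the nvpmodel utility. The wrapper below is a minimal sketch, assuming nvpmodel is on the PATH and run with sufficient privileges; the mode IDs are board-specific placeholders defined in the board's nvpmodel.conf, not values from this paper.

```python
import subprocess

def set_power_mode(mode_id: int) -> None:
    """Switch the Jetson to one of its predefined power modes (needs root)."""
    subprocess.run(["nvpmodel", "-m", str(mode_id)], check=True)

def query_power_mode() -> str:
    """Report the currently active power mode."""
    result = subprocess.run(["nvpmodel", "-q"], check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

# Example: drop to a lower-wattage mode when the satellite's power budget
# shrinks in-orbit; mode 1 is a placeholder ID.
set_power_mode(1)
print(query_power_mode())
```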

Fig. 4. FPGA module schematic.

C. FPGA Module
The FPGA module has the primary function of providing flexible and customizable hardware acceleration for specific AI/ML tasks, as well as general-purpose high-throughput data processing as commonly found in spacecraft DPUs. The programmable logic and Intellectual Property (IP) cores can be tailored to the specific use case, such as Digital Signal Processing and ML inferencing.
The FPGA module layout described in Figure 4 is based around the Xilinx Kintex UltraScale series of FPGAs. Several FPGA modules can be used in a DPU stack, and the dedicated high-speed inter-board connection interface allows for direct data exchange between FPGA units in the DPU stack through high-speed serial links using protocols such as Aurora.
D. Neuromorphic Module

The Neuromorphic module will contain a dedicated Neuromorphic Processing Unit (NPU) which enables the implementation, training and execution of neuromorphic AI algorithms. The hardware implementation of these algorithms could be supported through various implementations of NPUs, ranging from FPGAs running neuromorphic IP cores and programmable logic of neuromorphic compute units to dedicated neuromorphic application-specific integrated circuits (ASICs).

Fig. 5. Neuromorphic module schematic.
The current module design uses the BrainChip Akida AKD1000, a neuromorphic ASIC. This chip has a fabric of 80 interconnected neuron cores, providing next-generation classic CNN acceleration, efficient continuous online and on-chip learning, one-shot learning and the execution of spiking neural network models. Leveraging these technologies results in a module with high execution speed, low power consumption and enhanced operability, suitable for energy-efficient, low-latency or real-time data processing applications.

Figure 5 shows a detailed schematic of the Neuromorphic
module. The main communication interface for the AKD1000
is the PCIe x1 lane, although USB 2.0 and I2C are also
natively supported. This module additionally contains a local
CPU which stores the drivers and carries out data handling for
the module’s external data interfaces.
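For orientation, a minimal host-side sketch of dispatching inference to the AKD1000 over its PCIe interface is shown below, using BrainChip's akida Python package. The entry points used (devices, Model, map, forward) reflect our reading of that package's documentation and should be treated as assumptions, as should the model file name and input shape.

```python
import numpy as np
import akida  # BrainChip MetaTF runtime; API names assumed from its docs

# Enumerate Akida hardware reachable from this host (e.g. over PCIe x1).
device = akida.devices()[0]

# "model.fbz" is a placeholder for a pre-converted Akida model file.
model = akida.Model("model.fbz")
model.map(device)                      # place the layers on the NPU fabric

# Akida models consume uint8 tensors; the shape is model-dependent.
frame = np.zeros((1, 64, 64, 1), dtype=np.uint8)
potentials = model.forward(frame)      # event-driven inference on-chip
print(potentials.shape)
```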
E. Overall Architectural Evaluation
By combining these different processing units, a low-power heterogeneous computing design can provide a balance of performance, flexibility, and energy efficiency that is necessary for onboard AI computing in satellites. This approach can address the limited processing power, storage capacity, power constraints, and radiation hardening requirements while also maximizing bandwidth and handling the extreme environmental conditions of space.

Fig. 6. EDGX-1 Operational and Retraining Environment.

However, implementing such a design would require careful
consideration of factors such as power consumption, size,
weight, and cost, as well as the specific requirements of the
satellite mission. Additionally, developing software that can
effectively utilize and coordinate the different processing units
would be crucial for achieving optimal performance.
IV. EDGX-1 ARTIFICIAL INTELLIGENCE
The EDGX-1 has a heterogeneous design that enables users to create flexible, powerful AI pipelines catering to a wide range of complex tasks. Its high-performance components even allow for onboard AI inference and retraining inside the pipelines, ensuring that the AI models can adapt and improve over time without requiring constant updates from external sources. Furthermore, the neuromorphic capabilities of the EDGX-1 not only empower it with advanced processing abilities but also allow for low-power onboard continual learning during operations. In this way, the EDGX-1 can pioneer the development of autonomous onboard AI.

A. Retraining
The SoC board supports complete, unsupervised onboard retraining to remove the need for frequent on-ground updates. Onboard retraining spares the EDGX-1 from spending valuable bandwidth on transferring weight updates. Instead, a new model is trained onboard in parallel with operational execution. Once the new model finishes training, it can take over operations. In this way, the model calibrates itself autonomously to real-life data. As Figure 6 depicts, an iterative student-teacher cycle drives the onboard retraining. Whilst the NN runs inference on the incoming data, the system stores all data and predictions as pseudo-labels in a dedicated dataset. At moments when enough power is available, an identical NN trains on the pseudo-labelled dataset. The onboard retraining thus takes the form of a self-supervised Knowledge Distillation (KD) process. If desired, the system can mix in part of the original dataset to combat catastrophic forgetting. The training makes full use of the available GPU whilst the inferencing process continues to work uninterrupted on the AI accelerators of the SoC. The student is periodically evaluated on a reference dataset and takes over as teacher should it outperform the current one. Depending on whether full retraining or partial fine-tuning is needed, the system can freeze the feature layers of the student, thus exploiting a Transfer Learning (TL) approach to maximise efficiency.
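The loop below is a minimal PyTorch sketch of this student-teacher cycle under stated assumptions: the tiny network, buffer handling and promotion test are invented for illustration and stand in for the paper's (unpublished) models and datasets.

```python
import copy
import torch
import torch.nn.functional as F

teacher = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 4))
student = copy.deepcopy(teacher)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

buffer_x, buffer_y = [], []                      # pseudo-labelled dataset

def infer_and_log(x):
    """Operational path: teacher inference, inputs + predictions logged."""
    with torch.no_grad():
        y = teacher(x)
    buffer_x.append(x)
    buffer_y.append(y)                           # soft outputs as pseudo-labels
    return y.argmax(dim=-1)

def distill_step():
    """Surplus-power path: distil the pseudo-labels into the student (KD)."""
    x, y = torch.cat(buffer_x), torch.cat(buffer_y)
    opt.zero_grad()
    loss = F.kl_div(F.log_softmax(student(x), dim=-1),
                    F.softmax(y, dim=-1), reduction="batchmean")
    loss.backward()
    opt.step()
    return loss.item()

def maybe_promote(ref_x, ref_y):
    """Student replaces the teacher if it wins on the reference set."""
    global teacher
    with torch.no_grad():
        acc_t = (teacher(ref_x).argmax(-1) == ref_y).float().mean()
        acc_s = (student(ref_x).argmax(-1) == ref_y).float().mean()
    if acc_s > acc_t:
        teacher = copy.deepcopy(student)

# Partial fine-tuning, as described above: freeze the feature layers.
for p in student[0].parameters():
    p.requires_grad = False
```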

B. Continual Learning
The NPU board naturally supports continual learning to adapt the neuromorphic model to any drift in the data. Hence, users can balance the plasticity and stability of their models. During inference, the neuromorphic models automatically learn to cope with dynamic conditions. In this way, the EDGX-1 can guarantee smooth operations even under sudden catastrophic changes, such as a sensor failure.
As Figure 6 depicts, a Spiking Neural Network (SNN) learns completely unsupervised from the datastreams it perceives. The local weight-update paradigm, Spike-Timing-Dependent Plasticity (STDP), enables the SNN to recognise new patterns and behaviours in the data. Because the field of neuromorphic computing is still in the research phase, initially coupling it with conventional AI will allow for early integration in real applications. The SoC can utilise a traditional NN encoder to transform the data into a latent space representation. This latent space facilitates converting the data into input spikes for the SNN.
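As a toy illustration of the two steps just described, the sketch below implements a textbook pair-based STDP update and a Bernoulli rate encoder that turns a latent vector into spike trains. The constants are generic values from the STDP literature, not Akida's on-chip learning parameters.

```python
import numpy as np

A_PLUS, A_MINUS, TAU_MS = 0.01, 0.012, 20.0      # textbook values, not Akida's

def stdp_dw(t_pre_ms: float, t_post_ms: float) -> float:
    """Weight change for a single pre/post spike pair."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:                                   # pre fires before post: potentiate
        return A_PLUS * np.exp(-dt / TAU_MS)
    return -A_MINUS * np.exp(dt / TAU_MS)        # post before pre: depress

def rate_encode(latent: np.ndarray, steps: int, seed: int = 0) -> np.ndarray:
    """Bernoulli spike raster from a [0, 1]-normalised latent vector,
    mirroring the NN-encoder-to-spikes step described above."""
    rng = np.random.default_rng(seed)
    p = np.clip(latent, 0.0, 1.0)
    return rng.random((steps, latent.size)) < p  # shape: (steps, features)

print(stdp_dw(0.0, 5.0))    # positive: a causal pair strengthens the synapse
print(stdp_dw(5.0, 0.0))    # negative: an anti-causal pair weakens it
```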

V. CONCLUSION

As the exponential growth of AI and high-performance data processing continues, the demand for more powerful and more efficient data processing units increases equally fast. This work details some of the greatest challenges for onboard AI in the space industry and corresponding potential solutions. The paper introduces EDGX-1, a new onboard AI computer with a heterogeneous design and neuromorphic capabilities: a new architecture that solves the challenges the industry faces whilst keeping power dissipation low and pushing the limits of high-performance AI computing.
 
  • Like
  • Fire
  • Love
Reactions: 31 users
Hi All
Here is another paper that I cannot provide a reference and might end up being over a couple of posts. Relax Pom I won't be asking questions. My opinion only DYOR Fact Finder:

EDGX-1: A New Frontier in Onboard AI
Computing with a Heterogeneous and

Neuromorphic Design

1
st Nick Destrycker
EDGX
Belgium
2
nd Wouter Benoot
EDGX
Belgium
3
rd Joao Mattias ̃
EDGX
Belgium
4
th Ivan Rodriguez
BSC
Spain

5
th David Steenari
ESA
The Netherlands

Abstract—In recent years, the demand for onboard artificial

intelligence (AI) computing in satellites has increased dramati-
cally, as it can enhance the autonomy, efficiency, and capabilities

of space missions. However, current data processing units (DPUs)
face significant challenges in handling complex AI algorithms and
large datasets required for onboard processing. In this paper, we
propose a novel heterogeneous DPU architecture that combines
several different processing units into a single architecture. The
proposed DPU leverages the strengths of each processing unit to
achieve high performance, flexibility, and energy efficiency, while
addressing the power, storage, bandwidth and environmental
constraints of space. We present the design methodology for the
DPU, including the hardware architecture and AI capabilities.
The product represents a potentially significant advancement in
the field of onboard AI computing in satellites, with applications
in a wide range of space missions.
Index Terms—Onboard AI Processing, GPU, Neuromorphic,
Heterogeneous computing, in-orbit retraining, AI Acceleration,
low power, high performance, continious online learing

I. INTRODUCTION

The demand for artificial intelligence (AI) in space ap-
plications has been growing rapidly in recent years. How-
ever, the limited computational resources available onboard

spacecrafts have posed a significant challenge for AI imple-
mentation [1] [2]. To address this challenge, data processing

units (DPUs) that leverage heterogeneous computing platforms
have emerged as a promising solution for high-performance
AI computing [3] [4] [5]. These platforms can leverage the
strengths of each processing unit to accelerate AI computations
and achieve higher energy efficiency compared to traditional
single-processor systems.
In this paper, we introduce EDGX-1, a novel DPU that

combines classic computing units and neuromorphic process-
ing capabilities for high-performance AI computing in space

applications designed for class V type missions. EDGX-
1 is a heterogeneous computing platform that integrates a

CPU, GPU, TPU, FPGA and NPU. It allows the creation of
flexible, dynamic AI pipelines to infer and even retrain AI/ML
algorithms onboard. Through the NPU, the EDGX-1 naturally
supports continual learning in the ML models.
The neuromorphic computing capability of the EDGX-1
DPU is a significant innovation which is designed to mimic
the neural structure of the brain [6], enabling them to perform
certain computations much faster and more efficiently than

traditional computing platforms. The integration of a neuro-
morphic processing unit in the EDGX-1 DPU can enhance the

system’s ability to handle AI applications that require real-
time processing [7], low power consumption and high data

bandwidth.
The EDGX-1 is targeted towards the satcom market as well
as the earth observation market and is well-suited for a variety
of space applications, including optical imaging, cognitive
radio, and cognitive SAR.

For optical imaging, the heterogeneous computing archi-
tecture and neuromorphic computing capability can enhance

the system’s ability to process high-resolution images and

data efficiently. In cognitive radio applications such as dy-
namic spectrum allocation and interference management, the

EDGX-1 can be used for low power spectrum monitoring,
analysis and realtime signal classification and charactarization.
In cognitive SAR applications, the EDGX-1 can leverage
its unique processing architecture to provide realtime SAR
instrument zooming, object classification and accelerated data
processing to support environmental monitoring and rapid
disaster response. Overall, the EDGX-1 offers a powerful and
versatile platform for a wide range of space applications. The
flexible architecture also allows for swift adaptation to serve
other spacecraft payloads and needs.

The remainder of this paper is structured as follows. Sec-
tion II addresses the current challenges faced by data process-
ing units and potential solutions to the problem. Section III

provides an overview of the EDGX-1 DPU’s heterogeneous

computing architecture and neuromorphic computing capabil-
ity. Section IV describes the AI operational and retraining

environment capabilities onboard the EDGX-1. Section V
concludes the paper and discusses future work.
II. ONBOARD AI CHALLENGES AND SOLUTIONS
A. Meeting Power Budget of Microsatellites and Cubesats
As the power budget of microsatellites and CubeSats is often
minimal, the EDGX-1 provides various ways to configure its
power consumption. In such a way, system engineers can
completely control and limit the EDGX-1’s power draw. For
the FPGA, they can decide upon the total power consumption
during the bitstream generation process. Similarly, they can
turn off some of the subsystems in the boot configuration for

the SoC. Some SoC subsystems can even be changed on the

fly, tailoring to a dynamic power budget on the satellite in-
orbit. Drastically changing power modes raises the question

if it would effect the radiation characteristics of the device.
However, preliminary tests from Rodriguez et al. [8] indicate
that the radiation characteristics remain consistent between
power modes.
Since the demand for more data acquisition keeps rising,

future data processing systems will need to process increas-
ingly more data within the same power budget. The EDGX-1

technically addresses this problem by implementing the Akida
processor from BrainChip. This digital neuromorphic chip
mimics the human brain to work at high energy efficiency. It
calculates in an event-driven fashion, using only power when
needed, reducing overall power consumption. Integrating the
Akida is ideal for satellites with a limited power budget.
Finally, the design of the EDGX-1 allows not only the
use of the NVIDIA Orin embedded SoC. It also supports
multiple previous generations of NVIDIA embedded devices,
like the TX2 or the Xavier family. This ultimately lowers
power consumption further in favour of performance. Even

within the same NVIDIA generation, the EDGX-1 can accom-
modate the Orin Nano, delivering a lower power consumption

for reasonable performance. All these SoCs share the same
hardware interface and software stack. This way, the EDGX-1
can meet any power requirement and adapt to a large spectrum
of smallsat, microsat and cubesat mission scenarios.
B. Reliability of COTS System-on-Chip in Space Environment
Radiation can impact semiconductors [9] and affect device
reliability [10]. High-energy particles can disrupt charges in
a transistor’s layers, causing unexpected behaviour known
as Single-Event Effects (SEEs) [11]. SEEs can be classified
as Single-Event Upsets (SEUs) or Single-Event Functional
Interrupts (SEFIs) [12]. In the first, the SEE only manifested
itself as a bit-flip in memory, whilst in the latter, a bit-flip in
the control logic caused a loss of device functionality. While
neither physically nor permanently harm the device, a radiation
event causing a MOSFET transistor related to power regulation

to remain open can result in a destructive Single-Event Latch-
up (SEL). Most SEEs, including SELs, can be cleared with a

power cycle and are transient errors.
Radiation testing is crucial to assessing electronic devices’
reliability in safety-critical systems. These tests use protons,
neutrons, heavy ions, two-photon absorption [13] or gamma
radiation (also known as Total Ionizing Dose (TID) testing)
to accelerate the appearance of SEEs for analysis. TID testing
can also shine a light on the device’s ageing by accumulating
radiation faster than it would naturally occur. Neutrons are
commonly used for terrestrial applications, while protons and
heavy ions are preferred for space systems.

The EDGX-1 targets the NVIDIA Jetson Orin NX. Al-
though no radiation test has happened on this device, it has

happened from some of the older modules from NVIDIA. For
example, the Jeston TX2 [14] and the Jetson Xavier NX [8]
have undergone radiation testing. The latter is relevant for the

EDGX-1 since the Xavier directly preceded the Orin. Even
though we can not extrapolate the radiation characteristics to
the Orin, most of its subsystems and processor architecture
are similar to the Xavier. Conducted radiation tests on the

Xavier [8], utilising the Reliability, Availability, and Service-
ability (RAS) features from ARM, pointing out that the cache

tags were the leading causes of SEFIs. These tags are only
protected by a single parity bit, causing complete reboots due
to the inability to recover. In the Orin family, NVIDIA changed
the processor to the Cortex-A78AE [15]. 1-bit correction and
2-bit detection error correction codes now protect the cache
tags. This change could decrease the sensitivity to SEFIs, but
further radiation tests of the Orin will need to verify this
hypothesis.
C. Availability of System in Space Environment
Devices built with these COTS components have a high

chance of encountering SEFIs, which could affect their avail-
ability. The lack of availability can cause the loss of data

or mission return. Availability is an open problem that also

has been explored in the automotive domain. Possible solu-
tions have been proposed, like redundant kernels in the GPU

section [16] or register protection for the CPU [17]. A more
resilient external device could also act as the interface between
the payload and the COTS device, ensuring no data loss in
the case of a SEFI [18]. Luckily, because the EDGX-1 is
a very computationally capable device, it can catch up and
re-compute at an increased rate. This way, the system still
processes all data safeguarded by the resilient supervisor. This
concept has been explored by Kritikakou et al. [19] with
promising results.
D. In-orbit flexibility, adaptability and AI retrainability
Space missions are subject to a wide array of factors that
can affect the performance of AI algorithms. Variations in data
distributions, changes in the spacecraft’s position, sensor drift,
unknown new data, unforeseen new environment parameters
and evolving mission objectives can all contribute to making
pre-trained AI models less effective over time. Therefore, the
challenge lies in developing a mechanism to update and adapt
AI models without requiring frequent costly communication
with ground stations. Enabling AI models to adapt and retrain
in orbit autonomously is crucial for maintaining optimal
performance throughout the lifetime of the mission.
Ensuring flexibility and adaptability in the data processing

part of the payload is crucial for securing a future proof satel-
lite design for next generation missions. The world of AI and

processing technology is changing rapidly, rendering todays
data processing units obsolete tomorrow. Implementing future
technology is critical to allow for changing business models
and keeping relevance in today’s fast paced environment
The EDGX-1 DPU addresses the challenge of in-orbit

adaptability through its innovative capability to retrain AI al-
gorithms directly onboard the spacecraft. This capability stems

from the heterogeneous architecture that combines various
processing units, capable of being reprogrammed and enabling

dynamic adjustments and updates to AI models without ex-
tensive external intervention. Conventional AI retrainability

as well as continual online learning through neuromorphic
technology is available onboard the EDGX-1 DPU and further
explained in Section IV.
Incorporating onboard AI retraining within the EDGX-1
DPU enhances the overall capabilities of space missions by
ensuring that AI models remain effective and accurate in
ever-changing conditions. This dynamic adaptability not only
optimizes decision-making but also reduces the dependence
on ground-based intervention for model updates.

Fig. 1. Hardware Architecture
III. EDGX-1 HARDWARE ARCHITECTURE

A low-power heterogeneous computing design that com-
bines a GPU, FPGA, CPU, TPU, and NPU can potentially

solve the problems with current data processing units and
boost onboard AI computing performance for satellites.
The general hardware architecture consists of multiple
printed circuit boards (PCBs), designed according to PCIe/104
form factor specification which allows for a large range of

benefits for embedded computing applications, including high-
speed connectivity, compact size, rugged design, scalability,

and interoperability.
Internal power and data transmission occur through the
PCIe/104 triple branch stacking connector which acts as a
structural and electrical backbone for the full DPU. This data

bus provides power in 3.3V, 5V and 12V and has both high-
speed (PCIe x1, PCIe x4, USB 2.0/3.0) and low-speed (CAN,

I2C) data links.

The overall DPU is therefore arranged as a modular stack
of these PCB modules as seen in Figure 1, where each
PCB provides specialized computing capabilities and can be
seamlessly integrated (or not) according to the needs and
constraints of the given mission. Multiples of the same type of
unit can be stacked to provide enhanced computational power.
Every processing board provides 2x high speed external
interfaces (ETH, USB 2.0/3.0) for data transfer as well as 2x
low speed interfaces (CAN, UART) for command and control

interfaces. Additional GPIO pins are available for customi-
sation. The power supply and distribution on the EDGX-1

includes overvoltage, overcurrent and latch-up protection.

Fig. 2. Core module schematic.

A. Core Module
This module serves as the core for the DPU stack and

is the only essential module. It provides the primary inter-
face between the EDGX-1 DPU and the spacecraft OBC.

A ruggedized radiation-hardened-by-design (RHBD) CPU is
responsible for the command and data handling dispatching
of the DPU stack, as well as acting as a watchdog over the
different DPU modules. The CPU controls the PCIe switch and
is responsible for establishing and managing the PCIe network
set up within the PCIe/104 stacking connectors.
As seen in Figure 2, the Core module power supply includes
redundant power connectors and latchup protection, as well as
power management of the DPU modules which is handled by
the CPU. The board contains boot drives in hot redundancy.
The SSD storage on the Core board can be accessed by other
DPU modules in the stack to serve as cold storage. In the
event that additional cold storage is necessary, data storage
expansion modules could be integrate in the DPU stack and
are accessible via PCIe links.
B. System-on-Chip Module
The System-on-Chip (SoC) module is designed to serve as
a general AI workhorse, supporting a wide variety of ML
tasks via its powerful and diverse computing architecture. The
processing capabilities of CPUs, GPUs and Tensor Processing
Units (TPUs) are embedded into the NVIDIA Jetson Orin
NX. It combines the CortexA78(AE), the highest-performing

Fig. 3. System-on-Chip module schematic.

CPU from ARM to-date with built-in safety features, with
the high performance offered by the Ampere GPU and the
two NVDLA 2.0 AI inference cores. The SoC runs a custom
operating system built on Jetson Linux to fully leverage the
capabilities of the hardware and provide a development- and

integration-friendly environment while minimising computa-
tional overhead.

Several features of this SoC make it a suitable choice for
onboard AI acceleration in a space mission context. The SoC is
designed for edge processing in a harsh industrial environment
from a thermal and mechanical perspective. From the software
side, CPU safety features include the reliability, availability
and serviceability (RAS) protocols, which allow for error
tracking and correction and thus enable the SoC to handle
single event effects (SEEs) due to radiation exposure. Variable
power modes can be configured to attain optimal power

budgets for different applications and mission/spacecraft con-
straints.

Fig. 4. FPGA module schematic.

C. FPGA Module
The FPGA module has the primary function of providing
flexible and customizable hardware acceleration for specific
AI/ML tasks, as well as general purpose high-throughput data
processing as commonly found in spacecraft DPUs. The use of

programmable logic and Intellectual Property (IP) cores can
be tailored for the specific use case, such as Digital Signal
Processing and ML inferencing.
The FPGA module layout described in Figure 4 is based
around the Xilinx Kintex Ultrascale series of FPGAs. Several
FPGA modules can be used in a DPU stack and the dedicated
high speed inter-board connection interface allows for direct
data exchange between FPGA units in the DPU stack through
high speed serial links using protocols such as Aurora.
D. Neuromorphic Module

The Neuromorphic module will contain a dedicated Neuro-
morphic Processing Unit (NPU) which enables the implemen-
tation, training and execution of neuromorphic AI algorithms.

The hardware implementation of these algorithms could be
supported through various different implementations of NPUs,

ranging from FPGAs running neuromorphic IP cores and pro-
grammable logic of neuromorphic compute units to dedicated

neuromorphic application-specific integrated circuits (ASICs).

Fig. 5. Neuromorphic module schematic
The current module design uses the BrainChip Akida
AKD1000, a neuromorphic ASIC. This chip has a fabric
of 80 separate neuron cores interconnected with each other,
providing next generation classic CNN acceleration, efficient
continuous online and on-chip learning, one-shot learning and
the execution of spiking neural network models. Leveraging
these technologies results in a module with high execution
speed, low power consumption and enhanced operability

which is suitable for energy efficient and low latency or real-
time data processing applications.

Figure 5 shows a detailed schematic of the Neuromorphic
module. The main communication interface for the AKD1000
is the PCIe x1 lane, although USB 2.0 and I2C are also
natively supported. This module additionally contains a local
CPU which stores the drivers and carries out data handling for
the module’s external data interfaces.
E. Overall Architectural Evaluation
By combining these different processing units, a low power
heterogeneous computing design can provide a balance of
performance, flexibility, and energy efficiency that is necessary

Fig. 6. EDGX-1 Operational and Retraining Environment

for onboard AI computing in satellites. This approach can
address the limited processing power, storage capacity, power
constraints, and radiation hardening requirements while also

maximizing bandwidth and handling the extreme environmen-
tal conditions of space.

However, implementing such a design would require careful
consideration of factors such as power consumption, size,
weight, and cost, as well as the specific requirements of the
satellite mission. Additionally, developing software that can
effectively utilize and coordinate the different processing units
would be crucial for achieving optimal performance.
IV. EDGX-1 ARTIFICIAL INTELLIGENCE
The EDGX-1 has a heterogeneous design that enables users
to create flexible, powerful AI pipelines, catering to a wide
range of complex tasks. Its high-performance components
even allow for onboard AI inference and retraining inside the
pipelines, ensuring that the AI models can adapt and improve
over time without requiring constant updates from external
sources. Furthermore, the neuromorphic capabilities of the

EDGX-1 not only empower it with advanced processing abil-
ities but also allow for low-power onboard continual learning

during operations. In this way, the EDGX-1 can pioneer the
development of autonomous onboard AI.

A. Retraining
The SoC board supports complete, unsupervised onboard retraining to remove the need for frequent on-ground updates. Onboard retraining spares the EDGX-1 from wasting valuable bandwidth on uplinked weight updates. Instead, a new model is trained onboard in parallel with operational execution. Once the new model finishes training, it can take over operations. In this way, the model calibrates itself autonomously to real-life data.

As Figure 6 depicts, an iterative student-teacher cycle drives the onboard retraining. While the operational NN runs inference on the incoming data, the system stores all data and predictions as pseudo-labels in a dedicated dataset. At moments when enough power is available, an identical NN trains on the pseudo-labelled dataset. In this way, the onboard retraining takes the form of a self-supervised Knowledge Distillation (KD) process. If desired, the system can mix in part of the original dataset to combat catastrophic forgetting. The training makes full use of the available GPU while the inferencing process continues to work uninterrupted on the AI accelerators of the SoC. The student is iteratively evaluated on a reference dataset and takes over from the teacher should it outperform it. Depending on whether full retraining or partial fine-tuning is needed, the system can freeze the feature layers of the student. The system thus exploits a Transfer Learning (TL) approach to maximise efficiency.
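To make the cycle concrete, here is a toy, self-contained sketch of the student-teacher retraining loop in plain Python/NumPy. The linear "networks", the hill-climbing "training", the power flag and all datasets are stand-ins; the real EDGX-1 components are not public, so nothing here should be read as the actual implementation.

# Toy student-teacher retraining cycle with pseudo-labels (all stand-ins).
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):                       # stand-in for NN inference
    return (x @ w > 0).astype(int)

def accuracy(w, x, y):
    return float(np.mean(predict(w, x) == y))

teacher = rng.normal(size=4)             # operational "model"
x_ref = rng.normal(size=(64, 4))         # held-out reference set
y_ref = rng.integers(0, 2, 64)

pseudo_x, pseudo_y = [], []              # pseudo-label buffer
for _ in range(256):                     # operations: infer and log
    x = rng.normal(size=4)
    pseudo_x.append(x)
    pseudo_y.append(predict(teacher, x[None])[0])  # prediction = pseudo-label

power_available = True                   # stand-in for power telemetry
if power_available:
    X, Y = np.asarray(pseudo_x), np.asarray(pseudo_y)
    student = teacher.copy()
    for _ in range(100):                 # crude stand-in for GPU training
        cand = student + 0.05 * rng.normal(size=4)
        if accuracy(cand, X, Y) >= accuracy(student, X, Y):
            student = cand
    # The student replaces the teacher only if it wins on the reference set.
    if accuracy(student, x_ref, y_ref) > accuracy(teacher, x_ref, y_ref):
        teacher = student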

B. Continual Learning
The NPU board naturally supports continual learning to adapt the neuromorphic model to any drift in the data. Hence, users can balance the plasticity and stability of their models. During inference, the neuromorphic models automatically learn to cope with dynamic conditions. In this way, the EDGX-1 can guarantee smooth operations even under sudden catastrophic changes, such as a sensor failure.

As Figure 6 depicts, a Spiking Neural Network (SNN) learns completely unsupervised from the datastreams it perceives. The local weight update paradigm, Spike-Timing-Dependent Plasticity (STDP), will enable the SNN to recognise new patterns and behaviours in the data. Because the field of neuromorphic computing is still in the research phase, initially coupling it with conventional AI will allow for early integration in real applications. The SoC can utilise a traditional NN encoder to transform the data into a latent-space representation, which facilitates converting the data into input spikes for the SNN.
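As a concrete illustration of the local learning rule, here is a toy pair-based STDP update in NumPy. The amplitudes and time constant are illustrative textbook-style values, not parameters of the EDGX-1 or of Akida's on-chip learning.

# Toy pair-based STDP: the sign and size of the weight change depend only
# on the relative timing of pre- and post-synaptic spikes.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # time constant in ms

def stdp_dw(t_pre, t_post):
    """Weight update for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post -> potentiate
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)   # post before pre -> depress

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (30.0, 28.0), (50.0, 52.0)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
print(f"final weight: {w:.4f}")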

V. CONCLUSION

As AI and high-performance data processing continue their exponential growth, the demand for more powerful and more efficient data processing units increases just as fast. This work details some of the greatest challenges for onboard AI in the space industry and corresponding potential solutions. The paper introduces EDGX-1, a new onboard AI computer with a heterogeneous design and neuromorphic capabilities: a new architecture that solves the challenges the industry faces whilst keeping power dissipation low and pushing the limits of high-performance AI computing.
Hi All

A tiny detail that some may miss in this EDGX paper is that one of the authors is David Steenari of ESA, The Netherlands.

ESA, for those that might not know, is the European Space Agency.

This is significant lest there be any doubt circulated as to whether the benefits of using the AKD1000 have filtered through from EDGX to those in charge at the ESA.

My opinion only DYOR
Fact Finder
 
  • Like
  • Fire
  • Love
Reactions: 45 users
Hi All

A tiny detail that some may miss in this EDGX paper is that one of the authors is David Steenari of ESA, The Netherlands.

ESA, for those that might not know, is the European Space Agency.

This is significant lest there be any doubt circulated as to whether the benefits of using the AKD1000 have filtered through from EDGX to those in charge at the ESA.

My opinion only DYOR
Fact Finder
1706679991487.gif
 
  • Haha
  • Like
Reactions: 5 users

Cyw

Regular
Hopefully the final episode. 😂🤣😂
E. Data Preprocessing

Fig. 5. Annotation images of dataset.

In this method, we start by deleting duplicate images from the datasets and filtering out data that does not fit our criteria. By removing irrelevant information, we focus on the images that are most important to our project. The next key step is annotating these images, which is essential for preparing the model for training. During annotation, each image is given the proper tags or categories, which organizes the data and enables the machine learning system to recognize patterns in the images and produce precise predictions. Next, the images are resized to 96 x 96 pixels. Once the images have been annotated, we may go on to the following step, training the model. In this stage, the model learns to recognize and understand the numerous patterns and characteristics within the images using the annotated data; through a series of iterations and adjustments, the model continually improves its comprehension and grows better at generating precise predictions. In summary, the process starts with data filtering to get rid of unwanted data and duplicates, continues with annotation to label and organize the images, and ends with training the model on these annotated images, honing its predictive skills and deepening its comprehension of the visual information.
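For readers who want to reproduce the preprocessing, a minimal sketch of the de-duplication and 96 x 96 resize steps is shown below, assuming Pillow is installed. The folder names are placeholders and the hash test only removes byte-identical frames; the paper does not state which tools the authors actually used.

# De-duplicate frames by file hash, then resize to 96x96 (Pillow assumed).
import hashlib
from pathlib import Path
from PIL import Image

SRC, DST = Path("raw_frames"), Path("clean_96x96")  # placeholder folders
DST.mkdir(exist_ok=True)

seen = set()
for path in sorted(SRC.glob("*.jpg")):
    digest = hashlib.md5(path.read_bytes()).hexdigest()
    if digest in seen:          # exact duplicate frame -> skip it
        continue
    seen.add(digest)
    img = Image.open(path).convert("RGB").resize((96, 96))
    img.save(DST / path.name)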
F. Implementation
After data preprocessing, we delivered the dataset to the three modules for training. They are discussed below:
1) Creating Impulse: At this point we carry out the essential procedures to train our model successfully. First, we made a few configuration changes, including resizing the entire dataset to 96 x 96 pixels using the resize mode that fits the shortest axis of the images, to ensure compatibility. Then, we used the BrainChip Akida model to extract features from the dataset's images, drawing out the important details and useful information in each image. All of these extracted features are preserved and used to train the model. These steps set up an efficient training procedure that allows our model to develop and produce precise predictions based on the supplied information.
2) Image: In the next stage, the color depth parameter is set to the Red-Green-Blue (RGB) format, enabling further analysis and modification. We then ran the feature extraction procedure, identifying all the crucial traits present in the dataset. To display this information visually, we generated a graph that illustrates the properties of the dataset; this graph is shown in Fig. 6 and offers a clear visual depiction of the extracted features.

Fig. 6. Feature explorer.

3) Object Detection: In this phase, we set up the parameters used to train our model: the validation set size at 20%, the learning rate at 0.001, and the number of epochs at 100. We trained our dataset using the Akida FOMO model. After making these adjustments, we started the model training process and obtained a remarkable training accuracy of 95.0%. Fig. 7 shows the precise measurements and results of this training procedure. In addition to these results, we created a quantized version of the model. Quantization ensures that the model performs in real time without hiccups, even on devices with modest Random-Access Memory (RAM) and storage, since the quantized model retains its ability to carry out the required tasks while greatly reducing memory and storage needs. This makes it possible to deploy and use the model effectively in resource-constrained contexts.
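As a rough illustration of what those settings amount to, here is a generic Keras training run with a 20% validation split, a learning rate of 0.001, 100 epochs and post-training quantization. This is a stand-in, not the actual Edge Impulse / Akida FOMO pipeline, and the tiny network and random data are placeholders.

# Generic Keras stand-in for the reported training settings, plus
# post-training quantization for RAM/storage-constrained devices.
import numpy as np
import tensorflow as tf

x = np.random.rand(200, 96, 96, 3).astype("float32")  # placeholder data
y = np.random.randint(0, 2, 200)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(96, 96, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, validation_split=0.2, epochs=100, verbose=0)

# Quantize the trained model to shrink its memory and storage footprint.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("model_quant.tflite", "wb").write(converter.convert())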
4) Classification: In this part, we evaluated the model using photos from the dataset and obtained a testing accuracy of 90%, which we believe can improve in the future with a larger training dataset, allowing the model to be fine-tuned with additional features for training. Fig. 8 depicts the testing accuracy. Overall, we are satisfied with the performance of the model and optimistic about its potential for further improvement; it will be interesting to see how the model performs with a larger training dataset and additional training features.

Fig. 7. Model Training Accuracy.

Fig. 8. Model Testing Accuracy.

Fig. 9. Model output.
IV. CONCLUSION

In conclusion, our study made use of a sizable dataset comprising a variety of occurrences, such as chain snatching and wallet snatching. The videos in the collection were first broken down into individual frames, and redundant and duplicate images were eliminated. After that, we annotated the images and trained our model. Our model, based on Akida FOMO, showed an excellent accuracy of 95.0% during the training phase and a testing accuracy of 90% once training was complete. We can therefore state with confidence that the use of the Edge Impulse platform and the BrainChip Akida FOMO model contributed significantly to the production of insightful findings for our research. The high levels of accuracy reached by our model demonstrate that the incorporation of Akida FOMO into our study was a resounding success. Although preparing the dataset and annotating the images took considerable time and work, the outcomes were trustworthy and the effort was justified. We are confident that future research into computer vision and object identification has a great deal of potential when the cutting-edge technology of BrainChip is paired with the Edge Impulse platform. The positive results of our research demonstrate this strong combination's prospective trajectory and its potential to open up new directions for future developments in this area.

V. FUTURE DIRECTION

There are a number of fascinating future directions that may be investigated in the field of computer vision and object identification in order to build upon the accomplishments and beneficial results of our study using the Edge Impulse platform and the Akida FOMO model.

Improved Model Performance: Although our existing model achieved outstanding accuracy rates, there is still potential for improvement. Future work may adjust the model parameters, enhance the training methods, and use more and more varied training examples. By continuously improving the model's performance, we can push the limits of accuracy even further.

Real-time Object Recognition: The Edge Impulse platform, combined with Akida FOMO's capabilities, opens up opportunities for real-time object recognition applications. Broadening our study could lead to a system that recognizes and detects chain-snatching or wallet-snatching instances in real time, which could have a substantial impact on public safety and crime prevention initiatives.

Generalisation and Transfer Learning: Exploring transfer learning strategies is an attractive area for future study; a minimal sketch of the idea follows below. By utilizing models pre-trained on related object identification tasks and fine-tuning them with our particular dataset, we may attain greater accuracy rates and speed up the training process. The model's usefulness and applicability can also be increased by investigating its capacity to generalize across various scenarios and contexts.
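The sketch below shows the freeze-the-backbone idea in generic Keras terms: a pretrained feature extractor is frozen and only a small task head is trained. The backbone choice (MobileNetV2) and the two-class head are illustrative assumptions, not part of the paper.

# Transfer-learning sketch: freeze a pretrained backbone, fine-tune a
# new task-specific head (illustrative, generic Keras).
import tensorflow as tf

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
backbone.trainable = False          # freeze the feature layers

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # new head
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])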
Scalability and Deployment: As our study develops, the Akida FOMO model's scalability and deployment need to be taken into account. Optimizing the model's architecture and training procedure to decrease computing needs and memory footprint may facilitate deployment on resource-constrained edge devices, such as security cameras or smartphones. This would make it possible to use our research's results in the real world, broadening its influence beyond the boundaries of a sterile laboratory.

Expansion to Other Applications: Although the majority of our study was devoted to chain-snatching and wallet-snatching occurrences, the approaches and techniques created may be applied to a number of other fields and applications. Exploring the model's capability to identify and detect other objects or events, such as pedestrian identification or traffic sign recognition, can advance computer vision in more general ways and improve safety protocols in a variety of scenarios.
Finally, the effective integration of Akida FOMO into the Edge Impulse platform opens the door to interesting new research trajectories. We can improve the field of computer vision and object identification by continually improving the model, investigating real-time applications, using transfer learning, ensuring scalability, and extending to new domains, eventually helping society with increased safety and efficiency.
Wow, amazing. Now, the challenge is if you can tell me, in fifty words or less, how this would change my perception of Brainchip?
 
  • Haha
  • Like
  • Fire
Reactions: 9 users
Wow, amazing. Now, the challenge is if you can tell me, in fifty words or less, how this would change my perception of Brainchip?
Hi Cyw
I think first you would need to explain why I need to change your perception of Brainchip as I thought TSEx was for mature individuals capable of making their own investment decisions.

My opinion only DYOR
Fact Finder
 
  • Like
  • Haha
  • Fire
Reactions: 44 users

Frangipani

Regular
A standard Google search on “Akida” turned up these four trademark applications in Switzerland, filed two days ago:

A1C04AAB-0D1A-40E8-BB4E-ABB094F0537D.jpeg


I subsequently checked the database of the Eidgenössisches Institut für Geistiges Eigentum (Swiss Federal Institute for Intellectual Property), which confirmed the filing of those trademark applications on January 29, 2024:


EAD49F47-0860-476F-BC1E-CBF2C59EA79B.jpeg


D84A30E9-1F3A-4734-9D81-8CE76AEE3733.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 51 users

JB49

Regular
I'm interested to hear people's thoughts on what they believe the $780,000 of receipts from customers was made up of.

Obviously more than just engineering fees. Is it a licensing fee or royalties?

I noticed in WBT's 4C today that they specifically advised the 457K receipt from customers was from a licensing fee. Are we allowed to ask the company whether it came from royalties or a license fee, or is it against ASX rules to reveal that information?
 
  • Like
  • Fire
  • Thinking
Reactions: 9 users

Slade

Top 20
Wow, amazing. Now, the challenge is if you can tell me, in fifty words or less, how this would change my perception of Brainchip?
I can do it in two words if you like.
 
  • Haha
  • Like
  • Fire
Reactions: 13 users
Tata Motors on the way up, nice.


"Tata Motors has become India’s largest carmaker in terms of market capitalization. The company overtook Maruti Suzuki, which has been at the top for the past 7 years."

"Tata Motors also recorded its highest sales in the last 11 quarters."
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 23 users