BRN Discussion Ongoing

Sirod69

bavarian girl ;-)

GStocks123

Regular
Microchip's Edge ML stream in 20 hours from now. Learn how we make Machine Learning (ML) easy and efficient for embedded designers. Join us for a livestream on Wednesday, January 31, 2024 at 9:00 a.m. PST: https://mchp.us/41QD9LZ. #MachineLearning #ML #EmbeddedDesigners #Engineering #MCUs #MPUs

 

Frangipani

Top 20

Very interesting interview with Dr Tobi Delbrück on neuromorphic engineering. Takes you through the history of the field and provides insight into the workings of academic research.

It mentions our own Tony Lewis being part of the neuromorphic community for the last few decades, building robots with vision back in the '90s.

Also confirms the importance of digital, being inspired by the brain but not exactly copying it, and making adoption as easy as possible.

I’m pre-coffee and on my phone so above is a bit crude :) Highly recommend having a listen / read yourself.

Edit: looking into the background of Tony again, it is a very good reminder of the incredible management that we have been able to attract. Heavyweights in this industry who clearly see the amazing potential of Brainchip for them to come across from the Amazons, ARMs, HPs and Intels of this world.. makes you wonder if we are on to something ;-)

Absolutely agree - Tony Lewis was an excellent choice as CTO, as he is so well connected and respected in the neuromorphic community!

I recall Ralph Etienne-Cummings referring to him in another Brains & Machines podcast. The two of them go back a long way, by the way…

Just some examples:

[screenshots attached]

Frangipani

Top 20
I recall Ralph Etienne-Cummings referring to him in another Brains & Machines podcast. The two of them go back a long way, by the way…

Found it: It was the discussion after the podcast with Yulia Sandamirskaya, who is heading the Research Centre "Cognitive Computing in Life Sciences" at Zurich University of Applied Sciences (ZHAW) and is a senior researcher in Intel's Neuromorphic Computing Lab and thus an expert on Loihi. Her background is in robotics, too.
She was also one of the neuromorphic researchers congratulating Tony Lewis on his appointment as Brainchip CTO, by the way:

[screenshots attached]


Gies

Regular

Bravo

If ARM was an arm, BRN would be its biceps💪!
3rd eye is being liked by Rob Telson.
Soon out there


Thanks @Gies!

There's a video on the link below. At about 1:30 it says that ThirdEye has about twenty years' experience developing AI/MR technology for the US Department of Defense (see logo on screenshot below): the US military, Air Force, and Marines. They are active with all the different branches of the government. Now they're shifting their focus to the commercial and consumer space as well.

They have 50 (or 15?) patents filed in optics, hardware and software. Would be interesting to check some of them out.

4:15: Their glasses are entirely hands free. They don't have to connect to a phone, laptop or processing pack.

Their goal is to be the most widely used smart glasses out there.

25:15: By this time next year they want to be the first to have smart glasses with a built-in 4G or 5G modem, enabling them to be used out in the field where there is no Wi-Fi.

US DOD
[screenshot attached]

ThirdEye Customers/Partners
[screenshot attached]
Video
 

Sirod69

bavarian girl ;-)
Just to say briefly: back then in Germany I went up there with you in a Mercedes, of course. Well, what about now? Hence the song.
AND I love this song

 

Wickedwolf

Regular
This guy gets it: Byron Callaghan

In the boundless expanse of technological galaxies, there exists a singular constellation that outshines all - the Akida 2.0 system. It is not just a beacon of brilliance; it's a veritable black hole, drawing in all realms of possibility and spewing out pure innovation. In the theatre of Edge AI, where countless players jostle for the limelight, Akida 2.0 doesn't just steal the show; it is the show.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
NEWS

Microchip Prioritizes Customizable Logic in New 8-bit MCUs​

one day ago by Jake Hertz

Outfitted with a configurable logic block module, the new MCUs integrate customizable logic to reduce BOM and improve performance.​


As microcontrollers (MCUs) become more central to the operation of IoT devices, designers need low-power, high-performance MCUs that don't increase system complexity.
To answer this call, Microchip recently announced a new family of devices that integrates customizable logic directly into the MCU. What might this integration mean for the future of embedded systems?

PIC16F13145

Microchip claims that its new configurable logic block (CLB) module enables customizable hardware solutions and may even eliminate the need for external logic components.

New 8-bit MCUs Integrate Configurable Logic Block​

The new PIC16F13145 MCU family introduces a configurable logic block (CLB) peripheral. The CLB consists of 32 individual logic elements, each employing a look-up table (LUT)-based design. This feature enables designers to create hardware-based, custom combinational logic functions directly within the MCU, optimizing the speed and response time of embedded control systems. This integration eliminates the need for external logic components, thereby reducing bill of materials (BOM) costs and power consumption.

A diagram of a basic logic element in the PIC16F13145.

Another important feature of the CLB is its independence from the central processing unit's (CPU) clock speed. This allows the CLB to make logical decisions while the CPU is in sleep mode, further reducing power consumption and software reliance.
The MCU family (datasheet linked) is available in various package sizes, including 8-, 14-, and 20-pin configurations, and offers up to 14 KB of program flash memory and up to 1 KB of RAM. This goes along with an integrated 10-bit ADC with computation (ADCC) capable of up to 100 ksps, an 8-bit DAC, and two fast comparators with a 50-ns response time. These features are complemented by a range of peripherals for timing control and serial communications, including SMBus compatibility.

Understanding Customizable Logic​

Customizable logic allows hardware-based logic functions to be implemented directly within the MCU. Traditionally, such functions required external components like programmable logic devices (PLDs) or additional microcontrollers. However, with customizable logic, these functions are integrated into the MCU itself, simplifying design, reducing system footprint, and minimizing system latency.
At the heart of customizable logic in MCUs like Microchip’s new family is the configurable logic block (CLB). A CLB generally consists of multiple logic elements, each of which can be individually programmed to perform various logic functions. These logic elements are commonly based on LUTs, which can be configured to implement complex combinational logic or simple logic gates like AND, OR, and XOR. By programming these LUTs, engineers can create custom logic circuits that operate independently of the MCU's CPU.
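As a rough illustration of the LUT idea described above (a generic sketch, not Microchip's actual CLB configuration format), a 4-input LUT is just a 16-entry truth table; changing the table reprograms the same element into a different gate:

```python
# Illustrative sketch of a LUT-based logic element: a 4-input LUT stores a
# 16-bit truth table, and the four inputs simply select one bit of it.
# (Generic model for explanation only, not Microchip's CLB bitstream.)

def make_lut(truth_table):
    """Return a function evaluating a 4-input LUT from a 16-bit truth table."""
    def lut(a, b, c, d):
        index = (d << 3) | (c << 2) | (b << 1) | a  # inputs select one table row
        return (truth_table >> index) & 1
    return lut

# Configure the same element as different gates by changing the table only.
and4 = make_lut(0b1000_0000_0000_0000)    # 1 only when a=b=c=d=1 (row 15)
xor_ab = make_lut(0b0110_0110_0110_0110)  # depends on a, b only: a XOR b
```

The same hardware element becomes an AND, XOR, or any other 4-input function purely by reloading the table, which is why such blocks are often described as software-defined hardware.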

Configurable logic blocks are software-defined hardware.

One key advantage of integrating customizable logic into MCUs is that it enhances real-time performance. Since these logic blocks operate independently of the CPU, they can make quick logical decisions, effectively reducing system latency. This is particularly advantageous in applications requiring rapid response times, such as motor control, industrial automation, or real-time data processing.

Another significant benefit is power efficiency. Customizable logic can often operate in low-power or sleep modes, making logical decisions without waking the CPU. This feature is invaluable in battery-powered or energy-sensitive applications where conserving power is crucial.


Emblazoning MCUs in Embedded Designs​

By embedding customizable logic into its family of MCUs, Microchip is offering designers new ways to get more performance and efficiency out of their embedded designs. Without the need for extra components, engineers can now create dedicated logic blocks to accelerate their product’s unique tasks, helping them balance the cost-performance tradeoff.

 

buena suerte :-)

BOB Bank of Brainchip
Morning Chippers :)

All eagerly awaiting 'The Big announcement(s)'. Plenty of great articles being shared by our in-house researchers (much appreciated), but it would be nice to have BRN give us something solid! 🙏 📢📢📢 🙏

Well this is some good news to share 😍

[screenshots attached]
Cheers
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
If you add the following known facts together in my opinion you get Microchip already working with Brainchip:

1. Brainchip partnered with SiFive, with announced compatibility with the X280 Intelligence Series,

2. Brainchip partnered with NASA,

3. Brainchip partnered with GlobalFoundries, and

4. Brainchip taping out the AKD1500 minus the Arm Cortex-M4, plus

5. The following article:


January 30, 2023

NASA Recruits Microchip, SiFive, and RISC-V to Develop 12-Core Processor SoC for Autonomous Space Missions​


by Steven Leibson
NASA’s JPL (Jet Propulsion Lab) has selected Microchip to design and manufacture the multi-core High Performance Spaceflight Computer (HPSC) microprocessor SoC based on eight RISC-V X280 cores from SiFive with vector-processing instruction extensions organized into two clusters, with four additional RISC-V cores added for general-purpose computing. The project’s operational goal is to develop “flight computing technology that will provide at least 100 times the computational capacity compared to current spaceflight computers.” During a talk at the recent RISC-V Summit, Pete Fiacco, a member of the HPSC Leadership Team and JPL Consultant, explained the overall HPSC program goals.
Despite the name, the HPSC is not strictly a processor SoC for space. It’s designed to be a reliable computer for a variety of applications on the Earth – such as defense, commercial aviation, industrial robotics, and medical equipment – as well as being a good candidate for use in government and commercial spacecraft. Three characteristics that the HPSC needs beyond computing capability are fault tolerance, radiation tolerance, and overall platform security. The project will result in the development of the HPSC chip, boards, a software stack, and reference designs with initial availability in 2024 and space-qualified hardware available in 2025. Fiacco said that everything NASA JPL does in the future will be based on the HPSC.
NASA JPL set the goals for the HPSC based on its mission requirements to put autonomy into future spacecraft. Simply put, the tasks associated with autonomy are sensing, perceiving, deciding, and actuating. Sensing involves remote imaging using multi-spectral sensors and image processing. Perception instills meaning into the sensed data using additional image processing. Decision making includes mission planning that incorporates the vehicle’s current and future orientation. Actuation involves orbital and surface maneuvering and experiment activation and management.
Correlating these tasks with NASA’s overall objectives for its missions, Fiacco explained that the HPSC is designed to allow space-bound equipment to go, land, live, and explore extraterrestrial environments. Spacecraft also need to report back to earth, which is why Fiacco also included communications in all four major tasks. All of this will require a huge leap in computing power. Simulations suggest that the HPSC increases computing performance by 1000X compared to the processors currently flying in space, and Fiacco expects that number to improve with further optimization of the HPSC’s software stack.


It’s hard to describe how much of an upgrade the HPSC represents for NASA JPL’s computing platform without contrasting the new machine with computers currently operating off planet. For example, the essentially similar, nuclear-powered Curiosity and Perseverance rovers currently trundling around Mars with semi-autonomy are based on RAD750 microprocessors from BAE Systems. (See “Baby You Can Drive My Rover.”) The RAD750 employs the 32-bit PowerPC 750 architecture and is manufactured with a radiation-tolerant semiconductor process. This chip has a maximum clock rate of 200 MHz and represents the best of computer architecture circa 2001. Reportedly, more than 150 RAD750 processors have been launched into space. Remember, NASA likes to fly hardware that’s flown before. One of the latest space artifacts to carry a RAD750 into space is the James Webb Space Telescope (JWST), which is now imaging the universe in the infrared spectrum and is collecting massive amounts of new astronomical data while sitting in a Lagrange orbit one million miles from Earth. (That’s four times greater than the moon’s orbit.) The JWST’s RAD750 processor lopes along at 118 MHz.
Our other great space observatory, the solar-powered Hubble Space Telescope (HST), sports an even older processor. The HST payload computer is an 18-bit NASA Standard Spacecraft Computer-1 (NSSC-1) system built in the 1980s but designed even earlier. This payload computer controls and coordinates data streams from the HST’s various scientific instruments and monitors their condition. (See “Losing Hubble – Saving Hubble.”)
The original NSSC-1 computer was developed by the NASA Goddard Space Flight Center and Westinghouse Electric in the early 1970s. The design is so old that it’s not based on a microprocessor. The initial version of this computer incorporated 1700 DTL flat-pack ICs from Fairchild Semiconductor and used magnetic core memory. Long before the HST launched in 1990, the NSSC-1 processor design was “upgraded” to fit into some very early MSI TTL gate arrays, each incorporating approximately 130 gates of logic.
I’m not an expert in space-based computing, so I asked an expert for his opinion. The person I know who is most versed in space-based computing with microprocessors and FPGAs is my friend Adam Taylor, the founder and president of Adiuvo Engineering in the UK. I asked Taylor what he thought of the HPSC and he wrote:
“The HPSC is actually quite exciting for me. We do a lot in space and computation is a challenge. Many of the current computing platforms are based on older architectures like the SPARC (LEON series) or Power PC (RAD750 / RAD5545). Not only do these [processors] have less computing power, they also have ecosystems which are limited. Limited ecosystems mean longer development times (less reuse, more “fighting” with the tools as they are generally less polished) and they also limit attraction of new talent, people who want to work with modern frameworks, processors, and tools. This also limits the pool of experienced talent (which is an increasing issue like it is in many industries).
“The creation of a high-performance multicore processor based around RISC-V will open up a wide ecosystem of tools and frameworks while also providing attraction to new talent and widening the pool of experienced talent. The processors themselves look very interesting as they are designed with high performance in mind, so they have SIMD / Vector processing and AI (urgh such an overstated buzz word). It also appears they have considered power management well, which is critical for different applications, especially in space.
“It is interesting that as an FPGA design company (primarily), we have designed in several MicroChip SAM71 RT and RH [radiation tolerant and radiation hardened] microcontrollers recently, which really provide some great capabilities where processing demands are low. I see HPSC as being very complementary to this range of devices, leaving the ultrahigh performance / very hard real time applications to be implemented in FPGA. Ultimately HPSC gives engineers another tool to choose from, and it is designed to prevent the all-too-common, start-from-scratch approach, which engineers love. Sadly, that approach always increases costs and technical risk on these projects, and we have enough of that already.”
One final note: During my research for this article, I discovered that NASA’s HPSC has not always been based on the RISC-V architecture. A presentation made at the Radiation Hardened Electronics Technology (RHET) Conference in 2018 by Wesley Powell, Assistant Chief for Technology at NASA Goddard Space Flight Center’s Electrical Engineering Division, includes a block diagram of the HPSC, which shows an earlier conceptual design based on eight Arm Cortex-A53 microprocessor cores with NEON SIMD vector engines and floating-point units. Powell continues to be the Principal Technologist on the HPSC program. At some point in the HPSC’s evolution over the past four years, at least by late 2020 when NASA published a Small Business Innovation Research (SBIR) project Phase I solicitation for the HPSC, the Arm processor cores had been replaced by a requirement for RISC-V processor cores. That change was formally cast in stone last September with the announcement of the project awards to Microchip and SiFive. A sign of the times, perhaps?

My opinion only DYOR
FF

AKIDA BALLISTA
Hi Facty,

I just noticed that the author of the article you posted previously on NASA's HPSC admits he's not an expert in space-based computing, so he asked an expert for his opinion, and that expert was Adam Taylor, the founder and president of Adiuvo Engineering in the UK. That happens to be the very same Adam Taylor that wrote the below blog on our website. Talk about a coincidence!😝

Adam Taylor states in the article “It is interesting that as an FPGA design company (primarily), we have designed in several MicroChip SAM71 RT and RH [radiation tolerant and radiation hardened] microcontrollers recently, which really provide some great capabilities where processing demands are low. I see HPSC as being very complementary to this range of devices, leaving the ultrahigh performance / very hard real time applications to be implemented in FPGA."

So, from the horse's mouth, so to speak, Adiuvo has designed in several radiation-tolerant and radiation-hardened Microchip microcontrollers recently.

[screenshot of the blog attached]
Adam Taylor's opinion of BrainChip's Akida in a nutshell. 🥳

Extract 1
[screenshot attached]

Extract 2
[screenshot attached]
 


Sirod69

bavarian girl ;-)
We here in Germany didn't go below 0.00 for BRN; we were with you at around 0.38. Well, what can I say, we were at 1.70 euros and some people like me didn't sell. OK, and what can I say? I think we here want our price to rise again, right?


Bravo

If ARM was an arm, BRN would be its biceps💪!
I do not recall this being posted before, and the date of release suggests not. So guess which space program is using a COTS anomaly-detection SNN on space missions:

Small Business Innovation Research/Small Business Tech Transfer
Neuromorphic Spacecraft Fault Monitor, Phase II
Completed Technology Project (2020 - 2022)
Project Introduction
The goal of this work is to develop a low power machine learning anomaly detector. The low power comes from the type of machine learning (Spiking Neural Network (SNN)) and the hardware the neuromorphic anomaly detector runs on. The ability to detect and react to anomalies in sensor readings on board resource-constrained spacecraft is essential, now more than ever, as enormous satellite constellations are launched and humans push out again beyond low Earth orbit to the Moon and beyond. Spacecraft are autonomous systems operating in dynamic environments. When monitored parameters exceed limits or watchdog timers are not reset, spacecraft can automatically enter a 'safe' mode where primary functionality is reduced or stopped completely. During safe mode the primary mission is put on hold while teams on the ground examine dozens to hundreds of parameters and compare them to archived historical data and the spacecraft design to determine the root cause and what corrective action to take. This is a difficult and time consuming task for humans, but can be accomplished faster, in real-time, by machine learning. As humans travel away from Earth, light travel time delays increase, lengthening the time it takes for ground crews to respond to a safe mode event. The few astronauts onboard will have a hard time replacing the brain power and experience of a team of experts on the ground. Therefore, a new approach is needed that augments existing capabilities to help the astronauts in key decision moments. We provide a new machine learning approach that recognizes nominal and faulty behavior, by learning during integration, test, and on-orbit checkout. This knowledge is stored and used for anomaly detection in a low power neuromorphic chip and continuously updated through regular operations. Anomalies are detected and context is provided in real-time, enabling both astronauts onboard, and ground crews on Earth, to take action and avoid potential faults or safe mode events.
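The general idea in the project introduction, learning nominal behavior during integration and test and then flagging deviations in real time, can be illustrated with a deliberately simple statistical sketch. The actual NSFM work uses a spiking neural network on neuromorphic hardware; the class, parameter names, and threshold below are invented for illustration:

```python
# Minimal illustration of onboard telemetry anomaly detection: learn the
# nominal range of each parameter during checkout, then flag deviations.
# (A z-score stand-in for the SNN approach described in the project text.)
import statistics

class TelemetryMonitor:
    def __init__(self, threshold=4.0):
        self.threshold = threshold
        self.baseline = {}  # parameter name -> (mean, stdev)

    def learn_nominal(self, name, samples):
        """Record nominal behavior, e.g. during integration and test."""
        self.baseline[name] = (statistics.fmean(samples),
                               statistics.stdev(samples))

    def check(self, name, value):
        """Return True if a new reading deviates too far from nominal."""
        mean, stdev = self.baseline[name]
        return abs(value - mean) > self.threshold * stdev

monitor = TelemetryMonitor()
monitor.learn_nominal("bus_voltage", [28.0, 28.1, 27.9, 28.05, 27.95])
```

The point of moving this onboard, as the paragraph above explains, is that a fault can be flagged in real time instead of waiting out the light-travel delay to a ground crew.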
Anticipated Benefits
The software developed in Phase II can potentially be used by NASA for anomaly detection onboard the ISS, the planned Lunar Gateway, and future missions to Mars. The NSFM software can also be used by ground crews to augment their ability to monitor spacecraft and astronaut health telemetry once it reaches the ground. The NSFM software can furthermore be used during integration and test to better inform test operators of the functionality of the system during tests in real time.
The software developed in Phase II can potentially be used for anomaly detection onboard any of the new large constellations planned by private companies. It can also be applied to crewed space missions, deep space probes, UUVs, UAVs, and many industrial applications on Earth. The NSFM software developed in Phase II can also be used during Integration and Test of any commercial satellite.


My opinion only DYOR
FF

AKIDA BALLISTA
Hi Facty,

Me again. Re the Lunar Gateway mission (as per another of your previous posts above), in this video from a year ago, Adam Taylor said he was involved in the Lunar Gateway, the space station that will orbit the Moon!

In June 2022, unless I'm mistaken, I believe Rob Telson said something along the lines that we'd helped NASA get into orbit. Putting these pieces together leads me to believe that Adam Taylor could have come to know about us through shared work with NASA.


[screenshot attached]
He also goes on to say that he is designing an FPGA circuit card for NASA as well. So, given Adam thinks Akida is "stunning" and "very impressive", this would seem to bode extremely well for our inclusion in said FPGA IMO.

[screenshot attached]

 

HopalongPetrovski

I'm Spartacus!
Well Ok I guess.
Just make sure the Third Eye doesn't enter the Lunar Gateway!
Not that there's anything wrong with that! 🤣
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Well Ok I guess.
Just make sure the Third Eye doesn't enter the Lunar Gateway!
Not that there's anything wrong with that! 🤣
That's for sure! No doubt Doodle Labs would want to be part of that action too.😝
 

HopalongPetrovski

I'm Spartacus!
Hi All
Sorry, cannot provide a link, but for those who, unlike Pom, won't just read the Abstract and Conclusion, the full paper is probably interesting, even a little exciting, to think what AKIDA with a little Edge Impulse can do. Regards, Fact Finder:

Safeguarding Public Spaces: Unveiling Wallet Snatching through Edge Impulse Technology

Ujjwal Reddy K S

School of Computer Science and Engineering

VIT-AP University
Andhra Pradesh, India
ujjwal.20bci7203@vitap.ac.in

* Kuppusamy P

School of Computer Science and Engineering

VIT-AP University
Andhra Pradesh, India
drpkscse@gmail.com

Abstract—In contemporary society, public space security and safety are of utmost significance. The theft of wallets, a frequent type of street crime, puts people's personal items at risk and may result in financial loss and psychological misery. By utilizing Edge Impulse technology to identify and expose wallet-snatching incidents in public areas, this article offers a fresh solution to the problem. To develop a reliable and effective wallet-snatching detection solution, the suggested system blends machine learning techniques with the strength of the Edge Impulse platform. This study used Spiking Neural Networks (SNNs), which are inspired by the biological neural networks found in the brain. Edge Impulse offers a thorough framework for gathering, preprocessing, and examining data, enabling the creation of extremely precise machine learning models. The system can accurately discriminate between legitimate interactions with wallets and suspicious snatching attempts by training these models on a dataset that includes both normal and snatching events. The 95% efficiency of the suggested method is demonstrated by experimental findings, which show high accuracy and low false positive rates in recognizing wallet snatching instances. Increasing public safety, giving people a sense of security in public places, and discouraging prospective wallet-snatching criminals are all goals of this research.

Index Terms—wallet snatching, public spaces, Edge Impulse, sensor devices, machine learning, real-time monitoring, security, privacy

I. INTRODUCTION

Public places are critical for societal interactions and community participation. They are places of recreation, socialization, and public meetings. However, these areas are not immune to criminal activity, and one typical threat is wallet snatching. Wallet snatching is the act of forcibly removing someone's wallet, which frequently results in financial losses, identity theft, and psychological suffering for the victims. Safeguarding public places and combating wallet snatching necessitate new measures that make use of developing technology. In this context, this introduction investigates the potential of Edge Impulse technology in uncovering and preventing wallet-snatching events [1].

Wallet-snatching instances can occur in a variety of public places, including parks, retail malls, congested roadways, and public transit. These attacks are frequently characterized by their speed and stealth, giving victims little time to react or seek aid. Traditional surveillance systems, such as Closed Circuit Television (CCTV) cameras, have difficulties in efficiently identifying and preventing wallet-snatching occurrences owing to variables such as limited coverage, video quality, and human error in monitoring [2]. As a result, more advanced technical solutions that can proactively identify and respond to such situations are required.

Edge Impulse is a new technology that integrates machine learning algorithms, sensor data, and embedded systems to generate smart and efficient solutions [3]. It allows machine learning models to be deployed directly on edge devices such as smartphones, wearable devices, or Internet of Things (IoT) devices, reducing the requirement for ongoing access to a distant server. Edge Impulse is an appropriate solution for tackling the problem of wallet snatching in public places because of its capabilities.

Fig. 1. Edge Impulse Architecture.

It is essential to look into the vast amount of research and studies done in this specific subject in order to properly understand the powers of Edge Impulse technology in revealing instances of wallet theft. Numerous studies have been conducted to examine the use of computer vision and machine learning approaches in detecting and preventing criminal activity in public spaces. The topic of utilizing cutting-edge technologies to improve public safety and security has been explored in a number of academic studies. This research has shown how machine learning algorithms may be used to examine video footage and identify patterns of suspicious behavior that could be related to wallet-snatching instances. These cutting-edge technologies may recognize people who display suspicious motions or participate in potentially illegal behaviors by utilizing computer vision techniques, such as object identification and tracking, enabling proactive intervention. Edge Impulse technology integration has a lot of potential in this area. It may be trained to recognize certain traits and attributes linked to wallet snatching through its strong machine learning skills, improving its capacity to precisely detect such instances in real time. Edge Impulse can analyze trends, spot abnormalities, and notify authorities or security people to take immediate action by utilizing the enormous volumes of data gathered from several sources, including surveillance cameras and sensor networks. The possibility of predictive analytics to foresee wallet theft episodes based on previous data and behavioral trends has also been investigated in this field of research. Machine learning algorithms are able to recognize high-risk locations and deploy resources appropriately by examining elements like the time of day, location, and population density. With the use of this proactive strategy, law enforcement organizations may deploy people efficiently and put out preventative measures, which serve to dissuade prospective criminal activity.

[2023 International Conference on Research Methodologies in Knowledge Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE), DOI: 10.1109/RMKMATE59243.2023.10369744]
Based on these findings, the use of Edge Impulse technology
in the context of wallet snatching can improve the efficiency of
crime prevention systems [4]. The reaction time may be greatly
decreased by deploying machine learning models directly on edge devices, enabling real-time detection and fast intervention. Furthermore, Edge Impulse technology can record and analyze essential data for recognizing wallet-snatching instances using numerous sensors included in smartphones
or wearable devices, such as accelerometers, gyroscopes, and
cameras.
For example, accelerometer data may be utilized to detect
abrupt movements or violent behaviors that are suggestive of
wallet-snatching attempts [5]. The gyroscope data can offer
information regarding the direction and speed of the grab,
assisting in the tracking of the culprit. Additionally, camera
footage may be analyzed using computer vision algorithms to
detect suspicious activity, identify possible thieves, or collect
photographs for later identification and proof.
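As a rough illustration of the accelerometer idea above (not an implementation from the cited work), a simple magnitude threshold can separate ordinary motion from the violent tug characteristic of a snatching attempt. The function names, sample values, and the 25 m/s² threshold are all illustrative assumptions:

```python
import math

def accel_magnitude(sample):
    """Euclidean norm of a 3-axis accelerometer sample (x, y, z) in m/s^2."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_abrupt_motion(samples, threshold=25.0):
    """Return indices of samples whose magnitude exceeds the threshold.

    A spike well above gravity (~9.8 m/s^2) can indicate a violent tug;
    the threshold is illustrative, not taken from the paper.
    """
    return [i for i, s in enumerate(samples) if accel_magnitude(s) > threshold]

# Mostly-still readings (gravity only), with one violent spike at index 2.
readings = [(0.1, 0.2, 9.8), (0.0, 0.1, 9.7), (18.0, 22.0, 11.0), (0.2, 0.0, 9.8)]
print(detect_abrupt_motion(readings))  # → [2]
```

In practice such a threshold detector would only be a front-end trigger; a trained model, as described above, would then classify the flagged window.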
The increasing availability of data can further benefit the use
of Edge Impulse technology in wallet snatching prevention.
With the growth of smartphones and wearable devices, there is
an abundance of sensor data that can be gathered and analyzed
in order to create strong machine learning models. This data
may be used to train algorithms to recognize certain patterns or
abnormalities related with wallet-snatching instances, boosting
the system’s accuracy and dependability.
Furthermore, integrating Edge Impulse technology with
current surveillance systems can improve their capabilities.
A complete and intelligent system may be constructed by
integrating the strengths of both technologies, such as the extensive coverage of CCTV cameras and the real-time analysis of edge devices. This integrated strategy would allow for proactive identification and rapid reaction to wallet-snatching occurrences, minimizing the impact on victims and discouraging future perpetrators.

Finally, wallet snatching in public places poses a serious danger
to public safety and individual well-being [6]. Innovative
techniques are necessary to overcome this difficulty, and Edge
Impulse technology has intriguing possibilities. Edge Impulse
provides real-time detection and fast action in wallet snatching
occurrences by employing machine learning models installed
directly on edge devices. It captures and analyses pertinent
information using multiple sensors and data sources accessible on smartphones and wearable devices. Integrating Edge Impulse technology with current monitoring systems can improve the efficacy of crime prevention efforts. These developments
can help to protect public places and expose wallet snatching,
resulting in safer and more secure communities.
A. Motivation
This study aims to harness the potential of Edge Impulse
technology to make public areas safer for citizens by efficiently
combating wallet-snatching events. We hope that by finding a
solution, we can contribute to the wider objective of protecting
public places and improving the general quality of life in our
communities.
B. Contribution
• The study presents an innovative use of Edge Impulse
technology for improving public safety.
• This study proposes Spiking Neural Networks (SNNs) as the detection model.

• The developed machine learning model detects wallet-snatching episodes in public places with high accuracy and efficiency.

II. RELATED WORK

The study proposes a framework comprised of two major
components: a behavior model and a detection technique [7].
The behavior model captures the software’s valid behavior
by monitoring its execution and gathering information about
its interactions with the system and the user. The detection
method compares the observed behavior of a software instance
to the behavior model to discover any differences that signal
probable theft. The authors conducted trials with real-world
software applications to assess the efficacy of their technique.
They tested their system’s detection accuracy, false positive
rate, and false negative rate. The results indicated promising
performance in detecting software theft occurrences properly
while keeping false alarms to an acceptable level. The study
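The behavior-model idea in [7] can be illustrated with a minimal sketch: profile an application as a distribution over event types, then flag an observed run whose distribution drifts too far from the baseline. The event names, the total-variation distance, and the 0.3 threshold are illustrative assumptions, not details from the cited framework:

```python
def normalize(counts):
    """Turn an event-count dict into a frequency distribution."""
    total = sum(counts.values()) or 1
    return {k: v / total for k, v in counts.items()}

def deviation(baseline, observed):
    """Total variation distance between two event distributions
    (0 = identical, 1 = completely disjoint)."""
    p, q = normalize(baseline), normalize(observed)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def flag_theft(baseline, observed, threshold=0.3):
    """Flag the observed run if it deviates too far from the behavior model."""
    return deviation(baseline, observed) > threshold

baseline = {"file_read": 50, "net_send": 10, "ui_event": 40}
legit    = {"file_read": 48, "net_send": 12, "ui_event": 40}
suspect  = {"file_read": 5,  "net_send": 80, "ui_event": 15}
print(flag_theft(baseline, legit), flag_theft(baseline, suspect))  # → False True
```

Tuning the threshold trades off exactly the false-positive and false-negative rates the authors evaluate.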
presents an overview of the many processes involved in
the identification of anomalous behavior, including human detection, feature extraction, and classification [8]. It emphasizes the importance of Convolutional Neural Networks

(CNNs) in dealing with the complexities of visual input and
extracting important characteristics for behavioral research.
Furthermore, the authors explore several CNN architectures
used for anomalous behavior identification, such as AlexNet,
Visual Geometry Group Network (VGGNet), and Residual
Neural Network (ResNet) [9]–[11]. They also investigate the use of various datasets and assessment criteria in evaluating
the performance of these models. The survey includes a wide
range of applications where aberrant behavior identification is
critical, such as crowd monitoring, public space surveillance,
and anomaly detection in industrial settings [8]. The authors
assess the merits and limits of existing approaches, as well as
new research avenues and opportunities for development.
The suggested technique consists of two major steps: feature
engineering-based preprocessing and energy theft detection
using gradient boosting [12]. Various characteristics from
the electricity usage data are extracted during the feature engineering-based preprocessing stage. These traits are intended to detect trends and behaviors that may suggest possible energy theft. After preprocessing the data, the authors use
gradient boosting, a machine learning technique, to detect energy theft. Gradient boosting is an ensemble learning approach that combines numerous weak predictive models to

build a strong predictive model. It constructs decision trees in
a sequential manner, with each succeeding tree learning from
the mistakes of the preceding ones. The suggested strategy
is evaluated by the authors using real-world power use data.
They compare their approach’s performance to that of other
current approaches for detecting energy theft, such as decision
trees, random forests, and support vector machines [13]–
[15]. Accuracy, precision, recall, and F1-score are among the
assessment criteria employed. The paper’s results show that
the suggested technique beats the other methods in terms
of energy theft detection accuracy. The authors credit this
enhanced performance to the preprocessing stage based on
feature engineering and the efficiency of gradient boosting in
identifying complicated connections in the data.
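A minimal sketch of the feature-engineering stage described above: summary features are extracted from a consumption series, producing the kind of vector a gradient-boosting classifier would then consume. The specific features (zero-usage hours, peak ratio, etc.) are illustrative assumptions, not the ones used in [12]:

```python
import statistics

def extract_features(kwh):
    """Summary features from a sequence of hourly kWh readings.

    Sudden drops to near-zero usage or an unusually flat profile are
    patterns sometimes associated with meter tampering; the exact
    feature set here is illustrative.
    """
    mean = statistics.fmean(kwh)
    std = statistics.pstdev(kwh)
    zero_hours = sum(1 for v in kwh if v < 0.05)       # hours of near-zero usage
    peak_ratio = max(kwh) / mean if mean else 0.0       # peak relative to average
    return {"mean": round(mean, 3), "std": round(std, 3),
            "zero_hours": zero_hours, "peak_ratio": round(peak_ratio, 3)}

normal_day   = [0.3, 0.2, 0.2, 0.4, 0.8, 1.2, 1.5, 1.1]
tampered_day = [0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(extract_features(tampered_day)["zero_hours"])  # → 6
```

Each customer-day becomes one such feature vector, labeled theft/non-theft, on which the sequential ensemble of decision trees is trained.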
The study is primarily concerned with analyzing power
use trends and discovering abnormalities that might suggest
theft [16]. The system learns to discern between regular use
patterns and suspicious actions that signal theft by training the
decision tree and Support Vector Machine (SVM) models on
historical data. The attributes chosen are used to categorize
incidents as either theft or non-theft. The suggested technique
is tested using real-world smart grid data. The findings show
that the decision tree and SVM-based methods can identify
theft in smart grids with high accuracy and low false positive
rates. The study focuses on identifying instances of theft by
collecting temporal relationships in energy use data [17]. The
system learns to recognize regular consumption patterns and
detect variations that suggest theft by training the CNN-Long
Short-Term Memory (LSTM) model on historical data. The
suggested method is tested using real-world smart grid data,
and the findings show that it is successful at identifying power
theft [18]. The CNN-LSTM-based technique beats existing
approaches in terms of detection accuracy. Both papers address
the important issue of theft detection in smart grid systems,
but they employ different techniques [16], [17]. The first
paper utilizes decision trees and SVM for feature selection
and classification, while the second paper employs CNNs and
LSTM networks for feature extraction and anomaly detection.
These approaches contribute to the development of effective methods for enhancing the security and reliability of smart grid systems.
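The CNN-LSTM model in [17] learns temporal dependencies in consumption data; as a deliberately much simpler stand-in for that idea (not the paper's model), a rolling z-score already shows why temporal context matters: each reading is judged against the recent pattern rather than a global average. Window size and threshold are illustrative:

```python
import statistics

def rolling_zscore_alerts(series, window=4, z_thresh=3.0):
    """Indices where a reading deviates strongly from the preceding window.

    A crude stand-in for temporal modeling: each value is compared with
    the mean/std of the previous `window` readings.
    """
    alerts = []
    for i in range(window, len(series)):
        prev = series[i - window:i]
        mu = statistics.fmean(prev)
        sd = statistics.pstdev(prev)
        if sd == 0:
            continue  # flat history gives no scale for comparison
        if abs(series[i] - mu) / sd > z_thresh:
            alerts.append(i)
    return alerts

usage = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 0.1, 1.0]
print(rolling_zscore_alerts(usage))  # → [6]
```

A learned recurrent model generalizes this by adapting what "the recent pattern" means per customer, which is what drives the reported accuracy gains.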
The study most likely proposes an algorithm or strategy
that employs computer vision and motion analysis techniques
to detect suspicious or illegal behavior in video footage [19].
The suggested approach most likely seeks to discriminate
between routine activities and probable criminal behaviors
by analyzing the motion patterns of humans or items in a
setting [20]. It is difficult to offer a full description of the
methodology, results, or conclusions of the study based on
the information supplied. However, it may be deduced that the
authors suggest a way for developing an automated criminal detection system that combines motion analysis with intelligent information-concealing strategies. The authors suggest a chain-snatching detection safety system that detects and
prevents chain-snatching incidents by utilizing sophisticated
technologies [21]. However, without complete access to the
article, it is difficult to offer extensive information regarding
the system’s methodology, components, or methods used. To
detect rapid and strong movements associated with chain
snatching attempts, the system is likely to include various
sensors such as motion sensors or accelerometers. Image
processing methods may also be used to identify possible
chain snatchers or to collect photographs of the occurrence
for additional investigation or proof [22]. In addition, when a
chain-snatching incident is identified, the system might include
an alarm or notification mechanism that warns nearby
persons or authorities in real time. This quick reaction can
dissuade offenders while also providing urgent support to
victims. The report will most likely offer experimental findings
and assessments to assess the suggested system’s usefulness
in effectively identifying chain-snatching occurrences while
minimizing false alarms [21]. It may also address the system’s
weaknesses, prospective areas for development, and future
research directions in this subject.
The document most likely presents a proposed approach or
algorithm for detecting snatch stealing [23]. It may describe
the selection and extraction of low-level video data elements
such as motion analysis, object tracking, or other relevant
information that can be utilized to detect snatch-stealing
instances. The authors may have also investigated various
strategies for identifying and discriminating between regular and snatch-stealing incidents. Given that the paper was delivered in 2010, it is crucial to highlight that the material provided in it is based on research and technology breakthroughs
accessible at the time [23]. It’s probable that recent advances in
computer vision, machine learning, and surveillance systems
have pushed the area of snatch-steal detection even further.
The authors present an action attribute modeling technique
for automatically recognizing snatch-stealing incidents [24].

To identify possible snatch-steal instances, the technique entails analyzing the activities and characteristics displayed by persons in surveillance recordings. The idea is to create a
system that can send real-time alerts to security personnel or law enforcement organizations to help prevent such crimes or respond promptly when they occur. The document most likely outlines the methods and algorithms used to
detect snatch-stealing occurrences, including the extraction of
key characteristics, training a model using labeled data, and
evaluating the suggested solution. It might also go through
the datasets used for training and testing, as well as the
performance measures used to assess the system’s efficacy.
Because the study was published in 2018, it is crucial to
highlight that advances in the area may have occurred since
then, and other methodologies or approaches may have been
created [24].

The study describes the integrated framework’s many components, such as data collection, preprocessing, feature extraction, and crime detection [25]. In addition, the authors give experimental results based on real-world data to illustrate the
efficacy of their technique. The results show that the suggested
framework can detect petty crimes in a fast and accurate
manner, allowing law enforcement authorities to respond more
efficiently. The research focuses on the use of deep learning
algorithms for reliable detection of suspicious human conduct
in surveillance videos [26]. By using the capabilities of deep
learning algorithms, scientists hope to increase the accuracy
and reliability of suspicious behavior detection. The study
provides a full description of the suggested technique, which
includes surveillance video preprocessing, feature extraction
with CNNs, and categorization of suspicious actions with
Recurrent Neural Networks (RNNs) [27], [28]. The authors also explore the difficulties connected with detecting suspicious behavior and provide strategies to overcome them.

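In the CNN-plus-RNN pipeline described above, the CNN typically yields a per-frame suspicion score and the recurrent stage aggregates them over time. As a hedged, much simpler illustration of why that temporal aggregation matters (not the method of [26]), exponential smoothing suppresses one-frame false positives while sustained suspicious activity still crosses the alarm threshold; the smoothing factor and threshold are assumptions:

```python
def smoothed_alarm(frame_scores, alpha=0.5, threshold=0.7):
    """Exponentially smooth per-frame suspicion scores and return the
    first frame index at which the smoothed score crosses the threshold
    (or None). Isolated single-frame spikes are suppressed."""
    s = 0.0
    for i, score in enumerate(frame_scores):
        s = alpha * score + (1 - alpha) * s  # exponential moving average
        if s > threshold:
            return i
    return None

one_frame_glitch = [0.1, 0.9, 0.1, 0.1, 0.1]        # brief spike: no alarm
sustained_action = [0.2, 0.8, 0.9, 0.9, 0.95, 0.9]  # sustained: alarm
print(smoothed_alarm(one_frame_glitch), smoothed_alarm(sustained_action))  # → None 3
```

An RNN plays the same role with a learned, rather than fixed, notion of temporal context.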
The research focuses on the cap-snatching mechanism used
by the yeast L-A double-stranded Ribonucleic Acid (RNA)
virus [29]. The cap-snatching mechanism is a technique used
by certain RNA viruses to hijack the host’s messenger RNA
(mRNA) cap structure for viral RNA production. The authors
study the particular cap-snatching method used by the yeast
L-A double-stranded RNA virus and give deep insights into
its molecular processes. They investigate the viral variables
involved in cap-snatching and their interplay with host factors.
The authors’ research contributes to the knowledge of RNA
virus viral replication techniques and sheds insight on the
complicated mechanisms involved in the reproduction of the
yeast L-A double-stranded RNA virus [29]. The findings of
this study are useful for virology research and increase our
understanding of viral replication techniques.
