BRN Discussion Ongoing

Merry Christmas, happy holidays all

Be safe and enjoy the time with loved ones, friends, fam and doing what makes you happy however you celebrate this time.

Oct French article on the X320 I needed to work around through VPN, private tab etc to get to read with an English translation.

Didn't recall seeing before.

Hopefully the link may work.

Highlighted a comment by our mate Luca that says, imo, essentially that the neuromorphic processor side via BRN or Intel or Synsense isn't happening at the mo, as we're not mainstream enough yet.

Unlike the Sony, Qualcomm, Renesas, AMD of the world. So unless we snuck in there via Renesas, we can probs park this one for the mo, unfortunately, imo, until we get a bit more traction in the mkt.

Hopefully wrong and the comments were lost in translation :LOL:



Prophesee sets out to conquer the IoT and immersive reality with its new GenX320 event sensor

The Parisian deeptech Prophesee unveils, this Monday, October 16, 2023, its new bio-inspired sensor, the GenX320, smaller than previous generations. It addresses the markets of connected objects and immersive reality headsets.


Frederic Monflier
Industrie & Technologies

16 October 2023, 14h00


© Prophesee
The new GenX320 sensor measures only 3x4 mm. It has a resolution of 320x320 pixels.

That makes five for the French deeptech Prophesee: its GenX320 sensor, whose availability was announced on Monday, October 16, 2023, inaugurates the fifth generation of its bio-inspired, event-based vision technology. « With a resolution of 320x320 pixels, it is the smallest event-based sensor developed in the world so far », says Luca Verre, CEO of Prophesee. It is intended for immersive reality headsets and connected objects.

Co-developed with Sony, the previous-generation sensor, referenced as IMX636 in the Japanese company's catalogue and offering a resolution of 1280x720 pixels, is aimed mainly at industrial applications (high-speed counting, etc.). It should also find its way into consumer smartphones, as evidenced by the announcement of a partnership between Qualcomm and Prophesee in February 2023.

With its new model, the deeptech is therefore taking on a new challenge by addressing the Internet of Things and immersive reality. This time, it alone handles the commercial, software and technical support aspects, while continuing to rely on the strengths of its neuromorphic technology.

Three years of R&D​

A Prophesee sensor is indeed inspired by the human retina. Unlike an ordinary image sensor, it does not acquire images one after another, but captures only the changes (called events) in the scene, using independent, asynchronous photodetectors. The direct consequence: capture speed is much higher, on the order of a microsecond, and the amount of data produced drops significantly, which translates into energy savings.

Connected objects, often running on batteries, and augmented/virtual reality headsets nevertheless impose additional constraints in terms of integration, cost and energy consumption. The deeptech has spared no effort. « We have been working on the GenX320 for three years to meet these needs », explains Luca Verre. R&D efforts that focused on three axes, he continues: « miniaturization, energy consumption and AI. »

The GenX320 sensor measures only a fifth of an inch on the diagonal, a surface of 3 mm x 4 mm. It is manufactured by stacking two silicon wafers, the first containing the photodetectors, the second the analog and digital processing circuits.

36 microwatts in standby

This « stacking » technique had also made it possible to miniaturize the IMX636 compared with the three previous generations, which were never marketed. The difference is that Sony is not involved this time: the GenX320 is produced in a European foundry whose name Prophesee prefers not to disclose. Like its predecessors, it is manufactured using standard microelectronics (CMOS) processes, and its cost is in line with that of traditional image sensors. The first samples of the GenX320 have been available since the end of 2022.

Energy consumption, meanwhile, is only « 36 microwatts in standby (always-on) mode, and reaches at most 3 milliwatts when the sensor wakes up », specifies Luca Verre.

Finally comes the AI aspect. « Since a flow of events is very sparse and very fast, this type of data does not interface naturally with neural-network accelerator chips, which process sequential images, points out Luca Verre. In this GenX320 sensor, we have added digital blocks that, at the output, prepare this data for these accelerators. » This pre-processing makes it possible, for example, to accumulate events in the form of a histogram.
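
To make the idea of that pre-processing concrete, here is a minimal sketch (plain Python/NumPy, not Prophesee's actual implementation) of accumulating a batch of events into a fixed-size, two-channel histogram that a conventional frame-based accelerator could consume. The 320x320 resolution and the (x, y, polarity, timestamp) event format follow the figures in the article; everything else is assumed.

import numpy as np

def events_to_histogram(events, width=320, height=320):
    # Accumulate (x, y, polarity, timestamp) events into a per-pixel count map
    # with two channels (channel 0 = OFF events, channel 1 = ON events), i.e.
    # a dense "frame" that a conventional CNN accelerator can process.
    hist = np.zeros((height, width, 2), dtype=np.uint16)
    for x, y, polarity, _t in events:
        hist[y, x, 1 if polarity else 0] += 1
    return hist

# Three synthetic events on the same pixel: two ON, one OFF
frame = events_to_histogram([(10, 20, 1, 100), (10, 20, 1, 130), (10, 20, 0, 150)])
print(frame[20, 10])        # -> [1 2]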

Synchronous or asynchronous operation

Speeding up the processing of event data directly could be done with neuromorphic chips such as those developed by Brainchip (Akida), Intel (Loihi) and many others, some of which are beginning their commercial careers.

The spiking neural networks (SNNs) they execute are particularly well suited to asynchronous computation.

« It would be the ideal, acknowledges Luca Verre. But, although we collaborate with Brainchip, Synsense and Intel, their chips are not yet mainstream. » Unlike the systems from Qualcomm, Renesas, AMD, etc., which are optimized to apply today's most common AI algorithms (including convolutions) to images from standard sensors.

« However, thanks to the GenX320, our customers have the choice between synchronous and asynchronous operating modes », qualifies Luca Verre. Synchronous mode reduces time accuracy to the millisecond range, « which remains powerful for IoT applications », says Luca Verre.


Prophesee Zinn Labs


Tracking a driver's eye movements is one way of measuring their level of fatigue. An event-based sensor is particularly interesting in this context because, since it captures no image, it does not compromise the person's privacy.


The deeptech cites the American company Zinn Labs, which uses the GenX320 sensor to perform gaze detection (eye tracking), a field in which Sweden's Tobii is one of the world specialists. Eye tracking is spreading in virtual reality headsets: it makes it possible to concentrate most of the computation on the part of the 3D image where the user's gaze is directed (what is called foveated rendering).

« The tracking speed reaches 1 kilohertz, with a consumption of less than 20 milliwatts, whereas today it is 1 or 2 watts and sampling is limited to 120 images per second to reduce the amount of data », argues Luca Verre. Such a high frequency is better suited to tracking the very fast angular movements of the eye. According to Prophesee, Zinn Labs has developed a prototype and commercialization could occur in 2024.

Another potential outlet: monitoring driver attention based on counting the number of eye blinks per second, by the American company Xperi. This company appreciates the fact that an event-based sensor, capturing no image, preserves the driver's privacy.

Prophesee STMicro

Designed by Prophesee, this module (black part at the top) carrying the GenX320 sensor easily interfaces with the popular STM32 microcontroller, which will promote application development.

In addition to its new sensor, Prophesee has designed, in parallel, modules that facilitate interconnection and application development with certain computing platforms. Thus, a plug-and-play module hosting the GenX320 interfaces specifically with STMicroelectronics' STM32 microcontroller, popular for embedded vision. The deeptech wants to give itself every chance of succeeding in its breakthrough into these new markets.
 
Reactions: 32 users
However, in a Secret Santa, some more interesting news / developments :)

According to a paper just released a couple of days ago (not peer reviewed as yet) it appears some Snr Researchers over at Ericsson have been playing with Akida and "for instance, to demonstrate the feasibility of AI-enabled ZE-IoT, we have developed a prototype of a solar-powered AI-enabled ZE-IoT camera device with neuromorphic computing."

My question would be is this something off their own back or do we have a hand in the background somewhere as well :unsure:


Towards 6G Zero-Energy Internet of Things:
Standards, Trends, and Recent Results
  • December 2023


IMG_20231225_225420.jpg
IMG_20231225_225847.jpg
 
Reactions: 91 users

manny100

Regular
Great find FMF. Solar power and AKIDA look like a perfect match. Event based trigger and no expensive trip to the cloud.
This has future demand written all over it.
 
Reactions: 33 users
Check out this Brainchip employee's work background, very interesting:


He may have only had fixed-term contracts there, who knows, but it's interesting how, after working at these huge tech companies, he's now been at Brainchip for a lot longer.

Also, he could definitely come with plenty of connections with all the companies he's been at previously. Gee, he moves around a bit, but looks pretty settled at Brainchip.
 
Reactions: 46 users


SBIR Phase I: A Wearable, Independent, Braille-Assistive Learning Device

Award Information
Agency: National Science Foundation
Branch: N/A
Contract: 2236574
Agency Tracking Number: 2236574
Amount: $274,999.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: HC
Solicitation Number: NSF 22-551
Timeline
Solicitation Year: 2022
Award Year: 2023
Award Start Date (Proposal Award Date): 2023-04-01
Award End Date (Contract End Date): 2024-03-31
Small Business Information
BRAILLEWEAR
611 South DuPont Highway, Suite 102
Dover, DE 19901
United States
DUNS: N/A
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
Name: Kushagra Jain
Phone: (609) 373-3437
Email: kj228@cornell.edu
Business Contact
Name: Kushagra Jain
Phone: (609) 373-3437
Email: kj228@cornell.edu
Research Institution
N/A
Abstract
The broader/commercial impact of this Small Business Innovation Research (SBIR) Phase I project is in creating an independent, assistive Braille learning device for blind people. The ability to read Braille is highly correlated with improved independence and quality of life. An estimated 70% of the blind are unemployed, yet of the subpopulation that is Braille literate, only 10% are unemployed. There is a Braille literacy crisis - only 8.5% of the blind population in the US can read Braille today, compared to 50% in the 1960s. There are several factors theorized to contribute to increasing Braille illiteracy, including: 1) a shortage of teachers qualified to teach Braille, 2) negative outlooks on the difficulty and cost of Braille learning, and 3) difficulties integrating blind students into mainstream schools that don't have the specialized resources for this population. The results of this project will assist students of all ages in learning how to read Braille, including secondary Braille learners who become blind later in life. Aiming at inhibiting the Braille literacy crisis, the technology enables the blind to be given the same opportunities as their sighted peers, including better chances at graduating from high school and college, obtaining employment, and having high independence levels.

The intellectual merit of this project is in development of a wearable, computer vision-based, real-time Braille-to-speech learning device. While the primary mission of the project is to unlock the full potential of blind individuals through Braille literacy, the overall goal for the technology is to unlock the full potential of human touch with computer-assisted augmentation cues in response to intricate textural patterns. The proposed technology will detect such patterns in a contactless approach, preserving the integrity of the material, and provide auditory feedback in real time to allow for mechanosensory-augmented feedback. This project focuses on establishing the technical feasibility of such an approach by: 1) determining if the device and interpreting algorithms can be made robust to environmental and user postural variations, 2) developing capabilities to perform well on textured and/or patterned surfaces, and 3) conducting usability testing to identify areas of the user experience that must be enhanced in the future to be viable in the market with two vital stakeholders - Braille tutors and Braille students. These goals, if completed successfully, will not only impact Braille learners but also open up other market applications for this technology such as manufacturing and medicine.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
 
Reactions: 9 users
 

Jasonk

Regular
Just to add a level of context.
With the X320 running through a standard processor, the maximum number of events that can be processed per second is around 10 times less than with, say, an Akida neuromorphic processor. One would safely assume that, depending on the use case, Akida would be preferred for processing over a CPU or GPU.

Links below.
If anyone is interested, spend an hour on the two links below; the event based camera is pretty cool. I'll hopefully be playing with that x320 in a couple of months. Time to dust off the akida pcie card and give it a run 😅


It's worth a read, and the videos are worth watching too.
 
Last edited:
Reactions: 21 users

Gestalt IT talks about the Unigen Cupcake at the 5 min 45 sec mark.


 
Reactions: 16 users

Diogenese

Top 20
Thanks Jason,

I don't know how many pixels the first version of Prophesee's DVS had, but the 320 has 320*320, which is significantly less than version 1. I was wondering if they changed it so SynSense could be used with it, because Dylan Muir did say they were still working with Prophesee in the SW-F interview.
 
Last edited:
Reactions: 14 users

Jasonk

Regular
@Diogenese

I would not be able to answer that, sorry. I have only looked into the 320 and do not know what application SynSense had planned.

From my limited knowledge, I could see the high pixel count being problematic when used at the edge, especially if the event camera has a high sensitivity. The generated events per second in one video demonstration on the Prophesee website was running in the giga range. Each event packet contains [x, y, on/off, timestamp], which would need to be processed. I could be off the mark; give me a couple of months and hopefully I can be more informative.
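
To put those numbers in perspective, here is a quick back-of-the-envelope sketch in Python, assuming a naive 8 bytes per event packet (Prophesee's actual EVT encodings pack events more tightly, so treat these as rough upper bounds, not measured figures):

BYTES_PER_EVENT = 8   # assumed: x, y, polarity and timestamp packed into 8 bytes

for events_per_second in (1e6, 100e6, 1e9):      # 1 Meps, 100 Meps, 1 Geps
    gigabytes_per_second = events_per_second * BYTES_PER_EVENT / 1e9
    print(f"{events_per_second:,.0f} events/s -> ~{gigabytes_per_second:.2f} GB/s raw")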
 
Last edited:
Reactions: 15 users
Fairly recent analysis from 14 September 2023. Wevolver ran a full demo based on Akida 1.0 and reports that it works, with a testing accuracy of 100% for this traffic object detection use case. I'm not experienced with the technical coding here, however it is clear from this demo that Akida 1.0 has been shown to work:

Conclusion​

This project highlights the impressive abilities of the Akida PCIe board. Boasting low power consumption, it could be used as a highly effective device for real-time object detection in various industries for numerous use cases.

Nice validation from this company for BrainChip. 🙂 Therefore we keenly await the new Akida Gen 2.0 which is an upgraded version with even better capability.



Real-Time Traffic Monitoring with Neuromorphic Computing​

David Tischler
14 Sep, 2023
Sponsored by Edge Impulse


Article #5 of Spotlight on Innovations in Edge Computing and Machine Learning: A computer vision project that monitors vehicle traffic in real-time using video inferencing performed on the Brainchip Akida Development Kit.​

Artificial Intelligence
- Edge Processors
- Embedded Machine Learning
- Neural Network
- Transportation
This article is part of Spotlight on Innovations in Edge Computing and Machine Learning. The series features some unique projects from around the world that leverage edge computing and machine learning, showcasing the ways these technological advancements are driving growth, efficiency, and innovation across various domains.
This series is made possible through the sponsorship of Edge Impulse, a leader in providing the platform for building smarter, connected solutions with edge computing and machine learning.

In the ever-evolving landscape of urban planning and development, the significance of efficient real-time traffic monitoring cannot be overstated. Traditional systems, while functional, often fall short when high-performance data processing is required in a low-power budget. Enter neuromorphic computing—a technology inspired by the neural structure of the brain, aiming to combine efficiency with computational power. This article delves into an interesting computer vision project that monitors vehicle traffic using this paradigm.
Utilizing aerial camera feeds, the project can detect moving vehicles with exceptional precision, making it a game-changer for city planners and governments aiming to optimize urban mobility. The key lies in the advanced neuromorphic processor that serves as the project's backbone. This processor is not just about low power consumption—it also boasts high-speed inference capabilities, making it ideal for real-time video inferencing tasks.
But the journey doesn't end at hardware selection. This article covers the full spectrum of the project, from setting up the optimal development environment and data collection methods to model training and deployment strategies. It offers a deep dive into how neuromorphic computing can be applied in real-world scenarios, shedding light on the processes of data acquisition, labeling, model training, and final deployment. As we navigate through the complexities of urban challenges, such insights pave the way for smarter, more efficient solutions in traffic monitoring and beyond.

Traffic Monitoring using the Brainchip Akida Neuromorphic Processor​

Created By: Naveen Kumar
Public Project Link:
https://studio.edgeimpulse.com/public/222419/latest

Overview​

A highly efficient computer-vision system that can detect moving vehicles with great accuracy and relative motion, all while consuming minimal power.

By capturing moving vehicle images, aerial cameras can provide information about traffic conditions, which is beneficial for governments and planners to manage traffic and enhance urban mobility. Detecting moving vehicles with low-powered devices is still a challenging task. We are going to tackle this problem using a Brainchip Akida neural network accelerator.

Hardware Selection​

In this project, we'll utilize BrainChip’s Akida Development Kit. BrainChip's neuromorphic processor IP uses event-based technology for increased energy efficiency. It allows incremental learning and high-speed inference for various applications, including convolutional neural networks, with exceptional performance and low power consumption.

The kit consists of an Akida PCie board, a Raspberry Pi Compute Module 4 with Wi-Fi and 8 GB RAM, and a Raspberry Pi Compute Module 4 I/O Board. The disassembled kit is shown below.
Hardware Unassembled
The Akida PCIe board can be connected to the Raspberry Pi Compute Module 4 IO Board through the PCIe Gen 2 x1 socket available onboard.
Hardware Closeup

Setting up the Development Environment​

After powering on the Akida development kit, we need to log in using an SSH connection. The kit comes with Ubuntu 20.04 LTS and Akida PCIe drivers preinstalled. Furthermore, the Raspberry Pi Compute Module 4 Wi-Fi is preconfigured in Access Point (AP) mode.
Completing the setup requires the installation of a few Python packages, which requires an internet connection. This internet connection to the Raspberry Pi 4 can be established through wired LAN. In my situation, I used internet sharing on my Macbook with a USB-C to LAN adapter to connect the Raspberry Pi 4 to my Macbook.
To log in and install packages execute the following commands. The password is brainchip for the user ubuntu.
$ ssh ubuntu@<ip-address>
$ pip3 install akida
$ pip3 install opencv-python
$ pip3 install scipy
$ pip3 install Flask

Data Collection​

Capturing video of moving traffic using a drone is not permitted in my area, so I used a license-free video from pexels.com (credit: Taryn Elliot). For our demo input images, we extracted every 5th frame from the pexels.com video using the Python script below.
import cv2
import sys
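
Only the imports of that script appear above; as a rough idea of what the rest might look like, here is a minimal sketch under assumed paths (an "images/" output folder and a video file passed on the command line), not the author's actual code:

import os
import sys
import cv2

video_path = sys.argv[1] if len(sys.argv) > 1 else "traffic.mp4"   # assumed input file
os.makedirs("images", exist_ok=True)                               # assumed output folder
cap = cv2.VideoCapture(video_path)
index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % 5 == 0:                        # keep every 5th frame
        cv2.imwrite(f"images/frame_{saved:05d}.jpg", frame)
        saved += 1
    index += 1
cap.release()
print(f"Saved {saved} frames")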


We will use Edge Impulse Studio to build and train our demo model. This requires us to create an account and initiate a new project at https://studio.edgeimpulse.com.
To upload the demo input images extracted from the pexels.com video into the demo Edge Impulse project, we will use the Edge Impulse CLI Uploader. Follow the instructions at the link: https://docs.edgeimpulse.com/docs/cli-installation to install the Edge Impulse CLI on your host computer.
Execute the command below to upload the dataset.
$ edge-impulse-uploader --category split images/*.jpg
The command above will upload the demo input images to Edge Impulse Studio and split them into "Training" and "Testing" datasets. Once the upload completes, the demo input datasets are visible on the Data Acquisition page within Edge Impulse Studio.


Now we can label the data with bounding boxes in the Labeling queue tab as shown in the GIF below.
Labelling

Model training​

Go to the Impulse Design > Create Impulse page, click Add a processing block, and then choose Image. This preprocesses and normalizes image data, and optionally allows you to choose the color depth. Also, on the same page, click Add a learning block, and choose Object Detection (Images) - BrainChip Akida™ which fine-tunes a pre-trained object detection model specialized for the BrainChip AKD1000 PCIe board. This specialized model permits the use of a 224x224 image size, which is the size we are currently utilizing. Now click on the Save Impulse button.

On the Image page, choose RGB as color depth and click on the Save parameters button. The page will be redirected to the Generate Features page.

Now we can start feature generation by clicking on the Generate features button:
generate_features

After feature generation, go to the Object Detection page and click on Choose a different model and select Akida FOMO. Then click on the Start training button. It will take a few minutes to complete the training.
object_detection

The FOMO model uses an architecture similar to a standard image classification model which splits the input image into a grid and runs the equivalent of image classification across all cells in the grid independently in parallel. By default the grid size is 8x8 pixels, which means for a 224x224 image, the output will be 28x28 as shown in the image below.

For localization, it cuts off the last layers of the classification model and replaces this layer with a per-region class probability map, and subsequently applies a custom loss function that forces the network to fully preserve the locality in the final layer. This essentially gives us a heat map of vehicle locations. FOMO works on the constraining assumption that all of the bounding boxes are square, have a fixed size, and the objects are spread over the output grid. In the aerial view images, vehicles look similar in size hence FOMO works quite well.
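
To make the heat-map idea concrete, here is a hedged sketch (generic NumPy, not Edge Impulse's SDK code) of turning a 28x28 per-cell probability map back into approximate object centres in the 224x224 input image, assuming the 8x8-pixel grid described above:

import numpy as np

def heatmap_to_centroids(prob_map, threshold=0.5, cell=8):
    # prob_map: (28, 28) array of per-cell object probabilities.
    # Returns (x, y, score) tuples in 224x224 pixel coordinates,
    # using the centre of each grid cell whose score exceeds the threshold.
    ys, xs = np.where(prob_map > threshold)
    return [(int(x * cell + cell // 2), int(y * cell + cell // 2), float(prob_map[y, x]))
            for y, x in zip(ys, xs)]

# Synthetic example: one "vehicle" detected in grid cell (row 10, col 12)
demo = np.zeros((28, 28))
demo[10, 12] = 0.9
print(heatmap_to_centroids(demo))   # -> [(100, 84, 0.9)]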

Confusion Matrix​

Once the training is completed we can see the confusion matrices as shown below. By using the post-training quantization, the Convolutional Neural Networks (CNN) are converted to a low-latency and low-power Spiking Neural Network (SNN) for use with the Akida runtime. We can see in the below image, the F1 score of 94% of the Quantized (Akida) model is better than that of the Quantized (int8) model.

Model Testing​

On the Model testing page, click on the Classify All button which will initiate model testing with the trained model. The testing accuracy is 100%.
model_testing

Deployment​

We will be using Akida Python SDK to run inferencing, thus we will need to download the Meta TF model (underlined in red color in the image below) from the Edge Impulse Studio's Dashboard. After downloading, copy the ei-object-detection-metatf-model.fbz model file to the Akida development kit using command below.
$ scp ei-object-detection-metatf-model.fbz ubuntu@<ip-address>:/home/ubuntu
Block Output

Application Development​

The application loads the MetaTF model (*fbz) and maps it to the Akida neural processor. The inferencing is done on the images from a video file. We have converted several Edge Impulse C++ SDK functions to Python to preprocess FOMO input.
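
As a hedged illustration of that flow, here is a minimal sketch using BrainChip's akida Python package; the Model, devices(), map() and predict() calls follow BrainChip's public examples, but exact signatures may differ between SDK versions, and the input below is a placeholder rather than the project's real FOMO preprocessing.

import numpy as np
from akida import Model, devices

# Load the MetaTF model exported from Edge Impulse and map it onto the
# first Akida device found on the PCIe bus.
model = Model("ei-object-detection-metatf-model.fbz")
device = devices()[0]
model.map(device)
model.summary()                      # prints the layer-to-NP mapping on the console

# Inference on one preprocessed 224x224 RGB frame (uint8, NHWC layout)
frame = np.zeros((1, 224, 224, 3), dtype=np.uint8)   # placeholder input
outputs = model.predict(frame)
print(outputs.shape)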

Run Inferencing​

To run the application, login to the Akida development kit and execute the commands below.
$ git clone https://github.com/metanav/vehicle_detection_brainchip_edge_impulse.git
$ cd vehicle_detection_brainchip_edge_impulse
$ python3 main.py
The inferencing results can be accessed at http://<ip-address>:8080 using a web browser. The application also displays the model summary mapped onto the Akida PCIe neural processor on the console.
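
Flask was installed during setup, so serving the annotated frames on port 8080 could look roughly like the generic sketch below (hypothetical file name and route, not the repository's actual code; the inference and drawing step is elided):

import cv2
from flask import Flask, Response

app = Flask(__name__)

def mjpeg_frames(video_path="traffic.mp4"):           # assumed demo video path
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... run the Akida model on `frame` and draw detections here ...
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")

@app.route("/")
def index():
    # Stream the frames as Motion JPEG so any browser can display them
    return Response(mjpeg_frames(), mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)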

Notice there is no Softmax layer at the end of the model. That layer was removed during model conversion to run on the Akida processor, so the Softmax operation is performed in the application code rather than in the model.
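
For reference, the softmax applied in the application code amounts to a few lines of NumPy (a generic sketch, not the project's exact function):

import numpy as np

def softmax(logits, axis=-1):
    # Subtract the per-row maximum for numerical stability before exponentiating
    z = logits - np.max(logits, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)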

Demo​

The video used for the demonstration runs at a framerate of 24 fps (a budget of about 1000/24 ≈ 42 ms per frame), and the inferencing takes approximately 40 ms per frame, resulting in real-time inferencing.

Conclusion​

This project highlights the impressive abilities of the Akida PCIe board. Boasting low power consumption, it could be used as a highly effective device for real-time object detection in various industries for numerous use cases.

Thought I recalled the name Naveen Kumar from the traffic monitoring.

Appears he has created another Edge Impulse project recently using multi camera streams, Edge Impulse FOMO and Akida. Hadn't seen this one myself.

Snipped a couple of sections.


Multi-camera Video Stream Inference - Brainchip Akida Neuromorphic Processor​


Real-time inferencing for multi-camera video streaming using Brainchip Akida and Edge Impulse FOMO.
Created By: Naveen Kumar

1703565569026.png



Real-time inferencing for multi-camera video streaming for road crossings is a challenging task that involves processing and analyzing multiple video sources simultaneously, such as from different cameras or sensors, to provide useful information and insights for traffic management, safety, and planning. Some of the possible goals and applications of this task could be:

  • To recognize and monitor traffic events, such as congestion, accidents, violations, or anomalies, and alert the authorities or the public in real time.
  • To measure and optimize traffic flow, density, and patterns, and provide guidance or recommendations for traffic control, routing, or scheduling.


Conclusion​

In this project, we have evaluated the Brainchip AKD1000 Akida processor and demonstrated its effectiveness and efficiency in terms of accuracy, latency, bandwidth, and power consumption. We also conclude that the Edge Impulse FOMO model is highly suitable for constrained and low-power edge devices to achieve fast inferencing without losing much accuracy. The public version of the Edge Impulse Studio project can be found here:
 
Reactions: 55 users

AusEire

Founding Member. It's ok to say No to Dot Joining


Um FMF I'm sorry to be a downer but I thought the Akida 1000 chip/IP was a complete failure?
 
Reactions: 21 users
Geez....there's always one killjoy :LOL:

According to some uneducated (manipulators?) apparently.
 
Reactions: 22 users

Tothemoon24

Top 20
Not sure if this dribble has been posted.
Mixed lollies of positivity, negativity &, as the author says, no idea.




It sounds like a cool idea. There are a lot of silicon-based AI products trying to build and take advantage of this current level of AI enthusiasm, including a few companies who sound somewhat similar to BrainChip in trying to build smaller, much more efficient AI processors that can handle specific tasks, rather than the big generalist AI chips, like NVIDIA’s GPUs, which can handle almost any task but are overkill for some specific processing needs and have to be housed in huge data centers rather than at the Edge, or in a specific product (like a car). I don’t know enough to know what kind of tasks BrainChip’s Akida designs are best for in this idea of “distributed AI,” whether that’s inference or training or some very specific AI task that could be done by a custom-built chip, but the idea seems to be having AI processing closer that’s done closer to the real-world action, and much more efficient.

It’s a little similar to the evolution of Bitcoin hardware, frankly — originally Bitcoin was mined with regular ol’ CPUs, then some people realized that GPU’s like NVIDIA’s would be far faster if the mining was programmed to use them properly, and those took over… and then Bitcoin got big enough, and established enough, that people could justify spending a big chunk of money on developing customized ASICs (application-specific integrated circuits) that would handle just one task, Bitcoin mining, much more efficiently than a high-end and expensive GPU chipset, and a few companies started making those chips and building that hardware, and before you know it almost all Bitcoin mining was done with ASIC miners from Bitmain, Canaan and others, and ASICs were being designed for other popular cryptocurrencies, too… though GPUs still get a lot of use for the thousands of tinier cryptos that don’t yet justify the development of specific hardware.
 
Reactions: 14 users

Perhaps

Regular
Renesas roadmap gives some ideas for possible future use of Akida IP:

1703601207044.png
 
Reactions: 15 users

charles2

Regular
ASX... Time to set/follow a good example, or is it counter to your charter to protect small companies and investors?

 
Reactions: 23 users

Perhaps

Regular
What happens there at OTC?

1703609090391.png
 
Reactions: 11 users