BRN Discussion Ongoing

I will let you draw your own conclusions as to how successful the BrainChip and ISL collaboration with the US Air Force Research Laboratory has been, but in my opinion ‘Highly successful’ seems the likely answer.

A: The original BrainChip & ISL announcement, as a memory refresher.

January 09, 2022
05:30 PM Eastern Standard Time
LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), a leading provider of ultra-low power, high performance artificial intelligence technology and the world’s first commercial producer of neuromorphic AI chips and IP, today announced that Information Systems Laboratories, Inc. (ISL) is developing an AI-based radar research solution for the Air Force Research Laboratory (AFRL) based on its Akida™ neural networking processor.



B: An extract from ISL’s website noting that CEO Dr. Joe Guerci delivered a Keynote Address on recent radar advances.



IEEE International Microwave Symposium (IMS)

Denver, Colorado USA June 2022



ISL CEO Dr. Joe Guerci delivered a Keynote address on recent radar advances. In recent years, there has been a wide variety of technological advances that have either directly or indirectly impacted both the government and commercial radar industries. These include new radar architectures such as cognitive radar, and many key enabling technologies such as RF systems-on-a-chip, high performance embedded computing, machine learning/AI, and low cost, size weight and power RF subsystems. Concurrently, there has been significant technology pull from new applications in unmanned aerial systems (UAS), counter UAS (CUAS), space-borne radar, and commercial self-driving vehicles. In this talk, Dr. Guerci surveyed both the new emerging technology trends, and the applications for which they are finding a home.

C: The link to the slides used in his Keynote Address, which I suggest reading through.

Click here for a link to the presentation slides


D: An extract from slide 7, which appears to relate directly to A above.


New Architectures: Cognitive Radar (cont.)
• Cognitive Fully Adaptive Radar (CoFAR) architecture
• Highly successful AFRL project has led to DoD follow-on and commercialization (new Digital Engineering tools)
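
For anyone wondering what “cognitive” means in the CoFAR context: the radar senses its RF environment on receive and adapts its next transmission accordingly, closing a perception-action loop. Here is a minimal sketch of that loop, purely illustrative — the band count, update rule and function names are my own assumptions, not anything from ISL or AFRL:

```python
import numpy as np

rng = np.random.default_rng(42)
N_BANDS = 16  # candidate transmit sub-bands (illustrative number)

def sense_interference():
    """Stand-in for a receive dwell: measured interference power per sub-band."""
    return rng.exponential(scale=1.0, size=N_BANDS)

def choose_band(measurement, estimate, alpha=0.9):
    """Fold the new measurement into a running estimate, then pick the
    quietest sub-band for the next transmission (the perception-action loop)."""
    estimate = alpha * estimate + (1 - alpha) * measurement
    return int(np.argmin(estimate)), estimate

estimate = np.zeros(N_BANDS)
for dwell in range(5):
    band, estimate = choose_band(sense_interference(), estimate)
    print(f"dwell {dwell}: transmit on sub-band {band}")
```

The point of the toy loop is only that transmit decisions depend on what was just sensed, which is the architectural break from a fixed-waveform radar.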

My opinion only DYOR
FF

AKIDA BALLISTA
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Does anyone think NVIDIA’s “inference transformer engine” could be code for AKIDA?








Nvidia Cancels Atlan Chip For AVs, Launches Thor With Double Performance

Sam Abuelsamid
Senior Contributor

Sep 20, 2022,12:45pm EDT


At the spring 2021 Nvidia GPU Technology Conference (GTC) one of the highlights of CEO Jensen Huang’s presentation was the announcement of a next-generation system-on-a-chip (SoC) for automated vehicles called Atlan. Atlan was scheduled to be available for production vehicle applications in 2025. At this week’s fall 2022 GTC, Huang announced that Atlan had been canceled and replaced with a new design dubbed Thor that will offer twice the performance and data throughput, still arriving in 2025.

When it was announced, Atlan promised the highest performance of any automotive SoC to date with up to 1,000 trillion operations per second (TOPS) of integer computing capability. That’s about four times the performance of the Orin SoC that is launching this year in production vehicles including the Nio ET7, Xpeng G9 and the soon to be released Polestar 3 and Volvo XC90 replacement.

At the same GTC where Huang announced Atlan, he also revealed Grace, a new ARM-based CPU for servers. At the April 2022 GTC Nvidia also announced Hopper, its next-generation GPU architecture. Given the 2025 timing of Atlan, Nvidia decided that it had time to start over and deliver a new chip that combined technology from the Grace and Hopper chips to create Thor.

The Thor SoC is expected to deliver 2,000 TOPS of integer computing power as well as 2,000 teraflops of floating point performance from 77 billion transistors. For comparison, the Parker SoC that powered version 2 of Tesla AutoPilot (in combination with a Pascal GPU) from 2016 delivered about 1 TOPS and was followed in 2020 by the Xavier chip with 30 TOPS. The Xavier SoC is used in the Xpeng P7 and a number of other vehicles in China.

In addition to the new CPU and next-generation GPU cores, Thor also integrates NVLINK connections originally developed for data center applications to speed up data transfer between chips on the board. The new SoC is the first automated vehicle compute platform with an integrated inference transformer engine. Essentially, this improves the capability of deep neural networks processing sensor data by running many parallel operations so that the system has context about what is happening at any point in time. With new vehicles now including upwards of 30 sensors, including 10 or more cameras, multiple radars, lidar and ultrasonic sensors, this ability is essential to making a software perception system work.
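
For context on what an “inference transformer engine” buys you here: self-attention lets a feature token from every sensor attend to every other token in parallel, which is how the network builds the scene context the article describes. A minimal PyTorch sketch of that fusion idea, with made-up shapes and sizes — this is in no way Nvidia’s actual engine:

```python
import torch
import torch.nn as nn

# One feature token per sensor per time step: ~30 sensors, as in the article.
NUM_SENSORS, D_MODEL = 30, 128

# Two self-attention layers; every sensor token attends to every other,
# so each output token carries context from the whole sensor suite.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True),
    num_layers=2,
)

sensor_tokens = torch.randn(1, NUM_SENSORS, D_MODEL)  # (batch, tokens, features)
with torch.no_grad():
    fused = encoder(sensor_tokens)  # same shape, now context-aware
print(fused.shape)  # torch.Size([1, 30, 128])
```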

 
Does anyone think NVIDIA’s “inference transformer engine” could be code for AKIDA?

Not yet. Sorry. :(
 



TopCat

Regular
What about this??? 🙂
The freely programmable embedded vision system works with a special sensor from the French manufacturer Prophesee, which emulates an extraordinary characteristic of human vision that developed over the course of evolution: our eye, or rather our brain, is very sensitive to rapid changes, while ignoring the vast amount of motionless data.

Prophesee’s Event-based Vision Sensor (EVS) works according to the same principle: each pixel on the chip has its own logic that allows it to detect changes in brightness and send the detected event data to the evaluating computer autonomously, without being tied to the line or frame rate of conventional image sensors. The pixels themselves decide when an event is triggered and data is sent. As with human vision, this concept leads to considerably less data traffic, and thus to a lower transfer volume and shorter processing time. The sensor chip delivers up to 50 million events per second, significantly exceeding the speed of traditional image-processing sensors, and is therefore also suitable for applications in which very fast movements have to be detected.
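
The per-pixel logic described above is easy to sketch: compare each pixel’s current log-intensity against a stored reference and emit an event only when the change crosses a threshold. A toy model only — the threshold, shapes and function name are my own choices, not Prophesee’s actual circuit:

```python
import numpy as np

def events_from_frames(frames, threshold=0.15):
    """Emit (t, y, x, polarity) whenever a pixel's log-intensity has changed
    by more than `threshold` since that pixel's last event -- the per-pixel
    change detection the EVS description outlines."""
    log_ref = np.log(frames[0] + 1e-6)          # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_now = np.log(frame + 1e-6)
        diff = log_now - log_ref
        fired = np.abs(diff) > threshold
        for y, x in zip(*np.nonzero(fired)):
            events.append((t, int(y), int(x), int(np.sign(diff[y, x]))))
        log_ref[fired] = log_now[fired]         # only firing pixels reset their reference
    return events

# A mostly static scene with one moving bright dot: only the dot makes events.
frames = np.full((5, 8, 8), 0.2)
for t in range(5):
    frames[t, 3, t] = 1.0
print(len(events_from_frames(frames)), "events from a mostly static scene")
```

Note how the static background produces no output at all — that is the data-reduction the passage is describing.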
 
These researchers have done their homework very, very well, because they have actually heard of AKIDA:

“ABSTRACT

One challenging issue in speaker identification (SID) is to achieve noise-robust performance. Humans can accurately identify speakers, even in noisy environments. We can leverage our knowledge of the function and anatomy of the human auditory pathway to design SID systems that achieve better noise-robust performance than conventional approaches. We propose a text-dependent SID system based on a real-time cochlear model called cascade of asymmetric resonators with fast-acting compression (CARFAC). We investigate the SID performance of CARFAC on signals corrupted by noise of various types and levels. We compare its performance with conventional auditory feature generators including mel-frequency cepstrum coefficients, frequency domain linear predictions, as well as another biologically inspired model called the auditory nerve model. We show that CARFAC outperforms other approaches when signals are corrupted by noise. Our results are consistent across datasets, types and levels of noise, different speaking speeds, and back-end classifiers. We show that the noise-robust SID performance of CARFAC is largely due to its nonlinear processing of auditory input signals. Presumably, the human auditory system achieves noise-robust performance via inherent nonlinearities as well.
I. INTRODUCTION
Biometric authentication has a wide range of applications including human-machine interfaces, online banking, shopping, forensic testing, and crime investigation. Nowadays, iPhone's Siri, Google Assistant, Samsung's Bixby, and other smartphone assistants use audio biometric authentication. Recently, biometric authentication has been implemented on several neuromorphic systems such as TrueNorth (Modha, 2014), Loihi-Intel (Davies et al., 2018), and BrainChip's Akida (Turchin, 2019). These hardware implementations should expand applications of biometric authentication in mobile devices, cars, computers, and beyond.”
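
The “noise of various types and levels” testing in the abstract comes down to mixing clean speech with noise at controlled signal-to-noise ratios. A generic recipe for that corruption step (not the paper’s code, and the toy “speech” here is just a sine wave):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then add it to the speech -- the corruption step used when testing
    SID front-ends at various noise levels."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # 1 s toy "speech" tone
noise = rng.normal(size=16000)
for snr_db in (20, 10, 0, -5):
    noisy = mix_at_snr(speech, noise, snr_db)
    print(f"SNR {snr_db:>3} dB -> mixed RMS {np.sqrt(np.mean(noisy ** 2)):.3f}")
```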


My opinion only DYOR
FF

AKIDA BALLISTA
 
What about this??? 🙂
I am most certainly feeling this one.

Can I add that last year and earlier this year Prophesee was using the language “with AI algorithms on board”.

When you go to the following and read the final paragraph by Luca Verre, you will see a substantial change in the language and, of course, a claim that fits perfectly with AKIDA IP now being incorporated. I have emboldened in red the words that, in my opinion, support this conclusion:

“PARIS – October 3, 2022 – The full extent of the power and accessibility of Prophesee’s advanced neuromorphic vision systems will be on display at the 2022 VISION show, with an array of technology showcases, partner demonstrations and live interactions that showcase the company’s breakthrough event-based vision approach to machine vision.
Prophesee will have in-depth product demonstrations at its Booth 8B29 at the premier industry event happening October 4-6 in Stuttgart, Germany. Prophesee will also deliver an overview of its event-based platform on Wednesday, 6 October at 12:20 (Hall 8 Booth C70), entitled “The World Between Frames; Event Camera Come of Age.”
Secure your private meeting today for in-depth insights from our experts: http://prophesee.ai/meet
Prophesee’s Metavision® platform has gained traction with leading developers of machine vision systems in industrial automation, robotics, automotive and IoT. Its event-based vision approach significantly reduces the amount of data needed to capture information. Among the benefits of the sensor and AI technology are ultra-low latency, robustness to challenging lighting conditions, energy efficiency, and low data rate.
The company’s breakthrough sensors, accompanying development tools, open-source algorithms and models are at the foundation of several announcements and demonstrations at the show, including:

  • CENTURY ARKS: Century Arks announces it will release mid-October the SilkyEvCam HD, first commercial Event-based Vision camera featuring the IMX636MP sensor realized in collaboration between Sony and Prophesee.
  • CIS: Prophesee announces collaboration with CIS, a leading machine vision provider, to build the first Event-Based 3D sensing evaluation platform, leveraging advanced VCSEL technology: https://www.prophesee.ai/2022/10/03/cis-prophesee-structured-light-evaluation-kit/
  • Datalogic: A global technology leader in the automatic data capture and factory automation markets begins landmark partnership with Prophesee to bring the performance and efficiency of neuromorphic vision to its industrial products: https://www.prophesee.ai/2022/10/03/datalogic-prophesee-event-camera-partnership/
  • Framos: A leading global supplier of imaging products, custom vision solutions and OEM services is releasing their brand new Event-Based Vision development kit based on NVIDIA Jetson Xavier, featuring IMX636 sensor realized in collaboration between Sony and Prophesee.
  • Lucid Vision Labs: Following early prototype announcement at VISION 2021, Machine Vision leader Lucid announces commercial availability of its new Triton EVS, featuring the Prophesee Metavision sensor inside.
  • MVTec: Prophesee and MVTec Partner to Support Integration of Prophesee Event-Based Metavision® Cameras with MVTec HALCON Machine Vision Software. https://www.prophesee.ai/2022/10/03/mvtec-prophesee-halcon-integration-partnership/
“We are very pleased to see the fast-growing ecosystem around event-based vision and the variety of applications this powerful approach to machine vision can be used for. We have made great strides in the Machine Vision world even from last year when we were named best in show by VISION, and this year we shine the spotlight on not just our innovations but the broad range of use cases being enabled by our partners around the world,” said Luca Verre, CEO and co-founder of Prophesee.

My opinion only DYOR
FF


AKIDA BALLISTA
 

cosors

👀
Yesterday I made a naive post about running Akidas in parallel, but deleted it again. Nevertheless, the thought won't let me go. The answers to the following questions can probably be found here somewhere, but you know how it is. So I'll just dare to ask:

1.) How many Akidas can be connected together? (At least I assume the connection is parallel.)

2.) Why is the number limited?

3.) I assume the energy consumption is not constant regardless of what is being computed. What is the maximum energy consumption per Akida if someone ran it at full capacity?

My thinking goes in the direction of an Akida cluster — the maximum that is possible. Like an Akida mainframe.
 

TopCat

Regular
I am most certainly feeling this one.

What confuses me is the specification page, which mentions the Prophesee camera, but the processor specs stump me. Reading the camera's general information page, it certainly sounds like Akida.
[attached: screenshot of the camera's specification page]
 



TopCat

Regular
I am most certainly feeling this one.

So it seems Prophesee isn't with Imago Technologies anymore, as they are not listed there? And Imago were also at that show. 🤔
 

Diogenese

Top 20
I will let you draw your own conclusions as to how successful the BrainChip and ISL collaboration with the US Air Force Research Laboratory has been, but in my opinion ‘Highly successful’ seems the likely answer.
I wonder if they installed their radar on the Ghost Bat (Loyal Wingman) drone the USAF just bought from RAAF/Boeing.
 


FJ-215

Regular
Hi all,

Loving the research and dots being put up. I got pinged by Bloomberg in the early hours this morning with a piece on everything that is wrong with self-driving cars. It seems legacy computers aren't up to the job.

“You think the computer can see everything and can understand what’s going to happen next. But computers are still really dumb”

Says it all really.

Worth a read.

Even After $100 Billion, Self-Driving Cars Are Going Nowhere

'The first car woke Jennifer King at 2 a.m. with a loud, high‑pitched hum. “It sounded like a hovercraft,” she says, and that wasn’t the weird part. King lives on a dead-end street at the edge of the Presidio, a 1,500-acre park in San Francisco where through traffic isn’t a thing. Outside she saw a white Jaguar SUV backing out of her driveway. It had what looked like a giant fan on its roof—a laser sensor—and bore the logo of Google’s driverless car division, Waymo.

She was observing what looked like a glitch in the self-driving software: The car seemed to be using her property to execute a three-point turn. This would’ve been no biggie, she says, if it had happened once. But dozens of Google cars began doing the exact same thing, many times, every single day.'
 

Baisyet

Regular
Rob T liking this one

 
I am wondering why Edge Impulse is promoting this.

Because they're really nice and want to give Google a leg up in the camera industry. 🤡

Regards
FF


AKIDA BALLISTA
 