BRN Discussion Ongoing

Hi - does anyone have any thoughts on why they used YOLO to perform the detailed analysis of the HD imagery instead of using Akida?

I get that there is a benefit in performing the initial triage of the incoming data using Akida to analyse low def images to ditch images without any ships present.

But why not train Akida to identify various types of ships and then use that to analyse the HD imagery?

Instead they trained YOLO to recognise different types of ships; my initial assumption is that this is because an SNN is not able to perform that task?

Yet my understanding is that I could take my CNNs (e.g. a YOLO CNN trained to recognise different ships), convert them to SNNs to use with Akida, and consume less power than if I was using a CNN [using this to convert my CNN: https://doc.brainchipinc.com/user_guide/cnn2snn.html].
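For reference, this is roughly the conversion route I had in mind, sketched from the cnn2snn user guide linked above. Treat it as a hedged sketch rather than a verified recipe: the file name, input batch and bit widths are my assumptions, the quantize/convert signatures vary between MetaTF releases, and a full YOLO head may well contain layers that Akida 1.0 cannot map, which could be part of the answer to my own question.

```python
# Rough sketch only, based on https://doc.brainchipinc.com/user_guide/cnn2snn.html
# Exact signatures differ between MetaTF releases; the model file and input
# batch below are placeholders.
import numpy as np
import tensorflow as tf
from cnn2snn import quantize, convert

keras_model = tf.keras.models.load_model("ship_classifier.h5")  # my trained CNN (placeholder path)

# Quantize to Akida-compatible bit widths (8-bit input weights, 4-bit weights/activations)
quantized = quantize(keras_model,
                     input_weight_quantization=8,
                     weight_quantization=4,
                     activ_quantization=4)

# Convert the quantized Keras model into an Akida (SNN) model and run it
akida_model = convert(quantized)
images = np.zeros((1, 256, 256, 3), dtype=np.uint8)  # placeholder uint8 NHWC batch
predictions = akida_model.predict(images)
```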

Is the SNN unable to perform the inference on the HD imagery in granular detail, i.e. it can recognise a ship but not what type of ship, because:
- maybe the Akida hardware (ver 1) is not powerful enough to run inference on HD images (is Akida 2 powerful enough)?
- we don't have the algorithms yet to recognise such detail (so our CNN2SNN conversion is not great and needs to improve)?
- maybe I don't understand SNNs well enough and this tech is not capable of this level of detail?

I have used YOLO a lot on projects (different fish species, people at crowded events, etc.) and always assumed one day I would do this at the edge with an SNN (which is why I invested in BrainChip) - now I wonder what I have missed?
Just my thoughts.

We already know that behemoths like NVIDIA etc permeate the market and have the name and resources to go with it.

Personally, I get the read that they were trying to show that by adding Akida as a classifier to something like a Jetson (which, let's face it, will be a first look for most larger corporations looking at the edge space, as in this shipping example), Akida complements and actually enhances the performance of that sort of product by reducing the power and improving outcomes.

Rather than this paper discussing standalone comparisons of Akida, models, SNNs and processors per se, it feels like another go-to-market strategy: educating them on where we can assist and enhance processors that may already be in situ.... can we ride on the NVIDIA coat-tails a little?

Edit - the other thought in this example is that, as Akida is being used as the classifier and the Jetson for the downstream processing, and the Jetson doesn't run SNNs as far as I'm aware, maybe YOLO was utilised as it can be run by both Akida and the Jetson? We're there for the sparsity, power reduction and classification.

"For edge computing tasks, it is common to have a small gating model which activates more costly downstream processing whenever necessary."

"We show that our classifier stage running on Akida benefits from a high degree of sparsity when processing the vast amounts of homogeneous bodies of water, clouds or land in satellite images, where only 0.3% of the pixels are objects of interest."

"The limitations of our system are the increased needs of size, having to fit two different accelerators instead of a single one. But by combining the strengths of different hardware platforms, we can optimize the overall performance, which is critical for edge computing applications in space"
 
Reactions: 25 users

IloveLamp

Top 20
 
Reactions: 19 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Latest paper just released today from Gregor Lenz at Neurobus and Doug McLelland at BrainChip.


Low-power Ship Detection in Satellite Images Using Neuromorphic Hardware​

Gregor Lenz (corresponding author, e-mail: gregor@neurobus.space), Neurobus, Toulouse, France; Douglas McLelland, BrainChip, Toulouse, France

Abstract​

Transmitting Earth observation image data from satellites to ground stations incurs significant costs in terms of power and bandwidth. For maritime ship detection, on-board data processing can identify ships and reduce the amount of data sent to the ground. However, most images captured on board contain only bodies of water or land, with the Airbus Ship Detection dataset showing only 22.1% of images containing ships. We designed a low-power, two-stage system to optimize performance instead of relying on a single complex model. The first stage is a lightweight binary classifier that acts as a gating mechanism to detect the presence of ships. This stage runs on BrainChip’s Akida 1.0, which leverages activation sparsity to minimize dynamic power consumption. The second stage employs a YOLOv5 object detection model to identify the location and size of ships. This approach achieves a mean Average Precision (mAP) of 76.9%, which increases to 79.3% when evaluated solely on images containing ships, by reducing false positives. Additionally, we calculated that evaluating the full validation set on an NVIDIA Jetson Nano device requires 111.4 kJ of energy. Our two-stage system reduces this energy consumption to 27.3 kJ, which is less than a fourth, demonstrating the efficiency of a heterogeneous computing system.

1 Introduction

Ship detection from satellite imagery is a critical application within the field of remote sensing, offering significant benefits for maritime safety, traffic monitoring, and environmental protection. The vast amount of data generated by satellite imagery cannot all be processed on the ground in data centers, as downlinking image data from a satellite is a costly process in terms of power and bandwidth.
Figure 1: Data flow chart of our system.
To help satellites identify the most relevant data to downlink and alleviate processing on the ground, recent years have seen the emergence of edge artificial intelligence (AI) applications for Earth observation [xu2022lite, zhang2020ls, xu2021board, ghosh2021board, yao2019board, alghazo2021maritime, vstepec2019automated]. By sifting through the data on-board the satellite, we can discard a large number of irrelevant images and focus on the relevant information. Because satellites are subject to extreme constraints in size, weight and power, energy-efficient AI systems are crucial. In response to these demands, our research focuses on using low-power neuromorphic chips for ship detection tasks in satellite images. Neuromorphic computing, inspired by the neural structure of the human brain, offers a promising avenue for processing data with remarkable energy efficiency. The Airbus Ship Detection challenge [al2021airbus] on Kaggle aimed to identify the best object detection models. A post-challenge analysis [faudi2023detecting] revealed that a binary classification pre-processing stage was crucial in winning the challenge, as it reduced the rates of false positives and therefore boosted the relevant segmentation score. We introduce a ship detection system that combines a binary classifier with a powerful downstream object detection model. The first stage is implemented on a state-of-the-art neuromorphic chip and determines the presence of ships. Images identified as containing ships are then processed by a more complex detection model in the second stage, which can be run on more flexible hardware. Our work showcases a heterogeneous computing pipeline for a complex real-world task, combining the low-power efficiency of neuromorphic computing with the increased accuracy of a more complex model.
Figure 2: The first two images are examples of the 22% of annotated samples. The second two images are examples of the majority of images that do not contain ships but only clouds, water or land.

2 Dataset

The Airbus Ship Detection dataset [airbus_ship_detection_2018] contains 192k satellite images, of which 22.1% contain annotated bounding boxes for a single ship class. Key metrics of the dataset are described in Table 1. As can be seen in the sample images in Figure 2, a large part of the overall pixel space captures relatively homogeneous content such as open water or clouds. We chose this dataset as it is part of the European Space Agency’s (ESA) On-Board Processing Benchmark suite for machine learning applications [obpmark], whose goal is to test and compare a variety of edge computing hardware platforms on the most common ML tasks related to space applications. The annotated ship bounding boxes have diagonals that vary from 1 to 380 pixels in length, and 48.3% of bounding boxes have diagonals of 40 pixels or shorter. Given that the images are 768×768 px in size, this makes it a challenging dataset, as the model needs to be able to detect ships of a large variety of sizes. Since annotations are only available for the training set on Kaggle, we used a random 80/20 split for training and validation, similar to Huang et al. [huang2020fast]. For our binary classifier, we downsized all images to 256×256 px, to be compatible with the input resolution of Akida 1.0, and labeled the images as 1 if they contained at least one bounding box of any size, otherwise 0. For our detection model, we downsized all images to 640×640 px.
RGB image size: 768×768 px
Total number of images: 192,555
Number of training images: 154,044
Percentage of images that contain ships: 22.1%
Total number of bounding boxes: 81,723
Median diagonal of all bounding boxes: 43.19 px
Ratio of bounding box to image area: 0.3%
Table 1: Summary of image and bounding box data for the Airbus Ship Detection training dataset.
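As an illustration of the binary-label preparation just described, here is a minimal sketch assuming the Kaggle CSV layout (ImageId, EncodedPixels, with empty entries for ship-free images); file and directory names are placeholders, not the authors' actual pipeline.

```python
# Minimal sketch of deriving binary ship/no-ship labels from the Kaggle
# annotations; paths and the CSV layout are assumptions, not the paper's code.
import pandas as pd
from PIL import Image

df = pd.read_csv("train_ship_segmentations_v2.csv")  # columns: ImageId, EncodedPixels

# One row per mask; an image is a positive if any of its rows has a non-empty RLE mask.
labels = (df.groupby("ImageId")["EncodedPixels"]
            .apply(lambda masks: int(masks.notna().any())))
print("fraction of images with ships:", round(labels.mean(), 3))  # should be near 0.221

def load_for_classifier(image_id, folder="train_v2", size=(256, 256)):
    """Downsize a 768x768 RGB image to the classifier's 256x256 input resolution."""
    return Image.open(f"{folder}/{image_id}").convert("RGB").resize(size)
```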

3 Models

For our binary classifier, we used an 866k-parameter model named AkidaNet 0.5, which is loosely inspired by MobileNet [howard2017mobilenets] with alpha = 0.5. It consists of standard convolutional, separable convolutional and linear layers, to reduce the number of parameters and to be compatible with Akida 1.0 hardware. To train the network, we used binary cross-entropy loss, the Adam optimizer, a cosine decay learning rate scheduler with an initial rate of 0.001, and lightweight L1 regularization on all model parameters over 10 epochs. For our detection model, we trained a YOLOv5 medium [ge2021yolox] model of 25M parameters with stochastic gradient descent, a learning rate of 0.01 and 0.9 momentum, plus blurring and contrast augmentations, over 25 epochs.
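The following is a hedged Keras sketch of that classifier training recipe. AkidaNet 0.5 itself ships with BrainChip's model zoo; to keep the snippet self-contained I use tf.keras.applications.MobileNet(alpha=0.5) as a stand-in backbone, and the batch size and L1 coefficient are assumptions rather than the paper's values.

```python
# Hedged sketch of the binary-classifier training setup described in Section 3.
# MobileNet(alpha=0.5) stands in for AkidaNet 0.5; hyperparameters marked below
# as assumed are not taken from the paper.
import tensorflow as tf

backbone = tf.keras.applications.MobileNet(
    input_shape=(256, 256, 3), alpha=0.5, include_top=False, weights=None)

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    # Single sigmoid unit for ship / no-ship, with light L1 weight regularization
    tf.keras.layers.Dense(1, activation="sigmoid",
                          kernel_regularizer=tf.keras.regularizers.l1(1e-5)),  # coefficient assumed
])

steps_per_epoch = 154_044 // 32  # training images / assumed batch size of 32
schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3, decay_steps=10 * steps_per_epoch)

model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.Recall(), tf.keras.metrics.Precision()])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```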

4 Akida hardware

Akida by BrainChip is an advanced artificial intelligence processor inspired by the neural architecture of the human brain, designed to provide high-performance AI capabilities at the edge with exceptional energy efficiency. Version 1.0 is available for purchase in a PCIe x1 form factor, as shown in Figure 3, and supports convolutional neural network architectures. Version 2.0 adds support for a variety of neural network types including RNNs and transformer architectures, but is currently only available in simulation. The Akida processor operates in an event-based mode for intermediate layer activations, which only performs computations for non-zero inputs, significantly reducing operation counts and allowing direct, CPU-free communication between nodes. Akida 1.0 supports flexible activation and weight quantization schemes of 1, 2, or 4 bit. Models are trained in BrainChip’s MetaTF, which is a lightweight wrapper around TensorFlow. In March 2024, Akida was also sent to space for the first time [brainchip2024launch].
Figure 3: AKD1000 chip on a PCB with PCIe x1 connector.

5 Results

5.1 Classification accuracy

The key metrics for our binary classification model are provided in Table 2. The trained floating point model reaches an accuracy of 97.91%, which drops to 95.75% after quantizing the weights and activations to 4 bit and the input layer weights to 8 bit. After one epoch of quantization-aware training with a tenth of the normal learning rate, the model recovers nearly its floating point accuracy, at 97.67%. Work by Alghazo et al. [alghazo2021maritime] reaches an accuracy of 89.7% in the same binary classification setting, albeit on a subset of the dataset and on images that are downscaled to 100 pixels. In addition, the corresponding recall and precision metrics for our model are shown in the table. In our system we prioritize recall, because false negatives (missing a ship) have a higher cost than false positives (detecting ships where there are none), as the downstream detection model can correct for mistakes of the classifier stage. By default we obtain a recall of 94.4% and a precision of 95.07%, but by adjusting the decision threshold on the output, we bias the model to include more images at the cost of precision, obtaining a recall of 97.64% for a precision of 89.73%.
Table 2: Model performance comparison in percent. FP is floating point; 4 bit is the quantized model with 8-bit inputs and 4-bit activations and weights; QAT is quantization-aware training for 1 epoch with reduced learning rate. Precision and recall values are given for decision thresholds of 0.5 and 0.1.

Metric                           FP      4 bit   After QAT
Accuracy                         97.91   95.75   97.67
Accuracy [alghazo2021maritime]   89.70   -       -
Recall                           95.23   85.12   94.40 / 97.64
Precision                        95.38   95.32   95.07 / 89.73
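To make the threshold trade-off concrete, here is a small sketch of how recall and precision shift when the decision threshold on the classifier's sigmoid output is lowered from 0.5 to 0.1. The labels and scores below are random placeholders, so the printed numbers will not match Table 2; the paper's values are noted in the comments.

```python
# Recall/precision at two decision thresholds, mirroring the trade-off in Table 2.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)   # placeholder ground-truth labels
y_score = rng.random(1000)               # placeholder sigmoid outputs

def recall_precision(y_true, y_score, threshold):
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fn), tp / (tp + fp)

# Paper values: ~94.4% recall / 95.07% precision at 0.5,
#               ~97.64% recall / 89.73% precision at 0.1.
for t in (0.5, 0.1):
    r, p = recall_precision(y_true, y_score, t)
    print(f"threshold={t}: recall={r:.2%}, precision={p:.2%}")
```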

5.2 Performance on Akida 1.0

The model that underwent QAT is then deployed to the Akida 1.0 reference chip, AKD1000, where the same accuracy, recall and precision are observed as in simulation. As detailed in Table 3, feeding a batch of 100 input images takes 1.168 s and consumes 440 mW of dynamic power. The dynamic energy used to process the whole batch is therefore 515 mJ, which translates to 5.15 mJ per image. The network is distributed across 51 of the 78 available neuromorphic processing cores. During our experiments, we measured 921 mW of static power usage on the AKD1000 reference chip. We note that this value is considerably reduced in later chip generations.
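As a back-of-the-envelope check, the per-image figure follows directly from the batch duration and dynamic power quoted above (values as reported in Table 3):

```python
# Dynamic energy per image on Akida 1.0, recomputed from the measured batch figures.
batch_duration_s = 1.16795   # time to process a batch of 100 images
dynamic_power_w = 0.4408     # measured dynamic power during the batch
batch_energy_j = dynamic_power_w * batch_duration_s
print(f"energy per batch:  {batch_energy_j * 1e3:.1f} mJ")        # ~515 mJ
print(f"energy per sample: {batch_energy_j * 1e3 / 100:.2f} mJ")  # ~5.15 mJ
```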

Table 3: Summary of performance metrics on Akida 1.0 for a batch size of 100.
Total duration (ms): 1167.95
Duration per sample (ms): 11.7
Throughput (fps): 85.7
Total dynamic power (mW): 440.8
Energy per batch (mJ): 514.84
Energy per sample (mJ): 5.15
Total neuromorphic processing cores: 51
We can further break down performance across the different layers in the model. The top plot in Figure 4 shows the latency per frame: it increases as layers are added up to layer 7, but beyond that, the later layers make almost no difference. As each layer is added, we can measure energy consumption, and estimate the per-layer contribution as the difference from the previous measurement, shown in the middle plot of Figure 4. We observe that most of the energy during inference is spent on earlier layers, even though the work required per layer is expected to be relatively constant as spatial input sizes are halved, but the number of filters doubled throughout the model. The very low energy measurements of later layers are explained by the fact that Akida is an event-based processor that exploits sparsity. When measuring the density of input events per layer as shown in the bottom plot of Figure 4, we observe that energy per layer correlates well with the event density. The very first layer receives dense images, but subsequent layers have much sparser inputs, presumably due to a lot of input pixels that are not ships, which in turn reduces the activation of filters that encode ship features. We observe an average event density of 29.3% over all layers including input, reaching less than 5% in later layers. This level of sparsity is achieved through the combination of ReLU activation functions and L1 regularization on activations during training.
Figure 4: Layer-wise statistics per sample image for inference of the binary classifier on Akida v1, measured for a batch of 100 images over 10 repeats.
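For readers wondering how that activation sparsity is obtained in practice: the paper attributes it to ReLU plus an L1 penalty on activations, which in Keras terms is the activity_regularizer argument. A toy sketch follows; the coefficient, layer size and input are assumptions, not the paper's values.

```python
# Toy illustration of ReLU + L1 activity regularization, the combination the
# paper credits for low event density on Akida. Sizes/coefficients are assumed.
import tensorflow as tf

layer = tf.keras.layers.Conv2D(
    32, 3, padding="same", activation="relu",
    activity_regularizer=tf.keras.regularizers.l1(1e-6))

x = tf.random.uniform((1, 64, 64, 16))              # placeholder feature map
y = layer(x)
density = tf.reduce_mean(tf.cast(y > 0, tf.float32))  # fraction of non-zero "events"
print(f"activation density: {float(density):.1%}")
```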

5.3 Detection model performance

For our subsequent stage, our YOLOv5 model of 25M parameters achieves 76.9% mAP when evaluated on the full validation set containing both ship and non-ship images. When we evaluate the same model on the subset of the validation set that just contains ships, the mAP jumps to 79.3%, as the false positives are reduced considerably. That means that our classifier stage already has a beneficial influence on the detection performance of the downstream model. Table 4 provides an overview of detection performance in the literature. Machado et al. [machado2022estimating] provide measurements for different YOLOv5 models on the NVIDIA Jetson Nano series, a hardware platform designed for edge computing. For the YOLOv5 medium model, the authors report an energy consumption of 0.804 mWh per frame and a throughput of 2.7 frames per second at an input resolution of 640 pixels, which translates to a power consumption of 7.81 W. The energy necessary to process the full validation dataset of 38,511 images on a Jetson is therefore 38,511 × 7.81 / 2.7 ≈ 111.4 kJ. For our proposed two-stage system, we calculate the total energy as the sum of processing the full validation set on Akida plus processing the identified ship images on the Jetson device. Akida has a power consumption of 0.921 + 0.44 = 1.361 W at a throughput of 85.7 images/s. With a recall of 97.64% and a precision of 89.73%, 9,243 images, equal to 24.03% of the validation data, are classified as containing ships, in contrast to the actual 22.1%. We therefore obtain an overall energy consumption of 38,511 × 1.361 / 85.7 + 9,243 × 7.81 / 2.7 ≈ 27.3 kJ. Our proposed system uses 4.07 times less energy to evaluate this specific dataset.
Model                        mAP (%)   Energy (kJ)
YOLOv3 [patel2022deep]       49        -
YOLOv4 [patel2022deep]       61        -
YOLOv5 [patel2022deep]       65        -
Faster RCNN [al2021airbus]   80        -
YOLOv5                       76.9      111.4
AkidaNet + YOLOv5            79.3      27.3
Table 4: Mean average precision and energy consumption evaluated on the Airbus Ship Detection dataset.
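The energy comparison above can be reproduced directly from the quoted figures; a quick sanity check (all numbers taken from the text, nothing new):

```python
# Reproducing the Section 5.3 energy comparison from the figures quoted in the text.
n_val = 38_511                # validation images
jetson_power_w = 7.81         # YOLOv5m on Jetson Nano (0.804 mWh/frame at 2.7 fps)
jetson_fps = 2.7
akida_power_w = 0.921 + 0.44  # static + dynamic power on AKD1000
akida_fps = 85.7
n_flagged = 9_243             # images the classifier passes to the detector (24.03%)

jetson_only_kj = n_val * jetson_power_w / jetson_fps / 1e3
two_stage_kj = (n_val * akida_power_w / akida_fps
                + n_flagged * jetson_power_w / jetson_fps) / 1e3
print(f"Jetson only:    {jetson_only_kj:.1f} kJ")           # ~111.4 kJ
print(f"Akida + Jetson: {two_stage_kj:.1f} kJ")             # ~27.3 kJ
print(f"reduction:      {jetson_only_kj / two_stage_kj:.2f}x")  # ~4.07x
```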

6 Discussion

For edge computing tasks, it is common to have a small gating model which activates more costly downstream processing whenever necessary. As only 22.1% of images in the Airbus detection dataset contain ships, a two-stage processing pipeline can leverage different model and hardware architectures to optimize the overall system. We show that our classifier stage running on Akida benefits from a high degree of sparsity when processing the vast amounts of homogeneous bodies of water, clouds or land in satellite images, where only 0.3% of the pixels are objects of interest. We hypothesise that many filter maps that encode ship features are not activated most of the time. This has a direct impact on the dynamic power consumption and latency during inference due to Akida’s event-based processing nature. In addition, we show that a two-stage system actually increases the mAP of the downstream model by reducing false positive rates, as is also mentioned in the post-challenge analysis of the Airbus Kaggle challenge [faudi2023detecting]. The energy consumption of the hybrid system is less than a fourth compared to running the detection model on the full dataset, with more room for improvement when using Akida v2, which is going to reduce both static and dynamic power consumption and allow the deployment of more complex models that likely achieve higher recall rates. The limitations of our system are the increased size requirements of having to fit two different accelerators instead of a single one. But by combining the strengths of different hardware platforms, we can optimize the overall performance, which is critical for edge computing applications in space.
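A minimal sketch of the gating pattern described in this discussion (not the authors' code; the model objects and threshold are placeholders):

```python
# Two-stage gating sketch: a cheap classifier decides whether the expensive
# detector runs at all. gate_model and detector are placeholder callables.
def two_stage_detect(image_256, image_640, gate_model, detector, threshold=0.1):
    """image_256 / image_640: the same frame resized for each stage."""
    ship_probability = float(gate_model(image_256))
    if ship_probability < threshold:
        return []                   # no ship detected: nothing to downlink
    return detector(image_640)      # run the YOLO stage only when the gate fires
```

Lowering the gate threshold trades classifier precision for recall, as in Table 2, since a ship missed at the gate can never be recovered by the downstream detector.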


Thanks for posting FMF!

So the proposed system achieved 4.07 times less energy consumption when the AKD1000 was combined with the NVIDIA Jetson Nano, as opposed to running solely on the NVIDIA Jetson Nano?

That's very impressive, but what's even more mind-boggling is that it states: "During our experiments, we measured 921 mW of static power usage on the AKD1000 reference chip. We note that this value is considerably reduced in later chip generations."

Assuming this works in a similar way when AKIDA is combined with numerous NVIDIA products, which one of NVIDIA's customers wouldn't jump at the chance to implement a system whereby the total energy consumption per task could be reduced so dramatically?
 
Reactions: 56 users

toasty

Regular
Thanks for posting FMF!

So the proposed system achieved 4.07 times less energy consumption when the AKD1000 was combined with the NVIDIA Jetson Nano, as opposed to running solely on the NVIDIA Jetson Nano?

That's very impressive, but what's even more mind-boggling is that it states: "During our experiments, we measured 921 mW of static power usage on the AKD1000 reference chip. We note that this value is considerably reduced in later chip generations."

Assuming this works in a similar way when AKIDA is combined with numerous NVIDIA products, which one of NVIDIA's customers wouldn't jump at the chance to implement a system whereby the total energy consumption per task could be reduced so dramatically?
Nvidia takeover incoming??????
 
Reactions: 5 users

7für7

Top 20
Come on Braini… COMEON…COMEON

 
Reactions: 1 users
Just because you cannot think of any doesn’t mean there aren’t any… 😉

Not all of BrainChip’s EAP customers have been disclosed to date.
Nevertheless, a few of them were officially announced (which obviously reduces the possible number of yet undisclosed EAP customers and hence the likelihood of Meta being one of them).




View attachment 65127



View attachment 65128




View attachment 65129



It also sounds as if BrainChip considered Valeo an EAP customer:




View attachment 65131


And then there was also the ASX announcement on an agreement with a Tier-1 Automotive Manufacturer (which the next day was revealed to be Ford, after a “please explain” by the ASX forced BrainChip to do so)

View attachment 65132



View attachment 65133
Yeah, but apart from Ford (which was never actually announced as an EAP, but is safe to assume, and which was not a good situation for the Company to have been put in, and a major reason why the Company now keeps its lips tightly zipped), the EAPs announced have either been obscure or relatively small fry.

NASA is a huge name, but not really a business.

"Not all of BrainChip’s EAP customers have been disclosed to date"

Your statement implies that most have, which would indicate you think we only have a handful..
I really don't think that's the case, and I hope it's not..

Mercedes can also be safely assumed, but not announced as an EAP (or I'm sure you would have dug it up).

So my point still stands: it's a very weak argument to say that Meta not being announced as an EAP has any weight against the possibility 😛
 
Reactions: 16 users

manny100

Regular
Yeah, but apart from Ford (which was never actually announced as an EAP, but is safe to assume, and which was not a good situation for the Company to have been put in, and a major reason why the Company now keeps its lips tightly zipped), the EAPs announced have either been obscure or relatively small fry.

NASA is a huge name, but not really a business.

"Not all of BrainChip’s EAP customers have been disclosed to date"

Your statement implies that most have, which would indicate you think we only have a handful..
I really don't think that's the case, and I hope it's not..

Mercedes can also be safely assumed, but not announced as an EAP (or I'm sure you would have dug it up).

So my point still stands: it's a very weak argument to say that Meta not being announced as an EAP has any weight against the possibility 😛
It's likely almost all the relevant big companies would have at least had a look at AKIDA.
It's the quick and the dead, and no one wants to be left behind, including holders, because once some of the window shoppers who have been trialling the product become customers, the SP will fly.
Not all will buy but we do not need all.
 
Reactions: 12 users

IloveLamp

Top 20
Reactions: 3 users
It's likely almost all the relevant big companies would have at least had a look at AKIDA.
It's the quick and the dead, and no one wants to be left behind, including holders, because once some of the window shoppers who have been trialling the product become customers, the SP will fly.
Not all will buy but we do not need all.
It is well over 2 years now (March 2022) since Rob Telson had the interview with Al Martin of "Making Data Simple".



In it, Al asked and Rob stated the following..

Al: "Who is your biggest competitor?"

Rob Telson: "There are a lot of great companies that are designing and have developed applications and devices to support AI in the future. I think that when you look at the company that has really seen some success incorporating AI into active working products it’s the big guy that’s developed GPU’S and that is NVIDIA. But what they’re doing doesn’t support the edge devices of the future, and that’s where we strongly believe two things. Number one we don’t see companies like that as a competitor. We actually see them as a partner where we can complement what they have started and what they're doing, and our technology can work side by side in those environments or it can work independent in those environments."



This was when Nvidia was a Company that few had heard of, outside of gamers and crypto miners..

Can you imagine what would happen now, if BrainChip announced that NVIDIA, now the largest Company on the Planet (by market cap), had become a partner or customer?

The mind boggles..



"Please God!"
 
Reactions: 48 users

jtardif999

Regular
Somewhat disappointingly, there is no image of Akida to be spotted in the pictures showing impressions of the first “Swedish SNN network seminar on industrial applications of SNN”.

View attachment 65066


Ericsson’s Ahsan Javed Awan continues to be enamoured with Intel, although I noticed a slight change in his slide, where “neuromorphic hardware” is no longer followed by “(Loihi 2)”. Interesting, given that “Lava is platform-agnostic, so that applications can be prototyped on conventional CPUs/GPUs and deployed to heterogeneous system architectures spanning both conventional processors as well as a range of neuromorphic chips such as Intel’s Loihi.”
(https://lava-nc.org/)
Could that signify he is also trying out other processors these days?

June 2024
View attachment 65075

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-418864

compare to July 2023
View attachment 65074

Dylan Muir from SynSense gave a remote presentation on Speck:

View attachment 65067

And I also spotted the SpiNNaker and IBM logos on the opening slide of the online presentation by Jörg Conradt (KTH Stockholm) - no surprise here.

View attachment 65098

Hopefully, researchers in Jörg Conradt’s Neuro Computing Systems lab, which moved from Munich (TUM) to Stockholm (KTH), will give Akida another chance one of these days (after the not overly glorious assessment by two KTH Master students in their degree project Neuromorphic Medical Image Analysis at the Edge, which was shared here before: https://www.diva-portal.org/smash/get/diva2:1779206/FULLTEXT01.pdf), trusting the positive feedback from two more advanced researchers Jörg Conradt knows well, who have (resp. soon will have) first-hand experience with AKD1000:

When he was still at TUM, Jörg Conradt was the PhD supervisor of Cristian Axenie (now head of SPICES lab at TH Nürnberg, whose team came runner-up in the 2023 tinyML Pedestrian Detection Hackathon utilising Akida) and co-authored a number of papers with him, and now at Stockholm, he is the PhD supervisor of Jens Egholm Pedersen, who is one of the co-organisers of the topic area Neuromorphic systems for space applications at the upcoming Telluride Neuromorphic Workshop, that will provide participants with neuromorphic hardware, including Akida. (I’d venture a guess that the name Jens on the slide refers to him).



Let’s savour once again the above quote by Rasmus Lundqvist, who is a Senior Researcher in Autonomous Systems at RISE (Sweden’s state-owned research institute and innovation partner), with a focus on drones and innovative aerial mobility.


“And mark my words; there is no more suitable AI tech for low-power low-latency than SNNs and neuromorphic chips to run them.”


RISE’s ongoing project Visual Inspection of airspace for air traffic and SEcuRity (a collaboration with SAAB, https://www.saab.com/) sounds like a perfect use case for Akida:

View attachment 65118
View attachment 65119





View attachment 65121
We must be the elephant in the room 😉
 
Reactions: 6 users
It is well over 2 years now (March 2022) since Rob Telson had the interview with Al Martin of "Making Data Simple".



In it, Al asked and Rob stated the following..

Al: "Who is your biggest competitor?"

Rob Telson: "There are a lot of great companies that are designing and have developed applications and devices to support AI in the future. I think that when you look at the company that has really seen some success incorporating AI into active working products it’s the big guy that’s developed GPU’S and that is NVIDIA. But what they’re doing doesn’t support the edge devices of the future, and that’s where we strongly believe two things. Number one we don’t see companies like that as a competitor. We actually see them as a partner where we can complement what they have started and what they're doing, and our technology can work side by side in those environments or it can work independent in those environments."



This was when Nvidia was a Company that few had heard of, outside of gamers and crypto miners..

Can you imagine what would happen now, if BrainChip announced that NVIDIA, now the largest Company on the Planet (by market cap), had become a partner or customer?

The mind boggles..


View attachment 65138
"Please God!"

 
Reactions: 15 users
Reactions: 20 users

7für7

Top 20
i think WE WILL BUY NVIDIA !!! THIS WILL MAKE A HUGE IMPACT IN THE INDUSTRY HAHAHAHAHAHA…:HAHAHAHAHAAAAAAAAA mmmmUUUHHHaaaaahahahah

 
Reactions: 14 users

Gazzafish

Regular
Why do I keep saying in my head …. Kodak: “We aren’t going to go digital because we are making so much money from film”………..Nvidia: “we aren’t going Neuromorphic because we are making so much money from GPUs” ….. 🤔😁
 
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I wonder if we're involved in this? Is it mere coincidence that our latest patent covers anomaly detection? And when you click on the link at the very end of the article it takes you to this Edge Impulse page mentioning "the latest neural accelerators" (see below).

At any rate, if I was Edge Impulse, I'd be trying to capitalise as much as possible on our relationship. If they bought a licence from us, couldn't they sell the low-power systems as a product? I suppose that's what I'd do, if I was smart.

And, didn't we do something previously with them with FOMO and object detection?




Edge Impulse Brings Anomaly Detection to Any Edge Device​

June 6, 2024
Edge Impulse, a platform for building, refining and deploying machine learning models to edge devices, has unveiled a novel technology for unlocking visual anomaly detection on any edge device, from NVIDIA GPUs to Arm MCUs, through the first model architecture of its kind: FOMO-AD (Faster Objects, More Objects – Anomaly Detection).
The demand for edge-capable AI software has increased as a path to innovating factories and production lines, with on-device computing allowing faster access to critical data insights, low latency, and more robust security and privacy compliance.
Visual anomaly detection in particular is an important use case for industrial AI, but is not widely used as it requires creating a library of known anomalous samples to train the model to spot deviations in industrial environments. Companies cannot collect real-world samples for every anomaly, especially for unanticipated defects, limiting detection capabilities.
Increased Productivity of Visual Inspection
Edge Impulse’s FOMO-AD architecture, two years in development, offers the first widely accessible platform for visual anomaly detection on any edge device, from GPUs to MCUs. It is also the first scalable system capable of training models on an optimal state to detect and catalog anything outside that baseline as an anomaly in video and image data. This dramatically increases the productivity of visual inspection systems that will no longer have to be manually trained on anomalous samples before they can start generating real-time insights on-device.
“Virtually every industrial customer that wants to deploy computer vision really needs to know when something out of the ordinary happens,” said Jan Jongboom, co-founder and CTO at Edge Impulse. “Traditionally that’s been challenging with machine learning, as classification algorithms need examples of every potential fault state. FOMO-AD uniquely allows customers to build machine learning models by only providing ‘normal’ data.”
Most industrial camera systems capable of computer vision are powered by GPUs and CPUs, with a high install cost that requires wiring and a power-hungry connection to mains electricity. Recent advancements from top-of-the-line silicon manufacturers, and novel edge model architectures from companies like Edge Impulse, enable computer vision AI models to operate in either high- or low-power systems, giving businesses more choice. The benefits of low-power systems include the possibility of building battery-powered visual inspection systems, and lower production costs from using cost-effective hardware that can reduce the overall product form factor.
In recent months, Edge Impulse has been testing FOMO-AD with customers, achieving proven results in industrial environments when proactively detecting irregularities in multiple production scenarios. Use of FOMO-AD has led to marked improvements in machine performance and production line efficiencies for customers.
There are many manufacturing use cases for visual anomaly detection, including:
Industrial: Production line inspection, quality control monitoring, defect detection
Automotive: Part assembly quality control, crack detection, leak detection, EV battery inspection, painting and surface defect detection
Silicon: IC inspection, PCB defect detection, soldering inspection
Medical: Medical device inspection, pill inspection, vial contamination inspection, seal inspection
For more information: www.edgeimpulse.com
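(FOMO-AD's internals aren't public, so purely as a generic illustration of the "train on normal data only" idea the article describes: fit a simple statistical model to embeddings of known-good images and flag anything far from it. The embedding function and threshold are placeholders, and this is not Edge Impulse's method.)

```python
# Generic anomaly-from-normal sketch (not FOMO-AD): model the feature
# distribution of good parts, flag samples far from it.
import numpy as np

def fit_normal_model(normal_features):
    """normal_features: (N, D) embeddings of known-good images."""
    mean = normal_features.mean(axis=0)
    cov = np.cov(normal_features, rowvar=False) + 1e-6 * np.eye(normal_features.shape[1])
    return mean, np.linalg.inv(cov)

def anomaly_score(features, mean, cov_inv):
    """Mahalanobis distance: large values = unlike anything seen in training."""
    diff = features - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

# Usage sketch: score = anomaly_score(embed(image), *fit_normal_model(embeddings)),
# then flag the frame if the score exceeds a threshold chosen on normal data only.
```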



Here's what the link shows.
Screenshot 2024-06-20 at 5.59.54 pm.png
 
Reactions: 36 users

IloveLamp

Top 20
Reactions: 27 users

7für7

Top 20
I wonder if we're involved in this? Is it mere coincidence that our latest patent covers anomaly detection? And when you click on the link at the very end of the article it takes you to this Edge Impulse page mentioning "the latest neural accelerators" (see below).

At any rate, if I was Edge Impulse, I'd be trying to capitalise as much as possible on our relationship. If they bought a licence from us, couldn't they sell the low-power systems as a product? I suppose that's what I'd do, if I was smart.

And, didn't we do something previously with them with FOMO and object detection?




Edge Impulse Brings Anomaly Detection to Any Edge Device​

June 6, 2024
Edge Impulse, a platform for building, refining and deploying machine learning models to edge devices, has unveiled a novel technology for unlocking visual anomaly detection on any edge device, from NVIDIA GPUs to Arm MCUs, through the first model architecture of its kind: FOMO-AD (Faster Objects, More Objects – Anomaly Detection).
The demand for edge-capable AI software has increased as a path to innovating factories and production lines, with on-device computing allowing faster access to critical data insights, low latency, and more robust security and privacy compliance.
Visual anomaly detection in particular is an important use case for industrial AI, but is not widely used as it requires creating a library of known anomalous samples to train the model to spot deviations in industrial environments. Companies cannot collect real-world samples for every anomaly, especially for unanticipated defects, limiting detection capabilities.
Increased Productivity of Visual Inspection
Edge Impulse’s FOMO-AD architecture, two years in development, offers the first widely accessible platform for visual anomaly detection on any edge device, from GPUs to MCUs. It is also the first scalable system capable of training models on an optimal state to detect and catalog anything outside that baseline as an anomaly in video and image data. This dramatically increases the productivity of visual inspection systems that will no longer have to be manually trained on anomalous samples before they can start generating real-time insights on-device.
“Virtually every industrial customer that wants to deploy computer vision really needs to know when something out of the ordinary happens,” said Jan Jongboom, co-founder and CTO at Edge Impulse. “Traditionally that’s been challenging with machine learning, as classification algorithms need examples of every potential fault state. FOMO-AD uniquely allows customers to build machine learning models by only providing ‘normal’ data.”
Most industrial camera systems capable of computer vision are powered by GPUs and CPUs, with a high install cost that requires wiring and a power-hungry connection to mains electricity. Recent advancements from top-of-the-line silicon manufacturers, and novel edge model architectures from companies like Edge Impulse, enable computer vision AI models to operate in either high- or low-power systems, giving businesses more choice. The benefits of low-power systems include the possibility of building battery-powered visual inspection systems, and lower production costs from using cost-effective hardware that can reduce the overall product form factor.
In recent months, Edge Impulse has been testing FOMO-AD with customers, achieving proven results in industrial environments when proactively detecting irregularities in multiple production scenarios. Use of FOMO-AD has led to marked improvements in machine performance and production line efficiencies for customers.
There are many manufacturing use cases for visual anomaly detection, including:
Industrial: Production line inspection, quality control monitoring, defect detection
Automotive: Part assembly quality control, crack detection, leak detection, EV battery inspection, painting and surface defect detection
Silicon: IC inspection, PCB defect detection, soldering inspection
Medical: Medical device inspection, pill inspection, vial contamination inspection, seal inspection
For more information: www.edgeimpulse.com



Here's what the link shows.
View attachment 65141
We are wondering if you will START TO RUN AGAIN SOON!!!!! Ma’am!!

By the way we are wondering as well if we are integrated in so many other things 😂👌
 
Reactions: 3 users

IloveLamp

Top 20

At MediaTek, we are building a future of ubiquitous AI. This includes Generative AI that can be processed "at the edge"

Instead of relying on the cloud and risking unguaranteed internet connectivity, or expensive services that can see and even take control of your data, the generative content is processed right inside your smartphone, tablet, smart TV, or vehicle.

 
Reactions: 28 users