www.army-technology.com
Concept: California-based tech company BrainChip and French tech startup Prophesee have partnered to launch next-gen platforms for original equipment manufacturers (OEMs) looking to integrate event-based vision systems with high levels of AI performance. The partnership combines Prophesee’s computer vision technology with BrainChip’s neuromorphic processor Akida to deliver a complete high-performance, ultra-low-power solution.
Nature of Disruption: Prophesee’s computer vision technology leverages a patented sensor design and AI algorithms. It mimics the eye and brain to reveal what standard frame-based technology could not capture until now. The technology has applications in autonomous vehicles, industrial automation, IoT, security and surveillance, and AR/VR. BrainChip’s Akida mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with improved efficiency and precision. It also keeps AI/ML local to the chip, independent of the cloud, which reduces latency. The combination of the two technologies can advance AI enablement and offer manufacturers a ready-to-implement solution, helping OEMs that want to leverage edge-based visual technologies as part of their product offerings.
Outlook: The application of computer vision is increasing in various industries including automotive, healthcare, retail, robotics, agriculture, and manufacturing, giving AI-enabled devices an edge in performing efficiently. BrainChip and Prophesee claim that the combination of their technologies can provide OEMs with a computer vision solution that can be implemented directly in a manufacturer’s end product, enabling data processing with better efficiency, precision, and energy economy at the point of acquisition.
Just on that FF.
Been reading one of our latest Patent apps from earlier this year.
Appears to be an evolution of earlier ones, but some parts lead me to believe it is heavily slanted toward event cameras and how our Akida processes their information.
Interesting read, and whilst most of it is above my pay grade, there are elements that are understandable.
I snipped a few sections.
First link is to the overarching BRN USPTO listing, including TMs (live & dead). Second link is the patent app itself.
uspto.report
U.S. Patent Application 20220147797 for Event-based Extraction Of Features In A Convolutional Spiking Neural Network
U.S. patent application number 17/583640 was filed with the patent office on 2022-05-12 for event-based extraction of features in a convolutional spiking neural network. This patent application is currently assigned to BrainChip, Inc. The applicant listed for this patent is BrainChip, Inc. Invention is credited to Kristofor D. CARLSON, Milind JOSHI, Douglas MCLELLAND, Harshil K. PATEL, Anup A. VANARSE.
Application Number | 20220147797 / 17/583640
Document ID | /
Family ID | 1000006135263
Filed Date | 2022-05-12
United States Patent Application | 20220147797
Kind Code | A1
Inventors | MCLELLAND; Douglas; et al.
Publication Date | May 12, 2022
[0100] DVS Camera: DVS stands for dynamic vision sensor. DVS cameras generate events, which event-based processors like embodiments of the present approach can process directly. Most cameras produce frame-based images. The main advantages of DVS cameras are low latency and the potential for extremely low-power operation.
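To make the "events, not frames" distinction concrete, here is a minimal toy model in NumPy of how an event stream relates to frame data. A real DVS pixel responds asynchronously to log-intensity changes in analog circuitry; the threshold, the (t, y, x, polarity) tuple layout, and the frame-differencing here are simplifying assumptions for illustration only.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    # Toy DVS model: emit an event (t, y, x, polarity) only where a pixel's
    # brightness changes by more than `threshold`; static pixels emit nothing.
    events = []
    ref = frames[0].astype(float)              # per-pixel reference brightness
    for t, frame in enumerate(frames[1:], start=1):
        diff = frame.astype(float) - ref
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(y), int(x), 1 if diff[y, x] > 0 else -1))
            ref[y, x] = frame[y, x]            # reference resets where an event fired
    return events

# A mostly static scene with one moving bright pixel yields only a handful
# of events, instead of five full 8x8 frames.
frames = np.zeros((5, 8, 8))
for t in range(5):
    frames[t, 3, t] = 1.0
print(frames_to_events(frames))
```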
[0116] A deep neural network (DNN) is defined as an artificial neural network that has multiple hidden layers between its input and output layers, each layer comprising a perceptron. A Convolutional Neural Network (CNN) is a class of DNN that performs convolutions and is primarily used for vision processing. CNNs share synaptic weights between neurons. Shared weight values in a perceptron are referred to as filters (aka kernels). Each layer in a conventional CNN is a perceptron. In the present embodiment, event-based convolution is implemented in a Spiking Neural Network (SNN) using event-based rank-coding rather than rate-coding, which has advantages in speed and considerably lower power consumption. Rank-coding differs from rate-coding of spike events in that values are encoded in the order of spikes transmitted, whereas in rate-coding the repetition rate of spikes transmitted expresses a real number. CNNs process color images, which are defined as imageWidth × imageHeight × channelNumber; a color image generally has 3 channels (Red, Green, and Blue). CNNs often have many layers, so the output of one convolutional layer is the input to the next. Descriptions are provided of how convolutions take place in a conventional perceptron-based CNN before discussing the event-based convolution methods implemented in the present invention, showing that convolution in a CNN and event-based convolution in the present invention return the same results.
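The claim that frame-based and event-based convolution return the same results can be checked with a small NumPy sketch. This is not BrainChip's rank-coded SNN implementation; it only demonstrates the underlying arithmetic identity: a dense "valid" convolution over a sparse image equals the sum of per-event kernel scatters, so zero pixels can be skipped entirely.

```python
import numpy as np

def dense_conv2d(image, kernel):
    # Conventional frame-based "valid" convolution (cross-correlation, as in CNNs).
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def event_conv2d(events, kernel, out_shape):
    # Event-based convolution: each input event (y, x, value) scatters its
    # weighted kernel into the output map; zero pixels cost no work at all.
    kH, kW = kernel.shape
    out = np.zeros(out_shape)
    for (y, x, v) in events:
        for di in range(kH):
            for dj in range(kW):
                oi, oj = y - di, x - dj
                if 0 <= oi < out_shape[0] and 0 <= oj < out_shape[1]:
                    out[oi, oj] += v * kernel[di, dj]
    return out

rng = np.random.default_rng(0)
image = rng.random((8, 8)) * (rng.random((8, 8)) > 0.8)   # sparse, event-like input
kernel = rng.standard_normal((3, 3))

events = [(y, x, image[y, x]) for y, x in zip(*np.nonzero(image))]
dense = dense_conv2d(image, kernel)
eventful = event_conv2d(events, kernel, dense.shape)
assert np.allclose(dense, eventful)   # identical outputs from only the nonzero events
```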
[0256] One advantage of implementing event-based transposed convolution is to achieve reusability of the spiking neuron circuits, which will reduce the size and cost of the neuromorphic hardware (e.g. a chip). In other words, the neuromorphic hardware is optimized to implement a diverse variety of use cases by effectively reusing the spiking neuron circuits available on the hardware.
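As a rough illustration of why the circuits are reusable: event-based transposed convolution is the same scatter-style loop as event-based convolution above, just writing into a larger (upsampled) output map. The sketch below is hedged; the stride and output-size convention is the usual one for transposed convolution, not taken from the patent text.

```python
import numpy as np

def event_transposed_conv2d(events, kernel, in_shape, stride=2):
    # Each event (y, x, value) scatters a weighted copy of the kernel into an
    # upsampled output map. It is the same scatter loop as event-based
    # convolution, which is why one set of spiking neuron circuits can
    # service both operations.
    kH, kW = kernel.shape
    out_h = (in_shape[0] - 1) * stride + kH
    out_w = (in_shape[1] - 1) * stride + kW
    out = np.zeros((out_h, out_w))
    for (y, x, v) in events:
        out[y * stride : y * stride + kH,
            x * stride : x * stride + kW] += v * kernel
    return out

events = [(0, 0, 1.0), (1, 2, -0.5)]          # sparse input events
kernel = np.ones((3, 3))
print(event_transposed_conv2d(events, kernel, in_shape=(2, 3)).shape)  # (5, 7)
```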
[0283] The present method and system also include a data augmentation capability arranged to augment the network training phase by automatically training the network to recognize patterns in images that are similar to existing training images. In this way, feature extraction during feature prediction by the network is enhanced and a more robust network is achieved.
[0284] Training data augmentation is a known pre-processing step that is performed to generate new and varying examples of original input data samples. When used in conjunction with convolutional neural networks, data augmentation techniques can significantly improve the performance of the neural network model by exposing robust and unique features.
[0286] However, existing training data augmentation techniques are carried out separately from the neural network, which is cumbersome, expensive and time consuming.
[0287] According to an embodiment of the present invention, an arrangement is provided whereby the set of training samples is effectively augmented on-the-fly by the network itself by carrying out defined processes on existing samples as they are input to the network during the training phase. Accordingly, with the present system and method, training data augmentation is performed on a neuromorphic chip, which substantially reduces user involvement, and avoids the need for separate preprocessing before commencement of the training phase.
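A minimal sketch of the on-the-fly idea, in NumPy rather than on a neuromorphic chip: each sample is randomly transformed at the moment it is fed to the network, so no augmented dataset is ever stored. The specific transforms (flip, shift) and the training-loop scaffolding are assumptions for illustration, not the defined processes claimed in the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_on_the_fly(sample):
    # Randomly transform a sample as it enters the network, so new training
    # views are generated without a separate preprocessing pass. Flip and
    # shift are illustrative choices only.
    if rng.random() < 0.5:
        sample = np.fliplr(sample)              # random horizontal flip
    shift = int(rng.integers(-2, 3))
    sample = np.roll(sample, shift, axis=1)     # small random horizontal shift
    return sample

training_set = [rng.random((8, 8)) for _ in range(4)]   # stand-in training samples
for epoch in range(3):
    for sample in training_set:
        x = augment_on_the_fly(sample)
        # ... forward pass and weight update on x would go here ...
```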
[0315] It will be appreciated that the disclosed data augmentation technique produces a more robust machine learning model by effectively creating different training inputs from existing training data. Performing transformations on the data artificially creates new samples and scenarios for the model to train on, so a limited set of samples can cover a larger input domain. This technique can also be used to create new samples while implementing one-/low-shot learning in the spiking domain.