BRN Discussion Ongoing

Do you think they could sell more chips if they gave a free fish with every order?

This is a green selling point. They would donate ‘x’ number of fingerlings to environmental groups repopulating rivers with native fish, based on the size of the chip order. 🧐🤠😞

FF

AKIDA BALLISTA
Not sure that marketing will work in the USA as they call them fries! 😊

SC
 
  • Haha
  • Fire
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
Reactions: 14 users

MDhere

Regular
Ok, firstly, good morning fellow BRNers, it's a glorious morning!

Right, here it is! Drum roll... In this pic I know we have an "ARM" AND a "MERCEDES"!

And I NOW finally have my 1st proud addition -

I don't think I will need my HIN for entrance to the AGM, will I???

:love::ROFLMAO::love:

20220518_110351.jpg
 
  • Like
  • Fire
  • Love
Reactions: 79 users

buena suerte :-)

BOB Bank of Brainchip
Awesome MD ...about time!! ;);) ..... Congrats (y)
 
  • Like
Reactions: 11 users

Reuben

Founding Member
Congrats, Mercedes cars are amazing...
 
  • Like
  • Love
Reactions: 8 users
D

Deleted member 118

Guest
 
  • Haha
  • Like
  • Fire
Reactions: 9 users
D

Deleted member 118

Guest

Well, I got a new dog yesterday 😁, I just don't get to see her until Monday. She was trying to hide from my fat staffy.
D7E03E3E-83C4-4DF7-9CE5-A19AC54FF313.jpeg
 
  • Love
  • Like
Reactions: 26 users

White Horse

Regular
Patent History
Publication number: 20220147797
Type: Application
Filed: Jan 25, 2022
Publication Date: May 12, 2022
Applicant: BrainChip, Inc. (Laguna Hills, CA)
Inventors: Douglas MCLELLAND (Laguna Hills, CA), Kristofor D. CARLSON (Laguna Hills, CA), Harshil K. PATEL (Laguna Hills, CA), Anup A. VANARSE (Laguna Hills, CA), Milind JOSHI (Perth)
Application Number: 17/583,640

Is this a sign of the changing of the guard/s... I don't think so... BUT... check out the 3 Perth-based "Dream Team" getting to put their inventors' hats on... I'm personally really pleased for Anup and Harshil, with whom I've had the pleasure of talking in person.

This was obviously only published 5/6 days ago, so if it has already been posted, excuse me; I can't keep up with all the brilliant articles being posted, I'm a slow reader :ROFLMAO:

Good morning from Australia's Brainchip HQ.....Perth :love::love:

United States Patent Application 20220147797 (Kind Code: A1)
MCLELLAND; Douglas; et al.
May 12, 2022


EVENT-BASED EXTRACTION OF FEATURES IN A CONVOLUTIONAL SPIKING NEURAL NETWORK

Abstract
A system is described that comprises a memory for storing data representative of at least one kernel, a plurality of spiking neuron circuits, and an input module for receiving spikes related to digital data. Each spike is relevant to a spiking neuron circuit and each spike has an associated spatial coordinate corresponding to a location in an input spike array. The system also comprises a transformation module configured to transform a kernel to produce a transformed kernel having an increased resolution relative to the kernel, and/or transform the input spike array to produce a transformed input spike array having an increased resolution relative to the input spike array. The system also comprises a packet collection module configured to collect spikes until a predetermined number of spikes relevant to the input spike array have been collected in a packet in memory, and to organize the collected relevant spikes in the packet based on the spatial coordinates of the spikes, and a convolutional neural processor configured to perform event-based convolution using memory and at least one of the transformed input spike array and the transformed kernel.
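For anyone wanting a feel for what the "packet collection" and "event-based convolution" in that abstract mean in practice, below is a minimal Python sketch. It is purely illustrative; the packet size, kernel, array dimensions and function names are my own assumptions, not BrainChip's design, and it leaves out the kernel/input upsampling the abstract also mentions.

```python
# Illustrative sketch only, not BrainChip's implementation.
# Spikes are collected into fixed-size packets, organised by spatial coordinate,
# then convolved event-by-event: work scales with the number of spikes, not the frame.
import random

PACKET_SIZE = 8                       # assumed number of spikes per packet
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]                  # toy 3x3 integer kernel
HEIGHT, WIDTH = 8, 8                  # toy input spike array dimensions

def collect_packets(spike_stream):
    """Gather spikes until a packet is full, then sort by (y, x) coordinate."""
    packet = []
    for spike in spike_stream:        # spike = (y, x)
        packet.append(spike)
        if len(packet) == PACKET_SIZE:
            packet.sort()             # organise by spatial coordinates
            yield packet
            packet = []

def event_based_convolution(packet):
    """Accumulate kernel contributions only at locations touched by spikes."""
    potentials = [[0] * WIDTH for _ in range(HEIGHT)]
    for sy, sx in packet:
        for ky, row in enumerate(KERNEL):
            for kx, weight in enumerate(row):
                oy, ox = sy + ky - 1, sx + kx - 1     # centre the kernel on the spike
                if 0 <= oy < HEIGHT and 0 <= ox < WIDTH:
                    potentials[oy][ox] += weight      # additions only for binary spikes
    return potentials

stream = ((random.randrange(HEIGHT), random.randrange(WIDTH)) for _ in range(32))
for pkt in collect_packets(stream):
    membrane = event_based_convolution(pkt)
```

The point of the sketch is the inner loop: the cost is proportional to spikes-per-packet times kernel size rather than to the whole frame, which is where the sparsity saving comes from.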




Inventors: MCLELLAND, Douglas (Laguna Hills, CA); CARLSON, Kristofor D. (Laguna Hills, CA); PATEL, Harshil K. (Laguna Hills, CA); VANARSE, Anup A. (Laguna Hills, CA); JOSHI, Milind (Perth, AU)
Applicant: BrainChip, Inc., Laguna Hills, CA, US
Assignee: BrainChip, Inc., Laguna Hills, CA
Family ID: 1000006135263
Appl. No.: 17/583,640
Filed: January 25, 2022



and


United States Patent Application 20220138543 (Kind Code: A1)
VAN DER MADE; Peter AJ; et al.
May 5, 2022


EVENT-BASED CLASSIFICATION OF FEATURES IN A RECONFIGURABLE AND TEMPORALLY CODED CONVOLUTIONAL SPIKING NEURAL NETWORK

Abstract
Embodiments of the present invention provides a system and method of learning and classifying features to identify objects in images using a temporally coded deep spiking neural network, a classifying method by using a reconfigurable spiking neural network device or software comprising configuration logic, a plurality of reconfigurable spiking neurons and a second plurality of synapses. The spiking neural network device or software further comprises a plurality of user-selectable convolution and pooling engines. Each fully connected and convolution engine is capable of learning features, thus producing a plurality of feature map layers corresponding to a plurality of regions respectively, each of the convolution engines being used for obtaining a response of a neuron in the corresponding region. The neurons are modeled as Integrate and Fire neurons with a non-linear time constant, forming individual integrating threshold units with a spike output, eliminating the need for multiplication and addition of floating-point numbers.
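As a rough illustration of the "Integrate and Fire ... eliminating the need for multiplication and addition of floating-point numbers" idea in that abstract, here is a toy Python unit. The threshold value, the simple linear leak (a crude stand-in for the non-linear time constant) and all names are my own assumptions, not the patented design.

```python
# Toy integrate-and-fire unit: integer accumulation and a threshold, no floating point.
class IntegrateAndFireNeuron:
    def __init__(self, threshold=100, leak=2):
        self.threshold = threshold    # fire when the integrated value reaches this
        self.leak = leak              # simplistic decay between time steps
        self.potential = 0

    def step(self, incoming_weights):
        """Integrate pre-routed integer synaptic weights for one time step."""
        for w in incoming_weights:
            self.potential += w                    # addition only
        self.potential = max(0, self.potential - self.leak)
        if self.potential >= self.threshold:
            self.potential = 0                     # reset after firing
            return 1                               # emit a spike
        return 0

neuron = IntegrateAndFireNeuron()
output_spikes = [neuron.step([30, 25]) for _ in range(5)]   # fires once the sum crosses 100
```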



Inventors: VAN DER MADE, Peter AJ (Nedlands, AU); MANKAR, Anil S. (Laguna Hills, CA); CARLSON, Kristofor D. (Laguna Hills, CA); CHENG, Marco (Laguna Hills, CA)
Applicant: BrainChip, Inc., Laguna Hills, CA, US
Assignee: BrainChip, Inc., Laguna Hills, CA
Appl. No.: 17/576,103
Filed: January 14, 2022

Both filed within a week and a half of each other.
The boys have been very busy.
 
  • Like
  • Love
  • Fire
Reactions: 56 users

Diogenese

Top 20
Hi @Diogenese

When I first read about Quadric, I thought about the fact that there is yet to be a single agreed definition of the Edge and where it actually sits in a system.

It struck me then that this is why MegaChips have both solutions: AKIDA at the far Edge, with its mum saying "be careful, you should not be that close", and Quadric at a safer place back from the Edge, with its mum saying "AKIDA, come back and stand with your cousin Quadric".

The following, which I extracted from the group of words you posted, seems to fit this scenario, with AKIDA making all the sensors intelligent and Quadric processing the relevant data that AKIDA produces:

“Autonomous vehicles have been implemented with advanced sensor suites that provide a fusion of sensor data that enable route or path planning for autonomous vehicles. But, modern GPUs are not constructed for handling these additional high computation tasks.

[0006] At best, to enable a GPU or similar processing circuitry to handle additional sensor processing needs including path planning, sensor fusion, and the like, additional and/or disparate circuitry may be assembled to a traditional GPU”


If you generally agree then I can allow the rest of the technological differences to happily go over my head?

My opinion only DYOR
FF

AKIDA BALLISTA


Hi @Rocket577 , @Fact Finder ,

here is an extract from an article posted by @Fullmoonfever discussing potential interworking of Akida and Quadric.

#10,930

What people have found is that the existing processors from ARM or RISC-V do not address the power performance requirements of the AI industry. There are some low-end cases that can be handled with software on these embedded processors. In general, people are looking for either accelerators to pair with those processors, or completely new processors, that would replace the embedded processor and AI into a much higher performance functionality.

In this case, MegaChips’ partner, BrainChip, is an example of an accelerator that would be combined with the existing embedded processors. In the case of its other IP partner, Quadric, they could be either used as an accelerator, or even supersede the need for an embedded processor.

Determining success

Now, there have been attempts from some others, but not with much success. How can MegaChips determine its path?

megachips-logo-200x40-1.png

According to Fairbairn, we see this as an emerging market. Those who tried to enter with volume production capabilities up until now were too early to the party. It’s only now that people are reaching the point where they are in need of volume production opportunities.

There have been many obstacles to the adoption of AI, and adoption has been relatively slow. MegaChips realized that, and partnered with a couple of IP vendors that already had some significant traction, but also needed the muscle of pairing up with a silicon vendor to actually provide a complete solution to the customer.

By combining forces and offering that complete solution, and with the ability to help the customer determine which solution is best to integrate into a single chip or module, we can help overcome those things. We are investing heavily in internal capability to address this very need. We believe that we're hitting the market at an ideal time to be involved with some designs that can go into production in the near future.

Now I'm not quite sure if they are suggesting:

A) a combination of Akida IP and Quadric IP, perhaps replacing the Quadric MACs (114) and ALUs (118) with Akida NPUs, and using Quadric's reconfigurable interconnexion architecture (120, 122) which I would see as a "coals to Newcastle" solution as far as Akida is concerned because Akida's NPU/core interconnect architecture is already highly flexible, and because the additional IC redesign work would be prohibitive, or

B) using Akida as an accelerator for Quadric, which would seem to be redundant, or

C) simply offering them for different tasks as FF suggests. FF's suggestion seems most probable from " the ability to help the customer determine which solution is best to integrate into a single chip or module", as clearly MegaChips sees them as complementary solutions.

Far be it from me to suggest that one of these makes the other redundant, but I'd like to see the evidence showing that Quadric is better than Akida with a CPU or GPU co-processor.
1652837920681.png
 
  • Like
  • Fire
  • Love
Reactions: 31 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
United States Patent Application 20220147797 ...


giphy-downsized-large.gif
 
  • Haha
  • Like
  • Love
Reactions: 32 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi @Rocket577 , @Fact Finder ...



CircularLimitedFrigatebird-max-1mb.gif
 
  • Haha
  • Like
Reactions: 11 users
D

Deleted member 118

Guest
Hi @Rocket577 , @Fact Finder ...

That's where I read it lol. Going to school and remembering anything I was told wasn't my best skill.
 
  • Love
  • Like
Reactions: 2 users
Hi @Rocket577 , @Fact Finder ...
Hi @Diogenese

Thank you once again, though this seems far too underwhelming as a gratuity, but it's the best I have for an online forum.

I try to be careful proposing technical views given my background, but in my reading I did wonder why Quadric would ever be needed, as its use is contrary to what I understood even before Kristofor Carlson's last presentation, where the underlying theme was: keep everything you have, develop your models, and AKIDA will come in at the end (CPU, GPU or whatever) and save the day, so to speak.

I suppose, however, if you were building from scratch and Quadric was price competitive with CPU and GPU alternatives, then why not, provided your present engineering staff do not have to go back to university to learn how it works and how to troubleshoot if there is a problem.

I suppose MegaChips having a financial stake in Quadric could colour their view when advising a customer, which is only human nature, but once AKIDA has some widespread acceptance they will most likely have customers say, "my mate over at X company is only using AKIDA, why do we need Quadric?"

Time will tell. Tom's Hardware sells solutions from everyone, so I suppose it boils down to the sophistication of the customer and the honesty of the salesperson.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Thinking
  • Fire
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!


I was just reading through this article from August 2021 with Argo AI CEO Bryan Salesky and thought I'd share it to see what everyone thinks. Smells a lot like AKIDA to me.


(Extract)

Ford and Volkswagen are obviously gigantic car companies. They’re good at things like alloy wheels and leather seats and all the other things that go into making cars. They make cars. You don’t make cars. What is the Argo product? Where does it begin and end?


The product is really, at its core, a whole lot of software that runs on some pretty specialized hardware, that connects to a car in a safe way. And I would say those car companies do a lot more than just make leather seats and alloy wheels — I don’t know if you’re setting me up here, but the car companies are increasingly becoming software companies in their own right. If you look at the car as a digital device, there’s actually an API, and a really important one that we interface with to be able to control basic things like steering and braking, and being able to do that in a safe and secure way is actually not trivial.


So, heavy respect for what they do, and working in concert with the automaker makes sure that those interfaces are done right and in a secure and safe way.


So they put a specialized computer in their car, it runs your software. Do you have any hardware demands? Or is there a set of sensors that you require? Is there stuff that you make, or is it off-the-shelf? How does that part work?


It’s sort of an amalgamation of things. So, they certainly have computing that their control software operates on. We have something that almost looks like a mini data center in the car that’s able to process data from sensors that are positioned all around it. So the car is able to see through sensors that we make as well as buy — it’s able to see 360 degrees around it, 400 meters away, day, night, and is able to pick up on things that, I would venture to say, most human drivers don’t even necessarily see or notice.


“The advantage with self-driving tech is that our software stack can reason about literally thousands of objects at the same time.”

So, many times a second, the car is reading that information and making decisions about how to navigate through the street. People ask me all the time, “Well, how is it any different than how a human thinks about things?” Well, the difference is a human’s sort of picking the top two or three things that are relevant at the time. And if they make a mistake in that judgment, and they pick the wrong thing to focus on, or if they’re distracted, typically that’s when collisions happen, right?


The advantage with self-driving tech is that our software stack can reason about literally thousands of objects at the same time, and be tracking each individual bike, pedestrian, and car that’s in a busy surface street, and be able to extrapolate not just what are they doing now, but what are they going to be doing several seconds in the future. It doesn’t get tired, it doesn’t get distracted, it’s always learning and improving. And this is where the safety proposition comes from.
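Purely to illustrate the "track thousands of objects and extrapolate a few seconds ahead" idea from the interview, here is a toy constant-velocity sketch in Python. It is not Argo's (or anyone's) actual stack, and every name and number in it is invented for the example.

```python
# Toy multi-object extrapolation with a constant-velocity assumption.
from dataclasses import dataclass

@dataclass
class Track:
    obj_id: int
    x: float   # position in metres
    y: float
    vx: float  # velocity in metres per second
    vy: float

def extrapolate(tracks, horizon_s=3.0, step_s=0.5):
    """Predict future positions for every tracked object over the next few seconds."""
    predictions = {}
    for t in tracks:
        steps, dt = [], step_s
        while dt <= horizon_s + 1e-9:
            steps.append((t.x + t.vx * dt, t.y + t.vy * dt))
            dt += step_s
        predictions[t.obj_id] = steps
    return predictions

tracks = [Track(1, 0.0, 0.0, 1.5, 0.0),      # e.g. a pedestrian
          Track(2, 60.0, -3.5, -15.0, 0.0)]  # e.g. an oncoming car
future = extrapolate(tracks)
```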

 
  • Like
  • Fire
  • Love
Reactions: 25 users

MDhere

Regular
  • Like
  • Haha
  • Sad
Reactions: 9 users
United States Patent Applications 20220147797 and 20220138543 ...

EVENT-BASED CLASSIFICATION OF FEATURES IN A RECONFIGURABLE AND TEMPORALLY CODED CONVOLUTIONAL SPIKING NEURAL NETWORK

Abstract

Embodiments of the present invention provides a system and method of learning and classifying features to identify objects in images using a temporally coded deep spiking neural network, a classifying method by using a reconfigurable spiking neural network device or software comprising configuration logic, a plurality of reconfigurable spiking neurons and a second plurality of synapses. The spiking neural network device or software further comprises a plurality of user-selectable convolution and pooling engines. Each fully connected and convolution engine is capable of learning features, thus producing a plurality of feature map layers corresponding to a plurality of regions respectively, each of the convolution engines being used for obtaining a response of a neuron in the corresponding region. The neurons are modeled as Integrate and Fire neurons with a non-linear time constant, forming individual integrating threshold units with a spike output, eliminating the need for multiplication and addition of floating-point numbers.


Seems obvious now. I don't know why it took them so long to come up with this. Anyone here have a clue why they took so long?🤣😆:ROFLMAO::giggle:

This probably explains how they can be processing 250 fps with Nviso, if I understand this correctly @Rocket577. This is my type of computing, no maths required. 😎

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Haha
Reactions: 15 users

Diogenese

Top 20
Patent History
Publication number: 20220147797 ...
Hi Tech,

The McLelland patent is a continuation of an earlier application:

[0001] This application is a continuation-in-part of PCT Application No. PCT/US2020/043456, titled “Event-based Classification of Features in a Reconfigurable and Temporally Coded Convolutional Spiking Neural Network,” which was filed on Jul. 24, 2020, which claims the benefit of U.S. Provisional Application No. 62/878,426, filed on Jul. 25, 2019, titled “Event-based Classification of Features in a Reconfigurable and Temporally Coded Convolutional Spiking Neural Network,

[0002] This application is related to U.S. patent application Ser. No. 16/670,368, filed Oct. 31, 2019, and U.S. Provisional Application No. 62/754,348, filed Nov. 1, 2018

It is designed to improve the accuracy of SNNs:

[0006] Spiking neural networks have the advantage that the neural circuits consume power only when they are switching, this is, when they are producing a spike. In sparse networks, the number of spikes is designed to be minimal. The power consumption of such circuits is very low, typically thousands of times lower than the power consumed by a graphics processing unit used to perform a similar neural network function. However, up to now temporal spiking neural networks have not been able to meet the accuracy demands of image classification. Spiking neural networks comprise a network of threshold units, and spike inputs connected to weights that are additively integrated to create a value that is compared to one or more thresholds. No multiplication functions are used. Previous attempts to use spiking neural networks in classification tasks have failed because of erroneous assumptions and subsequent inefficient spike rate approximation of conventional convolutional neural networks and architectures. In spike rate coding methods, the values that are transmitted between neurons in a conventional convolutional neural network are instead approximated as spike trains, whereby the number of spikes represent a floating-point or integer value which means that no accuracy gains or sparsity benefits may be expected. Such rate-coded systems are also significantly slower than temporal-coded systems, since it takes time to process sufficient spikes to transmit a number in a rate-coded system. The present invention avoids those mistakes and returns excellent results on complex data sets and frame-based images.
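To make the rate-coding versus temporal-coding point in paragraph [0006] concrete, here is a toy comparison in Python. Both encodings are my own simplified illustrations (the second is a plain latency code, not the patent's rank-coding scheme).

```python
# Toy contrast: rate coding needs many spikes and a long read-out window,
# while a temporal (latency) code carries the same value in a single, early-decodable spike.
def rate_encode(value, window=16):
    """Represent an integer as the number of spikes in a fixed window."""
    return [1 if t < value else 0 for t in range(window)]

def latency_encode(value, window=16):
    """Represent the same integer as one spike whose timing carries the value."""
    train = [0] * window
    train[window - 1 - value] = 1     # larger value -> earlier spike
    return train

print(rate_encode(7))      # seven spikes; receiver must count over most of the window
print(latency_encode(7))   # one spike; decodable as soon as it arrives
```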

The parent application is related to:
WO2021016544A1 EVENT-BASED CLASSIFICATION OF FEATURES IN A RECONFIGURABLE AND TEMPORALLY CODED CONVOLUTIONAL SPIKING NEURAL NETWORK

which disclosed the rank coding technique.




1652841887275.png



The continuation patent looks very much like the thimble-and-pea trick, but it relates to updating training data on-chip (training data augmentation).

1652843767683.png


[0086] FIG. 20 illustrates a hardware architecture and flow diagram according to an embodiment of the present invention, the architecture implementing event-based transposed convolution, event-based dilated convolution, and data augmentation in hardware.


1652843177602.png

[0095] FIGS. 27(i) and 27(ii) illustrate conventional and transformed arrangements for connecting an input to input neurons in accordance with an embodiment of the invention.

[0096] FIGS. 27(iii) and 27(iv) illustrate transformation of neurons of an input layer and processing of an input by the transformed neurons in accordance with another embodiment of the invention.

[0283] The present method and system also includes data augmentation capability arranged to augment the network training phase by automatically training the network to recognize patterns in images that are similar to existing training images. In this way, feature extraction during feature prediction by the network is enhanced and a more robust network achieved.

[0284] Training data augmentation is a known pre-processing step that is performed to generate new and varying examples of original input data samples. When used in conjunction with convolutional neural networks, data augmentation techniques can significantly improve the performance of the neural network model by exposing robust and unique features.

[0285] Existing training data augmentation techniques, largely implemented in separate dedicated software, apply transformation functions to existing training samples as a pre-processing step in order to create similar training samples that can then be used to augment the set of training samples used to train a network. Typical transformations include mirror image transformations, for example that are obtained based on a horizontal or vertical axis passing through the center of the existing sample.

[0286] However, existing training data augmentation techniques are carried out separately of the neural network, which is cumbersome, expensive and time consuming.

[0287] According to an embodiment of the present invention, an arrangement is provided whereby the set of training samples is effectively augmented on-the-fly by the network itself by carrying out defined processes on existing samples as they are input to the network during the training phase. Accordingly, with the present system and method, training data augmentation is performed on a neuromorphic chip, which substantially reduces user involvement, and avoids the need for separate preprocessing before commencement of the training phase.
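A minimal sketch of the kind of on-the-fly augmentation paragraph [0287] describes, applied here to a list of event coordinates rather than dense frames. The mirroring transform and the way the variants are interleaved are my own simplifications, not the on-chip mechanism itself.

```python
# Toy on-the-fly training augmentation: each sample is followed by a mirrored variant,
# generated as the sample is fed to the network rather than in a separate pre-processing pass.
def mirror_events(events, width):
    """Horizontally mirror (y, x) spike coordinates about the array centre."""
    return [(y, width - 1 - x) for (y, x) in events]

def augmented_training_stream(samples, width):
    for events, label in samples:
        yield events, label
        yield mirror_events(events, width), label

samples = [([(0, 1), (2, 3)], "cat")]
for events, label in augmented_training_stream(samples, width=8):
    print(label, events)
```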
 
  • Like
  • Love
  • Fire
Reactions: 27 users

Diogenese

Top 20
Hi @Diogenese ...
Quadric may be ok in situations where mains power is available, and if time is not critical.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Slade

Top 20
Says it all really.

5E2DD7C0-B254-418F-8AEF-4A45B8514167.png
 
  • Haha
  • Like
  • Love
Reactions: 23 users
Wondering if it's worth keeping an eye on these guys... Aerospace Corporation... if the eye's not on them already haha



Reason being, I see they were awarded a small contract late last year with the DoD Air Force, but the sub-awardee was Vorago with their rad-hard Arm M4 MCU.

No mention of neuromorphic or Akida as yet, but we know we are working with Vorago, and according to the Aerospace website they want onboard autonomy and AI within the next 5 yrs.

Screenshot_2022-05-18-11-05-06-14_4641ebc0df1485bf6b47ebd018b5ee76.jpg

Screenshot_2022-05-18-11-05-22-34_4641ebc0df1485bf6b47ebd018b5ee76.jpg

Screenshot_2022-05-18-11-06-16-75_4641ebc0df1485bf6b47ebd018b5ee76.jpg

Screenshot_2022-05-18-11-06-45-97_4641ebc0df1485bf6b47ebd018b5ee76.jpg
 
  • Like
  • Fire
  • Love
Reactions: 27 users