BRN Discussion Ongoing

Wondering if it's worth keeping an eye on these guys... Aerospace Corporation, if eyes aren't on them already haha



Reason being, I see they were awarded a small contract late last year with the DoD Air Force, but the sub-awardee was Vorago with their rad-hard Arm Cortex-M4 MCU.

No mention of neuromorphic or Akida as yet, but we know we're working with Vorago, and according to Aerospace's website they want onboard autonomy and AI within the next 5 years.


Boab

I wish I could paint like Vincent
D

Deleted member 118

Guest
EVENT-BASED CLASSIFICATION OF FEATURES IN A RECONFIGURABLE AND TEMPORALLY CODED CONVOLUTIONAL SPIKING NEURAL NETWORK

Abstract

Embodiments of the present invention provides a system and method of learning and classifying features to identify objects in images using a temporally coded deep spiking neural network, a classifying method by using a reconfigurable spiking neural network device or software comprising configuration logic, a plurality of reconfigurable spiking neurons and a second plurality of synapses. The spiking neural network device or software further comprises a plurality of user-selectable convolution and pooling engines. Each fully connected and convolution engine is capable of learning features, thus producing a plurality of feature map layers corresponding to a plurality of regions respectively, each of the convolution engines being used for obtaining a response of a neuron in the corresponding region. The neurons are modeled as Integrate and Fire neurons with a non-linear time constant, forming individual integrating threshold units with a spike output, eliminating the need for multiplication and addition of floating-point numbers.


Seems obvious now. I don't know why it took them so long to come up with this. Anyone here have a clue why they took so long?🤣😆:ROFLMAO::giggle:

This probably explains how they can be processing 250 fps with NVISO, if I understand this correctly @Rocket577. This is my type of computing: no maths required. 😎
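For anyone curious what "eliminating the need for multiplication and addition of floating-point numbers" buys you, here is a toy sketch of an integrate-and-fire threshold unit of the kind the abstract describes. All names and numbers are made up for illustration; this is not BrainChip's implementation.

```python
# Toy integrate-and-fire neuron with integer weights: each incoming spike
# simply adds its synapse weight to the membrane potential, and the neuron
# fires when the potential crosses a threshold. Additive integration only,
# no floating-point multiply-accumulate.

def if_neuron(spike_events, weights, threshold):
    """spike_events: list of input indices that spiked, in arrival order."""
    potential = 0
    fired_at = None
    for t, idx in enumerate(spike_events):
        potential += weights[idx]      # additive integration only
        if potential >= threshold:
            fired_at = t               # emit one output spike
            break
    return fired_at

# Three inputs; inputs 0 and 2 are strongly connected.
fired = if_neuron([0, 2, 1], weights=[4, 1, 4], threshold=8)
print(fired)  # fires on the second input spike (t=1): 4 + 4 >= 8
```

The whole inner loop is integer adds and compares, which is the "no maths" point above.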

My opinion only DYOR
FF

AKIDA BALLISTA

TECH

Regular
Hi Tech,

The McLelland patent is a continuation of an earlier application:

[0001] This application is a continuation-in-part of PCT Application No. PCT/US2020/043456, titled “Event-based Classification of Features in a Reconfigurable and Temporally Coded Convolutional Spiking Neural Network,” which was filed on Jul. 24, 2020, which claims the benefit of U.S. Provisional Application No. 62/878,426, filed on Jul. 25, 2019, titled “Event-based Classification of Features in a Reconfigurable and Temporally Coded Convolutional Spiking Neural Network,

[0002] This application is related to U.S. patent application Ser. No. 16/670,368, filed Oct. 31, 2019, and U.S. Provisional Application No. 62/754,348, filed Nov. 1, 2018

It is designed to improve the accuracy of SNNs:

[0006] Spiking neural networks have the advantage that the neural circuits consume power only when they are switching, this is, when they are producing a spike. In sparse networks, the number of spikes is designed to be minimal. The power consumption of such circuits is very low, typically thousands of times lower than the power consumed by a graphics processing unit used to perform a similar neural network function. However, up to now temporal spiking neural networks have not been able to meet the accuracy demands of image classification. Spiking neural networks comprise a network of threshold units, and spike inputs connected to weights that are additively integrated to create a value that is compared to one or more thresholds. No multiplication functions are used. Previous attempts to use spiking neural networks in classification tasks have failed because of erroneous assumptions and subsequent inefficient spike rate approximation of conventional convolutional neural networks and architectures. In spike rate coding methods, the values that are transmitted between neurons in a conventional convolutional neural network are instead approximated as spike trains, whereby the number of spikes represent a floating-point or integer value which means that no accuracy gains or sparsity benefits may be expected. Such rate-coded systems are also significantly slower than temporal-coded systems, since it takes time to process sufficient spikes to transmit a number in a rate-coded system. The present invention avoids those mistakes and returns excellent results on complex data sets and frame-based images.
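The rate-versus-temporal distinction in [0006] can be seen in a toy encoding sketch. Purely illustrative: Akida's actual rank coding is more sophisticated than the single-spike scheme assumed here, and the value range (1 to window) is an assumption of this sketch.

```python
# Toy contrast of the two coding schemes mentioned in [0006].

def rate_encode(value, window):
    """Value 1..window encoded as a spike count: needs `window` timesteps."""
    return [1 if t < value else 0 for t in range(window)]

def temporal_encode(value, window):
    """Value 1..window encoded as the time of a single spike:
    larger value = earlier spike."""
    spikes = [0] * window
    spikes[window - value] = 1  # one spike; its position carries the value
    return spikes

print(rate_encode(5, 8))      # [1, 1, 1, 1, 1, 0, 0, 0]  -- 5 spikes
print(temporal_encode(5, 8))  # [0, 0, 0, 1, 0, 0, 0, 0]  -- 1 spike
```

Five spikes versus one for the same value: that is the sparsity and speed benefit the paragraph claims for temporal coding over rate coding.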

The parent application is related to:
WO2021016544A1 EVENT-BASED CLASSIFICATION OF FEATURES IN A RECONFIGURABLE AND TEMPORALLY CODED CONVOLUTIONAL SPIKING NEURAL NETWORK

which disclosed the rank coding technique.






The continuation patent looks very much like the thimble-and-pea trick, but it relates to updating training data on-chip (training data augmentation).


[0086] FIG. 20 illustrates a hardware architecture and flow diagram according to an embodiment of the present invention, the architecture implementing event-based transposed convolution, event-based dilated convolution, and data augmentation in hardware.


[0095] FIGS. 27(i) and 27(ii) illustrate conventional and transformed arrangements for connecting an input to input neurons in accordance with an embodiment of the invention.

[0096] FIGS. 27(iii) and 27(iv) illustrate transformation of neurons of an input layer and processing of an input by the transformed neurons in accordance with another embodiment of the invention.

[0283] The present method and system also includes data augmentation capability arranged to augment the network training phase by automatically training the network to recognize patterns in images that are similar to existing training images. In this way, feature extraction during feature prediction by the network is enhanced and a more robust network achieved.

[0284] Training data augmentation is a known pre-processing step that is performed to generate new and varying examples of original input data samples. When used in conjunction with convolutional neural networks, data augmentation techniques can significantly improve the performance of the neural network model by exposing robust and unique features.

[0285] Existing training data augmentation techniques, largely implemented in separate dedicated software, apply transformation functions to existing training samples as a pre-processing step in order to create similar training samples that can then be used to augment the set of training samples used to train a network. Typical transformations include mirror image transformations, for example that are obtained based on a horizontal or vertical axis passing through the center of the existing sample.

[0286] However, existing training data augmentation techniques are carried out separately of the neural network, which is cumbersome, expensive and time consuming.

[0287] According to an embodiment of the present invention, an arrangement is provided whereby the set of training samples is effectively augmented on-the-fly by the network itself by carrying out defined processes on existing samples as they are input to the network during the training phase. Accordingly, with the present system and method, training data augmentation is performed on a neuromorphic chip, which substantially reduces user involvement, and avoids the need for separate preprocessing before commencement of the training phase.
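The mirror-image transformation mentioned in [0285] amounts to something like the following host-side sketch. The patent's point is that Akida can do this on-chip during the training phase; this just shows the transformation itself.

```python
# Mirror-image augmentation as described in [0285]: flip a sample about a
# vertical axis through its centre to create a new, similar training example.

def mirror_horizontal(image):
    """Flip each row of a 2-D image (list of lists) left-to-right."""
    return [row[::-1] for row in image]

sample = [[1, 2, 3],
          [4, 5, 6]]
print(mirror_horizontal(sample))  # [[3, 2, 1], [6, 5, 4]]
```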
Hi Diogenese,

I noticed another patent that was lodged on 9 May 2022 via Espacenet, but didn't dig very deep. Last night I started reading through the description and claims of the one above, but after about 15 minutes I started to feel a bit dizzy and hot and decided it was all too much for my brain to handle, so I watched TV instead: Albo on the campaign trail, but that made things even worse! 🤣🤣:geek::geek::geek:
 

Diogenese

Top 20
Hi Tech,

As I said before, the BrainChip patents are so loaded with many great inventions that it's impossible for me to take it all in in one sitting, so I usually make a targeted attack on a narrow topic.

It was a bit more difficult in this case because the drawings aren't available on Espacenet yet, so I had to go to the USPTO. There are about 8 new drawings added to the original PCT specification, so the continuation specification really relates to Figures 20 onwards, which, as I said, aren't on Espacenet yet.
 
Glad you understand, so

There are all levels of understanding but if you are familiar with the thesis proposed in The Castle the first level of understanding can actually be called the ‘vibe’.

My level of understanding is one layer above the vibe and is not a technical explanation and possibly wrong and if so I will revert to the vibe if questioned in depth. 🤓

AKIDA is trying to work like the brain.

When I look at a familiar scene it is just that familiar, because I have seen it before, and stored some or all of the features.

Now imagine that this familiar scene is overlaid with a grid so that the large area is now made up of a much larger number of smaller familiar scenes, all forming the whole scene.

If I turn away from the scene and someone moves something without telling me, when I turn back and look again I will notice that the chair, for example, has been moved across the scene to another position, which would be another position on the grid. I do not need to reprocess every part of the whole scene to recognise the chair has moved.

So my simple understanding is that this patent allows AKIDA to store a scene, impose a grid of points over that stored scene, and process only the changes that occur at those stored points; as each change occurs, the stored image is updated by changing only the points where the change happened.

My opinion only DYOR
FF

AKIDA BALLISTA
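FF's grid intuition above can be sketched naively in a few lines. To be clear, this is a toy illustration of the idea of reprocessing only changed grid cells, not BrainChip's actual mechanism.

```python
# Store a scene as a grid of cells; when a new frame arrives, find and
# update only the cells that changed, leaving the rest untouched.

def changed_cells(stored, new):
    """Return coordinates of grid cells that differ between two frames."""
    return [(r, c)
            for r, row in enumerate(stored)
            for c, cell in enumerate(row)
            if new[r][c] != cell]

def update_in_place(stored, new):
    """Overwrite only the changed cells, as in FF's description."""
    changes = changed_cells(stored, new)
    for r, c in changes:
        stored[r][c] = new[r][c]
    return changes

scene = [["wall", "chair", "wall"],
         ["rug",  "rug",   "rug"]]
moved = [["wall", "wall",  "chair"],
         ["rug",  "rug",   "rug"]]
print(update_in_place(scene, moved))  # only two cells touched: [(0, 1), (0, 2)]
```

The chair moving one cell to the right touches two grid cells; the other four are never reprocessed.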
 

Proga

Regular
Bloomberg
Hi, it’s Ian in San Francisco. To understand the tech selloff, it’s important to first understand the chip selloff. But first...
Today’s must-reads:
• Terra’s $45 billion face plant is having ripple effects
• Elon Musk said that a deal to buy Twitter at a lower price isn’t “out of the question”
• Donald Trump’s SPAC deal leaves the door open for a Twitter return

Chipmakers’ new normal​

With a few brief exceptions, the Nasdaq Composite Index has been on a tear for 13 years. In that time, not one but five tech companies breached $1 trillion in market capitalization–a milestone that for years seemed unattainable.
I say this because it’s helpful to take the long view of recent weeks’ brutal tech selloff. Even after the carnage in crypto and streaming and social media and stationary bikes, all but four of the top 20 biggest players in the Nasdaq are still tech companies. Those four outliers are PepsiCo Inc., Costco Wholesale Corp., AstraZeneca Plc and Tesla Inc. (an exception that proves the rule). The dominance of technology in equity markets has become a given over the last decade—but of course, nothing lasts forever.
For anyone weighing whether the tech rout is a sign of a bubble bursting or a correction on the pathway to greater riches, chip companies can offer a kind of microcosm for Wall Street’s feelings on the industry. This year, Nvidia Corp. is down 41%. Advanced Micro Devices Inc. has dropped 35%. And the Philadelphia Stock Exchange Semiconductor Index is down more than 25%.
Analysts’ bull case for chips—which are at the heart of everything with an on and off switch—is that technology is becoming ever-more pervasive in the global economy.
Trying to explain why we’re at the beginning of the rush and not seeing the tail end of an artificial spike caused by the pandemic, Micron Technology Chief Executive Officer Sanjay Mehrotra told investors last week that there were 82 zettabytes of data created last year. That’s 1 gigabyte per hour, per person on earth. By 2025, that number will double, he forecast.
Obviously Mehrotra is hoping the explosion of data will translate into a lot of demand for memory chips. If he’s right, there will also be a lot of other gear—processors, computers, networking—and a giant amount of software and artificial intelligence needed to move it around, store and make sense of it.
Meanwhile, chipmakers still can’t keep up with demand. GlobalFoundries Inc. said it’s sold out for this year and 2023. That means that even though it’s expanding production, GlobalFoundries’ customers have already ordered as much as it can manufacture for the next 18 months.
Of course, there’s a bear case too. Consumer spending is about two-thirds of the economy. If inflation in the price of energy and staples caused by wars, geopolitics and supply chain disruptions take money out of people’s wallets and force them to delay spending on the latest gadget, then there could be major slowdowns ahead.
Just look at China, the world’s biggest market for smartphones, where first-quarter handset shipments fell 29% last quarter from the same period a year earlier, according to BMO Capital Markets.
We’ll get more data on how chipmakers are faring in the shifting economy soon. In two weeks, Nvidia will report earnings. The company’s stock has been savaged so far this year, but it’s also telling that this quarter analysts are mainly worried about one thing: If Nvidia will be able to get enough supply from its subcontractors to keep up with demand. —Ian King
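As an aside, Mehrotra's "1 gigabyte per hour, per person on earth" figure roughly checks out against the 82 ZB claim, assuming a 2021 world population of about 7.9 billion (a round figure assumed here):

```python
# Sanity check: 82 zettabytes per year expressed as GB per person per hour.

ZB = 10**21            # zettabyte in bytes
GB = 10**9             # gigabyte in bytes
people = 7.9e9         # assumed 2021 world population
hours_per_year = 365 * 24

gb_per_person_hour = 82 * ZB / (people * hours_per_year * GB)
print(round(gb_per_person_hour, 2))  # ~1.18
```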

If MegaChips, Renesas etc. do come up with applications for their customers today, new chips using Akida IP can't be produced until 2024.
 

Diogenese

Top 20
... or if someone swapped @Fact Finder 's Chesterfield and replaced it with @Slade's bar stool, Akida would add bar stools to the "chairs" category.
 

Slade

Top 20
Love your explanation FF.

D

Deleted member 118

Guest
A really good speech by STMicroelectronics about edge AI; just a shame it's DNN.

 

Jimmy17

Regular
If MegaChips, Renesas etc. do come up with applications for their customers today, new chips using Akida IP can't be produced until 2024.
I would assume that some of that 100% production capacity already allocated would include chips with Akida/BRN IP in them, and we won't have to wait until 2024 to see products in the marketplace, since they have likely been tinkering with it for two or even three years.
 

Hi Proga
There is a major error in your post. Renesas produces its own semiconductors, so it determines what gets produced and when; if it has an order for 30 million MCUs, it can schedule the order as it sees fit.

Manufacturing sites

As of 2021, the in-house wafer fabrication of the semiconductor device is conducted by Renesas Semiconductor Manufacturing, a wholly owned subsidiary operating six front-end plants in the following areas: Naka, Takasaki, Shiga, Saijo, Yamaguchi, Kawashiri

Renesas had a fire in one of its foundries last year but has overcome that minor setback.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Slade

Top 20
Not sure if this video has been posted already. IRIDA LABS working with Renesas on Vision AI sensors for smart cities. Date: May 11, 2022.




Bravo

If ARM was an arm, BRN would be its biceps💪!
I have no idea if this is helpful or not, but this Ford patent entitled VEHICLE VISION SYSTEM might be interesting to look at because it mentions neural networks and deep learning. I also think there may be something interesting in terms of the timing of the patent, since it was filed on 5 August 2020. From recollection it was 25 May 2020 that BrainChip entered into a Proof of Concept Agreement with The Ford Motor Company.

It could be something or it could be a big ole nothing-burger. I'm sure dearest Dodgey-knees will let me know either way.🤭

 


Jimmy17

Regular
Thanks FF for everything, you da' man. I'm just as addicted to refreshing the TSE page as I am my CommSec account these days; all the wonderful information is simply salivating!
 

Proga

Regular
I can't see them bumping (pissing off) customers to personally jump the queue. Fast way to go broke. The supply shortage won't last forever. Intel would love to see them do it.
 

Diogenese

Top 20
As a practicing fat-burger, one can only sympathize with a nothing-burger.

Ford refer to a NN, but they do not describe one. Thus the patent allows for the incorporation of a NN capable of convoluted classification.

[0088] The second module 120 utilizes the neural network 130 to classify the high-resolution image data of vision data signal, to determine the one or more features of the initially validated entity E 1 108 b and to compare the one or more features with the pre-stored dataset. In an alternate aspect, the system 400 utilizes the neural network 130 to classify the image data of the entity E 1 108 b , to determine the one or more features of the entity E 1 108 b and to compare the one or more features with the pre-stored dataset using a Convoluted Neural Network (CNN) classifier. On comparing the one or more features with the pre-stored dataset, the second module 120 communicatively coupled to the one or more vehicle operations endpoints 114 , transmits the authentication signal thereto.
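The "compare the one or more features with the pre-stored dataset" step in [0088] boils down to matching an extracted feature vector against enrolled ones. Ford's patent does not specify a metric, so squared distance and every name below are assumptions for illustration only.

```python
# Hypothetical nearest-match step for the feature-comparison stage in [0088]:
# pick the enrolled entity whose stored feature vector is closest to the
# features extracted from the camera image.

def best_match(features, enrolled):
    """Return the enrolled label whose feature vector is closest."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(enrolled, key=lambda label: sqdist(features, enrolled[label]))

# Toy enrolled dataset of two entities with 3-dimensional feature vectors.
enrolled = {"owner": [0.9, 0.1, 0.8], "stranger": [0.1, 0.9, 0.2]}
print(best_match([0.85, 0.15, 0.75], enrolled))  # owner
```

In the patent's flow, a match like this is what triggers the authentication signal sent to the vehicle operations endpoints.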
 
I can't see them bumping (pissing off) customers to personally jump the queue. Fast way to go broke. The supply shortage won't last forever. Intel would love to see them do it.
You are entirely missing the point. Renesas sells chips it makes, thus it has its own production line for its own chips. Simple. FF
 
The clue is in the words "in-house" production. FF
 