BRN Discussion Ongoing

FJ-215

Regular
"Single neural processing engine
Minimal core for TENNS"

I had assumed that the Akida NN would need at least 2 NPEs, but TENNS can run in a single NPE???!!!!

That is truly astonishing.

... and it does not need a microprocessor????!!!!!

https://www.epdtonthenet.net/articl...ion-for-Resource-Constrained-Deployments.aspx

This IP relies on the Akida2 event-based computing platform configuration engine as its foundation, meaning that the data quantities needing to be dealt with are kept to a minimum. Consequently only a small logic die area is required (0.18mm x 0.18mm on a 22nm semiconductor process with 50kBytes of SRAM memory incorporated), plus the associated power budget remains low (with <1mW operation being comfortably achieved). It can serve as either a standalone device (without requiring a microcontroller) or alternatively become a co-processor.

It's a self-contained package that sets the benchmark for low power.
Hi Dio,

A little left field but can you see a way for the JAST learning rules to be implemented in to TENNs? (orthogonal polynomials)

From memory the JAST rules only took up something like 64K lines of code.

Way, way, way outside of my pay grade.
 
  • Like
  • Thinking
Reactions: 3 users

HopalongPetrovski

I'm Spartacus!
Proving that we ARE the benchmark leaders at the "far edge"....PICO.....Pico Brainchip when you want first mover advantage, stop
procrastinating and sign up, our bus feels like it's warming the engines up again.

Fancy that, the company making a nice announcement on 1 October, my birthday 66 clickity...click.....thanks for sharing the love back in the US.

Is it just me, or has the trading pattern changed somewhat ?

Regards to all....Tech.
Happy Bday Tech. Mine next week. Let's hope the combined gravity of our good karma manifests in a couple of big, juicy deals that Brainchip lands, signed, sealed, delivered and announced, providing a steady flow of revenue into our coffers. 🤣


Unknown-1.jpeg
 
  • Like
  • Haha
  • Love
Reactions: 15 users

FJ-215

Regular
Proving that we ARE the benchmark leaders at the "far edge"....PICO.....Pico Brainchip when you want first mover advantage, stop
procrastinating and sign up, our bus feels like it's warming the engines up again.

Fancy that, the company making a nice announcement on 1 October, my birthday 66 clickity...click.....thanks for sharing the love back in the US.

Is it just me, or has the trading pattern changed somewhat ?

Regards to all....Tech.
Happy Birthday Tech.....

Have you made the trek to the Melbourne sandbelt yet?

Should be on every golfer's bucket list.

Mind you, a couple are out of play atm, but the Big 3 (Royal, Heath, Vic) are a must-play.

You deserve it, great Birthday present to yourself.
 
Last edited:
  • Like
  • Love
Reactions: 4 users

Derby1990

Regular
BRN back to nearly half a billion MC on just a sniff of something. Look out BHP, we're coming for you!
 
  • Like
  • Haha
  • Fire
Reactions: 9 users

IloveLamp

Top 20
BRN back to nearly half a billion MC on just a sniff of something. Look out BHP, we're coming for you!
1000012150.gif
 
  • Fire
Reactions: 1 user

Diogenese

Top 20
Good to see us listed as partners on the Neurobus website. Looks like we may already be embedded in some of their products...

Thanks GS,

... and their other industry partners are Intel and Prophesee.




1727860071069.png



Prophesee's DVS can instantaneously detect changes above a set threshold in a field of view. This is another project on which Brainchip and Prophesee are working together.
Hi Dio,

A little left field but can you see a way for the JAST learning rules to be implemented in to TENNs? (orthogonal polynomials)

From memory the JAST rules only took up something like 64K lines of code.

Way, way, way outside of my pay grade.
Well, until you asked the question, I hadn't thought about it.

In my mind, I had considered the division of labour as being Akida doing the inference/classification and TENNS handling the temporal element. But we have learnt that TENNS can be used without Akida, so there's another neat hypothesis down the plughole.

I haven't really got into how TENNS works, but we do know that, unlike Akida, it uses MACs. However, it also uses the same sparse weights and activations.

I'm more a pictures person than a words person and I'm not personally acquainted with converging orthogonal polynomials, but some of my best friends are.

This is an abstract from the TENNS patent:

WO2023250093A1 METHOD AND SYSTEM FOR IMPLEMENTING TEMPORAL CONVOLUTION IN SPATIOTEMPORAL NEURAL NETWORKS 20220622
COENEN OLIVIER JEAN-MARIE DOMINIQUE [US]; PEI YAN RU [US]


[0012] According to an embodiment of the present disclosure, disclosed herein is a neural network system that includes an input interface, a memory including a plurality of temporal and spatial layers, and a processor.

The input interface is configured to receive sequential data that includes temporal data sequences. The memory is configured to store a plurality of groups of first temporal kernel values, and a first plurality of First-In-First-Out (FIFO) buffers corresponding to a current temporal layer.

The memory further implements a neural network that includes a first plurality of neurons for the current temporal layer; a corresponding group among the plurality of groups of the first temporal kernel values is associated with each connection of a corresponding neuron of the first plurality of neurons.

The processor is configured to allocate the first plurality of FIFO buffers to a first group of neurons among the first plurality of neurons.

The processor is then configured to receive a first temporal sequence of the corresponding temporal data sequences into the first plurality of FIFO buffers allocated to the first group of neurons from corresponding temporal data sequences over a first time window.

Thereafter, the processor is configured to perform, for each connection of a corresponding neuron of the first group of neurons, a first dot product of the first temporal sequence of the corresponding temporal data sequences within a corresponding FIFO buffer of first plurality of FIFO buffers with a corresponding temporal kernel value among the corresponding group of the first temporal kernel values.

The corresponding temporal kernel values are associated with a corresponding connection of the corresponding neuron of the first group of neurons.

The processor is then further configured to determine a corresponding potential value for the corresponding neurons of the first group of neurons based on the performed first dot product and then generates a first output response based on the determined corresponding potential values.
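For what it's worth, the buffered FIFO/dot-product flow in that abstract can be sketched in a few lines. Everything here (the window length, channel count and random data) is my own illustrative choice, not something taken from the patent:

```python
from collections import deque

import numpy as np

# Hypothetical sizes, purely for illustration.
T = 8      # depth of the temporal window (FIFO length)
N_IN = 4   # input connections feeding one neuron

rng = np.random.default_rng(0)

# One group of temporal kernel values per connection of the neuron.
kernels = rng.standard_normal((N_IN, T))

# One FIFO buffer per connection, each holding the last T samples.
fifos = [deque([0.0] * T, maxlen=T) for _ in range(N_IN)]

def step(sample):
    """Push one time step of input and return the neuron's potential.

    The potential is the sum, over connections, of the dot product of
    each FIFO's contents with that connection's temporal kernel (the
    'buffer mode' of the temporal convolution described above).
    """
    potential = 0.0
    for i in range(N_IN):
        fifos[i].append(sample[i])  # newest sample in, oldest drops out
        potential += float(np.dot(np.asarray(fifos[i]), kernels[i]))
    return potential

# Feed a short temporal sequence and read out the running potentials.
for _ in range(T):
    p = step(rng.standard_normal(N_IN))
```

The spatial convolution layers would then operate on these potentials, but that part is omitted here.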

1727867422135.png



The TENNs as disclosed herein may effectively learn both spatial and temporal correlations from the input data.

[0050] According to an embodiment, the spatiotemporal networks may be configured to perform the temporal convolution operations either in a buffered temporal convolution mode or a recurrent temporal convolution mode, and may be alternatively referred to as a “buffer mode” or a “recurrent mode”, respectively.

[0051] According to an embodiment, the spatiotemporal network may be configured with a plurality of spatiotemporal convolution layers. Each of the spatiotemporal layers may be further split into plurality of temporal and spatial convolution layers. The kernels for the temporal and spatial convolution layers are represented as a sum over a set of basis functions, such as orthogonal polynomials, where the coefficients of the basis functions are trainable parameters of the network. This basis function representation compresses the number of parameters of the spatiotemporal network, which makes the training of the spatiotemporal network stable and resistant to overfitting.*

* Overfitting is responsible for the "hallucinations" experienced with OpenAI's models.
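The compression idea in [0051] is easy to show numerically: instead of learning T free kernel values, you learn a handful of coefficients over an orthogonal polynomial basis. The Chebyshev basis, the sizes and the coefficients below are my own illustrative picks; the patent only says "such as orthogonal polynomials":

```python
import numpy as np

T = 16   # kernel length in time steps
K = 4    # number of basis functions = number of trainable parameters

# Evaluate the first K Chebyshev polynomials on [-1, 1] at T points.
t = np.linspace(-1.0, 1.0, T)
basis = np.stack(
    [np.polynomial.chebyshev.Chebyshev.basis(k)(t) for k in range(K)]
)  # shape (K, T)

# In training these would be the learned parameters; fixed here.
coeffs = np.array([0.5, -0.25, 0.1, 0.05])

# The temporal kernel is the weighted sum of the basis functions:
kernel = coeffs @ basis  # shape (T,)

# K coefficients parameterise a length-T kernel: a 4x compression here,
# which is where the overfitting resistance comes from.
assert kernel.shape == (T,)
```

Longer kernels with the same K give a proportionally bigger compression, at the cost of restricting the kernel to smooth shapes.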



The JAST patent summarizes the rules as follows:

US11853862B2 Method, digital electronic circuit and system for unsupervised detection of repeating patterns in a series of events 20161121

  • Input events (“spikes”) are grouped into fixed-size packets. The temporal order between events of a same packet is lost, which may seem a drawback, but indeed increases robustness as it eases the detection of distorted patterns and makes the method insensitive to changes of the event rate.
  • Weighted or un-weighted synapses are replaced by a set of binary weights. Learning only requires flipping some of these binary weights and performing sums and comparisons, thus minimizing the computational burden.
  • The number of binary weights which is set to “1” for each neuron does not vary during the learning process. This avoids ending up with non-selective or non-sensible neurons.
In other words, the synapses are go/no-go gates, depending on how they are programmed from the model.
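Those three bullet points almost write themselves as code: binarise the packet (losing the order inside it), count the overlap with the binary weights, compare against a threshold, and only ever swap weights so the number of 1s stays constant. The sketch below is a toy illustration of that summary, with made-up sizes, and not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

N_CHANNELS = 32  # possible event addresses
W_ONES = 8       # number of binary weights set to 1 (held constant)
THRESHOLD = 5    # minimum overlap for the neuron to fire

weights = np.zeros(N_CHANNELS, dtype=bool)
weights[rng.choice(N_CHANNELS, W_ONES, replace=False)] = True

def process_packet(event_addresses):
    """Binarise a fixed-size packet of events (temporal order within
    the packet is discarded), then fire on sum-and-compare only."""
    packet = np.zeros(N_CHANNELS, dtype=bool)
    packet[event_addresses] = True  # order inside the packet is lost
    overlap = int(np.count_nonzero(packet & weights))
    return overlap >= THRESHOLD

def learn(event_addresses):
    """Toy update: flip one weight to 1 and one to 0 in the same step,
    so the count of 1s never changes (no non-selective neurons)."""
    packet_set = {int(a) for a in event_addresses}
    gain = [a for a in packet_set if not weights[a]]
    lose = [a for a in np.flatnonzero(weights) if int(a) not in packet_set]
    if gain and lose:
        weights[gain[0]] = True
        weights[lose[0]] = False

fired = process_packet(rng.choice(N_CHANNELS, 10, replace=False))
```

Note there are no multiplies anywhere, only Boolean ANDs, sums and comparisons, which is the point of the binary-weight formulation.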

[0062] The neural processor 320 may correspond to a neural processing unit (NPU). The NPU is a specialized circuit that implements all the control and arithmetic logic necessary to execute machine learning algorithms, typically by operating on models such as artificial neural networks (ANNs) and spiking neural networks (SNNs).

However, does the loss of temporal order within a packet exclude TENNS? I wouldn't think so, as the temporal order between the packets is retained.


So the question is

A - whether the JAST rules are compatible with TENNS COPs; and
B - whether there is any advantage in combining them.

The short answer is: the chooks ate my homework.



Truth is I haven't dug in depth into the TENNS patent or looked at the pictures.

So I think the old engineering adage "If it ain't broke, ..." applies.

As my mother used to say "Leave it alone - you'll make it explode!"
 

  • Like
  • Love
  • Wow
Reactions: 11 users

Guzzi62

Regular
I forgot who Thomas Hulsing is and had to check him on Linkedin.

A systems of systems engineer at Airbus Defence and Space GmbH, working on: Life Support Systems, Scientific Experiments, Satellite SW, Data Evaluation, Automation, Innovation & Improvement.

Clearly a very good sign that he reposted the Akida Pico news, showing he is interested.

I am not usually one for this kind of dot joining, but he seems to be an Akida fanboy, having posted BrainChip news going back months; considering what he does at that 40k-employee company, surely he is playing around with Akida2 & TENNs in his department.

Also positive that Akida Pico is out in the news in several specialist news outlets, last one I saw is newelectronics UK below.

 
  • Like
  • Love
  • Fire
Reactions: 6 users
I forgot who Thomas Hulsing is and had to check him on Linkedin.

A systems of systems engineer at Airbus Defence and Space GmbH, working on: Life Support Systems, Scientific Experiments, Satellite SW, Data Evaluation, Automation, Innovation & Improvement.

Clearly a very good sign that he reposted the Akida Pico news, showing he is interested.

I am not usually one for this kind of dot joining, but he seems to be an Akida fanboy, having posted BrainChip news going back months; considering what he does at that 40k-employee company, surely he is playing around with Akida2 & TENNs in his department.

Also positive that Akida Pico is out in the news in several specialist news outlets, last one I saw is newelectronics UK below.

@Sirod69, who we haven't seen around for a while, confirmed a while back that Thomas is an enthusiastic shareholder and part of her WhatsApp group.
 
  • Like
Reactions: 1 user