BRN Discussion Ongoing

In the VC investments graph, I think the sleeper is Digital Security.

We've seen Putin's attitude to human life; destroying your computer presents no moral barrier, and he has the capability.
Agree... it's interesting to look at the run-off in some industries over the past couple of years and the ramp-up in others where investor dollars see potentially greater near-term growth and returns.
 
  • Like
Reactions: 3 users

Chilling

Member
OK, I have to ask this. The 1000 Eyes have uncovered a labyrinth of amazing links, so a number of behemoths within the industry will have done the same. We know that we are around three years ahead of our nearest competitor and have patents protecting us as well. At the giveaway level of circa $1 a share, wouldn't a competitor consider making a bid of around $5 to $10 per share to 'buy in'? I understand the top 20 shareholders have a strong controlling interest, but at the end of the day (I hate that saying) everyone is in it for the money.

I suppose my comments are stirred by the discussion about the after-market 20m sale. I know we have discussed this before, but.....
Of course they must be considering it; the price is the question. I have long thought a big name taking a strategic stake in the company would be the best option.
 
  • Like
Reactions: 5 users

Learning

Learning to the Top 🕵‍♂️
Wouldn"t that be nice ...................... 2 days for the market too see BRNs new IP owner,
Mmmm, .............. Apple, Dell, Bosch, Samsung, ??? OR
Nividia takeover play ??? Qualcomm takeover play ??

Quite an amazing closing auction !!!!

Akida Ballista ................ here for the ride of our lives.
With these plates proudly attached to the new Tesla Roadster Gen2 ............
We can ride together. Lol

(image attachment)


It's great to be a shareholder.
 
  • Like
  • Fire
  • Wow
Reactions: 12 users

AusEire

Founding Member. It's ok to say No to Dot Joining
  • Haha
  • Like
  • Fire
Reactions: 16 users

uiux

Regular

Not sure if this has been posted - NXP wireless gaming headphones. It certainly has a lot of what I have come to know as "BrainChip terms".

NxH3670: Ultra-low Power, Low Latency Audio for Wireless Gaming Headphones


Overview

The NxH3670 is a highly integrated, single-chip, ultra-low-power 2.4 GHz wireless transceiver with an embedded MCU (integrated Arm® Cortex®-M0 processor), targeted at wireless audio streaming for gaming headphones and delivering low-latency audio with ultra-low power consumption.
The NxH3670 runs a proprietary wireless audio streaming protocol optimized for wireless gaming headset applications, providing high-quality forward audio streaming at low latency (<20 ms) combined with a simultaneous voice-microphone backchannel. Additionally, a wireless bidirectional data channel is available.
While primarily designed for headsets, this solution can also enable wireless audio streaming for household and office devices such as soundbars, wireless speakers, wireless subwoofers and more.
Overall, the NxH3670 delivers a small form factor and long battery life while reducing overall weight and size.

45 pages of technical info here


nope


I wish more people would use the "is this an Akida?" thread - I am getting sick of reading posts that have nothing to do with neuromorphic tech or neural networks in general.


If you aren't sure just ask in that thread
 
  • Like
Reactions: 8 users

BaconLover

Founding Member
  • Haha
  • Like
  • Fire
Reactions: 18 users

Labsy

Regular
Interesting experience in a Miele store today. Went with the Mrs to have a look at some appliances for a new kitchen we plan on building... The sales assistant asked when we plan on renovating our house. I mentioned probably beginning late next year. She said, "By then all the appliances you see in the showroom will change. They will look the same but be different." I said, "Let me guess... they will be intuitive, yes? Perhaps some AI." She said, "Yeeeesss, how did you know?" 😳 haha...
Just goes to show we are definitely in the right place at the right time... I hope there are some Renesas chips in there...
 
  • Like
  • Fire
  • Wow
Reactions: 42 users

Taco77

Member
 

  • Like
  • Haha
Reactions: 2 users
From my informed lay perspective, it is using watts of power, so on this basis it does not inhabit the same territory. It does not have incremental and one-shot learning.

What is unclear to me is whether it might have the potential to reduce the training parameters of the CNN used by AKIDA, but I am also not sure whether that would be an actual advantage.

Though it sends less data, it still requires a connection, which means that unlike AKIDA it could drop out in my carport like my iPhone and drive my car through the back wall.

It is being implemented in 40 nm, and they do not state whether it can scale down; but even if it can, and it chased AKIDA down to 4 nm, it would not catch up.

Subject to Diogenese's view.
My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 16 users

The only problem is that the German government would most likely reject such a takeover.

Also, I am not sure why Mercedes would be interested: having shown how advanced its EQXX EV technology is, what would it actually get out of the deal?

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
Reactions: 9 users

Learning

Learning to the Top 🕵‍♂️
Hi all,

This is my opinion on the unusual volume at the closing auction today: nearly 20 million shares were traded after the market closed.

We all know today was the quarterly rebalance of the S&P/ASX indices.
Unusual volume happens when a rebalance occurs, as stocks enter or exit the ASX 200/300.

However, BRN didn't enter the ASX 200 or leave the ASX 300, so why the unusual volume?

My opinion is that some index funds had started to build a position in BRN in anticipation of BRN entering the ASX 200. In the end BRN didn't enter the ASX 200 this quarter, so those funds had to unload their BRN holdings today.

As to who the buyers were in the closing auction: could the shorts have taken this opportunity to close their positions in anticipation of good news coming from BRN? (One can hope.)

It's great to be a shareholder.
 
  • Like
  • Thinking
  • Fire
Reactions: 14 users


Diogenese

Top 20
This is way above my pay grade - I've been flying by the seat of my pants before this, so I think this is in the Kristofor Karlson, Simon Thorpe, PvdM bailiwick.

However, if I had to guess, I'd say that our on-chip, one-shot learning would make this irrelevant for Akida.


Weight generation and storage generally happen off-chip, creating a power and latency bottleneck related to the movement of this data to and from external memory. Instead, the Hiddenite architecture features on-chip weight generation for re-generating weights through a random number generator, effectively eliminating the need to access the external memory. Beyond this, Hiddenite offers "on-chip supermask expansion", a feature that reduces the number of supermasks that need to be loaded by the accelerator.

Fabricated on TSMC's 40nm technology, the chip measures 3 mm x 3 mm and is capable of performing 4,096 multiply-and-accumulate operations simultaneously. The researchers further claim it achieves a maximum of 34.8 TOPS per watt, all while reducing the amount of model transfer to half that of binarized networks.
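
To make the supermask idea concrete, here's a minimal sketch of my own (assuming nothing about Hiddenite's actual implementation beyond the description above): the weights are re-generated on demand from a fixed RNG seed, and a learned binary supermask selects which of them take part, so only the seed and the mask ever need to move on or off chip.

```python
import numpy as np

def hiddenite_style_layer(x, seed, supermask):
    """Illustrative 'hidden network' layer: the dense weights are never stored.

    They are re-generated from a fixed RNG seed on every call, and a learned
    binary supermask selects the subset that actually participates, so only
    the seed and the (highly compressible) mask need to move off-chip.
    """
    rng = np.random.default_rng(seed)          # deterministic weight source
    w = rng.standard_normal(supermask.shape)   # re-generated, not loaded
    return x @ (w * supermask)                 # only masked weights contribute

# Toy usage: a 4-input, 3-output layer, weights reproducible from seed=42.
x = np.array([1.0, 0.5, -0.2, 0.1])
mask = (np.random.default_rng(1).random((4, 3)) > 0.5).astype(float)
print(hiddenite_style_layer(x, seed=42, supermask=mask))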



Akida
US2020143229A1 SPIKING NEURAL NETWORK

[0123] In some embodiments, when a logical AND operation is performed on a spike bit in the spike packet that is ‘1’ and a synaptic weight that is zero, the result is a zero. This can be referred to as an ‘unused spike.’ When a logical AND operation is performed on a spike bit in the spike packet that is ‘0’ and a synaptic weight that is ‘1’, the result is zero. This can be referred to as an ‘unused synaptic weight’. The learning circuit (e.g., weight swapper 113 ) can swap random selected unused synaptic weights where unused spikes occur.
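
For anyone trying to picture paragraph [0123], here is a rough sketch of how I read it (variable names and details are mine, not the patent's): AND the spike packet against the synaptic weights, flag the mismatches in both directions, and let the learning step relocate a randomly selected unused weight onto an unused-spike position.

```python
import numpy as np

def and_step_with_swap(spikes, weights, rng):
    """Sketch of the logical-AND inference step plus the weight swap.

    spikes  : boolean spike packet, shape (n,)
    weights : boolean synaptic weights, shape (n,) - modified in place
    """
    hits = spikes & weights                 # these increment the potential
    unused_spikes = spikes & ~weights       # spike arrived, no weight there
    unused_weights = ~spikes & weights      # weight present, no spike arrived

    # Learning: relocate one randomly chosen unused weight to an unused spike.
    src = np.flatnonzero(unused_weights)
    dst = np.flatnonzero(unused_spikes)
    if src.size and dst.size:
        weights[rng.choice(src)] = False
        weights[rng.choice(dst)] = True
    return int(hits.sum())

rng = np.random.default_rng(0)
spikes  = np.array([1, 0, 1, 1, 0], dtype=bool)
weights = np.array([1, 1, 0, 0, 0], dtype=bool)
print(and_step_with_swap(spikes, weights, rng))   # potential contribution: 1
print(weights)                                    # weight moved to a used input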

By the way, I just revisited Kristofor Karlson's TinyML talk from March last year:

Watch It’s an SNN future: Are you ready for it? Converting CNN’s to SNN’s - Talk Video by Kristofor Karlson | ConferenceCast.tv

BrainChip provides a number of proprietary models:


While I was browsing the above Akida patent, my memory was jogged when I re-read some advantages of the Akida digital SNN compared with conventional SNNs:

US2020143229A1 SPIKING NEURAL NETWORK

[0038] But conventional SNNs can suffer from several technological problems. First, conventional SNNs are unable to switch between convolution and fully connected operation. For example, a conventional SNN may be configured at design time to use a fully-connected feedforward architecture to learn features and classify data. Embodiments herein (e.g., the neuromorphic integrated circuit) solve this technological problem by combining the features of a CNN and a SNN into a spiking convolutional neural network (SCNN) that can be configured to switch between a convolution operation or a fully-connected neural network function. The SCNN may also reduce the number of synapse weights for each neuron. This can also allow the SCNN to be deeper (e.g., have more layers) than a conventional SNN with fewer synapse weights for each neuron. Embodiments herein further improve the convolution operation by using a winner-take-all (WTA) approach for each neuron acting as a filter at particular position of the input space. This can improve the selectivity and invariance of the network. In other words, this can improve the accuracy of an inference operation.

[0039] Second, conventional SNNs are not reconfigurable. Embodiments herein solve this technological problem by allowing the connections between neurons and synapses of a SNN to be reprogrammed based on a user defined configuration. For example, the connections between layers and neural processors can be reprogrammed using a user defined configuration file.

[0040] Third, conventional SNNs do not provide buffering between different layers of the SNN. But buffering can allow for a time delay for passing output spikes to a next layer. Embodiments herein solve this technological problem by adding input spike buffers and output spike buffers between layers of a SCNN.

[0041] Fourth, conventional SNNs do not support synapse weight sharing. Embodiments herein solve this technological problem by allowing kernels of a SCNN to share synapse weights when performing convolution. This can reduce memory requirements of the SCNN.

[0042] Fifth, conventional SNNs often use 1-bit synapse weights. But the use of 1-bit synapse weights does not provide a way to inhibit connections. Embodiments herein solve this technological problem by using ternary synapse weights. For example, embodiments herein can use two-bit synapse weights. These ternary synapse weights can have positive, zero, or negative values. The use of negative weights can provide a way to inhibit connections which can improve selectivity. In other words, this can improve the accuracy of an inference operation.

[0043] Sixth, conventional SNNs do not perform pooling. This results in increased memory requirements for conventional SNNs. Embodiments herein solve this technological problem by performing pooling on previous layer outputs. For example, embodiments herein can perform pooling on a potential array outputted by a previous layer. This pooling operation reduces the dimensionality of the potential array while retaining the most important information.

[0044] Seventh, conventional SNN often store spikes in a bit array. Embodiments herein provide an improved way to represent and process spikes. For example, embodiments herein can use a connection list instead of bit array. This connection list is optimized such that each input layer neuron has a set of offset indexes that it must update. This enables embodiments herein to only have to consider a single connection list to update all the membrane potential values of connected neurons in the current layer.

[0045] Eighth, conventional SNNs often process spike by spike. In contrast, embodiments herein can process packets of spikes. This can cause the potential array to be updated as soon as a spike is processed. This can allow for greater hardware parallelization.

[0046] Finally, conventional SNNs do not provide a way to import learning (e.g., synapse weights) from an external source. For example, SNNs do not provide a way to import learning performed offline using backpropagation. Embodiments herein solve this technological problem by allowing a user to import learning performed offline into the neuromorphic integrated circuit.
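
Of all these, the ternary-weight point ([0042]) is the easiest to demonstrate with a toy example. A rough sketch of my own (illustrative only, not BrainChip's code): with weights in {-1, 0, +1}, a spike landing on a negative weight inhibits the neuron, which 1-bit {0, 1} weights simply cannot express.

```python
import numpy as np

# Illustrative only: ternary synapse weights in {-1, 0, +1} (two bits each).
spikes  = np.array([1, 1, 0, 1], dtype=np.int8)     # incoming spike packet
ternary = np.array([+1, -1, +1, 0], dtype=np.int8)  # excite, inhibit, excite, off
binary  = (ternary > 0).astype(np.int8)             # all a 1-bit weight can keep

print("ternary potential:", int(spikes @ ternary))  # 1 - 1 + 0 + 0 = 0 (inhibited)
print("1-bit potential:  ", int(spikes @ binary))   # 1 + 0 + 0 + 0 = 1
```

The second spike cancels the first in the ternary case, so the neuron stays quiet; with 1-bit weights the inhibition is lost and the neuron fires anyway.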
 
  • Like
  • Fire
  • Love
Reactions: 15 users

Diogenese

Top 20
This Technion patent application makes reference to you-know-whom:

WO2021214763A1 DEVICE AND METHOD FOR RAPID DETECTION OF VIRUSES
(3rd last para of description)
The data was analyzed by Brainchip with a Spiking Neural Network, the adjacent confusion matrix shows the results on the test set. The test set included 31 samples- 21 positives and 10 negatives from 21 tested subjects. Zero out of 21 positive samples were identified correctly which represents 100% sensitivity and 4 out of 10 negative samples were identified correctly which represents 40% specificity. The overall accuracy was 80.65%
(I think they got their "nots" in a twist)
...
The same data set (NB: not the same as above) was analyzed also by the SNN methodology. To make the SNN most efficient, 34 samples were discarded due to noise or improper vector dimensionality. Thus, the dataset included 131 samples taken from 126 subjects tested with Sniffphone device at Zayed Military Hospital- 62 samples from 62 COVID-19 positive subjects and 69 samples from 64 COVID-19 negative subjects (Several negative subjects were sampled two or three times). The adjacent confusion matrix shows the results on the test set that that was completely blind to the training and validation of the model. The test set included 53 samples - 20 positive and 33 negative samples from 53 tested subjects. Nineteen out of 20 positive samples were identified correctly which represents 95% sensitivity and 29 out of 33 negative samples were identified correctly which represents 87.87 % specificity. The overall accuracy was therefore 90.5%.
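
For anyone checking the arithmetic, sensitivity, specificity and accuracy fall straight out of the confusion matrix: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), accuracy = (TP+TN)/total. Plugging in the second test set's numbers (a sanity check only):

```python
# Sanity check of the Zayed test-set figures quoted above.
tp, fn = 19, 1                                 # 19 of 20 positives found
tn, fp = 29, 4                                 # 29 of 33 negatives found

sensitivity = tp / (tp + fn)                   # 19/20 = 0.95    -> 95%
specificity = tn / (tn + fp)                   # 29/33 = 0.8787  -> 87.87%
accuracy = (tp + tn) / (tp + fn + tn + fp)     # 48/53 = 0.9057  -> ~90.5%
print(sensitivity, specificity, accuracy)
```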

Footnote: @uiux has already posted this patent on the NaNose thread.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 23 users
D

Deleted member 118

Guest
Just posted some cool videos by Professor Haick on the NaNose page, and on the plus side @Fact Finder, you don't have to read anything.

 
Last edited by a moderator:
  • Like
  • Sad
Reactions: 7 users

MrNick

Regular
Thanks all for the considered responses. F1 news is interesting. Lewis for the magic 8 ball GOAT title.
 
  • Like
Reactions: 6 users