BRN Discussion Ongoing

GREEEEEEENNNNNNN BAAAAAAAABBBBBYY
Apologies in advance I think I'll only be posting gifs today 🤣

Slymeat

Move on, nothing to see.

VictorG

Member
Can we please stand for one minute in silence in memory of BRN shorters.
No, just joking :ROFLMAO:
Fcuk em!
You can translate this yourself with 2 clicks
My two cents is… it’s all about respect. Both from posting German in an English forum, as well as the above comment. Neither attitude helps anyone…

I speak fluent German, Danish and English, so not bothered really, but that’s me. It’s an English speaking forum, so I’d never use any other language.

But hey that’s just me…

Looks like we cracked $1. So it might be naked pretend hot tub celebrations today 🤣😂

buena suerte :-)

BOB Bank of Brainchip
That's me topped up!!! ;);) I WISH !!!! :)

10:46:01 AM | Price $1.075 | Qty 297,725 | Value $320,054.375 | ASX

Townyj

Ermahgerd

jk6199

Regular
A $54B bill passed for chips in the USA 🤔

Just got to find a company with potentially the most ground-breaking technology in this area???

Makeme 2020

Regular
What an awesome start to a dull day
Go you good thing
BRN take me back to $2.34
Please……

Bravo

If ARM was an arm, BRN would be its biceps💪!
Plus we have on-chip learning, which SynSense doesn't.

3.7 DYNAP​

DYNAP (Dynamic Neuromorphic Asynchronous Processors) is a family of solutions from SynSense, a spin-off from the University of Zurich. The company has patented event-routing technology for communication between the cores.
According to [dynap_routing], the scalability of neuromorphic systems is limited mainly by the technologies for communication between neurons; all other limitations are far less important. Researchers at SynSense invented and patented a two-level communication model based on choosing the right balance between point-to-point communication between neuron clusters and broadcast messages within clusters. The company has presented several neuromorphic processors (ASICs): DYNAP-SE2, DYNAP-SEL and DYNAP-CNN.
The Dynap-SE2 and Dynap-SEL chips are not commercial projects; they are being developed by neuroscientists as tools for their research. Dynap-CNN (2021, tinyML), by contrast, is marketed as a commercial chip for efficient execution of CNNs converted to SNNs. Whereas the Dynap-SE2 and Dynap-SEL research chips implement analog computing and digital communication, Dynap-CNN is fully digital.
Dynap-SE2 is designed for feed-forward, recurrent and reservoir networks. It includes four cores with 1k LIFAT analog spiking neurons and 65k synapses with configurable delay, weight and short-term plasticity. There are four types of synapses (NMDA, AMPA, GABAa, GABAb). Researchers use the chip to explore topologies and communication models of SNNs.
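The leaky integrate-and-fire dynamics behind neurons like these can be sketched in a few lines. This is a generic discrete-time LIF simulation for illustration only: the parameter values are made up, and Dynap-SE2's analog LIFAT neurons (which add an adaptive threshold) are considerably more elaborate.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron.
# Parameters are illustrative, not taken from any chip datasheet.

def simulate_lif(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Return the list of time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward 0 and integrates the input.
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset  # reset after a spike
    return spikes

# A constant drive produces a regular spike train.
spikes = simulate_lif([0.3] * 50)
```

With the constant input above the membrane charges for a few steps, fires, resets, and repeats, which is the basic rate-coding behaviour the rest of the section relies on.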
The main distinctive features of the Dynap-SEL chip are support for on-chip learning and large fan-in/fan-out network connectivity. It was created for emulating biologically realistic networks. The Dynap-SEL chip includes five cores, only one of which has plastic synapses. The chip implements 1,000 analog spiking neurons and up to 80,000 configurable synaptic connections, including 8,000 synapses with integrated spike-based learning rules (STDP). Researchers use the chip to model cortical networks.
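The spike-based learning rule named here, STDP, can be illustrated with a minimal pair-based weight update: a presynaptic spike shortly before a postsynaptic one strengthens the synapse, the reverse order weakens it. The amplitudes and time constant below are illustrative, not Dynap-SEL's actual hardware parameters.

```python
# Pair-based STDP weight update (textbook form, illustrative constants).
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # plasticity time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation (LTP)
        return A_PLUS * math.exp(-dt / TAU)
    else:        # post before (or with) pre -> depression (LTD)
        return -A_MINUS * math.exp(dt / TAU)
```

Causal pairings yield a positive weight change, anti-causal ones a negative change, and the effect decays exponentially with the spike-time gap.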

The Dynap-CNN chip has been available with a Development Kit since 2021. Dynap-CNN is a 12 mm² chip, fabricated in 22 nm technology, hosting over one million spiking neurons and four million programmable parameters. Dynap-CNN is completely digital and implements a linear neuron model without leakage. The chip is best combined with event-based sensors (DVS) and is suitable for image classification tasks. In inference mode the chip can run an SNN converted from a CNN with no more than nine convolutional or fully connected layers and no more than 16 output classes. On-chip learning is not supported. The original CNN must first be created with PyTorch and trained by classical methods (for example, on a GPU). Then, using the Sinabs.ai framework (an open-source PyTorch-based library), the convolutional network can be converted to a spiking form for execution on Dynap-CNN in inference mode.
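The CNN-to-SNN conversion mentioned above rests on rate coding: a non-leaky integrate-and-fire neuron (the linear, leak-free model Dynap-CNN implements) fires at a rate proportional to the ReLU of its input. This toy simulation shows that equivalence; it is illustrative only and is not Sinabs code.

```python
# A non-leaky IF neuron's spike rate approximates relu(x) -- the basis
# of rate-coded CNN-to-SNN conversion. Illustrative sketch only.

def if_rate(x, steps=1000, v_thresh=1.0):
    """Spike rate of a non-leaky IF neuron driven by constant input x."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += x             # integrate, no leak
        if v >= v_thresh:
            spikes += 1
            v -= v_thresh  # subtractive reset keeps the residual charge
    return spikes / steps

# For x in [0, 1] the rate approximates relu(x); negative input gives 0.
```

Because the reset is subtractive rather than to zero, no charge is lost and the long-run rate converges on the analog activation, which is why converted networks keep their accuracy.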

Dynap-CNN has demonstrated the following results:
  • CIFAR-10: 1mJ at 90% accuracy,
  • attention detection: less than 50 ms and 10 mW,
  • gesture recognition: less than 50 ms and 10 mW at 89% accuracy,
  • wake phrase detection: less than 200 ms at 98% sensitivity and false-alarm rate less than 1 per 100 hours (office background).

3.8 AKIDA​

Akida [akida] is the first commercial neuromorphic processor, commercially available since August 2021. It has been developed by the Australian company BrainChip since 2013. Fifteen companies, including NASA, joined the early access program. In addition to the Akida System on Chip (SoC), BrainChip also offers licensing of its technology, providing chip manufacturers a license to build custom solutions.
The chip is marketed as a power-efficient event-based processor for Edge computing that does not require an external CPU. Power consumption for various tasks may range from 100 µW to 300 mW. For example, Akida is capable of processing at 1,000 frames/Watt (compare TrueNorth at 6,000 frames/Watt). The first-generation chip supports operations with convolutional and fully connected networks, with the prospect of adding support for LSTMs, transformers, capsule networks, recurrent and cortical neural networks. An ANN can be transformed into an SNN and executed on the chip.
One Akida chip in a mesh network incorporates 80 Neural Processing Units (NPUs), which enables modeling 1,200,000 neurons and 10,000,000,000 synapses. The chip is fabricated in TSMC's 28 nm process. In 2022, BrainChip announced a second-generation chip at 16 nm.

Akida’s ecosystem provides a free chip emulator, the TensorFlow-compatible framework MetaTF for transforming convolutional and fully connected neural networks into SNNs, and a set of pre-trained models. When designing a neural network architecture for execution on Akida, one should take into account a number of additional limitations concerning the layer parameters (e.g. the maximum convolution size is 7, while stride 2 is supported only for convolution size 3) and their sequence.
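The two layer constraints quoted above can be captured in a small validation helper. The function and its layer description are hypothetical, written only to make the constraints concrete; this is not MetaTF's actual API.

```python
# Hypothetical pre-flight check for the two convolution constraints the
# text mentions for Akida: kernel size <= 7, and stride 2 only with
# kernel size 3. Not part of MetaTF; illustration only.

def check_conv_layer(kernel_size, stride):
    """Return (ok, reason) for a single convolutional layer."""
    if kernel_size > 7:
        return False, "kernel size must be <= 7"
    if stride == 2 and kernel_size != 3:
        return False, "stride 2 is supported only for kernel size 3"
    return True, "ok"
```

Running such checks before training saves a round trip: a network that violates them would only fail later, at conversion time.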

The major distinctive feature is that incremental, one-shot and continuous learning are supported directly on the chip. At the AI Hardware Summit 2021, BrainChip showed a solution capable of identifying a person in other contexts after having seen him or her only once. Another BrainChip product is a smart speaker that, on hearing a new voice, asks the speaker to identify themselves and thereafter addresses that person by name. These results are achieved with the help of a proprietary local training algorithm based on homeostatic STDP. Only the last fully connected layer supports synaptic plasticity and is involved in learning.
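One-shot learning confined to a final classification layer can be sketched as storing a single example's feature vector as a class prototype and matching new inputs against the stored prototypes. Akida's actual homeostatic-STDP rule works differently; this toy class only illustrates why one example per class can be enough when the upstream feature extractor is frozen.

```python
# Toy one-shot classification head: learn a class from one feature
# vector, predict by nearest prototype (cosine similarity).
# Illustrative only; not BrainChip's algorithm.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

class OneShotHead:
    def __init__(self):
        self.prototypes = {}  # label -> stored feature vector

    def learn(self, label, features):
        # One example is enough: store it as the class prototype.
        self.prototypes[label] = features

    def predict(self, features):
        # Nearest prototype wins.
        return max(self.prototypes,
                   key=lambda l: cosine(self.prototypes[l], features))
```

For example, after `learn("alice", v1)` and `learn("bob", v2)`, any feature vector closer to `v1` than `v2` is labelled "alice" without any retraining of earlier layers.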
Another instructive case from the AI Hardware Summit 2021 was classification of fast-moving objects (for example, a race car). Usually such objects are off the frame center and significantly blurred, but they can be detected using an event-based approach.

Looks like Sony has sensor operations in Switzerland


Great point @Moonshot, BrainChip has one-shot/on-chip learning which SynSense doesn't. And I love how the research paper by Dmitry Ivanov describes the following:

The first generation chip supports operations with convolutional and fully connected networks, with the prospect to add support of LSTM, transformers, capsule networks, recurrent and cortical neural networks.

If you thought AKIDA 1000 was incredible (as we all do), then how mind-blowing is it to consider what the future iterations/generations of AKIDA will be capable of doing! And to think the AKIDA 2000 simulation is working already and has even possibly been handed over to engineering, well...it makes me want to grab my leprechaun boots out of the closet and do a little jig.

AKIDA BALLISTA - UBIQUITOUS - MAGIC SAUCE - BLAZINGLY FAST - UNPARALLELED PERFORMANCE - GAME CHANGER - KRYPTONITE - 10 x BIGGER THAN MICROSOFT - 11 X BIGGER THAN BEN HUR - DE FACTO STANDARD - SUPERCHIP ON STEROIDS!!!!!!!!!! 🍾🤩🥳
 

Fox151

Regular
Can we please stand for one minute in silence in memory of BRN shorters.
No, just joking :ROFLMAO:
Fcuk em!
As a wise man once said, "we don't need no water..."