BRN Discussion Ongoing

equanimous

Norse clairvoyant shapeshifter goddess
Anyone for some performance figures from the same Forbes article:


"Brainchip offers three Akida IP variants: E, S, and P:

E. The Max Efficiency variant (Akida-E)
provides as many as four nodes (or 16 NPUs), runs as fast as 200MHz, delivers the equivalent of 200 GOPS (giga-operations per second) of performance, and needs only milliwatts of power for operation. The company says that this smaller variant is designed for running simpler AI/ML networks and is intended for use in continuously operating equipment where power consumption is at a premium.

S. The Sensor Balanced variant (Akida-S) can be configured with as many as eight nodes (32 NPUs), runs as fast as 500MHz, and delivers the equivalent of 1 TOPS (trillion operations per second) of performance, which is capable of running object detection and classification workloads.

P. The Performance variant (Akida-P) accommodates as many as 128 nodes (512 NPUs), operates as fast as 1.5GHz, and delivers the equivalent of 50 TOPS. The most capable version of the Performance variant includes optional hardware support for vision transformer networks that take the form of additional nodes in the internal Akida mesh network.

The high-end variant can run the full gamut of AI/ML models in Brainchip’s model zoo to perform tasks including classification, detection, segmentation, and prediction. Together, these Akida variants allow a design team to use one AI/ML architecture that scales from low-power configurations that consume mere microwatts of power to high-performance configurations that deliver dozens of TOPS, and according to BrainChip, can perform HD video object detection at 30 frames per second while consuming less than 75 milliwatts, which could result in very compelling portable vision solutions."
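As a sanity check (my own back-of-the-envelope arithmetic, not from the article), the three variants line up almost perfectly on a per-NPU basis:

```python
# Implied throughput per NPU per clock cycle, using the quoted Forbes figures.
variants = {
    "Akida-E": dict(npus=16,  clock_hz=200e6, ops_per_s=200e9),  # 200 GOPS
    "Akida-S": dict(npus=32,  clock_hz=500e6, ops_per_s=1e12),   # 1 TOPS
    "Akida-P": dict(npus=512, clock_hz=1.5e9, ops_per_s=50e12),  # 50 TOPS
}

for name, v in variants.items():
    per_npu = v["ops_per_s"] / (v["npus"] * v["clock_hz"])
    print(f"{name}: {per_npu:.1f} ops per NPU per cycle")

# Akida-E: 62.5, Akida-S: 62.5, Akida-P: 65.1 -- the quoted figures are
# consistent with each NPU delivering roughly 64 operations per clock, so the
# variants really do scale mostly by node count and clock speed.
```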


Talk about a versatile solution package.

My opinion only DYOR
FF

AKIDA BALLISTA
Hi FF,

My initial thought is that all three could be incorporated together and optimized for robotics and cars, with each variant tailored to audio, visual, and kinesthetic sensing.
 
Reactions: 13 users

BaconLover

Founding Member
The most exciting thing right now is AI :cool:



 
Reactions: 16 users

Doz

Regular
Come on, patent…



1678154672282.png
1678154851525.png
 
Reactions: 31 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Oh, and speaking of SiFive customers, here's what Ziad Asghar, VP of Product Management for Snapdragon Technology and Roadmap at Qualcomm, said in a statement in November 2022.

View attachment 31426



Bearing in mind that Prophesee loves us and would no doubt be pushing for our inclusion in Snapdragon platforms. And SiFive loves us too, as attested by the latest statement from Phil Dworsky (Global Head of Strategic Alliances at SiFive).

So, come on Qualcomm, just go with the flow, because resistance is literally futile. 🚀


View attachment 31425




And then of course there's this...


Screen Shot 2023-03-07 at 1.11.4.png
 
Reactions: 36 users


Bravo

If ARM was an arm, BRN would be its biceps💪!
And this...

feel-the-love-shirt-off.gif
 
Reactions: 19 users

jk6199

Regular
All this price manipulation after yesterday's great news and confirmation of the chips!

I really liked the firm language of the comments, no pussying around, to the point and confident.

I can't believe the chance is still here to buy more at these prices when some big names have endorsed BRN. I might have to sleep in the dog house a bit longer, but I won't forgive myself if I don't buy more shares.

Yes, I'm still an Akidaholic!
 
Reactions: 36 users

TheFunkMachine

seeds have the potential to become trees.
B3FC106F-4F84-44D3-9430-7D9FCC6A6DF4.jpeg

I think this is the reason why BrainChip is pushing for a different way to benchmark edge AI processors: TOPS has been given too much weight as a proxy for power and capability. Akida's top end is 50 TOPS, yet according to NVISO, Akida outperformed Nvidia in every way.

So if Akida, with far fewer TOPS than Nvidia's processors, can outperform them, then benchmarking has to change so that true value gets highlighted.

Sorry if I've misunderstood this, as I am not a super technical guy.
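One way to make that concrete (the GPU-side numbers below are my own illustrative assumption; the Akida figure is the 30 fps at 75 mW claim quoted earlier in the thread):

```python
# Peak TOPS says little about delivered efficiency at the edge. Frames per
# joule (fps per watt) measures useful work per unit of energy instead.
akida_fps, akida_w = 30, 0.075  # HD object detection figure from the Forbes article
gpu_fps,   gpu_w   = 30, 10.0   # hypothetical edge GPU doing the same job at 10 W

print(f"Akida: {akida_fps / akida_w:.0f} frames per joule")  # 400
print(f"GPU:   {gpu_fps / gpu_w:.0f} frames per joule")      # 3
```

On that metric a low-TOPS part can beat a high-TOPS one by orders of magnitude, which I take to be NVISO's point.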
 
Reactions: 15 users

Learning

Learning to the Top 🕵‍♂️
This is news to me.

Screenshot_20230307_134304_Chrome.jpg

Screenshot_20230307_134323_Chrome.jpg
Screenshot_20230307_134333_Chrome.jpg


Wow...

Learning 🏖
 
Reactions: 61 users

Cardpro

Regular
We need to get off the ASX ASAP. This is why we have so few US investors. Today we had 150,000 shares traded; the average is 40,000 or less. We will not get US investors until we are listed on the NASDAQ.
If we list in the US now we will get eaten alive; we need to build a strong ship (strong partnerships & revenue sources) before we sail to the NASDAQ, IMO.
 
Reactions: 42 users


Damo4

Regular
Uuuuuuhhhhrrrrmmm...What!!!!!

I'm warming up my chakras and giving myself a migraine trying to will our incorporation in this one into existence.

The patent hasn't been issued as yet, or at least not one that I can access. This will be one for dear Dodgy Knees to follow up on, @Diogenese.


View attachment 31446
View attachment 31447

Mate, you're starting to convince me
 
Reactions: 10 users

Dhm

Regular
“Through our collaboration with BrainChip, we are enabling the combination of SiFive’s RISC-V processor IP portfolio and BrainChip’s 2nd generation Akida neuromorphic IP to provide a power-efficient, high capability solution for AI processing on the Edge,” said Phil Dworsky, Global Head of Strategic Alliances at SiFive. “Deeply embedded applications can benefit from the combination of compact SiFive Essential™ processors with BrainChip’s Akida-E efficient processors; more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip’s Akida-S or Akida-P neural processors.”
Phil Dworsky, Global Head of Strategic Alliances, SiFive

Source: https://www.design-reuse-embedded.com/.../brainchip.../...

COMBINED WITH

Google deploys SiFive's Intelligence X280 processor for AI workloads. Hybridizes the RISC-V cores with TPU architecture.
Google is using the RISC-V-based SiFive Intelligence X280 processor in combination with the Google TPU, as part of its portfolio of AI chips.
Fabless chip designer SiFive said that it was also being used by NASA, Tenstorrent, Renesas, Microchip, Kinara, and others.

Source: https://www.datacenterdynamics.com/.../google-deploys.../
Both @NickBRN and @TechGirl jumped fast onto the Google connection thru SiFive and it looked like a solid lead. I would have thought that alone would get our hearts pumping. Are there others that need further convincing?
 
Reactions: 24 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Both @NickBRN and @TechGirl jumped fast onto the Google connection thru SiFive and it looked like a solid connection. I would have thought that alone would get our hearts pumping. Are there others that need further convincing?

No, no need for further convincing #48,93 IMO. 😘
 
Reactions: 15 users

Deleted member 2799

Guest
It would be nice if this week turned out to be a week of announcements! After such a long silence 🧐 the shorts still have problems with their dumping.
 
Reactions: 3 users

Deleted member 118

Guest
DC382E7F-71FE-46BB-A042-B56F47D76B4C.png


 
Reactions: 15 users



Diogenese

Top 20
Hi Doz,

This is an interesting PCT application which includes the N-of-M coding. It resulted in the grant of this US patent:

US11227210B2 Event-based classification of features in a reconfigurable and temporally coded convolutional spiking neural network

It has a priority date of 20190725, which is too early for transformers.

But there was a subsequent "continuation-in-part" [patent of addition] application filed in the US:
US2022147797A1 EVENT-BASED EXTRACTION OF FEATURES IN A CONVOLUTIONAL SPIKING NEURAL NETWORK
which has a priority of 20220114, and is awaiting examination by USPTO.

This application does refer to transformation modules, and the description of the transformer module was added to the parent description on 20220114.

[006] ... However, up to now temporal spiking neural networks have not been able to meet the accuracy demands of image classification. Spiking neural networks comprise a network of threshold units, and spike inputs connected to weights that are additively integrated to create a value that is compared to one or more thresholds. No multiplication functions are used. Previous attempts to use spiking neural networks in classification tasks have failed because of erroneous assumptions and subsequent inefficient spike rate approximation of conventional convolutional neural networks and architectures. In spike rate coding methods, the values that are transmitted between neurons in a conventional convolutional neural network are instead approximated as spike trains, whereby the number of spikes represent a floating-point or integer value which means that no accuracy gains or sparsity benefits may be expected. Such rate-coded systems are also significantly slower than temporal-coded systems, since it takes time to process sufficient spikes to transmit a number in a rate-coded system. The present invention avoids those mistakes and returns excellent results on complex data sets and frame-based images.
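Before the claim itself, a toy illustration of that "no multiplication" point (my own sketch of a generic temporal-coded threshold unit, not BrainChip's actual design):

```python
# Toy integrate-and-fire unit: each incoming spike selects a weight, which is
# additively integrated; an output spike fires when the running sum crosses
# the threshold. Adds and compares only -- no multiplications anywhere.
def integrate_and_fire(spike_indices, weights, threshold=1.0):
    membrane = 0.0
    for i in spike_indices:        # events arrive one at a time
        membrane += weights[i]     # additive integration of the selected weight
        if membrane >= threshold:
            return True            # output spike emitted
    return False

weights = [0.3, -0.1, 0.5, 0.4, 0.2]
print(integrate_and_fire([0, 2, 3], weights))  # True: 0.3 + 0.5 + 0.4 >= 1.0
```

The main claim of the application reads: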

A system, comprising:
a memory for storing data representative of at least one kernel;
a plurality of spiking neuron circuits;
an input module for receiving spikes related to digital data, wherein each spike is relevant to a spiking neuron circuit and each spike has an associated spatial coordinate corresponding to a location in an input spike array;
a transformation module configured to:
transform a kernel to produce a transformed kernel having an increased resolution relative to the kernel; and/or
transform the input spike array to produce a transformed input spike array having an increased resolution relative to the input spike array;
a packet collection module configured to collect spikes until a predetermined number of spikes relevant to the input spike array have been collected in a packet in memory, and to organize the collected relevant spikes in the packet based on the spatial coordinates of the spikes; and
a convolutional neural processor configured to perform event-based convolution using memory and at least one of the transformed input spike array and the transformed kernel.
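Claim language is heavy going, so here is how I read the packet-collection step in plain code (my interpretation only, not the actual hardware):

```python
# Buffer incoming spikes until a predetermined count has been collected, then
# organize the packet by spatial coordinate so that event-based convolution
# can consume the events in order. A spike is an (x, y, channel) tuple here.
def collect_packets(spike_stream, packet_size):
    packet = []
    for spike in spike_stream:
        packet.append(spike)
        if len(packet) == packet_size:
            yield sorted(packet)   # order by spatial coordinates
            packet = []

spikes = [(5, 2, 0), (1, 7, 1), (3, 3, 0), (0, 1, 2)]
for packet in collect_packets(spikes, packet_size=2):
    print(packet)  # [(1, 7, 1), (5, 2, 0)] then [(0, 1, 2), (3, 3, 0)]
```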

However, this use of the term "transformation" is not the same as the "transformer", which is supplanting LSTMs with its "attention" capability.

https://blogs.nvidia.com/blog/2022/...ideo,can be used to create even better models.

1678159057668.png


Transformers use positional encoders to tag data elements coming in and out of the network. Attention units follow these tags, calculating a kind of algebraic map of how each element relates to the others.
Attention queries are typically executed in parallel by calculating a matrix of equations in what’s called multi-headed attention.
With these tools, computers can see the same patterns humans see.
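For anyone wondering what that "attention" actually computes, a bare-bones scaled dot-product sketch (standard textbook form, nothing Akida- or Nvidia-specific):

```python
import numpy as np

# Each row of q asks "which other elements matter to me?"; a softmax over the
# q.k scores turns that into weights used to mix the value vectors v.
def attention(q, k, v):
    scores = q @ k.T / np.sqrt(k.shape[-1])            # pairwise relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # softmax over each row
    return w @ v                                       # weighted mix of values

x = np.random.default_rng(0).normal(size=(5, 8))       # 5 tokens, 8 dims each
print(attention(x, x, x).shape)                        # (5, 8) self-attention
```

Multi-headed attention is just several of these running in parallel on different learned projections of the same input.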
 
Reactions: 20 users