BRN Discussion Ongoing

TechGirl

Founding Member
“Through our collaboration with BrainChip, we are enabling the combination of SiFive’s RISC-V processor IP portfolio and BrainChip’s 2nd generation Akida neuromorphic IP to provide a power-efficient, high capability solution for AI processing on the Edge,” said Phil Dworsky, Global Head of Strategic Alliances at SiFive. “Deeply embedded applications can benefit from the combination of compact SiFive Essential™ processors with BrainChip’s Akida-E, efficient processors; more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip’s Akida-S or Akida-P neural processors.”
Phil Dworsky, Global Head of Strategic Alliances, SiFive

Source: https://www.design-reuse-embedded.com/.../brainchip.../...

COMBINED WITH

Google deploys SiFive's Intelligence X280 processor for AI workloads. Hybridizes the RISC-V cores with TPU architecture.
Google is using the RISC-V-based SiFive Intelligence X280 processor in combination with the Google TPU, as part of its portfolio of AI chips.
Fabless chip designer SiFive said that it was also being used by NASA, Tenstorrent, Renesas, Microchip, Kinara, and others.

Source: https://www.datacenterdynamics.com/.../google-deploys.../

Awesome thanks, you beat me to it.

A good Friend of mine Reuben asked me to post the following screenshots from a facebook group plus the article.

Here are the facebook screenshots

1678146768785.png



1678146737116.png




Here is the Google SiFive X280 article


Google deploys SiFive's Intelligence X280 processor for AI workloads

Hybridizes the RISC-V cores with TPU architecture
September 22, 2022 By Sebastian Moss

Google is using the RISC-V-based SiFive Intelligence X280 processor in combination with the Google TPU, as part of its portfolio of AI chips.
Fabless chip designer SiFive said that it was also being used by NASA, Tenstorrent, Renesas, Microchip, Kinara, and others.
RISC-V is an open standard instruction set architecture based on established RISC principles, which is provided under open source licenses that do not require fees.
SiFiveGoogleTPURISCV.png

– SiFive/Google
At the AI Hardware Summit in Santa Clara, Krste Asanovic, SiFive's co-founder and chief architect took to the stage with Cliff Young, Google TPU Architect and MLPerf co-founder.
The SiFive Intelligence X280 is a multi-core capable RISC-V processor with vector extension, optimized for AI/ML applications in the data center.
At the summit, the two companies explained that the X280 processor is being used as the AI Compute Host to provide flexible programming combined with the Google MXU (systolic matrix multiplier) accelerator in the data center. However, they did not disclose the scale of the deployment.
SiFive has introduced the Vector Coprocessor Interface eXtension (VCIX), allowing customers to plug an accelerator directly into the vector register file of the X280.
Google already uses third-party ASIC design services with Broadcom for its in-house TPU AI chip, instead focusing on developing its strengths - the Matrix Multiply Unit and the Inter-Chip Interconnect.
Now it is adding the X280 in what Google calls "an elegant division of labor with the TPU," taking the MXUs and combining them with the X280.
Google's Cliff Young said that with SiFive VCIX-based general purpose cores “hybridized” with Google MXUs, you can build a machine "that lets you have your cake and eat it too."
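For context on the "systolic matrix multiplier" term above: an MXU-style unit computes a matrix product by streaming operands through a grid of multiply-accumulate cells. A minimal sequential sketch of that arithmetic (an illustration only, not Google's implementation; real hardware runs all cells in parallel):

```python
# Hypothetical sketch, NOT Google's implementation: an output-stationary
# systolic array computes C = A @ B with a grid of multiply-accumulate
# (MAC) cells. Here the "cells" run sequentially; real hardware runs
# every cell in parallel, one partial product per clock step.
def systolic_matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for t in range(k):            # time step: operands stream through
        for i in range(n):        # row of the cell grid
            for j in range(m):    # column of the cell grid
                C[i][j] += A[i][t] * B[t][j]   # one MAC per cell per step
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The appeal of the systolic layout is that each operand is fetched once and reused as it flows past many cells, which is what makes dense matrix multiply so energy-efficient in hardware.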
 

Reactions: 85 users
And just a reminder, considering the latest news flow: don't forget you can change your vote at any time on the poll thread regarding being informed. Those who are easily triggered or who totally disregard other people's views, please refrain from making childish, vindictive comments.
 
Reactions: 4 users

TechGirl

Founding Member
Anyone who has not read the Forbes article, make time. It confirms that AKIDA ESP offers 1, 2, 4 & 8-bit activations and much, much more.

The last two paragraphs inspired me to break my golden rule and buy more BRN:

“Brainchip’s bio-inspired Akida platform is certainly an unusual way to tackle AI/ML applications. While most other NPU vendors are figuring out how many MACs they can fit – and power – on the head of a pin, Brainchip is taking an alternative approach that’s been proven by Mother Nature to work over many tens of millions of years.

In Tirias Research’s opinion, it’s not the path taken to the result that’s important, it’s the result that counts. If Brainchip’s Akida event-based platform succeeds, it won’t be the first time that a radical new silicon technology has swept the field. Consider DRAMs (dynamic random access memories), microprocessors, microcontrollers, and FPGAs (field programmable gate arrays), for example. When those devices first appeared, there were many who expressed doubts. No longer. It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations. Time will tell.”
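On the 1, 2, 4 & 8-bit activations mentioned above: low-bit quantization maps each activation onto a small set of levels, trading precision for memory and energy. A generic sketch, assuming simple uniform unsigned quantization (an illustration only, not BrainChip's documented scheme):

```python
# Generic uniform quantization sketch; the bit widths match those the
# article mentions (1/2/4/8), but the scheme itself is an assumption,
# not BrainChip's actual method.
def quantize_activations(x, bits):
    """Map non-negative activations onto 2**bits evenly spaced levels."""
    levels = (1 << bits) - 1               # e.g. 255 codes for 8 bits
    peak = max(x)
    if peak == 0:
        return list(x)
    scale = peak / levels
    codes = [round(v / scale) for v in x]  # integer codes, 0..levels
    return [c * scale for c in codes]      # dequantized approximation

acts = [0.0, 0.31, 0.62, 1.0]
print(quantize_activations(acts, 8))  # close to the input values
print(quantize_activations(acts, 1))  # collapses to just 0.0 and 1.0
```

At 8 bits the round trip is nearly lossless; at 1 bit everything collapses to two levels, which is why the accuracy-versus-efficiency trade-off across those bit widths matters.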

My mistake and opinion only so DYOR
FF

AKIDA BALLISTA

PS: Just love the “head of a pin” comment, wish I had said it.

 
Reactions: 15 users

Tuliptrader

Regular
This Akida 2nd-gen launch has the potential to be the precursor to a mountain of towels being thrown into the ring by technology companies competing in this space.

“Thus with my lips have I denounced you, while my heart, bleeding within me, called you tender names." Kahlil Gibran

TT
 
Reactions: 21 users

gilti

Regular
While we are all patting ourselves on the back about this great news, the SP has dropped 3.5c. WTF
 
Reactions: 20 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Remember what Jean-Luc Chatelain (CTO of Accenture) said in a podcast with Sean in January about transformers. He said they call transformers "supermodels". Here's a reminder, thanks to @BaconLover's transcript.

So, presumably this means AKIDA 2000 will be EXTREMELY well suited to all NLP and generative AI applications like ChatGPT, not to mention "Cerence's Immersive Companion", which is slated for release FY23/24 and about which everyone knows my opinion. Wink, wink, nudge, nudge...


Screen Shot 2023-03-07 at 10.57.0.png
 
Reactions: 48 users

BaconLover

Founding Member
[Quoting TechGirl's post above: the SiFive/BrainChip partnership quote from Phil Dworsky, combined with the news that Google deploys SiFive's Intelligence X280 processor for AI workloads.]

 
Reactions: 29 users

Baisyet

Regular
SP manipulation at its best, just can't believe it.
 
Reactions: 16 users

ndefries

Regular
4.8m shorts were taken out yesterday. There is definitely an attempt to slow this rocket. Without these attacks this company would be worth a lot more, and the revenue that is coming is going to burn these shorters soon.
 
Reactions: 25 users

Diogenese

Top 20
[Quoting the post above in full, including the DCD article "Google deploys SiFive's Intelligence X280 processor for AI workloads".]

Well, if Google wants to maximize the efficiency of their TPU, they need to get over the "not-invented-here" syndrome:

Google already uses third-party ASIC design services with Broadcom for its in-house TPU AI chip, instead focusing on developing its strengths - the Matrix Multiply Unit and the Inter-Chip Interconnect.

Their next batch of X280s may come pre-fitted with Akida 2P.
 
Reactions: 37 users

Diogenese

Top 20
Consistency is its own reward:

1678148452333.png
 
Reactions: 20 users

HopalongPetrovski

I'm Spartacus!
While we are all patting ourselves on the back about this great news, the SP has dropped 3.5c. WTF
Rat folk backed by big money need to close out their positions as they quickly scurry away, under the cover of an indeterminate lead from the USA overnight, a pending likely domestic rate rise, and some of our thunder being appropriated by the WBT news.
In the burgeoning light of our announcements they can only hold it down so long. 🤣
Takes the manipulators days to weeks to build their positions.
They run the risk of a squeeze if panic sets in and they need to escape in haste.
Oh, for another announcement with some dollars attached right about now.
Would be glorious. 🤣

AKIDA ESP.
(especial........ am I not?)
 
Reactions: 30 users

wasMADX

Regular
We are outside of market hours so here is something that could be upramping if taken too seriously.

ChatGPT needs to lower its electricity consumption by about 60 billion percent. A slight exaggeration, but what it uses boggles the imagination and is unsustainable. There have been some articles citing the need for a neuromorphic solution to some of ChatGPT's processing needs.

Moving to 8-bit adds an important strategic opportunity in the AKIDA-vs-GPU competition.

My opinion only DYOR
FF

AKIDA BALLISTA
My wish is that ChatGPT could be offered as a reader option on this thread in the future. Having noticed how it can detect my emotion, it could similarly detect and prune out (say) quip posts, flaming posts, etc., and save reading time.

Conversely, if you would miss the fun content, it could prune out the serious stuff.
 
Reactions: 2 users

TheDrooben

Pretty Pretty Pretty Pretty Good
SP manipulation at its best, just can't believe it.
Unfortunately there is a gap in the charts from 51c to 54c, which they will probably push it down to without further news.

Screenshot_20230307_112257_Samsung capture.jpg
 
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Quick question. The original roadmap shows Akida 2000 as being "an optimised version of AKD 1500 for LSTM and transformers". We now know that Akida 2000 includes the transformer part, but what happened to the LSTM part? Or have LSTMs been replaced with TNNs? I tried to google information on TNNs but kept getting articles about tennis, which was entertaining but not particularly helpful. But then I discovered something that @TechGirl posted from Carnegie Mellon University which describes TNNs as follows:

Processor Architecture: Temporal Neural Networks (TNN)
Temporal Neural Networks (TNNs) are a special class of spiking neural networks, for implementing a class of functions based on the space time algebra. By exploiting time as a computing resource, TNNs are capable of performing sensory processing with very low system complexity and very high energy efficiency as compared to conventional ANNs & DNNs. Furthermore, one key feature of TNNs involves using spike timing dependent plasticity (STDP) to achieve a form of machine learning that is unsupervised, continuous, and emergent.
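The spike timing dependent plasticity (STDP) mentioned in that description can be illustrated with the classic pair-based rule: a synapse strengthens when the presynaptic spike precedes the postsynaptic spike, and weakens otherwise. A toy sketch with made-up constants (illustrative only, not CMU's or BrainChip's actual parameters):

```python
import math

# Toy pair-based STDP rule; the constants (a_plus, a_minus, tau) are
# illustrative placeholders, not CMU's or BrainChip's actual parameters.
def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: causal pairing, potentiate
        return a_plus * math.exp(-dt / tau)
    if dt < 0:    # post fired before pre: anti-causal, depress
        return -a_minus * math.exp(dt / tau)
    return 0.0

print(stdp_dw(10.0, 15.0))  # positive: synapse strengthened
print(stdp_dw(15.0, 10.0))  # negative: synapse weakened
```

Because the update depends only on local spike timing, no labels or global error signal are needed, which is what the description means by learning that is "unsupervised, continuous, and emergent".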

1675766511089.png
 
Last edited:
Reactions: 28 users

Glen

Regular
while we are all patting ourselves on the back about this great news the sp has dropped 3.5c WTF
We need to get off the ASX ASAP. This is why we have so few US investors. Today we had 150,000 shares traded; the average is 40,000 or less. We will not get US investors until we are listed on the NASDAQ.
 
Reactions: 8 users

BaconLover

Founding Member
We need to get off the ASX ASAP. This is why we have so few US investors. Today we had 150,000 shares traded; the average is 40,000 or less. We will not get US investors until we are listed on the NASDAQ.
As much as I want this to happen, we are not there yet. Whether it is the NYSE or NASDAQ, I believe every exchange will have its own listing rules, similar to this.


Screenshot 2023-03-07 1145571.png
 
Reactions: 19 users

skutza

Regular
Reactions: 7 users

Damo4

Regular
[Quoting Bravo's question above about the roadmap, LSTMs, and TNNs, including the Carnegie Mellon TNN description.]

Yeah, I think we need a new roadmap, ideally something updated on the website rather than an archived screenshot.
It would be great if they specified the exact naming of current and future products in it too.
I think the E, S and P naming is a great idea, but it would be nice to either align it with previous naming or just start from scratch and keep it consistent.


BTW, I've noticed a little chatter questioning the usefulness of AKD1000, but I believe the new generation will increase its sales, not serve as a replacement.
It's clear some customers need the extra features, but as we become ubiquitous, we will attract eyes that may not understand the use cases and will see it more as a choice of which model.

I recall reading something about Coke and Pepsi vending machines next to each other increasing sales for both.
The idea is that a consumer no longer chooses whether to purchase a drink or not, instead they are left to choose which brand they would prefer.
I'd like to think the increased brand awareness, as well as a true choice of nodes, efficiency and performance, will increase sales across the board.
 
Reactions: 11 users