Might have been filed very recently. Filing the patent may have been the catalyst for releasing the information on Akida 2. I don't think this patent has been mentioned before.
Technology
Learn about BrainChip's innovative technology, featuring the Akida Neuromorphic Processor for efficient, low-power AI processing at the edge.
brainchip.com
If you download the Akida 2 platform brief and look under
Tech Foundations
Exceptional spatio-temporal capability:
Patent pending Temporal event-based neural nets
(TENNs) revolutionize time-series data application
TT
Thanks for pointing that out - I didn't notice it the first time round, glancing through the press release as the technology partner testimonials keep rotating. Read the press release. It gives all the information you need. Ispolon, a leader in low-power SDR technology.
I think they used @Fact Finder as the reading test benchmark ... It is hard to read quickly enough when it slides, so here are pictures of statements made by our customers, partners, market analysts and our CEO SH. View attachment 31374
After what SiFive said, and the fact that AKIDA is the only AI partner they speak of in connection with their Intelligence Series, and now being spoken of by SiFive across their product offering, well, what more can I say?
My opinion only DYOR
FF
AKIDA BALLISTA
I think our relationship with Prophesee is more like a joint venture than a licence. How good would it be, now that they have released this, if Mercedes grabbed a licence and then by Friday Prophesee jumped on board also?
$2.34 would seem like a bargain for sure. Ahhh to dream hey?
“Through our collaboration with BrainChip, we are enabling the combination of SiFive’s RISC-V processor IP portfolio and BrainChip’s 2nd generation Akida neuromorphic IP to provide a power-efficient, high capability solution for AI processing on the Edge,” said Phil Dworsky, Global Head of Strategic Alliances at SiFive. “Deeply embedded applications can benefit from the combination of compact SiFive Essential™ processors with BrainChip’s Akida-E, efficient processors; more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip’s Akida-S or Akida-P neural processors.”
Phil Dworsky, Global Head of Strategic Alliances, SiFive
Source: https://www.design-reuse-embedded.com/.../brainchip.../...
COMBINED WITH
Google deploys SiFive's Intelligence X280 processor for AI workloads. Hybridizes the RISC-V cores with TPU architecture.
Google is using the RISC-V-based SiFive Intelligence X280 processor in combination with the Google TPU, as part of its portfolio of AI chips.
Fabless chip designer SiFive said that it was also being used by NASA, Tenstorrent, Renesas, Microchip, Kinara, and others.
Source: https://www.datacenterdynamics.com/.../google-deploys.../
Anyone who has not read the Forbes article, make time. It confirms that AKIDA ESP offers 1, 2, 4 & 8-bit activations and much, much more.
The last two paragraphs inspired me to break my golden rule and buy more BRN:
“Brainchip’s bio-inspired Akida platform is certainly an unusual way to tackle AI/ML applications. While most other NPU vendors are figuring out how many MACs they can fit – and power – on the head of a pin, Brainchip is taking an alternative approach that’s been proven by Mother Nature to work over many tens of millions of years.
In Tirias Research’s opinion, it’s not the path taken to the result that’s important, it’s the result that counts. If Brainchip’s Akida event-based platform succeeds, it won’t be the first time that a radical new silicon technology has swept the field. Consider DRAMs (dynamic random access memories), microprocessors, microcontrollers, and FPGAs (field programmable gate arrays), for example. When those devices first appeared, there were many who expressed doubts. No longer. It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations. Time will tell.”
My mistake and opinion only so DYOR
FF
AKIDA BALLISTA
PS: Just love the “head of a pin” comment - wish I had said it.
Awesome thanks, you beat me to it.
A good friend of mine, Reuben, asked me to post the following screenshots from a Facebook group, plus the article.
Here are the Facebook screenshots:
View attachment 31404
View attachment 31403
Here is the Google SiFive X280 article:
Google deploys SiFive's Intelligence X280 processor for AI workloads
Hybridizes the RISC-V cores with TPU architecture
September 22, 2022, by Sebastian Moss
Google is using the RISC-V-based SiFive Intelligence X280 processor in combination with the Google TPU, as part of its portfolio of AI chips.
Fabless chip designer SiFive said that it was also being used by NASA, Tenstorrent, Renesas, Microchip, Kinara, and others.
RISC-V is an open standard instruction set architecture based on established RISC principles, which is provided under open source licenses that do not require fees.
– SiFive/Google
At the AI Hardware Summit in Santa Clara, Krste Asanovic, SiFive's co-founder and chief architect, took to the stage with Cliff Young, Google TPU architect and MLPerf co-founder.
The SiFive Intelligence X280 is a multi-core capable RISC-V processor with vector extension, optimized for AI/ML applications in the data center.
At the summit, the two companies explained that the X280 processor is being used as the AI Compute Host to provide flexible programming combined with the Google MXU (systolic matrix multiplier) accelerator in the data center. However, they did not disclose the scale of the deployment.
SiFive has introduced the Vector Coprocessor Interface eXtension (VCIX), allowing customers to plug an accelerator directly into the vector register file of the X280.
Google already uses third-party ASIC design services from Broadcom for its in-house TPU AI chip, instead focusing on developing its strengths: the Matrix Multiply Unit and the Inter-Chip Interconnect.
Now it is adding the X280 in what Google calls "an elegant division of labor with the TPU," taking the MXUs and combining them with the X280.
Google's Cliff Young said that with SiFive VCIX-based general purpose cores “hybridized” with Google MXUs, you can build a machine "that lets you have your cake and eat it too."
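For anyone wondering what the MXU (systolic matrix multiplier) mentioned in the article actually accelerates: at its core it is hardware for dense matrix multiplication, with each cell of the systolic array performing one multiply-accumulate per cycle. A minimal reference sketch of that underlying operation (illustrative only; the function name and shapes are mine, not Google's):

```python
import numpy as np

def matmul_reference(a, b):
    """Naive triple-loop matrix multiply: the operation a systolic
    array like Google's MXU pipelines in hardware, one
    multiply-accumulate per array cell per cycle."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(m):          # output row
        for j in range(n):      # output column
            for p in range(k):  # accumulate along the shared dimension
                out[i, j] += a[i, p] * b[p, j]
    return out

# Small demo: a 2x3 times a 3x4 gives a 2x4 result.
a = np.arange(6, dtype=np.float64).reshape(2, 3)
b = np.arange(12, dtype=np.float64).reshape(3, 4)
print(matmul_reference(a, b))
```

The point of a systolic array is that all of those inner-loop multiply-accumulates happen in parallel across the hardware grid, rather than one at a time as in this software sketch.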
Rat folk backed by big money need to close out their positions as they quickly scurry away under the cover of an indeterminate lead from the USA overnight, a likely pending domestic rate rise, and some of our thunder being appropriated by the WBT news. While we are all patting ourselves on the back about this great news, the SP has dropped 3.5c. WTF.
My wish is that ChatGPT could be a reader option on this thread in the future. Having noticed how it can detect my emotion, it could similarly detect and prune out (say) quip posts, flaming posts etc. and save reading time. We are outside of market hours, so here is something that could be up-ramping if taken too seriously.
ChatGPT needs to lower its electricity consumption by about 60 billion percent. A slight exaggeration, but what it uses boggles the imagination and is unsustainable. There have been some articles citing the need for a neuromorphic solution to some of ChatGPT's processing needs.
Moving to 8-bit adds an important strategic opportunity to the AKIDA -v- GPU competition.
My opinion only DYOR
FF
AKIDA BALLISTA