Food4 thought
Now just at home looking out over the blue, blue skies and water at Cable Bay with a vino in hand?
Thanks for posting that. Sounds like they may have been into the AKIDA development hub? Via FPGA it speeds up time to prototype.

Merry Chipmas!
Traditional AI computing relies on machine learning and deep learning methods that demand significant power and memory for both training and inference.
Our researchers have developed a patented neuromorphic computing architecture based on field-programmable gate arrays (FPGAs). This architecture is designed to be parallel and modular, enabling highly efficient, brain-inspired computing directly on the device. Compared to existing techniques, this approach improves energy-per-inference by ~1500 times and latency by ~2400 times.
This paves the way for a new generation of powerful, real-time AI applications in energy-constrained environments.
Know more in the #patent- bit.ly/498XlwC
Inventors: Dhaval Shah, Sounak Dey, Meripe Ajay Kumar, Manoj Nambiar, Arpan Pal
Tata Consultancy Services
#TCSResearch #AI #NeuromorphicComputing
Hi manny, thanks for posting that. Sounds like they may have been into the AKIDA development hub? Via FPGA it speeds up time to prototype.

Thanks Dio, cheers.
Take this with a grain of salt. It's just my (low salt) postprandial idle speculation.
TCS claim reduced latency for their FPGA NN. They have designed a purpose-built FPGA rather than using a COTS FPGA, i.e., they have actual NPUs built into the FPGA, with a switchable interconnect fabric providing hardware connexions for the configuration of the NPUs into layers. I think this is different from Akida, in that Akida's interconnexion fabric is basically fixed and acts like a packet-switch highway, with the NPUs being electronically configured by having destination addresses for their output data. (Renesas also have a reconfigurable arrangement.)
My guess is that they believe this hardware configuration provides faster transmission than the packet switched version.
TCS have designed a dedicated NN FPGA with switchable hardware interconnexions between the NPUs.
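To make the contrast concrete, here is a toy sketch of the two interconnect styles described above. This is purely my own illustration under the assumptions in this post, not code from either company: an Akida-style shared fabric routes each output packet by a destination address set at configuration time, while a TCS-style fabric fixes the source-to-destination wiring when the FPGA is configured, so no per-packet lookup happens at run time.

```python
# Toy illustration only (my assumption of the two schemes, not vendor code).

# Packet-switched style: NPU outputs carry a destination address; a shared
# fabric delivers each packet by looking that address up at run time.
def route_packet(fabric, packet):
    """Deliver one packet over the shared fabric to its addressed mailbox."""
    fabric[packet["dest"]].append(packet["data"])

# Hardwired style: (src, dest) links are fixed at configuration time, so
# delivery follows pre-built wiring with no per-packet address handling.
def build_hardwired_links(connections):
    """connections: list of (src, dest) pairs chosen when the fabric is built."""
    links = {}
    for src, dest in connections:
        links.setdefault(src, []).append(dest)
    return links

# Shared-fabric delivery: npu_a sends a value addressed to npu_b.
mailboxes = {"npu_b": [], "npu_c": []}
route_packet(mailboxes, {"dest": "npu_b", "data": 1.0})
print(mailboxes["npu_b"])  # [1.0]

# Configuration-time wiring: npu_a is permanently wired to npu_b and npu_c.
links = build_hardwired_links([("npu_a", "npu_b"), ("npu_a", "npu_c")])
print(links["npu_a"])  # ['npu_b', 'npu_c']
```

The speculated latency advantage would come from skipping the run-time address lookup entirely, at the cost of the flexibility a switched fabric gives.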
US12314845B2, "Field programmable gate array (FPGA) based neuromorphic computing architecture", 2021-10-14
[0027] The FIG. 1 illustrates a generic functional system of a neuromorphic FPGA architecture, wherein the plurality of neurons are arranged in plurality of layers in a modular and parallel fashion. The basic component of the neuromorphic FPGA architecture is a bio-plausible high-performance neuron. Each neuron among the plurality of neurons is interconnected with other neurons of a backward or a forward layer only through a plurality of synapses in multiple layers, and each of the neuron is mutually independent. With reference to the FIG. 1 , the plurality of neurons is arranged in the plurality of layers, wherein the neurons in the first layer are represented as a neuron-11 ( 106 ), a neuron-12 ( 108 ) and a neuron-1N ( 110 ) (till a number N). Further neurons in the second layer are represented as a neuron-21 ( 112 ), a neuron-22 ( 114 ) and a neuron-2N ( 116 ) (till a number N). The neuromorphic FPGA architecture can comprise several such layers that can go up to a number (N), wherein the neurons in the Nth layer are represented as a neuron-N1 ( 118 ), a neuron-N2 ( 120 ) and a neuron-NN ( 122 ).
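The topology in paragraph [0027] can be sketched in a few lines. This is a minimal model of my own reading of FIG. 1 (names like `build_layered_topology` and the `(layer, index)` neuron ids are my inventions, not from the patent): N layers of N neurons, each neuron connecting only to neurons in the adjacent forward layer through synapses, with no lateral connections within a layer.

```python
# Toy model of the FIG. 1 arrangement as I read it: neurons grouped into
# layers, feedforward synapses to the next layer only, neurons within a
# layer mutually independent. Illustration only, not from the patent text.

def build_layered_topology(n_layers, neurons_per_layer):
    """Map each neuron id (layer, index) to the next-layer neurons it feeds."""
    topology = {}
    for layer in range(n_layers):
        for idx in range(neurons_per_layer):
            if layer < n_layers - 1:
                # Synapses run only to the following layer (no lateral links).
                targets = [(layer + 1, j) for j in range(neurons_per_layer)]
            else:
                targets = []  # final (Nth) layer has no forward synapses
            topology[(layer, idx)] = targets
    return topology

topo = build_layered_topology(n_layers=3, neurons_per_layer=2)
# Neuron-11 (first layer, first neuron) feeds every neuron of layer 2:
print(topo[(0, 0)])  # [(1, 0), (1, 1)]
```

Because each neuron belongs to exactly one layer, the NPU count scales with layers × neurons-per-layer, which is the silicon-footprint concern raised below.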
It looks like the arrangement of the NPUs is somewhat constrained by the physical layout and may be less flexible than the Akida arrangement. This would necessitate a significantly larger number of NPUs due to the allocation of NPUs to specific layers. This increases silicon footprint and reduces the number of chips per wafer, increasing the cost per chip. (Again, this is only my assessment.)
If there is a significant reduction in latency, the additional cost may be justified in cases requiring the lower latency.
The design of the NPUs could still include elements of the Akida layout (TENNs) minus the packet address header which is set during configuration.
Well I hope Sean gets told so at the AGM if the runs aren't on the board.

That was a great podcast. I really like Steve; he communicates very clearly, he knows his stuff, and he presents to me as a great ambassador for Brainchip.
Clearly, Sean's IP-only approach was wrong; a combined IP/chip approach has now proven to be the best path forward. Who told us that? That's correct: our partners and early customers. They recognised that the financial risk of outlaying tens of millions of dollars on IP blocks within their own products, at this early stage of neuromorphic chip technology, was and still is too great a risk... hence AKD 1000 and AKD 1500 are very, very relevant.
I see this acknowledgement by the company as a positive step, no arrogance here, fantastic!!
If we wish to succeed, we must always be open in our thinking and willing to adapt at short notice. As I and a number of other long-termers mentioned at the time, the claim that AKD 1000 was too narrow was absolute bullshit. Yes, we were short on funding, BUT the movement to an ARM business model was premature and has potentially cost us a few years in progress. Purely my self-centred opinion and my biased support for Peter and Anil: AKD 1000 was and will always be the masterstroke that set Brainchip on the road to success, despite taking 4 years longer than I had quietly hoped for.
Thanks for your input Manny over the last year, have a nice Christmas mate, God bless.
Tech/Chris
Hi Tech, agree with your sentiments. The BOD granted Sean >7.5 million RSUs in May '25, which are effectively 'golden handcuffs' ensuring he stays until at least the business-building phase moves to sustained growth and recurring revenue.

I can assure you that Sean isn't going anywhere unless he decides to throw in the towel himself. The BOD is clearly happy with how he, along with the key staff he has employed over the last 4 years, has progressed. Many holders either can't see or don't understand how our company has now reached the point where we can look clients and potential clients in the eye, knowing that we have all the structures in place to engage confidently as an IP/chip company.

We have the software support, engineering support, hardware availability, products and documentation for developers to feel at ease. Neuromorphic technology, through continual education, is really starting to hit its stride. We have NEVER BEEN POSITIONED any better than we are now, and a lot of that goes to Sean and his business strategy. Yes, the sole IP route was possibly wrong, meaning the timing and our structures weren't ready for that leap, but come the second half of 2026 and throughout 2027 I personally expect to see a genuine "sales explosion" occur.
I believe that when a tape-out is confirmed, it means a client has committed to about a million chips. Roll on AKD 2.0.
I love Akida technology, but sadly many others are too afraid to say that.
AKD Tech.
You're very welcome.

Good Morning Chippers,
Just a quick thank you to all; the collective sharing of information once again has been vast and informative.
Wishing all an enjoyable break & prosperous new year.
Regards,
Esq.
You can't put any pressure on the company dude, keep on dreaming.

You're very welcome.
It's the least I can do to put pressure on the company.
No doubt. Done about 12 years of DD myself, but it wouldn't be near as good without the help and camaraderie of those here. Thanks again everyone and Merry Xmas.

You can't put any pressure on the company dude, keep on dreaming.
Space cadet!
People who have done their DD know that BRN is on the right track, thanks to the hard work of the CEO and his crew.
Maybe a new release with them