Hi TD, Presentation slides from the roadmap showcase. [slide attachments]
Thanks for posting the roadmap. I'm a bit surprised there hasn't been more discussion on this.
There are some pretty amazing targets there.
Q2:
GenAI compiler (programming tool)
Q3:
TENNs vision models/configuration.
NLP
Starting in 2026:
Akida GenAI with 16/32-bit FP: that will give some serious CPU-style math processing capability with greater precision. It also adds the capability to handle third-party open-source SSM models without conversion to SNN, while providing the capability to process very large TENNs and SSM models.
(out to 2027)
Akida 3: Up to 16-bit integer and 32-bit FP support will give even greater inference precision. It can process universal CNNs as well as SSMs, TENNs, and LLMs, and can be configured for "arbitrary topologies". This latter means it should be adaptable to functionality yet to be developed.
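A quick numerical sketch of why wider data types buy inference precision: quantizing weights to an n-bit integer grid introduces rounding error that shrinks as the bit width grows. This is generic uniform quantization for illustration, not BrainChip's actual scheme.

```python
import numpy as np

def quantize_error(weights, bits):
    """Worst-case rounding error from symmetric uniform quantization
    of `weights` to `bits`-bit signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax   # map max weight to qmax
    q = np.round(weights / scale)            # integer codes
    dequant = q * scale                      # back to real values
    return np.max(np.abs(weights - dequant))

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)

for bits in (4, 8, 16):
    print(f"{bits:>2}-bit int: max error {quantize_error(w, bits):.6f}")
```

Each doubling of bit width cuts the worst-case quantization error by roughly two orders of magnitude on this toy weight set, which is the gap the 16-bit integer and FP paths are closing.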
One of Akida 2's tricks which I hadn't noticed before is "branching" models. I guess this is a hierarchical arrangement which narrows the subsequent search:
Animal:
Cat, Dog, Elephant, ...
Persian, Siamese, Sylvester, ...
Sausage, Bulldog, Kelpie, ...
African, Indian, Asian, ...
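The hierarchy above can be sketched as a two-stage classifier: a coarse model picks the animal family, and only that family's fine-grained model (and label set) is then searched. The names and structure here are purely illustrative assumptions, not BrainChip's actual API.

```python
# Hypothetical "branching" model: each coarse label gates a small
# fine-grained label set, so the second stage never searches breeds
# outside the chosen family.
BRANCHES = {
    "Cat": ["Persian", "Siamese", "Sylvester"],
    "Dog": ["Sausage", "Bulldog", "Kelpie"],
    "Elephant": ["African", "Indian", "Asian"],
}

def classify(features, coarse_model, fine_models):
    family = coarse_model(features)        # e.g. "Dog"
    breed = fine_models[family](features)  # searches only Dog breeds
    return family, breed

# Toy stand-ins for the real models (always pick the first label):
coarse = lambda f: "Dog"
fine = {fam: (lambda labels: (lambda f: labels[0]))(breeds)
        for fam, breeds in BRANCHES.items()}

print(classify(None, coarse, fine))  # ('Dog', 'Sausage')
```

The appeal for an edge chip is that each stage only ever evaluates a small candidate set, rather than one flat softmax over every breed of every animal.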
It also utilizes look-up tables (LUTs), which can provide an analytical shortcut.
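The LUT shortcut, in a minimal sketch: precompute an expensive function (here sigmoid) once over a quantized input range, then replace the runtime math with a table index. This is my assumption of how such a shortcut might look in general, not actual Akida internals.

```python
import numpy as np

# Build the table once, offline: one entry per 8-bit input code
# covering the range [-8, 8].
x_grid = np.linspace(-8.0, 8.0, 256)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-x_grid))

def sigmoid_lut(x):
    """Approximate sigmoid via nearest-entry table lookup
    instead of computing exp() at inference time."""
    idx = np.round((np.asarray(x) + 8.0) / 16.0 * 255)
    idx = np.clip(idx, 0, 255).astype(int)
    return SIGMOID_LUT[idx]

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid_lut(x))  # close to sigmoid(x) = [0.119..., 0.5, 0.880...]
```

A 256-entry table like this keeps the approximation within about 0.01 of the true sigmoid while costing only an index and a memory read per activation.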
So, while maintaining the original 1-bit power parsimony, the trend is toward more powerful, flexible, and adaptable chips with up to 32-bit FP capability. Clearly the company has set its eyes on markets beyond "4 Bits is Enough". I had thought the Akida 2 move to 8 bits was a concession to the industry's de facto standard. Are we looking to the cloud?