Hi DB,
Some of the "mistakes" the company is said to have made can be attributed to the rapid changes in NN technology.
We put the Akida 1 engineering samples out for testing by the EAPs, and made several changes based on their feedback, most notably optional 4-bit weights and activations and the CNN2SNN converter.
So we redesigned Akida 1 and produced it, only to find the market now wanted LSTM for speech and video. Then, even while our engineers were burning the candle at both ends, along came the newly created transformer architecture, which was more efficient at handling time-varying inputs like speech and video.
https://deepai.org/machine-learning...vector. This is very important in translation. (This website comes with a recommendation to enter with a bucket in case your brain explodes.)
It should be noted that there was no existing hardware implementation for LSTMs or transformers at the time. They are both software concepts: transformers have been adapted to run on GPUs, while LSTMs are more suited to serial processing on CPUs.
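To see why the two map to different hardware, here is a minimal NumPy sketch (purely illustrative, with made-up shapes, no gates, and no relation to any real chip or library implementation). The LSTM-style loop must run step by step because each hidden state depends on the previous one, whereas the attention step computes the whole sequence in a couple of matrix products:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                      # hypothetical sequence length and feature width
x = rng.standard_normal((T, d))

# LSTM-style recurrence: step t needs the hidden state from step t-1,
# so the T steps are inherently serial (CPU-friendly).
W = rng.standard_normal((d, d)) * 0.1
h = np.zeros(d)
hs = []
for t in range(T):               # cannot be parallelised across t
    h = np.tanh(x[t] @ W + h)    # simplified cell: real LSTMs add gating
    hs.append(h)

# Transformer-style self-attention: all token-to-token interactions are
# computed at once as batched matrix products (GPU-friendly).
scores = x @ x.T / np.sqrt(d)                                 # T x T at once
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = attn @ x                                                # whole sequence

print(len(hs), out.shape)
```

The point of the sketch is only the shape of the computation: a dependent loop versus one big matrix multiply, which is why GPUs suit transformers and CPUs cope fine with LSTMs.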
So really Akida Gen 2 is another major technological breakthrough by BrainChip, pretty much on a par with the digital SNN, and, while a few privileged EAPs are aware of it (we've seen the comments) or its software simulation, this is something which should be trumpeted from the rooftops ... but alas, the technology is so complex that explaining it would be harder than explaining the Higgs boson.