Almost reads like NASA had trouble comprehending the autonomous nature of the Akida NN in performing inference and ML. Yeah, they appear to have lumped all the chips listed into the same basket. Akida only needs an external pre-processor to load the initial conditions, i.e., the trained weights and other configuration. After that, all processing is done within the Akida Neural Fabric.
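To make that host/fabric split concrete, here's a minimal sketch of the idea: the host (the on-chip ARM Cortex today, or the SoC's own CPU in a ground-up design) configures the fabric once, and every later inference runs without host involvement. All class and function names here are hypothetical, not BrainChip's actual API, and the "inference" is a toy stand-in for event-based processing.

```python
class NeuralFabric:
    """Stand-in for the on-chip neural fabric that runs inference autonomously."""

    def __init__(self):
        self.weights = None
        self.config = None

    def load(self, weights, config):
        # One-time setup performed by the external host/pre-processor.
        self.weights = weights
        self.config = config

    def infer(self, spikes):
        # After configuration, no host involvement is needed per inference.
        if self.weights is None:
            raise RuntimeError("fabric not configured")
        # Toy weighted sum standing in for event-based processing.
        return sum(w * s for w, s in zip(self.weights, spikes))


def host_boot(fabric, trained_weights, config):
    """The host CPU's whole job: configure once, then step aside."""
    fabric.load(trained_weights, config)


fabric = NeuralFabric()
host_boot(fabric, trained_weights=[0.5, -1.0, 2.0], config={"threshold": 1})
print(fabric.infer([1, 0, 1]))  # prints 2.5 -- the host plays no part in this call
```

The point of the sketch is that `host_boot` is the only thing the ARM Cortex does, which is why a ground-up SoC design could hand that job to its own CPU/GPU and drop the Cortex.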
However, when Akida is used in a ground-up* redesign as a NN accelerator for a full-on AI-enhanced CPU/GPU SoC, the ARM Cortex is superfluous and would take up precious silicon real estate, because the SoC's own CPU/GPU could handle Akida's configuration.
They would also strip out some of the comms interfaces to save real estate.
In a newly designed system specifically adapted for SNNs, they could also drop the CNN2SNN circuitry.
* "ground-up" as in "starting from the bottom" - not "pulverised".