Fact Finder
Top 20
Hi @Diogenese

The discussion about neuromorphic computing being the province of analog, and the lumping of Akida into analog, is a bit unfortunate, and probably derives from the pre-Akida obsession with analog SNNs.
Page 27:
View attachment 16789
Page 28:
Analog hardware: The biggest time and energy costs in most computers occur when large amounts of data have to move between external memory and computational resources such as CPUs, GPUs, or NPUs. This is the "von Neumann bottleneck," named after the classic computer architecture that separates memory and logic. One way to greatly reduce the power needed for machine learning is to avoid moving the data: do the computation where the data is located. Because there is no movement of data, tasks can be performed in a fraction of the time and require much less energy.

In-memory computing and neuromorphic analog computing are two such approaches. In-memory computing is the design of memories next to or within the processing elements of hardware, such that bitwise and arithmetic operations can occur in memory. Neuromorphic analog computing allows in-place compute but also mimics the brain's function and efficiency by building artificial neural systems that implement "neurons" and "synapses" to transfer electrical signals via an analog circuit design. This circuit is the breakthrough technology solution to the von Neumann bottleneck problem. Analog neuromorphic ICs are intrinsically parallel, and better adapted for neural network operations than current digital hardware solutions, offering orders-of-magnitude improvements for edge applications.

Market-ready, end-user-programmable chips are an essential need for neuromorphic computing to expand its visibility and to achieve a variety of "real-world applications" with an increasing number of users. As large technology companies wait for the technology to become more mature, some start-ups have released their chips to fill the gap and to gain a competitive advantage over those tech giants. Examples include the BrainChip AKIDA, GrAI Matter Labs VIP, and SynSense DYNAP processors.
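To see why the quoted passage makes such a fuss about data movement, here is a rough back-of-envelope cost model. The per-operation energy figures (`E_DRAM_ACCESS_PJ`, `E_LOCAL_ACCESS_PJ`, `E_MAC_PJ`) are illustrative assumptions I've picked for the sketch, not measured values for any real chip:

```python
# Illustrative cost model: energy for one matrix-vector multiply when
# each weight must be fetched from external DRAM (von Neumann style)
# versus read from memory co-located with the compute (in-memory style).
# All picojoule figures below are assumed orders of magnitude only.

E_DRAM_ACCESS_PJ = 640.0   # assumed energy per 32-bit external DRAM read
E_LOCAL_ACCESS_PJ = 5.0    # assumed energy per 32-bit local/in-memory read
E_MAC_PJ = 3.0             # assumed energy per multiply-accumulate

def mvm_energy_pj(rows: int, cols: int, weight_access_pj: float) -> float:
    """Energy for one matrix-vector multiply: one weight fetch plus one
    MAC per element (activations and partial sums assumed to stay local)."""
    macs = rows * cols
    return macs * (weight_access_pj + E_MAC_PJ)

von_neumann = mvm_energy_pj(1024, 1024, E_DRAM_ACCESS_PJ)
in_memory = mvm_energy_pj(1024, 1024, E_LOCAL_ACCESS_PJ)
print(f"ratio: {von_neumann / in_memory:.1f}x")  # ~80x under these assumptions
```

Under these assumed numbers the weight fetch, not the arithmetic, dominates the energy bill, which is the whole argument for computing where the data lives.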
Neuromorphic digital SNN computing also enables the blending of memory and compute, obviating the von Neumann bottleneck.
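The point about digital SNNs blending memory and compute can be sketched with a generic textbook integrate-and-fire neuron: the synaptic weights live with the neuron's state, and work happens only when an input spike event arrives. This is a minimal illustration of the digital event-driven idea, not BrainChip's actual AKIDA design:

```python
# Minimal sketch of a digital, event-driven integrate-and-fire neuron.
# Weights are stored alongside the neuron's state (memory and compute
# blended), and computation occurs only on input spike events.
# Generic textbook model; not the actual AKIDA implementation.

class DigitalLIFNeuron:
    def __init__(self, weights, threshold=1.0, leak=0.1):
        self.weights = list(weights)  # synaptic weights, local to the neuron
        self.threshold = threshold
        self.leak = leak              # potential lost per time step
        self.potential = 0.0

    def on_spike(self, synapse_idx):
        """Handle one input spike event; return True if the neuron fires."""
        self.potential += self.weights[synapse_idx]
        if self.potential >= self.threshold:
            self.potential = 0.0      # reset membrane potential after firing
            return True
        return False

    def tick(self):
        """Apply leak once per time step, even when no events arrive."""
        self.potential = max(0.0, self.potential - self.leak)

neuron = DigitalLIFNeuron(weights=[0.6, 0.5])
fired = [neuron.on_spike(i) for i in (0, 1)]
print(fired)  # [False, True]: 0.6 < 1.0, then 0.6 + 0.5 >= 1.0
```

Because nothing happens between events, sparse inputs cost next to nothing, which is where the digital event-based approach gets its efficiency without needing analog circuits.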
I saw that, and NViso has done this before in their documentation.

The first time I picked it up, I sent it to the company, and it was going to be brought to Anil's attention.

Perhaps they decided not to correct NViso and to let the Russians and Chinese go insane trying to do in analogue what AKIDA does in digital.





My opinion only DYOR
FF
AKIDA BALLISTA