Hi Frederick,

I'm so bloody slow sometimes; only now has it hit me that neuromorphic computing in a way solves the problem of scaling down the process node, by enabling more performance in different ways:
1) Now that we have such low power consumption and so little dissipated heat, it must be possible to run them more aggressively.
2) As I see it, neuromorphic computing lends itself perfectly to expanding the amount of silicon used, like connecting multiple chiplets to support greater models and/or running multiple models that utilize each other. They can even be stacked so they don't take up any significant space.
3) It's a young technology that is already beating the old technology and has a long runway of innovation ahead of it, like the jump from Akida 1 to 2. I bet that there's a vast space of possibilities to be explored, like hardware support for n-dimensional models.
While Nvidia has hit the brick wall and others are struggling with Moore's law, we have just got started and are seemingly already way ahead of their Jetson.
So now I think neuromorphic computing is going to be indispensable for future performance gains, and it might branch out like the three suggestions above, and combinations/more branches may appear.
1. Akida is asynchronous, responding to input events, so I don't think it can be overclocked. However, producing it at, say, 7 nm will increase speed and reduce power.
2. Because Akida is so low power that it does not need cooling, it would be an ideal buried layer in a stacked chip arrangement. One potential application would be the Sony/Prophesee image sensor, which uses Sony's 3D stacking technology. I imagine the Prophesee DVS is also low power as it only fires on events. Also, some IR night-vision systems need to be cooled, so Akida would be a good fit there as it would reduce cooling requirements. Akida is designed for multi-chip applications.
One of my friends, whom I convinced to buy BRN, was asking about AI following the recent ChatGPT publicity, so I sent the following:
You asked whether BrainChip's Akida was involved in AI, and I subsequently realized that ChatGPT has been in the news lately, not always favourably.
It is not related to ChatGPT. It could be used on the input side but, at this stage, not on the output side generating the responses. That is all done in software, using a web browser and language-interpreting software.
Akida is a spiking neural network which imitates the brain's neural/synaptic processes. It does not use the same digital mathematical methods as conventional computers, which consume enormous amounts of processing power in performing object recognition. Think of objects in a field of view as having a line around the edge, i.e., where the pixel illumination changes. Akida compares that edge line with stored edge lines in a library, whereas an object recognition program in a conventional computer does a pixel-by-pixel comparison of full-frame images. Akida also does it in silicon, not in software. It is loaded with its compact object library data and does a hardware comparison, not a software comparison.
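To get a feel for why the edge-line approach is so much cheaper, here is a toy back-of-envelope sketch. This is purely illustrative with made-up numbers, not Akida's actual algorithm or data format:

```python
# Toy comparison of the work involved in full-frame pixel matching
# versus matching a compact edge signature against a library.
# Illustrative only -- NOT BrainChip's real pipeline or numbers.

def pixel_compare_cost(width, height, library_size):
    """Full-frame matching: every pixel of every stored image."""
    return width * height * library_size

def edge_compare_cost(edge_points, library_size):
    """Edge-signature matching: only the outline points."""
    return edge_points * library_size

# A 640x480 frame versus an outline of ~2,000 edge points,
# matched against a library of 100 stored objects.
full = pixel_compare_cost(640, 480, 100)
edges = edge_compare_cost(2_000, 100)
print(f"full-frame: {full:,} comparisons, edge-based: {edges:,} "
      f"({full // edges}x fewer)")
```

The point of the sketch is just the ratio: the outline carries far fewer data points than the full frame, so each library lookup does orders of magnitude less work.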
Akida is used to classify input signals, voice, video, etc. It does not generate replies.
It can be used for autonomous driving sensors such as LiDAR, radar, video, and event cameras (shutterless cameras which detect only changes in pixel illumination), aka DVS (Dynamic Vision Sensors), to recognize the surroundings. It can also be used for in-cabin driver monitoring, speech recognition, and any application which involves sensor signal interpretation, such as vibration sensing to anticipate the need for maintenance. I suppose it could detect when a Boeing spits a turbine blade out.
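A toy sketch of the event-camera idea mentioned above: instead of streaming full frames, only pixels whose brightness changed enough are reported. Real DVS hardware does this in analog circuitry per pixel (on log intensity), so this frame-diff version is only a conceptual illustration:

```python
# Toy sketch of an event camera (DVS): report only pixels whose
# brightness changed by at least a threshold, with a polarity.
# Conceptual only -- real sensors fire asynchronously in hardware.

def frame_to_events(prev, curr, threshold=10):
    """Emit (x, y, polarity) events where brightness changed enough."""
    events = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if c - p >= threshold:
                events.append((x, y, +1))   # got brighter
            elif p - c >= threshold:
                events.append((x, y, -1))   # got darker
    return events

prev = [[100, 100],
        [100, 100]]
curr = [[100, 130],
        [ 80, 100]]
print(frame_to_events(prev, curr))  # [(1, 0, 1), (0, 1, -1)]
```

A static scene produces no events at all, which is why these sensors (and an event-driven processor like Akida behind them) sit idle, consuming almost nothing, until something actually moves.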
It is used in the Mercedes EQXX concept car (which achieved 1,000 km on a charge), in the in-vehicle voice recognition system, where it performed 5 to 10 times more efficiently than the other systems they had tested, in a vehicle where every Watt counts.
It can be used for autonomous drone navigation and image detection.
NASA is trialling it for navigation in Mars rovers, as well as for wireless communication via a company called Intellisense.
Similarly, ISL (Information Systems Laboratories) is using it in USAF trials.
Akida is listed by ARM as being compatible with all their processors, and it is also compatible with ARM's upstart challenger SiFive, which uses the open RISC-V architecture, whereas ARM uses its own proprietary RISC instruction sets.
It is also available to Intel Foundry Services (IFS) customers.
It is also part of a few US university computer courses including Carnegie Mellon.
BrainChip's main business is licensing the design of Akida, a similar business model to ARM. This does limit the customer base to those who can afford the licence fee and the cost of incorporating the design in their product and manufacturing the chips. It also introduces a large time lag for designing, making and testing the chips.
A company called Renesas has licenced the Akida IP and will be bringing out microprocessors capable of handling 30 frames per second (fps) later this year. Akida itself is capable of much higher frame rates (>1,000 fps), but Renesas only licenced two Akida nodes out of a possible 80.
Similarly, MegaChips is also in the process of producing chips containing the Akida design.
A second generation of Akida will also be available later this year. This will have the capability to determine the speed and direction of objects in hardware, rather than relying on software to process the images identified by Akida. Software is much slower and more power-hungry.