Sometimes I come across a competitor to BrainChip that I haven't looked at closely. Today I took a look at Innatera, and the technology seems interesting. They use carefully designed segments of analog SNNs to avoid the inherent problems with analog SNNs:
Sumeet Kumar mentions that they target something in the hundreds of thousands of neurons, not millions, as that should be sufficient. Given the potential efficiency of analog SNNs, there could definitely be extremely low-power use cases. But when he says they won't target millions of neurons, it sounds like a scaling barrier to my ears.
Then I came across the fact that their first chip has only 256 neurons: "Innatera’s third-generation AI chip has 256 neurons and 65,000 synapses and runs inference at under 1 milliwatt" (hpcwire.com).
I would suspect that with so few neurons, a disproportionate amount of power would be used by the chip's other functionality relative to the neurons themselves, such as the signal-to-spike conversion, the DACs, etc. Further, it's hard to imagine performing anything significant with just 256 neurons. It also doesn't lend itself to the flexibility he describes in the video above, with different segments running different networks.
So, to me it looks very much like an early-stage research chip for evaluating the performance of the neurons, not something you would ship as a final product.
Perhaps someone much smarter than me can offer a perspective on this?