SynSense has both analog and digital SNNs, and they have used their digital DYNAP-CNN (Speck) SNN with Prophesee for vision. Now that it is written here in black and white for all to clearly see, Sony is looking at developing SNNs, offering positions to build neuromorphic products. This tells me they have been involved for a good long period of time evaluating this technology prior to the company’s conclusion that SNN is of value to them in the market.
BrainChip, as far as we all know, is the only company who offers a complete SNN series of products for testing and integration by potential customers. This tells me that they may very well be one of the companies most likely still under NDA with BRN.
Go BrainChip
US2023385617A1: SIGNAL PROCESSING METHOD FOR NEURON IN SPIKING NEURAL NETWORK AND METHOD FOR TRAINING SAID NETWORK (2021-07-16)
Neuromorphic Developers Partner to Integrate Sensor, Processor – EETimes, by Sally Ward-Foxton, 11.22.2021: SynSense and Prophesee are partnering to develop a single-chip, event-based image sensor integrating Prophesee’s Metavision image sensor with SynSense’s DYNAP-CNN neuromorphic processor. The...

SALLY WARD-FOXTON
And remind me, this is mixed-signal or digital?
DYLAN MUIR
This is fully digital. It’s an asynchronous digital architecture. I mean the pixel itself obviously has got some analog components for the photo detector and so on. But the processing side is fully digital.
SALLY WARD-FOXTON
Seems like you have two families of chip. The one that’s in this Speck module. And then there’s another family, which is newer, or older?
DYLAN MUIR
Xylo – it’s a little bit newer. We’re targeting that for natural signal processing, meaning audio, biosignals, vibration, IMU/accelerometers, things like this. We’ve had a few tapeouts already for Xylo, and we have a dev kit available with an audio version, so that includes a little digital processor core plus a very efficient audio front end. We have a recent publication and some presentations from the end of last year where we’re demonstrating ultra-low-power audio processing applications on that chip at a few hundred microwatts. So the idea then of course is also smart, low-power edge sensory processing. The family will have a common processor architecture plus a number of very efficient sensory interfaces. We have audio out already, and we have an IMU sensory interface which we’re bringing up and testing at the moment. We plan to have samples available at the end of this year, beginning of next year, and then there will be other sensory interfaces for other classes of application as well. Yeah, we’ll take these in turn for the use cases that look the most commercially accessible.
It’s a different architecture. Speck is really tailored for vision processing applications, and it does very efficient convolutional neural network inference. Xylo is a more general-purpose architecture for mostly non-convolutional network architectures. It also has a more advanced neuron model. It’s still spiking neurons of course, but the spiking neurons on Speck are very simple: they’re essentially just a digital counter with no temporal dynamics, whereas the neurons on Xylo are a simulation of a leaky integrate-and-fire spiking neuron, a very standard neuron model, including these temporal dynamics, which are very configurable on the chip. So that’s really suited for temporal signal processing. When we do things like keyword spotting, for example, for audio processing, the standard neural network approach is to buffer 100 milliseconds or 50 milliseconds of audio, produce a spectrogram for that 100 milliseconds, and then treat that as an image.
We don’t do that. We really operate in a continuous streaming mode, where we process the audio as it comes into the device, meaning we can have potentially lower latency. We don’t need to do buffering, we don’t need… We can get away with smaller resources for the temporal signal processing, because we’ve got this temporal integration of information in the neurons themselves, and so this lets us operate at lower power.
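For intuition, here’s a minimal Python sketch of the two neuron styles Muir contrasts: the Speck-style counter with no temporal dynamics, and the Xylo-style leaky integrate-and-fire (LIF) neuron whose state leaks away over time. All the parameter values (threshold, weight, time constant) are illustrative guesses on my part, not SynSense’s actual configuration.

import math

def counter_neuron(spikes, threshold=3):
    # Speck-style neuron as described above: essentially a digital
    # counter with no temporal dynamics. It fires once enough input
    # spikes have accumulated, no matter when they arrived.
    count, out = 0, []
    for s in spikes:
        count += s
        if count >= threshold:
            out.append(1)
            count -= threshold
        else:
            out.append(0)
    return out

def lif_neuron(spikes, dt=1e-3, tau=0.1, weight=0.4, threshold=1.0):
    # Xylo-style leaky integrate-and-fire neuron: the membrane state v
    # decays with time constant tau, so only spikes arriving close
    # together in time can push it over threshold.
    alpha = math.exp(-dt / tau)      # per-step leak factor
    v, out = 0.0, []
    for s in spikes:
        v = alpha * v + weight * s   # leak, then integrate this step's input
        if v >= threshold:
            out.append(1)            # emit a spike and reset
            v -= threshold
        else:
            out.append(0)
    return out

# Same three input spikes, different timing:
burst = [1, 1, 1] + [0] * 598                      # spikes in consecutive steps
spread = [1] + [0] * 299 + [1] + [0] * 299 + [1]   # spikes ~0.3 s apart

print(sum(counter_neuron(burst)), sum(counter_neuron(spread)))   # 1 1
print(sum(lif_neuron(burst)), sum(lif_neuron(spread)))           # 1 0

Fed the same three spikes, the counter fires either way, while the LIF neuron only fires when the spikes land within roughly one time constant of each other. That timing sensitivity is the temporal integration that stands in for an explicit audio buffer.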
The Xylo chip is synchronous digital. The reason for the difference is the DVS sensor is fundamentally asynchronous itself, and if you’ve got a static camera application, then there can be basically nothing changing, nothing going on in the visual scene, and then you get basically no input and you don’t need to do anything. Whereas for audio processing, you’ve always got ambient sound coming in and so you essentially need to be processing continuously and then the synchronous digital design is a simpler design.
So essentially we have a digital clocked simulation of the leaky integrate-and-fire dynamics on Xylo. Your input clock frequency might be, for example, 5 megahertz, but the network dynamics can be slower than this. You can choose to integrate over several seconds; the simulation of each individual neuron is computed inside Xylo, but you can choose a long time scale to continuously integrate information inside the neuron, if you so desire. It’s essentially a little synchronous ASIC core for inference in spiking neurons, including these temporal dynamics. So we’re just running a little digital simulation of the spiking neuron dynamics.
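To make that separation between the clock rate and the network dynamics concrete, here’s a quick back-of-envelope in Python. The 5 MHz clock is Muir’s own figure; the 2-second time constant is just an assumed example:

import math

f_clk = 5e6                 # 5 MHz input clock, the figure from the quote above
dt = 1.0 / f_clk            # one clocked update every 200 ns

tau = 2.0                   # assumed time constant: "integrate over several seconds"

# Per-step leak factor for the clocked LIF update v[t+1] = alpha * v[t] + input[t]:
alpha = math.exp(-dt / tau)
print(alpha)                # ~0.9999999, the state barely decays each tick

# Half-life of the neuron state, in clock ticks and in seconds:
half_life_ticks = math.log(2) * tau / dt
print(half_life_ticks, half_life_ticks * dt)    # ~6.9 million ticks, ~1.4 s

So the slow, configurable dynamics come entirely from the choice of alpha; the fast clock just sets how finely the decay is sampled.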
SALLY WARD-FOXTON
Last time we spoke, which admittedly was a while ago, you had partnered with Prophesee, also. How is that collaboration going?
DYLAN MUIR
That’s going very well. We’ve fabricated a device with Prophesee, and this is… we’re testing this, examining this at the moment, in conjunction with them.
SALLY WARD-FOXTON
OK, to be clear, it uses a different processor that you’ve made compared to the Speck module – or the same?
DYLAN MUIR
The processor IP is basically the same, the same design; that’s our processor design for Speck, these little DYNAP-CNN cores, and that’s what we apply for CNN processing. So we’ve also looked at, for example, an RGB standard CMOS camera interface, which we could then also process using the same spiking CNN architecture.