3.7 DYNAP
DYNAP (Dynamic Neuromorphic Asynchronous Processors) is a family of solutions from SynSense, a spin-off of the University of Zurich. The company has patented an event-routing technology for communication between cores.
According to [dynap_routing], the scalability of neuromorphic systems is limited mainly by the technology used for communication between neurons, with other factors being secondary. Researchers at SynSense invented and patented a two-level communication model that balances point-to-point communication between neuron clusters against broadcast messages within clusters. The company has presented several neuromorphic processors (ASICs): DYNAP-SE2, DYNAP-SEL and DYNAP-CNN.
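The intuition behind this two-level scheme can be shown with a toy model: a spike travels between clusters as a single point-to-point packet and is broadcast only within the destination cluster, so routing tables scale with the number of clusters rather than the number of neurons. The following Python sketch is purely illustrative; the cluster structure, tag format and routing tables are our assumptions, not SynSense's patented implementation.

```python
from collections import defaultdict

class TwoLevelRouter:
    """Toy model of two-level spike routing: point-to-point packets
    between clusters, broadcast within a cluster."""

    def __init__(self):
        # cluster_id -> list of neuron ids belonging to that cluster
        self.clusters = defaultdict(list)
        # (source cluster, tag) -> set of destination cluster ids
        self.routes = defaultdict(set)

    def add_neuron(self, cluster_id, neuron_id):
        self.clusters[cluster_id].append(neuron_id)

    def connect(self, src_cluster, tag, dst_cluster):
        # One routing-table entry serves a whole cluster of targets,
        # so the table grows with clusters, not with neurons.
        self.routes[(src_cluster, tag)].add(dst_cluster)

    def spike(self, src_cluster, tag):
        delivered = []
        for dst in self.routes[(src_cluster, tag)]:
            # Inter-cluster hop: a single point-to-point packet ...
            # ... then intra-cluster broadcast to every local neuron.
            delivered.extend(self.clusters[dst])
        return delivered

router = TwoLevelRouter()
for n in range(4):
    router.add_neuron("A", f"a{n}")
    router.add_neuron("B", f"b{n}")
router.connect("A", tag=7, dst_cluster="B")
print(router.spike("A", tag=7))  # all four neurons of cluster B receive it
```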
The Dynap-SE2 and Dynap-SEL chips are not commercial projects; they are being developed as tools for neuroscientific research. Dynap-CNN (presented at tinyML 2021), in contrast, is marketed as a commercial chip for efficient execution of CNNs converted to SNNs. Whereas the Dynap-SE2 and Dynap-SEL research chips implement analog computing with digital communication, Dynap-CNN is fully digital.
Dynap-SE2 is designed for feed-forward, recurrent and reservoir networks. It includes four cores with 1k LIFAT (leaky integrate-and-fire with adaptive threshold) analog spiking neurons and 65k synapses with configurable delay, weight and short-term plasticity. Four synapse types are supported (NMDA, AMPA, GABAa, GABAb). The chip is used by researchers to explore SNN topologies and communication models.
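A LIFAT neuron combines leaky integration of synaptic current with a firing threshold that rises after each spike, so a recently active neuron becomes harder to excite. Below is a minimal discrete-time sketch of these dynamics; the Euler discretization and all parameter values are illustrative assumptions, since on the chip the model is realized by analog circuits.

```python
def lifat_step(v, theta, i_syn, dt=1e-3,
               tau_v=20e-3, tau_theta=100e-3,
               theta_rest=1.0, theta_jump=0.5, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron with an
    adaptive threshold (a simplified stand-in for Dynap-SE2's analog
    LIFAT circuits; all constants here are assumptions)."""
    v = v + dt * (-v / tau_v + i_syn)                      # leaky integration
    theta = theta + dt * (theta_rest - theta) / tau_theta  # threshold decays to rest
    spiked = v >= theta
    if spiked:
        v = v_reset          # reset membrane potential
        theta += theta_jump  # raise threshold: recent firing makes firing harder
    return v, theta, spiked

v, theta = 0.0, 1.0
for t in range(100):
    v, theta, spiked = lifat_step(v, theta, i_syn=60.0)
    if spiked:
        print(f"spike at step {t}, threshold now {theta:.2f}")
```

Running the loop shows the adaptation effect: inter-spike intervals grow as the threshold climbs after each spike.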
The main distinctive features of the Dynap-SEL chip are support for on-chip learning and large fan-in/fan-out network connectivity. It was created for emulating biologically realistic networks. The chip includes five cores, only one of which has plastic synapses, and implements 1,000 analog spiking neurons and up to 80,000 configurable synaptic connections, including 8,000 synapses with integrated spike-based learning rules (STDP). Researchers use the chip to model cortical networks.
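The spike-based learning rules mentioned above can be illustrated with the textbook pair-based STDP update, in which the sign and magnitude of the weight change depend on the relative timing of pre- and postsynaptic spikes. This is the generic rule, shown only as a sketch; Dynap-SEL's actual learning circuits may differ.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: causal spike pairs (pre before post) potentiate
    the synapse, anti-causal pairs depress it, with exponential decay
    of the effect as the timing difference grows. Parameters are
    illustrative assumptions."""
    dt = t_post - t_pre  # timing difference in ms
    if dt > 0:    # pre before post: potentiation
        w += a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:  # post before pre: depression
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, w_min, w_max))

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)  # causal pairing strengthens
w = stdp_update(w, t_pre=30.0, t_post=22.0)  # anti-causal pairing weakens
print(w)
```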
The Dynap-CNN chip has been available with a development kit since 2021. It is a 12 mm² chip, fabricated in 22 nm technology, hosting over one million spiking neurons and four million programmable parameters. Dynap-CNN is fully digital and implements a linear neuron model without leakage. The chip is best combined with event-based sensors (DVS) and is suitable for image classification tasks. In inference mode the chip can run an SNN converted from a CNN with at most nine convolutional or fully connected layers and at most 16 output classes. On-chip learning is not supported. The original CNN must be created in PyTorch and trained by classical methods (for example, on a GPU). Using the Sinabs.ai framework (an open-source PyTorch-based library), the convolutional network can then be converted to a spiking form for execution on Dynap-CNN in inference mode.
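For illustration, such a conversion might look as follows. This is a minimal sketch assuming the `sinabs.from_torch.from_model` entry point of the Sinabs library; the exact signature and the subsequent deployment step to Dynap-CNN should be checked against the current Sinabs documentation.

```python
import torch.nn as nn
from sinabs.from_torch import from_model  # open-source PyTorch-based library

# A small CNN respecting Dynap-CNN's constraints: convolutional /
# fully connected layers only, at most 9 layers, at most 16 classes.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, bias=False), nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, bias=False), nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 10, bias=False),  # 10 output classes
)

# ... train `cnn` with ordinary backpropagation (e.g. on a GPU) ...

# Convert the trained CNN to a spiking network: ReLU activations are
# replaced by spiking layers for inference.
snn = from_model(cnn, input_shape=(1, 28, 28), batch_size=1).spiking_model
print(snn)
```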
Dynap-CNN has demonstrated the following results:
- CIFAR-10: 1 mJ at 90% accuracy,
- attention detection: less than 50 ms and 10 mW,
- gesture recognition: less than 50 ms and 10 mW at 89% accuracy,
- wake phrase detection: less than 200 ms at 98% sensitivity and false-alarm rate less than 1 per 100 hours (office background).
3.8 AKIDA
Akida [akida] is the first commercial neuromorphic processor, available on the market since August 2021. It has been developed by the Australian company BrainChip since 2013. Fifteen companies, including NASA, joined the early access program. In addition to the Akida System on Chip (SoC), BrainChip offers licensing of its technology, allowing chip manufacturers to build custom solutions.
The chip is marketed as a power-efficient event-based processor for edge computing that does not require an external CPU. Power consumption ranges from 100 µW to 300 mW depending on the task. For example, Akida can process 1,000 frames/Watt (compare to TrueNorth with 6,000 frames/Watt). The first-generation chip supports operations with convolutional and fully connected networks, with support for LSTMs, transformers, capsule networks, recurrent and cortical neural networks planned. An ANN can be transformed into an SNN and executed on the chip.
One Akida chip incorporates 80 Neural Processing Units (NPUs) in a mesh network, enabling the modeling of 1,200,000 neurons and 10,000,000,000 synapses. The chip is fabricated in TSMC's 28 nm process. In 2022, BrainChip announced a second-generation chip at 16 nm.
Akida's ecosystem provides a free chip emulator, the TensorFlow-compatible MetaTF framework for transforming convolutional and fully connected neural networks into SNNs, and a set of pre-trained models. When designing a neural network architecture for execution on Akida, one should take into account a number of additional constraints on layer parameters (e.g., the maximum convolution size is 7, while stride 2 is supported only for convolutions of size 3) and on their sequence.
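As an illustration, the MetaTF flow for a Keras model might look like the sketch below. It assumes the `cnn2snn.convert` entry point of MetaTF and abbreviates the quantization step that the real toolchain requires before conversion; both should be verified against BrainChip's documentation.

```python
import tensorflow as tf
from cnn2snn import convert  # part of BrainChip's MetaTF toolchain

# A Keras CNN built within Akida's layer constraints: kernel sizes up
# to 7x7, stride 2 only together with 3x3 kernels (per the text above).
model = tf.keras.Sequential([
    tf.keras.layers.Input((32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, use_bias=False),  # stride 2: 3x3 only
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Conv2D(32, 7, use_bias=False),             # 7x7 is the maximum
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, use_bias=False),
])

# ... quantization-aware training / quantization would go here ...

akida_model = convert(model)  # maps the ANN onto Akida's SNN fabric
akida_model.summary()
```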
The major distinctive feature is that incremental, one-shot and continuous learning are supported directly on the chip. At the AI Hardware Summit 2021, BrainChip showed a solution capable of identifying a person in new contexts after having seen him or her only once. Another BrainChip product is a smart speaker that, upon hearing a new voice, asks the speaker to identify themselves and thereafter addresses the person by name. These results are achieved with the help of a proprietary local training algorithm based on homeostatic STDP. Only the last fully connected layer supports synaptic plasticity and is involved in learning.
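The principle of confining plasticity to the final layer can be sketched as follows: a frozen feature extractor produces a spike pattern, and a single Hebbian-style update of one output row, followed by a homeostatic normalization, is enough to enroll a new class from one example. This is purely illustrative; BrainChip's homeostatic-STDP algorithm is proprietary and not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_shot_enroll(weights, feature_spikes, class_id, lr=1.0):
    """Illustrative one-shot learning confined to the last layer: the
    frozen feature extractor yields binary spike counts, and only the
    output row for the new class is updated (not BrainChip's
    proprietary homeostatic-STDP rule)."""
    # Hebbian-style update: strengthen weights from active features.
    weights[class_id] += lr * feature_spikes
    # Homeostatic normalization keeps each neuron's total weight bounded.
    weights[class_id] /= max(weights[class_id].sum(), 1e-9)
    return weights

n_features, n_classes = 64, 4
W = np.zeros((n_classes, n_features))

# Enroll a new person from a single observation ("one-shot"):
observation = (rng.random(n_features) > 0.7).astype(float)
W = one_shot_enroll(W, observation, class_id=2)

# Recognition: the class whose weights best match the spikes wins.
print("predicted class:", int(np.argmax(W @ observation)))
```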
Another instructive case from the AI Hardware Summit 2021 was the classification of fast-moving objects (for example, a race car). Such objects are usually off-center in the frame and significantly blurred, but they can be detected using an event-based approach.