Hi manny,

According to my reading they are best suited to different market segments.
I cannot find any peer reviewed papers comparing them.
Innatera does not use the words 'on-chip learning' as BrainChip does, but talks about 'real-time intelligence' and 'adaptation'. According to CEO Kumar (see the EE Times article): "the main limitation of the Innatera fabric is that it is not self-learning, Kumar said, noting that the neuron types are fixed, chosen for their suitability for a wide range of pattern recognition. While functions cannot be changed, parameters can be, he said."
Interesting, the different methods used by both.
It would be great to see a peer-reviewed comparison.
Until then I am a bit uncertain as to the extent of the competition Innatera offers.
The EE Times article is a good read.
An Innatera patent application tries to capture all means of converting an analog signal to a spike train, but leans heavily on a VCO (voltage-controlled oscillator) in the description.
WO2024023111A1 SYSTEM AND METHOD FOR EFFICIENT FEATURE-CENTRIC ANALOG TO SPIKE ENCODERS 20220725
A signal processing circuit for a spiking neural network, comprising an interface for converting an analog input signal to a corresponding spike-time representation of the analog input signal. The interface comprises an analog-to-information (A/information) converter configured to produce a modulated signal which represents one or more features of the analog input signal; a feature detector circuit configured to compare the modulated signal with a reference signal representing a reference feature, and configured to produce an error signal indicating a difference between the modulated signal and the reference signal; a feature extractor circuit, which comprises a locked loop circuit having an input for receiving the error signal and configured to produce an output signal representing an occurrence of one or more of the features represented by the modulated signal; and an encoder circuit, which is configured to encode the output signal into spike trains for input to the spiking neural network.
[0092] … The A/frequency converter 32B may comprise a voltage-controlled oscillator (VCO).
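As a rough illustration of the VCO idea (my own toy sketch, not Innatera's circuit): the instantaneous input amplitude sets the oscillator frequency, and a spike is emitted each time the accumulated phase wraps, so a stronger input produces a denser spike train. All names and parameter values here are assumptions for illustration only.

```python
import numpy as np

def vco_spike_encode(signal, dt=1e-4, f_min=10.0, f_max=200.0):
    """Toy VCO-style analog-to-spike encoder.

    Input amplitude (clipped to 0..1) is mapped linearly to an
    oscillator frequency between f_min and f_max; a spike fires
    each time the accumulated phase wraps past 1.0.
    """
    phase = 0.0
    spike_times = []
    for i, x in enumerate(signal):
        freq = f_min + (f_max - f_min) * np.clip(x, 0.0, 1.0)
        phase += freq * dt
        if phase >= 1.0:
            phase -= 1.0
            spike_times.append(i * dt)
    return spike_times

# Over one second of input, a stronger amplitude yields more spikes.
quiet = vco_spike_encode(np.full(10000, 0.1))
loud = vco_spike_encode(np.full(10000, 0.9))
```

The point is just that the analog value is carried by spike timing density rather than by a sampled digital word, which is the kind of "analog-to-information" conversion the abstract describes.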
4. The signal processing circuit of any of the preceding claims, wherein the feature is one or more of
i) specific characteristics, such as transient features, steady-state features,
ii) specific properties, such as (non)linearity features, statistical features, stationary features, transfer-function features, energy-content, and/or based on iii) specific domain features, such as time-, delay-, frequency-, phase-domain features,
preferably wherein the A/information converter comprises an analog-to-time converter which converts the analog input signal into a modulated signal which represents certain time-domain features such as delay, frequency and/or phase.
[00101] The type of encoding used in encoding circuit 35 may vary depending on the type of parameters used in the converter 32, detector 33 and feature extractor 34. When looking at the delay parameter, one could use time-to-first spike (TTFS), inter-spike interval (ISI), burst, or delay synchrony encoding. When looking at the frequency parameter, rate or frequency synchrony encoding might be used. When looking at the phase parameter, phase or phase synchrony encoding might be used.
"Time-to-first-spike" sounds a bit like Rank-order coding which we obtained from Spikenet which uses the order of arrival, not specifically time of arrival.
This one is for ML/federated learning, not on-chip learning:
WO2025012331A1 METHOD FOR TRAINING MACHINE LEARNING MODELS FOR STOCHASTIC SUBSTRATES 20230711
The present invention relates to a method for training a signal processing pipeline for deployment to a programmable fabric of a target device. The method comprises obtaining a model and characterization data of the components of the target device, and obtaining programmable parameter values of the signal processing pipeline. Next, a plurality of target devices is simulated. The simulated target devices are based on the characterization data, such that the simulated target devices represent digital twins and/or the stochastic variability of the plurality of target devices. Optimization methods are used to compute updates of the programmable parameter values of the programmable parameters for each of the simulated target devices independently, after which the programmable parameter value updates are reduced to a single update of the programmable parameter values of the signal processing pipeline.
[0070] After a system performance threshold is passed or convergence is reached, a complete description of the principal network can be deployed to any number of target hardware devices in step 109, making up the hardware deployment 100C.
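That reduce-to-one-update scheme reads a lot like federated averaging, only over simulated devices instead of real ones. A minimal sketch under my own assumptions (a single scalar parameter, device variability modeled as a random gain, plain gradient descent per twin, averaged into one shared update):

```python
import random

def train_over_twins(theta, n_twins=20, steps=200, lr=0.1, seed=0):
    """Toy digital-twin training loop.

    Each simulated device ('digital twin') applies the shared
    parameter theta through its own stochastic gain. Per-twin
    updates are computed independently, then reduced (averaged)
    into a single update of the shared parameter.
    """
    rng = random.Random(seed)
    gains = [1.0 + rng.gauss(0.0, 0.1) for _ in range(n_twins)]  # device variability
    target = 2.0  # desired pipeline output
    for _ in range(steps):
        updates = []
        for g in gains:  # independent update per simulated device
            out = g * theta
            grad = 2.0 * (out - target) * g  # d/dtheta of (g*theta - target)^2
            updates.append(-lr * grad)
        theta += sum(updates) / len(updates)  # reduce to a single update
    return theta

theta = train_over_twins(theta=0.0)
# theta settles at a compromise value so that each twin's output
# g * theta ends up close to the shared target despite device spread.
```

The appeal for analog hardware is obvious: the stochastic mismatch between physical chips is baked into the twins at training time, so the single deployed parameter set already tolerates it, without needing on-chip learning afterwards.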