IMEC has about 60 patents related to NNs. Just to be clear, it is EP3671748A1 IN-MEMORY COMPUTING FOR MACHINE LEARNING that has been abandoned.
Does that then mean that, as far as we know, IMEC only has the original patent for an analog SNN?
Many thanks @Diogenese
My opinion only DYOR
FF
AKIDA BALLISTA
https://worldwide.espacenet.com/pat...A1?q=pa = "imec" AND nftxt = "neural network"
Many relate to analog NNs, e.g.:
US2020210822A1 Multibit Neural Network
EP3671750A1 SYNAPSE CIRCUIT WITH MEMORY
EP3968208A1 ANALOG IN-MEMORY COMPUTING BASED INFERENCE ACCELERATOR
...
However, IMEC does have other digital NN patents, but they may be short of the rare condiment (secret sauce), e.g.:
EP3671568A1 BINARY RECURRENT NEURAL NETWORK INFERENCE TECHNIQUE
[006] ... They also propose binarization of the network activations (of inputs and hidden states) based on an equivalent formulation of the recurrences in the neural network. This results in reduced memory requirements for the learnt weights and the replacement of multiply-and-accumulate operations for the binarized activations and binarized hidden layer weights by simpler XNOR operations. Binarizing the activations, however, does not replace all the accumulate operations of the recurrent update equation by simpler XNOR operations. Binarizing the activations in a long short-term memory layer is ambiguous, because it is not sure which outcome of a non-linear activation should be binarized. If this is applied to the non-linear activations of the cell states, the resulting hidden state vectors are non-binary and not suitable for an energy-efficient hardware implementation.
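To see why replacing MACs with XNORs matters for power, here is a minimal sketch (my own illustration, not code from the patent) of the standard XNOR-popcount trick for {-1, +1} binarized activations and weights:

```python
# My own illustration of the XNOR-popcount trick described in [006]:
# once activations and weights are binarized to {-1, +1}, a
# multiply-accumulate reduces to XNOR plus a bit count.

def binarize(x):
    """Map real values to {-1, +1} by sign."""
    return [1 if v >= 0 else -1 for v in x]

def mac(acts, wts):
    """Ordinary multiply-accumulate (what the hardware avoids)."""
    return sum(a * w for a, w in zip(acts, wts))

def xnor_popcount(acts_bits, wts_bits, n):
    """Same dot product on bit-packed operands (+1 -> bit 1, -1 -> bit 0):
    each agreeing bit contributes +1, each disagreeing bit -1, so
    dot = 2 * popcount(XNOR) - n."""
    agreed = bin(~(acts_bits ^ wts_bits) & ((1 << n) - 1)).count("1")
    return 2 * agreed - n

# Both paths give the same result.
acts = binarize([0.3, -1.2, 0.7, -0.1])
wts = binarize([-0.5, -0.9, 0.2, 0.4])
a_bits = sum(1 << i for i, v in enumerate(acts) if v == 1)
w_bits = sum(1 << i for i, v in enumerate(wts) if v == 1)
assert mac(acts, wts) == xnor_popcount(a_bits, w_bits, len(acts))
```

Every multiply in the dot product collapses to a 1-bit XNOR, which is why binarized layers are attractive for low-power hardware; the patent's point in [006] is that LSTM cell states spoil this, because their non-linear activations don't binarize cleanly.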
I haven't seen any reference to STDP.
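For anyone new to the term: STDP (spike-timing-dependent plasticity) adjusts a synaptic weight according to the relative timing of pre- and post-synaptic spikes. A minimal sketch of the textbook pair-based rule (the constants are typical illustrative values, nothing to do with the IMEC filings):

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Classic pair-based STDP: potentiate if the pre-synaptic spike
    precedes the post-synaptic one, depress otherwise. Times in ms;
    constants are textbook examples, not from any patent."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> strengthen (LTP)
        return a_plus * math.exp(-dt / tau_plus)
    else:        # post before pre -> weaken (LTD)
        return -a_minus * math.exp(dt / tau_minus)
```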
EP3674982A1 HARDWARE ACCELERATOR ARCHITECTURE FOR CONVOLUTIONAL NEURAL NETWORK
A hardware accelerator architecture (10) for a convolutional neural network comprises a first memory (11) for storing NxM activation inputs of an input tensor; a plurality of processor units (12) each comprising a plurality of Multiply ACcumulate (MAC) arrays (13) and a filter weights memory (14) associated with and common to the plurality of MAC arrays of one processor unit (12). Each MAC array is adapted for receiving a predetermined fraction (FxF) of the NxM activation inputs from the first memory, and filter weights from the associated filter weights memory (14). Each MAC array is adapted for subsequently, during different cycles, computing and storing different partial sums, while reusing the received filter weights, such that every MAC array computes multiple parts of columns of an output tensor, multiplexed in time. Each MAC array further comprises a plurality of accumulators (18) for making a plurality of full sums from the partial sums made at subsequent cycles.
"during different cycles" implies synchronous operation. Akida is asynchronous.
The fact that EP3671748A1 IN-MEMORY COMPUTING FOR MACHINE LEARNING has been abandoned does not mean that IMEC is not using the system.