Fact Finder
Top 20
Great expert witnesses make average lawyers look good. PvdM has been working on SNNs since at least 2008, so I suspect that by 2019 he would have begun to get a glimmer of understanding of their capabilities.
https://brainchip.com/brainchip-releases-client-server-interface-tool-for-snap-technology/
BrainChip releases client-server interface tool for SNAP technology, 15.03.2016
...
The SNAP neural network learns features that exist in the uploaded data, even when they are not distinguishable by human means. Autonomous machine learning has long been an elusive target in computer science. Recursive programs are cumbersome and take a long time to process. BrainChip has accomplished rapid autonomous machine learning in its patented hardware-only solution by replicating the learning ability of the brain, by re-engineering the way neural networks function, and by creating a new way of computing culminating in the SNAP technology.
It is possible to trace the development of Akida through the BrainChip patents, listed here:
https://worldwide.espacenet.com/patent/search/family/070458523/publication/US11468299B2?q=pa = "brainchip"
This is a US patent derived from PvdM's first NN patent application:
US10410117B2 Method and a system for creating dynamic neural function libraries: Priority 20080921
View attachment 20975
A method of creating a reusable dynamic neural function library for use in artificial intelligence, the method comprising the steps of:
sending a plurality of input pulses in form of stimuli to a first artificial intelligent device, where the first artificial intelligent device includes a hardware network of reconfigurable artificial neurons and synapses;
learning at least one task or a function autonomously from the plurality of input pulses, by the first artificial intelligent device;
generating and storing a set of control values, representing one learned function, in synaptic registers of the first artificial intelligent device;
altering and updating the control values in synaptic registers, based on a time interval and an intensity of the plurality of input pulses for autonomous learning of the functions, thereby creating the function that stores sets of control values, at the first artificial intelligent device; and
transferring and storing the function in the reusable dynamic neural function library, together with other functions derived from a plurality of artificial intelligent devices, allowing a second artificial intelligent device to reuse one or more of the functions learned by the first artificial intelligent device.
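The claim steps above read like an export/import cycle for learned synaptic state. A minimal sketch of that flow, assuming a toy learning rule and invented names (none of this is BrainChip's actual API):

```python
# Hypothetical sketch of the claimed "dynamic neural function library" flow:
# device A learns a function, its synaptic register values are exported as a
# named function, and device B imports them. All names are illustrative.

class NeuralDevice:
    """Stand-in for a hardware network of reconfigurable neurons/synapses."""
    def __init__(self, n_synapses):
        self.registers = [0.0] * n_synapses  # synaptic control values

    def learn(self, pulse_trains):
        # Toy stand-in for autonomous learning: strengthen a register each
        # time its input line pulses (interval/intensity rules omitted).
        for pulses in pulse_trains:
            for i in pulses:
                self.registers[i] += 1.0

    def export_function(self):
        return list(self.registers)  # snapshot of learned control values

    def import_function(self, control_values):
        self.registers = list(control_values)

library = {}                     # the reusable dynamic neural function library
dev_a = NeuralDevice(4)
dev_a.learn([[0, 2], [0, 2], [1]])
library["edge_detector"] = dev_a.export_function()

dev_b = NeuralDevice(4)          # a second device reuses the learned function
dev_b.import_function(library["edge_detector"])
```

The point of the claim seems to be exactly that last step: the second device skips learning entirely and just loads the control values.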
... and this is the key patent which was granted recently:
US11468299B2 Spiking neural network: Priority 20181101
View attachment 20977
A system, method, and computer program product embodiments for an improved spiking neural network (SNN) configured to learn and perform unsupervised extraction of features from an input stream. An embodiment operates by receiving a set of spike bits corresponding to a set of synapses associated with a spiking neuron circuit. The embodiment applies a first logical AND function to a first spike bit in the set of spike bits and a first synaptic weight of a first synapse in the set of synapses. The embodiment increments a membrane potential value associated with the spiking neuron circuit based on the applying. The embodiment determines that the membrane potential value associated with the spiking neuron circuit reached a learning threshold value. The embodiment then performs a Spike Time Dependent Plasticity (STDP) learning function based on the determination that the membrane potential value of the spiking neuron circuit reached the learning threshold value.
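The abstract's integration step is simple enough to sketch: AND each spike bit with a binary weight, accumulate into the membrane potential, and fire an STDP-style update at a learning threshold. The threshold value and the weight-update rule below are my guesses for illustration, not the patent's:

```python
# Rough sketch of the claimed integration step: AND each incoming spike bit
# with the binary synaptic weight, bump the membrane potential, and trigger
# an STDP-style update when a learning threshold is reached.

def process_spikes(spike_bits, weights, threshold=3):
    potential = 0
    for s, w in zip(spike_bits, weights):
        potential += s & w            # logical AND of spike bit and weight
    learned = potential >= threshold  # learning threshold reached?
    if learned:
        # Toy stand-in for STDP: strengthen synapses that spiked, weaken
        # those that didn't (real STDP uses relative spike timing).
        weights = [1 if s else 0 for s in spike_bits]
    return potential, weights, learned

potential, new_w, fired = process_spikes([1, 1, 0, 1], [1, 1, 1, 0])
# only two spike bits coincide with set weights, so the threshold isn't met
```

Note how cheap this is in hardware: the AND plus an increment replaces the multiply-accumulate of a conventional neural network.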
This one is for detecting partially obscured objects, quite handy in the real world:
US11151441B2 System and method for spontaneous machine learning and feature extraction: Priority 20170208
View attachment 20980
an artificial neural network system for improved machine learning, feature pattern extraction and output labeling. The system comprises a first spiking neural network and a second spiking neural network. The first spiking neural network is configured to spontaneously learn complex, temporally overlapping features arising in an input pattern stream. Competitive learning is implemented as Spike Timing Dependent Plasticity with lateral inhibition in the first spiking neural network. The second spiking neural network is connected with the first spiking neural network through dynamic synapses, and is trained to interpret and label the output data of the first spiking neural network. Additionally, the output of the second spiking neural network is transmitted to a computing device, such as a CPU for post processing.
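The "competitive learning ... with lateral inhibition" in the first SNN is essentially a winner-take-all step: the neuron with the highest membrane potential fires and suppresses its neighbours, so different neurons specialise on different repeating features. A tiny sketch, with invented values:

```python
# Winner-take-all lateral inhibition: only the neuron with the highest
# membrane potential keeps its activation; rivals are inhibited to zero.

def lateral_inhibition(potentials):
    winner = max(range(len(potentials)), key=potentials.__getitem__)
    return [p if i == winner else 0 for i, p in enumerate(potentials)]

out = lateral_inhibition([0.2, 0.9, 0.5])
# -> [0, 0.9, 0]: neuron 1 wins and will be the one whose synapses adapt
```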
Accurate detection of objects is a challenging task due to lighting changes, shadows, occlusions, noise and convoluted backgrounds. Principal computational approaches use either template matching with hand-designed features, reinforcement learning, or trained deep convolutional networks of artificial neurons and combinations thereof. Vector processing systems generate values, indicating color distribution, intensity and orientation from the image. These values are known as vectors and indicate the presence or absence of a defined object. Reinforcement learning networks learn by means of a reward or cost function. The reinforcement learning system is configured to either maximize the reward value or minimize the cost value and the performance of the system is highly dependent on the quality and conditions of these hand-crafted features.
Deep convolutional neural networks learn by means of a technique called back-propagation, in which errors between expected output values for a known and defined input, and actual output values, are propagated back to the network by means of an algorithm that updates synaptic weights with the intent to minimize the error.
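For contrast with the spiking approach, the back-propagation loop described above can be reduced to a one-weight toy: compute the error between expected and actual output, propagate it back as a gradient, and nudge the weight to shrink the error (single linear neuron, squared-error loss, purely illustrative):

```python
# Minimal back-propagation sketch: the error between expected and actual
# output drives a gradient update on the synaptic weight.

def train_step(w, x, target, lr=0.1):
    y = w * x                  # forward pass: actual output
    error = y - target         # difference from expected output
    grad = error * x           # d(0.5 * error**2)/dw, propagated back
    return w - lr * grad       # update weight to reduce the error

w = 0.0
for _ in range(50):            # repeated labelled example: x=2 -> target=6
    w = train_step(w, 2.0, 6.0)
# w converges toward 3.0
```

This is the part that costs "millions of labelled input training models": every weight needs many such passes over known input/output pairs, which is exactly what the spontaneous-learning claims aim to avoid.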
The Deep Learning method requires millions of labelled input training models, resulting in long training times, and clear definition of known output values.
However, these methods are not useful when dealing with previously unknown features or in the case whereby templates are rapidly changing or where the features are flexible. The field of neural networks is aimed at developing intelligent learning machines that are based on mechanisms which are assumed to be related to brain function. U.S. Pat. No. 8,250,011 [BrainChip] describes a system based on artificial neural network learning. The system comprises a dynamic artificial neural computing device that is capable of approximation, autonomous learning and strengthening of formerly learned input patterns. The device can be trained and can learn autonomously owing to the artificial spiking neural network that is intended to simulate or extend the functions of a biological nervous system. Since the artificial spiking neural network simulates the functioning of the human brain, it becomes easier for the artificial neural network to solve computational problems.
[0003] US20100081958 (Lapsed) [Florida Uni] describes a more advanced version of machine learning and automated feature extraction using neural network. US20100081958 is related to pulse-based feature extraction for neural recordings using a neural acquisition system. The neural acquisition system includes the neural encoder for temporal-based pulse coding of a neural signal, and a spike sorter for sorting spikes encoded in the temporal-based pulse coding. The neural encoder generates a temporal-based pulse coded representation of spikes in the neural signal based on integrate-and-fire coding of the received neural signal and can include spike detection and encode features of the spikes as timing between pulses such that the timing between pulses represents features of the spikes.
[0004] However, the prior art does not disclose any system or method which can implement a machine learning or training algorithm and can autonomously extract features and label them as an output without implementing lengthy training cycles. In view of the aforementioned reasons, there is therefore a need for improved techniques in spontaneous machine learning, eliminating the need for hand-crafted features or lengthy training cycles. Spontaneous Dynamic Learning differs from supervised learning in that known input and output sets are not presented, but instead the system learns from repeating patterns (features) in the input stream.
US20100081958A1 PULSE-BASED FEATURE EXTRACTION FOR NEURAL RECORDINGS relates to neurophysiology. I guess it's to do with those skull cap neuron detectors, which shows the depth of PvdM's research.
It is interesting that US20100081958 (Lapsed) [Florida Uni] is cited for its discussion of spike sorting because BrainChip's US11468299B2 Spiking neural network also has a spike sorting system:
View attachment 20982
It doesn't look like the Florida Uni document discloses anything like PvdM's spike sorter.
View attachment 20985
Sorry, I seem to have wandered down a different rabbit hole from the one I started in ...
... oh yes, does PvdM know the capabilities of Akida?
I think even he will be astonished when he gets the cortex sorted out and produces AGI, or maybe just hugely proud of his achievements.
Thanks for all you do for shareholders @Diogenese
Highest regards
FF
AKIDA BALLISTA