Boab
I wish I could paint like Vincent
The older I get, the better I was.
I used to be a superhero, then I retired.
"The older I get, the more clearly I remember things that never happened. - Mark Twain
😁
Great expert witnesses make average lawyers look good.
PvdM has been working on SNNs since at least 2008, so I suspect that by 2019 he would have begun to get a glimmer of understanding of its capabilities.
https://brainchip.com/brainchip-releases-client-server-interface-tool-for-snap-technology/
BrainChip releases client-server interface tool for SNAP technology, 15.03.2016
...
The SNAP neural network learns features that exist in the uploaded data, even when they are not distinguishable by human means. Autonomous machine learning has long been an elusive target in computer science. Recursive programs are cumbersome and take a long time to process. BrainChip has accomplished rapid autonomous machine learning in its patented hardware-only solution by replicating the learning ability of the brain, by re-engineering the way neural networks function, and by creating a new way of computing culminating in the SNAP technology.
It is possible to trace the development of Akida through the BrainChip patents, listed here:
https://worldwide.espacenet.com/patent/search/family/070458523/publication/US11468299B2?q=pa = "brainchip"
This is a US patent derived from PvdM's first NN patent application:
US10410117B2 Method and a system for creating dynamic neural function libraries: Priority 20080921
View attachment 20975
A method of creating a reusable dynamic neural function library for use in artificial intelligence, the method comprising the steps of:
sending a plurality of input pulses in form of stimuli to a first artificial intelligent device, where the first artificial intelligent device includes a hardware network of reconfigurable artificial neurons and synapses;
learning at least one task or a function autonomously from the plurality of input pulses, by the first artificial intelligent device;
generating and storing a set of control values, representing one learned function, in synaptic registers of the first artificial intelligent device;
altering and updating the control values in synaptic registers, based on a time interval and an intensity of the plurality of input pulses for autonomous learning of the functions, thereby creating the function that stores sets of control values, at the first artificial intelligent device; and
transferring and storing the function in the reusable dynamic neural function library, together with other functions derived from a plurality of artificial intelligent devices, allowing a second artificial intelligent device to reuse one or more of the functions learned by the first artificial intelligent device.
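Just for fun, the claim steps above can be sketched as a toy software model: a device learns control values into its "synaptic registers", exports them as a named function, and a second device reuses them from a shared library. All names and numbers here are my own invention; the patent describes this happening in reconfigurable hardware, not software.

```python
# Toy model of the claimed workflow (illustrative only, not BrainChip's design).

class NeuralDevice:
    def __init__(self, n_synapses):
        self.registers = [0.0] * n_synapses   # control values per synapse

    def learn(self, pulses):
        """Adjust control values from (interval, intensity) input pulses."""
        for interval, intensity in pulses:
            for i in range(len(self.registers)):
                # closer, stronger pulses move the registers further
                self.registers[i] += 0.1 * intensity / max(interval, 1e-6)

    def export_function(self):
        """Snapshot the register state as a reusable learned function."""
        return list(self.registers)

    def import_function(self, control_values):
        """Reuse a function learned on another device."""
        self.registers = list(control_values)

library = {}                                  # the dynamic function library
dev_a = NeuralDevice(n_synapses=4)
dev_a.learn([(1.0, 0.5), (0.5, 0.8)])
library["edge_detector"] = dev_a.export_function()

dev_b = NeuralDevice(n_synapses=4)            # second device reuses the function
dev_b.import_function(library["edge_detector"])
```

The point of the claim, as I read it, is that last step: the learned "function" is portable between devices via the library.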
... and this is the key patent which was granted recently:
US11468299B2 Spiking neural network: Priority 20181101
View attachment 20977
A system, method, and computer program product embodiments for an improved spiking neural network (SNN) configured to learn and perform unsupervised extraction of features from an input stream. An embodiment operates by receiving a set of spike bits corresponding to a set of synapses associated with a spiking neuron circuit. The embodiment applies a first logical AND function to a first spike bit in the set of spike bits and a first synaptic weight of a first synapse in the set of synapses. The embodiment increments a membrane potential value associated with the spiking neuron circuit based on the applying. The embodiment determines that the membrane potential value associated with the spiking neuron circuit reached a learning threshold value. The embodiment then performs a Spike Time Dependent Plasticity (STDP) learning function based on the determination that the membrane potential value of the spiking neuron circuit reached the learning threshold value.
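To make the abstract concrete, here is a toy reading of those claim steps in software: binary spikes ANDed with binary weights, a membrane potential that accumulates the matches, and a crude STDP-style update when the learning threshold is reached. The threshold, the weight update, and every name are my own illustrative assumptions, not BrainChip's actual circuit.

```python
# Toy sketch of the claim language (illustrative assumptions throughout).

LEARN_THRESHOLD = 3   # invented constant

def step_neuron(spike_bits, weights, potential):
    """One update of a spiking neuron circuit.

    spike_bits / weights: lists of 0/1, one entry per synapse.
    Returns (new_potential, fired, new_weights).
    """
    # Logical AND of each spike bit with its synaptic weight,
    # then increment the membrane potential by the number of matches.
    matches = [s & w for s, w in zip(spike_bits, weights)]
    potential += sum(matches)

    if potential >= LEARN_THRESHOLD:
        # Crude stand-in for STDP: strengthen synapses that carried a spike,
        # depress (here: zero) those that did not. Then fire and reset.
        weights = [1 if s else 0 for s in spike_bits]
        return 0, True, weights
    return potential, False, weights

pot, fired, w = 0, False, [1, 1, 0, 1]
for _ in range(3):
    pot, fired, w = step_neuron([1, 1, 0, 0], w, pot)
# After three presentations the neuron has fired once and its weights
# have been pulled onto the repeating input pattern.
```

Note how the AND-of-bits formulation avoids multiplication entirely, which is presumably part of why this is cheap in hardware.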
This one is for detecting partially obscured objects, quite handy in the real world:
US11151441B2 System and method for spontaneous machine learning and feature extraction: Priority 20170208
View attachment 20980
an artificial neural network system for improved machine learning, feature pattern extraction and output labeling. The system comprises a first spiking neural network and a second spiking neural network. The first spiking neural network is configured to spontaneously learn complex, temporally overlapping features arising in an input pattern stream. Competitive learning is implemented as Spike Timing Dependent Plasticity with lateral inhibition in the first spiking neural network. The second spiking neural network is connected with the first spiking neural network through dynamic synapses, and is trained to interpret and label the output data of the first spiking neural network. Additionally, the output of the second spiking neural network is transmitted to a computing device, such as a CPU for post processing.
Accurate detection of objects is a challenging task due to lighting changes, shadows, occlusions, noise and convoluted backgrounds. Principal computational approaches use either template matching with hand-designed features, reinforcement learning, or trained deep convolutional networks of artificial neurons and combinations thereof. Vector processing systems generate values, indicating color distribution, intensity and orientation from the image. These values are known as vectors and indicate the presence or absence of a defined object. Reinforcement learning networks learn by means of a reward or cost function. The reinforcement learning system is configured to either maximize the reward value or minimize the cost value and the performance of the system is highly dependent on the quality and conditions of these hand-crafted features.
Deep convolutional neural networks learn by means of a technique called back-propagation, in which errors between expected output values for a known and defined input, and actual output values, are propagated back to the network by means of an algorithm that updates synaptic weights with the intent to minimize the error.
The Deep Learning method requires millions of labelled input training models, resulting in long training times, and clear definition of known output values.
However, these methods are not useful when dealing with previously unknown features or in the case whereby templates are rapidly changing or where the features are flexible. The field of neural networks is aimed at developing intelligent learning machines that are based on mechanisms which are assumed to be related to brain function. U.S. Pat. No. 8,250,011 [BrainChip] describes a system based on artificial neural network learning. The system comprises a dynamic artificial neural computing device that is capable of approximation, autonomous learning and strengthening of formerly learned input patterns. The device can be trained and can learn autonomously owing to the artificial spiking neural network that is intended to simulate or extend the functions of a biological nervous system. Since, the artificial spiking neural network simulates the functioning of the human brain; it becomes easier for the artificial neural network to solve computational problems.
[0003] US20100081958 (Lapsed) [Florida Uni] describes a more advanced version of machine learning and automated feature extraction using neural network. US20100081958 is related to pulse-based feature extraction for neural recordings using a neural acquisition system. The neural acquisition system includes the neural encoder for temporal-based pulse coding of a neural signal, and a spike sorter for sorting spikes encoded in the temporal-based pulse coding. The neural encoder generates a temporal-based pulse coded representation of spikes in the neural signal based on integrate-and-fire coding of the received neural signal and can include spike detection and encode features of the spikes as timing between pulses such that the timing between pulses represents features of the spikes.
[0004] However, the prior art do not disclose any system or method which can implement machine learning or training algorithm and can autonomously extract features and label them as an output without implementing lengthy training cycles. In view of the aforementioned reasons, there is therefore a need for improved techniques in spontaneous machine learning, eliminating the need for hand-crafted features or lengthy training cycles. Spontaneous Dynamic Learning differs from supervised learning in that known input and output sets are not presented, but instead the system learns from repeating patterns (features) in the input stream.
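The two-network arrangement described above can be caricatured in a few lines: a first layer of competing neurons where only the most activated one learns (a stand-in for lateral inhibition), the winner's weights pulled toward the repeating input feature (a stand-in for STDP), and a second stage that simply labels the winner. The initial weights, learning rate, and patterns are all illustrative assumptions of mine.

```python
import numpy as np

# Toy "Spontaneous Dynamic Learning": competitive winner-take-all layer
# followed by a labelling stage. Deterministic, invented starting weights.
W = np.array([[0.9, 0.1, 0.2, 0.1],    # neuron 0, slightly tuned to pattern a
              [0.1, 0.2, 0.8, 0.3]])   # neuron 1, slightly tuned to pattern b

def present(pattern, lr=0.5):
    """One competitive learning step: only the most activated neuron learns."""
    x = np.asarray(pattern, dtype=float)
    winner = int(np.argmax(W @ x))      # lateral inhibition: a single winner
    W[winner] += lr * (x - W[winner])   # pull the winner toward the feature
    return winner

a, b = [1, 1, 0, 0], [0, 0, 1, 1]      # two repeating features in the stream
for _ in range(20):
    present(a)
    present(b)

# Second stage: label whichever first-stage neuron fires for each feature.
labels = {present(a): "feature A", present(b): "feature B"}
```

No input/output pairs were ever supplied; each neuron simply latched onto one of the repeating features, which is the contrast with supervised learning that paragraph [0004] is drawing.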
US2010081958A1 PULSE-BASED FEATURE EXTRACTION FOR NEURAL RECORDINGS relates to neurophysiology. I guess it's to do with those skull-cap neuron detectors, which shows the depth of PvdM's research.
It is interesting that US20100081958 (Lapsed) [Florida Uni] is cited for its discussion of spike sorting because BrainChip's US11468299B2 Spiking neural network also has a spike sorting system:
View attachment 20982
It doesn't look like the Florida Uni document discloses anything like PvdM's spike sorter.
View attachment 20985
Sorry, I seem to have wandered down a different rabbit hole from the one I started in ...
... oh yes, does PvdM know the capabilities of Akida?
I think even he will be astonished when he gets the cortex sorted out and produces AGI, or maybe he will just be hugely proud of his achievements.
Thank you so much.
I'd be Julia Gillard so I could go around reciting the misogyny speech to ding-bats like Izzzzy all day.
View attachment 20983
View attachment 20984
Looked at Quadric a few months ago. From recall, they use ALUs.
Douglas Fairbairn on LinkedIn: Quadric’s New Chimera GPNPU Processor IP Blends NPU and DSP into New…
Big announcement from one of our AI IP partners!
www.linkedin.com
View attachment 20966
Looks to be CNN, not SNN. Not us?
Also, nobody from Brainchip has "liked" it yet.
Was that the fish chorus line song "Salmon chanted evening"?
Hi Bravo
Love your work but remember as Brainchip states "We don't make sensors we make them smart." AKIDA is a processor. A GPU is a processor. A CPU is a processor. None of them are sensors but sensors need something to process what they sense and make it intelligible to humans.
Now you can use multiple GPUs or multiple CPUs to process the data coming from five sensors or more and send it somewhere else to be fused into a meaningful action, or you can use something that takes in multiple streams of different sensory data, fuses that data on chip, and gives you the meaningful action close to the sensors.
By coincidence, this ability to take multiple sensor inputs, fuse them on chip close to the sensor, and produce meaningful action is something AKIDA technology IP provides. AKIDA IP, however, will not be the sensor itself.
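To caricature the two pipelines just described: shipping every raw sensor stream off to a remote processor, versus fusing per-sensor decisions right next to the sensors and emitting only the meaningful action. Everything here (the threshold, the voting rule, the sensor values) is my own illustration, not how AKIDA actually fuses data.

```python
# Toy on-chip sensor fusion: only the action leaves the device,
# not the raw streams. Illustrative values and rules throughout.

def classify(reading):
    """Per-sensor decision from one raw reading (invented threshold)."""
    return "obstacle" if reading > 0.5 else "clear"

def fuse_on_chip(streams):
    """Fuse the per-sensor decisions locally; emit a single action."""
    votes = [classify(r) for r in streams]
    return "brake" if votes.count("obstacle") >= 2 else "continue"

# e.g. camera, radar, ultrasonic readings arriving at the edge device
action = fuse_on_chip([0.9, 0.7, 0.2])
```

The bandwidth argument is the point: three raw streams go in, one short action comes out.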
Remember the interview with Prophesee CEO Luca Verre, where he described building their event-based sensor but knowing that it was only half the story unless they could find someone with an event-based processor.
Intel did not have it.
SynSense did not have it.
But then,
'One enchanted evening,
Then Luca found AKIDA,
And somehow he knew,
With AKIDA he'd be sensing,
And he made it his own'.
(sung to the tune Some Enchanted Evening from South Pacific)
My opinion only DYOR
FF
AKIDA BALLISTA
That's funny. I had dinner with a couple of my brothers last night and we reminisced about when our Uncle Harry sang Some Enchanted Evening at my wedding.
So she heard no evil when she worked for, and was in a relationship with, a proven corrupt union official. Back then she even wore opaque glasses so she could see no evil on the documents she drafted. A future Prime Minister cannot be too careful; these sorts of things can come back to bite.
Have you ever wondered why she covers her ears?
Knock Knock
Was that the fish chorus line song "Salmon chanted evening"?
My take is once Akida is used in Mercedes, the rest of the makers will follow.
I tend to think that inflationary pressures suppressing industry and the demise of Argo may well play into BRN's hands in the longer run. I see an opportunity for the right technology in the right place at the right time. And Akida is the right technology imo; it's cheap and scales well as IP, and it's the only real edge choice atm. I think that with a suppressed market and the need to keep moving ADAS and level 3 automation ahead, industry will gravitate towards a modular autonomous solution of necessary components making up what is required. I think this has happened with software in the past (many times), and now it will be forced upon software-defined car manufacturers to reduce the amount of reinventing they do in development in favour of purchasing more generic modular solutions, to speed up adoption. I think Akida will be part of this. Why? Not because I'm a biased shareholder, but because Akida scales and will be cheaper and faster to implement. I think that eventually there will be maybe 2 or 3 major autonomous vehicle platforms adopted by the whole of industry, and each will utilise Akida technology, fulfilling the company's ambition for Akida to be the de facto standard for car edge AI. AIMO.
Would still really love some clarity at some point on that relationship and how/where we fit, given no NDA.
My take is once Akida is used in Mercedes, the rest of the makers will follow.
Nice. I'm convinced, but I'm a nurse, not an engineer.
Hi Slade,
@Diogenese and anyone else that can read tech specs, I would be interested in what you think.
MAX78000 Artificial Intelligence Microcontroller with Ultra-Low-Power Convolutional Neural Network Accelerator
Artificial intelligence (AI) requires extreme computational horsepower, but Maxim is cutting the power cord from AI insights. The MAX78000 is a new breed of AI microcontroller built to enable neural networks to execute at ultra-low power and live at t...
www.maximintegrated.com