Yeah Mr J Chapman89, yeah, 1000 fps that BrainChip's spiking solution can use to make decisions! Computers and human behaviour has been a challenge forever, and now a computer can monitor and assist the driver of an automobile. So needed! This old 1962 video has a lot of neat things in it, when you have time! Best regards.
I just watched this video, thanks for sharing it
@stuart888.
The following is a bit of a rant, but I think it makes some valid points. I apologise in advance for the words that are spewing forth from my fingers. I hope I don't mislead too many, and hopefully help some.
For me, this video tells quite an intriguing story. I originally only watched it because I wanted to see how people in 1962 worked with the computers of the day. But after watching it, I found it actually quite revealing of just how grand Akida is. What an amazing quantum leap Akida represents, especially once it gets LSTM and cortical columns working to even better emulate the human brain. That will allow prior experience and big-picture thinking to come into play, even potentially pre-cognition.
Firstly, I noticed how absolutely stuck on binary logic these 1962 scientists were. They saw everything as yes or no. Which does work, and has worked for decades. It's just so bloody limiting.
This also emphasised just how dumb computers are! Computers blindly follow rules, and hence are completely at the mercy of the programmer. They just blindly apply the rules VERY quickly, and in a reproducible manner, and so appear to be clever.
The blocks problem was a particularly good example of this. For those who haven't watched the video: given a pattern of black and white blocks, and some rules about how the blocks can be placed, reproduce the pattern.
I believe Billy did follow the rules when he first solved the problem by adding two same-coloured blocks that were not adjacent. The rules DO NOT say the blocks have to be adjacent. That was an adjunct added by the tester, and implied by the pictures. It must have been coded into the computer program, but human error meant it was never added to the written rules presented to Billy. I think the tester stated something like "place two blocks together". Billy just interpreted the word together as simultaneous, or time-adjacent, rather than spatially adjacent. A clever interpretation IMHO.
Billy got a typical response of "Oh well, yes, you did solve it, but not the way I wanted you to solve it."
So not wrong then! Just different. That's the kind of stuff the brain does well, and computers cannot do. Maybe not until now, or the imminent future.
I was equally impressed by Billy's very first attempt at exactly reproducing the pattern by placing the blocks down one at a time. This shows how prior experience and non-binary logic work. He saw the big picture and devised a mechanism to solve the problem directly that didn't apply the rules. He determined that the rules were not the most efficient way to solve the problem.
Again, this is the kind of stuff an intelligent human brain does well.
The computer program, as it was programmed to do, and as limited by pure binary logic, looked at a single column at a time and rather inefficiently tried to resolve any issues in that column alone. Again, yes, that works, it IS effective, but it isn't how the human mind solves a pattern-matching problem.
Computing, since 1962, progressed to using bytes and words (well, the 1962 computers probably did use 4-bit bytes). Modern computers use 64-bit words (and possibly even 128-bit words now) as complete logic blocks, applying masks to determine what a combination of bits means. Boolean logic can even be applied directly to these words, and even to matrices of these words, to solve more complex problems in a single pass.
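A rough sketch of that word-level Boolean logic (the word and masks below are illustrative values, not from any particular machine): masking lets a program test or transform many bits in one operation instead of one bit at a time.

```python
# Illustrative 16-bit word (a real machine word would be 64 bits).
WORD = 0b1011_0110_1100_0011

LOW_BYTE_MASK = 0xFF     # selects the low 8 bits
TOP_BIT_MASK = 0x8000    # selects bit 15 alone

low_byte = WORD & LOW_BYTE_MASK      # isolate 8 bits in one operation
top_set = bool(WORD & TOP_BIT_MASK)  # test a single flag bit
flipped = WORD ^ 0xFFFF              # invert all 16 bits in a single pass

print(f"low byte: {low_byte:#010b}")   # 0b11000011
print(f"bit 15 set: {top_set}")        # True
print(f"flipped: {flipped:#018b}")     # 0b0100100100111100
```

The point is that one AND, OR, or XOR resolves a whole word's worth of yes/no questions at once, which is exactly the step up from the bit-at-a-time 1962 approach.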
This allowed computers to do more complex things quicker, but still in a rather unintelligent, always pre-programmed, way.
Bring in neuromorphic computing and Akida. Some of the examples given by Anil Mankar show logic, and weightings, going back to single bits where appropriate. This results in ultra-low power consumption. Extra nodes and layers are brought into play as needed. And Akida has the ability to learn, and hence apply logic that is outside the initial rules. Just like Billy did so many years ago!
Learning burns in a pathway and reverts back to single-bit logic and instant recognition. The fact that Akida does this in a single shot is far superior even to the human brain. We need repetition to burn in memories, and the more repetition, the better.
Akida truly does more closely mimic the way the brain works, and even seems to exceed it in its learning capability. Bring on LSTM and cortical columns, and the amount of information that can form a memory and achieve single-bit-like efficiencies becomes immense. Very complex things can be learned and associated with other very complex things.
It seems we have come full circle, back to single bits, but we haven't. The neuromorphic processor may indeed use single bits, but not in the same stupid way the original, purely Boolean-based computers and programmers did/do.
For example, imagine a 1000 x 1000 pattern of red dots with an unknown number of blue dots randomly placed within it. And let's refine this particular case to there being only a single blue dot. It is possible to apply 1962 logic to this and test each of the 1M dots in isolation. You have to test them all because you don't know how many anomalies there are. And even once it corrects the anomaly, the program must continue to test all 1M dots, even if the first one was the one in error. The human brain (and Akida) would ignore all the sameness, zoom in on the anomaly, and fix it directly, in a single step.
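The cost difference can be sketched in a few lines. This is a toy model with made-up names, not real Akida code: the exhaustive 1962-style scan must touch every dot, while an event-driven approach only processes the one dot that differs.

```python
# Toy comparison: exhaustive scanning vs event-driven handling of a single
# anomalous dot in a 1000 x 1000 grid. All names here are illustrative.

N = 1000 * 1000
BLUE_AT = 123_456            # position of the lone blue dot (arbitrary)

dots = ["red"] * N
dots[BLUE_AT] = "blue"

# 1962-style: test every dot, and keep testing even after the fix is made.
scanned = list(dots)
checks = 0
for i in range(N):
    checks += 1
    if scanned[i] != "red":
        scanned[i] = "red"   # resolve the anomaly, then keep scanning anyway

# Event-driven sketch: the "sensor" reports only what changed, so the anomaly
# is handled in a single step and no work is spent on the 999,999 red dots.
events = [(BLUE_AT, "blue")]
handled = list(dots)
for pos, _colour in events:
    handled[pos] = "red"

print(checks)       # 1000000 comparisons, regardless of where the dot is
print(len(events))  # 1
```

Both end states are identical; the difference is a million comparisons versus one event, which is the sparsity argument in miniature.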
This is where Akida is fantastic in sparsity situations. And they appear to be the predominant cases, e.g. finding a face in a crowd.
Better still, Akida uses zero power if an anomalous blue dot does not appear.
I have seen an experiment that does just this, timing eye movements to work out when the anomaly is found. The eye focuses on the dot within milliseconds.
The 1962-style program may also be as fast as the human eye/brain in this situation, but it would consume millions of times more power than Akida in all the unnecessary testing of each and every pixel, and it would be about one million times slower than Akida, assuming the same clock speed.
Now where have we heard that analogy used before?