Boab
I wish I could paint like Vincent
Thanks for that Dio. Certainly looks like a laborious process! My layman's interpretation of a CNN -
In convolution, the system takes manageable-sized bites of the pixel data for processing.
The processor examines small segments of the whole pixel array to see if there is a pattern it recognizes. It does this by looking at each pixel in association with its neighbouring pixels.
Think of an array of pixels in a camera photoreceptor, say 100 columns x 100 rows (10,000 pixels). Starting at the top left corner, draw a box enclosing, say, 7x7 pixels. That box is what the convolution (kernel) size means.
Now move the box 1 pixel to the right - that is a stride of 1.
This excludes the leftmost column of 7 pixels from the next processing step and brings in a new column of 7 pixels on the right.
So a stride of 2 moves the box 2 columns to the right, dropping 2 columns of 7 on the left and adding 2 columns of 7 to the right.
When the box has scanned across the whole first 7 rows, it drops down by the stride (here, 2 rows) and repeats the process.
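The box-and-stride idea above can be sketched in a few lines of Python. This is only an illustration of the window movement, not a full convolution (a real CNN layer would also multiply each window by learned kernel weights and sum the result); the function name and parameters are my own for the example.

```python
import numpy as np

def slide_windows(image, box=7, stride=1):
    """Yield every box x box patch, stepping `stride` pixels at a time."""
    rows, cols = image.shape
    for top in range(0, rows - box + 1, stride):
        for left in range(0, cols - box + 1, stride):
            yield image[top:top + box, left:left + box]

# 100 x 100 pixel array, 7x7 box, stride 1:
image = np.zeros((100, 100))
patches = list(slide_windows(image, box=7, stride=1))
# The box fits in (100 - 7 + 1) = 94 positions along each axis.
print(len(patches))  # 94 * 94 = 8836 windows
```

With stride=2 the box lands on only 47 positions per axis (2,209 windows), which is why a larger stride cuts the amount of processing so sharply.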
I just remember one of the things I read that sold me on Brainchip: if you had a large white board with a black dot in the middle, Akida will just process the black dot and won't waste energy sending data for the whole board. (Like event-based cameras?)
Not even sure if this is relevant to this discussion.
If you are a Layman then I am the scarecrow from The Wonderful Wizard Of Oz.