HopalongPetrovski
Thank you Dio for providing us with some context and an explanation, from your understanding, in plain English.

Well, I haven't seen the patent yet, so the following may be complete balderdash.
I'm guessing that the long skip will be used in natural language processing (NLP). This is different from keyword spotting (KWS), which only requires the system to listen out for a word from a list of keywords and then trigger some subsequent action.
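Just to make the KWS side concrete, it can be as simple as matching each recognised word against a fixed list and firing a trigger. This is purely my own toy illustration (the names and keywords are invented), nothing to do with the patent:

```python
# Toy keyword spotter: hypothetical keywords, purely illustrative.
KEYWORDS = {"akida", "lights", "stop"}

def on_keyword(word):
    print(f"Keyword detected, triggering action: {word}")

def spot(recognised_words):
    """Match each recognised word against a fixed list and fire a trigger."""
    for word in recognised_words:
        if word.lower() in KEYWORDS:
            on_keyword(word)

spot(["please", "turn", "the", "Lights", "on"])  # -> triggers on "Lights"
```

No understanding of meaning is needed; it either matches the list or it doesn't.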
NLP requires the system to interpret, or understand, the meaning of a sentence or a paragraph. AI systems like ChatGPT do this in software, and this burns a lot of electricity.
The existing systems refer to "attention", meaning the system needs to be able to remember a string of words and parse them into subject (noun), verb, object (noun), adjective, adverb, etc., and then to know what action the verb says is being done, who or what is to do it, to whom it is to be done, and so on. So when the system has identified each of these words, à la KWS, it then has to try to understand the meaning by looking at the context, which may involve looking at more than one sentence.
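For what it's worth, the "attention" those systems use boils down to fairly compact arithmetic: each word is scored against every other word, and the scores decide how much context gets mixed in. A minimal NumPy sketch of standard scaled dot-product attention (the textbook mechanism, not anything from the patent):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each word's query is scored
    against every word's key, and the normalised scores weight
    the values, mixing context into each word's representation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (words x words) relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax per word
    return weights @ V                               # context-blended representations

# Three "words" with 4-dimensional embeddings (random, just to show shapes).
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
print(attention(x, x, x).shape)  # (3, 4): one context-aware vector per word
```

Doing that score-and-mix step across long passages, over and over, is a big part of why it burns so much electricity in software.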
So, cutting a long story short, as you know, an NN includes a number of layers, each layer having a plurality of neurons, each neuron being configured (programmed/loaded) with weights, and the weights either reinforce or cancel incoming spikes in a pattern derived from the model library. When the spikes identifying a particular word are classified in an intermediate layer, the result can bypass the following layers of the NN and be passed forward to the "interpreting" stage.
So the bypassing of a layer is a skip, and if a number of layers are bypassed, or if the word is held over for comparison with other parts of the sentence/paragraph, this would be a long skip. "Long" suggests to me that the result is stored in temporary memory for further processing?
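If that guess is anywhere near right, the mechanics might look roughly like this toy forward pass: if an intermediate layer's classifier is confident enough about the word, the result is parked for the later "interpreting" stage and the remaining layers are bypassed. The structure, names, and threshold here are entirely my own invention, a sketch of the general early-exit idea rather than the patent's method:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Three toy layers: each has a weight matrix and a small classifier head.
layers = [(rng.standard_normal((8, 8)), rng.standard_normal((8, 5)))
          for _ in range(3)]

def forward_with_long_skip(x, threshold=0.9):
    """Toy early-exit forward pass: if an intermediate classifier head
    is confident about the word, park the result for the later
    'interpreting' stage and bypass the remaining layers."""
    held = None  # the "long skip": a result held over for further processing
    for depth, (W, head) in enumerate(layers):
        x = relu(x @ W)              # weights reinforce or cancel incoming activity
        probs = softmax(x @ head)    # intermediate classification attempt
        if probs.max() >= threshold:
            held = (depth, int(probs.argmax()))
            break                    # skip the following layers
    return x, held

x = rng.standard_normal(8)
features, held = forward_with_long_skip(x)
print(held)  # e.g. (0, 3) if layer 0 was already confident, else None
```

The saving, if this is the idea, is that confident cases never pay for the deeper layers at all.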
Now I haven't got the foggiest how the interpretation is done, but, to do all that, I would think the system will need dictionaries and thesauri*.
Remember this is just my rudimentary understanding and may be way off the beam as it's really above my pay grade.
*I'm very much afraid that, even when I've seen the patent, I still won't understand how it works.
Indeed, the step up in processing, from recognising a predetermined wake word to the complexity of NLP in a single generation, is astounding to me.
And beyond the building blocks of dictionaries and thesauri, surely any system will require access to, and application of, rules of syntax and grammar in order to produce anything more than a parroting of language?
Is the mere application of rules enough to provide an adequate simulacrum of consciousness?
Although, on reflection, that is somewhat the situation now with ChatGPT, isn't it?
It has been refined and trained enough, though, to furnish a usable tool.
So are you saying that some version of all this evaluation, processing and reconciliation (if provisioned with access to a sufficiently worthy model library held separately in memory) could be carried out in Akida 2000 hardware rather than the current software emulations?
I would imagine that initially it would be bound within the confines of specific use cases, such as the teaching or translation of a specific language, perhaps, or some other definable subject such as biology.
Perhaps I am getting carried away with the immediate potentialities of the tech as is my wont.
I suffer from a somewhat retrofuturistic syndrome brought about by a far too liberal dose of the Jetsons, Lost in Space and Star Trek in my television-irradiated youth.

Anyway, thank you again for all your continuing valuable contributions here.
Well done.