That's right. Edge AGI will not use ChatGPT's global model library. Its LLM model libraries will be subject-specific, but they will still be large. If you look at Zack Shelby's graph from the other day showing the adaptation of Nvidia's model libraries to different-sized processors, you can see the correlation between model size and performance for von Neumann processors, and how Akida's digital SNN breaks that model by greatly reducing the model size while delivering superior performance.
As you say, the rules of syntax are part of the mix. This also matters in translation from one language to another: those crazy French always put the cart in front of the horse.
Even if we are at the point where systems can understand language, I don't think we are at the point of creating consciousness, where the silicon can think for itself. I don't know how you would even define consciousness in the context of a computer. Computers are already "aware" of their environment, and they can be programmed to respond to it. Tesla claims to have trained an AV on millions of hours of footage without using object classification.
Now you've made me go down this rabbit hole:
Is consciousness the difference between learning and thinking?
Learning is a prerequisite for thinking.
Language is not a prerequisite for sensory learning.
Sensory learning does lead to a level of thinking in sentient beings, e.g., burning one's hand on the hotplate (one-shot learning?). You "learn" that that was a painful experience; most people would "think" that they will not do it again.
Language is a prerequisite for some forms of abstract thinking, but does imagination require language?
Are there different forms of consciousness?
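The "hotplate" point above can be sketched in code. This is a toy illustration only (my own hypothetical names and thresholds, not any real SNN or product API): a single high-intensity event saturates an association in one trial, while mild events would need many repetitions, which is roughly what distinguishes one-shot learning from gradual statistical learning.

```python
# Toy one-shot associative learner (illustration only; all names and
# values here are made up for the example).

THRESHOLD = 0.5  # avoidance kicks in once the learned weight passes this

class OneShotLearner:
    def __init__(self):
        self.weights = {}  # stimulus -> learned avoidance weight in [0, 1]

    def experience(self, stimulus, pain):
        # A strongly painful outcome saturates the weight in a single
        # trial; a mild outcome only nudges it, so many repetitions
        # would be needed before avoidance is "learned".
        w = self.weights.get(stimulus, 0.0)
        self.weights[stimulus] = min(1.0, w + pain)

    def avoids(self, stimulus):
        return self.weights.get(stimulus, 0.0) >= THRESHOLD

agent = OneShotLearner()
agent.experience("hotplate", pain=1.0)    # one burn is enough
print(agent.avoids("hotplate"))           # True
agent.experience("lukewarm cup", pain=0.1)
print(agent.avoids("lukewarm cup"))       # False: mild events need repetition
```

Whether the "thinking" part (deciding never to do it again) is anything more than this kind of weight update is, of course, exactly the question.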
https://www.britannica.com/story/why-a-computer-will-never-be-truly-conscious
Why a computer will never be truly conscious
...
Even before Turing’s work, German quantum physicist Werner Heisenberg showed that there was a distinct difference in the nature of the physical event and an observer’s conscious knowledge of it. This was interpreted by Austrian physicist Erwin Schrödinger to mean that consciousness cannot come from a physical process, like a computer’s, that reduces all operations to basic logic arguments.
These ideas are confirmed by medical research findings that there are no unique structures in the brain that exclusively handle consciousness. Rather, functional MRI imaging shows that different cognitive tasks happen in different areas of the brain. This has led neuroscientist Semir Zeki to conclude that “consciousness is not a unity, and that there are instead many consciousnesses that are distributed in time and space.” That type of limitless brain capacity isn’t the sort of challenge a finite computer can ever handle.
Written by Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State University.
The great thing about Heisenberg is his uncertainty principle, which has been my lodestone.