I have just been communing with the muse about Rain AI's patented analog random nanowire neuron.
US10430493B1 Systems and methods for efficient matrix multiplication 20180405
[0056] FIG. 3 illustrates a diagram of a sparse vector-matrix multiplication (SVMM) engine 22. The SVMM engine 22 includes a silicon substrate 24, control circuitry within a circuit layer 26, for example, a complementary metal-oxide-semiconductor (CMOS) layer, a grid of electrodes 28, and a randomly formed mesh 30 of coaxial nanowires 10 deposited on top of the grid 28. Mesh 30 is placed above or formed on top of the electrode grid 28, providing physical contact between the mesh 30 and the top of electrode grid 28. Alternatively, the electrodes of the grid 28 can be grown through the mesh 30 as pillars of metal. The coaxial nanowires 10 deposited randomly on top of the electrodes of the grid 28 can provide electrical connections between the electrodes that they contact. Consequently, the coaxial nanowires 10 sparsely connect the electrodes of the grid 28. The strength of the electrical connections between the electrodes can be modulated based on increasing or decreasing the resistances of the coaxial nanowires 10.
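To pin down what the patent's engine computes, here is a minimal sketch of the mathematics it reduces to: a sparse, randomly masked conductance matrix multiplying a vector of input voltages. The grid size, nanowire density and conductance range are illustration values I have assumed, not figures from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 8, 8   # electrode grid dimensions (assumed for illustration)
DENSITY = 0.2        # fraction of electrode pairs bridged by a nanowire (assumed)

# Random sparse connectivity: True where a deposited nanowire happens to
# bridge an input electrode to an output electrode.
mask = rng.random((N_OUT, N_IN)) < DENSITY

# Each existing connection gets a conductance; modulating nanowire
# resistance corresponds to changing these values.
G = mask * rng.uniform(0.1, 1.0, size=(N_OUT, N_IN))

x = rng.uniform(0.0, 1.0, size=N_IN)  # input voltages on the row electrodes

# Currents summed on the output electrodes: a sparse vector-matrix multiply.
y = G @ x
print(np.round(y, 3))
```

The thing to notice is that the mask, i.e. which matrix entries exist at all, is fixed by wherever the nanowires happened to land.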
...
[0059] In various applications, the resistances formed at the intersection of the electrodes of the grid 28 and the mesh 30 can be adjusted by tuning or fitting to known sets of input/output pairs until a useful matrix of conductances is formed.*
This is similar to sculpting, where you start with a big rock and chip away all the bits that don't look like a naked lady.
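In software terms, that "tune until a useful matrix is formed" step amounts to fitting the free conductances to known input/output pairs while the random connectivity mask stays fixed. A minimal sketch, with assumed sizes and a made-up target matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

N_IN, N_OUT, N_PAIRS = 8, 8, 64
mask = rng.random((N_OUT, N_IN)) < 0.2           # fixed random nanowire connectivity
G = mask * rng.uniform(0.1, 1.0, (N_OUT, N_IN))  # initial conductances

W_target = rng.normal(size=(N_OUT, N_IN))  # the matrix we wish the mesh realized
X = rng.uniform(size=(N_IN, N_PAIRS))      # known inputs
Y = W_target @ X                           # known outputs

lr = 0.01
for step in range(2000):
    err = G @ X - Y                    # prediction error on the known pairs
    grad = err @ X.T / N_PAIRS         # gradient of mean squared error w.r.t. G
    G -= lr * grad * mask              # only connected crossings are tunable
    G = np.clip(G, 0.0, None) * mask   # conductances cannot go negative

print("final MSE:", np.mean((G @ X - Y) ** 2))
```

With a sparse mask and non-negative conductances the error typically plateaus well above zero: the random substrate cannot realize an arbitrary target matrix, which rather anticipates the criticism below.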
The nanowires have conductive offshoots connecting the insulated core wire to the external electrodes.
The inventor was apparently trying to emulate the structure of the human brain more closely than the standard analog neuron does, but I think he confused complexity with randomness. The human brain's interconnexion of neurons and synapses is indeed complex, but it is not random. It is arranged in accordance with the DNA code, and the synaptic connexions, leaving aside the original pre-wired ones, are formed in response to external data inputs from eyes, ears, nose, tongue ...
Given that the human brain can memorize information virtually instantaneously, and given that new synapses could not grow and connect neurons in so short a time, a potential explanation is that the neurons are densely interconnected by a network of dormant synapses which are activated when new information is added to memory.
So each neuron has many more synaptic inputs than those actively involved in memory.
The next question is whether a neuron can be involved in the memory of more than one item. I would guess that it can, otherwise brain overload would be a real thing. That would mean that, to remember a first object, a first group of the neuron's synapses would need to be activated, and to remember a second object, a second group of synapses would need to be activated.
The synaptic weights (ON/OFF or 0.1 to 1.0?) could determine which synapses are involved in recognizing the input data spikes.
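Here is a toy model of that guess, with made-up numbers and function names, just to pin the idea down: a neuron with many physical synapses, each memorized item activating its own small group, and recognition amounting to sufficient overlap between the incoming spikes and an item's active group.

```python
import numpy as np

rng = np.random.default_rng(2)

N_SYNAPSES = 100  # dense physical wiring, mostly dormant (the assumption above)

def memorize(group):
    """Activate one group of synapses (weights ON = 1, dormant = 0)."""
    w = np.zeros(N_SYNAPSES)
    w[group] = 1.0
    return w

def recognize(w, spikes, threshold=0.8):
    """Fire if enough of the item's active synapses receive input spikes."""
    overlap = w[spikes].sum() / w.sum()
    return overlap >= threshold

# Two items memorized on the same neuron, each via its own synapse group.
item_a = rng.choice(N_SYNAPSES, size=10, replace=False)
item_b = rng.choice(N_SYNAPSES, size=10, replace=False)
w_a, w_b = memorize(item_a), memorize(item_b)

print(recognize(w_a, item_a))  # True: item A drives its own group
print(recognize(w_a, item_b))  # False (almost surely): groups barely overlap
```

Swapping the ON/OFF weights for graded values between 0.1 and 1.0 would change only the overlap arithmetic, not the scheme.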
Well, the point is that Rain sought to make sparse, random neuronal interconnexions, whereas wetware is both organized and densely interconnected.
So, the analog system of the Rain patents, on which they raised millions, including a promise from Sam Altman, is a system which may not work.
Indeed, Rain now talks about a digital system which uses MACs (multiply-accumulate units).
https://rain.ai/approach
AI workloads possess extraordinary compute and memory demands, and they are often limited by legacy computer architectures. Rain AI is pioneering the Digital In-Memory Computing (D-IMC) paradigm to address these inefficiencies to refine AI processing, data movement and data storage.
Unlike traditional In-Memory Computing designs, Rain AI’s proprietary D-IMC cores are scalable to high-volume production and support training and inference. When combined with Rain AI's proprietary quantization algorithms, the accelerator maintains FP32 accuracy.
Reaping the benefits of high-accuracy, AI-focused numerics in hardware remains a core challenge in AI training and inference. Rain AI’s block brain floating point scheme ensures no accuracy loss compared to FP32. The numerical formats are co-designed at the circuit level with our D-IMC core, leveraging the immense performance gains of optimized 4-bit and 8-bit matrix multiplication. Our flexible approach ensures broad applicability across diverse networks, setting a new standard in AI efficiency.
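For readers who haven't met block floating point: the generic version of the idea (not Rain's proprietary format, whose details are not public) keeps one shared power-of-two exponent per block of values and a few-bit signed mantissa per value, so the inner loops of a matrix multiply can run on cheap integer MACs. A sketch:

```python
import numpy as np

def bfp_quantize(x, block=16, mant_bits=4):
    """Quantize to block floating point: each block of values shares one
    power-of-two scale; each value keeps a `mant_bits`-bit signed mantissa.
    Returns the dequantized array so the rounding error can be inspected."""
    x = np.asarray(x, dtype=np.float32)
    pad = (-len(x)) % block
    xb = np.pad(x, (0, pad)).reshape(-1, block)

    # Shared exponent per block, from the largest magnitude in the block.
    max_mag = np.abs(xb).max(axis=1, keepdims=True)
    scale = 2.0 ** np.ceil(np.log2(np.maximum(max_mag, 1e-38)))

    # Signed integer mantissas in [-(2**(mant_bits-1)-1), 2**(mant_bits-1)-1].
    qmax = 2 ** (mant_bits - 1) - 1
    mant = np.round(xb / scale * qmax).clip(-qmax, qmax)

    return (mant / qmax * scale).reshape(-1)[: len(x)]

x = np.random.default_rng(3).normal(size=64)
xq = bfp_quantize(x, block=16, mant_bits=4)
print("max abs rounding error:", np.abs(x - xq).max())
```

Whether a 4-bit mantissa plus co-designed quantization really "ensures no accuracy loss compared to FP32" is exactly the sort of claim the next line is sceptical of.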
It looks like their special sauce is plain old catchup.
* [#### this technique is known in the industry as FITUMI "fake-it-til-you-make-it" ####]