The company demoed a basic neural network-rendered world back in 2018.
www.tomshardware.com
While probably not related to DLSS10 per se, I'm not sure if you've been keeping an eye on neuromorphic patents in recent months. I would hazard a guess that NVIDIA has evaluated Akida in some capacity. One of NVIDIA's latest patents refers to neuromorphic hardware:
worldwide.espacenet.com
I don't have the technical knowledge to fully understand the following, but neuromorphic technology is showing up in more and more company patents, and in use cases that astound me - see Pulse Biotech's SYSTEM FOR MONITORING TISSUE HEALTH patent, published recently, for instance. I'm not saying it's Akida, but it's heartening nonetheless to see large and small companies patenting inventions utilising neuromorphic tech, knowing we are the only commercially viable option at present if they want to bring it to market soon.
SELECTING STREAMS FOR OPTIMIZED INFERENCING
US2023297810A1 • 2023-09-21 •
NVIDIA CORP [US]
[0434] FIG. 25 is a block diagram of a neuromorphic processor 2500, according to at least one embodiment. In at least one embodiment, neuromorphic processor 2500 may receive one or more inputs from sources external to neuromorphic processor 2500. In at least one embodiment, these inputs may be transmitted to one or more neurons 2502 within neuromorphic processor 2500. In at least one embodiment, neurons 2502 and components thereof may be implemented using circuitry or logic, including one or more arithmetic logic units (ALUs). In at least one embodiment, neuromorphic processor 2500 may include, without limitation, thousands or millions of instances of neurons 2502, but any suitable number of neurons 2502 may be used. In at least one embodiment, each instance of neuron 2502 may include a neuron input 2504 and a neuron output 2506. In at least one embodiment, neurons 2502 may generate outputs that may be transmitted to inputs of other instances of neurons 2502. For example, in at least one embodiment, neuron inputs 2504 and neuron outputs 2506 may be interconnected via synapses 2508.
[0435] In at least one embodiment, neurons 2502 and synapses 2508 may be interconnected such that neuromorphic processor 2500 operates to process or analyze information received by neuromorphic processor 2500. In at least one embodiment, neurons 2502 may transmit an output pulse (or "fire" or "spike") when inputs received through neuron input 2504 exceed a threshold. In at least one embodiment, neurons 2502 may sum or integrate signals received at neuron inputs 2504. For example, in at least one embodiment, neurons 2502 may be implemented as leaky integrate-and-fire neurons, wherein if a sum (referred to as a "membrane potential") exceeds a threshold value, neuron 2502 may generate an output (or "fire") using a transfer function such as a sigmoid or threshold function. In at least one embodiment, a leaky integrate-and-fire neuron may sum signals received at neuron inputs 2504 into a membrane potential and may also apply a decay factor (or leak) to reduce a membrane potential. In at least one embodiment, a leaky integrate-and-fire neuron may fire if multiple input signals are received at neuron inputs 2504 rapidly enough to exceed a threshold value (i.e., before a membrane potential decays too low to fire). In at least one embodiment, neurons 2502 may be implemented using circuits or logic that receive inputs, integrate inputs into a membrane potential, and decay a membrane potential. In at least one embodiment, inputs may be averaged, or any other suitable transfer function may be used. Furthermore, in at least one embodiment, neurons 2502 may include, without limitation, comparator circuits or logic that generate an output spike at neuron output 2506 when a result of applying a transfer function to neuron input 2504 exceeds a threshold. In at least one embodiment, once neuron 2502 fires, it may disregard previously received input information by, for example, resetting a membrane potential to 0 or another suitable default value. In at least one embodiment, once membrane potential is reset to 0, neuron 2502 may resume normal operation after a suitable period of time (or refractory period).
[0436] In at least one embodiment, neurons 2502 may be interconnected through synapses 2508. In at least one embodiment, synapses 2508 may operate to transmit signals from an output of a first neuron 2502 to an input of a second neuron 2502. In at least one embodiment, neurons 2502 may transmit information over more than one instance of synapse 2508. In at least one embodiment, one or more instances of neuron output 2506 may be connected, via an instance of synapse 2508, to an instance of neuron input 2504 in the same neuron 2502. In at least one embodiment, an instance of neuron 2502 generating an output to be transmitted over an instance of synapse 2508 may be referred to as a "pre-synaptic neuron" with respect to that instance of synapse 2508. In at least one embodiment, an instance of neuron 2502 receiving an input transmitted over an instance of synapse 2508 may be referred to as a "post-synaptic neuron" with respect to that instance of synapse 2508. Because an instance of neuron 2502 may receive inputs from one or more instances of synapse 2508, and may also transmit outputs over one or more instances of synapse 2508, a single instance of neuron 2502 may therefore be both a "pre-synaptic neuron" and "post-synaptic neuron" with respect to various instances of synapses 2508, in at least one embodiment.
[0437] In at least one embodiment, neurons 2502 may be organized into one or more layers. In at least one embodiment, each instance of neuron 2502 may have one neuron output 2506 that may fan out through one or more synapses 2508 to one or more neuron inputs 2504. In at least one embodiment, neuron outputs 2506 of neurons 2502 in a first layer 2510 may be connected to neuron inputs 2504 of neurons 2502 in a second layer 2512. In at least one embodiment, layer 2510 may be referred to as a "feed-forward layer." In at least one embodiment, each instance of neuron 2502 in an instance of first layer 2510 may fan out to each instance of neuron 2502 in second layer 2512. In at least one embodiment, first layer 2510 may be referred to as a "fully connected feed-forward layer." In at least one embodiment, each instance of neuron 2502 in an instance of second layer 2512 may fan out to fewer than all instances of neuron 2502 in a third layer 2514. In at least one embodiment, second layer 2512 may be referred to as a "sparsely connected feed-forward layer." In at least one embodiment, neurons 2502 in second layer 2512 may fan out to neurons 2502 in multiple other layers, including to neurons 2502 also in second layer 2512. In at least one embodiment, second layer 2512 may be referred to as a "recurrent layer." In at least one embodiment, neuromorphic processor 2500 may include, without limitation, any suitable combination of recurrent layers and feed-forward layers, including, without limitation, both sparsely connected feed-forward layers and fully connected feed-forward layers.
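The layer topologies in [0437] really just describe how neuron outputs fan out to neuron inputs. Here's a minimal sketch of the three connection patterns - again entirely my own illustration, with neurons represented as plain integer ids and synapses as (pre, post) pairs:

```python
import random

random.seed(0)  # deterministic sparse sampling for this illustration

def fully_connected(pre_layer, post_layer):
    """Fully connected feed-forward: every pre neuron synapses onto every post neuron."""
    return [(pre, post) for pre in pre_layer for post in post_layer]

def sparsely_connected(pre_layer, post_layer, fan_out=2):
    """Sparse feed-forward: each pre neuron fans out to fewer than all post neurons."""
    return [(pre, post)
            for pre in pre_layer
            for post in random.sample(post_layer, fan_out)]

layer1, layer2, layer3 = [0, 1, 2], [3, 4, 5], [6, 7, 8, 9]

dense = fully_connected(layer1, layer2)       # 3 x 3 = 9 synapses
sparse = sparsely_connected(layer2, layer3)   # 3 x 2 = 6 synapses (fewer than all)
recurrent = fully_connected(layer2, layer2)   # a layer feeding back into itself
```

The "recurrent layer" case is just the same fan-out with the layer wired back to its own inputs, which is why the patent can mix and match these patterns freely.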
[0438] In at least one embodiment, neuromorphic processor 2500 may include, without limitation, a reconfigurable interconnect architecture or dedicated hard-wired interconnects to connect synapses 2508 to neurons 2502. In at least one embodiment, neuromorphic processor 2500 may include, without limitation, circuitry or logic that allows synapses to be allocated to different neurons 2502 as needed based on neural network topology and neuron fan-in/out. For example, in at least one embodiment, synapses 2508 may be connected to neurons 2502 using an interconnect fabric, such as a network-on-chip, or with dedicated connections. In at least one embodiment, synapse interconnections and components thereof may be implemented using circuitry or logic.
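As I read it, the reconfigurable interconnect in [0438] amounts to a routing table that can be rewritten to re-allocate synapses to different neurons. A rough software analogy (illustrative only - the `Interconnect` class and its methods are my invention, not anything from the patent or Akida):

```python
class Interconnect:
    """Toy reconfigurable interconnect: maps each synapse id to a (pre, post) neuron pair."""

    def __init__(self):
        self.routes = {}  # synapse id -> (pre-synaptic neuron, post-synaptic neuron)

    def allocate(self, synapse_id, pre, post):
        """Assign (or re-assign) a synapse to a pre/post neuron pair."""
        self.routes[synapse_id] = (pre, post)

    def fan_out(self, neuron_id):
        """All neurons this neuron drives over currently allocated synapses."""
        return [post for pre, post in self.routes.values() if pre == neuron_id]

fabric = Interconnect()
fabric.allocate("s0", pre=0, post=1)
fabric.allocate("s1", pre=0, post=2)
fabric.allocate("s1", pre=3, post=2)  # re-wire synapse s1 to a different pre neuron
```

A hard-wired interconnect would be the equivalent of freezing `routes` at fabrication time; the patent's point is that the fabric can instead be re-allocated to match whatever network topology and fan-in/out is needed.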