"Your mission, Diogenese, should you choose to accept it ... "
Simulating a real biological nervous system is too complex: years of research have given researchers a broad understanding of how biological neural networks operate, but the structural details of the networks in living organisms remain an unsolved mystery because of their sheer complexity, and this is a major obstacle to designing neural computing systems modelled directly on real biology.
Akida does not attempt to produce an anatomically correct neuron. Instead it provides a digital analog.
Digital neurons are a step further removed than analog neurons from replicating the anatomical neuron, but they perform the function of a neuron effectively for the purposes of input classification.
Indeed, Ella reminds us: 'tain't what you do - it's the way that you do it.'
As PvdM is entitled to say: "I did it my way."
The challenge of applying SNNs to real scenes is that a hand-designed spiking neural network is, by its nature, generally better suited to continuous recognition and inference on dynamic scenes. In practical applications, however, working out how a given scene can fully exploit the low power, high speed and event-driven character of a spiking network, so that it stands apart from second-generation artificial neural network technology and actually benefits from those advantages, requires detailed, application-specific exploration. At the same time, the application and development of spiking neural networks also depend on the development of neural computing chips: because of their novel architecture and computing model, they cannot achieve their theoretical performance on conventional chips.
I have fallen behind in my Japanglish classes, so my response is "Yes it is."
Limitations of spike-train coding: information transfer in spiking neural networks is based on spike trains, which requires encoding ordinary real-valued inputs into spike trains. There are currently two main classes of coding method: rate-based coding and time-based coding. The former ignores the temporal structure within the sequence, so precise information carried by spike timing may be lost; the latter can represent information more efficiently and accurately and has stronger biological plausibility. However, most current SNN algorithms focus on rate coding, so time-based coding algorithms still need to be explored.
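For a concrete picture of the two coding families described above, here is a minimal NumPy sketch (illustrative only, not Akida or MetaTF code) that encodes the same input value once as a rate-coded train and once as a single latency-coded spike; the variable names and window length are arbitrary choices of mine.

```python
# Minimal sketch (not Akida/MetaTF code): two common ways to turn a real-valued
# input x in [0, 1] into a spike train over T timesteps.
import numpy as np

rng = np.random.default_rng(0)
T = 100          # timesteps in the encoding window
x = 0.8          # normalised input intensity

# Rate coding: spike probability per timestep is proportional to x.
# Information lives in the spike count; the exact spike times carry nothing.
rate_train = rng.random(T) < x          # boolean spike train
print("rate coding    -> spikes:", rate_train.sum(), "of", T)

# Temporal (latency) coding: a single spike whose timing encodes x.
# Stronger inputs fire earlier, so one spike can carry a precise value.
latency = int(round((1.0 - x) * (T - 1)))
temporal_train = np.zeros(T, dtype=bool)
temporal_train[latency] = True
print("latency coding -> single spike at t =", latency)
```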
PvdM tried the rest, now he uses the best. Akida uses pulse-rank-coding:
The following passage from US20210027152 explains the difference between rate coding and rank coding.
https://worldwide.espacenet.com/pat...98/publication/US2021027152A1?q=us20210027152
[0054]
Rate coding is shown in the top panel 310. The spikes received by two input synapses 320, 330 of a multitude of synapses are shown over one complete integration period, labeled t1, in a plurality of integration periods. The first synapse received 25 spikes, while the second synapse received 27 spikes during this integration period. The sum over all synapses is 52, which is the simulated membrane potential of the neuron. Subsequently a non-linear function such as Tanh or a linear rectifier (ReLU) function is applied to the simulated output value. The resulting output value is transmitted as a series of spikes to one or more synapses in the next neural layer. The integration time is long, to allow sufficient spikes to occur to receive a value.

In the lower left panel 340, rank coding is illustrated. The spikes received by four of a multitude of synapses are shown for six integration periods labeled t0 to t5. The integration time is short, and repeating spikes within each integration period are ignored. The integrated value for the first integration period is three, and four in the subsequent four integration periods. The last integration period has the value 2. These values (∑) are the simulated membrane potential of the neuron. If the simulated membrane potential reaches or exceeds a threshold value, a spike is transmitted to one or more synapses in the next neural layer.

In the middle right-hand panel 350, the integration method of a section of a neuron of a plurality of neurons in a conventional perceptron is shown. A collection of weight values labeled W11 to W95 are multiplied with input values I0 to I9. The resulting values are accumulated to form the neuron's simulated membrane potential. Subsequently a non-linear function such as Tanh or a linear rectifier (ReLU) function is applied to the simulated output value. The resulting output value is transmitted as an integer or floating-point value to one or more synapses in the next neural layer.

In the lower right-hand panel 370, a conventional binary coding method is shown for reference. Binary coding schemes are widely used in conventional computer systems to encode characters and numbers. Boolean algebra is applied to compute with binary coded numbers and characters.
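As a rough illustration of the two integration schemes the patent describes, here is a short Python sketch (a paraphrase of the passage above, not BrainChip's implementation); the event format, helper names and toy numbers are my own assumptions.

```python
# Minimal sketch of the two integration schemes described in US20210027152
# (paraphrased, not BrainChip's implementation). Spike events are given as
# (timestep, synapse_id) pairs; names and numbers here are illustrative.
import numpy as np

def rate_code_neuron(events, window):
    """Long integration window: sum all spikes over all synapses, apply ReLU."""
    counts = {}
    for t, syn in events:
        if t < window:
            counts[syn] = counts.get(syn, 0) + 1
    membrane = sum(counts.values())          # e.g. 25 + 27 = 52 in the patent
    return max(membrane, 0)                  # ReLU on the simulated potential

def rank_code_neuron(events, period, threshold):
    """Short integration periods; repeated spikes from the same synapse
    within a period are ignored; fire when the count reaches threshold."""
    out_spikes = []
    n_periods = max(t for t, _ in events) // period + 1
    for p in range(n_periods):
        active = {syn for t, syn in events if t // period == p}  # de-duplicated
        if len(active) >= threshold:
            out_spikes.append(p)
    return out_spikes

# Toy event stream: (timestep, synapse_id)
events = [(0, 0), (0, 1), (1, 2), (2, 2), (3, 0), (3, 1), (3, 3)]
print(rate_code_neuron(events, window=10))            # -> 7
print(rank_code_neuron(events, period=2, threshold=3))  # -> [0, 1]
```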
Difficulty of training and learning: for direct training of spiking neural networks, most supervised learning methods are based on gradients and lack biological plausibility; the other way to obtain a trained SNN model is to convert a trained conventional neural network directly, but that conversion is limited by accuracy loss from several sources. Compared with today's mature artificial neural networks, the training and learning of spiking networks on genuinely large, deep networks therefore still has a long way to go.
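To make the ANN-to-SNN conversion route (and one source of its accuracy loss) concrete, here is a minimal sketch showing how an integrate-and-fire neuron driven for a finite number of timesteps approximates a ReLU activation through its firing rate; the neuron model and parameters are generic textbook choices, not anything specific to Akida.

```python
# Illustrative sketch, not a production converter: an integrate-and-fire neuron
# driven by a constant input approximates ReLU via its firing rate, and the
# finite simulation length is one source of accuracy loss in ANN-to-SNN conversion.
def relu(a):
    return max(a, 0.0)

def if_neuron_rate(a, T=100, threshold=1.0):
    """Integrate-and-fire neuron driven by constant input a for T steps;
    returns the firing rate (spikes / T), which approximates ReLU(a)."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += a
        if v >= threshold:
            spikes += 1
            v -= threshold       # reset by subtraction
    return spikes / T

for a in (-0.3, 0.17, 0.5, 0.83):
    print(f"a={a:+.2f}  ReLU={relu(a):.3f}  SNN rate (T=100)={if_neuron_rate(a):.3f}")
```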
The proof of the pudding ... Ask @uiux.
Lower application accuracy on more complex tasks: spiking neural networks have long been controversial, and one reason is that their accuracy in applications is often inferior to that of conventional AI networks. Both the coding and the training issues described above can hurt accuracy when they are applied to more complex tasks. How to improve the application accuracy of spiking networks while retaining their original advantages and characteristics is therefore a major challenge for their future development.
https://www.hackster.io/news/brainc...work-accelerators-go-mass-market-2a6572c67c50
On the question of accuracy, the 4-bit Akida stacks up very well against 32-bit CNNs.
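For readers wondering what "4-bit" means in practice, here is a generic sketch of symmetric uniform weight quantization; Akida's actual quantization scheme is not spelled out here, so treat this as an assumption-laden illustration rather than how the chip does it.

```python
# Generic sketch of symmetric uniform 4-bit weight quantization
# (not necessarily Akida's exact method).
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.5, size=8).astype(np.float32)   # toy 32-bit weights

bits = 4
qmax = 2 ** (bits - 1) - 1                 # 7 for signed 4-bit
scale = np.abs(w).max() / qmax             # one scale per tensor
w_q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
w_deq = w_q * scale                        # what the network effectively uses

print("fp32   :", np.round(w, 3))
print("int4   :", w_q)
print("dequant:", np.round(w_deq, 3))
print("max abs error:", np.abs(w - w_deq).max())
```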
Once again, I rest my briefs.