The way I read it was a much less confident, perhaps reserved, version of our CEO. I think the reality and enormity of introducing new tech into the current world (wars, recessions, inflation, interest-rate rises, blah, blah, blah) has dawned on him (not saying he doesn't know what he is doing, but the climate has changed). His comments are valid whichever way this eventually goes. He hasn't said that the evaluations will actually be converted; that's just a goal for the company.

“In the coming quarter, the Company will focus on key sales targets and converting technical evaluations into paid licenses.”
This comment has me intrigued.
Sean has not struck me as a CEO who makes throwaway comments. Every interview has been clear and considered.
I’m sure he knows that by making this statement he is creating the expectation of paid licences occurring during this next quarter. (Note the plural.)
This means we are to expect announcements. Otherwise he is putting his reputation on the line.
The team have described him as laser-focussed! I don’t expect sloppy comments.
Am I reading too much into it?
As I said, I only know what I am told and read.

Hi Fact Finder,
Yes, I'm a newly emerged poster and don't really post here like some of you do, mainly because I have a full-time job and a young family. Just to give some background, I'm an electrical engineer by profession, so I'm not an expert in electronics or chips, but I have enough basic understanding to grasp the difference between Akida, GPUs and CPUs. Since you have commented about me and my post in a separate post as well, let me provide my view on the points above. For simplicity, I just pointed to your point 5 in the original post, but even the cost comparison is not viable if you cannot replace a GPU with Akida in all use cases. Akida is for the edge, and GPUs are for graphics processing.
1. I'm not sure what you are referring to here and I can't comment further without knowing what parameter they are comparing.
2. Akida is an AI chip, and I think here he means that Akida can be used to process all the sensor data of the car. That doesn't mean Akida can outperform a GPU in the areas GPUs are good at. Do you think Akida can replace a GPU and handle the graphics you see in modern cars? Can an Akida PCIe board replace a GPU and run modern games? If that were the case, our market cap wouldn't be where it is now.
3. Yes, because Akida is an AI chip designed to work at the edge and process sensor data at lower power. Therefore, it will outperform GPUs in those edge use cases when it comes to power and cost. That doesn't mean Akida is better than GPUs in a general sense.
4. I would like to know what they were benchmarking. Outperformance will depend on the parameter being benchmarked. For example, you can benchmark the emissions of cars, but just because one car has lower emissions, you can't claim it is superior to its peers in every way; a car with higher emissions will outperform its peers on other parameters. So it is in fact misleading to say "AKIDA 1.0 outperforms GPUs and CPUs" without mentioning in what way.
5. It's a feature they added so that it's easier for end users to use Akida. Can you explain how this improves the performance of Akida?
6. Yes, so it's good for edge applications. Another feature of Akida.
7. I'm not sure how good Akida is at maths, but I would like to know from @Diogenese whether Akida can outperform a CPU in this case.
8 and 9. Again, these relate to sensor data, so no one is saying Akida cannot process sensor data.
Also tell him not to sit on his hat ...

Hey Blind Freddie, these jokers reckon Brainchip's current cash of $24.6 million gives them 6 quarters of operating runway.
How many quarters are there in a year?
What’s that? 4. So 4 into 6 goes how much?
A year and a half so if you double $24.6 million what’s that?
Really? $49.2 million. So that’s less than the $50 million you said they can get their hands on, which on its own would be three years and a bit left over.
Do you reckon these jokers have got that right?
You do? Three years plus? I don’t believe it. Are you sure your maths is right?
I mean, what if you are wrong about how many quarters there are in a year?
I’m gunna Google it to be sure.
Hey Freddie, you’re right. No doubt about you. Mum was right, you certainly are the one with the brains.
Three years, so that bloke saying the next 4Cs were make or break was talking through his hat. I’ll tell him he’s dreaming if I see him down the pub.
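For anyone wanting to check Blind Freddie's sums, the arithmetic above can be sketched in a few lines of Python. The figures are the ones quoted in the post; treating the quarterly burn rate as simply cash divided by the claimed runway is an assumption for illustration only:

```python
# Back-of-the-envelope check of Blind Freddie's runway sums.
cash = 24.6e6            # current cash on hand (USD), per the quoted figure
runway_quarters = 6      # operating runway claimed by the commentators

burn_per_quarter = cash / runway_quarters
print(f"Implied burn: ${burn_per_quarter:,.0f} per quarter")   # $4,100,000

print(f"Runway: {runway_quarters / 4} years")                  # 1.5 years

# The further $50 million said to be within reach buys, on its own:
extra_years = 50e6 / burn_per_quarter / 4
print(f"Extra runway: {extra_years:.2f} years")  # ~3.05: "three years and a bit"
```

At the same burn rate, the extra $50 million alone covers a bit over three years, which is where the "three years plus" conclusion comes from.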
My opinion only and lots of plagiarism so DYOR & add ups.
FF
AKIDA BALLISTA
Ssssh, I put it on his favourite chair to teach him a lesson. He is so smug about being able to do maths.

Also tell him not to sit on his hat ...
If I remember correctly, the other thing that is mentioned continuously is that Akida is not a maths chip.

As I said, I only know what I am told and read.
Therefore I can tell you that you are wrong in your view at point 2. I was present at the 2019 AGM, as were many who post here, when Peter van der Made made it perfectly clear that he could throw away the GPUs and CPUs in current cars and do all the compute with 100 AKD1000 chips, as they were then called. He further stated that with nine AKD1000 chips, including redundancy, he could cover all the sensors necessary for ADAS.
(I also intended to mention that, as well as the throwing-out statement, he referenced the fact that the current compute was costing a minimum of three thousand plus dollars, and that AKIDA was likely to be $10 a chip in bulk, so total compute cost per vehicle would be about $1,000, or one third. He could not have been any clearer. Following this, in early 2020, the then CEO Mr Dinardo, in one of his webinars, went to some length to hose down shareholder enthusiasm about this use of AKIDA, saying that while Peter van der Made was correct, their sole focus was to target the Edge.)
This debate about AKIDA only being suitable for Edge compute is a long-dead, smelly red herring. It has been stated by the company many times that they have chosen to target the Edge because there is no incumbent player there, and so they do not have any true competitor in that market.
The intention has always been to move back up the supply chain into the data centre as the company becomes established and the technology is recognised and understood.
You will notice that up to this point @Diogenese has not taken up my challenge to correct my viewpoint, as he has on many prior occasions.
I accept you are genuine but you really do need to do a lot more research around the AKIDA technology value proposition.
For example, the following research from Sandia makes clear that the full power of SNN computing is not yet understood (with the reservation that this is not the case where Peter van der Made and his team are concerned):
Sandia Researchers Show Neuromorphic Computing Widely Applicable
March 10, 2022
ALBUQUERQUE, N.M., March 10, 2022 — With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations using the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor.
“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”
Franke models photon and electron radiation to understand their effects on components.
The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-neuron Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”
The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.
Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.
There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”
Severa wrote several of the experiment’s algorithms.
Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking.

Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information. But because the process only depletes energies from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not.

The conceptually bio-based process has another advantage: Its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions. The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
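The accumulate-then-spike behaviour Aimone describes is essentially the textbook leaky integrate-and-fire neuron. A minimal sketch of that mechanism (illustrative only, not any particular chip's implementation; the threshold and leak values are arbitrary):

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: accumulate charge, spike on crossing threshold."""
    v = 0.0          # membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + x      # decay a little, then add the incoming charge
        if v >= threshold:    # threshold reached: emit a spike and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)  # stay silent; nothing sent downstream
    return spikes

# A mostly quiet input stream produces few spikes; only the events that
# actually contribute charge lead to output activity.
print(lif_neuron([0.0, 0.6, 0.6, 0.0, 0.0, 0.3, 0.9]))  # → [0, 0, 1, 0, 0, 0, 1]
```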
The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital are at least partially determined by the preceding length of stay.
Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.
“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”
The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.
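The random-walk method described above is easy to sketch in ordinary Python. Each walker below is a Markov chain (the next position depends only on the current one), and a Monte Carlo loop reduces many walks to a single summary statistic, much as the Sandia team's neurons reported summaries instead of raw trajectories. All parameters here are illustrative, not taken from the study:

```python
import random

random.seed(42)

def random_walk(steps, barrier):
    """One Markov-chain walk: each step depends only on the current position."""
    position = 0
    for _ in range(steps):
        position += random.choice((-1, 1))
        if position >= barrier:      # walker has diffused past the barrier
            return True
    return False

# Monte Carlo: many independent walks, then one summary statistic at the end.
n_walks = 10_000
crossings = sum(random_walk(steps=100, barrier=10) for _ in range(n_walks))
print(f"Estimated crossing probability: {crossings / n_walks:.3f}")
```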
The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger-scale systems then combine multiple chips onto a board.
“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”
The work was funded under the NNSA Advanced Simulation and Computing program and Sandia’s Laboratory Directed Research and Development program.
A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories.
About Sandia National Laboratories
Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.
Source: Sandia National Laboratories
*********************************************************************************************************************************************************
Moving on from this: in the final report to NASA on the outcome of its Phase 1 project to provide a design for a HARDSIL AKIDA 1000 for unconnected autonomous space applications, Vorago stated that AKIDA would allow the Rover to achieve full autonomy and the NASA goal of speeds up to 20 kph.
AKIDA technology is not just about processing sensors, and again, accepting that you are genuine, why do you not contact Edge Impulse or Brainchip and ask them for the details of the benchmarking they engaged in?
I will say this: at the 2021 AI Field Day, Anil Mankar said the following about GPUs and AKIDA:
"And that's why we are able to do low power analysis.. The same Mobilenet V1 that you can run on a GPU I can do inference on it. I'll be doing exactly the same level of computation that you want to do because depending on the number of parameters in your CNN, what your input resolutions, you have to do certain calculations to find object classification. We do something similar, but because we do an event domain, I will not be doing, I will be avoiding operations where they are zero value events."
You state that if AKIDA could do these things then our valuation would not be where it is, and that is the whole point: AKIDA technology is beyond anything you are imagining its limits to be.
Finally, these researchers do not share your view regarding being able to do regression analysis using SNN technology; in fact, they think they are on to something. However, Peter van der Made and his team beat them to it by a significant margin:
Spiking Neural Networks for Nonlinear Regression
Alexander Henkes, Jason K. Eshraghian, Member, IEEE, Henning Wessels
Abstract—Spiking neural networks, also often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption over traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. To broaden the pathway toward engineering applications, where regression tasks are omnipresent, we introduce this exciting technology in the context of continuum mechanics. However, the nature of spiking neural networks poses a challenge for regression problems, which frequently arise in the modeling of engineering sciences. To overcome this problem, a framework for regression using spiking neural networks is proposed. In particular, a network topology for decoding binary spike trains to real numbers is introduced, utilizing the membrane potential of spiking neurons. Several different spiking neural architectures, ranging from simple spiking feed-forward to complex spiking long short-term memory neural networks, are derived. Numerical experiments directed towards regression of linear and nonlinear, history-dependent material models are carried out. As SNNs exhibit memory-dependent dynamics, they are a natural fit for modelling history-dependent materials, which are prevalent through all of engineering sciences. For example, we show that SNNs can accurately model materials that are stressed beyond reversibility, which is a challenging type of non-linearity. A direct comparison with counterparts of traditional neural networks shows that the proposed framework is much more efficient while retaining precision and generalizability. All code has been made publicly available in the interest of reproducibility and to promote continued enhancement in this new domain.
(Sorry, I left out that this paper was published in October 2022.)
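The decoding idea in the abstract, reading a real number off the membrane potential rather than off the binary spikes themselves, can be sketched in plain Python. This is a toy illustration of the concept, not the authors' actual network; the leak factor `beta` is an arbitrary choice:

```python
def decode_membrane(spike_train, beta=0.9):
    """Feed a binary spike train into a non-spiking leaky integrator and
    read out its final membrane potential as a real-valued output."""
    v = 0.0
    for s in spike_train:
        v = beta * v + (1 - beta) * s   # leaky integration of incoming spikes
    return v

# Denser spike trains drive the membrane potential toward 1, sparser toward 0,
# so a graded real number is recovered from purely binary inputs.
low  = decode_membrane([0, 0, 1, 0, 0, 0, 1, 0, 0, 0])
high = decode_membrane([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])
print(round(low, 3), round(high, 3))
```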
My opinion only so DYOR
FF
AKIDA BALLISTA
Yes, and I asked the convoluted question at the 2019 AGM, “While AKIDA does not do maths, can it do maths?”, to which Peter van der Made replied, “Yes, it can do maths.”

If I remember correctly, the other thing that is mentioned continuously is that Akida is not a maths chip.
Yes mate lol

Yes, and I asked the convoluted question at the 2019 AGM, “While AKIDA does not do maths, can it do maths?”, to which Peter van der Made replied, “Yes, it can do maths.”
(His exact words.)
When Peter van der Made states AKIDA does not do maths, he is referring to multiply-accumulate computing in the way Von Neumann compute operates.
But of course you know this don’t you.
My opinion only DYOR
FF
AKIDA BALLISTA
Just as a preface, this is beyond my pay grade (and, to top it off, statistics is not my strong suit).

As I said, I only know what I am told and read.
"But sometimes I get the feeling trying to compare Akida with what is current is like comparing the horse with the automobile."

The fig leaf serves a purpose.

Just as a preface, this is beyond my pay grade (and, to top it off, statistics is not my strong suit).
Before PvdM made the statement at that AGM a couple of years ago that Akida could do maths, I had expressed the opinion that Akida was not suited to maths, and I was hoping you would have the good grace not to embarrass me by dragging it up again.
So PvdM knew it could do maths - Sandia had to carry out a research project to prove it.
“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
Here is a Sandia random walk patent (Priority: 20210312):
US2022292364A1 DEVICE AND METHOD FOR RANDOM WALK SIMULATION
[0002] This invention was made with United States Government support under Contract No. DE-NA0003525 between National Technology & Engineering Solutions of Sandia, LLC and the United States Department of Energy. The United States Government has certain rights in this invention.
[0004] Random walk refers to the stochastic (random) process of taking a sequence of discrete, fixed-length steps in random directions. This process has been used to solve a wide range of numerical computation tasks and scientific simulations. However, the electrical power consumption by a computer simulating a large number of random walkers can be large ...
[0006] An illustrative embodiment provides a computer-implemented method for simulating a random walk in spiking neuromorphic hardware. The method comprises receiving, by a buffer count neuron, a number of spiking inputs from a number of upstream mesh nodes, wherein the spiking inputs each include an information packet comprising information associated with a simulation of a specific random walk process. A buffer generator neuron in the mesh node, in response to the inputs, generates a first number of spikes until the buffer count neuron reaches a first predefined threshold. Upon reaching the first predefined threshold the buffer generator neuron sends a number of buffer spiking outputs to a spike count neuron in the mesh node. The spike count neuron counts the buffer spiking outputs from the buffer generator neuron. In response to the buffer spiking outputs, a spike generator neuron in the mesh node generates a second number of spikes until the spike count neuron reaches a second predefined threshold. Upon reaching the second predefined threshold, the spike generator neuron sends a number of counter spiking outputs to a probability neuron in the mesh node, wherein the counter spiking outputs each include an information packet comprising updated information associated with the simulation of the specific random walk process. The probability neuron selects a number of downstream mesh nodes to receive the counter spiking outputs generated by the spike generator neuron and sends the counter spiking outputs to the selected downstream mesh nodes.
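The counting mechanism in claim [0006] can be sketched in a few lines. This is a deliberately simplified illustration of the idea — a neuron accumulates incoming spikes until a threshold, then triggers a paired generator neuron to emit a burst — not Sandia's patented implementation, and all the names and constants are my own:

```python
# Simplified sketch of the threshold-counting neuron pair in claim [0006]:
# a "count" neuron tallies incoming spikes and, on reaching its threshold,
# triggers a "generator" neuron to emit a burst to downstream nodes.
# Illustrative only - not the patented Sandia design.

class CountNeuron:
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def receive(self, n_spikes):
        """Accumulate spikes; return True once the threshold is reached."""
        self.count += n_spikes
        if self.count >= self.threshold:
            self.count = 0          # reset for the next accumulation cycle
            return True
        return False


def generator_burst(burst_size):
    """Emit a fixed burst of spikes when the paired count neuron fires."""
    return [1] * burst_size


buffer_count = CountNeuron(threshold=4)
outputs = []
for incoming in [1, 1, 1, 1, 1, 1, 1, 1]:    # eight upstream spikes
    if buffer_count.receive(incoming):
        outputs.append(generator_burst(3))   # one 3-spike burst per crossing

print(len(outputs))  # two threshold crossings -> two bursts
```

The same accumulate-threshold-burst pattern repeats down the chain in the claim (buffer count, spike count, probability neuron), with each stage gating the next.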
... so here I am with my wilted fig leaf ...
I initially thought about being a Chemist when my father did not want me to be a lawyer. He wanted me to be a builder, but what he wanted most was for me not to be a lawyer, so that’s what I did. Very rational decision by a 17 year old. LOL

Yes mate lol
I’m not an engineer of any kind, just a chemist. But sometimes I get the feeling trying to compare Akida with what is current is like comparing the horse with the automobile. We have to use a measure with which we are familiar, but what does it really mean when taken as a whole?
“He gave the information to Intel.”

On a different note: why it’s important to keep the secret sauce secret.
Four Samsung employees charged with semiconductor tech theft
It is reported that four current and former Samsung employees have been charged with theft of proprietary semiconductor technology. These employees reportedly stole highly valued semiconductor chip technology from Samsung and leaked them to an overseas firm.
The Seoul Central District Prosecutors Office has indicted those four employees with physical detention for violating the unfair competition prevention act and the industrial technology protection act. Two of those employees are former engineers, while the remaining two work as researchers for Samsung Engineering.
One former employee, who worked for Samsung’s semiconductor division, obtained blueprints and operation manuals for the ultrapure water system and other critical technical data while searching for a job. He leaked that data to a Chinese semiconductor consulting firm, and when he got a job there, he used the stolen data to order an ultrapure water system.
Prosecutors said another former Samsung employee was charged with stealing a file containing key semiconductor technology. He allegedly gave that file to the company’s competitor Intel while still working for Samsung. He stole data by capturing images of the data in the file.
Source: www.sammobile.com
Learning
Congrats mate…just do what I do and tell the other half that the extra shares are for the kids’ future!! They surely can’t argue with that!!

I picked up 1,750 at 71.5.
Would have loved to buy more but I'm trying to be fiscally responsible with the arrival of my first in February!
Couldn't help myself today though lol.
It’s truly amazing, the tech; it just takes time. I hope I don’t have to sell out for the house. Regardless, it’s always been a wait until 2025 since I entered personally.

As I said I only know what I am told and read.
Therefore I can tell you that you are wrong in your view at point 2. I was present at the 2019 AGM, as were many who post here, when Peter van der Made made it perfectly clear that he could throw away the GPUs and CPUs in current cars and do all the compute with 100 AKD1000 chips, as they were then called. He further stated that with nine AKD1000 chips, including redundancy, he could cover all the sensors necessary for ADAS.
(I also intended to mention that, as well as the throwing-out statement, he referenced the fact that the current compute was costing a minimum of three thousand dollars plus, and that AKIDA was likely to be $10 a chip in bulk, so the total compute cost per vehicle would be about $1,000, or one third. He could not have been any clearer. Following this, in early 2020, the then CEO Mr Dinardo went to some length in one of his webinars to hose down shareholder enthusiasm about this use of AKIDA, saying that while Peter van der Made was correct, their sole focus was to target the Edge.)
This debate about AKIDA only being suitable for Edge compute is a long-dead, smelly red herring. The company has stated many times that it chose to target the Edge because there is no incumbent player, and so it has no true competitor in that market.
The intention has always been to move back up the supply chain into the data centre as the company becomes established and the technology is recognised and understood.
You will notice that up to this point @Diogenese has not taken up my challenge to correct my viewpoint, as he has on many prior occasions.
I accept you are genuine but you really do need to do a lot more research around the AKIDA technology value proposition.
For example, the following research from Sandia makes clear that the full power of SNN computing is not yet understood, with the reservation that this is not the case where Peter van der Made and his team are concerned:
Sandia Researchers Show Neuromorphic Computing Widely Applicable
March 10, 2022
ALBUQUERQUE, N.M., March 10, 2022 — With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations using the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor.
“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”
Franke models photon and electron radiation to understand their effects on components.
The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-chip Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”
The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.
Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.
There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”
Severa wrote several of the experiment’s algorithms.
Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking. Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information. But because the process only depletes energies from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not. The conceptually bio-based process has another advantage: Its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions. The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
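The pin-like structure described above behaves much like the textbook leaky integrate-and-fire neuron: charge accumulates, leaks away a little each step, and a spike is emitted only when a threshold is crossed. A minimal sketch, with constants of my own choosing rather than any particular chip's model:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, illustrating the
# accumulate-until-threshold-then-spike behaviour described in the article.
# Threshold and leak values are illustrative only.

def lif_run(inputs, threshold=1.0, leak=0.9):
    """Accumulate charge with leakage; emit a spike (and reset the
    membrane potential) each time the threshold is reached."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = v * leak + x          # leak, then integrate the incoming charge
        if v >= threshold:
            spikes.append(1)      # fire...
            v = 0.0               # ...and reset
        else:
            spikes.append(0)      # stay silent: no energy spent downstream
    return spikes

print(lif_run([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # -> [0, 0, 1, 0, 0, 1]
```

Note that no output is produced for sub-threshold inputs, which is the source of the energy saving the article describes: silent neurons contribute nothing and cost nothing downstream.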
The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital are at least partially determined by the preceding length of stay.
Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.
“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”
The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.
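For reference, the conventional (non-neuromorphic) version of the random-walk Monte Carlo method the article describes looks something like the following. Each step depends only on the current position (the Markov property), and many independent walkers are averaged; the barrier and step counts are illustrative values I have chosen, not Sandia's:

```python
import random

# Conventional Monte Carlo random-walk simulation of diffusion through a
# barrier, as described in the article. Parameters are illustrative only.

def walk_crosses(n_steps, barrier=10):
    """One 1-D random walker; True if it crosses the barrier in n_steps.
    Each step depends only on the current position (Markov property)."""
    pos = 0
    for _ in range(n_steps):
        pos += random.choice((-1, 1))   # next step ignores all history
        if pos >= barrier:
            return True
    return False

random.seed(0)                           # reproducible run
n_walkers = 10_000
crossed = sum(walk_crosses(200) for _ in range(n_walkers))
print(f"estimated crossing probability: {crossed / n_walkers:.3f}")
```

On conventional hardware every step of every walker must be simulated explicitly; the neuromorphic approach in the article instead lets spiking neurons carry the walkers and reads off summary statistics, which is where the energy advantage comes from.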
The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger scale systems then combine multiple chips to a board.
“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”
The work was funded under the NNSA Advanced Simulation and Computing program and Sandia’s Laboratory Directed Research and Development program.
A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories.
About Sandia National Laboratories
Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.
Source: Sandia National Laboratories
**********
Moving on from this: in the final report to NASA on the outcome of its Phase 1 project to provide a design for a HARDSIL AKIDA 1000 for unconnected autonomous space applications, Vorago stated that AKIDA would allow the Rover to achieve full autonomy and the NASA goal of speeds up to 20 kph.
AKIDA technology is not just about processing sensors, and again, accepting you are genuine, why do you not contact Edge Impulse or Brainchip and ask them for the details of the benchmarking they engaged in?
I will say this: at the 2021 AI Field Day, Anil Mankar said this about GPUs and AKIDA:
"And that's why we are able to do low power analysis.. The same Mobilenet V1 that you can run on a GPU I can do inference on it. I'll be doing exactly the same level of computation that you want to do because depending on the number of parameters in your CNN, what your input resolutions, you have to do certain calculations to find object classification. We do something similar, but because we do an event domain, I will not be doing, I will be avoiding operations where they are zero value events."
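Mankar's point about "avoiding operations where they are zero value events" can be illustrated with a toy dot product. This is my own illustration of the general principle of event-domain sparsity, not BrainChip's implementation:

```python
# Why skipping zero-valued events saves work: in an event-domain dot
# product, only nonzero activations trigger any computation, while the
# dense version pays for every element. Toy illustration only.

def dense_dot(activations, weights):
    ops, total = 0, 0.0
    for a, w in zip(activations, weights):
        total += a * w          # every element costs a multiply-accumulate
        ops += 1
    return total, ops

def event_dot(activations, weights):
    ops, total = 0, 0.0
    for i, a in enumerate(activations):
        if a != 0:              # zero activations generate no events at all
            total += a * weights[i]
            ops += 1
    return total, ops

acts = [0, 0, 3, 0, 1, 0, 0, 2]        # sparse ReLU-style activations
wts  = [0.5, -1, 2, 1, 0.5, 3, -2, 1]

d_total, d_ops = dense_dot(acts, wts)
e_total, e_ops = event_dot(acts, wts)
print(d_total == e_total, d_ops, e_ops)  # same result, 8 ops vs 3 ops
```

The result is identical, but the event-domain version performed only three operations instead of eight; with the high activation sparsity typical of real CNN layers, the saving compounds across the whole network.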
You state that if AKIDA could do these things then our valuation would not be where it is, and that is the whole point: AKIDA technology is beyond anything you are imagining its limits to be.
Finally, these researchers do not share your view that regression analysis cannot be done using SNN technology. In fact, they think they are on to something; however, Peter van der Made and team beat them to it by a significant margin:
Spiking Neural Networks for Nonlinear Regression
Alexander Henkes, Jason K. Eshraghian, Member, IEEE, Henning Wessels
Abstract—Spiking neural networks, also often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption over traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. To broaden the pathway toward engineering applications, where regression tasks are omnipresent, we introduce this exciting technology in the context of continuum mechanics. However, the nature of spiking neural networks poses a challenge for regression problems, which frequently arise in the modeling of engineering sciences. To overcome this problem, a framework for regression using spiking neural networks is proposed. In particular, a network topology for decoding binary spike trains to real numbers is introduced, utilizing the membrane potential of spiking neurons. Several different spiking neural architectures, ranging from simple spiking feed-forward to complex spiking long short-term memory neural networks, are derived. Numerical experiments directed towards regression of linear and nonlinear, history-dependent material models are carried out. As SNNs exhibit memory-dependent dynamics, they are a natural fit for modelling history-dependent materials, which are prevalent through all of engineering sciences. For example, we show that SNNs can accurately model materials that are stressed beyond reversibility, which is a challenging type of non-linearity. A direct comparison with counterparts of traditional neural networks shows that the proposed framework is much more efficient while retaining precision and generalizability. All code has been made publicly available in the interest of reproducibility and to promote continued enhancement in this new domain.
(Sorry, I left out that this paper was published in October 2022.)
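The decoding idea in that abstract — reading a real number off the membrane potential of a spiking neuron — can be sketched very simply. This is my own illustrative toy, not the authors' published architecture, and the leak constant is arbitrary:

```python
# Sketch of the abstract's decoding idea: a non-spiking readout neuron
# integrates a binary spike train into a membrane potential, turning
# spikes back into a real number usable for regression. Illustrative only.

def decode_membrane(spike_train, leak=0.8, weight=1.0):
    """Leaky integration of a binary spike train; the final membrane
    potential is the decoded real-valued output (no threshold, no reset)."""
    v = 0.0
    for s in spike_train:
        v = leak * v + weight * s   # integrate, never fire
    return v

# Denser spike trains decode to larger real values:
print(decode_membrane([1, 0, 1, 1]))   # sparser train, smaller potential
print(decode_membrane([1, 1, 1, 1]))   # denser train, larger potential
```

Because the readout never fires or resets, its potential varies continuously with the input spike density, which is what makes a spiking network usable for real-valued regression targets.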
My opinion only so DYOR
FF
AKIDA BALLISTA
In your defence he answered very quickly and moved on to taking questions from others and ignored me thereafter.