BRN Discussion Ongoing

As I said I only know what I am told and read.

Therefore I can tell you that you are wrong in your view at point 2. I was present at the 2019 AGM, as were many who post here, when Peter van der Made made it perfectly clear that he could throw away the GPUs and CPUs in the current cars and do all the compute with 100 AKD1000 chips, as they were then called. He further stated that nine AKD1000s, including redundancy, could cover all the sensors necessary for ADAS.

(I also intended to mention that, as well as the throwing-out statement, he referenced the fact that the current compute was costing a minimum of three thousand dollars and that AKIDA was likely to be $10 a chip in bulk, so the total compute cost per vehicle would be about one third, roughly $1,000. He could not have been any clearer. Following this, in early 2020, the then CEO Mr Dinardo, in one of his webinars, went to some length to hose down shareholder enthusiasm about this use of AKIDA, saying that while Peter van der Made was correct, their sole focus was to target the Edge.)

This debate about AKIDA only being suitable for Edge compute is a long dead smelly red herring. It has been stated by the company many times that they have chosen to target the Edge because there is no incumbent player and so they do not have any true competitor in that market.

The intention has always been to move back up the supply chain into the data centre as the company becomes established and the technology is recognised and understood.

You will notice that up to this point @Diogenese has not taken up my challenge to correct my viewpoint, as he has on many prior occasions.

I accept you are genuine, but you really do need to do a lot more research on the AKIDA technology value proposition.

For example, the following research from Sandia makes clear that the full power of SNN computing is not yet widely understood, with the reservation that this is not the case where Peter van der Made and his team are concerned:


Sandia Researchers Show Neuromorphic Computing Widely Applicable

March 10, 2022
ALBUQUERQUE, N.M., March 10, 2022 — With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations using the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor.
“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”
Franke models photon and electron radiation to understand their effects on components.
The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-chip Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”
The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.
Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.
There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”
Severa wrote several of the experiment’s algorithms.
Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking. Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information. But because the process only depletes energies from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not. The conceptually bio-based process has another advantage: Its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions. The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital are at least partially determined by the preceding length of stay.
Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.
“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”
The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.
The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger scale systems then combine multiple chips to a board.
“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”
The work was funded under the NNSA Advanced Simulation and Computing program and Sandia’s Laboratory Directed Research and Development program.

A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories.
About Sandia National Laboratories
Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Source: Sandia National Laboratories

*********************************************************************************************************************************************************
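Before moving on: the Markov-chain random-walk method Aimone's team describes can be sketched in a few lines of plain Python. This is a toy illustration of the Monte Carlo idea only, not Sandia's code; the function names and parameters are my own.

```python
import random

def random_walk_steps(barrier: int, max_steps: int = 10_000) -> int:
    """Walk a 1-D lattice until |position| reaches the barrier.

    Markov property: each step depends only on the current position,
    never on the history of the walk (the Monopoly-board analogy).
    """
    position = 0
    for step in range(1, max_steps + 1):
        position += random.choice((-1, 1))  # unbiased random step
        if abs(position) >= barrier:
            return step
    return max_steps  # walk censored at max_steps

def monte_carlo_mean_crossing(barrier: int, walkers: int, seed: int = 0) -> float:
    """Monte Carlo estimate of the average number of steps to cross.

    A neuromorphic version would emit a single spike per completed walk
    and accumulate summary statistics on-chip, instead of logging every
    step of every walk as a conventional processor must.
    """
    random.seed(seed)
    total = sum(random_walk_steps(barrier) for _ in range(walkers))
    return total / walkers
```

For a symmetric walk the expected crossing time scales with the square of the barrier distance, so `monte_carlo_mean_crossing(10, 2000)` should land near 100. The Sandia work maps exactly this kind of simulation onto spiking neurons.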

Moving on from this: in the final report to NASA on the outcome of its Phase 1 project, which was to provide a design for a HARDSIL AKIDA 1000 for unconnected autonomous space applications, Vorago stated that AKIDA would allow the rover to achieve full autonomy and the NASA goal of speeds of up to 20 kph.

AKIDA technology is not just about processing sensors, and, again accepting that you are genuine, why not contact Edge Impulse or BrainChip and ask them for the details of the benchmarking they engaged in?

I will say this: at the 2021 AI Field Day, Anil Mankar said the following about GPUs and AKIDA:


"And that's why we are able to do low power analysis.. The same Mobilenet V1 that you can run on a GPU I can do inference on it. I'll be doing exactly the same level of computation that you want to do because depending on the number of parameters in your CNN, what your input resolutions, you have to do certain calculations to find object classification. We do something similar, but because we do an event domain, I will not be doing, I will be avoiding operations where they are zero value events."
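What Mankar is describing, skipping work wherever an activation is zero, can be illustrated with a toy multiply-accumulate loop. This is my own sketch, not BrainChip's implementation, and the function names are made up:

```python
def dense_mac(activations, weights):
    """Conventional MAC loop: every product is computed, zeros included."""
    acc, ops = 0.0, 0
    for a, w in zip(activations, weights):
        acc += a * w
        ops += 1
    return acc, ops

def event_mac(activations, weights):
    """Event-domain MAC loop: a zero activation generates no event,
    so its multiply-accumulate is never performed at all."""
    acc, ops = 0.0, 0
    for a, w in zip(activations, weights):
        if a == 0:
            continue  # no event, no work
        acc += a * w
        ops += 1
    return acc, ops
```

Both return the same answer, but on a typical ReLU feature map, where most activations are zero, the event version performs a small fraction of the operations. That is the power saving Mankar is pointing at.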

Your statement was that if AKIDA could do these things then our valuation would not be where it is, and that is the whole point: AKIDA technology is beyond anything you are imagining its limits to be.

Finally, these researchers do not share your view that regression analysis cannot be done using SNN technology; in fact, they think they are on to something. However, Peter van der Made and his team beat them to it by a significant margin:


Spiking Neural Networks for Nonlinear Regression
Alexander Henkes, Jason K. Eshraghian, Member, IEEE, Henning Wessels
Abstract—Spiking neural networks, also often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption over traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. To broaden the pathway toward engineering applications, where regression tasks are omnipresent, we introduce this exciting technology in the context of continuum mechanics. However, the nature of spiking neural networks poses a challenge for regression problems, which frequently arise in the modeling of engineering sciences. To overcome this problem, a framework for regression using spiking neural networks is proposed. In particular, a network topology for decoding binary spike trains to real numbers is introduced, utilizing the membrane potential of spiking neurons. Several different spiking neural architectures, ranging from simple spiking feed-forward to complex spiking long short-term memory neural networks, are derived. Numerical experiments directed towards regression of linear and nonlinear, history-dependent material models are carried out. As SNNs exhibit memory-dependent dynamics, they are a natural fit for modelling history-dependent materials, which are prevalent through all of engineering sciences. For example, we show that SNNs can accurately model materials that are stressed beyond reversibility, which is a challenging type of non-linearity. A direct comparison with counterparts of traditional neural networks shows that the proposed framework is much more efficient while retaining precision and generalizability. All code has been made publicly available in the interest of reproducibility and to promote continued enhancement in this new domain.

(Sorry, I left out that this paper was published in October 2022.)
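The key decoding trick in that abstract, reading a real number off the membrane potential of a leaky integrator instead of counting spikes, is simple to sketch. This is my own minimal toy using a standard leaky-integrate neuron model; `beta` (leak factor) and `w` (input weight) are illustrative parameters, not the paper's:

```python
def lif_membrane_decode(spike_train, beta=0.9, w=0.5):
    """Decode a binary spike train into a real number via the membrane
    potential of a non-spiking leaky-integrator readout neuron:

        U[t] = beta * U[t-1] + w * s[t]

    No threshold and no reset at the readout layer -- the final
    membrane value is the regression output.
    """
    u = 0.0
    for s in spike_train:
        u = beta * u + w * s
    return u
```

Feeding in `[1, 0, 1]` gives 0.9 * (0.9 * 0.5) + 0.5 = 0.905: a real-valued output recovered from a purely binary spike train, which is exactly what a regression head needs.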


My opinion only so DYOR
FF

AKIDA BALLISTA


It’s truly amazing tech; it just takes time. I hope I don’t have to sell out for the house. Regardless, it’s always been a wait until 2025 since I entered personally.
 
Just as a preface, this is beyond my pay grade (and, to top it off, statistics is not my strong suit).

Before PvdM made the statement at that AGM a couple of years ago that Akida could do maths, I had expressed the opinion that Akida was not suited to maths, and I was hoping you would have the good grace not to embarrass me by dragging it up again.

So PvdM knew it could do maths - Sandia had to carry out a research project to prove it.

“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”

Here is a Sandia random walk patent (Priority: 20210312):

US2022292364A1 DEVICE AND METHOD FOR RANDOM WALK SIMULATION



[0002] This invention was made with United States Government support under Contract No. DE-NA0003525 between National Technology & Engineering Solutions of Sandia, LLC and the United States Department of Energy. The United States Government has certain rights in this invention.

[0004] Random walk refers to the stochastic (random) process of taking a sequence of discrete, fixed-length steps in random directions. This process has been used to solve a wide range of numerical computation tasks and scientific simulations. However, the electrical power consumption by a computer simulating a large number of random walkers can be large ...

[0006] An illustrative embodiment provides a computer-implemented method for simulating a random walk in spiking neuromorphic hardware. The method comprises receiving, by a buffer count neuron, a number of spiking inputs from a number of upstream mesh nodes, wherein the spiking inputs each include an information packet comprising information associated with a simulation of a specific random walk process. A buffer generator neuron in the mesh node, in response to the inputs, generates a first number of spikes until the buffer count neuron reaches a first predefined threshold. Upon reaching the first predefined threshold the buffer generator neuron sends a number of buffer spiking outputs to a spike count neuron in the mesh node. The spike count neuron counts the buffer spiking outputs from the buffer generator neuron. In response to the buffer spiking outputs, a spike generator neuron in the mesh node generates a second number of spikes until the spike count neuron reaches a second predefined threshold. Upon reaching the second predefined threshold, the spike generator neuron sends a number of counter spiking outputs to a probability neuron in the mesh node, wherein the counter spiking outputs each include an information packet comprising updated information associated with the simulation of the specific random walk process. The probability neuron selects a number of downstream mesh nodes to receive the counter spiking outputs generated by the spike generator neuron and sends the counter spiking outputs to the selected downstream mesh nodes.
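The counting-and-threshold mechanics in paragraph [0006] reduce to a simple accumulate-until-threshold unit. Here is a toy sketch of one such count/generate pairing (illustrative only, not the patent's implementation):

```python
class CountingNeuron:
    """Accumulates input spikes; once the count reaches its threshold
    it releases an output burst and resets, loosely modelling the
    patent's paired count neuron and generator neuron."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.count = 0

    def receive(self, spikes: int) -> int:
        """Absorb input spikes; return the size of any released burst."""
        self.count += spikes
        if self.count >= self.threshold:
            released = self.count
            self.count = 0  # reset after firing
            return released
        return 0
```

Chaining such units (a buffer counter feeding a spike counter feeding a probability neuron) gives the mesh-node pipeline the claim describes.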

... so here I am with my wilted fig leaf ...
In your defence, he answered very quickly, moved on to taking questions from others, and ignored me thereafter.😂🤣😂
 


Derby1990

Regular
BEAST Akida
Sorry world as we knew it.
 
I picked up 1,750 at 71.5.

Would have loved to buy more but I'm trying to be fiscally responsible with the arrival of my first in February!

Couldn't help myself today though lol.
That is wonderful news. The chaos that a first child injects into your life destroys every preconceived idea and plan, so relax and buy more if that is what you want to do.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Deadpool

hyper-efficient Ai
As I said I only know what I am told and read.

Therefore I can tell you that you are wrong as to your view at point 2. I was present at the 2019 AGM as were many who post here when Peter van der Made made it perfectly clear that what he was saying was that he could throw away the GPU and CPUs in the current cars and do all the compute with 100 AKD1000 chips as they were then called. He further stated that with nine AKD1000 including providing redundancy he could cover all the sensors necessary for ADAS.

( Also intended to mention that as well as the throwing out statement he referenced the fact that the current compute was costing a minimum of three thousand plus dollars and that AKIDA was likely to be $10 a chip in bulk so total compute cost per vehicle would be one third or about $1,000 he could not have been any clearer. Following this in early 2020 the then CEO Mr Dinardo in one of his webinars went to some length to hose down shareholder enthusiasm about this use of AKIDA saying that while Peter van der Made was correct their sole focus was to target the Edge.)

This debate about AKIDA only being suitable for Edge compute is a long dead smelly red herring. It has been stated by the company many times that they have chosen to target the Edge because there is no incumbent player and so they do not have any true competitor in that market.

The intention has always been to move back up the supply chain into the data centre as the company becomes established and the technology is recognised and understood.

You will notice that up to this point @Diogenese has not taken up my challenge to correct my view point as he has on many prior occasions.

I accept you are genuine but you really do need to do a lot more research around the AKIDA technology value proposition.

For example the following research from Sandia makes clear that the full power of SNN computing is not yet understood with the reservation that this is not the case where Peter van der Made and his team are concerned:


Sandia Researchers Show Neuromorphic Computing Widely Applicable​

March 10, 2022
ALBUQUERQUE, N.M., March 10, 2022 — With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations using the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor.
“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”
Franke models photon and electron radiation to understand their effects on components.
The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-chip Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”
The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.
Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.
There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”
Severa wrote several of the experiment’s algorithms.
Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking. Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information. But because the process only depletes energies from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not. The conceptually bio-based process has another advantage: Its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions. The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital are at least partially determined by the preceding length of stay.
Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.
“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”
The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.
The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger scale systems then combine multiple chips to a board.
“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”
The work was funded under the NNSA Advanced Simulation and Computing program and Sandia’s Laboratory Directed Research and Development program.

A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories.
About Sandia National Laboratories
Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Source: Sandia National Laboratories

*********************************************************************************************************************************************************

Moving on from this in the final report to NASA of the outcome of its Phase 1 project to provide a design for a hardsil AKIDA 1000 for unconnected autonomous space applications Vorago stated that AKIDA would allow Rover to achieve full autonomy and the NASA goal of speeds up to 20 kph.

AKIDA technology is not just about processing sensors and again accepting you are genuine why do you not contact Edge Impulse or Brainchip and ask them for the details of the bench marking they engaged in.

I will say this at the 2021 Ai Field Day Anil Mankar said this about GPUs and AKIDA


"And that's why we are able to do low power analysis.. The same Mobilenet V1 that you can run on a GPU I can do inference on it. I'll be doing exactly the same level of computation that you want to do because depending on the number of parameters in your CNN, what your input resolutions, you have to do certain calculations to find object classification. We do something similar, but because we do an event domain, I will not be doing, I will be avoiding operations where they are zero value events."

Your statement that if AKIDA could do these things then our valuation would not be where it is and that is the whole point AKIDA technology is beyond anything you are imaging to be its limits.

Finally these researchers do not share your view regarding being able to do regression analysis using SNN technology in fact they think they are on to something however Peter van der Made and team beat them to it by a significant margin:


Spiking Neural Networks for Nonlinear Regression
Alexander Henkes, Jason K. Eshraghian, Member, IEEE, Henning Wessels
Abstract—Spiking neural networks, also often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption over traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. To broaden the pathway toward engineering applications, where regression tasks are omnipresent, we introduce this exciting technology in the context of continuum mechanics. However, the nature of spiking neural networks poses a challenge for regression problems, which frequently arise in the modeling of engineering sciences. To overcome this problem, a framework for regression using spiking neural networks is proposed. In particular, a network topology for decoding binary spike trains to real numbers is introduced, utilizing the membrane potential of spiking neurons. Several different spiking neural architectures, ranging from simple spiking feed-forward to complex spiking long short-term memory neural networks, are derived. Numerical experiments directed towards regression of linear and nonlinear, history-dependent material models are carried out. As SNNs exhibit memory-dependent dynamics, they are a natural fit for modelling history-dependent materials, which are prevalent through all of engineering sciences. For example, we show that SNNs can accurately model materials that are stressed beyond reversibility, which is a challenging type of non-linearity. A direct comparison with counterparts of traditional neural networks shows that the proposed framework is much more efficient while retaining precision and generalizability. All code has been made publicly available in the interest of reproducibility and to promote continued enhancement in this new domain.

(Sorry, I left out that this paper was published in October 2022.)
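To make the paper's core trick concrete: decoding a binary spike train to a real number via membrane potential can be sketched in a few lines. This is my own toy illustration (plain NumPy, a simple leaky-integrator readout), not the authors' published code:

```python
import numpy as np

def membrane_readout(spike_train, weights, beta=0.9):
    """Decode a binary spike train to a real number with a leaky
    integrator: mem[t] = beta * mem[t-1] + w . s[t]  (toy sketch)."""
    mem = 0.0
    for s_t in spike_train:              # s_t: binary spike vector at step t
        mem = beta * mem + float(np.dot(weights, s_t))
    return mem                           # real-valued regression output

# toy usage: 3 input neurons, 5 time steps
spikes = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]], dtype=float)
w = np.array([0.2, -0.1, 0.4])
y = membrane_readout(spikes, w)          # a single real number
```

Here the final membrane value serves as the real-valued output, which is the basic idea the abstract describes; in the actual framework the weights and neuron dynamics are learned.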


My opinion only so DYOR
FF

AKIDA BALLISTA


Hi FF, If I didn't know better, I would assume a bit of a Dorothy Dixer was going on here. o_O :)
 
  • Haha
  • Sad
Reactions: 3 users
  • Like
  • Fire
  • Love
Reactions: 7 users

alwaysgreen

Top 20
I'll chime in on the discussion about the GPUs with my limited knowledge.

Can Akida replace GPUs in cars for running the sensors that are currently run on GPUs? Yes, absolutely as per Anil's statement.

Can Akida run graphics-intensive applications such as modern video games or 3D architectural modelling software that need graphics cards (GPUs) costing $2,500 and requiring ridiculous amounts of water cooling? Not to my knowledge, and this is not the intended market for Akida. Would love @Diogenese to prove me wrong, but I'm pretty sure I'm right on this, otherwise Akida would be flying off shelves in every video game shop on Earth.
 
  • Like
  • Love
Reactions: 12 users

Diogenese

Top 20


"NVIDIA Jetson Orin Nano utilizes the Ampere-based GPU, along with eight streaming multiprocessors containing 1,024 CUDA cores and 32 Tensor Cores, which will be used for processing artificial intelligence workloads. The Ampere-based GPU Tensor Cores offer improved performance per watt and support for sparsity, allowing for twice the Tensor Core throughput."

US2022327101A1 INCREASING SPARSITY IN DATA SETS (Priority 20210409)




[0522] ... AI services 3818 may leverage AI system 3824 to execute machine learning model(s) (e.g., neural networks, such as CNNs) for segmentation, reconstruction, object detection, feature detection, classification, and/or other inferencing tasks.

NVIDIA goes fast because it uses lots of TOPS. They reduce power by sparsity, in their case omitting multiplications by zero.

The power figures are 5W to 15W
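For readers wondering what "omitting multiplications by zero" buys you, here is a toy sketch. This is my own illustration of the general idea; NVIDIA's actual scheme is structured 2:4 sparsity handled in the Tensor Core hardware, not code like this:

```python
import numpy as np

def sparse_dot(x, w):
    """Dot product that skips multiplications wherever the activation
    is zero -- the basic energy-saving idea behind exploiting sparsity
    (toy sketch, not NVIDIA's structured 2:4 implementation)."""
    nz = np.nonzero(x)[0]                    # indices of non-zero activations
    return float(np.dot(x[nz], w[nz])), len(nz)  # result + multiplies actually done

x = np.array([0.0, 2.0, 0.0, 0.0, 3.0, 0.0])   # mostly-zero activations
w = np.array([1.0, 0.5, 2.0, 4.0, 1.0, 3.0])
y, mults = sparse_dot(x, w)                    # 2 multiplies instead of 6
```

The denser the zeros, the more work is skipped, which is why sparsity translates into performance per watt.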
 
  • Like
  • Fire
  • Love
Reactions: 11 users
Wow, that’s amazing: only 5 to 15 watts, less than the human brain.

What’s that, Blind Freddie? At least 50 to 150 times more power hungry than AKIDA?
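For what it's worth, the arithmetic behind that 50-to-150-times figure, assuming a roughly 100 mW budget for AKIDA (the 100 mW is my own assumption, inferred from the ratio, not a published spec):

```python
def power_ratio(gpu_watts, akida_watts=0.1):
    """GPU power divided by an assumed ~100 mW Akida budget.
    The 0.1 W figure is an assumption inferred from the 50-150x claim."""
    return gpu_watts / akida_watts

low, high = power_ratio(5), power_ratio(15)   # -> (50.0, 150.0)
```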

Sorry, Freddie, @Diogenese did not say if the 5 to 15 watts includes the electricity to run the cooling fan. Maybe, but he has been up all day and it is Saturday.

Oh well back to the drawing board.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Haha
  • Like
Reactions: 12 users

Diogenese

Top 20
Hi ag,

I'm not going to risk giving @Fact Finder another chance to rub my nose in it by saying that Akida can't do GPU, but Akida does not run programs. The Cortex MCU is used only for configuration, but does not play any part in the core business of the SNN - classification/inference.
However, there are many cases where Akida can run as a co-processor with a stored program computer.
Given that games are audio-visual, and given that user input is part of the game, there may be scope for Akida in processing some of the user inputs.

However, I've never played one of these new-fangled contraptions:

 
  • Like
  • Love
  • Fire
Reactions: 17 users

RobjHunt

Regular


The most remarkable thing about BrainChip is …….

BrainChip!

Pantene peeps
 
  • Like
  • Fire
  • Love
Reactions: 14 users
Goodness me there is that nasty 2024 year being thrown around again.

That is a whole five 4C’s away.

“We’ll all be ruined said Hanrahan”

What’s that, Blind Freddie? Brainchip has over US$50 million available to meet a cash burn of around $15 million per annum, but is that enough to get us to 2024, over 12 months away?

Why are you putting on your hat? Where are you going? Speak to me? Why did you slam the door?

All I asked was: is US$50 million enough to keep the lights on at Brainchip for another 12 months?

My opinion only DYOR
FF

AKIDA BALLISTA
Fact finder
All I see is that everyone on this site wants things to go only one way; clearly they know very little about business and how things work in the real world.
It’s not a straight line.
Obstacles get in the way.
Things change that are out of your control.
It would be very interesting to see all the negative people on here actually run a business, a start-up, build something from scratch. But most of them won’t do anything of the sort because they don’t have the balls or just don’t know where to start. Yet they are very quick to attack the people who do; quite frankly, they shit me to tears.
How many businesses start from the beginning with nothing and go up, up ⬆️ without any setbacks or problems? I personally don’t know of any.

But if you believe in the science and the product, then please hang on and hold tight, because this is going to go in one direction once we start to move away from the platform.
 
  • Like
  • Love
  • Fire
Reactions: 29 users

JDelekto

Regular

Convolutional Neural Networks (CNNs) rely on many mathematical operations using matrices to build a model. The feature of the GPU that makes it ideal for CNN processing is that many processing cores can do these matrix operations concurrently or a lot of math operations simultaneously.

The Akida processor is geared more toward Spiking Neural Networks (SNNs) and, from my understanding, does not require the matrix math processing used by a CNN; it only processes the information from sensors that changes. For example, only the portion of pixels in a video stream that change needs to be processed.
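That "only what changes" idea can be sketched as a simple frame-differencing event generator. This is a toy illustration of event-based sensing in general, not Akida's or any event-camera vendor's actual pipeline:

```python
import numpy as np

def events_from_frames(prev, curr, threshold=10):
    """Emit (row, col, polarity) events only where pixels changed by
    at least `threshold` -- the event-based idea behind spiking
    sensors (toy sketch)."""
    diff = curr.astype(int) - prev.astype(int)
    rows, cols = np.nonzero(np.abs(diff) >= threshold)
    return [(r, c, 1 if diff[r, c] > 0 else -1) for r, c in zip(rows, cols)]

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200                          # only one pixel changed
events = events_from_frames(prev, curr)   # one event instead of 16 pixels
```

A static scene produces no events at all, which is where the power saving comes from.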

So the role of GPUs and Akida in AI processing is generating the mathematical models that allow them to recognize patterns. These patterns can be images, gestures captured in a video, or voice commands (not just the command, but the specific user's voice), similar to how our brains learn and recognize patterns of these same inputs.

Akida has the advantage that: a) it can train its model on chip (using its one-shot learning) more efficiently, and b) this training is done faster and with less computational power. In many cases today, the models trained for CNNs are done on machines with powerful GPUs or shifted to the Cloud, where many of these GPUs (usually specialized for the task) can generate these in parallel.
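As a rough intuition for what one-shot learning means, here is a toy nearest-prototype classifier that "learns" a class from a single example. This is purely illustrative; Akida's on-chip learning operates on spiking activations and synaptic weights, not code like this:

```python
import numpy as np

class OneShotPrototypes:
    """Toy one-shot learner: store a single example per class as a
    prototype, classify new inputs by nearest prototype (illustrative
    only -- not Akida's actual learning rule)."""
    def __init__(self):
        self.protos = {}

    def learn(self, label, features):
        # "one-shot": a single example defines the class
        self.protos[label] = np.asarray(features, dtype=float)

    def classify(self, features):
        f = np.asarray(features, dtype=float)
        return min(self.protos, key=lambda k: np.linalg.norm(self.protos[k] - f))

clf = OneShotPrototypes()
clf.learn("cat", [1.0, 0.0, 0.2])
clf.learn("dog", [0.0, 1.0, 0.8])
label = clf.classify([0.9, 0.1, 0.3])   # nearest prototype wins
```

No gradient descent, no retraining pass over a dataset: adding a class is a single store operation, which is the contrast being drawn with cloud-trained CNN models.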

Akida does not do any rendering, ray tracing, application of textures to polygons, etc., which are things that gamers and content creators expect a GPU to do. We can still expect to see customers like Mercedes using GPUs in their user interface, though they will probably go for a lower-power solution, like the ARM-based Qualcomm Snapdragon XR2 family.

I don't believe the Qualcomm solution will replace Akida but instead work in tandem with it. There are many types of sensors that go beyond video and audio. Several examples are temperature, proximity, infrared, ultrasonic, light, smoke & gas, humidity, motion, pressure, etc. If these sensors can produce spike trains, then Akida can learn from them.

While several of these types of sensors are employed in informational systems today (for example, your car beeps when too close to an obstacle or can tell you that your tire pressure is too low), Akida would be able to learn from a sensor's input and be able to recognize patterns which can result in more reactive systems, as opposed to just presenting the information.

I think there are a lot of applications for Akida that are yet to be discovered.
 
  • Like
  • Love
  • Fire
Reactions: 23 users
On a different note, Why its important to keep the secret sauce secret.

Four Samsung employees charged with semiconductor tech theft


It is reported that four current and former Samsung employees have been charged with theft of proprietary semiconductor technology. These employees reportedly stole highly valued semiconductor chip technology from Samsung and leaked them to an overseas firm.

The Seoul Central District Prosecutors Office has indicted those four employees with physical detention for violating the unfair competition prevention act and the industrial technology protection act. Two of those employees are former engineers, while the remaining two work as researchers for Samsung Engineering.

One former employee, who worked for Samsung’s semiconductor division, obtained blueprints and operation manuals for the ultrapure water system and other critical technical data while searching for a job. He then leaked them to a Chinese semiconductor consulting firm, and when he got a job there, he used the stolen data and ordered an ultrapure water system.

Prosecutors said another former Samsung employee was charged with stealing a file containing key semiconductor technology. He allegedly gave that file to the company’s competitor Intel while still working for Samsung. He stole data by capturing images of the data in the file.


Learning
Even though I'm a Samsung investor, I don't feel bad for them; Samsung is stealing technology from everybody. Their QLED technology is based on stolen technology, some of their server RAM is stolen directly from Netlist, and they even infringed on Apple. They are thieves, but good at it, and I've given up having morals when investing.
 
  • Like
  • Fire
  • Wow
Reactions: 8 users
A difficult day replaced by a new one. So I am resetting my attitude as I became too confrontational for which I apologise.

I believe in the big picture, and one 4C is like a zero in a scene being monitored by AKIDA: it just does not rate a spike, and certainly not the one seen yesterday.

There are at least 21 reasons why this is so and I posted them yesterday, but for the moment ignore every single one of those points and consider how a few million dollars in the 4C can outweigh this publicly advertised FACT from Edge Impulse about AKIDA 1.0.

AKIDA 1.0 outperforms GPUs and CPUs.

In the following TechRadar surveys, the best cheap GPUs are discussed, and they are all priced in the hundreds of dollars.


In the tests conducted by Nviso, they were able to run AKIDA at 1,000 fps, and privately they stated they had achieved 1,670 fps.

In gaming, I am informed by another poster here that frames-per-second performance is very important, and I have even found an article which explains this:


Now if you have opened the above link and read the article, you will have found it includes a chart of the frames-per-second performance of better GPUs, and they are sitting in the low hundreds of fps.
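The fps comparison translates directly into a per-frame time budget:

```python
def latency_ms(fps):
    """Per-frame processing budget in milliseconds at a given frame rate."""
    return 1000.0 / fps

akida_budget = latency_ms(1000)   # 1.0 ms per inference at the quoted 1,000 fps
gpu_budget = latency_ms(150)      # ~6.7 ms per frame for a GPU in the low hundreds of fps
```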

By now you might have started to see the very, very big picture and why one 4C or a dozen 4Cs mean absolutely nothing against the FACTS that:

1. Rob Telson has secured a partnership with Edge Impulse;

2. Edge Impulse has over 50,000 developers using their platform;

3. Edge Impulse has recently completed the incorporation of Meta TF into its platform;

4. Edge Impulse is encouraging these 50,000 plus developers to use Meta TF;

5. Edge Impulse is publicly stating that AKIDA 1.0 outperforms GPUs and CPUs;

6. AKIDA 1.0, costing $25.00, runs at over 1,000 fps;

7. GPUs costing hundreds of dollars run at less than a couple of hundred fps; and

8. The bonus FACT: MEGACHIPS’ largest customer is Nintendo, and Brainchip is a contracted supplier of AKIDA technology to MEGACHIPS.

My opinion only: I will take the above FACTS over a sugar hit in a 4C every day of the week.

Remember the Data Scientist PhD student who described AKIDA 1.0 as a BEAST? Well that is exactly what it is.

Now add back the 21 other reasons to ignore a 4C result, and if an investor cannot now see the very, very big picture staring them in the face, then there is nothing more to say.

My opinion only DYOR
FF

AKIDA BALLISTA

I think it's relevant to compare NVIDIA's graphics cards to Akida, but we have to be sure we're measuring the same thing: generated 3D pictures FPS or AI inference FPS. If Akida can beat NVIDIA's A100 (around US$30,000) in different inferencing tasks, that would be a huge win for BrainChip and they could go straight to many datacenters :)


B.t.w. the new NVIDIA 4090 is able to get close to 1,000 FPS of generated pictures in some games at lower settings, not that it's relevant at all :D
 
  • Like
  • Love
Reactions: 7 users

Learning

Learning to the Top 🕵‍♂️
Hi FF, If I didn't know better, I would assume a bit of a Dorothy Dixer was going on here. o_O :)
Had to Google who Dorothy Dixer is to understand your reply. 😅😅😅

Learning 🤓😂🤣
 
  • Like
Reactions: 9 users

Chris B

Member
Ok. First of all... yes, I was very disappointed with the latest 4C report... many investors are getting frustrated. But taking the emotion out of it, I never expected revenue to kick off until the first quarter of 2023; I just had unrealistic expectations after a good 2nd quarter. Everyone has their own strategies and expectations and I respect that. For me... nothing has changed my opinion of the end game. I still strongly believe that I will reap a fantastic harvest from what I have sown. Just my opinion...
 
  • Like
  • Love
Reactions: 13 users

Deadpool

hyper-efficient Ai
  • Haha
  • Love
  • Like
Reactions: 4 users

M_C

Founding Member
“It took Tesla 17 years to turn a profit when it announced that 2020 was the first full year of profitability in the company's history. While the company generates substantial revenue from automotive sales and regulatory credits, it took some time to profit due to production costs and supply chain issues.” (8 Sept 2022)
https://www.forbes.com › 2022/09/08

By The Numbers, How Does Tesla Make Money In 2022?

When I read someone state that the next 4Cs will be ‘make or break’ for Brainchip, which at the most extreme assessment of its commercialisation phase has only been at it full time since June 2021, I have to say, based on Tesla, what a complete load of old rubbish this statement is; it completely misstates both the financial situation of the company and the development timelines for adoption of new technology into products.

So I say beware of wolves in sheep’s clothing.

What angers me, as past evidence proves, is that some long-term genuine holders, who do not panic and sell because in their hearts and minds they know the opportunity Brainchip presents, nonetheless vent and spew negativity without regard to the impact their statements have on the less informed retail investor looking for facts and reassurance about the fundamentals of their investments in difficult times.

I remember a series of posts I had with @MC🐠 over at the other place about the influence he had on others which he initially did not accept. These posts related to dot joining not being negative by the way.

It is now the weekend and we have the second wave of more articulate manipulators spreading negativity that by itself does not withstand logical analysis but, woven into backhanded positive statements, will eat away at shareholder confidence.

A newly emerged poster throwing in a little “I think that is being misleading” comment attempts to diminish a post’s importance or, at the very least, inject doubt. There are lots of ways to ask for more information about a point which would not have this effect.

So remember Tesla, but also take comfort from the fact that Brainchip sells IP using a developing ecosystem, so it does not have all the infrastructure and labour costs that Tesla had to deal with before reaching profitability; but reach it, it did, after 17 years.

My opinion only so do not believe a word I or anyone else says until you have DYOR
FF

AKIDA BALLISTA
True. I am a stubborn bastard
 
  • Like
  • Haha
Reactions: 9 users