BRN Discussion Ongoing

Iseki

Regular
So Ford, VW, Intel, and Tesla have all attempted to bypass the intermediate steps to autonomous driving (AD) and fallen short - you have to crawl before you can walk.

It's not that the billions of dollars they invested have been totally wasted - you should always learn from your mistakes. But both Ford and Intel have recently sold their AD subsidiaries. The "recently" is significant in that it shows that these large companies are feeling the pinch now. As Sean intimated, it's a tough economic environment at the moment.

Remember that they were attempting AD without Valeo's SCALA. I haven't researched Mobileye's technology in depth, but this sample looks unpromising from an SNN point of view:

US2022222317A1 APPLYING A CONVOLUTION KERNEL ON INPUT DATA

[0028] Both application processor 180 and image processor 190 can include various types of processing devices. For example, either or both of application processor 180 and image processor 190 can include one or more microprocessors, preprocessors (such as image preprocessors), graphics processors, central processing units (CPUs), support circuits, digital signal processors, integrated circuits, memory, or any other types of devices suitable for running applications and for image processing and analysis. In some embodiments, application processor 180 or image processor 190 can include any type of single or multi-core processor, mobile device microcontroller, central processing unit, or other type of processor. Various processing devices can be used, for example including processors available from manufacturers (e.g., Intel®, AMD®, etc.), and can include various architectures (e.g., x86 processor, ARM®, etc.).

[0054] A convolution neural network includes an input layer, an output layer, as well as multiple hidden layers. The hidden layers of a CNN typically include a series of convolution layers that convolve with a multiplication or other dot product. The activation function is commonly a RELU layer, and is subsequently followed by additional convolutions such as pooling layers, fully connected layers and normalization layers, referred to as hidden layers because their inputs and outputs are masked by the activation function and final convolution. The final convolution, in turn, often involves backpropagation in order to more accurately weight the end product.
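In other words, the patent is describing a bog-standard CNN pipeline. For anyone unfamiliar with the jargon, a toy version of "apply a convolution kernel, ReLU, then pool" might look like this (the kernel and input sizes are my own illustrative choices, not anything from the patent):

```python
import numpy as np

# A bare-bones version of the pipeline in [0054]: apply a convolution
# kernel to input data, pass it through a ReLU activation, then pool.
# The 3x3 kernel and 6x6 input are illustrative assumptions.

def convolve2d(image, kernel):
    """Valid 2D convolution via the dot product the patent describes."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)          # the "RELU layer"

def max_pool2x2(x):
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2*h, :2*w].reshape(h, 2, w, 2).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1., 0., -1.]] * 3)  # simple vertical-edge kernel
feature_map = max_pool2x2(relu(convolve2d(image, kernel)))
print(feature_map.shape)  # (2, 2)
```

Every one of those multiply-accumulate operations runs whether or not the data is interesting, which is exactly the contrast with event-based processing.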

Argo AI seems likewise deficient:
US2022301099A1 SYSTEMS AND METHODS FOR GENERATING OBJECT DETECTION LABELS USING FOVEATED IMAGE MAGNIFICATION FOR AUTONOMOUS DRIVING

[0044] The machine learning model for generating a saliency map may be generated and/or trained using any now or hereafter known techniques such as, without limitation, kernel density estimation (KDE) and convolution neural network (CNN), both of which are differentiable and the parameters can be learned through the final task loss. In KDE, the system may use bounding box centers as the data points that have a bandwidth proportional to the square root of the area of the bounding box. In CNN, the system may represent the bounding boxes as an N×4 matrix, where N is a fixed maximum value for the number of bounding boxes. If there are less than N objects, the input may be zero-padded to this dimension. Once a model has been generated, the system may also apply the model to all bounding boxes in a training dataset to obtain a dataset-wide prior.
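For what it's worth, the N×4 zero-padded bounding-box matrix described in [0044] is simple to picture in code. A rough sketch (N = 8 and the box coordinates are my own illustrative assumptions):

```python
import numpy as np

# Sketch of the bounding-box input described in [0044]: boxes as an
# N x 4 matrix with a fixed maximum N, zero-padded when fewer objects
# are present. N = 8 here is an illustrative assumption.
N = 8

def pad_boxes(boxes, n_max=N):
    """boxes: list of (x1, y1, x2, y2) tuples -> (n_max, 4) array."""
    arr = np.zeros((n_max, 4), dtype=float)
    boxes = boxes[:n_max]               # truncate if over the cap
    if boxes:
        arr[:len(boxes)] = np.asarray(boxes, dtype=float)
    return arr

def kde_bandwidths(boxes):
    """Per [0044]: bandwidth proportional to sqrt(box area)."""
    b = np.asarray(boxes, dtype=float)
    areas = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return np.sqrt(areas)

detections = [(10, 10, 30, 50), (40, 5, 60, 25)]
padded = pad_boxes(detections)
print(padded.shape)  # (8, 4)
print(kde_bandwidths(detections))
```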

[0076] During operations, information is communicated from the sensors to the on-board computing device 720. The on-board computing device 720 can (i) cause the sensor information to be communicated from the mobile platform to an external device (e.g., computing device 101 of FIG. 1) and/or (ii) use the sensor information to control operations of the mobile platform.

So, not only do they not have SCALA, they don't seem to be aware of Akida.

Could it be possible that both Argo and Mobileye had very good algorithms, but they were implementing them on 20th century processors?

That's like getting into the ring against Muhammad Ali with your shoe laces tied together.
"Could it be possible that both Argo and Mobileye had very good algorithms, but they were implementing them on 20th century processors?"

Absolutely. I can just see Gelsinger rallying the troops: let's not check out Akida, let's just flog off Mobileye for $30 billion less.

But watch this space: using impeccable logic, Intel will now become an EAP and put Akida IP into x86 chips.
 
  • Like
  • Haha
Reactions: 7 users

Gazzafish

Regular
The SEMA car industry show in Las Vegas runs from Nov. 1-4. Wonder if any of our NDA customers will announce their adoption of Akida during their presentations?? 🤞🏻
 
  • Like
  • Fire
  • Thinking
Reactions: 8 users

Foxdog

Regular
“In the coming quarter, the Company will focus on key sales targets and converting technical evaluations into paid licenses.”
This comment has me intrigued.
Sean has not struck me as a CEO who throws away comments. Every interview has been clear and considered.
I’m sure he knows that by making this statement he is creating the expectation of paid licences occurring during this next quarter. (Note the plural.)
This means we are to expect announcements; otherwise he is putting his reputation on the line.
The team have described him as laser-focused! Sloppy comments I don’t expect.
Am I reading too much into it?
The way I read it, this was a much less confident, perhaps more reserved, version of our CEO. I think the reality and enormity of introducing new tech into the current world (wars, recessions, inflation, interest rate rises, blah, blah, blah) has dawned on him (not saying he doesn't know what he is doing, but the climate has changed). His comments are valid whichever way this eventually goes - he hasn't said that the evaluations will actually be converted; that's just a goal for the company.
I think it was a measured response to difficult times. If we see this through to the inevitable economic rebound then we should be in a good position to prosper from the work that is being done now. imo
 
  • Like
  • Love
  • Fire
Reactions: 15 users
Hi Fact Finder,
Yes, I've only recently emerged as a poster and don't really post here like some of you do, mainly because I have a full-time job and a young family. Just to give some background, I'm an electrical engineer by profession, so I'm not an expert in electronics or chips, but I have enough basic understanding to grasp the difference between Akida, GPUs and CPUs. Since you have commented about me and my post in a separate post as well, let me provide my view on the points above. For simplicity I just pointed to your point 5 in the original post, but even the cost comparison is not viable if you cannot replace a GPU with Akida in all use cases. Akida is for the edge and GPUs are for graphics processing.

1. I'm not sure what you are referring to here, and I can't comment further without knowing what parameter they are comparing.

2. Akida is an AI chip, and I think here he means that Akida can be used to process all the sensor data of the car. That doesn't mean Akida can outperform a GPU in the areas GPUs are good at. Do you think Akida can replace a GPU and handle the graphics you see in modern cars? Can an Akida PCIe board replace a GPU and run modern games? If that were the case, our market cap wouldn't be where it is now.

3. Yes, because Akida is an AI chip designed to work at the edge and process sensor data at lower power. Therefore it'll outperform GPUs in those edge use cases when it comes to power and cost. That doesn't mean Akida is better than GPUs in a general sense.

4. I would like to know what they were benchmarking. Outperformance depends on the parameter being benchmarked. For example, you can benchmark the emissions of cars, but just because one car has lower emissions, you can't claim it is superior to its peers in every way; a car with higher emissions may outperform its peers on other parameters. So it is in fact misleading to say "AKIDA 1.0 outperforms GPUs and CPUs" without saying in what way.

5. It's a feature they added so that it's easier for end users to use Akida. Can you explain how this improves the performance of Akida?

6. Yes, so it's good for edge applications. Another feature of Akida.

7. Not sure how good Akida is at maths, but I'd like to hear from @Diogenese whether Akida can outperform a CPU in this case.

8 and 9. Again, these relate to sensor data, and no one is saying Akida cannot process sensor data.
As I said I only know what I am told and read.

Therefore I can tell you that you are wrong in your view at point 2. I was present at the 2019 AGM, as were many who post here, when Peter van der Made made it perfectly clear that he could throw away the GPUs and CPUs in current cars and do all the compute with 100 AKD1000 chips, as they were then called. He further stated that nine AKD1000s, including redundancy, could cover all the sensors necessary for ADAS.

(I also intended to mention that, as well as the throwing-out statement, he referenced the fact that the current compute was costing a minimum of three thousand plus dollars, and that AKIDA was likely to be $10 a chip in bulk, so total compute cost per vehicle would be about one third, or roughly $1,000. He could not have been any clearer. Following this, in early 2020, the then CEO Mr Dinardo in one of his webinars went to some length to hose down shareholder enthusiasm about this use of AKIDA, saying that while Peter van der Made was correct, their sole focus was to target the Edge.)

This debate about AKIDA being suitable only for Edge compute is a long-dead, smelly red herring. The company has stated many times that they chose to target the Edge because there is no incumbent player, and so they have no true competitor in that market.

The intention has always been to move back up the supply chain into the data centre as the company becomes established and the technology is recognised and understood.

You will notice that up to this point @Diogenese has not taken up my challenge to correct my viewpoint, as he has on many prior occasions.

I accept you are genuine, but you really do need to do a lot more research on the AKIDA technology value proposition.

For example, the following research from Sandia makes clear that the full power of SNN computing is not yet understood (with the reservation that this is not the case where Peter van der Made and his team are concerned):


Sandia Researchers Show Neuromorphic Computing Widely Applicable

March 10, 2022
ALBUQUERQUE, N.M., March 10, 2022 — With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations using the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor.
“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”
Franke models photon and electron radiation to understand their effects on components.
The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-chip Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”
The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.
Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.
There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”
Severa wrote several of the experiment’s algorithms.
Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking.
Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information. But because the process only depletes energies from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not.
The conceptually bio-based process has another advantage: Its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions. The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
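That accumulate-charge-until-threshold-then-spike behaviour is usually modelled as a leaky integrate-and-fire (LIF) neuron. A toy sketch, with made-up decay and threshold constants (nothing here comes from Sandia or Brainchip):

```python
import numpy as np

# Toy leaky integrate-and-fire neuron illustrating the accumulate-until-
# threshold-then-spike behaviour described above. The decay constant and
# threshold are arbitrary illustrative values.
def lif_run(inputs, decay=0.9, threshold=1.0):
    v = 0.0
    spikes = []
    for x in inputs:
        v = decay * v + x           # accumulate charge, with leakage
        if v >= threshold:
            spikes.append(1)        # "flash a tiny electrical burst"
            v = 0.0                 # reset after spiking
        else:
            spikes.append(0)        # no spike -> no energy spent downstream
    return spikes

rng = np.random.default_rng(0)
spike_train = lif_run(rng.uniform(0.0, 0.5, size=20))
print(spike_train)
```

Note the neuron only emits an output when its inputs have actually contributed enough charge, which is the energy argument the article is making.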
The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital are at least partially determined by the preceding length of stay.
Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.
“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”
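To make the Markov-chain / Monte Carlo idea concrete, here is a toy random-walk simulation of particles diffusing toward a barrier. It is only loosely inspired by the diffusion example in the article and is not Sandia's actual algorithm:

```python
import random

# Toy Monte Carlo random walk: each step depends only on the current
# position (the Markov property), and many independent walks estimate
# how often a particle diffuses past a barrier. Loosely inspired by the
# diffusion example in the article; not Sandia's actual algorithm.
random.seed(42)

def walk_escapes(barrier=10, max_steps=200):
    position = 0
    for _ in range(max_steps):
        position += random.choice((-1, 1))   # next state depends only on current
        if position >= barrier:
            return True                      # particle crossed the barrier
    return False

n_walks = 2000
escape_fraction = sum(walk_escapes() for _ in range(n_walks)) / n_walks
print(f"estimated escape probability: {escape_fraction:.3f}")
```

On neuromorphic hardware, each walk's result can be tallied as a spike rather than logged step by step, which is the summary-statistics trick Severa describes above.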
The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.
The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger scale systems then combine multiple chips to a board.
“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”
The work was funded under the NNSA Advanced Simulation and Computing program and Sandia’s Laboratory Directed Research and Development program.

A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories.
About Sandia National Laboratories
Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Source: Sandia National Laboratories

*********************************************************************************************************************************************************

Moving on from this, in the final report to NASA on the outcome of its Phase 1 project to provide a design for a HARDSIL AKIDA 1000 for unconnected autonomous space applications, VORAGO stated that AKIDA would allow the rover to achieve full autonomy and the NASA goal of speeds up to 20 kph.

AKIDA technology is not just about processing sensors, and again, accepting you are genuine, why not contact Edge Impulse or Brainchip and ask them for the details of the benchmarking they engaged in?

I will say this: at the 2021 AI Field Day, Anil Mankar said this about GPUs and AKIDA:


"And that's why we are able to do low power analysis.. The same Mobilenet V1 that you can run on a GPU I can do inference on it. I'll be doing exactly the same level of computation that you want to do because depending on the number of parameters in your CNN, what your input resolutions, you have to do certain calculations to find object classification. We do something similar, but because we do an event domain, I will not be doing, I will be avoiding operations where they are zero value events."
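Mankar's point about avoiding zero-valued events is easy to illustrate: with ReLU-style sparsity, an event-based engine only touches the non-zero activations. A trivial sketch (the sparsity level is a made-up assumption, not a Brainchip figure):

```python
import numpy as np

# Trivial illustration of Mankar's point: a dense engine multiplies every
# activation by its weight, while an event-based engine only processes the
# non-zero ("event") entries. Values here are made up for illustration.
rng = np.random.default_rng(1)
activations = rng.random(1000)
activations[activations < 0.8] = 0.0     # ReLU-style sparsity: ~80% zeros
weights = rng.random(1000)

dense_ops = activations.size             # dense: one multiply per activation
events = np.flatnonzero(activations)     # event domain: only non-zero entries
event_ops = events.size

dense_result = float(activations @ weights)
event_result = float(activations[events] @ weights[events])

print(dense_ops, event_ops)              # e.g. 1000 vs roughly 200
assert abs(dense_result - event_result) < 1e-9  # same answer, fewer operations
```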

Your statement was that if AKIDA could do these things then our valuation would not be where it is, and that is the whole point: AKIDA technology is beyond anything you are imagining its limits to be.

Finally, these researchers do not share your view about being able to do regression analysis using SNN technology; in fact they think they are on to something. However, Peter van der Made and team beat them to it by a significant margin:


Spiking Neural Networks for Nonlinear Regression
Alexander Henkes, Jason K. Eshraghian, Member, IEEE, Henning Wessels

Abstract—Spiking neural networks, also often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption over traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. To broaden the pathway toward engineering applications, where regression tasks are omnipresent, we introduce this exciting technology in the context of continuum mechanics. However, the nature of spiking neural networks poses a challenge for regression problems, which frequently arise in the modeling of engineering sciences. To overcome this problem, a framework for regression using spiking neural networks is proposed. In particular, a network topology for decoding binary spike trains to real numbers is introduced, utilizing the membrane potential of spiking neurons. Several different spiking neural architectures, ranging from simple spiking feed-forward to complex spiking long short-term memory neural networks, are derived. Numerical experiments directed towards regression of linear and nonlinear, history-dependent material models are carried out. As SNNs exhibit memory-dependent dynamics, they are a natural fit for modelling history-dependent materials, which are prevalent through all of engineering sciences. For example, we show that SNNs can accurately model materials that are stressed beyond reversibility, which is a challenging type of non-linearity. A direct comparison with counterparts of traditional neural networks shows that the proposed framework is much more efficient while retaining precision and generalizability. All code has been made publicly available in the interest of reproducibility and to promote continued enhancement in this new domain.
(Sorry, I left out that this paper was published in October 2022.)


My opinion only so DYOR
FF

AKIDA BALLISTA


 
  • Like
  • Fire
  • Love
Reactions: 35 users

Diogenese

Top 20
Hey Blind Freddie, these jokers reckon Brainchip's current cash of $24.6 million gives them 6 quarters of operating runway.

How many quarters are there in a year?

What’s that? 4. So 4 into 6 goes how much?

A year and a half. So if you double $24.6 million, what’s that?

Really? $49.2 million. So that’s less than the $50 million you said they can get their hands on, which would be three years and a bit left over.

Do you reckon these jokers have got that right?

You do. Three years plus? I don’t believe it. Are you sure your maths is right?

I mean if you are wrong about how many quarters are in a year.

I’m gunna Google it to be sure.

Hey Freddie, you’re right. No doubt about you. Mum was right, you certainly are the one with the brains.

Three years. So that bloke saying the next 4Cs were make or break was talking through his hat. I’ll tell him he’s dreaming if I see him down the pub.

My opinion only and lots of plagiarism so DYOR & add ups.
FF

AKIDA BALLISTA
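For anyone who wants to check Blind Freddie's sums, here they are spelled out (figures as quoted in the post above):

```python
# Blind Freddie's runway arithmetic, spelled out. Figures come from the
# post above: $24.6M cash lasting 6 quarters implies the burn rate; $50M
# of accessible funds then stretches to roughly three years.
cash_now = 24.6          # $ million
quarters_now = 6
burn_per_quarter = cash_now / quarters_now   # 4.1 $M per quarter

accessible = 50.0        # $ million the company "can get their hands on"
quarters_total = accessible / burn_per_quarter
years_total = quarters_total / 4             # 4 quarters in a year

print(f"burn: ${burn_per_quarter:.1f}M/quarter")
print(f"runway on $50M: {quarters_total:.1f} quarters = {years_total:.2f} years")
```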
Also tell him not to sit on his hat ...
 
  • Haha
  • Like
Reactions: 6 users
Also tell him not to sit on his hat ...
Ssssh, I put it on his favourite chair to teach him a lesson. He is so smug about being able to do maths.
 
  • Haha
  • Like
  • Wow
Reactions: 9 users

Dr E Brown

Regular
As I said I only know what I am told and read.

Therefore I can tell you that you are wrong as to your view at point 2. I was present at the 2019 AGM as were many who post here when Peter van der Made made it perfectly clear that what he was saying was that he could throw away the GPU and CPUs in the current cars and do all the compute with 100 AKD1000 chips as they were then called. He further stated that with nine AKD1000 including providing redundancy he could cover all the sensors necessary for ADAS.

( Also intended to mention that as well as the throwing out statement he referenced the fact that the current compute was costing a minimum of three thousand plus dollars and that AKIDA was likely to be $10 a chip in bulk so total compute cost per vehicle would be one third or about $1,000 he could not have been any clearer. Following this in early 2020 the then CEO Mr Dinardo in one of his webinars went to some length to hose down shareholder enthusiasm about this use of AKIDA saying that while Peter van der Made was correct their sole focus was to target the Edge.)

This debate about AKIDA only being suitable for Edge compute is a long dead smelly red herring. It has been stated by the company many times that they have chosen to target the Edge because there is no incumbent player and so they do not have any true competitor in that market.

The intention has always been to move back up the supply chain into the data centre as the company becomes established and the technology is recognised and understood.

You will notice that up to this point @Diogenese has not taken up my challenge to correct my view point as he has on many prior occasions.

I accept you are genuine but you really do need to do a lot more research around the AKIDA technology value proposition.

For example the following research from Sandia makes clear that the full power of SNN computing is not yet understood with the reservation that this is not the case where Peter van der Made and his team are concerned:


Sandia Researchers Show Neuromorphic Computing Widely Applicable​

March 10, 2022
ALBUQUERQUE, N.M., March 10, 2022 — With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations using the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor.
“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”
Franke models photon and electron radiation to understand their effects on components.
The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-chip Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”
The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.
Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.
There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”
Severa wrote several of the experiment’s algorithms.
Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking. Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information. But because the process only depletes energies from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not. The conceptually bio-based process has another advantage: Its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions. The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital are at least partially determined by the preceding length of stay.
Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.
“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”
The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.
The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger scale systems then combine multiple chips to a board.
“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”
The work was funded under the NNSA Advanced Simulation and Computing program and Sandia’s Laboratory Directed Research and Development program.

A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories.
About Sandia National Laboratories
Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Source: Sandia National Laboratories

*********************************************************************************************************************************************************

Moving on from this: in its final report to NASA on the outcome of the Phase 1 project to provide a design for a HARDSIL-hardened AKIDA 1000 for unconnected autonomous space applications, Vorago stated that AKIDA would allow the rover to achieve full autonomy and the NASA goal of speeds up to 20 kph.

AKIDA technology is not just about processing sensors. Again, accepting that you are genuine, why do you not contact Edge Impulse or Brainchip and ask them for the details of the benchmarking they engaged in?

I will say this: at the 2021 AI Field Day, Anil Mankar said the following about GPUs and AKIDA:


"And that's why we are able to do low power analysis.. The same Mobilenet V1 that you can run on a GPU I can do inference on it. I'll be doing exactly the same level of computation that you want to do because depending on the number of parameters in your CNN, what your input resolutions, you have to do certain calculations to find object classification. We do something similar, but because we do an event domain, I will not be doing, I will be avoiding operations where they are zero value events."
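The zero-skipping Mankar describes is the standard event-domain trick: multiply-accumulate (MAC) work is only done where an activation is non-zero. A hedged sketch of the idea follows; the layer shapes and the op-counting functions are mine for illustration, not BrainChip's implementation:

```python
import numpy as np

def dense_macs(activations, weights):
    """Conventional layer: every activation participates, zeros included."""
    return activations.size * weights.shape[1]  # one MAC per (input, output) pair

def event_macs(activations, weights):
    """Event-domain layer: only non-zero activations generate events, so
    zero-valued entries cost no multiply-accumulate operations at all."""
    nonzero = int(np.count_nonzero(activations))
    return nonzero * weights.shape[1]

# After a ReLU, activations are typically mostly zero.
acts = np.maximum(np.random.default_rng(0).normal(size=256), 0)
w = np.zeros((256, 64))  # placeholder weights; only the shape matters here
print(dense_macs(acts, w), event_macs(acts, w))
```

With roughly half the ReLU outputs at zero, the event-domain count is roughly half the dense count for the same classification result, which is the "same level of computation, fewer operations" point in the quote.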

Your statement was that if AKIDA could do these things, our valuation would not be where it is. That is the whole point: AKIDA technology is beyond anything you are imagining to be its limits.

Finally, these researchers do not share your view that regression analysis cannot be done using SNN technology. In fact, they think they are on to something; however, Peter van der Made and team beat them to it by a significant margin:


Spiking Neural Networks for Nonlinear Regression
Alexander Henkes, Jason K. Eshraghian, Member, IEEE, Henning Wessels
Abstract—Spiking neural networks, also often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption over traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. To broaden the pathway toward engineering applications, where regression tasks are omnipresent, we introduce this exciting technology in the context of continuum mechanics. However, the nature of spiking neural networks poses a challenge for regression problems, which frequently arise in the modeling of engineering sciences. To overcome this problem, a framework for regression using spiking neural networks is proposed. In particular, a network topology for decoding binary spike trains to real numbers is introduced, utilizing the membrane potential of spiking neurons. Several different spiking neural architectures, ranging from simple spiking feed-forward to complex spiking long short-term memory neural networks, are derived. Numerical experiments directed towards regression of linear and nonlinear, history-dependent material models are carried out. As SNNs exhibit memory-dependent dynamics, they are a natural fit for modelling history-dependent materials, which are prevalent through all of engineering sciences. For example, we show that SNNs can accurately model materials that are stressed beyond reversibility, which is a challenging type of non-linearity. A direct comparison with counterparts of traditional neural networks shows that the proposed framework is much more efficient while retaining precision and generalizability. All code has been made publicly available in the interest of reproducibility and to promote continued enhancement in this new domain.
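The decoding idea in the abstract, reading a real number off the membrane potential of a non-firing output neuron, can be sketched as follows. The decay constant and input weight are illustrative, not the paper's values:

```python
def decode_spike_train(spikes, weight=0.5, beta=0.8):
    """Decode a binary spike train to a real number by integrating it on a
    leaky membrane that never fires: the final membrane potential IS the
    regression output, so no spike-to-number conversion step is needed."""
    potential = 0.0
    for s in spikes:
        potential = beta * potential + weight * s  # leaky integration of events
    return potential

# Denser spike trains decode to larger real-valued outputs.
sparse = [0, 1, 0, 0, 1, 0, 0, 0]
dense  = [1, 1, 1, 0, 1, 1, 1, 1]
print(decode_spike_train(sparse), decode_spike_train(dense))
```

Because the membrane state carries history, the same mechanism gives the network the memory the authors exploit for history-dependent material models.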

(Sorry, I left out that this paper was published in October 2022.)


My opinion only so DYOR
FF

AKIDA BALLISTA


If I remember correctly, the other thing that is mentioned continuously is that Akida is not a maths chip.
 
Last edited:
  • Like
  • Haha
Reactions: 3 users
Yes, and I asked the convoluted question at the 2019 AGM, “While AKIDA does not do maths, can it do maths?”, to which Peter van der Made replied, “Yes it can do maths.”
(His exact words.)

When Peter van der Made states that AKIDA does not do maths, he is referring to multiply-accumulate computing in the way a Von Neumann computer operates.

But of course you know this don’t you. 😂🤣😂

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 16 users

Dr E Brown

Regular
Yes mate lol
I’m not an engineer of any kind, just a chemist. But sometimes I get the feeling that trying to compare Akida with what is current is like comparing the horse with the automobile. We have to use a measure with which we are familiar, but what does it really mean when taken as a whole?
 
  • Like
  • Love
  • Thinking
Reactions: 8 users

Rskiff

Regular
I just watched the first of the two parts from Tony Seba, "The Great Transformation". Well worth watching. Love "S" curves.
 
  • Like
  • Fire
Reactions: 7 users

Diogenese

Top 20
As I said I only know what I am told and read.

Therefore I can tell you that you are wrong in your view at point 2. I was present at the 2019 AGM, as were many who post here, when Peter van der Made made it perfectly clear that what he was saying was that he could throw away the GPUs and CPUs in current cars and do all the compute with 100 AKD1000 chips, as they were then called. He further stated that with nine AKD1000 chips, including redundancy, he could cover all the sensors necessary for ADAS.

(I also intended to mention that, as well as the throwing-out statement, he referenced the fact that the current compute was costing a minimum of three thousand plus dollars and that AKIDA was likely to be $10 a chip in bulk, so total compute cost per vehicle would be about $1,000, or one third. He could not have been any clearer. Following this, in early 2020, the then CEO Mr Dinardo, in one of his webinars, went to some length to hose down shareholder enthusiasm about this use of AKIDA, saying that while Peter van der Made was correct, their sole focus was to target the Edge.)
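The cost claim is simple arithmetic, using the figures as quoted in the post (not independently verified):

```python
# Figures as quoted: 100 AKD1000 chips per vehicle at $10 each in bulk,
# versus a quoted minimum of $3,000 for the incumbent GPU/CPU compute.
chips_per_car = 100
cost_per_chip = 10          # dollars, bulk pricing as quoted
akida_total = chips_per_car * cost_per_chip
incumbent_total = 3000      # quoted minimum for current compute

print(akida_total)                      # → 1000
print(akida_total / incumbent_total)    # → one third of the incumbent cost
```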

This debate about AKIDA being suitable only for Edge compute is a long-dead, smelly red herring. The company has stated many times that it has chosen to target the Edge because there is no incumbent player there, and so it has no true competitor in that market.

The intention has always been to move back up the supply chain into the data centre as the company becomes established and the technology is recognised and understood.

You will notice that up to this point @Diogenese has not taken up my challenge to correct my viewpoint, as he has on many prior occasions.

I accept you are genuine but you really do need to do a lot more research around the AKIDA technology value proposition.

For example, the following research from Sandia makes clear that the full power of SNN computing is not yet understood, with the reservation that this is not the case where Peter van der Made and his team are concerned:


Sandia Researchers Show Neuromorphic Computing Widely Applicable​

March 10, 2022
ALBUQUERQUE, N.M., March 10, 2022 — With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations using the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor.
“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”
Franke models photon and electron radiation to understand their effects on components.
The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-neuron Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”
The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.
Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.
There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”
Severa wrote several of the experiment’s algorithms.
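Computing summary statistics on-chip instead of exporting raw walk data, as Severa describes, can be sketched with simple running accumulators. This is a hedged illustration; the two-accumulator design and names are mine, not Sandia's:

```python
def summarize_walks(walk_results):
    """Instead of streaming every walk result off-chip, keep running
    accumulators (count, sum, sum of squares) and output only the
    summary statistics, cutting the data that must leave the processor."""
    n = 0
    total = 0
    total_sq = 0
    for r in walk_results:
        n += 1
        total += r
        total_sq += r * r
    mean = total / n
    variance = total_sq / n - mean * mean
    return {"n": n, "mean": mean, "variance": variance}

# Only three numbers leave the "chip", however many walks were run.
print(summarize_walks([2, -1, 3, 0, -2, 4]))
```

The point is the output size: three numbers regardless of the number of walks, which is why the data-movement bottleneck Severa mentions disappears.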


Just as a preface, this is beyond my pay grade (and, to top it off, statistics is not my strong suit).

Before PvdM made the statement at that AGM a couple of years ago that Akida could do maths, I had expressed the opinion that Akida was not suited to maths, and I was hoping you would have the good grace not to embarrass me by dragging it up again.

So PvdM knew it could do maths - Sandia had to carry out a research project to prove it.

“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”

Here is a Sandia random walk patent (Priority: 20210312):

US2022292364A1 DEVICE AND METHOD FOR RANDOM WALK SIMULATION




[0002] This invention was made with United States Government support under Contract No. DE-NA0003525 between National Technology & Engineering Solutions of Sandia, LLC and the United States Department of Energy. The United States Government has certain rights in this invention.

[0004] Random walk refers to the stochastic (random) process of taking a sequence of discrete, fixed-length steps in random directions. This process has been used to solve a wide range of numerical computation tasks and scientific simulations. However, the electrical power consumption by a computer simulating a large number of random walkers can be large ...

[0006] An illustrative embodiment provides a computer-implemented method for simulating a random walk in spiking neuromorphic hardware. The method comprises receiving, by a buffer count neuron, a number of spiking inputs from a number of upstream mesh nodes, wherein the spiking inputs each include an information packet comprising information associated with a simulation of a specific random walk process. A buffer generator neuron in the mesh node, in response to the inputs, generates a first number of spikes until the buffer count neuron reaches a first predefined threshold. Upon reaching the first predefined threshold the buffer generator neuron sends a number of buffer spiking outputs to a spike count neuron in the mesh node. The spike count neuron counts the buffer spiking outputs from the buffer generator neuron. In response to the buffer spiking outputs, a spike generator neuron in the mesh node generates a second number of spikes until the spike count neuron reaches a second predefined threshold. Upon reaching the second predefined threshold, the spike generator neuron sends a number of counter spiking outputs to a probability neuron in the mesh node, wherein the counter spiking outputs each include an information packet comprising updated information associated with the simulation of the specific random walk process. The probability neuron selects a number of downstream mesh nodes to receive the counter spiking outputs generated by the spike generator neuron and sends the counter spiking outputs to the selected downstream mesh nodes.
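The buffer-count and spike-count neurons in [0006] are essentially threshold counters: a neuron accumulates input spikes and fires once a preset count is reached. A toy sketch of that counting primitive (the class name and threshold value are invented for illustration, not the patent's terminology):

```python
class CountNeuron:
    """Fires once it has received a predefined number of input spikes,
    mirroring the buffer-count / spike-count neurons in the patent text."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def receive(self, spike=1):
        """Accumulate one input spike; return True when the neuron fires."""
        self.count += spike
        if self.count >= self.threshold:
            self.count = 0   # reset after firing
            return True      # emit an output spike downstream
        return False

# A walker's packet is forwarded only after 3 incoming spikes are buffered.
buffer_count = CountNeuron(threshold=3)
fired = [buffer_count.receive() for _ in range(7)]
print(fired)  # → [False, False, True, False, False, True, False]
```

Chaining two such counters, with a probability neuron choosing the downstream node, reproduces the mesh-node pipeline the paragraph describes.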

... so here I am with my wilted fig leaf ...
 
  • Like
  • Fire
  • Haha
Reactions: 26 users

Sirod69

bavarian girl ;-)
I am really a fan of Qualcomm:

Digitally connecting your supply chain may sound daunting, but we’re here to help. Explore how our #IoT solutions are helping the supply chain industry take part in the digital revolution using everything from private #5G networks to edge computing and beyond:


How digitally transforming your business will revolutionize your operations​

..........

Connecting unconnected things

We start by connecting things we never imagined could be connected. Today, almost any device can become “smart” – that is, it can connect to its users, the internet and other devices. This enables those devices to start collecting data. All kinds of information – from inventory numbers and the location or condition of assets to safe work zones and employee behavior – can be tracked and gathered.

Next, smart devices can be connected across different environments, from inside a giant warehouse to the middle of a crowded city to a highway in the middle of nowhere.

QTI leverages over 35 years of mobile industry leadership to create connectivity solutions that work indoors, outside and even when devices are offline. By utilizing 5G networks where available, devices with our tech can upload more data in a shorter amount of time. This means regardless of whether you are using cellular, Wi-Fi, a private network or other standards-based connectivity solutions, you can stay connected to your devices and data for longer periods of time without breaks in coverage.
.........

 
  • Like
  • Fire
  • Love
Reactions: 15 users
But sometimes I get the feeling trying to compare Akida with what is current is like comparing the horse with the automobile

Reminds me of the classic Henry Ford quote below - that ironically is not out of step with the thinking by PvDM and BrainChip IMHO

“If I had asked people what they wanted, they would have said faster horses."
 
  • Like
  • Haha
  • Love
Reactions: 15 users

equanimous

Norse clairvoyant shapeshifter goddess
The fig leaf serves a purpose

 
  • Haha
  • Love
  • Wow
Reactions: 6 users

GazDix

Regular
I just want to mention I love the community we have here. I avidly read this forum everyday and information about other investments and it was especially great to see so many posters that don't post too much come out of the woodwork with great thoughts and comments.
Yes, emotions were especially high yesterday, but that is normal and OK.

I will look to increase my holding sometime next week when the right moment comes. More shares than usual were traded yesterday, and the mutual funds and institutions now have more from retail holders.

Something I learned early on when I started investing is to act, think and follow like the big guys as much as possible (albeit with much smaller amounts!). A company with enormous potential like Brainchip is tricky: entering at the product-development stage means your time horizon for seeing success has to be long term, while you keep the ability to exit if the fundamentals (or rather, your perception of the fundamentals) change for the worse. It also means allowing for revenue/product/launch delays (which happen in this space), and having infinite patience and balls of steel while cold-bloodedly noting "yeah, only $**000 down so far today" (or more, or less!), as many of you saw just yesterday.

If you are using money you don't need now, all that has changed is the $$ amounts; the number of shares you have is the same. My time horizon for selling my main parcel is mid-2024 (I always leave a little on top to trade, so I don't feel like I completely miss out on selling the top in a surge), and my wife will hopefully hold her parcel for dividends 8-10 years down the track, if the course and roadmap don't change fundamentally.

We also follow our gut, as nothing is a sure thing. I felt far more worried about Brainchip this time last year, when we had no permanent CEO, fewer patents and partners, the vision was a little jittery and fewer dots were joined. But management were always honest. I once held Mesoblast (MSB) shares a few years ago, and one announcement got my blood boiling; my gut said "these guys are on a different planet" and I sold all my shares right there and then.

Looking forward to Monday as it should be a very interesting day on the ASX in general.
Cheers all,
 
  • Like
  • Love
  • Fire
Reactions: 45 users

Learning

Learning to the Top 🕵‍♂️
On a different note, why it's important to keep the secret sauce secret.

Four Samsung employees charged with semiconductor tech theft


It is reported that four current and former Samsung employees have been charged with theft of proprietary semiconductor technology. These employees reportedly stole highly valued semiconductor chip technology from Samsung and leaked it to overseas firms.

The Seoul Central District Prosecutors Office has indicted those four employees with physical detention for violating the unfair competition prevention act and the industrial technology protection act. Two of those employees are former engineers, while the remaining two work as researchers for Samsung Engineering.

One former employee, who worked for Samsung's semiconductor division, obtained blueprints and operation manuals for the ultrapure water system and other critical technical data while searching for a job. He leaked that data to a Chinese semiconductor consulting firm, and when he got a job there, he used the stolen data to order an ultrapure water system.

Prosecutors said another former Samsung employee was charged with stealing a file containing key semiconductor technology. He allegedly gave that file to the company’s competitor Intel while still working for Samsung. He stole data by capturing images of the data in the file.


Learning
 
  • Like
  • Wow
  • Love
Reactions: 14 users
Yes mate lol
I'm not an engineer of any kind, just a chemist. But sometimes I get the feeling that trying to compare Akida with what is current is like comparing the horse with the automobile. We have to use a measure with which we are familiar, but what does it really mean when taken as a whole?
I initially thought about being a chemist when my father did not want me to be a lawyer. He wanted me to be a builder, but what he wanted most was for me not to be a lawyer, so that's what I did. Very rational decision by a 17 year old. LOL

Because I missed out on Von Neumann, I thought that by getting in at the beginning of something, without all the established knowledge already consumed, I could impress my grandchildren one day as I dribble away in the nursing home. LOL
 
Last edited:
  • Haha
  • Like
  • Love
Reactions: 17 users
“He gave the information to Intel.”

There is a great piece of wisdom handed down from criminal law judge to criminal law judge. It goes something like: if there were no receivers, there would be no thieves.

Accordingly, the receiver of the stolen goods is punished more severely than the thief.

Intel should be ashamed of itself if it did not bring his theft to notice.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Thinking
Reactions: 13 users

Andy38

The hope of potential generational wealth is real
I picked up 1,750 at 71.5.

Would have loved to buy more but I'm trying to be fiscally responsible with the arrival of my first in February!

Couldn't help myself today though lol.
Congrats mate…just do what I do and tell the other half that the extra shares are for the kids future!! They surely can’t argue with that!!
 
  • Haha
  • Like
  • Fire
Reactions: 11 users
As I said I only know what I am told and read.

Therefore I can tell you that you are wrong in your view at point 2. I was present at the 2019 AGM, as were many who post here, when Peter van der Made made it perfectly clear that he could throw away the GPUs and CPUs in current cars and do all the compute with 100 AKD1000 chips, as they were then called. He further stated that with nine AKD1000s, including redundancy, he could cover all the sensors necessary for ADAS.

(I also intended to mention that, as well as the throwing-out statement, he referenced the fact that the current compute was costing a minimum of three thousand plus dollars, and that AKIDA was likely to be $10 a chip in bulk, so the total compute cost per vehicle would be about one third of that, or $1,000. He could not have been any clearer. Following this, in early 2020, the then CEO Mr Dinardo went to some length in one of his webinars to hose down shareholder enthusiasm about this use of AKIDA, saying that while Peter van der Made was correct, their sole focus was to target the Edge.)

This debate about AKIDA only being suitable for Edge compute is a long-dead, smelly red herring. The company has stated many times that it has chosen to target the Edge because there is no incumbent player, and so it has no true competitor in that market.

The intention has always been to move back up the supply chain into the data centre as the company becomes established and the technology is recognised and understood.

You will notice that, up to this point, @Diogenese has not taken up my challenge to correct my viewpoint, as he has on many prior occasions.

I accept you are genuine but you really do need to do a lot more research around the AKIDA technology value proposition.

For example, the following research from Sandia makes clear that the full power of SNN computing is not yet understood, with the reservation that this is not the case where Peter van der Made and his team are concerned:


Sandia Researchers Show Neuromorphic Computing Widely Applicable

March 10, 2022
ALBUQUERQUE, N.M., March 10, 2022 — With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations using the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor.
“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”
Franke models photon and electron radiation to understand their effects on components.
The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-neuron Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”
The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.
Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.
There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”
Severa wrote several of the experiment’s algorithms.
Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking. Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information. But because the process only depletes energies from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not. The conceptually bio-based process has another advantage: Its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions. The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital are at least partially determined by the preceding length of stay.
Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.
“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”
The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.
The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger scale systems then combine multiple chips to a board.
“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”
The work was funded under the NNSA Advanced Simulation and Computing program and Sandia’s Laboratory Directed Research and Development program.

A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories.
About Sandia National Laboratories
Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Source: Sandia National Laboratories
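For anyone wondering what the "random walk" workload in the article actually looks like, here is a minimal conventional (non-neuromorphic) Monte Carlo sketch of the barrier-diffusion problem it mentions. All parameter values are illustrative, and on neuromorphic hardware each step would instead be a spike routed between mesh-node neurons rather than a loop iteration:

```python
import random

def random_walk_escape_fraction(n_walkers=10_000, barrier=10,
                                max_steps=200, seed=42):
    """Estimate the fraction of unbiased 1-D random walkers, started
    at position 0, that cross a barrier within a fixed step budget.
    Each step is a Markov move: it depends only on the current
    position, not on the walk's history."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_walkers):
        position = 0
        for _ in range(max_steps):
            position += rng.choice((-1, 1))  # one Markov step
            if position >= barrier:
                escaped += 1
                break
    return escaped / n_walkers
```

The energy argument in the article is that a conventional machine must execute (and power) every one of these loop iterations, whereas spiking hardware only spends energy on neurons that actually fire, and can summarise many walks on-chip before any data leaves the processor.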

*********************************************************************************************************************************************************

Moving on from this: in its final report to NASA on the outcome of the Phase 1 project to provide a design for a HARDSIL AKIDA 1000 for unconnected autonomous space applications, Vorago stated that AKIDA would allow a rover to achieve full autonomy and the NASA goal of speeds of up to 20 kph.

AKIDA technology is not just about processing sensors. Again accepting that you are genuine, why not contact Edge Impulse or Brainchip and ask them for the details of the benchmarking they engaged in?

I will say this: at the 2021 AI Field Day, Anil Mankar said the following about GPUs and AKIDA:


"And that's why we are able to do low power analysis.. The same Mobilenet V1 that you can run on a GPU I can do inference on it. I'll be doing exactly the same level of computation that you want to do because depending on the number of parameters in your CNN, what your input resolutions, you have to do certain calculations to find object classification. We do something similar, but because we do an event domain, I will not be doing, I will be avoiding operations where they are zero value events."

You state that if AKIDA could do these things then our valuation would not be where it is, and that is the whole point: AKIDA technology is beyond anything you are imagining to be its limits.

Finally, these researchers do not share your view about being unable to do regression analysis using SNN technology. In fact, they think they are on to something; however, Peter van der Made and team beat them to it by a significant margin:


Spiking Neural Networks for Nonlinear Regression
Alexander Henkes, Jason K. Eshraghian, Member, IEEE, Henning Wessels
Abstract—Spiking neural networks, also often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption over traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. To broaden the pathway toward engineering applications, where regression tasks are omnipresent, we introduce this exciting technology in the context of continuum mechanics. However, the nature of spiking neural networks poses a challenge for regression problems, which frequently arise in the modeling of engineering sciences. To overcome this problem, a framework for regression using spiking neural networks is proposed. In particular, a network topology for decoding binary spike trains to real numbers is introduced, utilizing the membrane potential of spiking neurons. Several different spiking neural architectures, ranging from simple spiking feed-forward to complex spiking long short-term memory neural networks, are derived. Numerical experiments directed towards regression of linear and nonlinear, history-dependent material models are carried out. As SNNs exhibit memory-dependent dynamics, they are a natural fit for modelling history-dependent materials, which are prevalent through all of engineering sciences. For example, we show that SNNs can accurately model materials that are stressed beyond reversibility, which is a challenging type of non-linearity. A direct comparison with counterparts of traditional neural networks shows that the proposed framework is much more efficient while retaining precision and generalizability. All code has been made publicly available in the interest of reproducibility and to promote continued enhancement in this new domain.

(Sorry, I left out that this paper was published in October 2022.)
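The key trick the abstract mentions, decoding a binary spike train to a real number via the membrane potential, can be illustrated with a few lines of plain Python. This is only a minimal sketch of the general idea, using a leaky integrator with an assumed leak factor; the paper's actual architectures are considerably more sophisticated:

```python
def decode_spike_train(spikes, beta=0.9):
    """Read a real number off the membrane potential of a leaky
    integrator neuron that never fires, driven by a binary spike
    train. `beta` is an assumed leak factor, not a value from the
    paper."""
    membrane = 0.0
    for s in spikes:
        membrane = beta * membrane + s  # leaky integration of incoming spikes
    return membrane

# A denser spike train drives the membrane potential higher, so the
# final potential serves as a continuous-valued regression output.
dense = decode_spike_train([1, 1, 1, 1])   # 3.439
sparse = decode_spike_train([1, 0, 0, 0])  # 0.729
```

Because the output lives on a continuous membrane potential rather than a discrete spike count, the network can target arbitrary real values, which is exactly what regression requires.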


My opinion only so DYOR
FF

AKIDA BALLISTA


The tech is truly amazing; it just takes time. I hope I don't have to sell out for the house; regardless, for me personally it's always been "wait until 2025" since I entered.
 
  • Like
  • Love
Reactions: 8 users
Top Bottom