BRN Discussion Ongoing

Dhm

Regular
Well this is very exciting in my humble opinion

This guy Kostantinos Demertzis has completed 3 years of postdoctoral research in cyber security, on an intelligent cybersecurity system that integrates the BrainChip Akida NSoC.

And it was submitted 8 hours ago, as far as I can tell.



Tactical Imagery Intelligence Operations by Cloud Robotics and Artificial Intelligence


Kostantinos Demertzis
2022, NRDC-GR Herald Issue
Intelligence services, in government and military operations, evaluate information on the strength, activities, and possible courses of action of foreign countries or non-state actors. The term Open-Source Intelligence (OSINT) refers to the collection, analysis, and distribution of information drawn from publicly accessible sources (e.g., media, internet, free databases, social networks). The field of Imagery Intelligence (IMINT) is one of the most important categories of information analysis and includes all the information collected through photographs, aerial photos, or satellite images. The process and outcome of synthesizing this information is an important element of national power and a fundamental input to national security, defense, and foreign policy decisions. This article introduces the process of tactical IMINT operations and, in particular, performs a thorough analysis of how IMINT technologies are enhanced using cloud robotics and artificial intelligence.


I read through all of that, without understanding much of it, but that is my problem. Just to prove to all of you that I did, a special prize to the first person that finds an editing error in the article.
I really appreciate how our 1000 eyes (must be more by now) find relevant articles with even just a hint of Brainchip in them from a heap of diverse places and locations. Another brick in the wall of confidence that we are on to a life-changing, once-in-a-lifetime gift.
 

TechGirl

Founding Member
To make up for your disappointment I suggest you go to 'LEARN' on the Brainchip website and open 'In the News' and just look at the top two entries. No need to open them, just look. The view is uplifting. Then read the following, taken from Brainchip Perspectives, the last paper titled "Brainchip gives the Green light to automotive"(?):

"Conclusion

The automotive industry is undergoing a tremendous transformation right now. From government mandates requiring manufacturers to move away from gas-powered vehicles to businesses looking to minimize labour costs by adopting autonomous vehicles, the future begins now.

AI and Deep Learning tools are already being deployed in the industry in an attempt to make these cars smarter and safer. But much like implementing radios into cars in the early 20th Century, there are issues and complexities that are complicating these efforts. By moving AI out of the data centre to the location where data is created, a lot of these problems can be averted.

BrainChip’s Akida AI neural processor is an event-based technology that is inherently lower power when compared to conventional neural network processors. By allowing incremental learning and high-speed inferencing, Akida overcomes current technology barriers through a high-performance, low-cost, very efficient low-power solution. It’s the kind of solution that is music to the ears of automotive manufacturers and OEMs looking to bring their ideas to life. 1000 miles on a single charge here we come!"

It is obvious that both Valeo and Mercedes have heard the music but how many others???

My opinion only DYOR
FF

AKIDA BALLISTA

I did it FF, I read it, & I like it 😁


Diogenese

Top 20
A European connection?

It has long been speculated that Bosch has an NDA with Brainchip.

I think Bosch would have a strong interest in Brainchip IP... We can't prove it yet, but we do know that Bosch has been collaborating with many partners researching neuromorphic chips. Bosch's thirst for knowledge would make Akida–Brainchip connections unavoidable. See the PDF attachment.

We know Bosch has a long history of collaborating with Mercedes...

The "Like a Bosch" Mercedes ad screams Akida.

Opinion only, no proof... yet... just lots and lots of dots.



Inside the Bosch semiconductors and sensor Factory

Question: Which companies have Bosch parts in their cars?
Answer: “NO CAR WITHOUT BOSCH.”

Someone else here found the PDF of Bosch's neuromorphic and AI research at BCAI (the Bosch Center for Artificial Intelligence).
At uni in the '60s some students who could afford a second-hand car had British vehicles (Hillman, Morris Minor, Austin ...) and the electrics provided by Lucas were of such repute that they were referred to as "Lucas - prince of darkness".
 
Hi @Boab, I thought your Ukraine map was very interesting, we may have something in common, could you drop me a line: mimchale@yahoo.com
Hi McHale
Took me a little while to realise but unlike that other place there is a private message space where you can communicate with anyone that takes your fancy. LOL
FF
 

TechGirl

Founding Member
  • Haha
  • Like
  • Wow
Reactions: 12 users
  • Haha
  • Wow
  • Love
Reactions: 8 users
D

Deleted member 118

Guest
Hi McHale
Took me a little while to realise but unlike that other place there is a private message space where you can communicate with anyone that takes your fancy. LOL
FF


Mea culpa

prəmɪskjuəs
At uni in the '60s some students who could afford a second-hand car had British vehicles (Hillman, Morris Minor, Austin ...) and the electrics provided by Lucas were of such repute that they were referred to as "Lucas - prince of darkness".
Lucas - prince of darkness. It may have taken more than 50 years to learn why, but now I understand. I had a second-hand Hillman and so did a couple of my cobbers. Minxes from as early as 1953 through to mid-'60s models. We spent more time pushing them home from the pub or the drive-in than driving them. The ignition system in mine was particularly reliable...to failure. Their history of failure was the cause of otherwise unnecessary romantic break-ups. The social history of a rural town in Western Victoria may have been vastly different without Hillmans.

Thank you Dodgy Knees for your vast knowledge. Strangely, I feel better now, simply for knowing. Lucas prince of darkness...pfft.
 
D

Deleted member 118

Guest

Attachments

  • deep_spiking_neural_networks_algorithms.pdf (2.7 MB)
I will just have to stop doing research. Every time I think I know the full potential of AKIDA in a lay sense I fall into another hole and the following article is one very deep hole into which I need Dio to dive and come back with an explanation that will satisfy my needs and allow me to sleep:

Brain-Inspired Chips Good for More than AI, Study Says

Neuromorphic tech from IBM and Intel may prove useful for analyzing X-rays, stock markets, and more

CHARLES Q. CHOI
15 FEB 2022
4 MIN READ
[Image: Intel’s neuromorphic research chip Loihi, part of a class of microprocessors designed to mimic the human brain's speed and efficiency. Photo: Tim Herman/Intel]


Brain-inspired “neuromorphic” microchips from IBM and Intel might be good for more than just artificial intelligence; they may also prove ideal for a class of computations useful in a broad range of applications including analysis of medical X-rays and financial economics, a new study finds.
Scientists have long sought to mimic how the brain works using software programs known as neural networks and hardware known as neuromorphic chips. So far, neuromorphic computing has mostly focused on implementing neural networks. It was unclear whether this hardware might prove useful beyond AI applications.
Neuromorphic chips typically imitate the workings of neurons in a number of different ways, such as running many computations in parallel. In addition, just as biological neurons both compute and store data, neuromorphic hardware often seeks to unite processors and memory, potentially reducing the energy and time that conventional computers lose in shuttling data between those components. Furthermore, whereas conventional microchips use clock signals fired at regular intervals to coordinate the actions of circuits, the activity in neuromorphic architecture often acts in a spiking manner, triggered only when an electrical charge reaches a specific value, much like what happens in brains like ours.
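To see the event-driven idea in miniature, here is a rough Python sketch of a leaky integrate-and-fire neuron, the textbook spiking model. It is an illustration only, with made-up parameters; it is not how Akida, Loihi, or TrueNorth actually implement their neurons.

```python
import numpy as np

# A minimal leaky integrate-and-fire (LIF) neuron: input charge
# accumulates on a "membrane", leaks away over time, and a spike is
# emitted only when the accumulated charge crosses a threshold; there
# is no clock-driven output on the other steps.
def lif(inputs, threshold=1.0, leak=0.9):
    v = 0.0             # membrane potential
    spike_times = []
    for t, x in enumerate(inputs):
        v = leak * v + x            # integrate input with leakage
        if v >= threshold:          # event: threshold crossed
            spike_times.append(t)   # emit a spike at this time step
            v = 0.0                 # reset after firing
    return spike_times

rng = np.random.default_rng(0)
drive = rng.uniform(0.0, 0.4, size=100)   # noisy input current
print(lif(drive))                          # sparse list of spike times
```

Output work happens only at the threshold crossings; in dedicated hardware the circuit can sit idle between spikes, which is where the power savings come from.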
Until now, the main advantage envisioned for neuromorphic computing was power efficiency: features such as spiking and the uniting of memory and processing resulted in IBM’s TrueNorth chip, which boasted a power density four orders of magnitude lower than conventional microprocessors of its time.
“We know from a lot of studies that neuromorphic computing is going to have power-efficiency advantages, but in practice, people won’t care about power savings if it means you go a lot slower,” says study senior author James Bradley Aimone, a theoretical neuroscientist at Sandia National Laboratories in Albuquerque.

Now scientists find neuromorphic computers may prove well suited to what are called Monte Carlo methods, where problems are essentially treated as games and solutions are arrived at via many random simulations or “walks” of those games.
“We came to random walks by considering problems that do not scale very well on conventional hardware,” Aimone says. “Typically Monte Carlo solutions require a lot of random walks to provide a good solution, and while individually what each walker does over time is not difficult to compute, in practice, having to do a lot of them becomes prohibitive.”
In contrast, “Instead of modeling a bunch of random walkers in parallel, each doing its own computation, we can program a single neuromorphic mesh of circuits to represent all of the computations a random walk may do, and then by thinking of each random walk as a spike moving over the mesh, solve the whole problem at one time,” Aimone says.
Specifically, just as previous research has found quantum computing can display a “quantum advantage” over classical computing on a large set of problems, the researchers discovered a “neuromorphic advantage” may exist when it comes to random walks via discrete-time Markov chains. If you imagine a problem as a board game, here “chain” means playing the game by moving through a sequence of states or spaces. “Markov” means the next space you can move to in the game depends only on your current space, and not on your previous history, as is the case in board games such as Monopoly or Candy Land. “Discrete-time” simply means that a fixed time interval happens between changing spaces: “a turn,” says study lead author Darby Smith, an applied mathematician at Sandia.
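To make the board-game picture concrete, here is a small Python sketch of Monte Carlo random walks over a discrete-time Markov chain; the three-state transition matrix is invented for illustration and is not taken from the Sandia paper.

```python
import numpy as np

# Toy three-state discrete-time Markov chain: P[i, j] is the
# probability of moving from state i to state j in one "turn".
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])

def random_walk(P, start, steps, rng):
    """One walker: the next state depends only on the current state
    (the Markov property), never on the path taken to reach it."""
    state = start
    for _ in range(steps):
        state = rng.choice(len(P), p=P[state])
    return state

# Monte Carlo: run many independent walks and average the outcomes.
rng = np.random.default_rng(42)
finals = [random_walk(P, start=0, steps=20, rng=rng) for _ in range(10_000)]
print(np.bincount(finals, minlength=3) / len(finals))  # occupancy estimate
```

On conventional hardware the cost grows with walkers times turns, which is exactly the bottleneck Aimone describes.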
In experiments with IBM’s TrueNorth and Intel’s Loihi neuromorphic chips, a server-class Intel Xeon E5-2662 CPU, and an Nvidia Titan Xp GPU, the scientists found that when it comes to solving this class of problems at large scales, neuromorphic chips proved more efficient than the conventional semiconductors in terms of energy consumption, and competitive, if not better, in terms of time.
“I very much believe that neuromorphic computing for AI is very exciting, and that brain-inspired hardware will lead to smarter and more powerful AI,” says Aimone. “But at the same time, by showing that neuromorphic computing can be impactful at conventional computing applications, the technology has the potential to have much broader impact on society.”
One way the neuromorphic chips achieved their advantages in performance and energy efficiency was a high degree of parallelism. Compounding that was the ability to represent each random walk as a single spike of activity instead of a more complex set of activities.

“The big limitation on these Monte Carlo methods is that we have to model a lot of walkers,” Aimone says. “Because spikes are such a simple way of representing a random walk, adding an extra random walker is just adding one extra spike to the system that will run in parallel with all of the others. So the time cost of an extra walker is very cheap.”
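Here, with the same invented chain, is a plain-NumPy analogy for the "one mesh, many walkers" scheme: a single shared transition structure advances the whole population in one operation, so adding a walker is just one more entry in the state vector. It is only a software analogy, not how the neuromorphic chips execute it.

```python
import numpy as np

# One shared "mesh" (the transition structure) advances every walker
# together; an extra walker is one more entry in the state vector,
# loosely analogous to one more spike travelling over the mesh.
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])
cum = P.cumsum(axis=1)                  # cumulative rows for sampling

rng = np.random.default_rng(7)
states = np.zeros(10_000, dtype=int)    # 10,000 walkers, all in state 0
for _ in range(20):                     # 20 turns for the whole population
    u = rng.random(states.size)         # one uniform draw per walker
    # Each walker's next state is the first column whose cumulative
    # probability exceeds its draw: a vectorized categorical sample.
    states = (u[:, None] > cum[states]).sum(axis=1)
print(np.bincount(states, minlength=3) / states.size)
```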
The ability to efficiently tackle this class of problems has a wide range of potential applications, such as modeling stocks and options in financial markets, better understanding how X-rays interact with bone and soft tissue, tracking how information moves on social networks, and modeling how diseases travel through population clusters, Smith notes. “Applications of this approach are everywhere,” Aimone says.
But just because a neuromorphic advantage may exist for some problems “does not mean that brain-inspired computers can do everything better than normal computers,” Aimone cautions. “The future of computing is likely a mix of different technologies. We are not trying to say neuromorphic will supplant everything.”
The scientists are now investigating ways to handle interactions between multiple “walkers” or participants in these scenarios, which will enable applications such as molecular dynamics simulations, Aimone notes. They are also developing software tools to help other developers work on this research, he adds.
The scientists detailed their findings in the 14 February online edition of the journal Nature Electronics.

My opinion only and frustration DYOR
FF

AKIDA BALLISTA
 

Diogenese

Top 20
Lucas - prince of darkness. It may have taken more than 50 years to learn why, but now I understand. I had a second-hand Hillman and so did a couple of my cobbers. Minxes from as early as 1953 through to mid-'60s models. We spent more time pushing them home from the pub or the drive-in than driving them. The ignition system in mine was particularly reliable...to failure. Their history of failure was the cause of otherwise unnecessary romantic break-ups. The social history of a rural town in Western Victoria may have been vastly different without Hillmans.

Thank you Dodgy Knees for your vast knowledge. Strangely, I feel better now, simply for knowing. Lucas prince of darkness...pfft.
That's about the sum total of my university education - I don't remember anything else.
 
That's about the sum total of my university education - I don't remember anything else.
I remember converting a car to electronic ignition because of Mr. Lucas and being tired of pushing. LOL FF
 

Dhm

Regular
I will just have to stop doing research. Every time I think I know the full potential of AKIDA in a lay sense I fall into another hole and the following article is one very deep hole into which I need Dio to dive and come back with an explanation that will satisfy my needs and allow me to sleep: […]
Don't you find it annoying that the author Charles Choi doesn't mention the very company that actually has an SNN chip on the market and is cementing relationships with multiple companies as I type this? Loihi and TrueNorth shouldn't be the story, but that is what happens when you are the disruptive minnow in an ocean of sharks.
 

McHale

Regular
Hi McHale
Took me a little while to realise but unlike that other place there is a private message space where you can communicate with anyone that takes your fancy. LOL
FF
Thanks FF, I rarely venture from this thread because it appears to be by far the busiest, the TA/chart thread consists of 3 pages compared with the 176 here.

Point taken though, however I'm not so sure about my fancy being taken. I will now have a heightened awareness of my very public "moves"; the private message space is where it will be going forward. Never made such a move before, HC or here, but in the spirit of that famous TV series, I will boldly go where I have never been before. LOL
 
Don't you find it annoying that the author Charles Choi doesn't mention the very company that actually has an SNN chip on the market and is cementing relationships with multiple companies as I type this? Loihi and TrueNorth shouldn't be the story, but that is what happens when you are the disruptive minnow in an ocean of sharks.
I sent the article to Brainchip and they do contact the authors to pass on the message. The thing about these sorts of articles, even though they do not mention Brainchip, is that they raise awareness of SNNs and their potential. As neither Loihi nor TrueNorth is a commercial product, only a research one, those who make contact looking to buy an SNN solution will come away empty-handed.

I hold high hopes for what the CEO Sean Hehir and Jerome Nadel have planned for raising the profile of Brainchip around the world and specifically in the world of academia over the coming months.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Thanks FF, I rarely venture from this thread because it appears to be by far the busiest, the TA/chart thread consists of 3 pages compared with the 176 here.

Point taken though, however I'm not so sure about my fancy being taken. I will now have a heightened awareness of my very public "moves"; the private message space is where it will be going forward. Never made such a move before, HC or here, but in the spirit of that famous TV series, I will boldly go where I have never been before. LOL
Will leave a candle in the window so you can find your way back. LOL FF
 

Build-it

Regular
I will just have to stop doing research. Every time I think I know the full potential of AKIDA in a lay sense I fall into another hole and the following article is one very deep hole into which I need Dio to dive and come back with an explanation that will satisfy my needs and allow me to sleep: […]
Hi FF,
I believe this is another reporter who may need a link to the BRN website.

When COVID is in the rear-view mirror I expect the so-called tech reporters can attend trade shows and understand just how far advanced we are.

A quick search on Mr Charles Q Choi will alleviate any frustration.

Be careful of the rabbit holes, don't wanna lose you down there.

Edge compute.
 