BRN Discussion Ongoing

Diogenese

Top 20
A European connection?

It has long been speculated that Bosch has an NDA with Brainchip.

I think Bosch would have a strong interest in Brainchip IP... We can't prove it yet, but we do know that Bosch has been collaborating with many partners researching neuromorphic chips. Bosch's thirst for knowledge would make connections between Akida and Brainchip unavoidable. See the PDF attachment.

We know Bosch has a long history of collaborating with Mercedes...

The "Like a Bosch" Mercedes ad screams Akida.

Opinion only, no proof... yet... just lots and lots of dots.



Inside the Bosch Semiconductor and Sensor Factory

Question: Which companies have Bosch parts in their cars?
Answer: “NO CAR WITHOUT BOSCH”

Someone else here found the PDF of Bosch's neuromorphic and AI research at BCAI.
At uni in the 60's some students who could afford a second hand car had British vehicles (Hillman, Morris Minor, Austin ...) and the electrics provided by Lucas were of such repute that they were referred to as "Lucas - prince of darkness".
 
  • Like
Reactions: 4 users
D

Deleted member 118

Guest
Bosch

Not for sale yet





 
  • Like
  • Wow
Reactions: 19 users
Hi @Boab, I thought your Ukraine map was very interesting; we may have something in common. Could you drop me a line: mimchale@yahoo.com
Hi McHale
Took me a little while to realise but unlike that other place there is a private message space where you can communicate with anyone that takes your fancy. LOL
FF
 
  • Like
Reactions: 10 users

TechGirl

Founding Member


Awww that was a beautiful song. I hope you don't think I just love FF; I love you too, Diogenese ❤️

 
  • Haha
  • Like
  • Wow
Reactions: 12 users
D

Deleted member 118

Guest
Hi McHale
Took me a little while to realise but unlike that other place there is a private message space where you can communicate with anyone that takes your fancy. LOL
FF

 
  • Like
  • Thinking
Reactions: 6 users

Mea culpa

prəmɪskjuəs
At uni in the 60's some students who could afford a second hand car had British vehicles (Hillman, Morris Minor, Austin ...) and the electrics provided by Lucas were of such repute that they were referred to as "Lucas - prince of darkness".
Lucas - prince of darkness. It may have taken more than 50 years to learn why, but now I understand. I had a second-hand Hillman and so did a couple of my cobbers, Minxes from as early as 1953 through to mid-60s models. We spent more time pushing them home from the pub or the drive-in than driving them. The ignition system in mine was particularly reliable...to failure. Their history of failure was the cause of otherwise unnecessary romantic break-ups. The social history of a rural town in Western Victoria may have been vastly different without Hillmans.

Thank you Dodgy Knees for your vast knowledge. Strangely, I feel better now, simply for knowing. Lucas prince of darkness...pfft.
 
  • Like
  • Haha
  • Fire
Reactions: 8 users
D

Deleted member 118

Guest
 

Attachments

  • deep_spiking_neural_networks_algorithms.pdf
    2.7 MB · Views: 171
  • Like
Reactions: 8 users
I will just have to stop doing research. Every time I think I know the full potential of AKIDA in a lay sense I fall into another hole and the following article is one very deep hole into which I need Dio to dive and come back with an explanation that will satisfy my needs and allow me to sleep:

Brain-Inspired Chips Good for More than AI, Study Says

Neuromorphic tech from IBM and Intel may prove useful for analyzing X-rays, stock markets, and more

CHARLES Q. CHOI
15 FEB 2022
4 MIN READ
Intel’s neuromorphic research chip Loihi is part of a class of microprocessors designed to mimic the human brain's speed and efficiency.
TIM HERMAN/INTEL


Brain-inspired “neuromorphic” microchips from IBM and Intel might be good for more than just artificial intelligence; they may also prove ideal for a class of computations useful in a broad range of applications including analysis of medical X-rays and financial economics, a new study finds.
Scientists have long sought to mimic how the brain works using software programs known as neural networks and hardware known as neuromorphic chips. So far, neuromorphic computing has mostly focused on implementing neural networks, and it was unclear whether this hardware might prove useful beyond AI applications.
Neuromorphic chips typically imitate the workings of neurons in a number of different ways, such as running many computations in parallel. In addition, just as biological neurons both compute and store data, neuromorphic hardware often seeks to unite processors and memory, potentially reducing the energy and time that conventional computers lose in shuttling data between those components. Furthermore, whereas conventional microchips use clock signals fired at regular intervals to coordinate the actions of circuits, the activity in neuromorphic architecture often acts in a spiking manner, triggered only when an electrical charge reaches a specific value, much like what happens in brains like ours.
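To make the event-driven spiking described above concrete, here is a tiny leaky integrate-and-fire sketch in Python. It is a toy model only: the `lif_run` function, the leak factor, and the threshold are invented for illustration and are not taken from TrueNorth, Loihi, or Akida.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane "charge"
# integrates weighted inputs, leaks over time, and emits a spike only
# when it crosses a threshold -- the event-driven behaviour described above.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the list of time steps at which the neuron spiked.
    """
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:              # event: threshold crossed
            spikes.append(t)
            potential = 0.0                     # reset after the spike
    return spikes

# Sub-threshold drips never fire; the later burst does.
print(lif_run([0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # -> [4]
```

Note how the three small inputs never trigger a spike; only the burst at step 4 pushes the potential over the threshold, so downstream work happens only on that event, which is where the power savings come from.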
Until now, the main advantage envisioned for neuromorphic computing was power efficiency: features such as spiking and the uniting of memory and processing resulted in IBM’s TrueNorth chip, which boasted a power density four orders of magnitude lower than conventional microprocessors of its time.
“We know from a lot of studies that neuromorphic computing is going to have power-efficiency advantages, but in practice, people won’t care about power savings if it means you go a lot slower,” says study senior author James Bradley Aimone, a theoretical neuroscientist at Sandia National Laboratories in Albuquerque.

Now scientists find neuromorphic computers may prove well suited to what are called Monte Carlo methods, where problems are essentially treated as games and solutions are arrived at via many random simulations or “walks” of those games.
“We came to random walks by considering problems that do not scale very well on conventional hardware,” Aimone says. “Typically Monte Carlo solutions require a lot of random walks to provide a good solution, and while individually what each walker does over time is not difficult to compute, in practice, having to do a lot of them becomes prohibitive.”
In contrast, “Instead of modeling a bunch of random walkers in parallel, each doing its own computation, we can program a single neuromorphic mesh of circuits to represent all of the computations a random walk may do, and then by thinking of each random walk as a spike moving over the mesh, solve the whole problem at one time,” Aimone says.
Specifically, just as previous research has found quantum computing can display a “quantum advantage” over classical computing on a large set of problems, the researchers discovered a “neuromorphic advantage” may exist when it comes to random walks via discrete-time Markov chains. If you imagine a problem as a board game, here “chain” means playing the game by moving through a sequence of states or spaces. “Markov” means the next space you can move to in the game depends only on your current space, and not on your previous history, as is the case in board games such as Monopoly or Candy Land. “Discrete-time” simply means that a fixed time interval, “a turn,” passes between changing spaces, says study lead author Darby Smith, an applied mathematician at Sandia.
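To make the board-game analogy concrete, here is a plain-CPU Monte Carlo sketch of a discrete-time Markov chain in Python. This is the conventional many-walkers approach the researchers compared against, not the Sandia neuromorphic implementation, and the three-state transition table is invented purely for illustration.

```python
import random

# A toy discrete-time Markov chain with three "spaces" on a board.
# Each row lists (next space, probability); the next move depends only
# on the current space -- the Markov property described above.
TRANSITIONS = {
    "A": [("A", 0.1), ("B", 0.6), ("C", 0.3)],
    "B": [("A", 0.5), ("B", 0.2), ("C", 0.3)],
    "C": [("A", 0.3), ("B", 0.3), ("C", 0.4)],
}

def step(state, rng):
    """Take one turn (one discrete time step) from the current state."""
    targets = [t for t, _ in TRANSITIONS[state]]
    weights = [w for _, w in TRANSITIONS[state]]
    return rng.choices(targets, weights=weights, k=1)[0]

def monte_carlo(start, n_walkers, n_steps, seed=0):
    """Run many independent random walks and tally their final states."""
    rng = random.Random(seed)
    counts = dict.fromkeys(TRANSITIONS, 0)
    for _ in range(n_walkers):
        state = start
        for _ in range(n_steps):
            state = step(state, rng)
        counts[state] += 1
    # Empirical probability of ending in each state.
    return {s: c / n_walkers for s, c in counts.items()}

print(monte_carlo("A", n_walkers=10_000, n_steps=50))
```

Each walker's next move depends only on its current state, and the runtime grows linearly with `n_walkers`; that per-walker cost is exactly the burden the spiking formulation is said to avoid by representing every extra walker as one extra spike.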
In experiments with IBM’s TrueNorth and Intel’s Loihi neuromorphic chips, a server-class Intel Xeon E5-2662 CPU, and an Nvidia Titan Xp GPU, the scientists found that when it comes to solving this class of problems at large scales, neuromorphic chips proved more efficient than the conventional semiconductors in terms of energy consumption, and competitive, if not better, in terms of time.
“I very much believe that neuromorphic computing for AI is very exciting, and that brain-inspired hardware will lead to smarter and more powerful AI,” says Aimone. “But at the same time, by showing that neuromorphic computing can be impactful at conventional computing applications, the technology has the potential to have much broader impact on society.”
One way the neuromorphic chips achieved their advantages in performance and energy efficiency was a high degree of parallelism. Compounding that was the ability to represent each random walk as a single spike of activity instead of a more complex set of activities.

“The big limitation on these Monte Carlo methods is that we have to model a lot of walkers,” Aimone says. “Because spikes are such a simple way of representing a random walk, adding an extra random walker is just adding one extra spike to the system that will run in parallel with all of the others. So the time cost of an extra walker is very cheap.”
The ability to efficiently tackle this class of problems has a wide range of potential applications, such as modeling stocks and options in financial markets, better understanding how X-rays interact with bone and soft tissue, tracking how information moves on social networks, and modeling how diseases travel through population clusters, Smith notes. “Applications of this approach are everywhere,” Aimone says.
But just because a neuromorphic advantage may exist for some problems “does not mean that brain-inspired computers can do everything better than normal computers,” Aimone cautions. “The future of computing is likely a mix of different technologies. We are not trying to say neuromorphic will supplant everything.”
The scientists are now investigating ways to handle interactions between multiple “walkers” or participants in these scenarios, which will enable applications such as molecular dynamics simulations, Aimone notes. They are also developing software tools to help other developers work on this research, he adds.
The scientists detailed their findings in the 14 February online edition of the journal Nature Electronics.

My opinion only and frustration DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 22 users

Diogenese

Top 20
Lucas - prince of darkness. It may have taken more than 50 years to learn why, but now I understand. [...]
That's about the sum total of my university education - I don't remember anything else.
 
  • Haha
  • Like
Reactions: 10 users
That's about the sum total of my university education - I don't remember anything else.
I remember converting a car to electronic ignition because of Mr. Lucas and being tired of pushing. LOL FF
 
  • Like
  • Wow
  • Haha
Reactions: 8 users

Dhm

Regular
I will just have to stop doing research. Every time I think I know the full potential of AKIDA in a lay sense I fall into another hole and the following article is one very deep hole into which I need Dio to dive and come back with an explanation that will satisfy my needs and allow me to sleep:

[quotes the "Brain-Inspired Chips Good for More than AI, Study Says" article in full]

My opinion only and frustration DYOR
FF

AKIDA BALLISTA
Don't you find it annoying that the author Charles Choi doesn't mention the very company that actually has an SNN chip on the market and is cementing relationships with multiple companies as I type this? Loihi and TrueNorth shouldn't be the story, but that is what happens when you are the disruptive minnow in an ocean of sharks.
 
  • Like
  • Fire
Reactions: 12 users

McHale

Regular
Hi McHale
Took me a little while to realise but unlike that other place there is a private message space where you can communicate with anyone that takes your fancy. LOL
FF
Thanks FF, I rarely venture from this thread because it appears to be by far the busiest, the TA/chart thread consists of 3 pages compared with the 176 here.

Point taken though, however I'm not so sure about my fancy being taken. I will now have a heightened awareness of my very public "moves"; it will be the message space going forward. Never made such a move before, on HC or here, but in the spirit of that famous TV series, I will boldly go where I have never been before. LOL
 
  • Like
Reactions: 9 users
Don't you find it annoying that the author Charles Choi doesn't mention the very company that actually has a SNN chip on the market and is cementing relationships with multiple companies as I type this? Loihi and TrueNorth shouldn't be the story, but that is what happens when you are the disruptive minnow in an ocean of sharks.
I sent the article to Brainchip and they do contact the authors to pass on the message. The thing about these sorts of articles, even though they do not mention Brainchip, is that they raise awareness of SNNs and their potential. As neither Loihi nor TrueNorth is a commercial product, only a research one, those who come looking to buy an SNN solution will come away empty-handed.

I hold high hopes for what CEO Sean Hehir and Jerome Nadel have planned for raising the profile of Brainchip around the world, and specifically in the world of academia, over the coming months.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 21 users
Thanks FF, I rarely venture from this thread because it appears to be by far the busiest [...]
Will leave a candle in the window so you can find your way back. LOL FF
 
  • Like
  • Haha
Reactions: 4 users

Build-it

Regular
I will just have to stop doing research. Every time I think I know the full potential of AKIDA in a lay sense I fall into another hole and the following article is one very deep hole into which I need Dio to dive and come back with an explanation that will satisfy my needs and allow me to sleep:

[quotes the "Brain-Inspired Chips Good for More than AI, Study Says" article in full]

My opinion only and frustration DYOR
FF

AKIDA BALLISTA
Hi FF,
I believe this is another reporter who may need a link to the BRN website.

When covid is in the rear view mirror I expect the so called tech reporters can attend trade shows and understand just how far advanced we are.

A quick search on Mr Charles Q Choi will allay any frustration.

Be careful of the rabbit holes, don't wanna lose you down there.

Edge compute.
 
  • Like
Reactions: 4 users

zeeb0t

Administrator
Staff member
  • Like
  • Haha
Reactions: 20 users
Hi all

To prevent the BRN threads from getting filled with discussion of the Russia -> Ukraine situation (I don't want to call it a war just yet, even though it seems to be heading that way), can we continue discussion specific to that crisis in this thread? https://thestockexchange.com.au/threads/ukraine-russia-unrest-war-and-impact-in-stock-market.141/

Of course happy to discuss it here where it pertains to BRN specifically! :)
Very clever, you must have been a Cretan philosopher in a past life.

Can only talk about it here when it pertains to BRN specifically which as we know can never be the case.

As I said very clever.
FF.
 
  • Haha
  • Like
Reactions: 8 users

zeeb0t

Administrator
Staff member
Very clever, you must have been a Cretan philosopher in a past life.

Can only talk about it here when it pertains to BRN specifically which as we know can never be the case.

As I said very clever.
FF.

Haha, well perhaps I can clarify: I meant where it relates to BRN as a company / product etc., whereas if it is just discussion around the crisis (war), or whatever the media is calling it at this stage, we have a thread for it.

I am suggesting we do this so that those who do not want to read about conflict all day (except where it relates to BRN somehow) can avoid the subject.
 
  • Like
Reactions: 8 users