BRN Discussion Ongoing

Mt09

Regular
"I know it’s only dot joining but I‘m still waiting for the day Michael Dell announces a partnership with Brainchip. I’m confident one of the largest computing companies in the world didn’t randomly give a podcast interview with a small Australian start up for no reason. I could be wrong but from memory Akida wasn’t even in silicon at that point."

Hi @Stable Genius can you lead me to a link about that podcast?


The bloke on the podcast is Dell engineer Rob Lincourt. His job role is to keep on top of emerging technology. He makes reference to “early on Brainchip releasing some power consumption figures”. Anyone’s guess as to how long Dell have known about and been assessing Akida.
 
  • Like
  • Fire
  • Love
Reactions: 28 users

equanimous

Norse clairvoyant shapeshifter goddess
The following article makes clear that NASA and DARPA have been anxiously awaiting a chip like AKIDA from at least 2013. At that time they were anticipating an analogue solution, not knowing Peter and Anil were on a digital SNN fast track:

“Spiking Neurons for Analysis of Patterns
High-performance pattern-analysis systems could be implemented as analog VLSI circuits. NASA’s Jet Propulsion Laboratory, Pasadena, California
Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which, among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets.
Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a treelike interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers.
The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological neurons). These features enable the neurons to adapt their responses to high-rate inputs from sensors, and to adapt their firing thresholds to mitigate noise or effects of potential sensor failure. The mathematical derivation of the SVM starts from a prior model, known in the art as the point soma model, which captures all of the salient properties of neuronal response while keeping the computational cost low. The point-soma latency time is modified to be an exponentially decaying function of the strength of the applied potential.
Choosing computational efficiency over biological fidelity, the dendrites surrounding a neuron are represented by simplified compartmental submodels and there are no dendritic spines. Updates to the dendritic potential, calcium-ion concentrations and conductances, and potassium-ion conductances are done by use of equations similar to those of the point soma. Diffusion processes in dendrites are modeled by averaging among nearest-neighbor compartments. Inputs to each of the dendritic compartments come from sensors. Alternatively or in addition, when an affected neuron is part of a pool, inputs can come from other spiking neurons.
At present, SVM neural networks are implemented by computational simulation, using algorithms that encode the SVM and its submodels. However, it should be possible to implement these neural networks in hardware: The differential equations for the dendritic and cellular processes in the SVM model of spiking neurons map to equivalent circuits that can be implemented directly in analog very-large-scale integrated (VLSI) circuits.
This work was done by Terrance Huntsberger of Caltech for NASA’s Jet Propulsion Laboratory.”
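For anyone unfamiliar with the jargon, the point-soma behaviour the article describes (inputs integrated at the soma, a firing threshold, and a response latency that shortens as the applied potential grows) can be sketched with a simple leaky integrate-and-fire neuron. This is an illustration only; the function name and parameter values are my own assumptions, not taken from the JPL work:

```python
def simulate_point_soma(inputs, dt=1.0, tau_m=20.0, v_rest=0.0,
                        v_reset=0.0, threshold=1.0):
    """Leaky integrate-and-fire update in the spirit of a point-soma model.

    inputs: one input-current value per time step.
    Returns the time steps at which the neuron spiked.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(inputs):
        # Potential leaks exponentially toward rest while integrating input.
        v += dt * (-(v - v_rest) / tau_m + i_in)
        if v >= threshold:   # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset      # reset after firing
    return spikes
```

With a stronger constant input the first spike arrives sooner and spikes come more often, which mirrors the latency-versus-potential relationship the article alludes to.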


 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 29 users
"I know it’s only dot joining but I‘m still waiting for the day Michael Dell announces a partnership with Brainchip. I’m confident one of the largest computing companies in the world didn’t randomly give a podcast interview with a small Australian start up for no reason. I could be wrong but from memory Akida wasn’t even in silicon at that point."

Hi @Stable Genius can you lead me to a link about that podcast?

Hi Dhm,

The quote from Michael Dell was on 3 May 2022 but it will take me a while to find the source. It wasn’t made during the Brainchip podcast. I use Spotify to listen to my podcasts, so here’s the link to that one:



The Dell podcast was with Rob Lincourt who was at the time the CTO of Dell. That was back on 4 May 2021, almost a year earlier than the quote from Michael Dell.

It’s still worth a listen, as it talks about future technologies which will swamp us like a tsunami at some point!

Brainchip: right place, right time!

:)
 
  • Like
  • Fire
  • Love
Reactions: 22 users
Hi Dhm,

The quote from Michael Dell was on 3 May 2022 but it will take me a while to find the source. […]



Actually it didn’t take as long as I thought to find an article referencing Michael’s quote:


Interestingly this is dated 6 May 2021.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

JK200SX

Regular
  • Like
  • Haha
Reactions: 9 users

Iseki

Regular
Hi Iseki,

I agree that the primary focus is edge devices.

There have also been some discussions, headed by @Fullmoonfever, where Akida was also being tested in larger devices.

I know it’s only dot joining but I‘m still waiting for the day Michael Dell announces a partnership with Brainchip. I’m confident one of the largest computing companies in the world didn’t randomly give a podcast interview with a small Australian start up for no reason. I could be wrong but from memory Akida wasn’t even in silicon at that point.

“10 per cent of the data in the world today is processed outside of the data centre but by 2025, 75 per cent of enterprise data will be processed outside of the traditional, centralised data centre or cloud,” Dell noted.

Brainchip is in the perfect position and building its ecosystem to capitalise on that edge processing/inference need!

:)
Good point.

TBH I don't understand the concept of DoC - Datacentre on a chip.

I can envisage Akida in all the routers in a datacentre looking for unlikely packets of data and better dynamic routing tables to move data around (remember the difference between an Ethernet hub and an Ethernet switch).

I can envisage Akida in distributed databases like blockchain looking for anomalies in proof-of-ownership of records or crypto.

But I can't really envisage Akida being used to preprocess data for huge ML training libraries, after you've gone to the expense of moving that data to the centre.

Of course if Dell could do an Apple and move to a customized Arm or SiFive chip, there might be something there.

Love to hear people’s thoughts on this.
 
  • Like
Reactions: 6 users

Makeme 2020

Regular
 
  • Like
  • Fire
Reactions: 14 users

Makeme 2020

Regular

Advantech Releases ICAM-500, the All-in-One AI Camera for Industrial AI Vision Applications

6/20/2022


Advantech, a leading industrial edge AI solutions provider, has announced the release of a new edge industrial AI camera — the ICAM-500. Embedded with NVIDIA® Jetson Nano™, Advantech’s ICAM-500 combines an industrial-grade SONY IMX 296 image sensor, advanced LED lighting, and a variable focus lens with acquisition and AI computing capabilities.

Accelerates the development and deployment of cloud-to-edge vision AI applications​

The Advantech ICAM-500 places NVIDIA Jetson Nano AI computing modules inside a compact industrial camera system. This combines image acquisition and AI inference functionality within the same system, while reducing the latency caused by the distance between IP cameras, cloud, and AI inference systems. ICAM-500 has an FPGA-based trigger input, lighting strobe out and MIPI interface, which also allow ICAM-500 to perform low-latency and high-bandwidth image acquisition. These features improve the efficiency of on-site AI inference, creating an excellent solution for edge AI applications — like AI automated optical inspection, AI optical character recognition, and object recognition at the edge — that require real-time responsiveness.
Advantech’s ICAM-500 also supports the NVIDIA DeepStream SDK. AI developers can use C/C++, Python or Graph Composer, NVIDIA’s low-code tool, to quickly integrate pre-trained models for speedy deployment within vision systems.

All-in-one solution reduces vision system installation and maintenance effort​

ICAM-500 is a highly integrated industrial AI camera equipped with programmable variable focus lenses, LED illumination, and SONY industrial-grade image sensors. These features reduce the effort required in installation and maintenance. The camera’s generic settings, lighting, and one-meter inspection distance satisfy most vision application requirements. In addition, the built-in buttons on ICAM-500 allow the user to take snapshots or customize their functions. Users just need to connect ICAM-500 and control the system over LAN to implement vision systems. ICAM-500 has a USB interface, giving customers the flexibility to connect communication modules for Wi-Fi and 5G applications. This lowers the barriers to entry for AI software developers and enables them to step into vision AI applications with ease. Advantech ICAM-500 provides board support package support that enables customers to easily integrate software and hardware.
ICAM-500 combines lighting, cameras, and AI computing within a compact system (82 x 121 x 53 mm; 3.22 x 4.7 x 2.08 in). This not only saves space, but also saves vision solution providers time when installing additional AI computing equipment. In addition, its ruggedized fanless design is capable of enduring operation in harsh factory environments.
The Advantech ICAM-500 is ideal for a variety of industrial edge AI vision applications. ICAM-500 is available now. For more information, please contact Advantech regional sales or visit our website.
 
  • Like
  • Fire
Reactions: 11 users
D

Deleted member 118

Guest
Morning all.

Have I missed anything exciting during my flight to the UK?

 
  • Like
  • Haha
Reactions: 3 users

Boab

I wish I could paint like Vincent
Morning all.

Have I missed anything exciting during my flight to the UK


Nothing definitive but we are expecting to be served up a new type of coffee from @Diogenese anytime now😉
Think of all the good stuff that has already been served up to us.
Enjoy your trip/holiday or whatever you are up to.
Cheers
Boab.
 
  • Like
  • Love
  • Fire
Reactions: 9 users
Actually it didn’t take as long as I thought to find an article referencing Michael’s quote:


Interestingly this is dated 6 May 2021.
Brilliant reminder of the sleeping giant that Dell may possibly be for Brainchip. Thank you Stable Genius.

What adds to my belief that you are on the correct track here is not just the benefits of lower power consumption (although this alone is enough to make Akida revolutionary in Data centre applications) but also the additional benefits from a cyber security standpoint.

With cyber security insurance being one of the fastest growing insurance segments, I see this too as a huge market opportunity for our technology. As the world becomes more reliant on big data, in both general and professional settings, the stakes have never been higher. I believe this is a market segment where we have not even scratched the surface, and I have wondered for a while if this is the basis of our previously mentioned involvement with a telecommunications company, especially given that it was mentioned that it was for a non-typical application. Of course I may be way off track here as, try as I may, I am yet to find any crumbs on who this elusive company may be…

 
  • Like
  • Fire
  • Love
Reactions: 23 users
Brilliant reminder of the sleeping giant that Dell may possibly be for Brainchip. Thank you Stable Genius. […]

Hasn't been much said on this front, in a long while..
But it's there.


 
  • Like
  • Fire
  • Love
Reactions: 19 users

Dhm

Regular
Hi Dhm,

The quote from Michael Dell was on 3 May 2022 but it will take me a while to find the source. […]

I love a specific quote at 9:10, where Rob Lincourt said Dell is exploring “Brainchip potentially as part of that environment”. Rob specifically said that power usage (or in our case, minimal power usage) is a major influencer for them.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 27 users

Diogenese

Top 20
The following article makes clear that NASA and DARPA have been anxiously awaiting a chip like AKIDA from at least 2013. […]


Hi eq,

Brilliant spade-work.

We've known for some time that NASA had a proclivity for ReRAM, but, as you point out, that was through ignorance of Akida.
 
  • Like
  • Fire
  • Love
Reactions: 19 users

pgetties

Member
"I know it’s only dot joining but I‘m still waiting for the day Michael Dell announces a partnership with Brainchip. I’m confident one of the largest computing companies in the world didn’t randomly give a podcast interview with a small Australian start up for no reason. I could be wrong but from memory Akida wasn’t even in silicon at that point."

Hi @Stable Genius can you lead me to a link about that podcast?
Is this article relevant? It seems to lean towards the idea that every PC will have an NPU.

 
  • Like
  • Fire
Reactions: 9 users

equanimous

Norse clairvoyant shapeshifter goddess
This paper from DARPA shows that the US military has for many years agreed with the visionary investors in Brainchip Inc that Peter van der Made and Anil Mankar, in having the courage and foresight over decades to pursue a solution that mimics the human brain, have always been on the right track.

DARPA Report Task1 for Year 1 (Q1-Q4) Task 1:

1. Machine Learning with Spike-Timing-Dependent Plasticity (STDP)

Shortcomings of the deep learning approach to artificial intelligence.

It has been established that deep learning is a promising approach to solving certain problems in artificial intelligence (AI) for which large amounts of data and computation are readily available [1].

On the other hand, tasks which require quick yet robust decisions in the face of little data, or a reasonable response despite the presence of an anomalous event are ill-suited for such an approach [2].

While it is true that deep learning has accomplished ground-breaking new baselines on many tasks in the domains of image processing [3], sequence learning [4], and others, these success stories have necessarily been accompanied by large, labelled datasets and increasingly powerful computers.

Massive computation, and therefore energy expenditure, is required for the training of the increasingly complex models.

To circumvent some of these shortcomings, we take inspiration from the energy efficient, massively parallel, and unsupervised brain.

Mammalian brains are especially capable at complex reasoning, long-term planning, and other high-level functionalities, some that seem to be far out of the scope of deep learning systems.

For this reason, taking advantage of the biological mechanisms employed by the brain (e.g., learning through plasticity, incorporating the relative timing of events, and massive parallelization) may enable the development of AI programs with similarly useful behaviours or properties.

2. Spiking neurons and spike-timing-dependent plasticity (STDP) continues....

https://binds.cs.umass.edu/pdfs/DARPA-Task1SuppHR0011-16-Y1.pdf
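For anyone who hasn’t met STDP before, the pair-based rule the report builds on can be sketched in a few lines: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike, and weakened otherwise, with a magnitude that decays exponentially in the spike-time difference. This is the generic textbook form, not the specific implementation in the DARPA report, and the parameter values are my own assumptions:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based spike-timing-dependent plasticity weight update."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post: potentiation (LTP)
        w += a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:    # post before pre: depression (LTD)
        w -= a_minus * math.exp(dt / tau_minus)
    # Clamp the weight to its allowed range.
    return min(max(w, w_min), w_max)
```

Note there is no labelled dataset and no gradient here: the weight change depends only on locally observed spike times, which is why the approach is attractive for unsupervised, low-power learning.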

Another group of researchers is proving the need for AKIDA, which is amazing because they do not know about it, yet they cover the majority of so-called competitors, highlight all their inadequacies, and come to the following conclusions:

“4. CONCLUSION
As reported above, there are great expectations on AI and NM in many fields of application. Although in some cases such expectations somehow exceed what is currently possible from a technological point of view, this should not be seen as a negative outlook, as in recent years DL has impressed a considerable acceleration to the effective application of ML to real life problems and scenarios, in ways that would have been difficult to foresee 10-15 years ago. In this context, NM systems are also rapidly gaining momentum, with many industries and governments behind them, in the form of considerable investments of funding, time and effort in these technologies. In particular, there are clear opportunities for NM in the security and defence domain, in all those applications where there are SWaP and timing constraints, or the sensed data has an inherent spiking format. And more applications are likely to arise, as technologies evolve”

https://strathprints.strath.ac.uk/7...hic_technologies_for_defence_and_security.pdf
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Fox151

Regular
  • Like
  • Fire
Reactions: 6 users

Deadpool

hyper-efficient Ai
I was looking into military edge applications for Akida and found Anduril, a Silicon Valley military AI upstart. Its core system is Lattice OS, an autonomous sense-making and command-and-control platform.

“Lattice uses technologies like sensor fusion, computer vision, edge computing, and machine learning and artificial intelligence to detect, track, and classify every object of interest in an operator's vicinity.”



Now imagine a ‘Neuromorphic bomb’ being deployed from either air or sea that could drop thousands of smart sensors in defensive or offensive situations to utilise Lattice.

The Australian Defence Force is already interested in their DIVE-LD AUV, with Anduril having established an Australian headquarters in Sydney and preparing component manufacturing across multiple sites.

Imagine 10 AUVs released from a mother-sub, each deploying 1,000+ sensor babies with Akida deep into hostile territory; these tennis-ball-sized sensors could return, or be collected after the mission by the AUV. If WWIII broke out, I know I’d want a few thousand in grid formation surrounding all key military infrastructure.

Hi Lex, further to your research, I came across this today on the Breaking Defence website; if Akida is not being trialled/used in this, then it certainly should be.
Akida Ballista
 
  • Fire
  • Like
Reactions: 7 users

chapman89

Founding Member
If I can repeat and remind everybody what Bill Gates has said previously:

"If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts."

Who has machine learning?
Who has on-chip learning?
Who has one-shot learning?
Who has on-chip convolution?

Yeah that’s right…BRAINCHIP!

Only a matter of time!!!
 
  • Like
  • Fire
  • Love
Reactions: 74 users