BRN Discussion Ongoing

I can only call these two events, 35 years apart, as I heard them: Fact. Thinking about what he said more pedantically, he may not have used the word Australian, rather…’Munich has been investigating AI technology called Akida…’ All I know is my heart skipped several beats when I heard him say Akida. This occurred after it was uttered, so I was in control of my faculties when he said it.
RealInfo, I believe you, but as I was going to add support to your eavesdrop, I did not want anyone mortgaging themselves up to the neck on the basis of our couple of posts. FF
 
Reactions: 23 users

Diogenese

Top 20
RealInfo, I believe you, but as I was going to add support to your eavesdrop, I did not want anyone mortgaging themselves up to the neck on the basis of our couple of posts. FF
Too late!!!!!
 
Reactions: 39 users

Diogenese

Top 20
Reactions: 14 users
I can only call these two events, 35 years apart, as I heard them: Fact. Thinking about what he said more pedantically, he may not have used the word Australian, rather…’Munich has been investigating AI technology called Akida…’ All I know is my heart skipped several beats when I heard him say Akida. This occurred after it was uttered, so I was in control of my faculties when he said it.
“there’ll be a leader” could sound like “AKIDA” - just saying - but then the fact they worked at Siemens???

@Diogenese better put your mortgage broker on speed dial. 😂

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 11 users

Violin1

Regular
Maybe he was saying "I kidda you not"....
 
Reactions: 16 users

Slade

Top 20
If Akida is in Valeo’s new generation LiDAR, like many of us think it is, we will see an explosion of revenue this year.
 
Reactions: 41 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
“there’ll be a leader” could sound like “AKIDA” - just saying - but then the fact they worked at Siemens???

@Diogenese better put your mortgage broker on speed dial. 😂

My opinion only DYOR
FF

AKIDA BALLISTA


There's not really any other AI technology I can think of that sounds like Akida. Pretty sure Realinfo's hearing is not so bad that he mistook "Akida" for "Loihi 2".😙
 
Reactions: 25 users

Slade

Top 20
Valeo SCALA® 2, Valeo's second generation LiDAR, plays an important role in the Mercedes-Benz DRIVE PILOT system for conditionally automated driving (SAE Level 3), allowing the driver, under certain conditions, to delegate the driving task to the car in complete safety.

DRIVE PILOT will be available in Germany in the first half of 2022. The next step is clear: the car manufacturer plans to apply for regulatory approval in California and Nevada in 2022.

 
Reactions: 34 users
There's not really any other AI technology I can think of that sounds like Akida. I don't think Realinfo's hearing is so bad that he mistook "Akida" for "Loihi 2".😙
No, but all the science about suggestion and memory weighs against advising @Diogenese to hit speed dial to his mortgage broker.

The science of memory informs us that I can prime you by suggestion to see and hear what I want you to hear.

In this case Realinfo has been making the case to his friend to invest in Brainchip and AKIDA, so they are both primed and receptive to hear, and indeed want to hear, positive affirmation of their now shared beliefs.

They have both been straining to hear and focusing on hearing something of value to them.

Then something is said that they want to hear, and immediately afterwards their confidence in what they wanted to hear is contaminated by the waiter telling them the men worked for Siemens, which Realinfo knows has some connection to Brainchip.

I am not saying this is what happened but what I am saying is that the science of memory tells us this can happen.

Do I believe RealInfo is telling the truth? Yes, most definitely.

Will I be telling Diogenese to hit speed dial? Absolutely not.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 20 users

LuWil

Regular
(image attachment)
 
Reactions: 50 users
Hi @Diogenese
You know I guess a lot when it comes to the deep science/engineering, but I think this paper explains why Peter van der Made’s and Anil Mankar’s design is so much more power efficient than even Loihi and TrueNorth. I will leave it to you to chop me up for encroaching on your professional ground:


This work was commissioned by NASA/DARPA

Regards
FF

AKIDA BALLISTA
 
Reactions: 15 users

Diogenese

Top 20
Hi @Diogenese
You know I guess a lot when it comes to the deep science/engineering, but I think this paper explains why Peter van der Made’s and Anil Mankar’s design is so much more power efficient than even Loihi and TrueNorth. I will leave it to you to chop me up for encroaching on your professional ground:


This work was commissioned by NASA/DARPA

Regards
FF

AKIDA BALLISTA

Yep - backpropagation compared to Akida is the T-model Ford compared to the Mercedes EQXX.

Spiking Neural Networks (SNNs) have gained huge attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs) due to their inherent high-sparsity activation. Recently, SNNs with backpropagation through time (BPTT) have achieved a higher accuracy result on image recognition tasks compared to other SNN training algorithms. Despite the success on the algorithm perspective, prior works neglect the evaluation of the hardware energy overheads of BPTT, due to the lack of a hardware evaluation platform for SNN training algorithm design. Moreover, although SNNs have been long seen as an energy-efficient counterpart of ANNs, a quantitative comparison between the training cost of SNNs and ANNs is missing. To address the above-mentioned issues, in this work, we introduce SATA (Sparsity-Aware Training Accelerator), a BPTT-based training accelerator for SNNs. The proposed SATA provides a simple and re-configurable accelerator architecture for the general-purpose hardware evaluation platform, which makes it easier to analyze the training energy for SNN training algorithms. Based on SATA, we show quantitative analyses on the energy efficiency of SNN training and make a comparison between the training cost of SNNs and ANNs. The results show that SNNs consume 1.27× more total energy with considering sparsity (spikes, gradient of firing function, and gradient of membrane potential) when compared to ANNs. We find that such high training energy cost is from time-repetitive convolution operations and data movements during backpropagation. Moreover, to guide the future SNN training algorithm design, we provide several observations on energy efficiency with respect to different SNN-specific training parameters.


https://brainchipinc.com/wp-content...p_tech-brief_2-What-is-Edge-Learning_v1-2.pdf
see page 4:
Instead of the slow and inefficient process of backpropagation, Akida supports learning through Spike Time Dependent Plasticity, or STDP, in which synapses that match an activation pattern are reinforced. STDP is modeled after the human brain and is orders of magnitude faster than backpropagation.
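As a rough illustration of that contrast, here is a minimal, generic pair-based STDP update: a textbook-style toy only, not Akida's actual learning rule (whose details are not given in the tech brief). Synapses whose input spike arrives just before the output spike are reinforced, and nothing is propagated backwards.

```python
import numpy as np

def stdp_update(weights, pre_spike_times, post_spike_time,
                a_plus=0.05, a_minus=0.05, tau=20.0):
    """Toy pair-based STDP rule (illustrative only, not Akida's actual rule).

    weights         : synaptic weights, one per input synapse
    pre_spike_times : most recent input spike time per synapse (ms), NaN if silent
    post_spike_time : time at which the neuron fired (ms)
    """
    dt = post_spike_time - pre_spike_times               # pre-before-post => dt > 0
    potentiate = np.where(dt > 0,  a_plus  * np.exp(-dt / tau), 0.0)
    depress    = np.where(dt <= 0, -a_minus * np.exp(dt / tau), 0.0)
    return np.clip(weights + potentiate + depress, 0.0, 1.0)

# Synapses that spiked shortly before the neuron fired are strengthened;
# silent synapses are left alone - no error gradient is propagated backwards.
w   = np.array([0.5, 0.5, 0.5])
pre = np.array([18.0, 5.0, np.nan])       # ms; the third synapse never spiked
print(stdp_update(w, pre, post_spike_time=20.0))
```

The point of the toy is that each update is local to a synapse and a pair of spike times; there is no repeated unrolling through time or weight-gradient traffic of the kind the SATA paper measures for BPTT.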

A week back @TECH posted Simon Thorpe's powerful explanation of STDP and JAST:

Hi Tech (a suitable greeting for this blog),

Ca-ching! (penny dropping)

Eureka - you've found it - the secret sauce!

(image attachment)


Taking the earliest N spikes from the M input spikes because the earlier spikes are the strongest.

Hence temporal coding.

Simon explains that the strongest spikes occur earlier than the weaker spikes.
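A minimal sketch of the "earliest N of M spikes" idea, taking the simple reading above at face value (generic rank-order / N-of-M coding, not the specific JAST or Akida implementation):

```python
import numpy as np

def n_of_m_code(spike_times, n):
    """Rank-order / N-of-M coding toy: keep only the N earliest input spikes.

    spike_times : first-spike time of each of the M input channels (ms),
                  np.inf for channels that never spike
    Returns the indices of the N channels that fired first, in firing order.
    """
    order = np.argsort(spike_times)        # earliest (strongest drive) first
    return order[:n]

# Stronger inputs fire sooner, so the earliest N spikes carry most of the
# information and everything after them can be ignored - hence the sparsity.
times = np.array([7.2, 1.4, np.inf, 3.9, 2.0, 12.5])    # M = 6 channels
print(n_of_m_code(times, n=3))                           # -> [1 4 3]
```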
 
Reactions: 29 users
Yep - backpropagation compared to Akida is the T-model Ford compared to the Mercedes EQXX.

Spiking Neural Networks (SNNs) have gained huge attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs) due to their inherent high-sparsity activation. Recently, SNNs with backpropagation through time (BPTT) have achieved a higher accuracy result on image recognition tasks compared to other SNN training algorithms. Despite the success on the algorithm perspective, prior works neglect the evaluation of the hardware energy overheads of BPTT, due to the lack of a hardware evaluation platform for SNN training algorithm design. Moreover, although SNNs have been long seen as an energy-efficient counterpart of ANNs, a quantitative comparison between the training cost of SNNs and ANNs is missing. To address the above-mentioned issues, in this work, we introduce SATA (Sparsity-Aware Training Accelerator), a BPTT-based training accelerator for SNNs. The proposed SATA provides a simple and re-configurable accelerator architecture for the general-purpose hardware evaluation platform, which makes it easier to analyze the training energy for SNN training algorithms. Based on SATA, we show quantitative analyses on the energy efficiency of SNN training and make a comparison between the training cost of SNNs and ANNs. The results show that SNNs consume 1.27× more total energy with considering sparsity (spikes, gradient of firing function, and gradient of membrane potential) when compared to ANNs. We find that such high training energy cost is from time-repetitive convolution operations and data movements during backpropagation. Moreover, to guide the future SNN training algorithm design, we provide several observations on energy efficiency with respect to different SNN-specific training parameters.


https://brainchipinc.com/wp-content...p_tech-brief_2-What-is-Edge-Learning_v1-2.pdf
see page 4:
Instead of the slow and inefficient process of backpropagation, Akida supports learning through Spike Time Dependent Plasticity, or STDP, in which synapses that match an activation pattern are reinforced. STDP is modeled after the human brain and is orders of magnitude faster than backpropagation.

A week back @TECH posted Simon Thorpe's powerful explanation of STDP and JAST:

Hi Tech (a suitable greeting for this blog),

Ca-ching! (penny dropping)

Eureka - you've found it - the secret sauce!

(image attachment)


Taking the earliest N spikes from the M input spikes because the earlier spikes are the strongest.

Hence temporal coding.

Simon explains that the strongest spikes occur earlier than the weaker spikes.
Many thanks @Diogenese. The difference in power consumption on one training image between Loihi-type sparse backpropagation and AKIDA-style sparse feed-forward training (2.76 times a sparse CNN versus 0.26) is very stark.

It makes clear why AKIDA used one forty-eighth the power of Loihi in one of Peter van der Made’s presentations.
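Taking those two figures at face value, and assuming both are energy per training image relative to the same sparse-CNN baseline (one reading of the wording above, not a number checked against the paper), the implied gap is roughly an order of magnitude. Note that this is a separate comparison from the one forty-eighth figure, which comes from a presentation rather than from this paper.

```python
# Back-of-the-envelope check, assuming 2.76x and 0.26x are both energy per
# training image relative to the same sparse-CNN baseline (an assumption
# based on the post above, not a figure verified against the paper).
loihi_style_bptt = 2.76     # sparse backpropagation-through-time, x baseline
akida_style_ff   = 0.26     # sparse feed-forward training, x baseline
print(f"implied ratio: {loihi_style_bptt / akida_style_ff:.1f}x")   # ~10.6x
```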

I am sure NASA and DARPA will be impressed.

The world is AKIDA’s oyster.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 31 users

Diogenese

Top 20
Yep - backpropagation compared to Akida is the T-model Ford compared to the Mercedes EQXX.

Spiking Neural Networks (SNNs) have gained huge attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs) due to their inherent high-sparsity activation. Recently, SNNs with backpropagation through time (BPTT) have achieved a higher accuracy result on image recognition tasks compared to other SNN training algorithms. Despite the success on the algorithm perspective, prior works neglect the evaluation of the hardware energy overheads of BPTT, due to the lack of a hardware evaluation platform for SNN training algorithm design. Moreover, although SNNs have been long seen as an energy-efficient counterpart of ANNs, a quantitative comparison between the training cost of SNNs and ANNs is missing. To address the above-mentioned issues, in this work, we introduce SATA (Sparsity-Aware Training Accelerator), a BPTT-based training accelerator for SNNs. The proposed SATA provides a simple and re-configurable accelerator architecture for the general-purpose hardware evaluation platform, which makes it easier to analyze the training energy for SNN training algorithms. Based on SATA, we show quantitative analyses on the energy efficiency of SNN training and make a comparison between the training cost of SNNs and ANNs. The results show that SNNs consume 1.27× more total energy with considering sparsity (spikes, gradient of firing function, and gradient of membrane potential) when compared to ANNs. We find that such high training energy cost is from time-repetitive convolution operations and data movements during backpropagation. Moreover, to guide the future SNN training algorithm design, we provide several observations on energy efficiency with respect to different SNN-specific training parameters.


https://brainchipinc.com/wp-content...p_tech-brief_2-What-is-Edge-Learning_v1-2.pdf
see page 4:
Instead of the slow and inefficient process of backpropagation, Akida supports learning through Spike Time Dependent Plasticity, or STDP, in which synapses that match an activation pattern are reinforced. STDP is modeled after the human brain and is orders of magnitude faster than backpropagation.

A week back @TECH posted Simon Thorpe's powerful explanation of STDP and JAST:

Hi Tech (a suitable greeting for this blog),

Ca-ching! (penny dropping)

Eureka - you've found it - the secret sauce!

(image attachment)


Taking the earliest N spikes from the M input spikes because the earlier spikes are the strongest.

Hence temporal coding.

Simon explains that the strongest spikes occur earlier than the weaker spikes.

So near and yet so far:
(image attachment)

Rottnest Island is 15 km from Perth.

The authors have used an analog SNN as an example as evidenced by the exponential decay of the spike sum approaching Uth.
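For anyone following the jargon: in a leaky integrate-and-fire (LIF) neuron the membrane potential charges towards the firing threshold (Uth) and leaks away exponentially between inputs, which is the curve being referred to. A generic textbook sketch, purely illustrative and not the circuit analysed in the paper:

```python
import numpy as np

def lif_trace(input_current, u_th=1.0, tau=20.0, dt=1.0):
    """Textbook leaky integrate-and-fire neuron (generic illustration only).

    The membrane potential u decays exponentially towards rest between inputs
    and emits a spike (then resets) whenever it reaches the threshold u_th.
    """
    decay = np.exp(-dt / tau)
    u, trace, spikes = 0.0, [], []
    for t, i_in in enumerate(input_current):
        u = u * decay + i_in          # leak, then integrate this step's input
        if u >= u_th:                 # threshold crossing -> output spike
            spikes.append(t)
            u = 0.0                   # reset after firing
        trace.append(u)
    return np.array(trace), spikes

# Constant drive: u climbs towards u_th along an exponentially saturating
# curve, fires, resets, and repeats.
trace, spike_times = lif_trace(np.full(60, 0.08))
print(spike_times)
```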

There has been a wide range of works that have proposed accelerator designs to carry out SNN inference showing a high degree of parallelism, throughput, and energy-efficiency [15]– [19]. These include accelerators with a fully-digital architecture, such as IBM’s TrueNorth processor [15], as well ones in which synaptic computational cores comprise of analog memristive crossbars, such as Resparc [17]. While most of the works focus on inference-only accelerator designs, some like Intel’s Loihi processor account for SNN training using STDP learning rule [2], [20]. Furthermore, the TrueNorth and Loihi processors are highly optimized to facilitate asynchronous spike communications with the objective of improving the performance of the deployed SNNs having a specific type of architecture, different from the conventional ones. However, they lack general applicability since they do not have support to benchmark a wide variety of SNNs, particularly SNNs trained by standard BPTT learning rules. Thus, it is imperative to have a general-purpose SNN training accelerator framework that can support the training and inference of a plethora of SNN architectures that is emerging from recent SNN algorithm studies. There is also a huge volume of work centered around SNNs that claim SNNs to be an energy-efficient alternative to ANNs due to high sparsity in input spikes [2], [10], [16], [17], [21].

But recently, an inference framework implemented in an Eyeriss-like systolic-array hardware tailored for SNNs called SpinalFlow [11] has shown that standard rate-coded SNNs with modest spike-rates exhibit significantly lower efficiency than corresponding accelerators for ANNs. Note, Eyeriss [13] follows a von-Neumann mode of neural computation widely adopted in modern accelerators and enables us to optimize over different design choices such as type of dataflow, computation reuse, and skipping zero computations. The primary cause behind the inefficiency of SNNs can be attributed to the storage and movement of membrane potentials over multiple time-steps during inference. With this in mind, the next steps include developing a similar hardware evaluation framework that can yield a realistic estimation of hardware energy and latency associated with training a wide range of SNN architectures over multiple time steps. To this end, our SATA framework is the first to show that the inherent sparsity in SNNs associated with the spikes and their gradients are alone insufficient to yield training energy efficiency with respect to baseline ANN models. SNN training for conventional architectures, in fact, incurs huge overheads in terms of memory accesses and computations compared to ANNs, thereby making them highly energy-inefficient. Based on the conclusion and discussion posed in this work through the extensive study conducted on SATA and the energy-analysis tool that we propose,
we hope that the future SNN algorithm research can be directed towards enhancing specific forms of sparsity (that impact computation cost largely) and avoiding certain values of SNN-specific training parameters (that impact memory cost largely) during training that can enable SNNs to be energy-efficient.

To borrow from the old pantos:
"He's behind you!"
 
Reactions: 15 users
(image attachment)
 
Reactions: 21 users

Sirod69

bavarian girl ;-)
Does anybody know what's going on with this COVID-19 test study? I think it still isn't ready, or was there something wrong, didn't it work???
 
Reactions: 2 users
D

Deleted member 118

Guest

(image attachment)
 
Reactions: 13 users
Does anybody know what's going on with this COVID-19 test study? I think it still isn't ready, or was there something wrong, didn't it work???
This NaNose study involved testing 10,000 people. By May 2021 they had tested about 7,000, and the study was extended for 12 months to test the remaining 3,000. When the testing is complete, they have to compile the results, report them to the FDA, and then wait for the FDA to advise what they decide. Still a lot of work to do.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 23 users
D

Deleted member 118

Guest
Reactions: 10 users