BRN Discussion Ongoing

equanimous

Norse clairvoyant shapeshifter goddess

Sub-mW Neuromorphic SNN audio processing applications with Rockpool and Xylo​

Hannah Bos, Dylan Muir
Spiking Neural Networks (SNNs) provide an efficient computational mechanism for temporal signal processing, especially when coupled with low-power SNN inference ASICs. SNNs have been historically difficult to configure, lacking a general method for finding solutions for arbitrary tasks. In recent years, gradient-descent optimization methods have been applied to SNNs with increasing ease. SNNs and SNN inference processors therefore offer a good platform for commercial low-power signal processing in energy constrained environments without cloud dependencies. However, to date these methods have not been accessible to ML engineers in industry, requiring graduate-level training to successfully configure a single SNN application. Here we demonstrate a convenient high-level pipeline to design, train and deploy arbitrary temporal signal processing applications to sub-mW SNN inference hardware. We apply a new straightforward SNN architecture designed for temporal signal processing, using a pyramid of synaptic time constants to extract signal features at a range of temporal scales. We demonstrate this architecture on an ambient audio classification task, deployed to the Xylo SNN inference processor in streaming mode. Our application achieves high accuracy (98%) and low latency (100 ms) at low power (<4 µW inference power). Our approach makes training and deploying SNN applications available to ML engineers with general NN backgrounds, without requiring specific prior experience with spiking NNs. We intend for our approach to make Neuromorphic hardware and SNNs an attractive choice for commercial low-power and edge signal processing applications.

CONCLUSION We demonstrated a general approach for implementing audio processing applications using spiking neural networks, deployed to a low-power Neuromorphic SNN inference processor Xylo. Our solution reaches high accuracy (98 %) with <100 spiking neurons, operating in streaming mode with low latency (med. 100 ms) and at low power (<200 µW dynamic inference power). Our software pipeline Rockpool (rockpool.ai) provides a modern machine-learning (ML) neural network approach to building applications, with a convenient high-level API for defining networks. Rockpool supports definition and training of SNNs via several automatic differentiation backends. Rockpool also supports quantization, mapping and deployment to…
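The "pyramid of synaptic time constants" mentioned in the abstract is only described at a high level, so here is a minimal NumPy sketch of the underlying idea: the same input spike train filtered by exponential synapses at several temporal scales. This is an illustration only, not the Rockpool API, and the tau values are assumed.

```python
import numpy as np

# Minimal sketch (not the Rockpool API): filter one input spike train with
# exponential synapses at a "pyramid" of time constants, so downstream
# spiking neurons see the same signal at several temporal scales.
# The tau values below are illustrative, not taken from the paper.

dt = 1e-3                                   # 1 ms simulation step
tau_list = [2e-3, 8e-3, 32e-3, 128e-3]      # assumed synaptic time constants (s)

rng = np.random.default_rng(0)
spikes = (rng.random(1000) < 0.02).astype(float)   # 1 s of sparse input spikes

states = np.zeros(len(tau_list))            # one exponential filter per tau
filtered = np.zeros((len(spikes), len(tau_list)))

for t, s in enumerate(spikes):
    for k, tau in enumerate(tau_list):
        alpha = np.exp(-dt / tau)           # per-step decay factor
        states[k] = alpha * states[k] + s   # leaky integration of the spike
    filtered[t] = states

# 'filtered' now holds slow and fast views of the same spike train;
# these traces would feed the trainable spiking layers of the network.
print(filtered.shape)                       # (1000, 4)
```

In Rockpool itself this filtering is presumably handled by its built-in spiking modules; the point here is only that a mix of slow and fast synapses lets a small network pick up both short and long temporal features.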


SynSense (formerly aiCTX) was established in 2017 and is the world’s leading supplier of neuromorphic intelligence and application solutions. SynSense focuses on the research and development of neuromorphic intelligence, building on 20+ years of world-leading experience at the University of Zürich and ETH Zürich. Centering on edge-computing application scenarios, SynSense offers a full-stack service and is the only neuromorphic technology company in the world involved in both sensing and computing.

SynSense has made a significant breakthrough in commercial neuromorphic chips, a crucial step towards cognitive intelligence and intelligent connectivity. SynSense will build the cognition ecology of intelligent connectivity, lead the development of neuromorphic intelligence worldwide, and create well-being for humanity in the future.

 
Reactions: 5 users

equanimous

Norse clairvoyant shapeshifter goddess
Not sure about all the Chinese connections.

So will we be getting royalties for these?

SPECK​

The world’s first fully event-driven neuromorphic vision SoC, combining dynamic vision sensing with spiking neural network technology.
Speck is a fully event-driven neuromorphic vision SoC. It supports large-scale spiking convolutional neural networks (sCNNs) on a fully asynchronous chip architecture and is fully configurable, with a capacity of 320K spiking neurons. It also integrates a state-of-the-art dynamic vision sensor (DVS), enabling a fully event-driven, real-time, highly integrated solution for a variety of dynamic visual scenes. For typical applications, Speck can provide intelligence about the scene at only a few mW, with a response latency of a few ms. (A toy sketch of how such a DVS event stream feeds an event-driven sCNN follows the application list below.)
speck1-closeup.png

Applications​

Smart Toy
  • Gesture control
  • Smart tracking
Smart Home
  • Gesture control
Smart Security
  • Fall detection
  • Approaching detection
Self Driving
  • Lane detection
  • Sign recognition
  • Driver attention tracking
  • Obstacle detection
Drones
  • Obstacle detection
  • Object tracking
Ultra-low power
  • 100-1000× lower power
Ultra-low latency
  • End-to-end latency <5 ms; reaction speed up 10-100×
Privacy & security
  • Visual applications are processed on dot-matrix data
Cutting-edge algorithms
  • Supports various sCNN algorithms
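As flagged above, here is a toy sketch of what "fully event-driven" means on the input side: a DVS emits sparse (x, y, polarity, timestamp) events rather than frames, and an event-driven sCNN only does work where events occur. Generic Python, not SynSense's Speck/Sinabs toolchain; the resolution and accumulation window are assumptions.

```python
import numpy as np

# Toy sketch of consuming a DVS event stream: accumulate (x, y, polarity,
# timestamp) events into a sparse two-channel count map over a short window.
# Generic illustration only, not SynSense's actual toolchain.

H, W = 128, 128
window_us = 5_000                       # assumed 5 ms accumulation window

rng = np.random.default_rng(1)
n_events = 200
events = np.stack([                     # synthetic event stream
    rng.integers(0, W, n_events),       # x
    rng.integers(0, H, n_events),       # y
    rng.integers(0, 2, n_events),       # polarity (0 = OFF, 1 = ON)
    np.sort(rng.integers(0, 20_000, n_events)),   # timestamp (us)
], axis=1)

frame = np.zeros((2, H, W), dtype=np.int32)
for x, y, p, _ in events[events[:, 3] < window_us]:
    frame[p, y, x] += 1                 # count events per pixel and polarity

# In a fully event-driven sCNN, only these (few) active locations trigger
# computation, which is where the mW-level power figures come from.
print(int((frame > 0).sum()), "active pixel-channels out of", frame.size)
```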
 
Reactions: 7 users

Baisyet

Regular
Hi @Fact Finder @Diogenese @uiux @TECH, I saw this tweet on Twitter this morning and wanted to post it here in the forum, but I had my specialist appointment to run to, so I couldn’t post it.
It says the latest patent has been granted. I don’t know if this one has been posted here or not. I hope you guys can shed some light on it. Thanks.

 
Reactions: 43 users
OMG WTF!!!! - not sure if this has been posted already, but Renesas is looking for an AI Application Engineer, and the job description is as follows:
【Role and Responsibility】
 ・research latest AI technologies (Framework, inference tool methodology, IP)
 ・benchmark/optimize open network, customer network
 ・consult and support Tier1 and OEM to utilize AI tool and IP in combination with other Computer Vision IPs in ADAS/AD SoC
 ・Create "easy to develop" application note

【Background】
ADAS/AD is one of Renesas’ focus growth areas. AI is the most important technology in this application for OEMs/Tier 1s.

Renesas has already acquired >$1B of business, and high-quality support is required for OEMs/Tier 1s for their smooth development and mass-production launch.



Requisition ID: 32878
Department: SoC Marketing Department
Location: Kodaira City, JP
Job Function: Application Engineering



This could be very important but we need a bit more information before I tell my wife and she insists we move to the house she found on realestate.com today.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 29 users
[Quoting equanimous’s SynSense SPECK post above.]

Hi EQ

Pretty sure we’ve looked at this before.


If you look at the date on the article below, they partnered with Prophesee last year… and Prophesee have now partnered with Brainchip, which indicates to me that Prophesee didn’t find what they were looking for with their first partnership.

[attached article screenshot]


Just my opinion, which is why, just like your handle, I’m pretty calm about them being a competitor :)
 
Reactions: 13 users

equanimous

Norse clairvoyant shapeshifter goddess
[Quoting the Renesas job-ad post above.]

So I was just thinking about Renesas and Toyota.

Rehash of old news but good to share again

Amazing credentials here.

 
Reactions: 13 users

equanimous

Norse clairvoyant shapeshifter goddess
[Quoting the reply above about SynSense and Prophesee.]

Or is it because we have the patents…?
 
Reactions: 2 users

uiux

Regular
[Quoting equanimous’s Renesas and Toyota post above.]

 
Reactions: 12 users

uiux

Regular
@Fact Finder @Diogenese

Any professionals able to opine on this research and the researchers naming their innovation "SpikeNet"


Recent years have seen a surge in research on dynamic graph representation learning, which aims to model temporal graphs that are dynamic and evolving constantly over time. However, current work typically models graph dynamics with recurrent neural networks (RNNs), making them suffer seriously from computation and memory overheads on large temporal graphs. So far, scalability of dynamic graph representation learning on large temporal graphs remains one of the major challenges. In this paper, we present a scalable framework, namely SpikeNet, to efficiently capture the temporal and structural patterns of temporal graphs. We explore a new direction in that we can capture the evolving dynamics of temporal graphs with spiking neural networks (SNNs) instead of RNNs. As a low-power alternative to RNNs, SNNs explicitly model graph dynamics as spike trains of neuron populations and enable spike-based propagation in an efficient way. Experiments on three large real-world temporal graph datasets demonstrate that SpikeNet outperforms strong baselines on the temporal node classification task with lower computational costs. Particularly, SpikeNet generalizes to a large temporal graph (2M nodes and 13M edges) with significantly fewer parameters and computation overheads.
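For those trying to parse the abstract: the core idea is to replace the dense hidden state of an RNN with per-node spiking dynamics over successive graph snapshots. The sketch below is my own toy illustration of that idea under assumed sizes and parameters, not the authors’ SpikeNet code.

```python
import numpy as np

# Toy sketch of the general idea in the abstract (not the authors' code):
# for each temporal-graph snapshot, every node aggregates binary spikes
# from its neighbours and updates a LIF-style membrane potential, so the
# evolving node state is carried by sparse spike trains rather than a
# dense RNN hidden state. All sizes and parameters below are assumed.

num_nodes, dim = 5, 8
decay, threshold = 0.8, 1.0

rng = np.random.default_rng(2)
W = rng.normal(0.0, 0.5, (dim, dim))        # shared feature transform
features = rng.normal(0.0, 1.0, (num_nodes, dim))
snapshots = [(rng.random((num_nodes, num_nodes)) < 0.4).astype(float)
             for _ in range(3)]             # three adjacency snapshots

membrane = np.zeros((num_nodes, dim))
spikes = np.zeros((num_nodes, dim))

for A in snapshots:
    agg = A @ spikes + features             # neighbours' spikes + own features
    membrane = decay * membrane + agg @ W.T # leaky integration
    spikes = (membrane >= threshold).astype(float)   # binary spike output
    membrane = np.where(spikes > 0, 0.0, membrane)   # reset fired neurons

# 'spikes' is the sparse per-node representation a classifier head would use.
print(spikes.sum(), "spikes across", num_nodes, "nodes")
```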
 
Reactions: 4 users

equanimous

Norse clairvoyant shapeshifter goddess
With Akida it will be Smart and unbreakable

 
Reactions: 7 users
[Quoting equanimous: “Or is it because we have the patents…?”]

Could be the case.

Regardless of the reasons, Prophesee are now partnered with Brainchip, so the market SynSense were advertising on their webpage is now also Brainchip’s market; we just have to wait for the ink to dry on the money tree!

:)
 
Reactions: 11 users
[Quoting equanimous’s Renesas and Toyota post above.]


Toyota and Denso were involved in an AI round-table discussion with none other than Mr Chip; AM 12 months ago.

So they’re both well aware of Brainchip.

In that advert you posted, Renesas state they have >$1B in business. I read that to mean they have >$1B of business in the pipeline, not that they have bought a business for $1B. Renesas have been pushing their own products for ADAS, but hopefully there will be a need for Akida in there somewhere.

Regardless, having MB, Honda and the Stellantis range in the pipeline (not 100% confirmed) through Valeo, it’s only fair to share.

We don’t want a complete monopoly… OK, it would be nice!

:)
 
Reactions: 19 users

Diogenese

Top 20
[Quoting uiux’s SpikeNet question above.]

As to SpikeNet, it does not appear to have been registered in Australia as a trade mark. It may have been registered in France by Simon Thorpe’s company before we bought the company, but I don’t know whether we would have maintained the French TM registration, if there was one.

So I guess it's a question for management if they want to obtain rights to a TM they do not use at present, and probably have no intention of using.

You might bring it to the notice of our IP man in Perth, Milind Joshi:
https://www.linkedin.com/in/milindjoshi17/?originalSubdomain=au

I'll leave it to @Fact Finder to answer the other part of your question.
 
Reactions: 13 users

Diogenese

Top 20
[Quoting uiux’s SpikeNet question above.]

Even allowing for the fact that they refer to LSTM, SNN and LIF, this talk of graphs is above my pay grade. [Prof Google tells me that the toppled "A" (∀) is the universal quantification symbol, but I'm none the wiser.]

As you know, we do not use leaky integrate-and-fire (LIF). We just use integrate-and-fire combined with N-of-M coding, which on its own is a major advantage as far as compute operations are concerned, saving watts and time.
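To make the distinction concrete: a leaky integrate-and-fire neuron multiplies its membrane potential by a decay factor on every timestep, a plain integrate-and-fire neuron just accumulates, and N-of-M coding allows only the N most strongly driven of M neurons to fire. The sketch below is my own illustration of that difference, not BrainChip’s implementation.

```python
import numpy as np

# Rough illustration only -- not BrainChip's actual implementation.
# LIF: multiply the membrane potential by a decay factor every step.
# IF:  accumulate only, so the per-step decay multiply goes away.
# N-of-M: at most the N most strongly driven of M neurons fire.
# All numbers here are made up for the example.

M, N = 16, 4                    # M neurons, at most N allowed to fire per step
decay, threshold = 0.9, 1.0     # leak factor is used only by the LIF variant

rng = np.random.default_rng(3)
inputs = rng.random((10, M)) * 0.3          # 10 timesteps of input drive

v_lif = np.zeros(M)
v_if = np.zeros(M)

for x in inputs:
    v_lif = decay * v_lif + x               # LIF: extra multiply every step
    v_if = v_if + x                         # IF: accumulation only

    # N-of-M coding on the IF population: of the neurons above threshold,
    # only the top N (by potential) emit a spike and reset.
    above = np.where(v_if >= threshold)[0]
    winners = above[np.argsort(v_if[above])[-N:]]
    v_if[winners] = 0.0

print("LIF potentials:", np.round(v_lif, 2))
print("IF potentials: ", np.round(v_if, 2))
```

Dropping the per-step decay multiplication is the kind of simplification the PvdM white-paper quote further down the thread also describes.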
 
Reactions: 18 users

Diogenese

Top 20
[Quoting his reply above about integrate-and-fire and N-of-M coding.]

Speak of the devil (leaky-integrate-and-fire):
from PvdM's White Paper today: https://brainchip.com/wp-content/uploads/2022/08/BrainChip-Learning-how-to-Learn.pdf
"I also needed to efficiently scale neuron count to develop a commercially viable solution. Early FPGA-based prototypes first contained seven digital neurons, then 64, and the next iteration increased this number to 256. To get over a million neurons on a single chip, I had to simplify the neuron model without compromising its computational power. So, I eliminated extraneous biological-inspired elements such as neurotransmitters and exponential decays while preserving the neuromorphic functions that are essential in its computational function"
 
Reactions: 23 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

[Quoting equanimous’s SynSense Rockpool/Xylo post above.]

Go Hannah! You the bos!
 
Reactions: 7 users

uiux

Regular
Program Solicitation: NASA STTR 2021 Phase II Proposal Period (only STTR 2021 Phase I awardees eligible to apply)
Open Date: May 06, 2022
Close Date: Due last day of Phase I contract
Selection Announcement Date: Sep 09, 2022*


---

New NASA STTRs coming out soon
 
Reactions: 45 users

Learning

Learning to the Top 🕵‍♂️
[Quoting uiux’s NASA STTR post above.]

This will help new shareholders understand the technical aspects of uiux’s post.
Learning.
Learning every day.
 
Reactions: 18 users

Proga

Regular
Are you getting any of that rain @Sirod69?
 
Reactions: 2 users

Sirod69

bavarian girl ;-)
Reactions: 2 users