BRN Discussion Ongoing

YLJ

Swing/Position Trader
This explains in simple terms just how effective Akida is
https://www.eetindia.co.in/keyword-spotting-making-an-on-device-assistant-a-reality/

Article By : Peter AJ van der Made, BrainChip



The natural-language processing technique known as keyword spotting is gaining traction with the proliferation of smart appliances controlled by voice commands.

Voice assistants from Amazon, Google, Apple and others can respond to a phrase that follows a “hot word” such as “Hey, Google” or “Hey, Siri” and appear to respond almost immediately. In fact, the response has a delay of a fraction of a second, which is acceptable in a smart speaker device.

How can a small device be so clever?

The voice assistant uses a digital signal processor to digest the first “hot word.” The phrases that follow are sent via the Internet to the cloud.
The speech is then converted into streams of numbers, which are processed in a recurrent convolutional neural network that remembers previous internal states, so that it can be trained to recognize phrases or sequences of words.
These data streams are processed in a datacenter, and the answer or song requested is sent back to the voice assistant via the web. This works well in situations that are non-critical, where a delay does not matter and where Internet connections are reliable.

The neural networks located in data centers are trained using millions of samples in a method that resembles successive approximation; errors are initially very large, but are reduced by feeding the error back into an algorithm that adjusts the network parameters.
The error shrinks with each training cycle, and cycles are repeated until the output is correct.
This is done for every word and phrase in the dataset. Training such networks can take a very long time, on the order of weeks.
Once trained, the network can recognize words and phrases spoken by different individuals.
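
As a minimal sketch of that feedback loop (gradient descent on a toy one-layer network; the data, sizes and learning rate here are invented for illustration, not from the article):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))        # 100 toy training samples, 16 features each
true_w = rng.normal(size=16)
y = X @ true_w                        # the targets the network should reproduce

w = np.zeros(16)                      # network parameters, initially all wrong
lr = 0.1                              # learning rate

for cycle in range(300):              # training cycles
    pred = X @ w                      # forward pass
    err = pred - y                    # error is large at first...
    w -= lr * (X.T @ err) / len(X)    # ...and is fed back to adjust the parameters

print("mean error:", float(np.abs(X @ w - y).mean()))   # shrinks toward zero

Each pass reduces the error a little, which is the successive-approximation behavior described above; real keyword networks repeat this over millions of samples, which is why training takes weeks.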

The recognition process, called inference, requires millions of multiply-accumulate (MAC) operations, which is why the information cannot be processed in a timely manner on a microprocessor within the device.
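
For a sense of scale, here is the multiply-accumulate chain behind a single output neuron (sizes are illustrative); a layer with thousands of such neurons, stacked several layers deep, is where the millions of MACs per inference come from:

import numpy as np

x = np.random.rand(1024)              # inputs feeding one neuron
w = np.random.rand(1024)              # that neuron's learned weights

acc = 0.0
for xi, wi in zip(x, w):              # 1,024 multiply-accumulate steps...
    acc += xi * wi                    # ...for a single output value

assert np.isclose(acc, np.dot(x, w))  # equivalent to one dot product

Dedicated MAC hardware parallelizes exactly this loop, which a general-purpose microprocessor must grind through sequentially.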

In keyword spotting, multiple words need to be recognized.
The delay of sending audio to the datacenter is not acceptable, and Internet connections are not always guaranteed. Hence, local processing of phrases on the device is preferable.

One solution is to shrink the multiply-accumulate functions into smaller chips.
The Google Edge TPU (Tensor Processing Unit), for instance, incorporates many array multipliers and math functions.
This solution still requires a microprocessor to run the neural network, but the MAC functions are passed on to the chip and accelerated.

While this approach allows a small microprocessor to run larger neural networks, it comes with disadvantages:
  • The power consumption remains too high for small or battery-powered appliances.
  • With diminishing size comes diminishing performance: small dedicated arrays of multipliers are not as plentiful or as fast as those provided by large, power-hungry GPUs or TPUs in datacenters.

An alternative approach involves smaller, tighter neural networks for keyword processing.
Rather than performing complex processing techniques in large recurrent networks, these networks process keywords by converting the audio stream into a spectrogram using a feature-extraction technique known as MFCC (Mel-frequency cepstral coefficients).



The spectrogram image is input to a much simpler 7-layer feed-forward neural network that has been trained to recognize the features of a keyword set.
The Google keyword dataset, for instance, consists of 65,000 one-second samples of 30 individual words spoken by thousands of different people.
Examples of keywords are UP, DOWN, LEFT, RIGHT, STOP, GO, ON and OFF.
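
To make that pipeline concrete, here is a sketch in Python using librosa and Keras; the MFCC settings and layer widths are my own guesses, since the article does not publish the network's exact shape:

import librosa
import tensorflow as tf

def to_mfcc_image(path):
    # One-second, 16 kHz clip -> a 10 x 32 MFCC "image"
    audio, sr = librosa.load(path, sr=16000, duration=1.0)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=10)

# Small feed-forward classifier over the 30 keywords; seven trainable
# layers in the spirit of the network described above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10, 32)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(30, activation="softmax"),   # UP, DOWN, LEFT, ...
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

Trained on the 65,000 Speech Commands samples, model.predict on a new clip's MFCC image would yield the keyword probabilities.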


An alternative approach
We have taken a completely different approach, processing sound, images, data and odors in event-based hardware. Brainchip was founded long before the current machine learning rage.

Advancing processing methods for neural networks and artificial intelligence is our main aim, and we are focused on neuromorphic hardware designs.

The human brain does not run instructions, but instead relies on neural cells.
These cells process information and communicate in spikes: short bursts of electrical energy that signal the occurrence of an “event” such as a change in color, a line, a frequency, or touch.
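
A toy version of that behavior is a leaky integrate-and-fire neuron: it stays silent under weak input and emits a spike only when stimulation drives its potential over a threshold (a simplified sketch; the constants are arbitrary):

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    # Integrate input over time; record a spike time only on threshold crossings.
    potential, spikes = 0.0, []
    for t, x in enumerate(inputs):
        potential = potential * leak + x   # accumulate input, with leak
        if potential >= threshold:         # an "event" occurred
            spikes.append(t)
            potential = 0.0                # reset after spiking
    return spikes

quiet = [0.05] * 20                        # weak, unchanging stimulus
burst = [0.05] * 10 + [0.6] * 3 + [0.05] * 7

print(lif_neuron(quiet))                   # []   -> no events, no downstream work
print(lif_neuron(burst))                   # [11] -> only the change is signalled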

By contrast, computers are designed to operate on data bits and execute instructions written by a programmer.

These are two very different processing techniques.

It takes many computer instructions to emulate the function of brain cells — in the form of a neural network — on a computer.

We realized we could do away with the instructions and build very efficient digital circuits that compute in the same way the brain does.

The brain is the ultimate example of a general intelligent system.

This is exactly what Brainchip has done to develop the Akida neural processor.

The chip evolved further when we combined deep learning capabilities with the event-based spiking neural network (SNN) hardware, thus significantly lowering power requirements and improving performance — with the added advantage of rapid on-chip learning.

The Akida chip can process the Google keyword dataset, utilizing the simple 7-layer neural network described above, within a power budget of less than 200 microwatts.

Akida was trained using the ImageNet dataset, enabling it to instantly learn to recognize a new object without expensive retraining.

The chip has built-in sparsity.
The all-digital design is event-based and therefore does not produce any output when the input stimulus does not cause the neuron to exceed the threshold.

This can be illustrated with a simplified, albeit extreme, example.

Imagine an image with a single dot in the middle.

A conventional neural network needs to process every location of the image to determine if there is something there.
It takes a block of pixels from the image and performs a convolution.
The results are zero, and these zeros are propagated throughout the entire network, together with the zeros generated by all the other blocks, until it reaches the dot.
To detect and eliminate the zeros would add additional latency and would cause processing to slow down rather than speed it up.
Nearly 500 million operations are required to determine that there is a single dot in the image.

By contrast, the Akida event-based approach responds only to the one event, the single dot.

All other locations contain no information and zeros are not propagated through the network, because they do not generate an event.

In practical terms, with real images this sparsity results in 40 to 60 percent fewer computations to produce the same classification results, using less power.
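
A back-of-the-envelope count makes the contrast concrete; the layer shape below is an illustrative assumption, not Akida's actual topology:

# One 3x3 convolution over a 224 x 224 single-channel image, 64 output channels
H, W, K, C_out = 224, 224, 3, 64

frame_based = H * W * K * K * C_out   # every position is computed, dot or not
event_based = 1 * K * K * C_out       # only the single event (the dot)

print(f"frame-based: {frame_based:,} MACs")   # 28,901,376 in this one layer
print(f"event-based: {event_based:,} MACs")   # 576

Stacked across a full multi-layer network, the frame-based total climbs into the hundreds of millions of operations cited above, while the event-driven total stays negligible.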

Training Akida
A keyword spotting application using the Akida chip trained on the Google Speech Commands Dataset can run for years off a penlight battery.
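
The arithmetic behind that claim checks out, assuming a penlight (AA) cell holds roughly 2,500 mAh at 1.5 V (real capacities vary):

battery_wh = 2.5 * 1.5                # ~3.75 watt-hours in an AA cell
hours = battery_wh / 200e-6           # keyword spotting at 200 microwatts
print(hours / (24 * 365), "years")    # ~2.1 years of continuous listening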

The same circuit configured to use 30 layers and all 80 neural processing units on the chip can be used to process the entire ImageNet dataset in real-time at less than 200 milliwatts (about five days on a penlight battery).

The MobileNet network for image classification fits comfortably on the chip, including all the required memory.
The on-chip, real-time learning capability makes it possible to add to the library of learned words, a nice feature that can be used for personalized word recognition like names, places and customized commands.
Another option for keyword spotting is the Syntiant NDP101 chip.

While this device also operates at comparably low power (200 microwatts), it is a dedicated audio processor that integrates an audio front end, buffering and feature extraction together with the neural network. Syntiant expects to replace digital MACs with an in-memory analog circuit in the future to further reduce power.

The Akida chip has the added advantages of on-chip learning and versatility. It can also be reconfigured to perform sound or image classification, odor identification or to classify features extracted from data. Another advantage of local processing is that no images or data are exposed on the Internet, significantly reducing privacy risks.

Applications for the technology range from voice-activated appliances to replacing worn-out components in manufacturing equipment.
The technology also could be used to determine tire wear based on the sound a tire makes on a road surface.

Other automotive applications include monitoring a driver’s alertness, listening to the engine to determine if maintenance is required and scanning for vehicles in the driver’s blind spot.

We expect Akida to evolve, incorporating the structures of the brain, particularly cortical neural networks aimed at artificial general intelligence (AGI).

This is a form of machine intelligence that can be trained to perform multiple tasks.

AGI technology could be used to control autonomous vehicles, with sufficient intelligence to eventually learn to drive much as humans do.
To be sure, there will be many intermediate steps along the way to that goal.


A future Akida device will include a more sophisticated neural network model that can learn increasingly complex tasks. Stay tuned.
— Peter AJ van der Made is the CTO of BrainChip.
Thank you @Rayz for your generous input. As I read this gem of an article again, I remembered that I had seen it before a long time ago.
For those who have not noticed, there is a great feature to this forum that can save a lot of time if you are the type to save research
you come across for future reference... there is a bookmark option at the top right of every post...
 
Not sure if these have been posted already

I found these Semiconductor Engineering website articles from the last few days very interesting and relevant to the automotive industry and Akida - well worth a read

The site mentions Brainchip in several other articles on their website

Automotive Outlook: 2022
ML Focus Shifting Toward Software

Cheers
TLS

This on Twitter today - USA vs China in chip production race

House passes bill that would put billions toward US chip production
The America COMPETES Act of 2022 seeks to position the US on equal footing with China.

On Friday, the US House of Representatives passed the America COMPETES Act of 2022 almost entirely along party lines. Among other measures, the sprawling 2,900-page bill allocates $52 billion in grants to subsidize semiconductor manufacturing. It also authorizes nearly $300 billion for research and development.

If enacted, the legislation would represent the most comprehensive attempt by the US to match China’s recent technological and industrial dominance. However, as The New York Times points out, it is unlikely to pass in its current iteration. Much of that comes down to ideological differences between how Democrats and Republicans think the federal government can best position the country to compete against China.
 

Rayz

Member
There is so much on their website regarding working with DAIMLER re ADAS etc. Rather than me copy and paste it all, I suggest a visit to their website to BLOW YOUR MIND!


We need BS to go over all the patents as I may be a bit excited and doing a Manchildreborn (not a sledge as I love your work!)

So much more research to do but I am hoping this is part of the iceberg!
The Xilinx and BRN connection was the UltraScale:
Hardware Acceleration of ...

19 Jan 2021 — The processing is done by six BrainChip Accelerator cores in a Xilinx Kintex Ultrascale field-programmable gate array (FPGA).
 

Rayz

Member
UltraScale Architecture - Xilinx

Xilinx's new 16nm and 20nm UltraScale™ families are based on the first architecture to span multiple nodes from planar through FinFET technologies and ...
 
Are these guys our competition or are they targeting different applications being analogue?

Rain Neuromorphics Inc.

Feb 2 (Reuters) - Rain Neuromorphics Inc., a startup designing chips that mimic the way the brain works and aims to serve companies using artificial intelligence (AI) algorithms, said on Wednesday it raised $25 million.

Gordon Wilson, CEO and co-founder of Rain, said that while most AI chips on the market today are digital, his company's technology is analogue. Digital chips read 1s and 0s while analogue chips can decipher incremental information such as sound waves.

@Fact Finder
 

Rayz

Member
From what I can gauge, the Xilinx site says ARM Cortex for now. Difficult on a phone, even with glasses. Keep looking.
 

BaconLover

Founding Member





Good Morning TSEx-y chippers!

Stupid Cramer was bearish on TESLA at its IPO of $3.40 a share (split-adjusted).

Same criticism - no profits, not possible, you cannot do this, lost $290m so far..... but hey, look where Tesla is now.

Don't listen to these media buffoons. Especially Jim Cramer; he has a very low IQ.
 

MDhere

Regular






Love it, thanks for sharing BL. Smart man, that Elon, and this is exactly why BRN is going to the moon. Tesla: $57 a share in 2018, $1,000 a share in 2021/22! Ummmm yeah baby. Brainchip to the moon. Sorry, I took my happy pills a bit early today lol
 

AlpineLife

Member
Gotta love Mondays - ASX open! My daily “opening ritual”... watch the first half hour like a soap-opera addict. One screen is just BRN. Now deliver us an announcement and I will be screen-side all day
 
I agree with your recollection that AKIDA was simulated first on an FPGA and that Xilinx was the brand involved.
My opinion only DYOR
FF

AKIDA BALLISTA
Yeah that is my recollection too. 99.9% sure so not even going to waste my time checking.

SC
 

kenjikool

Regular
It will be great if we can get a US listing. Looking through the PS announcements on the ASX, it's "we dug a hole and we found stuff in it, high fives", then 6 months later they say "well, the hole we dug didn't have as much as we thought, so we're going to dig another hole over there, yeah high fives baby". But tell the world we have the first AI chip and deals are coming? Yeah, OK: let's wait until revenue is high and you have at least 7 big names buying up, with all the losses over the years fully repaid. Then we may look at you.
 

Cocoman

Member
Interesting choice of name for Rain Neuromorphics. Will they be selling “rain chips”?
 
HELLLLLO BRN FAMILY I HAVE FUCKED hotcrapper AND JOINED HERE
 
Actually so good to see most of the same people here
 

Hi everybody, I was doing some research and came across this: https://www.synaptics.com/technology/edge-computing
There seems to be a lot of crossover with Brainchip.
Are they using Akida, or are they a competitor? Can any of our more intelligent members help out please?

Should add that they are working with Eta Compute, who list Edge Impulse as a partner!
Thanks
Hi Dr
The following comes from their quarterly. It seems they are using someone else's AI:

About Synaptics Incorporated

“Synaptics (Nasdaq: SYNA) is changing the way humans engage with connected devices and data, engineering exceptional experiences throughout the home, at work, in the car and on the go. Synaptics is the partner of choice for the world’s most innovative intelligent system providers who are integrating multiple experiential technologies into platforms that make our digital lives more productive, insightful, secure and enjoyable. These customers are combining Synaptics’ differentiated technologies in touch, display and biometrics with a new generation of advanced connectivity and AI-enhanced video, vision, audio, speech and security processing.”

The last time I looked at Eta, they were not able to do vision, audio, speech and cyber security all on one device. Some time ago, when Peter van der Made spoke about Eta, he referred to the fact that their device had very limited processing power, restricting its possible use cases greatly.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Just reading my response: to be clear, Synaptics do not appear to have their own AI, but combine their technology with someone else's AI.
My opinion only DYOR
FF

AKIDA BALLISTA
 

GRN-BRN

Emerged
Bet that one comes up at the xmas party every year lol :LOL:
 

JDelekto

Regular
Interesting, I never kept up with Synaptics, but they developed the touchpad that came in a Toshiba Satellite Pro laptop I owned back in the mid-2000s. I distinctly remember the company because the touchpad had unique qualities, such as also being an LCD screen. One could set a custom image for the background of the touchpad, and various regions along the top and side of the touchpad could be used as "virtual hotkeys".

It does look like they are now into several other sensory input devices and that leads me to another question. Given that neuromorphic processors are still in the early stages of adoption, I wonder how many sensor manufacturers are developing devices whose output is optimized for spiking neural networks and what is involved in adapting existing sensors without decreasing performance.
 