BRN Discussion Ongoing

TopCat

Regular
How many ubiquitous things can you have 🤔



Edit: from an article on their LinkedIn page
 

wilzy123

Founding Member

TheFunkMachine

seeds have the potential to become trees.
For some reason I feel like this is massive and in favour of Brainchip! What do we think, people? I was also not aware of a feud between Qualcomm and Arm. This was liked by Kimberly Vaupen, and she commented thanks for sharing.
 


alwaysgreen

Top 20
For some reason I feel like this is massive and in favour of Brainchip! What do we think, people? I was also not aware of a feud between Qualcomm and Arm. This was liked by Kimberly Vaupen, and she commented thanks for sharing.
That's a very interesting comment... 🤔
 
They are (like a lot of competitors) likely a bit jelly of what we've got... and are using our own language to compensate for their present stature.
A bit like buying a red Porsche but at a corporate level. :rolleyes:
 

wilzy123

Founding Member
A bit like buying a red Porsche but at a corporate level. :rolleyes:

Honestly, I think we should expect this from competitors. The best thing we can do is go hard at demonstrating more clearly why we are different from them and why we can do what they cannot. That will be the nail in the coffin for competitors, and it might also help the average WANCA interpret reality correctly.
 
How many ubiquitous things can you have 🤔


Edit: from an article on their LinkedIn page
Hi @wilzy123

OK, I will have a go at explaining Brainchip's AKIDA better.
I don't know much about advertising, but I do know that messaging needs to be clear and catch your attention, so here I go:

"Unlike Qualcomm AKIDA will not connect you with Vladimir Putin."

"Unlike Qualcomm AKIDA will keep your children safe from strangers."

"Unlike Qualcomm AKIDA will not connect your spouse with your lover."

What do you think? Any good?

Regards
FF

AKIDA BALLISTA
 

Newk R

Regular
Geez, I reckon that's a bit ubiquitous.
 

Baracuda

Emerged
AND you really think that means WHAT to us?

ZIP was up over 20% yesterday, so of course it would drop today.

If we went up 20-25% yesterday, I would expect BRN to drop back also. Funny, I don't recall us being up 20% yesterday, do you?

Think before you type maybe?
Yak, I have been reading lots of your posts, but I couldn't find them for a while; today I tried again. I noted your frustration around the shorting on BRN. I wonder if it is OK to share how many shorts have been closed to date.
 
you sound so unfucked and I reckon you look exactly like that Karen in your gif, this isn’t a school yard or your house so keep your parenting to yourself
Your comment is offensive and your language is repugnant!!

You should apologise to Bravo and refrain from using such language in the future.

It is not appreciated or acceptable!!
 

ndefries

Regular
you sound so unfucked and I reckon you look exactly like that Karen in your gif, this isn’t a school yard or your house so keep your parenting to yourself

That's a bit below the line in my book. Sorry you had to read that, Bravo.
 

BaconLover

Founding Member
French Guiana Fire GIF by European Space Agency - ESA





Hopefully Milo approves this post🤞
 
Bravo
I am with you on this. Is that crap even necessary here? Sounds like someone just got out of preschool. Keep that poor choice of words for your family members; it's not welcome or needed here.
 
Just a recent article from Germany I hadn't seen, so thought I'd post it as it references us.

A couple of bolds about halfway down, but it's the second one I highlighted that was...errr...an interesting assumption, or statement of fact :unsure:

It's a Google Translate copy.



Neuromorphic components for TinyML: Machine learning on microcontrollers at the edge

10/28/2022 By Michael Eckstein

So far, "intelligent" edge applications have relied on inference: applying AI algorithms that were trained in advance with a great deal of computing power. Neuromorphic IP blocks, i.e. blocks modeled on biological brains, enable self-learning edge applications for the first time.

Using neuroscientific terms such as neurons and synapses to describe most neural networks is actually inadequate; after all, they are a long way from how the human brain works. There is now a new, third generation: spiking neural networks (SNNs), which run on hardware that allows processing similar to that in the human brain, hence the term neuromorphic architecture.

Spiking Neural Networks

SNNs are artificial neural networks (ANNs) that are modeled more closely on the human brain than second-generation NNs. The main difference is that SNNs are spatio-temporal, i.e. they take time into account in their operation. SNNs work with discrete spikes, whose dynamics are described by a differential equation and which represent various biological processes.

The critical process is the neuron's discharge once its membrane potential reaches the discharge threshold, at which point the neuron emits spikes at specific times. For comparison, a human brain consists of roughly 86 billion computing units, the neurons, which receive input from other neurons via dendrites. Once the inputs exceed a certain threshold, the neuron fires, sending an electrical impulse across a synapse. The synaptic weight controls the intensity of the impulse passed to the next neuron.
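The integrate-to-threshold-then-fire behaviour described above can be sketched in a few lines. This is an illustrative leaky integrate-and-fire toy of the general mechanism, not Akida's actual neuron model; the parameter names `tau` and `v_thresh` are my assumptions:

```python
# Toy leaky integrate-and-fire neuron (illustrative only, not Akida's
# actual model; tau and v_thresh are assumed parameter names).

def lif_step(v, input_current, tau=0.9, v_thresh=1.0):
    """Advance the membrane potential one time step.

    Returns the new potential and whether the neuron spiked.
    """
    v = tau * v + input_current   # leaky integration of the input
    if v >= v_thresh:             # discharge threshold reached
        return 0.0, True          # reset the potential, emit a spike
    return v, False

# Drive the neuron with a constant input; it fires periodically.
v, spikes = 0.0, []
for t in range(10):
    v, fired = lif_step(v, 0.3)
    spikes.append(fired)
# With these parameters the neuron spikes on steps 3 and 7.
```

Between spikes the potential simply accumulates input; information is carried by *when* the spikes occur, which is the "time as a variable" point made below.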

In contrast to other artificial neural networks, the neurons in the different layers of an SNN fire asynchronously and their spikes arrive at different times, whereas in conventional networks information propagates through the layers on the system clock. The spatio-temporal nature of SNNs, together with the discontinuous nature of the spikes, means that the models can be more sparsely connected: neurons connect only to relevant neurons and use time as a variable. This allows information to be encoded more densely than with the traditional binary encoding of ANNs, which makes SNNs more computationally powerful and efficient. However, the asynchronous behavior of SNNs, together with the need to solve differential equations, is very computationally intensive on conventional hardware. This is where neuromorphic architecture comes into play.

Neuromorphic Architecture

A neuromorphic architecture consists of neurons and synapses and differs significantly from the von Neumann architecture. In neuromorphic computers, processing and storage of data take place in the same region, which avoids a weakness of the von Neumann architecture, where the data to be processed must first be loaded from memory into the processing units and the memory interface becomes a bottleneck that limits data throughput. In addition, the neuromorphic architecture supports SNNs and accepts spikes as inputs, so information can be encoded in a spike's arrival time, magnitude, and shape.

Key characteristics of neuromorphic systems include their inherent scalability, event-driven computation, and stochasticity. Since the neurons only trigger when their threshold is exceeded, the neuromorphic architecture offers extremely low power consumption, usually orders of magnitude lower than that of conventional computer systems. Neuromorphic components therefore have the potential to play a major role in the coming age of edge and endpoint AI.

Sheer Analytics & Insights estimates that the global neuromorphic computing market will reach $780 million by 2028, with a compound annual growth rate of 50.3 percent [1]. Mordor Intelligence projects that the market will reach $366 million by 2026, with a compound annual growth rate of 47.4 percent [2]. Further market research results suggesting a similar increase can be found online. Market research companies predict that sectors such as industry, automotive, mobile communications and medicine will use neuromorphic systems for a variety of applications.

TinyML (Tiny Machine Learning) is about running ML and NNs on memory- and processor-limited components such as microcontrollers (MCUs). It therefore makes sense to integrate a neuromorphic core for TinyML use cases. Neuromorphic components are event-based processors that operate only on non-zero events. Event-based convolutions and dot products are significantly less computationally intensive because zeros are simply not processed, and performance improves further as the number of zeros in the filter channels or kernels increases. This, together with activation functions such as ReLU that produce many zero outputs, gives event-based processors their inherently low-power character and reduces the effective MAC requirements.
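The zero-skipping idea behind event-based processing can be illustrated with a plain dot product. This is a sketch of the general technique, not Akida's implementation:

```python
# Event-based dot product: multiply-accumulate only where the
# activation is non-zero. Illustrative sketch, not Akida's design.

def event_based_dot(activations, weights):
    """Return the dot product and the number of MACs actually done."""
    total, macs = 0, 0
    for a, w in zip(activations, weights):
        if a == 0:        # zero events are never processed
            continue
        total += a * w
        macs += 1
    return total, macs

# After a ReLU, activations are typically mostly zero ("sparse").
acts = [0, 3, 0, 0, 1, 0, 0, 0]
weights = [2, 4, 1, 5, 3, 2, 1, 7]
result, macs_used = event_based_dot(acts, weights)
# Only 2 of 8 positions need a multiply-accumulate; result is 15.
```

The fewer non-zero activations there are, the fewer MACs are performed, which is why sparsity translates directly into energy savings on event-based hardware.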

Neuromorphic TinyML: Learning at the Edge

Because neuromorphic systems process spikes, 1-, 2-, and 4-bit quantization can also be used with ANNs, compared with the conventional 8-bit quantization. Because the SNN is implemented in hardware, neuromorphic devices (such as Brainchip's Akida) have the unique ability to learn at the edge. This is not possible with conventional components, which only simulate a neural network on a von Neumann architecture; there, on-edge learning is computationally intensive and has a memory footprint that exceeds the budget of a TinyML system.

Additionally, integers do not provide enough range to train an NN model accurately, so 8-bit training on traditional architectures is currently not possible. On traditional architectures, a few on-edge learning implementations using classical machine learning algorithms (autoencoders, decision trees) have reached production for simple real-time analysis use cases, while NNs are still in the development phase. The advantages of using neuromorphic components and SNNs at the endpoint can be summarized as follows:

  • Extremely low power consumption (milli- to microjoules per inference)
  • Fewer MACs needed than with traditional NNs
  • Less parameter memory used than with conventional NNs
  • On-edge learning capability
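As a rough sketch of what the low-bit (here 4-bit) weight quantization mentioned above looks like, here is a simple symmetric scheme; this is an illustration of the general technique, not Brainchip's documented method, and the function names are mine:

```python
# Symmetric low-bit quantization sketch (scheme and names assumed,
# not taken from the article or from Akida's documentation).

def quantize(weights, bits=4):
    """Map float weights to signed integers of the given bit width."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]    # integers in [-qmax, qmax]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integers."""
    return [x * scale for x in q]

w = [0.8, -0.35, 0.1, -0.7]
q, s = quantize(w, bits=4)   # small signed integers plus one scale
approx = dequantize(q, s)    # close to the original weights
```

Each weight is stored as a 4-bit integer plus one shared scale factor, roughly halving memory again versus 8-bit, at the cost of a bounded rounding error per weight.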
Recognizing the enormous potential of neuromorphic systems and SNNs, Renesas licensed a core from Brainchip [3], the world's first commercial supplier of neuromorphic IP. At the lower end of the performance scale there is now an MCU combining an Arm M33 processor with a spiking neural network based on a licensed Brainchip core, including the accompanying software.

Neuromorphic TinyML Use Cases

All in all, microcontrollers with neuromorphic cores can win over industry use cases with their special capability of on-edge learning:

  • For anomaly detection in existing industrial plants, where using the cloud to train a model is inefficient, adding an AI endpoint device at the motor and training at the edge allows easy scaling, because systems age differently from machine to machine, even when they are the same model.
  • In robotics, over time, the joints of robotic arms tend to wear out, misalign, and no longer function as required. Retuning controls at the edge without human intervention reduces the need to call in a professional, reduces downtime, and saves time and money.
  • For facial recognition applications, a new user would normally need to add their face to the data set and retrain the model in the cloud. With just a few snapshots of a person's face, a neuromorphic component can instead identify the end user through on-edge learning. In this way the user's data stays secure on the device and an optimal user experience is guaranteed. This can be used in cars, where different drivers have different preferences for seating position, climate control, etc.
  • For keyword detection applications, being able to add further words that the device should recognize at the edge is essential. This can be used in biometric applications where a person adds a 'codeword' that they wish to keep secure on the device.
The balance between the extremely low power consumption of neuromorphic endpoint systems and their improved processing power makes them suitable for applications with prolonged battery operation. They can execute algorithms that are not possible on other low-power components because of those components' limited computing power; conversely, high-end components that offer similar computing power are too energy-intensive.

Possible use cases include: smartwatches that monitor and process data at the endpoint and send only relevant information to the cloud; smart camera sensors that detect people in order to execute a logical command, for example automatically opening a door when a person approaches, where current technology relies on proximity sensors; areas without connectivity or charging facilities, e.g. forests, to intelligently track animals, or undersea pipelines, to monitor for possible cracks using real-time vibration, image and sound data; and infrastructure monitoring, where a neuromorphic MCU can continuously monitor (via images) movement, vibration and structural changes in bridges to detect potential damage. (me)
 

gex

Regular
homer-simpson-homer.gif
 

alwaysgreen

Top 20
I wish we were still on the big bank index!

 

jk6199

Regular
Santa Clara, California, November 1 at 10 a.m. PDT.

Anil's presentation is overnight, our time.
 
Fwiw my previous musings about the M33 product possibility. Who knows.




 

Iseki

Regular
For some reason I feel like this is massive and in favour of Brainchip! What do we think, people? I was also not aware of a feud between Qualcomm and Arm. This was liked by Kimberly Vaupen, and she commented thanks for sharing.
It's huge.

So "no more external NPUs" can only mean that if you incorporate Arm IP in your chip design and you want to include neural processing, then you must use Arm's neural IP. Does this include neuromorphic?

It could be great for us if it locks out NVIDIA, Intel, Graphcore etc., all the companies that do not sell their IP.

Maybe it won't limit us, as we are already in their ecosystem... some sort of grandfather clause.

The Akida chip includes an Arm CPU for configuration, and we're selling the Akida chip IP for others to put inside their Arm-licensed chips. So where do we stand?
 