BRN Discussion Ongoing

Terroni2105

Founding Member


1648641797455.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 46 users

rayzor

Regular
In-person poster presentation at the ML Summit, Tuesday 29 March 2022 (poster below).
Do with it what you will, it's over my head. DYOR

Document listed in the poster

Overview


The Akida Neural Processor

BrainChip’s Akida integrated circuit technology is an ultra-low-power, high-performance, minimum-memory-footprint, event-domain neural processor targeting Edge AI applications. In addition, because the architecture is based upon an event-domain processor, leveraging fundamental principles from biological SNNs, the processor supports incremental learning. This allows a deeply trained network to continue to learn new classifiers without requiring a retraining process. Due to the highly optimized architecture, the Akida Neural Processor eliminates the need for a CPU to run the neural network algorithm and, in most cases, eliminates the need for a DRAM external to the neural fabric. The elimination of external devices makes the Akida solution significantly more power-efficient than deep learning accelerators, which require both an external CPU and memory.
Built around a mesh-connected array of neural processor units (NPUs), the architecture is highly scalable to meet the needs of a wide range of applications. The uniqueness of the BrainChip Akida Architecture lies in the ability of the hardware to run traditional feedforward, deeply learned CNN networks as well as native SNN networks. This documentation provides examples of how to develop both classes of solutions, using industry-standard tool flows and networks, to solve a variety of application problems such as vision, acoustics and cybersecurity, amongst others.
The Akida neural processor is available both as an Intellectual Property (IP) circuit design for integration into ASIC products and as a System on a Chip (SoC) product.
As Figure 1 shows, the SoC is built around a core neural processor comprising 80 neural processing units; it includes a conversion complex and allows one to run popular convolutional neural networks (CNNs) such as MobileNet 1. Designers can use the Akida SoC to run industry-standard CNNs, dramatically reducing power by changing convolutions to event-based computations, or run native SNN solutions.
Figure 1. BrainChip Akida processor
The Akida chip includes several key features that differentiate it from other neural network processors and deep learning accelerators. These are:
  • Event-based computing leveraging inherent data and activation sparsity
  • Fully configurable neural processing cores, supporting convolutional, separable-convolutional, pooling and fully connected layers
  • Incremental learning after off-line training
  • On-chip few-shot training
  • Configurable number of NPUs
  • Programmable data to event converter
  • Event-based NPU engines running on a single clock
  • Configurable on-chip SRAM memory
  • Runs full neural networks in hardware
  • On-chip communication via mesh network
  • On-chip learning in the event domain
  • Process-technology-independent platform
  • Network size customizable to application needs
  • IP deliverables include: RTL, dev tools, test suites and documentation
Figure 2 shows several examples of IP configurations that could be envisioned. Because the architecture is based upon a neural processing unit that is arrayed and mesh-connected, the number of NPUs per solution depends upon the application's needs.
Figure 2. Akida IP example configurations

The Akida Neuromorphic ML Framework

The Akida Neuromorphic ML Framework (MetaTF) relies on a high-level neural networks API, written in Python, and largely inspired by the Keras API.
The core data structure used by the Akida runtime is a neural network model, which itself is a linear stack of layers.
MetaTF leverages the TensorFlow framework, and the BrainChip tools are installed from PyPI. The major difference from other machine learning frameworks is that the data exchanged between layers is not the usual dense multidimensional arrays, but sets of spatially organized events that can be modelled as sparse multidimensional arrays.
Throughout this documentation, those events will often be referred to as “spikes”, due to their close similarity with the signals exchanged by biological neurons.
Figure 3. Akida MetaTF ML Framework
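To make the “linear stack of layers” idea concrete, here is a minimal sketch of how a tiny model might be assembled with the Akida package's sequential-style API. It is only an illustration: the layer names and parameters shown (InputData, FullyConnected, input_shape, units) are assumptions based on the description above and on the Keras-like style of MetaTF, and they may differ between MetaTF releases, so check the official documentation for the exact signatures.

# Minimal sketch only; class and argument names are assumed, verify against your MetaTF version.
# The tools are installed from PyPI, e.g.:  pip install akida cnn2snn akida-models
from akida import Model, InputData, FullyConnected

model = Model()                                  # an Akida model is a linear stack of layers
model.add(InputData(input_shape=(28, 28, 1)))    # event (spike) input, e.g. 28x28 single-channel frames
model.add(FullyConnected(units=10))              # a fully connected classification layer
model.summary()                                  # prints the layer stack, much like Keras

Because the package ships with a software backend that simulates the NSoC, a model like this can be built and exercised on an ordinary workstation without any Akida hardware attached.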
The MetaTF ML framework comprises three main python packages:
  • the Akida python package is an interface to the BrainChip Akida Neuromorphic System-on-Chip (NSoC). To allow the development of Akida models without actual Akida hardware, it includes a runtime, a Hardware Abstraction Layer (HAL) and a software backend that simulates the Akida NSoC (see Figure 4 and Figure 5).
  • the CNN2SNN tool provides the means to convert Convolutional Neural Networks (CNNs) that were trained using deep learning methods into event-domain, low-latency and low-power networks for use with the Akida runtime (a minimal conversion sketch follows Figure 5).
  • the Akida model zoo contains pre-created network models built with the Akida sequential API and the CNN2SNN tool using quantized Keras models.
Figure 4. Akida python package
Figure 5. Akida runtime configurations

The Akida examples

The examples section comprises a zoo of event-based CNN and SNN tutorials. One can check model performance against the MNIST, ImageNet and Google Speech Commands (KWS) datasets.
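As a rough illustration of how such a check might look, the sketch below loads a pre-created MNIST model from the zoo, converts it, and scores it on a handful of test images using the NSoC simulator. The helper names used here (akida_models.gxnor_mnist_pretrained, predict_classes) are assumptions based on the zoo and runtime descriptions above; substitute the actual functions exposed by the installed packages.

# Hedged sketch: score a zoo model on MNIST with the software backend.
# Helper and method names below are assumed; consult the model zoo documentation.
import numpy as np
from tensorflow.keras.datasets import mnist
from cnn2snn import convert
from akida_models import gxnor_mnist_pretrained    # assumed name of a pre-trained MNIST zoo model

(_, _), (x_test, y_test) = mnist.load_data()
x_test = x_test.reshape((-1, 28, 28, 1)).astype(np.uint8)

akida_model = convert(gxnor_mnist_pretrained())    # quantized Keras zoo model -> Akida model
preds = akida_model.predict_classes(x_test[:100])  # method name assumed; older releases used predict()
accuracy = float(np.mean(preds == y_test[:100]))
print(f"Accuracy on 100 MNIST test images: {accuracy:.2%}")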
Note
While the Akida examples are provided under an Apache License 2.0, the underlying Akida library is proprietary.
Please refer to the End User License Agreement for terms and conditions.

1
In most cases the entire network can be accommodated using the on-chip SRAM. Even the large MobileNet network used to classify 1000 classes of ImageNet fits comfortably.

© Copyright 2022, BrainChip Holdings Ltd. All Rights Reserved.

 

Attachments

  • Carlson-Kristofer-Brainchip.pdf
    1.5 MB · Views: 149
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 36 users

TechGirl

Founding Member
Good to see this ISL story is still getting around

https://inf.news/en/science/791bf988657af0e134c475ca748067c6.html


BrainChip selected by U.S. Air Force Research Laboratory to develop AI-based radar

2022-03-30 22:12 HKT

BrainChip, the world's first commercial producer of neuromorphic artificial intelligence chips and IP, today announced that Information Systems Laboratories (ISL) is developing AI-based radar research solutions for the U.S. Air Force Research Laboratory (AFRL) based on its Akida™ neural network processor.

ISL is a specialist in expert research and complex analysis, software and systems engineering, advanced hardware design and development, and high-quality manufacturing for a variety of clients worldwide.

ISL focuses on areas such as advanced signal processing, space exploration, subsea technology, surveillance and tracking, cybersecurity, advanced radar systems and energy independence. As a member of the BrainChip Early Access Program (EAP), ISL gets access to Akida evaluation boards, software and hardware support, and dedicated engineering resources.

"As part of BrainChip's EAP, we had the opportunity to directly assess the capabilities Akida offers to the AI ecosystem," said Jamie Bergin, Senior Vice President, Research, Development and Engineering Solutions Manager at ISL.


BrainChip brings AI to the edge in ways not possible with existing technologies. Akida processors feature ultra-low power consumption and high performance to support the development of edge AI technologies by using neuromorphic architecture, a type of artificial intelligence inspired by the biology of the human brain. BrainChip's EAP program provides partners with the ability to realize significant benefits of power consumption, design flexibility and true learning at the edge.

"ISL's decision to use Akida and Edge-based learning as a tool to incorporate into their research and engineering solutions portfolio is in large part due to the go-to-market advantages our innovation capabilities and production-ready status provide ” said Sean Hehir, CEO of BrainChip, “We are delighted to be a partner of AFRL and ISL on edge AI and machine learning. We believe the combination of technologies will help accelerate the deployment of AI in the field.”

Akida is currently licensed as IP and is also available to order in chip form. It focuses on low power consumption and high performance, supports sensory processing, and is suitable for applications that benefit from artificial intelligence, such as smart healthcare, smart cities, smart transportation and smart homes.
 
  • Like
  • Fire
  • Love
Reactions: 61 users

Slade

Top 20


Ok, this next BrainChip podcast with Ian Drew intrigues me. This is the chairman of Foundries.io, a company that partners with other tech companies to provide edge solutions. He is speaking with BrainChip about next-generation IoT technology. Akida is next-generation IoT technology. How do you get through a podcast like this without speaking about Akida???? And why hasn't Foundries.io already partnered with BrainChip as they have with the companies listed below? This will be interesting!!!


1648650577457.png
 
  • Like
  • Fire
  • Wow
Reactions: 46 users

Krustor

Regular
Found by user Woody_8 from the German BRN-community:


A new graphic illustration of a car. It definitely looks like a Dodge Charger, especially the illustration of the doors.

Dodge is owned by Stellantis, the fourth biggest car manufacturer worldwide. This would be huge.
 
  • Like
  • Fire
  • Wow
Reactions: 41 users
A simple-to-understand explanation as to why AKIDA cortical columns will blow the semiconductor market's socks off.
My opinion only DYOR
FF

AKIDA BALLISTA

  • Metastable dynamics of neural circuits and networks


Applied Physics Reviews 9, 011313 (2022); https://doi.org/10.1063/5.0062603
B. A. W. Brinkman, H. Yan, A. Maffei, I. M. Park, A. Fontanini, J. Wang, and G. La Camera



ABSTRACT
Cortical neurons emit seemingly erratic trains of action potentials or “spikes,” and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that is best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed “metastable” and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
 
  • Like
  • Wow
  • Thinking
Reactions: 34 users

IloveLamp

Top 20
I am trying to remember a dot thrown up by someone as far back as 2019 or earlier. It involved an idea that had AKIDA sitting on modems doing cybersecurity before data even made it into the box and Cisco was somewhere in the picture.

The above is a hazy recollection so be flexible in how you think about it but hopefully it might trigger the memory of the person who found the dot back then.

My hazy memory only DYOR
FF

AKIDA BALLISTA
This could be the telecommunications dot that was being discussed; speculation only, but Rob seems to like it.
.
.

 

Attachments

  • Screenshot_20220331-074913_LinkedIn.jpg
    388.6 KB · Views: 87
  • Like
  • Thinking
  • Fire
Reactions: 13 users
(Quoting the “Metastable dynamics of neural circuits and networks” post above.)
This is the link to the above paper:


It might be useful to newer investors to understand the Brainchip roadmap as far as technology advancement is concerned.

AKD1000, the first in the series, despite running on virtually no power and being the only standalone chip in the world capable of one-shot, few-shot and incremental learning, as well as being able to process all five senses plus radar, lidar and ultrasound, is likely to be seen as unimpressive when AKD2000, incorporating LSTM, is released, most likely at the end of this year.

Why will AKD2000 be seen in this light? Well that is because what it will be capable of has been benchmarked by none other than Elon Musk. In an interview Elon Musk spoke to the problem of having compute capable of extrapolating that a plastic bag blowing across the road ahead does not require emergency braking or avoidance action. The plastic bag is a symbol of the present problem.

The ability of compute to extrapolate and decide across a vast range of similar problems (like the plastic bag, or that a cricket ball rolling onto the roadway from between parked cars could be followed by a child, whereas the same ball rolling onto the roadway in other circumstances would not) is a holy grail for developers of autonomous robots such as AVs.

Peter van der Made has spoken to the intention that AKD2000 with LSTM will resolve this compute issue.

Coincidentally, though some thought not, Peter van der Made actually spoke of the plastic bag problem being solved by AKD2000.

This massive achievement, however, will be dwarfed by the AKD3000/4000/5000 series, whichever one it is that incorporates the cortical column.

In May last year Brainchip announced that Emeritus Professor Alan Harvey was joining the Brainchip Scientific Advisory Board. Professor Harvey has studied and published on the brain and cortical columns for many years.

His appointment came a little more than a month after Peter van der Made stated he and his team had a working cortical column on the bench at the Brainchip Perth Research Centre.

When Brainchip announces the generation of AKIDA technology chip that uses their cortical column, capable of all the current technology features plus ‘perception, memory, expectation or decision making’, it will be a day marked down in history as the start of a new technology age.

Rob Telson is correct to say Brainchip is only just getting started; we are at the tip of the iceberg and these are exciting times.

Brainchip is on a trajectory to success that if achieved will be unlike anything ever seen before in the commercial and scientific world.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 89 users

Yak52

Regular
Taken from the Brainchip website -KEY FEATURES

BRAINCHIP AKIDA Key Features.jpg


Yak52
 
  • Like
  • Fire
  • Love
Reactions: 55 users

Slymeat

Move on, nothing to see.
(Quoting the “Metastable dynamics of neural circuits and networks” post above.)
An interesting article indeed, but by no stretch of the imagination can it be called a “simple read”.

I assume using that word was somewhat tongue-in-cheek, but I often fail to pick up on sarcasm and hence I gave it a quick read. I admit to not reading all of it; I did skim over the quantum physics content and mathematics as I didn't need a refresher course.

But firstly, thanks for sharing it, it did cause me to stimulate my mind a little more than normal this morning.

Most importantly, I do hope Brainchip comes up with a method of modelling cortical columns, as that WILL take use cases to the next level and will, as @FactFinder most correctly stated, “blow the semiconductor market's socks off”.

My following response is merely an alternate view formed by a casual read of the referenced material. I don‘t pretend to know much about the subject matter nor do I intend to discredit the author or anyone for sharing it. Well maybe the author(s) of the scientific paper, slightly—for the way the information is presented.

And I apologise in advance if I am completely off the mark. As I said—it was not a “simple read” and I only skimmed it.

IMHO, the referenced article is quite a difficult read, not from a content stance, but rather because of the way it is presented. I feel it really doesn't need to delve into quantum physics and electron spin theory, nor a lot of the mathematics. The metastable dynamics involved in quantum physics is a completely different, and far more complex, concept; I doubt it is at play in brain cells. For one thing, brain cells are far too big for quantum effects to be at play.

Even 28 nm electronic circuits may be too large, though they are closer than a brain cell to the orders of magnitude required for quantum effects to have an impact.

Using quantum physics to explain the concept is probably using too big a sledgehammer and only alienates many potential readers. But that may be the entire point of the author(s) too!

I saw a lot of the content as the “I'm so smart” padding that normally pervades scientific papers. I've read quite a few in the past. Unfortunately they are often judged as less useful if they use an economy of words, or if they are written so a lay person can understand them. “Judged by the weight of the paper” is a term that is often used.

Academia certainly has a weird way of creating a protectionist bubble that excludes a lot of open discussion. Somewhat like job protection by obfuscation in some ways. Just my opinion!

A simple read would have been: the postulated role of cortical columns (or actually metastable dynamics, the true subject of the article) in preconception and thinking.

It seems they could physically measure brain activity before an event, and it is true that these were fleeting activities that quickly disappeared. Hence calling this metastable dynamics is probably appropriate. My objection is to the postulate that no impetus preceded this activity.

IMHO the author is excluding the holistic approach of the body and the fact that the human brain does not control ALL aspects of the body. Hormones play a VERY significant role, and maybe, in only looking at brain cell stimuli, the study has not appropriately considered hormones as the cause of these seemingly mystical responses in the supposed absence of stimuli.

The human body can also get trained on timing, like getting hungry at three fairly standard times of the day. There's no physiological reason for having three meals, yet most people do.

Then there are many other synaesthesias at play, where one sense seems to autonomously elicit a response in another sense. You’ve all heard the saying “We eat with our eyes” for instance.

Now on to thinking. Do we truly understand what thinking is? I won't go down that rat hole here, but needless to say, we certainly don't know how the brain achieves the act of thinking. And then there's thinking about thinking, or meta-thinking, etc.

Most certainly, every response needs a stimulus. Nothing happens spontaneously in the human brain. Even an unperceived stimulus is still a stimulus. We just have to look harder to notice it.
 
  • Like
  • Fire
  • Love
Reactions: 26 users

HopalongPetrovski

I'm Spartacus!
(Quoting Fact Finder's Akida roadmap post above.)
After weeks of sideways share price motion and the inevitable discussions, cajolings, recriminations and encouragements to hold against attempted mob-mentality manipulation by shorters and other nefarious entities, further aggravated by the "are there? / are there not?" creases-in-tablecloths affair :), we get this gem from our resident Fact Finder extraordinaire. Bravo, Bravis!

 
  • Like
  • Haha
Reactions: 17 users

dippY22

Regular
(Quoting rayzor's Akida documentation post above.)

 
  • Like
Reactions: 5 users

MDhere

Regular
Landed myself in hospital, so if anyone wants to take my place with Dolci, or come along and have a margi for me, Cru at midday.
I will survive and will make it next time @Dolci.
Didn't even get to the accountant today. No one panic, my shares and I are safe and I'm in good hands.

Keep up all the great research and chatter xx MDhere
 
  • Like
  • Love
  • Sad
Reactions: 30 users

dippY22

Regular
(Quoting Fact Finder's Akida roadmap post above.)
Now that is one BOLD statement FF,... and to be more specific, .... " Brainchip is on a trajectory to success that if achieved will be unlike anything ever seen before in the commercial and scientific world."

That's triple espresso bold. Not that I disagree necessarily, but I am less confident on that outcome, let us say. I'd be happy with just the Nasdaq 100.

Fortunately, you left an escape exit by use of a qualifying " if...". Sneaky,.... You sure do craft wonderful positions usually supported with good proof or evidence. Keep up the great work and be as colorful as you wish. Sometimes you crack me up, though,....like in this case.
 
  • Like
  • Haha
Reactions: 7 users

Murphy

Life is not a dress rehearsal!
Landed myself in hospital, so if anyone wants to take my place with Dolci, or come along and have a margi for me, Cru at midday.
I will survive and will make it next time @Dolci.
Didn't even get to the accountant today. No one panic, my shares and I are safe and I'm in good hands.

Keep up all the great research and chatter xx MDhere
Get out of hospital, pronto, MD! Those places will make you sick!! Get well soon Missy.


If you don't have dreams, you can't have dreams come true!
 
  • Like
Reactions: 9 users
Now that is one BOLD statement FF,... and to be more specific, .... " Brainchip is on a trajectory to success that if achieved will be unlike anything ever seen before in the commercial and scientific world."

That's triple espresso bold. Not that I disagree necessarily, but I am less confident on that outcome, let us say. I'd be happy with just the Nasdaq 100.

Fortunately, you left an escape exit by use of a qualifying " if...". Sneaky,.... You sure do craft wonderful positions usually supported with good proof or evidence. Keep up the great work and be as colorful as you wish. Sometimes you crack me up, though,....like in this case.
I was just saying 10 times Microsoft. So don't go too deep into these waters, just take a dip. FF LOL
 
  • Like
  • Haha
Reactions: 10 users
(Quoting Slymeat's reply above.)
I learnt a long time ago that academics proposing to provide a review of an entire area of science are having a lend and just trying to keep up their academic publication numbers, which are apparently important in CVs for promotion in their world.

Poor old Peter van der Made would not get a job based on publications at the moment as he has been in stealth mode for years now.

I only posted the extract as the easy read, as I have searched for ages for someone prepared to encapsulate the outcome of understanding and creating a simulation of a cortical column. As for the maths, I always assume it is correct and peer-reviewed, as I could not check it anyway.

The philosophy of whether we will ever understand the human brain using the human brain to derive an understanding and allowing for the variation in human brains and ongoing evolution is an intellectual quandary that I often mull over and cannot resolve. It does seem the more those engaged in this endeavour learn the less they know.

Personally I think that the human brain is a flawed model upon which to build useful artificial intelligence. The human brain makes mistakes constantly, hence the saying "To err is human, to forgive divine", or something like that. Being human, I am not sure if I am misquoting, but I will quote it nonetheless as I am using a flawed human brain to construct this response.

What use is an intelligent machine that perfectly mimics the human brain, down to the making of inexplicable mistakes and the ego to keep going at it regardless?

In autonomous EVs the push is in large part so that they can remove human drivers and eliminate accidents. Go figure that one out when you read about attempts to emulate the human brain to create truly autonomous vehicles. I am not Einstein, but the logic breaks down, in my opinion.

In short, the artificial general intelligence that Peter van der Made is seeking must at some point diverge from fully replicating the human brain, and as such a complete understanding of the human brain and how it works is unnecessary to this goal, in my opinion. But what would I know; I am human after all, and constantly right and wrong, unless you ask my wife, who will tell you I am constantly wrong unless I agree with her, unless it comes to the law and investment and doing our tax planning.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Fire
Reactions: 34 users

Yak52

Regular
Pump & Dump as usual for BRN.

93c start with Top of $1.05 and low now at 95c.


My bet is another 2.5 mil Shorts were taken out at 98c and have been sold off into the Market to drop the SP from there (98c) @ 2.20pm down to here near the close @ 95c.
Find out maybe tomorrow or Friday if this is the case.
About 4 mil was traded after 2.20pm and there seems to be a big buying expedition at the 95.5c mark, which could mean they have bought back Shorts at 95.5c.
Guess work but seems possible.
Overall Short positions outstanding have dropped significantly.
Tuesday's NEW Shorts were ONLY ...........152,000. :eek: THIS IS THE SMALLEST AMOUNT FOR MONTHS! Normally min 550,000+

Yak52

To continue on from this post on Weds.

Mondays NEW Shorts were only - 150,000
Tuesdays NEW Shorts were only - 152,000

Now Weds NEW Shorts were only - 292,000


SHORTING this week has dramatically dropped off! Weeks prior have seen min of 550,000 to a daily average of 2,000,000 - 2,600,000 Shorts.
Last week by Weds the overall Shorts outstanding had dropped nearly 10 mil from the previous week.
Then they started again with another 10 mil added by Friday.
This Tuesday Total SHORTs had increased to 33 Mil from 21 Mil on the previous Tuesday.
Of these Gross Shorts (31 Mil) there are 28 Mil NOT COVERED! (Net position)

Encouraging that for this week until today (Thurs) virtually no NEW Shorts have been added and existing shorts including last weeks additions are hopefully being covered at this 95.5c level we seem to be holding around.
If so then the last (2) days of trading will have seen hopefully some of these 28 mil Shorts being covered as they were taken out between Weds @ $1.02 and Friday @ $0.93c
Tomorrows ASX Short lists will show if any reduction from Tuesday this week has happened.
One way to cover your risk if Shorting is to buy back but hold onto these shares waiting to see if the SP will drop further before closing the trade.

Overall the BIG Shorting Campaign seems to have ceased. Currently only very small amounts are being added, and the overall impression is that Shorters expect the SP to rise and see shorting as risky in the near future. News or a good 4C being released seem to be the motivators behind this trend. WITH GOOD REASON! lol.
Hey Shorters time to close out those outstanding Shorts before something big breaks!

Yak52
 
  • Like
  • Fire
  • Love
Reactions: 54 users

Cyw

Regular
(Quoting Yak52's shorting update above.)
I don't understand how people can be so brave shorting BRN, as it has been demonstrated before that a positive announcement can make the stock double within a very short period of time.

There are of course smart people who shorted 1M shares to push the price down and then bought 2M cheap. When the announcement is made, they would just cover the shorts and sell the longs for a profit. However, I am not sure you can make money consistently with this strategy.
 
  • Like
  • Fire
Reactions: 8 users