BRN Discussion Ongoing

Baisyet

Regular
Hi IDD,

I wake every morning (so far) with the daily expectation of a sales announcement.

I read somewhere that hallucinations are the result of "overfitting", which I think is like second-guessing. I think it has to do with a mismatch in the NN configuration (and possibly backpropagation?).

Since Akida does not use backpropagation, it may be more resistant to hallucinations (if my suppositions are correct).

For a second, I wondered about hallucinations for speech, and thought probably not because it is a much smaller model, but then there are homonyms.
@Diogenese I am the same each morning; since the first Akida 1000 I've been waking up for news :-( . Every time I want to get out I buy more, lol, don't know why. Since 2022 I've been on and off work due to personal health, and I'm sitting at home after two surgeries at the moment. It frustrates me at times, as this is my biggest investment; I sold other shares to buy BRN. I don't know when we are going to hear the good news and realize our dreams.
 
  • Like
  • Love
  • Fire
Reactions: 36 users

Diogenese

Top 20
@Diogenese I am the same each morning; since the first Akida 1000 I've been waking up for news :-( . Every time I want to get out I buy more, lol, don't know why. Since 2022 I've been on and off work due to personal health, and I'm sitting at home after two surgeries at the moment. It frustrates me at times, as this is my biggest investment; I sold other shares to buy BRN. I don't know when we are going to hear the good news and realize our dreams.
Obviously it's tomorrow!
 
  • Haha
  • Like
  • Love
Reactions: 25 users

Tothemoon24

Top 20


Beyond Traditional ANNs: Exploring Spiking Neural Networks for Image Recognition

Frank Morales Aguilera
·
9 hours ago

Frank Morales Aguilera, BEng, MEng, SMIEEE

Boeing Associate Technical Fellow /Engineer /Scientist /Inventor /Cloud Solution Architect /Software Developer /@ Boeing Global Services

Introduction

Inspired by the remarkable efficiency and complexity of the human brain, researchers in artificial intelligence are constantly seeking new ways to mimic its capabilities. While traditional artificial neural networks (ANNs) have achieved significant success in various domains, they often struggle to replicate the brain's energy efficiency and its inherent ability to process temporal information. This limitation has led to the exploration of spiking neural networks (SNNs), a more biologically plausible approach that transmits information as discrete spikes, mirroring the fundamental communication mechanism of biological neurons [1]. This article delves into the implementation and performance of an SNN for handwritten digit recognition on the MNIST dataset, built with the 'snnTorch' library, a convenient framework for constructing and training SNNs in Python. This library, together with the computational power of a GPU, enables efficient and robust image classification.

Method

The SNN model implemented in this study utilizes a simple yet effective architecture consisting of two fully connected layers. The input layer receives a flattened representation of a 28x28 grayscale image from the MNIST dataset, resulting in 784 input features. These features are fed into the first fully connected layer comprising 1000 neurons. A crucial aspect of this SNN is the incorporation of the Leaky Integrate-and-Fire (LIF) neuron model [2]. Positioned between the fully connected layers, the LIF neurons introduce non-linearity and temporal dynamics to the network. These neurons integrate incoming spikes over time, and when their membrane potential reaches a certain threshold, they fire a spike, transmitting information to the next layer.
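
For concreteness, here is a minimal sketch of the two-layer architecture described above, using snnTorch's Leaky neuron as the LIF model. The layer sizes (784 -> 1000 -> 10) follow the article; the membrane decay rate (beta) and the number of simulation time steps are assumed values, since the article does not state them.

    import torch
    import torch.nn as nn
    import snntorch as snn

    class SpikingMNISTNet(nn.Module):
        def __init__(self, num_steps=25, beta=0.9):
            super().__init__()
            self.num_steps = num_steps        # simulation time steps per image (assumed)
            self.fc1 = nn.Linear(784, 1000)   # flattened 28x28 input -> 1000 hidden neurons
            self.lif1 = snn.Leaky(beta=beta)  # LIF neurons between the two layers
            self.fc2 = nn.Linear(1000, 10)    # hidden -> 10 output classes (digits 0-9)
            self.lif2 = snn.Leaky(beta=beta)

        def forward(self, spike_input):
            # spike_input has shape [num_steps, batch, 784]
            mem1 = self.lif1.init_leaky()     # reset membrane potentials
            mem2 = self.lif2.init_leaky()
            out_spikes = []
            for step in range(self.num_steps):
                spk1, mem1 = self.lif1(self.fc1(spike_input[step]), mem1)
                spk2, mem2 = self.lif2(self.fc2(spk1), mem2)
                out_spikes.append(spk2)
            return torch.stack(out_spikes)    # [num_steps, batch, 10] output spike train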

One of snnTorch's key functions, spikegen.rate, is employed in this implementation to convert the static pixel values of the input images into spike trains. This conversion is essential for enabling the SNN to process information in a spike-based manner. The network's output is also a spike train, which is averaged over time to obtain a firing rate for each of the ten output neurons, representing the ten possible digit classes (0 to 9).
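
A hedged sketch of that encoding and decoding step follows; num_steps=25 and the random stand-in tensors are illustrative assumptions, not values from the article.

    import torch
    from snntorch import spikegen

    # Rate coding: pixel intensities in [0, 1] become per-step Bernoulli spike probabilities.
    images = torch.rand(128, 784)                       # stand-in batch of flattened images
    spike_trains = spikegen.rate(images, num_steps=25)  # shape [25, 128, 784]

    # Decoding: average each output neuron's spike train over time to get a firing rate,
    # then take the most active of the ten output neurons as the predicted digit.
    out_spikes = torch.rand(25, 128, 10).round()        # stand-in network output
    firing_rates = out_spikes.mean(dim=0)               # shape [128, 10]
    predictions = firing_rates.argmax(dim=1)            # predicted class per image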

To train the SNN, the Adam optimizer adjusts the network's weights iteratively. The optimization process aims to minimize the cross-entropy loss, a common loss function for classification tasks. The training proceeds for a predefined number of epochs, each involving multiple iterations over batches of training data. Utilizing a CUDA-enabled GPU significantly accelerates the training process, enabling efficient exploration of the SNN's parameter space.
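
Continuing the sketches above, a minimal training loop of this shape might look as follows. The learning rate and num_steps are assumed values, and train_loader is a standard MNIST DataLoader (see the loader sketch in the case study below).

    import torch
    import torch.nn.functional as F
    from snntorch import spikegen

    # Assumes SpikingMNISTNet from the sketch above and a DataLoader `train_loader`
    # yielding (image, label) MNIST batches.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    net = SpikingMNISTNet(num_steps=25).to(device)
    optimizer = torch.optim.Adam(net.parameters(), lr=5e-4)   # assumed learning rate

    for epoch in range(5):                                    # five epochs, per the Results section
        for images, labels in train_loader:
            images = images.view(images.size(0), -1).to(device)   # flatten to [batch, 784]
            labels = labels.to(device)
            spikes_in = spikegen.rate(images, num_steps=25)       # encode to spike trains
            spikes_out = net(spikes_in)                           # [25, batch, 10]
            # Rate-based readout: cross-entropy on the time-summed output spikes.
            loss = F.cross_entropy(spikes_out.sum(dim=0), labels)
            optimizer.zero_grad()
            loss.backward()    # backprop through time via snnTorch's surrogate gradients
            optimizer.step()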

Applications

While this study focuses on the MNIST dataset as a benchmark for evaluating the SNN’s performance in image recognition, the potential applications of SNNs are vast and exciting. Their inherent ability to process temporal information makes them particularly well-suited for tasks involving sequential data, such as:

  • Robotics and Control Systems: SNNs can process sensory information from robots in real-time, enabling them to react quickly and efficiently to dynamic environments.
  • Neuromorphic Computing: SNNs are a cornerstone of neuromorphic computing, which aims to develop hardware and software that mimic the brain’s architecture and function [1]. This field promises to create highly energy-efficient and robust computing systems.
  • Event-Based Vision: Traditional cameras capture images at a fixed frame rate, regardless of any motion in the scene. On the other hand, event-based cameras only generate data when there is a change in the scene, making them ideal for surveillance and autonomous driving applications. SNNs can effectively process the sparse and asynchronous data produced by these cameras.
  • Time Series Analysis: SNNs can analyze and predict time series data, such as financial markets, weather patterns, and biological signals.

Case Study: MNIST Digit Recognition

The MNIST dataset, comprising 60,000 training images and 10,000 test images of handwritten digits, serves as an ideal benchmark for evaluating the performance of the implemented SNN. The code provided utilizes the snnTorch library to construct and train the network, leveraging the computational power of a GPU to accelerate the training process. The choice of the LIF neuron model is motivated by its biological plausibility and computational efficiency [2]. The spikegen.rate function effectively converts the static image data into dynamic spike trains, enabling the SNN to process information in a more brain-like manner.
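
For reference, a standard way to obtain those train and test splits is shown below; torchvision is an assumed dependency, as the article does not show its data-loading code.

    import torch
    from torchvision import datasets, transforms

    transform = transforms.ToTensor()   # scales pixels to [0, 1], suitable for rate coding
    train_set = datasets.MNIST(root="data", train=True, download=True, transform=transform)   # 60,000 images
    test_set = datasets.MNIST(root="data", train=False, download=True, transform=transform)   # 10,000 images
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
    test_loader = torch.utils.data.DataLoader(test_set, batch_size=128, shuffle=False)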

The training involves feeding the network batches of images and their corresponding labels, and adjusting the network's weights using the Adam optimizer to minimize the cross-entropy loss. This iterative process allows the network to gradually improve its ability to classify digits correctly. The network's output is another spike train, which is then decoded to determine the predicted digit.

One specific challenge encountered in training this SNN was the selection of appropriate hyperparameters, such as the learning rate and the number of epochs. Experimentation with different values was necessary to achieve optimal performance. Additionally, converting the static image data to spike trains with the spikegen.rate function required careful consideration to ensure that the temporal information encoded in the spikes was meaningful for the network.

Results

The SNN trained in this study demonstrates promising results on the MNIST digit recognition task. After five epochs of training, the network achieves an accuracy of 97.28% on the test set. This high accuracy indicates that the SNN effectively learns to extract relevant features from the input spike trains and classify digits correctly. Using a GPU significantly reduces the training time, enabling rapid experimentation and optimization of the network's parameters. These results support the effectiveness of the SNN approach.
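
A sketch of how such a test-set accuracy could be measured, continuing the sketches above; the 97.28% figure is the article's reported result, not something this snippet reproduces.

    import torch
    from snntorch import spikegen

    # Assumes `net`, `device` and `test_loader` from the earlier sketches.
    net.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            images = images.view(images.size(0), -1).to(device)
            rates = net(spikegen.rate(images, num_steps=25)).mean(dim=0)  # firing rate per class
            correct += (rates.argmax(dim=1).cpu() == labels).sum().item()
            total += labels.size(0)
    print(f"Test accuracy: {100 * correct / total:.2f}%")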

Conclusion

The successful implementation and evaluation of this SNN for digit recognition underscore the potential of this biologically inspired approach for solving complex problems. SNNs offer a compelling alternative to traditional ANNs, particularly for tasks that involve temporal data and demand energy efficiency. As research in this field progresses and more sophisticated SNN architectures and training algorithms are developed, we can anticipate SNNs playing an increasingly pivotal role in the future of artificial intelligence.
 
  • Like
  • Fire
  • Love
Reactions: 23 users

itsol4605

Regular
@Diogenese I am the same each morning; since the first Akida 1000 I've been waking up for news :-( . Every time I want to get out I buy more, lol, don't know why. Since 2022 I've been on and off work due to personal health, and I'm sitting at home after two surgeries at the moment. It frustrates me at times, as this is my biggest investment; I sold other shares to buy BRN. I don't know when we are going to hear the good news and realize our dreams.
Selling other successful shares to buy BRN shares. Really??
 
  • Like
Reactions: 1 users

itsol4605

Regular

Beyond Traditional ANNs: Exploring Spiking Neural Networks for Image Recognition

Frank Morales Aguilera, BEng, MEng, SMIEEE

…
Nice! But with Akida?? No way!!
 
  • Thinking
Reactions: 1 users

TECH

Regular
Gidday All,

Let's rewind the clock a little. Remember back in the early days, when Peter was walking along a lonely, desolate track and came across a fork in it. He read the sign on an old piece of bark, which said "left: meet the Von" and "right: meet your future".

Despite not being too keen on being "spiked", our founder chose the right path (so to speak).

Then came a delay of around 12 months, as Anil and his hardware engineers agreed to take on board ideas from our EAPs in an effort to bridge the gap between visionary chip architecture and the von Neumann approach. Borne out of this delay was an absolute masterclass design add... the CNN2SNN conversion tool.

To continue streamlining our software approach, we made Akida compatible with TensorFlow, PyTorch and ONNX, while all along developing our own in-house software tool, MetaTF.

Akida 1.5 and Akida 2.0 have both had input from our EAPs, in that we (BrainChip) have made discrete additions to the architecture, in essence providing the key to unlock the gateway to the products of customers unknown, current or future.

Now zoom ahead to our latest product offering.... AKIDA PICO.... this has been brought about because "customers had asked us to produce this problem-solving technology".

Notice a pattern here!.... just like the VVDN/BrainChip Edge AI Box. As you all know, this product has been rather slow in forthcoming, but it's available now, though I'd suggest in small quantities, because our webpage states a delivery time of 10-12 weeks.

Finally, products MUST be linked to all this co-operation that the BrainChip management/engineering staff have been providing, for what currently appears to be no financial return to date.

Lock them in, Sean..... they need to sign and finally commit.

Tech "I Know Nothing" :ROFLMAO::ROFLMAO: (Perth)

VVDN's webpage now offers a nice product download for the BrainChip Edge AI Box, so movement has been heading in a positive direction, as in getting our name out there.
 
  • Like
  • Love
  • Thinking
Reactions: 41 users

Getupthere

Regular
Gidday All,

Let's rewind the clock a little. Remember back in the early days…
The alarming issue is that the EAPs gave their input for 1.5 and 2.0 over two years ago and there are still no IP signings.
 
  • Like
  • Fire
  • Thinking
Reactions: 11 users

Diogenese

Top 20
Gidday All,

Let's rewind the clock a little. Remember back in the early days…
The VVDN Akida Edge Box is quite a powerful device.

https://www.vvdntech.com/vision/akida-edge-ai-box

It has a quad-core Cortex-A53 SoC with an NPU, as well as two Akida 1000 NN SoCs.

While it only has 32 GB on board, there is a micro SD slot.

I think that the Akida NNs would make the NPU redundant, although I suppose that having the two different types of ML inference processors does give the Edge Box the ability to interface with different types of inputs?

Making sure the two systems did not trip over each other could explain some of the delay.
 
  • Like
  • Fire
  • Love
Reactions: 28 users

Baisyet

Regular
  • Like
Reactions: 6 users

7für7

Top 20
Obviously it's tomorrow!
It will always be tomorrow until it's finally today.
 
  • Like
Reactions: 5 users

BrainShit

Regular
Nice! But with Akida?? No way!!

Right, there's no evidence that we're partnering with Boeing. But Akida is well suited for image recognition.


Has anyone heard about these engagements since May 2023? Although there is still time until 2026/2027 to see a real product, according to typical development cycles.

● Intellisense Systems Inc. chose BrainChip’s neuromorphic technology to improve the cognitive communication capabilities on size, weight, and power (SWaP) constrained platforms (such as spacecraft and robotics) for commercial and government markets.

● The partnership with Teksun demonstrates and proliferates BrainChip’s technology through Teksun product development channels, impacting the next generation of intelligent vehicles, smart homes, medicine, and industrial IoT.

● BrainChip’s partnership with emotion3D enables an ultra-low-power working environment with on-chip learning to make driving safer and enable next-level user experience.

● AI Labs Inc.: both companies are collaborating on next-generation application development, leveraging the Minsky™ AI Engine in a cost-effective, compelling solution to real-world problems.
 
  • Like
  • Love
  • Thinking
Reactions: 35 users

7für7

Top 20
Nice! But with Akida?? No way!!
How negative can someone be… that's so sad… you're finally going on ignore… every time, you bring everything into a negative light. And when the share skyrockets some day, you'll pretend "I always knew the potential of Akida… I was always holding strong… congratulations to all BrainChip investors." I know people like you… waste of time to read you! Bye
 
  • Like
  • Haha
Reactions: 7 users

itsol4605

Regular
Right, there's no evidence that we're partnering with Boeing. But Akida is well suited for image recognition…
Where is the revenue?
 
  • Sad
Reactions: 1 users

Rach2512

Regular
Where is the revenue?


I'm thinking revenue will come when products start hitting the shelves, and that it will be a larger slice of the pie rather than a smaller portion for IP upfront. I hope that makes sense. That's my thinking, and I hope I'm right. 🙏 If this is the case I'm happy to wait; I've waited a very long time, and what's another year, apart from another year to accumulate more!
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

BOOOOOM, great start to kick the week off.

$0.255 AU with a bit of back pressure.

Regards,
Esq
 
  • Like
  • Love
  • Thinking
Reactions: 20 users

itsol4605

Regular
Good Morning Chippers,

BOOOOOM, great start to kick the week off.

$0.255 AU with a bit of back pressure.

Regards,
Esq
Can you explain why?
 
  • Thinking
Reactions: 1 users

Esq.111

Fascinatingly Intuitive.
Morning itsol4605,

Can feel it in my loins.

Regards,
Esq.
 
  • Haha
  • Like
  • Love
Reactions: 14 users

Quercuskid

Regular
  • Haha
  • Thinking
Reactions: 8 users

Esq.111

Fascinatingly Intuitive.
Well, someone or something is moving a few shares around this morning:

7.3 million shares, roughly $1.8 million AU in the first 27 minutes of trade.

Esq.
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users
Good Morning Chippers,

BOOOOOM, great start to kick the week off.

$0.255 AU with a bit of back pressure.

Regards,
Esq
Yeah, looks like a good start to the week.

3,755,088 shares traded within the first half hour, across 170 trades so far, meaning a touch over 22k units per trade. Generally good, healthy trades so far.

Also, twice as many buyers as sellers lining up on the sidelines; looking positive.
 
  • Like
  • Love
  • Fire
Reactions: 7 users