BRN Discussion Ongoing

DK6161

Regular
Order filled, and thank you to all you shorters, and especially @DK6161, for making the price lower than what I was originally going to pay in the cap raise. I have now hit my target of 250k shares ❤️

Well done pom.
Glad you did that and all the best.
Not sure why you think I am a shorter, as most of my comments reflect my confidence with Sean and co.
 
  • Haha
Reactions: 2 users

Kachoo

Regular
I was expecting the price to drop lower if games were being played, as previously mentioned, but I wasn't brave enough to take that gamble and am happy with the few extra shares.
Look, a 1 or 2 penny difference in the buy price is nothing if we achieve what was stated long ago. So Pom, you do what you know is right for you. Congrats! Good luck to us all.
 
  • Like
  • Fire
  • Love
Reactions: 13 users
Well done pom.
Glad you did that and all the best.
Not sure why you think I am a shorter, as most of my comments reflect my confidence with Sean and co.
 
  • Haha
Reactions: 7 users
Well done pom.
Glad you did that and all the best.
Not sure why you think I am a shorter, as most of my comments reflect my confidence with Sean and co.
Looking forward to all your positive posts going forward.
 
  • Haha
Reactions: 5 users

HopalongPetrovski

I'm Spartacus!
Well done pom.
Glad you did that and all the best.
Not sure why you think I am a shorter, as most of my comments reflect my confidence with Sean and co.
Have you recently completely changed your sentiment in regards to the Company?
There seems to be a memory of you being very negative and only recently changing your tune, which of course arouses suspicion.
I went to have a look back at your posting history but it is blocked, apparently by you.
None of this proves anything of course, and we all could be mistaken, or you may have had a personal conversion on the road to Damascus, but this is perhaps the reason your recent Pollyanna postings are being received with some caution.
Unfortunately some scum like T&J and a number of other more recent rodents from the crapper like to hide out in the shadows here, not daring to squeak and reveal themselves, seeking ammunition for their malicious and twisted diatribe.
 
  • Like
  • Fire
  • Haha
Reactions: 22 users

7für7

Top 20
Have you recently completely changed your sentiment in regards to the Company?
There seems to be a memory of you being very negative and only recently changing your tune, which of course arouses suspicion.
I went to have a look back at your posting history but it is blocked, apparently by you.
None of this proves anything of course, and we all could be mistaken, or you may have had a personal conversion on the road to Damascus, but this is perhaps the reason your recent Pollyanna postings are being received with some caution.
Unfortunately some scum like T&J and a number of other more recent rodents from the crapper like to hide out in the shadows here, not daring to squeak and reveal themselves, seeking ammunition for their malicious and twisted diatribe.
Yes, it's the T&J syndrome: suddenly you turn around and say "I always believed in BrainChip… I just wanted to point out some issues and ask some questions… go BrainChip", and two days later it's "😏 nice that our CEO decided to sell his private car… looks like the company is doing well", or something like that.
 
  • Like
  • Haha
Reactions: 3 users
  • Like
  • Love
  • Fire
Reactions: 25 users

manny100

Regular
I just thought I would ask Copilot what AKIDA's limitations are.
Nothing we do not already know. There is room for both cloud and cloudless applications, depending on the task.
SOME THINGS THAT STAND OUT:
Copilot 'understands' that adoption is still developing - see point 1; patience still required.
Point 6 is the same - BrainChip’s visibility in the AI market is relatively low compared to larger players. Awareness and adoption take time.
Again, patience required. My bold above. A rough sketch of the point 4 trade-off follows the list.
Copilot's answers:
  1. Early Deployment: Akida remains in the early stages of deployment within niche markets. While it shows promise, widespread adoption is still evolving.
  2. Complexity and Learning Curve: Implementing Akida requires understanding its unique features, such as event-based processing and on-device learning. Developers need to adapt to these novel paradigms.
  3. Specific Use Cases: Akida excels in specific applications like vision, audio processing, and time series forecasting. However, it may not be the best fit for all AI tasks.
  4. Resource Constraints: Although Akida is efficient, it still requires computational resources. Developers must balance performance with power consumption and memory constraints.
  5. Model Size Reduction: While Akida’s TENNs reduce model size significantly, some complex networks may still pose challenges.
  6. Market Awareness: BrainChip’s visibility in the AI market is relatively low compared to larger players. Awareness and adoption take time.
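
To put point 4 in concrete terms, here is a rough back-of-envelope Python sketch with numbers I've made up (mine, not Copilot's or BrainChip's): storing the same hypothetical 250k-parameter model at lower weight precision shrinks the weight memory roughly linearly, which is the kind of performance-versus-power-and-memory balancing the answer is pointing at.

```python
def weight_memory_kib(num_params: int, bits_per_weight: int) -> float:
    """Memory needed to store the weights alone, in KiB."""
    return num_params * bits_per_weight / 8 / 1024

PARAMS = 250_000  # hypothetical small edge model

for bits in (32, 8, 4, 1):
    print(f"{bits:>2}-bit weights: {weight_memory_kib(PARAMS, bits):7.1f} KiB")

# 32-bit weights:   976.6 KiB
#  8-bit weights:   244.1 KiB
#  4-bit weights:   122.1 KiB
#  1-bit weights:    30.5 KiB
```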
 
  • Like
  • Fire
  • Sad
Reactions: 11 users

Diogenese

Top 20
Thank you @Diogenese

It is a bit difficult to understand. Do I understand correctly that the invalidity of our original patent would not endanger our business as much as the invalidity of the later ones would? What if somebody uses the original patent once it is invalid? What would be the worst that could happen to BrainChip?
Hi Chips,

"Expiry" is a better term than "Invalidity" in patent parlance in this context.

The claim is probably very broad, but, as you suggest, the terminology of the claim would need a lot of unwrapping in court.

A lot of technological water has passed under the bridge between the 2008 patent and the 2018 patent for Akida 1.

I doubt that the expiry of the 2008 patent will significantly affect the value of the patent portfolio, remembering that there are several other patents granted or in the pipeline.

I think the TeNNs patents, when granted, will double the value of the portfolio because they cover a whole different area of the technology, whereas most of the current patents are within the same ballpark.
 
  • Like
  • Fire
  • Love
Reactions: 20 users

manny100

Regular
From the BRN website, and how true:

"The key to growth is effective partnership."
 
  • Like
Reactions: 11 users
  • Haha
Reactions: 13 users
Good to see neuromorphic computing is gaining traction in the UK:


"We expect the centre to:

Draw in all stakeholders

  • engage across and bring together neuromorphic computing, including (but not limited to) the multidisciplinary academic community of computer science, software engineering, semiconductors and hardware and neuroscience
  • create a unified sector voice and common language across the disciplines and across the stack to accelerate innovation
  • bring together the research community, policy makers and industrial partners
  • work internationally to demonstrate the capability of neuromorphic computing research and innovation in the UK

Seed a UK research and innovation programme in neuromorphic computing

  • carry out UK landscaping and road mapping to evaluate the current state and opportunities for the area, which will provide evidence of the benefits and potential of neuromorphic computing to inform stakeholders
  • engage with existing investments across UK Research and Innovation (UKRI), linking current and future investments to add value
  • build a focused research and innovation programme across the full stack to address core research challenges in neuromorphic computing
  • target the core research challenges that will advance the discipline and demonstrate its power
  • the programme of research must be outcomes focused, with clear goals that can be delivered within the timeframe, moving the area forwards

Provide a focal point for neuromorphic computing research in the UK

  • demonstrate the capabilities and potential of neuromorphic computing in real world scenarios and in different sectors, including use cases
  • Interface with technologies that could benefit from neuromorphic computing (such as AI, quantum computing and so on)"
 
  • Like
  • Fire
  • Love
Reactions: 24 users

davidfitz

Regular
Great information on the link you provided :)

LSI for Digital Still Camera (DSC)

MegaChips DSC controllers have been adopted by leading camera and smartphone manufacturers and have gained a high reputation.

Media Processor for Mobile Phone

MegaChips media processors (MPEG4 and H.264 supported products) have been adopted by leading domestic mobile phone and game console manufacturers.

The Japanese mobile phone industry is one of the most advanced in the world. As of March 2022, there were 199.99 million mobile contracts in Japan[1], according to the Ministry of Internal Affairs and Communications. This is 158 percent of Japan's total population.

Of course we already know about the game console.

 
  • Like
  • Fire
  • Love
Reactions: 40 users

Diogenese

Top 20
Hi SC,

MegaChips have some tie-up with Quadric and they are persisting with their tech which uses MACs for motion estimation, image detection and manipulation.

MACs are conventionally used for processing mathematical calculations with high precision. They dot every i and cross every t.

In contrast, Akida is the "big picture" player, taking an overview and making a decision on probability. This makes Akida's SNN far more energy efficient and reduces latency.

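To put some rough numbers on that contrast, here is a toy Python sketch of my own (purely illustrative, with made-up sparsity, and not how Akida, Quadric or any MAC array is actually implemented): a dense MAC pipeline touches every input regardless of whether it carries information, while an event-driven accumulator only does work when a non-zero "spike" arrives, which is where the energy and latency advantage comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: 1,000 sensor values, roughly 95% of them zero (event-style sparsity).
x = rng.random(1000) * (rng.random(1000) < 0.05)
w = rng.random(1000)

# Dense MAC-style accumulation: every element is multiplied and added,
# whether or not it carries any information.
dense_macs = 0
acc_dense = 0.0
for xi, wi in zip(x, w):
    acc_dense += xi * wi
    dense_macs += 1

# Event-driven accumulation: work is only done when an "event" (non-zero
# input) arrives, so the operation count tracks activity, not array size.
event_ops = 0
acc_event = 0.0
for i in np.flatnonzero(x):
    acc_event += x[i] * w[i]
    event_ops += 1

print(f"dense MACs: {dense_macs}")    # always 1000
print(f"event ops:  {event_ops}")     # roughly 50 at 5% activity
print(f"same result: {np.isclose(acc_dense, acc_event)}")
```

On this toy input the event-driven path does about 5% of the work for the same result, which is the intuition behind sparse, event-based processing; the real silicon question is of course far more involved.
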
Now we are told that TeNNs is more efficient than Akida. I have not come to grips with the ins and outs of TeNNs, but, for those wishing to get a better understanding of the background context underlying the impetus to develop TeNNs, the "BACKGROUND" section of the TeNNs patent provides a readily comprehensible explanation of the motivation behind the invention.

WO2023250093A1 METHOD AND SYSTEM FOR IMPLEMENTING TEMPORAL CONVOLUTION IN SPATIOTEMPORAL NEURAL NETWORKS 20220622

[0003] In general, ANNs were initially developed to replicate the behavior of neurons which communicate with each other via electrical signals known as "spikes". The information conveyed by the neurons was initially believed to be mainly encoded in the rate at which the neurons emit these spikes. Initially, nonlinearities in ANNs, such as sigmoid functions, were inspired by the saturating behavior of neurons. Neurons' firing activity reaches saturation as the neurons approach their maximum firing rate, and nonlinear functions, such as, sigmoid functions were used to replicate this behavior in ANNs. These nonlinear functions became activation functions and allowed ANNs to model complex nonlinear relationships between neuron inputs and outputs.

[0005] Currently, most of the accessible data is available in spatiotemporal formats. To use the spatiotemporal forms of data effectively in machine learning applications, it is essential to design a lightweight network that can efficiently learn spatial and temporal features and correlations from data. At present, the convolutional neural network (CNN) is considered the prevailing standard for spatial networks, while the recurrent neural network (RNN) equipped with nonlinear gating mechanisms, such as long short-term memory (LSTM) and gated recurrent unit (GRU), is being preferred for temporal networks.

[0006] The CNNs are capable of learning crucial spatial correlations or features in spatial data, such as images or video frames, and gradually abstracting the learned spatial correlations or features into more complex features as the spatial data is processed layer by layer. These CNNs have become the predominant choice for image classification and related tasks over the past decade. This is primarily due to the efficiency in extracting spatial correlations from static input images and mapping them into their appropriate classifications with the fundamental engines of deep learning like gradient descent and backpropagation pairing up together. This results in state-of-the-art accuracy for the CNNs. However, many modern Machine Learning (ML) workflows increasingly utilize data that come in spatiotemporal forms, such as natural language processing (NLP) and object detection from video streams. The CNN models used for image classification lack the power to effectively use temporal data present in these application inputs. Importantly, CNNs fail to provide flexibility to encode and process temporal data efficiently. Thus, there is a need to provide flexibility to artificial neurons to encode and process temporal data efficiently.

[0007] Recently different methods to incorporate temporal or sequential data, including temporal convolution and internal state approaches have been explored. When temporal processing is a requirement, for example in NLP or sequence prediction problems, the RNNs such as long short-term memory (LSTM) and gated recurrent memory (GRU) models are utilized. Further, according to another conventional method, a 2D spatial convolution combined with state-based RNNs such as LSTMs or GRUs to process temporal information components using models such as ConvLSTM have been used. However, each of these conventional approaches comes with significant drawbacks. For example, while combining 2D spatial convolutions with 1D temporal convolutions requires a large amount of parameters due to temporal dimension and is thus not appropriate for efficient low-power inference.

[0008] One of the main challenges with the RNNs is the involvement of excessive nonlinear operations at each time step, that leads to two significant drawbacks. Firstly, these nonlinearities force the network to be sequential in time i.e., making the RNNs difficult for efficiently leveraging parallel processing during training. Secondly, since the applied nonlinearities are ad-hoc in nature and lack a theoretical guarantee of stability, it is challenging to train the RNNs or perform inference over long sequences of time series data. These limitations also apply to models, for example, ConvLSTM models as discussed in the above paragraphs, that combine 2D spatial convolution with RNNs to process the sequential and temporal data.

[0009] In addition, for each of the above discussed NN models including ANN, CNN, and RNN, the computation process is very often performed in the cloud. However, in order to have a better user experience, privacy, and for various commercial reasons, an implementation of the computation process has started moving from the cloud to edge devices. Various applications like video surveillance, self-driving video, medical vital signs, speech/audio related data are implemented in the edge devices. Further, with the increasing complexity of the NN models, there is a corresponding increase in the computational requirements required to execute highly complex NN Models. Thus, a huge computational processing and a large memory are required for executing highly complex NN Models like CNNs and RNNs in the edge devices. Further, the edge devices are often required to focus on receiving a continuous stream of the same data from a particular application, as discussed above. This necessitates a large memory buffer (time window) of past inputs to perform temporal convolutions at every time step. However, maintaining such a large memory buffer can be very expensive and power-consuming.
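
A toy sketch of the buffering cost flagged in [0009], with sizes I've made up for illustration (this is just the conventional sliding-window approach the background is arguing against, not the TENNs method itself): a plain temporal convolution has to keep the last K input frames in memory and re-process the whole window at every time step.

```python
from collections import deque

import numpy as np

K = 32                      # temporal kernel length (hypothetical)
FEATURES = 64               # per-frame feature size (hypothetical)
kernel = np.random.randn(K, FEATURES)

buffer = deque(maxlen=K)    # rolling window of the last K frames

def step(frame):
    """Push one new frame and, once the window is full, convolve it."""
    buffer.append(frame)
    if len(buffer) < K:
        return None                        # still filling the window
    window = np.stack(buffer)              # shape (K, FEATURES), held in memory
    return float(np.sum(window * kernel))  # full window re-processed each step

for t in range(100):
    out = step(np.random.randn(FEATURES))

# Memory held at every time step, just for the raw input window:
print(f"buffer holds {K * FEATURES} values per output")
```

Every output costs K x FEATURES multiplies plus the buffer itself, and both grow with the kernel length, which is the expense the paragraph is pointing at.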
 
  • Like
  • Love
  • Fire
Reactions: 35 users
Hey Diogenese, I remember the gist of the blurb (it may have been MegaChips and may have been BrainChip, but I lean towards MegaChips) at the time MegaChips had chosen both BrainChip and Quadric as their A.I. offerings to spearhead the US market, and "they" said that the technologies weren't in direct competition (in application) but were more complementary?..
 
  • Like
Reactions: 8 users

Kachoo

Regular
  • Fire
  • Thinking
  • Wow
Reactions: 11 users

Yoda

Regular
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 7 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 7 users
Well, 50 million shorts covered?

Fools let them get off easy.
It will be interesting to see what happens from here.

This podcast next week will be the most important we've had, in terms of what is said.

Sean's obviously going to be asked "When deal?"...

And things like: does he still expect, as has been alluded to, that we will land a couple of IP deals this year?

Of the engagements where we are down to the "2 or 3" selection stage, are they still in progress, or have some been decided not in our favor?

Personally, I'd like to know if his statement in the quarterly, regarding us being on the "cusp" of sustainable revenue streams, can be elaborated on.

As this would have to relate to things already in motion and not new deals.
 
  • Like
  • Fire
  • Love
Reactions: 29 users