BRN Discussion Ongoing

Off the Megachips site. Apologies if already posted.

https://www.megachips.co.jp/english/product/asics/solution/image/

 
  • Like
  • Fire
  • Love
Reactions: 68 users

davidfitz

Regular
Great information on the link you provided :)

LSI for Digital Still Camera (DSC)

MegaChips DSC controllers have been adopted by leading camera and smartphone manufacturers and have gained a high reputation.

Media Processor for Mobile Phone

MegaChips media processors (MPEG4 and H.264 supported products) have been adopted by leading domestic mobile phone and game console manufacturers.

The Japanese mobile phone industry is one of the most advanced in the world. As of March 2022, there were 199.99 million mobile contracts in Japan[1] according to the Ministry of Internal Affairs and Communications. This is 158 percent of Japan's total population.

Of course we already know about the game console.

 
  • Like
  • Fire
  • Love
Reactions: 40 users

Diogenese

Top 20
Hi SC,

MegaChips have some tie-up with Quadric and they are persisting with their tech which uses MACs for motion estimation, image detection and manipulation.

MACs are conventionally used for processing mathematical calculations with high precision. They dot every i and cross every t.

In contrast, Akida is the "big picture" player, taking an overview and making a decision on probability. This makes Akida's SNN far more energy efficient and reduces latency.

Now we are told that TeNNs is more efficient than Akida. I have not come to grips with the ins and outs of TeNNs, but, for those wanting to understand the background and the impetus to develop TeNNs, the "BACKGROUND" section of the TeNNs patent provides a readily comprehensible explanation of the motivation behind the invention.

WO2023250093A1 METHOD AND SYSTEM FOR IMPLEMENTING TEMPORAL CONVOLUTION IN SPATIOTEMPORAL NEURAL NETWORKS 20220622

[0003] In general, ANNs were initially developed to replicate the behavior of neurons which communicate with each other via electrical signals known as "spikes". The information conveyed by the neurons was initially believed to be mainly encoded in the rate at which the neurons emit these spikes. Initially, nonlinearities in ANNs, such as sigmoid functions, were inspired by the saturating behavior of neurons. Neurons' firing activity reaches saturation as the neurons approach their maximum firing rate, and nonlinear functions, such as, sigmoid functions were used to replicate this behavior in ANNs. These nonlinear functions became activation functions and allowed ANNs to model complex nonlinear relationships between neuron inputs and outputs.

[0005] Currently, most of the accessible data is available in spatiotemporal formats. To use the spatiotemporal forms of data effectively in machine learning applications, it is essential to design a lightweight network that can efficiently learn spatial and temporal features and correlations from data. At present, the convolutional neural network (CNN) is considered the prevailing standard for spatial networks, while the recurrent neural network (RNN) equipped with nonlinear gating mechanisms, such as long short-term memory (LSTM) and gated recurrent unit (GRU), is being preferred for temporal networks.

[0006] The CNNs are capable of learning crucial spatial correlations or features in spatial data, such as images or video frames, and gradually abstracting the learned spatial correlations or features into more complex features as the spatial data is processed layer by layer. These CNNs have become the predominant choice for image classification and related tasks over the past decade. This is primarily due to the efficiency in extracting spatial correlations from static input images and mapping them into their appropriate classifications with the fundamental engines of deep learning like gradient descent and backpropagation pairing up together. This results in state-of-the-art accuracy for the CNNs. However, many modern Machine Learning (ML) workflows increasingly utilize data that come in spatiotemporal forms, such as natural language processing (NLP) and object detection from video streams. The CNN models used for image classification lack the power to effectively use temporal data present in these application inputs. Importantly, CNNs fail to provide flexibility to encode and process temporal data efficiently. Thus, there is a need to provide flexibility to artificial neurons to encode and process temporal data efficiently.

[0007] Recently, different methods to incorporate temporal or sequential data, including temporal convolution and internal state approaches, have been explored. When temporal processing is a requirement, for example in NLP or sequence prediction problems, RNNs such as long short-term memory (LSTM) and gated recurrent unit (GRU) models are utilized. Further, according to another conventional method, a 2D spatial convolution combined with state-based RNNs such as LSTMs or GRUs has been used to process temporal information components using models such as ConvLSTM. However, each of these conventional approaches comes with significant drawbacks. For example, combining 2D spatial convolutions with 1D temporal convolutions requires a large number of parameters due to the temporal dimension and is thus not appropriate for efficient low-power inference.

[0008] One of the main challenges with the RNNs is the involvement of excessive nonlinear operations at each time step, which leads to two significant drawbacks. Firstly, these nonlinearities force the network to be sequential in time, i.e., making it difficult for RNNs to efficiently leverage parallel processing during training. Secondly, since the applied nonlinearities are ad-hoc in nature and lack a theoretical guarantee of stability, it is challenging to train the RNNs or perform inference over long sequences of time series data. These limitations also apply to models, for example, ConvLSTM models as discussed in the above paragraphs, that combine 2D spatial convolution with RNNs to process the sequential and temporal data.

[0009] In addition, for each of the above discussed NN models including ANN, CNN, and RNN, the computation process is very often performed in the cloud. However, in order to have a better user experience, privacy, and for various commercial reasons, an implementation of the computation process has started moving from the cloud to edge devices. Various applications like video surveillance, self-driving video, medical vital signs, and speech/audio related data are implemented in the edge devices. Further, with the increasing complexity of the NN models, there is a corresponding increase in the computational requirements required to execute highly complex NN models. Thus, huge computational processing power and a large memory are required for executing highly complex NN models like CNNs and RNNs in the edge devices. Further, the edge devices are often required to focus on receiving a continuous stream of the same data from a particular application, as discussed above. This necessitates a large memory buffer (time window) of past inputs to perform temporal convolutions at every time step. However, maintaining such a large memory buffer can be very expensive and power-consuming.
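The buffering cost described in [0009] is easy to quantify. A rough back-of-the-envelope sketch (the layer sizes here are hypothetical illustrations, not figures from the patent):

```python
# Rough memory cost of the sliding buffer of past inputs that a
# conventional temporal convolution needs at every time step.
# Hypothetical sizes: 64-channel 56x56 feature maps, fp16 activations,
# and a temporal kernel spanning T = 16 past time steps.
channels, height, width = 64, 56, 56
bytes_per_value = 2          # fp16 activations
T = 16                       # temporal window of past inputs kept per step

frame_bytes = channels * height * width * bytes_per_value
buffer_bytes = T * frame_bytes   # buffer that must persist across steps

print(f"one frame:     {frame_bytes / 1024:.0f} KiB")
print(f"T-step buffer: {buffer_bytes / (1024 * 1024):.2f} MiB")
```

Even at these modest (assumed) sizes the buffer runs to several MiB of always-on SRAM, which is exactly the kind of cost the patent argues is prohibitive on edge devices.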
 
  • Like
  • Love
  • Fire
Reactions: 35 users
Hey Diogenese, I remember the gist of the blurb (it may have been MegaChips or BrainChip, but I lean towards MegaChips): at the time, MegaChips had chosen both BrainChip and Quadric as their A.I. offerings to spearhead the US markets, and "they" said that the technologies weren't in direct competition (in application) but were more complementary...
 
  • Like
Reactions: 8 users

Kachoo

Regular
  • Fire
  • Thinking
  • Wow
Reactions: 11 users

GDJR69

Regular
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 7 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 7 users
Well 50 million shorts covered?

Fools let them get off easy..
Will be interesting to see what happens from here..

This podcast next week will be the most important we've had, in terms of what is said.

Sean's obviously going to be asked "When deal?"...

And things like: does he still expect, as has been alluded to, that we will land a couple of IP deals this year?

Of the engagements in which we are down to the "2 or 3" selection stage, are they still in progress, or have some been decided not in our favor?

Personally, I'd like to know if his statement in the quarterly, regarding us being on the "cusp" of sustainable revenue streams, can be elaborated on...

As this would have to relate to things already in motion, not new deals.
 
  • Like
  • Fire
  • Love
Reactions: 29 users
  • Like
  • Fire
  • Love
Reactions: 20 users

Kachoo

Regular
So, the shorties are running for the bottom of the fridge now?
Look at 'em scitter scatter. 🤣
They're not running; they dragged their bag of money to the bank and gave BRN a few shillings.

But maybe this is a sign that things will slowly move back to a more logical price, closer to value. I'm not quoting the PITT report, but even a 30 to 40 cent range. I guess the next week is important.

But if the shorts covered at 19.3c, then yeah, that's kinda close to a bottom maybe, IMO.
 
  • Like
Reactions: 9 users

BigDonger101

Founding Member
Is the short selling information glitching again, or is that true?
 
  • Like
  • Thinking
  • Fire
Reactions: 7 users
Every time it's gone straight down in a line like that, it has been a glitch. It'll be interesting to see it in a week's time.

SC
 
  • Like
Reactions: 5 users

FiveBucks

Regular

I'd say glitch.

The volume on Friday and Monday was low (only a few million). How could they cover 50 million?
 
  • Like
  • Fire
Reactions: 5 users

Diogenese

Top 20
Hey Diogenese, I remember the gist of the blurb (it may have been MegaChips or BrainChip, but I lean towards MegaChips): at the time, MegaChips had chosen both BrainChip and Quadric as their A.I. offerings to spearhead the US markets, and "they" said that the technologies weren't in direct competition (in application) but were more complementary...
Yes. I think they were creating a distinction without a difference to save face. Did they not invest in Quadric at about the same time as they discovered Akida?

I recall as if it were only yesterday Quadric claiming to have a complete solution in contrast with "accelerators". Of course, Akida 1500 is compatible with any CPU/MPU/GPU, so can be incorporated into any "complete solution" with as much post classification horsepower as necessary.

https://quadric.io/news/quadric-and...-to-bring-ip-products-to-asic-and-soc-market/
Collaboration Will Leverage High-Performance Quadric GPNPU Architecture to Deliver On-Device AI Solutions

BURLINGAME, Calif., and OSAKA, Japan, May 17, 2022 – Quadric, an innovator in high-performance edge processing platforms for on-device AI, and MegaChips, a leading ASIC and SoC services company based in Japan, today announced a strategic partnership to deliver ASIC and SoC solutions built on Quadric’s groundbreaking edge AI processor architecture. MegaChips announced an equity stake in Quadric in January 2022 and is also a major investor in a $21M Series B funding round announced in March through their MegaChips LSI USA Corporation subsidiary. The round aims to help Quadric release the next version of its processor architecture, improve the performance and breadth of the Quadric software development kit (SDK), and roll out IP products to be integrated in MegaChips’ ASICs and SoCs.

Quadric offers the unique ability to handle both neural backbones and classical dynamic data-parallel algorithms in a unified architecture, bringing advanced on-device AI capabilities to edge-based applications. The Quadric architecture is the industry's first General Purpose Neural Processing Unit (GPNPU). Quadric's architecture delivers high machine learning (ML) inference performance, but unlike other neural network accelerators which support a limited number of machine learning graph operators, the Quadric solution also has general purpose control and signal processing capability - blending the best attributes of NPU accelerators with DSPs. Quadric GPNPUs can run both neural net graphs and C++ code for signal pre-processing and post-processing.

While many other edge processing solutions combine high-power CPU clusters or exotic DSPs with application-specific network processing units (NPUs), Quadric’s GPNPU architecture provides the flexibility to accelerate the entire application pipeline without the need for a companion processor paired with the NPU.

In any case I think TeNNs will have erased any imagined "motion estimation" advantage of Quadric.

Anything they can do,
we can do better.
We can do anything
better than them!
...
Yes we can!
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 37 users

7für7

Top 20
I just find it unfortunate that one always has to speculate and search everywhere to get any information as an investor. It's generally okay to keep some things under wraps, but it sometimes feels like I'm an FBI agent without sufficient clearance for BRN's security level! On another note, what do you expect from the podcast? Can they share any information about partnerships and progress on projects outside of ASX announcements that might influence the stock price?
 
  • Like
  • Love
Reactions: 6 users
Not showing those numbers here


(screenshots attached)
 
  • Like
  • Wow
  • Fire
Reactions: 5 users

gilti

Regular
I call BS on the Shortman table.
26/07/24: 76,562,897 shorts; total share transactions 6,491,811.
29/07/24: 25,188,611 shorts; total share transactions 3,900,003.
So the shorters covered 51,374,286 shares in 3,900,003 sales? BS.
Even adding the odd after-hours transaction of 10,120,000 shares,
that leaves a major gap.
The Daily Gross Short report for yesterday is showing no short sales
for Thursday, which is also highly sus.
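The arithmetic in that post can be reproduced directly. A quick check, using the figures exactly as posted in the thread (which may themselves be a reporting glitch):

```python
# Sanity check: implied short covering vs. shares actually traded,
# using the daily short-report figures quoted in the thread.
shorts_26_jul = 76_562_897   # reported open short positions, 26/07/24
shorts_29_jul = 25_188_611   # reported open short positions, 29/07/24
volume_29_jul = 3_900_003    # total share transactions, 29/07/24
after_hours = 10_120_000     # odd after-hours transaction mentioned

covered = shorts_26_jul - shorts_29_jul
gap = covered - volume_29_jul - after_hours

print(f"implied covering:  {covered:,} shares")
print(f"shares available:  {volume_29_jul + after_hours:,}")
print(f"unexplained gap:   {gap:,}")
```

The implied covering exceeds all reported trading (including the after-hours cross) by tens of millions of shares, which supports the "glitch" reading rather than genuine covering.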
 
  • Like
Reactions: 1 user
  • Like
  • Thinking
Reactions: 3 users
Maybe the new CR lent the reported shorts 50 million shares, hence the drop without any noticeable change in SP.
 
  • Like
  • Thinking
Reactions: 3 users

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers ,

This is courtesy of @Doz, and would explain the skullfuckery we have witnessed.

Below from Doz.

The fact is, I have asked multiple sources at BrainChip to dispel my theory; none did.

This is telling: if Shaw and Partners were not shorting BrainChip, I would have expected a simple "no".
A senior analyst presents on ****** stock of the day, then makes the below comments?

He is either the most clueless, unprepared senior analyst, or he clearly has other motives in play.

The fact that he remains employed at Shaw and Partners maybe answers this.

(screenshots of Doz's posts attached)
Can't believe our management (if one could call them that) helped them out... if this is not outright negligence, I'm not Shaw what is.

For a far better understanding, go to HotCrapper and read Doz's last few posts (the last 10 or so) to get an understanding of what has more than likely transpired.

Thank you, Doz.

Regards,
Esq.
 
Last edited:
  • Like
  • Thinking
  • Wow
Reactions: 28 users
Top Bottom