BRN Discussion Ongoing

Thanks Labsy, it seems a few weren't too happy with what I posted, and that's fine.

To buy Brainchip shares was and still is an individual's choice; to sell Brainchip shares was and still is an individual's choice.

I chose not to sell my shares north of $2.00, and that has in effect cost me over 2.5 million dollars, the chance to make further investments, and the opportunity to buy back into Brainchip and double my already solid holding. I could moan and whinge on this forum all day long, feeling sorry for myself, but I choose not to vent and keep venting. It's all well and good to vent, but it's not all well and good to vent against the company because things don't appear on the surface to be tracking the way your cash position selfishly suggests they should be.

Individuals have made their own choices; for goodness' sake own them and stop bagging our company. That's my vent. Please respect my opinion, it's as valuable as yours.

Many on this forum and the previous forum know that I have been one of the most positive, passionate supporters of Peter and Anil and the entire Brainchip team for close on 8 years. I'm hurting seeing our share price so low, but is this the place to be venting? I reserve my opinion.

Trust in your own decisions moving forward, I still see Brainchip crossing that finishing post in 1st place...(y)❤️
Until the next AGM, when they want more bonus shares. The next 12 months are massive for shareholders; the company needs to deliver.
 

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

Weekend Financial Review paper...

We get a mention, albeit on the wrong side, unfortunately.

Patiently waiting......

Regards,
Esq.
 


Townyj

Ermahgerd

Bit hard to read upside down :p
 

Esq.111

Fascinatingly Intuitive.
Morning Townyj,

Just thought I'd try and spice it up a little.

Yes, sorry about that. Operator error.

Esq.
 



stan9614

Regular
Hotcrapper is hopeless now, full of misleading lies. It took me a bit of effort to set the record straight about our cash runway, which is approximately 8 quarters rather than the 3-quarter myth that was spreading around the forum...

I wonder how many people on this forum thought we had only 3 quarters' cash left?
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Check out this article below from Synopsys about vision transformer networks. It doesn't specifically mention us, but we know Akida 2nd gen will support ViTs.

The article discusses ViTs in terms of their ability to amplify contextual awareness. One example given is being able to discern whether an object on the road is a stroller or a motorcycle. This reminds me of the "plastic bag versus a rock" problem, which Peter Van Der Made previously discussed: AKIDA 2000 and AKIDA 3000 will be able to learn the difference between the two because they learn from sequences of events and the behaviour of objects in the physical world.






Deep Learning Transformers Transform AI Vision​


Deep learning algorithms are now being used to improve the accuracy of machine vision.
New algorithms challenge convolutional neural networks for vision processing.
Gordon Cooper, Product Manager, Synopsys Solutions Group | Jun 12, 2023



With the continual evolution of modern technology systems and devices such as self-driving cars, mobile phones, and security systems that include assistance from cameras, deep learning models are quickly becoming essential to enhance image quality and accuracy.
For the past decade, convolutional neural networks (CNNs) have dominated the computer vision application market. However, transformers, which were initially designed for natural language processing such as translation and answering questions, are now emerging as a new algorithm model. While they likely won’t immediately replace CNNs, transformers are being used alongside CNNs to ensure the accuracy of vision processing applications such as context-aware video inference.

As the most widely used model for vision processing over the past decade, CNNs offer advanced deep learning functionality for classifying images, detecting objects, semantic segmentation (grouping or labeling every pixel in an image), and more. However, researchers were able to demonstrate that transformers can beat the accuracy of the latest advanced CNNs with no modifications made to the system itself except for splitting the image into small patches.

In 2020, Google Research Scientists published research on the vision transformer (ViT), a model based on the original 2017 transformer architecture specializing in image classification. These researchers found that the ViT “demonstrate[d] excellent performance when trained on sufficient data, outperforming a comparable state-of-the-art CNN with four times fewer computational resources.” While they require training with large data sets, ViTs are now beating CNNs in accuracy.


Differences Between CNNs and Transformers
The primary difference between CNNs and transformers is how each model blends information from neighboring pixels and their respective scopes of focus. While CNNs’ data is symmetric, for example based on a 3x3 convolution which calculates a weighted sum of nine pixels around the center pixel, transformers use an attention-based mechanism. Attention networks revolve around the learned properties besides location and have a greater ability to learn and demonstrate more complex relationships. This leads to an expanding contextual awareness when the system attempts to identify an object. For example, a transformer, like a CNN, can discern that the object in the road is a stroller rather than a motorcycle. Rather than expending energy taking in less useful pixels of the entire road, a transformer can home in on the most important part of the data.
Transformers are able to grasp context and absorb more complex patterns to detect an object.
In particular, swin (shifted window) transformers reach the highest accuracy for object detection (COCO) and semantic segmentation (ADE20K). While CNNs are usually applied to one still image at a time without any context of the frames before and after, transformers can be deployed across video frames and used for action classification.

Drawbacks
Currently, designers must take into account that while transformers can achieve high accuracy, they run at much lower frames-per-second (fps) performance and require many more computations and much more data movement. In the near term, integrating CNNs and transformers will be key to establishing a stronger foundation for future vision processing development. However, even though CNNs are still considered the mainstream vision processing approach, deep learning transformers are rapidly advancing and improving upon the capabilities of CNNs.

As research continues, it may not take long for transformers to completely replace CNNs for real-time vision processing applications. Amplifying contextual awareness for complex patterns, as well as providing higher accuracy, will be beneficial for future AI applications.
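The contrast the article draws, a fixed 3x3 weighted sum versus content-based attention weights, can be sketched in a few lines of NumPy. This is a toy illustration only: the averaging kernel, the feature keys and the query are invented for the example, and it is not Synopsys's or Akida's implementation.

```python
import numpy as np

def conv3x3_center(patch, kernel):
    """Weighted sum of the 9 pixels in a 3x3 patch: the fixed,
    position-based mixing a CNN applies at every location."""
    return float(np.sum(patch * kernel))

def attention_mix(values, query, keys):
    """Content-based mixing: weights come from query/key similarity
    via a softmax, not from pixel position."""
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return float(weights @ values), weights

patch = np.arange(9, dtype=float).reshape(3, 3)
kernel = np.full((3, 3), 1 / 9)          # simple averaging kernel
print(conv3x3_center(patch, kernel))      # -> 4.0 (mean of 0..8)

# Toy attention: 3 "pixels" with 2-d feature keys; the query matches key 2
values = np.array([0.0, 1.0, 10.0])
keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 5.0]])
query = np.array([0.0, 2.0])
mixed, w = attention_mix(values, query, keys)
print(mixed)  # close to 10: nearly all weight lands on index 2
```

The convolution mixes by position with fixed weights, while the attention weights concentrate on whichever value's key best matches the query, regardless of where it sits; that is the "home in on the most important part of the data" behaviour described above.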


 

rgupta

Regular
One thing I can tell you with 100% accuracy:
financial writers shape their public views to follow the curve. They promote a stock while it is heading north and demote the same stock when it is heading south.
On the other hand, a share price tends to rise after it has been heading south and fall after it has been heading north.
 

rgupta

Regular
However, researchers were able to demonstrate that transformers can beat the latest advanced CNNs’ accuracy with no modifications made to the system itself except for adjusting the image into small patches.
Isn't this the same technology that Qualcomm is using?

FKE

Regular
I had a strange dream tonight. I was walking down the street and found 100 euros. Since I couldn't think of anything to buy, I thought it would be a good idea to invest the money in shares. In my dream, I was very focused on AI-related tech stocks. In the end, there were two companies to choose from:


Vnidia

A huge company that has made a breathtaking rally lately. In my dream, the technology that generates this company's revenue was called Neu-Vanman. It was at the end of its development and the potential development steps in the future were limited. The company had a valuation of EUR 953 billion. I thought to myself that if it becomes the largest company in the world it can surely reach 5000 billion, or 5 trillion EUR.


Chainbrip

A small company that is currently in a downward spiral. The technology of this company seemed breathtaking to me. In my dream, I actually assumed that this company was developing chips that resembled the function of the brain. The first versions were already on the market, and more were soon to be released. The potential seemed huge, both in terms of the market and the possibilities for further development of the technology. The company had a valuation of EUR 374 million. I thought to myself, if it can reach 1% of the size of Vnidia (if Vnidia becomes the biggest company in the world), that would be a huge success: 50,000 million EUR, i.e. 50 billion EUR.


I pulled out my slide rule and realised that for every EUR I invested, I was using the following factors:

Vnidia: 5,000 billion EUR / 953 billion EUR ≈ 5.2
Chainbrip: 50 billion EUR / 374 million EUR ≈ 133.7


This led to several questions and conclusions if my vague theories in my confused dream were true:

1.) 100 EUR invested in Vnidia = 520 EUR

2.) 100 EUR invested in Chainbrip = 13370 EUR

3) If I want to have equal total returns, I would have to invest only 0.039 EUR (about 4 cents) in Chainbrip for each EUR invested in Vnidia (5.2 / 133.7)

4) Risk assessment: I only wanted to invest in one company, so I asked myself the following question: What are the probabilities? How likely is it that the above-mentioned market caps will be reached? I speculated in my dream, completely from my gut: for Vnidia the probability is 50%, for Chainbrip 10%. That gives a ratio of 5:1 in favour of Vnidia.

5) Decision: The risk is 5:1 in favour of Vnidia, the potential returns 25:1 (133.7 / 5.2) in favour of Chainbrip. Thus, even if you call me crazy, I was willing to invest the 100 EUR in Chainbrip.

6) If the downward spiral of Chainbrip would continue, the above calculation and decision for Chainbrip would improve exponentially.


I didn't want to wait and see if the share price would drop further, I was too nervous. So I invested the 100 EUR. Then, unfortunately, I woke up. I hope I will continue to dream the dream in 2-3 years, I would be interested to see how everything has developed.


PS: The share price in Germany has slipped back to 0.21 EUR since its all-time high (approx. 1.67 EUR). This means that I have already experienced 87.5% of the pain. So we are on the home stretch 😉 With the remaining 12.5%, I have a pain ratio of 7:1, which is bearable.
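The factors and expected values in the dream above can be checked with a few lines of Python. All market caps, targets and probabilities are the post's own guesses, not real figures:

```python
# Dream-post arithmetic, using the figures as stated in the post (EUR).
vnidia_cap, vnidia_target = 953e9, 5e12
chainbrip_cap, chainbrip_target = 374e6, 50e9

f_v = vnidia_target / vnidia_cap        # upside factor, ~5.2x
f_c = chainbrip_target / chainbrip_cap  # upside factor, ~133.7x
print(round(f_v, 1), round(f_c, 1))     # -> 5.2 133.7

# 100 EUR invested at each factor (the post rounds to 520 and 13370)
print(round(100 * f_v))                 # -> 525
print(round(100 * f_c))                 # -> 13369

# Gut-feel success probabilities from the post: 50% vs 10%
p_v, p_c = 0.50, 0.10
ev_v, ev_c = p_v * 100 * f_v, p_c * 100 * f_c
# Even after the 5:1 probability handicap, Chainbrip's expected
# value is roughly 5x Vnidia's, which is the post's 25:1 vs 5:1 point.
```

So the decision in step 5 is just an expected-value comparison: the probability handicap (5:1) is smaller than the payoff advantage (about 25:1), under the dreamer's assumptions.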
 

Diogenese

Top 20
Dunno what you're smokin', but there's gotta be a market for it.
 

Tothemoon24

Top 20



IPro licenses Silicon IP to the Israeli Chip Design Community, from selected IP companies world-wide. We deliver key functionality for your design through best-in-class IP partnerships and first-class support.

We act as one company. Operating at the same high standards of support and commitment that you have learned to trust over years of partnership with me in a variety of sales roles, the IPro Group continues a long tradition of engaged support and information exchange. We inform you, learn your needs, and provide IP solutions for your SoC design challenges, enabling you to reach the market with world-class IP products - fast!

Imagine a vibrant community of Israeli fabless companies and Worldwide IP vendors, collaborating closely and sharing information. Imagine an atmosphere of trust and cooperation and mutual commitment - for the success of your designs and for the constant improvement of our IP offer. This is the IPro vision - a one-stop shop of state-of-the-art IP with unique engagement and bond with our Partners.

 


Draed

Regular
The way I see it, this is a last-round attempt to short before the inevitable pull-back, like a reverse pump and dump. I think maybe we will drop back into the ASX 300 after this month? Institutional owners will then have to close their short positions very suddenly. I don't think the ASX 200 has been good for us.

If we can couple this with a nice little announcement, it might burn them a little bit crispier.
 

Neuromorphia

fact collector



Our new sales representative's website.
 

Reuben

Founding Member
Been a while since I logged in, but good to see some very positive posts from a few...

If you have done your research, time will speak for itself.

I am in deep red as well, but research is the only thing that has kept me holding my BRN shares...
 

Getupthere

Regular
I agree, the ASX 200 has not been good for us without revenue.

Going back to the ASX 300 would suit us for now and limit the number of shorters.
 