BRN Discussion Ongoing

IloveLamp

Top 20

Featuring innovative TMR technology, the BMM350 from Bosch Sensortec enables new applications such as eliminating motion sickness in VR headsets, while providing a huge reduction in power consumption compared to the previous-generation device.

In indoor navigation, the BMM350 can be used to improve positioning accuracy when no satellite signal is available. It also provides position and speed measurement in e-bikes and other vehicles.
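As a rough idea of how a magnetometer like this feeds into indoor navigation: heading is usually derived from the magnetic field vector, tilt-compensated with an accelerometer. A minimal Python sketch of that step (the sample readings below are made-up numbers, not real BMM350 output, and any real integration would go through Bosch's own driver):

```python
import numpy as np

def tilt_compensated_heading(mag_uT, accel_g):
    """Compass heading in degrees from a magnetometer vector (uT) and an accelerometer vector (g)."""
    ax, ay, az = accel_g / np.linalg.norm(accel_g)
    roll = np.arctan2(ay, az)                     # rotation about the x axis, from gravity
    pitch = np.arctan2(-ax, np.hypot(ay, az))     # rotation about the y axis, from gravity
    mx, my, mz = mag_uT
    # Rotate the magnetic field vector back into the horizontal plane
    xh = mx * np.cos(pitch) + mz * np.sin(pitch)
    yh = (mx * np.sin(roll) * np.sin(pitch)
          + my * np.cos(roll)
          - mz * np.sin(roll) * np.cos(pitch))
    # Sign conventions depend on the sensor's axis frame; this is one common choice
    return np.degrees(np.arctan2(-yh, xh)) % 360.0

# Made-up sample readings: device lying flat, field pointing roughly "north-east and down"
mag = np.array([20.0, -20.0, -40.0])   # microtesla
accel = np.array([0.0, 0.0, 1.0])      # g
print(f"heading ~ {tilt_compensated_heading(mag, accel):.1f} degrees")
```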
 
  • Like
Reactions: 3 users

Frangipani

Top 20
Another article hot off the press extolling the virtues of Edge AI.
Although the author is CEO of Xperi, a company that has a competing (although apparently not truly neuromorphic) Edge AI processor (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-289368), it is nevertheless good publicity for the Edge AI market in general.



No Room For Privacy Complacency In The Era Of Cloud-Based AI

Forbes Technology Council — COUNCIL POST (Membership, Fee-Based)
Jon Kirchner, CEO, Xperi
Jun 23, 2023, 08:15am EDT

While consumers worldwide have largely ignored invitations to review lengthy data collection policy disclosures before accessing the services and experiences they want to enjoy, advocates for protecting personal digital information are wondering out loud if the real-world implications of cloud-based artificial intelligence (AI) will finally knock consumers out of their "privacy complacency."

Beyond taking legal boilerplates seriously, however, a meaningful conversation is emerging around the implications of applying AI and machine learning (ML) to gather, analyze and process information in the cloud compared to edge computing environments—in which processing and data storage take place closer to where data is created or consumed. It turns out that integrating AI and edge computing opens the door to significant benefits for those interested in enhancing data-powered experiences while protecting their personal information.

Cloud-based AI/ML services have crossed an important line when it comes to personal digital information. The technology no longer just stores the data. It specifically gathers, optimizes, trains on and monetizes data from virtually every public online source—including cloud services and connected devices. The early use cases have demonstrated that this technology can—and will—indiscriminately access and absorb contextual information from any digital source to do its job.

This is especially concerning as elements in the most sensitive sectors of society—including healthcare—rush to integrate cloud-based AI tools into their offerings. Some online mental and emotional support services, for instance, have controversially integrated AI chatbot technology into therapy and counseling sessions. On at least one occasion, cloud-based AI was deployed without the informed consent of patients/clients.

An Irrational Bias For Cloud Impedes Privacy-By-Design

In the meantime, terabytes of data are being "hoovered" into a small handful of cloud service providers (CSPs) as users engage cloud-based AI applications from mobile devices, smart homes, connected cars—and even biologically embedded technologies, such as connected pacemakers. It represents a colossal concentration of data that should raise serious alarms and prompt consumers—as well as business leaders across all industries—to develop robust privacy-by-design frameworks.

Originally developed to prevent personal data breaches by incorporating privacy considerations into the architecture of digital offerings, privacy-by-design also seeks to ensure that consumers retain control over their information. It is an objective that is difficult to achieve if all data manipulated by AI technologies ends up in cloud environments.

There can be little doubt about the positive role that cloud computing has played in democratizing access to processing and storage capacity. At a business level, cloud computing has allowed small- and mid-sized organizations to take advantage of technology-enabled operations that were once the exclusive domain of the Global 2000.
It has also opened the door for corporate behemoths to become more agile, replicating the market responsiveness once associated with smaller entities. From a consumer point of view, cloud and mobile apps have introduced a wide array of new options that have greatly expanded the experiences available to individuals.
A good argument, however, is emerging for a more balanced approach to the future of the global digital economy. As a society, we should engage in a more robust discussion about how we can get the most out of AI/ML to improve end users' lives without aggregating every citizen's personal information into a handful of CSPs. Indeed, compelling technical benefits are associated with constructing a global information infrastructure that keeps the bulk of our information where most of it belongs—locally and in the control of those who truly own the data.

Integrating The Power of AI With Edge Computing

For instance, edge computing and AI can make our roadways safer by enabling connected car technologies—including in-cabin cameras and sensors—that notice when a driver is getting drowsy. According to the most recent Centers for Disease Control and Prevention (CDC) survey, approximately one in 25 adult drivers (18 years or older) reported falling asleep while driving. The CDC estimates this results in over 6,000 fatal crashes and 50,000 injuries yearly.
This challenge offers an excellent example of a use case in which data processing must occur at the edge because concepts like "latency" matter. It is difficult to see how drivers, passengers and pedestrians can be kept safe if data must be sent to a distant cloud resource for AI analysis.
There is also the issue of adoption. One can safely assume that many, if not most, consumers will be reluctant to enable functions that collect sensitive behavioral data if it is sent to the cloud for AI processing. (At a more practical level, it is probably not a good idea to deploy vitally important connected car solutions that only work when there is a cell signal—a requirement for remote access to cloud resources.)
There is simply no reason to move all information out of its local environment all of the time. There are growing use cases for edge computing technologies to be applied—and controlled—by consumers to improve accuracy and performance and protect privacy.
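To make the "keep it at the edge" point concrete, here is a minimal sketch of what an in-cabin loop could look like: frames are scored locally and, at most, a small alert event ever leaves the vehicle, never the raw video. The frame source and the scoring function are placeholders invented for illustration, not anything from the article or from a specific vendor:

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def read_frame():
    """Placeholder for an in-cabin camera frame (here: a random 64x64 grayscale image)."""
    return rng.random((64, 64))

def drowsiness_score(frame):
    """Placeholder for an on-device model (eye closure, head pose, etc.).
    Here it is just a made-up statistic so the loop runs end to end."""
    return float(frame.mean())

ALERT_THRESHOLD = 0.51   # made-up value

def monitor(n_frames=100):
    for _ in range(n_frames):
        frame = read_frame()
        score = drowsiness_score(frame)      # inference happens on the device itself
        if score > ALERT_THRESHOLD:
            # Only a tiny alert event would ever leave the cabin (if anything at all);
            # the raw frames are never uploaded to a cloud service.
            print(f"drowsiness alert, score={score:.3f}")
        time.sleep(0.01)                     # stand-in for the per-frame latency budget

if __name__ == "__main__":
    monitor()
```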

How?

Cloud-based AI needs the brute force of big data analytics before arriving at a conclusion or result. Edge computing—done right—applies machine learning inference from the opposite direction. The most innovative edge-based computing technologies use a small amount of real-time information to derive an accurate result. It is like the old game "Name That Tune."
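The "Name That Tune" idea—stopping as soon as you have heard enough—maps naturally onto confidence-thresholded (early-exit) inference: keep folding in small pieces of real-time input and return a result the moment confidence clears a threshold. A toy sketch, with an invented evidence stream standing in for live sensor data:

```python
import numpy as np

rng = np.random.default_rng(0)

def evidence_stream(true_class, n_classes=4, noise=1.5):
    """Invented stand-in for live sensor data: noisy observations slightly favouring `true_class`."""
    while True:
        obs = rng.normal(0.0, noise, n_classes)
        obs[true_class] += 1.0
        yield obs

def name_that_tune(stream, n_classes=4, confidence=0.95, max_steps=200):
    """Accumulate evidence and answer ('name the tune') as soon as confidence clears the threshold."""
    logits = np.zeros(n_classes)
    for step, obs in enumerate(stream, start=1):
        logits += obs
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        if probs.max() >= confidence or step >= max_steps:
            return int(probs.argmax()), float(probs.max()), step

pred, conf, steps = name_that_tune(evidence_stream(true_class=2))
print(f"guessed class {pred} with {conf:.2f} confidence after {steps} observations")
```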
In short, consumers and enterprises interested in controlling the management of personal information should encourage providers of digital products and services to calibrate how and when data is used in cloud-based AI environments.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?



Follow me on LinkedIn. Check out my website.
 
  • Like
  • Love
  • Fire
Reactions: 9 users

Labsy

Regular
Thanks Labsy, it seems a few weren't too happy with what I posted, and that's fine.

To buy Brainchip shares was and still is an individual's choice; to sell Brainchip shares was and still is an individual's choice.

I chose not to sell my shares north of $2.00, and that has in effect cost me over 2.5 million dollars, the chance to make further investments, and the opportunity to buy back into Brainchip and double my already solid holding. I could moan and whinge on this forum all day long, feeling sorry for myself, but I choose not to vent and keep venting. It's all good and dandy to vent, but it's not all good and dandy to vent against the company because things don't appear on the surface to be tracking the way your cash position selfishly suggests they should be.

Individuals have made their own choices; for goodness' sake own them and stop bagging our company. That's my vent. Please respect my opinion, it's as valuable as yours.

Many on this forum and the past forum know that I have been one of the most positive, passionate supporters of Peter and Anil and the entire Brainchip team for close on 8 years. I'm hurting seeing our share price so low, but is this the place to be venting? I reserve my opinion.

Trust in your own decisions moving forward, I still see Brainchip crossing that finishing post in 1st place...(y)❤️
Couldn't agree more... each to their own. I am in a similar position to you and standing by your wise words, buddy... Let's see what the new financial year brings. I'm not happy with the SP but very happy with my investment...
 
Last edited:
  • Like
  • Thinking
Reactions: 21 users

Frangipani

Top 20
You gotta love this guy’s enthusiasm! But someone with a LinkedIn account should let him know in the comments that he is not up-to-date regarding the commercial availability of neuromorphic hardware and the early adoption timeline! 😂



Forget ChatGPT! Neuromorphic Computing is the next big thing


Neeraj Kumar

Principal Consultant at Quick Brown Fox

Published: June 20, 2023
Everyone is in awe of ChatGPT. It’s a great piece of tech. No doubt about it. But here’s something even more mind-blowing – Neuromorphic Computing.

Neuromorphic computing is like the lovechild of neuroscience (BTW, you should read A Thousand Brains) and computer science – it’s all about creating computer systems that mimic the structure and function of the human brain.
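For a sense of what "mimicking the brain" looks like in silicon, the basic building block of most neuromorphic chips is a spiking neuron. A toy leaky integrate-and-fire neuron in Python (the parameters are arbitrary illustration values, not taken from any particular chip):

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron: integrate input, leak toward rest, spike at threshold."""
    v = v_rest
    spikes = []
    for i in input_current:
        v += dt * (-(v - v_rest) + i) / tau   # leaky integration of the input current
        if v >= v_thresh:                     # membrane potential crossed the threshold
            spikes.append(1)                  # emit a spike ...
            v = v_reset                       # ... and reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive makes the neuron fire at a regular rate -- a very crude rate code
drive = np.full(200, 1.5)
print("spikes emitted:", int(lif_neuron(drive).sum()))
```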

So, what’s the big deal? Well, imagine a world where machines can learn, adapt, and make decisions on their own. We’re not talking about your average Siri or Alexa here. Neuromorphic computing takes artificial intelligence to a whole new level. It’s like having a Silicon Valley version of Professor X from The X-Men, with computers that can understand and interact with the world in a human-like way.

One of the most significant impacts of neuromorphic computing is its potential to revolutionize the field of robotics. Picture this: robots that can learn and navigate their environment, interact with humans seamlessly, and perform complex tasks with ease. From healthcare to manufacturing to space exploration, the possibilities are endless. We could have robotic companions that truly understand us and assist us in our day-to-day lives. Say goodbye to menial tasks and hello to a world of robot helpers! Remember the Kaylons from the Star Trek parody The Orville? Let's hope that does not happen.

But it doesn’t stop there. Neuromorphic computing also holds the key to unlocking the mysteries of the human brain. By emulating its structure and function, we can gain insights into how our brains work, potentially leading to breakthroughs in understanding and treating neurological disorders. Imagine a world where we can find cures for Alzheimer’s, Parkinson’s, and other devastating conditions. It’s like having a superpower to heal our minds!

It is still in its early stages, and there are numerous challenges to overcome. Creating hardware that can replicate the complexity of the human brain is no easy task. Even so, early adoption will only happen about 5 years from now! We can expect to see some prototypes as soon as the end of this year.

But, let's not forget the potential downsides. As with any disruptive technology, there are ethical and privacy concerns to address. We don't want our robotic friends turning into Skynet, do we? Or do we? I am kind of conflicted here :D. But I genuinely believe that AI will take over humans someday. But that's for another day!

We’re on the brink of a technological revolution that will redefine what it means to be human and push the boundaries of what we thought was possible. So, embrace the madness, and let’s ride this wave of innovation together!
Cheers!
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Frangipani

Top 20
Here is another post liked by Nandan Nayampally on LinkedIn, posted by a California internist, who is a specialist in pulmonary and critical care medicine:

52C734A0-A640-4CD3-B34D-7ABBF52BC287.jpeg


I googled the hospital, which serves the Napa Valley area (excellent 🍷 region!), and discovered the following Spring/Summer 2023 hospital newsletter with more info on said lung nodule programme:


798B6B3F-9ED3-42A3-A159-B7DEC03D4D2F.jpeg

2BE6B3C4-47B8-4DFC-919E-7C56305499E6.jpeg

CFCC36E9-A718-4B7E-8764-7231FD09B6A3.jpeg


How probable is it that Nandan, who resides in Austin, Texas, would 👍🏽 this post about a newly launched “lung nodule programme” by an MD at a California hospital on a six-month demo trial with a state-of-the-art robotic-assisted bronchoscopy system, raising funds for “an Artificial Intelligence System to detect signs of cancerous lung nodules up to a year earlier than manual-only review of x-rays” without Brainchip being involved?!
 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 37 users

IloveLamp

Top 20

Featuring innovative TMR technology, the BMM350 from Bosch Sensortec enables new applications such as eliminating motion sickness in VR headsets, while providing a huge reduction in power consumption compared to the previous-generation device.

In indoor navigation, the BMM350 can be used to improve positioning accuracy when no satellite signal is available. It also provides position and speed measurement in e-bikes and other vehicles.
It's worth noting that the use case in this article is one which has been specifically mentioned by the company.





Screenshot_20230625_060221_LinkedIn.jpg

Screenshot_20230625_060315_Chrome.jpg
Screenshot_20230625_060407_Chrome.jpg
 
  • Like
  • Fire
  • Love
Reactions: 39 users
Thanks Labsy, it seems a few weren't too happy with what I posted, and that's fine.

To buy Brainchip shares was and still is an individual's choice; to sell Brainchip shares was and still is an individual's choice.

I chose not to sell my shares north of $2.00, and that has in effect cost me over 2.5 million dollars, the chance to make further investments, and the opportunity to buy back into Brainchip and double my already solid holding. I could moan and whinge on this forum all day long, feeling sorry for myself, but I choose not to vent and keep venting. It's all good and dandy to vent, but it's not all good and dandy to vent against the company because things don't appear on the surface to be tracking the way your cash position selfishly suggests they should be.

Individuals have made their own choices; for goodness' sake own them and stop bagging our company. That's my vent. Please respect my opinion, it's as valuable as yours.

Many on this forum and the past forum know that I have been one of the most positive, passionate supporters of Peter and Anil and the entire Brainchip team for close on 8 years. I'm hurting seeing our share price so low, but is this the place to be venting? I reserve my opinion.

Trust in your own decisions moving forward, I still see Brainchip crossing that finishing post in 1st place...(y)❤️
Until the next AGM, when they want more bonus shares. The next 12 months is massive for shareholders; the company needs to deliver.
 
  • Like
Reactions: 1 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

Weekend Financial Review paper...

We get a mention, albeit on the wrong side, unfortunately.

Patiently waiting......

Regards,
Esq.
 

Attachments

  • 20230625_083746.jpg (2.9 MB)
  • Like
  • Sad
  • Haha
Reactions: 16 users

Townyj

Ermahgerd
Good Morning Chippers,

Weekend Financial Review paper...

We get a mention, albeit on the wrong side, unfortunately.

Patiently waiting......

Regards,
Esq.

Bit hard to read upside down :p
 
  • Haha
  • Like
  • Fire
Reactions: 9 users

Esq.111

Fascinatingly Intuitive.
Morning Townyj,

Just thought I'd try and spice it up a little.

Yes, sorry about that. Operator error.

Esq.
 
  • Haha
  • Like
  • Love
Reactions: 18 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 12 users

Townyj

Ermahgerd
  • Haha
  • Like
Reactions: 8 users

stan9614

Regular
hotcrapper is hopeless now, full of misleading lies. It took me a bit of effort to set the record straight about our cash runway, which is approximately 8 quarters, instead of the 3-quarter myth that was spreading around the forum...

I wonder how many people on this forum thought we had only 3 quarters of cash left?
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Check out this article below from Synopsys about Vision Transformer networks. It doesn't specifically mention us, but we know Akida 2nd gen will support ViTs.

The article discusses ViTs in terms of their ability to amplify contextual awareness. An example given is being able to discern whether an object on the road is a stroller or a motorcycle. This reminds me of the "plastic bag versus a rock" problem which Peter Van Der Made previously discussed: AKIDA 2000 and AKIDA 3000 will be able to resolve it by learning the difference between the two, because they learn from sequences of events and the behaviour of objects in the physical world.

Screen Shot 2023-06-25 at 2.25.17 pm.png





Deep Learning Transformers Transform AI Vision

Deep learning algorithms are now being used to improve the accuracy of machine vision.
New algorithms challenge convolutional neural networks for vision processing.
Gordon Cooper, Product Manager, Synopsys Solutions Group | Jun 12, 2023



With the continual evolution of modern technology systems and devices such as self-driving cars, mobile phones, and security systems that include assistance from cameras, deep learning models are quickly becoming essential to enhance image quality and accuracy.
For the past decade, convolutional neural networks (CNNs) have dominated the computer vision application market. However, transformers, which were initially designed for natural language processing such as translation and answering questions, are now emerging as a new algorithm model. While they likely won’t immediately replace CNNs, transformers are being used alongside CNNs to ensure the accuracy of vision processing applications such as context-aware video inference.

As the most widely used model for vision processing over the past decade, CNNs offer an advanced deep learning model functionality for classifying images, detecting an object, semantic segmentation (grouping or labeling every pixel in an image), and more. However, researchers were able to demonstrate that transformers can beat the latest advanced CNNs’ accuracy with no modifications made to the system itself except for adjusting the image into small patches.

In 2020, Google Research Scientists published research on the vision transformer (ViT), a model based on the original 2017 transformer architecture specializing in image classification. These researchers found that the ViT “demonstrate[d] excellent performance when trained on sufficient data, outperforming a comparable state-of-the-art CNN with four times fewer computational resources.” While they require training with large data sets, ViTs are now beating CNNs in accuracy.
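The "adjusting the image into small patches" step is the ViT patch-embedding front end: the image is cut into fixed-size patches, each patch is flattened and linearly projected into a token, and the transformer then attends over those tokens. A minimal numpy sketch (image size, patch size and the random projection are illustrative only, not actual ViT weights):

```python
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into flattened, non-overlapping patch vectors."""
    h, w, c = image.shape
    return (image.reshape(h // patch, patch, w // patch, patch, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(-1, patch * patch * c))      # (num_patches, patch*patch*C)

rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))               # stand-in for a camera frame
tokens = patchify(image)                        # 14 x 14 = 196 patches, each 16*16*3 = 768 values
embed = tokens @ rng.normal(size=(768, 384))    # the learned linear projection in a real ViT
print(tokens.shape, embed.shape)                # (196, 768) (196, 384)
```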


Differences Between CNNs and Transformers
The primary difference between CNNs and transformers is how each model blends information from neighboring pixels and their respective scopes of focus. While CNNs’ data is symmetric, for example based on a 3x3 convolution which calculates a weighted sum of nine pixels around the center pixel, transformers use an attention-based mechanism. Attention networks revolve around learned properties beyond location alone and have a greater ability to learn and demonstrate more complex relationships. This leads to an expanding contextual awareness when the system attempts to identify an object. For example, a transformer, like a CNN, can discern that the object in the road is a stroller rather than a motorcycle. Rather than expending energy taking in less useful pixels of the entire road, a transformer can home in on the most important part of the data.
Transformers are able to grasp context and absorb more complex patterns to detect an object.
In particular, swin (shifted window) transformers reach the highest accuracy for object detection (COCO) and semantic segmentation (ADE20K). While CNNs are usually only applied to one still image at a time without any context of the frame before and after, transformers can be deployed across video frames and used for action classification.
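One way to see the contrast the article draws: a 3x3 convolution always mixes the same fixed neighbourhood with fixed weights, whereas attention lets every token weight every other token by learned similarity across the whole image. A small numpy sketch of both, with random placeholder weights standing in for what training would learn:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv3x3_at(image, kernel, y, x):
    """CNN view: a weighted sum of the fixed 3x3 neighbourhood around pixel (y, x)."""
    return float((image[y - 1:y + 2, x - 1:x + 2] * kernel).sum())

def self_attention(tokens, d_k=32):
    """Transformer view: every token attends to every other token by learned similarity."""
    n, d = tokens.shape
    Wq, Wk, Wv = (rng.normal(size=(d, d_k)) for _ in range(3))   # placeholder "learned" weights
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                              # (n, n) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)               # softmax over all tokens
    return weights @ V                                           # context-aware mixture of tokens

image = rng.random((8, 8))
kernel = rng.normal(size=(3, 3))
print("conv output at (4, 4):", round(conv3x3_at(image, kernel, 4, 4), 3))

tokens = rng.random((16, 64))                                    # e.g. 16 patch embeddings
print("attention output shape:", self_attention(tokens).shape)   # (16, 32)
```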

Drawbacks
Currently, designers must take into account that while transformers can achieve high accuracy, they will run at much lower frames-per-second (fps) performance and require far more computation and data movement. In the near term, integrating CNNs and transformers will be key to establishing a stronger foundation for future vision processing development. However, even though CNNs are still the mainstream choice for vision processing applications, deep learning transformers are rapidly advancing and improving upon the capabilities of CNNs.

As research continues, it may not take long for transformers to completely replace CNNs for real-time vision processing applications; amplifying contextual awareness for complex patterns as well as providing higher accuracy will be beneficial for future AI applications.


 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 42 users

rgupta

Regular
Good Morning Chippers,

Weekend Financial Review paper...

We get a mention, albeit on the wrong side, unfortunately.

Patiently waiting......

Regards,
Esq.
One thing I can tell with 100% accuracy:
All financial writers shape their public view around whichever way the curve is going. They will promote a stock while it is heading north and demote the same stock once it is heading south.
But on the other hand, share prices tend to increase after heading south and decrease after heading north.
 

rgupta

Regular
However, researchers were able to demonstrate that transformers can beat the latest advanced CNNs’ accuracy with no modifications made to the system itself except for adjusting the image into small patches.
Isn't it the same technology that Qualcomm is using?
Check out this article below from Synopsys about Vision Transformer networks. It doesn't specifically mention us, but we know Akida 2nd gen will support ViTs.

The article discusses ViTs in terms of their ability to amplify contextual awareness. An example given is being able to discern whether an object on the road is a stroller or a motorcycle. This reminds me of the "plastic bag versus a rock" problem which Peter Van Der Made previously discussed: AKIDA 2000 and AKIDA 3000 will be able to resolve it by learning the difference between the two, because they learn from sequences of events and the behaviour of objects in the physical world.

View attachment 38881




Deep Learning Transformers Transform AI Vision

Deep learning algorithms are now being used to improve the accuracy of machine vision.
New algorithms challenge convolutional neural networks for vision processing.
Gordon Cooper, Product Manager, Synopsys Solutions Group | Jun 12, 2023



With the continual evolution of modern technology systems and devices such as self-driving cars, mobile phones, and security systems that include assistance from cameras, deep learning models are quickly becoming essential to enhance image quality and accuracy.
For the past decade, convolutional neural networks (CNNs) have dominated the computer vision application market. However, transformers, which were initially designed for natural language processing such as translation and answering questions, are now emerging as a new algorithm model. While they likely won’t immediately replace CNNs, transformers are being used alongside CNNs to ensure the accuracy of vision processing applications such as context-aware video inference.

As the most widely used model for vision processing over the past decade, CNNs offer an advanced deep learning model functionality for classifying images, detecting an object, semantic segmentation (grouping or labeling every pixel in an image), and more. However, researchers were able to demonstrate that transformers can beat the latest advanced CNNs’ accuracy with no modifications made to the system itself except for adjusting the image into small patches.

In 2020, Google Research Scientists published research on the vision transformer (ViT), a model based on the original 2017 transformer architecture specializing in image classification. These researchers found that the ViT “demonstrate[d] excellent performance when trained on sufficient data, outperforming a comparable state-of-the-art CNN with four times fewer computational resources.” While they require training with large data sets, ViTs are now beating CNNs in accuracy.


Differences Between CNNs and Transformers
The primary difference between CNNs and transformers is how each model blends information from neighboring pixels and their respective scopes of focus. While CNNs’ data is symmetric, for example based on a 3x3 convolution which calculates a weighted sum of nine pixels around the center pixel, transformers use an attention-based mechanism. Attention networks revolve around learned properties beyond location alone and have a greater ability to learn and demonstrate more complex relationships. This leads to an expanding contextual awareness when the system attempts to identify an object. For example, a transformer, like a CNN, can discern that the object in the road is a stroller rather than a motorcycle. Rather than expending energy taking in less useful pixels of the entire road, a transformer can home in on the most important part of the data.
Transformers are able to grasp context and absorb more complex patterns to detect an object.
In particular, swin (shifted window) transformers reach the highest accuracy for object detection (COCO) and semantic segmentation (ADE20K). While CNNs are usually only applied to one still image at a time without any context of the frame before and after, transformers can be deployed across video frames and used for action classification.

Drawbacks
Currently, designers must take into account that while transformers can achieve high accuracy, they will run at much lower frames-per-second (fps) performance and require far more computation and data movement. In the near term, integrating CNNs and transformers will be key to establishing a stronger foundation for future vision processing development. However, even though CNNs are still the mainstream choice for vision processing applications, deep learning transformers are rapidly advancing and improving upon the capabilities of CNNs.

As research continues, it may not take long for transformers to completely replace CNNs for real-time vision processing applications; amplifying contextual awareness for complex patterns as well as providing higher accuracy will be beneficial for future AI applications.


 
  • Like
Reactions: 2 users

FKE

Regular
I had a strange dream tonight. I was walking down the street and found 100 euros. Since I couldn't think of anything to buy, I thought it would be a good idea to invest the money in shares. In my dream, I was very focused on AI-related tech stocks. In the end, there were two companies to choose from:


Vnidia

A huge company that has made a breathtaking rally lately. In my dream, the technology that generates this company's revenue was called Neu-Vanman. It was at the end of its development and the potential development steps in the future were limited. The company had a valuation of EUR 953 billion. I thought to myself that if it becomes the largest company in the world it can surely reach 5000 billion, or 5 trillion EUR.


Chainbrip

A small company that is currently in a downward spiral. The technology of this company seemed breathtaking to me. In my dream, I actually assumed that this company was developing chips that resembled the function of the brain. The first versions were already on the market, and more were soon to be released. The potential seemed huge, both in terms of the market and the possibilities for further development of the technology. The company had a valuation of EUR 374 million. I thought to myself, if it can reach 1% of the size of Vnidia (if it is the biggest company in the world), that would be a huge success at 50,000 million EUR, i.e. 50 billion EUR.


I pulled out my slide rule and realised that for every EUR I invested, I was using the following factors:

1687679352683.png


This led to several questions and conclusions if my vague theories in my confused dream were true:

1.) 100 EUR invested in Vnidia = 520 EUR

2.) 100 EUR invested in Chainbrip = 13370 EUR

3) If I want to have equal total returns, I would have to invest only 0.039 EUR (about 4 cents) in Chainbrip for each EUR invested in Vnidia (5.2 / 133.7)

4) Risk assessment: I only wanted to invest in one company, so I asked myself the following question: What are the probabilities? How likely is it that the above-mentioned market caps will be reached? I speculated in my dream, completely from my gut: For Vnidia the probability is 50%, for Chainbrip 10%. That gives a ratio of 5:1 in favour of Vnidia.

5) Decision: The risk is 5:1 in favour of Vnidia, the potential returns 25:1 (133.7 / 5.2) in favour of Chainbrip. Thus, even if you call me crazy, I was willing to invest the 100 EUR in Chainbrip.

6) If the downward spiral of Chainbrip would continue, the above calculation and decision for Chainbrip would improve exponentially.


I didn't want to wait and see if the share price would drop further, I was too nervous. So I invested the 100 EUR. Then, unfortunately, I woke up. I hope I will continue to dream the dream in 2-3 years, I would be interested to see how everything has developed.
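For anyone who wants to check the dream arithmetic, here is a minimal sketch. The market caps (EUR 953 billion and EUR 374 million) and the target caps (EUR 5 trillion and EUR 50 billion) are the ones from the post above; the 50%/10% probabilities are the gut feelings from point 4.

```python
# Dream arithmetic: upside factors and a rough expected-value comparison
vnidia_cap, vnidia_target = 953e9, 5_000e9            # EUR, from the post
chainbrip_cap, chainbrip_target = 374e6, 50e9          # EUR, from the post

vnidia_factor = vnidia_target / vnidia_cap             # ~5.2x (the post rounds to 5.2)
chainbrip_factor = chainbrip_target / chainbrip_cap    # ~133.7x

print(f"100 EUR in Vnidia    -> {100 * vnidia_factor:,.0f} EUR")
print(f"100 EUR in Chainbrip -> {100 * chainbrip_factor:,.0f} EUR")

# Equal-payoff stake: EUR needed in Chainbrip per 1 EUR in Vnidia
print(f"equal-return stake: {vnidia_factor / chainbrip_factor:.3f} EUR")   # ~0.039 EUR

# Gut-feel probabilities from point 4 -> crude expected upside per EUR invested
p_vnidia, p_chainbrip = 0.50, 0.10
print(f"expected factor, Vnidia:    {p_vnidia * vnidia_factor:.2f}")
print(f"expected factor, Chainbrip: {p_chainbrip * chainbrip_factor:.2f}")
```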


PS: The share price in Germany has slipped back to 0.21 EUR since its all-time high (approx. 1.67 EUR). This means that I have already experienced 87.5% of the pain. So we are on the home stretch 😉 With the remaining 12.5%, I have a pain ratio of 7:1, which is bearable.
 
  • Like
  • Haha
  • Fire
Reactions: 32 users

Diogenese

Top 20
I had a strange dream tonight. I was walking down the street and found 100 euros. Since I couldn't think of anything to buy, I thought it would be a good idea to invest the money in shares. In my dream, I was very focused on AI-related tech stocks. In the end, there were two companies to choose from:


Vnidia

A huge company that has made a breathtaking rally lately. In my dream, the technology that generates this company's revenue was called Neu-Vanman. It was at the end of its development and the potential development steps in the future were limited. The company had a valuation of EUR 953 billion. I thought to myself that if it becomes the largest company in the world it can surely reach 5000 billion, or 5 trillion EUR.


Chainbrip

A small company that is currently in a downward spiral. The technology of this company seemed breathtaking to me. In my dream, I actually assumed that this company was developing chips that resembled the function of the brain. The first versions were already on the market, and more were soon to be released. The potential seemed huge, both in terms of the market and the possibilities for further development of the technology. The company had a valuation of EUR 374 million. I thought to myself, if it can reach 1% of the size of Vnidia (if it is the biggest company in the world), that would be a huge success at 50,000 million EUR, i.e. 50 billion EUR.


I pulled out my slide rule and realised that for every EUR I invested, I was using the following factors:

View attachment 38882

This led to several questions and conclusions if my vague theories in my confused dream were true:

1.) 100 EUR invested in Vnidia = 520 EUR

2.) 100 EUR invested in Chainbrip = 13370 EUR

3) If I want to have equal total returns, I would have to invest only 0.039 EUR (about 4 cents) in Chainbrip for each EUR invested in Vnidia (5.2 / 133.7)

4) Risk assessment: I only wanted to invest in one company, so I asked myself the following question: What are the probabilities? How likely is it that the above-mentioned market caps will be reached? I speculated in my dream, completely from my gut: For Vnidia the probability is 50%, for Chainbrip 10%. That gives a ratio of 5:1 in favour of Vnidia.

5) Decision: The risk is 5:1 in favour of Vnidia, the potential returns 25:1 (133.7 / 5.2) in favour of Chainbrip. Thus, even if you call me crazy, I was willing to invest the 100 EUR in Chainbrip.

6) If the downward spiral of Chainbrip would continue, the above calculation and decision for Chainbrip would improve exponentially.


I didn't want to wait and see if the share price would drop further, I was too nervous. So I invested the 100 EUR. Then, unfortunately, I woke up. I hope I will continue to dream the dream in 2-3 years, I would be interested to see how everything has developed.


PS: The share price in Germany has slipped back to 0.21 EUR since its all-time high (approx. 1.67 EUR). This means that I have already experienced 87.5% of the pain. So we are on the home stretch 😉 With the remaining 12.5%, I have a pain ratio of 7:1, which is bearable.
Dunno what you're smokin', but there's gotta be a market for it.
 
  • Haha
  • Like
  • Wow
Reactions: 25 users