BRN Discussion Ongoing

I think it's healthy to take a break from here now and then.
It can become a little consuming, and the older among us don't eat and breathe social media like those of you who have grown up with it. 🤣
Or it may be that life has simply gotten in the way, as it has a tendency to sometimes do.
Or maybe there's just nothing new or notable enough to comment on.
You can only advise people to DYOR, be patient and work your own plan so many times before it becomes tedious and repetitious.
Maybe they have decided to limit themselves to an inside cabal, having sorted the grain from the chaff. 🤣
All supposition on my part of course, and I hope to read them here again in future, but if not, I thank them for sharing their wisdom and experience in times past.
Or maybe the influx of people tearing the company down all of a sudden made the decision for them. I myself have seen this playbook a few too many times and am considering joining them. I'll stay true to the company and hold, though.

SC
 
  • Like
Reactions: 6 users
OK, I'm too emotional and impatient, but I remain rational.
Everyone just wants the money.
I feel that they think creating an ecosystem is done with a wave of the magic 🪄 wand and poof, it's there. Dream on guys, if it were that easy everyone would be doing it.
It's a snowball soon to turn into an avalanche.
The market is warming not only to getting more investors back but to AI.
It's going to flow like a torrent after the thaw.
What's everyone's rush to become rich?
I want sustainability and a long ride, not a quick buck.
If I wanted a quick buck, I would play the pokies.
 
  • Like
  • Love
Reactions: 20 users

Kachoo

Regular
I have a hypothetical question for everyone, and be honest.

If BRN had accomplished exactly what it has today, partnerships and all these social media posts, but the price was at, say, $1.50, would you all be pissed off at the chiefs or happily waiting on upgraded numbers and higher values?

I would bet at least 90% would be happily waiting. That is a fact.

I would be absolutely happy in that situation.
 
  • Like
Reactions: 12 users
Interesting recent interview with a team member from Sony Semicon who is working with Prophesee on EVS.

Nothing on us, however my takeaway is an understanding of the "time and effort" that has to go into the joint development, testing and production involved in merging tech to create one final product.

Food for thought & appreciation maybe in some TSE discussions re how long consumer / commercial / enterprise product releases take... and this from one of the industry giants :unsure:



Sony Semiconductor Solutions Group

Pursuing the approach to social value in developing new technology​

March 22, 2023
There has been a growing trend in recent years to combine AI with diverse sensors, and against this backdrop, the Event-based Vision Sensor (EVS) is garnering much interest.
This is a sensor that extracts only the changes within a frame, enabling a significant reduction in power consumption and in the volume of data to be handled compared with conventional frame-based sensing.
It has great potential for extensive applications. We interviewed Atsumi Niwa of Sony Semiconductor Solutions Corporation (SSS) Research Division 1, the engineer who developed the world's smallest*1 pixel for EVS, and asked him about his experience in the development project and the secrets of his success in creating a technology the world had never seen before.
*1) According to Sony research (as of September 2021)

Niwa Atsumi
Sony Semiconductor Solutions Corporation
Research Division 1
Profile: Niwa joined Sony Corporation's Semiconductor Business Group (now SSS) in 2008.
He initially worked on the development of analog circuits for TV tuners, where he mainly designed baseband analog signal processors and AD converters. He subsequently contributed to enhancing the performance of CMOS image sensors for mobile applications. In 2017 he joined the project to develop EVS, which led to the launch of the IMX636, released in 2021. In recognition of this achievement, he received the 2021 Sony Outstanding Engineer Award. Today, he works on the internal rollout of the EVS technology and on expanding its scope of application through his development projects.

The desire to find out what contributions EVS might offer to society prompted him to participate in the project to commercialize the technology​

- What kind of sensor is an EVS?
EVS is a fundamentally different sensor from general image sensors, which have so far evolved to reproduce images as faithfully to our vision as possible. It measures differences in luminance at the pixel level and outputs the data with their coordinates and time stamps. There are three major advantages to EVS.
Firstly, EVS can detect luminance changes without complex settings that depend on lighting conditions, so users can operate it far more easily than an image sensor, which requires a lot of setup to produce optimal data.
Secondly, it consumes far less power, because the output data contains only the detected differences.
And thirdly, thanks to pixel-parallel detection, EVS outputs only changes, unlike conventional image sensors, which read out approximately 30 to 60 frames of whole images per second. This allows EVS to respond quickly and capture changes in less than a millisecond.
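(Not from the article, but to make the principle concrete: here is a minimal Python sketch of how frame data reduces to EVS-style events. The function name, the threshold value and the log-contrast model are illustrative assumptions, not Sony's implementation.)

```python
import numpy as np

def frames_to_events(frames, times, threshold=0.2):
    """Convert a stack of grayscale frames into EVS-style events.

    Emits an event (x, y, t, polarity) whenever the log-luminance of a
    pixel changes by more than `threshold` since the last event at that
    pixel -- the per-pixel contrast-detection principle described above.
    """
    log_f = np.log(frames.astype(np.float64) + 1e-6)
    ref = log_f[0].copy()              # per-pixel reference level
    events = []
    for frame, t in zip(log_f[1:], times[1:]):
        diff = frame - ref
        changed = np.abs(diff) > threshold
        ys, xs = np.nonzero(changed)
        for x, y in zip(xs, ys):
            events.append((x, y, t, 1 if diff[y, x] > 0 else -1))
        ref[changed] = frame[changed]  # reset reference where events fired
    return events

# Example: a bright dot moving across an otherwise static scene
frames = np.full((5, 8, 8), 10.0)
for i in range(5):
    frames[i, 4, i] = 200.0            # only the moving dot generates events
events = frames_to_events(frames, times=np.arange(5) * 1e-3)
print(len(events), "events instead of", frames.size, "pixel readouts")
```

Only the moving dot produces output, which is the whole point: the static background costs nothing.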

Suppose we are capturing a welding process in a factory: a conventional image sensor would produce an image in which the welded part is whited out because it is extremely bright. Similarly, the sparks must be observed to discern whether the welding is applied correctly, but conventional image sensors are not capable of precisely capturing the fast-moving sparks. The EVS, on the other hand, can easily capture individual sparks even under this high-contrast condition.
[Figure: Metal process monitoring]
Frame-based sensor image: the sparks are overexposed due to high luminance, leaving linear trails in the image.
EVS image: each fast-moving spark is captured individually [high frame rate]; data other than the sparks (such as the machinery) are not output [high efficiency: minimal data output].
Application output: each spark is tagged with an ID and tracked, so it can be analyzed in terms of number, size, speed, etc.
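(Again just an illustration of the "tagged with an ID and tracked" step above: a toy greedy nearest-neighbour tracker. All names and the `max_jump` parameter are assumptions for the sketch, not Sony's actual algorithm.)

```python
import numpy as np

def track_sparks(event_batches, max_jump=3.0):
    """Greedy nearest-neighbour tracking of spark-like event clusters.

    `event_batches` is a list of (N, 2) arrays of event coordinates,
    one per time slice. A spark keeps its ID while it moves less than
    `max_jump` pixels between slices; otherwise a new ID is started.
    """
    tracks, next_id = {}, 0
    prev = {}                                   # track id -> last position
    for pts in event_batches:
        assigned = {}
        for p in pts:
            # find the closest still-unmatched track within max_jump
            best = min(prev.items(),
                       key=lambda kv: np.linalg.norm(kv[1] - p),
                       default=(None, None))
            if best[0] is not None and np.linalg.norm(best[1] - p) <= max_jump:
                tid = best[0]
                prev.pop(tid)                   # one spark per track per slice
            else:
                tid, next_id = next_id, next_id + 1
            assigned[tid] = p
            tracks.setdefault(tid, []).append(p)
        prev = assigned
    return tracks   # id -> list of positions: gives count, path, speed

# Two sparks drifting apart over three time slices
batches = [np.array([[0.0, 0.0], [10.0, 10.0]]),
           np.array([[1.0, 0.5], [11.0, 10.5]]),
           np.array([[2.0, 1.0], [12.0, 11.0]])]
print(len(track_sparks(batches)), "sparks tracked")   # 2
```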
Alternatively, the sensor's capability to measure motion can be applied to machine inspection. Engines, for example, normally run constantly, and their movement becomes disturbed if there is an abnormality. The EVS can be used to detect such abnormal movements and catch early signs of malfunction.
[Figure: Vibration monitoring]
Frame-based sensor image: the vibration of the model car on the platform cannot be discerned with the naked eye.
EVS image: only the vibrating areas are processed and output, so the vibration is clearly visualized.
Application output: the frequency is analyzed per pixel and can be mapped out in two dimensions.
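(To make the per-pixel frequency analysis concrete, a small sketch under the assumption that events are binned into per-pixel counts over time; the peak of each pixel's spectrum then gives its vibration frequency. Illustrative only.)

```python
import numpy as np

def pixel_frequency_map(event_counts, fs):
    """Estimate the dominant vibration frequency at each pixel.

    `event_counts` is a (T, H, W) array: events per pixel per time bin,
    sampled at `fs` Hz. A vibrating region produces a periodic event
    rate, so the peak of its spectrum gives the vibration frequency --
    the kind of per-pixel map the figure above describes.
    """
    spectrum = np.abs(np.fft.rfft(event_counts, axis=0))
    spectrum[0] = 0                      # drop the DC component
    freqs = np.fft.rfftfreq(event_counts.shape[0], d=1.0 / fs)
    return freqs[np.argmax(spectrum, axis=0)]

# Example: one pixel region vibrating at 50 Hz, sampled at 1 kHz
T, H, W, fs = 1000, 4, 4, 1000
t = np.arange(T) / fs
counts = np.zeros((T, H, W))
counts[:, 1, 2] = (np.sin(2 * np.pi * 50 * t) > 0.9).astype(float)
print(pixel_frequency_map(counts, fs)[1, 2])   # ~50.0
```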
- Has EVS existed for a long time?
The technology emerged during the first decade of the 2000s, inspired by the way the human eye recognizes images. The sensor catches only moving targets and outputs the data at high speed. In itself this simply represents the efficiency of capturing only changes, but the vision sensor has evolved through attempts to apply it to various contexts by combining it with a neural network.*2
With reference to the point of efficiency I just mentioned, extracting change information from images captured with conventional image sensors significantly increases power consumption and the volume of data to be handled. For AI to evolve further, sensors will be expected to provide only the information the neural network requires, and EVS represents one of the viable options for this purpose.
*2) A series of algorithms for pattern recognition modeled on the ways in which the human brain operates.
- When you first came across this technology, did you immediately want to get involved in developing it?
When I heard of EVS, I found it interesting, and the idea of making one appealed to me. Also, I was curious to know how this technology could contribute to society.
My background was in CMOS image sensors, developing them to take clear pictures.
EVS, with principles totally different from what I knew, excited me with the prospect of discovering new potential. Soon afterwards, Prophesee, a French sensing-device venture, approached us to propose a joint project to commercialize EVS. This project was the world's first attempt to turn EVS into a commercial product. I was keen to witness firsthand how the technology would contribute to society and decided to join the project.

Unprecedented external joint development, having to deal with differences in everything from workflows to technical terms​

- What was the most difficult part in developing this EVS with the smallest pixels in the world?
Each pixel in an EVS has its own event detection circuit. If the circuit specifications become demanding, the pixel size increases to accommodate it all, and this was a problem. The possibility of implementing EVS in wearable devices and smartphones makes it absolutely necessary that the pixels be smaller and more power-efficient.
So, we decided to design a circuit from scratch, drawing on SSS's expertise, stacking technology and circuit design know-how.
It was hard going at the stage where we combined this new circuit design with the process technology. It took much trial and error to achieve the pixel size reduction.
Meanwhile, another difficulty was dealing with so many unknown factors. Usually we pursue development by supposing some usage contexts for the product, but the technology concerning EVS had not been established within SSS. We could imagine some use cases, but without actually testing we could not verify whether the desired data could be obtained.
Moreover, the technology being new to us, we had no adequate equipment for verification, data acquisition and, therefore, evaluation. We had to develop the environment for technological verification in parallel with product development, which made the project time-consuming and difficult to organize.
- This was the first collaboration with other companies in the EVS domain. How did you find it?
There have been many collaborative projects where responsibilities were clearly divided. This time was the first experience, for me as well as for SSS, of two companies joining forces to develop and design one stacked sensor.
SSS has predetermined design development workflows, and these are, obviously, unique to the company. The language we use is also different from theirs.
Communication was often riddled with difficulties due to the differences in technical background knowledge between Prophesee and SSS. We made efforts to maintain close communication and ensure mutual understanding, because many little misunderstandings would eventually slow down the development schedule.
While Prophesee and SSS agreed to leverage our respective expertise to create a highly sophisticated EVS product, there were some discrepancies as to what level to aim at and how the product should be promoted. We needed to spend quite a lot of time ensuring that we had the same understanding.
The collaboration entailed establishing a new development flow in tandem with the product development, and a new evaluation environment also had to be developed. All in all, this was a whole new experience for us.
On top of this, the new concept that EVS presented required sustained efforts to explain the sensor's characteristics within the company, so that people understood them and took an interest in the project. These efforts were necessary to build up the project team.
- How did you face the unfamiliar development workflow?
Despite the difficulties experienced in the development processes and communication, I enjoyed the job very much. I also learned much from the project.
For Prophesee, this technology determines their corporate success or failure. Seeing their uncompromising attitude to details, I thought we could learn from it.

Finding semiconductors interesting at university lab​

- When did you become interested in semiconductors?
I was studying circuit design at university, and though our lab was not dealing with the latest processes, we students made prototypes and studied the discrepancies between theory and implementation. It was the process of probing why and how reality defied theory that appealed to me, and that is how I came to find semiconductors interesting.
The theory-oriented lab taught me the pleasure of in-depth thinking. Joining this lab was a pivotal experience for my future course in life.
- What was the deciding factor for your joining SSS?
At the end of my master's course, I was in two minds about staying on to pursue a doctorate. Meanwhile, I met recruiters from various corporations, of whom the members of SSS left the best impression on me. They said that, at SSS, it was possible to propose and pursue projects based on what you found interesting. It seemed to be a great place to be if I wanted to pursue what interested me.
At the time, I was enjoying the process of theorizing and testing the theory at the university lab, and I was also drawn to the idea of creating products and seeing how they benefited people in society. These were the factors that made me want to test my abilities at SSS.
- Have you ever experienced a failure?
I did, when I was engaged in the development of CMOS image sensors for cell phones. I was working on a project with very demanding development requirements, so much so that some people in the company found it unjustifiable.
I was determined to succeed and managed to make it to the prototyping phase. However, a new specification was added before commercialization. This latest addition proved to be totally incompatible with the structure of the prototype, and eventually my development had to be suspended.
The technology was modified and eventually adopted in the client's later model. Even so, the taste of failure in that first attempt was bitter.

Think deeply about the technology, what value it offers and how it can contribute to society​

- What do you pay attention to when you work?
I take time to consider the research and development project at hand in terms of its future significance and diffusion of value.
Any research and development would need to envision how the technology under development might be put to use. However, thinking about it within a framework of today’s conventions and values may lead to an impasse.
If you are confined to the values of conventional image sensors, for example, you might fail to see the worth of EVS, thinking that a sensor that "only captures changes" offers nothing toward producing clear images. I think this is a trap we should avoid. I would like to be someone who really thinks through what value can be found in capturing only changes, and what it will make possible.
Today, I am fortunate to be in the research division, where we give thorough consideration to the contributions to society that the technology we develop and study could make, and to the value it offers. I always learn from this approach.
- What would you like to work on in the future?
SSS has so far been focused on image sensors that are capable of capturing the world as we see it as truthfully as possible, and the world has been demanding such technology and products.
However, I believe the future opens up to approaches that change our way of life from the viewpoint of "social richness." A sensor may not be able to capture a beautiful photo for viewing, but it can be designed to extract information that enables a system to liberate us humans from heavy workloads.
I would like to be part of the effort to expand imaging technology for "photography" into the domain of sensing for "obtaining necessary information." It would be exciting to keep exploring the possible value that sensing technology can create.
I imagine that the future will bring richer combinations of new image sensors and applications, making our lives better and turning society into a more interesting place. With image sensor technology at the core, how should we leverage it, and what value can be derived from it? We would like to adopt this application-oriented approach to prove the value we create, so that our work and products lead to positive changes in society.
- Image sensors that enrich society sounds like a wonderful idea.
An ambition like this cannot be fulfilled by a solo effort. This is why I always discuss desirable image sensors with many colleagues from various fields of expertise.
And discussion alone is not enough to move things forward. I think it is important to always keep myself open to new information and to maintain an extensive network of people as a potential source.
- Interviewer’s postscript
It is clear that Niwa has a constant attitude of pursuit: theories at university, the project everyone else had given up on in the development division, and the value for society he currently pursues in the research division. It seemed to me that this attitude and persistence in taking on challenges are a crucial part of the competence that enabled him to create something the world had never known. His journey to explore the potential of image sensors is far from over. It will be exciting to see what kind of technology he comes up with next to surprise us.

 
Last edited:
  • Like
  • Fire
Reactions: 24 users

TECH

Regular
Valeo patent: translate this and you'll read some interesting descriptions and claims in the invention that truly highlight how seamlessly spiking neural networks and autonomous driving go together. If that patent isn't referring to our technology, well, then I give up. :ROFLMAO:

It's not too hard to read once you have translated it!

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 42 users
Valeo patent: translate this and you'll read some interesting descriptions and claims in the invention that truly highlight how seamlessly spiking neural networks and autonomous driving go together. If that patent isn't referring to our technology, well, then I give up. :ROFLMAO:

It's not too hard to read once you have translated it!

Yep, gotta like those highlighted bits.

In another embodiment, detecting objects in the three-dimensional point cloud includes using a neural network.

[0027]

In another embodiment, detected objects are classified into different categories by the neural network.

[0028]

The neural network, which can be understood as a software algorithm, is particularly trained for one or more computer vision tasks, for example including object detection and/or segmentation and/or classification.

[0029]

According to several implementations, the neural network is implemented as a spiking neural network (SNN) or as a convolutional neural network (CNN).

[0030]
 
  • Like
  • Fire
  • Love
Reactions: 44 users

Frangipani

Regular
Sorry Hoppy, but I plan on coming to the AGM without any shoes on, or maybe thongs depending on how wet it is, so I'm pretty sure I've got the whole 'Belle of the Ball' zeitgeist completely monopolized. One peek at my tootsies is all it will take to leave anyone else, including @The Pope, in the dust. Apologies in advance, your holiness.

I can’t help but picture a number of guys unfamiliar with the Aussie meaning of “thongs” now hastily booking plane tickets to Sydney for the AGM, as they hope you will turn up in very little underwear… 🤣🤣🤣
 
Last edited:
  • Haha
Reactions: 10 users

TechGirl

Founding Member
New Sean Hehir interview with Edge Impulse, just released 1 hr ago

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 62 users

KiKi

Regular
TWEET by Edge Impulse just 8 minutes ago


A neuromorphic visual sensor can recognise moving objects and predict their path​

Published: 17.4.2023
The new smart sensor uses embedded information to detect motion in a single video frame


Conventional sensors only capture a single moment in a frame, but the new sensor can read information about the past and use that to predict the future. Image: Hongwei Tan / Aalto University

A new bio-inspired sensor can recognise moving objects in a single frame from a video and successfully predict where they will move to. This smart sensor, described in a Nature Communications paper, will be a valuable tool in a range of fields, including dynamic vision sensing, automatic inspection, industrial process control, robotic guidance, and autonomous driving technology.

Current motion detection systems need many components and complex algorithms doing frame-by-frame analyses, which makes them inefficient and energy-intensive. Inspired by the human visual system, researchers at Aalto University have developed a new neuromorphic vision technology that integrates sensing, memory, and processing in a single device that can detect motion and predict trajectories.

At the core of their technology is an array of photomemristors, electrical devices that produce electric current in response to light. The current doesn’t immediately stop when the light is switched off. Instead, it decays gradually, which means that photomemristors can effectively ‘remember’ whether they’ve been exposed to light recently. As a result, a sensor made from an array of photomemristors doesn’t just record instantaneous information about a scene, like a camera does, but also includes a dynamic memory of the preceding instants.

‘The unique property of our technology is its ability to integrate a series of optical images in one frame,’ explains Hongwei Tan, the research fellow who led the study. ‘The information of each image is embedded in the following images as hidden information. In other words, the final frame in a video also has information about all the previous frames. That lets us detect motion earlier in the video by analysing only the final frame with a simple artificial neural network. The result is a compact and efficient sensing unit.’

To demonstrate the technology, the researchers used videos showing the letters of a word one at a time. Because all the words ended with the letter ‘E’, the final frame of all the videos looked similar. Conventional vision sensors couldn’t tell whether the ‘E’ on the screen had appeared after the other letters in ‘APPLE’ or ‘GRAPE’. But the photomemristor array could use hidden information in the final frame to infer which letters had preceded it and predict what the word was with nearly 100% accuracy.
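(A toy numerical sketch of that "hidden information in the final frame" idea, assuming a simple exponential decay per pixel. The real device physics is analog and more subtle, so treat this purely as an illustration; all names and the `decay` value are assumptions.)

```python
import numpy as np

def photomemristor_response(frames, decay=0.6):
    """Toy model of the paper's 'fading memory' idea.

    Each photomemristor keeps producing a decaying current after
    illumination, so the array state after the last frame is a
    weighted sum of all frames, with older frames weighted less.
    """
    state = np.zeros_like(frames[0], dtype=float)
    for frame in frames:
        state = decay * state + frame   # old light fades, new light adds
    return state

# Two 'videos' sharing an identical final frame (the letter 'E') but
# with different histories: the final array states still differ, so a
# simple classifier reading only the last state can tell them apart.
rng = np.random.default_rng(0)
apple_frames = [rng.random((8, 8)) for _ in range(3)]
grape_frames = [rng.random((8, 8)) for _ in range(2)] + [apple_frames[-1]]
print(np.allclose(photomemristor_response(apple_frames),
                  photomemristor_response(grape_frames)))   # False
```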

In another test, the team showed the sensor videos of a simulated person moving at three different speeds. Not only was the system able to recognize motion by analysing a single frame, but it also correctly predicted the next frames.

The sensor is made of an array of photomemristors. Image: Hongwei Tan / Aalto University

Accurately detecting motion and predicting where an object will be are vital for self-driving technology and intelligent transport. Autonomous vehicles need accurate predictions of how cars, bikes, pedestrians, and other objects will move in order to guide their decisions. By adding a machine learning system to the photomemristor array, the researchers showed that their integrated system can predict future motion based on in-sensor processing of an all-informative frame.

‘Motion recognition and prediction by our compact in-sensor memory and computing solution provides new opportunities in autonomous robotics and human-machine interactions,’ says Professor Sebastiaan van Dijken. ‘The in-frame information that we attain in our system using photomemristors avoids redundant data flows, enabling energy-efficient decision-making in real time.’
 
  • Like
  • Fire
  • Thinking
Reactions: 22 users
So your complaint is a lack of comms from management and lots of info from TSEx instead.
Yet you post your complaints here ad nauseam, ruining the experience for those same members who have provided you with useful info.

Edit: missed the link post I meant to add: https://www.weforum.org/agenda/2016/08/why-complaining-rewires-your-brain-to-be-negative/
"The second thing you can do—and only when you have something that is truly worth complaining about—is to engage in solution-oriented complaining. Think of it as complaining with a purpose. "
Agree!
 
  • Like
Reactions: 3 users
Seems like the big institutions are still buying, is this not a good sign? View attachment 34445

I also see the same picture: institutions buying while the price has been declining.

Then we have to think about supply and demand: who's supplying the shares? Most likely the private investors.

It's a shame that private investors don't have stronger hands....

 
  • Like
  • Love
Reactions: 13 users

Frangipani

Regular
Correct me if I am wrong, but I think Germany is 8 hours behind Australian time.

Hi @Mercfan,

since you have Munich’s Allianz Arena as your profile pic, I assume you are not only a Mercedes, but also an FC Bayern fan? (Who knows, you might be a BVB fan instead, posting this beautiful sunset shot as an ominous Götterdämmerung prediction for Dortmund’s biggest Bundesliga rival…😉) I just happened to read somewhere that Bayern München are planning a tour through Asia in July. Let’s assume they will be playing each of their three planned matches in Bangkok, Singapore and Tokyo at 8 pm local time. For fans in Germany wanting to watch a live TV broadcast, however, the kick-off will be at different times each match day (3 pm German time for the match in Bangkok, 2 pm for the one in Singapore and 1 pm for the match in Tokyo), as there is no one single Asian time zone. Neither is there one time zone for all of Australia.

So when you are saying that Germany is 8 hours behind Australia, this statement of yours is only partly true. It is indeed, if you are currently in let’s say Port Douglas (Queensland) or Port Macquarie (New South Wales). However, if you are in Port Hedland (Western Australia) instead, Germany is currently only 6 hours behind, as WA is in the same time zone as Singapore. And to make things even more complicated, the time difference between the Ports of Hamburg or Rotterdam and Port Augusta in South Australia is 7.5 hours!

But the point I was trying to get across in my previous post is that those time zones stated in the press release @Opti had asked about were actually North American time zones, not Australian ones.





One hint was that the press release was published by Brainchip’s US office in Laguna Hills, CA. It is located in the Pacific Time zone, which encompasses parts of western Canada, parts of the western United States (excluding Alaska and Hawaii) as well as the state of Baja California in Mexico. The largest city in the Eastern Time Zone, on the continent’s opposite Atlantic Coast, is New York City.
So another hint - as already mentioned in my previous post - was that only in the Americas (but not in Australia) would it make sense to differentiate between a Pacific (= West Coast) and an Eastern time zone with a three-hour time difference between them. In Australia, on the other hand, the East Coast is the Pacific Coast!

So the live online event started at 9.30 am New York time (opening time of the NYSE/NASDAQ) / 6.30 am Los Angeles time, which in turn was 3.30 pm Central European Daylight Saving Time. And way past dinner time (for most people at least) anywhere in Australia & New Zealand. Hope this helps.
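(For anyone who wants to check these conversions themselves, Python's standard zoneinfo module does the arithmetic; the 18 April 2023 date below is just an assumed example.)

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 9:30 am in New York (NYSE/NASDAQ opening bell), assumed date
start = datetime(2023, 4, 18, 9, 30, tzinfo=ZoneInfo("America/New_York"))

for label, tz in [("Los Angeles", "America/Los_Angeles"),
                  ("Berlin", "Europe/Berlin"),
                  ("Perth", "Australia/Perth"),
                  ("Adelaide", "Australia/Adelaide"),
                  ("Sydney", "Australia/Sydney")]:
    # astimezone handles DST rules and half-hour offsets automatically
    print(f"{label:12} {start.astimezone(ZoneInfo(tz)):%H:%M %Z}")
```

It prints 06:30 for Los Angeles, 15:30 for Berlin, and 21:30 / 23:00 / 23:30 for Perth, Adelaide and Sydney, matching the post above.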
 
  • Like
Reactions: 4 users
D

Deleted member 118

Guest
Maybe he has one of these

 
  • Haha
Reactions: 7 users

HUSS

Regular
  • Like
Reactions: 5 users
I have a hypothetical question for everyone, and be honest.

If BRN had accomplished exactly what it has today, partnerships and all these social media posts, but the price was at, say, $1.50, would you all be pissed off at the chiefs or happily waiting on upgraded numbers and higher values?

I would bet at least 90% would be happily waiting. That is a fact.

I would be absolutely happy in that situation.
I had a Brainchip Facebook group that almost disintegrated after Brainchip shot up from nothing to almost 1 AUD momentarily in September 2020 and then gave back most of the gains. Some were unhappy and started shooting at me.

I left the Brainchip group and started a new group called Fundamental Stock Analysis; the purpose was to contribute collaboratively to the analysis of stocks. That group taught me that almost everybody wants to look, but not contribute anything.

My understanding is that people in general don't care about the qualitative fundamentals; they just want money fast without doing anything significant. They don't understand the value of the company, because they don't understand the fundamentals. So they only have the price of the company to rely on, and when it falls, that's all they see, and they panic.

I think that people to some degree consider the market perfect, so they assume the share price is falling because others know better; in their heads, a fall means some horrible new truth has been realized somewhere. I've profited from a lot of opportunities thanks to the imperfections in the market. I know the market is far from perfect, and that's where the opportunities are.

I also realized that other people's laziness is my opportunity: I can find the stocks that have incredible potential and buy them before everybody else sees it and piles into them. It takes a lot of reading, but it has definitely been worth it. The same goes when stocks get overhyped and overcrowded too early; then I leave.

I consider Brainchip an outstanding opportunity at the current price. The value of the company has increased tremendously since three years ago; Brainchip is an entirely different company than it was back then, and the market is much more primed for AI, yet we're close to the same price tag as back then.

---------------------------

Just to reiterate my reasoning behind my view on the value of the company from a previous post:

1) AI got super focus after ChatGPT, DALL-E, Stable Diffusion, etc., and the number of companies making something similar is exploding; many more billions will be poured into AI. We're at the accelerating part of the technological S-curve in AI. That's super positive for us; we're right in the middle of a revolution that IS changing the world right now.

2) We can't keep running AI on von Neumann hardware (nVidia, Cerebras, etc.) in the future, unless we want to end up consuming a great part of the world's electricity solely for training AI models. We need a fundamental hardware change, and it's never been more urgent. It's not just an environmental thing, it's also an economic thing. Only insane companies would run their models on nVidia hardware if they could run them on neuromorphic hardware and save heaps of money in investments and on power consumption.

3) Neuromorphic computing is the best solution to point 2 that we have. I'm not saying that neuromorphic chips will replace the von Neumann architecture in all AI, but where it's feasible they will.

4) Neuromorphic computing is about more than energy savings and economic savings; it enables new things that we couldn't do before. We're opening up entirely new markets that only neuromorphic computing is suited for.

5) It's crazy to send all the data from our devices to the cloud to have it analyzed there and the results sent back to our devices, clogging up the veins of the Internet and delaying the response, when we can do it on the device. Who can't see this? Neuromorphic computing can really bring some of today's data center capabilities to the device.

6) There are only two really promising neuromorphic companies on the market, and those are Brainchip and GrAI Matter Labs. We're in the middle of a startup dream situation: we have only one real competitor in our space. Everybody would love to have no competition at all, but a situation with one competitor is also really good.

7) We're dead center in Industrial Revolution 4.0, and nothing is more central than AI; AI pervades all the themes of Industrial Revolution 4.0. Who still can't see this? I saw it back in mid-2020 when reading the "Symbiotic Autonomous Systems White Paper III" by IEEE; AI is in almost every trend they mention. Further, as explained in the points above, neuromorphic computing is destined to play an ever larger role in AI.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 95 users

Bombersfan

Regular
New Sean Hehir interview with Edge Impulse, just released 1 hr ago


“Today we have Sean Hehir, CEO of Brainchip, the company that is bringing neuromorphic technology to the entire silicon industry”.
That’s a rather significant call, Zach… 😃🤑
 
  • Like
  • Fire
  • Love
Reactions: 56 users