BRN Discussion Ongoing

Cgc516

Regular
What else can we do? Come on BrainChip, don't keep silent!

129A4CCC-8738-4100-BDF3-0D6389E50A9E.jpeg
 
Reactions: 10 users
My thoughts while scrolling TSE today, looking for some insights/news about BrainChip:


"Forget the buyers... I can't believe there are people willing to sell their shares... especially at these prices. I wonder who could possibly be willing to sell?"



"Aaaahhhh... makes sense"
"Forget the buyers... I can't believe there are people willing to sell their shares... especially at these prices. I wonder who could possibly be willing to sell?"

Maybe, roughly in this order:

Shorters and other manipulators.
LDA Capital.
Traders backing out.
Those forced to sell for expenses.
Those that have lost their patience.
Selling due to opportunity cost.
Selling due to fear.

Most are possibly:

Waiting patiently.
Waiting impatiently.
Wanting to buy more.
Buying (good after bad?).

Right now, BrainChip as a company, with no financial stress, has only opportunity in front of it.

So as you said, there's no good reason to sell, but selling factors outweigh buying at the moment.

20240116_150050.jpg


The share price is actually the most stable it's been in a long time (3-year chart).
I think it's waiting for a catalyst for a breakout, which could occur at any time.
 
Reactions: 27 users

wasMADX

Regular
Fullmoonfever said:
I think what we need to remember, and it's how I look at it anyway, is that whilst it is a trade show, CES is the "Consumer Electronics Show".

This says to me that most of the actual "exhibitors" are showing off their end-user products: things consumers can buy now or near term, etc.

The suites appear to be the real trade show, where OEMs, tech companies, suppliers, component manufacturers etc. have the nuts-and-bolts displays and demos to attract new clients, partners etc., rather than investors.

Obviously, investment may pop up as a side discussion; however, I would think that would be the domain of broker investor meetings or direct one-on-ones between interested companies.

Individual investors would be the ones milling around the exhibition wide-eyed at new tech products, and not really exposed to the suites anyway.

So for me, I wasn't expecting the SP to change much unless an actual Ann came out, but I do expect that these suite meetings have encouraged other companies to explore Akida and engage with us, over and above the few reveals we did also get.

And that those reveals are closer to firm commitments than not.
OK, so I said "I'm VERY disappointed that the share price has gone nowhere after the C.E.S.", to which Wiltzy said "Where was it supposed to go? LOL. Perhaps I am the naive one, and you will enlighten me as to where it should have gone and why."

Fullmoonfever has clarified, and I believe he's correct, i.e. CES is a trade show; that's where I was wrong, Wiltzy.

NOW, trying to be constructive, I still believe an opportunity was missed at CES.
I suggest that at future shows like CES, we could have the best of both worlds.


Have a person or persons demoing, and another discussing investing in us, with the booth clearly set up between the two. I reckon some visitors would see the demo, go off thinking, then, realising the potential, and having subliminally noticed the investing idea, come back for info. We could also be a bit naughty by employing a crowd enthusiastically jostling and calling out "how can I invest?" ;)
 
Reactions: 9 users
Apologies if this has already been shared or is not a recent update, but is the image on the Brainchip CES page showcasing the Onsemi demo a new addition?

1705382580810.png
 
Reactions: 66 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This is a cool article to revisit, especially considering we are partnered with no fewer than five companies on Apple's Supplier List (link below), including Infineon, MegaChips, Microchip, Onsemi and Renesas.




Screenshot 2024-01-16 at 4.37.28 pm.png



Morgan Stanley: It’s when, not if, Apple will deliver AI on the edge​

BY JONNY EVANS · NOVEMBER 8, 2023


Apple-WWDC22-hands-on-area-MacBook-Air-Steve-Jobs-Theater-Apple-Park-220606.png


Apple has an opportunity to cruise ahead of the pack when it comes to Generative AI, argues Morgan Stanley's Erik Woodring in his latest client note. I agree. Here's the basic proposition.

Leading from behind

We know Apple appears to lag its rivals when it comes to the application of Generative AI, but there are problems with those technologies that haven't yet been fully resolved.
One of these challenges is privacy and the need to protect it. That's where Woodring sees Apple's chance to create a powerful system that combines the best of AI with the best of privacy. Essentially, he thinks (as I do) that the natural evolution for Apple here is to develop AI at the edge.

He thinks this kind of edge-based intelligence will become more important over the next twelve months as new products appear and new killer apps turn up. Apple’s advantages for this include its massive user base, focus on privacy, and the high degree of integration across its products and services. Those advantages may turn into Edge AI benefits, he argues.
We can’t be certain Apple is working on this, nor can we accurately predict when such tech may appear, but the analyst sees Generative AI in Siri and Edge Intelligence as positive paths for the company’s research, moving forward.

Why does this matter?

It matters, “Because 50% of all enterprise data will be created at the edge, according to Gartner, leaving open an opportunity for hardware OEMs to come to market with a new generation of smart devices to power AI at the edge,” said the analyst.
He thinks this is an opportunity for Apple, which he describes as an 'Edge AI Enabler'. He also thinks that if Apple succeeds it will unlock huge long-term benefits.
It’s not just about the enterprise.
Inevitably, as consumers get to understand GAI, they too will become concerned about the impact of unregulated intelligence on their lives.
They too will want better and faster results, more personalization, and more availability.
The analyst also argues that while an iPhone doesn't really provide processing equivalent to a server bank, Apple's super-fast chips can process 35 trillion operations per second, which should be enough to power LLMs "up to high single digit billions of parameters" on the device itself.
“We expect new battery tech, silicon, and edge devices to emerge in 2024 (and beyond), helping to spark investor interest in this theme,” he wrote.
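
As a rough sanity check on that claim, here's a back-of-envelope sketch using the common rule of thumb that transformer inference costs about two operations per parameter per generated token; the rule of thumb and the 7B example model are assumptions, not figures from the note.

```python
# Back-of-envelope check of the "35 trillion ops/s vs. billions of
# parameters" claim. Assumption (not the article's): generating one token
# costs roughly 2 operations per model parameter.

def peak_tokens_per_second(chip_tops: float, params_billions: float) -> float:
    """Compute-bound ceiling on generation speed; real devices hit memory
    bandwidth limits long before this number."""
    ops_per_token = 2.0 * params_billions * 1e9
    return chip_tops * 1e12 / ops_per_token

# A hypothetical 7B-parameter model on a 35 TOPS chip:
print(f"{peak_tokens_per_second(35, 7):,.0f} tokens/s ceiling")   # ~2,500
```

Even after steep real-world discounts for memory bandwidth and precision, that leaves headroom for usable on-device generation, which is the analyst's point.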

Apple’s advantages

To achieve this, Apple can deploy existing advantages including:
  • The number of Apple devices in use.
  • Powerful processors and integrated hardware/software.
  • Its existing data privacy focus and market-tested approach to maintaining it.
  • Siri, which can benefit from its own LLM models.
  • Focused ways to deploy AI to support its existing hardware; think Fall Detection or Face ID.
Apple also has the money to invest in the project. "Apple's emergence as a leading provider of AI at the edge is a matter of 'when' not 'if'," he writes. Siri, Xcode, AI in Apple's iWork suite, and future Apple-powered apps for health, finance, and fitness are all logical domains for this kind of AI, he argues (I agree). He also imagines a Siri+ subscription service as one way to monetise what Apple could be building.

Apple is recruiting

The analyst shares that AI-related job postings at Apple have risen from 5% of total Apple job postings in c.2015 to around 20% today, "with an increased focus on 'Natural Language Processing (NLP)' and 'Deep Learning' skill sets that make up 20-40% of Apple's AI job postings today."
“We see this as clear confirmation of Apple’s AI intentions,” the analyst said.
We suspect that WWDC 2024 may turn out to be a good chance to see some of the first wave of Apple’s next-generation ‘Edge AI’ apps.




 
Reactions: 20 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
And this article published a few days ago.

Apple will seize the ‘AI opportunity’ at WWDC, analyst says​

BY JONNY EVANS · JANUARY 11, 2024


apple_germany-silicon-design-center_internal_03102021.jpg


Morgan Stanley analyst Erik Woodring believes this will be the year in which Apple will seize the ‘Edge AI’ opportunity, introducing its take on LLM Generative AI even as the entire computer industry slips toward “AI inside”. It’s probable that Apple’s deployment will be more than skin deep, given the power and performance of Macs and other Apple devices already in use today.

Why this matters to Apple

Woodring makes numerous pertinent points on how Apple will bring this kind of intelligence to market and what that means.
"We believe Apple is one of 6 key companies that will benefit from compute being pushed to 'The Edge' to enable new Gen AI-driven features not currently available on consumer devices," the analyst recently wrote. "We believe Apple's efforts to bring these features to market are accelerating, increasing the likelihood of an 'AI iPhone' launch as soon as Fall 2024."

He noted that 49% of Apple’s AI job postings cite Deep Learning experience, while 23% of them seek NLP talent.
Siri 2.0 will be LLM-powered, he says, but he also points to a "broader Gen AI-enabled operating system" that he thinks will be introduced at WWDC in June. He notes Apple's recent move to invent a technique that lets LLMs work well on devices.

When the tech does see the light of day, the analyst thinks this will boost Apple’s iPhone sales.
The idea is that a Gen AI-enabled OS upgrade may accelerate iPhone upgrades in a similar way to how the first 5G iPhones did. The tech may also accelerate Services spend per user, "which stands at just $8 per month today," he told clients.
But perhaps the biggest and most important component of the overall plan will be the ability to run AI at the edge. That’s gonna be the golden ticket for Apple, I think.
 
Reactions: 35 users
Hi All
I am loath to bother stating the obvious, because it will be attacked and lies will be spread far and wide, for reasons that others need to judge, but what the heck:

Fact 1: Mercedes-Benz, with a $72 billion market cap, is reported in a magazine article as saying at CES 2022 that it is working with BrainChip, trialling AKIDA technology in a concept vehicle not intended for production, and it makes some positive comments. Before these magazine reports can even be verified, the share price commences to rise, terminating at $2.34 before commencing to fall.

Fact 2: OnSemi, Microchip and Infineon, with a combined $122.10 billion market cap, undertake joint demonstrations of their respective technologies working with BrainChip AKIDA technology at CES 2023, for a range of mass-consumption use cases, not concepts. Company representatives of each come on publicly released BrainChip podcasts, go on record permanently confirming they are actually partnered for these demonstrations, and speak highly of the individual outcomes. The BrainChip share price drops to 16 cents.

My opinion only, so DYOR, but the logic of the above is difficult to understand.
Fact Finder
 
Reactions: 111 users

7für7

Regular
(quoting Fact Finder's Fact 1/Fact 2 post above)
Exactly what I tell the people in the German forum. But I guess this is the power of the brand. It's like buying branded chocolate in the expensive supermarket and saying it's delicious, while you can have the same chocolate under different branding in a cheaper supermarket and say it tastes a bit strange…
 
Reactions: 9 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This is not new news, but it's important to bear in mind, because now that we're besties with Infineon, it's going to make us more attractive to these other big kahunas. And, because RISC-V is essentially competing with Arm, that should see BrainChip being the go-to partner in the swingers club where extremely low-power, performance-per-watt operation is required for products, IMO.

Oh yeah, and Farshad from Infineon on the recent podcast was super impressed with BrainChip, which is pretty cool when you think how familiar Infineon must be with Qualcomm. Really says something, don't you think?




Screenshot 2024-01-16 at 5.27.35 pm.png
 
Reactions: 27 users

Tothemoon24

Top 20
IMG_8183.jpeg



The Brainchip Akida Neuromorphic System-on-Chip, which is a low-power and scalable chip that can integrate multiple DVS and DAS sensors and perform event-based learning and inference.

 
Reactions: 71 users
(quoting 7für7's chocolate analogy above)
As stated before, I think at the AGM: watch the financials.
 
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
OK, well that's probably it from me today, otherwise I'll develop RSI in my two index fingers.
 
Reactions: 18 users

Kachoo

Regular
(quoting 7für7's chocolate analogy above)
This information needs to be absorbed by all who are complaining.

The three companies mentioned will have a much larger impact on BRN revenue than MB would, but it's about the branding.

The SP action? Well, there is no action happening; it's dead, just being played with by institutions and brokers. With no interest, it's pretty easy to see how the price got to where it is.

What is more frustrating, in my view, is that the SP is measuring past performance instead of future performance ATM.

The constant slide was achieved by driving away speculative investment. Management have not been helpful in some of their actions either, to attract people.

Many, many individuals in the industry speak highly of the technology and the company.

The connections with Microchip, Infineon and Onsemi are more advertised than the one MB tweet that went out, and people are still dwelling on MB. Come on: MB, when and if they incorporate Akida, will generate revenue; till then there is no talk of anything from them. What we know is they are still partners and working away.
 
Reactions: 24 users
(quoting Tothemoon24's Akida post above)

This is a link that should be opened and the full article read.
My opinion only DYOR
Fact Finder
 
Reactions: 35 users

IloveLamp

Top 20
(quoting Tothemoon24's Akida post above)

For those not on LinkedIn

Screenshot_20240116_174408_LinkedIn.jpg
Screenshot_20240116_174429_LinkedIn.jpg
Screenshot_20240116_174433_LinkedIn.jpg
Screenshot_20240116_174534_LinkedIn.jpg
 
Reactions: 50 users

Rach2512

Regular
(quoting Fact Finder's recommendation above to read the full article)

Dynamic Vision Sensor (DVS) and the Dynamic Audio Sensor (DAS)


Kailash Prasad, Design Engineer @ Arm | PMRF | IIRF | nanoDC Lab…

Published Jan 16, 2024


Have you ever wondered how the human eye👁️ and ear👂 can process complex and dynamic scenes with such high speed and accuracy? Imagine if we could design artificial sensors that mimic the biological mechanisms of vision and hearing, and produce data that is more efficient and meaningful than conventional sensors.


In this post, I will introduce you to two types of neuromorphic sensors: the Dynamic Vision Sensor (DVS) and the Dynamic Audio Sensor (DAS).


These sensors are inspired by the structure and function of the retina and the cochlea, respectively, and use a novel paradigm of event-based sensing. Unlike conventional sensors that capture frames or samples at a fixed rate, event-based sensors only output data when there is a change in the input signal, such as brightness or sound intensity. This results in a stream of asynchronous events that encode the temporal and spatial information of the scene, with high temporal resolution, low latency, and high dynamic range.


📖 - "In simpler terms, these special sensors work like our eyes and ears. They're designed based on the way our eyes' retinas and ears' cochleae function. But what sets them apart is their unique approach called event-based sensing. Unlike regular sensors that take pictures or recordings at a set speed, these event-based sensors only provide information when there's a change. Whether it's a shift in light or a change in sound, they only capture those moments. Instead of a constant flow of data, you get quick updates that show when and where things change. This gives you highly detailed and fast information about what's happening, with minimal delay and a wide range of details. It's like having sensors that focus on the important stuff, making them efficient and responsive."


The DVS is an imaging sensor that responds to local changes in brightness, and outputs events that indicate the pixel address, the polarity (increase or decrease) of the brightness change, and the timestamp. The DVS can achieve a temporal resolution of microseconds⏱️, a dynamic range of 120 dB🔊, and a low power consumption of 30 mW💡. The DVS can also avoid motion blur and under/overexposure that plague conventional cameras. The DVS can be used for applications such as optical flow estimation, object tracking, gesture recognition, and robotics.
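
To make the event format concrete, here's a minimal frame-difference simulation of the rule just described; the threshold and the (x, y, polarity, timestamp) tuple layout are illustrative choices, not any vendor's API.

```python
import numpy as np

def dvs_events(prev_log, curr_log, t, threshold=0.2):
    """Emit DVS-style events: one (x, y, polarity, timestamp) tuple per
    pixel whose log-brightness changed by more than `threshold` since the
    previous frame. A real sensor does this asynchronously per pixel;
    this frame-based simulation only illustrates the rule."""
    diff = curr_log - prev_log
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(x, y, 1 if diff[y, x] > 0 else -1, t) for x, y in zip(xs, ys)]

# Two synthetic 4x4 "log-intensity" frames: only the changed pixel fires.
f0 = np.zeros((4, 4))
f1 = f0.copy()
f1[2, 1] += 0.5                        # brightness step at (x=1, y=2)
print(dvs_events(f0, f1, t=1_000))     # -> [(1, 2, 1, 1000)]
```

Note how a static scene produces no output at all, which is where the data and power savings come from.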


The DAS is an auditory sensor that mimics the cochlea, the auditory inner ear. The DAS takes stereo audio inputs and outputs events that represent the activity in different frequency ranges. The DAS can capture sound signals with a frequency range of 20 Hz to 20 kHz🎵, a dynamic range of 60 dB🔊, and a temporal resolution of microseconds⏱️. The DAS can also extract auditory features such as interaural time difference, harmonicity, and speaker identification.
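
And a comparable sketch for the DAS idea: a small band-pass filter bank (built with SciPy's butter/lfilter) that emits a (band, time) event whenever a band's rectified output crosses a threshold upward. The band edges and threshold are arbitrary illustrative values.

```python
import numpy as np
from scipy.signal import butter, lfilter

def das_events(audio, fs, bands=((100, 200), (500, 700), (2000, 3000)),
               threshold=0.25):
    """Cochlea-like sketch: band-pass the audio and emit a
    (band_index, time_in_seconds) event whenever a band's rectified
    output crosses `threshold` upward."""
    events = []
    for i, (lo, hi) in enumerate(bands):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        env = np.abs(lfilter(b, a, audio))                  # crude envelope
        ups = np.nonzero((env[1:] > threshold) & (env[:-1] <= threshold))[0]
        events += [(i, (n + 1) / fs) for n in ups]
    return sorted(events, key=lambda e: e[1])

fs = 16_000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 600 * t)    # 600 Hz tone sits in band 1
print(das_events(tone, fs)[:3])             # early events, band index 1
```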


Both the DVS and the DAS are compatible with neuromorphic computing architectures, such as spiking neural networks, that can process the event data in a parallel and distributed manner. This enables low-power and real-time computation of complex tasks such as scene understanding, speech recognition, and sound localization.
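
As a tiny illustration of why event streams and spiking networks fit together, here is a single leaky integrate-and-fire neuron driven directly by event timestamps: between events it does no work at all, which is where the low-power claim comes from. All constants are illustrative, not Akida parameters.

```python
import math

def lif_neuron(event_times, weight=0.4, tau=0.02, v_thresh=1.0):
    """Leaky integrate-and-fire neuron driven by input event timestamps:
    the membrane potential decays between events, jumps by `weight` on
    each event, and emits a spike (then resets) on crossing `v_thresh`."""
    v, t_prev, spikes = 0.0, 0.0, []
    for t in sorted(event_times):
        v *= math.exp(-(t - t_prev) / tau)   # passive leak since last event
        v += weight                          # integrate the incoming event
        if v >= v_thresh:
            spikes.append(t)                 # output spike
            v = 0.0                          # reset membrane potential
        t_prev = t
    return spikes

# A dense burst of input events triggers a spike; sparse ones leak away.
print(lif_neuron([0.001, 0.002, 0.003, 0.100, 0.500]))   # -> [0.003]
```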


Some examples of recent products that use the DVS and the DAS are:


- The Prophesee Metavision Camera, which is a high-resolution DVS camera that can capture fast and complex motions with minimal data and power consumption.
 
Reactions: 30 users
(quoting Fact Finder's Fact 1/Fact 2 post above)
It's certainly a big step up, but the expectations were very different back then.
Mercedes was the first prestigious company that actually used the chip for something and told the masses about it on the big stage to announce their concept car. Back then revenue was so far away, and this announcement caught every one of us off guard. Today, due to the statements made by management, our expectations are lots of IP licences along with revenue growth. So these demos and confirmations are certainly nice to have, but they really don't matter a lot to the vast majority of investors. The share price reflects what we expect, and it's not what we're getting right now.
I'm positive nevertheless, and these demos show that it's just a matter of time.
 
Reactions: 21 users

Rach2512

Regular
(quoting the DVS/DAS article above)


You beat me to it, IloveLamp. Great read; thanks, Tothemoon24.
 
Reactions: 15 users

CHIPS

Regular
I only hope they weren't put off by all the childish spamming and idiotic, relentless questions about BrainChip on social media etc... I really do hope that's not the case. I'm worried we have in fact become a meme stock... Just sit back and let management do their thing. I've said it from the start...
Edit: This is not directed at you Bravo. Love your work. Mainly a certain "verification engineer"... all over the network.

Oh yes, they might say "We better not use Akida, because its fan club is too annoying. They might even buy our cars later on." :cool:
 
Reactions: 8 users
(quoting the reply above about changed expectations)
Yes, I accept all of your reasoning, but what logic saw investors find a reason to sell off and drop the price to 16 cents?

What was the bad news that caused the loss of confidence?

Your thinking and mine align, except that I am not asking why the price did not explode and go to $2.34 and beyond, but why the engagement with these three companies is considered a negative.

****************

By the way, Kailash Prasad, Design Engineer @ Arm, in the following confirms the statements by the representative of Infineon at CES 2024 that what sets AKIDA apart is its capacity to scale and fuse multiple inputs while providing low-powered inference:

“The Brainchip Akida Neuromorphic System-on-Chip, which is a low-power and scalable chip that can integrate multiple DVS and DAS sensors and perform event-based learning and inference”
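
For the technically curious, a minimal sketch of what "fusing multiple inputs" can mean at the event level: interleaving independently timestamped DVS and DAS event streams into one time-ordered stream. The stream format and function names are illustrative assumptions; they say nothing about how Akida actually combines modalities on-chip.

```python
import heapq

def fuse_event_streams(**streams):
    """Merge independently timestamped (t, data) event streams into one
    time-ordered stream, tagging each event with its sensor name. This
    illustrates timestamp-level fusion only."""
    tagged = [[(t, name, data) for (t, data) in events]
              for name, events in streams.items()]
    return list(heapq.merge(*tagged))

dvs = [(0.001, (3, 7, +1)), (0.004, (3, 8, -1))]   # (t, (x, y, polarity))
das = [(0.002, (1,)), (0.003, (2,))]               # (t, (band_index,))
for event in fuse_event_streams(dvs=dvs, das=das):
    print(event)    # both sensors interleaved in timestamp order
```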

My opinion only DYOR
Fact Finder
 
Reactions: 57 users