BRN Discussion Ongoing

I’m guessing this is Pantene.. Maybe not now, but it will happen.. Would’ve thought what’s most obvious is that BRN didn’t re-tweet it or announce it through socials, and BRN wasn’t tagged in..

Otherwise you’d think it’s a market-sensitive Ann, or at least BRN would’ve been up 200%+ yesterday
 
  • Like
  • Thinking
  • Love
Reactions: 7 users

stan9614

Regular
Spot on @stan9614, I am in strong agreement on this: BRN should be putting up general announcements on the companies they are incorporating into their eco-system.

These announcements do not meet the "Price Sensitive Announcements" spec, however the ASX platform also has the option to post straight up "Announcements".

Every day you can see mining companies making announcements about conferences at which they are presenting; these are put up as general announcements which simply inform shareholders of the companies' efforts to promote their business. There is no comeback from the ASX regarding this type of announcement, which is used on the ASX platform every day.

IMO all the ecosystem and partnership milestones should have been announced on the ASX this year and last year. I really think that some of the ecosystem milestones were quite likely price-sensitive material, e.g. the deal with GlobalFoundries to fabricate a new iteration of Akida. That deal demonstrated the ability of Akida to be fabricated by different manufacturers using differing fab formats, which is a major outcome for the Akida tech and as such is worthy of Price Sensitive status.

I listened to Antonio (or was it Sean?) being questioned about BRN's policy re ASX Anns at the AGM this year, and I felt that they don't really understand the Anns platform. There has been talk on here and HC about fluff Anns etc., but IMO the ecosystem BRN is building is not fluff at all; it is really the bridge to get Akida to market, and is incredibly important as such.

BRN really need to pull their head out of the sand on this, it is probably my biggest gripe with management.

I will be contacting T Dawe about this, because what I have read so far has not convinced me that the ASX platform could not be used by BRN to promote what they are achieving in building the business.
Just imagine some potential investors searching for an AI stock to invest in. When they come across BrainChip, they start flipping through the past announcements. Almost all they can see in the past two years are 4Cs and notices of issuing shares. Their interest would be turned off right away, before they even have the patience to discover the great number of partners we have signed for our ecosystem in the past two years. Only a few percent of investors follow the company's news and video clips on a daily basis, and unfortunately that few percent of investors only have so little buying power overall.

Look at what WBT announced recently regarding a commercial agreement with just a foundry; basically it is a no-revenue announcement as well, as they are yet to sign a real customer. We have relationships with TSMC and GlobalFoundries; basically they are able to do volume production once our customers decide to order chips. WBT puts this kind of announcement on the ASX with the intention of publishing it as price sensitive, and later insists on social media that they believe it should be a price-sensitive announcement. But we, for some reason, did not have a dedicated ASX announcement for our foundry partners.
 
  • Like
  • Fire
  • Love
Reactions: 20 users
I’m guessing this is Pantene.. Maybe not now, but it will happen.. Would’ve thought what’s most obvious is that BRN didn’t re-tweet it or announce it through socials, and BRN wasn’t tagged in..

Otherwise you’d think it’s a market-sensitive Ann, or at least BRN would’ve been up 200%+ yesterday
EDIT: Figured it out. Did not grow up in Aus, so was not familiar with this pop culture reference.

So when you say "this is Pantene" you mean it's nothing? Sorry, not sure I follow that phrase or expression.

Cheers
 
Last edited:
  • Like
  • Haha
Reactions: 4 users

buena suerte :-)

BOB Bank of Brainchip
EDIT: Figured it out. Did not grow up in Aus, so was not familiar with this pop culture reference.

So when you say "this is Pantene" you mean it's nothing? Sorry, not sure I follow that phrase or expression.

Cheers
'Pantene'....Wash...Rinse and repeat..! :)
 
Last edited:
  • Haha
  • Like
Reactions: 7 users

Taproot

Regular
Zounds! never was I so bethumped with words since I called my brother's father Dad.
He cudgels our ears.
He gives the bastinado with his tongue ...

If my memory serves me well, Syntiant goes in for Frankenstein NNs.

To understand what Syntiant are claiming as their invention, it is necessary to look at the claims, claim 1 beginning at the bottom of column 14.

They are claiming:

an IC for detecting signals off-line, including:
a host processor with
a co-processor
(presumably the co-processor?) to receive a streaming signal from a sensor,
the co-processor having a recognition network to perform recognition tasks,
the co-processor transmitting result (presumably of the recognition tasks?) to the host processor,
wherein
the co-processor includes a NN,
the NN being adapted to identify target signals in the signal stream,
the target signals being detectable (NB: not identified) using a set of weights (are we doing this in the gym?) while not being on-line,
the host processor being adapted to receive weighting signals from the co-processor (where did the co-processor get the weighting signals from?),
the host processor transmitting the target signals (where to?) indicating detection of user-specified signals (produced out of the user's hat or anatomical recess).

So really, they are attempting to claim an IC including a host processor and a co-processor having a NN with weights for use in identifying key words or other specified characteristics, the co-processor notifying the host processor, and the host processor generating an output signal in response to the co-processor notifying the host processor of a hit.

Now why didn't PvdM think of that?

Claim 6 relates to a method of generating a weight file, an entirely different invention. A patent is only permitted to claim a single invention.

I do really like the precision of the definition in claim 10.

This is characteristic of the degeneration of examination standards at the USPTO.

Perhaps you would like to bring this patent to the attention of Milind Joshi, Brainchip's patent attorney in Perth.
This Syntiant patent cites one of BrainChip's patents.



LOW POWER NEUROMORPHIC VOICE ACTIVATION SYSTEM AND METHOD

The present invention provides a system and method for controlling a device by recognizing voice commands through a spiking neural network. The system comprises a spiking neural adaptive processor receiving an input stream that is forwarded from a microphone, a decimation filter and then an artificial cochlea. The spiking neural adaptive processor further comprises a first spiking neural network and a second spiking neural network. The first spiking neural network checks for voice activity in output spikes received from the artificial cochlea. If any voice activity is detected, it activates the second spiking neural network and passes the output spikes of the artificial cochlea to the second spiking neural network, which is further configured to recognize spike patterns indicative of specific voice commands. If the first spiking neural network does not detect any voice activity, it halts the second spiking neural network.
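The gating behaviour the abstract describes (a small always-on detector network that wakes a larger recognition network only when there is something to hear) can be sketched in a few lines. This is a purely illustrative toy, with made-up spike frames, thresholds, and a lookup-table "recogniser"; it is not BrainChip's implementation:

```python
# Toy sketch of the two-stage gating in the patent abstract.
# All names, thresholds, and patterns here are illustrative assumptions.

def voice_activity_detected(spikes, threshold=3):
    """First 'network': flag voice activity when enough spikes arrive."""
    return sum(spikes) >= threshold

def recognise_command(spikes, patterns):
    """Second 'network': match the spike pattern against known commands."""
    return patterns.get(tuple(spikes))  # None if no command matches

def process_frame(spikes, patterns):
    # Stage 1 gates stage 2: the recogniser only runs (and burns power)
    # when the detector sees activity; otherwise it stays halted.
    if not voice_activity_detected(spikes):
        return None  # second network halted for this frame
    return recognise_command(spikes, patterns)

patterns = {(1, 1, 0, 1): "wake", (1, 0, 1, 1): "stop"}
print(process_frame([0, 0, 0, 0], patterns))  # silence -> None
print(process_frame([1, 1, 0, 1], patterns))  # -> "wake"
```

The point of the structure is power: the cheap detector runs continuously, while the expensive recogniser is only exercised on frames that might contain speech.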
 
  • Like
  • Fire
  • Love
Reactions: 29 users

Drewski

Regular
EDIT: Figured it out. Did not grow up in Aus, so was not familiar with this pop culture reference.

So when you say "this is Pantene" you mean it's nothing? Sorry, not sure I follow that phrase or expression.

Cheers
The Pantene slogan was for beautiful hair,
"It won't happen overnight but it will happen"
 
  • Like
  • Fire
Reactions: 16 users

Vladsblood

Regular
  • Like
  • Fire
Reactions: 6 users
 
  • Fire
  • Like
Reactions: 5 users
If anyone ignores this, it is their own problem/undoing, and they get left in the rear vision mirror. Vlad.
I am not sure I fully agree; great products have been ignored many times in history in favour of something that gained better market acceptance (despite that product being inferior).
 
  • Like
  • Fire
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Haha
  • Wow
  • Like
Reactions: 16 users
  • Haha
Reactions: 6 users

Yoda

Regular
EDIT: Figured it out. Did not grow up in Aus, so was not familiar with this pop culture reference.

So when you say "this is Pantene" you mean it's nothing? Sorry, not sure I follow that phrase or expression.

Cheers
No, there was an ad for Pantene hair conditioner in Australia which had the famous line "It won't happen overnight but it will happen". It means have patience; results are coming.
 
  • Like
Reactions: 10 users
No, there was an ad for Pantene hair conditioner in Australia which had the famous line "It won't happen overnight but it will happen". It means have patience; results are coming.
Thanks mate. I knew what it meant once I heard what the line was, just had not heard the line being that I grew up in Europe ;)
 
  • Like
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Last edited:
  • Like
  • Haha
  • Fire
Reactions: 19 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Haha
  • Like
Reactions: 18 users
  • Fire
  • Like
Reactions: 3 users

Xray1

Regular
Knock Knock
Who's there?
Akiba
Akiba who?
Akibacadabra 🤭


IMO ..... By my reading, it should be "Akida's" NOT Akiba's ...... At least I wish it to be so and wouldn't it be nice to get a correction of same and then have the Co make some good PR out of it....
 
  • Like
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
IMO ..... By my reading, it should be "Akida's" NOT Akiba's ...... At least I wish it to be so and wouldn't it be nice to get a correction of same and then have the Co make some good PR out of it....

I agree Xray, I think it was just a spelling error. For some context, it may be helpful to revisit @chapman89's earlier post.

To be honest, I don't really get how it works, because it sounds like Edge Impulse's customers would be in a position to choose between BrainChip's Akida and one of Arm's offerings. So to an extent it looks like we could almost be competing for the same market, unless I'm missing something?



 
  • Like
  • Fire
  • Love
Reactions: 19 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Akida to the rescue methinks! ⛑️


SYSTEMS & DESIGN
OPINION

Unlocking The Power Of Edge Computing With Large Language Models​



Training and inferencing at the edge enables AI applications with low latency, enhanced privacy, and the ability to function offline.
OCTOBER 30TH, 2023 - BY: PAUL KARAZUBA

In recent years, Large Language Models (LLMs) have revolutionized the field of artificial intelligence, transforming how we interact with devices and the possibilities of what machines can achieve. These models have demonstrated remarkable natural language understanding and generation abilities, making them indispensable for various applications.
However, LLMs are incredibly resource-intensive. Training them on cloud servers with massive GPU clusters is expensive, and the inference of these models on cloud servers can result in substantial latency, poor user experience, and privacy and security risks. Many smartphone, IoT, and automobile makers have set a goal of edge inference deployments of LLMs in future platforms. In this article, we'll explore the significance of deploying large language models on edge devices, as well as the challenges and the road ahead.

Moving LLMs to the edge: Why?​

It would be impossible to discuss every reason for edge deployment of LLMs, which may be industry, OEM, or LLM-specific. For this article, we will address five of the more prevalent reasons we hear.
One of the primary motivations for moving LLM inference to the edge is reduced connectivity dependency. Cloud-based LLMs rely on a stable network connection for inference. Moving LLM inference to the edge means applications can function with limited or no network connectivity. For instance, the LLM could be the interface to your notes, or even your whole phone, regardless of your 5G strength.
Many LLM-based applications depend on low latency for the best user experience. The response time of a cloud-based LLM depends on the stability and speed of the network connection. When inference occurs locally, the response time is significantly reduced, leading to a better user experience.
Edge computing can enhance privacy and data security. Since data processing happens on the local device, attack surfaces are significantly reduced versus a cloud-based system. Sensitive information doesn’t need to be sent over the network to a remote server, minimizing the risk of data breaches and providing users more control over their personal information.
Personalization is another key motivator for edge deployments, not only in inference but in training. An edge-based LLM can learn how the device user speaks, how they write, etc. This allows the device to fine-tune models to cater to the user’s specific personality and habits, providing a more tailored experience. Doing so on the edge can add additional assurance of privacy to the user.
The final motivator we will address in this article is scalability. Edge devices are deployed at scale, making it possible to distribute applications across a wide range of devices without overloading central servers.

Challenges in deploying large language models on edge devices​

While the advantages of deploying LLMs on edge devices are clear, there are several challenges that developers and organizations must address to ensure success. As before, there are more than we will discuss below.
Let’s first address resource constraints. Compared to cloud servers, edge devices have limited processing power, memory, and storage. Adapting LLMs to run efficiently on such devices will be a significant technical challenge. After all, large language models are precisely that: large. Shrinking these models without sacrificing performance is a complex task, requiring optimization and quantization techniques. While many in the AI industry are hard at work doing this, successfully reducing LLM size is going to be mandatory for successful edge deployment, coupled with use-case-tailored NPU (Neural Processing Unit) deployments.
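As a toy illustration of the quantization idea mentioned above, here is a minimal symmetric 8-bit post-training quantization of a weight list. Real toolchains use calibrated, often per-channel schemes and fused integer kernels; the function names and numbers here are illustrative assumptions, kept in pure Python for clarity:

```python
# Minimal sketch of symmetric int8 post-training quantization.
# Illustrative only; production schemes are considerably more involved.

def quantize_int8(weights):
    """Map floats to int8 range [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.08, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each weight now fits in 1 byte instead of 4 (or 2), at the cost of a
# rounding error bounded by half the scale step per weight.
print(max(abs(a - w) for a, w in zip(approx, weights)))
```

The storage win is the point: an int8 weight takes a quarter of the memory of a float32 one, which is exactly the kind of shrinkage edge deployment of LLMs depends on.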
Energy efficiency is also a huge challenge. Running resource-intensive models on battery-powered devices can drain the battery quickly. Both developers and chip architects need to optimize their designs for energy efficiency to not create noticeable adverse effects on battery life.
Security requirements of LLMs, and by extension any AI implementation, are different from more traditional processors and code. Device OEMs must adapt to this and ensure privacy and data security is maintained. Even though edge computing may enhance data privacy versus cloud-based implementations, it also brings challenges in terms of securing data stored on edge devices.
A final challenge to consider is compatibility. LLMs may simply not be compatible with all edge devices. Developers must ensure that models are either developed which run on various hardware and software configurations, or that tailored hardware and software will be available to support the custom implementations.

The future of edge-deployed large language models​

The large-scale deployment of LLMs on edge devices is not a question of if, but rather a question of when it happens. This will enable smarter, more responsive, and privacy-focused applications across various industries. Developers, researchers, and organizations are actively working to address the challenges associated with this deployment, and as they do, we can expect more powerful and efficient models that run on a broader range of edge devices.
The synergy of large language models and edge computing opens up a world of possibilities. With low latency, enhanced privacy, and the ability to function offline, edge devices become more useful and versatile than ever before.

 
  • Like
  • Love
Reactions: 15 users

McHale

Regular
FYI, concerning Peter's re-election as a director: the vote against was 109,789,182 (~27.94% of votes cast)

"Tuesday, 24 May 2022:
In accordance with Listing Rule 3.13.2 and Section 251AA(2) of the Corporations Act, BrainChip Holdings Ltd (ASX:BRN) provides the following information with respect to the results of its Annual General Meeting held today.

Resolution 1: Adoption of Remuneration Report (Ordinary resolution, decided by poll, s250U applies) - Carried
Voted for: 262,519,106 (88.12%) | Voted against: 35,383,729 (11.88%) | Abstained: 94,770,967
Proxies received in advance: For 195,936,865 | Against 35,042,053 | Abstain 94,166,812 | Discretion 12,950,370

Resolution 2: Re-election of Peter Van Der Made as Director (Ordinary resolution, decided by poll) - Carried
Voted for: 283,100,194 (72.06%) | Voted against: 109,789,182 (27.94%) | Abstained: 172,514,611
Proxies received in advance: For 223,140,439 | Against 101,974,925 | Abstain 172,511,110 | Discretion 12,554,811

3 Elect
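As a quick sanity check, the quoted percentages for Resolution 2 are consistent with the raw vote counts when computed over for-plus-against votes, i.e. with abstentions excluded:

```python
# Verify the Resolution 2 percentages from the AGM results quoted above.
votes_for = 283_100_194
votes_against = 109_789_182
total_cast = votes_for + votes_against  # abstentions excluded

pct_for = 100 * votes_for / total_cast
pct_against = 100 * votes_against / total_cast
print(round(pct_for, 2), round(pct_against, 2))  # 72.06 27.94
```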
Thanks for your response @Xray1; so my recollection of that vote against Peter's re-election was correct, and as you have put it, it was in 2022.

So IMO there was no way retail holders would have supported that vote against PvDM. I didn't say it in my previous post on this subject, but I wouldn't be surprised if these votes at the last 2 AGMs were the result of institutions playing games, and yes also exercising their rights as shareholders.

It is pure speculation on my part, but that is what I have felt since the 2022 AGM, and I feel the same about what took place at this year's AGM: malevolent actors working at undermining confidence in BRN and undermining price appreciation.

I welcome any feedback, or other research to give more flesh to the bones of this matter.
 
  • Like
  • Love
  • Fire
Reactions: 21 users