BRN Discussion Ongoing

skutza

Regular
I disagree with this. A negative post here has never killed anything for me. As I said, as soon as I read something that seems unfounded, emotional and worthless, I move on and ignore it. Like I said, we all have a choice, and that is mine. Anyway, like you said, these posts steal oxygen, so enough said on this. I'm off to read anything that's actually worthwhile, peace out.
Why do people get upset if someone posts something negative?
Because it steals the oxygen from the room.
It takes time, and effort to research what the company might be doing, and at the early stages of a company this is very exciting.
Negative posts that are also lazy and don't offer a balanced view, kill it for everyone.
 
  • Like
  • Love
Reactions: 8 users
I don’t understand the level of some people’s emotions, and how, when the share price is where it is, some lose sight of the fundamentals.

The company’s fundamentals have never ever been stronger than they are now.
Yes, we have had “headwinds” and macroeconomic challenges, inflation; it’s all been negative.
But at a time when companies are laying off thousands in the tech industry, Brainchip is trying to aggressively hire all over the world, and look at the quality of people they’ve hired: absolute world-class standards. These people, such as Nandan, Duy-Loan and Chris Stevens, have come from very successful companies; mind you, Nandan came from a trillion-dollar company and was in charge of the successful adoption of “Hey Alexa”. Do you really think these high-caliber recruits are joining a sinking ship? They’ve had a look under the hood and are amazed; look at what they’ve all said when they’ve joined us. Go and have a look at Sean’s LinkedIn and what he’s said about Brainchip. I mean, this guy is a Silicon Valley board member??? Those are some big claims to make if they weren’t going to come true.

Off the top of my head, in the last 12 or so months we’ve had Mercedes, ISL, VVDN, ARM, SiFive, Intel, 3 universities and Prophesee join us; Socionext, Renesas, VVDN and NVISO displaying Akida in real-world applications for the first time; Renesas taping out “mass volume” as we speak; and all the other achievements, despite everything that’s happened and is happening in the world. And some are losing their cool right at the last hurdle????

If the share price was, let’s say, $2, would those who are becoming impatient still be saying the same things? I bet absolutely not. But see, what’s wrong with that is that NOTHING has fundamentally changed; it’s STRONGER THAN IT’S EVER BEEN.

The share price will catch up to where we all know it should be, but even the most successful companies in the world are facing struggles, so why would Brainchip be immune to it all? I just don’t get it.

Yes, people are allowed to vent frustration and vent their opinions, no matter how negative or positive, but god damn, don’t forget the actual facts of what’s in front of us.

The tipping point is Renesas taping out mass volume this year. God damn, if you all say you know Brainchip will be successful, then buy more shares?!?! Or sell and move on, honestly.

Stop making up your own timelines to suit your narrative only to be let down. Focus your energy on the facts.

Renesas will finish taping out sometime towards the end of 2023, so I’m personally winding back my expectations of anything before then so I’m not disappointed. Will we get more licensing deals before that? Well, who knows; time will tell.
Brilliant post @chapman89 !!
 
  • Like
  • Love
  • Fire
Reactions: 14 users
I disagree with this. A negative post here has never killed anything for me. As I said, as soon as I read something that seems unfounded, emotional and worthless, I move on and ignore it. Like I said, we all have a choice, and that is mine. Anyway, like you said, these posts steal oxygen, so enough said on this. I'm off to read anything that's actually worthwhile, peace out.
I think the key point is that there’s constructive criticism that’s well thought through, and then there’s just random speculation that’s really unfounded. That can go both ways, negative and positive, and it just clogs up the forum unnecessarily. There are so many comments going through now that it’s pretty easy to miss something important.
 

stuart888

Regular
And Sean just happened to ask about Transformers… I think there’s a hint there!!!
Then after the Transformers question was answered, Sean immediately says "We will have to come back soon for more on Transformers"!

Most likely he has hidden confidence because they have benchmarked this against other methods.

Just a hunch, but I don't think you start chatting about something unless you are very sure of best-in-class performance metrics. Some of the customers would also likely already know these low-power performance metrics. ⏰
 
  • Like
  • Fire
  • Love
Reactions: 48 users

Damo4

Regular
Love it
“Neuromorphic technologies make efficient onboard AI possible. In a recent collaboration with an automotive client, we demonstrated that spiking neural networks running on a neuromorphic processor can recognize simple voice commands up to 0.2 seconds faster than a commonly used embedded GPU accelerator, while using up to a thousand times less power. This brings truly intelligent, low latency interactions into play, at the edge, even within the power-limited constraints of a parked vehicle.”

Am I getting ahead of myself or is this Akida? Who else could provide this solution for Accenture?

Edit: I guess this could be in reference to Mercedes using the voice feature?
 
  • Like
  • Fire
  • Thinking
Reactions: 16 users

Mugen74

Regular
Hopefully now, after this podcast, we can put to rest the speculation about our CEO’s performance and what he brings to the table for Brainchip!
 
  • Like
  • Love
  • Fire
Reactions: 60 users

Mccabe84

Regular
Am I getting ahead of myself or is this Akida? Who else could provide this solution for Accenture?

Edit: I guess this could be in reference to Mercedes using the voice feature?
Why would someone go on a podcast with a little-known company if they weren’t going to use the product? That’s the thought I’ve had every time someone from another company is on the podcast.
 
  • Like
  • Fire
  • Love
Reactions: 42 users

(image attachment)
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Learning

Learning to the Top 🕵‍♂️

Fantastic thelitteshort,

Thanks,

What a great Podcast.

Transformer = Transform the future of AI (JMHO)

"Exciting time!"

Learning 🏖
PS: FF, for those who want to be 'Learning to the Top 🕵‍', one has to learn from one of the top researchers. Please share your knowledge once the restraining order is passed. 😁
 
Last edited:
  • Like
  • Love
  • Haha
Reactions: 31 users

Bombersfan

Regular
And Sean just happened to ask about Transformers… I think there’s a hint there!!!
It definitely can’t be a coincidence that the company has mentioned transformers several times over the last few months, and with significant enhancements to the IP offering due soon, it is looking promising.
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Taproot

Regular
And then Sean said straight after that our (Brainchip) customers are telling them 2023/24 will be the year.

So you heard it from Sean’s mouth, the CEO of Brainchip, that 2023/24 will be the year.

The CTO of Accenture also said that Brainchip is already successful, so here we have the CTO of a US$180 billion company saying Brainchip is successful, yet there are those who will still not believe it….
I love how these 2 guys are good mates.
Sean has been with Brainchip for 14 months now and you can bet he would have been straight in touch with his mate at Accenture the minute he got his head around the tech.
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Taproot

Regular
I think the big takeaway from the podcast is that we are firmly on the radar of the world’s biggest tech consulting firm, which is helping the largest companies in the world to transform their businesses using technology. Arguably, Accenture would have to be one of the strongest channels to market for a business like Brainchip in the world.
Arguably, Accenture would have to be one of the strongest channels to market for a business like Brainchip in the world.

Absolutely spot on @Moonshot
I wouldn't say arguably, I'd say that's a statement of fact !!
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Taproot

Regular
This was a good article this month on transformers.


Interesting that one of the downsides of models built on the RNN (recurrent neural network) architecture is this:

The downside of the models created using RNN architecture is that they have short-term memory — they are really bad at analyzing long blocks of text. By the time the model reaches the end of the long block of text, it will forget everything that happened at the beginning of the block. And RNN is really hard to train because it cannot parallelize well (it processes words sequentially, limiting opportunities to scale the model performance).


hmmm BRN loves bringing up the word transformers

Akida2 comes with LSTM (long short-term memory).


A long short-term memory network is a type of recurrent neural network (RNN). LSTMs are predominantly used to learn, process, and classify sequential data because these networks can learn long-term dependencies between time steps of data.
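For anyone who hasn't seen one in code, here's a minimal sketch of an LSTM classifier in plain PyTorch (purely illustrative, nothing to do with Akida's actual implementation) carrying a hidden state across a long token sequence:

```python
# Minimal LSTM classifier sketch (standard PyTorch, illustrative only).
import torch
import torch.nn as nn

class TinyLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)           # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)          # h_n holds the final hidden state
        return self.head(h_n[-1])           # classify from the last hidden state

model = TinyLSTMClassifier()
tokens = torch.randint(0, 1000, (4, 200))   # 4 toy sequences, 200 tokens each
print(model(tokens).shape)                  # torch.Size([4, 2])
```

The gating inside the LSTM cell is what lets it hold onto information across those 200 steps instead of forgetting the start of the sequence.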

One of the current issues with ChatGPT etc. is that it doesn't understand what it is producing.

Language models created on top of the transformers architecture do not have access to the meaning of the information they analyze. LLMs do natural language processing (NLP), but they do not perform natural language understanding (NLU). As a result, any claims that GPT or BERT is true AI are false.

We are hearing about the importance and growth of NLP, and you can see the huge valuation of ChatGPT's creator as an example.

If Akida2 is the stepping stone for transformers to understand what they are producing over long sequences of data, this will be like striking oil!

Our CEO has said BRN should be worth many multiples of its current price, and so it should be once we start landing these opportunities. We are getting close!
Yes, GPT 3 is stupid.
Akida will make it smart.
 
  • Like
  • Love
Reactions: 9 users

Taproot

Regular
Agree. If we have an SNN-based transformer network at the edge, think of the market opportunity for ChatGPT-like functionality in edge devices. ChatGPT's creator is valued at circa US$28 billion.

Microsoft reportedly plans to invest $10 billion in creator of buzzy A.I. tool ChatGPT

PUBLISHED TUE, JAN 10 2023 7:00 AM EST · UPDATED TUE, JAN 10 2023 6:44 PM EST

KEY POINTS
  • Microsoft is set to invest $10 billion in OpenAI as part of a funding round that would value the company at $29 billion, news site Semafor reported Tuesday.
  • Microsoft will reportedly get a 75% share of OpenAI’s profits until it makes back the money on its investment, after which the company would assume a 49% stake in OpenAI.
  • A bet on ChatGPT could help Microsoft boost its efforts in web search, a market dominated by Google.
 
  • Like
  • Fire
Reactions: 16 users

stuart888

Regular
On the addition of LSTM and Transformer Neural Networks solutions.

Very likely these flexible, multi-purpose solutions which Brainchip can produce will make "sealing the deals" a whole lot easier once it all becomes official.

I would assume any company innovating in edge SNN solutions and spending millions would take a lot of comfort from knowing that Brainchip can handle many solutions on an ongoing basis. They would see that Brainchip's innovation never stops!

Like Mercedes said, the new cutting-edge AI/ML solutions would roll out in phases over the next few years.

Oh my, the number of use cases from adding on the LSTM and Transformer neural networks will be seemingly endless. 🌪️
 
  • Like
  • Fire
  • Love
Reactions: 50 users

Build-it

Regular
At the 16:39 mark, does Jean-Luc mention the 4th revolution?

Anyone out there looking at the big picture: the 4th revolution is it, and it is happening now.

Edge Compute
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Diogenese

Top 20
Hi
Hi @Diogenese

After reading this article by Sally Ward Foxton. https://www.eetimes.eu/get-ready-for-transformational-transformer-networks/

I have a question.

Does Brainchip's Akida 1000 usage of N-of-M coding and JAST rules currently provide them with an advantage over the competition regarding finding word context within NLP, over and above power expenditure?

If so, could one then realistically assume this advantage would increase exponentially with the introduction of transformers and LSTM?


TT
TT,

I know very little about transformers, but one thing stands out - the size of the "model library". This will be too large to implement on an edge device like Akida. SWF's article refers to Cerebras' wafer-scale SoC. That could be 30 cm diameter.

[Edit: Remember that 256 Akida1s can be assembled together]

but apparently Wajahat Qadeer (Kinara - https://kinara.ai/ ) believes that the databases can be compressed to make them practicable at the edge, in mobile phones.

"There are ways to reduce the size of transformers so that inference can be run in edge devices, Qadeer said. “For deployment on the edge, large models can be reduced in size through techniques such as student-teacher training to create lightweight transformers optimized for edge devices,” he said, citing MobileBert as an example. “Further size reductions are possible by isolating the functionality that pertains to the deployment use cases and only training students for that use case.”

In the student-teacher method for training neural networks, a smaller student network is trained to reproduce the outputs of the teacher network.

Techniques like this can bring transformer-powered NLP to applications like smart-home assistants, where consumer privacy dictates that data doesn’t enter the cloud. Smartphones are another key application, Qadeer said.

“In the second generation of our chips, we have specially enhanced our efficiency for pure matrix-matrix multiplications; have significantly increased our memory bandwidth, both internal and external; and have also added extensive vector support for floating-point operations to accelerate activations and operations that may require higher precision,” he added."
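For anyone unfamiliar with the student-teacher idea, a minimal sketch in plain PyTorch looks something like this (purely illustrative, with made-up toy models - not MobileBERT's or Kinara's actual pipeline):

```python
# Toy student-teacher (knowledge distillation) training loop - illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in "teacher" (large) and "student" (small); in practice the teacher is a big pre-trained model.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature that softens the output distributions

for step in range(100):
    x = torch.randn(32, 128)                  # stand-in for a batch of real inputs
    with torch.no_grad():
        teacher_logits = teacher(x)           # the teacher's outputs become the training target
    student_logits = student(x)
    # The student is trained to reproduce the teacher's (softened) output distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The end result is a much smaller network that mimics the big one closely enough to fit within edge constraints.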

Given that Kinara uses matrix multiplication and floating-point operations, I think that using N-of-M coding can improve on the Kinara model.

This is a patent application by Qadeer:

US2021174172A1 METHOD FOR AUTOMATIC HYBRID QUANTIZATION OF DEEP ARTIFICIAL NEURAL NETWORKS
(patent figure attachment)




The method includes, for each floating-point layer in a set of floating-point layers: calculating a set of input activations and a set of output activations of the floating-point layer; converting the floating-point layer to a low-bit-width layer; calculating a set of low-bit-width output activations based on the set of input activations; and calculating a per-layer deviation statistic of the low-bit-width layer. The method also includes ordering the set of low-bit-width layers based on the per-layer deviation statistic of each low-bit-width layer. The method additionally includes, while a loss-of-accuracy threshold exceeds the accuracy of the quantized network: converting a floating-point layer represented by the low-bit-width layer to a high-bit-width layer; replacing the low-bit-width layer with the high-bit-width layer in the quantized network; updating the accuracy of the quantized network; and, in response to the accuracy of the quantized network exceeding the loss-of-accuracy threshold, returning the quantized network.
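Reading that abstract, the loop it describes is roughly the following (my own toy numpy paraphrase of the idea, not the patented implementation):

```python
# Toy sketch of the hybrid-quantization loop described in US2021174172A1 (a paraphrase, not the patent's code).
import numpy as np

rng = np.random.default_rng(0)
weights = [rng.normal(size=(16, 16)) for _ in range(4)]  # four toy fully-connected layers
x = rng.normal(size=(8, 16))                              # calibration inputs

def quantize(w, bits):
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def run(layers, inp):
    out = inp
    for w in layers:
        out = np.maximum(out @ w, 0.0)                     # linear + ReLU
    return out

reference = run(weights, x)
low_bits, high_bits = 4, 8

# 1. Quantize every layer to low bit-width and record each layer's output deviation.
quant = [quantize(w, low_bits) for w in weights]
deviation = [np.mean(np.abs(run(weights[:i] + [quant[i]] + weights[i + 1:], x) - reference))
             for i in range(len(weights))]

def error(layers):
    return np.mean(np.abs(run(layers, x) - reference))

# 2. Restore the worst-deviating layers to higher bit-width until overall error is acceptable.
threshold = 0.05 * np.mean(np.abs(reference))
for i in sorted(range(len(quant)), key=lambda i: deviation[i], reverse=True):
    if error(quant) <= threshold:
        break
    quant[i] = quantize(weights[i], high_bits)             # swap the worst offender back up

print("final error:", error(quant), "vs threshold:", threshold)
```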
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 28 users
Thanks for your response, which completely ignores the two points I raised.

What I have decided is that the way in which you and others have turned this general discussion thread into a place of unfounded and non factual negative commentary and empty rhetoric creates an environment which no longer favours serious research and analysis.

Everyone is entitled to have an opinion; indeed, like backsides, everyone has one.

My investment style is not based upon opinions but facts. Factual research no longer dominates the discussion and therefore it no longer has any value as far as I am concerned.

The facts that have been disclosed this week I would have found in one tenth of the time I have spent addressing the sort of rubbish you and others have heaped on here. It was a fruitless and useless task.

This place no longer serves the intended purpose so I bid you and others farewell.

Enjoy the new home you have made for yourselves. I am sure many from HC will love to call in and join you as you descend into abject, righteous self-pity, stamping your little feet, demanding things happen to your timetable and ignoring the facts.

Regards
FF

AKIDA BALLISTA
Please don’t leave, @Fact Finder. You have the right to, of course, but an alternative would be to put the naysayers on ignore and not waste your energy on them. For the sake of a few loud voices, you will be leaving all those who highly value your immense contributions bereft 😩
 
  • Like
  • Love
  • Fire
Reactions: 41 users

Taproot

Regular
Hi

TT,

I know very little about transformers, but one thing stands out - the size of the "model library". This will be too large to implement on an edge device like Akida. SWF's article refers to Cerebras' wafer-scale SoC. That could be 30 cm diameter.


Declaration: I know nothing about transformers!
Interesting discussion in Podcast from around 12.20min mark about models.
In the podcast Jean-Luc refers to transformers as super models.
Referencing Google Translate as an example: 5 years ago it required 500,000 lines of code to make it happen. Today, with the help of transformers, it only requires 500 lines.
Also interesting discussion towards the end about Data Mesh strategy.
 
  • Like
  • Fire
  • Love
Reactions: 25 users