BRN Discussion Ongoing


[Image attachment: IMG_1286.jpeg]
 
Reactions: 42 users

mcm

Regular
Reactions: 4 users

AARONASX

Holding onto what I've got
Reactions: 7 users

Boab

I wish I could paint like Vincent
Anyone have an idea as to who the customer might be that Sean is meeting with in Australia?
I'm buying him lunch and he's going to tell me everything
No Doubt About it.
 
Reactions: 28 users

mcm

Regular
Reactions: 7 users

JDelekto

Regular
Exactly.
And finally answer your legitimate question: did Sean Hehir indeed tell select participants of that by-invitation-only shareholder meeting in Sydney in November that Brainchip had succeeded in developing and running a number of LLMs on Akida 2.0 (as claimed by a poster here on TSE)? And if true, why have the remaining shareholders (who constitute the vast majority) still not been informed about this amazing breakthrough via official channels four months later?

I also wonder why none of the other attendees of that Sydney gathering have so far shared with us their recollection of what Sean Hehir actually said. 🤔

There is something I would like to point out about LLMs: they can have large memory requirements, depending on model quality, parameter count, and quantization. For example, a 70-billion-parameter CodeLlama Instruct model (useful for writing computer code) with 8-bit quantization requires 70+ GB of RAM. You can still get decent results and save space with a lower quantization, but quality suffers if the quantization is too low.

That same CodeLlama Instruct model with 4-bit quantization requires about 40 GB of RAM. I have a desktop with 64 GB, and I can partially offload the model to 24 GB of GPU RAM on an Nvidia RTX 3080 card. The more I move off the CPU and onto the video card (which does all the math faster), the faster my chat responses. Unless they play a lot of resource-hungry video games or work with AI, most consumers will not have desktop PCs with that much memory to run those models.

Now, given a base system with a CPU and enough memory to hold a much smaller model (say 7 billion parameters instead of 70), quantized down to about 4 bits, you're looking at slightly under 4 GB for the model. I think that is something an Edge device can handle, and it may be useful depending on the use case.

So I think that, technically, Akida could run an LLM if it were properly "massaged" (I believe I ran across a GitHub repo where an individual was attempting to run an LLM with a Spiking Neural Network), but running an LLM the size and quality of GPT-4 or Claude 3 would not be practical.
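The back-of-the-envelope arithmetic here is just parameters times bytes per weight. A small sketch (the 10% overhead factor is an assumption; real deployments also need memory for the KV cache and activations on top of the raw weights):

```python
def model_weight_gb(params_billions, bits, overhead=1.1):
    """Rough RAM needed just to hold an LLM's weights.

    params_billions : parameter count in billions (e.g. 70 for CodeLlama-70B)
    bits            : quantization width per weight (8, 4, ...)
    overhead        : assumed ~10% slack for runtime bookkeeping
    """
    bytes_per_param = bits / 8.0
    # billions of params * bytes each = gigabytes directly
    return params_billions * bytes_per_param * overhead

print(model_weight_gb(70, 8))  # ~77 GB: the "70+ GB" 8-bit case
print(model_weight_gb(70, 4))  # ~38.5 GB: the ~40 GB 4-bit case
print(model_weight_gb(7, 4))   # ~3.85 GB: small enough for an edge device
```

The numbers line up with the figures quoted above: halving the bit width roughly halves the footprint, which is why a 7B model at 4 bits fits comfortably under 4 GB.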
 
Reactions: 24 users
 

overpup

Regular
Fair enough, but I still disagree 😛

The definition of something, with changing relative context, cannot remain constant, in my opinion.

Not even sure if that made sense, but it's past even my bedtime...
That old saying attributed to Einstein: "Insanity is doing the same thing over and over and expecting a different result"...
You can tell Albert never worked with computers!
 
Reactions: 5 users

IloveLamp

Top 20
[Image attachments: 1000013960.jpg, 1000013957.jpg, 1000013965.jpg, 1000013962.jpg]
 
Reactions: 41 users

Diogenese

Top 20
There is something I would like to point out about LLMs: they can have large memory requirements, depending on model quality, parameter count, and quantization. For example, a 70-billion-parameter CodeLlama Instruct model (useful for writing computer code) with 8-bit quantization requires 70+ GB of RAM. You can still get decent results and save space with a lower quantization, but quality suffers if the quantization is too low.

That same CodeLlama Instruct model with 4-bit quantization requires about 40 GB of RAM. I have a desktop with 64 GB, and I can partially offload the model to 24 GB of GPU RAM on an Nvidia RTX 3080 card. The more I move off the CPU and onto the video card (which does all the math faster), the faster my chat responses. Unless they play a lot of resource-hungry video games or work with AI, most consumers will not have desktop PCs with that much memory to run those models.

Now, given a base system with a CPU and enough memory to hold a much smaller model (say 7 billion parameters instead of 70), quantized down to about 4 bits, you're looking at slightly under 4 GB for the model. I think that is something an Edge device can handle, and it may be useful depending on the use case.

So I think that, technically, Akida could run an LLM if it were properly "massaged" (I believe I ran across a GitHub repo where an individual was attempting to run an LLM with a Spiking Neural Network), but running an LLM the size and quality of GPT-4 or Claude 3 would not be practical.

Hi JD,

That's some impressive technowizardry.

As you know, PvdM's "4 Bits are enough" white paper discusses the advantages of 4-bit quantization.

https://brainchip.com/4-bits-are-enough/
...
4-bit network resolution is not unique. Brainchip pioneered this Machine Learning technology as early as 2015 and, through multiple silicon implementations, tested and delivered a commercial offering to the market. Others, such as IBM, Stanford University and MIT, have recently published papers on its advantages.

Akida is based on a neuromorphic, event-based, fully digital design with additional convolutional features. The combination of spiking, event-based neurons, and convolutional functions is unique. It offers many advantages, including on-chip learning, small size, sparsity, and power consumption in the microwatt/milliwatt ranges. The underlying technology is not the usual matrix multiplier, but up to a million digital neurons with either 1, 2, or 4-bit synapses. Akida’s extremely efficient event-based neural processor IP is commercially available as a device (AKD1000) and as an IP offering that can be integrated into partner System on Chips (SoC). The hardware can be configured through the MetaTF software, integrated into TensorFlow layers equating up to 5 million filters, thereby simplifying model development, tuning and optimization through popular development platforms like TensorFlow/Keras and Edge Impulse. There are a fast-growing number of models available through the Akida model zoo and the Brainchip ecosystem.

To dive a little bit deeper into the value of 4-bit, in its 2020 NeurIPS paper IBM described the various pieces that are already present and how they come together. They prove the readiness and the benefit through several experiments simulating 4-bit training for a variety of deep-learning models in computer vision, speech, and natural language processing. The results show a minimal loss of accuracy in the models’ overall performance compared with 16-bit deep learning. The results are also more than seven times faster and seven times more energy efficient. And Boris Murmann, a professor at Stanford who was not involved in the research, calls the results exciting. “This advancement opens the door for training in resource-constrained environments,” he says. It would not necessarily make new applications possible, but it would make existing ones faster and less battery-draining “by a good margin.”

While I have some understanding of the visual aspect, I find the NLP, covering both speech and text, more perplexing mainly because of the need for context or "attention", but, as usual, Prof Wiki has some useful background:

https://en.wikipedia.org/wiki/Natural_language_processing
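For intuition on what 4-bit quantization actually does to a weight, here is a toy uniform affine quantizer. This is an illustrative sketch only, not BrainChip's or IBM's actual scheme; it just shows how 16 levels can approximate a range of floats:

```python
def quantize_dequantize(values, bits=4):
    """Uniform affine quantization: snap each float to one of 2**bits
    evenly spaced levels spanning [min, max], then map back to floats.
    Returns the reconstructed values and the step size used."""
    levels = 2 ** bits - 1                    # 15 steps for 4-bit
    lo, hi = min(values), max(values)
    step = (hi - lo) / levels or 1.0          # guard against constant input
    codes = [round((v - lo) / step) for v in values]   # integers 0..15
    return [lo + c * step for c in codes], step

weights = [-0.82, -0.31, 0.05, 0.44, 0.91]
recon, step = quantize_dequantize(weights, bits=4)
# Reconstruction error is bounded by half a quantization step.
assert all(abs(a - b) <= step / 2 + 1e-9 for a, b in zip(weights, recon))
```

The bounded per-weight error is why accuracy degrades gracefully down to 4 bits for many networks, and why storage drops by 8x versus 32-bit floats.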
 
Reactions: 28 users

McHale

Regular
Reactions: 2 users

McHale

Regular
I guess he doesn't think BrainChip has reached the "Tipping point" yet then? 🤔

And he has an unexplainable "new" ethos?

View attachment 58606

He will do much better for us over there if he can garner the support and following he once had.

There are many more eyes there that will notice if he can manage "Top Rated" posts.

It's also a good thing he's changed his approach, as the smarter trolls there will tear shreds out of his commentary if he uses information the way he did here.

However, personally, I think BrainChip is now beyond the influence of share forums and has reached the tipping point as far as global recognition is concerned, anyway.

We are on the world stage now and are rapidly moving from speculative grade to investment grade.

A couple of new IP deals would seal that opinion.

Any potential investors who find BRN an attractive "punt", influenced by reading share-forum posts, are just fortunate, in my opinion.
All very interesting Mr Dingo, but I thought that FF despised HC, so it's curious he's put up some posts over there (there was another the day before, I think); he must've been really annoyed at something.

However, FF has a track record of leaving in a huff; if my memory serves me correctly, it happened twice at HC and now twice here. Regardless, he is an excellent poster and researcher, so it's disappointing he's gone again.
 
Reactions: 37 users

Rskiff

Regular
All very interesting Mr Dingo, but I thought that FF despised HC, so it's curious he's put up some posts over there (there was another the day before, I think); he must've been really annoyed at something.

However, FF has a track record of leaving in a huff; if my memory serves me correctly, it happened twice at HC and now twice here. Regardless, he is an excellent poster and researcher, so it's disappointing he's gone again.
Unfortunately, with FF posting there, he will be benefiting TMH in terms of traffic to the site, thus enriching those fwits. I will stay loyal to Tsex and never go back to that cesspit.
 
Reactions: 26 users
All very interesting Mr Dingo, but I thought that FF despised HC, so it's curious he's put up some posts over there (there was another the day before, I think); he must've been really annoyed at something.

However, FF has a track record of leaving in a huff; if my memory serves me correctly, it happened twice at HC and now twice here. Regardless, he is an excellent poster and researcher, so it's disappointing he's gone again.
The difference is, though, that he hasn't left and stopped posting; he's just changed forums.

His integrity was called into question here, and instead of being a man and admitting to his mistakes and errors of judgement, he told Frangipani she should "go and get some help"?

And that he felt it no longer necessary to post about BRN, as he had better things to do?

I agree, he is an excellent poster and great researcher who excels at communication and, much more often than not, has a very insightful point of view.
The prodigiousness and quality of his posting was, and is, incredible.

I'm not enamoured, though, of someone who thinks they are beyond reproach and is not able to handle criticism or admit fault.
 
Reactions: 37 users

skutza

Regular
The difference is, though, that he hasn't left and stopped posting; he's just changed forums.

His integrity was called into question here, and instead of being a man and admitting to his mistakes and errors of judgement, he told Frangipani she should "go and get some help"?

And that he felt it no longer necessary to post about BRN, as he had better things to do?

I agree, he is an excellent poster and great researcher who excels at communication and, more often than not, has a very insightful point of view.
The prodigiousness and quality of his posting was incredible.

I'm not enamoured, though, of someone who thinks they are beyond reproach and is not able to handle criticism or admit fault.
Could you imagine living with him?


o_O

But yes he does do an excellent job 95% of the time. One of the best I've seen.
 
Reactions: 11 users

Slade

Top 20
Could you imagine living with him?
A horrible and dumb thing to say, and it makes me also feel like I don’t want to spend much time on this forum.
 
Reactions: 34 users
All very interesting Mr Dingo, but I thought that FF despised HC, so it's curious he's put up some posts over there (there was another the day before, I think); he must've been really annoyed at something.

However, FF has a track record of leaving in a huff; if my memory serves me correctly, it happened twice at HC and now twice here. Regardless, he is an excellent poster and researcher, so it's disappointing he's gone again.
Perhaps he posted it there accidentally?

A bit like when you are going to send a text to your girlfriend and accidentally send it to your wife :)
 
Reactions: 21 users

Makeme 2020

Regular
A Horrible and Dumb thing to say that makes me also feel like I don’t want to spend much time on this forum.
See ya
 
Reactions: 6 users

MDhere

Regular
Ok here I go again @skutza, this is an anonymous forum about a stock that is absolutely incredible. It's not a forum for judging whether one could live with another. Also @DingoBorat, there are two sides to the Frangipani and FF story; both had merits, but both saw different views, and I agree that "go and get help" wasn't exactly positive. It was said in regards to Frangi's bible of a story, which quite frankly wasn't needed on this forum.

Now I admit to watching MAFS, but I certainly don't expect my fellow BRNers to be wondering what underwear I wear and whether I'm a suitable companion depending on my texts on here lol

I for one am happy with some long-term holders posting on HC, as it keeps that stupid site in check a bit (although I have chosen not to, I do frequent it and put likes and so forth up). Though at times I want to throttle that Dean guy lol

And lastly @Slade, Mate (sounds so Aussie when I write that), Mate don't you dare depart from this forum, or all my beer shouting is off! lol

On a side note, I can't get long skip connections out of my head, I think they're also known as long residual connections, as Sean has mentioned them a number of times, stating "our customers" asked for this and we have delivered what they are asking for.

To me that suggests a medical purpose for this long skip connection. The other thing was somatosensory, which I believe will help bring assistance to artificial limb movement. And the US Department of Homeland Security has played with Akida in the past, from what I found when I googled the skip/residual connections.

Oh, and FF said something about Ericsson. I don't see it on our iceberg, nor from our resident flowcharting pencil-pusher A4-paper detective (can't remember, is that you @Esq.111?), or it might be someone else, but I loved it and it probably needs an update?

Cheers fellow BRNers. That's it for me for now, peace to all, and I will no doubt reappear tomorrow as we are all strapped in :)
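For anyone wondering what a skip (residual) connection actually is: it adds a block's input back onto its output, so information can bypass the layers in between; a "long" skip connection does this across many layers rather than one. A minimal toy sketch (illustrative only, not Akida's implementation):

```python
def dense_relu(x, w, b):
    """Toy fully connected layer with ReLU activation."""
    return [max(0.0, sum(xi * wi for xi, wi in zip(x, col)) + bi)
            for col, bi in zip(w, b)]

def residual_block(x, w, b):
    """Skip (residual) connection: output = layer(x) + x.
    A 'long' skip connection does the same thing across a stack of
    layers, letting the original input bypass them entirely."""
    return [yi + xi for yi, xi in zip(dense_relu(x, w, b), x)]

x = [1.0, 2.0]
w = [[0.5, -0.5], [0.1, 0.3]]   # one weight column per output unit
b = [0.0, 0.0]
out = residual_block(x, w, b)   # layer output plus the original input
```

Note how even when the layer output is zeroed by the ReLU, the input still passes through unchanged; that is the property that makes deep (and long-skip) networks easier to train.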
 
Reactions: 40 users