BRN Discussion Ongoing

Diogenese

Top 20
I haven't posted in a while as the forum gets rather toxic at times.

These podcasts at CES, however, do clearly tell me one thing: the team at Brainchip have listened to feedback regarding communication. These have been great little snippets of updates with a few partners, my favourites so far being Onsemi and Inivation.

I sincerely hope Nandan will be bringing Farshad's (Inivation) comments to Sean Hehir's attention ASAP. I emailed Tony to also follow up with Sean, to hopefully benefit from valuable critique from Farshad.
"I have been involved with Brainchip for less than a year, or maybe a year so far...I've been interacting with the company in terms of learning their capabilities"
"it is very promising, I also think there are certain areas that you're not advertising as best as you could"
"Kind of underestimating yourselves"


I really hope the dot joiners can calm down a little and not dismiss posters who don't jump to the premature conclusions that some draw here without facts or announcements. I like this place and hate blatant downramping as much as the next well-intentioned shareholder. But we are still VERY early on in partnerships with some companies, from the sounds of it. And we clearly aren't advertising our full capabilities well enough just yet (one partner's opinion, but they are in a better position to comment on this than any of us). Hopefully this is a very fruitful year for us, and it sounds like a little traction may be headed our way.
Hi D&S,

Good to hear from you.

As usual, that introduction presages a tongue lashing/ear bashing.

This technology is evolving at such a rapid rate that we've met ourselves coming back.

Akida 1 is brilliant groundbreaking technology - digital spiking neural network system on a chip with special sauce (N-of-M coding), using spikes (originally 1-bit) and only activating when an "event" (a change in the input data) occurred.

Previously, the only practicable way of identifying objects in a field of view was with a software program implementing convolutional neural networks (CNNs), either on a CPU (quite slow and power hungry), or a GPU (faster but proportionally more power hungry).

Mostly in academic circles, attempts have been made to implement analog (ReRAM/memristor) CNNs in silicon, but the manufacturing processes and temperature variability have limited the accuracy of such devices.

PvdM's genius was in realizing that digital NNs could avoid these inherent inaccuracies, and in recognizing the genius of Simon Thorpe's N-of-M coding and, just as importantly, how it could be applied in silicon.

This gave rise to Akida 1, a technology years ahead of the competition. The design is highly flexible, allowing for a few nodes (4*NPEs per node), up to a couple of hundred nodes. Akida 1 was applicable to any sensor (video, audio, taste, smell, vibration ... ) so that, with the appropriate model library, it could classify any input signal. Of course, you don't find model libraries just lying around, but there are open source versions available, and Akida 1 has the ability to learn new classes to add to the model on chip. In addition, BRN has developed its own in-house model library "zoo".
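To make the two ideas above concrete, here is a toy Python sketch of rank-order N-of-M coding and event-based activation. It is purely illustrative: the function names and the threshold are my own, not BrainChip's implementation.

```python
import numpy as np

def n_of_m_encode(activations: np.ndarray, n: int) -> np.ndarray:
    """Keep only the n strongest of m inputs as binary spikes and drop
    the rest: the gist of rank-order N-of-M coding (toy sketch)."""
    spikes = np.zeros(activations.size, dtype=np.uint8)
    spikes[np.argsort(activations)[-n:]] = 1   # only the top-n inputs "fire"
    return spikes

def changed_pixels(prev_frame: np.ndarray, frame: np.ndarray,
                   threshold: float = 0.1) -> np.ndarray:
    """Event-based activation: emit an event only where the input changed."""
    return np.abs(frame - prev_frame) > threshold
```

With n=2 over four inputs, only the two strongest fire; and a static scene produces no events at all, which is where the event-driven power savings come from.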

Akida 1 went through a couple of iterations based on customer feedback. Initially it had 1-bit weights and activations - lightning fast and anorexically power sipping.

Customers were prepared to forego portions of these advantages for greater accuracy and somewhat higher power consumption.

So Akida 1 switched to 4-bit weights/activations.

In addition, customers required the ability to use their existing CNN models, so Akida 1 includes CNN2SNN conversion capability.
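The 1-bit versus 4-bit trade-off can be shown with a toy uniform quantizer. This is my own illustration of why customers would trade some speed and power for accuracy, not BrainChip's actual quantization scheme.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Toy uniform symmetric quantizer (quantize then dequantize)."""
    if bits == 1:
        return np.sign(x)                         # binary: keep only the sign
    levels = 2 ** (bits - 1) - 1                  # 4-bit -> 7 levels per side
    scale = max(float(np.max(np.abs(x))) / levels, 1e-12)
    return np.round(x / scale) * scale

# 4-bit values track the original weights far more closely than 1-bit ones,
# at the cost of more bits moved and multiplied per operation.
w = np.array([0.5, -1.0, 0.25])
err1 = np.mean(np.abs(quantize(w, 1) - w))
err4 = np.mean(np.abs(quantize(w, 4) - w))
```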

BRN has been involved with a few leading edge sensor makers for some years - eg, Valeo for lidar, Prophesee for DVS event cameras, both of which are a natural fit for Akida's SNN capabilities. But there is an infinite number of applications for which Akida 1 is the best solution, except for Akida 2.

Our switch to IP only made a rod for our own back, excluding all but those who had the odd $50 million lying around to invest in developing new chips. Not that this is an unworkable business model - ARM does nicely out of it, although it would like a larger slice of the pie.

So now we come to the stage where we are prepared to sell devices including Akida 1, not so much as an income generating enterprise as a capability demonstration of Akida 1 ...

... and, to top it off, Akida 2 blows the socks off Akida 1.

How many EAPs are primed to explode? - well, in all the excitement, I've kinda forgotten myself, so, go ahead punk, make my day!
 
  • Like
  • Love
  • Fire
Reactions: 71 users

7für7

Top 20
In the German forum there is a discussion about a possible reverse split if it continues like this. What is your opinion?
 
  • Haha
Reactions: 1 users
Nope, you failed again… you did not find what I requested below:

“Now unless you can find any evidence that Akida will be used for processing data at the edge of the sensors in the CLA concept…..then all you have is ‘Hope’.”

….and the Award for the most “Hope Akida Inside 🐂💩”…..goes to BRAVO!!!

connect the dots below.

View attachment 54087
we are done….or at least I am!
 
  • Haha
  • Like
  • Fire
Reactions: 8 users
I don't know what I wrote that got such a rise out of you (or tongue lashing, as you so eloquently put it), or what the relevance to my post really is. I wouldn't be invested in a tech company that I didn't believe had fantastic technology.

My post was about a partner of ours critiquing how we advertise our capabilities to partners, and how I hope Brainchip take this feedback on board so we get more sales. It's something I also sent to Tony, because it's great feedback from Farshad. How is improving a function of the business, based on a partner's feedback, an inherently bad thing? We want more sales, yes? Then we are on the same team.

Beyond that, dot joining can be fun, but it should not be taken as gospel and anyone who doesn't agree with a dot join shouldn't be berated because they have an opposing view. It's the very reason this forum becomes toxic at times.

It seems like you just wanted to chew someone out, or I'm on your bad side for whatever reason.

Hope this punk has made your day.
 
  • Like
  • Fire
  • Thinking
Reactions: 14 users

Deadpool

Did someone say KFC
In the German forum there is a discussion about a possible reverse split if it continues like this. What is your opinion?
REVERSE SPLIT PIGS *UCKING ARSE



 
  • Haha
  • Like
Reactions: 4 users
 
  • Like
Reactions: 1 users

DK6161

Regular
Lol, yes that is my main goal here, you caught me out. You are like the Collingwood football club, either love them or hate them. (I am a Cats supporter in case you missed the point) Although I am aware you like the last word and try your best to belittle people, I will also respond because you asked.

"What I would like to know from you though as a person who runs a successful business why you have so readily leapt to the conclusion that Brainchip's executive staff are not successful business people who work to improve staff performance and counsel the staff member, contractor or even the cadet from one of the University programs responsible for the error?" (Jesus I hope not)

I did not imply the majority of what you have written here, but you love to try and play the wordsmith and bend the meaning behind the truth. The truth is, I have not once before commented on the typos that happened previously. My question was how many errors are acceptable? You call anyone that isn't happy with the situation a troll and go off on a rage defending the company by telling everyone to ignore the errors and look at the content.

The errors are small but, for my liking, becoming too frequent, so I spoke up. If someone lost their job over this or was hung, drawn and quartered, then fine, but that is the company's business; if they told anyone what happened, that would again be very unprofessional.

So is this the case? Is the company running in an unprofessional manner? Typos, the wrong photo, pretty basic errors, can happen, and what about inviting a few SH to have a private chat? In the eyes of some (especially the invitees) this was all above board and a non-event, but not all believe this and many voiced their concerns. Me, well I have met and spoken with many execs in companies I hold, so for me it wasn't a huge deal but I respected their view. The company seems great, that's why I have invested, but like some when do we start to question if the problem is not the tech, but the people? Here's a quote from a professional in the field just recently. "There are areas I feel you are not explaining or under advertising as well as you could".

So what is my point? People have other points of view, and you make it difficult to question anything, because you label them as trolls. TBH this place was much better without you, as your research, while seemingly wonderful, has proven to be worthless. Your timelines have been awful and your predictions for revenue and share price have been abysmal.

These are the true FACTS, but alas you still have this overpowering sense of superiority over all. If I had $1 for every time you've told us you're a retired lawyer and some shitty story about it then I would likely have had more revenue than Brainchip.

Why am I letting it out today? Because unlike you, I will leave this forum and no longer post. It is clear people cannot question management or the company here without ramifications from the BRN almighty retired lawyer!!! I'm thinking you were more likely a catholic priest, in all respects.

So good luck with your investment here, Mr Retired Lawyer. I do note you use this title frequently to try to keep your minions in awe, while someone who is proud of what they have achieved in the small business world is considered worthy of ridicule.

To the rest of the holders, good luck. Remember FF has been wrong in all of his predictions to this point (but only because of outside influences, remember, ROFL!), so he's not your messiah, he's just a naughty little boy. Look on the bright side people, only invest what you can afford, and as good as a stock seems, there's a reason people don't put all their eggs in the one basket.

Adios.
I think you have summarised how a lot of people feel when they come to this place. Thank you, and I think we need people like you to stick around to ensure a well-balanced place for discussion and to avoid more people getting fooled by a select few. Would love to buy you a beer if we meet in person.
And NO, I don't have hidden agendas. I am just a pissed off shareholder.
 
  • Like
  • Love
  • Fire
Reactions: 8 users

cosors

👀
  • Thinking
  • Wow
  • Like
Reactions: 4 users

Damo4

Regular
  • Like
  • Fire
Reactions: 11 users

Damo4

Regular
  • Like
  • Love
  • Fire
Reactions: 6 users

cosors

👀
Thank you Cosors, I have updated the link to have the now updated thumbnail and video
Thanks, that was quick!
 
  • Like
Reactions: 4 users

MDhere

Top 20
  • Like
Reactions: 1 users

cosors

👀
This guy here gives a decent rundown on a few AI technologies hopefully rolling out soon.

On a side note: the way I see CES (AKA the world stage), this is the place where big companies (and smaller ones) want to go to announce their groundbreaking technology coming out 'tomorrow'! I speculate that some of the companies there this year that may be using Akida tech have been holding off until they first announce their new product / future tech at CES 2024… once these products start rolling out this year, payments should start rolling in this year also, “watch the financials”... IMO



What do you all think of this?
AMD RYZEN 800G.png

and of this?
Samsung NQ8 AI Gen3 Propcessor.png



I don't want to be more specific about my question for the time being as most of you are much more familiar with the subject than I am and can better judge what this means. I would be very grateful for explanations in context.
 
  • Like
Reactions: 6 users

keyeat

Regular
  • Love
  • Like
Reactions: 2 users

cosors

👀
Morning fellow BRNers, great posts!

I will do some research over the weekend when I'm back at work :) but in the meantime picked up this little beauty, and only fitting I open it at $3.89.
Bring it on BRN 😀
View attachment 54048
off topic
As some of you may know, I love red wine.
I have tried some very good Australian wines because of some of you lovely fellows and conversations here on TSE. I now visit my Aussie store in my town regularly. Because of some of you I know for example that the wines from McLaren Vale are particularly good.
I have also tried one from this winery, Penfolds. Mine was called BIN 28, and I was more interested in the content than the name back then.
Your photo shows the BIN 389. That prompted me to go to the winemaker's website and see what the numbers were all about. But I couldn't find what I was looking for. Can one of you explain to me what the numbers stand for? Could it be that BIN plus the number simply means the vineyard parcel?
 
  • Like
Reactions: 7 users
AMD has always been a thumbs up from me

 
  • Like
  • Haha
Reactions: 6 users

Diogenese

Top 20
Hi cosors,

From memory, AMD uses MACs with 8-bit-and-up integer precision.


TPU Interviews AMD Vice President: Ryzen AI, X3D, Zen 4 Future Strategy and More | TechPowerUp

https://www.techpowerup.com/review/amd-ryzen-interview-ai-zen-4-strategy/

TPU Interviews AMD Vice President: Ryzen AI, X3D, Zen 4 Future Strategy and More by W1zzard, on Jul 13th, 2023, in Processors.

...
When I look at Ryzen AI or XDNA, I think the important thing about that engine is that it's really tuned for low-precision integer operations, so INT8, INT16. Some people are talking about INT4 even. The Ryzen AI engine does not bring new instruction sets necessarily to our new instruction types, or new operator types into that model, it's just highly efficient at multiply-accumulate-collect operations in the same way that you think about the layers in a neural network being able to do that in a way where you get an engine that's just completely built for that. I think that all these different types of execution engines complement each other in some ways.




On-Chip AI Integration is the Future of PC Computi... - AMD Community


1705056963033.png


The image above shows an archetypal neural network on the left and the AMD XDNA adaptive dataflow architecture at the heart of the AMD Ryzen AI engine on the right. The connections running from L1 to L6 simulate the way neurons are connected in the human brain. The Ryzen AI engine is flexible and can allocate resources differently depending on the underlying characteristics of the workload, but the example above works as a proof-of-concept.

Imagine a workload in which each neural layer performs a matrix multiply or convolution operation against incoming data before passing the new values to the next neuron(s) down the line. The AMD XDNA architecture is a dataflow architecture, designed to move data from compute array to compute array without the need for large, power-hungry, and expensive caches. One of the goals of a dataflow architecture is to avoid unexpected latency caused by cache misses by not needing a cache in the first place. This type of design emphasizes high performance without incurring latency penalties while fetching data from a CPU-style cache. It also avoids the increased power consumption associated with large caches.

Advantages of Executing AI on the AMD XDNA Architecture

High performance CPU and GPU technologies are important pillars of AMD's long-term AI strategy, but they aren't as transformative as integration of an AI engine on-die could be. Today, AI engines are already being used to offload certain processing tasks from the CPU and GPU. Moving tasks like background blur, facial detection, and noise cancellation are all tasks that can be performed more efficiently on a dedicated AI engine, freeing CPU and GPU cycles for other things and helping with improved power efficiency at the same time.

Integrating AI into the APU has several advantages. First, it tends to reduce latency and increase performance compared to attaching a device via the PCIe® bus. When an AI engine is integrated into the chip it can also benefit from shared access to memory and greater efficiency through more optimal data movement. Finally, integrating silicon on-die makes it easier to apply advanced power management techniques to the new processor block.

An external AI engine attached via a PCI Express® slot, or an M.2 slot is certainly possible, but integrating this capability into our most advanced “Zen 4” and AMD RDNA™ 3 silicon was a better way to make it available to customers without sacrificing the advantages above. Applications that leverage this local processor can benefit from the faster response times it enables and more consistent performance.

It's an exciting time in AI development. Today, customers, corporations, and manufacturers are evaluating AI at every level and power envelope. The only certainty in this evolving space is that if we could look ahead 5-7 years, we wouldn't "just" see models that do a better job at the same tasks that ChatGPT, Stable Diffusion, or Midjourney perform today. There will be models and applications for AI that nobody has even thought of yet. The AI performance improvements AMD has integrated into select processors in the Ryzen 7040 Mobile Series processors give developers and end-users the flexibility and support they need to experiment, evaluate, and ultimately make that future happen.

This blurb is about the Ryzen 7040, so an earlier version presumably.

A recent AMD patent application for CNN:

US2023206395A1 HARDWARE SUPPORT FOR CONVOLUTION OPERATIONS 20211229

1705057618594.png


1705057656811.png


performing a first convolution operation based on a first convolutional layer input image to generate at least a portion of a first convolutional layer output image; while performing the first convolution operation, performing a second convolution operation based on a second convolutional layer input image to generate at least a portion of a second convolutional layer output image, wherein the second convolutional layer input image is based on the first convolutional layer output image; storing the portion of the first convolutional layer output image in a first memory dedicated to storing image data for convolution operations; and storing the portion of the second convolutional layer output image in a second memory dedicated to storing image data for convolution operations.

[0038] A convolution operation includes applying the filter 406 to the input image 404 to generate one or more pixels 414 of the output image 408 . Specifically, an image processor 402 calculates a dot product of the filter weights with a filter cutout 411 of the input image 404 . This dot product involves multiplying each filter weight 412 with a corresponding pixel of the input image 404 to generate a set of partial products, and summing all of those partial products to calculate the dot product result.
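Paragraph [0038] is describing the textbook dot-product convolution. A plain NumPy sketch of that claim (my own illustration of the operation, not AMD's hardware implementation):

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Each output pixel is the dot product of the filter weights with a
    "filter cutout" of the input image, per the patent's description."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            cutout = image[i:i + kh, j:j + kw]   # the "filter cutout"
            out[i, j] = np.sum(cutout * kernel)  # partial products, summed
    return out
```

The claimed hardware overlaps the second layer's convolution with the first layer's, whereas this sketch runs them one pixel at a time.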

It seems as if they've invented parallel processing using maths, but they are not into spikes.
 
  • Like
  • Love
  • Fire
Reactions: 11 users

Diogenese

Top 20
off topic
As some of you may know, I love red wine.
I have tried some very good Australian wines because of some of you lovely fellows and conversations here on TSE. I now visit my Aussie store in my town regularly. Because of some of you I know for example that the wines from McLaren Vale are particularly good.
I have also tried one from this winery Penfolds. Mine was called BIN 28 and I was more interested in the content than the name back then.)
Your photo shows the BIRN 389. That prompted me to go to the winemaker's website and see what the numbers were all about. But I couldn't find what I was looking for. Can one of you explain to me what the numbers stand for? Could it be that BIN plus the number simply means the vineyard parcel?
Yep, Bin 389 cab shiraz.

Leaving aside Penfolds Grange, which is revered by the aficionados, Bin 389 is my favourite.

The Penfolds St Henri shiraz was better in the olden days, but I find 389 is more to my liking these days.

Then there is Penfolds Bin 407 cab sav.

These are all quite pricey now. I got hooked when St Henri was $8 a bottle.

Here is the company's Bin page:

https://www.vivino.com/wine-news/penfolds--behind-the-bin-numbers#:~:text=There are a number of Bins in the,389, which is a South Australian Cabernet-Shiraz blend.

The bin number identifies the type of blend so they can aim for a degree of consistency.
 
  • Like
  • Love
  • Fire
Reactions: 14 users

IloveLamp

Top 20
  • Like
Reactions: 6 users