BRN Discussion Ongoing

Hi Hoppy,

In those 9 years you've seen several changes of business model:

A. Software - Brainchip STUDIO;
B. Software plus hardware - Studio plus Brainchip Accelerator FPGA;
C. Digital SNN SoC - Akida 1 (with an IP licence side order for big orders; software relegated to giveaway tempter);
D. IP licence only;
E. IP licence with a little SoC on the side (Edge Box);
F. E plus Akida Gen 2 IP licence.

Software looks like a good model with zero manufacturing cost, but it is high-maintenance and requires continual upgrading. Direct customer sales.

Chips offer a quick entry point, but have significant manufacturing costs. Sales to product manufacturers.

IP licensing offers the highest return but has a massive barrier to entry. Sales to chip makers or major retailers.

That's an average of 18 months per business model - difficult to get traction, but it was not because any of the products were duds. The whole area of tech has been in a state of flux, and, even though we have always been leading the field, to the uninformed observer it could give the appearance of chasing our own tail.

While the Edge Box will give customers an opportunity to experience Akida in the wild, with TeNNs we have extended our lead, hopefully sufficiently to allow enough time to finalize some major deals.
Peter never had any intention of BrainChip being a software company.

Hardware neuromorphics was always the end game.

So I don't think you can say that was ever really a "business model"; it was merely a stepping stone, one that was unfortunately misplaced at the time and burnt up a lot of resources and time.

It really does feel to me, though, that everything on this journey is happening as it should.
 
  • Like
Reactions: 11 users

Iseki

Regular
It's June already?
Ha ha, well, forgive me, because it's not every day that the company changes tack. With any luck we'll avert the spill by enunciating a new and clear vision for commercialization - one that involves committed partners.

See you in June.
 
 
  • Haha
  • Fire
Reactions: 10 users

Diogenese

Top 20
Thank you very much for your reply.
Just two further queries if you're up for it. 🤣

Assuming we have a solid take-up of the existing Edge Boxes, would you say they are the correct format for a generation 2 device, or would you prefer another medium, such as the previous FPGA boards, Raspberry Pi sticks, or something else again?

Also, would a second-generation device have to be factory-preset at either the E, S or P setting, or could it be left variable so that an end user could tune it as required to suit their application?
Sorry if these are dumb questions.
The Edge Box has 2 Akida SoCs and an NXP i.MX 8M Plus processor, so with a quad-core CPU, 36 GB memory and a microSD card slot there is a lot of processing power:

https://www.nxp.com/products/proces...ision-multimedia-and-industrial-iot:IMX8MPLUS

The i.MX 8M Plus family focuses on machine learning and vision, advanced multimedia, and industrial automation with high reliability. It is built to meet the needs of Smart Home, Building, City and Industry 4.0 applications.
  • Powerful quad or dual Arm® Cortex®-A53 processor with a Neural Processing Unit (NPU) operating at up to 2.3 TOPS.
  • Dual image signal processors (ISP) and two camera inputs for an effective advanced vision system.
  • The multimedia capabilities include video encode (including h.265) and decode, 3D/2D graphic acceleration, and multiple audio and voice functionalities.
  • Real-time control with Cortex-M7. Robust control networks supported by dual CAN FD and dual Gigabit Ethernet with Time Sensitive Networking.

Just as an aside, this shows that Akida can co-exist with the NPU in ARM's Cortex-A53-based i.MX 8M Plus. (We already knew that Akida was compatible with all ARM processors.)

The beauty of the Edge Box is that it is basically plug-and-play, requiring no assembly, in contrast to a free-standing chip, which would need to be glued to a PCB, requiring an entire PCB design and assembly process.

As for the PCIe board, there is some minor assembly, and there would also be additional programming to enable it to talk to the resident processor.

The Edge Box specs are at:

https://shop.brainchipinc.com/products/akida™-edge-ai-box

The initialization instructions for Edge Box are "Coming Soon":
https://brainchip.com/support/

but it has a number of external communication ports, so loading models and configuration should not be an issue. This also facilitates communication with other devices.

That said, with the accompanying ViT/TeNNs, Akida 2 adds greater analytical capabilities, such as speech recognition, opening up greater potential for human interaction. And, of course, with additional memory, sLLM can add specific knowledge bases for "expert" systems.

As to E, S, & P, that's like your S, M, L undies - you get what you pay for. They are different chips with different numbers of nodes which are manufactured in the foundry, so E only has a couple of nodes, while P has the maximum. You get a lot more Es on a wafer than Ps.
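For anyone who wants to poke at this from the software side, the node budget is visible through MetaTF's Python runtime, so you can see what silicon you're on and how a model gets spread across it. A minimal sketch, assuming the akida package and an attached Akida device (the model file name is a placeholder):

import akida

# Enumerate the Akida devices visible to the runtime
# (e.g. the two SoCs in an Edge Box).
for device in akida.devices():
    print(device.version)

# Load a compiled model (placeholder file name) and map it onto the
# first device; the runtime distributes the layers across the
# device's available nodes.
model = akida.Model("kws_model.fbz")
model.map(akida.devices()[0])
model.summary()

The same flow should apply whether the part is an E, S or P: a bigger part just gives the mapper more nodes to spread the network across.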
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 38 users

7für7

Top 20
Of course it goes down at the end of the day.

 
  • Like
  • Sad
  • Haha
Reactions: 4 users

Tothemoon24

Top 20
From FF on HC

[Attached: two screenshots of FF's post]
 
  • Like
  • Love
  • Fire
Reactions: 46 users

HopalongPetrovski

I'm Spartacus!
The Edge Box has 2 Akida SoCs and an NXP i.MX 8M Plus processor, so with a quad-core CPU, 36 GB memory and a microSD card slot there is a lot of processing power:

https://www.nxp.com/products/proces...ision-multimedia-and-industrial-iot:IMX8MPLUS

The i.MX 8M Plus family focuses on machine learning and vision, advanced multimedia, and industrial automation with high reliability. It is built to meet the needs of Smart Home, Building, City and Industry 4.0 applications.
  • Powerful quad or dual Arm® Cortex®-A53 processor with a Neural Processing Unit (NPU) operating at up to 2.3 TOPS.
  • Dual image signal processors (ISP) and two camera inputs for an effective advanced vision system.
  • The multimedia capabilities include video encode (including h.265) and decode, 3D/2D graphic acceleration, and multiple audio and voice functionalities.
  • Real-time control with Cortex-M7. Robust control networks supported by dual CAN FD and dual Gigabit Ethernet with Time Sensitive Networking.

Just as an aside, this shows that Akida can co-exist with the NPU in ARM's Cortex-A53-based i.MX 8M Plus. (We already knew that Akida was compatible with all ARM processors.)

The beauty of the Edge Box is that it is basically plug-and-play, requiring no assembly, in contrast to a free-standing chip, which would need to be glued to a PCB, requiring an entire PCB design and assembly process.

As for the PCIe board, there is some minor assembly, and there would also be additional programming to enable it to talk to the resident processor.

The Edge Box specs are at:

https://shop.brainchipinc.com/products/akida™-edge-ai-box

The initialization instructions for Edge Box are "Coming Soon":
https://brainchip.com/support/

but it has a number of external communication ports, so loading models and configuration should not be an issue. This also facilitates communication with other devices.

That said, with the accompanying ViT/TeNNs, Akida 2 adds greater analytical capabilities, such as speech recognition, opening up greater potential for human interaction. And, of course, with additional memory, sLLM can add specific knowledge bases for "expert" systems.

As to E, S, & P, that's like your S, M, L undies - you get what you pay for. They are different chips with different numbers of nodes which are manufactured in the foundry, so E only has a couple of nodes, while P has the maximum.
Ah, I see. So E, S and P are all variants available on Gen 2, but factory-set at the foundry: in essence, optional variants providing lesser or greater capacity, determined by their specific use case.

So I am left wondering: would a second-generation Edge Box, even set at the E configuration, make the current crop redundant in every sense? Or are they just different, in that simple applications only require a Gen 1 solution, while the more complex or taxing applications need the greater analytical capabilities only available with TeNNs, ViT, etc.?
So, a horses for courses sort of thing.

Is it likely they are holding back the release of Gen 2 whilst they further refine it, making it perhaps more suited to Edge-based LLMs, like they did when they reconfigured the first generation to more fully meet customer demand?
Or are the new characteristics sufficient, and any further capabilities would justify a new generation?

Sorry.
Your answers beget further questions. 🤣
 
  • Fire
  • Like
  • Love
Reactions: 9 users
And perhaps that's why we didn't make millions of them to sell at $15 each.

So you can see how hard it is to offer a chip that contains multiple parties' IP. As you say, BrainChip couldn't even do it without resolving the licensing of Arm IP. And Arm won't do it; we know this because they haven't licensed Akida IP.

So there is an issue that needs to be resolved. By the board.

The proposed solution was that third parties Renesas and MegaChips might do it. We hoped one of these companies would go to the trouble of dual-licensing both Arm and Akida IP and manufacture a chip for clients.

When this didn't work, we manufactured some Akida 1500s.

So where to now?

Suppose for a moment you are Samsung, and you'd like to use Akida in all of your whitegoods, which can all network together in some way to make your home life a dream. Who are you going to call? The minute you ask Arm for a specialised Arm chip, they'll add their NPU to that and give you a price. If they phone us, what can we say? It just blows everything out into the too-hard basket. This is most likely the reason we have lost Stevens, Rob and Nandan...

If someone can solve this for BRN, we can win the race.
Sorry, I don't quite follow the last paragraph re Samsung and calling BRN, etc. Can you please elaborate?
TY
 
  • Sad
Reactions: 1 users

Esq.111

Fascinatingly Intuitive.
Back again,

Looking at the Frankfurt exchange...

Infineon Technologies, who have been a partner of ours since 10/1/24, were up 8+%.

Refer to link... Introductory slides analyst call Q2.

I'll have a look a little later on.


Regards,
Esq.
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 18 users

Diogenese

Top 20
Ah, I see. So E, S and P are all variants available on Gen 2, but factory-set at the foundry: in essence, optional variants providing lesser or greater capacity, determined by their specific use case.

So I am left wondering: would a second-generation Edge Box, even set at the E configuration, make the current crop redundant in every sense? Or are they just different, in that simple applications only require a Gen 1 solution, while the more complex or taxing applications need the greater analytical capabilities only available with TeNNs, ViT, etc.?
So, a horses for courses sort of thing.

Is it likely they are holding back the release of Gen 2 whilst they further refine it, making it perhaps more suited to Edge-based LLMs, like they did when they reconfigured the first generation to more fully meet customer demand?
Or are the new characteristics sufficient, and any further capabilities would justify a new generation?

Sorry.
Your answers beget further questions. 🤣
E would not be used in an Edge Box - it's too small/underpowered. It would be used in doorbell-like applications. But, even then, Akida 1 would probably suffice. There are many applications which do not require TeNNs/ViT/sLLM. There is a range of applications from fridges/doorbells to AGI/automotive where you would select the amount of processing capability required, balancing speed/power/cost/accuracy.

I reckon BRN has a table setting out the appropriate configuration for different applications, and, as you say, it's an ongoing development.

You could be right about further development, particularly with sLLM as this seems to be the new holy grail with Loihi 2 and ARM M85 auditioning for the role of the Black Knight who has "had worse".
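Purely to make the "configuration table" idea concrete, here is the sort of lookup such a table might encode. The node thresholds below are invented for illustration, not BrainChip figures:

def pick_akida_class(required_nodes: int) -> str:
    # Invented thresholds, for illustration only.
    if required_nodes <= 2:
        return "E"   # doorbell/keyword-spotting scale
    if required_nodes <= 8:
        return "S"   # mid-range vision, sensor fusion
    return "P"       # automotive/AGI-adjacent workloads

print(pick_akida_class(6))   # -> S

In practice the speed/power/cost/accuracy balance above would add more dimensions than a single node count, but the shape of the decision is the same.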
 
  • Like
  • Fire
  • Love
Reactions: 27 users

rgupta

Regular
So is this the Akida chip in ARM's chip or is ARM competing with us now? What happened to being 'partners' and their Akida chip?
Everyone has their own opinion, but I don't think BrainChip should consider anyone a competitor. We have to start the proceedings; only then will we know who sees us as a competitor.
The market is full of opportunities, and everyone knows that after a certain stage no one can stop anyone else; it comes down to the merit of the product.
At the end of the day the stronger product will win, and when you become strong, others start standing on your side.
E.g. we are standing on Arm's side because Arm is strong; if our product is stronger, then after some time they will be standing with us, and in between a new competitor can arise as well.
So at the end of the day, patience is required.
Dyor
 
  • Like
  • Fire
Reactions: 4 users

TECH

Regular
ANT61 awaiting satellite operator to turn on "the brain"...hopefully later this month.

Mikhail confirms all updates will be published... check LinkedIn weekly for the latest news.

"Definitely will look into using Akida chips in the future"

More positive attitudes towards Brainchip's technology for space operations, this is just the start.

Space, the final frontier...only commercial grade chips can make the grade, we are at the cutting 'EDGE'

NASA, EUROPEAN SPACE AGENCY, EDGX, ANT61, FRONTGRADE GAISLER, VORAGO TECHNOLOGIES.

Vote 'yes'.................Tech ;)

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 46 users

HopalongPetrovski

I'm Spartacus!
E would not be used in an Edge Box - it's too small/underpowered. It would be used in doorbell-like applications. But, even then, Akida 1 would probably suffice. There are many applications which do not require TeNNs/ViT/sLLM. There is a range of applications from fridges/doorbells to AGI/automotive where you would select the amount of processing capability required, balancing speed/power/cost/accuracy.

I reckon BRN has a table setting out the appropriate configuration for different applications, and, as you say, it's an ongoing development.

You could be right about further development, particularly with sLLM as this seems to be the new holy grail with Loihi 2 and ARM M85 auditioning for the role of the Black Knight who has "had worse".
Thank you again Diogenese.
Your blood's worth bottlin'. 🤣
 
  • Like
  • Love
  • Haha
Reactions: 12 users

Frangipani

Regular
Minimisation of grammatical and spelling errors in the quarterly.

😂
Well, talk about a galaxy of possibilities… 🤭
Turns out we have underestimated the BrainChip staff’s creativity in coming up with novel blunders: Tony Dawe started off the latest Quarterly Investor Podcast by asking Sean about an event he had recently attended in person: “Earlier this month you attended the embedded world conference in Frankfurt, Germany…” (from 1:09 min).

Except that the embedded world has been taking place in Nürnberg (Nuremberg) since its inception in 2003!

Both German cities have sausages named after them and are roughly on the same latitude, but that’s about it regarding their similarities. If my memory serves me right, BrainChip had also posted a picture of the Frankfurt skyline (aka Mainhattan due to its location along the Main river) to go along with last year’s notification about exhibiting at embedded world 2023, which already left me puzzled at the time.

In all likelihood, this mix-up of cities was not a slip of the tongue, as all of Tony Dawe’s questions were scripted, and disappointingly, so were the majority of Sean’s answers. I much prefer the live virtual investor roadshow format where our CEO cannot simply read off prepared statements verbatim (although he would already be familiar with some of the questions posed to him, as they would have been emailed to IR in advance). What’s the point of presenting all these questions and answers in the form of a pretend conversation? They could just as well publish a written Q&A on the BrainChip website, if the reason the answers are pre-formulated is worrying about legal implications. Having to listen to several prepared statements being read out in succession really irked me - that is not what a podcast should be about!

I then expected Sean to correct Tony about the conference location, given that he had attended the embedded world 2024 in person less than three weeks prior to the podcast recording, but that didn’t happen either. Sigh…
Maybe he missed it, as his focus was already on what he wanted to say next (which appears to have been one of his much rarer unscripted replies in that podcast)? But wouldn’t Tony have shared his questions with Sean in advance? After all, how else could he have written out any of his other answers?

While confusing Nürnberg with Frankfurt won't stop interested parties from making further enquiries and obviously doesn't say anything at all about the caliber of the Akida technology, it is yet another annoying slip-up that could have been avoided by applying the four-eyes (or six-eyes) principle to the scripted parts. How hard can it be?!
 
  • Like
  • Love
  • Fire
Reactions: 17 users

Pappagallo

Regular
Don't forget to vote, I did it just then myself.

Took me less than 5 minutes, including 4 minutes to reset my Boardroom password!
 
  • Like
  • Love
  • Haha
Reactions: 10 users

rgupta

Regular
Don't forget to vote, I did it just then myself.

Took me less than 5 minutes, including 4 minutes to reset my Boardroom password!
Not only your own vote: I expect every one of us to tell our proxies how we want to vote as well. I assume proxy votes can have a major effect on the final verdict. So please instruct your proxy, such as your super fund, as well.
Thanks
Thanks
 
  • Fire
  • Like
Reactions: 2 users

Tothemoon24

Top 20
[Attached: screenshot of the original LinkedIn post]

I’m delighted to share that we have started new AI research collaborations with two US universities.

Our Tech Hub Mercedes-Benz Research and Development North America (MBRDNA) has earmarked a six-month funding initiative aimed at advancing education and research in AI and its potential to elevate the in-car experience as well as autonomous driving.

Stanford University School of Engineering and the University of California San Diego’s (UC San Diego) Contextual Robotics Institute are leaders in this field.

At Stanford Engineering, 15 students led by lecturer Jay Borenstein will formulate project ideas to explore the full spectrum of possibilities with AI in relation to our MBUX Virtual Assistant, which we announced this year at CES. This ranges from optimising the productivity of daily commutes to parking and refining a personalised in-car voice assistant powered by fine-tuned large language models.

Meanwhile, MBRDNA is expanding the scope of research with UC San Diego’s Jacobs School of Engineering. Led by Professor Henrik Christensen, the research team’s Autonomous Vehicle Laboratory (AVL) focuses on perception and fusion for automated driving. The objective of the project is to build on prior work from Christensen’s lab and integrate it with a real-time 4D Neural Radiance Fields (NERFs) model to build a model segmented into semantically meaningful regions.

Supporting research done by educational institutions in key areas of innovation has long been a cornerstone of Mercedes-Benz R&D worldwide. Such collaborations often lead to joint publications and recruitment opportunities in key fields as well as important discoveries and advancements.

I am very keen to see what emerges from these latest projects.
 
  • Like
  • Thinking
  • Fire
Reactions: 12 users

Deadpool

hyper-efficient Ai
Hey there old chippers, apologies if already posted. Seems like a very well-rounded intern, now an employee, with some very exciting developments at hand.

Plagiarized from over on the Facebook BRN investor group:

Interesting activity report from a BrainChip employee: https://www.linkedin.com/.../1709.../single-media-viewer/... It states that compatibility between Akida and the MediaPipe API has been achieved. MediaPipe was developed by Google but is open source and accessible to everyone. It simply means further access to ready-made AI models for training, or an easy-to-use developer platform with which you can now also train Akida. Another tool for developers alongside MetaTF, Edge Impulse and Nvidia TAO. Maybe it's not the big headline now, but it shouldn't be underestimated for the development of AI models on Akida. https://developers.google.com/mediapipe/solutions/guide (via the Wallstreet forum, perhaps)
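The report doesn't spell out the integration, so here is only a minimal sketch of how the two toolchains could sit side by side: MediaPipe as the open-source perception front end and BrainChip's MetaTF runtime for Akida inference. The model file name and tensor shape are placeholders I've invented, not anything from BrainChip:

import numpy as np
import mediapipe as mp     # Google's open-source perception toolkit
from akida import Model    # BrainChip's MetaTF runtime

# MediaPipe front end: detect hand landmarks in an RGB frame
# (a zeroed array stands in for a real camera frame).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
with mp.solutions.hands.Hands(static_image_mode=True) as hands:
    result = hands.process(frame)
print(result.multi_hand_landmarks)

# Akida back end: load a pre-converted model (placeholder name; a real
# one comes out of the MetaTF cnn2snn/quantizeml conversion flow) and
# run inference on a uint8 tensor shaped for that model.
model = Model("gesture_classifier.fbz")
features = np.zeros((1, 64, 64, 1), dtype=np.uint8)   # illustrative shape
outputs = model.forward(features)
print(outputs.shape)

How the MediaPipe landmarks actually get packed into Akida inputs would depend on the model in the LinkedIn report, which we haven't seen.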

[Attached: screenshot of the LinkedIn activity report]
 
  • Like
  • Fire
  • Love
Reactions: 38 users

Diogenese

Top 20
Peter never had any intention of BrainChip being a software company.

Hardware neuromorphics was always the end game.

So I don't think you can say that was ever really a "business model"; it was merely a stepping stone, one that was unfortunately misplaced at the time and burnt up a lot of resources and time.

It really does feel to me, though, that everything on this journey is happening as it should.
Obviously your definition of business model is different from mine.
 
  • Like
  • Haha
  • Love
Reactions: 9 users