BRN Discussion Ongoing

Foxdog

Regular
'Several active customers elected to defer their evaluation of BrainChip’s technology until after the expected release of Akida 2.0 in August 2023'.

So what exactly does this statement from Antonio really mean? It appears to me that no evaluation was done by these companies and that said evaluation will only begin when they get their hands on Gen 2. If the usual design cycle is 3 to 4 years, then products from this particular cohort of customers will not appear for ages. Have they not been playing around with the AKD1000 with a view to incorporating Gen 2 quickly into a product range?

Is Akida actually groundbreaking? Three things need to happen over the next month: 1. Mercedes confirming that Akida is in their next EVs; 2. Valeo confirming that we are in SCALA 3; and 3. the release of Gen 2, with confirmation that specific customer requests have been addressed and that product development to incorporate it is underway.

If by some miracle each of these events comes to pass over the next month, then we will see a significant re-rate. If none of them happen, then I doubt Akida is actually a viable solution for the current commercial market. IMO this is a binary outcome that will play out very soon.
 
  • Like
  • Love
Reactions: 11 users

stan9614

Regular
I am puzzled to see there are still a number of people who look really surprised about the revenue figure, knowing no new licence has been signed since November 2021. Of course the Jan-Jun half-year report will post little revenue. Panicking over something you have known since June?

Akida 2.0 is finally close to general availability, and we all know that, unlike 1.0, 2.0 was developed with partner and customer input. Which means they told us what they needed on top of 1.0, and we developed 2.0 exactly to their requirements.

Good luck to those thinking about selling out right before the first Akida 2.0 IP sale announcements.
 
  • Like
  • Love
  • Fire
Reactions: 54 users

Cartagena

Regular
I'd love to hear from Sean about this HY report. Seems the biggest expense has been salaries and employee expense accounts. I don't feel we're getting value for money here... gets paid millions to bring in 100k. A fish 'n' chip shop makes more. Wouldn't you think they'd take a pay cut in times such as these?

That's a good point. I am bullish and optimistic about the company, however I do share a level of disappointment with all here regarding the salaries and share perks, which are so removed from reality, as if it were a blue-chip, multi-million-dollar-earning company at present. I have a substantial investment because I believe in our tech, however the value of our shares keeps dropping. There must be a stop to this pattern.
Shareholders effectively pay for these salaries and perks, and we just want our money working for us.
I also wonder if Sean and the board can see the hurt and start to expedite the sales and/or contracts. If the product is there with Gen 2 Akida, the patents are there, the sales force is there, the partnerships are there, what's the delay? I still think they could surprise the market with another substantial announcement in the next few days; we wait and see. 🧐🤞
 
  • Like
  • Love
  • Fire
Reactions: 7 users

Mccabe84

Regular
  • Like
  • Love
  • Fire
Reactions: 6 users

HopalongPetrovski

I'm Spartacus!
"When somebody takes a license from us, most likely it is way at the front end of the design cycle, and so at best they're 2 to 3 years out from having production in any sort of meaningful volume. So, to be honest, they might not even get there. Given the time frames, plus all the other features that they have to work on that they haven't done yet at the time they've taken a license from us, you know, our licensee might want to wait a bit before tipping the market with respect to what technology they're working on or what product they'll be producing, and naturally this impacts how we make announcements to the market place."

What's your take on this comment? Especially in respect of my highlighted section. What is he saying here? "To be honest, they might not even get there"? I'm confused.
Hi there.
Just because our ambition is to be ubiquitous, that doesn't mean we will be the ultimately perfect or even best solution to every technical conundrum out there.

Many applications are being developed by competitors in these still-early days of neuromorphic processing, and our patent wall means that novel competing designs and approaches will be developed, which our manufacturing users will want to examine and test against our offering.

We will win some and we will lose some. 🤣

Sean, Peter and now Antonio have all explained, and we have seen evidence in the many examples reproduced here on these threads, that in some applications extant, and perhaps cruder and less efficient, solutions have been, and continue to be, adopted instead of our Akida solution.

The people who are making the decisions have grown up on and are familiar and comfortable with older technology.

As new concepts and applications that would strain the capacity of existing tech are developed and shown to be superior, our offerings will become an available solution.

As yet, the applications pushing the boundaries and capabilities of existing tech tend towards autonomous vehicles (from spacecraft through passenger vehicles and into multifarious drones), military-spec applications (of which we will only get peeks, due to security concerns) and other areas where innovation is a necessity, whether for status or status quo reasons.
Much of this is still at the developmental stage and yet to be made available for mainstream consumption.

So.
It's all taking longer than many of us would like.
I think that this is as much of a surprise and concern for the Company as it is for us.

I think they also expected faster and earlier adoption and are also struggling with the current situation, but being, for the most part, experienced professionals they express these facts with little hysteria.

Every engagement BrainChip has with a prospective user comes with mutual NDAs as standard operating procedure, and, as stated above by Antonio, just what we can announce at any given time is heavily restricted by our end users' corporate and marketing requirements.

Hence the extremely conservative approach to ASX releases which irks so many and which provides such rich fodder for the shorts and other manipulators.

Looking back, we have to conclude that our share price was over-inflated at our $2+ peaks, driven by a combination of exuberant enthusiasm sparked by the MB declaration, pent-up expectation from existing holders, and the hype granted to any spiking share price by traders looking for quick returns, ignorant of and uninterested in any fundamental or intrinsic value of the stock in question.

None of this makes us true believers wrong, or should give cause for turning upon or eating each other.

We know progress, however seemingly glacial, is occurring, much of which is behind closed doors.

We know the announcements we all pine for are drafted and awaiting release.

Will it be this Monday.......next month.......December this year or January next????

As one particularly talented musical instrumentalist I know says.......our FOMO keeps us here. 🤣

Hang in there, fellow believers.

We may or may not get smashed again on Monday, but at the moment, we are all we've got.

So......Fuch the splitters!

 
  • Like
  • Love
Reactions: 28 users
Been travelling to and attending a business conference this past week, so I haven't read a lot other than yesterday's releases.

Regardless of how we see these as holders, the overall market sentiment is the litmus test, and no doubt it will not be happy, unfortunately.

Having said that, I'm curious about the contexts of the statements below.

The first is obviously out of yesterday's report, with the second from a March media release.

My question is: are the several customers deferring evaluation the same as the early adopters, or different?

If the early adopters are different, what level of engagement is there? Just input, wants etc., or do they also have early access ahead of "general availability", and if so, what level of evaluation have they undertaken in the past 6 months?

That is to also say: are the early adopters new potential customers, or other customers (outside the several active ones indicated in yesterday's release) already testing?

By definition, customers, adopters and partners (of which we have quite a few, obviously) can be different imo.

Depending on the statement context, there may or may not be a level of positivity to take from this, as the early adopters could be 6 months or more down the track already.


....and noting that several active customers elected to defer their evaluation of BrainChip’s technology until after the expected release of Akida 2.0 in August 2023.


Akida comes with a Models Zoo and a burgeoning ecosystem of software, tools, and model vendors, as well as IP, SoC, foundry and system integrator partners. BrainChip is engaged with early adopters on the second generation IP platform. General availability will follow in Q3' 2023.
 
  • Like
  • Love
  • Fire
Reactions: 16 users

Mccabe84

Regular
This is not a positive. Well done to the 50-odd people that liked the ROBT "buying up big" post.
It took 3 minutes to look up how ROBT is composed. All tech companies in this ETF are ranked into 3 categories, 30 companies per category. Each company has the SAME equal weighting, with this being rebalanced quarterly.

SO the only reason they are "BUYING UP BIG" is because they have to, due to the BRN share price falling quarter on quarter on quarter. The "BUYING UP BIG" is just to keep BRN weighted the same in its category, as the sketch below illustrates.
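For anyone who wants the mechanics, here's a rough sketch with hypothetical numbers (a three-stock equal-weight "category"), showing how a quarterly rebalance mechanically buys whatever has fallen:

```python
# Toy illustration of equal-weight rebalancing (hypothetical numbers).
# Each holding is pulled back to the same target weight every quarter,
# so a stock whose price fell gets bought mechanically -- no conviction implied.

def rebalance(holdings, prices, target_weight):
    """Return the share adjustment that restores each stock to target_weight."""
    portfolio_value = sum(holdings[s] * prices[s] for s in holdings)
    return {s: (portfolio_value * target_weight) / prices[s] - holdings[s]
            for s in holdings}  # positive = buy, negative = sell

holdings = {"BRN": 1000, "A": 1000, "B": 1000}  # equal dollar value at $1 each
prices   = {"BRN": 0.50, "A": 1.25, "B": 1.25}  # BRN halves, the others drift up

for stock, delta in rebalance(holdings, prices, 1 / 3).items():
    print(f"{stock}: {'buy' if delta > 0 else 'sell'} {abs(delta):.0f} shares")
# BRN: buy 1000 shares -- the "BUYING UP BIG" is just the mechanics.
```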
I guess this post won't get 50 likes even though it's correct and factual.


"The share price will do what the share price will do"
Spoken by a person who will never put one dollar of his own money into the company.
Would it be the same with the multiple Vanguard and other funds buying into BRN?

 
  • Like
  • Love
Reactions: 3 users

Easytiger

Regular
  • Like
  • Love
Reactions: 5 users

Cartagena

Regular
Hi there.
Just because our ambition is to be ubiquitous, that doesn't mean we will be the ultimately perfect or even best solution to every technical conundrum out there. …




Very good article, link provided. It properly explains the partnership environment which BrainChip is building. This should put more minds at ease, even if it's just until we get the next update 🤞

The Next Platform

Neuromorphic Computing Will Need Partners To Break Into The Datacenter

Jeffrey Burt
1 year ago

The emerging field of neuromorphic processing isn’t an easy one to navigate. There are major players in the field that are leveraging their size and ample resources – the highest profile being Intel with its Loihi processors and IBM’s TrueNorth initiative – and a growing list of startups that include the likes of SynSense, Innatera Nanosystems and GrAI Matter Labs.
Included in that latter list is BrainChip, a company that has been developing its Akida chip – Akida is Greek for “spike” – and accompanying IP for more than a decade. We’ve followed BrainChip over the past few years, speaking with them in 2018 and then again two years later, and the company has proven to be adaptable in a rapidly evolving space. The initial plan was to get the commercial SoC into the market by 2019, but BrainChip extended the deadline to add the capability to run convolutional neural networks (CNNs) along with spiking neural networks (SNNs).
In January, the company announced the full commercialization of its AKD1000 platform, which includes its Mini PCIe board that leverages the Akida neural network processor. It’s a key part of BrainChip’s strategy of using the technology as reference models as it pursues partnerships with hardware and chip vendors that will incorporate it in their own designs.


“Looking at our fundamental business model, is it a chip or IP or both?” Jerome Nadel, BrainChip’s chief marketing officer, tells The Next Platform. “It’s an IP license model. We have reference chips, but our go-to-market is definitely to work with ecosystem partners, especially who would take a license, like a chip vendor or an ASIC designer and tier one OEMs. … If we’re connected with a reference design to sensors for various sensor modalities or to an application software development, when somebody puts together AI enablement, they want to run it on our hardware and there’s already interoperability. You’ll see a lot of these building blocks as we’re trying to penetrate the ecosystem, because ultimately when you look at the categoric growth in edge AI, it’s really going to come from basic devices that leverage intelligent sensors.”
BrainChip is aiming its technology at the edge, where more data is expected to be generated in the coming years. Pointing to IDC and McKinsey research, BrainChip expects the market for edge-based devices needing AI to grow from $44 billion this year to $70 billion by 2025. In addition, at last week’s Dell Technologies World event, CEO Michael Dell reiterated his belief that while 10 percent of data now is generated at the edge, that will shift to 75 percent by 2025. Where data is created, AI will follow. BrainChip has designed Akida for the high-processing, low-power environment and to be able to run AI analytic workloads – particularly inference – on the chip to lessen the data flow to and from the cloud and thus reduce latency in generating results.
Neuromorphic chips are designed to mimic the brain through the use of SNNs. BrainChip broadened the workloads Akida can run by adding the ability to run CNNs as well, which are useful in edge environments for such tasks as embedded vision, embedded audio, automated driving for LiDAR and RADAR remote sensing devices, and industrial IoT. The company is looking at such sectors as autonomous driving, smart health and smart cities as growth areas.

BrainChip already is seeing some success. Its Akida 1000 platform is being used in Mercedes-Benz’s Vision EQXX concept car for in-cabin AI, including driver and voice authentication, keyword spotting and contextual understanding.
The vendor sees partnerships as an avenue for increasing its presence in the neuromorphic chip field.
“If we look at a five-year strategic plan, our outer three years probably look different than our inner two,” Nadel says. “In the inner two we’re still going to focus on chip vendors and designers and tier-one OEMs. But the outer three, if you look at categories, it’s really going to come from basic devices, be they in-car or in-cabin, be they in consumer electronics that are looking for this AI enablement. We need to be in the ecosystem. Our IP is de facto and the business model wraps around that.”
The company has announced a number of partnerships, including with nViso, an AI analytics company. The collaboration will target battery-powered applications in robotics and automotive sectors using Akida chips for nViso’s AI technology for social robots and in-cabin monitoring systems. BrainChip also is working with SiFive to integrate the Akida technology with SiFive’s RISC-V processors for edge AI computing workloads and MosChip, running its Akida IP with the vendor’s ASIC platform for smart edge devices. BrainChip also is working with Arm.

To accelerate the strategy, the company this week rolled out its AI Enablement Program to offer vendors working prototypes of BrainChip IP atop Akida hardware to demonstrate the platform’s capabilities for running AI inference and learning on-chip and in a device. The vendor also is offering support for identifying use cases for sensor and model integration.
The program includes three levels – the Basic and Advanced prototypes through to the Functioning Solution – with the number of AKD1000 chips scaling to 100, custom models for some users, 40 to 160 hours with machine learning experts, and two to ten development systems. The prototypes will enable BrainChip to get its commercial products to users at a time when other competitors are still developing their own technologies in the relatively nascent market.
“There’s a step of being clear about the use cases and perhaps a road map of more sensory integration and sensor fusion,” Nadel says. “This is not how we make a living as a business model. The intent is to demonstrate real, tangible working systems out of our technology. The thinking was, we could get these into the hands of people and they could see what we do.”
BrainChip’s Akida IP supports up to 1,024 NPUs, configurable as two to 256 nodes connected over a mesh network, with each node comprising four neural processing units (NPUs). Each NPU includes configurable SRAM, each can be configured for CNNs if needed, and each is event- or spike-based, using data sparsity, activations, and weights to reduce the number of operations by at least two-fold. The Akida Neural SoC can be used standalone or integrated as a co-processor for a range of use cases, and provides 1.2 million neurons and 10 billion synapses.
The offering also includes the MetaTF machine learning framework for developing neural networks for edge applications and three reference development systems for PCI, PC shuttle and Raspberry Pi systems.
The platform can be used for one-shot on-chip learning, by using the trained model to extract features and adding new classes onto it, or in multi-pass processing, which leverages parallel processing to reduce the number of NPUs needed.
[Figures: the one-shot flow and the multi-pass flow.]
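(Aside: to make the "one-shot" idea concrete, here is a minimal, purely illustrative sketch of the general technique the article describes: learning a new class from a single example by storing its feature vector as a prototype, with no retraining. This is generic Python, not BrainChip's MetaTF API, and the feature extractor is a hypothetical stand-in.)

```python
import numpy as np

# Illustrative one-shot class addition via feature prototypes.
# A frozen, pre-trained network acts as the feature extractor; adding a
# class just stores one example's features -- no retraining, no cloud.

class OneShotClassifier:
    def __init__(self, feature_extractor):
        self.extract = feature_extractor  # frozen pre-trained model
        self.prototypes = {}              # class label -> feature vector

    def add_class(self, label, example):
        """Learn a new class from a single example (the 'one shot')."""
        self.prototypes[label] = self.extract(example)

    def classify(self, sample):
        """Return the label whose prototype is nearest in feature space."""
        feats = self.extract(sample)
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(feats - self.prototypes[lbl]))

# Hypothetical usage with a trivial stand-in extractor; a real system would
# use the trained network's penultimate-layer activations instead.
clf = OneShotClassifier(lambda x: np.asarray(x, dtype=float))
clf.add_class("cat", [1.0, 0.1])
clf.add_class("dog", [0.1, 1.0])
print(clf.classify([0.9, 0.2]))  # -> cat
```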
“The idea of our accelerator being close to the sensor means that you’re not sending sensor data, you’re sending inference data,” Nadel said. “It’s really a systems architectural play that we envision our micro hardware is buddied up with sensors. The sensor captures data, it’s pre-processed. We do the inference off of that and the learning at the center, but especially the inference. Like an in-car Advanced Driver Assistance System, you’re not tasking the server box loaded with GPUs with all of the data computation and inference. You’re getting the inference data, the metadata, and your load is going to be lighter.”
The on-chip data processing is part of BrainChip’s belief that for much of edge AI, the future will not require clouds. Rather than send all the data to the cloud – bringing in the higher latency and costs – the key will be doing it all on the chip itself. Nadel says it’s a “bit of a provocation to the semiconductor industry talking about cloud independence. It’s not anti-cloud, but the idea is that hyperscale down to the edge is probably the wrong approach. You have to go sensor up.”
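(Aside: a back-of-the-envelope illustration of the "send inference data, not sensor data" argument, using assumed numbers for an uncompressed 1080p camera feed versus per-frame detection metadata:)

```python
# All inputs are assumptions, purely to illustrate the bandwidth argument.

FRAME_BYTES = 1920 * 1080 * 3   # one uncompressed 1080p RGB frame
FPS = 30                        # assumed frame rate

raw_rate   = FRAME_BYTES * FPS  # shipping the raw sensor stream
META_BYTES = 200                # assumed labels + bounding boxes per frame
meta_rate  = META_BYTES * FPS   # shipping only inference metadata

print(f"raw stream: {raw_rate / 1e6:,.1f} MB/s")   # ~186.6 MB/s
print(f"metadata:   {meta_rate / 1e3:,.1f} kB/s")  # 6.0 kB/s
print(f"reduction:  ~{raw_rate // meta_rate:,}x")  # ~31,104x
```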
Going back to the cloud also means having to retrain the model if there is a change in object classification, Anil Mankar, co-founder and chief development officer, tells The Next Platform. Adding more classes means changing the weights in the classification.
“On-chip learning,” Mankar says. “It’s called incremental learning or continuous learning, and that is only possible because … we are working with spikes and we actually copy similarly how our brain learns faces and objects and things like that. People don’t want to do transfer learning – go back to the cloud, get new weights. Now you can classify more objects. Once you have an activity on the device, you don’t need cloud, you don’t need to go backwards. Whatever you learn, you learn” and that doesn’t change when something new is added.
 
Last edited:
  • Like
  • Love
Reactions: 23 users

Diogenese

Top 20
Very good article, link provided. It properly explains the partnership environment which BrainChip is building. …


Thanks Cartagena,

I had forgotten these details about the AI Enablement Program.

To accelerate the strategy, the company this week rolled out its AI Enablement Program to offer vendors working prototypes of BrainChip IP atop Akida hardware to demonstrate the platform’s capabilities for running AI inference and learning on-chip and in a device. The vendor also is offering support for identifying use cases for sensor and model integration.
The program includes three levels – the Basic and Advanced prototypes through to the Functioning Solution – with the number of AKD1000 chips scaling to 100, custom models for some users, 40 to 160 hours with machine learning experts, and two to ten development systems. The prototypes will enable BrainChip to get its commercial products to users at a time when other competitors are still developing their own technologies in the relatively nascent market.
“There’s a step of being clear about the use cases and perhaps a road map of more sensory integration and sensor fusion,” Nadel says. “This is not how we make a living as a business model. The intent is to demonstrate real, tangible working systems out of our technology. The thinking was, we could get these into the hands of people and they could see what we do.”
BrainChip’s Akida IP supports up to 1,024 NPUs, configurable as two to 256 nodes connected over a mesh network, with each node comprising four neural processing units (NPUs). Each NPU includes configurable SRAM, each can be configured for CNNs if needed, and each is event- or spike-based, using data sparsity, activations, and weights to reduce the number of operations by at least two-fold. The Akida Neural SoC can be used standalone or integrated as a co-processor for a range of use cases, and provides 1.2 million neurons and 10 billion synapses.
The offering also includes the MetaTF machine learning framework for developing neural networks for edge applications and three reference development systems for PCI, PC shuttle and Raspberry Pi systems.
The platform can be used for one-shot on-chip learning, by using the trained model to extract features and adding new classes onto it, or in multi-pass processing, which leverages parallel processing to reduce the number of NPUs needed.

It must have been an eye-opener for the participating companies, revealing possibilities they had not dreamt of, and the feedback led to further advances in Akida's technology.
 
  • Like
  • Fire
  • Love
Reactions: 27 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hey Brain Fam,



I've come to the conclusion that it's 99.999% likely WE ARE ACTUALLY ON THE MOON, LIKE WE ARE LITERALLY (NOT US BUT AKIDA IS) ON THE SOUTH POLE OF THE MOON RIGHT THIS VERY SECOND!!!!

Contrary to the naysayers who reckon that Rob spends most of his days "liking" random blogs about vacuum cleaners and horny goat weed, I happen to believe the opposite, and this should prove my point. But if I'm wrong about this, then I promise I'll stop posting for two days.

Rob Telson liked Shakeel Perra's (Microchip) LinkedIn post congratulating ISRO (Indian Space Research Organisation) on the landing of Chandrayaan-3 at the Moon's south pole.


Here's Rob's (y). It's just below Brian Coglan's 👍(Microchip).



[screenshot attached]




And here are a couple of extracts from an article, which I've highlighted, that I think help to explain why Rob liked it so much, and hence why I believe WE ARE ON THE MOOOOOOOOOOOOOOOOON!!!!! 🚀🌙

How else is the rover able to learn/ adapt and respond in real-time?


[Highlighted article extracts attached.]
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 58 users

Quiltman

Regular
Good points ...

With reference to Valeo, a "Joint Development" agreement can have numerous commercial iterations. There was an agreement with Valeo that would cover some of BrainChip's development costs, but no insight into what the commercial outcome of a successful "development using Akida" would look like.

From articles in 2020:

Tech developer BrainChip (BRN) has signed a joint development agreement with European automotive supplier Valeo for its Akida system.

Valeo supplies sensors and systems for autonomous vehicles (AV) and advanced driver assistance systems (ADAS).

The tier-one auto supplier will incorporate BrainChip’s Akida neuromorphic System-on-Chip (SoC) as a neural network processor in Valeo’s sensing system.

The SoC will be developed to process data and learn in a way that is a lot more focused and streamlined than previous system architectures.

The low-power, reliable system will bypass the need for an exterior processing unit to integrate large amounts of sensor data, and instead provide a potentially better and more elastic solution in one compact package.


Most likely commercial outcome, IMHO, is a royalty from sales, something like 1%-2%, and this would only become "worthy of an ASX announcement" once sales are achieved. Even though Valeo has pre-orders etc., what happens if Valeo goes bust, or production fails, etc.?
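As a very rough sketch of what that could mean in dollars (every input below is an assumption for illustration, not guidance):

```python
# Hypothetical royalty maths -- every input is an assumption.
units_per_year = 1_000_000   # assumed annual unit volume
asp_per_unit   = 500.0       # assumed average selling price, USD

for royalty_rate in (0.01, 0.02):  # the 1%-2% range suggested above
    print(f"{royalty_rate:.0%} royalty -> "
          f"${units_per_year * asp_per_unit * royalty_rate:,.0f} per year")
# 1% royalty -> $5,000,000 per year
# 2% royalty -> $10,000,000 per year
```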
So my logic goes: it's been 3.5 years. Valeo scientists were lauding spiking neural networks as the future back in 2018, and they chose to work with BrainChip, so Valeo have been at it for more than 5 years, and nothing has been announced that the joint development has been terminated. Is it SCALA 3? Very good chance, but not definite. We could still be "developing".
Worth remembering this still flashes across the front page of our website … very much an active relationship 🤩

[screenshot attached]
 
  • Like
  • Love
  • Fire
Reactions: 28 users
"I also wonder if Sean and the board can see the hurt and start to expedite the sales and /or contracts. If the product is there with Gen 2 Akida, the patents are there, sales forces are there, the partnerships are there, what's the delay?"

Too right Cartagena 😉
They obviously just have to be more persuasive aye!

[meme attached]
 
  • Haha
  • Like
  • Fire
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

I highlighted some of the important parts.

Exploration: The Vital Role of Artificial Intelligence in Chandrayaan 3 Mission

Taggart Continuum
July 17, 2023
Chandrayaan 3 Mission

Space organisations from all over the world have been utilising the potential of cutting-edge technologies in their quest to explore the cosmos and unravel the secrets of the universe. Artificial intelligence (AI) is one such ground-breaking instrument that has profoundly changed many sectors. AI has proven to be an invaluable tool for space exploration, and the Indian Space Research Organisation (ISRO)'s Chandrayaan 3 project is a prime example.

1. Accurate Navigation and Independent Operations
Building on the achievements of Chandrayaan 1 and 2, the Chandrayaan 3 mission intends to further examine the lunar surface. AI is essential to this ambitious project's ability to precisely navigate and operate the spacecraft on its own. A spacecraft may alter its trajectory, orientation, and altitude on its own with the use of AI-powered algorithms, minimizing the need for human intervention and maximizing mission effectiveness. These algorithms analyze enormous volumes of data from the numerous sensors and equipment onboard the spacecraft.

2. Improved Data Analysis and Imaging
The Chandrayaan 3 mission's principal goal is to take detailed scientific measurements and high-resolution pictures of the lunar surface. AI has demonstrated its value in picture processing and analysis, enabling researchers to glean important information from the copious amounts of data the spacecraft's cameras and other sensors have collected. With the help of AI-powered algorithms, geological characteristics may be automatically found and categorized, landing sites can be identified, and even geological formations that may contain significant resources can be predicted. This expedites scientific research and improves the safety of upcoming moon expeditions.

3. Optimal Resource Administration
The success of space missions like Chandrayaan 3 depends on careful resource management. The spacecraft's resource allocation and utilization are much improved by AI. It can assess fuel, electricity, and other resource usage to make sure the mission stays on course and within predetermined bounds. Additionally, AI can provide predictive maintenance by assisting ISRO in identifying possible problems with the spacecraft's systems before they become more serious, lowering the likelihood that the mission would fail owing to technical difficulties.

4. Quicker Data Analysis and Transmission
Communication and data transmission are significantly hampered by the great distance between Earth and the Moon. With the use of AI, the spacecraft can effectively compress data, prioritize crucial information, and communicate it back to Earth in a quicker, more dependable manner. This makes it possible for scientists and engineers to get crucial information in real time, making it simpler to react to unforeseen circumstances and modify mission parameters accordingly.

Conclusion:
The Chandrayaan 3 project marks an important turning point in India's space exploration history, and its success depends on both technological prowess and human ingenuity. Artificial intelligence has become a powerful force enabling organisations like ISRO to explore unexplored territory and push the limits of space exploration. The project would not be possible without AI's vital contributions, which range from autonomous navigation to improved data processing and optimal resource management. Future space missions will likely use even more advanced AI applications as technology develops, opening the door to fascinating new discoveries and helping to solve more cosmic mysteries.




 
  • Like
  • Love
  • Fire
Reactions: 37 users
Management doesn't give a F__K about shareholders. We are MUSHROOMS: kept in the dark and fed SHIT.
This may be true, however these guys are making the business, which in turn will make us and them some money. Without them we're stuffed. Peter was CEO and the company went nowhere; now we're starting to look like a professional outfit. Time will tell.
 

HopalongPetrovski

I'm Spartacus!
Hey Brain Fam,
I've come to the conclusion that it's 99.999% likely WE ARE ACTUALLY ON THE MOON, LIKE WE ARE LITERALLY (NOT US BUT AKIDA IS) ON THE SOUTH POLE OF THE MOON RIGHT THIS VERY SECOND!!!! …
Hi B. Wouldn't it be fantastic to find out you're right. 🤣

 
  • Like
  • Haha
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi B. Wouldn't it be fantastic to find out you're right. 🤣



Hi Hoppy, yes indeed it would be truly momentous! Not really sure how to find out if it's true. Maybe I should put my post on LinkedIn and see if Rob likes it? 🤡
 
  • Haha
  • Like
  • Love
Reactions: 14 users

CHIPS

Regular
...
So my logic goes: it's been 3.5 years. Valeo scientists were lauding spiking neural networks as the future back in 2018, and they chose to work with BrainChip, so Valeo have been at it for more than 5 years, and nothing has been announced that the joint development has been terminated. Is it SCALA 3? Very good chance, but not definite. ...

But the question is: would BrainChip or any other company officially announce that the joint development had been terminated? I would say NO. They would just stop talking about it, which is what has happened for quite some time now.

So we cannot be sure if it still exists.
 

Mccabe84

Regular
But the question is: would BrainChip or any other company officially announce that the joint development had been terminated? I would say NO. They would just stop talking about it, which is what has happened for quite some time now.

So we cannot be sure if it still exists.
Wouldn't they no longer appear on the BrainChip website if terminated?
 
  • Like
  • Haha
  • Fire
Reactions: 16 users

Sura7667

Member
I'm a long-term holder and have been holding this for years now. I've ridden the highs and lows but always remained hopeful that BRN will succeed eventually. But I'm now less and less confident in anything BRN publicly states. Moving the goalposts at every broken promise is not confidence-boosting. I'm not saying this because of the half-yearly; like everyone, I expected nothing else, while hoping for a surprise just in case, which as usual wasn't there. My main concern is potential customers deferring engagement until AKD 2. Unless this is another move by BRN to buy more time to keep LTHs' dream alive, I understand, if it's true, that customers will prefer the new, better AKD 2 over the old AKD 1. But as BRN itself has stated, the development cycle (assessment, adoption, production) will take 3+ years. So does this mean we are looking at yet another 3+ years of oblivion in the BRN story? How many things can change by that time??
I really wish BRN would prove my pessimism wrong sooner rather than later. Anyway, better or worse, I'll ride this to the end, and worst case, I'll live my life as it is now.
 
Last edited:
  • Like
  • Love
Reactions: 17 users