BRN Discussion Ongoing

robsmark

Regular
Spot on mate!
Right now it's important to support each other, we are all (hopefully) here with a common goal.

It’s only spot on if management can regain buoyancy in the company by demonstrating revenue and signing more deals. Without that we’re done, and let’s be honest, it’s not looking so good right now, is it?

Why haven’t more companies signed with us when other companies in similar positions are gaining traction left, right and centre?

The goal posts keep moving as the company matures… “22” is our year, “23” is our year, “24” will be our year… still nothing! When was the last time the company mentioned an NDA? It’s starting to feel like Chinese whispers, and the company is staying silent, hiding behind a wall of assumption on our part.

I want to know what the fuck is going on, and nobody in the know is willing to answer me!
 
Reactions: 19 users
Been sniffing around Numem's work with NASA (Ames) again to see what the next steps are, if any.

Appears Phase 1 completed early 2023, but I can't find an outcome summary at this point.

They had an original project that ended in 2021 using a DNN coprocessor, but then switched to Akida in 2022.

Worth seeing whether it moves to a Phase 2 near term, especially given that other NASA (Ames) presso I found recently with Akida still on their Sat program.



PDF summary ex Techport HERE
 
Reactions: 26 users

Damo4

Regular
It’s only spot on if management can regain buoyancy in the company by demonstrating revenue and signing more deals. Without that we’re done, and let’s be honest, it’s not looking so good right now, is it?

Why haven’t more companies signed with us when other companies in similar positions are gaining traction left, right and centre?

The goal posts keep moving as the company matures… “22” is our year, “23” is our year, “24” will be our year… still nothing! When was the last time the company mentioned an NDA? It’s starting to feel like Chinese whispers, and the company is staying silent, hiding behind a wall of assumption on our part.

I want to know what the fuck is going on, and nobody in the know is willing to answer me!

In my honest opinion, there is a disconnect between SP and the outlook of the company.

I don't believe we are looking like we are 'done.' If I did, I would have sold.
I think it would be insanity to believe this company is destined for failure but still hold on. Unless you believe we will be pumped back to a level at which you can exit with most of, or more than, your money in hand, all prior to the apparently inevitable collapse?
TBH there are many people here who would sell out at or around their average price, which is so silly too, but to each their own.

To answer your questions about why, I can only trust what management have to say.
Unfortunately I don't know more about Brainchip than them, so I don't believe it's prudent to reject their responses.
If we can't trust them when they speak of the position they are in now, why have so many held onto single sentences from Sean regarding watching the financials, or lumpy revenue?

TBH I can see why management is tight-lipped; everyone here expects them to crystal-ball revenue figures and IP contracts and then tries to crucify them when they miss.

BTW, how come this is directed at me?
If you take issue with what Hop said, why don't you reply to him? I just agreed with his well-worded response.
My only personal contribution to what he said was 'support each other'?
 
Reactions: 22 users
While over at Techport, checked out the outline on the Intellisense Phase 2 status as below.

Per the summary, and all things going well, we should see a prototype around early to mid next year, as Phase 2 is expected to complete around May.





Intellisense Systems, Inc. proposes in Phase II to advance development of a Neuromorphic Enhanced Cognitive Radio (NECR) device to enable autonomous space operations on platforms constrained by size, weight, and power (SWaP). NECR is a low-size, -weight, and -power (-SWaP) cognitive radio built on the open-source framework, i.e., GNU Radio and RFNoC™, with new enhancements in environment learning and improvements in transmission quality and data processing. Due to the high efficiency of spiking neural networks and their low-latency, energy-efficient implementation on neuromorphic computing hardware, NECR can be integrated into SWaP-constrained platforms in spacecraft and robotics, to provide reliable communication in unknown and uncharacterized space environments such as the Moon and Mars. In Phase II, Intellisense will improve the NECR system for cognitive communication capabilities accelerated by neuromorphic hardware. We will refine the overall NECR system architecture to achieve cognitive communication capabilities accelerated by neuromorphic hardware, on which a special focus will be the mapping, optimization, and implementation of smart sensing algorithms on the neuromorphic hardware. The Phase II smart sensing algorithm library will include Kalman filter, Carrier Frequency Offset estimation, symbol rate estimation, energy detection- and matched filter-based spectrum sensing, signal-to-noise ratio estimation, and automatic modulation identification. These algorithms will be implemented on COTS neuromorphic computing hardware such as Akida processor from BrainChip, and then integrated with radio frequency modules and radiation-hardened packaging into a Phase II prototype. At the end of Phase II, the prototype will be delivered to NASA for testing and evaluation, along with a plan describing a path to meeting fault and tolerance requirements for mission deployment and API documents for integration with CubeSat, SmallSat, and rover for flight demonstration.
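If anyone's wondering what the "energy detection-based spectrum sensing" in that algorithm list actually involves, it's about the simplest of the lot: average the received signal's power and flag the band as occupied when it sits well above the noise floor. A toy sketch in Python (the threshold factor and sample counts are my own illustrative picks, not from the proposal):

```python
import math
import random

def energy_detect(samples, noise_power, threshold_factor=4.0):
    """Declare the channel occupied when the average sample energy
    sits well above the known noise power."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold_factor * noise_power

random.seed(0)  # deterministic demo
noise_power = 1.0

# Noise-only samples: average energy is near noise_power, so no detection.
noise = [random.gauss(0.0, math.sqrt(noise_power)) for _ in range(1000)]
print(energy_detect(noise, noise_power))   # False

# A strong tone buried in the same noise pushes the energy past the threshold.
signal = [n + 5.0 * math.sin(0.3 * i) for i, n in enumerate(noise)]
print(energy_detect(signal, noise_power))  # True
```

The appeal for SWaP-constrained platforms is that this needs no knowledge of the signal's structure, which is presumably why it sits alongside the fancier matched-filter option in the library.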
 
Reactions: 29 users

keyeat

Regular
It’s only spot on if management can regain buoyancy in the company by demonstrating revenue and signing more deals. Without that we’re done, and let’s be honest, it’s not looking so good right now, is it?

Why haven’t more companies signed with us when other companies in similar positions are gaining traction left, right and centre?

The goal posts keep moving as the company matures… “22” is our year, “23” is our year, “24” will be our year… still nothing! When was the last time the company mentioned an NDA? It’s starting to feel like Chinese whispers, and the company is staying silent, hiding behind a wall of assumption on our part.

I want to know what the fuck is going on, and nobody in the know is willing to answer me!
Let's hope there is some big news coming on Thursday in the second part of the podcast!

Seems a bit strange that they would separate it out...
 
Reactions: 8 users

FJ-215

Regular
Let's hope there is some big news coming on Thursday in the second part of the podcast!

Seems a bit strange that they would separate it out...
The half-year financial report is likely to come out later this week as well. Good opportunity to provide the market with an update on where the company is at with Akida 1500 and 2.0.
 
Reactions: 29 users

mrgds

Regular
Let's hope there is some big news coming on Thursday in the second part of the podcast!

Seems a bit strange that they would separate it out...
Unfortunately, I spoke with TD and the podcast was cut in two because they felt 40 mins was too long.
Go figure :rolleyes::(
 
Reactions: 17 users

schuey

Regular
Let's hope there is some big news coming on Thursday in the second part of the podcast!

Seems a bit strange that they would separate it out...
Agreed.
 

Foxdog

Regular
Unfortunately, I spoke with TD and the podcast was cut in two because they felt 40 mins was too long.
Go figure :rolleyes::(
Seriously? In a veritable vacuum of information and updates from the company, 40 mins is barely enough imo. That response shows a complete lack of awareness by the company.
 
Reactions: 11 users

jk6199

Regular
Cheers FF,

Hope all is well!

I have printed out the list and will add to it. Every time I self-doubt, or question something, I will look at this list and be reminded of the great potential.

For the quiet days of single figure transactions and bots galore.

We all need reassurance some days!

I keep going back to the list.
 
Reactions: 14 users

Diogenese

Top 20
Been sniffing around Numem's work with NASA (Ames) again to see what the next steps are, if any.

Appears Phase 1 completed early 2023, but I can't find an outcome summary at this point.

They had an original project that ended in 2021 using a DNN coprocessor, but then switched to Akida in 2022.

Worth seeing whether it moves to a Phase 2 near term, especially given that other NASA (Ames) presso I found recently with Akida still on their Sat program.


PDF summary ex Techport HERE
Hi Fmf,

Impressive find.

The addition of MRAM as a plug-in to Akida is important. Funnily enuf, I had for some time speculated that NASA could use memristors as non-volatile storage in conjunction with Akida because of their inherent radhard characteristics.

Akida has always had provision for external memory connection, so I guess the project was just a matter of pin-matching.
 
Reactions: 32 users

robsmark

Regular
In my honest opinion, there is a disconnect between SP and the outlook of the company.

I don't believe we are looking like we are 'done.' If I did, I would have sold.
I think it would be insanity to believe this company is destined for failure but still hold on. Unless you believe we will be pumped back to a level at which you can exit with most of, or more than, your money in hand, all prior to the apparently inevitable collapse?
TBH there are many people here who would sell out at or around their average price, which is so silly too, but to each their own.

To answer your questions about why, I can only trust what management have to say.
Unfortunately I don't know more about Brainchip than them, so I don't believe it's prudent to reject their responses.
If we can't trust them when they speak of the position they are in now, why have so many held onto single sentences from Sean regarding watching the financials, or lumpy revenue?

TBH I can see why management is tight-lipped; everyone here expects them to crystal-ball revenue figures and IP contracts and then tries to crucify them when they miss.

BTW, how come this is directed at me?
If you take issue with what Hop said, why don't you reply to him? I just agreed with his well-worded response.
My only personal contribution to what he said was 'support each other'?

Hey Damo, definitely not directed at you - I have zero personal issues with anyone on this forum; most of you I don't know, and those of you whom I have met in person are fantastic people. I was simply responding to a post.

I want to respond to your post accurately, so I'm going to break it down by sentence.

In my honest opinion, there is a disconnect between SP and the outlook of the company.

Six months ago I'd have agreed with you; now I think we are probably appropriately priced. Every passing day without commercial adoption (and let's be clear - that's all that matters here; partnerships, podcasts, scientific articles, new hires, Rob's likes on LinkedIn all mean nothing if nobody buys our product) diminishes the value of this company (in my opinion, of course). AKD1000 has been available for over two years, and call it a test chip if you like (and that's another story, as this isn't what the company sold it to us as initially), but it was sold to two clients, so obviously there was potential for it to be sold to more - why wasn't it? We've seen articles and "evidence" being posted here for the four years that I've been a shareholder, so why haven't any of those referenced signed a contract? By way of example - we've seen multiple phone and device releases by the big two over that period, which completely nullifies the argument of development time. We're apparently at the start of an Artificial Intelligence revolution, we have a working chip, ready for sale, better and cheaper than anything the largest semiconductor companies have got, and we can't sell it? It's beginning to feel like there's no appetite for it.

I don't believe we are looking like we are 'done.' If I did, I would have sold.

I actually said that without commercial take-up we're done. Obviously we have sufficient operating capital for the coming months, but what happens when that runs out? Investors aren't going to keep throwing good money at it without a return. Without being patronising: look at the chart! If it continues on this trajectory for another 18 months without further commercial contracts and revenue, then I'll stand by that statement.

There are plenty of companies on the ASX that have peaked and failed, and this is looking very similar right now. It's not good.

I think it would be insanity to believe this company is destined for failure but still hold on. Unless you believe we will be pumped back to a level at which you can exit with most of, or more than, your money in hand, all prior to the apparently inevitable collapse?
TBH there are many people here who would sell out at or around their average price, which is so silly too, but to each their own.


I believe we are at a defining moment in the company's history, and the tires need to touch the tarmac. I've been bullshitted time and time again by company representatives (to my face) telling me that they're extremely positive about the coming months, etc., only to be let down by a fucking partnership or some other insignificant news. Contracts and revenue are needed, and they are needed very soon, but I'm questioning whether we have a team that is up to it, assuming the chip is as good as it's said to be.

I have a significant number of shares. Nobody wants this company to succeed more than I do. I've read and read, and researched and read, and have weighed up this investment against others, and this is my current outlook. It's mine, and everyone is entitled to their own. I see high-level posters stating as fact that the future is rosy and big things are coming, but (and in response to your question a couple of days ago @Rise from the ashes, this is me calling out the bullshit) they don't know what will happen in the future, and they have no guarantee that management will turn this company into a success.

To answer your questions about why, I can only trust what management have to say.
Unfortunately I don't know more about Brainchip than them, so I don't believe it's prudent to reject their responses.
If we can't trust them when they speak of the position they are in now, why have so many held onto single sentences from Sean regarding watching the financials, or lumpy revenue?

Sean has said a lot of things, as have the rest of them, but the proof is in the pudding and, as I mentioned previously, nothing has transpired.

TBH I can see why management is tight-lipped; everyone here expects them to crystal-ball revenue figures and IP contracts and then tries to crucify them when they miss.

I don't think that's fair. They need to do better with shareholder engagement and have more transparency. The podcasts tell us nothing, but would have us believe a future where Akida runs the world, when in reality we're a microcap again, with no real-world adoption and a declining SP.

I'm no stranger to making my concerns vocal (and yes, I understand that rubs a lot of you up the wrong way), but I'm a genuine shareholder with genuine concerns that I can't seem to get a proper answer to:

- What is the status of the EAP participants?
- How many companies are Brainchip currently engaged with?
- What is the sector distribution of these engagements?
- How advanced are these engagements?

They aren't hard questions, and don't specifically break any non-disclosure agreements as there is nothing specific in them.

"The share price will do what the share price will do" - This about sums up how few fucks management actually give about company shareholders.

They need to do better.
 
Last edited:
Reactions: 42 users

Slade

Top 20
Let’s have a group hug and a cry.
 
Reactions: 10 users

robsmark

Regular
Unfortunately, i spoke with TD and the podcast was cut in two, because they felt 40mins was too long.
Go figure :rolleyes::(

Case in point.
 
Reactions: 7 users
Now, I could be wrong in my understanding and happy to be corrected if so.

However, been reading so many diff articles, papers etc like everyone else and my simple thinking....

There are obviously algorithms, whether they be CNN or SNN, which are the software component.

Then you have the hardware to do the processing of said algos, whether that be GPU, CPU, TPU or neuromorphic like AKD1000, 1500 etc.

Now, I'm thinking the company thought it may have been a bit of an easier switch initially by producing AKD1000 as the hardware in conjunction with the CNN2SNN converter.

Kind of makes sense, as the majority of algos appear to be CNN-based and we would need SNN algos to run on the Akida hardware. The CNN2SNN converter would facilitate that and show the benefits.

As the company said, the AKD1000 take-up wasn't really forthcoming, as existing solutions were fit for purpose at the time and end users weren't about to go changing up their production schedules / models / products etc.

It did however provide a POC for those who tested and played with AKD1000, and a feedback channel to BRN for the next iterations.

These days, I'm starting to see more commentary in various places that SNN algos are now getting traction and being written accordingly, and they will require the neuromorphic hardware side to be able to process them to full capability.

This is where I believe we will start to find our space: as and when these SNN algos start finding their spot in newer upcoming products, particularly at the edge, developers and companies will approach the MegaChips and Renesas (and us) of the world.

Will be interesting to find out eventually what the Renesas tape-out was for and whether it was SNN algo driven.

The downside to my thinking is that our initial market penetration was a little muted, as we know, which provided some additional time for competitors to refine their own offerings.

The upside is that the BRN team have been listening to clients and evolving Akida accordingly with ViT and TENNs, which will hopefully keep us ahead of the rest, though I feel the gap closed slightly during the last couple of years.

Given we have the integration of our IP with the likes of the M85, it allows the current crop of product developers and ARM users to now include specifically written SNN algos.

We need to seriously target end-user products that now have SNN algos written, imo, and get the flow-on effect of the IP agreements and / or the production royalties through MegaChips and Renesas.
 
Reactions: 47 users

equanimous

Norse clairvoyant shapeshifter goddess
Now, I could be wrong in my understanding and happy to be corrected if so.

However, been reading so many diff articles, papers etc like everyone else and my simple thinking....

There are obviously algorithms, whether they be CNN or SNN, which are the software component.

Then you have the hardware to do the processing of said algos, whether that be GPU, CPU, TPU or neuromorphic like AKD1000, 1500 etc.

Now, I'm thinking the company thought it may have been a bit of an easier switch initially by producing AKD1000 as the hardware in conjunction with the CNN2SNN converter.

Kind of makes sense, as the majority of algos appear to be CNN-based and we would need SNN algos to run on the Akida hardware. The CNN2SNN converter would facilitate that and show the benefits.

As the company said, the AKD1000 take-up wasn't really forthcoming, as existing solutions were fit for purpose at the time and end users weren't about to go changing up their production schedules / models / products etc.

It did however provide a POC for those who tested and played with AKD1000, and a feedback channel to BRN for the next iterations.

These days, I'm starting to see more commentary in various places that SNN algos are now getting traction and being written accordingly, and they will require the neuromorphic hardware side to be able to process them to full capability.

This is where I believe we will start to find our space: as and when these SNN algos start finding their spot in newer upcoming products, particularly at the edge, developers and companies will approach the MegaChips and Renesas (and us) of the world.

Will be interesting to find out eventually what the Renesas tape-out was for and whether it was SNN algo driven.

The downside to my thinking is that our initial market penetration was a little muted, as we know, which provided some additional time for competitors to refine their own offerings.

The upside is that the BRN team have been listening to clients and evolving Akida accordingly with ViT and TENNs, which will hopefully keep us ahead of the rest, though I feel the gap closed slightly during the last couple of years.

Given we have the integration of our IP with the likes of the M85, it allows the current crop of product developers and ARM users to now include specifically written SNN algos.

We need to seriously target end-user products that now have SNN algos written, imo, and get the flow-on effect of the IP agreements and / or the production royalties through MegaChips and Renesas.
Amongst all the dark posts there is a full moon that shines through.
 
Reactions: 26 users
Article giving us a mention amongst some others as a processing solution to the emerging energy problems.


Brain-inspired computing could solve energy-efficient AI puzzle​

Artificial neurons overcome data management inefficiencies of conventional processors as brain-inspired computing development gathers speed.
4 May 2023



Neuromorphic networking: brain-inspired computing architectures pave the way for energy-efficient AI systems.

There’s much to celebrate in using waste heat from data centers to warm corporate headquarters, university buildings, council offices, and thousands of homes. But the fact that there’s so much heat to go around points to a major issue with conventional computer processors – their power consumption. And while the current race to build even more powerful artificial intelligence (AI) systems has the potential to revolutionize the way that we work, the energy demands of these computationally intensive endeavors raise concerns. Fortunately, brain-inspired computing architectures could come to the rescue by shrinking AI’s carbon footprint.

To understand why, it’s worth recapping what we know about the human brain and delving into how it’s able to perform somewhere in the region of 1000 trillion operations per second, while having the energy consumption of a dim lightbulb. Our brains contain billions of neurons, and each one of those electrically excitable cells can connect to thousands of other neurons, which enables trillions of synapses. And those junctions between individual cells can get stronger or weaker, weighting the calculations being performed.

Biology beats silicon​

The brain’s plasticity – a hardening and softening of nodes in the network so that different inputs trigger different outputs – is represented crudely by the weights and biases in deep neural networks. But what biology can achieve using 20W (the estimated power consumption of a fully developed human brain) requires much more energy when attempts are made to recreate those thought patterns using silicon chips.

For example – when IBM ran its “human scale” synapse simulation in 2012, it required a state-of-the-art supercomputer consuming megawatts of power. Another way of picturing the inefficiencies of using silicon is to consider that Go grandmaster Lee Sedol’s brain is 50,000 times as energy efficient as the racks of CPUs and GPUs needed by DeepMind’s AlphaGo to beat him in their five-match contest held in 2016.
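A quick back-of-envelope check on the article's own figures (assuming equal work rates between brain and machine, which is of course a simplification):

```python
# The article gives: brain ~20 W, and Lee Sedol's brain 50,000x as
# energy-efficient as the AlphaGo racks. At equal work rates that
# implies the racks drew on the order of a megawatt, consistent with
# the "megawatts of power" quoted for the 2012 IBM simulation scale.
brain_power_w = 20
efficiency_ratio = 50_000
racks_power_w = brain_power_w * efficiency_ratio
print(racks_power_w)  # 1000000, i.e. ~1 MW

# And ~1000 trillion ops/s on 20 W works out to 50 trillion ops per joule.
ops_per_second = 1_000e12
ops_per_joule = ops_per_second / brain_power_w
print(int(ops_per_joule))  # 50000000000000
```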

This demand for power ramps up rapidly as more GPUs are deployed. Industry watchers have noted that while GPU performance has soared, so has power consumption. In 2012, units drew around 25 W of power, whereas today’s designs – with superior processing capabilities – consume several hundred watts. And advances in generative AI, which make heavy use of powerful GPUs, have sent energy demands along a much steeper path.

But, as mentioned, developers are exploring approaches other than conventional CPU and GPU designs. Several years ago IBM and its partners revealed a neuro-synaptic platform dubbed TrueNorth with 1 million neurons, 256 million synapses, and 4096 parallel and distributed neural cores. And the power consumption? Just 70 mW.

In-memory advantages​

The system highlights the potential of brain-inspired computing designs to unlock much more energy-efficient AI development. Efficiency gains are realized as computing and storage can be co-located. “You don’t have to establish communication between logic and memory; you just have to make appropriate connections between the different neurons,” explains Manuel Le Gallo – a member of IBM’s In-Memory Computing group. “In conventional computing, whenever you want to perform a computation, you must first access the memory, obtain your data, and transfer it to the logic unit, which returns the computation. And whenever you get a result, you have to send it back to the memory.”

Artificial neurons within that brain-inspired computing logic can be thought of as accumulators, capable of integrating multiple inputs. And those neurons will fire when a certain threshold is reached, informed by the number and strength of those signals. Another benefit of neuromorphic systems – computing designs that take inspiration from the structure of the brain – is their capacity to process noisy inputs, which has advantages in signal-processing.
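The accumulate-and-fire behaviour described there is easy to sketch as a leaky integrate-and-fire neuron (the constants below are illustrative, not taken from any particular chip):

```python
# A minimal leaky integrate-and-fire neuron: the "accumulator" described
# above. Inputs add to a membrane potential that decays each step; when
# the potential crosses the threshold, the neuron spikes and resets.
def lif_run(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak inputs leak away; a burst of stronger inputs accumulates to a spike.
print(lif_run([0.2, 0.2, 0.2, 0.6, 0.6]))  # [0, 0, 0, 1, 0]
```

Because output depends on the number and timing of input spikes rather than dense matrix maths, no work is done when inputs are absent, which is where the energy saving comes from.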

Reducing the energy budget required to train algorithms on large data sets – a necessary step in activating the full powers of deep neural networks – could put new product opportunities on the roadmap. Brain-inspired computing chips with energy-efficient AI performance open the door to utilizing smaller, more portable devices rather than just having to rely on large infrastructure available in the cloud.

The industry trend for application-specific integrated circuits designed to meet the rising demands of AI can be seen with chips such as Intel’s Loihi 2, Google’s TPUs and Apple’s A16, which the iPhone maker says features a 16-core neural engine. But it’s not just tech giants that are moving forwards in this space. BrainChip is offering what it dubs ‘smart edge silicon’ that exploits neuromorphic computing benefits to lower the energy cost of AI.

And then there are the possibilities of advanced materials beyond silicon to consider too, which have the potential to take brain-inspired computing to a new level. Hybrain brings together a wide range of industry and research partners to explore structures capable of processing both light and electric signals. The photonic structures provide the ability to read in information, which can then be fed into an in-memory analog computing electronic system.

Brain-inspired computing is gathering pace as the energy limits of conventional silicon are made more apparent by the scramble to build even bigger, more capable generative AI models. And the good news is that the energy rewards put on the table by in-memory processing have the potential to reset commercial AI development along a more sustainable path.
 
Reactions: 29 users

Damo4

Regular
Hey Damo, definitely not directed at you - I have zero personal issues with anyone on this forum; most of you I don't know, and those of you whom I have met in person are fantastic people. I was simply responding to a post.

I want to respond to your post accurately, so I'm going to break it down by sentence.

In my honest opinion, there is a disconnect between SP and the outlook of the company.

- Six months ago I'd have agreed with you; now I think we are probably appropriately priced. Every passing day without commercial adoption (and let's be clear - that's all that matters here; partnerships, podcasts, scientific articles, new hires, Rob's likes on LinkedIn all mean nothing if nobody buys our product) diminishes the value of this company (in my opinion, of course). AKD1000 has been available for over two years, and call it a test chip if you like (and that's another story, as this isn't what the company sold it to us as initially), but it was sold to two clients, so obviously there was potential for it to be sold to more - why wasn't it? We've seen articles and "evidence" being posted here for the four years that I've been a shareholder, so why haven't any of those referenced signed a contract? By way of example - we've seen multiple phone and device releases by the big two over that period, which completely nullifies the argument of development time. We're apparently at the start of an Artificial Intelligence revolution, we have a working chip, ready for sale, better and cheaper than anything the largest semiconductor companies have got, and we can't sell it? It's beginning to feel like there's no appetite for it.

I don't believe we are looking like we are 'done.' If I did, I would have sold.

I actually said that without commercial take-up we're done. Obviously we have sufficient operating capital for the coming months, but what happens when that runs out? Investors aren't going to keep throwing good money at it without a return. Without being patronising: look at the chart! If it continues on this trajectory for another 18 months without further commercial contracts and revenue, then I'll stand by that statement.

There are plenty of companies on the ASX that have peaked and failed, and this is looking very similar right now. It's not good.

I think it would be insanity to believe this company is destined for failure but still hold on. Unless you believe we will be pumped back to a level at which you can exit with most of, or more than, your money in hand, all prior to the apparently inevitable collapse?
TBH there are many people here who would sell out at or around their average price, which is so silly too, but to each their own.


I believe we are at a defining moment in the company's history, and the tires need to touch the tarmac. I've been bullshitted time and time again by company representatives (to my face) telling me that they're extremely positive about the coming months, etc., only to be let down by a fucking partnership or some other insignificant news. Contracts and revenue are needed, and they are needed very soon, but I'm questioning whether we have a team that is up to it, assuming the chip is as good as it's said to be.

I have a significant number of shares. Nobody wants this company to succeed more than I do. I've read and read, and researched and read, and have weighed up this investment against others, and this is my current outlook. It's mine, and everyone is entitled to their own. I see high-level posters stating as fact that the future is rosy and big things are coming, but (and in response to your question a couple of days ago @Rise from the ashes, this is me calling out the bullshit) they don't know what will happen in the future, and they have no guarantee that management will turn this company into a success.

To answer your questions about why, I can only trust what management have to say.
Unfortunately I don't know more about Brainchip than them, so I don't believe it's prudent to reject their responses.
If we can't trust them when they speak of the position they are in now, why have so many held onto single sentences from Sean regarding watching the financials, or lumpy revenue?

Sean has said a lot of things, as have the rest of them, but the proof is in the pudding and, as I mentioned previously, nothing has transpired.

TBH I can see why management is tight-lipped; everyone here expects them to crystal-ball revenue figures and IP contracts, and then tries to crucify them when they miss.

I don't think that's fair. They need to do better with shareholder engagement and have more transparency. The podcasts tell us nothing, but lead us to believe in a future where Akida runs the world, when in reality we're a microcap again, with no real-world adoption and a declining SP.

I'm no stranger to making my concerns vocal (and yes, I understand that rubs a lot of you up the wrong way), but I'm a genuine shareholder with genuine concerns that I can't seem to get a proper answer to:

- What is the status of the EAP participants?
- How many companies are Brainchip currently engaged with?
- What is the sector distribution of these engagements?
- What is the advancement of these engagements?

They aren't hard questions, and they don't specifically break any non-disclosure agreements, as there is nothing specific in them.

"The share price will do what the share price will do" - This about sums up how few fucks management actually give.

They need to do better.

All good points and also no issue with you voicing the concerns, was just happy to discuss as we have.
Will have a think about what you've raised.

I didn't agree with the following from your previous post however, which is why I mentioned holding onto shares for a company destined for failure.
"Without [commercial uptake] we’re done, and let’s be honest, it’s not looking so good right now is it?"
For me, it's fine, and considering the support on certain posts of mine and others, there's a lot of people who see it my way too.
I truly believe that if we'd had a more reasonable pull-back since the $2.34 highs and, say, hovered around the upper $1 mark, there would be less criticism, or at least a lower frequency of the echoing of said criticism.

For me it's a binary outcome. We succeed or we don't.
There isn't an in between so regardless of who is right or wrong about the outcome, the journey doesn't matter for me yet.
I take the risk and I hope for the reward.
This forum really isn't helping people evaluate their decisions anymore, so I'm surprised so many seek comfort here.

With your 4 decent questions at the end, would an answer to those mean you are satisfied?
Cos it sounds like you have far more concerns than that.
 
  • Like
  • Love
Reactions: 11 users

Worthwhile skim/read if you have time. Explains a bit on the hardware side.

Whilst we don't get a mention (it's mostly the major players), there are some pertinent points on neuromorphic computing, plus a couple of specialty forms we target, ASIC and FPGA, on which we perform well.

MegaChips do ASICs, wonder why :)



In AI | Last updated: August 15, 2023


A Brief Introduction to the Hardware Behind AI​

By Amrita Pathak and Narendra Mohan Mittal

Innovative AI hardware has the potential to drive remarkable capabilities and revolutionize how people interact with technology and the world around them.
Have you ever thought about how a tiny chip, smaller than your thumbnail, can mimic human thought processes?
It’s a mind-blowing fact that the hardware behind artificial intelligence (AI) is the powerhouse that makes it possible.
As you explore the world of AI hardware, you will discover how GPUs, TPUs, and neural processing units powerfully shape the landscape of artificial intelligence. Their significant role cannot be overstated.
In this article, I will discuss with you the complexities of AI hardware, its pivotal role in driving modern innovation, technologies used, pros and cons, their usage, and other details.
Let’s get started!

What Is AI Hardware?​

AI hardware consists of special parts that drive artificial intelligence technologies. These parts are created to manage the complex calculations needed for recognizing patterns, making decisions, and analyzing data.

Imagine them as the sturdy muscles that support the AI brain’s functions.
The heart of AI hardware lies in the processors such as Graphics Processing Units (GPUs), Tensor Processing Units (TPUs), and Neural Processing Units (NPUs).
  • GPUs: These were initially designed for rendering graphics. Since GPUs excel in parallel processing, these are perfect for training AI models.
  • TPUs: Created by Google specifically for accelerating AI computations, TPUs particularly excel in deep learning tasks.
  • NPUs: These can handle tasks involving neural networks and essentially mimic the neural connections found in the human brain.
All the above hardware components work together to process and analyze vast amounts of data, enabling AI systems to learn, adapt, and make predictions.

AI Hardware Technologies​


Let’s explore the key players in this technological symphony.

#1. Graphical Processing Units (GPUs)​

Originally designed for rendering complex graphics in video games, GPUs have surprisingly found their place in the realm of artificial intelligence. The key to their capability in AI lies in parallel processing – the ability to handle multiple calculations simultaneously.
Unlike traditional processors, GPUs excel at swiftly crunching vast amounts of data, making them an ideal choice for training intricate AI models. Their impressive processing power speeds up data manipulation and model training, significantly reducing the time required to educate AI systems.
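To make the data-parallel idea concrete, here is a minimal pure-Python sketch. It is illustrative only: `scale` and `parallel_map` are made-up names, and a real GPU runs the same "kernel" across thousands of hardware threads rather than a small thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def scale(chunk, factor=2.0):
    # The same simple operation applied to every element (a "kernel").
    return [x * factor for x in chunk]

def parallel_map(data, workers=4):
    # Split the data into chunks, mimicking how a GPU assigns groups of
    # threads to slices of a large array and processes them concurrently.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(scale, chunks)
    return [x for chunk in results for x in chunk]

print(parallel_map([1.0, 2.0, 3.0, 4.0]))  # [2.0, 4.0, 6.0, 8.0]
```

The key point is that each chunk is independent, which is exactly what makes the workload easy to spread across many execution units.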

#2. Tensor Processing Units (TPUs)​

Coming from the innovative hub of Google, TPUs were crafted with a singular purpose – to supercharge specific AI workloads, especially those involving neural networks.
One remarkable aspect of TPUs is their exceptional efficiency, as they consume less power compared to traditional CPUs and GPUs while accomplishing these tasks.
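At the heart of the neural-network workloads TPUs accelerate is the multiply-accumulate (MAC) pattern of matrix multiplication. The naive sketch below (illustrative pure Python, not TPU code) shows that pattern; a TPU's systolic array performs these MAC steps massively in parallel in hardware.

```python
def matmul(a, b):
    # Naive matrix multiply: the multiply-accumulate (MAC) pattern that a
    # TPU's systolic array executes massively in parallel in hardware.
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0
            for k in range(inner):
                acc += a[i][k] * b[k][j]  # one MAC step
            out[i][j] = acc
    return out

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```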

#3. Deep Learning (DL)​

Deep Learning (DL), a branch of machine learning, embodies the way the human mind can assimilate and comprehend information, but in a digital form. This technology employs neural networks with multiple layers to progressively abstract and manipulate data.
Deep learning serves as the driving force behind modern AI, propelling it towards increasingly sophisticated accomplishments.
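The "multiple layers that progressively abstract data" can be sketched in a few lines of plain Python. This toy two-layer network is illustrative only; the weights are arbitrary, and real frameworks learn them from data.

```python
def relu(x):
    # Non-linearity applied between layers.
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    # One fully connected layer: weighted sums plus a bias per neuron.
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

# Two stacked layers: each one re-represents the previous layer's output.
x = [1.0, -1.0]
h = relu(dense(x, [[0.5, -0.5], [1.0, 1.0]], [0.0, 0.5]))  # hidden layer
y = dense(h, [[1.0, -1.0]], [0.0])                         # output layer
print(y)  # [0.5]
```

Stacking more such layers is what makes the network "deep", and every layer is dominated by the same weighted-sum arithmetic that AI hardware accelerates.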

#4. Application-Specific Integrated Circuits (ASICs)​

ASICs serve as the tailored suits in the world of AI hardware. These chips are meticulously crafted to excel at specific tasks within AI computations, exhibiting remarkable efficiency.

Unlike generic processors, ASICs are designed with precision, homing in on particular types of calculations. This focused approach grants them exceptional speed and energy efficiency for AI workloads.

#5. Field-Programmable Gate Arrays (FPGAs)​

What if your computer’s hardware had the remarkable ability to transform?
This unique characteristic defines FPGAs (Field Programmable Gate Arrays).
Unlike conventional processors, FPGAs can be reconfigured after manufacturing to adapt and optimize their performance for specific tasks. This extraordinary flexibility makes them the Swiss Army knife of AI hardware, offering a blend of ASICs' efficiency and conventional processors' versatility.
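FPGAs achieve that post-manufacturing reconfigurability largely through small lookup tables (LUTs). The toy software model below, which "programs" the same 2-input LUT first as an AND gate and then as XOR, hints at the idea; it is illustrative only and has nothing to do with real FPGA tooling.

```python
def make_lut(truth_table):
    # A 2-input lookup table (LUT): the basic programmable logic cell of
    # an FPGA. Rewriting its contents "re-wires" the chip's behavior.
    return lambda a, b: truth_table[(a << 1) | b]

# "Program" the same LUT cell as an AND gate, then reconfigure it as XOR.
and_gate = make_lut([0, 0, 0, 1])
xor_gate = make_lut([0, 1, 1, 0])

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([and_gate(a, b) for a, b in inputs])  # [0, 0, 0, 1]
print([xor_gate(a, b) for a, b in inputs])  # [0, 1, 1, 0]
```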

#6. Neuromorphic Chips​

Imagine a world where computer chips function just like our brains, with their intricate connections and rapid signaling.
Enter neuromorphic chips. These chips work differently from regular chips: they communicate through sparse, event-driven signals rather than a constant clocked stream. These remarkable creations excel at multitasking and at responding swiftly to events. As a result, neuromorphic chips are perfect for conserving energy in AI systems and handling real-time tasks that demand speed and efficiency.
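A classic software model of this brain-inspired, event-driven behavior is the leaky integrate-and-fire neuron: it only emits a spike (an event) when accumulated input crosses a threshold, so quiet periods cost almost nothing. The sketch below is illustrative, with arbitrary threshold and leak values.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    # Leaky integrate-and-fire: the membrane potential accumulates input
    # and decays over time; a spike (an event) is emitted only when the
    # threshold is crossed.
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(t)   # record the event
            potential = 0.0    # reset after firing
    return spikes

print(lif_neuron([0.6, 0.6, 0.0, 0.0, 1.2]))  # [1, 4]
```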

When it comes to choosing one among these AI hardware technologies, companies often lean towards using Graphical Processing Units (GPUs) and Tensor Processing Units (TPUs) for their AI tasks.
GPUs offer parallel processing power and versatility, making them a popular choice, especially for training complex AI models. Similarly, TPUs, created by Google, stand out for their ability to speed up neural network tasks, offering both efficiency and swiftness. These two options are favored because of their proven performance in handling the intense computational demands of modern AI applications.

AI Hardware vs. Regular Hardware​


Understanding the distinction between AI hardware and regular hardware requires you to learn about the components that power the astonishing capabilities of artificial intelligence.
Here’s a breakdown of how AI hardware sets itself apart from regular or traditional hardware.

Complex Computations​

AI tasks involve intricate calculations for pattern recognition, data analysis, making decisions, predicting events, etc. AI hardware is designed to efficiently handle these complex computations.

Parallel Processing Power​

AI hardware, such as GPUs and TPUs, excels in parallel processing or executing multiple tasks simultaneously while ensuring performance. This enables quicker data processing and model training, which is critical for AI applications as you can deploy solutions faster.

Specialized Architecture​


AI hardware is purpose-built for specific AI workloads, like neural networks and deep learning algorithms. This specialized architecture ensures the efficient execution of AI-specific tasks, unlike regular hardware that lacks this tailored design.

Energy Efficiency​

AI hardware emphasizes energy efficiency due to the power-hungry nature of AI tasks. It’s optimized to perform AI computations using less power, prolonging the lifespan of devices and reducing operational costs.

Customization and Adaptability​

Regular hardware is versatile but lacks the customization level that you can attain with AI hardware like ASICs and FPGAs. AI hardware is designed to cater to specific AI tasks, enhancing performance and efficiency.

How Startups Are Adopting AI Hardware​

Integrating AI hardware into operations has become a strategic avenue for startups in the digital landscape, enhancing operations and driving innovation.
Let’s explore how startups harness the power of AI hardware.

Data Processing​

Startups use AI hardware, like GPUs and TPUs, to accelerate data processing and model training. This, in turn, enables them to perform tasks faster, make informed decisions swiftly, and create out-of-the-box solutions.

Cost-Effectiveness​

AI hardware’s parallel processing capability enables startups to accomplish more while utilizing fewer resources. This ultimately helps optimize costs and generate better ROI.

Customization​


In the world of startups, finding customized solutions is often a necessity, because every business has different goals, requirements, and restrictions. So, they need a solution that they can easily customize to suit their usage.
That’s where AI hardware comes into play. Specifically designed components, like ASICs and FPGAs, are easy to customize to match specific AI workloads. This provides more operational efficiency and boosts performance.

Edge Computing​

Do you know that many startups operate at the edge, where real-time processing matters? Well, AI hardware such as neuromorphic chips can cater to that with their event-driven communication.

Innovation Boost​

By incorporating AI hardware, startups can gain a competitive advantage. This technology allows them to develop innovative AI-driven products and services, positioning themselves ahead in the market.

Best AI Hardware Providers​

Now, let’s look into the best AI hardware providers in the market.

#1. Nvidia​

Nvidia, a global leader in AI computing, stands at the forefront of transforming industries through its innovative hardware. It has pioneered accelerated computing, an integral concept in AI’s functioning.

No longer limited to graphics, their GPUs serve as the brains behind AI operations, driving the computations that fuel its success. Whether powering data centers, the cloud, or personal devices, Nvidia’s hardware delivers the necessary computational power for AI applications.
Nvidia’s cutting-edge products, like the H100 GPU, are specifically designed to tackle complex AI tasks, solidifying their crucial role in the landscape of AI hardware.

#2. Intel​

Intel, a leading name in the tech industry, offers a wide range of AI hardware options. From data preprocessing to training, inferencing, and deployment, their comprehensive portfolio has got you covered.

Whether you need a data science workstation or advanced machine learning and deep learning tools, Intel simplifies the process of AI deployments.
One standout product is their Xeon Scalable processors, which provide accelerated AI capabilities and enhanced security for easy implementation in data centers worldwide.

#3. Graphcore​

Graphcore is an innovative company that has pioneered a new type of processor exclusively crafted for machine intelligence.

Their Intelligent Processing Units (IPUs) are purpose-built to handle the intricate computations required by AI, surpassing traditional hardware and exhibiting remarkable performance.
Graphcore’s comprehensive hardware and software solutions span across diverse sectors like finance, healthcare, and scientific research, enabling these industries to harness the power of AI efficiently.

#4. Cerebras​

Cerebras has significantly contributed to AI hardware through its Wafer Scale Engine (WSE). The traditional use of GPU clusters in scaling deep learning often demands extensive engineering hours, posing a practical barrier for many who wish to harness the potential of large-scale AI.

Cerebras’ WSE removes this obstacle by providing a cluster-scale AI compute resource that is as easy to program as a single desktop machine. This means you can utilize standard tools like TensorFlow or PyTorch without the need for complex adjustments.

#5. Edge TPU​

Developed by Google, Edge TPU is an ASIC that has been purpose-built for running AI at the edge.
This technology has emerged as a response to the growing demand for deploying AI models trained in the cloud on edge devices, driven by considerations of privacy, latency, and bandwidth limitations.
With its compact physical size and low power requirements, Edge TPU offers remarkable performance while enabling high-accuracy AI deployment at the edge. It’s not merely a hardware solution; it combines custom hardware with open software and advanced AI algorithms.
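One common trick for fitting cloud-trained models onto small edge accelerators like the Edge TPU is int8 quantization: shrinking 32-bit float weights to 8-bit integers plus a scale factor. The minimal symmetric-quantization sketch below is illustrative only; production toolchains such as TensorFlow Lite handle this (including zero points and calibration) for you.

```python
def quantize_int8(values):
    # Symmetric int8 quantization: map float weights onto integers in
    # [-127, 127] plus a single scale factor.
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.1, -0.5, 0.25, 1.27]
q, scale = quantize_int8(weights)
print(q)                     # [10, -50, 25, 127]
print(dequantize(q, scale))  # approximately the original floats
```

Integer math is far cheaper in silicon than floating point, which is much of why such chips can be both small and fast.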

#6. Amazon EC2 G4 Instances​

When exploring the world of AI hardware, don't forget to consider Amazon EC2 G4 Instances, since they're also a significant option in the industry.


G4 instances provide an affordable and flexible option, which makes them well suited to running machine learning models and graphics-intensive applications. They are specifically designed to handle tasks like image classification, object detection, speech recognition, and more.
You have the option to select either NVIDIA or AMD GPUs, each with its own unique advantages. Thus, they can become a valuable asset in your AI hardware toolkit.

#7. Qualcomm​

Qualcomm is undoubtedly a global leader in wireless technology, making significant progress in the field of AI hardware. They are currently developing power-efficient AI technology that can be applied to a wide range of products and industries.

Qualcomm’s AI solutions bring several advantages, such as user privacy protection, improved reliability, and efficient use of network bandwidth.
With its AI Engine at the wheel, Qualcomm is driving the advancement of the Connected Intelligent Edge, meaning its solutions can help enhance user experiences across various devices.

Advancements and Innovations in AI Hardware​

The AI hardware industry is experiencing rapid advancements and groundbreaking innovations that are reshaping the artificial intelligence landscape.
Let’s dive into some exciting progress in this dynamic field.

Specialized Chips for AI​

Tech giants like Google and Apple are responding to the complex requirements of AI with innovative solutions. They are revolutionizing the field by spearheading the development of specialized chips tailored to perform AI tasks.

Neuromorphic Computing​


Neuromorphic chips offer cutting-edge technology in the field of AI hardware. They emulate the intricate neural connections of the human brain, paving the way for unprecedented advancements. This new era of neuromorphic computing combines efficiency and brain-inspired design to shape a future where AI can reach incredible heights.

Quantum Computing​

The potential of quantum computers to tackle complex problems surpasses the capabilities of classical computers by leaps and bounds. While we are in the initial stage of witnessing the practical applications of quantum computing in AI, the impact it will have on AI hardware is profound.

Edge AI Acceleration​

The rise of edge computing is being accelerated by AI hardware specifically designed for real-time, energy-efficient processing. This technological progress holds significant relevance, especially for devices such as IoT sensors and wearables.

Memory Innovations​

Are you familiar with how AI algorithms work? They can be quite memory-intensive, which means they require a lot of storage space.
Fortunately, there are innovative solutions available to address this issue. Two emerging memory technologies, called resistive RAM (ReRAM) and phase-change memory (PCM), are stepping in to bridge the gap.

Pros and Cons of Using AI Hardware​


By incorporating AI hardware, businesses and industries can harness the power of artificial intelligence effectively. But it's important to understand the pros and cons associated with using AI hardware.

Pros​

  • Enhanced performance: AI hardware can handle complex AI tasks, offering faster and more efficient processing compared to traditional hardware.
  • Efficiency: Some AI chips, such as TPUs and neuromorphic chips, are designed to be energy efficient. By using these specialized chips, you're saving money on operations and being kinder to the environment.
  • Speed: AI hardware significantly speeds up data processing and model training, empowering you to gain faster insights and make real-time decisions in various scenarios.
  • Complex problem solving: Quantum computing, a type of AI hardware, has the incredible ability to solve complex problems at an unprecedented speed.
  • Scalability: AI hardware can adapt and expand to accommodate the increasing demands related to growing datasets and evolving AI applications.

Cons​

  • Cost: The initial investment in AI hardware, including development, deployment, and maintenance costs, can be high.
  • Limited versatility: Some AI hardware, like ASICs, is optimized for specific tasks, making it less suitable for broader applications.
  • Complex implementation: Integrating AI hardware requires both expertise and resources, which may pose challenges for smaller businesses during implementation.

Conclusion​

AI hardware has remarkable capabilities to revolutionize different industries. Using AI hardware for executing heavy AI tasks is advantageous for businesses and individuals. It not only boosts efficiency and expedites problem-solving but also allows you to create scalable, futuristic AI solutions.
As AI hardware evolves, it’s expected to unlock opportunities and push boundaries in the field of technology. Whether you’re a business leader or simply curious about technology, understanding the aspects of AI hardware offers a glimpse into an exciting future led by innovative technologies.

 
  • Like
  • Fire
  • Sad
Reactions: 22 users