BRN Discussion Ongoing

AARONASX

Holding onto what I've got
Rob has also been part of the Wavious staff since April 29, 2024.
  • Job Title: Sales and Channel Leadership. GTM and Business Development. Global and Regional Experience

Lies, a lot of lies … by BrainChip 😂
I wouldn't say they outright lied; I would, however, say they just weren't ready to disclose that information so early... maybe not ready for our detective work. Perhaps certain internal processes need to happen before making things public, like the tenure of said person(s) needing to come to an end first.
 
  • Like
  • Thinking
Reactions: 7 users
I wouldn't say they outright lied; I would, however, say they just weren't ready to disclose that information so early... maybe not ready for our detective work. Perhaps certain internal processes need to happen before making things public, like the tenure of said person(s) needing to come to an end first.

I'm sorry but voluntarily and knowingly providing false information, especially to an investor, IS LYING.


lie²
/lʌɪ/
noun
noun: lie; plural noun: lies
  1. an intentionally false statement.
    "they hint rather than tell outright lies"
 
  • Like
  • Fire
  • Thinking
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I'm sorry but voluntarily and knowingly providing false information, especially to an investor, IS LYING.


lie²
/lʌɪ/
noun
noun: lie; plural noun: lies
  1. an intentionally false statement.
    "they hint rather than tell outright lies"


Please correct me if I'm mistaken, but I believe Tony Dawe responded to a shareholder query on 26 March 2024, confirming that both Nandan and Rob were still working at BrainChip at that time.

If Rob's links to Wavious commenced on 29 April 2024, then this is a full month AFTER Tony's response, which would indicate that Tony (ergo BrainChip) was NOT LYING!
 
  • Like
  • Fire
Reactions: 19 users

AARONASX

Holding onto what I've got
I'm sorry but voluntarily and knowingly providing false information, especially to an investor, IS LYING.


lie²
/lʌɪ/
noun
noun: lie; plural noun: lies
  1. an intentionally false statement.
    "they hint rather than tell outright lies"
As I understand it, the comment/statement was not made public, i.e. on the market; therefore, in my opinion, this was not a lie, BUT it just wasn't honest... but the question remains: while it wasn't public, was Tony actually aware?

If it were on the ASX and they knowingly knew internally, then yes, I would agree it is a lie. I assume this is one of the very reasons BrainChip is careful to make public only what needs to be known.

I am not sure how it works higher up in corporate, but if my resignation were to occur, it would be between me, my manager and HR. IMO
 
  • Like
Reactions: 3 users
As I understand it, the comment/statement was not made public, i.e. on the market; therefore, in my opinion, this was not a lie, BUT it just wasn't honest... but the question remains: while it wasn't public, was Tony actually aware?

If it were on the ASX and they knowingly knew internally, then yes, I would agree it is a lie. I assume this is one of the very reasons BrainChip is careful to make public only what needs to be known.

I am not sure how it works higher up in corporate, but if my resignation were to occur, it would be between me, my manager and HR. IMO

If your theory is true, then it shows another problem: that Rob's departure to another company was sudden and unanticipated, and that the Director of Global Investor Relations isn't aware of what's going on within the company. Someone in that post is supposed to be the gatekeeper of information -- to know what is going on in the company while understanding what can be disclosed to the public and what can't.

For example, if TD knew, he could have replied, "Rob's status within the company seems to have changed; we are currently working on a public announcement regarding the matter."

If TD didn't know, he could have replied, "Let me ascertain Rob's status with management and get back to you."
 
  • Fire
  • Thinking
  • Like
Reactions: 4 users

toasty

Regular
Is that a potential takeover smell in the air??????? Or maybe they're going to call it a merger??? Something is going on. Where there's smoke there's fire...........
 
  • Thinking
  • Haha
Reactions: 8 users
Found the link.


View attachment 62083

Wavious seems to be a small company too, so the argument that he left for greener pastures doesn't hold water. It doesn't even have a website -- the website link on its LinkedIn page https://www.linkedin.com/company/wavious/about/ does not lead to anything.

From what I can find, it was founded in 2016 and has 11-50 employees, which means it's probably no bigger than BrainChip. According to its entry on Crunchbase https://www.crunchbase.com/organization/wavious there is no record of any fundraising yet.

Also, I wasn't aware that Jerome Nadel, who's supposed to be the CMO, is now working PART TIME at another company (CMO at ProGlobalEvents)? Yeah, I know inflation's a b***h, but still, if you have a job, maybe give it your all, eh?

View attachment 62084

View attachment 62085

I honestly think something is not right with the company.

Some more digging shows Wavious isn't in the best shape now either, with the CEO having just left to join another company (although he hasn't updated his LinkedIn).

View attachment 62086
 
  • Wow
Reactions: 3 users

Kachoo

Regular
Is that a potential takeover smell in the air??????? Or maybe they're going to call it a merger??? Something is going on. Where there's smoke there's fire...........
What are you referring to? A TO or a merger?

I doubt either is happening, but you never know? This could be one reason for the delay: the partner wants to see the IP sales agreement prior to a TO/merger, and needs to use this as a reason for the slow uptake.
 
  • Thinking
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Some more digging shows Wavious isn't in the best shape now either, with the CEO having just left to join another company (although he hasn't updated his LinkedIn).

View attachment 62086


Sorry, what's your point? Tony should be telepathic and BrainChip is somehow responsible for Rob leaving to join another company that isn't doing so well?

I suppose I should blame BrainChip because I stubbed my toe on a rock this morning? Sheesh!
 
  • Haha
  • Like
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Is that a potential takeover smell in the air??????? Or maybe they're going to call it a merger??? Something is going on. Where there's smoke there's fire...........

I smell something in the air too...

[GIF: stinky-disgusted]
 
  • Haha
  • Like
Reactions: 13 users
Is that a potential takeover smell in the air??????? Or maybe they're going to call it a merger??? Something is going on. Where there's smoke there's fire...........
Maybe it's burnt toasty? :ROFLMAO:😀

I'd rather burnt shorts! 😁😅

SC
 
  • Haha
  • Fire
Reactions: 5 users

FiveBucks

Regular
  • Like
Reactions: 1 user

Shadow59

Regular
  • Like
Reactions: 2 users

Terroni2105

Founding Member
I believe AKIDA 2.0 has been scaled up to 131 TOPS.

At The Heart Of The AI PC Battle Lies The NPU​

Anshel Sag
Contributor

Apr 29, 2024, 09:21pm EDT

[Image: Green microchip set in a blue printed circuit board. NPUs will be a key battleground for AI chip vendors. GETTY]
There is a clear battle underway among the major players in the PC market about the definition of what makes an AI PC. It’s a battle that extends to how Microsoft and other OEMs interpret that definition as well. The reality is that an AI PC needs to be able to run AI workloads locally, whether that’s using a CPU, GPU or neural processing unit. Microsoft has already introduced the Copilot key as part of its plans to combine GPUs, CPUs and NPUs with cloud-based functionality to enable Windows AI experiences.

The bigger reality is that AI developers and the PC industry at large cannot afford to run AI in the cloud in perpetuity. More to the point, local AI compute is necessary for sustainable growth. And while not all workloads are the same, the NPU has become a new and popular destination for many next-generation AI workloads.

What Is An NPU?​

At its core, an NPU is a specialized accelerator for AI workloads. This means it is fundamentally different from a CPU or a GPU because it does not run the operating system or process graphics, but it can easily assist in doing both when those workloads are accelerated using neural networks. Neural networks are heavily dependent on matrix multiplication tasks, which means that most NPUs are designed to do matrix multiplication at extremely low power in a massively parallel way.
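To make the paragraph above concrete, here is a minimal NumPy sketch of the pattern an NPU accelerates; the shapes, dtypes and sizes are illustrative assumptions for the example, not any vendor's design:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single dense layer viewed the way an NPU sees it: int8 activations
# and weights, combined in one massively parallel matrix multiply.
activations = rng.integers(-128, 128, size=(1, 512), dtype=np.int8)
weights = rng.integers(-128, 128, size=(512, 1024), dtype=np.int8)

# Multiply in low precision, accumulate in int32 -- cheap per operation,
# which is what lets an NPU sustain this workload at very low power.
acc = activations.astype(np.int32) @ weights.astype(np.int32)
print(acc.shape)  # (1, 1024): one input row, 1024 output features
```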


GPUs can do the same, which is one reason they are very popular for neural network tasks in the cloud today. However, GPUs can be very power-hungry in accomplishing this task, whereas NPUs have proven themselves to be much more power-efficient. In short, NPUs can perform selected AI tasks quickly, efficiently and for more sustained workloads.

The NPU’s Evolution​

Some of the earliest efforts in building NPUs came from the world of neuromorphic computing, where many different companies tried to build processors based on the architecture of the human brain and nervous system. However, most of those efforts never panned out, and many were pruned out of existence. Other efforts were born out of the evolution of digital signal processors, which were originally created to convert analog signals such as sound into digital signals. Companies including Xilinx (now part of AMD) and Qualcomm both took this approach, repurposing some or all of their DSPs into AI engines. Ironically, Qualcomm already had an NPU in 2013 called the Zeroth, which was about a decade too early. I wrote about its transition from dedicated hardware to software in 2016.

One of the advantages of DSPs is that they have traditionally been highly programmable while also having very low power consumption. Combining these two benefits with matrix multiplication has led companies to the NPU in many cases. I learned about DSPs in my early days with an electronic prototype design firm that worked a lot with TI’s DSPs in the mid-2000s. In the past, Xilinx called its AI accelerator a DPU, while Intel called it a vision processing unit as a legacy from its acquisition of low-power AI accelerator maker Movidius. All of these have something in common, in that they all come from a processor designed to analyze analog signals (e.g., sound or imagery) and process those signals quickly and at extremely low power.

Qualcomm’s NPU​

As for Qualcomm, I have personally witnessed its journey from the Hexagon DSP to the Hexagon NPU, during which the company has continually invested in incremental improvements for every generation. Now Qualcomm’s NPU is powerful enough to claim 45 TOPS of AI performance on its own. In fact, as far back as 2017, Qualcomm was talking about AI performance inside the Hexagon DSP, and about leveraging it alongside the GPU for AI workloads. While there were no performance claims for the Hexagon 682 inside the Snapdragon 835 SoC, which shipped that year, the Snapdragon 845 of 2018 included a Hexagon 685 capable of a whopping 3 TOPS thanks to a technology called HVX. By the time Qualcomm put the Hexagon 698 inside the Snapdragon 865 in 2019, the component was no longer being called a DSP; now it was a fifth-generation “AI engine,” which means that the current Snapdragon 8 Gen 3 and Snapdragon X Elite are Qualcomm’s ninth generation of AI engines.
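Since TOPS figures come up repeatedly here, a back-of-envelope sketch of how such a number relates to the underlying hardware may help. The 2-ops-per-MAC convention (a multiply plus an add) is standard, but the 1.5 GHz clock below is an assumed figure for illustration, not a published Hexagon specification:

```python
def peak_tops(mac_units: int, clock_ghz: float, ops_per_mac: int = 2) -> float:
    """Peak tera-operations per second: each MAC cycle counts as two ops."""
    return mac_units * clock_ghz * 1e9 * ops_per_mac / 1e12

# Working backwards from a 45-TOPS claim at an assumed 1.5 GHz clock:
implied_macs = 45e12 / (2 * 1.5e9)
print(f"~{implied_macs:,.0f} parallel MAC units")  # ~15,000
print(peak_tops(15_000, 1.5))                      # 45.0
```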


The Rest Of The AI PC NPU Landscape​

Not all NPUs are the same. In fact, we still don’t fully understand what everyone’s NPU architectures are, nor how fast they run, which keeps us from being able to fully compare them. That said, Intel has been very open about the NPU in the Intel Core Ultra model code-named Meteor Lake. Right now, Apple’s M3 Neural Engine ships with 18 TOPS of AI performance, while Intel’s NPU has 11 and the XDNA NPU in AMD’s Ryzen 8040 (a.k.a. Hawk Point) has 16 TOPS. These numbers all seem low when you compare them to Qualcomm’s Snapdragon X Elite, which has an NPU-only TOPS of 45 and a complete system TOPS of 75. In fact, Meteor Lake’s complete system TOPS is 34, while the Ryzen 8040 is 39—both of which are lower than Qualcomm’s NPU-only performance. While I expect Intel and AMD to downplay the role of the NPU initially and Qualcomm to play it up, it does seem that the landscape may become much more interesting at the end of this year moving into early next year.

Shifting Apps From The Cloud To The NPU​

While the CPU and GPU are still extremely relevant for everyday use in PCs, the NPU has become the center of attention for many in the industry as an area for differentiation. One open question is whether the NPU is relevant enough to justify being a technology focus and, if so, how much performance is enough to deliver an adequate experience? In the bigger picture, I believe that NPUs and their TOPS performance have already become a major battlefield within the PC sector. This is especially true if you consider how many applications might target the NPU simultaneously—and possibly bog it down if there isn’t enough performance headroom.

With so much focus on the NPU inside the AI PC, it makes sense that there must be applications that take advantage of that NPU to justify its existence. Today, most AI applications live in the cloud because that’s where most AI compute resides. As more of these applications shift from the cloud to a hybrid model, there will be an increased dependency on local NPUs to offload AI functions from the cloud. Additionally, there will be applications that require higher levels of security for which IT simply won’t allow data to leave the local machine; these applications will be entirely dependent on local compute. Ironically, I believe that one of those key application areas will be security itself, given that security has traditionally been one of the biggest resource hogs for enterprise systems.

As time progresses, more LLMs and other models will be quantized in ways that will enable them to have a smaller footprint on the local device while also improving accuracy. This will enable more on-device AI that has a much better contextual understanding of the local device’s data, and that performs with lower latency. I also believe that while some AI applications will initially deploy as hybrid apps, there will still be some IT organizations that want to deploy on-device first; the earliest versions of those applications will likely not be as optimized as possible and will likely take up more compute, driving more demand for higher TOPS from AI chips.
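To illustrate the quantization point, here is a minimal sketch of symmetric per-tensor int8 quantization, the simplest form of the footprint reduction described above; real deployments use more sophisticated schemes (per-channel scales, calibration, etc.):

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: w ~= q * scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.default_rng(1).normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32, at the cost of a small
# reconstruction error -- the on-device footprint win described above.
print(np.abs(w - q.astype(np.float32) * scale).max())
```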

Increasing Momentum​

However, the race for NPU dominance and relevance has only just begun. Qualcomm’s Snapdragon X Elite is expected to be the NPU TOPS leader when it launches in the middle of this year, but Qualcomm will not be alone. AMD has already committed to delivering 40 TOPS of NPU performance in its next-generation Strix Point Ryzen processors due early next year, while at its recent Vision 2024 conference Intel claimed 100 TOPS of platform-level AI performance for the Lunar Lake chips due in Q4 of 2024. (Recall that Qualcomm’s Snapdragon X Elite claims 75 TOPS across the GPU, CPU and NPU.) While it isn’t official, there is an understanding across the PC ecosystem that Microsoft put a requirement on its silicon vendor partners to deliver at least 40 TOPS of NPU AI performance for running Copilot locally.

One item of note is that most companies are apparently not scaling their NPU performance based on product tier; rather, NPU performance is the same across all platforms. This means that developers can target a single NPU per vendor, which is good news for the developers because optimizing for an NPU is still quite an undertaking. Thankfully, there are low-level APIs such as DirectML and frameworks including ONNX that will hopefully help reduce the burden on developers so they don’t have to target every type of NPU on their own. That said, I do believe that each chip vendor will also have its own set of APIs and SDKs that can help developers take even more advantage of the performance and power savings of their NPUs.
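To show the developer-facing side of that claim: with ONNX Runtime, backend selection is just an ordered list of execution providers, so the same model can be pointed at a DirectML-backed accelerator with a CPU fallback. A hedged sketch only; "model.onnx" is a hypothetical file, and the DirectML provider is present only in onnxruntime builds that include it:

```python
import numpy as np
import onnxruntime as ort

# Providers are tried in order: DirectML first (routes to a DirectX 12
# device such as a GPU or supported NPU), plain CPU as the fallback.
session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Build a dummy input matching the model's first declared input,
# treating any dynamic (non-integer) dimension as 1.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print([o.shape for o in outputs])
```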

Wrapping Up​

The NPU is quickly becoming the new focus for an industry looking for ways to address the costs and latency that come with cloud-based AI computing. While some companies already have high-performance NPUs, there is a clear and very pressing desire for OEMs to use processors that include NPUs with at least 40 TOPS. There will be an accelerated shift towards on-device AI, which will likely start with hybrid apps and models and in time shift towards mostly on-device computing. This does mean that the NPU’s importance will be less relevant early on for some platforms, but having a less powerful NPU may also translate to not delivering the best possible AI PC experiences.

There are still a lot of unknowns about the complete AI PC vision, especially considering how many different vendors are involved, but I hear that a lot of things will get cleared up at Microsoft’s Build conference in late May. That said, I believe the battle for the AI PC will likely drag on well into 2025 as more chip vendors and OEMs adopt faster and more capable NPUs.

I believe AKIDA 2.0 has been scaled up to 131 TOPS.

At The Heart Of The AI PC Battle Lies The NPU (full article quoted above)

Bravo, can I ask your thoughts on where Akida may be heading here, and with whom, if you would speculate?
 

Easytiger

Regular
From page 29 in the AGM notice of meeting.

Not sure what some people don't get.

If they are such astute investors they'd read the NOM and see the below.

They come across more like they expected a quick return... rarely happens with a spec, no matter what industry.

The quick return was the MB spike :rolleyes:

If you were in before that and held (without planning on holding for a number of years to come), then maybe that was probs greed, thinking it would go higher or bounce, and now you're trapped?

If you got in after the spike, then maybe a touch of FOMO?

Some may want a full spill but if they are gunning for Sean...ain't gonna happen.

Not a real fan of his myself yet... the jury is out till I see something a little more tangible, but I'm not totally against him either, as there has been network/ecosystem growth and take-up by other companies doing real work with Akida, like EdgX, Ant61 and NVISO.

The other thing is, it's up to those Directors in the spill to decide whether they nominate again, so people like PVDM may just say stuff it... that's not something I'd prefer.


"Each of these Directors would be eligible to stand for re-election at the Spill Meeting, however there is no guarantee that they would do so. As Mr Sean Hehir is an Executive Director of the Company, he is excluded from the requirements under the Corporations Act to seek re-election at the Spill Meeting (if held) and will continue to hold office regardless of the outcome of this Resolution or the Spill Meeting (if held)."
If there is a board spill, then the incoming board will have direction and mandate to review the CEO’s performance and based on the review take appropriate action.

Spill or no spill, CEO should be assessed based on performance.
 
  • Like
Reactions: 5 users
How do we become ubiquitous? I feel the podcast with Keith Witek and his comments on chiplets points to one of the ways forward; with whom, I have no idea, however there is something on the horizon, I can feel it. The models made as chiplets will be part of our future, IMO.
 
  • Like
Reactions: 1 user

Easytiger

Regular
BRN has all AGM bases covered.
The funds, including super funds holding retail accounts with BRN shares, will be canvassed. They will vote YES to all except NO for a BOD spill. Together with large and other positive retail holders, there is no way a 2nd strike will lead to a spill.
Yes, discussions with instos would have occurred, and if it's agreed that the position is a no vote on a BOD spill, then conditions would have been agreed.
 
  • Like
Reactions: 2 users

Terroni2105

Founding Member
Found the link.


View attachment 62083

Wavious seems to be a small company too, so the argument that he left for greener pastures doesn't hold water. It doesn't even have a website -- the website link on its LinkedIn page https://www.linkedin.com/company/wavious/about/ does not lead to anything.

From what I can find, it was founded in 2016 and has 11-50 employees, which means it's probably no bigger than BrainChip. According to its entry on Crunchbase https://www.crunchbase.com/organization/wavious there is no record of any fundraising yet.

Also, I wasn't aware that Jerome Nadel, who's supposed to be the CMO, is now working PART TIME at another company (CMO at ProGlobalEvents)? Yeah, I know inflation's a b***h, but still, if you have a job, maybe give it your all, eh?

View attachment 62084

View attachment 62085

I honestly think something is not right with the company.
Jerome Nadel hasn’t worked for BrainChip since June last year.
 
  • Like
  • Fire
Reactions: 3 users
If there is a board spill, then the incoming board will have direction and mandate to review the CEO’s performance and based on the review take appropriate action.

Spill or no spill, CEO should be assessed based on performance.
Agree, and I don't think anyone is saying he shouldn't be reviewed on performance.

Merely pointing out that, based on some comments here and in the other place, some are gunning for the CEO or believe a spill will dump him too.

Won't be the case, simple.
 
  • Like
Reactions: 5 users