BRN Discussion Ongoing

Townyj

Ermahgerd
Compliments to rayz over on the crapper


 

7für7

Top 20
Jesus Christ… that crapper over there is a complete mess. Those guys need help. I can’t even read their stuff, but the feedback from other users tells me exactly what’s going on. Whoever invented the ignore button deserves a Nobel Peace Prize…no joke.

 

HopalongPetrovski

I'm Spartacus!

For those that don't have access.​

Cheers

The 20% Question: Why AI Isn't Failing and What's Missing​

Kevin D. Johnson



Field CTO – HPC, AI, LLM & Quantum Computing | Principal HPC Cloud Technical Specialist at IBM | Symphony • GPFS • LSF



February 7, 2026
The notion that the AI economy is headed for a crash is a narrative that's been in play for a while now. The numbers appear unsustainable. Global spending will probably hit $500 billion through 2026. Financial engineering around data center securities echoes pre-2008 patterns. And a widely cited McKinsey finding tells us that roughly 80% of companies deploying AI see no significant bottom-line impact.
The concerns are not necessarily baseless. The financial engineering deserves scrutiny. The historical parallels to canals, railroads, and dot-com fiber are worth taking seriously. But the diagnosis most people land on, that AI itself is a bubble, rests on weaker evidence than the confidence of the claim suggests.

The Number Everybody Cites​

McKinsey's 2025 Global Survey on AI has become the anchor statistic for the crash narrative. The headline finding: while 88% of organizations report using AI in at least one business function, roughly the same proportion report no significant gains in top-line or bottom-line performance. The figure shows up in financial commentary, policy analysis, and boardroom presentations as though it were established fact.
But what does that number represent?
The survey collected 1,993 responses across 105 nations between June and July 2025, a geographic scope far broader than the American markets where AI investment and deployment are most concentrated. Every data point is self-reported. No financial results were verified. No audited figures were examined. The finding that 80% of companies see no meaningful profit impact is not a measurement of profit impact. It is a measurement of whether survey respondents believe profit impact has occurred and are willing to attribute it to AI. These are fundamentally different things, and the distinction matters when the number is being used to evaluate hundreds of billions of dollars in economic activity.
The study's definition of "AI high performers" compounds the problem. McKinsey identifies roughly 6% of respondents as high performers based on two self-reported criteria: EBIT impact of 5% or more attributable to AI and a subjective assessment that their organization has seen "significant" value. The term "significant" is undefined. A respondent at a company with $10 billion in revenue claiming 5% EBIT impact from AI is asserting $500 million in AI-driven profit validated by nothing more than a survey response.
The sampling itself introduces bias. The people who respond to McKinsey's annual AI survey are people at companies paying attention to AI. Companies that never adopted AI or abandoned it early are underrepresented by design. The 88% adoption figure does not describe the full economy. It describes the portion of the economy that self-selected into a survey about AI.
None of this means the underlying observation is necessarily wrong. Companies clearly struggle to move from AI experimentation to production value. That pattern is real and recognizable to anyone working in enterprise technology. But the precision with which the 80% figure is cited far exceeds the rigor of the methodology that produced it.

The Evidence That Contradicts the Headline​

Consider what the 80% figure requires you to believe in the face of observable market evidence.
Palantir Technologies has become one of the most widely deployed AI and data platforms in both government and industry. Palantir's revenue growth, margin expansion, and market capitalization reflect an organization whose customers are deriving measurable value at enterprise scale. Palantir is not an exception to the AI economy. Palantir is deployed across intelligence agencies, defense organizations, major financial institutions, healthcare systems, and energy companies. The breadth and depth of that adoption is difficult to reconcile with a claim that 80% of AI-deploying companies see no meaningful returns.
IBM's AI business is steadily growing in a manner that reflects institutional commitment and operational stability. When the clients in question are banks, insurers, government, and many others whose decisions are governed by regulatory exposure and fiduciary obligation, growth represents something more durable than speculative enthusiasm.
The McKinsey survey cannot distinguish between a company running Palantir Foundry at enterprise scale across multiple business functions and a company that gave a few employees access to a chatbot. The survey instrument treats both as "using AI in at least one business function." When those two categories are pooled and the survey reports that most respondents have not seen enterprise-scale results, the finding is nearly tautological. Most respondents have not deployed AI at enterprise scale. Of course most respondents have not seen enterprise-scale results by that standard!

What Actually Separates Success from Failure​

The more useful question is not what percentage of companies are failing at AI. The more useful question is what distinguishes organizations that are capturing real value from those that are not.
The answer is not better hardware. The answer is platform and architecture.
Companies that treat AI as a procurement decision tend to struggle. In that model, capacity is provisioned without clarity about what question is being answered. Models are deployed without workload-aware platforms underneath. The organization buys hardware, installs a framework, and waits for transformation that never arrives.
Companies that treat AI as a disciplined platform tend to succeed. Each starts with a specific workload: portfolio optimization, risk classification, fraud detection, trade settlement, document analysis. Each matches that workload to the right platform and the right compute. And each orchestrates the full pipeline as a managed system rather than scattering experiments across departments.
The difference between these two approaches is not subtle, and it is not primarily about leadership commitment or organizational culture, which is where McKinsey's analysis lands. The difference is structural. One approach has a platform. The other does not.

The Ecosystem Is Already Broader Than People Think​

The crash narrative also tends to frame the AI economy as dangerously dependent on a single vendor and a single hardware architecture. That framing is too simple.
The GPU compute ecosystem is already a genuine ecosystem. CUDA provides the programming model. LSF orchestrates large-scale training workloads. Symphony can route and manage high-throughput inference and dynamic compute allocation. GPFS provides a distributed storage fabric. IBM Power delivers dense embedding generation and inferencing capability that complements GPU-driven workloads. The inference layer runs on open frameworks like vLLM deployed through OpenShift or RHEL AI. Platforms like watsonx bring model lifecycle management and governance to the enterprise. The model universe itself is diversified as open weights from Meta, Mistral, IBM Granite, and others compete on quality and efficiency. No single vendor controls the full stack, and the open-source community has ensured that the model layer in particular resists consolidation.
The AI economy's actual vulnerability is not vendor concentration. The vulnerability is architectural monotony: the assumption that every AI workload belongs on a GPU. That assumption is where the real fragility lives, not because GPU infrastructure is inadequate but because the GPU alone is incomplete.

Workloads Are Not Interchangeable​

A real-time market classification from a live data feed is a fundamentally different workload than generating a 2,000-word analysis. The classification requires sub-millisecond latency and minimal power draw. The generation requires a large parameter model and substantial memory bandwidth. Running both on the same GPU cluster means one of them is dramatically over-provisioned.
As I have demonstrated recently, neuromorphic hardware like BrainChip's Akida handles event-driven classification at microsecond latency on milliwatts of power. Quantum-classical hybrid methods are producing peer-reviewed results in bond trading prediction and combinatorial optimization. IBM Quantum has arrived as something you can engage today, not just tomorrow. Legacy systems running proven COBOL and RPG logic do not need rewriting, but can benefit from a conversational AI layer that preserves the deterministic logic underneath. Edge deployment pushes inference to the point of data generation instead of requiring every byte to travel to a central facility.
None of these developments eliminate the GPU. Large-scale training and generative inference genuinely need GPU hardware or its equivalent. A mature AI presence in companies will be heterogeneous: different platforms governing different compute for different workloads at different cost structures, all orchestrated under unified management.
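The workload-matching logic described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual routing policy: the tier names, thresholds, and the `Workload` fields are all assumptions chosen to mirror the article's examples (sub-millisecond event-driven classification versus large-model generative inference).

```python
from dataclasses import dataclass

# Hypothetical sketch of workload-aware placement. Tier names and
# thresholds are illustrative assumptions, not a real routing policy.

@dataclass
class Workload:
    name: str
    latency_budget_ms: float   # how fast a single result must arrive
    model_params_b: float      # model size in billions of parameters
    event_driven: bool         # sparse, event-triggered input vs. batch/stream

def place(w: Workload) -> str:
    """Route a workload to the kind of compute its characteristics call for."""
    # Sub-millisecond, event-driven classification of tiny models fits
    # neuromorphic or edge silicon.
    if w.event_driven and w.latency_budget_ms < 1.0 and w.model_params_b < 0.5:
        return "edge-neuromorphic"
    # Small models with relaxed latency can run on commodity CPU capacity.
    if w.model_params_b <= 2.0 and w.latency_budget_ms >= 50.0:
        return "cpu"
    # Large generative models genuinely need GPU-class memory bandwidth.
    return "gpu"

if __name__ == "__main__":
    print(place(Workload("market-tick-classify", 0.5, 0.01, True)))   # edge-neuromorphic
    print(place(Workload("semantic-query", 200.0, 2.0, False)))       # cpu
    print(place(Workload("report-generation", 2000.0, 70.0, False)))  # gpu
```

The point of the sketch is only that the routing decision is a property of the workload, not of whatever hardware happens to be provisioned.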
The historical parallel everyone cites actually supports this point. The dot-com fiber buildout crashed in valuation, but the physical fiber became the backbone of the modern internet. What changed was not the infrastructure. What changed was how the infrastructure was used and what was built on top of it. The GPU buildout may follow the same pattern. Valuations correct. Infrastructure endures. The platforms that match workloads to the right resources become the durable value.

The Real Risk​

The real risk in the AI economy is not excessive spending. The real risk is unorchestrated spending.
A semantic query that needs a 2B parameter model should not burn cycles on a 70B model. A classification task that runs effectively on a $150 edge board should not occupy a $30,000 GPU. A batch risk calculation that runs on proven mainframe logic should not be rewritten for a cloud-native framework simply to place AI on the architecture diagram.
When the right task routes to the right hardware automatically, when the platform itself becomes intelligent about resource allocation, the economics change in ways that compound. The same fleet serves more workloads. Each workload costs what it should cost. And the resources freed by efficient allocation become available for additional compute, extending the reach of the organization beyond what was possible when every task consumed premium hardware regardless of need. The governance that enterprises require for auditability, compliance, and chargeback comes built into the platform rather than bolted on after deployment.
High-performance computing addressed these challenges years ago. The largest compute grids in the world do not run homogeneous hardware. Those grids run dynamic compute platforms that allocate resources based on workload characteristics, scale capacity based on demand, and provide unified management across every resource in the cluster. What is changing now is the range of hardware those platforms can govern and the intelligence they can bring to routing decisions.
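In the same spirit as those HPC schedulers, cost-aware placement reduces to "cheapest feasible resource". A minimal sketch follows; the capacities and hourly prices are illustrative assumptions echoing the article's $150-board versus $30,000-GPU contrast, not real price lists or any scheduler's actual algorithm.

```python
# Minimal sketch of cost-aware resource selection, in the spirit of HPC
# schedulers. Resource specs and prices are illustrative assumptions.

RESOURCES = [
    # (name, hourly_cost_usd, max_model_params_b, best_latency_ms)
    ("edge-board", 0.02, 0.5, 0.1),   # cheap device, tiny models, sub-ms latency
    ("cpu-node",   0.50, 2.0, 20.0),  # commodity capacity for small models
    ("gpu-node",   4.00, 70.0, 5.0),  # premium hardware for large models
]

def schedule(model_params_b: float, latency_budget_ms: float) -> str:
    """Pick the cheapest resource that fits the model and meets the latency budget."""
    feasible = [
        (cost, name)
        for name, cost, max_params, best_latency in RESOURCES
        if model_params_b <= max_params and best_latency <= latency_budget_ms
    ]
    if not feasible:
        raise ValueError("no resource satisfies the workload constraints")
    return min(feasible)[1]  # lowest hourly cost among feasible resources
```

Under these assumed numbers, a small classifier lands on the $0.02/h edge board rather than the $4/h GPU, while a 70B-parameter generation job still lands on the GPU, which is exactly the compounding economics the paragraph describes.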

The Right Question​

The correction that is coming is not an AI crash. The correction is architectural.
The market is going to learn what successful deployments already demonstrate: the value is in the overall ecosystem, properly deployed. The value lies in knowing which chip to use, when to use it, and how to govern the system as a whole. The companies that build workload-aware heterogeneous platforms and treat AI as an architectural discipline informed by what high-performance computing has already learned will keep compounding returns. The companies that do not will cite McKinsey's survey and blame AI for what was always their own platform problem.
The AI economy does not need to shrink. The AI economy is already generating profit, and the companies that have not yet seen returns need to grow toward a proper platform. Diversify the hardware. Orchestrate the workloads. Match compute to purpose. Govern the whole system on a platform designed for it.
The real question was never "is AI worth the investment?" The real question is "do we have the platform to make the investment work?"

The opinions expressed in this article are those of the author and do not necessarily reflect the views of the author's employer.
Thanks to both Yogesh and Boab for reproducing this article.
Whilst it's a slightly longish and perhaps dry and arcane read, I particularly liked these sections from the last two paragraphs, which I have reproduced in bold below. This, along with applications in cybersecurity, is where BrainChip can start making some bread and butter money relatively quickly, I would have thought. The models can be run in parallel with existing systems to evaluate efficacy, and the potential savings speak more convincingly than any salesman. It's a real current issue that we, as a component in a hybridised system, can help provide a cost-effective fix for. And it's already been tested out on our most elementary hardware. It may well prove that our more advanced products can provide even greater efficiency and savings.

"A classification task that runs effectively on a $150 edge board should not occupy a $30,000 GPU. A batch risk calculation that runs on proven mainframe logic should not be rewritten for a cloud-native framework simply to place AI on the architecture diagram.

When the right task routes to the right hardware automatically, when the platform itself becomes intelligent about resource allocation, the economics change in ways that compound. The same fleet serves more workloads. Each workload costs what it should cost. And the resources freed by efficient allocation become available for additional compute, extending the reach of the organization beyond what was possible when every task consumed premium hardware regardless of need."




"The market is going to learn what successful deployments already demonstrate: the value is in the overall ecosystem, properly deployed. The value lies in knowing which chip to use, when to use it, and how to govern the system as a whole. The companies that build workload-aware heterogeneous platforms and treat AI as an architectural discipline informed by what high-performance computing has already learned will keep compounding returns."
 

manny100

Top 20
Jesus Christ… that crapper over there is a complete mess. Those guys need help. I can’t even read their stuff, but the feedback from other users tells me exactly what’s going on. Whoever invented the ignore button deserves a Nobel Peace Prize…no joke.

It's organised downramping, likely controlled by a few with a number of user names. It's pretty easy to do.
They have infiltrated here as well, but not to near the same extent.
Over on the crapper they have taken it too far. Even new posters will pick up on their crap in an instant.
 

7für7

Top 20
It's organised downramping, likely controlled by a few with a number of user names. It's pretty easy to do.
They have infiltrated here as well, but not to near the same extent.
Over on the crapper they have taken it too far. Even new posters will pick up on their crap in an instant.


welcome MR. ANDERSOOOON


 

IloveLamp

Top 20



 

7für7

Top 20
Uuuuuuummmm wut?



Maybe she has the infos from chatty… I don't think she would deep dive… the usual hallucinations, if you ask me

Edit: I took the liberty of asking chatty as well

“A few factual corrections (with sources):

• “AI already consumes 134 TWh annually” – that 85–134 TWh figure is a projection for ~2027, not current usage.

• “IBM NorthPole 25× vs H100” – IBM’s 25× claim is not a blanket H100 comparison; published comparisons include ~5× frames/joule vs H100 in specific vision benchmarks.

• “Akida 2.0 already shipping in Mercedes EVs” – public evidence points to Vision EQXX concept demos (e.g., “Hey Mercedes” hotword), not confirmed mass-production shipping.

• “ANYmal 72 hours on battery” – official ANYmal runtimes are ~90–120 minutes (range extension via docking).

• Jetson power: Orin up to 60W, but Thor-class modules go up to ~130W.

Big picture: neuromorphic is promising for certain edge workloads, but a lot of the numbers here need benchmark context and primary sources.”

Edit

“I can’t find any official Intel Newsroom / Intel Labs announcement for a “Loihi 3 launch” or those claimed specs. Intel’s publicly documented neuromorphic chip is Loihi 2 (2021) and the big 2024 system Hala Point is built from Loihi 2 chips.
Also, “8 million neurons” is an old Intel figure from Pohoiki Beach (2019) — a 64-chip Loihi system, not a single new chip spec.”
 

TheDrooben

Pretty Pretty Pretty Pretty Good
Maybe she has the infos from chatty… I don't think she would deep dive… the usual hallucinations, if you ask me

Edit: I took the liberty of asking chatty as well

“A few factual corrections (with sources):

• “AI already consumes 134 TWh annually” – that 85–134 TWh figure is a projection for ~2027, not current usage.

• “IBM NorthPole 25× vs H100” – IBM’s 25× claim is not a blanket H100 comparison; published comparisons include ~5× frames/joule vs H100 in specific vision benchmarks.

• “Akida 2.0 already shipping in Mercedes EVs” – public evidence points to Vision EQXX concept demos (e.g., “Hey Mercedes” hotword), not confirmed mass-production shipping.

• “ANYmal 72 hours on battery” – official ANYmal runtimes are ~90–120 minutes (range extension via docking).

• Jetson power: Orin up to 60W, but Thor-class modules go up to ~130W.

Big picture: neuromorphic is promising for certain edge workloads, but a lot of the numbers here need benchmark context and primary sources.”

Edit

“I can’t find any official Intel Newsroom / Intel Labs announcement for a “Loihi 3 launch” or those claimed specs. Intel’s publicly documented neuromorphic chip is Loihi 2 (2021) and the big 2024 system Hala Point is built from Loihi 2 chips.
Also, “8 million neurons” is an old Intel figure from Pohoiki Beach (2019) — a 64-chip Loihi system, not a single new chip spec.”
She might have got it from here......


Happy as Larry
 

HopalongPetrovski

I'm Spartacus!
Did anyone else notice a seven and a half million after market trade recorded for BRN on the ASX sometime last night?
After the auction finished I had noted a 17,169,675 turnover, but looking just now I see it recorded at 24,669,675?

Also nice to see a bounce in America last night.
Hopefully it continues and we follow suit.
With the recent validation we are getting it would be nice to see some flow through to our share price.
 
Did anyone else notice a seven and a half million after market trade recorded for BRN on the ASX sometime last night?
After the auction finished I had noted a 17,169,675 turnover, but looking just now I see it recorded at 24,669,675?

Also nice to see a bounce in America last night.
Hopefully it continues and we follow suit.
With the recent validation we are getting it would be nice to see some flow through to our share price.
 

HopalongPetrovski

I'm Spartacus!
Hi Pom. They've had a red hot go at us and managed to push the share price down to a level it hasn't seen since late July 2020.
And back then we were blowing through it going in the other direction on our way to a major spike at 97 cents.
If this is being done by shorts on borrowed shares they are now heading into a danger zone, I would think.

Of course, I don't know what's going to happen or what strategy is playing out, but any positive news announced about now, which I think we can all agree is somewhat overdue, on top of the rebound in the American tech sector last night, might just be enough to have some folks having a tiny little crap in their panties. Would be nice for the poo shoe to be on the other foot for a change. 🤣
 