BRN Discussion Ongoing

jrp173

Regular
“ The fact is, that Anil has NOTHING to do with BrainChip anymore. “


In actual fact,

Annual Report, Page 33

View attachment 94890


Thank you. Happy to be corrected by you on the matter of Anil consulting for BRN; however, that makes it even worse that Tech happily drops Anil's name in his post.

I wonder what Anil would think about this? Would he appreciate his name being used, and him being quoted on TSE?
 
Last edited:
I'm preparing for the up rampers to crack the shits when we hit 9 cents
1770368515284.gif
 
Last edited:
  • Haha
Reactions: 16 users
  • Like
Reactions: 1 users

White Horse

Regular
A recent comment about Hehir’s hair chimed with something that has bothered me. He is obviously fond of his hair. Vanity in such an unprepossessing person is unsettling. It hints at narcissism. Too much concern for the personal, not so much for the team.

This squares with the unease about staff churn and the lack of any strong endorsements of Hehir’s leadership. BRN is a small company with a limited range of products. An internal succession should be possible. We need an engineer with a deep understanding of Akida and the ability to transfer enthusiasm, with congruent words and gestures, to those wanting to adopt our technology. With a light touch, not the confected earnestness of a salesman.

Hehir has never seemed to fully understand Akida. Early on he dismissed Akida 1 as a mere reference chip; he had a strange idea that Akida 2 might come out in three versions; Pico arrived without any clear definition (is it only IP? Does it exist as a microchip, 1.8 mm square, and if so, who made it?). Now the focus is on Akida 1500, which was originally a stripped-down version of the 1000 but presumably is now a re-engineered update meeting client requests.
I'd like to see your psychology degree.
You have definite BOT tendencies.
 
  • Haha
Reactions: 4 users

Esq.111

Fascinatingly Intuitive.
Imagine you manage to reach the 1 million dollar question…. And then THIS….

View attachment 94889
Evening 7fury7,

F€uck the Lambo,

F¥uck the Bentley,

Waiting for the G8, nothing more, nothing less.

* Had yet another buy in the market today, did not trip; with any luck The Hair will voice again, hence will trigger, in absolutely no rush.

Regards,
Esq.

Note, GO KEVIN, what a legend.
IBM.
 
  • Like
  • Fire
  • Love
Reactions: 15 users

Getupthere

Regular
Investor update: Sean is truly an operator.

Watch us now.

This man should have been removed as CEO by year three of the five-year plan.
 
  • Like
  • Fire
  • Thinking
Reactions: 4 users

FJ-215

Regular
Investor update: Sean is truly an operator.

Watch us now.

This man should have been removed as CEO by year three of the five-year plan.
Well, I believe he buggered up the first three years of his plan....

so.............we now have an eight year plan...

Which will be approved by all genuine shareholders!!!!!!!

:cry::cry::cry::cry:
 
  • Like
Reactions: 2 users

FJ-215

Regular
Good for a laugh!!!


 
  • Love
  • Haha
Reactions: 2 users

7für7

Top 20
Evening 7fury7,

F€uck the Lambo,

F¥uck the Bentley,

Waiting for the G8, nothing more, nothing less.

* Had yet another buy in the market today, did not trip; with any luck The Hair will voice again, hence will trigger, in absolutely no rush.

Regards,
Esq.

Note, GO KEVIN, what a legend.
IBM.

I want to see the shorts

BURN

Mad Fire GIF by Elgato
 
  • Like
Reactions: 3 users
Ummmm....legit? :unsure:

Cool if true...cool if via someone like Megachips / Renesas tech...but not so cool if BRN knew and could have announced it.




Alibaba|January 27, 2026

Best AI-powered Home Lighting Systems That Adapt To Circadian Rhythm Without Needing Constant Cloud Sync​


Excerpt:

Top 5 locally intelligent circadian lighting systems​

The following systems were selected after reviewing firmware documentation, third-party spectral measurements (IES files), open-source SDK disclosures, and verified user-reported uptime logs. All run inference on-device; none require persistent internet connectivity for core circadian functionality.

| System | On-Device AI Platform | Key Circadian Features | Local Storage Capacity | Privacy Certification |
| --- | --- | --- | --- | --- |
| LuminaCore Pro v3.2 | Custom ASIC + Edge TPU (Google Coral) | Real-time ipRGC-weighted lux modeling; DLMO prediction via wrist-worn sync (optional); dynamic twilight ramping | 16 GB encrypted flash (stores 18 months of anonymized light-event logs) | ISO/IEC 27701 certified; zero telemetry by default |
| Solara LocalAI Hub | Qualcomm QCS610 + custom spectral inference engine | Adaptive melanopic EDI calculation; ambient light + skin temperature fusion (via optional wearable); seasonal UV index compensation | 8 GB eMMC (retains 6 months of calibration history) | GDPR-compliant; all processing occurs within EU-hosted edge node (user-selectable) |
| NightLume Neural Series | Neuromorphic chip (BrainChip Akida) | Event-driven spiking neural network for motion-triggered phase-appropriate light bursts; learns from manual overrides without retraining | 2 GB LPDDR4X (on-chip memory only; no persistent storage) | Privacy-by-design architecture; no identifiers stored or transmitted |
| HelioSync Edge | Raspberry Pi Compute Module 4 + PyTorch Mobile | Open-source model trained on NIH circadian datasets; supports custom phase-shift schedules (e.g., jet lag prep) | 32 GB microSD (user-replaceable; logs disabled by default) | Self-certified under NIST SP 800-53 Rev. 5 (Low Impact) |
| VitaLight Onboard | ESP32-S3 + TinyML spectral optimizer | Battery-powered portable units with local sunrise simulation; auto-calibrates to geographic coordinates via GPS module (no cloud lookup) | Internal FRAM (128 KB; retains settings through power loss) | FCC Part 15B compliant; no wireless transmission beyond Bluetooth LE pairing |
 
  • Like
  • Wow
  • Thinking
Reactions: 19 users

CHIPS

Regular
Ummmm....legit? :unsure:

Cool if true...cool if via someone like Megachips / Renesas tech...but not so cool if BRN knew and could have announced it.




Alibaba|January 27, 2026

Best AI-powered Home Lighting Systems That Adapt To Circadian Rhythm Without Needing Constant Cloud Sync​


Excerpt:

Top 5 locally intelligent circadian lighting systems​

The following systems were selected after reviewing firmware documentation, third-party spectral measurements (IES files), open-source SDK disclosures, and verified user-reported uptime logs. All run inference on-device; none require persistent internet connectivity for core circadian functionality.

| System | On-Device AI Platform | Key Circadian Features | Local Storage Capacity | Privacy Certification |
| --- | --- | --- | --- | --- |
| LuminaCore Pro v3.2 | Custom ASIC + Edge TPU (Google Coral) | Real-time ipRGC-weighted lux modeling; DLMO prediction via wrist-worn sync (optional); dynamic twilight ramping | 16 GB encrypted flash (stores 18 months of anonymized light-event logs) | ISO/IEC 27701 certified; zero telemetry by default |
| Solara LocalAI Hub | Qualcomm QCS610 + custom spectral inference engine | Adaptive melanopic EDI calculation; ambient light + skin temperature fusion (via optional wearable); seasonal UV index compensation | 8 GB eMMC (retains 6 months of calibration history) | GDPR-compliant; all processing occurs within EU-hosted edge node (user-selectable) |
| NightLume Neural Series | Neuromorphic chip (BrainChip Akida) | Event-driven spiking neural network for motion-triggered phase-appropriate light bursts; learns from manual overrides without retraining | 2 GB LPDDR4X (on-chip memory only; no persistent storage) | Privacy-by-design architecture; no identifiers stored or transmitted |
| HelioSync Edge | Raspberry Pi Compute Module 4 + PyTorch Mobile | Open-source model trained on NIH circadian datasets; supports custom phase-shift schedules (e.g., jet lag prep) | 32 GB microSD (user-replaceable; logs disabled by default) | Self-certified under NIST SP 800-53 Rev. 5 (Low Impact) |
| VitaLight Onboard | ESP32-S3 + TinyML spectral optimizer | Battery-powered portable units with local sunrise simulation; auto-calibrates to geographic coordinates via GPS module (no cloud lookup) | Internal FRAM (128 KB; retains settings through power loss) | FCC Part 15B compliant; no wireless transmission beyond Bluetooth LE pairing |

I tried to find the manufacturer or the lamp, but in vain. Even Alibaba does not show a search result.
 
  • Like
Reactions: 2 users
I tried to find the manufacturer or the lamp, but in vain. Even Alibaba does not show a search result.
Tried as well and can find other NightLume products, just not the Neural Series.
 
  • Like
Reactions: 3 users

Guzzi62

Regular
Tried as well and can find other NightLume products, just not the Neural Series.
It's just marketing fluff.

You can also buy hairdryers and vacuum cleaners with "turbo" in the name, and as we all know, those items of course don't actually have a turbo fitted; it's either just enhanced performance compared to the non-turbo versions, or pure marketing fluff.
So if you wear a toupee, be careful and don't vacuum if your cat is close by.

I ran the article through ChatGPT:

Summary: What’s Technically Valid vs. Marketing Stretch


| Claim | Technically Valid | Notes |
| --- | --- | --- |
| “Neuromorphic SNN architecture” | ✔️ | Real and documented for Akida. |
| “Event-driven processing that saves power” | ✔️ | Core benefit of SNNs. |
| “On-chip learning without cloud retraining” | ✔️ | Supported, but scope is product-dependent. |
| “No identifiers stored/transmitted” | ⚠️ | Possible in design, but not a hardware guarantee. |
| “2 GB LPDDR4X on-chip memory only” | ❌ | Not representative of Akida core specs. System memory and core memory are different. |
| “No persistent storage” | ❌ | Many board products include eMMC/SD storage. |

Bottom Line

Neuromorphic chips like BrainChip’s Akida are a real category of efficient edge AI hardware that can do online adaptation and event-based sensing.
But many product claims you see online (especially on marketplaces or vendor pages) mix real tech with marketing language about memory, storage, and privacy that oversimplifies or misattributes what the hardware can inherently guarantee.
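
For anyone wondering what "event-driven" actually means here, below is a minimal leaky integrate-and-fire sketch in plain Python. It's a toy illustration of the principle only, not BrainChip's Akida hardware or MetaTF API: the neuron does arithmetic only when an input spike arrives, which is where the power saving of spiking networks comes from.

```python
# Minimal leaky integrate-and-fire (LIF) sketch: illustrates why event-driven
# spiking networks save power. Computation happens only when an input event
# (spike) arrives; between events the membrane potential just decays.
# Toy model only; not BrainChip's Akida/MetaTF API.

import math

class LIFNeuron:
    def __init__(self, tau=20.0, threshold=1.0):
        self.tau = tau              # membrane time constant (ms), assumed value
        self.threshold = threshold  # firing threshold, assumed value
        self.potential = 0.0
        self.last_event_t = 0.0

    def on_event(self, t, weight):
        """Process one input spike at time t (ms) with synaptic weight."""
        # Decay the membrane potential for the time elapsed since the last event.
        dt = t - self.last_event_t
        self.potential *= math.exp(-dt / self.tau)
        self.last_event_t = t
        # Integrate the incoming spike.
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # output spike
        return False

# Sparse input: (time_ms, weight) pairs, e.g. from a motion sensor.
events = [(1.0, 0.4), (2.5, 0.5), (3.0, 0.3), (40.0, 0.4)]
neuron = LIFNeuron()
for t, w in events:
    if neuron.on_event(t, w):
        print(f"output spike at {t} ms")
```

Between 3 ms and 40 ms nothing happens at all, so nothing is computed; that sparsity is the whole power argument, whatever the marketing copy layers on top of it.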
 
Last edited:
  • Like
Reactions: 2 users

itsol4605

Regular
Steve Brightfield likes this

 
  • Like
  • Fire
Reactions: 5 users

itsol4605

Regular
Steve Brightfield likes this


1000047495.jpg
 
  • Fire
  • Like
Reactions: 3 users

Yoghesh

Regular

The 20% Question: Why AI Isn't Failing and What's Missing
Kevin D. Johnson, Field CTO – HPC, AI, LLM & Quantum Computing | Principal HPC Cloud Technical Specialist at IBM | Symphony • GPFS • LSF



 
  • Like
  • Fire
  • Love
Reactions: 13 users

IloveLamp

Top 20
1000018650.jpg
 
  • Like
  • Love
  • Fire
Reactions: 22 users

Boab

I wish I could paint like Vincent

The 20% Question: Why AI Isn't Failing and What's Missing
Kevin D. Johnson, Field CTO – HPC, AI, LLM & Quantum Computing | Principal HPC Cloud Technical Specialist at IBM | Symphony • GPFS • LSF



For those that don't have access.​

Cheers

The 20% Question: Why AI Isn't Failing and What's Missing​

Kevin D. Johnson

Field CTO – HPC, AI, LLM & Quantum Computing | Principal HPC Cloud Technical Specialist at IBM | Symphony • GPFS • LSF



February 7, 2026
The notion that the AI economy is headed for a crash is a narrative that's been in play for a while now. The numbers appear unsustainable. Global spending will probably hit $500 billion through 2026. Financial engineering around data center securities echoes pre-2008 patterns. And a widely cited McKinsey finding tells us that roughly 80% of companies deploying AI see no significant bottom-line impact.
The concerns are not necessarily baseless. The financial engineering deserves scrutiny. The historical parallels to canals, railroads, and dot-com fiber are worth taking seriously. But the diagnosis most people land on, that AI itself is a bubble, rests on weaker evidence than the confidence of the claim suggests.

The Number Everybody Cites​

McKinsey's 2025 Global Survey on AI has become the anchor statistic for the crash narrative. The headline finding: while 88% of organizations report using AI in at least one business function, roughly the same proportion report no significant gains in top-line or bottom-line performance. The figure shows up in financial commentary, policy analysis, and boardroom presentations as though it were established fact.
But what does that number represent?
The survey collected 1,993 responses across 105 nations between June and July 2025, a geographic scope far broader than the American markets where AI investment and deployment are most concentrated. Every data point is self-reported. No financial results were verified. No audited figures were examined. The finding that 80% of companies see no meaningful profit impact is not a measurement of profit impact. It is a measurement of whether survey respondents believe profit impact has occurred and are willing to attribute it to AI. These are fundamentally different things, and the distinction matters when the number is being used to evaluate hundreds of billions of dollars in economic activity.
The study's definition of "AI high performers" compounds the problem. McKinsey identifies roughly 6% of respondents as high performers based on two self-reported criteria: EBIT impact of 5% or more attributable to AI and a subjective assessment that their organization has seen "significant" value. The term "significant" is undefined. A respondent at a company with $10 billion in revenue claiming 5% EBIT impact from AI is asserting $500 million in AI-driven profit validated by nothing more than a survey response.
The sampling itself introduces bias. The people who respond to McKinsey's annual AI survey are people at companies paying attention to AI. Companies that never adopted AI or abandoned it early are underrepresented by design. The 88% adoption figure does not describe the full economy. It describes the portion of the economy that self-selected into a survey about AI.
None of this means the underlying observation is necessarily wrong. Companies clearly struggle to move from AI experimentation to production value. That pattern is real and recognizable to anyone working in enterprise technology. But the precision with which the 80% figure is cited far exceeds the rigor of the methodology that produced it.

The Evidence That Contradicts the Headline​

Consider what the 80% figure requires you to believe in the face of observable market evidence.
Palantir Technologies has become one of the most widely deployed AI and data platforms in both government and industry. Palantir's revenue growth, margin expansion, and market capitalization reflect an organization whose customers are deriving measurable value at enterprise scale. Palantir is not an exception to the AI economy. Palantir is deployed across intelligence agencies, defense organizations, major financial institutions, healthcare systems, and energy companies. The breadth and depth of that adoption is difficult to reconcile with a claim that 80% of AI-deploying companies see no meaningful returns.
IBM's AI business is steadily growing in a manner that reflects institutional commitment and operational stability. When the clients in question are banks, insurers, government, and many others whose decisions are governed by regulatory exposure and fiduciary obligation, growth represents something more durable than speculative enthusiasm.
The McKinsey survey cannot distinguish between a company running Palantir Foundry at enterprise scale across multiple business functions and a company that gave a few employees access to a chatbot. The survey instrument treats both as "using AI in at least one business function." When those two categories are pooled and the survey reports that most respondents have not seen enterprise-scale results, the finding is nearly tautological. Most respondents have not deployed AI at enterprise scale. Of course most respondents have not seen enterprise-scale results by that standard!

What Actually Separates Success from Failure​

The more useful question is not what percentage of companies are failing at AI. The more useful question is what distinguishes organizations that are capturing real value from those that are not.
The answer is not better hardware. The answer is platform and architecture.
Companies that treat AI as a procurement decision tend to struggle. In that model, capacity is provisioned without clarity about what question is being answered. Models are deployed without workload-aware platforms underneath. The organization buys hardware, installs a framework, and waits for transformation that never arrives.
Companies that treat AI as a disciplined platform tend to succeed. Each starts with a specific workload: portfolio optimization, risk classification, fraud detection, trade settlement, document analysis. Each matches that workload to the right platform and the right compute. And each orchestrates the full pipeline as a managed system rather than scattering experiments across departments.
The difference between these two approaches is not subtle, and it is not primarily about leadership commitment or organizational culture, which is where McKinsey's analysis lands. The difference is structural. One approach has a platform. The other does not.

The Ecosystem Is Already Broader Than People Think​

The crash narrative also tends to frame the AI economy as dangerously dependent on a single vendor and a single hardware architecture. That framing is too simple.
The GPU compute ecosystem is already a genuine ecosystem. CUDA provides the programming model. LSF orchestrates large-scale training workloads. Symphony can route and manage high-throughput inference and dynamic compute allocation. GPFS provides a distributed storage fabric. IBM Power delivers dense embedding generation and inferencing capability that complements GPU-driven workloads. The inference layer runs on open frameworks like vLLM deployed through OpenShift or RHEL AI. Platforms like watsonx bring model lifecycle management and governance to the enterprise. The model universe itself is diversified as open weights from Meta, Mistral, IBM Granite, and others compete on quality and efficiency. No single vendor controls the full stack, and the open-source community has ensured that the model layer in particular resists consolidation.
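
[A minimal sketch of that open inference layer, not from the article itself: serving an open-weight model through vLLM looks roughly like this. The model name is an illustrative choice; any of the open-weight families mentioned above would do.]

```python
# Sketch of the open inference layer: serving an open-weight model with vLLM.
# The model name and prompt are illustrative assumptions, not from the article.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")   # illustrative open-weight model
sampling = SamplingParams(temperature=0.2, max_tokens=256)

prompts = ["Summarize the key risks in this trade settlement report: ..."]
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```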
The AI economy's actual vulnerability is not vendor concentration. The vulnerability is architectural monotony: the assumption that every AI workload belongs on a GPU. That assumption is where the real fragility lives, not because GPU infrastructure is inadequate but because the GPU alone is incomplete.

Workloads Are Not Interchangeable​

A real-time market classification from a live data feed is a fundamentally different workload than generating a 2,000-word analysis. The classification requires sub-millisecond latency and minimal power draw. The generation requires a large parameter model and substantial memory bandwidth. Running both on the same GPU cluster means one of them is dramatically over-provisioned.
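
[A rough back-of-envelope, using assumed figures rather than anything from the article, shows the scale of the mismatch: decode-phase generation is memory-bandwidth bound, while a small classifier barely touches the hardware.]

```python
# Back-of-envelope: why generation and classification are different workloads.
# All figures are illustrative assumptions, not measurements from the article.

def decode_tokens_per_sec(params_billions, bytes_per_param, mem_bandwidth_gb_s):
    """Decode-phase generation is roughly memory-bandwidth bound:
    each new token requires streaming the model weights once."""
    bytes_per_token = params_billions * 1e9 * bytes_per_param
    return mem_bandwidth_gb_s * 1e9 / bytes_per_token

# Assume a ~70B-parameter model at 8-bit weights on a GPU with ~2 TB/s of HBM bandwidth.
gen_tps = decode_tokens_per_sec(params_billions=70, bytes_per_param=1, mem_bandwidth_gb_s=2000)
# A 2,000-word answer is roughly ~2,700 tokens.
print(f"~{gen_tps:.0f} tokens/s -> ~{2700 / gen_tps:.0f} s for a 2,000-word answer")

# A small event-driven classifier, by contrast, touches only a few hundred KB of
# weights per event and finishes in well under a millisecond on an edge device,
# so parking it on the same GPU leaves almost all of that bandwidth idle.
classifier_bytes_per_event = 300e3   # ~300 KB of weights per event (assumed)
print(f"classifier streams ~{classifier_bytes_per_event / 1e3:.0f} KB per event "
      f"vs ~{70e9 / 1e9:.0f} GB per generated token")
```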
As I have demonstrated recently, neuromorphic hardware like BrainChip's Akida handles event-driven classification at microsecond latency on milliwatts of power. Quantum-classical hybrid methods are producing peer-reviewed results in bond trading prediction and combinatorial optimization. IBM Quantum has arrived as something you can engage today, not just tomorrow. Legacy systems running proven COBOL and RPG logic do not need rewriting, but can benefit from a conversational AI layer that preserves the deterministic logic underneath. Edge deployment pushes inference to the point of data generation instead of requiring every byte to travel to a central facility.
None of these developments eliminate the GPU. Large-scale training and generative inference genuinely need GPU hardware or its equivalent. A mature AI presence in companies will be heterogeneous: different platforms governing different compute for different workloads at different cost structures, all orchestrated under unified management.
The historical parallel everyone cites actually supports this point. The dot-com fiber buildout crashed in valuation, but the physical fiber became the backbone of the modern internet. What changed was not the infrastructure. What changed was how the infrastructure was used and what was built on top of it. The GPU buildout may follow the same pattern. Valuations correct. Infrastructure endures. The platforms that match workloads to the right resources become the durable value.

The Real Risk​

The real risk in the AI economy is not excessive spending. The real risk is unorchestrated spending.
A semantic query that needs a 2B parameter model should not burn cycles on a 70B model. A classification task that runs effectively on a $150 edge board should not occupy a $30,000 GPU. A batch risk calculation that runs on proven mainframe logic should not be rewritten for a cloud-native framework simply to place AI on the architecture diagram.
When the right task routes to the right hardware automatically, when the platform itself becomes intelligent about resource allocation, the economics change in ways that compound. The same fleet serves more workloads. Each workload costs what it should cost. And the resources freed by efficient allocation become available for additional compute, extending the reach of the organization beyond what was possible when every task consumed premium hardware regardless of need. The governance that enterprises require for auditability, compliance, and chargeback comes built into the platform rather than bolted on after deployment.
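
[A toy sketch of what such routing could look like, with tiers, thresholds, and cost figures invented purely for illustration; a real platform such as Symphony, LSF, or Kubernetes would do this with far richer policies, queues, and telemetry.]

```python
# Toy workload router: matches a task profile to a hardware tier.
# Tier names, thresholds, and relative costs are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    model_params_b: float       # model size in billions of parameters
    latency_budget_ms: float    # end-to-end latency target
    event_driven: bool = False  # sparse, sensor-style workload?

TIERS = [
    # (name, max model size (B params), min latency budget (ms), relative cost/hour)
    ("edge-neuromorphic", 0.01,   0.0,  0.01),
    ("edge-cpu",          0.5,    5.0,  0.05),
    ("mid-gpu",          13.0,   50.0,  1.00),
    ("large-gpu",       200.0,  200.0,  8.00),
]

def route(task: Task) -> str:
    """Pick the cheapest tier that fits the model and can meet the latency budget."""
    if task.event_driven and task.model_params_b <= 0.01:
        return "edge-neuromorphic"
    for name, max_params, min_latency, _cost in TIERS:
        if task.model_params_b <= max_params and task.latency_budget_ms >= min_latency:
            return name
    return "large-gpu"

jobs = [
    Task("market-tick classification", 0.005, 1.0, event_driven=True),
    Task("semantic FAQ lookup", 2.0, 200.0),
    Task("2,000-word analyst report", 70.0, 5000.0),
]
for job in jobs:
    print(f"{job.name:32s} -> {route(job)}")
```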
High-performance computing addressed these challenges years ago. The largest compute grids in the world do not run homogeneous hardware. Those grids run dynamic compute platforms that allocate resources based on workload characteristics, scale capacity based on demand, and provide unified management across every resource in the cluster. What is changing now is the range of hardware those platforms can govern and the intelligence they can bring to routing decisions.

The Right Question​

The correction that is coming is not an AI crash. The correction is architectural.
The market is going to learn what successful deployments already demonstrate: the value is in the overall ecosystem, properly deployed. The value lies in knowing which chip to use, when to use it, and how to govern the system as a whole. The companies that build workload-aware heterogeneous platforms and treat AI as an architectural discipline informed by what high-performance computing has already learned will keep compounding returns. The companies that do not will cite McKinsey's survey and blame AI for what was always their own platform problem.
The AI economy does not need to shrink. The AI economy is already generating profit, and the companies that have not yet seen returns need to grow toward a proper platform. Diversify the hardware. Orchestrate the workloads. Match compute to purpose. Govern the whole system on a platform designed for it.
The real question was never "is AI worth the investment?" The real question is "do we have the platform to make the investment work?"

The opinions expressed in this article are those of the author and do not necessarily reflect the views of the author's employer.
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Rach2512

Regular
GR801


Screenshot_20260207_082252_Samsung Internet.jpg
Screenshot_20260207_082312_Samsung Internet.jpg
Screenshot_20260207_082323_Samsung Internet.jpg
Screenshot_20260207_082335_Samsung Internet.jpg
Screenshot_20260207_082346_Samsung Internet.jpg
 
  • Like
  • Wow
  • Fire
Reactions: 9 users