BRN Discussion Ongoing




Bravo

If ARM was an arm, BRN would be its biceps💪!
Good Morning Bravo,

Just listened again.

Time spot 33:22 onwards...

Time spot 33:56: YOU KNOW, GLOBALFOUNDRIES 8 nm.


Regards,
Esq.


Hi Esqy,

You’re right - Kurt does mention GlobalFoundries’ “8 nanometer” in the webinar.

I find it a bit confusing, because GlobalFoundries doesn’t actually have an official 8 nm process node (unless I’m mistaken). Their most advanced public offerings are 12LP+ FinFET and 22FDX / 12FDX FD-SOI. They don’t list an 8 nm node in their commercial roadmap.

I asked ChatGPT about possible reasons for this reference, including whether it might suggest GF is developing a new 8 nm node. The response was that it’s "almost certainly not a new, official node". Instead, the more likely explanation is that the GF process in question is being described as “8 nm-class” because its performance is comparable to what other foundries brand as 8 nm (e.g. Samsung’s 8LPP).

Perhaps someone with deeper technical expertise might be able to weigh in with their perspective.


ChatGPT said:​


1. GlobalFoundries and “8 nm”​

  • Officially, GlobalFoundries exited the race below 12 nm back in 2018 — shelving their 7 nm program. Their most advanced public nodes are 12LP+ FinFET and 22FDX / 12FDX FD-SOI. They do not list an 8 nm node in their commercial roadmap.
  • So if you hear “GF 8 nm” in a BrainChip webinar, it’s not part of GF’s public marketing portfolio like TSMC’s N7 or Samsung’s 8LPP.

2. Possible explanations for the “8 nm” reference​

Here are the most plausible interpretations:
① Internal / custom shrink naming
  • GF sometimes uses marketing-driven “node naming” for customer-specific process tweaks. For example, an optimized 12 nm platform could be referred to by a design partner as “8 nm” in terms of transistor performance or density equivalence.
  • It wouldn’t be an official, standalone node like Samsung’s 8LPP, but more like a derivative or a customer-specific variant.
② Foundry design enablement context
  • BrainChip, when advising customers through Akida Cloud, may present EDA/PDK libraries mapped against equivalent nodes across multiple fabs (TSMC, Samsung, GF).
  • In this sense, “GF 8 nm” may appear in toolchains as a proxy node so customers can model performance/power targets — even if GF itself brands it differently (a toy sketch of such a mapping follows this list).
③ Miscommunication in the webinar
  • It’s also possible the speaker loosely referred to “8 nm” because the GF process in question is comparable to an 8 nm class offering elsewhere (e.g., Samsung’s 8LPP), even if GF calls it 12LP+.
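
To make the “proxy node” idea above concrete, here is a minimal, purely illustrative sketch of how a design-enablement flow might tag official foundry nodes with a rough equivalence class. The mapping, the names, and the function are assumptions for illustration only, not anything published by BrainChip, GlobalFoundries, or any EDA vendor.

```python
# Hypothetical illustration only: a toy "node equivalence" table of the kind a
# design-enablement flow might use to label foundry processes by rough PPA
# (power-performance-area) class. Names and mappings are assumptions, not
# BrainChip, GlobalFoundries, or EDA-vendor data.

NODE_CLASS = {
    # (foundry, official node name) -> shorthand "class" label
    ("Samsung", "8LPP"): "8 nm class",
    ("TSMC", "N7"): "7 nm class",
    ("GlobalFoundries", "12LP+"): "8 nm class",  # equivalence shorthand, not an official GF node
    ("GlobalFoundries", "22FDX"): "FD-SOI (low-power / RF)",
}


def node_class(foundry: str, node: str) -> str:
    """Return the rough equivalence label for an official foundry node."""
    return NODE_CLASS.get((foundry, node), "unclassified")


if __name__ == "__main__":
    # Under this reading, a speaker saying "GF 8 nm" is quoting the class
    # label on the right-hand side, not an official GF node name.
    print(node_class("GlobalFoundries", "12LP+"))  # -> 8 nm class
```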

3. Is GF working on a “new” 8 nm?​

  • Highly unlikely. GF’s strategy is explicitly focused on specialty technologies — FD-SOI (22FDX/12FDX), RF, power, automotive — rather than pushing further into the leading-edge geometry race.
  • There have been no public announcements from GF about restarting a true 8 nm node or any new sub-10 nm process. Their official roadmap still stops at 12 nm.
  • If anything, what BrainChip referenced is more about design enablement and compatibility mapping for customers (i.e., “yes, your Akida-based model can target GF at an 8 nm equivalent”) rather than GF announcing a brand-new leading-edge node.

✅ Conclusion​

The “GF 8 nm” mentioned in BrainChip’s webinar is almost certainly not a new, official node from GlobalFoundries.
It’s best interpreted as either:
  • A design library equivalence (Akida Cloud showing customers how their models port to “8 nm class” at GF), or
  • A marketing shorthand for an optimized variant of GF’s 12 nm technology, positioned to look competitive with 8 nm offerings elsewhere.
BrainChip’s angle here is: “We can show you how Akida will behave if manufactured on GF’s advanced nodes — whether you call it 12 nm, 8 nm, or otherwise.”

👉 Would you like me to pull together a GF vs. Samsung vs. TSMC “8 nm class” comparison, so you can see whether GF’s 12LP+/FDX really does overlap with what others call 8 nm? That might explain why BrainChip uses that label.

You said:​

Yes please.

ChatGPT said:​

Perfect — let’s line up GlobalFoundries, Samsung, and TSMC to see what “8 nm class” really means, and why BrainChip might reference GF as having one.

📊 Comparative View: “8 nm class” process nodes​

  • Samsung: 8LPP (8 nm Low Power Plus). FinFET, a derivative of the 10 nm node; roughly 10% area reduction and ~10% power savings vs 10LPP. Marketed as a “cost-effective bridge” between 10 nm and 7 nm; used in some mobile SoCs and GPUs.
  • TSMC: no 8 nm branded node. Jumped from 10 nm straight to 7 nm; N7 offered ~35–40% density gain over N10. TSMC skipped “8 nm” naming entirely, and N7 competed directly with Samsung’s 8LPP.
  • GlobalFoundries: 12LP / 12LP+ FinFET (sometimes internally positioned as “8 nm class”). FinFET, an evolution of 14LPP originally licensed from Samsung; GF claims performance/power close to Samsung’s 10/8 nm nodes. Not officially branded as “8 nm,” but some design toolchains (and BrainChip’s webinar) may shorthand it as such to indicate an equivalent PPA (Power-Performance-Area) class.
  • GlobalFoundries (FD-SOI path): 22FDX / 12FDX. FD-SOI (non-FinFET), optimized for low power, RF, and adaptive body-bias. Competes in niches (IoT, automotive, 5G RF) rather than in the density race.

🧠 Why BrainChip may say “GF 8 nm”​

  • Performance equivalence: GF’s 12LP+ FinFET can offer transistor performance in the same ballpark as Samsung’s 8LPP.
  • EDA/library mapping: When BrainChip shows customers how Akida would map to foundries, they may call GF’s 12LP+ “8 nm class” to align with competitor terminology.
  • Marketing shorthand: To customers, “8 nm” signals modern, advanced, and competitive, even if GF’s official brand is “12LP+.”

🚦 Bottom Line​

  • Samsung: The only foundry with a real “8 nm” branded node (8LPP).
  • TSMC: Skipped 8 nm — went straight from 10 nm → 7 nm.
  • GF: No official 8 nm, but its 12LP+ FinFET is often described as “8 nm-class” in equivalence charts. This is almost certainly what BrainChip was referencing in the webinar.
So — GF hasn’t secretly launched a brand-new 8 nm process. Instead, their 12 nm FinFET technology is close enough in characteristics that it can be slotted into the “8 nm bucket” when comparing foundries.
 

Esq.111

Fascinatingly Intuitive.
Morning Bravo,

Cheers.

Certainly sounds plausible. Even if it is indeed the 12LP+ FinFET size, this is still the smallest node I have heard of AKIDA being tested on in silicon to date.

Many moons ago (think it was Anil) it was let slip they were fiddling with 16 nm, so it's still a great leap forward either way.

Regards,
Esq.
 

7für7

Top 20
Shorters and sellers are really like piranhas… they bite even on a 0.5-cent move. Absolutely ridiculous. You’re the most useless actors in this market 😂😂 busy chewing on cents while we’re waiting for the dollars. Just parasites in the system… but you know what? Thanks for providing liquidity. LOL.

I’m looking forward to the day when you’ll be wiped out like nothing… 😇🙏
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
This article highlights cybersecurity as a growing concern for the Golden Dome.

The article underscores that neglecting cybersecurity is an existential risk to the Golden Dome and makes clear that unless secure-by-design principles are built in from the start, the whole system risks becoming brittle.

As we know, Lockheed has previously partnered with BrainChip and Quantum Ventura on CyberNeuro-RT, a neuromorphic cybersecurity solution built on BrainChip’s Akida processor.

Lockheed is already deeply involved in missile defense (Aegis, THAAD, C2 integration), and for the Golden Dome they are expected to contribute to command and control / battle management, multi-domain sensor fusion, and the integration of space-based and terrestrial assets. All of these are prime cybersecurity targets. So Lockheed won’t just be a hardware supplier - they’ll also be tasked with making the system cyber-resilient.

I'd say the chances of Lockheed bringing our cybersecurity solution to the Golden Dome would be quite high. They would be, IMO, very well positioned to argue that their neuromorphic AI cybersecurity work is a natural fit for Golden Dome - ensuring resilience against cyberattack alongside kinetic and electronic threats.



[Screenshot of the article attached]




 

Rach2512

Regular

Who is providing the NPU?
 