I miss the days when we had investor presentations. Those simple PowerPoints that said: here is where we are, here is where we want to be in 6, 12, 18, 24 or 60 months' time. We were given some guidance and expectations against which to judge the company's performance.
It is frustrating being a holder because they provide so little. Secrecy and trust are part of how Lockheed got to where it is. Government contracts can be gold, not only in money but in reputation and validation.
Thank you for all the research that is posted on here; it gives me such confidence that our time is close. I'll just keep watching those financials.
Well, it does say BrainChip WILL be part of this integration. Not might or maybe or possibly, but WILL!

Lockheed Martin gets another mention here in relation to the Golden Dome, as does RTX.
The other thing I noticed is that the article says "Notably, the slides did not mention Mr Elon Musk’s SpaceX, which was part of a bid for Golden Dome contracts alongside software maker Palantir and defence systems manufacturer Anduril."
Is it merely a coincidence that Jonathan Tapson also mentioned both Palantir and Anduril in his Washington post?
Pentagon Golden Dome to have 4-layer defence system, slides show
The Golden Dome missile defence system faces an ambitious 2028 deadline set by US President Donald Trump himself.
PHOTO: REUTERS
Published Aug 13, 2025, 06:33 AM
Updated Aug 13, 2025, 06:53 AM
WASHINGTON - The Trump administration's flagship Golden Dome missile defence system will include four layers - one satellite-based and three on land - with 11 short-range batteries located across the continental US, Alaska and Hawaii, according to a US government slide presentation on the project first reported by Reuters.
The slides, tagged “Go Fast, Think Big!” were presented to 3,000 defence contractors in Huntsville, Alabama, last week and reveal the unprecedented complexity of the system, which faces an ambitious 2028 deadline set by US President Donald Trump.
The system is estimated to cost US$175 billion (S$224.53 billion), but the slides show uncertainties still loom over the basic architecture of the project because the number of launchers, interceptors, ground stations, and missile sites needed for the system has yet to be determined.
"They have a lot of money, but they don't have a target of what it costs yet," said one US official.
So far, Congress has appropriated US$25 billion for Golden Dome in Mr Trump’s tax-and-spend Bill passed in July.
Another US$45.3 billion is earmarked for Golden Dome in his 2026 presidential budget request.
Intended as a multi-layered missile defence shield for the United States, Golden Dome draws inspiration from Israel's Iron Dome, but is significantly bigger due to the geography it will need to protect and the complexity of the varied threats it will face.
According to the slides, the system architecture consists of four integrated layers: a space-based sensing and targeting layer for missile warning and tracking as well as "missile defence" and three land-based layers consisting of missile interceptors, radar arrays, and potentially lasers.
One surprise was a new large missile field - seemingly in the Midwest, according to a map contained in the presentation - for Next Generation Interceptors (NGI), which are made by Lockheed Martin and would be part of the "upper layer" alongside Terminal High Altitude Area Defense (Thaad) and Aegis systems, which are also made by Lockheed.
NGI is the modernised missile for the Ground-Based Midcourse Defence (GMD) network of radars, interceptors and other equipment - currently the primary missile defence shield to protect the United States from intercontinental ballistic missiles from rogue states.
The US operates GMD launch sites in southern California and Alaska. This plan would add a third site in the Midwest to counter additional threats.
Other technical hurdles the slides identified included communication latency across the "kill chain" of systems.
Contractors such as Lockheed, Northrop Grumman, RTX, and Boeing have a variety of missile defence systems.
Notably, the slides did not mention Mr Elon Musk’s SpaceX, which was part of a bid for Golden Dome contracts alongside software maker Palantir and defence systems manufacturer Anduril.
The Pentagon said it is gathering information "from industry, academia, national labs, and other government agencies for support to Golden Dome" but it would be "imprudent" to release more information on a programme in these early stages.
One key goal for Golden Dome is to shoot targets down during their "boost phase," the slow and predictable climb of a missile through the Earth's atmosphere. The United States does not currently field that capability; rather, it seeks to field space-based interceptors that can more quickly intercept incoming missiles.
The presentation highlighted that the United States "has built both interceptors and re-entry vehicles" but has never built a vehicle that can handle the heat of reentry while targeting an enemy missile.
The last lines of defence, dubbed the "under layer" and "Limited Area Defence", will include new radars, current systems like the Patriot missile defence system, and a new "common" launcher that will fire current and future interceptors against all threat types.
These modular and relocatable systems would be designed to minimise reliance on prepared sites, allowing for rapid deployment across multiple theatres.
Space Force General Michael Guetlein, confirmed in July to lead the Golden Dome project, has 30 days from his July 17 confirmation to build a team, another 60 days to deliver an initial system design, and 120 days to present a full implementation plan, including satellite and ground station details, people briefed on a memo signed by Defence Secretary Pete Hegseth have told Reuters. REUTERS
This looks like a great hire, with a lot of industry-relevant experience.

Yep
James Shields - BrainChip | LinkedIn
• A versatile and skilled technical sales professional with leadership qualities and… · Experience: BrainChip · Education: University of California, Los Angeles · Location: Chicago · 500+ connections on LinkedIn.
www.linkedin.com
Got the experience, but someone having had 3 jobs in 3 years is a concern, so let's hope this position is only temporary.

While both Sales Managers and Business Development Managers aim to drive revenue, they focus on different aspects of the sales process. Sales Managers are primarily concerned with managing and motivating a sales team to achieve short-term sales targets. Business Development Managers focus on long-term strategic growth, identifying new markets, and building partnerships.
Let’s hope he has a few easy wins developing partnerships with those already interested in our product before identifying how big the rocket, I mean the potential market, is.
One would have to think BRN is involved here in some way.

Arm is adding a new division: "Arm Neural Technology".
Arm Neural Technology Delivers Smarter, Sharper, More Efficient Mobile Graphics for Developers
Explore Arm Neural Technology, the first to integrate neural accelerators into GPUs, enhancing mobile graphics with AI.
newsroom.arm.com
"Arm neural technology is an industry first, adding dedicated neural accelerators to Arm GPUs, bringing PC-quality, AI powered graphics to mobile for the first time – and laying the foundation for future on-device AI innovation"
No mention of Akida, but right place, right time.
Happy as Larry
One would have to think BRN is involved here in some way.
Go brainchip
Component | Description | Relation to BrainChip |
---|---|---|
Arm Neural Technology | Dedicated neural GPUs for AI graphics | Unrelated to neuromorphic; not BrainChip |
BrainChip’s Akida | Neuromorphic spiking AI for low-power edge | Compatible with Arm ecosystem but distinct |
Must be a mistake as per normal, as they have a price sensitive on the wheel.

SAY CHEEEEEEEEEEEESE
AI hallucination

Must be a mistake as per normal, as they have a price sensitive on the wheel
This relationship with ARM at some point must grow now that Gen 2 and TENNs are available, surely.

Not related to BrainChip according to ChatGPT, unfortunately.
ChatGPT 5:
The "Arm Neural Technology" mentioned in the article refers to specialized neural accelerators integrated into Arm’s future GPU architectures, aimed at delivering real-time AI-powered graphics enhancements—like Neural Super Sampling (NSS)—on mobile devices. It is not related to BrainChip’s neuromorphic technology.
What Arm’s Neural Technology Is
- Arm is introducing dedicated neural acceleration hardware within next-generation GPUs, enabling advanced graphics features such as AI-driven upscaling, frame generation, and denoising.
- An early application is Neural Super Sampling (NSS), which can upscale images (e.g., from 540p to 1080p) in just ~4ms per frame while enhancing performance, battery life, or visual quality.
- This is essentially a GPU-based AI enhancement pipeline, akin to NVIDIA’s Tensor Cores, optimized for graphics—not a neuromorphic/spiking AI architecture.
How It Differs from BrainChip’s Neuromorphic Approach
- Arm’s neural accelerators are classic GPUs with AI blocks, designed for frame-based deep learning tasks, tailored toward visual and graphics workloads.
- BrainChip’s Akida is a neuromorphic processor, built on spiking neural network principles, which excels at event-driven, low-latency, low-power inference—especially suited for edge use cases like sensory data processing, not graphics.
- While BrainChip is a partner in Arm’s ecosystem—meaning Akida can co-exist with Arm CPUs and NPUs—Arm's new GPU neural tech and BrainChip’s SNN IP are complementary, not the same.
Summary
Component | Description | Relation to BrainChip |
---|---|---|
Arm Neural Technology | Dedicated neural GPUs for AI graphics | Unrelated to neuromorphic; not BrainChip |
BrainChip’s Akida | Neuromorphic spiking AI for low-power edge | Compatible with Arm ecosystem but distinct |
If anything, Arm’s new offering and BrainChip’s neuromorphic IP represent different layers of edge AI evolution—graphics-centric in one case, brain-inspired general intelligence in the other.
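For scale, the ~4ms NSS figure quoted above implies roughly half a million output pixels per millisecond. A quick back-of-envelope check (my own arithmetic; only the resolution and timing come from the quote):

```python
# Back-of-envelope check (not Arm's numbers beyond the quoted 540p->1080p
# upscale in ~4 ms per frame).
out_pixels = 1920 * 1080            # pixels in a 1080p output frame
frame_ms = 4.0                      # quoted NSS time per frame
pixels_per_ms = out_pixels / frame_ms
budget_ms_at_60fps = 1000 / 60      # ~16.7 ms available per frame at 60 fps
share = frame_ms / budget_ms_at_60fps
print(f"{pixels_per_ms:,.0f} output px/ms; NSS would use {share:.0%} of a 60 fps frame budget")
```

So even if the 4ms figure holds, NSS would consume about a quarter of a 60 fps frame budget, which is why it is pitched as a trade-off between quality, performance, and battery life.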
This relationship with ARM at some point must grow now that Gen 2 and TENNs are available, surely.
What's it going to take?
Hi Bravo,
The ARM U85 uses MACs:
https://www.bing.com/images/search?view=detailV2&ccid=SKxR9R6A&id=FDE75A522354D643A048FDBF7BD709D6B16C62E3&thid=OIP.SKxR9R6AvaWns2DkvNwBzQHaFe&mediaurl=https://lh7-us.googleusercontent.com/docsz/AD_4nXdCJtxPcYCTi6bGU47CnzGMdXJ4kW5j5u1EkQcbitxexcMcuzHV3dYpoAoeBa4ITge7ZLR5CMVZV3Po3TZIG-e1Tnp_GIEbjzMjfcOoyz1lp01fxeqKqgomdCzj_PdIARM4JdK80VX56Ea9PNOYaYtdAFM?key=25AlXtfNVs_jGBLuOXoleg&exph=671&expw=907&q=arm+ethos+u85+block+diagram&form=IRPRST&ck=AEB858B96E0340034576C1DFDA68FBAC&selectedindex=1&itb=0&ajaxhist=0&ajaxserp=0&vt=0&sim=11
Arm® Ethos™-U85 NPU Technical Overview
The weight and fast weight channels transfer compressed weights from external memory to the weight decoder. The DMA controller uses a read buffer to hide bus latency from the weight decoder and to enable the DMA to handle data arriving out of order. The traversal unit triggers these channels for blocks that require the transfer of weights.
The weight stream must be quantized to eight bits or less by an offline tool. When passed through the offline compiler, weights are losslessly compressed and reordered into an NPU-specific weight stream. This process is effective if the quantizer uses fewer than eight bits, or if it uses clustering and pruning techniques; the quantizer can also employ all three methods. Using lossless compression on high-sparsity weights containing greater than 75% zeros can lead to compression below 3 bits per weight in the final weight stream.
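The sub-3-bits-per-weight claim at >75% sparsity can be sanity-checked with information theory: the Shannon entropy of the weight distribution lower-bounds what any lossless coder can achieve. A sketch with synthetic weights (illustrative only, not Arm's actual coder):

```python
import math
from collections import Counter

def entropy_bits_per_weight(weights):
    """Shannon entropy of the empirical weight distribution:
    a lower bound on bits/weight for any lossless entropy coder."""
    counts = Counter(weights)
    n = len(weights)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Synthetic int8-style weight stream: 80% zeros (high sparsity),
# the remaining 20% cycling through the nonzero int8 range.
n = 100_000
nonzero = [((i % 255) - 127) or 1 for i in range(n // 5)]
weights = [0] * (n - len(nonzero)) + nonzero

h = entropy_bits_per_weight(weights)
print(f"{h:.2f} bits/weight vs 8 bits raw")
```

With 80% zeros the entropy comes out between 2 and 3 bits per weight, consistent with the quoted figure; clustering (fewer distinct values) would push it lower still.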
Given Akida 3's INT16/FP32 capabilities, Akida 3 can serve more high-precision applications than the Ethos-U85.
Feature / Use Case | Ethos-U85 | Akida 3 | Comments |
---|---|---|---|
INT8 inference for vision AI | ✔ | ✔ | Both handle low-bit CNN/Transformer workloads efficiently. |
Edge AI in ARM-based SoCs | ✔ | ✔ | Akida is already ARM-compatible and could replace U85 in designs. |
Consumer IoT & smart home devices | ✔ | ✔ | E.g., cameras, voice assistants, home hubs: same target market. |
Robotics & drones | ✔ | ✔ | Both can handle perception & navigation; Akida offers ultra-low power SNN modes. |
Automotive driver monitoring | ✔ | ✔ | Akida could match U85 for INT8 workloads, but also add mixed precision. |
AR/VR lightweight inference | ✔ | ✔ | Both could run vision-based gesture tracking or object recognition. |
Feature / Use Case | U85 Limitations | Akida 3 Advantage |
---|---|---|
High precision (INT16 / FP32) | Only ≤INT8 | Akida 3 can run models that require higher precision, e.g., medical imaging, radar processing, industrial measurement. |
Mixed-mode processing | Primarily CNNs / Transformers | Akida 3 can combine SNN + ANN + mixed precision in one device. |
Event-based data handling | Not designed for spikes | Akida 3 natively supports event-driven SNN processing, reducing power & latency. |
On-device learning / adaptation | Limited to retraining off-device | Akida 3 supports incremental learning on-chip — key for adaptive edge AI. |
Sparse computing efficiency | Relies on compression & pruning | Akida 3 exploits sparsity at the architectural level without preprocessing overhead. |
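On the first row of the table above (≤INT8 vs INT16/FP32), a tiny sketch of why the extra bits matter. This uses generic symmetric uniform quantization over [-1, 1] and is purely illustrative, not vendor code for either chip:

```python
# Worst-case error of symmetric uniform quantization on [-1, 1]
# at 8 vs 16 bits (illustrative; not Ethos-U85 or Akida internals).
def quantize(x, bits):
    levels = 2 ** (bits - 1) - 1      # 127 for int8, 32767 for int16
    return round(x * levels) / levels

xs = [i / 1000 for i in range(-1000, 1001)]
err8  = max(abs(x - quantize(x, 8))  for x in xs)
err16 = max(abs(x - quantize(x, 16)) for x in xs)
print(f"max error int8: {err8:.2e}, int16: {err16:.2e}")
```

The int16 grid is ~256x finer, which is the headroom that applications like radar processing or medical imaging rely on.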