Executive Decisions at the Edge: Non-Extractive Targeting Built in Five Days with Symphony, Akida, and Foundry
Field CTO – HPC, AI, LLM & Quantum Computing | Principal HPC Cloud Technical Specialist at IBM | Symphony • GPFS • LSF
March 5, 2026
Yesterday I posted about building a targeting system in five days with IBM Spectrum Symphony, BrainChip Akida, and Palantir Technologies AIP Foundry. Here is the architecture: how each layer works, what connects them, how data stays separated by classification level, and why the system scales.
An Executive Technology Platform
Many sensor-to-C2 architectures are extractive. Extractive systems pull raw data from the edge, transport it to a central server, and filter it there. The 25th Infantry Division experienced the result when working with Palantir: thousands of data objects flooding the system with no way to control the flow. Filtering happens too late, after the data has already consumed bandwidth, compute, and analyst attention in transit. Running traditional inference at the edge is not the answer either. GPUs demand power, cooling, and infrastructure the tactical edge cannot provide.

Neuromorphic processing changes the equation. Spiking neural networks on milliwatt silicon classify at the sensor itself, turning raw data into meaning before it ever touches a network. Extract-transform-load economics matter here: even in enterprise systems, fully processing all the data that crosses a wire is extremely rare. The question is where the reduction happens and how much meaning survives the cut.
The present system is non-extractive. Data stays local where it belongs and where it is most useful. Each layer processes, reduces, and passes only what the next layer needs. The design mirrors how military hierarchy actually works. A squad leader does not forward every observation to their brigade. The squad synthesizes, decides what is relevant, and reports up the chain. Platoons and companies do the same. Each echelon receives the level of detail appropriate for its command and control responsibility rather than a raw dump of everything below it.
Symphony is the interface and dynamic compute platform that enforces the hierarchy. Local sensor capability determines what data appears up the channel and in what form. Akida decides what is meaningful at the sensor. The emergence engine decides what is confirmed at the node. Symphony decides what is worth projecting to AIP Foundry. Foundry presents only confirmed targets, confidence levels, threat profiles, and satellite imagery when needed. The operator sees what is needed to make a decision and act. Each layer does its job so the next layer can do its job faster. The result is an executive technology platform that performs the way it should at every layer.
The Pipeline
The system runs in five layers: sensor, satellite, shared storage, orchestration, and ontology. Data flows in one direction. Each layer reduces volume and increases meaning. Nothing classified crosses a boundary it should not cross.
Layer 1: Neuromorphic Inference
BrainChip AKD1000 processors sit at the sensor edge. Each processor runs a trained spiking neural network purpose-built for its modality: visual classifiers on cameras, RF classifiers on software-defined radios, a BLE classifier scanning for commercial fleet beacons, and an acoustic classifier on audio feeds. Inference runs at sub-millisecond latency on milliwatts, making these processors easy to place in the field on battery power, solar, or any constrained platform. The output is a 128-byte observation record containing a classification label, confidence score, threat score, timestamp, sensor type, and source identifier.
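The 128-byte observation record can be sketched as a fixed binary layout. The field names and byte widths below are illustrative assumptions chosen to total 128 bytes; they are not the deployed wire format.

```python
import struct
import time

# Hypothetical 128-byte layout: label (32), confidence (4), threat (4),
# timestamp (8), sensor type (16), source identifier (64). Little-endian,
# no padding. Sizes are assumptions, not the deployed format.
OBSERVATION = struct.Struct("<32s f f d 16s 64s")

def pack_observation(label, confidence, threat, sensor_type, source_id):
    """Serialize one classification result into a fixed 128-byte record."""
    return OBSERVATION.pack(
        label.encode()[:32],
        confidence,
        threat,
        time.time(),
        sensor_type.encode()[:16],
        source_id.encode()[:64],
    )

record = pack_observation("semi_truck", 0.97, 0.42, "visual", "cam-north-07")
```

A fixed-size record keeps transport cost constant per observation regardless of what the underlying sensor captured.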
Humans aren't removed from the loop here. Instead, human effort is placed where it is most valuable: training the model and getting it right. A well-trained spiking neural network encodes human judgment into silicon. The expertise that would otherwise be spent watching feeds and filtering noise is invested once, upfront, building a classifier that handles sensor data at machine speed with human-level discrimination. The human decides what matters. The chip enforces that decision at machine scale, millions of times a second.
Once the investment is made, raw sensor data never leaves the node. Camera footage stays on the edge node or is shredded after classification. RF samples stay or disappear. Audio follows the same pattern. The only thing that crosses the network is an encrypted 128-byte observation carrying meaning without revealing the underlying intelligence source. The edge network never transports imagery, audio, or signals intelligence. Observations travel over IPsec tunnels compliant with FIPS 140-3 whether the transport is HaLow RF mesh or standard gigabit Ethernet. An adversary intercepting edge traffic sees encrypted 128-byte classification records rather than exploitable sensor feeds.
Layer 2: Classified Satellite Imagery
When a target is confirmed the operator may need visual verification. The system pulls a satellite image from a secure AIX instance running on IBM Power Virtual Server (PowerVS) via IBM Cloud. The image arrives at the edge node where the AKD1000 tiles it into overlapping regions and classifies each tile: overhead truck or background. Only tiles that pass the confidence threshold are saved as crops. The full image is transient on the edge node. The operator never sees it. What reaches the operator is a small tile containing only the target area with its classification and confidence score. The Akida chip decides what is relevant. The operator sees only what matters for the decision at hand. Classification separation is enforced at the inference layer: the raw satellite pass is consumed and discarded by the NPU. Only classified relevant fragments are presented or persisted.
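The tiling step above can be sketched in a few lines. The tile size, overlap, and threshold are illustrative assumptions, and `classify` stands in for the AKD1000 tile classifier.

```python
def tile_regions(width, height, tile=224, overlap=32):
    """Yield (x, y, x2, y2) boxes covering the image with overlapping tiles."""
    step = tile - overlap
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            yield (x, y, min(x + tile, width), min(y + tile, height))

def relevant_crops(image_size, classify, threshold=0.85):
    """Keep only tiles whose classifier confidence clears the threshold.

    `classify` is a placeholder for the on-chip tile classifier (an
    assumption here); it maps a tile box to a target confidence score.
    """
    w, h = image_size
    return [box for box in tile_regions(w, h) if classify(box) >= threshold]
```

Everything `relevant_crops` discards is never persisted, which is the mechanism behind the classification separation described above.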
The architecture protects more than the image. A full satellite pass reveals resolution, coverage area, revisit timing, and collection geometry. Exposing a full pass to an operator screen or a downstream network means exposing it to anyone who compromises that screen or that network. Letting the NPU consume the pass and discard everything outside the target tile means the wider dataset is never stored, never transmitted, and never available for exfiltration. The classified material is protected because it never persists in a form that could be captured. An adversary learns nothing about what the satellite can see. The adversary learns only that a specific target was identified.
Layer 3: GPFS as the Integration Layer
IBM Storage Scale (GPFS) is a parallel filesystem most people associate with supercomputers. GPFS and Symphony were built and proven on high-performance hardware of an earlier era whose capability now matches what edge computing affords: the processing power that once required a data center fits in a tactical node. Decades of battle-tested distributed systems software for filesystem coherence, atomic operations, and resource scheduling run natively at the edge without adaptation. Here GPFS serves a different role than it did in the supercomputer: the shared substrate that connects the edge fleet without requiring message brokers, artifact servers, or coordination services.
Observation records flush from local shared memory to GPFS for persistence. Edge nodes operate independently and rarely need to communicate with each other. Each node classifies, writes its observations, and Symphony handles what happens next. There is no pub/sub, no message queue, and very little cross-node chatter.
GPFS can also store a sensor registry. Inference workers claim available sensors from GPFS using atomic file operations. If a node fails another picks up the sensor automatically. Model files in the form of .fbz binaries compiled for the AKD1000 live on GPFS as well. A training run produces a quantized model, writes it to the shared filesystem, and every inference node can load it immediately. No container registry. No deployment pipeline. The filesystem is the model store, the coordination layer, and the persistence tier in one.
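The atomic-claim pattern can be sketched with exclusive file creation. On a POSIX-coherent shared filesystem such as GPFS, `O_CREAT | O_EXCL` guarantees exactly one node wins. The directory layout and file naming are assumptions for illustration.

```python
import os

def try_claim_sensor(registry_dir, sensor_id, node_id):
    """Atomically claim a sensor by exclusively creating its claim file.

    Exactly one caller succeeds; everyone else sees FileExistsError.
    Registry layout and naming are illustrative, not the deployed schema.
    """
    path = os.path.join(registry_dir, f"{sensor_id}.claim")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another node already owns this sensor
    with os.fdopen(fd, "w") as f:
        f.write(node_id)  # record the owner for failover cleanup
    return True
```

Failover then reduces to deleting a dead node's claim files so surviving nodes can re-claim its sensors.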
Scale follows naturally. GPFS has been running at petabyte scale in HPC environments for decades. Adding nodes to the fleet means mounting the filesystem on the new hardware. The models, the sensor registry, and the observation history are already there. GPFS scales with the fleet.
Layer 4: Symphony Orchestration
IBM Spectrum Symphony manages the compute fleet through SOAM (Service-Oriented Architecture Middleware). Seven services run simultaneously across the cluster. There is no architectural reason the number could not be seventy or seven thousand. The resource allocator assigns service instances to nodes, restarts failed workers, and rebalances load. Adding a new sensor modality means registering a new service and enabling it. Removing one means disabling it. The service fleet expands and contracts as the mission requires. Symphony handles placement and prioritization.
Symphony also manages network-level concerns. Service instances can be constrained to specific network segments or set to failover between transport paths. The same orchestration layer that manages compute can manage connectivity, routing services to the best available transport tier when conditions degrade. The 25th Infantry Division described needing transport that is "self-sensing and self-determining." Symphony provides exactly that capability.
The emergence engine runs inside the Foundry interface service on every node. The engine answers a simple question: did an event actually happen? A single sensor observation is not evidence. A single observation is a data point. The emergence engine collects observations across modalities and time, deduplicates within a five-second window, and applies confirmation thresholds.
A vehicle classification requires three consistent observations before promotion to a confirmed event. A camera sees a semi truck. An acoustic classifier detects diesel engine signature. A BLE scanner picks up an electronic logging device beacon from the same vehicle. Three modalities produce one conclusion. The event is confirmed not because one sensor reported it but because multiple independent sensors agreed.
The engine also handles negative correlation. When the emergency RF classifier detects active dispatch traffic on county frequencies the signal is to hold rather than to confirm. The emergence engine suppresses all other confirmations for sixty seconds. If fire or EMS is responding to a location then a target event at the same coordinates is likely coincidence rather than a threat. The architecture automatically handles the kind of contextual judgment that would otherwise require an analyst watching multiple feeds.
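The confirmation, dedup, and hold logic described above can be condensed into a small sketch. The thresholds mirror the text (five-second dedup, three modalities to confirm, sixty-second hold); the data model and modality names are illustrative assumptions.

```python
from collections import defaultdict

DEDUP_WINDOW = 5.0        # seconds: duplicate suppression per modality
CONFIRM_MODALITIES = 3    # independent modalities required to confirm
SUPPRESS_SECONDS = 60.0   # hold window after emergency RF traffic

class EmergenceEngine:
    def __init__(self):
        self.seen = {}                     # (label, modality) -> last timestamp
        self.evidence = defaultdict(set)   # label -> modalities observed
        self.suppressed_until = 0.0

    def ingest(self, label, modality, timestamp):
        """Return True when an observation promotes to a confirmed event."""
        if modality == "emergency_rf":
            # Negative correlation: dispatch traffic holds all confirmations.
            self.suppressed_until = timestamp + SUPPRESS_SECONDS
            return False
        last = self.seen.get((label, modality))
        if last is not None and timestamp - last < DEDUP_WINDOW:
            return False  # duplicate within the dedup window
        self.seen[(label, modality)] = timestamp
        self.evidence[label].add(modality)
        if timestamp < self.suppressed_until:
            return False  # hold: likely coincidence with an emergency response
        return len(self.evidence[label]) >= CONFIRM_MODALITIES
```

For example, a camera, an acoustic classifier, and a BLE scanner reporting the same label within the window would promote on the third observation, while active dispatch traffic would hold all three.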
Layer 5: Palantir Foundry
Confirmed events project to Palantir Foundry through its REST API. The schema is not pre-defined. When the emergence engine confirms a classification Foundry has never seen the system creates a new dataset, defines the Object Type on an ontology branch, submits a proposal, and merges it. The ontology grows from observed data.
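A projection call might look like the sketch below. The endpoint URL, dataset shape, and field names are hypothetical placeholders for illustration, not Foundry's actual REST API; only the payload-shaping step is meant literally.

```python
import json
import urllib.request

# Placeholder endpoint; the real path depends on the Foundry deployment.
FOUNDRY_URL = "https://example.palantirfoundry.com/api/placeholder"

def build_event_payload(event):
    """Shape a confirmed event for projection into a Foundry dataset."""
    return {
        "label": event["label"],
        "confidence": event["confidence"],
        "modalities": sorted(event["modalities"]),
        "lat": event["lat"],
        "lon": event["lon"],
        "observedAt": event["timestamp"],
    }

def project_event(event, token):
    """POST a confirmed event; raises on a non-2xx response."""
    req = urllib.request.Request(
        FOUNDRY_URL,
        data=json.dumps(build_event_payload(event)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```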
Six dataset types are live. A cross-modal target confirmation transform in Foundry unions all event datasets, groups by a spatiotemporal window, and promotes clusters with multiple contributing modalities to confirmed targets. Link types connect each target back to its contributing sensor events for full provenance.
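The cross-modal transform can be approximated outside Foundry as a bucketing pass. The cell size, window, and field names below are illustrative assumptions standing in for the actual transform.

```python
from collections import defaultdict

def confirm_targets(events, cell_deg=0.001, window_s=30.0, min_modalities=2):
    """Group events into spatiotemporal buckets and promote cross-modal clusters.

    Simplified stand-in for the Foundry transform: bucket by rounded
    coordinates and a time window, then keep clusters with at least
    `min_modalities` contributing modalities.
    """
    buckets = defaultdict(list)
    for e in events:
        key = (round(e["lat"] / cell_deg), round(e["lon"] / cell_deg),
               int(e["timestamp"] // window_s))
        buckets[key].append(e)
    confirmed = []
    for key, group in buckets.items():
        modalities = {e["modality"] for e in group}
        if len(modalities) >= min_modalities:
            confirmed.append({"cell": key,
                              "modalities": sorted(modalities),
                              "events": group})  # provenance links
    return confirmed
```

Keeping the contributing events on each confirmed target is what gives the operator full provenance back to the sensors.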
The user interface reflects the non-extractive design. The operator is not presented with a wall of sensor feeds or thousands of unfiltered data points. The screen shows confirmed targets with clear confidence levels, threat profiles, contributing modalities, and classified satellite crops available on demand. Every piece of information on the screen has already been filtered by NPUs, confirmed by the emergence engine, and validated by Symphony before it arrives. The interface is calm because the architecture beneath it has already done the work. The decision is informed. The action is one click.
Where This Goes: Exponential Wisdom At Scale
As a potential enhancement, Symphony could also bridge LLM workloads into this architecture. The same SOAM framework that orchestrates neuromorphic inference can orchestrate vLLM services for semantic validation, RAG-based enrichment, or natural language summarization of confirmed events. IBM Granite or other models running on GPU nodes would validate that confirmed events are semantically plausible before reaching Foundry. The orchestration layer treats an LLM service instance the same way it treats an Akida inference worker: a managed, restartable, load-balanced unit of compute. The point is not automation alone. The point is extending human discernment. An LLM layer does not replace the analyst. An LLM layer gives the analyst a richer and more contextualized picture: natural language summaries of what the emergence engine confirmed, semantic connections across events that raw data would obscure, and explanations that make the underlying intelligence accessible to operators who were never trained on the sensor modalities producing it.
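The validation stage could be as simple as the sketch below. The prompt wording and the `generate` callable are assumptions; any vLLM- or Granite-served completion endpoint could fill that role.

```python
def build_validation_prompt(event):
    """Render a confirmed event as a plausibility question for the model."""
    return (
        "A multi-sensor system confirmed the following event. "
        "Answer PLAUSIBLE or IMPLAUSIBLE with one sentence of reasoning.\n"
        f"Label: {event['label']}\n"
        f"Modalities: {', '.join(event['modalities'])}\n"
        f"Confidence: {event['confidence']:.2f}"
    )

def semantically_valid(event, generate):
    """Gate a confirmed event on an LLM plausibility check before Foundry.

    `generate` is a placeholder for the model call: prompt in, text out.
    """
    reply = generate(build_validation_prompt(event))
    return reply.strip().upper().startswith("PLAUSIBLE")
```

Because the gate is just another managed service, Symphony can restart or rebalance it like any other worker.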
The architecture begins to compound at every layer. Each integration builds on the last. Each optimization at one layer improves the behavior of every layer above and below it. The neuromorphic classifiers get better with each training cycle. The emergence engine refines its thresholds from operational feedback. The LLM layer learns what kinds of summaries lead to faster decisions. Performance, resilience, and security posture are not installed as features; these qualities emerge from a vertically integrated stack refining itself under production conditions.
Exponential wisdom is the term for systemic knowledge that compounds through implementation over time. The value is not in any single component. The value is in the interaction between components that have been tested, refined, and tightened under real conditions. Because every layer in the stack scales horizontally, the compounding is not linear. Ten nodes learning from six modalities produce a certain level of insight. A hundred nodes learning from sixty modalities do not produce ten times the insight. A hundred nodes produce categories of understanding not possible at the smaller scale: pattern recognition across theaters, threat correlation across domains, predictive models trained on volumes of confirmed events no single installation could generate. The wisdom scales exponentially because the interactions between components scale exponentially.
Organizations seeing real returns from AI already understand the principle. Palantir, IBM, and BrainChip are not using AI to displace proven platforms; they are leveraging the accumulated wisdom of existing systems and compounding the value further. Using these platforms together compounds that value even more.
Each platform also brings capabilities that complement the others in ways not often considered. For example, Monte Carlo simulation is Symphony's bread and butter. The same fleet running sensor inference can run mission simulations with thousands of parallel trials exploring courses of action, terrain effects, or threat scenarios with real data in real time. The network that observes the battlefield can also war-game it.
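A toy version of such a trial loop is below. The probabilities and the success model are illustrative stand-ins; in the deployed system each trial would be a Symphony task scattered across the same fleet that runs sensor inference.

```python
import random

def simulate_course_of_action(trials=100_000, p_detect=0.82,
                              p_engage=0.64, seed=7):
    """Toy Monte Carlo: estimate mission success over independent trials.

    Success requires both a detection and a successful engagement; the
    probabilities are hypothetical inputs, not measured values.
    """
    rng = random.Random(seed)
    successes = sum(
        1 for _ in range(trials)
        if rng.random() < p_detect and rng.random() < p_engage
    )
    return successes / trials
```

With independent stages the estimate converges toward p_detect x p_engage, about 0.52 for these inputs; the value of the fleet is running many such scenarios in parallel against live data.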
Why This Scales
Ten nodes today. There is no architectural ceiling. Symphony manages service placement across any number of nodes, into the thousands. GPFS scales the shared storage. Akida chips use commodity M.2 cards that slot into standard hardware. Palantir Foundry handles ontology growth without manual schema intervention. Every layer in the stack was designed for horizontal scale. The difference between a ten-node demo and a ten-thousand-node deployment is hardware procurement rather than software redesign.
Data separation scales as well. More nodes means more sensors, but the security model does not change. Raw sensor data stays on the edge node. Classified imagery is accessed through the same narrow interface, AIX on PowerVS or an equivalent, regardless of fleet size. Only 128-byte observations and confirmed events cross the network. The attack surface does not grow with the fleet.
The full system: neuromorphic inference at the sensor, GPFS as the integration layer, Symphony for orchestration and LLM services, Foundry for ontology and targeting. Running now. Writing to Palantir. Built in five days. Scales to whatever the mission needs.