BRN Discussion Ongoing

Not only has Kevin created two entirely different Akida implementations, he has done it in no time flat. Obviously, having the datasets fresh from the oven was essential for adapting them into Akida models, but there are a hell of a lot of pre-baked datasets out there. The legacy market is an untapped gold mine.


Diogenese, can I ask your thoughts on Kevin continuing with the Akida 1500 and 2500?
How do you see this all playing out? Is the Akida 1000 all they need, or is it just the starting point to verify Akida's capabilities, and will Kevin now trial the others 🤔?
 

manny100

Top 20
From the Founders Letter for the Annual report:
" Our commercial and engineering teams are actively engaged with a growing number of customers across the automotive, industrial, consumer, aerospace, and defense sectors."
Setting up the future with a growing pipeline.
Anyone like to add IBM?
 
Kevin is well aware, as is his employer (IBM), of the potential trajectory all these testing and benchmarking results will have. Publishing via LinkedIn sends a very clear message: procrastinate and the bus will have left the station. We are in a very commanding position at the far edge. We have Intel, Nvidia and IBM as potential suitors. Who will it be? My money is, and always has been, on Jensen.

Can our company explode on our own? We are continually refining our technology and fine-tuning it to specific customers' requirements (demands). The commitment to IP will come, but I believe the company has finally come to the realization that the IP-only business model may well have passed its use-by date. Just ask ARM whether they have refocused, or are refocusing, their own business model to adjust to the times.

My only request is that the internal ex-employees stop trying to cause unrest. You had your opportunity; you failed. Brainchip has continued to move forward and the technology has been expanded upon. You know it, so swallow your pride, accept that you aren't the main players anymore, and just sit back and enjoy the success that is coming. Egos can not only damage our future but also reflect your true character.

Your early contributions have always been acknowledged... move on.
Come on Big Kev
Tell us the price
 

jrp173

Regular
I hadn’t considered that this could be paid marketing. Could it be? I’d be really disappointed if it was…

Pandaxxx, I personally don’t think this is paid marketing, but it raises an obvious question. Why is BrainChip so quiet about it?

Why aren’t they reposting this on their own LinkedIn page and other social media channels so potential customers can see the excitement, validation and enthusiasm coming from people within the industry?

Their investor relations are already poor and they show little regard for communicating with shareholders, but it makes even less sense from a business perspective. Why wouldn’t they want potential customers to see the demo and testing that Kevin from IBM has been doing?

This isn’t “ramping,” as Antonio loves to claim as an excuse for lack of IR. It’s simply sharing factual demo and test results. So why won’t they repost it?
 


Frangipani

Top 20


Here is today’s official press release:



Mar 5, 2026 9:00 AM Eastern Standard Time

BrainChip Announces Neuromorphyx as Strategic Customer and Go-to-Market Partner for AKD1500 Neuromorphic Processor​



LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world leader in ultra-low power, fully digital, event-based neuromorphic AI, today announced that Neuromorphyx™ (Nex Novus d.o.o.) has selected the Akida™ AKD1500 co-processor for evaluation and integration in its Vision NeuroNode™ edge-AI device.


This strategic engagement and partnership agreement expands BrainChip’s reach into defense, robotics and industrial edge sensing. By integrating the AKD1500 into Neuromorphyx’s rugged, modular architecture, the companies are enabling always-on intelligence where power, bandwidth and connectivity are constrained.

Enabling Scalable, Always-on Edge Intelligence
The AKD1500 will be integrated into Neuromorphyx’s modular NeuroBlocks™ architecture: SensorBlock™ (DVS/EVS), BridgeBlock™ (FPGA), BrainBlock™ (AKD1500) and InterfaceBlock™, forming a key compute element within the Vision NeuroNode™ device and its NeuroHive™ fleet orchestration platform. By leveraging BrainChip’s Akida™ technology, Neuromorphyx is building configurable edge nodes for real-time detection and tracking using event-based vision and other sensors such as audio, radar and IMU:
  • Ultra-low power, always-on inference: AKD1500 supports mission-critical detection pipelines within tight power budgets, operating at milliwatt power levels (under 300mW for high-performance tasks), enabling battery deployments matching the core NeuroNode feature of multi-year field operations on a single integrated battery.
  • Low-latency sparse processing: Paired with event-based vision sensors, NeuroNode™ can convert event streams into compact spatiotemporal tensors and run them on AKD1500, where Akida’s event-based execution exploits activation sparsity to reduce data movement and accelerate response.
  • Configurable deployment at scale: AKD1500’s 1 MB on-chip memory enables NeuroNode™ to run fully self-contained neuromorphic SNN models without external DRAM, reducing power, latency, and attack surface while enabling deterministic real-time response.
The Potential: Networked NeuroNodes at Scale
Neuromorphyx’s in-house manufacturing scalability, paired with the AKD1500’s competitive pricing, allows compact NeuroNodes to be deployed simply, even in the tens of thousands of units, forming networks that cover vast areas managed with NeuroHive™: a map-based platform for deploying, monitoring and updating these edge nodes across sites. The platform supports coordinated sensing, target analytics (including velocity and direction), and external API triggers for integration into existing command-and-control or industrial systems, while natively maintaining privacy and keeping processing at the edge.

"Our mission has always been to bring AI to the edge where it is most needed," said Sean Hehir, CEO of BrainChip. "Neuromorphyx is building a compelling, modular platform for deploying always-on intelligence in demanding environments. By combining event-driven sensing with Akida’s ultra-low power neuromorphic processing, we’re helping enable scalable edge AI for defense, industrial monitoring and autonomous systems."

About Neuromorphyx™ and Nex Novus d.o.o.
Neuromorphyx™ is a deep-tech hardware and embedded software company, spun out of Nex Novus d.o.o., focused on high-performance, energy-efficient edge AI for defense, robotics and industrial sensing. Its modular NeuroBlocks™ architecture enables configurable edge AI devices, NeuroNodes™, built around advanced sensors, including event-based vision, audio, radar, IMU and a choice of accelerator technologies. Neuromorphyx’s NeuroHive™ platform provides fleet management, orchestration and secure over-the-air model updates.

About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY):
BrainChip is the worldwide leader in Edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida™, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs with unmatched efficiency and energy economy. Explore more at www.brainchip.com.

Contacts​

Media Contact
Madeline Coe
prforbrainchip@bospar.com
224-433-9056
 

Diogenese

Top 20

Neuromorphyx lists Prophesee in its scroll:

https://www.neuromorphyx.com/neuronode

Brainchip Embedded World 2026​

Experience the Vision NeuroNode™’s tiny, compact and rugged form in person! Featuring the AKD1500 neuromorphic co-processor from BrainChip Inc., pushing the boundaries of high-performance, low-latency spiking neural networks (SNNs) for energy-efficient edge AI.


We announced a partnership with Prophesee in 2022, but we'd been flirting with them for a while before that:

Crwe World | BrainChip Partners with Prophesee Optimizing Computer Vision AI Performance and Efficiency

...
"We've successfully ported the data from Prophesee's neuromorphic-based camera sensor to process inference on Akida with impressive performance," said Anil Mankar, Co-Founder and CDO of BrainChip. "This combination of intelligent vision sensors with Akida's ability to process data with unparalleled efficiency, precision and economy of energy at the point of acquisition truly advances state-of-the-art AI enablement and offers manufacturers a ready-to-implement solution."

"By combining our Metavision solution with Akida-based IP, we are better able to deliver a complete high-performance and ultra-low power solution to OEMs looking to leverage edge-based visual technologies as part of their product offerings," said Luca Verre, CEO and co-founder of Prophesee.
 

itsol4605

Regular
Old Germany is waking up and has apparently discovered the term "Neuromorphic Computing".

Experience shows that from now on, Germany will only need 40 years until the first project has gone through the approval process.

 

MegaportX

Regular
Philip?
 


Frangipani

Top 20


Executive Decisions at the Edge: Non-Extractive Targeting Built in Five Days with Symphony, Akida, and Foundry​

Kevin D. Johnson

Field CTO – HPC, AI, LLM & Quantum Computing | Principal HPC Cloud Technical Specialist at IBM | Symphony • GPFS • LSF



March 5, 2026

Yesterday I posted about building a targeting system in five days with IBM Spectrum Symphony, BrainChip Akida, and Palantir Technologies AIP Foundry. Here is the architecture, how each layer works, what connects them, how data stays separated by classification level, and why the system scales.

An Executive Technology Platform​

Many sensor-to-C2 architectures are extractive. Extractive systems pull raw data from the edge, transport it to a central server, and filter it there. The 25th Infantry Division experienced the result when working with Palantir: thousands of data objects flooding the system with no way to control the flow. Filtering happens too late, after the data has already consumed bandwidth, compute, and analyst attention in transit. Running traditional inference at the edge is not the answer either. GPUs demand power, cooling, and infrastructure the tactical edge cannot provide. Neuromorphic processing changes the equation. Spiking neural networks on milliwatt silicon classify at the sensor itself, turning raw data into meaning before it ever touches a network. Extract-transform-load matters here. Even with enterprise systems, fully processing all the data that crosses a wire remains extremely rare. The question is where the reduction happens and how much meaning survives the cut.

The present system is non-extractive. Data stays local where it belongs and where it is most useful. Each layer processes, reduces, and passes only what the next layer needs. The design mirrors how military hierarchy actually works. A squad leader does not forward every observation to their brigade. The squad synthesizes, decides what is relevant, and reports up the chain. Platoons and companies do the same. Each echelon receives the level of detail appropriate for its command and control responsibility rather than a raw dump of everything below it.

Symphony is the interface and dynamic compute platform that enforces the hierarchy. Local sensor capability determines what data appears up the channel and in what form. Akida decides what is meaningful at the sensor. The emergence engine decides what is confirmed at the node. Symphony decides what is worth projecting to AIP Foundry. Foundry presents only confirmed targets, confidence levels, threat profiles, and satellite imagery when needed. The operator sees what is needed to make a decision and act. Each layer does its job so the next layer can do its job faster. The result is an executive technology platform that performs the way it should at every layer.

The Pipeline​

The system runs in five layers: sensor, satellite, shared storage, orchestration, and ontology. Data flows in one direction. Each layer reduces volume and increases meaning. Nothing classified crosses a boundary it should not cross.

Layer 1: Neuromorphic Inference​

BrainChip AKD1000 processors sit at the sensor edge. Each processor runs a trained spiking neural network purpose-built for its modality: visual classifiers on cameras, RF classifiers on software-defined radios, a BLE classifier scanning for commercial fleet beacons, and an acoustic classifier on audio feeds. Inference runs at sub-millisecond latency on milliwatts, making these processors easy to place in the field on battery power, solar, or any constrained platform. The output is a 128-byte observation record containing a classification label, confidence score, threat score, timestamp, sensor type, and source identifier.
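The 128-byte observation record described above can be sketched as a fixed-width `struct` layout. The field widths below (32-byte label, float confidence and threat scores, double timestamp, 16-byte sensor type, 64-byte source identifier) are an assumption chosen to total exactly 128 bytes, not the system's actual wire format:

```python
import struct
import time

# Assumed field layout totaling 128 bytes; "<" disables alignment padding.
RECORD_FMT = "<32sffd16s64s"              # label, confidence, threat, ts, sensor, source
RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 32 + 4 + 4 + 8 + 16 + 64 = 128

def pack_observation(label, confidence, threat, sensor_type, source_id):
    """Serialize one classification result into a fixed 128-byte record."""
    return struct.pack(RECORD_FMT,
                       label.encode()[:32],
                       confidence,
                       threat,
                       time.time(),             # timestamp at pack time
                       sensor_type.encode()[:16],
                       source_id.encode()[:64])
```

Fixed-width records make bandwidth planning trivial: a node emitting ten observations per second sends 1,280 bytes per second before IPsec overhead.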

Humans aren't removed from the loop here. Instead human effort is placed where effort is most valuable: training the model and getting it right. A well-trained spiking neural network encodes human judgment into silicon. The expertise that would otherwise be spent watching feeds and filtering noise is invested once, upfront, building a classifier that handles sensor data at machine speed with human-level discrimination. The human decides what matters. The chip enforces that decision at machine scale, millions of times a second.

Once the investment is made, raw sensor data never leaves the node. Camera footage stays on the edge node or is shredded after classification. RF samples stay or disappear. Audio follows the same pattern. The only thing that crosses the network is an encrypted 128-byte observation carrying meaning without revealing the underlying intelligence source. The edge network never transports imagery, audio, or signals intelligence. Observations travel over IPsec tunnels compliant with FIPS 140-3 whether the transport is HaLow RF mesh or standard gigabit Ethernet. An adversary intercepting edge traffic sees encrypted 128-byte classification records rather than exploitable sensor feeds.

Layer 2: Classified Satellite Imagery​

When a target is confirmed the operator may need visual verification. The system pulls a satellite image from a secure AIX instance running on IBM Power Virtual Server (PowerVS) via IBM Cloud. The image arrives at the edge node where the AKD1000 tiles it into overlapping regions and classifies each tile: overhead truck or background. Only tiles that pass the confidence threshold are saved as crops. The full image is transient on the edge node. The operator never sees it. What reaches the operator is a small tile containing only the target area with its classification and confidence score. The Akida chip decides what is relevant. The operator sees only what matters for the decision at hand. Classification separation is enforced at the inference layer: the raw satellite pass is consumed and discarded by the NPU. Only classified relevant fragments are presented or persisted.
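The tile-and-discard pattern can be sketched as follows; `classify` stands in for the AKD1000 inference call, and the tile and stride sizes are illustrative assumptions, not the system's actual parameters:

```python
def tile_origins(width, height, tile=224, stride=192):
    """Origins of overlapping tiles covering the full image, edges included."""
    def axis(n):
        pts = list(range(0, max(n - tile, 0) + 1, stride))
        if n > tile and pts[-1] != n - tile:
            pts.append(n - tile)      # cover the trailing edge exactly
        return pts
    return [(x, y) for y in axis(height) for x in axis(width)]

def relevant_crops(image, classify, threshold=0.9, tile=224, stride=192):
    """Classify each tile; keep only crops above threshold. The caller drops
    the full image afterward, so only target tiles ever persist."""
    h, w = len(image), len(image[0])
    crops = []
    for x, y in tile_origins(w, h, tile, stride):
        patch = [row[x:x + tile] for row in image[y:y + tile]]
        if (conf := classify(patch)) >= threshold:
            crops.append((x, y, conf))
    return crops
```

The overlap (stride smaller than tile) ensures a target straddling a tile boundary still falls fully inside at least one tile.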

The architecture protects more than the image. A full satellite pass reveals resolution, coverage area, revisit timing, and collection geometry. Exposing a full pass to an operator screen or a downstream network means exposing it to anyone who compromises that screen or that network. Letting the NPU consume the pass and discard everything outside the target tile means the wider dataset is never stored, never transmitted, and never available for exfiltration. The classified material is protected because it never persists in a form that could be captured. An adversary learns nothing about what the satellite can see. The adversary learns only that a specific target was identified.

Layer 3: GPFS as the Integration Layer​

IBM Storage Scale (GPFS) is a parallel filesystem most people associate with supercomputers. GPFS and Symphony were built and proven when the functional capability of high-performance hardware matched what edge computing affords today. The processing power that once required a data center now fits in a tactical node. Decades of battle-tested distributed systems software for filesystem coherence, atomic operations, and resource scheduling run natively at the edge without adaptation. Here GPFS serves a different role than it did in the supercomputer: the shared substrate that connects the edge fleet without requiring message brokers, artifact servers, or coordination services.
Observation records flush from local shared memory to GPFS for persistence. Edge nodes operate independently. Nodes do not need to communicate with each other to any great degree. Each node classifies, writes its observations, and Symphony handles what happens next. There is no pub/sub. There is no message queue. There is very little cross-node chatter.

GPFS can also store a sensor registry. Inference workers claim available sensors from GPFS using atomic file operations. If a node fails another picks up the sensor automatically. Model files in the form of .fbz binaries compiled for the AKD1000 live on GPFS as well. A training run produces a quantized model, writes it to the shared filesystem, and every inference node can load it immediately. No container registry. No deployment pipeline. The filesystem is the model store, the coordination layer, and the persistence tier in one.
Scale follows naturally. GPFS has been running at petabyte scale in HPC environments for decades. Adding nodes to the fleet means mounting the filesystem on the new hardware. The models, the sensor registry, and the observation history are already there. GPFS scales with the fleet.

Layer 4: Symphony Orchestration​

IBM Spectrum Symphony manages the compute fleet through SOAM (Service-Oriented Architecture Middleware). Seven services run simultaneously across the cluster. There is no architectural reason the number could not be seventy or seven thousand. The resource allocator assigns service instances to nodes, restarts failed workers, and rebalances load. Adding a new sensor modality means registering a new service and enabling it. Removing one means disabling it. The service fleet expands and contracts as the mission requires. Symphony handles placement and prioritization.

Symphony also manages network-level concerns. Service instances can be constrained to specific network segments or set to failover between transport paths. The same orchestration layer that manages compute can manage connectivity, routing services to the best available transport tier when conditions degrade. The 25th Infantry Division described needing transport that is "self-sensing and self-determining." Symphony provides exactly that capability.
The emergence engine runs inside the Foundry interface service on every node. The engine answers a simple question: did an event actually happen? A single sensor observation is not evidence. A single observation is a data point. The emergence engine collects observations across modalities and time, deduplicates within a five-second window, and applies confirmation thresholds.

A vehicle classification requires three consistent observations before promotion to a confirmed event. A camera sees a semi truck. An acoustic classifier detects diesel engine signature. A BLE scanner picks up an electronic logging device beacon from the same vehicle. Three modalities produce one conclusion. The event is confirmed not because one sensor reported it but because multiple independent sensors agreed.

The engine also handles negative correlation. When the emergency RF classifier detects active dispatch traffic on county frequencies the signal is to hold rather than to confirm. The emergence engine suppresses all other confirmations for sixty seconds. If fire or EMS is responding to a location then a target event at the same coordinates is likely coincidence rather than a threat. The architecture automatically handles the kind of contextual judgment that would otherwise require an analyst watching multiple feeds.
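A minimal sketch of this confirmation logic, using the thresholds stated in the text (five-second dedup window, three modalities, sixty-second hold); the class shape and modality names are otherwise illustrative:

```python
from collections import defaultdict

DEDUP_WINDOW = 5.0       # seconds: dedup and confirmation window
CONFIRM_COUNT = 3        # independent modalities required to confirm
SUPPRESS_SECONDS = 60.0  # hold window after emergency RF traffic

class EmergenceEngine:
    def __init__(self):
        self.obs = defaultdict(list)   # label -> [(timestamp, modality)]
        self.suppressed_until = 0.0

    def observe(self, label, modality, ts):
        """Record one observation; return a confirmed event dict or None."""
        if modality == "emergency_rf":
            # Negative correlation: hold all confirmations for 60 s.
            self.suppressed_until = ts + SUPPRESS_SECONDS
            return None
        seen = self.obs[label]
        # Deduplicate repeated reports from the same modality within 5 s.
        if any(m == modality and ts - t < DEDUP_WINDOW for t, m in seen):
            return None
        seen.append((ts, modality))
        recent = {m for t, m in seen if ts - t < DEDUP_WINDOW}
        if len(recent) >= CONFIRM_COUNT and ts >= self.suppressed_until:
            return {"event": label, "modalities": sorted(recent), "ts": ts}
        return None
```

The truck example from the text maps directly: a camera, an acoustic classifier, and a BLE scanner reporting the same label inside the window produces one confirmed event; an emergency RF hit beforehand suppresses it.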

Layer 5: Palantir Foundry​

Confirmed events project to Palantir Foundry through its REST API. The schema is not pre-defined. When the emergence engine confirms a classification Foundry has never seen the system creates a new dataset, defines the Object Type on an ontology branch, submits a proposal, and merges it. The ontology grows from observed data.
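A sketch of shaping a confirmed event for projection; the host, endpoint path, and payload shape below are placeholders standing in for Foundry's actual REST API, which the post does not document:

```python
import json

FOUNDRY_BASE = "https://foundry.example.com/api"   # placeholder host

def event_payload(event):
    """Shape a confirmed event as an illustrative ontology object row."""
    return {
        "objectType": f"ConfirmedTarget_{event['event']}",
        "properties": {
            "modalities": event["modalities"],
            "confirmedAt": event["ts"],
        },
    }

def projection_request(event, token):
    """Build the HTTP request tuple (url, headers, body) without sending it."""
    url = f"{FOUNDRY_BASE}/datasets/confirmed-targets/rows"
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    return url, headers, json.dumps(event_payload(event))
```

Deriving the object type name from the observed label is what lets the ontology grow from data: an unseen classification simply yields a new type proposal rather than a schema error.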

Six dataset types are live. A cross-modal target confirmation transform in Foundry unions all event datasets, groups by a spatiotemporal window, and promotes clusters with multiple contributing modalities to confirmed targets. Link types connect each target back to its contributing sensor events for full provenance.
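The transform's grouping logic can be sketched in plain Python rather than as a Foundry transform; the spatiotemporal window sizes are assumptions:

```python
TIME_BIN = 10.0   # seconds per temporal bucket (assumed)
GRID = 0.001      # degrees per spatial cell, roughly 100 m (assumed)

def confirm_targets(events, min_modalities=2):
    """events: dicts with ts, lat, lon, modality, label. Unions all events,
    buckets them by space, time, and label, and promotes clusters that
    multiple independent modalities contributed to."""
    clusters = {}
    for e in events:
        key = (round(e["lat"] / GRID), round(e["lon"] / GRID),
               int(e["ts"] // TIME_BIN), e["label"])
        clusters.setdefault(key, set()).add(e["modality"])
    return [{"label": k[3], "modalities": sorted(v)}
            for k, v in clusters.items() if len(v) >= min_modalities]
```

A lone observation, however confident, never survives this transform; only spatiotemporally coincident multi-modal agreement does.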

The user interface reflects the non-extractive design. The operator is not presented with a wall of sensor feeds or thousands of unfiltered data points. The screen shows confirmed targets with clear confidence levels, threat profiles, contributing modalities, and classified satellite crops available on demand. Every piece of information on the screen has already been filtered by NPUs, confirmed by the emergence engine, and validated by Symphony before it arrives. The interface is calm because the architecture beneath it has already done the work. The decision is informed. The action is one click.

Where This Goes: Exponential Wisdom At Scale​

As a potential enhancement Symphony could also bridge LLM workloads with this architecture. The same SOAM framework that orchestrates neuromorphic inference can orchestrate vLLM services for semantic validation, RAG-based enrichment, or natural language summarization of confirmed events. IBM Granite or other models running on GPU nodes validate that confirmed events are semantically plausible before reaching Foundry. The orchestration layer treats an LLM service instance the same way it treats an Akida inference worker: a managed, restartable, load-balanced unit of compute. The point is not automation alone. The point is extending the system for human discernment. An LLM layer does not replace the analyst. An LLM layer gives the analyst a richer and more contextualized picture through natural language summaries of what the emergence engine confirmed, semantic connections across events that raw data would obscure, and explanations that make the underlying intelligence accessible to operators who were never trained on the sensor modalities producing it.

The architecture begins to compound at every layer. Each integration builds on the last. Each optimization at one layer improves the behavior of every layer above and below it. The neuromorphic classifiers get better with each training cycle. The emergence engine refines its thresholds from operational feedback. The LLM layer learns what kinds of summaries lead to faster decisions. Performance, resilience, and security posture are not installed as features, these qualities emerge from a vertically integrated stack refining itself under production conditions.

Exponential wisdom is the term for systemic knowledge that compounds through implementation over time. The value is not in any single component. The value is in the interaction between components that have been tested, refined, and tightened under real conditions. Because every layer in the stack scales horizontally the compounding is not linear. Ten nodes learning from six modalities produce a certain level of insight. A hundred nodes learning from sixty modalities do not produce ten times the insight. A hundred nodes produce categories of understanding not possible at the smaller scale: pattern recognition across theaters, threat correlation across domains, predictive models trained on volumes of confirmed events no single installation could generate. The wisdom scales exponentially because the interactions between components scale exponentially.

Organizations seeing real returns from AI already understand the principle. Successful organizations like Palantir, IBM, and BrainChip are not using AI to displace proven platforms. Successful organizations are leveraging the accumulated wisdom of existing systems and compounding the value further. Using these platforms together only compounds that even more.
Each platform's unique strengths complement the others, enabling functions and capabilities not often considered together. For example, Monte Carlo simulation is Symphony's bread and butter. The same fleet running sensor inference can run mission simulations with thousands of parallel trials exploring courses of action, terrain effects, or threat scenarios with real data in real time. The network that observes the battlefield can also war-game it.
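A toy sketch of the Monte Carlo pattern: many independent trials of an invented scenario aggregated into a probability estimate. The scenario model is made up for illustration, and distributing the trials across a Symphony fleet is not shown:

```python
import random

def run_trial(rng, detect_p=0.8, hops=3):
    """One trial: does a target slip past `hops` independent detection chances?"""
    return all(rng.random() > detect_p for _ in range(hops))

def simulate(trials=10_000, seed=42, **kw):
    """Estimate the leak probability over many independent trials."""
    rng = random.Random(seed)
    leaks = sum(run_trial(rng, **kw) for _ in range(trials))
    return leaks / trials
```

With `detect_p = 0.8` and three independent detection chances, the analytic leak probability is 0.2³ = 0.008, which the estimate approaches as the trial count grows; each trial is embarrassingly parallel, which is why this workload maps naturally onto a scheduler like Symphony.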

Why This Scales​

Ten nodes today. There is no architectural ceiling. Symphony manages service placement across any number of nodes, thousands of them. GPFS scales the shared storage. Akida chips use commodity M.2 cards that slot into standard hardware. Palantir Foundry handles ontology growth without manual schema intervention. Every layer in the stack was designed for horizontal scale. The difference between a ten-node demo and a ten-thousand-node deployment is hardware procurement rather than software redesign.

Data separation scales as well. More nodes means more sensors but the security model does not change. Raw sensor data stays on the edge node. Classified imagery is accessed through the same narrow AIX or other interface regardless of fleet size. Only 128-byte observations and confirmed events cross the network. The attack surface does not grow with the fleet.

The full system: neuromorphic inference at the sensor, GPFS as the integration layer, Symphony for orchestration and LLM services, Foundry for ontology and targeting. Running now. Writing to Palantir. Built in five days. Scales to whatever the mission needs.
 
