BRN Discussion Ongoing

schuey

Regular
SP holding up pretty well compared to what’s happening outside 😀
100% agree, but a snake's belly can't get lower...
 
  • Haha
Reactions: 1 user

Rach2512

Regular
EDGX



 
  • Fire
  • Like
Reactions: 5 users

Diogenese

Top 20
EDGX



Hi Rach,

I downloaded the EdgeX Sterna datasheet. It's Nvidia only, with Nvidia NPUs and no Akida, which explains the 10 to 40 watts.

https://www.edgx.space/product/sterna#downloads

[screenshots from the Sterna datasheet]
 
  • Like
  • Sad
  • Fire
Reactions: 10 users

Fiendish

Regular
  • Haha
Reactions: 1 user

Fiendish

Regular
  • Haha
  • Like
Reactions: 2 users

Diogenese

Top 20
Ogre's remorse:

[attached image]
 
  • Like
Reactions: 1 user
Curlednoodles


Curiouser and curiouser.
It would appear that all of Kevin's previous tests were actually precursors for today's build of a multi-modal sensor fusion edge system, which is likely what he was working on from the start.

This is where the earlier tests suddenly make sense:


Always-on inference (finance demo) → trust gating (voice recognition) → anomaly detection (cybersecurity) → orchestration (Symphony) → scaling across multiple Akida devices/nodes (~10) → multi-sensor fusion (today).


Individually, they looked like:

finance analytics

voice authentication

emotion recognition

cybersecurity

And they could absolutely function as standalone applications.


But when viewed together, they resemble a testbed for today’s system, bringing multiple sensing domains together:

video

audio

BLE device detection

RF spectrum sensing

satellite signals

All fused into a single platform, with Akida providing the inference layer and Symphony coordinating the system.
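
To make that architecture concrete, here's a minimal sketch of what such a fusion node could look like. Everything in it is hypothetical: the modality names and the akida_infer() placeholder are mine, not BrainChip's actual API. The point is the shape, with several independent sensor pipelines feeding one shared inference device.

```python
import queue
import threading
import time

# Hypothetical sketch of a multi-modal fusion node. Each sensing
# domain runs its own capture loop; all of them feed one consumer
# that owns the single shared accelerator.
events: "queue.Queue[tuple[str, int]]" = queue.Queue()

def sensor_pipeline(name: str) -> None:
    """Stand-in for a capture loop (video, audio, BLE, RF, satellite).
    Each pushes pre-processed frames into the shared queue."""
    for i in range(3):
        events.put((name, i))
        time.sleep(0.01)

def fusion_loop(n_events: int) -> None:
    """Single consumer: drains every modality and would hand each frame
    to the one shared accelerator (the device behind /dev/akida0)."""
    for _ in range(n_events):
        modality, frame = events.get()
        print(f"infer({modality}, frame {frame})")  # akida_infer() would go here

threads = [threading.Thread(target=sensor_pipeline, args=(m,))
           for m in ("video", "audio", "ble", "rf", "satellite")]
for t in threads:
    t.start()
fusion_loop(n_events=15)  # 5 modalities x 3 frames each
for t in threads:
    t.join()
```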


What once looked like separate applications now appears to be a single edge architecture assembled step by step, with each test validating the capabilities needed for a multi-modal sensor fusion system.

The inclusion of SDR spectrum sensing, LoRa, satellite signals, and Doppler-derived positioning is particularly telling, suggesting the system is intended to operate in GPS-constrained or disconnected environments — a capability rarely demonstrated in typical edge AI systems.
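
The Doppler point is just physics, not anything from the screenshots: a receiver that measures how a known satellite carrier frequency shifts as the satellite passes can recover its own position over successive measurements, which is how the old Transit system worked before GPS. A back-of-envelope sketch of the core arithmetic:

```python
# Standard Doppler arithmetic (textbook physics, not from the posts):
# the measured frequency shift gives the satellite's radial velocity
# relative to the receiver, the raw input to Doppler positioning.
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(f_observed: float, f_carrier: float) -> float:
    """Closing speed of the satellite, from the measured Doppler shift."""
    return C * (f_observed - f_carrier) / f_carrier

# Example: a 137 MHz downlink observed 3 kHz high implies the satellite
# is closing at roughly 6.6 km/s -- plausible for a LEO pass.
print(radial_velocity(137.003e6, 137.0e6))  # ~6564 m/s
```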


Another subtle and often missed clue appears in Kevin’s screenshots:
the presence of /dev/akida0 alongside orchestration and multiple sensor pipelines. In Linux, entries under /dev/ are device files managed by the kernel, available to any process with permission rather than private to one application.


This means the Akida chip isn’t being accessed as a one-off library call for a specific application.
Instead, it is exposed as a persistent, system-level compute device.
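
For anyone unfamiliar with why that matters, here's a small illustrative check, assuming /dev/akida0 is an ordinary character device node. Only the path comes from the screenshots; the stat/open calls below are standard POSIX, not any Akida-specific API.

```python
import os
import stat

DEV = "/dev/akida0"  # path as it appears in Kevin's screenshots

if os.path.exists(DEV):
    st = os.stat(DEV)
    if stat.S_ISCHR(st.st_mode):
        # A character device node: created by a kernel driver, owned by
        # the system, and openable by any process with permission.
        print(f"{DEV} major/minor: {os.major(st.st_rdev)}/{os.minor(st.st_rdev)}")
    fd = os.open(DEV, os.O_RDWR)  # a second application could do this too
    os.close(fd)
else:
    print(f"{DEV} not present (no Akida driver loaded on this machine)")
```

That shareability is the whole argument: a library loaded by one app dies with that app, while a device node sits there for every workload on the box.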


That is precisely how hardware behaves when it is intended to act as shared infrastructure for multiple workloads — exactly what you would want for a multi-modal sensor fusion node.


This leads to a more interesting conclusion:
Akida isn’t simply acting as an accelerator for a single task. It appears to be operating as the shared intelligence layer tying multiple sensing domains together. A substrate.

The latest multi-modal sensor fusion system brings those capabilities together into a single platform, making it feel less like a new experiment and more like the integration stage of something he has been building toward all along.
 
  • Fire
  • Love
  • Like
Reactions: 5 users