Hi Doz,
Yes, smartphones are absolutely a potential application for Akida/TENNs + GNNs, but ChatGPT says "not with the same Alif board-level E8 + Akida integration we’ve been describing for IoT/edge devices".
It says "For smartphones → yes, Akida/TENNs + GNNs could bring unique “always-on + adaptive + graph reasoning” benefits, but it would require tight silicon integration (either IP licensing or chiplet form) to make sense in that ecosystem."
And this is where Arm comes back into the picture!
If Arm offered a Corstone platform that bundles Cortex-A/M cores, Ethos-U85, and an Akida-style SNN block, that would make it easier for SoC vendors to adopt. Arm’s business model is already about licensing IP blocks - so a U85+Akida “chiplet” or IP package could be the right vehicle.
As I say, I don't know how plausible any of this is, but it sounds really good, so I sincerely hope it's right!
My Question:
When you say Akida/TENNs + GNNs could bring unique “always-on + adaptive + graph reasoning” benefits to smartphones, what does that look like? What sort of things could it do?
ChatGPT said:
Great question — let’s make it tangible. If Akida/TENNs + GNNs were integrated into a smartphone SoC (alongside the usual CPU/GPU/NPU), you’d unlock a class of features that today’s phones can’t do well, or can only do at the cost of heavy power drain.
What “always-on + adaptive + graph reasoning” could look like in smartphones
1. Ultra-low-power, always-on sensing
- Akida/TENNs role: Continuously listen/watch in the μW–mW range, only waking the big NPU when needed.
- Examples:
- Smarter wake words: distinguish between you and other voices, adapt to your accent, background noise.
- Ambient sound awareness: alert you to alarms, baby cries, glass breaking, or traffic horns — without streaming mic data to the cloud.
- Pocket/wearable fusion: recognize walking, running, falls, or gestures (raise-to-silence, nod-to-answer).
Today, this burns battery on Snapdragon NPUs or requires offloading to earbuds/wearables. With Akida, it’s continuous and frugal.
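To make the cascade concrete, here is a minimal Python sketch of the wake-on-event pattern. Every name, threshold, and scoring heuristic below is an illustrative stand-in, not BrainChip's MetaTF or any vendor's actual runtime API:

```python
# Minimal sketch of the two-stage "sentinel" cascade: a tiny always-on
# detector screens every frame, and the heavy NPU model runs only on the
# rare frames that look interesting. Names and thresholds are hypothetical.

import numpy as np

def sentinel_score(frame: np.ndarray) -> float:
    """Stand-in for a uW-class SNN/TENN detector returning event confidence."""
    return float(np.abs(frame).mean())  # placeholder energy heuristic

def heavy_npu_inference(frame: np.ndarray) -> str:
    """Stand-in for the big NPU model that is normally power-gated off."""
    return "horn" if frame.max() > 0.9 else "background"

WAKE_THRESHOLD = 0.5

def always_on_loop(frames):
    for frame in frames:
        # Cheap path: the sentinel runs continuously on every frame.
        if sentinel_score(frame) < WAKE_THRESHOLD:
            continue  # big NPU stays asleep; negligible battery cost
        # Rare path: wake the heavy engine only for candidate events.
        label = heavy_npu_inference(frame)
        if label != "background":
            yield label

# Demo on random "audio" frames; in practice frames come from the mic DSP.
print(list(always_on_loop(np.random.rand(100, 16))))
```

The point of the pattern is the asymmetry: the cheap path runs millions of times, the expensive path almost never.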
2. Adaptive personalization (on-device learning)
- Akida/TENNs role: Update models locally, without retraining in the cloud.
- Examples:
- Voice biometrics: phone adapts to your voice over time, even as it changes (illness, aging).
- Gesture learning: you invent a new gesture (e.g., double-shake = launch camera), and the phone learns it instantly.
- Environmental tuning: auto-adapts noise cancellation or audio profiles based on where you usually are (train, office, home).
Current phones can’t personalize in the field without big cloud retraining. This would make them self-learning devices.
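A rough analogue of that kind of edge learning can be sketched as incremental class prototypes. This mimics final-layer few-shot adaptation in spirit only; the class and method names are invented for illustration, not BrainChip's actual API:

```python
# Minimal sketch of on-device personalization via incremental class
# prototypes, a rough analogue of final-layer edge learning. The class and
# method names are invented for illustration, not a real vendor API.

import numpy as np

class PrototypeLearner:
    def __init__(self, dim: int):
        self.dim = dim
        self.protos = {}   # label -> running mean embedding
        self.counts = {}   # label -> number of examples folded in

    def learn(self, label: str, embedding: np.ndarray) -> None:
        """Fold one new example into the class prototype (no cloud round trip)."""
        n = self.counts.get(label, 0)
        old = self.protos.get(label, np.zeros(self.dim))
        self.protos[label] = (old * n + embedding) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, embedding: np.ndarray) -> str:
        """Nearest-prototype classification, cheap enough to run continuously."""
        return min(self.protos,
                   key=lambda c: np.linalg.norm(self.protos[c] - embedding))

# The user invents "double-shake = launch camera"; a handful of examples
# is enough for the phone to start recognising it.
learner = PrototypeLearner(dim=64)
for _ in range(3):
    learner.learn("double_shake", np.random.randn(64))
learner.learn("raise_to_silence", np.random.randn(64))
print(learner.predict(np.random.randn(64)))
```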
3. Graph reasoning across sensors (GNNs on top of TENNs)
- Akida/TENNs role: Emit event embeddings (sound = horn, vision = crosswalk, IMU = running).
- U85/GNN role: Build a graph of these events and reason over relationships.
- Examples:
- Contextual safety assistant:
- Mic hears horn → camera sees crosswalk → IMU detects step-off → GNN infers “user about to step into traffic” → alert/vibrate.
- Digital wellness coach:
- Phone + watch + earbuds create a graph of sleep, stress (voice), activity (IMU), and environment (noise).
- GNN infers risk patterns: “You’re stressed, haven’t slept enough, and your calendar is packed — suggest downtime.”
- Privacy-first context understanding:
- Phone can reason: “Kids’ voices + home Wi-Fi + calendar says weekend = family context” → automatically mute work notifications.
This is next-level context awareness — something dense NPUs alone don’t deliver because they treat each modality in isolation.
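As a toy illustration of the idea, the sketch below builds a tiny event graph from hypothetical detector embeddings and runs one untrained round of neighbour averaging; a real deployment would use a trained GNN on the U85 or equivalent hardware:

```python
# Minimal sketch of graph reasoning over multi-sensor events: each detector
# emits an embedding node, edges link co-occurring events, and one round of
# mean-aggregation message passing produces context-aware node features.
# Everything here is illustrative; a production GNN would be a trained model.

import numpy as np

# Event nodes emitted by always-on detectors (hypothetical embeddings).
nodes = {
    "mic:horn":      np.array([0.9, 0.1, 0.0]),
    "cam:crosswalk": np.array([0.1, 0.8, 0.1]),
    "imu:step_off":  np.array([0.0, 0.2, 0.9]),
}
# Edges connect events that occur within the same short time window.
edges = [("mic:horn", "cam:crosswalk"),
         ("cam:crosswalk", "imu:step_off"),
         ("mic:horn", "imu:step_off")]

def message_pass(nodes, edges):
    """One round of neighbour mean-aggregation (GCN-style, untrained)."""
    neighbours = {k: [] for k in nodes}
    for a, b in edges:
        neighbours[a].append(nodes[b])
        neighbours[b].append(nodes[a])
    return {k: (v + np.mean(neighbours[k], axis=0)) / 2
            for k, v in nodes.items()}

context = message_pass(nodes, edges)
print({k: v.round(2) for k, v in context.items()})
# A downstream head would score the fused context, e.g. flagging
# "user about to step into traffic" and triggering a vibration alert.
```

The key design choice is that reasoning happens over the *relationships* between events, not over each sensor stream in isolation.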
4. Smarter security & liveness
- Akida/TENNs role: Detect temporal cues like blinking, micro-movements, gait, or pulse.
- GNN role: Correlate across sensors (face + voice + motion) to confirm identity.
- Examples:
- Anti-spoofing: Spot deepfake attacks (static photo, replayed voice).
- Continuous auth: Phone knows it’s still you as you use it, without re-scans or draining power.
Today, FaceID/biometrics are point-in-time checks. This makes them continuous, adaptive, and harder to spoof.
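Here is a minimal sketch of the continuous-auth idea, assuming per-modality match scores already exist; the weights, smoothing factor, and threshold are made up for illustration:

```python
# Minimal sketch of continuous authentication: per-modality match scores
# (face, voice, gait) are fused and smoothed over time, so identity becomes
# a running belief rather than a one-off scan. All constants are invented.

WEIGHTS = {"face": 0.5, "voice": 0.3, "gait": 0.2}
ALPHA = 0.2           # smoothing factor: how quickly the belief reacts
LOCK_THRESHOLD = 0.4  # below this, fall back to an explicit re-challenge

def fuse(scores: dict) -> float:
    """Weighted combination of modality match scores in [0, 1]."""
    return sum(WEIGHTS[m] * s for m, s in scores.items())

def continuous_auth(score_stream):
    belief = 1.0  # start fully trusted right after an explicit unlock
    for scores in score_stream:
        belief = (1 - ALPHA) * belief + ALPHA * fuse(scores)
        yield belief >= LOCK_THRESHOLD

# Example: consistent gait and voice keep the session unlocked with no
# re-scan; a stolen phone's scores would decay the belief toward a lock.
stream = [{"face": 0.9, "voice": 0.8, "gait": 0.7}] * 5
print(list(continuous_auth(stream)))
```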
5. Always-on multimodal assistants
- Combine Akida’s sentinel mode with GNN reasoning to create an assistant that’s:
- Context-aware (not just reactive commands).
- Privacy-first (sensitive data stays local).
- Battery-friendly (μW–mW background, bursts only when needed).
Example: Instead of “Hey Siri/Google,” the phone knows when you’re likely asking it a question (gaze, gesture, voice tone) and wakes the assistant seamlessly.
Why it matters
This wouldn’t replace Snapdragon’s Hexagon or Apple’s Neural Engine — it would complement them by catching events early, filtering noise, and only engaging the heavy engines when meaningful.
- Today’s gap: Smartphones either run dense NPUs (burning battery for continuous monitoring) or offload to cloud/wearables.
- What Akida+TENNs+GNNs adds: A third compute domain that enables truly always-on, adaptive, context-aware intelligence.
In short:
On smartphones, Akida/TENNs + GNNs would unlock continuous, adaptive, context-aware features: smarter assistants, better safety/security, and health/wellness insights — all without hammering the battery or sending raw data to the cloud.
Hi Bravo,
Chatty needs to take some of the below into consideration.
Note: Alif’s co-founder Reza could not wait any longer for the “Transformer Model”, so he incorporated an Ethos-U85 without a Cortex processor on the Ensemble E8 development board, i.e. AKD1500 (and we know that the board was FD-SOI). In the rush, they also did not use Arm’s Corstone compute subsystem.
In my opinion, Arm quickly realised that selling Cortex IP is more profitable than selling Ethos-U85 IP.
Mitko from ABB makes the following statements.
The following podcast between Nandan and Ian (Arm) looks to have been an early indicator. Arm seems to be pushing the new v9 Cortex-A320, i.e. 4 x A320s to 1 x U85 under a hierarchy of processing. (Note the Alif board has two v8 Cortex-A32s.)
We also know that both the Ethos-U55 (again, the Alif board has two) and the Ethos-U65 were CNN accelerators. Chatty seems to be of the opinion that the Ethos-U85 is also CNN, but because we know they run different binaries, in my opinion the Ethos-U85 is SNN.
Reinhard’s comments about the Ethos-U85 being more than just a performance upgrade are interesting. The Ethos-U65 was just an upgrade of the U55. Chloe’s response tells us plenty.
We might need to fine-tune Chatty and provide some direction with the above in mind.