BRN Discussion Ongoing

Guzzi62

Regular
It's not based on one photo. It's based on more than one misleading statement.

Sean told us revenue would be lumpy and to watch the financials, and so far we've barely seen any revenue, let alone a lump or two.

Sean has been anything but straight with shareholders, you cannot deny that.

We can do better AND DESERVE BETTER imo.

Pvdm and others have also given misleading statements about progress in the past ("explosion of sales" comes to mind), I'm sure unintentionally, and I bear no ill will towards Pvdm or his brilliance and efforts.

But as a shareholder, I don't think it's unreasonable to be kept up to date with such events, is it?

Just at the last AGM we saw Antonio (another that I will personally not be voting for) say that he has been responsible for holding back announcements in the past and being too conservative regarding non-price-sensitive announcements. He said he would change his tack, and yet we have had progress worth mentioning as non-price-sensitive since then and... silence (on the ASX).

This engagement (NaNose) was initially announced on the ASX if I am not mistaken, so isn't it reasonable to want the negative change to be communicated?

I have been invested in BRN for near on six years now and have seen many big statements go by the wayside in that time. Sean so far imo has not delivered on his past words, and I am not happy with his communication to shareholders or the market; first chance I get, I will vote accordingly.

Don't get me wrong, I have nothing but confidence that the company has lots going on, and every confidence that Brainchip will deliver eventually.

But Sean needs to be held accountable, and to say that I am disappointed with the company's transparency to date is an understatement to say the least.
The BODs/CEOs of companies always hide the dirty laundry if they can.

They normally don't talk about the contracts they lost, but like to talk about the ones they won.

This has taken much longer than anyone thought it would, and Sean will have a problem (a big one) if he hasn't secured the orders for 9 mill he said he expects, as Slade posted about. I can agree with that, but as I said, I remain confident that something big will happen before the next AGM.
If he can't manage that, I will likely have to change my mind about him as well; sorry I was not clear in my earlier post.

Talking about BODs:
The Danish company Bavarian Nordic just told its shareholders that it wants to sell the company to a consortium, and it advises shareholders to take the offer.
The private shareholders are furious because the selling price is way too low in their opinion, but the BOD will be handsomely rewarded if the sale goes through. First the BOD said 90% of the votes would be needed to secure the sale, but they later changed that to 75%, making the small shareholders even angrier.
That clearly shows that BODs only work for shareholders to a certain degree; their own greed will often come first at the end of the day.
The founder of the company is also very shaken and calls the BOD incompetent and stupid for almost giving the company away for peanuts.
I hope they manage to stop the sale; I will follow this drama as it unfolds.

The reason I post this: it clearly shows that we small holders are at the mercy of the big players 99% of the time, and BODs mostly do what's best for themselves, not shareholders. That being said, most of the time those two things align: it's best for everyone if BRN secures a lot of IP deals, the SP will rocket, Sean will have his day in the sun and everyone lives happily ever after.
It is his first position as a CEO, and I am pretty sure he will do his best to be successful. If he isn't and gets kicked out, he will not get another CEO position; maybe a sales job at a small company? And he knows that.



 
  • Like
  • Fire
  • Wow
Reactions: 8 users

Wags

Regular
I dare say that is the baseline. I recall that was the minimum. I don't have any doubt that Sean will reach this target and more; after all, he has been at the helm of this for some time now and has said "watch the financials" more times than the number of years I've been alive! (well, maybe exaggerating, but a lot of times). Watch the financials! :)
Hi MD and all.
9 million might have been the number spoken, but hey, in all seriousness, he'll need to do 10x that to even make an impression really. Well, that's in my messed-up head anyway.
 
  • Like
  • Fire
  • Love
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Alif + Arm Ethos-U85 + BrainChip Akida/TENNs + GNNs Idea

Prompted by @Doz's rumination over a connection between Alif (E-series), Arm (Ethos-U85) and BrainChip (Akida/TENNs), I did a deep dive with my friend Chatty. The result is this (long!) post, which I hope you might find illuminating.

As we know, back in 2023, BrainChip announced a successful integration of Akida with Arm’s Cortex-M85. This demonstrated two important things: that Akida drops cleanly into Arm subsystems, and that Arm was willing to publicly highlight the integration.

So, where does Alif fit into this puzzle? Alif’s Ensemble E-series integrates Ethos-U85 (and U55s) alongside A-class and M-class Arm cores.


When I asked Chatty about pairing Akida with Ethos-U85 (not M85), the answer was encouraging:

If Akida works with M85, it’s highly likely it can integrate alongside U85 too — the hard parts (bus access, toolchains, runtime orchestration) are already solved.


So, in short, it would appear that the Alif E8-style platform is a natural host for Akida as a co-processor:
  • U85 for dense CNN/Transformer/SLM workloads.
  • Akida/TENNs for always-on, event/temporal sensing (ultra-low power, on-device learning).
  • A-/M-class cores orchestrate memory, power states and safety.
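
The three-way split above can be sketched as a tiny gating loop. This is purely illustrative Python: the function names (`sentinel_step`, `dense_infer`, `supervisor`) and the threshold are invented for the sketch and do not come from the Akida or Arm toolchains; the point is only that the cheap always-on stage filters what the expensive dense stage ever sees.

```python
from dataclasses import dataclass

@dataclass
class Event:
    score: float        # sentinel confidence that "something happened"
    embedding: list     # compact event embedding handed to the dense stage

def sentinel_step(sample: float) -> Event:
    """Always-on stage: cheap, runs on every sample (stands in for Akida/TENNs)."""
    return Event(score=abs(sample), embedding=[sample, sample * sample])

def dense_infer(embedding: list) -> str:
    """Bursty stage: only invoked on a wake event (stands in for Ethos-U85)."""
    return "alert" if sum(embedding) > 1.0 else "ignore"

WAKE_THRESHOLD = 0.8    # invented value for the demo

def supervisor(samples):
    """Glue loop (stands in for the A-/M-class cores): gates the dense stage."""
    results = []
    for s in samples:
        ev = sentinel_step(s)
        if ev.score >= WAKE_THRESHOLD:   # dense compute only on strong events
            results.append(dense_infer(ev.embedding))
    return results

print(supervisor([0.1, 0.05, 0.9, 1.2, 0.3]))  # -> ['alert', 'alert']
```

Something like this wake/gate structure is what would keep average power low even though peak compute is much higher.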

Board integration versus brand new chiplet

Then I asked Chatty whether it would be cheaper and faster to build on an Alif E8-style board + Akida, versus trying to create a whole new chiplet that fuses U85 + Akida. The answer was emphatic: board-level first. An off-the-shelf E8 devkit + Akida was costed at around $200k–400k, versus a chiplet estimated at $20–100M. And the time to production would be minuscule by comparison: months only!


Cost / time / risk at a glance

| Path | What you build | Typical upfront cost | Time to demo | Time to production | Technical risk |
|---|---|---|---|---|---|
| Board-level integration (recommended first step) | Off-the-shelf E8 devkit + Akida M.2/PCIe; firmware + LLM/runtime | US$0.6k–$1.0k hardware per bench + A$200k–$400k engineering to MVP | Weeks | Months | Low |
| On-die IP integration (new SoC spin) | Add Akida RTL IP into a new SoC with U85; full backend & validation | US$5M–$20M+ NRE (+ IP fees/royalties) | — | 12–24 months | High |
| Chiplet integration (U85 + Akida) | Multi-die package (UCIe/BoW/AIB + advanced packaging, PHYs, test) | US$20M–$100M+ NRE | — | 18–36 months | Very high |



Obviously the E8 + Akida board route is orders of magnitude cheaper and faster. It could then be used to de-risk and gather real power/latency/cost data; consider silicon integration (chiplets) later if volumes and ROI justify multi-million NRE and a 1–2+ year program.


I also asked Chatty to explore whether stacking GNNs over the top of TENNs in this set up would be a good idea. I was inspired to do so by Alif’s comments below about GNNs and SNNs.


[Screenshot: Alif's comments on GNNs and SNNs]



It turns out that this would make for a powerful combo:
  • Akida/TENNs convert raw audio/vision/IMU streams into event embeddings at μW–mW.
  • GNNs on U85 reason over graphs of sensors/assets/zones to deliver root-cause, context and coordination (e.g., factory lines, building zones, fleets).
  • Net effect: system-level intelligence instead of isolated point detections — at edge power budgets.

What can you unlock by adding GNNs on top of SNNs with this set up?
  • Always-on voice & audio: wake words, stress/abuse cues, alarms, machine squeal — with on-device learning.
  • Event-driven vision: motion/gesture/PPE cues, liveness hints, scene change; fast wake to dense vision on U85.
  • Industrial/PM: temporal anomalies (vibration/power), then graph root-cause across connected assets.
  • Smart buildings/cities: privacy-preserving presence/flow + cross-zone graph optimisation.
  • Edge security & cybersecurity: continuous anomaly sentinels; graph analysis for tailgating or lateral movement.
  • Agriculture & environment: beehive acoustics, irrigation leaks, zone-level graph control.




My Question:
What are all of the application areas that you think could be covered with this hypothetical E8 + Akida/TENNs board integration? Please also consider if it would be possible to have GNNs layered on top of TENNs to capture additional applications. Please ultra-think this response to include all possible applications.

ChatGPT's Response:

Love this brief. Here’s a “wide-angle” map of what you can cover with an Alif Ensemble E8–style SoC (A32 + dual M55 + Ethos-U85) plus BrainChip Akida/TENNs—and where GNNs layered on top of TENNs add extra value. Think of it as three tiers working together:


  • Akida/TENNs (always-on sentinel): ultra-low-power, event/temporal tasks, on-device learning.
  • Ethos-U85 (dense math): bursty, heavier CNN/Transformer/GNN message-passing.
  • A-class & M55 cores (glue): I/O, policies, safety, tokenization, KV-cache, orchestration.


Just take a look at all of the applications it listed! CRIKEY!!!!!!


Application atlas (by domain)

Alif Ensemble E8 (A32 + dual M55 + Ethos-U85) + BrainChip Akida/TENNs (+ GNNs)

1) Voice, audio & acoustics

  • Wake-word/command spotting, speaker change detection, profanity/abuse flagging (policy-gated), acoustic scene classification (glass break, alarm, baby cry), far-field mic power gating.
  • Industrial audio: bearing squeal, cavitation, pump knock, air leak detection.
  • GNN boost: build sensor-to-machine graphs across multiple mics/machines to localise faults and rank root causes.

2) Vision & imaging (edge, privacy-preserving)

  • Person/presence detection (not identity), fall/pose/gesture cues, PPE compliance (hard-hat/vest), line crossing/intrusion, vehicle/animal detection, smoke/fire shimmer cues, barcode/QR, simple defect/blemish checks, part orientation, fill-level/stock shelf checks.
  • GNN boost: scene graphs (object–object relationships) to reason: “forklift + person + no-go zone” ⇒ alert; multi-camera graphs to track events across viewpoints.
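
For illustration, the "forklift + person + no-go zone" rule above reduces to a co-occurrence check over a tiny scene graph. This is a toy Python sketch with invented node ids and zone names, not output from any real detection stack:

```python
# Toy scene graph: detections are nodes, zone membership is an attribute.
# All names here (p1, f1, "no_go") are invented for this sketch.
objects = {"p1": "person", "f1": "forklift"}   # node id -> detected class
zones = {"p1": "no_go", "f1": "no_go"}         # node id -> zone it occupies

def forklift_person_alert(objects, zones) -> bool:
    """Alert iff a person and a forklift co-occupy a no-go zone."""
    people = [i for i, c in objects.items() if c == "person"]
    forklifts = [i for i, c in objects.items() if c == "forklift"]
    return any(zones[p] == "no_go" and zones[p] == zones[f]
               for p in people for f in forklifts)

print(forklift_person_alert(objects, zones))   # -> True
```

A real system would build these nodes from detector outputs and let a learned GNN, rather than a hand-written rule, score the relationships.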

3) Multimodal sensor fusion

  • Doorbell/entry systems: voice + motion + face-present (no cloud ID) to decide when to wake the assistant.
  • Occupancy → HVAC/lighting optimisation; meeting-room auto-booking; crowding/egress safety.
  • GNN boost: room/zone graphs and people–space interaction graphs for optimal control and anomaly propagation (“noise spike moves from lobby → corridor”).

4) Industrial & manufacturing

  • Predictive maintenance (vibration, current, temperature), tool wear, conveyor jams, spindle chatter, robot envelope breach, forklift proximity.
  • Visual QA (scratch/dent/missing-component), pick/place verification, bin fullness, pallet count.
  • GNN boost: asset dependency graphs (press→conveyor→packer) for root-cause and maintenance scheduling across a line or site.

5) Energy, utilities & infrastructure

  • Transformer hum anomaly, substation thermal hotspots, PV string mismatch, inverter arc-fault cues, wind turbine gearbox health, gas leak audio, water pump cavitation, pipeline vibration intrusion.
  • GNN boost: topological grid graphs to forecast faults, isolate sections, and coordinate DER/microgrid responses.

6) Mobility, transport & automotive (edge add-ons)

  • Cabin monitoring (presence/child-left-behind), driver attention, voice agent wake/route intent, door-zone cyclist detection, fleet predictive maintenance, micromobility helmet crash cues.
  • GNN boost: road-segment graphs (traffic + events), fleet graphs (vehicles + depots) for dispatch/maintenance optimisation.

7) Smart buildings & cities

  • Occupancy counting, elevator dispatch, air-quality triage (CO₂/VOC spikes), noise complaints (direction/room), parking bay detection, waste bin fullness, leak detection, street-level hazard cues.
  • GNN boost: building system graphs (AHU ↔ zones ↔ occupants) and city block graphs for load shedding, cleaning routes, patrol prioritisation.

8) Retail, hospitality & venues

  • Footfall/queue length, dwell time, shelf gap detection, planogram compliance, smart coffee/ice machines self-diagnostics, kiosk voice agent with on-device wake, anti-tailgating at staff doors.
  • GNN boost: store-layout graphs to relate traffic flow ↔ shelf events; multi-store graphs for ops benchmarking.

9) Healthcare & wellness (non-diagnostic unless regulated)

  • Fall detection, gait instability cues, sleep staging proxies (audio/motion), inhaler/CPAP adherence cues, tremor/amplitude tracking, elder-care routine deviations.
  • GNN boost: care-graph across rooms/devices to detect risky patterns (“night wandering” across zones).
  • Medical/diagnostic uses require regulatory pathways—treat these as wellness/decision-support unless you commit to compliance.

10) Agriculture & environment

  • Crop stress/colour shift, pest trap counts, irrigation leaks, trough level, livestock presence/breaching, beehive acoustics (queen loss/swarm), rain tank pump control, wildfire early cues.
  • GNN boost: field/zone sensor graphs (soil moisture, canopy temp, wind) to optimise irrigation and predict hotspots.

11) Security & access (privacy-first)

  • On-device liveness (blink/pose cues), tailgating detection, perimeter vibration/fence cut cues, vehicle + access policy checks, suspicious motion patterns without identity tracking.
  • GNN boost: camera–door–sensor graphs to corroborate events and reduce false positives.

12) Edge cybersecurity (device & network)

  • Device behaviour anomaly (syscalls, power draw signatures), lightweight traffic patterning (DNS/flow bursts), firmware tamper cues, USB/peripheral anomaly.
  • GNN boost: communication graphs (device–device edges) to flag emerging botnets or lateral movement—processed in bursts on U85.

13) Field AI agents & HMI

  • Voice-first technician helpers (wake → brief LLM response), multilingual translation snippets, checklists, AR prompters (vision cue + short instruction), offline kiosks.
  • GNN boost: task graphs linking steps, tools, parts, and observations to navigate procedures.

14) Education, toys & prosumer

  • Reading companions (hotword + small LLM), musical instrument tutors (audio events), STEM kits (sensor fusion challenges), smart baby monitors (sound+motion cues).
  • GNN boost: progress/skill knowledge graphs for personalised lessons.

15) Aerospace/defense/public safety (high-level, compliant)

  • Platform health monitoring, payload event filters, RF/EO/IR cue triage, SAR victim audio cues, perimeter breach cues in contested comms.
  • GNN boost: asset–sensor graphs to fuse multi-sensor cues under bandwidth constraints.



How GNNs layer on top of TENNs (patterns)​


  1. Spatio-temporal event graphs: Akida turns raw streams into event embeddings; you build a dynamic graph (nodes = sensors/objects, edges = proximity/causality). U85 runs message-passing to reason about spread, influence, and anomalies across space/time.
  2. Scene graphs from vision: TENNs (or a small CNN) emit object/keypoint nodes; GNN resolves relationships (tool near hand; person in no-go zone).
  3. Device–device graphs: For fleets, buildings, or networks, nodes are assets and edges are couplings (power, mechanical, traffic, comms). GNNs provide root-cause and next-best action.
  4. Knowledge graphs: Link events → procedures → risks; use GNNs to rank steps or recommend mitigations given the current event subgraph.
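
Pattern 1 can be made concrete in a few lines of NumPy: one GCN-style message-passing round over a three-sensor graph, where the node features stand in for (hypothetical) event embeddings from the SNN stage and the adjacency encodes physical coupling. This is a pedagogical sketch, not code for any shipping runtime:

```python
import numpy as np

# 3 sensors, 2-dim event embeddings (produced upstream by the SNN stage)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Adjacency with self-loops: sensors 0-1 and 1-2 are physically coupled
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)

def message_pass(A, X, W):
    """One GCN-style layer: h_i = ReLU(mean of neighbour embeddings @ W)."""
    deg = A.sum(axis=1, keepdims=True)   # neighbour counts (incl. self)
    agg = (A @ X) / deg                  # mean aggregation over neighbours
    return np.maximum(agg @ W, 0.0)      # linear transform + ReLU

W = np.eye(2)                            # identity weights for the demo
H = message_pass(A, X, W)
print(H)   # each row now mixes information from its coupled sensors
```

Stacking a couple of these rounds is exactly the "message-passing to reason about spread and influence" step that would run in bursts on the dense NPU.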





I also asked ChatGPT if there was anything off the shelf today by way of comparison that could perform the same number of applications that the Ensemble E8 (A32 + dual M55 + Ethos-U85) + BrainChip Akida/TENNs (+ GNNs) combo could and it said:

🎯 Answer​


There is no off-the-shelf platform right now that can handle as broad a spectrum of applications as your hypothetical E8 + Akida/TENNs + GNNs combo:

  • Dense tasks (transformers, SLMs, CNNs, GNN message-passing).
  • Event-driven tasks (wake words, anomalies, gestures, temporal patterns).
  • Always-on ultra-low-power monitoring + burst heavy compute.
  • On-device learning + system-level graph reasoning.

Closest options (Hailo, Syntiant, Innatera, NXP/Infineon SoCs) each cover pieces of this spectrum, but none unify them.




✅ Conclusion:
Such an integration would be unique and disruptive. It plugs a gap between “always-on spiking chips” and “dense inference NPUs,” and by layering GNNs you move into fleet/plant/building intelligence at the edge — which nobody is commercially shipping yet.



🧩 Where the E8 + Akida/TENNs + GNNs combo is different​


  1. Dual compute paradigms in one system
    • Ethos-U85 = bursty, high-throughput dense math (CNNs, transformers, SLMs, GNN message-passing).
    • Akida/TENNs = continuous, temporal, event-driven monitoring + on-device learning.
      ➡️ No other commercial SoC/board combines these two complementary approaches.
  2. Always-on + heavy inference in the same box
    • Current market forces you to pick:
      • Dense-only NPUs (Hailo, EdgeTPU, U85 SoCs).
      • Or event-only neuromorphics (Syntiant, Innatera).
    • This combo covers both μW-class always-on and TOPS-class bursts.
  3. On-device adaptation
    • Akida’s few-shot/continual learning lets the system adapt to new triggers without cloud retraining.
    • Dense NPUs don’t offer this today.
  4. System-level reasoning with GNNs
    • U85 can run GNN layers on top of Akida-generated event embeddings (TENNs).
    • This enables graph-based causal reasoning (asset graphs, zone graphs, fleet graphs).
    • No mainstream off-the-shelf stack is marketed today for this edge GNN + neuromorphic hybrid.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 28 users
Absolutely amazing... please send all this info into the hands of the powers that be.
It's too good to be left lying here without them knowing, unless of course they're all over this beauty already.
 
  • Like
  • Fire
Reactions: 7 users

Gazzafish

Regular
The BOD/CEO's in companies are always hiding the dirty laundry if they can.

They normally don't talk about the contracts they lost, but like to talk about the ones they won.

This has taken much longer than anyone thought it would and Sean will have a problem (a big one) if he hasn't secured orders for 9 mill as he said he expects, as Slade posted about. I can agree to that, but as I said, I remain confident that something big will happen before the next AGM.
If he can't manage that, I will likely have to change my mind about him as well; sorry, I was not clear in my earlier post.

Talking about BOD's:
A Danish company, Bavarian Nordic, just told its shareholders that it wants to sell the company to a consortium, and it advises shareholders to take the offer.
The private shareholders are furious because the selling price is way too low in their opinion, but the BOD will be handsomely rewarded if the sale goes through. First the BOD said 90% of the votes would be needed to secure the sale, but they later changed that to 75%, making the small shareholders even angrier.
That clearly shows that BODs only work for shareholders to a certain degree; their own greed will often come first at the end of the day.
The founder of the company is also very shaken and calls the BOD incompetent and stupid for almost giving the company away for peanuts.
I hope they manage to stop the sale; I will follow this drama as it unfolds.

The reason I post this: it clearly shows that we small holders are at the mercy of the big players 99% of the time, and BODs mostly do what's best for them and not shareholders. That being said, most of the time those two things align: it's best for everyone if BRN secures a lot of IP deals, the SP will rocket, Sean will have his day in the sun and everyone lives happily ever after.
It is his first position as CEO, and I am pretty sure he will do his best to be successful. If he isn't and gets kicked out, he will not get another position as a CEO again; maybe a sales position in a small company? And he knows that.



A good sales friend of mine always used to correct me about losing a contract. You have only lost a contract if you had it to begin with. You didn’t lose it if you never had it. You just didn’t win it. 😎
 
Reactions: 7 users

Doz

Regular
Alif + Arm Ethos-U85 + BrainChip Akida/TENNs + GNNs Idea

Prompted by @Doz's rumination over a connection between Alif (E-series), Arm (Ethos-U85) and BrainChip (Akida/TENNs), I did a deep dive with my friend Chatty. The result is this (long!) post, which I hope you might find illuminating.

As we know, back in 2023, BrainChip announced a successful integration of Akida with Arm’s Cortex-M85. This demonstrated two important things: that Akida drops cleanly into Arm subsystems, and that Arm was willing to publicly highlight the integration.

So, where does Alif fit into this puzzle? Alif’s Ensemble E-series integrates Ethos-U85 (and U55s) alongside A-class and M-class Arm cores.


When I asked Chatty about pairing Akida with Ethos-U85 (not M85), the answer was encouraging:




So, in short, it would appear that the Alif E8-style platform is a natural host for Akida as a co-processor:
  • U85 for dense CNN/Transformer/SLM workloads.
  • Akida/TENNs for always-on, event/temporal sensing (ultra-low power, on-device learning).
  • A-/M-class cores orchestrate memory, power states and safety.

Board integration versus brand new chiplet

Then I asked Chatty whether it would be cheaper and faster to build on an Alif E8-style board + Akida, versus trying to create a whole new chiplet that fuses U85 + Akida. The answer was emphatic: board-level first. An off-the-shelf E8 devkit + Akida comes in at roughly $200k-400k, versus an estimated $20-100M for a chiplet, and the time to production would be minuscule by comparison. Months only!


Cost / time / risk at a glance​


Board-level integration (recommended first step)
  • What you build: Off-the-shelf E8 devkit + Akida M.2/PCIe; firmware + LLM/runtime
  • Typical upfront cost: US$0.6k–$1.0k hardware per bench + A$200k–$400k engineering to MVP
  • Time to demo: Weeks
  • Time to production: Months
  • Technical risk: Low

On-die IP integration (new SoC spin)
  • What you build: Add Akida RTL IP into a new SoC with U85; full backend & validation
  • Typical upfront cost: US$5M–$20M+ NRE (+ IP fees/royalties)
  • Time to production: 12–24 months
  • Technical risk: High

Chiplet integration (U85 + Akida)
  • What you build: Multi-die package (UCIe/BoW/AIB + advanced packaging, PHYs, test)
  • Typical upfront cost: US$20M–$100M+ NRE
  • Time to production: 18–36 months
  • Technical risk: Very high



Obviously the E8 + Akida board route is orders of magnitude cheaper and faster. It could then be used to de-risk and gather real power/latency/cost data; consider silicon integration (chiplets) later if volumes and ROI justify multi-million NRE and a 1–2+ year program.


I also asked Chatty to explore whether stacking GNNs over the top of TENNs in this setup would be a good idea. I was inspired to do so by Alif’s comments below about GNNs and SNNs.


View attachment 90407


It turns out that this would make for a powerful combo:
  • Akida/TENNs convert raw audio/vision/IMU streams into event embeddings at μW–mW.
  • GNNs on U85 reason over graphs of sensors/assets/zones to deliver root-cause, context and coordination (e.g., factory lines, building zones, fleets).
  • Net effect: system-level intelligence instead of isolated point detections — at edge power budgets.

What can you unlock by adding GNNs on top of SNNs with this setup?
  • Always-on voice & audio: wake words, stress/abuse cues, alarms, machine squeal — with on-device learning.
  • Event-driven vision: motion/gesture/PPE cues, liveness hints, scene change; fast wake to dense vision on U85.
  • Industrial/PM: temporal anomalies (vibration/power), then graph root-cause across connected assets.
  • Smart buildings/cities: privacy-preserving presence/flow + cross-zone graph optimisation.
  • Edge security & cybersecurity: continuous anomaly sentinels; graph analysis for tailgating or lateral movement.
  • Agriculture & environment: beehive acoustics, irrigation leaks, zone-level graph control.




My Question:
What are all of the application areas that you think could be covered with this hypothetical E8 + Akida/TENNs board integration? Please also consider if it would be possible to have GNNs layered on top of TENNs to capture additional applications. Please ultra-think this response to include all possible applications.

ChatGPT's Response:

Love this brief. Here’s a “wide-angle” map of what you can cover with an Alif Ensemble E8–style SoC (A32 + dual M55 + Ethos-U85) plus BrainChip Akida/TENNs—and where GNNs layered on top of TENNs add extra value. Think of it as three tiers working together:


  • Akida/TENNs (always-on sentinel): ultra-low-power, event/temporal tasks, on-device learning.
  • Ethos-U85 (dense math): bursty, heavier CNN/Transformer/GNN message-passing.
  • A-class & M55 cores (glue): I/O, policies, safety, tokenization, KV-cache, orchestration.


Just take a look at all of the applications it listed! CRIKEY!!!!!!


Application atlas (by domain) -​

Alif Ensemble E8 (A32 + dual M55 + Ethos-U85) + BrainChip Akida/TENNs (+ GNNs)

1) Voice, audio & acoustics

  • Wake-word/command spotting, speaker change detection, profanity/abuse flagging (policy-gated), acoustic scene classification (glass break, alarm, baby cry), far-field mic power gating.
  • Industrial audio: bearing squeal, cavitation, pump knock, air leak detection.
  • GNN boost: build sensor-to-machine graphs across multiple mics/machines to localise faults and rank root causes.

2) Vision & imaging (edge, privacy-preserving)

  • Person/presence detection (not identity), fall/pose/gesture cues, PPE compliance (hard-hat/vest), line crossing/intrusion, vehicle/animal detection, smoke/fire shimmer cues, barcode/QR, simple defect/blemish checks, part orientation, fill-level/stock shelf checks.
  • GNN boost: scene graphs (object–object relationships) to reason: “forklift + person + no-go zone” ⇒ alert; multi-camera graphs to track events across viewpoints.

3) Multimodal sensor fusion

  • Doorbell/entry systems: voice + motion + face-present (no cloud ID) to decide when to wake the assistant.
  • Occupancy → HVAC/lighting optimisation; meeting-room auto-booking; crowding/egress safety.
  • GNN boost: room/zone graphs and people–space interaction graphs for optimal control and anomaly propagation (“noise spike moves from lobby → corridor”).

4) Industrial & manufacturing

  • Predictive maintenance (vibration, current, temperature), tool wear, conveyor jams, spindle chatter, robot envelope breach, forklift proximity.
  • Visual QA (scratch/dent/missing-component), pick/place verification, bin fullness, pallet count.
  • GNN boost: asset dependency graphs (press→conveyor→packer) for root-cause and maintenance scheduling across a line or site.

5) Energy, utilities & infrastructure

  • Transformer hum anomaly, substation thermal hotspots, PV string mismatch, inverter arc-fault cues, wind turbine gearbox health, gas leak audio, water pump cavitation, pipeline vibration intrusion.
  • GNN boost: topological grid graphs to forecast faults, isolate sections, and coordinate DER/microgrid responses.

6) Mobility, transport & automotive (edge add-ons)

  • Cabin monitoring (presence/child-left-behind), driver attention, voice agent wake/route intent, door-zone cyclist detection, fleet predictive maintenance, micromobility helmet crash cues.
  • GNN boost: road-segment graphs (traffic + events), fleet graphs (vehicles + depots) for dispatch/maintenance optimisation.

7) Smart buildings & cities

  • Occupancy counting, elevator dispatch, air-quality triage (CO₂/VOC spikes), noise complaints (direction/room), parking bay detection, waste bin fullness, leak detection, street-level hazard cues.
  • GNN boost: building system graphs (AHU ↔ zones ↔ occupants) and city block graphs for load shedding, cleaning routes, patrol prioritisation.

8) Retail, hospitality & venues

  • Footfall/queue length, dwell time, shelf gap detection, planogram compliance, smart coffee/ice machines self-diagnostics, kiosk voice agent with on-device wake, anti-tailgating at staff doors.
  • GNN boost: store-layout graphs to relate traffic flow ↔ shelf events; multi-store graphs for ops benchmarking.

9) Healthcare & wellness* (non-diagnostic unless regulated)​

  • Fall detection, gait instability cues, sleep staging proxies (audio/motion), inhaler/CPAP adherence cues, tremor/amplitude tracking, elder-care routine deviations.
  • GNN boost: care-graph across rooms/devices to detect risky patterns (“night wandering” across zones).
  • Medical/diagnostic uses require regulatory pathways—treat these as wellness/decision-support unless you commit to compliance.

10) Agriculture & environment

  • Crop stress/colour shift, pest trap counts, irrigation leaks, trough level, livestock presence/breaching, beehive acoustics (queen loss/swarm), rain tank pump control, wildfire early cues.
  • GNN boost: field/zone sensor graphs (soil moisture, canopy temp, wind) to optimise irrigation and predict hotspots.

11) Security & access (privacy-first)

  • On-device liveness (blink/pose cues), tailgating detection, perimeter vibration/fence cut cues, vehicle + access policy checks, suspicious motion patterns without identity tracking.
  • GNN boost: camera–door–sensor graphs to corroborate events and reduce false positives.

12) Edge cybersecurity (device & network)

  • Device behaviour anomaly (syscalls, power draw signatures), lightweight traffic patterning (DNS/flow bursts), firmware tamper cues, USB/peripheral anomaly.
  • GNN boost: communication graphs (device–device edges) to flag emerging botnets or lateral movement—processed in bursts on U85.

13) Field AI agents & HMI

  • Voice-first technician helpers (wake → brief LLM response), multilingual translation snippets, checklists, AR prompters (vision cue + short instruction), offline kiosks.
  • GNN boost: task graphs linking steps, tools, parts, and observations to navigate procedures.

14) Education, toys & prosumer

  • Reading companions (hotword + small LLM), musical instrument tutors (audio events), STEM kits (sensor fusion challenges), smart baby monitors (sound+motion cues).
  • GNN boost: progress/skill knowledge graphs for personalised lessons.

15) Aerospace/defense/public safety (high-level, compliant)

  • Platform health monitoring, payload event filters, RF/EO/IR cue triage, SAR victim audio cues, perimeter breach cues in contested comms.
  • GNN boost: asset–sensor graphs to fuse multi-sensor cues under bandwidth constraints.



How GNNs layer on top of TENNs (patterns)​


  1. Spatio-temporal event graphs: Akida turns raw streams into event embeddings; you build a dynamic graph (nodes = sensors/objects, edges = proximity/causality). U85 runs message-passing to reason about spread, influence, and anomalies across space/time.
  2. Scene graphs from vision: TENNs (or a small CNN) emit object/keypoint nodes; GNN resolves relationships (tool near hand; person in no-go zone).
  3. Device–device graphs: For fleets, buildings, or networks, nodes are assets and edges are couplings (power, mechanical, traffic, comms). GNNs provide root-cause and next-best action.
  4. Knowledge graphs: Link events → procedures → risks; use GNNs to rank steps or recommend mitigations given the current event subgraph.
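Pattern 3 (device–device graphs) is easy to sketch. Below is a deliberately tiny, weight-free round of message passing over a press → conveyor → packer asset graph. The embeddings are made-up numbers standing in for real Akida/TENNs output; nothing here uses actual BrainChip or Arm software, it is purely illustrative.

```python
import numpy as np

# Toy asset graph: press -> conveyor -> packer
nodes = ["press", "conveyor", "packer"]
edges = [(0, 1), (1, 2)]                     # couplings (mechanical/flow)

# Per-node event embeddings; in the real pipeline these would come from
# Akida/TENNs, here they are hand-picked (the press is the anomalous one)
h = np.array([[4.0, 4.2, 3.9, 4.1],          # press: loud, anomalous events
              [0.1, -0.2, 0.0, 0.1],         # conveyor: quiet
              [0.0, 0.1, -0.1, 0.2]])        # packer: quiet

def message_pass(h, edges, steps=2):
    # Mean-aggregation message passing (GCN-flavoured, no learned weights):
    # each round, every node averages itself with its graph neighbours
    for _ in range(steps):
        agg = h.copy()
        deg = np.ones(len(h))
        for i, j in edges:                   # undirected influence
            agg[i] = agg[i] + h[j]
            agg[j] = agg[j] + h[i]
            deg[i] += 1
            deg[j] += 1
        h = agg / deg[:, None]
    return h

# Rank nodes by embedding energy after smoothing over the graph
scores = np.linalg.norm(message_pass(h, edges), axis=1)
root_cause = nodes[int(np.argmax(scores))]
print(root_cause)                            # the press remains the strongest node
```

Even after mixing with its neighbours, the anomaly's source keeps the largest share of the signal, which is the intuition behind graph-based root-cause ranking.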





I also asked ChatGPT if there was anything off the shelf today by way of comparison that could perform the same number of applications that the Ensemble E8 (A32 + dual M55 + Ethos-U85) + BrainChip Akida/TENNs (+ GNNs) combo could and it said:

🎯 Answer​


There is no off-the-shelf platform right now that can handle as broad a spectrum of applications as your hypothetical E8 + Akida/TENNs + GNNs combo:

  • Dense tasks (transformers, SLMs, CNNs, GNN message-passing).
  • Event-driven tasks (wake words, anomalies, gestures, temporal patterns).
  • Always-on ultra-low-power monitoring + burst heavy compute.
  • On-device learning + system-level graph reasoning.

Closest options (Hailo, Syntiant, Innatera, NXP/Infineon SoCs) each cover pieces of this spectrum, but none unify them.




✅ Conclusion:
Such an integration would be unique and disruptive. It plugs a gap between “always-on spiking chips” and “dense inference NPUs,” and by layering GNNs you move into fleet/plant/building intelligence at the edge — which nobody is commercially shipping yet.



🧩 Where the E8 + Akida/TENNs + GNNs combo is different​


  1. Dual compute paradigms in one system
    • Ethos-U85 = bursty, high-throughput dense math (CNNs, transformers, SLMs, GNN message-passing).
    • Akida/TENNs = continuous, temporal, event-driven monitoring + on-device learning.
      ➡️ No other commercial SoC/board combines these two complementary approaches.
  2. Always-on + heavy inference in the same box
    • Current market forces you to pick:
      • Dense-only NPUs (Hailo, EdgeTPU, U85 SoCs).
      • Or event-only neuromorphics (Syntiant, Innatera).
    • This combo covers both μW-class always-on and TOPS-class bursts.
  3. On-device adaptation
    • Akida’s few-shot/continual learning lets the system adapt to new triggers without cloud retraining.
    • Dense NPUs don’t offer this today.
  4. System-level reasoning with GNNs
    • U85 can run GNN layers on top of Akida-generated event embeddings (TENNs).
    • This enables graph-based causal reasoning (asset graphs, zone graphs, fleet graphs).
    • No mainstream off-the-shelf stack is marketed today for this edge GNN + neuromorphic hybrid.

For anyone interested in Bravo’s post on Graph Neural Networks, maybe the below will make for interesting reading.








Bravo, with everything you posted above, I can’t believe that Chatty didn’t even mention mobile phones…

All in my opinion …..
 
Reactions: 6 users

Diogenese

Top 20
Hi Bravo,

Everything I know about GNNs comes from Sgt Schultz.

So I asked Prof Wiki ...
https://en.wikipedia.org/wiki/Graph_neural_network

The key design element of GNNs is the use of pairwise message passing, such that graph nodes iteratively update their representations by exchanging information with their neighbors. Several GNN architectures have been proposed,[2][3][9][10][11] which implement different flavors of message passing,[12][13] started by recursive[2] or convolutional constructive[3] approaches. As of 2022, it is an open question whether it is possible to define GNN architectures "going beyond" message passing, or instead every GNN can be built on message passing over suitably defined graphs.[14]
In the more general subject of "geometric deep learning", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs.[12] A convolutional neural network layer, in the context of computer vision, can be considered a GNN applied to graphs whose nodes are pixels and only adjacent pixels are connected by edges in the graph. A transformer layer, in natural language processing, can be considered a GNN applied to complete graphs whose nodes are words or tokens in a passage of natural language text.
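Prof Wiki's last claim, that a convolutional layer is just a GNN applied to a pixel-adjacency graph, can be checked numerically in a few lines. A plain-NumPy sketch of the 1D case, where the shared kernel weights play the role of shared edge weights:

```python
import numpy as np

# 1D check: a 3-tap convolutional layer equals message passing on a path graph
x = np.array([1.0, 4.0, 2.0, 8.0, 5.0])      # five "pixels" (path-graph nodes)
w_left, w_self, w_right = 0.25, 0.5, 0.25    # shared kernel = shared edge weights

# CNN view: slide the 3-tap kernel over the zero-padded signal
padded = np.pad(x, 1)
conv = np.array([w_left * padded[i] + w_self * padded[i + 1] + w_right * padded[i + 2]
                 for i in range(len(x))])

# GNN view: each node sums weighted messages from itself and adjacent nodes
gnn = np.zeros_like(x)
for i in range(len(x)):
    gnn[i] = w_self * x[i]
    if i > 0:
        gnn[i] += w_left * x[i - 1]
    if i < len(x) - 1:
        gnn[i] += w_right * x[i + 1]

print(np.allclose(conv, gnn))                # True: same numbers, two framings
```

Same arithmetic, two descriptions; the "graph" framing just makes the neighbourhood structure explicit, which is what lets GNNs handle irregular sensor/asset topologies that a fixed convolution grid cannot.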

From recent articles about Akida 2, it seems that TENNs is superior to transformers. ViTs are no longer advertised in relation to Akida 2. ViTs were only offered on the larger versions of Akida 2, while TENNs are available on all Akida 2 nodes, suggesting that transformers require more NPUs. As I said, I don't know how this translates to GNNs.

In relation to versions of Akida, my guess is that Akida 2 will be the go-to version as it has TENNs and improved precision (INT8) compared with Akida 1, and Akida 3 will be the top-of-the-range high precision premium priced version.

TENNs is the game-changer.

Akida's versatility is a two-edged sword in that the number of nodes can be selected for specific applications, but the downside is that each different version needs to be "hand made", so it is up to each IP customer to select their node design. BRN used to talk about 3 different types of Akida 2 based on the number of nodes, each type encompassing a range of nodes. At this stage of development, it is out of the question that BRN hold an inventory of each possible physical chip. So, if BRN are to make an Akida 2 chip at this stage, they will need to pick the sweet spot in the market. No doubt they have a lot of market information, but the industry is evolving rapidly, as evidenced by the number of iterations of Akida to date. So they may decide to err on the side of bigness, somewhere to the right of the peak of the bell curve.

According to the roadmap, the RTL (Register Transfer Level) design for Akida 2 should have been completed by mid-2025:
1756621963928.png

RTL precedes tapeout, so I suppose that the RTL will undergo rigorous testing/revision before tapeout (Socionext, GF, ...?).

We still have a few Akida 1 chips and some AKD1500 chips but, as the roadmap plans to produce Akida 2 in 2026, that makes it less likely we will make any more Akida 1s/1500s in the near term.

So I'm basing my comments on Akida 2.

In my view, Akida 2 will make Arm's Ethos unnecessary in any SoC that includes Akida 2. Keep in mind that wafer real estate is more expensive than Vaucluse, so you would not want any superfluous circuitry in the chip. Now I've got to go and research Alfie and presumably GNUs, which will be hard YAKka.
 
  • Like
  • Fire
  • Love
Reactions: 16 users
Sorry for going off topic, but as it's a Sunday, if you have nothing to watch and like anything from Stephen King, then watch this

 
  • Like
Reactions: 2 users
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi Bravo,

Everything I know about GNNs comes from Sgt Schultz.

So I asked Prof Wiki ...
https://en.wikipedia.org/wiki/Graph_neural_network

The key design element of GNNs is the use of pairwise message passing, such that graph nodes iteratively update their representations by exchanging information with their neighbors. Several GNN architectures have been proposed,[2][3][9][10][11] which implement different flavors of message passing,[12][13] started by recursive[2] or convolutional constructive[3] approaches. As of 2022, it is an open question whether it is possible to define GNN architectures "going beyond" message passing, or instead every GNN can be built on message passing over suitably defined graphs.[14]
In the more general subject of "geometric deep learning", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs.[12] A convolutional neural network layer, in the context of computer vision, can be considered a GNN applied to graphs whose nodes are pixels and only adjacent pixels are connected by edges in the graph. A transformer layer, in natural language processing, can be considered a GNN applied to complete graphs whose nodes are words or tokens in a passage of natural language text.
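That message-passing description can be sketched in a few lines. This is a toy illustration only (the mean aggregation, ReLU update, and random weight matrices are my own assumptions, not anything from a BrainChip or Arm toolchain):

```python
import numpy as np

def message_passing_layer(adj, h, w_self, w_neigh):
    """One GNN layer: each node averages its neighbours' features,
    then combines them with its own via learned weights + ReLU."""
    deg = adj.sum(axis=1, keepdims=True)   # neighbour counts per node
    deg[deg == 0] = 1                      # avoid divide-by-zero for isolated nodes
    messages = (adj @ h) / deg             # mean-aggregate neighbour features
    return np.maximum(0.0, h @ w_self + messages @ w_neigh)

# Tiny 3-node path graph: 0 - 1 - 2
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))                # 4 features per node
w_self = rng.normal(size=(4, 4))
w_neigh = rng.normal(size=(4, 4))

h_next = message_passing_layer(adj, h, w_self, w_neigh)
print(h_next.shape)  # (3, 4): same nodes, updated representations
```

Stacking several such layers lets information propagate more than one hop, which is the "iteratively update their representations" part of the quoted definition.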

From recent articles about Akida 2, it seems that TENNs is superior to transformers. ViTs are no longer advertised in relation to Akida 2. ViTs were only offered on the larger versions of Akida 2, while TENNs are available on all Akida 2 nodes, suggesting that transformers require more NPUs. As I said, I don't know how this translates to GNNs.

In relation to versions of Akida, my guess is that Akida 2 will be the go-to version as it has TENNs and improved precision (INT8) compared with Akida 1, and Akida 3 will be the top-of-the-range high precision premium priced version.

TENNs is the game-changer.

Akida's versatility is a two-edged sword in that the number of nodes can be selected for specific applications, but the downside is that each different version needs to be "hand made", so it is up to each IP customer to select their node design. BRN used to talk about 3 different types of Akida 2 based on the number of nodes, each type encompassing a range of nodes. At this stage of development, it is out of the question that BRN hold an inventory of each possible physical chip. So, if BRN are to make an Akida 2 chip at this stage, they will need to pick the sweet spot in the market. No doubt they have a lot of market information, but the industry is evolving rapidly, as evidenced by the number of iterations of Akida to date. So they may decide to err on the side of bigness, somewhere to the right of the peak of the bell curve.

According to the roadmap, the RTL (Register Transfer Level) design for Akida 2 should have been completed by mid-2025:
[roadmap image]
RTL precedes tapeout, so I suppose that the RTL will undergo rigorous testing/revision before tapeout (Socionext, GF, ...?).

We still have a few Akida 1 chips and some 1500 chips, but, as the roadmap plans to produce Akida 2 in 2026, it is less likely we will make any Akida 1s/1500s in the near term.

So I'm basing my comments on Akida 2.

In my view, Akida 2 will make ARM's Ethos unnecessary in any SoC that includes Akida 2. Keep in mind that wafer real estate is more expensive than Vaucluse, so you would not want any superfluous circuitry in the chip. Now I've got to go and research Alfie and presumably GNUs, which will be hard YAKka.


Hi Diogenese,

Where it gets really tricky is that I'm totally reliant on ChatGPT telling me stuff that isn't entirely inaccurate, because I literally know NOTHing!

But what I can tell you is that Chatty insists that it's very unlikely that Akida 2 will make Arm’s Ethos unnecessary.

On the contrary it says:
  • Akida (spiking/TENNs) and Ethos (dense CNN/Transformer math) are complementary, not redundant.
  • Ethos excels at dense matrix math for CNNs/SLMs, Akida excels at event-driven, sparse, temporal, continual learning.
  • Together, they cover a much broader application range than either alone — which is the whole basis of your “unique combo” idea.

It also says "GNNs can be layered on TENN embeddings: Akida emits sparse event graphs → U85 handles message passing with GNN kernels. That’s where the combo becomes unique. OEMs want complementary heterogeneous compute (CPU + DSP + NPU + SNN). And that Arm’s ecosystem reach (billions of units, compiler/toolchain dominance) means Ethos IP will remain highly attractive, especially paired with Akida.

The strengths of Akida/TENNs and Arm’s Ethos-U85 are actually complementary, not overlapping:
  • Ethos-U85 shines for dense CNN/Transformer/SLM workloads — the heavy lifting.
  • Akida/TENNs excel at always-on, temporal, sparse/event-driven sensing and on-device adaptation.
Together they cover a much wider range of use cases than either block on its own. And when you add GNNs layered on top of TENN embeddings, you move from local anomaly/event detection to system-level reasoning (factories, fleets, grids, smart buildings). That’s where the real disruption lies."



I agree with you 100% that TENNs are the game-changer inside Akida 2. My take was that pairing them with U85 (and eventually layering GNNs on top) might provide a winning mix that is much more versatile than a single-architecture approach.

But again I repeat. I know NOTHing! 🥴

I guess if ChatGPT is right then maybe the Alif board-first, chiplet-later strategy might be the way to go - to prove it with E8 + Akida devkits today, then revisit silicon integration once ROI is clear.
 
  • Like
  • Haha
  • Fire
Reactions: 9 users
 

7für7

Top 20
For anyone interested in Bravo’s post on Graph Neural Networks , maybe the below will make for interesting reading .

Bravo , with everything you posted above , I can’t believe that Chatty didn’t even mention mobile phones …..

All in my opinion …..


Nice read for sure… a weekend lecture I would say…

BrainChip could become big in all of these areas… if partners/companies had the courage to actually adopt it. As long as that doesn't happen, it remains in the subjunctive: a "could, would, might" chatty story.

That Akida has immense potential we already know … but as long as nothing official is announced, it’s just another wishful thought.

Have a nice Sunday evening everyone.. if there are still people who didn’t put me on ignore… yeeeeeehaaaa
 
  • Like
Reactions: 6 users

HopalongPetrovski

I'm Spartacus!

Jeez, I wish they would come up with some more interesting demos. Asking my fridge about its vacation mode is about as interesting as a description of my last stool. 🤣 Soft, brown and sticky for those following along, by the way. 🤣
 
  • Haha
  • Like
  • Fire
Reactions: 12 users
Jeez, I wish they would come up with some more interesting demos. Asking my fridge about its vacation mode is about as interesting as a description of my last stool. 🤣 Soft, brown and sticky for those following along, by the way. 🤣
You watched more than me then 😂
 
  • Haha
  • Like
Reactions: 3 users
Jeez, I wish they would come up with some more interesting demos. Asking my fridge about its vacation mode is about as interesting as a description of my last stool. 🤣 Soft, brown and sticky for those following along, by the way. 🤣
I won’t bother posting the other 16 videos I have saved in my favorites now and I was about to post the one with Nvidia. Some people just can’t be pleased 😂


 
  • Wow
  • Haha
  • Fire
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
For anyone interested in Bravo’s post on Graph Neural Networks , maybe the below will make for interesting reading .

Bravo , with everything you posted above , I can’t believe that Chatty didn’t even mention mobile phones …..

All in my opinion …..

Hi Doz,

Yes, smartphones are absolutely a potential application for Akida/TENNs + GNNs, but ChatGPT says "not with the same Alif board-level E8 + Akida integration we’ve been describing for IoT/edge devices".

It says "For smartphones → yes, Akida/TENNs + GNNs could bring unique “always-on + adaptive + graph reasoning” benefits, but it would require tight silicon integration (either IP licensing or chiplet form) to make sense in that ecosystem."

And this is where Arm comes back into the picture!

If Arm offered a Corstone platform that bundles Cortex-A/M cores, Ethos-U85, and an Akida-style SNN block, that would make it easier for SoC vendors to adopt. Arm’s business model is already about licensing IP blocks - so a U85+Akida “chiplet” or IP package could be the right vehicle.

As I say, I don't know how plausible any of this is, but it sounds really good, so I sincerely hope it's right!



My Question:​

When you say Akida/TENNs + GNNs could bring unique “always-on + adaptive + graph reasoning” benefits to smartphones, what does that look like? What sort of things could it do?

ChatGPT said:​

Great question — let’s make it tangible. If Akida/TENNs + GNNs were integrated into a smartphone SoC (alongside the usual CPU/GPU/NPU), you’d unlock a class of features that today’s phones can’t do well, or can only do at the cost of heavy power drain.

📱 What “always-on + adaptive + graph reasoning” could look like in smartphones​

1. Ultra-low-power, always-on sensing

  • Akida/TENNs role: Continuously listen/watch in the μW–mW range, only waking the big NPU when needed.
  • Examples:
    • Smarter wake words: distinguish between you and other voices, adapt to your accent, background noise.
    • Ambient sound awareness: alert you to alarms, baby cries, glass breaking, or traffic horns — without streaming mic data to the cloud.
    • Pocket/wearable fusion: recognize walking, running, falls, or gestures (raise-to-silence, nod-to-answer).
👉 Today, this burns battery on Snapdragon NPUs or requires offloading to earbuds/wearables. With Akida, it’s continuous and frugal.
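The "sentinel wakes the big NPU" pattern above is just a gated cascade. Here is a toy sketch, where `cheap_detector` and `heavy_model` are hypothetical stand-ins for the always-on block and the dense NPU (no real Akida or Snapdragon API involved):

```python
def sentinel_pipeline(samples, cheap_detector, heavy_model, threshold=0.5):
    """Run a cheap always-on detector over every sample; invoke the
    expensive model only when the detector scores above threshold."""
    heavy_calls = 0
    results = []
    for s in samples:
        if cheap_detector(s) >= threshold:   # low-power sentinel fires
            results.append(heavy_model(s))   # wake the big engine
            heavy_calls += 1
        else:
            results.append(None)             # stay asleep, save power
    return results, heavy_calls

# Toy run: the "heavy" model is only invoked for loud samples.
samples = [0.1, 0.9, 0.2, 0.8]
results, heavy_calls = sentinel_pipeline(
    samples,
    cheap_detector=lambda s: s,              # pretend score == loudness
    heavy_model=lambda s: f"classified({s})")
print(heavy_calls)  # 2: only 2 of 4 samples woke the heavy model
```

The power saving comes entirely from how rarely the sentinel fires, which is why sparse, event-driven front ends suit this role.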

2. Adaptive personalization (on-device learning)

  • Akida/TENNs role: Update models locally, without retraining in the cloud.
  • Examples:
    • Voice biometrics: phone adapts to your voice over time, even as it changes (illness, aging).
    • Gesture learning: you invent a new gesture (e.g., double-shake = launch camera), and the phone learns it instantly.
    • Environmental tuning: auto-adapts noise cancellation or audio profiles based on where you usually are (train, office, home).
👉 Current phones can’t personalize in the field without big cloud retraining. This would make them self-learning devices.

3. Graph reasoning across sensors (GNNs on top of TENNs)

  • Akida/TENNs role: Emit event embeddings (sound = horn, vision = crosswalk, IMU = running).
  • U85/GNN role: Build a graph of these events and reason over relationships.
  • Examples:
    • Contextual safety assistant:
      • Mic hears horn → camera sees crosswalk → IMU detects step-off → GNN infers “user about to step into traffic” → alert/vibrate.
    • Digital wellness coach:
      • Phone + watch + earbuds create a graph of sleep, stress (voice), activity (IMU), and environment (noise).
      • GNN infers risk patterns: “You’re stressed, haven’t slept enough, and your calendar is packed — suggest downtime.”
    • Privacy-first context understanding:
      • Phone can reason: “Kids’ voices + home Wi-Fi + calendar says weekend = family context” → automatically mute work notifications.
👉 This is next-level context awareness — something dense NPUs alone don’t deliver because they treat each modality in isolation.
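To make the "graph of these events" idea concrete, here is a toy sketch (my own invention, not any real Akida/U85 interface) that links events from different sensors occurring within a short time window; a GNN would then run message passing over exactly these edges:

```python
def build_event_graph(events, window=1.0):
    """events: list of (timestamp_sec, modality, label) tuples.
    Link events from different modalities that occur within `window`
    seconds of each other; a GNN would reason over these edges."""
    edges = []
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            t1, m1, _ = events[i]
            t2, m2, _ = events[j]
            if m1 != m2 and abs(t1 - t2) <= window:
                edges.append((i, j))
    return edges

# The "about to step into traffic" example as co-occurring events:
events = [
    (10.0, "mic",    "car_horn"),
    (10.3, "camera", "crosswalk"),
    (10.5, "imu",    "step_off_kerb"),
    (55.0, "mic",    "music"),       # unrelated, far away in time
]
edges = build_event_graph(events)
print(edges)  # [(0, 1), (0, 2), (1, 2)]: the three cues form a clique
```

A tightly connected cluster of cross-modal events is what would let a downstream GNN infer a composite situation (here, "user about to step into traffic") that no single sensor sees on its own.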

4. Smarter security & liveness

  • Akida/TENNs role: Detect temporal cues like blinking, micro-movements, gait, or pulse.
  • GNN role: Correlate across sensors (face + voice + motion) to confirm identity.
  • Examples:
    • Anti-spoofing: Spot deepfake attacks (static photo, replayed voice).
    • Continuous auth: Phone knows it’s still you as you use it, without re-scans or draining power.
👉 Today, FaceID/biometrics are point-in-time checks. This makes them continuous, adaptive, and harder to spoof.

5. Always-on multimodal assistants

  • Combine Akida’s sentinel mode with GNN reasoning to create an assistant that’s:
    • Context-aware (not just reactive commands).
    • Privacy-first (sensitive data stays local).
    • Battery-friendly (μW–mW background, bursts only when needed).
👉 Example: Instead of “Hey Siri/Google,” the phone knows when you’re likely asking it a question (gaze, gesture, voice tone) and wakes the assistant seamlessly.

⚖️ Why it matters​

  • Today’s gap: Smartphones either run dense NPUs (burning battery for continuous monitoring) or offload to cloud/wearables.
  • What Akida+TENNs+GNNs adds: A third compute domain that enables truly always-on, adaptive, context-aware intelligence.
This wouldn’t replace Snapdragon’s Hexagon or Apple’s Neural Engine — it would complement them by catching events early, filtering noise, and only engaging the heavy engines when meaningful.

✅ In short:
On smartphones, Akida/TENNs + GNNs would unlock continuous, adaptive, context-aware features: smarter assistants, better safety/security, and health/wellness insights — all without hammering the battery or sending raw data to the cloud.

 
  • Like
  • Love
  • Fire
Reactions: 11 users
Is it the eve of an IP deal for the first day of spring?
Come on 🧠 🍟
Let’s get the ball rolling, we need some excitement please Sean.
$9 million for a start will be good.
 
  • Like
  • Fire
  • Love
Reactions: 6 users
Jeez, I wish they would come up with some more interesting demos. Asking my fridge about it's vacation mode is about as interesting as a description of my last stool. 🤣 Soft, brown and sticky for those following along, by the way. 🤣
So you had sticky date pudding in the fridge?

SC
 
  • Haha
Reactions: 5 users

Diogenese

Top 20
Hi Diogenese,

Where it gets really tricky is that I'm totally reliant on ChatGPT telling me stuff that isn't entirely inaccurate, because I literally know NOTHing!

But what I can tell you is that Chatty insists that it's very unlikely that Akida 2 will make Arm’s Ethos unnecessary.

On the contrary it says:
  • Akida (spiking/TENNs) and Ethos (dense CNN/Transformer math) are complementary, not redundant.
  • Ethos excels at dense matrix math for CNNs/SLMs, Akida excels at event-driven, sparse, temporal, continual learning.
  • Together, they cover a much broader application range than either alone — which is the whole basis of your “unique combo” idea.

It also says "GNNs can be layered on TENN embeddings: Akida emits sparse event graphs → U85 handles message passing with GNN kernels. That’s where the combo becomes unique. OEMs want complementary heterogeneous compute (CPU + DSP + NPU + SNN). And that Arm’s ecosystem reach (billions of units, compiler/toolchain dominance) means Ethos IP will remain highly attractive, especially paired with Akida.

The strengths of Akida/TENNs and Arm’s Ethos-U85 are actually complementary, not overlapping:
  • Ethos-U85 shines for dense CNN/Transformer/SLM workloads — the heavy lifting.
  • Akida/TENNs excel at always-on, temporal, sparse/event-driven sensing and on-device adaptation.
Together they cover a much wider range of use cases than either block on its own. And when you add GNNs layered on top of TENN embeddings, you move from local anomaly/event detection to system-level reasoning (factories, fleets, grids, smart buildings). That’s where the real disruption lies."



I agree with you 100% that TENNs are the game-changer inside Akida 2. My take was that pairing them with U85 (and eventually layering GNNs on top) might provide a winning mix that is much more versatile than a single-architecture approach.

But again I repeat. I know NOTHing! 🥴

I guess if ChatGPT is right then maybe the Alif board-first, chiplet-later strategy might be the way to go - to prove it with E8 + Akida devkits today, then revisit silicon integration once ROI is clear.
Hi Bravo/Sgt Schultz,

Didn't your mother teach you not to talk to strange bots?

As I said, GNNs are above my paygrade, but your chatty mate has hallucinations.

So Akida can't do CNN/SLM?

That's a surprise.

Akida cut its teeth on CNNs, and Akida 2 has developed a strong affinity for SLMs (small LLMs and RAG - the terminology fluctuates). It is true that Akida 1 is limited to "simple CNNs", but Akida 2 has no such limitations, so your friend and I may be talking at cross porpoises. True also that Akida 2 only exists as FPGA at this stage, but since we are talking IP ...

Also TENNs seems to have made ViT (Vision Transformer) redundant in Akida 2.

BrainChip Technology Roadmap - BrainChip

[roadmap screenshots]

As you said, Akida is compatible with U85, and Akida needs a processor for configuration (NPUs per layer, number of layers, loading models), but I'm not convinced that anything Akida can do, Ethos can do better. No, it can't.

As to Alfie, I need more info, but that draft BRN page from a couple of days ago distinguished between classification and detection, and the example of detection seemed similar to GNN, with the memory-friendly central location of the bounding box.

PS: Any technical comments are provided by an engineer trained in the last millennium, but at least not speaking through his bot.
 
Last edited:
  • Haha
  • Love
  • Like
Reactions: 12 users

Diogenese

Top 20
Sorry for going off topic, but as it's a Sunday, if you have nothing to watch and like anything from Stephen King, then watch this


Son of Doge! aka Big Balls revenge ...

Eat your heart out Elon!
 
  • Haha
  • Like
Reactions: 2 users