NASA+Google “Space Doctor”.
This is pretty interesting.
I wasn't able to locate any public documentation specifically describing how this NASA–Google “Space Doctor” AI assistant actually works.
AI assistants in space must operate locally rather than over the internet, so this assistant would presumably be an onboard AI model optimized for the edge.
If this space doctor assistant is deployed onboard, it would be a compelling fit IMO for Akida's low-power, low-latency decision-making (e.g., real-time vital-sign monitoring or anomaly detection).
NASA and Google Launch Space Doctor AI Assistant
What happens if you run into medical issues in space and can’t get in touch with Houston? NASA and Google say their new artificial intelligence system might help the next time you’re stuck in Earth’s orbit and need some help.
Dubbed the Crew Medical Officer Digital Assistant – or CMO-DA – the new AI system is a multi-modal interface that uses natural language processing and machine learning to provide real-time analyses of crew health and performance, Google said.
The tool is meant to support a designated crew medical officer or flight surgeon in making medical decisions and maintaining crew health.
“As NASA embarks on a new era of human spaceflight, beginning with the Artemis campaign’s aim to return to the Moon, preparations are underway to ensure crew health and wellness,” says a release from Google. “This includes exploring whether remote care capabilities can deliver detailed diagnoses and treatment options if a physician is not onboard or if real-time communication with Earth-based medical experts is limited.”
CMO-DA could help support human exploration of the Moon, Mars, “and beyond,” according to the release, which said the care the system provides “could become increasingly important as NASA missions venture deeper into space.”
Google said that trials applying AI on a range of medical scenarios showed “promise for reliable diagnoses based on reported symptoms,” and that it will continue testing alongside NASA and medical doctors to further refine the model with the aim of enhancing autonomous crew health and performance.
The AI tool was co-developed by Google Cloud, NASA’s Lunar Command and Control Interoperability project, and NASA’s Human Research Program.
“This tool represents an important milestone for AI technology, medical care, and our continued exploration of the cosmos,” said Google. “Not only does this innovation hold potential for advancing space missions, but it could also benefit people here on Earth by providing early access to quality medical care in remote areas.”
Source: meritalk.com
OpenAI launched the latest version of ChatGPT (5.0) today, so I thought I would test it by asking it about the NASA+Google “Space Doctor”.
My Question:
Can you please provide a potential system architecture for the Space Doctor incorporating BrainChip's AKIDA?
ChatGPT 5.0's response:
Here’s a speculative system architecture for a “space-doctor” assistant and where BrainChip’s Akida could add real value. I’ve split it into layers so it’s easy to map to flight hardware.
1) Mission goals (what the system must do)
- Onboard, offline care: triage, checklists, vitals monitoring, procedure guidance without ground link.
- Ultra-low power + high reliability: runs for months/years; graceful under comms blackouts.
- Radiation tolerance & safety: fail-safe behaviors; no single point of failure.
2) High-level stack (two compute paths working together)
- Cognitive path (LLM/clinical engine): runs medical dialog, protocols, decision support (x86/ARM/RISC-V flight CPU or space-graded AI SoC).
- Reflex path (Akida neuromorphic): event-driven, always-on perception + anomaly detection (sub-watt), feeding signals to the cognitive path only when needed.
3) Where Akida fits (neuromorphic “reflex” services)
- Vitals watcher (always-on)
- Inputs: ECG/PPG, respiration mic, IMU, skin temp.
- Akida tasks: edge denoising, heartbeat/arrhythmia detection, apnea events, fall detection.
- Why Akida: event-based spikes → microjoule-level monitoring, near-zero false wakeups.
- Procedural vision assistant
- Inputs: head-cam / bay camera (event camera or low-FPS vision).
- Akida tasks: instrument recognition, hand-pose/step confirmation, sterile-field breach alerts.
- Output: lightweight “OK/alert/next-step” tokens to the LLM UI.
- Acoustic anomaly guard
- Inputs: cabin mics, equipment mics.
- Akida tasks: detect cough patterns, distress calls, equipment bearing changes.
- Benefit: sub-watt continuous acoustic sentry without streaming audio to the main CPU.
- Intent & workload classifier
- Inputs: glance/head motion, short speech cues, task context.
- Akida tasks: infer “crew busy/stressed,” gate LLM verbosity, prioritize alerts.
- Safety watchdog
- Monitors the cognitive stack; if the big model stalls, Akida triggers safe checklists/alerts.
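The safety watchdog idea above is straightforward to prototype. A minimal sketch in Python (names and timeout values are illustrative assumptions, not from any NASA or BrainChip spec):

```python
import time

class Watchdog:
    """If the cognitive stack misses its heartbeat, fall back to safe checklists."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def beat(self) -> None:
        """Called periodically by the cognitive (LLM) process while healthy."""
        self.last_beat = time.monotonic()

    def stalled(self) -> bool:
        """True once the heartbeat has been missing longer than the timeout."""
        return time.monotonic() - self.last_beat > self.timeout_s

wd = Watchdog(timeout_s=0.05)
wd.beat()
assert not wd.stalled()   # fresh heartbeat: cognitive path considered alive
time.sleep(0.06)
assert wd.stalled()       # heartbeat missed: trigger safe checklists/alerts here
```

In a real system the monitoring side would run on the low-power path, so the check keeps working even if the main CPU hangs.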
4) Data flow (simplified)
Sensors → Akida micro-models (event detectors) → tokens/flags → Cognitive engine (LLM/clinical rules) → Crew UI (audio AR prompts, tablet HUD)
↳ During comms windows: compressed summaries to ground (DTN); otherwise full autonomy.
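The key design point in that data flow is the gate between the reflex path and the cognitive path: the LLM only wakes for "meaningful events." A minimal sketch of that gating logic, assuming hypothetical event labels and a confidence threshold (none of this comes from an actual Akida SDK):

```python
from dataclasses import dataclass

@dataclass
class ReflexEvent:
    """Compact token a reflex-path micro-model might emit (illustrative)."""
    source: str        # e.g. "vitals", "vision", "acoustic"
    label: str         # e.g. "arrhythmia", "fall", "nominal"
    confidence: float  # detector confidence in [0, 1]

def should_wake_llm(event: ReflexEvent, threshold: float = 0.8) -> bool:
    """Gate the power-hungry cognitive path: only confident, non-nominal events pass."""
    return event.label != "nominal" and event.confidence >= threshold

def route(events: list[ReflexEvent]) -> list[str]:
    """Forward gated events as compact tokens to the cognitive engine."""
    return [f"{e.source}:{e.label}" for e in events if should_wake_llm(e)]

events = [
    ReflexEvent("vitals", "nominal", 0.99),        # suppressed: nothing to report
    ReflexEvent("vitals", "arrhythmia", 0.91),     # passes the gate
    ReflexEvent("acoustic", "distress_call", 0.65) # below threshold, stays local
]
print(route(events))  # ['vitals:arrhythmia']
```

Everything below the threshold stays on the sub-watt path; only the short token string ever reaches the main CPU.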
5) Hardware sketch
- Compute:
- Rad-tolerant CPU/SoC (e.g., NOEL-V, Rad-ARM) for OS, UI, LLM inference (quantized).
- Akida IP/SoC (or space-qualified board) for always-on neuromorphic tasks.
- Sensors: ECG/PPG patch, IMU, event-camera or low-light camera, boundary mics, skin-temp, SpO₂.
- Storage: local model zoo (LLM + Akida SNNs), medical protocols, offline pharmacopoeia.
- Comms: DTN stack; store-and-forward to ground.
6) Fault tolerance & safety
- Triple-mode alerts: visual + haptic + audio.
- Graceful degradation: if LLM is unavailable, Akida still runs vitals/fall detection + “hard rules” (CPR checklist, EpiPen guidance).
- Radiation strategy: Akida models are small; use TMR/EDAC on weights, periodic checksum & refresh from golden image.
- Human-in-the-loop: confirmations before any high-risk recommendation.
7) Security & privacy
- Edge-only PII processing.
- No raw audio/video offboard; only encrypted summaries/metrics sent during comms windows.
- Model integrity checks before activation; signed updates only.
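A toy version of the "model integrity checks before activation" idea, using a plain SHA-256 checksum against a golden image (a flight system would use cryptographic signatures, e.g. Ed25519, on top of this; the model bytes here are placeholders):

```python
import hashlib

def sha256_digest(blob: bytes) -> str:
    """Hex digest of a model blob."""
    return hashlib.sha256(blob).hexdigest()

def verify_model(blob: bytes, expected_digest: str) -> bool:
    """Refuse to activate any model whose checksum deviates from the golden image."""
    return sha256_digest(blob) == expected_digest

golden_model = b"akida-arrhythmia-v1-weights"   # placeholder weight buffer
golden_digest = sha256_digest(golden_model)

assert verify_model(golden_model, golden_digest)
# A single changed byte (radiation upset or tampering) fails verification:
corrupted = b"akida-arrhythmia-v1-weightz"
assert not verify_model(corrupted, golden_digest)
```

The same check doubles as part of the radiation strategy in section 6: periodic re-verification against the golden image catches silent weight corruption.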
8) Two deployment profiles
- Near-Earth (ISS/Gateway): bigger LLM possible; more frequent syncs; Akida still handles 24/7 sensing cheaply.
- Deep-space (Moon/Mars): heavier reliance on Akida gatekeeping + smaller, quantized LLM; aggressive power caps; longer offline windows.
9) MVP build plan (practical steps)
- Sim lab prototype (Earth):
- Sensor kit + Akida dev board + SBC (e.g., Jetson/ARM) with a small local LLM (e.g., 3–7B quantized).
- Implement 3 Akida micro-models: arrhythmia, fall detection, instrument recognition.
- Edge-first orchestration:
- Only wake the LLM when Akida raises a “meaningful event.”
- Latency & power budget:
- Target: Akida services <300 mW continuous; LLM duty-cycled to <10% uptime.
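The power arithmetic behind that target is simple: an always-on reflex path plus a duty-cycled LLM. A quick sanity check (the LLM's active draw of 15 W is my assumption for a Jetson-class SBC, not a stated figure):

```python
def average_power_mw(reflex_mw: float, llm_active_mw: float, llm_duty: float) -> float:
    """Average draw: always-on reflex path plus duty-cycled cognitive path."""
    return reflex_mw + llm_active_mw * llm_duty

# Targets from above: reflex path <300 mW continuous, LLM awake <10% of the time.
avg = average_power_mw(reflex_mw=300, llm_active_mw=15000, llm_duty=0.10)
print(avg)  # 1800.0 -> ~1.8 W average, vs. ~15.3 W if the LLM ran continuously
```

Under these assumptions the gating cuts average power by roughly 8x, which is the whole argument for the two-path design.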
- Ops drills:
- Run scripted med scenarios (burns, lacerations, chest pain, EVA fatigue), evaluate false-alarm rate and response timing.
- Hardening path:
- Port to rad-tolerant CPU; add EDAC/TMR; test with injected bit-flips and sensor dropouts.
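The "injected bit-flips" test can start as pure software fault injection before any radiation-chamber work. A minimal sketch that flips random bits in a weight buffer to simulate single-event upsets (buffer contents and flip counts are arbitrary):

```python
import random

def inject_bitflips(weights: bytearray, n_flips: int, seed: int = 0) -> bytearray:
    """Return a copy of the weight buffer with n random bits flipped (simulated SEUs)."""
    rng = random.Random(seed)           # seeded for reproducible test campaigns
    corrupted = bytearray(weights)
    for _ in range(n_flips):
        byte = rng.randrange(len(corrupted))
        bit = rng.randrange(8)
        corrupted[byte] ^= 1 << bit     # flip one bit in place
    return corrupted

weights = bytearray(64)                  # placeholder all-zero weight buffer
hit = inject_bitflips(weights, n_flips=3)
assert hit != weights and len(hit) == len(weights)
flipped = sum(bin(a ^ b).count("1") for a, b in zip(weights, hit))
assert 1 <= flipped <= 3                 # <3 only if the same bit was hit twice
```

Running the detector models against many such corrupted copies gives a cheap first estimate of how much EDAC/TMR protection the weights actually need.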
10) Why this helps the mission
- Power: continuous monitoring without draining batteries.
- Responsiveness: sub-100 ms reflexes for critical events.
- Bandwidth: local understanding; only brief summaries to Earth.
- Reliability: Akida as a low-power safety net even if the “big brain” naps.