NASA+Google “Space Doctor”.
This is pretty interesting.
I wasn't able to locate any public documentation specifically describing how this NASA–Google “Space Doctor” AI assistant actually works.
AI assistants in space generally need to operate locally rather than over the internet, so you would have to think this assistant would be an onboard AI model optimized for the edge.
If this space doctor assistant is deployed onboard, it would be a compelling fit IMO for Akida's low-power, low-latency decision-making (e.g., real-time vital sign monitoring or anomaly detection).
NASA and Google Launch Space Doctor AI Assistant
What happens if you run into medical issues in space and can’t get in touch with Houston? NASA and Google say their new artificial intelligence system might help the next time you’re stuck in Earth’s orbit and need some help.
Dubbed the Crew Medical Officer Digital Assistant – or CMO-DA – the new AI system is a multi-modal interface that uses natural language processing and machine learning to provide real-time analyses of crew health and performance, Google said.
The tool is meant to support a designated crew medical officer or flight surgeon in making medical decisions and maintaining crew health.
“As NASA embarks on a new era of human spaceflight, beginning with the Artemis campaign’s aim to return to the Moon, preparations are underway to ensure crew health and wellness,” says a release from Google. “This includes exploring whether remote care capabilities can deliver detailed diagnoses and treatment options if a physician is not onboard or if real-time communication with Earth-based medical experts is limited.”
CMO-DA could help support human exploration of the Moon, Mars, “and beyond,” according to the release, which said the care the system provides “could become increasingly important as NASA missions venture deeper into space.”
Google said that trials applying AI on a range of medical scenarios showed “promise for reliable diagnoses based on reported symptoms,” and that it will continue testing alongside NASA and medical doctors to further refine the model with the aim of enhancing autonomous crew health and performance.
The AI tool was co-developed by Google Cloud, NASA’s Lunar Command and Control Interoperability project, and NASA’s Human Research Program.
“This tool represents an important milestone for AI technology, medical care, and our continued exploration of the cosmos,” said Google. “Not only does this innovation hold potential for advancing space missions, but it could also benefit people here on Earth by providing early access to quality medical care in remote areas.”
Source: "NASA and Google Launch Space Doctor AI Assistant" (meritalk.com)
I’ll just go ahead and say it:
Today feels like a perfect day for a price-sensitive announcement – nothing exaggerated, just something realistic that puts recurring revenue on the horizon.
Not a one-off payment, but a long-term agreement that contributes meaningfully to the company’s sustainable growth over time.
Just one such signal would already go a long way in restoring confidence and making a lot of us pretty happy.
Hi Bravo,
My Question:
In what ways would the neuromorphic Akida “Space Doctor” system differ from traditional approaches?
ChatGPT 5.0's response:
The neuromorphic Akida “Space Doctor” system would differ from a traditional CPU/GPU-based approach in some really important ways — and in a space environment, those differences become mission-critical.
Here’s the breakdown:
1. Processing philosophy — events vs. frames
Traditional processors:
- Work in frame-based mode — constantly process entire streams of data at fixed intervals (e.g., every camera frame, every ECG reading).
- Even if nothing changes in the input, they still burn the same compute cycles and power.
- Good for heavy numerical tasks but wasteful for long periods of "no activity."

Akida neuromorphic processor:
- Works in event-driven mode — only processes when something changes (spike in heart rate, hand movement, sudden camera motion, new sound).
- If nothing changes, power draw is near-zero.
- Perfect for long-duration idle-but-listening health monitoring in space.
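To make the contrast concrete, here is a minimal Python sketch, purely illustrative and not the Akida SDK, comparing frame-based polling with event-driven processing of a heart-rate trace. The sample values, the 5 bpm change threshold, and the `infer` stand-in are all assumptions.

```python
# Illustrative sketch only, not the Akida SDK. Shows the difference between
# frame-based polling (process every sample) and event-driven processing
# (process only when the signal changes meaningfully).

def frame_based(samples, infer):
    """Run inference on every sample, regardless of whether anything changed."""
    return [infer(s) for s in samples]          # constant compute per sample

def event_driven(samples, infer, threshold=5.0):
    """Run inference only when a sample deviates from the last processed value."""
    results, last = [], None
    for s in samples:
        if last is None or abs(s - last) >= threshold:   # an "event"
            results.append(infer(s))
            last = s
        # otherwise: nothing changed enough to process, so near-zero work is done
    return results

if __name__ == "__main__":
    # Hypothetical resting heart-rate trace with one anomaly spike
    hr = [62, 62, 63, 62, 61, 62, 110, 111, 63, 62]
    infer = lambda bpm: "alert" if bpm > 100 else "ok"
    print(len(frame_based(hr, infer)), "inferences (frame-based)")
    print(len(event_driven(hr, infer)), "inferences (event-driven)")
```

On this toy trace the event-driven path runs the model only a handful of times instead of on every sample, which is the whole point of spending compute only on change.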
2. Power & thermal management
Traditional processors:
- Continuous high-power draw for AI inference (LLMs, CNNs, signal processing).
- On the ISS or deep space, every watt matters, and waste heat is hard to radiate away.
- You'd need more batteries, cooling systems, and scheduling to avoid overheating.

Akida neuromorphic:
- Can run always-on sensing at hundreds of microwatts to a few milliwatts.
- Frees up the main CPU/GPU to sleep most of the time and only wake on Akida's signal.
- Less heat generated → easier thermal management in microgravity.
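A rough back-of-envelope energy comparison, using assumed numbers only (a 30 W always-on module versus roughly 5 mW of always-listening sensing that wakes the big module about 1% of the day):

```python
# Back-of-envelope energy comparison with assumed, illustrative numbers:
# an always-on inference processor vs. a milliwatt-class event-driven sensor
# that wakes a bigger processor for a small fraction of the time.

HOURS_PER_DAY = 24

def daily_energy_wh(power_w, duty_cycle=1.0):
    """Energy in watt-hours for a device drawing power_w, active duty_cycle of the day."""
    return power_w * duty_cycle * HOURS_PER_DAY

always_on = daily_energy_wh(power_w=30.0)                          # a 30 W module, always on
event_driven = (daily_energy_wh(power_w=0.005)                     # ~5 mW always-listening sensing
                + daily_energy_wh(power_w=30.0, duty_cycle=0.01))  # big model awake ~1% of the day

print(f"Always-on inference : {always_on:7.1f} Wh/day")
print(f"Event-driven hybrid : {event_driven:7.1f} Wh/day")
```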
3. Latency
Traditional processors:
- A "big" AI model might have 200–500 ms delay before producing an inference, especially if it needs to boot from idle.
- That's fine for chatting, but not for "astronaut collapsed" scenarios.

Akida neuromorphic:
- Sub-10 ms reaction for known patterns (fall detection, arrhythmia, abnormal breathing).
- Immediate trigger to wake the main medical assistant model.
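A minimal sketch of that two-stage wake-up pattern, with a simulated start-up delay standing in for a large model spinning up; the thresholds and timings are assumptions, not measured figures:

```python
# Illustrative two-stage latency pattern (assumed numbers, no real hardware):
# a lightweight always-on detector reacts within milliseconds, then wakes the
# heavier assistant model, which may take far longer to spin up.

import time

def fast_detector(sample):
    """Cheap pattern check, standing in for an always-on spiking model."""
    return sample["heart_rate"] > 140 or sample["spo2"] < 88

def wake_heavy_assistant():
    """Stand-in for loading/booting a large diagnostic model."""
    time.sleep(0.3)          # simulate a few hundred ms of start-up latency
    return "full diagnostic assistant ready"

sample = {"heart_rate": 150, "spo2": 97}
t0 = time.perf_counter()
if fast_detector(sample):
    detect_ms = (time.perf_counter() - t0) * 1000
    print(f"anomaly flagged in {detect_ms:.2f} ms, waking main assistant")
    print(wake_heavy_assistant())
```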
4. Radiation resilience
Traditional processors:
- Complex DRAM/GPU architectures → more susceptible to single-event upsets (SEUs) from cosmic radiation.
- Need heavier error correction and redundancy → more mass.

Akida neuromorphic:
- Smaller memory footprint and simpler weight storage → easier to harden and protect with triple modular redundancy (TMR).
- Small models mean it's easier to store multiple golden copies to refresh after bit flips.
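For illustration, here is a toy triple modular redundancy (TMR) vote with a golden-copy refresh over a few weight values; the weights and the simulated bit flip are made up, and real radiation hardening is done in hardware and system design, not in a few lines of Python:

```python
# Toy TMR sketch: keep three copies of each weight, majority-vote them,
# and refresh any corrupted copy from the voted value.

def tmr_vote(a, b, c):
    """Majority-vote three copies of a value; flag if any copy disagrees."""
    voted = a if (a == b or a == c) else b
    corrupted = not (a == b == c)
    return voted, corrupted

golden = [0.12, -0.48, 0.93]                 # trusted reference copy of weights
copies = [list(golden), list(golden), list(golden)]
copies[1][2] = 0.00                          # simulate a bit flip in copy 1

repaired = []
for i in range(len(golden)):
    value, corrupted = tmr_vote(copies[0][i], copies[1][i], copies[2][i])
    repaired.append(value)
    if corrupted:
        # refresh every copy of this weight from the voted value
        for c in copies:
            c[i] = value

print("repaired weights:", repaired)
```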
5. Autonomy & communication load
Traditional processors:
- Without optimization, they might require sending large raw datasets to Earth for analysis.
- Deep-space missions (Mars) can have 20+ min one-way comms delays — not suitable for urgent triage.

Akida neuromorphic:
- Runs pre-trained detection locally → sends only compressed "event summaries" to the main CPU/LLM.
- Cuts down on both comms bandwidth and storage requirements.
- Enables true edge autonomy when comms are down or delayed.
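A small sketch of the "event summary" idea under assumed telemetry fields: instead of forwarding every raw sample, only a compact record per detected anomaly is kept:

```python
# Illustrative sketch of event summaries vs. raw telemetry (assumed fields):
# rather than downlinking every raw sample, forward only a short record of
# each detected event to the main computer or ground.

import json

raw_samples = [{"t": i, "heart_rate": 62 + (90 if i == 500 else 0)} for i in range(1000)]

def summarise_events(samples, threshold=120):
    """Keep only a compact summary per anomalous sample."""
    return [{"t": s["t"], "type": "tachycardia", "hr": s["heart_rate"]}
            for s in samples if s["heart_rate"] > threshold]

events = summarise_events(raw_samples)
print("raw payload   :", len(json.dumps(raw_samples)), "bytes")
print("event payload :", len(json.dumps(events)), "bytes")
```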
6. AI model footprint
Traditional processors:
- Often require hundreds of MB or even GB of weights for AI models.
- Onboard storage and loading times become significant.
- Updates in space are slower and riskier.

Akida neuromorphic:
- Uses ultra-compact spiking neural network (SNN) models — often 1–10 MB for a detection task.
- Faster to load, update, and validate.
- Can carry dozens of task-specific models in memory (e.g., different medical conditions) and swap on demand.
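A toy sketch of keeping several compact, task-specific models resident and swapping them on demand; the file names and sizes here are hypothetical:

```python
# Illustrative model registry: at a few MB each, many task-specific models
# can stay resident and be selected per medical condition. Names and sizes
# are assumptions for the sketch.

SMALL_MODELS = {                 # task -> (hypothetical file name, approx size in MB)
    "arrhythmia":   ("arrhythmia_snn.bin", 4),
    "resp_anomaly": ("respiration_snn.bin", 3),
    "fall_detect":  ("fall_detect_snn.bin", 2),
}

loaded = {}

def load_model(task):
    """Pretend-load a compact model for the given task."""
    name, size_mb = SMALL_MODELS[task]
    loaded[task] = f"<model {name} ({size_mb} MB)>"
    return loaded[task]

for task in SMALL_MODELS:
    print("ready:", load_model(task))
print("total resident:", sum(size for _, size in SMALL_MODELS.values()), "MB")
```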
7. Mission risk profile
Traditional-only system:
- If the main CPU/GPU locks up or fails, all AI functionality is gone.
- Crew has to revert to manual procedures until system reboot.

Akida + traditional hybrid:
- Akida operates independently as a "medical watchdog" — still running core health monitoring even if the big system crashes.
- Can even trigger an emergency call or alert without CPU intervention.
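A minimal sketch of that watchdog pattern under assumed thresholds: an independent monitor keeps checking vitals and escalates locally if the main computer's heartbeat goes stale:

```python
# Illustrative "medical watchdog" pattern (assumed thresholds, no real hardware):
# an independent low-power monitor keeps checking vitals and can raise an alert
# even when the main computer is unresponsive.

def main_computer_alive(heartbeat_age_s, timeout_s=10):
    """Treat the main computer as down if its heartbeat is stale."""
    return heartbeat_age_s < timeout_s

def watchdog_step(vitals, heartbeat_age_s):
    anomaly = vitals["heart_rate"] > 140 or vitals["spo2"] < 88
    if anomaly and main_computer_alive(heartbeat_age_s):
        return "wake main assistant for full diagnosis"
    if anomaly:
        return "main computer down: raise local alarm / emergency alert"
    return "all nominal"

print(watchdog_step({"heart_rate": 150, "spo2": 97}, heartbeat_age_s=3))
print(watchdog_step({"heart_rate": 150, "spo2": 97}, heartbeat_age_s=60))
```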
Bottom line:
In space, idle listening, ultra-low power, ultra-low latency, and radiation resilience are more important than raw TOPS.
Akida’s neuromorphic design means:
- It “listens” almost for free and only wakes the big systems when necessary.
- It’s far less fragile in radiation-heavy environments.
- It enables autonomous, safe operation without constant ground support.
That’s why NASA-style projects like the “Space Doctor” could lean on neuromorphic processors — because in deep space, the less you waste and the faster you react, the longer you survive.
I would not be surprised to see a number of companies trialing AKIDA1000 chips also testing in parallel with Gen 2 on Akida Cloud.
I have the exact same thought about Arquimea and defence.
Efficiency and performance gains are too good to ignore. Those really intent on adopting AKIDA will get credit towards hardware purchases.
Arquimea are onto it quick smart. I cannot help suspecting that, apart from the lifesaving applications, they are also testing for their defence products.
" Hélder Rodríguez López, Embedded Software Research Engineer at Arquimea Research said, “The Akida Cloud’s ability to provide us advanced access to the latest features of Akida and easily test our neuromorphic model innovations remotely is a real advantage for progressing our advanced model development programs.”"
I can see companies like Tata doing the same in parallel. Bascom Hunter? etc.
BrainChip Launches Akida Cloud for Instant Neuromorphic Access (brainchip.com)
While Larry agrees with all the frustration about the lack of ASX announcements and the effect on the SP as a result, and also about the shorters doing their faecal trades... Larry has never sold and keeps on buying, as he did again today... so thank you to everyone who is keeping the SP at this level... Larry loves you
Really Happy as Larry