HopalongPetrovski
I'm Spartacus!
I'm the new morale officer. In charge of Human Resources are we, Hop?
SC

Oops, thanks for the correction. Then that makes the AKIDA cloud look even better.
Hi manny,
8-bit and ViT are in Akida 2.
The 1500 is just Akida 1 without the ARM Cortex processor; the NPUs are basically the same as Akida 1's, and it relies on an external processor for configuration. It is made by GlobalFoundries in 22nm FD-SOI, which makes it a bit faster than 28nm and more power-efficient (lower leakage losses) than vanilla CMOS.
8-bit makes it easier to run more 3rd party models on Akida 2.
Akida 2 with the TENNs model also has long skip, which cuts out reprocessing of already-classified blocks of data by bypassing subsequent layers and sending those blocks straight to the output. The TENNs model is loaded into one of the 4 NPUs (aka NPEs) in each node. The TENNs model can also be run on a non-Akida processor.
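To make the long-skip idea concrete, here's a toy Python sketch of the routing logic. The confidence threshold, function names, and data layout are all my own illustrative choices, not BrainChip's API:

```python
# Toy sketch of "long skip": blocks that an early stage already classifies
# with high confidence bypass the remaining layers and go straight to the
# output. All names here are illustrative, not BrainChip's API.

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for "already classified"

def early_stage(block):
    # Stand-in for the TENNs front-end: returns (label, confidence).
    return block.get("label"), block.get("confidence", 0.0)

def deep_layers(block):
    # Stand-in for the remaining network layers (the expensive path).
    return {"label": "processed:" + str(block["data"]), "skipped": False}

def run_with_long_skip(blocks):
    outputs = []
    for block in blocks:
        label, conf = early_stage(block)
        if conf >= CONFIDENCE_THRESHOLD:
            # Long skip: no reprocessing, forward the early result directly.
            outputs.append({"label": label, "skipped": True})
        else:
            outputs.append(deep_layers(block))
    return outputs

blocks = [
    {"data": "A", "label": "cat", "confidence": 0.97},
    {"data": "B", "label": None, "confidence": 0.2},
]
results = run_with_long_skip(blocks)
print(results)
```

The already-classified block "A" never touches the deep layers, which is where the compute saving comes from.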
| Feature | Weebit Nano (ReRAM) | BrainChip (Akida) |
| --- | --- | --- |
| Integration | Bolt-on to existing systems | Requires new architecture |
| Adoption speed | Faster | Slower due to learning curve |
| Market familiarity | High (memory tech is well known) | Low (neuromorphic is emerging) |
| Innovation depth | Incremental improvement | Radical innovation |
| Risk vs. reward | Lower risk, steady reward | Higher risk, potentially huge reward |
NASA+Google "Space Doctor".
This is pretty interesting.
I wasn't able to locate any public documentation specifically describing how this NASA-Google "Space Doctor" AI assistant actually works.
AI assistants in space operate locally, not over the internet, so you would have to think that this assistant would be an onboard AI model optimized for the edge.
If this space doctor assistant is deployed onboard, it would IMO be a compelling fit for Akida's low-power, low-latency decision-making (e.g., real-time vital-sign monitoring or anomaly detection).
NASA and Google Launch Space Doctor AI Assistant
What happens if you run into medical issues in space and can't get in touch with Houston? NASA and Google say their new artificial intelligence system might help the next time you're stuck in Earth's orbit and need some help.
Dubbed the Crew Medical Officer Digital Assistant, or CMO-DA, the new AI system is a multi-modal interface that uses natural language processing and machine learning to provide real-time analyses of crew health and performance, Google said.
The tool is meant to support a designated crew medical officer or flight surgeon in making medical decisions and maintaining crew health.
"As NASA embarks on a new era of human spaceflight, beginning with the Artemis campaign's aim to return to the Moon, preparations are underway to ensure crew health and wellness," says a release from Google. "This includes exploring whether remote care capabilities can deliver detailed diagnoses and treatment options if a physician is not onboard or if real-time communication with Earth-based medical experts is limited."
CMO-DA could help support human exploration of the Moon, Mars, "and beyond," according to the release, which said the care the system provides "could become increasingly important as NASA missions venture deeper into space."
Google said that trials applying AI on a range of medical scenarios showed "promise for reliable diagnoses based on reported symptoms," and that it will continue testing alongside NASA and medical doctors to further refine the model with the aim of enhancing autonomous crew health and performance.
The AI tool was co-developed by Google Cloud, NASA's Lunar Command and Control Interoperability project, and NASA's Human Research Program.
"This tool represents an important milestone for AI technology, medical care, and our continued exploration of the cosmos," said Google. "Not only does this innovation hold potential for advancing space missions, but it could also benefit people here on Earth by providing early access to quality medical care in remote areas."
Source: meritalk.com
I'll just go ahead and say it:
Today feels like a perfect day for a price-sensitive announcement: nothing exaggerated, just something realistic that puts recurring revenue on the horizon.
Not a one-off payment, but a long-term agreement that contributes meaningfully to the companyās sustainable growth over time.
Just one such signal would already go a long way in restoring confidence and making a lot of us pretty happy.
Hi Bravo,
My Question:
In what ways would the neuromorphic Akida "Space Doctor" system differ from traditional approaches?
ChatGPT 5.0's response:
The neuromorphic Akida "Space Doctor" system would differ from a traditional CPU/GPU-based approach in some really important ways, and in a space environment those differences become mission-critical.
Here's the breakdown:
1. Processing philosophy: events vs. frames

Traditional processors:
- Work in frame-based mode: they constantly process entire streams of data at fixed intervals (e.g., every camera frame, every ECG reading).
- Even if nothing changes in the input, they still burn the same compute cycles and power.
- Good for heavy numerical tasks but wasteful for long periods of "no activity."

Akida neuromorphic processor:
- Works in event-driven mode: it only processes when something changes (a spike in heart rate, a hand movement in the fridge, sudden camera motion, a new sound).
- If nothing changes, power draw is near zero.
- Perfect for long-duration, idle-but-listening health monitoring in space.
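The frame-based vs. event-driven contrast can be sketched in a few lines of Python. The "work units" here are a stand-in for compute/power, and the numbers are purely illustrative, not Akida measurements:

```python
# Toy comparison of frame-based vs. event-driven processing of a sensor
# stream. One "work unit" stands in for one inference's worth of
# compute/power; the numbers are illustrative, not Akida measurements.

def frame_based(samples):
    # Processes every sample at a fixed rate, whether or not it changed.
    return len(samples)  # one unit of work per frame

def event_driven(samples, threshold=0):
    # Processes a sample only when it differs from the last processed one.
    work = 0
    last = None
    for s in samples:
        if last is None or abs(s - last) > threshold:
            work += 1
            last = s
    return work

# A mostly-idle heart-rate stream: a long steady stretch with one spike.
stream = [70] * 50 + [120] + [70] * 49

print(frame_based(stream))   # 100 work units
print(event_driven(stream))  # 3 work units (first sample, spike, return)
```

On a mostly-idle vitals stream the frame-based loop does 100 units of work while the event-driven one does 3, which is the whole point of spending power only on change.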
2. Power & thermal management

Traditional processors:
- Continuous high-power draw for AI inference (LLMs, CNNs, signal processing).
- On the ISS or in deep space, every watt matters, and waste heat is hard to radiate away.
- You'd need more batteries, cooling systems, and scheduling to avoid overheating.

Akida neuromorphic:
- Can run always-on sensing at hundreds of microwatts to a few milliwatts.
- Frees the main CPU/GPU to sleep most of the time and wake only on Akida's signal.
- Generates less heat, so thermal management in microgravity is easier.
3. Latency

Traditional processors:
- A "big" AI model might take 200-500 ms to produce an inference, especially if it needs to boot from idle.
- That's fine for chatting, but not for "astronaut collapsed" scenarios.

Akida neuromorphic:
- Sub-10 ms reaction for known patterns (fall detection, arrhythmia, abnormal breathing).
- Immediately triggers the main medical-assistant model to wake.
4. Radiation resilience

Traditional processors:
- Complex DRAM/GPU architectures are more susceptible to single-event upsets (SEUs) from cosmic radiation.
- They need heavier error correction and redundancy, which adds mass.

Akida neuromorphic:
- A smaller memory footprint and simpler weight storage make it easier to harden and to protect with triple modular redundancy (TMR).
- Small models make it easy to store multiple golden copies to refresh after bit flips.
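For anyone curious what TMR actually looks like, here's a minimal Python sketch of majority-voted weight storage. This is illustrative only; real radiation hardening does the voting in hardware, and the "golden copy" scrub step is my own simplification:

```python
# Minimal sketch of triple modular redundancy (TMR): keep three copies of
# each stored weight and majority-vote on read, so a single bit flip in
# one copy is masked. Illustrative only; real rad-hard designs vote in
# hardware logic, not in software like this.

def tmr_read(copies):
    # Bitwise majority vote across the three stored copies: a bit is set
    # in the result iff it is set in at least two copies.
    a, b, c = copies
    return (a & b) | (a & c) | (b & c)

golden = 0b10110010           # the "golden" weight value
copies = [golden, golden, golden]

copies[1] ^= 0b00001000       # simulate a cosmic-ray bit flip in one copy

restored = tmr_read(copies)
print(restored == golden)     # True: the single upset is voted out
copies[1] = restored          # scrub: refresh the corrupted copy
```

As long as any single-event upset hits only one of the three copies before the next scrub, the vote always recovers the golden value.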
5. Autonomy & communication load

Traditional processors:
- Without optimization, they might require sending large raw datasets to Earth for analysis.
- Deep-space missions (Mars) can have 20+ minute one-way comms delays, which is unsuitable for urgent triage.

Akida neuromorphic:
- Runs pre-trained detection locally and sends only compressed "event summaries" to the main CPU/LLM.
- Cuts down on both comms bandwidth and storage requirements.
- Enables true edge autonomy when comms are down or delayed.
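The "event summary" idea can be sketched like this; the field names and the tachycardia threshold are my own illustrative choices, not anything from NASA or BrainChip:

```python
# Sketch of the "event summary" idea: instead of shipping a raw sensor
# stream, the edge device emits only compact descriptions of detected
# events. Field names and thresholds are illustrative choices.

def summarize_events(samples, hz=1, high=100):
    # samples: raw heart-rate readings at `hz` samples per second.
    events = []
    for t, bpm in enumerate(samples):
        if bpm > high:
            events.append({"t": t / hz, "type": "tachycardia", "bpm": bpm})
    return events

raw = [72, 74, 73, 140, 138, 75, 74]   # 7 raw readings
summary = summarize_events(raw)
print(len(raw), "raw samples ->", len(summary), "events")
print(summary)
```

Seven raw readings collapse to two event records, and the ratio only improves the longer the stream stays normal.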
6. AI model footprint

Traditional processors:
- Often require hundreds of MB, or even GB, of weights for AI models.
- Onboard storage and loading times become significant.
- Updates in space are slower and riskier.

Akida neuromorphic:
- Uses ultra-compact spiking neural network (SNN) models, often 1-10 MB for a detection task.
- Faster to load, update, and validate.
- Can carry dozens of task-specific models in memory (e.g., for different medical conditions) and swap them on demand.
7. Mission risk profile

Traditional-only system:
- If the main CPU/GPU locks up or fails, all AI functionality is gone.
- The crew has to revert to manual procedures until the system reboots.

Akida + traditional hybrid:
- Akida operates independently as a "medical watchdog," still running core health monitoring even if the big system crashes.
- Can even trigger an emergency call or alert without CPU intervention.
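A toy sketch of that watchdog split; the class and method names are mine, not a real Akida API:

```python
# Sketch of the "medical watchdog" split: a small always-on monitor keeps
# running simple threshold checks and can raise an alert even when the
# main system is down. Names are illustrative, not a real Akida API.

class MainSystem:
    def __init__(self):
        self.alive = True

    def diagnose(self, vitals):
        # The big model: rich diagnosis, but only while the system is up.
        if not self.alive:
            raise RuntimeError("main system offline")
        return "full diagnosis for " + str(vitals)

class Watchdog:
    # Independent low-power monitor: only cheap threshold checks.
    def check(self, vitals):
        if vitals["heart_rate"] > 150 or vitals["spo2"] < 90:
            return "ALERT"
        return "ok"

main = MainSystem()
dog = Watchdog()
vitals = {"heart_rate": 160, "spo2": 95}

main.alive = False            # simulate a main-system crash
try:
    main.diagnose(vitals)
except RuntimeError:
    pass                      # big model unavailable...
print(dog.check(vitals))      # ...but the watchdog still raises "ALERT"
```

The point is the failure independence: the watchdog's check path shares no state with the main system, so a crash on one side leaves the other running.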
Bottom line:
In space, idle listening, ultra-low power, ultra-low latency, and radiation resilience are more important than raw TOPS.
Akida's neuromorphic design means:
- It "listens" almost for free and only wakes the big systems when necessary.
- It's far less fragile in radiation-heavy environments.
- It enables autonomous, safe operation without constant ground support.
That's why NASA-style projects like the "Space Doctor" could lean on neuromorphic processors: in deep space, the less you waste and the faster you react, the longer you survive.