BRN Discussion Ongoing

HopalongPetrovski

I'm Spartacus!
  • Haha
Reactions: 5 users

manny100

Top 20
Akida Cloud isn’t just a tech upgrade—it’s a sales enabler. If BrainChip plays its cards right, this could be a turning point in how its neuromorphic AI reaches the market.
 
  • Like
  • Fire
  • Love
Reactions: 14 users

manny100

Top 20
Revolutionize AI with Akida in the Cloud
" The Akida Cloud is the game-changer in edge innovation, which provides scalability, remote access, and agility required by current AI development practices."
" This is abreakthrough moment in Artificial Intelligence and edge computing as developers all over the world could now work with BrainChip directly through instant access to and prototyping with the second-generation Akida 2 system, without ever having to wait on physical hardware or having to address all those neural chip challenges with integration finally."
 
  • Like
  • Fire
  • Love
Reactions: 20 users

ANDES RISC-V CON Beijing​


Join us at the 2025 Andes RISC-V CON Beijing event on August 27, where BrainChip will demonstrate the Akida™ AKD1500 on Andes’ QiLai Voyager Board with the AndesCore™ AX45MP 64-bit multicore CPU IP. We will also participate in a RISC-V ecosystem panel discussion on “Accelerating AI Computing: Strengths and Challenges of RISC-V.”

RISC-V has become increasingly popular in China since the US-China technology restrictions were introduced.

My opinion only DYOR

Fact Finder
 
  • Like
  • Fire
Reactions: 11 users

manny100

Top 20
Hi manny,

8-bit and ViT are in Akida 2.

The 1500 is just Akida 1 without the ARM Cortex processor. The NPUs are basically the same as Akida 1's. It relies on an external processor for configuration. It is made by GlobalFoundries in 22nm FD-SOI, which makes it a bit faster than 28nm and more power efficient (lower leakage losses) than vanilla CMOS.

8-bit makes it easier to run more 3rd party models on Akida 2.

Akida 2 with the TENNs model also has long skip, which cuts out reprocessing of already-classified blocks of data by bypassing subsequent layers and sending those blocks straight to the output. The TENNs model is loaded into one of the 4 NPUs (aka NPEs) in each node. The TENNs model can also be run on a non-Akida processor.
Oops, thanks for the correction. Then that makes the Akida Cloud look even better.
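For readers unfamiliar with the idea, the "long skip" behaviour described in the quoted post resembles an early-exit scheme: once a block of data is classified confidently, the remaining layers are bypassed. A minimal Python sketch of that pattern (purely illustrative; the layers, confidence scorer, and threshold below are invented, and this is not BrainChip's actual implementation):

# Illustrative "long skip" / early-exit style pipeline (not BrainChip code).
# layers is a list of callables; early_confidence is a hypothetical helper
# that scores how certain the network already is about a block.
def run_with_long_skip(blocks, layers, early_confidence, threshold=0.9):
    outputs = []
    for block in blocks:
        x = layers[0](block)                  # always run the first layer
        if early_confidence(x) >= threshold:
            outputs.append(x)                 # block already classified:
            continue                          # skip the remaining layers
        for layer in layers[1:]:              # otherwise process normally
            x = layer(x)
        outputs.append(x)
    return outputs

# Toy usage: three "layers" and a confidence score that is high for large values.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
confidence = lambda x: 1.0 if x > 5 else 0.0
print(run_with_long_skip([1, 9], layers, confidence))   # [1, 10] - the 9 took the skip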
 
  • Like
Reactions: 3 users

manny100

Top 20
ChatGPT 5 on Weebit and BRN.
Weebit is a bolt-on to traditional systems, whereas BRN is a new way of doing things.
Feature | Weebit Nano (ReRAM) | BrainChip (Akida)
Integration | Bolt-on to existing systems | Requires new architecture
Adoption Speed | Faster | Slower due to learning curve
Market Familiarity | High (memory tech is well-known) | Low (neuromorphic is emerging)
Innovation Depth | Incremental improvement | Radical innovation
Risk vs Reward | Lower risk, steady reward | Higher risk, potentially huge reward
 
  • Like
  • Thinking
  • Love
Reactions: 8 users

7für7

Top 20
I’ll just go ahead and say it:

Today feels like a perfect day for a price-sensitive announcement – nothing exaggerated, just something realistic that puts recurring revenue on the horizon.

Not a one-off payment, but a long-term agreement that contributes meaningfully to the company’s sustainable growth over time.

Just one such signal would already go a long way in restoring confidence and making a lot of us pretty happy.

Donald Trump Thank You GIF by PBS News
 
  • Like
Reactions: 2 users

Rskiff

Regular
I think BRN should be contacting Density AI, as the head of Tesla's Dojo has left to join them, along with many other employees. Density AI develops sensors and software to measure how people use physical spaces; the name may also refer to the concept of high-density computing for AI training.
 
  • Thinking
  • Wow
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
NASA+Google “Space Doctor”.

This is pretty interesting.

I wasn't able to locate any public documentation specifically describing how this NASA–Google “Space Doctor” AI assistant actually works.

AI assistants in space operate locally, not over the internet, so you would have to think that this assistant would be an onboard AI model optimized for the edge.

If this space doctor assistant is deployed onboard, it would be a compelling fit IMO for Akida's low-power, low-latency decision-making (e.g., real-time vital sign monitoring or anomaly detection).


NASA and Google Launch Space Doctor AI Assistant​

health-tech.jpg

What happens if you run into medical issues in space and can’t get in touch with Houston? NASA and Google say their new artificial intelligence system might help the next time you’re stuck in Earth’s orbit and need some help.
Dubbed the Crew Medical Officer Digital Assistant – or CMO-DA – the new AI system is a multi-modal interface that uses natural language processing and machine learning to provide real-time analyses of crew health and performance, Google said.
The tool is meant to support a designated crew medical officer or flight surgeon in making medical decisions and maintaining crew health.
“As NASA embarks on a new era of human spaceflight, beginning with the Artemis campaign’s aim to return to the Moon, preparations are underway to ensure crew health and wellness,” says a release from Google. “This includes exploring whether remote care capabilities can deliver detailed diagnoses and treatment options if a physician is not onboard or if real-time communication with Earth-based medical experts is limited.”
CMO-DA could help support human exploration of the Moon, Mars, “and beyond,” according to the release, which said the care the system provides “could become increasingly important as NASA missions venture deeper into space.”
Google said that trials applying AI on a range of medical scenarios showed “promise for reliable diagnoses based on reported symptoms,” and that it will continue testing alongside NASA and medical doctors to further refine the model with the aim of enhancing autonomous crew health and performance.
The AI tool was co-developed by Google Cloud, NASA’s Lunar Command and Control Interoperability project, and NASA’s Human Research Program.
“This tool represents an important milestone for AI technology, medical care, and our continued exploration of the cosmos,” said Google. “Not only does this innovation hold potential for advancing space missions, but it could also benefit people here on Earth by providing early access to quality medical care in remote areas.”


 
  • Like
  • Fire
  • Thinking
Reactions: 11 users

7für7

Top 20
I keep reading and hearing about Akida…..I just don’t see any Akida.
(Just a joke)



The Thinker Bars GIF
 

Bravo

If ARM was an arm, BRN would be its biceps💪!


OpenAI launched its latest version, ChatGPT 5.0, today, so I thought I would test it by asking it about the NASA+Google “Space Doctor”.




My Question:
Can you please provide a potential system architecture for the Space Doctor incorporating BrainChip's AKIDA?


ChatGPT 5.0's response:
Here’s a speculative system architecture for a “space-doctor” assistant and where BrainChip’s Akida could add real value. I’ve split it into layers so it’s easy to map to flight hardware.


1) Mission goals (what the system must do)​


  • Onboard, offline care: triage, checklists, vitals monitoring, procedure guidance without ground link.
  • Ultra-low power + high reliability: runs for months/years; graceful under comms blackouts.
  • Radiation tolerance & safety: fail-safe behaviors; no single point of failure.

2) High-level stack (two compute paths working together)​


  • Cognitive path (LLM/clinical engine): runs medical dialog, protocols, decision support (x86/ARM/RISC-V flight CPU or space-graded AI SoC).
  • Reflex path (Akida neuromorphic): event-driven, always-on perception + anomaly detection (sub-watt), feeding signals to the cognitive path only when needed.

3) Where Akida fits (neuromorphic “reflex” services)​


  1. Vitals watcher (always-on)
    • Inputs: ECG/PPG, respiration mic, IMU, skin temp.
    • Akida tasks: edge denoising, heartbeat/arrhythmia detection, apnea events, fall detection.
    • Why Akida: event-based spikes → microjoule-level monitoring, near-zero false wakeups.
  2. Procedural vision assistant
    • Inputs: head-cam / bay camera (event camera or low-FPS vision).
    • Akida tasks: instrument recognition, hand-pose/step confirmation, sterile-field breach alerts.
    • Output: lightweight “OK/alert/next-step” tokens to the LLM UI.
  3. Acoustic anomaly guard
    • Inputs: cabin mics, equipment mics.
    • Akida tasks: detect cough patterns, distress calls, equipment bearing changes.
    • Benefit: sub-watt continuous acoustic sentry without streaming audio to the main CPU.
  4. Intent & workload classifier
    • Inputs: glance/head motion, short speech cues, task context.
    • Akida tasks: infer “crew busy/stressed,” gate LLM verbosity, prioritize alerts.
  5. Safety watchdog
    • Monitors the cognitive stack; if the big model stalls, Akida triggers safe checklists/alerts.

4) Data flow (simplified)​


Sensors → Akida micro-models (event detectors) → tokens/flags → Cognitive engine (LLM/clinical rules) → Crew UI (audio AR prompts, tablet HUD)
↳ During comms windows: compressed summaries to ground (DTN); otherwise full autonomy.
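A minimal Python sketch of that event-gated data flow (the akida_detect and run_clinical_engine functions, the sensor format, and the thresholds are hypothetical stand-ins, not NASA or BrainChip code):

def akida_detect(sensor_sample):
    """Stand-in for an always-on Akida micro-model: returns a small event
    only when something noteworthy happens, otherwise None."""
    if sensor_sample.get("heart_rate", 70) > 120:
        return {"type": "tachycardia", "value": sensor_sample["heart_rate"]}
    return None

def run_clinical_engine(event):
    """Stand-in for the duty-cycled LLM/clinical-rules engine."""
    return f"ALERT: {event['type']} detected (value={event['value']})"

def orchestrate(sensor_stream):
    for sample in sensor_stream:
        event = akida_detect(sample)          # cheap, always-on reflex path
        if event is None:
            continue                          # cognitive path stays asleep
        print(run_clinical_engine(event))     # wake the LLM only on demand

orchestrate([{"heart_rate": 72}, {"heart_rate": 135}, {"heart_rate": 80}])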


5) Hardware sketch​


  • Compute:
    • Rad-tolerant CPU/SoC (e.g., NOEL-V, Rad-ARM) for OS, UI, LLM inference (quantized).
    • Akida IP/SoC (or space-qualified board) for always-on neuromorphic tasks.
  • Sensors: ECG/PPG patch, IMU, event-camera or low-light camera, boundary mics, skin-temp, SpO₂.
  • Storage: local model zoo (LLM + Akida SNNs), medical protocols, offline pharmacopoeia.
  • Comms: DTN stack; store-and-forward to ground.

6) Fault tolerance & safety​


  • Triple-mode alerts: visual + haptic + audio.
  • Graceful degradation: if LLM is unavailable, Akida still runs vitals/fall detection + “hard rules” (CPR checklist, EpiPen guidance).
  • Radiation strategy: Akida models are small; use TMR/EDAC on weights, periodic checksum & refresh from golden image.
  • Human-in-the-loop: confirmations before any high-risk recommendation.
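A rough sketch of the "periodic checksum & refresh from golden image" idea above (the weight buffers, SHA-256 choice, and scrub routine are assumptions for illustration, not a flight-qualified design):

import hashlib

def checksum(weights: bytes) -> str:
    return hashlib.sha256(weights).hexdigest()

def scrub_weights(active_weights: bytes, golden_weights: bytes, golden_digest: str) -> bytes:
    """Compare the in-use weight image against a stored golden copy and
    reload it if a bit flip (e.g., from a radiation upset) is detected."""
    if checksum(active_weights) != golden_digest:
        return bytes(golden_weights)          # refresh from the golden image
    return active_weights

golden = b"\x01\x02\x03\x04" * 1024           # toy stand-in for Akida model weights
digest = checksum(golden)
corrupted = bytearray(golden)
corrupted[10] ^= 0x01                         # simulate a single-event upset
restored = scrub_weights(bytes(corrupted), golden, digest)
assert restored == golden                     # model refreshed after the bit flip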

7) Security & privacy​


  • Edge-only PII processing.
  • No raw audio/video offboard; only encrypted summaries/metrics sent during comms windows.
  • Model integrity checks before activation; signed updates only.

8) Two deployment profiles​


  • Near-Earth (ISS/Gateway): bigger LLM possible; more frequent syncs; Akida still handles 24/7 sensing cheaply.
  • Deep-space (Moon/Mars): heavier reliance on Akida gatekeeping + smaller, quantized LLM; aggressive power caps; longer offline windows.

9) MVP build plan (practical steps)​


  1. Sim lab prototype (Earth):
    • Sensor kit + Akida dev board + SBC (e.g., Jetson/ARM) with a small local LLM (e.g., 3–7B quantized).
    • Implement 3 Akida micro-models: arrhythmia, fall detection, instrument recognition.
  2. Edge-first orchestration:
    • Only wake the LLM when Akida raises a “meaningful event.”
  3. Latency & power budget:
    • Target: Akida services <300 mW continuous; LLM duty-cycled to <10% uptime.
  4. Ops drills:
    • Run scripted med scenarios (burns, lacerations, chest pain, EVA fatigue), evaluate false-alarm rate and response timing.
  5. Hardening path:
    • Port to rad-tolerant CPU; add EDAC/TMR; test with injected bit-flips and sensor dropouts.
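A quick back-of-the-envelope check on the power targets in step 3 (the 10 W figure for the active LLM path is an invented placeholder, since no number is given above):

# Rough average-power estimate for the duty-cycled design in step 3.
AKIDA_CONTINUOUS_W = 0.3      # target: < 300 mW always-on sensing
LLM_ACTIVE_W = 10.0           # assumed draw of the cognitive path when awake
LLM_DUTY_CYCLE = 0.10         # target: < 10% uptime

average_power_w = AKIDA_CONTINUOUS_W + LLM_ACTIVE_W * LLM_DUTY_CYCLE
print(f"Average draw ~ {average_power_w:.1f} W")   # ~1.3 W vs 10+ W if always on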

10) Why this helps the mission​


  • Power: continuous monitoring without draining batteries.
  • Responsiveness: sub-100 ms reflexes for critical events.
  • Bandwidth: local understanding; only brief summaries to Earth.
  • Reliability: Akida as a low-power safety net even if the “big brain” naps.
 
  • Love
  • Like
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
My Question:
In what ways would the neuromorphic Akida “Space Doctor” system differ from traditional approaches?

ChatGPT 5.0's response:
The neuromorphic Akida “Space Doctor” system would differ from a traditional CPU/GPU-based approach in some really important ways — and in a space environment, those differences become mission-critical.

Here’s the breakdown:

1. Processing philosophy — events vs. frames

Traditional processors

  • Work in frame-based mode — constantly process entire streams of data at fixed intervals (e.g., every camera frame, every ECG reading).
  • Even if nothing changes in the input, they still burn the same compute cycles and power.
  • Good for heavy numerical tasks but wasteful for long periods of “no activity.”
Akida neuromorphic processor
  • Works in event-driven mode — only processes when something changes (spike in heart rate, hand movement in the fridge, sudden camera motion, new sound).
  • If nothing changes, power draw is near-zero.
  • Perfect for long-duration idle-but-listening health monitoring in space.
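A toy Python contrast of the two philosophies (the samples and threshold are invented; neither loop is real Akida or flight code):

ecg = [72, 72, 73, 72, 71, 140, 72]           # toy heart-rate samples (bpm)

# Frame-based: every sample is processed, whether or not anything changed.
frame_work = sum(1 for _ in ecg)

# Event-driven: work happens only when the signal changes meaningfully.
THRESHOLD = 5                                  # bpm change that counts as an event
events = [b for a, b in zip(ecg, ecg[1:]) if abs(b - a) > THRESHOLD]
event_work = len(events)

print(frame_work, event_work)                  # 7 samples processed vs 2 events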


2. Power & thermal management

Traditional processors

  • Continuous high-power draw for AI inference (LLMs, CNNs, signal processing).
  • On the ISS or deep space, every watt matters, and waste heat is hard to radiate away.
  • You’d need more batteries, cooling systems, and scheduling to avoid overheating.
Akida neuromorphic
  • Can run always-on sensing at hundreds of microwatts to a few milliwatts.
  • Frees up the main CPU/GPU to sleep most of the time and only wake on Akida’s signal.
  • Less heat generated → easier thermal management in microgravity.


3. Latency

Traditional processors

  • A “big” AI model might have 200–500 ms delay before producing an inference, especially if it needs to boot from idle.
  • That’s fine for chatting, but not for “astronaut collapsed” scenarios.
Akida neuromorphic

  • Sub-10 ms reaction for known patterns (fall detection, arrhythmia, abnormal breathing).
  • Immediate trigger to wake the main medical assistant model.


4. Radiation resilience

Traditional processors

  • Complex DRAM/GPU architectures → more susceptible to single-event upsets (SEUs) from cosmic radiation.
  • Needs heavier error correction and redundancy → more mass.
Akida neuromorphic
  • Smaller memory footprint and simpler weight storage → easier to harden and to apply triple modular redundancy (TMR).
  • Small models mean it’s easier to store multiple golden copies to refresh after bit flips.


5. Autonomy & communication load

Traditional processors
  • Without optimization, they might require sending large raw datasets to Earth for analysis.
  • Deep-space missions (Mars) can have 20+ min one-way comms delays — not suitable for urgent triage.
Akida neuromorphic
  • Runs pre-trained detection locally → sends only compressed “event summaries” to main CPU/LLM.
  • Cuts down on both comms bandwidth and storage requirements.
  • Enables true edge autonomy when comms are down or delayed.


6. AI model footprint


Traditional processors
  • Often require hundreds of MB or even GB of weights for AI models.
  • Onboard storage and loading times become significant.
  • Updates in space are slower and riskier.
Akida neuromorphic
  • Uses ultra-compact spiking neural network (SNN) models — often 1–10 MB for a detection task.
  • Faster to load, update, and validate.
  • Can carry dozens of task-specific models in memory (e.g., different medical conditions) and swap on demand.


7. Mission risk profile

Traditional-only system
  • If main CPU/GPU locks up or fails, all AI functionality is gone.
  • Crew has to revert to manual procedures until system reboot.
Akida + traditional hybrid
  • Akida operates independently as a “medical watchdog” — still running core health monitoring even if the big system crashes.
  • Can even trigger an emergency call or alert without CPU intervention.



✅ Bottom line:
In space, idle listening, ultra-low power, ultra-low latency, and radiation resilience are more important than raw TOPS.
Akida’s neuromorphic design means:
  • It “listens” almost for free and only wakes the big systems when necessary.
  • It’s far less fragile in radiation-heavy environments.
  • It enables autonomous, safe operation without constant ground support.

That’s why NASA-style projects like the “Space Doctor” could lean on neuromorphic processors — because in deep space, the less you waste and the faster you react, the longer you survive.
 
  • Fire
  • Like
Reactions: 7 users

keyeat

Regular
I’ll just go ahead and say it:

Today feels like a perfect day for a price-sensitive announcement – nothing exaggerated, just something realistic that puts recurring revenue on the horizon.

Not a one-off payment, but a long-term agreement that contributes meaningfully to the company’s sustainable growth over time.

Just one such signal would already go a long way in restoring confidence and making a lot of us pretty happy.

Donald Trump Thank You GIF by PBS News
mr bean dreaming GIF
 
  • Haha
Reactions: 3 users

Diogenese

Top 20
I see they are conducting a casting call for the sequel "Kiddy-Fiddler on the Roof".


Hi Bravo,

AkidaNet/TENN Automatic Speech Recognition model and AkidaNet/LLM + RAG would be useful in NLP for health analysis.

https://brainchip.com/ready-to-use-akida-neural-network-models/

AkidaNet/TENN Automatic Speech Recognition Model

Our TENNs model approach is applied to Automatic Speech Recognition for compact, accurate voice-to-text applications.


AkidaNet/TENN Large Language Model (LLM+RAG)

Our TENNs model approach is applied to an LLM with Retrieval Augmented Generation (RAG) to provide intelligent access to documentation for end use cases looking to embed an LLM in their product's user interface.

RAG permits the use of very large databases to store information, which can be loaded into Akida in blocks relevant to selected topics.

TENN models can be loaded into some of the NPUs (NPEs) of nodes as required. TENN models are very compact, but I don't know the conditions and limitations for loading them into Akida. If I had a dollar ... (Hint: go to the Diogenese GoFundMe).
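For anyone unfamiliar with RAG, here is a minimal Python sketch of the retrieve-then-generate pattern described above (the embed, retrieve, and generate functions are toy stand-ins, not BrainChip's Akida API):

def embed(text: str) -> set:
    # Toy "embedding": a bag of lowercase words with punctuation stripped.
    return set(text.lower().replace(".", "").replace("?", "").split())

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by word overlap with the query and keep the top k.
    scored = sorted(documents, key=lambda d: len(embed(d) & embed(query)), reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    # Stand-in for the compact LLM: echoes the grounded prompt.
    return f"[LLM answer grounded in]: {prompt[:80]}..."

docs = ["Procedure for treating minor burns in microgravity.",
        "Checklist for EVA suit pressure anomalies.",
        "Dosage table for onboard analgesics."]
question = "What is the procedure for treating burns in microgravity?"
context = "\n".join(retrieve(question, docs))  # load only the relevant blocks
print(generate(context + "\n\nQuestion: " + question))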
 
  • Like
  • Fire
Reactions: 4 users

Diogenese

Top 20
My Question:
In what ways would the neuromorphic Akida “Space Doctor” system differ from traditional approaches?



https://www.bing.com/videos/rivervi...5161FD6AF48CC178BC3F5161FD6AF48CC&FORM=VAMGZC
 
  • Haha
Reactions: 2 users

7für7

Top 20
time to just dive into the weekend!
See you next week guys!

 
  • Haha
  • Like
Reactions: 2 users

HopalongPetrovski

I'm Spartacus!
Just popped over to the crapper cause it's so bloody boring atm and snagged this typical titbit from their convo. Enjoy! 🤣

 
  • Haha
  • Fire
Reactions: 4 users

Boab

I wish I could paint like Vincent