Give it a rest. No one is interested in your pissing and moaning.
Ignore button springs to mind.
I think it's time, before the AGM, for them to prove to shareholders that things are being divulged roadmap-wise, with deals etc. Shareholders deserve more. If the roadmap is on par, tell us more.
Before you vote against the board, make sure you understand the company's trajectory. Revisit the roadmap. Look at the engagements. Look at the confirmed applications.
QV's CyberNeuro-RT/Akida Edge box cybersecurity alone has enormous potential.
Frontgrade's GR801 is in development for space applications.
The AFRL/RTX microDoppler project has produced high-precision see-in-the-dark radar.
Onsor is producing EEG-type specs.
Compare the performance of TENNs against the competition.
Also consider the consequences of voting the board out - who would replace them?
Podiatrists have limited success in treating self-inflicted gunshots.
His LLM is hallucinating; there is no official info that says Cisco is using Akida!
ChatGPT should be used with care; you can get it to say almost anything if you ask it in a certain way.
A link supporting this statement would count, but not GPT, sorry!
Lolwut
What's truly disheartening is that if I had left all that I've invested in Brainchip in my super fund, the interest on that amount would probably have been more than the revenue in the 4C we're about to read about over the coming days. However, I do accept responsibility for my decisions after getting caught up in the explosive sales hype.
Cisco Far Edge is traditional mutton dressed up as lamb.
FF
This is the one and only known partnership between CISCO and Brainchip:
https://www.asx.com.au/asxpdf/20161201/pdf/43ddwm40sz8y4v.pdf
Additionally, Peter van der Made and former CEO Mr. DiNardo mentioned CISCO in a number of oral and written presentations.
There was also a research paper some years ago proposing the use of AKIDA neuromorphic compute to secure CISCO routers; however, the researchers had no disclosed connection to either company.
My opinion only DYOR
Fact Finder
Another promising study using Akida
UOC researchers develop a low-power, high-performance AI model
UOC researchers propose a more sustainable, more efficient artificial intelligence model based on spiking neural networks. This model reduces energy consumption. (www.uoc.edu)
Looks like someone is working on a project (or projects) using TENNs for EEG, vision, object detection and LLM/KWS.
The possible author, imo, is from Kherson Uni, but happy to be corrected as I haven't done heaps of digging.
GitHub - mlyu2010/Model-Compilation: This project focuses on evaluating Apache TVM and OpenXLA as ML compilers for deploying two BrainChip's Akida TENNs models (a Large Language Model and a Keyword Spotting Model) across x86 and ARM processors. The
This project focuses on evaluating Apache TVM and OpenXLA as ML compilers for deploying two BrainChip's Akida TENNs models (a Large Language Model and a Keyword Spotting Model) across x86 and... (github.com)
GitHub - mlyu2010/EEG-Edge-Model: Develop, create train and quantize a comprehensive AI models based on TENN (Temporal Event-based neural networks or State Space Recurrent Models) using 2D images and 1D convolutions that are optimized on BrainChip’s
Develop, create train and quantize a comprehensive AI models based on TENN (Temporal Event-based neural networks or State Space Recurrent Models) using 2D images and 1D convolutions that are optimi... (github.com)
mlyu2010/EEG-Edge-Model (Public)
Develop, create, train and quantize comprehensive AI models based on TENNs (Temporal Event-based Neural Networks, or state-space recurrent models) using 2D images and 1D convolutions, optimized on BrainChip's Akida SDK. This will create advanced AI TENN models that can be deployed on the Akida platform for real-time inference.
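To give a rough idea of what "1D convolutions" over an EEG-style time series look like in practice, here is a minimal, generic PyTorch sketch of a 1D-convolutional classifier. It is purely illustrative: the class name, layer sizes, channel count and class count are my own assumptions, and it is not the TENN architecture or anything taken from the repo.

import torch
import torch.nn as nn

class TinyEEGConvNet(nn.Module):
    """Illustrative 1D-conv classifier for multi-channel EEG windows.
    Not the TENN architecture - just a generic sketch."""
    def __init__(self, n_channels: int = 8, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time_steps)
        return self.classifier(self.features(x).squeeze(-1))

# Dummy forward pass: a batch of 4 windows, 8 electrodes, 256 samples each.
model = TinyEEGConvNet()
print(model(torch.randn(4, 8, 256)).shape)  # torch.Size([4, 2])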
Dependencies
- Docker
- Docker Compose
- Python 3.12+
Hardware Acceleration Support
This project supports multiple compute devices for training and inference; device selection is automatic by default but can be configured manually (a minimal sketch of the detection logic follows the list):
- CUDA - NVIDIA GPU acceleration (Linux/Windows)
- MPS - Apple Silicon GPU acceleration (macOS with M1/M2/M3 chips)
- CPU - Fallback for systems without GPU
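As a rough sketch of how that "auto" priority (CUDA, then MPS, then CPU) can be implemented in PyTorch, something like the snippet below would do; the function name pick_device is my own assumption, not code from the repo.

# Hedged sketch of automatic device selection (CUDA -> MPS -> CPU fallback).
import torch

def pick_device(requested: str = "auto") -> torch.device:
    """Resolve a --device argument into a concrete torch.device."""
    if requested != "auto":
        return torch.device(requested)
    if torch.cuda.is_available():           # NVIDIA GPU
        return torch.device("cuda")
    if torch.backends.mps.is_available():   # Apple Silicon GPU
        return torch.device("mps")
    return torch.device("cpu")              # CPU fallback

print(pick_device())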
Description
- Create a Vision model for object detection and classification.
- Create a segmentation model for semantic segmentation.
- Create 1D time series models for EEG or other healthcare use cases
- Create 1D time series for anomaly detection
- Set up development environments for the TENN models and the Akida SDK.
- Create a Docker image for the TENN models and the Akida SDK.
- Create a Docker Compose file for the TENN models and the Akida SDK.
- Create a FastAPI application for the TENN models and the Akida SDK.
- Create unit and integration tests for the TENN models and the Akida SDK.
- Create a script for generating HTML documentation for the TENN models and the Akida SDK.
- Create hardware setup, configuration, testing-environment and troubleshooting guidance for the TENN models and the Akida SDK.
- Train and quantize a PyTorch model using the TENN codebase.
- Export both PyTorch models to ONNX or TorchScript for cross-framework compatibility (a minimal export sketch follows this list).
- Implement TVM compilation pipeline for the models.
- Optimize using auto-scheduling and quantization tools.
- Train the TENN models for efficient and constrained edge use cases
- Generate binaries for x86 and ARM targets (e.g., Raspberry Pi).
- Test on multiple devices and compare metrics: Inference Latency, Memory Usage, Cross-Platform Consistency
- Create reproducible scripts for both workflows.
- Draft a performance analysis report with recommendations.
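For the export step listed above, the usual PyTorch route is torch.onnx.export plus an optional TorchScript trace. A minimal sketch, using a stand-in model and an assumed (batch, channels, time_steps) input shape rather than anything from the repo:

import torch
import torch.nn as nn

# Stand-in model; the real workflow would load its trained TENN checkpoint here.
model = nn.Sequential(
    nn.Conv1d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 256, 2),
)
model.eval()
example = torch.randn(1, 8, 256)  # assumed input shape

# ONNX export for TVM / OpenXLA and other cross-framework tooling.
torch.onnx.export(model, example, "tenn_model.onnx",
                  input_names=["input"], output_names=["logits"],
                  opset_version=17)

# TorchScript export as an alternative deployment format.
torch.jit.trace(model, example).save("tenn_model.pt")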
Features
- FastAPI Framework: Modern, fast Python web framework (a skeleton endpoint sketch follows this list)
- Docker Support: Full Docker and docker-compose setup
- Comprehensive Tests: Unit and integration tests included
- API Documentation and Testing via Web UI: http://localhost:8000/docs
- Production and Development Environments: 'docker-compose.yml' and 'docker-compose.prod.yml' files included
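For context on the FastAPI piece, a skeleton like the one below is roughly what serves the /docs UI mentioned above. The endpoint paths, request schema and placeholder "inference" are my assumptions, not the repo's actual code.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="TENN edge models")  # run with: uvicorn app.main:app --reload

class EEGWindow(BaseModel):
    # One window of samples per channel; field name and shape are assumptions.
    samples: list[list[float]]

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/predict")
def predict(window: EEGWindow) -> dict:
    # Placeholder: a real app would run the quantized TENN model here.
    return {"channels": len(window.samples), "prediction": "tbd"}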
Installation
Using Docker (Recommended)
docker-compose down                   # stop any running containers
docker-compose build --no-cache       # rebuild the images from scratch
docker-compose up -d                  # start the stack in the background
docker-compose logs -f edge-models    # follow the edge-models service logs
Access the API at: http://localhost:8000/docs
Local Installation
# Create virtual environment
python3.12 -m venv .venv
source .venv/bin/activate
# Install dependencies
pip install --upgrade pip
pip install -r requirements.txt
# Run application
uvicorn app.main:app --reload
Device Support:
Note: See INSTALL_NOTES.md for detailed installation instructions and troubleshooting.
- Auto-detect: --device auto (default - automatically selects best available)
- Apple Silicon: --device mps (M1/M2/M3 Macs)
- NVIDIA GPU: --device cuda (requires CUDA toolkit)
- CPU: --device cpu (all platforms)
And.....
This project focuses on evaluating Apache TVM and OpenXLA as ML compilers for deploying two of BrainChip's Akida TENNs models (a Large Language Model and a Keyword Spotting Model) across x86 and ARM processors. The project implements optimized workflows for both frameworks, analyzes performance metrics, and documents best practices for deployment.
Dependencies
- Docker
- Docker Compose
- Python 3.13
Description
- Set up development environments for TVM (using Hugging Face integration) and XLA (via PyTorch/XLA).
- Export both PyTorch models to ONNX or TorchScript for cross-framework compatibility.
- Implement TVM and OpenXLA compilation pipelines for both models (a TVM sketch follows this list).
- Optimize using auto-scheduling and quantization tools.
- Generate binaries for x86 and ARM targets (e.g., Raspberry Pi).
- Test on multiple devices and compare metrics: Inference Latency, Memory Usage, Cross-Platform Consistency
- Create reproducible scripts for both workflows.
- Draft a performance analysis report with recommendations.
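To make the TVM half of that pipeline more concrete, here is a minimal sketch of compiling an exported ONNX model for a 64-bit ARM target with TVM's Relay frontend. The file names, input shape, target triple and cross-compiler are assumptions, the OpenXLA path is not shown, and the exact API can differ between TVM releases.

import onnx
import tvm
from tvm import relay

# Load the ONNX file produced by the export step (name and shape are assumed).
onnx_model = onnx.load("tenn_model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 8, 256)})

# Cross-compile for a 64-bit ARM board such as a Raspberry Pi.
target = "llvm -mtriple=aarch64-linux-gnu"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Package the compiled module; copy the .so to the device for inference.
lib.export_library("tenn_model_arm.so", cc="aarch64-linux-gnu-g++")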
Features
- FastAPI Framework: Modern, fast Python web framework
- Docker Support: Full Docker and docker-compose setup
- Comprehensive Tests: Unit and integration tests included
- API Documentation and Testing via Web UI: http://localhost:8000/docs
- Production and Development Environments: 'docker-compose.yml' and 'docker-compose.prod.yml' files included
Installation
docker-compose down
docker-compose build --no-cache
docker-compose up -d
docker-compose logs -f model-compilation
We should be seeing more demos like this. Seeing the technology in action is invaluable.
#ces2026 #brainchip #haila #ces2026 #innovation #futuretech | BrainChip
Partner Spotlight: Redefining Low Power at the Edge ⚡️ We're kicking off our #CES2026 partner series by showcasing a seamless low-power stack. By combining BrainChip's on-device inference with HaiLa's low-power Wi-Fi, we've eliminated the need to transmit heavy raw data. Akida™ processes the... (www.linkedin.com)
Thanks @Frangipani
Hi Fullmoonfever,
I’m afraid you’ve got the wrong gentleman.
Check out my 7 January post about Michael Liubchenko (AI/ML CyberSec Advisor for The Spartan AI) and his visit to the CES 2026 BrainChip suite:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-480369
Michael Liubchenko has an M.Sc. in Computer Science from Kharkiv National University of Radio Electronics, Ukraine, but lives in the US now.
As you can tell from his LinkedIn profile, he is definitely the person behind the GitHub account you've discovered.
Home - Secured & Private AI Networks
spartanshield.ai
About Us - Secured & Private AI Networks
spartanshield.ai
Our Products
SpartanShield is a trend setter in:
Private AI Networks
Spartan SecAI Platform – Makes Private AI a reality by automatically provisioning LLMs to personal and business endpoints locally – phones, laptops, servers. Allows you to own your AI knowledge without sharing anything with the cloud.
In recent months, Michael Liubchenko has done quite a bit of contract work for various companies, among them Defense Tek Intelligence aka DefenseTek.AI, which has strong connections to Ukraine and is also partnered with The Spartan AI. He worked for them as an AI/ML CyberSec Advisor from July 2025 to December 2025.