Looks like someone is working on a project (or projects) using TENNs for EEG, vision, object detection, and LLM / KWS.
The likely author, imo, is from Kherson Uni, but happy to be corrected as I haven't done heaps of digging.
mlyu2010/EEG-Edge-Model (Public)
Develop, train, and quantize comprehensive AI models based on TENNs (Temporal Event-based Neural Networks, or state-space recurrent models) using 2D images and 1D convolutions, optimized with BrainChip’s Akida SDK. This will create advanced AI TENN models that can be deployed on the Akida platform for real-time inference.
Dependencies
- Docker
- Docker Compose
- Python 3.12+
Hardware Acceleration Support
This project supports multiple compute devices for training and inference:
- CUDA - NVIDIA GPU acceleration (Linux/Windows)
- MPS - Apple Silicon GPU acceleration (macOS with M1/M2/M3 chips)
- CPU - Fallback for systems without GPU
Device selection is automatic by default but can be configured manually.
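The automatic device selection described above might look something like the sketch below. This is a hypothetical illustration, not the project's actual code; it assumes PyTorch is the framework (consistent with the export steps later in the README) and falls back to CPU when no accelerator is found.

```python
# Hypothetical sketch of automatic device selection; the project's real
# logic may differ. Falls back to "cpu" when torch (or a GPU) is unavailable.
def select_device(requested: str = "auto") -> str:
    if requested != "auto":
        return requested  # honour an explicit --device choice
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch installed at all
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU acceleration (Linux/Windows)
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return "mps"  # Apple Silicon GPU (macOS)
    return "cpu"  # portable fallback

print(select_device())
```

An explicit request (e.g. `select_device("cpu")`) bypasses detection, matching the "can be configured manually" behaviour described above.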
Description
- Create a Vision model for object detection and classification.
- Create a segmentation model for semantic segmentation.
- Create 1D time-series models for EEG and other healthcare use cases.
- Create 1D time-series models for anomaly detection.
- Set up development environments for the TENN models and the Akida SDK.
- Create a Docker image for the TENN models and the Akida SDK.
- Create a Docker Compose file for the TENN models and the Akida SDK.
- Create a FastAPI application for the TENN models and the Akida SDK.
- Create unit and integration tests for the TENN models and the Akida SDK.
- Create a script for generating HTML documentation for the TENN models and the Akida SDK.
- Document hardware setup, configuration, testing, and troubleshooting for the TENN models and the Akida SDK.
- Train and quantize a PyTorch model using the TENN codebase.
- Export the trained PyTorch models to ONNX or TorchScript for cross-framework compatibility.
- Implement TVM compilation pipeline for the models.
- Optimize using auto-scheduling and quantization tools.
- Train the TENN models for efficient, resource-constrained edge use cases.
- Generate binaries for x86 and ARM targets (e.g., Raspberry Pi).
- Test on multiple devices and compare metrics: inference latency, memory usage, and cross-platform consistency.
- Create reproducible scripts for both workflows.
- Draft a performance analysis report with recommendations.
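The latency and memory comparison in the steps above could be sketched with the standard library alone. This is an illustrative harness, not the project's benchmarking code; `benchmark` and the toy workload are hypothetical names, and a real report would also capture device memory and per-platform output consistency.

```python
import statistics
import time
import tracemalloc

def benchmark(fn, *args, warmup=3, runs=20):
    """Return (median latency in ms, peak Python heap in KiB) for fn(*args).

    Stdlib-only sketch: tracemalloc sees only Python-heap allocations,
    not GPU or native-library memory.
    """
    for _ in range(warmup):  # warm caches before timing
        fn(*args)
    latencies = []
    tracemalloc.start()
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        latencies.append((time.perf_counter() - t0) * 1e3)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return statistics.median(latencies), peak / 1024

# Toy stand-in for a model's forward pass.
lat_ms, peak_kib = benchmark(lambda xs: [x * x for x in xs], list(range(10_000)))
print(f"median latency: {lat_ms:.3f} ms, peak heap: {peak_kib:.1f} KiB")
```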
Features
- FastAPI Framework: Modern, fast Python web framework
- Docker Support: Full Docker and docker-compose setup
- Comprehensive Tests: Unit and integration tests included
- API Documentation and Testing via Web UI: http://localhost:8000/docs
- Production and Development Environments: 'docker-compose.yml' and 'docker-compose.prod.yml' files included
Installation
Using Docker (Recommended)
docker-compose down
docker-compose build --no-cache
docker-compose up -d
docker-compose logs -f edge-models
Access the API at:
http://localhost:8000/docs
Local Installation
# Create virtual environment
python3.12 -m venv .venv
source .venv/bin/activate
# Install dependencies
pip install --upgrade pip
pip install -r requirements.txt
# Run application
uvicorn app.main:app --reload
Device Support:
- Auto-detect: --device auto (default - automatically selects best available)
- Apple Silicon: --device mps (M1/M2/M3 Macs)
- NVIDIA GPU: --device cuda (requires CUDA toolkit)
- CPU: --device cpu (all platforms)
Note: See INSTALL_NOTES.md for detailed installation instructions and troubleshooting.
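A minimal command-line interface matching the `--device` flags listed above could look like this. The parser name and description are illustrative assumptions; the project's real entry point may wire these options differently.

```python
import argparse

# Hypothetical CLI sketch for the --device options; the project's actual
# entry point and option wiring may differ.
parser = argparse.ArgumentParser(description="Run TENN training/inference")
parser.add_argument(
    "--device",
    choices=["auto", "cuda", "mps", "cpu"],
    default="auto",
    help="compute device ('auto' picks the best available)",
)
args = parser.parse_args(["--device", "mps"])  # example invocation
print(args.device)  # → mps
```

Because `choices` is set, an unsupported value such as `--device tpu` fails fast with a usage error instead of propagating into the training code.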
And.....
This project focuses on evaluating Apache TVM and OpenXLA as ML compilers for deploying two of BrainChip's Akida TENN models (a Large Language Model and a Keyword Spotting model) across x86 and ARM processors. The project implements optimized workflows for both frameworks, analyzes performance metrics, and documents best practices for deployment.
Dependencies
- Docker
- Docker Compose
- Python 3.13
Description
- Set up development environments for TVM (using Hugging Face integration) and XLA (via PyTorch/XLA).
- Export both PyTorch models to ONNX or TorchScript for cross-framework compatibility.
- Implement TVM and OpenXLA compilation pipelines for both models.
- Optimize using auto-scheduling and quantization tools.
- Generate binaries for x86 and ARM targets (e.g., Raspberry Pi).
- Test on multiple devices and compare metrics: inference latency, memory usage, and cross-platform consistency.
- Create reproducible scripts for both workflows.
- Draft a performance analysis report with recommendations.
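The "cross-platform consistency" metric in the steps above amounts to checking that the same model compiled for different targets (e.g. TVM on x86 vs. ARM) produces numerically matching outputs. A stdlib-only sketch, with illustrative function names and made-up logit values:

```python
import math

def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two output vectors."""
    return max(abs(x - y) for x, y in zip(a, b))

def outputs_consistent(out_a, out_b, atol=1e-5):
    """True when both outputs have equal length and agree within atol."""
    return len(out_a) == len(out_b) and all(
        math.isclose(x, y, abs_tol=atol) for x, y in zip(out_a, out_b)
    )

# Made-up example logits from two hypothetical compiled artifacts.
x86_logits = [0.1234567, 0.9876543, -0.5000001]
arm_logits = [0.1234570, 0.9876540, -0.5000004]
print(outputs_consistent(x86_logits, arm_logits))  # → True
print(f"max abs diff: {max_abs_diff(x86_logits, arm_logits):.2e}")
```

The tolerance would need tuning per model: quantized builds can legitimately diverge more than float builds, so a single `atol` for all targets is a simplification.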
Features
- FastAPI Framework: Modern, fast Python web framework
- Docker Support: Full Docker and docker-compose setup
- Comprehensive Tests: Unit and integration tests included
- API Documentation and Testing via Web UI: http://localhost:8000/docs
- Production and Development Environments: 'docker-compose.yml' and 'docker-compose.prod.yml' files included
Installation
docker-compose down
docker-compose build --no-cache
docker-compose up -d
docker-compose logs -f model-compilation