BRN Discussion Ongoing

Rach2512

Regular
  • Like
  • Love
Reactions: 5 users
Before you vote against the board, make sure you understand the company's trajectory. Revisit the Roadmap. Look at the engagements. Look at the confirmed applications.

QV's CyberNeuro-RT/Akida Edge box cybersecurity alone has enormous potential.

Frontgrade's GR801 is in development for space applications.

The AFRL/RTX microDoppler radar has produced high-precision, see-in-the-dark radar.

Onsor is producing EEG-type specs.

Compare the performance of TENNs against the competition.

Also consider the consequences of voting the board out - who would replace them?

Podiatrists have limited success in treating self-inflicted gunshots.
I think it's time, before the AGM, that they prove to shareholders things are being divulged roadmap-wise, with deals etc. Shareholders deserve more. If the roadmap is on par, tell us more.
 
  • Like
Reactions: 4 users

BrainShit

Regular
His LLM is hallucinating; there is no official info that says Cisco is using Akida!

ChatGPT should be used with care: you can get it to say almost anything if you ask in a certain way.

A link supporting this statement would count, but not GPT output, sorry!

What most people don't get is the differences between LLMs.
Use the appropriate AI for the right task... it's just like Akida: you can't use it for just anything and expect accuracy.


LLM-comparison.png
 
  • Like
Reactions: 1 user

TopCat

Regular
What's truly disheartening is that if I had left all that I've invested in BrainChip in my super fund, the interest on that amount would probably have been more than the revenue in the 4C we're about to read about over the coming days. However, I do accept responsibility for my decisions after getting caught up in the explosive sales hype.
 
  • Like
  • Fire
Reactions: 4 users

gex

Regular
What's truly disheartening is that if I had left all that I've invested in BrainChip in my super fund, the interest on that amount would probably have been more than the revenue in the 4C we're about to read about over the coming days. However, I do accept responsibility for my decisions after getting caught up in the explosive sales hype.
Lolwut
 
Looks like someone is working on a project (or projects) using TENNs for EEG, vision, object detection, and LLM/KWS.

The possible author, IMO, is from Kherson Uni, but happy to be corrected as I haven't done heaps of digging.

mlyu2010/EEG-Edge-Model (Public)

Develop, train, and quantize comprehensive AI models based on TENNs (Temporal Event-based Neural Networks, or state-space recurrent models) using 2D images and 1D convolutions, optimized on BrainChip's Akida SDK. This will create advanced AI TENN models that can be deployed on the Akida platform for real-time inference.

Dependencies​

  1. Docker
  2. Docker Compose
  3. Python 3.12+

Hardware Acceleration Support​

This project supports multiple compute devices for training and inference:

  • CUDA - NVIDIA GPU acceleration (Linux/Windows)
  • MPS - Apple Silicon GPU acceleration (macOS with M1/M2/M3 chips)
  • CPU - Fallback for systems without GPU
Device selection is automatic by default but can be configured manually.
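The auto-detection described above can be sketched roughly as follows. This is an illustrative helper only, assuming PyTorch's standard device-query calls; `pick_device` is not the project's actual API:

```python
def pick_device(requested: str = "auto") -> str:
    """Resolve a --device flag to a concrete backend name.

    Mirrors the README's auto-detect order: prefer CUDA, then
    Apple Silicon MPS, then fall back to CPU. Sketch only; the
    project's real logic may differ.
    """
    if requested != "auto":
        return requested  # manual override, e.g. "cpu"
    try:
        import torch  # optional dependency; skip detection if absent

        if torch.cuda.is_available():
            return "cuda"
        if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
            return "mps"
    except ImportError:
        pass
    return "cpu"
```

A caller would simply do `device = pick_device(args.device)` and pass the result to model and tensor placement.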

Description​

  • Create a Vision model for object detection and classification.
  • Create a segmentation model for semantic segmentation.
  • Create 1D time series models for EEG or other healthcare use cases
  • Create 1D time series for anomaly detection
  • Set up development environments for the TENN models and the Akida SDK.
  • Create a Docker image for the TENN models and the Akida SDK.
  • Create a Docker Compose file for the TENN models and the Akida SDK.
  • Create a FastAPI application for the TENN models and the Akida SDK.
  • Create unit and integration tests for the TENN models and the Akida SDK.
  • Create a script for generating HTML documentation for the TENN models and the Akida SDK.
  • Document hardware setup, configuration, the testing environment, and troubleshooting for the TENN models and the Akida SDK.
  • Train and quantize a PyTorch model using the TENN codebase.
  • Export both PyTorch models to ONNX or TorchScript for cross-framework compatibility.
  • Implement TVM compilation pipeline for the models.
  • Optimize using auto-scheduling and quantization tools.
  • Train the TENN models for efficient and constrained edge use cases
  • Generate binaries for x86 and ARM targets (e.g., Raspberry Pi).
  • Test on multiple devices and compare metrics: Inference Latency, Memory Usage, Cross-Platform Consistency
  • Create reproducible scripts for both workflows.
  • Draft a performance analysis report with recommendations.

Features​

  • FastAPI Framework: Modern, fast Python web framework
  • Docker Support: Full Docker and docker-compose setup
  • Comprehensive Tests: Unit and integration tests included
  • API Documentation and Testing via Web UI: http://localhost:8000/docs
  • Production and Development Environments: 'docker-compose.yml' and 'docker-compose.prod.yml' files included

Installation​

Using Docker (Recommended)​

docker-compose down
docker-compose build --no-cache
docker-compose up -d
docker-compose logs -f edge-models

Access the API at: http://localhost:8000/docs

Local Installation​

# Create virtual environment
python3.12 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install --upgrade pip
pip install -r requirements.txt

# Run application
uvicorn app.main:app --reload

Device Support:

  • Auto-detect: --device auto (default - automatically selects best available)
  • Apple Silicon: --device mps (M1/M2/M3 Macs)
  • NVIDIA GPU: --device cuda (requires CUDA toolkit)
  • CPU: --device cpu (all platforms)
Note: See INSTALL_NOTES.md for detailed installation instructions and troubleshooting.

And.....

This project focuses on evaluating Apache TVM and OpenXLA as ML compilers for deploying two of BrainChip's Akida TENN models (a Large Language Model and a Keyword Spotting model) across x86 and ARM processors. The project implements optimized workflows for both frameworks, analyzes performance metrics, and documents best practices for deployment.

Dependencies​

  1. Docker
  2. Docker Compose
  3. Python 3.13

Description​

  • Set up development environments for TVM (using Hugging Face integration) and XLA (via PyTorch/XLA).
  • Export both PyTorch models to ONNX or TorchScript for cross-framework compatibility.
  • Implement TVM and OpenXLA compilation pipelines for both models.
  • Optimize using auto-scheduling and quantization tools.
  • Generate binaries for x86 and ARM targets (e.g., Raspberry Pi).
  • Test on multiple devices and compare metrics: Inference Latency, Memory Usage, Cross-Platform Consistency
  • Create reproducible scripts for both workflows.
  • Draft a performance analysis report with recommendations.
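A minimal, framework-agnostic harness for the Inference Latency metric above might look like this. It is a sketch, not part of either repo; real runs would also record memory usage and cross-platform consistency:

```python
import statistics
import time

def measure_latency(fn, warmup: int = 3, iters: int = 20) -> dict:
    """Time a zero-argument inference callable; report latency in ms.

    Warmup calls absorb one-off costs (JIT, cache fills) before the
    timed iterations, so the stats reflect steady-state latency.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return {
        "mean_ms": statistics.fmean(samples),
        "p50_ms": statistics.median(samples),
        "min_ms": min(samples),
    }
```

Wrapping a TVM- or XLA-compiled model in a zero-argument lambda and passing it to `measure_latency` gives directly comparable numbers across x86 and ARM targets.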

Features​

  • FastAPI Framework: Modern, fast Python web framework
  • Docker Support: Full Docker and docker-compose setup
  • Comprehensive Tests: Unit and integration tests included
  • API Documentation and Testing via Web UI: http://localhost:8000/docs
  • Production and Development Environments: 'docker-compose.yml' and 'docker-compose.prod.yml' files included

Installation​

docker-compose down
docker-compose build --no-cache
docker-compose up -d
docker-compose logs -f model-compilation
 
  • Like
  • Fire
Reactions: 5 users

manny100

Top 20
"And so when does revenue really start to hit its stride? We believe right about now is the right time."
From the AusBus video interview.
It appears that is where the now infamous 'now' got some legs.
Sean was more than likely referring to the commercialisation of Defense Grade MetaGuard-RT. Should be an earner.
Obviously there needs to be good news prior to the AGM or it will be more than a little 'difficult'.

FF

This is the one and only known partnership between CISCO and Brainchip:

https://www.asx.com.au/asxpdf/20161201/pdf/43ddwm40sz8y4v.pdf

Additionally, Peter van der Made and former CEO Mr. DiNardo mentioned CISCO in a number of oral and written presentations.

There was also a research paper some years ago proposing the use of AKIDA neuromorphic compute to secure CISCO routers; however, the researchers had no disclosed connection to either company.

My opinion only DYOR

Fact Finder
Cisco Far Edge is Traditional mutton dressed up as lamb.
" Key features include high-performance modular architecture which combines compute, storage, and networking in a single chassis with CPU/GPU options, SD-WAN, and pre-validated designs."
You can't get the complete benefits of neuromorphic Edge AI unless you actually use neuromorphic hardware.
If, for example, in combat it comes across a situation it was not pretrained for, it's 'stuffed'. It needs to be retrained.
" The Intel Xeon 6 SoC provides higher CPU-native inference performance, enabling critical functionality at the edge that can reduce latency and enhance performance for end user applications." It's 'souped up' traditional AI.
 
Last edited:
  • Like
Reactions: 2 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Larry has been off enjoying the rise of CXO but sticking to his mantra of buying anything under 20c.........still waiting Sean and AntoniNOTED......

20251007_211606.gif


Happy as Larry (Yes.....just)
 
  • Haha
  • Like
  • Fire
Reactions: 3 users

GStocks123

Regular
Another promising study using Akida

 
  • Fire
  • Love
  • Like
Reactions: 5 users

jrp173

Regular

Frangipani

Top 20
Sounds as if the upcoming AKD 2 ASIC tapeout (see roadmap below) will give birth to a silicon chip called AKD2500?



C9386DA9-B644-4BFC-8A72-EFA1B8CB3310.jpeg




BrainChip

Head of Software – AI Toolchain & Developer Experience​



Laguna Hills, CA · 17 hours ago · 9 applicants
Promoted by hirer · Company review time is typically 1 week
$225K/yr - $275K/yr + Bonus, Stock
Hybrid
Full-time


About the job​


BrainChip is hiring a Head of Software to define and deliver a world-class software toolchain and developer experience for Akida NPUs spanning Akida 1500, Akida 2500, and our Gen3 roadmap.

This is a high impact role reporting directly to executive leadership.
Whether you're currently leading a small team or driving strategy as a senior engineer, this role offers a significant increase in scope, ownership, and impact. We’re looking for a high-energy, high-ownership, player/coach leader who wants more impact, loves collaborating with hardware teams, and enjoys building tools that make developers wildly productive.

You’ll guide the evolution of MetaTF with user experience as a primary design principle, while owning execution excellence across performance modeling, RTOS/embedded enablement, training/quantization pipelines, and cost-aware cloud training operations.

What you’ll do
  • Own the multi-generation toolchain strategy (Akida 1500 → Akida 2500 → Gen3): workflows, APIs, model/operator compatibility, versioning, migration paths, and release discipline.
  • Lead the next phase of MetaTF to strengthen usability and end-to-end developer workflow (model → quantize → compile → deploy → debug/profile), with excellent documentation and predictable experiences.
  • Drive execution excellence across the Gen3 workplan:
  • compiler / toolchain delivery
  • runtime systems and interfaces (host + device)
  • observability: logging, tracing, debugging strategy
  • profiling and performance regression detection
  • emulation/simulation enablement (as needed to accelerate validation)
  • Oversee company-wide performance modeling: fast model analysis, operator frequency insights, latency/throughput/energy estimation approaches (as applicable), and reporting that teams trust.
  • Lead RTOS and embedded enablement across products: portability strategy, testability, and clear boundaries between runtime and low-level interfaces.
  • Own training, quantization, and conversion pipelines so research and product teams can move smoothly from experimentation to deployment.
  • Steward cloud training costs: visibility, attribution, guardrails, and practical optimization, balancing speed with sustainability.
  • Establish best practices for agent-assisted development (design, coding, maintenance): accelerating delivery while protecting quality, security, and maintainability.
  • Build a culture of strong collaboration across hardware, software, and research, turning architecture discussions into shipped capabilities.

What you’ll bring
  • Player/coach leadership: You lead teams effectively and you’re comfortable diving into architecture, design reviews, critical-path debugging, and unblockers when needed.
  • International team experience: Proven success leading globally distributed teams across time zones and cultures with clear priorities, crisp communication, and reliable delivery.
  • Toolchain/platform shipping track record: Experience building and shipping developer-facing platforms such as SDKs, compilers/runtimes, ML deployment toolchains, embedded systems, or performance tooling.
  • NPU/accelerator depth: Strong understanding of the hardware/software boundary for accelerators, including:
  • memory movement and bandwidth constraints
  • DMA-style data paths and scheduling considerations
  • ISA/architecture tradeoffs
  • appreciation for dataflow-oriented thinking and near-memory compute concepts
  • Developer experience instincts: You care about how tools feel—onboarding, docs, reproducibility, error messages, debuggability, and “time-to-first-success.”
  • Executive communication: You can communicate at multiple altitudes—deep technical reviews, crisp written decision docs, and clear exec-level updates that drive decisions.
  • Determined and flexible: You push through ambiguity and deliver outcomes, while adapting quickly as new information emerges.
  • High-trust collaboration: You set a high bar, stay constructive under pressure, and build teams people want to work with.
  • You will trust your team to get the job done but not hesitate to put hands on the keyboard to meet deadlines.

Education
  • BS required in Computer Engineering, Electrical Engineering, Computer Science, from a top tier school or equivalent practical experience.
  • MS preferred (EE/CE/CS or similar), especially with exposure to computer architecture, compilers, embedded/RTOS, or ML systems.

Nice to have
  • ML compiler and deployment experience (graph lowering, kernel/operator strategies, quantization workflows).
  • Debugger/profiler/trace tooling experience and strong instincts for IDE-integrated workflows.
  • RTOS/bare-metal expertise in production environments.
  • Open-source contributions and/or active engagement in technical communities.
  • You genuinely enjoy performance deep-dives—memory hierarchies, cache behavior, and attention-cache conversations (yes, including KV cache debates).
  • Multi-lingual and can speak CNNs, RNNs, Transformers and State-Space models with ease.

Why this role is a step up
You’ll shape the software direction for an NPU platform across multiple hardware generations, partner closely with architecture and silicon teams, and deliver the toolchain experience that enables customers and internal teams to move fast with confidence.

*We are not working with agencies on this opening at this time.




9D45AA2A-5FBD-4A4F-9FC1-7DD70541F68D.jpeg
 
  • Love
Reactions: 1 user