Hi Crabman,

Maybe I have missed it, but has anyone already found out which FPGA provider BrainChip's IP has been demoed on?
(I'm hoping for AMD's Xilinx, but Intel's Altera would be nice too …)
Hadn't given it any thought, although we have used Xilinx in the past.
AMD Versal has been developed for AI:
https://www.xilinx.com/content/dam/...s/xilinx-versal-ai-compute-solution-brief.pdf
The Versal AI Core series solves the unique and most difficult problem of AI inference—compute efficiency—by coupling ASIC-class compute engines (AI Engines) together with flexible fabric (Adaptable Engines) to build accelerators with maximum efficiency for any given network, while delivering low power and low latency. Through its integrated shell—enabled by a programmable network on chip and hardened interfaces—Versal SoCs are built from the ground up to ensure streamlined connectivity to data center compute infrastructure, simplifying accelerator card development.
AI Engines
> Tiled array of vector processors, flexible interconnect, and local memory enabling massive parallelism
> Up to 133 INT8 TOPS with the Versal AI Core VC1902 device, scales up to 405 INT4 TOPS in the portfolio
> Compiles models in minutes based on TensorFlow, PyTorch, and Caffe using Python or C++ APIs
> Ideal for neural networks ranging from CNN, RNN, and MLP; hardware adaptable to optimize for evolving algorithms
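For context on the INT8/INT4 TOPS figures quoted above: low-precision integer inference works by quantizing floating-point weights and activations down to small integers before the hardware multiplies them. Here's a minimal sketch of symmetric INT8 quantization in plain Python (a generic illustration only — not Versal- or Akida-specific, and the function names are my own):

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: map floats into [-127, 127]."""
    scale = (max(abs(v) for v in values) / 127.0) or 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map INT8 codes back to approximate float values."""
    return [v * scale for v in q]

weights = [127.0, -64.0, 32.0, 0.25]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Round-trip error is bounded by half the quantization step (scale / 2);
# the hardware win is that the multiply-accumulates run on 8-bit integers.
```

The INT4 figure in the brief is the same idea with a [-7, 7] code range — coarser, so more TOPS per watt but a larger quantization error per value.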
I'm out of my depth here, but what puzzles me is that Akida/TENNs has a unique NPU architecture, so a pre-baked arrangement might not provide an accurate simulacrum. Maybe we need a more freestyle FPGA so we can build our own NPUs?
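On that architecture point: the usual description of event-driven NPUs like Akida's is that work is only done when an input "spikes" (is non-zero), whereas a conventional dense engine multiplies every input regardless. A toy Python sketch of that difference (purely illustrative — this is not Akida's actual dataflow, just the general sparse-vs-dense idea):

```python
def dense_dot(inputs, weights):
    """Conventional dense engine: one multiply-accumulate per input, zero or not."""
    acc, ops = 0.0, 0
    for x, w in zip(inputs, weights):
        acc += x * w
        ops += 1
    return acc, ops

def event_driven_dot(inputs, weights):
    """Event-driven style: only non-zero (spiking) inputs trigger any work."""
    acc, ops = 0.0, 0
    for x, w in zip(inputs, weights):
        if x != 0:
            acc += x * w
            ops += 1
    return acc, ops

spikes  = [0, 1, 0, 0, 1, 0, 0, 0]                  # sparse, spike-like activations
weights = [0.3, -0.2, 0.5, 0.1, 0.4, -0.6, 0.2, 0.7]
# Both paths compute the same sum, but the event-driven one only does
# work for the two spikes instead of all eight inputs.
```

Which is roughly why a fixed dense-MAC fabric might not mimic Akida's efficiency profile, even if it can reproduce the numerical results.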