Thanks Rise for the heads up. Looking forward to this benchmarking! The momentum of the Brainchip low power smarts is trending nicely!
I searched for:
low power edge tinyml benchmarking best practices. Lots of info on this, deep stuff.
The Brainchip team will put it in a format that everyone will understand. Should be awesome.
https://www.wevolver.com/article/the-art-and-science-of-benchmarking-neural-network-architectures
The variety of available neural network architectures, configurations, and performance-related parameters makes it challenging to compare alternative ML/DL products and services. To facilitate such comparisons, engineers often turn to proven benchmarks for compute-intensive and data-intensive systems. For example:
- The Standard Performance Evaluation Corporation (SPEC) family of benchmarks focuses on computational workloads over different computing architectures including cloud-based systems.
- The LINPACK Benchmarks provide measures of a computer’s floating-point rate of execution, i.e., they focus on floating-point computing power.
- The TPC consortium, whose members include industry leaders in computing systems and data management, provides a suite of benchmarks for stress-testing the transaction-processing and data-warehousing capabilities of data-intensive systems.
- The High Performance Conjugate Gradients (HPCG) benchmark provides novel metrics that enable the evaluation and ranking of High Performance Computing (HPC) systems.
These benchmarks aid in evaluating different components of ML/DL systems, which are typically deployed on computationally intensive platforms. However, they focus on conventional computational workloads and are therefore insufficient for evaluating the training and execution of deep-learning tasks, which are typically more complex.
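For anyone curious what the LINPACK-style "floating-point rate of execution" idea boils down to, here's a rough sketch (this is my own toy illustration, not the actual LINPACK benchmark): you time a dense matrix multiply, the core LINPACK-type workload, and divide the known operation count by the elapsed time.

```python
# Toy FLOPS estimator: times a naive dense matrix multiply and reports
# throughput in GFLOP/s. Illustrative only -- real LINPACK solves a dense
# linear system with optimized kernels and far larger problem sizes.
import random
import time


def estimate_gflops(n=120):
    # Two random n x n matrices (plain Python lists, no external libraries).
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]

    start = time.perf_counter()
    # Naive triple-loop multiply: roughly 2 * n^3 floating-point operations
    # (one multiply plus one add per inner-loop step).
    c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start

    flops = 2 * n ** 3
    return flops / elapsed / 1e9  # GFLOP/s


if __name__ == "__main__":
    print(f"Estimated throughput: {estimate_gflops():.4f} GFLOP/s")
```

Pure Python will report laughably low numbers compared to a tuned BLAS library, which is exactly why methodology matters so much when you benchmark, and why I'm keen to see how BrainChip frames the low-power edge metrics.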