Edge Impulse has got NVIDIA TAO working for CPUs:
https://www.edgeimpulse.com/blog/nvidia-tao-omniverse/
Edge Impulse is pleased to announce that we have unlocked previously inaccessible NVIDIA AI capabilities for any edge device with NVIDIA TAO and Omniverse, alongside our launch of native support for NVIDIA Jetson Orin hardware. These developments further amplify what’s achievable at the edge for developers and enterprises who want to accelerate time to market and gain a competitive advantage with world-class AI solutions.
With these new integrations, Edge Impulse provides the only way for developers to deploy NVIDIA technology directly to MCUs and CPUs. Engineers can speed up the use of large NVIDIA GPU-trained models on low-cost MCUs and MPUs with AI accelerators, while accessing a powerful set of tools to create digital twins, synthetic datasets, and virtual model testing environments.
NVIDIA TAO delivers a low-code, open-source AI framework to accelerate vision AI model development suitable for all skill levels — from beginners to expert data scientists.
With the NVIDIA TAO Toolkit integration, Edge Impulse’s enterprise customers can now use the power and efficiency of transfer learning to achieve state-of-the-art accuracy and production-class throughput in record time with adaptation and optimization. This is integrated directly into the Edge Impulse platform for any existing object detection project, available for all enterprise users today.
The TAO Toolkit provides a faster, easier way to create highly accurate, customized, and enterprise-ready AI models to power your vision AI applications. The open-source TAO toolkit for AI training and optimization delivers everything you need, putting the power of the world’s best Vision Transformers (ViTs) in the hands of every developer and service provider.
Built on TensorFlow and PyTorch, the NVIDIA TAO toolkit uses the power of transfer learning while simultaneously simplifying the model training process and optimizing the model for inference throughput on the target platform. The result is an ultra-streamlined workflow. Take your own models or pre-trained models, adapt them to your own real or synthetic data, then optimize for inference throughput. All without needing AI expertise or large training datasets.
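In practice TAO drives this workflow through its own CLI and experiment spec files, but purely as an illustration of the adapt-then-export flow described above, here is a minimal transfer-learning sketch in plain PyTorch (not the TAO API; the model choice, class count, and data path are placeholder assumptions):

```python
# Illustrative transfer-learning sketch in plain PyTorch (NOT the TAO Toolkit API).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Start from a model pre-trained on a large dataset (ImageNet weights here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the backbone so only the new head is adapted to the custom data.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a hypothetical 3-class custom dataset.
num_classes = 3
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Hypothetical folder of real or synthetic training images.
train_data = datasets.ImageFolder(
    "data/train",
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# Export to ONNX so downstream tools can optimize it for the target device.
torch.onnx.export(model.eval(), torch.randn(1, 3, 224, 224), "adapted_model.onnx")
```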
The integration of NVIDIA TAO into Edge Impulse means that engineers can finally utilize NVIDIA’s industry-leading AI models on hardware outside of that offered by NVIDIA, a capability exclusively provided by Edge Impulse.
Over 100 accurate, custom, production-ready computer vision models are accessible via the Edge Impulse and NVIDIA TAO Toolkit integration, allowing engineers to seamlessly deploy to edge-optimized hardware, including the Arm® Cortex®-M based NXP i.MX RT1170, Alif E3, STMicro STM32H747AI, and Renesas EK-RA8D1.
"The advent of generative AI and the growth of IoT deployments means the industry must evolve to run AI models at the edge,” said Paul Williamson, senior vice president and general manager, IoT Line of Business, Arm. “NVIDIA and Edge Impulse have now made it possible to deploy state-of-the-art computer vision models on a broad range of technology based on Arm Cortex-M and Cortex-A CPUs and Arm Ethos™-U NPUs, unlocking a multitude of new AI use cases at the edge."
... and who's Edge Impulse's golden-haired edge AI boy since forever?
https://brainchip.com/brainchip-and-edge-impulse-partner-to-accelerate-ai-ml-deployments/
BrainChip and Edge Impulse Partner to Accelerate AI/ML Deployments
Laguna Hills, Calif. – May 15, 2022 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power neuromorphic AI IP, and Edge Impulse, the leading development platform for machine learning (ML) on edge devices, are partnering to deliver next-generation platforms to customers looking to develop products utilizing the companies’ unique machine learning capabilities.
Edge Impulse is ushering in the future of embedded machine learning by empowering developers to create and optimize solutions with real-world data. The company is making the process of building, deploying, and scaling embedded ML applications easier and faster than ever, unlocking massive value across every industry, with millions of developers making billions of devices smarter.
“Organizations are understanding more and more the importance of implementing machine learning capabilities within their products to turn them into the ‘smart’ devices that consumers are clamoring for,” said Zach Shelby, CEO and co-founder at Edge Impulse. “By integrating solutions, such as deploying BrainChip’s neuromorphic IP with our ML platform, developers and enterprise customers are empowered to build advanced machine learning solutions quickly and efficiently so that they are well-positioned as leaders within their respective markets.”
https://developer.nvidia.com/blog/a...ment-workflows-with-nvidia-tao-toolkit-5-0-2/
NVIDIA TAO Toolkit provides a low-code AI framework to accelerate vision AI model development suitable for all skill levels, from novice beginners to expert data scientists. With the TAO Toolkit, developers can use the power and efficiency of transfer learning to achieve state-of-the-art accuracy and production-class throughput in record time with adaptation and optimization.
NVIDIA released TAO Toolkit 5.0, bringing groundbreaking features to enhance any AI model development. The new features include an open-source architecture, transformer-based pretrained models, AI-assisted data annotation, and the capability to deploy models on any platform.
Release highlights include:
- Model export in open ONNX format to support deployment on GPUs, CPUs, MCUs, neural accelerators, and more (see the sketch after this list).
- Advanced Vision Transformer training for better accuracy and robustness against image corruption and noise.
- New AI-assisted data annotation, accelerating labeling tasks for segmentation masks.
- Support for new computer vision tasks and pretrained models for optical inspection, such as optical character detection and Siamese Network models.
- Open source availability for customizable solutions, faster development, and integration.
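As a rough illustration of the ONNX-export point above (not TAO-specific), here is a minimal sketch of running an exported ONNX model on a plain CPU with ONNX Runtime; the model filename and input shape are placeholder assumptions:

```python
# Minimal sketch: CPU inference on an ONNX-exported model with ONNX Runtime.
# "adapted_model.onnx" is a placeholder for whatever model your pipeline exported.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("adapted_model.onnx",
                               providers=["CPUExecutionProvider"])

# Build a dummy input matching the model's expected shape (NCHW image here).
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy})
print("Output shape:", outputs[0].shape)
```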
This may simplify the process of implementing TAO models on Akida, which would broaden Akida's market appeal.