So, have we got an offshoot or rebranding as part of the restructure process at NVISO?
BeEmotion.ai.
I thought the layout, images and graphics looked similar when I first looked at their homepage, and digging deeper I found some of what I've snipped below.
When I scrolled to the bottom of the page I could see the site is managed by Tropus here in Perth. I know exactly where their office is, as I drive past it for certain client meetings.
There is also a small Japanese link at the bottom which takes you to the Japanese site.
beemotion.ai
HUMAN BEHAVIOUR AI
NEUROMORPHIC COMPUTING
BeEmotion empowers system integrators to build AI-driven human machine interfaces to transform our lives using neuromorphic computing. Understand people and their behavior in real-time without an internet connection to make autonomous devices safe, secure, and personalized for humans.
NEUROMORPHIC COMPUTING INTEROPERABILITY
ULTRA-LOW LATENCY WITH LOW POWER
ULTRA-LOW LATENCY (<1MS)
Total BeEmotion Neuro Model latency is similar for GPU and the BrainChip Akida™ neuromorphic processor (300 MHz); however, CPU latency is approximately 2.4x slower. All models on all platforms can achieve <10ms latency, and the best model can achieve 0.6ms, which is almost 2x faster than a GPU. On a clock frequency normalization basis, this represents an acceleration of 6x.
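If I read the "clock frequency normalization" claim correctly, it's just the raw speedup scaled by the ratio of the other chip's clock to Akida's 300 MHz. A quick sketch of that arithmetic below; note the ~900 MHz GPU clock is my own inference from the two quoted numbers, not something the site states:

```python
# Back-of-envelope check of the clock-frequency-normalised speedup quoted above.
# Raw speedup is the latency ratio; the normalised figure additionally scales
# by the ratio of the competitor's clock to Akida's 300 MHz clock.

AKIDA_CLOCK_MHZ = 300  # Akida clock stated on the page

def normalised_speedup(raw_speedup: float, reference_clock_mhz: float,
                       akida_clock_mhz: float = AKIDA_CLOCK_MHZ) -> float:
    """Speedup after normalising both platforms to the same clock frequency."""
    return raw_speedup * (reference_clock_mhz / akida_clock_mhz)

# The page quotes ~2x raw over GPU and 6x normalised; that implies a GPU clock
# around 3x Akida's, i.e. roughly 900 MHz (my inference, not a quoted spec).
implied_gpu_clock_mhz = (6 / 2) * AKIDA_CLOCK_MHZ
print(implied_gpu_clock_mhz)              # 900.0 MHz (implied)
print(normalised_speedup(2.0, 900))       # 6.0 -- matches the quoted figure
```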
HIGH THROUGHPUT (>1000 FPS)
BeEmotion Neuro Model performance can be accelerated by an average of 3.67x using the BrainChip Akida™ neuromorphic processor at 300 MHz over a single ARM Cortex-A57 core, as found in an NVIDIA Jetson Nano (4GB), running at close to 5x the clock frequency. On a clock frequency normalization basis, this represents an acceleration of 18.1x.
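The 18.1x figure here checks out with the same normalisation arithmetic, assuming the Nano's Cortex-A57 runs at roughly 1.48 GHz ("close to 5x" the 300 MHz Akida clock). That clock figure is my assumption from the wording, not quoted on the page:

```python
# Sanity check of the quoted Jetson Nano throughput comparison.
AKIDA_CLOCK_MHZ = 300
A57_CLOCK_MHZ = 1479          # assumed Jetson Nano CPU clock (~1.48 GHz)

raw_speedup = 3.67            # quoted average acceleration over a single A57 core
clock_ratio = A57_CLOCK_MHZ / AKIDA_CLOCK_MHZ
print(round(clock_ratio, 2))                # ~4.93, i.e. "close to 5x"
print(round(raw_speedup * clock_ratio, 1))  # ~18.1, the quoted normalised figure
```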
SMALL STORAGE (<1MB)
BeEmotion Neuro Models can achieve a model storage size under 1MB, targeting ultra-low-power MCU systems where onboard flash memory is limited. Removing the need for external flash memory saves cost and power. The BrainChip Akida™ format uses 4-bit quantisation, whereas the ONNX format uses Float32.
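For context on the storage claim: going from Float32 to 4-bit weights is an 8x reduction on its own. A quick illustrative calculation (the 2M-parameter model size is hypothetical, and this ignores any metadata or compression differences between the formats):

```python
# Rough illustration of why 4-bit quantisation matters for the <1MB target.
# The parameter count below is illustrative, not taken from the site.
BITS_FLOAT32 = 32
BITS_AKIDA = 4                      # 4-bit quantised weights per the page

n_params = 2_000_000                # hypothetical model with 2M parameters
size_float32_mb = n_params * BITS_FLOAT32 / 8 / 1e6   # weights in ONNX Float32
size_akida_mb   = n_params * BITS_AKIDA   / 8 / 1e6   # weights in Akida format
print(size_float32_mb, size_akida_mb)   # 8.0 1.0 -- an 8x reduction from quantisation alone
```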
DESIGNED FOR EDGE COMPUTING
NO CLOUD REQUIRED
PRIVACY PRESERVING
By processing video and audio sensor data locally, the data does not have to be sent over a network to remote servers for processing. This improves data security and privacy, as all processing can be performed disconnected from any central server, a more secure and private architecture that decreases security risks.