NVISO SA (NVISO), the leading Human Behavioural Analytics AI company, is pleased that its Neuro SDK will be demonstrated running on the BrainChip Akida platform at the Socionext stand at CES 2023. By porting additional AI Apps from its catalogue, NVISO has further extended the range of Human Behavioural AI Apps it supports on the BrainChip Akida event-based, fully digital neuromorphic processing platform. These additions include Action Units, Body Pose and Gesture Recognition, on top of the Headpose, Facial Landmark, Gaze and Emotion AI Apps previously announced with the launch of the Evaluation Kit (EVK) version. This increased capability supports wider deployment of NVISO’s Human Behavioural Analytics AI software solutions, which can further exploit the performance of BrainChip’s neuromorphic AI processing IP as it is deployed within the next generation of SoC devices. Target applications include Robotics, Automotive, Telecommunication, Infotainment, and Gaming.
“BrainChip’s event-based Akida platform is accelerating today’s traditional networks and simultaneously enabling future trends in AI software applications,” said Rob Telson, VP Ecosystems, BrainChip. “NVISO is a valued partner in BrainChip’s growing ecosystem, and their leadership in driving extremely efficient software solutions gives a taste of what compelling applications are possible at the edge on a minimal energy budget.”
The SDK release supports the latest advancements in the analysis of complex emotions:
This latest release of the SDK supports solution developers across an increased range of use case scenarios, enabling deployment of a wide selection of NVISO’s existing real-time, deep learning-based AI Apps, including face detection, gaze estimation, head pose recognition, facial analysis, emotion recognition, object detection, gesture recognition and body pose analysis, along with the recently announced state-of-the-art graph-based facial analysis applicable to the analysis of complex emotions; a sketch of how such per-frame AI Apps are typically chained follows below.
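To make the shape of such a deployment concrete, the following minimal Python sketch shows how per-frame vision AI Apps of this kind are commonly chained: a face detector runs first, and per-face apps (head pose, gaze, emotion) run on each crop. The functions detect_faces, estimate_head_pose, estimate_gaze and classify_emotion are hypothetical stand-ins for illustration only; this is not the actual NVISO Neuro SDK API.

```python
# Illustrative only: the NVISO Neuro SDK's real API is not shown here.
# Hypothetical per-app callables demonstrate how such AI Apps are typically
# chained per frame in a real-time pipeline.

import cv2  # pip install opencv-python


def detect_faces(frame):
    """Hypothetical stand-in for a face-detection AI App; returns bounding boxes."""
    return [(0, 0, frame.shape[1], frame.shape[0])]  # whole frame as a single "face"


def estimate_head_pose(face_crop):
    """Hypothetical stand-in for a head-pose AI App; returns (yaw, pitch, roll)."""
    return (0.0, 0.0, 0.0)


def estimate_gaze(face_crop):
    """Hypothetical stand-in for a gaze AI App; returns a 2D gaze vector."""
    return (0.0, 0.0)


def classify_emotion(face_crop):
    """Hypothetical stand-in for an emotion AI App; returns a label."""
    return "neutral"


def process_frame(frame):
    """Chain the apps: detect faces, then run per-face analyses on each crop."""
    results = []
    for (x, y, w, h) in detect_faces(frame):
        crop = frame[y:y + h, x:x + w]
        results.append({
            "bbox": (x, y, w, h),
            "head_pose": estimate_head_pose(crop),
            "gaze": estimate_gaze(crop),
            "emotion": classify_emotion(crop),
        })
    return results


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # default camera
    ok, frame = cap.read()
    if ok:
        print(process_frame(frame))
    cap.release()
```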
Implementation of complex emotion analysis using state-of-the-art graph-based facial analysis. Facial expressions are among the most important affective signals, and facial affect analysis (FAA) is essential for developing human-computer interaction systems. Early methods focussed on extracting appearance and geometry features associated with human affects while ignoring the latent semantic information among individual facial changes, leading to limited performance and generalization. Recent work establishes graph-based representations to model these semantic relationships and develops frameworks to leverage them for various FAA tasks.
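The core idea behind graph-based facial analysis can be illustrated with a small, self-contained sketch: facial landmarks become graph nodes, an adjacency matrix encodes which facial regions are treated as related, and a normalised propagation step mixes each landmark's features with those of its neighbours before an affect classifier is applied. The landmark positions and adjacency below are toy values chosen for illustration, not NVISO's actual model.

```python
# Minimal sketch of the graph-based idea behind facial affect analysis (FAA):
# landmarks are graph nodes, edges encode relationships between facial regions,
# and node features are refined by aggregating neighbour information.

import numpy as np

# Toy "landmarks": (x, y) positions for 5 facial points.
landmarks = np.array([
    [0.30, 0.35],  # left eye
    [0.70, 0.35],  # right eye
    [0.50, 0.55],  # nose tip
    [0.35, 0.75],  # left mouth corner
    [0.65, 0.75],  # right mouth corner
])

# Adjacency matrix: 1 where two landmarks are treated as related
# (eyes-nose, nose-mouth). Self-loops keep each node's own feature.
A = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

# One graph-convolution-style propagation step: row-normalise A and average
# neighbour features, so each landmark reflects its facial context.
D_inv = np.diag(1.0 / A.sum(axis=1))
propagated = D_inv @ A @ landmarks

print(propagated)  # context-aware landmark features, ready for an affect classifier
```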