I had a little debate with myself about whether to post this here or on the Nviso thread, but on balance, while it relates to them, it is mostly about the significance of what Brainchip has achieved with the AKD1000, in particular its capacity to process all five human senses on the one chip.
I am sure you will all recall Rob Telson speaking about this: the person he is talking to has a cup of coffee and can smell it, feel the cup in his hand and hear background noise, but, thanks to the way the human brain works, can focus his attention on what Rob Telson is saying.
I thought I understood what Rob Telson was talking about, and in one sense I did, but on reading the following linked article I realised that Nviso, and everyone else working in this area of robotics, absolutely needs AKIDA technology and this ability. Otherwise, what they are trying to do, which is to give machines the ability to interact with humans and understand their emotional needs, will fail.
While I never discourage reading the entire paper, the extracted conclusion makes clear that this industry needs to process more than one input to achieve reasonable accuracy. Of course, if you have no power constraints and connectivity is not a problem, you can use a supercomputer, but since in mass deployment of these types of robots you would at the very least be worried about power consumption, you need something capable of what AKIDA can do. And guess what: while there may be some low-powered SNN chips coming to market, none has the capacity of the AKD1000 to process all five sensory inputs, and to do so on a ridiculously low power budget without web connectivity.
There is of course the other biggie that no one else has: patent-protected one-shot learning on chip, allowing personalisation of your robotic companion or work assistant.
This paper is from 2017, but AKIDA technology provides Nviso with the ability to supply the robotics market with
"a real-time multimodal affect detector which can effectively and affectively communicate with humans, and feel our emotions."
8. Conclusion
In this paper, we carried out a first of its kind review of the fundamental stages of a multimodal affect recognition framework. We started by discussing available benchmark datasets, followed by an overview of the state of the art in audio-, visual- and textual-based affect recognition. In particular, we highlighted prominent studies in unimodal affect recognition, which we consider crucial components of a multimodal affect detector framework. For example, without efficient unimodal affect classifiers or feature extractors, it is not possible to build a well performing multimodal affect detector. Hence, if one is aware of the state of the art in unimodal affect recognition, which has been thoroughly reviewed in this paper, it would facilitate the construction of an appropriate multimodal framework.

Our survey has confirmed other researchers’ findings that multimodal classifiers can outperform unimodal classifiers. Furthermore, text modality plays an important role in boosting the performance of an audio-visual affect detector. On the other hand, the use of deep learning is increasing in popularity, particularly for extracting features from modalities. Although feature-level fusion is widely used for multimodal fusion, there are other fusion methods developed in the literature. However, since fusion methods are, in general, not being used widely by the sentiment analysis and related NLP research communities, there are significant and timely opportunities for future research in the multi-disciplinary field of multimodal fusion. As identified in this review, some of the other key outstanding challenges in this exciting field include: estimating noise in unimodal channels, synchronization of frames, voice and utterance, reduction of multi-modal Big Data dimensionality to meet real-time performance needs, etc.

These challenges suggest we are still far from producing a real-time multimodal affect detector which can effectively and affectively communicate with humans, and feel our emotions.
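For anyone wondering what "feature-level fusion" means in that conclusion: the feature vectors extracted from each modality (audio, visual, text) are simply concatenated into one long vector before a single classifier makes the call. Below is a minimal Python sketch of the idea only; the feature sizes, the random stand-in data and the choice of classifier are all hypothetical for illustration and are not taken from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in features for 100 utterances per modality (a real system would use
# prosody, facial action units, text embeddings, etc.)
audio_feats  = rng.normal(size=(100, 32))
visual_feats = rng.normal(size=(100, 64))
text_feats   = rng.normal(size=(100, 50))
labels       = rng.integers(0, 2, size=100)  # binary affect label

# Feature-level (early) fusion: concatenate along the feature axis
fused = np.concatenate([audio_feats, visual_feats, text_feats], axis=1)

clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(clf.score(fused, labels))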
My opinion only DYOR
FF
AKIDA BALLISTA