Learning
Learning to the Top 🕵♂️
Wow, SabThatGermanChap.
Thanks for sharing.
That's definitely more validation for Brainchip's Akida.
Learning 🏖
Wow, SabThatGermanChap.
Great find. Great news!!!
Hey mate, I'm going for a WOW, FIRE & LOVE on this one. Good stuff.
So we know that Teksun partnered with us.
We know that Vivoka partnered with Teksun recently too.
View attachment 36224
View attachment 36225
Hmmmm...clutching?...would be nice
Vivoka challenges the voice assistant giants with its offline solution
Written by Anaïs DAUFFER
Paris, France. April 11, 2023. Vivoka is announcing an NLU (Natural Language Understanding) technology that is as powerful as the cloud, but running in an embedded voice assistant. In doing so, Vivoka is challenging the biggest names in voice technology such as Siri, Alexa or Google Assistant, as the French company stands out for its ability to operate offline and thus advocate data protection and Green IT.
An almost human voice assistant?
The NLU (natural language understanding) allows voice assistants to understand any vocal command as long as the user’s intention is clear. Today, artificial intelligence pushes the limits of what is possible by allowing real interactions between humans and machines.
The voice assistant focuses on the intention and not on the words in particular. The machine, which learns from examples, will refine its understanding to interact more easily, quickly and widely with humans; this is called Machine Learning. Until now, embedded voice assistants have included predefined phrases and voice commands could not go outside of this framework.
“The current boundaries of embedded voice assistants lie in their limited ability to understand complex sentences. The NLU we’re working on will enable tomorrow’s assistants to not only perform as well as those available in the cloud, but also limit energy impact and protect our customers’ data.” said William Simonin, Vivoka’s co-founder and CEO.
The particularity of Vivoka, since the creation of its VDK (Voice Development Kit), is that everything runs offline (embedded). The voice assistants, implemented in any device, can be used without an internet connection, which gives the final product complete independence. This independence already applies to Vivoka's own development, which does not use any external components for this version of its software.
Operating offline mainly reduces the environmental impact of data transfer and data storage. Vivoka's digital sobriety allows its customers to meet their sustainability goals while having a voice assistant as powerful as connected ones.
The power of the cloud but embedded?
Vivoka is the first company to achieve this!
“It was time to bring to the industry and to consumer electronic devices, the possibility to control their interfaces by voice, without having to make them dependent on the Internet, for a secure and operational use instantly. Well, it’s done with our voice development software VDK. In 2023, Vivoka brings embedded voice control to life with revolutionary AI functionality.” – confirms Emmanuel Chaligné, Product Director.
Many sectors, such as defense, industry, logistics or even household appliances, cannot depend on an internet-connected solution. As a result, demand from players in these sectors is growing. Vivoka is established not only in France but worldwide, thanks to this still-rare technology. The development of this functionality based on natural language seemed logical to them:
“Today we export our VDK technology to all five continents and are present in many industrial sectors. We know the demand. Our partners have been waiting for a long time for a high-performance solution that allows their users to talk naturally to their systems, without going through the Internet. The technical barrier that we are going to remove with the first embedded NLU will make this possible. This was the logical next step in our development. It will allow Vivoka to show that the cloud is not an obligation, and that a company should not have to choose between performance and data protection.” says William Simonin.
Here are a few examples of how an embedded NLU can be used:
- Professional setting:
- Voice implemented in dental office chairs will allow practitioners to naturally and easily control patient height and tilt.
- The NLU will enable next generation robots to be equipped with speech. Voice is a natural interaction and robots will be able to replicate human actions in many fields such as education, industry or health.
- Private setting:
- In the future, household appliances will be controlled by voice without the need for the house to be connected and without an internet connection for these appliances. Users will not need to learn ready-made phrases, they will be able to speak naturally, as they would with other humans.
These are only three examples among the many sectors concerned, such as IoT (Internet of Things), connected home, transport, logistics, education, connected glasses/headsets, robotics, defense and health.
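To make the "intent, not exact words" idea above concrete, here is a minimal, generic sketch of example-driven intent matching. It is not Vivoka's VDK API; the phrases, intent names and similarity approach are made up purely for illustration:

```python
# Toy intent matcher: map an utterance to the intent of its most similar
# example phrase, so new wordings still resolve without predefined commands.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical training examples (e.g. the dental-chair use case above).
examples = {
    "move the seat up": "chair_up",
    "raise the chair": "chair_up",
    "lower the chair": "chair_down",
    "tilt the patient back": "recline",
}

vectorizer = TfidfVectorizer()
example_vectors = vectorizer.fit_transform(list(examples))

def classify(utterance: str) -> str:
    """Return the intent of the closest example phrase."""
    scores = cosine_similarity(vectorizer.transform([utterance]), example_vectors)
    return list(examples.values())[scores.argmax()]

# A phrasing never seen verbatim still lands on the right intent.
print(classify("could you move the seat up a little"))  # chair_up
```

A production embedded NLU does far more (tokenisation, language models, confidence thresholds, on-device optimisation), but the principle is the same: the system generalises from example phrases instead of requiring fixed command sentences.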
Fantastic @cosors! Thanks for sharing!!! Sometimes we just have to be patient to harvest the nectar and not the sour fruit.
View attachment 36217
https://www.islinc.com/wp-content/uploads/2022/07/IMS2022-Radar-Overview-Guerci.pdf
"Although it was NOT developed for these applications. A real game changer!"
View attachment 36223
AKIDA BALLISTA – GAME CHANGING – UBIQUITOUS – GROUND-BREAKING – UNRIVALLED PERFORMANCE – MAGIC SAUCE –
KRYPTONITE – AN ORDER OF MAGNITUDE MORE EFFICIENT – 10x BIGGER THAN MICROSOFT – 11X BIGGER THAN BEN HUR – REVOLUTIONARY – LIGHTNING FAST – DE FACTO STANDARD – SUPER CHIP ON STEROIDS
Cheers D
Hi Fmf,
As you know, Vivoka's NLU is software.
"Indeed, the voice assistants, implemented in any device, can be used without an internet connection"
This is their picture of their Natural Language Understanding tech:
View attachment 36231
Akida 1 could do the automatic speech recognition. I'm not sure how deeply into the Natural Language Understanding module Akida 2 would get, but if Akida 2 can do the job of the NLU, Vivoka is out of business.
Well, they have their model libraries and word-association/sentence interpretation, but ... there are only about 350K words in the English language (nouns, verbs, adjectives, adverbs, gerunds, prepositions, conjunctions, ...). I suppose French and many other languages would be in the same ballpark. Of course, then there are the accents and dialects ...
Cheers D
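As a rough sanity check on that vocabulary argument, here is a back-of-envelope estimate. The embedding size and quantisation below are assumptions for illustration only, not Vivoka or BrainChip figures:

```python
# Back-of-envelope: would a word-embedding table for the full English
# vocabulary fit in an embedded device's memory budget?
vocab_size = 350_000        # words, per the figure quoted above
embedding_dim = 64          # assumed per-word embedding size
bytes_per_weight = 1        # assumed 8-bit quantised weights

table_mb = vocab_size * embedding_dim * bytes_per_weight / 1e6
print(f"Embedding table: ~{table_mb:.0f} MB")  # ~22 MB, before any pruning
```

So even a naive full-vocabulary table lands in the tens of megabytes, which is large for a microcontroller but plausible for the kind of edge devices these embedded voice assistants target.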
I was looking at that diagram on their site not long ago.
I was also just reading a recent blog by ReadSpeaker, who have already partnered with Vivoka on their TTS.
Excerpt from the blog quoting Aurélien Chapuzet from Vivoka.
It does seem focused on hardware partners... given some of the hardware development that Teksun does, I wonder if we get in that way possibly?
IoT Voice Control: How a Voice Revolution Is Changing the Internet of Things
www.readspeaker.com
Published on May 8, 2023
Where is IoT voice control heading next?
So far, voice technology has focused on software. That software is deployed on IoT devices, certainly, but it often depends on cloud solutions—processing the data on a distant server.
The future will be in voice hardware. "We need hardware that's built from the ground up to leverage voice interaction," says Chapuzet. Some CPUs (think Apple) are already designed precisely for their operating software. Voice should be the same, with computing hardware that's optimized to support voice on the device, further limiting the requirements of cloud computing.
View attachment 36232
“At Vivoka, we all imagine a world where voice is the most natural way to interact with our surrounding technologies. With clients such as Vuzix, we are very proud to participate in a revolution that is carried by powerful innovation players. Wearables like Vuzix’s famous smart glasses will bring even more thrust in the voice tech mass adoption by companies. Our goal is more than ever to provide developers with state-of-the-art technologies that are easy to adopt and to turn into real inventions.” said William SIMONIN, Vivoka’s CEO and Co-Founder.
“Our broad language support is a distinct competitive advantage for us across many foreign markets and we are happy to be working with a speech technology leader such as Vivoka to enable this,” said Paul Travers, President and CEO of Vuzix. “We continue to add additional functionalities and feature enhancements to our Smart Glasses all the time, many based on the ongoing customer feedback we receive, as we continue to leverage the power and performance of our robust hardware platform.”
A bit older (2022-02), but for me as a sceptical German it's nice to see any sign that Akida is known here. It's about a project funded by a federal state. And what do my tired eyes discover:
"PROJECT RESULTS
In the course of the project, "event-based" cameras were used in parallel with classic "frame-based" cameras to create a data set with twelve different gestures/actions. This dataset was used to train both classical neural networks (CNN+LSTM) and spiking neural networks to classify the gestures shown. The networks trained in this way were run on corresponding hardware accelerators and the two resulting systems (classic and event-based) were compared in terms of prediction accuracy, electrical power consumption and generated data rate. The event-based system achieved higher prediction accuracy with a significantly lower data rate and electrical power consumption. The figure shows a section of the demonstration system. Further details can be extracted from the final report."
View attachment 36197
"The SNNs were also ported as far as possible to the neuromorphic hardware platform BrainChip Akida and evaluated there in terms of throughput and electrical power consumption. A comparison with the simulation of the SNN on a GPU-based computer and with the values of the KNN-based system can be seen in Table 3."
That means power consumption and frames per second.
View attachment 36195
The entire final report, translated, is attached.
It is/was about:
View attachment 36196
The Germans can read the report here:
https://www.embeddedneurovision.de/...2-02-abschlussbericht-embeddedneurovision.pdf
https://www.embeddedneurovision.de/
https://www.fzi.de/
https://www.inferics.com/
https://www.hs-analysis.com/
Who knows, maybe one of the project participants is among us?
"In addition, the system is connected to an Nvidia Jetson NX8 AI processor and a BrainChip Akida as a neuromorphic AI processor.
neural networks that the partners have developed as part of the project. developed by the partners as part of the project.
...
The development kits CeleX5_MP from CelePixel Technology Co. LTD and Gen 4.0 from Prophesee are used as hardware.
...
In the end, Norse was used in combination with Tonic as a framework, as these promised sufficient functionality, fast results and easy integration of event camera data. Recurrent, convolutional and fully connected variants were chosen as the SNN models.
...
The best results were obtained with a combination of recurrent, fully connected and convolutional layers, which achieve a very high accuracy of about 98% for 3 classes with fewer than 400,000 parameters. A detailed list and comparison of the different networks and accuracies can be found in Table 2.
...
HS Analysis has experienced great interest from our existing customers as well as some potential new customers in applying SNNs in commercial products, especially in the area of process monitoring. The main interest here is in the reduced energy consumption and increased data protection of event-based cameras. This enables data-protection-compliant process monitoring even in areas accessible to the public, and the reduced power requirement offers the possibility of operating on-edge devices via Power-over-Ethernet (PoE), which simplifies deployment. Our previous core product, the HSA KIT, which includes a toolbox of various customised AI analyses, can also be extended with the SNN knowledge gained in the project. This means that advanced, minimalistic time series analyses are now possible, for which we have already started the quotation and ordering process."
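For anyone curious what the Norse + Tonic combination named in the report looks like in code, here is a minimal, untested sketch of a convolutional + recurrent SNN on binned event-camera frames. The project's 12-gesture dataset and exact architecture are not public, so the public DVSGesture dataset (11 classes) and made-up layer sizes stand in:

```python
# Sketch: event frames via Tonic, small convolutional + recurrent SNN via Norse.
import torch
import tonic
import tonic.transforms as T
import norse.torch as norse

# Bin raw DVS events into dense frames (downloads the dataset on first use).
sensor_size = tonic.datasets.DVSGesture.sensor_size           # (128, 128, 2)
to_frames = T.ToFrame(sensor_size=sensor_size, n_time_bins=20)
dataset = tonic.datasets.DVSGesture(save_to="./data", train=True,
                                    transform=to_frames)

class GestureSNN(torch.nn.Module):
    def __init__(self, n_classes: int = 11):
        super().__init__()
        self.net = norse.SequentialState(
            torch.nn.Conv2d(2, 8, 3, stride=2, padding=1),     # 128 -> 64
            norse.LIFCell(),                                    # spiking activation
            torch.nn.Conv2d(8, 16, 3, stride=2, padding=1),    # 64 -> 32
            norse.LIFCell(),
            torch.nn.Flatten(),
            norse.LIFRecurrentCell(16 * 32 * 32, 128),          # recurrent spiking layer
            torch.nn.Linear(128, n_classes),
            norse.LICell(),                                     # non-spiking readout
        )

    def forward(self, frames):                                  # frames: (T, B, 2, H, W)
        state, out = None, None
        for t in range(frames.shape[0]):                        # step through time bins
            out, state = self.net(frames[t], state)
        return out                                              # readout membrane potentials

frames, label = dataset[0]
frames = torch.as_tensor(frames, dtype=torch.float32).unsqueeze(1)  # add batch dim
logits = GestureSNN()(frames)
print(logits.argmax(dim=-1).item(), "true:", label)
```

The report then describes porting networks of this style, as far as the layer types allow, onto the Akida hardware and comparing throughput and power consumption against a GPU-based simulation.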
_____________________
I would also give a general tip: don't always search in English. For the first time today, I had the idea of searching in my own language ("gepulste neuronale Netze" ~ spiking neural networks).
Does FCAS mean anything to you? There is no entry here.
https://en.wikipedia.org/wiki/Future_Combat_Air_System
In addition to the companies mentioned above, the Fraunhofer-Institut is deeply involved in the development of military equipment, as I heard on the radio yesterday. That was new to me. It is one of the top research institutions in our country.
Intellisense seem to agree Akida is a good fit for these types of use cases too.
In a recent interview with Edge Impulse, Sean Hehir mentioned that they are working on communication as something exciting. Zack kind of jokingly said, "What, like walkie-talkies?"
Then Sean went on to say that what he was referring to was more like telecommunications, radio technologies, etc.
This comment from Joe talking about RF signal processing is interesting in light of what Sean talked about in that interview. I will have to go back and re-listen to it, as I can't quite pinpoint it exactly.
But this is what ChatGPT says RF signal processing is:
RF signal processing refers to the techniques and methods used for processing signals in the radio frequency (RF) range, typically from a few kilohertz to several gigahertz. These signals are used in a wide range of applications, including radio and television broadcasting, telecommunications, radar, and wireless networking.
The processing of RF signals involves a range of techniques, including modulation, demodulation, filtering, amplification, and frequency conversion. These processes are used to extract useful information from the RF signal, remove unwanted noise and interference, and prepare the signal for further processing or transmission.
RF signal processing is an important field of study in electrical engineering and communications, as it plays a critical role in the design and operation of many modern technologies.
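As a toy illustration of the modulate/demodulate/filter steps listed above (nothing Akida- or BrainChip-specific; all numbers are arbitrary):

```python
# Amplitude-modulate a 1 kHz tone onto a 100 kHz carrier, then recover it
# with envelope detection and a low-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1_000_000                                   # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)                   # 10 ms of signal
message = np.sin(2 * np.pi * 1_000 * t)          # 1 kHz baseband tone
carrier = np.cos(2 * np.pi * 100_000 * t)        # 100 kHz "RF" carrier
rf = (1 + 0.5 * message) * carrier               # modulation (AM)

envelope = np.abs(hilbert(rf))                   # demodulation (envelope detector)
b, a = butter(4, 5_000 / (fs / 2))               # 5 kHz low-pass filter
recovered = filtfilt(b, a, envelope) - 1.0       # filtering + DC removal

print(np.corrcoef(message, recovered)[0, 1])     # correlation close to 1.0
```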