BRN Discussion Ongoing

schuey

Regular
Well said Tech. I couldn’t agree more.
Well said Rob .......
 
  • Haha
  • Like
Reactions: 4 users

cosors

👀
A bit older (from 2022-02), but for me as a sceptical German it's nice to see any sign that Akida is known here. It's about a project funded by a German federal state. And what do my tired eyes discover:


"PROJECT RESULTS​

In the course of the project, "event-based" cameras were used in parallel with classic "frame-based" cameras to create a data set with twelve different gestures/actions. This dataset was used to train both classical neural networks (CNN+LSTM) and spiking neural networks to classify the gestures shown. The networks trained in this way were run on corresponding hardware accelerators and the two resulting systems (classic and event-based) were compared in terms of prediction accuracy, electrical power consumption and generated data rate. The event-based system achieved a higher prediction accuracy with a significantly lower data rate and electrical power consumption. The figure shows a section of the demonstration system.final report can be extracted."
1683717696133.png



"The SNNs were also ported as far as possible to the neuromorphic hardware platform BrainChip Akida and evaluated there in terms of throughput and electrical power consumption. A comparison with the simulation of the SNN on a GPU-based computer and with the values of the KNN-based system can be seen in Table 3."

That means power consumption and frames per second:
brnembd.png

The entire final report, translated, is attached.


It is/was about:
embd.png

The Germans can read the report here:
https://www.embeddedneurovision.de/...2-02-abschlussbericht-embeddedneurovision.pdf

https://www.embeddedneurovision.de/

https://www.fzi.de/

https://www.inferics.com/

https://www.hs-analysis.com/

Who knows, maybe one of the project participants is among us?


"In addition, the system is connected to an Nvidia Jetson NX8 AI processor and a BrainChip Akida as a neuromorphic AI processor.
neural networks that the partners have developed as part of the project. developed by the partners as part of the project.
...
The development kits CeleX5_MP from CelePixel Technology Co. LTD and Gen 4.0 from Prophesee are used as hardware.
...
In the end, Norse was used in combination with Tonic as a framework, as these promised sufficient functionality, fast results and easy integration of event camera data. Recurrent, convolutional and fully connected variants were chosen as SNN models.
...
The best results were obtained with a combination of recurrent, fully connected and convolutional layers, which achieve a very high accuracy of about 98% for 3 classes with fewer than 400,000 parameters. A detailed list and comparison of the different networks and accuracies can be found in Table 2.
...
HS Analysis has experienced great interest from our existing customers as well as some potential new customers in applying SNNs in commercial products, especially in the area of process monitoring. The main interest here is in the reduced energy consumption and increased data protection of event-based cameras. This enables data protection-compliant process monitoring even in areas accessible to the public, and the reduced power requirement offers the possibility of operating on-edge devices via Power-over-Ethernet (PoE), which simplifies deployment. Our previous core product, the HSA KIT, which includes a toolbox of various customised AI analyses, can also be extended with the SNN knowledge gained in the project. This means that advanced, minimalistic time series analyses are now possible, for which we have already started the quotation and ordering process."
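For anyone who doesn't want to wade through the report: the frame-based baseline it describes is the standard CNN+LSTM recipe, i.e. a small convolutional network encodes each camera frame and a recurrent layer aggregates the per-frame features over time before classifying the gesture. A minimal PyTorch sketch of that idea (purely illustrative on my part, with made-up layer sizes and input shapes, not the project's actual code):

```python
import torch
import torch.nn as nn

class FrameGestureNet(nn.Module):
    """Toy CNN+LSTM gesture classifier: a small CNN encodes each video frame,
    an LSTM aggregates the per-frame features over time, and a linear head
    predicts one of n_classes gestures."""

    def __init__(self, n_classes: int = 12, feat_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):                        # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))        # (batch*time, feat_dim)
        out, _ = self.lstm(feats.view(b, t, -1)) # (batch, time, feat_dim)
        return self.head(out[:, -1])             # classify from the last step

# Example: a batch of 4 clips, each 30 greyscale frames of 64x64 pixels.
logits = FrameGestureNet()(torch.randn(4, 30, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 12])
```

The SNN side solves the same classification task, but on sparse event streams rather than full frames; the report used Norse together with Tonic for that before porting the networks to Akida.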


_____________________
I would also give a general tip not to always search in English. Today, for the first time, I had the idea of searching in my own language ("gepulste neuronale Netze" ~ spiking neural networks).

Does FCAS mean anything to you? There is no entry here.
https://en.wikipedia.org/wiki/Future_Combat_Air_System
In addition to the companies mentioned above, the Fraunhofer-Institut is deeply involved in the development of military equipment, as I heard on the radio yesterday. That was new to me. It is one of the top research institutions in our country.
 

Attachments

  • 2022-02-abschlussbericht-embeddedneurovision en.pdf
    818 KB · Views: 117
Last edited:
  • Like
  • Love
  • Fire
Reactions: 63 users

cosors

👀
Good morning all,

I thought the tech talk was excellent. Nandan speaks very clearly and deliberately in an attempt to get his message out as clearly as possible, and that's part of his job in Marketing. Communication is key: shutting down misconceptions and/or misunderstandings that could throw a potential client off engaging with us in the first place, or lead them to unintentionally spread misinformation about our Akida suite of products through their network.

A number of posters on this forum have once again filed this webinar under the "nothing new to see" category. Let's be clear here: this webinar was targeted at developers and companies in, or looking to get into, the AI product development space, not Australian Mum and Dad investors wanting to hear that revenue is flowing in and we've just signed Samsung so the share price will burst through the $2.00 mark again in 5 trading days!

Our exposure has never been this engaging before. All the different departments within the BrainChip organization are working hard towards the same team goal: to spread the message, to deliver brilliance to current and future clients, and to make the path to market as uncomplicated as possible.

Some companies have already met us at the intersection, others may be 3/6/9/12 months away from that same point on the road, but once our paths intersect, well, "once you're in, you're in for generations".

Rob spoke well too..:ROFLMAO::ROFLMAO:

Regards....Tech :geek:
Even if they are only small fish compared to your Samsung, I would still like to have the automation sector on board, just 🤏
I hope that doesn't come across as greedy, but 2.5 wouldn't be bad either 😅
It doesn't have to be exactly this order - I'm modest.

Siemens
Emerson
ABB
Schneider Electric
Rockwell Automation
Festo
Fortive
Mitsubishi Electric
Honeywell
...~KUKA
 
  • Like
  • Fire
Reactions: 13 users

schuey

Regular
I'm seriously likeable, I was just informed. ....haha .. y'all don't know me ....hahah
 
Screenshot_20230510_175535_LinkedIn.jpg
 
  • Like
  • Fire
  • Love
Reactions: 136 users

Learning

Learning to the Top 🕵‍♂️
(quoting cosors' post above about the embeddedNeuroVision project)
Fantastic find, cosors.

No matter how old it is, when reading it for the first time it's news to me. It's validation for BrainChip's Akida.

Learning 🏖
 
  • Like
  • Love
  • Fire
Reactions: 30 users

Learning

Learning to the Top 🕵‍♂️
  • Like
  • Love
  • Fire
Reactions: 28 users

cosors

👀
;) Sometimes we just have to be patient to harvest the nectar and not the sour fruits.
islinc.png

https://www.islinc.com/wp-content/uploads/2022/07/IMS2022-Radar-Overview-Guerci.pdf


"ISL Wins Urban Air Mobility Challenge​

Image-300x225.jpeg

(Second from left, ISL’s Project Manager, Alex Talbott)
ISL’s Agility Prime project team submitted an innovative idea to the ATI Urban Air Mobility Innovation Challenge in July 2022. The ISL team was notified of selection to pitch the idea to a select panel of judges at the Defense TechConnect Innovation Summit & Expo in Washington D.C. on September 28th, 2022. The innovative idea was centered around providing neuromorphic radar capabilities to the urban air mobility community for safety and autonomous use.

The ISL team was one of 20 selected companies to pitch their autonomous and urban air mobility ideas to the panel. The top 5 pitches were selected to receive “no strings attached” funding. The pitch was limited to 5 minutes with a 3-minute Q&A after to help the judging panel better understand the technology’s use and project goals. The ISL pitch team received a fair number of questions from the panel, many of which were centered around our neuromorphic training capability and modeling and simulation tool RFview®. ISL’s pitch was selected as one of the winners of the innovation challenge and received an innovation award certificate and an innovation medal."
https://www.islinc.com/isl-wins-urban-air-mobility-challenge
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 67 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

"Even though it was NOT designed with these applications in mind. A real game changer!" ❤️🧠🍟

amorsito-hearts.gif




AKIDA BALLISTA - GAME CHANGER - UBIQUITOUS - GROUND BREAKING - UNPARALLELED PERFORMANCE - MAGIC SAUCE -
KRYPTONITE - ORDERS OF MAGNITUDE MORE EFFICIENT - 10 x BIGGER THAN MICROSOFT - 11 X BIGGER THAN BEN HUR - REVOLUTIONARY - BLAZINGLY FAST - DE FACTO STANDARD - SUPER-CHIP ON STEROIDS🍾🤩🥳
 
Last edited:
  • Like
  • Haha
  • Love
Reactions: 69 users
So we know that Teksun partnered with us.

We know that Vivoka partnered with Teksun recently too.


Screenshot_2023-05-10-21-20-55-68_4641ebc0df1485bf6b47ebd018b5ee76.jpg



Screenshot_2023-05-10-21-21-26-77_4641ebc0df1485bf6b47ebd018b5ee76.jpg


Hmmmm...clutching?...would be nice :unsure:


Vivoka challenges the voice assistant giants with its offline solution


Written by Anaïs DAUFFER


Paris, France. April 11, 2023. Vivoka is announcing an NLU (Natural Language Understanding) technology that is as powerful as the cloud, but running in an embedded voice assistant. In doing so, Vivoka is challenging the biggest in voice technology such as Siri, Alexa or Google Assistant as the French company stands out for its ability to operate offline and thus advocate data protection and Green IT.

An almost human voice assistant?


The NLU (natural language understanding) allows voice assistants to understand any vocal command as long as the user’s intention is clear. Today, artificial intelligence pushes the limits of what is possible by allowing real interactions between humans and machines.

The voice assistant focuses on the intention and not on the words in particular. The machine, which learns from examples, will refine its understanding to interact more easily, quickly and widely with humans; this is called Machine Learning. Until now, embedded voice assistants have included predefined phrases and voice commands could not go outside of this framework.

“The current boundaries of embedded voice assistants lie in their limited ability to understand complex sentences. The NLU we’re working on will enable tomorrow’s assistants to not only perform as well as those available in the cloud, but also limit energy impact and protect our customers’ data.” said William Simonin, Vivoka’s co-founder and CEO.

The particularity of Vivoka, since the creation of its VDK (Voice Development Kit), is the fact that everything is offline (embedded). Indeed, the voice assistants, implemented in any device, can be used without an internet connection, which allows a total independence on the final product. This independence already takes effect in the development of Vivoka, which does not use any external components for this version of its software.

The disconnection mainly reduces the impact of data and data storage on the environment. Vivoka’s digital sobriety allows its customers to meet their sustainability goals while having a voice assistant as powerful as those connected.

The power of the cloud but embedded?


Vivoka is the first company to achieve this!

“It was time to bring to the industry and to consumer electronic devices, the possibility to control their interfaces by voice, without having to make them dependent on the Internet, for a secure and operational use instantly. Well, it’s done with our voice development software VDK. In 2023, Vivoka brings embedded voice control to life with revolutionary AI functionality.” – confirms Emmanuel Chaligné, Product Director.

Many sectors, such as defense, industry, logistics or even household appliances, cannot depend on an internet-connected solution. As a result, the demand from players in these sectors is growing. Vivoka is not only established in France but worldwide thanks to this not so widespread technology. The development of this functionality based on natural language seemed logical to them:

“Today we export our VDK technology to all five continents and are present in many industrial sectors. We know the demand. Our partners have been waiting for a long time for a high-performance solution that allows their users to talk naturally to their systems, without going through the Internet. The technical barrier that we are going to remove with the first embedded NLU will make this possible. This was the logical next step in our development. It will allow Vivoka to show that the cloud is not an obligation, and that a company should not have to choose between performance and data protection.” says William Simonin.

Here are two examples of how an embedded NLU can be used:
  • Professional setting:
    • Voice implemented in dental office chairs will allow practitioners to naturally and easily control patient height and tilt.
    • The NLU will enable next generation robots to be equipped with speech. Voice is a natural interaction and robots will be able to replicate human actions in many fields such as education, industry or health.
  • Private setting:
    • In the future, household appliances will be controlled by voice without the need for the house to be connected and without an internet connection for these appliances. Users will not need to learn ready-made phrases, they will be able to speak naturally, as they would with other humans.

These are only 3 examples among many sectors concerned such as: IoT (internet of things), connected home, transport, logistics, education, connected glasses/headsets, robotics, defense and health
 
  • Like
  • Fire
  • Love
Reactions: 86 users

Deadpool

hyper-efficient Ai
(quoting the Teksun/Vivoka post above)
Hey mate, I'm going for a WOW, FIRE & LOVE on this one. Good stuff.
 
  • Like
  • Love
  • Fire
Reactions: 26 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
Reactions: 11 users

cosors

👀
(quoting the Teksun/Vivoka post above)
threema-20230510-164603046.png

@Deadpool .)
 
Last edited:
  • Like
  • Wow
Reactions: 10 users

Diogenese

Top 20
(quoting the Teksun/Vivoka post above)
Hi Fmf,

As you know, Vivoka's NLU is software.
"Indeed, the voice assistants, implemented in any device, can be used without an internet connection"

This is their picture of their Natural Language Understanding tech:

1683730493955.png



Akida 1 could do the automatic speech recognition. I'm not sure how deeply into the Natural Language Understanding module Akida 2 would get, but if Akida 2 can do the job of the NLU, Vivoka is out of business.

Well, they have their model libraries and word-association/sentence interpretation, but ... there are only about 350K words in the English language (nouns, verbs, adjectives, adverbs, gerunds, prepositions, conjunctions, ...). I suppose French and many other languages would be in the same ballpark. Of course, then there are the accents and dialects ...
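For anyone wondering what that NLU box actually does: at its core it maps a transcript to an intent, rather than matching exact phrases. A toy sketch of the concept in Python (bag-of-words features plus a linear classifier; the commands are made up by me and have nothing to do with Vivoka's actual embedded models, which would be far more compact):

```python
# Toy illustration of the NLU step: map a transcript to an intent label.
# Purely conceptual; an embedded NLU would use much smaller on-device models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of example utterances per intent (hypothetical training data).
examples = [
    ("raise the chair a little",    "chair_up"),
    ("move the patient up please",  "chair_up"),
    ("bring the seat up",           "chair_up"),
    ("lower the chair",             "chair_down"),
    ("tilt the patient down a bit", "chair_down"),
    ("take the seat down",          "chair_down"),
    ("turn the overhead light on",  "light_on"),
    ("switch the lamp on",          "light_on"),
    ("switch off the light",        "light_off"),
    ("turn the lamp off",           "light_off"),
]
texts, intents = zip(*examples)

# Bag-of-words (uni- and bigram) features feeding a linear classifier.
nlu = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
nlu.fit(texts, intents)

# The model keys on the intent behind the words, not on an exact phrase.
print(nlu.predict(["could you move the seat up a touch"]))
```

Vivoka's pitch is that the whole pipeline (ASR plus NLU) runs on-device, which is exactly where the question of how much of it Akida could take over comes in.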
 
  • Like
  • Thinking
Reactions: 27 users

Sirod69

bavarian girl ;-)
(quoting Bravo's "game changer" post above, in German translation)

I would be ready! Are you setting up your pool again?

Summer Water GIF by Apala 9
 
  • Haha
Reactions: 8 users
(quoting Diogenese's post above)
Cheers D

I was looking at that diagram on their site not long ago.

I was also just reading a recent blog by ReadSpeaker, who have already partnered with Vivoka for their TTS.

Excerpt from the blog below, quoting Aurélien Chapuzet from Vivoka.

It does seem focused on hardware partners... given some of the hardware development that Teksun does, I wonder if we could get in that way?


IoT Voice Control: How a Voice Revolution Is Changing the Internet of Things

Published on May 8, 2023

Where is IoT voice control heading next?

So far, voice technology has focused on software. That software is deployed on IoT devices, certainly, but it often depends on cloud solutions—processing the data on a distant server.

The future will be in voice hardware. We need hardware that’s built from the ground up to leverage voice interaction, says Chapuzet. Some CPUs (think Apple) are already designed precisely for their operating software. Voice should be the same, with computing hardware that’s optimized to support voice on the device, further limiting the requirements of cloud computing.



Screenshot_2023-05-10-23-25-15-45_4641ebc0df1485bf6b47ebd018b5ee76.jpg
 
  • Like
  • Fire
  • Thinking
Reactions: 18 users

Sirod69

bavarian girl ;-)

I think someone wrote the wrong year on the homepage?

BrainChip Showcases Edge AI Technologies at 2023 Embedded Vision Summit

Laguna Hills, Calif. – May 10, 2022 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced that Chief Marketing Officer Nandan Nayampally has been selected to present as part of the Enabling Technologies track at the Embedded Vision Summit in the Santa Clara Convention Center May 23 at 4:50 p.m. PDT.


1683733395447.png


 
  • Like
  • Fire
  • Love
Reactions: 25 users

Terroni2105

Founding Member
 
  • Like
  • Love
  • Fire
Reactions: 29 users