jtardif999
Regular
I particularly like this claim (I like all the claims); to me it reads as describing Akida 2:
“[0081] The sound prediction learning rule minimizes or at least reduces the error between a prediction of the upcoming speech sounds generated by the acoustic model 124 (or a prediction of future feature coefficients that will be received by the acoustic model 124 ) and the actual observed speech sounds (or the actual feature coefficients). This can be important as it allows the speech processing engine 120 to operate in a fully self-supervised manner in which only the future speech of the user is needed to adjust the parameters of the acoustic model 124 . In addition, partial predictions can also be used to improve efficiency. For example, predicting the speech sounds of parts of speech while other parts are indecipherable due to background noise or an end of speech event is not detected reduces the latency in providing a response to the user. The sound prediction learning rule can be processed continuously by the neuromorphic processor 123 allowing for real-time and continuous self-learning of the acoustic model 124 . In this example, the acoustic model 124 is configured to output the predicted speech sounds and/or predicted feature coefficients for future utterances that the acoustic model 124 predicts the user to make. The acoustic model 124 can predict future utterances based on the predicted speech sounds. For example, the acoustic model 124 can predict the future utterances based on a probability of the future utterances following the predicted speech sounds output by the acoustic model 124 .”
The acoustic model mentioned here can sit either in the cloud or on the device itself.
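For anyone curious what the "sound prediction learning rule" in [0081] amounts to, here is a minimal sketch of the idea. The patent does not disclose a model form, learning rule, or feature size, so everything below (a linear predictor over 13 feature coefficients, a delta-rule update, the learning rate) is an illustrative assumption; only the loop itself — predict the next frame, compare against what was actually heard, adjust parameters to shrink the error, no labels needed — reflects the quoted text.

```python
import numpy as np

rng = np.random.default_rng(42)

N_COEFFS = 13   # e.g. one frame of feature coefficients (size assumed)
LR = 0.05       # learning rate (assumed)

W = np.zeros((N_COEFFS, N_COEFFS))   # predictor parameters, untrained

def self_supervised_step(current_frame, observed_next_frame):
    """Predict the upcoming frame, compare it with the frame actually
    observed, and nudge the parameters to reduce the prediction error.
    The observed speech itself is the only teaching signal."""
    global W
    predicted = W @ current_frame
    error = observed_next_frame - predicted       # prediction error
    W += LR * np.outer(error, current_frame)      # delta-rule update
    return float(np.mean(error ** 2))

# Synthetic "speech" stream: the next frame is a fixed linear function
# of the current one. Real feature streams are far richer; this only
# shows the error-driven loop converging without any labels.
true_W = rng.normal(scale=0.3, size=(N_COEFFS, N_COEFFS))
errors = []
for _ in range(500):
    frame = rng.normal(size=N_COEFFS)
    errors.append(self_supervised_step(frame, true_W @ frame))

print(f"first-frame error: {errors[0]:.3f}, final error: {errors[-1]:.2e}")
```

The prediction error shrinks toward zero purely from listening to the incoming stream, which is the "fully self-supervised" property the paragraph emphasises.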
This patent is a significant development for the company's fortunes, imo. Thank you @TECH for the find.