Sensors, Vol. 22, Pages 7319: Improving Hybrid CTC/Attention Architecture for Agglutinative Language Speech Recognition
Sensors doi: 10.3390/s22197319
Authors:
Zeyu Ren
Nurmemet Yolwas
Wushour Slamu
Ronghe Cao
Huiru Wang
Unlike traditional models, the end-to-end (E2E) ASR model does not require linguistic resources such as a pronunciation dictionary; the system is built from a single neural network and achieves performance comparable to that of traditional methods. However, the model requires massive amounts of training data. Recently, hybrid CTC/attention ASR systems have become more popular and have achieved good performance even under low-resource conditions, but they are rarely applied to Central Asian languages such as Turkish and Uzbek. We extend the dataset by adding noise to the original audio and applying speed perturbation. To improve the performance of an E2E agglutinative-language speech recognition system, we propose a new feature extractor, MSPC, which uses convolution kernels of different sizes to extract and fuse features at different scales. The experimental results show that this structure is superior to VGGnet. In addition, the attention module is improved. By using the CTC objective function during training and a BERT model to initialize the language model in the decoding stage, the proposed method accelerates model convergence and improves the accuracy of speech recognition. Compared with the baseline model, the character...
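The abstract does not spell out the MSPC architecture, but the core idea it names (parallel convolution branches with different kernel sizes whose outputs are fused) can be sketched as a toy example. The function names, the averaging kernels, and the choice of kernel sizes below are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def conv1d_same(x, kernel):
    # 1-D convolution with 'same' padding, so the output length equals len(x).
    k = len(kernel)
    pad = k // 2
    xp = np.pad(x, (pad, k - 1 - pad))
    return np.array([np.dot(xp[t:t + k], kernel) for t in range(len(x))])

def multi_scale_features(x, kernel_sizes=(3, 5, 7)):
    # One branch per kernel size: a small kernel captures fine detail,
    # a large kernel a wider temporal context. Here each branch is a
    # simple averaging filter (a stand-in for a learned conv layer),
    # and fusion is done by stacking the branch outputs.
    branches = [conv1d_same(x, np.ones(k) / k) for k in kernel_sizes]
    return np.stack(branches, axis=0)  # shape: (num_scales, T)

feats = multi_scale_features(np.arange(10, dtype=float))
```

In a real extractor the branches would be trainable 2-D convolutions over spectrogram features and fusion might be concatenation or weighted summation; this sketch only illustrates the multi-scale receptive-field idea.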