Open Access

WTASR: Wavelet Transformer for Automatic Speech Recognition of Indian Languages

Department of Electronics and Communication, GLA University, Mathura 281406, India
Chandigarh University, Mohali 140413, India

Abstract

Automatic speech recognition (ASR) systems translate speech signals into the corresponding text representation. This translation is used in a variety of applications, such as voice-enabled commands, assistive devices, and bots. Efficient ASR technology for Indian languages is significantly lacking. In this paper, a wavelet transformer for automatic speech recognition (WTASR) of Indian languages is proposed. Because a speaker's speech varies over time, the signal contains both high- and low-frequency content at different instants; wavelets therefore enable the network to analyze the signal at multiple scales. The wavelet decomposition of the signal is fed into the network to generate the text. The transformer network comprises an encoder-decoder system for speech translation. The model is trained on an Indian language dataset to translate speech into the corresponding text. The proposed method is compared with other state-of-the-art methods. The results show that the proposed WTASR achieves a low word error rate and can be used for effective speech recognition of Indian languages.
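
The paper does not publish reference code, so the following is a minimal, illustrative sketch of the pipeline the abstract describes: a multiscale wavelet decomposition of the speech signal whose coefficient bands are fed to a transformer encoder-decoder that emits text tokens. The wavelet family (Daubechies-4), the decomposition depth (3 levels), the model sizes, and the TinyWTASR class itself are assumptions made for illustration, not the authors' configuration.

```python
# Illustrative sketch only; hyperparameters are assumptions, not the paper's.
import numpy as np
import pywt
import torch
import torch.nn as nn

def multiscale_features(signal: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Decompose a 1-D speech signal into stacked multiscale wavelet bands."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA3, cD3, cD2, cD1]
    max_len = max(len(c) for c in coeffs)
    # Zero-pad every band to a common length, then stack into (level+1, T).
    bands = [np.pad(c, (0, max_len - len(c))) for c in coeffs]
    return np.stack(bands, axis=0).astype(np.float32)

class TinyWTASR(nn.Module):
    """Toy encoder-decoder: wavelet-band frames in, character logits out."""
    def __init__(self, n_bands: int = 4, d_model: int = 128, vocab: int = 64):
        super().__init__()
        self.proj = nn.Linear(n_bands, d_model)   # per-frame band vector -> model dim
        self.tok = nn.Embedding(vocab, d_model)   # target character embedding
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab)      # logits over the character vocabulary

    def forward(self, feats: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # feats: (batch, T, n_bands); tokens: (batch, L) target character ids
        return self.out(self.transformer(self.proj(feats), self.tok(tokens)))

if __name__ == "__main__":
    sig = np.random.randn(1600)                   # 0.1 s of 16 kHz audio, kept short
    feats = multiscale_features(sig)              # (4, T)
    x = torch.from_numpy(feats.T).unsqueeze(0)    # (1, T, 4)
    y = torch.zeros(1, 10, dtype=torch.long)      # dummy target ids
    print(TinyWTASR()(x, y).shape)                # torch.Size([1, 10, 64])
```

In the actual system the decoder would be trained against character transcriptions with a sequence loss; the stub above only verifies that the wavelet bands flow through the encoder-decoder with the expected shapes.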

Big Data Mining and Analytics, Pages 85-91
Cite this article:
Choudhary T, Goyal V, Bansal A. WTASR: Wavelet Transformer for Automatic Speech Recognition of Indian Languages. Big Data Mining and Analytics, 2023, 6(1): 85-91. https://doi.org/10.26599/BDMA.2022.9020017


Received: 31 May 2022
Revised: 06 June 2022
Accepted: 21 June 2022
Published: 24 November 2022
© The author(s) 2023.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
