Digital Library: Search Results
A Study on Improving the Accuracy of Korean Speech Recognition Texts Using KcBERT
Donguk Min, Seungsoo Nam, Daeseon Choi
http://doi.org/10.5626/JOK.2024.51.12.1115
In the field of speech recognition, models such as Whisper, Wav2Vec2.0, and Google STT are widely utilized. However, Korean speech recognition faces challenges because complex phonological rules and diverse pronunciation variations hinder performance improvements. To address these issues, this study proposed a method that combined the Whisper model with a post-processing approach using KcBERT. By applying KcBERT’s bidirectional contextual learning to text generated by the Whisper model, the proposed method could enhance contextual coherence and refine the text for greater naturalness. Experimental results showed that post-processing reduced the Character Error Rate (CER) from 5.12% to 1.88% in clean environments and from 22.65% to 10.17% in noisy environments. Furthermore, the Word Error Rate (WER) was significantly improved, decreasing from 13.29% to 2.71% in clean settings and from 38.98% to 11.15% in noisy settings. BERTScore also exhibited overall improvement. These results demonstrate that the proposed approach is effective in addressing complex phonological rules and maintaining text coherence within Korean speech recognition.
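The reported gains are expressed as Character Error Rate (CER) and Word Error Rate (WER). As a rough illustration of how these two metrics are computed (not the authors' evaluation code), here is a minimal sketch using the jiwer library with made-up Korean strings:

```python
# pip install jiwer
from jiwer import cer, wer

# Illustrative transcripts only; not data from the paper.
reference = "안녕하세요 만나서 반갑습니다"    # ground-truth transcript
hypothesis = "안녕하세오 만나서 반갑습니다"   # ASR output (one character wrong)

# CER counts character-level edits; WER counts word-level edits.
print(f"CER: {cer(reference, hypothesis):.4f}")
print(f"WER: {wer(reference, hypothesis):.4f}")
```

Because Korean packs several phonemes into each syllable block, a single substituted character can flip an entire word, which is why the paper tracks both character- and word-level rates.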
Creating a Noisy-Environment Speech Mixture Dataset for Korean Speech Separation
Jaehoo Jang, Kun Park, Jeongpil Lee, Myoung-Wan Koo
http://doi.org/10.5626/JOK.2024.51.6.513
In the field of speech separation, models are typically trained on datasets that mix speech with overlapping noise. Although established international datasets exist for advancing speech separation techniques, Korean has no comparable dataset of overlapping speech and noise. This paper therefore presents a dataset generator designed for single-channel speech separation models tailored to the Korean language, and introduces the Korean Speech mixture with Noise dataset constructed with it. In our experiments, we train and evaluate a Conv-TasNet speech separation model on the new dataset. We further verify the dataset's efficacy by using a pre-trained speech recognition model to compare the Character Error Rate (CER) of the separated speech against that of the original speech.
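The abstract does not detail the generator itself, but mixture datasets of this kind are typically built by scaling a noise recording to a target signal-to-noise ratio (SNR) and adding it to clean speech. A minimal sketch of that core step, assuming mono WAV input and using hypothetical file names:

```python
# pip install numpy soundfile
import numpy as np
import soundfile as sf

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix."""
    # Tile or truncate the noise to match the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12  # avoid division by zero
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Hypothetical inputs; any mono clean-speech and noise recordings would do.
speech, sr = sf.read("speech_ko.wav")
noise, _ = sf.read("cafe_noise.wav")
sf.write("mixture.wav", mix_at_snr(speech, noise, snr_db=5.0), sr)
```

Sweeping `snr_db` over a range of values is a common way to cover both mild and severe noise conditions in a single generated corpus.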
Analysis of Feature Extraction Methods for Distinguishing the Speech of Cleft Palate Patients
Sung Min Kim, Wooil Kim, Tack-Kyun Kwon, Myung-Whun Sung, Mee Young Sung
This paper presents an analysis of feature extraction methods for distinguishing the speech of patients with cleft palates from that of people with normal palates. This research is a basic study toward a software system for automatic recognition and restoration of speech disorders, in pursuit of improving the welfare of speech-disabled persons. Monosyllable voice data were collected for three groups: normal speech, cleft palate speech, and simulated cleft palate speech. The data consist of 14 basic Korean consonants, 5 complex consonants, and 7 vowels. Feature extraction is performed using three well-known methods: LPC, MFCC, and PLP. Pattern recognition is carried out with a GMM acoustic model. From our experiments, we conclude that the MFCC method is generally the most effective at identifying speech distortions. These results may contribute to the automatic detection and correction of distorted speech in cleft palate patients, along with the development of a tool for grading levels of speech distortion.
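As a hedged illustration of the MFCC-plus-GMM pipeline the abstract describes, the sketch below fits one Gaussian mixture per speaker group on pooled MFCC frames and classifies an unseen utterance by log-likelihood. The file lists, sampling rate, and component count are assumptions, not values from the paper:

```python
# pip install librosa scikit-learn
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load a monosyllable recording and return its per-frame MFCC vectors."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)

# Hypothetical file lists for the two classes.
normal_files = ["normal_01.wav", "normal_02.wav"]
cleft_files = ["cleft_01.wav", "cleft_02.wav"]

# One GMM per class; the component count of 8 is an illustrative choice.
gmm_normal = GaussianMixture(n_components=8).fit(
    np.vstack([mfcc_frames(f) for f in normal_files]))
gmm_cleft = GaussianMixture(n_components=8).fit(
    np.vstack([mfcc_frames(f) for f in cleft_files]))

# Classify an unseen utterance by which class model scores it higher.
test = mfcc_frames("test.wav")
label = "cleft" if gmm_cleft.score(test) > gmm_normal.score(test) else "normal"
print(label)
```

The same skeleton applies to LPC or PLP features; only the feature-extraction function changes, which is what makes the three methods directly comparable in such a study.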

Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr