Digital Library [Search Results]
SyllaBERT: A Syllable-Based Efficient Robust Transformer Model for Real-World Noise and Typographical Errors
Seongwan Park, Yumin Heo, Youngjoong Ko
http://doi.org/10.5626/JOK.2025.52.3.250
Training a Korean language model necessitates the development of a tokenizer specifically designed for the unique features of the Korean language, making this a crucial step in the modeling process. Most current language models utilize morpheme-based or subword-based tokenization. While these approaches work well with clean Korean text data, they are prone to out-of-vocabulary (OOV) issues due to abbreviations and neologisms frequently encountered in real-world Korean data. Moreover, actual Korean text often contains various typos and non-standard expressions, to which traditional morpheme-based or subword-based tokenizers are not sufficiently robust. To tackle these challenges, this paper introduces the SyllaBERT model, which employs syllable-level tokenization to effectively address the specific characteristics of Korean, even in noisy and non-standard contexts, with minimal resources. A compact syllable-level vocabulary was created, and a syllable-based language model was developed by reducing the embedding and hidden layer sizes of existing models. Experimental results show that, despite having approximately four times fewer parameters than subword-based models, the SyllaBERT model outperforms them in natural language understanding tasks on real-world conversational Korean data that includes noise.
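The syllable-level tokenization described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the special tokens, vocabulary construction, and helper names are assumptions for demonstration.

```python
# Minimal sketch of syllable-level tokenization for Korean, where each Hangul
# syllable block is treated as one token. The special tokens and vocabulary
# scheme below are illustrative assumptions, not SyllaBERT's actual design.

def syllable_tokenize(text):
    """Split text into syllable tokens: one token per non-space character."""
    return [ch for ch in text if not ch.isspace()]

def build_vocab(corpus, specials=("[PAD]", "[UNK]", "[CLS]", "[SEP]")):
    """Build a compact syllable vocabulary from a corpus of sentences."""
    vocab = {tok: i for i, tok in enumerate(specials)}
    for sentence in corpus:
        for syllable in syllable_tokenize(sentence):
            if syllable not in vocab:
                vocab[syllable] = len(vocab)
    return vocab

def encode(text, vocab):
    """Map text to syllable IDs; unseen syllables fall back to [UNK]."""
    unk = vocab["[UNK]"]
    return [vocab.get(s, unk) for s in syllable_tokenize(text)]
```

Because Hangul composes a bounded set of syllable blocks, such a vocabulary stays small compared to subword vocabularies, which is consistent with the parameter reduction the abstract reports.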
Context Based Real-time Korean Writing Correction for Foreigners
Young-Keun Park, Jae-Min Kim, Seong-Dong Lee, Hyun Ah Lee
http://doi.org/10.5626/JOK.2017.44.10.1087
Korean language education is attracting increasing attention as the number of foreigners who want to learn Korean or reside in Korea grows. Existing spell checkers mostly target native Korean speakers, so they are ill-suited to foreigners. In this paper, we propose a correction method for Korean that reflects both the contextual characteristics of the language and the writing characteristics of foreign learners. Our method extracts expressions frequently used by native speakers by building a syllable-level inverted index over eojeol bigrams extracted from a corpus, uses them as correction candidates, and generates ranked corrections with an improved edit-distance calculation. The system provides a keyboard-hooking user interface, so a user can run the corrector alongside other applications. In foreign-learner writing environments, our system improves the error detection rate by about 45% over existing systems, helping foreign users identify and correct their own writing errors.
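The candidate-retrieval and ranking pipeline described here can be sketched as follows. This is a simplified stand-in: the paper's "upgraded" edit-distance weighting is not specified in the abstract, so standard Levenshtein distance is used, and all function names are illustrative assumptions.

```python
# Hedged sketch of the correction pipeline: a syllable-level inverted index
# over eojeol (word-unit) bigrams supplies candidates, and plain Levenshtein
# distance ranks them. The paper's improved edit-distance calculation is not
# described in the abstract, so standard edit distance is a stand-in here.
from collections import defaultdict

def eojeol_bigrams(sentence):
    """Extract consecutive eojeol (space-delimited word) bigrams."""
    words = sentence.split()
    return [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]

def build_index(corpus):
    """Map each syllable to the set of corpus bigrams containing it."""
    index = defaultdict(set)
    for sentence in corpus:
        for bigram in eojeol_bigrams(sentence):
            for syllable in bigram.replace(" ", ""):
                index[syllable].add(bigram)
    return index

def edit_distance(a, b):
    """Standard Levenshtein distance via one-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def correct(phrase, index, top_k=3):
    """Retrieve candidates sharing a syllable, then rank by edit distance."""
    candidates = set()
    for syllable in phrase.replace(" ", ""):
        candidates |= index.get(syllable, set())
    return sorted(candidates, key=lambda c: edit_distance(phrase, c))[:top_k]
```

The inverted index keeps candidate retrieval fast even for large corpora, since only bigrams sharing at least one syllable with the input need to be scored.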
Expansion of Word Representation for Named Entity Recognition Based on Bidirectional LSTM CRFs
Named entity recognition (NER) seeks to locate and classify named entities in text into pre-defined categories such as persons, organizations, locations, and time expressions. Recently, many state-of-the-art NER systems have been implemented with bidirectional LSTM-CRFs. Deep learning models based on long short-term memory (LSTM) generally depend on word representations as input. In this paper, we propose an approach that expands the word representation by combining a pre-trained word embedding, a part-of-speech (POS) tag embedding, a syllable embedding, and a named-entity dictionary feature vector. Our experiments show that the proposed approach creates useful word representations as input to a bidirectional LSTM-CRF, achieving performance 8.05%p higher than a baseline NER that uses only the pre-trained word embedding vector.
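The expanded word representation described above can be sketched as the concatenation of several per-token vectors. All dimensions, lookup tables, and the averaging of syllable vectors below are illustrative assumptions; the paper's actual embedding sizes and training setup may differ.

```python
# Hedged sketch of the expanded word representation: a pre-trained word
# embedding, a POS-tag embedding, a syllable embedding, and a named-entity
# dictionary feature are concatenated into one input vector for the
# BiLSTM-CRF. Dimensions and random-initialized tables are assumptions.
import random

_word_table, _pos_table, _syl_table = {}, {}, {}

def _embed(token, table, dim):
    """Look up (or lazily initialize) a fixed random vector for a token."""
    if token not in table:
        rng = random.Random(token)  # deterministic per token
        table[token] = [rng.uniform(-1, 1) for _ in range(dim)]
    return table[token]

def word_representation(word, pos_tag, ne_dict,
                        word_dim=8, pos_dim=2, syl_dim=2):
    """Concatenate word, POS, syllable, and dictionary-feature vectors."""
    word_vec = _embed(word, _word_table, word_dim)
    pos_vec = _embed(pos_tag, _pos_table, pos_dim)
    # Syllable embedding: average of the word's per-syllable vectors.
    syl_vecs = [_embed(s, _syl_table, syl_dim) for s in word]
    syl_vec = [sum(v[i] for v in syl_vecs) / len(syl_vecs)
               for i in range(syl_dim)]
    # Dictionary feature: 1.0 if the word appears in the NE dictionary.
    dict_vec = [1.0 if word in ne_dict else 0.0]
    return word_vec + pos_vec + syl_vec + dict_vec
```

In a real system the concatenated vector per token would be fed, one timestep at a time, into the bidirectional LSTM whose outputs the CRF layer decodes into entity labels.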
Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr