Jamo Unit Convolutional Neural Network Based Automatic Classification of Frequently Asked Questions with Spelling Errors

Youngjin Jang, Harksoo Kim, Dongho Kang, Sebin Kim, Hyunki Jang

http://doi.org/10.5626/JOK.2019.46.6.563

Web and mobile users obtain the information they need through the frequently asked questions (FAQ) listed on a homepage. A typical FAQ system returns the candidate answer most similar to the user's input based on an information retrieval model. However, because an information retrieval model depends on an index, it is vulnerable to spelling errors in the input sentence. This paper proposes a model that recasts the FAQ system as a sentence classifier in order to minimize the impact of spelling errors. An embedding layer built on a jamo-unit convolutional neural network reduces the effect of spelling errors in user input, and class embeddings combined with a feed-forward neural network further improve the classifier's performance. On FAQ classification tasks with 457 and 769 classes, the model achieved Micro F1 scores of 81.32%p and 61.11%p, respectively. A sigmoid function was used to quantify the reliability of the model's predictions.
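The jamo-unit approach above rests on decomposing each precomposed Hangul syllable into its constituent jamo before convolution, so that a one-character typo perturbs only one or two jamo in the sequence. A minimal sketch of that preprocessing step, using the standard Unicode Hangul decomposition arithmetic (the function name `decompose_jamo` is illustrative, not from the paper):

```python
# Hypothetical sketch of jamo-unit preprocessing: split each precomposed
# Hangul syllable (U+AC00..U+D7A3) into initial, medial, and optional final
# jamo, the units the convolutional layers would operate on.

CHO = [chr(0x1100 + i) for i in range(19)]          # initial consonants
JUNG = [chr(0x1161 + i) for i in range(21)]         # medial vowels
JONG = [""] + [chr(0x11A8 + i) for i in range(27)]  # final consonants (may be absent)

def decompose_jamo(text):
    """Decompose precomposed Hangul syllables into a flat jamo sequence."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:
            idx = code - 0xAC00
            out.append(CHO[idx // (21 * 28)])         # initial consonant
            out.append(JUNG[(idx % (21 * 28)) // 28])  # medial vowel
            tail = JONG[idx % 28]
            if tail:                                   # final consonant, if any
                out.append(tail)
        else:
            out.append(ch)  # non-Hangul characters pass through unchanged
    return out
```

For example, the syllable 한 decomposes into the three jamo ㅎ, ㅏ, ㄴ; a misspelling that swaps one syllable therefore changes only a small, local part of the model's input sequence.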

Word Embedding using Relative Position Information between Words

Hyunsun Hwang, Changki Lee, Hyunki Jang, Dongho Kang

http://doi.org/10.5626/JOK.2018.45.9.943

In word embedding, which is used to apply deep learning to natural language processing, each word is represented in a vector space. This has the advantage of dimension reduction, and similar words receive similar vector values. Word embeddings must be trained on a large-scale corpus to achieve good performance. However, the widely used word2vec model simplifies its architecture for large-corpus training and mainly learns word co-occurrence rates, so it does not use relative position information between words. In this paper, we modified the existing word embedding learning model so that it can learn using relative position information between words. Experimental results show that the word-analogy performance of the proposed modified word embedding model improves when word embeddings are trained with relative position information between words.
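The distinction the abstract draws can be made concrete at the level of training pairs. A minimal sketch (illustrative only, not the authors' code): plain skip-gram extracts (center, context) pairs and discards where in the window the context word appeared, whereas the modification keeps the relative offset so position-dependent information is available to the model.

```python
# Illustrative contrast between word2vec-style pair extraction, which loses
# position, and a variant that retains the relative offset of each context word.

def skipgram_pairs(tokens, window=2):
    """Standard skip-gram pairs: the context word's position is discarded."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

def positional_pairs(tokens, window=2):
    """Pairs augmented with the relative position (offset) of the context word."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j], j - i))  # keep the offset
    return pairs
```

With the offset retained, a model can, for instance, learn separate context representations per position, distinguishing a word that appears immediately before the center word from one that appears two positions after it.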


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr