Digital Library: Search Result
Korean Machine Reading Comprehension with S²-Net
Cheoneum Park, Changki Lee, Sulyn Hong, Yigyu Hwang, Taejoon Yoo, Hyunki Kim
http://doi.org/10.5626/JOK.2018.45.12.1260
Machine reading comprehension is the task of understanding a given context and identifying the correct answer within it. The simple recurrent unit (SRU) mitigates the vanishing-gradient problem of recurrent neural networks (RNNs) with neural gates, as in the gated recurrent unit (GRU), but removes the previous hidden state from the gate inputs to improve speed. The self-matching network used in R-NET computes attention weights over its own RNN sequence, which has an effect similar to coreference resolution in that it surfaces semantically related context information. In this paper, we propose an S²-Net model that adds a self-matching layer to an encoder built from stacked SRUs, and we construct a Korean machine reading comprehension dataset. Experimental results show that the proposed S²-Net model achieves an exact match (EM) of 70.81% and an F1 of 82.48% on Korean machine reading comprehension.
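The speed advantage the abstract attributes to the SRU comes from its gates depending only on the current input, not the previous hidden state, so all matrix products can be batched across timesteps. A minimal NumPy sketch of one SRU layer (following the standard SRU formulation; all weight names here are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(X, W, Wf, bf, Wr, br):
    """Run one SRU layer over a sequence X of shape (T, d).

    Unlike a GRU/LSTM, the gates below depend only on x_t, never on
    h_{t-1}, so the three matrix products are computed for all
    timesteps at once; only the cheap elementwise recurrence over the
    internal state c_t remains sequential.
    """
    Xc = X @ W                  # candidate states, all timesteps at once
    F = sigmoid(X @ Wf + bf)    # forget gates
    R = sigmoid(X @ Wr + br)    # reset (output) gates

    T, d = X.shape
    c = np.zeros(d)
    H = np.zeros((T, d))
    for t in range(T):
        # internal state: convex combination of previous state and candidate
        c = F[t] * c + (1.0 - F[t]) * Xc[t]
        # highway-style output mixing g(c_t) with the raw input x_t
        H[t] = R[t] * np.tanh(c) + (1.0 - R[t]) * X[t]
    return H
```

Stacking several such layers, as in the proposed encoder, just feeds each layer's `H` into the next as its `X`.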
Korean Machine Reading Comprehension using Reinforcement Learning and Dual Co-Attention Mechanism
http://doi.org/10.5626/JOK.2018.45.9.932
Machine reading comprehension is a question-answering task in which a model must understand a given document and then find the correct answer within it. Previous machine reading comprehension models have been based on end-to-end neural networks with various attention mechanisms. However, these models struggle to find answers with long dependencies between lexical clues because they do not use grammatical or syntactic information. To resolve this problem, we propose a machine reading comprehension model with a dual co-attention mechanism that incorporates part-of-speech information and shortest-dependency-path information. In addition, to improve performance, we propose a reinforcement learning method that uses the F1-score of answer extraction as the reward. In experiments with 18,863 question-answer pairs, the proposed model outperformed a representative previous model (exact match: 0.4566, F1-score: 0.7290).
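The F1 reward mentioned above is, in span-extraction settings, conventionally the token-level F1 between the predicted and gold answer strings. A minimal sketch of that metric (whitespace tokenization assumed; the paper's exact preprocessing is not specified here):

```python
from collections import Counter

def answer_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer span.

    Counter intersection (&) gives the multiset of overlapping tokens,
    so repeated tokens are only credited as often as they appear in
    both spans.
    """
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In a policy-gradient setup, this score for a sampled answer span would serve directly as the scalar reward.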

Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr