Digital Library [ Search Result ]
Data Augmentation Methods for Improving the Performance of Machine Reading Comprehension
Sunkyung Lee, Eunseong Choi, Seonho Jeong, Jongwuk Lee
http://doi.org/10.5626/JOK.2021.48.12.1298
Machine reading comprehension is the task of understanding the meaning of a given text and performing inference over it by computer, and it is one of the most essential techniques for natural language understanding. The question answering task provides a way to test the reasoning ability of intelligent systems. The performance of machine reading comprehension techniques has improved significantly with the recent progress of deep neural networks. Nevertheless, improving performance remains challenging when training data are sparse. To address this issue, we leverage word-level and sentence-level data augmentation techniques based on text editing, while minimizing changes to existing models and additional cost. In this work, we apply the proposed data augmentation methods to a pre-trained language model, the approach most widely used in English question answering tasks, and confirm improved performance over the existing models.
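The abstract does not spell out the specific editing operations, so the following is only a minimal sketch of what word-level text-editing augmentation can look like, assuming simple random swap and deletion over whitespace tokens; all function names are illustrative, and in an MRC setting the labeled answer span would additionally need to be protected from edits.

import random

def random_swap(tokens, n_swaps=1):
    # Randomly exchange two token positions n_swaps times (word-level edit).
    tokens = tokens[:]
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_delete(tokens, p=0.1):
    # Drop each token with probability p, keeping at least one token.
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

def augment(text, n_variants=2):
    # Generate word-level edited variants of a sentence.
    tokens = text.split()
    return [" ".join(random_delete(random_swap(tokens), p=0.1))
            for _ in range(n_variants)]

print(augment("machine reading comprehension answers questions over a given text"))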
A Span Matrix-Based Answer Candidate Detection Model Using 2-Step Learning
Boeun Kim, Youngjin Jang, Harksoo Kim
http://doi.org/10.5626/JOK.2021.48.5.539
Automatic data construction refers to a technology that automatically constructs data through algorithms or deep neural networks. The automated construction of question-answer data targeted in this paper has mainly been studied through question generation models, which generate questions related to a given paragraph. Previously, a paragraph and answer candidates were fed into the question generation model, and related questions were generated. The answer candidates input to the question generation model were detected through a rule-based method or a method using a deep neural network. We judged that answer detection, a subtask of question generation, would have a great influence on question generation. Consequently, we propose an answer candidate detection model based on a span matrix together with a 2-step learning method. We conducted an experiment to find out how questions generated through various answer candidate extraction methods affect a question-answering system. The proposed model extracted more correct answer candidates than the existing model, and noise in the learning process was mitigated by using a named-entity dataset. Finally, we confirmed that the question-answer data generated from answer candidates extracted by the proposed model contributed the most to the performance of the question-answering system.
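The exact form of the span matrix is not given in the abstract, so the sketch below only illustrates the general idea of scoring every (start, end) token pair of a paragraph with a matrix built from projected start and end representations; the class name, projection layers, and dimensions are assumptions rather than the paper's architecture.

import torch
import torch.nn as nn

class SpanMatrixDetector(nn.Module):
    # Scores every (start, end) token pair of a paragraph as a candidate answer span.
    def __init__(self, hidden_size):
        super().__init__()
        self.start_proj = nn.Linear(hidden_size, hidden_size)
        self.end_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, token_states):              # token_states: (batch, seq_len, hidden)
        s = self.start_proj(token_states)         # start-position representations
        e = self.end_proj(token_states)           # end-position representations
        # Span matrix: entry (i, j) scores the span that starts at token i and ends at token j.
        span_scores = torch.matmul(s, e.transpose(1, 2))
        # Only spans whose end does not precede their start are valid (upper triangle).
        seq_len = token_states.size(1)
        valid = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool))
        return span_scores.masked_fill(~valid, float("-inf"))

detector = SpanMatrixDetector(hidden_size=768)
scores = detector(torch.randn(1, 20, 768))        # e.g. encoder outputs for 20 tokens
top_spans = torch.topk(scores.flatten(), k=3).indices   # indices of best candidate spans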
2-Phase Passage Re-ranking Model based on Neural-Symbolic Ranking Models
Yongjin Bae, Hyun Kim, Joon-Ho Lim, Hyun-ki Kim, Kong Joo Lee
http://doi.org/10.5626/JOK.2021.48.5.501
Previous research on QA systems has focused on extracting exact answers for given questions and passages. However, when expanding the problem from machine reading comprehension to open-domain question answering, finding the passage containing the correct answer is as important as machine reading comprehension itself. DrQA reported that Exact Match@Top1 performance dropped from 69.5 to 27.1 when the QA system included an initial retrieval step. In the present work, we propose a 2-phase passage re-ranking model to improve the performance of the question answering system. The proposed model integrates the results of a symbolic and a neural ranking model and re-ranks them again. The symbolic ranking model was trained with the CatBoost algorithm on manually designed features between the question and passage. The neural ranking model was trained by fine-tuning the KorBERT model. The second-stage model was trained as a neural regression model. We maximized performance by combining ranking models with different characteristics. Finally, the proposed model achieved 85.8% MRR and 82.2% BinaryRecall@Top1 on an evaluation set of 1,000 questions, improvements of 17.3% (MRR) and 22.3% (BR@Top1) over the baseline model.
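The feature set and the exact architecture of the second-stage regressor are not described here, so the following is only a sketch of the fusion step, assuming the first-phase symbolic (e.g., CatBoost-based) and neural (e.g., fine-tuned KorBERT) scores for each question-passage pair are fed to a small neural regression model that produces the final re-ranking score; all names and layer sizes are illustrative.

import torch
import torch.nn as nn

class ScoreFusionRegressor(nn.Module):
    # Second-stage model: regresses a final relevance score from the first-phase
    # symbolic and neural ranking scores of each (question, passage) pair.
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, symbolic_score, neural_score):
        x = torch.stack([symbolic_score, neural_score], dim=-1)
        return self.mlp(x).squeeze(-1)

# Re-rank three candidate passages for one question by the fused score.
fusion = ScoreFusionRegressor()
symbolic = torch.tensor([0.42, 0.77, 0.13])   # e.g. feature-based (CatBoost) scores
neural = torch.tensor([0.55, 0.61, 0.20])     # e.g. fine-tuned encoder scores
reranked = torch.argsort(fusion(symbolic, neural), descending=True)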
KorQuAD 2.0: Korean QA Dataset for Web Document Machine Comprehension
Youngmin Kim, Seungyoung Lim, Hyunjeong Lee, Soyoon Park, Myungji Kim
http://doi.org/10.5626/JOK.2020.47.6.577
KorQuAD 2.0 is a Korean question answering dataset consisting of a total of 100,000+ pairs. There are three major differences from KorQuAD 1.0, the standard Korean Q&A dataset. First, a given document is an entire Wikipedia page, not just one or two paragraphs. Second, because documents also contain tables and lists, it is necessary to understand documents structured with HTML tags. Finally, the answer can be a long text covering not only word or phrase units but also paragraphs, tables, and lists. As a baseline model, we use Multilingual BERT, released by Google as open source. It achieves an F1 score of 46.0%, far below the human F1 score of 85.7%, indicating that this dataset poses a challenging task. Additionally, we improved performance through no-answer data augmentation. By releasing this dataset, we intend to extend MRC, previously limited to plain text, to real-world tasks of various lengths and formats.
Korean Machine Reading Comprehension with S²-Net
Cheoneum Park, Changki Lee, Sulyn Hong, Yigyu Hwang, Taejoon Yoo, Hyunki Kim
http://doi.org/10.5626/JOK.2018.45.12.1260
Machine reading comprehension is the task of understanding a given context and identifying the right answer within it. The simple recurrent unit (SRU) alleviates the vanishing gradient problem of recurrent neural networks (RNNs) by using neural gates, as in the gated recurrent unit (GRU), and improves speed by removing the previous hidden state from the gate input. The self-matching network used in R-NET computes attention weights over its own RNN sequence, capturing contextual semantic information with an effect similar to coreference resolution. In this paper, we propose the S²-Net model, which adds a self-matching layer to an encoder built from stacked SRUs, and we construct a Korean machine reading comprehension dataset. Experimental results show that the proposed S²-Net model achieves 70.81% EM and 82.48% F1 on Korean machine reading comprehension.
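For readers unfamiliar with the SRU, the sketch below follows the published SRU recurrence, in which the forget and reset gates depend only on the current input rather than the previous hidden state, so the heavy matrix multiplications can be computed for all time steps at once; the single-layer structure and fixed sizes are simplifications, not the S²-Net encoder itself.

import torch
import torch.nn as nn

class SRUCell(nn.Module):
    # Single-layer simple recurrent unit: gates depend only on the current input x_t,
    # which allows the matrix multiplications to be parallelized across time steps.
    def __init__(self, size):
        super().__init__()
        self.w = nn.Linear(size, 3 * size)      # candidate, forget gate, reset gate

    def forward(self, x):                       # x: (seq_len, batch, size)
        cand, f_gate, r_gate = self.w(x).chunk(3, dim=-1)
        f = torch.sigmoid(f_gate)
        r = torch.sigmoid(r_gate)
        c = torch.zeros_like(x[0])
        outputs = []
        for t in range(x.size(0)):              # only this lightweight recurrence is sequential
            c = f[t] * c + (1 - f[t]) * cand[t]
            h = r[t] * torch.tanh(c) + (1 - r[t]) * x[t]   # highway connection to the input
            outputs.append(h)
        return torch.stack(outputs)

hidden = SRUCell(size=128)(torch.randn(30, 2, 128))   # 30 time steps, batch of 2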
Resolution of Answer-Repetition Problems in a Generative Question-Answering Chat System
http://doi.org/10.5626/JOK.2018.45.9.925
A question-answering (QA) chat system is a chatbot that responds to simple factoid questions by retrieving information from knowledge bases. Recently, many chat systems based on sequence-to-sequence neural networks have been implemented and have shown new possibilities for generative models. However, generative chat systems suffer from word repetition problems, in which the same words are repeatedly generated within a response. A QA chat system has a similar problem, in that the same answer expressions frequently appear for a given question and are repeatedly generated. To resolve this answer-repetition problem, we propose a new sequence-to-sequence model that incorporates a coverage mechanism and an adaptive control of attention (ACA) mechanism in the decoder. In addition, we propose a repetition loss function reflecting the number of unique words in a response. In the experiments, the proposed model performed better than various baseline models on all metrics, including accuracy, BLEU, ROUGE-1, ROUGE-2, ROUGE-L, and Distinct-1.
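The precise formula of the repetition loss is not quoted in this abstract; the snippet below is only one plausible illustration of a penalty that grows as the share of repeated tokens in a decoded response grows, not the paper's actual loss term.

def repetition_loss(decoded_tokens):
    # Illustrative assumption: penalty based on the fraction of non-unique tokens.
    # The paper's loss is defined over the number of unique words, but its exact
    # form is not reproduced here.
    if not decoded_tokens:
        return 0.0
    unique_ratio = len(set(decoded_tokens)) / len(decoded_tokens)
    return 1.0 - unique_ratio   # 0 when all tokens differ, approaches 1 with heavy repetition

print(repetition_loss(["the", "answer", "is", "the", "answer"]))  # 0.4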
Question Answering Optimization via Temporal Representation and Data Augmentation of Dynamic Memory Networks
Dong-Sig Han, Chung-Yeon Lee, Byoung-Tak Zhang
The research area for solving question answering (QA) problems with artificial intelligence models is in a methodological transition period, and one such architecture, the dynamic memory network (DMN), is drawing attention for two key attributes: an attention mechanism defined by neural network operations and a modular architecture imitating human cognitive processes during QA. In this paper, we increase the accuracy of inferred answers by adopting an automatic data augmentation method to compensate for the limited amount of training data and by improving the model's ability to represent time. The experimental results show that, on the 1K bAbI tasks, the modified DMN achieves 89.21% accuracy and passes twelve tasks, which is 13.58% higher accuracy and four more tasks passed compared with a reference DMN implementation. Additionally, the DMN's word embedding vectors form strong clusters after training. Moreover, the number of episodic passes and the number of supporting facts show a direct correlation, which affects the performance significantly.
Inverse Document Frequency-Based Word Embedding of Unseen Words for Question Answering Systems
Wooin Lee, Gwangho Song, Kyuseok Shim
A question answering (QA) system finds an actual answer to a question posed by a user, whereas a typical search engine only returns links to relevant documents. Recent work on open-domain QA systems has received much attention in the fields of natural language processing, artificial intelligence, and data mining. However, prior QA systems simply replace all words that are not in the training data with a single token, even though such unseen words are likely to play crucial roles in differentiating candidate answers from actual answers. In this paper, we propose a method to compute vectors for such unseen words by taking into account the context in which the words occur. We also propose a model that utilizes inverse document frequencies (IDF) to efficiently process unseen words by expanding the system's vocabulary. Finally, we validate through experiments that the proposed method and model improve the performance of a QA system.
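The paper's exact weighting and vocabulary-expansion scheme is not reproduced here; the sketch below only illustrates the general idea of approximating an out-of-vocabulary word's vector as an IDF-weighted average of its in-vocabulary context word embeddings, with all names and the toy corpus being illustrative.

import math
import numpy as np

def idf(term, documents):
    # Inverse document frequency over a small corpus of tokenized documents.
    df = sum(1 for doc in documents if term in doc)
    return math.log((1 + len(documents)) / (1 + df)) + 1.0

def unseen_word_vector(context_words, embeddings, documents):
    # Approximate an unseen word's vector as the IDF-weighted average of the
    # embeddings of its in-vocabulary context words.
    vecs, weights = [], []
    for w in context_words:
        if w in embeddings:
            vecs.append(embeddings[w])
            weights.append(idf(w, documents))
    if not vecs:
        return np.zeros(next(iter(embeddings.values())).shape)
    return np.average(np.stack(vecs), axis=0, weights=weights)

# Toy usage with a 4-dimensional embedding table.
docs = [["what", "is", "the", "capital"], ["capital", "of", "france"]]
emb = {w: np.random.rand(4) for w in {"what", "is", "the", "capital", "of", "france"}}
vec = unseen_word_vector(["capital", "of"], emb, docs)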
Answer Snippet Retrieval for Question Answering of Medical Documents
Hyeon-gu Lee, Minkyoung Kim, Harksoo Kim
With the explosive increase in the number of online medical documents, the demand for question-answering systems is growing. Recently, question-answering models based on machine learning have shown high performance in various domains. However, many question-answering models in the medical domain are still based on information retrieval techniques because of the sparseness of training data. Based on various information retrieval techniques, we propose an answer snippet retrieval model for question-answering systems over medical documents. The proposed model first searches for candidate answer sentences in medical documents using a cluster-based retrieval technique. Then, it generates reliable answer snippets by re-ranking the candidate answer sentences with a model based on various sentence retrieval techniques. In experiments on BioASQ 4b, the proposed model showed better performance (MAP of 0.0604) than the previous models.
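The specific retrieval and re-ranking formulas are not given in the abstract; as a rough illustration of a cluster-based candidate sentence retrieval step, the sketch below clusters document sentences with TF-IDF and k-means, picks the cluster closest to the question, and ranks that cluster's sentences by cosine similarity. This is an assumed pipeline, not the paper's model.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_candidates(sentences, question, n_clusters=2, top_k=3):
    # Cluster document sentences, pick the cluster closest to the question,
    # then rank that cluster's sentences by cosine similarity to the question.
    vectorizer = TfidfVectorizer()
    sent_vecs = vectorizer.fit_transform(sentences)
    q_vec = vectorizer.transform([question])

    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(sent_vecs)
    cluster_sims = [cosine_similarity(q_vec, sent_vecs[labels == c]).max()
                    for c in range(n_clusters)]
    best = max(range(n_clusters), key=lambda c: cluster_sims[c])

    in_cluster = [i for i, label in enumerate(labels) if label == best]
    ranked = sorted(in_cluster,
                    key=lambda i: cosine_similarity(q_vec, sent_vecs[i]).item(),
                    reverse=True)
    return [sentences[i] for i in ranked[:top_k]]

sentences = ["Aspirin reduces fever.", "It also thins the blood.",
             "Exercise improves cardiac health.", "Walking daily lowers blood pressure."]
print(retrieve_candidates(sentences, "Does aspirin lower fever?", n_clusters=2, top_k=2))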

Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr