Search: [ keyword: Machine Reading Comprehension ] (12)

Robust Korean Table Machine Reading Comprehension across Various Domains

Sanghyun Cho, Hye-Lynn Kim, Hyuk-chul Kwon

http://doi.org/10.5626/JOK.2023.50.12.1102

Unlike plain text, tabular data has structural features that allow it to represent information in compressed form. Tables are therefore used across many domains, and machine reading comprehension (MRC) over tables has become an increasingly important part of MRC research. However, table structure and the required background knowledge differ by domain, so a language model trained on a single domain tends to perform poorly when evaluated on other domains, i.e., it generalizes badly. Overcoming this requires building datasets from various domains and applying techniques beyond simple pre-training. In this study, we design a language model that learns domain-invariant linguistic features to improve domain generalization. We apply adversarial training to improve performance on the evaluation dataset of each domain, and we modify the model structure by adding an embedding layer and a transformer layer specialized for tabular data. Under adversarial training, the variant without table-specific embeddings improves performance, whereas the variant that adds a table-specific transformer layer and feeds that layer additional table-specific embeddings achieves the best performance on data from all domains.
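
The abstract does not fix a particular adversarial objective; one common way to realize adversarial training for domain-invariant features is a gradient-reversal layer in front of a domain classifier. The PyTorch sketch below illustrates that pattern only; the module names, sizes, and number of domains are illustrative assumptions, not the authors' model.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; reverses (and scales) gradients."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.lam * grad_out, None

    class DomainAdversarialMRC(nn.Module):
        def __init__(self, hidden=768, num_domains=4):
            super().__init__()
            self.span_head = nn.Linear(hidden, 2)              # start/end logits
            self.domain_head = nn.Linear(hidden, num_domains)  # domain classifier

        def forward(self, enc, lam=1.0):
            # enc: (batch, seq_len, hidden) from any table-aware encoder
            span_logits = self.span_head(enc)
            # The domain classifier sees a gradient-reversed pooled vector,
            # which pushes the encoder toward domain-invariant features.
            pooled = GradReverse.apply(enc[:, 0], lam)
            return span_logits, self.domain_head(pooled)

    enc = torch.randn(2, 16, 768)          # stand-in for encoder output
    span, dom = DomainAdversarialMRC()(enc)
    print(span.shape, dom.shape)           # torch.Size([2, 16, 2]) torch.Size([2, 4])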

Type-specific Multi-Head Shared-Encoder Model for Commonsense Machine Reading Comprehension

Jinyeong Chae, Jihie Kim

http://doi.org/10.5626/JOK.2023.50.5.376

Machine reading comprehension (MRC) is a task that tests whether a machine can understand natural language by solving various problems over a given context. To demonstrate natural language understanding, a machine must make commonsense inferences based on full comprehension of the context. To help models acquire this ability, we propose a multi-task learning scheme and a model for commonsense MRC. The contributions of this study are as follows: 1) a method for configuring task-specific datasets; 2) a type-specific multi-head shared-encoder model with a multi-task learning scheme that includes batch sampling and loss scaling; and 3) an evaluation on the CosmosQA dataset (commonsense MRC), where accuracy improved by 2.38% over the fine-tuned baseline.
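
As a rough illustration of the type-specific multi-head shared-encoder idea, the PyTorch sketch below routes a shared encoding through one head per question type and scales each task's loss. The encoder, head shapes, task names, and scaling weights are all hypothetical; the paper's batch sampling strategy is only indicated in a comment.

    import torch
    import torch.nn as nn

    class SharedEncoderMultiHead(nn.Module):
        def __init__(self, hidden=256, num_choices=4,
                     task_names=("causal", "temporal")):
            super().__init__()
            self.num_choices = num_choices
            self.encoder = nn.GRU(128, hidden, batch_first=True)  # shared encoder
            self.heads = nn.ModuleDict(                           # one head per type
                {t: nn.Linear(hidden, 1) for t in task_names})

        def forward(self, x, task):
            # x: (batch * num_choices, seq_len, 128); score each answer choice
            _, h = self.encoder(x)
            return self.heads[task](h[-1]).view(-1, self.num_choices)

    model = SharedEncoderMultiHead()
    loss_scale = {"causal": 1.0, "temporal": 0.5}  # per-task loss scaling
    ce = nn.CrossEntropyLoss()
    x = torch.randn(8, 10, 128)                    # 2 questions x 4 choices each
    labels = torch.tensor([1, 3])                  # correct choice per question
    # Batch sampling would decide which task's batch to draw next;
    # here a single "causal" batch is shown.
    loss = loss_scale["causal"] * ce(model(x, "causal"), labels)
    loss.backward()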

Performance Improvement of a Korean Open Domain Q&A System by Applying the Trainable Re-ranking and Response Filtering Model

Hyeonho Shin, Myunghoon Lee, Hong-Woo Chun, Jae-Min Lee, Sung-Pil Choi

http://doi.org/10.5626/JOK.2023.50.3.273

Research on open-domain Q&A, which identifies answers to user questions without the target paragraph being provided in advance, has grown as deep learning has been applied to natural language processing. However, existing keyword-based information retrieval has limitations in semantic matching. To address this, deep learning-based information retrieval is being studied, but few domestic studies have applied it empirically to real systems. In this paper, we propose a two-step method to improve the performance of a Korean open-domain Q&A system: a machine learning-based re-ranking model and a response filtering model are applied sequentially to a baseline system that combines a search engine with an MRC model. The baseline system scored an F1 of 74.43 and an EM of 60.79; with the proposed method, performance improved to an F1 of 82.5 and an EM of 68.82.
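
The two-step scheme lends itself to a simple pipeline: re-rank the search engine's passages with the trained re-ranking model, then discard low-confidence MRC answers with the response filtering model. The Python sketch below shows the control flow only; both scoring functions are stand-ins for the paper's learned models, and the threshold is an assumed parameter.

    from typing import Callable, List, Tuple

    def answer_question(
        question: str,
        passages: List[str],
        rerank_score: Callable[[str, str], float],      # trained re-ranking model
        read: Callable[[str, str], Tuple[str, float]],  # MRC model: (answer, confidence)
        top_k: int = 3,
        threshold: float = 0.5,
    ) -> List[str]:
        # Step 1: re-rank the search engine's passages with the trained model.
        ranked = sorted(passages, key=lambda p: rerank_score(question, p),
                        reverse=True)
        # Step 2: read the top-k passages and keep only confident answers.
        answers = []
        for p in ranked[:top_k]:
            ans, conf = read(question, p)
            if conf >= threshold:                       # response filtering
                answers.append(ans)
        return answers

    # Toy usage with stub models standing in for the trained components:
    print(answer_question(
        "who wrote the report",
        ["the report was written by kim", "unrelated passage"],
        rerank_score=lambda q, p: len(set(q.split()) & set(p.split())),
        read=lambda q, p: (p.split()[-1], 0.9),
    ))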

KorSciQA 2.0: Question Answering Dataset for Machine Reading Comprehension of Korean Papers in Science & Technology Domain

Hyesoo Kong, Hwamook Yoon, Mihwan Hyun, Hyejin Lee, Jaewook Seol

http://doi.org/10.5626/JOK.2022.49.9.686

Recently, the performance of machine reading comprehension (MRC) systems has improved through various open-ended question answering (QA) tasks, and challenging QA tasks that require comprehensive understanding of multiple text paragraphs and discrete reasoning are being released to train more intelligent MRC systems. However, because no Korean QA dataset exists for the complex reasoning needed to understand academic information, MRC research on academic papers has been limited. In this paper, we construct KorSciQA 2.0, a QA dataset covering the full text, including abstracts, of Korean academic papers, with questions divided into general, easy, and hard difficulty levels for discriminative MRC systems. We propose a methodology, process, and system for constructing KorSciQA 2.0. In MRC evaluation experiments, fine-tuning KorSciBERT, a Korean BERT model for the science and technology domain, achieved the highest performance with an F1 score of 80.76%.
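
For readers who want to reproduce this kind of evaluation, the sketch below shows one fine-tuning step of a BERT-style span-extraction MRC model with HuggingFace Transformers. KorSciBERT is distributed separately, so a public multilingual checkpoint stands in as a placeholder, and the toy inputs and span positions are illustrative, not real annotations.

    import torch
    from transformers import AutoTokenizer, AutoModelForQuestionAnswering

    name = "bert-base-multilingual-cased"   # placeholder; substitute KorSciBERT
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForQuestionAnswering.from_pretrained(name)

    # One training step on a toy question/paragraph pair.
    batch = tok("질문 예시", "논문 본문 단락 예시", return_tensors="pt")
    out = model(**batch,
                start_positions=torch.tensor([1]),
                end_positions=torch.tensor([2]))
    out.loss.backward()                     # an optimizer step would follow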

Korean Dependency Parsing using Subtree Linking based on Machine Reading Comprehension

Jinwoo Min, Seung-Hoon Na, Jong-Hoon Shin, Young-Kil Kim, Kangil Kim

http://doi.org/10.5626/JOK.2022.49.8.617

In Korean dependency parsing, biaffine attention models have shown state-of-the-art performance. They first obtain head-level and modifier-level representations by applying two multi-layer perceptrons (MLPs) to the encoded contextualized word representations, perform attention with the modifier-level representation as the query and the head-level one as the key, and take the resulting attention score as the probability of a dependency arc between the two words. However, given two target words (a candidate head and modifier), biaffine attention is limited to their word-level representations and is unaware of the explicit boundaries of their phrases or subtrees. Without semantically and syntactically enriched phrase-level and subtree-level representations, biaffine attention may be ineffective when determining a dependency arc is not simple but complicated, such as identifying a dependency between far-distant words; these cases often require subtree- or phrase-level information surrounding the target words. To address this drawback, this paper presents a dependency parsing framework based on machine reading comprehension (MRC) that explicitly uses subtree-level information by mapping a given child subtree and its parent subtree to a question and an answer, respectively. Experimental results on standard Korean dependency parsing datasets show that MRC-based dependency parsing outperforms the biaffine attention model. In particular, the improvements are stronger on long sentences than on short ones.
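
The biaffine scoring step described above can be written compactly. The PyTorch sketch below follows the standard biaffine-attention formulation (two MLPs, a bilinear term plus a head-side bias); the dimensions are illustrative, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class BiaffineArcScorer(nn.Module):
        def __init__(self, enc_dim=512, arc_dim=256):
            super().__init__()
            self.head_mlp = nn.Sequential(nn.Linear(enc_dim, arc_dim), nn.ReLU())
            self.mod_mlp = nn.Sequential(nn.Linear(enc_dim, arc_dim), nn.ReLU())
            self.W = nn.Parameter(torch.randn(arc_dim, arc_dim) * 0.01)
            self.b = nn.Parameter(torch.zeros(arc_dim))   # head-only bias

        def forward(self, enc):
            # enc: (batch, n_words, enc_dim) contextualized word representations
            h = self.head_mlp(enc)                             # head-level (keys)
            m = self.mod_mlp(enc)                              # modifier-level (queries)
            scores = torch.bmm(m @ self.W, h.transpose(1, 2))  # bilinear term
            scores = scores + (h @ self.b).unsqueeze(1)        # bias term
            return scores  # softmax over the last dim gives P(head j | word i)

    enc = torch.randn(2, 7, 512)
    print(BiaffineArcScorer()(enc).shape)   # torch.Size([2, 7, 7])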

Training Data Augmentation Technique for Machine Comprehension by Question-Answer Pairs Generation Models based on a Pretrained Encoder-Decoder Model

Hyeonho Shin, Sung-Pil Choi

http://doi.org/10.5626/JOK.2022.49.2.166

The goal of machine reading comprehension (MRC) research is to find answers to questions in documents. MRC research requires large-scale, high-quality data, but individual researchers and small research institutes have limited capacity to construct it. To overcome this limitation, we propose an MRC data augmentation technique that uses a pre-trained language model. The technique consists of a question-answer (QA) pair generation model and a data validation model. The QA pair generation model comprises an answer extraction model and a question generation model, both built by fine-tuning BART. The data validation model, built by fine-tuning ELECTRA as an MRC model, verifies the generated data to increase its reliability. To measure the performance gain from the augmentation, we applied it to KorQuAD v1.0. In experiments, the Exact Match (EM) score increased by up to 7.2 and the F1 score by up to 5.7 compared to the previous model.
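
A minimal version of the generate-then-validate loop might look like the sketch below: a fine-tuned encoder-decoder generates a question for an extracted answer, and an MRC reader keeps the pair only if it recovers the same answer. The checkpoint names and the answer-highlighting input format are placeholders, not the paper's fine-tuned BART/ELECTRA models.

    from transformers import (BartForConditionalGeneration, BartTokenizer,
                              pipeline)

    bart_name = "facebook/bart-base"   # placeholder for the fine-tuned BART
    tok = BartTokenizer.from_pretrained(bart_name)
    qg = BartForConditionalGeneration.from_pretrained(bart_name)

    def generate_question(context: str, answer: str) -> str:
        # Assumed input format: the extracted answer prepended to the passage.
        ids = tok(f"{answer} </s> {context}", return_tensors="pt").input_ids
        out = qg.generate(ids, max_length=64, num_beams=4)
        return tok.decode(out[0], skip_special_tokens=True)

    reader = pipeline("question-answering")   # placeholder for the ELECTRA reader

    def keep_pair(context: str, answer: str, question: str) -> bool:
        # Validation: keep the pair only if the reader re-derives the answer.
        pred = reader(question=question, context=context)
        return pred["answer"].strip() == answer.strip()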

Data Augmentation Methods for Improving the Performance of Machine Reading Comprehension

Sunkyung Lee, Eunseong Choi, Seonho Jeong, Jongwuk Lee

http://doi.org/10.5626/JOK.2021.48.12.1298

Machine reading comprehension is the task of understanding the meaning of a given text and performing inference over it by computer, and it is one of the most essential techniques for natural language understanding. The question answering task offers a way to test the reasoning ability of intelligent systems. The performance of machine reading comprehension has improved significantly with recent progress in deep neural networks; nevertheless, improving performance remains challenging when data is sparse. To address this issue, we leverage word-level and sentence-level data augmentation through text editing, while minimizing changes to existing models and cost. We propose data augmentation methods for a pre-trained language model, the most widely used approach in English question answering, and confirm improved performance over existing models.
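
As an illustration of word-level text-editing augmentation, the sketch below implements two simple operations (random swap and random deletion); the paper's exact edit operations and rates may differ.

    import random

    def random_swap(tokens, n=1):
        # Swap n random pairs of word positions.
        tokens = tokens[:]
        for _ in range(n):
            i, j = random.sample(range(len(tokens)), 2)
            tokens[i], tokens[j] = tokens[j], tokens[i]
        return tokens

    def random_delete(tokens, p=0.1):
        # Drop each word with probability p, never returning an empty sequence.
        kept = [t for t in tokens if random.random() > p]
        return kept or tokens

    text = "machine reading comprehension tests the reasoning ability of systems"
    print(" ".join(random_swap(text.split())))
    print(" ".join(random_delete(text.split())))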

Evaluating Korean Machine Reading Comprehension Generalization Performance via Cross-, Blind and Open-Domain QA Dataset Assessment

Joon-Ho Lim, Hyun-ki Kim

http://doi.org/10.5626/JOK.2021.48.3.275

Machine reading comprehension (MRC) is the task of identifying the correct answer in a paragraph when a natural language question and the paragraph are provided. Recently, fine-tuning pre-trained language models has yielded the best performance. In this study, we evaluated how well MRC methods generalize to question-paragraph pairs that differ from their training sets. To this end, we performed cross-evaluation between datasets and blind evaluation. The results showed a correlation between generalization performance and dataset characteristics such as answer length and the lexical overlap ratio between question and paragraph. In blind evaluation, datasets with long answers and low lexical overlap between questions and paragraphs yielded performance below 80%. Finally, we evaluated the generalization of the MRC model in an open-domain QA environment and found that performance degraded when the MRC model used retrieved paragraphs. These results suggest that, given the characteristics of the MRC task, difficulty and generalization performance depend on the relationship between the question and the answer, indicating the need for analysis across different evaluation sets.
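
One dataset statistic discussed above, the lexical overlap between question and paragraph, can be computed straightforwardly. The sketch below uses a simple token-level ratio; the paper's exact definition may differ.

    def overlap_ratio(question: str, paragraph: str) -> float:
        # Fraction of question tokens that also appear in the paragraph.
        q = set(question.lower().split())
        p = set(paragraph.lower().split())
        return len(q & p) / len(q) if q else 0.0

    print(overlap_ratio("who wrote the paper", "the paper was written by kim"))  # 0.5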

Passage Re-ranking Method Based on Sentence Similarity Through Multitask Learning

Youngjin Jang, Hyeon-gu Lee, Jihyun Wang, Chunghee Lee, Harksoo Kim

http://doi.org/10.5626/JOK.2020.47.4.416

A machine reading comprehension (MRC) system is a question answering system in which a computer understands a given passage and answers questions about it. With the development of deep neural networks, research on MRC systems has been active, including open-domain MRC systems that identify the correct answer from the results of an information retrieval (IR) model rather than from a given passage. However, if the IR model fails to retrieve a passage containing the correct answer, the MRC system cannot answer the question; that is, the performance of an open-domain MRC system depends on the performance of its IR model. Thus, a high-performance IR model is a prerequisite for a high-performance open-domain MRC system. Previous IR models have been studied through query expansion and re-ranking. In this paper, we propose a re-ranking method using deep neural networks. The proposed model re-ranks the retrieved passages using multi-task learning-based sentence similarity and, in experiments on 58,980 pairs of MRC data, improves performance by approximately 8% over the existing IR model.
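
The re-ranking idea reduces to scoring each retrieved passage by its sentence-level similarity to the question and sorting. In the sketch below, a toy overlap function stands in for the multi-task-trained similarity model, and the sentence splitter is deliberately naive.

    from typing import Callable, List

    def rerank(question: str, passages: List[str],
               sent_sim: Callable[[str, str], float]) -> List[str]:
        # Score a passage by its best-matching sentence, then sort descending.
        def passage_score(passage: str) -> float:
            sentences = passage.split(". ")   # deliberately naive splitter
            return max(sent_sim(question, s) for s in sentences)
        return sorted(passages, key=passage_score, reverse=True)

    # Toy usage; word overlap stands in for the learned similarity model:
    sim = lambda a, b: float(len(set(a.split()) & set(b.split())))
    print(rerank("where is the office",
                 ["irrelevant text. more text", "go left. the office is there"],
                 sim))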

Korean Machine Reading Comprehension using S³-Net based on Position Encoding

Choeneum Park, Changki Lee, Hyunki Kim

http://doi.org/10.5626/JOK.2019.46.3.234

S³-Net is a deep learning model for machine reading comprehension question answering (MRQA) based on Simple Recurrent Units and Self-Matching Networks, which compute attention weights over the model's own RNN sequence. Answers in MRQA occur within the passage, and because a passage consists of several sentences, the input sequence becomes long and performance deteriorates. In this paper, we propose a hierarchical model that adds sentence-level encoding, and an S³-Net that applies position encoding to preserve word-order information, in order to mitigate this long-context degradation. Experimental results show that the proposed S³-Net achieves 69.43% EM and 81.53% F1 as a single model, and 71.28% EM and 82.67% F1 as an ensemble.
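
The abstract does not specify the position encoding in detail; the sketch below shows the standard sinusoidal formulation, which is one common way to inject word-order information into sequence representations.

    import torch

    def position_encoding(seq_len: int, dim: int) -> torch.Tensor:
        # Standard sinusoidal encoding: sin on even dims, cos on odd dims.
        pos = torch.arange(seq_len, dtype=torch.float).unsqueeze(1)  # (seq, 1)
        i = torch.arange(0, dim, 2, dtype=torch.float)               # even indices
        angle = pos / torch.pow(10000.0, i / dim)
        pe = torch.zeros(seq_len, dim)
        pe[:, 0::2] = torch.sin(angle)
        pe[:, 1::2] = torch.cos(angle)
        return pe

    x = torch.randn(16, 128)              # (seq_len, hidden) word representations
    x = x + position_encoding(16, 128)    # inject word-order information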

