Search: [ keyword: Relation Extraction ] (5)

Relation Extraction based on Neural-Symbolic Structure

Jinyoung Oh, Jeong-Won Cha

http://doi.org/10.5626/JOK.2021.48.5.533

Deep learning has continually demonstrated excellent performance in natural language processing. However, achieving good performance requires a large amount of training data and a long training time. Herein, we propose a neural-symbolic method for the relation extraction problem that exceeds the performance of deep learning in a small-training-data setting. We designed a structure that exploits the inconsistency between rule outputs and deep learning outputs. In addition, logical rule filtering is proposed to improve the convergence speed, and contextual information is added to improve the performance of the rules. The proposed method showed excellent performance with a small amount of training data, and we confirmed that it converged quickly.
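The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of one plausible reading of the described structure: a neural relation classifier whose loss is augmented with a penalty for disagreeing with symbolic rule outputs on examples where a (filtered) rule fires. The model, the rule interface, and the weighting term `lam` are assumptions, not the authors' code.

```python
# Hypothetical sketch of a neural-symbolic training objective: the neural
# classifier is penalized when its prediction disagrees with a symbolic rule
# that fires on the same example. Names, shapes, and the combination scheme
# are assumptions, not the paper's released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationClassifier(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_relations=10):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_relations)

    def forward(self, token_ids):
        h, _ = self.encoder(self.emb(token_ids))
        return self.out(h.mean(dim=1))          # sentence-level relation logits

def neural_symbolic_loss(logits, gold, rule_label, rule_fired, lam=0.5):
    """Cross-entropy on gold labels plus a penalty for disagreeing with rules.

    rule_label: relation index produced by the symbolic rules (per example)
    rule_fired: 1.0 where a (filtered) rule actually fired, else 0.0
    """
    ce = F.cross_entropy(logits, gold)
    # Inconsistency term: cross-entropy against the rule output, only where a rule fired.
    rule_ce = F.cross_entropy(logits, rule_label, reduction="none")
    inconsistency = (rule_ce * rule_fired).sum() / rule_fired.sum().clamp(min=1.0)
    return ce + lam * inconsistency

# Tiny smoke test with random data
model = RelationClassifier()
tokens = torch.randint(0, 10000, (4, 20))
gold = torch.randint(0, 10, (4,))
rule_label = torch.randint(0, 10, (4,))
rule_fired = torch.tensor([1.0, 0.0, 1.0, 0.0])
loss = neural_symbolic_loss(model(tokens), gold, rule_label, rule_fired)
loss.backward()
```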

Relation Extraction among Multiple Entities using Dual-Pointer Network

Seongsik Park, Harksoo Kim

http://doi.org/10.5626/JOK.2019.46.11.1186

Information extraction is the process of automatically extracting structured information from unstructured machine-readable texts. The rapid increase in large-scale unstructured texts in recent years has led to many studies on information extraction. Information extraction consists of two sub-tasks: entity linking and relation extraction. Most previous studies on relation extraction have assumed that a sentence mentions only a single entity pair and have therefore focused on extracting a single entity pair (i.e., one Subject-Relation-Object triple) per sentence. However, sentences can contain multiple entity pairs. In this paper, we propose a dual-pointer network model that can extract all possible entity pairs from a given text. In relation extraction experiments on two representative English datasets, NYT and ACE-2005, the proposed model achieved state-of-the-art performance, with an F1-score of 0.8050 on ACE-2005 and 0.7834 on NYT.
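As a rough illustration of pointer-style pair extraction (not the paper's exact dual-pointer architecture), the sketch below scores, for every token, a distribution over possible partner tokens in each direction plus a relation label for each pointed pair; the bi-LSTM encoder, bilinear scorers, and all shapes are assumptions.

```python
# Minimal sketch of a pointer-style extractor with two pointing heads
# (one per direction); the actual dual-pointer decoding may differ.
import torch
import torch.nn as nn

class DualPointerSketch(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_relations=25):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        d = 2 * hidden
        self.obj_to_subj = nn.Bilinear(d, d, 1)   # forward pointer scores
        self.subj_to_obj = nn.Bilinear(d, d, 1)   # backward pointer scores
        self.rel = nn.Linear(2 * d, n_relations)  # relation label for a pointed pair

    def forward(self, token_ids):
        h, _ = self.encoder(self.emb(token_ids))            # (B, T, 2*hidden)
        B, T, d = h.shape
        # Pairwise pointer scores: for every token i, a distribution over tokens j.
        hi = h.unsqueeze(2).expand(B, T, T, d)
        hj = h.unsqueeze(1).expand(B, T, T, d)
        fwd = self.obj_to_subj(hi.reshape(-1, d), hj.reshape(-1, d)).view(B, T, T)
        bwd = self.subj_to_obj(hi.reshape(-1, d), hj.reshape(-1, d)).view(B, T, T)
        rel_logits = self.rel(torch.cat([hi, hj], dim=-1))   # (B, T, T, n_relations)
        return fwd.softmax(-1), bwd.softmax(-1), rel_logits

model = DualPointerSketch()
fwd, bwd, rel = model(torch.randint(0, 10000, (2, 12)))
print(fwd.shape, bwd.shape, rel.shape)  # (2, 12, 12) (2, 12, 12) (2, 12, 12, 25)
```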

Multi-sense Word Embedding to Improve Performance of a CNN-based Relation Extraction Model

Sangha Nam, Kijong Han, Eun-kyung Kim, Sunggoo Kwon, Yoosung Jung, Key-Sun Choi

http://doi.org/10.5626/JOK.2018.45.8.816

The relation extraction task is to classify the relation between two entities in an input sentence; it is important in natural language processing and knowledge extraction. Many studies have designed relation extraction models using distant supervision. Recently, deep learning-based relation extraction models such as CNNs and RNNs have become mainstream. However, existing studies do not address the homograph problem in the word embeddings used as model input: training proceeds with a single embedding for words that are spelled the same but have different meanings, so the relation extraction model is trained without accurately capturing word meaning. In this paper, we propose a relation extraction model that uses multi-sense word embeddings. To learn the multi-sense word embeddings, we used a word sense disambiguation module based on CoreNet concepts, and the relation extraction model used CNN and PCNN architectures to learn the key words in a sentence.
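A minimal sketch of the multi-sense idea, assuming the WSD step simply replaces each surface token with a sense identifier before the embedding lookup; the hash-based `disambiguate` function below is a placeholder for the CoreNet-based module, and the plain CNN stands in for the CNN/PCNN encoders (a PCNN would additionally pool piecewise around the two entity mentions).

```python
# Illustrative sketch of sense-level embedding lookup feeding a CNN encoder:
# each token is first mapped by a (placeholder) WSD step to a sense id, and
# the embedding table is indexed by sense rather than by surface form.
import torch
import torch.nn as nn

class MultiSenseCNN(nn.Module):
    def __init__(self, n_senses=50000, emb_dim=100, n_filters=128, n_relations=20):
        super().__init__()
        self.sense_emb = nn.Embedding(n_senses, emb_dim)    # one vector per word sense
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.out = nn.Linear(n_filters, n_relations)

    def forward(self, sense_ids):                           # sense_ids: (B, T) from WSD
        x = self.sense_emb(sense_ids).transpose(1, 2)       # (B, emb_dim, T)
        h = torch.relu(self.conv(x)).max(dim=2).values      # max-pooling over positions
        return self.out(h)

def disambiguate(tokens):
    """Placeholder WSD step; the paper uses a CoreNet-based module instead."""
    return torch.tensor([[hash(t) % 50000 for t in sent] for sent in tokens])

logits = MultiSenseCNN()(disambiguate([["bank", "lends", "money"],
                                       ["river", "bank", "erodes"]]))
print(logits.shape)  # (2, 20)
```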

One-Class Classification Model Based on Lexical Information and Syntactic Patterns

Hyeon-gu Lee, Maengsik Choi, Harksoo Kim

http://doi.org/

Relation extraction is an important information extraction technique that can be widely used in areas such as question answering and knowledge base population. Previous studies on relation extraction have been based on supervised machine learning models that need a large amount of training data manually annotated with relation categories. Recently, distant supervision methods have been proposed to reduce the manual annotation effort of constructing training data. However, these methods have a drawback: it is difficult to use them to collect the negative training data needed for classification. To overcome this drawback, we propose a one-class classification model that can be trained without negative data. The proposed model determines whether an input item belongs to the inner category by using a similarity measure based on lexical information and syntactic patterns in a vector space. In our experiments, the proposed model showed higher performance (an F1-score of 0.6509 and an accuracy of 0.6833) than a representative one-class classification model, the one-class SVM (Support Vector Machine).
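A minimal sketch, assuming the inner category is modeled by a centroid of positive feature vectors (e.g., bags of lexical and syntactic-pattern features) and membership is decided by a cosine-similarity threshold; this stands in for the paper's similarity measure, and scikit-learn's OneClassSVM appears only as the baseline named in the abstract.

```python
# Similarity-based one-class classification over feature vectors; the
# centroid + cosine-threshold rule is an illustrative stand-in.
import numpy as np
from sklearn.svm import OneClassSVM

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class CentroidOneClass:
    def fit(self, X_pos):                       # only positive ("inner") examples
        self.centroid = X_pos.mean(axis=0)
        # Threshold: be at least as similar as the least-typical training example.
        self.tau = min(cosine(x, self.centroid) for x in X_pos)
        return self

    def predict(self, X):
        return np.array([1 if cosine(x, self.centroid) >= self.tau else -1 for x in X])

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=1.0, size=(100, 20))     # stand-in feature vectors
X_test = np.vstack([rng.normal(1.0, size=(5, 20)), rng.normal(-1.0, size=(5, 20))])

print(CentroidOneClass().fit(X_pos).predict(X_test))
print(OneClassSVM(nu=0.1).fit(X_pos).predict(X_test))   # baseline from the abstract
```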

Competition Relation Extraction based on Combining Machine Learning and Filtering

ChungHee Lee, YoungHoon Seo, HyunKi Kim

http://doi.org/

This study designed a hybrid algorithm for competition relation extraction. Previous work on relation extraction has relied on various lexical and deep parsing indicators and has mostly used machine learning alone. We present a new algorithm that integrates machine learning with several filtering methods. Some simple but useful features for competition relation extraction are also introduced, and an optimal feature set is proposed. The goal of this paper is to increase the precision of competition relation extraction by combining supervised learning with filtering. The filtering methods classify whether a competition relation occurs, apply a distance restriction to filter candidate pairs, and classify whether a candidate entity pair is spam. For evaluation, a test set of 2,565 sentences was examined, and the proposed method was compared with a rule-based method and a general relation extraction method. The rule-based method achieved a positive precision of 0.812 and an accuracy of 0.568, and the general relation extraction method achieved 0.612 and 0.563, respectively, while the proposed system obtained a positive precision of 0.922 and an accuracy of 0.713. These results demonstrate that the developed method is effective for competition relation extraction.
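As a rough sketch of the described pipeline (classification followed by filtering), the code below keeps candidate entity pairs that a supervised model scores above a threshold, then applies a token-distance restriction and a simple spam check; the threshold, distance limit, and spam markers are illustrative assumptions, not the paper's actual features or filters.

```python
# Illustrative pipeline combining a supervised classifier score with post-hoc
# filters: distance restriction and spam filtering of candidate entity pairs.
from dataclasses import dataclass

@dataclass
class Candidate:
    sentence: str
    entity1: str
    entity2: str
    distance: int          # token distance between the two entity mentions
    score: float           # probability of a competition relation from the ML model

MAX_DISTANCE = 15          # hypothetical distance restriction
SPAM_MARKERS = {"advertisement", "buy now", "click"}

def is_spam(c: Candidate) -> bool:
    text = c.sentence.lower()
    return any(marker in text for marker in SPAM_MARKERS)

def extract_competition_pairs(candidates, threshold=0.5):
    results = []
    for c in candidates:
        if c.score < threshold:          # machine learning classification stage
            continue
        if c.distance > MAX_DISTANCE:    # distance-restriction filter
            continue
        if is_spam(c):                   # spam filter on the candidate pair
            continue
        results.append((c.entity1, "competes_with", c.entity2))
    return results

cands = [Candidate("Samsung competes with Apple in smartphones.", "Samsung", "Apple", 3, 0.93),
         Candidate("Buy now! Apple and Kia, best deals.", "Apple", "Kia", 2, 0.71)]
print(extract_competition_pairs(cands))
```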

