Digital Library [Search Result]
Knowledge Base Population Model Using Non-Negative Matrix Factorization
Jiho Kim, Sangha Nam, Key-Sun Choi
http://doi.org/10.5626/JOK.2018.45.9.918
The purpose of a knowledge base is to store the world's knowledge in a format that machines can understand. For a knowledge base to remain useful, it must continuously acquire and add new knowledge, which it cannot do without a knowledge-acquisition capability. Knowledge is mainly acquired by analyzing natural language sentences, whereas acquiring new knowledge from within the knowledge base itself has been relatively neglected. In this paper, we introduce a non-negative matrix factorization method for knowledge base population. The model transforms a knowledge base into a matrix, learns a latent feature vector for each entity tuple and each relation by decomposing that matrix, and recombines the vectors to score the reliability of candidate new knowledge. To demonstrate the effectiveness and superiority of our method, we present experimental results and analysis on Korean DBpedia.
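A minimal sketch of the general idea described in this abstract, not the paper's implementation: entity tuples become matrix rows, relations become columns, the matrix is factorized with NMF, and a candidate triple is scored by recombining the learned latent vectors. The toy triples, the rank, and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy knowledge base of (head, relation, tail) triples -- illustrative only.
triples = [
    ("Seoul", "capitalOf", "South_Korea"),
    ("Seoul", "locatedIn", "South_Korea"),
    ("Tokyo", "capitalOf", "Japan"),
    ("Tokyo", "locatedIn", "Japan"),
    ("Busan", "locatedIn", "South_Korea"),
]

tuples = sorted({(h, t) for h, _, t in triples})
relations = sorted({r for _, r, _ in triples})
tuple_idx = {p: i for i, p in enumerate(tuples)}
rel_idx = {r: j for j, r in enumerate(relations)}

# Binary observation matrix: X[i, j] = 1 if entity tuple i holds relation j.
X = np.zeros((len(tuples), len(relations)))
for h, r, t in triples:
    X[tuple_idx[(h, t)], rel_idx[r]] = 1.0

# Decompose X ~= W @ H; rows of W are tuple vectors, columns of H are relation vectors.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_

def score(head, relation, tail):
    """Reliability score of a candidate triple: dot product of its latent vectors."""
    return float(W[tuple_idx[(head, tail)]] @ H[:, rel_idx[relation]])

# Compare the score of an unobserved candidate with that of an observed fact.
print(score("Busan", "capitalOf", "South_Korea"))
print(score("Seoul", "capitalOf", "South_Korea"))
```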
Multi-sense Word Embedding to Improve Performance of a CNN-based Relation Extraction Model
Sangha Nam, Kijong Han, Eun-kyung Kim, Sunggoo Kwon, Yoosung Jung, Key-Sun Choi
http://doi.org/10.5626/JOK.2018.45.8.816
The relation extraction task is to classify the relation between two entities in an input sentence and is important in natural language processing and knowledge extraction. Many studies have built relation extraction models with distant supervision, and deep-learning models such as CNNs and RNNs have recently become mainstream. However, existing studies do not address the homograph problem in the word embeddings used as model input: words with the same surface form but different meanings share a single embedding, so the relation extraction model is trained without accurately capturing word meaning. In this paper, we propose a relation extraction model that uses multi-sense word embeddings. To learn the multi-sense word embeddings, we used a word sense disambiguation module based on CoreNet concepts, and the relation extraction model uses CNN and PCNN architectures to learn key words in sentences.
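A minimal sketch of the core idea, under assumed inputs: tokens are keyed by (surface form, sense) so homographs receive distinct embedding rows, and the sense-tagged sentence is fed to a small CNN relation classifier with max-over-time pooling. The sense tags, labels, and model sizes are stand-ins; the paper's WSD module, CoreNet inventory, and PCNN variant are not reproduced here.

```python
import torch
import torch.nn as nn

# Two sense-tagged sentences (WSD assumed done upstream); "apple" appears with
# two different senses and therefore maps to two different embedding rows.
sentences = [
    [("apple", "company"), ("acquired", "_"), ("beats", "company")],
    [("apple", "fruit"), ("grows", "_"), ("on", "_"), ("trees", "_")],
]
labels = torch.tensor([1, 0])  # hypothetical relation ids, e.g. "acquired" vs "no_relation"

# Vocabulary over (word, sense) pairs instead of bare surface forms.
vocab = {tok: i for i, tok in enumerate(sorted({t for s in sentences for t in s}))}

class CNNRelationClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=16, n_filters=8, n_relations=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.out = nn.Linear(n_filters, n_relations)

    def forward(self, token_ids):
        x = self.emb(token_ids).transpose(1, 2)          # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x)).max(dim=2).values   # max-over-time pooling
        return self.out(h)                               # relation logits

model = CNNRelationClassifier(vocab_size=len(vocab))
logits = torch.cat([model(torch.tensor([[vocab[t] for t in s]])) for s in sentences])
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()  # gradients for one illustrative training step
print(logits.argmax(dim=1))
```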
Design and Implementation of a Hybrid Spatial Reasoning Algorithm
To answer questions successfully in place of a human contestant in DeepQA environments such as the American quiz show ‘Jeopardy!’, a computer needs fast temporal and spatial reasoning over a large-scale commonsense knowledge base. In this paper, we present an efficient hybrid spatial reasoning algorithm for handling directional and topological relations. Because it combines forward and backward reasoning, our algorithm not only improves query processing time by reducing unnecessary reasoning computation, but also deals effectively with changes to the spatial knowledge base. Through experiments on a sample spatial knowledge base with a spatial reasoner implementing our algorithm, we demonstrate its high performance.
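A minimal sketch of the hybrid idea described above, with invented relations and composition rules rather than the paper's actual rule set: transitive spatial relations are materialized by forward chaining so frequent queries become lookups, while a backward search answers queries over facts that have not (yet) been materialized, e.g. after the knowledge base changes.

```python
from collections import defaultdict

# Base spatial facts (subject, relation, object); contents are illustrative.
facts = {
    ("libraryA", "north_of", "stationB"),
    ("stationB", "north_of", "parkC"),
    ("parkC", "inside", "districtD"),
    ("districtD", "inside", "cityE"),
}

# Relations treated as transitive for this sketch.
TRANSITIVE = {"north_of", "inside"}

def forward_chain(facts):
    """Materialize transitive closures up to a fixed point (forward reasoning)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        by_subject = defaultdict(set)
        for s, r, o in derived:
            by_subject[(s, r)].add(o)
        for s, r, o in list(derived):
            if r in TRANSITIVE:
                for o2 in by_subject.get((o, r), ()):
                    if (s, r, o2) not in derived:
                        derived.add((s, r, o2))
                        changed = True
    return derived

def backward_query(facts, s, r, o, depth=4):
    """On-demand check (backward reasoning) for facts not yet materialized."""
    if (s, r, o) in facts:
        return True
    if depth == 0 or r not in TRANSITIVE:
        return False
    return any(backward_query(facts, mid, r, o, depth - 1)
               for fs, fr, mid in facts if fs == s and fr == r)

materialized = forward_chain(facts)
print(("libraryA", "north_of", "parkC") in materialized)   # answered from the forward closure
print(backward_query(facts, "parkC", "inside", "cityE"))    # answered by backward search
```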
Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal