Generating Relation Descriptions with Large Language Model for Link Prediction
http://doi.org/10.5626/JOK.2024.51.10.908
A knowledge graph is a network consisting of entities and the relations between them, and it is used for various natural language processing tasks. One task related to the knowledge graph is Knowledge Graph Completion, which involves reasoning over known facts in the graph to automatically infer missing links. To tackle this task, studies have been conducted on both link prediction and relation prediction. Recently, there has been significant interest in dual-encoder architectures that utilize textual information. However, link prediction datasets provide descriptions only for entities, not for relations, so such models rely heavily on entity descriptions. To address this issue, we use the large language model GPT-3.5-turbo to generate relation descriptions, allowing the baseline model to be trained with more comprehensive relation information. Moreover, the relation descriptions generated by our proposed method are expected to improve the performance of other language model-based link prediction models. The evaluation results for link prediction demonstrate that our proposed method outperforms the baseline model on various datasets, including Korean ConceptNet, WN18RR, FB15k-237, and YAGO3-10, with improvements of 0.34%p, 0.11%p, 0.12%p, and 0.41%p in Mean Reciprocal Rank (MRR), respectively.
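The pipeline the abstract describes can be sketched in two steps: prompt an LLM for a textual description of each relation, then serialize that description into the triple text a dual-encoder sees. The prompt wording, helper names, and `[SEP]` serialization below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of LLM-generated relation descriptions for link prediction.
# Prompt wording and serialization format are assumptions for illustration.

def build_relation_prompt(relation: str, examples: list[tuple[str, str]]) -> str:
    """Build a prompt asking a chat LLM (e.g. GPT-3.5-turbo) to describe a
    knowledge-graph relation, optionally grounded with (head, tail) pairs."""
    lines = [f"Describe the knowledge-graph relation '{relation}' in one sentence."]
    if examples:
        lines.append("Example entity pairs connected by this relation:")
        lines.extend(f"- ({h}, {t})" for h, t in examples)
    return "\n".join(lines)

def attach_relation_description(triple: tuple[str, str, str],
                                relation_descriptions: dict[str, str]) -> str:
    """Serialize a triple with its generated relation description so a
    text-based dual encoder sees relation text alongside entity text."""
    head, relation, tail = triple
    desc = relation_descriptions.get(relation, "")
    return f"{head} [SEP] {relation}: {desc} [SEP] {tail}"

prompt = build_relation_prompt("hypernym", [("dog", "animal")])
serialized = attach_relation_description(
    ("dog", "hypernym", "animal"),
    {"hypernym": "indicates that the head is a specific kind of the tail"})
```

In practice the prompt would be sent to the chat completion API and the returned description cached per relation, so each relation is described once regardless of how many triples use it.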
Efficient Compositional Translation Embedding for Visual Relationship Detection
Yu-Jung Heo, Eun-Sol Kim, Woo Suk Choi, Kyoung-Woon On, Byoung-Tak Zhang
http://doi.org/10.5626/JOK.2022.49.7.544
Scene graphs are widely used to express high-order visual relationships between objects present in an image. To generate scene graphs automatically, we propose an algorithm that detects visual relationships between objects and predicts each relationship as a predicate. Inspired by the well-known knowledge graph embedding method TransR, we present the CompTransR algorithm, which i) defines latent relational subspaces that reflect the compositional nature of visual relationships and ii) encodes predicate representations by applying transitive constraints between the object representations in each subspace. Our proposed model not only reduces computational complexity but also outperforms previous state-of-the-art methods on the predicate detection task across three benchmark datasets: VRD, VG200, and VrR-VG. We also show that the generated scene graph can be applied to image-caption retrieval, a high-level visual reasoning task, where it improves retrieval performance.
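Since CompTransR builds on TransR, the core scoring idea can be sketched as follows: each relation has its own projection matrix that maps entity embeddings into a relation-specific subspace, where the relation vector should translate the projected head onto the projected tail. The dimensions and the distance-based score below follow the original TransR formulation and are illustrative, not the paper's exact model.

```python
import numpy as np

# Minimal TransR-style scoring sketch (CompTransR extends this idea with
# compositional subspaces and transitive constraints; values are random
# placeholders for illustration).
rng = np.random.default_rng(0)
d_e, d_r = 4, 3                     # entity / relation subspace dimensions
h = rng.normal(size=d_e)            # head object embedding
t = rng.normal(size=d_e)            # tail object embedding
r = rng.normal(size=d_r)            # predicate embedding in its subspace
M_r = rng.normal(size=(d_r, d_e))   # relation-specific projection matrix

def transr_score(h, r, t, M_r):
    """Project both entities into the relation subspace and measure how
    well head + relation approximates tail (lower = more plausible)."""
    h_r, t_r = M_r @ h, M_r @ t
    return float(np.linalg.norm(h_r + r - t_r))

score = transr_score(h, r, t, M_r)
```

A triple is scored as plausible when the translation residual in the relation subspace is small; predicate detection then amounts to ranking candidate predicates by this score for each object pair.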

Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr