Search : [ keyword: attention mechanism (어텐션 메커니즘) ] (7)

A Graph Neural Network Approach for Predicting the Lung Carcinogenicity of Single Molecular Compounds

Yunju Song, Sunyong Yoo

http://doi.org/10.5626/JOK.2025.52.6.482

Cancer is one of the major diseases causing millions of deaths worldwide every year, and lung cancer was the leading cause of cancer-related deaths in Korea in 2022. Research on lung cancer-causing compounds is therefore essential, and this study proposes and evaluates a novel approach that uses graph neural networks to predict lung carcinogenic potential, overcoming the limitations of existing machine learning and deep learning methods. Based on SMILES (Simplified Molecular Input Line Entry System) strings from the compound carcinogenicity databases CPDB, CCRIS, IRIS, and T3DB, the structures and chemical properties of molecules were converted into graph data for training, and the proposed model showed superior prediction performance compared to other models. This demonstrates the potential of graph neural networks as an effective tool for lung carcinogenicity prediction and suggests that they can make important contributions to future cancer research and treatment development.
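The core idea of converting molecular structure into graph data can be illustrated with a minimal sketch. This is not the authors' pipeline: the graph below is hand-built for a toy molecule rather than parsed from an actual SMILES string, atomic numbers stand in for the richer chemical node features a real model would use, and the single sum-aggregation step is only the simplest form of graph neural network message passing.

```python
# Toy molecular graph for formaldehyde (CH2O), hand-built for illustration.
# A real pipeline would parse a SMILES string into atoms and bonds.
atoms = [6, 8, 1, 1]              # node features: atomic numbers (C, O, H, H)
bonds = [(0, 1), (0, 2), (0, 3)]  # undirected bonds: C=O, C-H, C-H

# Build an adjacency list from the bond list.
neighbors = {i: [] for i in range(len(atoms))}
for u, v in bonds:
    neighbors[u].append(v)
    neighbors[v].append(u)

def message_pass(features, neighbors):
    """One sum-aggregation step: each node adds its neighbors' features
    to its own, so local chemical context flows along the bonds."""
    return [features[i] + sum(features[j] for j in neighbors[i])
            for i in range(len(features))]

h = message_pass(atoms, neighbors)
# The carbon (node 0) now aggregates the O and both H features.
```

Stacking several such steps (with learned weights and nonlinearities) lets each atom's representation absorb progressively larger structural neighborhoods, which is what the graph-based carcinogenicity model exploits.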

Task-Oriented Dialogue System Using a Fusion Module between Knowledge Graphs

Jinyoung Kim, Hyunmook Cha, Youngjoong Ko

http://doi.org/10.5626/JOK.2024.51.10.882

The field of task-oriented dialogue systems focuses on using natural language processing to help users accomplish specific tasks through conversation. Recently, Transformer-based pre-trained language models have been employed to improve the performance of task-oriented dialogue systems. This paper proposes a response generation model based on Graph Attention Networks (GAT) that integrates external knowledge into Transformer-based language models to produce more specialized responses. We further extend this approach to incorporate information from more than two knowledge graphs. To evaluate the proposed model, we also collected and refined dialogue data grounded in a music-domain knowledge base; the resulting dataset consists of 2,076 dialogues and 226,823 triples. In experiments, the proposed model improved over the KoBART baseline by 13.83%p in ROUGE-1, 8.26%p in ROUGE-2, and 13.5%p in ROUGE-L on this dataset.
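A single Graph Attention Network layer of the kind the model builds on can be sketched as follows. This is a generic GAT forward pass in plain numpy, not the paper's fusion module: the fully connected toy graph, random weights, and dimensions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, F_in, F_out = 5, 4, 3            # toy sizes: 5 nodes

H = rng.normal(size=(N, F_in))      # input node features
W = rng.normal(size=(F_in, F_out))  # shared linear transform
a = rng.normal(size=(2 * F_out,))   # attention parameter vector
adj = np.ones((N, N))               # toy graph: fully connected, self-loops

Wh = H @ W
# Attention logit e_ij = LeakyReLU(a^T [Wh_i || Wh_j])
e = np.empty((N, N))
for i in range(N):
    for j in range(N):
        z = a @ np.concatenate([Wh[i], Wh[j]])
        e[i, j] = z if z > 0 else 0.2 * z    # LeakyReLU, slope 0.2

# Mask non-edges, then softmax-normalize each node's row over its neighbors.
e = np.where(adj > 0, e, -np.inf)
alpha = np.exp(e - e.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)

H_out = alpha @ Wh                  # attention-weighted neighbor aggregation
```

In the paper's setting, node representations produced this way over the knowledge graph(s) are fused into the language model to condition response generation on external knowledge.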

Graph Embedding-Based Point-Of-Interest Recommendation Considering Weather Features

Kun Woo Lee, Jongseon Kim, Yon Dohn Chung

http://doi.org/10.5626/JOK.2022.49.3.221

As Location-Based Services (LBS) grow rapidly, Point-Of-Interest (POI) recommendation has become an active research area that provides users with information relevant to their locations. Recently, translation-based recommendation systems using graph embedding, such as TransRec, have attracted great attention. In this paper, we identify two drawbacks of TransRec: it is limited in expressing the complex relationship between users and POIs, and its relation embedding is fixed without considering weather features. We propose WAPTRec, a graph embedding-based POI recommendation method that considers the weather and overcomes these drawbacks. WAPTRec can represent the same POI embedding differently for different users by using a category projection matrix and an attention mechanism. In addition, it improves recommendation accuracy by utilizing users' movement histories, POI categories, and weather features. Experiments on public datasets show that WAPTRec outperforms conventional translation-based recommendation methods.
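The translation-based scoring idea underlying TransRec-style models can be sketched as below. This is a simplified illustration, not WAPTRec itself: the per-user projection matrix `P_u` stands in for the paper's category projection, the embeddings are random, and the weather- and attention-dependent parts are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
prev_poi = rng.normal(size=d)          # embedding of the last visited POI
user_rel = rng.normal(size=d)          # user-specific translation vector
P_u = rng.normal(size=(d, d))          # hypothetical per-user projection,
                                       # letting one POI embed differently
                                       # for different users

candidates = rng.normal(size=(10, d))  # candidate next-POI embeddings

# Translation scoring: project the previous POI into the user's space,
# translate by the user's relation vector, and rank candidates by
# (negative) L2 distance to the translated point.
translated = P_u @ prev_poi + user_rel
scores = -np.linalg.norm(candidates - translated, axis=1)
best = int(np.argmax(scores))          # recommended POI index
```

WAPTRec extends this scheme by making the relation component depend on weather features and attention weights rather than being fixed.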

Analyzing the Impact of Sequential Context Learning on the Transformer Based Korean Text Summarization Model

Subin Kim, Yongjun Kim, Junseong Bang

http://doi.org/10.5626/JOK.2021.48.10.1097

Text summarization reduces sequence length while preserving the meaning of the entire article body, alleviating information overload and helping readers consume information quickly. To this end, research on Transformer-based English text summarization models has been active. Recently, an abstractive summarization model was proposed that reflects the relatively fixed word order of English by adding a Recurrent Neural Network (RNN)-based encoder. In this paper, we study the effect of sequential context learning on abstractive summarization by applying an RNN-based encoder to Korean, whose word order is freer than that of English. A Transformer-based model and a model that adds an RNN-based encoder to the Transformer are trained, and their performance on headline generation and article body summarization is compared on Korean articles we collected directly. Experiments show that the model with the RNN-based encoder performs better, indicating that sequential context learning is beneficial for Korean abstractive text summarization.
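What the added RNN encoder contributes can be seen in a minimal sketch: unlike self-attention, a recurrent pass threads a hidden state left-to-right, so each position's representation depends on everything before it. This is a bare Elman RNN with random toy weights, not the paper's architecture.

```python
import numpy as np

def rnn_encode(x_seq, W_x, W_h, b):
    """Simple Elman RNN encoder: returns one hidden state per step,
    each carrying left-to-right sequential context."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in x_seq:
        h = np.tanh(W_x @ x + W_h @ h + b)   # state depends on all prior steps
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(2)
T, d_in, d_hid = 5, 6, 4
seq = rng.normal(size=(T, d_in))             # toy token embeddings
W_x = rng.normal(size=(d_hid, d_in)) * 0.1
W_h = rng.normal(size=(d_hid, d_hid)) * 0.1
b = np.zeros(d_hid)

states = rnn_encode(seq, W_x, W_h, b)
# These sequential states can then be fed into the Transformer encoder
# alongside (or instead of) position-agnostic token embeddings.
```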

EFA-DTI: Prediction of Drug-Target Interactions Using Edge Feature Attention

Erkhembayar Jadamba, Sooheon Kim, Hyeonsu Lee, Hwajong Kim

http://doi.org/10.5626/JOK.2021.48.7.825

Drug discovery is a demanding field of research requiring the coordination of disciplines ranging from medicinal chemistry, systems biology, and structural biology to, increasingly, artificial intelligence. In particular, drug-target interaction (DTI) prediction is central to screening and optimizing candidate substances for treating disease from a nearly infinite set of compounds. Recently, as computing performance has improved dramatically, studies using artificial neural networks have been actively conducted to reduce the cost and increase the efficiency of DTI prediction. This paper proposes a model that predicts the interaction value between a given molecule and protein, using a molecule representation learned via an edge-feature-attention graph network combined with fixed fingerprints, and a protein representation obtained from pre-trained protein embeddings. The paper describes the architecture, experimental methods, and findings. The model outperformed DeepDTA and GraphDTA, which had previously achieved the best performance in DTI studies.
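The distinguishing idea, letting bond (edge) features influence attention between atoms, can be sketched minimally. This is an illustrative toy, not EFA-DTI: the graph, features, and the simple dot-product-plus-edge-term scoring are all assumptions standing in for the learned model.

```python
import numpy as np

rng = np.random.default_rng(3)
N, d, d_e = 4, 5, 3
H = rng.normal(size=(N, d))               # atom (node) features
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # toy ring of bonds
E = rng.normal(size=(len(edges), d_e))    # bond (edge) features
w_e = rng.normal(size=(d_e,))             # edge-scoring weights

# Attention logit per edge: endpoint similarity plus a bond-feature term,
# so chemically different bonds attend differently even between
# identical atom pairs.
logits = {}
for k, (i, j) in enumerate(edges):
    s = H[i] @ H[j] + w_e @ E[k]
    logits[(i, j)] = s
    logits[(j, i)] = s                    # undirected graph

out = np.zeros_like(H)
for i in range(N):
    nbrs = [j for j in range(N) if (i, j) in logits]
    w = np.array([logits[(i, j)] for j in nbrs])
    w = np.exp(w - w.max())
    w /= w.sum()                          # softmax over i's incident edges
    out[i] = sum(wk * H[j] for wk, j in zip(w, nbrs))
```

In the full model, such edge-aware atom representations are pooled into a molecule embedding and combined with the pre-trained protein embedding to regress the interaction value.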

An Explainable Knowledge Completion Model Using Explanation Segments

Min-Ho Lee, Wan-Gon Lee, Batselem Jagvaral, Young-Tack Park

http://doi.org/10.5626/JOK.2021.48.6.680

Recently, many studies have used deep learning to predict new links in incomplete knowledge graphs. However, link prediction with deep learning has a major limitation: the inferred results cannot be explained. We propose a knowledge graph prediction model that yields explainable inference paths supporting its results. Using a path ranking algorithm, we extract paths to the object from the knowledge graph and define them as explanation segments. The generated explanation segments are then embedded using a convolutional neural network (CNN) and a bidirectional long short-term memory network (BiLSTM). The link prediction model is trained with an attention mechanism based on the semantic similarity between the embedded explanation segments and the candidate predicates to be inferred. The explanation segment best suited to explaining a link prediction is selected according to the attention scores. To evaluate the proposed method, we performed a link prediction comparison experiment and an accuracy verification experiment measuring the proportion of explanation segments suitable for explaining the link prediction results. On the benchmark datasets NELL-995, FB15K-237, and Countries, the accuracy verification experiments showed accuracies of 89%, 44%, and 97%, respectively. Compared with the existing method, the model achieved 35%p and 21%p higher performance on average on NELL-995 and FB15K-237.
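The attention-based selection of an explanation segment can be sketched as below. This is a simplification under stated assumptions: the segment and predicate vectors are random stand-ins for the CNN/BiLSTM encodings, and similarity is a plain dot product rather than the model's learned scoring.

```python
import numpy as np

rng = np.random.default_rng(4)
n_seg, d = 6, 8
segments = rng.normal(size=(n_seg, d))  # encoded explanation segments
predicate = rng.normal(size=d)          # candidate predicate embedding

# Semantic-similarity attention: score each segment against the candidate
# predicate, then softmax-normalize into attention weights.
scores = segments @ predicate
att = np.exp(scores - scores.max())
att /= att.sum()

best_segment = int(np.argmax(att))      # segment chosen as the explanation
context = att @ segments                # weighted evidence for link scoring
```

The highest-weighted segment serves as the human-readable justification, while the attention-weighted combination feeds the link prediction itself.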

Query-based Abstractive Summarization Model Using Sentence Ranking Scores and Graph Techniques

Gihwan Kim, Youngjoong Ko

http://doi.org/10.5626/JOK.2020.47.12.1172

The purpose of a standard abstractive summarization model is to generate a short summary that covers all the important content of a document. In contrast, a query-based abstractive summarization model must select and summarize the information in the document that is relevant to the query. Existing query-based summarization models calculate the importance of sentences using only word weights obtained from an attention mechanism between the words of the document and the query. This approach makes it difficult to reflect the full context of the document when generating an abstractive summary. In this paper, we resolve this problem by computing sentence ranking scores and a sentence-level graph structure. Our proposed model outperforms the previous model by 1.44%p in ROUGE-1 and 0.52%p in ROUGE-L.
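Combining query attention with sentence-level ranking scores can be sketched as below. This is an illustrative toy, not the paper's model: the sentence encodings are random, and the ranking scores are hypothetical values standing in for a graph-based ranker's output.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sent, d = 5, 8
sentences = rng.normal(size=(n_sent, d))  # sentence-level encodings
query = rng.normal(size=d)                # query encoding

# Word-level-style attention lifted to sentences: query-sentence similarity,
# softmax-normalized.
att = sentences @ query
att = np.exp(att - att.max())
att /= att.sum()

# Hypothetical sentence ranking scores (e.g. from a sentence graph);
# multiplying them in lets document-wide importance reshape the
# purely query-driven attention.
rank = np.array([0.3, 0.1, 0.25, 0.2, 0.15])
combined = att * rank
combined /= combined.sum()

doc_repr = combined @ sentences           # context vector for the decoder
```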


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr