Search : [ keyword: Interpretability ] (3)

Online Opinion Fraud Detection Using Graph Neural Network

Woochang Hyun, Insoo Lee, Bongwon Suh

http://doi.org/10.5626/JOK.2023.50.11.985

This study proposed a graph neural network model to detect opinion frauds that undermine the reliability of information and hinder users' decision-making on online platforms. The proposed method operates on a graph of relationships between online reviews to produce relational representations, which are then combined with the characteristics of the center nodes to predict fraud. Experimental results on a real-world dataset demonstrate that this approach is more accurate and faster than existing state-of-the-art methods, while also providing interpretability for key relations. With the help of this study, practitioners will be able to utilize the analytical results in decision-making and overcome the general drawback of neural network-based models' lack of explainability.
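
A minimal sketch of the idea described in the abstract, assuming a PyTorch setup; the module name, dimensions, and mean-pooling neighbor aggregator are illustrative assumptions, not the authors' actual architecture:

# Hypothetical sketch: aggregate relational representations from a review's graph
# neighbors, then combine them with the center node's own features to classify fraud.
import torch
import torch.nn as nn

class RelationalFraudClassifier(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.neighbor_proj = nn.Linear(feat_dim, hidden_dim)   # relational representation
        self.center_proj = nn.Linear(feat_dim, hidden_dim)     # center-node representation
        self.classifier = nn.Linear(2 * hidden_dim, 2)         # fraud / non-fraud logits

    def forward(self, center_feats, neighbor_feats):
        # center_feats: (N, feat_dim); neighbor_feats: (N, K, feat_dim) for K graph neighbors
        relation = self.neighbor_proj(neighbor_feats).mean(dim=1)  # mean-aggregate neighbors
        center = self.center_proj(center_feats)
        combined = torch.cat([relation, center], dim=-1)            # combine both views
        return self.classifier(combined)

# Toy usage: 4 reviews, each with 3 graph neighbors and 16-dimensional features.
model = RelationalFraudClassifier(feat_dim=16, hidden_dim=32)
logits = model(torch.randn(4, 16), torch.randn(4, 3, 16))
print(logits.shape)  # torch.Size([4, 2])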

Explainable Graph Neural Network for Medical Science Research

Yewon Shin, Kisung Moon, Youngsuk Jung, Sunyoung Kwon

http://doi.org/10.5626/JOK.2022.49.11.999

Explainable AI (XAI) is a technology that provides explainability so that end-users can comprehend the prediction results of ML algorithms. In particular, ensuring the reliability of an AI algorithm's decision-making process through XAI technology is most critical in the medical field in terms of real-world applications. However, complex interaction-based medical data restrict the application of existing XAI technologies, which were developed mostly for image or text data. Graph Neural Network (GNN)-based XAI research has been highlighted in recent years because GNNs are technically specialized to capture complex relationships in data. In this paper, we proposed a taxonomy of GNN-based XAI technology according to application method and algorithm, together with current XAI research trends and use-cases in four detailed areas of the medical field. We also expounded on the technical limitations and future work of XAI research specialized in the biomedical area.

Branchpoint Prediction Using Self-Attention Based Deep Neural Networks

Hyeonseok Lee, Sungchan Kim

http://doi.org/10.5626/JOK.2020.47.4.343

Splicing is a ribonucleic acid (RNA) process that creates messenger RNA (mRNA), which is translated into proteins. Branchpoints are sequence elements of RNAs essential in splicing. This paper proposes a novel method for branchpoint prediction. Identification of branchpoints involves several challenges. Branchpoint sites are known to depend on several sequence patterns, called motifs. In addition, the branchpoint distribution is highly biased, imposing a class-imbalance problem. Existing approaches are limited in that they either rely on handcrafted sequential features or ignore the class imbalance. To address those difficulties, the proposed method incorporates 1) attention mechanisms to learn sequence-positional long-term dependencies, and 2) regularization with a triplet loss to alleviate the class imbalance. Our method achieves performance comparable to the state of the art while providing rich interpretability of its decisions.
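
A minimal sketch of the two ingredients named above (self-attention over sequence windows and triplet-loss regularization), assuming PyTorch; the window length, embedding size, and pooling are illustrative assumptions, not the paper's exact configuration:

# Hypothetical sketch: self-attention encoder over one-hot RNA windows, trained with
# cross-entropy plus a triplet-margin regularizer to counter branchpoint class imbalance.
import torch
import torch.nn as nn

class BranchpointNet(nn.Module):
    def __init__(self, embed_dim: int = 32, num_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(4, embed_dim)                       # A/C/G/U one-hot -> embedding
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.head = nn.Linear(embed_dim, 2)                        # branchpoint vs. background

    def forward(self, x):
        # x: (batch, seq_len, 4) one-hot nucleotide windows
        h = self.embed(x)
        h, _ = self.attn(h, h, h)                                  # long-range positional dependencies
        z = h.mean(dim=1)                                          # pooled sequence embedding
        return self.head(z), z

model = BranchpointNet()
ce_loss = nn.CrossEntropyLoss()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

# Toy batch (random stand-ins for one-hot windows): anchor/positive from the minority
# class, negative from the majority class.
anchor, positive, negative = (torch.rand(8, 70, 4) for _ in range(3))
labels = torch.ones(8, dtype=torch.long)

logits, z_a = model(anchor)
_, z_p = model(positive)
_, z_n = model(negative)
loss = ce_loss(logits, labels) + triplet_loss(z_a, z_p, z_n)
loss.backward()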


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr