A Study of Metric and Framework Improving Fairness-utility Trade-off in Link Prediction
Heeyoon Yang, YongHoon Kang, Gahyung Kim, Jiyoung Lim, SuHyun Yoon, Ho Seung Kim, Jee-Hyong Lee
http://doi.org/10.5626/JOK.2023.50.2.179
Artificial intelligence (AI) technology has shown remarkable improvements over the last decade. However, AI sometimes makes biased predictions because real-world big data intrinsically contain discriminatory social factors. This problem often arises in friend recommendation in Social Network Services (SNS). Graph Neural Networks (GNNs) are commonly trained on social network datasets, but they have a strong tendency to connect similar nodes (the homophily effect). They are therefore more likely to make biased predictions based on socially sensitive attributes, such as gender or religion, which makes them ethically more problematic. To overcome these problems, various fairness-aware AI models and fairness metrics have been proposed. However, most studies used different metrics to evaluate fairness and did not consider the trade-off relationship between accuracy and fairness. Thus, we propose a novel fairness metric, the Fairβ-metric, which takes both prediction accuracy and fairness into consideration, and a framework, FairU, that shows outstanding performance on the proposed metric.
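The abstract does not give the exact form of the Fairβ-metric, but the name suggests an Fβ-style combination of a utility score and a fairness score. Below is a minimal, hypothetical sketch in Python that combines link-prediction accuracy with a demographic-parity gap in a β-weighted harmonic mean; the function names, the choice of fairness gap, and the formula are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups
    of node pairs (a common group-fairness measure for link prediction)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

def fair_beta_score(accuracy, fairness_gap, beta=1.0):
    """Illustrative Fβ-style harmonic mean of accuracy and fairness,
    where fairness = 1 - fairness_gap. beta > 1 weights fairness more
    heavily; beta < 1 weights accuracy more heavily.
    NOTE: an assumed formulation, not the paper's exact Fairβ-metric."""
    fairness = 1.0 - fairness_gap
    if accuracy == 0 and fairness == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * accuracy * fairness / (b2 * accuracy + fairness)

# Toy example: biased link predictions over node pairs split by a sensitive attribute.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])     # predicted links
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # group indicator per pair
accuracy = 0.80                                 # assume measured separately
gap = demographic_parity_gap(y_pred, sensitive) # 0.75 - 0.25 = 0.5
print(fair_beta_score(accuracy, gap, beta=1.0)) # single trade-off-aware score
```

With β greater than 1 the score penalizes unfairness more heavily, mirroring the accuracy-fairness trade-off the metric is meant to capture.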
Document Summarization Considering Entailment Relation between Sentences
Youngdae Kwon, Noo-ri Kim, Jee-Hyong Lee
Document summarization aims to generate a summary that is consistent and contains the most relevant sentences of a document. In this study, we implemented a document summarization method that extracts highly related sentences from the whole document by considering both similarities and entailment relations between sentences. Accordingly, we proposed a new algorithm, TextRank-NLI, which combines a recurrent neural network based Natural Language Inference model with the graph-based ranking algorithm used in the single-document extractive summarization task. To evaluate the performance of the new algorithm, we conducted experiments using the same datasets used for the TextRank algorithm. The results indicated that TextRank-NLI showed a 2.3% improvement in performance compared to TextRank.
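The abstract describes TextRank-NLI as a graph-based ranker whose edges reflect both sentence similarity and entailment. A minimal sketch of that idea is shown below, assuming the two signals are mixed into edge weights by a simple weighted sum and ranked with damped power iteration; the mixing weight `alpha` and the normalization scheme are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def textrank_nli(sim, entail, alpha=0.5, d=0.85, iters=50):
    """Sketch of a TextRank-style ranker whose edge weights mix sentence
    similarity with an entailment score from an NLI model.

    sim[i][j]    : similarity between sentences i and j (e.g., embedding cosine)
    entail[i][j] : probability that sentence i entails sentence j (NLI model output)
    alpha        : assumed mixing weight between the two signals
    d            : damping factor, as in PageRank/TextRank
    """
    # Combine the two signals into a single weighted adjacency matrix.
    w = alpha * np.asarray(sim, dtype=float) + (1 - alpha) * np.asarray(entail, dtype=float)
    np.fill_diagonal(w, 0.0)
    n = w.shape[0]
    # Column-normalize so each sentence distributes its score over its neighbours.
    col_sums = w.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0] = 1.0
    p = w / col_sums
    # Damped power iteration of the ranking update.
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - d) / n + d * (p @ scores)
    return scores

# Toy example with 3 sentences: higher scores = more summary-worthy.
sim = [[0, 0.6, 0.1], [0.6, 0, 0.2], [0.1, 0.2, 0]]
entail = [[0, 0.8, 0.05], [0.3, 0, 0.1], [0.05, 0.1, 0]]
print(textrank_nli(sim, entail))
```

The top-ranked sentences would then be extracted as the summary, as in standard TextRank.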