Digital Library: Search Results
LLMEE: Enhancing Explainability and Evaluation of Large Language Models through Visual Token Attribution
Yunsu Kim, Minchan Kim, Jinwoo Choi, Youngseok Hwang, Hyunwoo Park
http://doi.org/10.5626/JOK.2024.51.12.1104
Large Language Models (LLMs) have made significant advancements in Natural Language Processing (NLP) and generative AI. However, their complex structure poses challenges for interpretability and reliability. To address this issue, this study proposed LLMEE, a tool designed to visually explain and evaluate the prediction process of LLMs. LLMEE visually represents the impact of each input token on the output, enhancing model transparency and providing insights into various NLP tasks such as Summarization, Question Answering, and Text Generation. Additionally, it integrates evaluation metrics such as ROUGE, BLEU, and BLEURTScore, offering both quantitative and qualitative assessments of LLM outputs. LLMEE is expected to contribute to more reliable evaluation and improvement of LLMs in both academic and industrial contexts by facilitating a better understanding of their complex workings and by providing enhanced output quality assessments.
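As a rough illustration of the idea behind input-token attribution (not the LLMEE implementation, which is not described in detail here), the sketch below scores each input token by occlusion: the drop in an output score when that token is masked out. The `score_fn`, `mask_token`, and the toy keyword-based scorer are hypothetical placeholders for a real model call such as the log-likelihood of a generated answer.

```python
# Occlusion-based token attribution: each token's importance is estimated as
# the drop in an output score when that token is replaced by a mask token.
from typing import Callable, List


def occlusion_attribution(tokens: List[str],
                          score_fn: Callable[[List[str]], float],
                          mask_token: str = "[MASK]") -> List[float]:
    """Return one attribution score per input token (larger = more important)."""
    base = score_fn(tokens)
    scores = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        scores.append(base - score_fn(masked))  # score drop when token i is hidden
    return scores


if __name__ == "__main__":
    # Toy stand-in for a model score: counts the content words it "relies on".
    keywords = {"capital", "france"}
    toy_score = lambda toks: float(sum(t.lower() in keywords for t in toks))

    tokens = "What is the capital of France ?".split()
    for tok, s in zip(tokens, occlusion_attribution(tokens, toy_score)):
        print(f"{tok:>8s}  {s:+.1f}")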
Explainable Graph Neural Network for Medical Science Research
Yewon Shin, Kisung Moon, Youngsuk Jung, Sunyoung Kwon
http://doi.org/10.5626/JOK.2022.49.11.999
Explainable AI (XAI) is a technology that provides explanations so that end users can comprehend the prediction results of ML algorithms. In particular, establishing the reliability of an AI algorithm's decision-making process through XAI is most critical for real-world applications in the medical field. However, medical data built on complex interactions restrict the application of existing XAI techniques, which were developed mostly for image or text data. Graph Neural Network (GNN)-based XAI research has been highlighted in recent years because GNNs are technically specialized to capture complex relationships in data. In this paper, we proposed a taxonomy of GNN-based XAI techniques according to their application method and algorithm, reviewed current XAI research trends, and presented use cases in four areas of the medical field. We also discussed the technical limitations and future directions of XAI research specialized in the biomedical area.
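As a loose illustration of how a GNN explanation can attribute a prediction to individual edges (not a method from the surveyed papers), the sketch below scores each edge of a toy graph by how much a one-layer mean-aggregation "prediction" for a target node changes when that edge is removed. The propagation rule, weights, and graph are all made up for the example.

```python
# Edge-occlusion explanation for a toy one-layer GNN: score each edge by the
# change in the target node's output when that edge is deleted.
import numpy as np


def propagate(x, edges, w):
    """One round of mean-neighbor aggregation followed by a linear map.
    Nodes with no incoming edges keep their own features."""
    agg = x.copy()
    for i in range(x.shape[0]):
        nbrs = [x[u] for (u, v) in edges if v == i]
        if nbrs:
            agg[i] = np.mean(nbrs, axis=0)
    return agg @ w


# Toy graph: 4 nodes with 2 features each, directed edges (source, target).
x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
edges = [(0, 3), (1, 3), (2, 3)]
w = np.array([[1.0], [2.0]])   # maps 2 features to a scalar "prediction"
target = 3

base = propagate(x, edges, w)[target, 0]
for e in edges:
    reduced = [f for f in edges if f != e]
    drop = base - propagate(x, reduced, w)[target, 0]
    print(f"edge {e}: importance {drop:+.3f}")
```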
Survey on Feature Attribution Methods in Explainable AI
Gihyuk Ko, Gyumin Lim, Homook Cho
http://doi.org/10.5626/JOK.2020.47.12.1181
As artificial intelligence (AI)-based technologies are increasingly used in areas with significant socioeconomic impact, there is a growing effort to explain the decisions made by AI models. One important direction in such eXplainable AI (XAI) is the ‘feature attribution’ method, which explains AI models by assigning a contribution score to each input feature. In this work, we surveyed nine recently developed feature attribution methods and categorized them using four different criteria. Based on these categorizations, we found that current methods focus only on specific settings, such as generating local, white-box explanations of neural networks, and lack theoretical foundations such as axiomatic definitions. Based on our findings, we suggest future research directions toward a unified feature attribution method.
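For concreteness, the sketch below implements one well-known feature attribution method, Integrated Gradients, on a hand-coded toy model; the model and function names are illustrative and not taken from the surveyed papers. Integrated Gradients is an example of the axiomatic approach the survey alludes to: its attributions sum to f(x) − f(baseline) (the completeness axiom).

```python
# Integrated Gradients on a toy differentiable model f(x) = x0*x1 + 2*x2:
# average the gradient along the straight path from a baseline to the input,
# then scale by (input - baseline).
import numpy as np


def predict(x):
    return x[0] * x[1] + 2.0 * x[2]


def grad(x):
    # Analytic gradient of the toy model.
    return np.array([x[1], x[0], 2.0])


def integrated_gradients(x, baseline, steps=100):
    alphas = np.linspace(0.0, 1.0, steps)
    avg_grad = np.mean([grad(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad


x = np.array([1.0, 3.0, 2.0])
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline)
# Completeness check: attributions sum to f(x) - f(baseline).
print("attributions:", attr)
print("sum:", attr.sum(), " f(x)-f(baseline):", predict(x) - predict(baseline))
```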