Survey on Feature Attribution Methods in Explainable AI
Gihyuk Ko, Gyumin Lim, Homook Cho
http://doi.org/10.5626/JOK.2020.47.12.1181
As artificial intelligence (AI)-based technologies are increasingly used in areas with significant socioeconomic impact, there is a growing effort to explain the decisions made by AI models. One important direction in such eXplainable AI (XAI) is the 'feature attribution' method, which explains AI models by assigning a contribution score to each input feature. In this work, we surveyed nine recently developed feature attribution methods and categorized them using four different criteria. Based on these categorizations, we found that current methods focus mainly on specific settings, such as generating local, white-box explanations of neural networks, and lack theoretical foundations such as axiomatic definitions. Based on our findings, we suggest future research directions toward a unified feature attribution method.
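
As a concrete illustration of the core idea (not taken from the paper), a feature attribution method assigns each input feature a score measuring its contribution to the model's output. The minimal sketch below assumes a simple linear model, for which the contribution of feature i relative to a baseline input can be taken as w_i * (x_i - x'_i); for linear models this coincides with several attribution methods (e.g., Integrated Gradients). All names and values here are hypothetical.

```python
import numpy as np

def linear_attributions(w, x, baseline):
    """Per-feature contribution scores for a linear model f(x) = w.x + b."""
    return w * (x - baseline)

w = np.array([0.8, -1.2, 0.3])   # model weights (hypothetical)
x = np.array([1.0, 0.5, 2.0])    # input to explain
baseline = np.zeros_like(x)      # all-zero reference input

scores = linear_attributions(w, x, baseline)
print(scores)        # [ 0.8 -0.6  0.6]

# The scores sum to f(x) - f(baseline), a "completeness" property
# shared by several axiomatically motivated attribution methods.
print(scores.sum())  # 0.8
```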

Journal of KIISE
- ISSN: 2383-630X (Print)
- ISSN: 2383-6296 (Electronic)
- KCI Accredited Journal