Survey on Feature Attribution Methods in Explainable AI 


Vol. 47, No. 12, pp. 1181-1191, Dec. 2020
10.5626/JOK.2020.47.12.1181



  Abstract

As artificial intelligence (AI)-based technologies are increasingly applied in areas with significant socioeconomic impact, there is a growing effort to explain the decisions made by AI models. One important direction in such eXplainable AI (XAI) is the 'feature attribution' approach, which explains AI models by assigning a contribution score to each input feature. In this work, we surveyed nine recently developed feature attribution methods and categorized them using four different criteria. Based on these categorizations, we found that current methods focus only on specific settings, such as generating local, white-box explanations of neural networks, and lack theoretical foundations such as axiomatic definitions. Based on our findings, we suggest future research directions toward a unified feature attribution method.
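To make the idea of feature attribution concrete, the following is a minimal sketch (not one of the nine surveyed methods): for a linear model f(x) = w·x, the "gradient × input" attribution of each feature equals w_i * x_i, and the per-feature scores sum exactly to the model output, an instance of the completeness axiom mentioned in axiomatic treatments of attribution. The weights and input below are hypothetical.

```python
import numpy as np

# Toy linear model f(x) = w @ x; attribution via "gradient x input".
# For a linear model the gradient w.r.t. x is simply w, so the
# contribution score of feature i is w_i * x_i.
w = np.array([0.5, -1.0, 2.0])   # hypothetical learned weights
x = np.array([4.0, 3.0, 1.0])    # input instance to explain

output = float(w @ x)            # model prediction: 2.0 - 3.0 + 2.0 = 1.0
attributions = w * x             # per-feature contribution scores

print(attributions)              # [ 2. -3.  2.]
# Completeness: attributions sum to the model output.
assert np.isclose(attributions.sum(), output)
```

For nonlinear models such as neural networks, gradient × input is only a first-order approximation, which is one motivation for the more elaborate methods covered in the survey.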




  Cite this article

[IEEE Style]

G. Ko, G. Lim, H. Cho, "Survey on Feature Attribution Methods in Explainable AI," Journal of KIISE, JOK, vol. 47, no. 12, pp. 1181-1191, 2020. DOI: 10.5626/JOK.2020.47.12.1181.


[ACM Style]

Gihyuk Ko, Gyumin Lim, and Homook Cho. 2020. Survey on Feature Attribution Methods in Explainable AI. Journal of KIISE, JOK, 47, 12, (2020), 1181-1191. DOI: 10.5626/JOK.2020.47.12.1181.


[KCI Style]

Gihyuk Ko, Gyumin Lim, and Homook Cho, "Survey on Feature Attribution Methods in Explainable AI," Journal of KIISE, vol. 47, no. 12, pp. 1181-1191, 2020. DOI: 10.5626/JOK.2020.47.12.1181.









Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr