Layered Abstraction Technique for Effective Formal Verification of Deep Neural Networks 


Vol. 49, No. 11, pp. 958-971, Nov. 2022
10.5626/JOK.2022.49.11.958



  Abstract

Deep learning has performed well in many areas, but it is vulnerable to errors such as adversarial examples. Consequently, there has been much research on ensuring the safety and robustness of deep neural networks. Because deep neural networks are large in scale and their activation functions are non-linear, linear approximation methods for such activation functions have been proposed and are widely used for verification. In this paper, we propose a new technique, called layered abstraction, for non-linear activation functions such as ReLU and Tanh, together with a verification algorithm based on it. We have implemented our method by extending existing SMT-based methods. Experimental evaluation showed that our tool performs better than an existing tool.
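The abstract refers to linear approximations of non-linear activation functions used in SMT-based verification. As a rough illustration only, and not the paper's layered abstraction itself, the sketch below encodes the well-known triangle (linear) relaxation of a single ReLU node y = max(x, 0) using the Z3 SMT solver's Python API; the bounds l and u, the variable names, and the final query are illustrative assumptions.

    # Minimal sketch (assumed setup, not the paper's method): triangle
    # relaxation of one ReLU node y = max(x, 0) with pre-activation
    # bounds l <= x <= u where l < 0 < u, encoded for the Z3 SMT solver.
    from z3 import Real, Solver

    l, u = -1.0, 2.0              # assumed pre-activation bounds
    x, y = Real('x'), Real('y')

    s = Solver()
    s.add(x >= l, x <= u)

    # The exact ReLU semantics y = max(x, 0) is piecewise (non-linear for
    # the solver); the triangle relaxation over-approximates it with three
    # linear constraints: y >= 0, y >= x, y <= u * (x - l) / (u - l).
    s.add(y >= 0, y >= x, y <= u * (x - l) / (u - l))

    # Example query: on [-1, 2] the exact ReLU output never exceeds 2,
    # and the relaxation still rules this case out.
    s.add(y > 2.0)
    print(s.check())              # prints: unsat

Such per-node linear constraints are the building blocks that SMT-based verifiers reason about; the paper's contribution is a layered abstraction over these activations and a verification algorithm built on it.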




  Cite this article

[IEEE Style]

J. Yeon, S. Chae, and K. Bae, "Layered Abstraction Technique for Effective Formal Verification of Deep Neural Networks," Journal of KIISE, JOK, vol. 49, no. 11, pp. 958-971, 2022. DOI: 10.5626/JOK.2022.49.11.958.


[ACM Style]

Jueun Yeon, Seunghyun Chae, and Kyungmin Bae. 2022. Layered Abstraction Technique for Effective Formal Verification of Deep Neural Networks. Journal of KIISE, JOK, 49, 11, (2022), 958-971. DOI: 10.5626/JOK.2022.49.11.958.


[KCI Style]

연주은, 채승현, 배경민, "심층 신경망의 효과적인 정형 검증을 위한 계층별 요약 기법," 한국정보과학회 논문지, 제49권, 제11호, 958~971쪽, 2022. DOI: 10.5626/JOK.2022.49.11.958.





