TY - JOUR
T1 - Effective Robustness Improvement in Medical Image Segmentation: Adversarial Noise Removal by the Input Transform Method
AU - Lee, Seungeun
AU - Kang, Kyungtae
JO - Journal of KIISE, JOK
PY - 2023
DA - 2023/1/14
DO - 10.5626/JOK.2023.50.10.859
KW - adversarial attack
KW - medical imaging
KW - robust model
KW - segmentation model
AB - Adversarial attacks induce a deep learning model to make misjudgments by adding subtle noise to its input data. Deep learning on medical images raises expectations for computer-assisted diagnosis, but such models risk being vulnerable to adversarial attacks. Moreover, in the case of segmentation models, defending against adversarial attacks is more difficult, yet security studies on this topic have received little attention. In this study, we perform FGSM attacks on brain tumor segmentation models and employ input image transformation and gradient regularization as defenses against these attacks. The proposed application of JPEG compression and Gaussian filters effectively removes adversarial noise while maintaining performance on the original images. Furthermore, compared with the conventional gradient regularization model for achieving robustness, the input image transformation method not only exhibits higher defense performance but also has the advantage of being applicable without retraining the model. Through this research, we identify security vulnerabilities in medical artificial intelligence and propose a way to ensure robustness that can be applied at the preprocessing stage of the model.
ER -
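
As a rough illustration of the input-transform defense described in the abstract (this is a hedged sketch, not the authors' code), the following Python snippet applies JPEG compression followed by a Gaussian filter to an input image before it is passed to a segmentation model. The function names and parameter values (e.g., quality=75, sigma=1.0) are illustrative assumptions, not settings taken from the paper.

    # Hypothetical sketch of an input-transform preprocessing defense:
    # JPEG compression followed by Gaussian filtering to suppress adversarial noise.
    # Parameter values (quality, sigma) are illustrative assumptions only.
    import io

    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_filter


    def jpeg_compress(image: np.ndarray, quality: int = 75) -> np.ndarray:
        """Round-trip an image (H, W) or (H, W, 3) with values in [0, 1] through JPEG."""
        pil_img = Image.fromarray((np.clip(image, 0.0, 1.0) * 255).astype(np.uint8))
        buffer = io.BytesIO()
        pil_img.save(buffer, format="JPEG", quality=quality)
        buffer.seek(0)
        return np.asarray(Image.open(buffer), dtype=np.float32) / 255.0


    def transform_input(image: np.ndarray, quality: int = 75, sigma: float = 1.0) -> np.ndarray:
        """Apply JPEG compression and then a Gaussian filter before segmentation."""
        compressed = jpeg_compress(image, quality=quality)
        # Smooth only over the spatial axes; leave any channel axis untouched.
        spatial_sigma = (sigma, sigma) + (0,) * (compressed.ndim - 2)
        return gaussian_filter(compressed, sigma=spatial_sigma)


    if __name__ == "__main__":
        # A possibly adversarially perturbed grayscale slice with values in [0, 1].
        noisy_slice = np.random.rand(256, 256).astype(np.float32)
        cleaned = transform_input(noisy_slice, quality=75, sigma=1.0)
        print(cleaned.shape, cleaned.dtype)

Because this defense only touches the input, it can be inserted at the preprocessing stage of an already trained segmentation model, which is the practical advantage the abstract highlights over gradient regularization.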