Digital Library: Search Results
Analysis of Vulnerabilities in Autonomous Driving Environments through Physical Adversarial Attacks Incorporating Natural Elements
Kyuchan Cho, Woosang Im, Sooyong Jeong, Hyunil Kim, Changho Seo
http://doi.org/10.5626/JOK.2024.51.10.935
Advancements in artificial intelligence technology have significantly impacted the field of computer vision. Concurrently, numerous vulnerabilities related to adversarial attacks, techniques designed to force models into misclassification, have been discovered. In particular, physical adversarial attacks carried out in the real world pose a serious threat to autonomous vehicle systems. Such attacks include artificially created ones, such as adversarial patches, as well as attacks that exploit natural elements to cause misclassification. A common scenario in autonomous driving environments is the obstruction of traffic signs by natural elements such as fallen leaves or snow. Because these elements do not remain stationary, they can cause misclassification even in fleeting moments, highlighting a critical vulnerability. This study therefore investigated adversarial patch attacks based on natural elements, proposing fallen leaves as a natural adversarial element. Specifically, it reviewed current trends in adversarial attack research, presented an experimental environment based on natural elements, and analyzed the experimental results to assess the vulnerability of autonomous vehicles to fallen leaves in physical environments.
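As a rough illustration of the occlusion scenario described in this abstract (not the authors' actual experimental setup), the Python sketch below pastes a leaf-coloured square at random positions on an image and counts how often the prediction of a stand-in classifier flips. The pretrained ResNet-18, the random "traffic sign" image, and the solid-colour "leaf" are all placeholder assumptions.

# Hypothetical sketch: measure how often a leaf-like occlusion flips a classifier's
# prediction. The model, image, and leaf texture are placeholders for illustration.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()

def paste_occlusion(img, patch, x, y):
    # Overlay patch (C, h, w) onto img (C, H, W) with top-left corner at (x, y).
    out = img.clone()
    _, h, w = patch.shape
    out[:, y:y+h, x:x+w] = patch
    return out

# Stand-ins: a random "traffic sign" image and a brown, leaf-coloured square.
sign = torch.rand(3, 224, 224)
leaf = torch.tensor([0.55, 0.35, 0.15]).view(3, 1, 1).expand(3, 40, 40).clone()

with torch.no_grad():
    clean_label = model(sign.unsqueeze(0)).argmax(1).item()
    flips, trials = 0, 20
    for _ in range(trials):
        x, y = torch.randint(0, 224 - 40, (2,)).tolist()
        occluded = paste_occlusion(sign, leaf, x, y)
        if model(occluded.unsqueeze(0)).argmax(1).item() != clean_label:
            flips += 1

print(f"prediction flipped in {flips}/{trials} random occlusion placements")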
Capsule Neural Networks as Noise Stabilizer for Time Series Data
Soyeon Kim, Jihyeon Seong, Hyunkyung Han, Jaesik Choi
http://doi.org/10.5626/JOK.2024.51.8.678
A capsule is a vector-wise representation formed by multiple neurons that encodes conceptual information about an object, such as angle, position, and size. The Capsule Neural Network (CapsNet) learns to be viewpoint invariant using these capsules. This property makes CapsNet more resilient to noisy data than traditional Convolutional Neural Networks (CNNs). The Dynamic-Routing Capsule Neural Network (DR-CapsNet) uses an affine matrix and a dynamic routing mechanism to train the capsules. In this paper, we propose that DR-CapsNet has the potential to act as a noise stabilizer for time series sensor data, which are highly sensitive and carry significant noise in the real world. To demonstrate the robustness of DR-CapsNet as a stabilizer, we conduct manual and adversarial attacks on an electrocardiogram (ECG) dataset. Our study provides empirical evidence that CapsNet effectively functions as a noise stabilizer and highlights its potential for addressing the challenges of preprocessing noisy measurements in time series analysis.
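For readers unfamiliar with DR-CapsNet, the following minimal sketch shows the routing-by-agreement step it relies on, following the dynamic routing procedure of Sabour et al.; the tensor shapes and iteration count are illustrative assumptions, not the authors' implementation.

# Minimal sketch of dynamic routing between capsules (after Sabour et al.).
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Squash a capsule vector so its length lies in (0, 1) while keeping its direction.
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, iterations=3):
    # u_hat: (batch, in_caps, out_caps, out_dim) prediction vectors from the affine matrix.
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
    for _ in range(iterations):
        c = F.softmax(b, dim=2).unsqueeze(-1)                # coupling coefficients
        s = (c * u_hat).sum(dim=1)                           # weighted sum over input capsules
        v = squash(s)                                        # output capsules (batch, out_caps, out_dim)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
    return v

# Example: 8 primary capsules routing to 10 output capsules of dimension 16.
u_hat = torch.randn(2, 8, 10, 16)
print(dynamic_routing(u_hat).shape)   # torch.Size([2, 10, 16])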
Effective Robustness Improvement in Medical Image Segmentation: Adversarial Noise Removal by the Input Transform Method
http://doi.org/10.5626/JOK.2023.50.10.859
Adversarial attacks induce a deep learning model to make misjudgments by adding fine noise to its input data. Deep learning on medical images raises expectations for computer-assisted diagnosis, but it carries the risk of being vulnerable to adversarial attacks. Moreover, for segmentation models, defending against adversarial attacks is even more difficult, yet security research on this topic has received little attention. In this study, we perform FGSM attacks on brain tumor segmentation models and employ input image transformation and gradient regularization as defenses against these attacks. The proposed application of JPEG compression and Gaussian filtering effectively removes adversarial noise while maintaining performance on the original images. Moreover, compared with the conventional gradient regularization approach to achieving robustness, the input image transformation method not only exhibits higher defense performance but also has the advantage of being applicable without retraining the model. Through this research, we identify vulnerabilities in the security of medical artificial intelligence and propose a way to ensure robustness that can be applied in the preprocessing stage of the model.
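The input-transform defense described above can be sketched roughly as follows: craft an FGSM perturbation, then pass the input through JPEG re-encoding and a Gaussian filter before inference. The model, JPEG quality, filter sigma, and epsilon in this sketch are placeholder assumptions, not the paper's brain tumor segmentation setup.

# Hedged sketch of FGSM plus a JPEG + Gaussian-filter input transform.
import io
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from scipy.ndimage import gaussian_filter

def fgsm(model, x, y, eps=0.03):
    # One-step FGSM: move the input along the sign of the loss gradient.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # works for image-level or per-pixel labels
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def jpeg_then_blur(x, quality=75, sigma=0.8):
    # Re-encode each 3-channel image in [0, 1] as JPEG, then apply a Gaussian filter.
    out = []
    for img in x:                                              # img: (3, H, W)
        arr = (img.permute(1, 2, 0).numpy() * 255).astype(np.uint8)
        buf = io.BytesIO()
        Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        arr = np.asarray(Image.open(buf), dtype=np.float32) / 255.0
        arr = gaussian_filter(arr, sigma=(sigma, sigma, 0))    # blur spatially, not across channels
        out.append(torch.from_numpy(arr).permute(2, 0, 1))
    return torch.stack(out)

# Quick check of the transform alone on a random batch (no model needed):
x = torch.rand(2, 3, 128, 128)
print(jpeg_then_blur(x).shape)   # torch.Size([2, 3, 128, 128])

# With some segmentation or classification model and labels (placeholders):
#   x_adv = fgsm(model, x, y)
#   prediction = model(jpeg_then_blur(x_adv))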
Pruning Deep Neural Networks Neurons for Improved Robustness against Adversarial Examples
Gyumin Lim, Gihyuk Ko, Suyoung Lee, Sooel Son
http://doi.org/10.5626/JOK.2023.50.7.588
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which can cause them to produce incorrect classifications. In this paper, we assume that the activation patterns of a DNN differ between normal data and adversarial examples, and we propose a revision method that identifies and prunes the neurons that are activated only by adversarial examples and not by normal data. We conducted adversarial revision using various adversarial example generation techniques on the MNIST and CIFAR-10 datasets. Pruning on the MNIST dataset achieved adversarial revision performance of up to 100% and 70.20% depending on the pruning method (label-wise and all-label pruning, respectively), while maintaining classification accuracy on normal data above 99%. On the CIFAR-10 dataset, classification accuracy on normal data decreased, but adversarial revision performance reached up to 99.37% and 47.61% depending on the pruning method. In addition, the efficiency of the proposed pruning-based adversarial revision was confirmed through a comparative analysis with adversarial training methods.
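A hedged sketch of the activation-difference idea behind this pruning approach is given below: record which hidden units fire for normal versus adversarial batches, then mask the units that fire only for adversarial examples. The toy network, the activation threshold, and the hook-based masking are assumptions for illustration, not the authors' exact procedure.

# Illustrative sketch: prune hidden units that activate only on adversarial inputs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
hidden = model[2]          # the ReLU whose activations we inspect

def active_units(x, threshold=0.0):
    # Return a boolean mask of units that fired for at least one sample in the batch.
    acts = {}
    handle = hidden.register_forward_hook(lambda m, i, o: acts.update(out=o))
    with torch.no_grad():
        model(x)
    handle.remove()
    return (acts["out"] > threshold).any(dim=0)

x_normal = torch.rand(64, 1, 28, 28)   # stand-ins for normal MNIST-sized batches
x_adv = torch.rand(64, 1, 28, 28)      # stand-ins for adversarial examples

prune_mask = active_units(x_adv) & ~active_units(x_normal)
print(f"pruning {int(prune_mask.sum())} of 256 hidden units")

# Zero the pruned units' activations at inference time via a permanent hook.
hidden.register_forward_hook(lambda m, i, o: o * (~prune_mask).float())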
Attack Success Rate Analysis of Adversarial Patch in Physical Environment
Hyeon-Jae Jeong, Jubin Lee, Yu Seung Ma, Seung-Ik Lee
http://doi.org/10.5626/JOK.2023.50.2.185
Adversarial patches are widely known as a representative adversarial example attack in the physical environment. However, most studies on adversarial patches have demonstrated robust attack success rates in the digital environment rather than the physical environment. This study investigated the robustness of adversarial patches in the physical environment. To this end, five types of generation conditions and three types of attachment conditions were derived, and the attack success rates of digital patches in the physical environment were reviewed as these conditions were varied. As basic conditions, the location, angle, and size variables presented in the original adversarial patch paper were targeted. Additionally, the learning epoch, the intent class, and the neural network under simulated attack were newly considered and tested as digital patch generation conditions. As a result, the condition that most strongly influenced the attack success rate of digital patches was size. Regarding the learning conditions for patch generation, digital patches achieved sufficient attack success rates with only one or two learning epochs and simple intent class images. In conclusion, the attack success rate of digital patches in the physical environment was not robust, unlike in the digital environment.
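The condition sweep described above can be approximated digitally with a short script that applies the same patch at several sizes, angles, and positions and records how often the prediction changes. The model, scene, and patch below are placeholders, and a physical experiment would instead print and photograph the patches in the real scene.

# Hedged sketch of a patch-condition sweep (size, angle, location) in the digital domain.
import torch
import torchvision
import torchvision.transforms.functional as TF

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
scene = torch.rand(3, 224, 224)     # stand-in for a captured frame
patch = torch.rand(3, 64, 64)       # stand-in for a trained adversarial patch

with torch.no_grad():
    base = model(scene.unsqueeze(0)).argmax(1).item()
    results = {}
    for size in (32, 64, 96):       # size is the condition the study found most influential
        success, trials = 0, 0
        for angle in (0, 15, 30):
            for x, y in ((10, 10), (80, 80), (120, 40)):
                p = TF.resize(patch, [size, size], antialias=True)
                p = TF.rotate(p, angle)
                img = scene.clone()
                img[:, y:y+size, x:x+size] = p
                trials += 1
                if model(img.unsqueeze(0)).argmax(1).item() != base:
                    success += 1
        results[size] = success / trials
    print(results)                  # attack success rate per patch size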