Digital Library [Search Result]
Reference Image-Based Contrastive Attention Mechanism for Printed Circuit Board Defect Classification
http://doi.org/10.5626/JOK.2025.52.1.70
Effective classification of defects in Printed Circuit Boards (PCBs) is critical for ensuring product quality. Traditional approaches to PCB defect detection have relied primarily on single-image analysis or have failed to adequately address alignment issues between reference and test images, reducing the reliability and precision of defect detection. To overcome these limitations, this study introduces a novel deep image comparison method that incorporates a contrastive loss function to improve image alignment, together with a contrastive attention mechanism that focuses the model on areas with a higher likelihood of defects. Experiments on actual PCB data demonstrated that the proposed method achieved superior classification performance even with limited data, highlighting its potential to significantly enhance the reliability of PCB defect detection and address existing challenges in the field.
Enhanced Image Harmonization Scheme Using LAB Color Space-based Loss Function and Data Preprocessing
Doyeon Kim, Eunbeen Kim, Hyeonwoo Kim, Eenjun Hwang
http://doi.org/10.5626/JOK.2024.51.8.729
Image composition, which involves combining the background and foreground from different images to create a new image, is a useful technique in image editing. However, it often results in awkward images due to differences in brightness and color tones between the background and foreground. Image harmonization techniques aim to reduce this incongruity and have gained significant attention in the field of image editing. These techniques allow for realistic matching of color tones between the foreground and background. Existing deep learning models for image harmonization have shown promise in achieving harmonization performance through the use of large-scale training datasets. However, these models tend to exhibit poor generalization performance when the loss function does not effectively consider brightness or when the dataset has a biased brightness distribution. To address these issues, we propose an image harmonization scheme that is robust to variations in brightness. This scheme incorporates an LAB color space-based loss function, which explicitly calculates the brightness of a given image, and an LAB color space-based preprocessing scheme to create a dataset with a balanced brightness distribution. Experimental results on public image datasets demonstrate that the proposed scheme exhibits robust harmonization performance under various brightness conditions.
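The abstract above describes a loss computed directly in the LAB color space, where the L channel explicitly carries brightness. As a rough illustration only (not the paper's actual formulation), a brightness-aware harmonization loss might weight an L-channel term separately from the a/b chrominance terms; the L1 form and the weights below are assumptions:

```python
def lab_harmonization_loss(pred_lab, target_lab, w_l=1.0, w_ab=1.0):
    """Hypothetical LAB-space loss: pred_lab/target_lab are lists of
    (L, a, b) tuples, one per pixel. The L (lightness) channel gets its
    own weighted term so brightness errors are penalized explicitly."""
    n = len(pred_lab)
    # mean absolute error on the lightness channel
    l_loss = sum(abs(p[0] - t[0]) for p, t in zip(pred_lab, target_lab)) / n
    # mean absolute error on the a/b chrominance channels
    ab_loss = sum(abs(p[1] - t[1]) + abs(p[2] - t[2])
                  for p, t in zip(pred_lab, target_lab)) / n
    return w_l * l_loss + w_ab * ab_loss
```

Separating the two terms lets `w_l` be raised when a dataset's brightness distribution is known to be biased.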
Semi-Supervised Learning Exploiting Robust Loss Function for Sparse Labeled Data
http://doi.org/10.5626/JOK.2021.48.12.1343
This paper proposes a semi-supervised learning method that uses data augmentation and a robust loss function when labeled data are extremely sparse. Existing semi-supervised learning methods augment unlabeled data and use one-hot vector labels predicted by the current model when the confidence of the prediction is high. Because such methods discard low-confidence data, a recent work incorporated low-confidence data into training by utilizing a robust loss function. However, when labeled data are extremely sparse, a prediction can be incorrect even when its confidence is high. In this paper, we propose a method that improves the performance of a classification model under extremely sparse labels by using the predicted probability distribution, instead of a one-hot vector, as the label. Experiments show that the proposed method improves the performance of a classification model.
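The central idea above, supervising with the model's full predicted distribution rather than a one-hot pseudo-label, can be illustrated with a plain cross-entropy against a soft label. This is a generic sketch, not the paper's exact loss:

```python
import math

def soft_label_ce(pred_probs, soft_label):
    """Cross entropy against a 'soft' label (a full probability
    distribution) instead of a one-hot pseudo-label. With a one-hot
    soft_label this reduces to the standard cross-entropy."""
    return -sum(q * math.log(max(p, 1e-12))
                for q, p in zip(soft_label, pred_probs))
```

Keeping the full distribution retains the model's uncertainty, which matters when high-confidence predictions cannot be trusted.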
Design and Evaluation of Loss Functions based on Classification Models
Hyun-Kyu Jeon, Yun-Gyung Cheong
http://doi.org/10.5626/JOK.2021.48.10.1132
Paraphrase generation is a task in which the model generates an output sentence that conveys the same meaning as the given input text but with a different representation. Recently, paraphrase generation has widely been addressed with artificial neural networks trained by supervised learning between the model's predictions and the labels. However, this method provides limited information because it only detects representational differences. For that reason, we propose a method that extracts semantic information with classification models and uses it in the training loss function. Our evaluations showed that the proposed method outperformed baseline models.
A Perimeter-Based IoU Loss for Efficient Bounding Box Regression in Object Detection
http://doi.org/10.5626/JOK.2021.48.8.913
In object detection, neural networks are generally trained by minimizing two types of losses simultaneously, namely classification loss and regression loss for bounding boxes. However, the regression loss often fails to achieve its ultimate goal, that is, a predicted bounding box that maximally intersects with its target box. This is because the regression loss is not highly correlated with the IoU, which actually measures how much the bounding box and its target box overlap with each other. Although several penalty terms have been devised and added to the IoU loss to address this problem, they remain inefficient, particularly when the penalty terms become zero before the predicted box and its target box coincide, for example when one box encloses the other or their center points overlap. In this paper, we propose a perimeter-based IoU (PIoU) loss that exploits the difference between the perimeter of the minimum bounding rectangle of a predicted box and its target box and the perimeters of the two boxes themselves. In experiments using state-of-the-art object detection models (e.g., YOLOv3, SSD, and FCOS), we show that our PIoU loss consistently achieves better accuracy than all existing IoU losses.
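As an illustration of the general idea (the paper's exact formulation may differ), a perimeter-based penalty can be built from the minimum bounding rectangle of the two boxes and added to the standard IoU loss; the normalization used below is an assumption:

```python
def iou(b1, b2):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter + 1e-9)

def perimeter(b):
    return 2.0 * ((b[2] - b[0]) + (b[3] - b[1]))

def piou_loss(pred, target):
    """Sketch of a perimeter-based IoU loss: IoU loss plus a penalty
    comparing the enclosing rectangle's perimeter to the boxes' own."""
    # minimum bounding rectangle enclosing both boxes
    enc = (min(pred[0], target[0]), min(pred[1], target[1]),
           max(pred[2], target[2]), max(pred[3], target[3]))
    p_enc = perimeter(enc)
    # zero only when the two boxes coincide exactly
    penalty = (2 * p_enc - perimeter(pred) - perimeter(target)) / (2 * p_enc + 1e-9)
    return 1.0 - iou(pred, target) + penalty
```

Note that this penalty stays positive when one box encloses the other or when only the centers align, the cases where earlier penalty terms vanish prematurely.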
Model-Based Reinforcement Learning with Discriminative Loss
Guang Jin, Yohwan Noh, DoHoon Lee
http://doi.org/10.5626/JOK.2020.47.6.547
Reinforcement learning is a framework for training an agent to make a good sequence of decisions through interaction with a complex environment. Although reinforcement learning has shown promising results in many tasks, sample efficiency remains a major challenge for its real-world application. We propose a novel model-based reinforcement learning framework that incorporates a discriminative loss function, in which models are trained to discriminate one action from another. The encoder pre-trained in this framework shows a feature-alignment property that aligns with the policy gradient method. The proposed method showed better sample efficiency than conventional model-based reinforcement learning approaches in the Atari game environment; in the early stage of training, it surpassed the baseline by a large margin.
Solving for Redundant Repetition Problem of Generating Summarization using Decoding History
Jaehyun Ryu, Yunseok Noh, Su Jeong Choi, Seyoung Park, Seong-Bae Park
http://doi.org/10.5626/JOK.2019.46.6.535
Neural attentional sequence-to-sequence models have achieved great success in abstractive summarization. However, such models are limited by several challenges, including repetitive generation of words, phrases, and sentences in the decoding step. Many studies have attempted to address this problem by modifying the model structure, but although considering the actual history of word generation is crucial for reducing word repetition, these methods do not consider the decoding history of the generated sequence. In this paper, we propose a new loss function, called 'Repeat Loss', to avoid repetitions. The Repeat Loss directly prevents the model from repetitively generating words by imposing a loss penalty on the generation probability of words already generated in the decoding history. Since the proposed Repeat Loss does not require a special network structure, it is applicable to any existing sequence-to-sequence model. In experiments, we applied the Repeat Loss to a number of sequence-to-sequence summarization systems and trained them on both Korean and CNN/Daily Mail summarization datasets. The results demonstrate that the proposed method reduced repetitions and produced high-quality summaries.
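A minimal sketch of the idea: penalize the probability the decoder assigns at the current step to tokens that already appear in the decoding history. The exact penalty form in the paper may differ; the negative-log form here is an assumption:

```python
import math

def repeat_loss(step_probs, history):
    """Sketch of a repetition penalty for one decoding step.
    step_probs: dict mapping token -> probability at the current step.
    history: tokens generated so far. The loss grows as the model puts
    more probability mass on already-generated tokens."""
    repeated = set(history)
    return -sum(math.log(max(1.0 - step_probs.get(t, 0.0), 1e-12))
                for t in repeated)
```

Because the penalty is computed purely from output probabilities, it can be added to any sequence-to-sequence training objective without architectural changes.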
Resolution of Answer-Repetition Problems in a Generative Question-Answering Chat System
http://doi.org/10.5626/JOK.2018.45.9.925
A question-answering (QA) chat system is a chatbot that responds to simple factoid questions by retrieving information from knowledge bases. Recently, many chat systems based on sequence-to-sequence neural networks have been implemented and have shown new possibilities for generative models. However, the generative chat systems have word repetition problems, in that the same words in a response are repeatedly generated. A QA chat system also has similar problems, in that the same answer expressions frequently appear for a given question and are repeatedly generated. To resolve this answer-repetition problem, we propose a new sequence-to-sequence model reflecting a coverage mechanism and an adaptive control of attention (ACA) mechanism in a decoder. In addition, we propose a repetition loss function reflecting the number of unique words in a response. In the experiments, the proposed model performed better than various baseline models on all metrics, such as accuracy, BLEU, ROUGE-1, ROUGE-2, ROUGE-L, and Distinct-1.
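The repetition loss described above reflects the number of unique words in a response. One simple way to express such a loss (an assumed ratio form, not necessarily the paper's) is:

```python
def repetition_loss(response_tokens):
    """Hypothetical sketch: the loss is the fraction of the response
    that is repeated, i.e., zero when every token is unique and
    approaching one as the response collapses into repeats."""
    n = len(response_tokens)
    return 1.0 - len(set(response_tokens)) / n if n else 0.0
```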

Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr