A Real-time Scheduling Framework for Multi-threaded ROS 2 Applications

Seryun Kang, Jinseop Jeong, Kanghee Kim

http://doi.org/10.5626/JOK.2025.52.1.1

Real-time performance of robot applications operating in the physical world is crucial. In ROS (Robot Operating System) 2, robot applications consist of dozens or even hundreds of tasks. If the end-to-end delay from sensing to control increases, the resulting motion may be delayed, potentially leading to physical accidents. Consequently, many studies have been conducted to analyze and reduce delays in robot applications. This paper proposes a real-time scheduling framework that allows the application of the probabilistic latency analysis method, originally designed for process graphs, to thread graphs. The proposed framework groups callback functions with the same period into a single group based on a global schedule table and creates a thread graph by assigning a dedicated thread to each group. Each thread is then fixed to a CPU core as determined by the table and is scheduled using FIFO. This paper applies the proposed framework to the localization pipeline of Autoware and confirms that probabilistic latency analysis is feasible within this framework.

Exploiting Arma 3 to Construct Synthetic Data for Military Target Detection on Remote Sensing Imagery

Yechan Kim, JongHyun Park, SooYeon Kim, Sihyun Kim, Sung Heon Kim, YeongMin Ko, Junggyun Oh, Dongho Yoon, Moongu Jeon

http://doi.org/10.5626/JOK.2025.52.1.9

Recently, satellite-based surveillance and reconnaissance systems have garnered significant attention in the military sector. However, the acquisition of large-scale satellite imagery for training military target detection models presents practical challenges, primarily due to high costs and security concerns. To tackle this issue, this paper proposes an algorithm for generating synthetic satellite imagery and annotations for military target detection using Arma 3, a well-known military simulation game. Arma 3 offers realistic military equipment and environments, which facilitates the creation of high-quality synthetic data. This study specifically validates the proposed method by demonstrating that our synthetic dataset can effectively complement real-world data, utilizing the DOTA dataset and web-scraped military images.

Political Bias in Large Language Models and its Implications on Downstream Tasks

Jeong yeon Seo, Sukmin Cho, Jong C. Park

http://doi.org/10.5626/JOK.2025.52.1.18

This paper contains examples of political leaning bias that can be offensive. Abstract As the performance of the Large Language Models (LLMs) improves, direct interaction with users becomes possible, raising ethical issues. In this study, we design two experiments to explore the diverse spectrum of political stances that an LLM exhibits and how these stances affect downstream tasks. We first define the inherent political stances of the LLM as the baseline and compare results from three different inputs (jailbreak, political persona, and jailbreak persona). The results of the experiments show that the political stances of the LLM changed the most with the jailbreak attack, while lesser changes were observed with the other two inputs. Moreover, an experiment involving downstream tasks demonstrated that the distribution of altered inherent political stances can affect the outcome of these tasks. These results suggest that the model generates responses that align more closely like its inherent stance rather than the user’s intention to personalize responses. We conclude that the intrinsic political bias of the model and its judgments can be explicitly communicated to users.

Chain-of-Thought and Chain-of-Verification Prompting for Grammar-based Test Case Generation

Aditi, Sang-Ki Ko

http://doi.org/10.5626/JOK.2025.52.1.29

Software testing is an essential but cost-intensive work in the software development process. Automatic test case generation tools are utilized to distinguish between the correct and the incorrect solutions more effectively than manually generating them. Many researchers have recently proposed deep learning-based methods to generate test cases automatically for given logical specifications of problems or programs. In this work, we propose teaching the large language models (LLMs) such as ChatGPT and Google Gemini to generate ‘test case grammars’ from problem specifications, particularly using the chain-of-thought (CoT) prompting. Additionally, we implemented it using the CoT to verify and by providing the details of generalized rules to the LLMs, termed “chain-of-verification” (CoVe). We further evaluate our method with the publicly available dataset, DeepMind CodeContests dataset, which consists of numerous programming problems ranging from beginner to advanced level and is submitted by programming students with test cases for verifying the correctness of programs.

EnhPred: Deep Learning Model for Precise Prediction of Enhancer Positions

Jinseok Kim, Suyeon Wy, Jaebum Kim

http://doi.org/10.5626/JOK.2025.52.1.35

Enhancers are crucial regulatory elements that control gene expression in living organisms. Therefore, enhancer prediction is essential for a deeper understanding of gene regulation mechanisms. However, precise enhancer prediction is challenging due to their variable lengths and distant target genes. Existing artificial intelligence-based enhancer prediction methods often predict enhancers without identifying their boundaries accurately. In this study, we developed a new deep learning-based enhancer prediction method called EnhPred, which consisted of Convolutional Neural Networks (CNN) and bidirectional Gated Recurrent Units (GRU). To predict enhancer regions with a high resolution, we designed EnhPred to predict probabilities of enhancer presence within narrow segmented genomic regions. When evaluated with existing machine learning- and deep learning-based methods using data from three human cell lines, EnhPred demonstrated superior performances in terms of accuracy of enhancer prediction and resolution of enhancer boundaries.

Analyzing Model Hubs for Effective Composition of Pre-Trained Machine Learning Models

Arogya Kharel, In-Young Ko

http://doi.org/10.5626/JOK.2025.52.1.42

Deep Neural Network (DNN) models have become prevalent. They are increasingly adopted as components in software systems. Designing and training these DNNs from scratch is not trivial. Designing requires domain expertise and familiarity with DNN frameworks while training necessitates substantial computational resources and large training datasets. Following the philosophy of traditional software engineering, developers often reuse Pre-Trained Models (PTMs) organized in model hubs. However, challenges arise when PTMs that match a developer’s specific requirements are lacking. In this paper, we explored the concept of PTM composition and investigated whether a combination of PTMs could fulfill application requirements without needing fine-tuning or creating a new DNN. We present current challenges in PTM composition through our case study and identified shortcomings of existing model hubs. By drawing parallels between PTM composition and web service composition, we highlighted essential technologies required for successful PTM composition and discussed potential solutions to these issues.

Adversarial Training with Contrastive Learning in NLP

Daniela N. Rim, DongNyeong Heo, Heeyoul Choi

http://doi.org/10.5626/JOK.2025.52.1.52

Adversarial training has been extensively studied in natural language processing (NLP) settings to make models robust so that similar inputs derive similar outcomes semantically. However, since language has no objective measure of semantic similarity, previous works use an external pre-trained NLP model to ensure this similarity, introducing an extra training stage with huge memory consumption. This work proposes adversarial training with contrastive learning (ATCL) to train a language processing model adversarially using the benefits of contrastive learning. The core idea is to make linear perturbations in the embedding space of the input via fast gradient methods (FGM) and train the model to keep the original and perturbed representations close via contrastive learning. We apply ATCL to language modeling and neural machine translation tasks showing an improvement in the quantitative (perplexity and BLEU) scores. Furthermore, ATCL achieves good qualitative results in the semantic level for both tasks without using a pre-trained model through simulation.

A VQG Framework for Accurate and Diverse Question Generation

Hee-Yeon Choi, Dong-Wan Choi

http://doi.org/10.5626/JOK.2025.52.1.62

Visual Question Generation (VQG) aims to generate questions based on a given image, often utilizing additional information such as answers or answer types if necessary. A VQG system should be able to generate diverse questions for a single image, while maintaining relevance to the image alongside its additional information. However, models that highly focus on relevance to the image might overfit to the dataset, leading to limited diversity, while those that emphasize diversity might generate questions less related to the input. Therefore, balancing these two aspects is crucial in VQG. To address this challenge, we proposed BCVQG (BLIP-CVAE VQG), a system that could integrate a pre-trained vision-language model with a Conditional Variational AutoEncoder (CVAE). The effectiveness of the proposed method was validated through quantitative and qualitative evaluations on the VQA2.0 dataset.

Reference Image-Based Contrastive Attention Mechanism for Printed Circuit Board Defect Classification

Sung Ho Park, Seung Hoon Lee

http://doi.org/10.5626/JOK.2025.52.1.70

Effective classification of defects in Printed Circuit Boards (PCBs) is critical for ensuring product quality. Traditional approaches to PCB defect detection have primarily relied on single-image analysis or failed to adequately address alignment issues between reference and test images, leading to reduced reliability and precision in defect detection. To overcome these limitations, this study aimed to introduce a novel deep image comparison method that could incorporate contrastive loss functions to improve image alignment with a contrastive attention mechanism to focus the model on areas with a higher likelihood of defects. Experiments conducted on actual PCB data demonstrated that the proposed method achieved superior classification performance, even with limited data, highlighting its potential to significantly enhance the reliability of PCB defect detection and address existing challenges in the field.

Font Generation System Development based on Few-shot Font Generation Model

Yeongjin Jo, Shinjin Kang, Beomjoo Seo, Sunyoung Kim

http://doi.org/10.5626/JOK.2025.52.1.77

As the demand for personalized font generation continues to rise, the need for a customized font generation system has become increasingly important. In this study, we designed and implemented a font generation system using VQ-Font, a AI-based model in the field of Few-Shot Font Generation. This system can produce complete font files from just a few input images, making it ideal for personalized font generation. We adapted the original Chinese-centric font generation model to fit the structural characteristics of Korean and collected a Korean font dataset to fine-tune the model. As a result, our proposed model outperformed existing Korean font generation models, as confirmed by comparative experiments. We also assessed the font generation speed of the enhanced model, demonstrating its potential for practical applications.

An Effective Graph Edit Distance Model Using Node Mapping Information

Jun-Gyu Lee, Jongik Kim

http://doi.org/10.5626/JOK.2025.52.1.88

Graph Edit Distance (GED) is the most representative method for quantifying similarity between graphs. However, calculating an exact GED is an NP-Hard problem, which incurs a prohibitively large amount of computational cost. To efficiently compute GED, recent studies have focused on deriving an approximate GED between graphs using deep learning models. However, existing models tend to exhibit large approximation errors and suffer from insufficient interpretability because they do not consider node-to-node relationships between graphs. To remedy these problems faced by existing models, a model that could learn a mapping matrix through node-level embeddings of two graphs was proposed in this study to provide better interpretability of the GED approximation while minimizing information loss during the learning process. Results of experiments showed that the proposed model consistently outperformed existing models.


Search




Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr