Digital Library [Search Result]
Sensor Selection Strategies for Activity Recognition in a Smart Environment
The recent emergence of smartphones, wearable devices, and the IoT concept has made it possible for various objects to interact with one another anytime and anywhere. Among such smart services, a smart home service typically requires a large number of sensors to recognize the residents' activities. For this reason, activity recognition using the data obtained from these sensors is actively discussed and studied these days, and many sensors are installed to recognize activities and analyze their patterns via data mining techniques. However, installing so many sensors for an IoT smart home service raises issues of cost and energy consumption. In this paper, we propose a new method for reducing the number of sensors needed for activity recognition in a smart environment, which utilizes principal component analysis and clustering techniques, and we show that the proposed method improves activity recognition.
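The abstract names PCA and clustering as the selection machinery; a minimal sketch of that pipeline, assuming a generic events-by-sensors activation matrix and a k-means step (the data shapes, k, and the representative-per-cluster rule are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

def select_sensors(X, k, seed=0):
    """X: (events x sensors) activation matrix. Returns indices of at
    most k representative sensors, one per cluster of PCA loadings."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                 # center each sensor column
    # PCA via SVD: rows of Vt are principal axes; each sensor is then
    # described by its loading vector on the top-k components.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = Vt[:k].T                     # (sensors x k)
    # Plain k-means over the loading vectors groups redundant sensors.
    centers = loadings[rng.choice(len(loadings), k, replace=False)]
    for _ in range(50):
        d = np.linalg.norm(loadings[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = loadings[labels == j].mean(axis=0)
    # Keep the single sensor closest to each cluster center.
    selected = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if len(idx):
            selected.append(int(idx[d[idx, j].argmin()]))
    return sorted(set(selected))
```

One sensor per cluster keeps the most "central" representative, on the assumption that sensors within a cluster carry largely redundant information for recognition.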
Construction of Korean Knowledge Base Based on Machine Learning from Wikipedia
Seok-won Jeong, Maengsik Choi, Harksoo Kim
The performance of many natural language processing applications depends on the knowledge base as a major resource. WordNet, YAGO, Cyc, and BabelNet have been extensively used as knowledge bases in English. In this paper, we propose a method to construct a YAGO-style knowledge base automatically for Korean (hereafter, K-YAGO) from Wikipedia and YAGO. The proposed system constructs an initial K-YAGO simply by matching YAGO to info-boxes in Wikipedia. Then, the initial K-YAGO is expanded through the use of a machine learning technique. Experiments with the initial K-YAGO show that the proposed system has a precision of 0.9642. In the experiments with the expanded part of K-YAGO, an accuracy of 0.9468 was achieved with an average macro F1-measure of 0.7596.
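The initial construction step, "matching YAGO to info-boxes," amounts to mapping info-box attributes onto YAGO relations; a minimal sketch of that idea, where the mapping table, attribute names, and entities are invented for illustration:

```python
# Hypothetical attribute mapping: Korean info-box keys -> YAGO relations.
infobox_to_yago = {
    "출생": "wasBornIn",       # "birth place"
    "직업": "hasOccupation",   # "occupation"
    "소속": "isAffiliatedTo",  # "affiliation"
}

def extract_facts(entity, infobox):
    """Turn a parsed Wikipedia info-box (attribute -> value dict)
    into YAGO-style (subject, relation, object) triples."""
    return [(entity, infobox_to_yago[attr], val)
            for attr, val in infobox.items()
            if attr in infobox_to_yago]    # unmapped attributes are skipped
```

The skipped attributes are exactly the gap the paper's machine-learning expansion stage would then try to fill.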
Automatic Construction of a Negative/Positive Corpus and Emotional Classification Using the Internet Emotional Sign
Kyoungae Jang, Sanghyun Park, Woo-Je Kim
Internet users purchase goods online and express their positive or negative feelings about the goods in product reviews. These reviews have become critical data both for potential consumers and for the decision making of enterprises. Opinion mining techniques, which derive opinions by analyzing meaningful data from large numbers of Internet reviews, have therefore grown in importance. Existing studies were mostly based on comments written in English, while analysis in Korean has not been pursued as actively. Unlike English, Korean has complex adjectives and suffixes, and existing studies also did not consider the characteristics of Internet language. This study proposes an emotional classification method that increases classification accuracy by analyzing the characteristics of Internet language that connotes feelings. We classify positive and negative comments about products automatically using Internet emoticons, and we validate the proposed algorithm through its high precision, recall, and coverage in our evaluation.
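The automatic corpus construction rests on using emoticons as distant labels; a hedged sketch of that idea, where the emoticon lexicon and the mixed-evidence rule are small illustrative assumptions rather than the paper's actual resources:

```python
# Tiny illustrative emoticon lexicon (the paper's is far larger).
POSITIVE = {":)", ":D", "^^", "ㅎㅎ"}
NEGATIVE = {":(", "ㅠㅠ", "-_-"}

def auto_label(comment):
    """Label a comment 'pos'/'neg' by emoticon evidence, or None when
    evidence is absent or mixed (such comments stay out of the corpus)."""
    pos = any(e in comment for e in POSITIVE)
    neg = any(e in comment for e in NEGATIVE)
    if pos == neg:
        return None
    return "pos" if pos else "neg"

def build_corpus(comments):
    """Keep only comments with an unambiguous emoticon label."""
    return [(c, lab) for c in comments if (lab := auto_label(c))]
```

The resulting (comment, label) pairs can then train an ordinary text classifier without any manual annotation.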
Adversarial Training with Contrastive Learning in NLP
Daniela N. Rim, DongNyeong Heo, Heeyoul Choi
http://doi.org/10.5626/JOK.2025.52.1.52
Adversarial training has been extensively studied in natural language processing (NLP) settings to make models robust, so that semantically similar inputs yield similar outcomes. However, since language has no objective measure of semantic similarity, previous works use an external pre-trained NLP model to ensure this similarity, introducing an extra training stage with huge memory consumption. This work proposes adversarial training with contrastive learning (ATCL) to train a language processing model adversarially using the benefits of contrastive learning. The core idea is to make linear perturbations in the embedding space of the input via the fast gradient method (FGM) and to train the model to keep the original and perturbed representations close via contrastive learning. We apply ATCL to language modeling and neural machine translation tasks, showing an improvement in the quantitative (perplexity and BLEU) scores. Furthermore, ATCL achieves good qualitative results at the semantic level for both tasks, shown through simulation, without using a pre-trained model.
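The two ingredients the abstract names, an FGM-style linear perturbation of the embeddings and a contrastive loss pulling original and perturbed representations together, can be sketched in isolation; the shapes, epsilon, temperature, and the InfoNCE form of the loss are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def fgm_perturb(emb, grad, eps=0.1):
    """Linear perturbation along the loss gradient, scaled to norm eps."""
    norm = np.linalg.norm(grad, axis=-1, keepdims=True) + 1e-12
    return emb + eps * grad / norm

def info_nce(orig, pert, temp=0.1):
    """Contrastive loss: each row of `orig` should be most similar to
    the matching row of `pert` among all rows in the batch."""
    a = orig / (np.linalg.norm(orig, axis=1, keepdims=True) + 1e-12)
    b = pert / (np.linalg.norm(pert, axis=1, keepdims=True) + 1e-12)
    logits = a @ b.T / temp                       # (batch x batch) cosine sims
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives on the diagonal
```

In training, this loss would be added to the task loss, so the model learns representations that are invariant to the adversarial perturbation without consulting an external pre-trained model.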
Analyzing Model Hubs for Effective Composition of Pre-Trained Machine Learning Models
http://doi.org/10.5626/JOK.2025.52.1.42
Deep Neural Network (DNN) models have become prevalent and are increasingly adopted as components in software systems. Designing and training these DNNs from scratch is not trivial: design requires domain expertise and familiarity with DNN frameworks, while training demands substantial computational resources and large training datasets. Following the philosophy of traditional software engineering, developers often reuse Pre-Trained Models (PTMs) organized in model hubs. However, challenges arise when no PTM matches a developer's specific requirements. In this paper, we explore the concept of PTM composition and investigate whether a combination of PTMs can fulfill application requirements without fine-tuning or creating a new DNN. We present current challenges in PTM composition through a case study and identify shortcomings of existing model hubs. By drawing parallels between PTM composition and web service composition, we highlight essential technologies required for successful PTM composition and discuss potential solutions to these issues.
Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr