Search : [ keyword: Convolution Neural Network ] (5)

Copy-Paste Based Image Data Augmentation Method Using GAN

Su-A Lee, Ji-Hyeong Han

http://doi.org/10.5626/JOK.2022.49.12.1056

In the field of computer vision, massive well-annotated image data are essential to achieve good performance of a convolutional neural network (CNN) model. However, in real-world applications, gathering massive well-annotated data is a difficult and time-consuming job. Thus, image data augmentation has been continually studied. In this paper, we propose an image data augmentation method that can generate more diverse image data by combining a generative adversarial network (GAN) with copy-paste based augmentation. The proposed method performs not pixel-level or image-level but object-level augmentation by cutting along segmentation boundaries (masks) instead of bounding boxes, and then applies a GAN to transform the objects.
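The core copy-paste step can be illustrated with a minimal NumPy sketch (not the authors' implementation): an object is lifted from a source image using its segmentation mask rather than its bounding box and pasted into a destination image. The GAN-based object transformation described in the abstract would be applied to the cropped object before pasting and is omitted here; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def copy_paste_object(src_img, src_mask, dst_img, offset=(0, 0)):
    """Paste the object selected by a binary segmentation mask from src_img
    onto dst_img at a (row, col) offset. Images are H x W x 3 uint8 arrays,
    the mask is H x W bool; only pixels inside the mask are copied, so the
    augmentation is object-level rather than bounding-box-level."""
    out = dst_img.copy()
    ys, xs = np.nonzero(src_mask)                    # object pixel coordinates
    ys_dst, xs_dst = ys + offset[0], xs + offset[1]
    valid = ((ys_dst >= 0) & (ys_dst < out.shape[0]) &
             (xs_dst >= 0) & (xs_dst < out.shape[1]))
    out[ys_dst[valid], xs_dst[valid]] = src_img[ys[valid], xs[valid]]
    return out
```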

Study and Application of RSSI-based Wi-Fi Channel Detection Using CNN and Frequency Band Characteristics

Junhyun Park, Hyungho Byun, Chong-Kwon Kim

http://doi.org/10.5626/JOK.2020.47.3.335

For mobile devices, Wi-Fi channel scanning is essential to initiating an internet connection, which enables access to a variety of services, and to maintaining a stable link quality through periodic monitoring. However, inefficient Wi-Fi operation, in which all channels are scanned regardless of whether or not an access point (AP) exists, wastes resources and leads to performance degradation. In this paper, we present a fast and accurate Wi-Fi channel detection method that learns the dynamic frequency band characteristics of signal strengths collected via a low-power antenna using a convolutional neural network (CNN). Experiments were conducted to demonstrate the channel detection accuracy for different AP combination scenarios. Furthermore, we analyzed the expected performance gain if the suggested method were to assist the scanning operation of legacy Wi-Fi.
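A minimal PyTorch sketch of the kind of 1D CNN such a detector might use is shown below; it is not the paper's architecture. It assumes fixed-length RSSI sequences and treats channel detection as multi-label classification over the 2.4 GHz channels, with layer sizes and channel count chosen only for illustration.

```python
import torch
import torch.nn as nn

class RssiChannelCNN(nn.Module):
    """Toy 1D CNN that maps a fixed-length RSSI sequence (1 x seq_len) to
    per-channel occupancy scores; filter sizes and counts are illustrative."""
    def __init__(self, num_channels=13, seq_len=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Linear(32 * (seq_len // 4), num_channels)

    def forward(self, x):                      # x: (batch, 1, seq_len)
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))     # score that an AP occupies each channel

model = RssiChannelCNN()
scores = model(torch.randn(8, 1, 128))         # 8 scans of 128 RSSI samples each
```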

Sentence Similarity Prediction based on Siamese CNN-Bidirectional LSTM with Self-attention

Mintae Kim, Yeongtaek Oh, Wooju Kim

http://doi.org/10.5626/JOK.2019.46.3.241

A deep learning model for measuring semantic similarity between sentences is presented. In general, most models for measuring similarity use word-level or morpheme-level embedding. However, applying either word-level or morpheme-level embedding results in higher model complexity due to the large size of the dictionary. To solve this problem, we propose a Siamese CNN-Bidirectional LSTM model that utilizes phonemes instead of words or morphemes and combines long short-term memory (LSTM) with one-dimensional convolutional neural networks of various window lengths that bind phonemes. For evaluation, we compared our model with Manhattan LSTM (MaLSTM), which shows good performance in measuring similarity, on similar-question pairs from the Naver Q&A dataset (similar to the Kaggle Quora Question Pairs dataset).
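A minimal PyTorch sketch of a siamese phoneme-level CNN-BiLSTM encoder with a MaLSTM-style Manhattan-distance similarity is given below. It omits the self-attention component, and all hyperparameters (phoneme vocabulary size, embedding and hidden dimensions, window lengths) are illustrative assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class SiameseCnnBiLstm(nn.Module):
    """Siamese encoder sketch: phoneme embeddings -> 1D convolutions with several
    window lengths -> bidirectional LSTM -> sentence vector. Similarity is
    exp(-L1 distance) between the two sentence vectors, as in MaLSTM."""
    def __init__(self, num_phonemes=80, emb=32, conv_ch=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(num_phonemes, emb, padding_idx=0)
        # Convolutions with different window lengths bind neighboring phonemes.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb, conv_ch, k, padding=k // 2) for k in (3, 5, 7)]
        )
        self.lstm = nn.LSTM(conv_ch * 3, hidden, batch_first=True, bidirectional=True)

    def encode(self, x):                        # x: (batch, T) phoneme ids
        e = self.emb(x).transpose(1, 2)         # (batch, emb, T)
        c = torch.cat([torch.relu(conv(e)) for conv in self.convs], dim=1)
        out, _ = self.lstm(c.transpose(1, 2))   # (batch, T, 2 * hidden)
        return out.mean(dim=1)                  # sentence vector

    def forward(self, s1, s2):
        v1, v2 = self.encode(s1), self.encode(s2)
        return torch.exp(-torch.abs(v1 - v2).sum(dim=1))   # similarity in (0, 1]
```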

A Transfer Learning Method for Solving Imbalance Data of Abusive Sentence Classification

Suin Seo, Sung-Bae Cho

http://doi.org/10.5626/JOK.2017.44.12.1275

The supervised learning approach is suitable for the classification of insulting sentences, but labeled training sentences must be prepared in advance. A character-level convolutional neural network is robust to variations of individual characters and is therefore appropriate for classifying abusive sentences; however, it has the drawback of demanding a large number of training sentences. In this paper, we propose a transfer learning method that reuses trained filters in the real classification process after the filters have learned the characteristics of offensive words from generated abusive/normal sentence pairs. The classifier achieved higher performance by decreasing the effects of data shortage and class imbalance. We conducted experiments and evaluations on three datasets, and the character-level CNN classifier obtained a higher F1-score on all datasets when transfer learning was applied.
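The filter-transfer idea can be sketched in PyTorch as follows: a character-level CNN is first trained on generated abusive/normal sentence pairs, and its embedding and convolution filters are then copied into the classifier used on the real, imbalanced data. Freezing the transferred filters is an assumption of this sketch, not necessarily the authors' procedure, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Character-level CNN: character embeddings -> 1D convolution filters ->
    max-over-time pooling -> 2-way classifier; sizes are illustrative."""
    def __init__(self, vocab=256, emb=16, ch=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.conv = nn.Conv1d(emb, ch, kernel_size=3, padding=1)
        self.fc = nn.Linear(ch, 2)

    def forward(self, x):                       # x: (batch, T) character ids
        h = torch.relu(self.conv(self.emb(x).transpose(1, 2)))
        return self.fc(h.max(dim=2).values)     # max-over-time pooling

# Step 1: train a source network on generated abusive/normal sentence pairs so
# that its convolution filters capture the shapes of offensive words
# (training loop omitted).
source = CharCNN()

# Step 2: transfer -- copy the trained embedding and filters into the target
# classifier for the real, imbalanced dataset, and keep them fixed here
# (freezing is an assumption of this sketch).
target = CharCNN()
target.emb.load_state_dict(source.emb.state_dict())
target.conv.load_state_dict(source.conv.state_dict())
for p in list(target.emb.parameters()) + list(target.conv.parameters()):
    p.requires_grad = False
```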

Speakers’ Intention Analysis Based on Partial Learning of a Shared Layer in a Convolutional Neural Network

Minkyoung Kim, Harksoo Kim

http://doi.org/10.5626/JOK.2017.44.12.1252

In dialogues, speakers’ intentions can be represented by sets of an emotion, a speech act, and a predicator. Therefore, dialogue systems should capture and process these implied characteristics of utterances. Many previous studies have treated their determination as independent classification problems, but others have shown them to be associated with each other. In this paper, we propose an integrated model that simultaneously determines emotions, speech acts, and predicators using a convolutional neural network. The proposed model consists of a particular abstraction layer and a shared abstraction layer. In the particular abstraction layer, mutually independent information about these characteristics is abstracted. In the shared abstraction layer, combinations of the independent information are abstracted. During training, the errors of emotions, speech acts, and predicators are partially back-propagated through the layers. In the experiments, the proposed integrated model showed better performance (by 2%p in emotion determination, 11%p in speech act determination, and 3%p in predicator determination) than independent determination models.
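A simplified PyTorch sketch of an integrated model with task-particular abstraction layers, a shared abstraction layer, and three output heads is shown below. Label counts and layer sizes are illustrative, and summing the three task losses is only a stand-in for the paper's partial back-propagation scheme, which this sketch does not reproduce exactly.

```python
import torch
import torch.nn as nn

class IntegratedIntentCNN(nn.Module):
    """Multi-task sentence CNN sketch: per-task particular abstraction layers feed
    a shared abstraction layer, with separate heads for emotion, speech act, and
    predicator; sizes and label counts are illustrative."""
    def __init__(self, vocab=10000, emb=64, ch=64, n_emotion=6, n_act=10, n_pred=20):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        # One particular abstraction layer per characteristic.
        self.particular = nn.ModuleDict({
            t: nn.Conv1d(emb, ch, kernel_size=3, padding=1)
            for t in ("emotion", "act", "pred")
        })
        # Shared abstraction layer over the combined particular features.
        self.shared = nn.Linear(ch * 3, ch)
        self.heads = nn.ModuleDict({
            "emotion": nn.Linear(ch, n_emotion),
            "act": nn.Linear(ch, n_act),
            "pred": nn.Linear(ch, n_pred),
        })

    def forward(self, x):                                   # x: (batch, T) word ids
        e = self.emb(x).transpose(1, 2)
        feats = [torch.relu(conv(e)).max(dim=2).values       # max-over-time pooling
                 for conv in self.particular.values()]
        h = torch.relu(self.shared(torch.cat(feats, dim=1)))
        return {t: head(h) for t, head in self.heads.items()}

# Joint training step: the summed task losses propagate through the shared and
# particular layers (a simplification of the partial back-propagation scheme).
model = IntegratedIntentCNN()
logits = model(torch.randint(1, 10000, (4, 20)))
targets = {"emotion": torch.randint(0, 6, (4,)),
           "act": torch.randint(0, 10, (4,)),
           "pred": torch.randint(0, 20, (4,))}
loss = sum(nn.functional.cross_entropy(logits[t], targets[t]) for t in logits)
loss.backward()
```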

