Search : [ keyword: Network ] (246)

A Model for Topic Classification and Extraction of Sentimental Expression using a Lexical Semantic Network

JiEun Park, JuSang Lee, JoonChoul Shin, ChoelYoung Ock

http://doi.org/10.5626/JOK.2023.50.8.700

The majority of previous sentiment analysis studies classified a single sentence or document into only a single sentiment. However, more than one sentiment can exist in one sentence. In this paper, we propose a method that extracts sentimental expressions at the word level. The proposed model is a UBERT model that takes morphologically analyzed sentences as input, with additional layers that predict topic classification and sentimental expression. The model uses topic features of a sentence predicted with a topic dictionary, which is built at the beginning of machine learning: the learning module collects topic words from a training corpus and expands them using the lexical semantic network. Evaluation is performed with the word-unit F1-score. The proposed model achieves an F1-score of 58.19%, an improvement of 0.97 percentage points over the baseline.
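The dictionary-building step the abstract describes — collect topic words from a training corpus, then expand them through a lexical semantic network — can be sketched as follows. This is an illustrative toy, not the paper's code; the corpus, topics, and network edges are invented for the example.

```python
# Hypothetical sketch: build a topic dictionary from labeled sentences,
# then expand each topic's word set with lexical-semantic-network neighbors.
from collections import defaultdict

def build_topic_dictionary(corpus, semantic_network):
    """corpus: list of (words, topic); semantic_network: word -> related words."""
    topic_dict = defaultdict(set)
    for words, topic in corpus:
        topic_dict[topic].update(words)          # collect topic words
    for topic, words in topic_dict.items():
        expanded = set()
        for w in words:
            expanded.update(semantic_network.get(w, []))  # expand via network
        topic_dict[topic] |= expanded
    return topic_dict

corpus = [(["battery", "screen"], "product"), (["service", "delivery"], "shop")]
network = {"battery": ["charger"], "service": ["support"]}
topic_dict = build_topic_dictionary(corpus, network)
```

The expanded dictionary now also covers words that never appeared in the training corpus but are semantically related to ones that did.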

Comparative Analysis of Accuracy and Stability of Software Reliability Estimation Models based on Recurrent Neural Networks

Taehyoun Kim, Duksan Ryu, Jongmoon Baik

http://doi.org/10.5626/JOK.2023.50.8.688

Existing studies on software reliability estimation based on recurrent neural networks have built a single model under fixed conditions and evaluated its accuracy. However, because of the inherent randomness of artificial neural networks, recurrent neural networks can produce different trained models even under identical conditions, which can lead to inaccurate software reliability estimation. This paper therefore compares and analyzes which recurrent neural network estimates software reliability most stably and accurately. We estimated software reliability on eight real projects using three representative recurrent neural networks and compared the resulting models in terms of accuracy and stability. As a result, Long Short-Term Memory (LSTM) showed the most stable and accurate software reliability estimation performance. Based on these results, a more accurate and stable software reliability estimation model can be selected.
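The evaluation protocol the abstract implies — repeat training under identical conditions and compare both mean error (accuracy) and its spread (stability) — can be sketched with a stand-in training function. The error magnitudes below are invented placeholders, not the paper's measurements.

```python
# Illustrative sketch (not the paper's code): train each model many times and
# summarize accuracy (mean error) and stability (standard deviation of error).
import random
import statistics

def train_and_estimate(model_name, seed):
    """Stand-in for training one RNN reliability model; returns its error."""
    rng = random.Random(seed)
    base = {"RNN": 0.20, "LSTM": 0.10, "GRU": 0.12}[model_name]   # assumed
    noise = {"RNN": 0.08, "LSTM": 0.01, "GRU": 0.04}[model_name]  # assumed
    return base + rng.uniform(-noise, noise)  # randomness of initialization

def compare_models(models, runs=30):
    results = {}
    for m in models:
        errors = [train_and_estimate(m, s) for s in range(runs)]
        results[m] = (statistics.mean(errors), statistics.stdev(errors))
    return results

scores = compare_models(["RNN", "LSTM", "GRU"])
```

With this protocol, a model is preferred only if it is both accurate on average and has a small deviation across repeated runs.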

Pruning Deep Neural Networks Neurons for Improved Robustness against Adversarial Examples

Gyumin Lim, Gihyuk Ko, Suyoung Lee, Sooel Son

http://doi.org/10.5626/JOK.2023.50.7.588

Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which can cause incorrect classification results. In this paper, we assume that the activation patterns of a DNN differ between normal data and adversarial examples. We propose a revision method that identifies neurons activated only by adversarial examples and not by normal data, and prunes them from the DNN. We conducted adversarial revision using various adversarial example generation techniques on the MNIST and CIFAR-10 datasets. On MNIST, the pruned DNNs achieved adversarial revision performance of up to 100% and 70.20%, depending on the pruning method (label-wise and all-label pruning), while maintaining classification accuracy on normal data above 99%. On CIFAR-10, classification accuracy on normal data decreased, but adversarial revision performance increased up to 99.37% and 47.61%, depending on the pruning method. In addition, the efficiency of the proposed pruning-based adversarial revision was confirmed through a comparative analysis with adversarial training methods.

Open-source-based 5G Access Network Security Vulnerability Automated Verification Framework

Jewon Jung, Jaemin Shin, Sugi Lee, Yusung Kim

http://doi.org/10.5626/JOK.2023.50.6.531

Recently, various open-source implementations based on the 5G standards have emerged and are widely used in research to find 5G control-plane security vulnerabilities. However, leveraging these open sources requires extensive knowledge of complex source code, wireless communication devices, and the massive 5G security standards. In this paper, we therefore propose a framework for the automatic verification of security vulnerabilities in the 5G control plane. The framework builds a 5G network using commercial Software Defined Radio (SDR) equipment and open-source software, and implements a Man-in-the-Middle (MitM) attacker to deploy a control-plane attack test bed. It also implements control-plane message decoding and modification modules to execute message spoofing attacks, and automatically classifies security vulnerabilities in 5G networks. In addition, a GUI-based web user interface is provided so that users can create MitM attack scenarios and check the verification results themselves.

A Deep Learning Approach for Target-oriented Communication Resource Allocation in Holographic MIMO

Apurba Adhikary, Md. Shirajum Munir, Avi Deb Raha, Min Seok Kim, Jong Won Choe, Choong Seon Hong

http://doi.org/10.5626/JOK.2023.50.5.441

In this paper, we propose a holography-assisted single-cell massive multiple-input multiple-output (mMIMO) system that performs target-oriented communication resource allocation for heterogeneous users. The proposed technique minimizes the number of active grids in the holographic grid array (HGA) to meet the lower power requirement of beamforming toward target-oriented users. We formulate a problem that maximizes the signal-to-interference-plus-noise ratio (SINR), which in turn maximizes efficient resource allocation for the users by generating effective beams and enforcing the sum-power constraint. Additionally, our holography-assisted mMIMO system can serve heterogeneous user equipment simultaneously with a lower power budget. To devise an artificial intelligence (AI)-based solution, we developed a sequential neural network model for grid activation decisions under a minimized power constraint. Finally, simulation and performance evaluation results show that power was allocated efficiently and effective beams were formed to serve the users, with a low RMSE of 0.01.
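For context, the SINR objective with a sum-power constraint that the abstract refers to is commonly written in the following generic downlink form; the notation here is a standard textbook convention, not taken from the paper:

$$
\mathrm{SINR}_k \;=\; \frac{\lvert \mathbf{h}_k^{H} \mathbf{w}_k \rvert^{2}}
{\sum_{j \neq k} \lvert \mathbf{h}_k^{H} \mathbf{w}_j \rvert^{2} + \sigma^{2}},
\qquad \text{subject to} \quad \sum_{k} \lVert \mathbf{w}_k \rVert^{2} \le P_{\max},
$$

where $\mathbf{h}_k$ is the channel of user $k$, $\mathbf{w}_k$ its beamforming vector, $\sigma^{2}$ the noise power, and $P_{\max}$ the total power budget enforced by the sum-power rule.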

Rehearsal with Stored Latent Vectors for Incremental Learning Over GANs

Hye-Min Jeong, Dong-Wan Choi

http://doi.org/10.5626/JOK.2023.50.4.351

Unlike humans, deep learning models struggle to learn multiple tasks sequentially. This problem affects not only discriminative models but also generative models such as GANs. Generative Replay, which is frequently used in GAN continual learning, trains new tasks together with images generated by the GAN from previous tasks, but it does not generate good images for CIFAR-10, a relatively challenging task. A rehearsal-based method that stores a portion of the real data is an alternative, but only a small number of real images fit in limited memory because of their high dimensionality. In this paper, we propose LactoGAN and LactoGAN+, continual learning methods that store latent vectors, the inputs of the GAN, rather than real images as existing rehearsal-based approaches do. As a result, more image knowledge can be stored in the same memory, yielding better results than existing GAN continual learning methods.
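The memory argument behind storing latent vectors instead of images can be made concrete with a back-of-the-envelope calculation. The latent dimension of 128 is an assumption chosen for illustration; the paper's actual architecture may differ.

```python
# Toy illustration of the rehearsal-memory argument: a CIFAR-10 image is
# 32x32x3 = 3072 values, while a typical GAN latent vector is ~128 values,
# so the same budget holds far more latent codes than raw images.
IMAGE_DIM = 32 * 32 * 3      # values per CIFAR-10 image
LATENT_DIM = 128             # assumed latent size, for illustration

def rehearsal_capacity(memory_values, item_dim):
    """How many items fit into a budget of stored scalar values."""
    return memory_values // item_dim

budget = 3_072_000                                 # room for 1000 raw images
images = rehearsal_capacity(budget, IMAGE_DIM)
latents = rehearsal_capacity(budget, LATENT_DIM)
ratio = latents // images                          # latent codes per image slot
```

Under these assumed sizes, each image slot can instead hold 24 latent vectors, which is the sense in which "more image knowledge" fits in the same memory.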

C++ based Deep Learning Open Source Framework WICWIU.v3 that Supports Natural Language and Time-series Data Processing

Junseok Oh, Chanhyo Lee, Okkyun Koo, Injung Kim

http://doi.org/10.5626/JOK.2023.50.4.313

WICWIU is the first open-source deep learning framework developed by a Korean university. In this work, we developed WICWIU.v3, which adds features for natural language and time-series data processing. WICWIU is designed for the C++ environment, supports GPU-based parallel processing, and has excellent readability and extensibility, allowing users to easily add new features. In addition to the image processing models supported by WICWIU.v1 and v2, such as convolutional neural networks (CNN) and generative adversarial networks (GAN), WICWIU.v3 provides classes and functions for natural language and time-series data processing: recurrent neural networks (RNN), including LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) networks, attention modules, and Transformers. We validated the newly added functions by implementing a machine translator and a text generator with WICWIU.v3.

DNN Retraining Method Reducing Accuracy Degradation in Packet-Lossy Environments

Dongwhee Kim, Yujin Lim, Syngha Han, Jungrae Kim

http://doi.org/10.5626/JOK.2023.50.3.285

Limited resources on mobile devices have necessitated collaboration with cloud servers, called "Collaborative Intelligence", to process growing Deep Neural Network (DNN) model sizes. Collaborative intelligence spends a long time sending large amounts of feature data from clients to servers. The transfer time can be reduced using the User Datagram Protocol (UDP), but packets dropped during UDP transfer reduce inference accuracy. This paper proposes a DNN retraining method that makes the model robust to such losses: the server-side layers are retrained on features with the contiguous losses caused by packet drops modeled explicitly. Our results show that the method mitigates the accuracy degradation caused by packet losses, provides reliably high accuracy under changing communication environments, and reduces the storage overhead of mobile devices.
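The loss model the retraining relies on — a dropped UDP packet wipes out one contiguous span of the transmitted feature tensor — can be sketched as a simple corruption function. Packet size and indexing here are assumptions for illustration; the paper's feature layout may differ.

```python
# Hedged sketch: simulate a UDP packet drop as a contiguous zeroed span of
# the flattened feature tensor, the corruption the server layers retrain on.
import numpy as np

def drop_packet(features, packet_size, packet_idx):
    """Zero the contiguous feature span carried by one lost packet."""
    lossy = features.copy()
    start = packet_idx * packet_size
    lossy[start:start + packet_size] = 0.0
    return lossy

feats = np.arange(12, dtype=float)   # stand-in for intermediate features
lossy = drop_packet(feats, packet_size=4, packet_idx=1)
# elements 4..7 are zeroed; retraining would sample many such lossy tensors
```

Retraining on many such samples, with the drop position varied, is what lets the server-side layers tolerate losses at inference time.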

Integrating Domain Knowledge with Graph Convolution based on a Semantic Network for Elderly Depression Prediction

Seok-Jun Bu, Kyoung-Won Park, Sung-Bae Cho

http://doi.org/10.5626/JOK.2023.50.3.243

Depression in the elderly is a global problem, affecting 300 million patients and causing 800,000 suicides every year, so early detection of the daily activity patterns closely related to mobility is critical. Although graph-convolution neural networks based on sensing logs have been promising, they still need to represent the high-level behaviors extracted from complex sequences of sensing information. In this paper, a semantic network that structures the daily activity patterns of the elderly was constructed using additional domain knowledge, and a graph convolution model was proposed to use it complementarily with low-level sensing-log graphs. Cross-validation with 800 hours of data from 69 senior citizens, provided by DNX, Inc., showed improved prediction performance for the proposed strategy compared to the most recent deep learning models. In particular, inference over the semantic network was justified by the graph convolution model, showing a performance improvement of 28.86% over the conventional model.
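The propagation step at the heart of any graph convolution model can be shown in a few lines. This is the generic mechanism only; the paper's semantic-network construction and architecture are more involved, and the adjacency, features, and weights below are toy values.

```python
# Minimal graph convolution step, roughly H' = norm(A + I) @ H @ W with ReLU,
# shown only to illustrate how node features mix along graph edges.
import numpy as np

def graph_conv(A, H, W):
    """One propagation step with self-loops and row normalization."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)    # node degrees
    return np.maximum((A_hat / deg) @ H @ W, 0.0)  # ReLU activation

A = np.array([[0., 1.],
              [1., 0.]])                      # two connected activity nodes
H = np.array([[1., 0.],
              [0., 1.]])                      # one-hot node features
W = np.eye(2)                                 # identity weights, for clarity
H1 = graph_conv(A, H, W)                      # features average over neighbors
```

After one step each node's representation blends its own features with its neighbor's, which is how sensing-log nodes and semantic-network nodes can inform each other.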

Graph Neural Networks with Prototype Nodes for Few-shot Image Classification

Sung-eun Jang, Juntae Kim

http://doi.org/10.5626/JOK.2023.50.2.127

The remarkable performance of deep learning models rests on large amounts of training data. However, in many domains such data are difficult to obtain, and substantial resources must be invested in data collection and refinement. To overcome this limitation, few-shot learning, which enables learning from only a small number of examples, is being actively studied. In particular, among meta-learning methodologies, metric-based learning, which exploits similarity between data, has the advantage of not requiring fine-tuning for a new task, and recent studies using graph neural networks have shown good results. A few-shot classification model based on a graph neural network can explicitly process data characteristics and the relationships between data by constructing a task graph whose nodes are the given support-set and query-set examples. The EGNN (Edge-labeling Graph Neural Network) model expresses the similarity between data as edge labels and models intra-class and inter-class similarity more clearly. In this paper, we propose adding a prototype node representing each class to the few-shot task graph, modeling data-data and class-data similarity at the same time. The proposed model builds a generalized prototype node from the task data and class configuration, and can perform two different few-shot image classification predictions, based either on the prototype-query edge label or on the Euclidean distance between prototype and query nodes. Compared with the EGNN model and other meta-learning-based few-shot classification models on 5-way 5-shot classification on the mini-ImageNet dataset, the proposed model showed significant performance improvement.
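The Euclidean-distance prediction path described above can be sketched with the simplest possible prototype construction: the mean of each class's support embeddings. This is a hedged toy of the idea, not the paper's model; the actual prototype nodes are produced and refined inside the graph network, and the embeddings below are invented 2-D vectors rather than mini-ImageNet features.

```python
# Sketch of the prototype-node idea: build one prototype per class from the
# support set, then classify a query by distance to the nearest prototype.
import numpy as np

def build_prototypes(support_embs, support_labels, n_classes):
    """Mean support embedding per class serves as that class's prototype."""
    return np.stack([support_embs[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify_query(query_emb, prototypes):
    dists = np.linalg.norm(prototypes - query_emb, axis=1)
    return int(dists.argmin())                # nearest prototype's class

embs = np.array([[0., 0.], [0., 2.],          # class 0 support embeddings
                 [4., 4.], [6., 4.]])         # class 1 support embeddings
labels = np.array([0, 0, 1, 1])
protos = build_prototypes(embs, labels, 2)
pred = classify_query(np.array([0.5, 1.0]), protos)
```

In the proposed model the same decision can alternatively be read off the prototype-query edge label, with the prototype nodes updated jointly with the rest of the task graph.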


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr