Search : [ keyword: prediction (예측) ] (83)

Optimizing Throughput Prediction Models Based on Feature Category Contribution in 4G/5G Network Environments

Jaeyoung Shin, Jihyun Park

http://doi.org/10.5626/JOK.2024.51.11.961

Accelerating 5G adoption, driven by growing network data consumption and the limitations of 4G, has led to a heterogeneous network environment comprising both 4G and limited 5G coverage. This makes throughput prediction important for network quality of service (QoS) and resource optimization. Traditional throughput prediction research mainly relies on single attributes or on attributes extracted through correlation analysis. However, these approaches have limitations, including the potential exclusion of variables with nonlinear relationships and the arbitrariness and inconsistency of correlation coefficient thresholds. To overcome these limitations, this paper proposes a new approach based on feature importance, which calculates the relative importance of the features used in the network and assigns contribution scores to attribute categories; these scores are then used to enhance throughput prediction. The approach was applied and tested on four open network datasets. Experiments demonstrated that the proposed method successfully derived an optimal category combination for throughput prediction, reduced model complexity, and improved prediction accuracy compared to using all categories.
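
The category-contribution idea can be sketched in a few lines of Python: fit a tree ensemble, read the per-feature importances, and sum them within each attribute category. The feature names, category grouping, and synthetic data below are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative sketch only: aggregate tree-based feature importances into
# category-level contribution scores. Feature names and categories are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = 3 * X[:, 0] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=1000)  # synthetic "throughput"

columns = ["rsrp", "rsrq", "snr", "speed", "cell_load", "hour"]   # hypothetical features
categories = {                                                     # hypothetical grouping
    "signal": ["rsrp", "rsrq", "snr"],
    "mobility": ["speed"],
    "cell": ["cell_load"],
    "time": ["hour"],
}

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importance = dict(zip(columns, model.feature_importances_))

# Contribution score of a category = sum of the importances of its features.
scores = {cat: sum(importance[c] for c in cols) for cat, cols in categories.items()}
for cat, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{cat:8s} {score:.3f}")
```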

Generating Relation Descriptions with Large Language Model for Link Prediction

Hyunmook Cha, Youngjoong Ko

http://doi.org/10.5626/JOK.2024.51.10.908

The Knowledge Graph is a network consisting of entities and the relations between them and is used for various natural language processing tasks. One task related to the Knowledge Graph is Knowledge Graph Completion, which involves reasoning over known facts in the graph and automatically inferring missing links. To tackle this task, studies have been conducted on both link prediction and relation prediction. Recently, there has been significant interest in dual-encoder architectures that utilize textual information. However, link prediction datasets provide descriptions only for entities, not for relations, so such models rely heavily on entity descriptions. To address this issue, we used the large language model GPT-3.5-turbo to generate relation descriptions, allowing the baseline model to be trained with more comprehensive relation information. Moreover, the relation descriptions generated by our proposed method are expected to improve the performance of other language model-based link prediction models. The evaluation results for link prediction show that our proposed method outperforms the baseline model on Korean ConceptNet, WN18RR, FB15k-237, and YAGO3-10, with improvements of 0.34%p, 0.11%p, 0.12%p, and 0.41%p in Mean Reciprocal Rank (MRR), respectively.
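
As a rough illustration of the description-generation step, the sketch below asks GPT-3.5-turbo for a one-sentence description of a relation using the OpenAI Python client; the prompt wording and the describe_relation helper are assumptions, not the authors' exact prompt or pipeline.

```python
# Illustrative sketch: generate a textual description for a knowledge-graph
# relation so a dual-encoder model has relation-side text as well.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_relation(relation: str) -> str:
    """Ask the model for a one-sentence description of a KG relation (hypothetical prompt)."""
    prompt = (
        f"In one sentence, describe the knowledge-graph relation '{relation}': "
        "what kinds of head and tail entities does it connect?"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return response.choices[0].message.content.strip()

print(describe_relation("/location/location/contains"))  # a relation from FB15k-237
```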

Expected Addressee and Target Utterance Prediction for Construction of Multi-Party Dialogue Systems

Yoonjin Jang, Keunha Kim, Youngjoong Ko

http://doi.org/10.5626/JOK.2024.51.10.918

As the number of communication channels between people has increased in recent years, both multi-party and one-to-one conversations have risen, and research on analyzing multi-party conversations has been active. Previously, models for analyzing such dialogues typically predicted the addressee of the final response based on the preceding responses. However, this differs from the task of generating multi-party dialogue responses, in which the speaker must select the addressee to whom they will respond. In this paper, we propose a new task for predicting the addressee of a multi-party dialogue that does not rely on response information. Our task aims to predict the expected target utterance and match it with the expected addressee in a real multi-party dialogue. To accomplish this, we introduce a model that uses a Transformer encoder-based masked token prediction learning method. This model predicts the expected target utterance and the expected addressee of the current speaker based on the previous dialogue context, without considering the final response. The proposed model achieves an accuracy of 82% in predicting the expected addressee and 68% in predicting the expected target utterance on the Ubuntu IRC dataset. These results demonstrate the potential of our model for use in a multi-party dialogue system, as it can accurately predict the target utterance that a response should address. Moving forward, we plan to expand our research by creating additional datasets for multi-party dialogues and applying them to real-world multi-party dialogue response generation systems.
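
A minimal sketch of the masked-prediction idea is given below: a Transformer encoder processes the previous utterances plus a learned mask vector standing in for the unseen response, and the encoded mask position is scored against each past utterance to pick the expected target utterance. All dimensions, the pooling, and the scoring scheme are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch: score which previous utterance the current speaker is
# most likely replying to, without seeing the final response.
import torch
import torch.nn as nn

class TargetUtteranceScorer(nn.Module):
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.mask_token = nn.Parameter(torch.randn(dim))  # stands in for the unseen response

    def forward(self, utterance_embs):           # (batch, turns, dim), one vector per past utterance
        batch, turns, dim = utterance_embs.shape
        mask = self.mask_token.expand(batch, 1, dim)
        hidden = self.encoder(torch.cat([utterance_embs, mask], dim=1))
        query = hidden[:, -1]                     # encoded mask position
        keys = hidden[:, :turns]                  # encoded past utterances
        return (keys @ query.unsqueeze(-1)).squeeze(-1)  # one score per past utterance

scores = TargetUtteranceScorer()(torch.randn(2, 10, 256))
print(scores.argmax(dim=1))  # index of the predicted target utterance per dialogue
```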

A Study on Sales Prediction Model Based on BiLSTM-GAT Using Credit Card Transaction Data

Wonseok Jung, Dohyung Kim, Young Ik Eom

http://doi.org/10.5626/JOK.2024.51.9.807

Sales prediction using credit card transaction data is essential for understanding consumer buying patterns and market trends. However, traditional statistical and machine learning models have limitations when it comes to jointly analyzing temporal features and the relationships between variables such as geographical data, sales by service type, population, and transaction times. This paper proposes two models that simultaneously analyze relationships based on commercial-district features and sales time-series features. To evaluate these models, we constructed graphs based on the distances and sales-feature similarity between commercial districts and compared the proposed models with traditional time-series models, namely LSTM and BiLSTM. The experimental results showed that, measured by RMSE, the GAT-BiLSTM model improved prediction accuracy by approximately 15% over the BiLSTM model, while the BiLSTM-GAT model improved it by about 29%.
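
The BiLSTM-GAT combination can be sketched as follows: a BiLSTM summarizes each commercial district's sales series, and a graph attention layer then mixes the district summaries along distance/similarity edges before a linear head predicts the next sales value. Layer sizes and the example graph are assumptions for illustration (using PyTorch and PyTorch Geometric), not the paper's configuration.

```python
# Illustrative sketch of a BiLSTM-GAT forecaster over a district graph.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class BiLSTMGAT(nn.Module):
    def __init__(self, in_dim=1, hidden=64, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.gat = GATConv(2 * hidden, hidden, heads=heads, concat=False)
        self.head = nn.Linear(hidden, 1)

    def forward(self, series, edge_index):
        # series: (num_districts, timesteps, in_dim); edge_index: (2, num_edges)
        out, _ = self.lstm(series)
        district_emb = out[:, -1]                 # last-step summary per district
        mixed = self.gat(district_emb, edge_index).relu()
        return self.head(mixed).squeeze(-1)       # next-step sales per district

series = torch.randn(30, 24, 1)                   # 30 districts, 24 time steps (synthetic)
edge_index = torch.randint(0, 30, (2, 100))       # hypothetical district graph
print(BiLSTMGAT()(series, edge_index).shape)      # torch.Size([30])
```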

Deep Learning-Based Abnormal Event Recognition Method for Detecting Pedestrian Abnormal Events in CCTV Video

Jinha Song, Youngjoon Hwang, Jongho Nang

http://doi.org/10.5626/JOK.2024.51.9.771

With increasing CCTV installations, the workload for monitoring has grown significantly, and simply expanding the monitoring workforce has reached its limits. Intelligent CCTV technology has been developed to address this, but it suffers performance degradation in varied situations. This paper proposes a robust and versatile method for integrated abnormal behavior recognition in CCTV footage that can be applied across multiple situations. The method extracts frame images from videos and uses both raw images and heatmap representation images as inputs, deriving feature vectors through merging methods at both the image level and the feature-vector level. Based on these vectors, we propose an abnormal behavior recognition method utilizing 2D CNN models, 3D CNN models, LSTM, and average pooling. For performance validation, we defined minor classes and generated 1,957 abnormal behavior video clips for testing. The proposed method is expected to improve the accuracy of abnormal behavior recognition in CCTV footage, thereby enhancing the efficiency of security and surveillance systems.
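
One of the described combinations, a 2D CNN per frame followed by an LSTM over the frame features, can be sketched as below; the ResNet-18 backbone, the number of classes, and all sizes are illustrative assumptions rather than the paper's exact models.

```python
# Illustrative sketch: a 2D CNN encodes each sampled frame and an LSTM
# aggregates the frame features into a clip-level abnormal-event prediction.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CNNLSTMRecognizer(nn.Module):
    def __init__(self, num_classes=5, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()               # keep the 512-d frame feature
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, clip):                       # clip: (batch, frames, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1)).view(b, t, 512)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])         # logits per abnormal-event class

logits = CNNLSTMRecognizer()(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)                                # torch.Size([2, 5])
```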

Photovoltaic Power Forecasting Scheme Based on Graph Neural Networks through Long- and Short-Term Time Pattern Learning

Jaeseung Lee, Sungwoo Park, Jaeuk Moon, Eenjun Hwang

http://doi.org/10.5626/JOK.2024.51.8.690

As the use of solar energy has become increasingly common in recent years, research on predicting photovoltaic power generation to improve the efficiency of solar energy use has been active. In this context, photovoltaic power forecasting models based on graph neural networks have been presented, going beyond existing deep learning models. These models enhance prediction accuracy by learning interactions between regions, specifically how the photovoltaic output of a region is affected by the climate conditions of adjacent regions and by the time pattern of photovoltaic power generation. However, existing models mainly rely on a fixed graph structure, making it difficult to capture temporal and spatial interactions. In this paper, we propose a graph neural network-based photovoltaic power forecasting scheme that takes into account both long-term and short-term time patterns of regional photovoltaic power generation data and incorporates these patterns into the learning process to establish correlations between regions. Compared to other graph neural network-based prediction models, the proposed scheme achieved a performance improvement of up to 7.49% in terms of RRSE, demonstrating its superiority.
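
The long- and short-term pattern idea can be illustrated with a small sketch that derives an inter-region adjacency matrix from two correlation views of the generation history, one over the full window and one over a recent window; the window lengths, blending weight, and synthetic data are assumptions, not the paper's construction.

```python
# Illustrative sketch: blend long-term and short-term correlation patterns
# between regions into one adjacency matrix that a graph model could consume.
import numpy as np

def blended_adjacency(pv, short_window=24, alpha=0.5):
    # pv: (timesteps, regions) photovoltaic generation history
    long_corr = np.corrcoef(pv.T)                      # long-term pattern similarity
    short_corr = np.corrcoef(pv[-short_window:].T)     # recent pattern similarity
    adj = alpha * np.abs(long_corr) + (1 - alpha) * np.abs(short_corr)
    np.fill_diagonal(adj, 0.0)                         # no self-loops
    return adj

pv = np.random.rand(24 * 30, 8)                        # 30 days, 8 regions (synthetic)
print(blended_adjacency(pv).shape)                     # (8, 8)
```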

A GRU-based Time-Series Forecasting Method using Patching

Yunyeong Kim, Sungwon Jung

http://doi.org/10.5626/JOK.2024.51.7.663

Time series forecasting plays a crucial role in decision-making across various fields. Two recent approaches, the patch time series Transformer (PatchTST) and the MLP-structured long-term time series forecasting linear model (LTSF-Linear), have shown promising performance in this area. However, PatchTST requires significant time for both model training and inference, while LTSF-Linear has limited capacity due to its simplistic structure. To address these limitations, we propose a new approach called patch time series GRU (PatchTSG). By applying a Gated Recurrent Unit (GRU) to the patched data, PatchTSG reduces training time while capturing valuable information from the time series. Compared to PatchTST, PatchTSG reduces training time by up to 82% and inference time by up to 46%.
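
The patching idea behind PatchTSG can be sketched as follows: split the lookback window into fixed-length patches, embed each patch, run a GRU over the patch sequence, and map the final hidden state to the forecast horizon. Patch length, hidden size, and horizon below are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch of a GRU over patched time-series input.
import torch
import torch.nn as nn

class PatchTSGSketch(nn.Module):
    def __init__(self, patch_len=16, hidden=128, horizon=96):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, series):                     # series: (batch, lookback)
        patches = series.unfold(1, self.patch_len, self.patch_len)  # (batch, num_patches, patch_len)
        out, _ = self.gru(self.embed(patches))
        return self.head(out[:, -1])               # (batch, horizon)

forecast = PatchTSGSketch()(torch.randn(32, 336))
print(forecast.shape)                              # torch.Size([32, 96])
```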

Predicting the Number of Diners in a School Cafeteria: Including COVID-19 Pandemic Period Data

Chae-eun Baek, Yesl Kwon, Jangmin Oh

http://doi.org/10.5626/JOK.2024.51.7.634

Accurately predicting the number of diners in institutional food service is essential for efficient operations, reducing leftovers, and ensuring customer satisfaction. University cafeterias face additional challenges in making these predictions due to various environmental factors and the changes in class formats caused by the COVID-19 pandemic. To tackle this issue, this study used data collected from university cafeteria environments during the pandemic period to train and compare five different models. The predictions of the three best-performing ensemble tree-based models, RandomForest, LightGBM, and XGBoost, were averaged to obtain a final prediction with a Mean Absolute Error (MAE) of 30.96. By regularly providing prediction results to on-campus cafeterias using this final model, practical support can be offered to optimize operations. This study presents an effective methodology for accurately predicting the number of diners, even in abnormal situations such as the COVID-19 pandemic.
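
The final predictor can be sketched as a simple average of the three regressors' predictions; the synthetic data below stands in for the cafeteria dataset, which is not public, and the hyperparameters are assumptions.

```python
# Illustrative sketch: average RandomForest, LightGBM, and XGBoost predictions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor

X, y = make_regression(n_samples=800, n_features=10, noise=20.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = [
    RandomForestRegressor(n_estimators=300, random_state=0),
    LGBMRegressor(n_estimators=300, random_state=0),
    XGBRegressor(n_estimators=300, random_state=0),
]
preds = np.mean([m.fit(X_tr, y_tr).predict(X_te) for m in models], axis=0)
print(f"ensemble MAE: {mean_absolute_error(y_te, preds):.2f}")
```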

Improving Prediction of Chronic Hepatitis B Treatment Response Using Molecular Embedding

Jihyeon Song, Soon Sun Kim, Ji Eun Han, Hyo Jung Cho, Jae Youn Cheong, Charmgil Hong

http://doi.org/10.5626/JOK.2024.51.7.627

Chronic hepatitis B patients who do not receive timely treatment are at high risk of developing complications such as liver cirrhosis and hepatocellular carcinoma (liver cancer). Various antiviral agents for hepatitis B have been developed, and because these agents differ in their molecular components, treatment responses can vary among patients. Therefore, selecting an appropriate medication that leads to a favorable treatment response is crucial. In this study, information about the molecular components of the hepatitis B antiviral agents was incorporated for learning, in addition to patients' blood test results and electronic medical records of drug prescriptions. The aim was to improve the prediction of chronic hepatitis B patients' treatment responses one year after treatment. Molecular embeddings of the antiviral agents included both fixed embeddings and embeddings generated end-to-end with a graph neural network model. Comparison with the baseline model confirmed that the drug molecule embeddings contributed to improved performance.
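
A fixed molecular embedding of the kind described can be sketched with RDKit: compute a Morgan fingerprint from the drug's structure and concatenate it with the patient's clinical features before classification. The placeholder SMILES (aspirin), the clinical values, and the fingerprint size are purely illustrative assumptions, not the study's data.

```python
# Illustrative sketch: fixed molecular embedding (Morgan fingerprint) plus
# clinical features as the input vector for a treatment-response classifier.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def drug_embedding(smiles: str, n_bits: int = 256) -> np.ndarray:
    """Fixed molecular embedding: a Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(list(fp), dtype=np.float32)

clinical = np.array([5.2, 40.0, 1.1], dtype=np.float32)     # hypothetical lab values
placeholder_smiles = "CC(=O)Oc1ccccc1C(=O)O"                # aspirin, a placeholder structure
x = np.concatenate([clinical, drug_embedding(placeholder_smiles)])
print(x.shape)                                               # (259,)
```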

Prediction of Cancer Prognosis Using Patient-Specific Cancer Driver Gene Information

Dohee Lee, Jaegyoon Ahn

http://doi.org/10.5626/JOK.2024.51.6.574

Accurate prediction of cancer prognosis is crucial for effective treatment. Consequently, numerous studies on cancer prognosis have been conducted, with recent research leveraging various machine learning techniques such as deep learning. In this paper, considering the heterogeneity of cancer, we first constructed a patient-specific gene network for each patient and then selected patient-specific cancer driver genes. We propose a deep neural architecture that predicts prognosis more accurately using this patient-specific cancer driver gene information. When applied to gene expression data for 11 types of cancer, our method demonstrated significantly higher prediction accuracy than existing methods.

