Search : [ keyword: 심층학습 (deep learning) ] (5)

Graph Structure Learning-Based Neural Network for ETF Price Movement Prediction

Hyeonsoo Jo, Jin-gee Kim, Taehun Kim, Kijung Shin

http://doi.org/10.5626/JOK.2024.51.5.473

Exchange-Traded Funds (ETFs) are index funds that track particular market indices and are typically attractive to individual investors for their low risk and low expense ratios. Various methods have emerged for accurately predicting ETF price movements, and recently AI-based techniques have been developed. One representative approach uses time-series neural networks to predict ETF price movements. This approach effectively incorporates an ETF's past price information to predict its movement. However, it is limited in that it uses only the historical information of individual ETFs and does not account for the relationships and interactions between different ETFs. To address this issue, we propose a model that can capture relationships between ETFs. The proposed model uses graph structure learning to infer a graph representing relationships between ETFs; based on this graph, a graph neural network predicts ETF price movements. The proposed model demonstrates superior performance compared to time-series deep-learning models that use only individual ETF information.
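The two-stage idea described in the abstract can be sketched roughly as follows. This is an illustrative NumPy toy, not the authors' implementation: edge weights between ETFs are inferred from (here, randomly initialized) node embeddings, and one GNN-style propagation step mixes each ETF's price features with its neighbors' before a linear movement prediction. All shapes and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_etfs, d_feat, d_emb = 4, 8, 3

X = rng.normal(size=(n_etfs, d_feat))   # per-ETF price-history features
E = rng.normal(size=(n_etfs, d_emb))    # learnable ETF embeddings (random here)

# Graph structure learning: edge weights from embedding similarity,
# normalized per row with a softmax so each row sums to 1
scores = E @ E.T
A = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

# One GNN-style propagation layer followed by a linear movement score
W = rng.normal(size=(d_feat, 1))
H = np.tanh(A @ X)                      # aggregate neighbor information
logits = (H @ W).ravel()
pred_up = logits > 0                    # predicted up/down movement per ETF
```

In a trained model, `E` and `W` would be learned end-to-end so that the inferred adjacency `A` reflects useful inter-ETF relationships rather than random similarity.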

Defining Chunks and Chunking using Its Corpus and Bi-LSTM/CRFs in Korean

Young Namgoong, Chang-Hyun Kim, Min-ah Cheon, Ho-min Park, Ho Yoon, Min-seok Choi, Jae-kyun Kim, Jae-Hoon Kim

http://doi.org/10.5626/JOK.2020.47.6.587

There are several notorious problems in Korean dependency parsing, including the head position problem and the constituent unit problem. Such problems can be partially resolved by chunking, which seeks to locate constituents, referred to as chunks, and classify them into predefined categories. So far, several Korean studies have been conducted without a clear definition of chunks. We therefore define chunks in Korean thoroughly, build a chunk-tagged corpus based on that definition, and propose a Bi-LSTM/CRF chunking model trained on the corpus. Experiments show that the proposed model achieves an F1-score of 98.54% and can be used in practical applications. We also analyzed performance variation across word embeddings, with fastText showing the best performance, and performed an error analysis to guide future improvements of the model.
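The CRF layer on top of the Bi-LSTM emissions is typically decoded with Viterbi search over tag transitions. A minimal sketch (not the authors' code, with an invented toy BIO tag set) of that decoding step:

```python
import numpy as np

def viterbi(emissions, transitions):
    """emissions: (T, K) per-token tag scores; transitions: (K, K) tag-to-tag scores."""
    T, K = emissions.shape
    dp = emissions[0].copy()              # best score ending in each tag
    back = np.zeros((T, K), dtype=int)    # backpointers
    for t in range(1, T):
        scores = dp[:, None] + transitions + emissions[t][None, :]
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0)
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy BIO chunk tags: 0 = B-NP, 1 = I-NP, 2 = O
emissions = np.array([[2., 0., 0.],
                      [0., 2., 0.],
                      [0., 0., 2.]])
transitions = np.zeros((3, 3))
transitions[2, 1] = -10.0                 # penalize the invalid O -> I-NP transition
best_path = viterbi(emissions, transitions)   # -> [0, 1, 2]
```

The transition matrix is what lets the CRF enforce chunk-label consistency (e.g., an `I-NP` tag cannot follow `O`), which per-token classification alone cannot guarantee.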

Visualization of Convolutional Neural Networks for Time Series Input Data

Sohee Cho, Jaesik Choi

http://doi.org/10.5626/JOK.2020.47.5.445

Globally, the use of artificial intelligence (AI) applications has increased across a variety of industries, from manufacturing to health care to finance. As a result, there is growing interest in explainable artificial intelligence (XAI), which can explain what happens inside AI models. Unlike previous work that used image data, we visualize hidden nodes for time series input. To interpret which node patterns contribute most to model decisions, we propose a method of arranging the nodes in a hidden layer. The hidden nodes, sorted by their weight-matrix values, show which patterns significantly affected the classification. Visualizing hidden nodes explains the process inside a deep learning model and also helps users improve their understanding of time series data.
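The node-arranging idea can be sketched as ranking hidden nodes by the magnitude of their outgoing weight-matrix entries, so the most influential activation patterns are plotted first. This is an illustrative assumption of one plausible sorting criterion, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden, n_classes, seq_len = 6, 2, 50

# Weights from the hidden (e.g., last conv/dense) layer to the class outputs
W_out = rng.normal(size=(n_hidden, n_classes))

# Importance of each hidden node: total absolute outgoing weight
importance = np.abs(W_out).sum(axis=1)
order = np.argsort(-importance)           # most influential node first

# Time-series activations of each hidden node, reordered for plotting
activations = rng.normal(size=(n_hidden, seq_len))
sorted_activations = activations[order]
```

Plotting `sorted_activations` row by row would then show the classification-relevant temporal patterns at the top of the figure.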

Research on Joint Models for Korean Word Spacing and POS (Part-Of-Speech) Tagging based on Bidirectional LSTM-CRF

Seon-Wu Kim, Sung-Pil Choi

http://doi.org/10.5626/JOK.2018.45.8.792

In general, Korean part-of-speech (POS) tagging takes as input a sentence in which word spacing is already complete. To process a sentence that is not properly spaced, automatic spacing is needed to correct the errors. However, if automatic spacing and POS tagging are performed sequentially, errors at each step can cause serious performance degradation. In this study, we address this problem by constructing an integrated model that performs automatic spacing and POS tagging simultaneously. Based on the Bidirectional LSTM-CRF model, we propose an integrated model that performs syllable-level word spacing and POS tagging simultaneously and complementarily. In experiments on the Sejong tagged corpus, we obtained 98.77% POS tagging accuracy for completely spaced sentences and 97.92% morpheme accuracy for sentences without any word spacing.
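One way such a joint model can work is to give each syllable a combined label encoding both a spacing decision (B = begins a new word) and a POS tag, so a single sequence labeler predicts both at once. The tag scheme below is illustrative, not the paper's exact label set:

```python
# Combined syllable tags: "<space flag>-<POS>", e.g. "B-NNG" means this
# syllable starts a new word whose morpheme carries the NNG tag.
def split_tag(tag):
    flag, pos = tag.split("-", 1)
    return flag, pos

syllables = list("나는밥을먹었다")        # unspaced input sentence
tags = ["B-NP", "I-JX", "B-NNG", "I-JKO", "B-VV", "I-EP", "I-EF"]

# Decode: insert a word break before every 'B' syllable,
# collecting the spaced words and the per-syllable POS sequence
words, pos_seq, cur = [], [], ""
for syl, tag in zip(syllables, tags):
    flag, pos = split_tag(tag)
    if flag == "B" and cur:
        words.append(cur)
        cur = ""
    cur += syl
    pos_seq.append(pos)
words.append(cur)
# words -> ["나는", "밥을", "먹었다"]
```

Because spacing and POS information share one label, errors in one task can be corrected by evidence from the other, which is the complementarity the abstract refers to.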

Quality Estimation of English-Korean Machine Translation using Neural Network based Predictor-Estimator Model

Hyun Kim, Jaehun Shin, Wonkee Lee, Seungwoo Cho, Jong-Hyeok Lee

http://doi.org/10.5626/JOK.2018.45.6.545

Quality estimation (QE) for machine translation is an automatic method for estimating the quality of machine translation output without the use of reference translations. QE has recently grown in importance in the field of machine translation (MT). Recent studies on QE have mainly focused on European languages, whereas fewer studies have addressed QE for Korean. In this paper, we create a new QE dataset for English-to-Korean translation and apply a neural-network-based Predictor-Estimator model to the English-Korean QE task. Creating a QE dataset requires manually post-edited translations of MT outputs; because Korean is a free-word-order language that allows various writing styles, we provide guidelines for creating the manually post-edited Korean translations in the English-Korean QE data. We also alleviate the imbalanced-data problem of the QE data. Finally, we report experimental results on the English-Korean QE task using a Predictor-Estimator model trained on the created data.
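The Predictor-Estimator architecture is two-staged: a "predictor" (a word-prediction model trained on parallel data) produces per-token quality feature vectors for the MT output, and an "estimator" pools them into a sentence-level quality score. A rough NumPy sketch of the estimator stage, with random stand-in features and invented dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_tokens, d_feat = 5, 16

# Per-token quality feature vectors, as would come from the predictor
qefv = rng.normal(size=(n_tokens, d_feat))
w = rng.normal(size=d_feat)               # estimator weights (random here)

# Estimator: pool token features into a sentence vector, then map to a
# quality score in (0, 1) via a sigmoid (e.g., an HTER-like target)
pooled = qefv.mean(axis=0)
score = 1.0 / (1.0 + np.exp(-(pooled @ w)))
```

In the actual model both stages are neural networks (the estimator is typically an RNN rather than mean pooling), but the interface is the same: token-level features in, one sentence-level quality score out.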






Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr