Search : [ keyword: 의존 구문 분석 (dependency parsing) ] (3)

Prompt Tuning For Korean Aspect-Based Sentiment Analysis

Bong-Su Kim, Seung-Ho Choi, Si-hyun Park, Jun-Ho Wang, Ji-Yoon Kim, Hyun-Kyu Jeon, Jung-Hoon Jang

http://doi.org/10.5626/JOK.2024.51.12.1043

Aspect-based sentiment analysis examines how the sentiment expressed in text relates to specific aspects, such as product characteristics or service features. This paper presents a comprehensive methodology for applying prompt tuning to multi-task token labeling problems using aspect-based sentiment analysis data. The methodology includes a pipeline for identifying sentiment expression spans, which generalizes the token labeling problem into a sequence labeling problem. It also proposes selecting templates to classify the separated sequences by aspect and sentiment, and expanding the label words to match the characteristics of the dataset, thereby optimizing the model's performance. Finally, the paper reports several experimental results and analyses for the aspect-based sentiment analysis task in a few-shot setting. The constructed data and baseline model are available on AIHUB (www.aihub.or.kr).
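As a rough illustration of the template-and-label-word idea summarized in the abstract, the following Python snippet is a minimal sketch (not the paper's pipeline): an already-extracted opinion span is wrapped in a cloze template, and the masked-LM scores of expanded label words decide the sentiment class. The model name, template, and label words are assumptions for illustration only.

```python
# Minimal sketch of prompt-style classification with expanded label words (a verbalizer).
# Assumptions: model name, template wording, and label-word lists are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-multilingual-cased"  # placeholder; the paper targets Korean data
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Each sentiment class is expanded to several label words (assumed examples).
LABEL_WORDS = {
    "positive": ["good", "great"],
    "negative": ["bad", "poor"],
}

def classify_span(sentence: str, span: str) -> str:
    """Score the masked slot of a cloze template and pick the best-scoring class."""
    template = f"{sentence} In this sentence, '{span}' is {tokenizer.mask_token}."
    inputs = tokenizer(template, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    mask_logits = logits[0, mask_pos[0]]
    scores = {}
    for label, words in LABEL_WORDS.items():
        ids = [tokenizer.convert_tokens_to_ids(w) for w in words]
        scores[label] = mask_logits[ids].mean().item()  # average over expanded label words
    return max(scores, key=scores.get)

print(classify_span("The battery lasts all day.", "battery"))
```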

An Automatic Method of Generating a Large-Scale Train Set for Bi-LSTM based Sentiment Analysis

Min-Seong Choi, Byung-Won On

http://doi.org/10.5626/JOK.2019.46.8.800

Sentiment analysis with deep learning requires a large-scale training set labeled with sentiment. However, manual sentiment labeling is costly and time-consuming, and it is difficult to collect enough labeled data for sentiment analysis. To address this, the present work uses an existing sentiment lexicon to assign a sentiment score to each sentence and, when a sentiment transformation element is present, resets the score using dependency parsing and morphological analysis, thereby automatically generating a large-scale training set labeled with sentiment. The top-k sentences with the highest sentiment scores are then extracted. Sentiment transformation elements include sentiment reversal, sentiment activation, and sentiment deactivation. The experimental results show that a large-scale training set can be generated in far less time than manual labeling, and that deep learning performance improves as the training set grows. The accuracy of the model using only the sentiment lexicon was 80.17%, while the accuracy of the proposed model, which incorporates natural language processing techniques, was 89.17%, an improvement of 9 percentage points.
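The labeling procedure can be illustrated with a simplified sketch (not the authors' implementation): lexicon scores are adjusted when a sentiment transformation element governs a sentiment word, and the top-k most strongly scored sentences are kept. The word lists are placeholders, and simple token adjacency stands in for the dependency-parse and morphological checks.

```python
# Sketch of lexicon-based automatic labeling with sentiment transformation elements.
# Assumptions: toy English lexicon/modifier lists; adjacency replaces dependency parsing.
from typing import List, Tuple

SENTIMENT_LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}
REVERSAL = {"not"}           # flips polarity
ACTIVATION = {"very"}        # strengthens polarity
DEACTIVATION = {"slightly"}  # weakens polarity

def score_sentence(tokens: List[str]) -> float:
    """Sum lexicon scores, resetting them when a transformation element precedes the word."""
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok not in SENTIMENT_LEXICON:
            continue
        s = SENTIMENT_LEXICON[tok]
        prev = tokens[i - 1] if i > 0 else ""
        if prev in REVERSAL:
            s = -s
        elif prev in ACTIVATION:
            s *= 2.0
        elif prev in DEACTIVATION:
            s *= 0.5
        score += s
    return score

def build_train_set(sentences: List[List[str]], k: int) -> List[Tuple[List[str], int]]:
    """Keep the top-k most strongly scored sentences, labeled by the sign of the score."""
    scored = sorted(sentences, key=lambda s: abs(score_sentence(s)), reverse=True)[:k]
    return [(s, 1 if score_sentence(s) > 0 else 0) for s in scored]

corpus = [["the", "movie", "was", "not", "good"],
          ["very", "great", "acting"],
          ["slightly", "bad", "sound"]]
print(build_train_set(corpus, k=2))
```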

Korean Dependency Parsing using Pointer Networks

Cheoneum Park, Changki Lee

http://doi.org/10.5626/JOK.2017.44.8.822

In this paper, we propose a Korean dependency parsing model using pointer networks with multi-task learning. Multi-task learning improves performance by learning two or more related problems at the same time. We perform dependency parsing with pointer networks that simultaneously predict the dependency head and the dependency label of each word. We define five input criteria for morpheme-based multi-task learning with pointer networks in word-level dependency parsing, and we apply fine-tuning to further improve the performance of the proposed parser. Experimental results show that the proposed model achieves a UAS of 91.79% and an LAS of 89.48%, outperforming conventional Korean dependency parsers.
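A minimal PyTorch sketch of the multi-task pointer-network idea (illustrative only; the encoder choice, dimensions, and label set are assumptions, not the paper's architecture) shows how one shared encoder can feed both a head-pointer attention and a dependency-label classifier:

```python
# Sketch: shared encoder, pointer attention over positions for heads, plus label scores.
import torch
import torch.nn as nn

class PointerDepParser(nn.Module):
    def __init__(self, vocab_size: int, num_labels: int, emb_dim: int = 64, hid: int = 128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid, batch_first=True, bidirectional=True)
        self.query = nn.Linear(2 * hid, 2 * hid)         # dependent word -> pointer query
        self.label_out = nn.Linear(4 * hid, num_labels)  # dependent + attended head -> label

    def forward(self, word_ids: torch.Tensor):
        enc, _ = self.encoder(self.emb(word_ids))         # (B, T, 2*hid)
        q = self.query(enc)                               # one query per dependent word
        head_scores = torch.bmm(q, enc.transpose(1, 2))   # (B, T, T): pointer over positions
        attn = head_scores.softmax(dim=-1)
        head_ctx = torch.bmm(attn, enc)                   # soft representation of the head
        label_scores = self.label_out(torch.cat([enc, head_ctx], dim=-1))
        return head_scores, label_scores                  # two tasks trained jointly

model = PointerDepParser(vocab_size=1000, num_labels=10)
heads, labels = model(torch.randint(0, 1000, (2, 7)))
print(heads.shape, labels.shape)  # (2, 7, 7) and (2, 7, 10)
```

In a multi-task setup like this, the head-pointer loss and the label-classification loss would be summed and backpropagated through the shared encoder, which is the mechanism by which learning both problems at once can improve each.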

