Search : [ keyword: emotion analysis (감정 분석) ] (4)

KcBert-based Movie Review Corpus Emotion Analysis Using Emotion Vocabulary Dictionary

Yeonji Jang, Jiseon Choi, Hansaem Kim

http://doi.org/10.5626/JOK.2022.49.8.608

Emotion analysis is the classification of human emotions expressed in text data into emotional types such as joy, sadness, anger, surprise, and fear. In this study, an emotion vocabulary dictionary was used to classify the emotions expressed in a movie review corpus into nine categories: joy, sadness, fear, anger, disgust, surprise, interest, boredom, and pain, thereby constructing an emotion corpus. The performance of the model was then evaluated by training KcBert on this emotion corpus. To build the emotion analysis corpus, an emotion vocabulary dictionary based on a psychological model was used: the words in each movie review were checked against the entries of the dictionary, and each review was tagged with the emotion type of the last matching vocabulary item. When the emotion analysis corpus constructed in this way was used to fine-tune KcBert pre-trained on NSMC, the resulting nine-class classification model showed excellent performance.
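The corpus-construction step described above can be sketched as a simple lexicon lookup. The dictionary entries and helper below are purely illustrative stand-ins (the actual emotion vocabulary dictionary and matching rules are not given in the abstract); the sketch only shows the idea of tagging a review with the emotion type of its last matching word.

```python
from typing import Optional

# Hypothetical miniature emotion lexicon: word -> one of the nine emotion types.
EMOTION_LEXICON = {
    "happy": "joy",
    "tears": "sadness",
    "terrifying": "fear",
    "furious": "anger",
    "gross": "disgust",
    "unexpected": "surprise",
    "fascinating": "interest",
    "dull": "boredom",
    "agonizing": "pain",
}

def tag_review(review: str) -> Optional[str]:
    """Tag a review with the emotion type of the last lexicon word it contains."""
    last_emotion = None
    for token in review.lower().split():
        word = token.strip(".,!?")
        if word in EMOTION_LEXICON:
            # A later match overrides earlier ones, so the final tag comes
            # from the last emotion word appearing in the review.
            last_emotion = EMOTION_LEXICON[word]
    return last_emotion
```

Reviews with no matching vocabulary item are left untagged (`None`) in this sketch.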

Combining Sentiment-Combined Model with Pre-Trained BERT Models for Sentiment Analysis

Sangah Lee, Hyopil Shin

http://doi.org/10.5626/JOK.2021.48.7.815

It is known that BERT can capture various linguistic knowledge from raw text via language modeling without any additional hand-crafted features. However, some studies have shown that BERT-based models that additionally use specific linguistic knowledge achieve higher performance on natural language processing problems associated with that knowledge. Based on this finding, we trained a sentiment-combined model by adding sentiment features to the BERT structure. We constructed sentiment feature embeddings using the sentiment polarity and intensity values annotated in a Korean sentiment lexicon and proposed two methods (external fusing and knowledge distillation) to combine the sentiment-combined model with a general-purpose pre-trained BERT model. The external fusing method yielded higher performance on Korean sentiment analysis tasks with movie review and hate speech datasets than baselines from other pre-trained models not fused with the sentiment-combined model. We also observed that adding sentiment features to the BERT structure improved the model’s language modeling and sentiment analysis performance. Furthermore, when implementing sentiment-combined models, training time and cost could be decreased by using a small-scale BERT model with a small number of layers, dimensions, and steps.
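The core idea of the sentiment feature embeddings can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the token-embedding matrix stands in for BERT's embedding layer, the lexicon entries are invented, and the fusion is shown as a simple additive projection of (polarity, intensity) pairs into the embedding space.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 100, 16

# Hypothetical sentiment lexicon: token id -> (polarity in [-1, 1], intensity in [0, 1]).
sentiment_lexicon = {7: (1.0, 0.8), 13: (-1.0, 0.5)}

token_emb = rng.normal(size=(VOCAB, DIM))  # stand-in for BERT token embeddings
sent_proj = rng.normal(size=(2, DIM))      # projects (polarity, intensity) into DIM dims

def embed_with_sentiment(token_ids):
    """Add a projected sentiment feature embedding to each token embedding."""
    out = []
    for tid in token_ids:
        e = token_emb[tid].copy()
        pol, inten = sentiment_lexicon.get(tid, (0.0, 0.0))  # neutral if not in lexicon
        e += np.array([pol, inten]) @ sent_proj              # fuse sentiment features
        out.append(e)
    return np.stack(out)
```

Tokens absent from the lexicon pass through unchanged, so the sentiment signal only perturbs the representation of lexicon words.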

RNN model for Emotion Recognition in Dialogue by incorporating the Attention on the Other’s State

Seunguook Lim, Jihie Kim

http://doi.org/10.5626/JOK.2021.48.7.802

Emotion recognition has recently received increasing attention in artificial intelligence. In this paper, we present an RNN model that analyzes and identifies a speaker’s emotions expressed through utterances in conversation. Two kinds of speaker-related context are considered: self-dependency and inter-speaker dependency. In particular, we focus on inter-speaker dependency, on the premise that the state of the other speaker can affect the emotions of the current speaker. We propose a DialogueRNN-based model that adds a new GRU cell to capture inter-speaker dependency. Our model shows higher performance than DialogueRNN and its three variants on multiple emotion classification datasets.
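The inter-speaker dependency idea can be sketched as below: each speaker keeps a recurrent state, and when a speaker produces an utterance, their state update is conditioned on the other speaker's current state. This is a minimal NumPy sketch under several assumptions (a plain GRU cell, a two-party dialogue, and concatenation of the utterance with the other speaker's state as input); it is not the paper's DialogueRNN-based architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
H, U = 8, 8  # hidden state size, utterance feature size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gate weights for a single GRU cell: input size U + H (utterance + other's state).
W = rng.normal(size=(3, H, U + H)) * 0.1   # input-to-hidden for z, r, n gates
Uh = rng.normal(size=(3, H, H)) * 0.1      # hidden-to-hidden for z, r, n gates
b = np.zeros((3, H))

def gru_step(x, h):
    """One GRU update for input x and previous hidden state h."""
    z = sigmoid(W[0] @ x + Uh[0] @ h + b[0])        # update gate
    r = sigmoid(W[1] @ x + Uh[1] @ h + b[1])        # reset gate
    n = np.tanh(W[2] @ x + Uh[2] @ (r * h) + b[2])  # candidate state
    return (1 - z) * h + z * n

def run_dialogue(utterances, speakers):
    """Track one state per speaker; each update sees the other speaker's state."""
    states = {s: np.zeros(H) for s in set(speakers)}
    for x, s in zip(utterances, speakers):
        other = [t for t in states if t != s][0]      # assumes two-party dialogue
        inp = np.concatenate([x, states[other]])      # other's state conditions the update
        states[s] = gru_step(inp, states[s])
    return states
```

Emotion classification would then be read off each updated speaker state; that head is omitted here.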

Speakers’ Intention Analysis Based on Partial Learning of a Shared Layer in a Convolutional Neural Network

Minkyoung Kim, Harksoo Kim

http://doi.org/10.5626/JOK.2017.44.12.1252

In dialogues, speakers’ intentions can be represented by sets of an emotion, a speech act, and a predicator. Therefore, dialogue systems should capture and process these implied characteristics of utterances. Many previous studies have treated these determinations as independent classification problems, but others have shown that they are associated with each other. In this paper, we propose an integrated model that simultaneously determines emotions, speech acts, and predicators using a convolutional neural network. The proposed model consists of particular abstraction layers and a shared abstraction layer: in the particular abstraction layers, mutually independent information about each characteristic is abstracted, and in the shared abstraction layer, combinations of this independent information are abstracted. During training, the errors of emotions, speech acts, and predicators are partially back-propagated through the layers. In experiments, the proposed integrated model showed better performance (by 2%p in emotion determination, 11%p in speech act determination, and 3%p in predicator determination) than independent determination models.
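The layer layout described above can be sketched as a forward pass. This NumPy sketch makes simplifying assumptions: the convolutional feature extractor is replaced by plain dense layers for brevity, the class counts for speech acts and predicators are invented, and the partial back-propagation scheme is only indicated in a comment, since its details are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
D, P, S = 32, 16, 16                # input, particular-layer, shared-layer sizes
N_CLASSES = {"emotion": 9, "speech_act": 5, "predicator": 7}  # last two illustrative

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Particular abstraction layers: one per characteristic, mutually independent.
W_part = {k: rng.normal(size=(P, D)) * 0.1 for k in N_CLASSES}
# Shared abstraction layer: combines the three particular representations.
W_shared = rng.normal(size=(S, 3 * P)) * 0.1
# One output head per task. During training, each task's error would be only
# partially back-propagated through the shared layer (per the paper's scheme).
W_out = {k: rng.normal(size=(n, S)) * 0.1 for k, n in N_CLASSES.items()}

def forward(x):
    """Jointly predict emotion, speech act, and predicator distributions."""
    parts = [relu(W_part[k] @ x) for k in N_CLASSES]
    shared = relu(W_shared @ np.concatenate(parts))
    return {k: softmax(Wo @ shared) for k, Wo in W_out.items()}
```

A single forward pass thus yields all three predictions at once, which is what makes the joint (rather than independent) determination possible.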



Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr