Digital Library: Search Results
Speech-Act Analysis System Based on Dialogue Level RNN-CNN Effective on the Exposure Bias Problem
http://doi.org/10.5626/JOK.2018.45.9.911
The speech act is the intention of a speaker in his or her utterance, and speech-act analysis classifies the speech act of a given utterance. Recently, much machine-learning research using corpora has been conducted. This study is motivated by two observations. First, the utterances in a dialogue are consecutive and organically related to each other, and the speech act of the current utterance is strongly influenced by the immediately preceding utterance. Second, previous research did not address the exposure bias problem that arises when a speech-act analysis model uses the speech-act result of the previous utterance. In this paper, we propose a dialogue-level RNN-CNN speech-act analysis model and experimentally examine the exposure bias problem. The RNN-CNN model achieves a performance of 86.87% under the oracle condition and 86.27% under the greedy condition.
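The oracle and greedy conditions above can be illustrated with a toy sketch (not the paper's model): a sequential tagger whose prediction for utterance t consumes the label of utterance t-1. Under the oracle condition the previous label is always the gold one; under the greedy condition the model consumes its own previous prediction, so an early mistake can propagate to later utterances, which is the exposure bias problem. All labels, scores, and transition weights below are hypothetical.

```python
# Toy speech-act tagger: the label for utterance t depends on the label
# of utterance t-1, so decoding differs between the oracle condition
# (gold previous label) and the greedy condition (predicted previous label).

LABELS = ["greeting", "question", "answer"]

# Hypothetical transition scores standing in for a learned dependency
# on the previous speech act ("<s>" marks the dialogue start).
TRANSITION = {
    ("<s>", "greeting"): 2.0, ("<s>", "question"): 0.5, ("<s>", "answer"): 0.1,
    ("greeting", "question"): 2.0, ("greeting", "greeting"): 0.5, ("greeting", "answer"): 0.1,
    ("question", "answer"): 2.0, ("question", "question"): 0.5, ("question", "greeting"): 0.1,
    ("answer", "question"): 1.0, ("answer", "answer"): 0.5, ("answer", "greeting"): 0.1,
}

def classify(utterance_score, prev_label):
    """Pick the label maximizing content score + transition score."""
    return max(LABELS, key=lambda y: utterance_score.get(y, 0.0)
                                     + TRANSITION.get((prev_label, y), 0.0))

def decode(scores, gold=None):
    """Tag a dialogue utterance by utterance. Passing gold labels gives the
    oracle condition; omitting them gives the greedy condition, where the
    model consumes its own previous prediction."""
    preds, prev = [], "<s>"
    for t, sc in enumerate(scores):
        preds.append(classify(sc, prev))
        prev = gold[t] if gold is not None else preds[-1]
    return preds

# Hypothetical per-utterance content scores; the content weakly misleads at t=0.
scores = [
    {"question": 2.0},
    {"question": 0.2, "answer": 0.3},
    {"answer": 1.0},
]
gold = ["greeting", "question", "answer"]

oracle = decode(scores, gold)  # ["question", "question", "answer"]
greedy = decode(scores)        # ["question", "answer", "answer"]: the t=0
                               # mistake propagated and corrupted t=1
```

Both conditions make the same mistake at t=0, but only the greedy condition lets that mistake contaminate the next prediction, which is exactly the train/test mismatch the exposure bias problem describes.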
Speakers’ Intention Analysis Based on Partial Learning of a Shared Layer in a Convolutional Neural Network
http://doi.org/10.5626/JOK.2017.44.12.1252
In dialogues, a speaker’s intention can be represented by a set of an emotion, a speech act, and a predicator. Therefore, dialogue systems should capture and process these implied characteristics of utterances. Many previous studies have treated their determination as independent classification problems, but others have shown that they are associated with each other. In this paper, we propose an integrated model that simultaneously determines emotions, speech acts, and predicators using a convolutional neural network. The proposed model consists of a particular abstraction layer, in which mutually independent information about these characteristics is abstracted, and a shared abstraction layer, in which combinations of the independent information are abstracted. During training, the errors of emotions, speech acts, and predicators are partially back-propagated through the layers. In the experiments, the proposed integrated model showed better performance than independent determination models, by 2%p in emotion determination, 11%p in speech-act determination, and 3%p in predicator determination.
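As a structural sketch of the description above (not the paper's actual implementation), the forward pass below wires one particular abstraction layer per characteristic into a shared abstraction layer, with a separate output head per characteristic. All dimensions and weights are hypothetical assumptions; the CNN feature extraction is abstracted into a plain input vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the abstract does not specify them.
D_IN, D_PART, D_SHARED = 8, 4, 6
TASKS = {"emotion": 3, "speech_act": 5, "predicator": 4}  # label counts

# Particular abstraction layer: one weight matrix per characteristic,
# abstracting mutually independent information from the input.
W_part = {t: rng.normal(size=(D_IN, D_PART)) for t in TASKS}

# Shared abstraction layer: abstracts combinations of the independent
# information (the concatenated particular representations).
W_shared = rng.normal(size=(len(TASKS) * D_PART, D_SHARED))

# One output head per characteristic.
W_out = {t: rng.normal(size=(D_SHARED, n)) for t, n in TASKS.items()}

def relu(z):
    return np.maximum(z, 0.0)

def forward(x):
    """Joint forward pass for one utterance vector x of shape (D_IN,).
    During training, each characteristic's error would be back-propagated
    from its own head through the shared layer and its own particular
    layer only, i.e. partial learning of the shared network."""
    particular = {t: relu(x @ W_part[t]) for t in TASKS}
    shared_in = np.concatenate([particular[t] for t in TASKS])
    shared = relu(shared_in @ W_shared)
    return {t: shared @ W_out[t] for t in TASKS}

logits = forward(rng.normal(size=D_IN))  # one score vector per characteristic
```

The design point this sketch captures is that the three determinations share an intermediate representation instead of being solved by three disjoint networks, which is how the integrated model can exploit the association between emotions, speech acts, and predicators.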

Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr