Search : [ author: 김홍진 ] (3)

Multi-Level Attention-Based Generation Model for Long-Term Conversation

Hongjin Kim, Bitna Keum, Jinxia Huang, Ohwoog Kwon, Harksoo Kim

http://doi.org/10.5626/JOK.2025.52.2.117

Research into developing more human-like conversational models that use persona memory to generate responses is actively underway. Many existing studies employ a separate retrieval model to identify relevant personas in memory, which can slow down the overall system and make it cumbersome. In addition, these studies focus primarily on the ability to generate responses that reflect a persona well, whereas the ability to determine whether a persona needs to be referenced at all should come first. In this paper, we therefore propose a model that does not use a retriever; instead, the need to reference memory is determined through multi-level attention operations within the generation model itself. If a reference is deemed necessary, the response reflects the relevant persona; otherwise, the response focuses on the conversational context. Experimental results confirm that the proposed model operates effectively in long-term conversations.
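A minimal PyTorch sketch of the idea described in this abstract: rather than calling a separate retriever, the generator attends over persona memory and gates whether that memory should influence the response. The module names, sizes, and gating formulation below are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MemoryGatedAttention(nn.Module):
    """Sketch: decide inside the generator whether persona memory is needed."""
    def __init__(self, hidden_size: int):
        super().__init__()
        # Token-level attention over persona memory entries.
        self.mem_attn = nn.MultiheadAttention(hidden_size, num_heads=8, batch_first=True)
        # Gate deciding how much the memory should be referenced.
        self.gate = nn.Linear(hidden_size * 2, 1)

    def forward(self, context_h: torch.Tensor, memory_h: torch.Tensor) -> torch.Tensor:
        # context_h: (batch, ctx_len, hidden)  encoded dialogue context
        # memory_h:  (batch, mem_len, hidden)  encoded persona memory
        mem_summary, _ = self.mem_attn(context_h, memory_h, memory_h)
        # Gate near 1 -> reflect the persona; near 0 -> rely on the context.
        g = torch.sigmoid(self.gate(torch.cat([context_h, mem_summary], dim=-1)))
        return g * mem_summary + (1.0 - g) * context_h

# Usage with random tensors standing in for encoder outputs.
layer = MemoryGatedAttention(hidden_size=768)
ctx = torch.randn(2, 20, 768)   # dialogue context states
mem = torch.randn(2, 5, 768)    # persona memory states
fused = layer(ctx, mem)         # (2, 20, 768), fed onward to the decoder
```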

Improved Open-Domain Conversation Generative Model via Denoising Training of Guide Responses

Bitna Keum, Hongjin Kim, Jinxia Huang, Ohwoog Kwon, Harksoo Kim

http://doi.org/10.5626/JOK.2023.50.10.851

Recent open-domain conversation research has actively explored combining the strengths of retrieval models and generative models while overcoming their respective weaknesses. However, the generative model tends to either disregard the retrieved response or copy it verbatim. In this paper, we propose a method for mitigating both problems. To alleviate the former, we filter the retrieved responses and use the gold response together with them. To address the latter, we apply noise to the gold and retrieved responses, so that the generative model strengthens its response-generation ability through denoising training. The effectiveness of the proposed method is verified through human and automatic evaluation.
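A small Python sketch of the kind of noising described above: the gold or retrieved "guide" response is corrupted (token dropping and masking here) before being attached to the model input, so the generator learns to denoise it rather than ignore it or copy it verbatim. The noise types, rates, and special tokens are illustrative assumptions, not the paper's exact settings.

```python
import random

def noise_guide_response(tokens: list[str],
                         drop_prob: float = 0.15,
                         mask_prob: float = 0.15,
                         mask_token: str = "[MASK]") -> list[str]:
    """Randomly drop or mask tokens of a guide response."""
    noised = []
    for tok in tokens:
        r = random.random()
        if r < drop_prob:
            continue                      # drop the token entirely
        elif r < drop_prob + mask_prob:
            noised.append(mask_token)     # replace with a mask token
        else:
            noised.append(tok)            # keep the token unchanged
    return noised

# The noised guide is appended to the dialogue context; the training target is
# still the clean gold response, which makes the objective a denoising one.
guide = "I love hiking on weekends with my dog".split()
model_input = (["[CTX]", "what", "do", "you", "do", "for", "fun", "?", "[GUIDE]"]
               + noise_guide_response(guide))
```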

Joint Model of Morphological Analysis and Named Entity Recognition Using Shared Layer

Hongjin Kim, Seongsik Park, Harksoo Kim

http://doi.org/10.5626/JOK.2021.48.2.167

Named entity recognition is a natural language processing task that finds expressions with specific meanings, such as person names, place names, organization names, dates, and times, in a sentence and assigns the corresponding labels to them. Korean morphological analysis is generally divided into morpheme segmentation and part-of-speech tagging. Named entity recognition and morphological analysis have typically been studied independently; however, in such a pipeline architecture, morphological analysis errors propagate to named entity recognition. To alleviate this error propagation problem, we propose an integrated model using a Label Attention Network (LAN). Experimental results show that our model outperforms single models for named entity recognition and morphological analysis, as well as previous integrated models.
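A minimal PyTorch sketch in the spirit of this abstract: one shared layer feeds both a morphological (POS) head and an NER head, and a Label Attention Network-style step lets the NER head attend over POS label embeddings instead of consuming hard, possibly erroneous POS tags. Layer sizes and the exact wiring are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class JointMorphNER(nn.Module):
    """Sketch of a joint morphological-analysis / NER model with a shared layer."""
    def __init__(self, vocab_size=10000, hidden=256, num_pos=45, num_ner=13):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Shared encoder layer used by both tasks.
        self.shared = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.pos_head = nn.Linear(hidden, num_pos)
        # LAN-style: attend over POS label embeddings with the shared states.
        self.pos_label_embed = nn.Embedding(num_pos, hidden)
        self.label_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.ner_head = nn.Linear(hidden * 2, num_ner)

    def forward(self, token_ids: torch.Tensor):
        h, _ = self.shared(self.embed(token_ids))          # (B, T, hidden)
        pos_logits = self.pos_head(h)                      # morphological analysis
        labels = self.pos_label_embed.weight.unsqueeze(0).expand(h.size(0), -1, -1)
        pos_ctx, _ = self.label_attn(h, labels, labels)    # soft POS label information
        ner_logits = self.ner_head(torch.cat([h, pos_ctx], dim=-1))
        return pos_logits, ner_logits

model = JointMorphNER()
pos_logits, ner_logits = model(torch.randint(0, 10000, (2, 12)))
```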


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr