Search: [ author: 김수빈 ] (2)

Test Bed for Abstraction and Reasoning

Subin Kim, Phunyaphibarn Prin, Donghyun Ahn, Sundong Kim

http://doi.org/10.5626/JOK.2024.51.1.59

The Abstraction and Reasoning Corpus (ARC), proposed by François Chollet, is a benchmark designed for developing generalizable intelligence and is suitable for measuring the cognitive abilities of both humans and computers. While humans can solve most of the problems, no computational ARC solver is yet known to solve more than 30% of them. In this study, a benchmark dataset, Mini-ARC, was introduced to reduce problem complexity while maintaining the difficulty level of the original ARC. To collect Mini-ARC, O2ARC, an interface that tracks the human solution process, was designed; a total of 3,000 solutions were collected from 25 people. This study proposes a new approach to developing a computational ARC solver by building a system that can collect simplified cognitive solution processes at scale. The Mini-ARC dataset can be found at https://github.com/ksb21ST/Mini-ARC.
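An ARC task pairs a few input/output demonstration grids with held-out test inputs, and a solver must infer the transformation from the demonstrations alone. A minimal sketch of that evaluation loop, using a hypothetical toy task (the real Mini-ARC tasks are stored as JSON in the repository above, with the same train/test structure):

```python
# Toy ARC-style task: each grid is a list of rows of color indices.
# This task and the candidate rule are illustrative assumptions,
# not taken from the Mini-ARC dataset itself.
task = {
    "train": [
        {"input": [[1, 0], [0, 0]], "output": [[0, 1], [0, 0]]},
        {"input": [[0, 0], [2, 0]], "output": [[0, 0], [0, 2]]},
    ],
    "test": [{"input": [[3, 0], [0, 4]]}],
}

def mirror_horizontal(grid):
    """Candidate rule: flip each row left-to-right."""
    return [list(reversed(row)) for row in grid]

def solves(rule, pairs):
    """A rule 'solves' a task if it maps every train input to its output."""
    return all(rule(p["input"]) == p["output"] for p in pairs)

# Accept the rule only if it explains all demonstrations, then predict.
if solves(mirror_horizontal, task["train"]):
    prediction = mirror_horizontal(task["test"][0]["input"])
```

A solver that searches over a space of such rules, or that imitates the human solution traces collected through O2ARC, would plug into the same check.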

Analyzing the Impact of Sequential Context Learning on the Transformer Based Korean Text Summarization Model

Subin Kim, Yongjun Kim, Junseong Bang

http://doi.org/10.5626/JOK.2021.48.10.1097

Text summarization reduces sequence length while preserving the meaning of the article body, mitigating information overload and helping readers consume information quickly. To this end, research on Transformer-based English text summarization models has been conducted actively. Recently, an abstractive summarization model was proposed that adds a Recurrent Neural Network (RNN)-based encoder to reflect the fixed word order of English. In this paper, we study the effect of sequential context learning on abstractive summarization by applying an RNN-based encoder to Korean, which has a freer word order than English. A Transformer-based model and a model that adds an RNN-based encoder to the existing Transformer are trained, and their performance on headline generation and article-body summarization is compared on Korean articles we collected directly. Experiments show that the model with the added RNN-based encoder performs better, indicating that sequential context learning is required for Korean abstractive text summarization.
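The intuition behind adding an RNN encoder is that self-attention without positional information treats a sentence as a set, whereas a recurrent pass consumes tokens strictly left to right, so its state depends on word order. A minimal pure-Python sketch of that contrast, with hypothetical one-dimensional "embeddings" and hand-picked weights (no deep-learning framework, illustrative only):

```python
# Sketch: a recurrent encoder is order-sensitive, while simple pooling
# (attention with no positional encoding) is permutation-invariant.

def rnn_encode(xs, w_h=0.5, w_x=1.0):
    """Toy Elman-style recurrence: each step folds in the previous context."""
    h = 0.0
    for x in xs:
        h = w_h * h + w_x * x
    return h

def mean_pool(xs):
    """Order-agnostic baseline: average of token embeddings."""
    return sum(xs) / len(xs)

sent = [1.0, 2.0, 3.0]      # "tokens" in their original order
shuffled = [3.0, 1.0, 2.0]  # same tokens, different word order

# Mean pooling cannot tell the two orderings apart...
assert mean_pool(sent) == mean_pool(shuffled)
# ...but the recurrent state can.
assert rnn_encode(sent) != rnn_encode(shuffled)
```

In practice, Transformers recover order through positional encodings; the paper's finding is that an explicit recurrent pass still adds useful sequential context for Korean.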


Journal of KIISE

  • ISSN: 2383-630X (Print)
  • ISSN: 2383-6296 (Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr