Search: [author: Minjin Choi] (2 results)

Linear Sequential Recommendation Models using Textual Side Information

Dongcheol Lee, Minjin Choi, Jongwuk Lee

http://doi.org/10.5626/JOK.2025.52.6.529

Research on leveraging auxiliary information in sequential recommendation has recently been active. Most approaches combine language models with deep neural networks, but this often incurs high computational cost and latency. Linear recommendation models are an efficient alternative, yet how to effectively incorporate auxiliary information into them has received little attention. This study proposes a framework that effectively utilizes auxiliary information within a linear model. Since textual data cannot be used directly in linear model training, we transform item texts into dense vectors with a pre-trained text encoder. Although these vectors carry rich information, they fail to capture relationships between items; to address this, we apply graph convolution to obtain enhanced item representations. These representations are then used alongside the user-item interaction matrix to train the linear model. Extensive experiments show that the proposed method improves overall performance, particularly when recommending less popular items.
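The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the graph construction, the ridge-style closed-form item-item model, and all function names (`graph_convolve`, `fit_linear_model`) and parameters (`alpha`, `beta`) are assumptions chosen to mirror the abstract's steps — encode item texts, propagate the embeddings over an item graph, then train a linear model on the interaction matrix augmented with those features.

```python
import numpy as np

def graph_convolve(item_emb, adj, layers=2):
    """Propagate item text embeddings over a symmetrically normalized
    item-item graph so each item's vector absorbs related items' information."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]  # D^{-1/2} A D^{-1/2}
    out = item_emb
    for _ in range(layers):
        out = norm_adj @ out
    return out

def fit_linear_model(interactions, item_emb, alpha=1.0, beta=0.5):
    """Closed-form ridge-style item-item weight matrix, trained on the
    user-item matrix stacked with (graph-enhanced) item text features."""
    # Treat scaled item features as extra pseudo-user rows.
    X = np.vstack([interactions, beta * item_emb.T])
    G = X.T @ X + alpha * np.eye(X.shape[1])
    B = np.linalg.inv(G) @ (X.T @ X)
    np.fill_diagonal(B, 0.0)  # suppress trivial self-recommendation
    return B

# Recommendation scores for each user are then interactions @ B.
```

Here the dense vectors from the pre-trained text encoder would stand in for `item_emb`; the sketch only shows how such features can enter a linear model's closed-form fit alongside the interaction matrix.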

Learning with Noisy Labels using Sample Selection based on Language-Image Pre-trained Model

Bonggeon Cha, Minjin Choi, Jongwuk Lee

http://doi.org/10.5626/JOK.2023.50.6.511

Deep neural networks suffer significantly degraded generalization performance when trained with noisy labels. To address this problem, previous studies observed that a model learns clean samples first during the early stage of training; based on this, sample selection methods that treat small-loss samples as clean and train selectively on them have been used to improve performance. However, when a noisy label is similar to its ground truth (e.g., seal vs. otter), sample selection is not effective because the model learns the noisy data early in training. In this paper, we propose Sample selection with a Language-Image Pre-trained model (SLIP), which effectively distinguishes and learns clean samples without relying on the early learning stage by leveraging zero-shot predictions from a pre-trained language-image model. The proposed method improves performance by up to 18.45%p over prior methods on CIFAR-10, CIFAR-100, and WebVision.
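The selection step can be sketched as below. This is an illustrative assumption, not the paper's exact criterion: given per-sample zero-shot class probabilities from a pre-trained language-image model (e.g., CLIP-style), a sample is kept as clean when the zero-shot prediction agrees with its given label, or when the given label still receives high zero-shot probability; the function name and `threshold` parameter are hypothetical.

```python
import numpy as np

def select_clean_samples(zero_shot_probs, given_labels, threshold=0.5):
    """Mark samples as clean using zero-shot predictions.

    zero_shot_probs: (n_samples, n_classes) class probabilities from a
                     pre-trained language-image model.
    given_labels:    (n_samples,) possibly noisy integer labels.
    Returns a boolean mask of samples to train on.
    """
    zero_shot_pred = zero_shot_probs.argmax(axis=1)
    agree = zero_shot_pred == given_labels
    # Also accept samples whose given label is still plausible under the
    # zero-shot distribution, to tolerate near-duplicate classes.
    label_prob = zero_shot_probs[np.arange(len(given_labels)), given_labels]
    return agree | (label_prob >= threshold)
```

Because the mask comes from a fixed pre-trained model rather than the network's own early-training losses, it is available from the first epoch and does not depend on the model memorizing noisy data first.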


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr