Learning with Noisy Labels using Sample Selection based on Language-Image Pre-trained Model
Bonggeon Cha, Minjin Choi, Jongwuk Lee
http://doi.org/10.5626/JOK.2023.50.6.511
Deep neural networks suffer significantly degraded generalization performance when trained with noisy labels. To address this problem, previous studies observed that models fit clean samples first during the early learning stage; based on this observation, sample selection methods that treat small-loss samples as clean and train on them selectively have been used to improve performance. However, when noisy labels are similar to their ground truth (e.g., seal vs. otter), sample selection is ineffective because the model also fits noisy data during the early learning stage. In this paper, we propose Sample selection with a Language-Image Pre-trained model (SLIP), which effectively distinguishes and learns clean samples without relying on the early learning stage by leveraging zero-shot predictions from a pre-trained language-image model. Our proposed method improves performance by up to 18.45%p over previous methods on CIFAR-10, CIFAR-100, and WebVision.
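To illustrate the core idea, the sketch below uses zero-shot predictions from a CLIP-style language-image model to flag samples whose given labels the model agrees with. This is a minimal sketch of the general approach, not the authors' exact SLIP procedure; the checkpoint name, prompt template, and agreement threshold are illustrative assumptions.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; the paper's exact model may differ.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def select_clean_samples(images, labels, class_names, threshold=0.5):
    """Flag samples whose assigned label agrees with the zero-shot prediction.

    images:      list of PIL images
    labels:      LongTensor of assigned (possibly noisy) class indices
    class_names: list of class name strings
    threshold:   hypothetical agreement cutoff, not from the paper
    """
    prompts = [f"a photo of a {c}" for c in class_names]
    inputs = processor(text=prompts, images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image: (num_images, num_classes) image-text similarities.
    probs = outputs.logits_per_image.softmax(dim=-1)
    # Zero-shot probability assigned to each sample's given label.
    label_probs = probs[torch.arange(len(labels)), labels]
    # Treat a sample as clean if the model's zero-shot prediction
    # supports its label; noisy samples are excluded from training.
    return label_probs > threshold
```

Because the selection signal comes from a pre-trained model rather than from the training loss, it is available from the very first epoch and does not depend on the early-learning behavior that small-loss selection relies on.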

Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal