Adversarial Training with Contrastive Learning in NLP
Daniela N. Rim, DongNyeong Heo, Heeyoul Choi
http://doi.org/10.5626/JOK.2025.52.1.52
Adversarial training has been extensively studied in natural language processing (NLP) to make models robust, so that semantically similar inputs yield similar outputs. However, since language has no objective measure of semantic similarity, previous works rely on an external pre-trained NLP model to enforce this similarity, introducing an extra training stage with substantial memory consumption. This work proposes adversarial training with contrastive learning (ATCL), which trains a language processing model adversarially while exploiting the benefits of contrastive learning. The core idea is to apply linear perturbations in the embedding space of the input via the fast gradient method (FGM) and to train the model to keep the original and perturbed representations close via contrastive learning. We apply ATCL to language modeling and neural machine translation, showing improvements in quantitative scores (perplexity and BLEU). Furthermore, ATCL achieves good qualitative results at the semantic level for both tasks without using a pre-trained model, as demonstrated through simulation.
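
The abstract describes the two ingredients of ATCL concretely enough to sketch: an FGM perturbation of the input embeddings and a contrastive term pulling clean and perturbed representations together. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the names model, task_loss_fn, epsilon, and tau, and the choice of an InfoNCE-style contrastive loss, are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def atcl_step(model, embeddings, task_loss_fn, epsilon=1.0, tau=0.1):
        """One hypothetical ATCL training step: task loss + contrastive term.

        Assumes `model` maps input embeddings to pooled sentence
        representations of shape (batch, dim); all hyperparameters are
        placeholders, not values from the paper.
        """
        # Clean forward pass; track gradients w.r.t. the input embeddings.
        embeddings = embeddings.detach().requires_grad_(True)
        clean_repr = model(embeddings)
        task_loss = task_loss_fn(clean_repr)

        # FGM: a single linear perturbation along the normalized gradient
        # of the task loss in the embedding space.
        grad, = torch.autograd.grad(task_loss, embeddings, retain_graph=True)
        delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
        adv_repr = model(embeddings + delta)

        # Contrastive (InfoNCE-style) term: each clean representation should
        # match its own perturbed version against the rest of the batch.
        z1 = F.normalize(clean_repr, dim=-1)
        z2 = F.normalize(adv_repr, dim=-1)
        logits = z1 @ z2.t() / tau
        labels = torch.arange(z1.size(0), device=z1.device)
        contrastive_loss = F.cross_entropy(logits, labels)

        return task_loss + contrastive_loss

Under this reading, the contrastive term plays the role that an external pre-trained similarity model plays in prior work: it supplies the "keep similar inputs close" signal without an extra pre-training stage.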

Journal of KIISE
- ISSN: 2383-630X (Print)
- ISSN: 2383-6296 (Electronic)
- KCI Accredited Journal