Digital Library [Search Result]
Effective Importance-Based Entity Grouping Method in Continual Graph Embedding
http://doi.org/10.5626/JOK.2025.52.7.627
This study proposed a novel approach to improving entity importance evaluation in continual graph embedding by incorporating edge betweenness centrality as a weighting factor in a Weighted PageRank algorithm. By normalizing and integrating betweenness centrality, the proposed method effectively propagated entity importance while accounting for the significance of information flow through edges. Experimental results demonstrated significant improvements in MRR and Hit@N over existing methods across various datasets. Notably, the proposed method showed enhanced learning performance after the initial snapshot in scenarios where new entities and relationships were continuously added. These findings highlight the effectiveness of leveraging edge centrality for efficient and accurate learning in continual knowledge graph embedding.
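The abstract does not give the exact formulation, but the core idea can be sketched with networkx: compute edge betweenness centrality, min-max normalize it, and pass the result to PageRank as edge weights. The normalization constants and the small epsilon below are illustrative assumptions, not the authors' values.

    import networkx as nx

    def centrality_weighted_pagerank(G, damping=0.85):
        # Betweenness centrality per edge: how much shortest-path
        # "information flow" passes through each edge.
        ebc = nx.edge_betweenness_centrality(G)
        lo, hi = min(ebc.values()), max(ebc.values())
        for (u, v), c in ebc.items():
            # Min-max normalization; the epsilon keeps every edge weighted.
            G[u][v]["weight"] = (c - lo) / (hi - lo + 1e-9) + 1e-6
        # PageRank then propagates importance preferentially along
        # high-centrality edges.
        return nx.pagerank(G, alpha=damping, weight="weight")

    G = nx.karate_club_graph()
    scores = centrality_weighted_pagerank(G)
    print(sorted(scores, key=scores.get, reverse=True)[:5])  # top-5 entities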
Continual Learning using Memory-Efficient Parameter Generation
Hyung-Wook Lim, Han-Eol Kang, Dong-Wan Choi
http://doi.org/10.5626/JOK.2024.51.8.747
Continual learning with parameter generation shows remarkable stability in retaining knowledge from previous tasks. However, it suffers from a gradual decline in parameter generation performance due to its lack of adaptability to new tasks. Furthermore, the difficulty of predetermining the optimal size of the parameter generation model (meta-model) can lead to memory efficiency issues. To address these limitations, this paper proposed two novel techniques. First, the Chunk Save & Replay (CSR) technique selectively stored and replayed vulnerable parts of the generative neural network, maintaining diversity in the parameter generation model while using memory efficiently. Second, the Automatically Growing GAN (AG-GAN) technique automatically expanded the memory of the parameter generation model based on the learning tasks, enabling effective memory utilization in resource-constrained environments. Experimental results demonstrated that these techniques significantly reduced memory usage while minimizing performance degradation; they were also able to recover from deteriorated network performance. This research presents new approaches to overcoming the limitations of parameter-generation-based continual learning, facilitating more effective and efficient continual learning systems.
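The paper's CSR mechanism is not detailed in this abstract, so the following is only a hypothetical sketch of the stated idea: split the generative network's parameters into fixed-size chunks, save the chunks that look most vulnerable (approximated here by drift between tasks), and replay them as regression targets for the meta-model. The chunk size, the drift criterion, and the meta_model(chunk_ids) interface are all assumptions.

    import torch

    CHUNK = 512  # assumed chunk size

    def to_chunks(model):
        # Flatten all parameters and split them into fixed-size chunks.
        flat = torch.cat([p.detach().flatten() for p in model.parameters()])
        pad = (-flat.numel()) % CHUNK
        return torch.cat([flat, flat.new_zeros(pad)]).view(-1, CHUNK)

    def select_vulnerable(old_chunks, new_chunks, k):
        # Heuristic: chunks that drifted most while learning the new task
        # are treated as the most at risk of being forgotten.
        drift = (new_chunks - old_chunks).abs().mean(dim=1)
        idx = drift.topk(k).indices
        return idx, old_chunks[idx]

    def replay_loss(meta_model, stored_idx, stored_chunks):
        # Replay: the meta-model must still reproduce the saved chunks
        # (meta_model maps chunk ids to chunk values; assumed interface).
        generated = meta_model(stored_idx)
        return torch.nn.functional.mse_loss(generated, stored_chunks)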
Efficient Prompt Learning Method in Blurry Class Incremental Learning Environment
http://doi.org/10.5626/JOK.2024.51.7.655
Continual learning is the process of continuously integrating new knowledge to maintain performance across a sequence of tasks. While disjoint continual learning assumes no overlap between classes across tasks, blurry continual learning addresses more realistic scenarios where such overlaps do exist. Most related work has focused on disjoint scenarios, but recent attention has shifted toward prompt-based continual learning, which uses a prompt mechanism within a Vision Transformer (ViT) model to improve adaptability. In this study, we analyze the effectiveness of a similarity function designed for blurry class incremental learning, applied within a prompt-based continual learning framework. Our experiments demonstrate the success of this method, particularly its superior ability to learn from and interpret blurry data.
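The abstract does not spell out the similarity function, but prompt-based continual learning in the L2P style typically matches a frozen ViT's image embedding (the query) against learnable prompt keys and prepends the best-matching prompts to the token sequence. A minimal sketch of that selection step, with all sizes assumed:

    import torch
    import torch.nn.functional as F

    class PromptPool(torch.nn.Module):
        def __init__(self, pool=10, length=5, dim=768, topk=3):
            super().__init__()
            self.keys = torch.nn.Parameter(torch.randn(pool, dim))
            self.prompts = torch.nn.Parameter(torch.randn(pool, length, dim))
            self.topk = topk

        def forward(self, query):                     # query: (B, dim)
            # Cosine similarity between the image query and each prompt key.
            sim = F.cosine_similarity(query.unsqueeze(1), self.keys, dim=-1)
            idx = sim.topk(self.topk, dim=1).indices  # (B, topk)
            chosen = self.prompts[idx]                # (B, topk, length, dim)
            # Flatten the selected prompts into tokens to prepend to the
            # ViT input; sim can feed an auxiliary key-matching loss.
            return chosen.flatten(1, 2), sim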
Rehearsal with Stored Latent Vectors for Incremental Learning Over GANs
http://doi.org/10.5626/JOK.2023.50.4.351
Unlike humans, deep learning models struggle to learn multiple tasks sequentially. This problem affects not only discriminative models but also generative models such as GANs. Generative Replay, frequently used in GAN continual learning, trains on new tasks together with images produced by the generator from the previous task, but it fails to generate good images for relatively challenging datasets such as CIFAR-10. An alternative is a rehearsal-based method that stores a portion of the real data, but the high dimensionality of real images means only a small number can fit in limited memory. In this paper, we propose LactoGAN and LactoGAN+, continual learning methods that store the latent vectors fed to the GAN rather than storing real images, as existing rehearsal-based approaches do. As a result, far more image knowledge can be stored in the same memory, yielding better results than existing GAN continual learning methods.
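The abstract gives the idea but not the implementation, so here is a minimal sketch of latent-vector rehearsal under assumed interfaces: cache the latent inputs z of the previous task's generator (kept as a frozen copy, G_old) and re-synthesize old-task images on demand. A 128-dimensional float vector is far smaller than even a 32x32x3 image, so the same memory budget holds many more samples.

    import torch

    class LatentReplayBuffer:
        def __init__(self, capacity, z_dim=128):
            self.z = torch.empty(capacity, z_dim)
            self.size, self.capacity = 0, capacity

        def store(self, z_batch):
            # Keep the latent codes used for the previous task's samples.
            n = min(z_batch.size(0), self.capacity - self.size)
            self.z[self.size:self.size + n] = z_batch[:n].detach()
            self.size += n

        def rehearse(self, G_old, batch_size):
            # Regenerate old-task images from stored latents on the fly.
            idx = torch.randint(0, self.size, (batch_size,))
            with torch.no_grad():           # G_old stays frozen
                return G_old(self.z[idx])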
Re-Generation of Models via Generative Adversarial Networks and Bayesian Neural Networks for Task-Incremental Learning
http://doi.org/10.5626/JOK.2022.49.12.1115
In contrast to the human ability of continual learning, deep learning models have considerable difficulty maintaining their original performance when learning a series of incrementally arriving tasks. In this paper, we propose ParameterGAN, a novel task-incremental learning approach based on model synthesis. The proposed method leverages adversarial generative learning to regenerate neural networks whose parameter distribution is similar to that of a pre-trained Bayesian network. Using pseudo-rehearsal, ParameterGAN enables continual learning by regenerating the networks of all previous tasks without catastrophic forgetting. Our experiments showed that the accuracy of the synthetic model composed of regenerated parameters was comparable to that of the pre-trained model, and that the proposed method outperformed other SOTA methods on the popular task-incremental learning benchmarks Split-MNIST and Permuted-MNIST.
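The abstract leaves the architecture open, so the following is only an illustrative sketch of the stated idea: a generator learns the parameter distribution of a pre-trained Bayesian network (whose posterior can be sampled), a discriminator separates sampled parameter vectors from generated ones, and regenerated vectors are later reshaped back into a task network. The layer sizes, the hinge loss, and sample_bayes_params are assumptions.

    import torch
    import torch.nn as nn

    P = 4_000  # assumed flattened parameter count of the task network

    G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, P))
    D = nn.Sequential(nn.Linear(P, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    def gan_step(sample_bayes_params, opt_g, opt_d, batch=32):
        # "Real" data are parameter vectors drawn from the Bayesian
        # network's posterior (sampler assumed to be provided).
        real = sample_bayes_params(batch)               # (batch, P)
        fake = G(torch.randn(batch, 64))
        # Discriminator: real parameter vectors vs. generated ones.
        loss_d = (torch.relu(1 - D(real))
                  + torch.relu(1 + D(fake.detach()))).mean()
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator: produce parameter vectors the discriminator accepts.
        loss_g = (-D(G(torch.randn(batch, 64)))).mean()
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()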