Search: [ author: 윤세영 (Se-Young Yun) ] (3)

Model Contrastive Federated Learning on Re-Identification

Seongyoon Kim, Woojin Chung, Sungwoo Cho, Yongjin Yang, Shinhyeok Hwang, Se-Young Yun

http://doi.org/10.5626/JOK.2024.51.9.841

Advances in data collection and computing power have dramatically increased the integration of AI technology into various services. Traditional centralized cloud data processing raises concerns over the exposure of sensitive user data. To address these issues, federated learning (FL) has emerged as a decentralized training method in which clients train models locally on their own data and send the locally updated models to a central server. The central server aggregates these locally updated models to improve a global model without directly accessing local data, thereby enhancing data privacy. This paper presents FedCON, a novel FL framework specifically designed for re-identification (Re-ID) tasks across various domains. FedCON integrates contrastive learning with FL to enhance feature representation, which is crucial for Re-ID tasks that rely on similarity between feature vectors to match identities across different images. By focusing on feature similarity, FedCON effectively addresses data heterogeneity challenges and improves the global model's performance in Re-ID applications. Empirical studies on person and vehicle Re-ID datasets demonstrated that FedCON outperformed existing FL methods for Re-ID. Our experiments with FedCON on various CCTV datasets for person Re-ID showed superior performance to several baselines. Additionally, FedCON significantly enhanced vehicle Re-ID performance on real-world datasets such as VeRi-776 and VRIC, demonstrating its practical applicability.
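The abstract does not specify FedCON's aggregation rule, but the server-side step it builds on (clients send locally updated parameters, the server averages them without seeing local data) can be sketched in a minimal FedAvg-style form; all names here are illustrative, not the paper's implementation:

```python
# Hypothetical sketch of the server-side aggregation step in federated
# learning: average client parameter vectors, weighted by local dataset size.

def aggregate(client_weights, client_sizes):
    """Weighted average of client parameter vectors by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_weights[i] += (n / total) * w[i]
    return global_weights

# Two clients with unequal data: the larger client pulls the average toward it.
w_global = aggregate([[1.0, 2.0], [3.0, 6.0]], client_sizes=[1, 3])  # [2.5, 5.0]
```

FedCON's contribution then lies in the local objective (a contrastive feature-similarity loss), which shapes the updates each client sends into this aggregation.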

Risk Scheduling-based Optimistic Exploration for Distributional Reinforcement Learning

Jihwan Oh, Joonkee Kim, Se-Young Yun

http://doi.org/10.5626/JOK.2023.50.2.172

Distributional reinforcement learning achieves state-of-the-art performance in continuous and discrete control by modeling the full return distribution, whose variance and risk properties can be used to explore the action space. However, while many exploration methods in distributional RL employ the variance of an action's return distribution, exploration methods that exploit the risk property remain scarce. This paper presents risk scheduling approaches that explore over risk levels and induce optimistic behavior from a risk perspective in distributional reinforcement learning. Through comprehensive experiments in a multi-agent setting, we demonstrate that risk scheduling improves the performance (win rate) of DMIX, DDN, and DIQL, algorithms that integrate distributional reinforcement learning into multi-agent systems.
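The abstract does not give the exact schedule, but the idea of scheduling a risk level over training can be sketched as follows: start with a risk-seeking (optimistic) criterion that scores an action by only the upper quantiles of its return distribution, and anneal toward the risk-neutral mean. The schedule shape and parameters below are assumptions for illustration:

```python
# Illustrative sketch (not the paper's exact schedule): anneal a risk level
# from risk-seeking toward risk-neutral, scoring actions by the mean of the
# top-alpha fraction of their return-distribution quantiles.

def risk_level(step, total_steps, start=0.25, end=1.0):
    """Fraction of upper quantiles used; 1.0 recovers the plain mean."""
    frac = min(step / total_steps, 1.0)
    return start + (end - start) * frac

def optimistic_value(quantiles, alpha):
    """Mean of the top-alpha fraction of sorted return quantiles."""
    k = max(1, int(round(alpha * len(quantiles))))
    top = sorted(quantiles)[-k:]
    return sum(top) / k

q = [0.0, 1.0, 2.0, 3.0]  # toy quantile estimates for one action
early = optimistic_value(q, risk_level(step=0, total_steps=100))    # 3.0 (optimistic)
late = optimistic_value(q, risk_level(step=100, total_steps=100))   # 1.5 (plain mean)
```

Early in training the optimistic score encourages trying actions whose return distributions have promising upper tails; by the end, action selection reduces to the standard risk-neutral value.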

CoEM: Contrastive Embedding Mapper for Audio-visual Latents

Gihun Lee, Kyungchae Lee, Minchan Jeong, Myungjin Lee, Se-Young Yun, Chan-Hyun Yun

http://doi.org/10.5626/JOK.2023.50.1.80

Human perception can link audio and visual information, making it possible to recall visual information from audio and vice versa. This ability is naturally acquired by experiencing situations where the two kinds of information occur together. However, it is hard to obtain video datasets that richly combine both modalities while also being labeled for the semantics of each scene. This paper proposes the Contrastive Embedding Mapper (CoEM), which maps an embedding from one modality to the other according to its category. Paired data is not required: CoEM learns by contrasting the mapped embeddings according to their categories. We validated the efficacy of CoEM on embeddings from audio and visual datasets trained to classify 20 shared categories. In experiments, embeddings mapped by CoEM were capable of retrieving and generating data in the mapped domain.
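The category-level contrastive idea above can be sketched with an InfoNCE-style loss (an assumed formulation, not the authors' exact objective): a mapped audio embedding is pulled toward visual embeddings sharing its category and pushed away from the rest, so only shared category labels, not audio-visual pairs, are needed:

```python
# Minimal sketch of category-supervised contrastive matching: same-category
# visual embeddings serve as positives for a mapped audio embedding.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(mapped, visual_bank, labels, target_label, tau=0.1):
    """InfoNCE-style loss over a bank of visual embeddings with labels."""
    sims = [math.exp(cosine(mapped, v) / tau) for v in visual_bank]
    pos = sum(s for s, lbl in zip(sims, labels) if lbl == target_label)
    return -math.log(pos / sum(sims))

# A mapped embedding near its own category's visual embedding gets low loss.
bank = [[1.0, 0.0], [0.0, 1.0]]
loss_match = contrastive_loss([0.9, 0.1], bank, ["dog", "car"], "dog")
loss_mismatch = contrastive_loss([0.9, 0.1], bank, ["dog", "car"], "car")
```

Minimizing this loss over the mapper's parameters aligns the two embedding spaces at the category level, which is what enables cross-modal retrieval without paired supervision.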


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr