Search : [ keyword: Federated learning ] (3)

Model Contrastive Federated Learning on Re-Identification

Seongyoon Kim, Woojin Chung, Sungwoo Cho, Yongjin Yang, Shinhyeok Hwang, Se-Young Yun

http://doi.org/10.5626/JOK.2024.51.9.841

Advances in data collection and computing power have dramatically increased the integration of AI technology into various services. Traditional centralized cloud data processing raises concerns over the exposure of sensitive user data. To address these issues, federated learning (FL) has emerged as a decentralized training method in which clients train models locally on their own data and send the locally updated models to a central server. The central server aggregates these locally updated models to improve a global model without directly accessing local data, thereby enhancing data privacy. This paper presents FedCON, a novel FL framework specifically designed for re-identification (Re-ID) tasks across various domains. FedCON integrates contrastive learning with FL to enhance feature representation, which is crucial for Re-ID tasks that rely on similarity between feature vectors to match identities across different images. By focusing on feature similarity, FedCON effectively addresses data heterogeneity challenges and improves the global model's performance in Re-ID applications. Empirical studies on person and vehicle Re-ID datasets demonstrated that FedCON outperformed existing FL methods for Re-ID. Our experiments with FedCON on various CCTV datasets for person Re-ID showed performance superior to several baselines. Additionally, FedCON significantly enhanced vehicle Re-ID performance on real-world datasets such as VeRi-776 and VRIC, demonstrating its practical applicability.
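The basic FL workflow the abstract describes, local training on each client followed by server-side aggregation without access to raw data, can be sketched as follows. This is a minimal illustration using a toy least-squares model and plain federated averaging, not the authors' FedCON code; all names and the objective are assumptions for demonstration.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One client's local training step on its own data.
    A toy gradient step on a least-squares objective stands in
    for the client's actual model training."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def server_aggregate(global_w, client_data):
    """Server-side aggregation: average the locally updated models
    without ever touching the clients' raw data."""
    local_ws = [local_update(global_w, d) for d in client_data]
    return np.mean(local_ws, axis=0)

# Simulated clients, each holding its own private dataset.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(100):          # federated rounds
    w = server_aggregate(w, clients)
```

After enough rounds the global model approaches the weights that fit all clients' data jointly, even though the server only ever sees model parameters.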

R-FLHE: Robust Federated Learning Framework Against Untargeted Model Poisoning Attacks in Hierarchical Edge Computing

Jeehu Kim, Jaewoo Lee

http://doi.org/10.5626/JOK.2023.50.1.94

Federated learning is a server-client distributed learning strategy that collects only trained models, guaranteeing data privacy and reducing communication costs. Recently, research has been conducted to prepare for the future IoT ecosystem by combining edge computing and federated learning. However, research considering its vulnerabilities and threats remains insufficient. In this paper, we propose Robust Federated Learning in Hierarchical Edge computing (R-FLHE), a federated learning framework that keeps the global model robust against untargeted model poisoning attacks. R-FLHE aggregates the models learned by clients, evaluates them on each edge server, and scores them based on the computed model loss. R-FLHE maintains the robustness of the global model by sending only the model of the edge server with the best score to the cloud server. The proposed R-FLHE shows robustness by maintaining consistent performance across federated learning rounds, with an average performance drop of only 0.81% and 1.88% even when attacks occur.
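The selection step the abstract describes, scoring each edge server's aggregated model by its evaluation loss and forwarding only the best one to the cloud, can be sketched as below. This is a simplified illustration under assumed names and a toy squared-error loss, not the authors' R-FLHE implementation.

```python
import numpy as np

def model_loss(weights, X, y):
    """Evaluation loss of an aggregated model on held-out edge data."""
    return float(np.mean((X @ weights - y) ** 2))

def select_best_edge(edge_models, eval_data):
    """Score each edge server's aggregated model by its loss and
    forward only the best-scoring one to the cloud server, so an
    aggregate corrupted by model poisoning is filtered out."""
    X, y = eval_data
    losses = [model_loss(w, X, y) for w in edge_models]
    best = int(np.argmin(losses))
    return edge_models[best], losses

rng = np.random.default_rng(1)
true_w = np.array([1.0, 3.0])
X = rng.normal(size=(100, 2))
y = X @ true_w

honest = true_w + rng.normal(scale=0.05, size=2)  # benign edge aggregate
poisoned = -true_w   # untargeted poisoning: weights pushed away from optimum
chosen, losses = select_best_edge([poisoned, honest], (X, y))
```

Because the poisoned aggregate produces a much larger evaluation loss, only the honest edge model reaches the cloud server, which is how the framework keeps the global model's performance stable across rounds.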

FedGC: Global Consistency Regularization for Federated Semi-supervised Learning

Gubon Jeong, Dong-Wan Choi

http://doi.org/10.5626/JOK.2022.49.12.1108

Recently, in the field of artificial intelligence, methods for training neural network models in distributed environments with sufficient data and hardware have been actively studied. Among them, federated learning, which preserves privacy without sharing data, has become a dominant scheme. However, existing federated learning methods assume supervised learning using only labeled data. Since labeling incurs costs, the assumption that clients hold only labeled data is unrealistic. Therefore, this study proposes a federated semi-supervised learning method that uses both labeled and unlabeled data, considering the more realistic situation where labeled data exists only on the server and unlabeled data only on the clients. We designed a loss function based on consistency regularization between the output distributions of the server and client models and analyzed how to adjust the influence of the consistency regularization. The proposed method improved the performance of existing semi-supervised learning methods in federated learning settings, and through additional experiments, we analyzed the influence of the loss term and verified the validity of the proposed method.
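The loss design the abstract describes, a consistency term between the output distributions of the server and client models with an adjustable weight, can be sketched as below. This is an illustrative formulation using a KL divergence between softmax outputs; the exact divergence and weighting scheme of FedGC may differ, and all names here are assumptions.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over class logits."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(server_logits, client_logits):
    """KL(server || client): penalizes disagreement between the server
    model's output distribution and the client model's output
    distribution on unlabeled client data."""
    p = softmax(server_logits)
    q = softmax(client_logits)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

def total_loss(supervised_loss, cons_loss, lam):
    """Overall objective: supervised loss on the server's labeled data
    plus a lambda-weighted consistency term; lam controls how strongly
    the regularization influences training."""
    return supervised_loss + lam * cons_loss
```

When the two models agree the consistency term vanishes, and `lam` lets one tune how strongly the unlabeled-data regularization pulls the client model toward the server model's predictions.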



Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr