Authors: Yongsheng Zhu, Chong Liu, Chunlei Chen, Xiaoting Lyu, Zheng Chen, Bin Wang, Fuqiang Hu, Hanxi Li, Jiao Dai, Baigen Cai, Wei Wang
Affiliations: [1] School of Automation and Intelligence, Beijing Jiaotong University, Beijing 100044, China; [2] Institute of Computing Technologies, China Academy of Railway Sciences Corporation Limited, Beijing 100081, China; [3] School of Computer Science and Technology, Beijing Jiaotong University, Beijing 100044, China; [4] Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, Beijing 100044, China; [5] Institute of Infrastructure Inspection, China Academy of Railway Sciences Corporation Limited, Beijing 100081, China; [6] Zhejiang Key Laboratory of Multi-Dimensional Perception Technology, Application and Cybersecurity, Hangzhou 310053, China
Source: Computer Modeling in Engineering & Sciences, 2024, No. 11, pp. 1305-1325 (21 pages)
Funding: Supported by the Systematic Major Project of China State Railway Group Corporation Limited (Grant Number: P2023W002).
Abstract: The development of Intelligent Railway Transportation Systems necessitates incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency. Federated learning offers a promising solution by allowing multiple clients to train models collaboratively without sharing private data. However, despite its privacy benefits, federated learning is vulnerable to poisoning attacks, in which adversaries alter local model parameters on compromised clients and send malicious updates to the server, potentially degrading the global model's accuracy. In this study, we introduce PMM (Perturbation coefficient Multiplied by Maximum value), a new poisoning attack that perturbs model updates layer by layer, demonstrating the threat that poisoning attacks pose to federated learning. Extensive experiments on three distinct datasets show that PMM significantly reduces the global model's accuracy. We also propose an effective defense method, CLBL (Cluster Layer By Layer); experimental results on the same three datasets confirm its effectiveness.
Keywords: privacy-preserving; intelligent railway transportation system; federated learning; poisoning attacks; defenses
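The abstract names two layer-wise mechanisms but does not spell out their algorithms. The Python sketch below is a minimal, assumption-laden illustration of one plausible reading: a PMM-style attacker that, for each layer, scales the maximum-magnitude entries observed across benign updates by a perturbation coefficient, and a CLBL-style server that clusters client updates layer by layer and aggregates only the majority cluster. The function names (pmm_attack, clbl_aggregate), the sign flip, the coefficient value, and the norm-based two-way split are all illustrative assumptions, not the paper's actual method.

# Hypothetical sketch of a PMM-style layer-wise perturbation attack and a
# CLBL-style layer-wise clustering defense; details are assumed, not from the paper.
import numpy as np

def pmm_attack(benign_updates, perturbation_coeff=10.0):
    """Craft a malicious update: per layer, flip and scale the element-wise
    maximum magnitude across benign updates by a perturbation coefficient (assumed form)."""
    malicious = []
    num_layers = len(benign_updates[0])
    for layer_idx in range(num_layers):
        layer_stack = np.stack([u[layer_idx] for u in benign_updates])
        max_mag = np.max(np.abs(layer_stack), axis=0)
        malicious.append(-perturbation_coeff * max_mag)
    return malicious

def clbl_aggregate(updates):
    """Cluster client updates layer by layer (a simple two-way split on layer norms
    stands in for the paper's clustering) and average only the majority cluster."""
    num_layers = len(updates[0])
    aggregated = []
    for layer_idx in range(num_layers):
        layers = np.stack([u[layer_idx] for u in updates])            # (clients, ...)
        norms = np.linalg.norm(layers.reshape(len(layers), -1), axis=1)
        threshold = (norms.min() + norms.max()) / 2.0                 # split at midpoint
        low, high = norms <= threshold, norms > threshold
        keep = low if low.sum() >= high.sum() else high               # keep majority cluster
        aggregated.append(layers[keep].mean(axis=0))
    return aggregated

# Toy usage: 5 benign clients with a 2-layer model, plus one PMM-style attacker.
rng = np.random.default_rng(0)
benign = [[rng.normal(size=(4, 4)), rng.normal(size=(4,))] for _ in range(5)]
attacker = pmm_attack(benign, perturbation_coeff=10.0)
global_update = clbl_aggregate(benign + [attacker])
print([layer.shape for layer in global_update])

In this toy run the attacker's layer norms are far larger than the benign ones, so the layer-wise majority cluster excludes it; the paper's actual perturbation rule and clustering criterion may differ.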