Privacy-Preserving Large-Scale AI Models for Intelligent Railway Transportation Systems: Hierarchical Poisoning Attacks and Defenses in Federated Learning


Authors: Yongsheng Zhu, Chong Liu, Chunlei Chen, Xiaoting Lyu, Zheng Chen, Bin Wang, Fuqiang Hu, Hanxi Li, Jiao Dai, Baigen Cai, Wei Wang

Affiliations:
[1] School of Automation and Intelligence, Beijing Jiaotong University, Beijing 100044, China
[2] Institute of Computing Technologies, China Academy of Railway Sciences Corporation Limited, Beijing 100081, China
[3] School of Computer Science and Technology, Beijing Jiaotong University, Beijing 100044, China
[4] Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, Beijing 100044, China
[5] Institute of Infrastructure Inspection, China Academy of Railway Sciences Corporation Limited, Beijing 100081, China
[6] Zhejiang Key Laboratory of Multi-Dimensional Perception Technology, Application and Cybersecurity, Hangzhou 310053, China

Source: Computer Modeling in Engineering & Sciences, 2024, No. 11, pp. 1305-1325 (21 pages)

Funding: Supported by the Systematic Major Project of China State Railway Group Corporation Limited (Grant Number: P2023W002).

Abstract: The development of intelligent railway transportation systems calls for incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency. Federated learning offers a promising solution by allowing multiple clients to train models collaboratively without sharing their private data. Despite these privacy benefits, however, federated learning systems are vulnerable to poisoning attacks, in which adversaries alter local model parameters on compromised clients and send malicious updates to the server, potentially degrading the global model's accuracy. In this study, we introduce PMM (Perturbation coefficient Multiplied by Maximum value), a new poisoning attack that perturbs model updates layer by layer, demonstrating the threat that poisoning attacks pose to federated learning. Extensive experiments on three distinct datasets show that PMM significantly reduces the global model's accuracy. We also propose an effective defense, CLBL (Cluster Layer By Layer), whose effectiveness is confirmed by experiments on the same three datasets.
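To make the layer-by-layer idea concrete, below is a minimal Python sketch of an attack and defense in the spirit of the abstract. The paper's exact PMM and CLBL procedures are not given here, so every detail (scaling each layer's element-wise maximum by a perturbation coefficient, two-way clustering of per-layer updates, averaging the majority cluster) is an illustrative assumption rather than the authors' method.

# Hedged sketch only: pmm_attack and clbl_aggregate are hypothetical names and
# assumed formulations inspired by the abstract, not the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans

def pmm_attack(benign_updates, alpha=-3.0):
    """Craft a malicious update layer by layer: for each layer, scale the
    element-wise maximum magnitude across benign updates by a perturbation
    coefficient alpha (assumed form)."""
    n_layers = len(benign_updates[0])
    malicious = []
    for l in range(n_layers):
        layer_stack = np.stack([u[l] for u in benign_updates])  # (n_clients, ...)
        layer_max = np.max(np.abs(layer_stack), axis=0)         # per-parameter max
        malicious.append(alpha * layer_max)                     # perturbed layer
    return malicious

def clbl_aggregate(updates):
    """Cluster Layer By Layer (assumed form): for each layer, cluster the
    clients' flattened layer updates into two groups and average only the
    larger cluster, treating the smaller one as potentially poisoned."""
    n_layers = len(updates[0])
    aggregated = []
    for l in range(n_layers):
        layer_stack = np.stack([u[l].ravel() for u in updates])  # (n_clients, d)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(layer_stack)
        majority = np.argmax(np.bincount(labels))                # keep larger cluster
        kept = layer_stack[labels == majority].mean(axis=0)
        aggregated.append(kept.reshape(updates[0][l].shape))
    return aggregated

# Toy usage: 8 benign clients with a 2-layer model, plus 2 PMM attackers.
rng = np.random.default_rng(0)
benign = [[rng.normal(size=(4, 3)), rng.normal(size=(3,))] for _ in range(8)]
attack = pmm_attack(benign)
global_update = clbl_aggregate(benign + [attack, attack])
print([g.shape for g in global_update])

Working per layer, as sketched above, lets both the attacker and the defender exploit the fact that different layers of a neural network have very different update magnitudes, which a single global threshold or a single global clustering step would blur together.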

Keywords: privacy-preserving; intelligent railway transportation system; federated learning; poisoning attacks; defenses

Classification codes: TP18 (Automation and Computer Technology: Control Theory and Control Engineering); TP309 (Automation and Computer Technology: Control Science and Engineering)

 
