Defense against local model poisoning attacks to byzantine-robust federated learning  (Cited by: 3)

Authors: Shiwei LU, Ruihu LI, Xuan CHEN, Yuena MA

Affiliation: [1] Department of Basic Sciences, Air Force Engineering University, Xi'an 710051, China

Published in: Frontiers of Computer Science, 2022, No. 6, pp. 171-173 (3 pages)

Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11901579 and 11801564).

Abstract: 1 Introduction. As a new mode of distributed learning, Federated Learning (FL) helps multiple organizations or clients to jointly train an artificial intelligence model without sharing their own datasets. Compared with a model trained by each client alone, a high-accuracy federated model can be obtained after multiple communication rounds in FL. Owing to its privacy protection and distributed-learning characteristics, FL has been applied in many fields, such as the prognosis of pandemic diseases and smart manufacturing systems.
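The abstract describes the standard FL round (clients send updates, the server aggregates them), and the title concerns making that aggregation robust to poisoned updates. The sketch below is illustrative only, not the paper's method: it contrasts plain federated averaging with a coordinate-wise median, one classic Byzantine-robust aggregator; all function names and the toy updates are invented for this example.

```python
import numpy as np

def fedavg(updates, weights=None):
    """Weighted average of client updates (standard FedAvg aggregation)."""
    updates = np.asarray(updates, dtype=float)
    if weights is None:
        weights = np.ones(len(updates)) / len(updates)
    return np.average(updates, axis=0, weights=weights)

def coordinate_median(updates):
    """Coordinate-wise median: a classic Byzantine-robust aggregator."""
    return np.median(np.asarray(updates, dtype=float), axis=0)

# Three honest clients near the true update, plus one poisoned update.
honest = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([0.9, 1.9])]
poisoned = np.array([100.0, -100.0])
all_updates = honest + [poisoned]

print(fedavg(all_updates))            # skewed far from the honest consensus
print(coordinate_median(all_updates))  # stays close to the honest consensus
```

With these numbers, plain averaging is dragged to roughly (25.75, -23.5) by the single poisoned client, while the coordinate-wise median stays near (1.05, 1.95), which is why Byzantine-robust rules replace the plain mean in the setting the paper studies.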

Keywords: CLIENT; jointly; model

Classification: TP181 [Automation and Computer Technology / Control Theory and Control Engineering]

 
