Byzantine-robust distributed support vector machine  

Authors: Xiaozhou Wang, Weidong Liu, Xiaojun Mao

Affiliations: [1] School of Statistics, East China Normal University, Shanghai 200062, China; [2] Key Laboratory of Advanced Theory and Application in Statistics and Data Science (Ministry of Education), East China Normal University, Shanghai 200062, China; [3] School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China; [4] Key Laboratory of Artificial Intelligence (Ministry of Education), Shanghai Jiao Tong University, Shanghai 200240, China; [5] Key Laboratory of Scientific and Engineering Computing (Ministry of Education), Shanghai Jiao Tong University, Shanghai 200240, China

Source: Science China Mathematics, 2025, No. 3, pp. 707-728 (22 pages)

Funding: supported by the National Natural Science Foundation of China (Grant Nos. 12101240, 11825104, 12371273, and 12001109); the Chenguang Program of the Shanghai Education Development Foundation and Shanghai Municipal Education Commission (Grant No. 20CG29); the Shanghai Sailing Program (Grant No. 21YF1410500); and the Shanghai Rising-Star Program (Grant No. 23QA1404600).

Abstract: The development of information technology brings diversified data sources and large-scale data sets, and calls for the exploration of distributed learning algorithms. In distributed systems, some local machines may behave abnormally and send arbitrary information to the central machine (known as Byzantine failures), which can invalidate distributed algorithms built on the assumption of a faultless system. This paper studies Byzantine-robust distributed algorithms for support vector machines (SVMs) in the context of binary classification. Despite a vast literature on Byzantine problems, much less is known about the theoretical properties of Byzantine-robust SVMs because of their unique challenges. In this paper, we propose two distributed gradient descent algorithms for SVMs, in which median and trimmed mean operations in the aggregation step effectively defend against Byzantine failures. Theoretically, we show the convergence of the proposed estimators and provide their statistical error rates; after a certain number of iterations, our estimators achieve near-optimal rates. Simulation studies and real data analysis demonstrate the performance of the proposed Byzantine-robust distributed algorithms.
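To make the aggregation idea in the abstract concrete, below is a minimal Python sketch of distributed gradient descent for a linear SVM in which the central machine combines worker gradients by a coordinate-wise median or trimmed mean rather than a plain average. This is only an illustration, not the authors' published algorithm: the hinge-loss subgradient, the diminishing step size, the trimming fraction beta, and the names local_svm_gradient and robust_distributed_svm are assumptions made for this sketch.

```python
import numpy as np

def local_svm_gradient(w, X, y, lam):
    """Subgradient on one machine of lam/2 * ||w||^2 + mean_i max(0, 1 - y_i <x_i, w>)."""
    margins = y * (X @ w)
    active = margins < 1.0                              # samples violating the margin
    grad = -(y[active, None] * X[active]).sum(axis=0) / X.shape[0]
    return lam * w + grad

def coordinate_wise_median(grads):
    """Robust aggregation: median of each coordinate across machines."""
    return np.median(np.stack(grads), axis=0)

def coordinate_wise_trimmed_mean(grads, beta):
    """Drop the beta-fraction largest and smallest values per coordinate, then average."""
    G = np.sort(np.stack(grads), axis=0)
    k = int(np.floor(beta * G.shape[0]))
    return G[k:G.shape[0] - k].mean(axis=0)

def robust_distributed_svm(data_parts, lam=0.01, step=0.5, n_iter=200,
                           aggregate="median", beta=0.1):
    """Gradient descent where the center aggregates worker gradients robustly."""
    d = data_parts[0][0].shape[1]
    w = np.zeros(d)
    for t in range(n_iter):
        grads = [local_svm_gradient(w, X, y, lam) for X, y in data_parts]
        # a Byzantine machine could replace its entry of `grads` with anything
        if aggregate == "median":
            g = coordinate_wise_median(grads)
        else:
            g = coordinate_wise_trimmed_mean(grads, beta)
        w -= step / np.sqrt(t + 1) * g                  # diminishing step size
    return w
```

As a rough design trade-off, the coordinate-wise median needs no tuning and tolerates close to half of the machines being Byzantine, while the trimmed mean requires choosing the fraction beta but averages more of the honest gradients when the corruption level is small.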

Keywords: Byzantine robustness; convergence; distributed learning; support vector machine

Classification: TP181 [Automation and Computer Technology: Control Theory and Control Engineering]

 
