Author: FU Gang [1] (Department of Special Education, Fuzhou Polytechnic, Fuzhou 350108, China)
Affiliation: [1] Department of Special Education, Fuzhou Polytechnic, Fuzhou 350108, China
Source: Computer Systems & Applications, 2024, No. 5, pp. 228-238 (11 pages)
Abstract: Selecting appropriate optimizers for a federated learning environment is an effective way to improve model performance, especially in situations where the data is highly heterogeneous. In this study, the FedAvg and FedALA algorithms are mainly investigated, and an improved version called pFedALA is proposed. pFedALA effectively reduces the resource waste caused by synchronization demands by allowing clients to continue local training during waiting periods. On this basis, the roles of the optimizers in these three algorithms are analyzed in detail, and the performance of various optimizers such as stochastic gradient descent (SGD), Adam, averaged SGD (ASGD), and AdaGrad in handling non-independent and identically distributed (Non-IID) and imbalanced data is compared by testing them on the MNIST and CIFAR-10 datasets. Special attention is given to practical heterogeneity based on the Dirichlet distribution as well as extreme heterogeneity in the data settings. The experimental results suggest the following observations: 1) The pFedALA algorithm outperforms the FedALA algorithm, with an average test accuracy approximately 1% higher than that of FedALA; 2) Optimizers commonly used in traditional single-machine deep learning environments deliver significantly different performance in a federated learning environment. Compared with other mainstream optimizers, the SGD, ASGD, and AdaGrad optimizers appear to be more adaptable and robust in the federated learning environment.
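The sketch below is an illustrative reconstruction of the experimental setup described in the abstract, not the paper's actual code: it partitions labels across clients with a Dirichlet distribution (smaller alpha means stronger Non-IID heterogeneity) and runs one FedAvg-style round in which the local optimizer (SGD, Adam, ASGD, or AdaGrad) can be swapped. The function names (dirichlet_partition, local_update, fedavg_round) and all hyperparameters are assumptions, the averaging is unweighted for brevity, and pFedALA's continued local training during synchronization waits is not modeled here.

```python
# Minimal sketch (assumed names and settings), not the authors' implementation.
import copy
import numpy as np
import torch
import torch.nn as nn

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Assign sample indices to clients with per-class proportions drawn
    from Dir(alpha); smaller alpha yields a more heterogeneous (Non-IID) split."""
    rng = np.random.default_rng(seed)
    parts = [[] for _ in range(num_clients)]
    for c in range(int(labels.max()) + 1):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        cuts = (np.cumsum(rng.dirichlet(alpha * np.ones(num_clients)))[:-1]
                * len(idx)).astype(int)
        for cid, chunk in enumerate(np.split(idx, cuts)):
            parts[cid].extend(chunk.tolist())
    return parts

# The four optimizer families compared in the abstract, as provided by PyTorch;
# learning rates here are placeholders, not the paper's tuned values.
OPTIMIZERS = {
    "sgd":     lambda p: torch.optim.SGD(p, lr=0.01),
    "adam":    lambda p: torch.optim.Adam(p, lr=0.001),
    "asgd":    lambda p: torch.optim.ASGD(p, lr=0.01),
    "adagrad": lambda p: torch.optim.Adagrad(p, lr=0.01),
}

def local_update(global_model, data, target, opt_name, epochs=1):
    """One client's local training pass starting from the global weights."""
    model = copy.deepcopy(global_model)
    opt = OPTIMIZERS[opt_name](model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()
    return model.state_dict()

def fedavg_round(global_model, client_batches, opt_name):
    """Average the clients' updated weights into the global model.
    (Unweighted mean for brevity; FedAvg proper weights by sample counts.)"""
    states = [local_update(global_model, x, y, opt_name) for x, y in client_batches]
    avg = {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

if __name__ == "__main__":
    # Toy data standing in for MNIST: 28*28 inputs, 10 classes, 4 clients.
    labels = np.random.default_rng(0).integers(0, 10, size=2000)
    parts = dirichlet_partition(labels, num_clients=4, alpha=0.1)
    xs, ys = torch.randn(2000, 784), torch.tensor(labels)
    batches = [(xs[p], ys[p]) for p in parts if len(p) > 0]
    model = fedavg_round(nn.Linear(784, 10), batches, opt_name="sgd")
    print("one FedAvg round over", sum(len(p) for p in parts), "samples")
```

Swapping `opt_name` among "sgd", "adam", "asgd", and "adagrad" reproduces the kind of optimizer comparison the abstract reports, with model accuracy then evaluated per round on a held-out test set.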
Classification: TP181 [Automation and Computer Technology - Control Theory and Control Engineering]; TP391.41 [Automation and Computer Technology - Control Science and Engineering]