Authors: CHEN Xuebin; REN Zhiqiang (College of Science / Hebei Key Laboratory of Data Science and Application / Tangshan Key Laboratory of Data Science, North China University of Science and Technology, Tangshan 063210, China)
Source: Journal of Nanjing University of Information Science & Technology, 2024, No. 4, pp. 513-519 (7 pages)
Funding: National Natural Science Foundation of China (U20A20179).
Abstract: Federated learning is an important method for addressing two critical challenges in machine learning: data sharing and privacy protection. However, federated learning itself faces challenges of data heterogeneity and model heterogeneity. Existing research often focuses on one of these issues while overlooking the correlation between them. To address this, this paper introduces a framework named PFKD (Personalized Federated learning based on Knowledge Distillation). The framework uses knowledge distillation to address model heterogeneity and personalized algorithms to tackle data heterogeneity, thereby achieving more personalized federated learning. Experimental analysis validates the effectiveness of the proposed framework: it overcomes the model's performance bottleneck and improves model accuracy by approximately one percentage point, and with appropriate hyperparameter tuning its performance improves further.
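The abstract names knowledge distillation as the mechanism for handling model heterogeneity (each client can run a different architecture, exchanging soft predictions rather than weights). The paper's own loss is not given here; below is a minimal, generic sketch of the temperature-scaled distillation loss commonly used for this purpose, assuming the standard Hinton-style KL formulation. All function names and the temperature value are illustrative, not taken from PFKD.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T produces a softer distribution,
    # exposing more of the teacher's "dark knowledge" about non-target classes.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across temperatures.
    p = softmax(teacher_logits, T)   # soft targets from the teacher model
    q = softmax(student_logits, T)   # student's softened predictions
    eps = 1e-12                      # guard against log(0)
    return T * T * float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))
```

The loss is zero when the student's logits match the teacher's and grows as the distributions diverge, which is what lets heterogeneous client models align on predictions without sharing parameters.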