Authors: YAO Guangle; ZHU Juntao[2]; ZHOU Wenlong; ZHANG Guiyu[1,3]; ZHANG Wei; ZHANG Qian
Affiliations: [1] Artificial Intelligence Key Laboratory of Sichuan Province, Yibin, Sichuan 643000, China; [2] School of Computer and Network Security, Chengdu University of Technology, Chengdu 610059, China; [3] School of Automation & Information Engineering, Sichuan University of Science & Engineering, Yibin, Sichuan 643000, China; [4] School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; [5] Science and Technology on Electronic Information Control Laboratory, Chengdu 610036, China
Source: Computer Engineering and Applications, 2023, No. 14, pp. 151-157 (7 pages)
Funding: Open Fund of the Artificial Intelligence Key Laboratory of Sichuan Province (2020RYJ03); National Natural Science Foundation of China (U20B2070); Sichuan Province Key R&D Program (2021YFS0313, 2021YJ0086)
Abstract: This paper focuses on a very challenging problem: few-shot class-incremental learning for deep neural networks, where the deep neural network model can gradually learn new knowledge from a small number of samples without forgetting the knowledge already learned. To balance the model's memory of old knowledge and its learning of new knowledge, it proposes a few-shot class-incremental learning method based on feature distribution learning. First, it trains the model on the base classes to obtain a well-performing feature extractor and takes each class's feature distribution information as the learned knowledge. Then, it maps the learned knowledge together with the features of the novel classes into a new low-dimensional subspace, so that old knowledge is reviewed and new knowledge is learned in a unified way. Finally, within the subspace, it also generates a classification weight initialization for each novel class to improve the model's adaptability to novel classes. Extensive experiments show that the method can effectively alleviate the model's forgetting of learned knowledge while improving its adaptability to new knowledge.
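The pipeline the abstract describes can be sketched in code. The following is an illustrative toy, not the paper's implementation: each learned class is summarized by its feature mean (a stand-in for the paper's feature distribution information), old-class summaries and novel-class features are projected into a shared low-dimensional subspace (here a simple PCA-style projection substitutes for the learned mapping), and each novel class's classifier weight is initialized from its projected class mean. All function names and dimensions below are assumptions for illustration.

```python
# Illustrative sketch of few-shot class-incremental learning with
# feature-distribution summaries; NOT the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def class_distributions(features, labels):
    # Summarize learned knowledge as per-class feature means.
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def fit_subspace(vectors, dim):
    # PCA-style projection (stand-in for the paper's learned mapping):
    # top `dim` right-singular vectors of the centered data.
    centered = vectors - vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:dim]  # rows form the subspace basis

def init_novel_weights(novel_feats, novel_labels, basis):
    # Classifier-weight initialization: projected mean of each novel class.
    means = class_distributions(novel_feats, novel_labels)
    return {c: basis @ m for c, m in means.items()}

# Toy run: 64-d features, base classes 0-2, novel classes 3-4, 8-d subspace.
base_feats = rng.normal(size=(60, 64))
base_labels = np.repeat([0, 1, 2], 20)
novel_feats = rng.normal(size=(10, 64))
novel_labels = np.repeat([3, 4], 5)

# Old knowledge (base-class means) and novel features share one subspace,
# so reviewing old classes and learning new ones use the same coordinates.
old_knowledge = class_distributions(base_feats, base_labels)
joint = np.vstack([np.stack(list(old_knowledge.values())), novel_feats])
basis = fit_subspace(joint, dim=8)
weights = init_novel_weights(novel_feats, novel_labels, basis)
```

Fitting the subspace on old-class means together with novel-class features is what lets the model revisit old knowledge and adapt to new classes in one coordinate system, as the abstract describes.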
Classification Code: TP39 [Automation and Computer Technology - Computer Application Technology]