Authors: FU Xiao; SHEN Yuan-tong [1]; FU Li-hua [1]; YANG Di-wei [1] (College of Mathematics and Physics, China University of Geosciences, Wuhan, Hubei 430074, China)
Affiliation: [1] College of Mathematics and Physics, China University of Geosciences, Wuhan, Hubei 430074, China
Source: Acta Electronica Sinica (电子学报), 2018, Issue 5, pp. 1041-1046 (6 pages)
Funding: National Natural Science Foundation of China, Youth Science Fund (No. 61601417); Program for New Century Excellent Talents in University (No. NCET-13-1011)
Abstract: Sparse auto-encoder networks have achieved state-of-the-art performance in fields such as natural language processing and image processing. Previous studies have shown that increasing the number of features extracted by the network improves its performance, but this also makes training considerably slower. To reduce training time as much as possible, this paper presents a fast sparse auto-encoder algorithm based on feature clustering. The algorithm first determines the number of substantive features from the optimal K-means cluster count, then obtains the substantive features by training the network with that number of hidden units, and finally increases feature diversity by rotating and distorting these features, improving the network's accuracy while reducing its training time. Experiments on the standard handwritten digit database MNIST and the CMU-PIE face database show that the proposed algorithm improves classification accuracy while substantially shortening training time.
Classification: TP391 [Automation and Computer Technology / Computer Application Technology]
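The abstract outlines a three-step pipeline: estimate the number of "substantive" features from an optimal K-means clustering, train the sparse auto-encoder with that feature count, and then enlarge the feature bank by rotating and distorting the learned features. The sketch below illustrates the first and third steps under stated assumptions; the silhouette-based choice of K, the patch size `side`, the rotation angles, and the helper names are illustrative and not the paper's exact procedure, and the sparse auto-encoder training itself is left as a stub.

```python
# Minimal sketch of the feature-clustering pipeline described in the abstract.
# Assumptions (not from the paper): the silhouette score picks the optimal K,
# features are square patches of size side x side, and small rotations stand in
# for the rotation/distortion step. Step 2 (sparse auto-encoder training) is stubbed.
import numpy as np
from scipy.ndimage import rotate
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score


def optimal_feature_count(patches, k_candidates=range(20, 201, 20)):
    """Step 1: estimate the number of substantive features as the K-means
    cluster count with the highest silhouette score."""
    best_k, best_score = None, -1.0
    for k in k_candidates:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(patches)
        score = silhouette_score(patches, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k


def augment_by_rotation(features, side, angles=(-15.0, -7.5, 7.5, 15.0)):
    """Step 3: increase feature diversity by rotating each learned feature
    (reshaped to a side x side patch) by a few small angles."""
    bank = [features]
    imgs = features.reshape(-1, side, side)
    for angle in angles:
        r = rotate(imgs, angle=angle, axes=(1, 2), reshape=False, mode="nearest")
        bank.append(r.reshape(features.shape[0], -1))
    return np.vstack(bank)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patches = rng.standard_normal((500, 64))   # stand-in for 8x8 image patches
    k = optimal_feature_count(patches, k_candidates=range(20, 61, 20))
    # Step 2 (stub): train a sparse auto-encoder with k hidden units to obtain
    # the substantive features; random vectors stand in for them here.
    learned = rng.standard_normal((k, 64))
    enlarged = augment_by_rotation(learned, side=8)
    print(k, enlarged.shape)
```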