Authors: Yiping GAO, Xinyu LI, Liang GAO
Affiliation: [1] School of Mechanical, Huazhong University of Science and Technology, Wuhan 430072, China
Source: Science China (Information Sciences), 2020, Issue 2, pp. 93-94 (2 pages)
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 51721092), the Natural Science Foundation of Hubei Province (Grant No. 2018CFA078), and the Program for HUST Academic Frontier Youth Team (Grant No. 2017QYTD04).
Abstract: Dear editor, Recently, deep learning (DL) has become a hot research topic, and as one of the most well-known DL models, the stacked autoencoder (SAE) [1] has received increasing attention. In an SAE, layer-wise pretraining is the basic mechanism for automatic feature extraction; it also helps avoid gradient vanishing when constructing deep architectures.
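The layer-wise pretraining mechanism described in the abstract can be sketched as follows: each autoencoder layer is trained greedily to reconstruct the hidden representation produced by the previous layer, and the learned encoders are then stacked. This is a minimal NumPy illustration of generic greedy SAE pretraining, not the authors' discriminative method; the layer sizes, learning rate, and toy data are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, hidden, epochs=200, lr=0.5):
    """Train one autoencoder layer (tied weights) by gradient descent on MSE."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    b = np.zeros(hidden)   # encoder bias
    c = np.zeros(d)        # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        R = sigmoid(H @ W.T + c)      # decode with tied weights
        err = R - X                   # reconstruction error
        dR = err * R * (1 - R)        # gradient at decoder pre-activation
        dH = (dR @ W) * H * (1 - H)   # gradient at encoder pre-activation
        gW = X.T @ dH + dR.T @ H      # W appears in both encoder and decoder
        W -= lr * gW / n
        b -= lr * dH.sum(axis=0) / n
        c -= lr * dR.sum(axis=0) / n
    return W, b

def pretrain_sae(X, layer_sizes):
    """Greedy layer-wise pretraining: each layer learns to reconstruct
    the hidden codes of the layer below it."""
    params, H = [], X
    for h in layer_sizes:
        W, b = train_autoencoder(H, h)
        params.append((W, b))
        H = sigmoid(H @ W + b)        # hidden codes feed the next layer
    return params, H

# Toy data: 100 samples with 8 features (hypothetical, for illustration only).
X = rng.random((100, 8))
params, features = pretrain_sae(X, [6, 4])
print(features.shape)  # (100, 4)
```

After pretraining, the stacked encoders would typically be fine-tuned end-to-end with a supervised objective (e.g., a softmax classifier on top), which is where the discriminative variant discussed in the letter departs from the plain SAE.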
Keywords: SAE; discriminative stacked autoencoder; feature representation; classification
Classification: TP18 [Automation and Computer Technology: Control Theory and Control Engineering]