Authors: Jiemin FANG, Yukang CHEN, Xinbang ZHANG, Qian ZHANG, Chang HUANG, Gaofeng MENG, Wenyu LIU, Xinggang WANG
Affiliations: [1] Institute of Artificial Intelligence, Huazhong University of Science and Technology, Wuhan 430074, China; [2] School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China; [3] Horizon Robotics, Beijing 100089, China; [4] National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Source: Science China (Information Sciences), 2021, Issue 9, pp. 99-111 (13 pages)
Funding: This work was supported in part by the National Natural Science Foundation of China (NSFC) (Grant Nos. 61876212, 61976208, 61733007), the Zhejiang Lab (Grant No. 2019NB0AB02), and the HUST-Horizon Computer Vision Research Center.
Abstract: Neural architecture search (NAS) methods have been proposed to relieve human experts from tedious architecture engineering. However, most current methods are constrained to small-scale search owing to their huge computational resource consumption. Meanwhile, directly applying architectures searched on small datasets to large datasets carries no performance guarantee due to the discrepancy between datasets. This limitation impedes the wide use of NAS on large-scale tasks. To overcome this obstacle, we propose an elastic architecture transfer mechanism for accelerating large-scale NAS (EAT-NAS). In our implementation, architectures are first searched on a small dataset, e.g., CIFAR-10, and the best one is chosen as the basic architecture. The search process on a large dataset, e.g., ImageNet, is then initialized with the basic architecture as the seed, which accelerates the large-scale search. We propose not only a NAS method but also a mechanism for architecture-level transfer learning. In our experiments, we obtain two final models, EATNet-A and EATNet-B, which achieve competitive accuracies of 75.5% and 75.6%, respectively, on ImageNet. Both models also surpass models searched from scratch on ImageNet under the same settings. In terms of computational cost, EAT-NAS takes less than 5 days on 8 TITAN X GPUs, which is significantly less than the consumption of state-of-the-art large-scale NAS methods.
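The abstract only sketches the mechanism, so the following is a minimal, hedged Python sketch of the core idea as described above: the best architecture found on the small dataset is used to seed the initial population of an evolutionary search on the large dataset, so the large-scale search starts near a known-good region instead of from scratch. The encoding and all names here (Architecture, mutate, evolve, toy_evaluate) are illustrative assumptions for exposition, not the paper's actual implementation.

import random
from dataclasses import dataclass
from typing import Callable, List

# Toy architecture encoding: one operation choice per layer.
# This encoding is an assumption, not the one used in EAT-NAS.
OPS = ["conv3x3", "conv5x5", "mbconv3", "mbconv5", "skip"]

@dataclass
class Architecture:
    genes: List[str]
    fitness: float = 0.0

def mutate(arch: Architecture, mutation_prob: float = 0.2) -> Architecture:
    """Perturb a few genes; small perturbations keep offspring near the seed."""
    genes = [random.choice(OPS) if random.random() < mutation_prob else g
             for g in arch.genes]
    return Architecture(genes)

def evolve(seed: Architecture,
           evaluate: Callable[[Architecture], float],
           population_size: int = 32,
           generations: int = 10) -> Architecture:
    """Evolutionary search whose initial population is seeded from the
    basic architecture transferred from the small-dataset search."""
    population = [seed] + [mutate(seed) for _ in range(population_size - 1)]
    for arch in population:
        arch.fitness = evaluate(arch)
    for _ in range(generations):
        parents = sorted(population, key=lambda a: a.fitness, reverse=True)
        parents = parents[: population_size // 2]
        children = [mutate(random.choice(parents))
                    for _ in range(population_size // 2)]
        for child in children:
            child.fitness = evaluate(child)
        population = parents + children
    return max(population, key=lambda a: a.fitness)

if __name__ == "__main__":
    # Stand-in for validation accuracy on the large dataset; a real run
    # would train (or partially train) each candidate on ImageNet.
    def toy_evaluate(arch: Architecture) -> float:
        return sum(g != "skip" for g in arch.genes) / len(arch.genes)

    # Hypothetical "basic architecture" transferred from the CIFAR-10 search.
    basic_arch = Architecture(genes=["mbconv3"] * 8)
    best = evolve(basic_arch, toy_evaluate)
    print(best.genes, best.fitness)

The design point the seeding illustrates is that mutation-based offspring stay close to the transferred seed, so the large-scale search spends its budget refining a good architecture rather than rediscovering one.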
Keywords: architecture transfer, neural architecture search, evolutionary algorithm, large-scale dataset
CLC Number: TP18 [Automation and Computer Technology - Control Theory and Control Engineering]