AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks  


Authors: Cheng Gong, Ye Lu, Su-Rong Dai, Qian Deng, Cheng-Kun Du, Tao Li

Affiliations: [1] College of Software, Nankai University, Tianjin 300350, China; [2] College of Computer Science, Nankai University, Tianjin 300350, China; [3] State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China

Source: Journal of Computer Science & Technology, 2024, No. 2, pp. 401-420 (20 pages)

Funding: Supported by the China Postdoctoral Science Foundation under Grant No. 2022M721707; the National Natural Science Foundation of China under Grant Nos. 62002175 and 62272248; the Special Funding for Excellent Enterprise Technology Correspondent of Tianjin under Grant No. 21YDTPJC00380; and the Open Project Foundation of the Information Security Evaluation Center of Civil Aviation, Civil Aviation University of China, under Grant No. ISECCA-202102.

Abstract: Exploring the expected quantizing scheme with a suitable mixed-precision policy is the key to compressing deep neural networks (DNNs) with high efficiency and accuracy. This exploration imposes heavy workloads on domain experts, so an automatic compression method is needed. However, the huge search space of such automatic methods incurs a large computing budget, which makes them challenging to apply in real scenarios. In this paper, we propose an end-to-end framework named AutoQNN for automatically quantizing different layers with different schemes and bitwidths, without any human labor. AutoQNN can efficiently seek desirable quantizing schemes and mixed-precision policies for mainstream DNN models by combining three techniques: quantizing scheme search (QSS), quantizing precision learning (QPL), and quantized architecture generation (QAG). QSS introduces five quantizing schemes and defines three new schemes as a candidate set for scheme search, and then uses the Differentiable Neural Architecture Search (DNAS) algorithm to seek the layer- or model-desired scheme from the set. QPL is, to the best of our knowledge, the first method to learn mixed-precision policies by reparameterizing the bitwidths of quantizing schemes. QPL efficiently optimizes both the classification loss and the precision loss of DNNs, and obtains a relatively optimal mixed-precision model within a limited model size and memory footprint. QAG converts arbitrary architectures into corresponding quantized ones without manual intervention, to enable end-to-end neural network quantization. We have implemented AutoQNN and integrated it into Keras. Extensive experiments demonstrate that AutoQNN consistently outperforms state-of-the-art quantization methods. For 2-bit weights and activations of AlexNet and ResNet18, AutoQNN achieves accuracies of 59.75% and 68.86%, respectively, improvements of up to 1.65% and 1.74% over state-of-the-art methods. In particular, compared with the full-precision AlexNet and ResNet18, …
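As an illustration of the QSS idea (not the paper's code), the sketch below shows a DNAS-style layer in TensorFlow/Keras: the layer keeps learnable logits over a small candidate set of quantizers, and the forward pass uses the softmax-weighted mixture of quantized weights, so ordinary gradient descent can select a per-layer scheme. The two candidate quantizers and all names here are illustrative assumptions; the paper's candidate set contains five existing schemes plus three newly defined ones.

```python
import tensorflow as tf

def uniform_quant(w, bits=2):
    # Uniform quantizer: 2^bits - 1 steps over [-1, 1], straight-through gradient.
    w = tf.clip_by_value(w, -1.0, 1.0)
    levels = 2.0 ** bits - 1.0
    q = tf.round((w + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0
    return w + tf.stop_gradient(q - w)

def pow2_quant(w, bits=2):
    # Power-of-two quantizer: snap |w| to the nearest power of two.
    exp = tf.math.log(tf.abs(w) + 1e-8) / tf.math.log(2.0)
    exp = tf.clip_by_value(tf.round(exp), -(2.0 ** bits), 0.0)
    q = tf.sign(w) * tf.pow(2.0, exp)
    return w + tf.stop_gradient(q - w)

class QSSDense(tf.keras.layers.Layer):
    """Dense layer whose effective weights are a softmax-weighted mixture of
    candidate quantizers; the mixture logits (alpha) are trained jointly with
    the weights, in the spirit of DNAS-based scheme search."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.candidates = [uniform_quant, pow2_quant]

    def build(self, input_shape):
        self.w = self.add_weight(name="w",
                                 shape=(int(input_shape[-1]), self.units),
                                 initializer="glorot_uniform")
        self.alpha = self.add_weight(name="alpha",
                                     shape=(len(self.candidates),),
                                     initializer="zeros")

    def call(self, x):
        probs = tf.nn.softmax(self.alpha)
        w_q = tf.add_n([probs[i] * q(self.w)
                        for i, q in enumerate(self.candidates)])
        return tf.matmul(x, w_q)
```

After training, the argmax of alpha gives the scheme the layer has settled on.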
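QPL's reparameterization can be sketched in the same spirit: treat the layer's bitwidth as a continuous trainable variable, round it with a straight-through gradient, and penalize model size via add_loss so training balances classification loss against precision loss. The penalty coefficient and clipping range below are illustrative assumptions, not the paper's settings.

```python
import tensorflow as tf

class QPLDense(tf.keras.layers.Layer):
    """Dense layer with a learnable bitwidth. Rounding uses a straight-through
    estimator, and the quantized weights depend differentiably on the step
    size, so both the classification loss and the size penalty shape the bits."""
    def __init__(self, units, init_bits=8.0, size_weight=1e-6, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.init_bits = init_bits
        self.size_weight = size_weight  # illustrative accuracy/size trade-off

    def build(self, input_shape):
        self.w = self.add_weight(name="w",
                                 shape=(int(input_shape[-1]), self.units),
                                 initializer="glorot_uniform")
        self.bits = self.add_weight(
            name="bits", shape=(),
            initializer=tf.keras.initializers.Constant(self.init_bits))

    def call(self, x):
        # Round the continuous bitwidth but keep its gradient (straight-through).
        b = self.bits + tf.stop_gradient(tf.round(self.bits) - self.bits)
        b = tf.clip_by_value(b, 1.0, 8.0)
        levels = tf.pow(2.0, b) - 1.0
        w = tf.clip_by_value(self.w, -1.0, 1.0)
        scaled = (w + 1.0) / 2.0 * levels
        # STE on the rounding only: gradients still flow to w and, through
        # `levels`, to the bitwidth parameter.
        scaled = scaled + tf.stop_gradient(tf.round(scaled) - scaled)
        w_q = scaled / levels * 2.0 - 1.0
        # Precision loss: bits per weight times weight count, scaled down so
        # the optimizer trades accuracy against memory footprint.
        self.add_loss(self.size_weight * b * tf.cast(tf.size(self.w), tf.float32))
        return tf.matmul(x, w_q)
```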
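QAG's conversion step can be approximated with Keras's model-cloning hook: walk a float architecture and substitute quantized layers without manual rewiring. The helper name quantize_architecture is hypothetical, only Dense layers are swapped (reusing the QSSDense sketch above), and activation handling is omitted; the paper's QAG handles arbitrary architectures and layer types.

```python
import tensorflow as tf

def quantize_architecture(model):
    """Clone a Keras model, swapping float layers for quantized ones.
    Layers other than Dense are copied unchanged."""
    def swap(layer):
        if isinstance(layer, tf.keras.layers.Dense):
            return QSSDense(layer.units, name=layer.name + "_q")
        return layer.__class__.from_config(layer.get_config())
    # clone_function is applied to every non-input layer of the model.
    return tf.keras.models.clone_model(model, clone_function=swap)

# Usage: convert a float model end to end, with no manual rewiring.
float_model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
quant_model = quantize_architecture(float_model)
quant_model.summary()
```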

Keywords: automatic quantization; mixed precision; quantizing scheme search; quantizing precision learning; quantized architecture generation

Classification: TP183 (Automation and Computer Technology / Control Theory and Control Engineering)

 
