Affiliations: [1] University of California, Santa Barbara, Santa Barbara, 93106, CA, United States; [2] University of Hong Kong, Hong Kong, 999077, China; [3] Tsinghua University, Beijing, 100084, China; [4] University of California, San Diego, San Diego, 92093, CA, United States; [5] University of Sydney, Sydney, 2006, Australia; [6] Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; [7] University of Chinese Academy of Sciences, Beijing, 100190, China; [8] The Hong Kong University of Science and Technology, Hong Kong, 999077, China
Source: Journal of Automation and Intelligence, 2024, Issue 2, pp. 101-110 (10 pages)
Funding: National Key R&D Program of China (2018AAA0102600); National Natural Science Foundation of China (Nos. 61876215, 62106119); Beijing Academy of Artificial Intelligence (BAAI), China; Chinese Institute for Brain Research, Beijing; and the Science and Technology Major Project of Guangzhou, China (202007030006).
Abstract: Self-normalizing neural networks (SNNs) regulate activation and gradient flows through activation functions with the self-normalization property. Because SNNs do not rely on statistics computed over mini-batches, they are better suited to data parallelism, kernel fusion, and emerging architectures such as ReRAM-based accelerators. However, existing SNNs have mainly demonstrated their effectiveness on toy datasets and fall short in accuracy on large-scale tasks such as ImageNet: they lack the strong normalization, regularization, and expressive power required for wider, deeper models and larger-scale tasks. To strengthen normalization, this paper introduces a comprehensive and practical definition of the self-normalization property in terms of the stability and attractiveness of statistical fixed points. It is comprehensive because it jointly considers all the fixed points used by existing studies: the first and second moments of the forward activations and the expected Frobenius norm of the backward gradient. It is practical because the paper provides analytical equations, derived from a theoretical analysis of the forward and backward signals, for assessing the stability and attractiveness of each fixed point. Applying the proposed definition to a meta activation function inspired by prior research yields a stronger self-normalizing activation function, the "bi-scaled exponential linear unit with backward standardized" (bSELU-BSTD). Both theoretical and empirical evidence show that it is superior to existing activation functions. To enhance regularization and expressive power, we further propose scaled-Mixup and channel-wise scale & shift. With these three techniques, our approach achieves 75.23% top-1 accuracy on ImageNet with Conv MobileNet V1, surpassing existing self-normalizing activation functions. To the best of our knowledge, this is the first SNN to achieve accuracy comparable to batch normalization on ImageNet.
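The exact form of bSELU-BSTD is not given in this abstract. As a point of reference only, the sketch below implements the standard SELU of Klambauer et al. (2017), whose constants alpha and scale are chosen so that zero-mean, unit-variance pre-activations keep approximately those moments after the nonlinearity; this is the fixed-point idea that the paper's definition of self-normalization generalizes. The code is a minimal illustration, not the paper's proposed activation function.

```python
import numpy as np

# Standard SELU (Klambauer et al., 2017). The constants are derived so that
# mean 0 / variance 1 is a stable, attractive fixed point of the forward
# activation statistics under normally distributed pre-activations.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805


def selu(x: np.ndarray) -> np.ndarray:
    """Scaled exponential linear unit."""
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))


if __name__ == "__main__":
    # Empirical check of the fixed point: feed zero-mean, unit-variance
    # inputs and verify the output moments stay close to (0, 1).
    rng = np.random.default_rng(0)
    z = rng.standard_normal(1_000_000)
    a = selu(z)
    print(f"mean = {a.mean():.4f}, var = {a.var():.4f}")
```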
Keywords: Self-normalizing neural network; Mean-field theory; Block dynamical isometry; Activation function
Classification: TP183 [Automation and Computer Technology - Control Theory and Control Engineering]