Authors: Qian Zhu, Qian Kang, Tao Xu, Dengxiu Yu, Zhen Wang
Affiliations: [1] School of Cybersecurity, Northwestern Polytechnical University, Xi'an 710072, China; [2] Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, China; [3] School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, China
Source: Computers, Materials & Continua, 2025, No. 5, pp. 1855-1879 (25 pages)
Funding: Supported by the National Science Fund for Distinguished Young Scholars (No. 62025602); the National Natural Science Foundation of China (Nos. U22B2036, 11931015); the Fok Ying-Tong Education Foundation, China (No. 171105); the Fundamental Research Funds for the Central Universities (No. G2024WD0151); and in part by the Tencent Foundation and the XPLORER PRIZE.
Abstract: In this study, we present a deterministic convergence analysis of Gated Recurrent Unit (GRU) networks enhanced by a smoothing L1 regularization technique. While GRU architectures effectively mitigate gradient vanishing/exploding issues in sequential modeling, they remain prone to overfitting, particularly under noisy or limited training data. Traditional L1 regularization, despite enforcing sparsity and accelerating optimization, introduces non-differentiable points in the error function, leading to oscillations during training. To address this, we propose a novel smoothing L1 regularization framework that replaces the non-differentiable absolute value function with a quadratic approximation, ensuring gradient continuity and stabilizing the optimization landscape. Theoretically, we rigorously establish three key properties of the resulting smoothing-L1-regularized GRU (SL1-GRU) model: (1) monotonic decrease of the error function across iterations, (2) weak convergence, characterized by vanishing gradients as iterations approach infinity, and (3) strong convergence of network weights to fixed points under finite conditions. Comprehensive experiments on benchmark datasets, spanning function approximation, classification (KDD Cup 1999 Data, MNIST), and regression tasks (Boston Housing, Energy Efficiency), demonstrate SL1-GRU's superiority over baseline models (RNN, LSTM, GRU, L1-GRU, L2-GRU). Empirical results reveal that, compared to unregularized GRU, SL1-GRU achieves 1.0%-2.4% higher test accuracy in classification and 7.8%-15.4% lower mean squared error in regression, while reducing training time by 8.7%-20.1%. These outcomes validate the method's efficacy in balancing computational efficiency and generalization capability, and they strongly corroborate the theoretical analysis. The proposed framework not only resolves the non-differentiability challenge of L1 regularization but also provides a theoretical foundation for convergence guarantees in recurrent neural network training.
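The abstract describes replacing the absolute value in the L1 penalty with a quadratic approximation so the gradient stays continuous. Below is a minimal sketch of one common C^1 smoothing of this kind; the paper's exact approximation is not given on this page, and the smoothing half-width `a` and coefficient `lam` are illustrative assumptions.

```python
import numpy as np

def smoothing_l1(w, a=0.1):
    """Smoothed |w|: quadratic inside [-a, a], absolute value outside.

    One standard C^1 construction (assumed, not necessarily the paper's):
    the two pieces and their derivatives match at |w| = a.
    """
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) >= a, np.abs(w), w**2 / (2 * a) + a / 2)

def smoothing_l1_grad(w, a=0.1):
    """Gradient of the smoothed penalty: sign(w) outside, w/a inside.

    Unlike d|w|/dw, this is continuous at w = 0, which is the property
    the abstract credits with removing training oscillations.
    """
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) >= a, np.sign(w), w / a)

# Example: penalty term added to a GRU training loss with coefficient lam.
weights = np.array([-0.5, -0.05, 0.0, 0.02, 0.3])
lam = 1e-3  # hypothetical regularization coefficient
penalty = lam * smoothing_l1(weights).sum()
```

Because the gradient passes through zero smoothly instead of jumping between -1 and +1, gradient descent on the regularized error no longer oscillates around sparse weights, which is consistent with the monotone-decrease and convergence properties claimed in the abstract.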
Keywords: gated recurrent unit; regularization; convergence
Classification: TP183 [Automation and Computer Technology: Control Theory and Control Engineering]