Authors: LYU Yi-sheng[1,2]; LIU Ya-hui; CHEN Yuan-yuan; ZHU Feng-hua[1]
Affiliations: [1] The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; [2] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
Source: China Journal of Highway and Transport, 2022, No. 3, pp. 263-272 (10 pages)
Funding: National Natural Science Foundation of China (61876011); Guangdong Basic and Applied Basic Research Foundation (2019B1515120030)
Abstract: End-to-end autonomous driving systems map sensory inputs directly to vehicle control outputs and have become an important research direction in autonomous driving. To perform accurate and smooth vehicle control in dynamic environments, an autonomous vehicle must be able to process spatiotemporal information. To this end, we propose a new spatiotemporal fusion model that predicts the steering angle end-to-end by combining a two-stream convolutional neural network (Two-stream CNN) with a gated recurrent unit (GRU) network. The model uses RGB images, motion-based optical flow images, and GRU networks to fuse the spatial and temporal features of consecutive driving-scene frames. First, the two CNN branches of the two-stream network extract features: one branch learns spatial features from RGB images, while the other learns temporal features from optical flow. Then, the GRU network models the features with short-term temporal dependencies. Finally, the spatial and temporal features are fused to produce the steering-angle prediction. The temporal dynamics captured by the proposed two-stream C-GRU model thus depend not only on the optical flow, which represents the displacement of objects between adjacent frames, but also on multiple consecutive frames. We tested the model on a real-world driving dataset, and the experimental results show that the proposed spatiotemporal model outperforms other mainstream spatiotemporal models in steering-angle prediction accuracy and stability. In particular, compared with the basic two-stream CNN, the proposed two-stream C-GRU model improves steering-angle prediction accuracy and stability by 20% and 6% on test set 1, and by 5% and 10% on test set 2, respectively.
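The paper does not include code. As a rough illustration of the pipeline the abstract describes (per-frame spatial features from the RGB branch, temporal features from the optical-flow branch, concatenation fusion, a GRU over the frame sequence, then a scalar steering-angle regression), the sketch below implements a minimal GRU cell in NumPy. All names, dimensions, and the concatenation-fusion choice are assumptions for illustration; the actual model uses learned CNN features rather than the random vectors used here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell in NumPy (illustrative, randomly initialized)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        # Update gate z, reset gate r, and candidate state weights.
        self.Wz = rng.normal(0, s, (hidden_dim, input_dim))
        self.Uz = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.Wr = rng.normal(0, s, (hidden_dim, input_dim))
        self.Ur = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.Wh = rng.normal(0, s, (hidden_dim, input_dim))
        self.Uh = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)          # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)          # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1 - z) * h + z * h_tilde                # new hidden state

def predict_steering(spatial_feats, temporal_feats, gru, w_out):
    """Fuse per-frame spatial (RGB) and temporal (optical-flow) feature
    vectors by concatenation, run the GRU over the frame sequence, and
    regress a scalar steering angle from the final hidden state."""
    h = np.zeros(gru.hidden_dim)
    for s_t, t_t in zip(spatial_feats, temporal_feats):
        x = np.concatenate([s_t, t_t])  # simple concatenation fusion
        h = gru.step(x, h)
    return float(w_out @ h)
```

In the actual architecture, `spatial_feats` and `temporal_feats` would come from the two CNN branches; this sketch only shows how a GRU accumulates information over multiple consecutive frames rather than a single adjacent-frame displacement.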