视觉感知的端到端自动驾驶运动规划综述 (Cited by: 11)

Review of end-to-end motion planning for autonomous driving with visual perception


Authors: Liu Yifei (刘旖菲), Hu Xuemin (胡学敏), Chen Guowen (陈国文), Liu Shihao (刘士豪), Chen Long (陈龙)[2]

Affiliations: [1] School of Computer Science and Information Engineering, Hubei University, Wuhan 430062, China; [2] School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China

Source: Journal of Image and Graphics (中国图象图形学报), 2021, No. 1, pp. 49-66 (18 pages)

Funding: National Natural Science Foundation of China (61806076, 61773414); Natural Science Foundation of Hubei Province (2018CFB158); National Undergraduate Innovation and Entrepreneurship Training Program (202010512030); Hubei Provincial Undergraduate Innovation and Entrepreneurship Training Program (S201910512026).

Abstract: A visual perception module can use cameras and other visual sensors to obtain rich image and video information and thereby detect vehicles, pedestrians, traffic signs, and other peripheral information in the visual field of a self-driving vehicle; it is one of the most effective and lowest-cost perception methods for autonomous driving. Motion planning provides self-driving vehicles with a series of motion parameters and driving actions from the initial state to the target state of the vehicle, keeping the vehicle subject to collision-avoidance and dynamic constraints from the external environment and spatio-temporal constraints from the internal system throughout the entire traveling process. Traditional autonomous driving approaches decompose the intermediate process from sensor inputs to actuator outputs into several independent submodules, such as perception, planning, decision making, and control. However, these modular approaches require feature design and selection, camera calibration, and manual parameter tuning, so autonomous driving systems built on them do not achieve complete autonomy. With the rapid development of big data, computing performance, and deep learning algorithms, an increasing number of researchers are applying deep learning to autonomous driving. An end-to-end model based on deep learning obtains the vehicle motion parameters directly from the perceived data and can thus fully embody the autonomy of autonomous driving; it has therefore been widely investigated in recent years. This paper reviews representative and cutting-edge papers published in China and abroad to give a comprehensive picture of the research progress of end-to-end motion planning for autonomous driving with visual perception. Applications of the end-to-end model in computer vision tasks and games are first introduced; in some of these fields, end-to-end approaches have already solved tasks whose complexity exceeds that of autonomous driving, indicating that end-to-end approaches can also be applied successfully in the commercial field of autonomous driving. The roles of visual perception and motion planning in end-to-end autonomous driving are then analyzed. Taking the learning manner of the self-driving vehicle as the classification criterion, implementation methods of end-to-end motion planning with visual perception are divided into two categories, imitation learning and reinforcement learning, and the different algorithms within each category are summarized and analyzed. Because current research on end-to-end models faces the task of transferring from virtual environments to the real world, methods based on transfer learning are also reviewed. Finally, datasets and simulation platforms related to autonomous driving are listed, existing problems and challenges are summarized, and future development trends are discussed. End-to-end motion planning models for autonomous driving with visual perception are highly generalizable and structurally simple, so these methods have broad application prospects and research value; however, they remain difficult to interpret and cannot guarantee absolute safety, and further research is needed to address these limitations of end-to-end models.
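To make the end-to-end idea in the abstract concrete, the following minimal PyTorch sketch (not taken from the surveyed papers; the network layout, tensor shapes, and hyperparameters are illustrative assumptions loosely modeled on PilotNet-style architectures) maps a single front-camera frame directly to steering and throttle commands and trains it with a behavioral-cloning loss against expert demonstrations, i.e., the imitation-learning branch discussed in the review.

# Minimal sketch (illustrative assumptions, not the paper's method): an end-to-end
# model mapping a front-camera image directly to continuous driving commands,
# trained by behavioral cloning (imitation learning).
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional encoder: 3 x 66 x 200 RGB input -> compact feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        # Fully connected head regressing [steering, throttle]
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 2),
        )

    def forward(self, image):
        return self.head(self.encoder(image))

model = EndToEndDriver()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One behavioral-cloning step on a dummy batch: camera frames paired with
# expert [steering, throttle] labels (random tensors stand in for real data).
images = torch.randn(8, 3, 66, 200)      # (batch, channels, height, width)
expert_cmds = torch.randn(8, 2)          # expert driving commands
pred_cmds = model(images)
loss = loss_fn(pred_cmds, expert_cmds)   # imitate the expert demonstrations
optimizer.zero_grad()
loss.backward()
optimizer.step()

In the reinforcement-learning branch surveyed by the paper, a network of this kind would instead output actions inside a simulator and be updated from a reward signal rather than from expert labels, which is also why the virtual-to-real transfer-learning methods mentioned in the abstract become necessary.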

Keywords: visual perception; motion planning; end-to-end; autonomous driving; imitation learning; reinforcement learning

Classification code: TP391 [Automation and Computer Technology / Computer Application Technology]

 
