Author(s): CHEN Chao; LI Qiang; YAN Qing (College of Mechanical Engineering, Jiangsu University of Science and Technology, Zhenjiang 212003, China)
Source: Science Technology and Engineering, 2018, No. 13, pp. 86-91 (6 pages)
Funding: Supported by the Prospective Joint Research Project of Industry-University-Research Cooperation of Jiangsu Province (BY2013066-10)
Abstract: A single sensor used for simultaneous localization and mapping (SLAM) on a mobile robot suffers from low localization accuracy and incomplete maps. To address this, a SLAM algorithm based on fusing the information from a Kinect vision sensor and a laser sensor is proposed. First, the depth image acquired by the Kinect is converted into a 3D point cloud through a coordinate-system transformation, and a vertical-limit filter removes points outside a specified height band. The remaining points are then projected onto the horizontal plane, and the boundary points are extracted and converted into laser-scan data. This converted scan is fused with the scan from the laser sensor at the data level, and the unified output is used to build the map and drive the robot's autonomous navigation. Experimental results show that the method accurately detects small obstacles and obstacles with complex features, builds more accurate and complete environment maps, and better accomplishes the mobile robot's autonomous navigation task.
Keywords: simultaneous localization and mapping (SLAM); Kinect vision sensor; laser sensor; information fusion; autonomous navigation
CLC Number: TP242.6 [Automation & Computer Technology — Detection Technology and Automatic Equipment]
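The depth-to-scan pipeline summarized in the abstract (back-project the depth image to a 3D point cloud, filter by height, project to the horizontal plane, keep the boundary point per bearing) can be sketched as below. This is a minimal illustration under standard pinhole-camera assumptions, not the paper's implementation; the intrinsics (`fx`, `fy`, `cx`, `cy`), the height band (`z_min`, `z_max`), and the beam count and field of view are all hypothetical parameters.

```python
import numpy as np

def depth_to_laserscan(depth, fx, fy, cx, cy,
                       z_min=0.05, z_max=0.60,
                       n_beams=360, fov=np.deg2rad(57.0)):
    """Convert a metric depth image to a pseudo laser scan.

    Steps (mirroring the pipeline in the abstract; parameters illustrative):
      1. back-project pixels to a 3D point cloud via the pinhole model;
      2. keep only points inside a vertical (height) band;
      3. project the survivors onto the horizontal plane;
      4. per angular bin, keep the nearest range (the boundary point).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth                      # forward distance (metres)
    x = (u - cx) * z / fx          # right, camera frame
    y = (v - cy) * z / fy          # down, camera frame

    valid = z > 0                  # drop pixels with no depth return
    height = -y[valid]             # up, assuming a level-mounted camera
    keep = (height > z_min) & (height < z_max)
    xs, zs = x[valid][keep], z[valid][keep]

    # Horizontal-plane projection: bearing and range for each point.
    angles = np.arctan2(xs, zs)
    ranges_pt = np.hypot(xs, zs)

    # For each beam, record the closest obstacle (unbuffered min-reduce).
    scan = np.full(n_beams, np.inf)
    bins = ((angles + fov / 2) / fov * n_beams).astype(int)
    ok = (bins >= 0) & (bins < n_beams)
    np.minimum.at(scan, bins[ok], ranges_pt[ok])
    return scan
```

The resulting array has the same shape as a conventional laser scan, so in principle it can be merged with a real laser sensor's ranges beam-by-beam (e.g. taking the minimum per bearing) for the data-level fusion the paper describes.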