面向机器人手眼协调抓取的3维建模方法 (Cited by: 8)

A 3D Modeling Method for Robot’s Hand-Eye Coordinated Grasping


Authors: 杨扬[1], 曹其新[1], 朱笑笑[1], 陈培华[1]

Affiliation: [1] Research Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China

Source: Robot (《机器人》), 2013, No. 2, pp. 151-155 (5 pages)

Funding: National 863 Program of China (2012AA100906); State Key Laboratory of Mechanical System and Vibration (MSV-MS-2010-01); Innovation Program of Shanghai Municipal Education Commission (12ZZ014)

Abstract: For robot hand-eye coordinated grasping, a 3D modeling method for common objects in the household environment is proposed. Exploiting the ability of an RGB-D sensor to capture an RGB image and a depth image simultaneously, feature points and feature descriptors are extracted from the RGB image, and correspondences between adjacent frames are established by matching the feature descriptors. A RANSAC (RANdom SAmple Consensus) based three-point algorithm computes the relative pose between adjacent frames, and the result is refined, based on loop closure of the camera path, by minimizing the reprojection error with the Levenberg-Marquardt algorithm. With this method, a dense 3D point cloud model of an object is obtained simply by placing the object on a flat tabletop and collecting 10 to 20 frames of data around it. 3D models are built for 20 common household objects suitable for service robot grasping. Experimental results show that the error is about 1 mm for models with diameters of 5 cm to 7 cm, which satisfies the requirements of pose computation for robot grasping.

Keywords: 3D modeling; feature point; feature descriptor; ego-motion estimation; pose computation

CLC number: TP391 [Automation and Computer Technology - Computer Application Technology]
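
Note: The abstract above outlines a concrete pipeline (feature matching on the RGB image, depth-based lifting to 3D, RANSAC three-point relative pose estimation, loop-closure refinement with Levenberg-Marquardt). The paper does not publish code, so the following is only a minimal Python sketch of the frame-to-frame registration step under stated assumptions: ORB features and OpenCV's brute-force matcher stand in for the unspecified feature detector and descriptor, camera intrinsics (fx, fy, cx, cy) are given, and the depth image is metric with invalid pixels stored as 0. The three-point hypothesis is solved here as a 3D-3D rigid alignment via SVD (Kabsch), which is one common reading of a "three-point algorithm" on RGB-D data.

# Sketch of frame-to-frame registration: feature matching on the RGB image,
# back-projection to 3D with the depth image, RANSAC three-point relative pose.
# ORB features, OpenCV, and metric depth are assumptions, not the paper's
# stated choices.
import numpy as np
import cv2


def backproject(kp, depth, fx, fy, cx, cy):
    """Back-project an image keypoint to a 3D point in the camera frame."""
    u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
    z = float(depth[v, u])                 # depth assumed to be in metres
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])


def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with R @ p + t ~= q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    return R, cq - R @ cp


def ransac_three_point(P, Q, iters=500, thresh=0.01):
    """RANSAC over minimal 3-point samples, then a refit on the best inlier set."""
    rng = np.random.default_rng(0)
    best_count, best_inliers = 0, None
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_count:
            best_count, best_inliers = inliers.sum(), inliers
    return rigid_transform(P[best_inliers], Q[best_inliers])


def relative_pose(rgb1, depth1, rgb2, depth2, intrinsics):
    """Estimate the relative pose between two adjacent RGB-D frames."""
    fx, fy, cx, cy = intrinsics
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(cv2.cvtColor(rgb1, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = orb.detectAndCompute(cv2.cvtColor(rgb2, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    P, Q = [], []
    for m in matches:
        p = backproject(kp1[m.queryIdx], depth1, fx, fy, cx, cy)
        q = backproject(kp2[m.trainIdx], depth2, fx, fy, cx, cy)
        if p[2] > 0 and q[2] > 0:          # skip matches without valid depth
            P.append(p)
            Q.append(q)
    return ransac_three_point(np.array(P), np.array(Q))

A second sketch, again an illustration under assumptions rather than the paper's implementation, shows how the reprojection error of an initial pose could be minimized with the Levenberg-Marquardt method via scipy.optimize.least_squares. The paper performs this refinement jointly over the closed loop of 10 to 20 frames; this fragment only refines a single relative pose from matched 3D points pts3d (frame 1) and their observed pixels uv_obs (frame 2), both hypothetical names.

# Sketch of Levenberg-Marquardt refinement of one relative pose by minimizing
# pixel reprojection error; the paper's joint loop-closure optimization over
# all frames is omitted. pts3d and uv_obs are hypothetical inputs.
import numpy as np
import cv2
from scipy.optimize import least_squares


def reprojection_residuals(x, pts3d, uv_obs, fx, fy, cx, cy):
    """Pixel residuals of frame-1 points reprojected into frame 2 under pose x."""
    R, _ = cv2.Rodrigues(x[:3].reshape(3, 1))
    pc = pts3d @ R.T + x[3:]               # points in the frame-2 camera frame
    u = fx * pc[:, 0] / pc[:, 2] + cx
    v = fy * pc[:, 1] / pc[:, 2] + cy
    return np.concatenate([u - uv_obs[:, 0], v - uv_obs[:, 1]])


def refine_pose(R0, t0, pts3d, uv_obs, fx, fy, cx, cy):
    """Levenberg-Marquardt refinement of an initial pose from the RANSAC step."""
    rvec0, _ = cv2.Rodrigues(R0)
    x0 = np.hstack([rvec0.ravel(), t0])
    sol = least_squares(reprojection_residuals, x0, method="lm",
                        args=(pts3d, uv_obs, fx, fy, cx, cy))
    R, _ = cv2.Rodrigues(sol.x[:3].reshape(3, 1))
    return R, sol.x[3:]

Minimizing pixel reprojection error rather than 3D point-to-point distance is less sensitive to the depth noise of the RGB-D sensor, which is presumably why the paper uses it for the final refinement.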

 
