Authors: 任秉银 (REN Bingyin) [1]; 魏坤 (WEI Kun) [1]; 代勇 (DAI Yong) [1] (School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150001, China)
Affiliation: [1] School of Mechatronics Engineering, Harbin Institute of Technology
Source: Journal of Harbin Institute of Technology (《哈尔滨工业大学学报》), 2019, Issue 7, pp. 42-48 (7 pages)
Abstract: In 3D cluttered scenes where large and small targets coexist, a robotic manipulator cannot directly perceive the small targets within its working field of view using a 3D vision sensor alone. To solve this problem, a hybrid vision-system configuration is proposed that combines a fixed global Kinect depth camera with a moving camera mounted on the manipulator's end effector (an eye-in-hand camera). The fixed global Kinect depth camera perceives and acquires point clouds of the large targets within its field of view, from which their poses are recognized and estimated; path planning then guides the manipulator to a position above a large target, where the eye-in-hand camera is activated to capture close-range images of the small targets. In the offline phase, a CAD model of the small target is obtained, and a virtual 2D camera captures a series of 2D views of the target from different poses and radii on the surface of a virtual sphere centered on the target; these views are stored in a 3D shape-template database for the target. In the online phase, the scene image captured by the real eye-in-hand camera is searched hierarchically, level by level, over an image pyramid to find all instances that match the target templates and to compute their 2D poses; after a series of transformations, an initial 3D pose in the camera coordinate frame is obtained and then refined with a nonlinear least-squares method. Pose-estimation accuracy experiments and cluttered-object sorting experiments were carried out with an ABB manipulator, a Microsoft Kinect V2 sensor, and a Micro Vision (维视图像) industrial camera, using a checkerboard calibration board to measure the ground-truth pose of the target. The results show a position accuracy of 0.48 mm, an orientation accuracy of 0.62°, an average recognition time of 1.85 s, and a recognition rate of 98%, far exceeding traditional feature-based and descriptor-based pose-estimation methods, which demonstrates the effectiveness and feasibility of the proposed approach.
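The online stage summarized in the abstract (hierarchical, coarse-to-fine template search over an image pyramid, followed by nonlinear least-squares pose refinement) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes OpenCV, the helper names build_pyramid, pyramid_match, and refine_pose are hypothetical, and cv2.solvePnP's iterative solver is used as a stand-in for the paper's nonlinear least-squares refinement step.

import cv2
import numpy as np

def build_pyramid(image, levels=3):
    # Coarse-to-fine pyramid: index 0 is full resolution, the last entry is coarsest.
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

def pyramid_match(scene, template, levels=3, threshold=0.8):
    # Locate one template instance coarse-to-fine; returns the top-left corner of the
    # match at full resolution, or None if no level exceeds the score threshold.
    scene_pyr = build_pyramid(scene, levels)
    templ_pyr = build_pyramid(template, levels)
    loc = None
    for level in range(levels - 1, -1, -1):
        s, t = scene_pyr[level], templ_pyr[level]
        th, tw = t.shape[:2]
        if loc is None:
            roi, offset = s, (0, 0)            # coarsest level: search the whole image
        else:
            # Propagate the coarser match and only search a local window around it.
            x, y = loc[0] * 2, loc[1] * 2
            x0, y0 = max(x - tw, 0), max(y - th, 0)
            x1, y1 = min(x + 2 * tw, s.shape[1]), min(y + 2 * th, s.shape[0])
            roi, offset = s[y0:y1, x0:x1], (x0, y0)
            if roi.shape[0] < th or roi.shape[1] < tw:
                roi, offset = s, (0, 0)        # window clipped too small near a border
        res = cv2.matchTemplate(roi, t, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val < threshold:
            return None
        loc = (max_loc[0] + offset[0], max_loc[1] + offset[1])
    return loc

def refine_pose(object_pts, image_pts, K, dist=None):
    # Refine the object's 3D pose in the camera frame by minimizing reprojection error;
    # SOLVEPNP_ITERATIVE runs a Levenberg-Marquardt nonlinear least-squares fit.
    obj = np.asarray(object_pts, dtype=np.float64).reshape(-1, 3)
    img = np.asarray(image_pts, dtype=np.float64).reshape(-1, 2)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    return (rvec, tvec) if ok else None

In the paper's pipeline, the template would be one of the stored virtual 2D views rendered from the small target's CAD model, and the matched 2D pose would first be lifted to an initial 3D pose in the camera frame before the least-squares refinement.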
Keywords: robotic manipulator; 3D perception; small objects; eye-in-hand camera; CAD model; template matching; autonomous sorting
CLC number: TP391 [Automation and Computer Technology / Computer Application Technology]