Authors: Liu Jiadong; Fei Bowen; Wan Zihao; Hu Jianhua[2] (China Academy of Information and Communications Technology, Beijing 100191, China; Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China)
Affiliations: [1] China Academy of Information and Communications Technology, Beijing 100191, China; [2] Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Source: Mechanical & Electrical Engineering Technology, 2024, No. 6, pp. 74-78, 118 (6 pages)
Funding: Chinese Academy of Sciences Science and Technology Service Network Initiative (STS) (STS-HP-202202).
Abstract: To address the slow speed and low accuracy of grasping pose estimation in complex scenes, an end-to-end grasping pose estimation model for robotic manipulators, Grasp PCA Network (GPNet), is designed on the basis of the BlendMask instance segmentation network, combining instance segmentation with grasping pose estimation. First, using 2D image information, GPNet extends BlendMask with branches that estimate the grasp center and the principal grasping direction, which raises the speed of pose estimation. Second, Hough voting is used to obtain the grasp center and principal direction, improving the accuracy and robustness of 2D grasping pose estimation. Third, an ellipse-filtering mechanism effectively eliminates the interference that the rotational arbitrariness of circular objects causes in principal-direction estimation. Finally, GPNet is trained with a new loss function, and the final grasping pose is obtained by combining the 2D estimate with the image depth information. Grasping speed and accuracy were validated on the industrial Internet platform of the China Academy of Information and Communications Technology and in a key laboratory of the Ministry of Industry and Information Technology, with nine kinds of objects as grasping targets. In complex scenes containing on average six target instances per scene with occluding distractor objects, the proposed model achieved an average pose estimation time of 0.057 s and an average grasping success rate of 90.2%.
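The Hough-voting step described in the abstract can be illustrated with a minimal sketch: each segmented pixel casts a vote along a predicted offset vector toward the grasp center, and the accumulator cell with the most votes is taken as the center. The function name, array layout, and the idea of per-pixel offset predictions are assumptions for illustration; the paper's actual network heads and voting scheme may differ.

```python
import numpy as np

def hough_vote_center(mask, offsets):
    """Estimate a grasp center by Hough voting.

    mask    : (H, W) bool array, pixels of one segmented instance
    offsets : (H, W, 2) float array, predicted (dy, dx) vectors
              pointing from each pixel toward the grasp center
    Returns the (row, col) accumulator cell with the most votes.
    """
    H, W = mask.shape
    acc = np.zeros((H, W), dtype=np.int32)
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        # Each instance pixel votes for the cell its offset points at.
        cy = int(round(y + offsets[y, x, 0]))
        cx = int(round(x + offsets[y, x, 1]))
        if 0 <= cy < H and 0 <= cx < W:
            acc[cy, cx] += 1
    return np.unravel_index(np.argmax(acc), acc.shape)
```

Because every inlier pixel votes for the same cell while noisy predictions scatter their votes, the argmax is robust to a fraction of bad offsets, which is the robustness benefit the abstract attributes to Hough voting.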