Authors: LI Gan; NIU Mingdi; CHEN Lu; YANG Jing [4]; YAN Tao; CHEN Bin
Affiliations: [1] School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi 030006, China; [2] Institute of Big Data Science and Industry, Shanxi University, Taiyuan, Shanxi 030006, China; [3] Technology Department, Taiyuan Satellite Launch Center, Taiyuan, Shanxi 030027, China; [4] School of Automation and Software Engineering, Shanxi University, Taiyuan, Shanxi 030031, China; [5] Chongqing Research Institute, Harbin Institute of Technology, Chongqing 401151, China; [6] International Institute of Artificial Intelligence, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China
Source: Journal of Computer Applications, 2023, Issue 8, pp. 2564-2571 (8 pages)
Funding: National Natural Science Foundation of China (62003200, 62006146); Shanxi Basic Research Program (202203021222010); Shanxi Science and Technology Major Project (202201020101006)
Abstract: Existing robotic grasping operations are usually performed under well-lit conditions, where object details are clear and regional contrast is high. In low-light environments such as nighttime or occlusion, however, the objects' visual features are weak, and the detection accuracy of existing robotic grasp detection models drops sharply. To improve the representation of sparse, weak grasp features in low-light scenes, a grasp detection model incorporating a visual feature enhancement mechanism is proposed, in which a visual enhancement sub-task imposes feature enhancement constraints on grasp detection. The grasp detection module adopts a U-Net-like encoder-decoder structure for efficient feature fusion; the low-light enhancement module extracts texture and color information at the local and global levels respectively, enhancing features while balancing object detail and overall visual effect. In addition, two new low-light grasp benchmark datasets, the low-light Cornell dataset and the low-light Jacquard dataset, were constructed, and comparative experiments were conducted on them. Experimental results show that the proposed model reaches accuracies of 95.5% and 87.4% on the two benchmarks, exceeding existing grasp detection models such as the Generative Grasping Convolutional Neural Network (GG-CNN) and the Generative Residual Convolutional Neural Network (GR-ConvNet) by 11.1 and 1.2 percentage points on the low-light Cornell dataset, and by 5.5 and 5.0 percentage points on the low-light Jacquard dataset, demonstrating good grasp detection performance.
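The abstract describes a U-Net-like encoder-decoder in which decoder features are fused with same-resolution encoder features via skip connections. The following is a minimal NumPy sketch of that fusion pattern only; the channel widths, learned convolutions, additive (rather than concatenative) skips, and two-stage depth are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling over a (C, H, W) feature map
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample(x):
    # nearest-neighbour 2x upsampling back to the skip's resolution
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_like(x):
    # Encoder path: progressively halve resolution, remembering skips.
    s1 = x                    # full-resolution skip
    e1 = downsample(x)        # (C, H/2, W/2)
    s2 = e1                   # half-resolution skip
    e2 = downsample(e1)       # bottleneck, (C, H/4, W/4)
    # Decoder path: upsample and fuse with encoder skips (here by
    # addition; real U-Nets typically concatenate then convolve).
    d1 = upsample(e2) + s2    # (C, H/2, W/2)
    d0 = upsample(d1) + s1    # (C, H, W), same shape as the input
    return d0

out = unet_like(np.ones((1, 8, 8)))
print(out.shape)  # (1, 8, 8)
```

The point of the sketch is that the output recovers the input resolution while mixing coarse (bottleneck) and fine (skip) information, which is the property the paper relies on for dense grasp prediction.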
Keywords: robot; grasp detection; low-light imaging; deep neural network; visual enhancement
Classification: TP391.4 [Automation and Computer Technology — Computer Application Technology]