Authors: HUI Jizhuang; WANG Jinhao; ZHOU Tao; ZHANG Yaqian; DING Kai (School of Construction Machinery, Chang'an University, Xi'an 710064, China)
Source: Mechanical Science and Technology for Aerospace Engineering, 2024, No. 8, pp. 1418-1426 (9 pages)
Funding: China Postdoctoral Science Foundation (2022T150073); Shaanxi Qinchuangyuan "Scientist + Engineer" Team Building Project (2022KXJ-150).
Abstract: To address the complex and changeable human-robot collaborative assembly environment, the large scale differences among assembly parts, and the high similarity of some parts, and to ensure that the robot grasps assembly parts accurately during human-robot collaborative assembly, an improved YOLOv7 model is proposed to improve multi-part object detection in assembly scenes. First, ODConv (omni-dimensional dynamic convolution) replaces the convolutional layers in the YOLOv7 backbone network, allowing the network to adaptively adjust convolution kernel weights and extract features of assembly parts of different shapes and sizes. Second, the SimAM (selective image attention mechanism) module is introduced into the YOLOv7 backbone to reduce the influence of the complex, variable assembly background on part detection accuracy. Finally, Efficient-IoU replaces the original Complete-IoU to accelerate convergence and to reduce the impact of the high similarity between some assembly parts on detection accuracy. Experimental results show that the model achieves an average precision of 93.4% and that the improved network outperforms the original network and other object detection algorithms. The improved YOLOv7 algorithm maintains high accuracy at a high FPS with relatively low model parameters and computational load, meeting the real-time object detection requirements of dynamic human-robot collaborative assembly scenarios.
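The SimAM module the abstract introduces into the YOLOv7 backbone is parameter-free: it weights each feature-map element by an energy-based attention score. The sketch below is a minimal NumPy illustration of that published energy formulation, not the authors' implementation; the input layout `(C, H, W)` and the regularization coefficient `lam` are assumptions for the example.

```python
import numpy as np

def simam(x, lam=1e-4):
    """Hedged sketch of SimAM attention for a feature map x of shape (C, H, W).

    Per channel, each element t gets an inverse-energy score
    1/e_t = (t - mu)^2 / (4 * (var + lam)) + 0.5, and the feature map is
    gated by sigmoid(1/e_t), so more distinctive elements are emphasized.
    """
    c, h, w = x.shape
    n = h * w - 1                                      # elements minus the target
    mu = x.mean(axis=(1, 2), keepdims=True)            # per-channel mean
    d = (x - mu) ** 2                                  # squared deviation per element
    var = d.sum(axis=(1, 2), keepdims=True) / n        # per-channel variance estimate
    inv_energy = d / (4.0 * (var + lam)) + 0.5         # 1/e_t for every element
    return x * (1.0 / (1.0 + np.exp(-inv_energy)))     # sigmoid-gated features
```

Because the score is derived entirely from the feature statistics, the module adds no learnable parameters, which is consistent with the abstract's claim that the improvements keep the model size low.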
Keywords: human-robot collaborative assembly; YOLOv7; attention mechanism; E-IoU; assembly part detection; multi-object detection
Classification: TP242 [Automation and Computer Technology — Detection Technology and Automatic Devices]
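The Efficient-IoU (EIoU) loss that replaces Complete-IoU in the paper penalizes, in addition to the IoU term, the center distance and the width and height gaps between predicted and ground-truth boxes, each normalized by the smallest enclosing box. The following is a minimal pure-Python sketch of that standard EIoU formulation under an assumed `(x1, y1, x2, y2)` box convention; it is an illustration, not the authors' training code.

```python
def eiou_loss(box_p, box_g, eps=1e-9):
    """Hedged sketch of the EIoU loss for two axis-aligned boxes (x1, y1, x2, y2):
    1 - IoU + center-distance term + width term + height term."""
    # intersection and IoU
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter + eps)
    # smallest enclosing box, its width, height, and squared diagonal
    cw = max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])
    ch = max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])
    diag2 = cw ** 2 + ch ** 2 + eps
    # squared distance between box centers
    pcx, pcy = (box_p[0] + box_p[2]) / 2.0, (box_p[1] + box_p[3]) / 2.0
    gcx, gcy = (box_g[0] + box_g[2]) / 2.0, (box_g[1] + box_g[3]) / 2.0
    dist2 = (pcx - gcx) ** 2 + (pcy - gcy) ** 2
    # width/height gaps, normalized by the enclosing box
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    return (1.0 - iou) + dist2 / diag2 \
        + (wp - wg) ** 2 / (cw ** 2 + eps) \
        + (hp - hg) ** 2 / (ch ** 2 + eps)
```

Penalizing width and height directly (rather than only their aspect ratio, as in CIoU) gives a gradient even for boxes with the correct ratio but wrong size, which is one plausible reason the abstract reports faster convergence and better discrimination of similar-looking parts.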