Authors: Cao Zhi-xiong, Wu Xiao-ling, Luo Xiao-wei, Ling Jie[1] (School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China; Department of Architecture and Civil Engineering, City University of Hong Kong, Hong Kong 999077, China)
Affiliations: [1] School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, Guangdong, China; [2] Department of Architecture and Civil Engineering, City University of Hong Kong, Hong Kong 999077, China
Source: Journal of Guangdong University of Technology, 2023, No. 4, pp. 67-76 (10 pages)
Funding: Open Project of a Key Laboratory of the Ministry of Education (2021-1EQBD-02); Guangdong Province International Science and Technology Cooperation Program (2019A050513010).
Abstract: To address the missed detections and low accuracy of existing helmet-wearing detection algorithms on small and crowded targets, this paper proposes a helmet-wearing detection method based on an improved YOLOv5 and transfer learning. First, because the default prior boxes are not suited to the task, the K-means algorithm is used to cluster prior box sizes appropriate for the detection task. Then, a spatial-channel mixed attention module is introduced in the later stage of the feature extraction network to strengthen the learning of target-relevant weights and suppress the weights of irrelevant background. Further, the judgment metric of the non-maximum suppression (NMS) algorithm in the YOLOv5 post-processing stage is improved to reduce wrongly deleted and missed prediction boxes. The network is then trained with a transfer learning strategy, which overcomes the scarcity of existing data sets and improves the generalization ability of the model. Finally, a cascade judgment framework for helmet-wearing detection deployed in visual sensor networks is proposed. Experimental results show that the improved model reaches an average precision (IoU = 0.5) of 93.6% on the helmet-wearing data set, 5% higher than the original model, and outperforms other state-of-the-art algorithms, improving the accuracy of helmet-wearing detection in construction scenarios.
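The abstract's first step, clustering prior (anchor) box sizes with K-means, follows the common YOLO practice of measuring distance as 1 - IoU between box shapes. The sketch below illustrates that idea only; it is an assumption about the standard technique, not the authors' code, and the function names (`iou_wh`, `kmeans_anchors`) are hypothetical.

```python
# Minimal sketch (assumed standard practice, not the paper's implementation):
# cluster ground-truth (w, h) pairs with K-means under a 1 - IoU distance
# to obtain anchor sizes for YOLOv5.
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors given only widths/heights, aligned at the origin."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    area_b = boxes[:, 0] * boxes[:, 1]
    area_a = anchors[:, 0] * anchors[:, 1]
    return inter / (area_b[:, None] + area_a[None, :] - inter)

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """boxes: (N, 2) array of (w, h); returns k anchor sizes sorted by area."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the closest anchor, i.e. the one with the largest IoU.
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]
```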
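The abstract also states that the NMS judgment metric is modified to reduce wrongly deleted and missed boxes, but does not name the metric. One widely used variant with that goal is DIoU-NMS, which adds a centre-distance penalty to plain IoU; the sketch below shows that variant purely as an illustration and should not be read as the paper's exact method.

```python
# Illustrative DIoU-NMS sketch (assumed example; the paper's actual metric is unspecified).
import numpy as np

def diou(box, boxes):
    """DIoU between one box [x1, y1, x2, y2] and an (M, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area1 = (box[2] - box[0]) * (box[3] - box[1])
    area2 = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = inter / (area1 + area2 - inter)
    # Squared distance between box centres, normalised by the enclosing box diagonal.
    cd = ((box[0] + box[2]) / 2 - (boxes[:, 0] + boxes[:, 2]) / 2) ** 2 + \
         ((box[1] + box[3]) / 2 - (boxes[:, 1] + boxes[:, 3]) / 2) ** 2
    ex1 = np.minimum(box[0], boxes[:, 0]); ey1 = np.minimum(box[1], boxes[:, 1])
    ex2 = np.maximum(box[2], boxes[:, 2]); ey2 = np.maximum(box[3], boxes[:, 3])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9
    return iou - cd / diag

def diou_nms(boxes, scores, thresh=0.5):
    """Greedy NMS that suppresses a box only if its DIoU with a kept box exceeds thresh."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        d = diou(boxes[i], boxes[order[1:]])
        order = order[1:][d <= thresh]
    return keep
```

Because the distance penalty lowers the suppression score for boxes whose centres are far apart, neighbouring true positives in crowded scenes are less likely to be deleted, which matches the stated motivation of reducing false suppression.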
Keywords: helmet wearing detection; YOLOv5; transfer learning; attention mechanism; visual sensor network
Classification code: TP391.41 [Automation and Computer Technology - Computer Application Technology]