YOLOv4 Object Detection Algorithm Based on Res2Net Fused with an Attention Mechanism  (Cited by: 2)



Authors: ZHANG Xiang [1]; LIU Zhenkai; YE Na [1]; ZHAO Yanzhen (School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710311, China)

Affiliation: [1] School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710311, China

Source: Computer Measurement & Control, 2022, No. 9, pp. 213-220, 227 (9 pages)

Funding: Natural Science Basic Research Program of Shaanxi Province (2018JM6080).

Abstract: To address the problems of missed detections, false detections, and poor performance under occlusion in traditional object detection algorithms, a YOLOv4 object detection model with Res2Net fused with an attention mechanism (Res2Net fusion with attention learning YOLOv4, RFAL YOLOv4) is proposed. First, to obtain richer semantic information from the feature maps, hierarchical residual-like connections are constructed within a single residual block: Res2Net replaces the ResNet residual structure in the original YOLOv4 backbone, so the model captures finer-grained features and its receptive field is enlarged. Second, Res2Net is fused with an attention mechanism to extract key feature information and to offset the extra computation introduced by optimizing the backbone. Finally, the CIOU loss is improved to reduce the error between the predicted box and the ground-truth box, which effectively alleviates missed and false detections caused by small or occluded targets. Validation on the public PASCAL VOC dataset shows that the RFAL YOLOv4 model reaches an mAP of 79.5%, 5.5% higher than the original model, and the improved model is more robust.
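The abstract describes hierarchical residual-like connections inside a single residual block combined with an attention module. Below is a minimal sketch of how such a Res2Net-style block fused with SE-style channel attention could look; the class name Res2NetAttentionBlock, the choice of SE attention, and all hyper-parameters are illustrative assumptions, not the authors' released implementation.

```python
# Sketch only: a Res2Net-style hierarchical residual block with SE-style channel
# attention, assumed PyTorch implementation (not the paper's actual code).
import torch
import torch.nn as nn


class Res2NetAttentionBlock(nn.Module):
    def __init__(self, channels: int, scales: int = 4, reduction: int = 16):
        super().__init__()
        assert channels % scales == 0, "channels must be divisible by scales"
        self.scales = scales
        width = channels // scales
        # One 3x3 conv per split except the first, forming the hierarchical
        # residual-like connections inside a single block.
        self.convs = nn.ModuleList([
            nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
            for _ in range(scales - 1)
        ])
        # SE-style channel attention: global pooling followed by two 1x1 convs.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        splits = torch.chunk(x, self.scales, dim=1)
        out = [splits[0]]  # the first split passes through unchanged
        prev = None
        for i, conv in enumerate(self.convs):
            # Each later split is added to the previous output before its 3x3
            # conv, progressively enlarging the effective receptive field.
            y = splits[i + 1] if prev is None else splits[i + 1] + prev
            prev = conv(y)
            out.append(prev)
        y = torch.cat(out, dim=1)
        y = y * self.attention(y)  # reweight channels with the attention map
        return x + y               # residual connection around the whole block
```

Chaining the 3x3 convolutions across channel splits is what enlarges the receptive field within one block, which matches the motivation stated in the abstract; for example, Res2NetAttentionBlock(64) applied to a (1, 64, 52, 52) feature map returns a tensor of the same shape.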
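The abstract also states that the CIOU loss is improved to reduce the error between predicted and ground-truth boxes, but the modification itself is not detailed there. For reference, a sketch of the standard CIoU loss that such an improvement would start from is given below; the (x1, y1, x2, y2) box layout and the function name ciou_loss are assumptions for illustration.

```python
# Sketch of the standard (unmodified) CIoU loss; the paper's specific
# improvement is not described in the abstract and is not reproduced here.
import math
import torch


def ciou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # IoU term from intersection and union areas.
    x1 = torch.max(pred[..., 0], target[..., 0])
    y1 = torch.max(pred[..., 1], target[..., 1])
    x2 = torch.min(pred[..., 2], target[..., 2])
    y2 = torch.min(pred[..., 3], target[..., 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Center-distance penalty: squared center distance over squared diagonal
    # of the smallest enclosing box.
    cx_p = (pred[..., 0] + pred[..., 2]) / 2
    cy_p = (pred[..., 1] + pred[..., 3]) / 2
    cx_t = (target[..., 0] + target[..., 2]) / 2
    cy_t = (target[..., 1] + target[..., 3]) / 2
    rho2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2
    cw = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
    ch = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency penalty.
    w_p = pred[..., 2] - pred[..., 0]
    h_p = pred[..., 3] - pred[..., 1]
    w_t = target[..., 2] - target[..., 0]
    h_t = target[..., 3] - target[..., 1]
    v = (4 / math.pi ** 2) * (torch.atan(w_t / (h_t + eps)) - torch.atan(w_p / (h_p + eps))) ** 2
    alpha = v / (1 - iou + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v
```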

Keywords: object detection; YOLOv4; Res2Net; attention mechanism; CIOU

Classification: TP391.41 (Automation and Computer Technology - Computer Application Technology)

 
