Authors: LEI Xuemei; LI Yuntao
Affiliations: [1] Office of Information Construction and Management, University of Science and Technology Beijing, Beijing 100083, China; [2] School of Automation, University of Science and Technology Beijing, Beijing 100083, China
Source: Chinese Journal of Computer Application (计算机应用文摘), 2024, Issue 15, pp. 52-57.
Fund: National Natural Science Foundation of China (12071025).
Abstract: In complex scenarios, real-time image recognition faces the challenge of balancing accuracy and efficiency, and deep-learning-based image instance segmentation methods are key to solving this problem. This article constructs an instance segmentation network model based on an improved SOLOv2 and proposes a cross-stage fusion backbone network based on position attention, which not only improves instance segmentation accuracy but also reduces the model's computational cost. In addition, a cross-stage mask feature fusion scheme is designed to improve the recognition rate of small targets, and an adaptive minimum-loss matching method is proposed to improve the segmentation accuracy of occluded targets. Finally, performance testing on the COCO dataset shows that the improved SOLOv2 instance segmentation model achieves a segmentation accuracy improvement of over 2.5% compared with other models.
Keywords: image instance segmentation; SOLOv2; position attention module; feature fusion
CLC Number: TP391 [Automation and Computer Technology: Computer Application Technology]
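The abstract's key building block is a position attention module, i.e., spatial self-attention in which each location of a feature map aggregates features from all other locations weighted by their pairwise affinity. The paper's exact module (its projections, normalization, and placement in the SOLOv2 backbone) is not given on this page, so the following is only a minimal NumPy sketch of the general idea; the learned query/key/value projections and the learnable residual weight `gamma` found in typical position attention designs are simplified away or fixed:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(x, gamma=1.0):
    """Minimal spatial self-attention over a (C, H, W) feature map.

    Simplified sketch: learned query/key/value projections are omitted,
    and gamma (normally a learned scalar) is fixed.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)          # (C, N): N = H*W spatial positions
    energy = flat.T @ flat              # (N, N) pairwise position affinities
    attn = softmax(energy, axis=-1)     # row j: weights over all positions
    out = flat @ attn.T                 # each position aggregates all others
    return x + gamma * out.reshape(C, H, W)  # residual connection

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))   # toy feature map: 8 channels, 4x4
y = position_attention(feat)
print(y.shape)  # same shape as the input feature map
```

Because every output position attends to every input position, this kind of module captures long-range spatial context that plain convolutions miss, which is consistent with the abstract's claim of better accuracy on small and occluded targets.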