Author: 蔡吉轮 (Cai Jilun)
Affiliation: [1] School of Automation, Guangdong University of Technology, Guangzhou, Guangdong 510006, China
Source: Scientific and Technological Innovation (《科学技术创新》), 2021, No. 10, pp. 107-109 (3 pages)
Abstract: Traditional image segmentation methods operate only on surface appearance, so the resulting segmentation maps carry insufficient semantic information. With the development of deep convolutional neural networks and the growing availability of data, classifying every pixel of an image with a trained convolutional network achieves higher segmentation accuracy than traditional segmentation methods. Because pooling in a convolutional neural network loses positional information, this paper adds a feature fusion structure to the backbone network and embeds dilated convolutions within that structure; this enriches the fine abstract features of the feature maps and enlarges the receptive field of their pixels, thereby improving segmentation accuracy. Semantic segmentation experiments on the public Pascal VOC 2012 dataset show that the proposed network improves both the mean intersection over union (MIoU) and the pixel accuracy compared with U-Net.
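To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' released code) of a feature fusion block that concatenates an encoder skip feature with an upsampled decoder feature and passes the result through parallel dilated convolutions, which enlarges the receptive field without further pooling. The module name, channel counts, and dilation rates are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn

class DilatedFusionBlock(nn.Module):
    """Fuse an encoder skip feature with an upsampled decoder feature,
    then apply parallel dilated 3x3 convolutions so each output pixel
    sees a larger receptive field (illustrative sketch)."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # One branch per dilation rate; padding = dilation keeps spatial size.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # 1x1 convolution merges the parallel dilated branches.
        self.project = nn.Conv2d(len(dilations) * channels, channels, kernel_size=1)

    def forward(self, skip: torch.Tensor, upsampled: torch.Tensor) -> torch.Tensor:
        x = torch.cat([skip, upsampled], dim=1)           # channel-wise fusion
        x = torch.cat([b(x) for b in self.branches], dim=1)
        return self.project(x)

if __name__ == "__main__":
    block = DilatedFusionBlock(channels=64)
    skip = torch.randn(1, 64, 56, 56)   # encoder feature before pooling
    up = torch.randn(1, 64, 56, 56)     # decoder feature after upsampling
    print(block(skip, up).shape)        # torch.Size([1, 64, 56, 56])

In a U-Net-style network, one such block would replace the plain concatenation at each skip connection, which matches the abstract's description of adding dilated convolutions inside the feature fusion structure of the backbone.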
Keywords: image semantic segmentation; backbone network; feature fusion; convolutional neural network; dilated convolution
Classification code: TP391.4 [Automation and Computer Technology - Computer Application Technology]