Authors: LIU Pan-Pan; AN Dian-Long; FENG Yan [1] (College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266042, China)
Affiliation: [1] College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266042, China
Source: Computer Systems & Applications, 2024, No. 8, pp. 196-204 (9 pages)
Funding: National Natural Science Foundation of China (62172248); Natural Science Foundation of Shandong Province (ZR2021MF098).
Abstract: In computer vision segmentation tasks, Transformer-based image segmentation models need large amounts of image data to reach their best performance, yet medical image data are scarce compared with natural images. Convolution, with its stronger inductive bias, is better suited to medical imaging. To combine the long-range representation learning of the Transformer with the inductive bias of the CNN, this paper designs a residual ConvNeXt module that mimics the Transformer's design structure. The module, composed of depthwise and pointwise convolutions, extracts feature information while greatly reducing the number of parameters, and the receptive field and feature channels are effectively scaled and expanded to enrich the feature information. In addition, an asymmetric 3D U-shaped network, ASUNet, is proposed for brain tumor image segmentation. In the asymmetric U-shaped structure, residual connections concatenate the output features of the last two encoders to expand the number of channels. Finally, deep supervision is applied during upsampling to promote the recovery of semantic information. Experimental results on the BraTS 2020 and FeTS 2021 datasets show that the Dice scores of ET, WT, and TC reach 77.08%, 90.83%, and 83.41%, and 75.63%, 90.45%, and 84.21%, respectively. Comparative experiments show that ASUNet fully competes with Transformer-based models in accuracy while retaining the simplicity and efficiency of standard convolutional neural networks.
Keywords: asymmetric U-shaped structure; inverted bottleneck; deep supervision; ASUNet
Classification codes: TP183 [Automation & Computer Technology: Control Theory and Control Engineering]; TP391.41 [Automation & Computer Technology: Control Science and Engineering]; R739.41 [Medicine & Health: Oncology]
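
The abstract describes a residual ConvNeXt module built from a depthwise convolution followed by pointwise convolutions arranged as an inverted bottleneck, with a residual connection around the block. The following is a minimal PyTorch sketch of such a 3D block for illustration only; the class name, kernel size, and expansion factor are assumptions and do not come from the paper's released code.

# Minimal sketch of a residual ConvNeXt-style 3D block (illustrative, not the paper's implementation).
import torch
import torch.nn as nn

class ResidualConvNeXtBlock3D(nn.Module):
    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        # Depthwise 3D convolution: one filter per channel (groups=channels).
        self.dwconv = nn.Conv3d(channels, channels, kernel_size=7,
                                padding=3, groups=channels)
        self.norm = nn.LayerNorm(channels)
        # Pointwise (1x1x1) convolutions as an inverted bottleneck:
        # expand the channel dimension, apply GELU, then project back.
        self.pwconv1 = nn.Linear(channels, expansion * channels)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x                      # input shape (N, C, D, H, W)
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 4, 1)      # channels-last for LayerNorm/Linear
        x = self.norm(x)
        x = self.pwconv2(self.act(self.pwconv1(x)))
        x = x.permute(0, 4, 1, 2, 3)      # back to (N, C, D, H, W)
        return x + residual               # residual connection

# Example: a 64-channel block applied to a 32^3 feature volume.
# block = ResidualConvNeXtBlock3D(64)
# out = block(torch.randn(1, 64, 32, 32, 32))   # same shape as input

Because the depthwise and pointwise convolutions factor the standard convolution, such a block uses far fewer parameters than a dense 3D convolution of the same kernel size, which matches the parameter-reduction claim in the abstract.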