Gastric cancer pathological image segmentation method based on multi-scale attention fusion network  (cited by: 1)

Authors: Zhang Ting [1]; Qin Hanshu [1]; Zhao Ruoxuan (Information Center, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China; Key Laboratory of Optoelectronic Technique System of the Ministry of Education, Chongqing University, Chongqing 400044, China)

Affiliations: [1] Information Center, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China; [2] Key Laboratory of Optoelectronic Technique System of the Ministry of Education, Chongqing University, Chongqing 400044, China

Published in: Application of Electronic Technique, 2023, No. 9, pp. 46-52 (7 pages)

Funding: Chongqing Medical Scientific Research Project (2023WAJK028); Chongqing Medical University Smart Medicine Research Project (ZHYX202221).

Abstract: In recent years, with the development of deep learning, encoder-decoder image segmentation methods have been increasingly studied and applied in the automated analysis of pathological images. However, because gastric cancer lesions are complex, highly variable in scale, and often have blurred boundaries after slides are digitally stained and scanned, segmentation algorithms designed at a single scale usually fail to delineate lesion boundaries accurately. To improve the segmentation accuracy of gastric cancer lesion images, this paper proposes a multi-scale attention fusion segmentation algorithm built on an encoder-decoder network structure. The encoder uses EfficientNet as the feature extractor; the decoder extracts and fuses features from multiple paths at different levels, realizing deep supervision of the network. At the output stage, spatial and channel attention are applied to screen the multi-scale feature maps, and a combined loss function is used during training to optimize the model. Experimental results show that the method achieves a Dice coefficient of 0.8069 on the SEED dataset, producing finer gastric cancer lesion segmentation than FCN and the UNet family of networks.
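The abstract's headline number is a Dice coefficient of 0.8069 on the SEED dataset. As a minimal sketch of how that metric is computed for a binary segmentation mask (plain Python over flattened masks; this is the standard definition, not the authors' implementation):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) over flattened binary masks.

    pred, target: equal-length sequences of 0/1 pixel labels.
    eps guards against division by zero when both masks are empty.
    """
    inter = sum(p * t for p, t in zip(pred, target))  # overlapping positives
    total = sum(pred) + sum(target)                   # positives in each mask
    return (2.0 * inter + eps) / (total + eps)

# Toy 1-D masks: 3 overlapping positives, 4 positives in each mask.
pred   = [1, 1, 1, 1, 0, 0]
target = [0, 1, 1, 1, 1, 0]
print(round(dice_coefficient(pred, target), 4))  # → 0.75
```

A score of 1.0 means the predicted and ground-truth lesion masks coincide exactly; 0.8069 indicates substantial but imperfect overlap, which is why the paper frames its contribution as refining lesion boundaries.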

Keywords: pathological image; image segmentation; attention fusion

Classification: TP391 [Automation and Computer Technology — Computer Application Technology]

 
