Histopathological image segmentation method based on CNN and Transformer

Authors: DING Weilong[1]; ZONG Zeyong; ZHU Hongbo; XU Lifeng[1] (College of Computer Science & Technology, Zhejiang University of Technology, Hangzhou 310023, China; Department of Pathology, Shanghai Pudong Hospital, Shanghai 201399, China)

Affiliations: [1] College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, Zhejiang, China; [2] Department of Pathology, Shanghai Pudong Hospital, Shanghai 201399, China

Source: Journal of Zhejiang University of Technology, 2024, No. 6, pp. 591-600 (10 pages)

Funding: Zhejiang Provincial Public Welfare Technology Research Program / Industrial Project (LTGY24F020001, LTGY23F020005).

Abstract: In the precise segmentation of tumor cells for digital pathological diagnosis, pathological images exhibit complex backgrounds and highly variable tissue morphology, and often suffer from insufficient samples and class imbalance. To address the poor segmentation accuracy and distorted segmentation boundaries of existing methods, a hybrid-architecture encoder-decoder semantic segmentation network, MixU-Net, is proposed. First, Swin-Transformer modules are introduced into the encoder to enhance the model's ability to capture global information. Second, a multi-scale feature fusion module is designed between the encoder and decoder so that global features and local fine-grained features can be deeply fused. Finally, a weighted Dice Loss is employed as the loss function to strengthen the model's focus on small targets. In ablation and comparative experiments on the Pannuke histopathological image dataset, the proposed method achieves a mean intersection-over-union (mIoU) of 67.33% and a pixel accuracy (aAcc) of 95.05%, improvements of 7.23% and 1.70%, respectively, over the traditional CNN-based U-Net, and it outperforms other deep-learning-based image segmentation methods.
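The abstract names two concrete ingredients that can be sketched compactly: a weighted Dice Loss that up-weights small classes, and the two reported metrics (mIoU and aAcc). The sketch below is an illustration only; the paper's exact weighting scheme and evaluation protocol are not given in the abstract, so the per-class weight vector and the one-hot tensor layout are assumptions.

```python
import numpy as np

def weighted_dice_loss(probs, targets, weights, eps=1e-6):
    """Weighted multi-class Dice loss (a sketch; the paper's exact
    weighting scheme is not specified in the abstract).
    probs:   (C, H, W) softmax probabilities
    targets: (C, H, W) one-hot ground truth
    weights: (C,) per-class weights, larger for small/rare classes
    """
    inter = (probs * targets).sum(axis=(1, 2))
    union = probs.sum(axis=(1, 2)) + targets.sum(axis=(1, 2))
    dice = (2.0 * inter + eps) / (union + eps)      # per-class Dice score
    w = weights / weights.sum()                     # normalize weights
    return float(1.0 - (w * dice).sum())            # 0 = perfect overlap

def miou_and_aacc(pred, gt, num_classes):
    """Mean intersection-over-union and overall pixel accuracy,
    the two metrics reported in the abstract (mIoU, aAcc).
    pred, gt: (H, W) integer class maps.
    """
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union:                                   # skip absent classes
            ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious)), float((pred == gt).mean())
```

For example, a prediction identical to the one-hot ground truth yields a weighted Dice loss of 0, and `miou_and_aacc` on a 2x2 map with one mislabeled pixel of two classes returns mIoU 7/12 and aAcc 0.75.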

Keywords: image segmentation; deep learning; histopathological image; Transformer; convolutional neural network

CLC number: TP391 (Automation and Computer Technology: Computer Application Technology)

 
