Adaptive modal fusion dual encoder MRI brain tumor segmentation network

Authors: Zhang Yihan; Bai Zhengyao[1]; You Yilin; Li Zekai (School of Information Science and Engineering, Yunnan University, Kunming 650500, China)

Affiliation: [1] School of Information Science and Engineering, Yunnan University, Kunming 650500, China

Source: Journal of Image and Graphics (《中国图象图形学报》), 2024, No. 3, pp. 768-781 (14 pages)

Funding: Yunnan Provincial Major Science and Technology Project (202002AD080001).

Abstract: Objective Assessing the malignancy of a tumor is a challenging task in clinical diagnosis. In magnetic resonance imaging (MRI), brain tumors exhibit widely varying shapes and sizes, and their boundaries are blurred, which makes tumor segmentation difficult. To effectively assist clinicians in tumor assessment and diagnosis and to improve brain tumor segmentation accuracy, this paper proposes D3D-Net (double 3D network), an adaptive modal-fusion dual-encoder segmentation network. Method The proposed network uses multiple encoders together with a targeted feature-fusion strategy. Dual-layer encoders fully extract image features from different modality combinations, and in the encoding stage a dedicated fusion strategy integrates the feature information from the upper and lower sub-encoders while removing redundant features. In addition, the encoder-decoder path employs dilated multi-fiber modules to capture multi-scale image features without extra computational cost, and attention gates are introduced to preserve fine-grained detail. Result D3D-Net was trained and tested on the BraTS2018 (brain tumor segmentation 2018), BraTS2019, and BraTS2020 datasets, including ablation and comparative experiments. The BraTS2018 training set, an open dataset released for the 2018 Brain Tumor Segmentation Challenge, consists of MR images of 210 high-grade glioma (HGG) and 75 low-grade glioma (LGG) patients, with 66 cases in the validation set; BraTS2019 adds 49 HGG cases and 1 LGG case on top of BraTS2018. On BraTS2018, the model improves the average Dice scores for enhancing tumor, whole tumor, and tumor core by 3.6%, 1.0%, and 11.5% over 3D U-Net, and by 2.2%, 0.2%, and 0.1% over DMF-Net (dilated multi-fiber network). On BraTS2019, the average Dice scores for enhancing tumor, whole tumor, and tumor core improve by 2.2%, 0.6%, and 7.1% over 3D U-Net. On BraTS2020, they improve by 2.5%, 1.9%, and 2.2% over 3D U-Net. Conclusion The proposed dual-encoder fusion network fully fuses multi-modal features and can effectively segment small tumor regions.
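The abstract credits attention gates with preserving fine-grained detail in the decoder. The paper's exact gate formulation is not given in this record, so the following is a minimal pure-Python sketch of the standard additive attention gate from Attention U-Net style architectures, which D3D-Net plausibly follows; the weight matrices `w_x`, `w_g`, `w_psi` and the toy feature values are illustrative placeholders, not the authors' parameters:

```python
import math

def attention_gate(x, g, w_x, w_g, w_psi, b_psi=0.0):
    """Additive attention gate over flattened voxels (Attention U-Net style).
    x: skip-connection features, a list of channels, each a list of N voxel values
    g: gating features from the decoder, same layout with its own channel count
    w_x, w_g: 1x1-conv weights as matrices mapping input channels -> C_int
    w_psi: weights collapsing the C_int intermediate channels to one logit per voxel
    Returns x with every voxel scaled by its attention coefficient in (0, 1).
    """
    n = len(x[0])          # number of voxels
    c_int = len(w_x)       # intermediate channel count
    alphas = []
    for v in range(n):
        # 1x1-conv projections of skip and gating features, summed, then ReLU
        q = [max(sum(w_x[i][c] * x[c][v] for c in range(len(x))) +
                 sum(w_g[i][c] * g[c][v] for c in range(len(g))), 0.0)
             for i in range(c_int)]
        logit = sum(w_psi[i] * q[i] for i in range(c_int)) + b_psi
        alphas.append(1.0 / (1.0 + math.exp(-logit)))  # sigmoid
    # re-weight the skip features voxel-wise; detail survives where alpha is high
    return [[x[c][v] * alphas[v] for v in range(n)] for c in range(len(x))]

# Toy usage: 2-channel skip features and 1-channel gating signal over 2 voxels.
x = [[1.0, -2.0], [0.5, 3.0]]
g = [[0.3, -1.0]]
out = attention_gate(x, g, w_x=[[0.1, 0.2]], w_g=[[0.5]], w_psi=[0.7])
```

Because the attention coefficient is a sigmoid output in (0, 1), the gate can only attenuate skip features, never amplify them; the network learns to attenuate irrelevant background while passing tumor-boundary detail through.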

Keywords: brain tumor segmentation; multi-modal fusion; dual encoder; MRI; attention gate

CLC Number: TP391.4 [Automation and Computer Technology — Computer Application Technology]
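The results in the abstract are reported as Dice scores over the enhancing-tumor, whole-tumor, and tumor-core regions. For reference, a minimal sketch of the binary Dice coefficient conventionally used to score BraTS segmentations (not the authors' evaluation code):

```python
def dice_score(pred, target, eps=1e-7):
    """Binary Dice coefficient: 2|A∩B| / (|A| + |B|).
    pred, target: flat sequences of 0/1 labels of equal length.
    eps guards against division by zero when both masks are empty."""
    inter = sum(p and t for p, t in zip(pred, target))  # |A∩B|
    total = sum(pred) + sum(target)                     # |A| + |B|
    return (2.0 * inter + eps) / (total + eps)

# Toy masks: 2 overlapping voxels out of 3 predicted and 3 true
a = [1, 1, 0, 0, 1, 0]
b = [1, 0, 0, 0, 1, 1]
# intersection = 2, |a| = 3, |b| = 3  ->  Dice = 4/6 ≈ 0.667
```

In the BraTS setting the same score is computed per region by first binarizing the multi-class label map (e.g. whole tumor = any tumor label) and then averaging over cases.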

 
