Research on multi-source cardiac image segmentation method based on modal interaction learning (Cited by: 1)

Authors: ZHONG Qiaoxin; ZHAO Yizhong; ZHANG Feiyan; LU Xuesong (School of Biomedical Engineering, South-Central Minzu University, Wuhan 430074, China)

Affiliation: [1] School of Biomedical Engineering, South-Central Minzu University, Wuhan 430074, China

Source: Chinese Journal of Magnetic Resonance Imaging, 2024, No. 4, pp. 145-152 (8 pages)

Funding: National Natural Science Foundation of China (Grant No. 61002046); Natural Science Foundation of Hubei Province (Grant No. 2016CFB489).

Abstract: Objective: To design and build an artificial intelligence deep learning network for multimodal cardiac magnetic resonance (CMR) image segmentation and to improve the Dice coefficient. Materials and Methods: A retrospective analysis was performed on the public dataset of the 2019 multi-sequence CMR segmentation challenge, which contains CMR image data of 45 patients in the balanced steady-state free precession (bSSFP), late gadolinium enhancement (LGE), and T2-weighted imaging (T2WI) modalities. A new dual-stream U-shaped network framework was constructed to segment CMR images of the bSSFP and LGE modalities, as well as of the bSSFP and T2WI modalities. In the encoding stage, unregistered images of each modality are alternately fed into their respective branches for feature learning; the resulting feature maps then flow into shared layers, where multi-modal information is exchanged and mutually complemented, and the shared features finally flow back out into the respective branches for decoding and output. Five-fold cross-validation experiments were performed on the 45-patient CMR dataset, segmenting the bSSFP and LGE modalities and the bSSFP and T2WI modalities separately. Model performance was evaluated with the Dice coefficient, and the Wilcoxon signed-rank test was used to test differences between models. Results: In the segmentation experiments on the bSSFP and LGE modalities, the proposed method significantly improved the average Dice coefficient on the bSSFP modality compared with the traditional UNet model and the recent Swin-Unet model (P<0.001); on the LGE modality, the average Dice coefficient was significantly improved compared with the traditional UNet model (P<0.001), the Swin-Unet model (P=0.001), and the dual-stream UNet (P=0.021). In the segmentation experiments on the bSSFP and T2WI modalities, the proposed method significantly improved the average Dice coefficient on the bSSFP modality compared with the UNet, Swin-Unet, and dual-stream UNet models (P<0.001); on the T2WI modality, the average Dice coefficient was significantly improved compared with the UNet model (P<0.001) and improved compared with the Swin-Unet model (P=0.025). Conclusions: The proposed dual-stream U-shaped network framework provides an effective method for multimodal CMR image segmentation. It improves the Dice coefficients of the bSSFP and LGE modalities and of the bSSFP and T2WI modalities, effectively addresses the large inter-individual anatomical variation and the intensity inconsistency among multimodal CMR images, and improves the generalization ability of the model.
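The dual-stream U-shaped design described in the abstract (modality-specific encoder/decoder branches that exchange information through shared layers) can be sketched roughly as follows. This is a minimal PyTorch sketch assuming a two-level U-Net per branch with a single shared bottleneck block; the class and function names (DualStreamUNet, conv_block), channel widths, and depth are illustrative assumptions and do not reproduce the authors' exact architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU, as in a standard UNet stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class DualStreamUNet(nn.Module):
    """Illustrative dual-stream U-shaped network: two modality-specific branches
    that interact through a shared bottleneck block (hypothetical layout)."""
    def __init__(self, in_ch=1, base=32, n_classes=4):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        # Modality-specific encoders (one per input modality, e.g. bSSFP and LGE)
        self.enc1_a, self.enc1_b = conv_block(in_ch, base), conv_block(in_ch, base)
        self.enc2_a, self.enc2_b = conv_block(base, base * 2), conv_block(base, base * 2)
        # Shared bottleneck: the same weights process features from both branches
        self.shared = conv_block(base * 2, base * 4)
        # Modality-specific decoders with skip connections
        self.up2_a = nn.ConvTranspose2d(base * 4, base * 2, 2, 2)
        self.up2_b = nn.ConvTranspose2d(base * 4, base * 2, 2, 2)
        self.dec2_a, self.dec2_b = conv_block(base * 4, base * 2), conv_block(base * 4, base * 2)
        self.up1_a = nn.ConvTranspose2d(base * 2, base, 2, 2)
        self.up1_b = nn.ConvTranspose2d(base * 2, base, 2, 2)
        self.dec1_a, self.dec1_b = conv_block(base * 2, base), conv_block(base * 2, base)
        self.head_a, self.head_b = nn.Conv2d(base, n_classes, 1), nn.Conv2d(base, n_classes, 1)

    def _branch(self, x, enc1, enc2, up2, dec2, up1, dec1, head):
        f1 = enc1(x)                      # full-resolution features
        f2 = enc2(self.pool(f1))          # half resolution
        s = self.shared(self.pool(f2))    # quarter resolution, shared weights
        d2 = dec2(torch.cat([up2(s), f2], dim=1))
        d1 = dec1(torch.cat([up1(d2), f1], dim=1))
        return head(d1)

    def forward(self, x_a, x_b):
        # Unregistered images of each modality flow through their own branch;
        # both branches pass through the same shared block, so gradients from
        # either modality update the shared weights (multi-modal interaction).
        out_a = self._branch(x_a, self.enc1_a, self.enc2_a, self.up2_a,
                             self.dec2_a, self.up1_a, self.dec1_a, self.head_a)
        out_b = self._branch(x_b, self.enc1_b, self.enc2_b, self.up2_b,
                             self.dec2_b, self.up1_b, self.dec1_b, self.head_b)
        return out_a, out_b

# Example: unregistered bSSFP and LGE slices of the same patient (dummy tensors)
net = DualStreamUNet(in_ch=1, base=32, n_classes=4)
bssfp, lge = torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)
seg_bssfp, seg_lge = net(bssfp, lge)   # each: (2, 4, 128, 128)
```

Because the same shared block sees features from both branches, either modality can compensate for information missing in the other, which is one simple way to realize the modal interaction the abstract describes.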
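The evaluation relies on the Dice coefficient and the Wilcoxon signed-rank test. Below is a generic NumPy/SciPy sketch of how such per-case scores and a paired significance test could be computed; the score arrays are hypothetical placeholder numbers, not results from the paper.

```python
import numpy as np
from scipy.stats import wilcoxon

def dice_coefficient(pred, target, label):
    """Dice = 2|A ∩ B| / (|A| + |B|) for one structure label in two label maps."""
    a = (pred == label)
    b = (target == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0

# Per-case Dice scores of two models on the same test cases (hypothetical numbers)
dice_model_a = np.array([0.91, 0.88, 0.93, 0.90, 0.87])
dice_model_b = np.array([0.86, 0.84, 0.90, 0.88, 0.83])

# Paired, non-parametric test of whether the two models' scores differ
stat, p_value = wilcoxon(dice_model_a, dice_model_b)
print(f"Wilcoxon signed-rank: statistic={stat:.3f}, p={p_value:.4f}")
```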

Keywords: myocardial infarction; cardiomyopathy; cardiovascular disease; multi-source cardiac image segmentation; deep neural network; modal interaction learning; magnetic resonance imaging

CLC number: R445.2 [Medicine & Health — Imaging and Nuclear Medicine]; R541.7 [Medicine & Health — Diagnostics]

 
