Prediction of IDH mutations in glioma based on MRI multiparametric image fusion and DenseNet network (cited by: 2)


Authors: HU Zhenyuan; WEI Wei; HU Wenzhong; MA Menghang; LI Yan; WU Xusha; YIN Hong; XI Yibin

Affiliations: [1] School of Electronic Information, Xi'an Polytechnic University, Xi'an 710600, China; [2] Department of Radiology, Xijing Hospital, Air Force Medical University (Fourth Military Medical University), Xi'an 710032, China; [3] Medical Imaging Center, Xi'an People's Hospital (Xi'an Fourth Hospital), Xi'an 710004, China

Source: Chinese Journal of Magnetic Resonance Imaging, 2023, No. 7, pp. 10-17 (8 pages)

Funding: Natural Science Basic Research Program of Shaanxi Province (Nos. 2023-JC-YB-682, 2023-JC-ZD-58); Xi'an Science and Technology Plan project for university and research-institute personnel serving enterprises (No. 22GXFW0036).

Abstract: Objective: To develop a high-accuracy prediction model, based on a deep learning DenseNet network and multimodal fusion technology, for the preoperative prediction of isocitrate dehydrogenase (IDH) gene mutation status in glioma patients. Materials and Methods: Preoperative multi-sequence MRI images of 256 patients (155 IDH wild-type and 101 IDH mutant) consecutively admitted to Xijing Hospital, Air Force Medical University, from January 2012 to September 2016 were retrospectively analyzed. Tumor regions of interest were delineated on T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and contrast-enhanced T1WI sequences. A deep learning convolutional neural network was used to extract and fuse the multimodal MRI features, and model performance was quantitatively compared between the multimodal fusion method and simple concatenation of the multimodal features. Results: Multimodal fusion showed superior predictive performance to simple concatenation of the individual modalities, achieving areas under the receiver operating characteristic curve of 0.903 [95% confidence interval (CI): 0.845-0.961] in the training set and 0.904 (95% CI: 0.842-0.966) in the test set; accuracy reached 91.3% and 88.7%, sensitivity 86.4% and 90.5%, and specificity 94.5% and 87.5%, respectively. Model consistency was verified with calibration curves, which lay close to the diagonal, indicating good predictive calibration. The DeLong test showed a statistically significant difference in performance between the multimodal fusion method and the ablation (simple concatenation) method (P < 0.05), with the former superior. Conclusions: By integrating multimodal MRI information of the tumor, an MRI multimodal fusion model based on a deep learning DenseNet network enables non-invasive, low-cost preoperative prediction of IDH gene status in glioma.
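The discriminative performance reported above is summarized by the area under the ROC curve (AUC). As a minimal illustration of how such a figure is computed (not the authors' code; the labels and scores below are hypothetical), the AUC of a binary classifier equals the Mann-Whitney U statistic normalized by the number of positive-negative pairs:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored higher than a randomly
    chosen negative case (ties count as one half)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # pairwise comparison of every positive score against every negative score
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical labels (1 = IDH mutant) and model output scores
y = [1, 1, 1, 0, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]
print(round(roc_auc(y, s), 3))  # → 0.917
```

The paper additionally compares the AUCs of the fusion and concatenation models with DeLong's test, which accounts for the correlation between two AUCs estimated on the same cases.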

Keywords: glioma; deep learning; intelligent healthcare; magnetic resonance imaging; multimodal fusion; isocitrate dehydrogenase

CLC classification: R445.2 [Medicine & Health — Medical Imaging and Nuclear Medicine]; R739.41 [Medicine & Health — Diagnostics]
