Deep learning models for automatic classification of echocardiographic views

Authors: CHEN Wenwen, ZHU Ye, ZHANG Yiwei, WU Chun, LI Yuman, ZHANG Ziming, SUN Zhenxing, XIE Mingxing, ZHANG Li

Affiliations: [1] Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; [2] Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China; [3] Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China

Source: Chinese Journal of Medical Imaging Technology (《中国医学影像技术》), 2024, Issue 8, pp. 1124-1129 (6 pages)

Funding: National Key R&D Program of China (2022YFF0706504); National Natural Science Foundation of China (82230066, 82371991, 82302226, 82151316)

Abstract: Objective To observe the value of deep learning (DL) models for automatic classification of echocardiographic views. Methods A total of 100 patients after heart transplantation were retrospectively enrolled and divided into training, validation and test sets at a ratio of 7∶2∶1. ResNet18, ResNet34, Swin Transformer and Swin Transformer V2 models were established based on the 2D apical two-chamber view, 2D apical three-chamber view, 2D apical four-chamber view, 2D subcostal view, parasternal long-axis view of the left ventricle, short-axis view of the great arteries, short-axis views of the left ventricle at the apical, papillary muscle and mitral valve levels, as well as 3D and CDFI echocardiographic views. Accuracy, precision, recall, F1 score and the confusion matrix were used to evaluate the performance of each model for automatically classifying echocardiographic views. An interactive interface was designed with Qt Designer and deployed on the desktop. Results All models performed well for automatically classifying echocardiographic views in the test set, with relatively poor performance for the 2D short-axis views of the left ventricle and superior performance for the 3D and CDFI views. Swin Transformer V2 was the optimal model, with accuracy, precision, recall and F1 score of 92.56%, 89.01%, 89.97% and 89.31%, respectively; it also had the highest diagonal values in the confusion matrix and showed the clearest separation of views in the t-SNE plot. Conclusion DL models had good performance for automatically classifying echocardiographic views, with the Swin Transformer V2 model performing best. Using an interactive classification interface can improve the interpretability of prediction results to some extent.
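The abstract does not include the authors' training pipeline. The sketch below only illustrates the general approach it describes: fine-tuning a pretrained Swin Transformer V2 for classification of the 11 listed views and reporting accuracy, precision, recall, F1 score and the confusion matrix. The folder layout, input size, hyperparameters and the torchvision/scikit-learn calls are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumed setup, not the published code): Swin Transformer V2
# fine-tuning for 11-class echocardiographic view classification.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

NUM_CLASSES = 11  # A2C, A3C, A4C, subcostal, PLAX, great-artery SAX, 3 LV SAX levels, 3D, CDFI
device = "cuda" if torch.cuda.is_available() else "cpu"

tf = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
# Assumed directory layout: views/{train,test}/<view_name>/*.png
train_ds = datasets.ImageFolder("views/train", transform=tf)
test_ds = datasets.ImageFolder("views/test", transform=tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)
test_dl = DataLoader(test_ds, batch_size=32, shuffle=False, num_workers=4)

# Swin Transformer V2 (tiny) with ImageNet weights; replace the head for 11 views.
model = models.swin_v2_t(weights=models.Swin_V2_T_Weights.IMAGENET1K_V1)
model.head = nn.Linear(model.head.in_features, NUM_CLASSES)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for epoch in range(10):  # epoch count is illustrative
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Evaluation with the metrics reported in the abstract (macro-averaged).
model.eval()
preds, labels = [], []
with torch.no_grad():
    for x, y in test_dl:
        preds.extend(model(x.to(device)).argmax(1).cpu().tolist())
        labels.extend(y.tolist())
print("accuracy :", accuracy_score(labels, preds))
print("precision:", precision_score(labels, preds, average="macro"))
print("recall   :", recall_score(labels, preds, average="macro"))
print("F1 score :", f1_score(labels, preds, average="macro"))
print(confusion_matrix(labels, preds))

The per-class confusion matrix printed at the end corresponds to the diagonal-value comparison mentioned in the abstract; the ResNet18/ResNet34 baselines would follow the same pipeline with a different backbone and classifier attribute.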

Keywords: heart transplantation; echocardiography; deep learning

CLC classification: R654.2 [Medicine & Health—Surgery]; R540.45 [Medicine & Health—Clinical Medicine]

 
