Affiliations: [1] Department of Radiology, First Affiliated Hospital, Army Medical University (Third Military Medical University), Chongqing 400038, China; [2] Chongqing Zhijian Life Technology Co., Ltd., Chongqing 401329, China; [3] Department of Radiology, Chongqing University Three Gorges Hospital, Chongqing 404000, China
Source: Journal of Army Medical University, 2023, No. 21, pp. 2266-2274 (9 pages)
Fund: Emergency Science and Technology Project for Clinical Treatment of COVID-19 Infection of the First Affiliated Hospital of Army Medical University (2023XGIIT06).
Abstract: Objective To establish an AI model based on chest CT images for rapid classification prediction of bacterial, fungal, and viral (including COVID-19) pneumonia. Methods Chest CT data of 559 patients with bacterial, fungal, or non-COVID-19 viral pneumonia admitted to the First Affiliated Hospital of Army Medical University from 2013 to 2020, and of 53 COVID-19 patients admitted to Chongqing University Three Gorges Hospital from January to December 2020, were collected and analyzed retrospectively. First, 4 typical deep neural networks (Resnet_18, Efficientnet_b5, ViT, and Swin-Transformer) were used to construct image-level three-class and four-class prediction models, and the optimal model was selected by validation on an independent test set. Then, the effect of building the dataset from single images versus three fused images was analyzed. Finally, patient-level classification was performed with 2 methods: voting by the proportion of image-level class predictions, and a random forest classifier. Precision, recall, specificity, F1 score, AUC, and accuracy were used to evaluate model performance and screen out the best-performing AI prediction model. Results The Swin-Transformer model performed best in image-level classification, with a three-class accuracy of 0.932 and a four-class accuracy of 0.948. When datasets built from single images and from three fused images were compared, the fused-image model, named Swin-Transformer_C, further improved performance: on the test set, its three-class accuracy and AUC were 0.931 and 0.989, and its four-class accuracy and AUC were 0.952 and 0.990, respectively. For patient-level classification with Swin-Transformer_C, the random forest method was more effective, with a three-class accuracy and AUC of 0.984 and 0.987, and a four-class accuracy and AUC of 0.967 and 0.971, respectively. The other 3 networks (Resnet_18, Efficientnet_b5, and ViT) also achieved good results, but their overall performance was lower than that of the Swin-Transformer network. Conclusion Compared with the other models, the deep learning model Swin-Transformer_C built on fused image data performs best in image-level classification, and combined with a random forest classifier it also achieves the best performance in patient-level classification. These findings indicate that deep learning can be used for rapid classification prediction of pneumonia caused by different pathogens.
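The Methods compare datasets built from single CT images against datasets built from three fused images. The abstract does not specify the fusion scheme; a common approach, sketched below as an assumption only (the function name and edge handling are hypothetical, not taken from the paper), is to stack each slice with its two neighbours as the three channels of one network input:

```python
import numpy as np

def fuse_adjacent_slices(volume: np.ndarray, index: int) -> np.ndarray:
    """Stack a CT slice with its two neighbours into a 3-channel image.

    `volume` is a (num_slices, H, W) array of normalised CT slices.
    Edge slices are handled by repeating the boundary slice.
    """
    lo = max(index - 1, 0)
    hi = min(index + 1, volume.shape[0] - 1)
    # Channels ordered (previous, current, next), mimicking an RGB input
    # so that standard pretrained 3-channel backbones can consume it.
    return np.stack([volume[lo], volume[index], volume[hi]], axis=0)
```

The resulting (3, H, W) array has the same layout as an RGB image, which lets the fused input reuse ImageNet-pretrained weights without architectural changes.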
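The patient-level step aggregates image-level predictions either by voting on the proportion of a patient's images assigned to each class, or by passing those proportions to a random forest classifier. The sketch below shows only the proportion features and the voting route; the function names and exact feature design are assumptions for illustration, not the paper's implementation (the random forest would simply be trained on the `patient_features` vectors):

```python
import numpy as np

def patient_features(image_probs: np.ndarray) -> np.ndarray:
    """Aggregate per-image class probabilities into one patient-level
    feature vector: the fraction of images predicted as each class.

    `image_probs` has shape (num_images, num_classes).
    """
    preds = image_probs.argmax(axis=1)                     # per-image labels
    counts = np.bincount(preds, minlength=image_probs.shape[1])
    return counts / counts.sum()                           # class proportions

def vote(image_probs: np.ndarray) -> int:
    """Patient-level prediction by plurality vote over image labels."""
    return int(patient_features(image_probs).argmax())
```

Voting needs no training, while a classifier fed the same proportion vector can learn non-trivial decision boundaries (e.g. a patient whose images split 40/60 between two classes), which may explain the random forest's edge in the reported results.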