Two-stage deep transfer learning for human breast ultrasound computer-aided diagnosis  (Citations: 7)


Authors: Gong Ronglin; Shi Jun[1]; Zhou Weijun[2]; Wang Cheng (School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China; Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei 230032, China)

Affiliations: [1] School of Communication and Information Engineering, Shanghai University, Shanghai 200444; [2] Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei 230032

Source: Journal of Image and Graphics, 2022, No. 3, pp. 898-910 (13 pages)

Funding: National Natural Science Foundation of China (81830058, 81627804).

Abstract: Objective: To improve the performance of computer-aided diagnosis (CAD) models for breast cancer based on single-modal B-mode ultrasound (BUS), we propose a breast ultrasound CAD algorithm based on two-stage deep transfer learning (TSDTL), which transfers the effective information in elastography ultrasound (EUS) images into the BUS-based breast cancer CAD model to further improve its performance. Method: In the first stage of deep transfer learning, the dual-modal ultrasound image reconstruction task is formulated as a self-supervised learning task to train a correlation multi-modal deep convolutional neural network, realizing interactive information transfer between BUS and EUS images. In the second stage, under the implicit learning using privileged information (LUPI) paradigm, a breast tumor classification task is performed on the dual-modal ultrasound images; classification guided by the label information further strengthens feature fusion and information interaction between the two modalities. The classification network channel corresponding to BUS is then fine-tuned with single-modal BUS data to obtain the final BUS-based breast cancer classification model. Results: The algorithm was evaluated on a dual-modal breast tumor ultrasound dataset. By transferring information from EUS images, TSDTL achieved an average classification accuracy of 87.84±2.08%, average sensitivity of 88.89±3.70%, average specificity of 86.71±2.21%, and average Youden index of 75.60±4.07% on the BUS-based breast cancer diagnosis task, outperforming classification models trained directly on single-modal BUS as well as several typical transfer learning algorithms. Conclusion: Through two-stage deep transfer learning, the proposed TSDTL algorithm effectively transfers information from EUS images into the BUS-based breast cancer CAD model, improving its diagnostic performance and showing potential feasibility for practical application.

Objective: B-mode ultrasound (BUS) provides information about the structure and morphology of human breast lesions, while elastography ultrasound (EUS) provides additional bio-mechanical information. Dual-modal ultrasound imaging can therefore effectively improve the accuracy of human breast cancer diagnosis, while a single-modal ultrasound-based computer-aided diagnosis (CAD) model still has its own potential applications. Deep transfer learning is an important branch of transfer learning and can be used to guide information transfer between EUS and BUS images. However, the clinical image samples available for training deep learning models are limited due to the high cost of data collection and annotation. Self-supervised learning (SSL) is an effective solution that has demonstrated its potential in a variety of medical image analysis tasks. In the SSL pipeline, the backbone network is trained on a pretext task in which the supervision information is generated from the training samples without manual annotation; the weight parameters of the trained backbone network are then transferred to the downstream network for further fine-tuning on a small set of annotated samples. A correlation multi-modal deep convolutional neural network (CorrMCNN) is accordingly trained to perform a self-supervised image reconstruction task. During training, the model transfers effective information between the two modalities by optimizing a correlation loss through SSL-based deep transfer learning. Since the BUS and EUS scans of a given patient cover the same lesion area and are acquired simultaneously, the images come in pairs and share labels. Learning using privileged information (LUPI) is a supervised transfer learning paradigm for paired source-domain (privileged information) and target-domain data with shared labels. It can exploit the intrinsic knowledge correlation between the paired source-domain and target-domain data, which guides knowledge transfer.
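The correlation loss at the heart of stage 1 can be illustrated with a minimal sketch. This is not the authors' implementation: the feature shapes, the per-dimension Pearson formulation, and the function name `correlation_loss` are assumptions made for illustration. The idea is that paired BUS/EUS feature batches should be maximally correlated, so minimizing the (negative) correlation drives cross-modal information transfer.

```python
# Hypothetical sketch of a cross-modal correlation loss (not the paper's
# exact CorrMCNN objective). Feature batches from the two modalities have
# shape (batch, dim); the loss is the negative mean per-dimension Pearson
# correlation, so it lies in [-1, 1] and reaches -1 when the modalities'
# features are perfectly correlated.
import numpy as np

def correlation_loss(f_bus: np.ndarray, f_eus: np.ndarray, eps: float = 1e-8) -> float:
    """Negative mean per-dimension Pearson correlation of paired batches."""
    # Center each feature dimension over the batch.
    b = f_bus - f_bus.mean(axis=0, keepdims=True)
    e = f_eus - f_eus.mean(axis=0, keepdims=True)
    # Per-dimension covariance over the product of standard deviations.
    num = (b * e).sum(axis=0)
    den = np.sqrt((b ** 2).sum(axis=0) * (e ** 2).sum(axis=0)) + eps
    return float(-np.mean(num / den))

# Identical feature batches are perfectly correlated, so the loss is ~ -1.
x = np.random.default_rng(0).normal(size=(16, 8))
print(round(correlation_loss(x, x), 4))  # ≈ -1.0
```

In a full pipeline this loss would be minimized jointly with the dual-modal reconstruction objective, after which the BUS branch alone is fine-tuned on labeled single-modal data (stage 2).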

Keywords: B-mode ultrasound imaging; elastography ultrasound; breast cancer computer-aided diagnosis; learning using privileged information (LUPI); deep transfer learning; self-supervised learning (SSL)

Classification: TP391.4 [Automation and Computer Technology — Computer Application Technology]

 
