Breast tumor grading network based on adaptive fusion and microscopic imaging

Cited by: 1

Authors: Huang Pan; He Peng[1]; Yang Xing; Luo Jiayang; Xiao Hualiang; Tian Sukun; Feng Peng[1]

Affiliations: [1] Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing 400044, China; [2] College of Computer and Network Security, Chengdu University of Technology, Chengdu, Sichuan 610000, China; [3] Department of Pathology, Daping Hospital, Army Medical University, Chongqing 400037, China; [4] School of Mechanical Engineering, Shandong University, Jinan, Shandong 250000, China

Source: Opto-Electronic Engineering, 2023, No. 1, pp. 69-81 (13 pages)

Funding: National Key R&D Program of China (2019YFC0605203); National Natural Science Foundation of China (52105265); Chongqing Basic Research and Frontier Exploration Project (cstc2020jcyj-msxmX0553)

Abstract: Tumor grading based on microscopic imaging is critical for the diagnosis and prognosis of breast cancer, and the diagnosis demands both high accuracy and interpretability. Deep networks built from attention-integrated CNN blocks currently offer strong inductive bias but poor interpretability, whereas deep networks built from ViT blocks are more interpretable but have weaker inductive bias. We therefore propose an end-to-end deep network that adaptively fuses ViT blocks with attention-integrated CNN blocks. Existing model fusion methods, however, suffer from negative fusion: they cannot guarantee that both the ViT blocks and the attention-integrated CNN blocks retain good feature representations, and the high similarity between the two representations introduces substantial redundancy, which degrades the fusion. To address this, we propose an adaptive model fusion method consisting of multi-objective optimization, an adaptive feature representation metric, and adaptive feature fusion, which significantly improves the model's fusion capability. Experiments show that our model reaches 95.14% accuracy, 9.73% better than ViT-B/16 and 7.6% better than FABNet. Its visualization maps also focus more on regions of nuclear atypia (e.g., giant nuclei, pleomorphic nuclei, multiple nuclei, and dark nuclei), agreeing better with the regions pathologists attend to. Overall, the proposed model outperforms state-of-the-art models in both accuracy and interpretability.
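The abstract names three components: multi-objective optimization, an adaptive feature representation metric, and adaptive feature fusion. Below is a minimal PyTorch sketch of one plausible reading of such a fusion head, not the paper's implementation: learned gates weight the two branch features (adaptive fusion), auxiliary per-branch heads keep each branch discriminative against negative fusion (multi-objective loss), and a cosine-similarity term measures redundancy between the branches. All names, dimensions, and loss weights here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusionHead(nn.Module):
    """Illustrative sketch (not the paper's code): gated fusion of pooled
    ViT and attention-CNN features, with auxiliary per-branch heads for a
    multi-objective loss and a cosine-similarity redundancy measure."""

    def __init__(self, dim: int = 768, num_classes: int = 3):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 2)           # adaptive fusion weights
        self.head = nn.Linear(dim, num_classes)     # prediction from fused features
        self.aux_vit = nn.Linear(dim, num_classes)  # keeps the ViT branch discriminative
        self.aux_cnn = nn.Linear(dim, num_classes)  # keeps the CNN branch discriminative

    def forward(self, f_vit, f_cnn):
        # f_vit, f_cnn: (batch, dim) pooled features from the two branches.
        w = torch.softmax(self.gate(torch.cat([f_vit, f_cnn], dim=-1)), dim=-1)
        fused = w[:, :1] * f_vit + w[:, 1:] * f_cnn
        # High cosine similarity means redundant branches; penalizing it
        # pushes the two representations to stay complementary.
        redundancy = F.cosine_similarity(f_vit, f_cnn, dim=-1).abs().mean()
        return self.head(fused), self.aux_vit(f_vit), self.aux_cnn(f_cnn), redundancy

# Usage with a multi-objective loss (the 0.5 and 0.1 weights are placeholders):
model = AdaptiveFusionHead()
f_vit, f_cnn = torch.randn(4, 768), torch.randn(4, 768)
labels = torch.randint(0, 3, (4,))
logits, aux_v, aux_c, red = model(f_vit, f_cnn)
loss = (F.cross_entropy(logits, labels)
        + 0.5 * (F.cross_entropy(aux_v, labels) + F.cross_entropy(aux_c, labels))
        + 0.1 * red)
loss.backward()
```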

Keywords: microscopic imaging; interpretability; deep learning; adaptive fusion; breast cancer; tumor grading

Classification code: TP391.4 [Automation and Computer Technology - Computer Application Technology]

 
