Authors: WANG Shuaikun; ZHOU Zhiyong[2]; HU Jisu; QIAN Xusheng; GENG Chen[2]; CHEN Guangqiang[3]; JI Jiansong[4]; DAI Yakang[2,5]
Affiliations: [1] Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Suzhou, Jiangsu 215163, China; [2] Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China; [3] The Second Affiliated Hospital of Suzhou University, Suzhou, Jiangsu 215000, China; [4] Lishui Central Hospital, Lishui, Zhejiang 323000, China; [5] Jinan Guoke Medical Engineering Technology Development Co., Ltd., Jinan 250000, China
Source: Computer Engineering, 2023, No. 1, pp. 223-233 (11 pages)
Funding: National Natural Science Foundation of China (81971685); National Key Research and Development Program of China (2018YFA0703101); Youth Innovation Promotion Association of the Chinese Academy of Sciences (2021324); Key Research and Development Program of Jiangsu Province (BE2021053); Suzhou Science and Technology Program (SS202054).
Abstract: Multimodal registration is a key step in medical image analysis and plays an important role in the assisted diagnosis of liver cancer and in image-guided surgical treatment. To address the heavy computation, long runtime, and low accuracy of traditional iterative liver multimodal registration, this paper proposes an unsupervised deep learning registration algorithm based on multi-scale deformation fusion and dual-input spatial attention. A multi-scale deformation fusion framework extracts image features at different resolutions and registers the liver stage by stage in a coarse-to-fine manner, improving registration accuracy while preventing the network from falling into local optima. A dual-input spatial attention module fuses spatial and textual information at different levels during encoding and decoding to extract the discrepant features between images and enhance feature expression. A structural information loss based on neighborhood descriptors is introduced to iteratively optimize the network, achieving accurate unsupervised registration without any prior information. Experimental results on a clinical liver CT-MR dataset show that, compared with traditional algorithms such as Affine, Elastix, and VoxelMorph, the proposed algorithm achieves the best Dice Similarity Coefficient (DSC) and Target Registration Error (TRE) values of 0.9261±0.0186 and 6.39±3.03 mm, respectively. Its average registration time is 0.35±0.018 s, nearly 380 times faster than Elastix. The algorithm accurately extracts features and estimates a regular deformation field, offering high registration accuracy and fast registration speed.
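To make the coarse-to-fine deformation fusion and the DSC evaluation mentioned in the abstract concrete, the following Python (PyTorch) sketch shows how a coarse displacement field could be upsampled, fused with a finer field, and used to warp the moving volume. This is an illustrative sketch only, not the paper's implementation: the function names (warp, fuse, dice), the factor-of-two resolution pyramid, and the additive field composition are assumptions.

import torch
import torch.nn.functional as F

def warp(moving, flow):
    # Warp a 3D volume with a dense displacement field given in voxel units.
    # moving: (N, C, D, H, W) intensities; flow: (N, 3, D, H, W) displacements in (z, y, x) order.
    _, _, D, H, W = moving.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(D, dtype=moving.dtype, device=moving.device),
        torch.arange(H, dtype=moving.dtype, device=moving.device),
        torch.arange(W, dtype=moving.dtype, device=moving.device),
        indexing="ij")
    coords = torch.stack((zz, yy, xx), dim=0).unsqueeze(0) + flow   # displaced voxel coordinates
    # Normalise to [-1, 1] and reorder to (x, y, z) as required by grid_sample.
    norm_z = 2.0 * coords[:, 0] / max(D - 1, 1) - 1.0
    norm_y = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    norm_x = 2.0 * coords[:, 2] / max(W - 1, 1) - 1.0
    grid = torch.stack((norm_x, norm_y, norm_z), dim=-1)            # (N, D, H, W, 3)
    return F.grid_sample(moving, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def fuse(coarse_flow, fine_flow):
    # Coarse-to-fine fusion (hypothetical scheme): upsample the coarse field to the fine
    # resolution, doubling displacements under the assumed factor-of-two pyramid, warp it
    # by the fine field, then add the fine displacements.
    up = 2.0 * F.interpolate(coarse_flow, size=fine_flow.shape[2:],
                             mode="trilinear", align_corners=True)
    return warp(up, fine_flow) + fine_flow

def dice(mask_a, mask_b, eps=1e-6):
    # Dice Similarity Coefficient between two binary liver masks.
    inter = (mask_a * mask_b).sum()
    return (2.0 * inter / (mask_a.sum() + mask_b.sum() + eps)).item()

Under these assumptions, a moving MR volume could be registered with warp(moving_mr, fuse(flow_coarse, flow_fine)) and evaluated by applying dice to the fixed and propagated liver masks; the TRE reported in the abstract would additionally require anatomical landmark pairs, which are not sketched here.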
Keywords: deep learning; unsupervised registration; multimodal registration; deformation fusion; structural information loss; spatial attention
CLC Number: TP391 [Automation and Computer Technology - Computer Application Technology]