
CT and MRI fusion based on generative adversarial network and convolutional neural networks under image enhancement


Authors: LIU Yunpeng; LI Jin [2]; WANG Yu [1]; CAI Wenli [3]; CHEN Fei [2]; LIU Wenjie; MAO Xianhao; GAN Kaifeng; WANG Renfang [2]; SUN Dechao; QIU Hong; LIU Bangquan (Information and Computing Science Major, School of International Exchange, Ningbo University of Technology, Ningbo, Zhejiang 315000, P.R. China; Zhejiang Wanli University, Ningbo, Zhejiang 315000, P.R. China; Radiology Imaging Laboratory, Harvard Medical School, Boston, Massachusetts 02114, USA; Li Huili Hospital Affiliated to Ningbo University, Ningbo, Zhejiang 315000, P.R. China; School of Digital Technology and Engineering, Ningbo University of Finance & Economics, Ningbo, Zhejiang 315000, P.R. China)

Affiliations: [1] School of International Exchange, Ningbo University of Technology, Ningbo, Zhejiang 315000, China; [2] Zhejiang Wanli University, Ningbo, Zhejiang 315000, China; [3] Radiology Imaging Laboratory, Harvard Medical School, Boston, Massachusetts 02114, USA; [4] Li Huili Hospital Affiliated to Ningbo University, Ningbo, Zhejiang 315000, China; [5] School of Digital Technology and Engineering, Ningbo University of Finance & Economics, Ningbo, Zhejiang 315000, China

Source: Journal of Biomedical Engineering (《生物医学工程学杂志》), 2023, No. 2, pp. 208-216 (9 pages)

Funding: National Natural Science Foundation of China (61906170); Zhejiang Provincial Basic Public Welfare Research Program (LGF21F020022, LQ21H060002); Zhejiang Provincial Philosophy and Social Sciences Planning Project (21NDJC021Z); Major Special Project of the Ningbo Science and Technology Plan (2021Z050); Ningbo Public Welfare Science and Technology Plan (2021S105, 2022S002); Ningbo Natural Science Foundation (202003N4072).

Abstract: To address the loss of important features, weak detail representation and unclear textures in multimodal medical image fusion, this paper proposes a method that fuses computed tomography (CT) and magnetic resonance imaging (MRI) images under image enhancement using a generative adversarial network (GAN) and a convolutional neural network (CNN). The generator operates on the high-frequency feature images, while dual discriminators act on the fused image obtained after the inverse transform; the high-frequency feature images are fused by the trained GAN model, and the low-frequency feature images are fused by a CNN model pre-trained with transfer learning. Experimental results show that, compared with current state-of-the-art fusion algorithms, the proposed method yields richer texture detail and clearer, more prominent contour edges in subjective evaluation. In the objective evaluation, the fusion quality index (Q^(AB/F)), information entropy (IE), spatial frequency (SF), structural similarity (SSIM), mutual information (MI) and visual information fidelity for fusion (VIFF) exceed the best competing results by 2.0%, 6.3%, 7.0%, 5.5%, 9.0% and 3.3%, respectively. The fused images can be effectively applied in medical diagnosis and help further improve diagnostic efficiency.
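To make the division of labour described in the abstract concrete, the sketch below outlines the overall flow: decompose each modality into low- and high-frequency components, fuse the high-frequency band with the GAN generator, fuse the low-frequency band with the transfer-learned CNN, then recombine (the "inverse transform" step). This is a minimal illustration only, assuming a Gaussian low-pass/residual decomposition; the `gan_fuse_high` and `cnn_fuse_low` functions are hypothetical placeholders for the paper's trained models, which the abstract does not specify.

```python
# Minimal sketch of the fusion pipeline described in the abstract.
# ASSUMPTIONS: a Gaussian low-pass / residual high-pass split stands in for the
# paper's multi-scale transform, and the two fuse_* functions are placeholders
# for the trained GAN generator and the transfer-learned CNN.
import numpy as np
from scipy.ndimage import gaussian_filter


def decompose(img: np.ndarray, sigma: float = 2.0):
    """Split an image into low-frequency and high-frequency components."""
    low = gaussian_filter(img, sigma)
    high = img - low
    return low, high


def gan_fuse_high(high_ct: np.ndarray, high_mri: np.ndarray) -> np.ndarray:
    """Placeholder for the trained GAN generator on high-frequency maps.
    Here: keep the coefficient with the larger absolute value (max-abs rule)."""
    return np.where(np.abs(high_ct) >= np.abs(high_mri), high_ct, high_mri)


def cnn_fuse_low(low_ct: np.ndarray, low_mri: np.ndarray) -> np.ndarray:
    """Placeholder for the transfer-learned CNN on low-frequency maps.
    Here: a plain average as a stand-in weighting."""
    return 0.5 * (low_ct + low_mri)


def fuse_ct_mri(ct: np.ndarray, mri: np.ndarray) -> np.ndarray:
    """Decompose both modalities, fuse each band, then recombine."""
    low_ct, high_ct = decompose(ct)
    low_mri, high_mri = decompose(mri)
    fused_high = gan_fuse_high(high_ct, high_mri)  # fine detail / texture
    fused_low = cnn_fuse_low(low_ct, low_mri)      # intensity / structure
    return np.clip(fused_low + fused_high, 0.0, 1.0)  # inverse of the decomposition


if __name__ == "__main__":
    ct = np.random.rand(256, 256).astype(np.float32)   # stand-in CT slice
    mri = np.random.rand(256, 256).astype(np.float32)  # stand-in MRI slice
    fused = fuse_ct_mri(ct, mri)
    print(fused.shape, float(fused.min()), float(fused.max()))
```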
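The objective metrics quoted in the abstract are standard in the fusion literature. The sketch below gives textbook implementations of IE, SF, SSIM and MI for single-channel images scaled to [0, 1]; Q^(AB/F) and VIFF are omitted because they require edge-response and multi-scale visual models beyond a short example. Function names and averaging conventions are illustrative assumptions, not the evaluation code used in the paper.

```python
# Hedged reference implementations of IE, SF, SSIM and MI for grey-scale
# images in [0, 1]; these are standard formulas, not the authors' code.
import numpy as np
from skimage.metrics import structural_similarity


def information_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))


def spatial_frequency(img: np.ndarray) -> float:
    """SF = sqrt(RF^2 + CF^2) from row-wise and column-wise differences."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))


def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """MI between a source image and the fused image via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0.0, 1.0], [0.0, 1.0]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))


def evaluate(fused: np.ndarray, ct: np.ndarray, mri: np.ndarray) -> dict:
    """Score a fused image against both source images (illustrative conventions)."""
    return {
        "IE": information_entropy(fused),
        "SF": spatial_frequency(fused),
        "SSIM": 0.5 * (structural_similarity(fused, ct, data_range=1.0)
                       + structural_similarity(fused, mri, data_range=1.0)),
        "MI": mutual_information(ct, fused) + mutual_information(mri, fused),
    }
```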

Keywords: image enhancement; image fusion; generative adversarial network; deep learning; medical image

CLC classification: TP391.41 [Automation and Computer Technology: Computer Application Technology]; TP183 [Automation and Computer Technology: Computer Science and Technology]; R445.2 [Medicine and Health: Medical Imaging and Nuclear Medicine]; R814.42 [Medicine and Health: Diagnostics]

 
