Reconstruction of Sparse CT Images via the Integration of Generative Adversarial Networks and Diffusion Models


Authors: ZHONG Quan; WU Xi [1] (School of Computer Science, Chengdu University of Information Technology, Chengdu 610225, China)

Affiliation: [1] School of Computer Science, Chengdu University of Information Technology, Chengdu 610225, Sichuan, China

Source: Software Guide, 2025, No. 2, pp. 172-180 (9 pages)

Funding: Sichuan Province Special Project for Central Government Guiding Local Science and Technology Development (2022ZYD0117).

Abstract: Sparse CT image reconstruction is of great clinical significance for reducing patient radiation dose and supporting imaging diagnosis. In deep learning-based medical image reconstruction, existing methods often overlook the residual between the reconstructed image and the ground truth, leading to structural errors and insufficient detail in the reconstructed images. Generative adversarial networks (GANs) use adversarial learning to rapidly reconstruct global content and structural information, while diffusion models offer stable training and can reconstruct images rich in detail. To improve the quality of sparse CT reconstruction, a residual-refinement reconstruction network (RRRNet) that combines a GAN with a diffusion model is proposed. The network first uses a GAN as the primary generator to capture the global structural information of the image, and then applies a diffusion model to model the residual between the ground truth and the initial prediction, refining that prediction through residual estimation. In addition, a high-frequency information separation training module is introduced during refinement to enhance the recovery of edges and fine details. Experiments on the LIDC dataset show that at a 4.50% sampling rate, RRRNet achieves 96.40% SSIM, 40.76 dB PSNR, and 32.49 HU MAE. Compared with methods based on GANs or diffusion models alone, RRRNet improves reconstructed image quality.
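The abstract describes a two-stage design: a GAN generator produces a coarse reconstruction of the global structure, a diffusion model then learns the residual between that coarse output and the ground truth, and a high-frequency separation term supervises edges and fine details. The sketch below is only a minimal PyTorch illustration of that idea; the module names, network depths, noise schedule, and the high_freq helper are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseGenerator(nn.Module):
    """Stand-in for the GAN generator that recovers global structure (hypothetical architecture)."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, sparse_ct):
        return self.net(sparse_ct)

class ResidualDenoiser(nn.Module):
    """Stand-in for the diffusion network that predicts the noise added to the residual."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, noisy_residual, coarse):
        # Condition the denoiser on the coarse prediction.
        return self.net(torch.cat([noisy_residual, coarse], dim=1))

def high_freq(x, kernel_size=5):
    # One common way to separate high-frequency content: subtract a blurred copy.
    pad = kernel_size // 2
    low = F.avg_pool2d(F.pad(x, (pad,) * 4, mode="reflect"), kernel_size, stride=1)
    return x - low

def training_step(sparse_ct, full_ct, gen, denoiser, alphas_cumprod):
    coarse = gen(sparse_ct)                      # stage 1: global structure from the GAN generator
    residual = full_ct - coarse                  # target of the refinement stage
    t = torch.randint(0, len(alphas_cumprod), (sparse_ct.size(0),))
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(residual)
    noisy_residual = a.sqrt() * residual + (1 - a).sqrt() * noise   # forward diffusion on the residual
    pred_noise = denoiser(noisy_residual, coarse.detach())
    loss_diff = F.mse_loss(pred_noise, noise)
    # Extra supervision on high-frequency content (edges / fine details).
    loss_hf = F.l1_loss(high_freq(coarse), high_freq(full_ct))
    return loss_diff + loss_hf

# Example usage (shapes and schedule are illustrative):
# gen, denoiser = CoarseGenerator(), ResidualDenoiser()
# alphas_cumprod = torch.cumprod(1 - torch.linspace(1e-4, 2e-2, 1000), dim=0)
# loss = training_step(sparse_ct, full_ct, gen, denoiser, alphas_cumprod)
```

At inference time, such a design would sample the residual by iteratively denoising from Gaussian noise conditioned on the coarse prediction and add it back to the coarse output; the step above only sketches the training-time supervision.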

Keywords: image reconstruction; deep learning; generative adversarial network; diffusion model

Classification: TP399 [Automation and Computer Technology - Computer Application Technology]

 
