Conditional diffusion and multi-channel high-low frequency parallel fusion of infrared and visible light images

Authors: DI Jing [1]; WANG Heran; LIANG Chan; LIU Jizhao [2]; LIAN Jing [1] (School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China; School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China)

Affiliations: [1] School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, Gansu, China; [2] School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, Gansu, China

Source: Optics and Precision Engineering, 2025, No. 1, pp. 148-163 (16 pages)

Funding: National Natural Science Foundation of China (No. 62061023); Natural Science Foundation of Gansu Province (No. 24JRRA231); Key R&D Program of the Gansu Provincial Science and Technology Plan (No. 24YFFA024).

Abstract: To address the absence of ground truth and the underutilization of visible light information in infrared and visible light image fusion with denoising diffusion models, this study introduces a conditional diffusion and multi-channel high-low frequency parallel fusion model for infrared and visible light images. First, a conditional diffusion model is developed that employs a splicing technique to use spliced source images as ground truth during training, yielding an optimal prior distribution for feature extraction from infrared and visible images. During the reverse denoising process, a multi-channel likelihood correction module is incorporated to model the intricate multi-channel distribution of these images more effectively. Subsequently, a detail-adaptive denoising network is proposed to perform multi-channel high- and low-frequency feature extraction for infrared and visible light images. Finally, a multi-channel high- and low-frequency parallel fusion module is designed within the fusion network, which uses a regional consistency fusion network and a multi-channel low-frequency feature fusion network to merge the high- and low-frequency features, respectively. This approach provides a trainable diffusion-based paradigm for feature extraction in infrared and visible light image fusion tasks, with dedicated convolutional neural networks for feature fusion. Comparative experiments on the MSRS and RoadScene datasets against nine state-of-the-art methods show that the proposed model improves eight objective evaluation metrics by 4.52% to 59.62% on average. The method outperforms the compared methods in color fidelity and texture-detail preservation, aligns well with human visual perception, and handles infrared and visible light image fusion robustly across diverse lighting and environmental conditions.
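The abstract's core idea of processing high- and low-frequency components in parallel branches can be illustrated with a deliberately simple stand-in. The sketch below is not the paper's method: it replaces the learned detail-adaptive denoising network and the two fusion sub-networks with a box-filter decomposition, an averaging rule for the low-frequency (base) branch, and a max-absolute selection rule for the high-frequency (detail) branch. All function names here are hypothetical.

```python
import numpy as np

def split_high_low(img, ksize=5):
    """Split a grayscale image into low- and high-frequency parts using a
    box filter. (A crude stand-in for the paper's learned decomposition.)"""
    pad = ksize // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    h, w = img.shape
    low = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            # Local mean = low-frequency estimate at this pixel.
            low[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    high = img - low  # Residual carries edges and texture detail.
    return low, high

def fuse(ir, vis, ksize=5):
    """Toy parallel fusion of two branches:
    - low-frequency branch: average the base layers (overall energy),
    - high-frequency branch: keep the stronger detail response per pixel."""
    ir_low, ir_high = split_high_low(ir, ksize)
    vi_low, vi_high = split_high_low(vis, ksize)
    low = 0.5 * (ir_low + vi_low)
    high = np.where(np.abs(ir_high) >= np.abs(vi_high), ir_high, vi_high)
    return low + high
```

A sanity check on the design: when both inputs are identical, the averaged base layer and the selected detail layer recombine to the input exactly, so the fusion rule is lossless in the degenerate case.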

Keywords: image fusion; infrared and visible light; conditional diffusion model; detail-adaptive denoising network; multi-channel high-low frequency parallel fusion module

Classification: TP391 [Automation and Computer Technology — Computer Application Technology]
