Infrared and visible image fusion with improved residual dense generative adversarial network (cited by: 4)


Authors: MIN Li [1]; CAO Si-jian; ZHAO Huai-ci [2]; LIU Peng-fei [2]; TAI Bing-chang (School of Mechanical Engineering, Shenyang Jianzhu University, Shenyang 110168, China; Key Laboratory of Optical-Electronics Information Processing, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China; Excellence Xin Shi Dai Certification Co., Ltd., Shenyang 110013, China)

Affiliations: [1] School of Mechanical Engineering, Shenyang Jianzhu University, Shenyang 110168, China; [2] Key Laboratory of Optical-Electronics Information Processing, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China; [3] Excellence Xin Shi Dai Certification Co., Ltd., Shenyang 110013, China

Source: Control and Decision, 2023, No. 3, pp. 721-728 (8 pages)

Funding: Key Program of the Equipment Pre-Research Foundation (41401040105).

Abstract: Infrared and visible image fusion methods based on deep learning usually cannot perceive the salient regions of the source images. As a result, the fused image fails to highlight the typical features of the infrared and visible inputs, and the desired fusion quality is not reached. To address this problem, an improved residual dense generative adversarial network structure suited to infrared and visible image fusion is designed. First, the improved residual dense block is used as the basic unit to construct both the generator and the discriminator, and a squeeze-and-excitation network based on the attention mechanism is introduced to capture salient features along the channel dimension, adequately preserving the thermal radiation information of the infrared image and the texture details of the visible image. Second, a relativistic average discriminator is used to measure the relative difference between the fused image and each of the two source images, and these differences guide the generator to recover the missing source information. Finally, experiments on multiple image fusion datasets, including TNO, show that the proposed method generates fused images with clear targets and rich details; compared with a fusion method based on a residual network, the edge intensity and average gradient are improved by 64.56% and 64.94%, respectively.
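Two components named in the abstract — squeeze-and-excitation channel attention and the relativistic average discriminator loss — can be sketched numerically. The NumPy code below is an illustrative sketch of the general techniques, not the authors' implementation; the function names (`squeeze_excite`, `rad_loss_d`, `rad_loss_g`) and the weight matrices `w1`, `w2` are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feat, w1, w2):
    """Channel attention in the squeeze-and-excitation style (sketch).

    feat: feature map of shape (C, H, W); w1: (C//r, C); w2: (C, C//r),
    where r is the channel-reduction ratio.
    """
    z = feat.mean(axis=(1, 2))                 # squeeze: global average pooling -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excitation: FC -> ReLU -> FC -> sigmoid
    return feat * s[:, None, None]             # reweight each channel by its salience score

def rad_loss_d(real_logits, fake_logits, eps=1e-12):
    """Relativistic average discriminator loss: the discriminator estimates how
    much more realistic a real image looks than the average fused (fake) image,
    and vice versa."""
    d_real = sigmoid(real_logits - fake_logits.mean())
    d_fake = sigmoid(fake_logits - real_logits.mean())
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))

def rad_loss_g(real_logits, fake_logits, eps=1e-12):
    """Generator side: push fused images to look more realistic than the
    average real image (the relation is reversed)."""
    d_real = sigmoid(real_logits - fake_logits.mean())
    d_fake = sigmoid(fake_logits - real_logits.mean())
    return -np.mean(np.log(d_fake + eps)) - np.mean(np.log(1.0 - d_real + eps))
```

When the discriminator separates real and fused samples well (real logits high, fake logits low), `rad_loss_d` is near zero while `rad_loss_g` is large, which is the signal that drives the generator to restore missing source information.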

Keywords: image fusion; residual dense block; generative adversarial network; attention mechanism; salient region; relativistic average discriminator

CLC number: TP391 [Automation and Computer Technology — Computer Application Technology]

 
