Infrared and visible image fusion guided by cross-domain interactive attention and contrastive learning


Authors: DI Jing (邸敬)[1]; LIANG Chan (梁婵); LIU Ji-zhao (刘冀钊)[2]; LIAN Jing (廉敬)[1]

Affiliations: [1] School of Electronic & Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, Gansu, China; [2] School of Information Science & Engineering, Lanzhou University, Lanzhou 730070, Gansu, China

Source: Chinese Optics (《中国光学(中英文)》), 2025, No. 2, pp. 317-332 (16 pages)

Funding: Natural Science Foundation of Gansu Province (No. 24JRRA231); National Natural Science Foundation of China (No. 62061023); Gansu Provincial Outstanding Youth Fund (No. 21JR7RA345).

Abstract: Existing infrared and visible image fusion methods struggle to fully extract and preserve the detail information and contrast of the source images, resulting in blurred texture details. To address this problem, this paper proposes an infrared and visible image fusion method guided by cross-domain interactive attention and contrastive learning. First, a detail enhancement network with dual-branch skip connections is designed to separately extract and enhance detail information from the infrared and visible images, using skip connections to prevent information loss and generate enhanced detail images. Next, a fusion network combining a dual-branch encoder with a cross-domain interactive attention module is constructed to ensure sufficient feature interaction during fusion, and a decoder reconstructs the final fused image. Then, a contrastive learning network is introduced that performs shallow and deep attribute and content contrastive learning through contrastive learning blocks, optimizing feature representations and further improving the performance of the fusion network. Finally, to constrain network training so that the inherent characteristics of the source images are retained, a contrast-constrained loss function is designed to assist the fusion process in contrastively preserving source image information. The proposed method is compared qualitatively and quantitatively with state-of-the-art fusion methods. Experimental results on the TNO, MSRS, and RoadScene datasets show that the proposed method significantly outperforms the comparison methods on eight objective evaluation metrics. The fused images exhibit rich texture detail, high sharpness, and strong contrast, effectively improving target recognition and environmental perception in real-world applications such as road traffic and security surveillance.

Keywords: infrared and visible image fusion; contrastive learning; cross-domain interactive attention mechanism; contrast-constrained loss

CLC numbers: TP394.1 (Automation and Computer Technology / Computer Application Technology); TH691.9 (Automation and Computer Technology / Computer Science and Technology)
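The abstract above describes the architecture only at a high level. As a rough illustration of the kind of module the phrase "cross-domain interactive attention" suggests, the following is a minimal PyTorch sketch, not the authors' implementation: bidirectional multi-head cross-attention in which infrared queries attend to visible keys/values and vice versa, with a residual connection per branch. The class name, channel width, head count, and normalization choices are assumptions made for illustration only; consult the full paper for the actual formulation.

```python
# Minimal sketch (not the authors' code): a bidirectional cross-domain
# interactive attention block, assuming PyTorch and channel-first feature maps.
import torch
import torch.nn as nn


class CrossDomainInteractiveAttention(nn.Module):
    """Let infrared and visible feature maps attend to each other.

    Queries come from one modality, keys/values from the other, so each
    branch is refined with complementary information before fusion.
    """

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Two multi-head attention blocks, one per attention direction.
        self.ir_to_vis = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.vis_to_ir = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm_ir = nn.LayerNorm(channels)
        self.norm_vis = nn.LayerNorm(channels)

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor):
        b, c, h, w = feat_ir.shape
        # Flatten spatial dimensions into token sequences: (B, H*W, C).
        ir = feat_ir.flatten(2).transpose(1, 2)
        vis = feat_vis.flatten(2).transpose(1, 2)

        # Infrared queries attend to visible keys/values, and vice versa;
        # residual connections keep each branch's original content.
        ir_out, _ = self.ir_to_vis(query=ir, key=vis, value=vis)
        vis_out, _ = self.vis_to_ir(query=vis, key=ir, value=ir)
        ir = self.norm_ir(ir + ir_out)
        vis = self.norm_vis(vis + vis_out)

        # Restore (B, C, H, W) so a decoder can fuse and reconstruct the image.
        ir = ir.transpose(1, 2).reshape(b, c, h, w)
        vis = vis.transpose(1, 2).reshape(b, c, h, w)
        return ir, vis


# Example usage with dummy 64-channel feature maps.
if __name__ == "__main__":
    block = CrossDomainInteractiveAttention(channels=64)
    f_ir = torch.randn(1, 64, 32, 32)
    f_vis = torch.randn(1, 64, 32, 32)
    out_ir, out_vis = block(f_ir, f_vis)
    print(out_ir.shape, out_vis.shape)  # torch.Size([1, 64, 32, 32]) twice
```

In a full pipeline, a block of this kind would sit between the dual-branch encoder and the decoder mentioned in the abstract, with the contrastive learning blocks and the contrast-constrained loss supervising training.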

 
