Authors: ZHANG Siyu; JIANG Xue; HOU Xiaoyun [3]
Affiliations: [1] School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210003, China; [2] Jiangsu Engineering Research Center of Communication and Network Technology, Nanjing University of Posts and Telecommunications, Nanjing 210003, China; [3] School of Communications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
Source: Radio Engineering (《无线电工程》), 2025, No. 3, pp. 463-474, 12 pages
Funding: National Natural Science Foundation of China (62071255, 61971241).
Abstract: To address the problem that current infrared and visible image fusion algorithms fail to preserve global structure and detail information owing to the differing characteristics of the source images, a fusion method based on Low-rank Sparse Decomposition (LRSD) is proposed. The method constructs dictionaries with three dictionary-learning approaches: the Method of Optimal Directions (MOD), K-Singular Value Decomposition (K-SVD), and a background dictionary. Low-rank Representation (LRR) is then used to decompose each source image into a low-rank part and a sparse detail part; the low-rank part preserves the global structure of the source image, while the sparse part highlights its local structure and detail information. In the fusion stage, the low-rank and sparse parts are fused with a weighted-average strategy and an l2-l1 norm constraint strategy, respectively, preserving global contrast and pixel intensity. Experimental results show that, compared with classical fusion algorithms, the proposed method achieves significant improvements in both visual quality and quantitative evaluation metrics. With MOD and K-SVD dictionary training, the fused images improve Mutual Information (MI), Structural Similarity Index (SSIM), Visual Information Fidelity (VIF), Standard Deviation (SD), and Peak Signal-to-Noise Ratio (PSNR) by approximately 6%, 27%, 9.6%, 2.4%, and 3.4%, respectively; with background-dictionary training, the fused images improve MI, SSIM, SD, Mean Squared Error (MSE), and PSNR by approximately 23%, 29%, 1.2%, 33%, and 4.5%, respectively.
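The fusion stage summarized in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes each source image has already been decomposed by an LRR solver into a low-rank part L and a sparse part S (the solver is not shown), and the block-wise l1-activity selection used for the sparse parts is an assumption, since the abstract only states that an l2-l1 norm constraint strategy is applied. All function names and parameters below are illustrative.

# Minimal sketch of the fusion rules (not the authors' original code).
# Inputs are assumed to come from an LRR decomposition of the infrared
# and visible source images into low-rank (L) and sparse (S) parts.
import numpy as np

def fuse_lowrank(L_ir: np.ndarray, L_vis: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted average of the low-rank (global-structure) parts."""
    return w * L_ir + (1.0 - w) * L_vis

def fuse_sparse(S_ir: np.ndarray, S_vis: np.ndarray, win: int = 8) -> np.ndarray:
    """Per block, keep the sparse detail part with the larger l1 activity
    (an assumed stand-in for the paper's l2-l1 constraint strategy)."""
    out = np.zeros_like(S_ir)
    h, w_ = S_ir.shape
    for i in range(0, h, win):
        for j in range(0, w_, win):
            a = S_ir[i:i + win, j:j + win]
            b = S_vis[i:i + win, j:j + win]
            out[i:i + win, j:j + win] = a if np.abs(a).sum() >= np.abs(b).sum() else b
    return out

def fuse(L_ir, S_ir, L_vis, S_vis):
    """Fused image = fused low-rank part + fused sparse detail part."""
    return fuse_lowrank(L_ir, L_vis) + fuse_sparse(S_ir, S_vis)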
Classification: TP391 [Automation and Computer Technology - Computer Application Technology]