Affiliation: [1] Laboratory of Photoelectric Technology and Systems, Army Officer Academy, Hefei, Anhui 230031, China
Source: Infrared Technology (《红外技术》), 2013, No. 11, pp. 696-701 (6 pages)
Funding: Anhui Provincial Natural Science Foundation (Grant No. 1208085QF126)
Abstract: Based on the human visual system and the theory of over-complete sparse representation of signals, an infrared and low-light-level image fusion method using a multi-scale learned dictionary is proposed. First, the input infrared and low-light-level images are decomposed according to the Gaussian pyramid model; a DCT dictionary, taken as the initial dictionary, is decomposed in a quadtree structure, and the dictionary at each scale is trained and updated independently with the K-SVD algorithm, yielding a multi-scale learned dictionary. Next, the sparse coefficients of each source image over this dictionary are obtained with an improved OMP algorithm. An optimization function for the sparse coefficients of the fused image is then built from a criterion that jointly optimizes the Euclidean distances between the fused image and the source images and the variance of the fused image, and the fused image is finally obtained by solving the l1-norm minimization of this function. Experimental results show that the proposed algorithm achieves better fusion performance than traditional methods such as the wavelet transform, the Laplacian pyramid, and PCA.
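Although the record contains no code, the pipeline the abstract describes (Gaussian-pyramid decomposition, an over-complete DCT dictionary, OMP sparse coding, and coefficient-domain fusion) can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: the fixed DCT dictionary stands in for the paper's K-SVD-trained multi-scale dictionary, a simple max-absolute-coefficient rule stands in for the variance-weighted l1 optimization, and all function names and parameter values (patch size, pyramid depth, sparsity level) are illustrative assumptions.

```python
# Illustrative sketch only. The DCT dictionary, the max-abs coefficient fusion
# rule, and every parameter value are assumptions standing in for the paper's
# K-SVD-trained multi-scale dictionary and its l1-based fusion objective.
import numpy as np
from scipy.ndimage import gaussian_filter


def gaussian_pyramid(img, levels=3):
    """Blur-and-downsample pyramid with `levels` scales (level 0 = original)."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyr[-1], sigma=1.0)
        pyr.append(blurred[::2, ::2])
    return pyr


def dct_dictionary(patch_size=8, atoms=16):
    """Over-complete 2-D DCT dictionary; columns are patch_size^2 atoms."""
    base = np.zeros((patch_size, atoms))
    for k in range(atoms):
        v = np.cos(np.arange(patch_size) * k * np.pi / atoms)
        if k > 0:
            v -= v.mean()
        base[:, k] = v / np.linalg.norm(v)
    return np.kron(base, base)  # shape (patch_size**2, atoms**2)


def omp(D, x, n_nonzero=8):
    """Plain orthogonal matching pursuit for one patch vector x."""
    residual = x.copy()
    idx = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)
        corr[idx] = 0.0                      # do not reselect an atom
        idx.append(int(np.argmax(corr)))
        sub = D[:, idx]
        sol, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ sol
    coef[idx] = sol
    return coef


def fuse_level(a, b, D, patch=8):
    """Sparse-code non-overlapping patches of both inputs over D and keep,
    per atom, the coefficient with the larger magnitude (a stand-in rule)."""
    h = (a.shape[0] // patch) * patch
    w = (a.shape[1] // patch) * patch
    fused = np.zeros((h, w))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            pa = a[i:i + patch, j:j + patch].ravel()
            pb = b[i:i + patch, j:j + patch].ravel()
            ca, cb = omp(D, pa), omp(D, pb)
            cf = np.where(np.abs(ca) >= np.abs(cb), ca, cb)
            fused[i:i + patch, j:j + patch] = (D @ cf).reshape(patch, patch)
    return fused


def fuse_images(ir, lll, levels=3):
    """Return one fused band per pyramid level for same-sized inputs; the paper
    further recombines the bands into a single fused image, omitted here."""
    D = dct_dictionary()
    pyr_ir = gaussian_pyramid(ir, levels)
    pyr_ll = gaussian_pyramid(lll, levels)
    return [fuse_level(a, b, D) for a, b in zip(pyr_ir, pyr_ll)]
```

In the paper, each pyramid scale has its own dictionary, initialized from the DCT dictionary via a quadtree decomposition and refined independently with K-SVD, and the fused coefficients come from minimizing the l1 norm of an objective balancing the Euclidean distances to both source images against the variance of the fused image; the max-abs rule above is only a placeholder for that step.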