Authors: Li Pengyue, Xu Xinying, Tang Yandong [3,4], Zhang Zhaoxia, Han Xiaoxia, Yue Haifeng (李鹏越, 续欣莹, 唐延东, 张朝霞, 韩晓霞, 岳海峰) (College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030024, China; Taiyuan Heavy Machinery (Group) Company, Taiyuan 030027, China; State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China)
Affiliations: [1] College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030024, Shanxi, China; [2] Taiyuan Heavy Machinery (Group) Company, Taiyuan 030027, Shanxi, China; [3] State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, Liaoning, China; [4] Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, Liaoning, China
Source: Infrared and Laser Engineering (《红外与激光工程》), 2024, No. 3, pp. 254-264 (11 pages)
Funding: National Natural Science Foundation of China (62203319); Natural Science Foundation of Shanxi Province (202203021212220, 202103021224056); Shanxi Special Program for Science and Technology Cooperation and Exchange (202104041101030).
Abstract: Highlights appear as bright spots on the surface of glossy materials under illumination, and they can obscure background information to varying degrees. The ambiguity of the image highlight-layer model and the large dynamic range of highlights make highlight removal a challenging visual task. Purely local methods tend to produce artifacts in the highlight regions of an image, while purely global methods tend to distort colors in highlight-free regions. To address these problems, caused by the imbalance between local and global features in highlight removal, together with the ambiguity of highlight-layer modeling, we propose a threshold-fusion U-shaped deep network based on a parallel multi-axis self-attention mechanism for image highlight removal. The method avoids the problems introduced by the ambiguous highlight-layer model through implicit modeling. It uses the U-shaped network structure to combine contextual information with low-level information to estimate the highlight-free image, and it introduces a threshold (gated) fusion structure between the encoder and decoder of the U-shaped structure to further enhance the feature representation capability of the network. The unit structure of the U-shaped network balances the encoding and decoding of local and global features by fusing local and global self-attention. The U-shaped network uses a contracting convolution strategy to extract contextual semantic information efficiently, gradually recovers the low-level information of the image through expansion, and connects the features of each stage of the contracting path to the corresponding stages of the expanding path. The threshold mechanism between the encoder and decoder adjusts the information flow in each channel of the encoder, which allows the encoder to extract highlight-related features as far as possible at the channel level. The threshold structure first performs high- and low-frequency decoupling and feature extraction on the input features, then fuses the two types of features by pixel-wise multiplication, and finally uses a residual pattern to learn complementary low-level features. Qualitative experiments show that the proposed method removes highlights from images more effectively, whereas the compared algorithms tend to produce artifacts and distortion at highlights. Quantitative experiments show that the proposed method outperforms five other typical highlight-removal methods in PSNR and SSIM: on three datasets, its PSNR exceeds the second-best method by 4.10, 7.09, and 6.58 dB, and its SSIM improves by 4%, 9%, and 3%, respectively.
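The threshold structure described in the abstract (high-/low-frequency decoupling, pixel-wise multiplicative fusion, and a residual connection) can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation: the box blur used for the low-frequency branch, the sigmoid gate, and the kernel size are all assumptions, and the learned feature-extraction layers of the real network are omitted.

```python
import numpy as np

def box_blur(x, k=3):
    """Low-frequency branch: a simple box blur stands in for the
    low-frequency decoupling (assumption; the paper's filter is
    not specified in the abstract)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def threshold_fusion(x):
    """Sketch of the threshold (gating) structure between encoder
    and decoder: decouple the input into low- and high-frequency
    parts, fuse them by pixel-wise multiplication, and add a
    residual connection so low-level features are learned
    complementarily."""
    low = box_blur(x)                     # low-frequency component
    high = x - low                        # high-frequency component
    gate = 1.0 / (1.0 + np.exp(-high))    # squash to (0, 1) as a gate (assumption)
    fused = low * gate                    # pixel-wise multiplicative fusion
    return x + fused                      # residual pattern

img = np.random.rand(8, 8)
out = threshold_fusion(img)
print(out.shape)  # (8, 8)
```

In the actual network, both branches would pass through learned convolutional feature extractors before the multiplicative fusion; here the identity is used in their place to keep the sketch self-contained.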
CLC Number: TP391.4 [Automation and Computer Technology: Computer Application Technology]