Authors: Kui Jiang, Ruoxi Wang, Yi Xiao, Junjun Jiang, Xin Xu, Tao Lu
Affiliations: [1]IEEE [2]the School of Computer Science and Technology, Harbin Institute of Technology [3]Zhengzhou Research Institute, Harbin Institute of Technology [4]the School of Artificial Intelligence, Jianghan University [5]the School of Geodesy and Geomatics, Wuhan University [6]the Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan University of Science and Technology [7]the School of Computer Science and Engineering and the Hubei Province Key Laboratory of Intelligent Robot, Wuhan Institute of Technology
Source: IEEE/CAA Journal of Automatica Sinica, 2024, No. 11, pp. 2253-2269 (17 pages)
Funding: Supported by the National Natural Science Foundation of China (U23B2009, 62376201, 423B2104) and the Open Foundation (ZNXX2023MSO2, HBIR202311).
Abstract: Degradation under challenging conditions such as rain, haze, and low light not only diminishes content visibility, but also introduces additional side effects, including detail occlusion and color distortion. However, current technologies have barely explored the correlation between perturbation removal and background restoration, and consequently struggle to generate high-naturalness content in challenging scenarios. In this paper, we rethink the image enhancement task from the perspective of joint optimization: perturbation removal and texture reconstruction. To this end, we propose an efficient yet effective image enhancement model, termed the perturbation-guided texture reconstruction network (PerTeRNet). It contains two subnetworks designed for the perturbation elimination and texture reconstruction tasks, respectively. To facilitate texture recovery, we develop a novel perturbation-guided texture enhancement module (PerTEM) to connect these two tasks, where informative background features are extracted from the input under the guidance of predicted perturbation priors. To alleviate the learning burden and computational cost, we suggest performing perturbation removal in a sub-space and exploiting super-resolution to infer high-frequency background details. Our PerTeRNet has demonstrated significant superiority over typical methods in both quantitative and qualitative measures, as evidenced by extensive experimental results on popular image enhancement and joint detection tasks. The source code is available at https://github.com/kuijiang94/PerTeRNet.
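The core idea in the abstract, using a predicted perturbation prior to guide where background features are extracted, can be illustrated with a minimal sketch. This is not the authors' implementation: the toy `predict_perturbation` estimator and the simple multiplicative attention are assumptions standing in for the paper's learned subnetworks.

```python
import numpy as np

def predict_perturbation(x):
    # Hypothetical stand-in for the perturbation-elimination subnetwork:
    # a crude residual-above-mean estimate plays the role of the
    # learned rain/haze prior, mapped into [0, 1].
    return np.clip(x - x.mean(), 0.0, 1.0)

def perturbation_guided_enhancement(x):
    """Sketch of the PerTEM concept: the perturbation prior acts as an
    attention map, down-weighting perturbed pixels so that "background"
    features dominate the reconstruction."""
    prior = predict_perturbation(x)   # perturbation prior in [0, 1]
    attention = 1.0 - prior           # emphasize clean background regions
    background = x * attention        # guided background features
    # Naive recomposition: fill suppressed regions with the mean
    # background intensity (the real model reconstructs texture instead).
    return background + prior * background.mean()

img = np.random.default_rng(0).random((8, 8))  # toy "degraded" image
out = perturbation_guided_enhancement(img)
print(out.shape)
```

In the actual network both `predict_perturbation` and the reconstruction branch are learned jointly, and the guidance operates on feature maps in a sub-space rather than raw pixels.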
Keywords: association learning, attention mechanism, image enhancement, perturbation modeling