LLFlowGAN: a low-light image enhancement method for constraining invertible flow in a generative adversarial manner


Authors: Huang Ying [1]; Peng Hui; Li Changsheng; Gao Shengmei; Chen Feng (School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China)

Affiliation: [1] School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China

Source: Journal of Image and Graphics (中国图象图形学报), 2024, Issue 1, pp. 65-79 (15 pages)

Abstract: Objective: Low-light images are produced when imaging devices cannot capture sufficient light because of unavoidable environmental or technical limitations (such as nighttime, backlighting, and underexposure). Such images usually exhibit low brightness, low contrast, a narrow grayscale range, color distortion, and strong noise, and therefore carry little usable information; they neither meet human visual requirements nor support subsequent high-level vision systems. Low-light image enhancement is an ill-posed problem because illumination information is lost: one low-light image may correspond to countless normal-light images, so enhancement should be regarded as selecting the most suitable solution from all possible outputs. Most existing methods rely on pixel-level reconstruction and aim to learn a deterministic mapping between low-light inputs and normal-light images; they return a single normal-light result rather than modeling the complex illumination distribution, which usually leads to inappropriate brightness and noise. Furthermore, most existing image generation methods use only one kind of generative model, either explicit or implicit, which limits flexibility and efficiency. Flow models have recently shown promising results on low-level vision tasks. This paper therefore improves a hybrid explicit-implicit generative model that allows adversarial training and maximum-likelihood training at the same time, reconstructs normal-light images with satisfactory lighting, cleanliness, and realism from degraded inputs, and alleviates the blurred-detail and singularity problems of purely explicit or purely implicit generative modeling.

Method: The proposed network, LLFlowGAN, combines an explicit flow model with an implicit generative adversarial network (GAN) and consists of three parts: a conditional encoder, a flow generation network, and a discriminator. First, a residual-attention conditional encoder processes the low-light input and extracts rich features to reduce the color deviation of the generated images. The flow generation network then operates at multiple scales, conditioned on the encoded features as a prior, and learns a bidirectional mapping between the distribution of normal-exposure images and a Gaussian distribution; this models the conditional distribution of normal-exposure images and allows the model to sample multiple normal-exposure results and generate diverse samples. Finally, the GAN discriminator imposes an implicit constraint on the model and improves image details. In particular, both mapping directions are constrained by loss functions, so the model is highly resistant to mode collapse.

Result: The model is trained and tested on two datasets. On the low-light (LOL) dataset, compared with 18 existing methods, the proposed algorithm achieves the best peak signal-to-noise ratio (PSNR) and learned perceptual image patch similarity (LPIPS), the second-best structural similarity index measure (SSIM), and a competitive natural image quality evaluator (NIQE) score. Specifically, relative to the best competing values, PSNR is improved by 0.84 dB, LPIPS is reduced by 0.02, SSIM is 0.01 lower, and NIQE is reduced by 1.05. On the MIT-Adobe FiveK (Massachusetts Institute of Technology Adobe FiveK) dataset, compared with five methods, PSNR is improved by 0.58 dB over the best competing value, and SSIM ties for first place.

Conclusion: The proposed flow-based generative adversarial model combines the advantages of explicit and implicit generative models.
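The training scheme summarized above (an explicit maximum-likelihood objective on the image-to-Gaussian direction of a conditional invertible flow, plus an implicit adversarial constraint on the Gaussian-to-image direction) can be illustrated with a short sketch. The code below is a minimal, hypothetical PyTorch illustration, not the paper's LLFlowGAN implementation: the affine-coupling flow, the plain convolutional stand-ins for the residual-attention encoder and the discriminator, and every layer size, loss weight, and hyper-parameter are assumptions made for readability.

```python
# Minimal, hypothetical sketch of the hybrid flow + GAN training idea described
# in the abstract. All module designs and hyper-parameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CondAffineCoupling(nn.Module):
    """One conditional affine coupling step: half of the channels are scaled and
    shifted by values predicted from the other half plus the encoder features."""

    def __init__(self, channels, cond_channels, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2 + cond_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),  # predicts scale and shift
        )

    def forward(self, x, cond, reverse=False):
        x_a, x_b = x.chunk(2, dim=1)
        scale, shift = self.net(torch.cat([x_a, cond], dim=1)).chunk(2, dim=1)
        scale = torch.tanh(scale)  # bounded log-scale keeps the mapping stable
        if not reverse:            # image -> latent
            y_b = x_b * torch.exp(scale) + shift
            logdet = scale.flatten(1).sum(1)
        else:                      # latent -> image (exact inverse)
            y_b = (x_b - shift) * torch.exp(-scale)
            logdet = -scale.flatten(1).sum(1)
        return torch.cat([x_a, y_b], dim=1), logdet


class CondFlow(nn.Module):
    """A small invertible flow: coupling steps separated by a fixed channel flip."""

    def __init__(self, channels, cond_channels, steps=4):
        super().__init__()
        self.steps = nn.ModuleList(
            [CondAffineCoupling(channels, cond_channels) for _ in range(steps)])

    def forward(self, x, cond, reverse=False):
        logdet = x.new_zeros(x.shape[0])
        if not reverse:                       # x -> z, accumulating log|det J|
            for step in self.steps:
                x, ld = step(x, cond)
                x, logdet = x.flip([1]), logdet + ld
        else:                                 # z -> x, mirroring the forward order
            for step in reversed(self.steps):
                x = x.flip([1])               # undo the channel permutation first
                x, ld = step(x, cond, reverse=True)
                logdet = logdet + ld
        return x, logdet


# Plain convolutional stand-ins for the paper's residual-attention encoder and
# its discriminator (assumptions; only their interfaces matter for the sketch).
encoder = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(True),
                        nn.Conv2d(32, 32, 3, padding=1))        # stride 2 matches the squeeze
disc = nn.Sequential(nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
                     nn.Conv2d(32, 1, 4, stride=2, padding=1))  # patch logits
flow = CondFlow(channels=12, cond_channels=32)                  # 3 channels x 2x2 squeeze
opt_g = torch.optim.Adam(list(encoder.parameters()) + list(flow.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

low, gt = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)    # dummy low/normal pair

# Generator step: explicit maximum-likelihood loss plus implicit adversarial loss.
cond = encoder(low)
z, logdet = flow(F.pixel_unshuffle(gt, 2), cond)                # normal-light image -> Gaussian
nll = (0.5 * z.flatten(1).pow(2).sum(1) - logdet).mean()        # NLL up to an additive constant
fake = F.pixel_shuffle(flow(torch.randn_like(z), cond, reverse=True)[0], 2)  # sampled result
fake_logits = disc(fake)
adv = bce(fake_logits, torch.ones_like(fake_logits))            # fool the discriminator
opt_g.zero_grad(); (nll + 0.1 * adv).backward(); opt_g.step()   # 0.1 is an arbitrary weight

# Discriminator step: real normal-light images vs. detached flow samples.
real_logits, det_logits = disc(gt), disc(fake.detach())
d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(det_logits, torch.zeros_like(det_logits))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
```

Because the same invertible network is penalized in both directions (the likelihood term on the forward pass and the adversarial term on the reverse pass), the sketch mirrors the abstract's point that constraining both mapping directions discourages mode collapse.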

Keywords: low-light image enhancement; flow model; generative adversarial network (GAN); bidirectional mapping; complex illumination distribution

Classification code: TP391 [Automation and Computer Technology / Computer Application Technology]

 
