Image extraction of cartoon line art based on cycle-consistent adversarial networks (cited by: 2)


Authors: Wang Suqin[1]; Zhang Jiaqi; Shi Min[1]; Zhao Yinjun (School of Control & Computer Engineering, North China Electric Power University, Beijing 102206, China)

Affiliation: [1] School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China

Source: Journal of Image and Graphics (《中国图象图形学报》), 2021, No. 5, pp. 1117-1127 (11 pages)

Funding: National Natural Science Foundation of China (61972379).

Abstract: Objective: In animation production, drawing and coloring line art is time-consuming and labor-intensive, so a great deal of research has aimed to automate the production pipeline. Data-driven approaches to this automation are developing rapidly, but no public line-art dataset is available for training. To address the difficulty of collecting real line-art images and the distortion introduced by existing line-art extraction methods, this paper proposes an automatic line-art extraction model based on cycle-consistent adversarial networks. Method: The model adopts a cycle-consistent adversarial network (CycleGAN) structure to handle training on unpaired data. Input images at different scales, together with their boundary maps, are fed into a mask-guided convolution unit that adaptively selects the intermediate features of the network. To further improve extraction quality, a boundary-consistency loss is proposed to keep the gradient variation of the generated result consistent with that of the input image. Result: On the public color anime image dataset Danbooru2018, the line art extracted by the proposed model contains less noise, has cleaner lines, and is closer to line art drawn by professional artists than the results of existing extraction methods. In a user study, 30 participants aged 20 to 25 scored the line art extracted by the proposed method and four other methods; across the 30 test cases, the results of the proposed method were rated best in 84% of the cases. Conclusion: By introducing the mask-guided convolution unit into a cycle-consistent adversarial network, the model extracts line art from color images more faithfully, and the user study shows that it outperforms the compared methods for anime line-art extraction. In addition, the model does not require a large amount of real line-art training data; only about 1 000 real line-art images were collected for the experiments. The model not only provides data support for follow-up research on anime drawing and colorization, but also offers a new solution for image edge extraction.

Extended English abstract: Objective: With the continuous development of digital media, people's demand for animation works continues to increase. Excellent two-dimensional animation works usually require a lot of time and effort. In the animation production process, the key-frame line art is usually drawn by the lead artist, the in-between frames are then drawn by several ordinary animators, and finally all of the line art is colored by the coloring staff. To improve the production efficiency of two-dimensional animation, researchers have worked to automate the production process. Data-driven deep learning is developing rapidly and provides a new way to improve the production efficiency of animation works. Although many data-driven automated methods have been proposed, obtaining training datasets is very difficult, and there is no public dataset of color images with corresponding line-art images. For this reason, research on automatically extracting line-art images from color animation images will provide data support for animation-production research. Method: Early image edge extraction methods depend on the setting of parameter values, and fixed parameters cannot be applied to all images. Data-driven image edge extraction methods, in turn, are limited by the collection and size of the dataset. Therefore, researchers usually rely on data augmentation techniques or on images similar to line art, such as boundary images (edge information extracted from color images). This study proposes an automatic line-art extraction model based on cycle-consistent adversarial networks to address the difficulty of obtaining real line-art images and the distortion of existing line-art extraction methods. First, the study uses a cycle-consistent adversarial network structure to address the lack of a dataset of real color images with corresponding line-art images; it uses only a small number of collected real line-art images.
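Since this record contains no implementation, the following is a minimal, hypothetical PyTorch sketch of the two components the abstract describes: a mask-guided convolution unit that gates intermediate generator features with a boundary map resized to the feature resolution, and a boundary-consistency loss that matches image gradients between the generated line art and the input color image. The names (MaskGuidedConvUnit, boundary_consistency_loss), the sigmoid gating form, and the Sobel-based gradient operator are assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch of the abstract's two components; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskGuidedConvUnit(nn.Module):
    """Assumed MGCU: modulate intermediate features with a resized boundary map."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # 1-channel boundary map -> per-channel gate in [0, 1]
        self.gate = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, boundary: torch.Tensor) -> torch.Tensor:
        # Resize the boundary map to the spatial size of the feature map.
        b = F.interpolate(boundary, size=feat.shape[-2:], mode="bilinear",
                          align_corners=False)
        g = self.gate(b)
        # Gated residual: boundary regions decide how much of the new
        # features passes through (the "adaptive feature selection").
        return feat + g * self.conv(feat)


def _sobel_gradients(img: torch.Tensor) -> torch.Tensor:
    """Per-pixel gradient magnitude of a grayscale batch (N, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def boundary_consistency_loss(fake_line: torch.Tensor,
                              color_input: torch.Tensor) -> torch.Tensor:
    """L1 distance between gradient maps of the generated line art (N, 1, H, W)
    and the grayscaled input color image (N, 3, H, W)."""
    gray = color_input.mean(dim=1, keepdim=True)
    return F.l1_loss(_sobel_gradients(fake_line), _sobel_gradients(gray))
```

In a CycleGAN-style training loop, such a term would presumably be added to the adversarial and cycle-consistency losses with its own weight, e.g. loss = loss_gan + lambda_cyc * loss_cyc + lambda_bc * boundary_consistency_loss(fake_line, real_color); the weights here are placeholders, not values reported by the paper.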

Keywords: cartoon line-art image generation; asymmetric (unpaired) data training; mask-guided convolution unit (MGCU); cycle-consistent adversarial network (CycleGAN); convolutional neural network (CNN)

Classification code: TP37 [Automation and Computer Technology: Computer System Architecture]

 
