Affiliations: [1] School of Information Science and Engineering, Ningbo University, Ningbo 315000, Zhejiang, China; [2] Zhejiang Provincial Key Laboratory of Mobile Network Application Technology, Ningbo 315000, Zhejiang, China
Source: Acta Electronica Sinica (《电子学报》), 2024, Issue 7, pp. 2491-2502 (12 pages)
Funding: National Natural Science Foundation of China (No. 62271274); Ningbo Science and Technology Project (No. 2024Z004, No. 2023Z059)
Abstract: Colorizing long sequences of animation line-art (sketch) frames is a challenging task in computer vision. On one hand, the information contained in a sketch is sparse, so a colorization algorithm must infer the missing information; on the other hand, colors must stay consistent across consecutive frames to ensure the visual quality of the whole video. Most existing colorization algorithms are designed for single images and produce only one open-ended, plausible color result, which makes them unsuitable for colorizing frame sequences. Other, reference-based colorization algorithms do not organically connect the two frames, which leads to unsatisfactory results. Within the same shot, the features of a given object usually do not change much, so a model can be designed that automatically colorizes a sketch from a given reference frame. To this end, this paper proposes Cross-CNN, a model that combines convolutional neural networks (CNN) with a Transformer. Cross-CNN finds and matches colors from the reference frame, ensuring feature consistency along the temporal dimension. In this model, the reference frame and the sketch frame are stacked along the channel dimension and fed into a pre-trained ResNet50 network to extract locally fused features; the fused feature map is then passed to a Transformer structure for encoding to extract global features. A cross-attention mechanism is designed within the Transformer to better match long-distance features. Finally, a convolutional decoder with skip connections outputs the colorized image. For the dataset, frames were extracted from eight movies and strictly screened, yielding 20,000 reference-sketch frame pairs for the experiments. Cross-CNN reaches an SSIM (Structural SIMilarity) of 0.932, 0.014 higher than the SOTA (state-of-the-art) algorithm. Code for this paper is available at https://github.com/silenye/Cross-CNN.
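The abstract outlines the full Cross-CNN pipeline: channel-wise stacking of reference and sketch, a pre-trained ResNet50 encoder, a Transformer stage with cross-attention, and a convolutional decoder. The following PyTorch sketch is a minimal, illustrative reconstruction of that pipeline, not the authors' implementation: the module names (CrossAttentionBlock, SketchColorizer), all hyperparameters, and the decoder layout are assumptions, and the skip connections mentioned in the abstract are omitted for brevity.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class CrossAttentionBlock(nn.Module):
        # Cross-attention: query tokens attend over key/value tokens, so a
        # location in one stream can match features at distant locations
        # in the other stream.
        def __init__(self, dim, num_heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)
            self.ffn = nn.Sequential(
                nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

        def forward(self, queries, keys_values):
            attended, _ = self.attn(queries, keys_values, keys_values)
            x = self.norm1(queries + attended)
            return self.norm2(x + self.ffn(x))

    class SketchColorizer(nn.Module):
        def __init__(self, dim=2048):
            super().__init__()
            backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
            # Accept 6 input channels: reference RGB (3) + sketch replicated
            # to 3 channels, stacked along the channel dimension as the
            # abstract describes. The replaced conv1 is trained from scratch;
            # the remaining layers keep their pre-trained weights.
            backbone.conv1 = nn.Conv2d(6, 64, 7, stride=2, padding=3, bias=False)
            self.encoder = nn.Sequential(*list(backbone.children())[:-2])
            self.cross_attn = CrossAttentionBlock(dim)
            # Minimal stand-in decoder; the paper's decoder also uses skip
            # connections to the encoder, omitted here for brevity.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(dim, 256, 4, stride=4), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(256, 3, 8, stride=8), nn.Sigmoid())

        def forward(self, reference, sketch):
            fused = torch.cat([reference, sketch], dim=1)  # (B, 6, H, W)
            fmap = self.encoder(fused)                     # (B, 2048, H/32, W/32)
            b, c, h, w = fmap.shape
            tokens = fmap.flatten(2).transpose(1, 2)       # (B, h*w, 2048)
            # The abstract does not specify the exact query/key split of the
            # cross-attention; here the fused tokens attend over themselves
            # as a stand-in for the paper's long-distance feature matching.
            tokens = self.cross_attn(tokens, tokens)
            fmap = tokens.transpose(1, 2).reshape(b, c, h, w)
            return self.decoder(fmap)                      # (B, 3, H, W)

    model = SketchColorizer()
    ref = torch.rand(1, 3, 256, 256)     # reference frame (RGB)
    sketch = torch.rand(1, 3, 256, 256)  # line art, replicated to 3 channels
    colored = model(ref, sketch)         # -> torch.Size([1, 3, 256, 256])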
Keywords: sketch colorization; convolutional neural network; Transformer; color matching; animation production
CLC number: TP391.41 [Automation and Computer Technology - Computer Application Technology]