Tampering localization of recaptured image based on text edge distortion features


Authors: CHEN Changsheng[1]; CHEN Ziwei; LI Xijin (College of Electronics and Information Engineering, Shenzhen University, Shenzhen, Guangdong 518060, China)

Affiliation: [1] College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, Guangdong, China

Source: China Sciencepaper (《中国科技论文》), 2024, No. 2, pp. 160-168, 199 (10 pages)

Funding: National Natural Science Foundation of China (62072313).

Abstract: To address the problem of tampering localization in recaptured document images, a localization method based on text edge distortion features was proposed. Text distortion features were constructed from three aspects: the distribution of text edges, the edge gradient, and the difference in edge gradient between the text under test and a reference text; a deep neural network-based classifier was then trained to make the decision. To evaluate the performance of the detection method, a dataset containing 120 legitimate captured images and 1 200 recaptured and tampered document images was constructed. Experimental results show that the proposed method achieves an area under the ROC curve (AUC) of 0.84 and an equal error rate (EER) of 0.23 at the lexical (word) level in the cross-database scenario. Compared with Forensic Similarity (128×128) and DenseFCN, the proposed features combined with LightDenseNet improve the lexical-level AUC by 0.06 and 0.17, respectively, under the cross-database protocol of the recaptured and tampered document dataset.
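The abstract describes the pipeline only at a high level. As a reading aid, the Python sketch below illustrates the general idea under stated assumptions: it builds a simple word-patch descriptor from edge density and the gradient-magnitude histogram along text edges, compares it with a reference patch, and computes lexical-level AUC/EER from per-word scores. The function names, the Canny/Sobel parameters, and the histogram binning are illustrative choices made here and are not taken from the paper; the actual method trains a LightDenseNet classifier on its own feature construction.

```python
# Minimal, illustrative sketch only (not the authors' implementation): it mirrors
# the three feature groups named in the abstract (edge distribution, edge gradient,
# and the gradient difference between a query word patch and a reference patch)
# and the lexical-level AUC/EER evaluation. All settings below are assumptions.
import cv2
import numpy as np
from sklearn.metrics import roc_curve, auc


def edge_gradient_feature(patch: np.ndarray, bins: int = 16) -> np.ndarray:
    """Describe one 8-bit grayscale word/character patch by its edge statistics."""
    edges = cv2.Canny(patch, 100, 200)                  # text edge map
    edge_ratio = np.count_nonzero(edges) / edges.size   # edge-distribution cue

    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)    # horizontal gradient
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)    # vertical gradient
    mag = cv2.magnitude(gx, gy)

    edge_mag = mag[edges > 0]                           # gradients on edge pixels only
    hist = np.zeros(bins, dtype=np.float32)
    if edge_mag.size:
        # Fixed range (~max 3x3 Sobel magnitude on 8-bit input) keeps histograms
        # of different patches comparable.
        hist, _ = np.histogram(edge_mag, bins=bins, range=(0.0, 1500.0))
        hist = hist.astype(np.float32) / edge_mag.size
    return np.concatenate(([edge_ratio], hist))


def query_vs_reference(query: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Stack the query descriptor, the reference descriptor, and their difference."""
    fq = edge_gradient_feature(query)
    fr = edge_gradient_feature(reference)
    return np.concatenate([fq, fr, fq - fr])


def lexical_auc_eer(labels: np.ndarray, scores: np.ndarray) -> tuple:
    """AUC and EER from per-word tampering scores (label 1 = tampered)."""
    fpr, tpr, _ = roc_curve(labels, scores)
    i = np.nanargmin(np.abs(fpr - (1.0 - tpr)))         # point where FPR is closest to FNR
    return auc(fpr, tpr), float((fpr[i] + 1.0 - tpr[i]) / 2.0)
```

In this sketch, a classifier such as the LightDenseNet mentioned in the results would be trained on descriptors like `query_vs_reference(...)` computed per word region; only the hand-crafted feature idea and the ROC-based metrics are shown here.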

Keywords: document image; recapture attack; tampering localization; text edge distortion; recaptured and tampered document database

CLC number: TP317.4 (Automation and Computer Technology: Computer Software and Theory)

 
