Authors: Youqing Fang, Jingwen Jia, Yuhai Yang, Wanli Lyu
Source: Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators (ICPCSEE), 2023, No. 1, pp. 417-432 (16 pages)
Funding: This research work is partly supported by the National Natural Science Foundation of China (62172001); the Provincial Colleges Quality Project of Anhui Province (2020xsxxkc047); and the National Undergraduate Innovation and Entrepreneurship Training Program (202210357077).
Abstract: Adding subtle perturbations to an image can cause a classification model to misclassify it; such images are called adversarial examples. Adversarial examples threaten the safe use of deep neural networks, but when combined with reversible data hiding (RDH) technology, they can protect images from being correctly identified by unauthorized models while allowing lossless recovery of the image under authorized models. On this basis, the reversible adversarial example (RAE) is emerging. However, existing RAE techniques focus on feasibility, attack success rate, and image quality, but ignore transferability and time complexity. In this paper, we optimize the data hiding structure and combine it with data augmentation, which flips the input image with some probability to avoid overfitting on the dataset. While maintaining a high white-box attack success rate and the image's visual quality, the proposed method improves the transferability of reversible adversarial examples by approximately 16% and reduces the computational cost by approximately 43% compared with the state-of-the-art method. In addition, an appropriate flip probability can be selected for different application scenarios.
Keywords: reversible adversarial example; black-box attack; transferability; complexity
Classification: TP3 [Automation and Computer Technology - Computer Science and Technology]
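The abstract describes flipping the input image with some probability during adversarial example generation as a data augmentation step. Below is a minimal NumPy sketch of that idea; the function names, the horizontal-flip choice, and the FGSM-style update are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def random_flip(image: np.ndarray, p: float = 0.5, rng=None) -> np.ndarray:
    """Horizontally flip an H x W x C image with probability p."""
    rng = rng or np.random.default_rng()
    if rng.random() < p:
        return image[:, ::-1, :].copy()
    return image

def fgsm_step(image: np.ndarray, grad: np.ndarray, eps: float = 2 / 255) -> np.ndarray:
    """One signed-gradient ascent step (FGSM-style), clipped to [0, 1]."""
    return np.clip(image + eps * np.sign(grad), 0.0, 1.0)

# In an iterative attack, the classifier's gradient would be computed on
# random_flip(x) instead of x at each step, so the perturbation does not
# overfit to the exact pixel layout of the source model's input.
```

Tuning `p` trades off white-box success rate against transferability, which is consistent with the abstract's note that the flip probability can be chosen per application scenario.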