Authors: Haoran Lyu, Yajie Wang, Yu-an Tan, Huipeng Zhou, Yuhang Zhao, Quanxin Zhang
Affiliations: [1] School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing, China; [2] School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
Source: Cybersecurity (网络空间安全科学与技术(英文)), 2025, No. 1, pp. 180-188 (9 pages)
Abstract: Models based on the MLP-Mixer architecture are becoming popular, but they still suffer from adversarial examples. Although it has been shown that MLP-Mixer is more robust to adversarial attacks compared to convolutional neural networks (CNNs), there has been no research on adversarial attacks tailored to its architecture. In this paper, we fill this gap. We propose a dedicated attack framework called Maxwell's demon Attack (MA). Specifically, we break the channel-mixing and token-mixing mechanisms of the MLP-Mixer by perturbing the inputs of each Mixer layer to achieve high transferability. We demonstrate that disrupting the MLP-Mixer's capture of the main information of images by masking its inputs can generate adversarial examples with cross-architectural transferability. Extensive evaluations show the effectiveness and superior performance of MA. Perturbations generated based on masked inputs obtain a higher success rate of black-box attacks than existing transfer attacks. Moreover, our approach can be easily combined with existing methods to improve the transferability both within MLP-Mixer based models and to models with different architectures. We achieve up to 55.9% attack performance improvement. Our work exploits the true generalization potential of the MLP-Mixer adversarial space and helps make it more robust for future deployments.
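The abstract describes MA only at a high level, so the following is a minimal, hypothetical PyTorch sketch rather than the paper's method: it shows a standard Mixer block with its token-mixing and channel-mixing MLPs, and an illustrative attack loop that masks each block's input through forward pre-hooks while accumulating an input perturbation. The block structure, masking ratio, surrogate loss, and names such as MixerBlock and masked_layer_attack are assumptions introduced for illustration.

```python
# Illustrative sketch only: the masking scheme and loss below are assumptions,
# not the MA algorithm from the paper.
import torch
import torch.nn as nn


class MixerBlock(nn.Module):
    """Minimal MLP-Mixer layer: a token-mixing MLP followed by a channel-mixing MLP."""

    def __init__(self, num_tokens: int, dim: int, hidden: int = 256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, hidden), nn.GELU(), nn.Linear(hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                          # x: (batch, tokens, channels)
        y = self.norm1(x).transpose(1, 2)          # mix information across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))    # mix information across channels
        return x


def masked_layer_attack(model, blocks, x, eps=8 / 255, steps=10, mask_ratio=0.1):
    """Hypothetical attack loop: randomly mask every Mixer block's input via
    forward pre-hooks and maximize a simple surrogate loss on the features."""
    def pre_hook(module, inputs):
        h = inputs[0]
        keep = (torch.rand_like(h) > mask_ratio).float()
        return (h * keep,)                         # drop a fraction of the layer input

    delta = torch.zeros_like(x, requires_grad=True)
    alpha = eps / steps
    for _ in range(steps):
        handles = [b.register_forward_pre_hook(pre_hook) for b in blocks]
        loss = model(x + delta).pow(2).mean()      # surrogate objective (assumption)
        loss.backward()
        for h in handles:
            h.remove()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()     # signed gradient ascent step
            delta.clamp_(-eps, eps)                # keep perturbation within budget
            delta.grad.zero_()
    return (x + delta).detach()


# Toy usage: a "model" of two Mixer blocks applied to 4 tokens of 8 channels.
blocks = [MixerBlock(num_tokens=4, dim=8), MixerBlock(num_tokens=4, dim=8)]
model = nn.Sequential(*blocks)
x_adv = masked_layer_attack(model, blocks, torch.randn(1, 4, 8))
```

The forward pre-hooks stand in for the abstract's idea of perturbing or masking the input of each Mixer layer; a real transfer attack would replace the surrogate loss with the paper's objective and feed x_adv to black-box target models.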
Keywords: Adversarial attacks; Adversarial examples; Image classification
Classification: TP3 [Automation and Computer Technology / Computer Science and Technology]