Template Attack Based on Improved Transformer Model


Authors: Peng Jing [1], Wang Min [1], Wang Yi [1]

Affiliation: [1] School of Cybersecurity, Chengdu University of Information Technology, Chengdu, Sichuan, China

Source: Advances in Applied Mathematics (应用数学进展), 2023, No. 2, pp. 679-689 (11 pages)

Abstract: The template attack is the most powerful class of side-channel attack, but traditional template attacks can run into numerical problems when processing high-dimensional feature data. Masking is one of the common countermeasures against side-channel attacks; its core idea is to use random masks to randomize the power leakage of sensitive intermediate values while the cryptographic algorithm executes. To address the limitations of traditional template attacks and the masking countermeasure, this paper focuses on the Transformer network model, which has achieved remarkable results in machine translation, and proposes for the first time a template-attack method based on the Transformer model. To adapt a network designed for machine translation to one-dimensional side-channel data, the model structure is adjusted accordingly. In the experiments, power traces are collected from a masked AES-128 implementation, the output of the third S-box in the first round is chosen as the attack point, and templates are built with a multilayer perceptron (MLP), a one-dimensional convolutional neural network (CNN), and the improved Transformer model. The results show that the convolutional layers of the improved Transformer learn to combine different points of interest in the traces during training, while the self-attention mechanism assigns large weights to the important features and thereby extracts the points of interest that matter for classification. As a result, the template attack based on the improved Transformer succeeds against the masked dataset and requires fewer traces than the MLP and the one-dimensional CNN.
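Note: this record carries no code, so the following is only a minimal, hypothetical PyTorch sketch of the kind of architecture the abstract describes: a convolutional front end over the one-dimensional trace, a self-attention (Transformer encoder) stage that can weight points of interest, and a 256-class output whose label is the first-round third S-box output. The names TraceTransformer and label_trace, all layer sizes, and the trace length are illustrative assumptions, not the authors' published design.

```python
# Hypothetical sketch only: layer sizes, names, and trace length are
# illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

# Standard AES S-box (FIPS-197).
AES_SBOX = [
    0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5, 0x30, 0x01, 0x67, 0x2B, 0xFE, 0xD7, 0xAB, 0x76,
    0xCA, 0x82, 0xC9, 0x7D, 0xFA, 0x59, 0x47, 0xF0, 0xAD, 0xD4, 0xA2, 0xAF, 0x9C, 0xA4, 0x72, 0xC0,
    0xB7, 0xFD, 0x93, 0x26, 0x36, 0x3F, 0xF7, 0xCC, 0x34, 0xA5, 0xE5, 0xF1, 0x71, 0xD8, 0x31, 0x15,
    0x04, 0xC7, 0x23, 0xC3, 0x18, 0x96, 0x05, 0x9A, 0x07, 0x12, 0x80, 0xE2, 0xEB, 0x27, 0xB2, 0x75,
    0x09, 0x83, 0x2C, 0x1A, 0x1B, 0x6E, 0x5A, 0xA0, 0x52, 0x3B, 0xD6, 0xB3, 0x29, 0xE3, 0x2F, 0x84,
    0x53, 0xD1, 0x00, 0xED, 0x20, 0xFC, 0xB1, 0x5B, 0x6A, 0xCB, 0xBE, 0x39, 0x4A, 0x4C, 0x58, 0xCF,
    0xD0, 0xEF, 0xAA, 0xFB, 0x43, 0x4D, 0x33, 0x85, 0x45, 0xF9, 0x02, 0x7F, 0x50, 0x3C, 0x9F, 0xA8,
    0x51, 0xA3, 0x40, 0x8F, 0x92, 0x9D, 0x38, 0xF5, 0xBC, 0xB6, 0xDA, 0x21, 0x10, 0xFF, 0xF3, 0xD2,
    0xCD, 0x0C, 0x13, 0xEC, 0x5F, 0x97, 0x44, 0x17, 0xC4, 0xA7, 0x7E, 0x3D, 0x64, 0x5D, 0x19, 0x73,
    0x60, 0x81, 0x4F, 0xDC, 0x22, 0x2A, 0x90, 0x88, 0x46, 0xEE, 0xB8, 0x14, 0xDE, 0x5E, 0x0B, 0xDB,
    0xE0, 0x32, 0x3A, 0x0A, 0x49, 0x06, 0x24, 0x5C, 0xC2, 0xD3, 0xAC, 0x62, 0x91, 0x95, 0xE4, 0x79,
    0xE7, 0xC8, 0x37, 0x6D, 0x8D, 0xD5, 0x4E, 0xA9, 0x6C, 0x56, 0xF4, 0xEA, 0x65, 0x7A, 0xAE, 0x08,
    0xBA, 0x78, 0x25, 0x2E, 0x1C, 0xA6, 0xB4, 0xC6, 0xE8, 0xDD, 0x74, 0x1F, 0x4B, 0xBD, 0x8B, 0x8A,
    0x70, 0x3E, 0xB5, 0x66, 0x48, 0x03, 0xF6, 0x0E, 0x61, 0x35, 0x57, 0xB9, 0x86, 0xC1, 0x1D, 0x9E,
    0xE1, 0xF8, 0x98, 0x11, 0x69, 0xD9, 0x8E, 0x94, 0x9B, 0x1E, 0x87, 0xE9, 0xCE, 0x55, 0x28, 0xDF,
    0x8C, 0xA1, 0x89, 0x0D, 0xBF, 0xE6, 0x42, 0x68, 0x41, 0x99, 0x2D, 0x0F, 0xB0, 0x54, 0xBB, 0x16,
]

def label_trace(plaintext: bytes, key: bytes) -> int:
    # Attack point from the abstract: output of the 3rd S-box in round 1,
    # i.e. Sbox(p[2] XOR k[2]); the network classifies this 256-valued byte.
    return AES_SBOX[plaintext[2] ^ key[2]]

class TraceTransformer(nn.Module):
    """Conv1d front end + Transformer encoder for 1-D power traces."""
    def __init__(self, d_model=64, nhead=4, num_layers=2, num_classes=256):
        super().__init__()
        # Convolutional embedding: learns local patterns around points of
        # interest and shortens the sequence the attention layers see.
        self.embed = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=11, stride=4, padding=5),
            nn.ReLU(),
        )
        enc = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                         dim_feedforward=128, batch_first=True)
        # Self-attention can assign large weights to informative samples,
        # which is how the abstract explains the model's POI selection.
        self.encoder = nn.TransformerEncoder(enc, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, trace_len) raw power samples
        z = self.embed(x.unsqueeze(1))        # (batch, d_model, L)
        z = self.encoder(z.transpose(1, 2))   # (batch, L, d_model)
        return self.head(z.mean(dim=1))       # pool over time -> 256 logits

# Example: score a batch of 700-sample traces (length is an assumption).
model = TraceTransformer()
logits = model(torch.randn(8, 700))  # -> shape (8, 256)
```

Pooling over the time axis before the linear head keeps the classifier independent of trace length; the paper's actual head, hyperparameters, and preprocessing may differ.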

Keywords: Transformer model; attention mechanism; template attack

Classification: TP3 [Automation and Computer Technology / Computer Science and Technology]

 
