Authors: JiaShuo Liu, JiongJiong Ren, ShaoZhen Chen
Affiliation: [1] Information Engineering University, Zhengzhou, People's Republic of China
Source: Cybersecurity (网络空间安全科学与技术(英文)), 2024, Issue 4, pp. 36-51 (16 pages)
Funding: Supported by the National Natural Science Foundation of China [Grant number 62206312].
Abstract: At CRYPTO 2019, Gohr opened up a new direction for cryptanalysis: he successfully applied deep learning to differential cryptanalysis of the NSA block cipher SPECK32/64, achieving higher accuracy than traditional differential distinguishers. Since then, one of the mainstream research directions has been to increase the training sample size and use different neural networks to improve the accuracy of neural distinguishers. This mindset can lead to a huge number of parameters, a heavy computing load, and large memory consumption during distinguisher training. In the practical application of cryptanalysis, however, the applicability of an attack method in a resource-constrained environment is very important. We therefore focus on cost optimization and aim to reduce the network parameters of differential neural cryptanalysis. In this paper, we propose two cost-optimized neural distinguisher improvement methods, addressing the data format and the network structure, respectively. First, we obtain a partial output difference neural distinguisher using only a 4-bit training data format, which is constructed with a new advantage-bit search algorithm based on two key improvement conditions. In addition, we perform an interpretability analysis of the new neural distinguishers, whose results are mainly reflected in the relationship between the neural distinguishers, truncated differentials, and advantage bits. Second, we replace the traditional convolution with depthwise separable convolution to reduce the training cost while affecting the accuracy as little as possible. Overall, the number of training parameters can be reduced by less than 50% by using our new network structure for training neural distinguishers. Finally, we apply this network structure to the partial output difference neural distinguishers. The combined approach leads to a further reduction in the number of parameters (to approximately 30% of Gohr's distinguishers for SPECK).
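The parameter saving from swapping a standard convolution for a depthwise separable one can be illustrated with a simple count. The sketch below is a hedged, minimal illustration: the channel count (32) and kernel size (3) are taken from Gohr's residual network for SPECK32/64, but the paper's exact layer shapes, and hence its overall reduction figure, are not reproduced here.

```python
# Minimal sketch: parameter counts for a standard 1D convolution versus a
# depthwise separable replacement (depthwise k-tap filter per input channel,
# followed by a 1x1 pointwise convolution). Channel/kernel sizes (32, 3)
# follow Gohr's distinguisher network; everything else is illustrative.

def conv1d_params(c_in: int, c_out: int, k: int) -> int:
    """Weights plus biases of a standard 1D convolution."""
    return c_in * c_out * k + c_out

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise conv (one k-tap filter per input channel, with bias)
    followed by a 1x1 pointwise conv that mixes channels (with bias)."""
    depthwise = c_in * k + c_in
    pointwise = c_in * c_out * 1 + c_out
    return depthwise + pointwise

standard = conv1d_params(32, 32, 3)                # 3104 parameters
separable = depthwise_separable_params(32, 32, 3)  # 1184 parameters
print(standard, separable, round(separable / standard, 2))
```

For a single such layer the separable variant keeps roughly 38% of the parameters; the paper's overall figures differ because the full network also contains dense layers and multiple convolutional blocks.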
Keywords: Deep learning; Block cipher; Neural distinguisher; Depthwise separable convolution; SPECK
Classification: TP181 [Automation and Computer Technology: Control Theory and Control Engineering]