Affiliations: [1] School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, Anhui, China; [2] Key Laboratory of Electromagnetic Space Information, Chinese Academy of Sciences, Hefei 230027, Anhui, China; [3] School of Cyberspace Security, University of Science and Technology of China, Hefei 230027, Anhui, China
Source: Chinese Journal of Network and Information Security, 2023, No. 4, pp. 29-39 (11 pages)
Funding: National Natural Science Foundation of China (U20B2047, 62072421, 62002334, 62102386, 62121002); Exploration Fund of the University of Science and Technology of China (YD3480002001); Fundamental Research Funds for the Central Universities (WK2100000011).
Abstract: In recent years, deep learning has emerged as a crucial technology in various fields. However, the training process of deep learning models often requires a substantial amount of data, which may contain private and sensitive information such as personal identities (e.g., phone numbers, ID numbers) and financial or medical details. Consequently, research on the privacy risks associated with artificial intelligence models has garnered significant attention in academia. Privacy research on deep learning models has mainly focused on traditional neural networks, with limited exploration of emerging architectures such as reversible networks. Reversible neural networks have a distinct structure in which the input of an upper layer can be obtained directly from the output of the lower layer. Intuitively, this structure retains more information about the training data, potentially resulting in a higher risk of privacy leakage than traditional networks. Therefore, the privacy of reversible networks was examined from two aspects: data privacy leakage and model function privacy leakage, and this risk assessment strategy was applied to reversible networks. Specifically, two classical reversible networks were selected, namely RevNet and i-RevNet, and four attack methods were used to analyze privacy leakage: membership inference attack, model inversion attack, attribute inference attack, and model extraction attack. The experimental results demonstrate that reversible networks exhibit more serious privacy risks than traditional neural networks when subjected to membership inference, model inversion, and attribute inference attacks, and similar privacy risks when subjected to model extraction attacks. Considering the increasing popularity of reversible neural networks across various tasks, including those involving sensitive data, it becomes imperative to address these privacy risks. Based on the analysis of the experimental results, potential solutions were proposed that can be applied to the future development of reversible networks.
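The abstract's central observation, that a reversible network's layer inputs can be recovered exactly from its outputs, follows from the additive coupling structure used in RevNet-style architectures. The following is a minimal sketch of that idea; the residual functions `f` and `g` here are hypothetical toy choices standing in for the learned residual branches, not the networks evaluated in the paper.

```python
import numpy as np

def f(x):
    # Hypothetical residual function F (stands in for a learned branch).
    return np.tanh(x)

def g(x):
    # Hypothetical residual function G.
    return 0.5 * x

def forward(x1, x2):
    """Additive coupling block: split input into (x1, x2), mix with F and G."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def inverse(y1, y2):
    """Exact inverse: the block's inputs are recovered from its outputs alone,
    without storing activations -- the property that motivates the paper's
    concern about privacy leakage."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

x1 = np.array([1.0, -2.0, 0.5])
x2 = np.array([0.3, 0.0, 1.5])
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert np.allclose(r1, x1) and np.allclose(r2, x2)
```

Because inversion is exact regardless of what F and G compute, intermediate representations carry full information about the input, which is the intuition behind the elevated data-level privacy risk reported in the experiments.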
Classification: TP309.7 [Automation and Computer Technology — Computer System Architecture]