An Explanatory Strategy for Reducing the Risk of Privacy Leaks  


Authors: Mingting Liu, Xiaozhang Liu, Anli Yan, Xiulai Li, Gengquan Xie, Xin Tang

Affiliations: [1] Hainan University, Haikou 570228, China; [2] Hainan Hairui Zhong Chuang Technol Co., Ltd., Haikou 570228, China; [3] School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore

Source: Journal of Information Hiding and Privacy Protection, 2021, No. 4, pp. 181-192

Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 61966011); the Hainan University Education and Teaching Reform Research Project (Grant No. HDJWJG01); the Key Research and Development Program of Hainan Province (Grant No. ZDYF2020033); the Young Talents' Science and Technology Innovation Project of Hainan Association for Science and Technology (Grant No. QCXM202007); and the Hainan Provincial Natural Science Foundation of China (Grant Nos. 621RC612 and 2019RC107).

Abstract: As machine learning moves into high-risk and sensitive applications such as medical care, autonomous driving, and financial planning, interpreting the predictions of black-box models has become key to whether people can trust machine-learning decisions. Interpretability provides users with additional information or explanations that improve model transparency and help users understand model decisions. However, this additional information inevitably exposes the dataset or the model to the risk of privacy leaks. We propose a strategy to reduce model privacy leakage for instance-based interpretability techniques. The procedure is as follows. First, the user inputs data into the model, which computes the prediction confidence of the user's data and returns the prediction result. Meanwhile, the model computes the prediction confidence of each instance in an interpretation dataset. Finally, the instance whose confidence has the smallest Euclidean distance to the confidence of the user's data is returned as the explanation. Experimental results show that the Euclidean distance between the confidence of the explanation data and the confidence of the user's prediction data is very small, indicating that the model's prediction on the explanation data closely matches its prediction on the user's data. Finally, we evaluate the accuracy of the explanation data: we measure how well the true labels of the explanation instances match their predicted labels, and assess the method's applicability across network models. The results show that the interpretation method has high accuracy and wide applicability.
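The selection step described in the abstract (returning the interpretation-set instance whose confidence vector is nearest, in Euclidean distance, to the confidence of the user's input) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `toy_proba`, `select_explanation`, and the synthetic data are all assumptions standing in for an arbitrary black-box model that exposes class-probability (confidence) outputs.

```python
import numpy as np

def select_explanation(model_proba, user_x, interp_X):
    """Pick the interpretation-set instance whose prediction-confidence
    vector is closest (Euclidean distance) to that of the user's input."""
    user_conf = model_proba(user_x[None, :])[0]   # confidence vector for user data
    interp_conf = model_proba(interp_X)           # confidence vectors for interpretation set
    dists = np.linalg.norm(interp_conf - user_conf, axis=1)
    idx = int(np.argmin(dists))
    return idx, interp_X[idx], float(dists[idx])

# Toy stand-in for a black-box classifier: softmax over a fixed linear map.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # 4 features, 3 classes (illustrative only)

def toy_proba(X):
    z = X @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

user_x = rng.normal(size=4)              # the user's query point
interp_X = rng.normal(size=(50, 4))      # hypothetical interpretation dataset
idx, explanation, dist = select_explanation(toy_proba, user_x, interp_X)
```

Only the explanation instance and its confidence leave the model boundary, which is the lever the paper uses to limit what an adversary can reconstruct about the training data.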

Keywords: machine learning model; data privacy risks; machine learning explanatory strategies

Classification: TN9 [Electronics and Telecommunications—Information and Communication Engineering]

 
