Evaluating Privacy Leakage and Memorization Attacks on Large Language Models (LLMs) in Generative AI Applications

Authors: Harshvardhan Aditya, Siddansh Chawla, Gunika Dhingra, Parijat Rai, Saumil Sood, Tanmay Singh, Zeba Mohsin Wase, Arshdeep Bahga, Vijay K. Madisetti

Affiliations: [1] School of Computer Science Engineering & Technology, Bennett University, Greater Noida, India; [2] Cloudemy Technology Labs, Chandigarh, India; [3] School of Cybersecurity and Privacy, Georgia Institute of Technology, Atlanta, USA

Published in: Journal of Software Engineering and Applications, 2024, No. 5, pp. 421-447 (27 pages)

Abstract: The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of the attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
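To illustrate the general idea behind the memorization-focused category mentioned in the abstract, the sketch below shows a minimal loss-thresholding membership inference attack: a candidate string is scored by its average token-level loss under an open causal language model, and unusually low-loss strings are flagged as likely training members. This is not the attack implementation evaluated in the paper; the model name ("gpt2"), the threshold value, and the helper names are illustrative assumptions.

# Minimal loss-thresholding membership-inference sketch (illustrative only;
# not the implementation evaluated in the paper).
# Assumptions: torch and transformers are installed, "gpt2" stands in for the
# target LLM, and THRESHOLD is a hypothetical calibration value.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # stand-in open model; a real attack would query the deployed LLM
THRESHOLD = 3.5       # hypothetical; would be calibrated on known member/non-member samples

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sequence_loss(text: str) -> float:
    """Average per-token negative log-likelihood of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # labels are shifted internally by the model
    return out.loss.item()

def is_likely_member(text: str) -> bool:
    """Flag the string as a likely training member if its loss is unusually low."""
    return sequence_loss(text) < THRESHOLD

if __name__ == "__main__":
    candidate = "Canary: the secret passphrase is purple-elephant-42."
    print(f"loss = {sequence_loss(candidate):.3f}, member? {is_likely_member(candidate)}")

In practice, the threshold is chosen from the loss distributions of known members and non-members, and stronger variants compare against a reference model rather than using a fixed cutoff.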

Keywords: Large Language Models, PII Leakage, Privacy, Memorization, Overfitting, Membership Inference Attack (MIA)

Classification: H31 (Linguistics: English)

 
