PIAFGNN: Property Inference Attacks against Federated Graph Neural Networks

Authors: Jiewen Liu, Bing Chen, Baolu Xue, Mengya Guo, Yuntao Xu

Affiliations: [1] College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 321002, China; [2] Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China

Source: Computers, Materials & Continua, 2025, Issue 2, pp. 1857-1877 (21 pages)

Funding: Supported by the National Natural Science Foundation of China (Nos. 62176122 and 62061146002).

Abstract: Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning on graph data, enabling multiple parties to train collaboratively without sharing their raw graphs and thereby resolving the data-isolation problem that centralized GNNs face in data-sensitive scenarios. Despite the plethora of prior work on inference attacks against centralized GNNs, the vulnerability of FedGNNs to inference attacks has not yet been widely explored, and it remains unclear whether the privacy-leakage risks of centralized GNNs carry over to FedGNNs. To bridge this gap, we present PIAFGNN, the first property inference attack (PIA) against FedGNNs. In contrast to prior attacks on centralized GNNs, the attacker in PIAFGNN can only obtain the global embedding gradients distributed by the central server. The attacker recasts the task of stealing the target user's local embeddings as a regression problem and uses a regression model to generate the target graph's node embeddings. By training shadow models and property classifiers, the attacker can then infer the basic property information of interest within the target graph. Experiments on three benchmark graph datasets show that PIAFGNN achieves attack accuracy above 70% in most cases, in some instances approaching the accuracy of inference attacks against centralized GNNs, and far exceeding random guessing. Furthermore, we observe that common defense mechanisms cannot mitigate our attack without degrading the model's performance on its main classification task.
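To make the pipeline described in the abstract concrete, the sketch below illustrates one way its three stages (gradient-to-embedding regression, shadow-model training, and property classification) could fit together. It is a minimal illustration only: the class names, network sizes, training loops, and synthetic tensors are assumptions chosen for readability, not the authors' implementation.

```python
import torch
import torch.nn as nn

# All dimensions, architectures, and data below are illustrative assumptions;
# the paper's actual design details are not reproduced here.
GRAD_DIM = 64        # dimensionality of an observed global embedding gradient
EMB_DIM = 64         # dimensionality of a node embedding
NUM_PROPERTIES = 2   # e.g., a binary graph property of interest

class EmbeddingRegressor(nn.Module):
    """Maps server-distributed embedding gradients to approximate node embeddings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(GRAD_DIM, 128), nn.ReLU(),
            nn.Linear(128, EMB_DIM),
        )
    def forward(self, grads):
        return self.net(grads)

class PropertyClassifier(nn.Module):
    """Predicts the property of interest from (approximate) node embeddings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_PROPERTIES),
        )
    def forward(self, emb):
        return self.net(emb)

# Synthetic stand-ins for data the attacker would collect via shadow models.
shadow_grads = torch.randn(256, GRAD_DIM)        # gradients from shadow FedGNN training
shadow_embeddings = torch.randn(256, EMB_DIM)    # corresponding known embeddings
shadow_property_labels = torch.randint(0, NUM_PROPERTIES, (256,))
target_grads = torch.randn(32, GRAD_DIM)         # gradients observed for the target client

regressor = EmbeddingRegressor()
classifier = PropertyClassifier()
reg_opt = torch.optim.Adam(regressor.parameters(), lr=1e-3)
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

# Step 1: train the regression model on shadow data so it can
# reconstruct embeddings from gradients alone.
for _ in range(200):
    reg_opt.zero_grad()
    loss = nn.functional.mse_loss(regressor(shadow_grads), shadow_embeddings)
    loss.backward()
    reg_opt.step()

# Step 2: train the property classifier on shadow embeddings whose
# property labels are known to the attacker.
for _ in range(200):
    clf_opt.zero_grad()
    loss = nn.functional.cross_entropy(classifier(shadow_embeddings), shadow_property_labels)
    loss.backward()
    clf_opt.step()

# Attack phase: approximate the target client's embeddings from the
# gradients it exposes, then infer the property of interest.
with torch.no_grad():
    approx_embeddings = regressor(target_grads)
    inferred_property = classifier(approx_embeddings).argmax(dim=1)
```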

Keywords: Federated graph neural networks; GNNs; privacy leakage; regression model; property inference attacks; embeddings

Classification code: O15 [Science: Mathematics]

 
