Transfer Learning in Adversarial Attack Across Graph Neural Networks
When: Wednesday, December 4, 2024, 3:30 PM - 4:30 PM
Where: See description for location
Description: DSC Master Thesis Defense by Pooja Chigurupati
Advisor: Dr. Ashokkumar Patel
Committee Members: Dr. Yuchou Chang and Dr. Long Jiao
Join Zoom Meeting:
https://umassd.zoom.us/j/6606199870?pwd=UzJn5i4klKllpqquQkXaaIVkp4LMZA.1&omn=92154165478
Meeting ID: 660 619 9870
Passcode: gMj1pw
Abstract:
Graph Neural Networks (GNNs) are increasingly used in domains such as social networks, recommendation systems, and cybersecurity. However, their resilience against adversarial attacks has become a significant concern. Existing research has shown that GNNs are vulnerable to adversarial attacks: lightly perturbing the graph data can substantially degrade their performance. This study analyzes the transferability of these vulnerabilities across different GNN architectures using transfer learning techniques, assessing how adversarial information may propagate through distinct models. We begin with a GNN trained on adversarial data. This model exhibits specific weaknesses, and we transfer its learned representations to another GNN architecture to evaluate how those adversarial patterns affect the target model. By training on an additional adversarial dataset, we can assess inherited weaknesses and uncover new vulnerabilities unique to the chosen architecture. Our research aims to identify both shared and architecture-specific susceptibilities among GNN models, thereby deepening our understanding of adversarial robustness and informing the development of more effective defenses for applications that rely on graph-based data.
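The pipeline the abstract describes — train a source GNN on adversarially perturbed graph data, transfer its learned representations to a target architecture, and probe whether the weakness carries over — can be sketched roughly as follows. This is a minimal illustrative sketch in plain NumPy, not the thesis implementation; the toy graph, the weight matrices, and the single-edge perturbation are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    """Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def gcn_layer(A_hat, H, W):
    """One graph-convolution step with ReLU activation."""
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy graph: 4 nodes on a ring, 3-dimensional node features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))

# Adversarial perturbation: flip one edge (add edge 0-2).
A_adv = A.copy()
A_adv[0, 2] = A_adv[2, 0] = 1.0

# Stand-in for first-layer weights learned by the source GNN
# on adversarial data (here just random, for illustration).
W_src = rng.normal(size=(3, 2))

# Transfer: the target model reuses the source's first-layer
# weights, then stacks its own architecture-specific layer.
W_tgt = rng.normal(size=(2, 2))
H1 = gcn_layer(normalize_adj(A_adv), X, W_src)    # transferred layer
H2 = gcn_layer(normalize_adj(A_adv), H1, W_tgt)   # target-specific layer

# Compare target embeddings on the clean vs. perturbed graph: a large
# gap suggests the adversarial effect survived the transfer.
H1_clean = gcn_layer(normalize_adj(A), X, W_src)
H2_clean = gcn_layer(normalize_adj(A), H1_clean, W_tgt)
gap = np.abs(H2 - H2_clean).mean()
print(f"mean embedding shift under attack: {gap:.4f}")
```

In practice the source and target models would be distinct trained architectures (e.g. GCN vs. GAT) and the perturbation would come from an attack method rather than a hand-flipped edge; the sketch only shows the shape of the transfer-and-evaluate loop.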
For more information, please contact Prof. Ashok Patel at ashok.patel@umassd.edu.
Contact: See description for contact information
Topical Areas: Faculty, Staff and Administrators