1 of 16

Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation

Sixiao Zhang, Hongxu Chen, Xiangguo Sun, Yicong Li, Guandong Xu

DSMI at UTS

2 of 16

Main Contribution

  • We propose Contrastive Loss Gradient Attack (CLGA), a gradient-based unsupervised attack method targeting graph contrastive learning. Unlike most supervised attack models, CLGA does not rely on labels; it degrades the quality of the learned embeddings and thus harms the performance of various downstream tasks.

3 of 16

Contents

  • Preliminaries
  • Motivations
  • Contributions
  • Methodology
  • Experiments

4 of 16

Preliminaries

  • Graph representation models

[Figure: an encoder maps each node of the input graph to a low-dimensional embedding vector, e.g. (0.1, 0.3, 0.8)]
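The encoder in the figure can be sketched with a single GCN-style propagation layer: each node averages its own and its neighbours' features, then applies a linear map and a ReLU. This is a minimal illustrative sketch (the function and variable names are ours, not the paper's), not the paper's exact encoder.

```python
def gcn_layer(adj, feats, weight):
    """One GCN-style propagation step: mean-aggregate each node's own and
    neighbours' features, then apply a linear map and ReLU.
    Illustrative sketch only; names are not from the paper."""
    n = len(adj)
    # add self-loops so each node keeps its own features
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    out = []
    for i in range(n):
        # mean over the (self-looped) neighbourhood
        agg = [sum(a[i][j] * feats[j][k] for j in range(n)) / deg[i]
               for k in range(len(feats[0]))]
        # linear map followed by ReLU
        h = [max(0.0, sum(agg[k] * weight[k][d] for k in range(len(agg))))
             for d in range(len(weight[0]))]
        out.append(h)
    return out

# toy graph: 3 nodes in a path 0-1-2, 2-dim features, identity weight
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weight = [[1.0, 0.0], [0.0, 1.0]]
emb = gcn_layer(adj, feats, weight)
```

Stacking several such layers (with learned weights) yields the node embeddings shown on the right of the figure.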

5 of 16

Preliminaries

  • Graph representation models

[Figure: taxonomy — graph representation models divide into supervised and unsupervised; unsupervised models include graph contrastive learning, node2vec, DeepWalk, ……]

6 of 16

Preliminaries

  • Graph contrastive learning
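Graph contrastive learning trains the encoder so that the same node's embeddings in two augmented views of the graph (the positive pair) are similar, while embeddings of different nodes (negatives) are dissimilar. A common objective is an InfoNCE-style loss; the sketch below is a simplified stand-in for such a loss, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(view1, view2, tau=0.5):
    """InfoNCE-style contrastive loss over two augmented views.
    For node i, (view1[i], view2[i]) is the positive pair and every
    other node in view2 is a negative. Simplified illustrative sketch."""
    n = len(view1)
    loss = 0.0
    for i in range(n):
        sims = [math.exp(cosine(view1[i], view2[j]) / tau) for j in range(n)]
        loss += -math.log(sims[i] / sum(sims))
    return loss / n

# identical views (perfect positives) give a lower loss than misaligned views
views = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
aligned = info_nce(views, views)
shifted = info_nce(views, views[1:] + views[:1])
```

Minimizing this loss pulls each positive pair together and pushes negatives apart, which is why `aligned` comes out smaller than `shifted` in the example.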

7 of 16

Preliminaries

  • Robustness

8 of 16

Preliminaries

  • Graph adversarial attacks

  • Most existing graph adversarial attacks are supervised attacks.

[Figure: an adversarial attack model perturbs the original graph to produce a poisoned graph]

9 of 16

Motivations

  • Graph contrastive learning is a state-of-the-art unsupervised graph representation approach, and it is more robust to adversarial attacks than conventional models.
  • Existing graph adversarial attacks are mostly supervised, so they are not suitable for evaluating the robustness of graph contrastive learning:
      • Labels are unavailable.
  • We fill this gap by proposing a novel unsupervised attack for graph contrastive learning.

10 of 16

Contributions

  • We propose Contrastive Loss Gradient Attack (CLGA), a gradient-based unsupervised attack method targeting graph contrastive learning. Unlike most supervised attack models, CLGA does not rely on labels; it degrades the quality of the learned embeddings and thus harms the performance of various downstream tasks.
  • We show through extensive experiments that CLGA outperforms unsupervised attack baselines and achieves performance comparable to some supervised attack methods, on three benchmark datasets and on both node classification and link prediction tasks.
  • We also show that CLGA transfers to other graph representation models such as GCN and DeepWalk.
  • We visualize the learned embeddings to show how CLGA degrades their quality.

11 of 16

Contrastive Loss Gradient Attack

12 of 16

Contrastive Loss Gradient Attack

 
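The core idea of CLGA is to back-propagate the contrastive loss to the adjacency matrix and flip the edges with the largest gradients, within a fixed budget. The sketch below illustrates that loop in pure Python under two stated simplifications: the encoder is a toy untrained function of the adjacency matrix, and with no autograd available each candidate flip is scored by directly re-evaluating the loss (a finite-difference proxy for the back-propagated gradient the paper uses). All names are ours, not the paper's.

```python
import math
from itertools import combinations

def embed(adj):
    """Toy encoder: node i's embedding is row i of (A + I), L2-normalised.
    A stand-in for a trained GNN encoder; illustrative only."""
    n = len(adj)
    emb = []
    for i in range(n):
        row = [float(adj[i][j]) + (1.0 if i == j else 0.0) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in row))
        emb.append([x / norm for x in row])
    return emb

def contrastive_loss(adj, tau=0.5):
    """InfoNCE with an identity augmentation: each node is its own positive,
    all other nodes are negatives. A stand-in for the contrastive objective
    that CLGA back-propagates; not the paper's exact loss."""
    z = embed(adj)
    n = len(z)
    loss = 0.0
    for i in range(n):
        sims = [math.exp(sum(a * b for a, b in zip(z[i], z[j])) / tau)
                for j in range(n)]
        loss += -math.log(sims[i] / sum(sims))
    return loss / n

def clga_sketch(adj, budget):
    """Greedy poisoning: at each step, flip the single edge whose flip
    increases the contrastive loss the most, up to the budget."""
    adj = [row[:] for row in adj]
    n = len(adj)
    for _ in range(budget):
        best, best_loss = None, contrastive_loss(adj)
        for i, j in combinations(range(n), 2):
            adj[i][j] = adj[j][i] = 1 - adj[i][j]   # try the flip
            loss = contrastive_loss(adj)
            if loss > best_loss:
                best, best_loss = (i, j), loss
            adj[i][j] = adj[j][i] = 1 - adj[i][j]   # undo
        if best is None:
            break                                    # no flip raises the loss
        i, j = best
        adj[i][j] = adj[j][i] = 1 - adj[i][j]        # commit the best flip
    return adj

# poison a 4-node path graph with a budget of 2 edge flips
clean = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
poisoned = clga_sketch(clean, budget=2)
```

The victim then trains its contrastive model on the poisoned graph, so the learned embeddings, and every downstream task built on them, are degraded. The exhaustive scoring here is O(n²) per step; the gradient-based ranking in the paper is what makes the attack scale.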

13 of 16

Experiments

14 of 16

Experiments

15 of 16

Experiments

16 of 16

Conclusions

  • In this paper, we introduce Contrastive Loss Gradient Attack (CLGA), an unsupervised, untargeted poisoning attack against graph contrastive learning. This is the first work to attack graph contrastive learning in an unsupervised manner, without using labels. CLGA damages the quality of the learned embeddings and thereby degrades the performance of various downstream tasks. Extensive experiments show that CLGA outperforms unsupervised baselines and achieves performance comparable to, and sometimes better than, supervised baselines. We also show that CLGA transfers to other graph representation models such as DeepWalk and GCN.