Negative-Free Graph Contrastive Learning for Recommendation (Conference)

Liu, J; Yu, M; Hu, X; et al. Negative-Free Graph Contrastive Learning for Recommendation. Proceedings - IEEE International Conference on Data Mining, ICDM, p. 517. DOI: 10.1109/ICDM65498.2025.00059

cited authors

  • Liu, J; Yu, M; Hu, X; Yang, J; Guo, Y; Li, W; Zhang, W

abstract

  • Graph Contrastive Learning (GCL) has emerged as a powerful approach in recommendation systems, leveraging graph structures to learn effective representations. However, existing contrastive sampling strategies often introduce unintended biases, most notably the misclassification of genuine positive samples as negatives, which undermines representation quality and overall recommendation performance. Accordingly, this paper revisits conventional contrastive sampling and introduces Negative-Free Sampling for Graph Contrastive Learning (NFS). NFS adopts a two-stage sampling strategy that selectively identifies and utilizes only positive instances during training. By removing reliance on negative samples, it effectively mitigates misclassification bias and improves the semantic alignment between related representations. A comprehensive theoretical analysis is also provided to establish the robustness of NFS against representation collapse. Experimental results on three benchmarks demonstrate that NFS consistently outperforms or matches state-of-the-art methods, achieving up to a 14.2% relative improvement across evaluated datasets. A detailed ablation study further examines how exclusively leveraging positive samples contributes to the efficiency of GCL. The results also demonstrate the plug-and-play nature of the proposed method and its resilience to noisy data.
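The abstract describes training a contrastive objective on positive pairs only. As a minimal sketch of that general idea (not the paper's actual NFS objective, whose two-stage sampling procedure is not detailed here), a negative-free alignment loss over user/item embedding pairs might look like:

```python
import numpy as np

def positive_alignment_loss(z_anchor, z_positive):
    """Negative-free contrastive loss: align each anchor embedding with its
    positive counterpart on the unit sphere, using no negative samples.

    This is an illustrative alignment objective, not NFS itself: the
    function names and setup here are assumptions for the sketch.
    """
    # Project both views onto the unit sphere.
    u = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    v = z_positive / np.linalg.norm(z_positive, axis=1, keepdims=True)
    # Minimize 2 - 2*cos(u, v) per pair; zero when pairs coincide.
    return float(np.mean(2.0 - 2.0 * np.sum(u * v, axis=1)))
```

With no repulsion term, an objective like this can collapse to a constant representation; the paper's theoretical analysis addresses why NFS avoids that failure mode.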

Digital Object Identifier (DOI)

  • 10.1109/ICDM65498.2025.00059

start page

  • 517