Retrieval-Augmented Graph Explanation with LLM-Based Bayesian Inference (Book Chapter)

Zhang, J.; Liu, J.; Luo, D.; et al. (2026). Retrieval-Augmented Graph Explanation with LLM-Based Bayesian Inference. CCIS 2695, 54–74. DOI: 10.1007/978-3-032-11477-8_6

cited authors

  • Zhang, J.; Liu, J.; Luo, D.; Neville, J.; Wei, H.

abstract

  • Recent advances in Graph Neural Network (GNN) explanation focus on identifying faithful subgraph-based rationales for model predictions. However, due to the scarcity of annotated graph explanation datasets, existing methods are prone to learning bias, resulting in unreliable or overfitted explanations. In this work, we propose a novel agentic explanation retrieval framework, where a GNN-based explainer generates candidate explanations, and a Large Language Model (LLM) agent evaluates and ranks them based on prior domain knowledge. Specifically, we cast the explanation process as a Bayesian inference problem and embed the LLM as a Bayesian variational inference module, thereby mitigating the bias introduced by limited supervision. The LLM agent acts as an explanation grader in a retrieval-augmented loop, guiding the learning dynamics of the explainer through uncertainty-aware optimization. Theoretically, we show that the integration of the LLM agent preserves a lower bound on explanation quality, and empirically, we validate our approach on both synthetic and real-world graph datasets. Our contributions are twofold: (1) we introduce a novel view of explanation retrieval as an agentic process, and (2) we demonstrate how LLMs can be effectively employed as Bayesian evaluators to improve interpretability in graph-based systems.
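As a rough illustration of the retrieval-augmented loop the abstract describes, the following is a minimal conceptual sketch only, not the chapter's implementation. All names (`generate_candidates`, `llm_grade`, `retrieve_best_explanation`) are hypothetical, and the LLM grader is stubbed with a simple sparsity heuristic standing in for prior domain knowledge: a GNN-based explainer proposes candidate subgraph explanations, the LLM agent scores each one, and the highest-ranked candidate is retrieved.

```python
import random

def generate_candidates(graph_edges, k=4):
    # Stand-in for a GNN-based explainer: propose k candidate edge
    # subsets (a real explainer would derive these from a learned
    # edge-importance mask over the input graph).
    return [random.sample(graph_edges, max(1, len(graph_edges) // 2))
            for _ in range(k)]

def llm_grade(candidate):
    # Stand-in for the LLM agent acting as a Bayesian evaluator:
    # return a prior-informed score in (0, 1]. Here we fake the prior
    # by preferring sparser explanations; the chapter's method would
    # instead query an LLM with domain knowledge and use the score in
    # uncertainty-aware optimization of the explainer.
    return 1.0 / (1.0 + len(candidate))

def retrieve_best_explanation(graph_edges, k=4):
    # Agentic retrieval loop: generate candidates, grade each with the
    # LLM evaluator, and return the top-ranked explanation.
    candidates = generate_candidates(graph_edges, k)
    return max(candidates, key=llm_grade)

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
best = retrieve_best_explanation(edges)
print(best)
```

In the paper's framing, the grader's scores would also feed back into the explainer's training signal rather than only ranking a fixed candidate pool; this sketch shows only the generate-grade-rank step.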

publication date

  • January 1, 2026

Digital Object Identifier (DOI)

  • 10.1007/978-3-032-11477-8_6
start page

  • 54

end page

  • 74

volume

  • 2695 CCIS