Recent advances in Graph Neural Network (GNN) explanation focus on identifying faithful subgraph-based rationales for model predictions. However, because annotated graph explanation datasets are scarce, existing methods are prone to learning biases from limited supervision, yielding unreliable or overfitted explanations. In this work, we propose a novel agentic explanation retrieval framework in which a GNN-based explainer generates candidate explanations and a Large Language Model (LLM) agent evaluates and ranks them using prior domain knowledge. Specifically, we cast the explanation process as Bayesian inference and embed the LLM as a variational inference module, thereby mitigating the bias introduced by limited supervision. The LLM agent acts as an explanation grader in a retrieval-augmented loop, guiding the learning dynamics of the explainer through uncertainty-aware optimization. Theoretically, we show that integrating the LLM agent preserves a lower bound on explanation quality; empirically, we validate our approach on both synthetic and real-world graph datasets. Our contributions are twofold: (1) we introduce a novel view of explanation retrieval as an agentic process, and (2) we demonstrate how LLMs can be effectively employed as Bayesian evaluators to improve interpretability in graph-based systems.
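The grading step of the retrieval-augmented loop can be sketched as a Bayesian re-ranking: the explainer's scores act as a likelihood over candidate subgraphs, and the LLM agent supplies a prior, so the posterior is proportional to prior times likelihood. The names `rank_explanations` and `stub_grader` below are hypothetical, and the stub grader (which favors small motifs) merely stands in for the LLM agent's domain-knowledge-based judgment; this is a minimal illustrative sketch, not the paper's implementation.

```python
import math

def rank_explanations(candidates, explainer_scores, grader):
    """Re-rank candidate subgraph explanations by combining the
    explainer's likelihood with a grader's prior (posterior ∝ prior × likelihood)."""
    # Turn raw explainer scores into a likelihood via softmax.
    exp_s = [math.exp(s) for s in explainer_scores]
    z = sum(exp_s)
    likelihood = [e / z for e in exp_s]
    # The grader returns a prior weight per candidate; here it is a stub
    # standing in for the LLM agent's domain-knowledge evaluation.
    prior = [grader(c) for c in candidates]
    # Unnormalized posterior, then renormalize to a distribution.
    post = [p * l for p, l in zip(prior, likelihood)]
    zp = sum(post) or 1.0
    post = [p / zp for p in post]
    return sorted(zip(candidates, post), key=lambda t: -t[1])

def stub_grader(edges):
    # Hypothetical prior: prefer compact explanations (fewer edges).
    return 1.0 / (1.0 + len(edges))

# Two candidate explanations (edge sets) with raw explainer scores.
ranked = rank_explanations(
    candidates=[{(0, 1), (1, 2)}, {(0, 1), (1, 2), (2, 3), (3, 4)}],
    explainer_scores=[0.5, 0.6],
    grader=stub_grader,
)
```

Although the second candidate has the higher raw explainer score, the prior's preference for compact motifs lets the smaller subgraph win the posterior ranking, illustrating how the grader can correct a biased explainer.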