ExplainIt! A Tool for Computing Robust Attributions of DNNs (Conference)

Jha, S.; Velasquez, A.; Ewetz, R.; et al. (2022). ExplainIt! A Tool for Computing Robust Attributions of DNNs. 5916-5919.

cited authors

  • Jha, S; Velasquez, A; Ewetz, R; Pullum, L; Jha, S

abstract

  • Responsible integration of deep neural networks into the design of trustworthy systems requires the ability to explain decisions made by these models. Explainability and transparency are critical for system analysis, certification, and human-machine teaming. We have recently demonstrated that neural stochastic differential equations (SDEs) present an explanation-friendly DNN architecture. In this paper, we present ExplainIt, an online tool for explaining AI decisions that uses neural SDEs to create visually sharper and more robust attributions than traditional residual neural networks. Our tool shows that the injection of noise into every layer of a residual network often leads to less noisy and less fragile integrated gradient attributions. The discrete neural stochastic differential equation model is trained on the ImageNet dataset of over a million images, and the demonstration produces robust attributions on images in the ImageNet validation set and on a variety of images in the wild. Our online tool is hosted publicly for educational purposes.
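The abstract refers to integrated gradient attributions computed on a residual network with per-layer noise injection. As a rough illustration of the attribution method (not the paper's actual model or tool), the sketch below implements the standard Riemann-sum approximation of integrated gradients in NumPy over a toy one-block residual model; the optional Gaussian noise term stands in, very loosely, for a discretized neural SDE step. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def residual_block(x, W, noise_std=0.0, rng=None):
    """One toy residual layer: x + relu(W @ x).
    The optional Gaussian noise crudely mimics a discretized
    neural-SDE step (an assumption for illustration, not the
    architecture used by ExplainIt)."""
    h = np.maximum(W @ x, 0.0) + x
    if noise_std > 0.0 and rng is not None:
        h = h + rng.normal(scale=noise_std, size=h.shape)
    return h

def model(x, W, noise_std=0.0, rng=None):
    # Scalar output: sum of the block's activations.
    return residual_block(x, W, noise_std, rng).sum()

def grad(f, x, eps=1e-4):
    # Central finite differences, one coordinate at a time.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

def integrated_gradients(f, x, baseline, steps=50):
    """Riemann-sum approximation of integrated gradients:
    IG_i = (x_i - b_i) * mean_alpha dF/dx_i at b + alpha * (x - b),
    with alpha sampled at midpoints of `steps` equal intervals."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(f, baseline + a * (x - baseline))
    return (x - baseline) * total / steps
```

A useful sanity check is the completeness axiom: the attributions should sum to `f(x) - f(baseline)`. For a noiseless model that is piecewise linear along the integration path, the approximation is essentially exact.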

publication date

  • January 1, 2022

start page

  • 5916

end page

  • 5919