Adversarial attacks on computer vision algorithms using natural perturbations (conference paper)

Ramanathan, A., Pullum, L., Husein, Z. et al. (2017). Adversarial attacks on computer vision algorithms using natural perturbations. 2018-January, 1-6. DOI: 10.1109/IC3.2017.8284294

cited authors

  • Ramanathan, A.; Pullum, L.; Husein, Z.; Raj, S.; Torosdagli, N.; Pattanaik, S.; Jha, S. K.

abstract

  • Verifying the correctness of intelligent embedded systems is notoriously difficult due to the use of machine learning algorithms that cannot provide guarantees of deterministic correctness. In this paper, our validation efforts demonstrate that the OpenCV Histogram of Oriented Gradients (HOG) implementation for human detection is susceptible to errors due to both malicious perturbations and naturally occurring fog phenomena. To the best of our knowledge, we are the first to explicitly employ a natural perturbation (like fog) as an adversarial attack using methods from computer graphics. Our experimental results show that computer vision algorithms are susceptible to errors under a small set of naturally occurring perturbations even if they are robust to a majority of such perturbations. Our methods and results may be of interest to the designers, developers and validation teams of intelligent cyber-physical systems such as autonomous cars.
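
The paper renders fog onto input images using methods from computer graphics. As an illustrative sketch only (not the authors' exact rendering pipeline), the standard atmospheric scattering model used for fog blends each pixel toward a fog "airlight" color according to a transmission factor t = exp(-beta * depth), so deeper pixels, or denser fog, wash out more:

```python
import math

def fog_pixel(intensity, depth, beta=0.5, airlight=255.0):
    """Blend one pixel toward the fog 'airlight' using the common
    atmospheric scattering model: I_fog = I * t + A * (1 - t),
    where transmission t = exp(-beta * depth) decays with scene depth.
    (Parameter names here are illustrative, not from the paper.)"""
    t = math.exp(-beta * depth)
    return intensity * t + airlight * (1.0 - t)

def add_fog(image, depth_map, beta=0.5, airlight=255.0):
    """Apply fog to a grayscale image (nested lists of pixel values)
    given a per-pixel depth map of the same shape."""
    return [
        [fog_pixel(p, d, beta, airlight) for p, d in zip(row, drow)]
        for row, drow in zip(image, depth_map)
    ]
```

A fogged image produced this way can then be fed to a detector (e.g. OpenCV's HOG-based person detector) to test whether detections survive the perturbation, which is the kind of validation experiment the abstract describes.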

publication date

  • July 2, 2017

Digital Object Identifier (DOI)

  • 10.1109/IC3.2017.8284294

start page

  • 1

end page

  • 6

volume

  • 2018-January