Deep neural networks (DNNs) are finding use in wide-ranging applications such as image recognition, medical diagnosis, and self-driving cars. However, DNNs suffer from a serious security threat: their decisions can be misled by adversarial inputs, crafted by adding human-imperceptible perturbations to normal inputs. Defending against adversarial attacks is challenging because of the multiple attack vectors, the adversary's unknown strategies, and cost constraints. This project investigates a compression/decompression-based defense strategy to protect DNNs against adversarial attacks with low cost and high accuracy. The project aims to create a new paradigm for safeguarding DNNs from a radically different perspective, using signal compression with a focus on integrating defenses into the compression of both the inputs and the DNN models. The research tasks include: (i) developing defensive compression for visual/audio inputs that maximizes defense efficiency without compromising testing accuracy; (ii) developing defensive model compression and novel gradient masking/obfuscation methods that require no retraining, to universally harden DNN models; and (iii) conducting attack-defense evaluations through algorithm-level simulation and live platform experimentation.

Any success from this EAGER project will be useful to the research communities interested in deep learning, hardware and cyber security, and multimedia. The project enhances economic opportunities by promoting wider adoption of deep learning in real-world systems, and gives special attention to educating women and students from traditionally under-represented/under-served groups at Florida International University (FIU). The project repository will be stored on a publicly accessible server at FIU (http://web.eng.fiu.edu/wwen/).
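The input-compression idea behind research task (i) can be illustrated with a minimal, hypothetical sketch (this is not the project's actual method): an image is passed through lossy JPEG compression and decompression before it reaches the classifier, on the premise that compression discards the small, high-frequency perturbations many adversarial attacks rely on. The `jpeg_purify` helper name and the quality setting below are illustrative assumptions.

```python
import io

import numpy as np
from PIL import Image


def jpeg_purify(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Compress and decompress an HxWx3 uint8 RGB image via JPEG.

    Lossy compression tends to remove small, high-frequency noise,
    which is where imperceptible adversarial perturbations often live.
    """
    buf = io.BytesIO()
    Image.fromarray(image).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))


# Simulate an "adversarial" input as a clean image plus tiny noise
# (a real attack would craft the perturbation, not sample it randomly).
clean = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
noise = np.random.randint(-3, 4, clean.shape)
perturbed = np.clip(clean.astype(int) + noise, 0, 255).astype(np.uint8)

purified = jpeg_purify(perturbed)
assert purified.shape == perturbed.shape and purified.dtype == np.uint8
```

A defense of this style is attractive because it needs no retraining of the protected model; the compression step simply sits in front of it at inference time.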
Data will be maintained for at least 5 years after the project period. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.