Research Project Title:
A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
Abstract: Research has shown that neural networks can be fooled by small changes to images, but current work fails to consider a realistic space of changes. Most work uses the $\ell_\infty$ norm or the $\ell_2$ norm to constrain the space of possible adversarial examples, yet neither norm includes rotations, translations, or skews. We propose two metrics to better capture the space of possible adversarial examples: VGG Distance (named after the Oxford Visual Geometry Group) and Madry Distance. VGG Distance uses a VGG-19 feature extractor to find adversarial examples, while Madry Distance composes $\ell_\infty$- and $\ell_2$-norm constraints with rotations and translations. Using these two metrics, we can better defend against a wide range of adversarial attacks on the MNIST dataset.
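To make the Madry Distance idea concrete, the sketch below is a minimal, hypothetical NumPy implementation: it takes the minimum, over a small set of translations of one image, of a combined $\ell_\infty$ + $\ell_2$ residual norm. The function name `madry_distance`, the shift budget, and the specific norm combination are illustrative assumptions, not the paper's definition; rotations (and the VGG-19 feature-space variant) are omitted for brevity.

```python
import numpy as np

def madry_distance(x, y, max_shift=3):
    """Hypothetical sketch of a translation-aware distance: the minimum,
    over small integer shifts of y, of an l_inf + l_2 norm of the residual.
    Rotations, which the full metric also composes, are omitted here."""
    best = np.inf
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            # Shift y by (dx, dy) with wrap-around, then measure the residual.
            shifted = np.roll(np.roll(y, dx, axis=0), dy, axis=1)
            r = x - shifted
            d = np.abs(r).max() + np.sqrt((r ** 2).sum())
            best = min(best, d)
    return best
```

Under this sketch, an image and a copy of itself shifted by a couple of pixels have distance zero, which is exactly the behavior a plain $\ell_\infty$ or $\ell_2$ norm fails to capture.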