Akhilan Boopathy
MIT EECS | Lincoln Laboratory Undergraduate Research and Innovation Scholar
Analysis and Quantification of the Robustness of Neural Networks
2018–2019
Electrical Engineering and Computer Science
- Theory of Computation
Luca Daniel
Neural networks have become an increasingly popular and effective tool for a variety of machine learning tasks, including image and text classification. However, recent research has shown that neural networks are susceptible to adversarial attacks: small, carefully crafted input perturbations that cause misclassification. In safety-critical applications, this susceptibility could lead to fatal accidents or unfair discrimination, so improving the robustness of neural networks to attack enhances safety and has positive social implications. Recent work has focused on computing attack-agnostic robustness guarantees for neural networks, i.e., certified bounds that hold regardless of which attack is used. This project aims to build on that prior work to find even stronger robustness guarantees so that future neural networks can be made more robust for safety- and security-critical applications.
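The notion of an attack-agnostic guarantee can be made concrete with a toy example. The sketch below is purely illustrative (it is not this project's method, and all weights are made up): it uses interval bound propagation, one simple way to certify that a small ReLU network keeps its predicted class for every input inside an L-infinity ball of radius eps, without enumerating attacks.

```python
import numpy as np

def interval_bounds(W_list, b_list, x, eps):
    """Propagate the box [x - eps, x + eps] through affine + ReLU layers,
    returning elementwise lower/upper bounds on the output logits."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(W_list, b_list)):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        # Lower bound pairs positive weights with lo, negative with hi
        # (and vice versa for the upper bound).
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(W_list) - 1:  # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi

def certified(W_list, b_list, x, eps, label):
    """True if every input in the eps-ball provably keeps `label` on top:
    the label's lower bound exceeds every other logit's upper bound."""
    lo, hi = interval_bounds(W_list, b_list, x, eps)
    return bool(lo[label] > np.delete(hi, label).max())

# Hand-picked two-layer network (hypothetical weights for illustration).
W1 = np.array([[2.0, 0.5], [-0.5, 1.0]])
b1 = np.array([0.1, 0.0])
W2 = np.array([[1.0, -1.0], [-1.0, 1.0]])
b2 = np.array([0.0, 0.0])

x = np.array([2.0, 0.5])  # clean input, classified as class 0
print(certified([W1, W2], [b1, b2], x, 0.1, 0))  # → True
print(certified([W1, W2], [b1, b2], x, 2.0, 0))  # → False
```

Note the guarantee is one-sided: `True` certifies robustness at that radius, while `False` only means the bound is too loose to certify, not that an attack necessarily exists. Tightening such bounds is exactly where stronger guarantees help.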
I am participating in SuperUROP to take part in a longer-term, more intensive research effort. In this project, I plan to leverage my previous experience with neural networks, and I hope to become more familiar with current work in this area. I look forward to gaining insight into neural network robustness.