Research Project Title:
Building a Principled Science of Deep Learning
Abstract: Recent research on machine learning security has shown that traditionally trained neural networks are vulnerable to adversarial examples—inputs that are misclassified by the network, yet indistinguishable from natural data to the human eye. Moreover, attackers can inject a small amount of adversarial training data to compromise the resulting classifier—a technique known as data poisoning. In this project, we will work on defenses against adversarial examples by modifying the network architecture and training procedure, and we will explore adversarial example attacks under non-standard notions of similarity. In addition, we will investigate the robustness of neural network classifiers to data poisoning attacks.
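To make the notion of an adversarial example concrete, here is a minimal sketch of one standard attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression classifier. This is an illustrative assumption on my part—the proposal does not commit to any particular attack—and the classifier, weights, and `fgsm_linear` helper below are hypothetical, chosen only because the input gradient has a closed form.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_linear(x, y, w, b, eps):
    """FGSM attack on a logistic-regression classifier (hypothetical toy).

    For cross-entropy loss, the gradient of the loss with respect to the
    input x is (sigmoid(w @ x + b) - y) * w, so the attack perturbs every
    feature by eps in the sign direction that increases the loss.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy example: a point correctly classified as class 1 (score > 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])      # score = w @ x + b = 1.5 > 0
y = 1.0

x_adv = fgsm_linear(x, y, w, b, eps=1.0)
# The perturbed input x_adv now receives a negative score, so the
# classifier's prediction flips even though x_adv stays close to x.
```

In deep networks the same idea applies, with the input gradient computed by backpropagation instead of a closed-form expression; the perturbation budget `eps` controls how visually similar the adversarial input remains to the original.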
I’m really excited to learn and explore the state-of-the-art ML security literature, as well as gain valuable hands-on experience with theoretical computer science research. Through this SuperUROP, I also hope to build on the machine learning knowledge from courses I’ve taken, and to apply my math and computer science background in tangible and meaningful ways.