Research Project Title:
Building a Principled Understanding of Deep Neural Networks: Exploring the Limits of Adversarial Robustness
Abstract: Deep neural networks are now used to solve a wide variety of machine learning problems in both academia and industry. Unfortunately, they are generally not robust against adversarial inputs and thus cannot provide strong security guarantees. This project aims to develop a principled method of making neural networks robust, one that allows us to provide security guarantees for deep neural networks. We will study projected gradient descent (PGD) adversaries, which are regarded as the general first-order adversary, along with networks trained against them, and examine the robustness guarantees such training can provide.
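As a minimal sketch of the PGD adversary mentioned above: at each step the input is perturbed along the sign of the loss gradient, then projected back onto an L-infinity ball of radius epsilon around the original input. The function names, step sizes, and the toy linear-classifier loss below are illustrative assumptions, not part of the project itself.

```python
import numpy as np

def pgd_attack(grad_fn, x, epsilon=0.1, alpha=0.05, steps=5):
    """L-infinity PGD: repeated signed-gradient ascent with projection.

    grad_fn: returns the gradient of the loss w.r.t. the input
    (the direction in which the loss increases).
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Step in the steepest-ascent direction under the L-inf norm.
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        # Project back onto the epsilon-ball around the original input.
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv

# Toy example (hypothetical): a linear classifier with weights w and
# label y = +1; the loss -y * (w @ x) has input-gradient -w.
w = np.array([1.0, -2.0, 0.5])
loss_grad = lambda x: -w
x0 = np.zeros(3)
x_adv = pgd_attack(loss_grad, x0, epsilon=0.1, alpha=0.05, steps=5)
# The perturbation saturates the epsilon-ball: x_adv = [-0.1, 0.1, -0.1]
```

The projection step is what distinguishes PGD from a single-step method such as FGSM: it lets the adversary take many small steps while guaranteeing the final perturbation stays within the allowed budget.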
“I want to gain solid exposure to the world of academic research while contributing to a deeper understanding of deep learning. I have taken a range of math, algorithms, and machine learning classes that have prepared me for this project. I am excited to be part of a lab working at the bleeding edge of human knowledge.”