Research Project Title:
Extending Generalization Theory to Include Adversarial Robustness
Abstract: Standard generalization theory focuses on bounding generalization error, which measures a model's ability to handle previously unseen examples sampled at random. In this project, we plan to explore a new theory of robust generalization, which instead measures a model's ability to handle adversarial examples intentionally chosen to fool it. Prior research has shown that robust generalization requires more data than standard generalization. By re-examining and extending the tools of standard generalization theory, such as Rademacher complexity, we hope to develop theory and insights into this new notion of robustness. Ideally, the project will culminate in a theoretically grounded technique for training robust, human-like AI.
I am excited to work on a project where I can draw on both my theoretical background in algorithms and my practical experience contributing to machine learning projects at top industry labs. I hope to contribute to the field of deep learning by understanding the robust generalization properties of existing methods, then inventing new methods that are both theoretically grounded and highly effective.