Research Project Title:
Towards Interpretability of Neural Networks via Certified Robustness for Top-K Predictions
Abstract: Deep learning models are fast becoming the method of choice for a variety of machine learning tasks in fields such as vision, language, and audio. As their prominence has grown, it has become increasingly important to examine their vulnerabilities in adversarial settings. Although adversarial attacks have been demonstrated against deep automatic speech recognition (ASR) models, relatively little research has been done on adversarial audio. This project will therefore further investigate audio adversarial examples and evaluate the robustness of deep ASR models against such attacks. More specifically, we aim to quantify perturbation bounds that yield robustness guarantees for ASR models and potentially to develop an adversarial defense method for them.
"I am participating in SuperUROP because I would like to be involved in a more structured, long-term research project. Many of my previous experiences involved working with speech and audio data, especially in the context of deep learning models. With how prominent adversarial attacks have become in deep learning recently, I hope to gain experience and become more familiar with the work in this area."