Rohil Verma
MIT EECS | Draper Laboratory Undergraduate Research and Innovation Scholar
Investigating the Confidence of Deep Neural Networks
2018–2019
EECS
- Artificial Intelligence & Machine Learning
Daniela L. Rus
Deep learning is reshaping machine learning in domains such as robotics and medicine. Although these algorithms can match or exceed human accuracy on certain tasks, they lack interpretability, which makes it difficult to assess the reliability of their outputs in practical applications where errors are costly. It is therefore essential to develop rigorous methods for evaluating how confident these networks are in their outputs. In this project, we will first identify scenarios in which state-of-the-art networks struggle with confidence, then develop specific measures of confidence for those scenarios. We will then attempt to generalize our results across these different scenarios and, finally, extend them to the broader study of neural network confidence.
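The abstract above does not specify which confidence measures the project will develop, but a common baseline in this area is the maximum softmax probability of a classifier. As a minimal, hypothetical sketch (the function names and example logits are illustrative, not from the project):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def max_softmax_confidence(logits):
    # Baseline confidence score: the highest class probability per input.
    return softmax(logits).max(axis=-1)

# One input the network classifies decisively, one it finds ambiguous.
logits = np.array([[8.0, 0.5, 0.1],   # strongly favors class 0
                   [1.0, 0.9, 1.1]])  # nearly uniform over classes
conf = max_softmax_confidence(logits)
```

Measures like this are known to be poorly calibrated for out-of-distribution inputs, which is one reason the scenarios described in the abstract merit dedicated study.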
I’m working on a SuperUROP project because I’d like to explore what I can achieve through a focused yearlong program in a fascinating research area: autonomous driving. I’ve taken theoretical machine learning classes and would like to bring that rigor to this field to ensure that prospective solutions are as safe as they are flashy. Beyond my work, I’m most excited about learning what successful, long-term research requires.