Research Project Title:
Interpreting the Operation of Neural Question-Answering Models
Abstract: Deep neural networks have become state-of-the-art for many tasks, but because they are trained end-to-end, interpreting their inner operations is a challenge. This is a salient problem for natural language question-answering models: these models have grown very complex, yet many open questions remain about what that complexity does in practice. The goal of this project is to build tools for interpreting the internal neurons of question-answering models, combining visualizations with quantitative tests for specific behaviors and functions. We hope these tools will help determine whether and where genuine language comprehension occurs in question-answering models, and will assist in debugging and improving these models through a deeper understanding of how they encode information.
"I am interested in language and in making complex systems understandable. I have previously worked on interpreting neural machine translation models, and I am excited to apply what I have learned to a new task. I hope to gain insight both into language and into the neural models that process it, and to become more comfortable with the research process and community."