Research Project Title:
Do We Really Understand Neural Networks?
Abstract: Deep neural networks are a powerful tool that has made many difficult problems tractable. However, their internal mechanisms are opaque, and it is usually necessary to treat them as black boxes, which makes them difficult to debug and raises ethical concerns about algorithmic transparency. This research will use statistical methods to analyze populations of trained deep networks. By comparing and contrasting the internal states of multiple networks, we hope to develop metrics that effectively characterize their interpretability by humans. We will investigate the effects of different architectures, data sets, and training algorithms on these metrics. Our goal is to develop networks that are highly human-interpretable while still retaining good test-set performance.
"Through this SuperUROP I hope to gain more experience in machine learning research. I'm looking forward to applying my data visualization and statistics experience to problems without established solutions, as well as to working in a high-performance scientific programming environment. I think interpretability is an important area of the deep learning field, and I hope to make a useful contribution."