Alexander Matthew Turner
MIT EECS | Lal Undergraduate Research and Innovation Scholar
Introducing Backdoors in Neural Networks with Data Poisoning
Year: 2017–2018
Department: EECS
Research Area: Artificial Intelligence & Machine Learning
Advisor: Aleksander Madry
Deep neural networks have shown remarkable performance on extremely complex, highly nonlinear problems such as image classification, language translation, and robotic control. Training these networks, however, takes vast amounts of time. A major contributor to this cost is the need to tune the network's higher-level parameters (hyperparameters): testing each candidate setting typically requires training the network from scratch, making the search very computationally expensive. This project aims to develop a principled understanding of hyperparameter search from the perspective of continuous optimization, using a mixed theoretical and experimental approach to study how hyperparameter settings affect the training of deep neural networks.
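To make that cost concrete, below is a minimal sketch of random hyperparameter search; the train_and_evaluate helper, the specific hyperparameter names, and their ranges are all hypothetical, and the stub returns a random score in place of an actual training run. The point it illustrates is that every candidate configuration pays for one complete training run, so the total cost grows linearly with the number of trials.

```python
import random

def train_and_evaluate(learning_rate, batch_size, num_layers):
    """Placeholder for a full training run. In practice this would train a
    network from scratch with the given hyperparameters and return its
    validation accuracy; here a random score stands in for illustration."""
    return random.random()

best_config, best_score = None, float("-inf")
for trial in range(20):  # each iteration costs one complete training run
    config = {
        "learning_rate": 10 ** random.uniform(-4, -1),   # log-uniform sample
        "batch_size": random.choice([32, 64, 128, 256]),
        "num_layers": random.randint(2, 8),
    }
    score = train_and_evaluate(**config)
    if score > best_score:
        best_config, best_score = config, score

print("best configuration:", best_config, "score:", best_score)
```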
I am participating in SuperUROP because I have enjoyed my previous research projects and want to deepen my knowledge of machine learning while applying what I have already learned. I hope to publish a paper if my research yields interesting results.