MIT EECS | CS+HASS Undergraduate Research and Innovation Scholar
Robert C. Berwick
Mathematics with Computer Science
Research Project Title:
More Than One Model for Learning: A Computational Investigation
Abstract:
We are interested in studying how well neural-network-based NLP models can acquire linearized language. To begin, we will develop the details of a method for linearizing English sentences. Using this method, we will map an adult English corpus into its linearized form with parsing toolkits such as the Natural Language Toolkit (NLTK) in Python, UDPipe in R, or stanza in Python. We will then train a current NLP model on the manipulated corpus and compare its performance to the same model's performance on the original corpus. If the model performs just as well on the linearized corpus, it has captured pattern recognition, as opposed to language learning. These results will inform our understanding of computational neural network models and of which computational model best fits what children actually do.
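The details of the linearization method are still to be developed, but the core idea can be illustrated with a toy sketch: keep a sentence's words while replacing its hierarchical phrase structure with a purely linear emission rule. Everything below is a hypothetical illustration (the hand-built tree, the breadth-first rule), not the project's actual method; a real pipeline would obtain parses from a tool such as stanza or UDPipe.

```python
from collections import deque

# Toy constituency tree, hand-built for illustration:
# phrases are (label, children) pairs; words are plain strings.
# Surface sentence: "the dog in the park barked"
tree = ("S",
        [("NP",
          [("NP", ["the", "dog"]),
           ("PP", ["in", "the", "park"])]),
         "barked"])

def surface_order(node):
    """Depth-first, left-to-right traversal: recovers the original word order."""
    if isinstance(node, str):
        return [node]
    _, children = node
    words = []
    for child in children:
        words.extend(surface_order(child))
    return words

def linearized_order(node):
    """Breadth-first traversal: a toy 'linearizing' manipulation that keeps
    the same words but re-orders them by tree depth, discarding the
    hierarchical constituency the surface order encodes."""
    words, queue = [], deque([node])
    while queue:
        cur = queue.popleft()
        if isinstance(cur, str):
            words.append(cur)
        else:
            queue.extend(cur[1])
    return words

print(surface_order(tree))     # ['the', 'dog', 'in', 'the', 'park', 'barked']
print(linearized_order(tree))  # ['barked', 'the', 'dog', 'in', 'the', 'park']
```

Applying such a transformation corpus-wide yields a "manipulated" corpus with the same vocabulary and token statistics but no usable hierarchy, which is what makes the comparison between the two training runs informative.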
I'm really excited to participate in SuperUROP to delve deeper into research and be mentored by my advisor. Thanks!