Karen Gu
MIT | IBM-Watson Undergraduate Research and Innovation Scholar
Incremental Computational Model for Logically Complex Generic Statements
2019–2020
EECS - Brain and Cognitive Science
Roger Levy
This work focuses on generic language, which is highly prevalent in natural language, especially in child-directed speech, but has so far proven difficult to characterize quantificationally. We will be extending previous work that provided a computational model of generic language based on the Rational Speech Acts framework, a model of human language which assumes that speakers seek to maximize their communicative efficiency and that listeners use Bayesian inference to determine a speaker's intended meaning. We hope to extend this computational account of generic language and to demonstrate its validity using experimental data from human participants.
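For readers unfamiliar with the framework, the sketch below illustrates the core Rational Speech Acts recursion (a literal listener, a pragmatic speaker, and a pragmatic listener who reasons via Bayesian inference). It is only a minimal toy example: the prevalence states, the two-utterance set, the truth conditions, and the rationality parameter alpha are illustrative assumptions and not the project's actual model of generics.

```python
# Minimal sketch of the Rational Speech Acts (RSA) recursion over a toy domain.
# The states, utterances, semantics, and alpha below are illustrative assumptions.
import numpy as np

states = [0.0, 0.5, 1.0]             # hypothetical prevalence levels of some feature
utterances = ["generic", "silence"]  # hypothetical utterance set
alpha = 1.0                          # assumed speaker rationality parameter

def meaning(utt, state):
    # Toy truth conditions: assume the generic is "true" whenever prevalence > 0.
    if utt == "generic":
        return 1.0 if state > 0.0 else 0.0
    return 1.0  # "silence" is vacuously true in every state

def literal_listener():
    # P_L0(state | utt) proportional to meaning(utt, state) * prior(state); uniform prior assumed
    table = np.array([[meaning(u, s) for s in states] for u in utterances])
    return table / table.sum(axis=1, keepdims=True)

def pragmatic_speaker():
    # P_S1(utt | state) proportional to exp(alpha * log P_L0(state | utt))
    L0 = literal_listener()
    with np.errstate(divide="ignore"):
        scores = np.exp(alpha * np.log(L0))
    return scores / scores.sum(axis=0, keepdims=True)

def pragmatic_listener():
    # P_L1(state | utt) proportional to P_S1(utt | state) * prior(state)  (Bayesian inference)
    S1 = pragmatic_speaker()
    table = S1 * (1.0 / len(states))  # uniform prior over states assumed
    return table / table.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    for utt, row in zip(utterances, pragmatic_listener()):
        print(utt, dict(zip(states, np.round(row, 3))))
```

Even in this toy setup, the pragmatic listener assigns the generic utterance to higher-prevalence states than the literal semantics alone would require, which is the kind of strengthening effect an RSA-style account of generics relies on.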
“I find human language fascinating because it is so effortless to learn while we are young but so difficult to explain systematically yet simply. I’m really excited to work on this SuperUROP project because we will be continuing work that we started last year and learning more about how human language works in the process. I’m also interested in learning more about computational modeling and in becoming more experienced as an experimentalist.”