Zachary Tangbei Zhang
MIT EECS Undergraduate Research and Innovation Scholar
Communicating Human Priors to Neural Logic Machines
- Artificial Intelligence and Machine Learning
Leslie P. Kaelbling
Given a set of base predicates, Neural Logic Machines (NLMs) sequentially apply first-order rules to draw conclusions. NLMs are powerful because they recover a set of lifted rules that generalize effectively to scenarios larger than those in the training dataset, handle higher-arity relational data and quantifiers, and scale with the complexity of the given rules (Dong et al., 2019). My project investigates how to communicate human priors to an NLM in order to improve its performance, mimicking the way humans rarely approach a new task in a completely random or structureless manner. Investigating these improvements is crucial because we ultimately want to build robots that are highly data efficient and can make precise decisions quickly with limited information.
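To illustrate the idea of applying a lifted first-order rule to base predicates, here is a minimal, hypothetical sketch in NumPy. It is not the NLM architecture itself (which learns such rules with neural operators); it only shows how a hand-written rule with an existential quantifier can be evaluated over a predicate represented as a boolean tensor:

```python
import numpy as np

n = 4  # number of objects in the domain

# Base predicate parent(x, y), stored as an n x n boolean tensor.
parent = np.zeros((n, n), dtype=bool)
parent[0, 1] = True  # object 0 is a parent of object 1
parent[1, 2] = True
parent[2, 3] = True

# Lifted rule: grandparent(x, z) <- exists y. parent(x, y) AND parent(y, z).
# Broadcast both copies of the predicate to arity 3 over (x, y, z),
# conjoin them, then reduce the quantified variable y with "exists" (any).
grandparent = np.any(parent[:, :, None] & parent[None, :, :], axis=1)

print(grandparent[0, 2])  # True: 0 is a parent of 1, and 1 is a parent of 2
```

The expand-then-reduce pattern (broadcasting to a higher arity, then collapsing a variable with a quantifier) is the same tensorized view of first-order rule application that makes the lifted rules independent of the number of objects, so the same rule applies unchanged to larger domains.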
I am participating in SuperUROP because I want a structured environment in which to dive deep into my research. Prior to SuperUROP, I was a UROP student at the Koch Institute, working on differentiating cells based on mass, stiffness, and density. I am excited to gain the skills to drive my own research in the future and to discover efficient ways to communicate human knowledge and thought processes to machines.