Maximillian S. Langenkamp
MIT EECS | Undergraduate Research and Innovation Scholar
Dual System Morality: A Reinforcement Learning Approach
2019–2020
EECS
- Brain and Cognitive Sciences
Joshua Tenenbaum
Dual processing theory is a model of cognition that has gained popularity in psychology. It explains human decision-making across several domains, from shopping purchases to political donations. At the same time, there has been growing recognition of a parallel between the dual processing model and models within reinforcement learning. Our goal is twofold. First, we aim to create a set of novel moral environments that let reinforcement learning agents make decisions involving the well-being of others. Second, we aim to create a reinforcement learning agent whose decision-making model is closely inspired by dual processing theory. We will then run our agent in these environments and investigate its properties, including whether it faithfully approximates moral decision-making.
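To make the dual-processing/reinforcement-learning parallel concrete, the sketch below pairs a fast, model-free Q-learner (a stand-in for System 1) with a slower, model-based lookahead planner (System 2), and arbitrates between them by how confident the habitual system is. This is a minimal illustration under our own assumptions, not the project's actual agent: the `DualSystemAgent` class, the margin-based arbitration rule, and the toy tabular API are all hypothetical.

```python
import random
from collections import defaultdict

class DualSystemAgent:
    """Hypothetical sketch: model-free habits (System 1) arbitrated
    against model-based one-step lookahead (System 2)."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, margin=0.05, epsilon=0.1):
        self.actions = list(actions)  # assumes at least two discrete actions
        self.alpha = alpha            # learning rate for the habitual system
        self.gamma = gamma            # discount factor
        self.margin = margin          # value gap System 1 needs to act alone
        self.epsilon = epsilon        # exploration rate
        self.q = defaultdict(float)   # model-free values: (state, action) -> value
        self.model = {}               # learned model: (state, action) -> (reward, next_state)

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)  # occasional exploration
        values = [self.q[(state, a)] for a in self.actions]
        best, runner_up = sorted(values, reverse=True)[:2]
        if best - runner_up >= self.margin:
            # System 1: the habit is confident enough, respond cheaply.
            return self.actions[values.index(best)]
        # System 2: habits are ambiguous, deliberate with the world model.
        return max(self.actions, key=lambda a: self._lookahead(state, a))

    def _lookahead(self, state, action):
        # One-step model-based evaluation; fall back to the habit if the
        # model has never seen this state-action pair.
        if (state, action) not in self.model:
            return self.q[(state, action)]
        reward, next_state = self.model[(state, action)]
        return reward + self.gamma * max(self.q[(next_state, a)] for a in self.actions)

    def learn(self, state, action, reward, next_state):
        # Update the habit with a TD(0) step and record the transition.
        target = reward + self.gamma * max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
        self.model[(state, action)] = (reward, next_state)
```

Arbitrating by the value gap is one simple stand-in for the uncertainty-based arbitration schemes studied in the model-based/model-free literature; in a moral environment, the reward signal would additionally encode the well-being of other agents.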
“I’ve always been curious about our minds and morality. Why do we feel that eating chicken is acceptable while harming a dog isn’t? Using my formal background in computer science and my informal background in philosophy, I’m hoping to build upon a computational framework that helps to codify moral questions and actions.”