Almog Hilel
MIT EECS | Landsman Undergraduate Research and Innovation Scholar
Reverse Engineering the Mind
2024–2025
Electrical Engineering and Computer Science
- Brain and Cognitive Sciences
Leslie P. Kaelbling
Joshua B. Tenenbaum
How do we make guesses about minds? How do we share these guesses using natural language? How do we convey so much information with so few words? We aim to investigate the cognitive mechanisms underlying theory of mind and language generation. This process can be described in three steps: (1) observe the actions of another agent, (2) infer a probabilistic model underlying those actions, and (3) share this model using language so that a listener can reconstruct it in their own mind. Compressing an inferred model into a few words is a non-trivial task.
To investigate this cognitive mechanism, we use computational tools tailored to a closed-domain, controlled game scenario: we develop a Bayesian model architecture that (1) observes virtual players' actions, (2) infers their underlying RL model, and (3) generates natural language to share that model with a listener. The model achieves this entirely through its architecture, for a specific controlled environment, without any training on human data. To evaluate the computational model, we compare its natural language guesses to those of humans prompted with a similar stimulus.
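The inference step (observing actions, then inferring the model that produced them) can be sketched as Bayesian inverse planning. The following is a minimal illustrative example, not the project's actual implementation: it assumes a small set of candidate goal (reward) hypotheses and a softmax-rational policy, and updates a posterior over goals from observed actions. All names, values, and the two-goal setup are invented for illustration.

```python
import numpy as np

def softmax_policy(q_values, beta=2.0):
    """P(action | goal): Boltzmann-rational choice over action values."""
    z = beta * (q_values - q_values.max())
    p = np.exp(z)
    return p / p.sum()

def posterior_over_goals(observed_actions, q_table, prior):
    """Bayes rule: P(goal | actions) proportional to P(goal) * prod_t P(a_t | goal)."""
    log_post = np.log(prior)
    for a in observed_actions:
        for g in range(len(prior)):
            log_post[g] += np.log(softmax_policy(q_table[g])[a])
    log_post -= log_post.max()  # numerical stability before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Toy setup: two goal hypotheses over three actions.
# Goal 0 values action 0; goal 1 values action 2.
q_table = np.array([[1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0]])
prior = np.array([0.5, 0.5])

# A player who repeatedly picks action 2 is probably pursuing goal 1.
post = posterior_over_goals([2, 2, 2], q_table, prior)
```

In this toy setting, the posterior concentrates on goal 1 after a few consistent observations, which is the basic mechanism by which observed behavior is compressed into a model of the actor.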
Our results highlight that, by incorporating theoretical constructs from the communication literature (balancing accuracy, informativity, and relevance), the Bayesian model aligns more closely with human explanations. A preliminary lab experiment shows that the model generates natural language guesses that closely match how humans articulate their guesses when exposed to similar visual stimuli in our controlled setting.
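The trade-off between informativity and brevity can be illustrated with a small utterance-selection sketch in the spirit of rational speech act models. This is a hypothetical example, not the project's model: the utterances, meanings, listener probabilities, and weights are all invented, and the speaker utility simply balances log-informativity against utterance cost.

```python
import numpy as np

# Toy meanings a speaker might want to convey about an observed player,
# and candidate utterances with assumed listener-interpretation probabilities.
meanings = ["seeks coins", "avoids ghosts"]
utterances = {
    "it wants coins":     np.array([0.9, 0.1]),  # P(listener recovers each meaning)
    "it plays carefully": np.array([0.3, 0.7]),
    "it does stuff":      np.array([0.5, 0.5]),  # vague: uninformative either way
}
cost = {u: len(u.split()) for u in utterances}   # cost proxy: word count

def best_utterance(target_idx, alpha=1.0, cost_weight=0.1):
    """Speaker utility: informativity about the target meaning minus cost."""
    scores = {u: alpha * np.log(p[target_idx]) - cost_weight * cost[u]
              for u, p in utterances.items()}
    return max(scores, key=scores.get)
```

Under these invented numbers, the speaker picks the utterance most diagnostic of the intended meaning rather than the vague one, which is the qualitative behavior the accuracy/informativity/relevance balance is meant to capture.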
Reverse engineering one of the most beautiful and intricate systems known to exist: the human mind.