Research Project Title:
Understanding Vision-and-Language Processing in the Brain
Abstract: Despite decades of study, we understand very little about multimodal processing in the brain. Our project focuses on identifying the neural components of vision-and-language integration in the brain using current state-of-the-art deep learning (DL) networks that process multimodal data. Using a rich dataset of stereoelectroencephalography (sEEG) recordings collected in response to audiovisual stimuli, we fit regression models that predict brain activity from representations extracted from vision-and-language DL networks. We hope these analyses will both guide the design of better multimodal DL networks and deepen our understanding of how the brain processes audiovisual stimuli concurrently.
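The regression approach described above can be sketched as a simple encoding model: DL-network features for each stimulus are regressed onto recorded neural responses, and prediction accuracy is scored per channel. The sketch below uses synthetic stand-in data and a closed-form ridge solver; the array shapes, the regularization strength, and the correlation-based scoring are illustrative assumptions, not details from the project.

```python
import numpy as np

# Hedged sketch of an encoding-model analysis: map hypothetical DL-network
# features to (synthetic) sEEG channel responses with ridge regression.
# All data here are random stand-ins, not real recordings.

rng = np.random.default_rng(0)
n_samples, n_features, n_channels = 200, 50, 8

X = rng.standard_normal((n_samples, n_features))       # DL features per stimulus
W_true = rng.standard_normal((n_features, n_channels)) # unknown "ground truth" mapping
Y = X @ W_true + 0.1 * rng.standard_normal((n_samples, n_channels))  # synthetic sEEG

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha I)^{-1} X^T Y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

W_hat = ridge_fit(X, Y)
Y_pred = X @ W_hat

# Score the model per channel: correlation between predicted and actual activity.
corrs = [np.corrcoef(Y[:, c], Y_pred[:, c])[0, 1] for c in range(n_channels)]
mean_corr = float(np.mean(corrs))
print(round(mean_corr, 3))
```

In a real analysis the features would come from a pretrained multimodal network, the regularization strength would be chosen by cross-validation, and scoring would be done on held-out stimuli to avoid overfitting.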
I have participated in short UROP projects in natural language processing and neuroscience over the past two years, which introduced me to important open problems in these areas. I believe that unlocking the mysteries of multimodal processing in the brain will be one of the most important discoveries we make, opening a new era in machine learning and neuroscience. I hope to contribute to these exciting research problems through this project.