Multi-Modal Deep Learning Using Network Features

My project aims to predict food purchase behavior by combining information from multiple modalities (such as tweets, news, weather, census data, and product descriptions). Multimodal learning is a relatively new machine learning technique and has been shown to outperform models trained on a single modality. By training deep Boltzmann machines on two distinct modalities and then combining these into a shared feature representation, we obtain a more robust input for our deep neural network.
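The fusion step described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the actual model: the dimensions, the `encode` helper, and all weights are hypothetical placeholders (a trained deep Boltzmann machine would learn these parameters, e.g. via contrastive divergence, rather than drawing them at random). It only shows the architecture's shape: two modality-specific encoders whose hidden representations are concatenated and passed through a joint layer to produce one shared feature vector per example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality encoder (a stand-in for a modality-specific
# deep Boltzmann machine): one dense layer with sigmoid hidden units.
def encode(x, W, b):
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

# Toy dimensions: e.g. text features (tweets/news) and numeric features
# (weather/census). All weights below are random placeholders.
d_text, d_num, d_hidden, d_joint = 50, 10, 16, 8

W_text, b_text = rng.normal(size=(d_text, d_hidden)) * 0.1, np.zeros(d_hidden)
W_num, b_num = rng.normal(size=(d_num, d_hidden)) * 0.1, np.zeros(d_hidden)

# Joint layer: fuses the two modality representations into one shared code.
W_joint = rng.normal(size=(2 * d_hidden, d_joint)) * 0.1
b_joint = np.zeros(d_joint)

# A batch of 4 examples, one feature matrix per modality.
x_text = rng.normal(size=(4, d_text))
x_num = rng.normal(size=(4, d_num))

# Encode each modality separately, then concatenate and encode once more
# to get the shared multimodal representation fed to the downstream network.
h_text = encode(x_text, W_text, b_text)
h_num = encode(x_num, W_num, b_num)
h_joint = encode(np.concatenate([h_text, h_num], axis=1), W_joint, b_joint)

print(h_joint.shape)  # one fused feature vector per example
```

The key design point is that each modality gets its own encoder before fusion, so noise or missing data in one channel does not directly corrupt the other's representation; only the joint layer mixes them.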
Throughout my years at MIT, I've found machine learning and computer vision highly interesting, and I'm excited to explore this new direction that combines information from a variety of channels. I hope to deepen my understanding of state-of-the-art machine learning techniques as well as develop my research skills.