MIT EECS | Quanta Computer Undergraduate Research and Innovation Scholar
Natural Language Understanding
Natural language processing is the analysis and interpretation of combinations of words to understand the meaning of text. Word vectors are one way to break a sentence into components and develop an understanding of its structure, and the cosine distance between word vectors tends to correlate with semantic similarity. The goal of my project is to compute sparse encodings of word vectors using machine learning. Recent research has demonstrated that such sparse encodings can represent each vector as a weighted sum of a small number of basis vectors, ultimately teasing apart the different meanings of a particular word.
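To make these two ideas concrete, here is a minimal sketch in Python of (1) cosine similarity between word vectors and (2) a greedy sparse encoding via matching pursuit, one standard way to express a vector as a weighted sum of a few dictionary atoms. The random dictionary and the mixed vector are toy data for illustration only; this is not the project's actual method or dataset.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors; values near 1
    # indicate the vectors point in nearly the same direction.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def matching_pursuit(v, D, k):
    """Greedily approximate v as a weighted sum of at most k columns
    (atoms) of the dictionary D. Atoms are assumed unit-norm."""
    residual = v.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        scores = D.T @ residual              # correlation with each atom
        j = np.argmax(np.abs(scores))        # best-matching atom
        coeffs[j] += scores[j]               # accumulate its weight
        residual -= scores[j] * D[:, j]      # remove its contribution
    return coeffs

# Toy dictionary: 200 unit-norm "meaning" directions in 50 dimensions.
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 200))
D /= np.linalg.norm(D, axis=0)

# A hypothetical word vector mixing two underlying meanings.
v = 2.0 * D[:, 3] + 0.5 * D[:, 17]

w = matching_pursuit(v, D, k=5)
print(np.count_nonzero(w))                   # at most 5 nonzero weights
print(cosine_similarity(v, D @ w))           # reconstruction is close to v
```

The nonzero entries of `w` identify which few dictionary directions combine to reconstruct the word vector, which is the sense in which a sparse encoding can separate a word's distinct meanings.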
I really enjoyed the work I did at Google this past summer, where I classified images by sharpness and brightness using machine learning techniques. I wanted to explore applications of machine learning in other areas of computer science and to gain a deeper understanding of the mathematics behind machine learning algorithms.