MIT EECS - Qualcomm Undergraduate Research and Innovation Scholar
Text Detection for Assisting the Blind
It can be challenging for blind people and robots to navigate the human world, where text on signs, walls, and doors often indicates the desired path. Our goal is to build a system that can effectively detect and decode text in a natural environment using sensor observations of the surroundings. My part of the project involves improving the text-detection phase, in which our system identifies which regions, if any, of the current field of view contain decodable text. Comparable state-of-the-art text detection systems currently achieve at most 65% recall while maintaining 70-80% precision on natural images from the ICDAR 2003 and 2011 Robust Reading datasets. My work aims to improve on the performance of these existing methods.
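To make the recall and precision figures above concrete, here is a minimal sketch of how these detection metrics are computed from counts of correct, spurious, and missed detections. The function and the counts are illustrative assumptions, not numbers taken from the ICDAR evaluations.

```python
# Illustrative sketch: precision and recall as used to score a text detector.
# Counts below are invented for illustration, not from the ICDAR datasets.

def precision_recall(true_positives, false_positives, false_negatives):
    """Return (precision, recall) from raw detection counts."""
    # Precision: fraction of reported detections that were real text regions.
    precision = true_positives / (true_positives + false_positives)
    # Recall: fraction of actual text regions the detector found.
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Example: 65 correctly detected text regions, 22 spurious detections,
# and 35 missed regions -- roughly the ~75% precision / 65% recall regime.
p, r = precision_recall(65, 22, 35)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.65
```

Improving a detector means pushing both numbers up together; raising recall alone is easy if spurious detections (which hurt precision) are tolerated.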
I have developed computer vision algorithms to identify and distinguish the spot patterns of individual whale sharks, work done under Professor Sai Ravela at MIT. I have also worked at Google, where I researched and designed a system to detect audio/video synchronization issues in transcoded video content.