Howard Zhong
Advisor: William T. Freeman
Department: EECS
Areas of Research: Computer Vision and Graphics
Years: 2021-2022
Research Project Title: Unsupervised and Hierarchical Semantic Segmentation

Abstract: Humans have a rich ability to understand the visual world without constant supervisory signals. Modern learning methods for semantic segmentation often require a large corpus of labeled training data that adheres to a handcrafted “ontology” of objects. Curating labeled data is expensive. Moreover, in many applications, such as biomedical and astrophysical imaging, there is no clear “ontology,” which poses a significant challenge to supervised methods. This project aims to create new learning systems that jointly discover a hierarchy of visual objects and parse the visual world into this learned ontology without requiring labeled data. We aim to introduce a new loss function that encourages the formation of “tree-structured” embeddings useful for unsupervised segmentation.
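To make the idea of discovering a hierarchy and reading off segmentations at multiple levels concrete, here is a minimal toy sketch (not the project's actual method or loss): agglomerative clustering over synthetic pixel embeddings builds a merge tree, and cutting that tree at different depths yields coarse and fine parses. The embedding dimensions, cluster centers, and noise scale are all hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical pixel embeddings: two coarse groups, each of which
# splits into two finer sub-groups (four centers total).
centers = np.array([[0, 0], [0, 1], [5, 5], [5, 6]], dtype=float)
pixels = np.vstack([c + 0.1 * rng.standard_normal((16, 2)) for c in centers])

# Build the full merge tree over embeddings (Ward linkage).
tree = linkage(pixels, method="ward")

# Cutting the tree at different depths gives nested segmentations:
# a 2-segment coarse parse and a 4-segment fine parse of the same tree.
coarse = fcluster(tree, t=2, criterion="maxclust")
fine = fcluster(tree, t=4, criterion="maxclust")
```

Because both parses are cuts of one tree, every fine segment lies entirely inside a single coarse segment — the nesting property that a tree-structured embedding space would provide without hand-labeled categories.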

I am conducting research through SuperUROP to gain deeper insights into difficult scientific questions. I seek to explore the intrinsic hierarchy of visual objects and apply machine learning to better understand the world.