Research Project Title:
Multi-Agent Hierarchical Reinforcement Learning
Abstract: Hierarchical Reinforcement Learning (HRL) decomposes an RL problem into sub-problems, where solving each sub-problem is more tractable than solving the entire problem at once. The purpose of this research direction is to survey current work in the multi-agent hierarchical reinforcement learning space and to relax the assumptions that follow from fixing the number of hierarchy levels in the model. We will apply hierarchical RL in non-stationary environments, aiming to learn hierarchies automatically rather than assuming a two-layer hierarchical policy, and to solve increasingly complex tasks that combine locomotion with rudimentary object interaction.
The first part of this project will analyze the common two-level hierarchy found in HRL architectures, which typically defines a set of lower-level policies, each trained to match its observed states to a desired goal. The higher-level policy chooses these goals over temporally extended periods and uses an off-policy correction so that it can reuse past experience collected under previous, different instantiations of the lower-level policy. Through a series of experiments, we will show that this representation breaks down in non-stationary settings, where the number of levels in the hierarchy must be flexible. We will then develop the theory behind how to define the levels of the hierarchical model abstractly and demonstrate this level selection on a series of environments.
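To make the two-level scheme above concrete, here is a minimal sketch of a goal-conditioned hierarchy: a higher-level policy that proposes a goal every c steps, and a lower-level policy rewarded for reaching it. All class and function names are hypothetical illustrations (a simple proportional controller and random goal proposals stand in for learned policies), not an existing library's API.

```python
import numpy as np

class LowLevelPolicy:
    """Maps (state, goal) to an action; a clipped proportional controller
    stands in for a learned goal-conditioned policy."""
    def act(self, state, goal):
        return np.clip(goal - state, -1.0, 1.0)

class HighLevelPolicy:
    """Chooses a goal once every c environment steps (temporal abstraction)."""
    def __init__(self, c=5, rng=None):
        self.c = c
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def propose_goal(self, state):
        # A learned high-level policy would pick goals to maximize task
        # reward; here we simply sample a nearby target state.
        return state + self.rng.uniform(-2.0, 2.0, size=state.shape)

def intrinsic_reward(goal, next_state):
    # Lower-level reward: negative distance to the commanded goal.
    return -float(np.linalg.norm(goal - next_state))

def rollout(T=20):
    """Run T steps of the hierarchy under trivial deterministic dynamics
    (next state = state + action); returns the summed intrinsic reward."""
    state = np.zeros(2)
    high, low = HighLevelPolicy(c=5), LowLevelPolicy()
    goal = high.propose_goal(state)
    total = 0.0
    for t in range(T):
        if t % high.c == 0:        # higher level re-plans every c steps
            goal = high.propose_goal(state)
        action = low.act(state, goal)
        state = state + action      # toy environment transition
        total += intrinsic_reward(goal, state)
    return total
```

Under this decomposition, the off-policy correction mentioned above becomes necessary because the transitions the higher level stores were generated by older versions of `LowLevelPolicy`, so the same goal no longer induces the same behavior.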
Through pursuing research in multi-agent reinforcement learning, I hope to delve into understanding how we can model human decision making with deep learning, taking into account influences from the ever-changing environment around us. I am also excited to receive mentorship from my supervisors and peers while developing good research skills for my long-term graduate studies!