Research Project Title:
Dynamic Abstraction for Efficient Planning in Large MDPs
Abstract: Planning algorithms effectively find optimal policies for acting in an environment modeled as a Markov Decision Process (MDP). However, realistic models of real-world situations are extremely complex and tend to have many state variables. Planners often scale poorly with this complexity, and the time required for planning quickly becomes infeasible for practical applications. Our goal is to devise a method that dynamically discovers subgoals for the task and abstracts the agent's action space. We hypothesize that combining these strategies will enable a planner to concentrate on immediately useful or important actions, and thus achieve a large reduction in planning time while maintaining near-optimal reward.
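To make the planning setting concrete, the sketch below runs value iteration, a standard MDP planning algorithm, on a tiny two-state example. This is purely illustrative of the baseline problem, not the proposed dynamic-abstraction method; all states, actions, transitions, and rewards are hypothetical placeholders.

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Standard value iteration.

    P[s][a] -> list of (probability, next_state) pairs.
    R[s][a] -> immediate reward for taking action a in state s.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality backup: best one-step lookahead value.
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Hypothetical two-state MDP: "go" moves state 0 -> 1 for reward 1;
# "stay" does nothing; state 1 is absorbing with no further reward.
states = [0, 1]
actions = ["stay", "go"]
P = {0: {"stay": [(1.0, 0)], "go": [(1.0, 1)]},
     1: {"stay": [(1.0, 1)], "go": [(1.0, 1)]}}
R = {0: {"stay": 0.0, "go": 1.0},
     1: {"stay": 0.0, "go": 0.0}}
V = value_iteration(states, actions, P, R)  # V[0] = 1.0, V[1] = 0.0
```

Each backup sweeps over every state and action, which is exactly why planning time grows quickly as the number of variables (and hence states and actions) increases; abstracting the action space would shrink the inner maximization.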
By participating in the SuperUROP program, I aim to gain a more rigorous research experience and explore the field of reinforcement learning in greater depth. I hope to apply my knowledge from classes to address compelling questions that arise in the real world. Artificial intelligence has enormous potential to contribute to society now and in the future, so I am excited to learn more about the field and contribute to research in it.