Research Project Title:
Overcoming Reward Design for Exploration in Reinforcement Learning
Abstract: Exploration remains a critical challenge in reinforcement learning. In many tasks, simple exploration strategies are adequate for learning an optimal policy, but they break down in hard-exploration tasks where the environment's rewards are very sparse. To encourage exploration in such environments, intrinsic rewards generated by the agent are often used, but these can lead to poor performance, for example by biasing policy optimization or by relying on a substandard intrinsic reward function. We therefore aim to investigate a new exploration strategy in reinforcement learning that learns an optimal intrinsic reward function and automatically tunes the balance between extrinsic and intrinsic rewards.
I am participating in SuperUROP because I believe it will be a great opportunity to gain advanced research experience. Through machine learning classes and previous UROPs, I developed an interest in working on machine learning problems in a research setting. I hope to make meaningful contributions at the intersection of neuroscience and machine learning and to gain the skills necessary to lead my own research projects in the future.