MIT EECS | Advanced Micro Devices Undergraduate Research and Innovation Scholar
Reinforcement Learning for Microrobot Locomotion
- Artificial Intelligence and Robotics
Microrobots offer numerous advantages over their larger counterparts: their small size gives them high strength-to-weight ratios and lets them travel over difficult terrain such as rubble and water. However, microrobots often have complex dynamics that vary both between individual robots and over time, making traditional modeling techniques unsuitable for planning and locomotion tasks. Optimizing pre-planned gaits is also challenging, as more complex locomotion tasks may rely on unintuitive emergent phenomena arising from the robot’s dynamics. My project explores model-free reinforcement learning methods, in both simulation and hardware, to teach HAMR, a 1.7-gram quadrupedal robot, to perform locomotion tasks such as walking, jumping, and obstacle avoidance. I am also investigating methods of transferring learned behavior, both between simulation and hardware and between different real-world robots, to make these methods more sample-efficient.
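To give a flavor of the model-free approach described above, here is a minimal sketch of REINFORCE, a basic policy-gradient method, applied to a toy one-dimensional "locomotion" task. This is purely illustrative and not the project's actual code: the environment, reward, policy parameterization, and hyperparameters are all assumptions made for the example.

```python
# Illustrative sketch only: REINFORCE (model-free policy gradient) on a toy
# 1-D task where the agent is rewarded for moving to the right. The policy
# is a single-parameter Bernoulli: pi(step right) = sigmoid(theta).
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, steps=20):
    """Run one episode; the state is the agent's position on a line."""
    pos, actions, rewards = 0.0, [], []
    p_right = 1.0 / (1.0 + np.exp(-theta))
    for _ in range(steps):
        a = 1 if rng.random() < p_right else 0  # sample from the policy
        pos += 1.0 if a == 1 else -1.0
        actions.append(a)
        rewards.append(pos)  # reward: distance travelled to the right
    return actions, rewards

theta = 0.0  # policy parameter
lr = 1e-4    # learning rate (illustrative choice)
for episode in range(200):
    actions, rewards = rollout(theta)
    G = sum(rewards)  # undiscounted episode return
    p_right = 1.0 / (1.0 + np.exp(-theta))
    # For a Bernoulli policy with logit theta, grad log pi(a|theta) = a - p_right;
    # REINFORCE scales the summed log-prob gradient by the episode return.
    grad = sum(a - p_right for a in actions)
    theta += lr * G * grad
```

Hardware training adds the sample-efficiency and transfer challenges noted above, since each rollout costs real robot wear and time; that is where sim-to-real and robot-to-robot transfer come in.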
“I am participating in SuperUROP to gain hands-on experience designing complex hardware that can perform computation better than traditional processors. I think there’s a whole world of computation that has yet to be co-optimized for custom hardware, and given my background in machine learning, this seems like a particularly fruitful area.”