Research Project Title:
Transfer Learning through Latent Representations of Exteroceptive Signals
Abstract: We present a framework that addresses two key problems of learning-based methods for (soft) robot perception. First, sensor signals change over the course of a robot's lifespan, for example through changing environments or soft-material degradation. The most popular current approach, training a neural network to predict the desired outputs directly from sensor signals, does not allow for easy recalibration and often requires tediously re-collecting data and re-training the network from scratch. Second, onboard sensors may not provide a signal informative enough to learn a direct mapping to the desired outputs. Information-rich exteroceptive sensors such as external cameras alleviate this problem, but they are often unavailable when the robot is deployed and suffer from discrepancies between the lab and the deployment environment. In this work, we propose a framework that allows for easy recalibration and exploits the information richness of vision while avoiding vision-related nuisances. The idea is to first learn a latent encoding of external camera images and then learn a mapping from the onboard sensors to those latent encodings. Importantly, our method does not require the vision system to be present at deployment. Prior research indicates that this approach would outperform conventional methods and make recalibration easier.
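The two-stage idea in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the project's actual implementation: it uses synthetic linear data, PCA (via SVD) as a stand-in for the learned image encoder, and least squares as a stand-in for the sensor-to-latent network. All array names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the project):
# n samples, flattened camera images, onboard sensor channels, latent size.
n, img_dim, sensor_dim, latent_dim = 200, 64, 8, 4

# A shared ground-truth state drives both modalities in this toy setup.
z_true = rng.normal(size=(n, latent_dim))
images = z_true @ rng.normal(size=(latent_dim, img_dim))     # external camera
sensors = z_true @ rng.normal(size=(latent_dim, sensor_dim)) # onboard sensing

# Stage 1: learn a latent encoding of the camera images.
# Here PCA via SVD stands in for a learned image encoder.
images_c = images - images.mean(0)
_, _, Vt = np.linalg.svd(images_c, full_matrices=False)
encoder = Vt[:latent_dim].T          # (img_dim, latent_dim) projection
z_img = images_c @ encoder           # latent codes obtained from vision

# Stage 2: learn a mapping from onboard sensors to the latent codes.
# Least squares stands in for a small neural network.
sensors_c = sensors - sensors.mean(0)
W, *_ = np.linalg.lstsq(sensors_c, z_img, rcond=None)
z_pred = sensors_c @ W

# At deployment only `sensors_c @ W` is evaluated; no camera is required.
err = np.linalg.norm(z_pred - z_img) / np.linalg.norm(z_img)
print(f"relative latent prediction error: {err:.2e}")
```

Because the camera never appears in the stage-2 mapping, recalibration after sensor drift only requires refitting `W` against the fixed latent space, rather than re-training the whole pipeline.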