GuDA: Guided Data Augmentation for Offline Reinforcement Learning and Imitation Learning

1University of Wisconsin-Madison, 2Carnegie Mellon University
GuDA overview.

GuDA is a human-guided data augmentation framework that inexpensively produces expert-quality augmented data from a limited set of suboptimal demonstrations.

Abstract

In offline reinforcement learning (RL), an RL agent learns to solve a task using only a fixed dataset of previously collected data. While offline RL has been successful in learning real-world robot control policies, it typically requires large amounts of expert-quality data to learn effective policies that generalize to out-of-distribution states. Unfortunately, such data is often difficult and expensive to acquire in real-world tasks. Several recent works have leveraged data augmentation (DA) to inexpensively generate additional data, but most apply augmentations randomly, ultimately producing highly suboptimal augmented experience.

In this work, we propose Guided Data Augmentation (GuDA), a human-guided DA framework that generates expert-quality augmented data. The key insight behind GuDA is that while it may be difficult to demonstrate the sequence of actions required to produce expert data, a user can often easily characterize when an augmented trajectory segment represents progress toward task completion. Thus, a user can restrict the space of possible augmentations to automatically reject suboptimal augmented data. To extract a policy from GuDA, we use off-the-shelf offline reinforcement learning and behavior cloning algorithms.
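The rejection idea above can be illustrated with a minimal sketch. All names here are hypothetical and not the authors' implementation: we assume 2D (x, y) trajectory segments, a rotation augmentation, and a user-supplied "progress" predicate that keeps only augmented segments ending closer to the goal than they start.

```python
import math
import random

def rotate_segment(segment, theta):
    """Rotate a trajectory segment about its first state (illustrative augmentation)."""
    x0, y0 = segment[0]
    c, s = math.cos(theta), math.sin(theta)
    return [(x0 + c * (x - x0) - s * (y - y0),
             y0 + s * (x - x0) + c * (y - y0)) for (x, y) in segment]

def makes_progress(segment, goal):
    """User-defined check: does the segment end closer to the goal than it starts?"""
    dist = lambda p: math.hypot(p[0] - goal[0], p[1] - goal[1])
    return dist(segment[-1]) < dist(segment[0])

def guided_augment(segment, goal, rng, n_tries=100):
    """Sample random rotations but keep only segments that represent task progress."""
    accepted = []
    for _ in range(n_tries):
        candidate = rotate_segment(segment, rng.uniform(0.0, 2.0 * math.pi))
        if makes_progress(candidate, goal):
            accepted.append(candidate)
    return accepted

rng = random.Random(0)
segment = [(0.0, 0.0), (0.5, 0.0)]   # a short segment moving in +x
goal = (5.0, 0.0)
augmented = guided_augment(segment, goal, rng)
# Unguided DA would keep all 100 rotations; the guided filter keeps only
# those that move toward the goal.
```

A purely random DA strategy corresponds to skipping the `makes_progress` filter; the guided version discards roughly the half of rotations that point away from the goal, which is the kind of user-specified restriction GuDA exploits.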

We evaluate GuDA on a physical robot soccer task as well as simulated D4RL navigation tasks, a simulated autonomous driving task, and a simulated soccer task. Empirically, GuDA enables learning from a small initial dataset of potentially suboptimal experience and outperforms both a random DA strategy and a model-based DA strategy.

BibTeX

@article{corrado2024guda,
  author    = {Corrado, Nicholas E. and Qu, Yuxiao and Balis, John U. and Labiosa, Adam and Hanna, Josiah P.},
  title     = {Guided Data Augmentation for Offline Reinforcement Learning and Imitation Learning},
  journal   = {Reinforcement Learning Conference (RLC)},
  year      = {2024},
}