RoLA: Robot Learning from Any Images
A framework that transforms any in-the-wild image into an interactive, physics-enabled robotic environment.
We introduce RoLA, a framework that transforms any in-the-wild image into an interactive, physics-enabled robotic environment. Unlike previous methods, RoLA operates directly on a single image without requiring additional hardware or digital assets. Our framework democratizes robotic data generation by producing massive visuomotor robotic demonstrations within minutes from a wide range of image sources, including camera captures, robotic datasets, and Internet images. At its core, our approach combines a novel method for single-view physical scene recovery with an efficient visual blending strategy for photorealistic data collection. We demonstrate RoLA's versatility across applications such as scalable robotic data generation and augmentation, robot learning from Internet images, and single-image real-to-sim-to-real systems for manipulators and humanoids.
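The pipeline described above can be summarized as: recover a physics-enabled scene from a single image, then roll out demonstrations in that scene. The sketch below is purely illustrative; every class and function name (`PhysicalScene`, `recover_scene`, `collect_demonstrations`) is an assumption, since this text does not specify RoLA's actual API.

```python
# Hypothetical sketch of the RoLA pipeline: single image -> physics-enabled
# scene -> visuomotor demonstrations. Names and structure are assumptions,
# not the paper's real interface.
from dataclasses import dataclass
from typing import List


@dataclass
class PhysicalScene:
    """A physics-enabled scene recovered from one in-the-wild image."""
    objects: List[str]
    source_image: str


def recover_scene(image_path: str) -> PhysicalScene:
    """Stand-in for single-view physical scene recovery."""
    # The real system would estimate geometry and physical parameters;
    # here we just wrap the image path in a placeholder scene.
    return PhysicalScene(objects=["apple"], source_image=image_path)


def collect_demonstrations(scene: PhysicalScene, n_episodes: int) -> List[dict]:
    """Stand-in for photorealistic demonstration collection via visual blending."""
    return [
        {"scene": scene.source_image, "target": obj, "episode": i}
        for i in range(n_episodes)
        for obj in scene.objects
    ]


scene = recover_scene("kitchen.jpg")
demos = collect_demonstrations(scene, n_episodes=3)
print(len(demos))  # 3 demonstrations generated from a single image
```

The key property this sketch mirrors is that demonstrations are produced from one image alone, with no extra hardware or pre-built digital assets.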
Using apple picking as an example, we collect diverse Internet images containing apples to train an image-based grasping prior.
We perform real-world evaluations on apples of varying colors, sizes, types, and positions.