RoLA: Robot Learning from Any Images
A framework that transforms any in-the-wild image into an interactive, physics-enabled robotic environment.
We introduce RoLA, a framework that transforms any in-the-wild image into an interactive, physics-enabled robotic environment. Unlike previous methods, RoLA operates directly on a single image without requiring additional hardware or digital assets. Our framework democratizes robotic data generation by producing massive visuomotor robotic demonstrations within minutes from a wide range of image sources, including camera captures, robotic datasets, and Internet images. At its core, our approach combines a novel method for single-view physical scene recovery with an efficient visual blending strategy for photorealistic data collection. We demonstrate RoLA's versatility across applications like scalable robotic data generation and augmentation, robot learning from Internet images, and single-image real-to-sim-to-real systems for manipulators and humanoids.
Using apple picking as an example, we collect diverse Internet images containing apples to train an image-based grasping prior.
We perform real-world evaluations on apples of varying colors, sizes, types, and positions.
If you find our work useful, please consider citing:
@inproceedings{rola,
title={Robot learning from any images},
author={Zhao, Siheng and Mao, Jiageng and Chow, Wei and Shangguan, Zeyu and Shi, Tianheng and Xue, Rong and Zheng, Yuxi and Weng, Yijia and You, Yang and Seita, Daniel and others},
booktitle={Conference on Robot Learning},
pages={4226--4245},
year={2025},
organization={PMLR}
}