RoLA: Robot Learning from Any Images

A framework that transforms any in-the-wild image into an interactive, physics-enabled robotic environment.

(* Equal contribution. Author order determined by coin flip.)


Abstract

We introduce RoLA, a framework that transforms any in-the-wild image into an interactive, physics-enabled robotic environment. Unlike previous methods, RoLA operates directly on a single image without requiring additional hardware or digital assets. Our framework democratizes robotic data generation by producing large volumes of visuomotor robot demonstrations within minutes from a wide range of image sources, including camera captures, robotic datasets, and Internet images. At its core, our approach combines a novel method for single-view physical scene recovery with an efficient visual blending strategy for photorealistic data collection. We demonstrate RoLA's versatility across applications such as scalable robotic data generation and augmentation, robot learning from Internet images, and single-image real-to-sim-to-real systems for manipulators and humanoids.
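As a reading aid, the following is a minimal Python sketch of the pipeline the abstract describes (single image, physical scene recovery, physics simulation, visual blending). Every function and field name here is a hypothetical placeholder chosen for illustration, not RoLA's released API.

```python
"""Minimal sketch of the RoLA pipeline; all names are hypothetical."""

from dataclasses import dataclass, field


@dataclass
class PhysicalScene:
    """Recovered from a single image: object geometry, poses, physics."""
    meshes: list = field(default_factory=list)
    poses: list = field(default_factory=list)


def recover_physical_scene(image_path: str) -> PhysicalScene:
    # Placeholder for single-view physical scene recovery
    # (geometry, segmentation, and physical parameters).
    return PhysicalScene()


def collect_demonstrations(scene: PhysicalScene, num_episodes: int) -> list:
    demos = []
    for _ in range(num_episodes):
        # Placeholder: execute a manipulation episode in a physics
        # simulator, then composite the rendered robot and moved objects
        # back onto the source image ("visual blending") to obtain
        # photorealistic observations.
        demos.append({"observations": [], "actions": []})
    return demos


if __name__ == "__main__":
    scene = recover_physical_scene("any_in_the_wild_image.jpg")
    demos = collect_demonstrations(scene, num_episodes=100)
    print(f"Collected {len(demos)} visuomotor demonstrations")
```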


Any Image Sources

[Figure: supported image sources include robotic datasets, camera captures, and Internet images.]

Single-Image Imitation Learning

[Figure: manipulation in cluttered scenes, from RoLA-generated data in simulation to real-world deployment on tasks such as pouring soda water.]

Vision-Language-Action Models

Example language instructions:
"Pick up the yellow banana."
"Pick up the carrot."
"Put the yellow lemon beside the green apple."
"Take the grey object beside the lemon and place it beside the yellow apple."
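To illustrate how such instruction-following data might be assembled for VLA fine-tuning, here is a small sketch that pairs generated episodes with templated language instructions. The templates and episode fields are our illustrative assumptions, not the paper's data format.

```python
"""Sketch: attaching templated instructions to generated episodes.
Templates and field names are illustrative assumptions."""

import random

TEMPLATES = [
    "Pick up the {color} {obj}.",
    "Put the {color} {obj} beside the {ref}.",
]


def annotate(episode: dict) -> dict:
    """Attach a natural-language instruction to one generated episode."""
    template = random.choice(TEMPLATES)
    episode["instruction"] = template.format(
        color=episode["target_color"],         # e.g. "yellow"
        obj=episode["target_object"],          # e.g. "banana"
        ref=episode.get("reference", "plate"), # optional spatial anchor
    )
    return episode


if __name__ == "__main__":
    demo = {"target_color": "yellow", "target_object": "banana", "actions": []}
    print(annotate(demo)["instruction"])
```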

Manipulation Prior from Internet Images

Using apple picking as an example, we collect diverse Internet images containing apples to train an image-based grasping prior.

We perform real-world evaluations on apples of varying colors, sizes, types, and positions.
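The loop outlined above, pooling RoLA-generated grasp demonstrations across many Internet images and then fitting a policy, might look like the following sketch. As before, every name is a hypothetical placeholder, not RoLA's API.

```python
"""Sketch: training an apple-grasping prior from Internet images.
All names are hypothetical placeholders."""


def recover_physical_scene(image_path: str) -> dict:
    # Placeholder: single-view physical scene recovery (see abstract).
    return {"image": image_path, "objects": ["apple"]}


def collect_grasp_demos(scene: dict, n: int) -> list:
    # Placeholder: sample grasps on the recovered apple geometry in
    # simulation and render photorealistic observations via visual blending.
    return [{"image": scene["image"], "grasp_pose": None} for _ in range(n)]


def train_grasp_prior(image_paths: list, demos_per_image: int = 50) -> list:
    dataset = []
    for path in image_paths:
        scene = recover_physical_scene(path)
        dataset += collect_grasp_demos(scene, demos_per_image)
    # Placeholder: fit an image-conditioned grasp policy (e.g., behavior
    # cloning) on the pooled demonstrations.
    return dataset


if __name__ == "__main__":
    demos = train_grasp_prior(["apple_red.jpg", "apple_green.jpg"])
    print(f"{len(demos)} grasp demonstrations pooled for training")
```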