Robot learning from interacting with the physical world is fundamentally bottlenecked by the cost of physical interaction.
The two common alternatives, supervised finetuning (SFT) on expert demonstrations and reinforcement learning (RL) in a
software simulator, are limited by the amount of available expert data and by the sim-to-real gap in manipulation, respectively.
With the recent emergence of world models learned from real-world video-action data, we ask whether
training a policy in a world model can achieve better real-robot performance than supervised learning or
software simulation. We propose World-Gymnast, which performs RL finetuning of a vision-language-action (VLA)
policy by rolling out the policy in an action-conditioned video world model and rewarding the rollouts with a
vision-language model (VLM). On the Bridge robot setup, World-Gymnast outperforms SFT by as much as 18x and
RL in a software simulator by as much as 2x. More importantly, World-Gymnast demonstrates intriguing capabilities of RL with a
world model, including training on diverse language instructions and novel scenes from the world model, test-time
training in a novel scene, and online iterative world-model and policy improvement. Our results suggest that learning a world
model and training robot policies in the cloud could be key to bridging the gap between robots that work in
demonstrations and robots that can work in anyone's household.
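To make the training loop concrete, the following is a minimal REINFORCE-style sketch of the rollout-and-reward procedure described above. The names (vla_policy, world_model, vlm_reward, instructions, init_frames), their interfaces, and the specific RL algorithm are illustrative assumptions rather than the paper's actual implementation.

```python
# Hypothetical sketch of RL finetuning a VLA policy inside a learned world model.
# The policy, world model, and VLM reward are passed in as stand-in callables.
import torch

def rl_finetune_in_world_model(vla_policy, world_model, vlm_reward,
                               instructions, init_frames,
                               horizon=20, iters=1000, lr=1e-5):
    """REINFORCE-style update: roll out imagined trajectories, score with a VLM."""
    opt = torch.optim.Adam(vla_policy.parameters(), lr=lr)
    for _ in range(iters):
        # Sample a language instruction and an initial observation (frame).
        task, frame = instructions.sample(), init_frames.sample()
        log_probs, frames = [], [frame]
        for _ in range(horizon):
            # Policy proposes an action distribution given the frame and instruction.
            dist = vla_policy(frame, task)
            action = dist.sample()
            log_probs.append(dist.log_prob(action).sum())
            # Action-conditioned video world model predicts the next frame.
            frame = world_model(frames, action)
            frames.append(frame)
        # VLM judges whether the imagined rollout completed the instruction,
        # e.g. returning a scalar success score in [0, 1].
        reward = vlm_reward(frames, task)
        loss = -(reward * torch.stack(log_probs).sum())
        opt.zero_grad()
        loss.backward()
        opt.step()
```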