Diffusion Policy: Visuomotor Policy Learning via Action Diffusion


Cheng Chi
Columbia University
Siyuan Feng
Toyota Research Institute
Yilun Du
Massachusetts Institute of Technology
Zhenjia Xu
Columbia University
Eric Cousineau
Toyota Research Institute
Benjamin CM Burchfiel
Toyota Research Institute
Shuran Song
Columbia University

Paper ID 26

Session 4. Large Data and Vision-Language Models for Robotics

Poster Session Tuesday, July 11

Poster 26

Abstract: This paper introduces Diffusion Policy, a new way of generating robot behavior by representing a robot’s visuomotor policy as a conditional denoising diffusion process. We benchmark Diffusion Policy across 12 different tasks from 4 different robot manipulation benchmarks and find that it consistently outperforms existing state-of-the-art robot learning methods with an average improvement of 46.9%. Diffusion Policy learns the score function of the action distribution and, during inference, iteratively optimizes with respect to this gradient field via a series of stochastic Langevin dynamics steps. We find that the diffusion formulation yields powerful advantages when used for robot policies, including gracefully handling multimodal action distributions, being suitable for high-dimensional action spaces, and exhibiting impressive training stability. To fully unlock the potential of diffusion models for visuomotor policy learning on physical robots, this paper presents a set of key technical contributions, including the incorporation of receding horizon control, visual conditioning, and the time-series diffusion transformer. We hope this work will help motivate a new generation of policy learning techniques that are able to leverage the powerful generative modeling capabilities of diffusion models. Code, data, and training details will be publicly available.
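To make the denoising process the abstract describes concrete, below is a minimal sketch of a standard DDPM-style reverse-diffusion inference loop for action generation: starting from Gaussian noise, an action sequence is iteratively denoised conditioned on an observation embedding. All names and hyperparameters here (`eps_model`, `sample_action_sequence`, the noise schedule) are illustrative assumptions, not the authors' released API.

```python
import torch

def sample_action_sequence(eps_model, obs_embedding, horizon=16, action_dim=7,
                           num_steps=100, device="cpu"):
    """DDPM-style reverse process (a sketch): denoise a full action sequence
    conditioned on observations. `eps_model(noisy_actions, timestep, obs)`
    is a hypothetical trained noise-prediction network."""
    # Linear noise schedule; a cosine or square-cosine schedule is also common.
    betas = torch.linspace(1e-4, 0.02, num_steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    # x_K ~ N(0, I): pure noise over the whole action horizon.
    x = torch.randn(1, horizon, action_dim, device=device)
    for k in reversed(range(num_steps)):
        t = torch.full((1,), k, device=device, dtype=torch.long)
        # Predicted noise; a scaled estimate of the action-distribution score.
        eps = eps_model(x, t, obs_embedding)
        # Posterior mean of x_{k-1} given x_k (standard DDPM update).
        coef = betas[k] / torch.sqrt(1.0 - alpha_bars[k])
        mean = (x - coef * eps) / torch.sqrt(alphas[k])
        if k > 0:
            # Stochastic step: injected noise plays the role of the
            # Langevin-dynamics perturbation mentioned in the abstract.
            x = mean + torch.sqrt(betas[k]) * torch.randn_like(x)
        else:
            x = mean
    return x  # denoised action sequence, shape (1, horizon, action_dim)
```

Under the receding horizon control the abstract mentions, only the first few actions of the returned sequence would be executed on the robot before re-running the sampler from the new observation.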