Abstract: Diffusion models excel at creating images and videos thanks to their multimodal generative capabilities, which have also attracted the interest of roboticists for trajectory planning and policy learning. However, the stochastic nature of diffusion models is fundamentally at odds with the precise dynamical equations describing the feasible motion of robots. Hence, generating dynamically admissible robot trajectories is a challenge for diffusion models. To alleviate this issue, we introduce DDAT: Diffusion policies for Dynamically Admissible Trajectories, to generate provably admissible trajectories of black-box robotic systems using diffusion models. A sequence of states is a dynamically admissible trajectory if each state of the sequence belongs to the reachable set of its predecessor under the robot’s equations of motion. To generate such trajectories, our diffusion policies project their predictions onto a dynamically admissible manifold during both training and inference to align the objective of the denoiser neural network with the dynamical admissibility constraint. These projections are challenging due to their autoregressive character and because the black-box nature of the dynamics prevents an exact characterization of the reachable sets. We thus enforce admissibility by iteratively sampling a polytopic underapproximation of the reachable set of a state, projecting its predicted successor onto this polytope, and repeating the process from the projected successor. By generating accurate trajectories, this projection removes the need for constant replanning and enables one-shot long-horizon trajectory planning. We demonstrate that our proposed framework generates higher-quality dynamically admissible robot trajectories through extensive simulations on a quadcopter and various MuJoCo environments, along with real-world experiments on a Unitree GO1.
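The following is a minimal, illustrative sketch (not the authors' implementation) of the autoregressive projection summarized above. It assumes a hypothetical black-box one-step model `step(x, u)` and an action sampler `sample_actions(n)`: successors of the current state under sampled actions form a polytope (their convex hull, which underapproximates the reachable set when that set is convex), the predicted next state is projected onto this polytope, and the procedure iterates from the projected state.

```python
# Sketch only: hypothetical names (`step`, `sample_actions`); not the paper's exact algorithm.
import numpy as np
from scipy.optimize import minimize


def project_onto_hull(vertices, x_pred):
    """Project x_pred onto the convex hull of `vertices` (one vertex per row)
    by solving a small QP over convex-combination weights lam >= 0, sum(lam) = 1."""
    n = len(vertices)
    objective = lambda lam: np.sum((vertices.T @ lam - x_pred) ** 2)
    constraints = [{"type": "eq", "fun": lambda lam: np.sum(lam) - 1.0}]
    res = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return vertices.T @ res.x


def make_admissible(x0, predicted_states, step, sample_actions, n_samples=32):
    """Replace each predicted state by its projection onto a sampled polytopic
    underapproximation of the reachable set of the (already projected) previous state."""
    x = x0
    admissible = []
    for x_pred in predicted_states:
        # Polytope vertices: successors reached from x under sampled candidate actions.
        actions = sample_actions(n_samples)                  # assumed: returns candidate controls
        vertices = np.stack([step(x, a) for a in actions])   # assumed: black-box one-step dynamics
        x = project_onto_hull(vertices, x_pred)              # project the prediction, then iterate from it
        admissible.append(x)
    return np.stack(admissible)
```

Because each projected state is fed back as the starting point for the next reachable-set sample, every state in the returned sequence is reachable (up to the quality of the sampled underapproximation) from its predecessor, which is the admissibility property described above.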