Composable Energy Policies for Reactive Motion Generation and Reinforcement Learning


Julen Urain (TU Darmstadt),
Puze Liu (IAS, TU Darmstadt),
Anqi Li (University of Washington),
Carlo D'Eramo (TU Darmstadt),
Jan Peters (TU Darmstadt)
Paper Website
Paper #052
Interactive Poster Session III, Interactive Poster Session VI
Abstract

Reactive motion generation problems are usually solved by computing the control action as a sum of policies. However, these policies are independent of each other and can therefore exhibit conflicting behaviors when their contributions are summed. We introduce Composable Energy Policies (CEP), a novel framework for modular reactive motion generation. CEP computes the control action by optimizing over the product of a set of stochastic policies. This product assigns high probability to actions that satisfy all the component policies and low probability to the rest. Optimizing over the product thus avoids the detrimental effect of conflicting behaviors by choosing an action that satisfies all the objectives. In addition, we show that CEP naturally adapts to the reinforcement learning problem, allowing us to integrate, in a hierarchical fashion, any distribution as a prior, from multimodal to non-smooth distributions, and to learn a new policy given these priors.
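In CEP the components are general energy-based policies optimized numerically, but the contrast between summing policies and taking their product can be illustrated in a minimal sketch, assuming each component is a Gaussian over actions (a simplifying assumption; the function name `product_of_gaussian_policies` and the example components are hypothetical). The product of Gaussians has a closed form: precisions add, and the mean is the precision-weighted average of the component means, so confident components dominate the chosen action.

```python
import numpy as np

def product_of_gaussian_policies(means, covs):
    """Combine independent Gaussian action policies via their product.

    The (unnormalized) product of Gaussian densities is again Gaussian:
    its precision is the sum of the component precisions, and its mean is
    the precision-weighted average of the component means. That mean is
    the action maximizing the joint density, i.e. the action most
    compatible with all components at once.
    """
    precisions = [np.linalg.inv(c) for c in covs]
    joint_precision = sum(precisions)
    joint_cov = np.linalg.inv(joint_precision)
    joint_mean = joint_cov @ sum(p @ m for p, m in zip(precisions, means))
    return joint_mean, joint_cov

# Two potentially conflicting 2-D components: a confident go-to-goal
# policy and an uncertain obstacle-avoidance policy. A plain sum of
# means ignores confidence; the product weights each component by its
# inverse covariance.
goal_mean, goal_cov = np.array([1.0, 0.0]), np.eye(2) * 0.5    # confident
avoid_mean, avoid_cov = np.array([0.0, 1.0]), np.eye(2) * 2.0  # uncertain
action, _ = product_of_gaussian_policies([goal_mean, avoid_mean],
                                         [goal_cov, avoid_cov])
# action leans toward the confident goal policy: [0.8, 0.2]
```

Note that the naive sum of the two means, `[1.0, 1.0]`, treats both objectives as equally trustworthy, whereas the product shifts the action toward the low-variance component.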

Spotlight Presentation
