Safety in Robot Learning and Control

Organizers: Scott Niekum, Hadas Kress-Gazit


Designing robotic systems and learning algorithms with safety and correctness in mind is an increasingly important area of research. However, generally accepted definitions of safety -- and means of achieving it -- remain nebulous. This workshop aims to bring together researchers from diverse backgrounds to better define safety issues in robotics and learning, characterize the space of current approaches, and identify promising directions for future research and multidisciplinary collaboration. Toward this goal, we ask that participants be willing to contribute to a position paper, both during and after the workshop, that will summarize our conclusions on these topics for the benefit of the broader research community. We will solicit pecha kucha presentations on areas including, but not limited to:

  • Theory of safety in robotics and learning
  • Synthesis and verification of safe policies
  • Model-based vs. model-free safety guarantees
  • High-confidence policy evaluation
  • Safety in reinforcement and imitation learning
  • Safe exploration
  • Safety, ethics, and public policy