Scalable Learning for Integrated Perception and Planning


Organizers: Maximilian Durner, Martin Sundermeyer, Zoltan Marton, En Yen Puang, Rudolph Triebel

Website: https://scalableroboticlearning.github.io

In recent years, both the computer vision and the machine learning communities have shown increasing interest in the specific challenges of robotic perception and planning. The field demands feasible training procedures, strong generalization capabilities, fast runtime, interpretable models, and robustness. However, current state-of-the-art approaches still cannot meet all of these requirements, which is why perception is often regarded as the bottleneck of robotic manipulation.

This workshop serves as a platform to connect these communities and to encourage feasible solutions that bridge the gap between stand-alone perception and robot-related tasks such as motion and assembly planning, visual servoing, and grasping. A central topic is how sensing, manipulation, and planning can be combined to yield mutual benefits. We also look for scalable learning-based approaches that require little supervision and examine their benefits and limitations; these include learning in simulation, transfer and few-shot learning, automatic labeling, and reinforcement learning. Are end-to-end learning approaches really the right way to go, or are modular pipelines still preferable due to better introspection? Are current subtask metrics suitable indicators of execution success? What is needed to meet the demands of end-user applications in terms of scalability, robustness, runtime, cost, maintainability, and fail-safety?