Early Career Awards


It is our great pleasure to announce this year’s Early Career Awards. The three awardees will give live plenary keynotes with Q&A on July 14, 15, and 16, respectively. An additional live Q&A session with each awardee, scheduled at a time convenient for Eastern time zones, will be held on the following day.


Byron Boots
University of Washington
July 14, 18:30 UTC
Title:

Perspectives on Machine Learning for Robotics

Abstract:

Recent advances in machine learning are leading to new tools for designing intelligent robots: functions relied on to govern a robot's behavior can be learned from a robot's interaction with its environment rather than hand-designed by an engineer. Many machine learning methods assume little prior knowledge and are extremely flexible: they can model almost anything! But this flexibility comes at a cost. The same algorithms are often notoriously data-hungry and computationally expensive, two problems that can be debilitating for robotics. In this talk I’ll discuss how machine learning can be combined with prior knowledge to build effective solutions to robotics problems. I’ll start by introducing an online learning perspective on robot adaptation that unifies well-known algorithms and suggests new approaches. Along the way, I'll focus on the use of simulation and expert advice to augment learning. I’ll discuss how imperfect models can be leveraged to rapidly update simple control policies and how imitation can accelerate reinforcement learning. I will also show how we have applied some of these ideas to an autonomous off-road racing task that requires impressive sensing, speed, and agility to complete.
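The online learning perspective on adaptation mentioned above can be illustrated with a minimal, hypothetical sketch: a learned residual correction to an imperfect prior model is updated by online gradient descent from streaming interaction data. All names, model choices, and parameters below are assumptions for illustration, not the algorithms from the talk.

```python
# Illustrative only: online gradient-descent adaptation of a simple linear
# residual-dynamics model from streaming data. This is a generic online-learning
# sketch, not the specific algorithms presented in the talk.
import numpy as np

rng = np.random.default_rng(0)

state_dim, action_dim = 4, 2
A = np.zeros((state_dim, state_dim + action_dim))  # learned residual model
lr = 1e-2                                          # online learning rate

def nominal_model(x, u):
    """Hand-designed (imperfect) prior model of the dynamics."""
    return x + 0.1 * np.concatenate([u, np.zeros(state_dim - action_dim)])

def true_dynamics(x, u):
    """Unknown environment the robot interacts with (simulated here)."""
    return nominal_model(x, u) + 0.05 * np.tanh(x) + 0.01 * rng.normal(size=state_dim)

x = np.zeros(state_dim)
for t in range(1000):
    u = rng.uniform(-1.0, 1.0, size=action_dim)   # exploratory action
    z = np.concatenate([x, u])
    x_next = true_dynamics(x, u)

    # Prediction error of (prior model + learned residual)
    err = x_next - (nominal_model(x, u) + A @ z)

    # Online gradient step on the squared prediction error
    A += lr * np.outer(err, z)
    x = x_next

print("final one-step prediction error norm:", np.linalg.norm(err))
```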

Biography:

Byron Boots is an Associate Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. He received his PhD from the Machine Learning Department in the School of Computer Science at Carnegie Mellon University in 2012. He was a postdoctoral researcher at the University of Washington from 2012 to 2014 and an Assistant Professor in the School of Interactive Computing at Georgia Tech from 2014 to 2019. His group performs fundamental and applied research in machine learning, artificial intelligence, and robotics with a focus on developing theory and systems that tightly integrate perception, learning, and control. His work has been applied to a range of problems including localization and mapping, motion planning, robotic manipulation, and high-speed navigation. Byron has received several awards, including the 2010 ICML Best Paper Award, the 2018 AISTATS Best Paper Award, the 2019 RSS Best Student Paper Award, and the IJRR Paper of the Year Award for 2018. He is also the recipient of the NSF CAREER Award (2018), the Amazon Research Award (2019), and the Outstanding Junior Faculty Research Award from the College of Computing at Georgia Tech (2019).


Luca Carlone
Massachusetts Institute of Technology
July 15, 18:30 UTC
Title:

The Future of Robot Perception: Certifiable Algorithms and Real-time High-level Understanding

Abstract:

Robot perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception.
This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of noise and outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation: our algorithms are “hard to break” (e.g., are robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance. I discuss the foundations of certifiable perception and motivate how these foundations can lead to safer systems.
The second effort targets high-level understanding. While humans are able to quickly grasp geometric, semantic, and physical aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our work on real-time metric-semantic understanding and 3D Dynamic Scene Graphs. I introduce the first generation of Spatial Perception Engines, which extend the traditional notions of mapping and SLAM and allow a robot to build a “mental model” of the environment, including spatial concepts (e.g., humans, objects, rooms, buildings) and their relations at multiple levels of abstraction.
Certifiable algorithms and real-time high-level understanding are key enablers for the next generation of autonomous systems: systems that are trustworthy, understand and execute high-level human instructions, and operate in large, dynamic environments over extended periods of time.
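As a rough illustration of estimation under heavy outlier contamination (though not of the certifiable solvers described above, which additionally provide optimality certificates), the hypothetical sketch below aligns two corresponded 3D point sets with a basic RANSAC + Kabsch scheme; all names and parameters are assumptions for illustration.

```python
# Illustrative only: robust rigid alignment of corresponded 3D points with a
# basic RANSAC + Kabsch scheme. This shows the generic idea of fitting under
# many outlier correspondences; it is NOT the certifiable solvers from the talk.
import numpy as np

rng = np.random.default_rng(1)

def kabsch(P, Q):
    """Least-squares rotation R and translation t with R @ P + t ≈ Q (P, Q are 3xN)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # avoid reflections
    R = U @ D @ Vt
    return R, (cq - R @ cp).ravel()

def ransac_align(P, Q, iters=10000, tol=0.05):
    best_R, best_t, best_inliers = np.eye(3), np.zeros(3), 0
    n = P.shape[1]
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)   # minimal sample
        R, t = kabsch(P[:, idx], Q[:, idx])
        residuals = np.linalg.norm(R @ P + t[:, None] - Q, axis=0)
        inliers = residuals < tol
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best_R, best_t = kabsch(P[:, inliers], Q[:, inliers])  # refit on inliers
    return best_R, best_t, best_inliers

# Synthetic problem: 90% of the correspondences are outliers.
P = rng.normal(size=(3, 200))
R_true, t_true = kabsch(rng.normal(size=(3, 4)), rng.normal(size=(3, 4)))  # random pose
Q = R_true @ P + t_true[:, None] + 0.01 * rng.normal(size=P.shape)
outliers = rng.random(200) < 0.9
Q[:, outliers] = 3.0 * rng.normal(size=(3, outliers.sum()))

R_est, t_est, n_in = ransac_align(P, Q)
print("rotation error:", np.linalg.norm(R_est - R_true), "inliers:", n_in)
```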

Biography:

Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later as a Research Scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013-2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single and multi-robot systems. His work includes seminal results on certifiably correct algorithms for localization and mapping, as well as approaches for visual-inertial navigation and distributed mapping. He is a recipient of the 2017 Transactions on Robotics King-Sun Fu Memorial Best Paper Award, the Best Paper Award at WAFR’16, the Best Student Paper Award at the 2018 Symposium on VLSI Circuits, and the Best Paper Award in Robotic Vision at ICRA'20, and he was a Best Paper Award finalist at RSS’15. He is also the recipient of the Google Daydream Research Award (2019), the Amazon Research Award (2020), and the MIT AeroAstro Vickie Kerrebrock Faculty Award (2020). At MIT, he teaches “Robotics: Science and Systems,” the introduction to robotics for MIT undergraduates, and he created the graduate-level course “Visual Navigation for Autonomous Vehicles”, which covers mathematical foundations and fast C++ implementations of spatial perception algorithms for drones and autonomous vehicles.


Jeannette Bohg
Stanford University
July 16, 17:00 UTC
Title:

A Tale of Success and Failure in Robotic Grasping and Manipulation

Abstract:

In 2007, I was a naïve grad student and started to work on vision-based robotic grasping. I had no prior background in manipulation, kinematics, dynamics or control. Yet, I dove into the field by re-implementing and improving a learning-based method. While making some contributions, the proposed method also had many limitations, partly due to the way the problem was framed. Looking back at the entire journey until today, I find that I have learned the most about robotic grasping and manipulation from observing failures and limitations of existing approaches, including my own. In this talk, I want to highlight how these failures and limitations have shaped my view on what may be some of the underlying principles of autonomous robotic manipulation. I will emphasize three points.
First, perception and prediction will always be noisy, partial and sometimes just plain wrong. Therefore, one focus of my research is on methods that support decision-making under uncertainty due to noisy sensing, inaccurate models and hard-to-predict dynamics. To this end, I will present a robotic system that demonstrates the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. I will also talk about work that funnels uncertainty by enabling robots to exploit contact constraints during manipulation.
Second, a robot has many more sensors than just cameras, and they all provide complementary information. Therefore, one focus of my research is on methods that can exploit multimodal information such as vision and touch for contact-rich manipulation. It is non-trivial to manually design a manipulation controller that combines modalities with very different characteristics. I will present work that uses self-supervision to learn a compact and multimodal representation of visual and haptic sensory inputs, which can then be used to improve the sample efficiency of policy learning.
And third, choosing the right robot action representation has a large influence on the success of a manipulation policy, controller or planner. Having believed for many years that inferring contact points for robotic grasping is futile, I will present work that convinced me otherwise. Specifically, this work uses contact points as an abstraction that can be re-used by a diverse set of robot hands.
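To make the multimodal representation-learning idea a bit more concrete, here is a minimal, hypothetical sketch of a self-supervised fusion model trained to classify whether a (vision, touch) pair is time-aligned. The architecture, dimensions, and training signal are assumptions for illustration, not the model presented in the talk.

```python
# Illustrative only: a tiny self-supervised fusion model that learns a shared
# latent for two modalities by classifying whether a (vision, touch) pair is
# time-aligned. All sizes and design choices are illustrative assumptions.
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self, img_dim=128, touch_dim=6, latent_dim=32):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.touch_enc = nn.Sequential(nn.Linear(touch_dim, 32), nn.ReLU(),
                                       nn.Linear(32, latent_dim))
        # Head that predicts whether the two inputs come from the same time step.
        self.pair_head = nn.Sequential(nn.Linear(2 * latent_dim, 32), nn.ReLU(),
                                       nn.Linear(32, 1))

    def forward(self, img_feat, touch):
        z = torch.cat([self.img_enc(img_feat), self.touch_enc(touch)], dim=-1)
        return self.pair_head(z).squeeze(-1), z  # alignment logit + fused latent

model = FusionModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    img = torch.randn(64, 128)    # stand-in for precomputed image features
    touch = torch.randn(64, 6)    # stand-in for force/torque readings
    # Self-supervised labels: half the batch gets mismatched touch signals.
    labels = (torch.rand(64) < 0.5).float()
    shuffled = touch[torch.randperm(64)]
    touch_in = torch.where(labels[:, None].bool(), touch, shuffled)

    logits, _ = model(img, touch_in)
    loss = loss_fn(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```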

Biography:

Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods towards multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University in Dresden, where she received her Master’s in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time and multi-modal such that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several awards, most notably the 2019 IEEE International Conference on Robotics and Automation (ICRA) Best Paper Award, the 2019 IEEE Robotics and Automation Society Early Career Award and the 2017 IEEE Robotics and Automation Letters (RA-L) Best Paper Award.