Perception and Control for Fast and Agile Super-Vehicles


Organizers: Varun Murali, Keith Lynn, Chelsea Sabo, Sertac Karaman

Website: https://mit-fast.github.io/WorkshopRSS19SuperVehicles/

Remotely piloted racing vehicles buzzing through complex racing courses have inspired many roboticists to build autonomy algorithms that can do the same. As advances in algorithmic perception and control for fast and agile robotic vehicles materialize, autonomous racing vehicles are quickly approaching the ability to defeat human remote pilots in head-to-head races. Most recently, Lockheed Martin, NVIDIA, and the Drone Racing League (DRL) challenged the robotics community with the AlphaPilot program (https://www.herox.com/alphapilot), in which contestants will design and implement the algorithms for fully autonomous drone racing. These advances may ultimately lead to autonomous super-vehicles, i.e., next-generation autonomous robots capable of super-human maneuvering and racing. The resulting algorithms may become invaluable components of high-throughput autonomy software, e.g., to maneuver cars out of traffic accidents. However, the development of these super-vehicles brings significant challenges.

The purpose of this workshop is to identify, highlight, and discuss possible solutions to the open research questions in high-throughput computing for autonomous racing vehicles. The workshop also aims to identify gaps in the techniques currently used to address these problems and to open a conversation about real-time, onboard, high-throughput computing that closes these gaps. Is end-to-end deep learning a viable option for these high-speed interactions? What are the challenges for model-based solutions? What can we model, and what must we simulate? What are the gaps in transferring experiments from photorealistic exteroceptive sensor simulation (http://flightgoggles.mit.edu) to real-world systems?