Get to the Point: Learning Lidar Place Recognition and Metric Localisation Using Overhead Imagery


Tim Y. Tang (University of Oxford),
Daniele De Martini (University of Oxford),
Paul Newman (University of Oxford)
Paper Website
Paper #003
Interactive Poster Sessions V and VIII


Abstract

This paper is about localising a robot in overhead images using lidar. Specifically, we show how to solve both place recognition and metric localisation of a lidar using only publicly available overhead imagery as a map proxy. This is in contrast to current approaches that rely on prior sensor maps. To handle the drastic modality difference (overhead image vs. on-the-ground lidar), our method learns a representation that purposely and suitably transforms a given overhead image into a collection of 2D points, allowing for direct comparison against lidar scans. Once both modalities are expressed as points, point-based methods can be leveraged to learn the registration and place recognition tasks. Our method is the first to learn place recognition for lidar using only overhead imagery, and it outperforms prior work for metric localisation with large initial pose offsets.
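To make the image-to-points idea concrete, the following is a minimal, hypothetical sketch, not the authors' released code: a small convolutional network scores every pixel of an overhead tile, and the top-scoring pixels are converted to 2D points in metres so they can be compared against a lidar scan with point-based registration or place-recognition methods. The network architecture, the point budget `k`, and the `metres_per_pixel` scale are assumptions for illustration only.

```python
# Hypothetical sketch of the image-to-points step (illustrative, not the paper's model).

import torch
import torch.nn as nn


class ImageToPoints(nn.Module):
    """Predict a per-pixel score; the top-k pixels become 2D points in metres."""

    def __init__(self, k: int = 512, metres_per_pixel: float = 0.5):
        super().__init__()
        self.k = k
        self.metres_per_pixel = metres_per_pixel
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # per-pixel "point-ness" score
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) overhead tile centred on the candidate map location
        b, _, h, w = image.shape
        scores = self.backbone(image).view(b, -1)      # (B, H*W)
        idx = scores.topk(self.k, dim=1).indices       # (B, k) flattened pixel indices
        rows, cols = idx // w, idx % w
        # Shift the origin to the tile centre and convert pixels to metres.
        x = (cols.float() - w / 2) * self.metres_per_pixel
        y = (rows.float() - h / 2) * self.metres_per_pixel
        return torch.stack([x, y], dim=-1)             # (B, k, 2) 2D points


if __name__ == "__main__":
    model = ImageToPoints()
    overhead_tile = torch.rand(1, 3, 256, 256)
    points = model(overhead_tile)                      # (1, 512, 2) points in metres
    # These points, together with the (x, y) points of a lidar scan, could then be
    # fed to a point-based registration / place-recognition network or a classical
    # 2D point matcher.
    print(points.shape)
```

The key design choice this sketch mirrors is that the overhead image is never compared to the lidar scan in image space; both modalities are first reduced to 2D point sets so that standard point-based machinery applies.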
