Fast Monocular Visual-Inertial Initialization Leveraging Learned Single-View Depth


Nathaniel W Merrill
University of Delaware
Patrick Geneva
University of Delaware
Saimouli Katragadda
University of Delaware
Chuchu Chen
University of Delaware
Guoquan Huang
University of Delaware

Paper ID 72

Nominated for Best Student Paper

Session 9. Robot State Estimation

Poster Session Thursday, July 13

Poster 8

Abstract: In monocular visual-inertial navigation systems, it is desirable to initialize as quickly and robustly as possible. State-of-the-art initialization methods typically make linear approximations using the image features and inertial information in order to initialize in closed form, and then refine the states with a nonlinear optimization. While the standard methods typically wait for a 2-second data window, a recent work has shown that it is possible to initialize faster (0.5 seconds) by adding constraints from a robust but only up-to-scale monocular depth network in the nonlinear optimization. To further expedite the initialization, in this work we instead leverage the scale-less depth measurements in the linear initialization step that precedes the nonlinear one, which requires only a single depth image for the first frame. We show that estimating each feature state independently in the closed-form solution can be replaced by estimating just the scale and offset parameters of the learned depth map. Interestingly, our formulation makes it possible to construct small minimal problems in a RANSAC loop, whereas the minimal problem of the typical linear system is quite large and includes every feature state. Experiments show that our method improves the overall initialization performance on popular public datasets (EuRoC MAV and TUM-VI) over state-of-the-art methods. For the TUM-VI dataset, we show superior initialization performance with only a 300ms window of data, which is the smallest window ever reported, and show that our method initializes more often, more robustly, and more accurately in different challenging scenarios.
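
To illustrate the core idea in the abstract (replacing per-feature depth unknowns with a two-parameter scale-and-offset alignment of the learned depth map, estimated with small minimal problems in a RANSAC loop), the following is a minimal, hypothetical sketch and not the paper's actual formulation. It assumes made-up arrays `d_learned` (up-to-scale network depths at feature locations) and `d_metric` (metric depth constraints from some other source), whereas the actual method forms these constraints from image features and inertial information inside the linear initialization.

```python
import numpy as np

def fit_scale_offset(d_learned, d_metric):
    """Least-squares fit of d_metric ~= a * d_learned + b."""
    A = np.stack([d_learned, np.ones_like(d_learned)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, d_metric, rcond=None)
    return a, b

def ransac_scale_offset(d_learned, d_metric, iters=200, thresh=0.05, rng=None):
    """Two-point minimal RANSAC for the scale/offset of a learned depth map.

    Each minimal sample is only two correspondences, since two points
    determine the line d_metric = a * d_learned + b.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(d_learned)
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        if np.isclose(d_learned[i], d_learned[j]):
            continue  # degenerate minimal sample
        a = (d_metric[i] - d_metric[j]) / (d_learned[i] - d_learned[j])
        b = d_metric[i] - a * d_learned[i]
        residuals = np.abs(a * d_learned + b - d_metric)
        inliers = residuals < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers is None:
        raise ValueError("no non-degenerate minimal sample found")
    # Refine the scale/offset on the consensus (inlier) set.
    return fit_scale_offset(d_learned[best_inliers], d_metric[best_inliers])

# Toy usage with synthetic data: true depth = 2.0 * learned + 0.3, plus outliers.
rng = np.random.default_rng(0)
d_learned = rng.uniform(0.5, 5.0, size=100)
d_metric = 2.0 * d_learned + 0.3
d_metric[:10] += rng.uniform(1.0, 3.0, size=10)  # simulated outliers
print(ransac_scale_offset(d_learned, d_metric, rng=rng))  # approximately (2.0, 0.3)
```

The contrast with the standard closed-form initialization is the size of the unknowns: here only two parameters are estimated for the whole depth map, so each RANSAC hypothesis needs only a two-point sample, whereas a minimal problem that includes every feature's depth grows with the number of features.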