AnyFeature-VSLAM: Automating the Usage of Any Feature into Visual SLAM


Alejandro Fontan, Javier Civera, Michael Milford

Paper ID 84

Session 8. Perception and navigation

Poster Session day 2 (Wednesday, July 17)

Abstract: Feature-based SLAM relies heavily on the specific type of visual feature employed. The most effective feature under some conditions may perform worse, or not be suitable at all, under others, leading to significant performance variability. Seamlessly switching to the most effective visual feature is a desirable quality for SLAM, but currently this is a cumbersome manual task that demands substantial parameter tuning and expert knowledge.

In this paper, we present AnyFeature-VSLAM, an automated visual SLAM pipeline capable of switching to a chosen type of feature effortlessly and without manual intervention. The tuning of parameters associated with visual features is performed automatically to achieve the best performance. We built AnyFeature-VSLAM on top of ORB-SLAM2, one of the most popular and widely used feature-based visual SLAM implementations. Through extensive experiments across various benchmark datasets, we demonstrate that AnyFeature-VSLAM consistently delivers good results irrespective of the chosen visual feature, outperforming baseline implementations. Specifically, our paper includes a quantitative assessment of trajectory estimation involving seven different keypoint and descriptor combinations across thirty sequences spanning four distinct publicly available datasets. Furthermore, we showcase the enhanced flexibility of our system by subjecting it to four additional challenging datasets. Code publicly available at: https://github.com/alejandrofontan/AnyFeature-VSLAM.
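The core idea of swapping keypoint and descriptor types behind a single extraction interface can be illustrated with a minimal sketch. The snippet below is a hypothetical example built on OpenCV's cv::Feature2D abstraction; it is not the AnyFeature-VSLAM implementation (see the linked repository for that) and uses default feature parameters, whereas the automatic per-feature parameter tuning described in the abstract is precisely what AnyFeature-VSLAM adds on top.

```cpp
// Hypothetical sketch: selecting a keypoint/descriptor type at run time
// behind OpenCV's common cv::Feature2D interface. This is NOT the
// AnyFeature-VSLAM code; it only illustrates feature-agnostic extraction.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>

#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

// Create a feature extractor by name. Default parameters are used here;
// AnyFeature-VSLAM instead tunes feature-specific parameters automatically.
cv::Ptr<cv::Feature2D> makeFeature(const std::string& name) {
    if (name == "ORB")   return cv::ORB::create(2000);  // max keypoints (illustrative)
    if (name == "AKAZE") return cv::AKAZE::create();
    if (name == "BRISK") return cv::BRISK::create();
    throw std::invalid_argument("Unknown feature type: " + name);
}

int main(int argc, char** argv) {
    const std::string featureName = argc > 1 ? argv[1] : "ORB";
    const std::string imagePath   = argc > 2 ? argv[2] : "frame.png";

    cv::Mat image = cv::imread(imagePath, cv::IMREAD_GRAYSCALE);
    if (image.empty()) {
        std::cerr << "Could not read image: " << imagePath << std::endl;
        return 1;
    }

    // One extraction call, regardless of which feature type was chosen.
    cv::Ptr<cv::Feature2D> feature = makeFeature(featureName);
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    feature->detectAndCompute(image, cv::noArray(), keypoints, descriptors);

    std::cout << featureName << ": " << keypoints.size() << " keypoints, "
              << "descriptor dimension " << descriptors.cols << std::endl;
    return 0;
}
```

In a full SLAM front end, the matching, thresholding, and map-management parameters downstream of this extraction step also depend on the chosen feature, which is why switching features normally requires the manual re-tuning that AnyFeature-VSLAM automates.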