Abstract: The performance of feature-based SLAM depends heavily on the specific type of visual feature employed. A feature that is most effective under some conditions may perform poorly, or be unsuitable altogether, under others, leading to significant performance variability. Seamlessly switching to the most effective visual feature is therefore a desirable quality for SLAM, but currently this involves a cumbersome manual task that demands substantial parameter tuning and expert knowledge.
In this paper, we present AnyFeature-VSLAM, an automated visual SLAM pipeline capable of switching to a chosen type of visual feature effortlessly and without manual intervention. The parameters associated with the chosen feature are tuned automatically to achieve the best performance. We built AnyFeature-VSLAM on top of ORB-SLAM2, one of the most popular and widely used feature-based visual SLAM implementations. Through extensive experiments on several benchmark datasets, we demonstrate that AnyFeature-VSLAM consistently delivers good results irrespective of the chosen visual feature, outperforming baseline implementations. Specifically, our paper includes a quantitative assessment of trajectory estimation for seven different keypoint-and-descriptor combinations across thirty sequences from four distinct publicly available datasets. Furthermore, we showcase the enhanced flexibility of our system by subjecting it to four additional challenging datasets. Code is publicly available at: https://github.com/alejandrofontan/AnyFeature-VSLAM.