TerrainNet: Visual Modeling of Complex Terrain for High-speed, Off-road Navigation


Xiangyun Meng
University of Washington
Nathan Hatch
University of Washington
Alexander Lambert
University of Washington
Anqi Li
University of Washington
Nolan Wagener
Georgia Tech
Matthew Schmittle
University of Washington
JoonHo Lee
University of Washington
Wentao Yuan
University of Washington
Zoey Chen
University of Washington
Samuel Deng
University of Washington
Greg Okopal
University of Washington
Dieter Fox
NVIDIA Research / University of Washington
Byron Boots
University of Washington
Amirreza Shaban
University of Washington

Paper ID 103

Session 13. Autonomous Vehicles & Field Robotics

Poster Session Friday, July 14

Poster 7

Abstract: Effective use of camera-based vision systems is essential for robust performance in autonomous off-road driving, particularly in the high-speed regime. Despite success in structured, on-road settings, current end-to-end approaches for scene prediction have yet to be successfully adapted to complex outdoor terrain. To this end, we present TerrainNet, a vision-based perception system that predicts semantic and geometric terrain properties for aggressive, off-road navigation. The approach relies on several key insights and practical considerations for achieving reliable terrain modeling. The network includes a multi-headed output representation to capture the fine- and coarse-grained terrain features necessary for estimating traversability. Accurate depth estimation is achieved using self-supervised depth completion with multi-view RGB and stereo inputs. Requirements for real-time performance and fast inference speeds are met using efficient, learned image feature projections. Furthermore, the model is trained on a large-scale, real-world off-road dataset collected across a variety of diverse outdoor environments. We show how TerrainNet can also be used for costmap prediction and provide a detailed framework for integration into a planning module. We demonstrate the performance of TerrainNet through extensive comparison to current state-of-the-art baselines for camera-only scene prediction. Finally, we showcase the effectiveness of integrating TerrainNet within a complete autonomous-driving stack by conducting a real-world vehicle test in a challenging off-road scenario.