CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory


Nur Muhammad (Mahi) Shafiullah
New York University
Chris Paxton
Meta AI
Lerrel Pinto
New York University
Soumith Chintala
Meta
Arthur Szlam
Meta

Paper ID 74

Session 10. Robot Perception

Poster Session Thursday, July 13

Poster 10

Abstract: We propose CLIP-Fields, an implicit scene model that can be used for a variety of tasks, such as segmentation, instance identification, semantic search over space, and view localization. CLIP-Fields learns a mapping from spatial locations to semantic embedding vectors. Importantly, we show that this mapping can be trained with supervision coming only from web-image and web-text trained models such as CLIP, Detic, and Sentence-BERT, and thus uses no direct human supervision. When compared to baselines like Mask-RCNN, our method outperforms them on few-shot instance identification and semantic segmentation on the HM3D dataset while using only a fraction of the examples. Finally, we show that using CLIP-Fields as a scene memory, robots can perform semantic navigation in real-world environments. Our code and demonstration videos are available here: https://clip-fields.github.io
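To make the core idea concrete, the sketch below shows a minimal semantic field: an MLP that maps 3D coordinates to unit-norm embedding vectors and is trained to match per-point target embeddings. In CLIP-Fields those targets would come from web-trained models (CLIP, Detic, Sentence-BERT) applied to the pixels and labels observed at each point; here random placeholders stand in. This is an illustrative assumption, not the authors' implementation, and all names, dimensions, and hyperparameters are hypothetical.

```python
# Minimal sketch of a semantic field (not the CLIP-Fields codebase).
# An MLP maps 3D points to embedding vectors; targets are placeholders
# standing in for CLIP / Detic / Sentence-BERT supervision.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticField(nn.Module):
    def __init__(self, embed_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        # Plain MLP over raw coordinates; a learned spatial encoding is omitted.
        self.net = nn.Sequential(
            nn.Linear(3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) world coordinates -> (N, embed_dim) unit embeddings.
        return F.normalize(self.net(xyz), dim=-1)


if __name__ == "__main__":
    field = SemanticField()
    optim = torch.optim.Adam(field.parameters(), lr=1e-3)

    # Placeholder training data: 3D points and target semantic embeddings
    # (these would be image/text model features in the real system).
    points = torch.rand(1024, 3)
    targets = F.normalize(torch.randn(1024, 512), dim=-1)

    for _ in range(100):
        optim.zero_grad()
        pred = field(points)
        # Pull predicted embeddings toward targets via cosine similarity.
        loss = (1.0 - (pred * targets).sum(dim=-1)).mean()
        loss.backward()
        optim.step()

    # Semantic search over space: rank points by similarity to a query
    # embedding, e.g. the text embedding of "a mug" in the real system.
    query = F.normalize(torch.randn(512), dim=0)
    scores = field(points) @ query
    print("Most similar point to query:", points[scores.argmax()])
```

Once trained, the same field supports the tasks named in the abstract by changing only the query embedding: a text query gives semantic search, while an image-crop embedding can localize a view.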