Learning Any-View 6DoF Robotic Grasping in Cluttered Scenes via Neural Surface Rendering


Snehal Jauhri, Ishikaa Lunawat, Georgia Chalvatzaki
Paper ID 46

Session 6. Grasping

Poster Session day 1 (Tuesday, July 16)

Abstract: A significant challenge for real-world robotic manipulation is the effective 6DoF grasping of objects in cluttered scenes from any single viewpoint without needing additional scene exploration. This work re-interprets grasping as rendering and introduces NeuGraspNet, a novel method for 6DoF grasp detection that leverages advances in neural volumetric representations and surface rendering. We encode the interaction between a robot’s end-effector and an object’s surface by jointly learning to render the local object surface and learning grasping functions in a shared feature space. Our approach uses global (scene-level) features for grasp generation and local (grasp-level) neural surface features for grasp evaluation. This enables effective, fully implicit 6DoF grasp quality prediction, even in partially observed scenes. NeuGraspNet operates on random viewpoints, common in mobile manipulation scenarios, and outperforms existing implicit and semi-implicit grasping methods. We demonstrate the real-world applicability of the method with a mobile manipulator robot, grasping in open cluttered spaces. Project website at: https://sites.google.com/view/neugraspnet