Vision-driven Compliant Manipulation for Reliable, High-Precision Assembly Tasks


Andrew S. Morgan (Yale University),
Bowen Wen (Rutgers University),
Junchi Liang (Rutgers University),
Abdeslam Boularias (Rutgers University),
Aaron M. Dollar (Yale University),
Kostas Bekris (Rutgers University)
Paper #070

Abstract

Highly constrained manipulation tasks remain challenging for autonomous robots, as they require high precision, typically less than 1mm, which is often beyond what traditional perception systems can achieve. This paper demonstrates that the combination of state-of-the-art object tracking with passively adaptive mechanical hardware can be leveraged to complete precision manipulation tasks with tight, industrially relevant tolerances (0.25mm). The proposed control method closes the loop through vision by tracking the relative 6D pose of objects in the relevant workspace, and adjusts the control references of both the compliant manipulator and the hand to complete object insertion tasks via within-hand manipulation. Unlike previous efforts on insertion, our method requires no expensive force sensors, no precision manipulators, and no time-consuming, data-hungry online learning. Instead, it leverages mechanical compliance and relies on an object-agnostic manipulation model of the hand learned offline, off-the-shelf motion planning, and an RGBD-based object tracker trained solely on synthetic data. These features allow the proposed system to generalize and transfer easily to new tasks and environments. This paper describes the system components in detail and showcases its efficacy through extensive experiments on tight-tolerance peg-in-hole insertion tasks of various geometries, as well as open-world constrained placement tasks.
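
To make the control flow concrete, below is a minimal sketch, not the authors' implementation, of the vision-in-the-loop insertion policy the abstract describes. The tracker, arm, and hand interfaces, the gain, and the thresholds are all hypothetical placeholders; only the overall structure (re-estimate relative 6D pose, split the correction between the compliant arm and within-hand manipulation, let compliance absorb the residual) follows the paper.

import numpy as np

POS_TOL = 0.25e-3   # 0.25 mm insertion tolerance, in meters
ROT_TOL = 0.01      # ~0.6 deg orientation threshold, in radians (assumed)
GAIN = 0.5          # proportional gain on pose corrections (assumed)


def pose_error(T_obj, T_goal):
    """6D error between two 4x4 homogeneous transforms: a translation
    delta and a rotation delta expressed as an axis-angle vector."""
    dT = np.linalg.inv(T_obj) @ T_goal
    t_err = dT[:3, 3]
    R = dT[:3, :3]
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-9:
        w_err = np.zeros(3)
    else:  # log map of the rotation matrix (valid away from angle = pi)
        w_err = angle / (2.0 * np.sin(angle)) * np.array(
            [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return t_err, w_err


def insertion_loop(tracker, arm, hand, T_goal, max_iters=500):
    """Close the loop through vision: re-estimate the grasped object's
    6D pose, then split the correction between the compliant arm
    (coarse, translational) and within-hand manipulation (fine,
    rotational) until the object is aligned with the hole."""
    for _ in range(max_iters):
        T_obj = tracker.track()                  # RGBD-based 6D object pose
        t_err, w_err = pose_error(T_obj, T_goal)
        if (np.linalg.norm(t_err) < POS_TOL
                and np.linalg.norm(w_err) < ROT_TOL):
            arm.insert()       # passive compliance absorbs residual error
            return True
        arm.shift_reference(GAIN * t_err)        # coarse Cartesian correction
        hand.within_hand_adjust(GAIN * w_err)    # fine in-hand correction
    return False

The split between a coarse arm correction and a fine in-hand correction mirrors the paper's use of a compliant manipulator plus an offline-learned within-hand manipulation model; the simple proportional update here stands in for whatever control law the authors actually use.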
