Demonstrating Language-Grounded Motion Controller


Ravi Tejwani, Chengyuan Ma, Paco Gomez-Paz, Paolo Bonato, Haruhiko Asada

Paper ID 105

Session 14. Control

Poster Session Day 3 (Thursday, July 18)

Abstract: Recent advances have enabled human-robot collaboration through physical assistance and verbal guidance. However, limitations persist in coordinating robots' physical motions and speech in response to real-time changes in human behavior during collaborative contact tasks. We first derive principles from analyzing physical therapists' movements and speech during patient exercises. These principles are translated into control objectives to: 1) guide users through trajectories, 2) control motion and speech pace to align completion times with varying user cooperation, and 3) dynamically paraphrase speech along the trajectory. We then propose a Language Controller that synchronizes motion and speech, modulating both based on user cooperation. Experiments with 12 users show that the Language Controller aligns motion and speech more successfully than baseline methods. This work provides a framework for fluent human-robot collaboration.
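To make the second control objective concrete, below is a minimal sketch of how a controller might scale both trajectory pace and speech rate by a shared cooperation-dependent factor, so spoken cues stay synchronized with the physical motion. This is an illustrative assumption, not the paper's actual formulation: the class name `PaceController`, the scalar cooperation estimate in [0, 1], and the linear pace law are all hypothetical.

```python
# Hypothetical sketch of the pace-modulation idea from the abstract.
# All names and the linear scaling law are assumptions; the paper's
# Language Controller may use a different formulation.

from dataclasses import dataclass


@dataclass
class PaceController:
    """Scales trajectory progress and speech rate by user cooperation.

    `cooperation` is assumed to be a scalar in [0, 1], where 1 means the
    user fully follows the guided trajectory. Motion pace and the
    text-to-speech rate share one scaling factor, keeping speech aligned
    with motion as completion time stretches under low cooperation.
    """
    nominal_pace: float = 1.0   # phase velocity (1/s) at full cooperation
    min_pace: float = 0.2       # floor so the task still completes, slowly
    phase: float = 0.0          # normalized progress along trajectory, 0..1

    def step(self, cooperation: float, dt: float) -> tuple[float, float]:
        # Clamp cooperation, then interpolate pace between the floor
        # and the nominal value.
        coop = max(0.0, min(1.0, cooperation))
        pace = self.min_pace + (self.nominal_pace - self.min_pace) * coop
        self.phase = min(1.0, self.phase + pace * dt)
        speech_rate = pace / self.nominal_pace  # e.g., a TTS rate multiplier
        return self.phase, speech_rate


if __name__ == "__main__":
    ctrl = PaceController()
    for coop in (1.0, 0.5, 0.1):  # simulated cooperation readings
        phase, rate = ctrl.step(coop, dt=0.1)
        print(f"coop={coop:.1f} -> phase={phase:.3f}, speech_rate={rate:.2f}")
```

In this sketch, a low cooperation reading slows both the trajectory phase and the speech rate together, which is one simple way to realize the abstract's goal of aligning completion times with varying user cooperation.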