Morpheus: A Neural-driven Animatronic Face with Hybrid Actuation and Diverse Emotion Control


Zongzheng Zhang, Jiawen Yang, Ziqiao Peng, Meng Yang, Jianzhu Ma, Lin Cheng, Huazhe Xu, Hang Zhao, Hao Zhao

Paper ID 80

Session 9. HRI

Poster Session (Day 3): Monday, June 23, 12:30-2:00 PM

Abstract: Previous animatronic faces struggle to express emotions effectively due to both hardware and software limitations. On the hardware side, earlier approaches used either rigid-driven mechanisms, which provide precise control but are difficult to design within constrained spaces, or tendon-driven mechanisms, which are more space-efficient but challenging to control. In contrast, we propose a hybrid actuation approach that combines the best of both worlds. The eyes and mouth, the key areas for emotional expression, are controlled by rigid mechanisms for precise movement, while the nose and cheeks, which convey subtle facial microexpressions, are driven by strings. This design allows us to build a compact yet versatile hardware platform capable of expressing a wide range of emotions. On the algorithmic side, our method introduces a self-modelling network that maps motor actions to facial landmarks, allowing us to automatically establish, through gradient backpropagation, the relationship between blendshape primitives for different facial expressions and the corresponding motor control signals. We then train a neural network that maps speech input to the corresponding blendshape controls. With our method, we can generate distinct emotional expressions (happiness, fear, disgust, and anger) from any given sentence, each with nuanced, emotion-specific control signals, a feature that has not been demonstrated in earlier systems.
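
To make the self-modelling step concrete, below is a minimal PyTorch sketch of how the gradient-backpropagation inversion could look. It is illustrative only: the architecture, the dimensions N_MOTORS and N_LANDMARKS, and the names SelfModel and solve_motor_commands are assumptions, not the paper's actual implementation. The idea is that a differentiable self-model, pretrained to predict facial landmarks from motor commands, is frozen, and the motor commands themselves are optimized so that the predicted landmarks match those of a target blendshape expression.

import torch
import torch.nn as nn

# Hypothetical dimensions for illustration; not the paper's actual values.
N_MOTORS = 25        # number of actuators (rigid + string-driven)
N_LANDMARKS = 68     # 2D facial landmarks observed by a camera

class SelfModel(nn.Module):
    """Differentiable surrogate f: motor commands -> facial landmarks.
    In practice this would be pretrained on (command, observed-landmark) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_MOTORS, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_LANDMARKS * 2),
        )

    def forward(self, u):
        return self.net(u).view(-1, N_LANDMARKS, 2)

def solve_motor_commands(self_model, target_landmarks, steps=500, lr=1e-2):
    """Invert the frozen self-model by gradient descent: find motor commands
    whose predicted landmarks match the landmarks of a blendshape target."""
    u = torch.zeros(1, N_MOTORS, requires_grad=True)  # sigmoid(0) = mid-range commands
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = self_model(torch.sigmoid(u))  # squash into a valid command range [0, 1]
        loss = ((pred - target_landmarks) ** 2).mean()
        loss.backward()  # gradients flow through the frozen self-model into u
        opt.step()
    return torch.sigmoid(u).detach()

# Usage: given landmarks rendered from, e.g., a "happiness" blendshape target.
self_model = SelfModel().eval()
for p in self_model.parameters():
    p.requires_grad_(False)  # freeze the self-model; only the commands are optimized

target = torch.randn(1, N_LANDMARKS, 2)  # placeholder for real blendshape landmarks
commands = solve_motor_commands(self_model, target)
print(commands.shape)  # torch.Size([1, 25])

Because the self-model is differentiable, the same procedure applies to any blendshape target, so emotion-specific motor command sets can be derived automatically rather than hand-tuned for each expression.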