Abstract: Learning human preferences is essential for human-robot interaction, as it enables robots to adapt their behaviors to align with human expectations and goals. However, the inherent uncertainties in both human behavior and robotic systems make preference learning challenging. While probabilistic robotics algorithms offer uncertainty quantification, the integration of human preference uncertainty remains underexplored. To bridge this gap, we introduce the concept of uncertainty unification and propose a novel framework, uncertainty-unified preference learning (UUPL), which enhances Gaussian Process (GP)-based preference learning by unifying human and robot uncertainty. Specifically, UUPL includes a human preference uncertainty model that improves GP posterior mean estimation, and an uncertainty-unified Gaussian Mixture Model (GMM) that enhances GP predictive covariance accuracy. Additionally, we design a user-specific calibration process that personalizes the uncertainty parameters and further improves the user experience. Comprehensive experiments and user studies demonstrate that UUPL achieves state-of-the-art performance in both prediction accuracy and human ratings. An ablation study further validates the effectiveness of the uncertainty-unified covariance and the human preference uncertainty model in UUPL.