Abstract: Humans naturally integrate vision and haptics for robust object perception during manipulation; losing either modality significantly degrades performance. Inspired by this multisensory integration, prior pose estimation research has attempted to combine visual and haptic/tactile feedback. While these works demonstrate improvements in controlled environments or on synthetic datasets, they often underperform vision-only approaches in real-world settings due to poor generalization across diverse grippers and sensor layouts, as well as the sim-to-real gap. Furthermore, they typically estimate the pose independently for each frame, resulting in less coherent tracking over sequences in real-world deployments. To address these limitations, we introduce a novel unified haptic representation that effectively handles multiple gripper embodiments. Building on this representation, we propose a visuo-haptic Transformer-based pose tracker that seamlessly integrates visual and haptic inputs. We validate our framework on our own dataset and the Feelsight dataset, demonstrating significant performance improvements on challenging sequences. Notably, our method achieves superior generalization and robustness across novel embodiments, objects, and sensor types (both taxel-based and vision-based tactile sensors). In real-world experiments, we show that our approach outperforms state-of-the-art visual trackers by a large margin. We further demonstrate that incorporating our real-time tracking results into motion plans enables precise manipulation tasks, underscoring the advantages of visuo-haptic perception. Our model and dataset will be made open-source upon paper acceptance.