Abstract: Standing-up control is crucial for humanoid robots, with the potential for integration into current locomotion and loco-manipulation systems. Existing approaches are either limited to simulations that neglect hardware constraints or rely on predefined, ground-specific motion trajectories, and thus fail to enable standing up from diverse postures in the real world. To bridge this gap, we present HoST (Humanoid Standing-up Control), a reinforcement learning framework that learns standing-up control from scratch, enabling robust sim-to-real transfer across diverse postures. HoST learns posture-adaptive motions by training over diverse simulated terrains with a multi-critic architecture and training curricula. To ensure real-world deployability, we constrain the motion with smoothness regularization and an implicit bound on motion speed, preventing oscillations and abrupt movements on the hardware. After training in simulation, the resulting controllers can be directly deployed on a real Unitree G1 humanoid robot. Our experimental results demonstrate that the controllers achieve smooth, robust, and stable standing-up motions across diverse real-world scenes.