Abstract: Learning effective continuous control policies in high-dimensional systems, including musculoskeletal agents, remains a significant challenge. Over the course of biological evolution, organisms have developed robust mechanisms for overcoming this complexity, enabling them to learn highly sophisticated motor control strategies. What accounts for this robust behavioral flexibility? Modular control via muscle synergies, i.e., coordinated muscle co-contractions, is one putative mechanism that enables organisms to learn muscle control in a simplified and generalizable action space. Drawing inspiration from this evolved motor control strategy, we use a physiologically accurate hand model to investigate whether leveraging a Synergistic Action Representation (SAR) acquired from simpler manipulation tasks improves learning and generalization on more complex tasks. We find that SAR-exploiting policies trained on a complex, 100-object randomized reorientation task significantly outperformed baseline approaches (> 70% vs. < 20% success). Notably, SAR-exploiting policies also generalized zero-shot to thousands of unseen objects with out-of-domain size variations, while policies trained without SAR failed to generalize. SAR likewise enabled significantly improved transfer learning on real-world objects. Finally, using a set of robotic manipulation tasks and a full-body humanoid locomotion task, we establish the generality of SAR on broader high-dimensional control problems, achieving state-of-the-art performance with an order-of-magnitude improvement in sample efficiency. To the best of our knowledge, this investigation is the first to present an end-to-end pipeline for discovering synergies and using this representation to learn high-dimensional continuous control across a wide diversity of tasks.
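To make the core idea concrete, below is a minimal sketch of how a synergy-based action space can be constructed and used. It assumes synergies are extracted with PCA from muscle activations logged while solving simpler tasks, which is one standard approach and not necessarily the paper's exact recipe; all names and dimensions here (`activations`, `n_synergies`, a 39-muscle hand) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: muscle activation vectors logged while policies
# solve simpler manipulation tasks (T timesteps x M muscles).
rng = np.random.default_rng(0)
T, M = 5000, 39  # illustrative sizes; M = number of muscles in the hand model
activations = rng.random((T, M))

# Extract a low-dimensional synergy basis from the logged activations.
# PCA is one common choice for synergy extraction; the paper's pipeline
# may combine it with other steps.
n_synergies = 20
pca = PCA(n_components=n_synergies)
pca.fit(activations)

def synergy_to_muscles(z):
    """Expand a low-dimensional action z (n_synergies,) into full
    muscle activations (M,), clipped to the valid [0, 1] range."""
    a = pca.inverse_transform(z.reshape(1, -1)).ravel()
    return np.clip(a, 0.0, 1.0)

# A downstream RL policy now acts in the smaller synergy space:
z = rng.standard_normal(n_synergies) * 0.1
muscle_command = synergy_to_muscles(z)
print(muscle_command.shape)  # (39,)
```

The key design choice this sketch captures is that the downstream policy emits only low-dimensional synergy coefficients, which a fixed linear basis expands into full muscle commands, shrinking the exploration space relative to direct per-muscle control.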