Abstract: Visuomotor policies learned through imitation learning often struggle to generalize to new visual domains due to the limited diversity of expert demonstrations, and collecting extensive real-world data is prohibitively laborious. To address this challenge, we propose a novel demonstration generation approach leveraging 3D Gaussian Splatting (3DGS), an explicit and interpretable 3D scene representation. Our method reconstructs manipulation scenes with high fidelity and enables automated scene editing, giving rise to novel scene configurations. Starting from a single expert demonstration, it generates diverse data spanning multiple visual domains, including different object poses, object types, camera views, scene appearances, lighting conditions, and robot embodiments. Comprehensive real-world experiments show that our demonstration generation pipeline significantly enhances the generalization of visuomotor policies when confronted with multiple disturbances. Specifically, while policies trained on real-world demonstrations alone achieve an average success rate of less than 10%, our method raises this figure to 85.9% across various task settings and scenarios.