*Equal contribution to supervision of the project.
Adapting motion to new contexts in digital entertainment often demands fast, agile prototyping. State-of-the-art techniques use reinforcement learning policies to simulate the underlying motion in a physics engine. Unfortunately, policies typically fail on unseen tasks, and fine-tuning a policy for every new morphological, environmental, or motion change is too time-consuming. We propose a novel point of view: using policy networks as a representation of motion for physics-based character animation. Our policies are compact, tailored to individual motion tasks, and preserve similarity with nearby tasks. This allows us to view the space of all motions as a manifold of policies where sampling substitutes for training. We obtain a memory-efficient encoding of motion that leverages the characteristics of control policies, such as being generative and robust to small environmental changes. With this perspective, we can sample novel motions by directly manipulating weights and biases through a Diffusion Model. Our newly generated policies can adapt to previously unseen characters, potentially saving time in rapid prototyping scenarios. Our contributions include the introduction of Common Neighbor Policy regularization, which constrains policy similarity during motion imitation training and makes the policies suitable for generative modeling; a Diffusion Model adaptation for diverse morphologies; and an open policy dataset. The results show that we can learn non-linear transformations in the policy space from labeled examples and conditionally generate new policies. In a matter of seconds, we sample a batch of policies for different conditions that achieve motion fidelity metrics comparable to those of their individually trained counterparts.
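To make the core idea concrete, below is a minimal sketch of treating flattened policy weights as data points for a DDPM-style diffusion model. It is an illustrative assumption of how such a pipeline could look in PyTorch, not the authors' implementation: `flatten_policy`, the `Denoiser` architecture, and all hyperparameters are hypothetical choices.

```python
# Minimal sketch (illustrative assumption, not the paper's code):
# diffusion over flattened policy weight vectors.
import torch
import torch.nn as nn

def flatten_policy(policy: nn.Module) -> torch.Tensor:
    """Concatenate all weights and biases of a policy into one vector."""
    return torch.cat([p.detach().flatten() for p in policy.parameters()])

class Denoiser(nn.Module):
    """Predicts the noise added to a flattened policy vector at timestep t."""
    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # t is a (batch, 1) tensor of normalized timesteps in [0, 1]
        return self.net(torch.cat([x, t], dim=-1))

# Standard DDPM noise schedule and training objective
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def diffusion_loss(model: Denoiser, x0: torch.Tensor) -> torch.Tensor:
    """One training step on a batch x0 of flattened policy vectors."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    a = alphas_bar[t].unsqueeze(-1)                    # (b, 1)
    noise = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise        # forward noising
    pred = model(xt, t.float().unsqueeze(-1) / T)      # predict the noise
    return nn.functional.mse_loss(pred, noise)
```

A trained denoiser of this kind can then be sampled with the usual reverse process, and the resulting vector unflattened back into a policy network; conditioning on task or morphology labels, as the abstract describes, would add a condition input to the denoiser.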