We present a zero-shot method for style transfer using diffusion-based generative models. Leveraging a pre-trained ControlNet, our approach relies on a careful interplay of the forward and reverse diffusion processes. Additional controls, introduced through interpolation and guided by gradient descent, provide a means to balance content and style.
However, the basic method ignores the style in the content image.
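To make the interplay above concrete, the following PyTorch sketch shows one way such a loop could be structured: the content latent is noised by the forward process, then denoised in reverse while two control signals are linearly interpolated and the intermediate latent is corrected by gradient descent on a style objective. The helper names (`eps_model`, `style_loss`) and the weights `alpha` and `eta` are illustrative assumptions, not the paper's actual implementation.

```python
import torch

@torch.no_grad()
def forward_noise(z0, noise, alphas_cumprod, t):
    # Forward (noising) process: sample q(z_t | z_0) at timestep t.
    a = alphas_cumprod[t]
    return a.sqrt() * z0 + (1 - a).sqrt() * noise

def reverse_with_guidance(z_t, timesteps, alphas_cumprod,
                          eps_model, c_content, c_style,
                          style_loss, alpha=0.5, eta=0.1):
    """Reverse (denoising) process with interpolated controls and gradient guidance.

    Assumed interfaces (hypothetical, for illustration only):
      eps_model(z, t, control) -> predicted noise (e.g. a ControlNet-conditioned UNet)
      c_content, c_style       -> control tensors of matching shape
      style_loss(z)            -> scalar style objective, differentiable w.r.t. z
    """
    for i, t in enumerate(timesteps):
        # Interpolate the two controls to trade off content against style.
        control = (1 - alpha) * c_content + alpha * c_style

        with torch.no_grad():
            eps = eps_model(z_t, t, control)
            a_t = alphas_cumprod[t]
            a_prev = (alphas_cumprod[timesteps[i + 1]]
                      if i + 1 < len(timesteps) else torch.tensor(1.0))
            # Deterministic DDIM-style update: predict z_0, then step back one timestep.
            z0_pred = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
            z_prev = a_prev.sqrt() * z0_pred + (1 - a_prev).sqrt() * eps

        # Gradient-descent correction on the style objective.
        z_prev = z_prev.detach().requires_grad_(True)
        loss = style_loss(z_prev)
        loss.backward()
        z_t = (z_prev - eta * z_prev.grad).detach()
    return z_t
```

In this sketch, sweeping `alpha` toward 1 or increasing `eta` shifts the result toward the style signal, which is one way such additional controls can compensate for the basic method's neglect of style.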