This is so cool to see. Thank you for sharing!
There’s a way to do this in Auto1111 (sort of):
This feels pretty janky, though. I think you could do it better (and in one shot) in ComfyUI: take the partially generated latent, feed the decoded result to a ControlNet preprocessor node, then pass the resulting ControlNet conditioning plus the original half-finished latent into a new KSampler node. You'd then finish generation (continuing from the original latent) at whatever step you split off.
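The node wiring described above can be sketched as plain data flow. This is purely illustrative: the function names below are hypothetical stand-ins for ComfyUI nodes (in the real graph you'd use KSampler's start/end-step inputs, a preprocessor node, and Apply ControlNet), not an actual API.

```python
def partial_sample(prompt, total_steps, stop_at):
    """Stand-in for a KSampler run that stops early and
    returns the half-finished latent."""
    return {"prompt": prompt, "steps_done": stop_at, "total": total_steps}

def controlnet_preprocess(latent):
    """Stand-in for decoding the partial latent and running a
    ControlNet preprocessor (canny, depth, etc.) on the result."""
    return {"control_hint": f"edges-from-step-{latent['steps_done']}"}

def resume_sample(latent, conditioning, total_steps):
    """Stand-in for a second KSampler that continues from the original
    latent at the split step, now guided by the ControlNet conditioning."""
    return dict(latent,
                steps_done=total_steps,
                control=conditioning["control_hint"])

# Wire the graph: split at step 10 of 20, derive ControlNet guidance
# from the partial result, then finish the original latent with it.
half = partial_sample("a castle at dusk", total_steps=20, stop_at=10)
cond = controlnet_preprocess(half)
final = resume_sample(half, cond, total_steps=20)
```

The key point the sketch shows is that the second sampler receives *both* the ControlNet conditioning and the untouched half-finished latent, so generation resumes rather than restarts.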
Agreed on the Auto1111 UI; I like the idea of ComfyUI but making quick changes + testing rapidly feels like a pain. I always feel like I must be doing something wrong. I do appreciate how easy it is to replicate a workflow, though.
What are you running SDXL in? I tried it in ComfyUI yesterday and it seems really powerful, but iterating on images always seems to take a long time. I haven't tried it in SD.Next or Auto1111 yet.
LOL. I didn’t immediately get this one. Well done.
I think there are tearable sides on some.
This is unbelievably detailed.