Hi there, I’ve seen a few videos on YouTube showing off ComfyUI, and it looks incredibly powerful for fine-tuning the outputs of Stable Diffusion. It also looks dauntingly complicated to learn how to use effectively.
For those of you who have played around with it: do you think it gives better results than A1111? Is it really better for fine-tuning? How steep was the learning curve for you?
I’m trying to figure out whether I want to put in the hours to learn how to use it. If it improves my ability to get exactly the images I want, I’ll go for it. If it just does what A1111 does, dressed up differently, I’ll sit it out :)
I am no expert but have been experimenting with ComfyUI: https://lemmy.zip/post/510712
ComfyUI seems incredibly powerful and efficient, much faster than Automatic1111. But I have yet to get good results with ControlNet: I can make it work, but quality seems to get lost in ComfyUI and I haven’t figured out why. I expect it’s operator error.
If the node-based interface of ComfyUI is intimidating, it’s easy to install ComfyBox, which lets you toggle between a conventional GUI and the node interface for ComfyUI: https://github.com/space-nuko/ComfyBox/blob/master/static/screenshot.png
Once I iron out the kinks, I expect to switch to ComfyUI as my daily-driver Stable Diffusion interface, as it is so much faster, more resource-efficient, and more configurable.
I’m no stranger to node-based workflows, but I have struggled to see how nodes are beneficial for Stable Diffusion. It just seems like a ton of extra steps to lay down something like 10 nodes just to make a simple image, when other interfaces let me do the same thing much more easily.
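(For anyone curious what that node count actually covers, here is a rough sketch of the minimal text-to-image graph, expressed as the API-format JSON that ComfyUI accepts at its /prompt endpoint. The node class names are the stock ones, but the checkpoint filename, prompts, and sampler settings here are just placeholders I made up, so treat it as an illustration rather than a ready-made workflow.)

```python
import json
import urllib.request

# Minimal ComfyUI text-to-image graph in API format: each key is a node id,
# each value names a stock node class and wires its inputs to other nodes
# as ["node_id", output_index]. Roughly seven nodes for one simple image.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},  # placeholder model
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a photo of a cat", "clip": ["1", 1]}},    # positive prompt
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}}, # negative prompt
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "comfy_test"}},
}

# Queue the graph on a locally running ComfyUI instance (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```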
When you say it’s faster, are you referring to the workflow or to the actual generation? Do you see any other benefits from Comfy?