I want to be able to feed in one or more images and have AI spit out a detailed prompt that would produce an image like that, and then use those prompts myself. It would be cool if it could self-test before handing back the results, trying to find the closest seed and prompt it can to regenerate the image.
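A rough sketch of what that self-testing step could look like, assuming the official `openai` Python client with `gpt-4o` as the describer and `dall-e-3` as the generator; `image_similarity()` is a hypothetical stand-in for whatever comparison you'd plug in (CLIP embeddings or similar), and since the hosted generator doesn't expose a seed parameter, this version only searches over prompts:

```python
# Sketch: describe an image, regenerate from the description, and keep
# the prompt whose result scores closest to the original.
import base64
from openai import OpenAI

client = OpenAI()

def describe(image_path: str) -> str:
    """Ask a vision model for a detailed generation prompt."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write a detailed image-generation prompt that would reproduce this image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

def generate(prompt: str) -> str:
    """Generate an image from a prompt and return its URL."""
    img = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    return img.data[0].url

def image_similarity(original_path: str, candidate_url: str) -> float:
    """Hypothetical: score how close the candidate is to the original."""
    raise NotImplementedError

def best_prompt(image_path: str, attempts: int = 3) -> str:
    """Self-test loop: try a few prompts, keep the closest match."""
    best, best_score = None, -1.0
    for _ in range(attempts):
        prompt = describe(image_path)
        score = image_similarity(image_path, generate(prompt))
        if score > best_score:
            best, best_score = prompt, score
    return best
```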
GPT4-Vision can do it, sort of. It doesn't have a particularly great understanding of what's going on in a scene, but it can be used for some interesting stuff. I posted a link a few weeks back to an example from DALL-E Party, which hooks up an image generator and an image describer in a loop: https://kbin.social/m/[email protected]/t/661021/Paperclip-Maximizer-Dall-E-3-GPT4-Vision-loop-see-comment
merde posted a link in the comments there to the goatpocalypse example – https://dalle.party/?party=vCwYT8Em – which is even more fun.
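If you want to play with that generator/describer loop locally, here's a minimal sketch under the same assumptions as above, reusing `client` and `generate()` from the earlier snippet: each generated image gets described, and the description becomes the next prompt.

```python
def describe_url(image_url: str) -> str:
    """Describe a remote image by passing its URL to the vision model."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image as an image-generation prompt."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content

def party_loop(seed_prompt: str, rounds: int = 5) -> list[str]:
    """Generator -> describer -> generator ... for a few rounds."""
    prompts = [seed_prompt]
    for _ in range(rounds):
        image_url = generate(prompts[-1])   # image generator
        prompts.append(describe_url(image_url))  # image describer
    return prompts
```

The fun part is watching how the prompts drift from round to round, which is exactly what the goatpocalypse thread shows.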