- cross-posted to:
- [email protected]
On Wednesday, OpenAI announced DALL-E 3, the latest version of its AI image synthesis model that features full integration with ChatGPT. DALL-E 3 renders images by closely following complex descriptions and handling in-image text generation (such as labels and signs), which challenged earlier models. Currently in research preview, it will be available to ChatGPT Plus and Enterprise customers in early October.
Like its predecessor, DALL-E 3 is a text-to-image generator that creates novel images based on written descriptions called prompts. Although OpenAI released no technical details about DALL-E 3, the AI model at the heart of previous versions of DALL-E was trained on millions of images created by human artists and photographers, some of them licensed from stock websites like Shutterstock. It’s likely DALL-E 3 follows the same formula, but with new training techniques and more computational training time.
Judging by the samples provided by OpenAI on its promotional blog, DALL-E 3 appears to be a radically more capable image synthesis model than anything else available in terms of following prompts. While OpenAI’s examples have been cherry-picked for their effectiveness, they appear to follow the prompt instructions faithfully and convincingly render objects with minimal deformations. Compared to DALL-E 2, OpenAI says that DALL-E 3 refines small details like hands more effectively, creating engaging images by default with “no hacks or prompt engineering required.”
Nah, that Guardians of the Galaxy art is exactly what I’m talking about. It makes basic mistakes even a child could point out and looks more like a knockoff. And refining it is just rolling the dice to get a better result, whereas with an artist you can actually give feedback they can understand.
The game assets look a little better, but if you look carefully you’ll notice that they don’t tile correctly. It’s 90% there, but the last 10% is the hardest part, and it matters, especially for large projects and not just single static images. Not to mention they look generic as fuck. You’re not going to get the next Hollow Knight or Darkest Dungeon with an amazing original style from AI; you’re only going to get existing styles mashed together. The more specific the vision for the art style, the harder it will be to generate it.
Also, the idea of a TikTok feed of AI-generated content is exactly why I hate AI art. Sure, go ahead and use it to help existing artists generate cheap assets that would otherwise be random brush strokes. But replacing them? The idea that AI-generated slop will have anything close to the quality and meaning of even cheap art is ridiculous. Why would anyone want that when they could have actual art made by real people, more of which exists today than anyone could go through in their entire life?