cross-posted from: https://hexbear.net/post/3613920
Get fuuuuuuuuuuuuuucked
“This isn’t going to stop,” Allen told the New York Times. “Art is dead, dude. It’s over. A.I. won. Humans lost.”
“But I still want to get paid for it.”
This article is annoyingly one-sided. The tool performs an act of synthesis just like an art student looking at a bunch of art might. Sure, like an art student, it could copy someone’s style or even an exact image if asked (though those asking may be better served by torrent sites). But that’s not how most people use these tools. People create novel things with these tools and should be protected under the law.
So what you’re saying is that the AI is the artist, not the prompter. The AI is performing the labor of creating the work, at the request of the prompter, like the hypothetical art student you mentioned did, and the prompter is not the creator any more than I would be if I kindly asked an art student to paint me a picture.
In which case, the AI is the thing that gets the authorial credit, not the prompter. And since an AI is not a person, anything it authors cannot be subject to copyright, just like when that monkey took a selfie.
It should be as copyrightable as the prompt. If the prompt is something super generic, then there’s no real work done by the human. If the prompt is as long and unique as other copyrightable writing (which includes short works like poems) then why shouldn’t it be copyrightable?
Because it wasn’t created by a human being.
If I ask an artist to create a work, the artist owns authorship of that work, no matter how long I spent discussing the particulars of the work with them. Hours? Days? Months? Doesn’t matter. They may choose to share or reassign some or all of the rights that go with that, but initial authorship resides with them. Why should that change if that discussion is happening not with an artist, but with an AI?
The only change is that, not being a human being, an AI cannot hold copyright. Which means a work created by an AI is not copyrightable. The prompter owns the prompt, not the final result.
You’re assigning agency to the program, which seems wrong to me. I think of AI like an advanced Photoshop filter, not like a rudimentary person. It’s an artistic tool that artists can use to create art. It does not in and of itself create art any more than Photoshop creates graphics or a synthesizer creates music.
How do the actions of the prompter differ from the actions of someone who commissions an artist to create a work of art?
I don’t think commissioning a work is ever as hands-on as using a program to create a work.
I suspect the hangup here is that people assume that using these tools requires no creative effort. And to be fair, that can be true. I could go into Dall-E, spend three seconds typing “fantasy temple with sun rays”, and get something that might look passable for, like, a powerpoint presentation. In that case, I would not claim to have done any artistic work. Similarly, when I was a kid I used to scribble in paint programs, and they were already advanced enough that the result of a couple minutes of paint-bucketing with gradients might look similar to something that would have required serious work and artistic vision 20 years prior.
In both cases, these worst-case examples should not be taken as an indictment of the medium or the tools. The tools are only as good as the artist using them.
If I spend many hours experimenting with prompts, systematically manipulating them to create something that matches my vision, then the artistic work is in the imagination. MOST artistic work is in the imagination. That is the difference between an artist and a craftsman. It’s also why photography is art, and not just “telling the camera to capture light”. AI is changing the craft, but it is not changing the art.
Similarly, if I write music in a MIDI app (or whatever the modern equivalent is; my knowledge of music production is frozen in the 90s), the computer will play it. I never touch an instrument, I never create any sound. The art is not the sound; it is the composition.
I think the real problem is economic, and has very little to do with art. Artists need to get paid, and we have a system that kinda-sorta allows that to happen (sometimes) within the confines of a system that absolutely does not value artists or art, and never has. That’s a real problem, but it is only tangentially related to art.
should a camera also own the copyright to the pictures it takes? (I seriously hate photographers)
Ah, but there is a fundamental difference there. A photographer takes a picture, they do not tell the camera to take a picture for them.
It is the difference between speech and action.
Okay, so the prompt can be that. But we’re talking about the output, no? My hello-world source code is copyrighted, but the output “hello world” on your machine isn’t really, no?
Does it require any creative thought for the user to get it to write “hello world”? No. Literally everyone launching the app gets that output, so obviously they didn’t create it.
A better example would be a text editor. I can write a poem in Notepad, but nobody would claim that “Notepad wrote the poem”.
It’s wild to me how much people anthropomorphize AI while simultaneously trying to delegitimize it.
Lol, no. A student still incorporates their own personality in their work. Art by humans always communicates something. LLMs can’t communicate.
I thought it was “the tool” that “performs an act of synthesis”. Do people create things, or does the LLM?
No no, he created the prompt. That’s the artistic value /s
The machine learning model creates the picture, and it does have a “style”. That “style” has been at least partially removed from most commercial models, but it still exists.
It doesn’t have a “style”. It stores a statistical correlation of art styles.
Different models will have been trained on different ratios of art styles; one may have been trained on a large number of oil paintings and another on pencil sketches, and those models would produce different outputs for the same inputs.
You’re not stating anything different than my “correlation” statement.
It’s deterministic. I can exactly duplicate your “art” by typing in the same sentence. You’re not creative, you’re just playing with toys.
Try it out and show us the result.
Ok, here’s an image I generated with a random seed:
Here’s the UI showing it as a result:
Then I reused the exact same input parameters. Here you can see it in the middle of generating the image:
Then it finished, and you can see it generated the exact same image:
Here’s the second image, so you can see for yourself compared to the first:
You can download Flux Dev, the model I used for this image, and input the exact same parameters yourself, and you’ll get the same image.
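For anyone who wants to try it themselves, here’s a minimal sketch of that kind of reproduction using the Hugging Face diffusers library. The model ID is the public Flux Dev checkpoint, but the prompt, seed, and sampler settings below are placeholders, not the exact parameters from the images above:

```python
# Minimal sketch: reproduce a Flux Dev generation by fixing every input.
# Prompt, seed, and sampler settings are placeholders, not the ones used above.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "fantasy temple with sun rays"  # placeholder prompt
seed = 12345                             # placeholder seed

image = pipe(
    prompt,
    generator=torch.Generator("cpu").manual_seed(seed),  # fixed seed -> fixed starting noise
    num_inference_steps=28,
    guidance_scale=3.5,
    height=1024,
    width=1024,
).images[0]
image.save("flux_repro.png")
```

Run it twice with the same seed and you get the same file; change only the seed and you get a different image.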
But you’re using the same seed. Isn’t the default behaviour to use a random seed?
And obviously, you’re using the same model for each of these, while these people would probably have a custom trained model that they use which you have no access to.
That’s not really proof that you can replicate their art by typing the same sentence like you claimed.
If you didn’t understand that I clearly meant with the same model and seed from the context of talking about it being deterministic, that’s a you problem.
Bro, it’s you who said to type the same sentence. Why are you saying the wrong thing and then trying to change your claims later?
The problem is that you couldn’t be bothered to try and say the correct thing, and then have the gall to blame other people for your own mistake.
And in what kind of context does using the same seed even make sense? Do people determine the seed first before creating their prompt? This is a genuine question, btw. I’ve always thought that people generally use a random seed when generating images until they find one they like, then keep that seed and modify the prompt to fine-tune it.
In the context that I’m explaining that the thing is deterministic. Do you disagree? Because that was my point. Diffusion models are deterministic.
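And to your workflow question: yes, that’s basically how it goes. Roughly, with the diffusers library (the model name, prompt, and seeds here are illustrative, not anyone’s real settings):

```python
# Rough sketch of the "roll random seeds, then lock the one you like" workflow.
# Model name, prompt, and seeds are illustrative only.
import random
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "fantasy temple with sun rays"

# Exploration: try a few random seeds and save each candidate.
for _ in range(4):
    seed = random.randrange(2**32)
    image = pipe(
        prompt, generator=torch.Generator("cpu").manual_seed(seed)
    ).images[0]
    image.save(f"candidate_{seed}.png")

# Refinement: keep the seed of the candidate you liked fixed,
# and only tweak the prompt from here on.
chosen_seed = 1234567890  # whichever candidate looked best
refined = pipe(
    prompt + ", golden hour, volumetric light",
    generator=torch.Generator("cpu").manual_seed(chosen_seed),
).images[0]
refined.save("refined.png")
```

The seed just fixes the starting noise; the iteration happens in what you do with the prompt once the seed is locked.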
That’s about as deterministic as tracing someone’s artwork, really.
If you have to use a different creation process than how someone would normally create the artwork, whether traditionally or with AI, then it’s not really a criticism of that method in the first place.
I was seriously thinking you’d found a way to get similar enough results to another person’s AI output just from knowing the prompt. That would actually prove that AI artwork requires zero effort to reproduce.
Edit: To expand on that first paragraph: yes, AI is deterministic in the same way a drawing tablet and app are deterministic. That is, if you copy exactly what another person does with the tool, it will produce the same result.
That’s actually fundamentally untrue, independent of your opinion. I promise that when people generate an image with a phrase, the result will be different; it is not deterministic (not in the way you mean).
You and I cannot type the same prompt into the same generative AI model and receive the same result; no system works with that level of specificity, by design.
They pretty much all use some form of entropy / noise.
This can actually be true, depending on how the system is configured.
For instance, if you and someone else use the same locally-hosted Stable Diffusion UI, enter the exact same prompt, and use the same seed, number of steps, and dimensions, you’ll get an identical result.
The only reason outputs differ between generations of the same prompt is the noise from the seed, which is normally randomized between generations. It can easily be set to the same value as someone else’s generation and will yield an identical result, unless the prompt is changed.
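If anyone wants to check this for themselves, here’s a minimal sketch using the diffusers library: generate twice with the exact same prompt, seed, steps, and dimensions, then compare the pixels. The model ID and parameters are placeholders, and this assumes the same hardware and library versions on both runs:

```python
# Determinism check: same prompt + same seed + same steps/dimensions twice,
# then compare raw pixel data. Model ID and parameters are placeholders.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate(seed: int):
    return pipe(
        "fantasy temple with sun rays",
        generator=torch.Generator("cpu").manual_seed(seed),
        num_inference_steps=30,
        height=512,
        width=512,
    ).images[0]

a = generate(42)
b = generate(42)
c = generate(43)

# Same inputs (including seed) -> identical pixels; change the seed -> different image.
print(np.array_equal(np.array(a), np.array(b)))  # expected: True
print(np.array_equal(np.array(a), np.array(c)))  # expected: False
```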
It’s literally as true as it can possibly be. Given the same inputs (including the same seed), a diffusion model will produce exactly the same output every time. It’s deterministic in the most fundamental meaning of the word. That’s why when you share an image on CivitAI people like it when you share your input parameters, so they can duplicate the image. I have recreated the exact same images using models from there.
Humans are not deterministic (at least as far as we know). If I give two people exactly the same prompt, and exactly the same “training data” (show them the same references, I guess), they will never produce the same output. Even if I give the same person the same prompt, they won’t be able to reproduce the same image again.
I do actually believe that everything, including human behavior, is deterministic. I also believe there is nothing special about human consciousness or creation, tbh.