I really hate the label AI. They’re data models, not intelligence - artificial or otherwise. It’s PAI. Pseudo Artificial Intelligence, which we’ve had since the 80s.
The thing is that these data models are, in the end, fed to algorithms to produce output. That being the case, it's a mathematical certainty that the process can be reversed and the output shown to come from such an algorithm. Watermark or not, if an algorithm produces the results, then you can, in principle, deduce the algorithm from a large enough set of its results.
It wouldn't be able to meaningfully distinguish 4'33" from plain silence, though. Nor could it determine that a flat white image wasn't made by an algorithm.
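The flat-white-image case can be sketched in a few lines: here are two hypothetical processes, one "generative" and one "handmade", whose outputs are byte-identical, so no detector could attribute the result to either one from the content alone. The function names are made up for illustration.

```python
# Sketch (hypothetical): two different processes yielding identical output.
# If the outputs are byte-for-byte equal, provenance is unrecoverable
# from the content itself; only external evidence could distinguish them.

def algorithmic_white(width: int, height: int) -> bytes:
    # "Generative" process: computes each pixel programmatically.
    return bytes(255 for _ in range(width * height))

def handmade_white(width: int, height: int) -> bytes:
    # "Human" process: a literal buffer of white pixels.
    return b"\xff" * (width * height)

a = algorithmic_white(4, 4)
b = handmade_white(4, 4)
assert a == b  # identical bytes: no detector can tell which process made them
```

The same argument applies to 4'33" versus silence: a buffer of zero samples carries no trace of how it was produced.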
I think what we're really demonstrating in all this is just how algorithmically human beings already think, something psychology has been pointing out for far longer.