It’s more a “yeah, but…” than a refutation.
A less than infinite number of simians have already done it once.
And how likely is it that it’ll be done again identically by a finite set of simians?
If the monkeys’ probability distribution function can be transformed to a uniform distribution by a continuous function, the outcomes are equivalent enough for this exercise. (There are probably some discontinuous functions that’d also work.) So, unless there’s some genetic weirdness in monkeys that prevents them from ever hitting certain keys, they’re adequate RNG engines. But at that point, you’re really tweaking the assumptions based on how realistically you think monkeys are portrayed in the thought experiment.
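One concrete way to de-bias a non-uniform source is the classic von Neumann extractor. A quick Python sketch, with a made-up two-key "monkey" for illustration (the key names and bias are assumptions, not from the thread):

```python
import random

def biased_key(p_a=0.7):
    """A hypothetical 'monkey' that hits 'a' with probability p_a, else 'b'."""
    return 'a' if random.random() < p_a else 'b'

def uniform_bit(source):
    """Von Neumann extractor: draw pairs and keep only mismatches.
    P(ab) == P(ba) regardless of the bias, so the output bit is unbiased,
    as long as neither outcome has zero probability."""
    while True:
        x, y = source(), source()
        if x != y:
            return 0 if x == 'a' else 1

bits = [uniform_bit(biased_key) for _ in range(10_000)]
```

The "nonzero probability" caveat is exactly the "genetic weirdness" case above: if a monkey can never hit some key, no transform recovers those outcomes.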
And I don’t believe “quantumly random” is a necessary condition here.
Once you factor in the infinite number of monkeys, every novel in existence will not only be written, it will be written an infinite number of times.
You don’t need an infinite number of monkeys to ensure that. The cardinality of an infinite collection of 2-tuples (monkey, char) is the same as the cardinality of an infinite sequence of characters, just as the cardinality of the rational numbers is the same as the cardinality of the integers.
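The rationals-vs-integers trick above is just diagonal enumeration, and it applies directly to (monkey, keystroke) pairs. A small sketch using the standard Cantor pairing function (the variable names are mine):

```python
def cantor_pair(m, k):
    """Bijection N x N -> N: walk the (monkey, keystroke) grid along
    anti-diagonals, the same enumeration used to count the rationals."""
    return (m + k) * (m + k + 1) // 2 + k

# Every (monkey, keystroke) pair lands at a unique position in one sequence:
positions = {cantor_pair(m, k) for m in range(100) for k in range(100)}
assert len(positions) == 100 * 100  # no collisions: the map is injective
```

So an infinite roster of monkeys each typing forever flattens into one countably infinite character sequence, and vice versa.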
And in a countably infinite sequence of uniformly random characters, there is no assurance that any particular finite sequence will occur only a finite number of times.
The idea is that, given an infinite, truly random output of text, by the nature of infinity the text of Shakespeare will eventually be output in its entirety.
Only for a certain kind of randomness. For example, it’s possible to construct a random process that at each step emits a uniformly distributed character, but which also includes a filter that blocks the emission of the string “Falstaff” if it occurs. Such a process cannot ever produce the complete works of Shakespeare, since the complete works include that string, though it will still contain (for example) every lost work of Aristotle, as well as an infinite number of false and corrupted versions of those works.
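To make that filter concrete, here's a rough Python sketch of such a process: each step draws a uniform character but re-rolls any draw that would complete the forbidden string (so the per-step distribution is only conditionally uniform at censored positions, matching the "filter" described above):

```python
import random
import string

ALPHABET = string.ascii_letters + " "

def censored_stream(n, forbidden="Falstaff"):
    """Emit n characters, each drawn uniformly from ALPHABET, but re-roll
    whenever the next character would complete the forbidden string.
    Every occurrence of the string would end on some appended character,
    so checking the tail at each step blocks all occurrences."""
    out = []
    for _ in range(n):
        c = random.choice(ALPHABET)
        while "".join(out[-(len(forbidden) - 1):]) + c == forbidden:
            c = random.choice(ALPHABET)
        out.append(c)
    return "".join(out)

text = censored_stream(100_000)
```

Run forever, this stream stays "random-looking" yet provably never contains "Falstaff", hence never the complete works.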
But yeah, an unconstrained, uniformly distributed, countably infinite random sequence of printable English characters and whitespace cannot be proven not to contain the complete works of Shakespeare, or any other finite sequence. I believe it’s also impossible to exclude any countably infinite sequence, but I might be wrong on that part, since my mathematics education happened a very long time ago.
You know it? That’s nice. A lot of people think they know a lot of things that aren’t really true.
Now prove it.
Which is not what the common saying said.
So the researchers didn’t refute the assumption “given an infinite amount of time,” and instead chose to address the long finite-time case, which is fundamentally different.
An ancestor of mine wrote a memoir of growing up in an Old West mining town. He saw one gunfight. In the early morning, a man saw the front door of his house open and another man walk out. Not happy to find that another gentleman’s bacon had been in his grill, he demanded satisfaction. That led to an impromptu duel which the offended husband won. My ancestor was walking to school when it all went down.
That was probably an exceptional situation, since the town in question was notoriously violent and corrupt.
(Shot himself in the shin, so it wasn’t a suicide attempt)
Might have been, sounds like he’s a shin-for-brains.
This is a privacy intrusion that should be banned nationally.
And some subreddits have fascist mods who arbitrarily ban anyone who isn’t alt-right or worse.
Interoperability is a big job, but the extent to which it matters varies widely according to the use case. There are layers of standards atop other standards, some new, some near deprecation. There are some extremely large and complex datasets that need a shit-ton of metadata to decipher or even extract. Some more modern dataset standards have that metadata baked into the file, but even then there are corner cases. And the standards for zero-trust security enclaves, discoverability, non-repudiation, attribution, multidimensional queries, notification and alerting, and pub/sub are all relatively new, so we occasionally encounter operational situations that the standards authors didn’t anticipate.
TripAdvisor has better content. Too many Google reviews give a business 1 star because the review author was too stupid to check working hours, or has some incredibly rare digestive condition that they didn’t bother to communicate to the eatery before ordering. Or they expect their Basque waiter to speak fluent Latvian, or to accommodate a walk-in party of 20.
Isn’t yelp a pretty easily replaceable thing?
Yelp is at this stage a completely worthless thing. All they ever were was an aggregator of semi-literate reviews, and a shakedown racket against businesses that pissed off some Karen.
Yeah, just like the thousands or millions of failed IT projects. AI is just a new weapon you can use to shoot yourself in the foot.
is all but guaranteed to be possible
It’s more correct to say it “is not provably impossible.”
Someone, somewhere along the line, almost certainly coded rate(2025) = 2*rate(2024). And someone approved that going into production.
If they aren’t liable for what their product does, who is?
The users who claim it’s fit for the purpose they are using it for. Now if the manufacturers themselves are making dodgy claims, that should stick to them too.
If you have infinite time, you don’t also need infinite monkeys.