

Art belongs to its audience.
People have a right to culture.


Like calling all e-mail “spam.”


Is there a point explaining what the N in NP-Complete means, when you’re just gonna ignore two-thirds of a much simpler comment?
If you demand determinism, it’s just matrix algebra. Randomness is optional. It makes them work better. They run on your normal-ass computer, a deterministic Turing machine.
I categorically do not claim determinism is necessary for consciousness or intelligence. I ask you, again: are you deterministic?
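The "randomness is optional" point is easy to make concrete. A toy sketch (made-up logits, not any real model's API): a forward pass ends in a list of scores per token, and whether you roll dice on them is a decoding choice, not a property of the math.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale by temperature, then normalize. Pure, deterministic arithmetic.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token(logits, temperature=0.0, rng=None):
    # Temperature 0 -> pure argmax: same logits in, same token out, every time.
    if temperature == 0.0:
        return max(range(len(logits)), key=logits.__getitem__)
    # Randomness only enters here, because somebody asked for it.
    probs = softmax(logits, temperature)
    return (rng or random).choices(range(len(logits)), weights=probs, k=1)[0]

logits = [1.2, 3.4, 0.5, 2.9]  # hypothetical scores for four tokens
assert all(pick_token(logits) == 1 for _ in range(100))  # never varies
```

Same matrices, same answer, forever, until you opt into sampling by setting a nonzero temperature.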


Argumentum ad webster is shite philosophy. Only an explanation of consciousness in terms of unconscious events could explain consciousness.
LLMs could obviously be deterministic - they add randomness because it’s useful. Matrix algebra is not intrinsically stochastic.
What other intelligent entity can you name, that’s purely deterministic? Why is that a precondition? Why is it even relevant?


Okay. So what’s the difference between a model of thinking and literally doing it?
You can say it’s different from how people do it. But a calculator doesn’t multiply the way students do. In mathematics and Turing machines, any process that gets the right answer is the same.
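For instance, here's multiplication by doubling and halving (so-called Russian peasant multiplication) - a procedure that looks nothing like what students are taught, and agrees with the answer key anyway:

```python
def peasant_multiply(a, b):
    # Halve b, double a, and add a whenever b is odd.
    # No digit-by-digit carrying anywhere in sight.
    total = 0
    while b > 0:
        if b & 1:
            total += a
        a <<= 1
        b >>= 1
    return total

assert peasant_multiply(37, 49) == 37 * 49  # same answer, alien method
```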


Right, because nothing important in life is ambiguous or approximate.


Does that razor let you say anything at all about intelligence or consciousness, given that neither has a rigid, formal, or universal definition?
If the metric is ‘see, it does the thing,’ then a model which demonstrates thought would not be pretending to think.


Fuck no. It is only because of the Turing test that we can say they’re not conscious. You get someone questioning a bot and a person at the same time, they’re gonna figure out who’s who in short order. See: how many Rs in strawberry, name states without an E, should I walk to the car wash.
If a program was indistinguishable from a person, what basis would we have to say the person is intelligent but the program is not?
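Those tripwire questions aren't judgment calls - they have mechanical ground truths a human interrogator can check in their head. A sketch (using a handful of example state names, not the full list):

```python
# The famous stumble: models have confidently answered "two".
assert "strawberry".count("r") == 3

# "Name states without an E" is a membership test, not an opinion.
sample_states = ("Texas", "Ohio", "Utah", "Kansas", "Tennessee", "Maine")
no_e = [s for s in sample_states if "e" not in s.lower()]
assert no_e == ["Ohio", "Utah", "Kansas"]
```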


Any woman can make a whole new consciousness all by herself, with just a little help from a friend.


… and this wasn’t made by accident, it was deliberately engineered to develop emergent behavior. Quite a lot of money has been spent hiring a variety of experts to make it do this thing.
Hasn’t worked. Almost certainly will never work, with this particular kind of network. But we would not have known that, just by looking at diagrams and going ‘naaahhh.’


Does a calculator simulate math?


Careful down that road. Thought is a process, and we don’t understand it well enough to explain it. So we cannot confidently declare it couldn’t happen by tumbling text through layers of fake neurons.
LLMs definitely aren’t conscious, because they’re dumb as hell. But we had to check. When GPT-2 was novel and closely guarded, we had no idea how well backpropagation could abstract all text ever published - and pessimists were mostly pushing Chinese Room nonsense. We have to bully that denialist thought experiment off the internet. It starts from a demonstrably intelligent subject - as real to you as I am now - then interrogates some unrelated interchangeable hardware. As if the conversations with your short-range pen-pal were not real unless the guy in the box knows why he’s blindly following instructions. It’s p-zombie dualism, except instead of a soul, you need Steve to pay attention.
Only an explanation in terms of unconscious events could explain consciousness.


It costs money. It doesn’t lose money. We spend money on it, so it fucking works.
It costs less and works more if we all spend a little, than if your ass had to shop for it yourself.


In undue fairness, there is a difference between turning text files into a chatbot, and exfiltrating that chatbot. One is transformative, and the other is making a megaphone out of some string, a squirrel, and a megaphone.
But if I don’t give a shit about companies doing math on Disney DVDs I’m not about to give a shit about them hoarding their big pile of numbers. I’m jazzed when source code leaks for things written by people.


The fuck are people downvoting for? 8 GB and no CUDA is sufficient for a variety of LLMs. That comparison’s from a year and a half ago, which is forever in this industry, but it’s not like small models got worse.
This mildly terrible website shows Ministral 3B benchmarking above state-of-the-art DeepSeek R1 32B from ten months prior. And also above the 72B version of Qwen 2.5, whose 3B version was top-of-the-list for the ItsFoss guy.


A Raspberry Pi can run local models. You don’t need 64 gigs and a 5090.


A customer paying a recurring subscription just to do their job.
Local models will win. They’re half-assed, but the big boys only provide fractionally more ass. LLMs will become just another tool you can call on when you’d rather read code than write it.
Who could possibly give a shit?