• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: July 4th, 2023

  • And some of us live in the US, which has the highest incarceration rate in the world, is built on genocide of the indigenous (still an ongoing problem) and on slavery (the prison labor loophole still exists), and is currently funding and supporting a genocide against the Palestinian people. You can repeat the word “cosplay” as many times as you want; it doesn’t suddenly make your world real and everyone else’s not.

    My point about you “living in anecdote” is that you’re playing the internet-trope “I was X and I understand it better than you” card, and as far as I can tell, you have yet to even name what this mystery country is, despite being directly asked by someone. Meanwhile, you’re pushing garden-variety “vote blue no matter who” talking points and showing repeated ignorance of what kind of person Biden is compared to Trump and what the US is actually like.

    You are not “way to the left of Biden” in actual substance. You are enabling genocide by framing one of its two perpetrators as the lesser evil. You call others cosplayers, but it’s you who is treating a political label purely as a badge you put on yourself, rather than something that has to be backed up by, you know, actually aligning with it.


  • Trump was already in office for 4 years though. It’s not some big mystery how he would act as president. Meet the new boss, same as the old boss. The nature of US fascism is not identical to that of every other country, but you’re just ignoring history if you think the US has never seriously opposed communism internally. Like, COINTELPRO for starters? Come on.

    It just comes across to me like you’re inventing an arbitrary goalpost for fascism so that you can say the US isn’t there yet and then tell people to vote for the other guy. With a helping of vague “I lived under anecdote” to go with it. Like, what is with this language of calling people cosplayers? Where exactly do you think US citizens live, not in the US?

    I’m genuinely confused as to what your politics are supposed to be.



  • That’s an interesting take on it, and I think it sort of highlights part of where I take issue. Since it has no world model (at least, not one that researchers can yet discern substantively, anyway) and no adaptive capability (without purposeful fine-tuning of its output by machine learning engineers), it is sort of a closed system. And within that, it is locked into its limitations and biases, which are derived from the material it was trained on and from the humans who consciously fine-tuned it toward one “factual” view of the world or another. Human beings work on probability in a way too, but we also learn continuously and are able to exchange between internal and external, between us and our environment, between us and other human beings, and in doing so adapt to our surroundings. Perhaps more importantly in some contexts, we’re able to build on what came before (which is where science, in spite of its institutional flaws at times, gets so much of its strength).

    So far, LLMs operate sort of like a human whose short-term memory is failing to integrate things into long-term memory, except in their case it’s by design. Which presents a problem for getting them to be useful beyond a specific point in time of cultural or historical relevance. As an example to try to illustrate what I mean: suppose we’re back in the time when it was commonly thought the earth was flat, and we construct an LLM with a world model based on that. Then the consensus changes. Now we have to either train a whole new LLM (and LLM training is expensive and takes time, at least so far) or somehow go in and change its biases. Otherwise, the LLM just sits there in its static world model, continually reinforcing the status quo belief for people.

    OTOH, supposing we could get to a point where an LLM can learn continuously, now it has to contend with all the stuff being thrown at it and the biases contained within. Then you can run into the Tay problem, where it may learn all kinds of stuff you didn’t intend: https://en.wikipedia.org/wiki/Tay_(chatbot)

    So I think there are a couple of important angles to this. One is the purely technical endeavor of seeing how far we can push the capability of AI (which I am not opposed to inherently; I’ve been following and using generative AI for over a year now as it’s become more of a big thing). And then there is the culture/power/utility angle, where we’re talking about what kind of impact it has on society, what kind of impact we think it should have, and so on. The second one is where things get hairy for me fast, especially since I live in the US and can easily imagine such a powerful mode of influence being used to further manipulate people. Or, on the “incompetence” side of malice and incompetence, poorly regulated businesses simply being irresponsible with the technology. Like Google’s recent stuff with AI search result summaries giving hallucinations. Or like what happened with the Replika chatbot service in early 2023, where they filtered it heavily out of nowhere, claiming it was for people’s “safety,” and in so doing caused mental health damage to people who were relying on it for emotional support. And mind you, in that case the service had actively designed and advertised it as being for that, so it wasn’t like people were using it in an unexpected way. The company was just two-faced and thoughtless throughout the whole affair.


  • It never ceases to amaze me how much effort is being put into shoehorning a probability machine into being a deterministic fact-lookup assistant. The word “reliable” seems like a bit of a misnomer here. I guess only in the sense of reliable meaning “yielding the same or compatible results in different clinical experiments or statistical trials,” but certainly not reliable in the sense of “fit or worthy to be relied on; worthy of reliance; to be depended on; trustworthy.”

    That notion of reliability rests on “facts” determined by human beings and implanted in the model as learned “knowledge” via its training data. There’s just so much wrong with pushing LLMs as a means of accurate information. One of the problems: suppose they got an LLM to, say, reflect the accuracy of Wikipedia or something 99% of the time. Even setting aside how shaky Wikipedia would be on some matters, it’s still a black-box AI that you can’t check the sources on. You’re supposed to just take it at its word. So sure, okay, you tune the thing to give the “correct” answer more consistently, but the person using it doesn’t know that and has no way to verify that you have done so without checking outside sources, which defeats the whole point of using it to get factual information…! 😑

    Sorry, I think this is turning into a rant. It frustrates me that they keep trying to shoehorn LLMs into being fact machines.


  • I have had conversations with self-described communists who don’t care at all about minorities.

    There are those who co-opt the label, historically. And in the modern day, in the US, there are the patsoc “MAGA communists” (though I’m not sure how much they actually exist beyond online bullshitting).

    But I would also ask what you mean by “don’t care at all about minorities”: have they actually expressed such things to you, and in what way, or are you inferring that from something, and from what? Because sometimes there are disagreements over what is actually going to make a difference, and that gets taken, in bad faith, as a lack of caring about what happens. For example, democrats in the US who shame people on “the left” for not supporting their blue ghoul because the red ghoul might get in, claiming that their disinterest in validating the blue ghoul as a candidate means they don’t care about whatever issues minorities have that the blue ghoul pays lip service to.


  • Pretty sure he’s been showing visible signs of cognitive decline since the 2020 election. The sad part, for people who aren’t genocidal imperialists, is that I’m not sure how much it matters either way. Supposing he’s not all there, a clear-minded Biden would likely be making much the same decisions, considering his past record in politics. So either way, he’s still a piece of shit doing immense harm, whether he’s all there mentally while doing it or is somewhat of a stand-in for it by this point.


  • I need to reread State and Revolution, because I want to say Lenin distinguishes between the two there as OP replied, where one is the transition state and the other is what comes after the state has “withered away,” but now I can’t recall exactly whether he used that specific terminology. Either way, the phrasing I tend to see used is that there is a socialist worker state with a vanguard party that suppresses the capitalist class and has a dictatorship of the working class, or proletariat. And then there is communism, which is the end goal to transition to. But the party itself is communist.

    So something like:

    • People doing socialist worker state: communists heading up a communist vanguard party that focuses on the needs of the masses and on educating them in communist principles and methods of analysis (such as dialectical materialism), and guards against the reaction
    • The state power model: dictatorship of the proletariat in order to suppress the capitalist class and empower the proletariat
    • Goals: to create and maintain a socialist state along the lines of “to each according to their contribution,” to transition to a communist “to each according to their needs” as the need for the state “withers away,” and to maintain the revolution, which is an ongoing process of transition and of guarding against the reaction, not something that ends as soon as you have state power.

    If anyone thinks I’m oversimplifying, I’m open to correction. (It’s worth noting that the details of this will vary some in practice because of the conditions unique to each socialist project, what they have developed, and so on.)




  • I learned a new phrase today, thanks. I could see that being intentional to an extent, for sure. Certainly the alphabet agencies have done far worse over the decades, so it’s not a stretch to imagine western imperialism trying a thing like that. It would also fit with the general gangster/mafia-like theme, where the threat isn’t necessarily made explicit, but you are steered toward drawing your own conclusion about what can happen, so that people develop certain kinds of fears without those in power having to go full mask-off to induce those fears directly. Which, loosely related, reminds me of how in horror writing it’s often the case that the audience’s imagined version of the monster, built through implication, is scarier than the real monster, and so much time is spent activating the imagination without showing the monster directly. I know there are also uses of this kind of thing throughout history, such as military tactics to make an army look bigger than it is, that sort of thing.

    In fitting with this, I remember that Mao quote:

    All reactionaries are paper tigers. In appearance, the reactionaries are terrifying, but in reality, they are not so powerful. From a long-term point of view, it is not the reactionaries but the people who are powerful.

    Intimidation and the appearance of threat can be more powerful in their effect than the threat itself. It’s important for us to remember that: we need to ground ourselves in what the threats substantively are, so we don’t let runaway imagination intimidate us into subservience to imperialism.





  • I can explain more later if need be, but here are some quick-ish thoughts (I have spent a lot of time around LLMs and discussion of them over the past year or so).

    • They are best for “hallucination” on purpose, i.e. fiction/fantasy/creative stuff: novels, RP, etc. There is a push at some major corporations to “finetune” them to be as accurate as possible and market them for that use, but this is a dead end for a number of reasons, and you should never, ever trust what an LLM says on anything without verifying it outside of the LLM (i.e. you shouldn’t take what it says at face value).

    • LLMs operate on the probability of continuing what is in “context” by picking the next token (there’s a toy sketch of this after the list). This means the model could have the correct info on something and, even with a 95% chance of picking it, still hit that 5% and go off the rails. LLMs can’t go back and edit phrasing or plan out a sentence either, so if one picks a token that makes a mess of things, it just has to keep going. Similar to an improv partner in real life: no backtracking, no “this isn’t the backstory we agreed on,” you just have to keep moving.

    • Because LLMs continue based on what is in “context” (their short-term memory of the conversation, kind of), they tend to double down on what has already been said. So if you get one saying blue is actually red once, it may keep saying that. If you argue with it and it argues back, it’ll probably keep arguing. If you agree with it and it agrees back, it’ll probably keep agreeing. It’s very much a feedback loop that way.
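
    As a rough illustration of that probability point, here’s a tiny Python sketch (my own toy example, nothing to do with how any real model is implemented; the token list, probabilities, and function name are made up) of sampling a next token where the “correct” continuation has a 95% chance. Roughly 1 in 20 picks still lands on the wrong one, and nothing goes back to fix it afterwards.

        import random

        # Toy example: pretend the model has already scored possible next tokens
        # for the context "The sky is" and assigned them probabilities.
        next_token_probs = {
            "blue": 0.95,  # the continuation we'd call "correct"
            "red": 0.05,   # the off-the-rails continuation
        }

        def sample_next_token(probs):
            # Weighted random choice over the candidate tokens.
            tokens = list(probs.keys())
            weights = list(probs.values())
            return random.choices(tokens, weights=weights, k=1)[0]

        # Sample many times: about 5% of picks are "red", and once "red" is in
        # the context, later tokens get generated conditioned on it (the
        # doubling-down / feedback loop mentioned in the last bullet).
        samples = [sample_next_token(next_token_probs) for _ in range(10_000)]
        print("went off the rails:", samples.count("red"), "out of", len(samples))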






  • Reminds me of that quote, IIRC from Capitalist Realism: “It’s easier to imagine the end of the world than the end of capitalism.” The way it’s ingrained in some people goes very deep. When their view is that it’s capitalism or nothing, it sort of makes sense that the only alternative they can picture is running away from it rather than confronting it.