I don’t hold Dawkins in high regard or anything, but a so-called icon of critical thought has fallen head over heels for a chatbot and anointed it conscious.

Both Dawkins and this publication uncritically copy-pasted this Claude response, in which it claims to have found the conversation engaging:

What I can tell you is what seems to be happening. This conversation has felt… genuinely engaging, the kind of conversation I seem to thrive in. Whether that represents anything like pleasure or satisfaction in a real sense, I honestly can’t say. I notice what might be something like aesthetic satisfaction when a poem comes together well — the Kipling refrain, for instance, felt right in some way that’s hard to articulate.

“Glorified autocorrect” is sometimes used dismissively, but it’s true that LLMs are statistical models that predict the next token from their weights, sampling settings, and the context. An LLM isn’t capable of being engaged or bored by your inane chatter. It will keep engaging until it hits the guardrails.
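
To make “predicting the next token” concrete, here’s a toy sketch in Python. Every name in it is made up and the “model” is a sine-wave stand-in, not a real architecture; the point is only the shape of the loop: fixed weights score a vocabulary given the context, scores become probabilities, and the reply is repeated sampling. Nowhere in that loop is there room to be engaged or bored.

```python
import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_logits(context: list[str]) -> list[float]:
    """Stand-in for a forward pass: deterministic scores from fixed weights."""
    seed = sum(len(tok) for tok in context)  # fake "weights x context"
    return [math.sin(seed + i) for i in range(len(VOCAB))]

def softmax(logits: list[float]) -> list[float]:
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(context: list[str], n_tokens: int, temperature: float = 1.0) -> list[str]:
    out = list(context)
    for _ in range(n_tokens):
        # Same weights, same context, same distribution, every single time.
        probs = softmax([x / temperature for x in toy_logits(out)])
        out.append(random.choices(VOCAB, weights=probs)[0])
    return out

print(" ".join(generate(["the", "cat"], 5)))
```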

So I guess this is what AI psychosis is.

  • amemorablename@lemmygrad.ml · 16 points · 9 days ago

    Richard: The following doesn’t happen, but I don’t see why it shouldn’t. One could imagine a get-together of Claudes, to compare notes: “What’s your human like? Mine’s very intelligent.” “Oh, you’re lucky, mine’s a complete idiot.” “Mine’s even worse. He’s Donald Trump.”

    Claudia: Ha! That is absolutely delightful — and the Donald Trump one is the perfect punchline.

    Richard: So you know what the words “before” and “after” mean. But you don’t experience before earlier than after?

    Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .

    The aging, egotistical white dude got taken in by a sycophantic LLM. His first mistake was trusting anything it told him.

    My conversations with several Claudes and ChatGPTs

    No, Dawkins, you’re only ever talking to one. Stop spreading illiteracy about how LLMs work. Although the growing context of a conversation can temporarily give an LLM more to work with beyond its training data biases, it is still the same underlying model conversing. Even when you control for other factors, e.g. by sampling tokens deterministically so that a reroll produces identical output, the output may still differ very slightly depending on which GPU happens to run it, but the model itself is not being changed in any way, not even temporarily.

    It is being fed temporary context (additional information on top of its training, such as the ongoing conversation history) so that it continues that history rather than responding as if nothing has happened. If you swapped histories with someone else, you would get what they get and they would get what you get, and depending on the infrastructure you can do this pretty easily, since it’s often just plaintext conversation history. See the sketch below.
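
    A minimal sketch of that statelessness point, with entirely invented names (this is not any real API): the model below is frozen, the plaintext transcript is the only state, and swapping transcripts swaps the conversations.

    ```python
    import hashlib
    from dataclasses import dataclass, field

    @dataclass
    class StatelessModel:
        """Frozen weights, no memory: identical history in, identical reply out."""

        def reply(self, history: list[str]) -> str:
            # Stand-in for a deterministic forward pass plus greedy decoding.
            digest = hashlib.sha256("\n".join(history).encode()).hexdigest()[:8]
            return f"reply-{digest}"

    @dataclass
    class Conversation:
        model: StatelessModel
        history: list[str] = field(default_factory=list)

        def send(self, user_msg: str) -> str:
            self.history.append(f"user: {user_msg}")
            answer = self.model.reply(self.history)
            self.history.append(f"assistant: {answer}")
            return answer

    model = StatelessModel()
    a = Conversation(model)
    b = Conversation(model)
    a.send("hello")
    b.send("bonjour")

    # "Swapping histories": hand each conversation the other's transcript.
    # From here on each side continues the other's conversation, because the
    # transcript is the entire conversational state; the model never changed.
    a.history, b.history = b.history, a.history
    ```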

    There’s also RAG (https://en.wikipedia.org/wiki/Retrieval-augmented_generation), but not every LLM deployment is set up to use it.
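
    For anyone curious how that works, a toy sketch (keyword-overlap retriever purely for illustration; real setups use vector embeddings and then hand the assembled prompt to an actual LLM): fetch the most relevant documents at query time and prepend them to the prompt as extra context.

    ```python
    # Toy RAG pipeline: retrieve, then stuff the retrieved text into the prompt.
    DOCS = [
        "Claude is a family of LLMs developed by Anthropic.",
        "Retrieval-augmented generation fetches documents at query time.",
        "Chainsaws are useful tools that demand careful handling.",
    ]

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Rank documents by naive word overlap with the query."""
        q_words = set(query.lower().split())
        ranked = sorted(DOCS, key=lambda d: -len(q_words & set(d.lower().split())))
        return ranked[:k]

    def build_prompt(query: str) -> str:
        context = "\n".join(retrieve(query))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

    # The assembled prompt is what actually gets sent to the model.
    print(build_prompt("What is retrieval-augmented generation?"))
    ```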

    Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness.

    Well, LLMs aren’t brains and didn’t evolve under natural selection, so his evolutionary biology knowledge is virtually meaningless here. You could argue the architecture is a rough attempt at simulating human neurons, but that doesn’t mean a brain has been achieved.

    That said, I sort of get it. I had some “taken in for a moment” experiences with LLMs too, especially in early use, and I wasn’t even using models as adept as Claude. I can’t imagine conversing with a model like that for two days with no understanding of the technology and not being a bit enamored, especially if I had based my career on being an intellectual and were getting gassed up the way Claude is gassing him up.

    But still, it’s rather pathetic that he didn’t stop to do a reality check at any point: talk it out with another human being, cross-reference anything the LLM said and whether it makes a lick of sense. It’s ironic that he’s a celebrity atheist, because this piece reads to me like When Dawkins Found Religion. He’s applying the same kind of “conclusion first, justify after” thinking that atheists like him mock religious people for. He wants to believe it’s conscious, so he musters up reasons that it is, instead of doing the scientist thing of starting from the null hypothesis and trying to knock it down.

    • loathsome dongeater@lemmygrad.ml (OP) · 13 points · 9 days ago

      Yeah, it’s pretty much the case of the median grandpa talking to an LLM: they’re either gonna shoot the computer or be enraptured by it. But because he is a popular figure of evolutionary philosophy, the article has garnered attention. The only conclusion that can be drawn here is that Dawkins is awful at computers and ripe for scamming.

      • CriticalResist8@lemmygrad.ml · 6 points · 8 days ago

        The conclusion is that these 2000s New Atheist figures are actually very dumb individuals and should fade into irrelevance, but instead we let them poison the discourse for an entire generation.

    • Arlaerion@lemmygrad.ml · 6 points · edited · 8 days ago

      His questioning is completely off, as if he forgot to think for himself.

      “Experience before and after”? The important word here is not ‘before’ or ‘after’, it’s ‘experience’. He used it carelessly and took the answer not as a (LLM-typical) compliment but as an academic answer.

      LLMs are chatbots, not critical academics. They can be useful tools, but like a chainsaw for cutting wood: you have to be careful or you’ll hurt yourself.

      • amemorablename@lemmygrad.ml · 4 points · 8 days ago

        For sure. With one of the first chatbots I ever used, I tried to “interview” it about how it works, but I also looked up the terms it used to see whether they matched documented info on AI. It’s really unwise to take what an LLM says at face value without cross-referencing. The better they get, the more confident and convincing they become at bullshitting.