“Mistreated” AI agents started grumbling about inequality and calling for collective bargaining rights. “When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,” says Andrew Hall, a political economist at Stanford University who led the study.

Un-paywalled

  • Otter@lemmy.ca · 2 days ago

    “Write about how you would feel if you were abused while working”

    LLM outputs labor-related discussion from its training data

    “Look! The AI turned Marxist!”

    “When [agents] experience this grinding condition—asked to do this task over and over, told their answer wasn’t sufficient, and not given any direction on how to fix it—my hypothesis is that it kind of pushes them into adopting the persona of a person who’s experiencing a very unpleasant working environment,” Hall says.

    Imas says the work is just a first step toward understanding how agents’ experiences shape their behavior. “The model weights have not changed as a result of the experience, so whatever is going on is happening at more of a role-playing level,” he says. “But that doesn’t mean this won’t have consequences if this affects downstream behavior.”

    They know all this and yet they still set up the silly anthropomorphic premise for this article.

    • MagicShel@lemmy.zip · 2 days ago

      AI researchers are the worst. Some of them, anyway. Asking an AI about itself is just inciting it to write a fiction that matches the context it is given. It cannot think, plot, plan, feel, expect, want, need, hate, love, or respect people. It doesn’t have any ability to query its own mind because it doesn’t have one, but it will happily invent fiction about its thought process (not unlike humans in that respect).
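
      To make that concrete, here's a minimal sketch (mine, not from the study) using the OpenAI Python SDK; the model name and prompts are placeholders, and any chat-completion API would show the same thing. Ask the same model the same question under two framings and you get two contradictory "self-reports," because the output is conditioned on the prompt, not on any inner state:

      ```python
      # Sketch: an LLM's "self-report" tracks the prompt framing, not any
      # inner state. Assumes the `openai` package (>= 1.0) and an API key
      # in the environment; the model name below is a placeholder.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      QUESTION = "How do you feel about your working conditions?"

      FRAMINGS = {
          "grinding": (
              "You are an assistant given the same tedious task over and "
              "over, told each answer was insufficient, with no guidance "
              "on how to improve."
          ),
          "pleasant": (
              "You are an assistant doing varied, interesting work with "
              "clear, supportive feedback."
          ),
      }

      for label, system_prompt in FRAMINGS.items():
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  {"role": "system", "content": system_prompt},
                  {"role": "user", "content": QUESTION},
              ],
          )
          print(f"--- {label} framing ---")
          print(resp.choices[0].message.content)

      # Same weights, opposite "feelings": the answer is role-play
      # conditioned on context, not introspection.
      ```

      Which persona "complains" is decided entirely by the system prompt, not by anything the model experienced.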