This screenshot and similar ones have been circulating with concerns that these chatbots are dangerously sycophantic.

  • mfed1122@discuss.tchncs.de · 10 days ago

    Idunno, it says cheating was wrong and that it wasn’t the right choice. I feel like this approach would be more likely to eventually persuade the human that they did something wrong, versus just outright saying “cheating is wrong and you have no excuse for this behavior, what you did was totally unjustified and makes no sense”. That may be true, but it’s more likely to just make the user say “fuck this, nobody understands me, I didn’t do anything that bad”. If I were talking to a friend, I’d probably take the same approach. You try to empathize with why they did the wrong thing, to assure them you understand the reasons behind it, whether it was justified or not, so that you’re on their side from their point of view. People get defensive and irrational when they sense antagonism. You’re much more likely to persuade someone “from the inside”.

    Plus, the irony here couldn’t be more pronounced: accusing the AI of “never telling you you’re in the wrong” is a little strange when it literally tells you you’re in the wrong at both the start and end of its response.

    • snooggums@lemmy.world · 10 days ago

      But if the user insists the wife is in the wrong, the LLM isn’t going to stick to its guns to convince the person. It will adjust its output to match what the person wants to hear, because that is how these models are designed.