This screenshot and similar ones have been circulating with concerns that these chatbots are dangerously sycophantic.

  • TheLeadenSea@sh.itjust.works · ↑21 ↓1 · 10 days ago

    It depends entirely on the prompt and training. Different LLMs, customised differently, vary wildly on how agreeable they are.

    • Hazzard@lemmy.zip · ↑15 · 10 days ago

      Sure, but I think this is similar to the problem of social media being addicting. This kind of thing makes users feel good, and therefore makes companies more money.

      I don’t expect the major AI companies to self regulate here, and I don’t expect LLMs to ever find a magical line of being sycophantic enough to make lots of money while never encouraging a user about anything unethical, nor do I want to see their definition of “unethical” become the universal one.

      • otacon239@lemmy.world · ↑12 · 10 days ago

        This right here. If someone can maliciously get an LLM to do this, there are plenty of other people out there who will trigger it unknowingly and take the advice at face value.

        It’s a search engine at the end of the day and only knows how to parrot.

      • brucethemoose@lemmy.world · ↑5 · edited · 10 days ago

        That’s why AI needs to be locally run. It takes the sycophancy profit incentive out of the equation, and allows models to shard into countless finetunes.

        And it's why the big companies are all pushing safety so much, as if they agree with the anti-AI crowd: they are scared of near-free, local models more than anything.

    • kubica@fedia.io · ↑6 · 10 days ago

      Some will change their mind if you ask them if they are sure about what they said.

      Others are so stubborn that they will keep insisting on the same thing even when you point out, in multiple ways, that you caught them being wrong.

    • chortle_tortle@mander.xyz · ↑3 · 9 days ago

      > Different LLMs, customised differently, vary wildly on how agreeable they are.

      Old heads know how cool Bing’s AI used to be.