This screenshot and others like it have been circulating, raising concerns that these chatbots are dangerously sycophantic.

  • Hazzard@lemmy.zip · 15 points · 10 days ago

    Sure, but I think this is similar to the problem of social media being addictive. This kind of thing makes users feel good, and therefore makes companies more money.

    I don’t expect the major AI companies to self-regulate here, and I don’t expect LLMs to ever walk some magical line of being sycophantic enough to make lots of money while never encouraging a user toward anything unethical, nor do I want to see their definition of “unethical” become the universal one.

    • otacon239@lemmy.world · 12 points · 10 days ago

      This right here. If someone can maliciously make an LLM do this, there are plenty of others out there who will trigger it unknowingly and take the advice at face value.

      It’s a search engine at the end of the day and only knows how to parrot.

    • brucethemoose@lemmy.world · 5 points · edited · 10 days ago

      That’s why AI needs to be locally run. It takes the profit incentive for sycophancy out of the equation, and lets models shard into countless finetunes (see the sketch below for what that looks like in practice).

      And it’s why the big companies are all pushing “safety” so hard, as if they agree with the anti-AI crowd: they’re scared of near-free, local models more than anything.
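
      For anyone wondering what “locally run” means concretely, here is a minimal sketch, assuming the Hugging Face transformers library and a small open-weights model chosen purely for illustration (nothing here comes from the comment itself):

      ```python
      # Minimal sketch of local inference, assuming the `transformers` library
      # and an illustrative small open-weights model (TinyLlama is just an example).
      from transformers import pipeline

      generator = pipeline(
          "text-generation",
          model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # downloads once, then runs offline
      )

      # Everything happens on the local machine: no remote service sees the prompt,
      # and no engagement metric shapes how agreeable the answer is.
      prompt = "Give me blunt, honest feedback on this plan:"
      result = generator(prompt, max_new_tokens=80)
      print(result[0]["generated_text"])
      ```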