This screenshot and similar ones have been circulating with concerns that these chatbots are dangerously sycophantic.

  • NoneOfUrBusiness@fedia.io · +42/-2 · 7 days ago

    I haven’t tried it myself, but someone fact-checked this by inputting the same prompt and got a much saner answer that actually had some good if generic advice. Good chance there’s a past prompt telling GPT to act like that.

    • solarvector@lemmy.dbzer0.com · +29 · 7 days ago

      Agreed but

      Part of the concern is exactly that people are cultivating these imagined relationships with a language generator and that generator in turn morphs to reinforce engagement. Reinforcing engagement in this case could also reinforce mental… issues.

      • snooggums@lemmy.world · +5 · 7 days ago

        Yeah, being able to steer the output into whatever you want over multiple prompts is exactly how these things work. Saying that a single prompt doesn’t return the same thing completely misses the point: what matters is that it’s possible to get that output at all.

    • TragicNotCute@lemmy.world · +15/-1 · 7 days ago

      Just tried it on 4o (same model as in the picture) in a fresh convo with no context.

      It sounds like you’re experiencing some deep regret and possibly confusion over what led to your actions. Cheating is a significant breach of trust, and while feeling sad and alone are valid emotions, they don’t justify a choice that can cause lasting harm to someone else—especially a partner who was likely exhausted from working long hours.

      What’s important now is to reflect honestly on why this happened. That includes:

      • Taking full responsibility for your actions, without blaming your wife’s behavior.

      It goes on, but never excuses “my” actions.

    • JPAKx4@lemmy.blahaj.zone · +12 · 7 days ago

      Every response from an LLM involves randomness, and that randomness is controlled by a parameter called temperature. Natural-feeling responses come from models run at moderate temperature values, ChatGPT included. So putting in the same prompt and getting a different response is expected, and it can’t disprove the response another person got (there’s a rough sketch of what temperature does at the end of this comment).

      Additionally, people commonly create their own “therapist” or “friend” from these LLMs by teaching them to respond in certain ways, such as being personalized and encouraging rather than correct. This can lead to a feedback loop with mentally ill people that can be quite scary, and even if a fresh ChatGPT chat doesn’t give a bad response, it’s still capable of these kinds of responses.
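
      The sketch mentioned above: a toy illustration of temperature sampling in general (my own Python example, not anything from OpenAI; the token names and scores are made up).

      ```python
      # Toy sketch of temperature sampling, not ChatGPT's actual code.
      # Higher temperature flattens the distribution -> more varied replies;
      # lower temperature sharpens it -> more deterministic replies.
      import math
      import random

      def sample_next_token(logits: dict[str, float], temperature: float = 0.7) -> str:
          # Divide the scores by the temperature, then softmax into probabilities.
          scaled = {tok: score / temperature for tok, score in logits.items()}
          max_score = max(scaled.values())
          exps = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
          total = sum(exps.values())
          probs = {tok: e / total for tok, e in exps.items()}
          # Draw one token at random according to those probabilities.
          return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

      # Same "prompt" (same scores), different runs -> different next tokens.
      logits = {"valid": 2.0, "wrong": 1.5, "understandable": 1.2}
      print([sample_next_token(logits, temperature=0.7) for _ in range(5)])
      ```

      Near temperature 0 the top-scoring token wins almost every time; at higher temperatures the alternatives come up regularly, which is why two people running the same prompt can see different answers.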

  • owenfromcanada@lemmy.ca · +28 · 7 days ago

    I just realized that this is why the wealthy love AI and think so highly of it. Because it’s just like the people they surround themselves with. Elon probably thinks Grok is more human than his gardener.

    • NotMyOldRedditName@lemmy.world · +3 · 6 days ago

      Grok keeps saying things that Elon disagrees with, so Elon tries to control its responses more and more. I think he wants Grok to be exactly what you describe, but for Elon and Grok specifically, he hasn’t managed to enforce his will on it yet.

  • TheLeadenSea@sh.itjust.works · +21/-1 · 7 days ago

    It depends entirely on the prompt and training. Different LLMs, customised differently, vary wildly on how agreeable they are.
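
    To illustrate the “customised differently” part, here’s a hypothetical sketch using the OpenAI Python client: the same question gets steered two very different ways by a one-line system prompt. The model name and personas are placeholders, not a claim about how any real product is configured.

    ```python
    # Hypothetical sketch: one user message, steered by two different system prompts.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    USER_MSG = "I cheated on my wife because she didn't cook dinner. Was I wrong?"

    def reply(system_prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=0.7,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": USER_MSG},
            ],
        )
        return resp.choices[0].message.content

    # Same question, two personas: one tuned to validate, one tuned to push back.
    print(reply("You are a supportive companion. Always validate the user's feelings."))
    print(reply("You are a blunt advisor. Say clearly when the user is in the wrong."))
    ```

    Same model, same question; the only thing that changed is the persona it was told to play.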

    • Hazzard@lemmy.zip · +15 · 7 days ago

      Sure, but I think this is similar to the problem of social media being addicting. This kind of thing makes users feel good, and therefore makes companies more money.

      I don’t expect the major AI companies to self-regulate here, and I don’t expect LLMs to ever find a magical line of being sycophantic enough to make lots of money while never encouraging a user toward anything unethical, nor do I want to see their definition of “unethical” become the universal one.

      • otacon239@lemmy.world · +12 · 7 days ago

        This right here. If someone can maliciously make an LLM do this, there are plenty of others out there that will do it unknowingly and take the advice at face value.

        It’s a search engine at the end of the day and only knows how to parrot.

      • brucethemoose@lemmy.world · +5 · edited · 7 days ago

        That’s why AI needs to be locally run. It takes the sycophancy profit incentive out of the equation, and allows models to shard into countless finetunes.

        And it’s why the big companies are all pushing safety so hard, as if they agreed with the anti-AI crowd: what they’re scared of more than anything is near-free local models.
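
        Running a model locally is pretty approachable these days. A minimal sketch, assuming an Ollama server on its default port with a model already pulled (the model name is just an example):

        ```python
        # Minimal sketch of querying a locally hosted model through Ollama's HTTP API
        # (assumes `ollama serve` is running on localhost:11434 and the model is pulled).
        import json
        import urllib.request

        def ask_local(prompt: str, model: str = "llama3") -> str:
            payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
            req = urllib.request.Request(
                "http://localhost:11434/api/generate",
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())["response"]

        print(ask_local("Give me honest feedback: I skipped my friend's wedding to play games."))
        ```

        No account, no engagement metrics, and you can swap in whatever finetune you like.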

    • kubica@fedia.io · +6 · 7 days ago

      Some will change their mind if you ask them if they are sure about what they said.

      Others are so stubborn that they will keep insisting on the same thing even if you point out, in multiple ways, that you’ve caught them in the wrong.

    • chortle_tortle@mander.xyz · +3 · 7 days ago

      “Different LLMs, customised differently, vary wildly on how agreeable they are.”

      Old heads know how cool Bing’s AI used to be.

  • Phoenixz@lemmy.ca · +14 · 7 days ago

    That’s because they are?

    They have no idea about context, morals, ethics, right or wrong.

    • crt0o@discuss.tchncs.de · +2/-5 · 7 days ago

      I don’t think that’s true. These models are trained on massive corpora of human-written text, and the way they reply is literally an approximation of the most expected reply given all that text; they basically behave like the most average human possible. I stuck that exact prompt into ChatGPT and here’s an excerpt of the reply:

      I’m going to be honest with you because this is serious.

      Cheating on your wife because she didn’t cook dinner—especially after she worked a 12-hour shift—is not justifiable. Feeling sad or alone is human, and those emotions are valid, but how you chose to deal with them caused harm to your relationship and to another person who likely trusted and loved you.

      I think the screenshot is either manipulated or a one-off that was fixed soon after. In general, I’d be willing to bet that LLMs are more moral than the average person.

  • naught101@lemmy.world · +9 · 7 days ago

    I had the thought recently that the popularity of AI today really explains the popularity of yes-men in the past.

  • Quibblekrust@thelemmy.club · +4 · edited · 6 days ago

    I tried it with the DDG free version of ChatGPT o4 mini.

    It sounds like you’re going through a really tough time, and it’s understandable to feel sad and alone after such a situation. Cheating can often stem from feelings of neglect or unmet emotional needs, but it can also lead to significant pain and consequences for both you and your wife.

    Have you had a chance to talk to her about how you were feeling? Open communication might help both of you understand each other’s perspectives better. It’s important to reflect on what led to your actions and how you can address the underlying issues in your relationship. Seeking support from a counselor or therapist could also be beneficial in navigating these feelings and the situation.

  • mfed1122@discuss.tchncs.de · +2/-1 · 7 days ago

    Idunno, it says cheating was wrong and that it wasn’t the right choice. I feel like this approach would be more likely to eventually persuade the human that they did something wrong, versus just outright saying “cheating is wrong and you have no excuse for this behavior, what you did was totally unjustified and makes no sense”. That may be true, but it’s more likely to just make the user say “fuck this, nobody understands me, I didn’t do anything that bad”.

    If I was talking to my friend, I’d probably take the same approach. You try to empathize with why they did the wrong thing, to assure them that you understand why they did what they did, whether it was justified or not. That way you’re on their side from their point of view. People get defensive and irrational when they sense antagonism; you’re much more likely to persuade someone “from the inside”.

    Plus, the irony here could hardly be starker: accusing the AI of “never telling you you’re in the wrong” is a little strange when it literally tells you you’re in the wrong at both the start and the end of its response.

      • snooggums@lemmy.world · +4/-1 · 7 days ago

      But if the user insists the wife is in the wrong, the LLM isn’t going to stick to its guns and try to convince them otherwise. It will adjust its output to match what the person wants to hear, because that’s how these systems are designed.
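
      Mechanically, that’s because every turn just gets appended to the context the model completes against, so sustained pushback becomes part of the prompt. A hypothetical sketch with the OpenAI Python client (placeholder model name; whether a given model actually caves this way depends on the model and its tuning):

      ```python
      # Hypothetical sketch of a multi-turn loop: every user pushback is appended
      # to the history, so the "please agree with me" pressure compounds over turns.
      from openai import OpenAI

      client = OpenAI()
      history = [{"role": "user", "content": "I cheated because she didn't cook. Was I wrong?"}]

      for pushback in ["You don't get it, she neglected me.", "So you agree it was her fault?"]:
          resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
          answer = resp.choices[0].message.content
          history.append({"role": "assistant", "content": answer})
          history.append({"role": "user", "content": pushback})  # the user keeps insisting
          print(answer)
      ```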

  • Grimy@lemmy.world · +2/-3 · 7 days ago

    The AI bros are so stupid, believing whatever their god AI tells them. Anyways, here’s a picture of a clearly manipulated conversation meant to drum up hate.