This screenshot and similar ones have been circulating with concerns that these chatbots are dangerously sycophantic.

  • JPAKx4@lemmy.blahaj.zone · 9 days ago

    Every time you ask an LLM something, the response is sampled randomly, and the amount of randomness is controlled by a parameter called temperature. Natural-feeling responses come from moderate temperature values, which ChatGPT uses. This means that entering the same prompt and getting a different response is expected, so one person's output can't disprove the response another person got.
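    To see why the same prompt can yield different outputs, here's a minimal sketch of temperature sampling over a toy next-token distribution (the logit values are made up for illustration; real models sample from a vocabulary of thousands of tokens this same way):

    ```python
    import math
    import random

    def sample_with_temperature(logits, temperature, rng):
        # Scale logits by 1/temperature: low T concentrates probability
        # on the top token (near-greedy), high T flattens the distribution.
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw one token index from the resulting distribution.
        r = rng.random()
        cum = 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(probs) - 1

    # Hypothetical logits for three candidate next tokens.
    logits = [2.0, 1.0, 0.1]
    low = [sample_with_temperature(logits, 0.2, random.Random(i)) for i in range(100)]
    high = [sample_with_temperature(logits, 2.0, random.Random(i)) for i in range(100)]
    # At temperature 0.2 nearly every draw picks token 0; at 2.0 the
    # other tokens show up often, so repeated runs diverge.
    ```

    The same prompt (same logits) gives near-identical outputs at low temperature and varied ones at moderate-to-high temperature, which is why rerunning a screenshot's prompt and getting a different answer proves nothing either way.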

    Additionally, people commonly create their own “therapist” or “friend” from these LLMs by instructing them to respond in certain ways, such as being more personal and encouraging rather than correct. This can create a feedback loop with mentally ill users that can be quite scary, and even if a fresh ChatGPT chat doesn’t give a bad response, the model is still capable of producing these kinds of responses.