return2ozma@lemmy.world to Technology@lemmy.world · English · 1 day ago
Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns (futurism.com)
affenlehrer@feddit.org · English · 1 day ago
Also, the LLM is just predicting the next token, not selecting a reply. And it isn't limited to the assistant role: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls, etc.).
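The point about roles can be sketched concretely. A minimal, hypothetical example (assuming a ChatML-style template; the `build_prompt` helper and tags are illustrative, not any specific engine's API): the chat is flattened into one token stream, and it is the inference engine's stop sequence, not the model, that keeps generation confined to the assistant turn.

```python
# Hypothetical sketch of a ChatML-style chat template. The model only ever
# sees one flat string and predicts the next token; "roles" exist solely in
# the formatting the engine wraps around the text.

def build_prompt(messages):
    """Render a chat as the raw text stream the model will complete."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Cue the model to continue in the assistant role.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
])

# The engine must cut generation at the end-of-turn marker. If this stop
# sequence is omitted (misconfigured), the model will happily continue
# past its own turn and start predicting "<|im_start|>user\n..." tokens.
stop_sequences = ["<|im_end|>"]
```

Nothing in the model architecture enforces the boundary; the stop sequence is purely an inference-engine setting, which is why dropping it lets the model impersonate the user or emit tool-call tokens.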