

DeepSeek will literally think in its reasoning sometimes, "Well, what they said is incorrect, but I need to make sure I approach this delicately so as to not upset the user," and stuff. You can mitigate it a bit by just literally telling it to be straightforward and correct things when needed, but that still doesn't fix it entirely.
LLMs will literally detect where you're from via the words you use. Like they can tell if you're American, British, or Australian, or if you're someone whose second language is English, within a few sentences. Then they'll tailor their answers to what they think someone of that nationality would want to hear lol.
I think it's a result of them being trained to be very nice, personable, customer-servicey things. They basically act the way your boss wants you to act if you work customer service.
Oh yeah, I've had to tell ChatGPT to stop bringing up shit from other chats before. Like if something seems related to another chat, it'll start referencing it, as if I didn't just make a new chat for a reason. The worst part is the longer you talk to them, the more they hallucinate, so starting a fresh chat is usually the best way to go. ChatGPT seems worse at hallucinating these days than DeepSeek, probably for this reason. New chats aren't actually clean slates.