Just goes to show how LLMs are a reflection of the culture they're trained on. Reminds me of the story of the Google engineer from a few years back, Blake Lemoine, who believed Google's AI was sentient (or words to that effect). A lot of people online mocked him for it, and I believe he lost his job at Google over his public statements. But there was another side to that story: he did an interview, and while he did hold those beliefs about the AI, they weren't really the main focus. If I remember right, he was more concerned with using the attention to put a spotlight on AI ethics as a whole, and on the problem of opaque, black-box corporate decision-making about an AI's direction. He even mentioned a concern he called "AI colonization": basically, that AI would be used like colonial projects of the past, as a means of pushing a certain culture onto people.
With what I know now about AI, I don't think that concern is too serious, provided a culture has a dominant language other than English. In that case, an LLM will mostly end up reinforcing the culture's own existing norms, like in this story. Either way, it's important to remember the biases. LLMs are no more neutral in their output than a monotone, suit-wearing news anchor is. The assistant-tuned ones just lean on existing cultural cues that people associate with impartiality, and can come across as more detached and authoritative as a result.