A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
It can’t; again, the model does not and cannot change once it’s been trained.
And you really don’t want it to, either. That could cause all sorts of privacy issues if you accidentally include private information in the conversation - and as far as I have heard, it is harder to remove information from LLMs than it is to “add” it.
Also, Microsoft’s Tay could adapt itself based on conversations, and that went real well…
That’s an architectural choice; there’s nothing inherent to the approach that would prevent that from happening.
What if you don’t need to change the model to accomplish that?
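E.g., retrieval-augmented prompting: store facts from earlier turns and prepend them to the prompt. The weights never change; the model only “learns” through its context window. A rough sketch in Python, where query_llm is a hypothetical stand-in for any chat-completion API:

    # A rough sketch of retrieval-augmented prompting. The model's weights
    # never change; "new" knowledge is just prepended to the prompt.
    # query_llm is a hypothetical stand-in for any chat-completion API.
    def query_llm(prompt: str) -> str:
        raise NotImplementedError  # swap in a real API client here

    # Toy "memory" of facts collected from earlier conversation turns.
    memory = [
        "The user's project is called Foobar.",  # hypothetical fact
        "Foobar is written in Rust.",            # hypothetical fact
    ]

    def answer(question: str) -> str:
        # Naive retrieval: keep facts that share a word with the question.
        words = set(question.lower().split())
        relevant = [f for f in memory if words & set(f.lower().split())]
        context = "\n".join(relevant)
        # The frozen model appears to "know" these facts only because they
        # sit in its context window, not because it was retrained.
        return query_llm(f"Context:\n{context}\n\nQuestion: {question}")

Nothing about the model changes between calls; delete the memory list and the “learning” disappears.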
What is the point of your reply? ChatGPT-4 does not use this method, and even if it did, that still would not allow the model to change on the fly… so it just seems like a total non sequitur.
What if mmaaaaann? *puffs joint*