• Zak@lemmy.world
    1 day ago

    It seems very unlikely to me that the model itself has a list of banned words, and much more likely that a purported list is hallucinated.

    If they did want a simple list like that, it would probably go in the harness rather than the model: the model wouldn’t have been trained on it, nor would a reasonably designed harness expose it to the model. Legitimate use cases, such as asking the model for a list of abusive words to use as a first pass in a filtering system, could get tripped up.
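    To illustrate why such a first-pass word list is error-prone, here is a minimal sketch of a naive whole-word filter; the word list and function name are purely illustrative, not taken from any real moderation system:

    ```python
    # Hypothetical first-pass filter: flag any text containing a listed word.
    # The list below is deliberately naive to show the false-positive problem.
    BANNED = {"abuse", "hate", "threat"}

    def flagged_words(text: str) -> set[str]:
        """Return which listed words appear as whole words in the text."""
        words = {w.strip(".,!?;:").lower() for w in text.split()}
        return words & BANNED

    # A clearly benign sentence still trips the filter:
    print(flagged_words("The report covers substance abuse treatment options."))
    # → {'abuse'}
    ```

    A word-level match like this has no way to distinguish "substance abuse treatment" from an actual insult, which is exactly why overly broad entries on such a list do more harm than good.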

    As a test, I asked Perplexity to generate such a list. It did a bad job, including words like “abuse”, “hate”, and “threat”, which are far more likely to be innocuous than abusive. It also included some highly offensive slurs that one would expect on any banned-words list.