• VibeSurgeon@piefed.social
    4 days ago

    It’s likely that you’ll get reduced performance from this, since blowing up the token count is part of how the models eke out a marginal performance gain.

    Funny though, as are most discoveries related to emergent LLM properties.