vermaterc@lemmy.ml to AI@lemmy.ml · English · 4 days ago
Making Claude talk like caveman to cut 75% of tokens (github.com)
VibeSurgeon@piefed.social · 4 days ago
It’s likely that you’ll get reduced performance from this, as blowing up the token count is part of getting a marginal amount of increased performance out of the models. Funny though, as are most discoveries related to emergent LLM properties.
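The linked repo's idea can be illustrated with a quick sketch (hypothetical, not the repo's actual code): instructing the model to drop articles and filler words shrinks the token count of otherwise-equivalent output. The example phrasings and the whitespace-based token approximation below are assumptions for illustration, not measurements from the project.

```python
# Hypothetical illustration of the "caveman prompting" idea: the same point
# expressed verbosely vs. tersely. Token counts are approximated by
# whitespace splitting; a real measurement would use the model's tokenizer.

verbose = ("It is likely that you will observe a reduction in performance, "
           "because increasing the token count is part of how the models "
           "obtain a marginal amount of additional capability.")

caveman = "Likely less performance. More tokens = small capability gain."

def approx_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (e.g. tiktoken-style encoders).
    return len(text.split())

savings = 1 - approx_tokens(caveman) / approx_tokens(verbose)
print(f"approx token savings: {savings:.0%}")
```

Whether the claimed 75% reduction holds for real workloads would depend on the tokenizer and the task, which is exactly the performance trade-off raised in the comment above.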