You’ll likely get reduced performance from this, since inflating the token count is part of how the models eke out a marginal performance gain.
Funny, though, as are most discoveries related to emergent LLM properties.
That’s freakin’ hilarious.
Can’t you just append “make your reply as concise as possible”? Wouldn’t that work?
I think most LLMs would read “as concise as possible” and still spout lots of kindergarten yes-man positivity while skimming the actual meat of the technical part.
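For what it’s worth, “just appending” the instruction is a one-liner either way; a minimal sketch of what the comment above is describing (the helper name `with_concise_suffix` and the example prompt are made up, and the message dict follows the common chat-completion shape):

```python
def with_concise_suffix(prompt: str) -> str:
    """Append a brevity instruction to a user prompt before sending it to a model."""
    return prompt.rstrip() + "\n\nMake your reply as concise as possible."

# Build the messages payload in the shape most chat APIs expect.
messages = [
    {"role": "user", "content": with_concise_suffix("Explain TCP slow start.")},
]
print(messages[0]["content"])
```

Whether the model actually honors the instruction, rather than just trimming the pleasantries, is the open question.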
Finally - the Grug brained LLM.
I didn’t realize you could ask for the 10 peso version.