Seems like getting a decent answer out of an LLM for a novel problem requires a high degree of specificity and thoroughness in your prompt engineering. Seems like a set of rigid rules for specifying what you want is required to avoid getting broken slop. Fortunately, we have great, specific ways of specifying what we want computers to do. We call them programming languages.
If I write the code, the computer yells at me
If the LLM writes the code, I get to yell at the LLM
Both approaches result in code that doesn’t work, so I might as well go for the more cathartic one