Getting AI to do a complex problem correctly takes so much detailed explanation that it’s quicker to do it myself.
While it’s possible to see gains on complex problems through brute force, learning more about prompt engineering is a powerful way to save time, money, tokens, and frustration.
I see a lot of people saying, “I tried it and it didn’t work,” but have they read the guides or just jumped right in?
For example, if you haven’t read the Claude Code guide, you might never have set up MCP servers or taken advantage of slash commands.
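If you’ve never touched either feature, here’s roughly what the setup looks like. This is a sketch, not gospel: the server name, target directory, and command file are all placeholders, and the filesystem server is just one example of an MCP server.

```bash
# Register a local MCP server (name and directory are made up;
# any stdio MCP server is added the same way):
claude mcp add my-files -- npx -y @modelcontextprotocol/server-filesystem ~/projects

# Confirm it's registered:
claude mcp list

# Custom slash commands: any markdown file under .claude/commands/
# becomes a /command in your sessions, with $ARGUMENTS substituted
# for whatever you type after it:
mkdir -p .claude/commands
cat > .claude/commands/fix-issue.md <<'EOF'
Find and fix issue #$ARGUMENTS: read the issue, locate the relevant
code, implement a fix, and add a regression test.
EOF
```

Inside a session, `/fix-issue 1234` then expands to that prompt with `1234` filled in.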
Your CLAUDE.md might be trash, and maybe you’re using @file references wrong, burning tokens or biasing your context in ways you don’t want.
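For contrast, a decent CLAUDE.md doesn’t have to be long. A minimal sketch, with entirely made-up commands and rules, just to show the shape:

```markdown
# Project notes for Claude

## Commands
- npm run build   # build the project
- npm test        # run the full test suite once

## Style
- TypeScript strict mode; avoid `any`
- Follow the error-handling pattern in src/errors.ts

## Warnings
- Never edit anything under legacy/ -- that code is frozen
```

Short, specific, and checked into the repo so it’s pulled into context every session.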
LLM context windows only scale so far before you hit diminishing returns, especially once the model or tooling starts compacting them.
Plan first, use the planning modes to help you, and decompose the plan.
Have the model keep track of important context externally (like in markdown files with checkboxes) so it can recover when the context gets fucked up.
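As a sketch of that external-memory idea (the task and file names are invented for illustration):

```markdown
# PLAN.md -- working scratchpad

Goal: add rate limiting to the /api/upload endpoint

- [x] Read the existing middleware in src/middleware/
- [x] Pick an approach (token bucket, per user)
- [ ] Implement the limiter with a configurable burst size
- [ ] Add unit tests for burst and steady-state traffic
- [ ] Wire the middleware into the upload route
```

If the session derails or the context gets compacted, pointing the model back at this file lets it resume from the unchecked boxes instead of re-deriving everything.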
https://www.promptingguide.ai/
https://www.anthropic.com/engineering/claude-code-best-practices
There are community guides that take this even further, but these are some starting references I found very valuable.
While you’re right that it’s a new technology and not everyone is using it right, if it requires all of that setup and infrastructure to work, are we sure it provides a material benefit? Most projects never get that kind of attention at all; if AI integration requires it, then right now it may be more work than it’s worth.
“If I need to write boilerplate and learn a new skill, is it really worth it?”
So even more work than actual coding.
Yup. It’s insanity that this is not immediately obvious to every software engineer. I think we have some implicit tendency to assume we can make any tool work for us, no matter how bad.
Sometimes, the tool is simply bad and not worth using.
Everyone is a senior engineer with an idiot intern now.
Even writing an RFC that mostly describes a mildly complicated feature takes so many words, and so much back-and-forth with stakeholders, that it can be a full-time job. Imagine an entire app.
Describing what they want in plain, human language is impossible for stakeholders.
‘I want you to make me a Facebook-killer app with agentive AI and blockchains. Why is that so hard for you code monkeys to understand?’
You forgot we run on Fritos, Tab, and Mountain Dew.
Maybe he want to write damn login page himself.
Not say it out loud. Not stupid… Just proud.
You want the answer to the ultimate question of life, the universe, and everything? Ok np