If you automate all aspects of your life, what have you got?
As always, the process matters more than the result.
Whenever I’m learning a new resource or concept and pull up the example code, I have the habit of typing it out instead of copy-pasting. Is it slower? Definitely. But it makes a big difference in actually understanding what you’re doing.
How do you compile it? Do you stick the paper in the CD tray?
i hear rumors that as many person-hours are spent cleaning up the messes left by LLMs as are saved having them write the code. has anyone found that to be true or am i just talking out of my ass?
imo more. just a hunch
Writing code is only the tip of the iceberg. You actually have to:
- understand how the company works
- understand the use case you are managing and how it relates to other business flows
- understand the strengths and weaknesses of the technologies, libraries and frameworks involved
- decide which one to use and how
- think through all possible corner cases, evaluating their frequency and importance
- only at the end, write, test, and optimize the code
While large language models can help with the last step, they are very limited in the previous ones, except as a search engine on steroids.
More like a search engine on LSD.
AI results are always shit when trying to find anything not completely obvious. More often than not you end up with a hallucinated reality that has absolutely no value.
No, AI results can be quite good, especially if your internal documentation is poor and disorganised. Fundamentally you cannot trust it, but in software we have the luxury of being able to check solutions cheaply (usually).
Our internal search at work is dogshit, but the internal LLM can turn up things quicker. Do I wish they’d improve the internal search? Yes. Am I going to make that my problem by continuing to use a slower tool? No.
“It’s quite good” “you cannot trust it”
What is your definition of good?
What a recipe for disaster…
Something that you can’t trust can be good if it is possible to verify without significant penalties, as long as its accuracy is sufficiently high.
In my country, you would never just trust the weather forecast if your life depended on it not raining: if you book an open-air event more than a week in advance, the plan cannot rely on the weather being fair, because the long-range forecast is not that reliable. But relying on the forecast is OK if the cost of it being wrong is that you carry an umbrella, or change plans at the last minute and stay in. It’s not OK if you have no umbrella, or if staying in would cost you dearly.
In software development, if you ask a question like, “how do I fix this error message from the CI system”, and it comes back with some answer, you can just try it out. If it doesn’t work, oh well, you wasted a few minutes of your time and some minutes on the CI nodes. If it does, hurrah!
Given that the alternative, in practice, is often spending hours digging through internal posts and messaging other people (disrupting their time) who don’t know the answer either, only to end up with a hacky workaround, this is actually well worth a go - at my place of work, anyway.

In fact, let’s compare the AI process to the internal search one. I search for the error message and the top 5 results are all completely unrelated. That isn’t much different from the AI returning a hallucinated solution; the difference is that to check the hallucinated solution I have to run the command it gives (or whatever), whereas to check the search results I just have to read the posts. So there is a higher time cost to checking the AI solution: it probably only takes 30 seconds to click a link, load the page, and read enough of it to see it’s wrong, whereas the hallucinated solution, as I said, will take a few minutes (of my time actually typing commands, watching them run, looking at results - not counting waiting for CI to complete, which I can spend doing something else). That, roughly, is the ratio for how much better the LLM needs to be than search (in terms of % good results) to come out ahead.
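To put very rough numbers on that ratio (these are guesses, not measurements), the break-even works out to something like:

    # Back-of-the-envelope sketch; the costs and hit rate below are made-up assumptions.
    check_search = 0.5   # minutes to click a search result and see it's irrelevant
    check_llm = 3.0      # minutes to actually try out an LLM suggestion
    hit_search = 0.05    # assumed fraction of search results that answer the question

    # Expected time per good answer is (cost per check) / (hit rate), so the LLM
    # breaks even once its hit rate is (check_llm / check_search) times higher.
    breakeven_llm = hit_search * (check_llm / check_search)
    print(breakeven_llm)  # ~0.3 -> the LLM needs roughly a 6x better hit rate

With those made-up numbers, the LLM only has to return a usable answer about six times as often as the internal search does for the extra cost of checking it to wash out.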
Like I said, I wish that the state of our internal search and internal documentation were better, but it ain’t.
Good point. Reading the documentation of the library and the source code is often a better use of a software developer’s time.
In order to be effective at software engineering, you must be familiar with the problem space, and this requires thinking and wrestling with the problem. You can’t truly know the pain of using an API by just reading its documentation or implementation. You have to use it to experience it. The act of writing code, despite being slower, was a way for me to wrestle with the problem space, a way for me to find out that my initial ideas didn’t work, a way of thinking. Vibe coding interfered with that.
If you’re thinking without writing, you only think you’re thinking.
– Leslie Lamport
Yep. This is what I don’t get about people who are using these spaghetti-bots. How do they figure out the right solution to a problem without actually walking around the whole perimeter of the problem?
My guess is they are not, and they’re just waiting until someone complains and they’ll get a job somewhere else and leave the mess for someone else(‘s chatbot) to clean up.
Between that and the death of open source, our industry is about to become a disaster area.
Reading only the headline: “why would you write code by hand? Would your fingers cramp up? How are you going to test it?”
Reading the article: “Oooohhhh.”
@W3dd1e @codeinabox For what it’s worth, a lot of pretty famous programmers did/do write code by hand. They often have an assistant of some kind do the actual typing after it’s done. It can be an interesting experience.
Humans want to accomplish things, but business wants to get shit done. The two will always be at odds.
Ya, but one is shit.
Sometimes both:-P
But one is always shit.
There are true artists who make paper by hand like they used to in China. It’s beautiful. It’s much more heterogeneous than machine made paper. It makes for wonderful gifts.
It would be prohibitively expensive and lower quality to use handmade paper towels, toilet paper, etc.
Code is not something that you consume, it is something that you build.
You want a house made of cardboard because it’s affordable? Sure, just don’t start pretending like it’s an actual house, and don’t try to rent, sell, or lend it to anyone.
Maybe you’ll have a different opinion about code when it’s almost all disposable.
Yes, some places still make bespoke $30 steak hamburgers. But most people eat mass-produced, less expensive McD’s.
Yes, but the hand-crafted artisanal version I wrote? You can copy that infinitely, already.
There’s no added value by copy/pasting infinite slop burgers.
And if the burgers are made by people who just grab random items to make something that looks like a burger, whether or not it’s edible, no one would eat them.
AI code is unreliable, unsafe, and worst of all, has no basis in even basic intelligence. I’d trust code produced by a monkey more than code produced by an AI. Trusting AI code is like trusting a very realistic drawing of a tunnel and running into it while knowing that it’s a drawing, because “it looks like a real tunnel so it must work like one”. AI code is not code, it’s keywords scrambled together to look like code.
Except that code is generally made of… other code. And generally gets transformed from some kind of source form to some kind of deployment form. And then executed by some kind of runtime, made of code. On some kind of OS, made of code.
The level of abstraction at which you make paper by hand is pretty much constant. The level of abstraction at which you make even a “hello world” program by hand is extremely flexible.
Depending on your operating environment, even an incredibly complex and impressive task may just be a matter of passing the right flag to a CLI tool that you already use.
Being attentive to the manual experience of how a codebase “feels” is pretty important for making sure a system has a coherent (read: not over-engineered) approach to bridging the high and low levels of the tasks it performs.
Not paying attention to that, because you can delegate it to a chatbot, is kind of like forgoing having light switches in a room because you can just keep a crane parked outside and have it slam a lighting fixture through the ceiling when you need it and then dump a mound of dirt to cover the hole when you don’t need it.
Like, that functions and accomplishes the task in a pinch, but you do not want to try occupying that room in person at any point to do any kind of detail work.
Maybe that metaphor would make sense if technology hadn’t already completely saturated the world before we had this new process.
Maybe coding isn’t as hard as making artisan hand made paper…
There is something similar in Japan called gampi paper. It played a role in Japanese literature and in a movie, The Pillow Book (1996).