What’s the largest program, in line count (wc -l will be close enough, or open the file in Notepad++ and scroll to the end), that you’ve created this way?
I got one up to around 500 lines before it started falling apart when I tried to add new features. It was a mix of Rust and HTML; total source file size was around 14 kB, with what I might call a “normal amount” of comments in the code.
If you count only 100% vibed code, it’s probably a 20-line script.
Usually I tweak the code to fit my needs, so it’s not 100% vibes at that point. That way I have built a bunch of scripts, each about 200 lines long, though that arbitrary limit is just my personal preference. I could put them all together into a single, horribly unreadable file, which would come to maybe 1,000 lines per project. However, the vast majority of them were modified by me, so that doesn’t count.
If you ask for something longer than 20 lines, there’s a very high probability that it still won’t work by the 15th round of corrections. Either GPT just can’t handle things that complicated, or my needs are so obscure and bizarre that the training data simply didn’t cover those cases.
Try Claude, from Anthropic. I’ve noticed Copilot and Google’s models getting hung up much sooner than Claude does.
Also, I find that if you encourage a good architecture up front, like a formalized system of shared variables with atomic or mutex-protected access behind getter/setter functions, the project has more legs than if you let the AI work out fiddly access-protection schemes one by one.
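To make that concrete, here’s a minimal Rust sketch of the kind of pattern I mean (the type and method names are hypothetical, not from any actual project): all access to a piece of shared state goes through one getter and one setter, so lock handling lives in exactly two places instead of being scattered through every function the AI generates.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical example: shared state wrapped so callers never touch the
// Mutex directly. The AI (and you) only ever call get() and add().
#[derive(Clone)]
struct SharedCounter {
    inner: Arc<Mutex<u64>>,
}

impl SharedCounter {
    fn new(start: u64) -> Self {
        SharedCounter { inner: Arc::new(Mutex::new(start)) }
    }

    // Getter: the lock is acquired and released inside this one function.
    fn get(&self) -> u64 {
        *self.inner.lock().unwrap()
    }

    // Setter-style mutator: likewise, no caller handles the lock itself.
    fn add(&self, n: u64) {
        *self.inner.lock().unwrap() += n;
    }
}

fn main() {
    let counter = SharedCounter::new(0);
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = counter.clone();
            thread::spawn(move || {
                for _ in 0..1000 {
                    c.add(1);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("{}", counter.get()); // prints 4000
}
```

The point isn’t this particular counter; it’s that when the access protocol is formalized up front, later “add a feature” prompts can’t quietly introduce a new, slightly different locking scheme.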
I just checked and it’s 278.