I am a developer. While AI is being marketed like snake oil, the things it can do are astonishing. One example: it reviews code a lot better than human beings. It's not just finding obvious errors; it catches logical errors that no human would have caught.
I see people forming two camps: those who think AI will solve everything and those who think AI is useless. Neither group is right.
One example: it reviews code a lot better than human beings. It's not just finding obvious errors; it catches logical errors that no human would have caught.
No, it does not.
Source: Open-source contributor who’s constantly annoyed by the useless CodeRabbit AI that some open source projects have chosen to use.
And how many errors is it creating that we don't know about? You're using AI to review code, but then someone has to review what the AI did, because it fucks up.
If humans code something, then humans can troubleshoot and fix it. It’s on our level. But how are you going to fix a gigantic complicated tangle of vibe code that AI makes and only AI understands? Make a better AI to fix it? Again and again? Just get the AI to code another system from scratch every month? This shit is not sustainable.
I’m not having the same experience.
Maybe reconsider which model you’re using?
If there were a model that coded perfectly, there wouldn't be a plurality of models. There would just be THE model.
It's like companies competing over math formulas for structural engineering, where some work and others don't. It's insanity.