Studies continue to show that AI routinely generates unsafe code, and even human code reviews often don’t catch major problems. AI-generated code should not be trusted or accepted, and projects that accept it should be treated as compromised.
What is “AI vulnerable”? What is the problem here? Claude isn’t reverse-Midas; it’s not like everything it touches turns to shit.
Alright, well I use Claude in my code, and it produced a better library than anything publicly available on GitHub, just from me feeding a PDF of the module’s datasheet into an LLM.
I’m all for not blindly trusting AI — give it limits, review and test everything it makes — but flat-out rejecting any AI-generated code as “compromised” feels reactionary to me.
If you don’t understand how existentially bad the problem is for FOSS, you aren’t paying attention.
I understand the problems, but I don’t think they amount to something as simple and closed-minded as “all LLM-generated code is bad and evil” — unless thinking critically takes too much time and energy, I guess? Some people just have to make blanket decisions because it’s easier for them.
Humans can barely write safe C code, so I definitely don’t trust AI to. I’m not even blanket against AI assistance in programming, but there are way too many hidden landmines in C for an LLM to be reliable with.
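To make the “hidden landmines” point concrete, here is one hypothetical example of the kind of subtle C bug that looks correct and routinely slips past both LLMs and human reviewers: `strncpy` does not null-terminate the destination when the source is at least as long as the buffer, so the terminator has to be written manually.

```c
#include <string.h>

/* Hypothetical sketch of a classic C landmine: strncpy() does NOT
 * null-terminate dst when src is n bytes or longer. Forgetting the
 * manual terminator leaves dst unterminated, and any later string
 * read (printf, strlen, ...) walks off the end of the buffer. */
void copy_name(char *dst, size_t n, const char *src) {
    strncpy(dst, src, n);   /* may leave dst without a '\0' */
    dst[n - 1] = '\0';      /* the easy-to-forget line that makes it safe */
}
```

Code that omits that last line compiles cleanly, passes casual tests with short inputs, and only misbehaves when an input exactly fills the buffer — precisely the kind of defect a quick review is unlikely to catch.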
I use it in C++ and it has been very helpful. The OP appears to be just blanket against AI assistance in programming? There’s no indication of the degree to which Claude was involved here, or how much blind trust the human reviewers gave it.
I agree with you. More to the point…why accept code from anyone (clanker or meatbag) without provenance?
If I don’t know you, and you can’t explain what it does? Straight into the garbage it goes.
The issue isn’t AI contamination. It’s accepting code from any source without provenance and accountable review.
I suspect the anti-AI push is coming from Russia or China, probably because the AI products that are in such high demand right now are of Western origin.