Studies continue to show that AI routinely generates unsafe code, and even human code reviews often don’t catch major problems. AI-generated code should not be trusted or accepted, and projects that accept it should be treated as compromised.
Alright, well I use Claude in my code, and just from feeding a PDF of the module’s datasheet into an LLM it produced a better library than anything that was publicly available on GitHub.
I’m all for not blindly trusting AI: give it limits, review and test everything it makes. But flat-out rejecting any AI-generated code as “compromised” feels reactionary to me.
If you don’t understand how existentially bad the problem is for FOSS, you aren’t paying attention.
I understand the problems, but I don’t think they amount to something as simple and close-minded as “all LLM generated code bad and evil”, unless thinking critically takes too much time and energy, I guess? Some people just have to make blanket decisions because it’s easier for them.