Alright, well I use Claude in my coding, and just by feeding it a PDF of the module's datasheet it produced a better library than anything that was publicly available on GitHub.
I’m all for not blindly trusting AI: give it limits, review and test everything it makes. But flat-out rejecting any AI-generated code as “compromised” feels reactionary to me.
I understand the problems, but I don’t think they amount to something as simple and close-minded as “all LLM-generated code is bad and evil”, unless thinking critically takes too much time and energy, I guess. Some people just have to make blanket decisions because it’s easier for them.
If you don’t understand how existentially bad the problem is for FOSS, you aren’t paying attention.