The latest proposed guidelines for tool-generated contributions to the Linux kernel were posted to the mailing list on Friday, with the coding tools in question largely focused on AI-generated content.

Intel Linux engineer Dave Hansen posted the third draft of the proposed AI/tool-generated content guidelines for the Linux kernel. The new guidelines rename mentions of “LLM” to “coding assistant”, call for noting when testing of a change was done by a tool, and make some other minor revisions.

  • nyan@lemmy.cafe:

    Thing is, most LLM submissions are low-quality as well as low-effort. If you forbid them, well-meaning numbskulls will hopefully not clutter your bug tracker by submitting them, and those who are more interested in adding a line to their resume than following the rules can be blacklisted immediately for breaking said rules. As for the odd undeclared one that’s not low-quality and slips through without being spotted, no big deal. By my understanding, they’re unicorns, though.

    Because the submissions are so low-quality overall, chances are that projects requiring that submitters admit there was an LLM involved in their submission will end up effectively shadow-banning most such submitters because it isn’t worth wading through their tripe. That’s just a different version of non-transparency.

    The endgame we want isn’t blacklisting LLM submissions into perdition, it’s the code version of xkcd 810. Currently, most LLM code submissions are about as useful and desirable as porn spam on a forum. Maybe in a few years, that’ll be different. If it is, policies can be reviewed.