• ell1e@leminal.space · 5 days ago

    Some of us respectfully disagree with LLMs for programming being “appropriate and legitimate”, at least if that involves generating code and not just locating bugs.

    Local LLMs retain significant issues like the one shown in this clip (https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567), unless your model uses 100% properly licensed training data, which no code LLM I have found appears to do.

    • msage@programming.dev · 4 days ago

      Locating bugs is one of the most important tasks in programming, and if devs can’t do that, nor are willing to learn how, they are fucked.

      There’s no other way of saying it. Can’t wait for the AI bubble to pop.

      • ell1e@leminal.space · 4 days ago

        LLMs can sometimes point out potential trouble spots, which is also one of the uses that may avoid injecting problematic code (if the LLM is prevented from suggesting a fix). But sadly, that doesn’t seem to be the kind of use KDE is currently limiting itself to.

      • Bazoogle@lemmy.world · 4 days ago

        You are using current AI as your baseline. There will come a point where writing code will mean zero bugs or vulnerabilities. Humans cannot do that. AI, whether we want it or not, will one day be able to. Idk if we’re talking 10 years or 40 years, but it will happen.

        • msage@programming.dev · 3 days ago

          LOL at that.

          LLMs need to disappear before that happens.

          In order to not have any bugs, and for anything to produce perfect software, you need to define perfect business rules; if managers could do that, they wouldn’t have needed developers for decades.

          If we have AI that can produce the perfect code, you won’t have access to it. Why give everyone something so powerful when you can now run circles around everyone else with it?

          • Bazoogle@lemmy.world · 17 hours ago

            If we have AI that can produce the perfect code, you won’t have access to it.

            If one company can make it, then others will make it too. Someone will be first, but others will follow behind. It is too critical to each country’s national security not to research it themselves, let alone the profit the companies can make. It will definitely be longer before someone like me gets access, and even longer before it is cost effective, but it will eventually happen.

            In order to not have any bugs

            I should have been clearer. I meant exploitable vulnerabilities in the software. “Bugs” and “features” can have an overlap, but that’s not what I meant. The only attack surface left would be the human one, which would still be a massive vulnerability like it currently is.

            • msage@programming.dev · 5 hours ago

              That’s not how anything works.

              You are assuming a god-like coder entity which can consider everything, and that’s a whole new problem which we can’t solve right now.

              And if it’s a matter of national security, it won’t be shared with others, so if one country stumbles upon it, others won’t know how.