Just a PSA.

See this thread

Sorry to link to Reddit, but not only is the dev sloppily using Claude to churn out something like 20k-line PRs, they are also completely crashing out: banning people from the Discord (actually, I think they've wiped everything from the Discord now) and accusing people who fork their code of theft.

It’s a bummer because the app was pretty good… thankfully Calibre-web and Kavita still exist.

  • chicken@lemmy.dbzer0.com · 8 hours ago

    I will complain about quantity: in many areas where open source projects compete with closed source commercial products, they have not achieved feature parity or a comparable level of polish, and quantity matters. So do, as someone else touched on, quality-of-life improvements to the process of writing code, like ease of acquiring and synthesizing information. That doesn’t mean it’s necessarily a worthwhile tradeoff, but how much is really being sacrificed depends on what exactly is being done with an LLM. To me, one part of what’s described here that’s clearly going too far is using it to automate communication with other people contributing to the project; there’s no way that is worth it.

    As for the gun thing, I will support entirely banning LLM-powered weapons intended to kill people; that’s an easy choice.

    • shads@lemy.lol · 7 hours ago

      I still don’t think quantity is lacking, and when quality is there, it’s amazing how often open source becomes a de facto standard. How many video tools are just a shim over FFmpeg, for example?

      Yet again, the problem I see is that LLMs are a seductive form of software cancer: it starts as a little help, and before you know it we have booklore-like projects. If open source can’t be better, it will be subsumed in slop.

      Not disagreeing about LLMs as a weapon. In a functional society, the person who pulls the trigger on any weapon is responsible for the consequences of that action. I wonder how eager the CEOs of these “AI” companies would be to weaponise their creations if they were held personally accountable for every injury caused by their product. By a jury. Preferably with explicit laws stating they could not indemnify themselves or gain immunity.

      • chicken@lemmy.dbzer0.com · 6 hours ago (edited)

        One example of a place where quantity is lacking is web browsers. Another might be mobile operating systems. I am glad projects like Firefox and GrapheneOS exist, but it’s obvious that the volume of work needed to achieve broad compatibility and competitiveness for these types of software is a limiting factor. As for the idea that any LLM use is a slippery slope: the way to avoid the slippery-slope fallacy would be to have compelling evidence or rationale that any use really does lead naturally to problematic use. Without that, the argument could apply to basically any programming tool that gets associated with things done badly (e.g. Java), but I think it isn’t usually the case that a popular tool has genuinely no good or safe ways to use it, and I don’t think that’s true for AI.

        • shads@lemy.lol · 5 hours ago (edited)

          How many browsers would you like me to list? Yes, a lot of them are spins on some of the big incumbents, but there is a much wider variety than you might credit. Rendering engines, on the other hand: yeah, there’s not much variety there.

          Mobile operating systems are something of a special case, I’m afraid: the telcos and incumbents have way too heavy a thumb on the scale, and if any newcomer looks like breaking the duopoly, it will be treated as an existential threat. It will be associated with paedophilic terrorists faster than you can blink.

          Both, incidentally, are categories where I will never be happy with slopcode. But hey, if anyone wants to use a slop-coded browser, I just heavily suggest you never enter any passwords or personal information while using it.

          We are actively building a history of cases where LLM usage correlates heavily with that slope you mentioned. But hey, that’s OK, we aren’t allowed to call things out before they happen; judgement may only be passed once the damage is done, right?

          Out of curiosity: we know that LLM usage increases cognitive deficit and in some cases leads to psychosis. How many fatalities would you say is an acceptable number before governments act? How degraded do we let our societies get before we rein it in?

          At some point the bubble is going to burst, and we will see a number of countries bankrupted in the name of “AI”. I’m really curious to see if we learn our lessons at that point. Should be interesting.