• normalentrance@lemmy.zip · +9 · edited · 2 hours ago

    It feels like relying on GPS while driving around. If you know the roads well and just want some help with live traffic or somewhere you haven’t been before, it’s a decent tool.

    If you rely on it because you don’t want to think and just want to press the easy button, you’re going to have a bad time sooner or later.

    Back to software, I think there are a lot of people introducing concepts they don’t understand or can’t maintain (either because it’s poor-quality slop or because it’s too advanced for their current level of understanding). You can do a few turns like this until you’re stuck burning tokens in a loop without moving forward in any meaningful way.

    I try to avoid taking the easy route myself unless I’ve burnt too much time stuck on some small detail. Ultimately I feel it is super important to understand what you are delivering, whether you write it yourself, copy a Stack Overflow post, or use an LLM. Once you commit and push to prod, you’ve got to deal with that crap.

  • raspberriesareyummy@lemmy.world · +4/-3 · 2 hours ago

    Muhaha. Those morons were never software engineers in the first place. A software engineer would benefit no more from an LLM than from a deterministic assistant (templates), nor would they be stupid enough to label a stochastic slop generator as “AI”.

    (Yes, this is a “no true Scotsman” kind of argument, yet I stand by it. People who call this bullshit AI, as well as people who claim it is better than coding stuff yourself, should not be let anywhere near any kind of software more relevant than a mobile game, and probably not even those.)

  • dejected_warp_core@lemmy.world · +22/-4 · 7 hours ago

    (X) Doubt

    As a Sr. Engineer, I completely get that my situation may be wildly different from what’s cited in the article.

    Right now, I’m using AI “in the loop” rather than “as the loop”. That’s a big difference. And I’m getting my ass kicked routinely on review for dumb-ass things that I’m letting slide from AI-generated output. And rightly so. Plus, models routinely lead me down sub-optimal blind alleys while dreaming up really stupid ways to fix problems. The level of (re)prompting I have to provide to get decent-quality results suggests a post-grad with encyclopedic knowledge of software engineering as it exists online, but zero real-world experience. It’s both impressive and dangerous as a replacement for software engineering.

    In the mode I describe above, I’m not losing the ability to do anything. I can see how one could surrender some coding chops or familiarity with a whole language or stack, in favor of automation. But all you have to do is not do that.

    I will say that as a rapid-prototyping technology, it’s nothing short of miraculous. I’ve watched junior engineers knock together medium-weight applications, complete with browser UI/UX and decent workflow, in less than a week. This is great for showing value or putting something semi-functional in front of management and/or customers. But pivoting those prototypes into something maintainable is an utter nightmare. Depending on how beholden to AI and forever prompt-looping with “skills” and MCPs you want to be, I suppose it’s possible to just keep mashing the AI button. But at some point, you’re going to need to get inside there to fix security problems or bugs that elude this workflow. What then?

    • tinfoilhat@lemmy.ml · +8 · 7 hours ago

      I joined a project that was forced to use some vibe coded solution that an intern cooked up – marketed as “solution for data pipelining”.

      There are no tests, every semantic query recalculates embeddings every time, and it is held together with so much bubble gum and “glue code” that nobody feels confident in any of the data we’re showing our customer.

      It’s great for rapid prototyping, and then straight to the trash.

      • northface@lemmy.ml · +3 · edited · 4 hours ago

        Thing is, as we all know, prototypes rarely make it to the trash bin once managers and product owners have a stake in the project. Which becomes an even bigger problem now that fewer and fewer humans are involved in producing said prototypes.

        I had a meeting with a customer who proudly proclaimed they do “full-on agentic coding” at their startup, and one of their developers mentioned their entire codebase had been rewritten three times in the week before the meeting took place. I do not have high hopes for their project ever being refactored by humans involved in anything other than light UAT before customer demo time.

  • vogi@piefed.social · +32 · 10 hours ago

    It’s a silver lining of AI that you can easily tell who’s a big baby idiot and who’s actually worth engaging with.

    • TotalCourage007@lemmy.world · +3 · 4 hours ago

      Honestly yeah, it’s like wearing a huge red AI flag. Can’t imagine being stupid enough to fall in love with a not-secure CHATBOT.

    • very_well_lost@lemmy.world · +26 · 9 hours ago

      Preach.

      The AI “revolution” is the thing that finally killed my imposter syndrome as a software engineer. Not because I can write better code than AI (that’s a very low bar), but from listening to all these breathless idiots talk about how they’re “10x-ing my productivity!” or how “AI has replaced search for me!” or how “In 6 months no one will have to manually write code anymore!”

      • schema@lemmy.world · +7 · edited · 7 hours ago

        Similar for me. What I find ironic is that AI has already run into a brick wall. Its inherent statelessness by design means that AI is unlikely to be suited for anything more than isolated, well-defined tasks in the near future. Still usable as a tool, but without someone who is actually experienced, it will result in disaster.

        And even on smaller tasks it can fuck up, especially if the person prompting it is incapable of writing the code themselves, since they don’t know how to properly design it and don’t spot the issues. Like everything with AI, it looks impressive at first glance until you look at it for more than 10 seconds and spot the metaphorical sixth finger.

        What we see currently with AI getting “better” at coding is more or less duct tape to make it work. Basically, they create agents to bolt on the state: more layers between user and model, iterative processes to make the answers better, etc., and “memory”, which in essence is just an ever-growing prompt managed by the agent. But in the end, this won’t fix the inherent problem, so it will only do so much, and it is already hitting another ceiling. It introduces state decay. With the agent method it’s not really possible to “take away” memory, so if you gave it multiple versions of the same code (as you would if you work with AI), the AI never really forgets about the old code. It can suppress it through agent instructions (more duct tape), but the more there is, the more it bleeds through, which can make the AI reintroduce old code or base assumptions on outdated things.

        There is no fix without changing the inherent way models work, which would introduce complexity beyond what is currently feasible in computing (and current AI is already gobbling up all computing resources as is).
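        A toy sketch of this point (all names are hypothetical illustration, not any real agent framework): the model call itself is stateless, so the agent’s “memory” is just the whole history replayed as an ever-growing prompt, and old material never truly goes away.

```python
# Toy model of agent "memory": the model is stateless, so the agent
# re-sends the entire accumulated history as the prompt on every turn.
# Everything here is a hypothetical sketch, not a real framework API.

def stateless_model(prompt: str) -> str:
    # Stand-in for an LLM call; it sees ONLY what is in `prompt`.
    return f"(reply based on {len(prompt)} chars of context)"

history: list[str] = []

def agent_turn(user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    prompt = "\n".join(history)            # the ever-growing prompt
    reply = stateless_model(prompt)
    history.append(f"Assistant: {reply}")  # old code/answers stay in forever
    return reply
```

        Every old version of code you paste in stays in `history`; instructions can tell the model to ignore it, but nothing is ever removed from what the model conditions on.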

      • foodandart@lemmy.zip · +3 · 8 hours ago

        As someone that wanted to write but never had the time to learn, what’s so bad about writing code manually?

        Seems like if you can learn to do it well, you will be fairly well set with that skill.

        • very_well_lost@lemmy.world · +6 · edited · 8 hours ago

          If you’re someone who cares at all about the quality and consistency of your craft, there’s absolutely nothing wrong with manually writing code.

          If you’re a misanthropic “techno-feudalist” who thinks of code as nothing more than an asset to sell, then pumping out as much code as quickly as possible without any human intervention is a very attractive proposition.

          Tech, sadly, is absolutely infested with these people at all levels.

        • FauxLiving@lemmy.world · +3 · 7 hours ago

          You will still need to learn programming manually.

          The process of struggling to understand and synthesize working code is a critical part of learning. Skipping it feels easier, but you’re hurting your ability to understand coding.

          Sure, you can make an LLM generate code and if you’re inexperienced it can outperform you on the basic tasks that you’re given as exercises. This is a trap that a lot of students fall into. It’s very easy to let LLMs do the ‘hard work’ part of learning while you just read the textbook or watch a video. Unfortunately, the hard part is the part that builds your skillset.

          It’s just like how you can’t just watch a video about physical fitness and then use a robot to lift the weights for you. Sure, you get to the end of your sets faster and you’re not physically tired and sore but you won’t actually benefit in the ways that matter.

        • JackbyDev@programming.dev · +2 · 8 hours ago

          There’s nothing wrong with writing code manually. Over the past few months LLMs have gotten a lot better at writing code than they were before, but they can still make weird mistakes.

  • collapse_already@lemmy.ml · +36 · 11 hours ago

    We have been interviewing for entry-level positions and the new grads know less than ever before. I don’t really care what they know; I am looking for evidence that they can think, but I usually ease them into thinking scenarios by asking easy foundational questions like how many bits in a byte. You would think I was asking them to explain the Schrödinger wave equation… One candidate was wavering between 13 and 17…

    • foodandart@lemmy.zip · +4 · 8 hours ago

      …easy foundational questions like how many bits in a byte…

      GTFO.

      I mean, yeah… perhaps it’s to be expected. https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z - if this is true, it’s because the methods of using computers and various devices have been infantilized and made too easy.

      Yeah… let’s obscure the inner workings of computing and make the process as opaque to the user as possible. It’ll be fine… no negative consequences at all.

      Colleges do not matriculate anymore (that’s in the British sense of the word, where one has to show actual knowledge in the degree field one is seeking before enrolling, and TBH, they haven’t done so for a very long time, actually…) so this is what we get.

      Higher ed in the US is just about da moneys…

      • collapse_already@lemmy.ml · +2 · 8 hours ago

        It is ridiculous. I am interviewing for embedded systems development where we frequently write to specific bits in a register. I am sure these kids have had to learn something, but I can’t figure out a polite way to ask them to give me some examples of what.

        • foodandart@lemmy.zip · +5 · 7 hours ago

          There was a series of questions I heard in a political discussion about whether or not any given politician understood what the internet was, and if they really had any idea of how to regulate it.

        They are: “Explain the differences between the internet, the world wide web, a search engine, and a browser.”

        If the person could not answer those four questions, well… they shouldn’t have been trying to write legislation about it. I think that still stands as a basic foundational step to start from.

              • foodandart@lemmy.zip · +2 · 7 hours ago

                  Well, I for one am delighted to find lemmy and, in a small way, do my bit to resurrect a minuscule, tiny bit of it.

                  It’s Mandelbrot patterns, all the way down… right? Smaller iterations of the larger seed.

                  Best we can do…

      • collapse_already@lemmy.ml · +2 · 8 hours ago

        My company probably doesn’t get the best candidates (defense contractor that pays somewhat less than market rate), but yeah.

    • Jako302@feddit.org · +2 · 8 hours ago

      One candidate was wavering between 13 and 17…

      Please tell me that’s a joke. Or were they trying to switch fields and were a baker or something before? I just can’t accept that someone who would struggle with that question, even in a stressful situation, ever took a single comp science class.

      • collapse_already@lemmy.ml · +1 · 8 hours ago

        I wish it were a joke. Maybe they were deliberately getting the answer wrong to waste our time, but the body language was not consistent with someone fucking with me.

  • circuitfarmer@lemmy.world · +24 · 12 hours ago

    When you start relying on something else, it’s quite natural and expected to no longer be good at the thing now being done for you.

    But in this context, it’s a net negative. While you can certainly write more code while using the tool, you’re almost always writing worse code. And you still get the atrophy, so the result overall: now you’re not good at the thing, and neither is the tool you’re using.

    And remember, AI models need constant retraining as systems and approaches are updated, languages change, etc. Where is that training data going to come from? From the people now worse at coding than they were before.

    • foodandart@lemmy.zip · +6 · 7 hours ago

      The atrophy scares the hell out of me.

      Years ago, I would often have long conversations with my dad about how manual skill sets in the trades (my training) and in engineering in the field (which was his bailiwick) were being lost to the pivot towards college degrees for every student, including the ones that preferred to work with their hands.

      Three decades on, I witnessed the full turn when construction firms had to - and still have to - mass import workers from Central and South America (legally and illegally) just to get things built. NGL, there are some scary good builders that have been brought in, and those people work insanely hard.

      Yes, it’s slowly pivoting back as more boys and men opt for the trades and become apprentices and journeymen, but to get the skill sets needed to reach a master’s level, you’re looking at at least 20k hours. We’re still a decade out - at best - before we get enough kids through the system and into steady work that they can step up, strike out on their own, and make crazy bank. Skilled craftsmen and women can easily earn 100 bucks an hour in the right markets, and the rich folks will be glad to pay.

      Goddamn, it’s gonna be scary until that sorts itself out in another decade or so (and that does pin itself on the hope the financially feckless idiot in the White House doesn’t torpedo the economy…)

  • jj4211@lemmy.world · +30 · 13 hours ago

    I just don’t get it. Even the purportedly best models screw things up so much that I can’t just leave them to the job without reviewing and fixing the mess they made… And I’m also drowning in pull requests that turn out to be broken, each proudly bearing “co-authored by Claude”… Like, it manages to pass their test case, but it’s so messed up that it’s either explicitly causing problems or has a bunch of random unrelated changes.

    I feel like I’m being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.

    The closest I got was a chore with perfect criteria: “address all warnings from the build”. Then let it go and iterate. After 50 rounds, each ending with “ok, should be done now, everything is taken care of, just need to do a final check”, it had burned through most of my monthly quota before succeeding. Then I looked at the proposed change… and it had just added directives to the top of every file telling the tools to disable all the warnings… This was the best Opus 4.6 could do…
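    For anyone who hasn’t seen the trick: a hypothetical illustration (Python linter directives, not the actual diff from this anecdote) of how blanket file-level suppression makes warnings vanish from the build output without fixing anything.

```python
# Hypothetical illustration of the shortcut described above: blanket,
# file-level "disable" directives that silence the tools instead of
# fixing the code.

# flake8: noqa
# mypy: ignore-errors
# ^ the two lines above suppress every flake8 warning and every mypy
#   error for this entire file

import os, sys  # unused imports: normally flagged, now reported as "clean"

unused_variable = 42  # likewise suppressed rather than removed
```

    The build report goes green, and every underlying issue is still sitting in the code.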

    Now sure, I can have it tear through some short boilerplate, or notice a pattern I’m doing and let me tab through it. But I haven’t seen this “vibe” approach working at all…

    • kescusay@lemmy.world · +26/-1 · 12 hours ago

      I feel like I’m being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.

      That’s because you are being gaslit.

      The people making those claims are either a) not developers in the first place, with no awareness of just how shit the “products” they’re pushing are, b) paid astroturfers trying to prop up AI, or c) former actual developers who’ve become addicted to the speed that’s possible with AI and are downplaying how crappy their own code quality has become, because they have no familiarity with their codebase anymore and have forgotten how to write so much as a for loop.

      All these people claiming 10x or 100x gains, and everything they’re making is garbage no one should or would touch with a ten-foot pole.

        • zbyte64@awful.systems · +2 · 8 hours ago

          If a software project is built better but no one else ever bothers to maintain it, can it even be said to be better?

        • mabeledo@lemmy.world · +4 · 11 hours ago

          Still terrible code.

          I’ve seen bad coders trying to merge hundreds of lines of code where maybe ten were needed. They rely on more experienced devs to tell them how to fix it, only to copy and paste suggestions straight from Claude.

          I mean if that’s the value someone provides, no wonder they fear for their future.

        • Ŝan • 𐑖ƨɤ@piefed.zip · +2/-2 · 11 hours ago

          Maybe not better, but þey have no ability to evaluate quality. But, yeah, þere are a lot of really bad programmers in þe market. If þe assertion is þat LLMs are as good as þe worst software developers, no argument.

          Capitalism created þis world. Generous salaries attracted people who just wanted good-paying jobs but weren’t passionate about coding; þat, combined wiþ corporate ambivalence to quality, led to a glut of mediocre developers and motivated movements like low-code, no-code, and now vibe code. It has been a vicious capitalist cycle.

    • flandish@lemmy.world · +12 · 12 hours ago

      what it seems to be doing, in your case and others i have seen, is pushing the burden onto those who “care” and really fully grok (no pun intended) the concept of a real code review. it’s exhausting.

  • ImgurRefugee114@reddthat.com · +117/-1 · 16 hours ago

    Lol! Losers. I’ve been programming for almost two decades and extensive use of AI hasn’t compromised my skills AT ALL! These slop machines can’t hope to compete with the quantity and magnitude of subtle bugs I write. My code was terrible long before I made bots have mental breakdowns trying to work with it.

    • Goodeye8@piefed.social · +14 · 11 hours ago

      AI also gives you the benefits of a middle manager. If everything works as intended you take the credit but if something breaks that’s not your fault, AI made the mistake. If they try to put the blame on you just say you have 6 agents working on 6 different domains all cross-reviewing their commits and you can’t be expected to review every single line of code yourself. Time to play corporate like a damned fiddle!

  • CaptainBasculin@lemmy.dbzer0.com · +11/-2 · 11 hours ago

    It’s really useful in creating base templates, but anything further than that and you won’t be able to read “your own” codebase if you depend too much on AI.

      • chronicledmonocle@lemmy.world · +14 · 9 hours ago

        And… if the AI can’t solve your problem after you’ve cultivated all that tech debt, you’re fucked. You should always know the code being written, whether vibe coded or otherwise.

          • jj4211@lemmy.world · +6 · 8 hours ago

            Also, a bunch of shit is about ready to burst out because they somehow decided to use wallpaper to hold a bunch of stuff to the wall instead of putting it in a closet. But it looked fine in the moment, so decided it was good enough.

  • gokayburuc@lemmy.world · +8 · 11 hours ago

    Muscles that are not used lose their function: they weaken and eventually become unusable. As humans hand more and more of their questions to artificial intelligence, their ability to learn and store information withers. If the brain can obtain something easily, it doesn’t feel the need to retain it. So knowing code no longer involves writing it; only the recognition function stays active, for when you see it.

  • thericofactor@sh.itjust.works · +60/-3 · 18 hours ago

    I notice myself getting lazier. Even for adding a .gitignore file I ask Claude now. It takes longer than typing it myself and probably costs more. But I don’t have to do anything but wait a few seconds.

    • cecilkorik@lemmy.ca · +43/-1 · 17 hours ago

      If I was paying for it, hell naw. But if my employer not only is willing to pay for it, but considers it a performance metric? I’m going to use it for fucking everything. These are the incentives they give me, I’m going to follow the incentives. Talking to Claude is what they pay me for, apparently.

      But like the article says, if I don’t continue practicing on my own code in my unpaid off-work hours, I imagine I’d be regressing in my skills too. I do that because I enjoy it as a hobby, but if I didn’t, I could see myself and probably a lot of other people getting rugpulled by this.

      • WFH@lemmy.zip · +21 · edited · 15 hours ago

        I’m not using it for the incentive. I’m using it to avoid punishment. The company I work for made it mandatory to use it daily. So I’m tokenmaxxing bullshit tasks so I can focus on interesting ones, but yeah I already feel it’s making me lazy because I sometimes can’t be bothered to read a log anymore. We are truly fucked.

        This company is working from terrible assumptions. They spent years hunting for the best engineers in the country (or so they pretend, anyway) and suddenly decided that

        • we are average at best and it is better and faster than most of us (it’s not)
        • software engineers don’t like to write code anyway (we do, at least when the challenge is interesting)
        • it will forever be more affordable than properly qualified engineers (oh boy it won’t)
        • a PM with Claude is as qualified as us to bring features to production (talk about tech stack suicide)
        • etc.

        They have either drunk the propaganda koolaid and are betting everything on this lie, or are so arrogant they think we can succeed where the largest AI investors in the world utterly failed (see GitHub, which can’t even get three nines of availability since they switched to full-AI code).

    • meme_historian@lemmy.dbzer0.com · +28 · 16 hours ago

      The thing that scares me (and why I’ve stopped using it): my brain automatically reaches for the shortcut whenever I would have to do deep thinking/planning.

      I have ADD, so getting my brain to focus and work on a task is not an easy feat to begin with. Now I’ve found myself multiple times a day unable to will myself to think about a problem but rather deferred to Claude. It’s seriously fucked up.

  • Appoxo@lemmy.dbzer0.com · +33/-2 · 16 hours ago

    For those unable to code without AI:
    What even is your contribution, beyond being a glorified typing monkey that can parse code but is unable to write it?
    It’s like a paramedic who isn’t trained at all for a medical emergency response but is sent there regardless, to just stand and observe the patient while writing notes about the sounds they make while dying.

    • Luckyfriend222@lemmy.world · +27/-2 · 15 hours ago

      So this is going to invoke a multitude of downvotes, but here goes.

      I will give you an example. I can read a bit of Python code, not the advanced stuff, but enough to understand to a large degree what the code does. Last week, I needed to add a button to Netbox that downloads a multitude of device configs rendered via config templates. This use case helps a whole department apply configs without having to create them by hand.

      I knew Netbox has a very powerful plugin ecosystem. The way the base code is written grants the capability of adding any type of plugin you might need in your unique environment. I used Claude to create this plugin for me. I wrote a very specific spec file, told it to utilise the already-built pynetbox library, and ensured it used nothing fancy that is not sustainable. It created the plugin, helped me with pip-installing it, and I deployed it on my dev environment, where I tested it extensively.

      My alternative to using Claude: asking our internal development team to write something like this. I would need to wait three weeks just to get a spot in their meeting for the request, only to be told their backlog is full of customer code and they won’t be able to help. This plugin will mean fewer calls for our support team, because the configs are accurately built according to the source of truth (Netbox) and need less human input. So in the greater scheme of the company, that is a net positive.

      What I will do when Netbox updates is update my dev environment, install the plugin, and test it. If something breaks, I will troubleshoot it (of course using Claude with error logs etc.), then update the plugin code to work on the new Netbox. Is this ideal? Probably not. Is it the only way to get this done? Maybe not either. Is it all I can do at this very moment? Yes.

      My specialist fields are the lower levels. Hardware, hypervisors and setting up VMs + System Software. I need code from time to time to get something functional done. I don’t write whole systems with Claude, that is just ridiculously naive. But small pieces of functional code that solves a single small problem, I honestly don’t understand the problem with that.
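      (For flavor, here is roughly the shape of the call such a plugin makes. This is a hypothetical sketch against NetBox’s REST render-config endpoint as I understand it from the docs, not the commenter’s actual plugin code; verify the path and response keys against your NetBox version.)

```python
# Hypothetical sketch: fetch a device's rendered config template from
# NetBox's REST API (the data a download-button plugin would bundle up).
# The endpoint path and "content" response key are assumptions drawn from
# NetBox 3.x documentation.
import json
import urllib.request

def render_config_url(base_url: str, device_id: int) -> str:
    """Assumed endpoint: POST /api/dcim/devices/{id}/render-config/."""
    return f"{base_url.rstrip('/')}/api/dcim/devices/{device_id}/render-config/"

def download_config(base_url: str, token: str, device_id: int) -> str:
    """POST to the render-config endpoint and return the rendered text."""
    req = urllib.request.Request(
        render_config_url(base_url, device_id),
        method="POST",
        headers={"Authorization": f"Token {token}", "Accept": "application/json"},
        data=b"",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get("content", "")
```

      A plugin would loop this over a device list and zip the results; the point is how small the moving part actually is.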

      My 2c.

      • Appoxo@lemmy.dbzer0.com · +21/-1 · 15 hours ago

        But you aren’t a dev as your main job.
        This is about developers, employed as developers, becoming unable to actually be developers and (no offense) being worth not much more than what their technical abilities already provide.
        So what’s their point?

        It’s like someone employed as a translator who can hear the language and sort of understand it, but does every translation through DeepL or Google Translate.
        So why should I pay a translator instead of using paid DeepL directly and proofreading with Google Translate to make sure it didn’t generate (mostly) nonsense?
        Isn’t the whole point of a trained professional to be better than a self-taught amateur?

        • Luckyfriend222@lemmy.world · +12 · 15 hours ago

          You are correct. I mistook your comment to refer to people in general, rather than trained professional coders. So indeed, you are correct.

    • Shayeta@feddit.org · +4/-2 · 13 hours ago

      Clarifying requirements, designing architecture. Also, I don’t understand how someone is supposed to be able to “parse code” without being able to write it. It’s like being able to read but unable to write.

      • JordanZ@lemmy.world · +8 · 12 hours ago

        I can read significantly more programming languages than I can write working code in. You can usually figure out the syntax and get the gist of what’s going on in a non trivial amount of code. Sure, the oddball syntax/language feature comes up that I have to lookup but it’s not too bad.

        • Upgrayedd1776@sh.itjust.works · +2 · edited · 11 hours ago

          Ditto. Similar to the way the people in Severance get a sense of what’s off, I can do that with code; ask me to start from scratch and I wouldn’t know where to start. Give me Google and I’ll end up with a bunch of copy-pasta that works in the end. Claude does the research, evaluation, best practices, review and testing, re-review and re-testing, while the developer department will go to war with you if you put a Slack question through the wrong channel.

      • [deleted]@piefed.world · +2 · 11 hours ago

        I understand cooking concepts and can tell when something I am familiar with is made well. If I watch a cook, most of the time I can tell why they do certain things and how it impacts the food.

        My cooking skills are very limited, especially when it comes to making new things. My sql skills are the same, I can read through the code and spot errors that match issues, but even creating something new is fairly limited despite being able to read and comprehend what has already been done.

  • Matty_r@programming.dev · +17 · 15 hours ago

    Go ahead, use your AI to replace all of your own skills. The rest of us will gladly take your job when you can no longer troubleshoot problems.

    • jj4211@lemmy.world · +8 · 12 hours ago

      Based on my experience with LLMs and the developers I personally know, my only conclusion is that they didn’t have the skills in the first place…

      In the corporate world there are a lot of “developers” who already act kind of like codegen: they just throw plausible-sounding bullshit into an editor and hope for the best. Two examples:

      Once I was asked to help a team speed up something that ran slow even by their low standards. It turned out they had written their own file-copy routine instead of using the standard library one: it sucked the file into memory, expanding an array 512 bytes at a time, and then wrote it back out, 512 bytes at a time. I made the thing nearly instant just by replacing it with a call to the standard library’s file-copy function.
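      (The fix really is that small. Sketch in Python, since the anecdote doesn’t name the language: the standard library already does buffered, streaming copying for you.)

```python
# The whole hand-rolled read-into-a-growing-array-then-write-it-back
# routine collapses into one standard-library call. Python shown for
# illustration; the anecdote's actual language isn't named.
import shutil

def copy_file(src: str, dst: str) -> None:
    shutil.copyfile(src, dst)  # buffered streaming copy; no 512-byte dance
```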

      While helping with a separate problem, I noticed their solution for transferring a file with an indeterminate version number in the middle of the file name. It was a huge mess, but the most illustrative part was the line in their Java application declaring the string “ls /path/with/file|grep prefix.*.extension”…
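      (What they presumably wanted, without shelling out at all; a hypothetical Python sketch reusing the pattern from the anecdote: match the filesystem directly instead of building an `ls | grep` command string.)

```python
# Instead of embedding a shell pipeline in a string, glob the
# version-in-the-middle filename directly. Directory and pattern are the
# hypothetical ones from the anecdote.
import glob
import os

def find_versioned_files(directory: str) -> list[str]:
    # matches e.g. prefix-1.2.3.extension, prefix-2.0.extension, ...
    return sorted(glob.glob(os.path.join(directory, "prefix*.extension")))
```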

      Lots of human slop out there that AI can actually compete with.

      • Feathercrown@lemmy.world · +2 · 8 hours ago

        I’ve spent the last few days cleaning up some genuine garbage in a file. The file implements a single UI grid with four editable columns and some basic validation logic. It started out over 3000 lines long, and I’ve consolidated and removed enough to get it down to 1000. I have done nothing but delete two-thirds of the file, and literally no functionality has been lost. I’m losing my mind over here; how does the tech debt even get this bad lmao