Some folks on the internet were interested in how I managed to ditch Docker for local development. This is a slightly overdue write-up of how I typically do things now with Nix, Overmind, and Just.

    • chickenf622@sh.itjust.works

      And keep it that way! We need people on both sides to further spur progress. Plus I’m jealous cause I still don’t have a firm grasp on docker.

      • Dasnap@lemmy.world

        I use Kubernetes at work and a simpler docker-compose driven setup at home. I find that it’s a tidier way to build infra, but it does have its limits. Abstracting from bare-metal has its ups and downs, but I find the positives outweigh the negatives.

    • JBloodthorn@kbin.social

      I’m anti everything that requires daily use of arcane command line bullshit. I thought we were on the way to being over that when Windows 3.1 came out.

      If it needs to be done more than once, make it a button on a little program. I’ve rolled my own for any of them that can be triggered from the windows command line. But Docker and others that require their own unique command line I can’t do that. I wouldn’t be as annoyed by Docker if Docker Desktop just did all the crap it should instead of requiring command line bullshit every damn day.

      • astral_avocado@programming.dev

        I mean… all those buttons are essentially just calling a command line in the end. And coding that button takes more work, so the command line is always going to be more likely to be your only option. If you find commands arcane, that’s probably an argument that the help docs should be clearer, or the commands themselves should be.

        • JBloodthorn@kbin.social

          Making a little program that opens a window with some buttons to pin to my taskbar is infinitely easier than digging out docs and copy pasting into a command line every time I need to do anything. Paste the command once, done. It’s like 10 lines of code, plus about 3-4 for each command I add. Maybe drag the window a bit bigger when I add the button.

            • JBloodthorn@kbin.social

              No shit. I’m saying the tools I had to make myself should come standard instead of wasting dev time on command line bullshit.

    • CodeBlooded@programming.dev

      Docker is like, my favorite utility tool, for both deployment AND development (my replacement for Python virtual environments). I wanted to hear more of why I shouldn’t use it also.

      • astral_avocado@programming.dev

        Right? If it’s about ease of insight into containers for debugging and troubleshooting, I can kinda see that. Although I’m so used to working with containers it isn’t a barrier really to me anymore.

        • sip@programming.dev

          yup. it’s a breeze especially for interpreted langs. mount the source code, expose the ports and voila. need a db?

          services:
            pg:
              image: postgres
              environment:
                POSTGRES_PASSWORD: postgres  # required, or the image refuses to start

    • Jeezy@lemmy.world (OP)

      Hi!

      First I’d like to clarify that I’m not “anti-container/Docker”. 😅

      There is a lot of discussion on this article (with my comments!) going on over at Tildes. I don’t wanna copy-paste everything from there, but I’ll share the first main response I gave to someone who had very similar feedback to kick-start some discussion on those points here as well:

      Some high level points on the “why”:

      • Reproducibility: Docker builds are not reproducible, and especially in a company with more than a handful of developers, it’s nice not to have to worry about a docker build command in the on-boarding docs failing inexplicably (from the POV of the regular joe developer) from one day to the next

      • Cost: Docker licenses for most companies now cost $9/user/month (minimum of 5 seats required) - this is very steep for something that doesn’t guarantee reproducibility and has poor performance to boot (see below)

      • Performance: Docker performance on macOS (and Windows), especially storage mount performance remains poor; this is even more acutely felt when working with languages like Node where the dependencies are file-count heavy. Sure, you could just issue everyone Linux laptops, but these days hiring is hard enough without shooting yourself in the foot by not providing a recent MBP to new devs by default

      I think it’s also worth drawing a line between containers as a local development tool and containers as a deployment artifact, as the above points don’t really apply to the latter.
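
      For concreteness, the Nix/Overmind/Just combination mentioned at the top can be sketched roughly like this. The file below is illustrative only (package names and system are my assumptions, not the article’s exact setup):

      ```nix
      # flake.nix — pins the whole dev toolchain to one nixpkgs revision,
      # so `nix develop` yields the same tools on every machine
      {
        inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";

        outputs = { self, nixpkgs }:
          let pkgs = nixpkgs.legacyPackages.aarch64-darwin;
          in {
            devShells.aarch64-darwin.default = pkgs.mkShell {
              packages = [
                pkgs.nodejs_20   # example runtime
                pkgs.postgresql  # local db instead of a postgres container
                pkgs.overmind    # runs the processes listed in a Procfile
                pkgs.just        # task runner for day-to-day commands
              ];
            };
          };
      }
      ```

      Inside that shell, Overmind starts whatever a Procfile lists (web server, database, workers) and Just wraps the common commands, which is roughly the role docker compose up plays in a container-based setup.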

      • Dasnap@lemmy.world

        Docker performance on macOS (and Windows), especially storage mount performance remains poor

        I remember when I first got a work Macbook and was confused why I had to install some ‘Docker Desktop’ crap.

        I also learnt how much Docker images care about the silicon they’re built on… Fucking M1 chip can be a pain…

      • CodeBlooded@programming.dev

        Docker builds are not reproducible

        What makes you say that?

        My team relies on Docker because it is reproducible…

        • uthredii@programming.dev

          You might be interested in this article that compares nix and docker. It explains why docker builds are not considered reproducible:

          For example, a Dockerfile will run something like apt-get update as one of the first steps. Resources are accessible over the network at build time, and these resources can change between docker build commands. There is no notion of immutability when it comes to source.

          and why nix builds are reproducible a lot of the time:

          Builds can be fully reproducible. Resources are only available over the network if a checksum is provided to identify what the resource is. All of a package’s build time dependencies can be captured through a Nix expression, so the same steps and inputs (down to libc, gcc, etc.) can be repeated.

          Containerization has other advantages though (security) and you can actually use nix’s reproducible builds in combination with (docker) containers.
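
          The mutable step the quote describes is easy to see in an ordinary Dockerfile; this fragment is a generic illustration, not from the article:

          ```dockerfile
          FROM debian:bookworm
          # Both commands hit the network at build time with no pinned hashes:
          # the package index, and the package versions it resolves to, can
          # change between two `docker build` runs, so the resulting images
          # can silently differ.
          RUN apt-get update && apt-get install -y curl
          ```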

          • nickwitha_k (he/him)@lemmy.sdf.org

            That seems like an argument for maintaining a frozen repo of packages, not against containers. You can only have a truly fully-reproducible build environment if you set up your toolchain to keep copies of every piece of external software so that you can do hermetic builds.

            I think this is a misguided way to workaround proper toolchain setup. Nix is pretty cool though.

            • uthredii@programming.dev

              That seems like an argument for maintaining a frozen repo of packages, not against containers.

              I am not arguing against containers, I am arguing that nix is more reproducible. Containers can be used with nix and are useful in other ways.

              an argument for maintaining a frozen repo of packages

              This is essentially what nix does. In addition it verifies that the packages are identical to the packages specified in your flake.nix file.

              You can only have a truly fully-reproducible build environment if you setup your toolchain to keep copies of every piece of external software so that you can do hermetic builds.

              This is essentially what Nix does, except Nix verifies the external software is the same with checksums. It also does hermetic builds.
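
              That checksum pinning looks roughly like this in a Nix expression (the URL is hypothetical; `lib.fakeSha256` is a real nixpkgs placeholder you replace with the hash Nix reports on first build):

              ```nix
              { pkgs, lib, ... }:
              pkgs.fetchurl {
                url = "https://example.com/tool-1.2.3.tar.gz";  # hypothetical source
                # The build only proceeds if the downloaded file matches this
                # hash; a changed upstream file fails loudly instead of
                # silently drifting.
                sha256 = lib.fakeSha256;
              }
              ```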

              • nickwitha_k (he/him)@lemmy.sdf.org

                Nix is indeed cool. I just see it as less practical than maintaining a toolchain for devs to use. Seems like reinventing the wheel, instead of airing-up the tires. I could well be absolutely wrong there - my experience is mainly enterprise software and not every process or tool there is used because it is the best one.

                • uthredii@programming.dev

                  I just see it as less practical than maintaining a toolchain for devs to use.

                  There are definitely some things preventing Nix adoption. What are the reasons you see it as less practical than the alternatives?

                  What are alternative ways of maintaining a toolchain that achieves the same thing?

      • Hexarei@programming.dev

        If your dev documentation includes your devs running docker build, you’re doing docker wrong.

        The whole point is that you can build a working container image and then ship it to a registry (including private registries) so that your other developers/users/etc don’t have to build them and can just run the existing image.

        Then for development, you simply use a bind mount to ensure your local copy of the code is available in the container instead of the copy the container was built with.

        That doesn’t solve the performance issues on Windows and Mac, but it does prevent the “my environment is broke” issues that Docker is designed to solve.
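
        A minimal compose sketch of that workflow (image name and paths are assumptions): the image is pulled prebuilt from a registry, and a bind mount overlays the local checkout:

        ```yaml
        services:
          app:
            image: registry.example.com/team/app:latest  # prebuilt, pulled — never built locally
            volumes:
              - ./:/usr/src/app   # local source shadows the copy baked into the image
            ports:
              - "3000:3000"
        ```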

      • Ethan@programming.dev

        Cost: Docker licenses for most companies now cost $9/user/month

        Are you talking about Docker Desktop and/or Docker Hub? Because plain old docker is free and open source, unless I missed something big. Personally I’ve never had much use for Docker Desktop, and I use GitLab so I have no reason to use Docker Hub.

        • Jeezy@lemmy.world (OP)

          I believe this is the Docker Desktop license pricing.

          On an individual scale and even some smaller startup scales, things are a little bit different (you qualify for the free tier, everyone you work with is able to debug off-the-beaten-path Docker errors, knowledge about fixes is quick and easy to disseminate, etc.), but the context of this article and the thread on Mastodon that spawned it was a “unicorn” company with an engineering org comprised of hundreds of developers.

          • Ethan@programming.dev

            My point is that Docker Desktop is entirely optional. On Linux you can run Docker Engine natively, on Windows you can run it in WSL, and on macOS you can run it in a VM with Docker Engine, or via something like hyperkit and minikube. And Docker Engine (and the CLI) is FOSS.

            • Jeezy@lemmy.world (OP)

              I understood your point, but in a context of hundreds of developers who work almost exclusively on macOS and mostly don’t have any real Docker knowledge (let alone enough to set up and maintain alternatives to Docker Desktop), the only practical option becomes paying the licensing fees and taking the path of least resistance.

              • Martin@feddit.nu

                We are over 1000 developers and use Docker CE just fine. We use a self-hosted repository for our images, and IT configures new computers to use this internal Docker repository by default, so new employees don’t even have to know about it to do their first docker build.

                We all use Linux on our workstations and laptops. That might make it easier.
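
                One common way to point machines at an internal registry by default (the hostname below is hypothetical) is a pull-through mirror in Docker’s daemon config, usually /etc/docker/daemon.json, so plain image names resolve through the internal registry first:

                ```json
                {
                  "registry-mirrors": ["https://registry.internal.example.com"]
                }
                ```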

                • Von_Broheim@programming.dev

                  OP comes off a bit uninformed. E.g. I use Docker Engine and Docker Compose inside WSL2 on Windows and performance is fine; then I use IntelliJ to manage images/containers, and the Services tab handles the basics. If I need to do anything very involved I use the CLI.

                  Docker is fine, the docker desktop panic really only revealed who never took the time to learn how to use docker and what the alternative UIs are.

  • Digital Mark@lemmy.ml

    I know you won’t believe this, but you don’t need any of these GTOS (giant towers of shit) to write & ship code. “Replace one GTOS with another” is a horizontal move to still using a GTOS.

    You can just install the dev tools you need, write code & libraries yourself, or maybe download one. If you don’t go crazy with the libraries, you can even tell a team “here’s the 2 or 3 things you need” and everyone does it themselves. I know Make is scary, with the mandatory tabs, but you can also just compile with a shell script.

    Deployment is packing it up in a zip and unzipping it on your server.

    • zygo_histo_morpheus@programming.dev

      Sometimes you need complex tools for complex problems. We just have a homegrown GTOS at my work instead, I wish we had something that made as much sense as Nix!

    • Jeezy@lemmy.world (OP)

      Lots of (incorrect) assumptions here, and generally a very poorly worded post that doesn’t make any attempt to engage in good faith. These are the reasons for what I believe is my very first down-vote of a comment on Lemmy.

      • Digital Mark@lemmy.ml

        You’re advocating switching to another OS with a complex package manager, to avoid using a package manager that’s basically a whole new OS. Giant Tower of Shit may be too generous for that.

        But I was of course correct, I said you wouldn’t believe it.