I’ve been a Java engineer in the web development industry for several years now. I’ve heard countless times that X is good because of SOLID principles, or that Y is bad because it breaks them, and I’ve had to memorize the “good” way to do everything before interviews. But the more I dig into the real reason I’m doing something in a particular way, the harder I find it to keep taking these rules at face value.

One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

Also, the more I get into languages like Rust, the more these doubts grow, leading me to believe that most of it is just dogma that has drifted far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

What are your opinions on this?

  • Valmond@lemmy.world · 16 hours ago

    I remember the recommendation to use a typedef (or #define 😱) for integers, like INT32.

    If you, like, recompile it on a weird CPU or something, I guess. What a stupid idea. At least where I worked it was dumb; if someone knows of any benefits, I’d gladly hear them!

    • SilverShark@programming.dev · 16 hours ago

      We had it because we needed to compile for Windows and Linux on both 32 and 64 bit processors. So we defined all our Int32, Int64, uint32, uint64 and so on. There were a bunch of these definitions within the core header file with #ifndef and such.
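
      A minimal sketch of what a core header like that could have looked like; the type names and platform checks here are assumptions for illustration, not the actual project code:

      ```cpp
      // core_types.h - hypothetical pre-C99/pre-C++11 "core" header that
      // rolls its own fixed-width integer names for each supported compiler.
      #ifndef CORE_TYPES_H
      #define CORE_TYPES_H

      #if defined(_MSC_VER)
        /* Visual C++ provides fixed-width built-in types */
        typedef __int32          Int32;
        typedef unsigned __int32 UInt32;
        typedef __int64          Int64;
        typedef unsigned __int64 UInt64;
      #else
        /* On the 32- and 64-bit Linux targets of the era, int was 32 bits
           and long long was 64 bits */
        typedef int                Int32;
        typedef unsigned int       UInt32;
        typedef long long          Int64;
        typedef unsigned long long UInt64;
      #endif

      #endif /* CORE_TYPES_H */
      ```

      Today <stdint.h>/<cstdint> does the same job out of the box, as the rest of the thread gets into.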

      • Valmond@lemmy.world · 14 hours ago

        But you can use a 64-bit int on 32-bit Linux, and vice versa. I never understood the benefit of tagging the stuff. You gotta go pretty far back in time to find a platform where an int isn’t compiled to a 32-bit signed int anyway. There were also already long long and size_t… why make new ones?

        Readability maybe?

        • SilverShark@programming.dev · 11 hours ago

          It was a while ago indeed, and readability does play a big role. It’s also just easier to type out. Of course autocomplete helps, but it’s still easier.

        • Consti@lemmy.world · 9 hours ago

          Very often you need to choose a type based on the data it needs to hold. If you know you’ll need to store numbers of a certain size, use an integer type that can actually hold them; don’t make it dependent on a platform definition. Always using int can lead to really insidious bugs where a function works on one platform and not on another due to overflow.
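
          A contrived sketch of that kind of bug, assuming one target where int is 32 bits and another (say, an embedded compiler) where it is only 16 bits:

          ```cpp
          #include <cstdint>

          // With a 32-bit int this returns 90000. With a 16-bit int, 300 * 300
          // overflows a signed int (undefined behaviour), so the function
          // silently misbehaves on that platform only.
          int area_int(int width, int height) {
              return width * height;
          }

          // Spelling out the width makes the function behave the same everywhere.
          std::int32_t area_fixed(std::int32_t width, std::int32_t height) {
              return width * height;
          }
          ```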

          • Valmond@lemmy.world · 9 hours ago

            Show me one.

            I mean, I have worked on 16-bit platforms, but nobody would use that code straight out of the box on some other, incompatible platform; it doesn’t even make sense.

            • Consti@lemmy.world · 7 hours ago

              Basically anything low level. When you need a byte, you also don’t use an int, you use a uint8_t (reminder that char is actually not defined to be signed or unsigned: “Plain char may be signed or unsigned; this depends on the compiler, the machine in use, and its operating system”). Any time you need to interact with another system, like hardware or networking, it is incredibly important to know how many bits the other side uses, to avoid mismatches.
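
              A sketch of the networking case (the message layout is made up for the example): parsing a header is only portable when the byte and field widths are pinned down with fixed-width types.

              ```cpp
              #include <cstddef>
              #include <cstdint>

              // Hypothetical wire format: byte 0 is a message type, bytes 1-2 are a
              // big-endian payload length. uint8_t/uint16_t state exactly how wide
              // each field is, independent of what char or int mean locally.
              struct MessageHeader {
                  std::uint8_t  type;
                  std::uint16_t payload_length;
              };

              MessageHeader parse_header(const std::uint8_t* buf, std::size_t len) {
                  MessageHeader h{};
                  if (len < 3) return h;  // not enough bytes for a full header
                  h.type = buf[0];
                  // Reassemble the big-endian length explicitly, so neither signedness
                  // nor byte order depends on the platform doing the parsing.
                  h.payload_length = static_cast<std::uint16_t>((buf[1] << 8) | buf[2]);
                  return h;
              }
              ```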

              For purely the size of an int, the most famous example is the Ariane 5 launch failure, where an integer overflow (a 64-bit floating-point value converted into a 16-bit signed integer) brought down the rocket. OWASP (the Open Worldwide Application Security Project) lists integer overflows as a security concern, though not ranked very highly, since they mostly cause problems when combined with buffer accesses (using user input in some arithmetic operation that may overflow into unexpected ranges).

    • Hetare King@piefed.social · 10 hours ago

      If you’re directly interacting with any sort of binary protocol, i.e. file formats, network protocols etc., you definitely want your variable types to be unambiguous. For future-proofing, yes, but also because I don’t want to have to go confirm whether I remember correctly that long is the same size as int.

      There’s also clarity of meaning: unsigned long long is a noisy monstrosity, while uint64_t conveys what it is much more cleanly. char is great if it’s representing text characters, but if you have a byte array of binary data, using a type alias helps convey that.

      And then there are type aliases that are useful because they have different sizes on different platforms like size_t.

      I’d say that generally speaking, if it’s not an int or a char, that probably means the exact size of the type is important, in which case it makes sense to convey that using a type alias. It conveys your intentions more clearly and tersely (in a good way), it makes your code more robust when compiled for different platforms, and it’s not actually more work; that extra #include <cstdint> you may need to add pays for itself pretty quickly.
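
      A small illustration of the clarity point (the alias name byte_t is made up for the example):

      ```cpp
      #include <cstdint>
      #include <vector>

      // Spelled out: correct, but noisy, and the width is easy to misremember.
      unsigned long long file_offset_v1;

      // Fixed-width name from <cstdint>: same 64-bit width, but the width is
      // right there in the name.
      std::uint64_t file_offset_v2;

      // An alias makes "raw binary data, not text" explicit, even though it is
      // still an unsigned 8-bit integer underneath.
      using byte_t = std::uint8_t;
      std::vector<byte_t> packet_payload;
      ```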

      • Valmond@lemmy.world · 9 hours ago

        So we should not have #defines in the way, right?

        Like INT32 instead of “int”. I mean, if you don’t know the size, you probably won’t be doing network protocols or reading binary stuff anyway.

        uint64_t is good IMO, a bit long maybe (why the _t?), but it’s not one of the atrocities I’m talking about, where every project had its own defines.

        • Feyd@programming.dev · 7 hours ago

          “int” can be different widths on different platforms. If all the compilers you have to support provide standard definitions for the specific widths, then great, use ’em. That hasn’t always been the case, in which case you have to roll your own. I’m sure some projects did it where it wasn’t needed, but when you have to do it, you have to do it.

        • Hetare King@piefed.social · 5 hours ago

          The standard type aliases like uint64_t weren’t in the C standard library until C99 and in C++ until C++11, so there are plenty of older code bases that would have had to define their own.

          The use of #define to make type aliases never made sense to me. The earliest versions of C didn’t have typedef, I guess, but that’s like, the 1970s. Anyway, you wouldn’t do it that way in modern C/C++.
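
          The classic demonstration of why a macro is the wrong tool for this (alias names made up):

          ```cpp
          // A macro is pure text substitution: no scoping, no namespaces, and it
          // breaks as soon as the aliased type is more than a single token.
          #define PCHAR char*
          PCHAR a, b;        // expands to: char* a, b;  -> b is a plain char!

          // typedef (C) and using (C++11) create real type names instead.
          typedef char* pchar_t;
          using pchar_alias = char*;
          pchar_t c, d;      // both c and d are char*
          ```

          For plain integer aliases the macro at least expands to a single token, but it still ignores scope and can collide with any other use of the same name in the program.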

        • xthexder@l.sw0.com · 8 hours ago (edited)

          I’ve seen several codebases that use a typedef or the using keyword to map uint64_t to uint64 (along with the others), but _t seems to be the convention for the built-in std type names.
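
          Which usually looks something like this (just a sketch):

          ```cpp
          #include <cstdint>

          // Drop the _t suffix project-wide; purely cosmetic, the types are identical.
          using uint32 = std::uint32_t;
          using uint64 = std::uint64_t;

          // Older, C-compatible spelling of the same thing:
          // typedef uint64_t uint64;
          ```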