As a Java engineer who has been in the web development industry for several years now, I’ve heard many times that X is good because of SOLID principles or that Y is bad because it breaks SOLID principles, and I’ve had to memorize the “good” way to do everything before interviews and so on. The more I dig into the real reason I’m doing something in a particular way, the harder it gets to keep taking these rules at face value.

One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

Also, the more I get into languages like Rust, the more these doubts grow, leading me to believe that most of it is dogma that has drifted far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

What are your opinions on this?

  • Hetare King@piefed.social · 10 hours ago

    If you’re directly interacting with any sort of binary protocol, e.g. file formats, network protocols etc., you definitely want your variable types to be unambiguous. For future-proofing, yes, but also because I don’t want to have to go and confirm whether I remember correctly that long is the same size as int.

    There’s also clarity of meaning; unsigned long long is a noisy monstrosity, uint64_t conveys what it is much more cleanly. char is great if it’s representing text characters, but if you have a byte array of binary data, using a type alias helps convey that.
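
    A rough sketch of what I mean (the byte alias name here is just made up for illustration):

    ```cpp
    #include <cstdint>
    #include <vector>

    // Same width on most platforms, but the intent reads very differently.
    unsigned long long next_offset_a = 0; // noisy, and the width is only implied
    std::uint64_t      next_offset_b = 0; // exactly 64 bits, clearly deliberate

    // Hypothetical alias: this buffer holds raw bytes, not text.
    using byte_t = std::uint8_t;
    std::vector<byte_t> packet_payload;
    ```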

    And then there are type aliases like size_t that are useful precisely because they have different sizes on different platforms.

    I’d say that generally speaking, if it’s not an int or a char, that probably means the exact size of the type is important, in which case it makes sense to convey that using a type alias. It conveys your intentions more clearly and tersely (in a good way), it makes your code more robust when compiled for different platforms, and it’s not actually more work; that extra #include <cstdint> you may need to add pays for itself pretty quickly.
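
    For example, a minimal sketch of the kind of thing I mean (the struct and field names are invented):

    ```cpp
    #include <cstdint>

    // Hypothetical on-disk header: every field has an exact,
    // platform-independent width, so the layout is unambiguous.
    struct FileHeader {
        std::uint32_t magic;        // format identifier
        std::uint16_t version;      // format version
        std::uint16_t flags;        // bit flags
        std::uint64_t payload_size; // size of the data that follows, in bytes
    };

    // Catch accidental padding or size changes at compile time.
    static_assert(sizeof(FileHeader) == 16, "unexpected header layout");
    ```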

    • Valmond@lemmy.world · 9 hours ago

      So we should not have #defines in the way, right?

      Like INT32 instead of “int”. I mean, if you don’t know the sizes you probably won’t be doing network protocols or reading binary stuff anyway.

      uint64_t is good IMO, a bit long (why the _t?) maybe, but it’s not one of the atrocities I’m talking about where every project had its own defines.

      • Feyd@programming.dev · 7 hours ago

        “int” can be different widths on different platforms. If all the compilers you must compile with have standard definitions for specific widths, then great, use ’em. That hasn’t always been the case, in which case you must roll your own. I’m sure some projects did it where it was unneeded, but when you have to do it, you have to do it.
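
        Something along these lines (a rough sketch of the pre-stdint situation, names made up, not taken from any particular project):

        ```cpp
        #include <climits>

        // Hypothetical fallback from before the standard fixed-width aliases:
        // pick whichever built-in type is 32 bits wide on the compilers we target.
        #if defined(_MSC_VER)
        typedef unsigned __int32 my_uint32;
        #elif UINT_MAX == 0xFFFFFFFFUL
        typedef unsigned int my_uint32;
        #elif ULONG_MAX == 0xFFFFFFFFUL
        typedef unsigned long my_uint32;
        #else
        #error "no 32-bit unsigned type found for my_uint32"
        #endif
        ```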

      • Hetare King@piefed.social · 5 hours ago

        The standard type aliases like uint64_t weren’t in the C standard library until C99 and in C++ until C++11, so there are plenty of older code bases that would have had to define their own.

        The use of #define to make type aliases never made sense to me. The earliest versions of C didn’t have typedef, I guess, but that’s like, the 1970s. Anyway, you wouldn’t do it that way in modern C/C++.
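
        For comparison, a quick sketch of the three styles (the alias names are made up):

        ```cpp
        #include <cstdint>

        // Macro style: no scoping, no namespace, and the preprocessor
        // blindly rewrites the token wherever it appears.
        #define UINT32 std::uint32_t

        // typedef: works in both C and C++.
        typedef std::uint32_t u32_typedef;

        // Modern C++ (C++11 and later): reads left-to-right and also
        // supports templated aliases.
        using u32_alias = std::uint32_t;
        ```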

      • xthexder@l.sw0.com · 8 hours ago · edited

        I’ve seen several codebases that use a typedef or using declaration to map uint64_t to a shorter uint64 name, along with the other widths, but _t seems to be the convention for the built-in std type names.
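
        i.e. something like this (a minimal sketch):

        ```cpp
        #include <cstdint>

        // Project-local shorthand for the standard fixed-width types.
        using uint8  = std::uint8_t;
        using uint16 = std::uint16_t;
        using uint32 = std::uint32_t;
        using uint64 = std::uint64_t;
        ```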