

If you’re directly interacting with any sort of binary protocol, e.g. file formats, network protocols and so on, you definitely want your variable types to be unambiguous. For future-proofing, yes, but also because I don’t want to have to go and confirm whether I remember correctly that long is the same size as int.
There’s also clarity of meaning; unsigned long long is a noisy monstrosity, while uint64_t conveys what it is much more cleanly. char is great if it’s representing text characters, but if you have a byte array of binary data, using a type alias helps convey that.
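To make that concrete, here’s a minimal sketch of the sort of thing I mean; the header layout, the field names and the byte_t alias are invented for illustration:

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// Hypothetical header for some binary file format. Every field has an
// exact, platform-independent width, so the layout is unambiguous.
struct FileHeader {
    uint32_t magic;         // format identifier
    uint16_t version;       // format version
    uint16_t flags;
    uint64_t payload_size;  // size of the data that follows, in bytes
};

// Alias that makes it obvious this is raw binary data, not text.
using byte_t = unsigned char;

// Copy raw bytes into the header struct (ignoring endianness and
// struct padding concerns for brevity).
bool parse_header(const byte_t* data, std::size_t len, FileHeader& out) {
    if (len < sizeof(FileHeader)) {
        return false;
    }
    std::memcpy(&out, data, sizeof(FileHeader));
    return true;
}
```

If those fields were plain long instead, the struct could be a different size on different platforms (long is 4 bytes on 64-bit Windows but 8 bytes on 64-bit Linux), and the on-disk format would quietly break.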
And then there are type aliases like size_t that are useful precisely because they have different sizes on different platforms.
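A trivial check like this prints different numbers depending on the target; the sizes in the comment are just the typical ones, not guarantees:

```cpp
#include <cstdio>
#include <cstddef>

int main() {
    // size_t is whatever width the platform uses for object sizes and
    // array indexing: typically 4 bytes on a 32-bit target, 8 on 64-bit.
    std::printf("sizeof(size_t) = %zu\n", sizeof(std::size_t));
    std::printf("sizeof(long)   = %zu\n", sizeof(long));
    std::printf("sizeof(int)    = %zu\n", sizeof(int));
}
```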
I’d say that generally speaking, if it’s not an int or a char, that probably means the exact size of the type is important, in which case it makes sense to convey that using a type alias. It conveys your intentions more clearly and tersely (in a good way), it makes your code more robust when compiled for different platforms, and it’s not actually more work; that extra #include you may need to add pays for itself pretty quickly.


The standard type aliases like uint64_t weren’t in the C standard library until C99 and in C++ until C++11, so there are plenty of older code bases that would have had to define their own.

The use of #define to make type aliases never made sense to me. The earliest versions of C didn’t have typedef, I guess, but that’s like, the 1970s. Anyway, you wouldn’t do it that way in modern C/C++.
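For contrast, a small sketch of the difference; the names here are made up:

```cpp
// #define is plain text substitution, which bites you as soon as
// pointers and comma-separated declarations are involved:
#define BYTE_PTR unsigned char*
BYTE_PTR a, b;   // expands to: unsigned char* a, b;  -- b is NOT a pointer

// typedef (C and C++) creates a real type alias, so both are pointers:
typedef unsigned char* byte_ptr;
byte_ptr c, d;   // both c and d are unsigned char*

// The modern C++ (C++11 and later) spelling of the same thing:
using u64 = unsigned long long;  // though in practice you’d just use uint64_t
```

The typedef and using versions behave like actual types; the macro is invisible to the compiler, the debugger, and any scoping rules.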