𞋴𝛂𝛋𝛆

  • 158 Posts
  • 1.28K Comments
Joined 2 years ago
Cake day: June 9th, 2023

  • Just be aware that W11 is secure boot only.

    There is a lot of ambiguous nonsense about this subject from people who lack a fundamental understanding of Secure Boot. Secure Boot is not supported by the Linux kernel at all; it is handled by systems distros build outside of the kernel, and these differ between distros. Fedora does it best IMO, but Ubuntu has an advanced system too. Gentoo has tutorial information about how to set up the system properly yourself.

    The US government also has a handy PDF about setting up Secure Boot properly. This subject is somewhat complicated by the fact that the UEFI bootloader graphical interface standard is only a reference implementation, with no guarantee that it is fully implemented (especially the case in consumer-grade hardware). Last I checked, Gentoo has the only tutorial guide about how to use an application called KeyTool to boot directly into the UEFI system, bypassing the GUI implemented on your hardware, where you are able to set your own keys manually.

    If you choose to try this, some guides will suggest using a stronger encryption key than the default. The worst that can happen is that the new keys get rejected and the defaults are restored. It may then seem like your system does not support custom keys; be sure to try again with the default key type for the UEFI GUI implementation in your bootloader. If it still does not work, you must use KeyTool.

    The TPM is a small physical hardware chip. Inside there is a register holding a secret hardware encryption key that is hard-coded at manufacture. This secret key is never accessible in software. Instead, it is used to encrypt new keys, to hash against those keys to verify that a given software package is untampered with, and to decrypt information outside of the rest of the system using Direct Memory Access (DMA) to DRAM/system memory. This effectively means a piece of software is able to create secure connections to the outside world using encrypted communications that cannot be read by anything else running on your system.
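
    That idea can be sketched as a toy model. This is illustrative only, not a real TPM interface (real TPMs speak the TPM 2.0 command set, e.g. via tpm2-tools); the class and data names are made up:

```python
import hashlib
import hmac
import os

# Toy model of the TPM idea: a hardware secret that never leaves the
# chip is used to derive tags that verify software is untampered.
class ToyTPM:
    def __init__(self):
        # The "burned-in" hardware secret: generated once, never exported.
        self._hardware_key = os.urandom(32)

    def seal(self, data: bytes) -> bytes:
        # Derive a tag from the hardware secret; callers only ever see
        # the tag, never _hardware_key itself.
        return hmac.new(self._hardware_key, data, hashlib.sha256).digest()

    def verify(self, data: bytes, expected_tag: bytes) -> bool:
        # Recompute the tag and compare: a single flipped bit in `data`
        # produces a completely different tag.
        return hmac.compare_digest(self.seal(data), expected_tag)

tpm = ToyTPM()
firmware = b"bootloader v1.0"
tag = tpm.seal(firmware)                    # record a trusted measurement
assert tpm.verify(firmware, tag)            # untampered image passes
assert not tpm.verify(firmware + b"!", tag) # tampered image fails
```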

    As a more tangible example, Google Pixel phones ship with a dedicated security chip (the Titan M) filling this role, and that chip is a large part of how and why GrapheneOS exists. They leverage it to encrypt the device operating system in a way that can be verified, and they create the secure encrypted communication path to manage over-the-air software updates automatically.

    There are multiple keys in your UEFI bootloader on your computer. The main key belongs to the hardware manufacturer. Anyone with this key is able to change all software from UEFI down on your device. These keys occasionally get leaked or compromised too, and often the issue is never resolved. It is up to you to monitor and update them, as insane as that sounds.

    The next level down is the package key for an operating system. It cannot alter UEFI software, but it does control anything that boots after. This is typically where the Microsoft key is the default, meaning Microsoft effectively controls what operating system boots. Microsoft has issued what are called shim keys to Ubuntu and Fedora. Last I heard, these keys expired in October 2025 and had to be refreshed, or may not have been reissued by M$. This shim was like a pass for these two distros to work under the M$ package key. In other words, vanilla Ubuntu and Fedora Workstation could just work with Secure Boot enabled.
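
    The key hierarchy described above can be sketched as a toy model. HMAC stands in for the real X.509/RSA signatures UEFI uses, and every key and filename here is made up:

```python
import hashlib
import hmac
import os

# Toy sketch: a platform key (main key) endorses package keys, and a
# boot image is accepted only if signed by an endorsed package key.
def sign(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

pk = os.urandom(32)            # main key: manufacturer's, or your own
ms_key = os.urandom(32)        # a package key, e.g. Microsoft's
endorsed = {sign(pk, ms_key)}  # the main key decides which package keys count

def boot_allowed(package_key: bytes, image: bytes, image_sig: bytes) -> bool:
    # 1) Is the package key endorsed under the main key?
    if sign(pk, package_key) not in endorsed:
        return False
    # 2) Does the image signature verify under that package key?
    return hmac.compare_digest(sign(package_key, image), image_sig)

shim = b"shim.efi contents"
assert boot_allowed(ms_key, shim, sign(ms_key, shim))              # boots
assert not boot_allowed(os.urandom(32), shim, sign(ms_key, shim))  # rejected
```

    Enrolling your own main key, as described below, amounts to replacing `pk` and deciding for yourself what goes in the endorsed set.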

    All issues in this space have nothing to do with where you put the operating systems on your drives. Stating nonsense about dual booting from a partition is the ambiguous misinformation that causes all of the problems: it is irrelevant where the operating systems are placed. Your specific bootloader implementation may be optimised to boot faster by jumping into the first entry it finds, but that is not the correct way for Secure Boot to work. It is supposed to check all bootable code and delete anything without a valid signed encryption key. People who do not understand this system are playing Russian roulette: one drive may get registered first in UEFI 99% of the time due to physical hardware PCB design and layout, and the one time some random power-quality issue shows up due to a power transient or whatnot, suddenly their OS boot entry is deleted.

    The main key and the package keys are the encryption-key owners of your hardware. People can literally use these keys to log into your machine if they have access to them, and can install or remove software from this interface. You have the right to take ownership of your machine by setting these yourself. You can set the main key, then use Microsoft's online system to get a new package key to run W10 with Secure Boot or W11, and you can sign any distro or other bootable code with your main key. Other than the issue of a default key from the manufacturer or Microsoft getting compromised, I think the only third-party vulnerabilities Secure Boot protects against are physical-access attacks. The system places a lot of trust in the manufacturer and Microsoft; they are the owners of the hardware, able to lock you out, surveil you, or theoretically exploit you with stalkerware. In practice, these connections still use DNS on your network. If you have not disabled or blocked ECH, like cloudflare-ech.com, I believe it is possible for a server to make an ECH connection and then create a side-channel connection that would not show up on your network at all. Theoretically, I believe Microsoft could use their package key on your hardware to connect to it through ECH after your machine contacts any of their infrastructure.

    Then the TPM chip becomes insidious and has the potential to create a surveillance state, as it can be used to further encrypt communications. The underlying hardware in all modern computers has another secret operating system too, so this does not even need to cross your machine. On Intel, this system is called the Management Engine; on AMD it is the Platform Security Processor; on ARM it is called TrustZone.

    Anyway, all of that is why the Linux kernel does not directly support Secure Boot, what the broader machinery looks like, and the broader implications of why it matters.

    I have had a dual-boot W11 partition on the same drive with Secure Boot for the last 2 years without ever having an issue. It is practically required if you want to run CUDA stuff. I recommend owning your own hardware whenever possible.



    Fedora just works. Just use Fedora Workstation. Get on the current version, then always trail new releases by a couple of months, so they have time to fix bugs.

    The range of Linux is enormous. It is everything from small microcontroller-ish devices to cars, routers, phones, appliances, and servers. These are the primary use cases. Desktop is a side thing.

    Part of the learning curve is that no one knows the whole thing top to bottom end to end at all levels of source. Many entire careers and PhDs and entire companies exist here. You will never fully understand the thing, but that is okay, you do not need to understand it like this.

    The main things are that every distro has a purpose. Every distro can be shaped into what you want.

    Fundamentally, Linux the kernel is a high-level set of command-line tools on top of the hardware drivers required to run on most hardware. The Linux kernel is structured so that the hardware drivers, called modules, are built into the kernel tree already. There is actually a configuration menu for building the kernel where you select only the modules you need for your hardware, and it builds only what you need automatically based upon your selection. This is well explained by Gentoo in tutorial form.
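
    Concretely, that selection is recorded in a `.config` file where each driver is either built into the kernel image (`=y`), built as a loadable module (`=m`), or left out ("is not set"). A tiny parser over a made-up fragment shows the distinction:

```python
# Made-up excerpt of a kernel .config file, for illustration only.
config_fragment = """\
CONFIG_E1000E=m
CONFIG_EXT4_FS=y
# CONFIG_BTRFS_FS is not set
CONFIG_USB_STORAGE=m"""

builtin, modules = [], []
for line in config_fragment.splitlines():
    if line.startswith("#"):
        continue  # disabled option
    name, _, value = line.partition("=")
    # =y means compiled into the kernel image; =m means loadable module
    (builtin if value == "y" else modules).append(name)

print("built-in:", builtin)  # ['CONFIG_EXT4_FS']
print("modules:", modules)   # ['CONFIG_E1000E', 'CONFIG_USB_STORAGE']
```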

    Gentoo is the true realm of the masters. It has tutorial documentation, but it is written for people with an advanced understanding and an infinite capacity to learn. The reason Gentoo is special is the Portage terminal package manager. Gentoo is made to compile the packages for your system from source, with any configuration or source code changes you would like to make. This is super complicated in practice, but if you have very specific needs or goals, Gentoo is the place to go. Arch is basically Gentoo in binary form, for people too lazy or unable to manage Gentoo, but who either already have a CS-degree-level understanding of operating systems or become the unwitting testers of why rsync works so well for backing up and reloading systems. It is the only place you will likely need and use backups regularly. The other thing about Arch is that the wiki is a great encyclopedia of almost everything. But it is only an encyclopedia; it is not a tutorial and was never intended as such. Never use Arch as a distro to learn on. It is possible, but you are climbing uphill backwards when far easier tutorial paths exist.

    Godmode is LFS, aka Linux From Scratch. It is a massive tutorial about building everything yourself. No package maintainers for you.

    Red Hat is the main distro for server stuff. It is paid. The main thing it offers is zero-downtime kernel updates: you never need to reboot, as it transitions packages in real time. Most of the actual kernel development outside of hardware peripheral driver support happens at Red Hat. Fedora is upstream of Red Hat. They are not directly related, but many Fedora devs are Red Hat employees, and Fedora informally functions as a kind of beta test bed for Red Hat. Most of the Red Hat tools are present or available in Fedora. This is why the go-to IT career path runs through Fedora using The Linux Bible. So if you want to run server-type stuff or use advanced IT tools, maybe try Fedora.

    Here is the thing: you do not need to use these distros. They are likely of no interest to you. All of this blah blah blah is for one simple point: distros are not branding or team sports. They are simply pathways and configurations that best handle certain use cases. The reason you need to understand the specific use case is that distros are like chapters of Linux documentation. How do I configure, schedule, and automate some package? Gentoo probably has a tutorial I will find useful. How do I figure out the stuff going on prior to init? LFS will walk me through it. What is init? The Arch wiki will tell me.

    On the other hand, there is certain stuff worth knowing, like how Debian is the distro for hardware-module development and, mostly unrelated to that, for building one-off custom server tools. When you see Debian as the only distro listed for some single-board computer, that means the hardware probably is not worth buying or messing with. It means the hardware is likely on an orphaned kernel that will never have mainline kernel support, so it will not be safe on the internet for long.

    That’s another thing. Most of what is relevant is keeping a system safe to be online, meaning server stuff.

    OpenWRT is the go-to Linux for routers and embedded hardware. You can easily fit the whole thing in well under 32 megabytes of flash, and that is Linux with a GUI too. The toolset is hard even for a typically advanced Linux terminal user, and it has little built-in documentation by default.

    With early 1970s-and-onward personal computers, crashing and resetting the machine was routine: code just ran directly on the memory. The kernel is about solving the problem of code crashing everything. The kernel creates the abstractions that separate the actual hardware registers and memory from the user-space tools and software, so that a bug in your code does not crash everything. In user space it provides a basic set of high-level commands and structures to manage a file tree and to open, edit, and run stuff. In kernel space, the scheduler and process management swap out what is running, when and where, for both kernel processes and separate user processes. The kernel is not the window manager, the desktop, or most of the actual software you want to run.

    The other non-intuitive issue many people have is sandboxing and dependencies. Not all software is maintained equally, and when some software has dependencies that conflict with other software's, major problems arise. How you interact with this issue is what really matters, and one size does not fit all or even most. This issue is the reason the many distros actually exist. Sandboxing, in almost every context you will encounter, is about creating an independent layer where a special package's dependencies can exist without conflicting with your base host system. It is not about clutter management or security, just package dependencies. That is the main thing each distro's maintainers are handling: the packages native to the distro already have their dependencies managed for you, so they should just work. Maybe you want to use more specific or unrelated things; then you need to manage them yourself. Nix is designed especially for applications where you need to send your configured sandbox to other users. Alternatively, people use an immutable base like Silverblue and run all non-native software from sandboxed dependency containers.
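
    A toy illustration of the dependency-conflict problem and the sandboxing fix, with entirely made-up package and library names:

```python
# One shared copy of a library on the host: a conflict waiting to happen.
host = {"libfoo": "1.0"}

apps = {
    "editor":  {"libfoo": "1.0"},
    "browser": {"libfoo": "2.0"},  # incompatible with the host's libfoo 1.0
}

def conflicts_on_host(apps, host):
    # An app conflicts if any dependency differs from the host's copy.
    return [name for name, deps in apps.items()
            if any(host.get(lib, ver) != ver for lib, ver in deps.items())]

print(conflicts_on_host(apps, host))  # ['browser']

# Sandboxed layout: each app carries its own dependency layer, so both
# library versions coexist and the base system never has to change.
sandboxes = {name: dict(deps) for name, deps in apps.items()}
assert sandboxes["editor"]["libfoo"] != sandboxes["browser"]["libfoo"]
```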











    So the trick to sanding longer with abrasives is wet sanding. In automotive work, a drop of Palmolive dish soap is also added to the bucket of water; it makes a huge difference.

    Overall, the principle that like polishes like is important. In the abstract, polish is just fine abrasion. Your fingerprints, for instance, are equivalent to around 5k-7k grit: rub something long enough and you will both polish and abrade it the same as that grit, with the oils in your skin acting as the polishing agent.

    I have played around with 10k-grit wet sanding followed by machine polishing with a light compound, and the places where I rested my hand showed minor variations after stripping any oils and fillers with wax-and-grease remover (a solvent).

    I can think of several aspects to increase the complexity here. One could add inserts into the outer vibrating shell. These could be any materials.

    I think the bigger issue will actually be the distance between the object and the shell. You see, the random-orbital action is the combination of two concentric circular motions. In the pro automotive world these are pneumatically driven, and several models are available with different properties related to this motion and the internal balance of the mechanism. Within this range of actuation, it is critical that abrasion does not follow a repeating path. I think this likely means the shell must be larger than the radius of the larger of these two circles, or maybe a more complicated size larger than the combination of overlapping radii including their central connection point. This should enable the part to move within the full range of random sanding action, which means the sanding is spread over a larger area.
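
    That clearance argument can be sanity-checked numerically: model the random-orbital motion as the sum of two circular motions and confirm no pad point ever reaches farther than the sum of the two radii. The radii below are made-up example numbers in millimetres:

```python
import math

def max_reach(r_orbit, r_pad, steps=3600):
    # Sample the two rotations at incommensurate rates and track the
    # farthest excursion of a pad-edge point from the spindle axis.
    reach = 0.0
    for i in range(steps):
        t = i * 0.01
        x = r_orbit * math.cos(2.1 * t) + r_pad * math.cos(7.3 * t)
        y = r_orbit * math.sin(2.1 * t) + r_pad * math.sin(7.3 * t)
        reach = max(reach, math.hypot(x, y))
    return reach

r_orbit, r_pad = 2.5, 75.0  # 5 mm orbit throw, 150 mm pad (example values)
reach = max_reach(r_orbit, r_pad)
assert reach <= r_orbit + r_pad + 1e-9  # never exceeds the sum of radii
print(round(reach, 1))                  # 77.5, hit when the phases align
```

    So a shell sized to the sum of the two radii (plus the workpiece's own travel) is the minimum clearance under this toy model.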

    The best shell is likely one with gaps similar to a DA sander with ports for dust collection.

    Very little of any fiber touches the actual nozzle during printing. The fiber size used in filament is far, far smaller than most people imagine; it is only the waste dust from the production and processing of carbon fiber. All actual fibers of any useful length are sold to industry for use in composites. Continuous-fiber printers exist, but they are not at all related to what is used in 3D printing filament. If you actually look at the data from people testing materials, fiber-infused materials are always weaker; they print better because the fibers break up the polymer bonds. Lots of people jump on the buzzword thinking it is technomagic mor betterer, but do not pay attention to the details. If the fiber had any length to it, it would clog like crazy: a long bundle of fibers distributed in 1.75 mm filament crammed through a 0.4 mm nozzle is never going to happen. It is just a dust additive that happens to be available and compatible, so it should be well distributed throughout. With ABS, a wipe of acetone should help too, if left to completely flash off the solvent for a week or more. That needs to be super limited, though; acetone tends to get retained in bad, bad ways with ABS, and it is a massive no-no to use in automotive applications.



    Not in terms of kernel-supported codecs and long-term kernel support, from what I have seen. I have not looked into this in depth. However, looking at merged pulls in the git repos, the issues raised, and the lack of any consistent hardware commitments or consensus, the hardware looks very unstable in the long term. When I see any hardware with mostly only base Debian support, it screams that the hardware is on an orphaned kernel and will likely never make it to mainline; the same applies to Arch to a lesser degree. Debian has the primary toolchain for bootstrapping and hardware hacking, so when it is the primary option supported, I consider the hardware insecure and unsafe to connect to the internet. I have seen a few instances of people talking about the limited forms of codec support and the incomplete nature of those that do exist. It is far more important to have hardware that will receive mainline kernel security updates and is compatible with the majority of codecs. It would be terrible to find out the thing could not support common audio or video codecs. IIRC there was an issue along these lines with the RISC-V PineTab.

    I know the primary go-to for RISC-V is SiFive, but I have not seen a go-to LTS processor from them in terms of consistent third-party use.

    Plus, while more open is mor betterer, RISC-V is not foolproof against proprietary blobs either. The ISA addresses the monopolistic tyranny and extortion of players like Intel, but there is nothing preventing the inclusion of third-party proprietary IP blocks. The entire point is to create an open market for the sale and inclusion of IP blocks that are compatible with an open standard; nothing about those blocks is required to be open. I don't know whether such a block could be set to a negative ring more privileged than the kernel, but I expect that to be the case.


  • Most people’s routers are already up 24/7.

    We should be able to do our own DNS. Who cares if it is on the wider clearweb; you are paying for an IP address with your internet connection. If you are running a server with verified hardware and signed code, all we need is a half dozen nodes mirroring our own DNS. There must be a backup proxy for the few terrible providers that cause issues with IP addresses. The addresses are not static, but they do not change very often. At worst, you hit a manual button to reset, or wait 10 minutes for the DNS to update.
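
    The update logic itself is trivial; here is a minimal sketch, assuming a hypothetical hostname and a simulated record store in place of a real dynamic-DNS API:

```python
# Simulated published DNS records (hostname and addresses are made up).
published = {"myhome.example.org": "203.0.113.10"}

def needs_update(hostname: str, observed_ip: str) -> bool:
    # Only push an update when the observed address actually changed.
    return published.get(hostname) != observed_ip

def apply_update(hostname: str, observed_ip: str) -> None:
    # In a real setup this would be an authenticated call to the mirror
    # nodes (e.g. an RFC 2136 dynamic update), not a dict assignment.
    published[hostname] = observed_ip

ip_now = "203.0.113.55"  # pretend the ISP just rotated our address
if needs_update("myhome.example.org", ip_now):
    apply_update("myhome.example.org", ip_now)

assert published["myhome.example.org"] == "203.0.113.55"
```

    Run that on a timer on the router and the "wait 10 minutes" case above is just the polling interval plus the record's TTL.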





  • Your slice of life is funny. At least you don’t get: …(news: “terrible shit is happening”) …the bible says…the sky is falling…the end times…Armageddon… Come back to (sadistic masochism where everyone goes through the motions and no one is intelligent or real or cares about anyone as evidenced by their actions) kingdom hall. (be like the rest of us that never paid any attention to the conspiracy level nonsensical explanation of the reason why Jehovah’s Witnesses exist in the Revelations book.)

    I don’t know, it might be fun to improvise against some fresh hate material. The same thing all the time gets old, but the responses get polished.


    It is not about the people that already host. It is about enabling many more by giving them an option to buy a path of least resistance. In exchange, it creates a potential revenue source in a completely untapped demographic. The subscription/donations demographic is a very unique and niche market; the vast majority of people do not exist within that space. Most people do not have the financial stability to engage like this. It is not that they are unable to accumulate adequate funds; it is that their pay fluctuates over time and their baseline constraints are far more stressful than spending from times of surplus and opportunity. Catering only to those with such surplus, and gatekeeping the complexity of self hosting, massively limits adoption.

    The rule in managing a chain of retail stores is that, no matter how you select products to stock in stores, it is impossible to only select products that will all sell on one platform. How you manage the overburden always determines your long term success. You must employ other platforms and demographics to prioritize the mobility of cash flow.

    Similarly but inverted, this place has a slice of all demographics, and efforts tailored to the various subsets should tap entirely new potential. A fool imagines they can convert the unstably poor into a reliable, stable income source via donations. Someone like myself has means but not a compatible situation. If I have some tangible thing to purchase, I can make that happen. I do not have any subscriptions in life for anything at all. Heck, I won't even shop on the devices I use regularly, because I only buy what I set out to purchase with intent. That is not common, but what is common are spontaneous people that need time to align their finances with their desires. That person is likely to dread paying $5 every month, compared to $250 in May when they get a couple thousand dollars back on a tax return. Expecting the public to float the stability is stupid; that is not how the real world works. Real businesses always float the overhead. I'm talking about how to free the masses to self host everything for the cost of a nice router, spent once, with no techno-leet filter.