𞋴𝛂𝛋𝛆

  • 158 Posts
  • 1.29K Comments
Joined 3 years ago
Cake day: June 9th, 2023

  • It is just a cleanliness standard. It is not required. I spent a decade in the details of automotive paint, and I only covered the surface basics for paint here. What I call clean for paint is an order of magnitude dirtier than a surgeon's standard, and a surgeon is orders of magnitude dirtier than a silicon chip foundry. When it comes to making plastic stick and look pretty, an automotive painter might be helpful for framing the scope of what is possible. All I can tell you is that I have a Prusa and never have these problems, so I explained my experience and the methodology behind why I do what I said. Again, sorry this upsets you.



  • It can coat the inside of the dryer. Use Bounty paper towels as a control when in question. Bounty towels are often used in automotive paint shops for a few reasons, but mainly because their composition is trustworthy. If the two plies are separated, they make a good strainer for filtering paint; that is the primary reason they are used. They also tend to be lower lint, though not perfect. A tack cloth is used in the booth with controlled, filtered airflow, either crossdraft or downdraft, so lint is not a concern for perfect paint.

    One of the tricks of automotive painting is to add a couple of drops of Palmolive dish soap to the water bucket used for wet sanding. It makes 3M Imperial Wet/Dry sandpaper last several times longer and acts as a mild degreaser the whole time. Any residue is cleaned in the booth stage using a special wax and grease remover solvent that is the least reactive of the painting solvents. While this solvent is used extensively, the fact that Palmolive dish soap can be used at all shows how clean, consistent, and chemically inert it is. Automotive paint reacts with many chemicals, but silicone is specifically the worst problem. It causes fisheyes, little divot-like holes that form in the clearcoat. In most situations involving contamination and adhesion, silicone is the main issue, and it is very persistent. It is so bad in automotive paint that in the worst cases we turn to adding an actual silicone solution into the 2K clearcoat and trying to guess what concentration will match the problem area so it levels. Otherwise, the entire job must be stripped to the raw surface to start over. Silicone issues only show up in the final wet clearcoat layer shortly after it is sprayed and leveled.

    The reason why I have written all of this is to illustrate this point: the silicone is essentially floating on every underlying layer. The solvent wets the area and the silicone just floats to the top of each layer of filler, 2K primer, sealer, and top coat color, and when it gets to the clearcoat it blows a hole through it. There are two solutions: use a two part epoxy primer that is a pain in the ass to sand, or clean the raw surface with lacquer thinner or virgin acetone. In automotive paint, those two solvents are dangerous because they cause a ton of other contamination and reaction issues. However, they are the only solvents that will take off silicone without diluting it and making the problem worse. Alcohol was a joke with no place in the automotive paint world when I was painting. I got out before water based stuff ruined the industry by making refinishing exponentially more expensive. That only covers the color coat and some primers, so alcohol may be used in some way in these, but it will not involve cleaning. Tire shine is the main source of silicone issues in automotive paint.

    I have the empirical experience to know what I am looking at with cleaning and solvents. Alcohol is okay for minor issues, but think of it as constantly diluting and wiping the problem across the whole surface. Eventually, just use some virgin acetone to actually clean the thing properly. Paint is just plastic too. Each type requires a different kind of tooth to mechanically bond to. With printing, I use 600 grit to lightly knock the shine off of the print plate surface. I go lighter on the textured sheet, but I only use the textured sheet with PETG because it is the only material that takes the textured pattern completely without showing layer lines. I print weekly on average, and use acetone and sandpaper around once a year. When I use glue stick, I clean the plate with dish soap after. I use alcohol in between. You will need an enclosure for ASA, ABS, and any larger PC prints regardless of the sheet or glue. Two IKEA Lack tables with legs stacked using double sided screws, then a clear shower curtain liner and some tack nails, does the job for under $50.

    I would never use towels from any dryer that has ever had fabric softener used in it for automotive paint. That is a contamination nightmare for me.


  • llama.cpp is at the core of almost all offline, open weights models. The server it creates is OpenAI API compatible. Oobabooga Textgen WebUI is more oriented toward a user-facing GUI but is based on llama.cpp. Oobabooga has the setup for loading models with a split workload between the CPU and GPU, which makes larger gguf quantized models possible to run. Llama.cpp has this feature; Oobabooga just exposes it. The model loading settings and softmax sampling settings take some trial and error to dial in well. It helps if you have a way of monitoring GPU memory usage in real time. For instance, I use a script that puts GPU memory usage in my terminal window title bar up until inference time.
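
    If you want that kind of readout, here is a rough sketch of the idea rather than my exact script. It assumes an NVIDIA card with nvidia-smi on the PATH and a terminal that honors the xterm title escape.

    ```python
    # Hypothetical monitor: write GPU memory usage into the terminal title bar.
    import subprocess
    import sys
    import time

    def gpu_memory() -> str:
        # nvidia-smi reports "used, total" in MiB with these query flags
        line = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip().splitlines()[0]
        used, total = (int(x) for x in line.split(","))
        return f"GPU {used}/{total} MiB"

    while True:
        sys.stdout.write(f"\033]0;{gpu_memory()}\007")  # OSC 0 sets the window title
        sys.stdout.flush()
        time.sleep(2)
    ```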

    Ollama is another common project people use for offline open weights models, and it also runs on top of llama.cpp. It is a lot easier to get started with in some instances, and several projects use Ollama as a baseline for “Hello World!” type stuff. It has pretty good model loading and softmax settings without any fuss, but it does this at the expense of only running on GPU or CPU, never both in a split workload. This may seem fine at first, but if you never experience running much larger quantized models in the 30B-140B range, you are unlikely to have success or a positive experience overall. The much smaller models in the 4B-14B range are all that are likely to run fast enough on your hardware AND completely load in your GPU memory if you only have 8GB-24GB. Most of the newer models are actually Mixture of Experts architectures. This means it is like loading ~7 models initially, but then only inferencing two of them at any one time. All you need is the system memory, or the DeepSpeed package (which uses the disk drive for the excess space required), to load these larger models. Larger quantized models are much, much smarter and more capable. You also need llama.cpp if you want to use function calling for agentic behaviors. Look into the agentic API and the pull request history in this area of llama.cpp before selecting what models to test in depth.
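
    As a rough sketch of what a split workload looks like through the llama-cpp-python bindings; the model path and layer count here are placeholders, not a recommendation:

    ```python
    # Hypothetical example: offload part of a gguf model to VRAM, keep the rest on CPU/RAM.
    from llama_cpp import Llama

    llm = Llama(
        model_path="/models/example-Q4_K_M.gguf",  # placeholder path to any gguf quant
        n_gpu_layers=24,  # layers pushed to the GPU; remaining layers run on the CPU
        n_ctx=8192,       # context length; raise it and watch memory use climb
    )

    reply = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
        max_tokens=64,
    )
    print(reply["choices"][0]["message"]["content"])
    ```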

    Huggingface is the go-to website for sharing and sourcing models. It is heavily integrated with GitHub, so it is probably just as toxic long term, but I do not know of a real FOSS alternative for that one. Hosting models is massive I/O for a server.
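
    Pulling a single quantized file down is about this much work with the huggingface_hub package; the repo and filename below are placeholders, so substitute whatever model you are actually after:

    ```python
    # Hypothetical download of one gguf file from a Hugging Face repo.
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="someuser/some-model-GGUF",   # placeholder repository
        filename="some-model-Q4_K_M.gguf",    # placeholder quant file
    )
    print("Cached at:", path)  # point llama.cpp or Oobabooga at this path
    ```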


  • PLA will be better for hardware store and hobby junk. You cannot use automotive class finishes and expect them to last. Generally stick to one brand. Most paints are formulated for steel. ABS is the closest to steel in thermal properties; the expansion is the most important attribute. PLA has a different thermal profile, so catalysed 2-part paints will not work very well long term. Rattle can enamel is junk by comparison, but it never fully cures like automotive paints do, and that property helps it stay in place longer in general. There are special adhesion promoters like Bulldog for automotive stuff, but the thermal properties will still be an issue.

    Pro automotive paint is 99.9% sanding and prep work. It is far more intense and rigorous than people realize. Perfection happens in the prep work. The actual paint is just a way of showing off that perfection. Mastering automotive paint is actually all about defeating yourself. Perfection is not subject to your emotions or expectations. It is right when it is perfect.

    You want the highest pressure spray cans possible. Also, if you do not use all of the can at once, flip it upside down and clear the nozzle by letting the siphon draw from the empty headspace while spraying. If you have a compressor that does not shoot out a bunch of oil or water, a cheap Harbor Freight pink gun blowing air with the can's nozzle held beside it will work wonders by atomizing the spray far more effectively.


  • You are solely responsible for vetting the software that you choose to run.

    I do not review or care about the tools a person uses to create their projects. I appreciate the disclaimer when the person discloses their aptitude and confidence in their code.

    Free software and Unix culture is a culture of hackers. Stallman himself came out of the MIT AI Lab. Emacs is mostly a thing because Lisp was adopted early on for AI development many decades ago.

    Junk code is nothing new. X11 is notoriously bad, yet you likely have parts of it running on your hardware. Proprietary code is far, far worse than anything a hacker posts as open source, yet you are running proprietary blobs on whatever device you are looking at now. Even if you are like me with a libreboot machine, Leah readily admits that you need to run the Core Duo microcode if you want it to run right, and that you should not be using that hardware as your primary device. The culture of anti-AI is dogmatic nonsense. It is a tool, not a religion. It can be used harmfully or helpfully. I can’t fix stupid in anyone except myself. I do not fault anyone for what they run, the projects they share, or the background they come from. I encourage everyone to be positive and help their fellow hackers. I value participation and enthusiasm. Dogma and negativity are toxic.

    I am ultra liberal. You have a right to all information, a right to skepticism, a right to error, and a right to protest in nonviolent forms, aka the right to offend others. You do not have a right to infringe the rights of others.

    This anti-AI populism infringes the right to all information and the right to error if any administrative actions are taken. Your right to protest and to skepticism is duly noted. If these become toxic in any way that alters the dissemination of information, or toxic/harmful to the individual sharing information, I will remove the offending comments. If the person continues, I will escalate. I am only the janitor here. I clean up the messes. I do not matter, but neither does anyone else here. It is a community, and only the community matters. Garbage software is bog standard. Crusade against things that matter, like proprietary-software-leveraged hardware theft and SaaS.


  • How do you punch holes in that dogma? I can think of many logical ways, but that is meaningless against the tribal structure.

    • If family is so valuable, why didn’t strong families usher in the present age of technology?
    • Intelligence, business acumen, and competency are not hereditary.
    • Team sports are a capitalist marketing scam. Putting a blue jersey on your sperm does not make it relevant or better than purple jerseyed sperm.
    • Patriarchal male culture is chauvinistic ineptitude and masochism marketed as a replacement for intelligence. It is an admission of subservience to those that dominate by thought and fundamental logic. Fools only fear a brute; civilizations fear a physicist.
    • Strong families are only peripherally useful if capable of creating the opportunities and support needed to produce a physicist.
    • We are all only a product of our environment. That environment is primarily a result of the opportunities and support given freely by its members. So if your family is not strong, one should look in the mirror first.
    • A plant dies because you did not water it, not because of the room it was placed within.

  • Just be aware that W11 is secure boot only.

    There is a lot of ambiguous nonsense about this subject from people that lack a fundamental understanding of Secure Boot. Secure Boot is not handled by the Linux kernel at all. It is part of the systems distros build outside of the kernel, and these differ across distros. Fedora does it best IMO, but Ubuntu has an advanced system too. Gentoo has tutorial information about how to set up the system properly yourself.

    The US government also has a handy PDF about setting up Secure Boot properly. This subject is somewhat complicated by the fact that the UEFI bootloader graphical interface standard is only a reference implementation, with no guarantee that it is fully implemented (especially the case in consumer grade hardware). Last I checked, Gentoo has the only tutorial guide about how to use an application called Keytool to boot directly into the UEFI system, bypassing the GUI implemented on your hardware, where you are able to set your own keys manually.

    If you choose to try this, some guides will suggest using a better encryption key than the default. The worst that can happen is that the new keys will get rejected and a default will be refreshed. It may seem like your system does not support custom keys. Be sure to try again with the default for UEFI in your bootloader GUI implementation. If it still does not work, you must use Keytool.

    The TPM module is a small physical hardware chip. Inside there is a register that has a secret hardware encryption key hard coded. This secret key is never accessible in software. Instead, this key is used to encrypt new keys, to hash against those keys to verify that a given piece of software has not been tampered with, and to decrypt information outside of the rest of the system using Direct Memory Access (DMA), as in DRAM/system memory. This effectively means some piece of software is able to create secure connections to the outside world using encrypted communications that cannot be read by anything else running on your system.
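
    The verification side is just hash chaining. This is only a conceptual sketch of the measurement "extend" idea, not real TPM API calls:

    ```python
    # Conceptual sketch only: mimics how a TPM "extends" a PCR register so that
    # any change anywhere in the boot chain changes the final value.
    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # new value = SHA-256(old value || hash of the measured code)
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    pcr = bytes(32)  # registers start zeroed
    for stage in (b"firmware", b"bootloader", b"kernel"):
        pcr = extend(pcr, stage)

    print(pcr.hex())  # secrets can be sealed so they only unseal against this value
    ```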

    As a more tangible example, Google Pixel phones are the only ones with a dedicated security chip of this kind, the Titan M, which fills the TPM role. That chip is how and why Graphene OS exists. They leverage it to encrypt the device operating system so it can be verified, and they create the secure encrypted communication path to manage Over The Air software updates automatically.

    There are multiple keys in your UEFI bootloader on your computer. The main key is set by the hardware manufacturer. Anyone with this key is able to change all software from UEFI down in your device. These occasionally get leaked or compromised too, and often the issue is never resolved. It is up to you to monitor and update, as insane as that sounds.

    The next key level down is the package key for an operating system. It cannot alter UEFI software, but it does control anything that boots after. This is typically where the Microsoft key is the default. It means they effectively control what operating system boots. Microsoft has issued what are called shim keys to Ubuntu and Fedora. Last I heard, those keys expired in October 2025 and had to be refreshed, and may or may not have been reissued by M$. The shim was like a pass for these two distros to work under the M$ PKey. In other words, vanilla Ubuntu and Fedora Workstation could just work with Secure Boot enabled.

    All issues in this space have nothing to do with where you put the operating systems on your drives. Stating nonsense about dual booting a partition is the stupid, ambiguous misinformation that causes all of the problems. It is irrelevant where the operating systems are placed. Your specific bootloader implementation may be optimised to boot faster by jumping into the first one it finds. That is not the correct way for Secure Boot to work. It is supposed to check for any bootable code and delete anything without a signed encryption key. People that do not understand this system are playing a game of Russian Roulette. Their one drive may get registered first in UEFI 99% of the time due to physical hardware PCB design and layout. Then the one time some random power quality issue shows up, due to a power transient or whatnot, suddenly their OS boot entry is deleted.

    The main key and package keys are the encryption key owners of your hardware. People can literally use these to log into your machine if they have access to these keys. They can install or remove software from this interface. You have the right to take ownership of your machine by setting these yourself. You can set the main key, then use the Microsoft system online to get a new package key to run W10 w/SB or W11. You can sign any distro or other bootable code with your main key. Other than the issue of one of the default keys from the manufacturer or Microsoft getting compromised, I think the only third party vulnerabilities that Secure Boot protects against are physical access based attacks. The system places a lot of trust in the manufacturer and Microsoft, and they are the owners of the hardware, able to lock you out of it, surveil you, or theoretically exploit you with stalkerware. In practice, these connections still use DNS on your network. If you have not disabled or blocked ECH, like cloudflare-ech.com, I believe it is possible for a server to make an ECH connection and then create a side channel connection that would not show up on your network at all. Theoretically, I believe Microsoft could use their PKey on your hardware to connect to your hardware through ECH after your machine connects to any of their infrastructure.
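
    If you want to see where your own machine stands before touching any keys, this rough sketch reads the firmware variables that report Secure Boot state. It assumes a Linux box with efivarfs mounted at the usual path and enough permission to read it:

    ```python
    # Rough sketch: read the SecureBoot and SetupMode flags from efivarfs.
    from pathlib import Path

    EFI_GLOBAL = "8be4df61-93ca-11d2-aa0d-00e098032b8c"  # EFI global variable GUID

    def efi_flag(name: str) -> int:
        raw = Path(f"/sys/firmware/efi/efivars/{name}-{EFI_GLOBAL}").read_bytes()
        return raw[-1]  # first four bytes are attributes, the last byte is the flag

    print("Secure Boot enabled:", efi_flag("SecureBoot") == 1)
    print("Setup Mode (your own keys can be enrolled):", efi_flag("SetupMode") == 1)
    ```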

    Then the TPM chip becomes insidious and has the potential to create a surveillance state, as it can be used to further encrypt communications. The underlying hardware in all modern computers has another secret operating system too, so it does not need to cross your machine. For Intel, this system is called the Management Engine. In AMD it is the Platform Security Processor. In ARM it is called TrustZone.

    Anyways, all of that is why the Linux kernel does not directly support Secure Boot, what the broader machinery is, and the abstracted broader implications of why it matters.

    I have a dual boot W11 partition on the same drive with Secure Boot and have had this for the last 2 years without ever having an issue. It is practically required to do this if you want to run CUDA stuff. I recommend owning your own hardware whenever possible.



  • Fedora just works. Just do Fedora Workstation. Get on the current version, then always trail the new version by a couple of months, just so they have time to fix bugs.

    The range of Linux is enormous. It is everything from small microcontroller-ish devices to cars, routers, phones, appliances, and servers. These are the primary use cases. Desktop is a side thing.

    Part of the learning curve is that no one knows the whole thing top to bottom, end to end, at all levels of source. Many entire careers, PhDs, and entire companies exist here. You will never fully understand the thing, but that is okay; you do not need to understand it like this.

    The main things are that every distro has a purpose. Every distro can be shaped into what you want.

    Fundamentally, Linux as the kernel is a high level set of command line tools on top of the hardware drivers required to run on most hardware. The Linux kernel is structured so that the hardware drivers, called modules, are built into the kernel already. There is actually a configuration menu for building the kernel where you select only the modules you need for your hardware, and it builds only what you need automatically based upon your selection. This is well explained in Gentoo in tutorial form.
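
    To get a feel for which modules your current kernel actually loaded, you can read them straight off the running system; this little sketch shows the same data lsmod prints:

    ```python
    # Illustrative only: list loaded kernel modules from /proc/modules.
    from pathlib import Path

    for line in Path("/proc/modules").read_text().splitlines():
        name, size, refcount, *_ = line.split()
        print(f"{name:<24} {int(size):>10} bytes  used by {refcount}")
    ```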

    Gentoo is the true realm of the masters. It has tutorial documentation, but it is written for people with an advanced understanding and an infinite capacity to learn. The reason Gentoo is special is the Portage terminal package manager. Gentoo is made to compile the packages for your system from source, with any configuration or source code changes you would like to make. This is super complicated in practice, but if you have very specific needs or goals, Gentoo is the place to go. Arch is basically Gentoo in binary form, for people too lazy or incapable of managing Gentoo, where they either already have a CS degree level understanding of operating systems or they are the unwitting testers of why rsync works so well for backing up and reloading systems. It is the only place you will likely need and use backups regularly. The other thing about Arch is that the wiki is a great encyclopedia of almost everything. It is only an encyclopedia. It is not a tutorial and was never intended as such. Never use Arch as a distro to learn on. It is possible, but you're climbing uphill backwards when far easier tutorial paths exist.
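
    The rsync backup in question is usually just an archive copy of the whole root with the pseudo filesystems excluded. A hedged sketch; the destination path and exclude list are illustrative, adjust for your own layout:

    ```python
    # Hedged sketch of the usual whole-system rsync backup.
    import subprocess

    subprocess.run([
        "rsync", "-aAXH", "--delete",
        "--exclude=/dev/*", "--exclude=/proc/*", "--exclude=/sys/*",
        "--exclude=/tmp/*", "--exclude=/run/*", "--exclude=/mnt/*",
        "/", "/mnt/backup/",
    ], check=True)
    ```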

    Godmode is LFS, aka Linux From Scratch. It is a massive tutorial about building everything yourself. No package maintainers for you.

    Redhat is the main distro for server stuff. It is paid. The main thing it offers is zero downtime kernel updates. You never need to reboot; it transitions packages in real time. Most of the actual kernel development outside of hardware peripheral driver support happens at Redhat. Fedora is upstream of Redhat. They are not directly related, but many Fedora devs are Redhat employees. Fedora informally functions kinda like a beta test bed for Redhat. Most of the Redhat tools are present or available in Fedora. This is why the go-to IT career path is through Fedora using The Linux Bible. So if you want to run server type stuff or use advanced IT tools, maybe try Fedora.

    Here is the thing: you do not need to use these distros. They likely are of no interest to you. All of this bla bla bla is for this simple point: distros are not branding or team sports. They are simply pathways and configurations that best handle certain use cases. The reason you need to understand the specific use case is because these are like chapters of Linux documentation. How do I configure, schedule, and automate some package? Gentoo probably has a tutorial I will find useful. How do I figure out the stuff going on prior to init? LFS will walk me through it. What is init? The Arch wiki will tell me.

    On the other hand, there is certain stuff worth knowing, like how Debian is for hardware module development and, mostly unrelated to that, building one off custom server tools. When you see Debian on some single board computer where no other distro is listed, that means it probably isn't worth buying or messing with. It means the hardware is likely on an orphaned kernel that will never have mainline kernel support, so it won't be safe on the internet for long.

    That’s another thing. Most of what is relevant is keeping a system safe to be online, meaning server stuff.

    OpenWRT is the go-to Linux for routers and embedded hardware. You can easily fit the whole thing in well under 32 megabytes of flash. It is a pain in the ass even for a typically advanced Linux terminal user, but it is Linux, with a GUI too. The toolset is hard, and has little built in documentation by default.

    With the very early 1970s+ personal computers, crashing and resetting computers was a thing. Code just ran directly on the memory. The kernel is about solving the issue of code crashing everything. The kernel creates the abstractions that separate the actual hardware registers and memory from the user space tools and software so that your code bug does not crash everything. It provides a basic set of high level user space commands and structures to manage a file tree and to open, edit, and run stuff. In kernel space, it is the scheduler and process management that swap out what is running when and where, for both the kernel processes and separate user processes. The kernel is not the window manager, desktop, or most of the actual software you want to run.

    The other non intuitive issue many people have is sandboxing and dependencies. Not all software is maintained equally. When some software has conflicting dependencies with other software, major problems arise. How you interact with this issue is what really matters, and one size does not fit all or even most. This issue is the reason so many distros actually exist. Sandboxing, in almost every context you will encounter, is about creating an independent location for a special package's dependencies to exist without conflicts on your base host system. It is not about clutter management or security, just package dependencies. That is the main thing each distro's maintainers are handling. The packages native to the distro already have their dependencies managed for you; they should just work. Maybe you want to use more specific or unrelated things. Well then you need to manage them yourself. Nix is designed especially for this in applications where you need to send your configuration sandbox to other users. Alternatively, people use an immutable base like Silverblue and run all non native software from sandboxed dependency containers.
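
    If the idea feels abstract, Python's own venv is the same trick in miniature: a separate place for one project's dependencies so they never collide with the host's. A small sketch with a placeholder path:

    ```python
    # Miniature version of the sandboxing idea: an isolated dependency location.
    import subprocess
    import venv

    venv.create("/tmp/example-env", with_pip=True)  # isolated site-packages tree

    # Anything installed with this pip stays inside the env and never touches
    # the host system's Python packages, which is the whole point.
    subprocess.run(["/tmp/example-env/bin/pip", "install", "requests"], check=True)
    ```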