

Try looking on Killercoda, which is Katacoda’s spiritual successor.


Could it be this?
Incus is a fork of LXD, so if you are using LXD the same issues apply.
Okay, I hath returned. Here is what I am doing with FluxCD and its method of installing Helm charts:
Okay, I’m cheating. :/ I’m using Flux’s method where you can have a secret that holds values, and then I’m just including those.
But yeah, using an ENV var that pulls from a secret is probably better.
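For reference, a minimal sketch of what that looks like (all names here are placeholders, not my actual setup):

kubectl apply -f - <<'EOF'
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: my-app              # placeholder
  namespace: default
spec:
  interval: 10m
  chart:
    spec:
      chart: my-app
      sourceRef:
        kind: HelmRepository
        name: my-repo       # placeholder
  valuesFrom:
    - kind: Secret
      name: my-app-values   # secret containing a values.yaml key
      valuesKey: values.yaml
EOF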


I would say the big thing that might give you trouble is not the init system, but NetworkManager. NetworkManager is the… network management software (wow, who woulda guessed?) used on desktop Linux distros.
People have many criticisms of it that are similar to the criticisms applied to systemd (it’s also Red Hat software), so I see my friends switching to iwd, wpa_supplicant, or other alternatives when they try something other than systemd as well.
It gives them a lot of pain. None of the alternatives are as reliable as NetworkManager when it comes to connecting to Wi-Fi. Switching away from systemd shouldn’t be too hard, but NetworkManager is much tougher to give up. Thankfully, you can run NetworkManager on non-systemd setups.
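On Void Linux with runit, for example, it’s just enabling the service (a sketch; assumes the NetworkManager and dbus packages are installed, and the SSID/password are placeholders):

sudo ln -s /etc/sv/dbus /var/service/              # NetworkManager needs dbus running
sudo ln -s /etc/sv/NetworkManager /var/service/    # runit picks this up immediately
nmcli device wifi connect "MySSID" password "example-password"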


a grand scale with the XZ backdoor
The XZ backdoor affected far fewer machines than you might think.
The malicious code never made it into RHEL or into a stable Debian release. Both of those distros have a model of freezing packages at a specific version, then only pushing manually reviewed security updates, ignoring feature updates or bugfixes to the programs they are packaging. This ensures maximum stability for enterprise usecases, but because it keeps changes small and reviewable, it also lets them dodge supply chain attacks like xz (it also enables these distros to have stable auto update features, which I will mention later). Those distros make up a HUGE family of enterprise Linux machines that were simply untouched by this supply chain attack.
As for Linux distros that don’t integrate ssh with systemd, or non-systemd distros: the malicious code did make it there, but it never activated. I wonder if that was sloppiness on the part of the malware’s author, or intentional, having it activate less frequently as a way of avoiding detection?
Regardless, comparing the XZ backdoor to the recent NPM and other programming-language package manager supply chain attacks is a huge false analogy. They aren’t comparable at all. Enterprise Linux distros have excellent supply chain security, whereas programming language package managers have basically none. To copy from another comment of mine about them:
Debian Linux, and many other Linux distros, have extensive measures to protect their supply chain. Packages are signed and verified by multiple developers before being built reproducibly (I can build and verify an identical binary/package). The build system has layers, such that if only a single layer is compromised, nothing happens and nobody flinches.
Programming-language-specific package repos have no such protections. A single developer has their key/token/account, and then they can push packages, which are often built on their own devices. There are no reproducible builds to ensure the binaries are from the same source code, and no multi-party signing to ensure that multiple devs would need to be compromised in order to compromise the package.
So what happened, probably, is some developer got phished or hacked, and gave up their API key. The package they made was popular and frequently ran unsandboxed on devs’ personal devices, so when other developers downloaded the latest version of that package, they got hacked too. The attackers then used their devices to push more malicious packages to the repo, and the cycle repeats.
And that’s why supply chain attacks are now a daily occurrence.
And then this:
You should probably turn off Dependabot. In my experience, we get more problems from automatic updates than we would by staying on the old versions until needed.
This also drives me insane. It’s a form of survivorship bias, where people only notice when automatic upgrades cause problems, but completely ignore the many issues that automatic security upgrades prevent. Nobody cares about some organization NOT getting ransomwared because their webserver was automatically patched. That doesn’t make the news the way auto upgrades breaking things does. To copy from yet another comment of mine:
If updates within a stable release break your software, the root cause is the vendor, rather than auto updating. There exist many projects that manage to auto update without causing problems. For example, Debian doesn’t even ship features or bugfixes, but only updates apps with security patches, for maximum compatibility.
CrowdStrike’s auto updates also had issues on Linux, even before the big Windows BSOD incident.
https://www.neowin.net/news/crowdstrike-broke-debian-and-rocky-linux-months-ago-but-no-one-noticed/
It’s not the fault of the auto update process, but instead the lack of QA at CrowdStrike. And it’s the responsibility of system administrators to vet their software vendors and ensure the update models in use don’t cause issues like this. Thousands of orgs were happily using Debian/Rocky/RHEL with auto updates, because those distros have a model of minimal feature updates/bugfixes and only security patches, ensuring no-fuss security auto updates for up to a decade for each stable release, whose software had already been extensively tested. Stories of those breaking are few and far between.
I would rather pay attention to the success stories than the failures. Because in a world without automatic security updates, millions of lazy organizations would be running vulnerable software unknowingly. This already happens, because not all software auto updates. But some is better than none, and a world where all software is vulnerable by default until a human manually touches it to update it is simply a nightmare to me.
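For what it’s worth, opting into that model on Debian is tiny (a sketch; assumes the unattended-upgrades package is installed, which by default only pulls from the security repo):

cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF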
Wikipedia itself is doing fine, but they have a bunch of super interesting side projects that they don’t advertise much, and those aren’t doing as well. Wikinews, their news site, is shutting down: https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/Single/2026-03-31#News_and_notes (this is really close to April Fools, so hopefully I didn’t eat the onion. Or hopefully I did?).
My favorite is Wikibooks: http://wikibooks.org/ , a collection of open-source textbooks that can be edited Wikipedia-style. Their programming ones are really high quality. The idea behind those is that you can export a known-good frozen version of one as a textbook for a class. Related is also Wikiversity, which is course curriculum. It’s similar, but different.
But they also have a travel guide, Wikivoyage, and more: https://en.wikipedia.org/wiki/Wikipedia:Wikimedia_sister_projects
This is a message to remind myself to share my config later.
I will state that I am using CloudNativePG for Postgres.
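In the meantime, a minimal CloudNativePG cluster looks roughly like this (a sketch with placeholder names and sizes, not my actual config):

kubectl apply -f - <<'EOF'
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-pg       # placeholder
spec:
  instances: 3           # one primary, two replicas
  storage:
    size: 10Gi
EOF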


The way Forgejo Actions works is that it is not one universal thing for every repo. Each repo can have its own Forgejo Actions runner connected to it, running jobs.
The big benefit of that is that you can make users bring their own runners, and not bother deploying your own.
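Registering a runner looks roughly like this (a sketch; the token comes from the repo’s, org’s, or instance’s actions settings, and all values are placeholders):

forgejo-runner register --no-interactive \
  --instance https://forgejo.example.com \
  --token <registration-token> \
  --name my-runner \
  --labels docker:docker://node:20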


It has newer packages than Debian.
This is not quite true. They have overlapping release cycles. A new Debian release ships frozen versions of the latest packages, causing it to have newer packages than most Ubuntu releases. Then a new Ubuntu release comes out, and it has newer packages. Ubuntu doesn’t universally have newer packages than Debian. The difference is that Debian ONLY does security updates, and doesn’t do feature updates or even bugfixes over a release’s lifespan. Ubuntu, on the other hand, does ship feature updates and bugfixes, incrementing package versions over the lifespan of an Ubuntu release.
Compare the bash version of the latest Ubuntu stable release with the current Debian stable, and you’ll notice that Debian has a newer bash:
[moonpie@osiris moonpiedumplings.github.io]$ podman run -it --rm debian
root@980ac170ddb4:/# bash --version
GNU bash, version 5.2.37(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
root@980ac170ddb4:/# exit
exit
[moonpie@osiris moonpiedumplings.github.io]$ podman run -it --rm ubuntu
Resolved "ubuntu" as an alias (/etc/containers/registries.conf.d/00-shortnames.conf)
Trying to pull docker.io/library/ubuntu:latest...
Getting image source signatures
Copying blob 817807f3c64e done |
Copying config f794f40ddf done |
Writing manifest to image destination
root@1486a1c38699:/# bash --version
GNU bash, version 5.2.21(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
This is Ubuntu 24.04, the current LTS release. 25.10/questing, the latest interim release, does have newer or equal package versions compared to Debian. But people don’t base distros off of Ubuntu’s interim releases, only the LTS releases.


Debian Linux, and many other Linux distros, have extensive measures to protect their supply chain. Packages are signed and verified by multiple developers before being built reproducibly (I can build and verify an identical binary/package). The build system has layers, such that if only a single layer is compromised, nothing happens and nobody flinches.
Programming-language-specific package repos have no such protections. A single developer has their key/token/account, and then they can push packages, which are often built on their own devices. There are no reproducible builds to ensure the binaries are from the same source code, and no multi-party signing to ensure that multiple devs would need to be compromised in order to compromise the package.
So what happened, probably, is some developer got phished or hacked, and gave up their API key. The package they made was popular and frequently ran unsandboxed on devs’ personal devices, so when other developers downloaded the latest version of that package, they got hacked too. The attackers then used their devices to push more malicious packages to the repo, and the cycle repeats.
And that’s why supply chain attacks are now a daily occurrence.
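To make the Debian side of that concrete, here’s a sketch of manually poking at the signatures apt already verifies for you (the package name is just an example; dscverify ships in the devscripts package, and this assumes deb-src lines in your sources):

apt-get source --download-only bash     # fetched via the archive's signed metadata
dscverify --keyring /usr/share/keyrings/debian-keyring.gpg bash_*.dsc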


No, they’re dual licensed. Canonical has contributing users sign a Contributor License Agreement, in which they agree to allow Canonical to distribute alternatively licensed or proprietary versions.
This change was somewhat controversial, and partially why Incus was forked from LXD.
Companies at conferences give 4/8 GB ones out sometimes. They buy branded ones in bulk.




Void auth or kanidm look like easier alternatives.


I have installed an OS onto just the btrfs root subvolume, leaving the home directory intact. This is how I originally swapped from Manjaro to Arch. The Arch manual install instructions helped.
But this should be a feature of the graphical installers, imo.
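Roughly, the trick looks like this (a sketch assuming the common @/@home subvolume layout; the device and subvolume names will vary by distro):

mount -o subvolid=5 /dev/nvme0n1p2 /mnt   # mount the top-level btrfs volume
mv /mnt/@ /mnt/@old                       # keep the old root around, just in case
btrfs subvolume create /mnt/@             # fresh root subvolume for the new OS
# ...then install into the new @ (e.g. pacstrap) and point the new fstab at the existing @home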


Transparent filesystem compression and deduplication (btrfs features not in ext4) compress data while still having it be accessible normally. This leads to big space savings.
You can use the tool compsize to check it out.
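A sketch of what that looks like in practice (the mount option and paths are typical, not universal):

# in /etc/fstab, something like: ... btrfs subvol=@,compress=zstd:3 ...
sudo compsize -x /    # -x: stay on this filesystem; prints compressed vs. uncompressed totals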
Postgres jsonb?
That’s what I thought too: https://programming.dev/comment/22854391
But it seems to be possible to still do them wrong.
I like ORMs because they prevent SQL injection. Mostly. SQL injection is a really bad vuln that’s nowhere near as ubiquitous as it used to be back when every PHP app had one, and that’s partly due to ORMs.
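The classic failure mode, sketched with psql (the table and values are made up):

# UNSAFE: user input spliced straight into the SQL string
psql -c "SELECT * FROM users WHERE name = '$USER_INPUT'"
# SAFER: let psql quote it as a literal via the :'var' syntax
psql -v input="$USER_INPUT" <<'SQL'
SELECT * FROM users WHERE name = :'input';
SQL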
Not a stupid question.
CachyOS to CachyOS.
This matters. Firefox will refuse to do anything with a profile directory from a newer version of Firefox. So if I switched to openSUSE Leap, or another Linux distro that ships an older version of Firefox, then I might encounter issues with just directly copying the profiles.
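You can check which version last touched a profile, and force the downgrade if you accept the risk (the profile directory name here is illustrative):

grep LastVersion ~/.mozilla/firefox/abcd1234.default-release/compatibility.ini
firefox --allow-downgrade    # bypasses the version check; may corrupt profile data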